Can I Trust Google’s AI Search Results? | Members Edition


You may already see these summaries appear atop news and other search results, though they don’t show up for every query or on every device or browser. When I recently asked Google, “_Why is the sky blue?_” AI Overviews explained “_Rayleigh scattering_,” a process that occurs when sunlight passes through the Earth’s atmosphere and is scattered by tiny air molecules.

Smack in the middle of this explanation were bubbles I could click to read more from such sources as NOAA SciJinks and National Geographic Kids. Just below AI Overviews are the familiar blue Google search links. I’m no expert, but I felt confident in this AI summary.

AI STILL CAN’T TELL EVERY FACT FROM FICTION

But that confidence isn’t always earned. Shortly after I/O, Google was forced to play defense. Some AI Overviews results that went viral on social media were downright wacky or even dangerous. One result suggested adding nontoxic glue to keep cheese from sliding off a pizza. Seriously: want a little Elmer’s with your marinara sauce? Another answer counseled people to ingest a rock a day because rocks contain minerals and vitamins that are supposed to help with digestion. The, um, rocky advice was traced to the humor site _The Onion_, the pizza example to an old Reddit post.

Google search executive Liz Reid conceded in a blog post that “some odd, inaccurate or unhelpful AI Overviews certainly did show up,” and admitted Google needed to do a better job in its “ability to interpret nonsensical queries and satirical content.” Among the fixes, Reid wrote, Google would “limit the use of user-generated content that could offer misleading advice.” She added that Google would not display AI Overviews for “hard news topics, where freshness and factuality are important.”