In Google’s recent blog update, they’ve made their stance on handling AI information pretty clear. Most notably, they will assume the legal risks for both the training data their AI uses and the output it generates.
That part is great and I have no real comment on it.
However, while it’s not my intention to assume the worst about search bias, as a pro who’s worked in that space for 16+ years, I’ve seen plenty of bias along the way.
It’s not just me; plenty of others have noted clear cases where Google alters the search results it shows. Here’s a good discussion on that point.
Sometimes it’s during a political season; other times, when there’s national news about gun violence, even searches for “shotgun mics” stop returning valid results because they contain the word “shotgun”.
Each time, you wait a week or so until the inflammatory situation has died down, and magically the same search suddenly works again.
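Nobody outside Google knows how this suppression is actually implemented, but a blunt keyword blocklist would produce exactly this behavior. Here’s a minimal sketch, with an entirely hypothetical blocklist, of why such a filter catches a harmless audio-gear query:

```python
# Purely illustrative: a naive keyword blocklist, NOT Google's actual system.
BLOCKED_TERMS = {"shotgun"}  # hypothetical terms suppressed during a news cycle

def is_suppressed(query: str) -> bool:
    """Flag any query containing a blocked term, with no regard for context."""
    return any(term in query.lower().split() for term in BLOCKED_TERMS)

print(is_suppressed("shotgun mics"))          # True  -- harmless gear query caught
print(is_suppressed("lavalier microphones"))  # False
```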
Much more prevalent is Google’s rather strong and constant bias toward shopping results. Do a search for something like “firsthand opinions about the new iPad” and it will still show you mostly retail sites like Best Buy and Walmart, NOT blogs or forums discussing firsthand opinions. You have to tailor your search to include words like forum, blog, or Reddit to get what you’re after, and even that doesn’t always work.
It can be annoying, but to give the benefit of the doubt, it’s perhaps less about mustache-twirling meddling by Google engineers and more that the many algorithm tweaks they’ve made over the years to stop black-hat SEO tactics have produced unwanted side effects.
The point is, blocking certain search results out of fear of triggering searchers isn’t a new concept. Which brings us to the next point about the blog post they just published.
Censoring Information For Teen Safety?
Another section in Google’s post talks about filtering results for teens to downplay sensitive topics, such as bullying and drugs.
They don’t really elaborate on how these “limited outputs” will be handled. How aggressive will the filtering be? Will the AI understand the context of a search, distinguishing legitimate research from casual or ill-intentioned queries?
I recall doing research on drugs for school papers related to the D.A.R.E. program, as well as historical studies that involved violence. If my search results had been censored “for my protection” back then, I might not have been able to find the answers I needed.
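To make that distinction concrete, here’s a toy contrast between a blanket filter and one that at least tries to recognize research intent. Every term list and “signal” below is invented for illustration; Google’s actual approach is unknown:

```python
# A toy contrast between blanket filtering and context-aware filtering.
# The term lists and "research signals" are entirely made up for illustration.
SENSITIVE_TERMS = {"drugs", "violence", "bullying"}
RESEARCH_SIGNALS = {"statistics", "history", "school", "paper", "study"}

def blanket_filter(query: str) -> bool:
    """Block any query that mentions a sensitive topic at all."""
    return bool(set(query.lower().split()) & SENSITIVE_TERMS)

def context_aware_filter(query: str) -> bool:
    """Block sensitive queries unless they look like legitimate research."""
    words = set(query.lower().split())
    if not (words & SENSITIVE_TERMS):
        return False  # nothing sensitive; let it through
    return not (words & RESEARCH_SIGNALS)  # block unless research signals appear

q = "drugs statistics for a school paper"
print(blanket_filter(q))        # True  -- blocked outright
print(context_aware_filter(q))  # False -- recognized as research
```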
In other words, it wouldn’t be the first time that concern over someone being offended or hurt came at the expense of what’s actually useful. Safety over function.
But we’ll see. It’s a nice idea in principle, but it also raises concerns of its own.
On one hand, Google tells us it’s working to make search results unbiased and as trustworthy as possible; on the other, it tells us it filters and manipulates those results depending on who is searching.
That is, by definition, showing bias.
These things are often a first step toward larger moves. If Google sees value in “personalized” results that filter out potentially triggering or inappropriate information for teens, who else would it then make sense to change results for?
With how much data Google has on most people, it wouldn’t be a stretch to extend the “tailored search results” it already serves to AI-generated answers.
That, again, is by definition showing bias.