Near the beginning of the week, someone asked me about the ethics and effect of algorithms which filter your content for you; “helpfully” prioritizing those items which fit into your existing world view.
I was reminded of that question yesterday when I had an interesting and somewhat similar conversation with computer scientist Vagelis Papalexakis, whose work explores the way different people’s brains respond to various stimuli. Papalexakis discussed the possible implications for improving education: a classroom where teachers could tailor their lessons to the particular neural responses of their students.
While I can see the potential good in such technology, being somewhat cautious of the ills of human nature, I asked Papalexakis about the ethical implications – with access to student neural readings, what would stop ‘big brother’ from punishing children whose minds tend to wander?
While there’s no guaranteed way to prevent such abuse, Papalexakis rightly pointed to this as a broader ethical question – the ethics of personalization.
Filtering algorithms, for example, could easily be misused as tools for efficiently delivering propaganda. There is a value in having this personalization available, but there is also a risk.
What I find particularly interesting about the challenge of filtering is that it is not at all clear that there is a neutral solution to the problem.
In 2012, an estimated 2.5 billion gigabytes of data were generated every day – far more than any person could hope to process. The reality is that some type of filtering is necessary – so the question becomes one of what type of filtering we think is best.
Imagine, for a moment, the “things you wouldn’t enjoy…” filter. That is, rather than having an algorithm that tracks what you like and presents you with similar content, it tracks what you like and intentionally presents you with divergent views.
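To make the idea concrete, here is a minimal sketch of what such a filter might look like. The topic vectors, article titles, and scoring approach are all hypothetical illustrations: a conventional recommender would rank candidates by similarity to a user's profile, while this "divergent views" variant simply ranks by dissimilarity instead.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two topic vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_divergent(user_profile, articles):
    """Rank articles so the LEAST similar to the user's profile comes
    first -- the inversion of a standard personalization filter."""
    return sorted(articles,
                  key=lambda art: cosine_similarity(user_profile, art["topics"]))

# Hypothetical topic axes: [left-leaning politics, right-leaning politics, tech]
user = [0.9, 0.1, 0.5]  # a reader who mostly clicks left-leaning and tech pieces
articles = [
    {"title": "Left op-ed",   "topics": [0.95, 0.05, 0.1]},
    {"title": "Right op-ed",  "topics": [0.05, 0.95, 0.1]},
    {"title": "Tech review",  "topics": [0.1,  0.1,  0.9]},
]

for art in rank_divergent(user, articles):
    print(art["title"])
```

For this reader, the right-leaning op-ed ranks first because it sits furthest from their profile; flipping the sort order recovers the familiar "more of what you like" filter. Of course, as discussed below, ranking divergent content highly does nothing to guarantee anyone actually reads it.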
In theory, I would love to have this. It is a problem that we each tend to fall into our own little filter bubble, with little exposure to opposing views.
But, how would such a tool play out in practice? First, no algorithm can remove the need for human agency – I might be presented with opposing articles, but I would need to actually click on them.
This presents a real challenge for content providers who – even putting aside profit motive – need to serve their customers. If people don’t like the content that is being filtered for them, they will leave for a different service.
Furthermore, research indicates that even when interacting with conflicting information, people are likely to interpret the results with a bias that favors their initial view and even double down on their initial opinion.
So it’s not clear at all that changing a filtering algorithm in such a way is sufficient to relieve polarization and bias.
That’s not to say, either, that we should just let filtering algorithms off the hook. They are by no means a full solution to the challenges of information bias, but they do play a critical role in shaping the information atmosphere around us.
Markus Prior, for example, has shown that when it comes to factual matters, a less-personalized media environment increases people’s political knowledge. On the other hand, he has also found that “there is no firm evidence that partisan media are making ordinary Americans more partisan.” So again, the personalization of the media environment is only part of the solution.
What does all this have to do with using brain scans to tailor information to recipients?
Well, I guess, we need to find ways to get all these moving pieces to work together. Personalization is good. It has real benefits and helps each of us focus on the signal in a sea of noise. But we should also be wary of too much personalization – a little noise and inefficiency should be intentionally built into the system. And, of course, we have to remember that we are our own agents in this work as well – systems of personalization can shape the broader context, but they cannot determine how we each choose to act.