Insights · June 8th, 2021

In my last post I explained how recommender systems, or recommender engines, work and why the current iterations of these systems can make us modern humans look like a misinformed, cognitively biased, sad and pathetic kind of animal. In this piece I will argue why I'm not convinced that the business models these algorithms rely on will remain viable much longer. The very same methods that made them successful in the beginning might well be the cause of their diminishing effect in the future. This doesn't mean that recommender systems will go away. But they might be optimized differently or remodeled into a completely different kind of system. I will discuss some possible future strategies here.

Recommender systems are like big sorting machines that help us make choices in the presence of too much information. They have been developed to suggest products, services or content based on our past behavior and that of others deemed similar to us. Almost all recommender systems apply a "more of the same" strategy, which explains why you see sentences like "Since you like X, you'll probably also like Y." For years this strategy has been immensely successful, and it is probably behind Amazon's early success as an online book store.
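
To make the mechanics concrete, here is a minimal sketch of that "more of the same" strategy as item-based collaborative filtering with cosine similarity. The ratings matrix, item names and users are all invented for illustration; real systems are vastly larger and more sophisticated.

```python
import numpy as np

# Users x items rating matrix (0 = not yet interacted). All values,
# item names and users are invented for illustration.
ratings = np.array([
    [5, 4, 0, 0, 0],   # user 0: a thriller fan
    [4, 5, 5, 0, 1],
    [0, 0, 1, 5, 4],
    [1, 0, 0, 4, 5],
], dtype=float)
items = ["thriller_a", "thriller_b", "thriller_c", "cookbook_a", "cookbook_b"]

def item_similarity(m):
    """Cosine similarity between item columns."""
    norms = np.linalg.norm(m, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    unit = m / norms
    return unit.T @ unit

sim = item_similarity(ratings)

def recommend(user, k=1):
    """"Since you liked X, you'll probably also like Y": score unseen
    items by their similarity to everything the user rated highly."""
    seen = ratings[user] > 0
    scores = sim @ ratings[user]
    scores[seen] = -np.inf          # never re-recommend what was seen
    return [items[i] for i in np.argsort(scores)[::-1][:k]]

print(recommend(user=0))   # -> ['thriller_c']: more of the same
```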

I suspect we might be reaching a point where two phenomena could erode the effectiveness of these models as they are currently configured. One has to do with regression to the mean; the other relates to the law of diminishing returns.

Regression to the mean: The illusion of extreme influencers 

After the Cambridge Analytica scandal, which revealed the use of data mining and micro-targeting to make people vulnerable to political manipulation, much of the attention around algorithms was directed at political propaganda and disinformation. Algorithmically created filter bubbles do in fact show that we are diverging politically, swayed by influencers with rather polarized political leanings. It turns out, however, that this is a rather elite phenomenon. On most issues, in most instances, people hold fairly moderate opinions when they are not persuaded by influential elites. So while algorithms will try to pull the cluster mean of each filter bubble toward an ever more extreme position, most of the targeted individuals will default back to a more moderate position.
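
A toy simulation can show why that pull is weaker than it looks. In the sketch below (every number is invented), each expressed opinion is modeled as a stable disposition plus transient noise; the people who look most extreme at one moment look markedly less extreme the next, simply because part of their apparent extremity was noise.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model: expressed opinion = stable disposition + transient noise
# (mood, context, algorithmic nudges). All parameters are invented.
n = 100_000
disposition = rng.normal(0.0, 1.0, n)   # fairly moderate on average
noise_t1 = rng.normal(0.0, 1.0, n)
noise_t2 = rng.normal(0.0, 1.0, n)

expressed_t1 = disposition + noise_t1
expressed_t2 = disposition + noise_t2

# Select the people who looked most extreme at time 1 (top 1%).
extreme = expressed_t1 > np.quantile(expressed_t1, 0.99)

print(f"mean opinion at t1 (extreme group): {expressed_t1[extreme].mean():.2f}")
print(f"mean opinion at t2 (same group):    {expressed_t2[extreme].mean():.2f}")
# The same individuals look far less extreme the second time: much of
# their apparent extremity was transient, so they regress toward the
# population mean without anyone persuading them.
```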

The law of diminishing returns: Positive feedback loops are intrinsically unsustainable

But when the algorithms become successful at segregating us into smaller and smaller fragments, won't the utility at some point taper off? What can you sell us when we've become homogenized to the point where we're completely unreceptive to anything but our narrowest interests?

What else can you offer a user whose algorithms have reinforced fears and paranoia to such a degree that all their attention revolves around stockpiling canned foods and weapons? 

Build diversity into the algorithms 

There has been a lot of talk about bringing more diversity into AI: more diverse representation in training data, as well as among the people working with that data. These efforts should be celebrated and continued. But they do nothing to fight echo chambers and filter bubbles. After all, if you set a sorting machine to the task of sorting differently colored marbles, it doesn't matter how many colors you start with; the end result is that each pile consists of identical marbles. The only way to introduce diversity is to change how you model and optimize the algorithms themselves.

A new approach to user engagement

If you are responsible for a platform where your job is to engage interest among your users, what should you do? What new approaches might you want to test on your users?

  1. Maximizing difference. This is pretty obvious. Instead of suggesting items that are similar to our past history, every so often introduce their complete opposite (see the code sketch after this list). Let us be surprised by the anomalies that exist outside our filter bubbles. Hey, maybe we'll stay on the platform because we discover something new!
  2. Randomization. Introduce items that could be taken from anywhere in the universe to create a sense of serendipity. As a child, did you enjoy playing hide and seek, not knowing where to find your friends or how they would find you? Did you open your advent calendar during the month of December in suspense over what the next surprise might be? Algorithms have almost completely removed the element of randomness and surprise and replaced it with predictable choices, the safe stuff they know we like. And in the end we get lonely and bored to death. Break up the pattern! Surprise us!
  3. Granovetter's network theory. In 1973 sociologist Mark Granovetter published "The Strength of Weak Ties," which proposes that acquaintances are often more influential than close friends, particularly in social networks. As much as our own social cliques love us and want to help us, it is often the weak ties, the people at the fringe of our networks, who can offer us the most. Again, this has to do with the circulation of ideas: you are almost certain to run out of options sooner if you never leave your filter bubble. For a preselected share of recommendations, the similarity criterion could be replaced with one that optimizes the visibility of items with the most diverse appeal. Content that balances several aspects of a controversial issue may earn lower engagement than more polarized content that appeals to a more extreme cluster, but it would earn a higher "diversity score." An algorithm seeking to gradually broaden points of view could test or apply this weak-tie principle on a certain percentage of its recommendation list.
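
Here is a minimal sketch of how these three ideas could be blended into one recommendation slate. Everything in it (the item vectors, the taste profile, the per-cluster engagement numbers and the blend ratios) is invented for illustration; it shows the shape of the re-ranking, not a production design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate pool: item vectors plus a user taste profile.
n_items, dim = 500, 16
candidates = rng.normal(size=(n_items, dim))
profile = rng.normal(size=dim)

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

similarity = np.array([cos(v, profile) for v in candidates])

# 1. Maximizing difference: a few slots go to the *least* similar items.
opposites = np.argsort(similarity)[:2]

# 2. Randomization: a few slots are pure serendipity.
random_picks = rng.choice(n_items, size=2, replace=False)

# 3. Weak ties / diversity score: reward items that appeal evenly
# across user clusters rather than intensely to a single cluster.
n_clusters = 5
appeal = rng.random((n_items, n_clusters))   # stand-in for per-cluster engagement
diversity_score = appeal.mean(axis=1) - appeal.std(axis=1)
weak_tie_picks = np.argsort(diversity_score)[::-1][:2]

# Remaining slots keep the classic "more of the same" ranking.
classic = np.argsort(similarity)[::-1][:4]

slate = list(dict.fromkeys([*classic, *opposites, *random_picks, *weak_tie_picks]))
print(slate)   # up to 10 slots: familiar items plus deliberate detours
```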

I wouldn’t be surprised if jaded users found such diversity approaches refreshing. Imagine a social network where you are regularly connected with the ideas of people at the fringe of your network, people who might have other kinds of connections and knowledge than you do. Wouldn’t you find this more enriching than one where everybody knows everyone and they all think the same?

Finally, we might want the big data algorithms to give us an honest account of where we stand compared to the other people in our network. Because is it really true that we are the level-headed ones while other people are the oddballs and the outliers? What if your data could be represented back to you in the form of a three-dimensional scatter plot that showed where you are really positioned compared to others? What if it turns out that you are the outlier, the one at risk of being pulled into a black hole? Wouldn’t you want to know ahead of time? Or are we so comfortable in our bubbles that the cognitive dissonance this could provoke would be too much?
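
Such an honest mirror could be as simple as the sketch below: project each user's high-dimensional behavior vector down to three dimensions and report how far you sit from the bulk of your network. The data, the 50-dimensional feature space and the deliberate off-center nudge are entirely synthetic assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)

# Synthetic behavior vectors for your network, plus "you", nudged
# off-center on purpose so the outlier check has something to find.
network = rng.normal(0, 1, size=(300, 50))
you = rng.normal(0, 1, size=50) + 3.0

# Reduce everyone to three dimensions, suitable for a 3-D scatter plot.
coords = PCA(n_components=3).fit_transform(np.vstack([network, you]))
center = coords[:-1].mean(axis=0)
dists = np.linalg.norm(coords - center, axis=1)

your_dist = dists[-1]
percentile = (dists[:-1] < your_dist).mean() * 100
print(f"you are farther from the center than {percentile:.0f}% of your network")
# A high percentile is the honest "you might be the outlier" signal
# the paragraph imagines; coords could feed the scatter plot itself.
```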

Grab some popcorn and watch the change unfold

Generation Z knows how to play the algorithms to their advantage, and you can too. After about six months of carefully deciding what to like and what to digitally ignore on social media, I now get a steady stream of animal and nature videos curated just for me, which is quite soothing in an otherwise turbulent year. But the big plot twist will come when powerful stakeholders realize that current recommender systems might not be sustainable – for business, for users, or for society. Until then, proceed with caution and mind the holes in your universe.


Disclaimer: The views expressed here are Anne Boysen’s alone and do not necessarily reflect the views of Futurist.com, other Think Tank members, her employer, organizations affiliated with her employer, or past or current clients and customers she has worked with. All content shared is available to the public.

