The dark side of AI: social media and the optimization of addiction (Ep. 89)

Data Science at Home - A podcast by Francesco Gadaleta


Chamath Palihapitiya, former Vice President of User Growth at Facebook, said this during a talk at Stanford University: “I feel tremendous guilt. The short-term, dopamine-driven feedback loops that we have created are destroying how society works.” He was referring to how social media platforms leverage our neurological make-up in the same way slot machines and cocaine do, to keep us using their products as much as possible. They turn us into addicts.

F: how many times do you check your Facebook in a day?

I am not a fan of Facebook, and I do not have it on my phone. Still, I check it in the morning on my laptop, and maybe twice more per day. I have a trick, though: I do not scroll down. I only check the top bar to see if someone has invited me to an event, or contacted me directly. But from time to time this resolution of mine slips, and I catch myself scrolling down without even realising it!

F: is it the first thing you check when you wake up?

No, because usually I have a message from you!! :) But yes, while I have my coffee I do a sweep of Facebook, Twitter and maybe Instagram, plus the news.

F: Check how much time you spend on Facebook. Then add your email, Twitter, Reddit, YouTube, Instagram, etc. (all viable channels for ads to reach you). We have an answer. More on that later.

Clearly in this episode there is a form of addiction we would like to talk about. So let’s start from the beginning: how does addiction work?

Dopamine is a hormone produced by our body; in the brain it works as a neurotransmitter, a chemical that neurons use to transmit signals to each other. One of the main functions of dopamine is to shape “reward-motivated behaviour”: the way our brain learns through association, positive reinforcement, incentives, and positively-valenced emotions, in particular pleasure. In other words, it makes our brain desire more of the things that make us feel good.
These things can be, for example, good food, sex, and crucially, good social interactions, like hugging your friends or your baby, or having a laugh together. Because we have evolved to be social animals with complex social structures, successful social interactions are an evolutionary advantage, and therefore they trigger dopamine release in our brain, which makes us feel good and reinforces the association between the action and the reward. This feeling motivates us to repeat the behaviour.

F: now that you mention reinforcement, I recall that this mechanism is so powerful and effective that we have been inspired by nature and replicated it in silico with reinforcement learning. The idea is to motivate an agent (and eventually create an addictive pattern) to follow what is called the optimal policy, by giving it positive rewards or punishing it when things don’t go the way we planned.

In our brain, every time an action produces a reward, the connection between action and reward becomes stronger. Through reinforcement, a baby learns to distinguish a cat from a dog, or that fire hurts (that was me).

F: and so this means that all the social interactions people get from social media platforms are in fact doing the same, right?

Yes, but with a difference: smartphones in our pockets keep us connected to an unlimited reserve of constant social interactions. This constant flux of notifications - the rewards - floods our brain with dopamine. The mechanism of reinforcement can spin out of control. The reward pathways in our brain can malfunction, and this leads to addiction.

F: you are saying that social media has LITERALLY the effect of a drug?

Yes. In fact, social media platforms are DESIGNED to exploit the reward systems in our brain. They are designed to work like a drug. Have you ever been to a casino and played roulette or the slot machines?

F: ...maybe?

Why is it fun to play roulette? The fun comes from the WAIT before the reward.
You put a chip on a number; you don’t know how it’s going to go. You wait for the ball to spin, you get excited. And from time to time, BAM! Your number comes out. Now compare this with posting something on Facebook. You write a message into the void, wait… and then the LIKES start coming in.

F: yeah, I find that familiar...

Contrary to the casino, social media platforms do not want our money; in fact, they are free. What they want - what we actually pay with - is our time. The longer we stay on, the longer they can show us ads, and the more money advertisers pay them. This is no accident; this is the business model. But asking for our time out loud would not work: we would probably not consciously give it to them. So, like a casino, they make it hard for us to get off once we are on: they make us crave the likes, the right-swipes, the retweets, the subscriptions. So we check in, we stay on, we keep scrolling, because we hope to get those rewards. The short-term satisfaction of getting a “like” is a little boost of dopamine in our brain. We get used to it, and we want more.

F: a lot of machine learning is also being deployed to amplify this form of addiction and make it... well, more addictive :) But the question is: how much of this effectiveness comes from the algorithms, and how much from the fact that humans are simply wired to respond to these dynamics? In other words: are we essentially flawed, or are these algorithms truly powerful?

It is not a flaw, it’s a feature. The way our brain has evolved has been in response to very specific needs. In particular, for this conversation, our brain is wired to favour social interactions, because they are an evolutionary advantage. These algorithms exploit these features of the brain on purpose; they are designed to exploit them.

F: I believe so, but I also believe that the human brain is a powerful machine, so it should be able to predict what satisfaction it can get from social media.
So how does it happen that we become addicted?

One optimisation strategy that social media platforms use is based on the principle of “reward prediction error coding”. Our brain learns to find patterns in data - this is a basic survival skill - and therefore learns when to expect a reward for a given set of actions. I eat cake, therefore I am happy. Every time.

Now imagine that, through experience, we have learnt that when we play slot machines in a casino we win some money once every 100 times we pull the lever. The difference between predicted and received rewards is then a known, fixed quantity, and just after winning once we have almost zero incentive to play again. So the casino fixes the slot machines to introduce a random element in the timing of the reward. Suddenly our prediction error increases substantially. In this margin of error - in the time between the action (pulling the lever) and the reward (maybe) - our brain has time to anticipate the result and get excited at the possibility, and this releases dopamine. Playing in itself becomes a reward.

F: There is an equivalent in reinforcement learning, called the grid world, which consists of a mouse getting to the cheese in a maze. In reinforcement learning, everything works smoothly as long as the cheese stays in the same place.

Exactly! Now, social media apps implement an equivalent trick, called “variable reward schedules”. In our brain, after an action we get a reward or punishment, and we generate positive or negative feedback to that action. Social media apps optimise their algorithms for the ideal balance of negative and positive feedback in our brains, caused by the difference between predicted and received rewards. If we perceive a reward to be delivered at random, and - crucially - if checking for the reward comes at little cost, like opening the Facebook app, we end up checking for rewards all the time.
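The grid world F mentions can be sketched in a few lines of tabular Q-learning. Everything here - grid size, rewards, hyperparameters - is an illustrative choice of ours, not something specified in the episode:

```python
import random

random.seed(0)

# Toy grid world: the "mouse" starts at (0, 0) and the "cheese" sits at (3, 3).
# Tabular Q-learning learns which move to prefer in each cell. All numbers
# (grid size, rewards, hyperparameters) are illustrative assumptions.
SIZE = 4
GOAL = (3, 3)
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

Q = {(x, y): [0.0] * 4 for x in range(SIZE) for y in range(SIZE)}

def step(state, action):
    """Apply a move, clipping at the walls; cheese pays 1, each step costs 0.01."""
    dx, dy = MOVES[action]
    nxt = (min(max(state[0] + dx, 0), SIZE - 1),
           min(max(state[1] + dy, 0), SIZE - 1))
    return nxt, (1.0 if nxt == GOAL else -0.01)

alpha, gamma, eps = 0.5, 0.9, 0.1
for _ in range(2000):  # training episodes
    s = (0, 0)
    while s != GOAL:
        # epsilon-greedy: mostly exploit the best known move, sometimes explore
        a = random.randrange(4) if random.random() < eps else Q[s].index(max(Q[s]))
        nxt, r = step(s, a)
        target = r if nxt == GOAL else r + gamma * max(Q[nxt])
        Q[s][a] += alpha * (target - Q[s][a])  # nudge Q toward the observed return
        s = nxt

# Follow the learned (greedy) policy from the start to the cheese
s, path = (0, 0), 0
while s != GOAL and path < 20:
    s, _ = step(s, Q[s].index(max(Q[s])))
    path += 1
print(path)  # the shortest corner-to-corner path on a 4x4 grid is 6 moves
```

And indeed, as F notes, this works smoothly only because the cheese never moves: if GOAL changed between episodes, the learned Q-values would keep becoming stale.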
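The "variable reward schedule" can be made concrete with a toy prediction-error calculation. All numbers below are invented for illustration: a user who has learnt to expect about one like per check sees a negative prediction error while likes are withheld, then a large positive spike when they arrive as a bundle:

```python
# Toy model of a variable reward schedule (all numbers are invented).
# The user has learnt to expect ~1 like per check of the app. The platform
# withholds likes for four checks, then delivers them in a single bundle.
expected = 1.0                 # likes the user predicts per check
delivered = [0, 0, 0, 0, 5]    # likes actually shown at each check

# Reward prediction error: received minus predicted, per check
errors = [likes - expected for likes in delivered]
print(errors)  # [-1.0, -1.0, -1.0, -1.0, 4.0]

# The total reward is unchanged (5 likes either way), but the schedule now
# ends in a large positive surprise - the spike the brain reads as a "hit".
assert sum(delivered) == len(delivered) * expected
```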
Every time we are just a little bit bored, without even thinking, we check the app. The Facebook reward system (the schedule and triggers of notifications and likes) has been optimised to maximise this behaviour.

F: are you saying that buffering some likes, and then finding the right moment to show them to the user, can make the user crave the reward?

Oh yes. Instagram will withhold likes for a period of time, causing a dip in reward compared to the expected level. It will then deliver them later in larger bundles, boosting the reward above the expected value. This triggers extra dopamine release, which sends us on a high akin to a cocaine hit.

F: Dear audience, do you remember my question? How much time does each of you spend on social media (or similar) in a day? And why do we still do it?

The fundamental feature here is how small the perceived cost of checking for the reward is: I just need to open the app. We perceive this cost to be minimal, so we don’t even think about it. YouTube, for instance, has the autoplay feature, so you need to do absolutely nothing to remain on the app. But the cost is cumulative over time: it becomes hours in our day, days in a month, years in our lives!! Two hours of social media per day amounts to one month per year.

F: But it’s so EASY, it has become so natural to use social media for everything. To use Google for everything.

The convenience that the platforms give us is one of the most dangerous things about them, and not only for our individual lives. The convenience of reaching so many users, together with the business model of monetising attention, is one of the causes of the centralisation of the internet, i.e. the fact that a few giant platforms control most of the internet traffic. Revenue from ads is concentrated on the big platforms, and content creators have no choice but to use them if they want to be competitive. The internet went from looking like a distributed network to a centralised network.
And this in turn causes data to be centralised, in a self-reinforcing loop. Most human conversations and interactions pass through the servers of a handful of private corporations.

Conclusion

As data scientists we should be aware of this (and we think mostly we are). We should also be ethically responsible. Being a data scientist no longer has a neutral connotation. Algorithms have this huge power of manipulating human behaviour, and, let’s be honest, we are the only ones who really understand how they work. So we have a responsibility here. Some organisations, like Data For Democracy, are advocating for something equivalent to the Hippocratic Oath for data scientists: do no harm.

References

Dopamine reward prediction error coding - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4826767/
Skinner - Operant Conditioning - https://www.simplypsychology.org/operant-conditioning.html
Dopamine, Smartphones & You: A battle for your time - http://sitn.hms.harvard.edu/flash/2018/dopamine-smartphones-battle-time/
Reward system - https://en.wikipedia.org/wiki/Reward_system
Data for Democracy - datafordemocracy.org
