
I don’t think any of us understand what “The Generative AI Revolution” actually means. I certainly don’t. Is AI like coding, a set of tools that will enable unthinkable things in the future? Or is it more like the computer, a piece of technology on which innovation, creativity, and malice can be built? Or is it more like social media, an ‘add-on’ to prior technological revolutions that has fundamentally changed a facet of life? Or is it something completely new, unlike anything we’ve encountered before with fully unpredictable benefits and consequences?
I’ve recently been trying to learn a bit more about how it works and where it is being used. In the process, I’ve seen videos of the tools reviewing scans and providing answers as good as a radiologist could. I’ve learnt about its predictive potential, where it can sift through thousands of previous examples to forecast what might happen next far better than a human ever could. I’ve even tried “low code” tools where I’ve been able to create websites and apps just with prompts. Most recently, I’ve tried to see how well the current technology could replace my job by building a “transformation assistant” without writing a single line of code. I’m not scared yet, because lots of my job involves “unwritten rules and relationships”, but there are certainly things it can do better and faster than I can. It can break down problems and come up with ideas far faster than I could. It can make notes and structure them in seconds. It can even learn my voice and movements and respond to people like I would.
I’ve also been learning that this technology is inevitable. The train has left the station and there’s no turning it around. Sixty-five percent of respondents to McKinsey’s “The State of AI in early 2024” survey reported that their organizations were regularly using gen AI, nearly double the share from McKinsey’s previous survey just ten months earlier (McKinsey, 2024). That was in early 2024.
I’m left with one unanswerable question that I’ll give my almost certainly wrong perspective on. Is this good?
On the one hand, it might revolutionize medicine. On the other, it might revolutionize warfare.
On the one hand, it could drive efficiency. On the other, it could lead to mass unemployment.
On the one hand, it may facilitate innovation and creativity. On the other, it might cause creativity to lose its meaning.
On the one hand, it will likely enable new methods of connection. On the other, it will likely drive division and misunderstanding.
The truth is that it will be capable of doing all the above and more. As with all new technology, it will act as a mirror. It will mirror what is already true about us, and our capability for beauty and malice. For this reason, I disagree with pundits who think AI will create a new sort of evil. That evil already exists in us. However, I fear this technology will selectively amplify certain parts of us that are not good, and this is where I start to really worry.
In many ways, recent tech revolutions have felt democratizing. Take computers. Here was this technology that enabled us to capture and store ourselves and our thoughts. Then we added the internet. We could share these thoughts we had captured, and access the news far more easily than ever before. Then we added social media. Suddenly we could connect with people who shared similar values without needing fortuitous introductions. Popular revolutions were born, ideologies shared, and knowledge developed.
However, as time has passed, what once felt democratizing has faded, leaving a feeling of oligarchy in its wake. Our information environment is almost entirely controlled by a few large companies, and these companies are controlled by a handful of individuals. Yes, in theory we choose to expose ourselves to these information environments, but in many ways it’s a false choice. The internet has become so frictionless in how it delivers information that organizations like social media companies have accumulated a critical mass of users, making it very hard to use anything else. And even when challengers appear, the amount of capital required to mount a challenge creates serious selection bias. If I have enough money to set up a significant information organization, I probably don’t want that money to be taxed, and I probably don’t care too much about public services. And really, I probably have more money than I could ever need, and so I probably care more about power than I do about money. Crucially, the parties that can give me power probably don’t abide by many of the norms that keep society from veering off in dangerous directions.
Previous waves of AI have already provided the mechanism for these individuals to influence society. Our feeds have become increasingly pernicious precisely because algorithms decide what content we see, and these algorithms have a natural bent towards content that provokes a reaction, both good and bad (mainly bad). The owners of social media companies may argue they don’t actively promote this, giving them a level of plausible deniability, but they certainly don’t work very hard to prevent it. They have an eerily laissez-faire attitude towards controlling this built-in bias, despite the fact that it clearly amplifies the worst parts of us. And we see their true colors when populist leaders give them opportunities to get rid of fact checkers altogether in the name of ‘free speech’. Is free speech really free if some people’s speech is unfairly amplified?
Not only this, but “bot culture” is widely facilitated by these companies. Click on the comment section of any political post, and you’ll see accounts pushing right-wing, anti-immigrant, pro-populist views. Click on the accounts, and they almost always have no followers and follow lots of people, a trademark of bots. These artificial creations shift the Overton window of acceptability. They give users the psychological permission to believe these views are acceptable, and thus shape the cultural conversation. And despite their origins almost always lying in nations that don’t have our best interests at heart, social media organizations are all too willing to let them run riot, because they shift the conversation in a way that fits their ultimate agenda: power.
Populist parties and social media moguls have a symbiotic relationship, enabled by artificial intelligence that gives social media moguls a level of plausible deniability. Extreme parties get power because of the built-in content bias, and when they do our information overlords get the front row seats that they so desire.
This is the crux of why I’m worried about this new wave of generative AI. Yes, there are democratizing and enabling elements to the technology. It’s amazing that in a matter of days I can build a “transformation assistant” that can take on parts of my job. I could never do that without generative AI. However, if previous waves of AI have taught us anything, it’s that this technology is unbelievably expensive to create and host, and therefore there is significant built-in selection bias in who will steward this revolution into the future. And if I have enough money to set up an AI model, I probably don’t want that money to be taxed, and I probably don’t care too much about public services. And really, I probably have more money than I could ever need, and so I probably care more about power than I do about money.
McKinsey & Company (2024). The state of AI in early 2024: Gen AI adoption spikes and starts to generate value. [online] Available at: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai.