Artificial intelligence (AI) has grabbed headlines, hype, and even consternation at the beast we are unleashing. Every powerful technology can be used for good and bad, be it nuclear power or biotechnology, and the same is true for AI. While much of the public discourse from the likes of Elon Musk and Stephen Hawking dwells on sci-fi dystopian visions of overlord AIs gone wrong (a scenario certainly worth discussing), there is a far more immediate threat. Long before AI becomes uncontrollable or takes over jobs, a larger danger lurks: AI in the hands of governments or bad actors, used to push self-interested agendas against the greater good.
For background: as a technology optimist and unapologetic supporter of further development, I wrote in 2014 about the massive dislocation AI may cause in society. While economic metrics like GDP, growth, and productivity may look awesome as a result, AI may worsen the less visible but, in my opinion, far more critical metrics of income disparity and social mobility. More importantly, I argued why this time might be different from the usual economists' refrain that productivity tools always increase employment. With AI, the vast majority of current jobs may be dislocated regardless of skill or education level. In the previous industrial revolution, we saw this in agriculture, which between 1900 and 2000 went from a majority of US employment to less than 2%, and in industrial jobs, which today account for under 20% of US employment. This time, the displacement may not be limited to lower-skill jobs: truck drivers, farm workers, and restaurant food preparers may be less at risk than radiologists and oncologists. If skilled workers like doctors and mechanical engineers are displaced, education may not be a solution for employment growth (it is good for many other reasons), as is often proposed by simplistic economists who extrapolate the past without a causal understanding of why it unfolded as it did. In this revolution, machines will be able to duplicate the tasks they previously could not: those that require intellectual reasoning and fine-grained motor skills. Because of this, emotional labor may remain the last bastion of skills that machines cannot replicate at a human level, which is one reason I have argued that medical schools should transition to emphasizing and teaching interpersonal and emotional skills instead of Hippocratic reasoning.
We worry about nuclear war, as we should, but the economic war going on between nations is more threatening. Pundits like Goldman Sachs advocate internationalism because it serves their interests well, and it is the right thing if played fairly by all. Though economic nationalism is, in my view, the wrong answer, the right answer goes far beyond a level playing field. While Trump-mania may somewhat correctly stem from feelings of an unlevel playing field with China, the problem of economic wars will likely get exponentially amplified once AI becomes a factor. The capability to wage this economic war is very unequal among nation-states like China, the USA, Brazil, Rwanda, or Jordan, depending on who has the capital and the drive to invest in the technology. At its mildest, left to its own devices, AI will further concentrate global wealth in a few nations and create the need for very different international approaches to development, wealth, and disparity.
I have written about the need to address this issue of disparity, especially since the transformation will generate enormous profits for the companies that develop AI, and labor will be devalued relative to capital. Fortunately, with this great abundance we will have the means to address disparity and other social issues. Unfortunately, we will not be able to address every social problem, such as the loss of human motivation, that will surely result. Capitalism exists by permission of democracy, and democracy should have the tools to correct for disparity. Watch out, Tea Party: you haven't seen the developing hurricane heading your way. I suspect this AI-driven income disparity effect has a decade or more before it becomes material, giving us time to prepare. So while this necessary dialogue has begun and has led to proposed solutions such as robot taxes and universal basic income, which may become valuable tools, disparity is far from the worst problem AI might cause, and we need to discuss the more immediate threats.
In the last year alone, the world has seen some of the underpinnings of modern society shaken by bad actors wielding technology. We have seen the integrity of our political system threatened by Russian interference and our global financial system threatened by incidents like the Equifax hack and the Bangladesh Bank heist (where criminals stole $100m). AI will dramatically escalate such cyberwarfare as rogue nations and criminal organizations use it to press their agendas, especially where it operates beyond our ability to assess or verify. In destructive power, this escalation will resemble wind becoming a hurricane, or a wave becoming a tsunami. Imagine an AI agent trained on something like OpenAI's Universe platform, learning to navigate thousands of online web environments and tuned to press an agenda. It could unleash a plague of intelligent bot trolls onto the web in a way that destroys the very notion of public opinion. Or imagine a bot army of phone calls from the next evolution of Lyrebird.ai, each with a unique voice, harassing the phone lines of congressmen and senators with requests for harmful policy changes. This danger, unlike the idea of robots taking over, has a strong chance of becoming reality in the next decade.