In a dog-eat-dog world, it’s important to know when you are the dog.
Geoff Wilson
I’ll open this one with the statement I think all articles on Artificial Intelligence should begin with: I’m still learning.
Much is written about the disruption that is happening right at this moment due to AI and the quickening pace of AI development in everyday life. A recent Forbes poll shows that 97% of CEOs and key decision-makers see AI playing a large role in their future operations. And, if I’m really blunt: I don’t think 97% of CEOs and key decision-makers even know the scope of what AI is today or could do in the near future.
The implications are large and broad, and the ignorance is real. So, with that said, I have a distinct sense that we are going to be starved for ethical frameworks as we manage through the emergence of AI.
This will be true at the macro level, where nation-states and overall political ideologies are going to wrestle with how to assimilate and regulate what’s coming (which for all intents and purposes looks to be an AGI–Artificial General Intelligence–that is far beyond anything currently contemplated); and it will be true at the micro level, where companies, households, and even individuals will have to re-orient to a world that can be engineered in the blink of an eye toward some exceedingly negative outcomes.
I liken the world we are entering to the late 1800’s and the emergence of industrial monopolists and trusts. Some of the builders of our modern world were in many ways economic predators who captured power and wealth by pillaging livelihoods and social structures–even if unknowingly. Regulatory frameworks had to catch up. Ethical frameworks had to catch up. And the benefit the world had “back then” was that the world generally moved at the pace of the telegraph and the locomotive.
We are emerging into a world that not only has a similar lack of readiness in our regulatory and ethical frameworks, but that also moves at light speed.
In the annals of competition, one of the more glaringly instructive contests was the race to the South Pole undertaken between dueling expeditions led by Robert Falcon Scott (the “Terra Nova” expedition) and Roald Amundsen (the “South Pole” expedition) in the early 1910’s. Rather than recount the full story here, I’ll merely offer an anecdote.
Among the many decisions that separated Scott and Amundsen were their choices of transportation. Scott famously attempted to deploy “motor sledges” (essentially early snow tractors) and horses. Amundsen went with dogs. The choice seems mundane at first, but the implications are astounding.
First of all, after the motor sledges failed Scott (as internal combustion engines were prone to do in the early 1900’s), he became dependent on horses, which were not well adapted to the cold (horses sweat when working…and sweat freezes). Not only that, but the Scott expedition had to carry food for the horses, which was heavy. Add to that the human attachment to noble horses, and the accompanying emotional burden on men unwilling to let their animals suffer, and you end up with a real logistical and emotional (dare I say ethical) conundrum. Scott’s expedition ended up “man-hauling” its sleds and supplies hundreds of miles to the Pole, and even farther back–and yes, this is as terrible as it sounds. In the end, every member of the team who reached the South Pole starved and died.
Charming story, right?
Amundsen’s expedition did something entirely different. They chose skis for the men and dogs to tow their sleds. And they exploited the fact that dogs are one type of animal not repulsed by cannibalism. In other words, when the going got tough, Amundsen fed his dogs to his dogs. He sacrificed the weaker animals for the survival of the stronger ones and their masters. For most of us, this strategy sounds gruesome. It was also an ingenious solution to a massive logistical challenge. Amundsen’s expedition skied and sledded to the Pole–arriving weeks ahead of Scott–then returned without loss of human life or even, relatively speaking, great difficulty.
Amundsen won because this and many of his other choices–no matter what you think of the stomach they took–were ultimately better than Scott’s.
Now, why do I bring up this anecdote in framing the ethical conundrum we face in our march toward AGI?
It’s because of this: at this moment, we view choices that require strong stomachs with some admiration, and even when we don’t admire the choices themselves, we admire those who make them as “impressive” humans. John D. Rockefeller made many, many predatory decisions in building Standard Oil into possibly the largest store of wealth in the world during the 1800’s. He was vilified by some and admired by others.
Without doubt, though, he was the “Amundsen” of the story. He was the winning master who pitted dog against dog. We lionize JDR for his wealth and philanthropy, even today.
In the future, though, we have real reason to fear that the “master”–the Amundsens of future competitive arenas–will be non-human. And that, my friends, means we stand a good chance of being merely the dogs.
In a dog-eat-dog world, it’s important to know when you are the dog and not the master.
Lex Fridman’s April podcast episode (#371) with Max Tegmark was recommended to me. Tegmark is a physicist and AI researcher at MIT who is decidedly negative on the likely outcomes of the AI revolution, and he has many compelling views. One that stuck with me is that, in his view, the first mass deployment of AI into the human world has been within the social media space…and we humans have lost that battle in spades.
In other words, when it came to deploying AI into social media, AI models keyed in on our human habits of tribalism, sectionalism, and hatred; and they had us eat each other alive. All of this was ostensibly because the AI was “only” looking for a way to increase “engagement” on silly social media sites.
So what happens when an AI is not only making marketing and entertainment decisions (some of which have already led to massive social dislocation, strife, suicide and death), but also decisions on transportation, health, governance, corporate strategy, and social policy? What happens when humans are no longer the Amundsen?
I’ll continue this line of thinking. I firmly believe we will need not only fantastically deft management of how and when to deploy AI–which will change our world even further than it already has–but also exceptional judgment and guidance on why we deploy it and how we can test and refine it to avoid unintended consequences.
This will be true for executives, and it will be true in spades for political and social leaders whose power is, by definition, even less regulated than business executive power.
Watch this space for more, and please…share your comments.
I will reiterate that I am a learner in this space…it’s just too critical not to comment on. Now it’s your turn…what do you think about the ethical implications of AI deployment?