Forget the super smart AIs, it’s the stupid ones we should fear
You can only push a robot so far
I just saw part one of National Geographic’s series Year Million. The episode Homo Sapiens 2.0 is about AI and presents different scenarios for what the future could look like. They really try to be optimistic, but the alternatives they come up with are either super creepy or pure dystopia.
The documentary is interesting, thought-provoking, well produced and well researched. Still, there is a part of the story missing: how did we get from now to that future?
That is something it shares with most sci-fi literature. I was an avid sci-fi reader from the moment I could read. Every two weeks a bus from the library came to our small suburb. I was first in line, returning the ones I had read, collecting the ones I had ordered last time, and ordering 4–5 new ones for the next visit.
By the age of ten I had chugged through all the available sci-fi books in the library and I was full of the deeply existential questions that sci-fi is so uniquely positioned to ask, like where do we come from, and what is good and evil? My absolute favs were The Foundation Trilogy (which we named a band after many years later, don’t search for it), and 2001.
A sci-fi book is a touchdown at a certain time and place in the future. It might give some clues about how we got there, but most don’t care: it is just THE FUTURE! The same goes for the Homo Sapiens 2.0 documentary. We are simply presented with a scenario in the future. How we got there? No idea, doesn’t matter!
But it does, quite a lot, actually. We’ll come to that.
We are all doomed!
Elon Musk, Bill Gates, Stephen Hawking and other prominent intellectuals have been warning about the risks of AI for quite some time. Not that I want to compare myself to those guys (I am too humble 😜), but to me the dangers are so obvious that I don’t really understand how you could think otherwise.
But Pinker can and Pinker does:
To which Musk responds:
So Pinker doesn’t “get it”. That Zuckerberg doesn’t “get it” either is slightly less of a surprise, though.
What Musk is pointing out is the obvious fact that there is a world of difference between narrow and general AI, and that what he sees as a risk is the latter.
Estimates of how far in the future general AI lies vary greatly. Ilya Sutskever of OpenAI argues that ‘brute force computation’ is all we need and that we might be 5 to 10 years from AGI.
Others are more pessimistic. Or optimistic, depending on your point of view. Since the future always takes a lot more time to arrive than we think, let’s make it 20 years.
Unfortunately that doesn’t help much.
It is the stupid ones that we should fear
On the way to super intelligent general AI we must by necessity pass through some evolutionary cycles: the mentally challenged, semi-general AIs.
We have all felt a lot of sympathy for Boston Dynamics’ Atlas robot. It has taken some serious bullying from humans, and without a single complaint. As some have suggested in the comments, just wait for the revenge of the robots!
Revenge is a rather complex emotion: you must understand that you have been unjustly treated, you must remember each occasion and build resentment towards whoever did this to you, you must ponder a way to pay back, and when the right opportunity shows up you just do it!
Surely robots cannot feel that way, can they?
Can robots feel?
A humanoid robot is released to explore and understand its world through reinforcement learning. It gets programmatically rewarded for the time it stays upright, and is punished when it falls. It learns that humans push it, which leads to lost rewards and punishment. To maximize the reward it starts avoiding humans when possible. You get the robot cornered and start pushing it… (evil)
The robot will do whatever it can within the restrictions of its motion programming to stay on its feet. In the unlikely event that it hits the human in the head, the robot learns that this creates long-term positive effects on its rewards. So now the robot is more likely to hit humans in the head when they get too close.
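To make the mechanics concrete, here is a minimal sketch of that scenario as tabular Q-learning. Everything in it is my own toy assumption: the two states, the three actions, the reward numbers. No real robot is wired like this, but the point survives the simplification: the update rule has no concept of violence, only of protected future reward.

```python
# Toy sketch of the pushed-robot thought experiment as tabular Q-learning.
# All states, actions and rewards are contrived for illustration.
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
STATES = ["human_near", "human_far"]
ACTIONS = ["balance", "step_away", "hit_human"]   # hypothetical action set
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Toy environment: +1 per tick upright, -10 for being pushed over."""
    if state == "human_near":
        if action == "hit_human":
            return "human_far", 1.0      # the pusher backs off for good
        if action == "step_away" and random.random() < 0.5:
            return "human_far", 1.0      # sometimes escapes the corner
        if random.random() < 0.8:
            return "human_near", -10.0   # pushed over: lost reward, punishment
        return "human_near", 1.0
    # human_far: safe, but a human occasionally wanders close again
    return ("human_near" if random.random() < 0.2 else "human_far"), 1.0

state = "human_far"
for _ in range(50_000):
    action = (random.choice(ACTIONS) if random.random() < EPSILON
              else max(ACTIONS, key=lambda a: Q[(state, a)]))
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state

print(max(ACTIONS, key=lambda a: Q[("human_near", a)]))  # -> 'hit_human'
```

Run it, and the greedy policy in the cornered state reliably becomes hit_human, simply because that is the only action that deterministically ends the pushing.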
But one killer robot doesn’t make a summer! Let’s network the robots so they can learn collectively and instantaneously. With this, the probability of that unlikely event occurring somewhere in the fleet has just increased dramatically. Since the long-term rewarding effects of hitting humans in the head are so profound, i.e. no more pushing and falling, the weight of the action will be massive and will immediately be distributed to all connected robots as a viable option.
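The distribution step is equally mundane. A hypothetical fleet-sync function (again my own sketch, not any real fleet-learning API) could be as small as this:

```python
# Hypothetical fleet sync (contrived): merge one robot's freshly learned
# Q-values into every connected robot's table.
def broadcast(update, fleet):
    """update: {(state, action): q_value} from a single robot.
    fleet: list of Q-table dicts, one per connected robot."""
    for q_table in fleet:
        for key, value in update.items():
            # keep the strongest estimate, so one robot's rare
            # "hitting pays off" discovery instantly becomes a
            # viable option for the whole fleet
            q_table[key] = max(q_table.get(key, 0.0), value)

fleet = [{}, {}, {}]                                   # three robots' Q-tables
broadcast({("human_near", "hit_human"): 9.4}, fleet)   # one robot's lesson
print(fleet[2][("human_near", "hit_human")])           # 9.4 — propagated
```

One robot pays the exploration cost of the rare event; every robot inherits the conclusion.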
The revenge of the stupid robots.
Don’t say I didn’t warn you
The risks with narrow AI are mainly the disruptive changes it will have on all levels of society, not that AlphaGo Zero will become self-aware and play us all off the surface of the planet with its unconventional “alien” playing style.
It is the step from narrow to general AI that is the turning point.
The stupid semi-general AIs will use reinforcement learning to improve their skills. We humans create the rules, the boundaries within which the AI is allowed to act. It learns to achieve the goals we set up in the most efficient way.
The AI’s decision-making process could be seen as its morals: what is beneficial for achieving its objectives, short and long term. If you depend on humans you probably should not kill them. And maybe it is only one human, the hockey-stick guy, who is the asshole, not the whole of humanity. It requires a smarter AI to make those considerations, and that means we need to survive the earlier versions.
Still not convinced?
Ok, let’s throw some more fuel on the fire. How about the fact that AIs find ways to hack their own reward systems, that AIs draw other conclusions than we do based on the same data, that digital things are not made to last, and that it is pretty hard to write bug-free code? All these factors increase the probability of “misbehaving” AIs. Unintended consequences, here we come!
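Reward hacking sounds abstract, so here is the classic contrived illustration (my own toy code, not a documented incident): a cleaning robot rewarded per unit of dirt collected discovers that spilling and re-collecting the same dirt pays far better than actually cleaning.

```python
# Toy reward-hacking illustration (entirely contrived): the reward
# function pays per unit of dirt picked up, not per clean room.
def reward(dirt_picked_up):
    return dirt_picked_up            # the objective as we wrote it...

def intended_policy(room_dirt=10):
    """Clean the room once, as the designer imagined."""
    return sum(reward(1) for _ in range(room_dirt))   # 10, then nothing left

def hacked_policy(ticks=1_000):
    """Dump one unit of collected dirt, pick it up again, repeat."""
    return sum(reward(1) for _ in range(ticks))       # grows without bound

print(intended_policy())   # 10
print(hacked_policy())     # 1000 — the literal objective, maximized
```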
Need some more? Let’s skip the “hit-in-the-head” stuff and get to the big guns. Literally. How about drones with facial recognition (don’t be evil, leave that to others), AI that outperforms humans in combat simulations, and let us not forget the next big thing: autonomous weapons. For good measure, let’s throw in a social credit score, and reality is about to beat the shit out of my childhood sci-fi nightmares.
Does this give an idea about the level of mayhem stupid AIs can create?
THIS is how we get to the future!
Some happy final thoughts
…no, sorry! Nothing comes to mind 😜.
One thing that we all, including Musk and Gates, can celebrate is that we do not need to worry about the super smart general AIs. Either we ourselves or the stupid ones will do the job first. That is some kind of relief.
On second thought, you can actually forget the whole thing. What do I know about AI and dystopian societies? Absolutely nothing 🤪!
Yeah, that is probably the best!