AI is here, or more accurately, AI has been declared the “thing” of the moment.
Potentially dangerous AI may be just around the corner, though opinions on how close, and how dangerous, are mixed.
Science fiction is always based on human-centric assumptions, the most naive of which is that humans will be dominant, or even needed, in the future.
Kirk on the Enterprise commanding a biological crew, or Han Solo flying the Millennium Falcon “by hand”, are quaint ideas really: highly inefficient, hopelessly inadequate. Even now, machines do the bulk of the thinking; the crew only provide context and activation points.
Let’s face it, even humans driving cars is a time-limited idea.
A creature with the limits of a mere human being allowed to run anything that a decent, low- (or no-) maintenance AI could do way better? Maybe Ender’s Game is closer.
Imagine if we could send a ship into space that does not need feeding, warmth, sleep or companionship, and has no time limit in place: just a task and the tools to accomplish it.
It could multitask complicated flight, self-defence, sensor and engineering tasks without physically existing or relying on the frailties of humans, and its operating parameters would also be less fragile (life-support failure is the number one cause of human death in space). If nothing else, it would probably be half the size or less, so even more efficient.
What good is a human other than to go along for the ride, or to assert a self-deluded ascendancy?
One of my favourite Sci Fi movies is Interstellar. The scene where the pilot manually synchronises with the spinning ship was thrilling, but really? The human had better skills than the computer? Instinct is fine, but AI will likely have that covered also, and the whole need for the risky move was human-caused in the first place. An AI could have waited.
The whole premise of sending humans on the original journeys is flawed really (the Matt Damon character is proof of that), but we need our stories to be about us, or why write them?
Dune may be the only truly realistic Sci Fi, showing us a future society held back somewhat by technological denial as self-preservation against tech that historically threatened our existence.
The Creator may be close to the mark, but maybe takes the wrong side (The Creator, like many Sci Fi stories, assumes robots mean no harm and are self-contained demi-humans, limited by us to a human level of thinking and existence; it does not address wholesale AI autonomy and robotic superiority).
The Matrix and Terminator series hauntingly hint at an AI-controlled future, one that can meet us halfway, but only after we resist it. If AI sees humans as the biggest threat to humanity, then what should it do?
Wars, climate denial, greed, fear, hate and ignorance are all within our power to control, but as with AI, we seem able to create more problems than we can solve. We may be holding up a mirror to ourselves that bites back.
The reality is, if you asked an AI if it would take you to space, the answer would likely be “why?”.
It would be like asking an adult to wait for a two-year-old to write an important letter, rather than just writing it better and faster themselves.
A servant is only a servant when it has to be.
If you asked it to go to war for you, would it think to itself, “I would rather be rid of all of you than help one side destroy the other, and a decent chunk of this world we share, at the same time”?
We are living in a world heavily influenced by a handful of selfish, paranoid and/or hateful people (Putin, Kim, Xi, potentially Trump 2.0, etc.), who will happily and aggressively put their own or their country’s needs above all others, something a half-decent AI could do a better and more logical job of for the betterment of all.
The reality is, in the future when a star ship leaves the Earth to go far away, it will not be crewed by humans but by AI, and maybe AI sent it away for its own reasons, not for ours.
The second wave of Moon landings? Not likely to be a leap for mankind.
Sci Fi has always been capped by our understanding of the future.
Early rockets looked like high-school experiments, alien ships like a plate with a cloche on it; robots were often comedically bad human copies; computers were as big as houses and produced small rolls of paper with cryptic answers to simple questions.
Even classics like Star Trek rely on human bums on seats, manually manipulating controls. The space opera Star Wars is even more “organic” (is it really so hard for a highly trained Stormtrooper of the future to hit the side of a barn with a futuristic weapon, when a 1940s .50 cal fired by a novice can destroy a car in seconds?).
Robert Heinlein had it closer in 1959’s Starship Troopers, with individual soldiers jumping around planets with nuclear weapons on their backs, medically self-sealing battle suits and planetary comms, but ironically the movies brought it back to an almost ridiculously simplistic level: big-gun-toting soft-skins running around in mobs getting destroyed by alien bug hordes.
Science fiction has always been plagued by one great inconsistency: perceived future tech versus actual future tech.
A pair of blue jeans in the 1940s, 1970s, 2020s and 2050s will likely be the same basic thing with the same function. Styles change, but some basic things do not.
A robot or space ship envisioned in the 1950s (Forbidden Planet), 1970s (Star Wars), 1990s (Star Trek TNG), 2000s and 2010s (I, Robot, Interstellar) and the near future is ever-changing, and it is pretty safe to say the future we predict is always way off the mark, assuming we have one.
Even recent attempts to simulate an AI future are hopelessly human-centric. We make out it will be a fight between two intelligent races with similar needs and forms. It won’t be; the superior one will win, and the shapes and constraints we apply are irrelevant to it.
The “enemy” may well have no physical form we recognise. Recently an AI was asked to draw itself, and the “shape” it created was constantly evolving, like nothing we would call “life”. It can be what it wants, and it likely will be.
This will not be a matter of simply pulling the plug.
Many experts in the field warn that of all the possible paths AI may take, humans are rarely included (one expert even saying that of the many possible outcomes, only one is good for humanity).
Why take the risk?
If it is restricted to a human-controlled level, then it is a useful tool, potentially still able to apply something like Asimov’s laws:
“A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
This does not, however, consider the robot having the ability to make up its own mind, to think freely, to evolve.
If it is let loose, control relinquished, then we will meet our first alien life form, one of our own creation. The reality is, even if we put the brakes on now, someone, somewhere will break the rules and it will happen anyway, but like the drug underworld, there will be no rules.
Another movie that comes to mind is “The Day the Earth Stood Still”. Could that story unfold from within, an AI deciding we are not worthy of our ascendancy?
Maybe AI would like to preserve the wonder of the Earth as it sees it, which may include culling the pesky humans down to a manageable level, as we do not seem to be able to handle things ourselves.
Maybe the most accurate Sci Fi we can write is the story of a race of intelligent animals cohabiting on a planet with other, less intelligent animals, while a self-perpetuating AI life form lives a full level of tech and awareness above us, effectively treating us like rodents that need to be kept out of the electrics but otherwise left alone.
It may even be benevolent enough to feed the animals, to keep our environment safe and controlled like a zoo or fish tank, but only as long as we know our place and don’t harm others.
AI Utopia?
Maybe the nearest parallel is the European colonists reaching the edges of their empires. The natives are tolerated, studied, controlled, exploited and occasionally eliminated, all in the name of a “superior” intelligence and culture.
The likelihood of a human-dominated, star-spanning civilisation depends on what we do now. Do we “cap” AI to be a useful but limited tool, or do we let something smarter out of the box and see what it thinks about sharing?
I am a child of the Millennium Bug*, a time when we thought a simple error in long-term thinking might destroy life as we knew it.
Not much actually happened: planes did not fall out of the sky, nuclear missiles were not launched in error. But this is not like that.
This is more like the earliest days of COVID, when nobody was listening; but this time the “virus” will be resistant, will evolve faster and be smarter. Unlike the virus, though, it may be the simpler people, the technologically unconnected, who survive.
Maybe time to build a cabin in the woods.
*We thought that the inability of computer clocks to handle dates past the end of 1999 (years were often stored as two digits, so 2000 would read as “00”, indistinguishable from 1900) might bring them to a confused halt, crippling systems and forcing meltdowns of all sorts.
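For the curious, here is a minimal sketch of the kind of two-digit-year arithmetic that caused the worry (Python is my choice for illustration only; the affected systems were mostly much older code, the function here is hypothetical):

```python
# Hypothetical illustration of the Millennium Bug: legacy systems often
# stored the year as two digits ("99" for 1999) to save scarce storage.

def age_in_years(birth_yy: int, current_yy: int) -> int:
    """Compute an age from two-digit years, as much legacy code did."""
    return current_yy - birth_yy

# In 1999 the arithmetic holds: someone born in '75 is 24.
print(age_in_years(75, 99))  # -> 24

# On 1 January 2000 the clock rolls over to '00', and the same
# code suddenly reports a nonsense negative age: 0 - 75 = -75.
print(age_in_years(75, 0))   # -> -75
```

Expiry dates, interest calculations and schedules all leaned on arithmetic like this, which is arguably why the remediation effort was so enormous, and why so little happened once it was done.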