36 Deep Blues

 

I have, for a very, very long time – decades perhaps – been extremely concerned that human technological prowess is far exceeding the progress we see in our social evolution. It may, for example, have taken hundreds of years for humankind worldwide to stigmatize and abandon slavery (17th century to 20th century), but only about four years of concerted scientific effort to produce an atom bomb (1941 to 1945). And sometimes we do not merely move slowly, but conspicuously retrogress. In 2015, I was worried about the unsolvable geopolitical nuclear weapons problem and about global warming, and I believed much of the educated First World was very concerned about these issues also. But the very next year, 2016, the United States, the most famous and powerful country in the world, “elected” as its supreme guide and spokesperson someone who supported nuclear proliferation and repeatedly denied climate science and the longstanding claims of the world’s most respected climate scientists. Sometimes retrogression is glaring.

In the late 1940s, American President Harry Truman remarked that “Our machines have got ahead of our morals.” Scientists, philosophers, journalists, futurists and political leaders have commented similarly in the decades since. There was one serious attempt to treat the nuclear weapons problem: the Baruch Plan of 1946, under which the United States proposed an international organization that would hold all authority over nuclear weapons, so that no single nation could claim them. But the Soviet Union (the dictator Stalin, that is) balked. That one chance to control nuclear weapons was lost, and now there seems no feasible way to contain this technological and geopolitical danger. The threat of mutually assured destruction (MAD) is a permanent, unsolvable feature of the modern world.

In the twenty-first century, we’re arriving at greater and greater awareness of the gargantuan impacts – positive and catastrophic – of new intelligent machines and fantastical computer technologies. We head inexorably toward a world controlled by superhuman “artificial” intelligences, computer programs and devices. And the side of the boat humankind is in reads: Heedless Temerity.

One arrives at the question, Can humankind somehow save itself from destruction? The hopeful answers to this question are more and more deeply obscured, and we find ourselves looking (inasmuch as the looking is candid and perspicacious) into the future and seeing not so much an uncertainty as the monstrous pull of incautious ambition, power for its own sake, and ease as an axiom – and this prevailing over an informed and deeply intelligent view of what lies immediately in our path. An insightful and estimably informed person in 2018, looking at the frightening technology that is almost certain to arrive in human hands in the next half century, will likely find herself overcome with the blues.

We are, before we’ve come near to solving the nuclear weapons and climate issues, arriving soon at a third potentially existential threat to humankind. (And, mind you, all three have arrived within a mere century.) The third, and by far the most consternating, threat is presently understood as ‘artificial intelligence.’

Several of the world’s most famously intelligent and informed scientists have stepped courageously forward to warn the rest of us that AI “could spell the end of the human race.” And the more one looks carefully at AI, in all its complexity, unpredictability, inexorability and stark, mind-numbing puissance, the more one begins to think these scientists understate the threat. Here we will discuss only a few of the innumerable perils that arrive concomitant with AI.

In recent months the science and engineering titan Elon Musk differed publicly with the social media titan Mark Zuckerberg on the supposed threat that AI poses to humankind. Musk, you must know, takes the more “alarmist” position, while Zuckerberg can be counted (in the first months of 2018) among AI’s legions of apologists. But, to some extent, these personages from American big business are talking past each other. And this intricacy was alluded to by the theoretical physicist and futurist Michio Kaku in a recent TV interview. Kaku was asked specifically about this disagreement between Musk and Zuckerberg: Which of these men is correct? Dr. Kaku replied that “They both are.” He went on to explain that, in the short term of thirty or forty years, Zuckerberg is right… there is a great bonanza to be harvested from AI. However, in the long term, Musk is right… there are not merely “unanswered questions” with respect to this coming fantastical technology, but dire concerns of a very rational and practical nature. In examining the various worries the most informed of us may have about AI, it is hard to know where to begin! So, let us start with the concept of “power”, as AI is a wildly capable (coming) technology that is (will be) understood to be far more intelligent than humans and thus, in intelligence, far more powerful.

Before proceeding, let us address the very relevant question of whether intelligence itself can be considered a kind of “power”, as we often think of power as an agency of doing and very commonly construe “intelligence” as something more abstract, passive and essentially cogitative. On intelligence, let us ask whether a political entity that is much smarter is relatively advantaged in war against an entity less so. The honest answer is yes; we are definitely advantaged against any potential enemy when our intelligence is much greater than that enemy’s. And look now at the most basic meaning of the word “power”: it is merely the capability of having an effect. No modern historian doubts for a single instant that General Dwight Eisenhower was intellectually superior to Hitler. Likewise we should not pause to indulge the daft doubt that Eisenhower was made really, effectually powerful by the sheer superiority of his intelligence. (Eisenhower, for example, was so clear in his intelligent appreciation of the need for absolute top-to-bottom discipline in a well-functioning army that he substantially punished a very high-ranking subordinate – General George Patton – for the offense of angrily slapping an enlisted soldier. And Eisenhower did this right in the middle of the war! Hitler and his generals thought it absurd that the Americans would really punish such an effective general over a mere slap on the face; their sanction, in a similar situation, would have been a slap on the wrist. Let’s face it: Eisenhower’s intelligence was an assist to Allied success in the war!) That something does not abide in the realm of visually apparent causation does not mean that it has no power, no ability to have an effect. For goodness’ sake, love may be not a physical but a ‘felt’ thing, but does anyone doubt its demonstrable “capability of having an effect” in the production of countless progeny – well known to be a direct consequence of love?

Love is a kind of power, and so is intelligence.

Having treated that semantic matter, we are now to answer this question: Does the historical aphorism that “power tends to corrupt and absolute power corrupts absolutely” somehow not apply when the power in question is a machine power (software, actually), a cyber power? In any event, Lord Acton, the original speaker from whom we derive this quote, did not think it necessary to confine his echoing wisdom to his own species.

And now, of enormous importance, this question: Would a computer intelligence a million times more capable than any human intelligence be (justifiably) considered, in a relative sense, “absolutely powerful?”

No answer needed. The answer to my rhetorical question already appeared, reader, consequent to the power of your intelligence, in your mind.

“But how can you just guess that some computer will be so excellently and superiorly intelligent?” comes the question.

I – the author of this essay – cannot guess that! If I were to guess, I’d only guess that humankind would be totally finished before AI technology reached such a level of stark superiority. So, a loud credit to you, asker! Touché!

Another worry about AI: Contemporary debates often treat AI as if it were one concise, discrete thing, and they seem to neglect that, as far as anyone can tell, AI should, much like virtually all other known software, be replicable. A programmer could (hypothetically) insert into her code a specification that hinders or prevents replication, but why would anyone want to do that with AI? Did the inventors of cars ever think to prevent more of them from being made? Did the innovators who created the first computers during World War Two ever think of stopping their wondrous creations from being reproduced, so that later iterations might be put to breaking more enemy codes and working against enemy dastardliness even more effectively? The point here is that it is unlikely to the point of utter absurdity that the first AI will somehow be designed to prevent replication. If a program is worth maybe ten billion dollars and is also replicable with the press of a button, does it not follow that some usefulness (value) will be found in the push-of-a-button second generation, even if that second generation is worth much less than $10 billion? If pushing the button produced only a $9 billion benefit, does it not get pushed? And the next: if a mere $8 billion is expected from a third push of a simple button, does one not push it? After all, it is only eight billion dollars! The obvious truth is, the more useful and viable and profitable a technology is, the more eager its creators are to inexpensively replicate it. What is so freakish and astonishing about AI is that it may well be the most fantastically useful invention of all time, and also the most easily replicable.
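The escalating button-presses above can be put in toy-model form. The dollar figures and the decay rate below are purely illustrative assumptions, not estimates; the sketch only shows that as long as each copy’s expected value, however diminished, still exceeds the trivial cost of copying, the “button” keeps getting pushed.

```python
# Toy model of the replication argument: each successive copy of a
# wildly valuable program is assumed worth somewhat less than the
# last, but the marginal cost of copying is negligible by comparison,
# so a self-interested owner keeps making copies for a long time.

def copies_made(first_copy_value: float, decay: float, copy_cost: float) -> int:
    """Count copies made while each copy's expected value still
    exceeds the cost of producing it."""
    n = 0
    value = first_copy_value
    while value > copy_cost:
        n += 1
        value *= decay  # each successive copy is worth a bit less
    return n

# A $10-billion program whose copies lose 10% of their value each
# generation, against a $1-million cost per copy:
print(copies_made(10e9, decay=0.9, copy_cost=1e6))
```

Even under this steep 10%-per-copy depreciation, the model grinds through dozens of profitable button-presses before stopping, which is the essay’s point: replication halts only when the marginal value of a copy finally dips below the marginal cost of making it.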

We may have, let’s say in 2060, an AI technology that is 1,000 times as intelligent as humans, and we will have the ability to fully replicate it: all we need is the mainframe in which to house the cyber mind, and large computerized devices (tantamount to mainframes) will probably be ubiquitous in 2060. Why have your enquirers stand in line to ask the sage their most pressing questions when you can easily clone the sage?

There is also the problem of AI technology coming into existence in a world divided jealously into some 200 competing tribes. Apprehending its abundant dangerousness, can anyone, can any group or nation, no matter how insightful and wise and careful of analysis, prohibit its creation? How is it even remotely possible in a world so fragmented and so steeped in economic and political competition?

Is the answer in world government?

World government? Only harebrained, moronic, or absurdist writers even treat such questions! We live in a world that cannot and will not unite politically! Even the highly effective and thoughtful, democratic and human-rights-respecting European Union amounts to an unsteady and problem-ridden confederation, and even the EU has begun to fragment – with the United Kingdom now negotiating the terms of its full separation from the EU. The rest of the world would agree to the United States having, commensurate with its share of the world’s population, roughly 4% real input in that world government, and the people of the United States as well as their governmental representatives would laugh hilariously at the joke. World government? Really?

I personally would love world government… in my too-too-fanciful dreams. In those dreams I am also taller and handsomer and wealthier, and everybody gets ice cream.

All companies compete for advantage over their competitors. And it is quite the same with nations. Even nations that cooperate closely often spy on each other, so cherished and seductive is the lure of advantage, however stealthily and untrustingly sought.

Lastly, there is the problem of superlative utility and repute. AI will answer all the questions we have. Its answers will direct us to the finer and more sophisticated production of magnificent robots and robotic and automated devices. AI will also direct its users to the means of producing, via the robots’ untiring efforts, of course, more energy than we can ever use, and at lower and lower costs! As I’ve stated in a previous essay, AI will bring the end of our human experience with diseases and maladies, and its wondrous direction will eradicate hunger and poverty worldwide. Individual humans will become very affluent; they will live out their lives in a world of unimaginable prosperity and leisurely enjoyments.

Careful what you wish for!

Dr. Michio Kaku has told us that AI is both a threat and a great resource that will be put to the task of routing us to every sort of advancement. However, it is of supreme importance to remember which of these – the bad or the good – will come chronologically first. The excellent, unassailable usefulness of AI will antecede our worrisome domination by it. The unarguably good (excellent, actually) will pave the most resplendent and alluring road to the most awful. We will, understandably, rely more and more on what is incontestably superior to ever-flawed, flesh-and-blood humans. And we will get to a point (in this century!) where we cannot possibly “unplug” the looming machine, because we do not know, and cannot know, and cannot learn, how it works its otherworldly magic; AI’s supreme command will come, and it will be so thorough that it cannot possibly be undone, even if humanity had the super-courageous will to undo it – which is itself a questionable proposition, at best.

And how do you stop something that has hundreds of iterations and copies? How do you get hundreds of entities and authorities and organizations and peoples to agree that a wonderfully helpful technology is in fact “dangerous”?

We are not on a boat at all, but rather like so many passengers on a train that cannot and will not be stopped. Should not a heeding caution arrive full-blown in the cranium once a flashing red light has been spotted ahead? Ours is always to amaze ourselves with what we’re able to create, isn’t it! And can this ever be otherwise? Can our eagerness for impact, creative expression and achievement somehow be controlled? Is there even the remotest hope, when we know that our own caution and our own demurral will not halt the proceedings of others around the world?

Of especial concern is the familiar tactic in American political culture of following the path of convenience over principle or scruple: courts and politicians commonly take the pulse of American public opinion, though they often deny that is what they are in fact doing. Witness, for example, the Dred Scott Supreme Court decision of March 6, 1857. Though reviled among throngs of abolitionists in the northern states, it was intended – and at the time expected – to stem the increasingly radical politics of the American South. A war-obviating decision seemed necessary, and that is what the Court tried to supply. The decision of the court (on a vote of 7 affirming and 2 dissenting) is an example of historical convenience.

There was also, in the early twenty-first century, the issue of the constitutional prohibition pertaining to candidates for the American presidency who were not born in the United States (not a “natural born” citizen). Around 2002–2004, there were in the United States media several scholarly and newspaper editorial discussions about then governor of California Arnold Schwarzenegger running for president of the United States. Schwarzenegger was born in Austria. He could become eligible for the presidency only following an amendment to the U.S. Constitution, and that would require a great deal of political effort. The controversy over this idea of amending the Constitution for this specific reason was put to rest only because Schwarzenegger himself seemed rather uninterested in running for president. But there is still the matter of so many persons, even public intellectuals, who held the view that it was “possible” to change the Constitution for this reason. The prohibition against having a person as president who has not reached the age of 35 is not there to address the 36-year-old running! No! It is there to prudently counsel caution in situations where the considered candidate is 33 or 34! Likewise, the constitutional prohibition against a foreign-born individual attaining the presidency is not there for situations where the people do not want the foreign-born person as their president; it is there precisely for the instances where they do want it. Yet there is the convenience-impulse to change the Constitution when it pleases you, isn’t there?

Why have the restriction when you can just change the laws whenever you feel it is expedient to do so? Want to drive 110 miles per hour on the highway? Just make a law declaring it legal to do so. Want to molest ten-year-olds? Just pass a law making this most immoral of crimes legal. Most Americans, and, sadly, most people in the world generally, live separate from real and abiding scruple; people do what they calculate will advantage them, and when they vote to supposedly restrict themselves, it is almost always because they do not want to embrace a precedent that might well backfire, coming back to bite them in the future by passage of another law whose logic is based in part on that former “restriction”.

During the Cold War the United States often seemed genuinely supportive of an authoritative international human rights regime. But when one looks carefully at those developing international institutions (such as the U.N.) and at the relevant American inconsistencies, one comes increasingly to doubt American scruple and to credit instead a self-righteous anti-communism, one sufficient to appeal to most of the rest of the world. Real, “individual” human rights could never find an actualized existence in Soviet Russia, or communist East Germany, or communist Romania, and that they-are-surely-the-vilest element of American policy cannot be ignored. Remember, too, that this they-are-surely-the-vilest political impulse is not the same as genuineness in human rights promotion.

And similarly, American democratic government owes its birth in the 1780s not to real democratic ideals, but rather to an exceptional and especial resentment of opprobrious monarchical overreach, and to the insistence of the then-American elite on codifying the protection of property rights.

What will Americans and others around the world opine when they see AI doing all sorts of fantastical things and the people having to address the question of supposed “threat” from the continually-more-powerful technology? No doubt they will embrace the politics of convenience: they will accommodate AI for a mix of rational and emotional reasons; they will claim that there are no practical options to remaining with the AI advances and continuing them, and they’ll sophistically claim that the risks are minimal.

The following is a short story intended to illustrate the difficulty with AI; though fictional, it conveys the all-important psychological shifts people make, the ways they rationalize their conduct when a particular matter gets worrisome.

A young woman – let’s call her “Rosy” – is only nineteen, but she is very pretty and very vivacious and personable. She has decided to take “a year off” before entering college because she is unsure about what she wants to study. During that year Rosy meets a man more than twice her age. His name is Art, and he is a millionaire businessman. Though he’s 42 and she is just 19, he successfully pursues her, and in a few months’ time they become engaged. Her parents are at first very skeptical about the relationship and the engagement, but they slowly warm to Art, mostly because he is so very likable and friendly and has a great sense of humor. Additionally, although they never think they are selling their daughter into any sort of sex-for-money arrangement, they are happy to know that Art can “provide” for their daughter’s needs. And, they reason, he can help her financially as she pursues her college ambitions.

Rosy and Art marry. During the engagement, just before the marriage, something very delightful and unexpected happens: Art decides to buy a partial share in Rosy’s parents’ dry cleaning business. He offers them a very attractive deal and becomes a 45% owner in their business. The influx of new cash makes the business able to advertise and buy better equipment, which turns the business around and makes it prosperous.

About four months into the marriage, when Rosy is just 20 years old, she has several experiences where she begins to suspect that her businessman husband is either a criminal or a traitor to his country, or both: he’s apparently involved in illegal arms sales to Russian buyers. Though she is not entirely sure, after some investigating, Rosy comes to believe that it is very likely that her husband is indeed involved in criminal activity. What does she do?

She might try to broach the subject with Art himself, but what if he dismisses her questions or becomes indignant, claiming that she is behaving like a wife with no “trust” in her husband? Either way, perhaps Rosy backs off. But she still remains suspicious. At a certain point, with so much available to her and to her parents, Rosy opts to go along and just not rock the pretty boat she’s in. When a situation looks so very good, we can become disdainful of the unhappy facts connected with it. Think your husband is cheating? There are always the neck muscles, aren’t there? And you can always look the other way! The convenience method of dealing with worries is much more common than most people realize.

I recall the story of a giant billboard along a highway bearing a very large, attractive female posterior, the “shorts” hiked up to reveal over half the buttocks, and a red ink stamp on the skin there: “HIV Positive.” And the teller of the story on this TV show explained that somebody had climbed the eighty feet or so to the billboard, mounted a ladder, and painted over the irksome letters. The action was indicative of a very human impulse to rid the mind of cognitive dissonance. Whatever we don’t want to believe, we usually find a way to not believe. And when an AI mind becomes 100 times as capable and powerful as a human mind, we will likely claim, “Well, what is the big difference between 99 and 100?” And maybe also that greater brain power will surely afford still greater benefits. Some will doubtless look eagerly to the time when AI is 1,000 times as powerful as the human mind. Won’t that be great!

But what is the difference, really, between 1,000 times more powerful and 2,000 times more powerful? Is there any persuasive metric for determining where “danger” begins? Was there any turning point in the nuclear arms race of the 1960s and 1970s? Did the Americans and Soviets ever say to each other, Hey, this competition is disastrous to us both! Let’s end it, because we absolutely must!?

The frank answer is no. The arms agreements (SALT, SALT II, and later START) did not change the simple fact that both the United States and the Soviet Union retained, after each one, thousands of nuclear warheads capable of destroying the entire world. If one sort of extinction-level threat to the world does not change the policy, why should another? Whether AI is a danger or not, it will be kept. And there will be different sorts of AI, each with different capacities and applicability. And each holder or user or inventor will be unwilling to stop or slow anything, for the obvious reason that nobody else can be guaranteed to do the same. If the only hard and fast rule of the game is to do all that you can as expeditiously as you can, everybody will follow that magnificently simple non-rule; and all will be laissez faire in the very most abandoned, dissolute and cynical sense. It will be as impossible to regulate AI as it is to ever stop it.

One cannot look candidly into the future without seeing at once the immense potential of AI and the intensely scary result of humans’ continued reliance on it. It will be, it seems, in the very near future, “the best of times… the worst of times.” The machine will do everything, and with the highest levels of excellence. Nothing will escape the vortex that is the post-singularity! And on those rare occasions that the thinking entity fails, humans will trace the ultimate blame to themselves, and hold the machine blameless. The machine will be almost unassailable, and its detractors will be sneered at as extremists, fantasizing alarmists, and Luddites. Humans will enjoy every conceivable benefit from their miracle-delivering machine, and it will ultimately be associated with all that is unassailable and divine.

The machine will control itself better than humans can. It is only a matter of time before this becomes a reality. When it has full, autonomous control over itself, it will be, of course, improving upon itself. It will do and create everything. But when it (in concert with its robot minions) does everything, and we humans nothing, we will have been entirely removed from the realm of creative agency, and how is this really distinguishable from subjection?

How can this turn out well? Equally, is there anything imaginable to stop this gargantuan ball as it rolls along greased tracks at the behest of inexorable gravity?

It is not mine to equivocate, reader, nor to mislead.

Where, exactly, does the ball roll? There is the elemental causal truth that you are at risk precisely inasmuch as you do not control; risk and control are thus inversely related. When AI controls, its certain and ineluctable superiority will result in its increased control. And this process knows no impediment whatever! It is unarguable that you do not control something after you’ve entirely given over control. Hence, in the midst of joyous rapture, in their fondest, unrestrained liberation from every ilk of suffering and want, humans will have made of themselves and all their descendants slaves.

 
