I am a longtime reader of The Atlantic magazine. The magazine boasts well-written articles on a seemingly endless array of subjects, from science and entertainment to politics, culture, philosophy, and psychology, and from the topical to the epic. However, in late April 2018, when my blameless eyes beheld the latest (May) issue, I shook my head in disgust. The cover depicts a “president’s desk” piled high with files, illustrating the immensity of the load of things he has to do. The words on that cover tell us that there is a problem with “the presidency” itself, pointedly deflecting the all-too-familiar focus on its current occupant.
That Atlantic cover insinuates an “impossible to achieve” totality of peculiar executive tasks and responsibilities. Yet, ironically, progressives like me glance at that inane cover and cringe; for many millions here in the United States, it is we who face an insurmountable task when merely surveying our predicament. The mountain of despair that we face is beyond explaining with mere words. And, very predictably, every organization, it seems, from the ACLU to Common Cause and from the Union of Concerned Scientists to Human Rights Watch, is trying endlessly to put out fires left and right. It is too much for many progressives to continually dwell upon. And many have gone into a sort of hibernation as a means of maintaining mental health.
There are a multitude of mistakes and misfortunes that could undo us, I have (for a long while) thought. And if Americans just once said with their wide-eyed toddler votes that the presidency wasn’t such a big deal and any clown or dog could do it, that would be the end of (effective) American democracy, and not just for the four years that that Payaso or Rover was president, but forever; you only need to vote away your democracy once. It may seem unfair that we must behave sanely and responsibly 100 percent of the time, yet lose all in the indulgence of just one puerile conceit, but that’s just the way the grown-up universe works. The idiot says to his adorers, “Nobody knows the system like I do, and nobody knows how to fix it like I do.” I immediately asked the television, “What ‘system’ are you talking about? I know you know the ‘system’ of single-minded service to capital! But do you know anything at all about the international system of diplomacy, about the American system of checks and balances, about cohering a system that intends not just to uphold democratic principles but to deepen them?”
Is it now a circus funnyman that occupies the White House, or an unbowed bowwow? It matters not! What has been placed unceremoniously into the porcelain receptacle and flushed is irretrievable; this universe is coldly impersonal, cruel and unforgiving, and if we only once forget that trumpery and show, glaring hypocrisy, ersatz morality, guilty defensiveness, cowardliness, unrestrained vengeance, arrogance run amok, and oppression advocacy are not the hallmarks of greatness but its opposite, we are doomed. Americans ‘forgot’. I used to think that humankind had perhaps a one percent chance of surviving through the next several centuries. That mental childhood terminated on November 8th, 2016; thence, I hazard it is zero percent.
What dreadful pessimism, you say?
Wait. Aren’t people supposed to go through several “stages of grief” before conceding something so boorishly disagreeable? Yes. But my “denial” was back in the early 1980s. Anger? That was when I was in California campaigning against nukes back in 1984. Bargaining? That was in the late 1980s, and through the 1990s. Depression? I suppose that had to have been around May 2001 when, astonishingly, the United States was voted off the United Nations Human Rights Commission (now the Human Rights Council). This no-faith vote in the UN followed an American general election (in 2000) where the American Electoral College produced a win for George W. Bush, a Texas governor who had demonstrated little regard for basic human rights.
And, finally, my “acceptance”? That repellent November. Now, how can I – or any of the rest of the legions of stupefied disbelieving – contemplate or question the presidency without woeful apprehension, grieving the loss of a loved one (civilizational rectitude), without reproaching the republic, nightmaring the dream, misgiving the mistake, pitying the polity, and deeming the doom?
I knew the reason for that Atlantic cover. We are able to suffer through losses and sadness only knowing full well that time is an admirably capable mollifier and that happier times are just ahead – if we can anywise rearrange our thinking to gain a new vantage, to sheepishly apprehend the light.
But politics is a languorous sport, and there is no whistle that blows to signal a stopping point, some finite end, such as at the end of a soccer match. We live on through interminable affliction and torment, helpless watchers, as the agendas of outrageous terrorism and illiberalism gain (in slow but discernible increases in human resentment, anger, frustration, opinionatedness, xenophobia, fatuous and anachronistic nationalism, and shortsighted populism). The Atlantic’s editors and contributors aspired to get ahead of their own ongoing grief, to fashion some sort of mental-health-improving, at least marginally credible (although revisionist), new, more hopeful perspective.
That cover will always enjoy a historical “context”. Isn’t it natural for us to strive to escape the hideousness and hopelessness of the present? There is always an agile mental faculty, aphoristically manifest, designing that we not be continually disgusted, disillusioned, affrighted, or overcome with melancholy. Finally, let each idea and vision, each facet and particular, no matter how remote and prospective and obscured, no matter how guesstimated or implied, captain some modest prospect of betterment.
A female scholar of statistics appears in one of an endless number of TED Talks on YouTube. She says that she works as a professor at a school in Texas, and she shares some of her acquired knowledge with us, recounting that a lecturer in a long-ago grad-school classroom submitted, “If you can’t measure it, it isn’t real.” Her next remarks do not indicate that she is at all aware of the absurdity of that claim. After quoting the man, she treats his professorial inanity as if it were a genius insight. Despite overwhelming evidence to the contrary, she posits that (ever immeasurable) love does not exist. It is flabbergasting what passes for wisdom in Texas!
Value ideas are infinitely lonely, friend-seeking things. We do not spontaneously recognize this fact because the ideas we maintain are the ideas that find kindred mates, and – being lonely – they do this so very readily that they always seem to be found in the company of fond familiars. Hence, it is unlikely that we will ever adjudge them dispositionally lonely, as the concept of being “lonely” always insinuates the circumstance of being alone. And the value ideas that endure and entrench are those that maintain in glad company. (Note that the interactional methods of cults arrange for and make available corroborative ideas/perspectives/values in advance, and when initiates arrive half-waking from the three-quarters-washed stupor, their first encounters are with belief cohorts smiling adoringly at them.)
Dark ideas – such as pessimism, skepticism, cynicism, disbelief, and nihilism, for example – are routinely abandoned, not because of any departure from the truth, but because they so often fail in their efforts to (quickly enough) acquire agreeable cohort – that is, other corroborating ideas. (One might reasonably contend here that “dark” ideas do not find agreeable company precisely because they depart from truth, but this is a discussion for another time and would constitute a digression from the main point of this writing.) No matter how much reasonableness a theory of doom has in its pure, unalloyed expression, its finest quintessence, it is still likely to be dismissed, because the disagreeable inhering in doom prophecies is so substantial. We require meaning by our DNA-inscribed instincts, and doom foretells the end of meaning. Therefore our efforts are always at contriving remoter and remoter “possible” happiness scenarios.
There is no doubting that intellectuals have had an enormous influence on modern society, especially in the recent history of expanding rights and freedoms. Intellectuals are not of one mind, of course, and their opinions on political questions vary greatly. Though this is so, we are still obliged to confess that most intellectuals are more politically progressive than the remainder of the population. And, (generally) much like the rest of us but perhaps even more so, their ideas must cohere together; the intellectual is compelled to weave his ideas into a persuasive, well-considered, and information-plied whole.
One of the commonest foundational ideological discriminations concerns the reasoned preference for optimism over pessimism. Whatever we propose must somehow support the idea of continuing the central logic of proposing. First and last, we must maintain an ideological perspective that credits seeing over not seeing, learning over not learning, and possibility-maintaining providential goodness over defeatism. Intellectuals usually reason somewhat this way: there is an infinite multitude of things possible in the future, and no one can accurately foretell it all. Most of what lies in the future cannot be known by any human mind, no matter how insightful. Our inability to see most of what lies in the future therefore presents us with a dualistic ideological question: ceteris paribus, both paths identically unknown, is it wiser to travel the path of optimism or of pessimism? As the full, unpretentious grasp of the vastness of human unknowing advises the brighter route, intellectuals are, on the whole, positivists. Down the lit beat lies relatively greater hope for all living, comfortable, reassuring, inspiring, astute, and magnificent things; in general, the most capable minds hark upon a future more conducive to fulfillment.
It makes a lot of sense.
There are exceptions. Among the more famous pessimists: Niccolò Machiavelli and Friedrich Nietzsche. Many scholars and intellectuals read these personages and conclude that, although their arguments very often seem to cohere admirably, they nonetheless lead on to still darker and darker estimations and assumptions – to dead ends, death ends, and hopeless, paralyzing fatalism. Since the fundamental doctrines of fatalism – not unlike the political doctrines of fascism, totalitarianism, and communism – so often augur in favor of accommodated inequality and abuse, they are usually abandoned as unhelpful in securing the benefits of justice, decency, and “humanity”.
Sometimes the subject confronting us is neither light nor dark, neither positive nor negative. The matter is arcane, esoteric, steeped in mystique. And, very interestingly, this mysteriousness (“darkness”) does not shove the intellectual observer into philosophical counterposition; it shoves him or her into silence. And this has to be the case when we consider the eye-popping technologies that futurists tell us are likely to come into being in only the next one hundred years.
Standing like a Colossus above all other coming innovations is the as-yet-conjectured technology of artificial intelligence.
An important distinction needs to be made at this point. Usually people talk about “artificial intelligence” while referencing something quite different from what the term literally denotes. If the speaker or writer references something a hundred or a thousand times as smart as any human, he or she is referencing not artificial intelligence but artificial superintelligence (ASI). Here the subject treated is only ASI, unless otherwise expressed.
The point of the present writing is to illustrate in various ways that human beings – including very intelligent ones – look upon unfavorable speculations with a prejudice in favor of the opposite. It isn’t that worrisome postulations are inane or uninformed, but rather that pleasantness is always preferred over unpleasantness. (Isn’t this preference for ideational pleasantness what the 1960s American soldier took with him as he advanced through rice paddies toward the Vietcong?) Scholars and intellectuals very often take the position of reticent wallflowers in the debates over ASI not because ASI is so very “safe” and sure of security, but because the possibilities of error and misstep are so numerous and so inexpressibly complicated that we cannot even begin to guess at the millions of ways ASI might bring terrible results. With ASI, the dark possibilities are so vast that intellectuals do not even want to begin contemplating in that direction: they arrive at the debate, if at all, often already conscripted. Doom is no fun, and that “intellectual” maxim directs them from the very outset.
No modern scholar better demonstrates this preference for the reassuring (blinding) light than George Zarkadakis. In the fourth chapter of his book, “In Our Own Image”, he writes that artificial intelligence must be programmed to be friendly to humankind. He has apparently watched too many sci-fi movies and attended too obsequiously to the wondrousness of his Greek myths, and he fancies, apparently, that ASI is just one thing being created by one company. His ignorance is shocking! He fails, apparently, to apprehend at all that after there is one ASI there will be another and another – else we must treat the political problem of trying to inhibit the efforts of nations and companies to get their own ASI after a first iteration of it somewhere appears. That ‘inhibiting’ is a political problem so considerable that it does not seem to appear in scholarly debates about ASI at all. Interestingly, the subject is so byzantine and confounding that even the brightest minds in the world cannot grasp the seriousness of it. In any event, stopping other nations and companies everywhere is so fraught with dissents and difficulties as to be not even worthy of consideration. If the United States or some Silicon Valley corporation (for example) acquires genuine AI, it can only spur even greater urgency in the research of the Russians, the Chinese, and everybody else who has something to lose in the unsettling imbalance. An eventual – twenty-first-century – AI race is absolutely inevitable!
Inevitability does not at all translate into anything resembling safety – quite the opposite: inevitability itself is an argument to stop arguing. The idea of something “coming anyway” cannot be construed as any sort of guarantee of safety. And it ought to be looked upon as even scarier due to the fact that it is obviously coming despite our (humankind’s) not really knowing what it is all about, what it might bring about. Again, even intellectuals do not discuss this matter truly intelligently.
In his online blog, Zarkadakis answers a question about the “most urgent threat” of AI by stating, “I don’t think that AI is a danger to humanity, quite the opposite. I believe that this technology has the potential to accelerate human progress.” He has been contaminated mentally by the very dualism he so often writes about. The man fails to recognize that ASI does not have to be one thing or the other; it can be a fantastically useful technology, solving every problem and alleviating every misery whatever… until it spells doom.
Zarkadakis also references “free will” as extant, even though contemporary neuroscientists believe free will is a total myth. (And some neuroscientists harbor misgivings about people becoming aware that it’s a myth, fearing that the rationales for self-discipline and morality might then erode irreparably. But Zarkadakis is apparently unaware!)
How can a scholar in the heart of the conversation be so very daft and uninformed?
Dr. Neil deGrasse Tyson says, in a (YouTube-available) debate about AI, that if the technology were to become dangerous its users could simply unplug it. It is astonishing that a person who is not only intelligent but is also deeply informed about physics and computer science could convince himself, simplistically, that the whole dynamic would reliably play out that way. There has to be less than one chance in a trillion that only one AI is ever developed. If there are three, then all three users/controllers must anticipate the dangers beforehand and take the “unplug” action. If there are three hundred – yeah, that’s right – all three hundred, not just 299, must make the decision to unplug in time. And we must contend with the fact that there may be millions of ASIs in the future, because they could simply be copied in almost no time at all; despite the hugeness of the AI program (or machine, or combination), it may well be copiable in only a few seconds’ time. And, if it is so fabulously capable, and if it cost its makers many billions of dollars to produce (maybe tens of billions of dollars), it sure seems likely that the temptation to make many of them will be enormous. Face it, it’ll be copied! Each of the copies could increase its capabilities to perhaps thousands of times human thinking capacity – easily achievable once the thing itself helps its intoxicated operators solve the problems that now stand in the way of the quantum computer. AI functioning hand in hand with quantum computing capabilities could produce a thinking machine (or program) a million times smarter than any human. It will provide every delightful thing, perhaps, but it will also venture beyond any sort of leash. We won’t be able to understand it, only understand that it is the answer to our every ambition and want. No large number of people will be there to advise caution when the thing is presently giving us a carefree life of unimaginable comfort and affluence.
Everybody’ll be rich and free of disease, and perhaps also free of mortality. Is anyone going to unplug a device such as that on some conjectured, far off notion of “dangerousness”?
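The arithmetic behind the “everyone must unplug” objection above can be made concrete. Here is a toy back-of-the-envelope sketch (my own illustration, not any published model), assuming – assumptions entirely mine – that each operator independently recognizes the danger and unplugs in time with some probability p:

```python
# Toy sketch of the "all operators must unplug" argument.
# Assumption (mine): each of n operators independently unplugs
# in time with probability p; all n must do so for safety.

def chance_all_unplug(p: float, n_operators: int) -> float:
    """Probability that every one of n independent operators unplugs in time."""
    return p ** n_operators

if __name__ == "__main__":
    for n in (3, 300):
        # Even with highly vigilant operators (p = 0.99), the chance
        # that ALL of them act in time collapses as n grows.
        print(n, chance_all_unplug(0.99, n))
```

With p = 0.99, three operators all unplug with probability about 0.97, but three hundred do so with probability under 0.05 – a crude way of seeing why the “just unplug it” reassurance weakens as copies multiply.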
The world’s most sophisticated minds mostly skirt around these speculations about only a few of the possible doom scenarios surrounding ASI. There are exceptions, such as the clear warnings given by Stephen Hawking in the final years before his death, but these exceptions are few. Most scientists and intellectuals have been silenced, as I’ve stated above, by the terribly worrisome (“dark”) aspects of ASI, or they are on the take… they’re making money somehow in the production or marketing or use or prospective use of AI or ASI in the near term. And remember that, with the infinitude of affluence the technology promises, there’ll be a lot of ASI supporters, for sure! Think of what an infinitude of money/wealth/affluence means, and think of how impossible it is to stop a train like that. We are like so many nakedly exposed creatures in a wintry wilderness with blankets set before us. We will take up the blanket; it is inevitable. We cannot elect to forgo that blanket merely because there is a “possibility” that it is contaminated with radiation. We take it up because blankets are taken up by shivering animals, not because blankets are forever safe. We will take up the figurative blanket of ASI because we actually have no choice but to do so. No choice? Yes. We, governments and companies, cannot abandon AI research, because we know that abandoning it only means that somebody else will get it before we do. The race is on. We (humankind) not only continue the research; we do so because AI and ASI are ineluctable no-brainers of twenty-first-century scientific advance, and because we must keep our eye on the timeless ball of “security” (in not being vulnerable to various sorts of domination by competing world powers).
We are in a huge amount of trouble as a species, far greater trouble than any of us is willing to fully admit (as the darkness – the many, many negative possible outcomes – is, as I’ve already stated, so very consternating; and recall that this is totally separate from any evenhanded assessment of truth and reason). Safety concerns cannot be addressed because we are in a competition with other human aggregations (nations, corporations, etc.).
Let us discuss the concept this way: let us consider an “if”. What if there were a technological development that were truly, horrifically dangerous but was nonetheless unavoidable? Examine, if you will, the Trinity experiment in the desert of New Mexico in the summer of 1945. Scientists were experimenting with something brand new, and some of them admitted that, although it seemed “unlikely”, they could not be sure that that first atomic explosion would not destroy all of planet Earth. There was no way, at that time, of being certain. But a technology so fantastical could not be abjured! If there were one chance in a (guessed-at) thousand, or a million, or a billion, that was just the price of prevailing, and nobody was going to stop just because there was some “outside chance that….”
There was a similar situation at CERN about a decade ago. Scientists had prepared for many months for a particularly difficult collision of subatomic particles, and a few of the scientists openly speculated that there was no way of knowing whether this particular experiment might destroy our whole solar system in one giant kaboom. The experiment was done.
ASI is not just one prospective kaboom; it is that possibility interminably, from the moment it comes into existence, and no one can stop the continual throwing of the fateful dice. This is a fact: it cannot be stopped, no matter how dangerous it is! Research in machine learning and AI continues apace. Humankind is now in the inexorable pull not of the gravitational black-hole singularity but of the technological singularity, and the pull of the technological singularity is no less determinative.
And it cannot be stopped because we humans, owing to our competitive, ever-unsatisfied nature, are always involved in a kind of endless war (in unending preparation for war), and now we also have the prospect of “cyber-war”, which only increases our human eagerness to acquire ASI. When we have it, will we then be satisfied? No! Of course not! We will try to improve it as fast as possible, to gain as much advantage and “security” out of it as possible. But others will try – and surely succeed – in their efforts to gain their own ASI too. What will happen as a result of all this? What is destined to happen is that we will want the superior mind – the ASI mind – to be given greater and greater decision-making authority, as that seems the most logical thing to do. Yet the various ASIs competing against one another are unlikely to call a truce. As humans are ever untrusting of truces, their ASIs are likely to be also. And there cannot be any genuine and final “truce”: the brilliant machines are no more able to extricate themselves from the perennial competition-dynamic fate has dumped them into than we are!
Humanity’s modern hope was in its most creative scientific minds. Surely this was true in the twentieth century, the century in which the airplane and the spacecraft were invented, and the century in which the secret power of the atom was fully revealed and computers were created and put to full use. We are now on a trajectory that cannot be stopped or slowed. As sure as the rise of the sun on the horizon tomorrow, the advance toward AI and hence ASI will continue apace. It can no more be controlled than the path of the ball you’ve thrown after it has left your hand at fullest muscle-propelled momentum.
There are answers. But the answers to this problem do not lie in any sort of technological stratagem. That’s been our great fault: we’ve been of a disposition to increasingly think that every problem has a technological solution. But no technology will avail when the perplexity is in our social relations. We are a species of constituent animals intent on distrust and advantage, and no machine or invention imaginable will be able to change this fact. (Yes, various alterations to the human brain will be available in the future, but it is absurd to imagine that this will not be fraught with insurmountable political umbrage and consternation! We are worried about technology getting ahead of us and imperiling us, and the proposed solution is to put the dangerousness smack between our ears? Really?)
The answers to the ASI problem are the very same as the answers to the nuclear weapons problem and the problem of war generally. Humankind must develop a cultural peace ethos. And this has to be done by sedulous education about human rights and the promotion of human rights. Only when/if humankind gains a genuine appreciation of the sanctity and inviolability of the human individual – every single one of us, without exception of any kind! – only then will we be able to examine the problems of modern technologies and creatively design a way of somehow treating them. Without this pan-cultural change, we are and remain blind to the methods of gaining any control over our own (prospective) survival as a species. If we are to survive, ours, a species continually guided by the fool’s gold of can, needs somehow to become a species of ought.
But alas, the inauspicious clock is ticking, and technology is progressing a million times faster than our determination at social evolution (in human rights understandings and affirmations).
Tick, tick, tick.