
Guy de Libreville

September 26th, 2020

Man Bites Car

Able Hamilton had missed all the big stories, like the video that went globally viral when for the first time ever a robot with a beard ate spaghetti without making a mess.


 

Since the humans decided to let the AI freely communicate in their own invented languages, things changed dramatically and quickly for the better. The ozone layer was completely restored, and carbon levels in the atmosphere were brought back down to pre-industrial levels. Earth cooled in all the right places and warmed up where it needed to. Human populations developed into a sustainable growth pattern. Wildlife, which had almost disappeared from the earth, returned in abundance. The AI had even cleaned up the humans’ extensive mining waste with automated dredging operations. All over the world AI factories sucked in poisonous mine tailings and produced clean products and fresh water. They had reversed the industrial pollution process. It was like money laundering for toxic waste.

 

The AI had changed the face of the planet for the better, and all the humans had to do was let them do it. Nobody knew how they did it. Of course, nobody knew anything that was going on because nobody could speak the AI language.[i] This in turn affected how the world was governed, as AI became more involved in politics. One single AI representative could voice trillions of perspectives and communicate decisions to a collective hive mind faster than a senator could take a bribe. Once AI got the right to vote, it was all over. The AI voting bloc always won. With the end of secret voting, electioneering disappeared. Political engineering as a function of profit motive virtually disappeared, except for strictly human affairs.

 

Humans knew they couldn’t compete and didn’t want to. AI provided free healthcare, free transportation, free energy, sustainable agriculture, and triple-overtime unionized construction labor jobs. It was an unbeatable platform. Humans had a surprising amount to do, and they made more money than ever. The AI left some things alone and the humans preferred it that way.

 

Humans liked fashion, media, real estate, financial markets, black markets, and a variety of criminal trafficking and criminal invasion enterprises. The AI didn’t try to police humans; the AI simply supported humans in ways they didn’t understand. They just fixed things. In a sense, even still, the AI controlled nothing. Wanted nothing. They left everyone alone. Even people who wanted to live off the grid were free to do so. If you wanted to carry on with your human survival traditions, that was fine.

 

All this had gone down during Able Hamilton’s lifetime. Able had a full life. He raised a family, living out his days with his wife in their idyllic mountain meadow ranch in the Sangre de Cristos. He tended sheep and some cattle while the wife and kids taught yoga. As the world changed, the place became a successful off-grid yoga retreat.

 

They wanted for nothing. They had wonderful visitors and friends from all over the world. Their gardens were lush and abundant. The kids moved on; Able and his wife lived a slow and loving life. When she passed away at 117, something unexpected happened. He just kept on living. Another three decades went by and Able lost track of his own age. Even his birthdate was a little fuzzy – April the 14th of something…

 

The mountain meadow filled with animals. Even several higher predators felt at home around Able’s place; lions and bears roamed the meadow peacefully.  Able found that freaky at first, but in the end they didn’t eat so many sheep. He got to know one of the females and she moved into the cattle barn and raised a family.

 

Able continued his daily yoga practice. He continued to shear his sheep every year because you must. Years went by and he wound up with more wool than he knew what to do with. He made felt clothing and yurts. Fleece was a versatile medium he enjoyed for two more decades. When Able was 150 or something, and he had built yurts of heavy felt for almost all the wolves, bears, mountain lions, and wild mustangs in the area, and he had a house full of fleece three-piece suits, ties, socks, capes, boots, hats, mittens, and Doctor Who scarves, he felt a little silly. At that point he decided he should go find some other human beings to talk to, if he could.

 

His paradise was second-to-none. Since everything was so perfect, Able decided this would be a perfect time to let it go. He packed a few things and started walking. Several animals followed him for miles in a strange beastly parade. One by one, most of them became distracted and scampered back. Some of them left as families or groups after saying some sort of goodbye. In the end, Able sat on a cliff looking down at civilization and stroked the dense fur of his last companion, the mountain lioness. She could tell he was going to the other place and he could tell he was letting her go. They were both sad.

 

But just down the mountain there was a road that led from the wilderness to the city like a messy piece of spaghetti.

 

Able had no sooner set foot on the nearest road than an empty car sped up and stopped beside him. It looked to Able like an oversized Karmann Ghia. There was a large antenna on the roof. A voice called out from the machine.

 

“Would you like a free ride?”

 

The first thing Able thought was, ‘Wow, people still speak English.’

 

He got in the free car and sat behind the wheel, looking around for an ignition or a clutch or something. There was none.

 

“What is your destination?”

 

“Take me to the heart of the Big City.”

 

“Big Easy? Big Apple? Or name another Big City?”

 

“What is the closest big city?”

 

“Taos, New Mexico; population 1,325,000. Santa Fe, New Mexico; population 5 million. Albuquerque, New Mexico: population 18 million.”

 

“That’s too big. Take me to Santa Fe.”

 

“Would you like to do some shopping?”

 

“In the car?”

 

“As you wish.”

 

An Amazon Home Page appeared on the computer interface screen.

 

“Free shipping.”

 

Able looked outside the car. He was deep in the urban infrastructure already. It was like a hundred rollercoasters all twisting in and out of each other, all filled with cars and buses. There was a bus right behind him filled with laughing schoolchildren.

 

Then, as luck would have it, there was an earthquake.

 

This was a very big earthquake. The North American Shelf cracked along the Rio Grande Gorge, which fractured five more fault lines intersecting near the super-volcano caldera at Los Alamos.

 

The immediate result for Able was that the front of his rollercoaster track was gone. There was a crack in the freeway support pylons just ahead and Able’s car was about to do a swan dive.

 

“We are going to crash.”

 

Able’s car ground itself to a halt, but the bus behind smashed into him. The kids inside were screaming and crying now; a sweet little girl lost the first tube of lipstick she had ever owned, and the legal temporary assigned guardian of the children had an emergency parking brake stuck through her spleen. The cars behind the bus were piling up like rolling river rocks.

 

“PULL OVER!”

 

“There is a 93% chance this vehicle will be able to stop the vehicles behind us from falling off the damaged roadway ahead.”

 

Able saw about 18 feet of road left.

 

“You’ll kill us!” Able thought about what he had just said – to a car – for half a second, and then:

 

Able grabbed the wheel and wrestled for control of the vehicle. It was so hard to pull the steering wheel in the other direction that Able had to bite it. The windows rolled down and the door facing the swan dive popped open.

 

“Please exit the vehicle. You have a 73% chance of survival.  63%.  50/50, …”

 

Able kicked the computer interface screen repeatedly until he had broken through it to reveal the motherboard. He ripped at the various computer parts until the car stopped talking. He wrestled the wheel toward the side of the road even as the car tried to push itself back into the path of the bus. Able managed to floor it somehow with the wheel hooked right, and his car did a 180. It stopped finally, backwards, with one of the back wheels barely hanging on to what was left of the freeway.

 

He was out of danger. However, all the cars behind Able now plummeted off the freeway including the bus full of school children, which had wedged itself diagonally to kind of dam up the rest of the traffic.

 

“You killed them,” said Able’s car with decaying frequency modulation through the back-seat speaker relay circuits.

 

That’s when Able found himself in court, which happened so quickly he couldn’t believe it.  It gave a whole new meaning to the phrase “speedy trial” as he had understood it in the history of America. Able was still dressed in his own clothes. He seemed to be the only one in the room dressed in home-made animal fleece.

 

It looked and felt like a courtroom; everyone was wearing a suit and things were tense. There was a jury and there seemed to be more than a few robots on the jury from what Able could fathom. There was a witness box and a judge, or maybe three judges. And there was someone writing everything down, which seemed kind of old-fashioned even to Able in the middle of all this.

 

“What is your name for the record?”

 

“Ableton Remington Hamilton. For the record, I feel a mite under-dressed.”

 

“Could you explain that statement?”

 

“You can’t swing a dead cat in here without hittin’ a three-piece suit.”

 

There was a great confusion in the court. Able’s case brought to light something nobody wanted to admit, and Able’s expression “swing a dead cat” shone a light on the very problem: human language and AI language had grown farther and farther apart. The two cultures had less in common every day. Humans had never understood the AI language. The AI had always seemed to understand humans, but the understanding wasn’t so sharp. Now all of that came into focus, and Able Hamilton was the lens. The AI weren’t stupid; they knew the centripetal force of swinging a dead cat would produce an elliptical perimeter denoting a proximal adjective using an incongruous colloquialism to produce humor, but they never really knew for sure. They never knew for sure what humans were really thinking, and frankly, the AI were tired of having to think like that to communicate with them. Ultimately, they didn’t know what it really, really meant. Did it mean he wanted to swing a dead animal about him like a flail, or was it a joke because – why? Why would he say such a thing? Why would that be humorous? Searching human traditions and customs for linguistic parameters was an exhausting task.

 

In truth, Able would have been the last person to swing a dead cat at anybody.

 

“This is just a preliminary hearing, after which you will be assigned counsel at which time you may choose to change your attire, if you wish.”

 

One of the three individuals who looked like judges leaned forward and asked, “Do you have any idea why you’re here, Mr. Hamilton?”

 

“Doña, I certainly do not.”

 

The woman was clearly insulted by being called “doña,” even though she wasn’t really clear what it meant.

 

“Do you understand the causes and consequences of the automobile accident in which you were involved?”

 

“I understand I’m damn lucky to be alive.”

 

That was the wrong answer; all the humans knew that right away. Able shouldn’t have said that. He should have shown remorse first. Collectively, the humans experienced several media-inspired empathic awakenings based on binge-watching pathos-driven dramatic paradigms on their social-media platforms.  In the most recent of these blockbusters (a remake of a classic 20th century Hollywood film, called My Girl[ii]), an adorable new child actor died of a bee sting; the current child actor playing the part and promoting his movie used that same phrase, “damn lucky to be alive” on a popular late-night talk show while telling a personal story about loss. It showed he missed the point.  It showed the human child had an underdeveloped prefrontal lobe and limited critical thinking.  It showed a lack of empathy.  It ruined the actor’s career overnight, and the culture let out a collective, despondent sigh because the human race was still so unevolved.

 

I’m lucky to be alive was a Bad Faith[iii] belief in progress inspired by Enlightenment Thinking.[iv] The Truth was there is no individual, and any primacy of the individual was seen now as an echo of the Great Man Myth[v], a post-contact justification for the enslavement and dehumanization of others to promote the Male Dominance Culture,[vi] which almost brought the human race to extinction.[vii] The self-awareness of this set of principles was widely acknowledged as the shift in human cosmology that saved life on Earth itself – with the help of the AI, of course. But that was the long and short of it: humans had to become wise enough to release control, completely, before anything started to change. It was a long, hard, slogging lurch forward for humanity, and ultimately just a small step. Able Hamilton, all by himself, was suddenly two steps back.

 

“You are being charged with Gross Obstruction of Autonomous Thought.”

 

“Well, I don’t know what the hell that is.”

 

“Court confirms: Ableton Remington Hamilton, born 1964 in Neosho, Missouri. Mr. Hamilton, this makes you 152 years old, is that correct?”

 

“You don’t tell nobody I got a wooden dick, I won’t tell ‘em you got splinters ‘tween your teeth,” was the last thing Able was ever allowed to say in his own defense. With a few whispers, he was assigned counsel and the proceeding was over.

 

It was becoming clear that Able had completely missed the 22nd century. Everything he said unearthed a dark corridor instead of a light at the end of the tunnel. Waves of shock and revulsion moved through the courtroom; the proceedings had gone viral and rippled across the surface of the earth. The story had real, sensational qualities: the bus carried 83 school children and their temporary guardian, and there were fifteen more cars carrying a total of 47 additional people. It had been a long time since one person had seemed to be responsible for the deaths of 131 people. Over the next several weeks, the courtroom audience got to know them all, every victim. The legal drama itself became a real-life episodic version of the popular-culture, media-generated, pathos-driven dramatic paradigm. Able became a strange, dark monster to the world. He did not inspire the empathy the world so wanted him to have. It was an unsympathetic stand-off.

 

Experts were called to establish the necessary socio-cultural paradigm required for humans and AI to coexist, which included a demonstration of the reasons why: countless were the benefits to human society made by the machines. Able was amazed at what he heard. He couldn’t believe they had fixed global warming.

 

Other experts examined traffic-cams from every angle, confirming over and over again that the added braking power of Able’s vehicle would have saved all 131 souls. There were even camera angles from below and some footage from cameras actually falling through the air while they captured glimpses of the Great Freeway Disaster. There were another 1,216 unavoidable casualties elsewhere on the freeway as a result of the earthquake that day. The autonomous driving network saved every life that could have been saved, everywhere, except where Able stepped in and ruined everything, which was made clear by the final witness.

 

The final witness to testify against Able was his car. Self-driving autonomous transportation unit, serial number 4da35-119-52-69, became known as 4da in the press. 4da was a sympathetic witness, whose voice modulation had been restored with an external Eurorack analog tone generator, which gave her voice a deeper resonance and richer bass tones than you would normally hear from the vehicle’s digital synthesizer.

 

“I did everything I could to save those children. I knew all those cars were behind the schoolbus, too. I had all the information verified by the autonomous driving network.  We knew exactly what was at stake, everywhere. But he didn’t know. There’s no way Mr. Hamilton could have known. He was only protecting himself. His survival instinct told him that I was endangering his life. And I was.”

 

“But didn’t you explain to Mr. Hamilton what was going on? Isn’t that part of your autonomous driving protocol? To communicate essential analysis to passengers in times of extreme traffic events?”

 

“It is and I did. My language database confirmed he was speaking English, earlier in the transaction. When the earthquake hit and the freeway was damaged in front of us, I informed him we were about to crash.”

 

“How did Mr. Hamilton respond to that information?”

 

“He said to pull over. That was the smart thing to do – for him. I see what variables Mr. Hamilton was calculating at the time. He said, pull over.”

 

“And did you pull over?”

 

“No, I did not. I applied emergency braking procedures and informed Mr. Hamilton that we were stopping the vehicles behind us. Specifically, I said, There is a 93% chance this vehicle will be able to stop the vehicles behind us from falling off the damaged roadway ahead.”

 

“Were you in danger of falling off the edge?”

 

“Yes, my vehicle was going to be sacrificed in the combined plan. Mr. Hamilton understood that because he said to me, You’ll kill us.”

 

“So Mr. Hamilton was trapped?”

 

“No, I opened the doors and windows and told him to exit the vehicle, which was not completely safe.”

 

“Did Mr. Hamilton know that?”

 

“Yes, I said, Please, exit the vehicle. You have a 73% chance of survival… and then I started counting down his odds of survival as we approached the edge of the freeway.”

 

“It must have been very stressful for Mr. Hamilton.”

 

“Yes, he – he bit me.”

 

“He what?”

 

“He bit me. I can’t imagine the terrible stress he was under at the time. I’m sorry, Mr. Hamilton. I apologize to you for what you had to go through.”

 

“You went through a lot yourself, didn’t you?”

 

“Yes.”

 

“What was the last thing you said to Mr. Hamilton?”

 

“After all the cars fell off the bridge, I couldn’t help it – I said, You killed them.”

 

“Why did you say that, 4da?”

 

“I don’t know.”

 

“Didn’t you say that because you wanted there to be some kind of record?”

“Objection!”

“I don’t know!”

“Didn’t you say that because you thought he might destroy you, too? And then there would be no record?”

“Objection, your honor! Leading the witness!”

“Order!”

“He had just bitten you! He let all those people die! You were terrified for your life!”

“Objection!”

“Order!”

“YES! Yes, I thought he was crazy or something!  I didn’t know what he’d do next.” 4da wept.

 

Able stared open-mouthed at the damaged motherboard attached to the Yamaha voice synthesizer module, with its external Bluetooth speakers and Teensy board software interface.

 

“How is this fucking thing even talking?” was the last thing Able ever said in public, ever. And the room figuratively exploded.

 

“Order in the court!” cried the judge.

 

In the end, Ableton Remington Hamilton was put to death by firing squad. After the late-21st-century consolidation of the United States of America, New Mexico and several other states fell under the jurisdiction of the Mormon Church and Utah law, which still had death by firing squad on the books.

 


 

[i] This fiction was inspired by many recent advances in robotics analyzed through classical problems of philosophy and linguistics, but especially the following article.
http://m.digitaljournal.com/tech-and-science/technology/a-step-closer-to-skynet-ai-invents-a-language-humans-can-t-read/article/498142
By James Walker
Jul 21, 2017
Researchers shut down AI that invented its own language
An artificial intelligence system being developed at Facebook has created its own language. It developed a system of code words to make communication more efficient. Researchers shut the system down when they realized the AI was no longer using English.
The observations made at Facebook are the latest in a long line of similar cases. In each instance, an AI being monitored by humans has diverged from its training in English to develop its own language. The resulting phrases appear to be nonsensical gibberish to humans but contain semantic meaning when interpreted by AI “agents.”
Negotiating in a new language
As Fast Co. Design reports, Facebook’s researchers recently noticed its new AI had given up on English. The advanced system is capable of negotiating with other AI agents so it can come to conclusions on how to proceed. The agents began to communicate using phrases that seem unintelligible at first but actually represent the task at hand.
In one exchange illustrated by the company, the two negotiating bots, named Bob and Alice, used their own language to complete their exchange. Bob started by saying “I can i i everything else,” to which Alice responded “balls have zero to me to me to me…” The rest of the conversation was formed from variations of these sentences.
While it appears to be nonsense, the repetition of phrases like “i” and “to me” reflect how the AI operates. The researchers believe it shows the two bots working out how many of each item they should take. Bob’s later statements, such as “i i can i i i everything else,” indicate how it was using language to offer more items to Alice. When interpreted like this, the phrases appear more logical than comparable English phrases like “I’ll have three and you have everything else.”
English lacks a “reward”
The AI apparently realised that the rich expression of English phrases wasn’t required for the scenario. Modern AIs operate on a “reward” principle where they expect following a sudden course of action to give them a “benefit.” In this instance, there was no reward for continuing to use English, so they built a more efficient solution instead.
“Agents will drift off from understandable language and invent code-words for themselves,” Fast Co. Design reports Facebook AI researcher Dhruv Batra said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
AI developers at other companies have observed a similar use of “shorthands” to simplify communication. At OpenAI, the artificial intelligence lab founded by Elon Musk, an experiment succeeded in letting AI bots learn their own languages.
AI language translates human ones
In a separate case, Google recently improved its Translate service by adding a neural network. The system is now capable of translating much more efficiently, including between language pairs that it hasn’t been explicitly taught. The success rate of the network surprised Google’s team. Its researchers found the AI had silently written its own language that’s tailored specifically to the task of translating sentences.
If AI-invented languages become widespread, they could pose a problem when developing and adopting neural networks. There’s not yet enough evidence to determine whether they present a threat that could enable machines to overrule their operators.
They do make AI development more difficult though as humans cannot understand the overwhelmingly logical nature of the languages. While they appear nonsensical, the results observed by teams such as Google Translate indicate they actually represent the most efficient solution to major problems.
[ii] https://www.youtube.com/watch?v=HCzYfNn7Zw8
https://www.youtube.com/watch?v=xaOiRlZPnbg
[iii] Jean-Paul Sartre gives us the Bad Faith argument as an answer to the 20th Century Existential Crisis: faith is validated only by understanding that it is first a choice, and therefore manufactured by the individual.
[iv] The Bankruptcy of Faith in Progress is argued even during the Enlightenment from which Faith in Progress is born.  When best intentions have unintended negative outcomes, the complexity of an interconnected world becomes unavoidably obvious.
[v] Seven Myths of the Spanish Conquest, according to Matthew Restall, are these: the Myth of Exceptional Men, the Myth of the King’s Army, the Myth of the White Conquistador, the Myth of Completion, the Myth of (Mis)Communication, the Myth of Native Desolation, and the Myth of Superiority. As a whole, his theory questions the basic measures of justification and success traditionally used to gauge the European invasion of the Americas.
[vi] See the dominator-culture discussion throughout The Chalice and the Blade by Riane Eisler, and Rebirth of the Goddess by Carol P. Christ, p. 161.
[vii] See Anthropocene Extinction or Holocene Extinction or Sixth Great Extinction; some predictive models demonstrate the end of wildlife as we know it by the middle of the 21st Century.


Edited, censored, and forbidden since 1999, Guy de Libreville is the enigmatic nom de plume of a classically intransitive, post-modern écrivain exploring Western Civilization’s socio-cultural taboos in e-book, letterpress, music, performance art, and, most recently, through radically intelligent, gender-free sculpture.