Artificial General Intelligence in 5 Not-So-Easy Steps

By Capt. Jon Cariba Phoenix, November 15, 2023

(Graphic by Sarah Lancia)

The best lens through which to view today’s artificial intelligence (AI) is the late 1990s/early 2000s dot-com bubble. Chattering elites proclaimed that the poorly understood new technology (the internet) would change everything, and then the bubble, inflated by irrational exuberance, suddenly popped. Once the smoke from failed companies cleared, it turned out the internet did change a lot, but not everything. Dreams of online activism leveling the playing field crashed headfirst into a lack of know-how and the difficulty of sustaining digital worlds in real ones. Ironically, those who understood the internet’s limitations (namely, its privacy-eroding addictiveness) profited the most.

Today’s AI is probably even less understood. Breathless chatter about ChatGPT masks the reality that AI is not as advanced as many think. It can suggest Paris for a destination vacation but miss the ongoing protests that would crimp any honeymoon. It can tell a jewelry store owner that halving prices should move unsellable bracelets, but it cannot explain why a clerk accidentally doubling their price would make those bracelets sell out.

Why? Because AI is not artificial general intelligence (AGI), which is to say AI cannot think like a human. This inability is why AI fails at many jobs, particularly those involving novelty.

This raises the question: could AGI exist? To answer that, it is necessary to demonstrate what it would take to create it, spotlighting the differences between AGI and regular AI. AI will change a lot, but there are reasons it won’t change everything.

An Expanded Memory

Starting in the 1980s, AI pioneer Judea Pearl gradually developed the idea of a ladder of causation separating human from machine reasoning. At the bottom is association: seeing which variables tend to be related to one another. In the middle is intervention: observing how changing one variable affects another. At the top are counterfactuals: understanding what causes what and imagining what could have been otherwise. Babies are already born into the middle level and reach the top as they mature. But animals and computers, Pearl said, remain at the lowest level. Why? While machines can run advanced statistical analyses on how A and B correlate, going from correlation to causation requires something else.

Enter do-calculus, a type of mathematics Pearl invented to provide that missing piece. It is brilliant in its simplicity: two or more dots connected by arrows, with an arrow from dot A to dot B representing causality (A causes B). Do-calculus lets you start with a hypothesized dot-and-arrow causal diagram and mathematically work out which correlations you should expect to see if it were true. You can then test whether your model matches your empirical data, even if the data was not collected from a randomized experiment. The result is AI’s ability to build primitive structural causal models from what amount to artificial experiments, solving many simple problems in the process. Yet, as Pearl noted, the subsequent spread of AI was not the result of AI discovering complex causal models but rather of the number of seemingly complex tasks (like text prediction) that could be reduced to simple ones.
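
The toy simulation below (a minimal sketch, not Pearl’s own notation or software; its variables, coefficients, and noise terms are invented for the example) shows the difference between seeing and doing in miniature. A hidden factor Z drives both A and B, so the observed correlation between A and B overstates A’s real influence; only by setting A directly, the kind of intervention the do-operator formalizes, does the true effect of 2 appear.

```python
import random

def simulate(n=200_000, do_a=None):
    """Sample the toy causal diagram Z -> A, Z -> B, A -> B; do_a severs Z's arrow into A."""
    samples = []
    for _ in range(n):
        z = random.gauss(0, 1)                                # hidden common cause
        a = z + random.gauss(0, 1) if do_a is None else do_a  # A := Z + noise, unless we intervene
        b = 2 * a + 3 * z + random.gauss(0, 1)                # B := 2A + 3Z + noise; A's true effect on B is 2
        samples.append((a, b))
    return samples

# Rung one, association: regress B on A in purely observational data.
obs = simulate()
mean_a = sum(a for a, _ in obs) / len(obs)
mean_b = sum(b for _, b in obs) / len(obs)
slope = sum((a - mean_a) * (b - mean_b) for a, b in obs) / sum((a - mean_a) ** 2 for a, _ in obs)
print("observed slope of B on A:", round(slope, 2))  # about 3.5, inflated by the confounder Z

# Rung two, intervention: set A by fiat, which is what the do-operator reasons about.
b_hi = [b for _, b in simulate(do_a=1)]
b_lo = [b for _, b in simulate(do_a=0)]
print("mean B under do(A=1) minus do(A=0):", round(sum(b_hi) / len(b_hi) - sum(b_lo) / len(b_lo), 2))  # about 2.0
```

The gap between the two printed numbers is the gap between the bottom and middle rungs of Pearl’s ladder.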

Pearl’s do-calculus was a significant AI breakthrough, but there are reasons why it has not, on its own, created AGI. The diagramming is primitive, which limits the ability to integrate emerging causal models of more complex subjects. And yet Pearl’s causal modeling did capture one essential feature of human intelligence: a human brain contains around 86 billion neurons, which can encode billions of such relationships.

Yet 86 billion is not infinite. This necessitates AGI’s first ingredient: more memory. As anthropologist Robin Dunbar notes, 86 billion neurons translate into only enough bandwidth to give a typical human about six close friendships and around 150 acquaintances. A true AGI must be able to encode and synthesize at least as many causal relationships as a human (and ideally more), which is a question of capacity, not of computing speed.

The Blank Attractor, or the Rube Goldberg Nature of Human Cognition

Rube Goldberg was an early 20th-century cartoonist known for designing unnecessarily complex machines to perform simple tasks. A Rube Goldberg machine might cobble together some improvised contraption just to carry a ball from point A to point B. Similarly, there is a Rube Goldberg nature to human intelligence. Given a particular set of inputs and outputs, humans improvise a solution. If a screw drops into a crevice too small for your hand while you are repairing your car, you might look for something narrow enough to fit the crevice and capable of moving the screw toward you. Several possible solutions result, such as a magnetic rod, a claw, or a long screwdriver to push the screw out onto the ground. The solutions may not be elegant, but they all fill a niche.

A blank niche that attracts possible solutions is the second ingredient of AGI. Humans identify a gap, search their memory of causal relationships, and find something that fills it (the sketch below works through a toy version of that search). The improvisational aspect is what matters most here. Unsolved problems rarely have simple solutions. Rube Goldberg cognition is thus why people with a broader range of experience are more successful at creatively solving novel or complex problems, which points to two other limitations of today’s AI: ChatGPT cannot autonomously query itself, and even with human input, ChatGPT’s answers are becoming less flexible and less accurate over time.
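
To make the niche idea concrete, the toy sketch below (its objects and attributes are entirely made up) treats the blank niche as a set of required capabilities and memory as a catalog of what each remembered item affords; anything that satisfies the niche becomes a candidate solution.

```python
# Remembered causal relationships, phrased as "item -> what it can do".
memory = {
    "magnetic rod":     {"fits narrow gap", "attracts steel"},
    "claw grabber":     {"fits narrow gap", "grips object"},
    "long screwdriver": {"fits narrow gap", "pushes object"},
    "bare hand":        {"grips object"},
}

# The blank niche: whatever we improvise must fit the crevice and move the screw somehow.
must_have   = {"fits narrow gap"}
moves_screw = {"attracts steel", "grips object", "pushes object"}

# The niche "attracts" every remembered item whose affordances fill it.
candidates = [item for item, affords in memory.items()
              if must_have <= affords and affords & moves_screw]
print(candidates)  # ['magnetic rod', 'claw grabber', 'long screwdriver']
```

None of the candidates is elegant, but each one fills the niche, which is the point.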

Psychologist Peter Hobson notes that human cognition emerges from a newborn interacting with other humans in its first 18 months. Could ChatGPT talking to itself improve its accuracy or create Rube Goldberg cognition? Unlikely, and here’s why.

Hyperbolic Discounting

Asked to choose between receiving $60 in six months versus $30 in three, most humans choose the larger, longer-term reward. But preferences reverse when the choice is between $60 in three months versus $30 immediately. Psychologist George Ainslie termed this frustrating aspect of human cognition hyperbolic discounting: humans typically prefer larger or longer-term rewards except when short-term temptations are offered instead. It is why New Year’s resolutions often fail. Yet hyperbolic discounting has been found across animal species, including non-mammals. This raises the question: why would such an inefficient cognitive structure evolve?

According to Ainslie, the reason is simple. By turning human (and animal) minds into a constant debate among various shorter- and longer-term selves, hyperbolic discounting keeps humans from stagnating. Returning to the jewelry store example, that debate between longer- and shorter-term rewards pushes the brain to revise its dot-and-arrow diagrams when an unexpected event (doubling prices increases sales) clashes with received wisdom (lowering prices increases sales). In contrast, a mind using exponential discounting, one that always chooses the larger, longer-term reward, would have trouble shifting its behavior in complex environments, which is ChatGPT’s current problem.
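
The preference reversal, and its absence under exponential discounting, takes only a few lines of arithmetic to reproduce. The sketch below uses the textbook forms of the two rules, value = amount / (1 + k × delay) for hyperbolic discounting and value = amount × δ^delay for exponential discounting; the rates k and δ are illustrative choices, not empirical estimates.

```python
def hyperbolic(amount, delay_months, k=0.5):
    """Ainslie-style hyperbolic discounting: value falls off as 1 / (1 + k * delay)."""
    return amount / (1 + k * delay_months)

def exponential(amount, delay_months, delta=0.9):
    """Time-consistent exponential discounting: value falls off as delta ** delay."""
    return amount * delta ** delay_months

for discount in (hyperbolic, exponential):
    prefers_60_far = discount(60, 6) > discount(30, 3)   # $60 in six months vs. $30 in three
    prefers_60_near = discount(60, 3) > discount(30, 0)  # $60 in three months vs. $30 right now
    print(f"{discount.__name__:>11}: prefers $60 later? {prefers_60_far}; "
          f"still prefers $60 when $30 is immediate? {prefers_60_near}")

# hyperbolic : True, then False -> the preference reverses once the $30 is on the table today
# exponential: True, then True  -> no reversal, and so no internal debate to force a rethink
```

Only the hyperbolic chooser flips when the $30 becomes immediate; the exponential chooser, like today’s software, never argues with itself.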

Like all computers, today’s AI uses exponential, not hyperbolic, discounting and thus cannot revise its code unless programmed to. This sharply limits the complexity of the problems AI can solve. Creating an AGI that transcends those limits would require hyperbolic discounting. But in the process, hyperbolic discounting would grant AGI the autonomy that today’s computers lack. Imagine a machine that does not feel like turning on.

Humility, Curiosity, and Artificial Spirituality

Humans have only one way to resolve their hyperbolic discounting debates without stagnating: by focusing on iteratively longer time scales, so that each debate a longer-term self wins sets up the next one over an even longer horizon. But how do humans maintain such a quest for the infinite, given a finite lifespan? The answer is many don’t. Millions worldwide destroy their lives with short-term addictions, billions more stagnate by fixating on long-term idols (money, power, fame, ideology, etc.), and capitalism hobbles most of the rest through precarity. Diverse psychology research points out what happens next: learned helplessness and the need for certainty override the tolerance for ambiguity. The need for cognition (i.e., how much one enjoys thinking) plummets in the process.

So hyperbolic discounting drives humans to grasp for the infinite, but the instinct is weak on its own. Granted, the scientific method offers one way of chasing infinity, but it evolved only after organized religion had colonized that quest first. However, religious or meditative introspection alone could never reliably decipher the natural world without data-gathering and experimentation. This raises intriguing questions. If we create a hyperbolically discounting AGI, are we also creating an artificial personality or spirituality (things today’s AI conspicuously lacks)? Could either be programmed to avoid humanity’s mistakes?

Recent advances in personality research add honesty-humility as a sixth factor of human personality (alongside emotionality, extraversion, agreeableness, conscientiousness, and openness). While hyperbolic discounting would suggest not programming a single personality across all AGIs, high humility and openness would be essential to bootstrap a quest for the infinite. Such a quest is necessary to optimize hyperbolic discounting but can only be sustained in a society that does not increase the precariousness of its members.

The Oneira Project

The above ingredients show how limited today’s AI truly is and how far there is yet to go to create AGI. Few will likely follow these steps, making AGI only slightly more realistic a goal than medieval alchemy.

Yet Newton’s alchemy research inadvertently fueled breakthroughs in other fields. Likewise, the quest for AGI could lead to other, more modest but useful advances. Perhaps new technology could expand human brainpower beyond Dunbar’s limits. A more practical advance, however, would be a hypothetical fifth AGI ingredient with multiple applications outside AGI research itself.

Pearl’s structural causal models are useful but primitive. His textbook describes his diagrams’ difficulty processing feedback loops or changes over time. This is why humans use differential equations or spoken/written language to express more complex processes.

The problem is that differential equations have severe constraints of their own, not quite capturing humanity’s ability to visualize potential alternate worlds before they exist. Meanwhile, to paraphrase Steven Pinker, human language does act like an app, translating the brain’s tangled web of memorized relationships into a linear form. But it is not especially efficient at doing so.

Can this be improved upon? Is there a way to create a machine language, a new type of causal calculus, or both that can capture complex systems in more detail than differential equations and structural causal models and yet more efficiently than human language? Such a tool could drastically improve an AGI’s ability to visualize possible futures in a complex world. Let’s just hope humans make better use of it first.

Editor Note: This article is a selection from the Army Sustainment University President’s Writing Competition.

--------------------

Capt. Jon Phoenix is a former Army Sustainment University Reserve Component Logistics Captains Career Course student, recently joining the Army Reserve as a space operations officer with the 2nd Space Battalion at Fort Carson, Colorado. He was formerly a finance officer with the Kentucky Army National Guard, where he served as knowledge management officer and brigade S-8 for the 63rd Theater Aviation Brigade in Frankfort. He deployed for his state’s COVID-19 response mission and to Kuwait in support of Task Force Spartan at the close of the Afghan War. He is working on a Ph.D. in sociology at the University of Louisville, Kentucky.

--------------------

This article is published in the Fall 2023 issue of Army Sustainment.