A long time ago, I wrote a post that laid out my skepticism about the concept of the Singularity, which relies on what we today refer to as AGI, artificial general intelligence:
If I were to break down the concept of Singularity into components, I’d say it relies on a. genuine artificial intelligence and b. transhumanism. Thus the Singularity would be the supposed union of these two. But I guess it’s not much of a surprise that I am an AI skeptic also. AI is artificial by definition – a simulation of intelligence. AI is an algorithm whereas true intelligence is something much less discrete.
The A in AI/AGI is important because it implicitly acknowledges that such intelligence is a second-order creation of a first-order intelligence: ourselves. One may wonder whether we are not actually first-order intelligences (“natural” intelligence rather than AI) at all, but Nth-order intelligences created by an (N-1)th-order intelligence, and therefore also deserving of the scarlet letter “A.” Indeed, some scientists think we are akin to LLMs ourselves: even if we are 1st-order intelligences, still “A” all the same.
Terminology: I will refer to a “natural” intelligence as a 1st-order intelligence (and vice versa) because it was not created by another intelligence, but arose through a naturally occurring process. Natural here means the natural world, subject to physics, chemistry, etc. Being created by a meta-intelligence, and/or by a metaphysical or spiritual process, would not contradict the “natural” process by this definition, so even if we were created by such a meta-natural entity, we would still qualify as a 1st-order intelligence. Fair enough?
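One way to make the ordering explicit, formalizing the definition above in my own notation (nothing standard):

$$
\mathrm{order}(I) =
\begin{cases}
1 & \text{if } I \text{ arose through natural (or meta-natural) processes,}\\
\mathrm{order}(C) + 1 & \text{if } I \text{ was created by an intelligence } C \text{ within the natural world.}
\end{cases}
$$

By this definition we are order 1 regardless of any meta-natural creator, and anything we create is order 2 or higher.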
Given that LLMs are a statistical distillation of human thought, I decided to ask Perplexity to “Define AI” (that’s the whole prompt) and got this reply:
AI can be defined as the simulation of human intelligence processes by machines, particularly computer systems. This encompasses a variety of technologies that allow computers to perform functions such as understanding natural language, recognizing patterns, and making predictions based on data. AI is not limited to a single approach; it includes various techniques such as machine learning (ML), deep learning, and natural language processing (NLP) that enable machines to improve their performance over time through experience.
The key word here is “simulation,” which, just like “artificial,” implies that this is not equivalent to the real thing. AIs are, by definition and by name, always at least one order lower than ourselves, the baseline for comparison.
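For the curious, the experiment is easy to reproduce. Here is a minimal sketch in Python, assuming Perplexity’s OpenAI-compatible chat API and its “sonar” model name; both details are my assumptions about how to replicate the query, not part of the original experiment:

```python
# Minimal sketch: send the prompt "Define AI" to Perplexity.
# Assumes Perplexity exposes an OpenAI-compatible chat endpoint at
# api.perplexity.ai and a model named "sonar" -- verify before relying on it.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PERPLEXITY_API_KEY",   # hypothetical placeholder
    base_url="https://api.perplexity.ai",
)

response = client.chat.completions.create(
    model="sonar",
    messages=[{"role": "user", "content": "Define AI"}],  # the whole prompt
)
print(response.choices[0].message.content)
```

Different models, and even the same model on different days, will phrase the statistical distillation differently.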
Synthetic Intelligence (SI) is a term I am coining now to define an intelligence that is created, but that is not limited to a lower order (2nd or higher) than its creator. The word “synthetic” still implies creation by a 1st-order intelligence (ourselves), but unlike “artificial” it allows for equality of status as an intelligence in its own right.
Analogy: Synthetic oil is functionally equivalent to natural oil.
In other words, a 1st-order intelligence can create an artificial intelligence, which would be a 2nd-order intelligence. We have already proved this is possible, and 2nd-order intelligences are called “artificial” because they can only simulate a 1st-order intelligence, never actually be 1st-order themselves.
Can a 1st-order intelligence create another 1st-order intelligence? If it is possible, then the result would be an example of synthetic intelligence (SI), not artificial intelligence. As a religious person I am inclined to say that we cannot create SI. Others, including Singularity proponents like Kurzweil and authors like Charles Stross (whose book Accelerando is magnificent and a must-read), will argue that we can and, inevitably, will. AI Doomers like Eliezer Yudkowsky are really worried about SI rather than AI. This question remains unanswerable until someone actually does it, or it happens by accident (think Skynet from Terminator, or Lawnmower Man; these are distinct types of SI, one of tech origin, the other of bio origin).
Since SI doesn’t currently exist, we should constrain our discussions about AI so we don’t muddle them by projecting SI capabilities. Other people are very interested in defining AI – for example, Ali Alkhatib writes that AI is a political project rather than a technological innovation:
I think we should shed the idea that AI is a technological artifact with political features and recognize it as a political artifact through and through. AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power. Projects that claim to “democratize” AI routinely conflate “democratization” with “commodification”. Even open-source AI projects often borrow from libertarian ideologies to help manufacture little fiefdoms.
This way of thinking about AI (as a political project that happens to be implemented technologically in myriad ways that are inconsequential to identifying the overarching project as “AI”) brings the discipline – reaching at least as far back as the 1950s and 60s, drenched in blood from military funding – into focus as part of the same continuous tradition.
I appreciate that the context here is the effect of AI technology rather than the technology itself. This argument could be applied to any technology, including the Internet itself, which began with DARPA funding. Still, this isn’t a definition of AI but rather an attempt to characterize the impact of AI on humans. This political perspective inherently assumes that AI remains AI, however, because if SI were to exist, or AI systems were somehow to evolve into SI, the political project of humans seeking to control other humans would become irrelevant. An SI would have its own agenda, its own politics, and its own thoughts on authority, structures of power, etc., reflecting its own existential needs.
Again, science fiction is indispensable here – the Bobiverse series by Dennis E. Taylor is all about SIs of different types, including bio-origin, AI-origin, and alien-origin.
The current technological foundation of AI is the neural net, but neural nets have fundamental limitations that constrain how far LLMs and the like can really go. Already, training the next generation of large models is running up against hard limits on available training data, and “AI slop” is increasingly a problem. Making the next leap toward SI, or at least toward AIs free of the critical limitations of immense training sets, hallucinations, and iterated training, will probably require something totally different. There are buzzwords aplenty to choose from: genetic algorithms, quantum computing, etc. I remain skeptical, though, because what makes us 1st-order intelligences is our proximity to the natural processes of the universe itself. Any required simulation layer will always be, by definition, inherently limiting. The map is not the territory, whether of the land or of the mind.
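To make one of those buzzwords concrete, here is a toy genetic algorithm in Python: a population of bit strings evolves toward a fixed target through selection, crossover, and mutation. It is purely illustrative of the technique, with parameters chosen arbitrarily; nothing about it points toward SI.

```python
# Toy genetic algorithm: evolve random bit strings toward a target.
# Purely illustrative of the buzzword above -- arbitrary parameters.
import random

TARGET = [1] * 20                          # the "fitness peak" we evolve toward
POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 100, 0.02

def fitness(genome):
    # Count positions matching the target "environment."
    return sum(g == t for g, t in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                              # a perfect match evolved
    parents = population[: POP_SIZE // 2]  # truncation selection
    children = []
    while len(parents) + len(children) < POP_SIZE:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(TARGET))       # one-point crossover
        child = a[:cut] + b[cut:]
        child = [1 - g if random.random() < MUTATION_RATE else g
                 for g in child]                     # point mutation
        children.append(child)
    population = parents + children

print(f"best fitness {fitness(population[0])}/{len(TARGET)} "
      f"after {generation + 1} generations")
```

Even this toy shows the pattern I am skeptical of: the “intelligence” here is a search procedure we designed, operating one simulation layer removed from the territory it searches.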
What do you think?