Tag: philosophy

  • Defining Artificial vs Synthetic Intelligence

    A long time ago, I wrote a post that laid out my skepticism about the concept of the Singularity, which relies on what we today refer to as AGI, artificial general intelligence:

    If I were to break down the concept of Singularity into components, I’d say it relies on a. genuine artificial intelligence and b. transhumanism. Thus the Singularity would be the supposed union of these two. But I guess it’s not much of a surprise that I am an AI skeptic also. AI is artificial by definition – a simulation of intelligence. AI is an algorithm whereas true intelligence is something much less discrete. 

    The A in AI/AGI is important because it implicitly acknowledges that such intelligence is a second-order creation of a first-order intelligence – ourselves. One may wonder whether we are actually not first-order intelligences – “natural” intelligence rather than AI – but Nth-order intelligences created by an (N-1)th-order intelligence, and therefore also deserving of the scarlet letter “A.” Indeed, some scientists think we are akin to LLMs ourselves – even if we are 1st order intelligences, A all the same.

    Terminology: I will refer to a “natural” intelligence as a 1st order intelligence (and vice versa) because it was not created by another intelligence, but arose through a naturally-occurring process. Natural here is meant as the natural world, subject to physics, chemistry, etc. Being created by a meta-intelligence, and/or a metaphysical or spiritual process, would not contradict the “natural” process by this definition, so even if we were created by such a meta-natural entity, we still would qualify as a 1st order intelligence. Fair enough?

    Given that LLMs are a statistical distillation of human thought, I decided to ask Perplexity to “Define AI” (that’s the whole prompt) and got this reply:

    AI can be defined as the simulation of human intelligence processes by machines, particularly computer systems. This encompasses a variety of technologies that allow computers to perform functions such as understanding natural language, recognizing patterns, and making predictions based on data. AI is not limited to a single approach; it includes various techniques such as machine learning (ML), deep learning, and natural language processing (NLP) that enable machines to improve their performance over time through experience.

    The key word here is “simulation,” which, just like “artificial,” implies that this is not equivalent to the real thing. AIs are, by definition and by name, always at least one order lower than ourselves, the baseline for comparison.

    Synthetic Intelligence (SI) is a term I am coining now, to define an intelligence that is created, but that isn’t limited to a lower (2nd or more) order than its creator. The word synthetic still implies creation by the 1st order intelligence (ourselves), but unlike the word “artificial” allows for equality of status as an intelligence in its own right.

    Analogy: Synthetic oil is functionally equivalent to oil.

    In other words, a 1st order intelligence can create an artificial intelligence which would be a 2nd order intelligence. We have already proved this is possible, and 2nd order intelligences are referred to as “artificial” because they can only simulate a 1st order intelligence, and never actually be 1st order themselves.

    Can a 1st order intelligence create another 1st order intelligence? If so, that would be an example of synthetic intelligence (SI), not artificial intelligence. As a religious person I am inclined to say that we cannot create SI. Others, including Singularity proponents like Kurzweil and authors like Charles Stross (whose book Accelerando is magnificent and a must-read), will argue that we can and, inevitably, will. AI Doomers like Eliezer Yudkowsky are really worried about SI rather than AI. This question remains unanswerable until someone actually does it, or it happens by accident (think Skynet from Terminator, or Lawnmower Man – distinct types of SI, one of tech origin, the other of bio origin).

    Since SI doesn’t currently exist, we should constrain our discussions about AI so we don’t muddle them by projecting SI capabilities. Other people are very interested in defining AI – for example, Ali Alkhatib writes that AI is a political project rather than a technological innovation:

    I think we should shed the idea that AI is a technological artifact with political features and recognize it as a political artifact through and through. AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power. Projects that claim to “democratize” AI routinely conflate “democratization” with “commodification”. Even open-source AI projects often borrow from libertarian ideologies to help manufacture little fiefdoms.

    This way of thinking about AI (as a political project that happens to be implemented technologically in myriad ways that are inconsequential to identifying the overarching project as “AI”) brings the discipline – reaching at least as far back as the 1950s and 60s, drenched in blood from military funding – into focus as part of the same continuous tradition.

    I appreciate that the context here is the effect of AI technology rather than the technology itself. This argument could be applied to any technology, including the Internet itself, which began with DARPA funding. Still, this isn’t a definition of AI but rather an attempt to characterize the impact of AI on humans. This political perspective also inherently assumes that AI remains AI, because if SI were to exist, or AI systems were somehow to evolve into SI, then the political project of humans seeking to control other humans would become irrelevant. An SI would have its own agenda, its own politics, and its own thoughts on authority, structures of power, etc., reflecting its own existential needs.

    Again, science fiction is indispensable here – the Bobiverse series by Dennis E. Taylor is all about SIs of different types, including bio-origin, AI-origin, and alien-origin.

    The current technological foundation for AI is the neural net, but there are fundamental limitations to this architecture which constrain how far LLMs and the like can really go. Already, training the next generation of large models is running up against hard limits of available training data, and “AI slop” is increasingly a problem. Making the next leap towards SI, or at least towards AIs that don’t have the critically-limiting problems of immense training data sets, hallucinations, and iterated training, will probably require something totally different. There are buzzwords aplenty to choose from: genetic algorithms, quantum computing, etc. I remain skeptical, though, because what makes us 1st-order intelligences is our proximity to the natural processes of the universe itself. Any required simulation layer will always be, by definition, inherently limiting. The map is not the territory, whether of the land or of the mind.

  • Reason is a limited process and can never explain everything objectively

    Reason is a limited process because it arises from consciousness, which observes the universe through filters. The mind has physiological filters (e.g., the wavelengths of light you can perceive, the frequencies of sound you are limited to), chemical filters (the specific biochemistry of your brain, your mood and emotions, etc.), and mental filters (pre-existing ideas and biases, the fidelity of your mental models and assumptions, simple lack of knowledge). These are all best understood as filters between you and the “objective” truth of reality. The universe is vastly more complex than what you can observe and understand of it. The process of reason operates only on information that survives that chain of filters, so you are always working from an insufficient dataset.

    The brain is actually evolved to extrapolate and infer information based on filtered input. The brain often fills in the gaps and makes us see what we expect to see rather than what is actually there. Simple examples are optical illusions and the way the brain can still make sense of the following sentence:

    Arinocdcg to rencet rseaerch, the hmuan brian is plrectfey albe to raed colmpex pasasges of txet caiinontng wdors in whcih the lrettes hvae been jmblued, pvioedrd the frsit and lsat leetrts rmeian in teihr crcerot piiotsons.
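
The transposed-letter effect above is easy to reproduce. Here is a quick Python sketch (the function name and details are my own illustration) that scrambles only the interior letters of each word, leaving the first and last letters, and trailing punctuation, in place:

```python
import random

def jumble(text, seed=42):
    """Shuffle the interior letters of each word, keeping the first
    and last letters (and trailing punctuation) where they were."""
    rng = random.Random(seed)  # seeded so results are repeatable
    out = []
    for word in text.split():
        # Separate trailing punctuation so it stays put
        core = word.rstrip(".,!?;:")
        tail = word[len(core):]
        if len(core) > 3:
            middle = list(core[1:-1])
            rng.shuffle(middle)
            core = core[0] + "".join(middle) + core[-1]
        out.append(core + tail)
    return " ".join(out)

print(jumble("According to recent research, the human brain is "
             "perfectly able to read jumbled passages of text."))
```

Words of three letters or fewer pass through unchanged, which is part of why such sentences stay readable: the short function words anchor the parse while the brain reconstructs the longer content words.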

    As a result, there are not only filters on what we perceive but also active transformations of imagination and extrapolation that modify what we perceive. These filters and transformations all happen “upstream” of the rational process, so reason can never operate on an untainted, objective reality. Despite the filters and transformations, the mind does a pretty good job, especially in the context of human interactions on planet Earth (which is what our minds and their filters and transformations are optimized for, after all). However, the farther up the metaphysical ladder we go, the more we deviate from the optimal scenario for which we evolved (or were created, or were created to have evolved, or whatever – I’ve said nothing to this point that most atheists and theists need disagree on).

    A good analogy is that Newton’s laws were a fantastic model of classical mechanics, but they do not suffice for the clock timing of GPS satellites in Earth orbit. This is because Newton did not have the tools to be aware of general relativity. Yes, we eventually figured it out, but Newton never could have (for one thing, his civilization lacked the mathematical and scientific expertise to formulate the experiments that created the questions Einstein eventually answered).
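
The GPS example can be made concrete with back-of-the-envelope numbers. Here is a sketch using standard values for Earth’s gravitational parameter and the GPS orbital radius; it applies the usual first-order formulas for the two relativistic effects Newton had no way to anticipate (a simplification that ignores, e.g., Earth’s rotation and orbital eccentricity):

```python
import math

# Standard constants (SI units)
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8      # speed of light, m/s
R_earth = 6.371e6     # mean Earth radius, m
r_gps = 2.6560e7      # GPS orbital radius (semi-major axis), m
day = 86400.0         # seconds per day

# General relativity: the satellite clock runs FAST
# (weaker gravitational potential at altitude)
grav = (GM / c**2) * (1 / R_earth - 1 / r_gps) * day

# Special relativity: the satellite clock runs SLOW
# (time dilation from orbital speed)
v2 = GM / r_gps                  # v^2 for a circular orbit
vel = -(v2 / (2 * c**2)) * day

net = grav + vel
print(f"gravitational: +{grav * 1e6:.1f} us/day")
print(f"velocity:      {vel * 1e6:.1f} us/day")
print(f"net drift:     +{net * 1e6:.1f} us/day")  # roughly +38 us/day
```

Left uncorrected, a drift of tens of microseconds per day translates into kilometers of positioning error, which is why the GPS design bakes the relativistic offset into the satellite clocks.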

    Gödel’s theorem makes this more rigorous by demonstrating that in any consistent formal system powerful enough to express arithmetic, there will always be statements that can neither be proved nor disproved within the system. In other words, as Douglas Hofstadter put it, Gödel proved that provability is a weaker notion than truth, no matter what axiom system is involved. That applies to math, it applies to philosophy, it applies to physics, and it applies to the theism/atheism debate.

    Related – a more detailed explanation of Gödel’s Theorem and its implications for reason.

  • cargo cult quantum physics

    From time to time I encounter people online who are enamoured of quantum physics in a dilettante sense, obsessed with its implications for macroscopic reality and the broader philosophical question of what is real anyway. I’ve come to think of their arguments as “cargo cult physics” because, in a way, they simply snowball the debate with terminology, quote papers, etc., and basically obfuscate beyond anyone’s capability to really follow what they are saying.

    Here’s an example from a particularly painful (for this, and other reasons) thread at Talk Islam.

    aziz 3:03 pm on January 26, 2011
    Keid, you take the extremist position that there is no reality – and then you insist that reality is only empirical. There’s a fundamental tension that not even Godel can save you from here.

    I welcome your intellectual pity, and i reciprocate.

    THE 6:56 pm on January 26, 2011
    OK deconstruct. I am not saying there is no reality. (How could I possibly know that?). I am saying reality is for us a theory.

    Now it’s very possible that it is a true theory. It certainly agrees with all the empirical evidence, in the information flows we have access to, that there is an external causal reality. That external stuff also seems to form our physical substrate.

    But we try to make theories now about the nature of that external & substrate-level reality. We start to understand that although it obeys rules that seemed to be mechanical when we first started to study it, the deeper we go into the structure, the more it becomes abstract and information-like.

    Particles become quanta become qubit states in a information-bearing field?

    Also very important this: The whole notion of a continuous space with continuous fields, breaks down as you get to the Planckian level, 10^-35 meters. So we need a new paradigm.

    So now we postulate: Is there a supportive subPlanckian substrate to the entire world? Presumably it could be even more information-like, more abstract. Could it be pure information?

    How could this work?

    Well there are clues. Number one IMHO is the holographic conjecture. We can reinterpret the quantum fields that make up the universe as a isomorphic to fields in an outgoing 2 dimensional surface at the edge of the universe. The area of that surface has to be quantized into planck-sized areas. Could it be a cellular lattice of some kind? In some, as yet unknown, geometry?

    Then the information could be qubits being exchanged between cells in that geometry.

    Understand that lattices can arise naturally, e.g Crystals. Geometry just has to have the right symmetry, thats all, to form repeating cells capable of computation.

    I know this is vague and impressionistic, but I’m not a real physicist and I’m out of my depth here.

    As you can imagine, I’m paying a lot of attention to current attempts to confirm Hogan’s noise.

    THE 8:12 pm on January 26, 2011
    I found Paola Zizzi’s paper thought provoking.

    How to respond to this? Well of course, reality is a theory. But it’s a damn good one. The entire point of a theory is to approximate reality in a useful way. Invoking crystals and qubits and whatnot is just adding noise. Unlike Hogan’s noise, this noise is well confirmed.

    Anyway, I am not trying to pick on my interlocutor here, just frustrated with the way these sorts of debates turn out. Basically it’s all a condescending attempt to wave big words around to intimidate me because I have spiritual faith. Pointing out that I have a PhD in physics is kind of pointless here. I believe in God, so I must be an idiot in some fundamental way, and here’s the proof: [insert gibberish].

    You see the same sort of thing with people who believe in Singularity – but I’m a Singularity skeptic.

  • many worlds

    I have a bit of a bias against the Overcoming Bias blog, because of the way Eli pretends his Bayesian worldview is utterly pristine, without a trace of dogma. That said, his recent forays into explaining quantum mechanics are superb… that is, until he revealed that his aim was to set up a tension between Science and Bayes. In a nutshell,

    Science-Goggles on: The current quantum theory has passed all experimental tests so far. Many-Worlds doesn’t make any new testable predictions – the amazing new phenomena it predicts are all hidden away where we can’t see them. You can get along fine without supposing the other worlds, and that’s just what you should do. The whole thing smacks of science fiction. But it must be admitted that quantum physics is a very deep and very confusing issue, and who knows what discoveries might be in store? Call me when Many-Worlds makes a testable prediction.

    Bayes-Goggles on: The simplest quantum equations that cover all known evidence don’t have a special exception for human-sized masses. There isn’t even any reason to ask that particular question. Next!

    And just like that – Science is dismissed. Many-Worlds must be true, after all, it makes the most sense and is the simplest possible explanation!

    Intriguingly, Eli has often dismissed the idea of God even though one could argue that God too is the “simplest” answer to any number of great Questions. Likewise, he often defends his robust atheism with an analogous assertion to “you can get along fine without supposing [God exists], and that’s just what you should do.”

    This is the sort of abuse of science that drives me crazy. People approach issues with a priori assumptions, such as “God doesn’t exist” or “the Singularity exists” or “Many-Worlds is true,” and then contort poor physics and math into supporting positions that are purely situational.

    I don’t pretend to be overcoming bias myself. I am a scientist, I am deeply religious, and I think Bayes’ Theorem is useful in certain situations (i.e., when measurements are not independent), but it is hardly enough to build an entire worldview on.
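
Bayes’ Theorem itself is modest: it just says how to update a probability in light of evidence. A minimal sketch with made-up numbers (the 1% base rate and the test accuracies here are purely illustrative):

```python
def bayes_posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]"""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

# A test that is 99% sensitive and 95% specific, applied to a
# condition with a 1% base rate, still yields a modest posterior.
posterior = bayes_posterior(prior=0.01, p_e_given_h=0.99,
                            p_e_given_not_h=0.05)
print(f"P(condition | positive test) = {posterior:.3f}")  # 0.167
```

The theorem is a perfectly good accounting rule for evidence; the leap from this rule to an entire epistemology is where I part ways with its more zealous fans.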

    I am not implacably against the Many Worlds Interpretation, mind you. I enjoyed Tegmark’s article in Nature, which made the case more fairly. I just think that a false dichotomy between Bayes’ Theorem and Science as an institution serves only to muddle things rather than help us understand.

    As an aside, I wonder if Eli would be willing to prove his commitment to Bayes’ Theorem by performing quantum suicide?