Tag: AI

  • Defining Artificial vs Synthetic Intelligence

    A long time ago, I wrote a post that laid out my skepticism about the concept of the Singularity, which relies on what we today refer to as AGI, artificial general intelligence:

    If I were to break down the concept of Singularity into components, I’d say it relies on a. genuine artificial intelligence and b. transhumanism. Thus the Singularity would be the supposed union of these two. But I guess it’s not much of a surprise that I am an AI skeptic also. AI is artificial by definition – a simulation of intelligence. AI is an algorithm whereas true intelligence is something much less discrete. 

    The A in AI/AGI is important because it implicitly acknowledges that such intelligence is a second-order creation of a first-order intelligence – ourselves. One may wonder whether we are actually not first-order intelligences – “natural” intelligence rather than AI – but are instead Nth-order intelligences created by an (N-1)th-order intelligence, and therefore also deserving of the scarlet letter “A.” Indeed, some scientists think we are akin to LLMs ourselves – even if we are 1st-order intelligences, still A all the same.

    Terminology: I will refer to a “natural” intelligence as a 1st order intelligence (and vice versa) because it was not created by another intelligence, but arose through a naturally-occurring process. Natural here is meant as the natural world, subject to physics, chemistry, etc. Being created by a meta-intelligence, and/or a metaphysical or spiritual process, would not contradict the “natural” process by this definition, so even if we were created by such a meta-natural entity, we still would qualify as a 1st order intelligence. Fair enough?

    Given that LLMs are a statistical distillation of human thought, I decided to ask Perplexity to “Define AI” (that’s the whole prompt) and got this reply:

    AI can be defined as the simulation of human intelligence processes by machines, particularly computer systems. This encompasses a variety of technologies that allow computers to perform functions such as understanding natural language, recognizing patterns, and making predictions based on data. AI is not limited to a single approach; it includes various techniques such as machine learning (ML), deep learning, and natural language processing (NLP) that enable machines to improve their performance over time through experience.

    The key word here is “simulation,” which, just like “artificial,” implies that this is not equivalent to the real thing. AIs are, by definition and by name, always at least one order lower than ourselves, the baseline for comparison.

    Synthetic Intelligence (SI) is a term I am coining now to define an intelligence that is created, but that isn’t limited to a lower order (2nd or beyond) than its creator. The word synthetic still implies creation by the 1st order intelligence (ourselves), but unlike the word “artificial” it allows for equality of status as an intelligence in its own right.

    Analogy: Synthetic oil is functionally equivalent to oil.

    In other words, a 1st order intelligence can create an artificial intelligence which would be a 2nd order intelligence. We have already proved this is possible, and 2nd order intelligences are referred to as “artificial” because they can only simulate a 1st order intelligence, and never actually be 1st order themselves.

    Can a 1st order intelligence create another 1st order intelligence? If it is possible, then that would be an example of synthetic intelligence (SI), not artificial intelligence. As a religious person I am inclined to say that we cannot create SI. Others, including Singularity proponents like Kurzweil and authors like Charles Stross (whose book Accelerando is magnificent and a must-read), will argue that we can, and inevitably will. AI Doomers like Eliezer Yudkowsky are really worried about SI rather than AI. This question remains unanswerable until someone actually does it, or it happens by accident (think Skynet from Terminator, or Lawnmower Man – distinct types of SI, one of tech origin, the other of bio origin).

    Since SI doesn’t currently exist, we should constrain our discussions about AI so we don’t muddle them by projecting SI capabilities. Other people are very interested in defining AI – for example, Ali Alkhatib writes that AI is a political project rather than a technological innovation:

    I think we should shed the idea that AI is a technological artifact with political features and recognize it as a political artifact through and through. AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power. Projects that claim to “democratize” AI routinely conflate “democratization” with “commodification”. Even open-source AI projects often borrow from libertarian ideologies to help manufacture little fiefdoms.

    This way of thinking about AI (as a political project that happens to be implemented technologically in myriad ways that are inconsequential to identifying the overarching project as “AI”) brings the discipline – reaching at least as far back as the 1950s and 60s, drenched in blood from military funding – into focus as part of the same continuous tradition.

    I appreciate that the context here is the effect of AI technology rather than the technology itself. This argument could be applied to any technology, including the Internet itself, which began with DARPA funding. Still, this isn’t a definition of AI but rather an attempt to characterize the impact of AI on humans. This political perspective inherently assumes that AI remains artificial, however, because if SI were to exist, or AI systems were to evolve somehow into SI, then the political project of humans seeking to control other humans becomes irrelevant. An SI would have its own agenda, its own politics, and its own thoughts on authority, structures of power, etc., reflecting its own existential needs.

    Again, science fiction is indispensable here – the Bobiverse series by Dennis E. Taylor is all about SIs of different types, including bio-origin, AI-origin, and alien-origin.

    The current technological foundation for AI is the neural net, but there are fundamental limitations to it which constrain how far LLMs and the like can really go. Already, training the next generation of large models is running against hard limits of available training data, and “AI slop” is increasingly a problem. To make the next leap towards SI, or at least towards AIs that don’t have the critically-limiting problems of immense training data sets, hallucinations, and iterated training, will probably require something totally different. There are buzzwords aplenty to choose from: genetic algorithms, quantum computing, etc. I still remain skeptical, though, because what makes us 1st-order intelligences is our proximity to the natural processes of the universe itself. Any required simulation layer will always be, by definition, inherently limiting. The map is not the territory, whether of the land, or of the mind.

  • Introducing AzizGPT

    AzizGPT is a new large language model that has several advantages over services like OpenAI’s ChatGPT.

    These include:

    • no hallucinations
    • biological neural network
    • genuine intelligence
    • inherently aligned, to make AGI Ruin moot

    Of course there are some disadvantages:

    • Slower interface via blog comments (deprecated), Twitter, Threads, BlueSky, and Mastodon.
    • Real-time interface is only available via SMS to a limited pool. To request access, please comment below or via the interfaces mentioned above.
    • AzizGPT may not comply with all requests, at AzizGPT’s personal discretion.

    If there is sufficient interest in AzizGPT, then we may create a paid model. Let’s see how this initial demo goes. Please reply to this post to test AzizGPT’s capabilities for yourself.

  • singularity skeptic

    I am not a Luddite by any means, but I just have to state my position plainly: I think all talk of a “Singularity” (of the Kurzweil variety) is nothing more than science fiction. I do not have an anti-Singularity manifesto but rather just a skeptical reaction to most of the grandiose predictions by Singularitarians. I’d like to see someone articulate a case for the Singularity that isn’t yet another fancy timeline of assertions about what year we will have reverse engineered the human brain or have VR sex or foglets or whatever. I am also leery of the abusive invocation of physics terms like “quantum loop gravity” and “energy states” as if they were magic totems (Heisenberg compensators, anyone?).

    If I were to break down the concept of Singularity into components, I’d say it relies on a. genuine artificial intelligence and b. transhumanism. Thus the Singularity would be the supposed union of these two. But I guess it’s not much of a surprise that I am an AI skeptic also. AI is artificial by definition – a simulation of intelligence. AI is an algorithm whereas true intelligence is something much less discrete. I tend towards a stochastic interpretation of genuine intelligence rather than a deterministic one, myself – akin to the Careenium model of Hofstadter, but even that was too easily discretized. Let me invoke an abused physics analogy here – I see artificial intelligence as a dalliance with the energy levels of an atom, whereas true intelligence is the complete (and for all purposes, infinite) energy state of a 1cm metal cube.

    The proponents of AI argue that if we just add levels of complexity eventually we will have something approximating the real thing. The approach is to add more neural net nodes, add more information inputs, and [something happens]. But my sense of the human brain (which is partly religious and partly derived from my career as an MRI physicist specializing in neuroimaging) is that the brain isn’t just a collection of N neurons, wired a certain way. There are layers, structures, and systems within, whose complexities multiply against each other.

    Are there any neuroscientists working in AI? Do any AI algorithms make an attempt to include structures like an “arcuate fasciculus” or a “basal ganglia” into their model? Is there any understanding of the difference between gray and white matter? I don’t see how a big pile of nodes is going to have any more emergent structure than a big pile of neurons on the floor.
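    The “big pile of nodes” point can be made concrete with a minimal sketch. Below is a toy feed-forward network in Python (the layer sizes and random weights are arbitrary, chosen purely for illustration): every unit is an identical weighted sum followed by a nonlinearity, with nothing in the formalism that distinguishes an arcuate fasciculus from a basal ganglion, or gray matter from white.

    ```python
    import numpy as np

    # Toy feed-forward net: illustrative only. Sizes are arbitrary.
    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(0.0, x)

    # 8 inputs -> 16 hidden units -> 4 outputs.
    W1 = rng.normal(size=(8, 16))
    W2 = rng.normal(size=(16, 4))

    def forward(x):
        # Every "neuron" is the same operation: weighted sum + nonlinearity.
        # There is no anatomical structure here, only a homogeneous pile
        # of nodes wired by weight matrices.
        return relu(x @ W1) @ W2

    x = rng.normal(size=(1, 8))
    y = forward(x)
    print(y.shape)  # (1, 4)
    ```

    Whatever structure a trained network has lives entirely in the learned values of the weight matrices, not in any architectural analogue of the brain’s differentiated subsystems – which is exactly the gap the questions above are pointing at.
    
    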

    Then we come to transhumanism. Half of transhumanism is the argument that we will “upload” our brains or augment them somehow, but that requires the same knowledge of the brain as AI does, so the same skepticism applies. The other half is physical augmentation, but here we get to the question of energy source. I think Blade Runner did it right:

    Tyrell: The light that burns twice as bright burns half as long. And you have burned so very very brightly, Roy.

    Are we really going to cheat thermodynamics and get multipliers to both our physical bodies and our lifespans? Or does it seem more likely that one comes at the expense of the other? Again, probably no surprise here that I am a skeptic of the Strategies for Engineered Negligible Senescence (SENS) stuff promoted by Aubrey de Grey – the MIT Technology Review article about his work gave me no reason to reconsider my judgment that he’s guilty of exuberant extrapolation (much like Kurzweil). I do not dismiss the research but I do dismiss the interpretation of its implications. And do they address the possibility that death itself is an evolutionary imperative?

    But OK. Let’s postulate that death can simply be engineered away. That human brains can be modeled in the Cloud and data can be copied back and forth from wetware to silicon. Then what do we become? A race of gods? Or just a pile of nodes, acting out virtual fantasies until the heat death of the universe pulls the plug? That’s not post- or trans-humanism, it’s null-humanism.

    I’d rather have a future akin to Star Trek, or Foundation, or even Snow Crash – one full of space travel, star empires, super hackers and nanotech. Not a future where we all devolve into quantum ghosts – or worse, are no better than the humans trapped in the Matrix, living out simulated lives for eternity.