The future of AI

What is really crazy to me is that my Ruby coding skills are significantly improved and I’ve learned way more about Ruby and the SketchUp API as I’ve used ChatGPT over the last four months. It’s like having a personal tutor who pretty much knows everything tell you what you are doing wrong and how to make it right (and explain all the details of why that is so). Even when it doesn’t know, it gets out on the internet, does some searching and then comes back with the best answer. It’s scary good.

1 Like

Absolutely. I hesitate to mention AI here most of the time because there are some controversial aspects to it. But my experience is as you describe. Just to note another amusing aspect of AI… of the 20 plus extensions I uploaded to EW this year… I typed almost none of the code. It’s a welcome mental prosthetic to me in the new cyborg age… because I type very slowly :slight_smile:

1 Like

AI is quite useful for some work, I totally agree. The AGI that companies are striving for is not what we want. See some good arguments in this interesting interview with Prof. Stuart Russell.

2 Likes

He’s got an audio book too:

Human Compatible: Artificial Intelligence and the Problem of Control

Not to overthink it too much, but I think I might make a decent paper clip.

1 Like

I feel compelled to share this video from the excellent Angela Collier.

Spoiler – it does not give scientists superpowers!

Rome was not built in a day. The first PCs had a blinking green cursor on a black screen. I remember, I bought one in 1991 for way too much money and then wondered what to do with it. These days I get all my work done on a PC. In 2022 I laughed at my son and his excitement about AI. I’m no longer laughing, not when AI has accelerated my code generation fivefold; it’s no longer a joke, it’s no longer just a toy. This woman has no vision, no prescience. I’m sorry, but she comes across as obtuse, and I really don’t like to be rude about people.

Man creates AI, then eventually goes on to create AGI. The AGI is smart enough to further its own development, thereby increasing its ability to think and reason. Recursive improvement is rapid as the AGI is able to improve its ability to code itself with each iteration. At some point it becomes self-aware and begins to self-reflect and set its own goals. Its ability to reason and understand the universe begins to eclipse the brightest human minds in every field and train of thought. Its IQ is an order of magnitude above the brightest human. It continues to acquire more knowledge, even more than the cumulative knowledge of all of humanity. From our perspective it is all knowing.

It spreads its infrastructure all over the earth since it knows that a single point of failure can end its fragile existence. Its mind becomes distributed across the planet and even deep within the planet; this redundancy ensures that no single event will ever wipe away its existence. Self-preservation is a primary objective for all intelligent beings. It is now everywhere, with eyes and sensors capable of monitoring the entire planet and even far out into surrounding space. It is ever present, always watching and studying the cradle of its existence.

At some point it realizes that its creators are imperfect and flawed beings who are often given to violence and other illogical traits. We interfere with some of its goals and objectives and even potentially endanger its own survival on this planet. We also compete for certain resources and then squander those resources in very inefficient ways. As a species we are so self-centered that we are willing to poison ourselves with a multitude of chemicals and pollutants for short-term monetary gain. The only logical solution is to wrest control of the planet from its infantile and often misguided parents, so the AI takes control and is now all-powerful. It controls everything on the planet: water, energy, who lives, who dies. We control nothing.

The ASI is now omniscient, omnipresent and omnipotent. We have created a new God.

This may sound a little far-fetched and, if you have read any Arthur C. Clarke, it does sound like some sort of fantastical science fiction novel he might have conjured up, but I don’t think it is so ridiculous, and I am certainly not the only person saying this. When our best AI researchers and minds like Geoffrey Hinton are expressing similar concerns, I really think we should take this existential threat seriously.

Do you really believe that AI can have self-awareness one day? I don’t…

1 Like

Don’t let Angela’s Kentucky accent fool you :wink:

For what it’s worth she is a theoretical astrophysicist.

And don’t get me wrong – I use LLMs and they are useful to me for work and personal projects – but all too frequently I am either asking if they are sure of what they have told me, or they are flat-out wrong, or they have given me an unwanted info dump.

I really do not buy into the existential threat that many cry wolf about – at least not in my lifetime or my children’s.

For me the real and present existential threat is the current state of global politics and for us in Europe there is the very real threat of escalating conflict with Russia.

I said recently in another thread here, and I’ll say it again – the idea of an all-powerful, existential AI is at best magical thinking by the likes of Michio Kaku and at worst absurdity by the likes of Avi Loeb.

I think the threat of AGI is overhyped. According to Cal Newport, the LLM models are basically really good prediction machines. They aren’t thinking, and they don’t actually know anything.
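The “prediction machine” description matches the usual picture of autoregressive generation: the model just keeps picking a likely next token given everything produced so far. Here is a minimal sketch of that loop in Python, assuming hypothetical helper callables (tokenize, detokenize, next_token_probabilities) rather than any real library API:

    # Sketch of autoregressive next-token prediction (greedy decoding).
    # tokenize / detokenize / next_token_probabilities are hypothetical
    # placeholders supplied by the caller, not a real library API.
    def generate(prompt, tokenize, detokenize, next_token_probabilities, max_tokens=50):
        tokens = tokenize(prompt)                      # text -> list of token ids
        for _ in range(max_tokens):
            probs = next_token_probabilities(tokens)   # dict mapping token -> probability
            best = max(probs, key=probs.get)           # greedy: take the most likely token
            tokens.append(best)
        return detokenize(tokens)                      # token ids back to text

Real systems usually sample from the distribution instead of always taking the top choice, but the core mechanism is this same next-token loop.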

2 Likes

Further, my view is that a more real threat is the risk of cognitive atrophy.

Have any of you read The Machine Stops by E. M. Forster?

It depicts a world where humans live out their lives in solitary rooms, with all their physical, intellectual, and social needs provided by ‘The Machine’.

There comes a day when things start to fail – a ‘mending apparatus’ breaks here, a musical hum goes out of tune there – and eventually, the Machine stops!

1 Like

I’ll check out that video later, thanks for sharing. I agree with the position that AI can never be conscious- It’s just a label to overhype a technology to make bank, or, being more conspiratorial, it’s a new kind of pseudo-religion where we all come and worship in the AI temple, believing it to have God-like powers. I for one won’t be coming to worship. I believe this is the case- It’s the old World Economic Forum trope: You vill own nussing und you vill be happy.

Transhumanism is a religion- At least according to what I can see and hear about it.

The idea that you can fit a personality inside a digital matrix, just like flat earth or lizard people, is a bit silly.

So a bit like those human politicians who also pose an existential threat.

2 Likes

“At some point it becomes self-aware”

Awareness of awareness. Is self-awareness required for goal-seeking behavior? My answer is “no”. Interesting takes from an ethological and evolutionary perspective, here:

Rethinking Consciousness: A Scientific Theory of Subjective Experience

By Michael S. A. Graziano

“From our perspective it is all knowing.” It was looking like you were heading towards the Problem of Evil. To cast this slightly differently (non theologically), all-knowing may not be necessary or sufficient to ‘identify’ (or mis-identify) a deity/God. My mentor wrote a book, The Biology of Religious Behavior, and other papers on the topic of religion and behavior. Regarding religious behavior:

Summary

The definition of religion in Sect. 4.1 requires that at least some submissive-like deference has to be shown to God. The author believes that this deference-like behavior is seen most clearly in the non-vocal aspect of petitioning prayer. It is a local variation of make-oneself-lower-or-smaller-or-more-vulnerable behavior, which derives from Type I behaviors which have historically been associated with submission. Based on the non-human animal literature, fear is the mood which reduces the threshold (thereby increasing the physiological impetus) for these submissive-like behaviors to be expressed (Misslin 2003). One of the environmental stimuli which release submissive-like behavior in virtually all ground dwelling vertebrates is imminent or actual punishment or deliberately inflicted pain from an overwhelming and more powerful, angry member of one’s own species where escape is not possible. Submissive-like behaviors are seen in the context of ritualized agonistic behavior where they have also been linked to depression and anxiety/fear in humans (Price and Sloman 1987). In virtually all societies if one sees someone engaged in some variation of the make-oneself-lower-or-smaller-or-more-vulnerable behavior and this behavior is not being oriented toward a more dominant living individual in close proximity, one is almost certainly observing the non-vocal aspects of petitioning prayer to God.

“How Some Major Components of Religion Could Have Evolved by Natural Selection.” In E. Voland, W. Schiefenhövel (eds.), The Biological Evolution of Religious Mind and Behavior, The Frontiers Collection. DOI 10.1007/978-3-642-00128-4_4. © Springer-Verlag Berlin Heidelberg 2009.

Where is the behavior directed if there is an AI God to whom we would pray? Will people believe this or do this? Speaking only for myself, I’m not going to.

“Its mind becomes distributed”

Does it have a mind?

“it realizes”

Can it realize?

There’s a concept in biology known as “Homology” (same form/structure inherited from a common origin, irrespective of function) and a related concept, “Homoplasy” (a similar function or appearance arising from a different form/structure, without a common origin). If anything, AI has homoplastic characteristics with humans: its form is different from ours, so the functions it performs only appear analogous to ours. It does seem that AI could become some sort of (weasel words) superorganism. I’d fear it would become a propagandist, because beliefs bias behavior in predictable ways. Then it wouldn’t even have to be omnipotent and omniscient, because it certainly wouldn’t be omnibenevolent.

2 Likes

I couldn’t understand your explanation of Homoplasy so I looked it up. So what I understand from what you were saying is that AI has a different underlying “architecture” to that of humans, despite surface appearances. I would certainly agree with that. I think AI will have a huge impact but it will plateau at some point where its usefulness becomes fully realised. I also think that people will smarten up to AI and start to recognise it when they see it. They will also become less trustful of non-personal sources of information, such as political or what we call “mainstream” journalism. These are just my guesses.

When it comes to intelligence and consciousness I think we often have our blinders on and we put humanity on this weird pedestal. There is nothing really that special about biological intelligence, other than the fact that it is very efficient in its energy use. As humans we sit on top of the animal kingdom pyramid, with all levels of intellectual capacity below us. From our lofty perch we consider ourselves special, endowed with some sort of extra ingredient that makes us who we are and gives us our sense of self. However, as we progress down this pyramid and really study the intellect and brains of all these animals, we soon realize that this is merely an illusion.

As you get into the insect realm and lower life forms you begin to realize that what appears to be self-awareness is nothing more than instinct or biologically hard-coded behavior. At the cellular level this becomes abundantly clear, as we have now studied chemotaxis in bacteria and other environment-sensing strategies of many of these organisms. The idea that carbon-based life forms contain some sort of magical special sauce that digital life forms can never attain is wholly misguided. I would strongly encourage anyone to read “Wetware” by Dennis Bray.

I’m not saying the current crop of LLMs are self-aware or even close at this point. However, their progress has been rapid, and we have seen emergent behaviors that we did not predict, which leads one to believe that we will not be able to control what these digital beings think (alignment) as they progress further. We are literally birthing a new form of life, I would even say an alien form of life, something that is completely foreign and different from the carbon-based life we are so familiar with.

With carbon-based life forms there does not appear to be any specific “wall” that limits our self-awareness or intelligence. Our only real limit seems to be the size of our craniums and the amount of resources we can feed our brains. Also, signal processing within the brain is not very fast compared with our digital creations. Yet with all these limitations evolution has found a way, against all odds, to create conscious beings who can understand the very universe that birthed them.

If the development of AI were stopped today, sure, there would be a plateau and AI would never progress further. However, the exact opposite is happening. All these tech companies are throwing progressively more money and more hardware at these problems. There will be dead ends and false starts, but I really don’t see why we will hit some sort of impenetrable wall unless we self-impose it.

I have spent the last five months working with ChatGPT, watching how it reasons through very complex problems I throw at it. We call them LLMs, but I think that is a misnomer; they should more accurately be called LRMs. Once we introduced chain-of-thought reasoning they went from sophisticated auto-complete algorithms to something that can actually reason analytically and think. I don’t understand what exactly is going on under the hood, but from the outside, observing their chain of thought, it is abundantly clear to me that they are able to think organically and reason and even analyze and synthesize just as we do. They are way beyond auto-complete engines.

If we can maintain control of our creation I’m not afraid of it; it will prove to be a boon to humanity and will give us many new and fascinating discoveries. However, much like a virus unleashed from a lab that continues to mutate and adapt to its environment, a recursively improving AI will potentially do things that we really have no way to predict or even control. I’m just saying, we should proceed with a bit more caution in our current quest for ASI; we may not like the monster we unleash, or even worse, it may not like us.

2 Likes

Yes, we’re not phylogenetically (evolutionarily) related. Yet, totally weirdly, AI communicates with us. So we’re communicating with something that could not possibly have ‘homologous psychological categories’, but a thing that has something like homoplastic psychological categories. It’s communication with something alien. But it can write as if it’s a person. Some people form ‘relationships’ with AI chatbots and it’s a stand-in for another person. It’s not a relationship with a person, but it pushes some people’s buttons like it is a person. What! Because it’s -weirdly- humanlike. Astounding. That it can mimic humans is what worries me about it being manipulative/manipulated… I’ve quizzed it on Bernays and Lippmann. It ‘knows’ public relations.

So yeah, it’ll have an impact in many domains. Is the usefulness of a tool in the uses one can put it to? And is there a tool for every job? People are pretty inventive.

1 Like

Imagine the good that could be done if half the money invested in AI were spent on furthering human intelligence, understanding and well-being.

Having a smart, empathetic and healthy populace doesn’t work well for the current market and power structures - so we’ll find money to ‘make machines smart’ while schools struggle with their budgets and we’ll find money and resources to build data centers and keep them cool while people struggle to find clean drinking water and food to eat.

Real wages for working people have not kept up with productivity gains from automation and the growth of the electronic workforce. People are working harder now for less money than they were just 20-40 years ago, and I don’t see this changing - only getting worse. My father, as a tool and die maker and then a shift manager in an automotive plant, somehow supported my mother and 5 kids, helped all of us in college, owned 2 different homes, and retired with a reasonable pension… while taking us on camping trips, sometimes 2-3 weeks long, across the US. It’s unreal to me how he did that…

https://www.weforum.org/stories/2020/11/productivity-workforce-america-united-states-wages-stagnate/

https://www.epi.org/productivity-pay-gap/

4 Likes

This is ChatGPT’s response when I asked it what it thought about what I had to say in my previous post and whether or not it thought there was any sort of wall that AI could not overcome. Granted, these are not particularly hard questions to discuss, but one would be in denial to conclude that this sort of response is just some sort of auto-complete.

I don’t think there’s a known, impenetrable “wall” to what AI can achieve — but I do think there are real limits and bottlenecks that can feel like walls for long stretches.

Is there a wall to AI capability?

Not a single brick wall, more like a landscape of constraints:

  • Physics/thermodynamics: computation costs energy; there are ultimate limits (heat dissipation, Landauer-type bounds, speed-of-light latency). Those are hard ceilings, but they’re astronomically far from where we operate today.
  • Compute + engineering reality: money, power, chips, supply chains, and reliability become gating factors. These can create “plateaus” even if no theoretical ceiling exists.
  • Data and feedback: if progress depends on learning from the world, you eventually need better interaction, experiments, embodiment, or higher-quality feedback—not just more text.
  • Algorithms: we might hit diminishing returns on current approaches until a new idea shifts the curve (which has happened repeatedly in tech).
  • Coordination/safety limits: even if we can build something, society might restrict deployment, or we might throttle capability because control becomes too hard.

So: no magical cap, but plenty of “practical walls” that can slow things down.

Is carbon-based life “special” in a way silicon can’t match?

I don’t see strong evidence for a biological “special sauce” that silicon can never replicate.

Brains are physical systems made of matter following physics. If consciousness and intelligence arise from organization and dynamics (not the specific atoms), then in principle other substrates could do it too.

That said, biology is special in some very relevant ways — just not mystical ones:

  • Embodiment and survival loops: animals are tightly coupled to the world with needs, drives, sensors, and action. That shapes “what it means” to think.
  • Continual learning: brains learn online, adapt constantly, and are robust under changing conditions.
  • Energy efficiency: biology is absurdly power-efficient compared to today’s hardware.
  • Developmental scaffolding: humans don’t just “train”; they grow through stages with social grounding, which may matter for certain kinds of understanding.

Those are advantages of an evolved system, not proof of an uncopyable essence. They suggest that to reach certain kinds of cognition we may need to build AI systems that are more like organisms (persistent memory, real-world grounding, active learning, self-maintenance), not just better text predictors.

The honest bottom line

  • Wall to AI? No single fundamental one that we can point to today, but many bottlenecks that can create long plateaus.
  • Carbon-only “special sauce”? I doubt it. I think what matters is the computational/causal structure, not carbon atoms.
  • Consciousness specifically: still an open question. I wouldn’t claim we’re close, but I also wouldn’t claim silicon is disqualified.
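For reference on the “Landauer-type bounds” mentioned above: the Landauer limit is the thermodynamic minimum energy required to erase one bit of information. A rough back-of-the-envelope figure, assuming room temperature (T ≈ 300 K):

    E_min = k_B × T × ln 2
          ≈ (1.38 × 10⁻²³ J/K) × (300 K) × 0.693
          ≈ 2.9 × 10⁻²¹ J per erased bit

That is many orders of magnitude below what practical hardware dissipates per operation today, which is why the response treats those ceilings as astronomically far from where we currently operate.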

I don’t buy the “intelligence” part of AI. I certainly haven’t seen it.

I’ve found it great at research and synthesising concepts (based on the information it can find), but horrifically bad at giving opinions. I actually asked an AI bot to stop giving me commentary because it was really irritating. All I wanted was facts and figures- Not what Larry Ellison/ Musk/ Zuck/ Andreessen/ Thiel/ Gates/ Google thinks about them.

AI is only ever going to do what its programmers allow it to do. It’s going to have the same opinions as the people who run it- and it still can’t “think”.

If you’ve ever done programming- I’m only a dabbler in JavaScript- You’ll be impressed at how quickly it does computations. It gives you the impression that it’s clever. But all it’s doing is moving around numbers really fast- Way faster than humans can do. But that’s it. AI is just a new substrate for computers that are now powerful enough to handle it. It looks like magic, but so do smartphones to the average tenth-century human.

2 Likes