The future of AI

I don’t know all the details of what restrictions the Biden executive order placed on AI, but I do think the government should keep its finger on the pulse of AI development, especially as we start approaching AGI. Within the next couple of years these AI engines could become sentient, which is itself a very scary proposition.

Trump’s admin did a similar thing in 2017 with gain-of-function (GOF) research: the regulations imposed by Obama in 2014 were removed. Without the regulations we ended up with a killer virus; some would argue otherwise, but the deregulation of GOF research did happen. And my suspicion is that GOF research played a major role in the creation of what would become the Covid virus.

We certainly don’t need a killer AI overlord in the near future. Sometimes regulations are there to protect us from technology that can go off the rails.

https://www.reuters.com/technology/artificial-intelligence/trump-revokes-biden-executive-order-addressing-ai-risks-2025-01-21/

2 Likes

Chain-of-thought (CoT) prompting is relatively new and is starting to mimic human thought processing and reasoning. Another new Google innovation, called “Titans,” gives AI a long-term memory similar to the human mind:

The next step is to make the AI engines “real time,” or always on, constantly receiving inputs and producing outputs. If we do that we might be able to coax sentient, self-aware, or reflective behavior out of these engines.
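To make the idea concrete, here is a minimal sketch of what chain-of-thought prompting looks like compared to a plain prompt. This is a toy illustration only: `direct_prompt` and `cot_prompt` are hypothetical helper names, and no real model API is called, since the technique is just a change in how the question is phrased.

```python
# Toy sketch of chain-of-thought (CoT) prompting.
# The only difference from a direct prompt is an instruction that nudges
# the model to write out intermediate reasoning steps before answering.

def direct_prompt(question: str) -> str:
    """Plain prompt: ask for the answer directly."""
    return f"Q: {question}\nA:"

def cot_prompt(question: str) -> str:
    """CoT prompt: ask the model to reason step by step first."""
    return f"Q: {question}\nA: Let's think step by step."

question = "A train leaves at 3 pm and travels for 2 hours. When does it arrive?"
print(direct_prompt(question))
print(cot_prompt(question))
```

In practice the CoT variant tends to produce longer outputs that show intermediate reasoning, which is what makes it look more like human thought processing.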

Things are going to get very interesting in the next few years…

Maybe HAL 9000 wasn’t so far fetched after all.

1 Like

Or Colossus.

3 Likes

So, skynet is back on the table. I did nazi that coming.

7 Likes

I don’t know but stuff is moving quickly in the AI world and when some of the “experts” start talking about existential threats it does get one thinking.

The problem is that our technology has advanced to the point that it is getting a little scary. Honestly, we’ve had really scary technology since WWII and the invention of the atom bomb, but at least that sort of technology is still within our control, and won’t self-replicate or do something crazy.

Some of the stuff we are working on now could literally go completely out of our control.

Obviously biological “weapons” are a good example of this. However, other forms of “life” could be just as problematic. AI is a form of artificial life, but it is really only the “software” component.

1 Like

Yeah, well, all I can say is the call is coming from inside the White House…

2 Likes

It’s a shame we seem to forget that the most powerful person on the planet is the individual with freedom of thought and action. AI developers need us to accept AI for it to be worth developing. If there are no humans and no market to sell it to, then what was the point? We have a choice in our level of acceptance of AI technology, but consider that its development and acceptance are primarily driven by greed and power rather than any philanthropic consideration for the planet and humanity. You as an individual can pull the plug or limit your engagement at any time.

Do you take the red pill or the blue pill? (The Matrix, 1999)

No doubt Tim Berners-Lee is having his Oppenheimer moment right now.

no you can’t. not if you want to be a part of society.

I disagree with the overwhelming global surveillance, but successive governments have developed it, so wherever I go I know I’m seen and tracked.

if/when AI is deployed in a service, then all the users are affected. I use the internet, and the internet is built on tools that massively use AI. I can’t escape it.
our engagement is partly forced, and judging by the last few years of AI craze everywhere (including public space, government and administration…) in most countries, it’s gonna continue.

and in the US, it’s in the hands of an apartheid-born nazi who talks about financing his friends across Europe.

It’s day 2 and I’m already exhausted. and I’m not even american.

4 Likes

You can, but it is your choice. Are you perhaps just afraid of an alternative society that doesn’t rely on dictatorial control but on mutual respect? 30 years ago the Web was an academic tool for sharing information to benefit society. Now the web, through social media, operates as a commercial suck on society, fuelling division and the distribution of manipulative lies. How badly have your local shops, which served your local community and provided employment, been devastated by the laziness of online shopping, in part supported by child labour? Don’t worry about it though; all that money went to someone who just wants to put a flag on Mars while the world burns.

Red pill or blue pill? :wink:

While the topic is about the future of AI, shouldn’t it be about the future of humans and trades?
I recently used AI to create an extension to see how easy it would be:

Should this be allowed, for instance?

2 Likes

Pierre – I’ve deliberately ignored what’s been happening since yesterday because I need to preserve my mental health.

I am relieved that here in UK we had a change of Government!

2 Likes

I saw a DW YouTube video on the Amish and it was interesting that some of the orders have adapted over the years.

There was an Amish construction professional who – in order to compete – had adopted some technology so long as it didn’t creep into the home environment.

The telephone used for work at home was located in a small shed.

I don’t have a problem with this. You are accessing assistive technology to complete a task; in short, it has served as a tool. You have the option to use it or not. It is no different to our migration from drawing board to CAD. If AI brings medical benefits then who is going to reject its contribution? But if it blatantly aims to manipulate and control behaviour in a negative or divisive way, then that is a problem.

To paraphrase Mr Asimov’s three laws…

  1. AI may not injure a human being or, through inaction, allow a human being to come to harm.
  2. AI must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. AI must protect its own existence as long as such protection does not conflict with the First or Second Law.

I wonder if those concepts are burnt into the AI chips…
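Whether or not anything like this is burnt into silicon, the strict priority ordering of the three laws can be sketched as a simple check. This is purely a toy illustration; the `Action` fields and the `permitted` function are invented names, not anything a real AI system implements.

```python
# Toy illustration of the priority ordering in Asimov's three laws:
# each law only applies if it doesn't conflict with the laws above it.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would the action harm a human?
    ordered_by_human: bool  # was the action ordered by a human?
    self_destructive: bool  # would the action destroy the AI itself?

def permitted(action: Action) -> bool:
    # First Law: never allow harm to a human, no matter what.
    if action.harms_human:
        return False
    # Second Law: obey human orders (harmful orders already rejected above).
    if action.ordered_by_human:
        return True
    # Third Law: preserve its own existence, subordinate to the first two.
    return not action.self_destructive
```

The interesting part is that the ordering does all the work: a human order that causes harm is rejected by the First Law before the Second Law is ever consulted, and self-preservation only matters when neither higher law applies.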

Don’t forget the Zeroth Law:

  • Asimov later added the “Zeroth Law,” above all the others – “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

More interesting reading here:

1 Like

of course they’re not. they go against profit, and big tech loves profit. and it’s just an idea from a writer in the 1940s.
AI is a tool that learns from us, and considering the people in charge of it, there is little chance of seeing it obey the three laws ^^
it’s learning by scraping tons of resources and ingesting them without distinction.

I wouldn’t trust the techbros to take care of my plants in my absence, and right now they are playing sorcerer’s apprentice with AI in a non-regulated environment that will impact anyone remotely part of society.
and that sucks.

me too. I was on a train all day, and I’m teaching all week, so I’m only getting the “worst” parts in the evening.
Can’t wait for John Oliver to start the new season so I can keep up with stupidgate :slight_smile:

1 Like

If the red pill sends me back in time and the blue pill gives me millions in cash, I would take the red pill (first) :wink:

1 Like

I don’t get it. Go for the blue one. Then buy yourself a dozen red ones so you can decide how far back in time you would want to go.

What could possibly go wrong if a bunch of for-profit, unregulated, uber-wealthy people are the ones who can afford to buy the best and fastest hardware and decide how and what the AI will be doing…
Combine that with a superiority complex and/or other ‘special’ characteristics and you have a nice sci-fi supervillain movie script…

4 Likes

I’m really glad that AI has sucked up all the renewable energy making terrible photos.

1 Like

Shhh, it’s listening!

3 Likes