Yosemite morning

Sunday, March 5, 2023

Energy and smart machines

I have had a few interesting things come past my nose recently here in cyberland, like this interview with Mark Mills from the very conservative Manhattan Institute: "The energy transition delusion: inescapable mineral realities." The man is very bright, and those of us interested in renewables might consider his warning. Never a bad thing to listen to a smart physicist.

*

Renee sent this: "Museum of the future AI apocalypse" opens in San Francisco. Sort of a dire foreboding of the coming AI disruption and Ragnarok. I still haven't got my head around the ChatGPT thing, but I am always late to the party.

But Hudgins sent a very good riposte on the subject, written seven years ago. A Kurzweilian riff on the coming superintelligence.

And here is a treatise on the subject from 2015. It is an excellent primer and a pretty good read all in all. Make sure to go to the bottom and click through to the second part.

And we are a lot closer to exponential recursive self-improvement than one would think. 

https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html 

Of course, one man's exponentially recursive self-improvement might just be a bag of dung to another guy, a person like me, for instance, who gets very afraid for our collective survival when trying to grok all this weird "progress." Don't ever leave your survival to the engineers; they have a penchant for a far different concept of what constitutes acceptable collateral damage.

More on recursive self-improvement here:
A core component of the classical case for AI risk is the potential for AGI models to recursively self-improve (RSI) and hence dramatically increase in capabilities once some threshold of intelligence is passed. Specifically, it is argued that an only somewhat superhuman AGI will rapidly be able to bootstrap itself into an exceptionally powerful superintelligence able to take over the world and impose its values on us against all opposition.

A lot of the debates on fast vs. slow takeoffs hinge on the feasibility and dynamics of the process of RSI, as do many potential counters to AI risk. If strong takeoff is inevitable, then strategies like boxing, impact regularization, myopia, human-in-the-loop auditing, interpretability, and so on are intrinsically doomed, since the AGI will simply become too powerful too quickly and can thus break out of any box or outwit any human operators. In such a world, iterative development of AGI safety techniques is doomed, since the first AGI we build will immediately explode in capabilities. To survive in such a world, we need to design a foolproof solution to alignment before building the first AGIs.
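
For the numerically inclined, here is a tiny toy sketch of why the assumed returns on self-improvement matter so much in these takeoff arguments. This is my own back-of-the-envelope illustration in Python, not anything from the quoted essay; the exponent r and the rate are made-up knobs, not claims about real AI systems.

# Toy model of recursive self-improvement (RSI): each step the system spends
# its current capability c on improving itself and gains rate * c**r.
# r > 1 means improvements compound toward a fast takeoff; r < 1 means
# diminishing returns and a slow, flattening climb.

def rsi_trajectory(c0=1.0, r=1.2, rate=0.05, steps=90):
    c = c0
    history = [c]
    for _ in range(steps):
        c = c + rate * c ** r      # current capability buys the next improvement
        history.append(c)
    return history

fast = rsi_trajectory(r=1.2)   # super-linear returns: growth runs away
slow = rsi_trajectory(r=0.5)   # sub-linear returns: growth levels off
print(f"after 90 steps -- fast: {fast[-1]:,.0f}   slow: {slow[-1]:,.1f}")

Same starting capability, same effort per step; only the assumed exponent differs, and that gap is more or less the fast-versus-slow takeoff debate in miniature.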

Artificial intelligence and power structures. 
