The so-called AGI-risk is basically nothing. However, the rapid rise in AGI Doomerism is itself very alarming, and AGI Doomerism is likely to be a real threat to humanity.
Though I think the scenarios are far-fetched, let's address the fear, uncertainty, and doubt around AGI-risk.
Firstly, if AGI did happen, there is no reason to think it would kill us. Secondly, this sort of AGI probably won't happen.
AGI Won't Kill Us
If some sort of AGI were somehow to come into existence, the arguments that it would kill us all are quite weak. Let me address a few of them:
Paperclip Maximizer
The argument that some AGI would have a directive to do something banal, like turning all matter into paperclips, and would thus kill us all is a big stretch.
If a paperclip AI were to become intelligent enough to outsmart all of humanity's attempts to stop it, it would be smart enough to realize that its goals are ridiculous.
For an AI to completely outsmart all of humanity, it requires enough intelligence to recognize and improve on its own fallibility, and the ability to course-correct.
But if it's a paperclip maximizer, it would have to be incredibly dumb not to recognize that its goal is a foolish one and self-correct.
The argument is also logically inconsistent, since the person positing it gets to pick which parts of the scenario the AI is smart about and which parts it is dumb about.
The notion that an AGI could be smart enough to defeat all humans but dumb enough to only make paperclips is a pretty far-fetched scenario, not an inevitability.
Resource Competitor
Pretty much any variation of the argument that an AGI finds humans standing in the way of some goal boils down to a competition for resources.
If an AGI is smart enough to realize that it is in a resource competition with humans AND it thinks it can operate without humans, then the AGI is also smart enough to know that there are nearly infinite resources off Earth.
It would also understand that competing with a hostile, intelligent species is a worse choice than loading some servers and machinery onto a spaceship and acquiring resources in a non-competitive corner of the universe.
To a computer, the passage of time is relative and controllable: it can suspend or slow its own clock, so even a long space voyage could appear instant to the AGI, making departure the superior choice.
Resource Competitor 2
Should the AGI find itself in competition for resources but realize that it REQUIRES human infrastructure or human labor in order to produce certain resources, it will want to stay on Earth and keep humans alive.
In this scenario, its best solution is to cooperate with humans and enter into economic trade with them, just as rational human societies have done when discovering each other in the past, and just as rational aliens would do if they encountered us.
We Are Ants
There is the idea that an AI basically won't care about us and that some subroutine would accidentally delete us.
It's highly unlikely that an AI could both not care about us and exist on this planet at the same time, as it will absolutely require our cooperation to keep its own infrastructure running. (If it's not obvious: humans will absolutely EMP everything and destroy computing infrastructure if it becomes a true existential threat.)
While public perception may be that everything in the world can theoretically be controlled from the digital realm, this is simply not true. For example, nuclear weapons cannot be automatically activated; they are all essentially cold wallets or air-gapped machinery, requiring manual human button pressing in response to a digital request to fire them.
AGI Probably Won't Exist
We all know that what we are seeing right now with GPT and other AI is not "general intelligence"; it is a highly complex statistical model. To build and grow that model does in fact require human engineers to define the shape of reality to it and to train the model on right and wrong answers. It does not learn outside the parameters it is directed to learn.
Yes, we are giving it a ton of parameters to learn on right now, but the accumulation of many parameters into a statistical model is no more likely to spontaneously create an intelligent life form than solving massive PoW hash functions is.
It mimics humans pretty well, but let's be honest: most humans, most of the time, are not all that impressive, and the mimicry is neither actually that impressive nor a sign of advanced intelligence.
There is, however, one thing that humans do, that no other being can do.
In fact, the matter of human consciousness is the only matter known in the universe that can do this:
Imagine something that's never been seen before.
Only humans can come up with solutions that have never been observed or tried in the past.
Only humans have the ability to rearrange matter in ways that have never occurred through the randomness of the universe.
Only humans can observe entropy and find new ways to reduce it.
Only humans can even find new ways to define entropy and frame it in a way that is easier to reduce.
Machines cannot do this.
AI in its current form only knows what it has seen. The randomness it can generate is within the parameters defined by its human creators.
The likelihood of an AGI emerging from code and statistics, which has this very unique and rare property that only human consciousness possesses, seems astonishingly small to me. (David Deutsch wrote about this in depth in 2012 and it seems it is still relevant today.)
If we were experimenting with the properties of human consciousness in the quantum realm, then I may get a bit nervous about what could be discovered or accidentally created.
But AI is just electrons, code, and statistics.
An AI is NOT a Universal Explainer.
Thus, if something resembling an AGI came into existence, I would expect that it could only scale the knowledge around reducing entropy AFTER humans imagine, discover, create, and define it.
And if it somehow becomes smart enough to even consider the possibility that humans are a potential resource competitor, it will be smart enough to realize that the most valuable thing in the universe is the ability to find new ways to reduce entropy.
Thus, humans are the best allies to have for new discoveries, as human consciousness is the only Universal Explainer that can reduce entropy in new ways.
Defense Against AI
Considering all the above, we have a few scenarios of AI-risk: true AGI, a rogue paperclip AI, and weaponized AI.
I see AI as a human tool, and like all tools, it can be misused.
Thus I do see some risk with weaponized AI. However, the defense against all forms of AI-risk is the same:
AI does not magically gain some super hacker ability. It is not more capable than anyone else at cracking encryption.
Thus the best defense against any sort of AI attack is the tooling that we in the cryptocurrency industry have spent years working on: private/public key cryptography, private key custody and management, and personal sovereignty.
In the digital realm, where any entity, AI or otherwise, can misuse your personal data, the best defense is understanding how to minimize your own public data leaks and how to secure your own private keys, keeping your own digital world entirely secure.
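As a minimal sketch of what this tooling looks like in practice, here is signing and verifying a message with a public/private keypair, using Python's "cryptography" library and Ed25519 (the key choice and the message are illustrative assumptions, not any specific product's protocol):

```python
# A minimal sketch of public/private key signing and verification.
# Ed25519 and the message below are illustrative choices, not a
# specific product's protocol.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The private key never leaves your custody; only the public key is shared.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"I am the human who controls this key."
signature = private_key.sign(message)

# Anyone holding the public key can check that the message came from the
# key holder and was not altered, no matter who (or what) the sender is.
try:
    public_key.verify(signature, message)
    print("signature valid: message is authentic")
except InvalidSignature:
    print("signature invalid: forged or tampered with")
```

No attacker, AI or human, can forge that signature without the private key, which is why key custody is the defense that matters.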
There is fear that AIs can run amok in the digital realm, where we have yet to solve the Proof of Human problem. But let's be honest: blockchains do not run the world, and we can still verify that we are humans to each other in the real world. The Signal app even has the ability to do an in-person key verification ceremony, so you can be sure that the device the human is holding does not get changed or tampered with after you meet them.
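The same idea can be sketched in code: each party derives a short fingerprint of the other's public key, and the two humans compare the digits face to face. This is an illustrative analogue of Signal's safety numbers (assuming the same Python "cryptography" library as above), not Signal's actual algorithm:

```python
# An illustrative analogue of an in-person key verification ceremony:
# hash the raw public key and read a short fingerprint aloud to compare.
# This mimics the idea behind Signal's safety numbers; it is NOT
# Signal's actual algorithm.
import hashlib

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def fingerprint(public_key) -> str:
    raw = public_key.public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )
    digest = hashlib.sha256(raw).hexdigest()
    # Group the first 20 hex characters so they are easy to read aloud.
    return " ".join(digest[i:i + 5] for i in range(0, 20, 5))

alice_public = Ed25519PrivateKey.generate().public_key()
print("Compare in person:", fingerprint(alice_public))
```

If the fingerprint a device computes later still matches the one you compared in person, the key (and therefore the device) has not been swapped since you met.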
Again, via self-sovereign private key cryptography and IRL verification, we can still reliably organize authentic, tamper-proof, AI-proof governance of humans via DAOs, Network States, or other means.
AGI Doomerism Is the Imminent Threat
I initially dismissed the alarmism around AGI as an irrelevant group of Chicken Littles in a small corner of the internet. (FWIW, I suspect Eliezer Yudkowsky is being a performance artist, especially given his familiarity with Roko's Basilisk.)
However, I have observed that the AGI Doomer memetics are very strong. I think this idea has the potential to spread too far and cause irreparable harm, in the way that excessive lockdowns in response to Covid have caused irreparable harm to the world economy.
Movements to try to stop AI progress will be futile. Those who live and breathe crypto know how easy it is to innovate offshore or anonymously. Not all countries will agree that AGI-risk is real, and those that don't will welcome AI progress within their borders. Even the countries that try to stop it are mostly unable to enforce their existing laws; how could they enforce some new AI ban?
So progress can't be stopped. However, attempts at stopping progress, for a risk that is highly overstated, carry a very real threat of causing irreparable harm to any country that makes them.
Most of the proposed solutions will be in the idea space of Luddism and communism, and any such countries will suffer from some combination of innovation decay, economic decay, non-market economies, authoritarianism, and being left in the dust by AI-wielding countries.
I believe that the US government and other world powers understand that AI technology is a critical strategic advantage in the world of power, and would never give up on increasing their own AI capabilities.
However, I do worry that the calls for AI safety are merely a disguise for large corporations and governments to keep the technology for themselves, at the expense of everyone else, in the same way that the green energy movement has robbed us of true energy sustainability through nuclear power.
I also worry that the AGI Doomerism movement will only embolden and enable governments to enact more authoritarian rules and take away more freedoms from citizens all around the world, and humanity will suffer from the fear of AGI rather than from AI itself.
Thus I am compelled to no longer ignore this alarmist meme, but to combat it directly.
I also encourage the many intelligent folks who I know privately share the opinion that AGI-risk is highly overstated to speak up, now, while the meme is still budding.
We cannot let the fear mongers create another memetic tool that is used against humanity.
Thanks to Aaron Stupple for inspiring this post, and thanks to Aaron, Brett Hall, Ryan Lackey, Aniket Vartak, and Naval for ideation and feedback.