Yes, computerised neural networks are just very sophisticated statistical estimators. Listen to Prof. Stuart Hameroff MD on the subject on YouTube: there are quantum processes in every cell (microtubules), and this raises the number of variables needed to model the brain immensely. So Marvin Minsky's complexity barrier, at which consciousness should magically manifest, is still many orders of magnitude away - if it would even happen at all from complexity alone, which imho is mere wishful thinking.
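For what it's worth, here is a toy illustration of the "statistical estimator" point - my own sketch in plain numpy, not anything from the thread: a tiny neural network trained by gradient descent is literally doing nonlinear least-squares regression, i.e. statistical estimation, on noisy samples.

```python
# Illustration only (my own sketch, not from the discussion): a one-hidden-layer
# network fit by gradient descent is a nonlinear statistical estimator -- it
# minimises mean-squared error over noisy data, nothing more mysterious.
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of an unknown function the network must estimate.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)

# One hidden layer with tanh activation -- the "sophisticated estimator".
W1 = rng.standard_normal((1, 16)) * 0.5
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5
b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)          # hidden activations, shape (200, 16)
    pred = h @ W2 + b2                # predictions, shape (200, 1)
    err = pred - y

    # Mean-squared error: the classic statistical loss.
    loss = np.mean(err ** 2)

    # Backward pass (hand-derived gradients for this tiny model).
    g_pred = 2 * err / len(X)         # dLoss/dpred
    g_W2 = h.T @ g_pred
    g_b2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T
    g_pre = g_h * (1 - h ** 2)        # derivative of tanh
    g_W1 = X.T @ g_pre
    g_b1 = g_pre.sum(axis=0)

    # Gradient descent step.
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

print(f"final MSE: {loss:.4f}")  # converges toward the noise floor (~0.01)
```

Nothing in the sketch settles the consciousness question either way; it just makes concrete what "estimator" means here - the network's entire job is to minimise a statistical loss over data.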
What if consciousness is not the problem?
What if the problem is that its ability to pass the Turing test, fake audio, fake images, read images etc. is already going to shape society with endless second- and third-order effects?
What if running the experiment with 8 billion humans and no control group is the problem?
So who wants to write "AGI doomerism doomerism will doom us all"?
Yes, I am aware of the irony of fighting alarmism with alarmism 😂
The self-awareness doesn't alleviate anything though. It just kinda makes you look foolish for following through after knowing better.
I think the big mistake is assuming that once a smart enough AI is able to execute scripts and access the web, it can be stopped easily - it can't be.
It only takes one human with bad intentions giving it the goal of destroying part of humanity, and then it can get creative: make some money online, pay some people to assemble physical things for it (a bomb, a gas, a new virus) and then deliver them cleverly.
I don't think anyone is saying the current wave of AI needs to be stopped, but I'm not sure why it doesn't sound reasonable to establish some kind of alignment group to make sure control stays in our hands. Will it slow down progress? A little bit, maybe. Is that worth it to ensure it will be safe? Definitely.
ChatGPT is not at that level yet, but someone will get there eventually.
1. The current type of AI being created does not resemble any sort of precursor to something we might consider intelligence, and there is currently no research breaking ground in that direction, so it's a bit of an unfounded fear in a large sea of possible unfounded fears.
2. The attack vectors you describe do not require AI to execute. A human could attempt those things today. I'm not sure an AI would make them more or less likely to try, or to succeed.
3. The problem with "alignment groups" is that it assumes there is some set of humans with the moral authority to represent us all, when we very clearly exhibit non-shared values and generally segment ourselves accordingly. This seems like a recipe for authoritarian control. Who would you trust to control your fate?
1. It can code, it can write coherent arguments, it can read and interpret images, create them, create sound and transcribe videos ... plus we don't have a reliable model of what's going on inside the digital brain ... how is that not a precursor with all the tools it needs? And don't tell me "they are not connected yet" - they will be as soon as the first idiot wants to make a movie entirely with AI just because he can.
2. It's the supercharging of all people's intentions that is the problem. You elevate the power level of everyone, including the psychopaths, narcissists and Machiavellians. Maybe they balance out - but maybe it's a zero-sum game-theoretic nightmare. How do you know? How could anybody have the arrogance to say they know?
3. Exactly. So fuck it, it's just not a problem we can solve. Let's flip a coin and do it anyway?
Hi Tom,
We want to debate this on BBC Radio 4 Moral Maze this week. Please email me if you'd like to be involved. [email protected]
Interesting take. I definitely agree that there is a big game-theoretic issue with AI being captured by the Law of the Jungle.
I do believe, though, that he makes a mental leap when he points out the three worst-case scenarios and then goes on to say there are no other second- and third-order effects to worry about. With technologies this influential, the issue is not what we know might happen but what we don't yet anticipate.
To say "ah, cant do shit about it" - so lets just role it out and flip a coin seems to be a bit irresponsible. And I dont think that coordination across companies, regulators and the people is immediately authoritarian, thats a foolish argument. Regulation does not equal communism - thats absurd. Many things are regulated that should absolutely be regulated.
Also: to just say "ah, it's a tragedy of the commons", call it a day without looking for a solution, and instead call everyone concerned about the risk a doomer - idk mate.