My concern is that people are rapidly attempting to build AGI, while applying lower standards of care and safeguards than we would expect to be applied to "team of humans thinking incredibly quickly", which is a bare minimum necessary-but-not-sufficient lower bound that should be applied to superintelligence.
Among the many ways that could go wrong is the possibility of exploitable security vulnerabilities in literally any surface area handed to an AI, up to and including hardware side channels. That said, given the current state of affairs, I expect that to be a less likely path than an AI that was simply given carte blanche (e.g. "please autonomously write and submit pull requests for me" or "please run shell commands for me"): so many AIs are already being given carte blanche that breaking out of stronger isolation is unnecessary.
But that statement should not be taken as "so the only problem is whatever the AI is hooked up to". The fundamental problem is building something smarter than us and expecting to have the slightest hope of controlling it, absent extreme care to prove it safe first.
We currently hold frontier AI development to lower standards than we do airplane avionics systems or automotive control systems.
This is not "regulatory capture"; the AI companies are the ones fighting this. The people advocating regulation here are the myriad AI experts saying that this is a critical problem.