The hopes and fears of runaway artificial intelligence share a common premise: that intelligence can run away at all. But it isn’t clear to me that this is so. It’s just as likely that there are a finite number of things to know and an infinite number of things that are unknowable.
Physics has uncovered ways that the universe seems to limit how much information can be gleaned from a system. And these limits don’t appear to be artifacts of our measuring instruments, but rather a distinct feature of the cosmos itself. The speed of objects through space has a limit that may not be exceeded. Entropy has no antidote. It’s not even clear whether the field of mathematics is internally consistent.
When computers blew past humans in the game of chess, they didn’t continue zooming into the stratosphere. Rather, the top engines began to slow down near an asymptote somewhere over 3,500, much as the top human players seem to be constrained to ratings just under 2,900. We conflate the speed with which computers reach and exceed our level with the idea that there is no height to which they won’t climb. We do not yet know this.
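To make those Elo numbers concrete, here is a minimal sketch of the standard Elo expectation formula; the specific ratings used are illustrative, not measurements:

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Standard Elo formula: the share of points player A is
    expected to score against player B over many games."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Equal ratings: an even match.
print(round(expected_score(3000, 3000), 3))  # 0.5

# A ~500-point gap (roughly engine vs. top human) is near-total dominance:
print(round(expected_score(3500, 3000), 3))  # ≈ 0.947
```

The logistic shape of the formula is part of why ratings plateau: once an engine scores nearly 100% against every available opponent, further real improvement yields almost no further rating gain.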
I think it’s much more likely that computers will hit a limit similar to ours, but at some multiple beyond us. Finding that limit will be a fantastic discovery, whether you believe, like me, that AI is a necessary good for curing other ills, or you fear that AI will cause more harm than good.