One thing most of us seem to agree on, from Stephen Fry to Steve Bannon, is that artificial superintelligence development should be paused while we figure out, y’know, the safety concerns.
Artificial superintelligence is, for most of us I think, quite a scary thought: a human-like AI, far more intelligent and capable than any we’ve seen before, developed by companies that often seem unprepared for (or indifferent to) the potential knock-on effects of its deployment. Goody. Still, we’ve been told by industry leaders like Mark Zuckerberg that…