Extinction Level Event: Humanity's Last Invention And Our Uncertain Future
Sunday, November 25, 2012 15:45
A philosopher, a scientist and a software engineer have come together to propose a new centre at Cambridge to address developments in human technologies that might pose “extinction-level” risks to our species, from biotechnology to artificial intelligence.
In 1965, Irving John ‘Jack’ Good sat down and wrote a paper for New Scientist called Speculations concerning the first ultra-intelligent machine. Good, a Cambridge-trained mathematician, Bletchley Park cryptographer, pioneering computer scientist and friend of Alan Turing, wrote that in the near future an ultra-intelligent machine would be built.
This machine, he continued, would be the “last invention” that mankind will ever make, leading to an “intelligence explosion” – an exponential increase in self-generating machine intelligence. For Good, who went on to advise Stanley Kubrick on 2001: A Space Odyssey, the “survival of man” depended on the construction of this ultra-intelligent machine.
[Image: Light cycles. Credit: Jason A. Samfield, Flickr]
Fast forward almost 50 years and the world looks very different. Computers dominate modern life across vast swathes of the planet, underpinning key functions of global governance and economics, increasing precision in healthcare, monitoring identity and facilitating most forms of communication – from the paradigm-shifting to the most personally intimate. Technology advances for the most part unchecked and unabated.
While few would deny the benefits humanity has received as a result of its engineering genius – from longer life to global networks – some are starting to question whether the acceleration of human technologies will result in the survival of man, as Good contended, or if in fact this is the very thing that will end us.
Now a philosopher, a scientist and a software engineer have come together to propose a new centre at Cambridge, the Centre for the Study of Existential Risk (CSER), to address these cases – from developments in bio and nanotechnology to extreme climate change and even artificial intelligence – in which technology might pose “extinction-level” risks to our species.
“At some point, this century or next, we may well be facing one of the major shifts in human history – perhaps even cosmic history – when intelligence escapes the constraints of biology,” says Huw Price, the Bertrand Russell Professor of Philosophy and one of CSER’s three founders, speaking about the possible impact of Good’s ultra-intelligent machine, or artificial general intelligence (AGI) as we call it today.
“Nature didn’t anticipate us, and we in our turn shouldn’t take AGI for granted. We need to take seriously the possibility that there might be a ‘Pandora’s box’ moment with AGI that, if missed, could be disastrous. I don’t mean that we can predict this with certainty, no one is presently in a position to do that, but that’s the point! With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies.”
Price’s interest in AGI risk stems from a chance meeting with Jaan Tallinn, a former software engineer who was one of the founders of Skype, which – like Google and Facebook – has become a digital cornerstone. In recent years Tallinn has become an evangelist for the serious discussion of ethical and safety aspects of AI and AGI, and Price was intrigued by his view:
“He (Tallinn) said that in his pessimistic moments he felt he was more likely to die from an AI accident than from cancer or heart disease. I was intrigued that someone with his feet so firmly on the ground in the industry should see it as such a serious issue, and impressed by his commitment to do something about it.”
We Homo sapiens have, for Tallinn, become optimised – in the sense that we now control the future, having grabbed the reins from 4 billion years of natural evolution. Our technological progress has by and large replaced evolution as the dominant, future-shaping force.
We move faster, live longer, and can destroy at a ferocious rate. And we use our technology to do it. AI geared to specific tasks continues its rapid development – from financial trading to face recognition – and the number of transistors that can be packed onto a chip doubles roughly every two years in accordance with Moore’s law, set out by Intel co-founder Gordon Moore in the same year that Good predicted the ultra-intelligent machine.
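As a rough back-of-the-envelope illustration of what that doubling rule implies (treating it purely as arithmetic, not as a claim about any particular chip), the transistor count N after t years of two-year doublings, measured from the 1965 starting point shared by Good's paper and Moore's observation, is:

$$
N(t)\;\approx\;N_0 \cdot 2^{\,t/2},
\qquad
\frac{N(2012)}{N(1965)}\;\approx\;2^{47/2}\;\approx\;1.2\times10^{7}.
$$

In other words, the rule by itself implies roughly a ten-million-fold increase over the 47 years separating Good's prediction from today.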
We know that ‘dumb matter’ can think, say Price and Tallinn – biology has already solved that problem, in a container the size of our skulls. That sets an upper bound on the level of complexity required, and it seems irresponsible, they argue, to assume that the rising curve of computing complexity will not reach and even exceed that bar in the future. The critical point might come if computers reach human capacity to write computer programs and develop their own technologies. This, Good’s “intelligence explosion”, might be the point at which we are left behind – permanently – by a future-defining AGI.
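That "critical point" can be made concrete with a toy numerical sketch. The function name and every parameter below are invented purely for illustration; the model captures nothing more than the qualitative shape of Good's argument: capability grows at a steady, externally driven rate until it crosses a hypothetical self-improvement threshold, after which each generation of improvement is driven by the previous one and growth compounds.

```python
# A toy numerical sketch of the qualitative shape of Good's "intelligence
# explosion" argument: progress is steady while improvements come from
# outside the system, then compounds once the system can improve itself.
# Every number here is an arbitrary, made-up parameter for illustration only.

def capability_trajectory(capability=10.0, self_improvement_threshold=100.0,
                          external_rate=1.10, self_rate=1.50, generations=60):
    """Return a list of capability values over successive design generations.

    Below the threshold, capability grows at `external_rate` per generation
    (human engineers doing the improving). At or above the threshold, the
    system contributes to its own redesign, so it grows at `self_rate`.
    """
    trajectory = [capability]
    for _ in range(generations):
        rate = external_rate if capability < self_improvement_threshold else self_rate
        capability *= rate
        trajectory.append(capability)
    return trajectory


if __name__ == "__main__":
    for generation, value in enumerate(capability_trajectory()):
        if generation % 10 == 0:
            print(f"generation {generation:2d}: capability {value:,.1f}")
```

Printed or plotted, the trajectory shows a long stretch of gentle growth followed by a runaway once the threshold is crossed; that discontinuity, not any particular number, is the intuition behind the "intelligence explosion" that Price and Tallinn want taken seriously.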
“Think how it might be to compete for resources with the dominant species,” says Price. “Take gorillas for example – the reason they are going extinct is not because humans are actively hostile towards them, but because we control the environments in ways that suit us, but are detrimental to their survival.”
Price and Tallinn stress the uncertainties in these projections, but point out that this simply underlines the need to know more about AGI and other kinds of technological risk.
In Cambridge, Price introduced Tallinn to Lord Martin Rees, former Master of Trinity College and President of the Royal Society, whose own work on catastrophic risk includes his books Our Final Century (2003) and From Here to Infinity: Scientific Horizons (2011). The three formed an alliance, aiming to establish CSER.
With luminaries in science, policy, law, risk and computing from across the University and beyond signing up to become advisors, the project is, even in its earliest days, gathering momentum. “The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence – in a way that they simply haven’t up to now, in human history,” says Price. “We should be investing a little of our intellectual resources in shifting some probability from bad outcomes to good ones.”
Price acknowledges that some of these ideas can seem far-fetched, the stuff of science fiction, but insists that that’s part of the point. “To the extent – presently poorly understood – that there are significant risks, it’s an additional danger if they remain for these sociological reasons outside the scope of ‘serious’ investigation.”
“What better place than Cambridge, one of the oldest of the world’s great scientific universities, to give these issues the prominence and academic respectability that they deserve?” he adds. “We hope that CSER will be a place where world class minds from a variety of disciplines can collaborate in exploring technological risks in both the near and far future.
“Cambridge recently celebrated its 800th anniversary – our aim is to reduce the risk that we might not be around to celebrate its millennium.”
For more information on the Centre for the Study of Existential Risk, visit http://cser.org/
Fred Lewsey, University of Cambridge
Replies
Experts to Study Whether Robots Will Exterminate Humanity
How close are we to a Skynet takeover?
Paul Joseph Watson
Infowars.com
November 27, 2012
Experts at the prestigious University of Cambridge will conduct research into the “extinction-level risks” posed to humanity by artificially intelligent robots.
The Cambridge Project for Existential Risk is dedicated to “ensuring that our own species has a long-term future” by studying the risks posed by AI, nanotechnology and biotechnology.
The scientists said that to dismiss concerns of a potential robot uprising would be “dangerous,” reports the BBC.
The project was co-founded by Huw Price, Bertrand Russell Professor of Philosophy at Cambridge; Martin Rees, Emeritus Professor of Cosmology & Astrophysics at Cambridge; and Jaan Tallinn, a co-founder of Skype.
It also counts amongst its advisers Max Tegmark, Professor of Physics at MIT, and George M. Church, Professor of Genetics at Harvard Medical School.
An article written by Tallinn and Price warns that artificially intelligent computers or robots could take over “the speed and direction of technological progress itself,” and shape the environment of planet Earth to their own ends while displaying about as much concern for humanity as we do for a bug on the windscreen.
Far from being confined to works of science fiction such as the Terminator films, the threat posed by a potential future “rise of the robots” has never been closer to reality.
The study echoes the predictions of respected author, inventor and futurist Ray Kurzweil, renowned for his deadly accurate technological forecasts.
In his 1999 book The Age of Spiritual Machines, Kurzweil predicted that after 2029, the elite would come closer to their goal of technological singularity – man merging with machine – and that by the end of the century, the entire planet will be run by artificially intelligent computer systems which are smarter than the entire human race combined – similar to the Skynet system fictionalized in the Terminator franchise.
Amidst the debate, the fact that the US military under DARPA is already developing robots for the express purpose of killing people has been largely overlooked by futurists.
As we have previously highlighted, the whole direction of drones and automated robot technology being developed by the likes of DARPA is all geared towards having machines take the role of police officers and soldiers in pursuing and engaging “insurgents” on American soil.
Experts like Noel Sharkey, professor of artificial intelligence and robotics at the University of Sheffield, have warned that DARPA’s robots represent “an incredible technical achievement, but it’s unfortunate that it’s going to be used to kill people.”
The Department of Defense recently issued a new policy directive attempting to “reassure” people that artificially intelligent cyborgs wouldn’t be used to murder people after Human Rights Watch called for an international ban on “killer robots”.
Policy directive 3000.09 states: “Semi-autonomous weapon systems that are onboard or integrated with unmanned platforms must be designed such that, in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorised human operator.”