'Demon' Skynet is almost self-aware

Tesla founder steps up warnings about artificial intelligence, according to The Washington Post

Elon Musk is stepping up his warnings about the potential scourge of artificial intelligence, The Washington Post reported, telling a group of students that the technology was akin to "summoning the demon."

During a speech at the Massachusetts Institute of Technology on Friday, the founder of Tesla (TSLA) told an audience that the tech sector should be "very careful" about pioneering AI, which he called "our biggest existential threat," The Post reported. On several occasions, Musk has described the technology as a major risk that cannot be controlled.

At MIT, Musk carried the metaphor a bit further than he has in the past. "With artificial intelligence we are summoning the demon," The Post quoted Musk as saying.

Musk's comments highlighted a budding ethical debate in the broader society about whether machines should be able to think for themselves. Autonomous technology is a hot topic in engineering circles, and occupies a prominent place in popular culture.

For years, movies and television have breathed life into worst-case scenarios in which technology eventually spins out of control and comes to dominate the very population it was created to serve. Classic films like the "Terminator" franchise and "The Matrix," along with the soon-to-be-released "Avengers: Age of Ultron," all depict machines developing their own sense of awareness, often with apocalyptic results.

Proponents say AI is the next logical step for an increasingly tech-dependent society, but opponents like Musk argue there could be unintended consequences.

Musk likened the quandary to a horror movie where protagonists call forth spirits that eventually wreak havoc.

"In all those stories where there's the guy with the pentagram and the holy water, it's like yeah he's sure he can control the demon. Didn't work out," The Post reported Musk as saying.

To a certain extent, machine-based intelligence already drives modern conveniences such as financial trading, video games and robotics, functions most people take for granted. That said, the rise of semi-autonomous technology has displaced workers across key industries, even as it saves companies money and makes services more efficient.

In addition, some ethicists and technology practitioners are concerned about the potential for what Oxford University researchers recently called "moral outsourcing."

In a blog post last year, Oxford scholars cautioned that "when a machine is 'wrong,' it can be wrong in a far more dramatic way, with more unpredictable outcomes, than a human could. Simple algorithms should be extremely predictable, but can make bizarre decisions in 'unusual' circumstances."

After acquiring British technology firm DeepMind earlier this year, Google (GOOGL) bowed to the growing controversy over AI by agreeing to establish an ethics board that would oversee its efforts to create conscious machines. The search giant has made steady advances to make its applications more convenient to users by making them increasingly autonomous.


Replies



    the robots are coming

    Google’s New Computer With Human-Like Learning Abilities Will Program Itself


    The new hybrid device might not need humans at all.
    Are we closer to AI? (Wikipedia)


    In college, it wasn’t rare to hear a verbal battle regarding artificial intelligence erupt between my friends studying neuroscience and my friends studying computer science.

    One rather outrageous fellow would raise the possibility of a computer takeover, and off they went. The neuroscience-savvy would marvel at the potential of such hybrid technology, while the CS majors argued we have nothing to fear, since computers will always need a programmer to tell them what to do.

    Today’s news brings us to the Neural Turing Machine, a computer that will combine the way ordinary computers work with the way the human brain learns, enabling it to actually program itself. Perhaps my CS friends should reevaluate their position?

    The computer is currently being developed by the London-based DeepMind Technologies, an artificial intelligence firm that was acquired by Google earlier this year. Neural networks — which will enable the computer to invent programs for situations it has not seen before — will make up half of the computer’s architecture. Experts at the firm hope this will equip the machine with the means to create like a human, but still with the number-crunching power of a computer, New Scientist reports.

    In two different tests, the NTM was asked to 1) learn to copy blocks of binary data and 2) learn to remember and sort lists of data. The results were compared with a more basic neural network, and it was found that the computer learned faster and produced longer blocks of data with fewer errors. Additionally, the computer’s methods were found to be very similar to the code a human programmer would’ve written to make the computer complete such a task.
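
    To make the first of those benchmarks concrete, here is a minimal sketch (in Python with NumPy) of what a "copy task" looks like as a learning problem. This is purely illustrative and is not DeepMind's code; the function name make_copy_example and the seq_len and num_bits parameters are invented for this example. The idea is that a model sees a random block of binary data followed by a delimiter, and must then reproduce the block from memory.

    # Illustrative sketch of a copy-task data generator; not DeepMind's actual code.
    import numpy as np

    def make_copy_example(seq_len, num_bits, rng):
        """Build one input/target pair for the copy task.

        Input:  seq_len rows of random bits, then a delimiter row, then blanks.
        Target: zeros during the input phase, then the original bits repeated.
        """
        bits = rng.integers(0, 2, size=(seq_len, num_bits)).astype(float)

        # One extra input channel marks the delimiter that signals "start copying".
        inputs = np.zeros((2 * seq_len + 1, num_bits + 1))
        inputs[:seq_len, :num_bits] = bits
        inputs[seq_len, num_bits] = 1.0

        targets = np.zeros((2 * seq_len + 1, num_bits))
        targets[seq_len + 1:, :] = bits       # the model must echo the block
        return inputs, targets

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        x, y = make_copy_example(seq_len=5, num_bits=8, rng=rng)
        print(x.shape, y.shape)   # (11, 9) (11, 8)

    A model like the NTM described above would be trained on many such input/target pairs; the comparison mentioned above, against a more basic neural network, was run on benchmarks of this kind.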

    These are extremely simple tasks for a computer to accomplish when being told to do so, but computers’ abilities to learn them on their own could mean a lot for the future of AI.



