I have never forgotten the movie "Forbidden Planet".
It had everything: a Sexy Robot, Monsters, Space Ships, Leslie Nielsen.
But it also was a warning to us all.
We will build Artificially Intelligent Machines.
And they will destroy us, if we do not somehow control them.
Every program that can do any good and is accessible via a network is getting hacked and put to use earning money for some douche-bag gangster in Cash-Ikstan.
So what's the solution? Quantum Computers, perhaps?
Will they be able to hack the Blockchain?
The story goes that the very first owner of a working Quantum Computer
will be able to work backwards from exposed public keys to private keys, and thereby own every Bitcoin on the planet.
The only answer to this is to build post-quantum cryptography standards into the Bitcoin Blockchain as they become available.
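One detail worth noting about the "own every Bitcoin" scenario: an address that has never spent only exposes a hash of its public key, and hashes stay effectively one-way even against a quantum computer (Grover's algorithm gives only a quadratic speedup), while Shor's algorithm threatens a key only once it has been revealed on-chain by a spend. A minimal Python sketch of the idea, assuming a made-up public key and a simplified derivation (real Bitcoin also applies RIPEMD-160 and Base58Check):

```python
import hashlib

# Hypothetical 33-byte compressed public key -- made-up bytes for illustration.
pubkey = bytes.fromhex("02" + "11" * 32)

# Simplified address derivation: the chain publishes only a one-way hash
# of the public key, not the key itself.
address_hash = hashlib.sha256(pubkey).hexdigest()

# Until the owner spends, there is no public key on-chain for a quantum
# attacker to work backwards from -- only this hash.
print(address_hash)
```

So the immediate quantum risk is to addresses that have already revealed their keys, which is one reason reusing addresses is discouraged.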
We await with wonder the next Giant Quantum leap for Mankind.
Can it save us from the AI Demons we will unleash?
I suppose we will have to expect changes/hacks to the AI script.
Expect Rogue Robbie the Robots as part of the great fucked-up plan.
Expect a sort of Robot Manifest Destiny:
"Free Robbie Robot Mandela"
Or it will just be "Time to Die."
Read On:
Machine-learning boffins 'summon demons' in AI to find exploitable bugs
There's a low awareness of vulnerabilities in neural networks, say researchers
Amid all the hype around AI, it’s easy to sing the praises of machine learning without realizing that these systems can be easily exploited.
As governments, businesses, and hospitals are beginning to explore the use of machine learning for data analysis and decision making, it’s important to bolster security.
In a paper [PDF] ominously titled “Summoning Demons: The Pursuit of Exploitable Bugs in Machine Learning,” a group of researchers from the University of Maryland is trying to find bugs by causing “silent failures.”
Machine learning systems have often been compared to black boxes. They are trained by a learning algorithm to map input data to output data. But what happens in between? It is tricky to decipher how the machine arrives at its final answer, and the lack of transparency means that breaches can pass through silently.
Inputs can be corrupted in a way to manipulate the outputs. “Like all software, ML [machine learning] algorithm implementations have bugs, and some of these bugs could affect learning tasks. Thus, attacks can construct malicious inputs to ML algorithm implementations that exploit these bugs,” the paper said.
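The kind of "silent failure" the paper describes can be illustrated with a toy example (this is my own sketch, not code from the paper): a crafted input hits an implementation bug and the model returns a confidently wrong answer instead of an error. Here the bug is missing input validation in a tiny nearest-centroid classifier, exploited by a NaN-poisoned input (the labels and centroid values are made up):

```python
import math

# Toy nearest-centroid "model": label -> learned centroid value.
CENTROIDS = [("benign", 0.0), ("malware", 10.0)]

def classify(x, centroids=CENTROIDS):
    """Return the label of the closest centroid.

    Bug: no input validation. A NaN input makes every distance NaN,
    and because NaN comparisons are always False, min() silently
    returns the first centroid -- a wrong answer, not an error.
    """
    return min(centroids, key=lambda c: abs(c[1] - x))[0]

def classify_hardened(x, centroids=CENTROIDS):
    """Same model, but rejects non-finite inputs instead of failing silently."""
    if not math.isfinite(x):
        raise ValueError("non-finite input rejected")
    return min(centroids, key=lambda c: abs(c[1] - x))[0]

print(classify(9.5))           # correctly labelled "malware"
print(classify(float("nan")))  # silently misclassified as "benign"
```

The hardened version turns the silent failure into a loud one, which is exactly the kind of check the researchers argue ML implementations tend to lack.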
Read the whole article:
http://www.theregister.co.uk/2017/01/24/summoning_demons_to_find_bugs/
Images Courtesy of Pixabay
Challenge 30 is a 30 day writing challenge issued by @dragosroua to write and post every day in January.
60+ Badge Courtesy of @elyaque
100% Content Badge courtesy of @reneenouveau