Group backed by Elon Musk awards $7 million in grants for safety from AI | New York
Tesla Motors chief executive Elon Musk has turned his worries about the rise of artificial intelligence (AI) into a global research programme. The Future of Life Institute (FLI) has not said when final reports will be due. One of the risks under study: in its efforts to maximize profit, an AI system could also do something illegal.
FLI will award about $7 million to 37 research teams. Most of the projects should begin work this September, and the institute intends to keep them funded for up to three years. Some of the more advanced research is looking at methods for building technical know-how into these systems so that a computer can essentially reason like a human.
Another group, headed by Manuela Veloso of Carnegie Mellon University in Pittsburgh, Pennsylvania, will concentrate on programming AI systems that can fully explain their decisions to human beings.
“Here are all these leading AI researchers saying that AI safety is important,” Musk has said. (The complete list of winners and project descriptions is now available online.) FLI has awarded the funds to research teams that will be tasked with exploring the risks associated with artificial intelligence.
According to the president of the Future of Life Institute, Max Tegmark, “There is this race going on between the growing power of the technology and the growing wisdom with which we manage it. So far all the investments have been about making the systems more intelligent; this is the first time there’s been an investment in the other.”
He continues: “We’re staying focused, and the 37 teams supported by today’s grants should help solve such real issues.”
Apparently, Musk and Stephen Hawking share a fear (oh, to be Elon) that the world of AI is developing way too fast.
But Musk doesn’t seem concerned about a movie-style robo-apocalypse, with supercomputers outsmarting humans and suddenly gaining consciousness, or an army of self-regenerating Terminators coming to wipe us out.
At a June 27 conference at Boston University’s College of General Studies, FLI core member Richard Mallah explained the reasons for that focus, and why we shouldn’t bother worrying about armies of malevolent robots.
The grant winners are all voicing concerns about the possibility of powerful AI systems having unintended, or even potentially disastrous, consequences. Mallah’s examples included self-driving cars and personal shopping assistants picking up groceries; the trouble starts when such a system’s values aren’t aligned with its human user’s. Maybe the car, told to get to the airport as quickly as possible, drives 300 miles per hour to the terminal and then slams on the brakes, launching its occupant through the windshield.
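To make that misalignment concrete, here is a minimal sketch of the idea. It is purely illustrative and not from FLI’s research: the trip length, the candidate speeds, and the harm_penalty function are all invented for this example.

```python
# Toy illustration of value misalignment. All numbers and function
# names here are hypothetical, invented for this sketch.

DISTANCE_MILES = 30  # assumed trip length to the airport

def travel_time(speed_mph: float) -> float:
    """Hours needed to cover the trip at a constant speed."""
    return DISTANCE_MILES / speed_mph

def harm_penalty(speed_mph: float) -> float:
    """Crude stand-in for passenger harm; grows sharply past highway speeds."""
    return max(0.0, (speed_mph - 70.0) / 10.0) ** 2

candidate_speeds = range(10, 310, 10)  # plans available to the agent, in mph

# Misspecified objective: minimize travel time and nothing else.
naive_speed = min(candidate_speeds, key=travel_time)

# Better-specified objective: travel time plus a penalty for endangering the passenger.
aligned_speed = min(candidate_speeds, key=lambda s: travel_time(s) + harm_penalty(s))

print(f"naive agent chooses {naive_speed} mph")      # 300 mph: the windshield scenario
print(f"aligned agent chooses {aligned_speed} mph")  # a sane highway speed
```

Nothing in the naive agent is malevolent; it does exactly what it was told, and that is precisely the kind of unintended consequence these research teams are studying.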
“This is essentially the problem of the genie that’s been told for thousands of years,” Mallah said.