
Group backed by Elon Musk awards $7 million in grants for AI safety | New York

Tesla Motors chief executive Elon Musk has turned his worries about the rise of artificial intelligence (AI) into a global research programme. The Future of Life Institute (FLI) has not said when final reports will be due. The underlying concern: an AI system pursuing a goal such as maximizing profit could, along the way, do something illegal.


FLI will award about $7 million to 37 research teams. Most of the projects should begin work this September, and the institute intends to keep them funded for up to three years. Some of the more advanced research looks at methods for building technical know-how into a system so that the computer can, in effect, reason like a human.

Another group, headed by Manuela Veloso of Carnegie Mellon University in Pittsburgh, Pennsylvania, will concentrate on programming AI systems that can fully explain their decisions to human beings.

“Here are all these leading AI researchers saying that AI safety is important”. (The complete list of winners and study descriptions is now available online.) FLI has awarded the funds to research teams tasked with exploring the risks associated with artificial intelligence.

According to the president of the Future of Life Institute, Max Tegmark, “There is this race going on between the growing power of the technology and the growing wisdom with which we manage it. So far all the investments have been about making the systems more intelligent; this is the first time there’s been an investment in the other”.

He continued: “We’re staying focused, and the 37 teams supported by today’s grants should help solve such real issues”.

Musk and Stephen Hawking share a fear that the world of AI is developing far too fast.

But Musk does not seem concerned about a movie-style robo-apocalypse, with supercomputers outsmarting humans and suddenly gaining consciousness, or an army of self-regenerating terminators coming to wipe us out.

At a June 27 conference at Boston University’s College of General Studies, FLI core member Richard Mallah explained the reasons for that focus, and why armies of malevolent robots are not the worry.

Instead, the researchers are voicing concerns about powerful AI systems having unintended, or even potentially disastrous, consequences. Mallah’s examples included self-driving cars and personal shopping assistants picking up groceries; the trouble starts when such a system’s values are not aligned with its owner’s. Maybe the car drives 300 miles per hour to the airport terminal and then slams on the brakes, launching its occupant through the windshield.


“This is essentially the problem of the genie that’s been told for thousands of years”, Mallah said.
