As Tech Titans Bicker, Should You Worry About the Rise of AI?
By Ryan Nakashima and Matt O'Brien
Published: July 30, 2017
Tech titans Mark Zuckerberg and Elon Musk recently slugged it out online over the possible threat artificial intelligence might one day pose to the human race, although you could be forgiven if you don't see why this seems like a pressing question.

Thanks to AI, computers are learning to do a variety of tasks that have long eluded them -- everything from driving cars to detecting cancerous skin lesions to writing news stories. But Musk, the founder of Tesla Motors and SpaceX, worries that AI systems could soon surpass humans, potentially leading to our deliberate (or inadvertent) extinction.

Two weeks ago, Musk warned U.S. governors to get educated and start considering ways to regulate AI in order to ward off the threat. "Once there is awareness, people will be extremely afraid," he said at the time.

Zuckerberg, the founder and CEO of Facebook, took exception. In a Facebook Live feed recorded Saturday in front of his barbecue smoker, Zuckerberg hit back at Musk, saying people who "drum up these doomsday scenarios" are "pretty irresponsible." On Tuesday, Musk slammed back on Twitter, writing that "I've talked to Mark about this. His understanding of the subject is limited."

Here's a look at what's behind this high-tech flare-up -- and what you should and shouldn't be worried about.

What Is AI, Anyway?

Back in 1956, scholars gathered at Dartmouth College to begin considering how to build computers that could improve themselves and take on problems that only humans could handle. That's still a workable definition of artificial intelligence.

An initial burst of enthusiasm at the time, however, devolved into an "AI winter" lasting many decades as early efforts largely failed to create machines that could think and learn -- or even listen, see or speak.

That started changing five years ago. In 2012, a team led by Geoffrey Hinton at the University of Toronto proved that a system using a brain-like neural network could "learn" to recognize images. That same year, a team at Google led by Andrew Ng taught a computer system to recognize cats in YouTube videos -- without ever being taught what a cat was.
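
Hinton's actual 2012 system was a deep convolutional network trained on GPUs, which is far beyond a short example, but the core idea of "learning" is easy to see in miniature. Below is a minimal, hypothetical sketch in Python/NumPy (not the 2012 model): a tiny two-layer network that learns to separate two clusters of points by repeatedly nudging its weights downhill on its error.

import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 2-D points from two classes instead of real pixels.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50).reshape(-1, 1)

# Randomly initialized weights for a 2 -> 8 -> 1 network.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(2000):
    # Forward pass: compute predictions from the current weights.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the cross-entropy loss for each weight.
    dp = (p - y) / len(X)
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(0)

    # "Learning" is just stepping each weight against its gradient.
    for w, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        w -= 0.5 * g

print("training accuracy:", ((p > 0.5) == y).mean())  # approaches 1.0

Scaled up to millions of weights and real photographs, the same training loop is what let the 2012 systems recognize images and cats without hand-coded rules.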

Since then, computers have made enormous strides in vision, speech and complex game analysis. One AI system recently beat the world's top player of the ancient board game Go.

Here Comes Terminator's Skynet . . . Maybe

For a computer to become a "general purpose" AI system, it would need to do more than just one simple task like drive, pick up objects, or predict crop yields. Those are the sorts of tasks to which AI systems are largely limited today.

But they might not be hobbled for too long. According to Stuart Russell, a computer scientist at the University of California at Berkeley, AI systems may reach a turning point when they gain the ability to understand language at the level of a college student. That, he said, is "pretty likely to happen within the next decade."

While that on its own won't produce a robot overlord, it does mean that AI systems could read "everything the human race has ever written in every language," Russell said. That alone would provide them with far more knowledge than any individual human.

The question then is what happens next. One set of futurists believe that such machines could continue learning and expanding their power at an exponential rate, far outstripping humanity in short order. Some dub that potential event a "singularity," a term connoting change far beyond the ability of humans to grasp.

Near-Term Concerns

No one knows if the singularity is simply science fiction or not. In the meantime, however, the rise of AI offers plenty of other issues to deal with.

AI-driven automation is leading to a resurgence of U.S. manufacturing -- but not manufacturing jobs. Self-driving vehicles being tested now could ultimately displace many of the almost 4 million professional truck, bus and cab drivers now working in the U.S.

Human biases can also creep into AI systems. A chatbot released by Microsoft called Tay began tweeting offensive and racist remarks after online trolls baited it with what the company called "inappropriate" comments.

Harvard University professor Latanya Sweeney found that searching in Google for names associated with black people more often brought up ads suggesting a criminal arrest. Examples of image-recognition bias abound.

"AI is being created by a very elite few, and they have a particular way of thinking that's not necessarily reflective of society as a whole," says Mariya Yao, chief technology officer of AI consultancy TopBots.

Mitigating Harm from AI

In his speech to the governors, Musk urged them to be proactive, rather than reactive, in regulating AI, although he didn't offer many specifics. And when a conservative Republican governor challenged him on the value of regulation, Musk retreated and said he was mostly asking for government to gain more "insight" into potential issues presented by AI.

Of course, the prosaic use of AI will almost certainly challenge existing legal norms and regulations. When a self-driving car causes a fatal accident, or an AI-driven medical system provides an incorrect diagnosis, society will need rules in place for determining legal responsibility and liability.

With such immediate challenges ahead, worrying about superintelligent computers "would be a tragic waste of time," said Andrew Moore, dean of the computer science school at Carnegie Mellon University.

That's because machines aren't now capable of thinking out of the box in ways they weren't programmed for, he said. "That is something which no one in the field of AI has got any idea about."

© 2017 Associated Press under contract with NewsEdge/Acquire Media. All rights reserved.

Image credit: iStock.

May Interest You:

New cars come equipped with safety systems. But how about all the other cars that are more than a year old? No worries... There are plenty of car safety features that are available, affordably, for ALL cars, not just new ones.

See products that are available for YOUR car at: Make My Car Safe, the premium online seller of car safety products for ALL cars.


Tell Us What You Think

Michael DeKort:
Posted: 2017-07-31 @ 7:00am PT
Autonomous Levels 4 and 5 will never be reached without simulation replacing public shadow driving for AI

Public Shadow Driving is Dangerous. Thousands of accidents, injuries and casualties will occur when these companies move from benign and easy scenarios to complex, dangerous and accident scenarios. And the cost in time and funding is untenable. One trillion public shadow driving miles would need to be driven at a cost of over $300B.

Issues with Public Shadow Driving AI

1. Miles and Cost – One Trillion Miles and $300B

a. Toyota and RAND have stated that in order to get to Levels 4 and 5, one trillion miles will have to be driven. This is to accommodate the uncontrollable nature of driving in the real world: literally stumbling onto scenarios and then having to stumble onto them again to train the AI. To accomplish this in 10 years would cost over $300B. That extremely conservative figure is the cost of 684k drivers driving 228k vehicles 24/7 (see the arithmetic sketch after this list). This expense in time and money is per company and per vehicle.

2. Injuries/Casualties of Public Shadow Driving

a. Data from NASA, Clemson University, Waymo, Chris Urmson (Aurora) and the UK have shown that situation awareness and reaction times are very poor: between 17 and 24 seconds are needed to properly acclimate and react. This delay keeps drivers from functioning properly, especially in critical scenarios; they often make the wrong decision or overreact. Many, including Waymo, Volvo, Ford and Chris Urmson (Aurora), have called for L3 to be skipped due to these issues. And if L3 is dangerous, then so is using public shadow driving for L4 and L5. (The Netherlands uses simulation and test tracks as opposed to public shadow driving.)
b. There is a video of a Tesla driver having to take over for the vehicle. Keep in mind this was on a clear night, no vehicles were to his left, the roads were not slippery, and it was not a case where the driver was trying to force the vehicle to learn the accident case.

3. Injuries and Casualties caused in Complex, Dangerous and Accident Scenarios

a. In order for AI to learn how to handle complex, dangerous and actual accident scenarios, it has to run them over and over, and the runs have to precisely, or at least closely, match the original scenario. To date this is not being done, which is why there have not been a lot of accidents, injuries or casualties. When that time comes, the shadow drivers will have to drive and re-drive scenarios of progressively higher complexity, involving many other vehicles or entities, bad weather, bad road conditions, system errors, etc. Many of those scenarios will be known accident scenarios. Learning these situations will literally mean billions of miles driven and possibly millions of iterations of these scenarios run to get this data. That will result in accidents, injuries and even casualties in the majority of these cases.

b. To date no children or families have been harmed by this process. (There have, however, been injuries and casualties involving drivers.) That is largely because only benign scenarios are being run: the public shadow driving being utilized now occurs on well-marked, well-lit, low-complexity, well-mapped roads in good environmental conditions. Given that every company bringing this technology to market would have to drive that trillion miles and learn from progressively more dangerous scenarios, casualties are inevitable. I suggest that when this becomes known, or when the first mass tragedy or death of a child occurs, the public, litigators and governments will react strongly. That will halt progress for a very long time, far longer than self-realization and policing would.

4. AI – Machine Learning – Neural Networks have Inherent Flaws.

a. MIT has stated that these processes miss corner or edge cases, which results in spontaneous and unexpected errors, and the engineers using the practice do not entirely understand how they work.
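
As a sanity check on the figures in item 1, here is a rough back-of-the-envelope sketch in Python. The average speed, shift count and per-driver cost are my own illustrative assumptions chosen to reproduce the commenter's stated totals; he gives only the totals themselves.

TOTAL_MILES = 1_000_000_000_000   # one trillion miles (the Toyota/RAND figure)
YEARS = 10
VEHICLES = 228_000
DRIVERS_PER_VEHICLE = 3           # assume three 8-hour shifts to cover 24/7

# Implied average speed if 228k vehicles drive a trillion miles in 10 years.
mph = TOTAL_MILES / (VEHICLES * YEARS * 365 * 24)
print(f"implied average speed: {mph:.0f} mph")        # ~50 mph, around the clock

drivers = VEHICLES * DRIVERS_PER_VEHICLE
print(f"drivers needed: {drivers:,}")                 # 684,000

COST_PER_DRIVER_YEAR = 44_000     # assumed fully loaded cost per driver per year
total_cost = drivers * COST_PER_DRIVER_YEAR * YEARS
print(f"10-year driver cost: ${total_cost / 1e9:.0f}B")  # ~$301B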

If you look at these areas individually, let alone in combination, you can see that for legal, moral, ethical and financial reasons public shadow driving is untenable.

As for simulation being the solution: I believe the answer is to create an international Simulation Trade Study/Exhibit and Association (nonprofit), something we are working on. Its purpose would be to:

1. Make the industry aware of what simulation can do, especially in other industries such as aerospace, where the FAA has had detailed testing to assess simulation and simulator fidelity levels for decades.

2. Make the industry aware of the MCity approach to finding the most efficient set of scenarios, which could bring that one trillion miles down by 99.9%.

3. Make the industry aware of who all the simulation and simulator organizations are.

4. Evaluate the available products to determine their current capabilities.

5. Determine how close the industry and any individual product is to providing all the capabilities required to eliminate public shadow driving. Where there are gaps, determine a way forward to improve products or possibly create a consortium. This may involve utilizing expertise from other industries.

6. Note: most companies already use simulation; the issue is to what degree they use it. Most of the individuals and companies in this space are unaware of where aerospace simulation stands and that its technology can be used to improve autonomous-industry simulation and almost eliminate public shadow driving.

Christopher:
Posted: 2017-07-30 @ 5:43pm PT
Yes, wait for it to become a problem before looking for a solution. Short-term, self-interested thoughts are what we should strive for.
