AI 2.0: Engineering Trust


As AI’s hype phase winds down, engineers and researchers alike are discovering what we know and don’t know about the vast but so far unfulfilled promise of artificial intelligence.

Clearly, skeptics warn, we need to look, test and validate before leaping into an AI-centric future. Hence, there is growing awareness and research focused on emerging disciplines like “AI safety” that seek to identify and ultimately anticipate the causes of unintended behavior in machine learning and other autonomous systems.

Some early steps will help, including a recent U.S. regulatory order requiring mandatory reporting of crashes involving ADAS vehicles. (We note the reference to “crashes” rather than “accidents,” a fraught term in this context since it is exculpatory.)

In this AI Special Project, we examine the engineering challenges and the unintended consequences of entrusting control of our machines to algorithms. One conclusion is that we remain a long way from using AI in mission-critical systems that must work 99.999 percent of the time.

Achieving such levels of reliability, safety and, ultimately, trust, requires relentless testing, technical standards and what researcher Helen Toner of Georgetown University’s Center for Security and Emerging Technology calls “engineering discipline.”

Another issue is the allocation of scarce engineering resources as the list of companies designing AI chips soars. The latest is Tesla, which unveiled its Dojo D1 chip for training neural networks during its recent AI Day event. While accelerating the training of neural networks for ADAS applications is indeed a requirement, the vertically integrated carmaker’s AI chip appears to have been motivated by pride of ownership.

“With so many companies building AI chips, why build your own?” asks Kevin Krewell, principal analyst with Tirias Research. The growing list of companies working independently on applying AI to autonomous driving amounts to a “staggering” amount of duplication and waste, Krewell adds.

Automotive applications are pushing the limits of AI technology, and may be among the first to deploy the resulting machine learning models in hazardous settings. Before that happens, however, those machines must be as close to foolproof as engineers can make them.

As our colleague Egil Juliussen notes, the prevailing notion of AI implies the technology is analogous to human intelligence. As we discuss below, AI remains a misnomer until engineers can imbue machines with the common sense learned by a toddler based on real-world trial and error.

Articles in this Special Project:

AI Safety Moves to the Forefront

By George Leopold

Safety advocates call for a national AI testbed, with trust based on ‘engineering discipline’.

Solving AI’s Memory Bottleneck

By Sally Ward-Foxton

HBM2e provides the best memory performance for big AI chips – so why isn’t everyone using it?

Bringing Common Sense to ‘Brittle’ AI Algorithms

By George Leopold

A DARPA effort looks to mimic the learning processes of infants to develop more general machine learning models.

AI in Automotive: Current and Future Impact

By Egil Juliussen

For the foreseeable future, AI developers must proceed with caution in building safe, robust automated systems.

Tesla AI Day Perspectives

By Egil Juliussen

Elon Musk is going all-in on neural networks. Here’s an overview of the chips, systems and software for neural network training.

What is Synthetic Data and Why is it Critical to the Future of AI?

By Rev Lebaredian, Nvidia

An AI model is only as good as the data it’s trained on, but synthetic data can bridge the gap between model needs and data availability.

AI Inches Toward the Battlefield

By George Leopold

The Army tests a reconnaissance drone that could operate autonomously when GPS is jammed.
