Raumfahrt - How NASA's Search for ET Relies on Advanced AI

29.12.2017

Jet Propulsion Laboratory’s artificial intelligence chief describes the “ultimate” test for AI in space exploration


An artist's concept for NASA JPL's Mars 2020 rover. Credit: NASA/JPL-Caltech

The biggest knock against sending robots to explore the solar system for signs of life has always been their inability to make intuitive, even creative decisions as effectively as humans can. Recent advances in artificial intelligence (AI) promise to narrow that gap soon—which is a good thing, because there are no immediate plans to send people to explore Mars's subterranean caves or search for hydrothermal vents below Europa's icy waters. For the foreseeable future those roles will likely be filled by nearly autonomous rovers and submarines that can withstand hostile conditions and conduct important science experiments, even when out of contact with Earth for weeks or even months.

When Steve Chien took over NASA's Jet Propulsion Laboratory's Artificial Intelligence (AI) Group in the mid-1990s, such sophisticated AI seemed more like science fiction than something destined to play a crucial role in the success of NASA's upcoming 2020 mission to Mars. Chien had a vision to make the technology an indispensable part of NASA's biggest missions. But AI being what it was 25 years ago—with less-sophisticated algorithms running on slower computers—the technology simply was not up to the task.

Chien was patient, though. Little by little his team's technology began to automate tedious tasks and improve upon work that had long relied on researchers' painstaking observations. Using predictive models called decision trees, for example, JPL created the Sky Image Cataloging and Analysis Tool (SKICAT) and used it to help NASA automate the classification of objects discovered during the second Palomar Sky Survey, conducted in the 1980s and '90s. Once SKICAT was fed enough images of what the scientists were looking for, the software was able to classify thousands more faint, low-resolution objects in the Mount Palomar survey than humans could.
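To make the technique concrete, here is a minimal sketch of decision-tree classification in the spirit of SKICAT. The feature names (brightness, ellipticity, concentration) and the tiny training set are hypothetical stand-ins for the photometric measurements extracted from survey plates, and scikit-learn stands in for the original software:

```python
# Minimal sketch of decision-tree classification in the spirit of SKICAT.
# The features and training data are hypothetical stand-ins for the
# photometric measurements extracted from survey plates.
from sklearn.tree import DecisionTreeClassifier

# Each object is described by simple image-derived features:
# [brightness, ellipticity, concentration_index]
training_features = [
    [20.1, 0.05, 0.90],  # point-like and concentrated: a star
    [19.8, 0.02, 0.95],  # star
    [18.5, 0.40, 0.30],  # extended and elongated: a galaxy
    [17.9, 0.35, 0.25],  # galaxy
]
training_labels = ["star", "star", "galaxy", "galaxy"]

classifier = DecisionTreeClassifier(max_depth=3)
classifier.fit(training_features, training_labels)

# Classify a new faint object from the survey.
print(classifier.predict([[19.5, 0.30, 0.40]]))  # e.g. ['galaxy']
```

Once trained, a model like this can be run over every object a survey detects, which is how the software outpaced human classifiers on faint sources.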

After years of incremental improvements, Chien and his team hit the AI mother lode when NASA asked them to design the software that would be used to automate the Earth Observing–1 (EO–1) satellite. NASA uploaded JPL’s Autonomous Sciencecraft Experiment (ASE) software to the satellite in 2003, and for more than a decade it helped study floods, volcanic eruptions and other natural phenomena. Prior to EO–1’s deactivation in March, the ASE software would at times receive alerts about an eruption from other satellites or from ground sensors and autonomously prompt the EO–1 to capture images—before scientists were even aware it had happened.
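The closed loop described here, in which an alert arrives and the spacecraft retasks itself without waiting for ground control, can be sketched in a few lines. Everything in the sketch (the alert fields, the capture_image stand-in, the simple queue) is invented for illustration and is not ASE's actual interface:

```python
# Toy sketch of event-driven autonomous tasking in the spirit of ASE.
# The alert format and scheduling logic are invented for illustration.
import queue

alerts = queue.Queue()

def capture_image(target, priority):
    """Stand-in for commanding an observation at the next overflight."""
    print(f"Scheduling observation of {target} (priority {priority})")

# An alert from another satellite or a ground sensor arrives...
alerts.put({"type": "volcanic_eruption", "site": "Erta Ale", "priority": 1})

# ...and the onboard loop reacts without waiting for ground control.
while not alerts.empty():
    alert = alerts.get()
    if alert["type"] in ("volcanic_eruption", "flood"):
        capture_image(alert["site"], alert["priority"])
```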

JPL's work on ASE and other projects gave NASA confidence that AI could play a major role in the Mars 2020 mission. Chien and his team are spearheading the development of a new class of rover far more advanced than any other vehicle to have roamed the planet's rocky red surface. The Mars 2020 rover will have significant discretion in selecting its own targets for study and experimentation as it searches for evidence that life once existed on Earth's closest planetary neighbor.

Scientific American recently spoke with Chien, the JPL AI Group’s technical group supervisor and senior research scientist in the laboratory’s Mission Planning and Execution division. Chien talked to SciAm about the demands that space travel places on AI systems, the increased need for autonomy as humans explore ever farther, and what the “ultimate” AI space mission would look like.

[An edited transcript of the conversation follows.]

How was the ASE software that controlled the EO–1 satellite an AI milestone for NASA?

It definitely was an AI milestone, not just for JPL and NASA but for the AI community as a whole. That’s because of ASE’s tremendous success combined with its longevity. The software is quite incredible—it controlled the spacecraft for more than 12 years. It issued approximately three million commands during that time period, acquired over 60,000 observations and actually achieved a reliability rate that was higher than human operations of the spacecraft. That software was such a success that it actually democratized space. We literally had a web page where institutions all around the world could submit requests to directly task the spacecraft.

How much mission responsibility is NASA willing to hand over to AI?

One of the challenges that we have with AI at NASA is that, because we’re dealing with space missions, there’s a lot of expense involved and long lead times to consider. We have to be sure that the AI performs well all the time—that you collect good science and that you protect the spacecraft. It doesn’t mean that you can exactly predict what it’s going to do. You want to get away from that level of micromanagement. You want the AI to work more closely as an apprentice or assistant to the scientist, and not as a machine, because the machine has to be micromanaged. There are people who worry about replacing the brilliant scientist, [but] that’s far enough off that we don’t need to worry about it.

How do you prepare AI to understand the unknown?

Unsupervised learning is extremely important to analyzing the unknown. A big part of what humans are able to do is interpret data that are unfamiliar. At NASA there are many problems like this. You see some data, and some part of [that data] just doesn’t fit. Think about Lewis and Clark exploring the Northwest Territory. They didn’t draw a map every 10 feet, which is what we currently do with most of our probes. Lewis and Clark’s expedition described mountains and rivers and other features—putting them into context. We would like an AI system to do the same thing.

To develop a system like that, we had a student take images with a digital camera while on a cross-country flight. Then we applied different unsupervised learning methods to the data we captured. We wanted the AI to learn [on its own] that there are mountains, forests and rivers, and to learn there are clouds, daytime, nighttime, etc. But we also want the AI to be prepared to be surprised when it senses something that doesn’t fit into a single category. The AI reproduced examples of the different kinds of regions the flight covered. In that way it came up with 10 or 12 meaningful classes of images, and provided exemplars of those classes. The classes were similar to those that [the researchers] came up with—rivers, forests, plains, mountains and so on. Sending down examples of these classes and a map of which regions correspond to each class is a far more efficient way to describe a planet.
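A minimal version of that pipeline might cluster image feature vectors, pick one exemplar per cluster and flag anything far from every cluster as a surprise. The sketch below uses k-means as a stand-in, since the interview does not name the specific unsupervised methods the team applied, and the feature vectors are random placeholders:

```python
# Sketch: unsupervised clustering with exemplars and a "surprise" check.
# k-means is a stand-in; the team's actual methods are not named here.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical feature vectors extracted from aerial images
# (e.g. color histograms or texture statistics).
features = rng.normal(size=(500, 16))

kmeans = KMeans(n_clusters=12, n_init=10, random_state=0).fit(features)

# One exemplar per class: the image closest to each cluster center.
exemplars = []
for center in kmeans.cluster_centers_:
    idx = np.argmin(np.linalg.norm(features - center, axis=1))
    exemplars.append(idx)

# Surprise: an observation far from every cluster center.
def is_surprising(x, threshold=6.0):
    distances = np.linalg.norm(kmeans.cluster_centers_ - x, axis=1)
    return distances.min() > threshold

print(len(exemplars), "exemplars;", is_surprising(rng.normal(size=16) * 3))
```

Downlinking only the exemplars and a map of which regions belong to which class, rather than every image, is the bandwidth saving Chien describes.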

What role will AI play in the upcoming Mars 2020 rover mission? 

There are three major areas of AI for the mission. The first is autonomous driving for the rover, a technology that goes all the way back to Pathfinder and to MER [the Mars Exploration Rover program]. Autonomous driving is like a dial: you can [closely control] the rover and tell it exactly where to go, or you can just tell it to drive. There are different trade-offs for each in terms of speed and safety.
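One way to picture the dial is as a handful of driving modes that trade ground oversight against traverse speed. The mode names and numbers below are invented for illustration, not actual flight parameters:

```python
# Illustrative "autonomy dial" for rover driving; mode names and
# numbers are invented, not actual flight parameters.
from dataclasses import dataclass

@dataclass
class DriveMode:
    name: str
    waypoints_from_ground: bool  # does Earth pick every waypoint?
    hazard_check: str            # onboard hazard-avoidance effort
    typical_meters_per_sol: int

MODES = [
    DriveMode("directed",   True,  "minimal",    20),   # ground micromanages
    DriveMode("guarded",    True,  "continuous", 50),   # ground picks goals
    DriveMode("autonomous", False, "continuous", 100),  # "just drive"
]

for m in MODES:
    print(f"{m.name:>10}: ground waypoints={m.waypoints_from_ground}, "
          f"hazard check={m.hazard_check}, ~{m.typical_meters_per_sol} m/sol")
```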

The second area of AI includes systems that will help the rover do science. Targeting capabilities will be much better and will be available on more instruments, not just the rover's SuperCam, which provides imaging, chemical composition analysis and mineralogy. The SuperCam is an evolution of the ChemCam featured on earlier Mars rovers, which could learn the chemical composition of rocks by zapping them with a laser and studying the resulting gases. Successive rovers, from the Mars Exploration Rover project through the Mars Science Laboratory and now Mars 2020, have had increasing ability to select targets and take follow-up imagery based on science criteria such as target shape, texture or the presence of veins. This capability, called the Autonomous Exploration for Gathering Increased Science (AEGIS) system, enables the rovers to conduct more science in less time.
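AEGIS's core idea, scoring candidate targets against scientist-supplied criteria and following up on the best ones, can be sketched as a simple weighted ranking. The criteria, weights and targets below are hypothetical, not the flight system's real parameters:

```python
# Sketch of AEGIS-style target ranking: score candidates against
# scientist-supplied criteria, then follow up on the top ones.
# Weights, criteria, and targets are hypothetical.

def score(target, weights):
    return sum(weights[criterion] * value
               for criterion, value in target["features"].items())

weights = {"elongation": 0.2, "vein_likeness": 0.5, "texture_contrast": 0.3}

candidates = [
    {"id": "rock_01", "features": {"elongation": 0.1, "vein_likeness": 0.8,
                                   "texture_contrast": 0.4}},
    {"id": "rock_02", "features": {"elongation": 0.7, "vein_likeness": 0.1,
                                   "texture_contrast": 0.2}},
]

ranked = sorted(candidates, key=lambda t: score(t, weights), reverse=True)
for target in ranked[:1]:  # follow up on the single best target
    print("Pointing instrument at", target["id"])
```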

Third, the Mars 2020 rover will also have a more sophisticated scheduling system that enables it to be more dynamic. If work is running ahead of or behind schedule, the rover will automatically adjust its itinerary, which could improve its productivity.
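The onboard rescheduling amounts to re-planning the remaining activities whenever actual durations diverge from the plan. A toy greedy version, with invented activities, durations and priorities, might look like this:

```python
# Toy onboard rescheduler: when a task finishes early or late, greedily
# repack the remaining activities into the time left in the sol.
# Activities, durations, and priorities are invented for illustration.

def replan(remaining, minutes_left):
    """Pick the highest-priority activities that still fit."""
    plan, used = [], 0
    for name, duration, priority in sorted(remaining, key=lambda a: -a[2]):
        if used + duration <= minutes_left:
            plan.append(name)
            used += duration
    return plan

activities = [("drive", 90, 3), ("drill", 120, 5), ("panorama", 45, 2)]

print(replan(activities, minutes_left=300))  # on schedule: all three fit
print(replan(activities, minutes_left=180))  # running late: drop the drive
```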

How will AI help the rovers explore Mars’s caves?

We have explored the surface of Mars, but scientists would also like to investigate its lava tube caves. Cave exploration would need a lot of AI: communicating into a cave is hard, requiring relay [points in] the cave, and such a mission would likely last only a few days because the rovers would rely exclusively on battery power. The AI would help the rovers coordinate, and map and explore as much of the cave as possible in their very limited time. One approach we have been working on, called dynamic zonal allocation, might start out this way: say you have four rovers that you want to send 100 feet into a cave on Mars. The first rover maps zero to 25 feet, the second one 25 to 50 feet, and so on. They would map the cave incrementally. It's classic divide and conquer.
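At its simplest, zonal allocation is just a partition of the traverse among the rovers. The static split below reproduces the 100-foot, four-rover example; a dynamic version would recompute the split if a rover failed or fell behind:

```python
# Simplest form of zonal allocation: split the cave depth evenly
# among the rovers. A dynamic version would recompute this split
# if a rover failed or fell behind.

def allocate_zones(total_feet, n_rovers):
    zone = total_feet / n_rovers
    return [(i * zone, (i + 1) * zone) for i in range(n_rovers)]

for rover, (start, end) in enumerate(allocate_zones(100, 4), start=1):
    print(f"Rover {rover}: map {start:.0f}-{end:.0f} ft")
# Rover 1: map 0-25 ft ... Rover 4: map 75-100 ft
```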

They also use one another to relay data out of the cave. Sending the rovers farther into the cave means they won't be able to communicate with us continuously, so they start doing what we call sneaker netting: the first rover goes into the cave until it is out of communications range, observes, then comes back into range to send its data. The second rover goes deeper into the cave but only has to come back far enough to be in range of the first rover, which relays the data out. Each one works progressively deeper into the cave in order to cover the 100 feet. Think of the four rovers as part of a large accordion that keeps stretching and coming back. The rovers don't come out of the cave, but the data they collect do. It'll be a three- or four-day mission, because that's how long the batteries last.
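The accordion pattern can be simulated by having each rover observe at the deep end of its zone and then fall back until it is within radio range of the rover (or the lander) behind it. The comms range below is a made-up number for illustration:

```python
# Sketch of the "sneaker net" relay: each rover observes at the deep
# end of its zone, then backtracks until it is within comms range of
# the rover (or lander) behind it. Range and zones are hypothetical.

COMMS_RANGE_FT = 25  # hypothetical in-cave radio range

def relay_hops(zones):
    """Return (rover, observe_at, relay_at) for each zone."""
    hops = []
    previous_position = 0  # the lander waits at the cave mouth
    for rover, (start, end) in enumerate(zones, start=1):
        relay_at = max(previous_position, end - COMMS_RANGE_FT)
        hops.append((rover, end, relay_at))
        previous_position = end
    return hops

zones = [(0, 25), (25, 50), (50, 75), (75, 100)]
for rover, observe_at, relay_at in relay_hops(zones):
    print(f"Rover {rover}: observe at {observe_at} ft, "
          f"fall back to {relay_at} ft to relay data")
```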

What would be the ultimate test of AI in space exploration?

The ultimate test of AI in space is a much longer-duration mission. A Europa submersible, for example, would have to survive for years on its own, potentially having contact with Earth only every 30 days. After you land the submersible on the moon's icy surface, you want to melt through the ice cap, which takes a year. Then you want to go from, say, the equator to the poles and back, hunting for hydrothermal vents. Like the rovers in the cave, it will have to journey out and come back in order to communicate. That is going to take maybe a year, maybe two years, so it could be on its own for six months or maybe a year at a time. To simulate that, we have been designing an AI-controlled submersible here on Earth to study a hydrothermal vent under ice. Scientists would also like to go under Antarctic ice shelves to study the effects of climate change; those missions need similar technology.

Even that is nothing compared with an interstellar mission, where the spacecraft would have to function completely autonomously: a round-trip communication with Proxima Centauri, the nearest star, takes about nine years, and that is after the spacecraft survives the 60 or more years it takes to get there. The Trappist-1 system, which is more likely to have habitable planets, is about 40 light-years away. With that delay in communications, the spacecraft is pretty much on its own, so when you launch a mission like that you will need an amazing AI and then just have to cross your fingers.
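The nine-year figure is simply twice the one-way light-travel time. The short calculation below uses approximate published distances to make the arithmetic explicit:

```python
# Round-trip light-travel time for a message, using approximate distances.
DISTANCES_LY = {"Proxima Centauri": 4.25, "TRAPPIST-1": 40.0}

for star, light_years in DISTANCES_LY.items():
    print(f"{star}: ~{2 * light_years:.1f} years between a question "
          f"and its answer")
```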

Source: Scientific American
