The Texas Advanced Computing Center talks to Masahiro (Hiro) Ono, who leads the Robotic Surface Mobility Group at NASA's Jet Propulsion Laboratory, the lab that has led all of NASA's Mars rover missions. (Ono is also one of the researchers who developed the software that allows the current rover to operate.)

The Perseverance rover, which launched this summer, computes using RAD750s, radiation-hardened single-board computers manufactured by BAE Systems Electronics. Future missions, however, could use new high-performance, multi-core, radiation-hardened processors designed through the High Performance Spaceflight Computing (HPSC) project. (Qualcomm's Snapdragon processor is also being tested for missions.) These chips are expected to provide about one hundred times the computational capacity of current flight processors while using the same amount of power.

"All of the autonomy that you see on our latest Mars rover is largely human-in-the-loop," meaning it requires human interaction to operate, according to Chris Mattmann, the deputy chief technology and innovation officer at JPL. "Part of the reason for that is the limits of the processors that are running on them. One of the core missions for these new chips is to do deep learning and machine learning, like we do terrestrially, on board. What are the killer apps given that new computing environment…?"

Training machine learning models on the Maverick2 supercomputer at the Texas Advanced Computing Center (TACC), as well as on Amazon Web Services and JPL clusters, Ono, Mattmann, and their team have been developing two novel capabilities for future Mars rovers, which they call Drive-By Science and Energy-Optimal Autonomous Navigation….

"We'd like future rovers to have a human-like ability to see and understand terrain," Ono said. "For rovers, energy is very important. There's no paved highway on Mars. The drivability varies substantially based on the terrain (for instance, beach versus bedrock). That is not currently considered.
"Coming up with a path with all of these constraints is complicated, but that's the level of computation that we can handle with the HPSC or Snapdragon chips. But to do so we're going to need to change the paradigm a little bit."

Ono describes that new paradigm as "commanding by policy," a middle ground between the human-dictated ("Go from A to B and do C") and the purely autonomous ("Go do science"). Commanding by policy involves pre-planning for a range of scenarios, then allowing the rover to determine what conditions it is encountering and what it should do.

"We use a supercomputer on the ground, where we have infinite computational resources like those at TACC, to develop a plan where a policy is: if X, then do this; if Y, then do that," Ono explained. "We'll basically make a huge to-do list and send gigabytes of data to the rover, compressing it in huge tables. Then we'll use the increased power of the rover to decompress the policy and execute it."

The pre-planned list is generated using machine-learning-derived optimizations. The on-board chip can then use those plans to perform inference: taking the inputs from its environment and plugging them into the pre-trained model. The inference tasks are computationally much easier and can be computed on a chip like those that may accompany future rovers to Mars.

"The rover has the flexibility of changing the plan on board instead of just sticking to a sequence of pre-planned options," Ono said. "This is important in case something bad happens or it finds something interesting…."

The efforts to develop a new AI-based paradigm for future autonomous missions can be applied not just to rovers but to any autonomous space mission, from orbiters to fly-bys to interstellar probes, Ono says.

Read more of this story at Slashdot.
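The "commanding by policy" workflow described above can be illustrated with a minimal sketch: a policy table is precomputed on the ground, compressed for uplink, then decompressed on board so the rover can pick actions by cheap lookup instead of replanning. All of the names, states, and actions below are hypothetical toy stand-ins, not JPL's actual software; the real tables are gigabytes of machine-learning-derived optimizations, not a three-line rule.

```python
# Toy sketch of commanding by policy. Assumptions: terrain and battery
# state are discretized, and the "optimization" is a hand-written stub.
import json
import zlib

def build_policy():
    """Ground side: enumerate scenarios and precompute an action for each.
    In practice this step would run on a supercomputer; here it is a stub."""
    policy = {}
    for terrain in ["bedrock", "sand", "gravel"]:
        for battery in ["low", "high"]:
            if battery == "low":
                action = "stop_and_recharge"
            elif terrain == "sand":
                action = "detour_to_bedrock"
            else:
                action = "drive_direct"
            policy[f"{terrain}|{battery}"] = action
    return policy

# Compress the table for uplink (standing in for the huge compressed tables).
uplink = zlib.compress(json.dumps(build_policy()).encode())

# Rover side: decompress once, then every decision is a table lookup.
onboard_policy = json.loads(zlib.decompress(uplink))

def act(terrain, battery):
    """On-board decision: map the observed conditions to the planned action."""
    return onboard_policy[f"{terrain}|{battery}"]

print(act("sand", "high"))
```

The design point the sketch captures is the division of labor: the expensive search over scenarios happens on the ground, while the rover only performs the cheap "if X, then do this" lookup, which is well within the reach of an HPSC- or Snapdragon-class flight processor.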