ResearchAudio.io

An AI Planned the Mars Rover's Route
400 meters, 500K variables checked. No human drew the path.
On December 8 and 10, 2025, the navigation commands sent to NASA's Perseverance rover looked different from every command that came before them. For the first time in the history of planetary exploration, an AI model had written the route. Anthropic's Claude, working inside a Claude Code workflow built by engineers at NASA's Jet Propulsion Laboratory (JPL), planned an approximately 400-meter drive across a rock field on the Martian surface. The waypoints it generated were then validated through Perseverance's full simulation stack before being transmitted across 362 million kilometers to the rover. The rover completed the drive successfully.
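Those 362 million kilometers translate directly into the latency problem described in the next section. As a back-of-the-envelope check (the distance is the article's figure; the speed of light is the only other input):

# One-way light-time from Earth to Mars at the distance quoted above.
distance_m = 362e9        # 362 million km, per the article
c = 299_792_458           # speed of light in vacuum, m/s

delay_s = distance_m / c
print(f"One-way delay: {delay_s:.0f} s (~{delay_s / 60:.0f} min)")
# -> about 1208 s, i.e. roughly 20 minutes one way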
Why rover navigation is difficult

Signal latency between Earth and Mars runs approximately 20 minutes one way. By the time a corrective command arrives, the rover has already acted on the previous instruction. There is no joystick. Every drive must be planned in full before it begins.

The standard process involves human operators manually setting a sequence of waypoints, called a "breadcrumb trail," using a combination of orbital imagery and the rover's own camera feeds (a toy sketch of such a trail appears at the end of this section). This is painstaking work, and the stakes of a bad route are permanent: in 2009, the Spirit rover drove into a sand trap and never moved again.

Perseverance has an onboard AutoNav system that handles obstacle avoidance between waypoints. But AutoNav operates only from the rover's own perspective and cannot plan the broader route. The high-level waypoint layout has always been a human task.
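As a toy illustration of a breadcrumb trail: waypoints in a local site frame, plus a sanity check that no leg exceeds the planner's segment length (the 10-meter figure comes from the next section; every name and coordinate here is hypothetical, not JPL's actual format):

from dataclasses import dataclass
from math import dist

@dataclass
class Waypoint:
    x_m: float  # hypothetical site-frame easting, meters
    y_m: float  # hypothetical site-frame northing, meters

def long_legs(trail, max_leg_m=10.0):
    """Yield consecutive waypoint pairs whose leg exceeds max_leg_m.

    AutoNav handles obstacles *between* waypoints; the trail itself
    must keep each leg short enough to stay in understood terrain.
    """
    for a, b in zip(trail, trail[1:]):
        leg = dist((a.x_m, a.y_m), (b.x_m, b.y_m))
        if leg > max_leg_m:
            yield a, b, leg

trail = [Waypoint(0.0, 0.0), Waypoint(8.0, 4.0), Waypoint(20.0, 12.0)]
for a, b, leg in long_legs(trail):
    print(f"Leg {a} -> {b} is {leg:.1f} m; split it further")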
How the system was built

JPL engineers did not hand Claude a map and ask it to plan a drive. The system required substantial context before it could work reliably. The team compiled years of operational knowledge from driving the rover, then provided that knowledge base to Claude Code via its skills feature.

Armed with that context, Claude used its vision capabilities to analyze overhead imagery of the Martian surface. It then generated the route by stringing together 10-meter segments into a full path. Crucially, the model did not produce a single draft and stop. It iterated, critiqued its own waypoints, and revised them before producing a final plan. The output was written in Rover Markup Language (RML), an XML-based programming language originally developed for the Mars Exploration Rover mission. Claude generated code in a domain-specific language it was not pre-trained on, by learning its structure from the provided context.
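To make those two steps concrete, here is a minimal sketch of splitting a leg into roughly 10-meter segments and serializing the result as XML. The real RML schema is not public, so every tag and attribute name below is invented purely to illustrate the shape of an XML command language; the coordinates are not mission data.

import xml.etree.ElementTree as ET
from math import ceil, dist

def segment(start, end, max_leg_m=10.0):
    """Split a straight line into legs no longer than max_leg_m."""
    n = max(1, ceil(dist(start, end) / max_leg_m))
    return [(start[0] + (end[0] - start[0]) * i / n,
             start[1] + (end[1] - start[1]) * i / n)
            for i in range(1, n + 1)]

# Hypothetical site-frame coordinates in meters; not mission data.
waypoints = segment(start=(0.0, 0.0), end=(38.0, 21.0))

plan = ET.Element("drive_plan")  # invented tag; real RML differs
for i, (x, y) in enumerate(waypoints):
    ET.SubElement(plan, "waypoint", id=str(i),
                  x_m=f"{x:.2f}", y_m=f"{y:.2f}")
print(ET.tostring(plan, encoding="unicode"))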
Where humans still made the call

Claude's waypoints were not sent to Mars without review. Every plan was run through Perseverance's standard daily simulation, where over 500,000 variables were modeled to check projected rover positions and predict potential hazards. When JPL engineers reviewed the output, they found that only minor adjustments were needed. In one case, ground-level camera images (which Claude had not seen) revealed sand ripples flanking a narrow corridor, and the rover drivers chose to split that section of the route more precisely. Otherwise, the plan held.

What this points toward

The 400-meter drive is a constrained demonstration. Anthropic and JPL describe it as a test run for deeper autonomy. NASA's upcoming Artemis missions aim to establish a base at the lunar south pole, where human operators on Earth would face similar latency constraints and operational complexity. Further out, probes targeting moons like Europa or Titan would face signal delays measured in hours, diminished solar power, and shorter, less forgiving missions. The case for autonomous AI planning under those constraints is stronger, not weaker.

What JPL has shown is that an AI system can internalize years of domain expertise from documentation and examples, reason spatially over imagery, generate domain-specific code, self-review its output, and produce plans accurate enough for high-stakes physical execution. The 400 meters are less significant than the method that produced them. The interesting question for future missions is not whether AI can plan a route. It is whether an AI system can update its own knowledge base when conditions on the ground differ from what the orbital maps showed.
Source: Four Hundred Meters on Mars (Anthropic, 2025)
World’s First Safe AI-Native Browser
AI should work for you, not the other way around. Yet most AI tools still make you do the work first—explaining context, rewriting prompts, and starting over again and again.
Norton Neo is different. It is the world’s first safe AI-native browser, built to understand what you’re doing as you browse, search, and work—so you don’t lose value to endless prompting. You can prompt Neo when you want, but you don’t have to over-explain—Neo already has the context.
Why Neo is different
Context-aware AI that reduces prompting
Privacy and security built into the browser
Configurable memory — you control what’s remembered
As AI gets more powerful, Neo is built to make it useful, trustworthy, and friction-light.