The moon landing was one of the most iconic events of our time, and that’s probably an understatement: it’s certainly the farthest we have ever physically ventured beyond Earth. I’ve listened to some of the old radio broadcasts from that event, and they help you imagine just how momentous the voyage was.
Now, researchers are saying that we could get back there sooner rather than later, and that we might soon even have human missions to Mars.
What’s going to be different this time around?
One thing is this: robots are going to be helping out – a lot.
At MIT, some of our teams are working on groundbreaking applications, not just for terrestrial concerns but also for the far reaches of space. One project at our CSAIL lab uses electromagnetism to build shape-shifting robot cubes, work that could pioneer ways to ruggedize our systems so they can explore these environments more capably.
We also have people at the SPARK Lab working on giving these systems human-level perception: for example, building an ‘ontology of concepts’ with LLMs that could be useful for space exploration and related tasks.
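To make that idea a little more concrete, here’s a minimal sketch of what an ontology of concepts might look like in code: a graph of concepts linked by ‘is-a’ relations, where a language model proposes the broader category for each new concept. The query_llm stub and the concept names are hypothetical placeholders for illustration, not a real API or the SPARK Lab’s actual method.

```python
# Minimal sketch: an "ontology of concepts" as a graph of is-a relations,
# with candidate parent concepts proposed by a language model.
from collections import defaultdict

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; a real system would query an actual model."""
    canned = {
        "rover": "vehicle",
        "vehicle": "equipment",
        "drill": "tool",
        "tool": "equipment",
    }
    concept = prompt.split()[-1]  # naive: take the last word of the prompt
    return canned.get(concept, "entity")

class Ontology:
    def __init__(self):
        self.parents = {}                 # concept -> parent concept
        self.children = defaultdict(set)  # parent -> child concepts

    def add_concept(self, concept: str) -> None:
        # Ask the model for the broader category, then link it in the graph.
        parent = query_llm(f"Name the broader category of a {concept}")
        self.parents[concept] = parent
        self.children[parent].add(concept)

    def ancestors(self, concept: str):
        # Walk up the is-a chain until we reach a root concept.
        while concept in self.parents:
            concept = self.parents[concept]
            yield concept

onto = Ontology()
for c in ["rover", "drill", "vehicle", "tool"]:
    onto.add_concept(c)

print(list(onto.ancestors("rover")))  # ['vehicle', 'equipment']
```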
What are we going to do with these new and brilliant machines?
Recently, we heard from Annika Thomas talking about her undergraduate work in astrophysics and her history of looking through telescopes at the universe beyond Earth’s atmosphere. She talked about developing black-and-white images into color, and otherwise finding new ways of looking at space.
Here’s another project she talked about that I think is fundamental to MIT’s space research:
Essentially, we are working with universities in Portugal and other stakeholders on ‘hyperspectral imaging’: pictures that record dozens or hundreds of narrow spectral bands at each pixel, rather than just red, green, and blue.
This will help robots assist humans in exploring unfamiliar landscapes.
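To give a sense of what that means in practice, here’s a minimal sketch in Python, assuming a synthetic NumPy cube where every pixel carries a full spectrum instead of three color values. The band indices are illustrative, not taken from any particular instrument.

```python
# Minimal sketch: a hyperspectral image as a (height, width, bands) cube,
# with a per-pixel index computed from two of its spectral bands.
import numpy as np

rng = np.random.default_rng(0)
cube = rng.random((64, 64, 120))  # synthetic cube: 120 bands per pixel

def band_ratio_index(cube: np.ndarray, b1: int, b2: int) -> np.ndarray:
    """Normalized difference between two bands, computed per pixel.

    With bands chosen near the near-infrared and red wavelengths, this is
    the familiar NDVI used to detect vegetation, or chlorophyll-rich water
    such as phytoplankton blooms.
    """
    a = cube[..., b1].astype(float)
    b = cube[..., b2].astype(float)
    return (a - b) / (a + b + 1e-9)  # epsilon avoids division by zero

index = band_ratio_index(cube, b1=80, b2=50)  # hypothetical NIR/red bands
print(index.shape, index.min(), index.max())  # (64, 64), roughly [-1, 1]
```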
“When it comes to putting humans on the moon, or in the next couple of decades, putting humans on Mars, something that we have nowadays that we didn’t have as much of back then is robots that are there to help out humans … They’re going to help assist and extend tasks that might otherwise be dangerous to humans. And something that these robots need to do that is a really big part of research right now is learn how to map the environment. And not only map the environment, but also do so (in such a way) that they can share their maps with one another, and basically distribute tasks among different robots.” – Annika Thomas
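The map-sharing part lends itself to a simple illustration. Here’s a minimal sketch, assuming two robots maintain occupancy grids over the same area; fusing the grids in log-odds space lets each cell combine the evidence from both robots. This is a textbook technique, not necessarily the method Thomas’s group uses.

```python
# Minimal sketch: fusing two robots' occupancy-probability grids.
import numpy as np

def to_log_odds(p: np.ndarray) -> np.ndarray:
    return np.log(p / (1.0 - p))

def from_log_odds(l: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-l))

def fuse_maps(p_a: np.ndarray, p_b: np.ndarray) -> np.ndarray:
    """Combine two occupancy grids of the same shape (0.5 = unknown)."""
    return from_log_odds(to_log_odds(p_a) + to_log_odds(p_b))

# Robot A is confident about the left half; robot B about the right half.
p_a = np.full((4, 4), 0.5)
p_a[:, :2] = 0.9   # A thinks the left half is occupied
p_b = np.full((4, 4), 0.5)
p_b[:, 2:] = 0.1   # B thinks the right half is free

fused = fuse_maps(p_a, p_b)
print(np.round(fused, 2))  # left half ~0.9 occupied, right half ~0.1 free
```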
Other interesting applications include self-driving space vehicles and tools for dealing with ‘space junk,’ the orbital debris that is a serious and growing problem.
In addition to going out and building robust maps, the robots can also divide up tasks among themselves, as mentioned above; one simple allocation scheme is sketched below.
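Here’s a minimal sketch of task distribution, assuming each robot can report a cost (say, travel distance) for each task; the robots, tasks, and costs are hypothetical. Real systems typically use auction-based or optimal-assignment algorithms, but a greedy pass shows the idea.

```python
# Minimal sketch: greedy task allocation from a robot-task cost table.
# costs[robot][task]: hypothetical travel costs for illustration.
costs = {
    "robot_1": {"sample_rock": 4.0, "scan_crater": 9.0, "relay_comms": 2.5},
    "robot_2": {"sample_rock": 6.0, "scan_crater": 3.0, "relay_comms": 7.0},
}

def greedy_assign(costs):
    # Sort all (cost, robot, task) pairs, cheapest first.
    pairs = sorted(
        (c, r, t) for r, tasks in costs.items() for t, c in tasks.items()
    )
    assigned_robots, assigned_tasks, plan = set(), set(), {}
    for cost, robot, task in pairs:
        # Take the cheapest pair whose robot and task are both still free.
        if robot not in assigned_robots and task not in assigned_tasks:
            plan[robot] = task
            assigned_robots.add(robot)
            assigned_tasks.add(task)
    return plan

print(greedy_assign(costs))
# {'robot_1': 'relay_comms', 'robot_2': 'scan_crater'}
```

Note that a greedy pass assigns each robot at most one task and can leave tasks unclaimed (here, sample_rock); a real allocator would iterate or solve the assignment optimally.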
All of this is amazing to think about as we move forward with this research.
It has applications on Earth, too: for example, monitoring phytoplankton blooms or other kinds of trends in our oceans.
But when it comes to deep space, we could really benefit from our AI tools in ways that those earlier astronauts could not.