Tuesday, October 22, 2024

AI conquers the challenge of 1980s platform games


AI: Scientists have come up with a computer program that can master a variety of 1980s exploration games, paving the way for more self-sufficient robots. They created a family of algorithms able to complete classic Atari games, such as Pitfall. These scrolling platform games have previously been challenging to solve using artificial intelligence (AI).

The algorithms could also help robots navigate real-world environments, which remains a core challenge in robotics and artificial intelligence. The environments in question include disaster zones, where robots could be sent to search for survivors, and even the average home.

A number of the games used in the research require the player to explore worlds containing rewards, obstacles, and hazards. The family of algorithms, known collectively as Go-Explore, produced substantial improvements over previous attempts to solve games such as the wittily titled Montezuma’s Revenge (1984), Freeway (1981), and the aforementioned Pitfall (1982). One way the researchers achieved this was by developing algorithms that build up archives of areas they have already visited.

“Our method is indeed pretty simple and straightforward, although that is often the case with scientific breakthroughs,” researchers Adrien Ecoffet, Joost Huizinga, and Jeff Clune said in response to questions sent over email.

“The reason our approach hadn’t been considered before is that it differs strongly from the dominant approach that has historically been used for addressing these problems in the reinforcement learning community, called ‘intrinsic motivation’. In intrinsic motivation, instead of dividing exploration into returning and exploring as we do, the agent is simply rewarded for discovering new areas.”
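The intrinsic-motivation baseline the researchers contrast themselves with can be illustrated with a simple count-based novelty bonus. This is a hypothetical toy version for illustration only, not the reward used in any particular paper:

```python
from collections import Counter

# Hypothetical count-based novelty bonus: the agent receives extra reward
# for visiting rarely seen states, on top of the game's own score.
visit_counts = Counter()

def intrinsic_reward(state, bonus_scale=1.0):
    """Return a bonus that shrinks the more often a state has been visited."""
    visit_counts[state] += 1
    return bonus_scale / (visit_counts[state] ** 0.5)
```

Because the bonus fades as a state is revisited, an agent trained only on this signal can drift away from a half-explored region and never come back, which is the failure mode discussed below.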

The algorithms could help improve robot intelligence

A problem with the intrinsic motivation approach is that, while searching for a solution, the algorithm can “forget” about promising areas that still need to be explored. This is known as ‘detachment’.


The team found a way to overcome this: by compiling an archive of the areas it has visited, the algorithm can return to a promising intermediate stage of the game and use it as a point from which to explore further.

There was another problem with previous approaches to mastering these games: they rely on random actions, which may be taken at any point, including while the agent is still travelling towards the area that actually needs to be explored. That is a liability in environments where actions have to be accurate and precise, such as a game with many hazards that can instantly kill you. Go-Explore sidesteps this by first returning to an archived state without taking any exploratory actions, and only then exploring.
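Putting the pieces together, the archive-and-return idea can be sketched in a toy deterministic environment. This is a minimal, hypothetical illustration of the return-then-explore loop, not the researchers' actual implementation; the grid world, function names, and selection strategy are all stand-ins:

```python
import random

# Hypothetical toy world standing in for an Atari game: the agent moves
# along a line of SIZE cells and earns a reward for reaching the far end.
SIZE = 20
ACTIONS = [-1, 1]

def step(state, action):
    """Apply an action; returns (new_state, reward)."""
    new_state = max(0, min(SIZE - 1, state + action))
    reward = 1.0 if new_state == SIZE - 1 else 0.0
    return new_state, reward

def go_explore(iterations=200, explore_steps=5, seed=0):
    """Sketch of the return-then-explore loop: keep an archive mapping
    each visited state to the shortest action sequence known to reach it;
    repeatedly pick an archived state, replay its trajectory to return
    there deterministically (avoiding random actions on the way), then
    take a short burst of random steps, archiving any new states found."""
    rng = random.Random(seed)
    archive = {0: []}          # state -> list of actions reaching it
    best = (0.0, [])           # best (reward, trajectory) seen so far
    for _ in range(iterations):
        # Select a state to return to (uniformly here; the real method
        # favours promising or rarely visited cells).
        cell = rng.choice(list(archive))
        trajectory = list(archive[cell])
        # Return: replay the stored actions deterministically.
        state = 0
        for a in trajectory:
            state, _ = step(state, a)
        # Explore: random actions only from the restored state onward.
        for _ in range(explore_steps):
            a = rng.choice(ACTIONS)
            state, reward = step(state, a)
            trajectory.append(a)
            # Archive new states, or shorter routes to known states.
            if state not in archive or len(trajectory) < len(archive[state]):
                archive[state] = list(trajectory)
            if reward > best[0]:
                best = (reward, list(trajectory))
    return archive, best
```

Because the environment is deterministic, replaying an archived trajectory always lands the agent back in the archived state, so no exploration budget is wasted on the journey there.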

In addition to robotics, Go-Explore has already seen some experimental use in language learning, where an agent learns the meaning of words by exploring a text-based game, and in discovering potential failures in the behavior of a self-driving car.
