Startup

Date: 25-09-2019, 05:51

Can artificial intelligence develop and grow more complex if it faces competitive conditions like those life has faced since its appearance on Earth? Will the tools of natural selection work on it in a virtual environment? These are the questions asked by scientists at OpenAI, a nonprofit company engaged in artificial intelligence research.
The scientists conducted a remarkable experiment in artificial evolution. They created a virtual space, populated it with self-learning bots, and made them play hide-and-seek again and again; in total, about 500 million rounds were played. Over time, the researchers noticed that the AI developed its own strategies and learned to counter the techniques devised by the opposing bots.
At first, the seeking bots (hunters) and the hiding bots (victims) simply ran around the game space. But after about 25 million games, the victims learned to use boxes to block the exits and barricade themselves inside rooms. They also developed a cooperative, flock-like strategy that let them work together, passing boxes to each other to quickly block the exits against the predators.
In response, after about 75 million games, the predators learned to get around the barricades by moving ramps and using them to climb over obstacles.
After 85 million games, the victims learned to take the ramps with them and hide them inside their “fort”, depriving the predators of a valuable tool.
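The escalation described above can be sketched as a toy self-play loop. This is only an illustration of the dynamic, not OpenAI's actual code or environment; the strategy names and their counter-relationships are hypothetical, chosen to mirror the article's sequence (running, barricading, using ramps, stealing ramps). Each side holds a small pool of strategies, and after every round the losing side adapts by moving to its next strategy:

```python
# Hypothetical strategy pools; higher index roughly counters the earlier ones.
HIDER_STRATEGIES = ["run", "barricade", "steal_ramps"]
SEEKER_STRATEGIES = ["chase", "use_ramps"]

def round_winner(hider, seeker):
    """Crude counter-relationships between the toy strategies."""
    if hider == "run":
        return "seeker"                       # chasing beats running
    if hider == "barricade":
        return "seeker" if seeker == "use_ramps" else "hider"
    return "hider"                            # stealing the ramps wins outright

def self_play(rounds=6):
    h = s = 0                                 # indices into the strategy pools
    history = []
    for _ in range(rounds):
        winner = round_winner(HIDER_STRATEGIES[h], SEEKER_STRATEGIES[s])
        history.append((HIDER_STRATEGIES[h], SEEKER_STRATEGIES[s], winner))
        # The losing side adapts by advancing to its next strategy, if any.
        if winner == "seeker" and h < len(HIDER_STRATEGIES) - 1:
            h += 1
        elif winner == "hider" and s < len(SEEKER_STRATEGIES) - 1:
            s += 1
    return history

for hider, seeker, winner in self_play():
    print(f"hider={hider:11s} seeker={seeker:9s} -> {winner} wins")
```

Running the loop shows the alternating pressure the article describes: each time one side starts winning, the other side is forced onto a new strategy, until the hiders reach a strategy the seekers cannot counter.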
According to OpenAI's Bowen Baker, all of this is one of the most striking examples of evolutionary natural-selection tools working in a virtual space. “As soon as one team learns a new strategy, this creates pressure on the other team, forcing it to adapt. This is an interesting analogue of how humans evolved on Earth, where there was constant competition between organisms,” says Baker.
The bots' development did not stop there. At some point they learned to exploit vulnerabilities and bugs in their virtual world: for example, the victims learned to get rid of the dangerous ramps for good by pushing them through wall textures. According to the scientists, this suggests that artificial intelligence can find solutions to complex problems that people would never think of. It also means that, in an unforeseen situation, restraining an out-of-control AI could be much harder than we expect.
Source: igate.com.ua