Date: 3-09-2019, 05:43
American and Japanese developers have trained a neural network to create a color 3D model of a person from one or more photographs. Notably, the algorithm reconstructs with fairly high quality even the view from the back, which is not visible in the original image, say the authors of the paper, which will be presented at the ICCV 2019 conference.
Reportedly, the solution, called PIFu (Pixel-aligned Implicit Function), consists of two consecutive convolutional neural networks: the first analyzes the source image (or several source images), detects the human body in it, and reconstructs a 3D model from it using a technique called Marching Cubes; the second adds color to the resulting model.
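The core idea behind a pixel-aligned implicit function is that, for any 3D point, an occupancy value is predicted from the image feature at the point's 2D projection plus its depth; Marching Cubes then extracts a mesh from the resulting occupancy field. The following is a minimal toy sketch of that query step, not the authors' code: the `toy_mlp` stand-in and all function names are hypothetical, and a trained network would replace the hand-written occupancy rule.

```python
# Hypothetical sketch of a pixel-aligned implicit function:
# occupancy(point) = mlp(feature_at_projection(point), depth(point)).
import numpy as np

def project(point, focal=1.0):
    """Pinhole projection of a 3D point (x, y, z) to normalized pixel coords (u, v)."""
    x, y, z = point
    return focal * x / z, focal * y / z

def sample_feature(feature_map, u, v):
    """Nearest-neighbour lookup of the image feature at (u, v),
    assuming the feature map covers the square [-1, 1] x [-1, 1]."""
    h, w, _ = feature_map.shape
    col = int(np.clip((u + 1) / 2 * (w - 1), 0, w - 1))
    row = int(np.clip((v + 1) / 2 * (h - 1), 0, h - 1))
    return feature_map[row, col]

def occupancy(point, feature_map, mlp):
    """Pixel-aligned query: combine the 2D feature at the point's
    projection with the point's depth, and predict occupancy."""
    u, v = project(point)
    feat = sample_feature(feature_map, u, v)
    return mlp(np.concatenate([feat, [point[2]]]))

# Toy stand-in for the trained MLP: treats everything within 0.5 of
# depth 2.0 (and with a positive feature) as "inside" the body.
def toy_mlp(x):
    feat, depth = x[:-1], x[-1]
    return 1.0 if abs(depth - 2.0) < 0.5 and feat.mean() > 0 else 0.0

feature_map = np.ones((8, 8, 4))  # stand-in for CNN features of the photo
inside = occupancy(np.array([0.0, 0.0, 2.0]), feature_map, toy_mlp)
outside = occupancy(np.array([0.0, 0.0, 5.0]), feature_map, toy_mlp)
print(inside, outside)  # 1.0 0.0
```

In the real pipeline this query would be evaluated over a dense 3D grid, and Marching Cubes would then trace the 0.5 level set of the occupancy field to produce the mesh; the second network would color the mesh vertices the same way, by sampling pixel-aligned features.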
To train the algorithm, the researchers used the RenderPeople dataset, which consists of high-quality 3D models of people captured with photogrammetric scanners.
In the video published by the authors, the algorithm can be seen reconstructing the whole model with good quality, including the back. The video also demonstrates that using three frames instead of one noticeably improves the quality of the final model. Finally, the authors showed that in many respects the algorithm reconstructs 3D models better than comparable algorithms from other developers.
According to the researchers, their work is another step toward a method for extracting 3D scenes from ordinary videos. In future work, the engineers intend to train the neural network to reconstruct 3D models of arbitrary objects from photographs.
Source: ITC