Do you have hazy memories of the Mannequin Challenge? Well, the viral YouTube trend of 2016 has now been used to train a neural network to understand 3D scenes.
The context: Humans are good at interpreting 2D videos as 3D scenes, but machines have to be taught how to do it. That ability could help robots maneuver through unfamiliar surroundings.
The data: The Mannequin Challenge involved standing still while someone moved around you, filming the pose from all angles. It was silly fun, but these videos also happen to be a novel source of data for understanding the depth of a 2D image.
The method: A team at Google AI converted 2,000 videos of people performing the challenge into 2D images and used them to train a neural network. The resulting model predicted the depth of moving objects in a video with much higher accuracy than previous methods.
Data privacy: This data-scraping practice calls into question the industry's norms around consent. Technologists should consider whether the way they're using someone's data aligns with the spirit in which it was originally generated and shared.
—Karen Hao