Computer vision: Finding the best teaching frame in a video to fight back against fake videos
ANN ARBOR—Contributing to a project that aims to detect “deepfake” videos, University of Michigan engineers developed software that improves a computer’s ability to track an object through a video clip by 11% on average.
The software, called BubbleNets, chooses the best frame for a human to annotate. In addition to helping train algorithms for spotting doctored clips, it could improve computer vision in many emerging areas such as driverless cars, drones, surveillance and home robotics.
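The name BubbleNets nods to bubble sort: the system compares frames in pairs and lets the predicted-best annotation frame "bubble up." As a minimal sketch of that selection idea, the snippet below runs one bubble-sort pass over a list of frames. The `prefer` comparator is a hypothetical stand-in; the actual BubbleNets learns its pairwise frame comparisons with a deep network rather than using a hand-written score.

```python
def bubble_pass(frames, prefer):
    """One bubble-sort pass: swap adjacent frames whenever `prefer` predicts
    the earlier frame is the better one to annotate, so the predicted-best
    frame ends up in the last position."""
    frames = list(frames)
    for i in range(len(frames) - 1):
        if prefer(frames[i], frames[i + 1]):
            frames[i], frames[i + 1] = frames[i + 1], frames[i]
    return frames[-1]  # frame to hand to the human annotator

# Toy usage: frames are (frame_id, quality_score) pairs and `prefer`
# simply picks the higher score -- a placeholder for a learned predictor.
frames = [("f0", 0.2), ("f1", 0.9), ("f2", 0.5)]
best = bubble_pass(frames, prefer=lambda a, b: a[1] > b[1])
print(best[0])  # f1
```

The bubble-pass framing matters because a learned comparator only gives relative judgments between two frames; repeated adjacent comparisons turn those local judgments into a single global pick without ever needing an absolute quality score.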
“The U.S. government has a real concern about state-sponsored groups manipulating videos and releasing them on social media,” said Brent Griffin, U-M assistant research scientist in electrical and computer engineering. “There are way too many videos for analysts to assess, so we need autonomous systems that can detect whether or not a video is authentic.”