8 million videos analyzed by Google Machine Learning to create an Intelligent Archive - A race to the finish


The developers at Zorroa are always up for a challenge - the bigger the better. So when the team at Google approached us about using our Zorroa Visual Intelligence platform to demonstrate their latest and greatest video analysis algorithms, we said absolutely. The conversation went sort of like this:

Google:     Can Zorroa help us demonstrate our improved video algorithms?

Zorroa:     Of course we can - that’s a great use case for Zorroa Visual Intelligence

Google:     Great - we are going to use our YouTube dataset

Zorroa:     Oh - you mean that dataset of 8 million videos?!?

Google:     Yes, that’s the one

Zorroa:     OK... no problem. That’s a lot of data to train your algorithm on

Google:     Fantastic - we need this for a conference demo in about two weeks

Zorroa:     O...K... We should probably get started on that then…

While not the typical Zorroa deployment, being a Google partner has its advantages. In addition to having a great working relationship with the Google team, we have deployed Zorroa Visual Intelligence with a number of Google Cloud Platform (GCP) users, so getting started was the easiest part. What followed was another learning opportunity for the Zorroa team.


Intuitive interface makes it easy to search and browse thousands to millions of images and videos

How did we go from concept to deployment in about a week?

The Challenge: Showcase the results of Google’s improved video analysis algorithm

Our Goal: A live demo in the Google booth at the IBC conference

The Process:
Step 1: Google trained their improved algorithm on 6 million videos from the YouTube-8M dataset. The algorithm analyzes video and audio and automatically generates labels and confidence scores based on the contents of each video.

Step 2: Google ran the machine learning algorithms on the videos to extract the relevant data - then Zorroa ingested the analyzed videos and the associated metadata for each.

Step 3: The Zorroa Visual Intelligence platform was used to run additional analysis and to support an interactive demonstration for searching, browsing, and organizing the dataset using Google's video analysis labels and confidence scores.

Step 4: Zorroa showed the Google team how to use the intuitive Zorroa Visual Intelligence interface to search, browse, and organize the first 31,000 videos analyzed in the dataset.
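To make the steps above concrete, here is a minimal sketch of the kind of label-and-confidence filtering the demo supports. The field names (`labels`, `id`) and the threshold are illustrative assumptions for this post, not Zorroa's or Google's actual schema or API.

```python
# Hypothetical sketch: each analyzed video carries machine-generated labels
# with confidence scores; searching the archive filters on both.
# Field names ("id", "labels") are assumptions for illustration only.

def find_videos(videos, label, min_confidence=0.8):
    """Return videos tagged with `label` at or above `min_confidence`."""
    return [
        v for v in videos
        if v["labels"].get(label, 0.0) >= min_confidence
    ]

# Example metadata records of the kind a video-analysis pass might produce
videos = [
    {"id": "vid-001", "labels": {"basketball": 0.97, "sports": 0.99}},
    {"id": "vid-002", "labels": {"cooking": 0.91}},
    {"id": "vid-003", "labels": {"basketball": 0.42, "sports": 0.88}},
]

matches = find_videos(videos, "basketball", min_confidence=0.8)
print([v["id"] for v in matches])  # ['vid-001']
```

At the scale of millions of videos the real platform indexes this metadata rather than scanning lists, but the search semantics - match a label, gate on its confidence score - are the same.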


Need all your ESPN clips? No problem. Just basketball clips? Easily find what you are looking for.

Success: Zorroa and the Google team demonstrated the dataset using the results of the latest Google machine learning algorithms at the IBC conference while talking to hundreds of potential customers!

Next Steps:

  • Add drag-and-drop capability for adding and analyzing new assets
  • Run optical character recognition on the dataset to extract additional data and insights from the videos

Come see for yourself at Google Next London or the NAB Show in New York

Following the successful demonstrations with the Google team in Amsterdam, we are headed to Google Next London, where we will be on October 10th and 11th, followed by a presentation on the Google stage at NAB New York on October 17th.

Can’t get to London or New York? Drop us an email and we can schedule a demo for you. We would love to talk about your own “8 million video” challenge, how we can help you solve it, and how to make the most of your valuable and always growing repository of visual assets.

We can’t promise a solution in a week - but the Zorroa team is always up for a challenge.