videospace

Digitally Transforming GLAM (Galleries, Libraries, Archives and Museums)

Some problems today are so complex that they can only be solved using AI. This certainly applies to what many consider the last frontier in search technology - Audio-Visual Media.

By using a combination of AIs (like speech and face recognition), Videospace unlocks hundreds of thousands of hours of knowledge within your media libraries by making them accessible and discoverable.

With the World's First Translated Video Search, we can further unleash the full potential of your media library by making it searchable in other languages, extending its accessibility and discoverability!

Besides running Videospace on a world-class video platform (the same platform used by the 2012 and 2016 Olympics), we are using a combination of the following advanced technologies:

  • Speech Recognition (over 100 languages)

  • Translation (over 60 languages)

  • Face Recognition

  • Video OCR (up to 26 languages)

  • Natural Language Processing (over 20 languages)

  • Video Search Engine - index and search video in time-series

  • World’s First Translated Search Engine – searches over 6,000 different language pairs
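The time-series indexing mentioned above can be pictured as an inverted index whose postings carry timestamps, so a search lands you at the exact second a word is spoken. A minimal, purely illustrative sketch (all names are hypothetical, not Videospace's actual implementation):

```python
from collections import defaultdict

class TimeCodedIndex:
    """Toy inverted index: each word maps to (video_id, seconds) postings."""

    def __init__(self):
        self.postings = defaultdict(list)

    def add_transcript(self, video_id, transcript):
        # transcript: list of (seconds, word) pairs, e.g. from speech recognition
        for seconds, word in transcript:
            self.postings[word.lower()].append((video_id, seconds))

    def search(self, word):
        # Every (video_id, timestamp) where the word is spoken
        return self.postings.get(word.lower(), [])

index = TimeCodedIndex()
index.add_transcript("lecture01", [(12.5, "Olympics"), (88.0, "archive")])
index.add_transcript("news02", [(3.0, "archive")])

print(index.search("archive"))  # [('lecture01', 88.0), ('news02', 3.0)]
```

A production engine would of course add ranking, phrase queries and scale-out storage; the point is only that video search is search over time-coded data.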

To find out more, please CONTACT US

ANNOUNCEMENT: Global Launch of AIspace – The Next Generation of A.I. Storage

AIspace_banner_large.png

Singapore, 19 April 2019: Babbobox officially announces the global launch of AIspace (pronounced as "i"space) – The Next Generation of A.I. Storage in Singapore.

Enterprises know the need for digital transformation, and they know that Artificial Intelligence (A.I.) will play a big role in that transformation. However, many perceive A.I. to be out of reach, either because they think it is too costly or because they know little about its benefits to them. This is about to change.

AIspace’s Mission is “To make A.I. accessible to all enterprises”

In AIspace, we are infusing A.I. into something all enterprises need - Storage - by applying A.I. to the digital assets (documents, images, audio and video) that already exist within your organization. For the first time ever, enterprises will be able to index and search inside all their documents, images, audio and video on a single platform using the World's First Unified Search Engine.

Beyond Search, you can now apply A.I. to documents and images for analysis. This will open up a whole world of future possibilities, especially in Big Data. 

AIspace is simply what enterprise storage should really be - Intelligent!

About Babbobox

AIspace is a service fully owned by Babbobox. Babbobox developed one of the world's most advanced A.I. Video Search Engines, combining numerous advanced technologies (Speech Recognition, Video OCR, Cognitive Services, Image Analysis, Artificial Intelligence and Enterprise Search) into a single platform.

Babbobox started as a Cloud Document Management System focused on helping enterprises organize their digital assets. In 2017, Babbobox launched VideoSpace - the next generation of Video A.I. Platform - and has since evolved into a global leader in Video Search Engine technologies. We are using these breakthroughs in our data and video platforms to enable enterprises to unleash the true value of their digital assets.

AIspace was launched in 2019 with the mission of making A.I. accessible to all enterprises by infusing A.I. into storage.

To find out more, please CONTACT US

Wishing all a Merry Christmas and a Fantastic New Year!

2018 has been another breakthrough year for us as we traveled the world and launched another TWO World's Firsts.

We would like to thank all of you for taking this sensational journey with us! We just can't wait to take on 2019! 

Wishing you a Merry Christmas and a Fantastic 2019!

Yours sincerely, 

All of us at Babbobox

Video Big Data Whitepaper (FREE download)

video big data videospace

The term "Video Big Data" is rarely heard of. The reasons are pretty simple: 

  1. It's difficult to extract data from videos
  2. It's difficult to make sense of unstructured video data

Therefore, it is no overstatement to say that video is the most difficult medium to search and extract intelligence from. However, given the amount of video generated daily in the public domain (e.g. YouTube) and the private domain (e.g. broadcasters, CCTV, education, etc.), it is also no overstatement to say that video is the King of Content. 

The objective of Big Data is to gain Business Intelligence. Video Big Data is no different. The obvious difference is the source and the type of data that can be extracted from videos.

This Video Big Data Whitepaper aims to explain how we can extract value and intelligence from videos with a three-step approach:

  1. Extract video data
  2. Transform unstructured video data
  3. Analyse the data into intelligence
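The three steps above map naturally onto an extract-transform-analyse pipeline. Here is a deliberately tiny sketch with stand-in functions (the function names and the word-counting logic are invented for illustration; the whitepaper's actual tooling is far richer):

```python
def extract(video_transcript):
    """Step 1: pull raw, unstructured data out of the video (here: transcript words)."""
    return video_transcript.split()

def transform(words):
    """Step 2: impose structure - normalize the words and count term frequencies."""
    counts = {}
    for word in words:
        key = word.lower().strip(".,!?")
        counts[key] = counts.get(key, 0) + 1
    return counts

def analyse(counts, top_n=3):
    """Step 3: turn structure into intelligence - the most-discussed terms."""
    return sorted(counts, key=counts.get, reverse=True)[:top_n]

transcript = "Data data everywhere. Video data is big. Big video, big data!"
print(analyse(transform(extract(transcript))))  # ['data', 'big', 'video']
```

Each real-world stage (speech recognition, NLP, analytics) is vastly more complex, but the shape of the pipeline is the same.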

With this whitepaper, we hope to share some of our knowledge and experience working with Video Big Data. From our calculations, we estimate that Video Big Data will dwarf Big Data as we know it - hence the importance of this whitepaper. We hope you enjoy and benefit from it!

Yours sincerely,

The VideoSpace Team
 

Bringing AI Video Search to Asia

babbobox_videospace_alex_chan_broadcast_asia.jpg

We are super excited about bringing our A.I. Video Search to Broadcast Asia after starting out in the UK, US and China in 2018. It feels so good to be home!

Babbobox CEO Alex Chan will be talking about "The Age of AI" and how it will transform the entire broadcast and media industry with Video Search, Personalized Content and Video Big Data.

We will also be making a big announcement and showcasing it during the show! We are pretty sure it will blow you away! So do drop by and say Hi!

Video Big Data (Part 3) – From Mess to Intelligence?

The objective of Big Data is to gain Business Intelligence. Video Big Data is no different. The obvious difference is the source and the type of data that can be extracted from videos. Therein lie the main challenges - Extraction, Transformation and Analysis.

videospace-video-big-data.png

 

In this installment, we will explain why Artificial Intelligence is central to making sense of the “mess” in video big data.

In the first installment (Part 1), we explained:

  • Why Video Big Data will absolutely dwarf current Big Data, and
  • How Video is the most difficult medium to extract data from

In the previous installment (Part 2), we examined:

  • the kind of data elements that we can extract from videos (speech, text, objects, activities, motion, faces, emotions)

But first, let’s examine why there is a mess in video data. The short explanation is that a large part of video data is unstructured - in particular, data from speech and text. For example, speech extracted from a 30-minute news segment could cover multiple topics and events and mention numerous places and persons. To add to the complexity, we have to time-align when these words are spoken. In many ways, text (e.g. slide presentations that appear in videos) is the same.

Thus, we have to answer two key questions:

  1. How do we make sense of ‘messy’ video data?
  2. How can we extract knowledge or intelligence from that mess?

The answer lies in another form of Artificial Intelligence (A.I.) - the study of Natural Language Processing (NLP). That is because it can process and attempt to make sense of unstructured text in the following areas:

  • Topic detection
  • Key phrase extraction
  • Sentiment analysis

That is because NLP can be used to turn unstructured video data into structured data. Only then can we start making sense of the data and manipulating it into intelligence or actionable items like alerts, triggers, etc.
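As a toy illustration of how NLP turns free transcript text into structured, queryable records: a real system would use trained models for sentiment and key phrases, but the shape of the output is similar. The lexicon, field names and stop-word list below are all invented for the sketch:

```python
# Toy lexicon-based sentiment scorer - a stand-in for a trained NLP model.
POSITIVE = {"great", "excellent", "love", "good"}
NEGATIVE = {"bad", "terrible", "hate", "poor"}
STOPWORDS = {"the", "is", "a"}

def structure_segment(start, end, text):
    """Turn one time-aligned transcript segment into a structured record."""
    words = [w.lower().strip(".,!?") for w in text.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return {
        "start": start,
        "end": end,
        "sentiment": "positive" if score > 0 else "negative" if score < 0 else "neutral",
        "keywords": sorted(set(words) - POSITIVE - NEGATIVE - STOPWORDS),
    }

record = structure_segment(0, 30, "The new archive search is excellent")
print(record["sentiment"], record["keywords"])  # positive ['archive', 'new', 'search']
```

Once every segment is a record like this, the "mess" becomes data you can filter, aggregate and trigger alerts on.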

The field of Video Big Data is just starting. Without advancements in multiple areas of Artificial Intelligence (Speech Recognition, Computer Vision, Facial Analysis, Text Analytics, etc.), Video Big Data wouldn’t even exist, as it needs these fields to work in tandem or in sequence.

Given the rate at which we are producing videos, alongside our growing ability to extract video data using A.I., the only way is up - and we are not even close to uncovering the tip of the Video Big Data iceberg.

Video Big Data will be bigger than BIG. 

VideoSpace will be right in the middle of it all. Let’s put this prediction into a time capsule and revisit it in a few years.

Video Big Data (Part 2) - What kind of Video Data?

videospace-video-big-data.jpg

In the last installment, we explained:

  • Why Video Big Data will absolutely dwarf current Big Data
  • How Video is the most difficult medium to extract data from

This explains why Video Big Data remains a largely unexplored field. But it also means there are immense opportunities available, because we have not even scraped the tip of this huge data iceberg. 

In this installment, we will examine the kind of data elements that we can extract from videos. 

1. Speech
In an hour of video, a person can say up to 9,000 words, so imagine the amount of data from speech alone. However, the process of transcribing speech is filled with problems, and we are only now starting to reach an acceptable level of accuracy.

2. Text
Besides speech, text is probably the second most important element inside videos. For example, in a presentation or lecture, the speaker would augment the session with a set of slides; or think of news tickers appearing during a news broadcast. 

3. Objects
There can be thousands of objects inside a video across different timeframes. Therefore, it can be quite challenging to identify what objects are in the video content and in which scenes they appear. 

4. Activities
The difference between video and still images is motion. Different video scenes contain complex activities, such as “running in a group” or “driving a car”. The ability to extract activities gives a lot of insight into what the videos are about. This includes offensive content that might contain nudity and profanity.

5. Motion
Detecting motion enables you to efficiently identify sections of interest within an otherwise long and uneventful video. That might sound simple, but what if you have 10,000 hours of video to review every night? Eyeballing every minute of video is a near impossible task.
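At its simplest, motion detection reduces to frame differencing: compare consecutive frames and flag timestamps where enough pixels change. A pure-Python sketch on tiny synthetic grayscale frames (a production system would decode real video and use far more robust methods; the thresholds here are arbitrary):

```python
def motion_sections(frames, threshold=10, min_changed=3):
    """Return indices of frames that changed versus the previous frame.

    frames: list of equal-length lists of grayscale pixel values (0-255).
    A frame is flagged if at least `min_changed` pixels differ from the
    previous frame by more than `threshold`.
    """
    flagged = []
    for i in range(1, len(frames)):
        changed = sum(
            abs(a - b) > threshold for a, b in zip(frames[i], frames[i - 1])
        )
        if changed >= min_changed:
            flagged.append(i)
    return flagged

still = [50] * 16                  # a static 4x4 frame, flattened
moving = [50] * 8 + [200] * 8      # half the pixels changed

print(motion_sections([still, still, moving, still]))  # [2, 3]
```

Instead of eyeballing 10,000 hours, a reviewer only inspects the flagged sections.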

6. Faces
Detecting faces in videos adds face detection ability to any surveillance or CCTV system. This is useful for analyzing human traffic within a mall, a street, or even a restaurant or café. When we include facial recognition, it opens up another data dimension.

7. Emotion
Emotion detection is an extension of Face Detection that returns analysis of multiple emotional attributes from the faces detected. With emotion detection, one can gauge an audience’s emotional response over a period of time.

This list of video data is certainly not exhaustive, but it is definitely a good starting point for the field of Video Big Data. In the next installment, we will examine some of the techniques used to extract this video data. 

Yours sincerely,

The Babbobox Team

Video Big Data (Part I) - An Introduction

babbobox-video-big-data.jpg

YouTube sees more than 300 hours of video uploaded every minute. That's 432,000 hours in 1 day, or 158 million hours in 1 year - 18,000 years' worth of videos uploaded in a single year. And that's just YouTube! If we add all the other videos in the public domain, we wouldn't even know where to start with the numbers. 
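Those figures can be verified with a few lines of arithmetic:

```python
hours_per_minute = 300                        # uploads per minute
hours_per_day = hours_per_minute * 60 * 24    # 432,000 hours per day
hours_per_year = hours_per_day * 365          # 157,680,000 ~ 158 million hours
years_of_video = hours_per_year / (24 * 365)  # 18,000 years of footage per year

print(hours_per_day, hours_per_year, years_of_video)  # 432000 157680000 18000.0
```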

However, the even bigger numbers are actually hidden in the private domain from sources like broadcasters, surveillance cameras, GoPros, bodycams, smart devices, etc. We are recording videos at an unprecedented pace and scale. 

There is one word to describe this phenomenon - BIG!

Which brings us to Video Big Data - or should I say the lack of it. Even the term "Video Big Data" is rarely heard. This stems from the inability to extract video data and to make sense of it. But there is so much information embedded inside videos that is waiting to be discovered.  

So the real question is... how can we extract value from videos?

The problem is that video is the most difficult medium to work with. There are a few reasons why: 

  • It is very difficult to extract the various elements of video data (speech, objects, faces, etc.). 
  • Each video element requires a different data extraction technique.
  • It is very difficult to make sense of video data because of its unstructured nature.

But there is hope yet. We will examine how we can tackle these problems and extract value from video big data in the next article.

Launch Announcement - “Translated A.I. Video Search” to break Language Barriers for Video Search in Washington D.C.

Washington DC, 5 March 2018: Babbobox officially announces the launch of the World’s First “Translated A.I. Video Search” at the Microsoft Tech Summit held in Washington DC, United States.

Humans are not only divided geographically, but also by language. Today, we are lifting this language barrier and allowing video search in a language that you do not understand.

Imagine you are doing research on Japanese culture and the only language you know is English. How would you search videos that are in Japanese? The simple answer is, you can’t. That’s because even the best search engines today can only search for words in the same language that you enter. Meaning, if you key in English, there will not be any results because the videos are in Japanese.

Language is the BIGGEST Search barrier today.

What our “Translated A.I. Video Search” does is allow you to search in another language. Meaning, you can search a Japanese video in English (or any other language that you choose) and we will bring you to exactly where the word is said in the video. We can do that across 600 different language pairs.
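Conceptually, translated search inserts a translation step between the query and the time-coded transcript index. A toy sketch with a hard-coded dictionary standing in for a real machine translation service (the dictionary, transcript and function names are all invented for illustration):

```python
# Toy English->Japanese dictionary - a stand-in for a real translation service.
EN_TO_JA = {"library": "図書館", "culture": "文化"}

# Time-coded Japanese transcript: word -> seconds where it is spoken.
TRANSCRIPT_JA = {"文化": [12.0, 95.5], "図書館": [40.0]}

def translated_search(query_en):
    """Translate an English query, then look it up in the Japanese transcript."""
    translated = EN_TO_JA.get(query_en.lower())
    if translated is None:
        return []
    return TRANSCRIPT_JA.get(translated, [])

print(translated_search("culture"))  # [12.0, 95.5]
```

The real pipeline must also handle morphology, synonyms and ranking across hundreds of language pairs, but the query-translation step is the key idea.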

What this means is that videos in Japanese are no longer limited by the language barrier, and the knowledge within these videos is now available not just to watch, but also to search.

This lets you unleash your video library’s true potential by allowing your audience to search your videos in their own languages. In the process, we also automatically create a massive amount of Video SEO in multiple languages, allowing other search engines to index and search your videos!

From the extracted video data, we use various NLP (Natural Language Processing) techniques to transform the unstructured big data into a language that you understand, which you can then analyse further, turning data into intelligence.

As of today, our Search Engine has the following languages supported:

  • Speech Recognition - 12 languages
  • Video OCR - 26 languages
  • Documents - 100+ languages
  • Translated Search – 600+ language pairs  

About Babbobox (website: www.babbobox.com)
Babbobox enables organisations to unleash the potential value in their digital assets by using A.I. Search. Babbobox developed four World’s First breakthrough A.I. solutions, including our Unified Search Engine, which combines numerous advanced technologies like Speech Recognition, Video OCR, Image Analysis, Artificial Intelligence, Translation and Enterprise Search. We are the only solution today that empowers you with the ability to index and search across all digital assets (documents, images, audio and videos) on a single platform. We then analyse the extracted unstructured big data using various Natural Language Processing (NLP) techniques.

We transform your unstructured data into intelligence. Made possible by A.I.

To find out more, please CONTACT US

Bringing A.I. Video Search to DC

mstechsummit_babbobox

Super excited about bringing our A.I. Video Search to Washington D.C. at the Microsoft Tech Summit. We hope to do Asia and Singapore proud (as it looks like we are the only ones)!

On top of that, we are also super excited about the announcement that we will be making during Tech Summit. We believe it's another WORLD'S FIRST! This will bring the world closer, in terms of knowledge, language and data.

If you think our ability to search 7 video elements (speech, text, objects, motion, faces, emotions, offensive content) is awesome...

What we are going to announce next will blow you away! Watch this space!