An article by Mike Bartley, T&VS Founder and CEO
Big data refers to extremely large data sets that may be analysed computationally to reveal patterns, trends, and associations, especially relating to human behaviour and interactions. To give some sense of size: analysts predict that by 2020 there will be 5,200 gigabytes of data on every person in the world; Amazon sells 600 items per second; and MasterCard processes 74 billion transactions per year.
My first introduction to big data was when the UK food store Tesco identified a correlation between purchases of beer and nappies, based on the time of purchase and the gender of the buyer. They consequently placed nappies and beer close together in the evenings.
The Three ‘Vs’ Challenge: Volume, Velocity and Variety
More recently, big data sets are also being used to train and test artificial intelligence systems. For example, in autonomous cars we need to train the system to recognise different objects in incoming sensor data, and then test that it does so correctly. To enable this, the data sets need to be prepared, and data preparation is considered one of the main challenges in the era of big data. The major difficulties are the increasing volume, velocity and variety of data in many applications.
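To make the variety challenge concrete, the following is a minimal sketch of a data-preparation step that normalises incoming sensor records to a common schema before training. The field names ("timestamp", "sensor", "value") and the cleaning rules are illustrative assumptions, not taken from any specific pipeline:

```python
def prepare(records):
    """Drop malformed records and normalise the rest to a common schema."""
    clean = []
    for rec in records:
        # Variety: records may arrive with missing or inconsistent fields.
        if "timestamp" not in rec or "value" not in rec:
            continue  # discard malformed input rather than train on it
        clean.append({
            "timestamp": float(rec["timestamp"]),
            "sensor": rec.get("sensor", "unknown"),
            "value": float(rec["value"]),
        })
    return clean

raw = [
    {"timestamp": "1.0", "sensor": "lidar", "value": "0.42"},
    {"value": "3.1"},                       # missing timestamp: dropped
    {"timestamp": "2.0", "value": "0.55"},  # missing sensor: defaulted
]
print(prepare(raw))
```

In a real deployment the same logic would run in a streaming framework to cope with velocity, but the core decision of what to discard, default, or normalise is the same.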
Training Autonomous Vehicles: An Example
Training autonomous vehicles requires large-scale real-world driving data that includes high-definition video to fuel the development of deep learning in internal and external perception systems. T&VS provides a variety of services for big data preparation and in the case of the video data will perform the following steps:
- Annotation of the video and image data to identify, classify and document objects as stationary or moving
- The annotated data will then be used to train machine learning algorithms
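The annotation step above can be sketched as a simple data structure. This is one possible representation of a per-frame annotation, with the schema (class names, bounding-box convention) invented for illustration; real labelling tools define their own formats:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    frame: int    # video frame index
    label: str    # object class, e.g. "pedestrian", "traffic_light"
    bbox: tuple   # (x, y, width, height) in pixels
    moving: bool  # stationary vs moving, per the annotation step above

frame_annotations = [
    Annotation(frame=120, label="pedestrian", bbox=(340, 210, 40, 90), moving=True),
    Annotation(frame=120, label="traffic_light", bbox=(610, 40, 20, 50), moving=False),
]

# Training then consumes (image, annotations) pairs; for example,
# selecting only the moving objects in this frame:
moving_objects = [a.label for a in frame_annotations if a.moving]
print(moving_objects)
```

The stationary/moving distinction matters because a perception system must track moving objects across frames, while stationary ones can anchor localisation.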
Applying Constrained Random
Autonomous systems usually feed the results of their perception systems into decision-making algorithms, and T&VS has verification services in that area too, as outlined in the CAPRI and Robopilot projects. These demonstrate how the constrained random approach borrowed from hardware verification can be adapted to help cover the huge input space for such systems.
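The idea of constrained random generation can be sketched briefly: draw scenarios at random, but reject any draw that violates a declared constraint, so the test set covers the input space while staying within realistic bounds. The scenario parameters, ranges, and the example constraint below are all invented for illustration:

```python
import random

def random_scenario(rng):
    """Draw one driving scenario, redrawing if a constraint is violated."""
    while True:
        scenario = {
            "weather": rng.choice(["clear", "rain", "fog"]),
            "time_of_day": rng.choice(["day", "dusk", "night"]),
            "pedestrians": rng.randint(0, 20),
            "ego_speed_kph": rng.randint(0, 120),
        }
        # Example constraint: dense pedestrian scenes only at low speed.
        if scenario["pedestrians"] > 10 and scenario["ego_speed_kph"] > 50:
            continue  # violates the constraint, so redraw
        return scenario

rng = random.Random(42)  # seeded for reproducible test runs
scenarios = [random_scenario(rng) for _ in range(1000)]

# Every generated scenario respects the constraint:
assert all(s["pedestrians"] <= 10 or s["ego_speed_kph"] <= 50
           for s in scenarios)
```

In hardware verification the same pattern is paired with coverage collection, so the generator can be steered toward scenario combinations not yet exercised.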
In the era of deep learning and artificial intelligence, we need to ensure our big data is suitable to train such systems and we need new approaches to verify such systems.
Find Out More
To find out how T&VS can help with your big data preparation, please Contact Us, or attend Verification Futures 2018 on June 14 (Reading, UK, and online), where these and many other verification challenges will be discussed.