Last week, at their annual I/O developer conference in California, Google unveiled their new software development kit, ML Kit, which enables developers to integrate machine learning models into their apps. The software, which sits under Google’s Firebase brand, offers several pre-trained models that can recognise text, detect faces, identify landmarks, scan barcodes and label images both online and offline.
Although it’s early days, the announcement shows that big technology companies are interested in providing developers with tools to use machine learning in their products. This heralds an exciting time for developers, and we can already see some incredible advances in how machine learning is helping solve real-world commercial issues.
There are several ways we’ll use the technology at DigitalBridge. Specifically, it will accelerate our ability to embed machine learning models into our apps and allow us to experiment with models to create proofs of concept very quickly.
To illustrate the impact ML Kit will have at DigitalBridge, we’ve listed three key features below:
- First of all, it’s cross-platform, which means we can develop our new augmented reality app for either iOS or Android, as ML Kit supports both.
- Secondly, it’s pretty quick to run on a mobile device. This is particularly important as it means we can label an image in real time, on-device, without the need to query the cloud. It will also let users scan their rooms offline, since on-device detection works much the same without a connection.
- Finally, it supports TensorFlow Lite. This means that, if we’re not satisfied with the set of labels provided by ML Kit, we can upload our own network and plug it into our apps very easily, allowing us to identify windows and doors in real time, for example.
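To make the custom-model point concrete, here’s a hedged sketch of the post-processing step a custom network needs: turning its raw output scores into labels like “window” or “door”. The label set, scores, and confidence threshold below are hypothetical illustrations, not ML Kit’s actual API or our production model.

```python
import math

# Hypothetical label set for a custom interiors model. ML Kit's built-in
# labeller doesn't know domain-specific labels like these, which is exactly
# why we'd plug in our own TensorFlow Lite network.
LABELS = ["wall", "window", "door", "floor"]

def softmax(scores):
    """Convert the network's raw output scores into probabilities."""
    exps = [math.exp(s - max(scores)) for s in scores]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

def top_label(scores, threshold=0.5):
    """Return the most likely label, or None if the model isn't confident."""
    probs = softmax(scores)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return LABELS[best] if probs[best] >= threshold else None

# Example: raw scores as a custom model might emit for one image region.
print(top_label([0.2, 3.1, 0.4, 0.1]))  # prints "window"
```

The threshold is the interesting design choice: below it we’d rather report nothing than mislabel part of a room, since a wrong “door” would corrupt the downstream floor plan.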
The standout benefit of ML Kit, however, is the amount of time it will save. This will allow us to make the most of our in-house expertise and focus on making our room planning and room visualisation software even better.
With the launch of ML Kit, exciting developments are now possible. For example, we could use the on-device face detection in our scanning app to make sure that our floor plan estimation algorithm excludes pixels belonging to a person when building the point cloud. Thanks Google!
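The masking idea above can be sketched in a few lines. This is an illustrative toy, not our actual pipeline: it assumes a detector has already reported a person as a bounding box, and uses a simple pinhole back-projection with made-up camera parameters to show how those pixels would be dropped before the point cloud is built.

```python
def in_box(x, y, box):
    """box = (left, top, right, bottom), half-open on the right/bottom edges."""
    left, top, right, bottom = box
    return left <= x < right and top <= y < bottom

def depth_to_points(depth, person_boxes, fx=1.0, fy=1.0, cx=0.0, cy=0.0):
    """Back-project a depth image (list of rows) into 3D points with a
    pinhole camera model, skipping any pixel inside a detected person's box.
    fx, fy, cx, cy are placeholder intrinsics for the sketch."""
    points = []
    for y, row in enumerate(depth):
        for x, z in enumerate(row):
            if z <= 0 or any(in_box(x, y, b) for b in person_boxes):
                continue  # no depth reading, or pixel belongs to a person
            points.append(((x - cx) * z / fx, (y - cy) * z / fy, z))
    return points

# Toy 3x3 depth image; the centre pixel is covered by a detected person.
depth = [[1.0, 1.0, 1.0],
         [1.0, 2.0, 1.0],
         [1.0, 1.0, 1.0]]
cloud = depth_to_points(depth, person_boxes=[(1, 1, 2, 2)])
print(len(cloud))  # prints 8 -- the person pixel never reaches the cloud
```

The payoff is that a transient person in frame never becomes spurious geometry, so the floor plan estimate stays clean without any manual clean-up step.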