Four New Features in Lobe

Header image for blog post "Four New Features in Lobe"

Today we are excited to introduce a new version of Lobe that will open a new range of possibilities and projects that can be built with machine learning. This new update introduces four major improvements—the ability to select which camera you want to use in Lobe, new export formats, accelerated GPU training, and performance improvements throughout the app.

One of our biggest ambitions while building Lobe is to spark creativity in everyone to find new ways to use machine learning to solve problems and automate tasks. This new version of Lobe is a major step towards making machine learning easier to use.

Camera Selection

When we launched Lobe in October, we focused on making data collection and labeling as easy as possible. We simplified one of the most tedious processes of crafting a machine learning model and made it as easy as the click of a button to quickly collect and label bursts of images.

However, data collection happens in more places than just in front of your computer. And that’s why today, we are adding the ability to select any connected camera right inside Lobe, making data collection more seamless than ever. Now you can take advantage of Lobe’s built-in functionality and combine it with the best camera source for your project, whether that’s an external webcam, a microscope, or even a virtual camera.

The ability to use any camera source in Lobe opens a new range of possibilities.

One of our favorite features in Lobe is the ability to try out your model right inside the app using the Play tab. Being able to use different cameras will make testing your model better by allowing you to use the same camera you used to collect your images.

The ability to select the camera you want to use will open a whole new range of possibilities and machine learning models you can create with Lobe. Check out our post where you can learn more about using camera sources in Lobe and get ideas for your project.

New Export Formats

Building a machine learning model is just the first step when you are solving a problem, prototyping an idea, or even just learning. The next step is to export your model and use it in your app.

Lobe was designed as a universal app from the ground up, so whether you are on a Mac or PC, you can use it to build machine learning models. Exporting your model should not be tied to a specific platform either, which is why we are excited to expand Lobe’s export capabilities to include TensorFlow.js and ONNX. These join our already available export formats—CoreML, TensorFlow, TensorFlow Lite, and our local Lobe API.

CoreML, Local API, TensorFlow.js, TensorFlow Lite, ONNX, and TensorFlow.

In the end, a machine learning model is ultimately used to solve a problem or automate a task, making our lives easier. Because of that, a model needs to be adaptable, performant, and easy to integrate. With the addition of these export formats, Lobe makes it easier to deploy a model on any device, whether it’s a phone, a Raspberry Pi, a website, or even a server.
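As an illustration of what using an exported model can look like, the sketch below runs an ONNX export with the open-source ONNX Runtime. It is a minimal, hypothetical example: the file name, the 224×224 RGB input size, and the [0, 1] float scaling are assumptions about a typical image classifier, not guaranteed conventions of Lobe's exports, so check the files and sample code Lobe generates alongside your model.

```python
import numpy as np

# Hypothetical path; Lobe writes a .onnx file when you export to ONNX.
MODEL_PATH = "exported_model.onnx"

def preprocess(image: np.ndarray) -> np.ndarray:
    """Scale uint8 RGB pixels to float32 in [0, 1] and add a batch axis.

    Assumption: the exported classifier expects a (1, H, W, 3) float batch.
    """
    return image.astype(np.float32)[np.newaxis] / 255.0

def predict(image: np.ndarray):
    """Run one image through the exported model with ONNX Runtime."""
    import onnxruntime as ort  # pip install onnxruntime
    session = ort.InferenceSession(MODEL_PATH)
    input_name = session.get_inputs()[0].name
    return session.run(None, {input_name: preprocess(image)})
```

The same preprocessing idea carries over to the other formats: a TensorFlow.js export runs in the browser, a TensorFlow Lite export on a phone or Raspberry Pi, and CoreML on Apple devices.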

Accelerated GPU Training

Training a machine learning model is at the core of the Lobe experience, and we strive to make it as seamless as possible. That’s why we designed Lobe to automatically and continuously train in the background, so you can focus on improving your dataset, evaluating your model’s results, and testing it out.

To make Lobe even more seamless, this update adds GPU acceleration, making training faster and more efficient. This allows training to recede even further into the background, making the overall experience of building a model more enjoyable. Accelerated GPU training is available on Windows, with macOS coming soon. To learn more about the DirectML work that powers GPU-accelerated training, check out this post.

Lobe will leverage your computer’s GPU to train even faster.

Improved Performance

There is a lot of work happening behind the scenes when you train a machine learning model in Lobe. Hundreds of thousands of iterations are happening in real-time, continuously tuning your model’s neural network to better understand your images.

However, that complex process shouldn’t stop Lobe from remaining fast, responsive, and smooth at all times. In this new update, Lobe is faster and runs smoother in areas like scrolling, animating, and reacting to your interactions.

Lobe is more responsive, and scrolling is now twice as smooth.

Sparking Creativity

Sparking creativity is one of our core beliefs and ambitions, and helping you solve problems with machine learning is part of our mission. By making Lobe faster and more performant, and by expanding its capabilities with more export formats and camera sources, we strive to open a whole new range of possibilities for you and your machine learning projects.

This release also includes a bunch of small improvements, bug fixes, and adjustments that streamline the experience of building a machine learning model, so the app can recede to the background, be transparent, and become the bridge between your project and your ideas.

We can’t wait to see what you do with this new version of Lobe. Download it for free to get started on your machine learning model today. And join the community to share your feedback and see what others are building.
