Made with Lobe
Understand architecture, fly a drone, generate flower petals, monitor a well, control music visualizations, identify plants, tune instruments, detect cancer, predict physics, read lips. Explore all the amazing things people are teaching Lobe.
Hand & Face Tracker
Looks at an image of a person and learns to draw boxes around all the hands and faces. Created as a building block to help other Lobe models like Emoji Hands and Face learn better.
Looks at an image of a plant and learns to identify the species. Created to help warn people about harmful plants such as poison oak when hiking in California.
Rose Petal Generator
Looks at photos of thousands of different rose petals and learns to generate completely new, photo-realistic rose petals with similar characteristics. Created by artist Sarah Meyohas for her installation Cloud of Petals that studies the relationship between beauty and technology.
Looks at 3D models of different styles of houses, along with measurements and meta-data, then learns to identify the style of architecture. Created by architect Kyle Steinfeld to enhance future CAD software to help assist architects when designing.
Hotdog / Not Hotdog
Looks at an image and learns to determine if it contains a hotdog or not. Created by Jian Yang to help people identify different types of food.
Looks at an image of a water tank from a NestCam and learns to find the elements of an analog gauge in order to calculate the number of gallons in the tank. Created to allow around-the-clock monitoring and tracking of household water usage.
Listens to a musical instrument being played and learns to identify the type of instrument. Created to allow tuner apps to automatically select the correct instrument configuration without requiring a person to change any settings.
Looks at a frame from a ball being tossed in a physics simulation and learns to predict the path the ball will take as it bounces off walls and eventually comes to rest. Created as an experiment for future work that could, for example, predict where a golf ball will land just by watching a person's swing.
Looks at detailed measurements of flower petals and learns to identify the species of flower. Created to help identify various species of iris using statistical analysis.
Looks at an image of a pour-over coffee filter and learns to calculate how many ounces of coffee are in it. Created to help brew the perfect cup of coffee without needing a scale.
Looks at a video frame from a camera mounted on a drone and learns what direction to fly in order to follow a hiking trail. Created by Alessandro Giusti to allow a drone to autonomously fly along rugged mountain trails to assist with search and rescue missions.
Looks at accelerometer data from a phone as a person performs various gestures, such as flicking up or down, and learns to interpret the movements. Created to allow apps and games to respond to a person’s movement in order to create a more immersive experience.
Looks at an image of a person making a facial expression and learns to turn it into an emoji. Created to allow people to send emotional reactions using the most natural of interfaces, their face.
Looks at an image of discolored skin and learns to distinguish cancerous patches from moles. Created to help assist doctors when diagnosing patients.
Looks at an image of an LED lighting installation and learns to locate the corners of its various elements. Created by lighting design studio Symmetry Labs to help build a 3D map of their complex lighting installations in order to precisely display visuals onto its surface.
Looks at an image of a person holding up a hand sign and learns to turn it into an emoji. Created to allow people to choose from a large number of hand emoji in a quick and fun way.
Looks at a 360° depth map from inside a building and learns to identify its architectural characteristics. Created by architect Kyle Steinfeld to allow future CAD software to provide real-time statistics about a building’s different spatial ratios.
Looks at an image of a person holding up two fingers and learns to draw a box between the tips. Created to allow people to resize elements in future augmented reality interfaces.
Looks at an image of a person holding up their hand and learns to calculate the angle in degrees. Created to allow people to rotate elements in future augmented reality interfaces.
Looks at an image of a person pointing to their face, then learns to locate the tip of the finger and identify the name of the body part. Created to allow people to customize different parts of an augmented reality face mask directly with touch.
Looks at silent video frames of a person speaking and learns to understand what word is being said. Created to allow people to silently mouth questions to future phone assistants without disturbing the people around them.
Looks at an image of an analog well gauge and learns to read out the water table depth in feet. Created to allow the water table level to be monitored and logged quickly by just pointing a phone at the gauge.
Looks at an image of a crib from a NestCam and learns to tell if the baby is sleeping or awake. Created to allow future baby monitors to notify parents when a baby is awake before it starts crying.
Looks at an image of a person holding up a sign language letter and learns to identify the letter being made. Created to allow people to type without using a keyboard.
Looks at an image of water on a stove and learns to determine if the water is still, simmering, or boiling. Created to allow future smart kitchens to automatically regulate temperatures and notify people when to add ingredients.
Looks at a drawing and learns to turn it into an emoji. Created to allow people to find the emoji they are looking for with a quick scribble.