At Discovery Robotics, we are frequently asked how the FX250 floor cleaning robot compares with machines using the Brain operating system (OS). After all, when looking for a robot to make your cleaning operations more productive, convenient, and safer, you consider all options! These days, the Brain OS is showing up as an add-on to equipment from the traditional manufacturers: Tennant, Nilfisk, Karcher, Minuteman, ICE, etc. While these machines differ in their capacities and their mechanical and electrical systems, there is no differentiation in autonomy among them. Thus, we’ll lump them all together and compare the FX250’s autonomy against the Brain. This’ll be fun, so let’s go!
When you consider new technology, you might be tempted to focus on the external features, but a robot is only as good as its core. When it comes to autonomous robots, you should pay attention to the sensor suite, the software and AI, and the navigation and localization features.
The following is a review of a battle between old and new, with a clear winner. The FX250 was designed to be a fully autonomous floor cleaning robot from the ground up. With its optimally placed 3D LiDAR and supporting sensor suite, self-directed path planning and obstacle avoidance, and one-pass mapping, the FX250 clearly wins the battle for autonomy and ease of use.
Meet the Competitors: The FX250 and The Brain OS
In the first corner is Discovery Robotics’ FX250 – a fully autonomous commercial floor cleaner. In the other corner, you have Brain Corporation’s Brain OS – a robust sensor/AI combination that bolts onto established floor cleaning machines. On the surface, the FX250’s AI and the Brain OS share many functional characteristics. Both need to know where they are with respect to some created map, referred to as localization. For example: if you are in your kitchen, no matter where you’re standing, you know where the refrigerator is, how far away the sink is, etc. While you do this by taking information in with your eyes, ears, and nose, the Brain OS and FX250 use sensors.
They also both must follow some type of path from a starting point to an end goal, referred to as navigation. Think of this as similar to your brain plotting your way through a grocery store – noting the places you need to travel to pick up the items on your list to complete the task.
Finally, both technologies must avoid obstacles not identified on a prepared map, and safely perform some useful function along the way. While both the FX250 and the Brain OS must do the same job, differences arise in how the two products accomplish obstacle avoidance and in which environments they successfully operate.
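The navigation idea in the grocery-store analogy, plotting a route through a known map from a start to a goal while steering around obstacles, can be sketched with a toy grid search. This is purely our illustrative example (a breadth-first search over a grid, not either product's actual planner):

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over a grid map: 0 = free floor, 1 = obstacle.
    Returns the length of the shortest path in steps, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None

# A 4x4 "store" with a shelf block in the middle; plan from corner to corner.
floor = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
print(shortest_path(floor, (0, 0), (3, 3)))  # 6 steps around the shelf
```

Real planners work over much richer maps and cost functions, but the core task, finding a feasible route between known points on a map, is the same.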
Localization: LiDAR, Cameras, and More
Let’s start with the Brain OS. It uses a 2D scanning LiDAR (Light Detection And Ranging) sensor, attached near the bottom of the robot, as the primary means of creating a localization map. A LiDAR sensor emits beams of light to “see” how far away obstacles are and map out a given space. LiDAR sensors can be 1D, 2D, or 3D, and a sensor either employs a point-and-shoot mode, like the range finder in your phone camera, or a scanning mode, like the ones used on mobile vehicles.
The Brain OS’s map is then enhanced with computer vision reference points from three (3) side and forward-facing 3D cameras. These cameras enable object detection and avoidance. A tilted, forward-facing 2D LiDAR scanner detects obstacles and any dangerous drop-offs in front of the machine.
The FX250 uses similar technologies, but with significant differences – especially the location and power of the LiDAR scanner. First, the FX250 has its LiDAR mounted on top of the main chassis – giving it a full 360-degree view of the environment. Remember when we said LiDAR sensors can be 1D, 2D, or 3D? While the Brain OS has a 2D LiDAR sensor – meaning it can sense objects on the X and Y axes, the FX250’s 3D LiDAR sensor can view its surroundings on all three X, Y, and Z axes. This allows the FX250 to view everything from the floor to the ceiling for up to 100 meters – giving it a complete picture of its environment.
In addition to 3D LiDAR, the FX250 has a full suite of cameras and support sensors that put its mapping and obstacle avoidance over the top. There are five (5) 3D vision cameras, giving the FX250 a full 360-degree view of its environment. This is imperative for monitoring obstacles and other potential dangers, and it gives the robot the ability to back up safely. It also has nine (9) ultrasonic sensors, which detect glass and other transparent surfaces. With these, the FX250 can navigate around even the cleanest, most invisible glass windows and doors. Its laser cliff sensors safely detect and avoid drop-offs like stairs in both the forward and rear directions. Finally, the FX250 uses a 9-degree-of-freedom inertial measurement unit (IMU) that provides better position estimation in dynamic environments.
That was a lot of information! Let’s review:
| Function | FX250 | Brain OS |
| --- | --- | --- |
| Overall localization and movement detection | 3D LiDAR with 16 planes and 100-meter range; 1048-point laser wheel encoder; 9-degree-of-freedom inertial measurement unit (IMU) | 2D LiDAR restricted to about 180 degrees; local assistance from 3D depth cameras |
| Obstacle detection and avoidance | 3D LiDAR; 9 ultrasonic sensors with 360-degree pattern; 5 3D depth cameras with 360-degree coverage | Two 2D LiDAR scanners (one level, one tilted); 2 side and 1 forward-looking 3D cameras |
| Drop-off (cliff) detection | 4 laser sensors (2 front, 2 rear) | Shared use of the tilted LiDAR |
Navigation: More Convenient than “Teach and Repeat”
A Brain OS-equipped system uses a “teach and repeat” technique to create a map and a path. In this technique, a user drives the machine along the desired path, and the system creates a map and a set of markers for future reference. The machine will then repeat this exact path…including any operator errors, missed areas, unnecessary overlaps, or drift over time. Should the task change in the future, the entire path will need to be retaught. During operation, the Brain OS can detect objects, navigate around them, and return to its taught path. However, if the object is too large or immovable, the system will stop. Since it cannot create a significantly new path around the blockage or back up to search for additional paths around the object, it sends a call for human intervention. Brain calls this an “assist.” Good marketing!
The FX250 takes a markedly different approach to mapping and defining a path. The FX250 creates a 3D and a 2D map of the entire area. An operator walks the FX250 through the area to be mapped, with little need to trace an exact path, since the 3D LiDAR is picking up everything within 100 meters. The FX250 then automatically creates the optimal path for cleaning. This eliminates the need for an operator to walk every inch of the path, and it means the path is optimized for efficiency, with no missed areas and no excessive tool overlap. The FX250’s 3D maps are stored in the cloud and available to all the robots in a facility. Objects can easily be added, deleted, or moved digitally, which removes the need for an operator to walk the machine around the facility to remap the area. Paths can be re-optimized remotely to accommodate changes to the facility or cleaning tasks.
The FX250 also supports the “teach and repeat” capability for use in specific situations such as long hallways in hospitals and office buildings. It also supports a “teach and fill” capability. This capability allows the operator to travel around an area while the FX250 automatically creates paths and cleans within the defined boundary.
Environments: Where Are These Players Best Suited?
The commercial space has a significant impact on the performance of any robotic system. For example, with the FX250 and Brain OS systems, the availability of mapped reference points (furniture, structural features, walls, windows, etc.) is a critical element for both localization and the resulting navigation. So where would the FX250 work best?
As mentioned above, the FX250 uses a 100-meter-range 3D LiDAR sensor to produce an enormous number of reference points in a 3D map. This is called a point cloud, a 3D rendering of the facility in color. (If you aren’t familiar with point clouds, look them up; they’re beautiful!) This means the FX250 can operate in wide open environments, far from walls or other structural features. In fact, an FX250 regularly cleans the basketball court at halftime at University of Pittsburgh games. We don’t know of any other robot that can be set down in the middle of a basketball court and know where it is, what direction it’s facing, and what it’s supposed to do next.
When items such as furniture or plants change location or are removed, there is little relative change in the number of 3D reference points available for localization. This enables the system to reliably localize with up to a 60 percent change in reference points.
The FX250’s adaptability is superior to that of a system using a 2D scanner near the floor, which readily loses localization with even a relatively small percentage of change in reference points. While Brain’s computer vision systems can assist in localization, their ability to identify reference points is limited. Overall, the FX250 outperforms in wide open spaces and dynamically changing environments. Think airports, convention centers, indoor parking garages…even basketball courts, though you’d better not leave a mark!
Additional Benefits of The FX250: Autonomous System and Tool Control
The FX250 is a fully autonomous system. In addition to optimally planning paths, it automatically creates routes between task areas and multiple home locations. This means that, in most cases, operators do not need to physically move the system from one task area to another, creating a seamless, integrated, end-to-end cleaning plan.
The FX250 can also link tasks, automatically moving from a completed task to the next assignment. Operators can create multiple task lists, then simply select the list for the day and let the FX250 operate on its own. In yet another feature of its autonomy, the FX250 tracks where it has cleaned and notes where people or obstacles may have caused it to skip. It automatically returns to the areas it missed to complete the job. Additionally, it includes a host of recovery behaviors enabling the FX250 to self-correct from a “stuck robot” condition – for example, by automatically moving backwards.
Finally, the FX250 is also autonomous with its onboard tool options. It integrates with the installed tool to control its actions based on location. For example, the FX250 starts its brushes only after it is moving to prevent damage to floor surfaces by spinning in place too long. It alerts the operator via text or email when a bag or hopper needs to be emptied or a brush needs to be replaced. When moving between tasks, the FX250 suspends cleaning operations and pulls its tool up and out of the way. With the FX250’s unique ability to use interchangeable tools, the main chassis automatically identifies which tool is installed and modifies its operation for maximum performance.
Let’s summarize the FX250 and Brain OS features:
| FX250 Feature | Brain OS Feature | Advantage |
| --- | --- | --- |
| 3D LiDAR scanner | 2D LiDAR scanner | FX250 – operates in larger, more open spaces and adapts to a higher percentage of change in dynamic environments |
| Fully autonomous, optimal path planning with multiple cleaning pattern options | “Teach and repeat” | FX250 – higher cleaning efficiency; the operator does not need to reteach after most facility or task area changes |
| Automatic global re-planning for obstacles or path blockages, including a back-up capability | Limited obstacle avoidance; operator assistance required when the path is blocked | FX250 – less operator involvement needed to intercede with obstacles and path blockages in dynamic environments |
| Missed-area cleaning | – | FX250 – more complete cleaning by returning to areas once obstacles may have moved |
| “Teach and fill” capability | – | FX250 – operator flexibility to address unique needs without mapping or planning |
| Autonomous task-to-task movement | – | FX250 – reduces the need for operator involvement |
| Autonomous tool control | – | FX250 – controls its tools, suspends cleaning between tasks, and prevents damage to flooring |
In summary, the Brain OS has allowed manufacturers to offer autonomy much faster and at lower cost than developing it internally; building autonomy is hard. Discovery Robotics has spent years designing and optimizing the FX250’s hardware and software to get it just right. Clearly, the FX250’s sensing and AI features make it a superior robot. The FX250 can operate in more spaces and complete more tasks with greater efficiency than machines employing the Brain OS.
So, in the future, anyone who asks us about the differences between the FX250 and the Brain OS machines will get a complimentary copy (…more likely a link) of this light reading!