Hellbender has been testing its robotic navigation system using an in-house mobile platform designed to operate on our manufacturing floor. At the center of the system is Hellbender's Robotic Navigation kit, including our Stereo Camera, which provides depth sensing and real-time edge computing to enable autonomous movement, as well as our Motherboard, which expands capability and functionality for all of our processing needs.
Converting Stereo Depth into Navigation Data
Using stereo vision and onboard processing, the camera generates a 3D point cloud. For some of our preliminary testing, we sliced this data into a narrow horizontal band to mimic traditional lidar output. This data format integrates directly with the ROS navigation stack, including frameworks such as Nav2, making it easier to implement standard localization, path planning, and obstacle avoidance routines.
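For readers curious what that slicing step looks like, here is a minimal sketch of one way to collapse a 3D point cloud into a 2D lidar-like range array. This is an illustration, not Hellbender's actual pipeline: the function name, band limits, and bin count are all assumptions, and a real ROS integration would publish the result as a `sensor_msgs/LaserScan` message rather than a plain list.

```python
import math

def slice_to_scan(points, band=(0.10, 0.30), n_bins=360, max_range=8.0):
    """Collapse a 3D point cloud into a 2D lidar-like scan (illustrative only).

    points : iterable of (x, y, z) tuples in the sensor frame, metres.
    band   : (z_min, z_max) horizontal slice of the cloud to keep, metres.
    Returns a list of n_bins ranges: the minimum distance seen in each
    angular bin, or max_range where the bin saw no points.
    """
    z_min, z_max = band
    ranges = [max_range] * n_bins
    for x, y, z in points:
        if not (z_min <= z <= z_max):
            continue  # point lies outside the horizontal band
        r = math.hypot(x, y)
        if r == 0.0 or r > max_range:
            continue  # degenerate or out-of-range point
        # Map the bearing angle [-pi, pi) onto a bin index [0, n_bins).
        bin_i = int((math.atan2(y, x) + math.pi) / (2 * math.pi) * n_bins) % n_bins
        ranges[bin_i] = min(ranges[bin_i], r)
    return ranges
```

Keeping only the minimum range per bin mirrors what a planar lidar reports: the nearest obstacle along each bearing, which is exactly what costmap-based obstacle avoidance in the ROS stack consumes.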
The camera includes an IMU, pattern projector, and environmental telemetry, all processed in real time through an embedded Hailo-8 AI accelerator and Raspberry Pi Compute Module. This onboard processing allows the robot to operate without additional cloud-based computation or external localization systems.
Proof of Concept with Onsite Testing
Hellbender’s team, including summer interns, deployed the system in a hallway navigation scenario. They used ROS tools to establish a navigation plan and executed repeated tests with consistent results.
The robot platform carried all components onboard, including compute, power, and sensor systems. This setup demonstrated that the platform could operate in a closed loop without external infrastructure or overhead.
“This is an important step in validating our ability to move things around autonomously in our space. It’s also been a great training ground for the team,” said Adela Wee, Chief Innovation Officer.
Planned Deployment on the Production Floor
Following successful hallway tests, Hellbender plans to expand deployment of this navigation system throughout its production facility. The goal is to support internal logistics such as moving materials, transporting in-process assemblies, and bridging operations between workstations.
The system’s modularity allows it to be integrated into different mobile platforms depending on payload, form factor, and task requirements. By relying on stereo vision instead of lidar, Hellbender also reduces system complexity and cost without sacrificing performance.
The Role of the Stereo Camera
Hellbender’s Stereo Camera provides the key sensing and processing required for the robotic navigation system:
- Visual-inertial data fusion for stable pose estimation
- Real-time point cloud generation for mapping and obstacle detection
- Onboard processing with no dependence on cloud latency or remote servers
- IP54-rated design suitable for indoor industrial environments
The camera was designed specifically for edge computing applications and supports ROS integrators looking to implement vision-based autonomy without building the pipeline from scratch.
Next Steps
Hellbender continues to refine the navigation system with new features, tighter integration with factory systems, and broader applications across logistics and inspection workflows. The flexibility of the stereo camera platform allows these systems to be tailored to specific use cases and adapted as needs evolve.