Methane Detection Technologies

Hatchbed has extensive experience working with, developing, and integrating methane detection technologies for industry.

Methane is a powerful greenhouse gas that has come under increasing scrutiny from governments and industry seeking to better control emissions and waste. In the U.S., new regulations under 40 CFR Part 60 Subparts OOOOa and OOOOb, along with Appendix K, tighten the detection target for methane emissions to 19 g/hr.

Hatchbed specializes in integrating state-of-the-art Optical Gas Imaging (OGI) cameras and Tunable Diode Laser Absorption Spectroscopy (TDLAS) sensors as payloads onto robotic platforms and drones. This includes leveraging DJI's various SDKs, Boston Dynamics' Spot SDK, Sightline Applications' API, and Sierra Olympia's Windviewer API.

Hatchbed has developed state-of-the-art optical methane detection software that can robustly detect methane leaks from generic OGI camera sensor streams. Motion compensation, novel flow encoding, and custom-trained convolutional neural networks are leveraged to automatically and reliably detect leaks in real-world industrial environments. This software can be deployed on robotic platforms like Boston Dynamics' Spot robot for real-time detection, or used as a post-processing step on captured video for inspection reports.

Raw video (top left) courtesy of Sierra Olympia.
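To make the motion-compensation idea concrete, here is a deliberately simplified sketch (illustrative only, not Hatchbed's actual pipeline): it estimates the global camera translation between two frames via phase correlation, aligns the previous frame to undo that motion, and thresholds the residual so that only independently moving content, such as a drifting plume, survives.

```python
import numpy as np

def estimate_shift(prev, curr):
    """Estimate integer (dy, dx) camera translation via phase correlation."""
    f_prev = np.fft.fft2(prev)
    f_curr = np.fft.fft2(curr)
    cross = np.conj(f_prev) * f_curr
    cross /= np.abs(cross) + 1e-9          # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape
    if dy > h // 2:                        # wrap to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

def residual_motion(prev, curr, thresh=10.0):
    """Mask of pixels still changing after camera motion is removed."""
    dy, dx = estimate_shift(prev, curr)
    aligned = np.roll(prev, (dy, dx), axis=(0, 1))   # undo camera motion
    residual = np.abs(curr.astype(float) - aligned.astype(float))
    return residual > thresh               # candidate plume pixels
```

In a production system the final thresholding step would be replaced by a trained classifier operating on flow features; this sketch only isolates the residual-motion signal that such a classifier would consume.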

Hatchbed's online Gas Detection Portal allows users to upload OGI videos from any generic OGI sensor. These videos can be collected using handheld or airborne assets. The portal will:

  • Automatically detect the presence of methane leaks
  • Use metadata (if available) to identify the time and location of leaks
  • Create clips and video links to detected leaks
  • Allow sensitivity to be increased or decreased to accommodate end-user requirements
  • Provide detection reports for download in common formats

If you have any questions regarding the methane detection technologies and services we provide, please reach out to us.

Simultaneous Localization and Mapping (SLAM)

Hatchbed has extensive experience integrating and customizing lidar-based Simultaneous Localization and Mapping (SLAM) solutions for robotic systems.

SLAM is a sophisticated algorithm that enables robots to map their environment while simultaneously determining their location within that map. This dual capability allows robots to navigate in previously unvisited areas and adapt to changes within their surroundings.
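A deliberately tiny 1-D pose-graph example (with made-up odometry numbers, purely for illustration) shows the core mechanism: a robot drives out and back, its odometry accumulates drift, and a single loop-closure constraint (recognizing its starting point) lets a least-squares solve pull the trajectory back into agreement.

```python
import numpy as np

# Odometry increments (meters): the robot drives out and back to its start,
# but each measurement carries some drift (true increments are 1, 1, -1, -1).
odom = np.array([1.10, 1.05, -0.95, -0.90])

# Dead reckoning alone accumulates the drift: the final pose ends up 0.3 m off.
dead_reckoned = np.cumsum(odom)

# Pose-graph least squares: unknowns are poses x1..x4 (x0 anchored at 0).
# Each row of A encodes one linear constraint:
#   - odometry edges:    x_{i+1} - x_i = odom[i]
#   - loop closure edge: x4 - x0 = 0   (weighted heavily: we trust it more)
n = len(odom)
w = 10.0                               # loop-closure weight
A = np.zeros((n + 1, n))
b = np.zeros(n + 1)
for i in range(n):
    if i > 0:
        A[i, i - 1] = -1.0
    A[i, i] = 1.0
    b[i] = odom[i]
A[n, n - 1] = w                        # loop closure: w * (x4 - 0) = 0
b[n] = 0.0

poses, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The solve spreads the accumulated 0.3 m of drift across all four odometry edges, pulling the final pose back to (nearly) the start. Real 2-D/3-D SLAM back ends solve the same kind of problem with nonlinear pose constraints and many thousands of edges.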

SLAM solutions using a single-plane lidar can provide an absolute, map-based position estimate for a robot from a single low-cost sensor. This approach is especially effective in indoor environments or other environments with flat, uniform surfaces, like paved industrial sites.

In outdoor environments, SLAM solutions are often needed to supplement GPS-based localization: GPS reliability is compromised wherever a clear view of the sky is obstructed, making map-based localization a necessary backup.

SLAM is a mature technology with a long history of academic research and open source implementations. Libraries such as slam_toolbox can be readily integrated into ROS based robotic platforms. However, deploying SLAM in production environments can still pose challenges. Ensuring the accuracy and correctness of loop closure detection is crucial to preventing corruption of the map and localization failure.
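As an illustration, a slam_toolbox deployment on ROS 2 is typically configured through a YAML parameter file. The sketch below shows a handful of the parameters that commonly need tuning; the names follow slam_toolbox's shipped example configs, but the values are placeholders to adjust per platform and environment.

```yaml
# Sketch of a slam_toolbox parameter file (ROS 2); values are illustrative.
slam_toolbox:
  ros__parameters:
    mode: mapping                     # build a map while localizing
    odom_frame: odom
    map_frame: map
    base_frame: base_footprint
    scan_topic: /scan
    resolution: 0.05                  # occupancy grid cell size (m)
    max_laser_range: 20.0             # trust lidar returns out to this range (m)
    minimum_travel_distance: 0.5      # process a scan every 0.5 m of travel
    # Loop closure: too permissive corrupts the map; too strict misses closures.
    do_loop_closing: true
    loop_search_maximum_distance: 3.0
    loop_match_minimum_response_fine: 0.45
```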

Depending on the environment, such issues may be resolved with parameter tuning, or they may require tailored solutions that take into account environment-specific landmarks using lidar or other sensor modalities. Alternatively, strategies involving human oversight during the initial map-building phase can be employed, creating a reliable static map that the SLAM algorithm can then use for precise localization.
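The map-then-localize workflow maps directly onto slam_toolbox's localization mode: a map is serialized during a supervised mapping run, then loaded as a static prior against which the robot localizes. A minimal sketch, with parameter names from slam_toolbox's example localization config and a placeholder map path:

```yaml
slam_toolbox:
  ros__parameters:
    mode: localization
    map_file_name: /path/to/serialized_map   # pose graph saved from the mapping run
    map_start_pose: [0.0, 0.0, 0.0]          # initial pose estimate in the map frame
```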