Search and Rescue System Using Omni-Orientation Mapping Robots
by Matthew Kim
A few words from the participant(s)
What steps did you take to develop your project?
The project has five main parts: an Omni-Orientation Reconnaissance Robot that can traverse rubble; a mapping program that recreates a Reconnaissance Robot's environment; a snakebot that uses the Reconnaissance Robots as modular subunits to cover more ground; a Probability Fields program that estimates the void spaces of a disaster site from structural elements such as load-bearing walls and from the empty spaces between individual maps; and autonomous deployment of the Reconnaissance Robots at entry points. So far I have worked on the first two parts and am now developing the snakebot.
The Omni-Orientation Reconnaissance Robot uses angled drive trains so that it can move in any orientation. It is driven by three 12V DC motors through a series of miter gears, spur gears, and chain-and-sprocket assemblies. I designed it in Fusion 360 and built it primarily from 3D-printed parts and ball bearings. Because the chains have a low coefficient of friction, the robot can climb 30-degree inclines and continues to move even while flipped over. I also have a concept design that uses a single motor with electromagnetic clutches to control multiple drive trains independently of one another; it is intended to reduce the robot's cost, form factor, battery consumption, and weight, since the system depends on deploying robots in large numbers.
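The "move in any orientation" idea can be illustrated with the standard inverse kinematics of a three-wheel omnidirectional base driven by three motors. This is only a sketch for intuition, not the robot's actual angled-drivetrain mechanism: the 120° wheel layout, wheel radius, and body radius below are illustrative assumptions.

```python
import math

def omni_wheel_speeds(vx, vy, omega, wheel_radius=0.03, body_radius=0.1):
    """Map a desired body velocity (vx, vy in m/s, omega in rad/s) to three
    wheel angular speeds (rad/s) for a generic three-wheel omnidirectional
    platform with drive directions spaced 120 degrees apart.
    All dimensions here are illustrative, not the robot's real geometry."""
    speeds = []
    for i in range(3):
        theta = math.radians(120 * i)  # orientation of wheel i's drive axis
        # Project the body velocity onto the wheel's drive axis,
        # then add the contribution from body rotation.
        v = -math.sin(theta) * vx + math.cos(theta) * vy + body_radius * omega
        speeds.append(v / wheel_radius)
    return speeds
```

Two quick sanity checks on this layout: pure rotation commands all three wheels at the same speed, and for pure translation the three wheel speeds sum to zero.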
I created a separate mapping robot that gathers IMU and motor-encoder data to reconstruct its pathway through an L-shaped, rectangular-prism testing environment. I wrote an odometry algorithm that translates the rotational displacement measured by the motor encoders into arc segments that recreate the robot's path. Data from a single LiDAR, mounted perpendicular to the path, was then projected on top of the pathway to produce a full 3D map, with the IMU correcting for inclines. These maps had a volume percent error of approximately 10%.
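The encoder-to-arc step described above can be sketched as a standard arc-odometry pose update. This sketch assumes a two-wheel differential-drive model with a known track width, which is my simplification; the function and variable names are illustrative, not taken from the actual mapping program.

```python
import math

def arc_odometry_step(x, y, heading, d_left, d_right, track_width):
    """Advance a 2D pose (x, y, heading) by one encoder interval, modelling
    the motion as a circular arc (exact if wheel speeds are constant over
    the interval). d_left and d_right are the wheel travel distances derived
    from encoder counts; track_width is the wheel separation.
    The differential-drive model itself is an illustrative assumption."""
    d_center = (d_left + d_right) / 2.0          # distance along the arc
    d_theta = (d_right - d_left) / track_width   # change in heading
    if abs(d_theta) < 1e-9:
        # Degenerate case: straight segment, heading unchanged.
        x += d_center * math.cos(heading)
        y += d_center * math.sin(heading)
    else:
        # Arc segment: move along the chord of a circle of radius r.
        r = d_center / d_theta
        x += r * (math.sin(heading + d_theta) - math.sin(heading))
        y -= r * (math.cos(heading + d_theta) - math.cos(heading))
        heading += d_theta
    return x, y, heading
```

Chaining these updates over successive encoder readings traces out the robot's path; perpendicular range readings (such as the single LiDAR scan line) can then be attached to each pose along the path to build up the 3D map.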