Beginner's Guide to Building an Outdoor Robot
My name is Shannon, and I build outdoor robots for a living.
My days are spent running sensor tests, gathering data from tons of sensors, and figuring out how to make hundreds of hardware components work together to make a great robot.
So, it's safe to say I'm familiar with the robotics space. Let's break down what goes into building a great outdoor robot - for security applications or otherwise.
Only 200,000 years in the making
If you've ever watched 15 minutes of Animal Planet, you're well-aware of the concept of animal instinct.
Humans and animals are programmed to absorb and analyze thousands of pieces of sensory data and information every second. It's estimated that the human brain can process 400 billion bits of information in one second.
Humans - in our current form - have walked the earth for 200,000 years. Our ability to rapidly process information didn't happen overnight; it came from millions of years' worth of evolution and natural selection.
This raises the question: how can we create objects that think like humans?
Furthermore, can we beat evolution's record and build it in under 200,000 years?
How do you make an object think like a human?
Let's start with the basics: how do humans take in all of the data we analyze moment to moment?
Thanks to nature, we have our own built-in sensors that gauge our environment: light, temperature, sound waves, and the body language of others. They also help us identify threats and dangerous situations.
Humanizing robots with sensor data
To bring these same capabilities to a robot, we need to recreate these sensors with hardware components:
- Cameras powered by video analytics
- Wheel encoders to determine the robot's speed throughout a space
- Lidar sensors to determine spatial awareness
- Thermal sensors to determine temperature
- Speed sensors to detect the pace of other objects
- Other helpful sensors include GPS, radar, sonar, sound-sensitive mic arrays, smoke detectors, and humidity/water sensors
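To make the list concrete, here is a minimal sketch of how sensor readings might be represented in software, using a wheel encoder as the example. All names and constants here are hypothetical, not taken from any real robot stack:

```python
from dataclasses import dataclass


@dataclass
class Reading:
    """One timestamped measurement from any sensor on the robot."""
    sensor: str       # e.g. "lidar", "thermal", "wheel_encoder"
    timestamp: float  # seconds since robot boot
    value: tuple      # raw measurement; shape depends on the sensor


class WheelEncoder:
    """Toy wheel encoder: converts tick counts into linear speed."""
    TICKS_PER_REV = 1024           # hypothetical encoder resolution
    WHEEL_CIRCUMFERENCE_M = 0.5    # hypothetical wheel size

    def speed(self, ticks: int, dt: float) -> float:
        """Return speed in m/s given `ticks` counted over `dt` seconds."""
        revolutions = ticks / self.TICKS_PER_REV
        return revolutions * self.WHEEL_CIRCUMFERENCE_M / dt
```

With 2048 ticks in one second, the toy encoder above reports two wheel revolutions, or 1.0 m/s given the 0.5 m circumference.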
So, what can a security robot actually do?
Essentially, we need to provide the robot with tools that allow it to answer some key questions:
- What am I seeing?
- Where am I?
- Where should I go next?
- Are there any anomalies in my path?
The right sensor = the right data
The process for choosing the right sensor is no walk in the park. Considerations must be made around adaptability, application, costs, and size.
If a sensor cannot effectively answer one of the questions listed above, it could be a waste to add it to the robot.
The real MVPs: Lidars and cameras
The two most important sensors on an outdoor robot are 3D Lidars and AI-powered cameras. These are the primary devices used for robot perception and visual detection.
Lidar (light detection and ranging) technology uses laser beams to measure the distance to objects, and uses the light reflected off those objects to map and render the space. It's similar to how dolphins and whales use echolocation to hunt and navigate.
3D Lidar mapping points laser beams at different angles to create a 3D rendering of a space. In a single map, millions of data points are captured and processed.
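The math behind that rendering is straightforward trigonometry: each laser return is a distance plus two beam angles, which convert to one 3D point. A rough Python sketch (function names are illustrative, not from any lidar SDK):

```python
import math


def lidar_point_to_xyz(range_m, azimuth_rad, elevation_rad):
    """Convert one lidar return (spherical coordinates) to Cartesian.

    range_m:       distance from the laser's time-of-flight measurement
    azimuth_rad:   horizontal beam angle
    elevation_rad: vertical beam angle
    """
    ground = range_m * math.cos(elevation_rad)  # projection onto ground plane
    x = ground * math.cos(azimuth_rad)
    y = ground * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)


def scan_to_point_cloud(returns):
    """A full scan is this conversion applied to every beam.

    returns: iterable of (range_m, azimuth_rad, elevation_rad) tuples.
    """
    return [lidar_point_to_xyz(r, az, el) for r, az, el in returns]
```

A real 3D lidar emits this kind of return millions of times per map, which is where the millions of data points come from.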
Autonomous navigation and how the magic happens
Outdoor robots integrate sensors like Inertial Measurement Units (IMUs) with Global Navigation Satellite Systems (GNSS) to form the backbone of the navigation system. Drones and other aircraft use similar technology to get precise, real-time location data.
If you play any VR or AR-based games on your mobile device, you're working directly with IMUs. IMUs measure orientation, angular velocity, and linear acceleration. This data drives motion tracking in the games we play on our mobile devices (Pokemon Go, anyone?).
To complement the data captured by IMUs, GNSS provides accurate, absolute positioning in the global frame. It uses satellites to locate the exact longitude and latitude points for moment-to-moment location changes. This is similar to the GPS you probably use with Google or Apple Maps.
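One simple way to see how the two complement each other is a complementary filter: the IMU estimate is integrated at a high rate but drifts over time, while each GNSS fix pulls the estimate back toward an absolute position. This is a toy sketch, not a real navigation stack; production systems typically use a Kalman filter instead:

```python
def dead_reckon(position, velocity, dt):
    """Integrate an IMU-derived velocity forward one time step."""
    return tuple(p + v * dt for p, v in zip(position, velocity))


def fuse_position(imu_estimate, gnss_fix, alpha=0.98):
    """Blend a drift-prone dead-reckoning estimate with a noisy,
    low-rate, but absolute GNSS fix.

    alpha near 1 trusts the smooth IMU estimate in the short term;
    the small GNSS term continually corrects it, bounding drift.
    """
    return tuple(alpha * i + (1 - alpha) * g
                 for i, g in zip(imu_estimate, gnss_fix))
```

Run `dead_reckon` at the IMU's rate (often hundreds of Hz) and apply `fuse_position` whenever a GNSS fix arrives (often 1-10 Hz).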
Bringing it all together
We try to create an accurate representation of the world using a combination of sensors and sensor-fusion algorithms. Paired with AI software, these sensors let the robot perceive the world much as a human does.
As with anything, these sensors come with their own pros and cons.
Together, these sensors provide an accurate map of the terrain as well as an estimate of the robot's position in space. If one sensor loses signal, the remaining sensors compensate to keep the location and mapping accurate.
With more sensors, not only do we provide the robot with more information, but we also have safeguards if one or two sensors were to fail in the field.
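A simple illustration of that safeguard is a variance-weighted average that skips any sensor reporting no data, so the surviving sensors carry the estimate. The interface here is invented for illustration:

```python
def fuse_redundant(readings):
    """Fuse position estimates from several redundant sensors.

    readings: list of (estimate, variance) pairs. A failed sensor
    reports an estimate of None and is simply skipped, so the
    remaining sensors compensate. Lower variance = more trust.
    """
    live = [(est, var) for est, var in readings if est is not None]
    if not live:
        raise RuntimeError("all sensors failed")
    weights = [1.0 / var for _, var in live]
    total = sum(weights)
    return sum(w * est for w, (est, _) in zip(weights, live)) / total
```

With three equally trusted sensors and one dropped out, the result is just the average of the two survivors; with unequal variances, the more reliable sensor dominates.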
In terms of creating the perfect outdoor robot, this is still the unexplored frontier. Our capabilities grow with hardware and battery improvements, and the technology is advancing by the day.
You see robots in the news, but what aren't they telling you?