Welcome back to Part 2 of our series on telepresence (TP) and telerobotics (TR) technologies. In the first piece, we covered the difference between a telepresence system and a telerobotics system and looked at some basic applications for each. Today, we take a more in-depth look at the three forms these systems typically take, to give you a more solid understanding of how their capabilities support security and surveillance.

There are currently three distinct manifestations of these systems. Each one builds on the last, adding to what the previous system can do. Each added capability carries its own advantages and disadvantages, so some strategy is warranted before attempting to employ this type of tech.

The first is vision and sound. What this category does is easy to understand: it uses a (typically stationary) system for the two-way transmission of audio and video. This could be a standard two-dimensional camera system, a multi-camera system that creates a wrap-around view, or even a 3D/VR camera that creates a more immersive experience. Some of these systems may, in the near future, create holographic projections, which would mean systems that let people interact as if they were in the same room.

The second category adds the capability of physical manipulation. These systems allow the user to move from pure observation to interaction with the remote environment. This is done with controls such as joysticks, wired gloves, inertial sensors, or haptic feedback technology on the pilot’s side, and mechanical ‘hands’ or other specialized tools on the side of the TR unit. Each option is meant to give the user a greater level of interaction with objects in a distant location. Turning door handles, pulling levers, or using a manual screwdriver to repair something without being there “in person” all become possible with the correct hardware and connection.
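At its simplest, translating a pilot's control input into a manipulator command is a matter of mapping and clamping. Here is a minimal sketch of that idea; the function name, angle range, and joystick convention are illustrative assumptions, not taken from any particular product:

```python
def joystick_to_gripper(axis_value, min_open=0.0, max_open=90.0):
    """Map a normalized joystick axis (-1.0..1.0) to a gripper opening angle.

    Hypothetical mapping: -1.0 is fully closed, +1.0 is fully open. The
    input is clamped so an out-of-range reading cannot command an unsafe
    angle on the remote unit.
    """
    clamped = max(-1.0, min(1.0, axis_value))
    # Linearly interpolate between the closed and open angles.
    return min_open + (clamped + 1.0) / 2.0 * (max_open - min_open)

print(joystick_to_gripper(0.0))   # mid-travel: 45.0 degrees
```

Real systems layer force limits, haptic feedback, and latency compensation on top, but the core pilot-to-actuator translation follows this shape.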

If you want to navigate a remote location and fully interact with the environment, then a ‘freedom of movement’ capability is a requirement. This is a large group of solutions meant to give telepresence bots the ability to move around an environment. Standard wheels, tracks, omni-directional balls, and magnetic levitation are just some of the options for letting a remote sensing unit relocate/redirect itself; the choice is typically a question of cost and terrain/location specifics.

These systems primarily come in two styles: manual control and autonomous control. In a manual control system, the user drives the robot where desired through an interface such as a software app or a hardware controller. In an autonomous control system, the robot either moves along a pre-determined path or uses sensors to determine its path. Both styles have drawbacks. A pre-determined path provides consistency and predictability but limits flexibility, while sensor/algorithm control offers more options but also more chances for problems, including getting physically stuck. You could combine an autonomous default with a manual pilot override (triggered, for example, when something is automatically detected) and call that a third style, but that doesn’t need elaboration yet.
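That hybrid arrangement, autonomous by default with a manual pilot override, can be sketched in a few lines. The class and method names here (`DriveController`, `next_command`) are hypothetical, chosen only to show the mode-switching logic:

```python
from enum import Enum, auto

class ControlMode(Enum):
    AUTONOMOUS = auto()
    MANUAL = auto()

class DriveController:
    """Hybrid drive controller: follows a planned path by default,
    but yields to the pilot when an override is active."""

    def __init__(self):
        self.mode = ControlMode.AUTONOMOUS

    def request_override(self):
        # Pilot (or an automatic detection event) takes manual control.
        self.mode = ControlMode.MANUAL

    def release_override(self):
        self.mode = ControlMode.AUTONOMOUS

    def next_command(self, planned_step, pilot_input=None):
        """Return the drive command to execute this tick."""
        if self.mode == ControlMode.MANUAL and pilot_input is not None:
            return pilot_input   # pilot's joystick/app command wins
        return planned_step      # otherwise follow the planned path

ctrl = DriveController()
print(ctrl.next_command("forward"))                # autonomous: follows plan
ctrl.request_override()
print(ctrl.next_command("forward", "turn_left"))   # manual: pilot command
```

The design choice worth noting is that the override is explicit state rather than implicit in the input, so a dropped connection on the pilot's side simply leaves the robot following its plan.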

The more pieces on the remote unit that replicate human senses, the more the pilot can leverage their innate capabilities to assess a situation in a faraway location. Those human capabilities can be further augmented by including analytics in a TR setup, allowing a pilot to benefit from machine learning without surrendering the advantages of human analysis.
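One common way analytics assists rather than replaces the pilot is by filtering raw detections down to alerts worth a human look. This is a minimal sketch under assumed inputs; the detection format, labels, and threshold are illustrative only:

```python
def flag_for_pilot(detections, alert_labels=frozenset({"person", "vehicle"}),
                   min_confidence=0.6):
    """Keep only the analytics detections worth the pilot's attention.

    `detections` is assumed to be a list of dicts like
    {"label": "person", "confidence": 0.9} produced by some upstream
    machine-learning model. The pilot still makes the final judgment.
    """
    return [d for d in detections
            if d["label"] in alert_labels and d["confidence"] >= min_confidence]

raw = [
    {"label": "person",  "confidence": 0.9},
    {"label": "cat",     "confidence": 0.95},   # not an alert label
    {"label": "vehicle", "confidence": 0.4},    # below threshold
]
print(flag_for_pilot(raw))   # only the high-confidence person detection
```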

Now you know about the types of telepresence systems that you may encounter. Stay tuned for the next part of our series on telepresence and telerobotics…