Objective

There is a need for technology that can extend the reach and enhance the safety of teams tasked with finding, characterizing, and remediating unexploded ordnance (UXO) underwater. This project sought to develop co-robotic (a human operator in partnership with a robot) removal of underwater UXO, an approach that leverages human perceptual capability and maximizes the benefit and performance of the human operator. This is done through the use of robotic manipulators, real-time non-contact sensors (optical and/or sonar), automatic control methods, and haptic rendering that provides the operator with sense-of-touch feedback.

The proof-of-concept objective of this SERDP Exploratory Development (SEED) project was to demonstrate that telerobotic control of underwater robot tools for grasping objects can be accomplished using haptic feedback. The metrics and criteria for success include:

  1. Successfully accomplishing telerobotic-controlled grasping of munition-like objects, in underwater tests, with real-time visual (computer screen) and haptic (force) feedback provided to the operator.
  2. Developing or adopting sensors that allow for real-time image and haptic feedback underwater, suitable for ordnance remediation tasks.
  3. Implementing algorithmic assistance to the teleoperator, through haptic forbidden-region virtual fixtures, to prevent contact with specified areas of the target object (essentially “no go” zones where the operator feels the interface device “push back” to resist motion); a sketch of this force computation follows the list.
  4. Implementing algorithmic assistance to the teleoperator, through haptic guidance virtual fixtures and “haptic tools,” to help the operator achieve proper gripper orientation and location.
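As a concrete illustration of the forbidden-region fixture in item 3, the sketch below computes a spring-like force that pushes the haptic device back when the commanded tool point penetrates a spherical no-go zone. This is a minimal sketch, not the project's published implementation: the function name, the spherical zone shape, and the stiffness value are illustrative assumptions.

```python
import numpy as np

def forbidden_region_force(tool_pos, center, radius, stiffness=400.0):
    """Restoring force for a spherical forbidden-region virtual fixture.

    tool_pos, center: 3-vectors in meters; radius: zone radius in meters;
    stiffness: virtual wall stiffness in N/m (illustrative value).
    Returns the 3-vector force (N) to command on the haptic device.
    """
    offset = tool_pos - center
    dist = np.linalg.norm(offset)
    if dist >= radius or dist == 0.0:
        return np.zeros(3)            # outside the zone (or degenerate): no force
    normal = offset / dist            # outward direction, away from the zone center
    penetration = radius - dist       # depth of intrusion into the zone
    return stiffness * penetration * normal
```

A guidance fixture (item 4) is the complement: instead of a repulsive force out of a protected region, it applies an attractive force toward a desired gripper pose.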

Technical Approach

The approach uses underwater sensors to generate real-time data that is processed by recently developed haptic rendering algorithms, providing a human operator with a ‘sense of touch’ of objects seen by the sensor. Combined with a teleoperated robotic device, this allows human-directed robotic removal of ordnance from lake, river, or sea bottoms. The methodology is somewhat modified from what was originally proposed because the project team was able to leverage significant external resources during the project.
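To make the sensor-to-touch pipeline concrete, the following simplified sketch renders a contact force directly from the latest point cloud frame. Published proxy-based haptic rendering algorithms (including the project's) are considerably more sophisticated; here each cloud point is simply treated as a small sphere, and the function and parameter names and values are illustrative assumptions. Because the cloud streams from the sensor, the force is recomputed at every haptic servo tick against the newest frame.

```python
import numpy as np

def render_contact_force(device_pos, points, contact_radius=0.01, stiffness=300.0):
    """One haptic servo tick: penalty force from the nearest cloud point.

    device_pos: 3-vector, haptic interface point in the sensor frame (m).
    points: (N, 3) array, the latest point cloud frame from the sensor.
    Each point is treated as a small sphere of radius contact_radius (m);
    returns the force (N) pushing the device out of the nearest sphere.
    """
    if len(points) == 0:
        return np.zeros(3)                # no sensor data this tick
    diffs = points - device_pos           # vectors from device to each point
    dists = np.linalg.norm(diffs, axis=1)
    i = int(np.argmin(dists))
    if dists[i] >= contact_radius or dists[i] == 0.0:
        return np.zeros(3)                # no contact this tick
    normal = -diffs[i] / dists[i]         # unit vector from point toward device
    penetration = contact_radius - dists[i]
    return stiffness * penetration * normal
```

In a teleoperation loop, a force like this would be commanded to the haptic device at a high servo rate (haptic rendering typically runs on the order of 1 kHz) while the same device's motions drive the robot arm.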

The proof-of-concept system consists of the following subsystems:

  • Robot Arm. This is used to grasp the ordnance, and either move it or secure it to a sling so that it can be lifted. The original plan was to build this robot subsystem, but through fortunate circumstances the project team obtained free access to a commercially available underwater robot arm for use in this project. The arm is suitable for attachment to a remotely operated vehicle (ROV), a fixed platform, or another underwater vehicle. The use of a commercial robot arm reduces the technical risk of the approach.
  • Visualization Software. This software lets the teleoperator view the robot arm and surrounding objects from any desired perspective (with full 3D rotation of the image, as well as zooming). It uses image and depth information obtained from the sensors, as well as a dynamic model of the robot arm. This subsystem was not explicitly part of the proposal, but as work progressed it became clear that it is a necessary component for successful operation.
  • Sensors (Optical and Sonar). As proposed, the project team used underwater video-plus-depth optical cameras, which were lab-tested in air and in a water tank (the standard back-projection from a depth image to a 3D point cloud that such cameras feed is sketched after this list). Given concerns about muddy water, the project team also explored the use of a sonar device. This additional task was beyond what was originally proposed; it was possible because the project team obtained access to a recently developed, commercially available sonar. The project team wrote software to process the sonar's data and modify its operation to obtain near real-time 3D depth information. This was tested in local waters (Portage Bay, next to the Montlake Cut between Lake Washington and Lake Union in Seattle), from a research barge provided by the University of Washington Applied Physics Laboratory.
  • Haptic Rendering Algorithms and Software. At the time of the SEED proposal, the project team had developed and published a 3-degree-of-freedom (DOF) version of haptic rendering. During the project, this was extended to 6 DOF, using a borrowed 6-DOF (translations plus rotations) haptic device. Virtual fixture algorithms and software were developed for forbidden regions, guidance, and haptic tools. All were tested using different robot platforms and have been published in the engineering literature.
  • Testbeds for Evaluation. Two in-lab testbeds were developed. One was an in-air system, using the robot arm, optical sensing system, and software subsystems. The second was an underwater system, where testing was done in a large water tank. In addition, for the sonar subsystem, testing was done in a local freshwater body, from a research barge.
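For reference, the video-plus-depth cameras in the sensors item feed the pipeline through a standard pinhole-model back-projection from a depth image to a 3D point cloud, sketched below. The intrinsics (fx, fy, cx, cy) are conventional camera parameters; the exact sensor interface is not specified in this summary, so the function shown is an illustrative assumption.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image into a 3D point cloud (pinhole model).

    depth: (H, W) array of depth along the optical axis, in meters
           (0 where the sensor returned no data).
    fx, fy, cx, cy: camera intrinsics, in pixels.
    Returns an (M, 3) array of valid points in the camera frame.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                       # drop pixels with no return
```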

Results

All of the proof-of-concept objectives were successfully met in this SEED project. The combined system was tested in air and underwater, and it performed all desired tasks well. The operator could successfully grasp and lift objects (including an inert mortar shell), avoiding specified contact locations. In addition, this project demonstrated the feasibility of sonar-based haptics.

The overall conclusions for this proof-of-concept research are as follows:

  1. 6-DOF haptic rendering from streaming (that is, time-varying) point cloud data has been developed and demonstrated.
  2. Virtual fixtures that prevent a teleoperator from touching undesired areas of an object with a robot end effector, or that guide the teleoperator in correctly orienting and placing the end effector, have been developed and demonstrated.
  3. The project team found and modified optical sensors that can obtain 3D information underwater, in real time, to drive these algorithms. These sensors are adequate for a limited working range, in clear water.
  4. Visualization tools and testbeds have been developed that verify the performance of the above subsystems. These have been tested underwater, in a test tank. In the underwater testbed, a particular submersible robot was used. However, the algorithms and software are applicable to other commercially available robots.
  5. The project team has developed novel sonar processing methods for streaming point clouds that permit implementation of the haptic rendering, virtual fixture, and telerobot control algorithms underwater, without requiring water clarity. These have been tested at a local freshwater site; the coordinate conversion underlying such processing is sketched below.
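This summary does not detail the sonar processing methods in conclusion 5, but any such pipeline includes a conversion of beamformed detections from sonar-native spherical coordinates to Cartesian points that the haptic rendering algorithms can consume. The sketch below shows only that standard step; the function name and the frame convention (x forward, y starboard, z down) are assumptions.

```python
import numpy as np

def sonar_beams_to_points(ranges, azimuths, elevations):
    """Convert sonar detections from spherical to Cartesian coordinates.

    ranges: (N,) slant ranges in meters for each detected return.
    azimuths, elevations: (N,) beam angles in radians, elevation measured
    from the horizontal plane.
    Returns (N, 3) points in an assumed sonar frame: x forward, y starboard, z down.
    """
    x = ranges * np.cos(elevations) * np.cos(azimuths)
    y = ranges * np.cos(elevations) * np.sin(azimuths)
    z = ranges * np.sin(elevations)
    return np.stack([x, y, z], axis=1)
```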

Together, these results establish the feasibility of the haptically enabled co-robotic approach to underwater munitions remediation. Specifically: (1) the functionality of the proposed underwater video-plus-depth camera has been demonstrated; and (2) the performance of haptic rendering and virtual fixtures has been demonstrated.

Benefits

By demonstrating the effectiveness of these tools underwater, and by studying the feasibility of integration with a number of platform options, this project has shown that the technology has great potential to reduce the cost and improve the effectiveness of UXO remediation operations. This work will assist the Department of Defense in mitigating underwater munitions in a safe and cost-effective manner. In addition, this SEED project has led to the development of algorithms, software, and systems for enhanced telerobotics in underwater conditions. These are applicable to a wide variety of human-operator-controlled robots and ROVs for diverse military, commercial, and scientific underwater activities.