Underwater Robotics at Berkeley

Mechanical

Chassis

Gripper

Torpedo launcher

Chassis

The RoboSub’s chassis consists of custom-designed high-density polyethylene components, with cut-outs for the main electrical housing, the battery compartment, the manipulator, and the torpedo launcher. The chassis was designed to fit around the central electrical components, with particular focus on mounts for the stereo-vision camera setup and the eight Blue Robotics T200 vertical and horizontal thrusters.

Gripper

Our gripper is a relatively simple design: a threaded-rod drive converts the rotary motion of a Blue Robotics M200 motor into linear motion, causing an assembly of aluminum plates to open and close, actuating the claw.
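The lead-screw kinematics behind this drive can be sketched as follows. The 2 mm screw lead and the motor speeds here are illustrative assumptions, not measured values from our M200 setup:

```python
# Lead-screw sketch: a threaded rod converts motor revolutions into
# linear carriage travel, which opens and closes the claw plates.
# The 2 mm lead is an assumed, illustrative value.

def linear_travel_mm(revolutions: float, lead_mm: float = 2.0) -> float:
    """Carriage travel for a given number of screw revolutions."""
    return revolutions * lead_mm

def closing_speed_mm_s(rpm: float, lead_mm: float = 2.0) -> float:
    """Linear speed of the claw assembly at a given motor speed."""
    return (rpm / 60.0) * lead_mm

print(linear_travel_mm(10))    # 10 revolutions -> 20.0 mm of travel
print(closing_speed_mm_s(60))  # 60 RPM -> 2.0 mm/s
```

The appeal of this arrangement is that a fine screw lead trades speed for grip force and makes the claw effectively non-backdrivable.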

Torpedo Launcher

The torpedo launcher is designed to launch a torpedo accurately while remaining modular and compact, allowing for flexible mounting on the AUV. Because COVID limited the amount of testing we could conduct, the torpedo design takes inspiration from military torpedoes. The launching system is spring-powered and uses pre-owned motors for launching. Though simple, the launching system leaves very little room for error.
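A back-of-the-envelope model gives a feel for what a spring-powered launcher can do: if all stored spring energy (½kx²) becomes kinetic energy (½mv²), the muzzle velocity is v = x·√(k/m). This is an idealized, loss-free sketch, and the spring constant, compression, and torpedo mass below are illustrative assumptions, not our actual hardware values:

```python
import math

# Ideal spring-launcher model: all spring energy 1/2 k x^2 transfers
# to torpedo kinetic energy 1/2 m v^2, ignoring friction and drag.
# All numeric values below are illustrative assumptions.

def launch_velocity(k_n_per_m: float, compression_m: float, mass_kg: float) -> float:
    """Ideal muzzle velocity: v = x * sqrt(k / m)."""
    return compression_m * math.sqrt(k_n_per_m / mass_kg)

# e.g. a 500 N/m spring compressed 0.10 m driving a 0.05 kg torpedo
v = launch_velocity(500.0, 0.10, 0.05)
print(round(v, 2))  # -> 10.0 m/s
```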

Electrical


Two NOGGIN Boards

CAN Bus Connection

In order to facilitate quickly adding and removing components such as thrusters, end effectors, and sensors, we chose a standard set of signals that would serve as the bus connecting all of them together. Reviewing our use cases, we chose to provide 12V, 5V, and CAN bus connections. The CAN bus is widely used in the automotive industry as a noise-resistant, reasonably high-speed interconnect which is why we chose it for our architecture. With one bus connecting everything, we greatly reduced the amount of cabling used and even enabled daisy chaining modules together.
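One consequence of a shared CAN bus is that every module speaks a common frame format with a small fixed payload (up to 8 data bytes for classic CAN). The sketch below shows what packing a thruster command into such a payload might look like; the frame layout (node id, command byte, signed 16-bit throttle) is a hypothetical illustration, not our actual NOGGIN protocol:

```python
import struct

# Hypothetical CAN data-field layout for a thruster command:
#   u8 node id, u8 command, i16 throttle scaled by 1000, 4 pad bytes
# (8 bytes total, the classic CAN payload limit). This layout is an
# illustrative assumption, not the real bus protocol.

CMD_SET_THROTTLE = 0x01

def pack_thruster_frame(node_id: int, throttle: float) -> bytes:
    """Encode a throttle in [-1.0, 1.0] as an 8-byte CAN data field."""
    raw = max(-1000, min(1000, int(throttle * 1000)))
    return struct.pack("<BBh4x", node_id, CMD_SET_THROTTLE, raw)

def unpack_throttle(frame: bytes) -> float:
    """Decode the throttle value from a packed frame."""
    _node_id, _cmd, raw = struct.unpack("<BBh4x", frame)
    return raw / 1000.0

frame = pack_thruster_frame(3, 0.5)
print(len(frame), unpack_throttle(frame))  # 8 0.5
```

Keeping every module on one compact frame format is what makes daisy chaining practical: a new device only needs to agree on the payload layout, not on any extra wiring.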

Electrical Diagram

For firmware development, we chose the Mbed platform due to its ease of use, feature set, and community support. We chose Mbed Studio as our IDE to make it easier for new members to learn. Getting Git working with Mbed Studio so we could do version control on our firmware took some tricks with symlinks, but in the end we had a monorepo to easily distribute and develop firmware with.

Custom NOGGIN Boards & Template

For hardware development, we developed the NOGGIN system, which consists of a general-purpose board that covers around 80% of our tasks and a KiCad template that people can build on top of. The general-purpose board contains a brushed motor driver, CAN bus connections, current and voltage sensing for debugging/telemetry purposes, and I2C, UART, DAC, and PWM breakouts. The main idea of NOGGIN was to shift focus away from the details of microcontroller support circuitry to the actual task the board is accomplishing, whether that is reading from a sensor or controlling a motor.

Software-Controls

Gazebo Sim

Experimental Structure

Gazebo Simulation

To test our software system, we use the Gazebo simulator with the underwater-vehicle library UUV Simulator and its ROS 2 port, Plankton. The image shows a previous RoboSub competition field with a robot and various goal objects.

ROS Node/Topic Structure

We experiment with various control structures and streamline the flow of data across the sub's many sensors and control devices, troubleshooting compatibility issues and integrating each component in an orderly fashion.
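The node/topic pattern underlying this structure can be illustrated without a ROS installation. In the real system this routing is handled by ROS 2 (rclpy); the dictionary-based stand-in below only sketches the idea, and the topic name used is hypothetical:

```python
# A ROS-free stand-in for the node/topic pattern: publishers send
# messages to named topics, and every subscriber callback registered
# on that topic receives them. In the real sub this is ROS 2 (rclpy);
# this sketch only illustrates the routing.

from collections import defaultdict
from typing import Any, Callable


class TopicBus:
    def __init__(self) -> None:
        self._subs: "dict[str, list[Callable[[Any], None]]]" = defaultdict(list)

    def subscribe(self, topic: str, callback: "Callable[[Any], None]") -> None:
        self._subs[topic].append(callback)

    def publish(self, topic: str, msg: Any) -> None:
        for cb in self._subs[topic]:
            cb(msg)


bus = TopicBus()
depth_log: "list[float]" = []

# e.g. a controls node listening on a hypothetical /sensors/depth topic
bus.subscribe("/sensors/depth", depth_log.append)
bus.publish("/sensors/depth", 1.8)
print(depth_log)  # [1.8]
```

Decoupling producers from consumers this way is what lets us swap sensors or controllers without rewiring the rest of the graph.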

Software-Perception

Code Pipeline

Combined Background Removal


Code Pipeline

Code pipeline that shows each step of the process from a raw video feed to resulting bounding boxes around task-relevant objects.

In general, our algorithms for identifying objects of interest (such as the gate with two posts) in the sub's input video feed involve thresholding out the background water. Usually, the threshold is based on color, i.e. a certain shade of blue. This isolates the object and simplifies identifying its location from the contours of the thresholded image.
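The thresholding step can be sketched on a tiny synthetic "image": background water pixels are a shade of blue, anything not blue enough is kept as foreground, and the bounding box of the surviving pixels approximates the object location. The colors, cutoff, and 5×5 image below are illustrative assumptions, not our actual tuned values (the real pipeline works on OpenCV contours):

```python
# Toy color-thresholding sketch: keep pixels whose blue channel falls
# below a cutoff (i.e. not background water), then take the bounding
# box of the foreground. All colors and thresholds are illustrative.

WATER = (20, 60, 180)   # assumed background blue (R, G, B)
POST = (200, 120, 40)   # assumed foreground object color

image = [[WATER] * 5 for _ in range(5)]
for r in range(1, 4):          # paint a vertical "post" in column 2
    image[r][2] = POST

def is_foreground(pixel, blue_cutoff=150):
    """Foreground = pixel whose blue channel is below the water cutoff."""
    return pixel[2] < blue_cutoff

def bounding_box(img):
    """(min_row, min_col, max_row, max_col) of foreground pixels."""
    coords = [(r, c) for r, row in enumerate(img)
              for c, px in enumerate(row) if is_foreground(px)]
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return min(rows), min(cols), max(rows), max(cols)

print(bounding_box(image))  # (1, 2, 3, 2): the painted post
```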

Combined Background Removal

This showcases the components of the combined background-removal algorithm. The top-left image shows the output of the Minimum Barrier Saliency Detection algorithm, and the bottom-left image shows the extracted contours. The top-middle image shows the output of the K-Nearest Neighbors algorithm, and the bottom-middle image shows its output contours. The top-right image shows the contours of the currently selected algorithm, and the bottom-right image shows a bounding-box representation of the output.

Detecting Objects of Interest

Our first approach to detect tasks distinguished by specific colors was analyzing histograms of various color spaces and utilizing the locations of the peaks in the distribution. The implementation involved changing the original RGB color space into other color spaces such as LAB and then running Otsu’s binarization algorithm to threshold the desired peak in the distribution. By selecting the appropriate peak, the correct color would be chosen. This approach relied on the shape of the distribution and was more robust than fixed color thresholding.
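Otsu's method picks the threshold that maximizes the between-class variance of the histogram, which lands in the valley between two peaks of a bimodal distribution. The from-scratch sketch below illustrates the algorithm on a 1-D intensity histogram; it is not our actual pipeline code, which ran on a converted LAB channel:

```python
# From-scratch Otsu's binarization: sweep every candidate threshold t
# and keep the one maximizing between-class variance
#   w_bg * w_fg * (mean_bg - mean_fg)^2
# over the background (<= t) and foreground (> t) classes.

def otsu_threshold(values, levels=256):
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated intensity clusters: the threshold lands between them.
pixels = [30] * 100 + [200] * 100
t = otsu_threshold(pixels)
print(30 <= t < 200)  # True
```

Because the split depends on the shape of the distribution rather than a fixed cutoff, it tolerates the lighting shifts that defeat hard-coded color thresholds.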

