Testing of the robotic arm was fraught with last-minute surprises and glitches, as it was the first of the major components to be completed. The lessons I learned working with the cable drives on the forearm let me refine the leg design before the parts were sent out to be machined, which probably saved me a few weeks' time. In addition to testing the dexterity and speed of the arm, the trials were the first major test of the haptic world-modeling code. Before I had a rigorous method for calibrating the proprioceptive sensors, the world model resembled the truth like Swiss cheese resembles a brick.
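
To give a flavor of what that calibration amounts to, here is a toy sketch: fit each joint's raw encoder counts against angles measured with an external reference. The gain/offset model and the data below are illustrative assumptions, not the r50r's actual numbers.

```python
import numpy as np

def fit_joint_calibration(raw_counts, measured_angles_deg):
    """Fit a gain/offset so encoder counts map onto externally measured angles."""
    gain, offset = np.polyfit(raw_counts, measured_angles_deg, deg=1)
    return gain, offset

# Hypothetical data: raw encoder counts vs. angles measured with a jig.
raw = np.array([1024.0, 2048.0, 3072.0, 4096.0])
true_deg = np.array([0.1, 45.2, 89.8, 135.1])

gain, offset = fit_joint_calibration(raw, true_deg)
print(f"angle ≈ {gain:.5f} * counts + {offset:.2f} deg")
```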

I used machined drive shafts to transmit power to the legs, and purpose-built gearboxes where I needed tighter tolerances and had tighter space constraints. I used Matlab to optimize various combinations of leg length, mass, and acceleration.
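
I can't reproduce the original Matlab scripts here, but the shape of the trade-off is easy to show. The sketch below (in Python, with made-up torque, density, and target figures) sweeps candidate leg lengths for the lightest leg that still meets a stride and foot-acceleration target, treating the leg as a uniform rod driven at the hip:

```python
import numpy as np

TAU = 400.0        # available hip torque, N*m (assumed)
RHO = 12.0         # leg linear density, kg/m (assumed)
MIN_LENGTH = 0.6   # m, needed for stride clearance (assumed)
MIN_ACCEL = 30.0   # m/s^2 target at the foot (assumed)

best = None
for length in np.arange(0.5, 1.5, 0.01):        # candidate leg lengths, m
    mass = RHO * length                          # leg mass grows with length
    inertia = mass * length**2 / 3.0             # uniform rod about the hip
    foot_accel = (TAU / inertia) * length        # alpha * L at the foot
    if length >= MIN_LENGTH and foot_accel >= MIN_ACCEL:
        if best is None or mass < best[1]:       # lightest feasible leg wins
            best = (length, mass, foot_accel)

length, mass, accel = best
print(f"length={length:.2f} m  mass={mass:.1f} kg  foot accel={accel:.1f} m/s^2")
```

The real optimization had many more variables, but the tension is the same: longer legs buy stride at the cost of both mass and achievable foot acceleration.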

At first I entertained the thought that the robot might be powered primarily by an electrical source, but I sidestepped all of that when I took the leap to a more mechanical system. You just can’t beat the power density of petrol, and even with the abysmal conversion efficiency of IC engines, they still beat batteries. For a robot of this scope, having a long-lasting and quickly “rechargeable” power source was crucial. I wanted run-times longer than twenty minutes :-). This was by far the biggest hurdle to overcome. For a walking gait, about 95% of the power comes directly from the 1.6×4, with the rest electrically coupled. Keep in mind, that is still more than 5 kW of electric power.
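
For a sense of scale, that split is worth a quick back-of-the-envelope check (the ~5 kW electric figure is the one quoted above; the rest is just arithmetic):

```python
# ~5 kW of electric power at a 95/5 mechanical/electric split.
electric_kw = 5.0
electric_fraction = 0.05
total_kw = electric_kw / electric_fraction
print(f"total walking power ≈ {total_kw:.0f} kW "
      f"({total_kw * 0.95:.0f} kW mechanical, {electric_kw:.0f} kW electric)")
```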

There is precedent for IC-powered robots. It was first attempted in the early sixties using an American V8 engine. More recently, quadruped trucks, hexapods, and walking robots designed for the timber industry have all run on IC power.

For what electrical needs I had, I used hundreds of high-current titanium-oxide power transistors. Thank goodness my wife is an analog power-circuit guru. She deserves all the credit for the power electronics and most of the middleware.

The sensor systems include every sensor known to robot builders, as well as a few purpose-built additions thrown in for good measure. In place are proprioception sensors, infrared (IR) sensors, a set of high-resolution lidars, and even some old bump-type sensors in places like the knees, legs, and outer arms. Strain gauges were installed to detect torso twist and to help with debugging, self-diagnosis, and repair.
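
As an illustration of the strain-gauge idea, here's a hypothetical sketch of turning a ±45° gauge pair into a twist estimate; the gauge factor, excitation, and bench-fit constant are all assumed values, not mine:

```python
GAUGE_FACTOR = 2.1            # typical foil gauge (assumed)
EXCITATION_V = 5.0            # bridge excitation voltage (assumed)
TWIST_DEG_PER_UE = 0.002      # deg of twist per microstrain, bench-fit (assumed)

def bridge_to_microstrain(v_out):
    """Quarter-bridge small-strain approximation: strain = 4*Vout/(GF*Vexc)."""
    return 4.0 * v_out / (GAUGE_FACTOR * EXCITATION_V) * 1e6

def torso_twist_deg(v_plus45, v_minus45):
    # Differencing the +/-45 degree gauges rejects pure bending, keeping torsion.
    shear_ue = bridge_to_microstrain(v_plus45) - bridge_to_microstrain(v_minus45)
    return shear_ue * TWIST_DEG_PER_UE

print(f"twist ≈ {torso_twist_deg(0.0021, -0.0019):.2f} deg")
```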

The most interesting sensors are the olfactory sensors in the head, which can tell the difference between transmission fluid, oil, brake fluid, and gasoline for roadside assistance.
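
The trick is that each fluid leaves a different fingerprint across the sensor array's channels, so even a nearest-centroid match goes a long way. The four-channel signatures below are invented for illustration:

```python
import numpy as np

# Normalized responses of four sensor channels to each fluid (made-up data).
SIGNATURES = {
    "gasoline":           np.array([0.90, 0.40, 0.10, 0.20]),
    "motor oil":          np.array([0.30, 0.70, 0.20, 0.10]),
    "transmission fluid": np.array([0.25, 0.60, 0.55, 0.15]),
    "brake fluid":        np.array([0.10, 0.20, 0.30, 0.80]),
}

def identify(reading):
    reading = np.asarray(reading, dtype=float)
    return min(SIGNATURES, key=lambda k: np.linalg.norm(reading - SIGNATURES[k]))

print(identify([0.85, 0.45, 0.12, 0.18]))   # -> gasoline
```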

A unique 4-way microphone system in the head gives directional input. There are two microphones in the torso used as inputs for noise cancellation software.
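
The directional part works on time differences of arrival between microphone pairs. A sketch of the idea with assumed spacing and sample rate (the real head uses four mics; this shows a single pair):

```python
import numpy as np

FS = 48_000            # sample rate, Hz (assumed)
MIC_SPACING = 0.20     # meters between the mic pair (assumed)
SPEED_OF_SOUND = 343.0 # m/s

def bearing_deg(sig_left, sig_right):
    """Positive bearing means the sound reached the right mic first."""
    corr = np.correlate(sig_left, sig_right, mode="full")
    lag = np.argmax(corr) - (len(sig_right) - 1)   # delay in samples
    tdoa = lag / FS
    # Far-field approximation: sin(theta) = c * tdoa / spacing.
    s = np.clip(SPEED_OF_SOUND * tdoa / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

# Toy example: the right mic hears the same click 5 samples later,
# so the source sits off to the left and the bearing comes out negative.
click = np.random.default_rng(0).standard_normal(256)
left = np.pad(click, (0, 5))
right = np.pad(click, (5, 0))
print(f"bearing ≈ {bearing_deg(left, right):.1f} deg")
```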

One of the greatest vision-system challenges I overcame was the recognition of reconfigurable and partially occluded objects. For some of the more typical objects the r50r is likely to encounter (autos, for example), a simple internal model of vehicle geometry is used. This model is mapped onto candidate objects, and any state information is gleaned from the fitted model (for example, a raised hood can be an indication of a vehicle's state).
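
A toy version of the model-mapping idea: once a crude vehicle model is fitted to a candidate, state reads right off the model's parts. The keypoints and threshold below are invented:

```python
import numpy as np

def hood_angle_deg(hinge, hood_tip):
    """Angle of the hood line above horizontal; ~0 when closed, large when raised."""
    dx, dy = hood_tip[0] - hinge[0], hood_tip[1] - hinge[1]
    return float(np.degrees(np.arctan2(dy, dx)))

def hood_state(hinge, hood_tip, raised_threshold_deg=20.0):
    state = hood_angle_deg(hinge, hood_tip) > raised_threshold_deg
    return "raised" if state else "closed"

# Fitted keypoints (image coords, y up) for the hood hinge and front edge.
print(hood_state(hinge=(1.2, 0.9), hood_tip=(2.0, 1.0)))   # shallow -> closed
print(hood_state(hinge=(1.2, 0.9), hood_tip=(1.6, 1.6)))   # steep   -> raised
```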

The next set of tests challenged the coordination between vision and manipulation. The robot needed to recognize its own movement visually, and couple the haptic and visual systems in a feedback loop to smoothly maneuver to a specific point and accomplish a task. Up until this point, my tests had been done under ideal conditions, with consistent, flat lighting. The next series of visual-system tests (now fully integrated into the robot) ran under more realistic and varied lighting conditions, including hard shadows in direct sunlight. Ironically, the visual system had higher accuracy in total darkness than when the subject was well illuminated: the placement of the rally lights on the robot had the pleasant side effect of casting thin outline shadows that made isolating an object from the background straightforward.
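
The feedback loop itself is conceptually simple, even if tuning it was not. In this stripped-down sketch (gains and noise invented), vision measures the hand-to-target error and the haptic/motion side executes a proportional correction until the observed error is small:

```python
import numpy as np

rng = np.random.default_rng(1)
hand = np.array([0.50, 0.10])     # hand position as seen by vision (arb. units)
target = np.array([0.00, 0.30])   # where the task says the hand should be
GAIN, TOLERANCE = 0.4, 0.005

for step in range(100):
    # Vision supplies a noisy estimate of the remaining error...
    observed_error = target - hand + rng.normal(0.0, 0.002, size=2)
    if np.linalg.norm(observed_error) < TOLERANCE:
        break
    # ...and the motion side executes a proportional correction.
    hand = hand + GAIN * observed_error

print(f"settled in {step} steps, residual {np.linalg.norm(target - hand):.4f}")
```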

The robot tracks its visual world and the objects therein with two coupled processes. Each object's state (including the robot's own) is kept in world coordinates, which lets the robot estimate the velocity of every object it is looking at. I also found that keeping the robot's state in world coordinates makes path planning more manageable.
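
A minimal sketch of what per-object world-coordinate state looks like; the class and smoothing scheme are my own illustration, not the actual tracker:

```python
import numpy as np

class TrackedObject:
    """World-frame position plus a smoothed finite-difference velocity."""
    def __init__(self, position, t, smoothing=0.5):
        self.position = np.asarray(position, dtype=float)  # meters, world frame
        self.velocity = np.zeros_like(self.position)
        self.t = t
        self.smoothing = smoothing

    def update(self, position, t):
        position = np.asarray(position, dtype=float)
        dt = t - self.t
        if dt > 0:
            raw_vel = (position - self.position) / dt
            a = self.smoothing
            self.velocity = a * raw_vel + (1 - a) * self.velocity  # damp jitter
        self.position, self.t = position, t

car = TrackedObject([10.0, 2.0], t=0.0)
car.update([10.5, 2.0], t=0.5)
car.update([11.0, 2.0], t=1.0)
print(car.velocity)   # [0.75 0.  ] -- converging on 1.0 m/s along x
```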

The final step before real-world testing was the integration of visual tracking, object-state determination, planning, dexterity, and coordination. These tests were hair-raising a few times; like Jonas Salk, I was the guinea pig for my own experiment.