You may recall that the barrel obstacles in the 2011 Sparkfun AVC were red. I'd been counting on that, since I'd hoped all along to use computer vision for obstacle avoidance.
CMUcam test screen capture
George M., one of my pals from SHARC, kindly loaned me a BOE-Bot CMUcam v1 and it was time to learn how it worked, interface it to the mbed, and then work out an algorithm to avoid giant red blobs in the robot's path.
I was able to get all the interfacing done in time, but ran into other issues. Here's the rundown.
Step 1: CMUcam Serial Interface
The CMUcam uses a simple, human-readable serial interface. You can use it from a terminal program like Termite. Or you can use their beta Java desktop application to do frame dumps (see pic, above right) and control other features of the camera.
Commands are terminated with a carriage return ('\r' for C/C++ people) and include things like RS for reset, TC for track color, MM for setting middle mass mode on/off, and myriad other commands.
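To give a flavor of the protocol, here's a minimal Arduino-style sketch that configures the camera for red tracking. The pins, baud rate, and RGB thresholds are all placeholder assumptions; set them to match your camera's jumpers and your lighting.

// Minimal CMUcam v1 setup over NewSoftSerial. Pins, baud rate, and the
// RGB thresholds are assumptions -- tune them for your hardware/lighting.
#include <NewSoftSerial.h>

NewSoftSerial cam(2, 3);  // RX, TX (hypothetical wiring)

void setup() {
  Serial.begin(9600);
  cam.begin(9600);                      // assumes camera jumpered down from 115200
  cam.print("RS\r");                    // reset
  delay(100);
  cam.print("MM 1\r");                  // middle mass mode on
  cam.print("TC 150 255 0 80 0 80\r");  // track "red": R high, G and B low
}

void loop() {
  while (cam.available())               // echo camera replies to the PC
    Serial.print((char)cam.read());
}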
Step 2: Arduino Control
The mbed on Data Bus has finally run out of serial peripherals: one USART talks to the iGPS-500, another to the AHRS, and the third is repurposed as I2C for the compass.
Anyway, with little time left, I decided to quickly make a serial-to-I2C bridge with an Arduino-compatible ATmega328P. Controlling the camera over serial wasn't too difficult. I used NewSoftSerial for the camera and added an FTDI header for programming/debugging, plus several debugging features intended to save my bacon on race day.
From the PC I can monitor I2C activity, query the latest bounding box info, or even bridge the PC to the camera and control it with the CMUcam java app, all in-circuit and on-robot. It's really pretty neato.
The Arduino tells the camera to reset and track red blobs. It reads the reported x and y coordinates and sticks those in memory for access from I2C or serial.
There's also a watchdog that resets and reconfigures the camera if it stops spitting out color tracking packets for too long. It seems to work well. All told, I'm happy with how it turned out.
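The tracking loop looks roughly like the sketch below. I'm assuming the camera's middle mass packets arrive as ASCII "M" lines (middle mass x/y, bounding box corners, pixel count, confidence), and the 2-second timeout is an arbitrary choice.

// Rough sketch of the tracking loop: parse ASCII "M" packets from the
// camera (cam and the setup commands as in the sketch above) and
// reconfigure it if packets stop arriving.
#include <stdio.h>

volatile byte box[4];          // x1, y1, x2, y2 -- served up over I2C/serial
unsigned long lastPacket = 0;

void pollCamera() {
  static char line[48];
  static byte n = 0;
  while (cam.available()) {
    char c = cam.read();
    if (c == '\r') {
      line[n] = '\0';
      n = 0;
      int mx, my, x1, y1, x2, y2, pix, conf;
      if (sscanf(line, "M %d %d %d %d %d %d %d %d",
                 &mx, &my, &x1, &y1, &x2, &y2, &pix, &conf) == 8) {
        box[0] = x1; box[1] = y1; box[2] = x2; box[3] = y2;
        lastPacket = millis();
      }
    } else if (n < sizeof(line) - 1) {
      line[n++] = c;
    }
  }
  if (millis() - lastPacket > 2000) {   // watchdog: silent for too long?
    configureCamera();                  // resend RS/MM/TC as in setup()
    lastPacket = millis();
  }
}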
Step 3: I2C Communication
On Arduino, I have had bad luck getting I2C to work. It's easier on the mbed, to be sure. The coding seems more intuitive to me than it did on the Arduino. So what better course of action than trying to get the two talking to each other?
Data Bus' mbed talking to breadboard Arduino talking to CMUcam
It was slow going until I remembered that the compass was generating a lot of traffic on the same I2C bus; disabling that code helped. Then I read the part in the I2C tutorial explaining that device addressing is 7-bit, with an eighth bit appended as a read/write indicator.
On the Arduino I simply call Wire.begin(7), where 7 is the Arduino's I2C address, then call Wire.onRequest() specifying a handler function that spits back four bytes: x1, y1, x2, and y2 for the bounding box.
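The slave side is only a few lines; box[] here is the array from the tracking sketch above.

// Arduino as I2C slave at address 7, answering the mbed's reads with
// the latest bounding box.
#include <Wire.h>

void setup() {
  Wire.begin(7);              // join the bus as slave address 7
  Wire.onRequest(sendBox);    // called whenever the master requests data
}

void sendBox() {
  Wire.send((byte *)box, 4);  // x1, y1, x2, y2 (Wire.send() in pre-1.0 Arduino)
}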
I found it easiest on the mbed to use the "raw" I2C library's start(), write(), read(), and stop() methods, sending the address manually: take the 7-bit I2C address, shift it left once, and set bit 0 to indicate a read. Then read four bytes, like this:
cam.start();
cam.write((0x07 << 1) | 0x01); // 7-bit address 7, shifted left, bit 0 set for read
data[0] = cam.read(1);         // ack each byte...
data[1] = cam.read(1);
data[2] = cam.read(1);
data[3] = cam.read(0);         // ...but don't ack the last one
cam.stop();
Eureka, it works!
The logic analyzer shows the camera tracking a small red object on my desk.
Here are some screenshots of the serial interfaces I have going simultaneously. The mbed is in "instrument check" mode, reading and displaying sensor values. You can see the box coordinates reported here.
The Arduino is in "standard" mode, after having been in "monitor" mode displaying I2C request events. The "q" command queries the current bounding box values, the same data the mbed's I2C query receives.
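The dispatcher for those commands might look something like this; the "q" command matches what I described above, but the mode-toggle letter is just a placeholder.

// Hypothetical shape of the serial debug dispatcher: 'q' queries the
// bounding box; 'm' as a monitor-mode toggle is a placeholder.
boolean monitor = false;

void handleSerial() {
  if (!Serial.available()) return;
  switch (Serial.read()) {
    case 'q':                         // query the current bounding box
      for (byte i = 0; i < 4; i++) {
        Serial.print((int)box[i]);
        Serial.print(i < 3 ? ' ' : '\n');
      }
      break;
    case 'm':                         // toggle I2C event monitoring
      monitor = !monitor;
      break;
  }
}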
Step 4: The Hardware
The schematic and board are pretty simple. I'm basically copying the Solarbotics Ardweeny schematic, but using some SMD passives to keep the board uncluttered. The one through-hole resistor is convenient for single-layer routing.
Step 5: The Algorithm
In general the idea is to detect a big red object a few meters away and begin steering the robot so that the red blob isn't in the center of the image. The algorithm will have to pick a direction and either override, subsume, or trick the navigation steering algorithm to turn the robot.
Of course, how far should the robot steer left or right? Imaginary parallel lines in front of the robot describe its track width. The robot, taking a picture, would see these lines converge to the vanishing point at the horizon.
I'll have to figure out what pixels these lines would occupy, and then steer the robot until the red blobs are outside of these track width lines (plus some safety margin).
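In rough C, the test might look like the sketch below. Every constant is a guess: the 80-pixel-wide CMUcam frame, the horizon row, the track half-width at the bottom of the frame, and the safety margin would all need measuring on the real robot.

// Sketch of the avoidance test, all constants assumed. The track edges
// shrink linearly from the bottom of the frame to the vanishing point.
#define IMG_W      80    // CMUcam frame width in pixels
#define HORIZON_Y  40    // image row of the vanishing point (assumed)
#define BOTTOM_Y   143   // bottom row of the frame
#define TRACK_HALF 25    // half the track width in pixels at BOTTOM_Y (assumed)
#define MARGIN     5     // safety margin in pixels (assumed)

// Returns -1 to steer left, +1 to steer right, 0 if the path is clear.
int avoid(int x1, int y1, int x2, int y2) {
  int cx = (x1 + x2) / 2;           // blob center column
  int cy = y2;                      // blob's bottom edge: its nearest point
  if (cy <= HORIZON_Y) return 0;    // above the horizon: too far to matter
  // Interpolate the track half-width at row cy: zero at the horizon,
  // TRACK_HALF at the bottom of the frame.
  int half  = (long)TRACK_HALF * (cy - HORIZON_Y) / (BOTTOM_Y - HORIZON_Y);
  int left  = IMG_W / 2 - half - MARGIN;
  int right = IMG_W / 2 + half + MARGIN;
  if (x2 < left || x1 > right) return 0;   // blob outside the track lines
  return (cx < IMG_W / 2) ? 1 : -1;        // steer away from the blob's side
}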
Epilogue
I should've tested this integration right away and saved my remaining time for fixing the navigation code. For some reason, every time the Arduino and CMUcam were powered up, the GPS signals tanked, dropping in strength by 30-50 dB! In other words, the GPS fix went from 9 satellites to 3 instantly. I ran out of time to investigate EMI/RFI as a possible cause. So Data Bus wasn't able to see red, or anything else, on race day. Maybe next year.