SparkFun AVC 2015 recap

Another year, another AVC. This was my second year competing, and although I had a lot more time to prepare, things still didn’t go very well. I figured I’d give a recap of the things that went well, the things that didn’t, and plans for next year.

Last-minute modifications

SparkFun allowed teams to do some practice runs the day before the competition. I attended, and the main thing I discovered was that the fence surrounding the course sat about 5 cm off the ground, and my vehicle had a propensity to get wedged underneath it whenever it ran into it. The suspension on the Grasshopper lets the wheels bow inward as the chassis compresses; this moved the tires just enough for them to get wedged underneath or pushed all the way to the other side of the fence, leaving the vehicle stuck. I didn’t have any padding or other bumpers on the front, and this ended most of my runs. I went to Solid State Depot afterwards and attached a clear plastic bump plate to prevent the vehicle from getting wedged again, zip-tying it to the front bumper of the Grasshopper so that it sloped up and forward.

The morning of the competition, they allowed some more practice runs. The bump plate worked well in preventing the vehicle from getting wedged, but my zip ties snapped almost immediately. I ended up duct taping the plate to the bumper instead. Based on some more testing, I also fiddled with my collision recovery algorithm. Unfortunately, I didn’t have time to test this…


So! On to my first heat. My vehicle took off the line, turned slightly to the left, and drove into the fence. It stopped there, and due to a bug in my last-minute code changes, it never backed up to try again. Another competitor made a good suggestion: drive the vehicle straight for a few seconds to get a better GPS heading, and then switch to the normal navigation algorithm. That should hopefully help it avoid other competitors off the line as well as other early navigation problems.

The second heat revealed a problem with my bump plate: because it was mounted at an angle, it had the tendency to drive over and on top of smaller obstacles. Another vehicle drove in front of me right at the start and hit the curb, and my vehicle ended up running into them and getting high-centered on top of them. I think I also damaged their vehicle… Sorry!

My third run ended similarly. The car appeared to be navigating well but was driving very closely along the fence. The fence itself was weighted down by some sandbags and my car made it over one, but got high-centered on the second.

Practice runs

There was some good news in the midst of all of this though. In between heats, we were allowed to do some test runs, and I did collect some interesting data. And, there were even some successful runs!


Here’s what the course looked like. I’ve highlighted the starting line in blue; the course starts travelling northwest and then makes 4 right turns to return to the starting line. Each red dot represents a single GPS reading. The first problem that I saw was that the GPS readings started a little to the south of the starting line, but eventually corrected. My vehicle initially drove into the curb at the start (because it thought that it needed to drive further north) but detected a collision and successfully recovered.

The next weird thing I saw was a jump in the GPS data, which I’ve highlighted in green. At first, I thought this was due to incorrect readings from the GPS module, but after looking at the logs, it turned out the module had simply failed to switch back into NMEA mode. The module I used is modal: you need to switch it between reporting magnetometer readings and NMEA messages. For whatever reason, this switch failed, but my car used dead reckoning to interpolate through the gap, so it wasn’t a problem.


The next jump occurred as I was nearing the second turn. The GPS started reporting coordinates about 200 meters south, so my car tried to drive north and ended up running into the fence. The GPS jumped a few times after that, first to the west of the course, then to the inside of the course, and finally near the fence perimeter, which is close to where my vehicle actually was. I’m not sure how best to deal with these discrepancies. One idea suggested by a member of Solid State Depot is to ignore data after it jumps; this would work for a while, and I could fill in the gaps using dead reckoning, but how do you recover from the error? If the GPS readings jump back but are 5 meters off from your estimate, is the estimate wrong, or is the GPS? I haven’t come up with a good answer, so I scrapped this idea. Another idea is to define the fence perimeter around the course and ignore readings outside of it. This would help with obviously bad readings, but the readings near the start and end of my runs were close to the vehicle’s position yet outside of the fence, so this strategy would discard some somewhat useful data. Maybe I could fudge readings that fall just outside the fence to appear just inside it, and feed the Kalman filter a confidence inversely proportional to the distance outside the fence.
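That last idea could be sketched roughly like this. To be clear, this is a hypothetical illustration, not code from my car: the rectangular fence, the degrees-as-distance shortcut, and the confidence falloff constant are all made up and would need tuning.

```python
def clamp_to_fence(lat, lon, fence):
    """Clamp a GPS reading to a rectangular fence and return a confidence
    weight that falls off with distance outside the fence.

    fence is (min_lat, min_lon, max_lat, max_lon). Distances are treated
    as plain degrees for simplicity; real code would convert to meters.
    """
    min_lat, min_lon, max_lat, max_lon = fence
    clamped_lat = min(max(lat, min_lat), max_lat)
    clamped_lon = min(max(lon, min_lon), max_lon)
    # Rough "distance outside the fence" in degrees
    overshoot = abs(lat - clamped_lat) + abs(lon - clamped_lon)
    # Confidence of 1.0 inside the fence, decaying outside; the 1000.0
    # scale factor is arbitrary
    confidence = 1.0 / (1.0 + 1000.0 * overshoot)
    return clamped_lat, clamped_lon, confidence
```

Readings inside the fence pass through untouched with full confidence; readings outside get pulled to the boundary and down-weighted rather than discarded outright.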


My second test run looked a lot better. The car didn’t run into anything and made it around the course in about 1:15. Unfortunately it stopped right before the finish line and claimed that it had reached all of its waypoints. I’m not sure why it stopped there, so I set up for a third test run.


This test run also went well: no collisions, the GPS data looked sound, and the vehicle made it around in a similar time. However, like last time, it stopped before the finish line. For later runs, I just added more waypoints past the first turn, which worked well in further testing.


Here are both test runs overlaid on each other in white, along with the path I was trying to follow in yellow. I’m surprised by how closely they follow each other, especially at the start and between the 2nd and 3rd turns. I think the vehicle was turning too wide past some waypoints and ended up overcompensating as it approached the next one.

Crowd pleaser!

This year, I dressed my vehicle up as a blue shell from Mario Kart, and dressed myself as Luigi. I was inspired after watching this video. This was a lot of fun and seemed to get a good reaction from the crowd. My family also dressed up! At the end of the day, I won the crowd pleaser award. Yippee!



SparkFun AVC build log 2015-06-14

It’s less than a week until the SparkFun AVC, so I figured it’s probably time for a quick update. There are still a lot of things I need to get working (such as a physical start button), which is why I haven’t been writing much, but I figured this would be a good break.


Since my post a few months ago, I’ve checked a few things off my to-do list. Here’s what I had written, with completed items marked off:

  • Critical
    • Control the car from the onboard computer
    • Complete sensor data processing
    • Complete basic navigation to waypoints
    • Add a physical start button
  • Nice to have
    • Include basic collision detection and recovery
    • Use a Kalman filter for position estimation
    • Add a remote kill switch
    • Implement obstacle avoidance
    • Consider the shortcut, non-GPS option, and ramp and hoop
    • Estimate speed from accelerating from a standstill
    • Look at faster motors and other upgrades
    • Get remote monitoring working

Reading data from the SUP800F module was a bit tricky. It’s modal in that it either emits NMEA (i.e. GPS) messages or messages in a proprietary binary format. The binary format contains raw magnetometer, accelerometer, temperature, and pressure readings. I wanted to use the raw magnetometer readings rather than relying on the module converting those measurements into compass headings (more on that later), so my code needed to constantly switch the mode on the device.

Collision detection is very basic. I had wanted to include accelerometer readings to detect a collision, but for now, if the reported GPS speed is 0 for more than 2 seconds, then a collision is reported and the car backs up, turns, and tries again.
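A minimal sketch of that stall check follows. The 2-second window matches what I described above; the 0.1 m/s noise threshold and the class shape are just illustrative.

```python
import time

class CollisionDetector:
    """Report a collision when the reported GPS speed stays near 0 too long."""

    def __init__(self, stall_seconds=2.0, speed_threshold=0.1):
        self.stall_seconds = stall_seconds
        self.speed_threshold = speed_threshold  # m/s; GPS speed is noisy near 0
        self.stopped_since = None

    def update(self, speed_m_s, now=None):
        """Feed in the latest GPS speed; returns True when a collision is assumed."""
        now = time.time() if now is None else now
        if speed_m_s > self.speed_threshold:
            self.stopped_since = None
            return False
        if self.stopped_since is None:
            self.stopped_since = now
            return False
        return now - self.stopped_since >= self.stall_seconds
```

When `update` returns True, the car backs up, turns, and tries again.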

I had wanted to do some basic image processing to avoid the barrels, go under the hoop, and go over the jump, but one of my camera mounts ended up snapping after I ran into a curb during a test run. I still have the camera on board for recording a first person view of runs, but I don’t think it’s secured well enough to do any kind of avoidance.


In years past, a lot of people seemed to have compass problems, myself included. A lot of vehicles (again, including mine) tended to turn left immediately and drive into the hay bales. From looking at my logs, it looked like my car was picking up some interference, because it thought it was facing 90 degrees clockwise of where it actually was.

I saw similar problems when I was testing this year. Debugging this problem was aided immensely by remote monitoring. I was able to reproduce this problem and found it occurred consistently when the car would drive over certain parts of my test runs. The compass would visibly deflect as the car drove past a point and then return back after the car had put some distance between itself and the point. It appeared that there was something in the ground that was interfering with the magnetometer.

I started logging the raw magnetometer readings and found that when there was significant deflection, the magnitude of the readings increased dramatically. I figured that I could monitor these magnitudes and drop readings when they are far away from the expected magnitude. I added some code to monitor the mean and standard deviation of the magnitude readings while I was calibrating the hard iron offsets of the compass; then I added code to drop readings if they were more than 2 standard deviations from the mean. This helped, but a lot of readings were being dropped, even though they appeared legitimate, i.e. the heading looked correct. Pushing that up to 4 standard deviations seemed to provide a better balance.
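The magnitude gating boils down to a calibration pass and a per-reading check. Here’s a sketch with the 4-sigma cutoff mentioned above (the function names are mine, not from my actual code):

```python
import math
import statistics

def magnitude(x, y, z):
    return math.sqrt(x * x + y * y + z * z)

def calibrate(samples):
    """Compute the mean and standard deviation of magnetometer magnitudes
    collected while calibrating the compass's hard iron offsets."""
    mags = [magnitude(*s) for s in samples]
    return statistics.mean(mags), statistics.stdev(mags)

def is_valid_reading(x, y, z, mean, stdev, max_sigma=4.0):
    """Drop readings whose magnitude is too far from the calibrated mean.
    2 sigma rejected too many legitimate readings; 4 was a better balance."""
    return abs(magnitude(x, y, z) - mean) <= max_sigma * stdev
```

A reading passing this check is no guarantee the heading is right, but a reading failing it is almost certainly interference.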


Part of the requirements for entering the AVC is providing a demo video or pictures a few weeks before the competition to prove that you’ve made some progress. Here was mine:

The car mostly works and is following waypoints! I still need to do more testing, investigate some better waypoint following techniques, and get a physical button wired up.


Sparkfun AVC Build Log 2015-04-13

In my previous post, I laid out a list of basic things that I needed or wanted to get done, in order of importance. Since then, I ordered a GPS module (specifically the SUP800F, which integrates a GPS receiver, compass, accelerometer, thermometer, and barometer) and have gotten it working with my Raspberry Pi. I used the directions from Adafruit to get it to appear as a TTY on the Pi and read data from it.

The SUP800F is definitely a good value, but it has some quirks. The documentation is acceptable, although I’ve found a few mistakes, and some things are missing. When you order it, it comes with a USB breakout board that you can use to read data from the module or configure it, which is really handy. You can also download a proprietary Windows program to read and visualize data from the module and configure it that way too. If you want to configure it manually, or from something other than Windows, then you’ll need to find a separate AN0028 document. This is one area where the documentation falls short: the module has some features that are not described in the AN0028 document, and I’m not sure how to configure them manually. You can configure them from the proprietary program, but that’s annoying when I already have the module connected to the Pi. Another annoyance is that the module is modal: it’s always returning either human-readable GPS (specifically, NMEA) messages or a proprietary binary format that includes accelerometer, temperature, and raw magnetometer values. Switching between modes is slow, but I’m hoping that’s a problem on my end that I can fix. The documentation also claims that the compass self-calibrates when you rotate it twice in 5 seconds, but I’m not sure how to clear the calibration bit if it’s miscalibrated.

When I first got the module hooked up, I took the car out for a test spin. And spin it did, in circles, for a long time. Trying to debug what the car was doing was tricky; I was logging all my data, but reading it back and trying to compare that with what the car was doing at any particular time was painful. I could SSH into the Pi and tail the log in real time, but the data were scrolling by so quickly that it was hard to scan. I decided that I needed a monitoring solution.

I wanted to be able to access the remote monitor without needing a specialized program, so I decided to run a small web server on the Pi. Any WiFi enabled device with a web browser would be able to view the data. This was also nice because I could add an easy remote kill switch; click a button on the page, and the car stops.

I ended up choosing CherryPy because it’s a lightweight framework, and because it supports websockets. Without websockets, the page would need to poll the web server to keep getting the most recent information about the car, which would require renegotiating a new TCP connection every time. The Pi continually sends data to any connected clients, which is updated in the web page. I also added some buttons at the top to remotely control the car, including starting, stopping, and calibrating the compass. Finally, I plugged my logging infrastructure into the page so that I can view recent info, warning and error messages. Here’s what it looks like:
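On the wire, each websocket push is just a JSON snapshot of the car’s state. A hypothetical version of the serializer (these field names are invented for illustration; my real schema differed):

```python
import json

def telemetry_message(state):
    """Serialize the car's current state for a websocket push to the
    monitoring page.  'state' is a plain dict of the latest readings."""
    return json.dumps({
        "latitude": state["latitude"],
        "longitude": state["longitude"],
        "heading_d": state["heading_d"],
        "speed_m_s": state["speed_m_s"],
        "recent_logs": state["recent_logs"][-20:],  # only the last few log lines
    })
```

The page just decodes each message and updates the corresponding DOM elements, so any browser on the WiFi network works as a monitor.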


Based on this, it looks like my compass was having some issues. I’m going to focus on debugging that next.


Sparkfun AVC Build Log 2015-03-09

I entered the Sparkfun Autonomous Vehicle Competition for the first time last year. It didn’t go as well as I had hoped, so I’m hoping to start earlier and make some changes this year. This is going to be a short recap of the progress I’ve made so far.


Last year, I started late and spent too much time working on less important things at the expense of critical functions. This year, I’m going to prioritize things by importance and ease of implementation and get the earlier things tested and working well before trying other things. Here’s a rough list of things that need to be done in order of priority:

  • Critical
    • Control the car from the onboard computer (done!)
    • Complete sensor data processing (in progress)
    • Complete basic navigation to waypoints
    • Add a physical start button
  • Nice to have
    • Include basic collision detection and recovery
    • Use a Kalman filter for position estimation
    • Add a remote kill switch
    • Implement obstacle avoidance
    • Consider the shortcut, non-GPS option, and ramp and hoop
    • Estimate speed from accelerating from a standstill
    • Look at faster motors and other upgrades
    • Get remote monitoring working


Last year, all of my code was in Python. Python is nice because it is easy to test, easy to prototype new ideas in, easy to edit on the Pi so that you can make changes in the field, and has fantastic library support (want to output PWM from the Raspberry Pi’s GPIO pins with hard real-time requirements without using much CPU? There’s a library for that). Its biggest drawback is that it’s slow; I’ve heard estimates of it being around 40 times slower than optimized C code.

Now before you write Python off as an option, I would strongly urge you to benchmark your code and consider the tradeoffs. Last year, I was running my code on a Raspberry Pi Model B with an ARMv6 processor at 700 MHz. The GPS I was using only provided updates at 1 Hz, while the compass provided updates at 5 Hz. My vehicle had a top speed of about 2 meters per second, and my code used the GPS readings directly without any filtering. With my code processing this information and controlling the car at 5 Hz, I was comfortably sitting under 20% CPU utilization. Python was perfectly adequate.

This year, I have a much faster car, a GPS module that updates more frequently, and I’m hoping to do some extra position processing and image processing for obstacle avoidance. I started modifying my original code but found after running some benchmarks that the CPU usage was uncomfortably high, so I started looking for a new language.


My initial thought was to try Rust, a new programming language whose development is sponsored by Mozilla. Rust compiles to machine code, so it should be about as fast as modern C++. It also has a rich type system that prevents null pointer errors, use-after-free errors, and data races.

Rust is an evolving language. They only recently released an alpha version, and they recommend that you always use the nightly builds. Unfortunately, there is no official ARMv6 support yet. I did find a guide for compiling a cross-compiler targeting the Pi, but I was worried that that would prevent me from easily modifying the code in the field. After some digging, I found a few months old build that ran on the Pi. This worked fine, but trying to get help when I ran into issues was problematic because most people were only familiar with the nightly version. Also, I found that the trigonometric functions did not work.

At this point, I wasn’t sure what to do. I started rewriting my code in C++ but became frustrated with, well, everything about C++. Build systems, Makefiles, Sconstruct files, header files, circular dependencies, manual memory management, race conditions, linking, preprocessor directives, unsafe macros, null pointer errors… During my brief time with Rust, I’d forgotten about all of the annoyances of C++. Fortunately, another user started making unofficial builds of Rust for the Pi and I was able to use them. And, the trigonometric functions worked!

Rust hasn’t been all rainbows and sunshine though. Being a pre-1.0 language, it’s still undergoing a lot of syntax changes, and I’ve had to rewrite parts of my code several times. This should stop in a month or two though. Partly as a result of this constant change, it’s also lacking an extensive library. Fortunately, I’ve found that it’s very easy to call external C functions. Finally, the biggest problem I’ve had is the learning curve. With other languages, it’s easy to pick up bits and pieces of the syntax and learn as you go, but I’ve found that you really need a good handle on the type system before you can do much of anything. Be prepared to read through the Rust book before setting off.

I guess that’s it for now. Over the next few weeks I hope to get the GPS sensors working and the car doing some basic waypoint navigation. Once that’s done, I’ll add a physical start button. That should leave me in decent shape for the competition, even if nothing else goes as planned.


2014 Side Projects

I recently saw This Year in Side Projects on Hacker News, and it seemed like it would be a good idea to document my year too.

Side Projects


Node JavaScript libraries cache

I’ve always thought that one of the most clever things that Google has done is incentivize website owners to allow Google to track their users by providing them with free tools. Install Google Analytics on your site and they’ll provide you with real-time information about how many users are visiting your site, where they are coming from, and how they got there! Of course, this information is collected by sending requests from your browser to Google, which they presumably use to track users.

It’s not that I explicitly distrust Google, but after the revelations that the NSA has backdoors inside many companies’ servers, I’d rather not send the information at all. It’s really easy to install privacy add-ons that block such tracking, but Google also offers free hosting for popular JavaScript libraries, and blocking those requests breaks sites. They sold this hosting with the argument that sites would load faster and that users wouldn’t need to download the same library on a dozen different sites. These are good points, but the latest minified version of jQuery is only 33 KiB uncompressed, and most sites that I’ve seen use different minor versions of jQuery anyway, so it’s not like a lot of bandwidth is being saved. So you’re left with a choice between keeping your viewing habits private and having websites that function.

To avoid hitting a CDN and potentially being tracked, I wrote a small app in Node that runs locally and intercepts requests for JS libraries, serving each one from a local cache or downloading it to the cache and then serving it. Just edit your hosts file to point the CDNs at localhost and you should be good to go. There’s still some room for improvement. I haven’t tried to get it to work with HTTPS, although I haven’t found any sites that use CDNs over HTTPS, so that hasn’t been a problem yet. Google also offers font hosting, with some magic on the backend to dynamically generate CSS for only the font sizes you need; that isn’t supported yet, but it’s a start.

TLS upgrader

Earlier in the year, I installed the browser add-on Calomel SSL Validation, which ranks the strength of your SSL connection. Initially I thought something was broken because it ranked many sites as very weak, but then I realized that many websites just default to terrible encryption.

The way TLS negotiation works is that your browser sends a list of algorithms it supports in the order it prefers them, and the server you’re connecting to picks one that it also supports and responds back. Unfortunately, a lot of servers prefer weaker algorithms because they require fewer CPU resources, and they’ll ignore your browser’s preference for stronger ones.
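A toy model of that negotiation step makes the problem obvious. This isn’t real TLS code, just an illustration of why the server’s preference ordering wins:

```python
def negotiate(client_prefs, server_prefs, honor_client_order=False):
    """Pick the cipher suite for a connection.

    Both lists are ordered strongest-first from their owner's point of
    view.  Many servers ignore the client's ordering and choose by their
    own list, which is often sorted CPU-cheapest-first.
    """
    if honor_client_order:
        chooser, other = client_prefs, server_prefs
    else:
        chooser, other = server_prefs, client_prefs
    for suite in chooser:
        if suite in other:
            return suite
    return None  # no common suite: the handshake fails
```

Even if the browser lists a strong suite first, a server that iterates its own list first will happily land on the weak one.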

Initially, I tried just disabling the weakest algorithms in the browser, but some sites (Amazon, Wikipedia) stopped working entirely. Others would load most of the page, but one particular part would be served with different encryption than the rest, so that part would fail. It was really frustrating to watch an entire YouTube page load and then have the video mysteriously fail. To work around this, I wrote a SOCKS proxy that runs locally, intercepts HTTPS requests, and incrementally renegotiates TLS connections: it prefers strong algorithms, and if no common algorithm is found, it falls back to weaker ones until a match is made, then forwards the connection to the browser.

Halfway through the project, I realized that an attacker could use this to force weaker connections. I figured that there must have been some protection in the protocol against these kinds of attacks, and after reading enough of the IETF TLS document, it turned out there was. I finished up the project just because it was a good learning experience, and sure enough, Firefox gives a big scary warning that someone is tampering with your connection when you use it, so that was a dead end.

I had originally wanted to use an external proxy so that the same proxy would work with any browser or client, but that wasn’t going to work. I figured that any program that tried to incrementally renegotiate TLS would need to be run in the client. I started looking at modifying SSL Anywhere to see if that would work, but got pretty lost and didn’t make much progress.

Raspberry Pi radio control

Ever since I read about how you can use software to turn a Raspberry Pi GPIO pin into an FM transmitter, I was intrigued by the idea of modifying it to drive a radio-controlled car. It would be really easy to make a robot by buying an off-the-shelf RC car and just taping a Raspberry Pi to the top. You can read more about it here.

SparkFun Autonomous Vehicle Competition

Related to the above, I entered the SparkFun Autonomous Vehicle Competition. I used an RC car from Radio Shack and an Android phone for telemetry data. It didn’t go well – I started too late and probably spent too much time writing the Raspberry Pi radio control. On the plus side, my car never had problems moving; a lot of people had trouble even making it off the line. My design was completely disconnected – the phone connected to the Pi over WiFi and the Pi drove the car by broadcasting signals. I think avoiding wires and connections helped avoid a lot of gremlins.

OpenCV distance estimator

I wanted a quick and easy way of estimating the distance to a target for another project I’m working on. Initially I considered using two targets and image recognition to identify and measure the angle between them, but after some testing I found that I could get very accurate estimates using only a single target. You can read more about this project here.

frozen pipes monitor

The pipes in my apartment froze last year. My landlord had never dealt with frozen pipes before, which is understandable, and his solution was to have me turn my heat up to 80 and wait for the pipes to thaw. He was worried that there might be breaks in the pipes, so he wanted me to turn the faucet on and call him as soon as it started running.

Unfortunately, after a few hours, I needed to use the bathroom. I also wanted to shower and drink some water, but I couldn’t leave the house. I ended up pointing my webcam at the faucet and spinning up a quick website so that I could view the faucet from my phone. Later, I wanted to go out to dinner and not need to constantly check my phone, so I used OpenCV to detect changes in the image and Twilio to send me an SMS when the image changed. I didn’t have an accurate way to test the code because I couldn’t turn the water on, but it worked! When the water came back on, I got a message and checked the page to confirm.
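The change detection amounted to differencing consecutive frames and counting how many pixels moved. My monitor used OpenCV (`cv2.absdiff` plus a threshold); here is the same idea in plain NumPy so it stands alone, with made-up threshold values:

```python
import numpy as np

def frame_changed(prev, curr, pixel_threshold=30, fraction=0.01):
    """Return True if enough pixels changed between two grayscale frames.

    A pixel counts as changed if its value moved by more than
    pixel_threshold; the frame counts as changed if more than `fraction`
    of all pixels did.  Both constants are illustrative and need tuning
    for real lighting conditions.
    """
    diff = np.abs(prev.astype(np.int16) - curr.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_threshold)
    return changed > fraction * prev.size
```

When `frame_changed` fires, the Twilio API call goes out and the SMS arrives.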

Webcam setup


collaborative multiplayer GameBoy

After seeing the hilarity of Twitch Plays Pokémon, I wanted to see if it would be possible to reduce the lag for players. The original version on Twitch accepted commands from users, fed them into a GameBoy emulator, and then streamed the result back out to thousands of connected users. This turnaround caused about 10 seconds of lag, which was really frustrating. I wanted to see if I could modify a JavaScript GameBoy emulator and multiplex only the commands out to each user. Each user’s browser would then emulate the GameBoy locally.

Because the emulator was written in JavaScript, I used Node on the server side so that I could emulate the game simultaneously and provide the full game state to newly connected clients. As with many emulation projects, the easy part was getting the graphical stack to work, but audio synchronization was a lot harder. If a client was emulating faster than the server, then it would need to slow down, which caused noticeable audio artifacts.

Although I got multiple clients to connect and emulate simultaneously, I never ironed out the audio issues, and another site beat me to it. I was also worried about the legality of distributing ROM images along with web pages, which wasn’t a problem with the original Twitch Plays Pokémon.


Looking forward, I’d like to get a head start on another SparkFun AVC entry. I’ve started reading a bunch of papers on improving position and other state estimation using Kalman filters, and I’m hoping to do more manual testing. I’ve also started learning Rust by writing the command code for the car in it.


Sparkfun AVC 2014 report

I first heard about the SparkFun AVC about 2 months before it was held in 2014. It sounded like something I would love to enter, so with the help of my friends at the Boulder hackerspace Solid State Depot, I managed to scrape together a somewhat working entry. I figure it’s about time I write up my experience and my plans for next year.


The AVC has a few different categories: fixed-wing aircraft, rotary-wing aircraft, and ground vehicles. I entered the ground competition because I don’t have much experience with flying, and I was worried about the expense and consequences of crashes during testing: a ground vehicle just bounces off whatever obstacle it runs into, but a flying vehicle can be destroyed. A ground vehicle that malfunctions also won’t go very far, but an aerial vehicle might never be seen again.

The ground competition has a few different classes: 3 based on price and size of the vehicle, and a 4th for vehicles with non-traditional (i.e. not wheeled) locomotion. The cheapest class covers vehicles under $350 total. This might just be sour grapes, but I think this class is the most interesting, because it really forces you to prioritize your spending and come up with creative, cost-effective solutions. I entered this class and barely came in under the limit.



I don’t know a lot about electronics, so I wanted to use an off-the-shelf vehicle and modify it as little as possible. Since reading about how to turn the Raspberry Pi into an FM transmitter, I had figured it would be possible to modify that code to control an off-the-shelf remote-controlled car. This would solve two problems: I could avoid hardware hacking by keeping the control entirely in software, and I could easily swap out one car for another model if I needed to. I spent a while tinkering with it, but I eventually got it working!

All of my command software ran on a Raspberry Pi. I wrote the navigation code in Python. I was initially worried about CPU load with using an interpreted language, but with an update interval of 10 Hz, the CPU load was around 5%, so this wasn’t a problem. For next year, I’m going to try rewriting it in Rust so that I can increase the update interval and do some image processing concurrently.

I’ve had several problems with the Raspberry Pi corrupting its file system when the power was suddenly cut. The best solution is to always shut down the system cleanly when powering it down, but with the vehicle running into obstacles and wires becoming dislodged, this wasn’t always an option. I ended up replacing Raspbian with Industrial Perennial Environment, a distro that mounts the root filesystem as read-only by default. This fixed all of the corruption problems I was seeing. For logging and video and other things where I needed a writeable filesystem, I just made a separate partition on the SD card.


The Raspberry Pi can only broadcast at frequencies up to 250 MHz. A lot of new, high-end RC cars operate at 2.4 GHz, so I was limited to lower-end vehicles. A lot of cheap RC cars only support 6 movement states: forward or reverse, combined with left, right, or straight. This would have worked fine, but I wanted proportional steering. I ended up using a RadioShack Dune Warrior, which has proportional steering and operates in the 27 MHz spectrum. I attached an oscilloscope to the transmitter to observe the signals it broadcast and reverse engineered the commands.

The Dune Warrior worked pretty well at first, but if you read the reviews, you’ll see that one big flaw is that the steering gears wear down over time. This ended up being a problem during the competition, as the vehicle would drift slightly left or right when it was in the resting straight position. I’m planning on seeing if I can 3D print a new gear to replace the current one, but if that doesn’t work, I might go with a simpler 6 directional control vehicle instead next year.


Most of the people I talked to at the competition used GPS to navigate around the track, and I was no exception. The cheapest module I could find cost around $40 and required an Arduino or other real-time processing; Linux isn’t a real-time OS, so the Pi alone really wasn’t appropriate. Instead, I purchased a cheap Android phone that already had a GPS sensor built in. I liked this plan because the phone is already ruggedized and shielded, comes with its own battery, has a touch-screen display, and includes a whole bunch of other sensors.

To collect data from an Android phone, you need some way for it to communicate with the Raspberry Pi. I originally tried to set up an ad hoc wireless network using a USB WiFi adapter, but older versions of Android can’t connect to ad hoc networks. I ended up setting up the Pi to act as an access point and having the phone connect to it. This also made it easy to monitor the vehicle – just connect a laptop to the access point and SSH into the Pi.
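On Linux, a software access point is typically run with hostapd. A hypothetical minimal configuration for this kind of setup might look like the following – the interface name, SSID, and passphrase are placeholders, not my actual values:

```
# /etc/hostapd/hostapd.conf (sketch)
interface=wlan0
driver=nl80211
ssid=rc-car
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=changeme123
```

With dnsmasq (or a static address on the phone) handing out IPs, both the phone and a monitoring laptop can join the same network and reach the Pi.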

I spoke with Michael Shimniok at the Colorado Maker Faire, and he gave me a lot of advice. One thing he suggested is that compasses are unreliable and that one should instead use the heading computed by the GPS module. I ignored this advice because the compass seemed reliable when I was testing in my local parking lot, but sure enough, I had compass problems at the competition, and a lot of other competitors I spoke to did too. Next year, I’ll avoid relying on the compass. It should be noted that the vehicle itself interfered with the magnetometer readings; I worked around this by subtracting the raw magnetometer readings taken with the phone mounted on the vehicle from the raw readings taken with the phone alone, and using the difference as a per-axis offset.
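A minimal sketch of that offset correction, assuming readings arrive as (x, y, z) tuples – the function names are mine, not from my actual code:

```python
import math

def hard_iron_offset(mounted_reading, standalone_reading):
    """Per-axis interference from the vehicle: mounted minus standalone."""
    return tuple(m - s for m, s in zip(mounted_reading, standalone_reading))

def corrected_heading(raw_xyz, offset):
    """Compass heading in degrees after removing the vehicle's offset."""
    x, y, _ = (r - o for r, o in zip(raw_xyz, offset))
    return math.degrees(math.atan2(y, x)) % 360.0
```

This is a hard-iron-style correction: it cancels the constant field the vehicle adds, but it can’t fix dynamic interference like motor current.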

I read a few different blog posts where otherwise successful runs were stopped short when the vehicle collided with an obstacle. Some entrants I talked to used ultrasonic distance sensors to avoid obstacles, while others used image processing. I was already close to the spending limit, so I took a different approach: rather than trying to avoid obstacles, I detected collisions and then backed the vehicle up to try to drive around them. The accelerometer in the phone was too noisy to provide useful data while the car was moving, but when it stopped, there was a huge drop in the magnitude of the readings, which made collisions easy to detect.
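A hedged sketch of that stop-detection idea, assuming the phone reports linear acceleration (gravity removed) as (x, y, z) tuples; the `drop_ratio` tuning constant is an invention for illustration:

```python
import math

def magnitude(xyz):
    """Euclidean magnitude of one accelerometer reading."""
    return math.sqrt(sum(c * c for c in xyz))

def collision_detected(recent_readings, moving_baseline, drop_ratio=0.5):
    """True when the average reading magnitude collapses relative to the
    baseline observed while driving -- i.e. the car has hit something
    and stopped. drop_ratio is an assumed tuning constant."""
    recent_avg = sum(magnitude(r) for r in recent_readings) / len(recent_readings)
    return recent_avg < drop_ratio * moving_baseline
```

The baseline would be maintained as a running average while the car is known to be driving, so the detector adapts to different surfaces.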


So how did my entry wind up doing in the end? Simply put: not great. I only got the car following basic waypoints the night before the competition at 3 AM (note to self: start earlier next year!). One big problem was that the compass was slow to update while the vehicle was turning, so it would turn past the correct heading and drive a swerving S-pattern towards the next waypoint. To work around this, I calculated the estimated turn time required to point at the next waypoint, turned for that amount of time, and then drove straight for 2 seconds to let the compass update.
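The turn-time estimate boils down to wrapping the heading difference into ±180° and dividing by the vehicle’s turn rate. A sketch – the 90°/s rate is a placeholder, not a measured value:

```python
def turn_duration(current_heading, target_heading, turn_rate_dps=90.0):
    """Seconds to turn toward the target bearing; the sign gives the
    direction (positive = right, negative = left)."""
    # Wrap the difference into (-180, 180] so we always take the shorter turn
    difference = (target_heading - current_heading + 180.0) % 360.0 - 180.0
    return difference / turn_rate_dps
```

In practice the turn rate varies with speed and steering wear, which is part of why dead-reckoned turns like this drift.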

At the competition, the compass caused even more problems. In the first heat, my car took off the line and immediately veered left into the curb. Looking at the logs, I was getting strange readings from the compass. On the plus side, the vehicle recognized that it had stopped, backed up, and tried to go around the obstacle. It repeated this a dozen times, eventually making it around the first corner and netting me 25 points (the only points I would get that day). The other two heats ended similarly, with my vehicle turning completely around, driving straight for 2 seconds, and then turning back.

In all, I had a lot of fun working on my vehicle, and I’m looking forward to next year’s competition. I’m planning to start earlier so that I can avoid 3 AM debugging sessions the night before this time 🙂


Using SimpleCV to estimate distance to a target

For a recent project, I wanted to be able to estimate the distance and direction to a known target using a webcam. I decided to use SimpleCV, a Python computer vision library, to process the images. I wanted to use a Raspberry Pi along with a Raspberry Pi camera module so that the entire project could be mobile; the Raspberry Pi is somewhat underpowered, so I spent a lot of time trying to optimize my code to get an acceptable framerate. The full code is available on GitHub.


To start, I needed a target that would be easily and unambiguously recognizable. SimpleCV has primitives to recognize specific colors and shapes (including circles, squares, and rectangles), so I decided to put a high-contrast circle against a contrasting background; a circle shouldn’t be affected by the roll of either the camera or the target. After trying a few different things, I ended up covering a CD in bright orange duct tape and attaching it to black posterboard. The CD worked well because it has a well-known, standardized diameter.

SimpleCV target: an orange duct-tape-covered CD against black posterboard.

To calculate the distance to the target, we can use some trigonometry as long as we know the physical size of an object and the angle it subtends in the image. CDs have a diameter of 120mm, so that part’s easy. We can estimate the angle by finding the radius in pixels of the identified CD, dividing that number by the width of the image, and multiplying the ratio by the horizontal field of view of the camera. According to fastmapper in the Raspberry Pi forums, the Raspberry Pi camera module’s HFOV is 53.50±0.13 degrees. From there, the distance is the CD’s radius divided by the tangent of that angle.
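Putting those numbers together, the estimate is only a few lines. This sketch uses the constants above; the function name is mine:

```python
import math

# Constants from the post: CDs are 120 mm across; HFOV per fastmapper's
# forum measurement; capture resolution is 320x240.
CD_RADIUS_M = 0.120 / 2
HFOV_DEGREES = 53.50
IMAGE_WIDTH_PX = 320

def distance_to_cd(radius_px):
    """Estimate distance (m) to the CD from its detected radius in pixels."""
    # Fraction of the horizontal field of view spanned by the CD's radius
    angle = math.radians(radius_px / IMAGE_WIDTH_PX * HFOV_DEGREES)
    return CD_RADIUS_M / math.tan(angle)
```

A CD whose radius spans about 18 pixels at this resolution works out to a bit over a meter away, which matches the scale of the measurements later in the post.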

Raspberry Pi camera

The Raspberry Pi camera module doesn’t show up as a Video4Linux device, so accessing it through SimpleCV isn’t as easy as other webcams. There are some projects to get the module to appear as a Video4Linux device, but they looked more convoluted than I wanted. I instead used the picamera module to access the Raspberry Pi camera and saved the data directly into an io.BytesIO buffer and used PIL to open the image.

import io
import picamera
import PIL.Image
import SimpleCV

camera = picamera.PiCamera()
camera.resolution = (320, 240)
camera.start_preview()  # Let the sensor warm up and adjust exposure
with io.BytesIO() as byte_buffer:
    camera.capture(byte_buffer, format='jpeg', use_video_port=True)
    byte_buffer.seek(0)  # Rewind so PIL reads from the beginning
    image = SimpleCV.Image(PIL.Image.open(byte_buffer))

The use_video_port option takes pictures faster, but the individual pictures come out grainier; without it, I was getting less than one picture per second. It’s worth noting that I’m encoding the image to JPEG and then immediately decoding it, which is wasteful and unnecessary; it should be possible to use the raw data and avoid the transcoding, but I haven’t figured out how to do that yet.


To process the image, I needed to find circular orange blobs in the picture. SimpleCV has a method, colorDistance, that transforms an image into a grayscale image where each pixel’s value represents its distance from a particular color. This would work well for our bright orange CD if I could control the lighting at all times, but I plan on using the target outside, and any time the target enters a shadow, the apparent color changes. Instead, I used hueDistance, which compares only the hue of a color and is less susceptible to changing lighting conditions.

Orange hue distance.

One problem with just using the hue is that black is generally a constant distance from any particular hue, and I was having trouble separating the black background from the orange CD. To work around this, I used colorDistance to find a black mask that I could ignore and then only run image processing on the masked portion.

Black areas (represented as white).

SimpleCV has some functions to detect blobs in the image and then determine if the blobs are circles. Once any circles are found, it can also estimate the radius of the circle to subpixel accuracy.

black_mask = image.colorDistance(SimpleCV.Color.BLACK).binarize()
distance = image.hueDistance(ORANGE_DUCT_TAPE_RGB).invert()
blobs = (distance - black_mask).findBlobs()
if blobs is not None:
    circles = [b for b in blobs if b.isCircle(0.25)]
    if len(circles) == 0:
        return None
    # The circles should already be sorted by area ascending, so grab the biggest one
    cd = circles[-1]


Using this setup with 320×240 resolution, I was only able to get about 1 frame per second. There were a few things that I tried to speed everything up. First, I knew that the blobs I was interested in were always in a particular pixel size range, so it wasn’t worth calling isCircle on ones that were too small or too large. The findBlobs function takes optional minsize and maxsize parameters, so invalid blobs can be filtered out early. This made a small improvement, but it wasn’t consistently better because the image doesn’t always have blobs that can be filtered out.

The next thing I tried was cropping the image before doing any processing. For my application, the target is always going to be in the middle half of the image, so cropping first should improve performance a lot. Unfortunately, I ran into problems where cropping the image before calling colorDistance produced clearly incorrect results (cropping before hueDistance worked fine); I don’t know if this is a bug in SimpleCV or in my code. To work around it, I called colorDistance on the full image, cropped the result, and did the rest of the processing from there. With these optimizations, I was able to roughly double the framerate.


Using the nominal values for the CD diameter and the camera’s HFOV, I measured the distance to the target 10 times at a known distance of 1.155m. The framerate was 2.04 frames per second, and the readings averaged 1.298m with a standard deviation of 0.00134m – the precision was surprisingly high, even if the accuracy was not. I believe I read that the field of view changes when using the use_video_port option, but I can’t find if or where I read that, so I’m not going to mess with that value. I was concerned that the outline of the CD was bleeding into the background, so I tried changing the diameter to 110mm and got readings of 1.157m. I repeated this at distances of 2m and 0.5m and got readings of 2.000m (standard deviation 0.00554m) and 0.496m (standard deviation 0.000199m). It’s worth noting that at 2m, the framerate dropped to 1.40 FPS; I’m not sure why. Considering how precise these measurements were, it might be worth reducing the resolution even more in order to increase the framerate.

I did notice that when I reduced the lighting (the above tests were done with ample lighting), the detected radius of the CD dropped slightly which affected the distance calculations. It might be possible to adjust the expected CD radius in response to changing lighting conditions, but the overall error rate in lower light conditions was only a few centimeters, so I didn’t try it.
