Fast Robots
This is my project page for the Electrical and Computer Engineering class Fast Robots at Cornell University. This website will serve as documentation of the creative design work and technical skills I learned in this course.
For more of my projects and up to date information, check out my portfolio website @ jackdefay.com
In this lab I set up the Arduino IDE to run with the dev board we are using, the Sparkfun Artemis Red Board Nano. The video above demonstrates the 4 functions we were supposed to test:
1. Blinking the onboard LED. This code comes from the Arduino Blink example. We simply toggle the state of the LED every 1000 ms, i.e. once per second.
2. Communicating with the board over serial. This code comes from the Artemis serial example, where the function is to echo anything sent from the serial monitor. The code reads from the serial line and writes the same data back out.
3. Measuring temperature with the onboard temperature sensor. This example prints the value obtained from an analog read of the built-in temperature sensor. The raw byte value is read in and printed to the serial line. The example leaves space for a function to convert from raw values to a temperature in Fahrenheit, which could be calibrated by the user.
4. Recording audio and extracting frequency information with the onboard mic. This example reads in sampled data from the built-in microphone, computes an FFT over the window, and prints the highest frequency component.
Each of these tasks was accomplished by uploading example code from Sparkfun to the board and testing the outcome.
This lab uses the same SparkFun Artemis Nano board and adds a USB Bluetooth module to communicate with the Artemis. We use provided base code for these exercises. On the computer side we use a Jupyter notebook and a Python codebase to communicate with the Artemis.
This task requires the Artemis to repeat back a message sent to it, with an additional piece tacked on to the end. In this case, I copied the code from the "PONG" function, substituted the received message into the print, and appended a " :)" to the end of the message before sending it back.
This task requires the Artemis to receive 3 float values from the Python script. I modeled this function after the provided SEND_TWO_INTS function. I added an additional term and replaced the integers with floats. This substitution worked well because the codebase handles floats in the background, properly truncating and packing them for sending over Bluetooth.
This task requires the Python script to read a value off of the Artemis any time it is updated. To accomplish this, I registered a notification handler for the given UUID. Then I wrote a handler function to read in the value, print it out, and return it in order to set a global variable.
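As a rough sketch, the Python side looked something like this (the ble object, the UUID key, and the byte-to-float converter names here are placeholders standing in for the course's BLE wrapper, not its exact API):

latest_value = None

def notification_handler(uuid, byte_array):
    # Convert the raw bytes to a float, print it, and stash it in a global
    # so the rest of the notebook can use the most recent reading.
    global latest_value
    latest_value = ble.bytearray_to_float(byte_array)   # placeholder converter name
    print(latest_value)

ble.start_notify(ble.uuid['RX_FLOAT'], notification_handler)   # placeholder UUID key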
There are multiple ways to accomplish task 3. I went with the most obvious way of setting up a receive_float() service, plugging in the UUID of the float, and writing a handler function to save and print the value. This runs the value through the ArduinoBLE library functions to set up the service. However, we could have instead created a receive_string() service and sent the float value as a string. This method requires us to convert the value back from a string into a float at the end, but otherwise looks identical to the user. When I set it up with this method, I found that I had to use the bytetofloat() function directly on the received byte value rather than using bytetostring() and converting from string to float; the former worked while the latter threw an illegal_argument error. This approach, however, seems to work differently behind the scenes. Rather than going directly through the ArduinoBLE library, we use the wrapper in BLECStringCharacteristic. This wrapper allows us to send EStrings over Bluetooth, rather than normal strings, which is why we are able to seamlessly append float values which would otherwise require manual formatting and might lose precision.
In this lab I finally learned how to use Jupyter notebooks. I've used them before, but only in a web interface that hid what was actually going on. The approach in this lab made much more sense to me and taught me to love Jupyter notebooks! I also learned more about the Bluetooth protocol, and especially Bluetooth Low Energy (BLE).
In this lab we assembled the sensor circuits, tested their capabilities, and got basic functionality running on the Artemis.
I will use a combination of both suggested techniques to index the two sensors. First, I will use the separate shutdown pins to activate only one sensor. Then I will change the I2C address of that sensor in software and activate the second sensor. Since this change is only in software, the new address has to be set up each time the program runs, but this can be done consistently in setup by using the separate shutdown pins.
I plan to position the Time of Flight sensors at perpendicular angles to maximize coverage: one will go in the front and one on the side. This will also help the car localize in a hallway, for example. If the sensors are too close together they can interfere with each other, so separating them helps reduce this source of error. The sensors will of course miss anything on the opposite side of the car, and there may be a blind spot between them at the front corner.
The address shows up as 0x29 as expected with a single sensor connected. With both ToF sensors and the IMU connected, the I2C scan returns every address.
The sensor has 3 distance modes. Short mode has a range of 1.3 m (~4 ft). This is a respectable range for a robot, and the shorter range would likely allow a higher sample rate or better spatial resolution, but in an open space the robot would be unable to localize: good for hallways, not for outside. Medium mode reaches 3 m (~10 ft), enough to see wall to wall in a small room; probably a good middle ground, but it would still have trouble outside. Long mode reaches 4 m (~13 ft), which is impressively far if it is consistent at that range. I plan to start with medium range as it seems to be a good compromise, and increase or decrease it depending on how well it works.
This test was conducted down the hallway in my apartment. I set up a tape measure and pointed the ToF sensor at the door at regularly spaced intervals. Overall I would say that the sensor performed quite well. Each distance mode remained relatively accurate past its advertised range. The short range mode has an advertised range of 1300 mm while the long distance mode has an advertised range of 4000 mm. Interestingly, the short distance mode failed by eventually returning zeros, presumably because no signal bounced back for it to read. The long distance mode, however, started returning quite short values, which could be from multipath interference. The plot shows 4 data points per location, 2 from each sensor. It demonstrates reasonable repeatability, and predictably has increasing error at longer distances. I qualitatively tested the sensor on different surfaces and it performed impressively well, maintaining accuracy even on difficult materials like my laptop screen and a plastic bag. Qualitatively the sensor took samples very quickly; I actually slowed it down to make the output more readable. Occasionally the sensor took extra long to take a sample, especially at long range, so it may not be consistent enough for some time-critical applications.
Adding a second sensor posed several challenges. Applying the strategy I suggested in the prelab, I hooked up the two sensors. When I ran the example code, both sensors responded to the call for data, and interestingly whichever sensor had a shorter distance to report seemed to take precedence. I suspect this is because that sensor had a value available first, so it reported back to the Artemis first. I had a lot of trouble figuring out how to shut down one sensor at a time. I found that the library function to toggle a sensor on and off did not work, but simply writing HIGH/LOW to the XSHUT GPIO pin worked well. With this strategy in place I was able to isolate one sensor at a time in order to change its address. I also had some trouble figuring out how to change the address of the sensor; there were conflicting accounts in the SparkFun documentation and forums. However, after some testing I was able to get the sensor address changed and it worked well.
I wasn't able to determine the address from the I2C scan because I already had all 3 sensors soldered together. However, by trial and error it was clear that the AD0 value should be set to 0. I later read the documentation and found that the value should be 0 when the jumper is connected and 1 when disconnected. This matches a visual inspection of the board, as the jumper on the back is currently connected.
In its raw form it is hard to tell exactly what the signals mean, but visually it is clear that as I rotate and flip the sensor the lines move in a pattern that matches my understanding of what each axis should be doing. The calculated pitch and roll values are more clearly legible and back up this claim.
Here are the pitch and roll values converted directly from the accelerometer. Although the signal is rather noisy, it clearly follows the shape we would expect from rotating the sensor. It is centered at 0 and peaks at about 3.1 and -3.1, which matches the expected [-pi, pi] range of the signal. This signal will clearly require a low-pass filter, but it looks like it doesn't need an offset or scaling, which simplifies the problem.
These plots show the output and frequency response of 3 different scenarios. Note the different ranges on the y-axis. The first plot shows the results of holding the IMU down on a table. The second plot shows the same setup, but with me tapping the sensor at varying intensities. The third plot shows what I believe to be normal movements of the sensor, where I spun it around in the air. We can determine an appropriate cutoff frequency by analyzing these plots. The tap seems to create low-level noise across the frequency range. Since the real signal is concentrated at low frequency and tapers off by 5 Hz, I think a 5 Hz cutoff frequency is a good place to start.
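For reference, here is a small Python sketch of how a cutoff frequency maps to the coefficient of a first-order low-pass filter, assuming a sample period of roughly 2 ms (an assumption based on the loop delay I mention below):

import math

def lpf_alpha(f_cutoff_hz, dt_s):
    # First-order IIR low-pass: alpha = dt / (dt + RC), with RC = 1 / (2*pi*fc)
    rc = 1.0 / (2.0 * math.pi * f_cutoff_hz)
    return dt_s / (dt_s + rc)

def lowpass(samples, alpha):
    # y[n] = alpha * x[n] + (1 - alpha) * y[n-1]
    out = [samples[0]]
    for x in samples[1:]:
        out.append(alpha * x + (1.0 - alpha) * out[-1])
    return out

print(lpf_alpha(5.0, 0.002))    # ~0.06 for the 5 Hz cutoff suggested here
print(lpf_alpha(80.0, 0.002))   # ~0.5, the value I ended up using later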
This plot shows the raw output from the gyro for the change in pitch, roll, and yaw. Compared to the filtered accelerometer data, the pitch and roll from the gyroscope change much more slowly. There is significantly less random jitter, resulting in much smoother lines. However, when I turn the sensor to a position and return it, about half the time there is a moderate amount of steady-state error. As expected, the gyro is precise and less noisy, but with noticeable sensor drift. When I decrease the sampling frequency by a factor of 10, there is a significant difference in performance. At a 2 ms delay the values do not drift too much, but they also do not always reach the full range they are supposed to. At a 20 ms delay the values drift a fair amount, but they much more effectively reach the expected range of motion. This initial observation suggests that the gyro measurements can be further tuned, perhaps with a scaling factor at a low sampling frequency.
This video shows a demonstration of the fused sensor outputs (the camera focuses on the plot in the last few seconds). I apply the complementary filter as discussed in class to the filtered accelerometer data and the gyroscope data. I chose an alpha value of 0.8 to bias the output towards the accelerometer data, because with a larger gyro component I noticed sensor drift in the output. If I can better scale the gyro data I may equalize the proportions later. I ended up going with an alpha value of 0.5 for my low-pass filter on the accelerometer data, which corresponds to a cutoff frequency of about 80 Hz.
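Here is a minimal Python sketch of the complementary filter update as described (the real code runs on the Artemis each loop iteration; alpha weights the accelerometer estimate, matching the convention above):

import math

ALPHA = 0.8   # weight on the accelerometer estimate

def accel_pitch(ax, az):
    # Pitch straight from the accelerometer, range [-pi, pi]
    return math.atan2(ax, az)

def complementary_update(pitch_prev, gyro_rate, ax_lp, az_lp, dt):
    pitch_gyro = pitch_prev + gyro_rate * dt    # integrate the gyro rate
    pitch_acc = accel_pitch(ax_lp, az_lp)       # low-passed accelerometer estimate
    return ALPHA * pitch_acc + (1.0 - ALPHA) * pitch_gyro

# one example step: nearly flat sensor, slow rotation, 2 ms time step
print(complementary_update(0.0, 0.1, 0.05, 0.99, 0.002))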
In this lab we used the remote control to test different characteristics of the car. This has two key goals: to make sure that our autonomous control of the car is using the full potential of the car, and to explore the control characteristics of different movements to aid in the design of autonomous control algorithms.
I worked with Robby on the data collection and experimentation in this lab. We collected data together, extracted data from videos and parsed raw sensor outputs in a shared document, but did independent analysis and writeups.
We realized that in order to test more interesting behaviors we would have to instrument the car. We decided to start with just the IMU because that would be faster and give us the motion data we were more interested in. We took out the lights and some unnecessary pieces and replaced them with the Artemis and IMU sensor with a small separate battery. To communicate with the sensor we built off of the bluetooth code introduced in lab 2 by setting up a string handler and sending the IMU sensor float values as a comma separated EString.
We had some trouble at first reading sensor values from the IMU, but learned a lot in the process. The accelerometer was maxing out its readings, so we learned how to extend the sensor to its full range. Unfortunately, this only gave us a more complete picture of the incredible amount of noise on the accelerometer line when the wheels turned. The noise was so great as to make the signal entirely unreadable. This forced us to rethink how we would go about collecting data for the different experiments, opting to take videos of some of the tricks instead. We were still able to use the gyroscope, but were confined to relative measurements because of the large sensor drift.
The car's dimensions are 18 x 14 x 8 cm and it weighs 490.6 g with the battery (470.55 g without).
We set out to measure the car's acceleration, but found the accelerometer data to be rather unreliable while the motors were moving. As such, we used videos to collect speed data instead. We set up the car on a 10ft stretch starting from stationary and characterized its speed over several trials. It took an average of 1.24 s for the car to cover 10 ft with a standard deviation of 0.11 s. This is a fast car.
While test driving the car it was clear that turning will be a challenge. The car is very fast and the motors only move in an on/off configuration. Combined with slippery floors this causes the car to drift and skid all over the place, often spinning to a stop when you try to turn.
This test consisted of attempting to turn the car at the smallest possible increment while recording gyro data. The graph clearly shows rotational steps which we can evaluate. The data is a little confusing because it goes well outside of the [-360,360] range but this is because the values were obtained from the gyroscope, which integrates over time. Since we never changed direction on this test, it continued integrating well past 360. Regardless, we can analyze the relative angle before and after a turn. This plot shows two trials, one clockwise and one counterclockwise.
We found the minimum turning resolution to be 40.3 degrees and 37.8 degrees for clockwise and counterclockwise rotation, respectively. I am confident that we can achieve finer resolution through autonomous control.
This plot shows the results of a test to characterize the ability of the car to "ground flip". We call it a ground flip when the car flips over by reversing direction quickly, as shown in the video. We ran many trials of this behavior from different runup distances and took videos of the trick. We then used the videos to characterize the flip based on how much space the car had to speed up. During these tests we learned that the success of the flip was determined more by how quickly the motors switched directions and how long the opposite direction was held. This plot demonstrates the necessity of a certain amount of runup, but our learned experience will hopefully help us design an autonomous trick that outperforms these manual ones.
For our final test we attempted to characterize a surprising "trick". While experimenting with spins we accidentally turned the car on its side. By spinning the wheels back and forth from this sideways orientation we were able to induce forward motion and controllable turning. Through experimentation we were able to isolate two behaviors: predictable forward motion by alternating forwards and backwards, and tunable turning by alternating backwards and left.
The best explanation we could come up with for this behavior is that the slippery floors allow the rotational momentum from the free-spinning wheels to hop the car along the ground when the bottom wheels switch directions, jolting it forwards. On a grippier or slicker floor this would not work, but this particular combination worked well. We also determined that the quality of the motion is determined by the switching frequency of the two directions. Switching fast enough keeps the motion controlled, but too slow and the car spins out of control.
To characterize this phenomenon we conducted several trials, starting with just the forward motion, where we varied the duration of each direction. We took videos and compared the timing to the quality of the motion. This quality was assessed qualitatively, but looking at the video it is pretty clear which steps are successful and which are not. The graph is somewhat rudimentary but simply shows several "steps" of 3 movements and compares the durations of each component. Blue lines correspond to a successful step while red corresponds to an unsuccessful one. Although this is a preliminary exploration, this data will give us a good place to start when we attempt to replicate this behavior autonomously, crucially providing us an upper bound on the duration of the components.
This lab taught me a lot about the car and the limitations and challenges of the sensors. The car is very fast and has a lot of potential for autonomous control. With more precise timing and speed control of the motors I hope to be able to autonomously outperform each of the manual tests we conducted. However, the susceptibility of the accelerometer/magnetometer to noise will be challenging to overcome. It will be exciting to see how fast this robot can really be!
This is another foundational lab where we soldered, tested, and tuned the motor drivers for the car. In order to prepare for future labs, I took this opportunity to set up the internals of my car and route all of the wires neatly, as shown above.
The setup I used is shown above. I daisy-chained the I2C connections on the TOF sensors and IMU from the Qwiic connector on the Artemis. Each TOF sensor requires a GPIO from the Artemis to control its shutdown pin, which allows me to set the I2C addresses. The two motor controllers are powered by the motor battery, which is daisy-chained across. The input signals come from 2 PWM ports on the Artemis. I decided to use pins 9, A14, A15, and A16 (I originally used pin 10, which turned out not to be capable of producing PWM signals). Next, I connected each to the MCU ground from the Artemis. Finally, the Artemis is powered by a separate battery over the built-in battery connector.
The physical placement of the components did influence the choices in pins I made. For example, I made sure to put the two shutdown pin connections next to each other so they could be routed together, and I put all of the PWM and their respective ground connections on the same side of the board for easier positioning. Of course, I shorted the two motor outputs on the motor controllers to deliver higher power at lower heat to the motors.
The Artemis and the motors should be powered by separate batteries to increase the overall runtime of the car, and to reduce noise coupled from the motors into the Artemis's ground. With the motor control circuitry the Artemis is fairly isolated from the motor noise, but if they shared a battery it could cause the Artemis to malfunction.
I made sure to twist any wire pairs that could be twisted, used wires slightly longer than necessary (but not too long), and used multistranded wire for all connections.
I started the power supply at 4V, which is at a normal range for the batteries but not max, and set the current limit to 0.1A. This only caused the motors to produce a whining noise so I increased it incrementally to 1.5A. The motors started spinning around 0.3A.
This video demonstrates the Artemis driving the motor drivers with a PWM signal. The motors respond by turning on, and the oscilloscope shows that the motor driver output is accurate.
void setup() {
  // Configure the four motor-driver input pins as outputs and start them low
  pinMode(A16, OUTPUT);
  pinMode(A15, OUTPUT);
  pinMode(9, OUTPUT);
  pinMode(A14, OUTPUT);
  digitalWrite(A16, LOW);
  digitalWrite(A15, LOW);
  digitalWrite(9, LOW);
  digitalWrite(A14, LOW);
}

void loop() {
  // Hold this motor off for 10 seconds between test runs
  analogWrite(9, 0);
  analogWrite(A14, 0);
  delay(10000);
  // Spin the motor back and forth at full speed, 5 times in each direction
  for (int i = 0; i < 5; i++) {
    analogWrite(9, 0);
    analogWrite(A14, 255);
    delay(500);
    analogWrite(9, 255);
    analogWrite(A14, 0);
    delay(500);
  }
}
Above is the code I used to run the test.
This video shows the robot running the same test but on battery power.
And of the other side on battery power.
To test the lower range of the motor speed I ran a test where I progressively lowered the driving PWM signal. Here I reused the motor test code above, but rather than full power I input 10*i. This gave me a resolution of 10/255, which seemed sufficient for this test. In the video, the motors are surprisingly able to spin all the way down to a PWM value of 10. When I ran this test multiple times, it sometimes stalled out at 20 or 30, but regardless, the lower range seems to be much better than I expected. However, this test would surely yield different results if performed under the weight of the robot on the ground.
Here is a video of my first attempt at the car driving in a straight line. The code I used to run this has an initial delay of 20 seconds, then drives forwards at max speed for 1 second and stops. Since I do not yet have a reset button that is accessible on the car, I found I can induce a reset by re-uploading the code to the Artemis.
And here it is with the correction factor. I applied a somewhat arbitrary factor of 0.9 to the right wheels to slow them down and it worked well.
Finally, here is a video demonstrating open loop control of the robot. The command sequence here is to drive straight for 0.3 seconds, wait 0.3 seconds, point turn for 0.1 seconds to the left, and repeat once. The car responds well to this, driving forwards an appropriate amount, turning near 90 degrees, and shows remarkable ability to traverse difficult terrain, climbing the folded over rug well.
This lab was fun because the robot really started coming together. I got to set up the whole system and package it into the car body. Overall this lab went pretty smoothly; the only real problem I had was when I accidentally connected the motor control pin to a non-PWM-enabled pin on the Artemis and it behaved unexpectedly. This problem was solved with help on Ed Discussion and a quick fix with the soldering iron. Generally speaking, the robot performs quite well, reaching high speeds easily, and I can already see how autonomous control will outperform manual control with the precise timing available with this setup. I was, however, surprised by just how much peak current was required when the motors jump from zero to max speed; I imagine this puts a lot of stress on the battery.
This lab was about implementing closed loop control of our robot. Here we had two options: rotational control or distance control, using the gyro or TOF sensors, respectively. We had free rein to design the controller to best fit the application, with the general guidance of implementing a PID controller. I chose a PD controller for the drifting task. This task consists of controlling the robot to drive straight forwards, drift into a 180 degree turn, and drive back out the way it came. This is a good example of a "fast" maneuver because it is only with a high-frequency control loop that we can accomplish it, and there are no guarantees about the behavior.
The prelab for this lab was very important: it was about setting up a system for running tests on the car. Closed loop control is a difficult thing to debug through visual observation alone, so this prelab allowed us to take a more data-driven approach to debugging and tuning.
I set up my car to only start after an activation signal from the Python end in the handle_command function. This also required adding PID to the enum, and adding it to the command_types file on the Python end.
Here, the PID signal sets pid to true and sets the initial setpoint to 0 straight ahead. The if case below only allows the pid code to run if the pid variable is true.
The second part of the if case, "t<arraylength", also prevents the pid code from running if the entire data array has been filled. t is incremented each cycle.
Finally, the data in the array "e_list" is written back to the python side in the else case:
This only writes back a single value, in this example I wrote the error e. During later debugging however I added additional data points to see how variables changed together. Note that the very first thing this case does is set the motor speeds to zero. Without this the wheels will continue spinning with no control whatsoever as the bluetooth tries to write this list.
For this lab I implemented a PID controller based on the gyro sensor and the motors. The controller inputs a setpoint and attempts to drive the motors to cause the gyro to read at that setpoint. The units are in radians.
There were a few additional features I added beyond the base PID controller. First, because I found the d term to be rather noisy, I added a low-pass filter to smooth it out. This got rid of the jittery behavior I observed with high Kd. I also modified the calculation of the d term to avoid the derivative kick when changing setpoints.
Next, I observed the i term increasing without bound whenever the robot got stuck, say against a wall. This was a problem because it would often get unstuck, but then spin out of control due to the integrator windup. To address this I added a cap on the maximum i term.
Finally, I found that a success condition of precisely zero error was almost never reached. To fix this, I added an interval over which the angle is considered close enough, defined by the threshold variable.
I used several helper functions to abstract away some of the non-idealities of the system. First, I added a skew factor to reduce the speed of the right motor slightly so the car would drive in a straight line.
In addition to the skew, I also added a factor to account for the deadband. Here I use 50, which is a bit greater than the minimum value I found earlier, but for rotations I found the deadband to be larger. Finally, I make sure to clamp the values between [0,255] and cast back to an integer.
The last helper function I wrote is for reading the gyro. Rather than dealing with the raw sensor values in the PID loop, I first calculate the angle based on the raw data.
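Putting those pieces together, the controller logic looks roughly like the following Python sketch (the real implementation is Arduino C; the gains and constants match the values discussed below, and the helper names and integrator cap are illustrative):

KP, KI, KD = 0.85, 0.0, 0.15   # gains from the tuning section
ALPHA_D = 0.5                  # low-pass coefficient on the derivative term
I_MAX = 100.0                  # integrator cap to limit windup (placeholder value)
DEADBAND = 50                  # minimum PWM that actually moves the wheels
SKEW = 0.9                     # right-motor scale factor for driving straight

class PDState:
    def __init__(self):
        self.integral = 0.0
        self.prev_angle = 0.0
        self.d_filtered = 0.0

def pid_step(state, setpoint, angle, dt):
    error = setpoint - angle
    # Capped integrator so it can't wind up while the robot is stuck
    state.integral = max(-I_MAX, min(I_MAX, state.integral + error * dt))
    # Differentiate the measurement (not the error) to avoid derivative kick,
    # then low-pass the result to tame the gyro noise
    d_raw = -(angle - state.prev_angle) / dt
    state.d_filtered = ALPHA_D * d_raw + (1.0 - ALPHA_D) * state.d_filtered
    state.prev_angle = angle
    return KP * error + KI * state.integral + KD * state.d_filtered

def to_pwm(u, right_side=False):
    # Map controller output to a motor PWM: skew, deadband offset, clamp to [0, 255]
    mag = abs(u) * (SKEW if right_side else 1.0) + DEADBAND
    return int(max(0, min(255, mag)))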
The first test I ran was to verify all of the pieces that this lab combines. I had the Python end send a "run" command to the Artemis over Bluetooth, which triggers it to drive forwards for 0.3 seconds. This combined the Bluetooth and motor functions. After this, I set up a writeback (WB) function to write an array of sensor data over Bluetooth. I later used this function in the else case for writing data back. These two tests helped me verify that the different subsystems were working before I put it all together.
Another key feature I added was to put the whole PID loop inside of the Bluetooth if case. I then added an else case that sets the motor speeds to zero. This produced the important behavior of halting the PID control whenever Bluetooth disconnected.
Once this was all set up, testing was fairly streamlined. The car was assembled with the batteries in. I uploaded the code, connected to bluetooth, sent the start command, and after a few seconds it would send back a data packet.
The process for tuning was very similar. Before attempting the main task, I decided to set the success behavior to standing still, so I could watch the robot stationary. My chosen method for tuning was to go one parameter at a time, tuning until the behavior changed. First I set the necessary parameters to fairly conservative values to maintain functionality. Then I increased p until it was responsive but had a small steady oscillation about the set point. I found a good value of Kp to be 0.85.
Next, I chose to tune the d term because I thought it would help damp out the oscillation. This turned out to work well, so I increased d until it damped out the oscillation, but before the behavior became erratic due to noise in the d term. I found a good level for Kd was 0.15.
I decided not to include an i term because I found the sensor drift to be a big problem. As the angle read by the gyro drifts from the truth, an integrator value loses meaning. It is supposed to counteract steady-state error, but when the "truth" value changes this just pushes the robot further from the actual setpoint. I experimented with this term but did not find it to improve the behavior. I set Ki to 0.
Next, I went back and tuned some of the extra parameters. I had to tune the lowpass filter on the d term using alpha. Although I could have done a more formal analysis of this and calculated an appropriate level, I found the experimental approach to work well. Too high a bias and the d term got kind of laggy, reducing responsiveness. Too little and it didn't do the job of reducing the noise. I found a good level to be 0.5.
Next I tuned the threshold value. Again, this was simple to tune experimentally. When I set the threshold too low, the robot would oscillate back and forth and never move forwards. Too high and the robot would not return at an accurate angle. This term had to be co-tuned with the d term a bit, but I found a good value to be 20 degrees. This value does not reflect the accuracy of the robot's actual turn, merely the range it decides is acceptable before it starts driving forwards.
Finally, I tuned the motor deadband and skew with simple experiments. For the skew I actually used the run command from before, and found a 0.9 bias on the right motor to make the robot drive in a straight line. This was the same value as in lab 5. To tune the deadband I simply increased it until I didn't hear the motors stalling during tests. After finding a pretty good value for each of these parameters, I went back through and tuned more precisely to reach the values stated above. Only through iterative tuning was I able to get the drift performance I was looking for.
This demo shows a typical drift from the car. Some takes were better, but I could never seem to get them on camera. Depending on the variation in the traction on the floor it occasionally overshot and had to correct more, or drove back slightly at an angle. Overall, a success! The following image shows a plot of the gyroscope data from executing this trick.
This was a hard lab! It was the first lab where we integrated all of the pieces we had been working on: Bluetooth, motors, and sensors. I took extra time in the beginning to assemble the robot well, so everything was secure and routed and I could put the case back on. Having the robot packaged up like this was very helpful for testing and I expect it to help moving forwards. The actual code writing for this lab wasn't so bad, mostly copy-pasting pieces together. Tuning, on the other hand, took a long time. I tried tuning in my apartment since I was working on the PID code during lab, but my floors were nowhere near slippery enough. After going to Phillips I was able to tune much better, and in the process of tuning I made a lot of modifications that improved my code. Overall, this lab was really rewarding because it was the first time my robot really felt autonomous!
The goal of this lab was to learn how to build a Kalman filter and eventually implement it on our robot. The Kalman filter will be used to perform the stunt (in my case drifting) at a specific distance from the wall.
Here is the TOF sensor data from the step response test. Since the TOF sensor produces new readings less often than the gyro, the data points show up as spikes on the graph. At 1.5 seconds the car hits the foam pad on the wall and turns to face down the other hallway, which causes the large jump in the distance readings.
This plot shows the velocity calculated directly from the previous plot. Overlaid in orange is the PWM signal that was sent to the motors. Due to the sparseness of the TOF sensor readings, this velocity data is not very useful.
From the first plot I determined the velocity to be 1.765m/s by linear interpolation, and confirmed by calculating from frames in the video. The 90% rise time from the start to this speed is 1.25s.
From these values, I calculated the drag d = 1/1750 (using a steady-state speed of roughly 1750 mm/s) and the effective mass m = (-d*1.25)/ln(0.1) = 0.00031. Therefore A = np.array([[0,1],[0,-1.843]]) and B = np.array([[0],[3225.8]]) from the equations in the slides.
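In numpy form, these parameters and the discretization look something like this (dt here is a placeholder for the loop's sample period):

import numpy as np

# Drag and effective mass from the step response (units: mm and s, unit step input)
d = 1.0 / 1750.0                    # steady-state speed of roughly 1750 mm/s at u = 1
m = (-d * 1.25) / np.log(0.1)       # 90% rise time of 1.25 s -> m ~ 0.00031

# Continuous-time model with state x = [position, velocity]
A = np.array([[0.0, 1.0],
              [0.0, -d / m]])       # -d/m ~ -1.843
B = np.array([[0.0],
              [1.0 / m]])           # 1/m ~ 3225.8
C = np.array([[1.0, 0.0]])          # we measure position via the TOF distance

# Discretize for the loop's sampling period
dt = 0.01                           # placeholder value
Ad = np.eye(2) + dt * A
Bd = dt * B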
Now that we have these parameters, we can fit the Kalman filter. First, I collected data from a run of the PID drift trick.
Here is the code where I implemented the Kalman filter. The kf() function is copied from the lab handout, but I had to modify it a little to work for task B.
First, I had to change sigma.dot() to a multiplication by sigma because for task B, sigma is a single value and the dot function is undefined. The second change I had to make was in the final line, using np.eye(2) instead of np.eye(3) because the size of state space is 2 not 3 for this task.
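For reference, here is a sketch of the resulting predict/update function with those two changes folded in (the matrices and the noise covariances sig_u and sig_z are passed in here just to keep the sketch self-contained):

import numpy as np

def kf(mu, sigma, u, y, Ad, Bd, C, sig_u, sig_z):
    # Prediction step using the discretized dynamics
    mu_p = Ad.dot(mu) + Bd.dot(u)
    sigma_p = Ad.dot(sigma.dot(Ad.T)) + sig_u

    # Update step: with a single TOF measurement the innovation covariance
    # is effectively a scalar (sig_z is a 1x1 array here)
    sigma_m = C.dot(sigma_p.dot(C.T)) + sig_z
    kf_gain = sigma_p.dot(C.T).dot(np.linalg.inv(sigma_m))
    y_err = y - C.dot(mu_p)
    mu_new = mu_p + kf_gain.dot(y_err)
    sigma_new = (np.eye(2) - kf_gain.dot(C)).dot(sigma_p)   # 2D state, hence eye(2)
    return mu_new, sigma_new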
Putting this all together yields the plot below. The Kalman filter tracks the dataset well.
Here's the same Kalman filter function ported into Arduino. As you can see, the * operator is overloaded for dot product, ~ is the transpose, and I returned the values by passing the mu and sigma matrices by reference.
And here is the slight modification I made to the main loop:
I also added a call to set the value of y in the tof read function. The initial value of mu is set to y at time 1 (the first time step) and the value of mu is used to determine if the wall is close enough to engage the drift.
Finally, here is a demonstration of the drift trick triggered at a specific distance to the wall:
The objective of this lab was to push the boundaries of what our robot could do. To this end, we developed a closed loop and open loop stunt. This lab synthesizes the work we've put into the PID and Kalman filters leading up to this.
This video shows the closed loop stunt I developed in task B. The car starts at the white taped line behind the wall, drives forwards and drifts into a 180 degree turn at a minimum distance to the wall of 0.6m. Reading over the instructions again I see now that the stated goal was to just barely touch the line before turning around, but I wanted to push it to the limit and drift as close to the wall as possible. I hope the following video sufficiently demonstrates the control of this trick.
To perform this trick I spent a long time working with the Kalman filter, trying to get it to run quickly and accurately enough to trigger the trick. Although I'm sure it's possible, after not much progress I decided to take a different approach. This trick could easily have been done by guessing and checking the delay time to start drifting and taking a bunch of videos until it worked well. Although I couldn't get the more advanced Kalman filter working, I still wanted to implement a closed loop controller to perform this trick faithfully. I decided to use a middle-ground approach and linearly interpolate the distance data. This approach is not generalized for any drift, but works quite well for triggering a drift at a specified distance to a wall.
I tried several different ways to get this to work, but ultimately I was able to fit all of the modifications to the PID code in this single function. Every time the robot receives a new TOF distance value it calculates a velocity value as the difference between the current and previous distance. Previously I divided by the time interval to get the units right and then multiplied by the precise dt every iteration, but I found this to be pretty error prone. I suspect it is because of the high proportion of noise in these tiny measurements. I found that the robot takes TOF measurements roughly every 8-10 samples so I simply divide the difference by 10 when interpolating it. Finally, I had to ensure that the previous actual measurement was used to calculate velocity instead of the interpolated value. I accomplished this with the vel_ind variable.
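A Python sketch of that interpolation logic (the real code is Arduino C inside the PID loop; variable names here are illustrative):

SAMPLES_PER_TOF = 10   # roughly how many loop iterations pass between TOF readings

def update_distance(state, new_tof_mm=None):
    # state holds the last real measurement, the per-iteration velocity,
    # and the current distance estimate
    if new_tof_mm is not None:
        # New real reading: velocity is the change since the previous *real*
        # measurement (not an interpolated value), spread over the iterations
        state["vel"] = (new_tof_mm - state["last_tof"]) / SAMPLES_PER_TOF
        state["last_tof"] = new_tof_mm
        state["est"] = new_tof_mm
    else:
        # No new reading this iteration: project the estimate forward
        state["est"] += state["vel"]
    return state["est"]

state = {"last_tof": 4000.0, "vel": 0.0, "est": 4000.0}
update_distance(state, 3900.0)    # real TOF sample arrives
print(update_distance(state))     # interpolated estimate -> 3890.0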
Here is the data output from the drift trick:
The gyro plot clearly shows the progression of the trick. At about 17.5 seconds the drift is triggered and the car rotates almost exactly 180 degrees before plateauing again.
Although this plot initially looks a little all over the place, it demonstrates exactly what I was hoping for. Confining our area of interest to about 16.2-17.3 seconds, we can see a smooth (if slightly sawtoothed) progression from 4 meters to almost 1 meter. The line we drift to is located 0.6 meters away from the wall, but due to the lag in the sensor I found the orange line at 1.5 meters to be a good threshold value to trigger the drift. The slight sawtooth pattern comes from when the forward-projected values don't quite align with the new reading, but I honestly expected this to be much more dramatic, which indicates the interpolation was tuned well. Then, with a slight delay after the TOF value passes the threshold, the data spikes when the car turns to look down the other hallway. This worked exactly as I hoped.
This plot shows the raw velocity values that are being used to interpolate the distance data. This plot is less clear, but it is interesting to see what the raw values are. Again, in the area of interest between 16.2-17.3 seconds, we see the velocity steadily decreasing (increasing in the negative) in almost a logistic shape as it approaches the maximum velocity. Then the velocity spikes as the car turns and looks down the hallway.
This video shows 5 consecutive demonstrations of the open loop trick. These were the first 5 attempts so these are not cherry picked. As you can see, some work better than others, but overall the car is able to "walk" forwards demonstrating a completely different type of motion.
This plot shows the gyro data from this demo. The same five trials are shown here. In this case, the gyro data is from the X axis instead of the Z axis. As you can see, each run the car moves in an oscillating pattern. (The 5th run is short because I accidentally turned off the robot before collecting all of the data)
This is the code I used to produce this effect. I use the nonblocking delay function below and simply alternate turning left and right at full speed. This was the method determined in lab 4 to produce the walking motion. The delay length was guessed.
This function allows me to write standard set motor input, delay, set different motor input code while maintaining an active BLE connection and recording data. I simply loop until the delay time has been reached, while checking the BLE connection and reading the gyro.
This lab was really fun because it demonstrated a lot of the work I've put into the robot so far. I especially enjoyed the closed loop control because it is able to perform a precise drift far better than I was able to when driving the car.
The goal of this lab was to learn how to form a 2D map of a room. To do this we leveraged the TOF distance sensor and the gyroscope to measure distances and map them to angles around a known point. If we had more precise localization we could construct the map linearly, but since the only odometry we have is angle we must perform circular scans.
The actual arena that we will be mapping. The floor tiles are 1ft square which makes it easy to measure the lengths.
For PID angle control I found that a PD controller worked best. Since I did task B for the previous labs I was able to reuse my PD controller from that.
The strategy I ended up using to collect the scans was to turn in small open loop increments, noting the angle and ToF distances during 500 ms pauses between turns. This strategy had the benefit of steady conditions for reading the sensors without the complexity of full PID control.
I made sure to start each scan along the positive x-axis (pointing to the right) to simplify calculations down the road.
I had minimal problems with drift using this strategy. The angles were all pretty accurate, but I found when I plotted the scans together that the corners I knew were 90 degrees looked a bit stretched out. By applying a linear correction factor the overlap of the scans improved a lot.
Plotting the raw output from the scan on a polar plot.
The transformation matrix is used to shift and rotate the individual scans to their position in the global frame. Since I started each scan pointing towards the positive x axis in the arena, I do not need to rotate the scans at all. They still require a shift, so we can apply the following transformation matrix:
This matrix is for the example of point (5,-3), the first scan I performed.
The other transformations I did to convert the raw scan to a format I could compile into a map were to correct the units and convert from polar to cartesian coordinates. First I converted from mm to m and from degrees to radians, then I applied the formulas x = d*cos(theta) and y = d*sin(theta).
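Here is a short numpy sketch of that whole pipeline for one scan, assuming the scan origin is given in 1 ft grid cells (so (5,-3) becomes a translation of 5 x 0.3048 m and -3 x 0.3048 m):

import numpy as np

FT_TO_M = 0.3048

def scan_to_global(distances_mm, angles_deg, origin_ft):
    # Unit corrections: mm -> m, degrees -> radians
    d = np.asarray(distances_mm) / 1000.0
    theta = np.deg2rad(np.asarray(angles_deg))
    # Polar -> cartesian in the robot frame (scans start along +x, so no rotation)
    points = np.vstack([d * np.cos(theta),
                        d * np.sin(theta),
                        np.ones_like(d)])
    # Homogeneous transformation: a pure translation to the scan location
    tx, ty = origin_ft[0] * FT_TO_M, origin_ft[1] * FT_TO_M
    T = np.array([[1.0, 0.0, tx],
                  [0.0, 1.0, ty],
                  [0.0, 0.0, 1.0]])
    return (T @ points)[:2]

# e.g. a few points from the scan taken at grid cell (5, -3)
print(scan_to_global([1200, 900, 1500], [0, 20, 40], (5, -3)))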
The result of applying the transformation matrices and merging the scans.
The final map generated, with hand drawn lines over the raw output. "X" marks the locations the robot was placed to perform scans.
This figure compares the map generated from the scan in black, to the ideal map drawn by me based on the floor tiles in purple. Not bad!
The map generated from the scan is just interpreted by me with some knowledge of the room, without tailoring the lines to the actual measurements at all. The only other correction I made was applying a slight linear adjustment to the angles so the individual scans lined up. For example, if I knew a wall was straight I shifted the angles of the scans to make it straighter, to correct for the gyro drift.
Main Bounds: [(-1.3,-1.35), (-1.3,0.2), (-0.6,0.2), (-0.6,1.4), (1.9,1.4), (1.9,-1.35), (-1.3,-1.35)]
Center Box: [(0.5,-0.2), (0.5,0.5), (1.3,0.5), (1.3,-0.2), (0.5,-0.2)]
Lower Box: [(-0.2,-1.35), (-0.2,-0.9), (0.5,-0.9), (0.5,-1.35), (-0.2,-1.35)]
The full x and y lists would be: x[-1.3,-1.3,-0.6,-0.6,1.9,1.9,-1.3,0.5,0.5,1.3,1.3,0.5,-0.2,-0.2,0.5,0.5,-0.2], y[-1.35,0.2,0.2,1.4,1.4,-1.35,-1.35,-0.2,0.5,0.5,-0.2,-0.2,-1.35,-0.9,-0.9,-1.35,-1.35]
The goal of this lab was to familiarize ourselves with the simulation environment, and begin thinking about how solutions in a simulation environment can translate to our real robot.
A screenshot from the simulator of the square path I drew with the robot. The path is close to perfect in the ground truth, but at this small scale the odometry goes all over the place. Ground truth in green, odometry in red
Close up of the simulator window.
And this is the code I used to generate the path. A velocity of 0.4 was a good medium speed, and I set the angular velocity to pi/2 in the hope that this would turn the robot at pi/2 rad/sec. It is close, but I had to tune the timing to get a clean 90 degree angle. At first I ran this loop similarly to the examples with an asyncio call, but I decided to implement a non-blocking delay to allow for constant data output to the plot.
And here is a plot of the data exported. As you can see, between runs the square is quite consistent, but the odometry can vary significantly. The average locations of the ground truth and odometry data are plotted as x's in their respective colors. I was hoping that the odometry would average out to at least share a similar center of mass, but it seems that depending on the run the odometry error can accumulate to place the robot far from where it actually is.
This is an experiment I used to determine the movement step time. I ran the robot in a for loop of 4 steps at a velocity of 0.4 in a straight line to obtain this picture.
Since it takes 4 steps to go 0.1 units (let's say this is in meters), each step is 0.025 m. If this is at a velocity of 0.4 m/s, then each step has a duration of 0.0625 s, or 62.5 ms.
The robot does not always execute exactly the same shape. This is clearly shown in the screenshot from open loop control: the ground truth path is close, but slightly irregular across loops.
For my closed loop controller I took a fairly simple approach. Every iteration I read in the tof sensor value, and with what is essentially a p controller I add a turning factor based on the error. If I scaled the velocity instead of the angle for example, the robot would oscillate about a point at 1m away from the wall. Using the angle term keeps the robot moving around the map instead.
For this code to work I had to tune two factors: the threshold distance and the scaling factor kp. First I tried a threshold distance of 0.5 m, but found that the robot often got stuck. Increasing it to 1 m fixed this. I also started with a kp of 2, but this was too small relative to the speed of the robot and it would crash into walls before turning away from them. By increasing kp to 4 I found a better balance where the robot avoids obstacles before it hits them, without overdoing it and turning in the center of the room without ever getting near anything.
A demo of the closed loop obstacle avoidance.
I found a good turning factor to be a cumulative 4 times the error. This causes the turning factor to scale with how close it is to an obstacle, but also with how long it has been on a collision course with the obstacle.
I found a velocity of 1 to be a good balance between fast and maneuverable. The speed isn't as important as the ratio between the turning factor and the speed, so that the robot can turn away from obstacles in time. However, there is some sort of upper bound based on the frequency of TOF sensor sampling, the step size of movement, and the size of the arena. A slower speed would definitely allow the robot to avoid collisions more consistently than what I demonstrate above, but it could also probably go faster than this.
The virtual robot can get precisely next to an obstacle, limited by its body size. The interesting thing is that since the TOF sensor is a laser pointing straight ahead, the robot sometimes doesn't see an obstacle that is slightly to the side, and then, because of the shape of the robot, its corner will catch on the obstacle and it will get stuck. Unfortunately, because the sensor doesn't see the obstacle, the controller doesn't either, so it has no idea how to respond.
My obstacle avoidance does not always work, especially because of the problem above, but also just due to the simplicity of the controller: there will be geometric configurations that fool the controller into not turning enough, and it can get stuck. Since there is currently no penalty for hitting a wall, it usually gets itself unstuck after some wiggling. There are two ways I can think of to prevent all crashes. First, we could slow down the robot as it approaches an obstacle. This way, it will always stop before hitting the obstacle, and turn until it has found a way out. The other thing we could do is to integrate some map knowledge, or perhaps build up a map as we go, and path plan around the obstacles.
After writing that I realized that would be a much better way to approach the problem, so I implemented it in the demo above. The code is shown below. I simply add a slowing factor to the velocity as it approaches an obstacle, and later cap the velocity at zero so it never reverses. This simple change allowed me to halve the threshold distance right off the bat, and with this strategy the robot could definitely go much faster.
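A Python sketch of the final controller logic, with the simulator's sensor and motion commands stood in by placeholder functions (resetting the turn factor when the path is clear is my own assumption):

KP = 4.0            # turning gain on the distance error
THRESHOLD = 0.5     # start reacting within this distance (m), halved as described
BASE_VEL = 1.0      # cruising speed

def avoid_step(tof_m, turn):
    error = max(0.0, THRESHOLD - tof_m)
    if error > 0.0:
        turn += KP * error                  # accumulates while on a collision course
    else:
        turn = 0.0                          # clear path: stop turning
    vel = max(0.0, BASE_VEL - KP * error)   # slow near obstacles, capped so it never reverses
    return vel, turn

def run(get_tof, set_vel, steps=1000):
    # get_tof() and set_vel(linear, angular) are placeholders for the sim API
    turn = 0.0
    for _ in range(steps):
        vel, turn = avoid_step(get_tof(), turn)
        set_vel(vel, turn)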
This lab was fun! It was a nice break to not have to worry about robot hardware. The simulator is really cool; it reminds me of something in between ROS and pygame. So far it works really well. I had some trouble getting everything set up: updating to Python 3.10 caused some issues with my system, but once I installed with Python 3.8 it worked really well.
This lab was about leveraging the simulator to develop a Bayes filter, so we can debug it and learn how it works before attempting to implement it on the real robot.
Compute control pretty much directly implements the control model from the slides. We take in the current pose and the previous pose, and calculate the steps of rotation, translation, rotation, that would lead from the previous pose to the current. The only modification I had to make was to convert the output from arctan2 from radians to degrees so it would be consistent with the pose angle.
The odometry motion model function first uses the compute control function to determine the relative motion between the proposed poses. This allows us to directly compare the motion that we tried to execute to the motion between potential locations we could be at. We then take a Gaussian around the executed motion and evaluate the probability that the movement from one pose to another resulted from that executed motion. Assuming the 3 movements are independent, we can multiply their probabilities together to get the total probability. This function returns the probability of the motion model for the Bayes filter.
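A Python sketch of these two functions as described (angle normalization is omitted for brevity, and the noise sigmas are placeholders):

import math

def compute_control(cur_pose, prev_pose):
    # Decompose the motion between two (x, y, theta-in-degrees) poses
    # into rotation 1, translation, rotation 2
    dx = cur_pose[0] - prev_pose[0]
    dy = cur_pose[1] - prev_pose[1]
    rot1 = math.degrees(math.atan2(dy, dx)) - prev_pose[2]
    trans = math.hypot(dx, dy)
    rot2 = cur_pose[2] - prev_pose[2] - rot1
    return rot1, trans, rot2

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def odom_motion_model(cur_pose, prev_pose, u, sigma_rot=15.0, sigma_trans=0.1):
    # Probability that the transition prev_pose -> cur_pose was produced by the
    # executed control u = (rot1, trans, rot2), treating the three parts as independent
    rot1, trans, rot2 = compute_control(cur_pose, prev_pose)
    return (gaussian(rot1, u[0], sigma_rot)
            * gaussian(trans, u[1], sigma_trans)
            * gaussian(rot2, u[2], sigma_rot))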
This function implements the prediction step of the Bayes filter. First, I use the compute control function to calculate the true control sequence. This will be used to calculate the probabilities of different poses later. Then we go into a nested loop over all possible positions, and a further nested loop over all possible previous positions. We sum the probabilities of coming from each previous position, multiplied by the previous belief at that position, to get the total prediction. Finally, we normalize, even though it's not strictly necessary, so the probability output is more useful.
I decided to implement the sensor model and update step in one function, above. The sensor model component is the joint probability q. For each of the 18 samples taken we calculate the probability that that sample came from the proposed location given the ray traced truth values. Then, assuming these 18 samples are independent, we multiply them together to get the total probability. This probability is multiplied by the bel_bar prediction to get the total updated belief of where the robot is. Again, we normalize.
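A sketch of that update step in numpy (bel_bar is the prediction over the pose grid and expected_ranges_grid holds the pre-computed ray-traced ranges for every cell; both names are placeholders):

import numpy as np

def sensor_model(obs_ranges, expected_ranges, sigma=0.1):
    # Probability of each of the 18 measured ranges given the ray-traced
    # ranges for a candidate pose, modeled as independent Gaussians
    p = np.exp(-0.5 * ((obs_ranges - expected_ranges) / sigma) ** 2)
    return p / (sigma * np.sqrt(2.0 * np.pi))

def update_step(bel_bar, obs_ranges, expected_ranges_grid, sigma=0.1):
    bel = np.zeros_like(bel_bar)
    for idx in np.ndindex(bel_bar.shape):
        # Joint probability q of the whole 18-sample observation at this cell
        q = np.prod(sensor_model(obs_ranges, expected_ranges_grid[idx], sigma))
        bel[idx] = q * bel_bar[idx]
    return bel / np.sum(bel)    # normalize so the belief stays a distribution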
And here is the demo! I only recorded the first half of the run because it took so long to run through. I suggest skipping around the video, as most of the time the robot just sits there running calculations. This was the disadvantage of using loops in Python. The image below shows a complete track.
I started this lab trying to use numpy arrays to speed up the process. Vectorizing operations can dramatically speed up python code like this. I had some success, implementing the complete bayes filter, but ran into some tricky bugs that I couldn't track down. That version of my code found an incorrect path, but did it really fast! I decided to go back to the loop method to debug, but if I have time later I now think I would be able to finish the matrix code using this code as reference.
The objective of this lab was to implement a full localization iteration on the real robot. To do this we leverage the PID controlled 360 scan from lab 9, and the localization bayes filter from the lab 11 simulation. These pieces fit together perfectly, so this lab was simply about interfacing the arduino with the jupyter notebook over bluetooth. To do this, I used a very similar strategy as I did in lab 6 to stream data back for debugging.
The first thing I did in this lab was to revisit my lab 9 code. In lab 9 we executed a very similar control sequence to perform a 360 degree TOF scan of the area for use in reconstructing the map. Now, we are using the pre-programmed map and checking our scan against it to infer our position. So I went to reuse the same code, but in that lab I did not perform proper PID control; I decided to only turn in small increments and just read in the angle for plotting, because that was simpler. Now, however, we need to turn precisely 20 degrees per step to match the cached ray-traced truth values for comparison.
This first part of the code is the TOF read function. In the past we've read the TOF sensor very deliberately in a non-blocking way so we could poll the gyro faster. However, this time we are relying on the sensor taking a reading before moving on so it is important to be blocking. Here I use the term blocking to say that it is holding up the main loop, but I make sure to call the BLE function to maintain a bluetooth connection with my laptop.
This part of the code handles the control of the scan function. The goal is to use PID control to turn precisely 20 degrees, pause for a TOF measurement, and repeat. The inc variable keeps track of which TOF sample we are reading, and is also used to dictate the setpoint for the gyro PID; lower in the code the setpoint is set to 20*inc. The moveon variable is used to coordinate between the gyro PID running frequently and the blocking TOF read.
Finally, when the angle matches the setpoint we initiate a TOF measurement.
Since I've used this controller previously for the drift trick, luckily I did not have to retune it. However, the performance of this PID controller is heavily dependent on the conditions it was tuned for. I found that when the wheels were relatively clean the robot had trouble spinning in place, but with sufficient dust from the floor it turns beautifully. It is important that the robot operates under the same conditions it was tuned for.
This lab synthesizes the work we've done with control loops in the real world with the theoretical Bayes filter code we developed in lab 11. The robot takes a scan of 18 samples at 20 degree increments and inputs this array into the Bayes filter along with pre-cached truth values from ray tracing. To do this, we have to perform the observation loop as detailed above, and send the array over Bluetooth.
On the desktop side the new code is pretty simple. We instantiate a notification handler as we've done for debugging in lab 6, send the command to initiate the scan, and read in the samples. There were a couple of things we had to get just right, though. Apparently you can define a function inside another function in Python. I'd never done this before and didn't know it was possible, but it turns out the callback only works if it is defined inside the function instead of in the class. Then I tried using a blocking while loop to wait for the samples to come in, but this blocked other multithreaded processes from running and caused issues; once I used an asyncio sleep instead, this part worked. Finally, the output had to be in units of meters instead of millimeters, so I had to divide the output by 1000 before returning.
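The resulting pattern looked roughly like this Python sketch (the ble wrapper calls, the command name, and the decoding step are placeholders, not the exact course API):

import asyncio

async def get_observation(ble, rx_uuid, n_samples=18):
    samples = []

    # The callback has to be defined inside this function (defining it as a
    # class method did not work for me)
    def handler(uuid, byte_array):
        samples.append(float(byte_array.decode()))   # placeholder decoding step

    ble.start_notify(rx_uuid, handler)
    ble.send_command("OBSERVE", "")                  # placeholder command name

    # Wait with asyncio.sleep so the notification machinery keeps running;
    # a blocking while loop here starved the other threads
    while len(samples) < n_samples:
        await asyncio.sleep(0.5)

    ble.stop_notify(rx_uuid)
    return [s / 1000.0 for s in samples]             # mm -> m for the Bayes filter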
If the embedded video doesn't work you can find it here.
The PID works as expected. I set up the robot in the 4 locations around the map. The calculated beliefs are shown below:
(5,-3)
(-3,-2)
(0,3)
(5,3)
The robot performs pretty well! The beliefs are not exactly correct, but it works impressively well. The scan at location (-3,-2) failed the first time I tried it, placing the robot in the top left corner of the map for some reason. Besides this, however, it consistently guesses very close. I suspect the cause of this failure is similar features in the environment: at the resolution of the scan we're doing, there would only be 1 or 2 measurements that differ much between those two locations.
This was a great lab because it synthesized the work we've done in several previous labs. It also clearly demonstrated the power of simulation, because we were able to solve the Bayes filter problem more easily in simulation and then interface it with the Arduino, rather than attempting to solve all of these problems at once. This lab was also really cool because it demonstrated how to offload computation from the Arduino to an external solver on my laptop. I've never done this before and I was surprised just how smoothly it all worked.
This lab challenged us to synthesize all of the things we've learned from previous labs and come up with a solution to an open-ended problem. We were tasked with writing an autonomous algorithm to navigate a given path through the map laid out in the lab. The solution should be accurate to the grid cell of the waypoints and be repeatable.
Although ideally we would implement a full Bayes filter controller for this lab, considering the precision of the task and the drift of the sensors, this seemed unrealistic. However, the task could still clearly benefit from a closed loop controller. Working with Robby Huang, we came up with a strategy that better balances the real capabilities of the robot with closed loop control strategies. The plan was to split the path into straight line segments and use dual angle and distance PID control to iterate through the path. Each movement consists of an angle PID to the specified angle, followed by a distance PID to the specified distance to the wall. This allows the robot to reliably move through the path, to the precision of the PID controller.
Like in previous labs, I first use a PD controller on the angle returned by the gyro sensor. I found the PD controller to work the best for this application.
New in this lab is the PID loop for distance. The way I set this up is for this code to only trigger on "newsample" which is a boolean set when a new tof reading is made. This prevents the controller from using zero values as inputs on iterations when no sample is taken. Again, I implement a PD controller because I found this to work better. The output of this controller is setting the value of "vel", which is then used in the angle PD loop as the forward speed when the angle is correct. Therefore, the angle correction takes precedence, which makes sense because we don't want to move forwards when the angle is off. Only when the angle is correct, we set the speed according to the distance PID controller. Additionally, I found that the distance controller tended to overshoot the correct value and oscillate a lot. I fixed this by adding active braking when the distance error is zero, rather than allowing the car to drift. This code is also in the angle PD loop because that is where the forward or backwards movement is set.
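The coordination between the two loops looks roughly like this Python sketch (the real code is Arduino C; the gains, thresholds, and motor helpers here are illustrative):

ANGLE_THRESH = 20.0      # degrees considered "close enough" to drive forward
DIST_THRESH = 30.0       # mm of distance error treated as "arrived" (placeholder)

vel = 0.0                # forward speed, owned by the distance controller
prev_dist_err = 0.0

def distance_step(dist_err, kp=0.3, kd=0.1):
    # Runs only when a new TOF sample arrives (the "newsample" flag), so the
    # controller never acts on stale or zero readings
    global vel, prev_dist_err
    vel = kp * dist_err + kd * (dist_err - prev_dist_err)
    prev_dist_err = dist_err

def angle_step(angle_err, dist_err, motors):
    # Angle correction takes precedence over forward motion
    if abs(angle_err) > ANGLE_THRESH:
        motors.turn(angle_err)       # placeholder motor helper
    elif abs(dist_err) < DIST_THRESH:
        motors.brake()               # active braking instead of coasting
    else:
        motors.drive(vel)            # speed set by the distance controller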
Finally, these are the new motor functions I added for this lab.
To streamline the debugging process, I used the Jupyter notebook to set certain parameters in the onboard code. To do this I defined 3 additional functions (above) in the Arduino code. These set parameters for the angle setpoint, the distance setpoint, and a parameter to tune the distance. I found this to be necessary because the distance sensor is systematically off from the true value, and the error seems to be correlated with distance. While I could have done some experiments and tuned a linear error model, I found it easier to add a scalar factor to the distance and tune this per setpoint.
Here is the code on the Python end. I first wrote out the path with intermediate points so each step is on the grid, and then multiplied by a factor of 304.8, which is the conversion from feet to millimeters. Finally, I tuned the path over a few iterations to account for systematic errors in my turning and the TOF sensor readings.
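As a small illustration of that conversion (the waypoint list here is just an example, not my exact path):

FT_TO_MM = 304.8   # 1 ft = 304.8 mm

# A few example waypoints in grid (foot) coordinates, already broken into
# on-grid straight-line segments
waypoints_ft = [(-4, -3), (-2, -1), (1, -1), (2, -3)]

# Convert to millimeters for the setpoints sent to the robot
waypoints_mm = [(x * FT_TO_MM, y * FT_TO_MM) for x, y in waypoints_ft]
print(waypoints_mm)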
If the embedded video doesn't work you can find it here.
Using the strategy outlined above, the car is able to quickly and accurately navigate the path. Up to the point (0,3), the robot ends exactly on the grid cell. At (0,3) the robot overshoots a bit, which causes the final movement to be off. Since the TOF sensor is relying on the far wall to localize, the overshoot on (0,3) causes the TOF to see the wall instead of the box, so it misses the final point.
This strategy falls at a middle ground between full bayes filter autonomy and open loop control. With the application of the PID loop I am able to improve the repeatability of the path significantly, allowing it to correct for all sorts of errors before they accumulate. However, some errors do accumulate due to the drift of the gyro especially, so it took several tries to get the above video. Given more time, I think I could improve the performance of the algorithm by including a calibration step every few steps using perhaps the magnetometer to obtain an absolute heading and zero the gyro drift.
Overall, I am very pleased with the final performance of the car. There were many factors to balance when approaching this problem, and throughout the class we learned different methods to accomplish that. We studied open loop control, PID, Kalman filters, and Bayes filter localization, each with increasing complexity and increasing awareness. Each of these control strategies has a place in robotics, depending on the specific challenges present, but more than anything this class taught me the value of closed loop control and the purpose of the more complex algorithms.
The first thing this course taught me about online writing was the value of pictures and videos. A video can more clearly show what a robot is doing than a long paragraph. A diagram can better explain an algorithm and a plot can better show results. With work like this, I found myself writing to support my figures and not the other way around. I found this change to be doubly important because it makes it easier for the reader to understand and makes it easier for me to write.
This leads into the second thing I learned, which was the importance of being concise and purpose driven in my writing. By writing around the figures I found that I narrowed the scope of my writing and left out a lot of the unnecessary details. The goal of a lab report is to convey enough information for someone unfamiliar with the lab to be able to recreate the work done. This covers a lot, but this actually leaves out a lot of unnecessary details. I learned to take out unnecessary explanations of my thought process, ideas that didn't pan out, and long winded explanations.
I think the most important thing I learned was about the purpose of the writing. I have previously thought of lab reports as a complete record of everything that I did and why I did it. When writing these reports up on a website however, my perspective changed. I started thinking “if I read this on a website would I enjoy reading it?” and “does this writing teach something to the reader?” These questions changed the way I wrote, moving from a complete record to a cohesive story, and from what I did to what, why, and what did I learn? A website is about the reader, not the grader.
Finally, although the purpose of the website was to practice technical writing, I ended up learning a lot of HTML and website building. This has already come up in another class where we were asked to write up our final project as a website. I think this skill will continue to help me as more and more of my writing is about communicating my work to others, especially online.