ESE 650: LEARNING IN ROBOTICS
SPRING 2022
[02/15] HOMEWORK 2
DUE: 03/01 TUE 11:59 PM ET
Instructions
Read the following instructions carefully before beginning to work on the homework.
. You will submit solutions typeset in LATEX on Gradescope (strongly encouraged). You can use hw_template.tex on Canvas in the “Homeworks” folder to do so. If your handwriting is unambiguously legible, you can submit PDF scans/tablet-created PDFs.
. Please start a new problem on a fresh page and mark all the pages corresponding to each problem. Failure to do so may result in your work not being graded completely.
. Clearly indicate the name and Penn email ID of all your collaborators on your submitted solutions.
. For each problem in the homework, you should mention the total amount of time you spent on it. This helps us gauge the perceived difficulty of the problems.
. You can be informal while typesetting the solutions, e.g., if you want to draw a picture, feel free to draw it on paper clearly, take a photo, and include it in your solution. Do not spend undue time on typesetting solutions.
. You will see an entry of the form “HW 2” where you will upload the PDF of your solutions. You will also see entries like “HW 2 Problem 1” where you will upload your solution for the respective problems. For each programming problem, you should use the appropriate .py template file provided on Canvas, unless otherwise stated by the problem. This file should contain all the code to reproduce the results of the problem and you will upload the .py file to Gradescope. If we have installed an Autograder for a particular problem, you will use the Autograder. Before submitting to the Autograder, make sure that the name of the file and the function signatures have not been altered from the provided template.
. You should include all the relevant plots in the PDF; without them you will not get full credit.
. Your PDF solutions should be completely self-contained. We will run the Python file to check if your solution reproduces the results in the PDF.
Credit The points for the problems add up to 125. You only need to solve for 100 points to get full credit, i.e., your final score will be min(your total points, 100).
Problem 1 (Extended Kalman Filter, 25 points). In this problem, we will see how to use filtering to estimate an unknown system parameter. Consider a dynamical system given by

x_{k+1} = a x_k + ϵ_k
y_k = √(x_k^2 + 1) + ν_k        (1)
where x_k, y_k ∈ R are scalars, and ϵ_k ∼ N(0, 1) and ν_k ∼ N(0, 1/2) are zero-mean scalar Gaussian noises, uncorrelated across time k. The constant a is unknown and we would like to estimate its value. Given that the initial state has mean 1 and variance 2,

x_0 ∼ N(1, 2),

develop the equations for an Extended Kalman Filter (EKF) to estimate the unknown constant a.
(a) (5 points) You should first simulate (1) with a = −1. This is the ground-truth value of a that we would like to estimate. Provide details of how you simulated the system, in particular how you sampled the noise ϵ_k, ν_k. The observations D = {y_k : k = 1, . . .} are the “dataset” that we thus collect from the system. Run the simulation for about 100 observations.
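One possible way to simulate (1) is sketched below; the function name and the fixed seed are our own choices, and note that NumPy's normal sampler takes standard deviations, i.e., the square roots of the stated variances:

```python
import numpy as np

def simulate(a=-1.0, T=100, seed=0):
    """Sample one trajectory of system (1)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(1.0, np.sqrt(2.0))            # x_0 ~ N(1, 2)
    xs, ys = np.zeros(T), np.zeros(T)
    for k in range(T):
        xs[k] = x
        ys[k] = np.sqrt(x**2 + 1.0) + rng.normal(0.0, np.sqrt(0.5))  # nu_k ~ N(0, 1/2)
        x = a * x + rng.normal(0.0, 1.0)         # eps_k ~ N(0, 1)
    return xs, ys

xs, ys = simulate()
```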
(b) (15 points) You should now develop the EKF equations that will use the collected dataset D to estimate the constant a. Discuss your approach in detail. Your goal is to compute two quantities

μ_k = E[a_k | y_1, . . . , y_k]
σ_k^2 = var(a_k | y_1, . . . , y_k)

for all times k.
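A common approach, sketched below, is to augment the state with the parameter, z_k = (x_k, a_k) with a_{k+1} = a_k, and run a standard EKF on the augmented system. The prior on a and the tiny process noise on a are illustrative choices of ours, not part of the problem statement:

```python
import numpy as np

def ekf_estimate_a(ys, mu0=np.array([1.0, 0.5]), S0=np.diag([2.0, 2.0])):
    """EKF on the augmented state z_k = (x_k, a_k), with a_{k+1} = a_k.
    The prior a ~ N(0.5, 2) is an arbitrary nonzero-mean choice (a zero prior
    mean makes the linearization degenerate, since the predicted x collapses
    to zero); the small jitter on a keeps the filter adaptive."""
    Q = np.diag([1.0, 1e-6])   # eps_k ~ N(0, 1), plus illustrative jitter on a
    R = 0.5                    # nu_k ~ N(0, 1/2)
    mu, S = mu0.astype(float).copy(), S0.astype(float).copy()
    mus_a, vars_a = [], []
    for y in ys:
        # propagate: z' = (a x, a); Jacobian A = [[a, x], [0, 1]]
        x, a = mu
        A = np.array([[a, x], [0.0, 1.0]])
        mu = np.array([a * x, a])
        S = A @ S @ A.T + Q
        # update with y = sqrt(x^2 + 1) + nu; Jacobian C = [x/sqrt(x^2+1), 0]
        x = mu[0]
        C = np.array([[x / np.sqrt(x**2 + 1.0), 0.0]])
        K = S @ C.T / (C @ S @ C.T + R)          # (2, 1) Kalman gain
        mu = mu + (K * (y - np.sqrt(x**2 + 1.0))).ravel()
        S = (np.eye(2) - K @ C) @ S
        mus_a.append(mu[1]); vars_a.append(S[1, 1])
    return np.array(mus_a), np.array(vars_a)
```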
(c) (5 points) Plot the true value a = −1, and the estimated values μ_k ± σ_k as a function of time k. Discuss your result. In particular, do your estimated values μ_k ± σ_k match the ground-truth value a = −1? Does the error reduce as you incorporate more and more observations?
You should use a fresh Python file. We are not going to use an autograder for this problem, so make sure to include the appropriate details and plots in your report. You will submit your code under “HW 2 Problem 1”.
Problem 2 (Unscented Kalman Filter, 100 points). In this problem, you will implement an Unscented Kalman Filter (UKF) to track the orientation of a robot in three dimensions. We have given you observations from an inertial measurement unit (IMU) that consists of gyroscopes and accelerometers, and corresponding data from a motion-capture system (called “Vicon”, see https://www.youtube.com/watch?v=qgS1pwsHQIA for example). We will develop the UKF for the IMU data and use the Vicon data for calibration and tuning of the filter; this is typical of real applications where the robot uses an IMU, but the filter running on the robot is calibrated before test-time in the lab using an expensive and accurate sensor like a Vicon.
(a) Loading and understanding the data: First, load the data given on Canvas (file “hw2_p2_data.zip”) using code of the form:

from scipy import io
import numpy as np

data_num = 1
imu = io.loadmat('imu/imuRaw' + str(data_num) + '.mat')
accel = imu['vals'][0:3, :]
gyro = imu['vals'][3:6, :]
T = np.shape(imu['ts'])[1]
Ignore the other fields inside the .mat file; we will not use them. You can use the following code to load the Vicon data

vicon = io.loadmat('vicon/viconRot' + str(data_num) + '.mat')
while calibrating and debugging. But do not include this line in the autograder submission because we do not store the Vicon data on the server.
(b) (15 points) Calibrating the sensors. Check the arrays accel and gyro. The former gives the observations received by the accelerometer inside the IMU and the latter gives observations from the gyroscope. The variable T denotes the total number of time-steps in our dataset. You will have to read the IMU reference manual (file “imu_reference.pdf” on Canvas) to understand the quantities stored in these arrays. Pay careful attention to the following things. First, the accel/gyro readings are integers and not metric quantities; this is because there is usually an analog-to-digital conversion (ADC) that happens in these sensors and one reads off the ADC value as the actual observation. Because of the way these MEMS sensors are constructed, they will have biases and sensitivities with respect to the working voltage. In order to convert from raw values to physical units, the equation for both accel and gyro is typically

value = (raw − β) × (3300/1023) × (1/α)
where β is called the bias, the factor 3300/1023 converts the 10-bit ADC counts into mV (most onboard electronics operate at 3300 mV), and α is the sensitivity of the sensor in mV per physical unit. For the accelerometer, α has units of mV/g where g refers to the gravitational acceleration 9.81 m/s^2. In other words, if α = 100 mV/g and the bias β is zero, and if the raw accelerometer reading is 10, the actual value of the acceleration along that axis is

value = 10 × (3300/1023) × (1/100) × 9.81 ≈ 3.16 m/s^2
Similarly, the sensitivity of a gyroscope has units of mV/(degree/sec). You will have to convert the sensitivity into mV/(radian/sec) to make all units consistent.
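The conversion above can be sketched as follows; the numeric sensitivities and biases here are placeholders for illustration, not the values you should report:

```python
import numpy as np

def raw_to_physical(raw, beta, alpha):
    """ADC counts -> physical units: (raw - beta) * (3300/1023) / alpha.
    alpha is the sensitivity in mV per physical unit; beta is the bias in counts."""
    return (raw - beta) * (3300.0 / 1023.0) / alpha

# accelerometer: alpha in mV/g, so multiply by 9.81 to get m/s^2
# (reproduces the worked example: raw 10, beta 0, alpha 100 mV/g -> ~3.16 m/s^2)
accel_ms2 = raw_to_physical(np.array([10.0]), beta=0.0, alpha=100.0) * 9.81

# gyroscope: a sensitivity quoted in mV/(deg/s) must be rescaled to mV/(rad/s)
alpha_deg = 3.33                       # hypothetical datasheet value, mV/(deg/s)
alpha_rad = alpha_deg * 180.0 / np.pi  # mV/(rad/s)
gyro_rads = raw_to_physical(np.array([400.0]), beta=370.0, alpha=alpha_rad)
```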
Typically, in a real application, we do not know the bias and sensitivity of either sensor. Your goal is to use the rotation matrices in the Vicon data as the ground-truth orientation (see the section on quaternions below) to estimate the bias and sensitivity of both the accelerometer and the gyroscope. You should be careful on two counts.
(1) The orientation of the IMU need not be the same as the orientation of the Vicon coordinate frame. Plot all quantities in the arrays accel, gyro, and the Vicon rotation matrices to make sure you get this right. Do not proceed to implementing the filter if you are not convinced your solution for this part is correct.
(2) The accelerations ax and ay are flipped in sign due to the device design. A positive acceleration in the body frame will result in a negative number reported by the IMU. See the IMU manual for more insight.
You should detail how you selected the two constants α, β for both the accelerometer and the gyroscope in your solution PDF. Simply reporting numbers will get zero credit.
Hint: To find the sensitivity for the accelerometer, we can assume the only force acting is the gravitational force. Then the magnitude of your 3-dimensional accelerometer readings should be as close to 9.81 as possible. Plot the roll, pitch, and yaw values from the Vicon data; you can extract these from the Vicon rotation matrix. You should compare the Vicon plots with some simple plots obtained only from the accelerometer, and separately only the gyroscope values, to predict orientation. From the accelerometer, you can directly compute roll and pitch for each timestep and compare these with ground-truth (Vicon) data to ensure your sensitivity and axes are correct. For the gyroscope, you can use the initial orientation from the accelerometer and then integrate the angular velocity values from the gyroscope for the rest of the time series. Note that the orientation estimates you get from this method will have significant drift, but you should be able to get a sense of the scale and check your sensitivity values. The purpose here is to ensure you are converting the raw digital values in the dataset into meaningful physical units before we begin the filtering.
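The sanity checks in this hint might look like the sketch below; the sign conventions depend on the IMU's axis convention, so expect to flip signs after comparing against Vicon:

```python
import numpy as np

def accel_to_roll_pitch(ax, ay, az):
    """Tilt from gravity alone (device at rest). These sign conventions are one
    common choice; adjust them once you have compared against the Vicon plots."""
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
    return roll, pitch

def integrate_gyro(omega, ts, rpy0=np.zeros(3)):
    """Naive Euler integration of body rates (3, T), treating them as Euler-angle
    rates. Only a scale sanity check: expect significant drift."""
    T = omega.shape[1]
    rpy = np.zeros((T, 3))
    rpy[0] = rpy0
    for k in range(1, T):
        rpy[k] = rpy[k - 1] + omega[:, k - 1] * (ts[k] - ts[k - 1])
    return rpy
```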
(c) (0 points) Quaternions for orientation. We have given you a file named quaternion.py that implements a Python class for handling quaternions. Read this code carefully. In particular, you should study the function euler_angle which returns the Euler angles corresponding to a quaternion, from_rotm which takes in a 3×3 rotation matrix and assigns the quaternion, and the function __mul__ which multiplies two quaternions together. Try a few test cases for converting back and forth between a rotation matrix/Euler angles and a quaternion to solidify your understanding here.
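An independent round-trip check of such conversions (using scipy's Rotation class here rather than the provided quaternion.py, whose exact signatures we do not reproduce):

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Euler angles -> rotation matrix -> quaternion -> rotation matrix
R0 = Rotation.from_euler("zyx", [0.3, -0.2, 0.1]).as_matrix()
q = Rotation.from_matrix(R0).as_quat()   # note: scipy stores quaternions as [x, y, z, w]
R1 = Rotation.from_quat(q).as_matrix()
assert np.allclose(R0, R1)               # round trip recovers the same rotation
assert np.isclose(np.linalg.norm(q), 1.0)  # unit quaternion
```

Running the same round trip through quaternion.py should give matching results, up to the overall sign of the quaternion (q and −q represent the same rotation).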
(d) (0 points) Implementing the UKF. Given this setup, you should now read the PDF on Canvas titled “hw2_p2_ukf_writeup” to implement an Unscented Kalman Filter for tracking the orientation using these observations. The state of your filter will be

x = (q, ω) ∈ R^7

where q is the quaternion that indicates the orientation and ω is the angular velocity. The observations are of course the readings of the accelerometer and the gyroscope that we discussed above; recall that gyroscopes measure the angular velocity ω itself, which simplifies their observation model. The paper also has a magnetometer as a sensor (which measures the orientation with respect to the magnetic north pole), but we will not use it here. You should implement quaternion averaging as described in Section 3.4 of the paper; this is essential for the UKF to work. You will have to choose the values of the initial covariance of the state, the dynamics noise, and the measurement noise yourself. You can discuss your steps and choices in your solution PDF if you want us to follow through your implementation.
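One way to sketch the iterative quaternion mean in the spirit of Section 3.4 is shown below; the scalar-first [w, x, y, z] convention and all helper names are our own choices, so adapt them to quaternion.py:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of scalar-first quaternions."""
    pw, pv = p[0], p[1:]
    qw, qv = q[0], q[1:]
    return np.concatenate([[pw * qw - pv @ qv],
                           pw * qv + qw * pv + np.cross(pv, qv)])

def qinv(q):
    """Quaternion inverse (conjugate over squared norm)."""
    return np.array([q[0], -q[1], -q[2], -q[3]]) / (q @ q)

def qlog_vec(q):
    """Rotation vector (axis * angle) of a unit quaternion, angle in (-pi, pi]."""
    w, v = np.clip(q[0], -1.0, 1.0), q[1:]
    n = np.linalg.norm(v)
    if n < 1e-12:
        return np.zeros(3)
    theta = 2.0 * np.arctan2(n, w)
    if theta > np.pi:
        theta -= 2.0 * np.pi
    return theta * v / n

def qexp_vec(r):
    """Unit quaternion from a rotation vector."""
    a = np.linalg.norm(r)
    if a < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate([[np.cos(a / 2)], np.sin(a / 2) * r / a])

def quat_mean(quats, q0, iters=50, tol=1e-8):
    """Iterative mean: average the error rotation vectors relative to the
    current estimate, then update the estimate, until the mean error vanishes."""
    qbar = q0.copy()
    for _ in range(iters):
        errs = np.array([qlog_vec(qmul(q, qinv(qbar))) for q in quats])
        e = errs.mean(axis=0)
        qbar = qmul(qexp_vec(e), qbar)
        if np.linalg.norm(e) < tol:
            break
    return qbar
```

In the UKF, this mean of the sigma-point quaternions plays the role of the predicted orientation, and the final error vectors give the quaternion part of the covariance.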
(e) (10 points) Analysis and debugging. Plot the quaternion q (mean and diagonal of covariance), the angular velocity ω (mean and diagonal of covariance), the gyroscope readings in rad/sec, and the quaternion corresponding to the Vicon orientation as a function of time in your solution PDF. Do not plot on the server; it may crash. You should show the results for one dataset and discuss whether your filter is working well. You should also use these plots to debug your performance on the other datasets; plotting everything carefully is the fastest way to debug the UKF.
(f) (75 points) Evaluation. We will use the autograder for this problem. We will test the performance of your filter on the datasets provided to you as well as some of our held-out datasets. Make sure you submit estimate_rot.py as well as all its dependencies under “HW 2 Problem 2”. This function should return three NumPy arrays of length T, one each for the Euler angles (roll, pitch, yaw) of the orientation.
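If your filter produces a quaternion trajectory, the final conversion to roll/pitch/yaw arrays might look like the sketch below (ZYX convention, scalar-first quaternions; make sure the convention matches the one used in quaternion.py):

```python
import numpy as np

def quat_to_rpy(qs):
    """Convert an array of unit quaternions (T, 4), scalar-first [w, x, y, z],
    to ZYX roll, pitch, yaw arrays of length T."""
    w, x, y, z = qs[:, 0], qs[:, 1], qs[:, 2], qs[:, 3]
    roll = np.arctan2(2 * (w * x + y * z), 1 - 2 * (x**2 + y**2))
    pitch = np.arcsin(np.clip(2 * (w * y - x * z), -1.0, 1.0))  # clip guards rounding
    yaw = np.arctan2(2 * (w * z + x * y), 1 - 2 * (y**2 + z**2))
    return roll, pitch, yaw
```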