I have recently implemented a model predictive controller (MPC) to calculate the necessary reaction forces for a legged robot. The work presented here is based on "Highly Dynamic Quadruped Locomotion via Whole-Body Impulse Control and Model Predictive Control" by Kim et al. If you don't know what model predictive control is, I recommend this excellent explanation by Steve Brunton. I also found this guide to model predictive control with CasADi to be extremely helpful. CasADi is an open-source nonlinear optimization tool which I'm using for the MPC. However, I'm not entirely sure yet whether this code works the way it's intended: I'm not getting any error messages, but the force values I'm getting seem too low. If you spot anything wrong with my implementation, please let me know. Here's the code.
Part 1. The Dynamics Formulation
From the paper:
First off, you may be wondering what this [·]× notation means. It denotes a skew-symmetric matrix: each 3x1 foot position vector (r1 and r2 in the equation) is rewritten as a 3x3 matrix so that its cross product with the corresponding reaction force becomes an ordinary matrix multiplication, which is then premultiplied by the inverse of the global inertia tensor.
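As a tiny illustration of the operator itself (CasADi even ships a skew helper; the r and f symbols here are just placeholders):

```python
import casadi as ca

r = ca.SX.sym('r', 3)          # foot position relative to the CoM
f = ca.SX.sym('f', 3)          # ground reaction force at that foot

# [r]x is the 3x3 skew-symmetric matrix that turns the cross product into a matrix product
r_skew = ca.skew(r)
torque = ca.mtimes(r_skew, f)  # identical to ca.cross(r, f)

# This is the term that gets premultiplied by the inverse of the inertia tensor in the dynamics.
```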
A few more notes:
The end result is a very long equation that will be used as a constraint for the QP. The states and controls are defined as symbolic variables, while other values such as the inertia tensor, rotation matrix, and foot positions are fed in as numerical values from sensor feedback.
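A stripped-down sketch of that setup looks something like this. The mass, inertia, foot positions, and time step are placeholders rather than the real sensor feedback, and the forward-Euler discretization is a simplification, not the exact discretization from the paper:

```python
import casadi as ca
import numpy as np

N_LEGS  = 4                    # assumed quadruped; one 3D force per foot
DT      = 0.03                 # MPC time step in seconds (placeholder)
MASS    = 9.0                  # robot mass in kg (placeholder)
GRAVITY = np.array([0.0, 0.0, -9.81])

# Numerical values that would come from sensor feedback (placeholders here)
R_yaw  = np.eye(3)                                        # body yaw rotation
I_body = np.diag([0.07, 0.26, 0.24])                      # body-frame inertia (placeholder)
I_inv  = ca.DM(np.linalg.inv(R_yaw @ I_body @ R_yaw.T))   # inverse of the global inertia tensor
r_feet = [np.array([ 0.2,  0.15, -0.3]), np.array([ 0.2, -0.15, -0.3]),
          np.array([-0.2,  0.15, -0.3]), np.array([-0.2, -0.15, -0.3])]

# Symbolic state [rpy, position, angular velocity, linear velocity] and per-foot forces
x = ca.SX.sym('x', 12)
u = ca.SX.sym('u', 3 * N_LEGS)
omega, p_dot = x[6:9], x[9:12]

omega_dot = ca.SX.zeros(3)
p_ddot    = ca.SX.zeros(3)
for i in range(N_LEGS):
    f_i        = u[3 * i:3 * i + 3]
    omega_dot += ca.mtimes(I_inv, ca.mtimes(ca.skew(ca.DM(r_feet[i])), f_i))
    p_ddot    += f_i / MASS
p_ddot   += ca.DM(GRAVITY)                     # gravity added after summing the contact forces
theta_dot = ca.mtimes(ca.DM(R_yaw).T, omega)   # Euler-rate approximation from the paper

x_dot  = ca.vertcat(theta_dot, p_dot, omega_dot, p_ddot)
x_next = x + DT * x_dot                        # forward-Euler discretization (a simplification)

dynamics = ca.Function('dynamics', [x, u], [x_next])
```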
Part 2. The Objective Function
The objective function is shown above.
Let's start with this notation. What does it mean? The double bars denote a norm, and the Q or R in the subscript clues us in that this is a weighted norm, i.e. a norm defined through an inner product: ||v||_Q means sqrt(vᵀQv). Q and R are weighting matrices and should be diagonal.
However, I'm not sure what the purpose of the square root would be here. Since the square root is monotonic, minimizing the square root of f(x) gives the same minimizer as minimizing f(x) by itself, but it makes the cost non-quadratic, which defeats the point of a QP formulation. In fact, running the code with the square root operation actually causes a "QP is infeasible!" error. As such, I'm leaving the square root operation out.
So, the objective function penalizes the difference between the actual and reference state as well as the magnitude of f (the reaction forces), weighted by Q and R respectively, summed over time steps k from 0 to m. Here, m refers to the prediction horizon, which is chosen by the user.
Both my Q and R are identity matrices right now, as I'm not sure how best to weight them.
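Continuing the same sketch, the horizon, weights, and cost can be accumulated like this (m = 10 is just a placeholder):

```python
m = 10                                   # prediction horizon (placeholder value)

X     = ca.SX.sym('X', 12, m + 1)        # state trajectory, one column per time step
U     = ca.SX.sym('U', 3 * N_LEGS, m)    # reaction forces, one column per time step
X_ref = ca.SX.sym('X_ref', 12, m + 1)    # reference trajectory, passed in as a parameter

Q = ca.DM.eye(12)                        # state weights (identity for now)
R = ca.DM.eye(3 * N_LEGS)                # force weights (identity for now)

obj = 0
for k in range(m):
    err  = X[:, k] - X_ref[:, k]
    obj += ca.mtimes([err.T, Q, err]) + ca.mtimes([U[:, k].T, R, U[:, k]])   # ||err||_Q^2 + ||f||_R^2
```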
Part 3. The Constraints Vector
In the same for loop, we can now add the constraints vector. For each iteration of the loop, the discrete dynamics equation from Part 1 is subtracted from the next state vector, x(k+1).
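In the sketch, that loop grows a list of constraint expressions, starting with the initial-condition constraint that comes back in Part 5:

```python
x0_param = ca.SX.sym('x0', 12)           # measured state, passed in as a parameter

g = [X[:, 0] - x0_param]                 # initial-condition constraint (equality)
for k in range(m):
    # x(k+1) minus the discrete dynamics prediction; pinned to zero in Part 5
    g.append(X[:, k + 1] - dynamics(X[:, k], U[:, k]))
```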
There are more constraints, however: friction cone constraints, which keep each reaction force inside the cone determined by the friction between the feet and the ground. As stated in the paper:
Here's my implementation below. The Fx and Fy inequalities each have to be written twice because of the absolute value function (one inequality per sign). Fz is not included here because its constraint is extremely simple and can be implemented directly as a bound on the force variables.
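In the sketch, those four one-sided inequalities per foot per time step look like this (the friction coefficient is a placeholder):

```python
MU = 0.6                                 # friction coefficient (placeholder)

for k in range(m):
    for i in range(N_LEGS):
        fx, fy, fz = U[3 * i, k], U[3 * i + 1, k], U[3 * i + 2, k]
        # |Fx| <= mu*Fz and |Fy| <= mu*Fz, each split into two inequalities of the form (...) <= 0
        g.append( fx - MU * fz)
        g.append(-fx - MU * fz)
        g.append( fy - MU * fz)
        g.append(-fy - MU * fz)
```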
Part 4. Problem Setup
The solver used here is qpOASES. The optimization variables (in this case, the state and controls), objective function, constraint functions, and reference state are declared as shown. It is important to specify a bound tolerance and termination tolerance.
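In the sketch, that turns into a call to CasADi's qpsol with the qpOASES plugin. The tolerance option names below are meant to mirror qpOASES' own Options fields as exposed through CasADi, so double-check them against your CasADi version:

```python
opt_vars = ca.vertcat(ca.vec(X), ca.vec(U))      # decision variables, flattened column-wise
params   = ca.vertcat(x0_param, ca.vec(X_ref))   # everything fed in numerically at each solve

qp = {'x': opt_vars, 'f': obj, 'g': ca.vertcat(*g), 'p': params}

opts = {
    'printLevel': 'none',
    'terminationTolerance': 1e-6,                # option names assumed to match qpOASES
    'boundTolerance': 1e-6,
}
solver = ca.qpsol('solver', 'qpoases', qp, opts)
```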
Part 5. Upper and Lower Bounds
The upper and lower bounds for the constraint vector still need to be set. The dynamics equation is set up as an equality constraint, so the upper and lower bounds are both zero. The initial condition constraint is also an equality constraint, since the first state vector must be equal to the input state vector. The rest of the constraints (the friction inequalities) have a lower bound of negative infinity, in practice a very large negative number.
Constraints for the optimization variables are set separately. This is where the friction cone constraint for Fz comes into play. The reaction force perpendicular to the ground (assumed flat) must point upward, because the robot can only push itself away from the ground rather than pull itself toward it. Therefore, the lower bound on all Fz is 0.
Additionally, the MPC must check whether each leg is actually in contact with the ground when calculating forces for it. If a leg is not in contact, all of its force bounds are set to zero, so the solver cannot assign it any reaction force.
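Putting the bounds of this part together in the sketch (the contact flags, the force limit, and the large number standing in for infinity are placeholders):

```python
INF = 1e20                               # stands in for infinity

# Constraint bounds: equalities (initial condition + dynamics) pinned at zero,
# friction inequalities allowed anywhere in (-inf, 0]
n_eq = 12 * (m + 1)
n_fr = 4 * N_LEGS * m
lbg  = np.concatenate([np.zeros(n_eq), -INF * np.ones(n_fr)])
ubg  = np.zeros(n_eq + n_fr)

# Bounds on the decision variables themselves
FZ_MAX  = 500.0                          # maximum vertical force in N (placeholder)
contact = [1, 1, 1, 1]                   # contact flags from the gait scheduler (placeholder)

n_x = 12 * (m + 1)                       # states come first in the flattened vector
lbx = -INF * np.ones(n_x + 3 * N_LEGS * m)
ubx =  INF * np.ones(n_x + 3 * N_LEGS * m)
for k in range(m):
    for i in range(N_LEGS):
        j = n_x + 3 * N_LEGS * k + 3 * i             # index of foot i's Fx at step k
        if contact[i]:
            lbx[j + 2] = 0.0                         # Fz >= 0: the robot can only push
            ubx[j + 2] = FZ_MAX
        else:
            lbx[j:j + 3] = 0.0                       # swing leg: force pinned to zero
            ubx[j:j + 3] = 0.0
```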
Part 6. Solving
First, initial values must be fed into the solver. I've set the initial values for my control inputs to an array of zeros, although I'm wondering whether that should really be the last set of control inputs that was applied. The initial values for the state are pulled in and calculated from the simulator. This 12x1 state vector is repeated N+1 times, where N is the prediction horizon (the m from Part 2), in order to fill the entire array, including future states.
Finally, the initial states, bounds, and parameters are plugged into the solver. The controls vector is pulled from the solution, and the first row of optimized control values is returned. The rest of the values (representing future time steps) are ignored.
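The last piece of the sketch ties it together; the measured state and reference below are placeholders, and the column-major flattening has to match ca.vec:

```python
x_meas = np.zeros(12)                            # would come from the state estimator / simulator
x_ref  = np.zeros((12, m + 1))                   # desired trajectory over the horizon (placeholder)

X_init = np.tile(x_meas.reshape(12, 1), (1, m + 1))    # measured state repeated across the horizon
U_init = np.zeros((3 * N_LEGS, m))                     # zero forces as the initial guess
w0 = np.concatenate([X_init.flatten(order='F'), U_init.flatten(order='F')])
p  = np.concatenate([x_meas, x_ref.flatten(order='F')])

sol = solver(x0=w0, lbx=lbx, ubx=ubx, lbg=lbg, ubg=ubg, p=p)

# Recover the force trajectory and keep only the first step; the rest is discarded
w_opt = np.array(sol['x']).flatten()
U_opt = w_opt[12 * (m + 1):].reshape((3 * N_LEGS, m), order='F')
f_now = U_opt[:, 0]                              # reaction forces to apply at this control step
```

A common alternative to the all-zeros control guess is to warm-start with the previous solution shifted forward by one time step.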