nav2_mppi_controller: add optional LP-MPPI-style low-pass perturbation sampling (paper-based, unofficial)#5997
Conversation
Force-pushed 96f590d to 2fcfb80 (compare)
Codecov Report ❌ Patch coverage is
... and 19 files with indirect coverage changes
Signed-off-by: Mohamed Samir <mohamed.samir@anovate.ai>
Force-pushed 2fcfb80 to c220f76 (compare)
mini-1235
left a comment
@mohamedsamirx Before we start reviewing, have you had a chance to test this in a simulation environment, in addition to the unit tests? I'm curious whether you observed any noticeable differences before/after this PR, and in which situations you would recommend enabling/disabling it.
@mini-1235 Sorry, I haven't had time to test it in simulation yet, but it's on my list. I may prioritize it soon.
Please do and provide some thoughts on the improvements!
I tested LP-MPPI in Gazebo on two Nav2 simulation environments: the TurtleBot3 sandbox map and the warehouse map. The final benchmark used 10 candidate routes in total; 9 were valid for controller comparison, and 1 route was excluded because the global planner failed for all 4 configurations before controller execution. Each valid route was run 5 times under these 4 configurations: `nav2_smooth_on`, `nav2_smooth_off`, `lp_default`, and `lp_only_default`.
My honest conclusion from the simulation results is that I would not present this as a clear "smoothness improvement" over Nav2 in its current form.

The practical Nav2 winner was `lp_default`: it won 7 of 9 valid routes by success rate and then mean duration. Compared against `nav2_smooth_on`, it:

- improved mean success rate from 0.844 to 0.978 (+13.3 percentage points)
- reduced mean duration from 32.69 s to 30.20 s (7.6% faster)
- improved MSGFD by 21.4%
- reduced linear variation (TV vx) by 22.3%

However, it also worsened MSSD by 104.4% and worsened angular variation (TV wz) by 30.9%.

I also compared the paper-faithful case, `lp_only_default`, against raw Nav2 without Savitzky-Golay smoothing (`nav2_smooth_off`). In that comparison, success improved from 0.756 to 0.867 (+11.1 percentage points), duration improved from 33.19 s to 29.99 s (9.6% faster), and TV vx improved by 26.6%. But MSGFD was essentially flat to slightly worse (-0.5%), and both MSSD and TV wz got worse.

So the honest takeaway is: LP-MPPI seems beneficial in Nav2 mainly as a success-rate and completion-time improvement, and in some cases as a linear-command smoothing improvement, but not as a consistent overall smoothness improvement. In particular, angular smoothness and MSSD regress often enough that I do not think the current evidence supports claiming it is uniformly better than the existing Nav2 behavior.
How did you evaluate trajectory smoothness? I think what we're talking about here has little to do with path smoothing (is that what you mean by Nav2 smoothing? I'm really not sure). I mean smoothness in terms of the velocities in the optimal trajectory plus inter-iteration behavior, i.e. the amount of jitter that could cause small jerking. That is the intention behind the ticket: #5973. That method was recommended to me by the MPPI developers in the GT lab, so I take that recommendation particularly seriously. Have you looked into that? Maybe it's good for you to start at the beginning:
That is a fair point, and I should clarify. I had not seen #5973 before implementing this, so I was not developing directly against that ticket or its exact evaluation intent. I implemented LP-MPPI because I was interested in the paper's idea of low-pass filtering sampled perturbations before rollout, with the expected benefit of reducing high-frequency sampling noise and, hopefully, reducing small jerking in the resulting control commands.

I'm also not planning to use it soon myself. I wanted to test it on our autonomous golf car in the lab later on, but I mainly implemented it thinking it might be a good addition to the library. But if this is coming from MPPI developers, then I will definitely read that paper, compare both, and check the difference.

From your comment, I agree the ticket is really about smoothness in the optimal trajectory / inter-iteration velocity behavior, not just output-level control metrics, so I do not want to overclaim that my current benchmark fully answers #5973. I think the right next step is to reframe the evaluation around that specific intent. If it performs better, I may open a new pull request with it.
Always appreciated and thanks for thinking of contributing! Please do give that paper a glance and let me know what you think.
These I'm a little confused about, since I never see a 15%+ failure rate of MPPI on missions. I'm also not sure how these changes would have such an impact. Completion time I could see in some situations, if it's smoother and/or biases samples to higher speeds (do the metrics support that?). A metric like "32.69 s to 30.20 s" from 5x runs is also close enough that I'd want to understand the standard deviation on those metrics, and whether it's reproducible if you run this 5x re-run experiment a couple more times (i.e. run 5x a second time, then a third). That small a deviation may not be a real improvement, just experimental variance.
Please! As you progress on that, commenting on that ticket with progress or questions would be great 😄 I appreciate this work, and especially help in smoothness and more research-oriented improvements in MPPI :-) It's often difficult for me to find contributors with the technical chops to contribute in such matters, so it's highly appreciated.
Any word? :-)
I'm sorry for taking so long; I got distracted with work. I've implemented the algorithm, but I'm unsure whether I'm validating it correctly, since the simulation isn't fully set up yet. If possible, I'd like to push the implementation (based on the paper) and get a review before continuing with testing. That would be very helpful.
I think you should test it before I review in detail (since it's not worth much to review the code if the code doesn't help), but feel free to push :-) I'm doing something related right now, so maybe I'll be able to test the changes on my end (but no promises on timeline).
Basic Info
Description of contribution in a few bullet points
- `use_low_pass_filter` (bool, default `false`)
- `filter_cutoff_frequency` (double, Hz, default `2.0`)
- `filter_order` (int, default `2`)
- Cutoff clamped to the Nyquist frequency `f_N = 1 / (2 * model_dt)`

Algorithm details implemented in this PR
- Standard MPPI sampling: `epsilon ~ N(0, Sigma)`, then `u_sampled = u_nominal + epsilon`
- With the filter enabled: `epsilon_lp = LPF(epsilon, cutoff, order)`, then `u_sampled = u_nominal + epsilon_lp`
- Applied per control dimension (`vx`, `wz`, and `vy` when holonomic)

Files changed
- `nav2_mppi_controller/include/nav2_mppi_controller/tools/noise_generator.hpp`
- `nav2_mppi_controller/src/noise_generator.cpp`
- `nav2_mppi_controller/test/noise_generator_test.cpp`: `LowPassFilterSmoothsPerturbations` test, `vy` smoothing assertion, `CutoffClampedToNyquist` test
- `nav2_mppi_controller/README.md`
- `nav2_bringup/params/nav2_params.yaml`: new parameters under `FollowPath`

Usage
Example configuration:
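A sketch of how the new parameters might appear in `nav2_params.yaml`. The surrounding keys (`controller_server`, the plugin name, the `model_dt` value) are typical Nav2 MPPI boilerplate assumed here for context, not taken from this PR's diff:

```yaml
controller_server:
  ros__parameters:
    FollowPath:
      plugin: "nav2_mppi_controller::MPPIController"
      model_dt: 0.05
      # LP-MPPI options added by this PR (names from the parameter list above)
      use_low_pass_filter: true
      filter_cutoff_frequency: 2.0  # Hz; clamped to f_N = 1 / (2 * model_dt)
      filter_order: 2
```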
Tuning notes:
Description of documentation updates required from your changes
Description of how this change was tested
colcon test --packages-select nav2_mppi_controller --ctest-args -R noise_generator_test --event-handlers console_direct+
(not yet validated in simulation environments)
Future work that may be required in bullet points
For Maintainers: