9. Workshop on Benchmarking, Reproducibility, and Open-Source Code in Controls

Call for Short Abstract Submissions: http://tiny.cc/cdc23-ws-abstract

Organizers: Angela P. Schoellig, Jonathan P. How, Peter Corke, George J. Pappas, Sandra Hirche, Lukas Brunke, Siqi Zhou, Adam W. Hall, Federico Pizarro Bejarano, Jacopo Panerati

Location: Melati Junior 4111

Website: https://www.dynsyslab.org/cdc-2023-workshop-on-benchmarking-reproducibility-and-open-source-code-in-controls/

Lecture Schedule: https://www.dynsyslab.org/cdc-2023-workshop-on-benchmarking-reproducibility-and-open-source-code-in-controls/#program 

Abstract: In recent years, the scientific community has grown more cognizant of the importance and challenges of transparent and reproducible research. This topic has become increasingly important with the rise of complex algorithms (e.g., machine learning models or optimization-based algorithms), which cannot be adequately documented in standard publications alone. Benchmarking and code sharing are two key instruments that researchers use to improve reproducibility. Benchmarks have played a critical role in advancing the state of the art in machine learning research. Analogously, well-established benchmarks in controls would enable researchers to compare the effectiveness of different control algorithms. At present, only a few benchmarks are available for comparing control algorithms (e.g., the Autonomie simulation model of a Toyota Prius or the shared experimental testbed Robotarium), and comparisons are further limited by the modest number of open-source implementations of control algorithms. Over the six-year period 2016–2021, the percentage of CDC papers with code more than doubled; even so, at CDC 2021 only 2.6% of publications included code (compared to around 5% at the robotics conference ICRA and over 60% at the machine learning conference NeurIPS). These trends are encouraging, but much work remains to promote reproducible research that accelerates innovation. Benchmarking and releasing code alongside papers is a critical first step in this direction. Our workshop aims to raise awareness of these challenges and inspire attendees to contribute to benchmarking efforts and to share open-source code with their future publications.
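To illustrate what a shared benchmark enables, below is a minimal, hypothetical Python sketch of a benchmarking harness: two controllers are evaluated on the same seeded double-integrator task with a common cost, so their scores are reproducible and directly comparable. The system, gains, and metric here are illustrative assumptions, not part of any benchmark discussed at the workshop.

    import numpy as np

    # Discrete-time double integrator: state x = [position, velocity]
    DT = 0.05
    A = np.array([[1.0, DT], [0.0, 1.0]])
    B = np.array([[0.5 * DT**2], [DT]])

    def evaluate(controller, steps=400, seed=0):
        """Roll out a controller from a fixed initial state with seeded
        noise and return the mean quadratic stage cost (lower is better)."""
        rng = np.random.default_rng(seed)      # fixed seed -> reproducible runs
        x = np.array([1.0, 0.0])               # start 1 m from the origin, at rest
        costs = []
        for _ in range(steps):
            u = controller(x)
            w = rng.normal(0.0, 0.01, size=2)  # small process noise
            x = A @ x + (B * u).ravel() + w
            costs.append(x @ x + 0.1 * u**2)
        return float(np.mean(costs))

    def state_feedback(x, K=np.array([1.6, 1.9])):
        # Hand-tuned stabilizing gain; a real harness would solve the
        # discrete-time Riccati equation for the cost used in evaluate().
        return float(-K @ x)

    def pd_control(x, kp=2.0, kd=2.5):
        return float(-kp * x[0] - kd * x[1])

    if __name__ == "__main__":
        for name, ctrl in [("state feedback", state_feedback), ("PD", pd_control)]:
            print(f"{name}: mean stage cost = {evaluate(ctrl):.4f}")

Because the task, seed, and metric are fixed and published with the code, anyone can rerun the comparison and obtain the same numbers, which is the core property a shared controls benchmark would provide.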
