It is often challenging to get nonlinear optimization to converge reliably without getting stuck in spurious local minima. Better handling of constraints in the optimization problem and more careful design of the objective (cost) function can help alleviate this problem. By combining data-efficient machine learning, mathematical insights from optimization theory, and computational insights from computer architecture, we are working to design robust optimization methods suitable for real-time applications (e.g., nonlinear model predictive control for robotics).
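The local-minimum pitfall is easy to demonstrate. The sketch below (a toy illustration, not our actual method) minimizes a one-dimensional nonconvex function by gradient descent: a single run can stop at a spurious local minimum, while a simple multi-start heuristic that keeps the best of several runs recovers the global one. The objective, step size, and initial guesses here are all hypothetical.

```python
def f(x):
    # Toy nonconvex objective with two local minima; the tilt term 0.3*x
    # makes the minimum near x = -1 the global one.
    return (x**2 - 1)**2 + 0.3 * x

def grad(x):
    # Analytic derivative of f.
    return 4 * x * (x**2 - 1) + 0.3

def gradient_descent(x0, lr=0.02, iters=500):
    # Plain gradient descent; converges only to the minimum
    # in whose basin of attraction x0 lies.
    x = x0
    for _ in range(iters):
        x -= lr * grad(x)
    return x

# A run started at x0 = 1.0 stalls at the spurious local minimum near x = 0.96.
stuck = gradient_descent(1.0)

# Multi-start: run from several initial guesses and keep the best result --
# a simple (if expensive) hedge against spurious local minima.
starts = [-2.0, -1.0, 0.0, 1.0, 2.0]
best = min((gradient_descent(x0) for x0 in starts), key=f)
```

Real-time settings such as nonlinear model predictive control cannot afford many restarts per control step, which is one reason more structured remedies (better constraint handling, better-conditioned cost functions) matter.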