An active subspace method for accelerating convergence in Delaunay-based optimization via dimension reduction
Authors: Muhan Zhao, Shahrouz Ryan Alimo, and Thomas R. Bewley
Published in: 2018 IEEE Conference on Decision and Control (CDC)
Abstract:
Delaunay-based derivative-free optimization (Δ-DOGS) is an efficient, provably convergent global optimization algorithm for problems with computationally expensive function evaluations, including cases for which analytical expressions for the objective function are not available. Δ-DOGS belongs to the family of response surface methods (RSMs) and suffers from the typical “curse of dimensionality”: its computational cost increases quickly as the number of design parameters grows. As a result, the number of design parameters n in Δ-DOGS is typically limited to n ≲ 10. To improve performance on higher-dimensional problems, this paper proposes a combination of derivative-free optimization, which seeks the global minimizer of a successively refined surrogate model of the objective function, and an active subspace method, which detects and preferentially explores the directions along which the objective function varies most; the contribution of the remaining directions to the objective function is bounded by a small constant. The new algorithm iteratively applies Δ-DOGS to seek the minimizer on the d-dimensional active subspace that captures most of the function’s variation. An inverse mapping, obtained by solving a related inequality-constrained problem, projects points from the active subspace back to the full-dimensional model so that function values can be evaluated. Test results indicate that the resulting strategy is highly effective on a handful of model optimization problems.
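For context, the active subspace referred to in the abstract is conventionally constructed from the eigendecomposition of the averaged outer product of the objective’s gradient; the sketch below follows that standard construction, and the symbols C, W, Λ, W₁, and d are illustrative notation rather than quotations from the paper:

C = \int_{\Omega} \nabla f(x)\, \nabla f(x)^{\top} \rho(x)\, \mathrm{d}x = W \Lambda W^{\top}, \qquad \Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n), \quad \lambda_1 \ge \cdots \ge \lambda_n \ge 0.

Partitioning the eigenvectors as W = [W_1 \; W_2] with W_1 \in \mathbb{R}^{n \times d}, the active variables are y = W_1^{\top} x \in \mathbb{R}^d. When \lambda_{d+1}, \dots, \lambda_n are small, f(x) \approx g(W_1^{\top} x), so the surrogate-based search can be restricted to the d-dimensional subspace; evaluating f at a candidate y then requires recovering a feasible x with W_1^{\top} x = y, which is the kind of inequality-constrained subproblem the abstract mentions.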