Abstract
Developing efficient multi-objective optimization methods to compute the Pareto set of optimal compromises between conflicting objectives remains a key challenge, especially for large-scale and expensive problems. To address this challenge, we introduce SPREAD, a generative framework based on Denoising Diffusion Probabilistic Models (DDPMs). SPREAD first learns a conditional diffusion process over points sampled from the decision space and then, at each reverse diffusion step, refines candidates via a sampling scheme that combines an adaptive multiple gradient descent-inspired update for fast convergence with a Gaussian RBF-based repulsion term for diversity. Empirical results on multi-objective optimization benchmarks, including offline and Bayesian surrogate-based settings, show that SPREAD matches or exceeds leading baselines in efficiency, scalability, and Pareto front coverage.
DiT-MOO (noise-prediction network)
We use a Transformer-based diffusion architecture (DiT-MOO) adapted to multi-objective optimization. The noisy candidate solutions are first embedded into tokens and combined with time-step embeddings and a conditioning signal derived from the objective values, allowing the model to learn how solution quality relates to structure in the decision space.
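A minimal PyTorch sketch of such a network is shown below. The layer sizes, the one-token-per-variable embedding, and the way the time-step and objective conditioning are injected are illustrative assumptions, not the paper's exact configuration.

import math
import torch
import torch.nn as nn

class DiTMOO(nn.Module):
    # Illustrative DiT-style noise predictor: hyperparameters and the
    # one-token-per-variable tokenization are assumptions, not the paper's.
    def __init__(self, n_vars, n_objs, d_model=128, n_heads=4, n_layers=4):
        super().__init__()
        self.d_model = d_model
        self.var_embed = nn.Linear(1, d_model)              # one token per decision variable
        self.pos_embed = nn.Parameter(torch.zeros(1, n_vars, d_model))
        self.time_mlp = nn.Sequential(nn.Linear(d_model, d_model), nn.SiLU(),
                                      nn.Linear(d_model, d_model))
        self.cond_mlp = nn.Sequential(nn.Linear(n_objs, d_model), nn.SiLU(),
                                      nn.Linear(d_model, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)

    def timestep_embedding(self, t):
        # standard sinusoidal time-step embedding
        half = self.d_model // 2
        freqs = torch.exp(-math.log(10000.0) * torch.arange(half, device=t.device) / half)
        args = t.float()[:, None] * freqs[None]
        return torch.cat([torch.cos(args), torch.sin(args)], dim=-1)

    def forward(self, x_t, t, y_cond):
        # x_t: (B, n_vars) noisy solutions; t: (B,) time steps; y_cond: (B, n_objs)
        tokens = self.var_embed(x_t.unsqueeze(-1)) + self.pos_embed
        ctx = self.time_mlp(self.timestep_embedding(t)) + self.cond_mlp(y_cond)
        h = self.backbone(tokens + ctx.unsqueeze(1))        # broadcast conditioning to all tokens
        return self.head(h).squeeze(-1)                     # predicted noise, shape (B, n_vars)

The property this sketch captures is that the predicted noise depends jointly on the noisy solution, the diffusion time step, and the conditioning objectives, which is what makes objective-aware guidance possible at sampling time.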
Training procedure
A key novelty of SPREAD’s training is the shifted-conditioning strategy, where the model is trained using slightly shifted objective values rather than the exact ones. This encourages the model to generate solutions that improve over the conditioning point at sampling time, effectively turning diffusion into a refinement mechanism rather than just a generator.
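A hedged sketch of one training step under this idea follows. The concrete shift rule y - delta * |y| (assuming minimization) and the constant delta are illustrative stand-ins for whatever shift schedule the paper actually uses.

import torch
import torch.nn.functional as F

def training_step(model, optimizer, x0, y, alphas_cumprod, delta=0.05):
    # x0: (B, n_vars) dataset solutions; y: (B, n_objs) their objective values.
    # The shift rule and delta are illustrative assumptions (minimization).
    t = torch.randint(0, len(alphas_cumprod), (x0.shape[0],), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].unsqueeze(-1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise    # forward process q(x_t | x_0)
    y_shifted = y - delta * y.abs()                         # condition on slightly better objectives
    loss = F.mse_loss(model(x_t, t, y_shifted), noise)      # standard DDPM noise-prediction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()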
Sampling procedure
SPREAD samples solutions by running reverse diffusion from random initial points and refining them at each step using MGD-inspired guidance for objective improvement, a repulsion term for diversity, and stochastic perturbations for stable exploration, producing a well-distributed approximation of the Pareto front.
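The sketch below illustrates one such guided reverse step for two differentiable objectives: a closed-form MGDA min-norm direction (the two-objective case of multiple gradient descent), a Gaussian-RBF repulsion term, and the usual DDPM noise injection. The fixed step sizes eta_g and eta_r and the RBF bandwidth are simplifying assumptions, whereas SPREAD adapts its guidance.

import torch

def min_norm_direction(g1, g2):
    # Closed-form min-norm point in the convex hull of two gradients (MGDA, m = 2).
    diff = g2 - g1
    alpha = (diff * g2).sum(-1, keepdim=True) / diff.pow(2).sum(-1, keepdim=True).clamp_min(1e-12)
    alpha = alpha.clamp(0.0, 1.0)
    return alpha * g1 + (1 - alpha) * g2

def rbf_repulsion(x, h=0.5):
    # Gaussian-RBF kernel gradient pushing particles apart; bandwidth h is an assumption.
    diff = x.unsqueeze(1) - x.unsqueeze(0)                  # (N, N, d) pairwise differences
    k = torch.exp(-diff.pow(2).sum(-1) / (2 * h ** 2))      # (N, N) kernel matrix
    return (k.unsqueeze(-1) * diff).sum(1) / h ** 2         # points away from nearby particles

def guided_reverse_step(model, x_t, t, y_cond, f1, f2,
                        betas, alphas, alphas_cumprod, eta_g=0.1, eta_r=0.05):
    # f1, f2: differentiable (surrogate) objectives to minimize.
    # Fixed step sizes stand in for SPREAD's adaptive guidance.
    B = x_t.shape[0]
    with torch.no_grad():
        t_vec = torch.full((B,), t, device=x_t.device, dtype=torch.long)
        eps = model(x_t, t_vec, y_cond)
        a, a_bar, b = alphas[t], alphas_cumprod[t], betas[t]
        mean = (x_t - b / (1 - a_bar).sqrt() * eps) / a.sqrt()   # DDPM posterior mean
    x = x_t.detach().requires_grad_(True)
    g1 = torch.autograd.grad(f1(x).sum(), x)[0]
    g2 = torch.autograd.grad(f2(x).sum(), x)[0]
    d = min_norm_direction(g1, g2)                          # common descent direction
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean - eta_g * d + eta_r * rbf_repulsion(x_t.detach()) + b.sqrt() * noise

For more than two objectives, the closed-form alpha is replaced by solving a small quadratic program for the min-norm element of the gradients' convex hull, as in standard MGDA.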
Offline MOO
In offline multi-objective optimization, the true objective functions are unavailable. Instead, a pre-collected dataset of design points and their objective values is used to train a surrogate model that approximates the objectives. To adapt SPREAD to this setting, the training data is taken directly from this dataset, and the surrogate model is used in place of the true objectives during refinement.
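For illustration, a simple differentiable surrogate can be fit to the logged data and substituted for the true objectives during guided refinement; the MLP architecture and training loop below are assumptions, not the paper's surrogate choice.

import torch
import torch.nn as nn

def fit_surrogate(X, Y, epochs=200, lr=1e-3):
    # X: (N, n_vars) logged designs; Y: (N, n_objs) logged objective values (float tensors).
    # Architecture and training details are illustrative assumptions.
    net = nn.Sequential(nn.Linear(X.shape[1], 256), nn.ReLU(),
                        nn.Linear(256, 256), nn.ReLU(),
                        nn.Linear(256, Y.shape[1]))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(net(X), Y).backward()
        opt.step()
    return net   # e.g. f_k = lambda x: net(x)[:, k] during guided refinement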
Bayesian MOO
A key challenge in multi-objective Bayesian optimization (MOBO) is the limited evaluation budget of expensive objective functions, which is typically handled by using iteratively updated Gaussian-process surrogate models. To adapt SPREAD to this setting, we combine these surrogate models with simulated binary crossover (SBX) as an auxiliary mechanism to escape local optima, and use the data-extraction strategy introduced in CDM-PSL.
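SBX itself is a standard operator (Deb & Agrawal); a plain NumPy sketch is given below. The distribution index eta and the way offspring are fed back into the refinement loop are illustrative assumptions.

import numpy as np

def sbx(parent1, parent2, eta=15.0, rng=None):
    # Simulated binary crossover: offspring are concentrated near the parents,
    # with spread controlled by the distribution index eta.
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(parent1.shape)
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
    child1 = 0.5 * ((1.0 + beta) * parent1 + (1.0 - beta) * parent2)
    child2 = 0.5 * ((1.0 - beta) * parent1 + (1.0 + beta) * parent2)
    return child1, child2

In a MOBO loop, such offspring would typically be screened by the Gaussian-process surrogates (e.g. via posterior mean or an acquisition value) before any of the expensive true evaluations are spent on them.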
Results
SPREAD maintains strong performance as the number of objectives increases, providing superior coverage and diversity of the Pareto front in both synthetic and engineering benchmarks.
SPREAD effectively leverages static datasets to generate high-quality approximate Pareto fronts without any online queries, matching or even surpassing the performance of state-of-the-art offline multi-objective optimization techniques.
SPREAD consistently outperforms CDM-PSL, another diffusion-based generative method. This advantage stems from our novel conditioning strategy and our adaptive guidance mechanism, which steers samples more accurately toward the Pareto front, yielding stronger approximations overall.
Generated Pareto-optimal Points in the Online Setting
Approximate Pareto-optimal points obtained by different methods on four synthetic and four real-world problems. Results from five independent runs are merged, and only non-dominated points are displayed. SPREAD achieves better coverage of the Pareto fronts than the baselines.
BibTeX
@inproceedings{hotegni2026spread,
  title={{SPREAD}: Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion},
  author={Hotegni, Sedjro Salomon and Peitz, Sebastian},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=4731mIqv89}
}