SPREAD: Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion

Department of Computer Science, TU Dortmund University
Lamarr Institute for Machine Learning and Artificial Intelligence
ICLR 2026

SPREAD learns a conditional diffusion model over decision variables and then iteratively refines sampled candidates during reverse diffusion to move them toward Pareto-optimal regions. At each step, it guides the samples using adaptive multiple-gradient-descent directions to improve objectives while adding a repulsion term to maintain diversity along the Pareto front.

Abstract

Developing efficient multi-objective optimization methods to compute the Pareto set of optimal compromises between conflicting objectives remains a key challenge, especially for large-scale and expensive problems. To bridge this gap, we introduce SPREAD, a generative framework based on Denoising Diffusion Probabilistic Models (DDPMs). SPREAD first learns a conditional diffusion process over points sampled from the decision space and then, at each reverse diffusion step, refines candidates via a sampling scheme that uses an adaptive multiple-gradient-descent-inspired update for fast convergence alongside a Gaussian RBF–based repulsion term for diversity. Empirical results on multi-objective optimization benchmarks, including offline and Bayesian surrogate-based settings, show that SPREAD matches or exceeds leading baselines in efficiency, scalability, and Pareto front coverage.

DiT-MOO (noise-prediction network)

We use a Transformer-based diffusion architecture (DiT-MOO) adapted to multi-objective optimization. The noisy candidate solutions are first embedded into tokens and combined with time-step embeddings and conditioning on objective information, allowing the model to learn how solution quality relates to structure in the decision space.

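As an illustration of this input path, the sketch below combines a decision-variable token, a sinusoidal time-step embedding, and an objective-conditioning embedding into one Transformer input. All names (`sinusoidal_embedding`, `embed_inputs`) and the random projection matrices standing in for learned weights are hypothetical; this is a minimal sketch of the general DiT-style conditioning pattern, not the exact DiT-MOO implementation.

```python
import numpy as np

def sinusoidal_embedding(t, dim):
    """Standard sinusoidal time-step embedding used by diffusion Transformers."""
    half = dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / half)
    angles = t * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])

def embed_inputs(x_noisy, t, objectives, d_model, rng):
    """Sketch of the DiT-MOO input path: project the noisy decision vector to a
    token, then add time-step and objective-conditioning embeddings.
    W_x and W_y are random stand-ins for learned projection matrices."""
    W_x = rng.standard_normal((x_noisy.shape[-1], d_model)) * 0.02
    W_y = rng.standard_normal((objectives.shape[-1], d_model)) * 0.02
    token = x_noisy @ W_x                      # decision-variable token
    cond = objectives @ W_y                    # objective conditioning
    time = sinusoidal_embedding(t, d_model)    # diffusion-step embedding
    return token + cond + time                 # combined Transformer input

rng = np.random.default_rng(0)
h = embed_inputs(np.ones(5), t=10, objectives=np.array([0.3, 0.7]),
                 d_model=16, rng=rng)
print(h.shape)  # (16,)
```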

Training procedure

A key novelty of SPREAD’s training is the shifted-conditioning strategy, where the model is trained using slightly shifted objective values rather than the exact ones. This encourages the model to generate solutions that improve over the conditioning point at sampling time, effectively turning diffusion into a refinement mechanism rather than just a generator.
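The idea can be sketched as follows for a minimization problem: instead of conditioning on a solution's exact objective vector, condition on a slightly improved one, so that at sampling time conditioning on the current objectives biases the model toward better solutions. The function name and the relative shift size below are hypothetical choices, not the paper's exact schedule.

```python
import numpy as np

def shifted_conditioning(y, shift=0.05):
    """Sketch of shifted conditioning (minimization assumed): return objective
    values shifted slightly toward improvement. Training pairs each solution
    with this shifted vector instead of its true objectives, so conditioning
    at sampling time acts as a refinement signal."""
    return y - shift * np.abs(y)

# Objective values of two training points (two objectives each).
y = np.array([[1.0, 2.0],
              [0.5, 0.8]])
print(shifted_conditioning(y))  # each entry strictly below the original
```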

Training procedure diagram
Training procedure algorithm
Sampling procedure algorithm

Sampling procedure

SPREAD samples solutions by running reverse diffusion from random initial points and refining them at each step using MGD-inspired guidance for objective improvement, a repulsion term for diversity, and stochastic perturbations for stable exploration, producing a well-distributed approximation of the Pareto front.

Sampling procedure diagram
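A single refinement step can be sketched as below. For simplicity, the adaptive multiple-gradient-descent direction is replaced by an equal-weight average of the per-objective gradients, and the repulsion term uses a fixed-bandwidth Gaussian RBF kernel; both are simplifying assumptions, and all names are hypothetical.

```python
import numpy as np

def refinement_step(X, grads, step=0.1, bandwidth=0.5, noise=0.01, rng=None):
    """One simplified SPREAD-style refinement step:
    - descend along an equal-weight combination of per-objective gradients
      (a simplification of the adaptive MGD direction),
    - add a Gaussian-RBF repulsion term that pushes candidates apart,
    - add a small stochastic perturbation.
    X: (n, d) candidates; grads: (m, n, d) gradients of m objectives."""
    rng = rng or np.random.default_rng(0)
    descent = grads.mean(axis=0)                 # shared descent direction
    diff = X[:, None, :] - X[None, :, :]         # pairwise differences
    sqdist = (diff ** 2).sum(-1)
    k = np.exp(-sqdist / (2 * bandwidth ** 2))   # RBF kernel weights
    repulsion = (k[..., None] * diff).sum(axis=1) / bandwidth ** 2
    return X - step * descent + step * repulsion + noise * rng.standard_normal(X.shape)

# Toy bi-objective problem: f1(x) = ||x - a||^2, f2(x) = ||x + a||^2.
X = np.array([[1.0, 1.0],
              [1.2, 0.8]])
a = np.array([1.0, -1.0])
grads = np.stack([2 * (X - a), 2 * (X + a)])     # analytic gradients, shape (2, 2, 2)
X_new = refinement_step(X, grads)
print(X_new.shape)  # (2, 2)
```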

Offline MOO

In offline multi-objective optimization, the true objective functions are unavailable. Instead, a pre-collected dataset of design points and their objective values is used to train a surrogate model that approximates the objectives. To adapt SPREAD to this setting, the training data is taken directly from this dataset, and the surrogate model is used in place of the true objectives during refinement.
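A minimal sketch of the surrogate substitution, using a linear least-squares model as a stand-in for the learned surrogate (the paper's surrogate is a more expressive model; the linear choice here is only to keep the example self-contained):

```python
import numpy as np

# Pre-collected offline dataset: design points X_data and objective values
# Y_data (here generated from a known linear map A, used only for this demo).
rng = np.random.default_rng(1)
X_data = rng.standard_normal((100, 4))
A = rng.standard_normal((4, 2))
Y_data = X_data @ A

# Fit a linear surrogate by least squares: Y ≈ X W.
W, *_ = np.linalg.lstsq(X_data, Y_data, rcond=None)

def surrogate_grad(x):
    """Gradients of the surrogate objectives at x, used in refinement in
    place of the unavailable true gradients. For a linear model f(x) = x W,
    the gradient of objective j is column j of W, independent of x."""
    return W.T  # shape (m_objectives, d)

print(surrogate_grad(np.zeros(4)).shape)  # (2, 4)
```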

Bayesian MOO

A key challenge in multi-objective Bayesian optimization (MOBO) is the limited evaluation budget of expensive objective functions, which is typically handled by using iteratively updated Gaussian-process surrogate models. To adapt SPREAD to this setting, we combine these surrogate models with simulated binary crossover (SBX) as an auxiliary mechanism to escape local optima, and use the data-extraction strategy introduced in CDM-PSL.
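The SBX component follows the standard Deb–Agrawal formulation; a self-contained sketch is below. The function name and the default distribution index `eta` are illustrative choices, and the GP surrogate and CDM-PSL data-extraction parts are omitted.

```python
import numpy as np

def sbx_crossover(p1, p2, eta=15.0, rng=None):
    """Simulated binary crossover (Deb & Agrawal), used here as an auxiliary
    mechanism for escaping local optima. The distribution index eta controls
    how close offspring stay to the parents (larger eta => closer)."""
    rng = rng or np.random.default_rng(0)
    u = rng.random(p1.shape)
    beta = np.where(
        u <= 0.5,
        (2.0 * u) ** (1.0 / (eta + 1.0)),
        (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)),
    )
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return c1, c2

p1, p2 = np.array([0.0, 0.0]), np.array([1.0, 1.0])
c1, c2 = sbx_crossover(p1, p2)
print(c1 + c2)  # SBX preserves the parents' sum exactly: [1. 1.]
```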

Results

Generated Pareto-optimal Points in the Online Setting

Approximate Pareto-optimal points obtained by different methods on four synthetic and four real-world problems. Results from five independent runs are merged, and only non-dominated points are displayed. SPREAD achieves better coverage of the Pareto fronts than the baselines.

BibTeX

@inproceedings{
  hotegni2026spread,
  title={{SPREAD}: Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion},
  author={Hotegni, Sedjro Salomon and Peitz, Sebastian},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=4731mIqv89}
}