Learning to Optimize Multi-Objective Alignment Through Dynamic Reward Weighting

1. University of Notre Dame
2. Amazon

*Work done during internship at Amazon
Teaser figure

We aim to train models that achieve strong problem-solving ability (higher accuracy) with computational efficiency (fewer tokens) while maintaining interpretable reasoning processes (better clarity). Our dynamic reward weighting consistently builds superior Pareto fronts that dominate the baselines across all objectives, demonstrating its effectiveness in multi-objective alignment.

Abstract

Prior work in multi-objective reinforcement learning typically uses linear reward scalarization with fixed weights, which provably fails to capture non-convex Pareto fronts and thus yields suboptimal results. This limitation becomes especially critical in online preference alignment for large language models, where stochastic trajectories generated by parameterized policies create highly non-linear and non-convex mappings from parameters to objectives, for which no single static weighting scheme can find optimal trade-offs.
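For concreteness, fixed-weight linear scalarization collapses the k per-objective rewards into a single scalar using static weights chosen before training (the notation below is illustrative rather than taken from the paper):

$$ r(y) = \sum_{i=1}^{k} w_i \, r_i(y), \qquad w_i \ge 0, \quad \sum_{i=1}^{k} w_i = 1. $$

Maximizing such a scalar can only recover solutions on the convex hull of the attainable objective set, which is why points in non-convex regions of the Pareto front remain unreachable no matter how the fixed weights are chosen.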
We address this limitation by introducing dynamic reward weighting, which adaptively adjusts reward weights during the online reinforcement learning process. Unlike existing approaches that rely on fixed-weight interpolation, our dynamic weighting continuously balances and prioritizes objectives throughout training, facilitating effective exploration of Pareto fronts in the objective space.
We introduce two approaches of increasing sophistication and generalizability: (1) hypervolume-guided weight adaptation and (2) gradient-based weight optimization, offering a versatile toolkit for online multi-objective alignment. Our extensive experiments demonstrate their compatibility with commonly used online reinforcement learning algorithms (including GRPO, REINFORCE, and RLOO), effectiveness across multiple mathematical reasoning datasets, and applicability to different model families, consistently achieving Pareto-dominant solutions with fewer training steps than fixed-weight linear scalarization baselines.
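As a rough illustration of the first approach, the sketch below shows one way hypervolume feedback could steer reward weights inside an online RL loop. The 2-D hypervolume routine, the softmax-style update, and names such as adapt_weights and batch_mean are our own illustrative assumptions under a two-objective setting, not the paper's implementation.

import numpy as np

def hypervolume_2d(points, ref):
    # Area dominated by a 2-D maximization front relative to reference point `ref`.
    pts = sorted([p for p in points if p[0] > ref[0] and p[1] > ref[1]],
                 key=lambda p: p[0], reverse=True)
    hv, y_prev = 0.0, ref[1]
    for x, y in pts:
        if y > y_prev:                      # only non-dominated points add area
            hv += (x - ref[0]) * (y - y_prev)
            y_prev = y
    return hv

def adapt_weights(front, batch_mean, ref=(0.0, 0.0), eps=0.05):
    # Hypothetical update rule: weight each objective by the marginal hypervolume
    # gain from a small improvement (eps) of the current batch's mean reward
    # along that objective, then normalize the gains with a softmax.
    gains = []
    for i in range(len(batch_mean)):
        cand = list(batch_mean)
        cand[i] += eps
        gains.append(hypervolume_2d(front + [cand], ref) - hypervolume_2d(front, ref))
    gains = np.asarray(gains)
    return np.exp(gains) / np.exp(gains).sum()

# Example with two objectives (e.g., accuracy and brevity rewards, both maximized):
front = [(0.60, 0.30), (0.45, 0.55)]                  # objective vectors seen so far
weights = adapt_weights(front, batch_mean=(0.50, 0.40))
scalar_reward = float(np.dot(weights, (0.50, 0.40)))  # weighted reward fed to the RL update

The intent of this kind of rule is that objectives whose improvement would expand the current Pareto front the most receive larger weights at the next policy update, whereas a fixed-weight scheme keeps the same trade-off regardless of where the front currently sits.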

BibTeX

@misc{lu2025learningoptimizemultiobjectivealignment,
  title={Learning to Optimize Multi-Objective Alignment Through Dynamic Reward Weighting},
  author={Yining Lu and Zilong Wang and Shiyang Li and Xin Liu and Changlong Yu and Qingyu Yin and Zhan Shi and Zixuan Zhang and Meng Jiang},
  year={2025},
  eprint={2509.11452},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2509.11452},
}