Reinforcement learning approaches to therapeutic peptide generation suffer from mode collapse, converging to narrow regions of sequence space even when explicit diversity penalties are applied. Fine-grained analysis reveals persistent mode-seeking behavior that standard diversity metrics fail to detect.
We propose a GFlowNet-based generator for peptides, which samples sequences in proportion to their reward rather than maximizing expected reward. This objective yields diversity through proportional sampling itself, without explicit output diversity penalties. Compared with GRPO augmented by explicit diversity enforcement, the GFlowNet achieves substantially more uniform coverage of sequence space and fewer repetitive motifs. Critically, when diversity mechanisms are removed from the reward, GRPO collapses completely while the GFlowNet maintains natural diversity. These results demonstrate that proportional sampling is inherently robust to reward function design, a key advantage for drug discovery pipelines that require diverse candidates.
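To make the proportional-sampling objective concrete, the standard trajectory balance loss used to train GFlowNets can be sketched as follows (the symbols \(Z\), \(P_F\), and \(P_B\) denote the learned partition-function estimate and the forward/backward policies; they are not defined in this abstract and are shown here only as an illustrative formulation):

```latex
% Target distribution: sample complete sequences x with probability
% proportional to their reward
P(x) \propto R(x)

% Trajectory balance loss for a generation trajectory
% \tau = (s_0 \to s_1 \to \dots \to s_n = x),
% where each step appends one residue to the partial sequence
\mathcal{L}_{\mathrm{TB}}(\tau) =
\left(
  \log \frac{Z \prod_{t=0}^{n-1} P_F(s_{t+1} \mid s_t)}
            {R(x) \prod_{t=0}^{n-1} P_B(s_t \mid s_{t+1})}
\right)^{2}
```

Minimizing this loss drives the terminal distribution toward \(P(x) = R(x)/Z\), so every positive-reward mode retains probability mass; a reward-maximizing objective such as GRPO's instead concentrates mass on the highest-reward modes.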