…ention` (#504)
## Summary
> TLDR of #476: The shared prefix attention mask is an optimization for paired-preference alignment training.
To pave the way for #476, this PR sets up basic unit tests of FlexAttention with causal and shared-prefix masks.
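For reference, here is a minimal sketch of what these two masks could look like with FlexAttention's `mask_mod` interface (not the code from this PR; `PROMPT_LEN`/`CHOSEN_LEN`/`REJECTED_LEN` are made-up placeholder lengths for a packed `[prompt | chosen | rejected]` sequence, and FlexAttention requires PyTorch >= 2.5):

```python
import torch
from torch.nn.attention.flex_attention import create_block_mask, flex_attention

# Assumed toy segment lengths for a packed [prompt | chosen | rejected] sequence.
PROMPT_LEN, CHOSEN_LEN, REJECTED_LEN = 512, 256, 256
SEQ_LEN = PROMPT_LEN + CHOSEN_LEN + REJECTED_LEN

def causal_mask(b, h, q_idx, kv_idx):
    # standard lower-triangular causal mask
    return q_idx >= kv_idx

def shared_prefix_mask(b, h, q_idx, kv_idx):
    # causal, except the rejected response may not attend to the chosen response
    rejected_q = q_idx >= PROMPT_LEN + CHOSEN_LEN
    chosen_kv = (kv_idx >= PROMPT_LEN) & (kv_idx < PROMPT_LEN + CHOSEN_LEN)
    return (q_idx >= kv_idx) & ~(rejected_q & chosen_kv)

q = torch.randn(1, 8, SEQ_LEN, 64, device="cuda", dtype=torch.bfloat16)
k, v = torch.randn_like(q), torch.randn_like(q)

for mask_mod in (causal_mask, shared_prefix_mask):
    block_mask = create_block_mask(mask_mod, B=None, H=None,
                                   Q_LEN=SEQ_LEN, KV_LEN=SEQ_LEN, device="cuda")
    out = flex_attention(q, k, v, block_mask=block_mask)
    assert out.shape == q.shape
```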
## Testing Done
## Benchmarks
1. Causal Attention Mask (Flash Attention 2 vs. Torch Scaled Dot Product Attention vs. FlexAttention)

2. Shared Prefix Attention Mask (Flash Attention 2 vs. Torch Scaled Dot Product Attention vs. FlexAttention)
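As a rough idea of how such a comparison could be timed (this is not the benchmark script used here; shapes and iteration counts are arbitrary, and the Flash Attention 2 column would additionally need the external `flash-attn` package):

```python
import torch
import torch.nn.functional as F
from torch.nn.attention.flex_attention import create_block_mask, flex_attention

B, H, S, D = 4, 16, 2048, 64           # arbitrary benchmark shapes
q = torch.randn(B, H, S, D, device="cuda", dtype=torch.bfloat16)
k, v = torch.randn_like(q), torch.randn_like(q)

block_mask = create_block_mask(lambda b, h, qi, ki: qi >= ki,
                               B=None, H=None, Q_LEN=S, KV_LEN=S, device="cuda")
compiled_flex = torch.compile(flex_attention)

def bench(fn, warmup=3, iters=20):
    # simple CUDA-event timing helper
    for _ in range(warmup):
        fn()
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters

sdpa_ms = bench(lambda: F.scaled_dot_product_attention(q, k, v, is_causal=True))
flex_ms = bench(lambda: compiled_flex(q, k, v, block_mask=block_mask))
print(f"SDPA (causal): {sdpa_ms:.3f} ms | FlexAttention (causal): {flex_ms:.3f} ms")
```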

- Hardware Type: <BLANK>
- [ ] run `make test` to ensure correctness
- [ ] run `make checkstyle` to ensure code style
- [ ] run `make test-convergence` to ensure convergence
---------
Signed-off-by: Austin Liu <[email protected]>
Co-authored-by: Shao Tang <[email protected]>
🚀 The feature, motivation and pitch
In Accelerating Direct Preference Optimization with Prefix Sharing, the authors propose an efficient way to reduce the total number of training tokens in paired preference optimization by combining the shared prompt with both the chosen and rejected responses into a single sequence. As a result, the shared prompt is processed only once per training sample, eliminating redundant computation.
To do so, the method relies on a custom attention mask that masks out the region where the rejected response attends to the chosen response, ensuring that the two responses are computed independently of each other. For details, see the attention-mask diagram in the paper; a toy illustration of the mask pattern is sketched below.
This method extends beyond DPO (which is what the paper demonstrates) and is compatible with all offline paired preference optimization algorithms, including ORPO and SimPO.
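Since the paper's diagram is not reproduced here, here is a toy sketch of the mask pattern described above (assumed lengths: 3 prompt tokens, 2 chosen tokens, 2 rejected tokens; an illustration only, not the actual implementation):

```python
import torch

P, C, R = 3, 2, 2                      # toy prompt / chosen / rejected lengths (assumed)
S = P + C + R

mask = torch.tril(torch.ones(S, S, dtype=torch.bool))   # causal base over the packed sequence
mask[P + C:, P:P + C] = False          # rejected rows may not attend to chosen columns
print(mask.int())
# Rows 0..2: prompt (causal).
# Rows 3..4: chosen response (attends to prompt + causal within itself).
# Rows 5..6: rejected response (attends to prompt + causal within itself, chosen columns zeroed).
```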
Alternatives
No response
Additional context
https://github.com/frankxwang/dpo-prefix-sharing