This is an open-source SPLADE trainer built on Hugging Face's Trainer API, drawing on the SPLADE-related papers listed below. The project is licensed under the permissive MIT License.
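For context, SPLADE represents text as a sparse, vocabulary-sized vector by applying a log-saturated ReLU to a masked language model's logits and max-pooling over the sequence, as described in the papers below. The following is a minimal sketch of that representation, not this repository's actual training code; the checkpoint name is only an example.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Example public SPLADE checkpoint; any BERT-like MLM head works the same way.
model_name = "naver/splade-cocondenser-ensembledistil"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

def splade_vector(text: str) -> torch.Tensor:
    """Sparse |vocab|-sized vector: max over tokens of log(1 + ReLU(logits))."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits          # (1, seq_len, vocab_size)
    weights = torch.log1p(torch.relu(logits))    # log-saturation dampens large logits
    weights = weights * inputs["attention_mask"].unsqueeze(-1)  # ignore padding
    return weights.max(dim=1).values.squeeze(0)  # max-pool over the sequence

vec = splade_vector("what is splade?")
print(f"{(vec > 0).sum().item()} non-zero dims out of {vec.numel()}")
```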
This repository is experimental: breaking changes may be introduced frequently. If you depend on this project, it is recommended to fork the repository and pin a specific revision for stability.
See the Japanese SPLADE example (written in Japanese) for details.
We also provide a separate project, YASEM (Yet Another Splade | Sparse Embedder), which offers a more user-friendly way to work with SPLADE models.
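For comparison, YASEM targets straightforward inference. A rough usage sketch follows; the `SpladeEmbedder` class, its methods, and the model name are assumptions based on YASEM's documentation, so check that project's README for the authoritative API.

```python
from yasem import SpladeEmbedder  # assumed entry point; see the YASEM README

embedder = SpladeEmbedder("naver/splade-v3")  # example model name
embeddings = embedder.encode([
    "SPLADE produces sparse lexical embeddings.",
    "Dense retrievers use continuous vectors instead.",
])
print(embedder.similarity(embeddings, embeddings))  # assumed similarity helper
```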
We would like to express our gratitude to the researchers behind the original SPLADE papers for their outstanding contributions to this field:
- SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking
- SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval
- From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective
- An Efficiency Study for SPLADE Models
- A Static Pruning Study on Sparse Neural Retrievers
- SPLADE-v3: New baselines for SPLADE
- Minimizing FLOPs to Learn Efficient Sparse Representations (the FLOPS regularizer; see the sketch below)
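Among the techniques from these papers, the FLOPS regularizer (introduced in the last paper listed) is central to SPLADE training: it penalizes the squared mean activation of each vocabulary dimension over a batch, driving rarely used dimensions toward zero. Here is a minimal sketch; the function and variable names are ours, not this repository's.

```python
import torch

def flops_loss(reps: torch.Tensor) -> torch.Tensor:
    """FLOPS regularizer: sum over vocab dims of the squared mean activation.

    reps: (batch_size, vocab_size) non-negative SPLADE representations.
    """
    mean_per_dim = reps.mean(dim=0)   # average activation of each vocabulary dim
    return (mean_per_dim ** 2).sum()  # quadratic penalty favors sparse dims

reps = torch.rand(8, 30522)  # toy batch; real reps come from the model
print(flops_loss(reps))
```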
This project is licensed under the MIT License. See the LICENSE file for details.
Copyright (c) 2024 Yuichi Tateno (@hotchpotch)