This project is deprecated in favour of coqui-stt. Please use that project instead.
Rust bindings of Mozilla's DeepSpeech library.
The library is open source and performs Speech-To-Text completely offline. Pretrained models for English are provided.
Preparation:
- Obtain the DeepSpeech `native_client` library. The release announcement contains precompiled libraries for various targets.
- Download the pretrained models named like `deepspeech-{version}-models.tar.gz` from the release announcement and extract the archive to some location.
- Add the directory containing the `native_client` library to your `LD_LIBRARY_PATH` and `LIBRARY_PATH` environment variables (a small sanity check is sketched below).
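If you want to confirm that both variables are set before building, a minimal sketch in Rust (a hypothetical standalone check, not part of this crate) could look like this:

```rust
use std::env;

fn main() {
    // Print both variables so you can confirm they contain the native_client directory.
    for var in ["LD_LIBRARY_PATH", "LIBRARY_PATH"] {
        match env::var(var) {
            Ok(paths) => println!("{var} = {paths}"),
            Err(_) => eprintln!("{var} is not set; linking or loading libdeepspeech may fail"),
        }
    }
}
```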
You can now invoke the example via:
```
cargo run --release --example client <path-to-model-dir> <path-to-audio-file>
```
It will print out the recognized sequence on stdout. The format of the audio files is important: only mono files are supported for now.
All codecs supported by the awesome audrey library can be used.
See DeepSpeech's release announcement for more.
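For orientation, here is a minimal sketch of what such a client can look like in Rust. It assumes the 0.9 bindings' `Model::load_from_files` and `speech_to_text` functions and audrey's `Reader`, and further assumes the model directory contains a `.pbmm` graph and that the audio is already mono 16 kHz; treat it as a starting point rather than the exact example code:

```rust
use std::env::args;
use std::fs::File;
use std::path::Path;

use audrey::read::Reader;
use deepspeech::Model;

fn main() {
    // Mirror the invocation above: first argument is the model directory,
    // second argument is the audio file.
    let model_dir = args().nth(1).expect("please pass the model directory");
    let audio_path = args().nth(2).expect("please pass an audio file");

    // Find the exported model graph (assumed to be a .pbmm file) in the model directory.
    let model_path = Path::new(&model_dir)
        .read_dir()
        .expect("invalid model directory")
        .filter_map(|entry| entry.ok().map(|e| e.path()))
        .find(|p| p.extension().map_or(false, |ext| ext == "pbmm"))
        .expect("no .pbmm model file found");
    let mut model = Model::load_from_files(&model_path).expect("failed to load model");

    // Decode the audio with audrey. DeepSpeech expects mono 16 kHz i16 samples;
    // resampling is omitted here for brevity.
    let audio_file = File::open(&audio_path).expect("failed to open audio file");
    let mut reader = Reader::new(audio_file).expect("unsupported audio format");
    let samples: Vec<i16> = reader.samples().map(|s| s.expect("decode error")).collect();

    // Run speech-to-text and print the transcript to stdout.
    let transcript = model.speech_to_text(&samples).expect("recognition failed");
    println!("{}", transcript);
}
```

The full client under the `examples/` folder in the repository is the authoritative version.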
We currently support version `0.9.0` of the DeepSpeech library.
We will always try to provide compatibility with the most recent release possible.
Licensed under Apache 2 or MIT (at your option). For details, see the LICENSE file.
All examples inside the `examples/` folder are licensed under the CC-0 license.
The generated bindings (`sys` subdirectory in git, `-sys` crate on crates.io) fall under the Mozilla Public License, version 2.0.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed / CC-0 licensed as above, without any additional terms or conditions.