Currently, the library loads its own language model from disk into memory. However, I would like to customize this behavior. For language detection to work, it looks like the only behavior required is an implementation of `getRelativeFrequency` for the various n-gram language models.
This would allow me to have the option of storing the language model remotely (for example: in Redis), instead of needing hundreds of megabytes of memory for the in-memory language models.
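A minimal sketch of what such an extension point might look like, assuming a Java codebase and a Jedis client. The interface name `NgramFrequencyLookup`, the Redis key scheme, and the fallback value of `0.0` for unknown n-grams are all illustrative assumptions; only the `getRelativeFrequency` method name comes from the library itself.

```java
import redis.clients.jedis.JedisPooled;

// Hypothetical abstraction: the only behavior language detection needs
// from a model is a relative-frequency lookup per n-gram and language.
interface NgramFrequencyLookup {
    double getRelativeFrequency(String language, String ngram);
}

// Sketch of a Redis-backed implementation. The key scheme
// ("lm:<language>:<ngram-length>" mapping each n-gram to its frequency
// in a Redis hash) is an assumption, not part of the library.
class RedisFrequencyLookup implements NgramFrequencyLookup {
    private final JedisPooled redis;

    RedisFrequencyLookup(JedisPooled redis) {
        this.redis = redis;
    }

    @Override
    public double getRelativeFrequency(String language, String ngram) {
        String key = "lm:" + language + ":" + ngram.length();
        String value = redis.hget(key, ngram);
        // Unknown n-grams fall back to 0.0 (illustrative choice).
        return value != null ? Double.parseDouble(value) : 0.0;
    }
}
```

With an interface like this, the in-memory models become just one implementation, and a Redis-backed (or otherwise remote) implementation could be swapped in without changing the detection logic.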