
Releases: explosion/spaCy

v1.10.0: Alpha support for Thai & Russian, plus improvements and bug fixes

07 Nov 11:41

⚠️ Important note: This is a bridge release that publishes the current state of the v1.x branch. Stay tuned for v2.0.

✨ Major features and improvements

  • NEW: Alpha tokenization support for Thai and Russian.
  • NEW: Alpha support for Japanese part-of-speech tagging.
  • NEW: Dependency pattern-matching algorithm (see #1120).
  • Add support for getting a lowest common ancestor matrix via Doc.get_lca_matrix() (see the sketch after this list).
  • Improve capturing of English noun chunks.
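
As a quick illustration of the new lowest common ancestor matrix, here's a minimal sketch. It assumes an English model is installed, and the example sentence is made up. The method returns a square matrix with one row and column per token, where each cell holds the index of the lowest common ancestor of the two tokens in the dependency parse.

import spacy

nlp = spacy.load('en')
doc = nlp(u'I like New York in Autumn.')

# Square matrix of shape (len(doc), len(doc)); cell [i][j] is the index of
# the token that is the lowest common ancestor of tokens i and j.
lca = doc.get_lca_matrix()
print(lca[2][3])  # e.g. the LCA of "New" and "York" should be their head, "York"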

🔴 Bug fixes

  • Fix issue #1078: Simplify URL pattern.
  • Fix issue #1174: Fix NER model loading bug and make sure JSON keys are loaded as strings.
  • Fix issue #1291: Document correct JSON format for training.
  • Fix issue #1292: Fix error when adding custom infix rules.
  • Fix issue #1387: Ensure that lemmatizer respects exception rules.
  • Fix issue #1410: Support single value for attribute list in Doc.to_scalar and Doc.to_array.

📖 Documentation and examples

  • Document correct JSON format for training.
  • Fix various typos and inconsistencies.

👥 Contributors

Thanks to @raphael0202, @gideonite, @delirious-lettuce, @polm, @kevinmarsh, @IamJeffG, @Vimos, @ericzhao28, @galaxyh, @hscspring, @wannaphongcom, @Wellan89, @kokes, @mdcclv, @ameyuuno, @ramananbalakrishnan, @Demfier, @johnhaley81, @mayukh18 and @jnothman for the pull requests and contributions.

v1.9.0: Spanish model, alpha support for Norwegian & Japanese, and bug fixes

22 Jul 16:28

Thanks to all of you for 5,000 stars on GitHub, the valuable feedback in the user survey and testing spaCy v2.0 alpha. We're working hard on getting the new version ready and can't wait to release it. In the meantime, here's a new release for the 1.x branch that fixes a variety of outstanding bugs and adds capabilities for new languages.

💌 P.S.: If you haven't gotten your hands on a set of spaCy stickers yet, you can still do so – send us a DM with your address on Twitter or Gitter, and we'll mail you some!


✨ Major features and improvements

  • NEW: The first official Spanish model (377 MB) including vocab, syntax, entities and word vectors. Thanks to the amazing folks at recogn.ai for the collaboration!
python -m spacy download es
nlp = spacy.load('es')
doc = nlp(u'Esto es una frase.')
  • NEW: Alpha tokenization for Norwegian Bokmål and Japanese (via Janome).
  • NEW: Allow dropout training for Parser and EntityRecognizer, using the drop keyword argument to the update() method.
  • NEW: Glossary for the POS, dependency and NER annotation schemes via spacy.explain(). For example, spacy.explain('NORP') will return "Nationalities or religious or political groups" (see the sketch after this list).
  • Improve language data for Dutch, French and Spanish.
  • Add Language.parse_tree method to generate POS tree for all sentences in a Doc.
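
A minimal sketch of the new glossary lookup. The 'NORP' description is taken from the note above; the other labels and their descriptions are illustrative examples of annotation scheme entries.

import spacy

print(spacy.explain('NORP'))    # "Nationalities or religious or political groups"
print(spacy.explain('advmod'))  # description of the dependency label, e.g. "adverbial modifier"
print(spacy.explain('VBZ'))     # description of the fine-grained part-of-speech tag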

🔴 Bug fixes

  • Fix issue #1031: Close gaps in Lexeme API.
  • Fix issue #1034: Add annotation scheme glossary and spacy.explain().
  • Fix issue #1051: Improve error messages when trying to load a non-existent model.
  • Fix issue #1052: Add missing SP symbol to tag map.
  • Fix issue #1061: Add flush_cache method to tokenizer.
  • Fix issue #1069: Fix Doc.sents iterator when customised with a generator.
  • Fix issue #1099, #1143: Improve documentation on models in requirements.txt.
  • Fix issue #1137: Use lower min version for requests dependency.
  • Fix issue #1207: Fix Span.noun_chunks.
  • Fix issue with six and its dependencies that occasionally caused spaCy to fail.
  • Fix typo in package command that caused an error when printing error messages.

📖 Documentation and examples

  • Fix various typos and inconsistencies.
  • NEW: spaCy 101 guide for v2.0: all important concepts, explained with examples and illustrations. Note that some of the behaviour and examples are specific to v2.0+ – but the NLP basics are relevant independent of the spaCy version you're using.

👥 Contributors

Thanks to @kengz, @luvogels, @ferdous-al-imran, @uetchy, @akYoung, @pasupulaphani, @dvsrepo, @raphael0202, @yuvalpinter, @frascuchon, @kootenpv, @oroszgy, @bartbroere, @ianmobbs, @garfieldnate, @polm, @callumkift, @swierh, @val314159, @lgenerknol and @jsparedes for the contributions!

v2.0.0 alpha: Neural network models, Pickle, better training & lots of API improvements

05 Jun 19:05

PyPI last update: 2.0.0rc2, 2017-11-07

This is an alpha pre-release of spaCy v2.0.0, available on pip as spacy-nightly. It's not intended for production use. The alpha documentation is available at alpha.spacy.io. Please note that the docs reflect the library's intended state on release, not the current state of the implementation. For bug reports, feedback and questions, see the spaCy v2.0.0 alpha thread.

Before installing v2.0.0 alpha, we recommend setting up a clean environment.

pip install spacy-nightly

The models are still under development and will keep improving. For more details, see the benchmarks below. There will also be additional models for German, French and Spanish.

| Name | Lang | Capabilities | Size | spaCy |
| --- | --- | --- | --- | --- |
| en_core_web_sm-2.0.0a4 | en | Parser, Tagger, NER | 42MB | >=2.0.0a14 |
| en_vectors_web_lg-2.0.0a0 | en | Vectors (GloVe) | 627MB | >=2.0.0a10 |
| xx_ent_wiki_sm-2.0.0a0 | multi | NER | 12MB | <=2.0.0a9 |

You can download a model by using its name or shortcut. To load a model, use spaCy's loader, e.g. nlp = spacy.load('en_core_web_sm'), or import it as a module (import en_core_web_sm) and call its load() method, e.g. nlp = en_core_web_sm.load().

python -m spacy download en_core_web_sm
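
Both loading styles described above, side by side; this assumes the en_core_web_sm alpha package has already been downloaded as shown.

import spacy
nlp = spacy.load('en_core_web_sm')  # load via model name or shortcut link

import en_core_web_sm               # or import the model package directly
nlp = en_core_web_sm.load()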

📈 Benchmarks

The evaluation was conducted on raw text with no gold standard information. Speed and accuracy are currently comparable to the v1.x models: speed on CPU is slightly lower, while accuracy is slightly higher. We expect performance to improve quickly between now and the release date, as we run more experiments and optimise the implementation.

| Model | spaCy | Type | UAS | LAS | NER F | POS | Words/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| en_core_web_sm-2.0.0a4 | v2.x | neural | 91.9 | 90.0 | 85.0 | 97.1 | 10,000 |
| en_core_web_sm-2.0.0a3 | v2.x | neural | 91.2 | 89.2 | 85.3 | 96.9 | 10,000 |
| en_core_web_sm-2.0.0a2 | v2.x | neural | 91.5 | 89.5 | 84.7 | 96.9 | 10,000 |
| en_core_web_sm-1.1.0 | v1.x | linear | 86.6 | 83.8 | 78.5 | 96.6 | 25,700 |
| en_core_web_md-1.2.1 | v1.x | linear | 90.6 | 88.5 | 81.4 | 96.7 | 18,800 |

✨ Major features and improvements

  • NEW: Neural network model for English (comparable performance to the >1GB v1.x models) and multi-language NER (still experimental).
  • NEW: GPU support via Chainer's CuPy module.
  • NEW: Strings are now resolved to hash values, instead of mapped to integer IDs. This means that the string-to-int mapping no longer depends on the vocabulary state.
  • NEW: Trainable document vectors and contextual similarity via convolutional neural networks.
  • NEW: Built-in text classification component.
  • NEW: Built-in displaCy visualizers with Jupyter notebook support (see the sketch after this list).
  • NEW: Alpha tokenization for Danish, Polish and Indonesian.
  • Improved language data, support for lazy loading and simple, lookup-based lemmatization for English, German, French, Spanish, Italian, Hungarian, Portuguese and Swedish.
  • Improved language processing pipelines and support for custom, model-specific components.
  • Improved and consistent saving, loading and serialization across objects, plus Pickle support.
  • Revised matcher API to make it easier to add and manage patterns and callbacks in one step.
  • Support for multi-language models and new MultiLanguage class (xx).
  • Entry point for spacy command to use instead of python -m spacy.
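
As a quick illustration of the built-in visualizers mentioned above, here's a minimal sketch assuming the v2.0 alpha API; the example text and model name are placeholders.

import spacy
from spacy import displacy

nlp = spacy.load('en_core_web_sm')
doc = nlp(u'Apple is looking at buying a U.K. startup.')

displacy.render(doc, style='ent', jupyter=True)  # highlight named entities in a Jupyter notebook
displacy.serve(doc, style='dep')                 # serve the dependency visualization in the browser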

🚧 Work in progress (not yet implemented)

  • NEW: Neural network models for German, French and Spanish.
  • NEW: Binder, a container class for serializing collections of Doc objects.

🔴 Bug fixes

  • Fix issue #125, #228, #299, #377, #460, #606, #930: Add full Pickle support.
  • Fix issue #152, #264, #322, #343, #437, #514, #636, #785, #927, #985, #992, #1011: Fix and improve serialization and deserialization of Doc objects.
  • Fix issue #512: Improve parser to prevent it from returning two ROOT objects.
  • Fix issue #524: Improve parser and handling of noun chunks.
  • Fix issue #621: Prevent double spaces from changing the parser result.
  • Fix issue #664, #999, #1026: Fix bugs that would prevent loading trained NER models.
  • Fix issue #671, #809, #856: Fix importing and loading of word vectors.
  • Fix issue #753: Resolve bug that would tag OOV items as personal pronouns.
  • Fix issue #905, #1021, #1042: Improve parsing model and allow faster accuracy updates.
  • Fix issue #995: Improve punctuation rules for Hebrew and other non-Latin languages.
  • Fix issue #1008: train command finally works correctly if used without dev_data.
  • Fix issue #1012: Improve documentation on model saving and loading.
  • Fix issue #1043: Improve NER models and allow faster accuracy updates.
  • Fix issue #1051: Improve error messages if functionality needs a model to be installed.
  • Fix issue #1071: Correct typo of "whereve" in English tokenizer exceptions.
  • Fix issue #1088: Emoji are now split into separate tokens wherever possible.

🚧 Work in progress (not yet implemented)

📖 Documentation and examples

🚧 Work in progress (not yet implemented)

⚠️ Backwards incompatibilities

Note that the old v1.x models are not compatible with spaCy v2.0.0. If you've trained your own models, you'll have to re-train them to be able to use them with the new version. For a full overview of changes in v2.0, see the alpha documentation and guide on migrating from spaCy 1.x.

Loading models

spacy.load() is now only intended for loading models – if you need an empty language class, import it directly instead, e.g. from spacy.lang.en import English. If the model you're loading is a shortcut link or package name, spaCy will expect it to be a model package, import it and call its load() method. If you supply a path, spaCy will expect it to be a model data directory and use the meta.json to initialise a language class and call nlp.from_disk() with the data path.

nlp = spacy.load('en')
nlp = spacy.load('en_core_web_sm')
nlp = spacy.load('/model-data')
nlp = English().from_disk('/model-data')
# OLD: nlp = spacy.load('en', path='/model-data')

Hash values instead of integer IDs

The StringStore now resolves all strings to hash values instead of integer IDs. This means that the string-to-int mapping no longer depends on the vocabulary state, making a lot of workflows much simpler, especially during training. However, you still need to make sure all objects have access to the same Vocab. Otherwise, spaCy won't be able to resolve hashes back to their string values.

nlp.vocab.strings[u'coffee']       # 3197928453018144401
other_nlp.vocab.strings[u'coffee'] # 3197928453018144401
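
Hashes can be resolved back to strings in the same way, as long as the Vocab in question has seen the string before; a short sketch of the round trip described above:

doc = nlp(u'I love coffee')
coffee_hash = nlp.vocab.strings[u'coffee']  # 3197928453018144401
nlp.vocab.strings[coffee_hash]              # u'coffee'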

Serialization

spaCy's [serializ...


v1.8.2: French model and small improvements

26 Apr 18:51

We've been delighted to see spaCy growing so much over the last few months. Before the v1.0 release, we asked for your feedback, which has been incredibly helpful in improving the library. As we're getting closer to v2.0 we hope you'll take a few minutes to fill out the survey, to help us understand how you're using the library, and how it can be better.

📊 Take the survey!


✨ Major features and improvements

  • Move model shortcuts to shortcuts.json to allow adding new ones without updating spaCy.
  • NEW: The first official French model (~1.3 GB) including vocab, syntax and word vectors.
python -m spacy download fr_depvec_web_lg
import fr_depvec_web_lg

nlp = fr_depvec_web_lg.load()
doc = nlp(u'Parlez-vous Français?')

🔴 Bug fixes

  • Fix reporting if train command is used without dev_data.
  • Fix issue #1019: Make Span hashable.

📖 Documentation and examples

👥 Contributors

Thanks to @raphael0202 and @julien-c for the contributions!

v1.8.1: Saving, loading and training bug fixes

23 Apr 20:00

We've been delighted to see spaCy growing so much over the last few months. Before the v1.0 release, we asked for your feedback, which has been incredibly helpful in improving the library. As we're getting closer to v2.0 we hope you'll take a few minutes to fill out the survey, to help us understand how you're using the library, and how it can be better.

📊 Take the survey!


🔴 Bug fixes

  • Fix issue #988: Ensure noun chunks can't be nested.
  • Fix issue #991: convert command now uses Python 2/3 compatible json.dumps.
  • Fix issue #995: Use regex library for non-Latin characters to simplify punctuation rules.
  • Fix issue #999: Fix parser and NER model saving and loading.
  • Fix issue #1001: Add SPACE to Spanish tag map.
  • Fix issue #1008: train command now works correctly if used without dev_data.
  • Fix issue #1009: Language.save_to_directory() now converts strings to pathlib paths.

📖 Documentation and examples

👥 Contributors

Thanks to @dvsrepo, @beneyal and @oroszgy for the pull requests!

v1.8.0: Better NER training, saving and loading

16 Apr 21:33

We've been delighted to see spaCy growing so much over the last few months. Before the v1.0 release, we asked for your feedback, which has been incredibly helpful in improving the library. As we're getting closer to v2.0 we hope you'll take a few minutes to fill out the survey, to help us understand how you're using the library, and how it can be better.

📊 Take the survey!


✨ Major features and improvements

  • NEW: Add experimental Language.save_to_directory() method to make it easier to save user-trained models (see the sketch after this list).
  • Add spacy.compat module to handle platform and Python version compatibility.
  • Update package command to read from existing meta.json and supply custom location to meta file.
  • Fix various compatibility issues and improve error messages in spacy.cli.
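
A minimal sketch of the experimental saving method; the path is a placeholder, and this assumes save_to_directory() takes a directory path as described above.

import spacy

nlp = spacy.load('en')
# ... update the pipeline, e.g. train the entity recognizer ...
nlp.save_to_directory('/path/to/my_model')  # experimental in v1.8.0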

🔴 Bug fixes

  • Fix issue #701, #822, #937, #959: Updated docs for NER training and saving/loading.
  • Fix issue #968: spacy.load() now prints warning if no model is found.
  • Fix issue #970, #978: Use correct unicode paths for symlinks on Python 2 / Windows.
  • Fix issue #973: Make token.lemma and token.lemma_ attributes writeable.
  • Fix issue #983: Add spacy.compat to handle compatibility.

📖 Documentation and examples

👥 Contributors

Thanks to @tsohil and @oroszgy for the pull requests!

v1.7.5: Bug fixes and new CLI commands

07 Apr 17:02

We've been delighted to see spaCy growing so much over the last few months. Before the v1.0 release, we asked for your feedback, which has been incredibly helpful in improving the library. As we're getting closer to v2.0 we hope you'll take a few minutes to fill out the survey, to help us understand how you're using the library, and how it can be better.

📊 Take the survey!


✨ Major features and improvements

  • NEW: Experimental convert and model commands to convert files to spaCy's JSON format for training, and to initialise a new model and its data directory (see the sketch after this list).
  • Updated language data for Spanish and Portuguese.
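
For example, converting an annotated corpus to spaCy's JSON training format might look like the following. The file paths are placeholders, and the exact arguments of these experimental commands may differ, so check python -m spacy convert --help.

python -m spacy convert /path/to/train.conllu /output/dir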

🔴 Bug fixes

  • Error messages now show the new download commands if no model is loaded.
  • The package command now works correctly and doesn't fail when creating files.
  • Fix issue #693: Improve rules for detecting noun chunks.
  • Fix issue #758: Adding labels no longer causes an EntityRecognizer transition bug.
  • Fix issue #862: label keyword argument is now handled correctly in doc.merge().
  • Fix issue #891: Tokens containing / infixes are now split by the tokenizer.
  • Fix issue #898: Dependencies are now deprojectivized correctly.
  • Fix issue #910: NER models with new labels are now saved correctly, preventing memory errors.
  • Fix issue #934, #946: Symlink paths are now handled correctly on Windows, preventing invalid switch error.
  • Fix issue #947: Hebrew module is now added to setup.py and __init__.py.
  • Fix issue #948: Contractions are now lemmatized correctly.
  • Fix issue #957: Use regex module to avoid back-tracking on URL regex.

📖 Documentation and examples

👥 Contributors

Thanks to @ericzhao28, @Gregory-Howard, @kinow, @jreeter, @mamoit, @kumaranvpl and @dvsrepo for the pull requests!

v1.7.3: Alpha support for Hebrew, new CLI commands and bug fixes

26 Mar 15:08

✨ Major features and improvements

  • NEW: Alpha tokenization for Hebrew.
  • NEW: Experimental train and package commands to train a model and convert it to a Python package.
  • Enable experimental support for L1-regularized regression loss in dependency parser and named entity recognizer. Should improve fine-tuning of existing models.
  • Fix high memory usage in download command.

🔴 Bug fixes

  • Fix issue #903, #912: Base forms are now correctly protected from lemmatization.
  • Fix issue #909, #925: Use mklink to create symlinks in Python 2 on Windows.
  • Fix issue #910: Update config when adding label to pre-trained model.
  • Fix issue #911: Delete old training scripts.
  • Fix issue #918: Use --no-cache-dir when downloading models via pip.
  • Fix infinite recursion in spacy.info.
  • Fix initialisation of languages when no model is available.

📖 Documentation and examples

👥 Contributors

Thanks to @raphael0202, @pavlin99th, @iddoberger and @solresol for the pull requests!

v1.7.2: Small fixes to beam parser and model linking

20 Mar 12:37

🔴 Bug fixes

  • Success message in the link command is now displayed correctly when using local paths.
  • Decrease beam density and fix Python 3 problem in beam_parser.
  • Fix issue #894: Model packages now install and compile paths correctly on Windows.

📖 Documentation and examples

v1.7.1: Fix data download for system installation

19 Mar 10:42

🔴 Bug fixes

  • Fix issue #892: Data now downloads and installs correctly on system Python.