
Compute TextModel limits before initializing the tokenizer #240919

Merged
merged 1 commit into microsoft:main on Feb 19, 2025

Conversation

jamestut
Contributor

This PR addresses issue #240918, where large files are still tokenized even when the editor.largeFileOptimizations option is enabled.

The cause is that _tokenizationTextModelPart is initialized before _isTooLargeForTokenization is computed, so the tokenizer never sees the correct flag. This PR moves the computation of _isTooLargeForTokenization ahead of the initialization of _tokenizationTextModelPart.
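The fix boils down to reordering field initialization in the TextModel constructor. Below is a minimal TypeScript sketch of that pattern: the names _isTooLargeForTokenization and _tokenizationTextModelPart come from the PR description, but the surrounding class shape, the LargeFileLimits type, and the longestLineLength helper are simplified stand-ins for illustration, not VS Code's actual implementation.

```typescript
// Sketch of the constructor-ordering fix described above.
// Simplified stand-in types; not VS Code's real TextModel.

interface LargeFileLimits {
  maxFileSizeChars: number;
  maxLineLength: number;
}

class TokenizationTextModelPart {
  // If the model is too large, skip setting up the (expensive) tokenizer.
  constructor(private readonly _isTooLargeForTokenization: boolean) {
    if (!this._isTooLargeForTokenization) {
      // ...initialize tokenization state here...
    }
  }
}

class TextModel {
  private readonly _isTooLargeForTokenization: boolean;
  private readonly _tokenizationTextModelPart: TokenizationTextModelPart;

  constructor(text: string, limits: LargeFileLimits) {
    // Before the fix, _tokenizationTextModelPart was constructed first,
    // while _isTooLargeForTokenization still held its default value, so
    // large files were tokenized anyway. Computing the limit check first
    // lets the tokenization part observe the correct flag.
    this._isTooLargeForTokenization =
      text.length > limits.maxFileSizeChars ||
      longestLineLength(text) > limits.maxLineLength;

    this._tokenizationTextModelPart = new TokenizationTextModelPart(
      this._isTooLargeForTokenization
    );
  }
}

// Hypothetical helper: length of the longest line in the buffer.
function longestLineLength(text: string): number {
  return text.split('\n').reduce((max, line) => Math.max(max, line.length), 0);
}
```

The ordering matters because class field initialization in a constructor is strictly sequential; a part constructed earlier cannot observe a flag assigned later.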

@alexdima enabled auto-merge (squash) on February 19, 2025, 18:24
@alexdima
Member

Thank you!

@alexdima added this to the February 2025 milestone on Feb 19, 2025
@alexdima merged commit e1c80bc into microsoft:main on Feb 19, 2025
7 checks passed