
Patch moonshine #35731

Merged
merged 12 commits into huggingface:main
Jan 20, 2025

Conversation

@eustlb (Contributor) commented Jan 16, 2025

What does this PR do?

This PR:

  • fixes typos in the documentation
  • updates the expected logits to the values produced on T4 runners
  • fixes inputs that were badly handled in the generate wrapper (when no attention_mask is provided) by simply removing that wrapper

Justification for the last point: as discussed, handling per-batch-index max_lengths is going to be added directly to generate (see #35676). In the future, Moonshine's processor will return max_length=[val_1, ..., val_n], intended to be passed directly as generate's max_length parameter. For this reason, max_length should be computed outside generate, so the current wrapper is not necessary. The wrapper was intended to reduce hallucinations, which are an edge case but affect evaluation on a test set compared to the original codebase. Model cards and recommended usage will be updated with the correct usage:

from transformers import AutoProcessor, MoonshineForConditionalGeneration
from datasets import load_dataset

processor = AutoProcessor.from_pretrained("UsefulSensors/moonshine-tiny")
model = MoonshineForConditionalGeneration.from_pretrained("UsefulSensors/moonshine-tiny")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor([ds[0]["audio"]["array"], ds[0]["audio"]["array"]], return_tensors="pt")
input_values = inputs.input_values

token_limit_factor = 6.5 / processor.feature_extractor.sampling_rate  # Maximum of 6.5 tokens per second
seq_lens = inputs.attention_mask.sum(dim=-1)  # number of audio samples per batch entry
max_length = int((seq_lens * token_limit_factor).max().item())  # scalar token budget for the batch

generated_ids = model.generate(input_values, max_length=max_length)
transcription = processor.batch_decode(generated_ids, skip_special_tokens=False)
print(transcription)
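
As a minimal sketch of the per-batch-index limits mentioned above, the per-sample budgets can be derived from seq_lens and token_limit_factor in the snippet. Passing a list as max_length is an assumption about the future behavior tracked in #35676 and is not supported yet, which is why the snippet above only uses the scalar max():

# Hypothetical: one token budget per batch entry, derived from each clip's length.
# generate does not accept a list for max_length today (see #35676), so this only
# illustrates the intended future usage.
per_sample_max_length = (seq_lens * token_limit_factor).long().tolist()
print(per_sample_max_length)  # e.g. [max_length_1, ..., max_length_n]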

@ArthurZucker (Collaborator) left a comment

Thanks! As discussed, let's mention why this is needed in the patch!

@ArthurZucker ArthurZucker merged commit 5f0f4b1 into huggingface:main Jan 20, 2025
16 checks passed
ArthurZucker pushed a commit that referenced this pull request Jan 20, 2025
* update expected logits for T4 runners

* update doc

* correct order of the args for better readability

* remove generate wrap

* convert modular