
AudioMixer with effects #8974

Open · JohnStabler opened this issue Feb 23, 2024 · 31 comments
@JohnStabler

It would be nice if effects could be added to the voices on the audio mixer. For example: pitch (playback rate and/or time-stretch), reverb and EQ maybe?

This can't be done near-realtime in Python, so it is better suited to a compiled library. Most effects would require buffering but should be achievable on something like a Pico.
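To make the buffering point concrete, here is a minimal Python sketch of a feedback echo's inner loop (all names and constants are illustrative; as noted above, this per-sample loop is exactly the kind of work that is too slow in Python and belongs in compiled code):

import array

DELAY_SAMPLES = 11025                              # 0.5 s of delay at 22050 Hz
DECAY = 0.7                                        # how much of the echo feeds back
delay_buf = array.array("h", [0] * DELAY_SAMPLES)  # ring buffer of past samples
pos = 0

def process(sample: int) -> int:
    global pos
    echoed = sample + int(delay_buf[pos] * DECAY)  # mix in the delayed signal
    echoed = max(-32768, min(32767, echoed))       # clamp to 16-bit range
    delay_buf[pos] = echoed                        # feed the output back into the delay line
    pos = (pos + 1) % DELAY_SAMPLES                # advance through the ring buffer
    return echoed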

@tannewt tannewt added the audio label Feb 23, 2024
@tannewt tannewt added this to the Long term milestone Feb 23, 2024
@gamblor21
Member

I wanted to add effects on synth channels and saw this issue about adding audio effects. After some discussion on Discord and looking through the code and some other audio libraries, I see all the audio code uses ..._get_buffer_structure and ..._get_buffer to pass information from one component to another. In this way audio components can be chained together, e.g. synthio.Synthesizer can play into audiomixer.Mixer, which can play into audiobusio.I2SOut to get to the speaker.
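As a conceptual illustration of that chaining pattern, a minimal Python sketch (the real protocol lives in C inside the core; the class and method names here are illustrative only, not the actual signatures):

class PassThroughEffect:
    """An 'effect' that just forwards buffers from whatever plays into it."""

    def __init__(self):
        self._source = None

    def play(self, source):
        # upstream sample source: a synth, WaveFile, mixer, another effect...
        self._source = source

    def get_buffer(self):
        buf = self._source.get_buffer()  # pull one block from upstream
        # a real effect would transform buf here before passing it on
        return buf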

My idea is to create a new module, AudioEffects, so it is easy to exclude for smaller builds. The effects would each implement the buffer protocol in the same way current audio modules do. Hooks would have to be built into any audio module that wants to support the effects, something like mixer.voice[0].addeffect(effectName). Assuming a common protocol, a similar .addeffect could also be added to a synth or synth channel so the developer can add the effect at the point they want. In theory effects could be hooked between each other.

At this point I have done a proof-of-concept to see if a basic echo was possible on an RP2350. It worked fine, and the memory allowed for a long buffer delay. The POC was done without setting up a module, just as an in-place test. If this sounds like a decent idea, I'd next start on a more complete test with a separate module, maybe hooked into two or more audio components.

Anyone have any thoughts or comments?

@tannewt
Member

tannewt commented Sep 4, 2024

I wouldn't do mixer.voice[0].addeffect(effectName). Instead, I think the effect could be a separate object that you'd "play" into the mixer voice or vice versa.

I like the idea of separate modules for different effects.

@todbot and @jepler may have thoughts too.

@gamblor21
Member

Putting some ideas down both for others to critique and to organize my own thoughts:

This is basic code to play a note:

audio = audiobusio.I2SOut(...)                     # final output device
mixer = audiomixer.Mixer(voice_count=1, channel_count=1, ...)
audio.play(mixer)                                  # mixer feeds the output
synth = synthio.Synthesizer(channel_count=1, sample_rate=44100)
mixer.voice[0].play(synth)                         # synth feeds mixer voice 0
note = synthio.Note(261)                           # ~middle C
synth.press(note)                                  # start the note (synthio uses press/release)

There are two potential ways to add effects. The first is as tannewt suggested:

# ... similar to above
echo = audioeffects.EffectEcho(delay=50, decay=0.7) # define an echo effect
echo.play(synth)
mixer.voice[0].play(echo)

In theory effects could be chained together, e.g. chorus.play(synth) followed by echo.play(chorus).
The pro for this method is that almost all existing audio objects would support effects with no code change.
The downside is I'm not sure how easy it would be to dynamically change effects on the fly, and the effects are only granular to an audio object like Mixer, WaveFile, or Synth, not to a specific synth channel. Potentially a special case could be added for synth channels.

Another way is to add effects to audio objects.

# ... similar to above
echo = audioeffects.EffectEcho(delay=50, decay=0.7) # define an echo effect
mixer.voice[0].play(synth)
synth.addeffect(echo)

This method also allows effects to be chained by calling addeffect again, e.g. synth.addeffect(chorus).
Pros for this method include dynamically changing effects, and likely being able to add effects on a per-channel basis in the synth.
The downside is having to add addeffect to any audio object that wants to support effects.

Anyone have any thoughts? Are dynamically changing effects a good idea? Would we want effects on a per channel basis in the synth?

In the meantime I'm going to keep trying things out to see what may work best as time permits.

@todbot

todbot commented Sep 7, 2024

First, this is very cool!
I like the .addeffect() technique a bit more, for the reasons @gamblor21 mentions, but I'd rather have it add to Mixer instead of Synthesizer, so that other items played with the mixer can get the benefit of the effects. (Many physical mixers do this, having effects busses you feed channels into)

So that would I guess look like:

echo = audioeffects.EffectEcho(delay=50, decay=0.7) # define an echo effect
mixer.voice[0].play(synth)
mixer.voice[1].play(wavfile)
mixer.addeffect(echo)  # add effect to mixed output
# and then also
echo2 = audioeffects.EffectEcho(delay=150, decay=0.7) # define another echo effect
mixer.voice[0].addeffect(echo2)  # add effect only to voice 0

@tannewt
Member

tannewt commented Sep 9, 2024

> Anyone have any thoughts? Are dynamically changing effects a good idea? Would we want effects on a per channel basis in the synth?

I think it's important to think about how you'd remove an effect as well. I do think we want to change them dynamically.

I don't think we need them on a synth channel basis. Instead, you can have two synths.
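For concreteness, a sketch of that two-synth arrangement, reusing the hypothetical audioeffects.EffectEcho API from the examples above and the setup code earlier in the thread:

# one dry synth and one effected synth, each on its own mixer voice
mixer = audiomixer.Mixer(voice_count=2, channel_count=1, sample_rate=44100)
synth_dry = synthio.Synthesizer(channel_count=1, sample_rate=44100)
synth_wet = synthio.Synthesizer(channel_count=1, sample_rate=44100)
echo = audioeffects.EffectEcho(delay=50, decay=0.7)
echo.play(synth_wet)
mixer.voice[0].play(synth_dry)  # un-effected voice
mixer.voice[1].play(echo)       # echoed voice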

@todbot

todbot commented Sep 9, 2024

> I don't think we need them on a synth channel basis. Instead, you can have two synths.

So no audio effects for audiocore.WaveFile and audiocore.RawSample?

@tannewt
Member

tannewt commented Sep 9, 2024

> > I don't think we need them on a synth channel basis. Instead, you can have two synths.
>
> So no audio effects for audiocore.WaveFile and audiocore.RawSample?

That's not what I meant. I was responding to "Would we want effects on a per channel basis in the synth?".

I still think the best way is through separate intermediate objects that get played.

echo = audioeffects.EffectEcho(delay=50, decay=0.7) # define an echo effect
echo.play(synth)
# with echo
mixer.voice[0].play(echo)
time.sleep(1)
# without
mixer.voice[0].play(synth)

@gamblor21
Member

After playing around with the code all weekend, I think the first way @tannewt suggested makes the most sense. Looking at how the Teensy audio library works, it seems similar.

# give the synth an echo effect
echo = audioeffects.EffectEcho(delay=50, decay=0.7) # define an echo effect
echo.play(synth)

# give a wave file a chorus effect
wavesound = audiocore.WaveFile("wave.wav")
chorus = audioeffects.Chorus(voices=5)
chorus.play(wavesound)

# combine them in a mixer
mixer.voice[0].play(echo)
mixer.voice[1].play(chorus)

# add a reverb to the overall mixed sound
reverb = audioeffects.EffectReverb()
reverb.play(mixer)

I have to look closer, but in some instances playing objects into others may reset buffers / reset the sound; that can be tweaked easily enough.

I am getting closer to having some code I can push as a draft PR to give something more concrete to look at.

@todbot

todbot commented Sep 9, 2024

> reverb = audioeffects.EffectReverb()
> reverb.play(mixer)

Normally mixer is fed into the actual audio player, providing the much-needed audio buffer to the whole system, so I'm not sure how the above would work. Is this a typo?

Would representing the signal chain in code, rather than as a data structure, create glitches when the chain is changed? Let me think through this with an example: let's set up a synth playing through chorus & reverb, then remove the chorus:

# standard audio setup
audio = audiobusio.I2SOut(...)
mixer = audiomixer.Mixer(..., buffer_size=2048)
audio.play(mixer)
synth = synthio.Synthesizer() 
# mixer.voice[0].play(synth) is what we'd normally do here, instead...

# wire up effects: synth -> chorus -> reverb -> mixer -> i2s
chorus = audioeffects.Chorus(voices=5)
chorus.play(synth)
reverb = audioeffects.Reverb()
reverb.play(chorus)
mixer.voice[0].play(reverb)

# time passes, remove chorus from signal chain: synth -> reverb -> mixer -> i2s
reverb.play(synth)   # would this cause a glitch? 

I forget what doing the equivalent does in the Teensy Audio Library. I'll see if I can find my Teensy audio boards and try it. If all audio effects have a "wet/dry" mix knob and a zero-effort pass-through case when dry=100%, then we can get glitchless removal of an effect without altering the signal chain.
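A sketch of that glitchless-bypass idea, assuming a hypothetical per-effect mix attribute (none of this is final API):

# keep the chain fixed: synth -> chorus -> mixer -> i2s
chorus = audioeffects.Chorus(voices=5, mix=0.5)
chorus.play(synth)
mixer.voice[0].play(chorus)

# later, "remove" the chorus without touching the chain:
chorus.mix = 0.0  # 100% dry; ideally an early-return path with near-zero cost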

@gamblor21
Member

> > reverb = audioeffects.EffectReverb()
> > reverb.play(mixer)
>
> Normally mixer is fed into the actual audio player, providing the much-needed audio buffer to the whole system, so I'm not sure how the above would work. Is this a typo?

This was more an example of "you could do this" than any practical or good idea.

> Would representing the signal chain in code, rather than as a data structure, create glitches when the chain is changed? Let me think through this with an example: let's set up a synth playing through chorus & reverb, then remove the chorus:
> reverb.play(synth)   # would this cause a glitch?

I'm still not sure about this case. I do want it to work, or to have a way to switch effects in/out at runtime. I'm just not sure of the optimal way to do that yet.

I did get a "do nothing"/pass-thru effect working tonight. Next step is to make it do something.

@relic-se

relic-se commented Sep 10, 2024

+1 on @tannewt's suggestion of running audio buffer sources through the effect and then to the final mixer object before the output. In a way, it is reminiscent of how you'd chain guitar pedals together. Though I know the naming scheme we're playing around with isn't final, I think audioeffects.EffectReverb is somewhat redundant and should be more like audioeffects.Reverb. There could be some minor conflict issues when using from audioeffects import Reverb, but I find that unlikely.

Some of the effects that I'd potentially like to see:

  • Reverb
  • Delay
  • Chorus/Phaser/Flanger - These mainly have timing differences between them. Correct me if I'm wrong, but they are all essentially delay-based effects with modulation on the playback speed.
  • Overdrive/Distortion/Fuzz
  • Amp Modeling?
  • Pitch Shift (or simply octave up/down)
  • EQ/Filter (wah effects could be achieved with a variable BPF)

Something like synthio.BlockInput support would be great to have on effect parameters for modulation.
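For example, if effect parameters accepted BlockInputs, a slow synthio.LFO (which is real) could sweep the delay time of the hypothetical echo effect discussed above:

import synthio

# LFO output sweeps roughly offset ± scale, i.e. ~30..70, at 0.2 Hz
lfo = synthio.LFO(rate=0.2, scale=20, offset=50)
echo = audioeffects.EffectEcho(delay=lfo, decay=0.7)  # modulated delay time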

I remember seeing this repo a while back that was able to achieve some decent results with the rp2040: https://github.com/StoneRose35/cortexguitarfx.

@tannewt
Member

tannewt commented Sep 10, 2024

I like the direction of this. I want to point out we probably want more specific module names instead of audioeffects. That way we can have more or less on different chips.

@gamblor21
Member

> I think audioeffects.EffectReverb is somewhat redundant and should be more like audioeffects.Reverb. There could be some minor conflict issues when using from audioeffects import Reverb, but I find that unlikely.

I actually had realized the same thing and in my proof-of-concept code changed it already.

> Some of the effects that I'd potentially like to see:
> ...

For now I'm trying to get a base framework up, and I'm willing to look at other effects, but I'm not an expert so I'd probably need some guidance on what they are and how they work. But I like the ideas!

@gamblor21
Member

> I like the direction of this. I want to point out we probably want more specific module names instead of audioeffects. That way we can have more or less on different chips.

Would more modules be preferable to having flags to turn individual effects on/off within one module? I would think we would still want some broad categories and not one module per effect. New modules aren't hard to add, so no real preference from me. Something that doesn't have to be decided this moment, at least.

@relic-se

> For now I'm trying to get a base framework up, and I'm willing to look at other effects, but I'm not an expert so I'd probably need some guidance on what they are and how they work. But I like the ideas!

Once we have the framework set up, I'd love to contribute where needed.

> I like the direction of this. I want to point out we probably want more specific module names instead of audioeffects. That way we can have more or less on different chips.

Personally, I'd like to see it all compiled into one module and then disabled on an individual effect basis. I feel that would provide more cohesion in the implementation, but I'd really like to see what you might have in mind, @tannewt .

@tannewt
Member

tannewt commented Sep 11, 2024

> Personally, I'd like to see it all compiled into one module and then disabled on an individual effect basis. I feel that would provide more cohesion in the implementation, but I'd really like to see what you might have in mind, @tannewt .

I prefer separate modules because import errors normally happen early on startup. If you have optional portions of a module, then you'll find it's missing later.

It'd be ok if related effects are in the same module, especially if they share code under the hood.
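That failure mode shows up directly in user code: with separate modules, a build that lacks an effect fails loudly at import time, e.g.:

try:
    import audiodelays   # module name per the grouping discussed below
except ImportError:
    audiodelays = None   # this board's build doesn't include delay effects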

@gamblor21
Member

So at this point I have three questions:

  1. If we are going with different modules, is it worth doing a draft PR before those are decided? And as a second part, does anyone have opinions on groupings that make sense?

  2. I believe getting a draft PR out so others can provide feedback may be valuable at this point. I have realized each effect will be standalone but will likely follow a template (that all audio follows), so having one as a "standard" to follow would be helpful. That said, there may be some standard API calls we want every effect to have (play, stop, maybe clearing the sample without stopping).

  3. There is some code Mixer uses that could be reused by the effects, and anything common for effects that may come up. I'm not sure of the best place for it; maybe not audiocore. Some of it is inline code, so apart from keeping the code base tidy, duplicating it shouldn't change the build size.

I probably cannot be at the CircuitPython meeting tomorrow, though these may be good in-the-weeds topics for it. But I'm around here/on Discord if anyone wants to discuss anything.

@todbot

todbot commented Sep 16, 2024

> So at this point I have three questions:
>
> 1. [...] anyone have any opinions on groupings that make sense?

If the grouping desire is based on memory usage, then perhaps use grouping names that imply "no buffer" vs. "small buffer" vs. "big buffer", e.g. "Compressor" would go in the "no buffer" group, "Chorus" in "small buffer", and "Reverb" in the "big buffer" group.

Otherwise, maybe organize by user-facing effect type. Groups like:

  • "Filters" (e.g. SVF, ladder, autowah, wavefold)
  • "Delays" (e.g. chorus, flanger, phaser, echo),
  • "Dynamics" (e.g. compressor, limiter, bitcrush, overdrive)
  • "Reverbs" (for lack of a better name, where all the big buffer stuff goes)

This would mostly match the memory usage-based grouping, so I think I like it more.

> 2. [...] That said, there may be some standard API calls we want every effect to have (play, stop, maybe clearing the sample without stopping).

I'd like to see a "mix" parameter being part of the standard API, to adjust how much of the effect to apply: 0.0 = no effect / 100% "dry" to 1.0 = only effect / 100% "wet", defaults to 0.5. And it would be nice if "mix=0.0" would be a "true bypass" that would be an early return path to minimize processing.

I'd be willing to try out any PR you put out and try making a few simple effects too. This is very neat!

@tannewt
Member

tannewt commented Sep 16, 2024

> If we are going with different modules, is it worth doing a draft PR before those are decided? And as a second part, does anyone have opinions on groupings that make sense?

Yup! A PR is a perfect place to discuss this. No need to decide beforehand. My main driver for separate modules is code size: small amounts of code can fit on builds where the larger ones won't.

> I believe getting a draft PR out so others can provide feedback may be valuable at this point. I have realized each effect will be standalone but will likely follow a template (that all audio follows), so having one as a "standard" to follow would be helpful. That said, there may be some standard API calls we want every effect to have (play, stop, maybe clearing the sample without stopping).

Yup! Doesn't need to be a draft either.

> There is some code Mixer uses that could be reused by the effects, and anything common for effects that may come up. I'm not sure of the best place for it; maybe not audiocore. Some of it is inline code, so apart from keeping the code base tidy, duplicating it shouldn't change the build size.

I think mixer is a good spot for this code. We can assume we have mixer when we have these effects.

@gamblor21
Member

> Otherwise, maybe organize by user-facing effect type. Groups like:
>
> • "Filters" (e.g. SVF, ladder, autowah, wavefold)
> • "Delays" (e.g. chorus, flanger, phaser, echo)
> • "Dynamics" (e.g. compressor, limiter, bitcrush, overdrive)
> • "Reverbs" (for lack of a better name, where all the big buffer stuff goes)

Those make sense to me so we could have:

  • audiofilters
  • audiodelays
  • audiodynamics
  • audioreverbs (unless someone thinks of a better name)

And it isn't hard to add more categories later.

I'd like to see a "mix" parameter being part of the standard API, to adjust how much of the effect to apply: 0.0 = no effect / 100% "dry" to 1.0 = only effect / 100% "wet", defaults to 0.5. And it would be nice if "mix=0.0" would be a "true bypass" that would be an early return path to minimize processing.

That should not be that hard to do as a final step: take the output buffer * mix and add the original sample * (1 - mix). It's also easy to check for mix=0.0 and just bypass it all early.
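A minimal Python sketch of that final blend step over one block of 16-bit samples (illustrative only; the real implementation would be C in the core):

import array

def apply_mix(dry, wet, mix):
    if mix <= 0.0:
        return dry  # true bypass: skip the blend entirely
    out = array.array("h", [0] * len(dry))
    for i in range(len(dry)):
        # wet * mix + dry * (1 - mix); a convex blend stays within 16-bit range
        out[i] = int(wet[i] * mix + dry[i] * (1.0 - mix))
    return out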

@gamblor21 gamblor21 mentioned this issue Sep 17, 2024
@gamblor21
Member

The first PR, adding a basic echo effect, has been merged into the core. Some of the effects mentioned here (EQ, pitch) are not in yet, so I am not sure we want to close this issue. But if anyone is trying to add a new effect, feel free to reach out to me about how I did it.

@relic-se

The audiofilters module and the first effect in the category, audiofilters.Filter, have been added to the core (#9744), with more coming. 😸

@RAWJUNGLE

RAWJUNGLE commented Oct 26, 2024

Sorry to interrupt, but could you add "PITCH" to audio files (audiocore.WaveFile and audiocore.RawSample)?
It would be great to see this implemented on the RP2040.

@gamblor21
Member

> Sorry to interrupt, but could you add "PITCH" to audio files (audiocore.WaveFile and audiocore.RawSample)? It would be great to see this implemented on the RP2040.

Just curious, can you give an example of what you mean by adding pitch? Just to ensure I am clear on it.

As to the RP2040, the code should compile and run; the main issue is performance and code size. If you have an RP2040 and know how, you could enable the audio effects modules for that board and compile it to test. It is something I have thought of doing if I get time, but it's not high on my priorities at the moment.

@RAWJUNGLE

RAWJUNGLE commented Oct 28, 2024

Unfortunately, I don't have examples of implementing this in code. But I can explain using the example of a DAW: say I have Ableton Live and a piece of sound, and I want to change its pitch; I go into the clip settings and move the transposition knob, thereby changing the pitch.
I understand your workload; I'm ready to wait.

@relic-se

"Pitch" transposition on an audio sample without modifying the playback speed is not something that is currently available. Technically, you could feed sample data into synthio.Note, do some math to work out the frequency to get normal playback, and then modify the bend property to change the "pitch". The side effect is that that will change the overall rate of playback. I have a library which helps handle most of that functionality: https://circuitpython-synthvoice.readthedocs.io/en/latest/api.html#synthvoice.sample.Sample.

There will probably be work on a dedicated pitch shifting effect in the future, but it may take a while to get there.
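A hedged sketch of that synthio.Note workaround; the waveform here stands in for real sample data, and bend is assumed to be in octaves (bend=1.0 doubling the frequency), which should be checked against the synthio docs:

import array
import synthio

SAMPLE_RATE = 22050
waveform = array.array("h", [0] * 256)       # stand-in for loaded sample data

synth = synthio.Synthesizer(sample_rate=SAMPLE_RATE)
natural_freq = SAMPLE_RATE / len(waveform)   # one waveform sample per output sample
note = synthio.Note(frequency=natural_freq, waveform=waveform)
synth.press(note)

note.bend = 7 / 12  # up a fifth; also speeds playback -- the side effect noted above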

@RAWJUNGLE

RAWJUNGLE commented Oct 28, 2024

"Pitch" transposition on an audio sample without modifying the playback speed is not something that is currently available. Technically, you could feed sample data into synthio.Note, do some math to work out the frequency to get normal playback, and then modify the bend property to change the "pitch". The side effect is that that will change the overall rate of playback. I have a library which helps handle most of that functionality: https://circuitpython-synthvoice.readthedocs.io/en/latest/api.html#synthvoice.sample.Sample.

There will probably be work on a dedicated pitch shifting effect in the future, but it may take a while to get there.

Yes, I am following your project and saw it, but there was no illustrative example, which is why I decided to ask for clarification. Thanks. And that's what I meant.

@jepler
Member

jepler commented Oct 28, 2024

Playing back a sample at a different rate will of course change both the pitch and the tempo. Changing pitch and tempo independently (including changing pitch while preserving tempo) requires a more sophisticated algorithm (see e.g., https://gstreamer.freedesktop.org/documentation/audiofx/scaletempo.html?gi-language=c which cites "WSOLA" as the underlying algorithm).

@todbot

todbot commented Oct 28, 2024

I think what a lot of people want (including myself) is the ability to change the playback rate of a sample, independent of the sample rate. You can do this if you don't use AudioMixer, but then you incur the clicks/pops of starting/stopping the audio system or USB accesses.

I've come across a lot of code like this:

import time, random, board, audiocore, audiopwmio
wave = audiocore.WaveFile("/StreetChicken.wav")  # sample rate 22050 Hz
audio = audiopwmio.PWMAudioOut(board.SPEAKER)
while True:
    audio.stop()
    wave.sample_rate = random.randint(8000, 36000)
    audio.play(wave)
    time.sleep(2)

or like in this Parsec. But then when folks refactor using AudioMixer to get rid of the pops, they get errors that a WaveFile's sample rate does not match the output sample rate.

@relic-se

On a similar topic, I'd also like to support mono sources within a stereo AudioMixer (i.e. a mono sample with panning). But I think both of these problems likely belong in a new issue.

@relic-se

Just created a draft PR for a Distortion effect: #9776
