This is a C++11 header-only class for pitch-shifting and time-stretching, an extension of the final approach from our ADC22 presentation: Four Ways To Write A Pitch-Shifter. There are more details about how it works in the blog post.
## Examples / demo
There are some examples, and an interactive demo.
## How to use
Clone using Git:
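A sketch of the clone command — the GitHub URL below is an assumption, so check the project page for the canonical repository:

```shell
git clone https://github.com/Signalsmith-Audio/signalsmith-stretch.git
```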
Include the appropriate header file, and set up the template using some sample type:
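As a sketch — the namespace and class name here match my understanding of the library's header, so treat them as assumptions:

```cpp
#include "signalsmith-stretch.h"

// The template argument is the stretcher's internal sample type
signalsmith::stretch::SignalsmithStretch<float> stretch;
```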
The easiest way to configure is a preset:
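Something like this, assuming preset methods named `presetDefault()`/`presetCheaper()` that take a channel count and sample rate:

```cpp
int channels = 2;
float sampleRate = 48000;

stretch.presetDefault(channels, sampleRate);
// or trade some quality for lower CPU use:
//stretch.presetCheaper(channels, sampleRate);
```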
If you want to try out different block sizes for yourself, you can configure the stretcher manually:
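A hedged sketch, assuming a `.configure()` method taking channel count, block length and hop interval in samples:

```cpp
// Shorter blocks respond faster but smear frequencies more
int blockSamples = 4096, intervalSamples = 1024;
stretch.configure(channels, blockSamples, intervalSamples);
```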
## Processing and resetting
To process a block, call `.process()`:
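A sketch of the call, assuming the argument order (input, input length, output, output length):

```cpp
// Anything where buffer[channel][index] yields a sample works (see below)
float **inputBuffers = /* your input */ nullptr;
float **outputBuffers = /* your output */ nullptr;
int inputSamples = 512, outputSamples = 512;

stretch.process(inputBuffers, inputSamples, outputBuffers, outputSamples);
```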
The input/output buffers cannot be the same, but they can be any type where `buffer[channel][index]` gives you a sample. This might be `float **` or `double **` or some custom object (e.g. providing access to an interleaved buffer), regardless of what sample-type the stretcher is using internally.
To clear the internal buffers:
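Assuming a `.reset()` method:

```cpp
stretch.reset(); // clears internal buffers, e.g. between unrelated clips
```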
You can set a "tonality limit", which uses a non-linear frequency map to preserve a bit more of the timbre:
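For example — assuming the transpose setter accepts the tonality limit as a second argument, normalised against the sample rate:

```cpp
float sampleRate = 48000;
// Shift up 4 semitones, handling frequencies above ~8kHz more simply
stretch.setTransposeSemitones(4, 8000/sampleRate);
```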
Alternatively, you can set a custom frequency map, mapping input frequencies to output frequencies (both normalised against the sample-rate):
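A sketch, assuming `.setFreqMap()` accepts a callable mapping (normalised) input frequency to output frequency:

```cpp
stretch.setFreqMap([](float inputFreq) {
	return inputFreq*2; // octave up
});
```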
To get a time-stretch, hand differently-sized input/output buffers to `.process()`. There's no maximum block size for either input or output.
Since the buffer lengths (`outputSamples` above) are integers, it's up to you to make sure that the block lengths average out to the ratio you want over time.
Latency is particularly ambiguous for a time-stretching effect. We report the latency in two halves: `.inputLatency()` and `.outputLatency()`.
You should be supplying input samples slightly ahead of the processing time (which is where changes to pitch-shift or stretch rate will be centred), and you'll receive output samples slightly behind that processing time:
To follow pitch/time automation accurately, you should give it automation values from the current processing time (`.outputLatency()` samples ahead of the output), and feed it input from `.inputLatency()` samples ahead of the current processing time.
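Assuming `inputLatency()` and `outputLatency()` report these two halves in samples, the bookkeeping might look like:

```cpp
int inLatency = stretch.inputLatency();   // input leads the processing time
int outLatency = stretch.outputLatency(); // output trails it
// Automation for a given output sample should be read outLatency ahead of
// the output, and the matching input read inLatency further ahead again.
```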
## Starting and ending
After initialisation/reset to zero, the current processing time is `.inputLatency()` samples before t=0 in the input. This means you'll get `stretch.outputLatency() + stretch.inputLatency()*stretchFactor` samples of pre-roll output in total.
If you're processing a fixed-length sound (instead of an infinite stream), you'll end up providing `.inputLatency()` samples of extra (zero) input at the end, to get the processing time to the right place. You'll then want to give it another `.outputLatency()` samples of (zero) input to fully clear the buffer, producing a correspondingly-stretched amount of output.
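A sketch of the flushing described above — the buffer plumbing and output sizing here are illustrative, not the library's own API:

```cpp
#include <vector>

// Enough zero input to move the processing time past the end of the
// input, and then drain the output buffer
int padSamples = stretch.inputLatency() + stretch.outputLatency();

std::vector<std::vector<float>> zeroChannels(channels,
		std::vector<float>(padSamples, 0.0f));
std::vector<float *> zeroInput;
for (auto &channel : zeroChannels) zeroInput.push_back(channel.data());

// Size the output block according to your current stretch factor
stretch.process(zeroInput.data(), padSamples, tailBuffers, tailOutputSamples);
```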
What you do with this extra start/end output is up to you. Personally, I'd try inverting the phase and reversing them in time, and then adding them to the start/end of the result. (Wrapping this up in a helper function is on the TODO list.)
⚠️ This has mostly been tested with Clang. If you're using another compiler and have any problems, please get in touch. It's header-only, so just include `signalsmith-stretch.h` where needed. It's much slower (about 10x) if optimisation is disabled though, so you might want to enable optimisation where it's used, even in debug builds.
A copy of the DSP library is included in `dsp/` for convenience, but if you're already using this elsewhere then you should remove this copy to avoid versioning issues.
The code is MIT licensed.
The DSP library in `dsp/` has its own `LICENSE.txt`, also MIT.