Spectral (modeling) synthesis lets you build a sound by combining multiple sine wave harmonics with filtered noise signals. This synthesis method shares many underlying principles with vocoders, but tracks peaks in the overall spectrum rather than individual amplitudes and frequencies in the signal.
Alchemy provides a flexible spectral synthesis implementation, known technically as multiresolution sinusoidal modeling. Less technically, a custom filter bank is used to analyze peaks (and other elements) in the frequency spectrum of the signal. Harmonic components, based on the spectral analysis, are modeled as a combination of sine waves and white noise passed through a filter that changes over time. The noise components are typically used to model “percussive” elements, such as a piano strike or a speech fricative in a vocal sample. The (sine wave) harmonic components are used to model the piano note or the remainder of the vocal sound. The output of the modeled sound combines the frequencies and levels of the detected harmonic components with the noise signal passed through a time-variable filter.
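The sines-plus-filtered-noise model described above can be sketched in a few lines of numpy. This is a simplified illustration, not Alchemy's actual engine: the partial list, the one-pole low-pass filter, and the filter sweep are all stand-ins for the envelopes a real spectral analysis would produce.

```python
import numpy as np

SR = 44100          # sample rate (assumed for illustration)
t = np.arange(SR) / SR  # one second of time values

# Harmonic (deterministic) part: a sum of sine partials whose
# frequencies and levels would come from the spectral analysis.
partials = [(220.0, 1.0), (440.0, 0.5), (660.0, 0.25)]  # (freq Hz, level)
harmonic = sum(a * np.sin(2 * np.pi * f * t) for f, a in partials)

# Noise (residual) part: white noise through a time-variable filter.
# A one-pole low-pass whose coefficient sweeps over time stands in
# for the modeled filter envelope.
noise = np.random.default_rng(0).standard_normal(len(t))
coef = np.linspace(0.99, 0.5, len(t))  # illustrative filter sweep
filtered = np.empty_like(noise)
y = 0.0
for i, x in enumerate(noise):
    y = coef[i] * y + (1.0 - coef[i]) * x  # one-pole low-pass step
    filtered[i] = y

# The modeled output is the sum of the two components.
out = harmonic + 0.1 * filtered
```

Separating the sound into these two layers is what makes spectral editing flexible: the tonal partials and the noisy residual can be reshaped independently before being summed back together.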
The spectral synthesis engine in Alchemy can be used to create sounds from scratch by drawing or painting in the spectral edit window. You can also import an image file and convert it into a spectrogram (an image of the frequency spectrum) in the spectral edit window, then edit the converted image with the drawing and painting tools. Alchemy analyzes the spectrogram and replaces peaks and percussive components with sine harmonics and filtered noise elements to create a sound.
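Turning a painted image into sound amounts to treating each row of the image as the time-varying level of one frequency, driving a bank of sine oscillators. The sketch below is a minimal, assumed version of that idea; the frequency range, hop size, and linear frequency spacing are illustrative choices, not Alchemy's internals.

```python
import numpy as np

def paint_to_sound(mag, sr=44100, hop=512, fmax=8000.0):
    """Convert a painted magnitude image (frequency rows x time columns)
    into audio using one sine oscillator per row."""
    n_freq, n_time = mag.shape
    freqs = np.linspace(0.0, fmax, n_freq)  # one frequency per image row
    n = n_time * hop
    t = np.arange(n) / sr
    # Stretch each row's envelope across the output length.
    env = np.repeat(mag, hop, axis=1)
    out = np.zeros(n)
    for row, f in enumerate(freqs):
        if env[row].any():  # skip unpainted (silent) rows
            out += env[row] * np.sin(2 * np.pi * f * t)
    return out

# "Paint" two short horizontal strokes in a tiny 64 x 32 image.
img = np.zeros((64, 32))
img[10, :16] = 1.0   # lower tone for the first half
img[20, 16:] = 1.0   # higher tone for the second half
audio = paint_to_sound(img)
```

Brighter pixels produce louder partials, and a pixel's vertical position sets its pitch, which is why drawing tools map so directly onto spectral synthesis.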
Alchemy can also analyze imported samples, which are broken down into “spectral bins.” The sound is recreated by filling each spectral bin with the required amount of signal, using either sine waves or filtered noise, and the results are then summed. These bins resynthesize (or reconstruct an approximation of) the original sound. See Resynthesis.
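The bin-by-bin analyze/resynthesize cycle can be demonstrated with a short-time FFT. This is a deliberately simplified sketch (non-overlapping rectangular frames, numpy only), so reconstruction is exact up to floating-point error; a real resynthesis engine uses overlapping windows and models each bin with sines or filtered noise rather than storing it verbatim.

```python
import numpy as np

FRAME = 1024  # samples per analysis frame (illustrative choice)

def analyze(signal, frame=FRAME):
    # Break the sample into frames and compute spectral bins per frame.
    n = len(signal) // frame * frame  # drop the ragged tail
    frames = signal[:n].reshape(-1, frame)
    return np.fft.rfft(frames, axis=1)  # complex bins: level and phase

def resynthesize(bins, frame=FRAME):
    # Fill each bin back into the time domain and concatenate the
    # frames: the inverse FFT sums the bins' sinusoidal contributions.
    return np.fft.irfft(bins, n=frame, axis=1).ravel()

# A test sample: a sine partial plus a little noise.
sr = 44100
t = np.arange(sr) / sr
x = (np.sin(2 * np.pi * 440.0 * t)
     + 0.05 * np.random.default_rng(1).standard_normal(sr))
y = resynthesize(analyze(x))
```

Because each frame's bins hold both level and phase, summing them reconstructs the original waveform; Alchemy's approach instead approximates the bins with modeled components, which is what makes the result editable.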