Compressing Neural Network Models of Audio Distortion Effects Using Knowledge Distillation Techniques

University of Oslo

Abstract

Neural networks have proven effective for modeling analog audio effects using a black-box approach. However, few approaches yield lightweight models suitable for real-time environments, where several models may need to run concurrently on consumer-grade hardware. This paper explores knowledge distillation techniques for compressing recurrent neural network models of audio distortion effects, aiming to produce computationally efficient, compact models that retain high accuracy. In particular, we consider an audio-to-audio LSTM architecture for regression tasks, in which small networks are trained to mimic the internal representations of larger networks, an approach known as feature-based knowledge distillation. The evaluation was conducted on three audio distortion effect datasets, with experiments on both parametric and non-parametric data. The results show that distilled models are more accurate than non-distilled models of equal parameter count, especially when the models exhibit higher error rates. Furthermore, we observe that smaller complexity gaps between student and teacher models yield greater improvements in the non-parametric case.
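To make the approach concrete, below is a minimal PyTorch sketch of feature-based distillation for a single-layer audio-to-audio LSTM, assuming the teacher's hidden states serve as the distillation targets and the student's smaller hidden state is matched to them through a learned linear projection. The class names, the projection, and the loss weight `alpha` are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class AudioLSTM(nn.Module):
    """Audio-to-audio LSTM for regression: one sample in, one sample out."""
    def __init__(self, hidden_size):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        h, _ = self.lstm(x)            # (batch, time, hidden)
        return self.head(h), h          # predicted audio and hidden features

teacher = AudioLSTM(hidden_size=64)     # large pre-trained model, kept fixed
student = AudioLSTM(hidden_size=8)      # compact model to be distilled
proj = nn.Linear(8, 64)                 # maps student features into teacher space
mse = nn.MSELoss()
alpha = 0.5                             # feature-loss weight (hypothetical value)

def distillation_loss(x, y):
    """Output regression loss plus a feature-matching term on hidden states."""
    y_s, h_s = student(x)
    with torch.no_grad():               # no gradients flow into the teacher
        _, h_t = teacher(x)
    return mse(y_s, y) + alpha * mse(proj(h_s), h_t)

# Usage: one training step on a random batch of audio segments.
x = torch.randn(4, 2048, 1)             # input audio
y = torch.randn(4, 2048, 1)             # target (effected) audio
loss = distillation_loss(x, y)
loss.backward()
```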

Audio Examples

We evaluated our distillation architecture using three datasets of audio distortion effects: the Blackstar HT-1 vacuum tube amplifier (HT-1), the Electro-Harmonix Big Muff (Big Muff) guitar pedal, and the analog-modeled overdrive plugin DrDrive.

Below are example comparisons between our distilled student models, standard (non-distilled) student networks, and the target audio. For each dataset, we include examples from models trained with different hidden-layer sizes (units).
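The unit counts matter because the cost of a single-layer LSTM grows quadratically with its hidden size. As a rough illustration, assuming the one-sample-in, one-sample-out LSTM-plus-linear-head layout sketched above, the parameter counts for the two sizes compared here are:

```python
def lstm_param_count(hidden, n_in=1, n_out=1):
    """Parameters of a single-layer LSTM (PyTorch layout) plus a linear head."""
    lstm = 4 * (hidden * n_in + hidden * hidden + 2 * hidden)  # W_ih, W_hh, b_ih, b_hh
    head = hidden * n_out + n_out                              # output projection
    return lstm + head

for h in (8, 64):
    print(f"{h:>2} units: {lstm_param_count(h):>6} parameters")
# 8 units:    361 parameters
# 64 units: 17217 parameters
```

An 8-unit model thus has roughly 2% of the parameters of a 64-unit model, which is what makes accurate small models attractive for real-time use.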


DrDrive

Target (real)

Distilled (64 units)

Non-distilled (64 units)

Distilled (8 units)

Non-distilled (8 units)


Big Muff

Target (real)

Distilled (64 units)

Non-distilled (64 units)

Distilled (8 units)

Non-distilled (8 units)


HT-1

Target (real)

Distilled (64 units)

Non-distilled (64 units)

Distilled (8 units)

Non-distilled (8 units)


DrDrive Conditioned

Target (real)

Distilled (64 units)

Non-distilled (64 units)

Distilled (8 units)

Non-distilled (8 units)
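The DrDrive Conditioned examples correspond to the parametric case from the abstract, where the model also receives the effect's control setting. A common way to condition such a recurrent model, shown here as a sketch under that assumption rather than as the exact scheme used in the paper, is to concatenate the normalized parameter value with each input sample:

```python
import torch
import torch.nn as nn

class ConditionedAudioLSTM(nn.Module):
    """Parametric variant: control values are concatenated with each sample."""
    def __init__(self, hidden_size, n_params=1):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1 + n_params,
                            hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x, cond):
        # x: (batch, time, 1) audio; cond: (batch, n_params) normalized knob values
        cond = cond.unsqueeze(1).expand(-1, x.shape[1], -1)
        h, _ = self.lstm(torch.cat([x, cond], dim=-1))
        return self.head(h)
```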