Furthermore, the discussion ended up comparing SunVox to Apple's legacy Quartz Composer patching environment, which was introduced more than 20 years ago and has sadly been definitively deprecated since last year. Why did QC come up with ChatGPT? The AI figured out that SunVox misses one particular feature that could open Pandora's box up to a certain complexity: an "Iteration" module like the one available in QC. It basically resembled the functionality known in SunVox as the MetaModule, but its "macro" content would be applied multiple times, up to a defined count, where each application of the psynth_net structure to the output buffer would represent one iteration.
This sounds quite complex, but it gave me some hints that the MetaModule is by far not at the end of its possibilities. Let's assume the event mechanism of SunVox got an event type that tells which iteration step (out of some defined maximum) is currently being applied. That would allow, say, a MultiCtl residing inside to apply differently scaled parameter settings to its outputs, and with that a different set of parameter values to the substructure on each pass. That way it would become possible to create polyphonic sounds resembling a trumpet by just defining the base sound and controlling its comb peaks in the FFT spectrum. Of course this could be achieved in other ways as well, like a dedicated generator module to control the blow pressure, but still: being able to apply iteration to an audio buffer up to a certain count (to control the CPU intensity) would be massive.
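To make that concrete, here is a minimal C sketch of the idea, assuming a hypothetical "iteration index" event: a MultiCtl-style mapper derives a different controller value for each pass over the substructure. None of these names exist in the real SunVox/psynth API, this is just an illustration.

```c
#define MAX_ITERATIONS 8

/* Hypothetical: given the current iteration index out of max_iters,
   interpolate a controller value between ctl_min and ctl_max, the way a
   MultiCtl could rescale its outputs once per pass. */
static int map_ctl_for_iteration(int iter, int max_iters, int ctl_min, int ctl_max)
{
    if (max_iters <= 1) return ctl_min;
    /* linear ramp across the iterations */
    return ctl_min + (ctl_max - ctl_min) * iter / (max_iters - 1);
}
```

E.g. over 8 iterations this would sweep a controller (say, a filter cutoff) from 0 to 0x8000, one value per pass, so every application of the substructure sounds slightly different.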
In Quartz Composer the iteration module was a macro module which passed the time and the iteration index to the exposed properties/controls of its hosted substructure. This iteration module existed in parallel to the plain macro module because some QC modules were not allowed to iterate at all, either to keep control over CPU load or to avoid repeating operations where that makes no sense, like a URL request that should not be made multiple times. Well, that concern does not apply to SunVox, but the basic idea is to allow iterations and collect all of the slightly different sound buffers in the output, to go on processing them with filters, EQ, etc.
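The collect-and-mix part could be sketched roughly like this in C. All names here are made up for illustration; `substructure_fn` stands in for whatever the hosted module chain ("macro" content) would actually do.

```c
/* A stand-in for the hosted substructure: anything that processes a buffer
   and may read the current iteration index. */
typedef void (*substructure_fn)(float* buf, int size, int iter);

/* Apply the substructure once per iteration to a fresh copy of the input
   and sum all passes into the output buffer. */
static void iterate_and_mix(const float* in, float* out, int size,
                            int iterations, substructure_fn process)
{
    float tmp[4096]; /* assumes size <= 4096 for this sketch */
    for (int i = 0; i < size; i++) out[i] = 0;
    for (int it = 0; it < iterations; it++) {
        for (int i = 0; i < size; i++) tmp[i] = in[i]; /* fresh copy per pass */
        process(tmp, size, it);   /* substructure sees the iteration index */
        for (int i = 0; i < size; i++) out[i] += tmp[i]; /* collect */
    }
}

/* Example "substructure": scale the buffer by (iteration index + 1). */
static void demo_scale(float* buf, int size, int iter)
{
    for (int i = 0; i < size; i++) buf[i] *= (iter + 1);
}
```

With `demo_scale` and 3 iterations, an input sample of 1.0 comes out as 1+2+3 = 6.0, i.e. each pass contributes a differently processed copy of the same input.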
Does this make sense to anyone?
By the way, ChatGPT suggested such an iteration could basically look like this:
Code:
#include <math.h>

#define MAX_ITERATIONS 32
#define SAMPLE_RATE 44100.0f

float calculate_amplitude(int overtone_number); /* supplied elsewhere: amplitude based on overtone number */

void iteration_handler(float* buffer, int buffer_size, float base_frequency, int max_iterations)
{
    if (max_iterations > MAX_ITERATIONS) max_iterations = MAX_ITERATIONS;
    for (int i = 0; i < buffer_size; i++) {
        float sample = 0;
        for (int j = 1; j <= max_iterations; j++) {
            float overtone = base_frequency * j; /* frequency of the j-th overtone */
            float amplitude = calculate_amplitude(j);
            sample += amplitude * sinf(2 * M_PI * overtone * i / SAMPLE_RATE);
        }
        buffer[i] = sample;
    }
}
Code:
#include <stdio.h>
#include <math.h>

#define PI 3.14159265f
#define NUM_HARMONICS 12
#define SAMPLE_RATE 44100.0f

// Function to generate the base trumpet waveform from a wavetable
float generate_base_waveform(float phase)
{
    static const float wavetable[256] = {...}; // Initialize with your desired waveform
    int index = (int)(phase * 256) & 255;      // phase in [0,1) -> table index
    return wavetable[index];
}

// Function to generate the trumpet sound
void generate_trumpet(float* buffer, int num_samples, float frequency, float pressure)
{
    // Phase increment for the base frequency; all phases run in [0,1)
    float phase_inc = frequency / SAMPLE_RATE;
    float base_phase = 0;

    // One running phase per overtone, so each overtone actually advances
    float overtone_phase[NUM_HARMONICS + 1] = {0};

    // Loop through the samples
    for (int i = 0; i < num_samples; i++) {
        // Generate the base waveform
        float base_waveform = generate_base_waveform(base_phase);
        // Sum the overtones
        float overtone_sum = 0;
        for (int j = 1; j <= NUM_HARMONICS; j++) {
            // Phase increment of the j-th overtone
            float overtone_phase_inc = frequency * j / SAMPLE_RATE;
            // Simple FM: the blow pressure modulates each overtone's phase
            float overtone = sinf(2 * PI * overtone_phase[j] + pressure * base_waveform);
            overtone_sum += overtone / NUM_HARMONICS; // keep the sum in range
            // Update and wrap the overtone phase
            overtone_phase[j] += overtone_phase_inc;
            if (overtone_phase[j] >= 1) overtone_phase[j] -= 1;
        }
        // Add the overtones to the base waveform
        buffer[i] = base_waveform + overtone_sum;
        // Update and wrap the base phase
        base_phase += phase_inc;
        if (base_phase >= 1) base_phase -= 1;
    }
}
Side note: ChatGPT also suggested that a basic trumpet sound is made up of at least 12 overtones.
Moreover, because I'm developing a note2chord module for the SunVox engine that already works in its prototype form, I'm thinking of another approach: instead of a ready-made lookup table of defined chords, create multiple notes with relative pitches and just hammer those into a polyphonic generator. That way one could use the overtone spacing as a control and the velocity as the expression of pressure into a "trumpet". One would maybe just define how many overtones shall be allowed at most, and expose them as notes whose pitches each represent one of the comb'ed overtones.