Re: Channel aftertouch frames

From: Alexandre Ratchov <alex_at_caoua.org>
Date: Sat, 30 Jan 2021 11:09:05 +0100
On Tue, Jan 26, 2021 at 10:12:56PM +0100, Jeanette C. wrote:
> Hey hey,
> I have a difficulty. Creating routes for tracks, I also add an event to the
> beginning of a track so I can use tclist in a show command to see the output
> (playing) channel of a track.
> 
> Alas, both adding a kat or a cat event generate problems, if I try to insert
> measures with mins. Here's the taddev command and the mins and its error
> message:
> taddev 0 0 0 {cat $outputchannel 0};
> or
> taddev 0 0 0 {kat $outputchannel 0 0};
> g 2
> mins 3 {3 4}
> seqptr_evmerge2: cat {1 0} 0: missing state
> Aborted (core dumped)
> 

The crash is not OK and needs to be fixed. I recently reworked this
part of the code, so could you try with the latest midish version?

I couldn't reproduce the crash; here taddev appears to work as
expected.

First, note that midish always processes events in "frames", for
example a note-on and the matching note-off event. From this
standpoint, a single note-on event alone doesn't make sense.

The taddev function could be used to add a note-on, but until the
corresponding note-off is added, the track is inconsistent.

Key aftertouch events are part of note frames: a "kat" event for a
given note can occur only while the note is active. For instance, this
sequence:

taddev 0 0 0 {non {0 0} 64 100}
taddev 1 0 0 {kat {0 0} 64 100}   
taddev 2 0 0 {noff {0 0} 64 100}

is OK. But if the "non" or the "noff" event were missing, midish would
keep complaining until it's added.

The logic for channel aftertouch is equivalent to a control change
with a neutral state of zero. So, a zero channel aftertouch will be
ignored if not preceded by a non-zero one, and a non-zero aftertouch
must be "closed" by a zero aftertouch. The logic is the same as for
the pitch bend wheel, sustain pedal, etc.
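So a channel aftertouch frame might look like this (a sketch only; the
position, channel and value are arbitrary, following the taddev syntax
above):

taddev 0 0 0 {cat {0 0} 100}
taddev 1 0 0 {cat {0 0} 0}

The non-zero event opens the frame and the zero event closes it; a
lone zero "cat" event would simply be ignored.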

> Is there another event/frame that could be added or another function to
> display the output channel of that track's routing?
>

During playback, events on tracks are not processed or routed, they
are directly sent to the device.

During recording, input events are processed and routed, then the
result is sent to the output and a copy is stored on the track.

The processing rules (aka filter rules) are shown by the "finfo"
function. There may be multiple filters; the current filter is the one
used for processing, and it's obtained with the "getf" function.
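For instance, to inspect the current routing (a sketch; the actual
output depends on your setup):

getf
finfo

"getf" gives the name of the current filter, and "finfo" lists its
rules.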

Each track has its own preferred filter. When you change the current
track (with the "ct" function), the current filter is set to the
track's filter; this way the processing rules "follow" track changes.

The track's preferred filter is obtained and changed with the "tgetf"
and "tsetf" functions respectively.
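For example (a sketch; "mytrack" and "myfilter" are hypothetical
names, and I'm assuming "tsetf" takes the track then the filter):

tsetf mytrack myfilter
tgetf mytrack

After this, switching to mytrack with "ct" would make myfilter the
current filter.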

There's no per-track current output or input. The current input and
the current output are global. They basically define a default
filter with default rules that send the input to the output; it's
saved as the track's default filter when a new track is created.

I admit that having all these different concepts for inputs, outputs,
filters, and tracks pointing to each other is too "fine-grained" and
developer-centric. Maybe we should have stored everything in the
track settings...

HTH
Received on Sat Jan 30 2021 - 11:09:05 CET

This archive was generated by hypermail 2.3.0 : Sun Jan 31 2021 - 01:33:57 CET