Chris Cannam wrote:
> On Thursday 08 Sep 2005 18:06, Dougie McGilvray wrote:
> > As I mentioned before, I've been looking into more integral
> > microtonal support. I have rewritten the Pitch class to represent
> > pitch as note+accidental+octave rather than midi pitch. I have added
> > a Tuning class which defines the intervals and equivalent spellings
> > of the tuning. This class also converts between midi pitch and
> > notated pitch.
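[For illustration, the note+accidental+octave Pitch and the Tuning class described above might be sketched roughly as follows. Everything except the names Pitch and Tuning is invented here; the actual patch may look quite different.]

```cpp
// Hypothetical sketch of the proposed split; member names are guesses.
struct Pitch
{
    int noteInOctave;   // scale degree within the octave (0 = tonic)
    int accidental;     // displacement in degrees (0 = natural)
    int octave;
};

class Tuning
{
public:
    // degreesPerOctave would come from the tunings file; 12 gives
    // ordinary equal temperament.
    explicit Tuning(int degreesPerOctave) : m_degrees(degreesPerOctave) { }

    // The "from MIDI pitch" path: it needs the tuning to know how many
    // degrees fit in an octave.
    Pitch fromMidiPitch(int midi) const {
        Pitch p;
        p.octave = midi / m_degrees - 1;    // MIDI 60 -> octave 4
        p.noteInOctave = midi % m_degrees;
        p.accidental = 0;   // a real Tuning would choose a spelling
                            // from its equivalent-spellings table
        return p;
    }

    // The reverse mapping, used at playback time.
    int toMidiPitch(const Pitch &p) const {
        return (p.octave + 1) * m_degrees + p.noteInOctave + p.accidental;
    }

private:
    int m_degrees;
};
```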
All this time to reply to this, and I'm afraid I'm _still_ going to give
an incomplete, half-digested reply.

I think I have two problems with this, and they have a common cause.

First, treatment of MIDI pitch. Your first two Pitch constructors are
for construction "from MIDI pitch", and from pitch-in-octave plus
octave. The first of these requires a Tuning object to map the pitches
across the available octaves; the second doesn't. That means there is
an expectation that you can't do anything meaningful with MIDI pitch
unless you have a tuning associated with it. That really isn't going
to wash in a MIDI sequencer. I don't think we can reasonably expect
every piece of code that receives a MIDI pitch to decompose it into
pitch plus octave using a given tuning -- somehow recording for
posterity the fact that a default 12tET tuning was used -- before it
can save it anywhere.

How do you define the tuning, as a user? At the moment, when you record
from MIDI, you record plain 0-127 MIDI pitch. There's no key signature
available when you record it, and nothing we record depends on the key.
Likewise there's presumably no tuning available when we record, but we
can't (in this model) record it in a tuning-independent way, because we
need to know the tuning before we can determine the octave for a given
note. So the tuning needs to be sufficiently "global" that we know it
already when we record something, or else we need to be able to record
plain MIDI pitch and map it to a given tuning afterwards. If thinking
about this is confusing me now, I'm not sure it's going to make
sense in the code either.
First of all, thanks for looking at it - it has given me several
headaches, though I have enjoyed it in a masochistic sort of way :).

At the moment, tunings would be defined from the tunings file in the
pitch tracker - i.e. like a Scala file but with a list of equivalent
spellings after each interval.
I hoped to keep the Pitch class as close as possible to the original
class. Unfortunately, though, the tuning is required to convert between
notated pitch and the performed midi pitch. Theoretically this should
only happen during recording and playback, right? After looking at the
Pitch class, though, I have found many instances where pitch is
stored/manipulated as an integer rather than as a Pitch object - and for
standard tuning this is probably the easiest way to handle it. In my
head, this is the biggest problem in implementing this plan.

Conversion during playback shouldn't be too bad - get the current tuning
in time (though for sanity I'd definitely start with a global tuning),
convert to the performed midi pitch, and Robert's your uncle.
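[That playback conversion could plausibly look like the following sketch: compute the exact pitch from a tuning's cents table, then split it into the nearest 12-tET MIDI note plus a residual that a sequencer could render as pitch-bend. The function and all its names are invented for illustration, not taken from the patch.]

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Convert a scale degree in an arbitrary tuning to (midiNote, bendCents):
// the nearest 12-tET MIDI note, plus the residual in cents that a
// sequencer could send as a pitch-bend. Hypothetical helper only.
// centsTable holds the size in cents of each degree above the tonic;
// tonicMidi fixes where degree 0, octave 0 lands (default: middle C).
std::pair<int, double> performedPitch(const std::vector<double> &centsTable,
                                      int degree, int octave,
                                      double tonicMidi = 60.0)
{
    // Total offset from the tonic, in cents.
    double cents = octave * 1200.0 + centsTable[degree];
    double exactMidi = tonicMidi + cents / 100.0;
    int nearest = static_cast<int>(std::lround(exactMidi));
    return std::make_pair(nearest, (exactMidi - nearest) * 100.0);
}
```

With a 12-tET table the bend comes out as zero, so ordinary material plays untouched; with a 19-tET table (degree i at i*1200/19 cents) each note picks up a small bend.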
Recording is a different problem - though I see the problem slightly
differently than above. I doubt many people have a microtonal midi input
device - and I personally don't see any advantage in mapping your midi
keyboard to a tuning with anything other than 12 tones in the octave.
Taking the 19-tone scale example, the only practical use I can see for
recording from a midi keyboard would be to convert to notated pitch
using standard 12-tone tuning - this means certain notes are unplayable
from your midi device, but at least it means playing a major triad on
the keyboard will result in a major triad in the input. Realistically,
I think microtonal pieces will be entered manually.
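[One plausible shape for that keyboard-to-19-tone mapping: snap each 12-tET pitch class to the nearest of the 19 degrees, so the familiar triad intervals land on the 19-tET major third (6 degrees) and fifth (11 degrees). A hypothetical helper, not actual code, and as noted some of the 19 degrees are simply unreachable this way.]

```cpp
#include <cmath>

// Snap a 12-tET MIDI pitch class to the nearest degree of an n-tone
// equal tuning. Invented for illustration.
int nearestDegree(int midiPitch, int degreesPerOctave)
{
    int pc = midiPitch % 12;
    return static_cast<int>(std::lround(pc * degreesPerOctave / 12.0));
}
```

For 19 tones this sends C-E-G (pitch classes 0, 4, 7) to degrees 0, 6 and 11 - still a major triad, and in fact a slightly purer one than the keyboard's.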
> The other problem I have with it (or perhaps I should say with thinking
> about it) is with accidentals. At the moment we store performance
> pitch plus optional notated accidental (which could be anything, and is
> up to the user). If no accidental is available, we guess it from the
> key signature. In this proposed model, things work the other way
> around (right?) -- performance pitch is determined from the stored
> pitch class and accidental. This means if you change the notated
> accidental (using a Respell option), you also need to adjust the stored
> pitch class and so on in order to get it to play right. That's going
> to make for a lot of trouble -- I have in mind just how much trouble
> we've had with the existing pitch code just to get the calculated
> accidentals working correctly, so I fear changes that significantly
> alter this aspect.
Yes - though I feel this could be achieved. The second Pitch constructor
will attempt to create a pitch based on midi pitch and an optional
accidental. However, in many tunings it will not be possible to preserve
the performance pitch when the accidental is changed.
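[A concrete case of why respelling can't preserve the performed pitch: in 19-tET a sharp or flat shifts the pitch by exactly one degree (seven fifths minus four octaves = 7*11 - 4*19 = 1 step), so C# and Db come out as different degrees - C# actually sounds below Db. A small illustrative helper, not Rosegarden code:]

```cpp
// Scale degrees of the naturals C D E F G A B in 19-tET
// (whole tone = 3 degrees, diatonic semitone = 2).
static const int NATURALS_19[7] = { 0, 3, 6, 8, 11, 14, 17 };

// Degree of a spelled note in 19-tET. A sharp or flat moves the pitch
// by one degree, so enharmonic respelling changes what is performed.
int degree19(int letter /* 0=C..6=B */, int accidental /* +1 sharp, -1 flat */)
{
    return NATURALS_19[letter] + accidental;
}
```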
> I think the common cause for both of these worries is just that I don't
> quite see what the use cases are. That is, I can see why we want
> support for extended pitch classing, that's not the problem -- I just
> don't quite have in mind how the user will interact with the program to
> make use of it, and how the interactions that users already make with
> the current system are expected to map onto the new internals.
I realise that the majority of users probably won't give two hoots
about having altered tunings - if it's there, maybe people will try
their compositions with different tunings, but realistically this would
probably be used by a minority of your users. At the risk of shooting
down my own idea - what I suppose I am suggesting is to make a lot of
difficult changes to suit a minority, while ensuring the majority is
unaffected. This is a form of madness, I realise this. Initially, the
plan seemed to fit well into the existing framework - and I was hoping
for an idea of how much work would be involved in implementing this
across the board. Which, after inspection, seemed to depend on how much
pitch is used as an integer rather than as a Pitch class. Since then, I
am beginning to think pitch is rarely treated as a class other than to
perform calculations - is this correct? If so, then a midi-centric plan
as suggested below would be far easier to integrate. When I have the
chance, I'll look into this.
> There is also one other problem I have, which is that it conflicts with
> one of our basic principles, which is where possible to store things in
> such a way as to make them as near as possible trivial to play. So we
> store MIDI pitch, and then we have extra optional data for displaying
> it in notation -- but the only essential bit is the MIDI pitch. A
> tuning mechanism that somehow preserved that quality would probably sit
> more comfortably in my mind, on first acquaintance at least (if you'll
> excuse the peculiar mixed metaphor).
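[A storage layout preserving that principle might keep the raw MIDI pitch as the only required field, with the microtonal spelling as optional extra data consulted only by notation and tuning-aware playback - something like this sketch, with all field names invented:]

```cpp
#include <string>

// A MIDI-centric note record in the spirit described above: the MIDI
// pitch is the only essential field, so the event stays trivially
// playable, and the spelling/tuning data is an optional annotation.
// Hypothetical illustration, not actual Rosegarden structures.
struct NoteEvent
{
    int midiPitch;              // always present, 0-127
    bool hasSpelling = false;   // optional notation/tuning data follows
    int noteInOctave = 0;       // scale degree, if hasSpelling
    int accidental = 0;         // displacement in degrees, if hasSpelling
    std::string tuningName;     // empty => default 12tET
};

// Playback can always fall back to the raw MIDI pitch.
int playbackPitch(const NoteEvent &e)
{
    return e.midiPitch;
}
```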
Cheers,
Dougie
_______________________________________________
Rosegarden-devel mailing list
[email protected] - use the link below to unsubscribe
https://lists.sourceforge.net/lists/listinfo/rosegarden-devel