Okay.

The tokenizer on the input side downsamples the input by the feature
size using a learned transform. One could summarise it as a single
network layer.
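Here's a minimal sketch of what that could look like, assuming it
behaves like a patch embedding: group feature_size consecutive samples
and apply a learned linear projection. Every name and shape below is
made up for illustration, not taken from the actual model.

    import numpy as np

    def input_tokenizer(x, feature_size, proj):
        """Downsample a 1-D signal by feature_size via a learned projection.

        x:    shape (T,), with T divisible by feature_size
        proj: weight matrix of shape (feature_size, d_model)
        Returns (T // feature_size, d_model): one token per group
        of feature_size consecutive samples.
        """
        patches = x.reshape(-1, feature_size)  # (T // feature_size, feature_size)
        return patches @ proj                  # (T // feature_size, d_model)

    # 16 samples with feature size 4 -> 4 tokens of width 8.
    rng = np.random.default_rng(0)
    x = rng.normal(size=16)
    proj = rng.normal(size=(4, 8))
    print(input_tokenizer(x, feature_size=4, proj=proj).shape)  # (4, 8)

So the whole thing really is just a reshape plus a matrix multiply,
i.e. one network layer.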

The tokenizer on the output side seems to be a normal tokenizer,
though it's hard to inspect. These are usually just lookup tables
mapping words to IDs, with a little extra machinery added.
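For what it's worth, here's a toy version of such a lookup-table
tokenizer, where the "little extra" is special tokens and an
unknown-word fallback. The vocabulary is invented for the example:

    vocab = {"<unk>": 0, "<eos>": 1, "hello": 2, "world": 3}
    inv_vocab = {i: w for w, i in vocab.items()}

    def encode(text):
        # Unknown words fall back to the <unk> ID.
        return [vocab.get(w, vocab["<unk>"]) for w in text.split()]

    def decode(ids):
        return " ".join(inv_vocab[i] for i in ids)

    print(encode("hello there world"))  # [2, 0, 3]
    print(decode([2, 3, 1]))            # hello world <eos>

Real tokenizers add subword splitting (BPE and the like) on top, but
the core is still a table.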

I almost bailed. I think I might be able to continue a little if I
shrink it down very small. Maybe in a different spam thread.
