For some reason, my extended-trained model has one fewer vocab word than the tokenizer I've been using with it the past few days.

Get to debug that!
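A quick way to start debugging this kind of mismatch is to diff the two vocab mappings and see which token is missing. This is just a sketch with made-up toy vocabs; `old_vocab` and `new_vocab` stand in for whatever the real tokenizers expose (e.g. the dict returned by `get_vocab()` in Hugging Face tokenizers):

```python
def vocab_diff(old_vocab, new_vocab):
    """Return tokens missing from the new vocab and tokens added to it."""
    missing = set(old_vocab) - set(new_vocab)
    extra = set(new_vocab) - set(old_vocab)
    return missing, extra

# Toy stand-ins for the two tokenizers' vocab dicts.
old_vocab = {"<pad>": 0, "<unk>": 1, "hello": 2, "world": 3}
new_vocab = {"<pad>": 0, "<unk>": 1, "hello": 2}  # one token short

missing, extra = vocab_diff(old_vocab, new_vocab)
print("missing from new vocab:", sorted(missing))
print("extra in new vocab:", sorted(extra))
```

If the diff is empty, the discrepancy is probably in the model's embedding size rather than the vocab itself, which is the other usual suspect.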
