Oh, and it uses the extended tokenizer generated by the extend_tokenizer.py script.
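For anyone curious, here's a minimal sketch of what a script like that might do: append new tokens to an existing vocabulary at the next free ids. The vocab format, function name, and token list here are all assumptions for illustration, not the actual contents of extend_tokenizer.py.

```python
def extend_vocab(vocab: dict, new_tokens: list) -> dict:
    """Return a copy of vocab with new_tokens appended at fresh ids."""
    extended = dict(vocab)
    next_id = max(extended.values(), default=-1) + 1
    for tok in new_tokens:
        if tok not in extended:  # skip tokens that already exist
            extended[tok] = next_id
            next_id += 1
    return extended

base = {"<pad>": 0, "<unk>": 1, "hello": 2}
ext = extend_vocab(base, ["world", "hello", "<sep>"])
# "world" and "<sep>" get ids 3 and 4; "hello" keeps its original id 2
```

In practice a real script would also resize the model's embedding matrix to match the new vocab size, but that depends entirely on the framework in use.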
