i thought a possible spam thread could be talking to a language model.
i thought i'd try the newish YaRN model with its 128k context length.
maybe i could run it in colab, i dunno.

possible model: https://huggingface.co/NousResearch/Yarn-Llama-2-13b-128k
a smaller quantized (GPTQ) variant of the model:
https://huggingface.co/TheBloke/Yarn-Llama-2-13B-128K-GPTQ

the quantized one is under 8GB. there are also smaller variants.

people are likely hosting it on free api servers somewhere. i'm not
looking into that at the moment.
