I run alpaca.cpp on a laptop with 8 GB of RAM and the 7B model. Works pretty well.

I would love to find a project that would let me move up to the 13B model, but I have not yet found one that can run it on only 8 GB of RAM.
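For a rough sense of whether 13B could ever fit in 8 GB, a back-of-envelope estimate of the weight memory at different quantization levels helps. This is an illustrative sketch (the function and the nominal parameter counts are my own); real usage adds context/KV-cache and runtime overhead on top, so treat the numbers as lower bounds:

```python
# Rough lower-bound RAM estimate for LLaMA-class model weights
# under different quantization levels. Parameter counts are the
# nominal "7B"/"13B" figures; actual usage adds context and
# runtime overhead on top of this.

def model_ram_gib(n_params_billions: float, bits_per_weight: float) -> float:
    """Approximate RAM needed to hold the weights alone, in GiB."""
    bytes_total = n_params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

for name, params in [("7B", 7.0), ("13B", 13.0)]:
    for bits in (16, 8, 4):
        print(f"{name} @ {bits}-bit: ~{model_ram_gib(params, bits):.1f} GiB")
```

At 4-bit quantization the 13B weights come to roughly 6 GiB, so it is tight but not obviously impossible on an 8 GB machine; at 16-bit it is far out of reach.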


On Wed, 5 Apr 2023, Undescribed Horrific Abuse, One Victim & Survivor of Many 
wrote:

https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html

note:
- llamacpp is CPU-only; the huggingface backend can run the same model
on GPU, though it is not always faster
- llamacpp is experiencing political targeting and a little upheaval
in its optimization code
- there are newer and more powerful models that can be loaded just
like gpt4all, such as vicuna

langchain is a powerful yet uncomplicated language-model frontend library,
with openai support, that lets you code autonomous agents that use
tools and access datastores. see also llama-index.
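The tool-using agent pattern mentioned above can be sketched in a few lines of plain Python. To be clear, this is not langchain's API (which wires an LLM into the action-selection step); it is a self-contained illustration of the idea, with toy tool names and a hard-coded plan standing in for model-chosen actions:

```python
# Minimal illustration of the tool-using agent pattern that
# frameworks like langchain provide. NOT langchain's API --
# a self-contained sketch: the agent picks a tool by name,
# the loop runs it, and the observation is collected.

from typing import Callable, Dict, List, Tuple

def calculator(expr: str) -> str:
    # Toy tool: evaluate a simple arithmetic expression.
    return str(eval(expr, {"__builtins__": {}}, {}))

def lookup(term: str) -> str:
    # Toy tool: a fixed in-memory "datastore".
    store = {"langchain": "a language-model frontend library"}
    return store.get(term, "not found")

TOOLS: Dict[str, Callable[[str], str]] = {
    "calculator": calculator,
    "lookup": lookup,
}

def run_agent(steps: List[Tuple[str, str]]) -> List[str]:
    """Execute a scripted list of (tool, input) actions.

    In a real agent the LLM would choose each next action from the
    previous observation; here the plan is hard-coded for clarity.
    """
    observations = []
    for tool_name, tool_input in steps:
        observations.append(TOOLS[tool_name](tool_input))
    return observations

print(run_agent([("calculator", "6 * 7"), ("lookup", "langchain")]))
# → ['42', 'a language-model frontend library']
```

The design point langchain adds on top of a loop like this is letting the model itself emit the next (tool, input) pair at each step, plus ready-made tool and datastore integrations.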


