On 4/6/23, [email protected] <[email protected]> wrote:
> I run alpaca.cpp on a laptop with 8 GB ram and the 7B model. Works pretty
> well.
>
> I would love to find a project that would enable me to go to the 13B
> model, but have not yet found one that enables me to run that on only 8 GB
> ram.
4-bit quantization and mmap? I'm not using them myself yet, but people are doing these things.
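To give a rough idea of why 4-bit quantization shrinks memory use, here is a minimal sketch of blockwise quantization in plain Python. The function names and block size are illustrative assumptions, not from alpaca.cpp or any particular project: each block of weights is stored as small integer codes in [-8, 7] plus one float scale, so each weight costs about 4 bits instead of 32.

```python
def quantize_4bit(weights, block_size=32):
    """Pack floats into 4-bit integer codes plus one scale per block.
    Illustrative sketch only; real implementations pack two codes per byte."""
    blocks = []
    for i in range(0, len(weights), block_size):
        block = weights[i:i + block_size]
        # One scale per block, mapping the largest magnitude to +/-7.
        scale = max(abs(w) for w in block) / 7 or 1.0
        codes = [max(-8, min(7, round(w / scale))) for w in block]
        blocks.append((scale, codes))
    return blocks


def dequantize_4bit(blocks):
    """Recover approximate floats from (scale, codes) blocks."""
    return [code * scale for scale, codes in blocks for code in codes]
```

At 4 bits per weight plus a scale per block, a 13B model drops from roughly 26 GB (fp16) toward the 7–8 GB range; mmap then lets the OS page the file in on demand instead of loading it all up front.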
