Hi!

On 21:50 Thu 15 Sep, Zheng Da wrote:
> Hi,
> 
> On Sun, Sep 11, 2011 at 2:32 AM, Michael Blizek
> <[email protected]> wrote:
> >> >> The whole point of allocating a large chunk of memory is to avoid
> >> >> an extra memory copy, because I need to run decompression
> >> >> algorithms on it.
> >> >
> >> > In this case scatterlists solve 2 problems at once. First, you will
> >> > not need to allocate large contiguous memory regions. Second, you
> >> > avoid wasting memory.
> >> The problem is that the decompression library works on contiguous
> >> memory, so I have to provide contiguous memory instead of
> >> scatterlists.
> >
> > Which decompression lib are you talking about? Even if it does not have
> > explicit support for scatterlists, usually you should be able to call
> > the decompress function multiple times. Otherwise, how would you
> > (de)compress data which is larger than available memory?
> Sorry for the late reply.
> I'm using LZO. The data is compressed in blocks of 128KB. I don't
> think I can split a compressed block and run the LZO decompressor on
> the pieces multiple times.
> There is a lot of free memory, but the kernel sometimes can't find a
> contiguous region. vmalloc always succeeds when kmalloc fails.
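
Since LZO only touches the buffer through the CPU, a vmalloc()ed region
is good enough for it (it would not be for DMA). The usual pattern is to
try kmalloc() first and fall back to vmalloc() on failure; a rough
sketch, with made-up function names:

#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>

/* illustrative only: physically contiguous if possible, otherwise
 * virtually contiguous via vmalloc() */
static void *block_buf_alloc(size_t size)
{
	void *buf;

	/* __GFP_NOWARN: a kmalloc failure here is expected and handled */
	buf = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);
	if (!buf)
		buf = vmalloc(size);
	return buf;
}

static void block_buf_free(void *buf)
{
	if (is_vmalloc_addr(buf))
		vfree(buf);
	else
		kfree(buf);
}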

Yes, it really does look as if lzo currently does not support scatterlists.
The change looks fairly simple to me, but apparently there is no maintainer
for it :-( .
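
For reference, the in-kernel entry point takes flat pointers on both
sides, which is why the buffer has to be at least virtually contiguous.
A minimal sketch (lzo1x_decompress_safe() is the real kernel API, the
wrapper around it is made up):

#include <linux/lzo.h>
#include <linux/errno.h>

/* lzo1x_decompress_safe() wants one flat source buffer and one flat
 * destination buffer; there is no scatterlist variant. */
static int decompress_block(const unsigned char *src, size_t src_len,
			    unsigned char *dst, size_t *dst_len)
{
	int ret = lzo1x_decompress_safe(src, src_len, dst, dst_len);

	return ret == LZO_E_OK ? 0 : -EINVAL;
}

Scatterlist support would mean teaching the decompressor to walk
page-sized chunks instead, which is the change I meant above.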

        -Michi
-- 
programming a layer 3+4 network protocol for mesh networks
see http://michaelblizek.twilightparadox.com

