Hi,
On Tuesday, 19 September 2006 20:31, Pavel Machek wrote:
> > > > I'd say this is in agreement with the LZF documentation I read.
> > > > It says the _decompression_ should be (almost) as fast as a "bare"
> > > > read, and there's much less data to read if the data are compressed.
> > > > However, _compression_ takes time, which offsets the gain resulting
> > > > from the decreased amount of data to write.
> > >
> > > Then Nigel is doing something clever and we do something stupid.
> >
> > Well, I don't think we do anything stupid. Of course you're free to review
> > the code anyway. ;-)
> >
> > Nigel may be using another version of the LZF algorithm which is
> > optimized for speed. I didn't experiment with libLZF too much, so we're
> > just using the default settings. Still, AFAIR it is configurable to
> > some extent.
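For what it's worth, the asymmetry described above is easy to reproduce with any general-purpose compressor. liblzf's own C API is lzf_compress()/lzf_decompress(), but since that library may not be installed everywhere, here is a minimal sketch of the same effect using Python's standard-library zlib (the data and sizes are made up for illustration):

```python
import time
import zlib

# Compressible stand-in for a suspend image chunk (~640 KiB).
data = (b"suspend image block " * 4096) * 8

t0 = time.perf_counter()
compressed = zlib.compress(data, level=6)
t_compress = time.perf_counter() - t0

t0 = time.perf_counter()
restored = zlib.decompress(compressed)
t_decompress = time.perf_counter() - t0

assert restored == data  # round trip must be lossless
print(f"ratio: {len(compressed) / len(data):.3f}")
print(f"compress: {t_compress * 1e3:.1f} ms, "
      f"decompress: {t_decompress * 1e3:.1f} ms")
```

On typical hardware the compress time dominates, while decompression is cheap relative to the I/O it saves, which matches the documentation quoted above.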
>
> Maybe we can just ask Nigel? ;-).
>
> > > > > - early writeout is as fast with 1% steps as it is with 20% steps.
> > > > >   It does not really matter in my tests (this is why I did not
> > > > >   retry compression with 20% steps).
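The step granularity above refers to how often the image writer pauses to sync and report progress. A toy model of why 1% vs. 20% steps makes little difference (all names here are hypothetical, not the real suspend code; the cost is dominated by the bytes written, not by how often we sync):

```python
import os
import tempfile
import time

# 8 MiB stand-in for a suspend image.
data = os.urandom(8 << 20)

def write_in_steps(path, payload, step_pct):
    """Write payload in chunks of step_pct percent, syncing after each."""
    chunk = max(1, len(payload) * step_pct // 100)
    with open(path, "wb") as f:
        for off in range(0, len(payload), chunk):
            f.write(payload[off:off + chunk])
            f.flush()
            os.fsync(f.fileno())

for pct in (1, 20):
    path = tempfile.NamedTemporaryFile(delete=False).name
    t0 = time.perf_counter()
    write_in_steps(path, data, pct)
    dt = time.perf_counter() - t0
    print(f"{pct:2d}% steps: {dt:.2f} s")
    os.remove(path)
```

With 8 MiB the two timings come out close, which is consistent with the benchmark result quoted above; on very slow media the extra fsync() calls at 1% could start to matter, which is exactly what per-machine testing would catch.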
> > > >
> > > > This is what we wanted to verify. ;-)
> > >
> > > For one machine, we'd probably need to test more machines.
> >
> > Yes, certainly.
>
> I'd go for 1% steps. If someone finds it slows his machine down, he's
> the one that needs to do the benchmarking.
I guess this means we should apply Jason's patch?
Rafael
--
You never change things by fighting the existing reality.
R. Buckminster Fuller
_______________________________________________
Suspend-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/suspend-devel