> It hasn't had much of a problem for me so far. I haven't tested it with
> really massive archives, though.
Heh, then it looks like I'll have some observations to share!

> You should be able to avoid dumping the files to disk by using the
> TarInputStream class. That allows you to operate on the tar like a stream,
> moving through entries and reading data, so you should be able to get the
> entry metadata and take a hash of the data without having to write to disk
> first.

Ah, if that works then I'll probably be in good shape. I'm thinking it'll be
better to decompress these files first and then run the tar stream against
them. Not sure how well SharpZipLib would handle managing memory if it had to
combine both decompressing and streaming from the tar all in memory when it's
a 1GB archive! And given how loooong it takes to slog through files this
large, it's not like I want to sit there waiting for it to fail.

-Bill Kearney
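
P.S. For the archives, here's roughly the loop I'm picturing. Completely
untested against anything this big, and it assumes the usual SharpZipLib
classes (GZipInputStream, TarInputStream); constructor signatures may differ
between library versions:

// Untested sketch: hash every entry of a .tar.gz without temp files,
// by chaining a gzip decompressor into the tar reader.
using System;
using System.IO;
using System.Security.Cryptography;
using ICSharpCode.SharpZipLib.GZip;
using ICSharpCode.SharpZipLib.Tar;

class TarEntryHasher
{
    static void Main(string[] args)
    {
        using (FileStream file = File.OpenRead(args[0]))
        using (GZipInputStream gzip = new GZipInputStream(file)) // decompress on the fly
        using (TarInputStream tar = new TarInputStream(gzip))    // walk entries as a stream
        {
            byte[] buffer = new byte[64 * 1024]; // only this much data held at once
            TarEntry entry;
            while ((entry = tar.GetNextEntry()) != null)
            {
                if (entry.IsDirectory)
                    continue;

                using (MD5 md5 = MD5.Create())
                {
                    // TarInputStream.Read stops at the end of the current
                    // entry, so this loop hashes exactly one file's data.
                    int n;
                    while ((n = tar.Read(buffer, 0, buffer.Length)) > 0)
                        md5.TransformBlock(buffer, 0, n, null, 0);
                    md5.TransformFinalBlock(buffer, 0, 0);

                    Console.WriteLine("{0}  {1}  ({2} bytes)",
                        BitConverter.ToString(md5.Hash).Replace("-", ""),
                        entry.Name, entry.Size);
                }
            }
        }
    }
}

Since the streams are chained, only a buffer's worth of data should live in
memory at any one time. If that still chokes on the 1GB archives, I'll fall
back to decompressing to a temp file first and pointing TarInputStream at the
uncompressed tar instead.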
