On Sun, 2025-11-09 at 17:02 +0000, [email protected] wrote:
> 
> > If the amount of data buffered in the pipe reaches the kernel's
> > limit, the writing process will "block" (be put on pause,
> > essentially) until the reading process has consumed some data to
> > make room for more.

I have a program that processes a lot of data, with new data files
added to the collection every day.

I tried to reduce storage space by compressing the files and then
reading them through a pipe.

If the filename doesn't end in .gz, I just open the file.

If the filename ends in .gz, my program opens MyFifo (created by mkfifo
if it doesn't already exist) for reading and then runs "zcat TheFile.gz
> MyFifo".
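To make sure we're talking about the same setup, here is a minimal
sketch of it; the names (MyFifo, TheFile.gz) are yours, and the sample
data and temporary directory are just for the demonstration:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

printf 'hello fifo\n' | gzip > TheFile.gz   # sample compressed file
mkfifo MyFifo                               # create the fifo once

# The decompressor writes into the fifo in the background...
zcat TheFile.gz > MyFifo &

# ...while the consuming program reads the fifo like a plain file.
cat MyFifo > out.txt
wait

cat out.txt   # prints: hello fifo
```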

If one of the files is big, instead of zcat blocking until my program
consumes more data, both processes block. So I stopped compressing the
big files, which is where compression would have helped the most.

Is there a way to make this work, other than splitting big files into
smaller ones?
