Hi Gokay,

That's quite a CSV file. It will probably be much smaller in Avro.

An Avro file is a so-called Object Container File. This format was designed
in the MapReduce era to keep the workload for each of the workers roughly
the same, which also makes it easier to tune memory requirements. An Avro
file consists of a header followed by one or more blocks, each of which is
individually compressed, and the blocks are separated by a synchronization
marker. More info can be found here:
https://avro.apache.org/docs/current/spec.html

Also, the C# code gives some good pointers:
https://github.com/apache/avro/blob/42edbd721fedc0ed6cde89ab3b64a9ac606aa74f/lang/csharp/src/apache/main/File/DataFileWriter.cs
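
For reading, the DataFileReader in that same folder mirrors the writer.
Here is a minimal sketch using the Apache.Avro package (the file name is a
placeholder); the reader decompresses one block at a time, so memory use
stays bounded by the block size rather than the file size:

using System;
using Avro.File;
using Avro.Generic;

class SequentialRead
{
    static void Main()
    {
        // OpenReader parses the header, then pulls in one block at a time.
        using (var reader = DataFileReader<GenericRecord>.OpenReader("customers.avro"))
        {
            long count = 0;
            while (reader.HasNext())
            {
                GenericRecord record = reader.Next();
                // Process one record here; only the current block is in memory.
                count++;
            }
            Console.WriteLine("Read " + count + " records");
        }
    }
}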

You could read (and decompress) the blocks one by one, as in the sketch
above, to keep memory usage constant, or use the synchronization markers
to parallelize the reading of the file, which might significantly improve
the throughput of the application; a sketch of that follows below.
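
Something like this, assuming the C# reader exposes the same Sync/PastSync
split pattern as the Java one does via its file reader interface (the path
and the per-worker split logic are my own placeholders, not code from the
Avro project):

using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using Avro.File;
using Avro.Generic;

class ParallelRead
{
    static void Main()
    {
        const string path = "customers.avro"; // placeholder path
        long fileLength = new FileInfo(path).Length;
        int workers = Environment.ProcessorCount;
        long splitSize = fileLength / workers;
        long total = 0;

        Parallel.For(0, workers, i =>
        {
            long start = i * splitSize;
            long end = (i == workers - 1) ? fileLength : start + splitSize;
            long count = 0;
            // Each worker opens its own reader and claims the blocks that
            // begin inside its byte range.
            using (var reader = DataFileReader<GenericRecord>.OpenReader(path))
            {
                reader.Sync(start);  // skip to the first block after 'start'
                while (reader.HasNext() && !reader.PastSync(end))
                {
                    reader.Next();   // decompresses block by block
                    count++;
                }
            }
            Interlocked.Add(ref total, count);
        });

        Console.WriteLine("Read " + total + " records in total");
    }
}

This is the same trick MapReduce uses for input splits: every block is
claimed by exactly one worker, because each worker starts at the first
sync marker after its split start and stops once it passes the split end.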

Kind regards,
Fokko Driesprong



On Thu, 12 May 2022 at 08:16, Gokay Tosunoglu
<gokaytosuno...@yahoo.com.invalid> wrote:

> Hi there,
> I have a C# application which deals with big data: for example, one file
> is more than 100 GB. This big file is CSV; I am reading this customer
> data in buffers and checking for end of line before reading the next
> buffer. I want to support Avro too, but I couldn't find how to read a
> 100 GB Avro file and/or how to divide the file into buffers. Can any of
> you send me a way to do it, or maybe some sample code? Thanks in advance.
> Gokay Tosunoglu
