Dear all,

The Java implementations of the Parquet readers and writers seem pretty tightly 
coupled to Hadoop (see PARQUET-1822). For some projects this causes issues, as 
Hadoop is a large and unnecessary dependency when all you need is to write to 
local disk. Is there any appetite here for separating out the Hadoop code and 
supporting more convenient ways to write to disk out of the box? I am willing 
to work on these changes, but would like some pointers on whether such patches 
would be reviewed and accepted, as PARQUET-1822 has been open for over three 
years now.
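
To illustrate the coupling, below is a rough sketch of what a minimal 
local-disk write looks like today with parquet-avro, as far as I understand 
the current builder API (exact method names may differ between versions, and 
the schema and file path are just made up for the example). Even for a plain 
local file, both Path and Configuration come from Hadoop:

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.conf.Configuration;   // Hadoop dependency
import org.apache.hadoop.fs.Path;              // Hadoop dependency
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetWriter;

public class LocalWriteSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical schema, just for illustration.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Example\",\"fields\":"
            + "[{\"name\":\"id\",\"type\":\"long\"}]}");

        // Writing to a plain local file still goes through Hadoop's
        // Path and Configuration classes.
        try (ParquetWriter<GenericRecord> writer = AvroParquetWriter
                .<GenericRecord>builder(new Path("file:///tmp/example.parquet"))
                .withSchema(schema)
                .withConf(new Configuration())
                .build()) {
            GenericRecord record = new GenericData.Record(schema);
            record.put("id", 1L);
            writer.write(record);
        }
    }
}

A friendlier out-of-the-box path might accept something like a 
java.nio.file.Path or OutputStream directly, without pulling in the Hadoop 
client jars, but that is exactly the kind of direction I would like feedback on.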

Best regards,
Atour Mousavi Gourabi
