Yes, Parquet is always better, for multiple reasons. With JSON, Drill has to read
the whole file from a single reader thread and parse every record just to read
individual columns. Parquet compresses and encodes data on disk, so we read much
less data. Drill can read individual columns within each row group in parallel.
We can also leverage features like filter pushdown, partition pruning, and the
metadata cache for better query performance.

Thanks
Padma

> On Jun 10, 2018, at 8:22 PM, Abhishek Girish <[email protected]> wrote:
> 
> I would suggest converting the JSON files to parquet for better
> performance. JSON supports a more free form data model, so that's a
> trade-off you need to consider, in my opinion.
> On Sun, Jun 10, 2018 at 8:08 PM Divya Gehlot <[email protected]>
> wrote:
> 
>> Hi,
>> I am looking for the advise regarding the performance for below :
>> 1. keep the JSON as is
>> 2. Convert the JSON file to parquet files
>> 
>> My JSON files data is not in fixed format and  file size varies from 10 KB
>> to 1 MB.
>> 
>> Appreciate the community users advise on above !
>> 
>> 
>> Thanks,
>> Divya
>> 
