Hi,

I was trying to dump the contents of a very big MySQL table into a Parquet 
file.
After setting store.format to parquet and the compression to snappy, I used a 
query like:

CREATE TABLE dfs.tmp.`file.parquet` AS (SELECT ... FROM mysql.schema.table);
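
For reference, I believe the compression knob I changed is 
store.parquet.compression (the exact option name is my assumption), so the 
session settings were roughly:

  -- write CTAS output as Parquet files
  ALTER SESSION SET `store.format` = 'parquet';
  -- compress the Parquet files with Snappy (assuming this is the right option name)
  ALTER SESSION SET `store.parquet.compression` = 'snappy';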

The problem I found is that the data is too big to fit into memory, so I get a 
GC overhead limit exceeded error and the Drillbit process crashes.

I've been trying to find a configuration setting that makes Drill spill to 
disk when this happens, but I had no luck.

Can this be done?
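
For example, I was hoping for something along these lines, though I am not 
sure whether this option (or the external sort spill directories in 
drill-override.conf) actually makes a CTAS spill to disk rather than fail; 
the option name below is only my best guess from the Drill documentation:

  -- cap the memory the query may use per node, hoping the remainder spills to disk
  ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 2147483648;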

Regards,

Daniel García
Continuous Improvement Team


Albasanz 12 / 4th Floor / 28037 Madrid

T  +34 917 542 966

E   [email protected]<mailto:[email protected]>
W www.eurotaxglass.com<http://www.eurotaxglass.com/>

