Try looking into the "sort.external.spill.directories" and
"sort.external.spill.fs" settings under drill.exec.  These "may" help with
that memory pressure, but I am not an expert.
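For reference, a sketch of how those two boot options might look in
drill-override.conf on each drillbit (the spill path below is just a
placeholder; point it at a volume with plenty of free space):

```hocon
drill.exec: {
  sort: {
    external: {
      spill: {
        # Local directories the external sort can spill to when it
        # runs out of memory. Placeholder path -- adjust for your hosts.
        directories: [ "/tmp/drill/spill" ],
        # File system for spill files; "file:///" means local disk.
        fs: "file:///"
      }
    }
  }
}
```

Restart the drillbits after changing drill-override.conf for the settings to take effect.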

John

On Fri, Dec 4, 2015 at 7:22 AM, Daniel Garcia <
[email protected]> wrote:

> Hi,
>
>
>
> I was trying to dump into a parquet file data contained in a very big
> MySQL table.
>
> After setting the store.format to parquet and the compression to snappy, I
> used a query like:
>
>
>
> Create table dfs.tmp.`file.parquet` as (select … from mysql.schema.table);
>
>
>
> The problem I found is that the data is too big to fit into memory, so I
> get a GC overhead exception and the drillbit process crashes.
>
>
>
> I’ve been trying to find a configuration setting that makes Drill use the
> disk when this happens, but I’ve had no luck.
>
>
>
> Can this be done?
>
>
>
> Regards,
>
>
>
> *Daniel García*
>
