> > > drill.exec: {
> > >   compile: {
> > >     compiler: "JDK",
> > >     prefer_plain_java: true
> > >   }
> > > }
> > >
> > > This forces use of the JDK compiler (instead of Janino) and bypasses the
> > > byte code rewrite step.
> > >
> > > No guarantee this will work, but something to try.
> > >
> > > Thanks,
> > >
> > > - Paul
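For context, Paul's snippet is a fragment of Drill's boot-time configuration. A sketch of how it might sit in `conf/drill-override.conf` (the `compile` block is as given above; the surrounding file layout is an assumption):

```
# conf/drill-override.conf -- sketch only, assuming Drill's standard HOCON
# override file; the compile block is the one from the message above.
drill.exec: {
  compile: {
    compiler: "JDK",          # use the JDK compiler instead of Janino
    prefer_plain_java: true   # skip the byte code rewrite step
  }
}
```

A Drillbit restart would be needed for boot-time options like these to take effect.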
> On Tuesday, September 10, 2019, 12:28:07 PM PDT, Jiang Wu wrote:
>
> While doing testing against Apache Drill 1.16.0, we are running into this
> error: java.lang.OutOfMemoryError: GC overhead limit exceeded
> In our use case, Apache Drill is using a custom storage plugin and no other
> storage plugins like PostgreSQL, MySQL, etc. Some of the queries are very
> large
> > You need to set the connection autocommit mode to false,
> > e.g. conn.setAutoCommit(false) [2]. For data sizes of 10
> > million rows plus, this is a must.
> >
> > You could disable the "Auto Commit" option as a session option [3]
> > or do it within the plugin config URL with the following
> > property: defaultAutoCommit=false [4]
> >
> > [1] https://issues.apache.org/jira/browse/DRILL-4177
> > [2] https://jdbc.postgresql.org/documentation/93/query.html#fetchsize-example
> > [3] https://www.postgresql.org/docs/9.3/static/ecpg-sql-set-autocommit.html
> > [4] https://jdbc.postgresql.org/documentation/head/ds-cpds.html
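The `defaultAutoCommit=false` property from [4] can ride along on the JDBC URL in Drill's storage plugin configuration. A hypothetical JDBC plugin config (host, database, and credentials are placeholders):

```json
{
  "type": "jdbc",
  "driver": "org.postgresql.Driver",
  "url": "jdbc:postgresql://localhost:5432/mydb?defaultAutoCommit=false",
  "username": "drill",
  "password": "drill",
  "enabled": true
}
```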
> >
> > Kind regards
> > Vitalii
> >
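Vitalii's `conn.setAutoCommit(false)` combined with a fetch size is the cursor-based streaming pattern from the PostgreSQL JDBC documentation cited as [2]. A minimal sketch, where the connection URL and table name are placeholders (the `willStream` helper is a simplified summary of the driver's conditions, not a driver API):

```java
import java.sql.*;

public class StreamRead {
    // Simplified rule of thumb: the PostgreSQL driver only streams when
    // autocommit is off AND a positive fetch size is set on the statement.
    static boolean willStream(boolean autoCommit, int fetchSize) {
        return !autoCommit && fetchSize > 0;
    }

    public static void main(String[] args) throws SQLException {
        if (args.length == 0) return;  // pass a JDBC URL to actually run this
        Connection conn = DriverManager.getConnection(args[0]);
        conn.setAutoCommit(false);      // required for cursor-based fetching [2]
        try (Statement st = conn.createStatement()) {
            st.setFetchSize(50);        // rows per round trip; keeps heap bounded
            try (ResultSet rs = st.executeQuery("SELECT * FROM big_table")) {
                while (rs.next()) {
                    // process one row at a time instead of materializing all rows
                }
            }
        } finally {
            conn.close();
        }
    }
}
```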
> > On Mon, Aug 13, 2018 at 3:03 PM Reid Thompson wrote:
> > > My standalone host is configured with 16GB RAM, 8 cpus. Using
> > > drill-embedded (single host standalone), I am attempting to pull data
> > > from PostgreSQL, but hit "java.lang.OutOfMemoryError: GC overhead
> > > limit exceeded". Can someone advise on how to get past this?
> > > Is there a way to have Drill stream this data from PostgreSQL to parquet
> > > files on disk, or does the data set have to be completely loaded into
> > > memory before it can be written to disk? The documentation indicates
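On the streaming-to-parquet part of the question: Drill writes query results to parquet via CTAS, which pairs with the autocommit advice earlier in the thread. A sketch, where `pg` is a hypothetical name for a PostgreSQL storage plugin and the table names are placeholders:

```sql
-- Sketch only: plugin and table names are hypothetical.
ALTER SESSION SET `store.format` = 'parquet';
CREATE TABLE dfs.tmp.`orders_parquet` AS
SELECT * FROM pg.public.`orders`;
```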