I am trying to complete a test case on some data. I took a schema and used
log-synth (thanks, Ted) to create a fairly wide table (89 columns). I then
output my data as CSV files and created a Drill view; so far, so good.
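In case it helps, the view looks roughly like this (the workspace, path, and
column names below are placeholders, not my real ones):

  -- Drill exposes headerless CSV as a single `columns` array
  CREATE OR REPLACE VIEW dfs.tmp.`wide_view` AS
  SELECT
    CAST(columns[0] AS DATE) AS `trans_date`,  -- the YYYY-MM-DD column
    columns[1]               AS `col_b`
    -- ...the remaining 87 columns aliased the same way
  FROM dfs.`/data/logsynth/csv`;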

One of the columns is a "date" column in YYYY-MM-DD format with 1216 unique
values. To me that is like roughly four years of daily-partitioned data in
Hive, so I tried to partition my data on that field.

If I create a Parquet table partitioned on that field, things eventually hork
on me and I get the error below. If I don't use the PARTITION BY clause, the
table is created just fine with 30 files.
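The two statements were essentially the following (table and view names are
again placeholders):

  -- Without PARTITION BY: succeeds, ~30 files (one per fragment)
  CREATE TABLE dfs.tmp.`wide_parquet` AS
  SELECT * FROM dfs.tmp.`wide_view`;

  -- With PARTITION BY on the date column: fails with the error below
  CREATE TABLE dfs.tmp.`wide_parquet_by_day`
  PARTITION BY (`trans_date`) AS
  SELECT * FROM dfs.tmp.`wide_view`;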

Looking in the folder where the PARTITIONED table was supposed to be created,
there are over 20K files. Is this expected? Would we expect #Partitions *
#Fragments files? (1216 partitions times ~30 fragments would cap out around
36K files, and 20K+ is at least in that ballpark if each fragment only sees a
subset of the dates.) Could this be what the error is trying to tell me? I
guess I am just lost on what the error means and what I should/could expect
from something like this. Is this a bug or expected behavior?

Error:

java.lang.RuntimeException: java.sql.SQLException: SYSTEM ERROR:
IllegalStateException: Failure while closing accountor.  Expected private
and shared pools to be set to initial values.  However, one or more were
not.  Stats are

zone      init        allocated   delta
private   1000000     1000000     0
shared    9999000000  9997806954  1193046.


Fragment 1:25

[Error Id: cad06490-f93e-4744-a9ec-d27cd06bc0a1 on hadoopmapr1.mydata.com:31010]

  at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
  at sqlline.TableOutputFormat$ResizingRowsProvider.next(TableOutputFormat.java:87)
  at sqlline.TableOutputFormat.print(TableOutputFormat.java:118)
  at sqlline.SqlLine.print(SqlLine.java:1583)
  at sqlline.Commands.execute(Commands.java:852)
  at sqlline.Commands.sql(Commands.java:751)
  at sqlline.SqlLine.dispatch(SqlLine.java:738)
  at sqlline.SqlLine.begin(SqlLine.java:612)
  at sqlline.SqlLine.start(SqlLine.java:366)
  at sqlline.SqlLine.main(SqlLine.java:259)
