When you set all_text_mode to true, Drill reads all data from the JSON files as VARCHAR. After reading the data, use a SELECT statement in Drill to cast the data as follows:

* Cast JSON numeric values to [SQL types](/docs/data-types), such as BIGINT, DECIMAL, FLOAT, and INTEGER.
* Cast JSON strings to [Drill Date/Time Data Type Formats](/docs/supported-date-time-data-type-formats).
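For example, with all_text_mode enabled, a query along these lines converts the text back into typed columns. This is only a sketch: the column names (volume, last_price, trade_date) and the yyyy-MM-dd date format are illustrative placeholders, not fields taken from the stocks data.

```sql
SELECT CAST(volume AS BIGINT)                 AS volume,      -- JSON number read as VARCHAR
       CAST(last_price AS FLOAT)              AS last_price,  -- JSON number read as VARCHAR
       TO_TIMESTAMP(trade_date, 'yyyy-MM-dd') AS trade_date   -- JSON string assumed to be in yyyy-MM-dd form
FROM   dfs.`/Users/vince/Downloads/stocks.json`;
```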
For more info, please see the preliminary docs at http://drill.apache.org/docs/json-data-model.

Kristine Hahn
Sr. Technical Writer
415-497-8107 @krishahn

On Mon, Mar 30, 2015 at 7:50 AM, Vince Gonzalez <[email protected]> wrote:

> My guess is that setting store.json.all_text_mode = True effectively tells
> Drill not to skip inferring the type of the columns, which perhaps avoids
> any complications due to a change in schema that confuses things (in this
> case from Float8 to null)? But I am not confident I understand what's
> really happening here. Could someone enlighten me?
>
> I'm using a nightly 0.8.0-SNAPSHOT build, circa March 18.
>
> The data comes from here:
> http://jsonstudio.com/wp-content/uploads/2014/02/stocks.zip
>
> If I set store.json.all_text_mode = True, the below query runs fine:
>
> 0: jdbc:drill:zk=localhost:2181> alter system set `store.json.all_text_mode` = False;
> +------------+------------+
> |     ok     |  summary   |
> +------------+------------+
> | true       | store.json.all_text_mode updated. |
> +------------+------------+
> 1 row selected (0.018 seconds)
> 0: jdbc:drill:zk=localhost:2181> select * from dfs.`/Users/vince/Downloads/stocks.json` limit 5;
> Query failed: Query stopped., You tried to write a Float8 type when you are
> using a ValueWriter of type NullableBigIntWriterImpl.
> [ 4df22bc2-a37e-4057-bd5d-c7ec7d70322b on 10.9.104.180:31010 ]
>   (java.lang.IllegalArgumentException) You tried to write a Float8 type when
> you are using a ValueWriter of type NullableBigIntWriterImpl.
>     org.apache.drill.exec.vector.complex.impl.AbstractFieldWriter.fail():625
>     org.apache.drill.exec.vector.complex.impl.AbstractFieldWriter.writeFloat8():205
>     org.apache.drill.exec.vector.complex.impl.NullableBigIntWriterImpl.writeFloat8():88
>     org.apache.drill.exec.vector.complex.fn.JsonReader.writeData():283
>     org.apache.drill.exec.vector.complex.fn.JsonReader.writeDataSwitch():208
>     org.apache.drill.exec.vector.complex.fn.JsonReader.writeToVector():182
>     org.apache.drill.exec.vector.complex.fn.JsonReader.write():156
>     org.apache.drill.exec.store.easy.json.JSONRecordReader.next():125
>     org.apache.drill.exec.physical.impl.ScanBatch.next():165
>     org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next():118
>     org.apache.drill.exec.record.AbstractRecordBatch.next():99
>     org.apache.drill.exec.record.AbstractRecordBatch.next():89
>     org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
>     org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():113
>     org.apache.drill.exec.record.AbstractRecordBatch.next():142
>     org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next():118
>     org.apache.drill.exec.record.AbstractRecordBatch.next():99
>     org.apache.drill.exec.record.AbstractRecordBatch.next():89
>     org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
>     org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():96
>     org.apache.drill.exec.record.AbstractRecordBatch.next():142
>     org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next():118
>     org.apache.drill.exec.physical.impl.BaseRootExec.next():67
>     org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():97
>     org.apache.drill.exec.physical.impl.BaseRootExec.next():57
>     org.apache.drill.exec.work.fragment.FragmentExecutor.run():121
>     org.apache.drill.exec.work.WorkManager$RunnableWrapper.run():303
>     .......():0
>
> Error: exception while executing query: Failure while executing query.
> (state=,code=0)
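Turning the option back on, as Vince describes, lets the same query run because every JSON value is then read as VARCHAR and the reader never has to switch writer types. A minimal sketch using the file path from the thread; the option can also be scoped to the current session with ALTER SESSION rather than ALTER SYSTEM:

```sql
-- Re-enable all-text mode for this session only
ALTER SESSION SET `store.json.all_text_mode` = true;

-- Same query as in the thread; all columns come back as VARCHAR
SELECT * FROM dfs.`/Users/vince/Downloads/stocks.json` LIMIT 5;
```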
