Hello Abhishek, thanks! The second option worked for me! :)
cheers!
gus
On Wed, Apr 12, 2017 at 03:06:51PM -0700, Abhishek Girish wrote:
> My best guess is that there was a schema change across some records,
> which caused Drill to fail. For example, a field "a": "abc" changed to
> "a": {"b": "abc"}. Out of the box, Drill does not handle such cases well.
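To make the schema change concrete, here is a sketch of two hypothetical MongoDB documents (field names invented for illustration) where `a` shifts from a scalar string to an embedded document between records; a typed reader that committed to the first shape will fail on the second:

```json
{ "_id": 1, "a": "abc" }
{ "_id": 2, "a": { "b": "abc" } }
```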
>
> You could first try enabling all-text mode [1] and re-running the query.
> It should handle schema changes across scalar fields by treating all
> values as VARCHARs:
> set `store.mongo.all_text_mode` = true;
>
> If that doesn't help, try the experimental option below [2]:
> set `exec.enable_union_type` = true;
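For reference, the same two options can be set with Drill's ALTER SESSION syntax, which scopes the change to the current session rather than the whole cluster (a sketch; use ALTER SYSTEM instead if you want it to persist for all sessions):

```sql
-- Read every scalar value from MongoDB as VARCHAR, sidestepping
-- type conflicts between records
ALTER SESSION SET `store.mongo.all_text_mode` = true;

-- Experimental: allow a single column to carry more than one data type
ALTER SESSION SET `exec.enable_union_type` = true;
```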
>
> Let us know if it helps.
>
> [1] https://drill.apache.org/docs/json-data-model/#data-type-mapping
> [2] https://drill.apache.org/docs/json-data-model/#experimental-feature:-heterogeneous-types
>
> On Wed, Apr 12, 2017 at 2:37 PM, gus <[email protected]> wrote:
>
> > Hello! I'm using Apache Drill 1.10.0 to query MongoDB 3.4 (Linux).
> > I need to compare a value inside the JSON array of one collection with
> > a value from another collection.
> >
> > This is the query:
> >
> > select fb.v1._ AS codigofb, trf.v3 AS topo, fb.v20.a AS titulofb,
> >        trf.v20.a AS titulotrf
> > from `filmes` fb
> > JOIN `trf20170405` trf ON trf.v1._ = fb.v1._;
> >
> >
> > It prints 100 results and then gives me this error [1].
> >
> > Each collection has ~100 MB, and the same error appears when I try to
> > limit the result to 100 rows.
> >
> > This is the example of the document from trf:
> > https://share.riseup.net/#-vKctuQvhOBQStl6RJ5iRg
> >
> > Any tips?
> >
> > cheers!
> > gus
> >
> >
> > [1] error msg:
> >
> > Error: SYSTEM ERROR: IllegalStateException: You tried to start when you
> > are using a ValueWriter of type SingleMapWriter.
> >
> > Fragment 0:0
> >
> > [Error Id: 0f36b8e6-8f44-4696-a1c3-610a28815d20 on debian:31010]
> >
> > (java.lang.IllegalStateException) You tried to start when you are
> > using a ValueWriter of type SingleMapWriter.
> >     org.apache.drill.exec.vector.complex.impl.AbstractFieldWriter.startList():108
> >     org.apache.drill.exec.vector.complex.impl.SingleMapWriter.startList():98
> >     org.apache.drill.exec.vector.complex.impl.MapOrListWriterImpl.start():68
> >     org.apache.drill.exec.store.bson.BsonRecordReader.writeToListOrMap():83
> >     org.apache.drill.exec.store.bson.BsonRecordReader.writeToListOrMap():112
> >     org.apache.drill.exec.store.bson.BsonRecordReader.write():75
> >     org.apache.drill.exec.store.mongo.MongoRecordReader.next():186
> >     org.apache.drill.exec.physical.impl.ScanBatch.next():179
> >     org.apache.drill.exec.record.AbstractRecordBatch.next():119
> >     org.apache.drill.exec.record.AbstractRecordBatch.next():109
> >     org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> >     org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():135
> >     org.apache.drill.exec.record.AbstractRecordBatch.next():162
> >     org.apache.drill.exec.record.AbstractRecordBatch.next():119
> >     org.apache.drill.exec.record.AbstractRecordBatch.next():109
> >     org.apache.drill.exec.physical.impl.join.HashJoinBatch.buildSchema():175
> >     org.apache.drill.exec.record.AbstractRecordBatch.next():142
> >     org.apache.drill.exec.record.AbstractRecordBatch.next():119
> >     org.apache.drill.exec.record.AbstractRecordBatch.next():109
> >     org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> >     org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():135
> >     org.apache.drill.exec.record.AbstractRecordBatch.next():162
> >     org.apache.drill.exec.record.AbstractRecordBatch.next():119
> >     org.apache.drill.exec.record.AbstractRecordBatch.next():109
> >     org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> >     org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():135
> >     org.apache.drill.exec.record.AbstractRecordBatch.next():162
> >     org.apache.drill.exec.record.AbstractRecordBatch.next():119
> >     org.apache.drill.exec.record.AbstractRecordBatch.next():109
> >     org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> >     org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():135
> >     org.apache.drill.exec.record.AbstractRecordBatch.next():162
> >     org.apache.drill.exec.physical.impl.BaseRootExec.next():104
> >     org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():81
> >     org.apache.drill.exec.physical.impl.BaseRootExec.next():94
> >     org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():232
> >     org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():226
> >     java.security.AccessController.doPrivileged():-2
> >     javax.security.auth.Subject.doAs():415
> >     org.apache.hadoop.security.UserGroupInformation.doAs():1657
> >     org.apache.drill.exec.work.fragment.FragmentExecutor.run():226
> >     org.apache.drill.common.SelfCleaningRunnable.run():38
> >     java.util.concurrent.ThreadPoolExecutor.runWorker():1145
> >     java.util.concurrent.ThreadPoolExecutor$Worker.run():615
> >     java.lang.Thread.run():745 (state=,code=0)
> >