[ https://issues.apache.org/jira/browse/DRILL-3576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14651498#comment-14651498 ]

Philip Deegan commented on DRILL-3576:
--------------------------------------

I can verify that some files are available on every node; that is, they are 
available on all nodes.

Are you saying that each node needs 100% of the data shared between all nodes, 
and that it's not possible to split it across nodes?
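One way to confirm whether the file from the error is actually present on every node is a quick per-node existence check. A minimal sketch, assuming the five nodes are reachable over SSH and using placeholder hostnames (node1..node5 are assumptions, not from the report); the `ssh` line is shown as a comment so the loop itself runs anywhere:

```shell
FILE="/directory/json.json"
NODES="node1 node2 node3 node4 node5"

for node in $NODES; do
  # On a real cluster, run the check remotely, e.g.:
  #   ssh "$node" test -e "$FILE" || echo "missing on $node"
  # Here we only print the check that would be performed on each node.
  echo "checking $node:$FILE"
done
```

If any node reports the file missing, Drill fragments scheduled on that node will fail with exactly this DATA_READ error, since the `file:` scheme reads from the local filesystem of whichever node executes the fragment.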

> Failure reading JSON file
> -------------------------
>
>                 Key: DRILL-3576
>                 URL: https://issues.apache.org/jira/browse/DRILL-3576
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Storage - JSON
>    Affects Versions: 1.1.0
>         Environment: 5 Instance zookeeper cluster running RHEL7
>            Reporter: Philip Deegan
>
> 17GB of data is replicated across five machines almost verbatim.
> {noformat}
> SELECT COUNT(*) FROM  (SELECT FLATTEN(t.a) AS b FROM dfs.`directory` t) flat 
> WHERE flat.b.c > 1234;
> {noformat}
> Results in the error:
> {noformat}
> java.lang.RuntimeException: java.sql.SQLException: DATA_READ ERROR: Failure 
> reading JSON file - File file:/directory/json.json does not exist
> File  /directory/json.json
> Record  1
> Fragment 1:9
> [Error Id: eea5803c-7c15-4b25-8c06-5df4cf433fb9 on machine:31010]
>       at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
>       at 
> sqlline.TableOutputFormat$ResizingRowsProvider.next(TableOutputFormat.java:87)
>       at sqlline.TableOutputFormat.print(TableOutputFormat.java:118)
>       at sqlline.SqlLine.print(SqlLine.java:1583)
>       at sqlline.Commands.execute(Commands.java:852)
>       at sqlline.Commands.sql(Commands.java:751)
>       at sqlline.SqlLine.dispatch(SqlLine.java:738)
>       at sqlline.SqlLine.begin(SqlLine.java:612)
>       at sqlline.SqlLine.start(SqlLine.java:366)
>       at sqlline.SqlLine.main(SqlLine.java:259)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
