Liu, you don't need to create a table, but you can if you want to.
You can query the JSON file directly (e.g. a file listed by hadoop fs -ls /drill/json/filename.json) from Drill:

select * from mydfs.json.`filename.json`;

Thanks
Sudhakar Thota

On Dec 30, 2014, at 7:08 PM, LIU Xiaobing <[email protected]> wrote:

> Hi all,
> I am a new user of Apache Drill. I have set up a distributed Hadoop
> cluster with ZooKeeper and HBase running, and then deployed a clustered
> Drill environment according to the wiki.
> I have registered a storage plugin instance named mydfs, like this:
>
> {
>   "type": "file",
>   "enabled": true,
>   "connection": "hdfs://192.168.41.243:9000/",
>   "workspaces": {
>     "json": {
>       "location": "/drill/json",
>       "writable": true,
>       "defaultInputFormat": null
>     }
>   },
>   "formats": {
>     "json": {
>       "type": "json"
>     }
>   }
> }
>
> Should I now create an external table pointing to the HDFS directory
> /drill/json, as in Hive? I haven't found any information about this.
> By the way, I also noticed that in the Workspaces section there is no
> workspace named "default" for the file or hive plugins.
> Is there anything I am missing?
>
> --
> Best Regards
> LIU Xiaobing 刘小兵
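To make the advice above concrete, here is a short sketch of both paths, assuming the mydfs plugin and json workspace defined in the quoted config, and a file /drill/json/filename.json (the table name new_table below is illustrative, not from the thread):

```sql
-- No DDL is required: query the file in the json workspace directly.
SELECT * FROM mydfs.json.`filename.json` LIMIT 10;

-- If you do want a table, the workspace is marked "writable": true,
-- so CREATE TABLE AS (CTAS) can materialize the query result:
CREATE TABLE mydfs.json.`new_table` AS
SELECT * FROM mydfs.json.`filename.json`;
```

The CTAS output lands under the workspace location (/drill/json), so no Hive-style external table definition is needed either way.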
