Hi,
I then deployed the m2 build on my cluster, replacing Drill m1.
I connected with "./bin/sqlline -n admin -p admin -u
jdbc:drill:schema=dfs;zk=fan:2181,slave2:2181,slave3:2181"
but the following error occurs when I run a query:
0: jdbc:drill:schema=dfs> select * from dfs.`AllstarFull.csv`;
Query failed: org.apache.drill.exec.rpc.RpcException: Remote failure while
running query.[error_id: "29ff1ff5-1d59-4fe5-adf7-eaea48f7b9ed"
endpoint {
address: "slave2"
user_port: 31010
control_port: 31011
data_port: 31012
}
error_type: 0
message: "Failure while parsing sql. < ValidationException:[
org.eigenbase.util.EigenbaseContextException: From line 1, column 15 to line 1,
column 35 ] < EigenbaseContextException:[ From line 1, column 15 to line 1,
column 35 ] < SqlValidatorException:[ Table 'dfs.AllstarFull.csv' not found ]"
]
Error: exception while executing query (state=,code=0)
The following queries return similar errors:
select * from `nation.parquet`;
select * from `tt.json`;
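I am not sure whether the table needs to be qualified with a workspace in m2. I would have guessed something like the following (assuming the file sits at the root of the HDFS location the "root" workspace points to), but I have not confirmed this is the right form:

select * from dfs.root.`AllstarFull.csv`;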
Here is my storage-plugins.json:
{
  "storage": {
    dfs: {
      type: "file",
      connection: "hdfs://100.2.12.103:9000",
      workspaces: {
        "root": {
          location: "/",
          writable: false
        },
        "tmp": {
          location: "/tmp",
          writable: true,
          storageformat: "csv"
        }
      },
      formats: {
        "psv": {
          type: "text",
          extensions: [ "tbl" ],
          delimiter: "|"
        },
        "csv": {
          type: "text",
          extensions: [ "csv" ],
          delimiter: ","
        },
        "tsv": {
          type: "text",
          extensions: [ "tsv" ],
          delimiter: "\t"
        },
        "parquet": {
          type: "parquet"
        },
        "json": {
          type: "json"
        }
      }
    },
    cp: {
      type: "file",
      connection: "classpath:///"
    }
  }
}
Thanks.