You don't need a drillbit on the remote cluster. Running drillbits there
will be faster (data locality, etc.), but you can just run a drillbit on
your client machine and access any remote cluster (or even join data from
multiple clusters).

It looks like you've created a new storage plugin. I would recommend
copying the entire JSON configuration from the "dfs" plugin into your
new plugin (named "hdfs" in this example) and changing just the connection
string. That way you'll have all the format plugins and default workspaces
registered properly. Your connection string itself actually seems OK.

Then try using the full path to the file/directory as in:
hdfs.root.`/path/to/file` (assuming hdfs is the name of the storage plugin
you created).
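So once the plugin is saved, a query might look like this (the plugin and
workspace names and the path are just illustrative):

```sql
-- assuming a storage plugin named "hdfs" with a "root" workspace
SELECT * FROM hdfs.root.`/user/alan/data/sample.json` LIMIT 10;
```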

Does that help?





On Thu, May 21, 2015 at 5:23 PM, Abhishek Girish <[email protected]>
wrote:

> Hi Alan,
>
> What you are attempting to do wouldn't work. Without a drillbit running on
> the remote cluster, there is no way I see we can access that file system
> from Drill.
>
> If you'd like to connect to a remote cluster (cluster B), the options I see
> are
>
> (1) Install Drill on cluster B and use a local client :) It shouldn't be
> that hard!
>
> (2) Install Drill on cluster B and a client on cluster A. If you are
> connecting via Sqlline, provide IP of a node on cluster B using the
> "drillbit" parameter (sqlline -u jdbc:drill:drillbit=<IP>)
>      I tried this with MapR and it worked for me.
>
> -Abhishek
>
> On Thu, May 21, 2015 at 4:17 PM, Alan Miller <[email protected]>
> wrote:
>
> > I tried that initially, but since it didn't work I tried to simplify it
> as
> > much as possible.
> >
> > Are you saying it "should" work? I mean, all I need to do is point the
> > connection parameter
> > to a different namenode, right?
> >
> >
>
