Unfortunately, in the short run, you'll need to copy the files locally using
wget or curl and then read the ORC file through the local file system using
file:/// paths.
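For the copy step, the file comes back through the gateway's WebHDFS endpoint (an `?op=OPEN` call). A minimal sketch of building that URL is below; the gateway host, topology name ("default"), and HDFS path are placeholders for illustration, not values from this thread:

```java
// Sketch: build the Knox WebHDFS URL for fetching a file through the
// gateway, so it can then be read as an ORC file via a file:/// path.
// Host, port, topology, and path below are assumed placeholder values.
public class KnoxWebHdfsUrl {

    // Knox proxies WebHDFS under /gateway/<topology>/webhdfs/v1/<hdfs-path>;
    // op=OPEN streams the file contents back over HTTPS.
    static String openUrl(String gatewayHost, int port, String topology,
                          String hdfsPath) {
        return "https://" + gatewayHost + ":" + port
                + "/gateway/" + topology + "/webhdfs/v1" + hdfsPath
                + "?op=OPEN";
    }

    public static void main(String[] args) {
        String url = openUrl("knox.example.com", 8443, "default",
                             "/data/events.orc");
        // Fetch this URL with curl or wget, save it to e.g. /tmp/events.orc,
        // then point the ORC reader at file:///tmp/events.orc.
        System.out.println(url);
    }
}
```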

I talked with Larry McCay from the Knox project and he said that they are
considering making a KnoxFS Java client, which implements
org.apache.hadoop.fs.FileSystem, that would handle this use case.
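If such a client existed, it would plug into Hadoop's usual scheme-to-class mapping, so your existing FileSystem.get() and ORC writer code would work unchanged. The scheme name and class below are made up for illustration only, since no such client ships today:

```xml
<!-- Hypothetical core-site.xml entry: registers a "knoxfs://" scheme the
     same way any custom org.apache.hadoop.fs.FileSystem is registered.
     The property value is an invented class name, not a real artifact. -->
<property>
  <name>fs.knoxfs.impl</name>
  <value>org.apache.knox.fs.KnoxFileSystem</value>
</property>
```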

.. Owen

On Mon, Mar 6, 2017 at 4:05 AM, Srinivas M <[email protected]> wrote:

> Hi
>
> I have an application that uses the Hive ORC API to write an ORC file
> to HDFS. I use the native FileSystem API and pass a WebHDFS URI
> (webhdfs://host:port) to create a FileSystem object:
>
> fs = FileSystem.get(hdfsuri,conf,_user) ;
>
> While connecting through the Knox gateway, is there a way to still use
> the native FileSystem API, or should I be using REST API calls to
> access the files on HDFS?
>
> If the latter, is there any way to read or write an ORC file in that case,
> given that the ORC readers and writers need an object of type
> "org.apache.hadoop.fs.FileSystem"?
>
> --
> Srinivas
> (*-*)
> ------------------------------------------------------------
> You have to grow from the inside out. None can teach you, none can make
> you spiritual.
>                       -Narendra Nath Dutta(Swamy Vivekananda)
> ------------------------------------------------------------
>
