[ https://issues.apache.org/jira/browse/AVRO-2616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968015#comment-16968015 ]

Sean Busbey commented on AVRO-2616:
-----------------------------------

I agree that it would be best to get any needed Hadoop filesystem 
implementation from a local Hadoop install when we're talking to a non-local 
filesystem.

For the local case, I see a few options:

1) Keep a smaller Hadoop footprint for the local filesystem, similar to how we 
don't ship all of Guava.
2) Ship our own FileSystem implementation for local file access that is just 
simple java.nio code.
3) Try something clever: inspect the CLI arguments and environment to guess 
whether we need to load the Hadoop FileSystem classes at all, or can use nio 
directly.

Which approach are you thinking of, [~ryanskraba]?
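As a rough illustration of option 3, the check could be as simple as parsing the CLI argument as a URI and falling back to java.nio whenever there is no scheme or the scheme is {{file}}. This is only a sketch, not anything in avro-tools today; the class and method names are hypothetical.

```java
import java.net.URI;

/** Hypothetical helper sketching option 3: decide whether a CLI path
 *  argument can be handled with java.nio directly, so the Hadoop
 *  FileSystem classes never need to be loaded for the local case. */
public final class LocalPathGuess {

  /** Returns true when the argument looks like a plain local path or a
   *  file:// URI, i.e. no Hadoop FileSystem implementation is needed. */
  public static boolean isLocal(String arg) {
    URI uri = URI.create(arg);
    String scheme = uri.getScheme();
    // No scheme ("/tmp/data.avro") or an explicit file:// scheme is local;
    // anything else (hdfs://, s3a://, ...) should go through Hadoop.
    return scheme == null || "file".equals(scheme);
  }

  public static void main(String[] args) {
    System.out.println(isLocal("/tmp/records.avro"));           // true
    System.out.println(isLocal("file:///tmp/records.avro"));    // true
    System.out.println(isLocal("hdfs://nn:8020/records.avro")); // false
  }
}
```

The Hadoop classes would then only be touched on the non-local branch, which is what lets a local Hadoop install supply them instead of the fat jar.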

> Do not use Hadoop FS for local files with avro-tools
> ----------------------------------------------------
>
>                 Key: AVRO-2616
>                 URL: https://issues.apache.org/jira/browse/AVRO-2616
>             Project: Apache Avro
>          Issue Type: Bug
>          Components: java
>            Reporter: Ryan Skraba
>            Priority: Minor
>
> The avro-tools jar includes the Hadoop dependencies inside the fat jar.  This 
> is probably so that the CLI can operate on files that are located on a 
> cluster (or at other URIs that have Hadoop FileSystem implementations).
> This is useful if the user is accessing an HDFS cluster running the same 
> Hadoop version as the bundled FileSystem, and *mostly* neutral if the user is 
> accessing other URIs, including the local filesystem.
> Hadoop doesn't currently officially support any version [after JDK 
> 8|https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Java+Versions] 
> (at time of writing).  We might want to bypass the Hadoop jars when accessing 
> the local filesystem, to avoid requiring this dependency when it is not 
> needed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
