I am away from my cluster right now. I tried doing a hadoop fs -ls
maprfs:// and that worked.   When I tried hadoop fs -ls hdfs:/// it failed
with a wrong-fs-type error.  Given that error I didn't try it in the
mapred-site.  I will try it.  Still...why hard-code the filesystem
prefixes? I am curious how glusterfs would work, or other filesystems as
they pop up.
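The fetcher error quoted below suggests the fetcher matches the URI against a hard-coded list of schemes and treats anything unrecognized (like maprfs://) as a relative path. Here is a minimal sketch of that check, plus the environment-variable extension proposed further down the thread. The scheme list, the MESOS_HDFS_URI_SCHEMES variable name, and the fallthrough behavior are all assumptions for illustration, not the actual fetcher.cpp logic.

```python
import os

# Assumed hard-coded scheme list; the real list in fetcher.cpp may differ.
KNOWN_SCHEMES = ("hdfs://", "http://", "https://", "ftp://", "file://")

def classify_uri(uri, known=KNOWN_SCHEMES):
    """Return 'known' if the URI starts with a recognized scheme,
    otherwise fall through to relative-path handling (which is what
    appears to trigger the MESOS_FRAMEWORKS_HOME error below)."""
    if any(uri.startswith(scheme) for scheme in known):
        return "known"
    return "relative"

def classify_with_env(uri, known=KNOWN_SCHEMES):
    """Same check, extended with extra schemes read from a hypothetical
    environment variable, as suggested in the thread."""
    extra = tuple(
        s for s in os.environ.get("MESOS_HDFS_URI_SCHEMES", "").split(",") if s
    )
    return classify_uri(uri, known + extra)

print(classify_uri("hdfs:///mesos/pkg.tgz"))         # known
print(classify_uri("maprfs:///mesos/pkg.tgz"))       # relative -> error below
os.environ["MESOS_HDFS_URI_SCHEMES"] = "maprfs://,glusterfs://"
print(classify_with_env("maprfs:///mesos/pkg.tgz"))  # known
```

With an override like this, any HDFS-compatible filesystem could register its prefix without patching the fetcher.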
On Aug 15, 2014 5:04 PM, "Adam Bordelon" <[email protected]> wrote:

> Can't you just use the hdfs:// protocol for maprfs? That should work just
> fine.
>
>
> On Fri, Aug 15, 2014 at 2:50 PM, John Omernik <[email protected]> wrote:
>
>> Thanks all.
>>
>> I realized MapR has a workaround for me that I will try soon: I have the
>> MapR filesystem NFS-mounted on each node, i.e. I should be able to get the
>> tar from there.
>>
>> That said, perhaps someone with better coding skills than me could
>> provide an env variable where a user could supply the HDFS prefixes to
>> try. I know we did that with the Tachyon project and it works well for
>> other HDFS-compatible filesystem implementations; perhaps that would work
>> here? Hard-coding a pluggable system seems like a long-term issue that
>> will keep coming up.
>>  On Aug 15, 2014 4:02 PM, "Tim St Clair" <[email protected]> wrote:
>>
>>> The URI doesn't currently start with any of the known types (at least on
>>> first grok).
>>> You could redirect via a proxy that does the job for you.
>>>
>>> If you had some FUSE mount, that would work too.
>>>
>>> Cheers,
>>> Tim
>>>
>>> ------------------------------
>>>
>>> *From: *"John Omernik" <[email protected]>
>>> *To: *[email protected]
>>> *Sent: *Friday, August 15, 2014 3:55:02 PM
>>> *Subject: *Alternate HDFS Filesystems + Hadoop on Mesos
>>>
>>> I am on a wonderful journey trying to get Hadoop on Mesos working with
>>> MapR.   I feel like I am close, but when the slaves try to run the packaged
>>> Hadoop, I get the error below.  The odd thing is, I KNOW I got Spark
>>> running on Mesos pulling both data and the packages from MapRFS, so I am
>>> confused why there is an issue with fetcher.cpp here. Granted, when I
>>> got Spark working, it was on 0.19.0, and I am trying a "fresh" version from
>>> git (0.20.0?) that I just pulled today. I am not sure if that works, but
>>> when I have more time I will try Spark again.
>>>
>>> Any thoughts on this error? Thanks.
>>>
>>>
>>>
>>>
>>> Error:
>>>
>>>
>>>
>>>
>>> WARNING: Logging before InitGoogleLogging() is written to STDERR
>>> I0815 15:48:35.446071 20636 fetcher.cpp:76] Fetching URI 
>>> 'maprfs:///mesos/hadoop-0.20.2-mapr-4.0.0.tgz'
>>> E0815 15:48:35.446184 20636 fetcher.cpp:161] A relative path was passed for 
>>> the resource but the environment variable MESOS_FRAMEWORKS_HOME is not set. 
>>> Please either specify this config option or avoid using a relative path
>>> Failed to fetch: maprfs:///mesos/hadoop-0.20.2-mapr-4.0.0.tgz
>>> Failed to synchronize with slave (it's probably exited)
>>>
>>>
>>>
>>>
>>> --
>>> Cheers,
>>> Timothy St. Clair
>>> Red Hat Inc.
>>>
>>
>