Hi Brian,

This is inside my core-site.xml:

<configuration>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://ec2-75-101-210-65.compute-1.amazonaws.com/</value>
                <final>true</final>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/mnt</value>
                <description>A base for other temporary 
directories.</description>
        </property>
</configuration>

Do I need to give the port here? 
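From what I have read, the NameNode is supposed to fall back to a default port (8020) when none is given, but since you said the strings must match exactly, I assume making the port explicit would be the safer fix. Would something like this be right? (I dropped the trailing slash here on the assumption that it should match the fuse URI character for character.)

```xml
<property>
        <name>fs.default.name</name>
        <!-- port written out explicitly so it matches the URI given to fuse-dfs -->
        <value>hdfs://ec2-75-101-210-65.compute-1.amazonaws.com:8020</value>
        <final>true</final>
</property>
```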

This is inside my hdfs-site.xml:

<configuration>
        <property>
                <name>dfs.name.dir</name>
                <value>${hadoop.tmp.dir}/dfs/name</value>
                <final>true</final>
        </property>
        <property>
                <name>dfs.data.dir</name>
                <value>${hadoop.tmp.dir}/dfs/data</value>
        </property>
        <property>
                <name>fs.checkpoint.dir</name>
                <value>${hadoop.tmp.dir}/dfs/namesecondary</value>
                <final>true</final>
        </property>
</configuration>

These directories all exist:

# ls -l /mnt/dfs/
total 12
drwxr-xr-x 2 hadoop hadoop 4096 2010-04-23 05:08 data
drwxr-xr-x 4 hadoop hadoop 4096 2010-04-23 05:17 name
drwxr-xr-x 2 hadoop hadoop 4096 2010-04-23 05:08 namesecondary
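As a sanity check against the exact-string issue you mentioned, I compared the host:port part of the two URIs by hand. A small sketch of that comparison (the URI values are the ones from my config and mount command; I'm assuming the hdfs:// vs dfs:// scheme difference is allowed and only host:port has to be identical):

```shell
# URI from core-site.xml (fs.default.name) and URI passed to fuse_dfs_wrapper.sh
conf_uri="hdfs://ec2-75-101-210-65.compute-1.amazonaws.com:8020"
fuse_uri="dfs://ec2-75-101-210-65.compute-1.amazonaws.com:8020"

# Strip the scheme prefix from each; what remains must match exactly,
# including the ":8020" port.
conf_hostport="${conf_uri#hdfs://}"
fuse_hostport="${fuse_uri#dfs://}"

if [ "$conf_hostport" = "$fuse_hostport" ]; then
    echo "URIs match: $conf_hostport"
else
    echo "MISMATCH: '$conf_hostport' vs '$fuse_hostport'"
fi
```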

I don't have the config file hadoop-site.xml in /etc/...
In the Hadoop source directory there is a hadoop-site.xml, but it only
contains this notice:

<!-- DO NOT PUT ANY PROPERTY IN THIS FILE. INSTEAD USE -->
<!-- core-site.xml, mapred-site.xml OR hdfs-site.xml -->
<!-- This empty script is to avoid picking properties from  -->
<!-- conf/hadoop-site.xml This would be removed once support  -->
<!-- for hadoop-site.xml is removed.  -->

Best Regards,
   Christian 



On Friday, 23 April 2010, Brian Bockelman wrote:
> Hey Christian,
> 
> I've run into this before.
> 
> Make sure that the hostname/port you give to fuse is EXACTLY the same as 
> listed in hadoop-site.xml.
> 
> If these aren't the same text string (including the ":8020"), then you get 
> those sort of issues.
> 
> Brian
> 
> On Apr 22, 2010, at 5:00 AM, Christian Baun wrote:
> 
> > Dear All,
> > 
> > I want to test HDFS inside Amazon EC2.
> > 
> > Two Ubuntu instances are running inside EC2. 
> > One server is namenode and jobtracker. The other server is the datanode.
> > Cloudera (hadoop-0.20) is installed and running.
> > 
> > Now, I want to mount HDFS.
> > I tried to install contrib/fuse-dfs as described here:
> > http://wiki.apache.org/hadoop/MountableHDFS
> > 
> > The compilation worked via:
> > 
> > # ant compile-c++-libhdfs -Dlibhdfs=1
> > # ant package -Djava5.home=/usr/lib/jvm/java-1.5.0-sun-1.5.0.06/ 
> > -Dforrest.home=/home/ubuntu/apache-forrest-0.8/
> > # ant compile-contrib -Dlibhdfs=1 -Dfusedfs=1
> > 
> > But now, when I try to mount the filesystem:
> > 
> > # ./fuse_dfs_wrapper.sh 
> > dfs://ec2-75-101-210-65.compute-1.amazonaws.com:8020 /mnt/hdfs/ -d
> > port=8020,server=ec2-75-101-210-65.compute-1.amazonaws.com
> > fuse-dfs didn't recognize /mnt/hdfs/,-2
> > fuse-dfs ignoring option -d
> > FUSE library version: 2.8.1
> > nullpath_ok: 0
> > unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
> > INIT: 7.13
> > flags=0x0000007b
> > max_readahead=0x00020000
> >   INIT: 7.12
> >   flags=0x00000011
> >   max_readahead=0x00020000
> >   max_write=0x00020000
> >   unique: 1, success, outsize: 40
> > 
> > 
> > # ./fuse_dfs_wrapper.sh 
> > dfs://ec2-75-101-210-65.compute-1.amazonaws.com:8020 /mnt/hdfs/
> > port=8020,server=ec2-75-101-210-65.compute-1.amazonaws.com
> > fuse-dfs didn't recognize /mnt/hdfs/,-2
> > 
> > # ls /mnt/hdfs/
> > ls: reading directory /mnt/hdfs/: Input/output error
> > # ls /mnt/hdfs/
> > ls: cannot access /mnt/hdfs/o¢: No such file or directory
> > o???
> > # ls /mnt/hdfs/
> > ls: reading directory /mnt/hdfs/: Input/output error
> > # ls /mnt/hdfs/
> > ls: cannot access /mnt/hdfs/`á›Óÿ: No such file or directory
> > `?????
> > # ls /mnt/hdfs/
> > ls: reading directory /mnt/hdfs/: Input/output error
> > ...
> > 
> > 
> > What can I do at this point?
> > 
> > Thanks in advance
> >     Christian
> 
> 

