Can you share your core-site.xml here?
On Thu, Mar 5, 2015 at 4:32 PM, Alexandru Calin alexandrucali...@gmail.com
wrote:
No change at all. I've added them at the start and end of the CLASSPATH; either way it still writes the file on the local fs. I've also restarted Hadoop.
you can try:

for file in `hadoop classpath | tr ':' '\n' | sort | uniq`; do
    export CLASSPATH=$CLASSPATH:$file
done
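If your version of `hadoop classpath` supports it (the --glob flag only exists in newer 2.x releases, so treat that as an assumption), you can also let Hadoop expand the wildcard entries itself; this matters because libhdfs hands CLASSPATH to the embedded JVM without expanding wildcard entries:

# a sketch, assuming a Hadoop release where --glob is available
export CLASSPATH=$(hadoop classpath --glob)
# keep the config directory itself on the classpath as well
export CLASSPATH=/usr/local/hadoop/etc/hadoop:$CLASSPATH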
On Thu, Mar 5, 2015 at 4:48 PM, Alexandru Calin alexandrucali...@gmail.com
wrote:
This is how core-site.xml looks:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
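One quick way to confirm what the client actually resolves (a sketch; it assumes HADOOP_CONF_DIR, or the classpath, points at the directory holding this core-site.xml):

# should print hdfs://localhost:9000 if core-site.xml is being picked up
hdfs getconf -confKey fs.defaultFS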
Now I've also started YARN (just for the sake of trying anything); the config for mapred-site.xml and yarn-site.xml are those on the Apache website. A *jps* command shows:
11257 NodeManager
11129 ResourceManager
11815 Jps
10620 NameNode
10966 SecondaryNameNode
Wow, you are so right! It's on the local filesystem! Do I have to manually specify hdfs-site.xml and core-site.xml in the CLASSPATH variable? Like this:
CLASSPATH=$CLASSPATH:/usr/local/hadoop/etc/hadoop/core-site.xml
?
Yes, you should do it :)
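For reference, Java classpath entries are directories and jars rather than individual XML files, so a minimal sketch of what usually works (assuming the /usr/local/hadoop layout from this thread) is to add the configuration directory itself:

# the conf directory, not core-site.xml directly; a sketch, not verified here
export CLASSPATH=/usr/local/hadoop/etc/hadoop:$CLASSPATH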
You don't need to start YARN if you only want to write to HDFS using the C API, and you don't need to restart HDFS either.
You need to include core-site.xml as well. And I think you will find '/tmp/testfile.txt' on your local disk instead of on HDFS. If so, my guess is right: because you don't include core-site.xml, your filesystem schema is file:// by default, not hdfs://.
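A quick check to tell the two apart (a sketch, using the path from this thread):

# present if the write went to the local filesystem:
ls -l /tmp/testfile.txt
# present if it went to HDFS instead:
hdfs dfs -ls /tmp/testfile.txt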
DataNode was not starting due to this error : java.io.IOException:
Incompatible clusterIDs in /usr/local/hadoop/hadoop_store/hdfs/datanode:
namenode clusterID = CID-b788c93b-a1d7-4351-bd91-28fdd134e9ba; datanode
clusterID = CID-862f3fad-175e-442d-a06b-d65ac57d64b2
I can't imagine how this happened.
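A common way out (a sketch, assuming a disposable single-node setup whose block data can be thrown away) is to clear the DataNode's storage directory so it re-registers with the NameNode's clusterID on the next start:

# WARNING: destroys local block data; only for a throwaway single-node setup
stop-dfs.sh
rm -rf /usr/local/hadoop/hadoop_store/hdfs/datanode/*
start-dfs.sh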
After putting the CLASSPATH initialization in .bashrc it creates the file, but it has 0 size and I also get this warning:
file opened
Wrote 14 bytes
15/03/05 14:00:55 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/testfile.txt
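A truncated DataStreamer exception like this is consistent with the NameNode accepting the create while having no live DataNodes to place blocks on, which would match the DataNode missing from the jps output above; a hedged way to check:

# lists live and dead DataNodes; zero live nodes means writes leave 0-byte files
hdfs dfsadmin -report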
I am trying to run the basic libhdfs example. It compiles OK, actually runs OK and executes the whole program, but I cannot see the file on HDFS.
It is said here (http://hadoop.apache.org/docs/r1.2.1/libhdfs.html) that you have to include *the right configuration directory containing hdfs-site.xml* in your CLASSPATH.
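For anyone hitting the same thing, a minimal sketch of the environment setup before running the compiled example (paths are assumptions based on a typical /usr/local/hadoop install, and hdfs_write is a hypothetical name for the compiled libhdfs example binary):

# config directory first so core-site.xml and hdfs-site.xml are found
export CLASSPATH=/usr/local/hadoop/etc/hadoop:$(hadoop classpath --glob)
# let the runtime loader find libjvm.so and libhdfs.so; exact paths vary by JDK
export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/amd64/server:/usr/local/hadoop/lib/native
./hdfs_write /tmp/testfile.txt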