Hello,
From testing, it appears that detecting a failed NodeManager (NM) node takes quite some time (~5 minutes).
Is there a configurable setting for this? I need a more responsive cluster.
Thank you, Caesar.
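[A note in case it helps: if memory serves, the ResourceManager's NM liveness timeout governs this. A sketch for yarn-site.xml, assuming the stock property name; the default expiry is 600000 ms, so verify against your version's yarn-default.xml before relying on it:]

```xml
<property>
  <!-- How long the RM waits without a heartbeat before declaring an NM lost
       (default 600000 ms). Lowering it makes failure detection faster but
       more sensitive to transient heartbeat delays; 60000 is an assumption. -->
  <name>yarn.nm.liveness-monitor.expiry-interval-ms</name>
  <value>60000</value>
</property>
```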
Hi there,
The default timeout waiting to fetch a file is too long (it’s likely about 1
minute).
The default before the cluster decides a node is down and takes it out of service is even longer (about 5 minutes).
Where are all the timeout configuration settings that can be set (and documentation
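[Adding what I believe are the relevant knobs, hedged since I haven't re-checked every release: the NameNode declares a DataNode dead after roughly 2 × dfs.namenode.heartbeat.recheck-interval + 10 × dfs.heartbeat.interval. A small sketch of the arithmetic with the stock defaults:]

```python
# Sketch of the dead-node timeout with the stock hdfs-default.xml values;
# verify the property defaults against your own Hadoop version.
recheck_interval_ms = 300_000   # dfs.namenode.heartbeat.recheck-interval (ms)
heartbeat_interval_s = 3        # dfs.heartbeat.interval (s)

# Timeout formula used by the NameNode before marking a DataNode dead.
timeout_s = 2 * (recheck_interval_ms / 1000) + 10 * heartbeat_interval_s
print(timeout_s)  # 630.0 seconds, i.e. 10.5 minutes
```

Shrinking either property in hdfs-site.xml shrinks the window proportionally.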
Hi,
I got further in running TeraSort; there is just one error left, related to the Java engine, shown below.
How do I debug this further?
Thank you, Caesar.
Container: container_1448509237184_0002_01_55 on berry3_32841
===
Hi there,
I’m running Hadoop on a cluster of 4 Raspberry Pis for my project.
I am getting errors when benchmarking the cluster with TeraSort, as shown below.
I’ve adjusted the memory and task-count settings in mapred-site.xml as follows, but I am still getting the errors.
What can I do?
Thanks, Caesar.
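[For reference, the kind of mapred-site.xml trimming that tends to matter on low-memory nodes; the exact values below are guesses for a 1 GB Pi and need tuning for your boards:]

```xml
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>256</value>   <!-- small containers for 1 GB nodes; an assumption -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>256</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx204m</value>   <!-- JVM heap kept below the container limit -->
</property>
```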
Hi Farhan,
The file and directories are in Hadoop. To get a file out to Windows, you’d have to
issue “hdfs dfs -get hadoopfilename.txt windowsfilename.txt”.
Hope this helps!
Best, Caesar.
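[In case the full commands help: note the dash before the subcommand, and there is a matching -put for the other direction. A sketch, with the filenames obviously placeholders:]

```
# Copy a file out of HDFS to the local (e.g. Windows) filesystem
hdfs dfs -get hadoopfilename.txt windowsfilename.txt

# And the reverse: copy a local file into HDFS
hdfs dfs -put windowsfilename.txt hadoopfilename.txt
```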
From: Farhan Iqbal [mailto:farhan.iq...@gmail.com]
Sent: Friday, October 30, 2015 12:27 PM
To:
Hello,
I'm looking for a web based file manager, simple enough to upload and
download files (text and binary).
I would appreciate if you have pointers to one.
I'm running plain Hadoop HDFS (i.e., not CDH or Hortonworks, which I understand include one).
Thank you, Caesar.
Hi,
I'm trying to install the hadoop-hdfs-fuse package on an Ubuntu machine.
I've added the cloudera repository deb [arch=amd64]
http://archive.cloudera.com/cm5/ubuntu/trusty/amd64/cm trusty-cm5 contrib
Also done sudo apt-get update
When I do sudo apt-get install hadoop-hdfs-fuse I get
Hi,
I'm trying to understand the directory structure of Hadoop.
My understanding is HADOOP_TMP_DIR is the base directory for everything that
is not specified.
Why is that? (TMP seems to indicate a temporary directory, i.e. ephemeral.)
Where can I look up the directory variables that can
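[A hedged attempt at an answer: many storage properties simply default to paths under hadoop.tmp.dir, which is why it looks like the base for everything unspecified. From what I recall of hdfs-default.xml (worth double-checking for your version):]

```
dfs.namenode.name.dir        -> file://${hadoop.tmp.dir}/dfs/name
dfs.datanode.data.dir        -> file://${hadoop.tmp.dir}/dfs/data
dfs.namenode.checkpoint.dir  -> file://${hadoop.tmp.dir}/dfs/namesecondary
```

Setting any of these explicitly in hdfs-site.xml overrides the tmp-based default.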
Hello,
I'm running Hadoop 2.6.0 and while the cluster runs I've not seen a log
created/written in the expected place.
What could cause this? Is it writing to another place? What is the default
directory?
Thank you, Caesar.
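[If I remember correctly, the daemons write under $HADOOP_HOME/logs unless a log directory is set explicitly; a sketch, with the path being an assumption about a typical install:]

```
# hadoop-env.sh: where the HDFS daemons write their logs
# (default: $HADOOP_HOME/logs; YARN_LOG_DIR in yarn-env.sh is the YARN analogue)
export HADOOP_LOG_DIR=/var/log/hadoop
```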
From: Caesar Samsi
Reply-To: user@hadoop.apache.org
Date: Wednesday, June 3, 2015 at 8:07 PM
To: user@hadoop.apache.org
Subject: ack with firstBadLink as 192.168.1.12:50010?
I've just built my distributed cluster but am getting the following error
when I try to use HDFS.
I've traced it by telnetting to 192.168.1.12 50010: the connection just sits there
waiting and never completes.
If I telnet on that host using localhost (127.0.0.1), the telnet connection
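[A guess based on similar symptoms: if telnet succeeds on 127.0.0.1 but not on the LAN address, the datanode may be listening on loopback only. On Debian-flavoured systems (Raspberry Pi included), /etc/hosts often maps the hostname to 127.0.1.1, which can cause exactly that. A sketch of the fix, with the hostname "berry2" purely hypothetical:]

```
# /etc/hosts -- map the hostname to the real LAN address, not loopback
127.0.0.1    localhost
192.168.1.12 berry2        # instead of: 127.0.1.1 berry2
```

Restart the datanode after the change and re-test the telnet from another node.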
Hello,
How would I go about and confirm that a file has been distributed
successfully to all datanodes?
I would like to demonstrate this capability in a short briefing for my
colleagues.
Can I access the file from a datanode itself? (To date I can only access the
files from the master
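[One way I know of, hedged since the output format varies by version: hdfs fsck can list which datanodes hold each block's replicas, which demonstrates the distribution directly:]

```
hdfs fsck /path/to/file -files -blocks -locations
```

The -locations output shows the datanode address for every replica of every block.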
Hello,
I'm embarking on my first tutorial and would like to have tooltip help as I
hover my mouse pointer over Hadoop classes.
I've found the Hadoop docs and Javadoc URL and configured them but the
tooltips still don't show up.
Thank you, Caesar.
[I am still new to all of this, but hope I can help some]
Hello,
What I’ve noticed is that when the NameNode can’t write to a DataNode, it’s usually
because the datanode process isn’t running there.
I also noticed that there is a message indicating there is only 1 replica in
the system,
Hello,
hadoop.tmp.dir seems to be the root of all storage directories.
I'd like for data to be stored in separate locations.
Is there a list of directories and how they can be specified?
Thank you, Caesar.
(.tmp seems to indicate a temporary condition and yet it's used by HDFS,
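[Following up on my own question in case it helps others: the per-purpose directories can be set individually in hdfs-site.xml rather than inheriting from hadoop.tmp.dir. A sketch, with the paths being my own choices:]

```xml
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///srv/hadoop/name</value>   <!-- fsimage/edits; not temporary -->
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///srv/hadoop/data</value>   <!-- HDFS block storage -->
</property>
```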
Hello,
DFSClient#getServerDefaults returns null within 1 hour of system start
https://issues.apache.org/jira/browse/HDFS-8179
I'm coming across this within 5 minutes of start and have to use -skipTrash.
Is there a configuration option to always use -skipTrash and avoid the bug
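[I'm not aware of a client flag that forces -skipTrash globally, but if I understand trash correctly it is only active when fs.trash.interval is greater than 0, so setting it to 0 in core-site.xml disables trash entirely and deletes behave as if -skipTrash were always given. Worth verifying that this actually sidesteps the bug:]

```xml
<property>
  <!-- 0 disables trash; deletes are immediate, as with -skipTrash -->
  <name>fs.trash.interval</name>
  <value>0</value>
</property>
```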
Hello,
How would I verify that HDFS, MapReduce, and Yarn are working across the
cluster?
The purpose is at least twofold:
1. Make sure the computations are distributed
2. Ascertain the nodes are healthy (by an external
monitoring/management software).
Thank you, Caesar.
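[The checks I use myself, offered as one approach among many: the two reports cover node health, and a small example job exercises HDFS, YARN, and MapReduce end to end. The examples jar path is an assumption about a typical install:]

```
hdfs dfsadmin -report          # datanode liveness and capacity
yarn node -list                # NodeManager status per node
yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 4 100
```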