Hi Neeraj,
At first glance, this error doesn't appear to be Kerberos related. Can you
verify that 192.168.49.51 has the TaskTracker process running?
Regards,
Robert
On Tue, May 28, 2013 at 7:58 PM, Rahul Bhattacharjee <
rahul.rec@gmail.com> wrote:
> The error looks a little low level, network level.
Hi Geelong,
What are the ownership and permissions on /usr/hadoop/tmp/dfs/data? If I
recall correctly, it should be owned by hdfs:hadoop, where the owner has
full permissions and the group can read and execute. Do the datanode logs
under /var/log show any errors?
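For example, a quick way to check and, if needed, fix this (the path assumes
your data directory really is /usr/hadoop/tmp/dfs/data; adjust to your layout):

    ls -ld /usr/hadoop/tmp/dfs/data
    # expect something like: drwxr-x--- ... hdfs hadoop ...
    chown -R hdfs:hadoop /usr/hadoop/tmp/dfs/data
    chmod 750 /usr/hadoop/tmp/dfs/data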
Regards,
Robert
Hi Savitha,
On your nodes running tasktrackers and datanodes, there is a core-site.xml
file that specifies the namenode to contact via the property fs.default.name.
In addition, there is a mapred-site.xml file that specifies the address of
the jobtracker.
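As a minimal sketch (the host names and ports below are placeholders for your
own; mapred.job.tracker is the property name used in Hadoop 1.x):

    <!-- core-site.xml -->
    <property>
      <name>fs.default.name</name>
      <value>hdfs://namenode.example.com:8020</value>
    </property>

    <!-- mapred-site.xml -->
    <property>
      <name>mapred.job.tracker</name>
      <value>jobtracker.example.com:8021</value>
    </property>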
I hope that helps.
Regards,
Robert
On Thu, Feb 14, 2
Hi Dhanasekaran,
If you are asking whether it is recommended to use the decommissioning
feature to remove datanodes from your cluster, the answer is yes. As for how
to do it, there is some information at http://wiki.apache.org/hadoop/FAQ that
should help.
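As a rough sketch of the usual flow (the excludes file path below is just an
example; dfs.hosts.exclude needs to be configured on the namenode):

    <!-- hdfs-site.xml -->
    <property>
      <name>dfs.hosts.exclude</name>
      <value>/etc/hadoop/conf/excludes</value>
    </property>

    # add the datanode's hostname to the excludes file, then run:
    hadoop dfsadmin -refreshNodes

The datanode should then show as "Decommission in progress" on the namenode
web UI until its blocks have been re-replicated elsewhere.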
Regards,
Robert
Hi Vikas,
This shows the total amount of memory currently used by the Java virtual
machine for the namenode process versus the maximum amount of memory that
the JVM will attempt to use for that process.
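If it helps, those two numbers correspond to the standard JVM memory
counters; a minimal Java sketch of the same idea:

    public class HeapCheck {
        public static void main(String[] args) {
            // used = currently allocated heap minus free; max = the -Xmx ceiling
            Runtime rt = Runtime.getRuntime();
            long used = rt.totalMemory() - rt.freeMemory();
            long max = rt.maxMemory();
            System.out.println("Heap used: " + (used >> 20)
                    + " MB / max: " + (max >> 20) + " MB");
        }
    }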
I hope that helps.
Regards,
Robert
On Tue, Jan 22, 2013 at 3:54
Hi Shagun,
You may want to verify that all the services are running. Try running jps to
get a list of Java processes. If all is well on a single-node setup, you
should see the following processes:
NameNode
TaskTracker
SecondaryNameNode
JobTracker
DataNode
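The output is one line per process, a pid followed by the class name; the
pids below are just illustrative:

    4215 NameNode
    4398 DataNode
    4587 SecondaryNameNode
    4712 JobTracker
    4890 TaskTracker
    5013 Jps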
Hope that helps.
Regards,
Hi Ivan,
Here are a couple more suggestions provided by the wiki:
http://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo
Regards,
Robert
On Thu, Jan 10, 2013 at 5:33 AM, Ivan Tretyakov wrote:
> I also found following exception in datanode, I suppose it might give some
> clue:
>
> 2013-01-10
Hi Ivan,
Regarding the mapreduce.jobtracker.retiredjobs.cache.size property: the
jobtracker keeps information about a number of completed jobs in memory.
There's a time threshold for this, one day by default, as well as a cap on
the number of jobs per user. Once these limits are hit, the job
Hi James,
This should be fine, but I just wanted to add that you will also need to make
the change on the other nodes within the cluster, so they know how to
contact the filesystem.
Regards,
Robert
On Sun, Jan 6, 2013 at 12:18 AM, Jagat Singh wrote:
> Yes your data is safe.
>
>
>
>
> On Sun, Jan
Hi Krishna,
Do you simply want to schedule the job to run at specific times? If so, I
believe Oozie may be what you are looking for.
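As a rough sketch, an Oozie coordinator triggers a workflow on a fixed
schedule; the app name, dates, and workflow path below are placeholders:

    <coordinator-app name="daily-conversion" frequency="${coord:days(1)}"
                     start="2013-01-05T00:00Z" end="2014-01-05T00:00Z"
                     timezone="UTC" xmlns="uri:oozie:coordinator:0.2">
      <action>
        <workflow>
          <app-path>hdfs://namenode:8020/user/krishna/conversion-wf</app-path>
        </workflow>
      </action>
    </coordinator-app>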
Regards,
Robert
On Fri, Jan 4, 2013 at 6:40 AM, Krishna Rao wrote:
> Hi all,
>
> I have a java application jar that converts some files and writes directly
> into
Hi Jean,
Hadoop does not factor in the number of disks or directories, but rather the
allocated free space. It will do its best to spread the data evenly amongst
the nodes. For instance, let's say you had 3 datanodes (replication factor 1)
that have each allocated 10GB, but one of th
Hi Ryan,
Sorry, I overlooked that you were running TeraGen, which doesn't execute a
shuffle, hence it isn't hitting the error. So my initial thought that the
issue is specific to the mapreduce program shouldn't be the case.
Regards,
Robert
On Wed, Dec 19, 2012 at 10:28 AM, Robert Molina wr
Hi Ryan,
Interesting that one mapreduce program works while the other doesn't, which
seems to indicate that the error is specific to the mapreduce program. What
happens if you try the wordcount example?
Regards,
Robert
On Thu, Dec 13, 2012 at 6:38 AM, Ryan Garvey wrote:
> at com.sun.net.ssl.internal.ssl.Al
Hi Mike,
Is your namenode up and running? Also, you can try checking the jobtracker
logs to see if they provide any information.
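For example (assuming a default Hadoop 1.x layout where logs land under
$HADOOP_HOME/logs; the user and hostname in the log file name will differ):

    jps | grep -E 'NameNode|JobTracker'
    tail -n 100 $HADOOP_HOME/logs/hadoop-hadoop-jobtracker-master.log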
Regards,
Robert
On Tue, Dec 4, 2012 at 10:39 AM, Michael Namaiandeh <
mnamaian...@healthcit.com> wrote:
> I started up Apache Hadoop version 1.0.4 and tried to submit
Hi Haitao,
To help isolate the problem, what happens if you run a different job? Also,
if you view the namenode web UI or the web UI of the specific datanode having
the issue, are there any indicators of it being down?
Regards,
Robert
On Tue, Dec 4, 2012 at 12:49 AM, panfei wrote:
> I noticed that you are using jd
Just to add to Alejandro's information regarding the wildcard support, here
is the reference to the fix:
https://issues.apache.org/jira/browse/HADOOP-6995
On Tue, Nov 13, 2012 at 11:26 AM, Alejandro Abdelnur wrote:
> Andy, Oleg,
>
> What versions of Oozie and Hadoop are you using?
>
> Some versio
Hi Amit,
There is a mention there of starting in the hadoop-20 parent path:
https://github.com/facebook/hadoop-20/wiki/Corona-Single-Node-Setup
Regards,
Rob
On Mon, Nov 12, 2012 at 8:01 AM, Amit Sela wrote:
> Hi everyone,
>
> Anyone knows if the new corona tools (Facebook just released as open
> s
For your reference, here is an explanation of the exception:
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/PleaseHoldException.html
HTH
-Rob
On Sun, Nov 11, 2012 at 2:11 AM, Charlie AI wrote:
> hi, 吴靖
> I have got this error before. Please stop hbase on your cluster first. And
> restar
Hi Patai,
Have you looked into verifying whether it is network related, perhaps by
checking the ping responses to that node?
On Thu, Nov 1, 2012 at 1:47 PM, Patai Sangbutsarakum <
silvianhad...@gmail.com> wrote:
> I have a check monitoring the page jobtracker:50030/jobtracker.jsp,
> and the check show
Hi James,
Since it seems intermittent, have you verified whether any maintenance-type
procedures are being done on the namenode machine or its related network?
On Sun, Oct 28, 2012 at 9:51 PM, Jianhui Zhang wrote:
> Hi folks,
>
> We've got this weird problem regularly on our NameNode (apa
Hi Yogesh,
For the tasktracker node that is having the issue, is the IP
address 10.209.135.24? What is localhost mapped to in that machine's
/etc/hosts file?
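For reference, a typical /etc/hosts on that machine would look something like
this (the hostname is a placeholder); a common pitfall is having the
machine's own hostname mapped to 127.0.0.1, which makes the daemons bind to
the loopback interface:

    127.0.0.1      localhost
    10.209.135.24  tasktracker1.example.com  tasktracker1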
On Fri, Oct 26, 2012 at 3:29 AM, wrote:
> Hi All,
>
> I am trying to run Hadoop cluster but TaskTracker is not running,
> I hav
Sadak wrote:
> you mean config files Robert, or some other file??
>
>
> On Tue, Oct 30, 2012 at 12:43 AM, Robert Molina
> wrote:
>
>> Hey Visioner,
>> Can you show what's in your slaves file?
>>
>>
>> On Thu, Oct 25, 2012 at 5:44 AM, Visioner S
Hey Visioner,
Can you show what's in your slaves file?
On Thu, Oct 25, 2012 at 5:44 AM, Visioner Sadak wrote:
> any hints friends, will i have to try this with a cluster set up?? with
> datanode installed on a different ip address
>
>
> On Tue, Oct 23, 2012 at 12:34 PM, Visioner Sadak wrote:
>
>
The property that you will need to set is dfs.datanode.data.dir. This
property goes in hdfs-site.xml.
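For example, to spread datanode storage across several disks, list the
directories comma-separated (the paths are illustrative; note that older 1.x
releases name this property dfs.data.dir):

    <!-- hdfs-site.xml -->
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>/disk1/hdfs/data,/disk2/hdfs/data</value>
    </property>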
On Mon, Oct 15, 2012 at 1:09 AM, Adrian Acosta Mitjans <
amitj...@estudiantes.uci.cu> wrote:
> hi, I want to know how I can configure hadoop if I want to use in the
> differents datanode
I believe not many folks have deployed it. Although some testing has been
done, it may not be stable. If you are really stuck on using that release,
you can check on the release notes and bugs filed for that version to see
if there are any potential show stoppers for you. As always, test as much
Here is the information on the WebHDFS REST call that should allow you to
upload a file:
http://hadoop.apache.org/docs/r1.0.3/webhdfs.html#CREATE
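In short, CREATE is a two-step call: the namenode answers with a 307 redirect
to a datanode, and you then PUT the file bytes to that location (the host,
port, and path below are placeholders):

    # step 1: ask the namenode where to write (no file data is sent yet)
    curl -i -X PUT "http://namenode:50070/webhdfs/v1/user/web/file.txt?op=CREATE"
    # step 2: upload the bytes to the datanode URL returned in the Location header
    curl -i -X PUT -T file.txt "<URL from the Location header>"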
HTH
On Fri, Oct 5, 2012 at 1:16 AM, Visioner Sadak wrote:
> Hey thanks bejoy and andy act my user just has a desktop web user(like we
> browsing web) so
the remaining servers after some of them
>> already calculated packages of progress 1.0. Even the cleanup phase in the
>> end was done, ALTHOUGH(!) the pig log didn't reflect the calculations of
>> the cluster. And since i found the file as output in hdfs i supposed the
oupInformation.java:1121)
> at org.apache.hadoop.mapred.Child.main(Child.java:249)
>
>
>
>
>
> On Mon, 1 Oct 2012 10:12:22 -0700, Robert Molina
> wrote:
>
>> Hi Bjorn,
>> Can you post the exception you are getting during the map phase?
>>
>
Hi Bjorn,
Can you post the exception you are getting during the map phase?
On Mon, Oct 1, 2012 at 9:11 AM, Björn-Elmar Macek wrote:
> Hi,
>
> I am kind of unsure where to post this problem, but I think it is more
> related to hadoop than to pig.
>
> By successfully executing a pig script I crea
Here's another link that may help:
http://wiki.apache.org/hadoop/NameNodeFailover
On Wed, Sep 26, 2012 at 11:14 AM, Serge Blazhiyevskyy <
serge.blazhiyevs...@nice.com> wrote:
> This is common procedure for cases when you need to promote secondary name
> node to become the primary one.
>
> Here i
Hi George,
I don't believe there is a setting to do what you are requesting, but the
following may help regarding replication:
https://issues.apache.org/jira/browse/HDFS-3814
HTH
On Thu, Sep 20, 2012 at 7:39 AM, George Kousiouris wrote:
>
> Hi all,
>
> I have noticed that sometimes the hd
Hi Artem,
At what point did you do the copy? Was the namenode still running? Do the
copies of the edits file and fsimage file match up with the originals (i.e.,
file size)?
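For instance, a quick way to compare the copies against the originals (the
paths assume a default dfs.name.dir layout; adjust to yours):

    ls -l /data/dfs/name/current/fsimage /data/dfs/name/current/edits
    ls -l /backup/name/current/fsimage /backup/name/current/edits
    # checksums are a stronger check than file size alone
    md5sum /data/dfs/name/current/fsimage /backup/name/current/fsimage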
-Robert
On Mon, Sep 17, 2012 at 2:38 PM, Artem Ervits wrote:
> Hello all,
>
>
>
> I am testing the Hadoop recovery as pe