Hi Arun,
I was running on a single-node cluster, so all my 100+ containers are on a
single node. And the problem went away when I increased YARN_HEAP_SIZE to
2GB.
Thanks,
Kishore
On Thu, Aug 1, 2013 at 5:01 AM, Arun C Murthy wrote:
> How many containers are you running per node?
>
> On Jul 25, 2
As I said before, it is a per-file property, and the config can be
bypassed by clients that do not read the configs, set a manual API
override, etc.
If you really want to define a hard maximum and catch such clients,
try setting dfs.replication.max to 2 at your NameNode.
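If dfs.replication.max is the route you take, a minimal hdfs-site.xml fragment on the NameNode might look like the following (the value 2 matches the cluster discussed in this thread; adjust the limit to your own setup):

```xml
<property>
  <name>dfs.replication.max</name>
  <value>2</value>
  <description>Hard upper bound on per-file replication; create or
  setrep requests above this value are rejected by the NameNode.
  </description>
</property>
```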
On Thu, Aug 1, 2013 at
But please note that the value of 'dfs.replication' on the cluster is
always 2, even though there are 3 datanodes. And I am pretty sure I did
not manually create any files with repl=3. So why were some HDFS files
created with repl=3 instead of repl=2?
2013/8/1 Harsh J
> The step (a) point
Hi,
I wanted to use the incrCounter API to generate auto-increment ids, but the
problem is that it doesn't return the incremented value. Does anyone know
why this API does not return the incremented value, and whether it would be
possible to change it to do so?
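For context, incrCounter is aggregate-only: increments from many tasks (including speculative re-runs) are summed by the framework after the fact, so a per-call return value would not be a globally unique number anyway. The return-the-new-value behaviour Manish is after is what java.util.concurrent.atomic.AtomicLong gives you within a single JVM; this is a contrast sketch in plain Java, not Hadoop API code:

```java
import java.util.concurrent.atomic.AtomicLong;

public class CounterContrast {
    public static void main(String[] args) {
        // AtomicLong hands back the incremented value on every call...
        AtomicLong ids = new AtomicLong(0);
        long first = ids.incrementAndGet();   // 1
        long second = ids.incrementAndGet();  // 2
        System.out.println(first + " " + second);

        // ...whereas a Hadoop counter increment returns nothing: the
        // per-task increments are only merged into a global total later,
        // so no single call can know the "current" cluster-wide value.
    }
}
```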
Thanks
Manish
How many containers are you running per node?
On Jul 25, 2013, at 5:21 AM, Krishna Kishore Bonagiri
wrote:
> Hi Devaraj,
>
> I used to run this application with the same number of containers
> successfully on the previous version, i.e. hadoop-2.0.4-alpha. Is it failing with
> the new version, b
If you want to write a MapReduce job, you need to have basic knowledge of core
Java. You can find many resources on the internet for that.
If you face any problems related to Hadoop, you can ask here for help.
Thanks
Devaraj k
From: jamal sasha [mailto:jamalsha...@gmail.com]
Sent: 31 July 201
Hi,
Thanks for responding.
How do I do that? (very new to Java)
There are just two words per line:
one is a word, the second is an integer.
Thanks
On Wed, Jul 31, 2013 at 11:20 AM, Devaraj k wrote:
> There seems to be a problem in the mapper logic. You need to have the
> input according to your c
There seems to be a problem in the mapper logic. You need to make the input
match your code, or update the code to handle cases like having an odd
number of words in a line.
Before getting the element a second time, you need to check whether the
tokenizer has more elements or not. If you
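A minimal sketch of the guard Devaraj describes, using plain java.util.StringTokenizer (the class and method names here are illustrative, not from the original job): check hasMoreTokens() before each nextToken() call, so a malformed line yields null instead of the NoSuchElementException in the trace below.

```java
import java.util.StringTokenizer;

public class TokenGuard {
    // Parse a "word integer" line; return null for a malformed line
    // (empty, or an odd number of tokens) instead of throwing
    // NoSuchElementException from nextToken().
    static String[] parsePair(String line) {
        StringTokenizer tok = new StringTokenizer(line);
        if (!tok.hasMoreTokens()) return null;   // empty line
        String word = tok.nextToken();
        if (!tok.hasMoreTokens()) return null;   // the integer is missing
        String count = tok.nextToken();
        return new String[] { word, count };
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(parsePair("apple 3")));
        System.out.println(parsePair("apple"));  // malformed: prints null
    }
}
```

In a mapper you would simply `return` (skipping the bad record) or increment a "bad records" counter when parsePair-style parsing fails, rather than letting the task attempt die.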
Hi,
I am getting this error:
13/07/31 09:29:41 INFO mapred.JobClient: Task Id :
attempt_201307102216_0270_m_02_2, Status : FAILED
java.util.NoSuchElementException
at java.util.StringTokenizer.nextToken(StringTokenizer.java:332)
at java.util.StringTokenizer.nextElement(StringTokenizer.java:39
Step (a) points to both your problem and its solution. You have files
being created with repl=3 on a 2-DN cluster, which will prevent
decommission. This is not a bug.
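To locate and fix such files, something along these lines should work (the path is illustrative; `fsck -files -blocks` prints the replication of each block, and `fs -setrep` is the standard way to lower the replication of existing files):

```sh
# List blocks whose replication target is 3
hadoop fsck / -files -blocks | grep 'repl=3'

# Lower replication to 2 for everything under a path, recursively
hadoop fs -setrep -R 2 /
```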
On Wed, Jul 31, 2013 at 12:09 PM, sam liu wrote:
> I opened a jira for tracking this issue:
> https://issues.apache.org/jira/browse
Hi,
Please give me a solution:
HTTP ERROR 403
Problem accessing /cmf/process/146/logs. Reason:
Server returned HTTP response code: 500 for URL:
http://venkat.ops.cloudwick.com:9000/process/146-SolrInit/files/logs/stderr.log
The server declined access to the page or resource.
Do we have to crea
Hi
I think it is important to make clear how the replica went missing.
Here is a scenario: the disk on your datanode broke down, or the
replica was simply deleted, so the append failed.
Can you get similar logs from your cluster?
Sent from my iPhone
On 2013-7-31, at 15:01, Jitendra Yadav
Hi,
I think there is a block synchronization issue in your HDFS cluster.
Frankly, I haven't faced this issue myself yet.
I believe you need to refresh your namenode fsimage to bring it up to date
with your datanodes.
Thanks.
On Wed, Jul 31, 2013 at 6:16 AM, ch huang wrote:
> thanks for reply, i the b