Hi,

Thanks.

Inode usage is 100% on the disk mounted at /var/local/hadoop 
(it is not a temp directory, but Hadoop's working/cache directory).  This happens 
when we run an aggregation query in Hive.  It looks like the Hive query (map-reduce) 
creates many small files.

How can we control this? What are those files?
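If the small files are Hive's own query outputs, Hive can be told to merge them at the end of a job. A hedged sketch, assuming a Hive version with the classic merge options (verify the exact names with SET -v on your installation):

```shell
# Sketch (config fragment, not tested here): ask Hive to merge small output files.
hive -e "
SET hive.merge.mapfiles=true;       -- merge outputs of map-only jobs
SET hive.merge.mapredfiles=true;    -- merge outputs of map-reduce jobs
SET hive.merge.size.per.task=256000000;     -- target merged file size, in bytes
SET hive.merge.smallfiles.avgsize=16000000; -- merge when avg output file is below this
-- ...the aggregation query follows...
"
```

Note that the stack trace further down points at the TaskTracker's local job cache, not HDFS output, so leftover job-localization files may also be contributing.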

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"


From: ext Nitin Pawar [mailto:[email protected]]
Sent: Thursday, May 22, 2014 3:07 PM
To: [email protected]
Subject: Re: HDFS Quota Error

That means some process (or several) is creating tons of small files 
and leaving them behind when its work completes.

To free up inode space you will need to delete the files.
I do not think there is any other way.

Check your /tmp folder: how many files are there, and is any process 
leaving tmp files behind?
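The hunt suggested above can be sketched in shell; a minimal version, assuming the paths from this thread (adjust `target` for your mount):

```shell
# Sketch: find which directories are consuming inodes.

# Per-filesystem inode usage -- the IUse% column shows the exhausted disk:
df -i

# Count entries per top-level subdirectory of the suspect mount, largest first:
target=${TARGET_DIR:-/var/local/hadoop}
for d in "$target"/*/; do
  [ -d "$d" ] || continue
  # find lists the directory itself plus everything beneath it
  printf '%s %s\n' "$(find "$d" | wc -l)" "$d"
done | sort -rn | head
```

Every file, directory, and symlink costs one inode, so the subdirectory with the biggest count is the one to clean.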

On Thu, May 22, 2014 at 2:54 PM, Natarajan, Prabakaran 1. (NSN - IN/Bangalore) 
<[email protected]<mailto:[email protected]>> wrote:
Just noticed that inode usage is 100%.  Any better solution for this?



From: ext Natarajan, Prabakaran 1. (NSN - IN/Bangalore) 
[mailto:[email protected]<mailto:[email protected]>]
Sent: Thursday, May 22, 2014 2:37 PM
To: [email protected]<mailto:[email protected]>
Subject: RE: HDFS Quota Error

Thanks for your reply.  But every datanode disk has more than 50% free space.



From: ext Nitin Pawar [mailto:[email protected]]
Sent: Thursday, May 22, 2014 12:56 PM
To: [email protected]<mailto:[email protected]>
Subject: Re: HDFS Quota Error

"No space left on device" can also mean that one of your datanode disks is full.

Can you check the disk usage of each datanode?

Maybe you will need to rebalance your replicas so that some space is freed 
on that datanode.
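The two checks suggested above map onto the Hadoop 1.x CLI already used elsewhere in this thread; a sketch (cluster commands, not runnable here -- verify flags against your version):

```shell
# Per-datanode capacity, usage, and remaining-space report:
hadoop dfsadmin -report

# Move blocks between datanodes until each is within 10% of the cluster average:
hadoop balancer -threshold 10
```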

On Thu, May 22, 2014 at 12:34 PM, Natarajan, Prabakaran 1. (NSN - IN/Bangalore) 
<[email protected]<mailto:[email protected]>> wrote:
Hi

When I run a query in Hive, I get the exception below.  I noticed the error “No 
space left on device”.

Then I ran “hadoop fs -count -q /var/local/hadoop”, which gave the output below:

none  inf  none  inf  69  275  288034318 hdfs://nnode:54310/var/local/hadoop

Why am I getting none and inf for the space quota and remaining space quota?  Does 
this mean unlimited space, or is there any space left?


I tried “hadoop dfsadmin -setSpaceQuota 100G /var/local/hadoop”  --  not sure 
whether 100G is correct.  How much do I need to set, and how do I calculate it?

After setting 100G, I get the output below for “hadoop fs -count -q 
/var/local/hadoop”:

none  inf  107374182400  104408308039  73  286  297777777 hdfs://nnode:54310/var/local/hadoop


I have to wait and see whether 100G is going to give me an exception or not…
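For the "how to calculate this" question, the numbers in the -count -q output above fit together as follows (the replication note reflects HDFS space-quota semantics; worth verifying for your version):

```shell
# Sketch: the arithmetic behind the quota numbers above.
# HDFS parses 100G as binary gigabytes:
quota=$((100 * 1024 * 1024 * 1024))
echo "$quota"      # 107374182400 -- the SPACE_QUOTA column

# Space used so far = quota minus remaining quota (REM_SPACE_QUOTA column):
remaining=104408308039
used=$((quota - remaining))
echo "$used"       # 2965874361, i.e. roughly 2.8 GiB already charged

# Note: the space quota charges *replicated* bytes, so with replication
# factor 3 a 100G quota holds only about 33G of user data.
```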


--------------


2014-05-22 10:48:43,585 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.io.FileNotFoundException: /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class (No space left on device)
2014-05-22 10:48:43,585 WARN org.apache.hadoop.mapred.TaskTracker: Error initializing attempt_201405211712_0625_r_000001_2:
java.io.FileNotFoundException: /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class (No space left on device)
        at java.io.FileOutputStream.open(Native Method)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
        at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
        at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
        at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
        at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
        at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
        at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
        at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
        at java.lang.Thread.run(Thread.java:744)









--
Nitin Pawar



