Hi
When I run a query in Hive, I get the exception below. I noticed the error
"No space left on device".
Then I ran hadoop fs -count -q /var/local/hadoop, which gave the output below:
none  inf  none  inf  69  275  288034318
Hi,
Please try this out:
To start Hive on a particular port:
[training@localhost hive]$ hive --service hiveserver
Starting Hive Thrift Server
Hive history
file=/tmp/training/hive_job_log_training_201405212357_1347630673.txt
OK
Sample Java code to connect to Hive ---
import
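The sample was cut off here; a minimal sketch of such a client follows, assuming the HiveServer1 JDBC driver that ships as hive-jdbc and the default Thrift port 10000. The host, database, and query are placeholders, not anything from the original mail.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcClient {
    public static void main(String[] args) throws Exception {
        // Register the HiveServer1 driver (hive-jdbc must be on the classpath)
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        // HiveServer1 ignores user/password; 10000 is the default Thrift port
        Connection con = DriverManager.getConnection(
                "jdbc:hive://localhost:10000/default", "", "");
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("SHOW TABLES");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        con.close();
    }
}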
"No space left on device" can also mean that one of your datanode disks is
full.
Can you check the disk usage of each datanode?
You may need to rebalance your replication so that some space is
freed on this datanode.
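A sketch of both steps, assuming the Hadoop 1.x-style CLI of that era; on Hadoop 2 the same commands are available as hdfs dfsadmin and hdfs balancer:

hadoop dfsadmin -report        # prints DFS Used% per datanode
hadoop balancer -threshold 10  # move blocks until each node is within 10% of the cluster average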
On Thu, May 22, 2014 at 12:34 PM, Natarajan, Prabakaran 1. (NSN -
Maybe you are out of space on a local disk? That location [1] looks like
the local dir where MR places some intermediate files. Can you check the
output of df -h in a shell?
[1] /var/local/hadoop/cache/mapred/local
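It is worth checking inodes alongside bytes here: "No space left on device" is also raised when the filesystem is out of inodes even though df -h shows free space. A sketch, using the path from [1]:

df -h /var/local/hadoop   # byte usage of the filesystem holding the MR local dir
df -i /var/local/hadoop   # inode usage; IUse% at 100% produces the same error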
On 22/05/14 09:04, Natarajan, Prabakaran 1. (NSN - IN/Bangalore) wrote:
Hi
Thanks for your reply. We have more than 50% disk space.
Just FYI: this is not a physical machine; it's a VMware virtual machine.
Thanks and Regards
Prabakaran.N aka NP
nsn, Bangalore
From: ext Aitor Perez Cedres
Thanks for your reply. But all the datanode disks have more than 50% free space.
Thanks and Regards
Prabakaran.N aka NP
nsn, Bangalore
From: ext Nitin Pawar [mailto:nitinpawar...@gmail.com]
Sent: Thursday, May 22, 2014 12:56 PM
To:
Just noticed that inode usage is 100%. Any better solutions to this?
Thanks and Regards
Prabakaran.N aka NP
nsn, Bangalore
From: ext Natarajan, Prabakaran 1. (NSN - IN/Bangalore)
[mailto:prabakaran.1.natara...@nsn.com]
Sent: Thursday,
That means one or more processes are creating tons of small
files and leaving them there when the work completes.
To free up inodes you will need to delete the files.
I do not think there is any other way.
Check how many files are in your /tmp folder and whether any process is
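One way to see where the inodes went is to count entries per directory; a rough shell sketch, with the path chosen only as an example:

for d in /var/local/hadoop/*; do
  echo "$(find "$d" | wc -l) $d"    # entry count, then directory name
done | sort -rn | head              # biggest inode consumers first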
Hi,
Thanks.
Inode usage is 100% on the disk mounted at the directory /var/local/hadoop
(it's not temp, but Hadoop's working/cache directory). This happens when we
run an aggregation query in Hive. It looks like the Hive query (map-reduce)
creates many small files.
How to control this? What are those
Your table's file format, the table definition, and the kind of query you
run on that data decide how many files need to be created. These files are
created as temporary output from the maps until the reducers consume them.
You can control how many files Hive's job creates.
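For example, Hive's small-file merge settings can be toggled per session. The property names are as of Hive 0.x and the values shown are the usual defaults, so treat this as a sketch rather than a recommendation:

SET hive.merge.mapfiles=true;               -- merge small files from map-only jobs
SET hive.merge.mapredfiles=true;            -- merge small files from map-reduce jobs
SET hive.merge.size.per.task=256000000;     -- target size of a merged file, in bytes
SET hive.merge.smallfiles.avgsize=16000000; -- merge when average output file is below this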
The problem seems to be with Java 7; install Java 6 and retry.
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Wed, May 21, 2014 at 6:34 PM, Faisal Rabbani
On Thursday, May 22, 2014 10:17:42 PM Sylvain Gault wrote:
Hello,
I'm new to this mailing list, so forgive me if I don't do everything
right.
I didn't know whether I should ask on this mailing list or on
mapreduce-dev or on yarn-dev. So I'll just start there. ^^
Short story: I'm
Hi
Can you help me solve this problem please, if you have solved it?
Best regards
Shlash
I only talk about Hadoop because it is the de facto implementation of
MapReduce. But for the remainder of my thesis, I took a more general
approach and implemented my algorithms in a custom MapReduce
implementation.
I learned yesterday about the existence of YARN. :D And I definitely
can't not talk
hi, mailing list:
I set YARN_NODEMANAGER_HEAPSIZE=15000, so the NM runs in a 15G JVM, but
why does the YARN web UI show only 8GB under Active Nodes - Mem Avail?
hi,
In addition to that, you need to change the property
yarn.nodemanager.resource.memory-mb in yarn-site.xml to make the NM report the available memory.
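A minimal yarn-site.xml sketch, assuming you want the NM to offer roughly 15GB to containers; the value is illustrative:

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>15360</value>
</property>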
On May 22, 2014 7:50 PM, ch huang justlo...@gmail.com wrote:
hi, mailing list:
I set YARN_NODEMANAGER_HEAPSIZE=15000, so the NM runs in a 15G
I installed the JDK in Cygwin. After replacing '\\' with '/', it still failed.
Even after I reinstalled protobuf in Cygwin, I still failed and met the same
exception...
I am confused about why I do not encounter this exception when running 'protoc
--version' directly in the shell, but always encounter the following
On Thu, May 22, 2014 at 04:47:28PM -0400, Marcos Ortiz wrote:
On Thursday, May 22, 2014 10:17:42 PM Sylvain Gault wrote:
Hello,
I'm new to this mailing list, so forgive me if I don't do everything
right.
I didn't know whether I should ask on this mailing list or on
mapreduce-dev or
Not in addition to that. You should only use the memory-mb configuration.
Giving 15GB to the NodeManager itself will eat into the total memory available
for containers.
Vinod
On May 22, 2014, at 8:25 PM, Tsuyoshi OZAWA ozawa.tsuyo...@gmail.com wrote:
hi,
In addition to that, you need to change
Thanks. The job was stuck be
On Wed, May 21, 2014 at 11:10 PM, Sebastian Gäde s116...@hft-leipzig.dewrote:
Hi,
I remember having a similar issue. My job was demanding more memory
than was available in the cluster; that's why it was waiting forever.
Could you check the
Thanks Sebastian. The job was stuck due to memory issues. I found the
link below very useful for configuring YARN:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html
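The gist of that guide is sizing containers against the node's memory-mb. A sketch of the relevant properties; the values are illustrative for an 8GB node, with each JVM heap set to roughly 80% of its container:

In yarn-site.xml:
<property><name>yarn.nodemanager.resource.memory-mb</name><value>8192</value></property>
<property><name>yarn.scheduler.minimum-allocation-mb</name><value>1024</value></property>

In mapred-site.xml:
<property><name>mapreduce.map.memory.mb</name><value>1024</value></property>
<property><name>mapreduce.map.java.opts</name><value>-Xmx819m</value></property>
<property><name>mapreduce.reduce.memory.mb</name><value>2048</value></property>
<property><name>mapreduce.reduce.java.opts</name><value>-Xmx1638m</value></property>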
Regards,
-Rahul Singh
On Fri, May 23, 2014 at 9:59 AM,
Thank you for the point, Vinod. You're right.
Thanks, Tsuyoshi
On May 22, 2014 9:26 PM, Vinod Kumar Vavilapalli vino...@hortonworks.com
wrote:
Not in addition to that. You should only use the memory-mb
configuration. Giving 15GB to the NodeManager itself will eat into the total
memory available
hi, mailing list:
I want to know if this option still causes a limitation in YARN?