RE: yarn does not allocate enough tasks/containers to my available node

2015-11-24 Thread Bikas Saha
Which scheduler is being used? Capacity/Fair/Something else?

Re: Container errors running Terasort

2015-11-24 Thread Namikaze Minato
Have a look at the logs for your attempt_1448325816071_0002_m_03_0. Regards, LLoyd

Re: Container errors running Terasort

2015-11-24 Thread Tsuyoshi Ozawa
Hi, to get the entire log, the yarn logs command can help you: yarn logs -applicationId application_1448325816071_0002 Thanks, - Tsuyoshi On Tue, Nov 24, 2015 at 7:43 PM, Namikaze Minato wrote: > Have a look at the logs for your attempt_1448325816071_0002_m_03_0.

Custom AWS S3 endpoint with path-style access (cf. [HDFS-8727])

2015-11-24 Thread Eike von Seggern
Hello, I'm using fake-s3 for testing "s3a://"-backed storage locally. This requires path-style access, which cannot be enabled via configuration. I'm aware of [HDFS-8727], stating that setting a custom endpoint switches to path-style access automatically. However, this is not working for me.
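For reference, later Hadoop releases (2.8+, via HADOOP-12963) added an explicit path-style switch to S3A; a sketch of the relevant core-site.xml entries, assuming a local fake-s3 server on port 4567 (an example port, adjust to your setup):

```xml
<!-- Point S3A at the local fake-s3 endpoint -->
<property>
  <name>fs.s3a.endpoint</name>
  <value>http://127.0.0.1:4567</value>
</property>
<!-- Force path-style requests; this property only exists in Hadoop 2.8+ -->
<property>
  <name>fs.s3a.path.style.access</name>
  <value>true</value>
</property>
```

On 2015-era releases without this property, the automatic switch described in the JIRA is the only mechanism available.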

Decommission a DataNode from a running Hadoop Cluster

2015-11-24 Thread Oussama Jilal
Hello, I need to decommission a datanode from a running cluster. My problem is that the "dfs.hosts.exclude" property was not set in either "hdfs-site.xml" or "core-site.xml", and adding it requires an HDFS restart (which I can't do). How do I decommission that datanode?

Re: Decommission a DataNode from a running Hadoop Cluster

2015-11-24 Thread Oussama Jilal
Hello, Seems like I was updating the "hdfs-site.xml" on the datanode I want to decommission instead of the namenode. I added the exclude property in the "hdfs-site.xml" on the namenode, executed the refresh command and it worked without restarting the cluster... it is now decommissioning. Please
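For reference, a sketch of the usual decommissioning setup on the NameNode (the excludes file path below is an example):

```xml
<!-- hdfs-site.xml on the NameNode -->
<property>
  <name>dfs.hosts.exclude</name>
  <value>/etc/hadoop/conf/dfs.exclude</value>
</property>
```

With the property in place, list the datanode's hostname in that file and run `hdfs dfsadmin -refreshNodes` on the NameNode; as noted above, no cluster restart is needed once the property is set.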

Re: Unable to connect to beeline after kerberos is installed

2015-11-24 Thread Kumar Jayapal
Hello Arpan and Neeraj, Thanks for the response. I have checked the settings in Hive and Sentry on my production servers; they are exactly the same. It works fine in Prod but not in my test environment. Is there anything else I am missing? Thanks Jay On Tue, Nov 24, 2015 at

HCatStorer error

2015-11-24 Thread Raj hadoop
We are facing the below-mentioned error when storing a dataset using HCatStorer. Can someone please help us? STORE F INTO 'default.CONTENT_SVC_USED' using org.apache.hive.hcatalog.pig.HCatStorer(); ERROR hive.log - Got exception: java.net.URISyntaxException Malformed escape pair at index 9:
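For context, java.net.URI raises exactly this reason string when a path contains a '%' that is not followed by two hex digits; with HCatStorer this often comes from a partition value or column name containing an unescaped '%' or space. A minimal sketch reproducing the parser's behavior (the HDFS paths below are hypothetical):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class UriEscapeDemo {
    // Returns the URI parser's reason string, or null if the string parses cleanly.
    static String reasonFor(String s) {
        try {
            new URI(s);
            return null;
        } catch (URISyntaxException e) {
            return e.getReason();
        }
    }

    public static void main(String[] args) {
        // '%' not followed by two hex digits fails to parse
        System.out.println(reasonFor("hdfs://nn/data/pct=100%"));   // Malformed escape pair
        // percent-encoding the '%' as %25 makes the path valid
        System.out.println(reasonFor("hdfs://nn/data/pct=100%25")); // null
    }
}
```

Percent-encoding the offending character in the partition value (or avoiding '%' and spaces in it) is the usual way to sidestep this exception.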