RE: Container is running beyond physical memory limits

2015-10-13 Thread Mich Talebzadeh
-----Original Message----- From: Gopal Vijayaraghavan [mailto:go...@hortonworks.com] On Behalf Of Gopal Vijayaraghavan Sent: 13 October 2015 21:37 To: user@hive.apache.org Cc: Mich Talebzadeh Subject: Re: Container is running beyond physical memory limits > Now I am rather

RE: Container is running beyond physical memory limits

2015-10-13 Thread Mich Talebzadeh
From: hadoop hive [mailto:hadooph...@gmail.com] Sent: 13 October 2015 21:20 To: user@hive.apache.org Subject: Re: Container is running beyond physical memory limits http://hortonworks.com/blog/how-to-plan

Re: Container is running beyond physical memory limits

2015-10-13 Thread Gopal Vijayaraghavan
> Now I am rather confused about the following parameters (for example > mapreduce.reduce versus > mapreduce.map) and their correlation to each other They have no relationship with each other. They are meant for two different task types in MapReduce. In general you run fewer reducers than mapp
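Gopal's point is that the map and reduce settings are independent knobs for two different task types, which map onto separate properties in mapred-site.xml. A minimal illustrative fragment (the values are examples only, not recommendations for this cluster):

```xml
<!-- mapred-site.xml: mapper and reducer containers are sized independently.
     Example values; tune to your workload. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>   <!-- YARN container size for each map task -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>   <!-- YARN container size for each reduce task -->
</property>
```

Since a job typically runs fewer reducers than mappers, the reducer setting can be raised with less impact on total cluster memory.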

Re: Container is running beyond physical memory limits

2015-10-13 Thread hadoop hive

RE: Container is running beyond physical memory limits

2015-10-13 Thread Mich Talebzadeh
-----Original Message----- From: Gopal Vijayaraghavan [mailto:go...@hortonworks.com] On Behalf Of Gopal Vijayaraghavan Sent: 13 October 2015 20:55 To: user@hive.apache.org Cc: Mich Talebzadeh Subject: Re: Container is running beyond physical memory limits > is running beyond physical memory l

Re: Container is running beyond physical memory limits

2015-10-13 Thread Gopal Vijayaraghavan
> is running beyond physical memory limits. Current usage: 2.0 GB of 2 GB > physical memory used; 6.6 GB of 8 GB virtual memory used. Killing > container. You need to change the yarn.nodemanager.vmem-check-enabled=false on *every* machine on your cluster & restart all NodeManagers. The VMEM check
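Gopal's suggestion amounts to the following yarn-site.xml change on every NodeManager host, followed by a NodeManager restart (a sketch; the exact restart command depends on your distribution):

```xml
<!-- yarn-site.xml, on every NodeManager host -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>  <!-- stop YARN from killing containers on virtual memory usage -->
</property>
```

Disabling the check is a common recommendation because JVMs and glibc on Linux often reserve large amounts of virtual address space they never actually touch, so virtual memory is a poor proxy for real usage.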

Re: Container is running beyond physical memory limits

2015-10-13 Thread Muni Chada
Reduce yarn.nodemanager.vmem-pmem-ratio to 2.1 or lower. On Tue, Oct 13, 2015 at 2:32 PM, hadoop hive wrote: > > > mapreduce.reduce.memory.mb > > 4096 > > > > change this to 8 G > > > On Wed, Oct 14, 2015 at 12:52 AM, Ranjana Rajendran < > ranjana.rajend...@gmail.com> wrote: > >> Here is Alti
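Muni's alternative keeps the virtual-memory check enabled but tunes the allowed ratio of virtual to physical memory per container (2.1 is the YARN default for this property):

```xml
<!-- yarn-site.xml: virtual memory allowed per unit of physical container memory -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>
```

Note that in this thread the container was killed for exceeding its *physical* limit (2.0 GB of 2 GB), so adjusting the vmem ratio alone may not resolve the failure without also raising the container size.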

Re: Container is running beyond physical memory limits

2015-10-13 Thread hadoop hive
mapreduce.reduce.memory.mb 4096 change this to 8 G On Wed, Oct 14, 2015 at 12:52 AM, Ranjana Rajendran < ranjana.rajend...@gmail.com> wrote: > Here is Altiscale's documentation about the topic. Do let me know if you > have any more questions. > > http://documentation.altiscale.com/heapsize
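The advice above, raising the reducer container from 4096 MB to 8 GB, usually needs a matching JVM heap setting, since the YARN container limit and the task's Java heap are configured separately. A sketch (keeping the heap roughly 80% of the container is common guidance, not a hard rule):

```xml
<!-- mapred-site.xml: container limit and JVM heap must be set together -->
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>8192</value>   <!-- YARN container limit for each reducer -->
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx6554m</value>  <!-- JVM heap kept below the container limit -->
</property>
```

If -Xmx is left near the container limit, JVM overhead (native memory, thread stacks, metaspace) can still push the process over the physical limit and trigger the same kill.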

Re: Container is running beyond physical memory limits

2015-10-13 Thread Ranjana Rajendran
Here is Altiscale's documentation about the topic. Do let me know if you have any more questions. http://documentation.altiscale.com/heapsize-for-mappers-and-reducers On Tue, Oct 13, 2015 at 9:31 AM, Mich Talebzadeh wrote: > Hi, > > > > I have been having some issues with loading data into hive

Container is running beyond physical memory limits

2015-10-13 Thread Mich Talebzadeh
Hi, I have been having some issues with loading data into hive from one table to another for 1,767,886 rows. I was getting the following error Task with the most failures(4): - Task ID: task_1444731612741_0001_r_00 URL: http://0.0.0.0:8088/taskdetails.jsp?jobid=job_14447