-Original Message-
From: Gopal Vijayaraghavan [mailto:go...@hortonworks.com] On Behalf Of Gopal
Vijayaraghavan
Sent: 13 October 2015 21:37
To: user@hive.apache.org
Cc: Mich Talebzadeh
Subject: Re: Container is running beyond physical memory limits
> Now I am rather confused about the following parameters (for example
> mapreduce.reduce versus mapreduce.map) and their correlation to each other.

They have no relationship with each other. They are meant for two
different task types in MapReduce. In general you run fewer reducers than
mappers.

From: hadoop hive [mailto:hadooph...@gmail.com]
Sent: 13 October 2015 21:20
To: user@hive.apache.org
Subject: Re: Container is running beyond physical memory limits

http://hortonworks.com/blog/how-to-plan
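For the map-side versus reduce-side settings discussed above, a minimal
mapred-site.xml sketch along these lines (the 4096/8192 values are only
placeholders for illustration, not recommendations from this thread) would be:

  <!-- mapred-site.xml: map and reduce container sizes are set independently -->
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>4096</value>   <!-- container size for map tasks -->
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>8192</value>   <!-- container size for reduce tasks -->
  </property>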
-Original Message-
From: Gopal Vijayaraghavan [mailto:go...@hortonworks.com] On Behalf Of Gopal
Vijayaraghavan
Sent: 13 October 2015 20:55
To: user@hive.apache.org
Cc: Mich Talebzadeh
Subject: Re: Container is running beyond physical memory limits
> is running beyond physical memory limits. Current usage: 2.0 GB of 2 GB
> physical memory used; 6.6 GB of 8 GB virtual memory used. Killing
> container.
You need to set yarn.nodemanager.vmem-check-enabled=false on
*every* machine on your cluster & restart all NodeManagers.
Reduce yarn.nodemanager.vmem-pmem-ratio to 2.1 or lower.
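A yarn-site.xml sketch of the two knobs mentioned above (illustrative only;
2.1 is the usual default for the ratio, and the file has to be changed on
every NodeManager before restarting them):

  <!-- yarn-site.xml, on every NodeManager -->
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>   <!-- stop killing containers on virtual-memory usage -->
  </property>
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>     <!-- only consulted while the vmem check is enabled -->
  </property>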
On Tue, Oct 13, 2015 at 2:32 PM, hadoop hive wrote:
>
> mapreduce.reduce.memory.mb
> 4096
>
> change this to 8 GB.
>
> On Wed, Oct 14, 2015 at 12:52 AM, Ranjana Rajendran <
> ranjana.rajend...@gmail.com> wrote:
Here is Altiscale's documentation about the topic. Do let me know if you
have any more questions.
http://documentation.altiscale.com/heapsize-for-mappers-and-reducers
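To tie the two suggestions together (the 8 GB reduce container above and the
mapper/reducer heap-size topic of the linked page), here is a rough
mapred-site.xml sketch; keeping the heap at roughly 80% of the container is a
common rule of thumb and an assumption here, not something stated in the thread:

  <!-- illustrative: reduce container raised to 8 GB with a matching JVM heap -->
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>8192</value>        <!-- container (physical memory) limit for reducers -->
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx6553m</value>   <!-- reducer heap kept below the container limit (~80%) -->
  </property>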
On Tue, Oct 13, 2015 at 9:31 AM, Mich Talebzadeh wrote:

Hi,

I have been having some issues with loading data into Hive from one table to
another for 1,767,886 rows. I was getting the following error:
Task with the most failures(4):
-
Task ID:
task_1444731612741_0001_r_00
URL:
http://0.0.0.0:8088/taskdetails.jsp?jobid=job_14447