I don't know what to say.
If it fails with OutOfMemory, then you have to assign more memory to it.

Also, a 2 GB VM for a Hadoop node is too small; the Hadoop ecosystem is
usually memory-intensive.
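
In your trace the query runs through the local runner (LocalJobRunner,
job_local...), so the map/reduce tasks execute as threads inside the Hive
client/HiveServer2 JVM itself, and the mapreduce.* memory settings you set do
not enlarge that heap. A minimal sketch of what "assign more memory" could
look like in that case; the file location and the 2048 value are illustrative,
not a recommendation for a 2 GB machine:

# $HIVE_HOME/conf/hive-env.sh (assumed location); restart HiveServer2 afterwards
# Heap size in MB for the JVM that the hive scripts start
# (on Hadoop 3.x the equivalent variable is HADOOP_HEAPSIZE_MAX)
export HADOOP_HEAPSIZE=2048
# or pass the JVM flag directly to client-side processes
export HADOOP_CLIENT_OPTS="-Xmx2048m ${HADOOP_CLIENT_OPTS}"

With only 2 GB of physical RAM there is little headroom for any of this, which
is why a bigger VM is the more realistic fix.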

Message from Bitfox <bit...@bitfox.top> on Tue, 29 March 2022 at 14:46:

> Yes, quite a small table with 10000 rows, for test purposes.
>
> Thanks
>
> On Tue, Mar 29, 2022 at 8:43 PM Pau Tallada <tall...@pic.es> wrote:
>
>> Hi,
>>
>> I think it depends a lot on the data volume you are trying to process.
>> Does it work with a smaller table?
>>
>> Message from Bitfox <bit...@bitfox.top> on Tue, 29 March 2022 at 14:39:
>>
>>> 0: jdbc:hive2://localhost:10000/default> set
>>> hive.tez.container.size=1024;
>>>
>>> No rows affected (0.027 seconds)
>>>
>>>
>>> 0: jdbc:hive2://localhost:10000/default> set hive.execution.engine;
>>>
>>> +---------------------------+
>>>
>>> |            set            |
>>>
>>> +---------------------------+
>>>
>>> | hive.execution.engine=mr  |
>>>
>>> +---------------------------+
>>>
>>> 1 row selected (0.048 seconds)
>>>
>>>
>>> 0: jdbc:hive2://localhost:10000/default> set
>>> mapreduce.map.memory.mb=1024;
>>>
>>> No rows affected (0.032 seconds)
>>>
>>> 0: jdbc:hive2://localhost:10000/default> set
>>> mapreduce.map.java.opts=-Xmx1024m;
>>>
>>> No rows affected (0.01 seconds)
>>>
>>> 0: jdbc:hive2://localhost:10000/default> set
>>> mapreduce.reduce.memory.mb=1024;
>>>
>>> No rows affected (0.014 seconds)
>>>
>>> 0: jdbc:hive2://localhost:10000/default> set
>>> mapreduce.reduce.java.opts=-Xmx1024m;
>>>
>>> No rows affected (0.015 seconds)
>>>
>>>
>>> 0: jdbc:hive2://localhost:10000/default> select job,count(*) as dd from
>>> ppl group by job limit 10;
>>>
>>> Error: Error while processing statement: FAILED: Execution Error, return
>>> code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
>>> (state=08S01,code=2)
>>>
>>>
>>>
>>>
>>> Sorry, my test VM has only 2 GB of RAM, so I set all the memory sizes
>>> above to 1 GB.
>>>
>>> But it still gets the same error.
>>>
>>>
>>>
>>> Please help. Thanks.
>>>
>>>
>>>
>>> On Tue, Mar 29, 2022 at 8:32 PM Pau Tallada <tall...@pic.es> wrote:
>>>
>>>> I assume you have to increase the container size (if using Tez/YARN).
>>>>
>>>> Message from Bitfox <bit...@bitfox.top> on Tue, 29 March 2022 at 14:30:
>>>>
>>>>> My Hive runs out of memory even for a small query:
>>>>>
>>>>> 2022-03-29T20:26:51,440  WARN [Thread-1329] mapred.LocalJobRunner:
>>>>> job_local300585280_0011
>>>>>
>>>>> java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
>>>>>
>>>>> at
>>>>> org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492)
>>>>> ~[hadoop-mapreduce-client-common-3.3.2.jar:?]
>>>>>
>>>>> at
>>>>> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552)
>>>>> ~[hadoop-mapreduce-client-common-3.3.2.jar:?]
>>>>>
>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>>>>
>>>>>
>>>>>
>>>>> hadoop-3.3.2
>>>>>
>>>>> hive-3.1.2
>>>>>
>>>>> java version "1.8.0_321"
>>>>>
>>>>>
>>>>>
>>>>> How can I fix this? Thanks.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>> --
>>>> ----------------------------------
>>>> Pau Tallada Crespí
>>>> Departament de Serveis
>>>> Port d'Informació Científica (PIC)
>>>> Tel: +34 93 170 2729
>>>> ----------------------------------
>>>>
>>>>
>>
>> --
>> ----------------------------------
>> Pau Tallada Crespí
>> Departament de Serveis
>> Port d'Informació Científica (PIC)
>> Tel: +34 93 170 2729
>> ----------------------------------
>>
>>
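
For the container-size route mentioned earlier in the thread: it only matters
once the query really runs on YARN (Tez or MR) instead of the local runner in
your trace, and it assumes Tez is actually installed. A rough sketch of the
knobs involved, with illustrative values rather than settings tuned for a
2 GB machine:

0: jdbc:hive2://localhost:10000/default> set hive.execution.engine=tez;
0: jdbc:hive2://localhost:10000/default> set hive.tez.container.size=1024;
0: jdbc:hive2://localhost:10000/default> set hive.tez.java.opts=-Xmx819m;

On the YARN side, yarn.nodemanager.resource.memory-mb and
yarn.scheduler.minimum-allocation-mb also have to leave room for at least one
such container, which is hard to guarantee with 2 GB of total RAM.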

-- 
----------------------------------
Pau Tallada Crespí
Departament de Serveis
Port d'Informació Científica (PIC)
Tel: +34 93 170 2729
----------------------------------
