From: Nitin Pawar [mailto:nitinpawar...@gmail.com]
Sent: 23 November 2012 08:55
To: user@hive.apache.org
Subject: Re: Creating Indexes again
Try increasing the ulimit on your Hadoop cluster, and also increase the memory
for both the map and reduce tasks by setting them in Hive:
set mapred.job.map.memory.mb=6000;
set mapred.job.reduce.memory.mb=4000;
you can change the values based on the hadoop cluster you have setup
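The two settings above size the task slots; the Java heap of the child JVMs is controlled separately. A hedged sketch of a fuller session (property names from the Hadoop 1.x / CDH4 era; the values are illustrative, not recommendations):

```sql
-- Session-level sketch; tune all values to your cluster.
SET mapred.job.map.memory.mb=6000;      -- slot size for map tasks
SET mapred.job.reduce.memory.mb=4000;   -- slot size for reduce tasks
-- The JVM heap itself is governed by the child options; without raising it,
-- an "Error: Java heap space" can persist even with larger slots:
SET mapred.child.java.opts=-Xmx2048m;
```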
On Fri, Nov 23, 2012 at 2:17
Hi,
I'm trying to create indexes in Hive, and I've switched
to using CDH-4. The creation of the index is failing and
it's pretty obvious that the reducers are running out of
heap space. When I use the web interface for the
"Hadoop reduce task list" I can find this entry:
Error: Java heap space
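For reference, the reduce-heavy step is the index rebuild. A minimal sketch of the commands involved, assuming a compact index on a hypothetical column `id` of the `score` table (the index name and column are illustrative):

```sql
-- Hypothetical example: the ALTER ... REBUILD launches the MapReduce job
-- whose reducers are running out of heap here.
CREATE INDEX score_idx ON TABLE score (id)
  AS 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'
  WITH DEFERRED REBUILD;
ALTER INDEX score_idx ON score REBUILD;
```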
>> I’m running everything on a single physical machine in pseudo-distributed
>> mode.
>>
>> Well, it certainly looks like the reducer is looking for a derby.jar,
>> although I must confess I don’t really understand why.
>
> Regards,
>
> Peter Marron
>
> *From:* Dean Wampler [mailto:dean.wamp...@thinkbiganalytics.com]
> *Sent:* 02 November 2012 14:03
> *To:* user@hive.apache.org
> *Subject:* Re: Creating Indexes
here it records Hive errors as opposed to Map/Reduce
errors?
Regards,
Peter Marron
From: Dean Wampler [mailto:dean.wamp...@thinkbiganalytics.com]
Sent: 02 November 2012 14:03
To: user@hive.apache.org
Subject: Re: Creating Indexes
Oh, I saw this line in your Hive output and just assumed you w
m go away? Is there anything else that I can try?
>
> Peter Marron
>
> *From:* Dean Wampler [mailto:dean.wamp...@thinkbiganalytics.com]
> *Sent:* 01 November 2012 13:02
> *To:* user@hive.apache.org
> *Subject:* Re: Creating Indexes
Peter Marron
From: Dean Wampler [mailto:dean.wamp...@thinkbiganalytics.com]
Sent: 01 November 2012 13:02
To: user@hive.apache.org
Subject: Re: Creating Indexes
It looks like you're using Derby with a real cluster, not just a single machine
in local or pseudo-distributed mode. I haven't tried this m
> on the client node .
> Regards
> Bejoy KS
>
> Sent from handheld, please excuse typos.
> --
> *From: * Dean Wampler
> *Date: *Thu, 1 Nov 2012 08:01:51 -0500
> *To: *
> *ReplyTo: * user@hive.apache.org
> *Subject: *Re: Creating Indexes
>
> Also I found a derby.log in my home directory which I have attached.
>
> Regards,
>
> Z
>
> *From:* Shreepadma Venugopalan [mailto:shreepa...@cloudera.com]
> *Sent:* 31 October 2012 21:58
ile that you might want to see.
>
> Thanks for your efforts.
>
> Peter Marron
>
> *From:* Shreepadma Venugopalan [mailto:shreepa...@cloudera.com]
> *Sent:* 31 October 2012 18:38
> *To:* user@hive.apache.org
> *Subject:*
Hi Peter,
Can you attach the execution logs? What is the exception that you see in
the execution logs?
Thanks,
Shreepadma
On Wed, Oct 31, 2012 at 10:42 AM, Peter Marron <
peter.mar...@trilliumsoftware.com> wrote:
Hi,
I am still having problems building my index.
In an attempt to find someone who can help me
I'll go through all the steps that I try.
1) First I load my data into hive.
hive> LOAD DATA INPATH 'E3/score.csv' OVERWRITE INTO TABLE score;
Loading data to table default.score
Deleted hdfs://
Dear Hive User List,
I am having a new issue with creating indexes when using the $ hive -service
metastore option.
When I do so from a client, I get a null pointer exception and cannot run
any other commands until I close and reopen the Hive CLI. The metastore log
states "String
I have a similar problem with a trunk build and a mysql metastore.
Doing: alter table IDXS modify column DEFERRED_REBUILD boolean not null;
Doesn't seem to fix it. Perhaps because mysql converts the boolean into
a "tinyint(1)"?
Is there an easy way to make it fail with an error instead of getti
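One way to confirm that diagnosis, as a sketch against the MySQL metastore (MySQL treats `BOOLEAN` as an alias for `TINYINT(1)`, so the `ALTER` above can appear to succeed while changing nothing):

```sql
-- Inspect the column type the metastore actually ended up with:
SHOW COLUMNS FROM IDXS LIKE 'DEFERRED_REBUILD';
-- MySQL reports tinyint(1) here even after `MODIFY COLUMN ... boolean`.
```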
Esteban,
Thanks for the quick reply! That resolved the issue.
The "IS_COMPRESSED" error is also in the HIVE-2176.3.patch.
Clint
From: Esteban Gutierrez [mailto:este...@cloudera.com]
Sent: Wednesday, June 22, 2011 3:14 PM
To: user@hive.apache.org
Subject: Re: Troubl
Clint,
sorry, I think I was too cryptic :)
The default schema creation script is setting the column type for
"DEFERRED_REBUILD"
as bit(1) instead of boolean in the "IDXS" table and the JDBC driver is
failing silently for that type mismatch. Also, it seems that "IS_COMPRESSED"
in one of the intern
Hi Clint,
Indeed this is a bug, "DEFERRED_REBUILD" should be boolean and not bit(1) in
"IDXS".
Regards,
Esteban.
--
Support Engineer, Cloudera.
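Based on the bit(1)-vs-boolean diagnosis above, a hedged sketch of a one-off repair in a PostgreSQL metastore (table and column names are from the messages above; the `USING` cast is an assumption about how the existing bit(1) data should map to boolean):

```sql
-- Hypothetical repair: convert the mistyped bit(1) column to boolean.
ALTER TABLE "IDXS"
  ALTER COLUMN "DEFERRED_REBUILD" TYPE boolean
  USING ("DEFERRED_REBUILD" = B'1');
```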
On Wed, Jun 22, 2011 at 11:25 AM, Clint Green wrote:
Dear Hive User List,
I am trying to build indexes on a hive 0.7.1 environment using postgresql as
the metastore, but it is failing silently.
The following command doesn't generate any errors:
hive> CREATE TABLE t (i INT);
OK
Time taken: 0.299 seconds
hive> CREATE INDEX i ON TABLE t (