(preferably with --direct, since the files I have to put
there are quite big)
- provide alternative solutions, since maybe I'm going a completely
wrong way
Thanks a million!
David Morel
0.8.1.
Thanks a lot!
David Morel
it...
thanks!
David
Mark
On Fri, Nov 30, 2012 at 2:10 AM, David Morel
david.mo...@amakuru.net wrote:
Hi,
I am trying to solve the last reducer hangs because of GC because of
truckloads of data issue that I have on some queries, by using SET
hive.optimize.skewjoin=true; Unfortunately, every time I try
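For reference, the session-level settings being discussed look like this (a sketch: the threshold value shown is the documented default, used here only as an illustration; check the option names against your Hive version):

```sql
-- Enable runtime skew-join handling for the session.
SET hive.optimize.skewjoin=true;
-- Keys with more rows than this are treated as skewed and handled in a
-- follow-up map join (100000 is illustrative, not a recommendation).
SET hive.skewjoin.key=100000;
```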
Hi all (and happy New Year!)
Is it possible to build a perl Thrift client for HiveServer2 (from
Cloudera's 4.1.x) ?
I'm following the instructions found here:
http://stackoverflow.com/questions/5289164/perl-thrift-client-to-hive
Downloaded Hive from Cloudera's site, then i'm a bit lost: where
)
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:215)
        ... 4 more
Where should I start looking (meaning I haven't a clue)? Thanks!
David
On 2013-1-4 at 7:16 AM, David Morel dmore...@gmail.com wrote:
Hi all (and happy New Year!) Is it possible to build a perl Thrift client
for HiveServer2
here:
https://issues.apache.org/jira/browse/HIVE-2935
https://cwiki.apache.org/Hive/hiveserver2-thrift-api.html
HiveServer2 is now a CDH extension.
I think you can use the find command to search the CDH src dir for the
.thrift files.
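The suggestion above might look like this (a sketch: /usr/src/cdh is a hypothetical location; point CDH_SRC at wherever you actually unpacked the CDH source):

```shell
# Hypothetical location; override with CDH_SRC=/real/path before running.
CDH_SRC=${CDH_SRC:-/usr/src/cdh}

# List every Thrift IDL file under a source tree.
find_thrift_files() {
  find "$1" -type f -name '*.thrift'
}

# Usage: find_thrift_files "$CDH_SRC"
```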
2013/1/5 David Morel dmore...@gmail.com
On 4 Jan 2013, at 16
Hi!
After hitting the curse of the last reducer many times on LEFT OUTER
JOIN queries, and trying to think about it, I came to the conclusion
there's something I am missing regarding how keys are handled in mapred
jobs.
The problem shows when I have table A containing billions of rows with
-
From: David Morel dmore...@gmail.com
Date: Thu, 24 Jan 2013 18:03:40
To: user@hive.apache.org
Reply-To: user@hive.apache.org
Subject: An explanation of LEFT OUTER JOIN and NULL values
Hi!
After hitting the curse of the last reducer many times on LEFT OUTER
JOIN queries
fully understand
the root cause of the problem, that would be much better. I guess I'll
dig in a bit deeper then.
Thanks a lot!
David
Regards
Bejoy KS
Sent from remote device, Please excuse typos
-Original Message-
From: David Morel dmore...@gmail.com
Date: Thu, 24 Jan 2013 18:39:56
On 25 Jan 2013, at 10:37, Bertrand Dechoux wrote:
It seems to me the question has not been answered:
is it possible, yes or no, to force a smaller split size
than a block on the mappers?
Not that I know of (though you could implement something to do it), but why
would you want to?
By default if the
I would suggest, if you have heavily compressed files, checking what the
size will be after decompression and allocating more memory to the maps.
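For the record, the knobs usually tried for this are the split-size bounds (a sketch: whether they take effect depends on the InputFormat in use, and the 64 MB value is purely illustrative):

```sql
-- Ask for at most 64 MB per split, smaller than a typical 128 MB block.
-- Honoured by CombineHiveInputFormat; other formats may ignore it.
SET mapred.max.split.size=67108864;
SET mapred.min.split.size=1;
```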
On Fri, Jan 25, 2013 at 11:46 AM, David Morel dmore...@gmail.com
wrote:
Hello,
I have seen many posts on various sites and MLs, but didn't
On 7 Mar 2013, at 2:43, Murtaza Doctor wrote:
Folks,
Wanted to get some help or feedback from the community on this one:
Hello,
in that case it is advisable to start a new thread, and not 'reply-to'
when you compose your email :-)
Have a nice day
David
On 2 Jul 2013, at 16:51, Owen O'Malley wrote:
On Tue, Jul 2, 2013 at 2:34 AM, Peter Marron
peter.mar...@trilliumsoftware.com wrote:
Hi Owen,
I’m curious about this advice about partitioning. Is there some
fundamental reason why Hive
is slow when the number of partitions
On 12 Nov 2013, at 0:01, Sunita Arvind wrote:
Just in case this acts as a workaround for someone:
The issue is resolved if I eliminate the where clause in the query (just
keep WHERE $CONDITIONS). So the two workarounds I can think of now are:
1. Create views in Oracle and query without the where
On 18 Nov 2013, at 21:59, Stephen Sprague wrote:
A word of warning for users of HiveServer2 - version 0.11 at least. This
puppy has the ability to crash and/or hang your server with a memory leak.
Apparently it's not new, since googling shows this discussed before, and I
see reference to a
On 25 Nov 2013, at 11:50, Sreenath wrote:
hi all,
We are using Hive for ad-hoc querying and have a Hive table which is
partitioned on two fields (date, id). Now for each date there are around
1400 ids, so on a single day around that many partitions are added. The
actual data resides in S3.
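A sketch of the layout being described (table and column names here are made up for illustration):

```sql
-- Two partition keys; each (date, id) pair becomes its own partition
-- directory under the S3 location, hence ~1400 new partitions per day.
CREATE EXTERNAL TABLE events (payload STRING)
PARTITIONED BY (dt STRING, id STRING)
LOCATION 's3://some-bucket/events/';
```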
On 22 Nov 2013, at 9:35, Rok Kralj wrote:
If anybody has any clue what is the cause of this, I'd be happy to
hear it.
On Nov 21, 2013 9:59 PM, Rok Kralj rok.kr...@gmail.com wrote:
What does echo $HADOOP_HEAPSIZE return in the environment you're trying
to launch Hive from?
David
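A quick way to check the value, and to set it before launching the CLI (the 2048 below is purely illustrative, not a recommendation):

```shell
# Print the heap size (in MB) that Hive's launcher scripts will pick up;
# "<unset>" means the scripts fall back to their built-in default.
check_heap() {
  echo "HADOOP_HEAPSIZE=${HADOOP_HEAPSIZE:-<unset>}"
}

check_heap
export HADOOP_HEAPSIZE=2048   # illustrative value, in megabytes
check_heap
```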
On 25 Nov 2013, at 9:06, Mayank Bansal wrote:
Hi,
I was also thinking that this might be the case. For that reason I ran
this query:
Select * from (select col1,col2,col3,count(*) as val from table_name
group by col1,col2,col3) a where a.val1 ;
The output that I receive from this query is
altogether) and there might be a bug and/or
something to optimize; the error you're seeing is maybe the key to the
issue, but then it is for more knowledgeable people than me to comment on.
Sorry (and good luck)
David
On Mon, Nov 25, 2013 at 5:50 PM, David Morel dmore...@gmail.com
wrote
Hive is not really meant to serve data as fast as a web page needs. You'll
have to use some intermediate layer (could even be a DB file, or Template
Toolkit-generated static pages).
David
On 28 Jul 2015 at 8:53 AM, siva kumar siva165...@gmail.com wrote:
Hi Lohith,
We use http
On 29 Jul 2015, at 9:42, siva kumar wrote:
Hi folks,
I need to set up a connection between perl and hive using
thrift. Can anyone suggest the steps involved in making this
happen?
Thanks and regards,
siva.
Hi,
check out
Thrift::API::HiveClient2
could you please help me out ?
Thanks and regards,
siva
On Thu, Jul 30, 2015 at 3:20 PM, David Morel dmore...@gmail.com wrote:
On 29 Jul 2015, at 9:42, siva kumar wrote:
Hi folks,
I need to set up a connection between perl and hive using
thrift. Can
Better use HCatalog for this.
David
On 5 Apr 2016 at 10:14, "Mich Talebzadeh" wrote:
> So you want to interrogate Hive metastore and get information about
> objects for a given schema/database in Hive.
>
> This info is kept in the Hive metastore database running on an