+1
On Wed, Apr 20, 2016 at 1:24 AM, Jimmy Xiang wrote:
> +1
>
> On Tue, Apr 19, 2016 at 2:58 PM, Alpesh Patel
> wrote:
> > +1
> >
> > On Tue, Apr 19, 2016 at 1:29 PM, Lars Francke
> > wrote:
> >>
> >> Thanks everyone! Vote
Hi folks,
I am trying to create HFiles from a Hive table to bulk load into HBase and
am following the HWX [1] tutorial.
It creates the HFiles correctly but then fails when closing the
RecordWriter with the following stack trace.
Error: java.lang.RuntimeException: Hive Runtime Error while
It's been ages since I wrote one, but the differences to mine:
a) I use LongWritable: public LongWritable evaluate(LongWritable startAt) {
b) I have annotations on the class (but I think they are just for docs)
@Description(name = row_sequence,
value = _FUNC_() - Returns a generated row
That class is:
https://code.google.com/p/gbif-occurrencestore/source/browse/trunk/occurrence-store/src/main/java/org/gbif/occurrencestore/hive/udf/UDFRowSequence.java
Cheers,
Tim
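For reference, a minimal sketch of registering and calling that UDF from Hive; the jar path is a placeholder, and the class name comes from the link above:

```sql
-- Register the UDF (jar path is hypothetical)
ADD JAR /path/to/occurrence-store.jar;
CREATE TEMPORARY FUNCTION row_sequence
  AS 'org.gbif.occurrencestore.hive.udf.UDFRowSequence';

-- Each call returns the next value in a per-task sequence
SELECT row_sequence(), name FROM t1;
```

Note the sequence is maintained per mapper/reducer, so the values are only globally sequential when the query runs in a single task.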
On Mon, Sep 30, 2013 at 10:55 PM, Tim Robertson
timrobertson...@gmail.com wrote:
It's been ages since I wrote one ... the concern was that
the argument is void, so that
all the invocations would have the same value; I then tried to pass
in a param to prevent this possibility.
Note: I am a newbie to Hive.
Can someone please answer the following questions?
1) Does Hive provide APIs (like HBase does) that a Java program can use to
retrieve data from Hive tables? I heard somewhere that
the data can be accessed through JDBC-style APIs. True?
Hi all,
I have a 6 node cluster, and on a simple query over a table created from a
CSV, I was seeing a lot of mappers reporting that they were not achieving
data locality.
I changed the replication factor to 6 but still MR is showing only about
60% data locality in the data-local map tasks.
How can
I believe so. From the tutorial [1] :
CREATE EXTERNAL TABLE page_view_stg(viewTime INT, userid BIGINT,
page_url STRING, referrer_url STRING,
ip STRING COMMENT 'IP Address of the User',
country STRING COMMENT 'country of origination')
--
*From:* Tim Robertson timrobertson...@gmail.com
*To:* user@hive.apache.org; Gopi Kodumur gkodu...@yahoo.com
*Sent:* Wednesday, April 18, 2012 12:14 PM
*Subject:* Re: Can we define external table Fields enclosed in
I believe so. From the tutorial [1] :
CREATE
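A sketch of the usual way to declare delimiters on an external table (column names and location are illustrative). Note that plain ROW FORMAT DELIMITED only splits on the field terminator; it does not strip enclosing quote characters, for which a CSV SerDe is typically used instead:

```sql
CREATE EXTERNAL TABLE page_view_stg (
  viewTime INT,
  userid BIGINT,
  page_url STRING
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION '/user/hive/staging/page_view';  -- path is a placeholder
```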
It sounds like you have run Sqoop without specifying a durable metastore
for Hive, e.g. you haven't told Hive to use MySQL, Postgres etc. to store
its metadata. It probably used Derby, which either kept it all in
memory or put it all in the /tmp directory, which was destroyed on restart.
I
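A sketch of the relevant hive-site.xml properties for a durable MySQL-backed metastore; the host, database name, and credentials are placeholders:

```xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://dbhost/metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>secret</value>
</property>
```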
Hi all,
I am *really* interested in Hive-1634 (
https://issues.apache.org/jira/browse/HIVE-1634). I have just built from
Hive trunk using HBase 0.90.4 as the version (e.g. we run cdh3u2).
We have an HBase table populated with Bytes, so I create the Hive table
like so:
CREATE EXTERNAL TABLE
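The truncated statement above presumably continues with the HBase storage handler; with the binary column qualifier that HIVE-1634 adds, it would look roughly like this (table, column family, and qualifier names are illustrative):

```sql
CREATE EXTERNAL TABLE occurrence (
  id STRING,
  latitude DOUBLE,
  longitude DOUBLE
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
  -- the #b suffix tells the SerDe the cell holds Bytes-encoded values
  "hbase.columns.mapping" = ":key,v:lat#b,v:lng#b"
)
TBLPROPERTIES ("hbase.table.name" = "occurrence");
```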
Hi Jason,
I work for an international organization involved in the mobilization of
biodiversity data (specifically we are dealing a lot with observations of
species) so think of it as a lot of point based information with metadata
tags. We have built an Oozie workflow that uses Sqoop to suck in
Hi all,
I need to perform a lot of point-in-polygon checks and want to use Hive
(currently I mix Hive, Sqoop and PostGIS in an Oozie workflow to do this).
In an ideal world, I would like to create a Hive table from a Shapefile
containing polygons, and then do the likes of the following:
SELECT
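The truncated query presumably looks something like the sketch below, where contains() is a hypothetical UDF taking a polygon geometry and a point, and the table and column names are illustrative (Hive had no native spatial support at the time):

```sql
-- Hive only supports equi-join conditions, so the spatial
-- predicate has to go in the WHERE clause of a cross product
SELECT p.polygon_id, count(1)
FROM occurrence o JOIN polygon p
WHERE contains(p.wkt, o.longitude, o.latitude)  -- hypothetical UDF
GROUP BY p.polygon_id;
```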
Hi all
Can someone please tell me how to achieve the following in a single hive script?
set original_value = mapred.reduce.tasks;
set mapred.reduce.tasks=1;
... do stuff
set mapred.reduce.tasks=original_value;
It is the first and last lines that don't work - is it possible?
Thanks,
Tim
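There is no variable indirection in a Hive script, but assuming the goal is simply to return to Hive's default behaviour, a common workaround is to reset the property to -1, which tells Hive to estimate the reducer count again:

```sql
set mapred.reduce.tasks=1;
-- ... statements that need a single reducer ...
set mapred.reduce.tasks=-1;  -- -1 restores automatic reducer estimation
```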
Hi all,
Sorry if I am missing something obvious but is there an inverse of an explode?
E.g. given t1
ID Name
1 Tim
2 Tim
3 Tom
4 Frank
5 Tim
Can you create t2:
Name  ID
Tim   1,2,5
Tom   3
Frank 4
In Oracle it would be a
select name,collect(id) from t1 group by name
I suspect in Hive
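Hive's built-in aggregate along these lines is collect_set(); it drops duplicate values and does not guarantee order, but otherwise a sketch of the Oracle query above in HiveQL would be:

```sql
SELECT name, collect_set(id)
FROM t1
GROUP BY name;
```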
What about the count or max?
http://svn.apache.org/repos/asf/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFCount.java
http://svn.apache.org/repos/asf/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFMax.java
I've not used UDAFs, but I only
Does it need to be a sequential INT? If not, then a UUID works very well.
Cheers,
Tim
On Tue, Nov 16, 2010 at 8:55 AM, afancy grou...@gmail.com wrote:
Hi, Zhang,
How to integrate this snowflake with Hive? Thanks!
Regards,
afancy
On Mon, Nov 15, 2010 at 10:35 AM, Jeff Zhang
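If a UUID does suffice, one way to generate it inline is via the reflect() UDF, assuming a Hive version that ships it:

```sql
-- reflect() invokes a static Java method; here java.util.UUID.randomUUID()
SELECT reflect('java.util.UUID', 'randomUUID'), name FROM t1;
```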
Thanks Edward. I'll poke around there.
On Tue, Nov 2, 2010 at 6:40 PM, Edward Capriolo edlinuxg...@gmail.com wrote:
On Tue, Nov 2, 2010 at 12:47 PM, Tim Robertson
timrobertson...@gmail.com wrote:
Hi all,
Is the following a valid UDF please?
When I run it I get the following so I presume
That's right. Hive can use an HBase table as an input format to the
Hive query regardless of output format, and can also write the output
to an HBase table regardless of the input format. You can also
supposedly do a join in Hive that uses one side of the join from an
HBase table, and the other
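Concretely, such a join can be sketched as below, assuming hbase_table was declared with the HBaseStorageHandler and native_table is a plain Hive table (all names are illustrative):

```sql
SELECT h.rowkey, n.label
FROM hbase_table h
JOIN native_table n ON (h.rowkey = n.id);
```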