RE: Beeline throws OOM on large input query

2016-09-03 Thread Adam
*Reply to Stephen Sprague:* *1) confirm your beeline java process is indeed running with expanded memory.* I used -XX:+PrintCommandLineFlags, which showed -XX:MaxHeapSize=17179869184, confirming the 16g setting. *2) try the hive-cli (or the python one even.) or "beeline -u*

Re: Beeline throws OOM on large input query

2016-09-02 Thread Adam
I set the heap size using HADOOP_CLIENT_OPTS all the way to 16g and still had no luck. I tried to go down the table-join route, but the problem is that the relation is not an equality, so it would be a theta join, which Hive does not support. Basically, what I am doing is a geographic intersection
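The non-equi relation described above cannot go in an ON clause, but the same theta join can be expressed as a cross join with the predicate in WHERE. A sketch with hypothetical table and column names (note the result is a full Cartesian product before filtering, which can be very expensive on large inputs):

```sql
-- Hive rejects non-equality conditions in JOIN ... ON, but the same
-- geographic-intersection predicate works after a CROSS JOIN.
SELECT p.id, r.region_id
FROM points p
CROSS JOIN regions r
WHERE p.lat BETWEEN r.min_lat AND r.max_lat
  AND p.lon BETWEEN r.min_lon AND r.max_lon;
```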

Beeline throws OOM on large input query

2016-09-01 Thread Adam
Hive Version: 2.1.0. I have a very large, multi-line input query (8,668,519 chars), and I have gone up to a 16g heap and still get the same OOM. Error: Error running query: java.lang.OutOfMemoryError: Java heap space (state=,code=0) org.apache.hive.service.cli.HiveSQLException: Error running query:
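One common workaround for a multi-megabyte inline query is to stage the literal data as a file and reference it as a table instead of embedding it in the statement text. A sketch with hypothetical table names and paths:

```sql
-- Instead of millions of characters of inline literals, load them into a
-- staging table once, then join against it in the real query.
CREATE TABLE region_polygons (id BIGINT, wkt STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

LOAD DATA INPATH '/tmp/region_polygons.tsv' INTO TABLE region_polygons;

-- The large query then shrinks to a reference to region_polygons.
```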

Re: hbase-1.1.1 & hive-1.0.1

2016-03-19 Thread Adam Hunt
Version information Hive 1.x will remain compatible with HBase 0.98.x and lower versions. Hive 2.x will be compatible with HBase 1.x and higher. (See HIVE-10990 for details.) Consumers wanting to work with HBase 1.x using Hive 1.x will need to

Re: NPE when reading Parquet using Hive on Tez

2016-02-02 Thread Adam Hunt
> select count(*) from x where x.x > 1; OK 1 Thanks for your help. Best, Adam On Tue, Jan 5, 2016 at 9:10 AM, Adam Hunt <adamph...@gmail.com> wrote: > Hi Gopal, > > Spark does offer dynamic allocation, but it doesn't always work as > advertised. My experience with Tez h

Re: NPE when reading Parquet using Hive on Tez

2016-01-05 Thread Adam Hunt
help. Adam On Mon, Jan 4, 2016 at 12:58 PM, Gopal Vijayaraghavan <gop...@apache.org> wrote: > > > select count(*) from alexa_parquet; > > > Caused by: java.lang.NullPointerException > >at > >org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser

NPE when reading Parquet using Hive on Tez

2016-01-04 Thread Adam Hunt
is stored in Parquet files. Thanks, Adam select count(*) from alexa_parquet; or create table kmeans_results_100_orc stored as orc as select * from kmeans_results_100; ], TaskAttempt 3 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.RuntimeException

RE: How to create new table like existing with an extra column in single query.

2015-08-10 Thread LaStrange, Adam
How about: create table XXX like YYY; alter table XXX add columns (new_column int); From: venkatesh b [mailto:venkateshmailingl...@gmail.com] Sent: Monday, August 10, 2015 9:28 AM To: Wangwenli Cc: user Subject: Re: How to create new table like existing with an extra column in single query.
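The two-step approach quoted above, written out with placeholder names:

```sql
-- Hive has no one-statement "CREATE TABLE ... LIKE ... plus extra column",
-- so clone the schema first, then append the column.
CREATE TABLE new_table LIKE existing_table;
ALTER TABLE new_table ADD COLUMNS (new_column INT);
```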

Re: limit clause + fetch optimization

2015-07-22 Thread Adam Silberstein
take you up on that if I find some time to try it. Thanks, Adam On Tue, Jul 21, 2015 at 11:14 PM, Gopal Vijayaraghavan gop...@apache.org wrote: Just want to make sure I understand the behavior once that bug is fixed...a 'select *' with no limit will run without a M/R job and instead stream
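The streaming behavior under discussion is governed by hive.fetch.task.conversion. A minimal sketch (table name hypothetical):

```sql
-- Controls when Hive answers a query by streaming files directly
-- (a "fetch task") instead of launching an M/R job.
-- Accepted values: none, minimal, more.
SET hive.fetch.task.conversion=more;

SELECT * FROM my_table LIMIT 10;  -- eligible to run as a fetch task
```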

limit clause + fetch optimization

2015-07-21 Thread Adam Silberstein
optimization, would appreciate it. This is on Hive 1.1 inside CDH5.4. Thanks, Adam

Re: limit clause + fetch optimization

2015-07-21 Thread Adam Silberstein
subsequently would apply a limit once the job finishes. I haven't spotted this issue in JIRA; I'd be happy to file it if that's useful to you. Thanks! Adam On Tue, Jul 21, 2015 at 7:20 PM, Gopal Vijayaraghavan gop...@apache.org wrote: I've been experimenting with 'select *' and 'select * limit X

DDL stmt for showing views only

2015-06-23 Thread Adam Silberstein
with one hive call. Thanks, Adam
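For reference, later Hive releases (2.2.0, via HIVE-14558) added a dedicated statement for this; on older versions a common workaround is to query the metastore database directly. A sketch (database name hypothetical):

```sql
-- Hive 2.2.0 and later:
SHOW VIEWS;
SHOW VIEWS IN my_database;

-- Older versions: query the metastore backing database directly
-- (connection details are deployment-specific):
--   SELECT TBL_NAME FROM TBLS WHERE TBL_TYPE = 'VIRTUAL_VIEW';
```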

java.lang.RuntimeException: Unknown type BIGINT

2014-10-02 Thread Adam Kawa
is that I have many tables (and partitions) with BIGINT (capitalized). I would like to avoid altering them to change the type from BIGINT to bigint. Is there any patch (or trick) available that handles case insensitivity? Cheers! Adam
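If altering turns out to be unavoidable, a metadata-only rewrite per column may be enough; a sketch with placeholder names (CHANGE COLUMN with the same name and the lowercase type rewrites the stored type string without touching the data):

```sql
-- Repeat per affected column; only the metastore entry changes.
ALTER TABLE my_table CHANGE COLUMN my_col my_col bigint;
```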

HiveServer2 http mode?

2014-04-10 Thread Adam Faris
The Setting Up HiveServer2 wiki page mentions that HiveServer2 is providing an “http mode” in 0.13. Is “http mode” going to be a REST API, or is it encapsulating thrift/jdbc connections inside http traffic? - Thanks, Adam
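For reference, HTTP mode encapsulates the Thrift RPC inside HTTP traffic (it is not a REST API). A minimal hive-site.xml sketch to enable it (port and path shown are the usual defaults):

```xml
<property>
  <name>hive.server2.transport.mode</name>
  <value>http</value> <!-- default is binary -->
</property>
<property>
  <name>hive.server2.thrift.http.port</name>
  <value>10001</value>
</property>
<property>
  <name>hive.server2.thrift.http.path</name>
  <value>cliservice</value>
</property>
```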

Re: org.apache.hadoop.hive.metastore.HiveMetaStoreClient with webhcat REST

2014-03-17 Thread Adam Silberstein
Hi, Didn't get any answers on this, trying one more time. Thanks, Adam On Mar 14, 2014, at 9:50 AM, Adam Silberstein a...@trifacta.com wrote: Hi, I'm testing out the REST interface to webhcat and stuck doing basic DDL operations. Background on installation: --I installed packages

org.apache.hadoop.hive.metastore.HiveMetaStoreClient with webhcat REST

2014-03-14 Thread Adam Silberstein
. -There is older material from ~2011 as well, but ignoring that. If you have any suggestions please share. Thanks in advance! -Adam

Re: Versioninfo and platformName issue.

2013-12-10 Thread Adam Kawa
Hi, Do you have the Hadoop libs properly installed? Does the $ hadoop version command run successfully? If so, then it sounds like some classpath issue... 2013/12/10 Manish Bhoge manishbh...@rocketmail.com

Re: hive.query.string not reflecting the current query

2013-12-03 Thread Adam Kawa
Hmmm? Maybe it is related to the fact that a query like select * from mytable limit 100; does not start any MapReduce job. It instead starts a read operation from HDFS (and communicates with the MetaStore to learn the schema and how to parse the data using the InputFormat and SerDe). For example,

Re: How to specify Hive auxiliary jar in HDFS, not local file system

2013-12-02 Thread Adam Kawa
You can use the ADD JAR command inside a Hive script, with the jar path passed as a parameter in the Oozie workflow definition. An example is here: http://blog.cloudera.com/blog/2013/01/how-to-schedule-recurring-hadoop-jobs-with-apache-oozie/ 2013/12/2 mpeters...@gmail.com Is it possible to specify a Hive auxiliary jar (like a
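Later Hive releases also accept an HDFS URI directly in ADD JAR, which answers the original question without Oozie; a sketch with a hypothetical path:

```sql
-- The jar is fetched from HDFS and registered for this session only.
ADD JAR hdfs:///apps/hive/aux/my-serde.jar;
LIST JARS;  -- verify the jar is registered
```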

Issue with multi insert

2013-02-07 Thread Thomas Adam
Hi, I am having issues executing the following multi insert query: FROM ${tmp_users_table} u JOIN ${user_evens_table} ue ON ( u.id = ue.user ) INSERT OVERWRITE TABLE ${dau_table} PARTITION (dt='${date}') SELECT u.country, u.platform, u.gender,
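For comparison, a minimal complete multi-insert of the same shape (all names hypothetical): one scan of the FROM clause feeds several INSERT targets, each with its own SELECT.

```sql
FROM src_users u
JOIN user_events ue ON (u.id = ue.user_id)
INSERT OVERWRITE TABLE dau PARTITION (dt='2013-02-07')
  SELECT u.country, u.platform, u.gender
INSERT OVERWRITE TABLE events_by_country
  SELECT u.country, count(*)
  GROUP BY u.country;
```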

Hive inconsistently interpreting 'where' and 'group by'

2012-05-30 Thread Adam Laiacano
sometimes. Thanks, Adam

Hive question, summing second-level domain names

2011-05-23 Thread Adam Phelps
of .*[.][^.]*[.][^.]* and then output lines with a count for the common portion. Any pointers in the correct direction would be welcome. Thanks - Adam