One reason I know of for this error is not setting up HADOOP_HOME. It is right
not to set this variable, since it was deprecated and replaced with
HADOOP_PREFIX and HADOOP_MAPRED_HOME. However, it seems like Hive still has
some haunting references to HADOOP_HOME causing this error, especially after
the
Are you using Hive over YARN? If so, see the related thread here [1].
[1]
https://groups.google.com/a/cloudera.org/forum/?fromgroups=#!topic/cdh-user/gHVq9C5H6RE
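As a stopgap until those references are cleaned up, exporting the variable
(deprecated or not) before launching the CLI usually silences the error. A
sketch, assuming a hypothetical install location:

export HADOOP_HOME=/usr/lib/hadoop  # wherever your Hadoop actually lives
export PATH=$HADOOP_HOME/bin:$PATH
hive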
On Mon, Mar 4, 2013 at 4:49 AM, Bhaskar, Snehalata <
snehalata_bhas...@syntelinc.com> wrote:
> Does anyone know how to solve this i
Can you show your query that is taking 700 seconds?
On Tue, Apr 30, 2013 at 12:48 PM, Rupinder Singh wrote:
> Hi,
>
>
> I have an hbase cluster where I have a table with a composite key. I map
> this table to a Hive external table using which I insert/select data
> into/from this t
Rupinder,
Hive supports filter pushdown [1], which means that the predicates in the
WHERE clause are pushed down to the storage handler level, where they are
either handled by the storage handler or delegated back to Hive if it cannot
handle them. As of now, the HBaseStorageHandler only supports primi
nim,
>
>
> Thanks. So this means custom map reduce is the viable option when working
> with hbase tables having composite keys, since it allows to set the start
> and stop keys. Hive+Hbase combination is out.
>
>
> Regards
>
> Rupinder
Do you have a different version of the ANTLR jar in your classpath than the
one packaged with Hive?
On Thu, May 2, 2013 at 12:38 PM, Cyril Bogus wrote:
> I am using the default setup for the hive-site.xml so the meta store is in
> /user/hive/warehouse in the hdfs (Which I have setup as specif
2013 at 1:47 PM, kulkarni.swar...@gmail.com <
> kulkarni.swar...@gmail.com> wrote:
>
>> Do you have a different version of antlr jar in your classpath other than
>> the one packaged with hive?
>>
>>
>> On Thu, May 2, 2013 at 12:38 PM, Cyril Bogus wrote:
>
nt in the hdfs
>
>
> On Thu, May 2, 2013 at 1:50 PM, Cyril Bogus wrote:
>
>> Actually two the one from hadoop (which is the same as from the one in
>> the hive package) and the one from mahout 0.7 which is newer antlr 3.2
>>
>>
>> On Thu, May 2, 2013 a
Unfortunately, I don't think there is a clean way to achieve that (at least
not one that I know of). Your option at this point is to run your queries
with a WHERE clause so that, behind the scenes, the predicate gets converted
to a range scan and restricts the amount of data that gets scanned.
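As a sketch (table and key values are made up), it is a bounded predicate
on the row key column that turns into a start/stop range scan:

SELECT value FROM hbase_events
WHERE key >= 'user_100' AND key < 'user_200';  -- becomes a range scan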
AFAIK, Hive HWI has been deprecated and you should be using Hue/Beeswax for
all your web interface needs.
On Thu, May 16, 2013 at 11:18 AM, Aniket Mokashi wrote:
> In your hive-site.xml, change value to "lib/hive-hwi-0.9.0.war" from
> "/lib/hive-hwi-0.9.0.war". I guess its a known issue with hwi.
More often than not, in my experience this is caused by a malformed
hive-site.xml (or hive-default.xml). When this happened to me, it was
because I somehow had tab characters in my hive-site.xml. Try dropping the
file(s) and recreating them with proper formatting.
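For reference, a minimal well-formed hive-site.xml looks something like this
(the property shown is just an example; the important part is clean XML with
no stray characters):

<?xml version="1.0"?>
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
</configuration>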
On Fri, Jun 21, 2013 at 2:17 PM, Sanjay S
This error is not the actual reason why your job failed. Please look into
your JobTracker logs to find the real reason. This error simply means that
Hive attempted to connect to the JobTracker to gather debugging info for
your failed job but could not, due to a classpath error.
On Tue, Jul 16, 2013 at 4:50 PM
First of all, that might not be the right way to choose the underlying
storage. You should choose HDFS or HBase depending on whether the data is
going to be used for batch processing or whether you need random access on
top of it. HBase is just another layer on top of HDFS, so obviously the
queries
ru
> Error: Java heap space
Guess this should give a hint.
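If it really is the task heap blowing up, the usual first step on MR1 is to
bump the child JVM heap before re-running the query (the value below is just
an example; tune it to your cluster):

hive> SET mapred.child.java.opts=-Xmx1024m;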
On Fri, Jul 19, 2013 at 4:22 AM, ch huang wrote:
> why the task failed? anyone can help?
>
>
> hive> select cookieid,count(url) as visit_num from alex_test_big_seq group
> by cookieid order by visit_num desc limit 10;
>
> MapReduce Total
Yes, it is possible to do that. The patch attached to the bug adds a new
HBaseCompositeKey class that consumers can extend to provide their own
implementations. This will help Hive understand their custom arrangement of
the composite keys.
If you can try the patch and let me know whether it worked out, that would
be great.
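To give a feel for it, a consumer implementation would look roughly like
the sketch below. Everything here is an assumption against the patch under
discussion: the base-class hooks (constructor, getField) may differ from
what finally gets committed, and the two-part key layout is made up.

import org.apache.hadoop.hbase.util.Bytes;

// Sketch: a row key laid out as a 4-byte int id followed by a UTF-8 suffix.
public class TwoPartKey extends HBaseCompositeKey {

  private final byte[] key; // raw HBase row key, assumed to be handed in

  public TwoPartKey(byte[] key) {
    this.key = key;
  }

  // Hive asks for each part of the composite struct by field id.
  @Override
  public Object getField(int fieldID) {
    if (fieldID == 0) {
      return Bytes.toInt(key, 0, 4); // first four bytes: int id
    }
    return Bytes.toString(key, 4, key.length - 4); // rest: string suffix
  }
}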
You can use Beeswax from Hue. It will neatly page your results.
On Sun, Aug 18, 2013 at 11:39 PM, Nitin Pawar wrote:
> it can not page, it displays all the results on the console
>
> to avoid this,
>
> we either put the output in another table or put it inside a file
>
>
> On Mon, Aug 19, 2013 a
Seems like you are running Hive on YARN instead of MR1. I have had some
issues in the past doing so. The post here [1] has some solutions on how to
configure Hive to work with YARN. Hope that helps.
[1]
https://groups.google.com/a/cloudera.org/forum/#!topic/cdh-user/gHVq9C5H6RE
On Thu, Dec 26, 20
Hi Den,
Have you tried escaping the additional colon in the qualifier name?
On Fri, Feb 14, 2014 at 9:47 AM, Den wrote:
> I'm working with an HBase database with a column of the form 'cf:q1:q2'
> where 'cf' is the column family 'q1:q2' is the column qualifier. When
> trying to map this in Hive
"column family, column qualifier specification.");
>> }
>>
>> It seems that this will throw this error if there is not exactly 1 colon in
>> the HBase column to map. So short of tricking it into thinking something
>> else is a colon there mi
09 PM, kulkarni.swar...@gmail.com <
kulkarni.swar...@gmail.com> wrote:
> Hi Den,
>
> I think that is a valid solution. If you are using a version of hive >
> 0.12, you can also select columns from hbase using prefixes (introduced in
> [1]). Marginally more efficient than th
Can you elaborate a little on what exactly you mean by "mounting"? The least
you will need to make HBase data queryable in Hive is to create an external
table on top of it.
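For example, something minimal like this makes the HBase table queryable
from Hive (the column family "cf" and qualifier "val" are placeholders for
whatever the real table uses):

CREATE EXTERNAL TABLE top_cool_hive(key string, val string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:val")
TBLPROPERTIES ("hbase.table.name" = "top_cool");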
On Mon, Mar 31, 2014 at 2:11 PM, Manju M wrote:
> Without mapping /mounting the hbase table , how can I access and query
>
ase table )
>
> top_cool is hbase table ( not a mapped Hive table)
>
>
>
> On Mon, Mar 31, 2014 at 12:42 PM, kulkarni.swar...@gmail.com <
> kulkarni.swar...@gmail.com> wrote:
>
>> Can you elaborate a little on what exactly you mean by "mounting"? T
I feel it's pretty hard to answer this without understanding the following:
1. What exactly are you trying to query? CSV? Avro?
2. Where is your data? HDFS? HBase? Local filesystem?
3. What version of hive are you using?
4. What is an example of a query that is slow? Some queries like joins a
nswers
>
>
>
>
>
>
>
> From: kulkarni.swar...@gmail.com [mailto:kulkarni.swar...@gmail.com]
> Sent: Friday, May 30, 2014 3:34 PM
>
> To: user@hive.apache.org
> Subject: Re: Need urgent help on hive query performance
>
>
>
> I feel it's pre
ulkarni.swar...@gmail.com <
> kulkarni.swar...@gmail.com> wrote:
>
>> I created a very simple hive table and then ran the following query that
>> should run a M/R job to return the results.
>>
>> hive> SELECT COUNT(*) FROM invites;
>>
>> But I am getti
.
On Mon, May 7, 2012 at 2:12 PM, shashwat shriparv wrote:
> Do one thing create the same structure /Users/testuser/hive-0.9.0/
> lib/hive-builtins-0.9.0.jar on the hadoop file system and den try.. will
> work
>
> Shashwat Shriparv
>
>
> On Mon, May 7, 2012 at 11:57 P
It looks more like a permissions problem to me. Just make sure that
whatever directories Hadoop is writing to are owned by Hadoop itself.
Also, it looks a little weird to me that it is using the
"RawLocalFileSystem" instead of the "DistributedFileSystem". You might want
to look at "fs.default.name"
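For comparison, a cluster setup would normally carry something like this in
core-site.xml (host and port are placeholders):

<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode-host:8020</value>
</property>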
I installed the DataNucleus Eclipse plugin, as I realized that it is needed
to run some of the Hive tests in Eclipse. While trying to run the enhancer
tool, I keep getting this exception:
"Exception occurred executing command line. Cannot run program
"/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/
Does Hive currently support multiple SerDes defined per table? Looking
through the code and documentation, it seems like it doesn't, as only one
can be specified through ROW FORMAT SERDE, but I just wanted to be sure.
--
Swarnim
I was thinking more from a perspective of specifying a SerDe per column
name.
On Thu, May 17, 2012 at 10:38 AM, Mark Grover wrote:
> Hi Swarnim,
> What's your use case?
> If you use multiple SerDe's, when you are writing to the table, how would
> you want Hive to decide which one to use?
>
> Mar
. A Deserializer's role is to turn the value which
> came form the InputFormat into something hive can use as column data.
> In essence the Deserializer creates the columns so I do not see a
> logical way to have more then one.
>
> On Thu, May 17, 2012 at 11:53 AM, kulkarni.swar...@gmail.
I am trying to use the ReflectionStructObjectInspector to extract fields
from a protobuf generated by the 2.4.1 compiler. I am seeing that reflection
fails to extract fields out of the generated protobuf class. Specifically,
this code snippet:
public static Field[] getDeclaredNonStaticFields(Class
I am trying to write a custom ObjectInspector extending the
StructObjectInspector and got a little confused about the use of the
getStructFieldData method on the inspector. Looking at the definition of
the method:
public Object getStructFieldData(Object data, StructField fieldRef);
I understand t
If someone can help understand this, I would really appreciate.
On Fri, May 25, 2012 at 3:58 PM, kulkarni.swar...@gmail.com <
kulkarni.swar...@gmail.com> wrote:
> I am trying to write a custom ObjectInspector extending the
> StructObjectInspector and got a little confused about th
29, 2012 at 11:08 AM, kulkarni.swar...@gmail.com
> wrote:
> > If someone can help understand this, I would really appreciate.
> >
> > On Fri, May 25, 2012 at 3:58 PM, kulkarni.swar...@gmail.com
> > wrote:
> >>
> >> I am trying to write a custom ObjectI
Did you try this [1]? It got me most of the way through the process.
[1] https://cwiki.apache.org/Hive/gettingstarted-eclipsesetup.html
On Tue, Jun 5, 2012 at 8:49 AM, Arun Prakash wrote:
> Hi Friends,
> I tried to develop udf for hive but i am getting package import error
> in eclipse.
>
> im
Is the latest Hive release, 0.9.0, compatible with Thrift 0.8, or do we need
to recompile and rebuild the package ourselves to make it compatible?
Currently it seems to depend on libthrift-0.7.
Thanks for the help.
Swarnim
Hello,
In order to provide a custom "serialization.class" to a SerDe, I created a
jar containing all my custom serialization classes and added it to the Hive
classpath with the "ADD JAR my-classes.jar" command. Now when I try to use
these custom classes via the CLI, it still throws a "ClassNotFoundEx
Capriolo wrote:
> >
> >> You need to put these jars in your aux_lib folder or in your hadoop
> >> classpath. There is a subtle difference between that classpath and the
> >> classpath used by UDF and anything that involves a serde or input
> >> format needs to
is searching for the jar on
HDFS rather than local filesystem. Is that intended? All "Select *" queries
that do not spawn a M/R job work fine.
Thanks,
On Wed, Jun 13, 2012 at 9:44 AM, kulkarni.swar...@gmail.com <
kulkarni.swar...@gmail.com> wrote:
> Cool. That worked!
>
>
I was looking into the snapshot builds for Hive [1] and noticed that there
is no snapshot tarball available. Is there a reason why we don't build
them? If not, should we be adding that to the build so that interested
people can simply pull this bleeding-edge tarball and start playing with
it rathe
jectInstector.THRIFT and ObjectInspector.ProtoBuffer. I
> currently want to write a Serde that works like the thrift serde where
> protobuf objects can be given directly to hive. Come hang out in the
> IRC room and maybe we can chat more about this.
>
> On Tue, May 22, 2012 at 6:09 PM, kul
Hi Kanna,
This might just mean that your query declares a STRING type for a field
which is actually a DOUBLE.
On Tue, Jul 10, 2012 at 3:05 PM, Kanna Karanam wrote:
> Has anyone seen this error before? Am I missing anything here?
>
>
> 2012-07-10 11:11:02,203 INFO org.apache
ns. MySQL varchar maxes etc. You should open a
> jirra issues on issues.apache.org/jira/hive
>
> Edward
>
> On Wed, Jul 11, 2012 at 5:10 PM, kulkarni.swar...@gmail.com
> wrote:
> > Hello,
> >
> > I am not sure I understand the significance of separators very well in
&g
By default, no. They will be displayed on the console.
Try this to store them in HDFS:
INSERT OVERWRITE DIRECTORY '/tmp/hdfs_out' SELECT * FROM invites a
WHERE a.ds='2008-08-15';
The results of the query will be stored in the '/tmp/hdfs_out' directory on
HDFS.
I am not sure I understood your q
Yes.
INSERT OVERWRITE DIRECTORY ''
would mean a path on HDFS, while
INSERT OVERWRITE LOCAL DIRECTORY ''
would mean a path on the local FS.
On Thu, Jul 12, 2012 at 2:01 PM, Raihan Jamal wrote:
> Basically, I was assuming that whenever you do any HiveQL query, all the
> outputs gets stored somewhere
Hi Edward,
This project looks really good.
Internally, we have also been working on similar changes. Specifically,
enhancing the existing Hive/HBase integration to support protobufs/thrifts
stored in HBase. Because of the need to specify explicit column mappings
and the number of issues faced [1] wit
This error is more related to Hadoop than Hive. Looking at the exception,
it looks like your NameNode is not running/configured properly. Check your
NameNode log to see why it failed to start.
Swarnim
On Mon, Jul 16, 2012 at 2:53 AM, shaik ahamed wrote:
> Hi All,
>
>How to rectify the b
13, 2012 at 12:38 PM, kulkarni.swar...@gmail.com <
> kulkarni.swar...@gmail.com> wrote:
>
>> Has anyone being using hive 0.9.0 release with the CDH4 GA release? I
>> keep hitting this exception on its interaction with HBase.
>>
>> java.lang.NoSuchMethodError:
>
ve property values when building ?
>
> If it still fails, please comment on
> HIVE-3029<https://issues.apache.org/jira/browse/HIVE-3029>
> .
>
> Thanks
>
>
> On Mon, Jul 16, 2012 at 2:42 PM, kulkarni.swar...@gmail.com <
> kulkarni.swar...@gmail.com> wrote:
The problem is that it is probably looking for these files on HDFS instead
of your local file system. As a workaround, try creating that path on HDFS
and uploading the files there to see if it works. Also, try setting the
fs.default.name property in core-site.xml to point to your local filesystem.
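Concretely, the workaround amounts to mirroring the local path on HDFS,
e.g. with the jar path from the earlier message:

hadoop fs -mkdir /Users/testuser/hive-0.9.0/lib
hadoop fs -put /Users/testuser/hive-0.9.0/lib/hive-builtins-0.9.0.jar \
    /Users/testuser/hive-0.9.0/lib/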
"select *" queries don't really run a M/R job. Rather directly hit HDFS to
grab the results. While "select count(*)" run mappers/reducers to perform
the count on the data. The former running and the latter not suspects
something might be wrong with your hadoop installation. Looking at the
stacktrac
th CDH4 GA
release (hbase-0.92.1-cdh4.0.0.jar) and everything was good then.
[1] https://issues.apache.org/jira/browse/HADOOP-8350
On Mon, Jul 16, 2012 at 5:08 PM, kulkarni.swar...@gmail.com <
kulkarni.swar...@gmail.com> wrote:
> Yeah. I did override hadoop.security.version to 2.0.0-alpha
usters, HADOOP_HOME is deprecated but hive still
> needs it.
>
> Don't know if that answers your question
>
> Thanks,
> Nitin
>
>
> On Wed, Jul 18, 2012 at 10:01 PM, kulkarni.swar...@gmail.com <
> kulkarni.swar...@gmail.com> wrote:
>
>> Hello,
>
0.9
On Wed, Jul 18, 2012 at 12:04 PM, Nitin Pawar wrote:
> this also depends on what version of hive you are using
>
>
> On Wed, Jul 18, 2012 at 10:33 PM, kulkarni.swar...@gmail.com <
> kulkarni.swar...@gmail.com> wrote:
>
>> Thanks for your reply nitin.
>>
es/branch-0.9/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
>>
>> <http://svn.apache.org/repos/asf/hive/branches/branch-0.8/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java>
>>
>> HADOOPBIN("hadoop.bin.path", System.getenv("
have been patched to 0.10.0.
[1] https://issues.apache.org/jira/browse/HIVE-2757
On Wed, Jul 18, 2012 at 12:50 PM, kulkarni.swar...@gmail.com <
kulkarni.swar...@gmail.com> wrote:
> Hm. Yeah I tried out with a few version 0.7 -> 0.9 and seems like they all
> do. May be we should
A couple to add to the list:
Indexing[1]
Columnar Storage/RCFile[2]
[1] https://cwiki.apache.org/confluence/display/Hive/IndexDev
[2]
http://www.cse.ohio-state.edu/hpcs/WWW/HTML/publications/papers/TR-11-4.pdf
On Thu, Jul 19, 2012 at 8:39 AM, Jan Dolinár wrote:
> There are many ways, but beware
rt/home 175G 118G 57G 68% /export/home
> rpool   916G  34K 668G  1% /rpool
>
>
>
>
> On Fri, Jul 20, 2012 at 7:42 AM, kulkarni.swar...@gmail.com <
> kulkarni.swar...@gmail.com> wrote:
>
>> Seems to me like you might be j
Hello,
I totally understand that open source projects usually do not have a fixed
release date, but I was just curious whether something was chalked out for
releasing Hive 0.10 into the wild. There are some really interesting
additions that I am looking forward to.
Thanks,
--
Swarnim
tore changes, however the
> scripts to handle the upgrades are usually added with the changes. (So
> do not just blindly take upgrade advice without trying it in staging
> first and backing up your metastore)
>
> Edward
>
> On Fri, Jul 20, 2012 at 5:20 PM, kulkarni.swar
BIGINT is 8 bytes whereas INT is 4 bytes. Timestamps are usually of "long"
type. To avoid loss of precision, I would recommend BIGINT.
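For instance, an epoch value in milliseconds such as 1342817520000 already
overflows a 4-byte INT (max is about 2.1 billion) but fits easily in
BIGINT. A quick sketch with a made-up table:

CREATE TABLE events (ts BIGINT);  -- epoch millis; INT would overflow
SELECT from_unixtime(CAST(ts / 1000 AS BIGINT)) FROM events;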
On Fri, Jul 20, 2012 at 4:52 PM, Tech RJ wrote:
> What is the difference between these two? Trying to convert timestamps to
> full date format. The only differen
Hello,
I have a pretty basic question here. I am trying to get structs stored in
HBase read by Hive. In what format should these structs be written so that
they can be read?
For instance, if my query has the following struct:
s struct
How should I be writing my data in HBase so t
Try something like this:
CREATE EXTERNAL TABLE hbase_table_1(key struct,
value string)
ROW FORMAT DELIMITED
COLLECTION ITEMS TERMINATED BY '~'
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" =
":key,test-family:test-qual")
TBLPROPERTI
Can you also post the logs from "/tmp//hive.log"? They might contain some
info on your job failure.
On Wed, Jul 25, 2012 at 8:28 AM, vijay shinde wrote:
> Hi Bejoy,
>
> Thanks for quick reply. Here are some additional details
>
> Cloudera Version - CDH3U4
>
> hive-site.xml
>
>
> hive.aux.jars.
While going through some code for the HBase/Hive integration, I came across
this constructor:
public HBaseSerDe() throws SerDeException {
}
Basically, the constructor does nothing, yet declares a checked exception.
The problem is that fixing this now would be a non-passive change.
I couldn't really find an obvio
Hello,
I know that a custom jar can be added to the Hive classpath via the
"--auxpath" option. But should any transitive dependencies that my jar
depends on be added explicitly to the classpath too? I tried doing that,
but I still get a "ClassNotFoundException" for classes in my transitive
If you are using the latest release (0.9), you need at least hbase-0.92
installed. If you are using the CDH stack, I would recommend recompiling
Hive with the CDH dependencies to avoid any surprises. You can find more
information about it here [1].
[1] https://cwiki.apache.org/Hive/hbaseintegratio
Have you tried using EXPLAIN [1] on your query? I usually like to use it to
get a better understanding of what my query is actually doing, and for
debugging at other times.
[1] https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Explain
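Usage is just a matter of prefixing the query, e.g. with the earlier
example table:

hive> EXPLAIN SELECT COUNT(*) FROM invites;

The output lays the plan out as stages (map/reduce tasks and their
operators), which usually makes it clear where the time is going.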
On Tue, Aug 7, 2012 at 12:20 PM, Raihan Jamal wrote
What is the Hive version that you are using?
On Tue, Aug 7, 2012 at 12:57 PM, Techy Teck wrote:
> I am not sure about the data, but when we do
>
> SELECT count(*) from data_realtime where dt='20120730' and uid is null
>
> I get the count
>
> but If I do-
>
> SELECT * from data_realtime where dt=
e, Aug 7, 2012 at 11:04 AM, kulkarni.swar...@gmail.com <
> kulkarni.swar...@gmail.com> wrote:
>
>> What is the hive version that you are using?
>>
>>
>> On Tue, Aug 7, 2012 at 12:57 PM, Techy Teck wrote:
>>
>>> I am not sure about the data, b
07251201_0677
> with errors
>
> 2012-08-14 11:56:21,151 ERROR ql.Driver
> (SessionState.java:printError(365)) - FAILED: Execution Error, return code
> 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
>
>
> Any ideas?
>
>
> Thank you.
&
plications should implement Tool for the same.
>
>
> 2012-08-14 11:56:21,132 ERROR exec.MapRedTask
> (SessionState.java:printError(365)) - Ended Job = job_201207251201_0677
> with errors
>
> 2012-08-14 11:56:21,151 ERROR ql.Driver
> (SessionState.java:printEr
Mayank,
Just out of curiosity: any reason other than convention to preserve the
case of column names in Hive?
On Tue, Aug 14, 2012 at 6:38 PM, Travis Crawford
wrote:
> On Tue, Aug 14, 2012 at 4:20 PM, Edward Capriolo wrote:
>
>>
>> Just changing the code is not as easy as it sounds. It
It's probably looking for that file on HDFS. Try placing it there under the
given location and see if you get the same error.
On Wed, Sep 19, 2012 at 4:45 PM, yogesh dhari wrote:
> Hi all,
>
> I am trying to run hive wi but its showing FATAL,
>
> I have used this command
> *
> hive --service hw
over there and still the same issue..
>
> Thanks & Regards
> Yogesh Kumar
>
> ------
> From: kulkarni.swar...@gmail.com
> Date: Wed, 19 Sep 2012 16:48:37 -0500
> Subject: Re: ERROR :regarding Hive WI, hwi service is not running
> To: user@hive.apache.org
> "hbase.columns.mapping" = ":key,mtdt:string,il:string,ol:string"
This doesn't look right. The mapping should be of form
COLUMN_FAMILY:COLUMN_QUALIFIER. In this case it seems to be
COLUMN_FAMILY:TYPE which is not right.
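With made-up qualifier names, a correct mapping would look like:

"hbase.columns.mapping" = ":key,mtdt:created,il:in_links,ol:out_links"

The Hive column types (string etc.) belong in the CREATE TABLE column list,
not in the mapping string.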
On Thu, Oct 4, 2012 at 3:25 PM, wrote:
> Hi,
>
> In hive shell I did
>
> c
Can you try creating a table like this:
CREATE EXTERNAL TABLE hbase_table_2(key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "xyz");
Now do a select * from hbase
Hi David,
First of all, your columns are not "long"; they are binary as well.
Currently, as Hive stands, there is no support for binary qualifiers.
However, I recently submitted a patch for that [1]. Feel free to give it a
shot and let me know if you see any issues. With that patch, you can
directly
Hi David,
DROP TABLE is the right command to drop a table. You can look at
the Hive logs under "/tmp//hive.log" to see why your shell is
hanging. When dropping an EXTERNAL TABLE, you are guaranteed that the
underlying HBase table won't be touched.
On Sun, Dec 9, 2012 at 6:06 PM, David Koch wro
Hey Mohan,
Could you describe your question in a bit more detail? Hopefully the wiki
here [1] answers your questions.
[1] https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration
On Thu, Jan 1, 2015 at 2:38 PM, Mohan Krishna
wrote:
> Any body know Hive-HBase Integration process?
>
>
>
> Thanks
Congratulations Sergey! Well deserved!
On Fri, Feb 27, 2015 at 1:51 AM, Vinod Kumar Vavilapalli <
vino...@hortonworks.com> wrote:
> Congratulations and keep up the great work!
>
> +Vinod
>
> On Feb 25, 2015, at 8:43 AM, Carl Steinbach wrote:
>
> > I am pleased to announce that Sergey Shelukhin h
Congratulations!!
On Wed, Apr 15, 2015 at 10:57 AM, Viraj Bhat
wrote:
> Mithun Congrats!!
> Viraj
>
> From: Carl Steinbach
> To: d...@hive.apache.org; user@hive.apache.org; mit...@apache.org
> Sent: Tuesday, April 14, 2015 2:54 PM
> Subject: [ANNOUNCE] New Hive Committer - Mithun Radha
Ibrar,
This seems to be an issue with the cluster rather than the integration
itself. Can you make sure that HBase is happy and healthy and that all
RegionServers are up and running?
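A quick sanity check from the HBase side (run on a cluster node):

hbase shell
status 'simple'  # lists live/dead region servers; plain 'status' summarizes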
On Wed, May 13, 2015 at 1:58 PM, Ibrar Ahmed wrote:
> Hi,
>
> I am creating a table using hive and getting this error.
>
> [127.
Hi Ibrar,
It seems like your Hive and HBase versions are incompatible. Which versions
of Hive and HBase are you on?
On Thu, May 14, 2015 at 3:21 PM, Ibrar Ahmed wrote:
> Hi,
>
> While creating a table in Hive I am getting this error message.
>
> CREATE TABLE abcd(key int, value string) STORED BY
S ("
> hbase.table.name" = "xyz");
>
>
> But "list jars" also shows nothing.
>
>
>
> On Fri, May 15, 2015 at 1:29 AM, Ibrar Ahmed
> wrote:
>
>> Hive : 0.13
>> Hbase: 1.0.1
>>
>>
>>
>> On Fri, May 15, 20
Sarath,
I assume the failure you are seeing doesn't happen immediately? The
current timeout on the client is set to 5 minutes. A socket timeout usually
means that the client timed out before it could even get a response from
the server. So the server could either be very busy doing something if
@Xuefu While you are already at it, would you mind giving me this access
too? :)
Thanks,
On Mon, Aug 10, 2015 at 2:37 PM, Xuefu Zhang wrote:
> Done!
>
> On Mon, Aug 10, 2015 at 1:05 AM, Xu, Cheng A wrote:
>
>> Hi,
>>
>> I’d like to have write access to the Hive wiki. My Confluence username is
Sanjeev,
Can you tell me more about your Hive version, Hadoop version, etc.?
On Wed, Aug 19, 2015 at 1:35 PM, Sanjeev Verma
wrote:
> Can somebody gives me some pointer to looked upon?
>
> On Wed, Aug 19, 2015 at 9:26 AM, Sanjeev Verma
> wrote:
>
>> Hi
>> We are experiencing a strange prob
ive-0.13 with hadoop1.
>
> On Thu, Aug 20, 2015 at 11:49 AM, kulkarni.swar...@gmail.com <
> kulkarni.swar...@gmail.com> wrote:
>
>> Sanjeev,
>>
>> Can you tell me more details about your hive version/hadoop version etc.
>>
>> On Wed, Aug 19, 2015 at 1:
> my understanding is that after using kerberos authentication, you
probably don’t need the password.
That is not an accurate statement. Beeline is a JDBC client, as compared to
the Hive CLI, which is a Thrift client that talks to HiveServer2. So Beeline
would need the password to establish that JDBC connection.
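For reference, Beeline takes the credentials as part of the JDBC connect
string or via flags (host, user, and password below are placeholders):

beeline -u "jdbc:hive2://hs2-host:10000/default" -n myuser -p mypassword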
ior, but when Kerberos is enabled, isn't that a bit
> redundant ?
>
> Loïc CHANEL
> Engineering student at TELECOM Nancy
> Trainee at Worldline - Villeurbanne
>
> 2015-08-26 17:53 GMT+02:00 kulkarni.swar...@gmail.com <
> kulkarni.swar...@gmail.com>:
>
>>
c5068cac296f32e24e97cf87efa266c/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java#L450-L455
On Wed, Aug 26, 2015 at 5:40 PM, Lars Francke
wrote:
>
> On Wed, Aug 26, 2015 at 4:53 PM, kulkarni.swar...@gmail.com <
> kulkarni.swar...@gmail.com> wrote:
>
>> > my unde
Congrats!
On Mon, Sep 7, 2015 at 3:54 AM, Carl Steinbach wrote:
> The Apache Hive PMC has voted to make Lars Francke a committer on the
> Apache Hive Project.
>
> Please join me in congratulating Lars!
>
> Thanks.
>
> - Carl
>
>
--
Swarnim
; teValue(TColumn.java:381)
> Local Variable: org.apache.hive.service.cli.thrift.TColumn#504
> Local Variable: org.apache.hive.service.cli.thrift.TStringColumn#453
> at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:244)
> at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:213)
> at org.apach
g.
>>
>> On Tue, Sep 8, 2015 at 8:14 PM, kulkarni.swar...@gmail.com <
>> kulkarni.swar...@gmail.com> wrote:
>>
>>> How much memory have you currently provided to HS2? Have you tried
>>> bumping that up?
>>>
>>> On Mon, Sep 7,
Congratulations! Well deserved!
On Thu, Sep 17, 2015 at 12:03 AM, Vikram Dixit K
wrote:
> Congrats Ashutosh!
>
> On Wed, Sep 16, 2015 at 9:01 PM, Chetna C wrote:
>
>> Congrats Ashutosh !
>>
>> Thanks,
>> Chetna Chaudhari
>>
>> On 17 September 2015 at 06:53, Navis Ryu wrote:
>>
>> > Congratulat