Unsubscribe.

2014-09-30 Thread Sri kripa
Hi,

Kindly unsubscribe me for the following IDs.

Thanks
Sridevi


Re: Unsubscribe.

2014-09-30 Thread Lefty Leverenz
Hive has a separate email address for unsubscribing -- see
http://hive.apache.org/mailing_lists.html.

Hadoop and Pig probably have a similar system for unsubscribing.  Check
their websites for instructions.

-- Lefty

On Tue, Sep 30, 2014 at 2:03 AM, Sri kripa srikripa2...@gmail.com wrote:

 Hi,

 Kindly unsubscribe me for the following IDs.

 Thanks
 Sridevi



Query regarding Hive hanging while running a hive query

2014-09-30 Thread Kandpal, Ritu
Hi,

I have a question regarding Hive. I am running the following simple queries on the 
Hive CLI, and Hive does not respond to the "insert into table ... select ..." statement:


* create table test_temp (year int, month int) ROW FORMAT delimited fields terminated by ',' STORED AS TEXTFILE;

* load data local inpath '/var/opt/hive/test_temp.txt' into table test_temp;

* create table test (year int, month int);

* insert into table test select year, month from test_temp;

Total MapReduce jobs = 3

Launching Job 1 out of 3

Number of reduce tasks is set to 0 since there's no reduce operator

Starting Job = job_1407754524966_0222, Tracking URL = 
http://vm-test:8088/proxy/application_1407754524966_0222/

Kill Command = /var/opt/hadoop/hadoop/bin/hadoop job  -kill 
job_1407754524966_0222




After running the last statement, "insert into table test select year, month 
from test_temp", Hive hangs after printing the Kill Command line.

Hive logs are also pasted below:



hive.log output:

2014-09-30 01:35:06,108 WARN  common.LogUtils 
(LogUtils.java:logConfigLocation(142)) - hive-site.xml not found on CLASSPATH
2014-09-30 01:35:06,425 WARN  util.NativeCodeLoader 
(NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for 
your platform... using builtin-java classes where applicable
2014-09-30 01:44:17,442 WARN  bonecp.BoneCPConfig 
(BoneCPConfig.java:sanitize(1537)) - Max Connections < 1. Setting to 20
2014-09-30 01:44:18,992 WARN  bonecp.BoneCPConfig 
(BoneCPConfig.java:sanitize(1537)) - Max Connections < 1. Setting to 20
2014-09-30 01:44:21,460 WARN  mapreduce.JobSubmitter 
(JobSubmitter.java:copyAndConfigureFiles(150)) - Hadoop command-line option 
parsing not performed. Implement the Tool interface and execute your 
application with ToolRunner to remedy this.


Can anyone please let me know whether this is a configuration issue, or some 
other issue that is causing Hive to hang?
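One way to narrow this down (an editorial sketch, not part of the original message) is to rerun the insert with Hive's automatic local mode, which lets small jobs run as local MapReduce instead of being submitted to the cluster:

```sql
-- Hedged diagnostic sketch: hive.exec.mode.local.auto is a real Hive
-- setting. If the insert completes in local mode, the hang is likely on
-- the cluster side (e.g. no free containers for the application shown
-- in the Tracking URL) rather than in Hive itself.
set hive.exec.mode.local.auto=true;
insert into table test select year, month from test_temp;
```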



Thanks,
Ritu


Unsubscribe.

2014-09-30 Thread S Byrne
 Kindly unsubscribe me for the following IDs.
 
 Thanks
 Stan
 


access counter info at the end of query execution

2014-09-30 Thread Frank Luo
All,

I developed a UDF to increment counters in certain situations. However, I am not 
able to find a way to read the counters at the end of a query run.

I have looked at HiveDriverRunHook and ExecuteWithHookContext. Neither class 
allows me to access counters.

Is there a way to get around this limitation?


Re: hive 0.13.0 guava issues

2014-09-30 Thread Viral Bajaria
(Take 3... for some reason my reply emails are getting rejected by apache
mailing list)

2 things here:

I looked at the discussion, and the concern there is more about breaking
user code that assumes that guava 11.0 will be available via Hadoop vs.
anything breaking in Hadoop itself. I think that's a somewhat flawed
argument, and everyone has been hacking around it anyway, using the
latest guava via CLASSPATH hacks.

The bigger issue is hive-exec in 0.13 packaging guava 11 as a fat jar vs.
having it as a library dependency that can easily be removed if needed.

Previously I had a hack to let the ClassLoader load the guava that my user
code cares about most, i.e. the latest one.

I've never seen Hadoop break because of that.
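The ClassLoader hack mentioned above is not spelled out in the email; a minimal sketch of the usual "child-first" approach might look like the following (the class name and setup are illustrative, not from the original thread). The idea is that jars on the loader's own URLs, e.g. a newer guava, win over the guava 11 on the parent classpath.

```java
import java.net.URL;
import java.net.URLClassLoader;

// Child-first class loader sketch: try our own URLs before delegating
// to the parent, inverting the normal parent-first delegation model.
public class ChildFirstClassLoader extends URLClassLoader {

    public ChildFirstClassLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                try {
                    // Child first: look in our own URLs before the parent.
                    c = findClass(name);
                } catch (ClassNotFoundException e) {
                    // Not on our URLs: fall back to normal parent delegation.
                    c = super.loadClass(name, false);
                }
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }

    public static void main(String[] args) throws Exception {
        // With no URLs of its own, everything falls back to the parent:
        ChildFirstClassLoader cl = new ChildFirstClassLoader(
                new URL[0], ChildFirstClassLoader.class.getClassLoader());
        System.out.println(cl.loadClass("java.lang.String") == String.class);
        // prints: true
    }
}
```

The usual caveat with child-first loading applies: if an instance of a child-loaded class leaks into code that expects the parent's version of the same class, you get ClassCastException/LinkageError, which is exactly the isolation problem discussed later in this thread.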

Thanks,
Viral



On Mon, Sep 29, 2014 at 11:45 AM, Thejas Nair the...@hortonworks.com
wrote:

 The guava jar is there as part of Hadoop, and Hadoop uses the guava 11.0 jar.
 As the guava versions are not fully compatible with each other, Hive
 can't upgrade unless Hadoop also upgrades, or we find a good way to
 isolate the jar usage.
 See discussion in hadoop about upgrading the guava version -

 http://search-hadoop.com/m/LgpTk2MpkYf1/guava+stevesubj=Time+to+address+the+Guava+version+problem

 On Fri, Sep 26, 2014 at 4:00 PM, Viral Bajaria viral.baja...@gmail.com
 wrote:
  Hi,
 
  We just upgraded from hive 0.11.0 to 0.13.0 (finally!!)
 
  So I noticed that for hive-exec jar, guava is packaged in the jar v/s
  previously it wasn't.
 
  Any reason it is packaged now?
 
  Secondly, is there anything stopping us from bumping the guava version
  from 11.0 to the latest one?
 
  Given that 11.0 was released a few years ago, shouldn't we update it?
 
  Happy to create the JIRA and make that update if there is consensus on
 that.
 
  Thanks,
  Viral
 

 --
 CONFIDENTIALITY NOTICE
 NOTICE: This message is intended for the use of the individual or entity to
 which it is addressed and may contain information that is confidential,
 privileged and exempt from disclosure under applicable law. If the reader
 of this message is not the intended recipient, you are hereby notified that
 any printing, copying, dissemination, distribution, disclosure or
 forwarding of this communication is strictly prohibited. If you have
 received this communication in error, please contact the sender immediately
 and delete it from your system. Thank You.



Hive nested query result storage

2014-09-30 Thread Vikash Talanki -X (vtalanki - INFOSYS LIMITED at Cisco)
Hi All,

Please help me understand where and how Hive stores the temporary result of a 
nested query.

I have written a UDF which reads the data from a table t1 in a nested query.
Table t1 should be in ascending order, and I have to make sure that t1's data 
is processed by a single mapper. The reason for a single mapper is that my 
UDF contains some global variables which get initialized per mapper, and if t1 
is processed by multiple mappers the output would be wrong.

Query:

select gsid, contract, max_date, min_date,
       contract_rangeId(gsid, contract, max_date, min_date) as range_id
from (select gsid, contract, max_date, min_date
      from tmp_rcc_normwk_gs0_test3
      order by gsid, contract, max_date, min_date) t1;

Since the nested query (select gsid, contract, max_date, min_date from 
tmp_rcc_normwk_gs0_test3 order by gsid, contract, max_date, min_date) runs with 
only one reducer, will the outer query run with only one mapper?
If yes, where is the output of the nested query stored: HDFS or the local file system?
I'd love to get some help on this.
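For what it's worth (an editorial note, hedged for Hive of this era): intermediate results of multi-stage queries are written under Hive's scratch directory, which by default lives on HDFS rather than the local file system. The configured location can be checked from the CLI:

```sql
-- Show where Hive stages intermediate (between-stage) data;
-- in Hive 0.x the default is an HDFS path such as /tmp/hive-<user>,
-- controlled by the hive.exec.scratchdir property.
set hive.exec.scratchdir;
```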

Vikash Talanki
Engineer - Software
vtala...@cisco.com
Phone: +1 (408)838 4078

Cisco Systems Limited
SJ-J 3
255 W Tasman Dr
San Jose
CA - 95134
United States
Cisco.com: http://www.cisco.com/





Think before you print.

This email may contain confidential and privileged material for the sole use of 
the intended recipient. Any review, use, distribution or disclosure by others 
is strictly prohibited. If you are not the intended recipient (or authorized to 
receive for the recipient), please contact the sender by reply email and delete 
all copies of this message.
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/index.html





Re: hive 0.13.0 guava issues

2014-09-30 Thread Thejas Nair
Regarding the rejects by the mailing list, try sending emails as plain
text (not html), and see if that helps.

Please reply to that thread on the hadoop mailing list about guava 11,
more feedback there will help.

I am not sure why guava moved into the hive-exec fat jar in 0.13. Feel
free to open a jira about that.



On Tue, Sep 30, 2014 at 11:19 AM, Viral Bajaria viral.baja...@gmail.com wrote:



user as table alias is not allowed in hive 0.13

2014-09-30 Thread wzc
We just upgraded our Hive from 0.11 to 0.13, and we find that running
"select * from src1 user limit 5;" in Hive 0.13 reports the following
error:

 ParseException line 1:14 cannot recognize input near 'src1' 'user' 'limit'
 in from source


I don't know why "user" would be a reserved keyword in Hive 0.13. It
doesn't make sense to me.
There are some SQL queries in our data warehouse using "user" as a table alias,
and I'm trying to fix them. Any help is appreciated.
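A hedged workaround sketch (an editorial addition, assuming Hive 0.13's default hive.support.quoted.identifiers=column behavior; verify against your build): quote the now-reserved word in backticks, or simply rename the alias:

```sql
-- Backtick-quote the reserved word so it parses as an identifier:
select * from src1 `user` limit 5;
-- Or sidestep the keyword entirely by renaming the alias:
select * from src1 usr limit 5;
```

Renaming is the safer fix, since it does not depend on any quoted-identifier setting.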

Thanks.