Hi all,
I executed a SQL query on Hive on Spark; the command looks like:

select distinct st.sno, sname from student st join score sc
on (st.sno = sc.sno) where sc.cno IN (11, 12, 13) and st.sage 28;

(Some days ago this query worked.) But it gives me this info in the Hive shell:

Query Hive on Spark job[0] stages:
0
Hi experts, I heard that if I want to use Hive UDFs, I must deploy the jars to
all the machines that are running Hive (which is painful for me...).
After reading
this https://cwiki.apache.org/confluence/display/Hive/HivePlugins I didn't
find any document talking about this - could someone help?
Thanks!
For Q1 - is there a way that Hive helps me do this automatically (for
example, can I register the UDF somewhere and have it automatically
distributed)? Or do I need to log in to each node to ensure this happens?
Xiaoyong
-Original Message-
From: Rathish A M
+ Rathish
Xiaoyong
-Original Message-
From: Xiaoyong Zhu
Sent: Thursday, December 18, 2014 5:56 PM
To: user@hive.apache.org
Subject: RE: Hive UDFs?
Thank you.
On Wed, Dec 17, 2014 at 8:55 PM, Navis류승우 navis@nexr.com wrote:
AFAIK, it was restricted by the implementation of Hadoop. But now hadoop-2
supports a custom delimiter, so hopefully it can also be implemented in Hive.
I'm not sure, but currently a possible way of doing that is setting
You only need the jar on the computer where you execute the hive command,
not on all cluster nodes.
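A minimal sketch of this session-local approach (the jar path, function name, and UDF class below are hypothetical):

```sql
-- Register a UDF for the current session only; the jar sits on the
-- client machine and is shipped to the cluster with each job.
ADD JAR /local/path/my-udfs.jar;           -- hypothetical local path
CREATE TEMPORARY FUNCTION my_lower
  AS 'com.example.hive.MyLowerUDF';        -- hypothetical UDF class
SELECT my_lower(sname) FROM student;
```

The function disappears when the session ends, which is why it never needs to be installed on the worker nodes.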
On Thu, Dec 18, 2014 at 2:55 AM, Xiaoyong Zhu xiaoy...@microsoft.com
wrote:
Another option is
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-PermanentFunctions,
as another user mentioned on this list a few days ago.
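For reference, a permanent function (Hive 0.13+) can point at a jar in a shared location such as HDFS, which avoids per-node installs; the names and paths below are hypothetical:

```sql
-- The jar is resolved from HDFS, so any CLI / HiveServer2 session can
-- load it without the jar being copied to every cluster node by hand.
CREATE FUNCTION mydb.my_lower
  AS 'com.example.hive.MyLowerUDF'         -- hypothetical UDF class
  USING JAR 'hdfs:///libs/my-udfs.jar';    -- hypothetical HDFS path
```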
On Dec 18, 2014, at 5:54 AM, Stéphane Verlet kaweahsoluti...@gmail.com wrote:
Hi, thanks for the answers so far; however, I still think there must be an easy
way.
The file format I'm looking at is pretty simple.
There is first a header of
n bytes, which can be ignored. After that there is the data.
The data consists of rows where each row has 9 bytes.
First there is a
Hi Ingo,
Take a look at
https://hadoop.apache.org/docs/r2.3.0/api/org/apache/hadoop/mapred/FixedLengthInputFormat.html -
it seems to be designed for use cases very similar to yours. You may need
to subclass it to make things work precisely the way you need (in
particular, to deal with the
Hello Andrew,
this one indeed looks like a good idea.
However, there is also another problem here. This InputFormat expects
that
conf.setInt(FixedLengthInputFormat.FIXED_RECORD_LENGTH, recordLength);
is set. I haven't found any way to specify a parameter for an InputFormat.
I couldn't
So in Hive you can actually do that via the SET command (documented here:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli) as
follows:
hive> SET fixedlengthinputformat.record.length=length;
This value will be passed through to the JobConf, and the input format
ought to
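Putting the pieces of the thread together, a rough sketch might look like the following; the table name, location, and the pairing of this input format with a raw-bytes column are assumptions, and the n-byte file header would still need handling (e.g. via a subclassed input format, as suggested above):

```sql
-- Record length (9 bytes per row, per the thread); this key is the
-- constant behind FixedLengthInputFormat.FIXED_RECORD_LENGTH and is
-- passed through to the JobConf at query time.
SET fixedlengthinputformat.record.length=9;

-- Hypothetical external table reading fixed-length binary records.
CREATE EXTERNAL TABLE fixed_rows (raw BINARY)
STORED AS
  INPUTFORMAT 'org.apache.hadoop.mapred.FixedLengthInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION '/data/fixed';                    -- hypothetical path
```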
I see, thanks!
Xiaoyong
From: Jason Dere [mailto:jd...@hortonworks.com]
Sent: Friday, December 19, 2014 3:52 AM
To: user@hive.apache.org
Subject: Re: Hive UDFs?
Sorry to update this again - but why don't we do a cross-query optimization and
make the queries into one DAG (if all the queries in a certain script are linked
with each other)? This seems a more optimized way.
Xiaoyong
From: Xiaoyong Zhu [mailto:xiaoy...@microsoft.com]
Sent: Thursday, December