This can't be done, since INSERT, UPDATE, and DELETE are not supported in Hive.
The MySQL Applier for Hadoop package serves the same purpose as the prototype
tool which I intended to develop.
Link for the MySQL Applier for Hadoop:
http://dev.mysql.com/tech-resources/articles/mysql-hadoop-applier.html
Hi all,
I have two Hive tables pointing to the same location (folder). When I join
these tables, Hive fails in map-reduce with the error
java.lang.RuntimeException: Error in configuring object
at
org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
at
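For reference, a minimal setup that reproduces this situation might look like the sketch below (the table names, columns, and path are hypothetical, not taken from the original report):

```sql
-- Hypothetical repro: two external tables backed by the same HDFS folder
CREATE EXTERNAL TABLE t1 (id INT, val STRING)
LOCATION '/data/shared';

CREATE EXTERNAL TABLE t2 (id INT, val STRING)
LOCATION '/data/shared';

-- A join of the two tables is then submitted as a map-reduce job
SELECT a.id, b.val
FROM t1 a JOIN t2 b ON (a.id = b.id);
```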
Hi,
Many thanks for sending these links. Looking forward to more documentation
around this.
BTW, why does hive-exec-0.13.0.2.1.1.0-385.jar not have any class files
for the MatchPath UDF? Have they been chucked out to a separate JAR file?
I can see that hive-exec-0.13.0.jar has the appropriate
Hi Nathan,
This was done in https://issues.apache.org/jira/browse/HIVE-6248. The
reasoning was to minimize the API surface area exposed to users, so that they
are immune to incompatible changes in internal classes, making it easier for
them to consume this without worrying about version upgrades. Seems
Hi Muhammad,
From what I googled a few months ago on the subject, the MatchPath UDF has
been removed from the Cloudera and Hortonworks releases because Teradata
claims it violates one of their patents (apparently renaming it did not
suffice).
I guess that if you really need it, it might be possible to
Hi Furcy,
Many thanks for your email :)
My latest info was that the rename took place due to objections by
Teradata, but I didn't know whether they had actually requested to take it off
the distribution entirely.
Does anybody else have an idea on the licensing aspect of this? What
exactly has Teradata
Dear All,
Kindly help with the HWI setup. I am using Cygwin on Windows 8, JDK 1.7, and
Ant 1.9. I have successfully built Hive and am able to start the HWI war using
the command hive --service hwi. It shows Started
SocketConnector@0.0.0.0: in the console.
The problem I am facing is when I open the
Hi Ashutosh,
Thanks for the reply!
Well, we are a YARN app that is essentially doing the same things MapReduce
does. For regular files in Hadoop, we get the block locations and sizes and
perform some internal sorting and load balancing on the master, which then
creates the slave YARN apps
This API is designed exactly for use cases like yours. So I would say the API
is failing if it cannot service what you are trying to do with it. I would
encourage you to use this API and consider the current shortcoming a missing
feature in it.
Feel free to file a JIRA requesting the addition of these
I have two tables in Hive:
Table1: uid, txid, amt, vendor
Table2: uid, txid
Now I need to join the tables on txid, which basically confirms that a
transaction is finally recorded. There will be some transactions which will
be present only in Table1 and not in Table2.
I need to find out number of avg of
I don't know of anything like what you want, at least up until Hive 0.11.
However, you could try something like this:
INSERT OVERWRITE TABLE rollup
SELECT id, start_time, collect_set(concat_ws(',', objects.name,
objects.value, objects.type)) AS product_details
FROM bar
GROUP BY id, start_time;
It's a
Geez, too many mistakes for the day :P
I meant the following above
*CREATE TABLE* rollup_new *AS*
SELECT id, start_time, collect_set(concat_ws(',', object.name,
object.value, object.type)) AS product_details
FROM bar
GROUP BY id, start_time;
The change is in the table
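To make the intent concrete, here is a hedged walk-through of the corrected query with hypothetical sample data (the `bar` rows below are invented for illustration):

```sql
-- Suppose bar holds rows like:
--   id | start_time | object.name | object.value | object.type
--   1  | 100        | widget      | 9.99         | A
--   1  | 100        | gadget      | 4.99         | B
--
-- GROUP BY id, start_time collapses these to one row per group, and
-- collect_set gathers the concatenated strings into an array, e.g.
--   1 | 100 | ["widget,9.99,A", "gadget,4.99,B"]
CREATE TABLE rollup_new AS
SELECT id, start_time,
       collect_set(concat_ws(',', object.name, object.value, object.type))
         AS product_details
FROM bar
GROUP BY id, start_time;
```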
try this:
SELECT
  t.vendor,
  AVG(totalmatchesperuser) AS avgmatches
FROM
  (SELECT
    A.vendor,
    A.uid,
    count(*) AS totalmatchesperuser
  FROM
    Table1 A INNER JOIN
    Table2 B
    ON A.uid = B.uid AND B.txid = A.txid
  GROUP BY
    A.vendor,
    A.uid
  ) t
GROUP BY
  t.vendor
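As a hedged illustration of what the two-level aggregation computes, with invented sample rows:

```sql
-- Hypothetical data:
--   Table1: (u1, tx1, 10, acme), (u1, tx2, 20, acme), (u2, tx3, 5, acme)
--   Table2: (u1, tx1), (u1, tx2), (u2, tx3)
-- The inner query counts matched transactions per (vendor, uid):
--   (acme, u1, 2), (acme, u2, 1)
-- The outer query then averages those counts per vendor:
--   (acme, 1.5)
```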
On Thu, Sep 4, 2014 at 3:38 AM, Mohit Durgapal