Yeah, installing the MySQL Hadoop Applier took a lot of time building and installing GCC 4.6, and it's working, but it's not serving the exact purpose.
So now I am trying my own Python scripting.
The idea is to read insert queries from the binlog and save them under the Hive warehouse as a table, and query from
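A minimal sketch of that idea, assuming the binlog has already been decoded to text (e.g. with `mysqlbinlog -v`): scan the output for INSERT statements and turn each row into a tab-separated line, the default field delimiter for a Hive table. The table name, column handling, and regex here are deliberately simplistic placeholders; a real tool would use a binlog-parsing library instead of regexes.

```python
import re

# Naive pattern for single-row INSERT statements in decoded binlog text.
# Assumes literal VALUES lists with no commas inside string literals.
INSERT_RE = re.compile(
    r"INSERT INTO\s+`?(\w+)`?\s*(?:\([^)]*\))?\s*VALUES\s*\(([^)]*)\)",
    re.IGNORECASE,
)

def inserts_to_hive_rows(binlog_text):
    """Yield (table, tab_separated_row) pairs for each INSERT found."""
    for match in INSERT_RE.finditer(binlog_text):
        table, values = match.groups()
        # Split the VALUES list and strip surrounding quotes.
        fields = [v.strip().strip("'\"") for v in values.split(",")]
        yield table, "\t".join(fields)

sample = """
INSERT INTO `sales` (id, item, amount) VALUES (1, 'pen', 10)
INSERT INTO `sales` (id, item, amount) VALUES (2, 'book', 25)
"""
rows = list(inserts_to_hive_rows(sample))
```

The tab-separated lines can then be appended to a file under the table's warehouse directory, which Hive will pick up on the next query.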
Interesting, thanks Muthu.
A colleague of mine pointed out this one too, LinkedIn's Databus (
https://github.com/linkedin/databus/wiki). This one looks extremely
heavyweight, and again I'm not sure it's worth the headache.
I like the idea of a trigger on the MySQL table and then broadcasting the
data.
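The trigger-and-broadcast idea can be sketched as follows, using SQLite as a stand-in for MySQL so the example is self-contained: a trigger copies every new row into a changelog table, and a poller reads the pending changes to broadcast them (here it just collects them). Table and column names are illustrative only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (id INTEGER PRIMARY KEY, item TEXT, amount REAL);
    CREATE TABLE sales_changelog (
        change_id INTEGER PRIMARY KEY AUTOINCREMENT,
        id INTEGER, item TEXT, amount REAL
    );
    -- The trigger records every insert in the changelog.
    CREATE TRIGGER sales_after_insert AFTER INSERT ON sales
    BEGIN
        INSERT INTO sales_changelog (id, item, amount)
        VALUES (NEW.id, NEW.item, NEW.amount);
    END;
""")

def poll_changes(conn, last_seen):
    """Return changelog rows newer than last_seen (the broadcast step)."""
    cur = conn.execute(
        "SELECT change_id, id, item, amount FROM sales_changelog "
        "WHERE change_id > ? ORDER BY change_id", (last_seen,))
    return cur.fetchall()

conn.execute("INSERT INTO sales (id, item, amount) VALUES (1, 'pen', 10)")
conn.execute("INSERT INTO sales (id, item, amount) VALUES (2, 'book', 25)")
changes = poll_changes(conn, 0)
```

In MySQL the trigger syntax differs slightly, and the broadcast step could push the changelog rows to Hive or to a message queue instead of returning them.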
Great find, Muthu. I would be interested in hearing about any successes
or failures using this adapter; it almost sounds too good to be true.
After reading the blog post about it (
http://innovating-technology.blogspot.com/2013/04/mysql-hadoop-applier-part-2.html)
I see it comes with caveats and it
This can't be done, since INSERT, UPDATE, and DELETE are not supported in Hive.
The MySQL Applier for Hadoop package serves the same purpose as the prototype
tool which I intended to develop.
Link for the MySQL Applier for Hadoop:
http://dev.mysql.com/tech-resources/articles/mysql-hadoop-applier.html
Dear All,
I am developing a prototype for syncing tables from MySQL to Hive using
Python and JDBC. Is it a good idea to use JDBC for this purpose?
My use case will be generating sales reports using Hive, with the data pulled
from MySQL by the prototype tool. My data will be around 2 GB/day.
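One way to sketch the pull-and-load approach: read rows through any DB-API connection (with JDBC, the JayDeBeApi package exposes the same cursor interface) and write them as tab-separated text, which a Hive table declared with `ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'` can ingest via `LOAD DATA`. SQLite stands in for MySQL here so the example runs self-contained; table names are placeholders.

```python
import io
import sqlite3

def export_table(conn, table, out):
    """Dump all rows of `table` as tab-separated lines into `out`."""
    cur = conn.execute(f"SELECT * FROM {table}")
    count = 0
    for row in cur:
        out.write("\t".join(str(col) for col in row) + "\n")
        count += 1
    return count

# SQLite stand-in for the MySQL source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, item TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [(1, "pen", 10.0), (2, "book", 25.0)])
buf = io.StringIO()
n = export_table(conn, "sales", buf)
```

At 2 GB/day a full-table dump like this is workable but wasteful; an incremental pull keyed on a timestamp or auto-increment column would scale better.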
Have you looked at Sqoop?
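For reference, a typical Sqoop import straight into Hive for this kind of job might look like the following; the host, database, table, and username are placeholders.

```shell
# Hypothetical invocation: pull the sales table from MySQL and load it
# into Hive; --hive-import creates the Hive table if it does not exist,
# and -P prompts for the password.
sqoop import \
  --connect jdbc:mysql://dbhost/salesdb \
  --username reporter -P \
  --table sales \
  --hive-import \
  --hive-table sales
```

Run on a schedule (or with Sqoop's incremental import options), this covers the same ground as a hand-rolled JDBC puller.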
On Wed, Sep 3, 2014 at 10:15 AM, Muthu Pandi muthu1...@gmail.com wrote: