Thanks Gopal.

SAP Replication Server (SRS) does this to Hive in real time as well. That is
the main advantage of replication: it is real time. SRS picks up committed
data from the log and sends it on to Hive. It is also way ahead of Sqoop,
which really only does the initial load. SRS bulk-inserts into the Hive table
10,000 rows at a time. Note that the Hive table cannot be transactional to
start with.
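
For anyone trying this, here is a minimal sketch of the kind of plain
(non-ACID) replicate table I mean on the Hive side. The table name t and the
column list are illustrative, loosely based on the column names in the trace
below:

-- Hypothetical replicate target in Hive. It must stay non-transactional,
-- i.e. created without TBLPROPERTIES ('transactional'='true').
CREATE TABLE t (
  owner          STRING,
  object_name    STRING,
  object_id      BIGINT,
  object_type    STRING,
  created        TIMESTAMP,
  last_ddl_time  TIMESTAMP
)
STORED AS ORC;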

I. 2016/04/08 09:38:23. REPLICATE Replication Server: Dropped subscription
<102_105_t> for replication definition <102_t> with replicate at
<hiveserver2.asehadoop>
I. 2016/04/08 09:38:31. REPLICATE Replication Server: Creating subscription
<102_105_t> for replication definition <102_t> with replicate at
<hiveserver2.asehadoop>
I. 2016/04/08 09:38:31. PRIMARY Replication Server: Creating subscription
<102_105_t> for replication definition <102_t> with replicate at
<hiveserver2.asehadoop>
T. 2016/04/08 09:38:32. (84): Command sent to 'SYB_157.scratchpad':
T. 2016/04/08 09:38:32. (84): 'begin transaction  '
T. 2016/04/08 09:38:32. (84): Command sent to 'SYB_157.scratchpad':
T. 2016/04/08 09:38:32. (84): 'select  count (*) from t  '
T. 2016/04/08 09:38:34. (84): Command sent to 'SYB_157.scratchpad':
T. 2016/04/08 09:38:34. (84): 'select OWNER, OBJECT_NAME, SUBOBJECT_NAME,
OBJECT_ID, DATA_OBJECT_ID, OBJECT_TYPE, CREATED, LAST_DDL_TIME, TIMESTAMP2,
STATUS, TEMPORARY2, GENERATED, SECONDARY, NAMESPACE, EDITION_NA
ME, PADDING1, PADDING2, ATTRIBUTE from t  '
T. 2016/04/08 09:39:54. (86): Command sent to 'hiveserver2.asehadoop':
T. 2016/04/08 09:39:54. (86): 'Bulk insert table 't' (9999 rows affected)'
T. 2016/04/08 09:40:12. (89): Command sent to 'hiveserver2.asehadoop':
T. 2016/04/08 09:40:12. (89): 'Bulk insert table 't' (9999 rows affected)'
T. 2016/04/08 09:40:34. (87): Command sent to 'hiveserver2.asehadoop':
T. 2016/04/08 09:40:34. (87): 'Bulk insert table 't' (9999 rows affected)'
T. 2016/04/08 09:40:52. (88): Command sent to 'hiveserver2.asehadoop':
T. 2016/04/08 09:40:52. (88): 'Bulk insert table 't' (9999 rows affected)'
T. 2016/04/08 09:41:11. (90): Command sent to 'hiveserver2.asehadoop':
T. 2016/04/08 09:41:11. (90): 'Bulk insert table 't' (9999 rows affected)'
T. 2016/04/08 09:41:56. (86): Command sent to 'hiveserver2.asehadoop':
T. 2016/04/08 09:41:56. (86): 'Bulk insert table 't' (10000 rows affected)'
T. 2016/04/08 09:42:30. (87): Command sent to 'hiveserver2.asehadoop':
T. 2016/04/08 09:42:30. (87): 'Bulk insert table 't' (10000 rows affected)'
T. 2016/04/08 09:42:53. (89): Command sent to 'hiveserver2.asehadoop':
T. 2016/04/08 09:42:53. (89): 'Bulk insert table 't' (10000 rows affected)'
T. 2016/04/08 09:43:14. (90): Command sent to 'hiveserver2.asehadoop':
T. 2016/04/08 09:43:14. (90): 'Bulk insert table 't' (10000 rows affected)'
T. 2016/04/08 09:43:33. (88): Command sent to 'hiveserver2.asehadoop':
T. 2016/04/08 09:43:33. (88): 'Bulk insert table 't' (10000 rows affected)'
T. 2016/04/08 09:44:25. (86): Command sent to 'hiveserver2.asehadoop':
T. 2016/04/08 09:44:25. (86): 'Bulk insert table 't' (10000 rows affected)'
T. 2016/04/08 09:44:44. (89): Command sent to 'hiveserver2.asehadoop':
T. 2016/04/08 09:44:44. (89): 'Bulk insert table 't' (10000 rows affected)'
T. 2016/04/08 09:45:37. (90): Command sent to 'hiveserver2.asehadoop':
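
Once the materialization finishes you can sanity-check the row counts at both
ends. A minimal sketch, assuming the table is called t as in the trace above
(run the first against ASE via isql, the second against Hive via beeline):

-- On the primary (ASE)
select count(*) from t

-- On the replicate (Hive)
SELECT COUNT(*) FROM t;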

Dr Mich Talebzadeh

LinkedIn:
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

http://talebzadehmich.wordpress.com

On 31 May 2016 at 22:18, Gopal Vijayaraghavan <gop...@apache.org> wrote:

>
> > Can LLAP be used as a caching tool for data from Oracle DB or any RDBMS.
>
> No, LLAP intermediates HDFS. It holds column & index data streams as-is
> (i.e. dictionary encoding, RLE, bloom filters, etc. are preserved).
>
> Because it does not cache row-tuples, it cannot serve as a caching tool
> for another RDBMS.
>
> I have heard of Oracle GoldenGate replicating into Hive, but it is not
> without its own pains of schema compat.
>
> Cheers,
> Gopal
