[ https://issues.apache.org/jira/browse/ATLAS-629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hemanth Yamijala updated ATLAS-629:
-----------------------------------
    Attachment: ATLAS-629-3.patch

I made just one change to the last patch. When initializing the 
autoCommitEnabled variable for the Kafka consumer, I default the value to true 
to maintain backwards compatibility, considering cases like Ranger. It can of 
course be overridden in configuration as usual.

{code}
-+                Boolean.valueOf(properties.getProperty("auto.commit.enable", "false")));
++                Boolean.valueOf(properties.getProperty("auto.commit.enable", "true")));
{code}
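For illustration, a minimal self-contained sketch of the defaulting behaviour the patch changes. The class and helper names here are hypothetical; only the property key {{auto.commit.enable}}, the {{properties.getProperty}} call, and the {{true}} default come from the patch itself.

```java
import java.util.Properties;

public class AutoCommitDefaultSketch {
    // Hypothetical helper mirroring the patched line: read the Kafka
    // consumer's "auto.commit.enable" setting, defaulting to true to
    // stay backwards compatible with existing consumers (e.g. Ranger).
    static boolean isAutoCommitEnabled(Properties properties) {
        return Boolean.valueOf(properties.getProperty("auto.commit.enable", "true"));
    }

    public static void main(String[] args) {
        // No explicit setting: falls back to the backwards-compatible default.
        Properties defaults = new Properties();
        System.out.println(isAutoCommitEnabled(defaults)); // true

        // Overridden in configuration, as the comment above notes is still possible:
        // auto-commit is disabled so offsets are committed only after processing.
        Properties overridden = new Properties();
        overridden.setProperty("auto.commit.enable", "false");
        System.out.println(isAutoCommitEnabled(overridden)); // false
    }
}
```

Disabling auto-commit is what lets the fix avoid losing messages at failover: offsets are committed only after a message is fully processed, so an unprocessed message is redelivered to the new active instance.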

> Kafka messages in ATLAS_HOOK might be lost in HA mode at the instant of 
> failover.
> ---------------------------------------------------------------------------------
>
>                 Key: ATLAS-629
>                 URL: https://issues.apache.org/jira/browse/ATLAS-629
>             Project: Atlas
>          Issue Type: Bug
>    Affects Versions: 0.7-incubating
>            Reporter: Hemanth Yamijala
>            Assignee: Hemanth Yamijala
>            Priority: Critical
>             Fix For: 0.7-incubating
>
>         Attachments: ATLAS-629-1.patch, ATLAS-629-2.patch, ATLAS-629-3.patch, 
> ATLAS-629.patch
>
>
> Write data to Kafka continuously from Hive hook - can do this by writing a 
> script that constantly creates tables. Bring down the Active instance with 
> kill -9. Ensure writes continue after passive becomes active. The expectation 
> is the number of tables created and the number of tables in Atlas match.
> In one test, created 180 tables and switched over 6 times from one instance 
> to another. Found that 1 table of the lot was lost, i.e. 180 tables were 
> created but only 179 made it into Atlas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
