[ https://issues.apache.org/jira/browse/ATLAS-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875965#comment-16875965 ]

Adam Rempter commented on ATLAS-3305:
-------------------------------------

Yes, it's true, it just spawns multiple consumers.

I guess, since Atlas uses Kafka, which is a distributed message broker, there is 
by definition no real way (at least for now?) to guarantee global consistency. 


One way to mitigate this would be to have the producer set a message key, so 
that at least ordering is preserved within each partition. 

The key could be either the userId (one user per type of service, e.g. Hive) or 
the entity name.

In a stricter mode (a configuration option?), the Atlas consumer could then 
check whether a message has a key and discard it if not.
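To illustrate the idea, here is a minimal sketch of why keying preserves per-entity order. The names are hypothetical, and the hash is a simplified stand-in for Kafka's real default partitioner (which uses murmur2 on the key bytes); the point is only that a fixed key always maps to the same partition, so records for one entity keep their relative order.

```java
import java.util.ArrayList;
import java.util.List;

public class KeyedPartitionSketch {

    // Simplified stand-in for Kafka's default partitioner:
    // hash the key and take it modulo the partition count.
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int partitions = 3;
        // All notifications for the same entity share a key (here, a
        // hypothetical entity name), so they map to the same partition
        // and keep their relative order within it.
        String entityKey = "hive_table:db1.orders";
        List<Integer> assigned = new ArrayList<>();
        for (int i = 0; i < 5; i++) {
            assigned.add(partitionFor(entityKey, partitions));
        }
        // Prints five identical partition numbers.
        System.out.println(assigned);
    }
}
```

Messages without a key would be spread round-robin across partitions, which is exactly the case where no ordering can be guaranteed.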

> Unable to scale atlas kafka consumers
> -------------------------------------
>
>                 Key: ATLAS-3305
>                 URL: https://issues.apache.org/jira/browse/ATLAS-3305
>             Project: Atlas
>          Issue Type: Bug
>          Components:  atlas-core, atlas-intg
>    Affects Versions: 1.1.0, 2.0.0
>            Reporter: Adam Rempter
>            Priority: Major
>              Labels: performance
>
> We wanted to scale Kafka consumers for Atlas, as we are getting many lineage 
> messages and processing them with just one consumer is not enough. 
>  
> There is a parameter, atlas.notification.hook.numthreads, to scale consumers 
> in NotificationHookConsumer.
> But the method:
>  
> notificationInterface.createConsumers(NotificationType.HOOK, numThreads)
>  
> always returns a one-element list, which effectively starts only one 
> consumer:
> List<NotificationConsumer<T>> consumers = 
> Collections.singletonList(kafkaConsumer);
>  
> The log incorrectly says that the requested number of consumers has been created:
> LOG.info("<== KafkaNotification.createConsumers(notificationType={}, 
> numConsumers={}, autoCommitEnabled={})", notificationType, numConsumers, 
> autoCommitEnabled)
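For reference, the shape of the fix the report implies is to build the list in a loop instead of Collections.singletonList. This is only a sketch with hypothetical names: the Supplier stands in for however Atlas actually constructs an AtlasKafkaConsumer, which is not shown here.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class CreateConsumersSketch {

    // Create numConsumers consumers instead of wrapping a single one in
    // Collections.singletonList, which is the bug described above.
    static <T> List<T> createConsumers(Supplier<T> factory, int numConsumers) {
        List<T> consumers = new ArrayList<>(numConsumers);
        for (int i = 0; i < numConsumers; i++) {
            consumers.add(factory.get());
        }
        return consumers;
    }

    public static void main(String[] args) {
        // With the singletonList bug this would always have size 1,
        // regardless of atlas.notification.hook.numthreads.
        List<Object> consumers = createConsumers(Object::new, 4);
        System.out.println(consumers.size()); // prints 4
    }
}
```

Note that scaling consumers beyond the number of partitions of the hook topic would still leave the extra consumers idle, since Kafka assigns at most one consumer per partition within a consumer group.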



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
