[ https://issues.apache.org/jira/browse/CONNECTORS-1162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636047#comment-14636047 ]

Karl Wright commented on CONNECTORS-1162:
-----------------------------------------

Hmm, I still don't see a proper setup in this code.

Notice the corresponding code in AlfrescoConnectorTest:

{code}
  @Mock
  private AlfrescoClient client;
 
  private AlfrescoConnector connector;
    
  @Before
  public void setup() throws Exception {
    connector = new AlfrescoConnector();
    connector.setClient(client);

    when(client.fetchNodes(anyInt(), anyInt(), Mockito.any(AlfrescoFilters.class)))
            .thenReturn(new AlfrescoResponse(
                    0, 0, "", "", Collections.<Map<String, Object>>emptyList()));
  }
{code}

Here, "client" corresponds to your "producer" object. Your connector needs a 
protected method for testing, called "setProducer()", which corresponds to 
"setClient()" here -- I know you had this before.
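
A minimal sketch of that setter, assuming the connector keeps its producer in a field named "producer" (the field and type here are assumptions; adjust to your actual class):

{code}
  // Hypothetical: a protected setter used only by tests to inject a mock,
  // mirroring AlfrescoConnector.setClient().
  protected void setProducer(KafkaProducer<String, String> producer) {
    this.producer = producer;
  }
{code}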

Methods annotated with @Before are called before each test runs, and should 
create both the KafkaProducer object and the connector object. Be sure to use 
@Mock for the KafkaProducer object so that Mockito tracks it. Then, calling a 
connector method such as addOrReplaceDocument() should result in call(s) to 
your mocked producer object, so "when().thenReturn()" should work, and 
"verify()" after that.
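
Putting that together, a sketch of what the test class could look like (the class name KafkaOutputConnectorTest, the setProducer() method, and the commented-out addOrReplaceDocument() call are assumptions based on the description above; adjust to your actual code):

{code}
import static org.mockito.Mockito.verify;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.junit.Before;
import org.junit.Test;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.MockitoAnnotations;

public class KafkaOutputConnectorTest {

  @Mock
  private KafkaProducer<String, String> producer;

  private KafkaOutputConnector connector;

  @Before
  public void setup() throws Exception {
    // Initializes the @Mock fields (alternatively, use
    // @RunWith(MockitoJUnitRunner.class) on the class).
    MockitoAnnotations.initMocks(this);
    connector = new KafkaOutputConnector();
    connector.setProducer(producer);
  }

  @Test
  public void addOrReplaceDocumentSendsToProducer() throws Exception {
    // connector.addOrReplaceDocument(...);  // invoke with suitable test arguments
    // After the call, assert that the mocked producer was actually used:
    verify(producer).send(Mockito.any(ProducerRecord.class));
  }
}
{code}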

Hope this helps.

> Apache Kafka Output Connector
> -----------------------------
>
>                 Key: CONNECTORS-1162
>                 URL: https://issues.apache.org/jira/browse/CONNECTORS-1162
>             Project: ManifoldCF
>          Issue Type: Wish
>    Affects Versions: ManifoldCF 1.8.1, ManifoldCF 2.0.1
>            Reporter: Rafa Haro
>            Assignee: Karl Wright
>              Labels: gsoc, gsoc2015
>             Fix For: ManifoldCF 1.10, ManifoldCF 2.2
>
>         Attachments: 1.JPG, 2.JPG
>
>
> Kafka is a distributed, partitioned, replicated commit log service. It 
> provides the functionality of a messaging system, but with a unique design. A 
> single Kafka broker can handle hundreds of megabytes of reads and writes per 
> second from thousands of clients.
> Apache Kafka is being used for a number of use cases. One of them is to use 
> Kafka as a feeding system for streaming BigData processes, in both Apache 
> Spark and Hadoop environments. A Kafka output connector could be used for 
> streaming or dispatching crawled documents or metadata, putting them into a 
> BigData processing pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
