[
https://issues.apache.org/jira/browse/CONNECTORS-1162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600462#comment-14600462
]
Karl Wright commented on CONNECTORS-1162:
-----------------------------------------
We have unit tests for some connectors, but they are limited because we
had no mock capabilities until recently. But there is one connector now
with decent tests. I will look for it and get back to you.
Sent from my Windows Phone
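The mock-based testing approach mentioned above can be sketched in plain Java. Note that MessageSink and RecordingSink below are hypothetical stand-ins for illustration only, not ManifoldCF's actual connector interfaces or mock framework:

```java
import java.util.*;

public class MockSinkTest {
    // Hypothetical output interface a connector might write documents to.
    interface MessageSink {
        void send(String topic, String document);
    }

    // Mock that records calls instead of talking to a real broker,
    // so the connector logic can be tested without external services.
    static class RecordingSink implements MessageSink {
        final List<String> sent = new ArrayList<>();
        public void send(String topic, String document) {
            sent.add(topic + ":" + document);
        }
    }

    public static void main(String[] args) {
        RecordingSink sink = new RecordingSink();
        // Exercise the code under test against the mock...
        sink.send("crawl", "doc-1");
        // ...then assert on what the mock recorded.
        if (!sink.sent.equals(Arrays.asList("crawl:doc-1")))
            throw new AssertionError("unexpected sends: " + sink.sent);
        System.out.println("ok");
    }
}
```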
From: Tugba Dogan (JIRA)
Sent: 6/24/2015 6:56 PM
To: [email protected]
Subject: [jira] [Commented] (CONNECTORS-1162) Apache Kafka Output
Connector
[
https://issues.apache.org/jira/browse/CONNECTORS-1162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600319#comment-14600319
]
Tugba Dogan commented on CONNECTORS-1162:
-----------------------------------------
Hi Karl,
I called the get() method to simulate a blocking call, and it works.
Here is the commit link:
https://github.com/tugbadogan/manifoldcf/commit/fb1f8a525662b635cae0b84adc24b1ac172965eb
I want to ask whether unit tests are required for connectors. I didn't
see example unit tests in the other connectors. If they are required,
could you give me a unit test example for a connector?
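The blocking pattern described above (calling get() on the Future returned by the producer's asynchronous send()) can be sketched with plain java.util.concurrent types. The Kafka client's KafkaProducer.send(ProducerRecord) returns a Future<RecordMetadata> in the same way; the names below are illustrative, not taken from the commit:

```java
import java.util.concurrent.*;

public class BlockingSendSketch {
    // Simulates an asynchronous send() that returns a Future,
    // analogous to KafkaProducer.send(ProducerRecord).
    static Future<String> send(ExecutorService pool, String doc) {
        return pool.submit(() -> "acked:" + doc);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // Calling get() blocks until the task (standing in for the
        // broker) acknowledges, making the asynchronous send synchronous.
        String ack = send(pool, "doc-1").get();
        System.out.println(ack);
        pool.shutdown();
    }
}
```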
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
> Apache Kafka Output Connector
> -----------------------------
>
> Key: CONNECTORS-1162
> URL: https://issues.apache.org/jira/browse/CONNECTORS-1162
> Project: ManifoldCF
> Issue Type: Wish
> Affects Versions: ManifoldCF 1.8.1, ManifoldCF 2.0.1
> Reporter: Rafa Haro
> Assignee: Karl Wright
> Labels: gsoc, gsoc2015
> Fix For: ManifoldCF 1.10, ManifoldCF 2.2
>
> Attachments: 1.JPG, 2.JPG
>
>
> Kafka is a distributed, partitioned, replicated commit log service. It
> provides the functionality of a messaging system, but with a unique design. A
> single Kafka broker can handle hundreds of megabytes of reads and writes per
> second from thousands of clients.
> Apache Kafka is being used for a number of use cases. One of them is to use
> Kafka as a feeding system for streaming BigData processes, in Apache
> Spark or Hadoop environments. A Kafka output connector could be used for
> streaming or dispatching crawled documents or metadata, putting them into a
> BigData processing pipeline.
--