GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/1537
KAFKA-3846: include timestamp in Connect record types
KIP to come
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/shikhar/kafka kafka-3846
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/1404
KAFKA-2935: Remove vestigial WorkerConfig.CLUSTER_CONFIG
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/shikhar/kafka kafka-2935
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/1729
Assign ConfigDef.NO_DEFAULT_VALUE as a literal, use equals() for comparison rather than ==
You can merge this pull request into a Git repository by running:
$ git pull https
Github user shikhar closed the pull request at:
https://github.com/apache/kafka/pull/1729
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/1567
Minor: fix Bash shebang on vagrant/ scripts
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/shikhar/kafka vagrant-scripts-shebang
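For context, the class of problem such a shebang fix addresses (illustrative script, not the actual vagrant/ contents): Bash-only constructs fail when a script's shebang points at a stricter /bin/sh, which is dash on many distributions.

```shell
#!/bin/bash
# Arrays and [[ ]] are Bash features; under '#!/bin/sh' on systems where
# sh is dash, this script would fail, so the shebang must name bash.
hosts=(zk broker worker)
if [[ ${#hosts[@]} -eq 3 ]]; then
    echo "provisioning ${#hosts[@]} hosts"
fi
```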
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/1745
WIP: KAFKA-4042: prevent DistributedHerder thread from dying from
connector/task lifecycle exceptions
You can merge this pull request into a Git repository by running:
$ git pull https
Github user shikhar closed the pull request at:
https://github.com/apache/kafka/pull/1745
GitHub user shikhar reopened a pull request:
https://github.com/apache/kafka/pull/1745
KAFKA-4042: prevent DistributedHerder thread from dying from connector/task
lifecycle exceptions
- `worker.startConnector()` and `worker.startTask()` can throw (e.g.
`ClassNotFoundException
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/1815
KAFKA-4115: grow default heap size for connect-distributed.sh to 1G
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/shikhar/kafka connect-heap
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/1800
KAFKA-4100: ensure 'fields' and 'fieldsByName' are not null for Struct
schemas
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/shikhar/kafka
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/1969
MINOR: missing fullstop in doc for `max.partition.fetch.bytes`
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/shikhar/kafka patch-2
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/1964
KAFKA-4010: add ConfigDef toEnrichedRst() for additional fields in output
followup on https://github.com/apache/kafka/pull/1696
cc @rekhajoshm
You can merge this pull request into a Git
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/1968
MINOR: missing whitespace in doc for `ssl.cipher.suites`
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/shikhar/kafka patch-1
Alternatively
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/1872
KAFKA-4183: centralize checking for optional and default values to avoid
bugs
Cleaner to just check once for optional & default value from the
`convertToConnect()` function.
It also h
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/1865
KAFKA-4173: SchemaProjector should successfully project missing Struct
field when target field is optional
You can merge this pull request into a Git repository by running:
$ git pull https
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/1778
KAFKA-4042: Contain connector & task start/stop failures within the Worker
Invoke the statusListener.onFailure() callback on start failures so that
the statusBackingStore is updated. This invo
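The containment approach described here can be sketched as follows. All names (Worker, StatusListener) are illustrative stand-ins, not the actual Connect classes: the point is that lifecycle exceptions are caught at the worker boundary and reported through the listener rather than propagating into the herder thread.

```java
// Sketch of containing connector/task start failures (names illustrative,
// not the real org.apache.kafka.connect API).
interface StatusListener {
    void onFailure(String connectorName, Throwable cause);
}

class Worker {
    boolean startConnector(String name, Runnable starter, StatusListener listener) {
        try {
            starter.run(); // may throw, e.g. a wrapped ClassNotFoundException
            return true;
        } catch (Throwable t) {
            listener.onFailure(name, t); // status backing store gets updated
            return false;                // herder thread keeps running
        }
    }
}
```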
Github user shikhar closed the pull request at:
https://github.com/apache/kafka/pull/1745
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/1790
KAFKA-4070: implement Connect Struct.toString()
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/shikhar/kafka add-struct-tostring
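A useful toString() of the kind this PR targets might look like the following minimal sketch. SimpleStruct is a hypothetical stand-in; the real org.apache.kafka.connect.data.Struct is schema-aware and differs.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of a field=value toString for a struct-like value
// (illustrative only; not the committed Kafka Connect implementation).
class SimpleStruct {
    private final Map<String, Object> values = new LinkedHashMap<>();

    SimpleStruct put(String field, Object value) {
        values.put(field, value);
        return this;
    }

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder("Struct{");
        boolean first = true;
        for (Map.Entry<String, Object> e : values.entrySet()) {
            if (!first) sb.append(',');
            sb.append(e.getKey()).append('=').append(e.getValue());
            first = false;
        }
        return sb.append('}').toString();
    }
}
```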
Github user shikhar closed the pull request at:
https://github.com/apache/kafka/pull/1968
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/2040
KAFKA-4161: prototype for exploring API change
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/shikhar/kafka kafka-4161
Alternatively you can
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/2131
Remove failing ConnectDistributedTest.test_bad_connector_class
Since #1911 was merged it is hard to externally test a connector
transitioning to FAILED state due to an initialization failure, which
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/2182
ConfigDef experimentation - support List and Map<String, T>
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/shikhar/kafka con
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/2139
KAFKA-4161: KIP-89: Allow sink connectors to decouple flush and offset
commit
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/shikhar/kafka
Github user shikhar closed the pull request at:
https://github.com/apache/kafka/pull/2040
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/2232
HOTFIX: Fix HerderRequest.compareTo()
With KAFKA-3008 (#1788), the implementation does not respect the contract
that 'sgn(x.compareTo(y)) == -sgn(y.compareTo(x))'
This fix addresses
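The contract violation at issue can be illustrated as follows. Deadline is a hypothetical stand-in, not the actual HerderRequest: computing a long difference and truncating to int can overflow, so sign symmetry breaks; Long.compare avoids that.

```java
// Illustration of the Comparable contract: sgn(x.compareTo(y)) must equal
// -sgn(y.compareTo(x)). A subtraction-based implementation can overflow
// and violate this. (Deadline is illustrative, not the real HerderRequest.)
class Deadline implements Comparable<Deadline> {
    final long at;

    Deadline(long at) { this.at = at; }

    @Override
    public int compareTo(Deadline other) {
        // Broken variant: return (int) (this.at - other.at);
        // overflows for this.at = Long.MAX_VALUE, other.at = -1.
        return Long.compare(this.at, other.at);
    }
}
```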
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/2374
KAFKA-3209: KIP-66: more single message transforms
WIP, in this PR I'd also like to add doc generation for transformations.
You can merge this pull request into a Git repository by running
Github user shikhar closed the pull request at:
https://github.com/apache/kafka/pull/2182
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/2365
MINOR: avoid closing over both pre & post-transform record in
WorkerSourceTask
Followup to #2299 for KAFKA-3209
You can merge this pull request into a Git repository by running:
$ git
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/2196
KAFKA-3910: prototype of another approach to cyclic schemas
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/shikhar/kafka KAFKA-3910
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/2313
KAFKA-4575: ensure topic created before starting sink for
ConnectDistributedTest.test_pause_resume_sink
Otherwise in this test the sink task goes through the pause/resume cycle
with 0 assigned
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/2299
KAFKA-3209: KIP-66: single message transforms
*WIP* particularly around testing
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/shikhar/kafka
GitHub user shikhar opened a pull request:
https://github.com/apache/kafka/pull/2277
KAFKA-4527: task status was being updated before actual pause/resume
h/t @ewencp for pointing out the issue
You can merge this pull request into a Git repository by running:
$ git pull https
Kafkarati,
Here is a pretty straightforward proposal, for exposing timestamps that
were added in Kafka 0.10 to the connect framework so connectors can make
use of them:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-65%3A+Expose+timestamps+to+Connect
Appreciate your thoughts!
Shikhar
Since there isn't much to discuss with this KIP, bringing it to a vote
KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-65%3A+Expose+timestamps+to+Connect
Pull request: https://github.com/apache/kafka/pull/1537
Thanks,
Shikhar
KIP page should be updated with what you decide.
>
> Ismael
>
> On Sat, Jun 25, 2016 at 1:29 AM, Shikhar Bhushan <shik...@confluent.io>
> wrote:
>
> > Hi Ismael,
> >
> > Good point. This is down to an implementation detail, the getter was
> ad
).
It probably makes sense to be consistent and use either Long everywhere or
the primitive long and default values.
Feel free to add the comment on the PR
<https://github.com/apache/kafka/pull/1537/files> as well and I can follow
up there :-)
Thanks,
Shikhar
On Fri, Jun 24, 2016 at 3:52 PM
Not sure I understand the motivation to use a FIPS-compliant hash function
for log compaction -- what are the security ramifications?
On Fri, Jul 22, 2016 at 2:56 PM Luciano Afranllie
wrote:
> A little bit of background first.
>
> We are trying to make a deployment of
that from FIPS point of view, but we want
> to deploy with FIPS 140-2 mode enabled using only RSA security providers.
> With this settings it is not possible to use MD5.
>
> On Fri, Jul 22, 2016 at 8:49 PM, Shikhar Bhushan <shik...@confluent.io>
> wrote:
>
> > Not sure I
flatMap() / supporting 1->n feels nice and general since filtering is just
the case of going from 1->0
I'm not sure why we'd need to do any more granular offset tracking (like
sub-offsets) for source connectors: after transformation of a given record
to n records, all of those n should map to
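The 1->n shape being discussed can be sketched as below. FlatMapTransform and the String record type are purely illustrative, not a proposed API: the point is that a filter falls out as the 1->0 case.

```java
import java.util.Collections;
import java.util.List;

// Sketch of a 1->n transform: one record in, a list of records out.
// Filtering is the special case of returning an empty list (1->0).
interface FlatMapTransform<R> {
    List<R> apply(R record);
}

class DropEmpty implements FlatMapTransform<String> {
    @Override
    public List<String> apply(String record) {
        // 1->0 when the record is filtered out, 1->1 otherwise
        return (record == null || record.isEmpty())
                ? Collections.emptyList()
                : Collections.singletonList(record);
    }
}
```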
st an example)?
>
> On Sat, Jul 23, 2016 at 11:13 PM, Ewen Cheslack-Postava <e...@confluent.io
> >
> wrote:
>
> > On Fri, Jul 22, 2016 at 12:58 AM, Shikhar Bhushan <shik...@confluent.io>
> > wrote:
> >
> > > flatMap() / supporting 1->n fee
>
>
> Hmm, operating on ConnectRecords probably doesn't work since you need to
> emit the right type of record, which might mean instantiating a new one. I
> think that means we either need 2 methods, one for SourceRecord, one for
> SinkRecord, or we'd need to limit what parts of the message you
a.me.uk> wrote:
>
> > +1 (binding), provided that we make the usage of `Long/null` versus
> > `long/-1` consistent.
> >
> > Ismael
> >
> > On Sat, Jun 25, 2016 at 12:42 AM, Gwen Shapira <g...@confluent.io>
> wrote:
> >
> > > +1
> >
+1 (non-binding)
On Mon, Aug 15, 2016 at 1:20 PM Ismael Juma wrote:
> +1 (binding)
>
> On 15 Aug 2016 7:21 pm, "Ewen Cheslack-Postava" wrote:
>
> > I would like to initiate the voting process for KIP-75:
> >
Hi,
I would like to initiate a vote on KIP-89
https://cwiki.apache.org/confluence/display/KAFKA/KIP-89%3A+Allow+sink+connectors+to+decouple+flush+and+offset+commit
Best,
Shikhar
Hi all,
I created KIP-89 for making a Connect API change that allows for sink
connectors to decouple flush and offset commits.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-89%3A+Allow+sink+connectors+to+decouple+flush+and+offset+commit
I'd welcome your input.
Best,
Shikhar
The vote passed with +3 binding votes. Thanks all!
On Sun, Nov 13, 2016 at 1:42 PM Gwen Shapira <g...@confluent.io> wrote:
+1 (binding)
On Nov 9, 2016 2:17 PM, "Shikhar Bhushan" <shik...@confluent.io> wrote:
> Hi,
>
> I would like to initiate a vote on KIP-
Hi all,
I have another iteration at a proposal for this feature here:
https://cwiki.apache.org/confluence/display/KAFKA/Connect+Transforms+-+Proposed+Design
I'd welcome your feedback and comments.
Thanks,
Shikhar
On Tue, Aug 2, 2016 at 7:21 PM Ewen Cheslack-Postava <e...@confluent.io>
it out? Or
at least keeping the built-in set super minimal (Flume has like 3 built-in
interceptors)?
Gwen
On Wed, Dec 14, 2016 at 1:36 PM, Shikhar Bhushan <shik...@confluent.io>
wrote:
> With regard to a), just using `ConnectRecord` with `newRecord` as a new
> abstract method woul
before that we have ExtractAvroMetadata that may
fit? and ExtractEmailHeaders doesn't sound totally outlandish either...
Nothing in the baked-in list by Shikhar looks out of place. I am concerned
about a slippery slope. Or the arbitrariness of the decision if we say that
this list is final and nothin
e
project later than to remove functionality.
On Thu, Dec 15, 2016 at 11:59 AM, Shikhar Bhushan <shik...@confluent.io>
wrote:
> I have updated KIP-66
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 66%3A+Single+Message+Transforms+for+Kafka+Connect
> with
> the ch
o data
enrichment by querying external systems, so building a bunch of
transformations in could potentially open the floodgates, or at least make
decisions about what is included vs what should be 3rd party muddy.
-Ewen
On Wed, Dec 7, 2016 at 11:46 AM, Shikhar Bhushan <shik...@confluent.io>
wrot
Hi David,
You can override the underlying consumer's `max.poll.records` setting for
this. E.g.
consumer.max.poll.records=500
Best,
Shikhar
On Thu, Jan 5, 2017 at 3:59 AM <david.frank...@bt.com> wrote:
> Is there any way of limiting the number of events that are passed into t
Sorry I forgot to specify, this needs to go into your Connect worker
configuration.
On Fri, Jan 6, 2017 at 02:57 <david.frank...@bt.com> wrote:
> Hi Shikhar,
>
> I've just added this to ~config/consumer.properties in the Kafka folder
> but it doesn't appear to have made any d
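Spelled out, the override belongs in the Connect worker's properties file (filename below is illustrative), not in config/consumer.properties: the worker strips the `consumer.` prefix and forwards the key to the consumer underlying each sink task.

```properties
# connect-distributed.properties (worker config; filename illustrative)
bootstrap.servers=localhost:9092
group.id=connect-cluster

# Keys prefixed with 'consumer.' are stripped of the prefix and passed
# through to the worker's internal consumer, capping records per poll.
consumer.max.poll.records=500
```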
Configurable` and remove the `init` method?
>
> On Thu, Jan 5, 2017 at 5:48 PM, Neha Narkhede <n...@confluent.io> wrote:
>
> > +1 (binding)
> >
> > On Wed, Jan 4, 2017 at 2:36 PM Shikhar Bhushan <shik...@confluent.io>
> > wrote:
> >
> >
Makes sense Ewen, I edited the KIP to include this criteria.
I'd like to start a voting thread soon unless anyone has additional points
for discussion.
On Fri, Dec 30, 2016 at 12:14 PM Ewen Cheslack-Postava <e...@confluent.io>
wrote:
On Thu, Dec 15, 2016 at 7:41 PM, Shikhar Bhushan
> wrote:
>
> > +1
> >
> > On Wed, Jan 4, 2017 at 1:29 PM, Gwen Shapira <g...@confluent.io> wrote:
> >
> > > I would have preferred not to bundle transformations, but since SMT
> > > capability is a much needed feature, I'll take it in its cu
Thanks all. The vote passed with +5 (binding).
On Fri, Jan 6, 2017 at 11:37 AM Shikhar Bhushan <shik...@confluent.io>
wrote:
That makes sense to me, I'll fold that into the PR and update the KIP if it
gets committed in that form.
On Fri, Jan 6, 2017 at 9:44 AM Jason Gustafs
Hi all,
I'd like to start voting on KIP-66:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-66%3A+Single+Message+Transforms+for+Kafka+Connect
Best,
Shikhar
[
https://issues.apache.org/jira/browse/KAFKA-3846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shikhar Bhushan reassigned KAFKA-3846:
--
Assignee: Shikhar Bhushan (was: Ewen Cheslack-Postava)
> Connect record types sho
[
https://issues.apache.org/jira/browse/KAFKA-3846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15346891#comment-15346891
]
Shikhar Bhushan commented on KAFKA-3846:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-65
[
https://issues.apache.org/jira/browse/KAFKA-3846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Work on KAFKA-3846 started by Shikhar Bhushan.
--
> Connect record types should include timesta
[
https://issues.apache.org/jira/browse/KAFKA-2935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290294#comment-15290294
]
Shikhar Bhushan commented on KAFKA-2935:
I could not find any documentation reference
[
https://issues.apache.org/jira/browse/KAFKA-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290333#comment-15290333
]
Shikhar Bhushan commented on KAFKA-3335:
Currently {{start()}} looks like
{code:java}
public
Shikhar Bhushan created KAFKA-4010:
--
Summary: ConfigDef.toRst() should create sections for each group
Key: KAFKA-4010
URL: https://issues.apache.org/jira/browse/KAFKA-4010
Project: Kafka
Shikhar Bhushan created KAFKA-3962:
--
Summary: ConfigDef support for resource-specific configuration
Key: KAFKA-3962
URL: https://issues.apache.org/jira/browse/KAFKA-3962
Project: Kafka
[
https://issues.apache.org/jira/browse/KAFKA-3962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shikhar Bhushan updated KAFKA-3962:
---
Description:
It often comes up with connectors that you want some piece of configuration
[
https://issues.apache.org/jira/browse/KAFKA-4048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shikhar Bhushan updated KAFKA-4048:
---
Summary: Connect does not support RetriableException consistently for sinks
(was: Connect
[
https://issues.apache.org/jira/browse/KAFKA-4048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shikhar Bhushan updated KAFKA-4048:
---
Description: We only allow for handling {{RetriableException}} from calls
to {{SinkTask.put
[
https://issues.apache.org/jira/browse/KAFKA-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shikhar Bhushan updated KAFKA-4042:
---
Component/s: KafkaConnect
> DistributedHerder thread can die because of connector &am
Shikhar Bhushan created KAFKA-4048:
--
Summary: Connect does not support RetriableException consistently
for sources & sinks
Key: KAFKA-4048
URL: https://issues.apache.org/jira/browse/KAFKA-
[
https://issues.apache.org/jira/browse/KAFKA-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shikhar Bhushan updated KAFKA-4042:
---
Summary: DistributedHerder thread can die because of connector & task
lifecycle except
[
https://issues.apache.org/jira/browse/KAFKA-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shikhar Bhushan updated KAFKA-4042:
---
Description: As one example, there isn't exception handling
Shikhar Bhushan created KAFKA-4042:
--
Summary: Missing error handling in Worker.startConnector() can
cause Herder thread to die
Key: KAFKA-4042
URL: https://issues.apache.org/jira/browse/KAFKA-4042
[
https://issues.apache.org/jira/browse/KAFKA-3054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shikhar Bhushan reassigned KAFKA-3054:
--
Assignee: Shikhar Bhushan (was: jin xing)
> Connect Herder fail forever if s
Shikhar Bhushan created KAFKA-4678:
--
Summary: Create separate page for Connect docs
Key: KAFKA-4678
URL: https://issues.apache.org/jira/browse/KAFKA-4678
Project: Kafka
Issue Type
[
https://issues.apache.org/jira/browse/KAFKA-3054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shikhar Bhushan resolved KAFKA-3054.
Resolution: Done
> Connect Herder fail forever if sent a wrong connector config or t
[
https://issues.apache.org/jira/browse/KAFKA-3054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15427362#comment-15427362
]
Shikhar Bhushan commented on KAFKA-3054:
Addressing this in KAFKA-4042, which should take care
[
https://issues.apache.org/jira/browse/KAFKA-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shikhar Bhushan updated KAFKA-4042:
---
Fix Version/s: 0.10.1.0
> DistributedHerder thread can die because of connector &am
[
https://issues.apache.org/jira/browse/KAFKA-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Work on KAFKA-4042 started by Shikhar Bhushan.
--
> Missing error handling in Worker.startConnector() can cause Herder thr
[
https://issues.apache.org/jira/browse/KAFKA-3054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Work on KAFKA-3054 started by Shikhar Bhushan.
--
> Connect Herder fail forever if sent a wrong connector config or t
Shikhar Bhushan created KAFKA-4068:
--
Summary: FileSinkTask - use JsonConverter to serialize
Key: KAFKA-4068
URL: https://issues.apache.org/jira/browse/KAFKA-4068
Project: Kafka
Issue Type
[
https://issues.apache.org/jira/browse/KAFKA-4070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shikhar Bhushan updated KAFKA-4070:
---
Description: Logging of {{Struct}}'s does not currently provide any useful
output, and users
Shikhar Bhushan created KAFKA-4070:
--
Summary: Implement a useful Struct.toString()
Key: KAFKA-4070
URL: https://issues.apache.org/jira/browse/KAFKA-4070
Project: Kafka
Issue Type
[
https://issues.apache.org/jira/browse/KAFKA-4068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shikhar Bhushan resolved KAFKA-4068.
Resolution: Not A Problem
I was thinking JSON since it would be easy to serialize
[
https://issues.apache.org/jira/browse/KAFKA-4127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shikhar Bhushan closed KAFKA-4127.
--
> Possible data loss
> --
>
> Key: KAFKA-4127
>
[
https://issues.apache.org/jira/browse/KAFKA-4127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468012#comment-15468012
]
Shikhar Bhushan commented on KAFKA-4127:
Dupe of KAFKA-3968
> Possible data l
[
https://issues.apache.org/jira/browse/KAFKA-4127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shikhar Bhushan resolved KAFKA-4127.
Resolution: Duplicate
> Possible data loss
> --
>
>
[
https://issues.apache.org/jira/browse/KAFKA-3962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468352#comment-15468352
]
Shikhar Bhushan edited comment on KAFKA-3962 at 9/6/16 7:49 PM
[
https://issues.apache.org/jira/browse/KAFKA-3962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468352#comment-15468352
]
Shikhar Bhushan commented on KAFKA-3962:
This is also realizable today by using
[
https://issues.apache.org/jira/browse/KAFKA-4048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shikhar Bhushan resolved KAFKA-4048.
Resolution: Not A Problem
Turns out all exceptions from {{task.flush()}} are treated
Shikhar Bhushan created KAFKA-4115:
--
Summary: Grow default heap settings for distributed Connect from
256M to 1G
Key: KAFKA-4115
URL: https://issues.apache.org/jira/browse/KAFKA-4115
Project: Kafka
Shikhar Bhushan created KAFKA-4100:
--
Summary: Connect Struct schemas built using SchemaBuilder with no
fields cause NPE in Struct constructor
Key: KAFKA-4100
URL: https://issues.apache.org/jira/browse/KAFKA-4100
[
https://issues.apache.org/jira/browse/KAFKA-4100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Work on KAFKA-4100 started by Shikhar Bhushan.
--
> Connect Struct schemas built using SchemaBuilder with no fields cause
Shikhar Bhushan created KAFKA-4159:
--
Summary: Allow overriding producer & consumer properties at the
connector level
Key: KAFKA-4159
URL: https://issues.apache.org/jira/browse/KAFKA-4159
Pro
[
https://issues.apache.org/jira/browse/KAFKA-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491296#comment-15491296
]
Shikhar Bhushan commented on KAFKA-4161:
bq. Probably worth clarifying whether we're really
Shikhar Bhushan created KAFKA-4173:
--
Summary: SchemaProjector should successfully project when source
schema field is missing and target schema field is optional
Key: KAFKA-4173
URL: https://issues.apache.org
Shikhar Bhushan created KAFKA-4161:
--
Summary: Allow connectors to request flush via the context
Key: KAFKA-4161
URL: https://issues.apache.org/jira/browse/KAFKA-4161
Project: Kafka
Issue
[
https://issues.apache.org/jira/browse/KAFKA-4154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shikhar Bhushan updated KAFKA-4154:
---
Fix Version/s: (was: 0.10.0.2)
> Kafka Connect fails to shutdown if it has not comple
[
https://issues.apache.org/jira/browse/KAFKA-4154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shikhar Bhushan updated KAFKA-4154:
---
Fix Version/s: (was: 0.10.1.0)
0.10.0.2
> Kafka Connect fa