[jira] [Commented] (KAFKA-5209) Transient failure: kafka.server.MetadataRequestTest.testControllerId

2017-05-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005898#comment-16005898
 ] 

ASF GitHub Bot commented on KAFKA-5209:
---

GitHub user umesh9794 opened a pull request:

https://github.com/apache/kafka/pull/3018

KAFKA-5209 : Transient failure: 
kafka.server.MetadataRequestTest.testControllerId

Added a sleep after creating the socket so that the socket can be spawned 
properly and is ready to send the request. @guozhangwang, may I request you to 
review it? 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/umesh9794/kafka KAFKA-5209

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3018.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3018


commit b65432dac84f308d6a5ea2c5a45ff19ab2da4ce8
Author: umesh chaudhary 
Date:   2017-05-11T05:15:49Z

KAFKA-5209 :  Transient failure: 
kafka.server.MetadataRequestTest.testControllerId




> Transient failure: kafka.server.MetadataRequestTest.testControllerId
> 
>
> Key: KAFKA-5209
> URL: https://issues.apache.org/jira/browse/KAFKA-5209
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>
> {code}
> Stacktrace
> java.lang.NullPointerException
>   at 
> kafka.server.MetadataRequestTest.testControllerId(MetadataRequestTest.scala:57)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor50.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor49.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> 

[DISCUSS] Modify / Remove "Unstable" annotations in Streams API

2017-05-10 Thread Guozhang Wang
Hello folks,

As we are approaching the feature freeze deadline of 0.11.0.0, one thing I
realized is that currently the Streams public APIs are still marked as
"Unstable", which indicates that the API itself does not provide
guarantees about backward compatibility across releases. On the other hand,
since Streams has now been widely adopted in production use cases by many
organizations, we are in fact evolving its APIs in a much stricter manner
than "Unstable" requires: for all the current Streams-related KIP
proposals under discussion right now [1], people have been working hard to
make sure none of them are going to break backward compatibility in the
coming releases. So I think it would be a good time to change the Streams
API annotations.

My proposal would be the following:

1. For "o.a.k.streams.errors" and "o.a.k.streams.state" packages: remove
the annotations except `StreamsMetrics`.

2. For "o.a.k.streams.kstream": remove the annotations except "KStream",
"KTable", "GroupedKStream", "GroupedKTable", "GlobalKTable" and
"KStreamBuilder".

3. For all the other public classes, including "o.a.k.streams.processor"
and the above-mentioned classes, change the annotation to "Evolving", which
means "we might break compatibility at minor releases (e.g. 0.12.x, 0.13.x,
1.0.x) only".


The ultimate goal is to make sure we won't break anything going forward,
hence in the future we should remove all the annotations to make that
clear. The above changes in 0.11.0.0 are to give us some "buffer time" in
case there are some major API change proposals after the release.
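
To make item 3 concrete, a rough sketch of the kind of change involved; the
interface name below is made up for illustration, only the annotation is the
point:

    import org.apache.kafka.common.annotation.InterfaceStability;

    // Before: @InterfaceStability.Unstable -- no compatibility guarantees at all.
    // After:  @InterfaceStability.Evolving -- compatibility may only be broken
    //         at minor releases (0.12.x, 0.13.x, 1.0.x, ...).
    @InterfaceStability.Evolving
    public interface ExampleStreamsInterface {
        void process(String key, String value);
    }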

Would love to hear your thoughts.


[1]

KIP-95: Incremental Batch Processing for Kafka Streams


KIP-120: Cleanup Kafka Streams builder API


KIP-123: Allow per stream/table timestamp extractor


KIP 130: Expose states of active tasks to KafkaStreams public API


KIP-132: Augment KStream.print to allow extra parameters in the printed
string


KIP-138: Change punctuate semantics


KIP-147: Add missing type parameters to StateStoreSupplier factories and
KGroupedStream/Table methods


KIP-149: Enabling key access in ValueTransformer, ValueMapper, and
ValueJoiner


KIP-150 - Kafka-Streams Cogroup


KIP 155 - Add range scan for windowed state stores


KIP 156 Add option "dry run" to Streams application reset tool



-- 
-- Guozhang


Re: [VOTE] KIP-146: Isolation of dependencies and classes in Kafka Connect (restarted voting thread)

2017-05-10 Thread Konstantine Karantasis
The KIP description has been updated to reflect the use of the term
plugin.path instead.

-Konstantine
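
For reference, a hypothetical worker configuration snippet with the renamed
property (the directories below are examples only; each one contains plugins
as subdirectories):

    # connect-distributed.properties (illustrative paths)
    plugin.path=/usr/local/share/kafka/plugins,/opt/connect/custom-plugins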




On Wed, May 10, 2017 at 2:10 PM, Ismael Juma  wrote:

> Konstantine, I am not convinced that it will represent similar
> functionality as the goals are different. Also, I don't see a migration
> path. To use Jigsaw, it makes sense to configure the module path during
> start-up (-mp) like one configures the classpath. Whatever we are
> implementing in Connect will be its own thing and it will be with us for
> many years.
>
> Ewen, as far as the JVM goes, I think `module.path` is probably the name
> most likely to create confusion since it refers to a concept that was
> recently introduced, has very specific (and some would say unexpected)
> behaviour and it will be supported by java/javac launchers, build tools,
> etc.
>
> Gwen, `plugin.path` sounds good to me.
>
> In any case, I will leave it to you all to decide. :)
>
> Ismael
>
> On Wed, May 10, 2017 at 8:11 PM, Konstantine Karantasis <
> konstant...@confluent.io> wrote:
>
> > Thank you Ismael for your vote as well as your comment.
> >
> > To give some context, it's exactly because of the similarities with
> Jigsaw
> > that module.path was selected initially.
> > The thought was that it could allow for a potential integration with
> Jigsaw
> > in the future, without having to change property names significantly.
> >
> > Of course there are differences, such as the ones you mention, mainly because
> > Connect's module path is currently composed as a list of top-level
> > directories that include the modules as subdirectories. However, I'd be
> > inclined to agree with Ewen. Maybe using a property name that resembles
> > other concepts in the JVM ecosystem preserves more flexibility for us than
> > using a different name for something that will eventually end up
> > representing similar functionality.
> >
> > In any case, I don't feel very strongly about it. Let me know if you insist
> > on a name change.
> >
> > -Konstantine
> >
> >
> > On Wed, May 10, 2017 at 10:24 AM, Ewen Cheslack-Postava <
> e...@confluent.io
> > >
> > wrote:
> >
> > > +1 binding, and I'm flexible on the config name. Somehow I am guessing
> no
> > > matter what terminology we use there somebody will find a way to be
> > > confused.
> > >
> > > -Ewen
> > >
> > > On Wed, May 10, 2017 at 9:27 AM, Gwen Shapira 
> wrote:
> > >
> > > > +1 and proposing 'plugin.path' as we use the term connector plugins
> > when
> > > > referring to the jars themselves.
> > > >
> > > > Gwen
> > > >
> > > > On Wed, May 10, 2017 at 8:31 AM, Ismael Juma 
> > wrote:
> > > >
> > > > > Thanks for the KIP, Konstantine, +1 (binding) from me. One comment:
> > > > >
> > > > > 1. One thing to think about: the config name `module.path` could be
> > > > > confusing in the future as Jigsaw introduces the concept of a
> module
> > > > > path[1] in Java 9. The module path co-exists with the classpath,
> but
> > > its
> > > > > behaviour is quite different. To many people's surprise, Jigsaw
> > doesn't
> > > > > handle versioning and it disallows split packages (i.e. if the same
> > > > package
> > > > > appears in 2 different modules, it is an error). What we are
> > proposing
> > > is
> > > > > quite different and perhaps it may make sense to use a different
> name
> > > to
> > > > > avoid confusion.
> > > > >
> > > > > Ismael
> > > > >
> > > > > [1] https://www.infoq.com/articles/Latest-Project-
> > > Jigsaw-Usage-Tutorial
> > > > >
> > > > > On Mon, May 8, 2017 at 7:48 PM, Konstantine Karantasis <
> > > > > konstant...@confluent.io> wrote:
> > > > >
> > > > > > ** Restarting the voting thread here, with a different title to
> > avoid
> > > > > > collapsing this thread's messages with the discussion thread's
> > > messages
> > > > > in
> > > > > > mail clients. Apologies for the inconvenience. **
> > > > > >
> > > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > Given that the comments during the discussion seem to have been
> > > > > addressed,
> > > > > > I'm pleased to bring
> > > > > >
> > > > > > KIP-146: Classloading Isolation in Connect
> > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > > 146+-+Classloading+Isolation+in+Connect
> > > > > >
> > > > > > up for voting. Again, this KIP aims to bring the highly desired
> > > feature
> > > > > of
> > > > > > dependency isolation in Kafka Connect.
> > > > > >
> > > > > > In the meantime, for any additional feedback, please continue to
> > send
> > > > > your
> > > > > > comments in the discussion thread here:
> > > > > >
> > > > > > https://www.mail-archive.com/dev@kafka.apache.org/msg71453.html
> > > > > >
> > > > > > This voting thread will stay active for a minimum of 72 hours.
> > > > > >
> > > > > > Sincerely,
> > > > > > Konstantine
> > > > > >
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > *Gwen Shapira*
> > > > Product Manager | Confluent
> > > > 650.450.2760 | 

[jira] [Updated] (KAFKA-5204) Connect needs to validate Connector type during instantiation

2017-05-10 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-5204:
---
Fix Version/s: (was: 0.10.2.1)

> Connect needs to validate Connector type during instantiation
> -
>
> Key: KAFKA-5204
> URL: https://issues.apache.org/jira/browse/KAFKA-5204
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Konstantine Karantasis
>Assignee: Konstantine Karantasis
> Fix For: 0.11.0.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Currently Connect will accept instantiating connectors that extend the 
> {{Connector}} abstract class directly rather than one of its subclasses, 
> {{SourceConnector}} or {{SinkConnector}}. 
> However, in distributed mode as well as in the REST API, Connect assumes in a few 
> places that there are only two types of connectors, sinks or sources. Based 
> on this assumption it checks the type dynamically, and if it is not a sink it 
> treats it as a source (by constructing the corresponding configs). 
> A connector that implements only the {{Connector}} abstract class does not 
> fit into this classification. Therefore, validation needs to take place 
> early, during the instantiation of the {{Connector}} object. 
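
A rough sketch of the kind of check described above (illustrative only, not the
actual Connect change):

{code}
import org.apache.kafka.connect.connector.Connector;
import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.sink.SinkConnector;
import org.apache.kafka.connect.source.SourceConnector;

public final class ConnectorTypeValidation {
    // Reject a Connector that is neither a source nor a sink at instantiation
    // time, instead of letting it silently default to "source" later on.
    public static void validate(Connector connector) {
        if (!(connector instanceof SourceConnector)
                && !(connector instanceof SinkConnector)) {
            throw new ConnectException("Connector " + connector.getClass().getName()
                    + " must subclass SourceConnector or SinkConnector");
        }
    }
}
{code}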



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [VOTE] KIP-155 Add range scan for windowed state stores

2017-05-10 Thread Jason Gustafson
+1

On Wed, May 10, 2017 at 1:45 PM, Xavier Léauté  wrote:

> Thank you for the feedback Michal.
>
> While I agree the return type may be a little bit more confusing to reason
> about, the reason for doing so was to keep the range query interfaces
> consistent with their single-key counterparts.
>
> In the case of the window store, the "key" of the single-key iterator is
> the actual timestamp of the underlying entry, not just the range of the window,
> so if we were to wrap the result key in a window we wouldn't be getting back
> the equivalent of the single-key iterator.
>
> In both cases peekNextKey is just returning the timestamp of the next entry
> in the window store that matches the query.
>
> In the case of the session store, we already return Windowed for the
> single-key method, so it made sense there to also return Windowed for
> the range method.
>
> Hope this makes sense? Let me know if you still have concerns about this.
>
> Thank you,
> Xavier
>
> On Wed, May 10, 2017 at 12:25 PM Michal Borowiecki <
> michal.borowie...@openbet.com> wrote:
>
> > Apologies, I missed the discussion (or lack thereof) about the return
> > type of:
> >
> > WindowStoreIterator> fetch(K from, K to, long timeFrom,
> > long timeTo)
> >
> >
> > WindowStoreIterator (as the KIP mentions) is a subclass of
> > KeyValueIterator
> >
> > KeyValueIterator has the following method:
> >
> > /**
> >  * Peek at the next key without advancing the iterator
> >  * @return the key of the next value that would be returned from the next call to next
> >  */
> > K peekNextKey();
> >
> > Given the type in this case will be Long, I assume what it would return
> > is the window timestamp of the next found record?
> >
> >
> > In the case of WindowStoreIterator fetch(K key, long timeFrom, long
> > timeTo);
> > all records found by fetch have the same key, so it's harmless to return
> > the timestamp of the next found window but here we have varying keys and
> > varying windows, so won't it be too confusing?
> >
> > KeyValueIterator (as in the proposed
> > ReadOnlySessionStore.fetch) just feels much more intuitive.
> >
> > Apologies again for jumping onto this only once the voting has already
> > begun.
> > Thanks,
> > Michał
> >
> > On 10/05/17 20:08, Sriram Subramanian wrote:
> > > +1
> > >
> > > On Wed, May 10, 2017 at 11:42 AM, Bill Bejeck 
> wrote:
> > >
> > >> +1
> > >>
> > >> Thanks,
> > >> Bill
> > >>
> > >> On Wed, May 10, 2017 at 2:38 PM, Guozhang Wang 
> > wrote:
> > >>
> > >>> +1. Thank you!
> > >>>
> > >>> On Wed, May 10, 2017 at 11:30 AM, Xavier Léauté  >
> > >>> wrote:
> > >>>
> >  Hi everyone,
> > 
> >  Since there aren't any objections to this addition, I would like to
> > >> start
> >  the voting on KIP-155 so we can hopefully get this into 0.11.
> > 
> >  https://cwiki.apache.org/confluence/display/KAFKA/KIP+
> >  155+-+Add+range+scan+for+windowed+state+stores
> > 
> >  Voting will stay active for at least 72 hours.
> > 
> >  Thank you,
> >  Xavier
> > 
> > >>>
> > >>>
> > >>> --
> > >>> -- Guozhang
> > >>>
> >
> >
>
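
To make the peekNextKey() semantics discussed above concrete, a rough sketch
against the existing single-key API (store name, key and value types are made
up): the key returned by peekNextKey() is the timestamp (a Long) of the next
matching entry.

    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.state.QueryableStoreTypes;
    import org.apache.kafka.streams.state.ReadOnlyWindowStore;
    import org.apache.kafka.streams.state.WindowStoreIterator;

    public class WindowPeekExample {
        public static void peek(KafkaStreams streams) {
            ReadOnlyWindowStore<String, Long> store =
                    streams.store("counts-store", QueryableStoreTypes.<String, Long>windowStore());
            WindowStoreIterator<Long> iter = store.fetch("some-key", 0L, System.currentTimeMillis());
            if (iter.hasNext()) {
                Long timestamp = iter.peekNextKey();  // timestamp of the next entry, not a Windowed key
                System.out.println("next matching entry has timestamp " + timestamp);
            }
            iter.close();
        }
    }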


Jenkins build is back to normal : kafka-trunk-jdk8 #1500

2017-05-10 Thread Apache Jenkins Server
See 




[jira] [Updated] (KAFKA-5194) KIP-153: Include only client traffic in BytesOutPerSec metric

2017-05-10 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5194:
---
Status: Patch Available  (was: Open)

> KIP-153: Include only client traffic in BytesOutPerSec metric
> -
>
> Key: KAFKA-5194
> URL: https://issues.apache.org/jira/browse/KAFKA-5194
> Project: Kafka
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Mickael Maison
>Assignee: Mickael Maison
> Fix For: 0.11.0.0
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-153+%3A+Include+only+client+traffic+in+BytesOutPerSec+metric



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5213) IllegalStateException in ensureOpenForRecordAppend

2017-05-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005754#comment-16005754
 ] 

ASF GitHub Bot commented on KAFKA-5213:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3015


> IllegalStateException in ensureOpenForRecordAppend
> --
>
> Key: KAFKA-5213
> URL: https://issues.apache.org/jira/browse/KAFKA-5213
> Project: Kafka
>  Issue Type: Bug
>Reporter: dan norwood
>Assignee: Apurva Mehta
>Priority: Critical
> Fix For: 0.11.0.0
>
>
> I have a streams app that was working recently while pointing at trunk. This 
> morning I ran it and now get:
> {noformat}
> [2017-05-10 14:29:26,266] ERROR stream-thread 
> [_confluent-controlcenter-3-3-0-1-04624550-88f9-4557-a47f-3dfbec3bc3d1-StreamThread-4]
>  Streams application error during processing: {} 
> (org.apache.kafka.streams.processor.internals.StreamThread:518)
> java.lang.IllegalStateException: Tried to append a record, but 
> MemoryRecordsBuilder is closed for record appends
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.ensureOpenForRecordAppend(MemoryRecordsBuilder.java:607)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendLegacyRecord(MemoryRecordsBuilder.java:567)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:353)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:382)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:440)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:463)
>   at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.tryAppend(ProducerBatch.java:83)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.tryAppend(RecordAccumulator.java:257)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:210)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:645)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:598)
>   at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:97)
>   at 
> org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:59)
>   at 
> org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStore.put(ChangeLoggingSegmentedBytesStore.java:55)
>   at 
> org.apache.kafka.streams.state.internals.MeteredSegmentedBytesStore.put(MeteredSegmentedBytesStore.java:100)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:51)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:42)
>   at 
> org.apache.kafka.streams.state.internals.CachingWindowStore$1.apply(CachingWindowStore.java:90)
>   at 
> org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:145)
>   at 
> org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:239)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:214)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:122)
>   at 
> org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:143)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:111)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:47)
>   at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:187)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:133)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:82)
>   at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:69)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:175)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.processAndPunctuate(StreamThread.java:657)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:540)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:511)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3015: KAFKA-5213; Mark a MemoryRecordsBuilder as full as...

2017-05-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3015


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [VOTE] KIP-143: Controller Health Metrics

2017-05-10 Thread Ismael Juma
Thanks everyone for the feedback and votes. The vote has passed with 5
binding +1s (Sriram, Neha, Gwen, Jun, Ismael) and 5 non-binding +1s
(Michael, Tom, Mickael, James, Onur).

Ismael

On Fri, May 5, 2017 at 3:34 AM, Ismael Juma  wrote:

> Hi everyone,
>
> It seems like the discussion has converged, so I would like to initiate
> the voting process for KIP-143: Controller Health Metrics:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-143%
> 3A+Controller+Health+Metrics
>
> The vote will run for a minimum of 72 hours.
>
> Thanks,
> Ismael
>


[GitHub] kafka pull request #3017: Kafka 5218: New Short serializer, deserializer, se...

2017-05-10 Thread mmolimar
GitHub user mmolimar opened a pull request:

https://github.com/apache/kafka/pull/3017

KAFKA-5218: New Short serializer, deserializer, serde



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mmolimar/kafka KAFKA-5218

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3017.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3017


commit 2ecddbdcf1093c2fce1cc692f5f27980463f717f
Author: Mario Molina 
Date:   2017-05-11T01:17:19Z

New Short serializer, deserializer and serde

commit 9c747e1bc6a186ac5934f21e47cb19892f4e69f1
Author: Mario Molina 
Date:   2017-05-11T01:17:36Z

Test for short serializer/deserializer




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [VOTE] KIP-143: Controller Health Metrics

2017-05-10 Thread Ismael Juma
Sorry for missing this question, Onur. I think we have covered that in the
discussion thread since then (thanks for raising it again).

On Sat, May 6, 2017 at 2:08 AM, Onur Karaman 
wrote:

> I noticed that both the ControllerState metric and the *RateAndTimeMs
> metrics only cover a subset of the controller event types. Was this
> intentional?
>
> On Fri, May 5, 2017 at 6:03 PM, Gwen Shapira  wrote:
>
> > +1
> >
> > On Thu, May 4, 2017 at 7:34 PM, Ismael Juma  wrote:
> >
> > > Hi everyone,
> > >
> > > It seems like the discussion has converged, so I would like to initiate
> > the
> > > voting process for KIP-143: Controller Health Metrics:
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-143
> > > %3A+Controller+Health+Metrics
> > >
> > > The vote will run for a minimum of 72 hours.
> > >
> > > Thanks,
> > > Ismael
> > >
> >
> >
> >
> > --
> > *Gwen Shapira*
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter  | blog
> > 
> >
>


Re: [VOTE] KIP-143: Controller Health Metrics

2017-05-10 Thread Jun Rao
Hi, Ismael,

Thanks for the latest update. +1 from me,

Jun

On Thu, May 4, 2017 at 7:34 PM, Ismael Juma  wrote:

> Hi everyone,
>
> It seems like the discussion has converged, so I would like to initiate the
> voting process for KIP-143: Controller Health Metrics:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-143
> %3A+Controller+Health+Metrics
>
> The vote will run for a minimum of 72 hours.
>
> Thanks,
> Ismael
>


Re: [DISCUSS] KIP-143: Controller Health Metrics

2017-05-10 Thread Ismael Juma
Thanks, Jun. I discussed this offline with Onur and Jun, and I believe there's
agreement, so I updated the KIP:

https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?
pageId=69407758=8=7

Ismael
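
For reference, a rough sketch of how the state mapping quoted below could be
expressed as an enum (names and layout are illustrative only, not the actual
implementation):

    // Illustrative only; the codes follow the mapping Jun lists below.
    public enum ControllerStateCode {
        IDLE(0),
        CONTROLLER_CHANGE(1),
        BROKER_CHANGE(2),
        TOPIC_CHANGE(3),
        TOPIC_DELETION(4),
        PARTITION_REASSIGNMENT(5),
        AUTO_LEADER_BALANCE(6),
        MANUAL_LEADER_BALANCE(7),
        CONTROLLED_SHUTDOWN(8),
        ISR_CHANGE(9);

        public final int value;

        ControllerStateCode(int value) {
            this.value = value;
        }
    }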

On Wed, May 10, 2017 at 4:46 PM, Jun Rao  wrote:

> Hi, Onur,
>
> We probably don't want to do the 1-to-1 mapping from the event type to the
> controller state since some of the event types are implementation details.
> How about the following mapping?
>
> 0 - idle
> 1 - controller change (Startup, ControllerChange, Reelect)
> 2 - broker change (BrokerChange)
> 3 - topic creation/change (TopicChange, PartitionModifications)
> 4 - topic deletion (TopicDeletion, TopicDeletionStopReplicaResult)
> 5 - partition reassigning (PartitionReassignment,
> PartitionReassignmentIsrChange)
> 6 - auto leader balancing (AutoPreferredReplicaLeaderElection)
> 7 - manual leader balancing (PreferredReplicaLeaderElection)
> 8 - controlled shutdown (ControlledShutdown)
> 9 - isr change (IsrChangeNotification)
>
> For each state, we will add a corresponding timer to track the rate and the
> latency, if it's not there already (e.g., broker change and controlled
> shutdown). If there are future changes to the controller, we can make a
> call whether the new event should be mapped to one of the existing states
> or a new state.
>
> Thanks,
>
> Jun
>
> On Tue, May 9, 2017 at 6:17 PM, Onur Karaman  >
> wrote:
>
> > @Ismael, Jun
> > I've brought up an earlier point twice now, and it still doesn't feel like
> > it's been commented on or addressed, so I'm going to give it one more shot:
> > Assuming that ControllerState should reflect the current event being
> > processed, the KIP is missing states.
> >
> > The controller currently has 14 event types:
> > BrokerChange
> > TopicChange
> > PartitionModifications
> > TopicDeletion
> > PartitionReassignment
> > PartitionReassignmentIsrChange
> > IsrChangeNotification
> > PreferredReplicaLeaderElection
> > AutoPreferredReplicaLeaderElection
> > ControlledShutdown
> > TopicDeletionStopReplicaResult
> > Startup
> > ControllerChange
> > Reelect
> >
> > The KIP only shows 10 event types (and it's not a proper subset of the
> > above set).
> >
> > I think this mismatch would cause the ControllerState to incorrectly be
> in
> > the Idle state when in fact the controller could be doing a lot of work.
> >
> > 1. Should ControllerState exactly consist of the 14 controller event
> types
> > + the 1 Idle state?
> > 2. If so, what's the process for adding/removing/merging event types
> > w.r.t. this metric?
> >
> > On Tue, May 9, 2017 at 4:45 PM, Ismael Juma  wrote:
> >
> >> Becket,
> >>
> >> Are you OK with extending the metrics via a subsequent KIP (assuming
> that
> >> what we're doing here doesn't prevent that)? The KIP freeze is tomorrow
> >> (although I will give it an extra day or two as many in the community
> have
> >> been attending the Kafka Summit this week), so we should avoid
> increasing
> >> the scope unless it's important for future improvements.
> >>
> >> Thanks,
> >> Ismael
> >>
> >> On Wed, May 10, 2017 at 12:09 AM, Jun Rao  wrote:
> >>
> >> > Hi, Becket,
> >> >
> >> > q10. The reason why there is not a timer metric for broker change
> event
> >> is
> >> > that the controller currently always has a LeaderElectionRateAndTimeMs
> >> > timer metric (in ControllerStats).
> >> >
> >> > q11. I agree that it's useful to know the queue time in the
> >> > controller event queue and suggested that earlier. Onur thinks that
> >> it's a
> >> > bit too early to add that since we are about to change how to queue
> >> events
> >> > from ZK. Similarly, we will probably also make changes to batch
> requests
> >> > from the controller to the broker. So, perhaps we can add more metrics
> >> once
> >> > those changes in the controller have been made. For now, knowing the
> >> > controller state and the controller channel queue size is probably
> good
> >> > enough.
> >> >
> >> > Thanks,
> >> >
> >> > Jun
> >> >
> >> >
> >> >
> >> > On Mon, May 8, 2017 at 10:05 PM, Becket Qin 
> >> wrote:
> >> >
> >> >> @Ismael,
> >> >>
> >> >> About the stage and event type. Yes, I think each event handling
> should
> >> >> have those stages covered. It is similar to what we are doing for the
> >> >> requests on the broker side. We have benefited from such systematic
> >> metric
> >> >> structure a lot so I think it would be worth following the same way
> in
> >> the
> >> >> controller.
> >> >>
> >> >> As an example, for BrokerChangeEvent (or any event), I am thinking we
> >> >> would
> >> >> have the following metrics:
> >> >> 1. EventRate
> >> >> 2. EventQueueTime : The event queue time
> >> >> 3. EventHandlingTime: The event handling time (including zk path
> >> updates)
> >> >> 4. EventControllerChannelQueueTime: The queue time of the
> >> corresponding
> >> >> LeaderAndIsrRequest and UpdateMetadataRequest 

Re: [VOTE] KIP-153 (separating replication traffic from BytesOutPerSec metric)

2017-05-10 Thread Jun Rao
Hi,

Thanks for everyone who have voted. The results are

6 binding +1 (Ismael, Sriram, Guozhang, Rajini, Becket, Joel)
5 non-binding +1 (James, Michael, Edoardo, Roger, Dong)
0 -1

The vote passes.

Jun

On Tue, May 9, 2017 at 11:54 AM, Xavier Léauté  wrote:

> +1 (non-binding)
>
> On Tue, May 9, 2017 at 11:00 AM Dong Lin  wrote:
>
> > +1
> >
> > On Sun, May 7, 2017 at 7:40 PM, Jun Rao  wrote:
> >
> > > Hi, Everyone,
> > >
> > > Since this is a relatively simple change, I would like to start the
> > voting
> > > process for KIP-153 : Include only client traffic in BytesOutPerSec
> > metric.
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-153+%
> > > 3A+Include+only+client+traffic+in+BytesOutPerSec+metric
> > >
> > > The vote will run for a minimum of 72 hours.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> >
>


Re: [VOTE] KIP-140: A new thread for adding administrative RPCs for adding deleting and listing ACLs

2017-05-10 Thread Colin McCabe
Thanks, all!

With +1s from Jun Rao, Gwen Shapira, Ismael Juma, Ram Subramanian,
Michael Pearce, and Dongjin Lee,  the vote passes.

Thanks for all the comments.

cheers,
Colin


On Wed, May 10, 2017, at 17:51, Jun Rao wrote:
> Hi, Colin,
> 
> Thanks for the KIP. +1 from me.
> 
> Jun
> 
> On Wed, May 10, 2017 at 1:24 PM, Colin McCabe  wrote:
> 
> > Hi all,
> >
> > Some folks said that the previous VOTE thread was getting collapsed by
> > gmail into a different thread, so I am reposting this.
> >
> > The KIP page is here:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 140%3A+Add+administrative+RPCs+for+adding%2C+deleting%2C+and+listing+ACLs
> >
> > The previous VOTE thread was here:
> > https://www.mail-archive.com/dev@kafka.apache.org/msg71412.html
> >
> > best,
> > Colin
> >


Re: [VOTE] KIP-140: Add administrative RPCs for adding, deleting, and listing ACLs

2017-05-10 Thread Colin McCabe
On Wed, May 10, 2017, at 17:41, Jason Gustafson wrote:
> >
> > Hmm... I agree that the request is invalid, but it's invalid because the
> > version is not supported.  I guess I think UnsupportedVersionException
> > is a very specific exception that tells users exactly what is wrong
> > (aha! it's the version!) whereas InvalidRequestException is kind of a
> > generic catch-all exception for many different types of issue with
> > requests.  So I think it's preferable to throw the UVE here.
> 
> 
> That seems debatable. The broker doesn't actually know whether the
> unknown
> type is supported by a more recent version or not. UVE in that case is a
> guess and it could just be an invalid request. Anyway, not sure it
> matters
> too much as long as we document the behavior. The main point is that
> there's no reason for a properly behaving client to send a type that the
> broker doesn't understand.

I agree.  This is really a corner case, though, right?  If things are
operating properly, the client looks at the server API version and says
"nope, too old" and throws UVE without ever contacting the server.  In
that case, UVE is not a guess, since the client knows the old server
version does not support what it wants.  And in the case where the new
enum value somehow ends up on the old server, UVE is certainly better
than just immediately disconnecting, which is what we do most of the
time when there is a version mismatch.
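
As a rough illustration of that client-side behaviour (topic, partition and
timestamp below are made up), offsetsForTimes fails fast with
UnsupportedVersionException against a pre-0.10.1 broker, before any request is
sent:

    import java.util.Collections;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.errors.UnsupportedVersionException;

    public class OffsetsForTimesExample {
        public static void lookup(KafkaConsumer<byte[], byte[]> consumer) {
            TopicPartition tp = new TopicPartition("example-topic", 0);
            try {
                Map<TopicPartition, OffsetAndTimestamp> offsets =
                        consumer.offsetsForTimes(Collections.singletonMap(tp, 0L));
                System.out.println(offsets.get(tp));
            } catch (UnsupportedVersionException e) {
                // Thrown by the client itself when the broker is too old.
                System.err.println("Broker does not support offset lookup by time: " + e);
            }
        }
    }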

thanks,
Colin


> 
> Thanks,
> Jason
> 
> On Wed, May 10, 2017 at 4:41 PM, Colin McCabe  wrote:
> 
> > On Wed, May 10, 2017, at 15:54, Jason Gustafson wrote:
> > > Hey Colin,
> > >
> > > Thanks, I think continuing to bump the protocol makes sense. It's nice to
> > > be consistent with the other APIs. In the KIP, you have the following:
> > >
> > > > If the client is newer than the broker, some of the fields may show up as
> > > > UNKNOWN on the broker side.  In this case, the filter will get an
> > > > UnsupportedVersionException and the filter will not be applied.
> > >
> > > So the only case this would happen is if the client sent an invalid type
> > > for the request version, right? In that case, I would expect the client
> > > to raise an exception before sending the request when it sees that the
> > > broker does not support the needed request version. So maybe this should
> > > be an invalid request error instead?
> >
> > Hmm... I agree that the request is invalid, but it's invalid because the
> > version is not supported.  I guess I think UnsupportedVersionException
> > is a very specific exception that tells users exactly what is wrong
> > (aha! it's the version!) whereas InvalidRequestException is kind of a
> > generic catch-all exception for many different types of issue with
> > requests.  So I think it's preferable to throw the UVE here.
> >
> > I agree that the exception can be thrown by the client itself even
> > before sending anything over the wire.  This is the case with our
> > existing usage of UVE, which happens when someone calls offsetsForTimes
> > on a pre-0.10.1 broker that is too old to support that call.  The client
> > code throws the exception, not the broker.  I expect most UVEs will be
> > thrown by client code in the future since most of them correspond to
> > "the broker has no vocabulary to even understand $NEWFEATURE."  ACLs are
> > one case where we do have to have an additional server-side check in
> > case a poorly coded client is sending enum codes from the future.
> >
> > cheers,
> > Colin
> >
> > >
> > > Thanks,
> > > Jason
> > >
> > >
> > > On Wed, May 10, 2017 at 3:10 PM, Colin McCabe 
> > wrote:
> > >
> > > > Hi Jason,
> > > >
> > > > Thanks for taking a look.
> > > >
> > > > I think it's likely that we will bump the ApiVersion of the various
> > > > requests if we add a new resource type, operation type, or permission
> > > > type.  Although it may not strictly be required, bumping the version
> > > > number makes it a little bit clearer that the protocol is evolving and
> > > > clarifies the relative age of the server and client.  This would be
> > > > helpful in a debugging situation when using broker-api-versions.sh, or
> > > > similar, since it would clarify that the server does/does not support
> > > > some particular detail of ACLs.
> > > >
> > > > However, the v1 server will still need a way of responding to a v0
> > > > ListAclRequest.  In that case, any type not present in v0 will be
> > mapped
> > > > to UNKNOWN, as required.  This mapping could be done either on the
> > > > client or the server side... for simplicity's sake, it probably should
> > > > be done on the client side.
> > > >
> > > > The intention is to make it clear to the v0 client that "something is
> > > > going on" even if the v0 client can't see what the exact details are of
> > > > the v1 ACL.  For example, the v0 client  might see some ACLs on a
> > > > particular topic, but not be able to understand exactly what 

[jira] [Created] (KAFKA-5218) New Short serializer, deserializer, serde

2017-05-10 Thread Mario Molina (JIRA)
Mario Molina created KAFKA-5218:
---

 Summary: New Short serializer, deserializer, serde
 Key: KAFKA-5218
 URL: https://issues.apache.org/jira/browse/KAFKA-5218
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 0.10.2.0, 0.10.1.1
Reporter: Mario Molina
Priority: Minor
 Fix For: 0.11.0.0


There is no Short serializer/deserializer in the current clients component.

It could be useful when using Kafka Connect to write data to databases with 
SMALLINT fields (or similar), avoiding conversions to int and improving 
memory and network usage a bit.
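
A minimal sketch of what such a serializer/deserializer pair could look like
(illustrative only; not necessarily the code in the associated pull request):

{code}
import java.nio.ByteBuffer;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serializer;

public class ShortSerde {
    public static class ShortSerializer implements Serializer<Short> {
        @Override public void configure(Map<String, ?> configs, boolean isKey) { }
        @Override public byte[] serialize(String topic, Short data) {
            // Two bytes, big-endian, matching the other primitive serializers.
            return data == null ? null : ByteBuffer.allocate(2).putShort(data).array();
        }
        @Override public void close() { }
    }

    public static class ShortDeserializer implements Deserializer<Short> {
        @Override public void configure(Map<String, ?> configs, boolean isKey) { }
        @Override public Short deserialize(String topic, byte[] data) {
            return data == null ? null : ByteBuffer.wrap(data).getShort();
        }
        @Override public void close() { }
    }
}
{code}

A matching Serde could then be built with Serdes.serdeFrom(new ShortSerializer(), new ShortDeserializer()).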



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [VOTE] KIP-140: A new thread for adding administrative RPCs for adding deleting and listing ACLs

2017-05-10 Thread Jun Rao
Hi, Colin,

Thanks for the KIP. +1 from me.

Jun

On Wed, May 10, 2017 at 1:24 PM, Colin McCabe  wrote:

> Hi all,
>
> Some folks said that the previous VOTE thread was getting collapsed by
> gmail into a different thread, so I am reposting this.
>
> The KIP page is here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 140%3A+Add+administrative+RPCs+for+adding%2C+deleting%2C+and+listing+ACLs
>
> The previous VOTE thread was here:
> https://www.mail-archive.com/dev@kafka.apache.org/msg71412.html
>
> best,
> Colin
>


Re: [VOTE] KIP-140: A new thread for adding administrative RPCs for adding deleting and listing ACLs

2017-05-10 Thread Gwen Shapira
+1

The email title made me do a double-take: I thought you were adding a new
thread to Kafka for handling the admin RPCs...

On Wed, May 10, 2017 at 5:42 PM Jason Gustafson  wrote:

> +1
>
> On Wed, May 10, 2017 at 3:01 PM, Ismael Juma  wrote:
>
> > Repeating my vote here, +1 (binding).
> >
> > Ismael
> >
> > On Wed, May 10, 2017 at 9:24 PM, Colin McCabe 
> wrote:
> >
> > > Hi all,
> > >
> > > Some folks said that the previous VOTE thread was getting collapsed by
> > > gmail into a different thread, so I am reposting this.
> > >
> > > The KIP page is here:
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 140%3A+Add+administrative+RPCs+for+adding%2C+deleting%
> > 2C+and+listing+ACLs
> > >
> > > The previous VOTE thread was here:
> > > https://www.mail-archive.com/dev@kafka.apache.org/msg71412.html
> > >
> > > best,
> > > Colin
> > >
> >
>


[jira] [Created] (KAFKA-5217) Improve Streams internal exception handling

2017-05-10 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-5217:
--

 Summary: Improve Streams internal exception handling
 Key: KAFKA-5217
 URL: https://issues.apache.org/jira/browse/KAFKA-5217
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.2.1
Reporter: Matthias J. Sax
Assignee: Matthias J. Sax
 Fix For: 0.11.0.0


Streams does not handle all exceptions gracefully at the moment, but tends to throw 
exceptions to the user, even when we could handle them internally and recover 
automatically. We want to revisit this exception handling to be more resilient.

For example, for any kind of rebalance exception, we should just log it, and 
rejoin the consumer group.
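
A hypothetical sketch of the idea in plain consumer terms (exception choice and
handling are illustrative, not the planned Streams change): log the
rebalance-related failure and keep polling so the member rejoins, instead of
surfacing the exception.

{code}
import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class RejoinOnRebalanceExample {
    public static void pollLoop(KafkaConsumer<String, String> consumer) {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100L);
            System.out.println("Fetched " + records.count() + " records");
            try {
                consumer.commitSync();
            } catch (CommitFailedException e) {
                // The group rebalanced underneath us; the next poll() rejoins,
                // so just log and continue rather than propagating the error.
                System.err.println("Commit failed due to rebalance, retrying: " + e);
            }
        }
    }
}
{code}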



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [VOTE] KIP-140: A new thread for adding administrative RPCs for adding deleting and listing ACLs

2017-05-10 Thread Jason Gustafson
+1

On Wed, May 10, 2017 at 3:01 PM, Ismael Juma  wrote:

> Repeating my vote here, +1 (binding).
>
> Ismael
>
> On Wed, May 10, 2017 at 9:24 PM, Colin McCabe  wrote:
>
> > Hi all,
> >
> > Some folks said that the previous VOTE thread was getting collapsed by
> > gmail into a different thread, so I am reposting this.
> >
> > The KIP page is here:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 140%3A+Add+administrative+RPCs+for+adding%2C+deleting%
> 2C+and+listing+ACLs
> >
> > The previous VOTE thread was here:
> > https://www.mail-archive.com/dev@kafka.apache.org/msg71412.html
> >
> > best,
> > Colin
> >
>


Re: [VOTE] KIP-140: Add administrative RPCs for adding, deleting, and listing ACLs

2017-05-10 Thread Jason Gustafson
>
> Hmm... I agree that the request is invalid, but it's invalid because the
> version is not supported.  I guess I think UnsupportedVersionException
> is a very specific exception that tells users exactly what is wrong
> (aha! it's the version!) whereas InvalidRequestException is kind of a
> generic catch-all exception for many different types of issue with
> requests.  So I think it's preferable to throw the UVE here.


That seems debatable. The broker doesn't actually know whether the unknown
type is supported by a more recent version or not. UVE in that case is a
guess and it could just be an invalid request. Anyway, not sure it matters
too much as long as we document the behavior. The main point is that
there's no reason for a properly behaving client to send a type that the
broker doesn't understand.

Thanks,
Jason

On Wed, May 10, 2017 at 4:41 PM, Colin McCabe  wrote:

> On Wed, May 10, 2017, at 15:54, Jason Gustafson wrote:
> > Hey Colin,
> >
> > Thanks, I think continuing to bump the protocol makes sense. It's nice to
> > be consistent with the other APIs. In the KIP, you have the following:
> >
> > > If the client is newer than the broker, some of the fields may show up as
> > > UNKNOWN on the broker side.  In this case, the filter will get an
> > > UnsupportedVersionException and the filter will not be applied.
> >
> > So the only case this would happen is if the client sent an invalid type
> > for the request version, right? In that case, I would expect the client
> > to raise an exception before sending the request when it sees that the
> > broker does not support the needed request version. So maybe this should
> > be an invalid request error instead?
>
> Hmm... I agree that the request is invalid, but it's invalid because the
> version is not supported.  I guess I think UnsupportedVersionException
> is a very specific exception that tells users exactly what is wrong
> (aha! it's the version!) whereas InvalidRequestException is kind of a
> generic catch-all exception for many different types of issue with
> requests.  So I think it's preferable to throw the UVE here.
>
> I agree that the exception can be thrown by the client itself even
> before sending anything over the wire.  This is the case with our
> existing usage of UVE, which happens when someone calls offsetsForTimes
> on a pre-0.10.1 broker that is too old to support that call.  The client
> code throws the exception, not the broker.  I expect most UVEs will be
> thrown by client code in the future since most of them correspond to
> "the broker has no vocabulary to even understand $NEWFEATURE."  ACLs are
> one case where we do have to have an additional server-side check in
> case a poorly coded client is sending enum codes from the future.
>
> cheers,
> Colin
>
> >
> > Thanks,
> > Jason
> >
> >
> > On Wed, May 10, 2017 at 3:10 PM, Colin McCabe 
> wrote:
> >
> > > Hi Jason,
> > >
> > > Thanks for taking a look.
> > >
> > > I think it's likely that we will bump the ApiVersion of the various
> > > requests if we add a new resource type, operation type, or permission
> > > type.  Although it may not strictly be required, bumping the version
> > > number makes it a little bit clearer that the protocol is evolving and
> > > clarifies the relative age of the server and client.  This would be
> > > helpful in a debugging situation when using broker-api-versions.sh, or
> > > similar, since it would clarify that the server does/does not support
> > > some particular detail of ACLs.
> > >
> > > However, the v1 server will still need a way of responding to a v0
> > > ListAclRequest.  In that case, any type not present in v0 will be
> mapped
> > > to UNKNOWN, as required.  This mapping could be done either on the
> > > client or the server side... for simplicity's sake, it probably should
> > > be done on the client side.
> > >
> > > The intention is to make it clear to the v0 client that "something is
> > > going on" even if the v0 client can't see what the exact details are of
> > > the v1 ACL.  For example, the v0 client  might see some ACLs on a
> > > particular topic, but not be able to understand exactly what they are
> > > (they are type UNKNOWN) until you upgrade your client.  This would at
> > > least give you some idea what was going on if a user was having trouble
> > > doing something with that topic.  There may be cases where we can't
> even
> > > provide this much (for example, if we ever implement role-based access
> > > control, the v0 client simply won't have any idea about it).  But we
> are
> > > doing what we can for compatibility.
> > >
> > > cheers,
> > > Colin
> > >
> > >
> > > On Wed, May 10, 2017, at 12:26, Jason Gustafson wrote:
> > > > Hi Colin,
> > > >
> > > > Thanks for the KIP. Looks good overall. One thing I wasn't too clear
> > > > about
> > > > is whether new resource types, operations, or permissions require a
> > > > version
> > > > bump for the three new 

[jira] [Updated] (KAFKA-5194) KIP-153: Include only client traffic in BytesOutPerSec metric

2017-05-10 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5194:
---
Summary: KIP-153: Include only client traffic in BytesOutPerSec metric  
(was: KIP-153 : Include only client traffic in BytesOutPerSec metric)

> KIP-153: Include only client traffic in BytesOutPerSec metric
> -
>
> Key: KAFKA-5194
> URL: https://issues.apache.org/jira/browse/KAFKA-5194
> Project: Kafka
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Mickael Maison
>Assignee: Mickael Maison
> Fix For: 0.11.0.0
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-153+%3A+Include+only+client+traffic+in+BytesOutPerSec+metric



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Build failed in Jenkins: kafka-trunk-jdk8 #1499

2017-05-10 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-5099; Replica Deletion Regression from KIP-101

--
[...truncated 3.58 MB...]

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareAbortState
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareAbortState
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldNotRetryOnCommitWhenAppendToLogFailsWithNotCoordinator STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldNotRetryOnCommitWhenAppendToLogFailsWithNotCoordinator PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithErrorsNoneOnAddPartitionWhenNoErrorsAndPartitionsTheSame 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithErrorsNoneOnAddPartitionWhenNoErrorsAndPartitionsTheSame PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsEmpty STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsEmpty PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendCompleteAbortToLogOnEndTxnWhenStatusIsOngoingAndResultIsAbort 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendCompleteAbortToLogOnEndTxnWhenStatusIsOngoingAndResultIsAbort PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendCompleteCommitToLogOnEndTxnWhenStatusIsOngoingAndResultIsCommit 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendCompleteCommitToLogOnEndTxnWhenStatusIsOngoingAndResultIsCommit 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestAddPartitionsToTransactionWhenTransactionalIdIsEmpty
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestAddPartitionsToTransactionWhenTransactionalIdIsEmpty
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendPrepareAbortToLogOnEndTxnWhenStatusIsOngoingAndResultIsAbort STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendPrepareAbortToLogOnEndTxnWhenStatusIsOngoingAndResultIsAbort PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAcceptInitPidAndReturnNextPidWhenTransactionalIdIsNull STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAcceptInitPidAndReturnNextPidWhenTransactionalIdIsNull PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRemoveTransactionsForPartitionOnEmigration STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRemoveTransactionsForPartitionOnEmigration PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareCommitState
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareCommitState
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnInvalidTxnRequestOnEndTxnRequestWhenStatusIsCompleteCommitAndResultIsNotCommit
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnInvalidTxnRequestOnEndTxnRequestWhenStatusIsCompleteCommitAndResultIsNotCommit
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 

[jira] [Updated] (KAFKA-5216) Cached Session/Window store may return error on iterator.peekNextKey()

2017-05-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/KAFKA-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xavier Léauté updated KAFKA-5216:
-
Status: Patch Available  (was: Open)

> Cached Session/Window store may return error on iterator.peekNextKey()
> --
>
> Key: KAFKA-5216
> URL: https://issues.apache.org/jira/browse/KAFKA-5216
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0, 0.11.0.0
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
>
> {{AbstractMergedSortedCacheStoreIterator}} uses the wrong cache key 
> deserializer in {{peekNextKey}}. This may result in errors or incorrect keys 
> returned from {{peekNextKey}} on a {{WindowStoreIterator}} or 
> {{SessionStoreIterator}} for a cached window or session store.
> CachingKeyValueStore does not seem to be affected.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5216) Cached Session/Window store may return error on iterator.peekNextKey()

2017-05-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005683#comment-16005683
 ] 

ASF GitHub Bot commented on KAFKA-5216:
---

GitHub user xvrl opened a pull request:

https://github.com/apache/kafka/pull/3016

KAFKA-5216 fix error on peekNextKey in cached window/session store iterators

@guozhangwang @mjsax @dguy 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xvrl/kafka kafka-5216

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3016.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3016






> Cached Session/Window store may return error on iterator.peekNextKey()
> --
>
> Key: KAFKA-5216
> URL: https://issues.apache.org/jira/browse/KAFKA-5216
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0, 0.11.0.0
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
>
> {{AbstractMergedSortedCacheStoreIterator}} uses the wrong cache key 
> deserializer in {{peekNextKey}}. This may result in errors or incorrect keys 
> returned from {{peekNextKey}} on a {{WindowStoreIterator}} or 
> {{SessionStoreIterator}} for a cached window or session store.
> CachingKeyValueStore does not seem to be affected.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3016: KAFKA-5216 fix error on peekNextKey in cached wind...

2017-05-10 Thread xvrl
GitHub user xvrl opened a pull request:

https://github.com/apache/kafka/pull/3016

KAFKA-5216 fix error on peekNextKey in cached window/session store iterators

@guozhangwang @mjsax @dguy 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xvrl/kafka kafka-5216

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3016.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3016






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-5216) Cached Session/Window store may return error on iterator.peekNextKey()

2017-05-10 Thread JIRA
Xavier Léauté created KAFKA-5216:


 Summary: Cached Session/Window store may return error on 
iterator.peekNextKey()
 Key: KAFKA-5216
 URL: https://issues.apache.org/jira/browse/KAFKA-5216
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.2.0, 0.11.0.0
Reporter: Xavier Léauté
Assignee: Xavier Léauté


{{AbstractMergedSortedCacheStoreIterator}} uses the wrong cache key 
deserializer in {{peekNextKey}}. This may result in errors or incorrect keys 
returned from {{peekNextKey}} on a {{WindowStoreIterator}} or 
{{SessionStoreIterator}} for a cached window or session store.

CachingKeyValueStore does not seem to be affected.
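
For context, a minimal sketch of the affected call path when querying a cached window store; the store name, key/value types and time range are illustrative assumptions, not taken from a real application:

{code}
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyWindowStore;
import org.apache.kafka.streams.state.WindowStoreIterator;

public class PeekNextKeyExample {

    // Iterates a (possibly caching) window store and calls peekNextKey(), the
    // method affected by this ticket. Store name and types are placeholders.
    static void printRecentWindows(KafkaStreams streams) {
        ReadOnlyWindowStore<String, Long> store =
                streams.store("counts-store", QueryableStoreTypes.<String, Long>windowStore());
        long now = System.currentTimeMillis();
        try (WindowStoreIterator<Long> iter = store.fetch("some-key", now - 60_000L, now)) {
            while (iter.hasNext()) {
                // Pre-fix, this call may fail or return a wrongly deserialized key
                // when the next entry comes from the cache layer.
                Long windowStart = iter.peekNextKey();
                KeyValue<Long, Long> record = iter.next();
                System.out.println(windowStart + " -> " + record.value);
            }
        }
    }
}
{code}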



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [VOTE] KIP-140: Add administrative RPCs for adding, deleting, and listing ACLs

2017-05-10 Thread Colin McCabe
On Wed, May 10, 2017, at 15:54, Jason Gustafson wrote:
> Hey Colin,
> 
> Thanks, I think continuing to bump the protocol makes sense. It's nice to
> be consistent with the other APIs. In the KIP, you have the following:
> 
> > If the client is newer than the broker, some of the fields may show up as
> UNKNOWN on the broker side.  In this case, the filter will get an
> UnsupportedVersionException and the filter will not be applied.
> 
> So the only case this would happen is if the client sent an invalid type
> for the request version, right? In that case, I would expect the client
> to raise an exception before sending the request when it sees that the
> broker does not support the needed request version. So maybe this should be
> an invalid request error instead?

Hmm... I agree that the request is invalid, but it's invalid because the
version is not supported.  I guess I think UnsupportedVersionException
is a very specific exception that tells users exactly what is wrong
(aha! it's the version!) whereas InvalidRequestException is kind of a
generic catch-all exception for many different types of issue with
requests.  So I think it's preferable to throw the UVE here.

I agree that the exception can be thrown by the client itself even
before sending anything over the wire.  This is the case with our
existing usage of UVE, which happens when someone calls offsetsForTimes
on a pre-0.10.1 broker that is too old to support that call.  The client
code throws the exception, not the broker.  I expect most UVEs will be
thrown by client code in the future since most of them correspond to
"the broker has no vocabulary to even understand $NEWFEATURE."  ACLs are
one case where we do have to have an additional server-side check in
case a poorly coded client is sending enum codes from the future.

cheers,
Colin
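
To make that last point concrete, a minimal sketch of this kind of defensive mapping (names and codes are made up for illustration, not the actual request classes):

{code}
// Illustrative only: not the real Kafka ACL classes. Shows how a peer can map
// enum codes it does not recognize (e.g. sent by a newer client) to UNKNOWN.
public enum AclOperationCode {
    UNKNOWN((byte) 0),
    READ((byte) 1),
    WRITE((byte) 2);

    private final byte code;

    AclOperationCode(byte code) {
        this.code = code;
    }

    public byte code() {
        return code;
    }

    // Codes introduced by a newer protocol version fall back to UNKNOWN instead
    // of breaking deserialization of the whole request or response.
    public static AclOperationCode fromCode(byte code) {
        for (AclOperationCode op : values()) {
            if (op.code == code) {
                return op;
            }
        }
        return UNKNOWN;
    }
}
{code}

The server-side check mentioned above would then reject a filter that resolves to UNKNOWN with an UnsupportedVersionException, as the KIP excerpt quoted below describes.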

> 
> Thanks,
> Jason
> 
> 
> On Wed, May 10, 2017 at 3:10 PM, Colin McCabe  wrote:
> 
> > Hi Jason,
> >
> > Thanks for taking a look.
> >
> > I think it's likely that we will bump the ApiVersion of the various
> > requests if we add a new resource type, operation type, or permission
> > type.  Although it may not strictly be required, bumping the version
> > number makes it a little bit clearer that the protocol is evolving and
> > clarifies the relative age of the server and client.  This would be
> > helpful in a debugging situation when using broker-api-versions.sh, or
> > similar, since it would clarify that the server does/does not support
> > some particular detail of ACLs.
> >
> > However, the v1 server will still need a way of responding to a v0
> > ListAclRequest.  In that case, any type not present in v0 will be mapped
> > to UNKNOWN, as required.  This mapping could be done either on the
> > client or the server side... for simplicity's sake, it probably should
> > be done on the client side.
> >
> > The intention is to make it clear to the v0 client that "something is
> > going on" even if the v0 client can't see what the exact details are of
> > the v1 ACL.  For example, the v0 client  might see some ACLs on a
> > particular topic, but not be able to understand exactly what they are
> > (they are type UNKNOWN) until you upgrade your client.  This would at
> > least give you some idea what was going on if a user was having trouble
> > doing something with that topic.  There may be cases where we can't even
> > provide this much (for example, if we ever implement role-based access
> > control, the v0 client simply won't have any idea about it).  But we are
> > doing what we can for compatibility.
> >
> > cheers,
> > Colin
> >
> >
> > On Wed, May 10, 2017, at 12:26, Jason Gustafson wrote:
> > > Hi Colin,
> > >
> > > Thanks for the KIP. Looks good overall. One thing I wasn't too clear
> > > about
> > > is whether new resource types, operations, or permissions require a
> > > version
> > > bump for the three new request types. On reading the proposal, it sort of
> > > sounds like the intent is not to bump the versions and let the new type
> > > be
> > > mapped to "unknown" on clients/brokers that don't support it. Is that
> > > correct?
> > >
> > > Thanks,
> > > Jason
> > >
> > > On Wed, May 10, 2017 at 8:24 AM, Ismael Juma  wrote:
> > >
> > > > Thanks for the KIP, +1(binding) from me. Colin, it may make sense to
> > start
> > > > a new thread for this as GMail is currently hiding it in the middle of
> > the
> > > > discuss thread. Please mention the date the thread was started along
> > with
> > > > the 3 +1s (1 binding) so far.
> > > >
> > > > Ismael
> > > >
> > > > On Sat, Apr 29, 2017 at 1:09 AM, Colin McCabe 
> > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I'd like to start the voting for KIP-140: Add administrative RPCs for
> > > > > adding, deleting, and listing ACLs.  This provides an API for adding,
> > > > > deleting, and listing the access control lists (ACLs) which are used
> > to
> > > > > control access on Kafka topics and brokers.

[jira] [Commented] (KAFKA-4950) ConcurrentModificationException when iterating over Kafka Metrics

2017-05-10 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005658#comment-16005658
 ] 

Vahid Hashemian commented on KAFKA-4950:


[~dpostoronca] Thanks for providing the additional details. I cannot reproduce 
the error when running a simple consumer (that polls and collects the metrics 
in a loop) with the provided {{KafkaMetricSet}} class. I see that both 
{{PartitionStates.partitionSet()}} and the overloaded {{getValue()}} methods 
are called but they don't seem to interfere with each other. Perhaps I'm 
missing something? 
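
For reference, a minimal sketch of that kind of loop (broker address, topic, serdes and timing are placeholders; {{KafkaMetricSet}} is the class from the description below):

{code}
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import com.codahale.metrics.Gauge;
import com.codahale.metrics.Metric;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MetricsPollLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "metrics-repro");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic"));
            KafkaMetricSet metricSet = new KafkaMetricSet(consumer);
            while (true) {
                consumer.poll(100);
                // Read every gauge, which exercises the measure() path that
                // ends up calling PartitionStates.partitionSet().
                for (Map.Entry<String, Metric> e : metricSet.getMetrics().entrySet()) {
                    ((Gauge<?>) e.getValue()).getValue();
                }
            }
        }
    }
}
{code}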

> ConcurrentModificationException when iterating over Kafka Metrics
> -
>
> Key: KAFKA-4950
> URL: https://issues.apache.org/jira/browse/KAFKA-4950
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.1
>Reporter: Dumitru Postoronca
>Assignee: Vahid Hashemian
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> It looks like when calling {{PartitionStates.partitionSet()}}, while the 
> resulting set is being built, the internal state of the assigned partitions 
> can change, which leads to a ConcurrentModificationException during the copy 
> operation.
> {code}
> java.util.ConcurrentModificationException
> at 
> java.util.LinkedHashMap$LinkedHashIterator.nextNode(LinkedHashMap.java:719)
> at 
> java.util.LinkedHashMap$LinkedKeyIterator.next(LinkedHashMap.java:742)
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:119)
> at 
> org.apache.kafka.common.internals.PartitionStates.partitionSet(PartitionStates.java:66)
> at 
> org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedPartitions(SubscriptionState.java:291)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$ConsumerCoordinatorMetrics$1.measure(ConsumerCoordinator.java:783)
> at 
> org.apache.kafka.common.metrics.KafkaMetric.value(KafkaMetric.java:61)
> at 
> org.apache.kafka.common.metrics.KafkaMetric.value(KafkaMetric.java:52)
> {code}
> {code}
> // client code:
> import java.util.Collections;
> import java.util.HashMap;
> import java.util.Map;
> import com.codahale.metrics.Gauge;
> import com.codahale.metrics.Metric;
> import com.codahale.metrics.MetricSet;
> import org.apache.kafka.clients.consumer.KafkaConsumer;
> import org.apache.kafka.common.MetricName;
> import static com.codahale.metrics.MetricRegistry.name;
> public class KafkaMetricSet implements MetricSet {
>     private final KafkaConsumer<?, ?> client;
>
>     public KafkaMetricSet(KafkaConsumer<?, ?> client) {
>         this.client = client;
>     }
>
>     @Override
>     public Map<String, Metric> getMetrics() {
>         final Map<String, Metric> gauges = new HashMap<String, Metric>();
>         Map<MetricName, ? extends org.apache.kafka.common.Metric> m = client.metrics();
>         for (final Map.Entry<MetricName, ? extends org.apache.kafka.common.Metric> e : m.entrySet()) {
>             gauges.put(name(e.getKey().group(), e.getKey().name(), "count"), new Gauge<Double>() {
>                 @Override
>                 public Double getValue() {
>                     return e.getValue().value(); // exception thrown here
>                 }
>             });
>         }
>         return Collections.unmodifiableMap(gauges);
>     }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5213) IllegalStateException in ensureOpenForRecordAppend

2017-05-10 Thread dan norwood (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005651#comment-16005651
 ] 

dan norwood commented on KAFKA-5213:


awesome we found a real problem.

i did run this locally with everything pointed at trunk and i do not see the 
issue... not sure why that would matter. 

> IllegalStateException in ensureOpenForRecordAppend
> --
>
> Key: KAFKA-5213
> URL: https://issues.apache.org/jira/browse/KAFKA-5213
> Project: Kafka
>  Issue Type: Bug
>Reporter: dan norwood
>Assignee: Apurva Mehta
>Priority: Critical
> Fix For: 0.11.0.0
>
>
> i have a streams app that was working recently while pointing at trunk. this 
> morning i ran it and now get
> {noformat}
> [2017-05-10 14:29:26,266] ERROR stream-thread 
> [_confluent-controlcenter-3-3-0-1-04624550-88f9-4557-a47f-3dfbec3bc3d1-StreamThread-4]
>  Streams application error during processing: {} 
> (org.apache.kafka.streams.processor.internals.StreamThread:518)
> java.lang.IllegalStateException: Tried to append a record, but 
> MemoryRecordsBuilder is closed for record appends
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.ensureOpenForRecordAppend(MemoryRecordsBuilder.java:607)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendLegacyRecord(MemoryRecordsBuilder.java:567)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:353)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:382)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:440)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:463)
>   at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.tryAppend(ProducerBatch.java:83)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.tryAppend(RecordAccumulator.java:257)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:210)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:645)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:598)
>   at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:97)
>   at 
> org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:59)
>   at 
> org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStore.put(ChangeLoggingSegmentedBytesStore.java:55)
>   at 
> org.apache.kafka.streams.state.internals.MeteredSegmentedBytesStore.put(MeteredSegmentedBytesStore.java:100)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:51)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:42)
>   at 
> org.apache.kafka.streams.state.internals.CachingWindowStore$1.apply(CachingWindowStore.java:90)
>   at 
> org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:145)
>   at 
> org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:239)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:214)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:122)
>   at 
> org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:143)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:111)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:47)
>   at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:187)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:133)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:82)
>   at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:69)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:175)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.processAndPunctuate(StreamThread.java:657)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:540)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:511)
> {noformat}



--
This message was sent by Atlassian JIRA

Re: [VOTE] KIP-133: List and Alter Configs Admin APIs (second attempt)

2017-05-10 Thread Jason Gustafson
+1 Thanks for the KIP.

On Wed, May 10, 2017 at 8:55 AM, Jun Rao  wrote:

> Hi, Ismael,
>
> Thanks for the KIP. Looks good overall. A couple of minor comments.
>
> 1. Currently, quotas can be updated at the (user, client-id) combination
> level. So, it seems that we need to reflect that somehow in both the wire
> protocol and the admin api.
> 2. It would be useful to clarify what configs are considered read-only.
>
> Jun
>
> On Mon, May 8, 2017 at 8:52 AM, Ismael Juma  wrote:
>
> > Quick update, I renamed ListConfigs to DescribeConfigs (and related
> classes
> > and methods) as that is more consistent with other protocols (like
> > ListGroups and DescribeGroups). So the new link is:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 133%3A+Describe+and+Alter+Configs+Admin+APIs
> >
> > Ismael
> >
> > On Mon, May 8, 2017 at 5:01 AM, Ismael Juma  wrote:
> >
> > > [Seems like the original message ended up in the discuss thread in
> GMail,
> > > so trying again]
> > >
> > > Hi everyone,
> > >
> > > I believe I addressed the comments in the discussion thread and given
> the
> > > impending KIP freeze, I would like to start the voting process for
> > KIP-133:
> > > List and Alter Configs Admin APIs:
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-133%3A
> > > +List+and+Alter+Configs+Admin+APIs
> > >
> > > As mentioned previously, this KIP and KIP-140 (Add administrative RPCs
> > for
> > > adding, deleting, and listing ACLs) complete the AdminClient work that
> > was
> > > originally proposed as part KIP-4.
> > >
> > > If you have additional feedback, please share it in the discuss thread.
> > >
> > > The vote will run for a minimum of 72 hours.
> > >
> > > Thanks,
> > > Ismael
> > >
> >
>


Re: [VOTE] KIP-138: Change punctuate semantics

2017-05-10 Thread Guozhang Wang
+1. Thanks Michal!

On Wed, May 10, 2017 at 4:35 AM, Thomas Becker  wrote:

> +1
>
> On Wed, 2017-05-10 at 10:52 +0100, Michal Borowiecki wrote:
> > Hi all,
> >
> > This vote thread has gone quiet.
> >
> > In view of the looming cut-off for 0.11.0.0 I'd like to encourage
> > anyone
> > who cares about this to have a look and vote and/or comment on this
> > proposal.
> >
> > Thanks,
> >
> > Michał
> >
> >
> > On 07/05/17 10:16, Eno Thereska wrote:
> > >
> > > +1 (non binding)
> > >
> > > Thanks
> > > Eno
> > > >
> > > > On May 6, 2017, at 11:01 PM, Bill Bejeck 
> > > > wrote:
> > > >
> > > > +1
> > > >
> > > > Thanks,
> > > > Bill
> > > >
> > > > > On Sat, May 6, 2017 at 5:58 PM, Matthias J. Sax
> > > > > wrote:
> > > >
> > > > >
> > > > > +1
> > > > >
> > > > > Thanks a lot for this KIP!
> > > > >
> > > > > -Matthias
> > > > >
> > > > > On 5/6/17 10:18 AM, Michal Borowiecki wrote:
> > > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > Given I'm not seeing any contentious issues remaining on the
> > > > > > discussion
> > > > > > thread, I'd like to initiate the vote for:
> > > > > >
> > > > > > KIP-138: Change punctuate semantics
> > > > > >
> > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > 138%3A+Change+punctuate+semantics
> > > > > >
> > > > > >
> > > > > > Thanks,
> > > > > > Michał
> > > > > > --
> > > > > >  Michal Borowiecki
> > > > > > Senior Software Engineer L4
> > > > > >   T:  +44 208 742 1600
> > > > > >
> > > > > >
> > > > > >   +44 203 249 8448
> > > > > >
> > > > > >
> > > > > >
> > > > > >   E:  michal.borowie...@openbet.com
> > > > > >   W:  www.openbet.com 
> > > > > >
> > > > > >
> > > > > >   OpenBet Ltd
> > > > > >
> > > > > >   Chiswick Park Building 9
> > > > > >
> > > > > >   566 Chiswick High Rd
> > > > > >
> > > > > >   London
> > > > > >
> > > > > >   W4 5XT
> > > > > >
> > > > > >   UK
> > > > > >
> > > > > >
> > > > > > 
> > > > > >
> > > > > > This message is confidential and intended only for the
> > > > > > addressee. If you
> > > > > > have received this message in error, please immediately
> > > > > > notify the
> > > > > > postmas...@openbet.com  and
> > > > > > delete it
> > > > > > from your system as well as any copies. The content of e-
> > > > > > mails as well
> > > > > > as traffic data may be monitored by OpenBet for employment
> > > > > > and security
> > > > > > purposes. To protect the environment please do not print this
> > > > > > e-mail
> > > > > > unless necessary. OpenBet Ltd. Registered Office: Chiswick
> > > > > > Park Building
> > > > > > 9, 566 Chiswick High Road, London, W4 5XT, United Kingdom. A
> > > > > > company
> > > > > > registered in England and Wales. Registered no. 3134634. VAT
> > > > > > no.
> > > > > > GB927523612
> > > > > >
> --
>
>
> Tommy Becker
>
> Senior Software Engineer
>
> O +1 919.460.4747
>
> tivo.com
>
>
> 
>
> This email and any attachments may contain confidential and privileged
> material for the sole use of the intended recipient. Any review, copying,
> or distribution of this email (or any attachments) by others is prohibited.
> If you are not the intended recipient, please contact the sender
> immediately and permanently delete this email and any attachments. No
> employee or agent of TiVo Inc. is authorized to conclude any binding
> agreement on behalf of TiVo Inc. by email. Binding agreements with TiVo
> Inc. may only be made by a signed written agreement.
>



-- 
-- Guozhang


Re: [VOTE] KIP-140: Add administrative RPCs for adding, deleting, and listing ACLs

2017-05-10 Thread Jason Gustafson
Hey Colin,

Thanks, I think continuing to bump the protocol makes sense. It's nice to
be consistent with the other APIs. In the KIP, you have the following:

> If the client is newer than the broker, some of the fields may show up as
UNKNOWN on the broker side.  In this case, the filter will get an
UnsupportedVersionException and the filter will not be applied.

So the only case this would happen is if the client sent an invalid type
for the request version, right? In that case, I would expect the client to
raise an exception before sending the request when it sees that the broker
does not support the needed request version. So maybe this should be an
invalid request error instead?

Thanks,
Jason


On Wed, May 10, 2017 at 3:10 PM, Colin McCabe  wrote:

> Hi Jason,
>
> Thanks for taking a look.
>
> I think it's likely that we will bump the ApiVersion of the various
> requests if we add a new resource type, operation type, or permission
> type.  Although it may not strictly be required, bumping the version
> number makes it a little bit clearer that the protocol is evolving and
> clarifies the relative age of the server and client.  This would be
> helpful in a debugging situation when using broker-api-versions.sh, or
> similar, since it would clarify that the server does/does not support
> some particular detail of ACLs.
>
> However, the v1 server will still need a way of responding to a v0
> ListAclRequest.  In that case, any type not present in v0 will be mapped
> to UNKNOWN, as required.  This mapping could be done either on the
> client or the server side... for simplicity's sake, it probably should
> be done on the client side.
>
> The intention is to make it clear to the v0 client that "something is
> going on" even if the v0 client can't see what the exact details are of
> the v1 ACL.  For example, the v0 client  might see some ACLs on a
> particular topic, but not be able to understand exactly what they are
> (they are type UNKNOWN) until you upgrade your client.  This would at
> least give you some idea what was going on if a user was having trouble
> doing something with that topic.  There may be cases where we can't even
> provide this much (for example, if we ever implement role-based access
> control, the v0 client simply won't have any idea about it).  But we are
> doing what we can for compatibility.
>
> cheers,
> Colin
>
>
> On Wed, May 10, 2017, at 12:26, Jason Gustafson wrote:
> > Hi Colin,
> >
> > Thanks for the KIP. Looks good overall. One thing I wasn't too clear
> > about
> > is whether new resource types, operations, or permissions require a
> > version
> > bump for the three new request types. On reading the proposal, it sort of
> > sounds like the intent is not to bump the versions and let the new type
> > be
> > mapped to "unknown" on clients/brokers that don't support it. Is that
> > correct?
> >
> > Thanks,
> > Jason
> >
> > On Wed, May 10, 2017 at 8:24 AM, Ismael Juma  wrote:
> >
> > > Thanks for the KIP, +1(binding) from me. Colin, it may make sense to
> start
> > > a new thread for this as GMail is currently hiding it in the middle of
> the
> > > discuss thread. Please mention the date the thread was started along
> with
> > > the 3 +1s (1 binding) so far.
> > >
> > > Ismael
> > >
> > > On Sat, Apr 29, 2017 at 1:09 AM, Colin McCabe 
> wrote:
> > >
> > > > Hi all,
> > > >
> > > > I'd like to start the voting for KIP-140: Add administrative RPCs for
> > > > adding, deleting, and listing ACLs.  This provides an API for adding,
> > > > deleting, and listing the access control lists (ACLs) which are used
> to
> > > > control access on Kafka topics and brokers.
> > > >
> > > > The wiki page is here:
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 140%3A+Add+administrative+RPCs+for+adding%2C+deleting%
> > > 2C+and+listing+ACLs
> > > >
> > > > The previous [DISCUSS] thread:
> > > > https://www.mail-archive.com/dev@kafka.apache.org/msg70858.html
> > > >
> > > > cheers,
> > > > Colin
> > > >
> > >
>


[jira] [Commented] (KAFKA-5213) IllegalStateException in ensureOpenForRecordAppend

2017-05-10 Thread Apurva Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005584#comment-16005584
 ] 

Apurva Mehta commented on KAFKA-5213:
-

This also explains why it wasn't seen till now: it only occurs when there is a 
race condition between two appends of very different sizes.

> IllegalStateException in ensureOpenForRecordAppend
> --
>
> Key: KAFKA-5213
> URL: https://issues.apache.org/jira/browse/KAFKA-5213
> Project: Kafka
>  Issue Type: Bug
>Reporter: dan norwood
>Assignee: Apurva Mehta
>Priority: Critical
> Fix For: 0.11.0.0
>
>
> i have a streams app that was working recently while pointing at trunk. this 
> morning i ran it and now get
> {noformat}
> [2017-05-10 14:29:26,266] ERROR stream-thread 
> [_confluent-controlcenter-3-3-0-1-04624550-88f9-4557-a47f-3dfbec3bc3d1-StreamThread-4]
>  Streams application error during processing: {} 
> (org.apache.kafka.streams.processor.internals.StreamThread:518)
> java.lang.IllegalStateException: Tried to append a record, but 
> MemoryRecordsBuilder is closed for record appends
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.ensureOpenForRecordAppend(MemoryRecordsBuilder.java:607)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendLegacyRecord(MemoryRecordsBuilder.java:567)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:353)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:382)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:440)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:463)
>   at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.tryAppend(ProducerBatch.java:83)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.tryAppend(RecordAccumulator.java:257)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:210)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:645)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:598)
>   at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:97)
>   at 
> org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:59)
>   at 
> org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStore.put(ChangeLoggingSegmentedBytesStore.java:55)
>   at 
> org.apache.kafka.streams.state.internals.MeteredSegmentedBytesStore.put(MeteredSegmentedBytesStore.java:100)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:51)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:42)
>   at 
> org.apache.kafka.streams.state.internals.CachingWindowStore$1.apply(CachingWindowStore.java:90)
>   at 
> org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:145)
>   at 
> org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:239)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:214)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:122)
>   at 
> org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:143)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:111)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:47)
>   at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:187)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:133)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:82)
>   at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:69)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:175)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.processAndPunctuate(StreamThread.java:657)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:540)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:511)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5213) IllegalStateException in ensureOpenForRecordAppend

2017-05-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005581#comment-16005581
 ] 

ASF GitHub Bot commented on KAFKA-5213:
---

GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/3015

KAFKA-5213; Mark a MemoryRecordsBuilder as full as soon as the append 
stream is closed



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka 
KAFKA-5213-illegalstateexception-in-ensureOpenForAppend

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3015.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3015


commit 799fd8d3d60e3f8950bbe4b7d5e8865e6755f5aa
Author: Apurva Mehta 
Date:   2017-05-10T22:35:52Z

Mark a MemoryRecordsBuilder as full as soon as the append stream is closed




> IllegalStateException in ensureOpenForRecordAppend
> --
>
> Key: KAFKA-5213
> URL: https://issues.apache.org/jira/browse/KAFKA-5213
> Project: Kafka
>  Issue Type: Bug
>Reporter: dan norwood
>Assignee: Apurva Mehta
>Priority: Critical
> Fix For: 0.11.0.0
>
>
> i have a streams app that was working recently while pointing at trunk. this 
> morning i ran it and now get
> {noformat}
> [2017-05-10 14:29:26,266] ERROR stream-thread 
> [_confluent-controlcenter-3-3-0-1-04624550-88f9-4557-a47f-3dfbec3bc3d1-StreamThread-4]
>  Streams application error during processing: {} 
> (org.apache.kafka.streams.processor.internals.StreamThread:518)
> java.lang.IllegalStateException: Tried to append a record, but 
> MemoryRecordsBuilder is closed for record appends
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.ensureOpenForRecordAppend(MemoryRecordsBuilder.java:607)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendLegacyRecord(MemoryRecordsBuilder.java:567)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:353)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:382)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:440)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:463)
>   at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.tryAppend(ProducerBatch.java:83)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.tryAppend(RecordAccumulator.java:257)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:210)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:645)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:598)
>   at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:97)
>   at 
> org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:59)
>   at 
> org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStore.put(ChangeLoggingSegmentedBytesStore.java:55)
>   at 
> org.apache.kafka.streams.state.internals.MeteredSegmentedBytesStore.put(MeteredSegmentedBytesStore.java:100)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:51)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:42)
>   at 
> org.apache.kafka.streams.state.internals.CachingWindowStore$1.apply(CachingWindowStore.java:90)
>   at 
> org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:145)
>   at 
> org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:239)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:214)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:122)
>   at 
> org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:143)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:111)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:47)
>   at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:187)
>   at 
> 

[jira] [Commented] (KAFKA-5213) IllegalStateException in ensureOpenForRecordAppend

2017-05-10 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005578#comment-16005578
 ] 

Ismael Juma commented on KAFKA-5213:


Good catch.

> IllegalStateException in ensureOpenForRecordAppend
> --
>
> Key: KAFKA-5213
> URL: https://issues.apache.org/jira/browse/KAFKA-5213
> Project: Kafka
>  Issue Type: Bug
>Reporter: dan norwood
>Assignee: Apurva Mehta
>Priority: Critical
> Fix For: 0.11.0.0
>
>
> i have a streams app that was working recently while pointing at trunk. this 
> morning i ran it and now get
> {noformat}
> [2017-05-10 14:29:26,266] ERROR stream-thread 
> [_confluent-controlcenter-3-3-0-1-04624550-88f9-4557-a47f-3dfbec3bc3d1-StreamThread-4]
>  Streams application error during processing: {} 
> (org.apache.kafka.streams.processor.internals.StreamThread:518)
> java.lang.IllegalStateException: Tried to append a record, but 
> MemoryRecordsBuilder is closed for record appends
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.ensureOpenForRecordAppend(MemoryRecordsBuilder.java:607)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendLegacyRecord(MemoryRecordsBuilder.java:567)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:353)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:382)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:440)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:463)
>   at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.tryAppend(ProducerBatch.java:83)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.tryAppend(RecordAccumulator.java:257)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:210)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:645)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:598)
>   at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:97)
>   at 
> org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:59)
>   at 
> org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStore.put(ChangeLoggingSegmentedBytesStore.java:55)
>   at 
> org.apache.kafka.streams.state.internals.MeteredSegmentedBytesStore.put(MeteredSegmentedBytesStore.java:100)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:51)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:42)
>   at 
> org.apache.kafka.streams.state.internals.CachingWindowStore$1.apply(CachingWindowStore.java:90)
>   at 
> org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:145)
>   at 
> org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:239)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:214)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:122)
>   at 
> org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:143)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:111)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:47)
>   at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:187)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:133)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:82)
>   at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:69)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:175)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.processAndPunctuate(StreamThread.java:657)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:540)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:511)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #3015: KAFKA-5213; Mark a MemoryRecordsBuilder as full as...

2017-05-10 Thread apurvam
GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/3015

KAFKA-5213; Mark a MemoryRecordsBuilder as full as soon as the append 
stream is closed



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka 
KAFKA-5213-illegalstateexception-in-ensureOpenForAppend

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3015.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3015


commit 799fd8d3d60e3f8950bbe4b7d5e8865e6755f5aa
Author: Apurva Mehta 
Date:   2017-05-10T22:35:52Z

Mark a MemoryRecordsBuilder as full as soon as the append stream is closed




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-5213) IllegalStateException in ensureOpenForRecordAppend

2017-05-10 Thread Apurva Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005575#comment-16005575
 ] 

Apurva Mehta commented on KAFKA-5213:
-

The fix is to mark the builder as `full` if the append stream is closed. 
Currently, it is considered 'full' once the records are built or if the 
incoming record is too large.

> IllegalStateException in ensureOpenForRecordAppend
> --
>
> Key: KAFKA-5213
> URL: https://issues.apache.org/jira/browse/KAFKA-5213
> Project: Kafka
>  Issue Type: Bug
>Reporter: dan norwood
>Assignee: Apurva Mehta
>Priority: Critical
> Fix For: 0.11.0.0
>
>
> i have a streams app that was working recently while pointing at trunk. this 
> morning i ran it and now get
> {noformat}
> [2017-05-10 14:29:26,266] ERROR stream-thread 
> [_confluent-controlcenter-3-3-0-1-04624550-88f9-4557-a47f-3dfbec3bc3d1-StreamThread-4]
>  Streams application error during processing: {} 
> (org.apache.kafka.streams.processor.internals.StreamThread:518)
> java.lang.IllegalStateException: Tried to append a record, but 
> MemoryRecordsBuilder is closed for record appends
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.ensureOpenForRecordAppend(MemoryRecordsBuilder.java:607)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendLegacyRecord(MemoryRecordsBuilder.java:567)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:353)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:382)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:440)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:463)
>   at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.tryAppend(ProducerBatch.java:83)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.tryAppend(RecordAccumulator.java:257)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:210)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:645)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:598)
>   at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:97)
>   at 
> org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:59)
>   at 
> org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStore.put(ChangeLoggingSegmentedBytesStore.java:55)
>   at 
> org.apache.kafka.streams.state.internals.MeteredSegmentedBytesStore.put(MeteredSegmentedBytesStore.java:100)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:51)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:42)
>   at 
> org.apache.kafka.streams.state.internals.CachingWindowStore$1.apply(CachingWindowStore.java:90)
>   at 
> org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:145)
>   at 
> org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:239)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:214)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:122)
>   at 
> org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:143)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:111)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:47)
>   at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:187)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:133)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:82)
>   at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:69)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:175)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.processAndPunctuate(StreamThread.java:657)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:540)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:511)
> {noformat}



--
This message was 

[jira] [Commented] (KAFKA-5213) IllegalStateException in ensureOpenForRecordAppend

2017-05-10 Thread Apurva Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005570#comment-16005570
 ] 

Apurva Mehta commented on KAFKA-5213:
-

I found the bug. The problem is that in a particular call to 
`RecordAccumulator.append`, we do two calls to `ProducerBatch.tryAppend`, with 
a gap in the middle where the lock is released.

So it is possible that the first `ProducerBatch.tryAppend` closes the 
`MemoryRecordsBuilder` for appends because the incoming record is too large for 
the batch. But then, before we can allocate a new batch, another append comes 
along with a smaller record that fits in the current batch. In this case the 
second append will see this exception.
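
A condensed illustration of that interleaving (this is not the real `MemoryRecordsBuilder`/`RecordAccumulator` code, just the shape of the problem and of the proposed fix):

{code}
import java.util.concurrent.locks.ReentrantLock;

// Condensed, illustrative model of the race: the "does it fit" check does not
// know that appends were already closed, so a smaller record slips through.
class SimplifiedBatch {
    private int used = 0;
    private final int capacity = 16;
    private boolean appendsClosed = false;

    boolean hasRoomFor(int recordSize) {
        // Pre-fix behaviour: ignores appendsClosed. The proposed fix is to also
        // treat the batch as full once appendsClosed is true.
        return used + recordSize <= capacity;
    }

    Integer tryAppend(int recordSize) {
        if (!hasRoomFor(recordSize)) {
            appendsClosed = true;        // batch rejected the record and sealed itself
            return null;                 // caller will allocate a new batch
        }
        if (appendsClosed) {
            // Analogue of the IllegalStateException reported in this ticket.
            throw new IllegalStateException("closed for record appends");
        }
        used += recordSize;
        return used;
    }
}

class SimplifiedAccumulator {
    private final ReentrantLock lock = new ReentrantLock();
    private final SimplifiedBatch current = new SimplifiedBatch();

    void append(int recordSize) {
        lock.lock();
        try {
            if (current.tryAppend(recordSize) != null) {
                return;                  // appended under the lock
            }
        } finally {
            lock.unlock();               // gap: lock released before the second attempt
        }
        // Thread A allocates a new buffer here without holding the lock. Meanwhile
        // thread B can enter append() with a smaller record, pass hasRoomFor() on
        // the sealed batch, and hit the IllegalStateException above.
        lock.lock();
        try {
            current.tryAppend(recordSize);
        } finally {
            lock.unlock();
        }
    }
}
{code}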

> IllegalStateException in ensureOpenForRecordAppend
> --
>
> Key: KAFKA-5213
> URL: https://issues.apache.org/jira/browse/KAFKA-5213
> Project: Kafka
>  Issue Type: Bug
>Reporter: dan norwood
>Assignee: Apurva Mehta
>Priority: Critical
> Fix For: 0.11.0.0
>
>
> i have a streams app that was working recently while pointing at trunk. this 
> morning i ran it and now get
> {noformat}
> [2017-05-10 14:29:26,266] ERROR stream-thread 
> [_confluent-controlcenter-3-3-0-1-04624550-88f9-4557-a47f-3dfbec3bc3d1-StreamThread-4]
>  Streams application error during processing: {} 
> (org.apache.kafka.streams.processor.internals.StreamThread:518)
> java.lang.IllegalStateException: Tried to append a record, but 
> MemoryRecordsBuilder is closed for record appends
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.ensureOpenForRecordAppend(MemoryRecordsBuilder.java:607)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendLegacyRecord(MemoryRecordsBuilder.java:567)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:353)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:382)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:440)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:463)
>   at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.tryAppend(ProducerBatch.java:83)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.tryAppend(RecordAccumulator.java:257)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:210)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:645)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:598)
>   at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:97)
>   at 
> org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:59)
>   at 
> org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStore.put(ChangeLoggingSegmentedBytesStore.java:55)
>   at 
> org.apache.kafka.streams.state.internals.MeteredSegmentedBytesStore.put(MeteredSegmentedBytesStore.java:100)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:51)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:42)
>   at 
> org.apache.kafka.streams.state.internals.CachingWindowStore$1.apply(CachingWindowStore.java:90)
>   at 
> org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:145)
>   at 
> org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:239)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:214)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:122)
>   at 
> org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:143)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:111)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:47)
>   at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:187)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:133)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:82)
>   at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:69)
>   at 
> 

[jira] [Commented] (KAFKA-5182) Transient failure: RequestQuotaTest.testResponseThrottleTime

2017-05-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005550#comment-16005550
 ] 

ASF GitHub Bot commented on KAFKA-5182:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/3014

KAFKA-5182: Close txn coordinator threads during broker shutdown

Shutdown the delayed delete purgatory thread, the transaction marker purgatory 
thread and the send thread in `TransactionMarkerChannelManager` during broker 
shutdown. Made `InterBrokerSendThread` interruptible so that it can be shut down.
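
For illustration, a generic sketch of the interruptible-shutdown pattern (not the actual `InterBrokerSendThread` code): the worker blocks on I/O or a queue, and shutdown() interrupts it so the broker does not hang on a non-daemon thread during shutdown.

{code}
// Generic, illustrative pattern only -- not the Kafka implementation.
public class InterruptibleSender extends Thread {
    private volatile boolean running = true;

    @Override
    public void run() {
        while (running) {
            try {
                doWork();                             // may block (network poll, queue take, ...)
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();   // restore the flag and re-check 'running'
            }
        }
    }

    private void doWork() throws InterruptedException {
        Thread.sleep(100);                            // placeholder for blocking work
    }

    public void shutdown() throws InterruptedException {
        running = false;
        interrupt();                                  // wake the thread if it is blocked
        join();                                       // wait for it to exit
    }
}
{code}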

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-5182

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3014.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3014


commit 3f35cdfd42a927fe805d364ba35a996ca6cd487d
Author: Rajini Sivaram 
Date:   2017-05-10T15:22:22Z

KAFKA-5182: Close txn coordinator threads during broker shutdown




> Transient failure: RequestQuotaTest.testResponseThrottleTime
> 
>
> Key: KAFKA-5182
> URL: https://issues.apache.org/jira/browse/KAFKA-5182
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>Assignee: Rajini Sivaram
>
> Stacktrace
> {code}
> java.util.concurrent.ExecutionException: java.lang.AssertionError: Response 
> not throttled: Client JOIN_GROUP apiKey JOIN_GROUP requests 3 requestTime 
> 0.009982775502048156 throttleTime 0.0
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:206)
>   at 
> kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:326)
>   at 
> kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:324)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
>   at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
>   at 
> kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:324)
>   at 
> kafka.server.RequestQuotaTest.testResponseThrottleTime(RequestQuotaTest.scala:105)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> 

[GitHub] kafka pull request #3014: KAFKA-5182: Close txn coordinator threads during b...

2017-05-10 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/3014

KAFKA-5182: Close txn coordinator threads during broker shutdown

Shutdown the delayed delete purgatory thread, the transaction marker purgatory 
thread and the send thread in `TransactionMarkerChannelManager` during broker 
shutdown. Made `InterBrokerSendThread` interruptible so that it can be shut down.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-5182

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3014.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3014


commit 3f35cdfd42a927fe805d364ba35a996ca6cd487d
Author: Rajini Sivaram 
Date:   2017-05-10T15:22:22Z

KAFKA-5182: Close txn coordinator threads during broker shutdown




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [VOTE] KIP-140: Add administrative RPCs for adding, deleting, and listing ACLs

2017-05-10 Thread Colin McCabe
Hi Jason,

Thanks for taking a look.

I think it's likely that we will bump the ApiVersion of the various
requests if we add a new resource type, operation type, or permission
type.  Although it may not strictly be required, bumping the version
number makes it a little bit clearer that the protocol is evolving and
clarifies the relative age of the server and client.  This would be
helpful in a debugging situation when using broker-api-versions.sh, or
similar, since it would clarify that the server does/does not support
some particular detail of ACLs.

However, the v1 server will still need a way of responding to a v0
ListAclRequest.  In that case, any type not present in v0 will be mapped
to UNKNOWN, as required.  This mapping could be done either on the
client or the server side... for simplicity's sake, it probably should
be done on the client side.

The intention is to make it clear to the v0 client that "something is
going on" even if the v0 client can't see what the exact details are of
the v1 ACL.  For example, the v0 client  might see some ACLs on a
particular topic, but not be able to understand exactly what they are
(they are type UNKNOWN) until you upgrade your client.  This would at
least give you some idea what was going on if a user was having trouble
doing something with that topic.  There may be cases where we can't even
provide this much (for example, if we ever implement role-based access
control, the v0 client simply won't have any idea about it).  But we are
doing what we can for compatibility.

cheers,
Colin


On Wed, May 10, 2017, at 12:26, Jason Gustafson wrote:
> Hi Colin,
> 
> Thanks for the KIP. Looks good overall. One thing I wasn't too clear
> about
> is whether new resource types, operations, or permissions require a
> version
> bump for the three new request types. On reading the proposal, it sort of
> sounds like the intent is not to bump the versions and let the new type
> be
> mapped to "unknown" on clients/brokers that don't support it. Is that
> correct?
> 
> Thanks,
> Jason
> 
> On Wed, May 10, 2017 at 8:24 AM, Ismael Juma  wrote:
> 
> > Thanks for the KIP, +1(binding) from me. Colin, it may make sense to start
> > a new thread for this as GMail is currently hiding it in the middle of the
> > discuss thread. Please mention the date the thread was started along with
> > the 3 +1s (1 binding) so far.
> >
> > Ismael
> >
> > On Sat, Apr 29, 2017 at 1:09 AM, Colin McCabe  wrote:
> >
> > > Hi all,
> > >
> > > I'd like to start the voting for KIP-140: Add administrative RPCs for
> > > adding, deleting, and listing ACLs.  This provides an API for adding,
> > > deleting, and listing the access control lists (ACLs) which are used to
> > > control access on Kafka topics and brokers.
> > >
> > > The wiki page is here:
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 140%3A+Add+administrative+RPCs+for+adding%2C+deleting%
> > 2C+and+listing+ACLs
> > >
> > > The previous [DISCUSS] thread:
> > > https://www.mail-archive.com/dev@kafka.apache.org/msg70858.html
> > >
> > > cheers,
> > > Colin
> > >
> >


Re: [DISCUSS] KIP-133: List and Alter Configs Admin APIs

2017-05-10 Thread Ismael Juma
Hi Jason,

Thanks for the feedback. I agree that it's useful and reasonably simple to
implement. So, I've updated the KIP.

Ismael

On Wed, May 10, 2017 at 10:48 PM, Jason Gustafson 
wrote:

> Hey Ismael,
>
> Thanks for the KIP. The use of the Describe API might be a little limited
> if it always returns the full set of topics for the requested resource. I
> wonder if we can let the client provide a list of the configs they are
> interested in. Perhaps something like this:
>
> DescribeConfigs Request (Version: 0) => [resource [config_name]]
>   resource => resource_type resource_name
> resource_type => INT8
> resource_name => STRING
>   config_name => STRING
>
> This would reduce the overhead for use cases where the client only cares
> about a small subset of configs. A null array can then indicate the desire
> to fetch all configs. What do you think?
>
> Thanks,
> Jason
>
> On Mon, May 8, 2017 at 10:47 AM, Colin McCabe  wrote:
>
> > On Sun, May 7, 2017, at 20:18, Ismael Juma wrote:
> > > Thanks for the feedback Colin. Comments inline.
> > >
> > > On Sun, May 7, 2017 at 9:29 PM, Colin McCabe 
> wrote:
> > > >
> > > > Hmm.  What's the behavior if I try to list the configuration for a
> > topic
> > > > that doesn't exist?  It seems like in this case, the whole request
> has
> > > > to return an error, and nothing gets listed.  Shouldn't the protocol
> > > > should return an error code and message per element in the batch,
> > > > similar to how the other batch protocols work (CreateTopics,
> > > > DeleteTopics, etc. etc.)
> > > >
> > >
> > > CreateTopics and DeleteTopics are more like AlterConfigs and that has
> an
> > > error code per batch. For requests that mutate, this is essential
> because
> > > the operations are not transactional. I followed the same pattern as
> > > ListGroups, but that doesn't take filters, so MetadataRequest is a
> better
> > > example. For consistency with that, I added an error code per batch.
> > >
> > > > We also should think about the case where someone requests
> > > > configurations from multiple brokers.  Since there are multiple
> > requests
> > > > sent over the wire in this case, there is the chance for some of them
> > to
> > > > fail when others have succeeded.  So the API needs a way of returning
> > an
> > > > error per batch element.
> > > >
> > >
> > > Yeah, that's true. For the AdminClient side, I followed the same
> pattern
> > > as
> > > ListTopicsResponse, but it seems like DescribeTopicsResults is a better
> > > example. I named this request ListConfigs for consistency with
> ListAcls,
> > > but it looks like they really should be DescribeConfigs and
> DescribeAcls.
> > >
> > > > On the other hand, with configurations, each topic will have a single
> > > > configuration, never more and never less.  Each cluster will have a
> > > > single configuration-- never more and never less.  So having separate
> > > > Configuration resources doesn't seem to add any value, since they
> will
> > > > always map 1:1 to existing resources.
> > > >
> > >
> > > Good point. I thought about your proposal in more depth and I agree
> that
> > > it
> > > solves the issue in a nice way. I updated the KIP.
> >
> > Thanks, Ismael.
> >
> > >
> > > I guess my question is more conceptual-- if these things are all
> > > > resources, shouldn't we have a single type to model all of them?  We
> > > > might want to add configurations to other resources later on as well.
> > > >
> > >
> > > I've been thinking about how we could satisfy the 2 requirements of
> > > having
> > > a general resource type while making it clear which values are allowed
> > > for
> > > a given operation. I updated the KIP to use a shared resource type in
> the
> > > wire, renamed entity to resource, but kept a separate class and enum
> for
> > > ConfigResource and ConfigResource.Type. They map trivially to Resource
> > > and
> > > ResourceType.
> >
> > I still feel like it would be better to use the same type in the API.
> > I'm curious what others think here?
> >
> > >
> > > Also, I realised that I was a bit hasty when I changed Config to
> > > Collection<ConfigEntry> in the signature of a few methods. I think Config
> > > is the right type. It's a container for a Collection of ConfigEntry
> > > instances so that we can provide useful methods to work with the
> configs
> > > (e.g. exclude items with defaults, etc.).
> >
> > Good idea.  Should we add a Config#get(String) that can get the value of
> > a specific ConfigEntry?
> >
> > >
> > > Here's the full diff of the changes:
> > >
> > > https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?
> > pageId=68719318=14=11
> > >
> > > Given that we don't have much time until the KIP freeze and this is an
> > > important KIP to make the AdminClient truly useful, I will start the
> vote
> > > thread. That said, don't hesitate to provide additional feedback.
> >
> > +1.
> >
> > best,
> > Colin
> >
> > >
> 
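
A minimal sketch of the Config container and the Config#get(String) accessor discussed above (illustrative only; the constructor and method names here are assumptions, not the final AdminClient API):

{code}
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.Map;

public final class Config {
    private final Map<String, ConfigEntry> entries = new LinkedHashMap<>();

    public Config(Collection<ConfigEntry> entries) {
        for (ConfigEntry entry : entries) {
            this.entries.put(entry.name(), entry);
        }
    }

    /** The entry with the given name, or null if the config is not present. */
    public ConfigEntry get(String name) {
        return entries.get(name);
    }

    public Collection<ConfigEntry> entries() {
        return entries.values();
    }
}

/** Minimal entry type assumed by this sketch. */
final class ConfigEntry {
    private final String name;
    private final String value;

    ConfigEntry(String name, String value) {
        this.name = name;
        this.value = value;
    }

    public String name() { return name; }
    public String value() { return value; }
}
{code}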

Re: [VOTE] KIP-140: A new thread for adding administrative RPCs for adding deleting and listing ACLs

2017-05-10 Thread Ismael Juma
Repeating my vote here, +1 (binding).

Ismael

On Wed, May 10, 2017 at 9:24 PM, Colin McCabe  wrote:

> Hi all,
>
> Some folks said that the previous VOTE thread was getting collapsed by
> gmail into a different thread, so I am reposting this.
>
> The KIP page is here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 140%3A+Add+administrative+RPCs+for+adding%2C+deleting%2C+and+listing+ACLs
>
> The previous VOTE thread was here:
> https://www.mail-archive.com/dev@kafka.apache.org/msg71412.html
>
> best,
> Colin
>


[jira] [Commented] (KAFKA-5215) Small JavaDoc fix for AdminClient#describeTopics

2017-05-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005528#comment-16005528
 ] 

ASF GitHub Bot commented on KAFKA-5215:
---

GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/3013

KAFKA-5215. Small JavaDoc fix for AdminClient#describeTopics



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-5215

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3013.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3013


commit 79056aa7d0d240dfb53945f379a9a6f9e8833da1
Author: Colin P. Mccabe 
Date:   2017-05-10T21:59:34Z

KAFKA-5215. Small JavaDoc fix for AdminClient#describeTopics




> Small JavaDoc fix for AdminClient#describeTopics
> 
>
> Key: KAFKA-5215
> URL: https://issues.apache.org/jira/browse/KAFKA-5215
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>
> Small JavaDoc fix for AdminClient#describeTopics





[jira] [Created] (KAFKA-5215) Small JavaDoc fix for AdminClient#describeTopics

2017-05-10 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-5215:
--

 Summary: Small JavaDoc fix for AdminClient#describeTopics
 Key: KAFKA-5215
 URL: https://issues.apache.org/jira/browse/KAFKA-5215
 Project: Kafka
  Issue Type: Bug
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


Small JavaDoc fix for AdminClient#describeTopics





[GitHub] kafka pull request #3013: KAFKA-5215. Small JavaDoc fix for AdminClient#desc...

2017-05-10 Thread cmccabe
GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/3013

KAFKA-5215. Small JavaDoc fix for AdminClient#describeTopics



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-5215

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3013.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3013


commit 79056aa7d0d240dfb53945f379a9a6f9e8833da1
Author: Colin P. Mccabe 
Date:   2017-05-10T21:59:34Z

KAFKA-5215. Small JavaDoc fix for AdminClient#describeTopics






[jira] [Commented] (KAFKA-5213) IllegalStateException in ensureOpenForRecordAppend

2017-05-10 Thread dan norwood (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005525#comment-16005525
 ] 

dan norwood commented on KAFKA-5213:


Turns out I was running a trunk client against 0.10.2.1 brokers. Currently 
rebuilding trunk locally to try running everything on trunk.

> IllegalStateException in ensureOpenForRecordAppend
> --
>
> Key: KAFKA-5213
> URL: https://issues.apache.org/jira/browse/KAFKA-5213
> Project: Kafka
>  Issue Type: Bug
>Reporter: dan norwood
>Assignee: Apurva Mehta
>Priority: Critical
> Fix For: 0.11.0.0
>
>
> i have a streams app that was working recently while pointing at trunk. this 
> morning i ran it and now get
> {noformat}
> [2017-05-10 14:29:26,266] ERROR stream-thread 
> [_confluent-controlcenter-3-3-0-1-04624550-88f9-4557-a47f-3dfbec3bc3d1-StreamThread-4]
>  Streams application error during processing: {} 
> (org.apache.kafka.streams.processor.internals.StreamThread:518)
> java.lang.IllegalStateException: Tried to append a record, but 
> MemoryRecordsBuilder is closed for record appends
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.ensureOpenForRecordAppend(MemoryRecordsBuilder.java:607)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendLegacyRecord(MemoryRecordsBuilder.java:567)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:353)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:382)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:440)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:463)
>   at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.tryAppend(ProducerBatch.java:83)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.tryAppend(RecordAccumulator.java:257)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:210)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:645)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:598)
>   at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:97)
>   at 
> org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:59)
>   at 
> org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStore.put(ChangeLoggingSegmentedBytesStore.java:55)
>   at 
> org.apache.kafka.streams.state.internals.MeteredSegmentedBytesStore.put(MeteredSegmentedBytesStore.java:100)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:51)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:42)
>   at 
> org.apache.kafka.streams.state.internals.CachingWindowStore$1.apply(CachingWindowStore.java:90)
>   at 
> org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:145)
>   at 
> org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:239)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:214)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:122)
>   at 
> org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:143)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:111)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:47)
>   at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:187)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:133)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:82)
>   at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:69)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:175)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.processAndPunctuate(StreamThread.java:657)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:540)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:511)
> {noformat}





[GitHub] kafka pull request #3012: KAFKA-5214: KafkaAdminClient#apiVersions should re...

2017-05-10 Thread cmccabe
GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/3012

KAFKA-5214: KafkaAdminClient#apiVersions should return a public class



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-5214

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3012.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3012


commit 5e2dc72bf74a98f5bfca4b863ddfd26366481f26
Author: Colin P. Mccabe 
Date:   2017-05-10T21:53:42Z

KAFKA-5214: KafkaAdminClient#apiVersions should return a public class






[jira] [Commented] (KAFKA-5214) KafkaAdminClient#apiVersions should return a public class

2017-05-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005523#comment-16005523
 ] 

ASF GitHub Bot commented on KAFKA-5214:
---

GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/3012

KAFKA-5214: KafkaAdminClient#apiVersions should return a public class



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-5214

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3012.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3012


commit 5e2dc72bf74a98f5bfca4b863ddfd26366481f26
Author: Colin P. Mccabe 
Date:   2017-05-10T21:53:42Z

KAFKA-5214: KafkaAdminClient#apiVersions should return a public class




> KafkaAdminClient#apiVersions should return a public class
> -
>
> Key: KAFKA-5214
> URL: https://issues.apache.org/jira/browse/KAFKA-5214
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>
> KafkaAdminClient#apiVersions should not refer to internal classes like 
> ApiKeys, NodeApiVersions, etc.  Instead, we should have stable public classes 
> to represent these things in the API.





[jira] [Created] (KAFKA-5214) KafkaAdminClient#apiVersions should return a public class

2017-05-10 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-5214:
--

 Summary: KafkaAdminClient#apiVersions should return a public class
 Key: KAFKA-5214
 URL: https://issues.apache.org/jira/browse/KAFKA-5214
 Project: Kafka
  Issue Type: Bug
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


KafkaAdminClient#apiVersions should not refer to internal classes like ApiKeys, 
NodeApiVersions, etc.  Instead, we should have stable public classes to 
represent these things in the API.
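
For illustration, a hedged sketch of what such a stable public value class could look like (the name and fields are assumptions, not the committed API):

{code}
public final class ApiVersion {
    private final short apiKey;
    private final short minVersion;
    private final short maxVersion;

    public ApiVersion(short apiKey, short minVersion, short maxVersion) {
        this.apiKey = apiKey;
        this.minVersion = minVersion;
        this.maxVersion = maxVersion;
    }

    public short apiKey() { return apiKey; }

    /** Lowest version of this API the broker accepts. */
    public short minVersion() { return minVersion; }

    /** Highest version of this API the broker accepts. */
    public short maxVersion() { return maxVersion; }
}
{code}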





Re: [DISCUSS] KIP-133: List and Alter Configs Admin APIs

2017-05-10 Thread Jason Gustafson
Hey Ismael,

Thanks for the KIP. The use of the Describe API might be a little limited
if it always returns the full set of topics for the requested resource. I
wonder if we can let the client provide a list of the configs they are
interested in. Perhaps something like this:

DescribeConfigs Request (Version: 0) => [resource [config_name]]
  resource => resource_type resource_name
resource_type => INT8
resource_name => STRING
  config_name => STRING

This would reduce the overhead for use cases where the client only cares
about a small subset of configs. A null array can then indicate the desire
to fetch all configs. What do you think?

Thanks,
Jason
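
For illustration, a minimal client-side model of the request shape sketched above (hypothetical types, not the actual protocol classes); a null config-name list stands for "return all configs":

{code}
import java.util.List;

public final class DescribeConfigsRequestData {

    public static final class Resource {
        final byte resourceType;          // e.g. a (hypothetical) code for TOPIC
        final String resourceName;
        final List<String> configNames;   // null => fetch every config for the resource

        public Resource(byte resourceType, String resourceName, List<String> configNames) {
            this.resourceType = resourceType;
            this.resourceName = resourceName;
            this.configNames = configNames;
        }
    }

    final List<Resource> resources;

    public DescribeConfigsRequestData(List<Resource> resources) {
        this.resources = resources;
    }
}
{code}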

On Mon, May 8, 2017 at 10:47 AM, Colin McCabe  wrote:

> On Sun, May 7, 2017, at 20:18, Ismael Juma wrote:
> > Thanks for the feedback Colin. Comments inline.
> >
> > On Sun, May 7, 2017 at 9:29 PM, Colin McCabe  wrote:
> > >
> > > Hmm.  What's the behavior if I try to list the configuration for a
> topic
> > > that doesn't exist?  It seems like in this case, the whole request has
> > > to return an error, and nothing gets listed.  Shouldn't the protocol
> > > should return an error code and message per element in the batch,
> > > similar to how the other batch protocols work (CreateTopics,
> > > DeleteTopics, etc. etc.)
> > >
> >
> > CreateTopics and DeleteTopics are more like AlterConfigs and that has an
> > error code per batch. For requests that mutate, this is essential because
> > the operations are not transactional. I followed the same pattern as
> > ListGroups, but that doesn't take filters, so MetadataRequest is a better
> > example. For consistency with that, I added an error code per batch.
> >
> > > We also should think about the case where someone requests
> > > configurations from multiple brokers.  Since there are multiple
> requests
> > > sent over the wire in this case, there is the chance for some of them
> to
> > > fail when others have succeeded.  So the API needs a way of returning
> an
> > > error per batch element.
> > >
> >
> > Yeah, that's true. For the AdminClient side, I followed the same pattern
> > as
> > ListTopicsResponse, but it seems like DescribeTopicsResults is a better
> > example. I named this request ListConfigs for consistency with ListAcls,
> > but it looks like they really should be DescribeConfigs and DescribeAcls.
> >
> > > On the other hand, with configurations, each topic will have a single
> > > configuration, never more and never less.  Each cluster will have a
> > > single configuration-- never more and never less.  So having separate
> > > Configuration resources doesn't seem to add any value, since they will
> > > always map 1:1 to existing resources.
> > >
> >
> > Good point. I thought about your proposal in more depth and I agree that
> > it
> > solves the issue in a nice way. I updated the KIP.
>
> Thanks, Ismael.
>
> >
> > I guess my question is more conceptual-- if these things are all
> > > resources, shouldn't we have a single type to model all of them?  We
> > > might want to add configurations to other resources later on as well.
> > >
> >
> > I've been thinking about how we could satisfy the 2 requirements of
> > having
> > a general resource type while making it clear which values are allowed
> > for
> > a given operation. I updated the KIP to use a shared resource type in the
> > wire, renamed entity to resource, but kept a separate class and enum for
> > ConfigResource and ConfigResource.Type. They map trivially to Resource
> > and
> > ResourceType.
>
> I still feel like it would be better to use the same type in the API.
> I'm curious what others think here?
>
> >
> > Also, I realised that I was a bit hasty when I changed Config to
> > Collection<ConfigEntry> in the signature of a few methods. I think Config
> > is the right type. It's a container for a Collection of ConfigEntry
> > instances so that we can provide useful methods to work with the configs
> > (e.g. exclude items with defaults, etc.).
>
> Good idea.  Should we add a Config#get(String) that can get the value of
> a specific ConfigEntry?
>
> >
> > Here's the full diff of the changes:
> >
> > https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?
> pageId=68719318=14=11
> >
> > Given that we don't have much time until the KIP freeze and this is an
> > important KIP to make the AdminClient truly useful, I will start the vote
> > thread. That said, don't hesitate to provide additional feedback.
>
> +1.
>
> best,
> Colin
>
> >
> > Thanks.
> > Ismael
>


[jira] [Commented] (KAFKA-5173) SASL tests failing with Could not find a 'KafkaServer' or 'sasl_plaintext.KafkaServer' entry in the JAAS configuration

2017-05-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005510#comment-16005510
 ] 

ASF GitHub Bot commented on KAFKA-5173:
---

Github user rajinisivaram closed the pull request at:

https://github.com/apache/kafka/pull/3011


> SASL tests failing with Could not find a 'KafkaServer' or 
> 'sasl_plaintext.KafkaServer' entry in the JAAS configuration
> --
>
> Key: KAFKA-5173
> URL: https://issues.apache.org/jira/browse/KAFKA-5173
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
> Fix For: 0.11.0.0
>
>
> I've seen this a few times. One example:
> {code}
> java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 
> 'sasl_plaintext.KafkaServer' entry in the JAAS configuration. System property 
> 'java.security.auth.login.config' is /tmp/kafka8162725028002772063.tmp
>   at 
> org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:131)
>   at 
> org.apache.kafka.common.security.JaasContext.load(JaasContext.java:96)
>   at 
> org.apache.kafka.common.security.JaasContext.load(JaasContext.java:78)
>   at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:100)
>   at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:73)
>   at kafka.network.Processor.(SocketServer.scala:423)
>   at kafka.network.SocketServer.newProcessor(SocketServer.scala:145)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1$$anonfun$apply$1.apply$mcVI$sp(SocketServer.scala:96)
>   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:95)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:90)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at kafka.network.SocketServer.startup(SocketServer.scala:90)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:218)
>   at kafka.utils.TestUtils$.createServer(TestUtils.scala:126)
>   at 
> kafka.integration.BaseTopicMetadataTest.setUp(BaseTopicMetadataTest.scala:51)
>   at 
> kafka.integration.SaslPlaintextTopicMetadataTest.kafka$api$SaslTestHarness$$super$setUp(SaslPlaintextTopicMetadataTest.scala:23)
>   at kafka.api.SaslTestHarness$class.setUp(SaslTestHarness.scala:31)
>   at 
> kafka.integration.SaslPlaintextTopicMetadataTest.setUp(SaslPlaintextTopicMetadataTest.scala:23)
> {code}
> https://builds.apache.org/job/kafka-trunk-jdk8/1479/testReport/junit/kafka.integration/SaslPlaintextTopicMetadataTest/testIsrAfterBrokerShutDownAndJoinsBack/
> [~rsivaram], any ideas?
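
For reference, the lookup in the stack trace above expects a 'KafkaServer' (or listener-prefixed 'sasl_plaintext.KafkaServer') section in the JAAS file named by java.security.auth.login.config. A hedged Java sketch, with an example of such a section shown in the comment (credentials and paths are placeholders):

{code}
public class JaasConfigExample {
    public static void main(String[] args) {
        // The file below is expected to contain a section such as:
        //
        //   KafkaServer {
        //     org.apache.kafka.common.security.plain.PlainLoginModule required
        //     username="admin"
        //     password="admin-secret"
        //     user_admin="admin-secret";
        //   };
        //
        // The system property must point at it before the broker's SocketServer starts.
        System.setProperty("java.security.auth.login.config", "/path/to/kafka_server_jaas.conf");
    }
}
{code}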





[GitHub] kafka pull request #3011: KAFKA-5173: Close txn coordinator threads during b...

2017-05-10 Thread rajinisivaram
Github user rajinisivaram closed the pull request at:

https://github.com/apache/kafka/pull/3011




[jira] [Commented] (KAFKA-5173) SASL tests failing with Could not find a 'KafkaServer' or 'sasl_plaintext.KafkaServer' entry in the JAAS configuration

2017-05-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005504#comment-16005504
 ] 

ASF GitHub Bot commented on KAFKA-5173:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/3011

KAFKA-5173: Close txn coordinator threads during broker shutdown

Shut down the delayed delete purgatory thread, the transaction marker purgatory 
thread and the send thread in `TransactionMarkerChannelManager` during broker 
shutdown. Made `InterBrokerSendThread` interruptible so that it shuts down cleanly.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-5182

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3011.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3011


commit ead2bbc0618bc20421c693b6f96895780c75276b
Author: Rajini Sivaram 
Date:   2017-05-10T15:22:22Z

KAFKA-5173: Close txn coordinator threads during broker shutdown




> SASL tests failing with Could not find a 'KafkaServer' or 
> 'sasl_plaintext.KafkaServer' entry in the JAAS configuration
> --
>
> Key: KAFKA-5173
> URL: https://issues.apache.org/jira/browse/KAFKA-5173
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
> Fix For: 0.11.0.0
>
>
> I've seen this a few times. One example:
> {code}
> java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 
> 'sasl_plaintext.KafkaServer' entry in the JAAS configuration. System property 
> 'java.security.auth.login.config' is /tmp/kafka8162725028002772063.tmp
>   at 
> org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:131)
>   at 
> org.apache.kafka.common.security.JaasContext.load(JaasContext.java:96)
>   at 
> org.apache.kafka.common.security.JaasContext.load(JaasContext.java:78)
>   at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:100)
>   at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:73)
>   at kafka.network.Processor.(SocketServer.scala:423)
>   at kafka.network.SocketServer.newProcessor(SocketServer.scala:145)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1$$anonfun$apply$1.apply$mcVI$sp(SocketServer.scala:96)
>   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:95)
>   at 
> kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:90)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at kafka.network.SocketServer.startup(SocketServer.scala:90)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:218)
>   at kafka.utils.TestUtils$.createServer(TestUtils.scala:126)
>   at 
> kafka.integration.BaseTopicMetadataTest.setUp(BaseTopicMetadataTest.scala:51)
>   at 
> kafka.integration.SaslPlaintextTopicMetadataTest.kafka$api$SaslTestHarness$$super$setUp(SaslPlaintextTopicMetadataTest.scala:23)
>   at kafka.api.SaslTestHarness$class.setUp(SaslTestHarness.scala:31)
>   at 
> kafka.integration.SaslPlaintextTopicMetadataTest.setUp(SaslPlaintextTopicMetadataTest.scala:23)
> {code}
> https://builds.apache.org/job/kafka-trunk-jdk8/1479/testReport/junit/kafka.integration/SaslPlaintextTopicMetadataTest/testIsrAfterBrokerShutDownAndJoinsBack/
> [~rsivaram], any ideas?





[GitHub] kafka pull request #3011: KAFKA-5173: Close txn coordinator threads during b...

2017-05-10 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/3011

KAFKA-5173: Close txn coordinator threads during broker shutdown

Shut down the delayed delete purgatory thread, the transaction marker purgatory 
thread and the send thread in `TransactionMarkerChannelManager` during broker 
shutdown. Made `InterBrokerSendThread` interruptible so that it shuts down cleanly.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-5182

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3011.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3011


commit ead2bbc0618bc20421c693b6f96895780c75276b
Author: Rajini Sivaram 
Date:   2017-05-10T15:22:22Z

KAFKA-5173: Close txn coordinator threads during broker shutdown






[jira] [Updated] (KAFKA-5213) IllegalStateException in ensureOpenForRecordAppend

2017-05-10 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5213:
---
Priority: Critical  (was: Major)

> IllegalStateException in ensureOpenForRecordAppend
> --
>
> Key: KAFKA-5213
> URL: https://issues.apache.org/jira/browse/KAFKA-5213
> Project: Kafka
>  Issue Type: Bug
>Reporter: dan norwood
>Assignee: Apurva Mehta
>Priority: Critical
> Fix For: 0.11.0.0
>
>
> i have a streams app that was working recently while pointing at trunk. this 
> morning i ran it and now get
> {noformat}
> [2017-05-10 14:29:26,266] ERROR stream-thread 
> [_confluent-controlcenter-3-3-0-1-04624550-88f9-4557-a47f-3dfbec3bc3d1-StreamThread-4]
>  Streams application error during processing: {} 
> (org.apache.kafka.streams.processor.internals.StreamThread:518)
> java.lang.IllegalStateException: Tried to append a record, but 
> MemoryRecordsBuilder is closed for record appends
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.ensureOpenForRecordAppend(MemoryRecordsBuilder.java:607)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendLegacyRecord(MemoryRecordsBuilder.java:567)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:353)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:382)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:440)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:463)
>   at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.tryAppend(ProducerBatch.java:83)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.tryAppend(RecordAccumulator.java:257)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:210)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:645)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:598)
>   at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:97)
>   at 
> org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:59)
>   at 
> org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStore.put(ChangeLoggingSegmentedBytesStore.java:55)
>   at 
> org.apache.kafka.streams.state.internals.MeteredSegmentedBytesStore.put(MeteredSegmentedBytesStore.java:100)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:51)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:42)
>   at 
> org.apache.kafka.streams.state.internals.CachingWindowStore$1.apply(CachingWindowStore.java:90)
>   at 
> org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:145)
>   at 
> org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:239)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:214)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:122)
>   at 
> org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:143)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:111)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:47)
>   at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:187)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:133)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:82)
>   at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:69)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:175)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.processAndPunctuate(StreamThread.java:657)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:540)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:511)
> {noformat}





[jira] [Updated] (KAFKA-5213) IllegalStateException in ensureOpenForRecordAppend

2017-05-10 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-5213:
---
Fix Version/s: 0.11.0.0

> IllegalStateException in ensureOpenForRecordAppend
> --
>
> Key: KAFKA-5213
> URL: https://issues.apache.org/jira/browse/KAFKA-5213
> Project: Kafka
>  Issue Type: Bug
>Reporter: dan norwood
>Assignee: Apurva Mehta
>Priority: Critical
> Fix For: 0.11.0.0
>
>
> i have a streams app that was working recently while pointing at trunk. this 
> morning i ran it and now get
> {noformat}
> [2017-05-10 14:29:26,266] ERROR stream-thread 
> [_confluent-controlcenter-3-3-0-1-04624550-88f9-4557-a47f-3dfbec3bc3d1-StreamThread-4]
>  Streams application error during processing: {} 
> (org.apache.kafka.streams.processor.internals.StreamThread:518)
> java.lang.IllegalStateException: Tried to append a record, but 
> MemoryRecordsBuilder is closed for record appends
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.ensureOpenForRecordAppend(MemoryRecordsBuilder.java:607)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendLegacyRecord(MemoryRecordsBuilder.java:567)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:353)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:382)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:440)
>   at 
> org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:463)
>   at 
> org.apache.kafka.clients.producer.internals.ProducerBatch.tryAppend(ProducerBatch.java:83)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.tryAppend(RecordAccumulator.java:257)
>   at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:210)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:645)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:598)
>   at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:97)
>   at 
> org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:59)
>   at 
> org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStore.put(ChangeLoggingSegmentedBytesStore.java:55)
>   at 
> org.apache.kafka.streams.state.internals.MeteredSegmentedBytesStore.put(MeteredSegmentedBytesStore.java:100)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:51)
>   at 
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:42)
>   at 
> org.apache.kafka.streams.state.internals.CachingWindowStore$1.apply(CachingWindowStore.java:90)
>   at 
> org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:145)
>   at 
> org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:239)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:214)
>   at 
> org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:122)
>   at 
> org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:143)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:111)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:47)
>   at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:187)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:133)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:82)
>   at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:69)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:175)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.processAndPunctuate(StreamThread.java:657)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:540)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:511)
> {noformat}





[jira] [Commented] (KAFKA-5184) Transient failure: MultipleListenersWithAdditionalJaasContextTest.testProduceConsume

2017-05-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005495#comment-16005495
 ] 

ASF GitHub Bot commented on KAFKA-5184:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3010


> Transient failure: 
> MultipleListenersWithAdditionalJaasContextTest.testProduceConsume
> 
>
> Key: KAFKA-5184
> URL: https://issues.apache.org/jira/browse/KAFKA-5184
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>Assignee: Balint Molnar
>
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/3574/testReport/junit/kafka.server/MultipleListenersWithAdditionalJaasContextTest/testProduceConsume/
> {code}
> Error Message
> java.lang.AssertionError: Partition [SECURE_INTERNAL,1] metadata not 
> propagated after 15000 ms
> Stacktrace
> java.lang.AssertionError: Partition [SECURE_INTERNAL,1] metadata not 
> propagated after 15000 ms
>   at kafka.utils.TestUtils$.fail(TestUtils.scala:311)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:811)
>   at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:857)
>   at kafka.utils.TestUtils$.$anonfun$createTopic$1(TestUtils.scala:254)
>   at 
> kafka.utils.TestUtils$.$anonfun$createTopic$1$adapted(TestUtils.scala:253)
>   at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
>   at scala.collection.immutable.Range.foreach(Range.scala:156)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:234)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:227)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>   at kafka.utils.TestUtils$.createTopic(TestUtils.scala:253)
>   at 
> kafka.server.MultipleListenersWithSameSecurityProtocolBaseTest.$anonfun$setUp$3(MultipleListenersWithSameSecurityProtocolBaseTest.scala:109)
>   at 
> kafka.server.MultipleListenersWithSameSecurityProtocolBaseTest.$anonfun$setUp$3$adapted(MultipleListenersWithSameSecurityProtocolBaseTest.scala:106)
>   at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
>   at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at 
> kafka.server.MultipleListenersWithSameSecurityProtocolBaseTest.setUp(MultipleListenersWithSameSecurityProtocolBaseTest.scala:106)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> 

[jira] [Updated] (KAFKA-5213) IllegalStateException in ensureOpenForRecordAppend

2017-05-10 Thread dan norwood (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dan norwood updated KAFKA-5213:
---
Description: 
i have a streams app that was working recently while pointing at trunk. this 
morning i ran it and now get

{noformat}
[2017-05-10 14:29:26,266] ERROR stream-thread 
[_confluent-controlcenter-3-3-0-1-04624550-88f9-4557-a47f-3dfbec3bc3d1-StreamThread-4]
 Streams application error during processing: {} 
(org.apache.kafka.streams.processor.internals.StreamThread:518)
java.lang.IllegalStateException: Tried to append a record, but 
MemoryRecordsBuilder is closed for record appends
at 
org.apache.kafka.common.record.MemoryRecordsBuilder.ensureOpenForRecordAppend(MemoryRecordsBuilder.java:607)
at 
org.apache.kafka.common.record.MemoryRecordsBuilder.appendLegacyRecord(MemoryRecordsBuilder.java:567)
at 
org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:353)
at 
org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:382)
at 
org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:440)
at 
org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:463)
at 
org.apache.kafka.clients.producer.internals.ProducerBatch.tryAppend(ProducerBatch.java:83)
at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.tryAppend(RecordAccumulator.java:257)
at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:210)
at 
org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:645)
at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:598)
at 
org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:97)
at 
org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:59)
at 
org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStore.put(ChangeLoggingSegmentedBytesStore.java:55)
at 
org.apache.kafka.streams.state.internals.MeteredSegmentedBytesStore.put(MeteredSegmentedBytesStore.java:100)
at 
org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:51)
at 
org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:42)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore$1.apply(CachingWindowStore.java:90)
at 
org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:145)
at 
org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:239)
at 
org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:214)
at 
org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:122)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:143)
at 
org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:111)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:47)
at 
org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:187)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:133)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:82)
at 
org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:69)
at 
org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:175)
at 
org.apache.kafka.streams.processor.internals.StreamThread.processAndPunctuate(StreamThread.java:657)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:540)
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:511)

{noformat}

  was:
i have a streams app that was working recently while pointing at trunk. this 
morning i ran it and now get

{preformat}
[2017-05-10 14:29:26,266] ERROR stream-thread 
[_confluent-controlcenter-3-3-0-1-04624550-88f9-4557-a47f-3dfbec3bc3d1-StreamThread-4]
 Streams application error during processing: {} 
(org.apache.kafka.streams.processor.internals.StreamThread:518)
java.lang.IllegalStateException: Tried to append a record, but 
MemoryRecordsBuilder is closed for record appends
at 
org.apache.kafka.common.record.MemoryRecordsBuilder.ensureOpenForRecordAppend(MemoryRecordsBuilder.java:607)
at 

[GitHub] kafka pull request #3010: KAFKA-5184, KAFKA-5173; Various improvements to SA...

2017-05-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3010




[jira] [Created] (KAFKA-5213) IllegalStateException in ensureOpenForRecordAppend

2017-05-10 Thread dan norwood (JIRA)
dan norwood created KAFKA-5213:
--

 Summary: IllegalStateException in ensureOpenForRecordAppend
 Key: KAFKA-5213
 URL: https://issues.apache.org/jira/browse/KAFKA-5213
 Project: Kafka
  Issue Type: Bug
Reporter: dan norwood
Assignee: Apurva Mehta


i have a streams app that was working recently while pointing at trunk. this 
morning i ran it and now get

{preformat}
[2017-05-10 14:29:26,266] ERROR stream-thread 
[_confluent-controlcenter-3-3-0-1-04624550-88f9-4557-a47f-3dfbec3bc3d1-StreamThread-4]
 Streams application error during processing: {} 
(org.apache.kafka.streams.processor.internals.StreamThread:518)
java.lang.IllegalStateException: Tried to append a record, but 
MemoryRecordsBuilder is closed for record appends
at 
org.apache.kafka.common.record.MemoryRecordsBuilder.ensureOpenForRecordAppend(MemoryRecordsBuilder.java:607)
at 
org.apache.kafka.common.record.MemoryRecordsBuilder.appendLegacyRecord(MemoryRecordsBuilder.java:567)
at 
org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:353)
at 
org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:382)
at 
org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:440)
at 
org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:463)
at 
org.apache.kafka.clients.producer.internals.ProducerBatch.tryAppend(ProducerBatch.java:83)
at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.tryAppend(RecordAccumulator.java:257)
at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:210)
at 
org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:645)
at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:598)
at 
org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:97)
at 
org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:59)
at 
org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStore.put(ChangeLoggingSegmentedBytesStore.java:55)
at 
org.apache.kafka.streams.state.internals.MeteredSegmentedBytesStore.put(MeteredSegmentedBytesStore.java:100)
at 
org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:51)
at 
org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:42)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore$1.apply(CachingWindowStore.java:90)
at 
org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:145)
at 
org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:239)
at 
org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:214)
at 
org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:122)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:143)
at 
org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:111)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:47)
at 
org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:187)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:133)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:82)
at 
org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:69)
at 
org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:175)
at 
org.apache.kafka.streams.processor.internals.StreamThread.processAndPunctuate(StreamThread.java:657)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:540)
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:511)

{preformat}





[jira] [Updated] (KAFKA-5212) Consumer ListOffsets request can starve group heartbeats

2017-05-10 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-5212:
---
Component/s: consumer

> Consumer ListOffsets request can starve group heartbeats
> 
>
> Key: KAFKA-5212
> URL: https://issues.apache.org/jira/browse/KAFKA-5212
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.11.0.0
>
>
> The consumer is not able to send heartbeats while it is awaiting a 
> ListOffsets response. Typically this is not a problem because ListOffsets 
> requests are handled quickly, but in the worst case if the request takes 
> longer than the session timeout, the consumer will fall out of the group.





Re: [VOTE] KIP-146: Isolation of dependencies and classes in Kafka Connect (restarted voting thread)

2017-05-10 Thread Ismael Juma
Konstantine, I am not convinced that it will represent similar
functionality as the goals are different. Also, I don't see a migration
path. To use Jigsaw, it makes sense to configure the module path during
start-up (-mp) like one configures the classpath. Whatever we are
implementing in Connect will be its own thing and it will be with us for
many years.

Ewen, as far as the JVM goes, I think `module.path` is probably the name
most likely to create confusion since it refers to a concept that was
recently introduced, has very specific (and some would say unexpected)
behaviour and it will be supported by java/javac launchers, build tools,
etc.

Gwen, `plugin.path` sounds good to me.

In any case, I will leave it to you all to decide. :)

Ismael

On Wed, May 10, 2017 at 8:11 PM, Konstantine Karantasis <
konstant...@confluent.io> wrote:

> Thank you Ismael for your vote as well as your comment.
>
> To give some context, it's exactly because of the similarities with Jigsaw
> that module.path was selected initially.
> The thought was that it could allow for a potential integration with Jigsaw
> in the future, without having to change property names significantly.
>
> Of course there are differences, as the ones you mention, mainly because
> Connect's module path currently is composed as a list of top-level
> directories that include the modules as subdirectories. However I'd be
> inclined to agree with Ewen. Maybe using a property name that presents
> similarities to other concepts in the JVM ecosystem reserves for us more
> flexibility than using a different name for something that will eventually
> end up representing similar functionality.
>
> In any case, I don't feel very strong about it. Let me know if you insist
> on a name change.
>
> -Konstantine
>
>
> On Wed, May 10, 2017 at 10:24 AM, Ewen Cheslack-Postava  >
> wrote:
>
> > +1 binding, and I'm flexible on the config name. Somehow I am guessing no
> > matter what terminology we use there somebody will find a way to be
> > confused.
> >
> > -Ewen
> >
> > On Wed, May 10, 2017 at 9:27 AM, Gwen Shapira  wrote:
> >
> > > +1 and proposing 'plugin.path' as we use the term connector plugins
> when
> > > referring to the jars themselves.
> > >
> > > Gwen
> > >
> > > On Wed, May 10, 2017 at 8:31 AM, Ismael Juma 
> wrote:
> > >
> > > > Thanks for the KIP Konstantine, +1 (binding) from me. One comment;
> > > >
> > > > 1. One thing to think about: the config name `module.path` could be
> > > > confusing in the future as Jigsaw introduces the concept of a module
> > > > path[1] in Java 9. The module path co-exists with the classpath, but
> > its
> > > > behaviour is quite different. To many people's surprise, Jigsaw
> doesn't
> > > > handle versioning and it disallows split packages (i.e. if the same
> > > package
> > > > appears in 2 different modules, it is an error). What we are
> proposing
> > is
> > > > quite different and perhaps it may make sense to use a different name
> > to
> > > > avoid confusion.
> > > >
> > > > Ismael
> > > >
> > > > [1] https://www.infoq.com/articles/Latest-Project-
> > Jigsaw-Usage-Tutorial
> > > >
> > > > On Mon, May 8, 2017 at 7:48 PM, Konstantine Karantasis <
> > > > konstant...@confluent.io> wrote:
> > > >
> > > > > ** Restarting the voting thread here, with a different title to
> avoid
> > > > > collapsing this thread's messages with the discussion thread's
> > messages
> > > > in
> > > > > mail clients. Apologies for the inconvenience. **
> > > > >
> > > > >
> > > > > Hi all,
> > > > >
> > > > > Given that the comments during the discussion seem to have been
> > > > addressed,
> > > > > I'm pleased to bring
> > > > >
> > > > > KIP-146: Classloading Isolation in Connect
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > 146+-+Classloading+Isolation+in+Connect
> > > > >
> > > > > up for voting. Again, this KIP aims to bring the highly desired
> > feature
> > > > of
> > > > > dependency isolation in Kafka Connect.
> > > > >
> > > > > In the meantime, for any additional feedback, please continue to
> send
> > > > your
> > > > > comments in the discussion thread here:
> > > > >
> > > > > https://www.mail-archive.com/dev@kafka.apache.org/msg71453.html
> > > > >
> > > > > This voting thread will stay active for a minimum of 72 hours.
> > > > >
> > > > > Sincerely,
> > > > > Konstantine
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > *Gwen Shapira*
> > > Product Manager | Confluent
> > > 650.450.2760 | @gwenshap
> > > Follow us: Twitter  | blog
> > > 
> > >
> >
>
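
As a rough illustration of the classloading isolation discussed in this thread, the
sketch below gives every plugin directory on a configured path its own URLClassLoader,
so plugin dependencies stay private while shared framework classes come from the parent
loader. It is not Connect's implementation; the directory layout, class name and default
path are invented for illustration.

{code}
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch, not Connect's implementation: each plugin directory gets
// its own URLClassLoader so plugins can bundle conflicting dependency versions
// without interfering with each other or with the framework.
public class PluginPathSketch {
    public static void main(String[] args) throws Exception {
        // Assumed layout: one top-level directory per plugin, with its jars inside.
        File pluginPath = new File(args.length > 0 ? args[0] : "/opt/connect/plugins");
        Map<String, ClassLoader> loaders = new HashMap<>();
        File[] pluginDirs = pluginPath.listFiles(File::isDirectory);
        if (pluginDirs == null) {
            return;
        }
        for (File pluginDir : pluginDirs) {
            List<URL> jars = new ArrayList<>();
            File[] files = pluginDir.listFiles((dir, name) -> name.endsWith(".jar"));
            if (files != null) {
                for (File jar : files) {
                    jars.add(jar.toURI().toURL());
                }
            }
            // The framework's loader is the parent, so shared APIs resolve consistently
            // while each plugin's dependencies remain private to its own loader.
            loaders.put(pluginDir.getName(),
                    new URLClassLoader(jars.toArray(new URL[0]),
                            PluginPathSketch.class.getClassLoader()));
        }
        loaders.forEach((name, loader) -> System.out.println("plugin " + name + " -> " + loader));
    }
}
{code}

A real implementation additionally needs filtered delegation so that a plugin-bundled
copy of a framework class is never picked over the framework's own.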


Build failed in Jenkins: kafka-trunk-jdk8 #1498

2017-05-10 Thread Apache Jenkins Server
See 


Changes:

[cshapi] MINOR: Add a release script that helps generate release candidates.

--
[...truncated 851.72 KB...]

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareAbortState
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareAbortState
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldNotRetryOnCommitWhenAppendToLogFailsWithNotCoordinator STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldNotRetryOnCommitWhenAppendToLogFailsWithNotCoordinator PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithErrorsNoneOnAddPartitionWhenNoErrorsAndPartitionsTheSame 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithErrorsNoneOnAddPartitionWhenNoErrorsAndPartitionsTheSame PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsEmpty STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsEmpty PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendCompleteAbortToLogOnEndTxnWhenStatusIsOngoingAndResultIsAbort 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendCompleteAbortToLogOnEndTxnWhenStatusIsOngoingAndResultIsAbort PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendCompleteCommitToLogOnEndTxnWhenStatusIsOngoingAndResultIsCommit 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendCompleteCommitToLogOnEndTxnWhenStatusIsOngoingAndResultIsCommit 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestAddPartitionsToTransactionWhenTransactionalIdIsEmpty
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestAddPartitionsToTransactionWhenTransactionalIdIsEmpty
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendPrepareAbortToLogOnEndTxnWhenStatusIsOngoingAndResultIsAbort STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendPrepareAbortToLogOnEndTxnWhenStatusIsOngoingAndResultIsAbort PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAcceptInitPidAndReturnNextPidWhenTransactionalIdIsNull STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAcceptInitPidAndReturnNextPidWhenTransactionalIdIsNull PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRemoveTransactionsForPartitionOnEmigration STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRemoveTransactionsForPartitionOnEmigration PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareCommitState
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareCommitState
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnInvalidTxnRequestOnEndTxnRequestWhenStatusIsCompleteCommitAndResultIsNotCommit
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnInvalidTxnRequestOnEndTxnRequestWhenStatusIsCompleteCommitAndResultIsNotCommit
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId STARTED


[jira] [Created] (KAFKA-5212) Consumer ListOffsets request can starve group heartbeats

2017-05-10 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-5212:
--

 Summary: Consumer ListOffsets request can starve group heartbeats
 Key: KAFKA-5212
 URL: https://issues.apache.org/jira/browse/KAFKA-5212
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson
 Fix For: 0.11.0.0


The consumer is not able to send heartbeats while it is awaiting a ListOffsets 
response. Typically this is not a problem because ListOffsets requests are 
handled quickly, but in the worst case if the request takes longer than the 
session timeout, the consumer will fall out of the group.
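
For context on where ListOffsets requests come from in application code, here is a
minimal consumer sketch; the topic name, group id and timings are illustrative
assumptions, and the starvation itself is the behaviour described above rather than
anything visible in the snippet. Calls such as offsetsForTimes(), beginningOffsets()
and endOffsets() all translate into ListOffsets requests.

{code}
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

// Illustrative sketch: topic, group id and timings are made up.
public class ListOffsetsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "sketch-group");
        props.put("session.timeout.ms", "10000"); // membership lapses after 10s without heartbeats
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            consumer.poll(0); // join the group and receive an assignment

            TopicPartition tp = new TopicPartition("events", 0);
            long oneHourAgo = System.currentTimeMillis() - 3_600_000L;
            // offsetsForTimes() issues a ListOffsets request; per the report above, if the
            // broker takes longer than session.timeout.ms to respond, the consumer cannot
            // heartbeat in the meantime and falls out of the group.
            Map<TopicPartition, OffsetAndTimestamp> offsets =
                    consumer.offsetsForTimes(Collections.singletonMap(tp, oneHourAgo));
            System.out.println("offsets for timestamp: " + offsets);
        }
    }
}
{code}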



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-1194) The kafka broker cannot delete the old log files after the configured time

2017-05-10 Thread Jose Alberto Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005392#comment-16005392
 ] 

Jose Alberto Gutierrez commented on KAFKA-1194:
---

Still a problem.

Kafka : kafka_2.12-0.10.2.0  

Config:

advertised.host.name = 192.168.2.202
advertised.listeners = null
advertised.port = null
authorizer.class.name = 
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 6202
broker.id.generation.enable = true
broker.rack = null
compression.type = producer
connections.max.idle.ms = 60
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 3
create.topic.policy.class.name = null
default.replication.factor = 1
delete.topic.enable = false
fetch.purgatory.purge.interval.requests = 1000
group.max.session.timeout.ms = 30
group.min.session.timeout.ms = 6000
host.name = 
inter.broker.listener.name = null
inter.broker.protocol.version = 0.8.2.2
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = 
SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT
listeners = null
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 8640
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /tmp/kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 6
log.flush.scheduler.interval.ms = 9223372036854775807
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.format.version = 0.8.2.2
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 30
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 6
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides = 
message.max.bytes = 112
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 3
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 10
num.recovery.threads.per.data.dir = 1
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 60
offsets.retention.minutes = 1440
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 3
offsets.topic.segment.bytes = 104857600
port = 9092
principal.builder.class = class 
org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
producer.purgatory.purge.interval.requests = 1000
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 1
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 3
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 3
reserved.broker.max.id = 1000
sasl.enabled.mechanisms = [GSSAPI]
sasl.kerberos.kinit.cmd = /usr/bin/kinit

[GitHub] kafka pull request #2986: KAFKA-5099: Replica Deletion Regression from KIP-1...

2017-05-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2986


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-5099) Replica Deletion Regression from KIP-101

2017-05-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16005374#comment-16005374
 ] 

ASF GitHub Bot commented on KAFKA-5099:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2986


> Replica Deletion Regression from KIP-101
> 
>
> Key: KAFKA-5099
> URL: https://issues.apache.org/jira/browse/KAFKA-5099
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> It appears that replica deletion regressed from KIP-101. Replica deletion 
> happens when a broker receives a StopReplicaRequest with delete=true. Ever 
> since KAFKA-1911, replica deletion has been async, meaning the broker 
> responds with a StopReplicaResponse simply after marking the replica 
> directory as staged for deletion. This marking happens by moving a data log 
> directory and its contents such as /tmp/kafka-logs1/t1-0 to a marked 
> directory like /tmp/kafka-logs1/t1-0.8c9c4c0c61c44cc59ebeb00075a2a07f-delete, 
> acting as a soft-delete. A scheduled thread later actually deletes the data. 
> It appears that the regression occurs while the scheduled thread is actually 
> trying to delete the data, which means the controller considers operations 
> such as partition reassignment and topic deletion complete. But if you look 
> at the log4j logs and data logs, you'll find that the soft-deleted data logs 
> haven't actually been deleted. It seems that restarting the broker 
> actually allows the soft-deleted directories to get deleted.
> Here's the setup:
> {code}
> > ./bin/zookeeper-server-start.sh config/zookeeper.properties
> > export LOG_DIR=logs0 && ./bin/kafka-server-start.sh 
> > config/server0.properties
> > export LOG_DIR=logs1 && ./bin/kafka-server-start.sh 
> > config/server1.properties
> > ./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic t0 
> > --replica-assignment 1:0
> > ./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic t1 
> > --replica-assignment 1:0
> > ./bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic t0
> > cat p.txt
> {"partitions":
>  [
>   {"topic": "t1", "partition": 0, "replicas": [0] }
>  ],
> "version":1
> }
> > ./bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 
> > --reassignment-json-file p.txt --execute
> {code}
> Here are sample logs:
> {code}
> [2017-04-20 17:46:54,801] INFO [ReplicaFetcherManager on broker 1] Removed 
> fetcher for partitions t0-0 (kafka.server.ReplicaFetcherManager)
> [2017-04-20 17:46:54,814] INFO Log for partition t0-0 is renamed to 
> /tmp/kafka-logs1/t0-0.bbc8fa126e3e4ff787f6b68d158ab771-delete and is 
> scheduled for deletion (kafka.log.LogManager)
> [2017-04-20 17:47:27,585] INFO Deleting index 
> /tmp/kafka-logs1/t0-0.bbc8fa126e3e4ff787f6b68d158ab771-delete/.index
>  (kafka.log.OffsetIndex)
> [2017-04-20 17:47:27,586] INFO Deleting index 
> /tmp/kafka-logs1/t0-0/.timeindex (kafka.log.TimeIndex)
> [2017-04-20 17:47:27,587] ERROR Exception in deleting 
> Log(/tmp/kafka-logs1/t0-0.bbc8fa126e3e4ff787f6b68d158ab771-delete). Moving it 
> to the end of the queue. (kafka.log.LogManager)
> java.io.FileNotFoundException: 
> /tmp/kafka-logs1/t0-0/leader-epoch-checkpoint.tmp (No such file or directory)
>   at java.io.FileOutputStream.open0(Native Method)
>   at java.io.FileOutputStream.open(FileOutputStream.java:270)
>   at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
>   at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
>   at kafka.server.checkpoints.CheckpointFile.write(CheckpointFile.scala:41)
>   at 
> kafka.server.checkpoints.LeaderEpochCheckpointFile.write(LeaderEpochCheckpointFile.scala:61)
>   at 
> kafka.server.epoch.LeaderEpochFileCache.kafka$server$epoch$LeaderEpochFileCache$$flush(LeaderEpochFileCache.scala:178)
>   at 
> kafka.server.epoch.LeaderEpochFileCache$$anonfun$clear$1.apply$mcV$sp(LeaderEpochFileCache.scala:161)
>   at 
> kafka.server.epoch.LeaderEpochFileCache$$anonfun$clear$1.apply(LeaderEpochFileCache.scala:159)
>   at 
> kafka.server.epoch.LeaderEpochFileCache$$anonfun$clear$1.apply(LeaderEpochFileCache.scala:159)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213)
>   at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:221)
>   at 
> kafka.server.epoch.LeaderEpochFileCache.clear(LeaderEpochFileCache.scala:159)
>   at kafka.log.Log.delete(Log.scala:1051)
>   at 
> kafka.log.LogManager.kafka$log$LogManager$$deleteLogs(LogManager.scala:442)
>   at 
> kafka.log.LogManager$$anonfun$startup$5.apply$mcV$sp(LogManager.scala:241)
>   at 
> kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110)
>   at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
>   at 

[jira] [Updated] (KAFKA-5099) Replica Deletion Regression from KIP-101

2017-05-10 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-5099:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2986
[https://github.com/apache/kafka/pull/2986]

> Replica Deletion Regression from KIP-101
> 
>
> Key: KAFKA-5099
> URL: https://issues.apache.org/jira/browse/KAFKA-5099
> Project: Kafka
>  Issue Type: Bug
>Reporter: Onur Karaman
>Assignee: Onur Karaman
>Priority: Blocker
> Fix For: 0.11.0.0
>
>
> It appears that replica deletion regressed from KIP-101. Replica deletion 
> happens when a broker receives a StopReplicaRequest with delete=true. Ever 
> since KAFKA-1911, replica deletion has been async, meaning the broker 
> responds with a StopReplicaResponse simply after marking the replica 
> directory as staged for deletion. This marking happens by moving a data log 
> directory and its contents such as /tmp/kafka-logs1/t1-0 to a marked 
> directory like /tmp/kafka-logs1/t1-0.8c9c4c0c61c44cc59ebeb00075a2a07f-delete, 
> acting as a soft-delete. A scheduled thread later actually deletes the data. 
> It appears that the regression occurs while the scheduled thread is actually 
> trying to delete the data, which means the controller considers operations 
> such as partition reassignment and topic deletion complete. But if you look 
> at the log4j logs and data logs, you'll find that the soft-deleted data logs 
> haven't actually been deleted. It seems that restarting the broker 
> actually allows the soft-deleted directories to get deleted.
> Here's the setup:
> {code}
> > ./bin/zookeeper-server-start.sh config/zookeeper.properties
> > export LOG_DIR=logs0 && ./bin/kafka-server-start.sh 
> > config/server0.properties
> > export LOG_DIR=logs1 && ./bin/kafka-server-start.sh 
> > config/server1.properties
> > ./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic t0 
> > --replica-assignment 1:0
> > ./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic t1 
> > --replica-assignment 1:0
> > ./bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic t0
> > cat p.txt
> {"partitions":
>  [
>   {"topic": "t1", "partition": 0, "replicas": [0] }
>  ],
> "version":1
> }
> > ./bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 
> > --reassignment-json-file p.txt --execute
> {code}
> Here are sample logs:
> {code}
> [2017-04-20 17:46:54,801] INFO [ReplicaFetcherManager on broker 1] Removed 
> fetcher for partitions t0-0 (kafka.server.ReplicaFetcherManager)
> [2017-04-20 17:46:54,814] INFO Log for partition t0-0 is renamed to 
> /tmp/kafka-logs1/t0-0.bbc8fa126e3e4ff787f6b68d158ab771-delete and is 
> scheduled for deletion (kafka.log.LogManager)
> [2017-04-20 17:47:27,585] INFO Deleting index 
> /tmp/kafka-logs1/t0-0.bbc8fa126e3e4ff787f6b68d158ab771-delete/.index
>  (kafka.log.OffsetIndex)
> [2017-04-20 17:47:27,586] INFO Deleting index 
> /tmp/kafka-logs1/t0-0/.timeindex (kafka.log.TimeIndex)
> [2017-04-20 17:47:27,587] ERROR Exception in deleting 
> Log(/tmp/kafka-logs1/t0-0.bbc8fa126e3e4ff787f6b68d158ab771-delete). Moving it 
> to the end of the queue. (kafka.log.LogManager)
> java.io.FileNotFoundException: 
> /tmp/kafka-logs1/t0-0/leader-epoch-checkpoint.tmp (No such file or directory)
>   at java.io.FileOutputStream.open0(Native Method)
>   at java.io.FileOutputStream.open(FileOutputStream.java:270)
>   at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
>   at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
>   at kafka.server.checkpoints.CheckpointFile.write(CheckpointFile.scala:41)
>   at 
> kafka.server.checkpoints.LeaderEpochCheckpointFile.write(LeaderEpochCheckpointFile.scala:61)
>   at 
> kafka.server.epoch.LeaderEpochFileCache.kafka$server$epoch$LeaderEpochFileCache$$flush(LeaderEpochFileCache.scala:178)
>   at 
> kafka.server.epoch.LeaderEpochFileCache$$anonfun$clear$1.apply$mcV$sp(LeaderEpochFileCache.scala:161)
>   at 
> kafka.server.epoch.LeaderEpochFileCache$$anonfun$clear$1.apply(LeaderEpochFileCache.scala:159)
>   at 
> kafka.server.epoch.LeaderEpochFileCache$$anonfun$clear$1.apply(LeaderEpochFileCache.scala:159)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213)
>   at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:221)
>   at 
> kafka.server.epoch.LeaderEpochFileCache.clear(LeaderEpochFileCache.scala:159)
>   at kafka.log.Log.delete(Log.scala:1051)
>   at 
> kafka.log.LogManager.kafka$log$LogManager$$deleteLogs(LogManager.scala:442)
>   at 
> kafka.log.LogManager$$anonfun$startup$5.apply$mcV$sp(LogManager.scala:241)
>   at 
> kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110)
>   at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
>   at 
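
One plausible reading of the FileNotFoundException in the quoted stack trace is that a
path captured before the soft-delete rename is still written to afterwards. The
standalone sketch below reproduces that failure mode with plain java.io; it is not
Kafka code and the paths are illustrative.

{code}
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;

// Illustrative sketch only, not Kafka's code: a checkpoint path captured before the
// soft-delete rename points at a directory that no longer exists afterwards.
public class StaleCheckpointPathSketch {
    public static void main(String[] args) throws IOException {
        File partitionDir = new File("/tmp/kafka-logs1/t1-0"); // hypothetical partition directory
        partitionDir.mkdirs();
        // Path captured while the partition directory still has its original name.
        File checkpointTmp = new File(partitionDir, "leader-epoch-checkpoint.tmp");

        // The async "soft delete" renames the directory to a -delete marker.
        File marked = new File("/tmp/kafka-logs1/t1-0.8c9c4c0c61c44cc59ebeb00075a2a07f-delete");
        partitionDir.renameTo(marked);

        try (FileOutputStream out = new FileOutputStream(checkpointTmp)) {
            out.write(0); // never reached
        } catch (FileNotFoundException e) {
            // Same failure shape as the reported exception: the old path is gone.
            System.out.println("stale path after rename: " + e.getMessage());
        }
    }
}
{code}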

Re: [VOTE] KIP-155 Add range scan for windowed state stores

2017-05-10 Thread Xavier Léauté
Thank you for the feedback Michal.

While I agree the return may be a little bit more confusing to reason
about, the reason for doing so was to keep the range query interfaces
consistent with their single-key counterparts.

In the case of the window store, the "key" of the single-key iterator is
the actual timestamp of the underlying entry, not just the range of the window,
so if we were to wrap the result key in a window we wouldn't be getting back
the equivalent of the single-key iterator.

In both cases peekNextKey is just returning the timestamp of the next entry
in the window store that matches the query.

In the case of the session store, we already return Windowed<K> for the
single-key method, so it made sense there to also return Windowed<K> for
the range method.

Hope this makes sense? Let me know if you still have concerns about this.

Thank you,
Xavier

On Wed, May 10, 2017 at 12:25 PM Michal Borowiecki <
michal.borowie...@openbet.com> wrote:

> Apologies, I missed the discussion (or lack thereof) about the return
> type of:
>
> WindowStoreIterator<KeyValue<K, V>> fetch(K from, K to, long timeFrom,
> long timeTo)
>
>
> WindowStoreIterator<V> (as the KIP mentions) is a subclass of
> KeyValueIterator<Long, V>
>
> KeyValueIterator<K, V> has the following method:
>
> /**
>  * Peek at the next key without advancing the iterator
>  * @return the key of the next value that would be returned from the next call to next
>  */
> K peekNextKey();
>
> Given the type in this case will be Long, I assume what it would return
> is the window timestamp of the next found record?
>
>
> In the case of WindowStoreIterator<V> fetch(K key, long timeFrom, long
> timeTo);
> all records found by fetch have the same key, so it's harmless to return
> the timestamp of the next found window but here we have varying keys and
> varying windows, so won't it be too confusing?
>
> KeyValueIterator<Windowed<K>, V> (as in the proposed
> ReadOnlySessionStore.fetch) just feels much more intuitive.
>
> Apologies again for jumping onto this only once the voting has already
> begun.
> Thanks,
> Michał
>
> On 10/05/17 20:08, Sriram Subramanian wrote:
> > +1
> >
> > On Wed, May 10, 2017 at 11:42 AM, Bill Bejeck  wrote:
> >
> >> +1
> >>
> >> Thanks,
> >> Bill
> >>
> >> On Wed, May 10, 2017 at 2:38 PM, Guozhang Wang 
> wrote:
> >>
> >>> +1. Thank you!
> >>>
> >>> On Wed, May 10, 2017 at 11:30 AM, Xavier Léauté 
> >>> wrote:
> >>>
>  Hi everyone,
> 
>  Since there aren't any objections to this addition, I would like to
> >> start
>  the voting on KIP-155 so we can hopefully get this into 0.11.
> 
>  https://cwiki.apache.org/confluence/display/KAFKA/KIP+
>  155+-+Add+range+scan+for+windowed+state+stores
> 
>  Voting will stay active for at least 72 hours.
> 
>  Thank you,
>  Xavier
> 
> >>>
> >>>
> >>> --
> >>> -- Guozhang
> >>>
>
>
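
To make the trade-off in this exchange concrete, here is a compile-only sketch using
simplified stand-ins for the Streams types; these are not the real interfaces and the
method names are invented. The point is only what peekNextKey() yields under each
proposed signature.

{code}
import java.util.Iterator;

// Simplified stand-ins for the Streams types under discussion; these are not the
// real interfaces, only enough to show what peekNextKey() yields in each case.
class KeyValue<K, V> {
    final K key;
    final V value;
    KeyValue(K key, V value) { this.key = key; this.value = value; }
}

class Windowed<K> {
    final K key;
    final long windowStart;
    Windowed(K key, long windowStart) { this.key = key; this.windowStart = windowStart; }
}

interface KeyValueIterator<K, V> extends Iterator<KeyValue<K, V>> {
    K peekNextKey(); // peek at the next key without advancing the iterator
}

// A window store iterator is keyed by the entry's timestamp.
interface WindowStoreIterator<V> extends KeyValueIterator<Long, V> { }

interface RangeFetchSignatures<K, V> {
    // Shape proposed in the KIP, consistent with the single-key fetch:
    // peekNextKey() returns the Long timestamp of the next matching entry,
    // and the record key travels inside the KeyValue payload.
    WindowStoreIterator<KeyValue<K, V>> fetchKeyedByTimestamp(K from, K to, long timeFrom, long timeTo);

    // Shape suggested in the reply, mirroring the session store's single-key fetch:
    // peekNextKey() returns the record key together with its window.
    KeyValueIterator<Windowed<K>, V> fetchKeyedByWindow(K from, K to, long timeFrom, long timeTo);
}
{code}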


Re: [VOTE] KIP-140: A new thread for adding administrative RPCs for adding deleting and listing ACLs

2017-05-10 Thread Sriram Subramanian
+1

On Wed, May 10, 2017 at 1:24 PM, Colin McCabe  wrote:

> Hi all,
>
> Some folks said that the previous VOTE thread was getting collapsed by
> gmail into a different thread, so I am reposting this.
>
> The KIP page is here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 140%3A+Add+administrative+RPCs+for+adding%2C+deleting%2C+and+listing+ACLs
>
> The previous VOTE thread was here:
> https://www.mail-archive.com/dev@kafka.apache.org/msg71412.html
>
> best,
> Colin
>


[VOTE] KIP-140: A new thread for adding administrative RPCs for adding deleting and listing ACLs

2017-05-10 Thread Colin McCabe
Hi all,

Some folks said that the previous VOTE thread was getting collapsed by
gmail into a different thread, so I am reposting this.

The KIP page is here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-140%3A+Add+administrative+RPCs+for+adding%2C+deleting%2C+and+listing+ACLs

The previous VOTE thread was here:
https://www.mail-archive.com/dev@kafka.apache.org/msg71412.html

best,
Colin


[GitHub] kafka pull request #2795: MINOR: Add a release script that helps generate re...

2017-05-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2795


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [VOTE] KIP-140: Add administrative RPCs for adding, deleting, and listing ACLs

2017-05-10 Thread Jason Gustafson
Hi Colin,

Thanks for the KIP. Looks good overall. One thing I wasn't too clear about
is whether new resource types, operations, or permissions require a version
bump for the three new request types. On reading the proposal, it sort of
sounds like the intent is not to bump the versions and let the new type be
mapped to "unknown" on clients/brokers that don't support it. Is that
correct?

Thanks,
Jason

On Wed, May 10, 2017 at 8:24 AM, Ismael Juma  wrote:

> Thanks for the KIP, +1(binding) from me. Colin, it may make sense to start
> a new thread for this as GMail is currently hiding it in the middle of the
> discuss thread. Please mention the date the thread was started along with
> the 3 +1s (1 binding) so far.
>
> Ismael
>
> On Sat, Apr 29, 2017 at 1:09 AM, Colin McCabe  wrote:
>
> > Hi all,
> >
> > I'd like to start the voting for KIP-140: Add administrative RPCs for
> > adding, deleting, and listing ACLs.  This provides an API for adding,
> > deleting, and listing the access control lists (ACLs) which are used to
> > control access on Kafka topics and brokers.
> >
> > The wiki page is here:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 140%3A+Add+administrative+RPCs+for+adding%2C+deleting%
> 2C+and+listing+ACLs
> >
> > The previous [DISCUSS] thread:
> > https://www.mail-archive.com/dev@kafka.apache.org/msg70858.html
> >
> > cheers,
> > Colin
> >
>


Re: [VOTE] KIP-155 Add range scan for windowed state stores

2017-05-10 Thread Michal Borowiecki
Apologies, I missed the discussion (or lack thereof) about the return 
type of:


WindowStoreIterator<KeyValue<K, V>> fetch(K from, K to, long timeFrom, 
long timeTo)



WindowStoreIterator<V> (as the KIP mentions) is a subclass of 
KeyValueIterator<Long, V>


KeyValueIterator<K, V> has the following method:

/**
 * Peek at the next key without advancing the iterator
 * @return the key of the next value that would be returned from the next call to next
 */
K peekNextKey();


Given the type in this case will be Long, I assume what it would return 
is the window timestamp of the next found record?



In the case of WindowStoreIterator<V> fetch(K key, long timeFrom, long 
timeTo);
all records found by fetch have the same key, so it's harmless to return 
the timestamp of the next found window but here we have varying keys and 
varying windows, so won't it be too confusing?


KeyValueIterator<Windowed<K>, V> (as in the proposed 
ReadOnlySessionStore.fetch) just feels much more intuitive.


Apologies again for jumping onto this only once the voting has already 
begun.

Thanks,
Michał

On 10/05/17 20:08, Sriram Subramanian wrote:

+1

On Wed, May 10, 2017 at 11:42 AM, Bill Bejeck  wrote:


+1

Thanks,
Bill

On Wed, May 10, 2017 at 2:38 PM, Guozhang Wang  wrote:


+1. Thank you!

On Wed, May 10, 2017 at 11:30 AM, Xavier Léauté 
wrote:


Hi everyone,

Since there aren't any objections to this addition, I would like to

start

the voting on KIP-155 so we can hopefully get this into 0.11.

https://cwiki.apache.org/confluence/display/KAFKA/KIP+
155+-+Add+range+scan+for+windowed+state+stores

Voting will stay active for at least 72 hours.

Thank you,
Xavier




--
-- Guozhang





[jira] [Assigned] (KAFKA-5182) Transient failure: RequestQuotaTest.testResponseThrottleTime

2017-05-10 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram reassigned KAFKA-5182:
-

Assignee: Rajini Sivaram

> Transient failure: RequestQuotaTest.testResponseThrottleTime
> 
>
> Key: KAFKA-5182
> URL: https://issues.apache.org/jira/browse/KAFKA-5182
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>Assignee: Rajini Sivaram
>
> Stacktrace
> {code}
> java.util.concurrent.ExecutionException: java.lang.AssertionError: Response 
> not throttled: Client JOIN_GROUP apiKey JOIN_GROUP requests 3 requestTime 
> 0.009982775502048156 throttleTime 0.0
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:206)
>   at 
> kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:326)
>   at 
> kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:324)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
>   at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
>   at 
> kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:324)
>   at 
> kafka.server.RequestQuotaTest.testResponseThrottleTime(RequestQuotaTest.scala:105)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> 

Re: [VOTE] KIP-146: Isolation of dependencies and classes in Kafka Connect (restarted voting thread)

2017-05-10 Thread Konstantine Karantasis
Thank you Ismael for your vote as well as your comment.

To give some context, it's exactly because of the similarities with Jigsaw
that module.path was selected initially.
The thought was that it could allow for a potential integration with Jigsaw
in the future, without having to change property names significantly.

Of course there are differences, as the ones you mention, mainly because
Connect's module path currently is composed as a list of top-level
directories that include the modules as subdirectories. However I'd be
inclined to agree with Ewen. Maybe using a property name that presents
similarities to other concepts in the JVM ecosystem reserves for us more
flexibility than using a different name for something that will eventually
end up representing similar functionality.

In any case, I don't feel very strong about it. Let me know if you insist
on a name change.

-Konstantine


On Wed, May 10, 2017 at 10:24 AM, Ewen Cheslack-Postava 
wrote:

> +1 binding, and I'm flexible on the config name. Somehow I am guessing no
> matter what terminology we use there somebody will find a way to be
> confused.
>
> -Ewen
>
> On Wed, May 10, 2017 at 9:27 AM, Gwen Shapira  wrote:
>
> > +1 and proposing 'plugin.path' as we use the term connector plugins when
> > referring to the jars themselves.
> >
> > Gwen
> >
> > On Wed, May 10, 2017 at 8:31 AM, Ismael Juma  wrote:
> >
> > > Thanks for the KIP Konstantine, +1 (binding) from me. One comment;
> > >
> > > 1. One thing to think about: the config name `module.path` could be
> > > confusing in the future as Jigsaw introduces the concept of a module
> > > path[1] in Java 9. The module path co-exists with the classpath, but
> its
> > > behaviour is quite different. To many people's surprise, Jigsaw doesn't
> > > handle versioning and it disallows split packages (i.e. if the same
> > package
> > > appears in 2 different modules, it is an error). What we are proposing
> is
> > > quite different and perhaps it may make sense to use a different name
> to
> > > avoid confusion.
> > >
> > > Ismael
> > >
> > > [1] https://www.infoq.com/articles/Latest-Project-
> Jigsaw-Usage-Tutorial
> > >
> > > On Mon, May 8, 2017 at 7:48 PM, Konstantine Karantasis <
> > > konstant...@confluent.io> wrote:
> > >
> > > > ** Restarting the voting thread here, with a different title to avoid
> > > > collapsing this thread's messages with the discussion thread's
> messages
> > > in
> > > > mail clients. Apologies for the inconvenience. **
> > > >
> > > >
> > > > Hi all,
> > > >
> > > > Given that the comments during the discussion seem to have been
> > > addressed,
> > > > I'm pleased to bring
> > > >
> > > > KIP-146: Classloading Isolation in Connect
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 146+-+Classloading+Isolation+in+Connect
> > > >
> > > > up for voting. Again, this KIP aims to bring the highly desired
> feature
> > > of
> > > > dependency isolation in Kafka Connect.
> > > >
> > > > In the meantime, for any additional feedback, please continue to send
> > > your
> > > > comments in the discussion thread here:
> > > >
> > > > https://www.mail-archive.com/dev@kafka.apache.org/msg71453.html
> > > >
> > > > This voting thread will stay active for a minimum of 72 hours.
> > > >
> > > > Sincerely,
> > > > Konstantine
> > > >
> > >
> >
> >
> >
> > --
> > *Gwen Shapira*
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter  | blog
> > 
> >
>


Re: [VOTE] KIP-155 Add range scan for windowed state stores

2017-05-10 Thread Sriram Subramanian
+1

On Wed, May 10, 2017 at 11:42 AM, Bill Bejeck  wrote:

> +1
>
> Thanks,
> Bill
>
> On Wed, May 10, 2017 at 2:38 PM, Guozhang Wang  wrote:
>
> > +1. Thank you!
> >
> > On Wed, May 10, 2017 at 11:30 AM, Xavier Léauté 
> > wrote:
> >
> > > Hi everyone,
> > >
> > > Since there aren't any objections to this addition, I would like to
> start
> > > the voting on KIP-155 so we can hopefully get this into 0.11.
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP+
> > > 155+-+Add+range+scan+for+windowed+state+stores
> > >
> > > Voting will stay active for at least 72 hours.
> > >
> > > Thank you,
> > > Xavier
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>


[jira] [Created] (KAFKA-5211) KafkaConsumer should not skip a corrupted record after throwing an exception.

2017-05-10 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-5211:
---

 Summary: KafkaConsumer should not skip a corrupted record after 
throwing an exception.
 Key: KAFKA-5211
 URL: https://issues.apache.org/jira/browse/KAFKA-5211
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin


In 0.10.2, when there is a corrupted record, KafkaConsumer.poll() will throw an 
exception and block on that corrupted record. In the latest trunk this behavior 
has changed to skip the corrupted record (which is the old consumer behavior). 
With KIP-98, skipping corrupted messages would be a little dangerous as the 
message could be a control message for a transaction. We should fix the issue 
to let the KafkaConsumer block on the corrupted messages.
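
A minimal sketch of why the blocking behaviour matters to applications: when poll()
keeps surfacing the corrupted record as an exception instead of silently skipping it,
the application keeps the choice of stopping or explicitly seeking past the bad offset.
The topic, group id and the decision to skip are illustrative assumptions, and the
exact exception type surfaced for a corrupted record can vary by version.

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.TopicPartition;

// Illustrative sketch: topic, group id and the decision to skip are made up.
public class CorruptRecordHandlingSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "sketch-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        TopicPartition tp = new TopicPartition("events", 0);
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singletonList(tp));
            while (true) {
                try {
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println(record.offset() + ": " + record.value());
                    }
                } catch (KafkaException e) {
                    // With the blocking behaviour described above, the next poll() hits the
                    // same corrupted record again; an application that really wants to skip
                    // it has to do so explicitly, for example by seeking past its position.
                    long position = consumer.position(tp);
                    System.err.println("corrupted record at or after offset " + position + ": " + e);
                    consumer.seek(tp, position + 1);
                }
            }
        }
    }
}
{code}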



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: [VOTE] KIP-155 Add range scan for windowed state stores

2017-05-10 Thread Bill Bejeck
+1

Thanks,
Bill

On Wed, May 10, 2017 at 2:38 PM, Guozhang Wang  wrote:

> +1. Thank you!
>
> On Wed, May 10, 2017 at 11:30 AM, Xavier Léauté 
> wrote:
>
> > Hi everyone,
> >
> > Since there aren't any objections to this addition, I would like to start
> > the voting on KIP-155 so we can hopefully get this into 0.11.
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP+
> > 155+-+Add+range+scan+for+windowed+state+stores
> >
> > Voting will stay active for at least 72 hours.
> >
> > Thank you,
> > Xavier
> >
>
>
>
> --
> -- Guozhang
>


Re: [VOTE] KIP-155 Add range scan for windowed state stores

2017-05-10 Thread Guozhang Wang
+1. Thank you!

On Wed, May 10, 2017 at 11:30 AM, Xavier Léauté  wrote:

> Hi everyone,
>
> Since there aren't any objections to this addition, I would like to start
> the voting on KIP-155 so we can hopefully get this into 0.11.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP+
> 155+-+Add+range+scan+for+windowed+state+stores
>
> Voting will stay active for at least 72 hours.
>
> Thank you,
> Xavier
>



-- 
-- Guozhang


Re: [VOTE] KIP-151: Expose Connector type in REST API (first attempt :)

2017-05-10 Thread Guozhang Wang
+1

On Wed, May 10, 2017 at 10:27 AM, Ismael Juma  wrote:

> Thanks for the KIP, +1 (binding).
>
> Ismael
>
> On Mon, May 8, 2017 at 11:39 PM, dan  wrote:
>
> > i'd like to begin voting on
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 151+Expose+Connector+type+in+REST+API
> >
> > discussion should remain on
> > http://mail-archives.apache.org/mod_mbox/kafka-dev/201705.
> > mbox/%3CCAFJy-U-pF7YxSRadx_zAQYCX2+SswmVPSBcA4tDMPP5834s6Kg@mail.
> > gmail.com%3E
> >
> > This voting thread will stay active for a minimum of 72 hours.
> >
> > thanks
> > dan
> >
>



-- 
-- Guozhang


[VOTE] KIP-155 Add range scan for windowed state stores

2017-05-10 Thread Xavier Léauté
Hi everyone,

Since there aren't any objections to this addition, I would like to start
the voting on KIP-155 so we can hopefully get this into 0.11.

https://cwiki.apache.org/confluence/display/KAFKA/KIP+155+-+Add+range+scan+for+windowed+state+stores

Voting will stay active for at least 72 hours.

Thank you,
Xavier


Re: [DISCUSS] KIP-155 Add range scan for windowed state stores

2017-05-10 Thread Guozhang Wang
Sounds great, thanks!

On Wed, May 10, 2017 at 10:43 AM, Xavier Léauté  wrote:

> I'll be driving the implementation as well. Unless there are specific
> concerns about the implementation, I mainly wrote the KIP to agree on the
> additional public interfaces.
>
> Current window store fetch operations already rely on range scanning
> & filtering the relevant keys in the underlying state store, and
> implementing key range fetch would follow a similar approach.
>
> On Tue, May 9, 2017 at 9:21 PM Guozhang Wang  wrote:
>
> > Thanks, the KIP sounds good to me. There is no implementation proposal in
> > the JIRA / KIP yet, would you like to drive the implementation also?
> >
> >
> > Guozhang
> >
>



-- 
-- Guozhang


Re: [DISCUSS] KIP-155 Add range scan for windowed state stores

2017-05-10 Thread Xavier Léauté
I'll be driving the implementation as well. Unless there are specific
concerns about the implementation, I mainly wrote the KIP to agree on the
additional public interfaces.

Current window store fetch operations already rely on range scanning
& filtering the relevant keys in the underlying state store, and
implementing key range fetch would follow a similar approach.

On Tue, May 9, 2017 at 9:21 PM Guozhang Wang  wrote:

> Thanks, the KIP sounds good to me. There is no implementation proposal in
> the JIRA / KIP yet, would you like to drive the implementation also?
>
>
> Guozhang
>
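
A toy illustration of the range-scan-then-filter approach mentioned above, using an
ordered map with composite string keys; it is not the Streams implementation, just
the shape of the technique.

{code}
import java.util.Map;
import java.util.TreeMap;

// Toy illustration, not the Streams implementation: window entries live in an
// ordered map under a composite "recordKey@windowStartTimestamp" key, so a
// key-range fetch scans between the two record keys and then filters out
// windows that fall outside the requested time range.
public class RangeScanAndFilterSketch {
    public static void main(String[] args) {
        TreeMap<String, Long> store = new TreeMap<>();
        store.put("a@1000", 3L);
        store.put("a@2000", 5L);
        store.put("b@1000", 1L);
        store.put("c@5000", 7L);

        String fromKey = "a";
        String toKey = "b";
        long timeFrom = 0L;
        long timeTo = 1500L;

        // Range scan the underlying ordered store between the two record keys...
        for (Map.Entry<String, Long> entry
                : store.subMap(fromKey, true, toKey + "@\uffff", true).entrySet()) {
            long windowStart = Long.parseLong(entry.getKey().split("@")[1]);
            // ...then filter on the requested window range.
            if (windowStart >= timeFrom && windowStart <= timeTo) {
                System.out.println(entry.getKey() + " -> " + entry.getValue());
            }
        }
    }
}
{code}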


Re: [VOTE] KIP-151: Expose Connector type in REST API (first attempt :)

2017-05-10 Thread Ismael Juma
Thanks for the KIP, +1 (binding).

Ismael

On Mon, May 8, 2017 at 11:39 PM, dan  wrote:

> i'd like to begin voting on
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 151+Expose+Connector+type+in+REST+API
>
> discussion should remain on
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201705.
> mbox/%3CCAFJy-U-pF7YxSRadx_zAQYCX2+SswmVPSBcA4tDMPP5834s6Kg@mail.
> gmail.com%3E
>
> This voting thread will stay active for a minimum of 72 hours.
>
> thanks
> dan
>


Re: [VOTE] KIP-151: Expose Connector type in REST API (first attempt :)

2017-05-10 Thread Sriram Subramanian
+1

On Wed, May 10, 2017 at 10:20 AM, Ewen Cheslack-Postava 
wrote:

> +1 binding
>
> Thanks,
> Ewen
>
> On Mon, May 8, 2017 at 4:54 PM, BigData dev 
> wrote:
>
> > +1 (non-binding)
> >
> > Thanks,
> > Bharat
> >
> >
> > On Mon, May 8, 2017 at 4:39 PM, Konstantine Karantasis <
> > konstant...@confluent.io> wrote:
> >
> > > +1 (non-binding)
> > >
> > > On Mon, May 8, 2017 at 3:39 PM, dan  wrote:
> > >
> > > > i'd like to begin voting on
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 151+Expose+Connector+type+in+REST+API
> > > >
> > > > discussion should remain on
> > > > http://mail-archives.apache.org/mod_mbox/kafka-dev/201705.
> > > > mbox/%3CCAFJy-U-pF7YxSRadx_zAQYCX2+SswmVPSBcA4tDMPP5834s6Kg@mail.
> > > > gmail.com%3E
> > > >
> > > > This voting thread will stay active for a minimum of 72 hours.
> > > >
> > > > thanks
> > > > dan
> > > >
> > >
> >
>


Re: [VOTE] KIP-146: Isolation of dependencies and classes in Kafka Connect (restarted voting thread)

2017-05-10 Thread Ewen Cheslack-Postava
+1 binding, and I'm flexible on the config name. Somehow I am guessing no
matter what terminology we use there somebody will find a way to be
confused.

-Ewen

On Wed, May 10, 2017 at 9:27 AM, Gwen Shapira  wrote:

> +1 and proposing 'plugin.path' as we use the term connector plugins when
> referring to the jars themselves.
>
> Gwen
>
> On Wed, May 10, 2017 at 8:31 AM, Ismael Juma  wrote:
>
> > Thanks for the KIP Konstantine, +1 (binding) from me. One comment;
> >
> > 1. One thing to think about: the config name `module.path` could be
> > confusing in the future as Jigsaw introduces the concept of a module
> > path[1] in Java 9. The module path co-exists with the classpath, but its
> > behaviour is quite different. To many people's surprise, Jigsaw doesn't
> > handle versioning and it disallows split packages (i.e. if the same
> package
> > appears in 2 different modules, it is an error). What we are proposing is
> > quite different and perhaps it may make sense to use a different name to
> > avoid confusion.
> >
> > Ismael
> >
> > [1] https://www.infoq.com/articles/Latest-Project-Jigsaw-Usage-Tutorial
> >
> > On Mon, May 8, 2017 at 7:48 PM, Konstantine Karantasis <
> > konstant...@confluent.io> wrote:
> >
> > > ** Restarting the voting thread here, with a different title to avoid
> > > collapsing this thread's messages with the discussion thread's messages
> > in
> > > mail clients. Apologies for the inconvenience. **
> > >
> > >
> > > Hi all,
> > >
> > > Given that the comments during the discussion seem to have been
> > addressed,
> > > I'm pleased to bring
> > >
> > > KIP-146: Classloading Isolation in Connect
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 146+-+Classloading+Isolation+in+Connect
> > >
> > > up for voting. Again, this KIP aims to bring the highly desired feature
> > of
> > > dependency isolation in Kafka Connect.
> > >
> > > In the meantime, for any additional feedback, please continue to send
> > your
> > > comments in the discussion thread here:
> > >
> > > https://www.mail-archive.com/dev@kafka.apache.org/msg71453.html
> > >
> > > This voting thread will stay active for a minimum of 72 hours.
> > >
> > > Sincerely,
> > > Konstantine
> > >
> >
>
>
>
> --
> *Gwen Shapira*
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter  | blog
> 
>


Re: [VOTE] KIP-154 Add Kafka Connect configuration properties for creating internal topics

2017-05-10 Thread Ewen Cheslack-Postava
+1 binding

-Ewen

On Wed, May 10, 2017 at 9:34 AM, Gwen Shapira  wrote:

> +1
> Thanks for doing this, Randall. It will be a huge usability improvement.
>
> On Mon, May 8, 2017 at 8:43 PM BigData dev 
> wrote:
>
> > +1 (non-binding)
> >
> > On Mon, May 8, 2017 at 3:25 PM, Dongjin Lee  wrote:
> >
> > > +1
> > >
> > > On Tue, May 9, 2017 at 7:24 AM, Sriram Subramanian 
> > > wrote:
> > >
> > > > +1
> > > >
> > > > On Mon, May 8, 2017 at 2:14 PM, Konstantine Karantasis <
> > > > konstant...@confluent.io> wrote:
> > > >
> > > > > +1 (non binding)
> > > > >
> > > > > On Mon, May 8, 2017 at 1:33 PM, Stephane Maarek <
> > > > > steph...@simplemachines.com.au> wrote:
> > > > >
> > > > > > +1 (non binding)
> > > > > >
> > > > > >
> > > > > >
> > > > > > On 9/5/17, 5:51 am, "Randall Hauch"  wrote:
> > > > > >
> > > > > > Hi, everyone.
> > > > > >
> > > > > > Given the simple and non-controversial nature of the KIP, I
> > would
> > > > > like
> > > > > > to
> > > > > > start the voting process for KIP-154: Add Kafka Connect
> > > > configuration
> > > > > > properties for creating internal topics:
> > > > > >
> > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > > 154+Add+Kafka+Connect+configuration+properties+for+
> > > > > > creating+internal+topics
> > > > > >
> > > > > > The vote will run for a minimum of 72 hours.
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Randall
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > *Dongjin Lee*
> > >
> > >
> > >
> > > *A hitchhiker in the mathematical
> > > world.facebook: www.facebook.com/dongjin.lee.kr
> > > linkedin:
> > > kr.linkedin.com/in/dongjinleekr
> > > github:
> > > github.com/dongjinleekr
> > > twitter: www.twitter.com/dongjinleekr
> > > *
> > >
> >
>


Re: [VOTE] KIP-151: Expose Connector type in REST API (first attempt :)

2017-05-10 Thread Ewen Cheslack-Postava
+1 binding

Thanks,
Ewen

On Mon, May 8, 2017 at 4:54 PM, BigData dev  wrote:

> +1 (non-binding)
>
> Thanks,
> Bharat
>
>
> On Mon, May 8, 2017 at 4:39 PM, Konstantine Karantasis <
> konstant...@confluent.io> wrote:
>
> > +1 (non-binding)
> >
> > On Mon, May 8, 2017 at 3:39 PM, dan  wrote:
> >
> > > i'd like to begin voting on
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 151+Expose+Connector+type+in+REST+API
> > >
> > > discussion should remain on
> > > http://mail-archives.apache.org/mod_mbox/kafka-dev/201705.
> > > mbox/%3CCAFJy-U-pF7YxSRadx_zAQYCX2+SswmVPSBcA4tDMPP5834s6Kg@mail.
> > > gmail.com%3E
> > >
> > > This voting thread will stay active for a minimum of 72 hours.
> > >
> > > thanks
> > > dan
> > >
> >
>


Re: [VOTE] KIP-156 Add option "dry run" to Streams application reset tool

2017-05-10 Thread BigData dev
Thanks everyone for voting.

KIP-156 has passed with +4 binding (Neha, Jay Kreps, Sriram Subramanian and
Gwen Shapira) and +3 non-binding (Eno Thereska, Matthias J. Sax and Bill
Bejeck)

Thanks,

Bharat Viswanadham

On Wed, May 10, 2017 at 9:46 AM, Sriram Subramanian 
wrote:

> +1
>
> On Wed, May 10, 2017 at 9:45 AM, Neha Narkhede  wrote:
>
> > +1
> >
> > On Wed, May 10, 2017 at 12:32 PM Gwen Shapira  wrote:
> >
> > > +1. Also not sure that adding a parameter to a CLI requires a KIP. It
> > seems
> > > excessive.
> > >
> > >
> > > On Tue, May 9, 2017 at 7:57 PM Jay Kreps  wrote:
> > >
> > > > +1
> > > > On Tue, May 9, 2017 at 3:41 PM BigData dev 
> > > > wrote:
> > > >
> > > > > Hi, Everyone,
> > > > >
> > > > > Since this is a relatively simple change, I would like to start the
> > > > voting
> > > > > process for KIP-156: Add option "dry run" to Streams application
> > reset
> > > > tool
> > > > >
> > > > >
> > > >
> > > https://cwiki.apache.org/confluence/pages/viewpage.
> > action?pageId=69410150
> > > > >
> > > > >
> > > > > The vote will run for a minimum of 72 hours.
> > > > >
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Bharat
> > > > >
> > > >
> > >
> > --
> > Thanks,
> > Neha
> >
>


Re: [VOTE] KIP-156 Add option "dry run" to Streams application reset tool

2017-05-10 Thread Sriram Subramanian
+1

On Wed, May 10, 2017 at 9:45 AM, Neha Narkhede  wrote:

> +1
>
> On Wed, May 10, 2017 at 12:32 PM Gwen Shapira  wrote:
>
> > +1. Also not sure that adding a parameter to a CLI requires a KIP. It
> seems
> > excessive.
> >
> >
> > On Tue, May 9, 2017 at 7:57 PM Jay Kreps  wrote:
> >
> > > +1
> > > On Tue, May 9, 2017 at 3:41 PM BigData dev 
> > > wrote:
> > >
> > > > Hi, Everyone,
> > > >
> > > > Since this is a relatively simple change, I would like to start the
> > > voting
> > > > process for KIP-156: Add option "dry run" to Streams application
> reset
> > > tool
> > > >
> > > >
> > >
> > https://cwiki.apache.org/confluence/pages/viewpage.
> action?pageId=69410150
> > > >
> > > >
> > > > The vote will run for a minimum of 72 hours.
> > > >
> > > >
> > > > Thanks,
> > > >
> > > > Bharat
> > > >
> > >
> >
> --
> Thanks,
> Neha
>


Re: [VOTE] KIP-156 Add option "dry run" to Streams application reset tool

2017-05-10 Thread Neha Narkhede
+1

On Wed, May 10, 2017 at 12:32 PM Gwen Shapira  wrote:

> +1. Also not sure that adding a parameter to a CLI requires a KIP. It seems
> excessive.
>
>
> On Tue, May 9, 2017 at 7:57 PM Jay Kreps  wrote:
>
> > +1
> > On Tue, May 9, 2017 at 3:41 PM BigData dev 
> > wrote:
> >
> > > Hi, Everyone,
> > >
> > > Since this is a relatively simple change, I would like to start the
> > voting
> > > process for KIP-156: Add option "dry run" to Streams application reset
> > tool
> > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=69410150
> > >
> > >
> > > The vote will run for a minimum of 72 hours.
> > >
> > >
> > > Thanks,
> > >
> > > Bharat
> > >
> >
>
-- 
Thanks,
Neha


Re: [DISCUSS] KIP 151 - Expose Connector type in REST API

2017-05-10 Thread Gwen Shapira
How about also adding the path from which we loaded the connector (since
users run into issues where they are actually running a different connector
than they think they are running)?

If you think this is too complex or out of scope, no big deal. But it is a
"nice to have" feature.

On Sun, May 7, 2017 at 4:58 PM Konstantine Karantasis <
konstant...@confluent.io> wrote:

> Nice. Thanks!
>
> -Konstantine
>
> On Sat, May 6, 2017 at 10:43 PM, dan  wrote:
>
> > thanks for the feedback, it all sounds good. i have made the changes to
> the
> > pr and the kip.
> >
> > dan
> >
> > On Fri, May 5, 2017 at 9:29 AM, Konstantine Karantasis <
> > konstant...@confluent.io> wrote:
> >
> > > Thank you for the KIP. It's a nice improvement.
> > >
> > > Two small suggestions:
> > >
> > > 1) Let's not use all caps to describe the type of the connector.
> "Source"
> > > and "Sink" seem more appropriate (but even all lower case would be
> > better).
> > > 2) It's been discussed in other contexts recently, but I believe
> finally
> > > exposing a connector's version here makes more sense than anywhere else
> > at
> > > the moment. There's an existing interface method to grab the version,
> and
> > > publishing it through REST is not affected by any conventions made with
> > > respect to versioning format (also sorting based on name and version I
> > > guess is a concern that can be postponed to when we support multiple
> > > versions of the same connector and this doesn't have to be addressed
> on a
> > > KIP anyways).
> > >
> > > Let me know what you think. I'll add comments to the PR as well.
> > > Thanks again.
> > >
> > > -Konstantine
> > >
> > > On Thu, May 4, 2017 at 4:20 PM, Gwen Shapira 
> wrote:
> > >
> > > > YES PLEASE!
> > > >
> > > > On Tue, May 2, 2017 at 1:48 PM, dan  wrote:
> > > >
> > > > > hello.
> > > > >
> > > > > in an attempt to make the connect rest endpoints more useful i'd
> like
> > > to
> > > > > add the Connector type (Sink/Source) in our rest endpoints to make
> > them
> > > > > more self descriptive.
> > > > >
> > > > > KIP here:
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > 151+Expose+Connector+type+in+REST+API
> > > > > initial pr: https://github.com/apache/kafka/pull/2960
> > > > >
> > > > > thanks
> > > > > dan
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > *Gwen Shapira*
> > > > Product Manager | Confluent
> > > > 650.450.2760 | @gwenshap
> > > > Follow us: Twitter  | blog
> > > > 
> > > >
> > >
> >
>
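
To make the suggestions in this thread concrete, here is a hedged sketch (it is not the KIP-151 patch itself, and the enum and helper names are assumptions) of how a Connect worker could derive a connector's type for the REST response by checking which base class the plugin extends, and how the existing Connector#version() method could back a "version" field as discussed above.

    import org.apache.kafka.connect.connector.Connector;
    import org.apache.kafka.connect.sink.SinkConnector;
    import org.apache.kafka.connect.source.SourceConnector;

    public class ConnectorTypeSketch {

        // Lower-case values, per the capitalization suggestion in the thread.
        enum ConnectorType { source, sink, unknown }

        // Derive the type from the base class the connector extends.
        static ConnectorType typeOf(Connector connector) {
            if (connector instanceof SourceConnector) {
                return ConnectorType.source;
            } else if (connector instanceof SinkConnector) {
                return ConnectorType.sink;
            }
            return ConnectorType.unknown;
        }

        // The existing version() method could back a "version" field in the
        // same REST payload.
        static String versionOf(Connector connector) {
            return connector.version();
        }
    }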


Re: [VOTE] KIP-154 Add Kafka Connect configuration properties for creating internal topics

2017-05-10 Thread Gwen Shapira
+1
Thanks for doing this, Randall. It will be a huge usability improvement.

On Mon, May 8, 2017 at 8:43 PM BigData dev  wrote:

> +1 (non-binding)
>
> On Mon, May 8, 2017 at 3:25 PM, Dongjin Lee  wrote:
>
> > +1
> >
> > On Tue, May 9, 2017 at 7:24 AM, Sriram Subramanian 
> > wrote:
> >
> > > +1
> > >
> > > On Mon, May 8, 2017 at 2:14 PM, Konstantine Karantasis <
> > > konstant...@confluent.io> wrote:
> > >
> > > > +1 (non binding)
> > > >
> > > > On Mon, May 8, 2017 at 1:33 PM, Stephane Maarek <
> > > > steph...@simplemachines.com.au> wrote:
> > > >
> > > > > +1 (non binding)
> > > > >
> > > > >
> > > > >
> > > > > On 9/5/17, 5:51 am, "Randall Hauch"  wrote:
> > > > >
> > > > > Hi, everyone.
> > > > >
> > > > > Given the simple and non-controversial nature of the KIP, I
> would
> > > > like
> > > > > to
> > > > > start the voting process for KIP-154: Add Kafka Connect
> > > configuration
> > > > > properties for creating internal topics:
> > > > >
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > 154+Add+Kafka+Connect+configuration+properties+for+
> > > > > creating+internal+topics
> > > > >
> > > > > The vote will run for a minimum of 72 hours.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Randall
> > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > >
> >
> >
> >
> > --
> > *Dongjin Lee*
> >
> >
> >
> > *A hitchhiker in the mathematical world.*
> > facebook: www.facebook.com/dongjin.lee.kr
> > linkedin: kr.linkedin.com/in/dongjinleekr
> > github: github.com/dongjinleekr
> > twitter: www.twitter.com/dongjinleekr
> >
>
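
For readers who have not opened the KIP page linked above, the sketch below illustrates the kind of worker settings KIP-154 is about: letting a distributed Connect worker create its internal topics with explicit partition counts and replication factors instead of relying on broker defaults. The property names follow the KIP proposal as I read it and the values are examples only; the KIP page remains the authoritative reference.

    import java.util.HashMap;
    import java.util.Map;

    public class InternalTopicConfigSketch {
        public static Map<String, String> distributedWorkerProps() {
            Map<String, String> props = new HashMap<>();
            // Names of the internal topics (existing settings).
            props.put("config.storage.topic", "connect-configs");
            props.put("offset.storage.topic", "connect-offsets");
            props.put("status.storage.topic", "connect-status");
            // Settings proposed by KIP-154 for how those topics are created.
            props.put("config.storage.replication.factor", "3");
            props.put("offset.storage.replication.factor", "3");
            props.put("offset.storage.partitions", "25");
            props.put("status.storage.replication.factor", "3");
            props.put("status.storage.partitions", "5");
            return props;
        }
    }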


Re: [VOTE] KIP-156 Add option "dry run" to Streams application reset tool

2017-05-10 Thread Gwen Shapira
+1. Also not sure that adding a parameter to a CLI requires a KIP. It seems
excessive.


On Tue, May 9, 2017 at 7:57 PM Jay Kreps  wrote:

> +1
> On Tue, May 9, 2017 at 3:41 PM BigData dev 
> wrote:
>
> > Hi, Everyone,
> >
> > Since this is a relatively simple change, I would like to start the
> voting
> > process for KIP-156: Add option "dry run" to Streams application reset
> tool
> >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=69410150
> >
> >
> > The vote will run for a minimum of 72 hours.
> >
> >
> > Thanks,
> >
> > Bharat
> >
>


Re: [VOTE] KIP-146: Isolation of dependencies and classes in Kafka Connect (restarted voting thread)

2017-05-10 Thread Gwen Shapira
+1, and I propose 'plugin.path', since we use the term 'connector plugins' when
referring to the jars themselves.

Gwen

On Wed, May 10, 2017 at 8:31 AM, Ismael Juma  wrote:

> Thanks for the KIP, Konstantine, +1 (binding) from me. One comment:
>
> 1. One thing to think about: the config name `module.path` could be
> confusing in the future as Jigsaw introduces the concept of a module
> path[1] in Java 9. The module path co-exists with the classpath, but its
> behaviour is quite different. To many people's surprise, Jigsaw doesn't
> handle versioning and it disallows split packages (i.e. if the same package
> appears in 2 different modules, it is an error). What we are proposing is
> quite different, and it may make sense to use a different name to
> avoid confusion.
>
> Ismael
>
> [1] https://www.infoq.com/articles/Latest-Project-Jigsaw-Usage-Tutorial
>
> On Mon, May 8, 2017 at 7:48 PM, Konstantine Karantasis <
> konstant...@confluent.io> wrote:
>
> > ** Restarting the voting thread here, with a different title to avoid
> > collapsing this thread's messages with the discussion thread's messages
> in
> > mail clients. Apologies for the inconvenience. **
> >
> >
> > Hi all,
> >
> > Given that the comments during the discussion seem to have been
> addressed,
> > I'm pleased to bring
> >
> > KIP-146: Classloading Isolation in Connect
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 146+-+Classloading+Isolation+in+Connect
> >
> > up for voting. Again, this KIP aims to bring the highly desired feature
> of
> > dependency isolation in Kafka Connect.
> >
> > In the meantime, for any additional feedback, please continue to send
> your
> > comments in the discussion thread here:
> >
> > https://www.mail-archive.com/dev@kafka.apache.org/msg71453.html
> >
> > This voting thread will stay active for a minimum of 72 hours.
> >
> > Sincerely,
> > Konstantine
> >
>



-- 
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter  | blog
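
Classloading isolation can be abstract, so here is a minimal, hedged sketch of the kind of per-plugin, child-first classloader KIP-146 describes (it is not Connect's actual implementation): classes are resolved from the plugin's own jars before falling back to the worker's classpath, so a connector's dependencies cannot clash with the framework's or another plugin's. A real implementation would also exclude framework and JDK packages from child-first resolution.

    import java.net.URL;
    import java.net.URLClassLoader;

    public class PluginClassLoaderSketch extends URLClassLoader {

        public PluginClassLoaderSketch(URL[] pluginJars, ClassLoader parent) {
            super(pluginJars, parent);
        }

        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            synchronized (getClassLoadingLock(name)) {
                Class<?> loaded = findLoadedClass(name);
                if (loaded == null) {
                    try {
                        // Child-first: look in the plugin's own jars before delegating.
                        loaded = findClass(name);
                    } catch (ClassNotFoundException e) {
                        // Not a plugin class; fall back to the parent (worker) classloader.
                        loaded = super.loadClass(name, false);
                    }
                }
                if (resolve) {
                    resolveClass(loaded);
                }
                return loaded;
            }
        }
    }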


