Jenkins build is back to normal : kafka-trunk-jdk7 #1592

2016-09-30 Thread Apache Jenkins Server
See 



[GitHub] kafka pull request #1936: KAFKA-3824: Clarify autocommit delivery semantics ...

2016-09-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1936




[jira] [Created] (KAFKA-4244) Update our website look & feel

2016-09-30 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-4244:
---

 Summary: Update our website look & feel
 Key: KAFKA-4244
 URL: https://issues.apache.org/jira/browse/KAFKA-4244
 Project: Kafka
  Issue Type: Improvement
Reporter: Gwen Shapira


Our website deserves a facelift.

This will be a multi-part change:
1. Changes to the web pages in our normal GitHub repo to use the new headers, fix some
missing tags, etc.
2. Changes to the auto-generation code so protocol.html comes out correct too
3. Deploy changes to the website + update the header/footer/CSS in the website to
actually cause the facelift.

Please do not deploy changes to the website from our GitHub after #1 is done 
but before #3 is complete. Hopefully, I'll be all done by Monday.







[jira] [Commented] (KAFKA-4238) consumer-subscription not working, when accessing a newly created topic immediately after its creation with the AdminUtils

2016-09-30 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15537069#comment-15537069
 ] 

Jason Gustafson commented on KAFKA-4238:


[~apa...@flwi.de] The consumer periodically refetches topic metadata internally 
according to the configuration {{metadata.max.age.ms}}. If a topic is just 
created, you may have to wait up to this amount of time for the consumer to 
detect it. If you want to detect the topic sooner, adjust this setting lower.
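
For illustration, a minimal sketch (not from this thread) of lowering {{metadata.max.age.ms}} on the consumer so that a freshly created topic is picked up sooner; the broker address, group id and topic name are placeholders:

{code:java}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;

public class QuickMetadataRefreshExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("group.id", "example-group");              // placeholder group id
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        // Default is 5 minutes; lowering it makes the consumer refetch topic
        // metadata more often, so a just-created topic is detected sooner.
        props.put("metadata.max.age.ms", "1000");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("freshly-created-topic")); // placeholder topic
        consumer.poll(1000);   // early polls may return nothing until metadata is refreshed
        consumer.close();
    }
}
{code}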

> consumer-subscription not working, when accessing a newly created topic 
> immediately after its creation with the AdminUtils
> --
>
> Key: KAFKA-4238
> URL: https://issues.apache.org/jira/browse/KAFKA-4238
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0, 0.10.0.1
>Reporter: Florian Witteler
>
> I created a test-project to reproduce the bug.
> https://github.com/FloWi/kafka-topic-creation-bug
> We use a docker container that creates a fresh topic before a testsuite gets 
> executed (see {{trait FreshKafkaTopics}}). That trait uses the AdminUtils to 
> create the topic. 
> If we access the newly created topic directly after its creation, the 
> subscriber is broken. It sometimes works though (<5%), so it seems to be a 
> race-condition.
> If I put a {{Thread.sleep(1000)}} after the topic-creation, everything's fine 
> though.
> So, the problem is twofold:
> - {{AdminUtils.createTopic}} should block until the topic-creation is 
> completed
> - {{new KafkaConsumer[String, 
> String](props).subscribe(util.Arrays.asList(topic))}} should throw an 
> exception, when the topic is "not ready"
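
As a hedged illustration of a workaround for the race (rather than a fixed {{Thread.sleep(1000)}}), one can poll the consumer's metadata until the topic is visible before subscribing. This is only a sketch, not the fix proposed above; the helper name and timeout are made up:

{code:java}
import java.util.List;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;

public final class TopicReadiness {
    /**
     * Blocks until partitionsFor() reports partition metadata for the topic
     * or the timeout expires. Returns true if the topic became visible in time.
     */
    public static boolean waitForTopic(KafkaConsumer<?, ?> consumer,
                                       String topic,
                                       long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            List<PartitionInfo> partitions = consumer.partitionsFor(topic);
            if (partitions != null && !partitions.isEmpty()) {
                return true;
            }
            Thread.sleep(100);   // back off briefly before re-checking
        }
        return false;
    }
}
{code}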





[jira] [Commented] (KAFKA-3930) IPv6 address can't be used as ObjectName

2016-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15536848#comment-15536848
 ] 

ASF GitHub Bot commented on KAFKA-3930:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1848


> IPv6 address can't be used as ObjectName
> -
>
> Key: KAFKA-3930
> URL: https://issues.apache.org/jira/browse/KAFKA-3930
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.0
>Reporter: wateray
>Assignee: Rajini Sivaram
>Priority: Minor
> Fix For: 0.10.1.0
>
>
> When using IPv6 to start the broker, server.log outputs this error:
> ===
> [2016-05-25 15:45:56,120] WARN Error processing 
> kafka.server:type=FetcherStats,name=RequestsPerSec,clientId=console-consumer-25184,brokerHost=fe80::92e2:baff:fe92:62f,brokerPort=3392
>  (com.yammer.metrics.reporting.JmxReporter)
> javax.management.MalformedObjectNameException: Invalid character ':' in value 
> part of property
> at javax.management.ObjectName.construct(ObjectName.java:618)
> at javax.management.ObjectName.<init>(ObjectName.java:1382)
> at 
> com.yammer.metrics.reporting.JmxReporter.onMetricAdded(JmxReporter.java:395)
> at 
> com.yammer.metrics.core.MetricsRegistry.notifyMetricAdded(MetricsRegistry.java:516)
> at com.yammer.metrics.core.MetricsRegistry.getOrAdd(MetricsRegistry.java:491)
> at com.yammer.metrics.core.MetricsRegistry.newMeter(MetricsRegistry.java:240)
> at kafka.metrics.KafkaMetricsGroup$class.newMeter(KafkaMetricsGroup.scala:80)
> at kafka.server.FetcherStats.newMeter(AbstractFetcherThread.scala:264)
> at kafka.server.FetcherStats.<init>(AbstractFetcherThread.scala:269)
> at kafka.server.AbstractFetcherThread.<init>(AbstractFetcherThread.scala:55)
> at kafka.consumer.ConsumerFetcherThread.<init>(ConsumerFetcherThread.scala:38)
> ..
> ==
> In AbstractFetcherThread.scala, line 264:
> class FetcherStats(metricId: ClientIdAndBroker) extends KafkaMetricsGroup {
>   val tags = Map("clientId" -> metricId.clientId,
> "brokerHost" -> metricId.brokerHost,
> "brokerPort" -> metricId.brokerPort.toString)
> When brokerHost is an IPv6 address, it contains ':', which can't be used in an ObjectName value.
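
For illustration only (this is not necessarily what the closed pull request does): the colon problem can be reproduced and avoided with {{javax.management.ObjectName.quote}}, which makes ':' legal inside a quoted property value. A minimal sketch:

{code:java}
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class Ipv6ObjectNameDemo {
    public static void main(String[] args) throws MalformedObjectNameException {
        String brokerHost = "fe80::92e2:baff:fe92:62f";   // IPv6 address containing ':'

        // Fails: ':' is not allowed in an unquoted property value.
        try {
            new ObjectName("kafka.server:type=FetcherStats,brokerHost=" + brokerHost);
        } catch (MalformedObjectNameException e) {
            System.out.println("Unquoted value rejected: " + e.getMessage());
        }

        // Works: ObjectName.quote() wraps the value in quotes and escapes it,
        // so characters such as ':' become legal in the value part.
        ObjectName ok = new ObjectName(
            "kafka.server:type=FetcherStats,brokerHost=" + ObjectName.quote(brokerHost));
        System.out.println("Quoted form: " + ok);
    }
}
{code}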





[jira] [Updated] (KAFKA-4039) Exit Strategy: using exceptions instead of inline invocation of exit/halt

2016-09-30 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-4039:
---
Fix Version/s: (was: 0.10.1.0)
   0.10.1.1

> Exit Strategy: using exceptions instead of inline invocation of exit/halt
> -
>
> Key: KAFKA-4039
> URL: https://issues.apache.org/jira/browse/KAFKA-4039
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
>Reporter: Maysam Yabandeh
>Priority: Critical
> Fix For: 0.10.1.1
>
> Attachments: deadlock-stack2
>
>
> The current practice is to directly invoke halt/exit right after the line
> that intends to terminate the execution. In the case of System.exit this
> could cause deadlocks if the thread invoking System.exit is holding a lock
> that will be requested by the shutdown hook threads started by
> System.exit. An example is reported by [~aozeritsky] in KAFKA-3924. This
> also makes testing more difficult, as it would require mocking static
> methods of the System and Runtime classes, which is not natively supported in
> Java.
> One alternative suggested 
> [here|https://issues.apache.org/jira/browse/KAFKA-3924?focusedCommentId=15420269&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15420269]
>  would be to throw some dedicated exceptions that will eventually invoke 
> exit/halt:
> {quote} it would be great to move away from executing `System.exit` inline in 
> favour of throwing an exception (two examples, but maybe we can find better 
> names: FatalExitException and FatalHaltException) that is caught by some 
> central code that then does the `System.exit` or `Runtime.getRuntime.halt`. 
> This helps in a couple of ways:
> (1) Avoids issues with locks being held as in this issue
> (2) It makes it possible to abstract the action, which is very useful in 
> tests. At the moment, we can't easily test for these conditions as they cause 
> the whole test harness to exit. Worse, these conditions are sometimes 
> triggered in the tests and it's unclear why.
> (3) We can have more consistent logging around these actions and possibly 
> extended logging for tests
> {quote}
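
A hedged sketch of the pattern described in the quote above; the class and method names here are invented for illustration and need not match what is eventually committed:

{code:java}
// Hypothetical names, illustrating the "throw, then exit centrally" idea.
class FatalExitException extends RuntimeException {
    final int statusCode;

    FatalExitException(int statusCode, String message) {
        super(message);
        this.statusCode = statusCode;
    }
}

final class Exit {
    // Swappable hook so tests can observe "exits" without killing the JVM.
    interface Procedure {
        void exit(int statusCode);
    }

    private static volatile Procedure procedure = System::exit;

    static void setExitProcedureForTest(Procedure p) {
        procedure = p;
    }

    // Central place that actually terminates. Callers throw FatalExitException
    // instead of calling System.exit inline, which avoids deadlocking while
    // holding locks that the shutdown hooks may need.
    static void handle(FatalExitException e) {
        System.err.println("Fatal error, exiting: " + e.getMessage());
        procedure.exit(e.statusCode);
    }
}
{code}

In tests the exit procedure can be swapped for one that records the status code, so the condition can be asserted on without terminating the test harness.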





[jira] [Commented] (KAFKA-4233) StateDirectory fails to create directory if any parent directory does not exist

2016-09-30 Thread Ryan Worsley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15536773#comment-15536773
 ] 

Ryan Worsley commented on KAFKA-4233:
-

Thanks :)

> StateDirectory fails to create directory if any parent directory does not 
> exist
> ---
>
> Key: KAFKA-4233
> URL: https://issues.apache.org/jira/browse/KAFKA-4233
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Ryan Worsley
>Assignee: Damian Guy
> Fix For: 0.10.1.0, 0.10.2.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The method {{directoryForTask}} attempts to create a task directory but will 
> silently fail to do so as it calls {{taskDir.mkdir();}} which will only 
> create the leaf directory. 
> Calling {{taskDir.mkdirs();}} (note the 's') will create the entire path if 
> any parent directory is missing.
> The constructor also attempts to create a bunch of directories using the 
> former method and should be reviewed as part of any fix.
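
A small illustration of the difference (the path is made up): {{File.mkdir()}} creates only the leaf directory and returns false when a parent is missing, while {{File.mkdirs()}} also creates any missing parents.

{code:java}
import java.io.File;

public class MkdirVsMkdirs {
    public static void main(String[] args) {
        File taskDir = new File("/tmp/kafka-streams-example/app-id/0_1"); // made-up path

        // mkdir() only creates the last path element; if the parent
        // directories do not exist yet, it silently returns false and
        // nothing is created.
        System.out.println("mkdir:  " + taskDir.mkdir());

        // mkdirs() creates every missing parent directory as well.
        System.out.println("mkdirs: " + taskDir.mkdirs());
    }
}
{code}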





[jira] [Updated] (KAFKA-3234) Minor documentation edits: clarify minISR; some topic-level configs are missing

2016-09-30 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-3234:
---
Fix Version/s: (was: 0.10.1.0)
   0.10.2.0

> Minor documentation edits: clarify minISR; some topic-level configs are 
> missing
> ---
>
> Key: KAFKA-3234
> URL: https://issues.apache.org/jira/browse/KAFKA-3234
> Project: Kafka
>  Issue Type: Improvement
>  Components: website
>Reporter: Joel Koshy
>Assignee: Joel Koshy
> Fix For: 0.10.2.0
>
>
> Based on an offline conversation with [~junrao] and [~gwenshap]
> The current documentation is somewhat confusing on minISR in that it says 
> that it offers a trade-off between consistency and availability. From the 
> user's view-point, consistency (at least in the usual sense of the term) is 
> achieved by disabling unclean leader election - since no replica that was out 
> of ISR can be elected as the leader. So a consumer will never see a message 
> that was not acknowledged to a producer that set acks to "all". Or to put it 
> another way, setting minISR alone will not prevent exposing uncommitted 
> messages - disabling unclean leader election is the stronger requirement. You 
> can achieve the same effect though by setting minISR equal to the number of 
> replicas.
> There is also some stale documentation that needs to be removed:
> {quote}
> In our current release we choose the second strategy and favor choosing a 
> potentially inconsistent replica when all replicas in the ISR are dead. In 
> the future, we would like to make this configurable to better support use 
> cases where downtime is preferable to inconsistency.
> {quote}
> Finally, it was reported on the mailing list (from Elias Levy) that 
> compression.type should be added under the topic configs. Same goes for 
> unclean leader election. Would be good to have these auto-generated.
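
For concreteness, a hedged sketch of the settings the above discussion is about (the config names are real, the values and addresses are only examples): a producer with acks=all, combined with a topic whose min.insync.replicas equals its replica count and which has unclean leader election disabled.

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;

public class DurabilityConfigExample {
    public static void main(String[] args) {
        // Producer side: require acknowledgement from all in-sync replicas.
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");   // placeholder
        producerProps.put("acks", "all");
        producerProps.put("key.serializer",
                          "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer",
                          "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps);

        // Topic/broker side (set as topic-level configs or in server.properties),
        // assuming a replication factor of 3:
        //   min.insync.replicas=3                 // equal to the number of replicas
        //   unclean.leader.election.enable=false  // the stronger consistency requirement
        producer.close();
    }
}
{code}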





[GitHub] kafka pull request #1942: KAFKA-4233: StateDirectory fails to create directo...

2016-09-30 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1942




[jira] [Commented] (KAFKA-873) Consider replacing zkclient with curator (with zkclient-bridge)

2016-09-30 Thread Jordan Zimmerman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15536316#comment-15536316
 ] 

Jordan Zimmerman commented on KAFKA-873:


FYI - Curator would be happy to shade Guava. We've been asked in the past.

> Consider replacing zkclient with curator (with zkclient-bridge)
> ---
>
> Key: KAFKA-873
> URL: https://issues.apache.org/jira/browse/KAFKA-873
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.0
>Reporter: Scott Clasen
>Assignee: Grant Henke
>
> If zkclient was replaced with curator and curator-x-zkclient-bridge, it would
> initially be a drop-in replacement.
> https://github.com/Netflix/curator/wiki/ZKClient-Bridge
> With the addition of a few more props to ZkConfig and a bit of code, this
> would open up the possibility of using ACLs in ZooKeeper (which aren't
> supported directly by zkclient), as well as integrating with Netflix
> Exhibitor for those of us using that.
> Looks like KafkaZookeeperClient needs some love anyhow...





[jira] [Commented] (KAFKA-3900) High CPU util on broker

2016-09-30 Thread Maurice Wolter (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15535902#comment-15535902
 ] 

Maurice Wolter commented on KAFKA-3900:
---

I experience the same issue with Kafka 0.9.0: CPU is around 90% and
replication cannot catch up due to disconnection errors.

> High CPU util on broker
> ---
>
> Key: KAFKA-3900
> URL: https://issues.apache.org/jira/browse/KAFKA-3900
> Project: Kafka
>  Issue Type: Bug
>  Components: network, replication
>Affects Versions: 0.10.0.0
> Environment: kafka = 2.11-0.10.0.0
> java version "1.8.0_91"
> amazon linux
>Reporter: Andrey Konyaev
>
> I run a Kafka cluster in Amazon on m4.xlarge instances (4 CPUs and 16 GB of
> memory, 14 GB allocated to the Kafka heap). It has three nodes.
> The load is not high (6000 messages/sec) and cpu_idle = 70%, but
> sometimes (about once a day) I see this message in server.log:
> [2016-06-24 14:52:22,299] WARN [ReplicaFetcherThread-0-2], Error in fetch 
> kafka.server.ReplicaFetcherThread$FetchRequest@6eaa1034 
> (kafka.server.ReplicaFetcherThread)
> java.io.IOException: Connection to 2 was disconnected before the response was 
> read
> at 
> kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:87)
> at 
> kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:84)
> at scala.Option.foreach(Option.scala:257)
> at 
> kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:84)
> at 
> kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:80)
> at 
> kafka.utils.NetworkClientBlockingOps$.recursivePoll$2(NetworkClientBlockingOps.scala:137)
> at 
> kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollContinuously$extension(NetworkClientBlockingOps.scala:143)
> at 
> kafka.utils.NetworkClientBlockingOps$.blockingSendAndReceive$extension(NetworkClientBlockingOps.scala:80)
> at 
> kafka.server.ReplicaFetcherThread.sendRequest(ReplicaFetcherThread.scala:244)
> at 
> kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:229)
> at 
> kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
> at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:107)
> at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:98)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
> I know this can be a network glitch, but why does Kafka eat all the CPU time?
> My config:
> inter.broker.protocol.version=0.10.0.0
> log.message.format.version=0.10.0.0
> default.replication.factor=3
> num.partitions=3
> replica.lag.time.max.ms=15000
> broker.id=0
> listeners=PLAINTEXT://:9092
> log.dirs=/mnt/kafka/kafka
> log.retention.check.interval.ms=30
> log.retention.hours=168
> log.segment.bytes=1073741824
> num.io.threads=20
> num.network.threads=10
> num.partitions=1
> num.recovery.threads.per.data.dir=2
> socket.receive.buffer.bytes=102400
> socket.request.max.bytes=104857600
> socket.send.buffer.bytes=102400
> zookeeper.connection.timeout.ms=6000
> delete.topic.enable = true
> broker.max_heap_size=10 GiB 
>   
> Any ideas?


