This list is getting slightly long. It would be good to clean it up as
time permits--either apply the patches or review them and bounce them
back for more work as appropriate.

-Jay

On Mon, May 14, 2012 at 12:01 PM, <j...@apache.org> wrote:
> Issue Subscription
> Filter: outstanding kafka patches (28 issues)
> The list of outstanding kafka patches
> Subscriber: kafka-mailing-list
>
> Key         Summary
> KAFKA-323   Add the ability to use the async producer in the Log4j appender
>            https://issues.apache.org/jira/browse/KAFKA-323
> KAFKA-319   compression support added to php client does not pass unit tests
>            https://issues.apache.org/jira/browse/KAFKA-319
> KAFKA-318   update zookeeper dependency to 3.3.5
>            https://issues.apache.org/jira/browse/KAFKA-318
> KAFKA-314   Go Client Multi-produce
>            https://issues.apache.org/jira/browse/KAFKA-314
> KAFKA-313   Add JSON output and looping options to ConsumerOffsetChecker
>            https://issues.apache.org/jira/browse/KAFKA-313
> KAFKA-312   Add 'reset' operation for AsyncProducerDroppedEvents
>            https://issues.apache.org/jira/browse/KAFKA-312
> KAFKA-298   Go Client support max message size
>            https://issues.apache.org/jira/browse/KAFKA-298
> KAFKA-297   Go Client Publisher Improvements
>            https://issues.apache.org/jira/browse/KAFKA-297
> KAFKA-296   Update Go Client to new version of Go
>            https://issues.apache.org/jira/browse/KAFKA-296
> KAFKA-291   Add builder to create configs for consumer and broker
>            https://issues.apache.org/jira/browse/KAFKA-291
> KAFKA-274   Handle corrupted messages cleanly
>            https://issues.apache.org/jira/browse/KAFKA-274
> KAFKA-273   Occasional GZIP errors on the server while writing compressed data to disk
>            https://issues.apache.org/jira/browse/KAFKA-273
> KAFKA-267   Enhance ProducerPerformance to generate unique random Long value for payload
>            https://issues.apache.org/jira/browse/KAFKA-267
> KAFKA-260   Add audit trail to kafka
>            https://issues.apache.org/jira/browse/KAFKA-260
> KAFKA-253   Refactor the async producer to have only one queue instead of one queue per broker in a Kafka cluster
>            https://issues.apache.org/jira/browse/KAFKA-253
> KAFKA-251   The ConsumerStats MBean's PartOwnerStats attribute is a string
>            https://issues.apache.org/jira/browse/KAFKA-251
> KAFKA-246   log configuration values used
>            https://issues.apache.org/jira/browse/KAFKA-246
> KAFKA-242   Subsequent calls of ConsumerConnector.createMessageStreams cause Consumer offset to be incorrect
>            https://issues.apache.org/jira/browse/KAFKA-242
> KAFKA-196   Topic creation fails on large values
>            https://issues.apache.org/jira/browse/KAFKA-196
> KAFKA-191   Investigate removing the synchronization in Log.flush
>            https://issues.apache.org/jira/browse/KAFKA-191
> KAFKA-175   Add helper scripts to wrap the current perf tools
>            https://issues.apache.org/jira/browse/KAFKA-175
> KAFKA-173   Support encoding for non-ASCII characters
>            https://issues.apache.org/jira/browse/KAFKA-173
> KAFKA-169   Layering violations in Kafka code
>            https://issues.apache.org/jira/browse/KAFKA-169
> KAFKA-163   Ruby client needs to support new compression byte
>            https://issues.apache.org/jira/browse/KAFKA-163
> KAFKA-134   Upgrade Kafka to sbt 0.10.1
>            https://issues.apache.org/jira/browse/KAFKA-134
> KAFKA-133   Publish kafka jar to a public maven repository
>            https://issues.apache.org/jira/browse/KAFKA-133
> KAFKA-77    Implement "group commit" for kafka logs
>            https://issues.apache.org/jira/browse/KAFKA-77
> KAFKA-46    Commit thread, ReplicaFetcherThread for intra-cluster replication
>            https://issues.apache.org/jira/browse/KAFKA-46
>
> You may edit this subscription at:
> https://issues.apache.org/jira/secure/FilterSubscription!default.jspa?subId=11820&filterId=12318279
