Jay Kreps jay.kreps@... writes:
Hey Daniel,
partitionsFor() will block the very first time it sees a new topic that it
doesn't have metadata for yet. If you want to ensure you don't block even
that one time, call it prior to your regular usage so it initializes then.
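For example, a minimal sketch against the new Java producer (the bootstrap servers and topic name here are placeholders):

import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.common.PartitionInfo;

public class MetadataWarmup {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        Producer<byte[], byte[]> producer = new KafkaProducer<>(props);

        // The one-time blocking metadata fetch happens here, at startup,
        // rather than on the first call in the hot path.
        List<PartitionInfo> partitions = producer.partitionsFor("my-topic"); // placeholder topic
        System.out.println("Metadata loaded for " + partitions.size() + " partitions");

        // ... regular usage follows; later partitionsFor() calls return cached metadata ...
        producer.close();
    }
}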
The rationale
I just want to bring up that idea of no server-side de/recompression
again. Features like KAFKA-1499
(https://issues.apache.org/jira/browse/KAFKA-1499) seem to steer the
project in a different direction, and the fact that tickets like
KAFKA-845 (https://issues.apache.org/jira/browse/KAFKA-845)
Hey Daniel,
Yeah I think that would be doable. If you want to pursue it you would need
to do a quick KIP just to get everyone on the same page since this would be
a public interface we would have to support over a long time:
We identified at least one more blocker issue KAFKA-1971 during testing.
So, we will have to roll another RC for 0.8.2.1.
Thanks,
Jun
On Sat, Feb 21, 2015 at 6:04 PM, Joe Stein joe.st...@stealth.ly wrote:
Source verified, tests pass, quick start ok.
Binaries verified, tests on scala
Here's my summary of the state of the compression discussion:
1. We all agree that current compression performance isn't very good and
it would be nice to improve it.
2. This is not entirely due to actual (de)compression; in large part it
is inefficiencies in the current
I’m considering porting an app from ActiveMQ to Kafka so I’m new here.
Apologies if this has been asked before.
Is it possible to delete inactive queues? Specifically, queues with no
messages on them which haven't received any new messages in, say, 5 minutes.
We use a lot of ephemeral queues
Jun,
Can we also add https://issues.apache.org/jira/browse/KAFKA-1724 to the
next RC please?
Thanks!
~ Joe Stein
- - - - - - - - - - - - - - - - -
http://www.stealth.ly
- - - - - - - - - - - - - - - - -
On Sun, Feb 22, 2015 at 11:59 AM, Jun Rao j...@confluent.io wrote:
We identified at
I am just starting to use it and could use a little guidance. I was able to
get it working with 0.8.2 but am not clear on best practices for using it.
Anyone willing to help me out a bit? Got a few questions, like how to
protect applications when Kafka is down or unreachable.
It seems like
I saw performance issues with the web console whenever using a large
number of partitions (1000 partitions).
On 2/3/15 12:09 PM, Sa Li wrote:
Hi, All
I am currently using kafka-web-console to monitor the kafka system; it goes
down regularly, so I have to restart it every few hours, which is
Allen,
Regarding the two CRC computation calls, the first one is used to validate
the messages, and the second call is only used if we need to re-compress
the data. So logically they are not redundant operations. As Jay said, the
re-compression is actually avoidable, and once it is removed, we will
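As an illustration of that logic only (a hedged sketch, not the actual broker code; the names below are made up): validate the incoming bytes first, and compute a new checksum only when the payload actually has to be rewritten.

import java.util.zip.CRC32;

// Illustrative only; names and structure are hypothetical, not Kafka's internals.
public class CrcSketch {

    static long crcOf(byte[] payload) {
        CRC32 crc = new CRC32();
        crc.update(payload);
        return crc.getValue();
    }

    static byte[] validateAndMaybeRecompress(byte[] payload, long expectedCrc,
                                             boolean needsRecompression) {
        // First CRC computation: validate the bytes the producer sent.
        if (crcOf(payload) != expectedCrc) {
            throw new IllegalStateException("corrupt message");
        }
        if (!needsRecompression) {
            return payload; // no second CRC computation in this case
        }
        // Only when the payload is actually rewritten (e.g. a codec change)
        // does a second CRC computation become necessary.
        byte[] recompressed = recompress(payload);
        long newCrc = crcOf(recompressed);
        System.out.println("re-compressed, new crc = " + newCrc);
        return recompressed;
    }

    static byte[] recompress(byte[] payload) {
        return payload; // placeholder for re-encoding with the target codec
    }
}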
I have a hard time figuring out how to do a commit using API 0.8.2 on JDK 8.
I tried using the examples from 0.8.1.1.
First of all, I can't use OffsetMetadataAndError inside the
offsets map as was possible in 0.8.1. I can't really find a
difference, but builds break.
I'm also unable to
If you haven't seen it yet, you probably want to look at
http://kafka.apache.org/documentation.html#java
-Ewen
On Thu, Feb 19, 2015 at 10:53 AM, Zakee kzak...@netzero.net wrote:
Well, are there any measurement techniques for memory config in brokers? We
do have a large load, with a max
We just configure our logback.xml to have two appenders: an AsyncAppender
wrapping the KafkaAppender, and a FileAppender (or ConsoleAppender as
appropriate). The AsyncAppender removes more failure cases too; e.g., a health
check hanging rather than returning rapidly could otherwise block your application.
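A rough logback.xml along those lines might look like this (the Kafka appender class name is a placeholder; substitute whichever implementation you use, such as one of the appenders linked elsewhere in this thread):

<configuration>
  <!-- Kafka appender; class name is a placeholder for your implementation -->
  <appender name="KAFKA" class="com.example.KafkaAppender">
    <!-- appender-specific settings (brokers, topic, encoder) go here -->
  </appender>

  <!-- Wrap it so a slow or unavailable Kafka can't block application threads -->
  <appender name="ASYNC_KAFKA" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="KAFKA" />
  </appender>

  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>application.log</file>
    <encoder>
      <pattern>%d %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <root level="INFO">
    <appender-ref ref="ASYNC_KAFKA" />
    <appender-ref ref="FILE" />
  </root>
</configuration>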
On Feb 22, 2015,
Here’s my attempt at a Logback version; it should be fairly easy to port:
https://github.com/opentable/otj-logging/blob/master/kafka/src/main/java/com/opentable/logging/KafkaAppender.java
On Feb 22, 2015, at 1:36 PM, Scott Chapman sc...@woofplanet.com wrote:
I am just starting to use it and could
There's also another one here:
https://github.com/danielwegener/logback-kafka-appender.
It has a fallback appender which might address the issue of Kafka being
unavailable.
On Mon, Feb 23, 2015 at 9:45 AM, Steven Schlansker
sschlans...@opentable.com wrote:
Here’s my attempt at a Logback
Hi,
Please let me know how to find the total number of messages in a particular
topic.
Regards,
Bhuvana
What kind of load do you have on the brokers? On an idle cluster (just
fetch requests from the follower replicas), I
saw NetworkProcessorAvgIdlePercent at about 97%.
Thanks,
Jun
On Thu, Feb 19, 2015 at 5:19 PM, Zakee kzak...@netzero.net wrote:
Jun,
I am already using the latest release
Hi Alex,
What I originally meant is that you probably need to manually modify the MM
code in order to achieve your needs. However, MM has been improved a lot
since we last synced up; in the next major release MM will support
exact mirroring (details in KAFKA-1650) with some functional
Alex,
Before 0.8, Kafka was written in Scala, and in 0.8.2 we are re-writing the
clients in Java for better client adoption while the server is still in
Scala. The plan after the Java clients also includes migrating the common
utils / error codes / request formats to Java, which will be used for
So there will be both Scala and Java clients? Or will Scala users simply
import the Java libraries (which is, after all, not too bad)?
2015-02-22 16:30 GMT-08:00 Guozhang Wang wangg...@gmail.com:
Alex,
Before 0.8, Kafka was written in Scala, and in 0.8.2 we are re-writing the
clients in Java for
The low connection partitioner might work for this
by attempting to reuse recently used nodes whenever possible. That is
useful in environments with lots and lots of producers where you don't care
about semantic partitioning.
In one of the perf tests, we found that the above sticky partitioner
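Purely as a sketch of the idea (not any Kafka API; the class and constants below are hypothetical), a sticky selector could pick one partition per topic at random and reuse it for a while before re-rolling, so records accumulate into larger batches and fewer connections:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical helper, not a Kafka interface.
class StickyPartitionSelector {
    private static final int RECORDS_PER_STICK = 1000; // arbitrary re-roll interval

    private final Map<String, Integer> current = new ConcurrentHashMap<>();
    private final Map<String, AtomicInteger> counters = new ConcurrentHashMap<>();

    int partitionFor(String topic, int numPartitions) {
        AtomicInteger counter = counters.computeIfAbsent(topic, t -> new AtomicInteger());
        Integer partition = current.get(topic);
        // Re-roll on the first record for a topic, or after RECORDS_PER_STICK records.
        if (partition == null || counter.incrementAndGet() >= RECORDS_PER_STICK) {
            partition = ThreadLocalRandom.current().nextInt(numPartitions);
            current.put(topic, partition);
            counter.set(0);
        }
        return partition;
    }
}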
Interesting, and this was with the new Java client? This sounds like as
much an opportunity for improvement in the code as anything. Would you be
willing to share the details?
-jay
On Sunday, February 22, 2015, Steven Wu stevenz...@gmail.com wrote:
The low connection partitioner might work
In
http://apache.osuosl.org/kafka/0.8.2-beta/scala-doc/index.html#kafka.consumer.SimpleConsumer
class SimpleConsumer:
def earliestOrLatestOffset(topicAndPartition: TopicAndPartition,
earliestOrLatest: Long, consumerId: Int): Long
1) What's the consumerId? It doesn't seem to matter what
I'm finding that if I continuously produce values to a topic (say,
once every 2 seconds), and in another thread, query the head and tail
offsets of a topic, then sometimes I see the head offset increasing and
sometimes it's frozen. What's up with that?
I'm using scala client: 0.8.2 and server:
Yes, this is with the new Java client. Since it is using non-blocking NIO,
the sender thread was probably able to scan the buffer very frequently;
hence the random partitioner won't get much chance to accumulate records for
a batch or request.
Setup:
- 3 broker instances (m1.xlarge)
- 6 producer instances
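If the goal is to give the sender a better chance to accumulate batches, the new producer's batch.size and linger.ms settings are the relevant knobs; a hedged example (broker list is a placeholder and the values are arbitrary, to be tuned against your own perf test):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;

public class BatchingExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholder broker list
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("batch.size", "65536"); // max bytes per per-partition batch (example value)
        props.put("linger.ms", "5");      // wait up to 5 ms for more records before sending (example value)

        Producer<byte[], byte[]> producer = new KafkaProducer<>(props);
        // ... produce as usual; batches now have a chance to fill up ...
        producer.close();
    }
}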