Will do. What did you have in mind? Just write a big file to disk and
measure the time it took to write? Maybe also read it back? Using specific
APIs?
Apart from the local Win machine case, are you aware of any issues with
Amazon EC2 instances that may be causing that same latency in production?
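If it helps, here is a minimal, self-contained sketch of that kind of test
(file name, block size, and total size are arbitrary; the read-back number is
optimistic because the file will still be in the OS page cache):
```
import java.io.{FileInputStream, FileOutputStream}

object DiskBench {
  def main(args: Array[String]): Unit = {
    val path   = "bench.dat"              // arbitrary test file
    val block  = new Array[Byte](1 << 20) // 1 MiB buffer
    val blocks = 1024                     // ~1 GiB total

    val out = new FileOutputStream(path)
    val t0 = System.nanoTime
    var i = 0
    while (i < blocks) { out.write(block); i += 1 }
    out.getFD.sync() // force the data to disk, not just to the page cache
    out.close()
    val writeSecs = (System.nanoTime - t0) / 1e9
    println(f"write: ${blocks / writeSecs}%.1f MiB/s")

    val in = new FileInputStream(path)
    val t1 = System.nanoTime
    while (in.read(block) != -1) {}
    in.close()
    val readSecs = (System.nanoTime - t1) / 1e9
    println(f"read:  ${blocks / readSecs}%.1f MiB/s")
  }
}
```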
Hi Srividhya,
See
http://search-hadoop.com/m/4TaT4B9tys1/subj=Re+Kafka+0+8+2+release+before+Santa+Claus
Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr Elasticsearch Support * http://sematext.com/
On Mon, Jan 5, 2015 at 11:55 AM, Srividhya Shanmugam
Kafka Team,
We are currently using the 0.8.2 beta version with a patch for KAFKA-1738. Do
you have any updates on when the final 0.8.2 version will be released?
Thanks,
Srividhya
OffsetCommitRequest has two constructors now:
For version 0:
OffsetCommitRequest(String groupId, Map<TopicPartition, PartitionData> offsetData)
And version 1:
OffsetCommitRequest(String groupId, int generationId, String consumerId,
Map<TopicPartition, PartitionData> offsetData)
None of them seem
Hi Gwen, I am using/writing kafka-python to construct API requests and have
not dug too deeply into the server source code. But I believe it is
kafka/api/OffsetCommitRequest.scala, specifically the readFrom method
used to decode the wire protocol.
-Dana
Hi,
That sounds a bit like needing full, cross-app, cross-network
transaction/call tracing, and not something specific or limited to Kafka,
doesn't it?
Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr Elasticsearch Support * http://sematext.com/
On Mon,
I'm using 0.8.2 beta and I'm trying to push data with the MirrorMaker tool
from several remote sites to two datacenters. I'm testing this from a node
running ZooKeeper, a broker, and MirrorMaker, and the data is pushed to a
normal cluster: 3 ZooKeeper nodes and 4 brokers with replication.
While the configuration seems
Ah, I see :)
The readFrom function basically tries to read two extra fields if you
are on version 1:
if (versionId == 1) {
  groupGenerationId = buffer.getInt
  consumerId = readShortString(buffer)
}
The rest looks identical in version 0 and 1, and still no timestamp in sight...
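To make the version split concrete, here is a stand-alone sketch of that
header decode; readShortString is re-created here along the lines of
kafka.api.ApiUtils, and the surrounding scaffolding is hypothetical:
```
import java.nio.ByteBuffer

object OffsetCommitHeaderSketch {
  // Modeled on kafka.api.ApiUtils.readShortString: an int16 length
  // (negative means null) followed by that many UTF-8 bytes.
  def readShortString(buffer: ByteBuffer): String = {
    val len = buffer.getShort
    if (len < 0) null
    else {
      val bytes = new Array[Byte](len)
      buffer.get(bytes)
      new String(bytes, "UTF-8")
    }
  }

  // Version 1 inserts groupGenerationId (int32) and consumerId (short string)
  // after the group id; version 0 goes straight to the offset data.
  def readHeader(buffer: ByteBuffer, versionId: Short): (String, Int, String) = {
    val groupId = readShortString(buffer)
    if (versionId == 1) (groupId, buffer.getInt, readShortString(buffer))
    else (groupId, -1, "")
  }
}
```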
Several features in ZooKeeper depend on server time. I would highly recommend
that you properly set up ntpd (or an equivalent), then try to reproduce.
-Jon
On Jan 2, 2015, at 2:35 PM, Birla, Lokesh lokesh.bi...@verizon.com wrote:
We don't see ZooKeeper expiration. However, I noticed that our servers
@Sa,
required.acks is a producer-side configuration. Setting it to -1 means
requiring acks from all in-sync replicas.
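For example, a sketch with the old 0.8.x Scala producer (the broker address
is taken from the config below; the topic is made up), where the full config
key is request.required.acks:
```
import java.util.Properties
import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

val props = new Properties()
props.put("metadata.broker.list", "10.100.70.128:9092")
props.put("serializer.class", "kafka.serializer.StringEncoder")
props.put("request.required.acks", "-1") // wait for all in-sync replicas to ack

val producer = new Producer[String, String](new ProducerConfig(props))
producer.send(new KeyedMessage[String, String]("test-topic", "hello")) // hypothetical topic
producer.close()
```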
On Fri, Jan 2, 2015 at 1:51 PM, Sa Li sal...@gmail.com wrote:
Thanks a lot, Tim. This is the brokers' config:
--
broker.id=1
port=9092
host.name=10.100.70.128
OK, opened KAFKA-1841. KAFKA-1634 is also related.
-Dana
On Mon, Jan 5, 2015 at 10:55 AM, Gwen Shapira gshap...@cloudera.com wrote:
Ooh, I see what you mean - the OffsetAndMetadata (or PartitionData)
part of the Map changed, which will modify the wire protocol.
This is actually not handled
specifically comparing 0.8.1 --
https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/api/OffsetCommitRequest.scala#L37-L50
```
(1 to partitionCount).map(_ => {
  val partitionId = buffer.getInt
  val offset = buffer.getLong
  val metadata = readShortString(buffer)
```
preinitialize.metadata=true/false can help to a certain extent. If the
Kafka cluster is down, then metadata won't be available for a long time
(not just for the first message), so to be safe we have to set
metadata.fetch.timeout.ms=1 to fail fast, as Paul mentioned. I can also
echo Jay's comment that
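To illustrate the fail-fast setting, a sketch against the new 0.8.2 Java
producer (broker address and topic are invented):
```
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.put("bootstrap.servers", "broker1:9092") // hypothetical broker
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("metadata.fetch.timeout.ms", "1") // give up on metadata almost immediately

val producer = new KafkaProducer[String, String](props)
// With the cluster down, send() now fails quickly with a timeout
// instead of blocking for the full metadata fetch timeout (60s by default).
producer.send(new ProducerRecord[String, String]("test-topic", "hello"))
```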
Ooh, I see what you mean - the OffsetAndMetadata (or PartitionData)
part of the Map changed, which will modify the wire protocol.
This is actually not handled in the Java client either. It will send
the timestamp no matter which version is used.
This looks like a bug and I'd even mark it as
Hi Kafka Team/Users,
We are using the LinkedIn Kafka data pipeline end-to-end:
Producer(s) -> Local DC Brokers -> MM -> Central Brokers -> Camus Job ->
HDFS
This is working out very well for us, but we need to have visibility into
the latency at each layer (Local DC Brokers -> MM -> Central Brokers -> Camus
Job
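One low-tech way to get that per-layer visibility (a sketch, not LinkedIn's
actual tooling; the names are hypothetical and it assumes NTP-synced clocks)
is to stamp each message with its produce time and let every hop compute its
own lag:
```
// Prepend a wall-clock send timestamp to each payload at the producer.
def stamp(body: String): String =
  s"${System.currentTimeMillis()}|$body"

// At any hop (local broker consumer, MM target, Camus output), recover the lag.
def lagMs(raw: String): Long = {
  val Array(sentAt, _) = raw.split("\\|", 2)
  System.currentTimeMillis() - sentAt.toLong
}
```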