RE: read messages with SimpleConsumer, cannot start from an assigned offset

2014-08-25 Thread chenlax
I just want to get messages starting from an assigned offset. In the code, requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(whichTime, 1)); whichTime is a message's offset. Thanks, Lax
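The confusion here is between the offset-lookup API and the fetch API: the first argument of PartitionOffsetRequestInfo is a timestamp (or the EarliestTime/LatestTime constants), not a message offset, while the offset to start reading from goes into the FetchRequest. A minimal sketch against the 0.8 SimpleConsumer, with broker-host, my-topic, partition 0, the client id, and the start offset as placeholders:

    // Sketch, assuming Kafka 0.8.x and kafka.javaapi.consumer.SimpleConsumer.
    import kafka.api.FetchRequest;
    import kafka.api.FetchRequestBuilder;
    import kafka.javaapi.FetchResponse;
    import kafka.javaapi.consumer.SimpleConsumer;
    import kafka.message.MessageAndOffset;

    public class FetchFromOffset {
        public static void main(String[] args) {
            SimpleConsumer consumer =
                new SimpleConsumer("broker-host", 9092, 100000, 64 * 1024, "my-client");
            long startOffset = 12345L;  // the assigned offset to start reading from
            FetchRequest req = new FetchRequestBuilder()
                .clientId("my-client")
                .addFetch("my-topic", 0, startOffset, 100000)  // topic, partition, offset, fetchSize
                .build();
            FetchResponse resp = consumer.fetch(req);
            for (MessageAndOffset mo : resp.messageSet("my-topic", 0)) {
                System.out.println("offset " + mo.offset());
            }
            consumer.close();
        }
    }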

Re: OutOfMemoryError during index mapping

2014-08-25 Thread pavel.zalu...@sematext.com
Hi, thanks for the reply! Yes, we are running Kafka on a 32-bit machine, but 'log.index.size.max.bytes' defaults to 10MB according to http://kafka.apache.org/documentation.html#persistence and it is not redefined in our config.
$ find kafka-data-dir -name '*.index' | wc -l
425
$ find kafka-data-dir -size
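For scale, and assuming the 10MB default actually applies: if each of those 425 index files is memory-mapped at up to 10MB, that is on the order of 425 × 10MB, roughly 4.2GB of mapped address space in the worst case, well beyond the 2-3GB a 32-bit JVM process can address in total (heap included). That alone would explain an OutOfMemoryError during index mapping regardless of heap size; a 64-bit JVM or a smaller log.index.size.max.bytes shrinks that footprint.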

RE: read messages with SimpleConsumer, cannot start from an assigned offset

2014-08-25 Thread chenlax
Thanks, Jun Rao. I can't use an assigned offset to get the last offset; my mistake. Thanks, Lax
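For the latest offset itself, a hedged sketch of the lookup Jun's reply points at, using the same placeholder names as above; getOffsetsBefore with LatestTime should return the next offset to be written to the partition, and it returns offsets, not messages:

    // Sketch, assuming Kafka 0.8.x; this asks the broker for the latest offset
    // of my-topic / partition 0 rather than fetching messages.
    import java.util.HashMap;
    import java.util.Map;
    import kafka.api.PartitionOffsetRequestInfo;
    import kafka.common.TopicAndPartition;
    import kafka.javaapi.OffsetRequest;
    import kafka.javaapi.OffsetResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    public class LatestOffset {
        public static void main(String[] args) {
            SimpleConsumer consumer =
                new SimpleConsumer("broker-host", 9092, 100000, 64 * 1024, "my-client");
            TopicAndPartition tp = new TopicAndPartition("my-topic", 0);
            Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<>();
            // LatestTime() (-1) = next offset to be written; EarliestTime() (-2) = oldest available
            requestInfo.put(tp, new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.LatestTime(), 1));
            OffsetResponse resp = consumer.getOffsetsBefore(
                new OffsetRequest(requestInfo, kafka.api.OffsetRequest.CurrentVersion(), "my-client"));
            System.out.println("latest offset: " + resp.offsets("my-topic", 0)[0]);
            consumer.close();
        }
    }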

Re: Kafka Mirroring Issue

2014-08-25 Thread François Langelier
What is your partitioning function? François Langelier

How retention is working

2014-08-25 Thread François Langelier
Hi! I'm wondering how the retention time works exactly... I know that log.retention.{minutes,hours} and retention.ms (per topic) set the retention time, and that there is log.retention.check.interval.ms for deletion. Those properties work when I create the topic, but can I change the
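As a sketch of the per-topic knob mentioned above (assuming an 0.8.1+ broker, where kafka-topics.sh accepts --alter --config; the topic name is a placeholder), retention.ms can be set or changed after the topic already exists:

    $ bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-topic \
        --config retention.ms=86400000   # keep this topic's messages for 1 day

log.retention.{hours,minutes} stays the broker-wide default; the topic-level retention.ms overrides it for that topic only.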

Re: Kafka Mirroring Issue

2014-08-25 Thread François Langelier
Do you have a partition key? IIRC, the DefaultPartitioner works differently depending on whether you use a partition key or not. If you do have a partition key, it uses this algorithm: Utils.abs(key.hashCode) % numPartitions (so if your key is the same for all your messages you will always publish to the same
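A hedged Java rendering of that keyed branch; without a key the old 0.8 producer instead picks a partition on its own rather than hashing:

    // Sketch of the keyed path: hash the key, take the absolute value, mod the partition count.
    // Kafka's Utils.abs maps Integer.MIN_VALUE to 0 so the result is never negative.
    static int partitionForKey(Object key, int numPartitions) {
        int h = key.hashCode();
        int abs = (h == Integer.MIN_VALUE) ? 0 : Math.abs(h);
        return abs % numPartitions;
    }

So two messages with the same key always land in the same partition, and a constant key pins everything to one partition.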

Re: How retention is working

2014-08-25 Thread Philip O'Toole
Retention is per topic, per Kafka broker; it has nothing to do with the producer. You do not need to restart the producer for retention changes to take effect. You do, however, need to restart the broker. Once restarted, all messages will then be subject to the new policy. Philip

RE: Consumer sensitive expiration of topic

2014-08-25 Thread Prunier, Dominique
Any idea on this use case, guys? Thanks,

EBCDIC support

2014-08-25 Thread sonali.parthasarathy
Hey all, this might seem like a silly question, but does Kafka have support for EBCDIC? Say I had to read data from an IBM mainframe via a TCP/IP socket where the data resides in EBCDIC format; can Kafka read that directly? Thanks, Sonali

Re: EBCDIC support

2014-08-25 Thread Gwen Shapira
Hi Sonali, Kafka doesn't really care about EBCDIC or any other format - for Kafka bits are just bits. So they are all supported. Kafka does not read data from a socket though. Well, it does, but the data has to be sent by a Kafka producer. Most likely you'll need to implement a producer that
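A minimal sketch of such a producer, assuming the 0.8 producer API; broker-host, mainframe-topic, and readFromMainframeSocket() are placeholders, and DefaultEncoder just passes the byte[] payload through unchanged:

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class RawBytesProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("metadata.broker.list", "broker-host:9092");
            props.put("serializer.class", "kafka.serializer.DefaultEncoder"); // byte[] in, byte[] out
            Producer<byte[], byte[]> producer = new Producer<>(new ProducerConfig(props));

            byte[] ebcdicPayload = readFromMainframeSocket(); // placeholder: your TCP/IP reader
            producer.send(new KeyedMessage<byte[], byte[]>("mainframe-topic", ebcdicPayload));
            producer.close();
        }

        private static byte[] readFromMainframeSocket() {
            return new byte[0]; // stand-in; real code would read from the mainframe socket
        }
    }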

RE: EBCDIC support

2014-08-25 Thread sonali.parthasarathy
Thanks, Gwen! Makes sense. So I'll have to weigh the pros and cons of doing an EBCDIC-to-ASCII conversion before sending to Kafka vs. using an EBCDIC library afterwards in the consumer. Thanks! S

Re: EBCDIC support

2014-08-25 Thread Gwen Shapira
Personally, I like converting data before writing to Kafka, so I can easily support many consumers who don't know about EBCDIC. A third option is to have a consumer that reads EBCDIC data from one Kafka topic and writes ASCII to another Kafka topic. This has the benefits of preserving the raw

Re: EBCDIC support

2014-08-25 Thread Christian Csar
Having been spared any EBCDIC experience whatsoever (i.e. from a position of thorough ignorance), if you are transmitting text or things with a designated textual form (presumably) I would recommend that your conversion be to Unicode rather than ASCII if you don't already have consumers expecting a
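For the conversion step itself, a small Java sketch; the code page name is an assumption (IBM1047 and IBM037 are common EBCDIC code pages, and the right one depends on the mainframe and the JDK's installed charsets):

    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class EbcdicToUtf8 {
        public static byte[] convert(byte[] ebcdicBytes) {
            String text = new String(ebcdicBytes, Charset.forName("IBM1047")); // decode EBCDIC
            return text.getBytes(StandardCharsets.UTF_8);                      // re-encode as UTF-8
        }
    }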

Migrating data from old brokers to new brokers question

2014-08-25 Thread Marcin Michalski
Hi, I would like to migrate my Kafka setup from old servers to new servers. Let's say I have 8 really old servers that have the Kafka topics/partitions replicated 4 ways, and I want to migrate the data to 4 brand new servers with a replication factor of 3. I wonder if anyone has ever performed

Re: Migrating data from old brokers to new brokers question

2014-08-25 Thread Kashyap Paidimarri
If you are also planning on a version upgrade as part of this, it might be safer to create the new cluster separately and use the MirrorMaker to copy data over.

Re: Migrating data from old brokers to new brokers question

2014-08-25 Thread Joe Stein
Marcin, that is a typical task now. What version of Kafka are you running? Take a look at https://kafka.apache.org/documentation.html#basic_ops_cluster_expansion and https://kafka.apache.org/documentation.html#basic_ops_increase_replication_factor. Basically, you can do a --generate to get
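A hedged sketch of that --generate / --execute flow with the 0.8.1 partition reassignment tool; topics.json, the topic name, and the broker ids 5,6,7,8 are placeholders:

    $ cat topics.json
    {"topics": [{"topic": "my-topic"}], "version": 1}

    $ bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
        --topics-to-move-json-file topics.json --broker-list "5,6,7,8" --generate
    # Review the proposed assignment it prints, save it to reassignment.json, then:

    $ bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
        --reassignment-json-file reassignment.json --execute

    $ bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
        --reassignment-json-file reassignment.json --verify

The reassignment JSON lists the replica set per partition, so editing it to name only three replicas per partition is also how the replication factor can be brought from 4 down to 3 during the move, per the second documentation link above.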