y:
>
> https://github.com/confluentinc/examples/blob/3.2.x/kafka-streams/src/main/java/io/confluent/examples/streams/MapFunctionLambdaExample.java#L126
>
> Full docs: http://docs.confluent.io/current/streams/index.html
>
>
> -Matthias
>
> On 5/17/17 1:45 PM, Robert
> wrapping
the results of #poll() which can then be passed into a map/filter pipeline.
I am using an underlying blocking queue data structure to buffer in memory
and using Stream.generate() to pull records. Any recommendations on a best
approach here?
Thanks
--
Robert Quinlivan
Software Engineer, Signal
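For what it's worth, the buffer-plus-Stream.generate approach described above can be sketched with plain JDK types. Plain Strings stand in for records copied out of consumer.poll(); note that Stream.generate yields an unbounded stream, so something like limit() or a poison-pill check is needed before a terminal operation can complete:

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class QueueStreamSketch {
    public static void main(String[] args) throws InterruptedException {
        // In-memory buffer standing in for records copied out of consumer.poll().
        BlockingQueue<String> buffer = new ArrayBlockingQueue<>(16);
        buffer.put("alpha");
        buffer.put("beta");
        buffer.put("gamma");

        List<String> out = Stream.generate(() -> {
                    try {
                        return buffer.take(); // blocks until a record arrives
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        throw new IllegalStateException(e);
                    }
                })
                .limit(3)                     // bound the infinite stream for this sketch
                .filter(s -> !s.isEmpty())
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        System.out.println(out);
    }
}
```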
… You have agreed
> > to receive the attached document(s) electronically at the email address
> > indicated above; please keep a copy of this confirmation for future
> > reference.
> >
>
--
Robert Quinlivan
Software Engineer, Signal
e into the producer should arrive in the consumer, so if I do
> >> >> this in one windows console:
> >> >>
> >> >> kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic
> >> >> big_ptns1_repl1_nozip --zookeeper localhost:2181 >
> >> >> F:\Users\me\Desktop\shakespear\single_all_shakespear_OUT.txt
> >> >>
> >> >> and this in another:
> >> >>
> >> >> kafka-console-producer.bat --broker-list localhost:9092 --topic
> >> >> big_ptns1_repl1_nozip <
> >> >> F:\Users\me\Desktop\shakespear\complete_works_no_bare_lines.txt
> >> >>
> >> >> then the output file "single_all_shakespear_OUT.txt" should be
> >> >> identical to the input file "complete_works_no_bare_lines.txt",
> >> >> except it's not. For the complete works (about 5.4 MB uncompressed)
> >> >> I lost about 130 KB in the output. For the replicated Shakespeare,
> >> >> which is about 5 GB, I lost about 150 MB.
> >> >>
> >> >> Surely this can't be right, yet it's repeatable; the errors seem to
> >> >> start at different places in the file on each run.
> >> >>
> >> >> I've done this using all 3 versions of Kafka in the 0.10.x.y branch
> >> >> and I get the same problem (the above commands were using the
> >> >> 0.10.0.0 branch, so they look a little obsolete, but they are right
> >> >> for that branch, I think). It's cost me some days.
> >> >> So, am I making a mistake, if so what?
> >> >>
> >> >> thanks
> >> >>
> >> >> jan
> >> >>
> >> >
> >>
> >
>
--
Robert Quinlivan
Software Engineer, Signal
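As a side note, the gap jan describes can be located precisely with plain JDK calls. This sketch uses hypothetical temp files in place of the real input and output files:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileDiffSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical stand-ins for the produced and consumed files.
        Path in  = Files.createTempFile("complete_works", ".txt");
        Path out = Files.createTempFile("consumed", ".txt");
        Files.writeString(in,  "to be\nor not\nto be\n");
        Files.writeString(out, "to be\nor not\n"); // simulate a truncated copy

        System.out.println("input bytes:  " + Files.size(in));
        System.out.println("output bytes: " + Files.size(out));
        // Files.mismatch (Java 12+) returns the offset of the first differing
        // byte, or the shorter length if one file is a prefix of the other,
        // which pinpoints where the loss begins.
        System.out.println("first mismatch at byte: " + Files.mismatch(in, out));
    }
}
```

Comparing the reported mismatch offset across runs would confirm whether the loss really starts at different places each time.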
reported
partition count. I have seen no mention of a need to restart or reconfigure
the producer in order to pick up the added partitions. Is this required?
Thanks
--
Robert Quinlivan
Software Engineer, Signal
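As far as I know, the Java producer refreshes cluster metadata on an interval bounded by metadata.max.age.ms (default 300000 ms, i.e. 5 minutes), so newly added partitions are normally picked up without a restart. A minimal config sketch; constructing a real KafkaProducer would need the kafka-clients jar, so only the Properties are shown:

```java
import java.util.Properties;

public class ProducerMetadataConfigSketch {
    public static void main(String[] args) {
        // Client configuration only; the KafkaProducer construction itself
        // is omitted so this snippet stays self-contained.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // metadata.max.age.ms caps how long the producer goes without a
        // metadata refresh; added partitions should become visible within
        // this interval.
        props.put("metadata.max.age.ms", "60000");
        System.out.println(props.getProperty("metadata.max.age.ms"));
    }
}
```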
to use
> different storage for maintaining the offsets.
>
> Could someone more experienced elaborate a bit on this topic?
>
> Thanks
> jakub
>
--
Robert Quinlivan
Software Engineer, Signal
>
>
> Regards
> V G Sunjay Jeffrish
>
--
Robert Quinlivan
Software Engineer, Signal
dinator
>
> -James
>
> Sent from my iPhone
>
> > On Mar 15, 2017, at 9:40 AM, Robert Quinlivan <rquinli...@signal.co>
> wrote:
> >
> > I should also mention that this error was seen on broker version
> 0.10.1.1.
> > I found that this condition sounds somewhat
, 2017 at 11:11 AM, Robert Quinlivan <rquinli...@signal.co>
wrote:
> Good morning,
>
> I'm hoping for some help understanding the expected behavior for an offset
> commit request and why this request might fail on the broker.
>
> *Context:*
>
> For context, my configura
ail?
2. If this is an issue with metadata size, what would cause abnormally
large metadata?
3. How is this cache used within the broker?
Thanks in advance for any insights you can provide.
Regards,
Robert Quinlivan
Software Engineer, Signal
le consuming a
> kafka message? Any pointers to do that would be greatly appreciated.
>
> Thanks in advance.
>
> --
> Thanks,
> Syed.
>
--
Robert Quinlivan
Software Engineer, Signal
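One common pattern for the question above is a per-record try/catch with a dead-letter collection, so a single poison message does not stall the partition. This sketch uses hypothetical Strings and Integer.parseInt in place of real records and real processing:

```java
import java.util.ArrayList;
import java.util.List;

public class ConsumeErrorHandlingSketch {
    public static void main(String[] args) {
        // Hypothetical polled batch; real code would iterate consumer.poll().
        List<String> batch = List.of("1", "2", "oops", "4");
        List<String> deadLetters = new ArrayList<>();
        int processed = 0;
        for (String record : batch) {
            try {
                Integer.parseInt(record); // stand-in for per-record processing
                processed++;
            } catch (RuntimeException e) {
                // Park the bad record (e.g. for a retry topic) and keep going.
                deadLetters.add(record);
            }
        }
        System.out.println(processed + " processed, " + deadLetters.size() + " dead-lettered");
    }
}
```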
an be shared across multiple threads.
>
> Or should be there one kafka producer created to handle one request?
>
> Is there any best practice documents/guidelines to follow for using simple
> java Kafka producer api?
>
> Thanks in advance for your responses.
>
> Thanks,
> Amit
>
--
Robert Quinlivan
Software Engineer, Signal
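For reference, the KafkaProducer javadoc describes the client as thread-safe, and sharing a single instance across threads is generally recommended over one per request. A stdlib-only sketch of the sharing pattern, with a hypothetical stand-in class instead of the real client:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class SharedProducerSketch {
    // Hypothetical stand-in for KafkaProducer; the real client is documented
    // as thread-safe, so a single instance can be shared the same way.
    static class ProducerStandIn {
        final BlockingQueue<String> sent = new LinkedBlockingQueue<>();
        void send(String record) { sent.add(record); }
    }

    public static void main(String[] args) throws InterruptedException {
        ProducerStandIn producer = new ProducerStandIn(); // one shared instance
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int t = 0; t < 4; t++) {
            final int id = t;
            pool.submit(() -> producer.send("record-from-thread-" + id));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("records sent: " + producer.sent.size());
    }
}
```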
( https://cwiki.apache.org/confluence/display/KAFKA/
> > > System+Tools#SystemTools-ConsumerOffsetChecker )
> > >
> > > However, in my version they don't work, because they try to read from
> > > the ZooKeeper /consumers path, which is empty. I think they are old
> > > tools.
> > >
> > > Does anyone know where the current Kafka keeps consumer offsets in
> > > ZooKeeper?
> > >
> > > Regards
> > > --
> > > Glen Ogilvie
> > > Open Systems Specialists
> > > Level 1, 162 Grafton Road
> > > http://www.oss.co.nz/
> > >
> > > Ph: +64 9 984 3000
> > > Mobile: +64 21 684 146
> > > GPG Key: ACED9C17
> > >
> >
>
--
Robert Quinlivan
Software Engineer, Signal
to org.apache.kafka.common.errors.RecordTooLargeException,
returning UNKNOWN error code to the client
(kafka.coordinator.GroupMetadataManager)
The consumer group cannot attach. How can I resolve this issue on the
broker?
Thanks
--
Robert Quinlivan
Software Engineer, Signal
ytes"
setting? This seems like an edge case to me. The leader would accept the
record but replicas would not be able to receive it, so it would be lost.
Or does the replica take the max of those two settings in order to avoid
this condition?
Thanks in advance!
--
Robert Quinlivan
Software Engineer, Signal
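The interaction being asked about can be stated as a simple invariant. As far as I know, brokers of this era did not take the max of the two settings, so operators were advised to keep replica.fetch.max.bytes at least as large as message.max.bytes. The values below are illustrative, not broker defaults:

```java
public class ReplicaFetchSizeSketch {
    public static void main(String[] args) {
        // Illustrative values, not broker defaults.
        int messageMaxBytes = 2_000_000;      // message.max.bytes: largest record the leader accepts
        int replicaFetchMaxBytes = 1_000_000; // replica.fetch.max.bytes: largest fetch a follower issues
        // If the leader accepts records larger than followers can fetch,
        // those records cannot replicate; keeping the fetch size at least
        // as large as the record limit avoids the condition.
        boolean safe = replicaFetchMaxBytes >= messageMaxBytes;
        System.out.println("replication-safe: " + safe);
    }
}
```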
nimum delay? or
> is the minimum time required by kafka for the whole process?
>
> Best Regards,
> Patricia
--
Robert Quinlivan
Software Engineer, Signal
Hello,
Are there more detailed descriptions available for the metrics exposed by
Kafka via JMX? The current documentation provides some information but a
few metrics are not listed in detail – for example, "Log flush rate and
time."
--
Robert Quinlivan
Software Engineer, Signal
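If it helps while the docs are sparse: the metrics can be browsed directly over JMX, where (if I recall correctly) the "Log flush rate and time" entry corresponds to the kafka.log:type=LogFlushStats,name=LogFlushRateAndTimeMs MBean. The sketch below only queries the local platform MBean server for a standard java.lang bean, since no broker is running here; against a broker you would attach jconsole (or a JMX connector) to its JMX port and browse the kafka.* domains:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxBrowseSketch {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // No broker runs here, so query a standard java.lang bean instead
        // of a kafka.* one; the browsing mechanics are identical.
        for (ObjectName name : server.queryNames(new ObjectName("java.lang:type=Memory"), null)) {
            System.out.println(name);
        }
    }
}
```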
,
ConsumerRebalanceListener) would follow a similar behavior by distributing
the assigned topics among all consumers in the group.
Is this not the case? What is the expected behavior and how would you
recommend implementing this design?
Thank you
--
Robert Quinlivan
Software Engineer, Signal
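As a rough illustration of the distribution behavior being asked about, here is a round-robin style spread over hypothetical consumers and partitions. This is a simplification for intuition only, not the client's actual assignor code:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AssignmentSketch {
    public static void main(String[] args) {
        List<Integer> partitions = List.of(0, 1, 2, 3, 4, 5);
        List<String> consumers = List.of("c1", "c2", "c3");
        Map<String, List<Integer>> assignment = new LinkedHashMap<>();
        consumers.forEach(c -> assignment.put(c, new ArrayList<>()));
        // Round-robin spread: partition i goes to consumer i mod group size.
        for (int i = 0; i < partitions.size(); i++) {
            assignment.get(consumers.get(i % consumers.size())).add(partitions.get(i));
        }
        System.out.println(assignment);
    }
}
```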
verbose logging, or is there another way of checking
the offsets?
Thank you
--
Robert Quinlivan
Software Engineer, Signal