es through this producer
> to the sink cluster?
>
> Sound like a solution? Have a better suggestion or any warnings about this
> approach?
>
> -Roger
>
>
> On 2/2/17, 10:10 AM, "Damian Guy" <damian@gmail.com> wrote:
>
> Hi Roger,
Hi Roger,
This is not currently supported and won't be available in 0.10.2.0.
This has been discussed, but it doesn't look like there is a JIRA for it yet.
Thanks,
Damian
On Thu, 2 Feb 2017 at 16:51 Roger Vandusen
wrote:
> We would like to source topics from one
Hi Matthew,
You shouldn't close the stores in your custom processors. They are closed
automatically by the framework during rebalances and shutdown.
There is a good chance that your closing of the stores is causing the
issue. Of course if you see the exception again then please report back so
we
Hi Elliot,
With GlobalKTables your processor wouldn't be able to write directly to the
table - they are read-only from all threads except the thread that keeps
them up-to-date.
You could potentially write the dimension data to the topic that is the
source of the GlobalKTable, but there is no
+1
On Wed, 1 Feb 2017 at 07:48 Michael Noll wrote:
> Thanks for bringing this up, Matthias.
>
> +1
>
> On Wed, Feb 1, 2017 at 8:15 AM, Gwen Shapira wrote:
>
> > +1
> >
> > On Tue, Jan 31, 2017 at 5:57 PM, Matthias J. Sax
> >
I think this might be an issue related to having
auto.create.topics.enable=true (the default).
Try setting auto.create.topics.enable=false in server.properties.
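For reference, a minimal sketch of the relevant server.properties line (this is a broker setting, so the broker needs a restart to pick it up):

# server.properties
auto.create.topics.enable=false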
On Tue, 31 Jan 2017 at 17:29 Peter Kopias wrote:
> Hello.
>
> I've got a local virtual development
ialization and improved
>>>>> semantics
>>>>> Date: 24 January 2017 at 19:30:10 GMT
>>>>> To: d...@kafka.apache.org
>>>>> Reply-To: d...@kafka.apache.org
>>>>>
>>>>> That's not what I meant by "huge impact".
> >>> To: d...@kafka.apache.org
> >>> Reply-To: d...@kafka.apache.org
> >>>
> >>> That's not what I meant by "huge impact".
> >>>
> >>> I refer to the actions related to materialize a KTable: creating a
> >>> RocksDB store and a
> I am following this guide
> https://cwiki.apache.org/confluence/display/KAFKA/jmxterm+quickstart
>
> However it is not clear what config I need to provide.
>
> Thanks
> Sachin
>
>
>
> On Fri, Jan 27, 2017 at 2:31 PM, Damian Guy <damian@gma
Hi Sachin,
You can configure an implementation of org.apache.kafka.common.metrics.MetricsReporter.
This is done via StreamsConfig.METRICS_REPORTER_CLASSES_CONFIG
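A minimal sketch of wiring that in (com.example.MyMetricsReporter is a hypothetical class implementing org.apache.kafka.common.metrics.MetricsReporter):

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(StreamsConfig.METRICS_REPORTER_CLASSES_CONFIG,
    "com.example.MyMetricsReporter");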
There is a list of jmx reporters here:
https://cwiki.apache.org/confluence/display/KAFKA/JMX+Reporters
I'm sure there are plenty more available on
Hi Nick,
I guess there is some reason why you can't just build it as a table to
begin with?
There isn't a convenient method for doing this right now, but you could do
something like:
stream.to("some-other-topic");
builder.table("some-other-topic");
Thanks,
Damian
On Tue, 24 Jan 2017 at 16:32
You could possibly also use KStream.transform(...)
On Wed, 18 Jan 2017 at 14:22 Damian Guy <damian@gmail.com> wrote:
> Hi Nicolas,
>
> Good question! I'm not sure why it is a terminal operation, maybe one of
> the original authors can chip in. However, you could pr
Hi Nicolas,
Good question! I'm not sure why it is a terminal operation, maybe one of
the original authors can chip in. However, you could probably work around
it by using TopologyBuilder.addProcessor(...) rather than KStream.process
Thanks,
Damian
On Wed, 18 Jan 2017 at 13:48 Nicolas Fouché
Hi Nicolas,
I guess you are using the Processor API for your topology? The
WindowedSerializer is an internal class that is used as part of the DSL. In
the DSL a topic will be created for each window operation, so we don't need
the end time as it can be calculated from the window size.
However,
Congratulations!
On Thu, 12 Jan 2017 at 03:35 Jun Rao wrote:
> Grant,
>
> Thanks for all your contribution! Congratulations!
>
> Jun
>
> On Wed, Jan 11, 2017 at 2:51 PM, Gwen Shapira wrote:
>
> > The PMC for Apache Kafka has invited Grant Henke to join as
Hi,
There is no way to enable caching on the in-memory store - by definition it is
already cached. However the in-memory store will write each update to the
changelog (regardless of context.commit), which seems to be the issue you
have?
When you say large, how large? Have you tested it and observed
guring a processor that it uses the sources applied to that
> processor to ensure the partitions are aligned for that task?
>
> thanks,
> brian
>
> On 05.01.2017 13:37, Damian Guy wrote:
> > Hi Brian,
> >
> > It might be helpful if you provide some code showing your
Hi Brian,
It might be helpful if you provide some code showing your Topology.
Thanks,
Damian
On Thu, 5 Jan 2017 at 10:59 Brian Krahmer wrote:
> Hey guys,
>
> I'm fighting an issue where I can currently only run one instance of
> my streams application because when
Hi Sachin,
The key is a combination of the record key + window start time + a sequence
number. The timestamp is 8 bytes and the sequence number is 4 bytes.
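A hedged sketch of decoding such a key, assuming exactly that layout (the record key bytes first, then the 8-byte window start, then the 4-byte sequence number):

import java.nio.ByteBuffer;

void decodeWindowKey(byte[] binaryKey) {
    ByteBuffer buf = ByteBuffer.wrap(binaryKey);
    byte[] recordKey = new byte[binaryKey.length - 12];
    buf.get(recordKey);               // the original record key bytes
    long windowStart = buf.getLong(); // 8-byte window start timestamp
    int seq = buf.getInt();           // 4-byte sequence number
}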
Thanks,
Damian
On Thu, 22 Dec 2016 at 15:26 Sachin Mittal wrote:
> Hi All,
> Our stream is something like
>
>
Hi,
Have you tried building the examples with the provided pom.xml? Last I
checked it all compiled and worked.
Thanks
On Thu, 22 Dec 2016 at 16:13 Amrit Jangid wrote:
> Hi All,
>
> I want to try out kafka stream example using this example :
>
>
ams config directly.
>
> Thanks
> Sachin
>
>
>
> On Mon, Dec 19, 2016 at 5:32 PM, Damian Guy <damian@gmail.com> wrote:
>
> > Hi Sachin,
> >
> > I think we have a way of doing what you want already. If you create a
> > custom state store yo
like to see in logs if there is any additional information
> which we can check.
>
> Thanks
> Sachin
>
> On Mon, Dec 19, 2016 at 5:43 PM, Damian Guy <damian@gmail.com> wrote:
>
> > Hi Sachin,
> >
> > This would usually indicate that
Hi Sachin,
This would usually indicate that there is a connectivity
issue with the brokers. You would need to correlate the logs etc on the
brokers with the streams logs to try and understand what is happening.
Thanks,
Damian
On Sun, 18 Dec 2016 at 07:26 Sachin Mittal
Hi Sachin,
I think we have a way of doing what you want already. If you create a
custom state store you can call the enableLogging method and pass in any
configuration parameters you want. For example:
final StateStoreSupplier supplier = Stores.create("store")
.withKeys(Serdes.String())
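The snippet is cut off here in the archive; a hedged completion of that builder chain, with an illustrative changelog config entry, might be:

final Map<String, String> changelogConfig = new HashMap<>();
changelogConfig.put("retention.ms", "172800000"); // illustrative value
final StateStoreSupplier supplier = Stores.create("store")
    .withKeys(Serdes.String())
    .withValues(Serdes.String())
    .persistent()
    .enableLogging(changelogConfig)
    .build();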
Hi Avi,
Technically you can, but not without writing some code. If you want to use
consumer groups then you would need to write a custom PartitionAssignor and
configure it in your Consumer Config, like so:
consumerProps.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
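The line is truncated here; a hedged completion (com.example.MyPartitionAssignor is a hypothetical implementation of the consumer's PartitionAssignor interface):

consumerProps.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
    "com.example.MyPartitionAssignor");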
That is correct
On Wed, 14 Dec 2016 at 12:09 Jon Yeargers <jon.yearg...@cedexis.com> wrote:
> I have the app running on 5 machines. Is that what you mean?
>
> On Wed, Dec 14, 2016 at 1:38 AM, Damian Guy <damian@gmail.com> wrote:
>
> > Hi Jon,
> >
Hi Sachin,
> windowstore.changelog.additional.retention.ms
>
> How does this relate to the retention.ms param of topic config?
> I create the internal topic manually using, say, retention.ms=360.
> In next release (post kafka_2.10-0.10.0.1) since we support delete of
> internal changelog topic as
n Yeargers <jon.yearg...@cedexis.com> wrote:
> As near as I can see it's rebalancing constantly.
>
> I'll up that value and see what happens.
>
> On Tue, Dec 13, 2016 at 9:04 AM, Damian Guy <damian@gmail.com> wrote:
>
> > Hi Jon,
> >
> > I haven't h
.
>
> On Tue, Dec 13, 2016 at 4:55 AM, Jon Yeargers <jon.yearg...@cedexis.com>
> wrote:
>
> Yes - saw that one. There were plenty of smaller records available though.
>
> I sent another log this morning with the level set to DEBUG. Hopefully you
> rec'd it.
>
> On Tue
Hi Jon,
It looks like you have the logging level for KafkaStreams set to at least
WARN. I can only see ERROR level logs being produced from Streams.
However, I did notice an issue in the logs (not related to your specific
error but you will need to fix anyway):
There are lots of messages like:
(non-ZooKeeper-based consumers).
>
> Error: Consumer group 'test' has no active members.
>
> What does this mean?
>
> It means I can check the offsets of the consumer only when the streams
> application "test" is running.
>
> Thanks
> Sachin
>
>
> On Mon, De
Mon, Dec 12, 2016 at 4:48 AM, Damian Guy <damian@gmail.com> wrote:
>
> Just set the log level to debug and then run your app until you start
> seeing the problem.
> Thanks
>
> On Mon, 12 Dec 2016 at 12:47 Jon Yeargers <jon.yearg...@cedexis.com>
> wrote:
>
group, i.e its own thread?
>
> On Mon, Dec 12, 2016 at 10:29 PM, Damian Guy <damian@gmail.com> wrote:
>
> > Yep - that looks correct
> >
> > On Mon, 12 Dec 2016 at 17:18 Avi Flax <avi.f...@parkassist.com> wrote:
> >
> > >
Yep - that looks correct
On Mon, 12 Dec 2016 at 17:18 Avi Flax <avi.f...@parkassist.com> wrote:
>
> > On Dec 12, 2016, at 11:42, Damian Guy <damian@gmail.com> wrote:
> >
> > If you want to split these out so that they can run in parallel, then you
>
i.f...@parkassist.com> wrote:
>
> > On Dec 12, 2016, at 10:24, Damian Guy <damian@gmail.com> wrote:
> >
> > The code for your Streams Application. Doesn't have to be the actual
> code,
> > but an example of how you are using Kafka Streams.
>
> OK, I’ve
Hi Avi,
The code for your Streams Application. Doesn't have to be the actual code,
but an example of how you are using Kafka Streams.
On Mon, 12 Dec 2016 at 15:13 Avi Flax <avi.f...@parkassist.com> wrote:
>
> > On Dec 12, 2016, at 10:08, Damian Guy <damian@gmail.com> w
Hi Avi,
Can you provide an example of your topology?
Thanks,
Damian
On Mon, 12 Dec 2016 at 15:02 Avi Flax wrote:
Hi all,
I’m running Kafka 0.10.0.1 on Java 8 on Linux — same for brokers and
streams nodes.
I’m attempting to scale up a Streams app that does a lot of
Hi Sachin,
You should use the kafka-consumer-groups.sh command. The
ConsumerOffsetChecker is deprecated and is only for the old consumer.
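For example (broker address illustrative; the group id is the streams application id, "test" in your case):

kafka-consumer-groups.sh --new-consumer --bootstrap-server localhost:9092 \
    --describe --group test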
Thanks,
Damian
On Mon, 12 Dec 2016 at 14:32 Sachin Mittal wrote:
> Hi,
> I have a streams application running with application id test.
Just set the log level to debug and then run your app until you start
seeing the problem.
Thanks
On Mon, 12 Dec 2016 at 12:47 Jon Yeargers <jon.yearg...@cedexis.com> wrote:
> I can log whatever you need. Tell me what is useful.
>
> On Mon, Dec 12, 2016 at 4:43 AM, Dam
n Mon, Dec 12, 2016 at 4:15 AM, Jon Yeargers <jon.yearg...@cedexis.com>
> wrote:
>
> > At this moment I have 5 instances each running 2 threads.
> > Single instance / machine.
> >
> > Define 'full logs' ?
> >
> > On Mon, Dec 12, 2016 at 3:54 AM,
Jon,
How many StreamThreads do you have running?
How many application instances?
Do you have more than one instance per machine? If yes, are they sharing
the same State Directory?
Do you have full logs that can be provided so we can try and see how/what
is happening?
Thanks,
Damian
On Mon, 12
ll process the complete
> downstream before becoming standby.
>
> Thanks
> Sachin
>
>
>
> On Mon, Dec 12, 2016 at 3:19 PM, Damian Guy <damian@gmail.com> wrote:
>
> > Hi Sachin,
> >
> > The KafkaStreams StreamsPartitionAssignor will take care of assi
cation and if one
> fails and crashes one of the standby takes over.
>
> Please let me know if my understanding is correct so far.
>
> Thanks
> Sachin
>
>
> On Fri, Dec 9, 2016 at 1:34 PM, Damian Guy <damian@gmail.com> wrote:
>
> > Hi Sachin,
> >
>
Hi Rob,
Do you have any further information you can provide? Logs etc?
Have you configured max.poll.interval.ms?
Thanks,
Damian
On Sun, 11 Dec 2016 at 20:30 Robert Conrad wrote:
> Hi All,
>
> I have a relatively complex streaming application that seems to struggle
>
Hi Jon,
Are you using 0.10.1? There is a resource leak to do with the Window
Iterator. The bug has been fixed on the 0.10.1 branch (which will soon be
released as 0.10.1.1)
and it is also fixed in the confluent fork.
You can get the confluent version by using the following:
Hi Ara,
It is a bug in 0.10.1 that has been fixed:
https://issues.apache.org/jira/browse/KAFKA-4311
To work around it you should set
StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG to 0
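A minimal sketch of that workaround in the streams configuration:

Properties props = new Properties();
props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);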
The fix is available on trunk and 0.10.1 branch and there will be a
0.10.1.1 release any day now.
Thanks,
Hi Sachin,
What you have suggested will never happen. If there is only 1 partition
there will only ever be one consumer of that partition. So if you had 2
instances of your streams application, and only a single input partition,
only 1 instance would be processing the data.
If you are running
at 17:49 Jon Yeargers <jon.yearg...@cedexis.com> wrote:
> I'm only running one consumer-instance so would rebalancing / wrong host be
> an issue?
>
>
>
> On Thu, Dec 8, 2016 at 7:31 AM, Damian Guy <damian@gmail.com> wrote:
>
> > Hi Jon,
> &g
Hi Jon,
How are you trying to access the store?
That exception is thrown in a few circumstances:
1. KafkaStreams hasn't initialized or is re-initializing due to a
rebalance. This can occur for a number of reasons, i.e., new
topics/partitions being added to the broker (including streams internal
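A common remedy is to retry the store lookup until the rebalance has completed; a hedged sketch, assuming a KafkaStreams instance named streams and an illustrative store name (the enclosing method would declare throws InterruptedException):

ReadOnlyKeyValueStore<String, Long> store = null;
while (store == null) {
    try {
        store = streams.store("my-store", QueryableStoreTypes.keyValueStore());
    } catch (InvalidStateStoreException e) {
        Thread.sleep(100); // not queryable yet, e.g. mid-rebalance
    }
}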
zer,sumRecordsDeserializer);
> >>>
> >>> StringSerializer stringSerializer = new StringSerializer();
> >>> StringDeserializer stringDeserializer = new StringDeserializer();
> >>> Serde stringSerde =
> >>> Serdes.serdeFrom(stringSerial
Hi Jon,
A couple of things: Which version are you using?
Can you share the code you are using to build the topology?
Thanks,
Damian
On Tue, 6 Dec 2016 at 14:44 Jon Yeargers wrote:
> I'm using .map to convert my (k/v) string/Object to Object/Object but when I
>
> affected cache.
>
> Mathieu
>
>
> On Mon, Dec 5, 2016 at 8:36 AM, Damian Guy <damian@gmail.com> wrote:
>
> > Hi Mathieu,
> >
> > I'm trying to make sense of the rather long stack trace in the gist you
> > provided. Can you possibly share
Hi Mathieu,
I'm trying to make sense of the rather long stack trace in the gist you
provided. Can you possibly share your streams topology with us?
Thanks,
Damian
On Mon, 5 Dec 2016 at 14:14 Mathieu Fenniak
wrote:
> Hi Eno,
>
> This exception occurred w/ trunk @
Hi Ali,
The only way KafkaStreams will process new topics after start is if the
original stream was defined with a regular expression, i.e.,
kafka.stream(Pattern.compile("foo-.*"));
If any new topics are added after start that match the pattern, then they
will also be consumed.
Thanks,
Damian
On
> problem there.
>
> On Fri, 25 Nov 2016 at 13:37, Svante Karlsson <svante.karls...@csi.se>
> wrote:
>
> > What kind of disk are you using for the rocksdb store? i.e. spinning or
> ssd?
> >
> > 2016-11-25 12:51 GMT+01:00 Damian Guy <damian@gmail.co
Hi Frank,
Is this on a restart of the application?
Thanks,
Damian
On Fri, 25 Nov 2016 at 11:09 Frank Lyaruu wrote:
> Hi y'all,
>
> I have a reasonably simple KafkaStream application, which merges about 20
> topics a few times.
> The thing is, some of those topic datasets
Mikael,
When you use `through(..)`, topics are not created by KafkaStreams. You need
to create them yourself before you run the application.
Thanks,
Damian
On Thu, 24 Nov 2016 at 11:27 Mikael Högqvist wrote:
> Yes, the naming is not an issue.
>
> I've tested this with the
Hi Hamid,
Out of interest - what are the results if you use KStreamTestDriver?
Thanks,
Damian
On Thu, 24 Nov 2016 at 12:05 Hamidreza Afzali <
hamidreza.afz...@hivestreaming.com> wrote:
> The map() returns non-null keys and values and produces the following
> stream:
>
>
Hi Brian,
It sounds like you might want to do something like:
KTable inputOne = builder.table("input-one");
KTable inputTwo = builder.table("input-two");
KTable inputThree = builder.table("input-three");
ValueJoiner joiner1 = //...
ValueJoiner joiner2 = //...
inputOne.join(inputTwo, joiner1)
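The snippet ends mid-chain in the archive; presumably it continued by chaining the second join off the result, along the lines of:

KTable joined = inputOne.join(inputTwo, joiner1)
    .join(inputThree, joiner2);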
Hi Ara,
Are you running Kafka Streams v0.10.1? If so, there is a resource leak in
the window store. See: https://github.com/apache/kafka/pull/2122
You can either try checking out the Apache Kafka 0.10.1 branch (which has
the fix) and building it manually, or you can try using the latest
confluent
Hi Sachin,
You can achieve what you want by setting the correct cleanup.policy on
these topics.
In this case you want cleanup.policy=compact,delete - you'll also want to
set retention.ms and/or retention.bytes.
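For example, when creating such a topic (name and values illustrative):

kafka-topics.sh --zookeeper localhost:2181 --create --topic my-changelog \
    --partitions 1 --replication-factor 1 \
    --config cleanup.policy=compact,delete --config retention.ms=86400000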
The topic will then be compacted, but it will also delete any segments
based on the
That is correct
On Thu, 10 Nov 2016 at 11:50 Sachin Mittal wrote:
> Hi,
> The reset tool looks like a great feature.
>
> So following this link
>
> https://www.confluent.io/blog/data-reprocessing-with-kafka-streams-resetting-a-streams-application/
>
> What I understand is
> If I were to move to 0.10.1, is this possible without impacting the
> applications that I've already done?
>
> If yes then which parameters are needed for deduplication with caching.
> Like this you would save me time looking for it.
>
>
> Thanks.
>
> Hamza
>
>
Hi Hamza,
If you are using version 0.10.0, then there is no way of controlling this.
In 0.10.1 we do some deduplication with caching, but you should still
expect multiple results for a window.
Thanks,
Damian
On Fri, 4 Nov 2016 at 08:42 Hamza HACHANI wrote:
> Hi,
>
>
>
Hi Frank,
This usually means that another StreamThread has the lock for the state
directory. So it would seem that one of the StreamThreads hasn't shut down
cleanly. If it happens again can you please take a Thread Dump so we can
see what is happening?
Thanks,
Damian
On Sun, 30 Oct 2016 at
Hi Frank,
Which version of Kafka are you running? The line numbers in the stack trace
don't match up with what I am seeing on 0.10.1 or on trunk.
FYI - I created a JIRA for this here:
https://issues.apache.org/jira/browse/KAFKA-4311
Thanks,
Damian
On Tue, 18 Oct 2016 at 15:52 Damian Guy
Also, it'd be great if you could share your streams topology.
Thanks,
Damian
On Tue, 18 Oct 2016 at 15:48 Damian Guy <damian@gmail.com> wrote:
> Hi Frank,
>
> Are you able to reproduce this? I'll have a look into it, but it is not
> immediately clear how it could g
porting the bug, and many thanks to Damian for the
> > > quick catch!
> > >
> > > On Thu, Oct 13, 2016 at 12:30 PM, Frank Lyaruu <flya...@gmail.com>
> > wrote:
> > >
> > >> The issue seems to be gone. Amazing work, thanks...!
> > >
Hi Jason,
Really sorry, but we are going to need to cut another RC. There was a
report on the user list w.r.t. the NamedCache (in KafkaStreams) throwing a
NullPointerException. I've looked into it and it is definitely a bug that
needs to be fixed. jira is
Hi, I believe I found the problem. If possible, could you please try with
this: https://github.com/dguy/kafka/tree/cache-bug
Thanks,
Damian
On Thu, 13 Oct 2016 at 17:46 Damian Guy <damian@gmail.com> wrote:
> Hi Frank,
>
> Thanks for reporting. Can you provide a sample
Hi Frank,
Thanks for reporting. Can you provide a sample of the join you are running?
Thanks,
Damian
On Thu, 13 Oct 2016 at 16:10 Frank Lyaruu wrote:
> Hi Kafka people,
>
> I'm joining a bunch of Kafka Topics using Kafka Streams, with the Kafka
> 0.10.1 release candidate.
>
The 0.10.1 release will hopefully be within the next couple of weeks.
On Wed, 12 Oct 2016 at 15:52 Pierre Coquentin
wrote:
> Ok it works against the trunk and the branch 0.10.1, both have a dependency
> to rocksdb 4.9.0 vs 4.4.1 for Kafka 0.10.0.
> Do you know when
up the changes?
>
> On 1 October 2016 at 04:42, Damian Guy <damian@gmail.com> wrote:
>
> > That is correct.
> >
> > On Fri, 30 Sep 2016 at 18:00 Gary Ogden <gog...@gmail.com> wrote:
> >
> > > So how exactly would that work? For
> Are you saying that I could put a regex in place of the SYSTEM_TOPIC and
> that one KStream would be streaming from multiple topics that match that
> regex?
>
> If so, that could be useful.
>
> Gary
>
>
> On 30 September 2016 at 13:35, Damian Guy <damian@gmail.com>
Hi Gary,
In the upcoming 0.10.1 release you can do regex subscription - will that
help?
Thanks,
Damian
On Fri, 30 Sep 2016 at 14:57 Gary Ogden wrote:
> Is it possible to use the topic filter whitelist within a Kafka Streaming
> application? Or can it only be done in a
You need to use one of the options you've already outlined. Without this
you are just doing fire-and-forget.
On Fri, 16 Sep 2016 at 09:25 Agostino Calamita
wrote:
> Hi,
> I have 2 brokers with a topic with replication factor = 2.
> Brokers are configured with
Hi,
On trunk you can use a regex when creating a stream, i.e.:
builder.stream(Pattern.compile("topic-\\d"));
HTH,
Damian
On Fri, 19 Aug 2016 at 06:50 Kessiler Rodrigues
wrote:
> Hey Drew,
>
> You can easily use a WhiteList passing as parameter your regex pattern.
>
>
Thanks for providing the instructions - appreciated.
On Fri, 5 Aug 2016 at 14:10 Mathieu Fenniak
wrote:
> I took that approach. It was painful, but, ultimately did get me a working
> Windows development environment.
>
> To any who follow in my footsteps, here is
Hi Alex,
Yes, SQL support is something we'd like to add in the future. I'm not sure
when at this stage.
Thanks,
Damian
On Thu, 30 Jun 2016 at 08:41 Alex Glikson wrote:
> Did folks consider adding support in Kafka Streams for Apache Calcite [1],
> for streaming-enabled SQL
+1
On Tue, 21 Jun 2016 at 09:59 Marcus Gründler
wrote:
> Hi Ismael,
>
> thanks for the pointer to the latest WebSphere documentation - I wasn’t
> aware
> of that release.
>
> We currently have customers that run our software frontend on an older
> WebSphere version
Hi Ted - if the data is keyed you can use a key-compacted topic and
essentially keep the data 'forever', i.e., you'll always have the latest
version of the data for a given key. However, you'd still want to back up
the data someplace else just in case.
On 16 February 2016 at 21:25, Ted Swerve
Hi,
I had the same issue and managed to work around it by simulating a
heartbeat to kafka. It works really well, i.e., we have had zero issues
since it was implemented.
I have something like this:
void process() {
records = consumer.poll(timeout)
dispatcher.dispatch(records)
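The sketch is cut off here; a hedged reconstruction of the rest of the loop (dispatcher is a hypothetical async worker, and the method signatures follow the 0.10 consumer API - pausing all partitions and calling poll(0) keeps heartbeats flowing without fetching new records):

while (running) {
    ConsumerRecords<String, String> records = consumer.poll(timeout);
    dispatcher.dispatch(records);              // hand off to worker threads
    while (!dispatcher.isDone()) {             // isDone() is hypothetical
        consumer.pause(consumer.assignment()); // stop fetching
        consumer.poll(0);                      // heartbeat only
    }
    consumer.resume(consumer.assignment());
    consumer.commitSync();
}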
f I lose one broker?
>
> Thanks,
> Sean
>
> On 2/16/16, 9:14 AM, "Damian Guy" <damian@gmail.com> wrote:
>
> >Hi,
> >
> > You need at least as many brokers as the replication factor.
> > A replication factor of 1 means no replication.
> >
> >
Hi,
You need at least as many brokers as the replication factor.
A replication factor of 1 means no replication.
HTH,
Damian
On 16 February 2016 at 14:08, Sean Morris (semorris)
wrote:
> Should your number of brokers be at least one more than your replication
> factor of your topic(s)?
Hi,
Pass the key into the callback you provide to Kafka. You then have it
available when the callback is invoked.
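A hedged sketch (topic and record contents are illustrative; the key is simply captured by the callback's closure):

final ProducerRecord<String, String> record =
    new ProducerRecord<>("my-topic", "the-key", "the-value");
producer.send(record, new Callback() {
    @Override
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        if (exception != null) {
            System.err.println("failed to send key " + record.key());
        }
    }
});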
Cheers,
Damian
On 11 February 2016 at 10:59, Franco Giacosa wrote:
> Hi,
>
> Is there a way to get the record key on the callback of the send() for a
> record?
Hi,
I believe it is a broker property.
It will create the topic with the name you provide.
The topic will not get deleted unless you manually delete it.
It won't get re-created on subsequent calls (unless you've deleted it).
HTH,
Damian
On 20 January 2016 at 13:14, Joe San
;:[]}
On 17 December 2015 at 15:32, Ben Stopford <b...@confluent.io> wrote:
> Hi Damian
>
> The reassignment should treat the offsets topic as any other topic. I did
> a quick test and it seemed to work for me. Do you see anything suspicious
> in the controller log?
>
> B
> >
And in doing so I've answered my own question (I think!) - I don't
believe the topic has been created on that cluster yet...
On 18 December 2015 at 10:56, Damian Guy <damian@gmail.com> wrote:
> I was just trying to get it generate the json for reassignment and the
> output wa
Hi,
We have had some temporary nodes in our Kafka cluster and I now need to
move assigned partitions off of those nodes onto the permanent members. I'm
familiar with the kafka-reassign-partitions script, but... How do I get it
to work with the __consumer_offsets partition? It currently seems to
unication. It seems that the broker got a
> request more than the default allowed size (~10MB). How many
> topic/partitions do you have on this cluster? Do you have clients running
> on the broker host?
>
> Thanks,
>
> Jun
>
>
> On Tue, Nov 17, 2015 at 4:10 AM, Damian G
I would think not
I'm bringing up a new 0.9 cluster and I'm getting the below exception (and
the same thing on all nodes) - the IP address is the IP for the host the
broker is running on. I think DNS is a bit stuffed on these machines and
maybe that is the cause, but... any ideas?
[2015-11-17
Hi,
If you are using the Scala producer then yes, it will drop messages. It will
retry up to the configured number of retries and then throw a FailedToSendMessageException.
This is caught in the ProducerSendThread and logged, you'd see something
like:
"Error in handling batch of 10 events ..."
If you don't want to
You can try altering some config on your producers, see here:
https://kafka.apache.org/documentation.html#producerconfigs
To control how many messages are buffered and how often the buffer is flushed:
queue.buffering.max.ms
queue.buffering.max.messages
To control the behaviour when the buffer
Hi,
Assuming I am using the latest Kafka (trunk), exclusively with the new
consumer, and I want to monitor consumer lag across all groups - how would
I go about discovering the consumer groups? Is there an API call for this?
Thanks,
Damian
I turned off compression and still get duplicates, but only 1 from each
topic.
Should the initial fetch offset for a partition be the committed offset + 1?
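For what it's worth, the new consumer's documented convention is to commit the position of the next record to be read, i.e. the last processed offset + 1; a hedged sketch:

consumer.commitSync(Collections.singletonMap(
    partition, new OffsetAndMetadata(lastProcessedOffset + 1)));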
Thanks,
Damian
On 15 September 2015 at 14:07, Damian Guy <damian@gmail.com> wrote:
> Hi,
>
> I've been trying out the new co
Hi,
I've been trying out the new consumer and have noticed that I get duplicate
messages when i stop the consumer and then restart (different processes,
same consumer group).
I consume all of the messages on the topic and commit the offsets for each
partition and stop the consumer. On the next
Can you do:
producer.send(...)
...
producer.send(...)
producer.flush()
By the time the flush returns all of your messages should have been sent
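A self-contained version of that sketch (broker address, topic, and serializers illustrative):

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer",
    "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer",
    "org.apache.kafka.common.serialization.StringSerializer");
KafkaProducer<String, String> producer = new KafkaProducer<>(props);
producer.send(new ProducerRecord<>("my-topic", "k1", "v1"));
producer.send(new ProducerRecord<>("my-topic", "k2", "v2"));
producer.flush(); // blocks until all buffered records have been sent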
On 8 September 2015 at 11:50, jinxing wrote:
> if I want to send the message synchronously I can do as below:
>
of load etc) or are you just looking for experiences
running 0.8.x with that many producers?
B
On 25 Aug 2015, at 10:29, Damian Guy <damian@gmail.com> wrote:
Hi,
We currently run 0.7.x on our clusters and are now finally getting around
to upgrading to the latest Kafka. One thing
Hi,
We currently run 0.7.x on our clusters and are now finally getting around
to upgrading to the latest Kafka. One thing that has been holding us back is
that we can no longer use a VIP to front the clusters. I understand we
could use a VIP for metadata lookups, but we have 100,000 + producers to