also need a Task, Config etc.
Is there some simpler way, some kind of hook?
Thanks in advance,
Jan
) {
return null;
}
@Override
public void close(){}
}
And setting these Environment Variables in Kafka Connect
- CONNECT_CONFIG_PROVIDERS=tracing
- CONNECT_CONFIG_PROVIDERS_TRACING_CLASS=org.example.TracingConfigProvider
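The truncated provider above could be fleshed out roughly as follows. This is a minimal sketch only: `MiniConfigProvider` is a hypothetical local stand-in so the example compiles without the Connect jars on the classpath; a real implementation would implement `org.apache.kafka.common.config.provider.ConfigProvider` and return `ConfigData` rather than a `String`.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for org.apache.kafka.common.config.provider.ConfigProvider,
// used only to keep this sketch self-contained.
interface MiniConfigProvider {
    String get(String path, String key);
    void close();
}

// Sketch of the tracing idea: record every config lookup Connect performs,
// resolve nothing. Class name taken from the mail above.
public class TracingConfigProviderSketch implements MiniConfigProvider {
    // Every config lookup is recorded here.
    final List<String> trace = new ArrayList<>();

    @Override
    public String get(String path, String key) {
        trace.add(path + ":" + key);
        return null; // observe only, as in the fragment above
    }

    @Override
    public void close() {}

    public static void main(String[] args) {
        TracingConfigProviderSketch p = new TracingConfigProviderSketch();
        p.get("connector", "tasks.max");
        System.out.println(p.trace);
    }
}
```

Whether Connect tolerates a provider that resolves nothing is untested here; treat this as an illustration of the hook, not a drop-in class.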
Best regards,
Jan
From: Jakub Scholz
Date: Monday, 20
u use for logging in to other Apache services." as I don't have
such an account (btw. not sure which account is meant here) and I am not
sure this is intended for non-ASF users.
Kind regards,
Jan
On Wed, Feb 1, 2023 at 10:41 PM Bill Bejeck
wrote:
> Jan,
>
> Your account is created, and y
Hi Bill,
thank you! My preferred
* username: dahoc (or alternatively: JanHendriks)
* display name: Jan
* email address: dahoc3...@gmail.com
Best,
Jan
On Wed, Feb 1, 2023 at 4:07 PM Bill Bejeck
wrote:
> Hi Jan,
>
> If you can provide your preferred username, display name, and email
>
Hi,
I would hereby like to request a Jira account in order to be able to file a
ticket related to dependency convergence errors, see e.g.
https://github.com/spring-projects/spring-kafka/issues/2561 that directed
me to this project.
Kind regards,
Jan
Nevermind,
https://github.com/spring-projects/spring-kafka is the place to raise this.
BR,
Jan
On Mon, Jan 30, 2023 at 1:33 PM Jan Hendriks wrote:
> Hi,
> we have issues with dependency convergence with Spring-Kafka-test and the
> maven enforcer plugin.
> A reproducer can be foun
a-metadata:3.1.2
As I don't have a Jira account for this project, I cannot open up an issue,
and I could not find such an issue in the project Jira or user mailing list.
Kind regards,
Jan
It might be best to do a web search for companies that know this stuff
and speak to them.
re. kafka over UDP I dunno but perhaps instead do normal kafka talking
to a proxy machine via TCP and have that proxy forward traffic via
UDP.
If that works, would simplify the problem I guess.
cheers
jan
tacting (I'm not affiliated in any way).
The first question I'd ask myself is, would a burn-to-dvd solution
work? Failing that, basic stuff like email?
In any case, what if the data's corrupted, how can the servers detect
and re-request? What are you protecting against exactly? Stuff like
that.
jan
O
aggregating
and joining users of the same team.
I ended up using the second approach, but I wonder if that was really a
good idea b/c the entire streaming logic does become quite involved.
What is your experience with this type of data?
Best regards
Jan
nts to mull over. Doubt I can suggest anything further. Good luck.
jan
On 02/09/2020, cedric sende lubuele wrote:
> Let me introduce myself, my name is Cedric and I am a network engineer
> passionate about new technologies and as part of my new activity, I am
> interested in Big Data. Curre
Ok, Matthias,
thanks for the clarification. This makes sense to me.
Glad I learned something new about kafka-streams. Even if it was the hard
way ;-)
Greetings
Jan
On Wed, Apr 1, 2020 at 11:52 PM Matthias J. Sax wrote:
> That is expected behavior.
>
> And yes, there is a `Transformer`
e even though another instance
did a *store.put(key, value)* before. Is this expected behaviour? Is there
a transformer for each partition and does it get its own state store?
Best regards
Jan
On Fri, Mar 27, 2020 at 12:59 AM Matthias J. Sax wrote:
> Your code looks correct to me. If
alue for the same key is
still null.
Any ideas why this is?
Best regards
Jan
get a read-only reference to the
state stores using queryable stores but that won't do.
Jan
On Thu, Jan 2, 2020 at 11:17 PM Alex Brekken wrote:
> Hi Jan, unfortunately there is no easy or automatic way to do this.
> Publishing null values directly to the changelog topics will remove them
-streams. And what about the state store?
Best regards
Jan
The default partitioner takes a hash of the key of a topic to determine the
partition number.
It would be useful for a key to be able to specify the object on which the
default partitioner should base its hash. This would allow us to use
different composite keys and still be certain that
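The (truncated) suggestion above amounts to hashing only a designated field of a composite key. A sketch follows; the field values and the use of `String.hashCode` are illustrative assumptions, not Kafka's actual algorithm (the default partitioner murmur2-hashes the serialized key bytes):

```java
// Sketch of routing on one field of a composite key.
public class CompositeKeyPartitioning {
    static int partitionFor(String routingField, int numPartitions) {
        // Mask the sign bit so the modulo result is non-negative.
        return (routingField.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int partitions = 12;
        // Two composite keys ("user-42", month) sharing the routing field
        // land on the same partition when only that field is hashed:
        int p1 = partitionFor("user-42", partitions); // key ("user-42", "2019-01")
        int p2 = partitionFor("user-42", partitions); // key ("user-42", "2019-02")
        System.out.println(p1 == p2); // true by construction
    }
}
```

Hashing the full composite key would give no such guarantee, which is the problem the mail raises.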
-+Add+timestamps+to+Kafka+message
[6] https://issues.apache.org/jira/browse/KAFKA-5353
[7] http://kafka.apache.org/documentation/#recordbatch
[8]
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-Messagesets
[9] https://github.com/klarna
On 2019/06/27 15:39:29, Jan Kosecki wrote:
> Hi,
>
> I have a hyperledger fabric cluster that uses a cluster of 3 kafka nodes +
> 3 zookeepers.
> Fabric doesn't support pruning yet so any change to recorded offsets is
> detected by it and it fails to reconnect to kafk
ion factor of 3.
I've added logs from kafka-0 that contain any reference to audit topic.
Any suggestions, why the topic's offset has been truncated to 1?
Thanks in advance,
Jan
For now, just use the name it gets automatically, or crack the
AbstractStream open with reflection ;)
307 is doing it the wrong way again, just make the name accessible instead
of making the users put them :face_with_rolling_eyes:
On 08.02.2019 02:36, Guozhang Wang wrote:
> Hi Nan,
>
> Glad it helps
Congratz!
On 15.01.2019 23:44, Jason Gustafson wrote:
> Hi All,
>
> The PMC for Apache Kafka has invited Vahid Hashemian as a project committer
> and
> we are
> pleased to announce that he has accepted!
>
> Vahid has made numerous contributions to the Kafka community over the past
> few years.
tention time and throughput and try to come out at ~50GB
at rest. If you need more throughput you can still use more partitions then
best Jan
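The sizing rule above can be turned into a quick back-of-the-envelope calculation. The ~50 GB-per-partition target comes from the mail; the throughput and retention figures below are made-up inputs for illustration:

```java
// Estimate partition count so each partition holds ~targetGbPerPartition at rest.
public class PartitionSizing {
    static long partitionsNeeded(double mbPerSecond, double retentionHours,
                                 double targetGbPerPartition) {
        // Total data at rest: throughput * retention window.
        double totalGb = mbPerSecond * 3600 * retentionHours / 1024;
        return Math.max(1, (long) Math.ceil(totalGb / targetGbPerPartition));
    }

    public static void main(String[] args) {
        // e.g. 10 MB/s with 7 days retention is roughly 5.9 TB at rest.
        System.out.println(partitionsNeeded(10, 7 * 24, 50));
    }
}
```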
I may have missed this (I'm missing the first few messages), so sorry
in advance if I have, but what OS are you using?
Kafka does not work well on windows, I had problems using it that
sounds a little like this (just a little though) when on win.
jan
On 30/11/2018, Satendra Pratap Singh wrote
on ANTLR stuff (which I'll return to when bits of me stop
hurting) and currently am trying to suss if permutation encoding can
be done from L1 cache for large permutations in less than a single
DRAM access time. Look up cuckoo filters to see why.
jan
On 05/09/2018, Liam Clarke wrote:
> Hi
the FAQ (which I did read), and also have other people need to
ask here whether windows is supported?
Why?
This is just nuts.
cheers
jan
On 05/09/2018, Liam Clarke wrote:
> Hi Jan,
>
> I'd presume that downloading an archive and seeing a bunch of .sh files
> would imply that Kafk
) but was informed I now had two (n00bness, and
an unsupported platform).
tl;dr if it doesn't work on X, we need to say so clearly. It's just...
good manners, surely?
cheers
jan
On 07/08/2018, M. Manna wrote:
> By fully broken, i mean not designed and tested to work on Windows.
>
> On Tue, 7 Aug
This is an excellent suggestion and I intend to do so henceforth
(thanks!), but it would be an adjunct to my request rather than the
answer; it still needs to be made clear in the docs/faq that you
*can't* use windows directly.
jan
On 07/08/2018, Rahul Singh wrote:
> I would recommend us
aveat in the documentation, please, right at the top?
jan
On 07/08/2018, M. Manna wrote:
> The answer is - Absolutely not. If you don’t have Linux rack, or Kubernetes
> deployment -it will not work on Windows as guaranteed.
>
> I know this because I have tried to make it work for the
t argue but I'd warn that some abstractions can be
expensive and I suspect shapeless may be one. Also, for parsers may I
suggest looking at ANTLR?
Idiomatic scala code can be expensive *as currently implemented*. Just
understand that cost by profiling, and de-idiomise in hot code as
needed.
It's
wrote:
Thanks Jan. We have 9 broker-zookeeper setup in production and during
monthly maintenance we need to shut it down gracefully (or reasonably) to
do our work.
Are you saying that it's okay not to shut down the entire cluster?
Also, will this hold true even when we are trying to do rolling
Hi,
this is expected. A graceful shutdown means the broker only shuts
down when it is not the leader of any partition.
Therefore you should not be able to gracefully shut down your entire
cluster.
Hope that helps
Best Jan
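The behaviour described above is the broker's controlled shutdown. The relevant server.properties knobs look roughly like this (names and defaults quoted from memory from the broker config docs, so treat as a sketch):

```properties
# Try to migrate partition leadership away before the broker exits
controlled.shutdown.enable=true
# Retry if leadership cannot be moved yet
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
```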
On 09.05.2018 12:02, M. Manna wrote:
Hello,
I have
st suggest a bit of reading and some
guesswork.
cheers
jan
On 13/02/2018, YuFeng Shen <v...@hotmail.com> wrote:
> If that is like what you said , why index file use the memory mapped file?
>
> ____
> From: jan <rtm4...@googlemail.com>
> Se
I would encourage you to do so.
I also think it's not reasonable behavior
On 13.02.2018 11:28, Wouter Bancken wrote:
We have upgraded our Kafka version as an attempt to solve this issue.
However, the issue is still present in Kafka 1.0.0.
Can I log a bug for this in JIRA?
Wouter
On 5 February
y expensive (it isn't), that memory mapping is
always cheap (it isn't cheap),"
A bit vague on my part, but HTH anyway
jan
On 12/02/2018, YuFeng Shen <v...@hotmail.com> wrote:
> Hi jan ,
>
> I think the reason is the same as why index file using memory mapped file.
>
> As the mem
I'm not sure I can answer your question, but may I pose another in
return: why do you feel having a memory mapped log file would be a
good thing?
On 09/02/2018, YuFeng Shen wrote:
> Hi Experts,
>
> We know that kafka use memory mapped files for it's index files ,however
> it's
Hi,
brokers still try to do a graceful shutdown, I suppose?
It would only shut down if it is not the leader of any partition anymore.
Can you verify: there are other brokers alive that took over leadership?
and the broker in question stepped down as a leader for all partitions?
Best Jan
Hi Peter,
glad it helped,
these are the preferred ways indeed.
On 07.12.2017 15:58, Peter Figliozzi wrote:
Thanks Jan, super helpful! To summarize (I hope I've got it right), there
are only two ways for external applications to access data derived from a
KTable:
1. Inside the streams
this Store with, say, a REST or any other RPC interface, to let
applications from outside your JVM query it.
So I would say the blog post still applies quite well.
Hope this helps
Best Jan
On 07.12.2017 04:59, Peter Figliozzi wrote:
I've written a Streams application which creates a KTable
Hi,
two questions. Is your MirrorMaker collocated with the source or the target?
what are the send and receive buffer sizes on the connections that do span
across WAN?
Hope we can get you some help.
Best jan
On 06.12.2017 14:36, Xu, Zhaohui wrote:
Any update on this issue?
We also run
Hope this helps
Best Jan
On 03.12.2017 20:27, Dmitry Minkovsky wrote:
This is a pretty stupid question. Most likely I should verify these by
observation, but really I want to verify that my understanding of the
documentation is correct:
Suppose I have topic configurations like:
retention.ms
optimisation, but my opinions on it are not too high.
Hope that helps, just keep the questions coming, also check if you might
want to join confluentcommunity on slack.
Could never have imagined that something like an insurance can really be
modelled as 4 streams ;)
Best Jan
On 30.11.2017 21:07
Hi,
Haven't checked your code. But from what you describe you should be fine.
Upgrading the version might help here and there but should still work
with 0.10
I guess.
Best Jan
On 30.11.2017 19:16, Artur Mrozowski wrote:
Thank you Damian, it was very helpful.
I have implemented my solution
https://engineering.linkedin.com/blog/2017/08/open-sourcing-kafka-cruise-control
could also handle node failures. But usually this is not necessary. The
hop across the broker is usually just too efficient
to have this kind of fuzz going on.
Hope this can convince you to try it out.
of GB per day for us.
Hope this helps.
Best Jan
On 29.11.2017 15:10, Adrienne Kole wrote:
Hi,
The purpose of this email is to get overall intuition for the future plans
of streams library.
The main question is that, will it be a single threaded application in the
long run and serve
Hi,
I would probably recommend going for 1 instance. You can bump a few
thread configs to match your hardware better.
Best Jan
On 06.11.2017 12:23, chidigam . wrote:
Hi All,
Let say, I have big machine, which having 120GB RAM, with lot of cores,
and very high disk capacity.
How many
Thanks for the remarks. hope I didn't miss any.
Not even sure if it makes sense to introduce A and B or just stick with
"this ktable", "other ktable"
Thank you
Jan
On 27.10.2017 06:58, Ted Yu wrote:
Do you mind addressing my previous comments ?
http://searc
discussion and vote on a solution is exactly what is
needed to bring this feature into kafka-streams. I am looking forward
to everyones opinion!
Please keep the discussion on the mailing list rather than commenting on
the wiki (wiki discussions get unwieldy fast).
Best
Jan
.
Given the log sizes you're dealing with, I am very confident that this is
your issue.
Best Jan
On 25.10.2017 12:21, Elmar Weber wrote:
Hi,
On 10/25/2017 12:15 PM, Xin Li wrote:
> I think that is a bug, and should be fixed in this task
https://issues.apache.org/jira/browse/KAFKA-6030.
a different connect string. That should do what you
want instantly
Best Jan
On 16.09.2017 22:51, M. Manna wrote:
Yes I have, I do need to build and run Schema Registry as a pre-requisite
isn't that correct? because the QuickStart seems to start AVRO - without
AVRO you need your own implementation
I can't help you here but maybe can focus the question - why would you want to?
jan
On 10/08/2017, Sven Ludwig <s_lud...@gmx.de> wrote:
> Hello,
>
> assume that all producers and consumers regarding a topic-partition have
> been shutdown.
>
> Is it possible in this situa
questions, can always approach me.
Otherwise I am just going to drink the kool-aid now. :(
Best Jan
On 08.08.2017 20:37, Guozhang Wang wrote:
Hello Jan,
Thanks for your feedback. Trying to explain them a bit more here since I
think there are still a bit mis-communication here:
Here are a few things I
tors
maintain a Store and provide a ValueGetterSupplier.
Does this make sense to you?
Best Jan
On 02.08.2017 18:09, Bill Bejeck wrote:
Hi Jan,
Thanks for the effort in putting your thoughts down on paper.
Comparing what I see from your proposal and what is presented in
KIP-182, one of the
'Buffered' idea
seems ideal here.
please have a look. Looking forward for your opinions.
Best Jan
On 21.06.2017 17:24, Eno Thereska wrote:
(cc’ing user-list too)
Given that we already have StateStoreSuppliers that are configurable using the
fluent-like API, probably it’s worth discussing
could checkout <https://svn.apache.org/repos/asf/kafka/> and see
if it has any statements on logo use.
Also top 3 hits of <https://www.google.co.uk/search?q=use+logo+apache>
sound promising but I've not looked at them.
Best I can suggest ATM
jan
On 01/08/2017, Sunil, Rinu <rinu.su...@in.u
.
The whole logic about partitioners and what else does not change.
Hope this makes my points more clear.
Best Jan
On 19.07.2017 12:03, Damian Guy wrote:
Hi Jan,
Thanks for your input. Comments inline
On Tue, 18 Jul 2017 at 15:21 Jan Filipiak <jan.filip...@trivago.com> wrote:
H
ve
problems and especially why i don't think we can win tackiling only
point 1 in the long run.
If anything would need an implementation draft please feel free to ask
me to provide one. Initially the proposal hopefully would get the job
done of just removing clutter.
Looking forward to your c
specify your pain more precisely maybe we can work around it.
Best Jan
On 10.07.2017 10:31, Dmitriy Vsekhvalnov wrote:
Guys, let me up this one again. Still looking for comments about
kafka-consumer-groups.sh
tool.
Thank you.
On Fri, Jul 7, 2017 at 3:14 PM, Dmitriy Vsekhvalnov <dvsekh
ards 3am.
The example I provided was
streams.$applicationid.stores.$storename.inmemory = false
streams.$applicationid.stores.$storename.cachesize = 40k
for the configs. The Query Handle thing make sense hopefully.
Best Jan
-Matthias
On 7/8/17 2:23 AM, Jan Filipiak wrote:
Hi Matthias
de the specific ones?)
Does this make sense to people? What pieces should I outline with code
(time is currently sparse :( but I can pull of some smaller examples i
guess)
Best Jan
On 08.07.2017 01:23, Matthias J. Sax wrote:
It's two issues we want to tackle
- too many overload (for som
Hi,
I'd is this the right place to ask about cockroachDB?
(well he started it, officer...)
jan
On 07/07/2017, David Garcia <dav...@spiceworks.com> wrote:
> “…events so timely that the bearing upon of which is not immediately
> apparent and are hidden from cognitive regard; the s
builder to
reduce the overloaded functions as well. WDYT?
Guozhang
On Tue, Jul 4, 2017 at 1:40 AM, Damian Guy <damian@gmail.com> wrote:
Hi Jan,
Thanks very much for the input.
On Tue, 4 Jul 2017 at 08:54 Jan Filipiak <jan.filip...@trivago.com>
wrote:
Hi Damian,
I do see
really work well for them.
Best Jan
On 30.06.2017 09:31, Damian Guy wrote:
Thanks Matthias
On Fri, 30 Jun 2017 at 08:05 Matthias J. Sax <matth...@confluent.io> wrote:
I am just catching up on this thread, so sorry for the long email in
advance... Also, it's to some extend
dling/embedding of ASF products, encryption
reporting, and shipping documentation."
I agree with you, it seems bizarre, and wrong.
jan
On 28/06/2017, Martin Gainty <mgai...@hotmail.com> wrote:
>
> MG>am requesting clarification below
>
>
sorry, but maybe it's worth a look through that page.
I have to admit I'd never heard of ECCN classifications and am
surprised it even exists.
cheers
jan
On 27/06/2017, Axelle Margot <axelle.mar...@ericsson.com> wrote:
> Hello,
>
> You were contacted as part of a new project i
lly) will be later. As I
strongly recommend stopping the usage of ChangeSerde and having a "properly"
repartitioned topic. That is just sane IMO
Best Jan
On 22.06.2017 11:54, Eno Thereska wrote:
Note that while I agree with the initial proposal (withKeySerdes, withJoinType,
et
Hi Eno,
I am less interested in the user facing interface but more in the actual
implementation. Any hints where I can follow the discussion on this? As
I still want to discuss upstreaming of KAFKA-3705 with someone
Best Jan
On 21.06.2017 17:24, Eno Thereska wrote:
(cc’ing user-list too
Depends, embedded postgress puts you into the same spot.
But if you use your state store change log to materialize into a
postgress; that might work out decently.
Current JDBC doesn't support delete, which is an issue, but writing a
custom sink is not too hard.
Best Jan
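The "custom sink" mentioned above mainly needs to treat changelog tombstones as deletes. A sketch follows, with an in-memory map standing in for the Postgres table; a real sink would issue JDBC upsert/delete statements instead, and all names here are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Tombstone-aware sink logic: null value = delete, anything else = upsert.
public class TombstoneSinkSketch {
    // Stands in for the target Postgres table.
    final Map<String, String> table = new HashMap<>();

    void apply(String key, String value) {
        if (value == null) {
            table.remove(key);     // the delete branch the stock JDBC sink lacked
        } else {
            table.put(key, value); // upsert
        }
    }

    public static void main(String[] args) {
        TombstoneSinkSketch sink = new TombstoneSinkSketch();
        sink.apply("order-1", "OPEN");
        sink.apply("order-1", null); // tombstone from the changelog
        System.out.println(sink.table.containsKey("order-1")); // false
    }
}
```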
On 07.06.2017 23:47
Hi,
have you thought about using connect to put data into a store that is
more reasonable for your kind of query requirements?
Best Jan
On 07.06.2017 00:29, Steven Schlansker wrote:
On Jun 6, 2017, at 2:52 PM, Damian Guy <damian@gmail.com> wrote:
Steven,
In practice, data sho
Hi Eno,
On 07.06.2017 22:49, Eno Thereska wrote:
Comments inline:
On 5 Jun 2017, at 18:19, Jan Filipiak <jan.filip...@trivago.com> wrote:
Hi
just my few thoughts
On 05.06.2017 11:44, Eno Thereska wrote:
Hi there,
Sorry for the late reply, I was out this past week. Looks lik
Hi
just my few thoughts
On 05.06.2017 11:44, Eno Thereska wrote:
Hi there,
Sorry for the late reply, I was out this past week. Looks like good progress
was made with the discussions either way. Let me recap a couple of points I saw
into one big reply:
1. Jan mentioned CRC errors. I think
old to zero would cover all cases with 1
implementation. It is still beneficial to have it pluggable
Again CRC-Errors are the only bad pills we saw in production for now.
Best Jan
On 02.06.2017 17:37, Jay Kreps wrote:
Jan, I agree with you philosophically. I think one practical challenge has
IMHO you're doing it wrong then. + building too much support into the kafka
ecosystem is very counterproductive in fostering a happy userbase
On 02.06.2017 13:15, Damian Guy wrote:
Jan, you have a choice to Fail fast if you want. This is about giving
people options and there are times when you
addition to the kafka
toolkit that I can think of. It just doesn't fit the architecture
of having clients falling behind is a valid option.
Further, I mentioned already the only bad pill I've seen so far is CRC
errors. Any plans for those?
Best Jan
On 02.06.2017 11:34, Damian Guy wrote:
I agree
a fuller picture of your setup eg. OS, OS
version, memory, number of cpus, what actual hardware (PCs are very
different from servers), etc
cheers
jan
On 17/05/2017, 陈 建平Chen Jianping <chenjianp...@agora.io> wrote:
> Hi Group,
>
> Recently I am trying to turn Kafka write perform
on the weekend. I'll fix it
then in a few minutes rather than spend 2 weeks ordering dead letters. (where
reprocessing might be even the faster fix)
Best Jan
On 29.05.2017 20:23, Jay Kreps wrote:
- I think we should hold off on retries unless we have worked out the
full usage pattern
+1
On 26.05.2017 18:36, Damian Guy wrote:
In that case, though, every access to that key is doomed to failure as the
database is corrupted. So i think it should probably die in a steaming heap
at that point!
On Fri, 26 May 2017 at 17:33 Eno Thereska wrote:
Hi Damian,
is only going to cover Serde exceptions or MessageSet
Iterator exceptions as well? Speaking of checksum errors: we can't rely on
the deserializer to properly throw when we hand it data with a bad
checksum + the checksum errors are the only bad pills I have seen in
production until this point.
Best Jan
f the time.
Best Jan
PS.:
Hope you get my point. I am mostly complaining about

public interface RecordExceptionHandler {
    /**
     * Inspect a record and the exception received
     */
    HandlerResponse handle(that guy here >>>>>>> ConsumerRecord<byte[], byt
and
statestore or topic name + byte[] byte[] for serializers? maybe passing
in the used serdes?
Best Jan
On 25.05.2017 11:47, Eno Thereska wrote:
Hi there,
I’ve added a KIP on improving exception handling in streams:
KIP-161: streams record processing exception handlers.
https
ilure.
"
<https://zookeeper.apache.org/doc/r3.4.10/zookeeperStarted.html>
cheers
jan
On 30/04/2017, Michal Borowiecki <michal.borowie...@openbet.com> wrote:
> Svante, I don't share your opinion.
> Having an even number of zookeepers is not a problem in itself, it
> simply
grows.
It's possible kafkacat or other producers would do a better job than
the console producer but I'll try that on linux as getting them
working on windows, meh.
thanks all
jan
On 18/04/2017, David Garcia <dav...@spiceworks.com> wrote:
> The “NewShinyProducer” is also deprecated.
>
is GC holding things up but I dunno, GC even for a second or
two should not cause a socket failure, just delay the read, though I'm
not an expert on this *at all*.
I'll go over the answers tomorrow more carefully but thanks anyway!
cheers
jan
On 18/04/2017, Serega Sheypak <serega.shey...@gmail.c
d like to know whether it *should* work
in windows.
cheers
jan
On 18/04/2017, Serega Sheypak <serega.shey...@gmail.com> wrote:
> Hi,
>
> [2017-04-17 18:14:05,868] ERROR Error when sending message to topic
> big_ptns1_repl1_nozip with key: null, value: 55 bytes with error:
>
king a mistake, if so what?
thanks
jan
Regardless of how usefull you find the tech radar.
Well deserved! even though we all here agree that trial or adopt is in reach
https://www.thoughtworks.com/radar/platforms/kafka-streams
Best Jan
automatically. + It will do the
naturally correct thing if you update parent_id in the child table.
Upstream support would also be helpful as the statestores are changelogged,
even though we can use the intermediate topic for state store high
availability.
Best Jan
On 21.02.2017 20:15, Guozhang Wang wrote
you could use the current
DSL to a greater extend.
Best Jan
On 21.02.2017 13:10, Frank Lyaruu wrote:
I've read that JIRA (although I don't understand every single thing), and I
got the feeling it is not exactly the same problem.
I am aware of the Global Tables, and I've tried that first,
Hi,
yes the ticket is exactly about what you want to do. The lengthy
discussion is mainly about what the key of the output KTable is.
@gouzhang would you be interested in seeing what we did so far?
best Jan
On 21.02.2017 13:10, Frank Lyaruu wrote:
I've read that JIRA (although I don't
the group has as auto mechanism
I would definitely prefer a line oriented format rather than json. I
ramped my https://stedolan.github.io/jq/ skills up
so I can do some partition assignments but its no joy, better grep awk ...
Best Jan
On 08.02.2017 03:43, Jorge Esteban Quilcate Otoya wrote:
Hi
Hi,
sorry and using the consumer group tool, instead of the offset checker
On 02.02.2017 20:08, Jan Filipiak wrote:
Hi,
if its a kafka stream app, its most likely going to store its offsets
in kafka rather than zookeeper.
You can use the --new-consumer option to check for kafka stored
Hi,
if its a kafka stream app, its most likely going to store its offsets
in kafka rather than zookeeper.
You can use the --new-consumer option to check for kafka stored offsets.
Best Jan
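For reference, the 0.10-era invocation looked roughly like this (flags quoted from memory; `--new-consumer` was later deprecated once bootstrap-server became the default, and the group name is a placeholder):

```shell
# Offsets stored in Kafka (streams apps) rather than ZooKeeper:
bin/kafka-consumer-groups.sh --new-consumer \
  --bootstrap-server localhost:9092 \
  --describe --group my-streams-app
```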
On 01.02.2017 21:14, Ara Ebrahimi wrote:
Hi,
For a subset of our topics we get this error
Sometimes I wake up cause I dreamed that this had gone down:
https://cwiki.apache.org/confluence/display/KAFKA/Hierarchical+Topics
On 02.02.2017 19:07, Roger Vandusen wrote:
Ah, yes, I see your point and use case, thanks for the feedback.
On 2/2/17, 11:02 AM, "Damian Guy"
Hi Eno,
thanks for putting into different points. I want to put a few remarks
inline.
Best Jan
On 30.01.2017 12:19, Eno Thereska wrote:
So I think there are several important discussion threads that are emerging
here. Let me try to tease them apart:
1. inconsistency in what
time
Looking forward to your opinions
Best Jan
#DeathToIQMoreAndBetterConnectors
On 30.01.2017 10:42, Eno Thereska wrote:
Hi there,
The inconsistency will be resolved, whether with materialize or overloaded
methods.
With the discussion on the DSL & stores I feel we've gone in a slig
.
Best Jan
On 28.01.2017 09:36, Eno Thereska wrote:
Hi Jan,
I understand your concern. One implication of not passing any store name and
just getting an IQ handle is that all KTables would need to be materialised.
Currently the store name (or proposed .materialize() call) act as hints on
whether