s
work out for you.
Christian
On Wed, Mar 2, 2016 at 9:52 PM, Jan <cne...@yahoo.com.invalid> wrote:
> Hi folks;
> does anyone know of Kafka's ability to work over Satellite links. We have a
> IoT Telemetry application that uses Satellite communication to send data from
> remote s
Hi folks;
does anyone know of Kafka's ability to work over Satellite links? We have an IoT
Telemetry application that uses Satellite communication to send data from
remote sites to a Central hub.
Any help/ input/ links/ gotchas would be much appreciated.
Regards, Jan
is GC holding things up but I dunno, GC even for a second or
two should not cause a socket failure, just delay the read, though I'm
not an expert on this *at all*.
I'll go over the answers tomorrow more carefully but thanks anyway!
cheers
jan
On 18/04/2017, Serega Sheypak <serega.shey...@gmail.c
d like to know whether it *should* work
in windows.
cheers
jan
On 18/04/2017, Serega Sheypak <serega.shey...@gmail.com> wrote:
> Hi,
>
> [2017-04-17 18:14:05,868] ERROR Error when sending message to topic
> big_ptns1_repl1_nozip with key: null, value: 55 bytes with error:
>
king a mistake, if so what?
thanks
jan
grows.
It's possible kafkacat or other producers would do a better job than
the console producer but I'll try that on linux as getting them
working on windows, meh.
thanks all
jan
On 18/04/2017, David Garcia <dav...@spiceworks.com> wrote:
> The “NewShinyProducer” is also deprecated.
>
could check out <https://svn.apache.org/repos/asf/kafka/> and see
if it has any statements on logo use.
Also top 3 hits of <https://www.google.co.uk/search?q=use+logo+apache>
sound promising but I've not looked at them.
Best I can suggest ATM
jan
On 01/08/2017, Sunil, Rinu <rinu.su...@in.u
I can't help you here but maybe can focus the question - why would you want to?
jan
On 10/08/2017, Sven Ludwig <s_lud...@gmx.de> wrote:
> Hello,
>
> assume that all producers and consumers regarding a topic-partition have
> been shutdown.
>
> Is it possible in this situa
Hi,
Is this the right place to ask about CockroachDB?
(well he started it, officer...)
jan
On 07/07/2017, David Garcia <dav...@spiceworks.com> wrote:
> “…events so timely that the bearing upon of which is not immediately
> apparent and are hidden from cognitive regard; the s
sorry, but maybe it's worth a look through that page.
I have to admit I'd never heard of ECCN classifications and am
surprised it even exists.
cheers
jan
On 27/06/2017, Axelle Margot <axelle.mar...@ericsson.com> wrote:
> Hello,
>
> You were contacted as part of a new project i
dling/embedding of ASF products, encryption
reporting, and shipping documentation."
I agree with you, it seems bizarre, and wrong.
jan
On 28/06/2017, Martin Gainty <mgai...@hotmail.com> wrote:
>
> MG>am requesting clarification below
>
>
ilure.
"
<https://zookeeper.apache.org/doc/r3.4.10/zookeeperStarted.html>
cheers
jan
On 30/04/2017, Michal Borowiecki <michal.borowie...@openbet.com> wrote:
> Svante, I don't share your opinion.
> Having an even number of zookeepers is not a problem in itself, it
> simply
a fuller picture of your setup eg. OS, OS
version, memory, number of cpus, what actual hardware (PCs are very
different from servers), etc
cheers
jan
On 17/05/2017, 陈 建平Chen Jianping <chenjianp...@agora.io> wrote:
> Hi Group,
>
> Recently I am trying to turn Kafka write perform
y expensive (it isn't), that memory mapping is
always cheap (it isn't cheap),"
A bit vague on my part, but HTH anyway
jan
On 12/02/2018, YuFeng Shen <v...@hotmail.com> wrote:
> Hi jan ,
>
> I think the reason is the same as why index file using memory mapped file.
>
> As the mem
I'm not sure I can answer your question, but may I pose another in
return: why do you feel having a memory mapped log file would be a
good thing?
On 09/02/2018, YuFeng Shen wrote:
> Hi Experts,
>
> We know that kafka uses memory mapped files for its index files, however
> it's
st suggest a bit of reading and some
guesswork.
cheers
jan
On 13/02/2018, YuFeng Shen <v...@hotmail.com> wrote:
> If that is like what you said , why index file use the memory mapped file?
>
> ____
> From: jan <rtm4...@googlemail.com>
> Se
t argue but I'd warn that some abstractions can be
expensive and I suspect shapeless may be one. Also, for parsers may I
suggest looking at ANTLR?
Idiomatic scala code can be expensive *as currently implemented*. Just
understand that cost by profiling, and de-idiomise in hot code as
needed.
It's
) but was informed I now had two (n00bness, and
an unsupported platform).
tl;dr if it doesn't work on X, we need to say so clearly. It's just...
good manners, surely?
cheers
jan
On 07/08/2018, M. Manna wrote:
> By fully broken, i mean not designed and tested to work on Windows.
>
> On Tue, 7 Aug
aveat in the documentation, please, right at the top?
jan
On 07/08/2018, M. Manna wrote:
> The answer is - Absolutely not. If you don’t have Linux rack, or Kubernetes
> deployment -it will not work on Windows as guaranteed.
>
> I know this because I have tried to make it work for the
This is an excellent suggestion and I intend to do so henceforth
(thanks!), but it would be an adjunct to my request rather than the
answer; it still needs to be made clear in the docs/faq that you
*can't* use windows directly.
jan
On 07/08/2018, Rahul Singh wrote:
> I would recommend us
the FAQ (which I did read), and also have other people need to
ask here whether windows is supported?
Why?
This is just nuts.
cheers
jan
On 05/09/2018, Liam Clarke wrote:
> Hi Jan,
>
> I'd presume that downloading an archive and seeing a bunch of .sh files
> would imply that Kafk
on ANTLR stuff (which I'll return to when bits of me stop
hurting) and currently am trying to suss if permutation encoding can
be done from L1 cache for large permutations in less than a single
DRAM access time. Look up cuckoo filters to see why.
jan
On 05/09/2018, Liam Clarke wrote:
> Hi
I may have missed this (I'm missing the first few messages), so sorry
in advance if I have, but what OS are you using?
Kafka does not work well on windows; I had problems using it that
sounded a little like this (just a little though) when on win.
jan
On 30/11/2018, Satendra Pratap Singh wrote
nts to mull over. Doubt I can suggest anything further. Good luck.
jan
On 02/09/2020, cedric sende lubuele wrote:
> Let me introduce myself, my name is Cedric and I am a network engineer
> passionate about new technologies and as part of my new activity, I am
> interested in Big Data. Curre
It might be best to do a web search for companies that know this stuff
and speak to them.
re. kafka over UDP I dunno but perhaps instead do normal kafka talking
to a proxy machine via TCP and have that proxy forward traffic via
UDP.
If that works, it would simplify the problem, I guess.
cheers
jan
tacting (I'm not affiliated in any way).
The first question I'd ask myself is, would a burn-to-dvd solution
work? Failing that, basic stuff like email?
In any case, what if the data's corrupted? How can the servers detect
and re-request? What are you protecting against exactly? Stuff like
that.
jan
O
performance
persistence). Maybe we could cooperate and share some code.
More details about MapDB:
https://github.com/jankotek/mapdb
Regards,
Jan Kotek
Hi,
I am starting with kafka. We use version 0.7.2 currently. Does anyone know
whether automatic producer load balancing based on zookeeper is supported by
the c++ client?
Thank you!
-- Jan
.
Not sure how it applies to this case.
Regards,
Jan Kotek
On Friday 02 August 2013 22:19:34 Jay Kreps wrote:
Chris commented in another thread about the poor compression performance in
0.8, even with snappy.
Indeed if I run the linear log write throughput test on my laptop I see
75MB/sec
the serializers back in.
Looking forward to replies that can show me the benefit of serializers
and especially how the
Type = topic relationship can be handled nicely.
Best
Jan
On 25.11.2014 02:58, Jun Rao wrote:
Hi, Everyone,
I'd like to start a discussion on whether it makes sense to add
, these would be
some additional milliseconds to respond faster if we could spare
de/recompression.
Those are my thoughts about server side de/recompression. It would be
great if I could get some responses and thoughts back.
Jan
On 07.11.2014 00:23, Jay Kreps wrote:
I suspect it is possible
far and the application would just
pull up to this point?
Looking forward to some recommendations and comments.
Best
Jan
Hey,
try not to have newlines \n in your JSON file. I think the parser dies on
those and then claims the file is empty.
Best
Jan
On 13.04.2015 12:06, Ashutosh Kumar wrote:
Probably you should first try to generate proposed plan using --generate
option and then edit that if needed.
thanks
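For illustration, a sketch of the --generate workflow mentioned above (topic name, broker ids and ZooKeeper address are placeholders, not values from the thread); keeping the JSON on a single line sidesteps the "file is empty" symptom:

  topics-to-move.json (one line):
  {"version":1,"topics":[{"topic":"my-topic"}]}

  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
    --topics-to-move-json-file topics-to-move.json \
    --broker-list "1,2,3" --generate

The tool then prints a proposed assignment JSON, which can be edited and fed back with --execute.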
Sounds good, thanks for the clarification.
Jan
On 17 June 2015 at 22:09, Jason Gustafson ja...@confluent.io wrote:
We have a couple open tickets to address these issues (see KAFKA-1894 and
KAFKA-2168). It's definitely something we want to fix.
On Wed, Jun 17, 2015 at 4:21 AM, Jan Stette
on KafkaConsumer), so the client holds a
lock while sitting in this wait. This means that if another thread tries
to call close(), which is all synchronized, this thread will also be
blocked.
Holding locks while performing network I/O seems like a bad idea - is this
something that's planned to be fixed?
Jan
the
calling thread being blocked forever. Is this possible with the current
version of the client? (Snapshot as of 16/6/15). If not, is that something
that's planned for the future?
Jan
Hi,
you might want to have a look here:
http://kafka.apache.org/documentation.html#topic-config
_segment.ms_ and _segment.bytes_ should allow you to control the
time/size when segments are rolled.
Best
Jan
On 16.06.2015 14:05, Shayne S wrote:
Some further information, and is this a bug
segment will never be compacted.
Thanks!
Shayne
On Wed, Jun 17, 2015 at 5:58 AM, Jan Filipiak jan.filip...@trivago.com
wrote:
Hi,
you might want to have a look here:
http://kafka.apache.org/documentation.html#topic-config
_segment.ms_ and _segment.bytes_ should allow you to control the
time/size
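For illustration, a sketch of how those topic-level settings could be applied (topic name, values and ZooKeeper address are placeholders; the exact tool and flags depend on the broker version):

  bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
    --entity-type topics --entity-name my-topic \
    --add-config segment.ms=3600000,segment.bytes=536870912

A segment is then rolled whenever either the time or the size threshold is reached, which in turn makes the closed segment eligible for compaction or deletion.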
Hi,
just out of curiosity and because of Eugene's email, I browsed
Kafka-1477 and it talks about SSL alot. So I thought I might throw in
this http://tools.ietf.org/html/rfc7568 RFC. It basically says move away
from SSL now and only do TLS. The title of the ticket still mentions TLS
but
Or one would check the file size before.
Please let me know if you would consider this useful and worth a
feature ticket in Jira.
Thank you
Jan
Sorry wrong mailing list
On 24.07.2015 16:44, Jan Filipiak wrote:
Hello hadoop users,
I have an idea about a small feature for the getmerge tool. I recently
needed to use the new line option -nl because the files I
needed to merge simply didn't have one.
I was merging all the files
tricks and find out what the lag
is caused by and then fix whatever causes the lag. It's 1 am in Germany,
so there might be off-by-one errors in the algorithm above.
Best
Jan
On 04.11.2015 18:13, Otis Gospodnetić wrote:
This is an ancient thread, but I thought I'd point to
http
Hi,
just want to pick this up again. You can always use more partitions to
reduce the number of keys handled by a single broker and parallelize the
compaction. So with a sufficient number of machines and the ability to
partition, I don't see you running into problems.
Jan
On 07.10.2015 05:34
)
}
logger.info("The offset of the record we just sent is: " +
recordMetadata.offset())
()
}
})
I am using the metrics() member to periodically look at
"buffer-available-bytes" and I see it is constantly decreasing over time as
messages are being sent.
Jan
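A minimal sketch of that kind of metrics() polling, not taken from the thread; it assumes an already constructed producer and that the client still exposes Metric.value(), as the 0.8/0.9-era clients discussed here do:

  // Hypothetical helper, for illustration only: print the producer's
  // buffer-available-bytes metric (the metric name used by the Java producer).
  static void logBufferAvailableBytes(org.apache.kafka.clients.producer.Producer<?, ?> producer) {
      for (java.util.Map.Entry<org.apache.kafka.common.MetricName, ? extends org.apache.kafka.common.Metric> e
              : producer.metrics().entrySet()) {
          if ("buffer-available-bytes".equals(e.getKey().name())) {
              System.out.println("buffer-available-bytes = " + e.getValue().value());
          }
      }
  }

A steadily shrinking value is consistent with records piling up in the accumulator instead of being freed after sends, which matches the symptom described above.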
to be sent to the
partitions with '-1' and the producer buffer becomes exhausted after a while
(maybe that is related?)
Jan
Topic:capture PartitionCount:16 ReplicationFactor:1 Configs:
Topic: capture Partition: 0 Leader: 1 Replicas: 1 Isr: 1
perspective I am unable to detect that
messages are not being sent out. Is this normal behavior and am I simply doing
something wrong, or could it be a producer bug?
Jan
Config and code again:
ProducerConfig.BOOTSTRAP_SERVERS_CONFIG -> brokers,
ProducerConfig.RETRIES_CONFIG ->
how to detect a broker being down)
Jan
> On 22 Nov 2015, at 21:42, Todd Palino <tpal...@gmail.com> wrote:
>
> Hopefully one of the developers can jump in here. I believe there is a
> future you can use to get the errors back from the producer. In addition,
> you should check
will be starving. To solve this issue you need to
increase your partition count.
Regards
Jan
> On 14 Jun 2016, at 13:07, Joris Peeters <j.peet...@wintoncapital.com> wrote:
>
> I suppose the consumers would also need to all belong to the same consumer
> group for your expectation to
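For reference, a hedged example of raising the partition count with the stock, ZooKeeper-based tooling of that era (topic name, count and address are placeholders); note that partitions can only be increased, and existing keyed data is not redistributed:

  bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
    --topic my-topic --partitions 32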
Hi,
I have a producer question: Is the producer (specifically the normal Java
producer) using the file system in any way?
If it does so, will a producer work after losing this file system or its
content (for example in a containerization scenario)?
Jan
topic as an input again after a restart, or how does it load the
whole table again? Can someone explain the rules to persist or restore a
KTable to or from a changelog?
Best regards
Jan
Hi Ismael,
Unfortunately Java 8 doesn't play nice with FreeBSD. We have seen a lot of JVM
crashes running our 0.9 brokers on Java 8... Java 7 on the other hand is
totally stable.
Until these issues have been addressed, this would cause some serious issues
for us.
Regards
Jan
Hi,
I am very excited about all of this in general. Sadly I haven't had the
time to really take a deep look. One thing that is/was always a
difficult topic, resolving many-to-many relationships in table x table x
table joins, is the repartitioning that has to happen at some point.
From the
, "true");
consumerProperties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
Regards
Jan
> On 2 Mar 2016, at 15:14, Péricé Robin <perice.ro...@gmail.com> wrote:
>
> Hello everybody,
>
> I'm testing the new 0.9.0.1 API and I try to make
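A minimal, self-contained sketch of the 0.9-style consumer configuration those two properties belong to; broker address, group id and topic name are placeholders, not values from the thread:

  import java.util.Collections;
  import java.util.Properties;
  import org.apache.kafka.clients.consumer.ConsumerConfig;
  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.clients.consumer.ConsumerRecords;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.serialization.StringDeserializer;

  public class AutoCommitConsumerSketch {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
          props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                   // placeholder
          props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
          props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
          props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
          props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
          try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
              consumer.subscribe(Collections.singletonList("my-topic"));           // placeholder topic
              ConsumerRecords<String, String> records = consumer.poll(1000);       // poll(long) in the 0.9/0.10 API
              for (ConsumerRecord<String, String> record : records)
                  System.out.println(record.offset() + ": " + record.value());
          }
      }
  }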
Hey guys,
Is someone using the kafka rest proxy from confluent?
We have an issue where all messages for a certain topic end up in the same
partition. Has anyone faced this issue before? We're not using a custom
partitioner class, so it's using the default partitioner. We're sending
You have to allow topic deletion in server.properties first.
delete.topic.enable = true
Regards
Jan
> On 11 May 2016, at 09:48, Snehalata Nagaje
> <snehalata.nag...@harbingergroup.com> wrote:
>
>
>
> Hi ,
>
> Can we delete certain topic in kafka?
>
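For illustration, the two steps this advice usually boils down to (topic name and ZooKeeper address are placeholders; the brokers need to pick up the setting before a delete request is honoured):

  # server.properties on every broker
  delete.topic.enable=true

  # then mark the topic for deletion
  bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic my-topic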
other Kafka related
tools.
Regards
Jan
Sent from my iPhone
> On 19 Apr 2016, at 08:02, Ratha v <vijayara...@gmail.com> wrote:
>
> Hi all;
>
> I try to publish/consume my java objects to kafka. I use Avro schema.
>
> My basic program works fine. In my program i us
using a proprietary message format, that's why we don't
have any plans (or capacity) to open source it at the moment.
However, building that tool was straightforward; it shouldn't take you more
than a day or two to build something similar. Ping me if you need some help.
Regards
Jan
> On 11
Hi,
sorry, and use the consumer group tool instead of the offset checker
On 02.02.2017 20:08, Jan Filipiak wrote:
Hi,
if it's a kafka streams app, it's most likely going to store its offsets
in kafka rather than zookeeper.
You can use the --new-consumer option to check for kafka stored
Sometimes I wake up cause I dreamed that this had gone down:
https://cwiki.apache.org/confluence/display/KAFKA/Hierarchical+Topics
On 02.02.2017 19:07, Roger Vandusen wrote:
Ah, yes, I see your point and use case, thanks for the feedback.
On 2/2/17, 11:02 AM, "Damian Guy"
Hi,
if it's a kafka streams app, it's most likely going to store its offsets
in kafka rather than zookeeper.
You can use the --new-consumer option to check for kafka stored offsets.
Best Jan
On 01.02.2017 21:14, Ara Ebrahimi wrote:
Hi,
For a subset of our topics we get this error
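A hedged example of the tool invocation being referred to; group name and broker address are placeholders, and for a Kafka Streams app the group name is its application.id. The --new-consumer flag applies to the 0.9/0.10-era tooling:

  bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server localhost:9092 \
    --describe --group my-streams-app-id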
the group has as auto mechanism
I would definitely prefer a line oriented format rather than json. I
ramped my https://stedolan.github.io/jq/ skills up
so I can do some partition assignments, but it's no joy; better grep, awk ...
Best Jan
On 08.02.2017 03:43, Jorge Esteban Quilcate Otoya wrote:
Hi
nd ask for the store with this name.
So one could help the user stay in DSL land and therefore maybe
confuse him less.
Best Jan
#DeathToIQMoreAndBetterConnectors :)
On 27.01.2017 16:51, Damian Guy wrote:
I think Jan is saying that they don't always need to be materialized, i.e.,
filter
Hi Eno,
thanks for putting into different points. I want to put a few remarks
inline.
Best Jan
On 30.01.2017 12:19, Eno Thereska wrote:
So I think there are several important discussion threads that are emerging
here. Let me try to tease them apart:
1. inconsistency in what
time
Looking forward to your opinions
Best Jan
#DeathToIQMoreAndBetterConnectors
On 30.01.2017 10:42, Eno Thereska wrote:
Hi there,
The inconsistency will be resolved, whether with materialize or overloaded
methods.
With the discussion on the DSL & stores I feel we've gone in a slig
mechanism by the Interactive Query Handle" under the
hood it can use the same mechanism as the PIP people again.
I hope you see my point :)
Best Jan
#DeathToIQMoreAndBetterConnectors :)
On 27.01.2017 21:59, Matthias J. Sax wrote:
Jan,
the IQ feature is not limited to Streams D
Hi Gwen,
this is not a hint as in "make it smarter", this is a hint as to "make it
work", which should not require hinting.
Best Jan
On 27.01.2017 22:35, Gwen Shapira wrote:
Another vote in favor of overloading. I think the streams API actually
trains users qui
.
Best Jan
On 28.01.2017 09:36, Eno Thereska wrote:
Hi Jan,
I understand your concern. One implication of not passing any store name and
just getting an IQ handle is that all KTables would need to be materialised.
Currently the store name (or proposed .materialize() call) act as hints on
whether
materialize method being
required! Hence I suggest leaving it alone.
regarding removing the others I don't have strong opinions and it seems
to be unrelated.
Best Jan
On 26.01.2017 20:48, Eno Thereska wrote:
Forwarding this thread to the users list too in case people would like to
comment. It is also
Hi,
yes the ticket is exactly about what you want to do. The lengthy
discussion is mainly about what the key of the output KTable is.
@guozhang would you be interested in seeing what we did so far?
best Jan
On 21.02.2017 13:10, Frank Lyaruu wrote:
I've read that JIRA (although I don't
you could use the current
DSL to a greater extent.
Best Jan
On 21.02.2017 13:10, Frank Lyaruu wrote:
I've read that JIRA (although I don't understand every single thing), and I
got the feeling it is not exactly the same problem.
I am aware of the Global Tables, and I've tried that first,
automatically. + It will do the
naturally correct thing if you update parent_id in the child table.
Upstream support would also be helpful as the statestores are changelog
even though we can use the intermediate topic for state store high
availability.
Best Jan
On 21.02.2017 20:15, Guozhang Wang wrote
about their use case? Or can you share it?
Best Jan
On 24.08.2016 16:47, Jun Rao wrote:
Jan,
Thanks for the reply. I actually wasn't sure what your main concern on
time-based rolling is. Just a couple of clarifications. (1) Time-based
rolling doesn't control how long a segment will be retained
ogs as the broker thinks its millis.
So that would probably have caused us at least one outage if a big
producer had upgraded and done this, IMO a likely mistake.
I'd just hoped for a more obvious kill-switch, so I didn't need to bother
that much.
Best Jan
On 29.08.2016 19:36, Jun Rao wrote:
Hi Jun,
thanks a lot for the hint, I'll check it out when I get a free minute!
Best Jan
On 07.09.2016 00:35, Jun Rao wrote:
Jan,
For the time rolling issue, Jiangjie has committed a fix (
https://issues.apache.org/jira/browse/KAFKA-4099) to trunk. Perhaps you can
help test out trunk and see
Hi Gourab,
Check this out:
https://github.com/linkedin/Burrow <https://github.com/linkedin/Burrow>
Regards
Jan
> On 29 Sep 2016, at 15:47, Gourab Chowdhury <gourab@gmail.com> wrote:
>
> I can get the *Lag* of offsets with the following command:-
>
Can you
advise on the proposed re-factoring? What are the chances of getting it
upstream if I could pull it off? (unlikely)
Thanks for all the effort you put into listening to my concerns.
Highly appreciated!
Best Jan
On 25.08.2016 23:36, Jun Rao wrote:
Jan,
Thanks a lot for the feedback. N
.
Thanks!
Regards
Jan
And you also still need to find the correct broker for each http call,
which is also hard when programming against the http api
On 26.10.2016 09:46, Jan Filipiak wrote:
So happy to see this reply.
I do think the same, actually makes it way harder to properly batch up
records on http
So happy to see this reply.
I do think the same, actually makes it way harder to properly batch up
records on http, as kafka core would need to know how to split your payload.
It would help people do the wrong thing IMO
best Jan
On 25.10.2016 23:58, Jay Kreps wrote:
-1
I think the REST
Hi,
I was just pointed to this: https://www.vectorlogo.zone/logos/apache_kafka/
in case someone else is looking for the same thing! Thanks a lot
Best Jan
On 01.12.2016 13:05, Jan Filipiak wrote:
Hi Everyone,
we want to print some big banners of the Kafka logo to decorate our
offices. Can anyone
Hi Everyone,
we want to print some big banners of the Kafka logo to decorate our
offices. Can anyone help me find a version
of the kafka logo that would still look nice printed onto 2x4m flags?
Highly appreciated!
Best Jan
.1.RC0.
Regards
Jan
> On 22 Dec 2016, at 18:16, Ismael Juma <ism...@juma.me.uk> wrote:
>
> Hi Valentin,
>
> Is inter.broker.protocol.version set correctly in brokers 1 and 2? It
> should be 0.10.0 so that they can talk to the older broker without issue.
>
> Ismael
Regardless of how useful you find the tech radar,
well deserved! Even though we all here agree that trial or adopt is in reach
https://www.thoughtworks.com/radar/platforms/kafka-streams
Best Jan
tors
maintain a Store and provide a ValueGetterSupplier.
Does this make sense to you?
Best Jan
On 02.08.2017 18:09, Bill Bejeck wrote:
Hi Jan,
Thanks for the effort in putting your thoughts down on paper.
Comparing what I see from your proposal and what is presented in
KIP-182, one of the
'Buffered' idea
seems ideal here.
Please have a look. Looking forward to your opinions.
Best Jan
On 21.06.2017 17:24, Eno Thereska wrote:
(cc’ing user-list too)
Given that we already have StateStoreSuppliers that are configurable using the
fluent-like API, probably it’s worth discussing
questions, can always approach me.
Otherwise I am just going to drink the kool-aid now. :(
Best Jan
On 08.08.2017 20:37, Guozhang Wang wrote:
Hello Jan,
Thanks for your feedback. Trying to explain them a bit more here since I
think there are still a bit mis-communication here:
Here are a few things I
specify your pain more precisely maybe we can work around it.
Best Jan
On 10.07.2017 10:31, Dmitriy Vsekhvalnov wrote:
Guys, let me up this one again. Still looking for comments about
kafka-consumer-groups.sh
tool.
Thank you.
On Fri, Jul 7, 2017 at 3:14 PM, Dmitriy Vsekhvalnov <dvsekh
ards 3am.
The example I provided was
streams.$applicationid.stores.$storename.inmemory = false
streams.$applicationid.stores.$storename.cachesize = 40k
for the configs. The Query Handle thing makes sense hopefully.
Best Jan
-Matthias
On 7/8/17 2:23 AM, Jan Filipiak wrote:
Hi Matthias
builder to
reduce the overloaded functions as well. WDYT?
Guozhang
On Tue, Jul 4, 2017 at 1:40 AM, Damian Guy <damian@gmail.com> wrote:
Hi Jan,
Thanks very much for the input.
On Tue, 4 Jul 2017 at 08:54 Jan Filipiak <jan.filip...@trivago.com>
wrote:
Hi Damian,
I do see
de the specific ones?)
Does this make sense to people? What pieces should I outline with code
(time is currently sparse :( but I can pull off some smaller examples I
guess)
Best Jan
On 08.07.2017 01:23, Matthias J. Sax wrote:
It's two issues we want to tackle
- too many overloads (for som
ve
problems and especially why I don't think we can win tackling only
point 1 in the long run.
If anything would need an implementation draft please feel free to ask
me to provide one. Initially the proposal hopefully would get the job
done of just removing clutter.
Looking forward to your c
.
The whole logic about partitioners and what else does not change.
Hope this makes my points more clear.
Best Jan
On 19.07.2017 12:03, Damian Guy wrote:
Hi Jan,
Thanks for your input. Comments inline
On Tue, 18 Jul 2017 at 15:21 Jan Filipiak <jan.filip...@trivago.com> wrote:
H
really work well for them.
Best Jan
On 30.06.2017 09:31, Damian Guy wrote:
Thanks Matthias
On Fri, 30 Jun 2017 at 08:05 Matthias J. Sax <matth...@confluent.io> wrote:
I am just catching up on this thread, so sorry for the long email in
advance... Also, it's to some extent
Hi Eno,
I am less interested in the user facing interface but more in the actual
implementation. Any hints where I can follow the discussion on this? As
I still want to discuss upstreaming of KAFKA-3705 with someone
Best Jan
On 21.06.2017 17:24, Eno Thereska wrote:
(cc’ing user-list too
lly) will be later. As I
strongly recommend stopping the usage of ChangeSerde and having a "properly"
repartitioned topic. That is just sane IMO
Best Jan
On 22.06.2017 11:54, Eno Thereska wrote:
Note that while I agree with the initial proposal (withKeySerdes, withJoinType,
et
is only going to cover Serde exceptions or MessageSet
Iterator exceptions as well? Speaking of checksum errors. We can't rely on
the deserializer to properly throw when we hand it data with a bad
checksum + the checksum errors are the only bad pills I have seen in
production until this point.
Best Jan
+1
On 26.05.2017 18:36, Damian Guy wrote:
In that case, though, every access to that key is doomed to failure as the
database is corrupted. So i think it should probably die in a steaming heap
at that point!
On Fri, 26 May 2017 at 17:33 Eno Thereska wrote:
Hi Damian,
and
statestore or topic name + byte[] byte[] for serializers? maybe passing
in the used serdes?
Best Jan
On 25.05.2017 11:47, Eno Thereska wrote:
Hi there,
I’ve added a KIP on improving exception handling in streams:
KIP-161: streams record processing exception handlers.
https