How would time releases relate to versions? (Major, minor, API
compatibility, etc).
On Thu, Aug 11, 2016 at 9:37 AM, Guozhang Wang wrote:
> I think we do not need to make the same guarantee as for "how old of your
> Kafka version that you can upgrade to the latest in one
> 1. … case we need to bump to 1.X
> 2. We do something totally amazing (exactly once?) and decide to bump
> to 1.X to celebrate
> 3. The release manager decides that the features in the release are
> not very exciting and we can go with 0.10.1 (i.e., a very minor release)
>
> Does that
To be clear, I'm not against time-based releases; I just think that the
stated goals are not intrinsic to time-based releases but to the release
process itself (whether it's time-based or not).
The goal of "when will my code get into a release" and the goal of getting
features faster in a release
There are pros and cons to having the headers be layered entirely above
the broker (and hence above the wire-level messages); let's look at the basic ones.
A few pros (of headers being a higher layer thing):
- No broker changes
- No protocol changes
- Messages with headers can work with brokers that don't support them
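Under these assumptions, the layering can be sketched entirely client-side: headers are serialized into the front of the value, and a magic marker lets a consumer distinguish wrapped from plain messages. The marker, the int-key layout, and the function names here are hypothetical, for illustration only.

```python
import struct

MAGIC = 0xCAFE  # hypothetical marker identifying a header-wrapped value


def wrap(headers: dict, payload: bytes) -> bytes:
    """Prepend an int-keyed header block to the payload, entirely client-side."""
    out = struct.pack(">HI", MAGIC, len(headers))  # marker + header count
    for key, value in headers.items():
        out += struct.pack(">iI", key, len(value)) + value
    return out + payload


def unwrap(buf: bytes):
    """Split a value back into (headers, payload); pass plain values through."""
    if len(buf) < 6:
        return {}, buf
    magic, count = struct.unpack_from(">HI", buf, 0)
    if magic != MAGIC:  # plain message from a header-unaware producer
        return {}, buf
    off = 6
    headers = {}
    for _ in range(count):
        key, vlen = struct.unpack_from(">iI", buf, off)
        off += 8
        headers[key] = buf[off:off + vlen]
        off += vlen
    return headers, buf[off:]
```

This also illustrates the main con of the layered approach: a plain value that happens to start with the marker bytes would be misparsed, and the broker can never see or act on the headers.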
On Fri, Oct 7, 2016 at 8:45 AM, Jay Kreps wrote:
> This discussion has come up a number of times and we've always passed.
>
Hopefully this time the arguments will be convincing enough that Kafka can
decide to do something about it.
> One of things that has helped keep
I'm also
1. no (ordered keys)
2. yes (propose key space)
1. I don't think there is going to be much savings in ordering the keys.
I'm assuming some parsing will happen either way. Ordering the keys would
be useful if we were doing linear search on the headers, and even then, the
performance
I think you probably require a MagicByte bump if you expect correct
behavior of the system as a whole.
From a client perspective, you want to make sure that when you deliver a
message, the broker supports the feature you're expecting
(compaction). So, depending on the behavior of the broker
Hey Roger.
The original design involved:
1- a header set per message (an array of key+values)
2- a message level API to set/get headers.
3- byte[] header-values
4- int header-keys
5- headers encoded at the protocol/core level
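A minimal sketch of what items 1 through 4 of that design could look like as an API. The class and method names are illustrative, not Kafka's actual API:

```python
class Message:
    """Sketch of a message with a per-message header set (item 1) and a
    message-level get/set API (item 2), using byte-string values (item 3)
    and int keys (item 4). Names here are hypothetical."""

    def __init__(self, value: bytes):
        self.value = value
        self._headers = []  # item 1: an array of (key, value) pairs

    def set_header(self, key: int, value: bytes) -> None:
        # items 3 and 4: int key, byte[] value; duplicates are allowed
        self._headers.append((key, value))

    def get_headers(self, key: int) -> list:
        """Return all values set under `key`, in insertion order."""
        return [v for k, v in self._headers if k == key]
```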
1- I think most (not all) people would agree that having metadata
From Roger's description:
5a- separate metadata field, built in serialization
5c- separate metadata field, custom serialization
5b- custom serialization (inside V)
5d- built in serialization (inside V)
I added 5d for completeness.
From this perspective I would choose
5a > 5c > 5d > 5b
In
I think it's well known I've been pushing for ints (and I could switch to
16 bit shorts if pressed).
- efficient (space)
- efficient (processing)
- easily partitionable
However, if the only thing that is keeping us from adopting headers is the
use of strings vs ints as keys, then I would cave
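The space argument can be made concrete by comparing the wire size of one header key under each proposal. The 2-byte length prefix for string keys is an assumption for illustration, not a settled encoding:

```python
import struct


def string_key_bytes(key: str) -> bytes:
    """A length-prefixed UTF-8 string key (2-byte length assumed)."""
    encoded = key.encode("utf-8")
    return struct.pack(">H", len(encoded)) + encoded


def int_key_bytes(key: int) -> bytes:
    """A fixed 4-byte int key."""
    return struct.pack(">i", key)


def short_key_bytes(key: int) -> bytes:
    """A fixed 2-byte short key, if 65k distinct keys suffice."""
    return struct.pack(">h", key)
```

A descriptive string key like `"client.trace.id"` costs 17 bytes on every message, versus a constant 4 (or 2) bytes for an int (or short) key; the int key also compares and hashes in one operation instead of a byte-by-byte scan.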
> like)
> > if you want to publish your plugin for others to use. within your org do
> > whatever you want - just know that if you use [some "reserved" range]
> and a
> > future kafka update breaks it, it's your problem. RTFM.
> >
> > personall
> > > > >> >> >> purposes of high performance collections: look at
> > > > >> >> >> https://github.com/leventov/Koloboke)
> > > > >> >> >>
> > > > >> >> >> so to sum up the string vs int debate:
> > > > >> >> >>
keys.
Nacho
On Wed, Nov 9, 2016 at 11:37 AM, Nacho Solis <nso...@linkedin.com> wrote:
> varint encoding was considered, but it's not as beneficial as it looks.
>
> Varints are useful when you're trying to encode numbers from a very large
> range. They provide some space
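The size trade-off is easy to see with a small encoder. This is a generic LEB128-style varint (7 payload bits per byte, continuation bit in the high bit) as used by protobuf, written here only to compare sizes, not Kafka's actual codec:

```python
def varint(n: int) -> bytes:
    """Encode a non-negative int as an unsigned LEB128-style varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F          # low 7 bits of the remaining value
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow: set continuation bit
        else:
            out.append(byte)
            return bytes(out)


for key in (1, 300, 50_000, 3_000_000):
    print(f"key {key}: fixed int32 = 4 bytes, varint = {len(varint(key))} bytes")
```

For a registry of header keys drawn from a small range, a varint saves at most 2-3 bytes per key over a fixed int32, and 0-1 bytes over an int16, at the cost of a branchy, byte-at-a-time decode.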
What are the criteria for keeping things in and out of Kafka: what code goes
in or out, and what is part of the architecture or not?
The question of what goes into a project and what stays out is always
evolving. Different projects treat this in different ways.
Let me paint 2
anyway. It is just a good-to-have tool that
> might be required by quite a few users and there is an active project that
> works on this - https://github.com/confluentinc/kafka-rest
>
>
>
>
> On Fri, Oct 21, 2016 at 11:49 AM, Nacho Solis <nso...@linkedin.com.invalid
Are you saying Kafka REST is subjective but Kafka Streams and Kafka Connect
are not subjective?
> "there are likely places that can live without a rest proxy"
There are also places that can live without Kafka Streams and Kafka Connect.
Nacho
On Fri, Oct 21, 2016 at 11:17 AM, Jun Rao
I think a separate KIP is a good idea as well. Note however that potential
decisions in this KIP could affect the other KIP.
Nacho
On Fri, Oct 21, 2016 at 10:23 AM, Jun Rao wrote:
> Michael,
>
> Yes, doing a separate KIP to address the null payload issue for compacted
>
those as well.
Nacho
On Mon, Oct 31, 2016 at 2:21 PM, Andrey Dyachkov <andrey.dyach...@gmail.com>
wrote:
> Hi Nacho,
>
> yes, exactly.
>
> On Mon, 31 Oct 2016 at 22:10 Nacho Solis <nso...@linkedin.com.invalid>
> wrote:
>
> > Hi Andrew.
> >
> >
Hi Andrew.
Is this what you're saying:
- sometimes you get stuck (as in a blocking call) when you call some
function in the Kafka client
- you can work around this (by wrapping the call) so that if the Kafka
client call gets stuck, your software doesn't get stuck
- you're wondering why the kafka
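The wrapping technique described above can be sketched like this: run the potentially blocking client call on a worker thread so the caller can give up after a deadline. The function name and pool size are illustrative; note that the worker thread itself may stay blocked, only the caller is freed.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

_pool = ThreadPoolExecutor(max_workers=4)


def call_with_timeout(fn, timeout, *args, **kwargs):
    """Invoke fn(*args, **kwargs) on a worker thread; raise TimeoutError
    if it has not returned within `timeout` seconds."""
    future = _pool.submit(fn, *args, **kwargs)
    try:
        return future.result(timeout=timeout)
    except TimeoutError:
        future.cancel()  # best effort; a call already running is not interrupted
        raise
```

The limitation is exactly the one implied in the exchange: the timeout frees your code path, but the stuck call still holds its thread (and any locks or sockets) until it eventually returns.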
Congrats!
On Mon, Oct 31, 2016 at 10:35 AM, Joel Koshy wrote:
> The PMC for Apache Kafka has invited Jiangjie (Becket) Qin to join as a
> committer and we are pleased to announce that he has accepted!
>
> Becket has made significant contributions to Kafka over the last two
[
https://issues.apache.org/jira/browse/KAFKA-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15584156#comment-15584156
]
Nacho Solis commented on KAFKA-1351:
Thanks [~jjkoshy].
Yes, this is a pet peeve of mine. Logging