A minor request about the RC releases in the context of Maven - Right now,
these RC releases are being published to the (Apache staging) Maven
repository using the version number of the final release. So RC1,
RC2, RC3 and so on all end up in the Maven repo as 1.0.0 [1]. For using
these as dependenci
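To make the concern concrete, consuming a staged RC usually means adding a temporary repository entry to the pom; the staging-group URL and the version below are assumptions to adapt from the vote thread. Note that nothing in the coordinates says whether the artifact is RC1, RC2, or RC3:

```xml
<!-- Hypothetical pom.xml fragment: pull a staged RC for testing.
     URL and version are assumptions; adjust to the vote thread. -->
<repositories>
  <repository>
    <id>apache-staging</id>
    <url>https://repository.apache.org/content/groups/staging/</url>
  </repository>
</repositories>
<!-- ... elsewhere in the pom ... -->
<dependencies>
  <dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <!-- RC1, RC2, RC3 are all staged under the final version number -->
    <version>1.0.0</version>
  </dependency>
</dependencies>
```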
We have been using Kafka in some of our projects for the past couple of
years. Our experience with Kafka and SSL had shown some performance
issues when we had seriously tested it (which admittedly was around a
year back). Our basic tests did show that things had improved over time
with newer ve
On Mon, Oct 30, 2017 at 8:08 AM, Jaikiran Pai wrote:
We have been using Kafka in some of our projects for the past couple of
years. Our experience with Kafka and SSL had shown some performance issues
when we had seriously tested it (which admittedly was around a year back).
Our basic tests did show t
Congratulations Kafka team on the release. Happy to see Kafka reach this
milestone. It has been a pleasure using Kafka and also interacting with
the Kafka team.
-Jaikiran
On 01/11/17 7:57 PM, Guozhang Wang wrote:
The Apache Kafka community is pleased to announce the release for Apache
Kafka
Congratulations Onur!
-Jaikiran
On 06/11/17 10:54 PM, Jun Rao wrote:
Hi, everyone,
The PMC of Apache Kafka is pleased to announce a new Kafka committer Onur
Karaman.
Onur's most significant work is the improvement of the Kafka controller, which
is the brain of a Kafka cluster. Over time, we have
Hi Ismael,
Are there any new features other than the language specific changes that
are being planned for 2.0.0? Also, when 2.x gets released, will the 1.x
series see continued bug fixes and releases in the community or is the
plan to have one single main version that gets continuous updates a
Radu Radutiu wrote:
If you test with Java 9 please make sure to use an accelerated cipher suite
(e.g. one that uses AES GCM such as TLS_RSA_WITH_AES_128_GCM_SHA256).
Radu
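Kafka exposes the suite restriction Radu mentions via `ssl.cipher.suites` on both brokers and clients; a sketch (pinning a single suite here is an assumption for benchmarking, not a general recommendation):

```
# server.properties / client config sketch: restrict negotiation to an
# AES-GCM suite for the performance test (assumes both sides support it)
ssl.cipher.suites=TLS_RSA_WITH_AES_128_GCM_SHA256
```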
On Mon, Oct 30, 2017 at 1:49 PM, Jaikiran Pai
wrote:
I haven't yet had a chance to try out Java 9, but that's def
Adding the Kafka dev list to cc, hoping they would answer this question.
-Jaikiran
On Friday 10 June 2016 11:18 AM, Jaikiran Pai wrote:
We are using 0.9.0.1 of Kafka server and (Java) clients. Our (Java)
consumers are assigned to dynamic runtime generated groups i.e. the
consumer group name is
I think this will help a lot in contributions. Some of my local changes
that I want to contribute back have been pending because I sometimes
switch machines and I then have to go through setting up the Ruby/Python
and other stuff for the current review process. Using just GitHub is
going to hel
here and let me know if they run into any issues.
Thanks,
Jaikiran Pai
leave out the reference to the /tmp folder.
- Jaikiran
---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30403/#review71557
-------
this patch
to incorporate Neha's review comments. This is now ready to be reviewed again.
- Jaikiran Pai
On May 18, 2015, 4:42 a.m., Jaikiran Pai wrote:
>
> ---
> This is an automatically generated e-mail. To reply,
Hello everyone,
What is the next planned version of Kafka going to be? I vaguely
remember reading in some mail that it will be 0.9 and "trunk" is where
it would be released from. I was just updating my local workspace for
contributing and noticed that the gradle.properties in trunk notes the
ea6d165d8e5c3146d2c65e8ad1a513308334bf6f
Diff: https://reviews.apache.org/r/34394/diff/
Testing
---
Thanks,
Jaikiran Pai
've also checked that shutting down
Kafka first, when zookeeper is still up, works fine too.
Thanks,
Jaikiran Pai
es to reconnect to zookeeper for a maximum of
5 seconds before cleanly shutting down). I've also checked that shutting down
Kafka first, when zookeeper is still up, works fine too.
Thanks,
Jaikiran Pai
---
Thanks,
Jaikiran Pai
Could someone please look at these few review requests and let me know
if any changes are needed:
https://reviews.apache.org/r/34394/ related to
https://issues.apache.org/jira/browse/KAFKA-1907
https://reviews.apache.org/r/30403/ related to
https://issues.apache.org/jira/browse/KAFKA-1906
Th
Hi Joe,
Comments inline.
On Friday 29 May 2015 12:15 PM, Joe Stein wrote:
see below
On Fri, May 29, 2015 at 2:25 AM, Jaikiran Pai
wrote:
Could someone please look at these few review requests and let me know if
any changes are needed:
https://reviews.apache.org/r/34394/ related to
https
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34394/#review85691
-------
On May 25, 2015, 3:49 a.m., Jaikiran Pai wrote:
>
> ---
>
We happened to run into a disk space usage issue with Kafka 0.10.0.1
(the version we are using) on one of our production setups this morning.
Turns out (log4j) logging from Kafka ended up using 81G and more of disk
space. Looking at the files, I see the controller.log itself is 30G and
more (fo
On Tuesday 13 December 2016 03:02 PM, Jaikiran Pai wrote:
log4j.logger.kafka.controller=TRACE, controllerAppender
log4j.additivity.kafka.controller=false
log4j.logger.state.change.logger=TRACE, stateChangeAppender
log4j.additivity.state.change.logger=false
Is it intentional to have
ACE level which contribute to this, before filing
a JIRA.
-Jaikiran
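If the TRACE defaults quoted above are the culprit, one hedged fix is to lower those loggers in config/log4j.properties; INFO is an assumption about acceptable verbosity, and the appender names must match the shipped file:

```
# config/log4j.properties sketch: lower controller / state-change verbosity
log4j.logger.kafka.controller=INFO, controllerAppender
log4j.additivity.kafka.controller=false
log4j.logger.state.change.logger=INFO, stateChangeAppender
log4j.additivity.state.change.logger=false
```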
Ismael
On Tue, Dec 13, 2016 at 9:32 AM, Jaikiran Pai
wrote:
We happened to run into a disk space usage issue with Kafka 0.10.0.1 (the
version we are using) on one of our production setups this morning. Turns
out (log4
(Re)posting this from the user mailing list to dev mailing list, hoping
for some inputs from the Kafka dev team:
We are on Kafka 0.10.0.1 (server and client) and use Java
consumer/producer APIs. We have an application where we create Kafka
topics dynamically (using the AdminUtils Java API) and
for this, I'll send
a note on what approach I settled on.
-Jaikiran
On Friday 24 February 2017 12:08 PM, James Cheng wrote:
On Feb 23, 2017, at 10:03 PM, Jaikiran Pai wrote:
(Re)posting this from the user mailing list to dev mailing list, hoping for
some inputs from the Kafka dev
ran/902c1eadbfdd66466c8d8ecbd81416bf
-Jaikiran
On Friday 24 February 2017 12:29 PM, Jaikiran Pai wrote:
James, thank you very much for this explanation and I now understand
the situation much more clearly. I wasn't aware that the consumer's
metadata.max.age.ms could play a role in th
Jaikiran,
What about
1) create topic
2) create consumer1 and do consumer1.partitionsFor() until it succeeds
3) close consumer1
4) create consumer2 and do consumer2.subscribe()
-James
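Step 2 above is a retry-until-ready loop; a minimal sketch of that logic in Python, where the helper name, timeout, and polling interval are illustrative - the real step would call `consumer1.partitionsFor(topic)` on the Java client:

```python
import time

def wait_until(action, timeout_s=30.0, interval_s=0.5):
    """Retry `action` until it returns a truthy result or the timeout expires.
    Stands in for step 2: the real code would pass a callable that invokes
    consumer1.partitionsFor(topic) (names here are illustrative)."""
    deadline = time.monotonic() + timeout_s
    while True:
        result = action()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"no result within {timeout_s:.1f}s")
        time.sleep(interval_s)
```

The loop returns the first non-empty result, so the caller only subscribes once the topic's partition metadata has actually propagated.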
An update on this. This workaround has worked out fine and our initial
tests so far show that it gets us pas
https://issues.apache.org/jira/browse/KAFKA-4631 and the discussion in the
PR https://github.com/apache/kafka/pull/2622 for details.
Regards,
Rajini
On Thu, Mar 2, 2017 at 4:35 AM, Jaikiran Pai
wrote:
For future reference - I asked this question on dev mailing list and based
on the discussion there was ab
of a
jira.ini file, during patch submission
Diffs (updated)
-
kafka-patch-review.py b7f132f9d210b8648859ab8f9c89f30ec128ab38
Diff: https://reviews.apache.org/r/29756/diff/
Testing
---
Thanks,
Jaikiran Pai
A user :nehanarkhede
> > JIRA password :
> > Failed to login to the JIRA instance
> > 'JIRA' object has no attribute 'current_user'
> >
> > Maybe a different version of the jira package we use renamed the user
> > field ?
>
I have been looking at some unassigned JIRAs to work on during some
spare time and found this one
https://issues.apache.org/jira/browse/KAFKA-1837. As I note in that
JIRA, I can see why this happens and have a potential fix for it. But to
first reproduce the issue and then verify the fix, I hav
I just downloaded the Kafka binary and am trying this on my 32-bit JVM
(Java 7). Trying to start Zookeeper or the Kafka server keeps failing with
"Unrecognized VM option 'UseCompressedOops'":
./zookeeper-server-start.sh ../config/zookeeper.properties
Unrecognized VM option 'UseCompressedOops'
Error
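The flag comes from the JVM options the startup scripts inject via `kafka-run-class.sh`; a possible workaround on a 32-bit JVM, assuming this Kafka version only applies its `-XX:+UseCompressedOops` default when the environment variable is unset, is to override it before starting:

```
# Assumption: bin/kafka-run-class.sh only sets its -XX:+UseCompressedOops
# default when KAFKA_JVM_PERFORMANCE_OPTS is unset; check the script first.
export KAFKA_JVM_PERFORMANCE_OPTS="-server"
./zookeeper-server-start.sh ../config/zookeeper.properties
```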
ntegration/kafka/api/ProducerFailureHandlingTest.scala
420a1dd30264c72704cc383a4161034c7922177d
Diff: https://reviews.apache.org/r/30013/diff/
Testing
---
Thanks,
Jaikiran Pai
I could reproduce this consistently when that test *method* is run
individually. From what I could gather, the __consumer_offset topic
(being accessed in that test) had 50 partitions (default) which took a
while for each of them to be assigned a leader and do other
initialization and that timed
ntegration/kafka/api/ProducerFailureHandlingTest.scala
420a1dd30264c72704cc383a4161034c7922177d
Diff: https://reviews.apache.org/r/30026/diff/
Testing
---
Thanks,
Jaikiran Pai
e open up a new JIRA and attach your patch there.
Your review patch is attached to KAFKA-1867, which is a different issue.
Thanks,
Harsha
On Sun, Jan 18, 2015, at 07:16 AM, Jaikiran Pai wrote:
I could reproduce this consistently when that test *method* is run
individually. From what I co
-------
On Jan. 19, 2015, 10:09 a.m., Jaikiran Pai wrote:
>
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30026/
> ---
/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala
420a1dd30264c72704cc383a4161034c7922177d
Diff: https://reviews.apache.org/r/30026/diff/
Testing
---
Thanks,
Jaikiran Pai
t seems it's simpler to just change the defaultOffsetPartition in the
> > broker config to 1. The test will run faster that way.
>
> Jaikiran Pai wrote:
> That sounds good too. I don't yet have enough knowledge of the code to be
> sure it wouldn't introduce other issues, so wen
I often see the following exception while running some tests
(ProducerFailureHandlingTest.testNoResponse is one such instance):
[2015-01-19 22:30:24,257] ERROR [Controller-0-to-broker-1-send-thread],
Controller 0 fails to send a request to broker
id:1,host:localhost,port:56729 (kafka.controll
/
Testing
---
Thanks,
Jaikiran Pai
way to fix it.
"
Guozhang
On Mon, Jan 19, 2015 at 9:07 AM, Jaikiran Pai
wrote:
I often see the following exception while running some tests
(ProducerFailureHandlingTest.testNoResponse is one such instance):
[2015-01-19 22:30:24,257] ERROR [Controller-0-to-broker-1-send-thread],
Control
/diff/
Testing
---
Thanks,
Jaikiran Pai
eWaitTime
PASSED
```
```
./gradlew core:test --tests
kafka.api.test.ProducerFailureHandlingTest.testCannotSendToInternalTopic
kafka.api.test.ProducerFailureHandlingTest > testCannotSendToInternalTopic
PASSED
```
Thanks,
Jaikiran Pai
so haven't added any new tests.
Thanks,
Jaikiran Pai
<https://reviews.apache.org/r/27799/#comment113741>
Hi Jay,
I think doing this unmuteAll in a finally block might be a good idea, since
that way we don't end up with a muted selector when/if something goes wrong
during that polling.
- Jaikiran Pai
On Jan. 21, 2015, 4:4
> On Jan. 22, 2015, 3:14 a.m., Jaikiran Pai wrote:
> > clients/src/main/java/org/apache/kafka/clients/NetworkClient.java, line 253
> > <https://reviews.apache.org/r/27799/diff/4/?file=828376#file828376line253>
> >
> > Hi Jay,
> >
> > I
I was just playing around with the RC2 of 0.8.2 and noticed that if I
shutdown zookeeper first I can't shutdown Kafka server at all since it
goes into a never ending attempt to reconnect with zookeeper. I had to
kill the Kafka process to stop it. I tried it against trunk too and
there too I see
d it now and updated
the patch. Thanks Jay!
- Jaikiran
---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29755/#review69529
--------
it:
https://reviews.apache.org/r/30078/#review69534
---
On Jan. 24, 2015, 11:52 a.m., Jaikiran Pai wrote:
>
> ---
> This is an automatically generated e-mail. To reply, visit:
> https
Hi Jay,
I spent some more time over this today and went back to the original
thread which brought up the issue with file leaks [1]. I think that
output of lsof in that logs has a very important hint:
/home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/
think you are right, good catch. It could be that this user deleted the
files manually, but I wonder if there isn't some way that is a Kafka
bug--e.g. if multiple types of retention policies kick in at the same
time
do we synchronize that properly?
-Jay
On Sat, Jan 24, 2015 at 9:26 PM, Jaik
ote:
For a clean shutdown, the broker tries to talk to the controller and also
issues reads to zookeeper. Possibly that is where it tries to reconnect to
zk. It will help to look at the thread dump.
Thanks
Neha
On Fri, Jan 23, 2015 at 8:53 PM, Jaikiran Pai
wrote:
I was just playing around with the R
82dce80d553957d8b5776a9e140c346d4e07f766
Diff: https://reviews.apache.org/r/30403/diff/
Testing
---
Thanks,
Jaikiran Pai
his change on a Windows OS and would appreciate it if
someone can test this there and let me know if they run into any issues.
Thanks,
Jaikiran Pai
HAVE to
be the kafka directory. This works for the simple download case but may
make some packaging stuff harder for other use cases.
-Jay
On Mon, Jan 26, 2015 at 5:54 AM, Jaikiran Pai
wrote:
Having looked at the logs the user posted, I don't think this specific
issue has to do with /tmp
t/kafka installation path.
- Jaikiran
---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30403/#review70163
---
On Jan.
the change will be limited to just the scripts.
- Jaikiran
---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30403/#review70168
--------
ME}/data
> > may be enough?
> >
> > overriding configuration from server.properties in code can be very
> > unintuitive.
>
> Jaikiran Pai wrote:
> That sounds a good idea. I wasn't aware of the --override option. I'll
> give that a try and if it works t
://reviews.apache.org/r/30477/diff/
Testing
---
Thanks,
Jaikiran Pai
ra thread during shutdown to trigger timeouts, but I'd imagine we
probably have other threads that could end up blocking in similar ways.
I filed https://issues.apache.org/jira/browse/KAFKA-1907 to track the
issue.
On Mon, Jan 26, 2015 at 6:35 AM, Jaikiran Pai
wrote:
The main culprit
nd without much downside.
Just a thought.
Gwen
On Sat, Jan 31, 2015 at 6:32 AM, Jaikiran Pai wrote:
Neha, Ewen (and others), my initial attempt to solve this is uploaded here
https://reviews.apache.org/r/30477/. It solves the shutdown problem and now
the server shuts down even when Zookeepe
eper itself actually does support timeouts.
On Mon, Feb 2, 2015 at 9:54 AM, Guozhang Wang wrote:
Hi Jaikiran,
I think Gwen was talking about contributing to ZkClient project:
https://github.com/sgroschupf/zkclient
Guozhang
On Sun, Feb 1, 2015 at 5:30 AM, Jaikiran Pai
wrote:
Hi Gwen,
Yes, th
y
did indicate that the project could be released after this change is merged.
-Jaikiran
On Tuesday 03 February 2015 09:03 AM, Jaikiran Pai wrote:
Thanks for pointing to that repo!
I just had a look at it and it appears that the project isn't very
active (going by the lack of a
thers that were waiting for the new zkclient
Does that make sense?
Gwen
On Mon, Feb 2, 2015 at 8:33 PM, Jaikiran Pai wrote:
I just heard back from Stefan, who manages the ZkClient repo and he seems to
be open to have these changes be part of ZkClient project. I'll be creating
a pull reques
own little zookeeper client wrapper," since we have accumulated a bunch of
issues with zkClient which take a long time to be resolved, if ever, so we
ended up with some hacky ways of handling zkClient errors.
Guozhang
On Tue, Feb 3, 2015 at 7:47 PM, Jaikiran Pai <
jai.forums2...@gmail.com
Hello Kafka team,
We are using 0.9.0.1 (latest stable) of Kafka server and client
libraries. We use Java client for communicating with the Kafka
installation. Our simple application uses a single instance of
KafkaProducer (since the javadoc of that class states it's thread safe
and recommend
R is quite old
and needs rebasing. I will take a look at it today.
On Sun, May 15, 2016 at 6:14 AM, Jaikiran Pai
wrote:
Hello Kafka team,
We are using 0.9.0.1 (latest stable) of Kafka server and client
libraries. We use Java client for communicating with the Kafka
installation. Our simple appli
the outstanding send requests expire
That's reasonable.
2) The 0.10.0.0 release is imminent and I think it would be too late to
include this.
Fair enough.
Thank you for the quick and clear response.
-Jaikiran
On Mon, May 16, 2016 at 11:59 AM, Jaikiran Pai
wrote:
Thank yo
We are using Kafka 0.9.0 on our dev setups. We see that the Kafka
configurations currently expose a way to configure the host.name and/or
advertised.host.name which state:
# Hostname the broker will bind to. If not set, the server will bind to
all interfaces
#host.name=
# Hostname the broker
Just curious about the deprecation policy and the version schemes. Suppose
a certain feature was deprecated in 0.9.2, so a WARN gets logged in that
version. Does that now mean a bug-fix release 0.9.3 will drop that
feature? Shouldn't the dropping of the feature be done in a 0.10.0
release instea
Congratulations Gwen.
Gwen has been very helpful in various places (blogs, user mailing lists)
which has helped me (and I'm sure many others) in using Kafka and even
contributing patches. Very well deserved promotion.
-Jaikiran
On Tuesday 07 July 2015 07:36 AM, Guozhang Wang wrote:
Congrats
ty, and if we
decide on a different policy, we can just adjust the Fix Version for the
removal of the old tool.
-Ewen
On Thu, Jul 2, 2015 at 12:19 AM, Jaikiran Pai
wrote:
Just curious about deprecation policy and the version schemes. Consider a
certain feature was deprecated in 0.9.2, so a WARN ge
There are certain features in the latest trunk for 0.8.3 release which
we would like to test in our dev environments. Is there some place where
we can get the official nightly snapshot packaged builds or is the
recommended way to always fetch latest source code and build it locally
to try out t
Recently there was a discussion that 0.8.2.2 will be released soon to
fix the critical issue with compression. Is that still the plan or has
that been shelved?
-Jaikiran
On Tuesday 01 September 2015 09:51 AM, Aditya Auradkar wrote:
createTopic should also be fine unless you try to create the same topic
concurrently.
Just to be clear - that would imply the AdminUtils _cannot_ be
considered thread-safe, isn't it?
-Jaikiran
On Mon, Aug 31, 2015 at 3:00 PM,
We are using 0.9.0.1 of Kafka (Java) libraries for our Kafka consumers
and producers. In one of our consumers, our consumer config had a SSL
specific property which ended up being used against a non-SSL Kafka
broker port. As a result, the logs ended up seeing messages like:
17:53:33,722 WARN
Any opinion about this proposed change?
-Jaikiran
On Tuesday 16 August 2016 02:28 PM, Jaikiran Pai wrote:
We are using 0.9.0.1 of Kafka (Java) libraries for our Kafka consumers
and producers. In one of our consumers, our consumer config had a SSL
specific property which ended up being used
PM, Manikumar Reddy
wrote:
During server/client startup, we are logging all the supplied configs. Maybe
we can just mask the password-related config values for both valid/invalid
configs.
On Wed, Aug 17, 2016 at 5:14 PM, Jaikiran Pai
wrote:
Any opinion about this proposed change?
-Jaikiran
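The masking idea above can be sketched in a few lines of Python; the `[hidden]` placeholder and substring-based key matching are illustrative, not Kafka's actual implementation:

```python
def mask_sensitive(configs, needles=("password", "secret")):
    """Return a copy of a config map with password-like values replaced,
    so a startup log of supplied configs never leaks credentials.
    Placeholder text and key matching are illustrative assumptions."""
    return {
        key: "[hidden]" if any(n in key.lower() for n in needles) else value
        for key, value in configs.items()
    }
```

Applied to the logged config map just before printing, only the sensitive values change; everything else stays diagnosable.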
Created https://issues.apache.org/jira/browse/KAFKA-4056.
-Jaikiran
On Wednesday 17 August 2016 06:28 PM, Ismael Juma wrote:
Yes, please file a JIRA.
Thanks,
Ismael
On Wed, Aug 17, 2016 at 1:46 PM, Jaikiran Pai
wrote:
Thanks for the inputs.
I think it's fine if Kafka selectively
We have been using Kafka 0.9.0.1 (server and Java client libraries). So
far we had been using it with plaintext transport but recently have been
considering upgrading to using SSL. It mostly works except that a
mis-configured producer (and even consumer) causes a hard to relate
OutOfMemory exce
PM, Ismael Juma wrote:
Yes, please file a JIRA.
On Fri, Aug 26, 2016 at 2:28 PM, Jaikiran Pai
wrote:
We have been using Kafka 0.9.0.1 (server and Java client libraries). So
far we had been using it with plaintext transport but recently have been
considering upgrading to using SSL. It mostl
interpreting SSL handshake protocol messages
as Kafka requests. Hence the size (300MB) being allocated doesn't really
correspond to a size field. Limiting maximum buffer size would avoid OOM in
this case.
On Fri, Aug 26, 2016 at 4:13 PM, Jaikiran Pai
wrote:
Hi Rajini,
Just filed a JI
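The explanation above can be illustrated in a few lines of Python: if a plaintext listener reads the first four bytes of a TLS handshake record as a big-endian request-size field, it sees an enormous value. The exact bytes vary with TLS version; `0x16 0x03 0x01` is a typical ClientHello record prefix:

```python
import struct

# First bytes of a typical TLS ClientHello record:
# 0x16 = handshake content type, 0x03 0x01 = record-layer version,
# then the record length (high byte 0x00 here).
tls_prefix = bytes([0x16, 0x03, 0x01, 0x00])

# A plaintext Kafka listener reads a 4-byte big-endian size field first,
# so these handshake bytes masquerade as a ~352 MB request allocation.
bogus_size = struct.unpack(">I", tls_prefix)[0]
print(bogus_size, "bytes =", bogus_size // (1024 * 1024), "MB")
```

That is the same order of magnitude as the ~300 MB allocation reported, which is why a maximum-request-size bound catches the misconfiguration instead of an OOM.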
We have been using the (new) Java consumer API in 0.9.0.1 for a while
now. We have some well-known issues with it - like heartbeats being
handled on the same thread, causing the consumer to sometimes be considered
dead. I understand that this has been fixed in 0.10.0.1 but we haven't
yet had a cha
We just started enabling SSL for our Kafka brokers and (Java) clients
and among some of the issues we are running into, one of them is the
flooding of the server/broker Kafka logs where we are seeing these
messages:
[2016-09-02 08:07:13,773] WARN SSL peer is not authenticated, returning
ANONY
We are using 0.10.0.1 of Kafka and (Java) client libraries. We recently
decided to start using SSL for Kafka communication between broker and
clients. Right now, we have a pretty basic setup with just 1 broker with
SSL keystore setup and the Java client(s) communicate using the
Producer/Consume
way to do that is to set
`ssl.client.auth=required`. So I'd, personally, be fine with reducing the
log level to info or debug.
Ismael
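The broker-side setting for that is a one-liner, with the caveat that it assumes every connecting client has a certificate the broker trusts, or connections will fail:

```
# server.properties: require client certificates so the SSL principal is
# authenticated instead of mapped to ANONYMOUS
ssl.client.auth=required
```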
On Sun, Sep 4, 2016 at 3:01 PM, Jaikiran Pai
wrote:
We just started enabling SSL for our Kafka brokers and (Java) clients and
among some of the issues we a
on in
CPU usage, which may offset the cost of SSL. We haven't had a chance to
fully investigate this, however, as changing that config depends on the
clients being updated to support the new format.
-Todd
On Sunday, September 4, 2016, Jaikiran Pai wrote:
We are using 0.10.0.1 of Kafka and (Ja