will work and behave like with
the IN_MEMORY StoreType, as it is straightforward to use.
Do you see a chance to get InteractiveQueryV2 to work with GlobalKTable?
Kind regards,
Christian
-----Original Message-----
From: Sophie Blee-Goldman
Sent: Wednesday, November 22, 2023 1:51 AM
To: christian.z
thread-1) new file processed
Thanks for any input!
Christian
will this feature be GA or considered stable?
Best regards,
Christian A. Mathiesen
on kafka broker side.
Christian
--
--
Christian Schneider
http://www.liquid-reality.de
Computer Scientist
http://www.adobe.com
do we still need a custom
producer partitioner or is it enough to simply assign to the topic like
described above?
Christian
Am Mi., 8. Dez. 2021 um 11:19 Uhr schrieb Luke Chen :
> Hi Christian,
> Answering your question below:
>
> > Let's assume we just have one topic wit
In the worst case we can make this happen by encrypting the messages,
but it would be great if we could filter on the broker side.
Christian
ache-kafka-comply-freaking-out/
That's what sparked our interest in such a solution.
Kind regards,
--
Christian Apolloni
Disclaimer: The contents of this email and any attachment thereto are intended
exclusively for the attention of the addressee(s). The email and any such
attachment(s) may con
As an alternative solution we also investigated encryption: encrypting all
messages with an individual key and removing the key once the "deletion" needs
to be performed.
Does anyone have experience with such a solution?
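For what it's worth, that key-per-subject "crypto-shredding" idea can be sketched in a few lines. This is only a toy: the hash-based keystream below stands in for a real cipher such as AES-GCM, and the `keys` dict stands in for a proper key-management service; it just illustrates the "delete the key, shred the data" semantics.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic keystream from the key (toy stand-in for AES-CTR)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream ciphers are their own inverse

# One key per data subject, stored outside Kafka (e.g. in a KMS).
keys = {"user-42": secrets.token_bytes(32)}

record = encrypt(keys["user-42"], b"personal data")

# While the key exists, authorized consumers can decrypt the record.
assert decrypt(keys["user-42"], record) == b"personal data"

# "Deletion": throw away the key. The record bytes still sit in the Kafka log,
# but they are no longer recoverable.
del keys["user-42"]
```

The appeal is that the immutable log never has to be rewritten; the practical cost is that key management becomes the hard part.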
ure whether our understanding is correct and
whether it's a bug or not.
In general, I think part of the issue is that the system receives the delete
order at the time that it has to be performed: we don't deal with the
processing of the required waiting periods, that's what happens in the
"coo
On 2020/08/19 16:15:40, Nemeth Sandor wrote:
> Hi Christian,
Hi, thanks for your reply.
> depending on how your Kafka topics are configured, you have 2 different
> options:
>
> a) if you have a non-log-compacted topic then you can set the message retention
> on
ons we would be very interested in your opinion. We are also
interested in general about experiences in implementing GDPR compliance in
Kafka, especially when dealing with multiple, interconnected systems.
Kind regards,
Any other ideas here? Should I create a bug?
On Tue, Jul 3, 2018 at 1:21 PM, Christian Henry wrote:
> Nope, we're setting retainDuplicates to false.
>
> On Tue, Jul 3, 2018 at 6:55 AM, Damian Guy wrote:
>
>> Hi,
>>
>> When you create your window store do y
ue`?
>
> Thanks,
> Damian
>
> On Mon, 2 Jul 2018 at 17:29 Christian Henry
> wrote:
>
> > We're using the latest Kafka (1.1.0). I'd like to note that when we
> > encounter duplicates, the window is the same as well.
> >
> > My original code was a bit simplif
to efficiently query all of the most
recent data. Note that since the healthcheck punctuator needs to aggregate
on all the recent values, it has to do a *fetchAll(start, end)*, which is
how these duplicates are affecting us.
On Fri, Jun 29, 2018 at 7:32 PM, Guozhang Wang wrote:
> Hello Christian,
>
&
Hi all,
I'll first describe a simplified view of relevant parts of our setup (which
should be enough to repro), describe the behavior we're seeing, and then
note some information I've come across after digging in a bit.
We have a kafka stream application, and one of our transform steps keeps a
On Wed., Nov. 29, 2017, 9:09 PM, Christian F. Gonzalez Di Antonio <
christian...@gmail.com> wrote:
> uhh, so sorry, I forgot it.
>
> Docker Hub: https://hub.docker.com/r/christiangda/kafka/
>
> Github: https://github.com/christiangda/docker-kafka
>
> Regards,
>
&
kafka itself configuration.
It was not tested on Kubernetes, but I expect to do that soon.
Feel free to let me know your feedback on the GitHub repository.
Regards,
Christian
I would like to share my @apachekafka <https://twitter.com/apachekafka>
@Docker <https://twitter.com/Docker> image with all of you! The
documentation is a work in progress!
https://hub.docker.com/r/christiangda/kafka/
Regards,
Christian
? From the long delay it looks a bit like a
reverse DNS issue, but I don't know if these can happen with Kafka or what to
configure to avoid the issue.
Christian
this be sped up? I use this in a test and would like to make that
test faster.
Christian
.
Christian
On Apr 11, 2017 11:10, "IT Consultant" <0binarybudd...@gmail.com> wrote:
Thanks for your response .
We aren't allowed to hard-code passwords in any of our programs
On Apr 11, 2017 23:39, "Mar Ian" <mar@hotmail.com> wrote:
> Since it is a Java property y
Thank you Harsha!
On Sun, Feb 26, 2017 at 10:27 AM, Harsha Chintalapani <ka...@harsha.io>
wrote:
> Hi Christian,
> Kafka client connections are long-lived connections,
> hence the authentication part comes up during connection establishment and
> once we au
? Is the
Kafka SASL implementation not meant for such a complicated scenario or am I
just thinking about it all wrong?
Thanks,
Christian
use -Dsun.security.krb5.debug=true and
> > > -Djava.security.debug=gssloginconfig,configfile,configparser,logincontext
> > > to see debug info about what's going on.
> > >
> > >
> > > Some reading material can be found at:
> > > htt
>
> Some reading material can be found at:
> https://github.com/gerritjvv/kafka-fast/blob/master/kafka-clj/Kerberos.md
>
> and if you want to see or need for testing a vagrant env with kerberos +
> kafka configured see
> https://github.com/gerritjvv/kafka-fast/blob/master/kafka-
>
the server side.
Is this expected behavior?
Thanks,
Christian
t that var to false and I got
past the problem.
On Sat, Jan 7, 2017 at 7:54 AM, Christian <engr...@gmail.com> wrote:
> Hi,
>
> I'm trying to set up SASL_PLAINTEXT authentication between the
> producer/consumer clients and the Kafka brokers only. I am not too worried
> about the
tting it to false gives me this timeout, but only when I
also set the -Djava.security.auth... property.
I know, I'm missing a small thing.
Thanks,
Christian
% more cost, we doubled our VMs, using twice as
many half-sized EBS volumes.
-Christian
On Fri, Jul 8, 2016 at 12:07 PM Krish <krishnan.k.i...@gmail.com> wrote:
> Thanks, Christian.
> I am currently reading about kafka-on-mesos.
> I will hack something this weekend to see if I can b
cannibalizes
> resources; I can also ensure that it runs on a dedicated machine.
>
> Thanks.
>
> Best,
> Krish
>
--
*Christian Posta*
twitter: @christianposta
http://www.christianposta.com/blog
http://fabric8.io
message has expired and then park it in some
> topic till such time as a service can dequeue, process it and/or
> investigate it.
>
> Thanks.
>
> Best,
> Krish
>
Might be worth describing your use case a bit to see if there's another way
to help you?
On Tue, Jun 14, 2016 at 5:29 AM, Mudit Kumar <mudit.ku...@askme.in> wrote:
> Hey,
>
> How can I delete particular messages from a particular topic? Is that
> possible?
>
> Thanks,
>
-hello-test1"));
> final int minBatchSize = 200;
> List<ConsumerRecord<String, String>> buffer = new ArrayList<>();
> while (true) {
>     ConsumerRecords<String, String> records = consumer.poll(100);
>     for (ConsumerRecord<String, String> record : records) {
>         buffer.add(record);
>     }
>     consumer.commitSync();
> }
> }
> }
Hi Gerard,
When trying to reproduce this, did you use the go sarama client Safique
mentioned?
On Fri, Jun 3, 2016 at 5:10 AM, Gerard Klijs
wrote:
> I assume you use a replication factor of 3 for the topics? When I ran some
> tests with producers/consumers in a dockerized
level streams
> > API the sources and sinks can only be Kafka topics for now. So, as Gwen
> > mentioned, Connect would be the way to go to bring the data to a Kafka
> > Topic first.
>
> Got it — thank you!
>
>
the implementation
> from
> > the multi-threaded to one thread subscribing to multiple topics?... I'm
> > just wondering whether a KafkaConsumer is able to stand the bunch of data
> > without performance degradation.
> >
> > Thanks in advance!
> >
> > Best regards
> >
> > KIM
> >
>
0, unknown,
> mirrormaker-0_172.17.0.1/172.17.0.1
> mirrormaker, local.general.example, 0, unknown, 0, unknown,
> mirrormaker-0_172.17.0.1/172.17.0.1
>
> On Wed, May 18, 2016 at 2:36 PM Christian Posta <christian.po...@gmail.com
> >
> wrote:
>
> > Maybe give it a try wit
a – considering that different consumers within the group
> commit to either kafka or Zk ?
>
> Regards
> Sathya,
>
t support transactions? Let's say I want to push 3 messages
> > > > atomically but the producer process crashes after sending only 2
> > > messages,
> > > > is it possible to "rollback" the first 2 messages (e.g. "all or
> > nothing"
> > > > semantics)?
> > > >
> > > > 3) Does it support request/response style semantics or can they be
> > > > simulated? My system's primary interface with the outside world is an
> > > HTTP
> > > > API so it would be nice if I could publish an event and wait for all
> > the
> > > > internal services which need to process the event to be "done"
> > > > processing before returning a response.
> > > >
> > > > PS: I'm a Node.js/Go developer so when possible please avoid Java
> > centric
> > > > terminology.
> > > >
> > > > Thanks!
> > > >
> > > > - Oli
> > > >
> > > > --
> > > > - Oli
> > > >
> > > > Olivier Lalonde
> > > > http://www.syskall.com <-- connect with me!
> > > >
> > >
> >
>
>
>
> --
> - Oli
>
> Olivier Lalonde
> http://www.syskall.com <-- connect with me!
>
m experience with other projects, I know that
> without the initial pitch / discussion, it could be difficult to get such a
> feature in. I can create a JIRA in the morning, no electricity again
> tonight :-/
>
>
>
>
>
> On Tue, May 17, 2016 at 4:53 PM
return the offset of the
> > > message produced, and you could check the latest offset of each
> consumer
> > in
> > > your web request handler.
> > >
> > > However, doing so is not going to work that well, unless you're ok with
> > > your web requests taking on the order of seconds to tens of seconds to
> > > fulfill. Kafka can do low latency messaging reasonably well, but
> > > coordinating the offsets of many consumers would likely have a huge
> > latency
> > > impact. Writing the code for it and getting it handling failure
> correctly
> > > would likely be a lot of work (there's nothing in any of the client
> > > libraries like this, because it is not a desirable or supported use
> > case).
> > >
> > > Instead I'd like to query *why* you need those semantics? What's the
> > issue
> > > with just producing a message and telling the user HTTP 200 and later
> > > consuming it.
> > >
> > >
> > >
> > > >
> > > > PS: I'm a Node.js/Go developer so when possible please avoid Java
> > centric
> > > > terminology.
> > >
> > >
> > > Please note that the Node and Go clients are notably less mature
> than
> > > the JVM clients, and that running Kafka in production means knowing
> > enough
> > > about the JVM and Zookeeper to handle that.
> > >
> > > Thanks!
> > > Tom Crayford
> > > Heroku Kafka
> > >
> > > >
> > > > Thanks!
> > > >
> > > > - Oli
> > >
> > > >
> > > > --
> > > > - Oli
> > > >
> > > > Olivier Lalonde
> > > > http://www.syskall.com
> > > <http://www.syskall.com> <-- connect with me!
> > > >
> > >
> >
>
>
>
>
>
If you're using Kafka Connect, it does it for you!
Basically you set the SourceRecord's "sourcePartition" and "sourceOffset"
fields (
https://github.com/christian-posta/kafka/blob/8db55618d5d5d5de97feab2bf8da4dc45387a76a/connect/api/src/main/java/org/apache/kafka/connect/sour
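For readers unfamiliar with Connect: the framework persists the last sourceOffset it saw for each sourcePartition and hands it back to your connector on restart. A self-contained sketch of that bookkeeping with plain dicts (the names `commit_record` and `last_offset` are hypothetical; the real API is the Java SourceRecord/OffsetStorageReader classes):

```python
# Hypothetical stand-in for Connect's offset storage: the framework remembers
# the latest sourceOffset it committed for each sourcePartition, and hands it
# back to the connector on restart.
offset_storage = {}

def commit_record(source_partition, source_offset):
    # Dicts aren't hashable, so key the storage by a stable representation.
    offset_storage[tuple(sorted(source_partition.items()))] = source_offset

def last_offset(source_partition):
    return offset_storage.get(tuple(sorted(source_partition.items())))

# A file-based source might use the filename as its sourcePartition and the
# byte position within the file as its sourceOffset:
commit_record({"filename": "/var/log/app.log"}, {"position": 0})
commit_record({"filename": "/var/log/app.log"}, {"position": 4096})

# On restart the connector resumes from the last committed offset.
assert last_offset({"filename": "/var/log/app.log"}) == {"position": 4096}
```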
ook.com/anas.24aj> [image: twitter]
> > <https://twitter.com/anas24aj> [image: linkedin]
> > <http://in.linkedin.com/in/anas24aj> [image: googleplus]
> > <https://plus.google.com/u/0/+anasA24aj/>
> > +917736368236
> > anas.2...@gmail.com
> > Bangalore
> >
>
10.0.0.184:2181, initiating session
> (org.apache.zookeeper.ClientCnxn)
> > > > [2016-05-10 15:41:03,337] INFO Unable to read additional data from
> server sessionid 0x1549b308dd20002, likely server has closed socket,
> closing socket connection and attempting reconnect
> (org.apache.zookeeper.ClientCnxn)
> > > > [2016-05-10 15:41:05,121] INFO Opening socket connection to server
> 10.0.0.184/10.0.0.184:2181. Will not attempt to authenticate using SASL
> (unknown error) (org.apache.zookeeper.ClientCnxn)
> > > > [2016-05-10 15:41:05,121] INFO Socket connection established to
> 10.0.0.184/10.0.0.184:2181, initiating session
> (org.apache.zookeeper.ClientCnxn)
> > > > [2016-05-10 15:41:05,122] INFO Unable to read additional data from
> server sessionid 0x1549b308dd20002, likely server has closed socket,
> closing socket connection and attempting reconnect
> (org.apache.zookeeper.ClientCnxn)
> > > >
> > > > You can see when the first zookeeper dies and connection is lost ...
> and all the retries by kafka server in order to connect to the new one
> (same IP, same port).
> > > >
> > > > Why the zookeeper server closes the connection (I can see FIN ACK
> frames on Wireshark)
> > > >
> > > > Thanks,
> > > > Paolo.
> > > >
> > > > Paolo Patierno
> > > > Senior Software Engineer (IoT) @ Red Hat
> > > > Microsoft MVP on Windows Embedded & IoT
> > > > Microsoft Azure Advisor
> > > > Twitter : @ppatierno
> > > > Linkedin : paolopatierno
> > > > Blog : DevExperience
>
>
at 9:41 AM, Mudit Kumar <mudit.ku...@askme.in> wrote:
> How can i get the list for all the class names i can run through
> ./kafka-run-class.sh [class-name] command?
>
> Thanks,
> Mudit
knowledge of the implementation
comment further on what would be required.
Christian
On Mon, May 2, 2016 at 9:41 PM, Bruno Rassaerts
<bruno.rassae...@novazone.be> wrote:
> We did indeed try the last scenario you describe, as encrypted disks do not
> fulfil our requirements.
> We need
From what I know of previous discussions, encryption at rest can be
handled with transparent disk encryption. When that's sufficient, it's
nice and easy.
Christian
On Thu, Apr 21, 2016 at 2:31 PM, Tauzell, Dave
<dave.tauz...@surescripts.com> wrote:
> Has there been any discus
ka.tools.JmxTool --object-name
> "kafka.consumer:type=FetchRequestAndResponseMetrics,name=FetchRequestRateAndTimeMs,clientId=ReplicaFetcherThread*,brokerHost=hostname*.
> cluster.com,brokerPort=*" --jmx-url
> service:jmx:rmi:///jndi/rmi://`hostname`:/jmxrmi
>
> There ma
ets.
>
> Any inputs on what could be going on?
>
>
> I think the overhead which happens while establishing a connection from
> consumer/producer to the Kafka broker(s) seems a little heavy.
>
> Thanks in advance!
>
> Best regards
>
> bgkim
>
e one and
> > end
> > > up with bugs in production.
> > >
> > > I can volunteer for the release management of the LTS release but as a
> > > community, can we follow the rigour of back-porting the bug-fixes to
> the
> > > LTS branch?
> > >
> > > --
> > > Regards
> > > Vamsi Subhash
> > >
> >
> >
> >
> > --
> > Regards
> > Vamsi Subhash
> >
>
]
>
> bookmarks80[0, 1][0, 1]
>
> bookmarks91[1, 0][1, 0]
>
> We monitor the disk writes and I only see writes at broker 0, and broker 1
> sees none (not comparable at all). I do see comparable network
ld make 5.b
> less likely.
>
> -Jason
>
>
>
> On Fri, Mar 11, 2016 at 1:03 AM, Michael Freeman <mikfree...@gmail.com>
> wrote:
>
> > Thanks Christian,
> > Sending a heartbeat without having to poll
> > would also be use
ed in Kafka home page
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+papers+and+presentations
> is
> outdated..
>
> Thanks in advance..
>
com>
wrote:
> Thanks Christian,
> We would want to retry indefinitely, or at
> least for, say, x minutes. If we don't poll, how do we keep the heartbeat
> alive to Kafka? We never want to lose this message and only want to commit
> to Kafka when the messag
ppy fix which was able to demonstrate that addressing
> this case solves my stuck consumer problem.
>
> How do I submit a bug report for this issue, or does this email constitute
> a bug report?
>
> --Larkin
>
to never lose a message and to ensure it makes it to Mongo. (Redelivery is
> ok)
>
> Thanks for any help or pointers in the right direction.
>
> Michael
>
/christian-posta/kafka/tree/ceposta-doco
On Thu, Mar 3, 2016 at 6:28 AM, Marcos Luis Ortiz Valmaseda <
marcosluis2...@gmail.com> wrote:
> I love that too
> +1
>
> 2016-03-02 21:15 GMT-05:00 Christian Posta <christian.po...@gmail.com>:
>
> > For sure! Will take a lo
if the security parts
work out for you.
Christian
On Wed, Mar 2, 2016 at 9:52 PM, Jan <cne...@yahoo.com.invalid> wrote:
> Hi folks;
> does anyone know of Kafka's ability to work over satellite links? We have an
> IoT telemetry application that uses satellite communication to
For sure! Will take a look!
On Wednesday, March 2, 2016, Gwen Shapira <g...@confluent.io> wrote:
> Hey!
>
> Yes! We'd love that too! Maybe you want to help us out with
> https://issues.apache.org/jira/browse/KAFKA-2967 ?
>
> Gwen
>
> On Wed, Mar 2, 20
Would love to have the docs in gitbook/markdown format so they can easily
be viewed from the source repo (or mirror, technically) on github.com. They
can also be easily converted to HTML, have a side-navigation ToC, and still
be versioned along with the src code.
Thoughts?
ther than at the
> consumer level?
>
in case you are aware of any issue that might cause it?
> I'm chasing this leak for several days, and managed to track it down to the
> code writing to Kafka, so I'm a little desperate :) any help will do.
>
> Thanks!
>
> Asaf
>
Can someone add karma to my user ID for contributing to the wiki/docs?
My user ID is 'ceposta'.
Thanks!
:57 AM, Jun Rao <j...@confluent.io> wrote:
>
> > Christian,
> >
> > Similar to other Apache projects, a vote from a committer is considered
> > binding. During the voting process, we encourage non-committers to vote
> as
> > well. We will cancel the release eve
I believe so. Happy to be corrected.
On Wed, Feb 17, 2016 at 12:31 PM, Joe San <codeintheo...@gmail.com> wrote:
> So if I use the High Level Consumer API, using the ConsumerConnector, I get
> this automatic zookeeper connection for free?
>
> On Wed, Feb 17, 2016 at 8:25 P
ame/0 1
>
> Can someone help me clarify or point me at a doc that explains what is
> getting counted here? You can shoot me if you like for attempting the
> hack-ish solution of re-setting the offset through the Zookeeper API, but I
> would still like to understand what, exactly, is represented by that number
> 30024.
>
> I need to hand off to IT for the Disaster Recovery portion and saying
> "trust me, it just works" isn't going to fly very far...
>
> Thanks.
>
ow which property from my setting in my email above is responsible for
> this auto reconnect mechanism?
>
> On Wed, Feb 17, 2016 at 8:04 PM, Christian Posta <
> christian.po...@gmail.com>
> wrote:
>
> > Yep, assuming you haven't completely partitioned that client from t
rops.put("group.id", groupId)
> > props.put("auto.commit.enabled", "false")
> > // this timeout is needed so that we do not block on the stream!
> > props.put("consumer.timeout.ms", "1")
> > props.put("zookeeper.sync.time.ms", "200")
> >
>
https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/javadoc/
> >
> > * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.1 tag
> >
> >
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=2c17685a45efe665bf5f24c0296cb8f9e1157e89
> >
> > * Documentation
> > http://kafka.apache.org/090/documentation.html
> >
> > Thanks,
> >
> > Jun
> >
> >
>
Wonder if you can listen to the zkPath for topics via a zk watch (
https://zookeeper.apache.org/doc/r3.3.3/api/org/apache/zookeeper/Watcher.html)
to let you know when the structure of the tree changes (i.e., add/remove)?
The zkPath for topics is "/brokers/topics"
https://github.com/chris
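In case it helps to see the shape of it: ZooKeeper watches are one-shot, so the callback has to re-register itself each time it fires. A toy stand-in (no real ZooKeeper involved; `ToyZkNode` is invented purely for illustration):

```python
class ToyZkNode:
    """Minimal stand-in for a znode that supports one-shot child watches."""

    def __init__(self):
        self.children = set()
        self._watchers = []

    def get_children(self, watch=None):
        # Like ZooKeeper, a watch is one-shot: it fires once and must be
        # re-registered by the callback if you want to keep watching.
        if watch is not None:
            self._watchers.append(watch)
        return sorted(self.children)

    def _fire(self, event):
        watchers, self._watchers = self._watchers, []
        for w in watchers:
            w(event)

    def create_child(self, name):
        self.children.add(name)
        self._fire({"type": "NodeChildrenChanged", "path": "/brokers/topics"})

events = []

def on_change(event):
    events.append(event)
    # Re-register so we keep hearing about future topic adds/removes.
    topics.get_children(watch=on_change)

topics = ToyZkNode()
topics.get_children(watch=on_change)   # initial read + watch registration
topics.create_child("my-topic")        # simulates a topic being created
```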
the
operational aspects of it.
Christian
Thanks for any comments too. :)
On Mon, May 4, 2015 at 9:03 AM, Mayuresh Gharat
gharatmayures...@gmail.com
wrote:
Ok. You can deploy Kafka in AWS. You can have brokers on AWS servers.
Kafka is not a push system, so you will need someone writing
Do you have anything on the number of voters, or audience breakdown?
Christian
On Wed, Mar 4, 2015 at 8:08 PM, Otis Gospodnetic otis.gospodne...@gmail.com
wrote:
Hello hello,
Results of the poll are here!
Any guesses before looking?
What % of Kafka users are on 0.8.2.x already?
What
or two with a single producer you would not expect to see all
partitions be hit.
Christian
On Mon, Mar 2, 2015 at 4:23 PM, Yang tedd...@gmail.com wrote:
Thanks. Just checked the code below: the line that calls
Random.nextInt() seems to be called only *a few times*, and all
discussion might be dealt with by
covering disk encryption and how the conversations between Kafka instances
are protected.
Christian
On Wed, Feb 25, 2015 at 11:51 AM, Jay Kreps j...@confluent.io wrote:
Hey guys,
One thing we tried to do along with the product release was start to put
together
to be
fairly separate from Kafka even if there's a handy optional layer that
integrates with it.
Christian
On Wed, Feb 25, 2015 at 5:34 PM, Julio Castillo
jcasti...@financialengines.com wrote:
Although full disk encryption appears to be an easy solution, in our case
that may not be sufficient. For cases
).
Christian
On Wed, Feb 25, 2015 at 3:57 PM, Jay Kreps jay.kr...@gmail.com wrote:
Hey Christian,
That makes sense. I agree that would be a good area to dive into. Are you
primarily interested in network level security or encryption on disk?
-Jay
On Wed, Feb 25, 2015 at 1:38 PM, Christian Csar
consumers (two different applications) you would want each to have 4
threads (4*2 = 8 threads total).
There are also considerations depending on which consumer code you are
using (which I'm decidedly not someone with good information on)
Christian
On Wed, Jan 28, 2015 at 1:28 PM, Ricardo
messages' store somewhere else and have code that looks in there to make
retries happen (assuming you want the failure/retry to persist beyond the
lifetime of the process).
Christian
On Wed, Jan 28, 2015 at 7:00 PM, Guozhang Wang wangg...@gmail.com wrote:
I see. If you are using the high-level
Storm rather than just Kafka to achieve your needs though.
Christian
Thanks
On Thu, Oct 9, 2014 at 11:57 PM, Albert Vila albert.v...@augure.com
wrote:
Hi
We process data in real time, and we are taking a look at Storm and Spark
Streaming too; however, our actions
not be able to mark jobs as completed except in a strict order
(while maintaining a processed-successfully-at-least-once guarantee).
This is not to say it cannot be done, but I believe your workqueue would
end up working a bit strangely if built with Kafka.
Christian
On 10/09/2014 06:13 AM, William Briggs
of building such a chat system much, much
easier (you can avoid writing your own message replication system), but
it is definitely not plug-and-play using topics for users.
Christian
On 09/05/2014 09:46 AM, Jonathan Weeks wrote:
+1
Topic Deletion with 0.8.1.1 is extremely problematic, and coupled
. My callback ends up putting information about the call to
beanstalk into another executor service for later processing.
Christian
On 08/26/2014 12:35 PM, Ryan Persaud wrote:
Hello,
I'm looking to insert log lines from log files into Kafka, but I'm concerned
with handling asynchronous send
a given conversion. That way you will avoid losing
information, particularly if you expect any of your conversion tools to
be of more general use.
Christian
On 08/25/2014 05:36 PM, Gwen Shapira wrote:
Personally, I like converting data before writing to Kafka, so I can
easily support many consumers
of an Async callback.
Christian
On 06/23/2014 04:54 PM, Guozhang Wang wrote:
Hi Kyle,
We have not fully completed the test in production yet for the new
producer, currently some improvement jiras like KAFKA-1498 are still open.
Once we have it stabilized in production at LinkedIn we plan to update
/KafkaProducer.java#L151
for reference)?
Looking around it seems plausible the language in the documentation
might refer to a separate sort of callback that existed in 0.7 but not
0.8. In our use case we have something useful to do if we can detect
messages failing to be sent.
Christian
On 05/01/2014 07:22 PM, Christian Csar wrote:
I'm looking at using the Java producer API for 0.8.1 and I'm slightly
confused by this passage from section 4.4 of
https://kafka.apache.org/documentation.html#theproducer
Note that as of Kafka 0.8.1 the async producer does not have a
callback
as described above to address the durability issue for more critical
data were realized?
Many thanks,
--
Christian Schuhegger
Jun,
For my first example is that syntax correct? I.e.
log.retention.bytes.per.topic.A = 15MB
log.retention.bytes.per.topic.B = 20MB
I totally guessed there and was wondering if I guessed right. Otherwise, is
there a document with the proper formatting to fill out this map?
Thank you,
Paul
Neha,
Correct, that is my question. We want to investigate capping our disk usage
so we don't fill up our hard disks. If you have any recommended configurations
or documents on these settings, please let us know.
Thank you,
Paul
On Tue, Aug 20, 2013 at 6:16 AM, Paul Christian
pchrist
Hi Jun,
Thank you for your reply. I'm still a little fuzzy on the concept.
Are you saying I can have topic A, B and C and with
log.retention.bytes.per.topic.A = 15MB
log.retention.bytes.per.topic.B = 20MB
log.retention.bytes = 30MB
And thus topic C will get the value 30MB? Since it's not
According to the Kafka 0.8 documentation under broker configuration, there
are these parameters and their definitions.
log.retention.bytes -1 The maximum size of the log before deleting it
log.retention.bytes.per.topic The maximum size of the log for some
specific topic before deleting it
I'm
Hi Everyone,
I have been experimenting with the libraries listed below and experienced the
same problems.
I have not found any other Node clients. I am interested in finding a
Node solution as well.
Happy to contribute on a common solution.
Christian Carollo
On Apr 24, 2013, at 10