Square brackets are needed only for the command-line tool.
In your code, you just need to supply the "compact,delete" string to the props
object.
On Wed, Jul 11, 2018 at 8:52 AM Jayaraman, AshokKumar (CCI-Atlanta-CON) <
ashokkumar.jayara...@cox.com> wrote:
> Hi Matthias,
>
> Kept as [compact,delete]
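For reference, setting the policy from code might look like the following (a minimal sketch; the surrounding AdminClient/topic-creation code is omitted, and `cleanup.policy` is the standard topic-level config key):

```java
import java.util.Properties;

public class CleanupPolicyExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Plain comma-separated string -- no square brackets in code.
        // The bracketed form [compact,delete] is only how the CLI renders the list.
        props.put("cleanup.policy", "compact,delete");
        System.out.println(props.getProperty("cleanup.policy"));
    }
}
```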
Hi Kafka users,
I am very new to Kafka and, more generally, to stream processing, and am trying
to understand some of the concepts used by Kafka. From what I understand, a
key-value state store is created on each processor node that performs stateful
operations such as aggregations or joins.
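To picture what such a store does, here is a toy illustration in plain Java (not the Streams API itself; a real store is managed by the library and typically backed by RocksDB plus a changelog topic): a count aggregation amounts to each processor node maintaining a local key-value map for the keys it owns.

```java
import java.util.HashMap;
import java.util.Map;

public class StateStoreToy {
    public static void main(String[] args) {
        // Toy model of a key-value state store backing a count aggregation.
        Map<String, Long> store = new HashMap<>();
        String[] incomingKeys = {"user-1", "user-2", "user-1"};
        for (String key : incomingKeys) {
            // Read the current aggregate for the key, update it, write it back.
            store.merge(key, 1L, Long::sum);
        }
        System.out.println(store); // running counts per key
    }
}
```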
Further investigation:
I have compared open files/connections of the different nodes. Same count in
real open files (data dir files) and established connections on all nodes.
But the affected node has a lot of "CLOSE_WAIT" connections (many thousands) to
IPs of external clients (no specific
You do not need the brackets; try keeping the string value as "compact,delete".
Guozhang
On Tue, Jul 10, 2018 at 8:22 PM, Jayaraman, AshokKumar (CCI-Atlanta-CON) <
ashokkumar.jayara...@cox.com> wrote:
> Hi Matthias,
>
> Kept as [compact,delete] and still got the same exception.
>
> Thanks &
Thanks Boris.
Are you using Alpakka for Kafka-Akka integration?
On Wed, Jul 11, 2018 at 9:56 AM, Boris Lublinsky <
boris.lublin...@lightbend.com> wrote:
> This works fine, we (Lightbend) are using this approach all over the place
>
> Boris Lublinsky
> FDP Architect
>
Hello Jonathan,
At a very high level, KSQL statements are compiled into a Kafka Streams
topology for execution. The concept of "state stores" belongs to Kafka
Streams, not to KSQL: inside the topology, for those processor nodes
that need stateful processing, like joins, one or more state
Hi All,
I was wondering what the disk recommendation is for a Kafka cluster. Is it
acceptable to use RAID0 when the replication factor is 3? We are running
on a cloud infrastructure and disk failure is addressed at another level,
so the chance of a single-disk failure would be very low. Besides, our
Hi,
I am new to Kafka and hence would like to validate the following design.
Imagine a vehicle being tracked by multiple people: V1 is tracked by
U1, U2, and U3. When V1 moves, U1, U2, and U3 should be
notified and updated. U1, U2, and U3 would each be tracking several other
vehicles too.
Let me know if there
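One way to model this in Kafka (an assumption on my part, not the only possible design): publish position updates to a single topic keyed by vehicle id, so updates for a given vehicle stay ordered within one partition, and give each of U1, U2, U3 its own consumer group so every tracker sees every update. The fan-out itself can be pictured as:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class TrackingFanOut {
    public static void main(String[] args) {
        // Toy model: which users track which vehicle. With Kafka, this fan-out
        // falls out of consumer groups: each tracker consumes the (hypothetical)
        // vehicle-positions topic under its own group id and so receives every update.
        Map<String, Set<String>> trackers = new HashMap<>();
        trackers.put("V1", new HashSet<>(Arrays.asList("U1", "U2", "U3")));

        String movedVehicle = "V1";
        for (String user : trackers.getOrDefault(movedVehicle, new HashSet<>())) {
            System.out.println("notify " + user + ": " + movedVehicle + " moved");
        }
    }
}
```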
Yes, reactive Kafka
Boris Lublinsky
FDP Architect
boris.lublin...@lightbend.com
https://www.lightbend.com/
> On Jul 11, 2018, at 9:46 AM, Pulkit Manchanda wrote:
>
> Thanks Boris.
> Are you using Alpakka for Kafka-Akka integration?
>
> On Wed, Jul 11, 2018 at 9:56 AM, Boris Lublinsky <
>
Hello!
Using kafka-streams 1.1.0, I noticed that when I sum the process-rate metric for
a given processor node, the rate is many times higher than the number of incoming
messages. Digging further, it looks like the rate metric associated with each
thread in a given application instance is always
Hi All,
I want to build a data pipeline with the following design. Can anyone please
advise me on whether this is feasible, or whether there are better options?
HTTP Streams --> (HTTP stream consumer)(using AKKA HTTP Streaming) --> (kafka
Stream Producer)(using Kafka Streaming) --> (Kafka Stream
This works fine, we (Lightbend) are using this approach all over the place
Boris Lublinsky
FDP Architect
boris.lublin...@lightbend.com
https://www.lightbend.com/
> On Jul 11, 2018, at 8:53 AM, Pulkit Manchanda wrote:
>
> Hi All,
>
> I want to build a datapipeline with the following design.
Hi,
I am new to Apache Kafka and I am trying to work through the quickstart, but I
ran into a problem in Step 2. After executing the first command to start
ZooKeeper, do I have to open another terminal to run the Kafka server? I even
tried "How To Install Apache Kafka on Ubuntu 14.04" on DigitalOcean but also cannot
Hello,
I have a kafka cluster (version 1.0.1) with two brokers.
I have four topics on this cluster (w, x, y, z) with replication factor 2 and
2 partitions each.
To this cluster I connect with two consumers using the kafka-streams api
version 1.0.1.
Like so:
@Bean(name =
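(The bean definition above is cut off; for context, a minimal Kafka Streams configuration usually carries at least the following properties. The config keys are the standard Streams ones; the values here are placeholders, not the poster's actual settings.)

```java
import java.util.Properties;

public class StreamsConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // application.id also serves as the consumer group id of the Streams app.
        props.put("application.id", "my-streams-app");
        props.put("bootstrap.servers", "broker1:9092,broker2:9092");
        System.out.println(props.getProperty("application.id"));
    }
}
```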
Hi Nicholas,
The quickstart is meant to be run in terminals. The two commands in Step 2
should be run in different terminals unless you're sending the ZooKeeper
process to the background.
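Concretely, from the Kafka installation directory (the standard quickstart commands; the config file paths assume the default distribution layout):

```shell
# Terminal 1: start ZooKeeper
bin/zookeeper-server-start.sh config/zookeeper.properties

# Terminal 2 (a new window or tab): start the Kafka broker
bin/kafka-server-start.sh config/server.properties
```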
If you are facing particular errors please share so we can better assist
you.
Thanks.
--Vahid
+1 (non-binding)
Built executables from source and ran quickstart (Ubuntu / Java 8)
Thanks!
--Vahid
From: Brett Rann
To: d...@kafka.apache.org
Cc: Users , kafka-clients
Date: 07/10/2018 09:53 PM
Subject: Re: [VOTE] 2.0.0 RC2
+1 (non-binding)
Rolling upgrade of a shared staging multitenancy (200+ consumer groups)
cluster from 1.1.0 to 1.1.1-rc3 using the kafka_2.11-1.1.1.tgz artifact.
The cluster looks healthy after the upgrade. Lack of Burrow lag suggests
consumers are still happy, and the incoming message rate remains the same.
On
+1 (non-binding) ... built from source, ran the tests, and used it with several
of my applications without any problems.
Thanks & Regards
Jakub
On Mon, Jul 9, 2018 at 12:36 AM Dong Lin wrote:
> Hello Kafka users, developers and client-developers,
>
> This is the fourth candidate for release of
Dong Lin's KIP -
https://cwiki.apache.org/confluence/display/KAFKA/KIP-112%3A+Handle+disk+failure+for+JBOD
Should give you some ideas.
On 11 July 2018 at 14:31, Ali Nazemian wrote:
> Hi All,
>
> I was wondering what the disk recommendation is for Kafka cluster? Is it
> acceptable to use RAID0
+1 (non-binding) ... I built RC2 from source, ran the tests, and used it
with several of my applications without any problems.
Thanks & Regards
Jakub
On Tue, Jul 10, 2018 at 7:17 PM Rajini Sivaram
wrote:
> Hello Kafka users, developers and client-developers,
>
>
> This is the third candidate