I see. For windowed aggregations the disk space (i.e.
"/tmp/kafka-streams/appname") as well as the memory consumption of RocksDB
should not keep increasing forever. One thing to note is that you are using
a hopping window where a new window will be created every minute, so
within 20 minutes of "event…
My apologies. In fact the 'aggregate' step includes this: 'TimeWindows.of(20
* 60 * 1000L).advanceBy(60 * 1000L)'
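For reference, a minimal sketch of that kind of windowed aggregation in the 0.10.x Streams DSL; the topic name, serdes, and aggregation logic below are placeholders rather than the actual app's (newer releases use StreamsBuilder/Materialized instead):

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

public class WindowedAggSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "appname");            // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder

        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, String> stream =
                builder.stream(Serdes.String(), Serdes.String(), "input-topic");  // placeholder topic

        // 20-minute windows advancing every minute: a hopping window, so each
        // record falls into up to 20 overlapping windows, each of which keeps
        // its own aggregate in the local RocksDB store.
        KTable<Windowed<String>, Long> agg = stream
                .groupByKey(Serdes.String(), Serdes.String())
                .aggregate(
                        () -> 0L,                              // initializer
                        (key, value, sum) -> sum + 1L,         // aggregator (placeholder logic)
                        TimeWindows.of(20 * 60 * 1000L).advanceBy(60 * 1000L),
                        Serdes.Long(),
                        "agg-store");                          // local state store name

        new KafkaStreams(builder, props).start();
    }
}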
On Tue, Nov 29, 2016 at 9:12 PM, Guozhang Wang wrote:
> Where does the "20 minutes" come from? I thought the "aggregate" operator
> in your
>
> stream->aggregate->filter->foreach
>
>
Where does the "20 minutes" come from? I thought the "aggregate" operator
in your
stream->aggregate->filter->foreach
topology is not a windowed aggregation, so the aggregate results will keep
accumulating.
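For contrast, a non-windowed aggregate such as the sketch below keeps one ever-growing entry per key in its local store, so the store is only bounded if the key space is (imports and setup as in the windowed sketch above; names and logic are again placeholders):

KStreamBuilder builder = new KStreamBuilder();
KStream<String, String> stream =
        builder.stream(Serdes.String(), Serdes.String(), "input-topic");  // placeholder topic

// Non-windowed: one aggregate per key, retained indefinitely in the local
// RocksDB store, so with an unbounded key space the store grows without limit.
KTable<String, Long> totals = stream
        .groupByKey(Serdes.String(), Serdes.String())
        .aggregate(
                () -> 0L,                        // initializer
                (key, value, sum) -> sum + 1L,   // aggregator (placeholder logic)
                Serdes.Long(),
                "agg-store");

totals.toStream()
        .filter((key, total) -> total > 100L)                              // placeholder predicate
        .foreach((key, total) -> System.out.println(key + " -> " + total));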
Guozhang
On Tue, Nov 29, 2016 at 8:40 PM, Jon Yeargers wrote:
> "keep increasing" - w
"keep increasing" - why? It seems (to me) that the aggregates should be 20
minutes long. After that the memory should be released.
Not true?
On Tue, Nov 29, 2016 at 3:53 PM, Guozhang Wang wrote:
Jon,
Note that in your "aggregate" function, if it is now windowed aggregate
then the aggregation results will keep increasing in your local state
stores unless you're pretty sure that the aggregate key space is bounded.
This is not only related to disk space but also memory since the current
defa
App eventually got OOM-killed. Consumed 53G of swap space.
Does it require a different GC? Some extra settings for the java cmd line?
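Separate from any leak, if your Streams build supports the rocksdb.config.setter option, you can at least bound how much off-heap memory each RocksDB store is allowed to use. A rough sketch (the class and the sizes are illustrative, not recommendations):

import java.util.Map;

import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

// Caps the per-store RocksDB memory (block cache + memtables).
public class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {
    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockCacheSize(16 * 1024 * 1024L);   // 16 MB block cache per store (placeholder)
        options.setTableFormatConfig(tableConfig);
        options.setWriteBufferSize(8 * 1024 * 1024L);       // 8 MB memtable (placeholder)
        options.setMaxWriteBufferNumber(2);                 // at most two memtables per store
    }
}

It is registered with `props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, BoundedMemoryRocksDBConfig.class)`; keep in mind that windowed stores are segmented internally, so these per-instance figures multiply across segments and stores.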
On Tue, Nov 29, 2016 at 12:05 PM, Jon Yeargers wrote:
I cloned/built 0.10.2.0-SNAPSHOT
App hasn't been OOM-killed yet but it's up to 66% mem.
App takes > 10 min to start now. Needless to say this is problematic.
The 'kafka-streams' scratch space has consumed 37G and still climbing.
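If the on-disk growth turns out to be legitimate windowed state rather than a leak, it may still help to move it off /tmp onto a volume you can size and monitor; the location is controlled by state.dir (the path below is just an example):

// In the Streams configuration:
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "appname");            // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
// Default is /tmp/kafka-streams/<application.id>.
props.put(StreamsConfig.STATE_DIR_CONFIG, "/data/kafka-streams");     // example path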
On Tue, Nov 29, 2016 at 10:48 AM, Jon Yeargers wrote:
Does every broker need to be updated or just my client app(s)?
On Tue, Nov 29, 2016 at 10:46 AM, Matthias J. Sax wrote:
What version are you using?
There is a memory leak in the latest version, 0.10.1.0. The bug has
already been fixed in trunk and in the 0.10.1 branch.
There is already a discussion about a 0.10.1.1 bug-fix release. For now,
you could build Kafka Streams from the sources yourself.
-Matthias
My KStreams app seems to be having some memory issues.
1. I start it `java -Xmx8G -jar .jar`
2. Wait 5-10 minutes - see lots of 'org.apache.zookeeper.ClientCnxn - Got
ping response for sessionid: 0xc58abee3e13 after 0ms' messages
3. When it _finally_ starts reading values it typically goes…
Oh, I am talking about another memory leak: the off-heap memory leak we had
experienced, which is about Direct Buffer memory. The call stack is as below.
ReplicaFetcherThread.warn - [ReplicaFetcherThread-4-1463989770], Error in
fetch kafka.server.ReplicaFetcherThread$FetchRequest@7f4c1657. Possible
cause: …
Hi (I'm the author of that ticket):
From my understanding, limiting MaxDirectMemory won't work around this memory
leak. The leak is inside the JVM's implementation, not in normal direct
buffers. On one of our brokers with this issue, we're seeing the JVM report
1.2 GB of heap and 128 MB of off-heap memory…
Please refer to KAFKA-3933.
A workaround is -XX:MaxDirectMemorySize=1024m if your call stack shows
direct buffer issues (effectively off-heap memory).
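To confirm whether direct (off-heap) buffers are actually what is growing before or after applying that flag, the JVM exposes its buffer pools through the platform MXBeans; a small illustrative probe (the class name is made up):

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

// Prints the JVM's buffer pool usage ("direct" and "mapped").
// Run inside the affected process, or read the same beans over JMX,
// to see whether direct-buffer usage is what is actually climbing.
public class DirectMemoryProbe {
    public static void main(String[] args) {
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s: count=%d, used=%d bytes, capacity=%d bytes%n",
                    pool.getName(), pool.getCount(), pool.getMemoryUsed(), pool.getTotalCapacity());
        }
    }
}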
On Wed, May 11, 2016 at 9:50 AM, Russ Lavoie wrote:
Good Afternoon,
I am currently trying to do a rolling upgrade from Kafka 0.8.2.1 to 0.9.0.1
and am running into a problem when starting 0.9.0.1 with the protocol
version 0.8.2.1 set in the server.properties.
Here is my current Kafka topic setup, data retention and hardware used:
3 ZooKeeper nodes…