Re: [akka-user] Memory Bounding Akka Streams

2016-09-29 Thread Endre Varga
On Thu, Sep 29, 2016 at 1:41 AM, Dagny Taggart 
wrote:

>
>
> LATEST updated understanding I have then is the following; and if someone
> could PLEASE correct my (Newbie) understanding with a link to a clear Blog
> Post or GitHub repo illustrating the concept!
>
> 1) The Akka Stream Graph DSL API's abstraction-level hides the details of
> handling Backpressure between Source to Sink.
>
> i.e. A Developer never has to call the Publisher or Subscriber APIs to
> initially request or signal Sink capacity for bytes of data; nor to call
> 'onNext' to get the next batch of data after processing the first batch.
> Akka Streams handles this 'under-the-covers'.
>

Yes.


>
> 2) Buffers (of specified Byte size) and with various drop strategies (e.g.
> drop oldest event, etc) can be specified on each Graph DSL API's Flow
> Stage.  This would delay the signal back to prior Stage to Source to 'slow
> down' when the (default-sized) Akka Byte memory buffer is pressured.
>

Yes.


>
> 3) At the moment; I understand that a backpressured Source will just
> appear to 'slow down'.  In which case, I suspect memory on the Source would
> be pressured with the backpressure signal if Sink is slower in processing
> Source data.
>

This is not necessarily true...


>
> SUMMARY QUESTION:
> I just don't know how to pick up that backpressure signal on an HTTP
> WebSocket Javascript client,
> so that I can show some kind of error message to the mobile or webapp user
> that they have to slow down their event activity due to system overload.
> i.e. What's an example of a high-level Akka API method that would do
> this?
>

It is actually very simple. HTTP uses TCP as its underlying protocol, and
TCP has backpressure built in. When an Akka Stream on the consumer side of
an HTTP connection backpressures, TCP automatically slows down, propagating
the backpressure all the way to the client.

On the Javascript side, various things can then happen as the local TCP
buffer fills up (because no new packets are sent while backpressured):
 - if the Javascript code makes a blocking call, the call simply will not
return until buffer space frees up locally and the packet can be enqueued;
 - if the Javascript code makes a non-blocking call that invokes a callback
once the send completes, the callback won't be invoked until local
buffer space frees up and the packet can be enqueued.
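To make the blocking case concrete, here is a toy model in plain Scala (not real TCP or Akka code): a small `ArrayBlockingQueue` stands in for the local socket send buffer, and a slow drainer thread stands in for the backpressured network. All names here are hypothetical.

```scala
import java.util.concurrent.ArrayBlockingQueue

// Stand-in for the local TCP send buffer: room for only 2 "packets".
val sendBuffer = new ArrayBlockingQueue[String](2)

// Stand-in for the (backpressured) network: drains 1 packet per 100 ms.
val drainer = new Thread(() => {
  while (true) { Thread.sleep(100); sendBuffer.take() }
})
drainer.setDaemon(true)
drainer.start()

val start = System.nanoTime()
// put() blocks until buffer space frees up, so the producer is
// automatically slowed to the consumer's pace -- the "blocking call
// does not return" case described above.
(1 to 5).foreach(i => sendBuffer.put(s"packet-$i"))
val elapsedMs = (System.nanoTime() - start) / 1000000
println(s"5 blocking sends took ~$elapsedMs ms")
```

The first two puts return immediately (the buffer has room); the remaining three each wait for the drainer, so the producer ends up running at the consumer's speed without any explicit signaling.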

-Endre



Re: [akka-user] Memory Bounding Akka Streams

2016-09-29 Thread Gonzalo Ortiz Jaureguizar
2016-09-29 1:41 GMT+02:00 Dagny Taggart :

>
>
> LATEST updated understanding I have then is the following; and if someone
> could PLEASE correct my (Newbie) understanding with a link to a clear Blog
> Post or GitHub repo illustrating the concept!
>
> 1) The Akka Stream Graph DSL API's abstraction-level hides the details of
> handling Backpressure between Source to Sink.
>
> i.e. A Developer never has to call the Publisher or Subscriber APIs to
> initially request or signal Sink capacity for bytes of data; nor to call
> 'onNext' to get the next batch of data after processing the first batch.
> Akka Streams handles this 'under-the-covers'.
>

That is correct (except the bytes part).


>
> 2) Buffers (of specified Byte size) and with various drop strategies (e.g.
> drop oldest event, etc) can be specified on each Graph DSL API's Flow
> Stage.  This would delay the signal back to prior Stage to Source to 'slow
> down' when the (default-sized) Akka Byte memory buffer is pressured.
>

That is, again, correct except for the bytes part. As I said before, each
stage has its own policy that decides when it backpressures the upstream,
but there is no built-in stage that blocks on a number of bytes. There is a
specific stage called buffer that backpressures when a specified number of
elements have been received from upstream but not yet consumed by
downstream. An element is an object, and therefore usually occupies more
than one byte. In some contexts, when you have a function that defines how
many bytes your elements occupy, you could use other built-in stages like
batchWeighted to simulate what you are describing; for example, you could
use it to batch elements that are byte buffers. It may be the case that, in
an Akka HTTP context, some specialized stage backpressures using a
byte-based condition, but I have no experience with Akka HTTP, so I cannot
help you there.
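The element-counting (not byte-counting) behaviour of the buffer stage, and in particular the dropHead overflow strategy from the original snippet, can be sketched in plain Scala. This is a toy model of the semantics, not the real Akka implementation:

```scala
import scala.collection.mutable

// Toy model of buffer(capacity, OverflowStrategy.dropHead): when a new
// element arrives and the buffer is full, the OLDEST buffered element is
// discarded instead of backpressuring the upstream. The bound counts
// elements (whole objects), not bytes.
final class DropHeadBuffer[A](capacity: Int) {
  private val queue = mutable.Queue.empty[A]

  def offer(elem: A): Unit = {
    if (queue.size == capacity) queue.dequeue() // drop the oldest element
    queue.enqueue(elem)
  }

  def drain(): List[A] = {
    val out = queue.toList
    queue.clear()
    out
  }
}

val buf = new DropHeadBuffer[Int](3)
(1 to 5).foreach(buf.offer) // 1 and 2 get dropped on the way
println(buf.drain()) // prints List(3, 4, 5)
```

With capacity 3 and five incoming elements, the two oldest are silently dropped, which is exactly the trade-off dropHead makes: bounded memory at the cost of losing the head of the queue.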


>
> 3) At the moment; I understand that a backpressured Source will just
> appear to 'slow down'.  In which case, I suspect memory on the Source would
> be pressured with the backpressure signal if Sink is slower in processing
> Source data.
>

The Akka Streams backpressure system is, in fact, very simple: a stage
cannot emit new elements downstream until the downstream asks for more. So
if you have a slow consumer at the end of your stream, a fast producer at
the beginning, and no special backpressure stage in the middle (for
example, one that drops elements), your producer will eventually be
backpressured. How your source reacts to backpressure is entirely up to
you. In my company's project, for example, the producer is an open cursor
to a database and we don't fetch new elements until our downstream asks for
more. Ideally this is fine; in practice, if the consumer is very slow, the
cursor can be closed by the remote database, so we have to react to that
(in our case it is usually enough to reopen the cursor, but in some cases
we need to fall back to other, more expensive mechanisms).
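The pull-only source described above can be sketched with a lazy iterator in plain Scala: nothing is fetched until the consumer actually asks. `fetchPage` here is a hypothetical stand-in for a database cursor read, not a real client API:

```scala
// Counts how many "pages" the pretend database cursor has served.
var pagesFetched = 0

// Hypothetical stand-in for a cursor read: one page = 10 elements.
def fetchPage(n: Int): Seq[Int] = {
  pagesFetched += 1
  (n * 10) until (n * 10 + 10)
}

// Iterators are lazy: building the pipeline fetches nothing yet.
val source: Iterator[Int] = Iterator.from(0).flatMap(fetchPage)
assert(pagesFetched == 0)

// A consumer that asks for only 15 elements pulls exactly 2 pages;
// the producer never runs ahead of downstream demand.
val consumed = source.take(15).toList
println(s"consumed ${consumed.size} elements, fetched $pagesFetched pages")
```

This is the same shape as the demand-driven cursor: the fetch happens inside the pull, so a slow consumer automatically means a slow (and memory-bounded) producer.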


>
> SUMMARY QUESTION:
> I just don't know how to pick up that backpressure signal on an HTTP
> WebSocket Javascript client,
> so that I can show some kind of error message to the mobile or webapp user
> that they have to slow down their event activity due to system overload.
> i.e. What's an example of a high-level Akka API method that would do
> this?
>

As I said, I have no experience with Akka HTTP, so I don't know exactly
what happens when the server side is backpressured.




Re: [akka-user] Memory Bounding Akka Streams

2016-09-28 Thread Dagny Taggart
The LATEST updated understanding I have is the following; if someone
could PLEASE correct my (Newbie) understanding with a link to a clear Blog
Post or GitHub repo illustrating the concept!

1) The Akka Stream Graph DSL API's abstraction-level hides the details of
handling Backpressure from Source to Sink.

i.e. A Developer never has to call the Publisher or Subscriber APIs to
initially request or signal Sink capacity for bytes of data; nor to call
'onNext' to get the next batch of data after processing the first batch.
Akka Streams handles this 'under-the-covers'.
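What Akka Streams does "under the covers" in point 1 can be illustrated with the JDK's own `java.util.concurrent.Flow` interfaces (the Java 9+ standard-library version of the Reactive Streams API). This is a minimal, synchronous sketch of the request/onNext handshake, not how Akka actually implements it:

```scala
import java.util.concurrent.Flow

// A minimal Publisher: emits at most as many elements as the Subscriber
// has requested via request(n) -- the handshake Akka hides from you.
final class RangePublisher(elems: Seq[Int]) extends Flow.Publisher[Int] {
  def subscribe(sub: Flow.Subscriber[_ >: Int]): Unit = {
    val it = elems.iterator
    sub.onSubscribe(new Flow.Subscription {
      def request(n: Long): Unit = {
        var left = n
        while (left > 0 && it.hasNext) { sub.onNext(it.next()); left -= 1 }
        if (!it.hasNext) sub.onComplete()
      }
      def cancel(): Unit = ()
    })
  }
}

val received = scala.collection.mutable.ListBuffer.empty[Int]
new RangePublisher(1 to 5).subscribe(new Flow.Subscriber[Int] {
  def onSubscribe(s: Flow.Subscription): Unit = s.request(2) // demand: 2
  def onNext(i: Int): Unit = { received += i } // no further demand signaled
  def onError(t: Throwable): Unit = ()
  def onComplete(): Unit = ()
})
println(received.toList) // prints List(1, 2)
```

Because the subscriber only ever requested 2 elements, the publisher stops after 2 `onNext` calls even though it has 5 elements available; with Akka Streams you never write this plumbing yourself.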

2) Buffers (of specified Byte size) and with various drop strategies (e.g.
drop oldest event, etc) can be specified on each Graph DSL API's Flow
Stage.  This would delay the signal back to prior Stage to Source to 'slow
down' when the (default-sized) Akka Byte memory buffer is pressured.

3) At the moment; I understand that a backpressured Source will just appear
to 'slow down'.  In which case, I suspect memory on the Source would be
pressured with the backpressure signal if Sink is slower in processing
Source data.

SUMMARY QUESTION:
I just don't know how to pick up that backpressure signal on an HTTP
WebSocket Javascript client,
so that I can show some kind of error message to the mobile or webapp user
that they have to slow down their event activity due to system overload.
i.e. What's an example of a high-level Akka API method that would do
this?

THANKS in advance!
D







Re: [akka-user] Memory Bounding Akka Streams

2016-09-25 Thread Viktor Klang
Bounded memory usage is the essence of Akka Streams tho?




-- 
Cheers,
√



[akka-user] Memory Bounding Akka Streams

2016-09-21 Thread Dagny T

Just wanted to check with folks if I had the correct implementation for how 
to protect from blowing up memory when working with Akka Streams.

I've merged a Lightbend Blog post's code, with the latest API changes for 
Akka v2.4.9, and the latest documentation about buffered streams in the 
v2.4.9 API Docs.

However, none of those explain these questions I have.  Please see question 
comments, regarding the code snippet below it!  THANKS in advance for any 
insights!

// TODO 3:  MODIFIED to calling buffered API within Graph mapping -- check assumptions!
//  - where INTERNAL Akka implementation calls onNext() to get next BUFFERED batch,
//    so you don't have to worry about it as a DEV?
//  - NUMERIC bound of 10 refers to NUMBER of elements (of possibly complex types) on a
//    UNIFORM-ELEMENT-TYPED stream, rather than Bytes, right?
//  - if source produces N < BUFFER_MAX elements; then those are simply
//    passed through the pipeline without waiting to accumulate BUFFER_MAX elements
//


inputSource.buffer(10, OverflowStrategy.dropHead) ~> f1 ~> ...

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.