Re: [akka-user][deprecated] Re: [akka-user] Spray->Akka-Http Migration - seeing high 99th percentile latencies post-migration

2018-10-08 Thread Gary Malouf
We ultimately decided to roll out despite this glitch.  Not happy about it,
and hoping whatever is causing this gets resolved in a future release.  My
hunch is that it's a fixed cost being paid per request that would become
unnoticeable if thousands more requests/second were sent to the app.



On Sun, Oct 7, 2018 at 11:18 AM Avshalom Manevich 
wrote:

> Hi Gary,
>
> Did you end up finding a solution to this?
>
> We're hitting a similar issue with Akka HTTP (10.0.11) and a low-load
> server.
>
> Average latency is great but 99th percentile is horrible (~200ms).
>
> Appreciate your input.
>
> Regards,
> Avshalom
>
>
> I wonder if you could start a timer when you enter the trace block and
>> then e.g. after 200ms trigger one or multiple stack dumps (using JMX or
>> just by printing out the result of `Thread.getAllStackTraces`). It's not
>> super likely that something will turn up but it seems like a simple enough
>> thing to try.
>>
>> Johannes
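A minimal sketch of Johannes' suggestion, using only the JDK plus the Scala standard library (names are illustrative, not from the thread):

import java.util.concurrent.{Executors, TimeUnit}
import scala.jdk.CollectionConverters._

object SlowRequestProbe {
  private val scheduler = Executors.newSingleThreadScheduledExecutor()

  // Run `body`; if it is still running after `thresholdMillis`, print every
  // thread's stack so the blocked spot can be inspected afterwards.
  def dumpStacksIfSlow[T](thresholdMillis: Long)(body: => T): T = {
    val dump = scheduler.schedule(new Runnable {
      def run(): Unit =
        Thread.getAllStackTraces.asScala.foreach { case (thread, frames) =>
          println(s"--- ${thread.getName} ---")
          frames.foreach(frame => println(s"  at $frame"))
        }
    }, thresholdMillis, TimeUnit.MILLISECONDS)
    try body finally dump.cancel(false) // skip the dump if we finished in time
  }
}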
>>
>> On Thursday, November 16, 2017 at 1:28:23 PM UTC+1, Gary Malouf wrote:
>>
>>> Hi Johannes,
>>>
>>> Yes; we are seeing 2-3 requests/second (only in production) with the
>>> latency spikes.  We found no correlation between the gc times and these
>>> request latencies, nor between the size/type of requests.
>>>
>>> We had to pause the migration effort for 2 weeks because of the time it
>>> was taking, but just jumped back on it the other day.
>>>
>>> Our current strategy is to implement this with the low-level API to see
>>> if we get the same results.
>>>
>>> Gary
>>>
>>> On Nov 16, 2017 6:57 AM,  wrote:
>>>
>>> Hi Gary,
>>>
>>> did you find out what's going on by now? If I understand correctly, you
>>> get latency spikes as soon as you use the `entity(as[String])` directive?
>>> Could you narrow down if there's anything special to those requests? I
>>> guess you monitor your GC times?
>>>
>>> Johannes
>>>
>>>
>>> On Wednesday, November 1, 2017 at 8:56:50 PM UTC+1, Gary Malouf wrote:
>>>
>>>> So the only way I was able to successfully identify the suspicious code
>>>> was to route a percentage of my production traffic to a stubbed route that
>>>> I incrementally added back pieces of our implementation into.  What I found
>>>> was that we started getting spikes when the entity(as[CaseClassFromJson])
>>>> stub was added back in.  To figure out if it was the json
>>>> parsing or 'POST' entity consumption itself, I replaced that class with a
>>>> string - turns out we experience the latency spikes with that as well (on
>>>> low traffic as noted earlier in this thread).
>>>>
>>>> I by no means have a deep understanding of streams, but it makes me
>>>> wonder if the way I have our code consuming the entity is not correct.
>>>>
>>>> On Monday, October 30, 2017 at 4:27:13 PM UTC-4, Gary Malouf wrote:
>>>>
>>>>> Hi Roland - thank you for the tip.  We shrunk the thread pool size
>>>>> down to 1, but were disheartened to still see the latency spikes.  Using
>>>>> Kamon's tracing library (which we validated with various tests to ensure
>>>>> its own numbers are most likely correct), we could not find anything in
>>>>> our code within the route that was causing the latency (it all appeared to
>>>>> be attributed to the route as a whole, not to any code segment within it).
>>>>>
>>>>> As mentioned earlier, running loads of 100-1000 requests/second
>>>>> completely hides the issue (save for the max latency), as everything
>>>>> through the 99th percentile is under a few milliseconds.
>>>>>
>>>>> On Tuesday, October 24, 2017 at 2:23:07 AM UTC-4, rkuhn wrote:
>>>>>
>>>>>> You could try to decrease your thread pool size to 1 to exclude
>>>>>> wakeup latencies when things (like CPU cores) have gone to sleep.
>>>>>>
>>>>>> Regards, Roland
>>>>>>
>>>>>> Sent from my iPhone
>>>>>>
>>>>>> On 23. Oct 2017, at 22:49, Gary Malouf  wrote:
>>>>>>
>>>>>> Yes, it gets parsed using entity(as[]) with spray-json support.
>>>>>> Under a load test of say 1000 requests/second these latencies are not
>>>>>> visible in the percentiles - they are easy to see because this web
>>>>>> server is getting 10-20 requests/second currently.

Re: [akka-user] Spray->Akka-Http Migration - seeing high 99th percentile latencies post-migration

2017-11-16 Thread Gary Malouf
Hi Johannes,

Yes; we are seeing 2-3 requests/second (only in production) with the
latency spikes.  We found no correlation between the gc times and these
request latencies, nor between the size/type of requests.

We had to pause the migration effort for 2 weeks because of the time it was
taking, but just jumped back on it the other day.

Our current strategy is to implement this with the low-level API to see if
we get the same results.

Gary
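For reference, a minimal sketch of such a low-level-API experiment (Akka HTTP 10.0.x style; the handler contents are placeholders, not the production code):

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model._
import akka.stream.ActorMaterializer

object LowLevelServer extends App {
  implicit val system: ActorSystem = ActorSystem("low-level")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  // Synchronous handler: no routing DSL, no directives.
  val handler: HttpRequest => HttpResponse = {
    case HttpRequest(HttpMethods.POST, Uri.Path("/foos"), _, entity, _) =>
      entity.discardBytes() // always drain the streamed entity
      HttpResponse(entity = "ok")
    case request =>
      request.discardEntityBytes()
      HttpResponse(StatusCodes.NotFound)
  }

  Http().bindAndHandleSync(handler, "127.0.0.1", 8080)
}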

On Nov 16, 2017 6:57 AM,  wrote:

Hi Gary,

did you find out what's going on by now? If I understand correctly, you get
latency spikes as soon as you use the `entity(as[String])` directive? Could
you narrow down if there's anything special to those requests? I guess you
monitor your GC times?

Johannes


On Wednesday, November 1, 2017 at 8:56:50 PM UTC+1, Gary Malouf wrote:
>
> So the only way I was able to successfully identify the suspicious code
> was to route a percentage of my production traffic to a stubbed route that
> I incrementally added back pieces of our implementation into.  What I found
> was that we started getting spikes when the entity(as[CaseClassFromJson])
> stub was added back in.  To figure out if it was the json parsing or 'POST'
> entity consumption itself, I replaced that class with a string - turns out
> we experience the latency spikes with that as well (on low traffic as noted
> earlier in this thread).
>
> I by no means have a deep understanding of streams, but it makes me wonder
> if the way I have our code consuming the entity is not correct.
>
> On Monday, October 30, 2017 at 4:27:13 PM UTC-4, Gary Malouf wrote:
>>
>> Hi Roland - thank you for the tip.  We shrunk the thread pool size down
>> to 1, but were disheartened to still see the latency spikes.  Using Kamon's
>> tracing library (which we validated with various tests to ensure its own
>> numbers are most likely correct), we could not find anything in our code
>> within the route that was causing the latency (it all appeared to be
>> attributed to the route as a whole, not to any code segment within it).
>>
>> As mentioned earlier, running loads of 100-1000 requests/second
>> completely hides the issue (save for the max latency), as everything through
>> the 99th percentile is under a few milliseconds.
>>
>> On Tuesday, October 24, 2017 at 2:23:07 AM UTC-4, rkuhn wrote:
>>>
>>> You could try to decrease your thread pool size to 1 to exclude wakeup
>>> latencies when things (like CPU cores) have gone to sleep.
>>>
>>> Regards, Roland
>>>
>>> Sent from my iPhone
>>>
>>> On 23. Oct 2017, at 22:49, Gary Malouf  wrote:
>>>
>>> Yes, it gets parsed using entity(as[]) with spray-json support.  Under a
>>> load test of say 1000 requests/second these latencies are not visible in
>>> the percentiles - they are easy to see because this web server is getting
>>> 10-20 requests/second currently.  Trying to brainstorm if a dispatcher
>>> needed to be tuned or something of that sort but have yet to see evidence
>>> supporting that.
>>>
>>> path("foos") {
>>> traceName("FooSelection") {
>>> entity(as[ExternalPageRequest]) { pr =>
>>> val spr = toSelectionPageRequest(pr)
>>> shouldTracePageId(spr.pageId).fold(
>>> Tracer.currentContext.withNewSegment(s"Page-${pr.pageId}", "PageTrace",
>>> "kamon") {
>>> processPageRequestAndComplete(pr, spr)
>>> },
>>> processPageRequestAndComplete(pr, spr)
>>> )
>>> }
>>> }
>>>
>>> }
>>>
>>> On Mon, Oct 23, 2017 at 4:42 PM, Viktor Klang 
>>> wrote:
>>>
>>>> And you consume the entityBytes I presume?
>>>>
>>>> On Mon, Oct 23, 2017 at 10:35 PM, Gary Malouf 
>>>> wrote:
>>>>
>>>>> It is from when I start the Kamon trace (just inside of my
>>>>> path("myawesomepath") declaration until (theoretically) a 'complete' call
>>>>> is made.
>>>>>
>>>>> path("myawesomepath") {
>>>>>   traceName("CoolStory") {
>>>>> ///do some stuff
>>>>>  complete("This is great")
>>>>> } }
>>>>>
>>>>> For what it's worth, this route is a 'POST' call.
>>>>>
>>>>> On Mon, Oct 23, 2017 at 4:30 PM, Viktor Klang 
>>>>> wrote:
>>>>>
>>>>>> No, I mean, is it from first-byte-received to last-byte-sent or what?
>>>>>

Re: [akka-user] Spray->Akka-Http Migration - seeing high 99th percentile latencies post-migration

2017-11-01 Thread Gary Malouf
So the only way I was able to successfully identify the suspicious code was 
to route a percentage of my production traffic to a stubbed route that I 
incrementally added back pieces of our implementation into.  What I found 
was that we started getting spikes when the entity(as[CaseClassFromJson])
stub was added back in.  To figure out if it was the json parsing or 'POST'
entity consumption itself, I replaced that class with a string - turns out 
we experience the latency spikes with that as well (on low traffic as noted 
earlier in this thread).  

I by no means have a deep understanding of streams, but it makes me wonder 
if the way I have our code consuming the entity is not correct.
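One explicit way to consume the entity - a sketch only, with an illustrative timeout and route shape - is to force it into memory with toStrict first, which guarantees the streamed entity is always fully drained:

import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route
import scala.concurrent.duration._

val route: Route =
  path("foos") {
    post {
      extractMaterializer { implicit mat =>
        extractExecutionContext { implicit ec =>
          extractRequest { request =>
            // Pull the whole entity into memory, then work with the bytes.
            onSuccess(request.entity.toStrict(3.seconds).map(_.data.utf8String)) { body =>
              complete(s"received ${body.length} bytes")
            }
          }
        }
      }
    }
  }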

On Monday, October 30, 2017 at 4:27:13 PM UTC-4, Gary Malouf wrote:
>
> Hi Roland - thank you for the tip.  We shrunk the thread pool size down to 
> 1, but were disheartened to still see the latency spikes.  Using Kamon's 
> tracing library (which we validated with various tests to ensure its own 
> numbers are most likely correct), we could not find anything in our code 
> within the route that was causing the latency (it all appeared to be 
> attributed to the route as a whole, not to any code segment within it).
>
> As mentioned earlier, running loads of 100-1000 requests/second completely 
> hides the issue (save for the max latency), as everything through the 99th 
> percentile is under a few milliseconds.
>
> On Tuesday, October 24, 2017 at 2:23:07 AM UTC-4, rkuhn wrote:
>>
>> You could try to decrease your thread pool size to 1 to exclude wakeup 
>> latencies when things (like CPU cores) have gone to sleep.
>>
>> Regards, Roland 
>>
>> Sent from my iPhone
>>
>> On 23. Oct 2017, at 22:49, Gary Malouf  wrote:
>>
>> Yes, it gets parsed using entity(as[]) with spray-json support.  Under a 
>> load test of say 1000 requests/second these latencies are not visible in 
>> the percentiles - they are easy to see because this web server is getting 
>> 10-20 requests/second currently.  Trying to brainstorm if a dispatcher 
>> needed to be tuned or something of that sort but have yet to see evidence 
>> supporting that.
>>
>> path("foos") { 
>> traceName("FooSelection") {
>> entity(as[ExternalPageRequest]) { pr => 
>> val spr = toSelectionPageRequest(pr) 
>> shouldTracePageId(spr.pageId).fold( 
>> Tracer.currentContext.withNewSegment(s"Page-${pr.pageId}", "PageTrace", "
>> kamon") { 
>> processPageRequestAndComplete(pr, spr) 
>> }, 
>> processPageRequestAndComplete(pr, spr) 
>> ) 
>> }
>> } 
>>
>> }
>>
>> On Mon, Oct 23, 2017 at 4:42 PM, Viktor Klang  
>> wrote:
>>
>>> And you consume the entityBytes I presume?
>>>
>>> On Mon, Oct 23, 2017 at 10:35 PM, Gary Malouf  
>>> wrote:
>>>
>>>> It is from when I start the Kamon trace (just inside of my 
>>>> path("myawesomepath") declaration until (theoretically) a 'complete' call 
>>>> is made.  
>>>>
>>>> path("myawesomepath") {
>>>>   traceName("CoolStory") {
>>>> ///do some stuff
>>>>  complete("This is great")
>>>> } }
>>>>
>>>> For what it's worth, this route is a 'POST' call.
>>>>
>>>> On Mon, Oct 23, 2017 at 4:30 PM, Viktor Klang  
>>>> wrote:
>>>>
>>>>> No, I mean, is it from first-byte-received to last-byte-sent or what?
>>>>>
>>>>> On Mon, Oct 23, 2017 at 10:22 PM, Gary Malouf  
>>>>> wrote:
>>>>>
>>>>>> We are using percentiles computed via Kamon 0.6.8.  In a very low 
>>>>>> request rate environment like this, it takes roughly 1 super slow 
>>>>>> request/second to throw off the percentiles (which is what I think is 
>>>>>> happening).  
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, Oct 23, 2017 at 4:20 PM, Viktor Klang  
>>>>>> wrote:
>>>>>>
>>>>>>> What definition of latency are you using? (i.e. how is it derived)
>>>>>>>
>>>>>>> On Mon, Oct 23, 2017 at 10:11 PM, Gary Malouf  
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi Konrad,
>>>>>>>>
>>>>>>>> Our real issue is that we can not reproduce the results.  The web 
>>>>>>>> server we are having latency issues with is under peak load of 10-15
>>>>>>>> requests/second - obviously not much to deal with.

Re: [akka-user] Spray->Akka-Http Migration - seeing high 99th percentile latencies post-migration

2017-10-30 Thread Gary Malouf
Hi Roland - thank you for the tip.  We shrunk the thread pool size down to 
1, but were disheartened to still see the latency spikes.  Using Kamon's 
tracing library (which we validated with various tests to ensure its own 
numbers are most likely correct), we could not find anything in our code 
within the route that was causing the latency (it all appeared to be 
attributed to the route as a whole, not to any code segment within it).

As mentioned earlier, running loads of 100-1000 requests/second completely 
hides the issue (save for the max latency), as everything through the 99th 
percentile is under a few milliseconds.
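For anyone repeating the experiment: the shrink-to-one-thread step is plain dispatcher configuration; a sketch in application.conf (values illustrative):

akka.actor.default-dispatcher {
  executor = "fork-join-executor"
  fork-join-executor {
    # pin the pool to a single thread to rule out core wake-up latency
    parallelism-min = 1
    parallelism-max = 1
  }
}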

On Tuesday, October 24, 2017 at 2:23:07 AM UTC-4, rkuhn wrote:
>
> You could try to decrease your thread pool size to 1 to exclude wakeup 
> latencies when things (like CPU cores) have gone to sleep.
>
> Regards, Roland 
>
> Sent from my iPhone
>
> On 23. Oct 2017, at 22:49, Gary Malouf  wrote:
>
> Yes, it gets parsed using entity(as[]) with spray-json support.  Under a 
> load test of say 1000 requests/second these latencies are not visible in 
> the percentiles - they are easy to see because this web server is getting 
> 10-20 requests/second currently.  Trying to brainstorm if a dispatcher 
> needed to be tuned or something of that sort but have yet to see evidence 
> supporting that.
>
> path("foos") { 
> traceName("FooSelection") {
> entity(as[ExternalPageRequest]) { pr => 
> val spr = toSelectionPageRequest(pr) 
> shouldTracePageId(spr.pageId).fold( 
> Tracer.currentContext.withNewSegment(s"Page-${pr.pageId}", "PageTrace", "
> kamon") { 
> processPageRequestAndComplete(pr, spr) 
> }, 
> processPageRequestAndComplete(pr, spr) 
> ) 
> }
> } 
>
> }
>
> On Mon, Oct 23, 2017 at 4:42 PM, Viktor Klang  wrote:
>
>> And you consume the entityBytes I presume?
>>
>> On Mon, Oct 23, 2017 at 10:35 PM, Gary Malouf  wrote:
>>
>>> It is from when I start the Kamon trace (just inside of my 
>>> path("myawesomepath") declaration until (theoretically) a 'complete' call 
>>> is made.  
>>>
>>> path("myawesomepath") {
>>>   traceName("CoolStory") {
>>> ///do some stuff
>>>  complete("This is great")
>>> } }
>>>
>>> For what it's worth, this route is a 'POST' call.
>>>
>>> On Mon, Oct 23, 2017 at 4:30 PM, Viktor Klang  wrote:
>>>
>>>> No, I mean, is it from first-byte-received to last-byte-sent or what?
>>>>
>>>> On Mon, Oct 23, 2017 at 10:22 PM, Gary Malouf  wrote:
>>>>
>>>>> We are using percentiles computed via Kamon 0.6.8.  In a very low 
>>>>> request rate environment like this, it takes roughly 1 super slow 
>>>>> request/second to throw off the percentiles (which is what I think is 
>>>>> happening).  
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Oct 23, 2017 at 4:20 PM, Viktor Klang  wrote:
>>>>>
>>>>>> What definition of latency are you using? (i.e. how is it derived)
>>>>>>
>>>>>> On Mon, Oct 23, 2017 at 10:11 PM, Gary Malouf  wrote:
>>>>>>
>>>>>>> Hi Konrad,
>>>>>>>
>>>>>>> Our real issue is that we can not reproduce the results.  The web 
>>>>>>> server we are having latency issues with is under peak load of 10-15 
>>>>>>> requests/second - obviously not much to deal with.  
>>>>>>>
>>>>>>> When we use load tests (https://github.com/apigee/apib), it's easy 
>>>>>>> for us to throw a few thousand requests/second at it and get latencies 
>>>>>>> in 
>>>>>>> the ~ 3 ms range.  We use kamon to track internal metrics - what we see 
>>>>>>> is 
>>>>>>> that our 95th and 99th percentiles only look bad under the production 
>>>>>>> traffic but not under load tests.  
>>>>>>>
>>>>>>> I've since used kamon to print out the actual requests trying to 
>>>>>>> find any pattern in them to hint at what's wrong in my own code, but 
>>>>>>> they 
>>>>>>> seem to be completely random.  What we do know is that downgrading to 
>>>>>>> spray 
>>>>>>> gets us 99.9th percentile latencies under 2ms, so something related to
>>>>>>> the upgrade is allowing this.

Re: [akka-user] Spray->Akka-Http Migration - seeing high 99th percentile latencies post-migration

2017-10-23 Thread Gary Malouf
Yes, it gets parsed using entity(as[]) with spray-json support.  Under a
load test of say 1000 requests/second these latencies are not visible in
the percentiles - they are easy to see because this web server is getting
10-20 requests/second currently.  Trying to brainstorm if a dispatcher
needed to be tuned or something of that sort but have yet to see evidence
supporting that.

path("foos") {
traceName("FooSelection") {
entity(as[ExternalPageRequest]) { pr =>
val spr = toSelectionPageRequest(pr)
shouldTracePageId(spr.pageId).fold(
Tracer.currentContext.withNewSegment(s"Page-${pr.pageId}", "PageTrace", "
kamon") {
processPageRequestAndComplete(pr, spr)
},
processPageRequestAndComplete(pr, spr)
)
}
}

}

On Mon, Oct 23, 2017 at 4:42 PM, Viktor Klang 
wrote:

> And you consume the entityBytes I presume?
>
> On Mon, Oct 23, 2017 at 10:35 PM, Gary Malouf 
> wrote:
>
>> It is from when I start the Kamon trace (just inside of my
>> path("myawesomepath") declaration until (theoretically) a 'complete' call
>> is made.
>>
>> path("myawesomepath") {
>>   traceName("CoolStory") {
>> ///do some stuff
>>  complete("This is great")
>> } }
>>
>> For what it's worth, this route is a 'POST' call.
>>
>> On Mon, Oct 23, 2017 at 4:30 PM, Viktor Klang 
>> wrote:
>>
>>> No, I mean, is it from first-byte-received to last-byte-sent or what?
>>>
>>> On Mon, Oct 23, 2017 at 10:22 PM, Gary Malouf 
>>> wrote:
>>>
>>>> We are using percentiles computed via Kamon 0.6.8.  In a very low
>>>> request rate environment like this, it takes roughly 1 super slow
>>>> request/second to throw off the percentiles (which is what I think is
>>>> happening).
>>>>
>>>>
>>>>
>>>> On Mon, Oct 23, 2017 at 4:20 PM, Viktor Klang 
>>>> wrote:
>>>>
>>>>> What definition of latency are you using? (i.e. how is it derived)
>>>>>
>>>>> On Mon, Oct 23, 2017 at 10:11 PM, Gary Malouf 
>>>>> wrote:
>>>>>
>>>>>> Hi Konrad,
>>>>>>
>>>>>> Our real issue is that we can not reproduce the results.  The web
>>>>>> server we are having latency issues with is under peak load of 10-15
>>>>>> requests/second - obviously not much to deal with.
>>>>>>
>>>>>> When we use load tests (https://github.com/apigee/apib), it's easy
>>>>>> for us to throw a few thousand requests/second at it and get latencies in
>>>>>> the ~ 3 ms range.  We use kamon to track internal metrics - what we see 
>>>>>> is
>>>>>> that our 95th and 99th percentiles only look bad under the production
>>>>>> traffic but not under load tests.
>>>>>>
>>>>>> I've since used kamon to print out the actual requests trying to find
>>>>>> any pattern in them to hint at what's wrong in my own code, but they seem
>>>>>> to be completely random.  What we do know is that downgrading to spray 
>>>>>> gets
>>>>>> us 99.9th percentile latencies under 2ms, so something related to the
>>>>>> upgrade is allowing this.
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Gary
>>>>>>
>>>>>> On Tuesday, October 17, 2017 at 12:07:51 PM UTC-4, Konrad Malawski
>>>>>> wrote:
>>>>>>>
>>>>>>> Step 1 – don’t panic ;-)
>>>>>>> Step 2 – as I already asked for, please share actual details of the
>>>>>>> benchmarks. It is not good to discuss benchmarks without any insight 
>>>>>>> into
>>>>>>> what / how exactly you’re measuring.
>>>>>>>
>>>>>>> --
>>>>>>> Cheers,
>>>>>>> Konrad 'ktoso <http://kto.so>' Malawski
>>>>>>> Akka <http://akka.io/> @ Lightbend <http://lightbend.com/>
>>>>>>>
>>>>>>> On October 12, 2017 at 15:31:19, Gary Malouf (malou...@gmail.com)
>>>>>>> wrote:
>>>>>>>
>>>>>>> We have a web service that we just finished migrating from spray 1.3
>>>>>>> to Akka-Http 10.0.9.  While in most cases it is performing well, we are
>>>>>>> seeing terrible 99th percentile latencies (300-450ms range) starting
>>>>>>> from a very low request rate (10/second) on an ec2 m3.large.

Re: [akka-user] Spray->Akka-Http Migration - seeing high 99th percentile latencies post-migration

2017-10-23 Thread Gary Malouf
It is from when I start the Kamon trace (just inside of my
path("myawesomepath") declaration until (theoretically) a 'complete' call
is made.

path("myawesomepath") {
  traceName("CoolStory") {
///do some stuff
 complete("This is great")
} }

For what it's worth, this route is a 'POST' call.

On Mon, Oct 23, 2017 at 4:30 PM, Viktor Klang 
wrote:

> No, I mean, is it from first-byte-received to last-byte-sent or what?
>
> On Mon, Oct 23, 2017 at 10:22 PM, Gary Malouf 
> wrote:
>
>> We are using percentiles computed via Kamon 0.6.8.  In a very low request
>> rate environment like this, it takes roughly 1 super slow request/second to
>> throw off the percentiles (which is what I think is happening).
>>
>>
>>
>> On Mon, Oct 23, 2017 at 4:20 PM, Viktor Klang 
>> wrote:
>>
>>> What definition of latency are you using? (i.e. how is it derived)
>>>
>>> On Mon, Oct 23, 2017 at 10:11 PM, Gary Malouf 
>>> wrote:
>>>
>>>> Hi Konrad,
>>>>
>>>> Our real issue is that we can not reproduce the results.  The web
>>>> server we are having latency issues with is under peak load of 10-15
>>>> requests/second - obviously not much to deal with.
>>>>
>>>> When we use load tests (https://github.com/apigee/apib), it's easy for
>>>> us to throw a few thousand requests/second at it and get latencies in the ~
>>>> 3 ms range.  We use kamon to track internal metrics - what we see is that
>>>> our 95th and 99th percentiles only look bad under the production traffic
>>>> but not under load tests.
>>>>
>>>> I've since used kamon to print out the actual requests trying to find
>>>> any pattern in them to hint at what's wrong in my own code, but they seem
>>>> to be completely random.  What we do know is that downgrading to spray gets
>>>> us 99.9th percentile latencies under 2ms, so something related to the
>>>> upgrade is allowing this.
>>>>
>>>> Thanks,
>>>>
>>>> Gary
>>>>
>>>> On Tuesday, October 17, 2017 at 12:07:51 PM UTC-4, Konrad Malawski
>>>> wrote:
>>>>>
>>>>> Step 1 – don’t panic ;-)
>>>>> Step 2 – as I already asked for, please share actual details of the
>>>>> benchmarks. It is not good to discuss benchmarks without any insight into
>>>>> what / how exactly you’re measuring.
>>>>>
>>>>> --
>>>>> Cheers,
>>>>> Konrad 'ktoso <http://kto.so>' Malawski
>>>>> Akka <http://akka.io/> @ Lightbend <http://lightbend.com/>
>>>>>
>>>>> On October 12, 2017 at 15:31:19, Gary Malouf (malou...@gmail.com)
>>>>> wrote:
>>>>>
>>>>> We have a web service that we just finished migrating from spray 1.3
>>>>> to Akka-Http 10.0.9.  While in most cases it is performing well, we are
>>>>> seeing terrible 99th percentile latencies (300-450ms range) starting from a
>>>>> very low request rate (10/second) on an ec2 m3.large.
>>>>>
>>>>> Our service does not do anything complicated - it does a few Map
>>>>> lookups and returns a response to a request.  In spray, even 99th
>>>>> percentile latencies were on the order of 1-3 ms, so we are definitely
>>>>> concerned.  Connections, as with many pixel-type servers, are short-lived ->
>>>>> we actually pass the Connection: Close header intentionally in our
>>>>> responses.
>>>>>
>>>>> Is there any obvious tuning that should be done on the server
>>>>> configuration that others have found?

Re: [akka-user] Spray->Akka-Http Migration - seeing high 99th percentile latencies post-migration

2017-10-23 Thread Gary Malouf
We are using percentiles computed via Kamon 0.6.8.  In a very low request
rate environment like this, it takes roughly 1 super slow request/second to
throw off the percentiles (which is what I think is happening).
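To put numbers on that: at 10-20 requests/second, the worst 1% is only 0.1-0.2 requests each second, so a single ~200ms request per second is enough to dominate the 99th percentile on its own.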



On Mon, Oct 23, 2017 at 4:20 PM, Viktor Klang 
wrote:

> What definition of latency are you using? (i.e. how is it derived)
>
> On Mon, Oct 23, 2017 at 10:11 PM, Gary Malouf 
> wrote:
>
>> Hi Konrad,
>>
>> Our real issue is that we can not reproduce the results.  The web server
>> we are having latency issues with is under peak load of 10-15
>> requests/second - obviously not much to deal with.
>>
>> When we use load tests (https://github.com/apigee/apib), it's easy for
>> us to throw a few thousand requests/second at it and get latencies in the ~
>> 3 ms range.  We use kamon to track internal metrics - what we see is that
>> our 95th and 99th percentiles only look bad under the production traffic
>> but not under load tests.
>>
>> I've since used kamon to print out the actual requests trying to find any
>> pattern in them to hint at what's wrong in my own code, but they seem to be
>> completely random.  What we do know is that downgrading to spray gets us
>> 99.9th percentile latencies under 2ms, so something related to the upgrade
>> is allowing this.
>>
>> Thanks,
>>
>> Gary
>>
>> On Tuesday, October 17, 2017 at 12:07:51 PM UTC-4, Konrad Malawski wrote:
>>>
>>> Step 1 – don’t panic ;-)
>>> Step 2 – as I already asked for, please share actual details of the
>>> benchmarks. It is not good to discuss benchmarks without any insight into
>>> what / how exactly you’re measuring.
>>>
>>> --
>>> Cheers,
>>> Konrad 'ktoso <http://kto.so>' Malawski
>>> Akka <http://akka.io/> @ Lightbend <http://lightbend.com/>
>>>
>>> On October 12, 2017 at 15:31:19, Gary Malouf (malou...@gmail.com) wrote:
>>>
>>> We have a web service that we just finished migrating from spray 1.3 to
>>> Akka-Http 10.0.9.  While in most cases it is performing well, we are seeing
>>> terrible 99th percentile latencies (300-450ms range) starting from a very
>>> low request rate (10/second) on an ec2 m3.large.
>>>
>>> Our service does not do anything complicated - it does a few Map lookups
>>> and returns a response to a request.  In spray, even 99th percentile
>>> latencies were on the order of 1-3 ms, so we are definitely concerned.
>>> Connections, as with many pixel-type servers, are short-lived -> we actually
>>> pass the Connection: Close header intentionally in our responses.
>>>
>>> Is there any obvious tuning that should be done on the server
>>> configuration that others have found?
> --
> Cheers,
> √
>

Re: [akka-user] Spray->Akka-Http Migration - seeing high 99th percentile latencies post-migration

2017-10-23 Thread Gary Malouf
Hi Konrad,

Our real issue is that we can not reproduce the results.  The web server we 
are having latency issues with is under peak load of 10-15 requests/second 
- obviously not much to deal with.  

When we use load tests (https://github.com/apigee/apib), it's easy for us 
to throw a few thousand requests/second at it and get latencies in the ~ 3 
ms range.  We use kamon to track internal metrics - what we see is that our 
95th and 99th percentiles only look bad under the production traffic but 
not under load tests.  

I've since used kamon to print out the actual requests trying to find any 
pattern in them to hint at what's wrong in my own code, but they seem to be 
completely random.  What we do know is that downgrading to spray gets us 
99.9th percentile latencies under 2ms, so something related to the upgrade 
is allowing this.

Thanks,

Gary

On Tuesday, October 17, 2017 at 12:07:51 PM UTC-4, Konrad Malawski wrote:
>
> Step 1 – don’t panic ;-)
> Step 2 – as I already asked for, please share actual details of the 
> benchmarks. It is not good to discuss benchmarks without any insight into 
> what / how exactly you’re measuring.
>
> -- 
> Cheers,
> Konrad 'ktoso <http://kto.so>' Malawski
> Akka <http://akka.io/> @ Lightbend <http://lightbend.com/>
>
> On October 12, 2017 at 15:31:19, Gary Malouf (malou...@gmail.com) wrote:
>
> We have a web service that we just finished migrating from spray 1.3 to 
> Akka-Http 10.0.9.  While in most cases it is performing well, we are seeing 
> terrible 99th percentile latencies (300-450ms range) starting from a very 
> low request rate (10/second) on an ec2 m3.large.  
>
> Our service does not do anything complicated - it does a few Map lookups 
> and returns a response to a request.  In spray, even 99th percentile 
> latencies were on the order of 1-3 ms, so we are definitely concerned.  
> Connections, as with many pixel-type servers, are short-lived -> we actually 
> pass the Connection: Close header intentionally in our responses.  
>
> Is there any obvious tuning that should be done on the server 
> configuration that others have found?


Re: [akka-user] Re: Spray->Akka-Http Migration - seeing high 99th percentile latencies post-migration

2017-10-17 Thread Gary Malouf
Thanks Konrad - given the huge cost change we are seeing, is there any 
tuning you would recommend in terms of dispatchers, etc., to smooth this 
out?  Or should I consider a different server entirely, given the streaming 
infrastructure?

On Tuesday, October 17, 2017 at 11:26:13 AM UTC-4, Konrad Malawski wrote:
>
> Short-lived connections are slightly more costly in Akka-HTTP than in 
> Spray, due to the streaming infrastructure.
>
> -- 
> Cheers,
> Konrad 'ktoso <http://kto.so>' Malawski
> Akka <http://akka.io/> @ Lightbend <http://lightbend.com/>
>
> On October 17, 2017 at 9:48:20, Gary Malouf (malou...@gmail.com) wrote:
>
> Hi Konrad, 
>
> Understand your point - not really possible to share code on a 
> closed-source project.  I'm more asking whether akka-http does not yet 
> handle short-lived connections as well as spray did.  I will be profiling 
> in the meantime, trying to get to the bottom of the issue.
>
> Gary
>
> On Thursday, October 12, 2017 at 8:44:55 PM UTC-4, Konrad Malawski wrote: 
>>
>> When asking about performance and benchmarks always include specific 
>> numbers, code, and benchmark methodology otherwise it’s just guessing and 
>> inventing numbers and reasons. 
>>
>> Thanks
>>
>> -- 
>> Konrad Malawski
>>
>> On October 13, 2017 at 5:36:06, Gary Malouf (malou...@gmail.com) wrote:
>>
>>> To be clear, the 95th percentile and below are as low as before, so I'm 
>>> wondering if this is a new connection-closing penalty being paid or if the 
>>> actor system needs to be tuned differently now...
>>>
>>> On Thursday, October 12, 2017 at 4:31:14 PM UTC-4, Gary Malouf wrote: 
>>>>
>>>> We have a web service that we just finished migrating from spray 1.3 to 
>>>> Akka-Http 10.0.9.  While in most cases it is performing well, we are 
>>>> seeing 
>>>> terrible 99th percentile latencies (300-450ms range) starting from a very 
>>>> low request rate (10/second) on an ec2 m3.large.  
>>>>
>>>> Our service does not do anything complicated - it does a few Map 
>>>> lookups and returns a response to a request.  In spray, even 99th 
>>>> percentile latencies were on the order of 1-3 ms, so we are definitely 
>>>> concerned.  Connections, as with many pixel-type servers, are short-lived -> 
>>>> we actually pass the Connection: Close header intentionally in our 
>>>> responses.  
>>>>
>>>> Is there any obvious tuning that should be done on the server 
>>>> configuration that others have found?
>>>>


Re: [akka-user] Re: Spray->Akka-Http Migration - seeing high 99th percentile latencies post-migration

2017-10-17 Thread Gary Malouf
Hi Konrad,

Understand your point - not really possible to share code on a 
closed-source project.  I'm more asking whether akka-http does not yet 
handle short-lived connections as well as spray did.  I will be profiling 
in the meantime, trying to get to the bottom of the issue.

Gary

On Thursday, October 12, 2017 at 8:44:55 PM UTC-4, Konrad Malawski wrote:
>
> When asking about performance and benchmarks always include specific 
> numbers, code, and benchmark methodology otherwise it’s just guessing and 
> inventing numbers and reasons.
>
> Thanks
>
> -- 
> Konrad Malawski
>
> On October 13, 2017 at 5:36:06, Gary Malouf (malou...@gmail.com) wrote:
>
>> To be clear, the 95th percentile and below are as low as before, so I'm 
>> wondering if this is a new connection-closing penalty being paid or if the 
>> actor system needs to be tuned differently now...
>>
>> On Thursday, October 12, 2017 at 4:31:14 PM UTC-4, Gary Malouf wrote: 
>>>
>>> We have a web service that we just finished migrating from spray 1.3 to 
>>> Akka-Http 10.0.9.  While in most cases it is performing well, we are seeing 
>>> terrible 99th percentile latencies (300-450ms range) starting from a very 
>>> low request rate (10/second) on an ec2 m3.large.  
>>>
>>> Our service does not do anything complicated - it does a few Map lookups 
>>> and returns a response to a request.  In spray, even 99th percentile 
>>> latencies were on the order of 1-3 ms, so we are definitely concerned.  
>>> Connections, as with many pixel-type servers, are short-lived -> we actually 
>>> pass the Connection: Close header intentionally in our responses.  
>>>
>>> Is there any obvious tuning that should be done on the server 
>>> configuration that others have found?
>>>


[akka-user] Re: Spray->Akka-Http Migration - seeing high 99th percentile latencies post-migration

2017-10-12 Thread Gary Malouf
To be clear, the 95th percentile and below are as low as before, so I'm 
wondering if this is a new connection-closing penalty being paid or if the 
actor system needs to be tuned differently now...

On Thursday, October 12, 2017 at 4:31:14 PM UTC-4, Gary Malouf wrote:
>
> We have a web service that we just finished migrating from spray 1.3 to 
> Akka-Http 10.0.9.  While in most cases it is performing well, we are seeing 
> terrible 99th percentile latencies (300-450ms range) starting from a very 
> low request rate (10/second) on an ec2 m3.large.  
>
> Our service does not do anything complicated - it does a few Map lookups 
> and returns a response to a request.  In spray, even 99th percentile 
> latencies were on the order of 1-3 ms, so we are definitely concerned.  
> Connections, as with many pixel-type servers, are short-lived -> we actually 
> pass the Connection: Close header intentionally in our responses.  
>
> Is there any obvious tuning that should be done on the server 
> configuration that others have found?
>



[akka-user] Spray->Akka-Http Migration - seeing high 99th percentile latencies post-migration

2017-10-12 Thread Gary Malouf
We have a web service that we just finished migrating from spray 1.3 to 
Akka-Http 10.0.9.  While in most cases it is performing well, we are seeing 
terrible 99th percentile latencies (300-450ms range) starting from a very 
low request rate (10/second) on an ec2 m3.large.  

Our service does not do anything complicated - it does a few Map lookups 
and returns a response to a request.  In spray, even 99th percentile 
latencies were on the order of 1-3 ms, so we are definitely concerned.  
Connections, as with many pixel-type servers, are short-lived -> we actually 
pass the Connection: Close header intentionally in our responses.  

Is there any obvious tuning that should be done on the server configuration 
that others have found?
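For context, a sketch of the short-lived-connection setup described above (route and path are illustrative, not the production code):

import akka.http.scaladsl.model.headers.Connection
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route

// Every response carries `Connection: close`, so each request pays for a
// full connection setup and teardown.
val route: Route =
  respondWithHeader(Connection("close")) {
    path("pixel") {
      complete("ok")
    }
  }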



Re: [akka-user] Akka Streams - output CSV - how to know last line so can avoid appending new line character

2016-10-31 Thread Gary Malouf
Ah - missed that in the API - thanks for the pointer!
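A minimal sketch of the intersperse approach (element values illustrative):

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}

object CsvNoTrailingNewline extends App {
  implicit val system: ActorSystem = ActorSystem("csv")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  // intersperse inserts the separator *between* elements only, so the last
  // CSV line is emitted without a trailing newline.
  Source(List("a,1", "b,2", "c,3"))
    .intersperse("\n")
    .runWith(Sink.foreach(print))
}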

On Mon, Oct 31, 2016 at 9:47 AM, Viktor Klang 
wrote:

> intersperse?
>
> On Mon, Oct 31, 2016 at 2:40 PM, Gary Malouf 
> wrote:
>
>> I am attempting to use Akka streams to read a large amount of data from a
>> database (in chunks) and output to a CSV on S3.  While it may seem trivial,
>> I'm trying to find the best way to identify the final line of the
>> to-be-created file and avoid putting a newline character at the end of it.
>> Is there any way to do this via the Source API?
>>
> --
> Cheers,
> √
>



[akka-user] Akka Streams - output CSV - how to know last line so can avoid appending new line character

2016-10-31 Thread Gary Malouf
I am attempting to use Akka streams to read a large amount of data from a 
database (in chunks) and output to a CSV on S3.  While it may seem trivial, 
I'm trying to find the best way to identify the final line of the 
to-be-created file and avoid putting a newline character at the end of it.  
Is there any way to do this via the Source API?



Re: [akka-user] Re: Using akka streams to read from streamed source, format and send multi-part file to S3

2016-10-26 Thread Gary Malouf
Hi Jason,

Does yours support setting custom headers and permissions during the
upload?  We need to upload CSV files that will then be exposed via
pre-authorized urls.  I'm playing with
https://github.com/MfgLabs/commons-aws because it has some documented
examples of how it works, and it looks like it would support my use case.

Thanks,

Gary
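In case it helps, this is what headers plus permissions look like with the plain AWS Java SDK - a sketch only, not the API of either library mentioned in this thread; bucket, key, and header names are illustrative:

import com.amazonaws.services.s3.AmazonS3ClientBuilder
import com.amazonaws.services.s3.model.{CannedAccessControlList, ObjectMetadata, PutObjectRequest}
import java.io.File

object S3UploadWithHeaders {
  def upload(bucket: String, key: String, file: File): Unit = {
    val s3 = AmazonS3ClientBuilder.defaultClient()
    val meta = new ObjectMetadata()
    meta.setContentType("text/csv")
    meta.addUserMetadata("export-source", "reporting") // illustrative header
    val request = new PutObjectRequest(bucket, key, file)
      .withMetadata(meta)
      .withCannedAcl(CannedAccessControlList.Private) // permissions at upload time
    s3.putObject(request)
  }
}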

On Wed, Oct 26, 2016 at 1:30 PM, Jason Martens  wrote:

> I've created one based on the AWS Java SDK here:
>
> https://github.com/3drobotics/cloud-s3-wrapper
>
> There is another one that is more "pure" in that it uses Akka-HTTP to
> interact with S3, thus does not require the AWS Java SDK here:
>
> https://github.com/bluelabsio/s3-stream
>
> There has been some discussion about getting the latter option merged into
> https://github.com/akka/akka-stream-contrib, but not much progress has
> been made on that recently.
>
> Jason
>
> On Tuesday, October 25, 2016 at 6:04:36 PM UTC-7, Gary Malouf wrote:
>>
>> Just wondering if there are any working examples of creating
>> content-on-the-fly and streaming multipart files to s3 with akka streams.
>>


[akka-user] Using akka streams to read from streamed source, format and send multi-part file to S3

2016-10-25 Thread Gary Malouf
Just wondering if there are any working examples of creating 
content-on-the-fly and streaming multipart files to s3 with akka streams.



[akka-user] Akka Quartz Scheduler with/as cluster singleton

2015-12-04 Thread Gary Malouf
We use the Akka Quartz Scheduler project today to run recurring jobs on 
each 'node' in our cluster.  I now have requirements for jobs that need to 
be reliably scheduled/executed once per time period across the cluster, and 
I do not believe this project can quickly be extended for that use case.

In previous jobs, I've used Chronos or Quartz for this functionality.  I'm 
wondering if there is a known pattern in the Akka world for a distributed, 
cron-like job scheduler.
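One known pattern for the 'once per cluster' half of this is to wrap the scheduling actor in a cluster singleton - a sketch, with a hypothetical SchedulingActor standing in for whatever owns the Quartz schedules:

import akka.actor.{Actor, ActorRef, ActorSystem, PoisonPill, Props}
import akka.cluster.singleton.{ClusterSingletonManager, ClusterSingletonManagerSettings}

// Hypothetical actor that would own the recurring schedules.
class SchedulingActor extends Actor {
  def receive: Receive = {
    case _ => () // fire the job here
  }
}

object SingletonScheduler {
  // The manager guarantees at most one SchedulingActor per cluster, so each
  // recurring job fires once per time period cluster-wide.
  def start(system: ActorSystem): ActorRef =
    system.actorOf(
      ClusterSingletonManager.props(
        singletonProps = Props(new SchedulingActor),
        terminationMessage = PoisonPill,
        settings = ClusterSingletonManagerSettings(system)
      ),
      name = "job-scheduler"
    )
}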




[akka-user] Re: Consistent Hashing for Memcached

2014-09-29 Thread Gary Malouf
Looking at the reference documentation, it appears it may use the same 
functionality.

On Monday, September 29, 2014 2:09:25 PM UTC-4, Gary Malouf wrote:
>
> I see that Akka has its own routing implementation based on a consistent 
> hashing function.  Many of the popular Memcached clients typically use a 
> Ketama hash to do their version of this.  
>
> We have a use case where we want to route messages based on the Memcached 
> server where their co-located data lives.  Is the best approach here to 
> write our own router hashing implementation that uses a Ketama hash similar 
> to our Memcached client?  Another option, it seems, would be to airlift 
> Akka's consistent hashing function for deciding where to write.
>



[akka-user] Consistent Hashing for Memcached

2014-09-29 Thread Gary Malouf
I see that Akka has its own routing implementation based on a consistent 
hashing function.  Many of the popular Memcached clients typically use a 
Ketama hash to do their version of this.  

We have a use case where we want to route messages based on the Memcached 
server where their co-located data lives.  Is the best approach here to 
write our own router hashing implementation that uses a Ketama hash similar 
to our Memcached client?  Another option, it seems, would be to airlift 
Akka's consistent hashing function for deciding where to write.
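For the first option, a bare-bones Ketama-style ring - a sketch, not Akka's router API; the replica count is illustrative - looks like this:

import java.security.MessageDigest
import scala.collection.immutable.TreeMap

final class KetamaRing(servers: Seq[String], replicas: Int = 160) {
  // First four MD5 bytes as an unsigned 32-bit point on the ring.
  private def hash(s: String): Long = {
    val d = MessageDigest.getInstance("MD5").digest(s.getBytes("UTF-8"))
    ((d(3) & 0xffL) << 24) | ((d(2) & 0xffL) << 16) | ((d(1) & 0xffL) << 8) | (d(0) & 0xffL)
  }

  // Each server occupies many virtual points, which evens out the key spread.
  private val ring: TreeMap[Long, String] =
    TreeMap(servers.flatMap(s => (0 until replicas).map(i => hash(s"$s-$i") -> s)): _*)

  // A key belongs to the first server point clockwise from its hash.
  def nodeFor(key: String): String = {
    val it = ring.iteratorFrom(hash(key))
    if (it.hasNext) it.next()._2 else ring.head._2 // wrap around the ring
  }
}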



Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-20 Thread Gary Malouf
Greg - if one uses the current Akka Persistence with Event Store as the
backend, is it possible - and what are the challenges - to get safe 'process
managers' working as one would expect?  I would think you'd want the event
store feeding a different Akka Persistence processor.


On Wed, Aug 20, 2014 at 2:10 PM, Ashley Aitken  wrote:

>
> Whilst we are talking about s... process managers, I would like to include
> this simple way of understanding them that I found on the web: "Process
> Managers produce commands and consume events, whereas Aggregate Roots
> consume commands and produce events."  The truth is a bit more complicated,
> I believe, in that Process Managers can also consume commands (e.g. to stop
> the process).
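A toy rendering of that one-liner (all types hypothetical):

import akka.actor.{Actor, ActorRef}

case class OrderPlaced(orderId: String)  // event, consumed by the process manager
case class ReserveStock(orderId: String) // command, produced by it

// Consumes events and produces commands - the reverse flow of an aggregate root.
class OrderProcessManager(inventory: ActorRef) extends Actor {
  def receive: Receive = {
    case OrderPlaced(id) => inventory ! ReserveStock(id)
  }
}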
>
> Further, whilst I would like to accept Roland's view that both commands
> and events can be communicated by sending messages (since, as he suggests,
> it would make things a lot simpler and lighter on the write side), I am
> concerned that there are use-cases for process managers that involve them
> listening for events from ARs they have not sent a command message to.  Can
> anyone confirm/deny?
>
> Thanks,
> Ashley.
>
>
>
> On Wednesday, 20 August 2014 23:01:41 UTC+8, Greg Young wrote:
>
>> Further explanation: http://soa.dzone.com/news/are-sagas-and-workflows-same-t
>>
>>
>> On Wed, Aug 20, 2014 at 10:39 AM, Greg Young  wrote:
>>
>> I had the same issue with MS p&p (Microsoft patterns & practices):
>>
>> Clarifying the terminology
>>
>> The term saga is commonly used in discussions of CQRS to refer to a piece
>> of code that coordinates and routes messages between bounded contexts and
>> aggregates. However, for the purposes of this guidance we prefer to use the
>> term process manager to refer to this type of code artifact. There are two
>> reasons for this:
>>
>> There is a well-known, pre-existing definition of the term saga that has
>> a different meaning from the one generally understood in relation to CQRS.
>> The term process manager is a better description of the role performed by
>> this type of code artifact.
>>
>> Although the term saga is often used in the context of the CQRS pattern,
>> it has a pre-existing definition. We have chosen to use the term process
>> manager in this guidance to avoid confusion with this pre-existing
>> definition.
>>
>> The term saga, in relation to distributed systems, was originally defined
>> in the paper "Sagas" by Hector Garcia-Molina and Kenneth Salem. This paper
>> proposes a mechanism that it calls a saga as an alternative to using a
>> distributed transaction for managing a long-running business process. The
>> paper recognizes that business processes are often composed of multiple
>> steps, each of which involves a transaction, and that overall consistency
>> can be achieved by grouping these individual transactions into a
>> distributed transaction. However, in long-running business processes, using
>> distributed transactions can impact the performance and concurrency of
>> the system because of the locks that must be held for the duration of the
>> distributed transaction.
>>
>>
>> On Wed, Aug 20, 2014 at 10:31 AM, Roland Kuhn  wrote:
>>
>>
>> On 20 Aug 2014, at 16:16, Greg Young wrote:
>>
>> Please stop using the terminology of "saga" and replace usage with
>> "process manager": what people (largely influenced by NServiceBus) call a
>> saga is actually a process manager, and a saga is a different pattern. It's
>> bad enough the .NET community does this; the last thing we need is for the
>> akka community to start doing the same :)
>>
>>
>> Sure, but please do educate us as to the right use of these two words so
>> we persist the correct definitions in the list archives. My main question
>> is: what is that other pattern that shall be called a Saga?
>>
>> Regards,
>>
>> Roland
>>
>>
>>
>>
>> On Wed, Aug 20, 2014 at 4:16 AM, Roland Kuhn  wrote:
>>
>>
>> On 19 Aug 2014, at 18:59, Ashley Aitken wrote:
>>
>> On Tuesday, 19 August 2014 21:14:17 UTC+8, rkuhn wrote:
>>
>>
>> On 18 Aug 2014, at 18:01, Ashley Aitken wrote:
>>
>> I believe Akka needs to allow actors to:
>>
>>
>> (i) persist events with as much information, as efficiently as possible, on
>> the write side, to allow the store to facilitate the read side extracting
>> them according to whatever criteria are needed,
>>
>> This is a convoluted way of saying that Events must be self-contained,
>> right? In that case: check!
>>
>>
>> No, I don't think so.  As I understand it now, the only thing the event
>> store knows about each event is the persistenceId and a chunk of opaque
>> data. It doesn't know the type of the event, the type of the message, any
>> time information, any causal dependency etc.  I guess what I am saying is
>> that the events need to include as much metadata as possible so that the
>> event store can provide the necessary synthetic streams if they are
>> requested by the read side.  As I mentioned later, some event stores (like
>> Kafka) may replicate the events into separate topics based on this
>> information, others (l
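
To make the metadata argument concrete, a sketch of such a self-describing
envelope; every field name is hypothetical, and everything beyond the
persistenceId and the opaque payload is the added metadata being asked for:

    final case class EventEnvelope(
      persistenceId: String,        // which stream the event belongs to
      sequenceNr: Long,             // position within that stream
      eventType: String,            // lets the store build synthetic streams by type
      timestamp: Long,              // time information, for ordering across streams
      causationId: Option[String],  // causal dependency, if known
      payload: Array[Byte])         // the opaque event data itself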

Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-19 Thread Gary Malouf
So how does one handle combining events from different streams?  A global
sequence number is the most straightforward.

Also, not everything needs to scale on the write side to that degree.
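
Assuming the store (or a single writer) has already stamped every event with
such a number, combining streams is then a plain merge; a minimal sketch with
hypothetical names:

    // Events as handed to a consumer, already stamped with a global sequence number.
    final case class SequencedEvent(globalSeq: Long, persistenceId: String, payload: Any)

    // Merge per-stream histories (each already locally ordered) into one total order.
    def combine(streams: Seq[Seq[SequencedEvent]]): Seq[SequencedEvent] =
      streams.flatten.sortBy(_.globalSeq)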
On Aug 19, 2014 9:24 AM, "√iktor Ҡlang"  wrote:

> The decision whether scale is needed cannot be implicit, as then you are luring
> people into the non-scalable world, and by the time they find out it is too
> late.
>
>
> On Tue, Aug 19, 2014 at 3:20 PM, Roland Kuhn  wrote:
>
>>
>> On 19 Aug 2014, at 14:57, Gary Malouf wrote:
>>
>> For CQRS specifically, a lot of what people call scalability is in its
>> ability to easily model multiple read views to make queries very fast off
>> the same event data.
>>
>> In cases where a true global ordering is necessary, one often
>> does not need to handle hundreds of thousands of writes per second.  I
>> think the ideal is to have the global ordering property for events by
>> default, and have to disable it if you feel a need to do more writes per
>> second than a single writer can handle.
>>
>>
>> Unfortunately it is not only the number of writes per second, the sheer
>> data volume can drive the need for a distributed, partitioned storage
>> mechanism. There is only so much you can fit within a single machine and
>> once you go beyond that you quickly run into CAP (if you want your
>> guarantees to hold 100% at all times). The way forward then necessitates
>> that you must compromise on something, either Availability or Determinism
>> (in this case).
>>
>> Regards,
>>
>> Roland
>>
>> Once the global ordering property is enforced, solving many of the
>> publisher ordering issues (and supporting sagas) becomes significantly
>> easier to achieve.
>> On Aug 19, 2014 8:49 AM, "Roland Kuhn"  wrote:
>>
>>>
>>> On 18 Aug 2014, at 16:49, Patrik Nordwall wrote:
>>>
>>> On Mon, Aug 18, 2014 at 3:38 PM, Roland Kuhn  wrote:
>>>
>>>>
>>>> On 18 Aug 2014, at 10:27, Patrik Nordwall wrote:
>>>>
>>>> Hi Roland,
>>>>
>>>> A few more questions for clarification...
>>>>
>>>>
>>>> On Sat, Aug 16, 2014 at 10:11 PM, Vaughn Vernon <
>>>> vver...@shiftmethod.com> wrote:
>>>>
>>>>>
>>>>>  On Friday, August 15, 2014 11:39:45 AM UTC-6, rkuhn wrote:
>>>>>>
>>>>>> Dear hakkers,
>>>>>>
>>>>>> unfortunately it took me a long time to catch up with akka-user to
>>>>>> this point after the vacation, but on the other hand this made for a very
>>>>>> interesting and stimulating read, thanks for this thread!
>>>>>>
>>>>>> If I may, here’s what I have understood so far:
>>>>>>
>>>>>> 1. In order to support not only actor persistence but also full
>>>>>> CQRS we need to adjust our terminology: events are published to topics,
>>>>>> where each persistenceId is one such topic but others are also allowed.
>>>>>> 2. Common use-cases of building projections or denormalized views
>>>>>> require the ability to query the union of a possibly large number of topics
>>>>>> in such a fashion that no events are lost. This union can be viewed as a
>>>>>> synthetic or logical topic, but issues arise in that true topics provide
>>>>>> total ordering while these synthetic ones have difficulties doing so.
>>>>>> 3. Constructing Sagas is hard.
>>>>>>
>>>>>>
>>>>>> AFAICS 3. is not related to the other two, the mentions in this
>>>>>> thread have only alluded to the problems so I assume that the difficulty 
>>>>>> is
>>>>>> primarily to design a process that has the right eventual consistency
>>>>>> properties (i.e. rollbacks, retries, …). This is an interesting topic but
>>>>>> let’s concentrate on the original question first.
>>>>>>
>>>>>> The first point is a rather simple one, we just need to expose the
>>>>>> necessary API for writing to a given topic instead of the local Actor’s
>>>>>> persistenceId; I’d opt for adding variants of the persist() methods that
>>>>>> take an additional String argument.

Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-19 Thread Gary Malouf
For CQRS specifically, a lot of what people call scalability is in its
ability to easily model multiple read views to make queries very fast off
the same event data.

In cases where a true global ordering is necessary, one often
does not need to handle hundreds of thousands of writes per second.  I
think the ideal is to have the global ordering property for events by
default, and have to disable it if you feel a need to do more writes per
second than a single writer can handle.

Once the global ordering property is enforced, solving many of the
publisher ordering issues (and supporting sagas) becomes significantly
easier to achieve.
On Aug 19, 2014 8:49 AM, "Roland Kuhn"  wrote:

>
> On 18 Aug 2014, at 16:49, Patrik Nordwall wrote:
>
> On Mon, Aug 18, 2014 at 3:38 PM, Roland Kuhn  wrote:
>
>>
>> On 18 Aug 2014, at 10:27, Patrik Nordwall wrote:
>>
>> Hi Roland,
>>
>> A few more questions for clarification...
>>
>>
>> On Sat, Aug 16, 2014 at 10:11 PM, Vaughn Vernon 
>> wrote:
>>
>>>
>>> On Friday, August 15, 2014 11:39:45 AM UTC-6, rkuhn wrote:

 Dear hakkers,

 unfortunately it took me a long time to catch up with akka-user to this
 point after the vacation, but on the other hand this made for a very
 interesting and stimulating read, thanks for this thread!

 If I may, here’s what I have understood so far:

1. In order to support not only actor persistence but also full
CQRS we need to adjust our terminology: events are published to topics,
where each persistenceId is one such topic but others are also allowed.
2. Common use-cases of building projections or denormalized views
require the ability to query the union of a possibly large number of topics
in such a fashion that no events are lost. This union can be viewed as a
synthetic or logical topic, but issues arise in that true topics provide
total ordering while these synthetic ones have difficulties doing so.
3. Constructing Sagas is hard.


 AFAICS 3. is not related to the other two, the mentions in this thread
 have only alluded to the problems so I assume that the difficulty is
 primarily to design a process that has the right eventual consistency
 properties (i.e. rollbacks, retries, …). This is an interesting topic but
 let’s concentrate on the original question first.

 The first point is a rather simple one, we just need to expose the
 necessary API for writing to a given topic instead of the local Actor’s
 persistenceId; I’d opt for adding variants of the persist() methods that
 take an additional String argument. Using the resulting event log is then
 done as for the others (i.e. Views and potentially queries should just
 work).

>>>
>> Does that mean that a PersistentActor can emit events targeted to its
>> persistenceId and/or targeted to an external topic and it is only the
>> events targeted to the persistenceId that will be replayed during recovery
>> of that PersistentActor?
>>
>>
>> Yes.
>>
>> Both of these types of events can be replayed by a PersistentView.
>>
>>
>> Yes; they are not different types of events, just how they get to the
>> Journal is slightly different.
>>
>>
>>
>>>  The only concern is that the Journal needs to be prepared to receive
 events concurrently from multiple sources instead of just the same Actor,
 but since each topic needs to be totally ordered this will not be an
 additional hassle beyond just routing to the same replica, just like for
 persistenceIds.

>>>
>> Replica as in data store replica, or as in journal actor?
>>
>>
>> The Journal must implement this in whatever way is suitable for the
>> back-end. A generic solution would be to shard the topics as Actors across
>> the cluster (internal to the Journal), or the Journal could talk to the
>> replicated back-end store such that a topic always is written to one
>> specific node (if that helps).
>>
>
> What has been requested is "all events for an Aggregate type", e.g. all
> shopping carts, and this will not scale. It can still be useful, and
> with some careful design you could partition things when scalability is
> needed. I'm just saying that it is a big gun that can be pointed in the
> wrong direction.
>
>
> Mixed-up context: #1 is about predefined topics to which events are
> emitted, not queries. We need to strictly keep these separate.
>
>
>
>
>>
>>
>>
>>>
>>> Is point one for providing a sequence number from a single ordering
>>> source?
>>>
>>
>> Yes, that is also what I was wondering. Do we need such a sequence
>> number? A PersistentView should be able to define a replay starting point.
>> (right now I think that is missing, it is only supported by saving
>> snapshots)
>>
>>
>>> Or do you mean topic in the sense that I cover above with EntitiesRef?
>>> In other words, what is the String argument and how does it work?  If you
>>> would show a

Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-10 Thread Gary Malouf
Hi Prakhyat,

We are building a CQRS/DDD-oriented configuration system based on akka
persistence and are running into the same modeling issues.  A few
characteristics of our specific case:

1) We do not expect a high volume of commands to be submitted (they are
generated via a task-based user interface that will have on the order of
30-50 users).

2) We have a number of cases where the output events of one aggregate must
eventually trigger a change on another aggregate.  This use case is what I
am referring to as 'sagas'.  There are two concerns that need to be
addressed: guaranteeing that the messages will eventually be delivered in
the event of system error/failure, and ensuring that the receiving
aggregates are able to order and handle them.

3) We use the cassandra connector for akka persistence with a 'quorum'
consistency level for writing and reading.


Since we are not dealing with high throughputs, a less performant but
safer solution to the concerns in (2) is possible for us
without introducing another system to an already complicated
infrastructure.  We can have the aggregates that may receive events from
others reliably query the views for the aggregates they depend on (reading
from Cassandra directly) to ensure messages are not missed and arrive in
order.
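
As a sketch of what that direct read looks like for us (the table layout and
names below are illustrative, not the connector's actual schema):

    import com.datastax.driver.core.Cluster
    import scala.collection.JavaConverters._

    val session = Cluster.builder().addContactPoint("127.0.0.1").build()
      .connect("config_views")

    // Read a dependency's events in order, resuming after the last sequence
    // number we applied, so nothing is missed or seen out of order.
    def eventsSince(persistenceId: String, lastApplied: Long) =
      session.execute(
        "SELECT sequence_nr, event FROM agg_events " +
          "WHERE persistence_id = ? AND sequence_nr > ? ORDER BY sequence_nr",
        persistenceId, java.lang.Long.valueOf(lastApplied)).all().asScala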

In our view, putting the weight on the consumer to deal with out-of-order
messaging was painful.  I've read the blogs arguing for being able
to deal with this, but it just felt like something the framework should
handle for you in the end.

The reliable, in-order messaging concern also extends to 'stream consumers'
in general.  For this, we are looking at building a service that reads from
all views (ordering across processors/aggregates by timestamp), assigns a
'global' sequence number to the event, and persists this in a stream.  We
then can have our consumers read from this stream with confidence that
events will arrive in order and not be missing.  That service could run as
a singleton in an akka cluster for reliability - performance is not a
concern for us at our expected traffic.
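
A rough sketch of that sequencing service, with hypothetical names and the
cluster-singleton wiring omitted:

    import akka.persistence.PersistentActor

    final case class RawEvent(sourceId: String, payload: Any)
    final case class GlobalEvent(globalSeq: Long, sourceId: String, payload: Any)

    // Single writer that stamps every incoming event with a 'global' sequence
    // number and persists the result as its own stream for consumers to read.
    class GlobalSequencer extends PersistentActor {
      override def persistenceId = "global-event-stream"
      private var nextSeq = 1L

      def receiveCommand = {
        case RawEvent(id, payload) =>
          persist(GlobalEvent(nextSeq, id, payload)) { _ => nextSeq += 1 }
      }

      def receiveRecover = {
        case GlobalEvent(seq, _, _) => nextSeq = seq + 1
      }
    }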

Both of these cases highlight the need for a reliable messaging
integration to avoid the hoops we will otherwise be jumping through.



On Sun, Aug 10, 2014 at 10:29 AM, Prakhyat Mallikarjun <
prakhyat...@gmail.com> wrote:

> Hi Gary/akka team,
>
> I have a requirement in my app where changes to one aggregate root affect
> many other aggregate roots, and all have to be in sync. I keep seeing the
> name 'sagas' referred to in discussions. Will sagas really help to resolve
> this situation? Can I find any articles in this regard?
>
> Are there any other design approaches?



Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-08 Thread Gary Malouf
One of the arguments for the CQRS/Event Sourcing combination has been that it
allows you to optimize reads and writes independently for high throughput.
Many people, however (including us), want the command/query separation plus
the sequence of events purely for the design benefits.  Sagas are one of the
critical pieces of this, but there need to be guarantees that if an event is
emitted by one aggregate/processor and three other aggregates/processors are
listening for it, they will receive it barring a catastrophe.

Unless one simply polls all of the processor persistent views manually
today, this guarantee just is not there out of the box.


On Fri, Aug 8, 2014 at 6:10 AM, Ashley Aitken  wrote:

>
>
> On Friday, 8 August 2014 16:45:30 UTC+8, Patrik Nordwall wrote:
>
>>
>> On Fri, Aug 8, 2014 at 12:21 AM, Vaughn Vernon 
>> wrote:
>>
>>> I am sure you have already thought of this, Patrik, but if you
>>> leave full ordering to the store implementation, it could still have
>>> unnecessary limitations if the implementor chooses to support a sequence only
>>> per persistenceId.
>>>
>>
>> As a user you would have to pick a journal that supports your needs in
>> this regard.
>>
>
> I agree with you both.  With Vaughn I agree that we need a global sequence
> (although I understand this is very impractical within distributed systems)
> and with Patrik that it should be up to the store implementation (with the
> possibility of store configuration determining this).  It would be up to
> the store (and the developer's choice in configuring that store) to
> determine how close to causal or total ordering the sequence will be.
>
> So for example, with general use of Kafka the store could provide events
> from each partition for a topic (if I understand correctly how Kafka works)
> in a round-robin fashion, which wouldn't be properly sequenced, but it may
> be manageable for some requirements.  If a developer wanted more strict
> "global" sequencing then they could configure the store to have a single
> partition,with the scaling implications that would have.
>



Re: [akka-user] Improving Akka Persistence wrt CQRS/ES/DDD

2014-08-08 Thread Gary Malouf
I don't see it mentioned on this particular thread, but I feel creating 
reliable sagas across processors (Aggregates) is a real challenge right now 
as well.  Having a clearly documented way to do this is critical, IMO, to 
creating more complex and reliable CQRS-based apps.

On Thursday, August 7, 2014 6:21:14 PM UTC-4, Vaughn Vernon wrote:
>
> I am sure you have already thought of this, Patrik, but if you leave full 
> ordering to the store implementation, it could still have unnecessary 
> limitations if the implementor chooses to support a sequence only per 
> persistenceId. One very big limitation is that if the store doesn't support a 
> single sequence, you still can't play catch-up over the entire store if you 
> are dependent on interleaved events across types. You can only re-play all 
> events properly if using a global sequence. Well, you could also do so 
> using causal consistency, but (a) that's kinda difficult, and (b) it's not 
> supported at this time.
>
> Vaughn
>
>
> On Thursday, August 7, 2014 1:29:33 PM UTC-6, Patrik Nordwall wrote:
>>
>>
>>
>> On 7 Aug 2014, at 20:57, ahjohannessen wrote:
>>
>> On Thursday, August 7, 2014 7:34:15 PM UTC+1, Vaughn Vernon wrote:
>>
>>> I vote that you need to have a single sequence across all events in an 
>>> event store. This is going to cover probably 99% of all actor persistence 
>>> needs and it is going to make using akka-persistence way easier.
>>>
>>
>> If that were made optional, plus a tag facility, then those who think it hurts 
>> scalability would opt out and others would opt in and pay the extra penalty.
>>
>>
>> Ok, I think it's a good idea to leave it to the journal plugins to 
>> implement the full ordering as well as is possible with the specific data 
>> store. We will only require exact order of events per persistenceId.
>>
>> Any other feedback on the requirements or proposed solution of the 
>> improved PersistentView?
>>
>> /Patrik
>>
>>



Re: [akka-user] EventSourced/Akka Persistence Transactionally Saving Multiple Events

2014-04-18 Thread Gary Malouf
Thanks Martin, I actually found it afterwards, but the Akka User List had not 
yet approved my post, so I could not delete it.

Appreciate you jumping in,

Gary

On Friday, April 18, 2014 6:35:35 AM UTC-4, Martin Krasser wrote:
>
>  Hi Gary,
>
> On 18.04.14 04:05, Gary Malouf wrote:
>  
> We have what I believe to be a common use case while using EventSourced 
> actors.  When a command enters our system, we produce multiple events as a 
> result.  If the system crashes, having some subset of these events saved 
> while others do not make it could be a problem.
>
>  Is there any mechanism to do a sort of 'batch transaction' to persist 
> multiple events to the journal?
>  
>
> This is already supported, see 
> http://doc.akka.io/docs/akka/2.3.2/scala/persistence.html#batch-writes.
>
> Cheers,
> Martin
>
>
> -- 
> Martin Krasser
>
> blog: http://krasserm.blogspot.com
> code: http://github.com/krasserm
> twitter: http://twitter.com/mrt1nz
>
>  
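
A minimal sketch of the batch write Martin points to, assuming the
PersistentActor API from the later 2.3 releases (the event-sourced processor
variant in 2.3.2 is analogous); the command and event types are made up:

    import akka.persistence.PersistentActor

    sealed trait CartEvent
    final case class ItemAdded(id: String) extends CartEvent
    final case class TotalRecalculated(total: BigDecimal) extends CartEvent
    final case class AddItem(id: String, newTotal: BigDecimal) // the command

    class CartProcessor extends PersistentActor {
      override def persistenceId = "cart-1"
      private var items = List.empty[String]
      private var total = BigDecimal(0)

      private def update(e: CartEvent): Unit = e match {
        case ItemAdded(i)         => items = i :: items
        case TotalRecalculated(t) => total = t
      }

      def receiveCommand = {
        case AddItem(id, newTotal) =>
          // Events persisted in a single persist(Seq(...)) call go to the
          // journal as one batch: either all of them are saved or none.
          persist(List[CartEvent](ItemAdded(id), TotalRecalculated(newTotal)))(update)
      }

      def receiveRecover = { case e: CartEvent => update(e) }
    }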



[akka-user] EventSourced/Akka Persistence Transactionally Saving Multiple Events

2014-04-17 Thread Gary Malouf
We have what I believe to be a common use case while using EventSourced 
actors.  When a command enters our system, we produce multiple events as a 
result.  If the system crashes, having some subset of these events saved 
while others do not make it could be a problem.

Is there any mechanism to do a sort of 'batch transaction' to persist 
multiple events to the journal?
