er than stubbable:
trait DbSources {
  def f1(a: A): Source[B, NotUsed]
  def f2(x: X): Source[Y, NotUsed]
}
Anyone else wrestled with the same?
On Wed, May 4, 2016 at 10:03 PM, Richard Rodseth <rrods...@gmail.com> wrote:
I have some streams to test. Each one implements a particular "command". In
recent days I have warmed up to using a combination of implicit parameters
and constructor injection for DI.
http://carefulescapades.blogspot.com/2012/05/using-implicit-parameters-for.html
Some of my former singletons
Take a look at prefixAndTail
http://doc.akka.io/docs/akka/2.4.4/scala/stream/stages-overview.html#prefixAndTail
On Thu, Apr 14, 2016 at 7:42 AM, Guofeng Zhang wrote:
> Hi,
>
> I have a csv file like the following:
> ID,Product name,Price
> ,Sleeve,57.97
>
> On Tue, Apr 12, 2016 at 7:53 PM, Richard Rodseth <rrods...@gmail.com>
> wrote:
>
>> Not a very well thought out question on my part :)
>> I guess I'm picturing something like two infinite streams A and B where
>> rather than ending, they'd keep emitting a dummy "
and grab chunks of
data from both sources for each time window, composing the two futures.
On Tue, Apr 12, 2016 at 12:03 AM, Viktor Klang <viktor.kl...@gmail.com>
wrote:
> What does keep going mean?
>
> On Mon, Apr 11, 2016 at 11:42 PM, Richard Rodseth <rrods...@gmail.com>
> wr
I need to compare two streams of timeseries data. I thought of doing
something with zip() but in the case where the data is missing in one
stream at the beginning or end, I'd like to keep going. Of course, I could
just make the base stream a time stream, and use non-streaming techniques
to fetch
L)
Anyway, glad all is well from your perspective :)
On Fri, Apr 8, 2016 at 9:19 AM, Patrik Nordwall <patrik.nordw...@gmail.com>
wrote:
>
>
> On Thu, Apr 7, 2016 at 5:17 PM, Richard Rodseth <rrods...@gmail.com>
> wrote:
>
>> Also, seeing the wo
), fromOffset)
Consumer.plainSource(settings)
  .mapAsync(1)(db.save)
}
On Thu, Apr 7, 2016 at 7:26 AM, Richard Rodseth <rrods...@gmail.com> wrote:
> Isn't this section of the docs naïve in the face of possible errors?
>
>
> http://doc.akka.io/docs/akka/curre
to be the norm?
On Fri, Feb 26, 2016 at 1:55 PM, Richard Rodseth <rrods...@gmail.com> wrote:
> Hmm. I see the fromSequenceNr parameter in the query traits. So the read
> side would have to persist a watermark? Where? I was hoping for less
> boilerplate.
>
> On Fri, Feb 26, 20
I'll wait. Looking at that ScalaDoc, I have no idea whatsoever how to use
it :)
On Tue, Apr 5, 2016 at 3:19 PM, Konrad Malawski wrote:
> Please refer to its Scaladoc for the time being:
> http://doc.akka.io/api/akka/2.4.3/#akka.stream.KillSwitches$
>
> --
> Cheers,
>
Thank you. The divide by zero fix to throttle has already helped me.
I'm not seeing any documentation for KillSwitch. Did I miss it?
On Mon, Apr 4, 2016 at 1:08 PM, Justin du coeur wrote:
> +1, with a particular thank-you for the more-transparent process. Being
> able to
I'm doing a flatMapMerge something like this:
val stream = Source(channelMonths)
  .flatMapMerge(10, channelMonth => {
    ..Sources.intervalsForChannelMonth(channelMonth, ...)
  })
I'm implementing some monitoring using alsoTo to send stream elements to a
monitoring actor
You can also use alsoTo to send stream elements to an actor or special
purpose Sink.
On Thu, Mar 10, 2016 at 10:49 AM, Filippo De Luca
wrote:
> Hi,
> I suppose you can use map and call a external service for each message at
> defined stage.
>
> Even better you can build
Thanks! Always good to learn about new methods :)
On Thu, Mar 10, 2016 at 6:50 AM, Viktor Klang
wrote:
> There's also: someSource.zip(Source.fromIterator(() => Iterator.from(0)))
>
> On Thu, Mar 10, 2016 at 3:42 PM, Akka Team
> wrote:
>
>> Hi
Congratulations and thanks to both of you! This is a very impressive piece
of software, and an inspiring community.
On Mon, Mar 7, 2016 at 10:52 AM, Patrik Nordwall
wrote:
> Thank you for everything you have done for Akka, Roland. It would not have
> been anywhere
Thanks so much Roland. That makes sense. As a bonus, your comment about
addAttributes vs withAttributes might explain why I wasn't seeing .log
output :)
On Thu, Mar 3, 2016 at 12:04 AM, Roland Kuhn <goo...@rkuhn.info> wrote:
> Hi Richard,
>
> 3 mar 2016 kl. 00:24 skrev Richard
This whole area of async boundaries in akka streams is still very confusing
to me. I don't know what to suggest in terms of documentation, other than
numerous examples of code, with corresponding diagrams showing the
resultant boundaries and actor instances after materialization.
As an example, I
parameter, in
addition to the (implicit ec:ExecutionContext) I already have for
non-blocking things?
On Fri, Feb 26, 2016 at 3:41 AM, Akka Team <akka.offic...@gmail.com> wrote:
> Hi Richard,
>
>
>
> On Tue, Feb 23, 2016 at 12:01 AM, Richard Rodseth <rrods...@gmail.com>
>
I can do this:
val names = Source(List("Bob", "Carol", "Alice"))
val numbers = Source.unfold(1L)(e => Some((e + 1, e)))
val numberedNames = names.zip(numbers)
Is that why there's no zipWithIndex?
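For anyone skimming: the same zip-with-counter trick can be seen in plain Scala collections, which may make the intent clearer (data below is just the example above, outside any stream):

```scala
// Pair each element with an increasing index by zipping against an
// infinite counter, mirroring names.zip(Source.unfold(1L)(e => Some((e + 1, e)))).
val names = List("Bob", "Carol", "Alice")
val numberedNames = names.iterator.zip(Iterator.from(1)).toList
// numberedNames == List(("Bob", 1), ("Carol", 2), ("Alice", 3))
```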
--
>> Read the docs: http://akka.io/docs/
>> Check the FAQ:
Hmm. I see the fromSequenceNr parameter in the query traits. So the read
side would have to persist a watermark? Where? I was hoping for less
boilerplate.
On Fri, Feb 26, 2016 at 5:47 AM, Patrik Nordwall <patrik.nordw...@gmail.com>
wrote:
>
>
> On Wed, Feb 24, 2016 at 10:33 PM,
In Vaughn Vernon's Red Book (Implementing DDD) he talks about storing
domain events in the same transaction as the one which updates an
aggregate, and then out of band you read this domain event store (not in
the event sourcing sense) in order to put messages on a message queue, for
example, to
It was *extremely* useful when the team added the footer on documentation
pages that lets you jump to the current version. Unfortunately, with
Streams moving into Akka proper, this is less useful when it asks me if I
want to update the page I'm viewing to 2.0.3. Perhaps an easy enhancement?
Keep
channel-month.
On Mon, Feb 22, 2016 at 3:40 AM, Akka Team <akka.offic...@gmail.com> wrote:
> Hi Richard,
>
>
>
> On Fri, Feb 19, 2016 at 11:11 PM, Richard Rodseth <rrods...@gmail.com>
> wrote:
>
>> Thought I'd start a new thread for my latest stumbling block, whi
Thought I'd start a new thread for my latest stumbling block, while I
explore some options that don't feel great.
Short version:
flatMapMerge has a "breadth" parameter which limits the number of
substreams in flight. groupBy() does not. If maxSubstreams is exceeded the
stream will fail. I am
a file created for
each group.
Things that come to mind are using a balancer (pre-grouping), but I'm not
sure that would work, or sending the groups to a sink rather than merging
the substreams.
Thoughts?
On Thu, Feb 18, 2016 at 8:29 AM, Richard Rodseth <rrods...@gmail.com> wrote:
> T
I'm still missing something. I thought I had solved my problem of
overwhelming Postgres by using buffer(), but today (after upgrading to
2.4.2-RC3, but that's probably coincidence) I am getting a lot of timeouts.
As you can see below, I have two Slick sources, one nested via
flatMapConcat.
Even
at 12:55 PM, Richard Rodseth <rrods...@gmail.com>
wrote:
> Congratulations! I'm excited about this release, and I think if I could
> Stream All The Things for the rest of my career, I would die a happy man :)
>
> Migration note from RC3: IOResult and Framing have changed packag
ail.com>
>>>> wrote:
>>>>
>>>>> (Or add support for compression!)
>>>>>
>>>>> On Tue, Feb 16, 2016 at 2:13 PM, Konrad Malawski <kt...@typesafe.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Rich
?
On Tue, Feb 9, 2016 at 2:03 PM, Richard Rodseth <rrods...@gmail.com> wrote:
> I'm writing to files and reading from them using FileIO.fromFile and
> toFile.
>
> Looking to add compression of the files.
>
> This looks promising:
> https://github.com/maciej/snapp
I updated my program from streams 2.0.3 to 2.4.2-RC3
One problem I ran into was that FileIO.toFile no longer creates the file
for you, and I don't think this is mentioned in the migration guide.
Also, Source.unfoldInf seems to be gone.
Otherwise looks good.
This section of the documentation shows how to place blocking code on a
dedicated dispatcher:
http://doc.akka.io/docs/akka-stream-and-http-experimental/2.0.3/scala/stream-integrations.html#Integrating_with_External_Services
On Fri, Feb 5, 2016 at 12:14 AM, Endre Varga
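For reference, that pattern boils down to defining a dedicated dispatcher in application.conf and running the blocking calls on it (via mapAsync or a dispatcher attribute). A minimal config sketch — the dispatcher name and pool size here are placeholders, not taken from the docs:

```hocon
# Dedicated thread pool for blocking calls. Reference it from the stream
# (e.g. run mapAsync Futures on this dispatcher's ExecutionContext) so the
# default dispatcher's threads are never blocked.
blocking-io-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    fixed-pool-size = 16
  }
}
```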
I'm writing to files and reading from them using FileIO.fromFile and toFile.
Looking to add compression of the files.
This looks promising:
https://github.com/maciej/snappy-flows
The compression is expressed as a Flow[ByteString,ByteString,Unit]
Are there other examples or documentation I
the withAttributes
or the viaAsync I inserted did the trick, and whether both are necessary.
Roland said something about the source needing an async boundary, but the
example in the operator fusion section of the docs only adds one after the
+1 operator.
On Fri, Feb 5, 2016 at 11:34 AM, Richard Rodseth
I'm trying to capture a file read in a reusable Flow as follows. But I'm
scratching my head about how to capture the materialized value. As it
stands the signature of result is Flow[Path, String, Unit] rather than
Flow[Path, String, Future[Long]]. I've tried various combinations of viaMat
and
{ bs =>
bs.utf8String
}
asStrings
}
On Mon, Feb 8, 2016 at 12:46 PM, Richard Rodseth <rrods...@gmail.com> wrote:
> I'm trying to capture a file read in a reusable Flow as follows. But I'm
> scratching my head about how to capture the materialized value. As it
> st
Point taken. Perhaps my questions irritate you, but they may help others,
and the documentation.
On Fri, Feb 5, 2016 at 7:28 AM, Viktor Klang <viktor.kl...@gmail.com> wrote:
> What does your tests show?
>
> --
> Cheers,
> √
> On Feb 5, 2016 4:24 PM, "Richard Rodset
Or if the groupBy results in 100 substreams, how many actors are
materialized by groupBy(...).viaAsync(a) ?
Or groupBy(...).viaAsync(a).viaAsync(b) ?
On Fri, Feb 5, 2016 at 7:24 AM, Richard Rodseth <rrods...@gmail.com> wrote:
> In an effort to be more succinct :) Is this a true
tially the flatMapX methods—if the provided sources declare async
> boundaries). We try to keep the combinators and concepts as orthogonal and
> composable as possible.
>
> Regards,
>
> Roland
>
> 5 feb 2016 kl. 16:24 skrev Richard Rodseth <rrods...@gmail.com>:
>
> In an effor
In an effort to be more succinct :) Is this a true statement?
"groupBy does not automatically introduce any *per-key* parallelism, unless
followed by mapAsync"
On Thu, Feb 4, 2016 at 3:05 PM, Richard Rodseth <rrods...@gmail.com> wrote:
> I guess I'm still a bit confused by
ous boxes (actors) and see how things relate.
>
> Then try it out in practice.
>
> -Endre
>
> On Fri, Feb 5, 2016 at 5:13 PM, Richard Rodseth <rrods...@gmail.com>
> wrote:
>
>> Thank you. But parallelism *between stages* is not the same as
>> per-group-key
I guess I'm still a bit confused by parallelism in akka streams, but let me
describe what I have.
Tenants have Sites which have Channels which have Intervals (start, end,
value).
My root source is a stream of TenantSiteChannelInfo (obtained from a join
of channels with their sites and tenants)
I
I think you are correct to look to pipeTo rather than using onComplete
within your receive handler. You can use map and recoverWith to convert the
future to specific messages of your own design sent to self.
On Thu, Feb 4, 2016 at 2:00 PM, Paul Cleary wrote:
> I am trying
I'll take a stab at these.
1. streams are "materialized" before they are run, and I believe this is
pluggable. Currently the one and only materializer uses actors. See
http://doc.akka.io/docs/akka-stream-and-http-experimental/2.0.3/scala/stream-flows-and-basics.html#Stream_Materialization
2. Not
I'm still learning the details myself, but you put async boundaries around
*stages* rather than stream elements. Using withAttributes() or viaAsync().
See
http://doc.akka.io/docs/akka-stream-and-http-experimental/2.0.2/scala/stream-flows-and-basics.html#Operator_Fusion
On Thu, Feb 4, 2016 at
.kl...@gmail.com>
wrote:
> Put in async boundaries where you want to have them. And writing to file
> concurrently is likely not faster, but as always needs to be measured.
>
> On Wed, Feb 3, 2016 at 6:55 PM, Richard Rodseth <rrods...@gmail.com>
> wrote:
>
>> Write sub str
ifted to a source. Here I'm using
> prefixAndTail(1) because I need to extract the level
> from the first element to calculate the filename.
>
> Hope it helps,
> Francesco
>
>
>> Clutching at straws!
>>
>> On Tue, Feb 2, 2016 at 12:27 AM, Roland Kuhn <goo
ill ever happen
>>> case _ =>
>>> Future.successful(0)
>>> }.
>>> mergeSubstreams.
>>> runWith(Sink.onComplete { _ =>
>>> system.shutdown()
>>> })
>>>
>>> What prefixAndTail does (if I got it
> On Wed, Feb 3, 2016 at 5:55 PM, Richard Rodseth <rrods...@gmail.com> wrote:
>> Ok. I suppose I should examine the GroupBy or SubFlow source code, but if I
>> understand correctly different stages will run concurrently (if fusing is
>> off or async boundaries ha
at 6:23 PM, Richard Rodseth <rrods...@gmail.com> wrote:
> I have run into this issue
> https://github.com/typesafehub/activator-akka-stream-scala/issues/37
>
> I want to group a stream and write each substream to a separate file. A
> pretty common use case, I'd imagine
ther the API addition that is necessary will have to wait until
> 2.4.3.
>
> Regards,
>
> Roland
>
> 1 feb 2016 kl. 22:34 skrev Richard Rodseth <rrods...@gmail.com>:
>
> I'm concerned that this might fall through the cracks since the GitHub
> issue is written against
For anyone following along, I believe this is the issue Roland refers to
https://github.com/akka/akka/issues/18969
On Mon, Feb 1, 2016 at 2:28 PM, Richard Rodseth <rrods...@gmail.com> wrote:
> Ouch. Thanks.
>
> On Mon, Feb 1, 2016 at 1:49 PM, Roland Kuhn <goo...@rkuhn.in
I have run into this issue
https://github.com/typesafehub/activator-akka-stream-scala/issues/37
I want to group a stream and write each substream to a separate file. A
pretty common use case, I'd imagine.
The old version of the GroupLog example showed a groupBy() followed by a
to()
Because of
I'm looking to add some file output to my akka streams project
I see that there is a FileIO.toFile method which creates a sink which is
presumably non-blocking
But I need to create directories on the fly as well. Is there any help
here? Anything new in 2.4.2-RC1?
I assume the mkdirs call in the
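For the directories-on-the-fly part, plain java.nio called from Scala is probably enough, independent of anything new in 2.4.2 — a sketch, with a made-up layout:

```scala
import java.nio.file.Files

// Create any missing parent directories before handing the path to a
// file sink. createDirectories is idempotent (a no-op when the directory
// already exists), so calling it per element on the fly is safe.
val base = Files.createTempDirectory("groups")
val out = base.resolve("group-a").resolve("output.csv")
Files.createDirectories(out.getParent)
```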
=> (aggregator(prevScan, elem), elem)
>
> -Endre
>
>>
>>> On Thu, Jan 28, 2016 at 6:05 PM, Richard Rodseth <rrods...@gmail.com> wrote:
>>> In akka-streams, scan is like fold, in that it takes a zero and a function
>>> to do the accumulati
>
wrote:
> mapMaterializedValue
>
> --
> Cheers,
> √
> On Jan 28, 2016 5:04 AM, "Richard Rodseth" <rrods...@gmail.com> wrote:
>
>> I've since become aware that for something like counting a parent stream
>> for monitoring purposes I could use alsoTo to a counting si
In akka-streams, scan is like fold, in that it takes a zero and a function
to do the accumulating, but it emits each accumulated value rather than the
final result.
But what if I wanted to emit tuples of the accumulated value and the stream
element?
Is there an operator I've missed or would
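One option, sketched here with plain-Scala scanLeft rather than the stream's scan (the summing accumulator is just an example): make the state itself an (accumulated, element) tuple.

```scala
// Carry (accumulated, element) as the scan state; each step updates the
// running total and remembers the element that produced it.
val elems = List(1, 2, 3)
val pairs = elems
  .scanLeft((0, 0)) { case ((acc, _), elem) => (acc + elem, elem) }
  .tail // drop the zero, which scan emits first
// pairs == List((1, 1), (3, 2), (6, 3))
```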
How does a Source[T, Unit] become a Source[T, M] ?
I see that methods like via have a corresponding viaMat which let you
provide a combiner to combine the materialized value of the previous stage
and the next stage (I think)
But how do you get the ball rolling with a Source?
Say for example I
I've since become aware that for something like counting a parent stream
for monitoring purposes I could use alsoTo to a counting sink.
But I'm still curious how a Source[T, Unit] becomes a Source[T, Mat], if
such a thing makes sense.
On Wed, Jan 27, 2016 at 2:36 PM, Richard Rodseth <rr
() method now.
On Sun, Jan 24, 2016 at 9:10 PM, Richard Rodseth <rrods...@gmail.com> wrote:
> Apologies if this is a dumb question
>
> I can do this:
>
> def augmentIntervals(extras: Extras):Flow[Interval, IntervalPlus, Unit] =
> Flow[Interval].map { e => IntervalPlus(
Apologies if this is a dumb question
I can do this:
def augmentIntervals(extras: Extras): Flow[Interval, IntervalPlus, Unit] =
  Flow[Interval].map { e => IntervalPlus(e, extras) }
Where does the Flow[Interval] come from? I don't see an apply method in the
scaladocs for companion object Flow.
the stream of channels from Slick and do a flatMapMerge to form a stream of
intervals? Limited concurrency in that approach, no?
On Wed, Jan 20, 2016 at 1:27 PM, Richard Rodseth <rrods...@gmail.com> wrote:
> I would love to get some guidance on my first akka-streams project. To
> recap/ex
I'm considering using Slick and Akka Streams for an ETL project.
It's basically moving intervals1 to intervals2, but intervals have a
channel id and some of the channel info needs to be looked up and included
in intervals2.
I suppose I could do a map on Source(intervals1) and cache the looked up
One thing you can look into is using the EventBus to send "domain events"
between top-level actors. Apparently top-level actors (i.e. created with
system.actorOf rather than context.actorOf) are more expensive so you don't
want zillions of them, but I think a handful of loosely-coupled components
Oh, sorry, I thought I remembered reading that in the docs, but perhaps it
was just the cost of an actor system vs an actor.
Hopefully my advice was OK regardless :)
On Sat, Nov 28, 2015 at 1:03 PM, Heiko Seeberger <loe...@posteo.de> wrote:
> On 28 Nov 2015, at 18:47, Richard Rodse
derstand all the ramifications.
>
> Regards,
>
> Roland
>
> 17 nov 2015 kl. 02:02 skrev Richard Rodseth <rrods...@gmail.com>:
>
> Apologies if this is a naïve question.
> Kafka seems like great technology and a great companion to Akka.
> It has a dependency on Zookeepe
I ended up breaking down and doing an ask (to a request handler actor) in
the route definition. The request handler creates a per-request actor with
the requestor as a constructor parameter, and from then on I use no more
asks, but pass replyTo in tell messages. So there is not a chain of asks,
Apologies if this is a naïve question.
Kafka seems like great technology and a great companion to Akka.
It has a dependency on Zookeeper, and my understanding is that's just for
the watermarks.
I'm not sure if this dependency on ZK is a bad thing, but I've certainly
seen criticism of ZK and
Also see
http://doc.akka.io/docs/akka/snapshot/general/actor-systems.html#Blocking_Needs_Careful_Management
On Tue, Nov 10, 2015 at 2:04 PM, Guido Medina wrote:
> I have actors such like *AccountProcessor* and *AccountPersistor*,
> AccountPersistor is a child of
>
>
> What's in the case of Akka the way to prevent threads (that is actors)
> from blocking when doing blocking IO (querying databases, doing REST calls,
> reading from files, etc.)? Go through NIO or is there some stuff provided
> for this?
>
> Thanks for any answers.
>
Good to know. My apps seem to be running too (at least on my laptop). One
of the services was getting an out of memory error with debug logging
turned on, but I think it was a function of all the other stuff I had
running at the same time. I'll probably hold off until I can do more
careful
Is this possible?
I've learned that migration from Spray to Akka HTTP is not always trivial,
and that the performance is not yet there, but would like to benefit from
the other goodies in 2.4.
Thanks.
I realize this is a long shot without sharing code, but I'm hoping it rings
a bell for someone.
I've implemented a RESTful API with Akka HTTP, and most calls work fine,
but I see this message in the logs (and I can't seem to correlate it with a
particular route):
16:42:47.761
I have the actor below which prints the result of a Future, then makes
the same request after a delay, ad infinitum. Based on "Scheduling Periodic
Messages" here:
http://doc.akka.io/docs/akka/snapshot/scala/howto.html
Can someone please point me in the right direction to do the same with Akka
ing chapter 14 of
> Reactive Design Patterns.
>
> Regards,
>
> Roland
>
> 13 sep 2015 kl. 15:55 skrev Richard Rodseth <rrods...@gmail.com>:
>
> Hi Roland
>
> "But as long as the logic is as simple as shown above (in particular the
> error/failure handl
Hi Roland
"But as long as the logic is as simple as shown above (in particular the
error/failure handling) then ask() will be superior in every respect."
I was a little surprised to read that. I thought there was some agreement
that chained asks are tricky to manage.
See the thread(s) and Activator templates on per-request actors. With Spray I
avoided ask completely by using per-request actors and messages with a replyTo
property. Others use much more Future-centric approaches.
Sent from my phone - will be brief
> On Sep 12, 2015, at 5:15 PM, kraythe
>
> --
>
> *Heiko Seeberger*
> Home: heikoseeberger.de
> Twitter: @hseeberger <https://twitter.com/hseeberger>
> Public key: keybase.io/hseeberger
>
> On 10 Sep 2015, at 21:17, Richard Rodseth <rrods...@gmail.com> wrote:
>
> Oh good. Thanks.
>
> On
if a single top level actor
>> becomes a bottleneck.
>>
>
> That can work, even if not trivial. Also ties the Flow to a certain
> top-level actor, making it less reusable -- but it might not matter in an
> Http handler anyway. I just still don't like that single bottleneck point,
>
berger*
> Home: heikoseeberger.de
> Twitter: @hseeberger <https://twitter.com/hseeberger>
> Public key: keybase.io/hseeberger
>
> On 10 Sep 2015, at 17:02, Richard Rodseth <rrods...@gmail.com> wrote:
>
> Thanks for the response. This is somewhat encouraging. +1 to cook
rdwall <patrik.nordw...@gmail.com
> wrote:
> By forward I mean forward (not chained ask). ;-)
>
>
> On Thu, Sep 10, 2015 at 8:03 PM, Richard Rodseth <rrods...@gmail.com>
> wrote:
>
>> It seemed to me that forwarding implied chained asks. Not sure what you
>&
Oh good. Thanks.
On Thu, Sep 10, 2015 at 12:14 PM, Patrik Nordwall <patrik.nordw...@gmail.com
> wrote:
>
>
> On Thu, Sep 10, 2015 at 8:29 PM, Richard Rodseth <rrods...@gmail.com>
> wrote:
>
>> Well I must be missing something. Here
5 at 10:57 PM, Heiko Seeberger <loe...@posteo.de> wrote:
> On 08 Sep 2015, at 07:39, Richard Rodseth <rrods...@gmail.com> wrote:
>
> Thanks for the link to that sample.
> The other problem in doing per-request actors without ask pattern, as in
> the net-a-porter sampl
I'm trying Akka Http for the first time, coming from Spray. In Spray I have
a routing actor which extends HttpService and is passed to HttpBind as
shown below. This way routes can create per-request actors as a child of
the routing actor.
What would be the equivalent in Akka Http?
I'm not so
I've run into the same problem. How to do per-request actors rather than
ask pattern with Akka Http, and I'm afraid I don't understand how
handlerFlow helps. I've started a separate thread, but if either of you can
elaborate that would be great. The Spray migration page is still marked
TODO.
Maybe just move the bind call into my routing actor, in response to some
sort of Start message ?
On Mon, Sep 7, 2015 at 8:38 AM, Richard Rodseth <rrods...@gmail.com> wrote:
> I'm trying Akka Http for the first time, coming from Spray. In Spray I
> have a routing actor which extends
ing with actors?
On Mon, Sep 7, 2015 at 10:18 PM, Heiko Seeberger <loe...@posteo.de> wrote:
> On 07 Sep 2015, at 19:32, Richard Rodseth <rrods...@gmail.com> wrote:
>
> Maybe just move the bind call into my routing actor, in response to some
> sort of Start message ?
>
>
>
Have you looked at Future.sequence? It turns a List[Future[A]] into a
Future[List[A]].
Then you can pipe that result to the same actor or another, or add a
complete handler as you have done.
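A minimal, actor-free sketch of that (Await here is only for demonstration; inside an actor you would pipeTo the combined future instead):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Future.sequence flips List[Future[A]] into Future[List[A]],
// preserving element order.
val parts: List[Future[Int]] = List(Future(1), Future(2), Future(3))
val combined: Future[List[Int]] = Future.sequence(parts)
val result = Await.result(combined, 2.seconds)
// result == List(1, 2, 3)
```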
On Wed, Aug 26, 2015 at 11:17 AM, kraythe kray...@gmail.com wrote:
I am using play framework with a mix
You could also look at Requester (I haven't)
https://github.com/jducoeur/Requester
On Wed, Aug 26, 2015 at 10:21 AM, Richard Rodseth rrods...@gmail.com
wrote:
Robert, in my case the REST endpoint is using Spray. The per-request actor
has a reference to the RequestContext, and calls complete
ask() returns a future. You can do things like use Future.sequence, or
for-comprehensions.
That said, in my project we worked pretty hard to avoid using the ask()
pattern.
http://techblog.net-a-porter.com/2013/12/ask-tell-and-per-request-actors/
On Wed, Aug 26, 2015 at 9:11 AM, kraythe
Robert, in my case the REST endpoint is using Spray. The per-request actor
has a reference to the RequestContext, and calls complete() on it, before
stopping itself.
I don't have time to check, but it might be modelled on this Activator
Template (which I think is referenced in the net-a-porter
have an
arbitrary number of topics, eg. one per channel?
On Fri, Aug 7, 2015 at 2:01 AM, Patrik Nordwall patrik.nordw...@gmail.com
wrote:
On Tue, Jul 21, 2015 at 10:56 PM, Richard Rodseth rrods...@gmail.com
wrote:
I'd love a little more input on this, being a complete neophyte when it
comes
I sometimes call recover on the returned Future to generate a message of my
own, before doing the pipeTo.
On Wed, Aug 5, 2015 at 6:30 AM, Johan Andrén johan.and...@typesafe.com
wrote:
pipeTo will wrap failures in an akka.actor.Status.Failure and send that
to the actor that you direct the
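Spelled out without the actor plumbing (the Reply types here are invented for the sketch): map the success into one of your own messages and recover the failure into another, so pipeTo only ever delivers your protocol rather than akka.actor.Status.Failure.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

sealed trait Reply
final case class Ok(n: Int) extends Reply
final case class Oops(reason: String) extends Reply

// Convert both outcomes to domain messages before any pipeTo.
val risky: Future[Int] = Future(throw new RuntimeException("boom"))
val reply: Future[Reply] =
  risky.map(n => Ok(n): Reply).recover { case e => Oops(e.getMessage) }
val outcome = Await.result(reply, 2.seconds)
// outcome == Oops("boom")
```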
with sendOneMessageToEachGroup
and a custom routing logic?
Thanks for any thoughts on this use case.
On Fri, Jul 10, 2015 at 9:11 AM, Richard Rodseth rrods...@gmail.com wrote:
But I'll take a look, thanks. Not sure if one topic per channel is
feasible.
On Fri, Jul 10, 2015 at 9:04 AM, Richard Rodseth
Great news. Once the new modules are merged into 2.4, will the only changes
to the 2.3 artifacts be bug fixes?
On Wed, Jul 15, 2015 at 5:40 AM, Konrad Malawski ktos...@gmail.com wrote:
Dear hakkers,
we—the Akka committers—are very pleased to announce the final release of
Akka Streams HTTP
But I'll take a look, thanks. Not sure if one topic per channel is feasible.
On Fri, Jul 10, 2015 at 9:04 AM, Richard Rodseth rrods...@gmail.com wrote:
Nope. I imagined it to be for broadcasting, rather than having something
analogous to LookupClassification.
On Fri, Jul 10, 2015 at 12:16 AM
We're not using Akka Cluster yet, but I have an entity actor type that is
ripe for sharding.
But there's a complication. The numerous entities are receiving event data
for various channels, and rather than receiving messages directly from
the supervisor, the supervisor publishes them to an
You don't want more than one ActorSystem per process.
If each BC is a separate process (a la microservices) then having something
like Kafka to durably message between them would be great.
Within an ActorSystem, you can use the Akka EventBus to (non-durably)
message between root-level actors that
So each BC is an Akka Cluster, and you are relying on remote Akka (remote)
messages to propagate changes from one BC to another? That sounds risky.
Implementing DDD has a section (p303) called Spreading the news to Remote
Bounded Contexts
On Fri, Jun 5, 2015 at 11:18 AM, Guido Medina
Oh, I misunderstood. Thanks for clarifying. I'll be interested in other
reactions. My impression (perhaps incorrect) is that a small subset of Akka
users are using Akka Cluster.
On Fri, Jun 5, 2015 at 3:24 PM, Guido Medina oxyg...@gmail.com wrote:
No, I have a single cluster, each micro-service
Amir, you asked about alternatives. It seems Martin Krasser is no longer
involved with Akka Persistence and has his own project Eventuate. I don't
have any experience with it (or Akka Persistence for that matter), but
here's a recent blog post: