Anyone else running more than one stream in a deployable? Surely the answer
must be yes. Would love to know what the problem might be.
On Wed, Dec 27, 2017 at 10:02 AM, Harshit Patel
wrote:
> We have an application that uses reactive-kafka to consume messages from a
>
Despite the excellent and timely blog post from Colin Breck
http://blog.colinbreck.com/maximizing-throughput-for-akka-streams/
we are having a devil of a time optimizing throughput in a stream which
does the following
1) consume messages containing a channel UUID from Kafka
The topic is
You could potentially use the query side of Akka Persistence (eventsByTag)
to stream requests for email sends, feeding failures back into the journal
with the same tag, so they reenter the stream.
https://doc.akka.io/docs/akka/2.5/scala/persistence-query.html
On Fri, Nov 10, 2017 at 5:16 AM,
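A rough sketch of that eventsByTag loop, assuming the LevelDB read journal, a hypothetical EmailSend event type, the tag name "email-send", and a caller-supplied sendEmail function; failures would be re-persisted under the same tag by a persistent actor so they re-enter this stream:

```scala
import akka.NotUsed
import akka.actor.ActorSystem
import akka.persistence.query.{EventEnvelope, Offset, PersistenceQuery}
import akka.persistence.query.journal.leveldb.scaladsl.LeveldbReadJournal
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.Future

final case class EmailSend(to: String, body: String) // hypothetical event

def runEmailStream(sendEmail: EmailSend => Future[Unit])
                  (implicit system: ActorSystem): Unit = {
  implicit val mat: ActorMaterializer = ActorMaterializer()
  import system.dispatcher

  val journal = PersistenceQuery(system)
    .readJournalFor[LeveldbReadJournal](LeveldbReadJournal.Identifier)

  // Live stream of everything tagged "email-send"; re-tagged failures
  // show up here again automatically.
  val sends: Source[EventEnvelope, NotUsed] =
    journal.eventsByTag("email-send", Offset.noOffset)

  sends
    .collect { case EventEnvelope(_, _, _, e: EmailSend) => e }
    .mapAsync(parallelism = 4)(sendEmail)
    .runWith(Sink.ignore)
}
```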
Technically it would be breaking to do so, but materializers are somewhat
> special “internal” one might say.
>
> --
> Cheers,
> Konrad 'ktoso <http://kto.so>' Malawski
> Akka <http://akka.io/> @ Lightbend <http://lightbend.com/>
>
> On October 24, 2017 at 14:20:41, Ri
> --
> Cheers,
> Konrad 'ktoso <http://kto.so>' Malawski
> Akka <http://akka.io/> @ Lightbend <http://lightbend.com/>
>
> On October 24, 2017 at 8:04:10, Richard Rodseth (rrods...@gmail.com)
> wrote:
>
> I just heard from someone who attended a
Read this if you haven't
https://doc.akka.io/docs/akka/2.5/scala/dispatchers.html#blocking-needs-careful-management
You can start with a single Actor for JMS and one for file IO, each
configured with a custom dispatcher.
Or use Futures, with custom dispatchers provided as ExecutionContext.
On
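A minimal standard-library sketch of the Futures variant: blocking file IO runs on a dedicated fixed pool supplied as the ExecutionContext, so it cannot starve the default dispatcher (the pool size of 4 is an arbitrary assumption):

```scala
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

object BlockingIo {
  // Dedicated pool for blocking JMS / file IO; sized here arbitrarily.
  val blockingEc: ExecutionContext =
    ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(4))

  // The blocking read is isolated on the dedicated pool, not on the
  // caller's dispatcher.
  def readFile(path: String): Future[String] =
    Future(scala.io.Source.fromFile(path).mkString)(blockingEc)
}
```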
re)(Keep.both)
But those deciders are not reached, so I appear to have no way to detect
the serialization problem and skip the element.
On Fri, Oct 6, 2017 at 10:30 AM, Richard Rodseth <rrods...@gmail.com> wrote:
> I'm using akka-kafka with committablePartitionedSource, meaning I have a
I'm using akka-kafka with committablePartitionedSource, meaning I have a
source of (partition, innersource) tuples.
I'm currently using the flatMapMerge recipe from
https://doc.akka.io/docs/akka-stream-kafka/current/consumer.html#source-per-partition
val done =
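For reference, the flatMapMerge recipe from that page looks roughly like the sketch below; `business` is a hypothetical Future-returning function, and the exact commit API varies between akka-stream-kafka versions:

```scala
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.kafka.scaladsl.Consumer
import akka.stream.Materializer
import akka.stream.scaladsl.Sink
import scala.concurrent.{ExecutionContext, Future}

def run(consumerSettings: ConsumerSettings[String, String],
        maxPartitions: Int,
        business: String => Future[Unit])
       (implicit mat: Materializer, ec: ExecutionContext) = {
  val done =
    Consumer
      .committablePartitionedSource(consumerSettings, Subscriptions.topics("topic1"))
      // merge up to maxPartitions per-partition sources into one stream
      .flatMapMerge(maxPartitions, { case (_, source) => source })
      .mapAsync(1)(msg => business(msg.record.value).map(_ => msg))
      .mapAsync(1)(_.committableOffset.commitScaladsl())
      .runWith(Sink.ignore)
  done
}
```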
We have an integration test that uses Akka Kafka to publish messages to a
topic, which are then consumed by a library under test that also uses Akka
Kafka.
Because autoOffsetReset is "latest", we are trying to defer publishing the
test messages until the consumer is "ready".
The consumer stream
My preliminary testing suggests that the flatMapMerge version will *not*
work if the breadth value is less than maxPartitions.
I don't understand why all partition sources wouldn't continue to emit and
be merged.
On Mon, Sep 11, 2017 at 4:27 PM, Richard Rodseth <rrods...@gmail.com>
The first two code samples here show different ways of consuming multiple
Kafka partitions,
without really explaining why you would use one or the other.
http://doc.akka.io/docs/akka-stream-kafka/current/consumer.html#source-per-partition
The first uses flatMapMerge:
val done =
Off the top of my head: not sure how you are getting the parameters, if
any, to getNextItem, but since xxAsync methods like Source.unfoldAsync and
mapAsync expect a future-returning function, I would use getNextItem
(rather than identity) in one of those to give you downstream elements of
type
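A sketch of that suggestion, assuming getNextItem takes a cursor and returns a Future of an optional next state and element (None ends the stream); Item is a hypothetical element type:

```scala
import akka.NotUsed
import akka.stream.scaladsl.Source
import scala.concurrent.Future

final case class Item(id: Long) // hypothetical element type

// unfoldAsync repeatedly applies the Future-returning function to the
// state; Some((nextState, elem)) emits elem, None completes the stream.
def items(getNextItem: Long => Future[Option[(Long, Item)]]): Source[Item, NotUsed] =
  Source.unfoldAsync(0L)(getNextItem)
```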
first). Right failure:
> Object is missing required member 'id', Left failure: Substream Source
> cannot be materialized more than once
>
> Do I need to place a toStrict somewhere in my for comprehension?
>
>
>
> On Tue, Aug 22, 2017 at 12:28 PM, Richard Rodseth <rrods...@gm
Left failure: Substream Source cannot be materialized more than once
Do I need to place a toStrict somewhere in my for comprehension?
On Tue, Aug 22, 2017 at 12:28 PM, Richard Rodseth <rrods...@gmail.com>
wrote:
> Having some success with Either unmarshaller. I don't think it's
> documen
Having some success with Either unmarshaller. I don't think it's
documented, but here's a test:
https://github.com/akka/akka-http/blob/master/akka-http-tests/src/test/scala/akka/http/scaladsl/unmarshalling/UnmarshallingSpec.scala
On Tue, Aug 22, 2017 at 11:16 AM, Richard Rodseth <rr
I'm trying to use Akka HTTP client (singleRequest) with an API that returns
different JSON for the success and error cases. How is that best handled?
Also, I'm not sure if the explicit marshalling and unmarshalling below is
the right way to do things.
Any help much appreciated.
def
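One way to branch on the status code with singleRequest; ApiSuccess and ApiError are hypothetical payloads, and JSON unmarshallers for them are assumed to be available (here passed as implicit parameters):

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, ResponseEntity}
import akka.http.scaladsl.unmarshalling.{Unmarshal, Unmarshaller}
import akka.stream.ActorMaterializer
import scala.concurrent.Future

final case class ApiSuccess(data: String)   // hypothetical
final case class ApiError(message: String)  // hypothetical

def call(req: HttpRequest)(implicit system: ActorSystem,
                           mat: ActorMaterializer,
                           okUm: Unmarshaller[ResponseEntity, ApiSuccess],
                           errUm: Unmarshaller[ResponseEntity, ApiError])
    : Future[Either[ApiError, ApiSuccess]] = {
  import system.dispatcher
  Http().singleRequest(req).flatMap { resp =>
    // Success and error cases carry different JSON, so unmarshal by status.
    if (resp.status.isSuccess()) Unmarshal(resp.entity).to[ApiSuccess].map(Right(_))
    else Unmarshal(resp.entity).to[ApiError].map(Left(_))
  }
}
```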
I use Eclipse-based Scala IDE. I import Maven projects. But I often have to
right click on the project and do Maven -> Update Project. I also often
have to do Scala -> Restart Presentation Compiler. Not sure if that's
helpful.
On Mon, Jul 3, 2017 at 11:45 AM, John Arnold
In case it helps:
https://groups.google.com/d/topic/akka-user/AgVHHnl9ub4/discussion
https://github.com/akka/akka/issues/22742
On Fri, May 26, 2017 at 1:34 PM, Curt Siffert wrote:
>
> Hi, I see in the docs for 2.5.2 that ActorPublisher/ActorSubscriber will
> be deprecated.
Looks nice.
However, if I go to
http://akka.io/docs/
and type "Dispatcher" in the search box, nothing happens. There's no Search
button, and Enter does nothing either.
Chrome, OS X.
Am I missing something?
On Wed, May 24, 2017 at 7:56 AM, Arnout Engelen <
arnout.enge...@lightbend.com>
The Confluent add-ons like Schema Registry are open source, and the client
library and Kafka itself are forks of the Apache versions.
reactive-kafka seems to be "reactive Kafka for Apache Kafka".
I'm wondering if folks are using it in conjunction with the Confluent
offerings.
--
>>
On Thu, May 11, 2017 at 12:49 AM, Richard Rodseth <rrods...@gmail.com>
> wrote:
>
>> I was considering using CoordinatedShutdown to shutdown Kamon, either
>> with PhaseBeforeActorSystemTerminate or with addJvmShutdownHook
>>
>> But it seems I have to explicitly cal
I was considering using CoordinatedShutdown to shutdown Kamon, either with
PhaseBeforeActorSystemTerminate or with addJvmShutdownHook
But it seems I have to explicitly call run(). Where would I do that?
Not using Akka Cluster in this app.
Thanks.
val system = ActorSystem("test")
I was hoping to promote a pattern in my organization, using reactive-kafka
(source-per-partition). Some of my colleagues are comfortable with actors,
but it would be great if others could be introduced to the streams APIs
without learning all about actors.
Am I correct that there is currently no
Because we're using Avro for Kafka, we also looked at avro and avro4s, but
ended up using Protobuf (with Maven plugin) for Akka Persistence. Mapping
between the IDL-generated classes and case classes is indeed unfortunate,
but no other issues so far.
On Mon, May 1, 2017 at 12:54 AM, Joost de
g you can do
> is to poll or block until it is completed. It would have to be
> CompletableFuture.
>
> --
> Johan
> Akka Team
>
> On Fri, Apr 28, 2017 at 5:54 PM, Richard Rodseth <rrods...@gmail.com>
> wrote:
>
>> Thanks. There *is* a version of
ion has completed. It would not be
> possible to use the latter with mapAsync.
>
> --
> Johan
> Akka Team
>
> On Sat, Apr 8, 2017 at 1:08 AM, Richard Rodseth <rrods...@gmail.com>
> wrote:
>
>> I'm still curious what, if any, advantages the ProducerStage as in
>>
am
>
> On Thu, Apr 13, 2017 at 11:30 PM, Richard Rodseth <rrods...@gmail.com>
> wrote:
>
>> Congrats on the release of 2.5.
>>
>> Isn't the documentation and accompanying sample for Resumable Projections
>> a bit odd?
>>
>> http://doc.akka.io/docs
I've read that ActorPublisher and ActorSubscriber are deprecated in favour
of GraphStage, but haven't found examples of how to accomplish this. This
section of the documentation is a stub:
http://doc.akka.io/docs/akka/2.5/scala/stream/stream-customize.html#Integration_with_actors
I have several
Congrats on the release of 2.5.
Isn't the documentation and accompanying sample for Resumable Projections a
bit odd?
http://doc.akka.io/docs/akka/2.5.0/scala/persistence-query.html#Resumable_projections
DSL.
Route contains:
onSuccess(requestHandler ? RequestHandler.AskForStatus) { ... }
On Sat, Apr 8, 2017 at 2:31 PM, kraythe wrote:
> And you accomplish this with the low level or DSL api.
>
> On Saturday, April 8, 2017 at 12:56:26 AM UTC-5, rrodseth wrote:
>>
>> I
I use per-request actors. The only ask() is in the route, and is sent to a
RequestHandler actor. The RequestHandler creates a per-request actor with
the sender as a Props/Constructor parameter ("requestor"), and sends it a
"Run" message.
The per-request actor can coordinate with as many actors as
I'm still curious what, if any, advantages the ProducerStage as in
reactive-kafka has over using mapAsync as in CassandraSink. Anyone?
On Fri, Mar 31, 2017 at 7:40 AM, Richard Rodseth <rrods...@gmail.com> wrote:
> I was looking at the implementation of the reactive-kafka ProducerStag
I was looking at the implementation of the reactive-kafka ProducerStage.
Very interesting.
https://github.com/akka/reactive-kafka/blob/master/core/src/main/scala/akka/kafka/internal/ProducerStage.scala
It seems this pattern could be used for any external API that returns a
Future.
By contrast,
> no longer exists and just sends the messages to dead-letters.
>>
>
> That sounds wrong. The mediator is watching the subscribers and remove
> them when they are terminated. Please open an issue if you are sure and
> have a repeatable test.
>
>
>>
>> On Thu, Mar 9,
Is it true that
- passivated (sleeping) actors aren't able to listen to the pub/sub topic?
Unfortunately there's no answer here:
http://stackoverflow.com/questions/40782570/can-akka-distributedpubsub-deal-with-passivated-subscribers
On Wed, Mar 8, 2017 at 6:18 PM, Richard Ney
I think they are referring to child actors. i.e. it's OK for actor A to
have a backoff supervisor (A can be persistent or not), but if it has a
child which is a persistent actor then the caveats apply.
On Wed, Mar 8, 2017 at 6:09 AM, Alan Burlison
wrote:
> On 08/03/17
Bump.
Has anyone implemented a CircuitBreaker-like stage
(mapAsyncWithCircuitBreaker perhaps?) that would provide backpressure?
On Mon, Feb 27, 2017 at 2:18 PM, Richard Rodseth <rrods...@gmail.com> wrote:
> What's the current thinking on this? This ticket is open:
>
> https://
What's the current thinking on this? This ticket is open:
https://github.com/akka/akka/issues/15307
I have an infinite stream communicating with an external service. If the
external service goes down or some threshold of errors is reached, it may
be appropriate to stop the actor that runs the
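One way to approximate a mapAsyncWithCircuitBreaker with existing pieces, assuming akka.pattern.CircuitBreaker: mapAsync supplies the backpressure, and the breaker fails calls fast while open (the thresholds below are placeholders):

```scala
import akka.NotUsed
import akka.actor.ActorSystem
import akka.pattern.CircuitBreaker
import akka.stream.scaladsl.Flow
import scala.concurrent.Future
import scala.concurrent.duration._

def protectedFlow[A, B](call: A => Future[B])
                       (implicit system: ActorSystem): Flow[A, B, NotUsed] = {
  import system.dispatcher
  val breaker = new CircuitBreaker(
    system.scheduler,
    maxFailures  = 5,          // placeholder threshold
    callTimeout  = 10.seconds, // placeholder
    resetTimeout = 1.minute)   // placeholder
  // mapAsync backpressures; withCircuitBreaker fails fast when the
  // breaker is open, so the stream's supervision can react.
  Flow[A].mapAsync(parallelism = 4)(a => breaker.withCircuitBreaker(call(a)))
}
```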
Persistent Actor A consumes from Kafka and stores some events (let's call
them ProcessingRequested). Persistent Actor B runs a processing stream
whose source is tagged events from Persistent Actor A. As messages exit the
processing stream they are fed back to B which persists ProcessingSucceeded
What is the purpose of this method? It just returns lastSequenceNr.
I'm implementing an API to delete unneeded events and snapshots.
It seems I will have to store (in memory) the metadata provided with
SnapShotOffer. That's fine, I'm just curious what the purpose
of snapshotSequenceNr is.
I
We are using the akka-persistence-jdbc plugin. I know there are APIs for
deleting snapshots and events. But I'm wondering if it's essential that a
"journal manager" tool use these, or could we just issue SQL commands. What
are others doing?
And yes, I know that the events often have business
Oops. Thanks!
Sent from my phone - will be brief
> On Jan 26, 2017, at 8:09 AM, Patrik Nordwall <patrik.nordw...@gmail.com>
> wrote:
>
> Sharded actors can have constructor. You define the Props for them.
>> On Thu, Jan 26, 2017 at 16:45, Richard Rodseth <r
of config?
On Thu, Jan 26, 2017 at 7:40 AM, Richard Rodseth <rrods...@gmail.com> wrote:
> Good to know!
>
> https://github.com/akka/akka/issues/22233
>
> On Thu, Jan 26, 2017 at 7:25 AM, Patrik Nordwall <
> patrik.nordw...@gmail.com> wrote:
>
>> There is
r != 0)
>
> On Thu, Jan 26, 2017 at 3:53 PM, Konrad Malawski <
> konrad.malaw...@lightbend.com> wrote:
>
>> B, it's async anyway.
>>
>> --
>> Konrad `ktoso` Malawski
>> Akka <http://akka.io> @ Lightbend <http://lightbend.com>
I've implemented snapshotting every n events by checking if offset modulo n
== 0.
Also, to answer myself, the snapshot is a Protobuf object as well.
On Sun, Jan 8, 2017 at 5:14 PM, Tal Pressman wrote:
> Still, I don't think you can get snapshots that were saved by the
>
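The modulo check described above might look like this inside the persistent actor; the every-n interval and the Long-offset state come from the thread, while the persistenceId and message shape are assumptions:

```scala
import akka.persistence.{PersistentActor, SnapshotOffer}

class OffsetProjection extends PersistentActor {
  override def persistenceId: String = "offset-projection" // assumed

  private val snapshotInterval = 100L // snapshot every n events
  private var offset: Long = 0L

  override def receiveRecover: Receive = {
    case SnapshotOffer(_, snap: Long) => offset = snap // snapshot is the offset
    case o: Long                      => offset = o
  }

  override def receiveCommand: Receive = {
    case o: Long =>
      persist(o) { persisted =>
        offset = persisted
        if (lastSequenceNr % snapshotInterval == 0 && lastSequenceNr != 0)
          saveSnapshot(offset)
      }
  }
}
```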
Sorry this wasn't more clear. I *am* saving the offset of a
persistence-query. I'm saving it in the persistent actor that issues the
query.
On Sun, Jan 8, 2017 at 9:48 AM, Tal Pressman wrote:
> If I am not mistaken, persistence-query does not support snapshots so you
> would
I have a PersistentActor which is a read side projection that runs a
persistence-query stream. Its only state is an offset, a Long. Protobuf
serialization is used for persistence messages.
I'm preparing to add snapshotting. The examples show a "snap" message
handled by calling saveSnapshot.
Am
Any tips for debugging one of these?
PersistentShardCoordinator - Persistence failure when replaying events for
persistenceId
Right now, our logging config is broken so I can't even get more info with
loglevel = DEBUG, but since I'm desperate I'm hoping there's a common
mistake someone will
: http://doc.akka.io/docs/akka/current/scala/persistence-schema-evolution.html
>
> Happy hakking
>
> On Wed, Dec 7, 2016 at 12:53 AM, Richard Rodseth <rrods...@gmail.com>
> wrote:
>
>> Anyone else using Avro and avro4s to serialize persistence events? I wa
Anyone else using Avro and avro4s to serialize persistence events? I was
able to get the SerializerWithStringManifest below working (with generic
serialize and deserialize methods). Should I be extending something other
than SerializerWithStringManifest, given that Avro on its own helps with
d by passing it in as constructor parameter.
>
> Another way is to retry the actorOf in the test until it is successful.
> awaitAssert in testkit can be useful for that.
>
> On Fri, Dec 2, 2016 at 20:16, Richard Rodseth <rrods...@gmail.com> wrote:
>
>> I see now that
>>
example of using self.path.name in
the persistenceId ?
http://doc.akka.io/docs/akka/2.4/scala/cluster-sharding.html#An_Example
override val persistenceId: String = "XYZ-" + self.path.name
On Fri, Dec 2, 2016 at 10:59 AM, Richard Rodseth <rrods...@gmail.com> wrote:
or with a different actor name but with the
> same persistenceId.
>
> If you don't specify the actor name a unique name will be used.
>
> /Patrik
>
> On Fri, Dec 2, 2016 at 19:43, Richard Rodseth <rrods...@gmail.com> wrote:
>
>> I'm also looking here:
>> https://g
"a-1", "a-2"))
}
"A persistent actor" must {
"recover from persisted events" in {
val persistentActor = namedPersistentActor[Behavior1PersistentActor]
persistentActor ! GetState
expectMsg(List("a-1", "a-2"))
}
On Fri, Dec 2, 2016 at 7:42 AM,
nd call it the same thing, so they (rightfully so)
> objected to that suggestion.
>
> This trips people up every now and then, but I've never seen a proposal to
> solve it.
>
> --
> Cheers,
> √
>
> On Dec 2, 2016 02:44, "Richard Rodseth" <rrods...@gmail.com>
We have a unit test for a persistent actor, which instantiates a second
instance after waiting for termination of the first.
watch(definitionReader)
system.stop(definitionReader)
expectTerminated(definitionReader)
val notificationDefinitionReader2 = system.actorOf(actorProps, actorName)
This
somewhere, I'm looking
>> into it.
>> Seems it doesn't compose nicely with existing predefined primitive
>> marshallers (like String here), if it was wrapped in a type (say
>> Thing("One") then it works, which is what all our tests were doing).
>>
>> Please track
Trying this out for the first time. I get a 200 OK but empty body.
val eventsRoute = path("events") {
get {
//val results: List[String] = List("One", "Two", "Three")
val results: Source[String, NotUsed] = Source(List("One", "Two",
"Three"))
complete(results)
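A likely cause of the empty body is missing streaming support: with akka-http's EntityStreamingSupport in implicit scope, plus a JSON marshaller for the elements, complete(Source(...)) renders a streamed response. A sketch, details vary by akka-http version:

```scala
import akka.NotUsed
import akka.http.scaladsl.common.EntityStreamingSupport
import akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport._
import akka.http.scaladsl.server.Directives._
import akka.stream.scaladsl.Source
import spray.json.DefaultJsonProtocol._

// Without this implicit, completing with a Source can yield an empty body.
implicit val jsonStreaming = EntityStreamingSupport.json()

val eventsRoute = path("events") {
  get {
    val results: Source[String, NotUsed] = Source(List("One", "Two", "Three"))
    complete(results)
  }
}
```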
dled by streams itself as it is a feature that Sources must
> support. I think the most common solution people use for this is Kafka.
> There is a Streams connector for Kafka: https://github.com/akka/reactive-kafka
>
> -Endre
>
> On Tue, Nov 15, 2016 at 3:04 AM, Richard
We are seeing some baffling behaviour in a unit test (using
akka-persistence-inmemory).
The test starts actor A and actor B (which starts a stream using
eventsByTag in its
RecoveryCompleted handler).
The test then sends commands to actor A, which results in appropriately
tagged events being
Any good examples out there of resumable projections driving non-trivial
streams?
I'm guessing I will have to keep the stream 1-1 and pass the offset all the
way downstream so I can save it at the end?
--
>> Read the docs: http://akka.io/docs/
>> Check the FAQ:
tor system with a running stream
> in it.
>
> The paths of the supervisors are pretty much an implementation detail, but
> perhaps you can provide some addition to the string "end-of-lookup" to give
> you context in the log entry?
>
> --
> Johan
> Akka T
I'm trying to write two ScalaTest tests to compare two approaches to
driving a Flow that sends out emails. In one approach the flow is sourced
using eventsByTag from akka-persistence-query. In the other, I use
SourceQueue.offer after receiving a message via AtLeastOnceDelivery. The
test ends with
I was originally imagining a long flow from an akka-streams-kafka
committableSource, but due to the issue I raised earlier about needing
CommittableMessages throughout the flow, I am considering splitting the
stream into StreamA and StreamB, where StreamA would capture the consumed
messages in a
Out, CommittableMessage[_,In], (Out,
CommittableMessage[_,In])] = b.add(Zip[Out, CommittableMessage[_,In]]())
bcast.out(0) ~> toValue ~> job ~> zip.in0
bcast.out(1) ~> zip.in1
FlowShape(bcast.in, zip.out)
}
result
}
On Wed, Oct 26, 2016 at 2:08 PM,
Yes, thanks. I'll explore this.
On Wed, Oct 26, 2016 at 2:04 PM, Roland Kuhn wrote:
> Yes, indeed: if it is strictly 1:1 and it retains the order of the
> messages, then this works. Thanks for the sample!
>
> Regards,
>
> Roland
>
> > On Oct 26, 2016 at 22:12, Itamar Ravid wrote
” in any case, because the
> types you’ll encounter will certainly be daunting.
>
> I’d love to sink my teeth into this problem, but unfortunately I don’t
> have time for that right now :-(
>
> Regards,
>
> Roland
>
> On Oct 26, 2016 at 18:49, Richard Rodseth <rr
On Wed, Oct 26, 2016 at 9:23 AM, Viktor Klang <viktor.kl...@gmail.com>
wrote:
> what would happen if that stage would silently discard the
> CommittableMessage?
>
> --
> Cheers,
> √
>
> On Oct 26, 2016 6:09 PM, "Richard Rodseth" <rrods...@gmail.com> wrot
I'm planning to use a committableSource from akka-streams-kafka.
Is there a way to re-use an existing Flow that knows nothing about Kafka,
extracting the record value and reconstituting the CommittableMessage at the
end of the flow?
In the past I've experimented with using TypeClasses in my flow
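If the Kafka-unaware flow is strictly one-to-one and order-preserving, one sketch is to broadcast the CommittableMessage, run only the record value through the inner flow, and zip the result back up (the 1:1 ordering assumption is essential, otherwise the zip mispairs elements):

```scala
import akka.NotUsed
import akka.kafka.ConsumerMessage.CommittableMessage
import akka.stream.FlowShape
import akka.stream.scaladsl.{Broadcast, Flow, GraphDSL, Zip}

def withContext[K, V, Out](inner: Flow[V, Out, NotUsed])
    : Flow[CommittableMessage[K, V], (Out, CommittableMessage[K, V]), NotUsed] =
  Flow.fromGraph(GraphDSL.create() { implicit b =>
    import GraphDSL.Implicits._
    val bcast = b.add(Broadcast[CommittableMessage[K, V]](2))
    val zip   = b.add(Zip[Out, CommittableMessage[K, V]]())
    // one copy loses its Kafka context, the other carries it around
    bcast.out(0).map(_.record.value) ~> inner ~> zip.in0
    bcast.out(1)                               ~> zip.in1
    FlowShape(bcast.in, zip.out)
  })
```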
snapshot/java/stream/stream-integrations.html
>
> Regards,
> Patrik
>
>
>> On Tue, Oct 25, 2016 at 11:26 PM, Richard Rodseth <rrods...@gmail.com> wrote:
>> I should add that the ask() I would be inserting would actually be to the
>> ShardRegion for a shared
I should add that the ask() I would be inserting would actually be to the
ShardRegion for a shared, persistent actor.
On Tue, Oct 25, 2016 at 11:33 AM, Richard Rodseth <rrods...@gmail.com>
wrote:
> Anyone else? Suppose I need a stage that just looks up something that is
backpressure.
This SO question advises against making an actor a Processor.
http://stackoverflow.com/questions/31272267/creating-an-actorpublisher-and-actorsubscriber-with-same-actor
On Thu, Oct 20, 2016 at 2:55 PM, Richard Rodseth <rrods...@gmail.com> wrote:
> Short version: is it fa
Isn't your comment about "messes up the pseudo-single-threading invariant
of Actors" more about not closing over mutable state? In any case, you
can't avoid Futures if you're using Slick or HTTP clients, for example.
On Fri, Oct 21, 2016 at 7:32 AM, Richard Rodseth <rrods...@gma
Thanks for the Requester link.
On Fri, Oct 21, 2016 at 4:55 AM, Justin du coeur <jduco...@gmail.com> wrote:
> On Thu, Oct 20, 2016 at 5:55 PM, Richard Rodseth <rrods...@gmail.com>
> wrote:
>
>> Short version: is it fair to say the traditional warnings against ask()
>> hold less wei
Short version: is it fair to say the traditional warnings against ask()
hold less weight because we have back-pressure?
In the past I've built an Akka app (no ask() pattern except at the outer
edge), and a tool that used Akka Streams (no visible actors except a
monitor updated with alsoTo), but
subscriber1)
> expectTerminated(subscriber2)
>
> On Thu, Oct 13, 2016 at 7:10 AM, Richard Rodseth <rrods...@gmail.com>
> wrote:
>
>> Thanks, but if I do this:
>>
>> system.stop(subscriber1)
>>
>> val subscriber2 = system.actorOf(props1, "
On Wed, Oct 12, 2016 at 9:59 PM, Patrik Nordwall <patrik.nordw...@gmail.com>
wrote:
> You can stop it and start a new actor with the same persistenceId.
> /Patrik
> On Thu, Oct 13, 2016 at 06:33, Richard Rodseth <rrods...@gmail.com> wrote:
>
>> I've been able to test re
I've been able to test recovery by using the in-memory journal and sending
a "bomb" message to the actor, which is handled by throwing an exception :
myActorRef ! DoSomething
myActorRef ! "bomb"
myActorRef ! GetState
expectMsg(MyActorState(...))
Is there any way I can do this without having to
xact same
> type (tuple with statuscode and APIError)?
>
> --
> Johan
> Akka Team
>
>> On Fri, Oct 7, 2016 at 1:30 PM, Richard Rodseth <rrods...@gmail.com> wrote:
>> Heiko has a different approach here
>> https://github.com/hseeberger/reactive-flows/bl
Implicit conversions sure are humbling.
On Fri, Oct 7, 2016 at 11:59 AM, Viktor Klang <viktor.kl...@gmail.com>
wrote:
> WARNING: I may very well be incorrect here
>
> Options:
> A: switch to the onComplete directive
> B: map to an Either[Result, PimpedResult]?
>
> On Fri
Isn't it true that your routes need to be in a trait to be able to make use
of the Routing Testkit?
On Fri, Oct 7, 2016 at 12:08 PM, wrote:
> Thank you for the response Johan but I'm not sure that really answers my
> question but perhaps I can ask some other questions
> Johan
> Akka Team
>
> On Fri, Oct 7, 2016 at 10:55 AM, Richard Rodseth <rrods...@gmail.com>
> wrote:
>
>> Continuing my struggles to port something we did with Spray, using
>>
>> // See https://bitbucket.org/binarycamp/spray-contrib/src
>>
>>
Continuing my struggles to port something we did with Spray, using
// See https://bitbucket.org/binarycamp/spray-contrib/src
I have resolved my implicit conversion errors to the point where I can
execute the following (Result[T] is an alias for Either[APIError, T]):
val successResult:
I asked this over on the DDD/CQRS list, but didn't get a reply, so I
thought I'd try here.
Imagine a system to notify users about alarms. I'm using Akka Persistence
which supports streaming projections from the event store to the read side.
I'm considering three aggregates, (alarm)Definition,
Thanks. It appears to be the absence of an Either marshaller ( Result[T] is
an alias for Either[APIError,T] )
Is that something that was available in Spray, but not in Akka HTTP ?
I see this got merged:
https://github.com/akka/akka/pull/20274
but it's an Unmarshaller.
On Tue, Sep 20, 2016 at 3:18
Bump.
Is this the right way to debug the missing conversion?
val prm = implicitly[Future[PimpedResult[(StatusCode, Result[StatusDTO])]]
=> ToResponseMarshallable]
On Wed, Sep 7, 2016 at 8:43 PM, Richard Rodseth <rrods...@gmail.com> wrote:
> Can anyone help me with this plea
does not work.
>
> You can use the autopilot feature of the TestProbe or a custom test actor
> responding the way you want in the test, it could also keep incoming
> requests for later assertions.
>
> --
> Johan
> Akka Team
>
> On Wed, Sep 7, 2016 at 1:47 AM, Richard Rodse
Can anyone help me with this please?
I have the following in my route:
val result = (requestHandler ?
RequestHandler.AskForStatus).mapTo[PimpedResult[(StatusCode,
Result[StatusDTO])]]
onSuccess(result) {
case _ => complete(result)
}
The mapTo is really just to assist my debugging. The result
point me at a sample which uses the ask pattern in routes, and
also tests the routes with the ScalatestRouteTest?
On Tue, Sep 6, 2016 at 12:56 PM, Richard Rodseth <rrods...@gmail.com> wrote:
> So the example here is not very realistic because the route does not
> depend on any acto
alid PimpedResult")
}
}
result
}
}
On Fri, Sep 2, 2016 at 1:01 PM, Richard Rodseth <rrods...@gmail.com> wrote:
> I'm guessing the errorMarshaller and statusMarshaller shown (which use
> compose) need to be recast according to
>
> http://doc.akka.io/docs/akka/2.4
PM, Richard Rodseth <rrods...@gmail.com> wrote:
> I'm struggling to convert the following response marshaller.
> ToResponseMarshaller is a trait and object in Spray, and Marshaller takes
> one type parameter.
>
> I've read the docs, and am stumped. Can anyone provide further
I'm struggling to convert the following response marshaller.
ToResponseMarshaller is a trait and object in Spray, and Marshaller takes
one type parameter.
I've read the docs, and am stumped. Can anyone provide further guidance?
// See https://bitbucket.org/binarycamp/spray-contrib/src
trait
ensuring that all
stream elements from a channel are sent to the same worker, channelId
modulo n would suffice, no?
Also, shouldn't there just be a partition() operator?
On Fri, Jul 22, 2016 at 3:10 AM, Akka Team <akka.offic...@gmail.com> wrote:
>
>
> On Fri, Jul 22, 2016 at 2:14 AM,
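The channelId-modulo-n rule itself is plain arithmetic; a standard-library sketch (floorMod avoids a negative index from a negative hashCode):

```scala
import java.util.UUID

// Deterministic worker assignment: every element of a given channel maps
// to the same worker index in [0, workers).
def workerFor(channelId: UUID, workers: Int): Int =
  java.lang.Math.floorMod(channelId.hashCode, workers)
```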
#akka.stream.scaladsl.Partition
which does not appear to be documented here
http://doc.akka.io/docs/akka/2.4.8/scala/stream/stages-overview.html#Fan-out_stages
On Thu, Jul 21, 2016 at 10:00 AM, Richard Rodseth <rrods...@gmail.com>
wrote:
> Thanks. That gives me some terms to Google, but any further
Consistent hashing and then an N-way merge to insert
> into db?
>
> --
> Cheers,
> √
>
> On Jul 20, 2016 9:09 PM, "Richard Rodseth" <rrods...@gmail.com> wrote:
>
>> I'm sure I've asked this before in numerous ways, but it's still an issue
>> f
I'm sure I've asked this before in numerous ways, but it's still an issue
for me.
I have an ETL stream that reads per-channel data and writes it to a
destination without backpressure. Within a channel, order of writes must be
preserved. So I want parallelism between channels, but not within.
If
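One way to express "parallel across channels, ordered within a channel" in Akka Streams is groupBy plus mapAsync(1); the Record type and write function here are assumptions:

```scala
import akka.NotUsed
import akka.stream.scaladsl.Flow
import scala.concurrent.{ExecutionContext, Future}

final case class Record(channelId: Int, payload: String) // hypothetical

// One substream per channel; mapAsync(1) preserves write order inside
// each substream while the substreams themselves run concurrently.
def etl(write: Record => Future[Unit], maxChannels: Int)
       (implicit ec: ExecutionContext): Flow[Record, Record, NotUsed] =
  Flow[Record]
    .groupBy(maxChannels, _.channelId)
    .mapAsync(1)(r => write(r).map(_ => r))
    .mergeSubstreams
```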
ctor
> materializer")
> }
>
> Note that this may make the stage impossible to use with any future
> alternative materializers.
>
> --
> Johan
>
> On Wed, Jul 13, 2016 at 9:54 PM, Richard Rodseth <rrods...@gmail.com>
> wrote:
>
>> Is there a way to get hold o
Is there a way to get hold of the LoggingAdapter used by Akka streams?
I'm making use of Viktor's Future retry
https://gist.github.com/viktorklang/9414163
and someone would like me to log something in the retry case. So I thought
I would add an implicit LoggingAdapter parameter and supply it at
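A standard-library sketch of a retry combinator in the spirit of that gist, with a log callback threaded in; a LoggingAdapter's warning method could be passed as `log`:

```scala
import scala.concurrent.{ExecutionContext, Future}

// Retries a Future-returning operation up to `retries` more times,
// invoking `log` on each failed attempt before retrying.
def retry[T](retries: Int)(op: () => Future[T])(log: Throwable => Unit)
            (implicit ec: ExecutionContext): Future[T] =
  op().recoverWith {
    case t if retries > 0 =>
      log(t) // e.g. adapter.warning("retrying after: {}", t.getMessage)
      retry(retries - 1)(op)(log)
  }
```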
I have a stream which needs to output some progress/error text to a file
and to console (different things to each).
I've implemented something using alsoTo(fileSink).alsoTo(consoleSink).
But now I'm considering funneling these messages through an actor
intermediary instead. i.e.
LowA, LowB] = ... //stub it
> val highLevelService: BidiFlow[A, LowA, LowB, B] = ... // service under
> test
> val testService: Flow[A, B] = highLevelService join lowLevelServiceStub
>
> I am not sure this is completely what you asked for though.
>
> -Endre
>
> On Fri, Ma
1 - 100 of 283 matches