will decide what to do.
B/
On 3 March 2015 at 23:22:21, Jakub Liska (liska...@gmail.com)
wrote:
Hey,
when ActorPublisher does :
onError(exceptionRegisteredInSupervisionDecider) then the stream just
fails with that exception. Supervision strategy doesn't work here. Is it
supposed
Hey,
when ActorPublisher does :
onError(exceptionRegisteredInSupervisionDecider) then the stream just
fails with that exception. Supervision strategy doesn't work here. Is it
supposed to, or won't it work for ActorPublishers?
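(For reference, a supervision decider is normally registered on the materializer settings; the point of this thread is that ActorPublisher.onError signals a stream failure directly, which the decider does not intercept. A minimal sketch of how a decider is usually attached, not code from this thread:)

import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, ActorMaterializerSettings, Supervision}
import akka.stream.scaladsl.{Sink, Source}

implicit val system = ActorSystem("demo")

// The decider only sees exceptions thrown inside stages (map, filter, ...),
// not failures signalled via ActorPublisher.onError.
val decider: Supervision.Decider = {
  case _: IllegalArgumentException => Supervision.Resume
  case _                           => Supervision.Stop
}

implicit val mat = ActorMaterializer(
  ActorMaterializerSettings(system).withSupervisionStrategy(decider))

// "x".toInt throws a NumberFormatException (an IllegalArgumentException),
// that element is dropped and the stream keeps running:
Source(List("1", "x", "2")).map(_.toInt).runWith(Sink.foreach(println))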
I suspect it is this issue https://github.com/akka/akka/issues/16979
The root cause is just not printed out :
Cause: java.lang.IllegalStateException: Processor actor terminated abruptly
I just found out that :
ActorPublisher and ActorSubscriber *cannot be used with remote actors*,
because if signals of the Reactive Streams protocol (e.g. request) are lost
the stream may deadlock.
But the same applies to https://github.com/akka/akka/issues/16416, right?
Yeah, but as I said, those are remote actors, so this is not an option.
Not even ActorPublisher/Subscriber because that doesn't work remotely
either...
LineSource ~> lineToRecord ~> indexRecord ~> bcast ~> sink.ignore
sink.ignore ~> responseHandler ~> bcast
I don't get
Hi, it was my fault, I finally understand the concept behind it, I can see
what is running concurrently/sequentially now. Thank you
Hi,
If ActorX needs to pass a load of data to be processed to a remote ActorY,
is this a correct thing to do?
Imagine the data is rows of huge files that are downloaded by ActorX and are
to be processed and indexed by ActorY.
trait IndexReq {
def source: Source[Row]
def flow: Flow[Row, Rec]
…})
Regards,
Roland
Sent from my iPhone
On 28 Apr 2015, at 07:37, Jakub Liska liska...@gmail.com
wrote:
I'm deconstructing the argument in like 20 places in my application :
flow.mapAsync { case res :: errors :: result :: HNil => ... }
and now pattern matching will have
at 7:24:34 AM UTC+2, Jakub Liska wrote:
Hey,
shouldn't the :
Flow#mapAsync(parallelism: Int, f: Out ⇒ Future[T]): Repr[T, Mat]
method have this signature :
Flow#mapAsync(parallelism: Int)(f: Out ⇒ Future[T]): Repr[T, Mat]
as Scala collections' foldLeft does, so it could be called like :
Flow
Hey,
shouldn't the :
Flow#mapAsync(parallelism: Int, f: Out ⇒ Future[T]): Repr[T, Mat]
method have this signature :
Flow#mapAsync(parallelism: Int)(f: Out ⇒ Future[T]): Repr[T, Mat]
as Scala collections' foldLeft does, so it could be called like :
Flow[Resource].mapAsync(4) { res => asyncCode }
It
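(The curried shape mainly buys the trailing-brace, pattern-matching call style shown above. A sketch of an enrichment that provides it, assuming the 1.0-era uncurried mapAsync(parallelism, f) quoted here; mapAsyncC is a made-up name. Newer Akka releases do ship the curried signature.)

import scala.concurrent.Future
import akka.stream.scaladsl.Flow

object FlowSyntax {
  // Curried front-end delegating to the uncurried mapAsync of that era.
  implicit class CurriedMapAsync[I, O, M](val flow: Flow[I, O, M]) {
    def mapAsyncC[T](parallelism: Int)(f: O => Future[T]): Flow[I, T, M] =
      flow.mapAsync(parallelism, f)
  }
}

// usage, mirroring the example above:
//   import FlowSyntax._
//   Flow[Resource].mapAsyncC(4) { res => asyncCode(res) }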
I'm also facing this problem with passing a context through the stream. Let's
say you have a stream like this :
download resource -> back it up -> parse it -> store errors -> process data
-> ... -> persist data -> sink
you kinda need some sort of context so that you know that if some later
flows
Thank you Roland, I'll stick with untyped Actors because stashing is kind
of essential for the way I'm dealing with backpressure and resilience right
now. Future akka stream distributed support might be a good fit though.
I do it just via ActorPublisher; the scroll method is basically
asynchronously loading elasticsearch records (classic cursor thingy). It's
a combination of request demand and an asynchronous source of events :
def receive: Receive = {
case Request(n) if totalDemand > 0 && n > 0 && isActive =>
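(A generic sketch of the demand-driven ActorPublisher pattern being described here; Record and the Batch message are made-up stand-ins for the scroll results, not the original code.)

import akka.stream.actor.ActorPublisher
import akka.stream.actor.ActorPublisherMessage.{Cancel, Request}

final case class Record(json: String)            // hypothetical element type
final case class Batch(records: Vector[Record])  // hypothetical reply from the async scroll call

class ScrollPublisher extends ActorPublisher[Record] {
  private var buffer = Vector.empty[Record]

  def receive: Receive = {
    case Request(_)     => deliver()                        // totalDemand already includes n
    case Batch(records) => buffer ++= records; deliver()    // async results arrived
    case Cancel         => context.stop(self)
  }

  // Emit only while there is demand; leftovers wait for the next Request.
  private def deliver(): Unit =
    while (isActive && totalDemand > 0 && buffer.nonEmpty) {
      onNext(buffer.head)
      buffer = buffer.tail
    }
}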
Hi, I can't find any documentation or source code reference on using
Stashing with TypedActor. Would anybody please help me out
here? UnrestrictedStash is an Actor type.
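(Stash/UnrestrictedStash are mixed into plain untyped Actors, which is likely why nothing analogous is documented for TypedActor; a minimal sketch of the untyped usage for contrast:)

import akka.actor.{Actor, Stash}

// Buffer everything until an "open" message arrives, then replay the stash.
class Gate extends Actor with Stash {
  def receive: Receive = {
    case "open" =>
      unstashAll()
      context.become(open)
    case _ =>
      stash()
  }

  def open: Receive = {
    case msg => // handle normally
  }
}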
Hi,
how about these 2 deps? Haven't they been published, or has their name changed?
[warn] ::
[warn] :: UNRESOLVED DEPENDENCIES ::
[warn] ::
[warn] ::
I have this weird problem with probably broken pull demand in a stream of
this representation :
Source[Foo]() { implicit b =>
val actorSources = Array(100%CorrectlyImplementedActorPublishers)
b.add(Merge[Foo](actorSources.length))
for (i <- 0 until actorSources.length)
The total number of request is tracked in totalNumber.
You mean total number of request*s*? Like total number of requested
elements that may come in x requests ?
Anyway I changed it to :
case Request(n) if totalDemand > 0 && isActive =>
(1L to totalDemand).foldLeft(true) {
def receive: Receive = {
case Request(n) if totalDemand > 0 && isActive =>
(1L to Math.min(n, totalDemand)).foldLeft(true) {
What is the above line intending to do? Why are you taking a minimum of n
and totalDemand? Why are you not using totalDemand directly?
Well I
Hey,
I have a Source that merges ActorPublishers, this is a simplification :
Source[Foo]() { implicit b =>
val actorSources = myActorPublisherArray
b.add(Merge[Foo](actorSources.length))
for (i <- 0 until actorSources.length) {
b.addEdge(b.add(actorSources(i)),
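(A sketch of the same merge written against the later GraphDSL API; Foo and the publisher Props are placeholders, not the original code.)

import akka.NotUsed
import akka.actor.Props
import akka.stream.SourceShape
import akka.stream.scaladsl.{GraphDSL, Merge, Source}

final case class Foo(id: Int)   // hypothetical element type

def mergedFoos(publisherProps: Seq[Props]): Source[Foo, NotUsed] =
  Source.fromGraph(GraphDSL.create() { implicit b =>
    import GraphDSL.Implicits._
    // One Merge input per ActorPublisher-backed source.
    val merge = b.add(Merge[Foo](publisherProps.length))
    publisherProps.zipWithIndex.foreach { case (props, i) =>
      Source.actorPublisher[Foo](props) ~> merge.in(i)
    }
    SourceShape(merge.out)
  })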
The documentation says :
totalNumber :
Total number of requested elements from the stream subscriber.
This actor automatically keeps tracks of this amount based on
incoming request messages and outgoing `onNext`.
n :
n number of requested elements
As I see it, n represents
I got it, it wasn't related to akka-stream but to one of the services on a
particular host the stream is communicating with; its throughput
decreased from day to day by about 95%, and the entire stream depended on it,
so it decreased by about 95% too...
It's pretty hard to detect such state, it'd
Hi,
btw, can a Stage be stateful? Is reading/writing this in a PushPullStage
thread-safe?
var state : Map[A,Cancellable] = Map.empty
Thanks, Jakub
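(As far as I understand, a stage instance is driven by a single stream interpreter, so onPush/onPull never run concurrently and a plain var is fine, as long as each materialization gets its own instance and the state is not touched from other threads, e.g. from a scheduler callback behind that Cancellable. A sketch of a stateful stage in the old PushPullStage API, not the original code:)

import akka.stream.scaladsl.Flow
import akka.stream.stage.{Context, PushPullStage, SyncDirective}

// Drops elements that were already seen; `seen` is only accessed from the callbacks.
class Dedup[A] extends PushPullStage[A, A] {
  private var seen = Set.empty[A]

  override def onPush(elem: A, ctx: Context[A]): SyncDirective =
    if (seen(elem)) ctx.pull()
    else { seen += elem; ctx.push(elem) }

  override def onPull(ctx: Context[A]): SyncDirective = ctx.pull()
}

// A fresh instance per materialization, so state is never shared between streams:
val dedupInts = Flow[Int].transform(() => new Dedup[Int])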
On Friday, January 23, 2015 at 2:42:11 AM UTC+1, Frank Sauer wrote:
Thanks for the pointers Endre, I’ll explore those ideas.
Frank
On Jan 22,
Hi,
I cannot figure out how I would do something like :
bucketOpt match {
case None =>
*// ??? How to return just a dummy partial graph ???*
case Some(bucket) =>
Flow() { implicit b =>
import FlowGraph.Implicits._
val broadcast = b.add(Broadcast[Array[ResCtx]](2))
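(One way to get the "dummy" graph, independent of the partial-graph answer in the next message: the None branch can simply return the identity Flow. A sketch using the later GraphDSL API; Bucket, ResCtx and backupSink are illustrative stand-ins.)

import akka.NotUsed
import akka.stream.FlowShape
import akka.stream.scaladsl.{Broadcast, Flow, GraphDSL, Sink}

final case class Bucket(name: String)      // hypothetical
final case class ResCtx(payload: String)   // hypothetical

// A side-effecting sink for the backup branch; stubbed out here.
def backupSink(bucket: Bucket): Sink[Array[ResCtx], _] = Sink.ignore

def maybeBackup(bucketOpt: Option[Bucket]): Flow[Array[ResCtx], Array[ResCtx], NotUsed] =
  bucketOpt match {
    case None =>
      Flow[Array[ResCtx]]                                  // pass-through "dummy" graph
    case Some(bucket) =>
      Flow.fromGraph(GraphDSL.create() { implicit b =>
        import GraphDSL.Implicits._
        val broadcast = b.add(Broadcast[Array[ResCtx]](2))
        broadcast.out(1) ~> backupSink(bucket)             // backup branch
        FlowShape(broadcast.in, broadcast.out(0))          // main branch passes through
      })
  }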
I see,
I didn't know that a partial graph that exposes in/out ports can be used like
a common Flow :
in ~> partialGraph ~> out
There is only this in documentation :
partialGraph.runWith(Source(List(1)), Sink.head)
So I thought it was not possible... Thank you Endre !!!
Done, thank you https://github.com/akka/akka/issues/17614
Hi,
sometimes I need to ignore the output of a flow and use its input
instead further down the stream... There are 2 possible ways to do that I'm
aware of :
def bypass[I,O](flow: Flow[I,O]): Flow[I,I,Any] = Flow() { implicit b =>
import FlowGraph.Implicits._
val broadcast =
It still has a bug I cannot find: on complicated streams it pushes elements
(of type A) via:
ctx.emit(element)
but they don't arrive at the Sink that follows and the stream blocks
indefinitely... Any ideas?
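(An aside for later readers: in more recent Akka Streams versions this "run the flow only for its side effects" bypass can be written without a custom fan-in stage at all, which avoids this class of deadlock. A sketch, assuming 2.4+ APIs:)

import akka.NotUsed
import akka.stream.scaladsl.{Flow, Sink}

// Each element is broadcast to `flow` (whose output is discarded) and also
// passed through unchanged.
def bypass[I, O](flow: Flow[I, O, _]): Flow[I, I, NotUsed] =
  Flow[I].alsoTo(flow.to(Sink.ignore))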
FlexiMerge seems to be a valid solution to this problem :
class BypassMerge[A, B] extends FlexiMerge[A, FanInShape2[A, B, A]](new
FanInShape2("BypassMerge"), OperationAttributes.name("BypassMerge")) {
def createMergeLogic(p: PortT): MergeLogic[A] = new MergeLogic[A] {
val readA: State[A] =
It required a little bit of round-robin :-)
class RoundRobinBypassingMerge[A, B] extends FlexiMerge[A, FanInShape2[A, B,
A]](new FanInShape2("RRBMerge"), OperationAttributes.name("RRBMerge")) {
def createMergeLogic(p: PortT): MergeLogic[A] = new MergeLogic[A] {
val read1: State[A] =
Hi,
in other words :
def receive: Receive = {
case Request(demand) if totalDemand > 0 && demand > 0 && isActive =>
// can it happen that another Request message comes before this partial
function returns (while this one is being processed) ?
}
I have an asynchronous ActorProvider that is
Liska liska...@gmail.com wrote:
Good thinking :-) Blocking the scroll async method right away seems to
be ideal. Thank you
On Monday, May 25, 2015 at 11:55:50 AM UTC+2, √ wrote:
On Mon, May 25, 2015 at 11:45 AM, Jakub Liska liska...@gmail.com
wrote:
1) But if you share
at 12:14, Jakub Liska liska...@gmail.com wrote:
I'm also facing this problem with passing a context through the stream.
Let's say you have a stream like this :
download resource -> back it up -> parse it -> store errors -> process
data -> ... -> persist data -> sink
you kinda need
Hi, how are you guys tracking it, if your stream just starts hanging and
you cannot reproduce it because it only occurs in a complex stream?
I'm able to see a few hints thanks to
ActorMaterializerSettings#withDebugLogging(enable = true) and Log stage
that tells me what stream stages are
Samuel, how did you manage to enable this logging :
[DEBUG] [04/30/2015 22:36:01.921] [default-akka.actor.default-dispatcher-8]
[akka://default/system/deadLetterListener] stopped [DEBUG] [04/30/2015
22:36:01.922] [default-akka.actor.default-dispatcher-5]
I tried this already, but it doesn't seem to have any effect on logging...
Ah, sorry, it requires the logger to be set to debug level too. Thanks!
Hey Sam,
if you are extending the TestKitBase trait then this is the way to go :
TestActorRef[TestActor].underlyingActor.context
On Thursday, February 19, 2015 at 5:58:15 PM UTC+1, Sam Halliday wrote:
>
> On Thursday, 19 February 2015 16:54:47 UTC, Heiko Seeberger wrote:
>>
>> What about using an
Hey,
currently our lambda architecture is designed this way :
a tree-based hierarchy of View Materializer Actors, which is mostly done for
the sake of Actor supervision. Each Materializer Actor triggers an akka-stream that
builds the resulting View. This design works but it leads to complex actors
Hey,
I cannot figure out how I should upgrade this very simple 1.0 FlexiMerge
that just throws away elements of one of the input ports :
https://gist.github.com/l15k4/6d01261b5e579a02f4fd
From what I see in the GraphStage source code, one can only read from a
specific Input port, so that it
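(Roland's answer below points at passAlong() and the predefined eagerTerminateInput / ignoreTerminateInput handlers; the sketch here spells the same behaviour out with explicit handlers, forwarding in0 and draining/discarding in1. It is an illustration, not the gist's final code.)

import akka.stream.{Attributes, FanInShape2}
import akka.stream.stage.{GraphStage, GraphStageLogic, InHandler, OutHandler}

class DropSecondInput[A, B] extends GraphStage[FanInShape2[A, B, A]] {
  val shape = new FanInShape2[A, B, A]("DropSecondInput")

  def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) {
      // Forward everything from in0; completing in0 completes the stage (default behaviour).
      setHandler(shape.in0, new InHandler {
        def onPush(): Unit = push(shape.out, grab(shape.in0))
      })
      // Throw away everything from in1 and keep pulling; ignore its completion.
      setHandler(shape.in1, new InHandler {
        def onPush(): Unit = { grab(shape.in1); pull(shape.in1) }
        override def onUpstreamFinish(): Unit = ()
      })
      // Downstream demand only drives in0.
      setHandler(shape.out, new OutHandler {
        def onPull(): Unit = pull(shape.in0)
      })
      override def preStart(): Unit = pull(shape.in1)   // start draining in1 eagerly
    }
}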
Roland
>
> On 16 Jan 2016 at 16:40, Jakub Liska <liska...@gmail.com> wrote:
>
> Hey,
>
> I cannot figure out how should I upgrade this very simple 1.0 FlexiMerge
> that just throws away elements of one of the input ports :
>
> https://gist.github.com/l15k4/6d012
ant the link
> merely as an inspiration showing you how to use passAlong() and
> eagerTerminateInput / ignoreTerminateInput; I think your use-case calls for
> the latter.
>
> Regards,
>
> Roland
>
> On 16 Jan 2016 at 19:32, Jakub Liska <liska...@gmail.com
sAlong(in1, out, doFinish = true, doFail = true)
On Saturday, January 16, 2016 at 7:55:27 PM UTC+1, Jakub Liska wrote:
>
> You are right,
>
> this is the correct version :
> https://gist.github.com/l15k4/6d01261b5e579a02f4fd#gistcomment-1671753
>
> Thanks a lot Roland!
>
, Konrad Malawski wrote:
>
> Could you provide a sample snippet that we could help out with?
> Context helps to get quicker help.
>
> --
> Konrad `ktoso` Malawski
> Akka <http://akka.io> @ Lightbend <http://lightbend.com>
>
> On 20 July 2016 at 16:03:30, Jak
hey,
I hit a dead end with a combination of WebSockets and ActorPublisher, because
TextMessage expects a Source and one can obtain the underlying ActorRef
from an ActorPublisher only by materializing it :
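(One workaround sometimes used for this: keep the ActorPublisher inside the Source and capture the ActorRef as a side effect of materialization via mapMaterializedValue. A sketch only; WsPublisher.props and register are hypothetical.)

import akka.NotUsed
import akka.actor.{ActorRef, Props}
import akka.http.scaladsl.model.ws.TextMessage
import akka.stream.scaladsl.Source

object WsPublisher { def props: Props = ??? }   // hypothetical ActorPublisher[String]
def register(ref: ActorRef): Unit = ()          // hypothetical registry hook

val wsSource: Source[TextMessage, NotUsed] =
  Source.actorPublisher[String](WsPublisher.props)
    .mapMaterializedValue { (ref: ActorRef) => register(ref); NotUsed }
    .map(TextMessage(_))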
to
> actually implement a *correct* Publisher (even with ActorPublisher's help).
>
> Happy hakking.
>
> --
> Konrad `ktoso` Malawski
> Akka <http://akka.io> @ Lightbend <http://lightbend.com>
>
> On 20 July 2016 at 16:19:58, Jakub Liska (liska...@gmail.com)
s.log("") has actually never worked for me over the past years. I'm using streams
mostly outside Actors and I have my slf4j + impl on the classpath and set up
correctly
act same infra as actor logging.
>
> What do you mean by not working? Were you using the slf4j adapter
> (dependency + settings)?
>
> --
> Konrad `ktoso` Malawski
> Akka <http://akka.io> @ Lightbend <http://lightbend.com>
>
> On 21 February 2017 at 21:43:00,
> On 21 February 2017 at 22:12:16, Jakub Liska (liska.ja...@gmail.com)
> wrote:
>
> The code underneath is :
>
> https://github.com/akka/akka/blob/master/akka-stream/src/main/scala/akka/stream/impl/fusing/Ops.scala#L1306
>
> I use : debugLogging = true
I'm trying to figure out why this is hanging/idling indefinitely :
Source.fromIterator(() => Iterator.from(0).take(500).map(_ -> 1))
.groupBy(Int.MaxValue, _._1)
.mergeSubstreamsWithParallelism(256)
.runWith(Sink.seq)
This is the only way to avoid instantiating ridiculous amounts of
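(A likely reason for the hang: groupBy substreams only complete when the whole upstream completes, so with 500 distinct keys and mergeSubstreamsWithParallelism(256) the merge stops accepting new substreams after 256 open ones, the groupBy backpressures, and nothing ever finishes. When the goal is just "collect values per key", a fold needs no substreams at all; a sketch, assuming an implicit materializer is in scope:)

import akka.stream.scaladsl.Source

// Future[Map[Int, Vector[Int]]]; no groupBy, no substream limit to hit.
val grouped =
  Source.fromIterator(() => Iterator.from(0).take(500).map(_ -> 1))
    .runFold(Map.empty[Int, Vector[Int]]) { case (acc, (k, v)) =>
      acc.updated(k, acc.getOrElse(k, Vector.empty) :+ v)
    }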
Hey,
I'm reimplementing a few Spark batch jobs as akka streams.
I got stuck at the last one that takes two PairRdd[Key,Value] and cogroups
them by Key
which returns an Rdd[Key,Seq[Value]] and then it processes Seq[Value] for
each of the unique Keys that are present in both original PairRdds,
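(Not from the thread: for inputs that fit in memory, a cogroup can be sketched by folding each side into a Map and joining on the keys present in both; the key/value types and the in-memory assumption are mine, a real Spark cogroup does not require that.)

import scala.concurrent.{ExecutionContext, Future}
import akka.stream.Materializer
import akka.stream.scaladsl.Source

def cogroup[K, V](left: Source[(K, V), _], right: Source[(K, V), _])(
    implicit mat: Materializer, ec: ExecutionContext): Future[Map[K, (Seq[V], Seq[V])]] = {

  // Collect one side into key -> all values seen for that key.
  def collect(s: Source[(K, V), _]): Future[Map[K, Vector[V]]] =
    s.runFold(Map.empty[K, Vector[V]]) { case (m, (k, v)) =>
      m.updated(k, m.getOrElse(k, Vector.empty) :+ v)
    }

  for {
    l <- collect(left)
    r <- collect(right)
  } yield (l.keySet intersect r.keySet)
    .map(k => (k, (l(k): Seq[V], r(k): Seq[V])))
    .toMap
}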
Hey,
what is the best practice in this hypothetical scenario :
1) Say you have a time series pipeline that started at 2014 and created
persistent state on S3 and other DB systems
2) You can introspect these storages and know what partitions already
exist in all of them
3) The persistent
he events/snapshot in receiveRecover and act on that when
> RecoveryCompleted.
>
> /Patrik
>
> On Fri, 14 Oct 2016 at 22:11, Jakub Liska <liska...@gmail.com> wrote:
>
>> Hey,
>>
>> what is the best practice to do in this hypothetical scenario :
>> 1)
Have you guys done any benchmark of groupBy capacity, i.e. what count
of distinct elements (and therefore substreams) can be handled with what
resources?
Or a more general version of the benchmark, i.e. one showing the performance
of a number of substreams running in parallel?
If not, do you
100 000 seems to be the maximum; beyond that, no matter how much memory and
processor power it has, it blows up with:
Substream Source has not been materialized in 5000 milliseconds.
which is a consequence of GC choking, I guess
Hey,
having an operation that heavily utilizes CPU and RAM, nothing else (no
IO), but that is the main bottleneck, it never pays off to use
`stream.mapAsync` for it unless you batch operations by something like 10
000, so I started using stuff like :
.map(operation).withAttributes(
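(A sketch of the batching idea being described; the batch size, operation and the dedicated CPU-bound ExecutionContext are placeholders, and an alternative is an .async boundary / ActorAttributes.dispatcher(...) behind the withAttributes call started above.)

import scala.concurrent.{ExecutionContext, Future}
import akka.stream.scaladsl.Source

def batched[A, B, M](source: Source[A, M], operation: A => B)(
    implicit cpuBoundEc: ExecutionContext): Source[B, M] =
  source
    .grouped(10000)                                               // amortize per-element overhead
    .mapAsyncUnordered(Runtime.getRuntime.availableProcessors()) { batch =>
      Future(batch.map(operation).toVector)                       // one batch per core at a time
    }
    .mapConcat(identity)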
On Tuesday, September 19, 2017 at 9:53:36 PM UTC+2, Patrik Nordwall wrote:
>
> You can try and compare Balance, Partition and PartitionHub.
>
Is the fundamental difference between GroupBy and Partition/PartitionHub
the fact that the latter gives you more fine-grained control over the
internal
Hey, I think that the :
ERROR: akka.remote.transport.Transport$InvalidAssociationException: The
remote system terminated the association because it is shutting down.
should not appear in the case of a graceful remote system shutdown. Imagine that
you have a system that is forking JVMs with remote
Hey,
actor system boot time matters in distributed-systems integration testing,
otherwise spinning up a system 100 times in a test suite could
take minutes. I currently cannot use multi-JVM testing, so I'm wondering
whether there is a way to improve system boot time.
Currently forking a JVM
>
> Why do you need to start new systems?
>
For executing tasks in a forked JVM. In order to establish communication
between 2 processes, it needs TCP, as retrieving results from a different
process is hard without it.
Those tasks are rather long-running, so an 800 ms boot time doesn't matter at