On Thu, Apr 24, 2014 at 10:24 PM, √iktor Ҡlang <[email protected]> wrote:
> On Thu, Apr 24, 2014 at 5:52 PM, Rüdiger Klaehn <[email protected]> wrote:
>> On Thu, Apr 24, 2014 at 5:13 PM, √iktor Ҡlang <[email protected]> wrote:
>>> Because there are many potential implementations, we explicitly opted
>>> out of flatMap. For now.
>>
>> Probably a good decision. Otherwise people will complain when you remove
>> it again. You can always add your own via an implicit. Maybe have a
>> canonical flatMap available that has to be implicitly imported.
>
> So the "fundamental" ones are "concat" (all from the first, all from the
> second, …), "merge" (any available) and "join" (one from the first, then
> one from the second, …)—so a flatMap definition could in theory require
> an implicit flatMap strategy (concat | merge | join).

I would say that concat is the most fundamental. But I think there is
nothing wrong with having the methods available under the above names.
Then people can write their own implicit classes to get just the flatMap
behavior they want, if they want to use for comprehensions. Something
like this:

    object FlowForComprehension {

      implicit class FlatMapIsConcat[T](private val flow: Flow[T]) extends AnyVal {
        def flatMap[U](f: T => Flow[U]): Flow[U] = flow.concat(f)
      }

      implicit class FlatMapIsMerge[T](private val flow: Flow[T]) extends AnyVal {
        def flatMap[U](f: T => Flow[U]): Flow[U] = flow.merge(f, mergeSettings)
      }

      ...
    }

    import FlowForComprehension.FlatMapIsConcat

    for(...

If you want to use different meanings for flatMap within one big for
comprehension, then I guess you're screwed. But that is probably not such
a good idea for code readability anyway.

>> I just wanted to confirm that there is no fundamental limitation.
>
> Well, there are practical limitations. For instance, an infinite stream
> flatMapped with infinite streams.

With concat, you will never see anything of the second stream, but you
don't need to take any special precautions in the implementation.
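To make the concat and join semantics concrete without assuming anything about the Flow API, here is a small sketch using plain lazy streams from the standard library (LazyList in current Scala). The interleave helper is just an illustration of the "join" idea, not an existing combinator:

```scala
// Illustration only: concat vs. join semantics on infinite lazy streams,
// using LazyList as a stand-in for the Flow API under discussion.
object FlatMapStrategies extends App {
  val first  = LazyList.from(0)   // infinite: 0, 1, 2, ...
  val second = LazyList.from(100) // infinite: 100, 101, 102, ...

  // concat: all of the first stream, then all of the second. Because the
  // first stream is infinite, the second is never reached.
  val concatenated = first #::: second
  println(concatenated.take(5).toList) // List(0, 1, 2, 3, 4)

  // join: one element from the first, then one from the second, alternating.
  def interleave[A](a: LazyList[A], b: LazyList[A]): LazyList[A] =
    if (a.isEmpty) b else a.head #:: interleave(b, a.tail)

  println(interleave(first, second).take(6).toList) // List(0, 100, 1, 101, 2, 102)
}
```

Laziness is what makes this safe: both streams are infinite, yet take(n) only forces the first n elements.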
With join, you will only ever see the first element of each stream. And a
naive implementation would subscribe to every flow and eventually run out
of memory, so there should probably be a size limit.

With merge, I guess it depends on the settings. You want to merge from
only a finite number of flows at the same time, otherwise the call will
try to subscribe to an infinite number of flows and never return...

All in all, nothing really surprising for people familiar with the
Streams from scala.collection, except maybe the nondeterminism of
operations like merge. But that is to be expected, since "Streams are not
Collections".

Cheers,

Rüdiger
