No, each persistent actor will only replay its own events. The
persistenceId is part of the key in the cassandra table.
On Fri, 22 Apr 2016 at 16:53, Yan Pei wrote:
> Patrik,
>
> Thank you for your response.
> My understanding is all persistent actors will replay all messages from
> Cassandra t
Patrik,
Thank you for your response.
My understanding is that all persistent actors replay all messages from the
Cassandra table since the last snapshot. For example, if there are 2 messages
and 1,000 persistent actors, there will be 2,000 messages in memory
once the application is started. D
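Patrik's answer above can be sketched in plain Scala (a toy model of the journal, not the real akka-persistence-cassandra API; all names here are illustrative): because the journal is keyed by persistenceId, each actor's recovery touches only its own partition of the events, so 1,000 actors with 2 events each hold 2 events apiece, not 2,000.

```scala
// Toy model of a journal keyed by persistenceId (illustrative names,
// not the real akka-persistence API).
object JournalSketch extends App {
  // journal: persistenceId -> events written by that actor
  val journal: Map[String, List[String]] = Map(
    "actor-1" -> List("evt-a", "evt-b"),
    "actor-2" -> List("evt-c", "evt-d")
  )

  // Recovery reads only the recovering actor's own events.
  def replay(persistenceId: String): List[String] =
    journal.getOrElse(persistenceId, Nil)

  // Each actor ends up with 2 events after recovery,
  // not 2 * numberOfActors.
  assert(replay("actor-1") == List("evt-a", "evt-b"))
  assert(replay("actor-2") == List("evt-c", "evt-d"))
  println(replay("actor-1").size) // 2
}
```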
Content-Type is special – it should come with/from the data you're completing
with – please don't hardcode it like that (it won't work, and that's on
purpose).
Here's how you'd use JSON in a real world app:
object TestServer extends App
with SprayJsonSupport with DefaultJsonProtocol {
impl
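The point that the content type travels with the marshalled data, rather than being set by hand, can be sketched in plain Scala (a toy model only; this is not Akka HTTP's real `Marshaller` API, and all names are assumptions made for illustration):

```scala
// Toy model: a marshaller pairs a render function with its media type,
// so the Content-Type is derived from the marshaller, never hardcoded.
object MarshalSketch extends App {
  final case class Marshalled(contentType: String, body: String)

  final case class Marshaller[A](contentType: String)(render: A => String) {
    def marshal(a: A): Marshalled = Marshalled(contentType, render(a))
  }

  // A hypothetical JSON marshaller for a simple key/value pair.
  val json = Marshaller[(String, Int)]("application/json") {
    case (k, v) => s"""{"$k": $v}"""
  }

  val response = json.marshal("height" -> 42)
  assert(response.contentType == "application/json")
  println(response.body) // {"height": 42}
}
```

This is why hardcoding the header in the route is rejected: the header would no longer agree with what the marshaller actually produced.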
Hi!
I have a simple route, that should respond (as I expect) with
`application/json` media type.
def height: Route = {
  path("height") {
    //
    respondWithHeaders(`Content-Type`(MediaTypes.`application/json`))(complete("???"))
    // also doesn't work
    respondWithHeaders(RawHeader("Con
You should use the machine's own IP address in the hostname property.
/Patrik
On Tue, Apr 19, 2016 at 3:13 AM, Scalaian wrote:
> By following the Akka documents, I can start two actors (front-end and
> back-end) on the same machine, and they can talk to each other. However,
> when I tried to deploy
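Concretely, Patrik's advice is the remoting `hostname` setting in each node's configuration (a minimal sketch for Akka 2.4-era remoting; the IP, port, and layout here are assumptions, not taken from the thread):

```hocon
akka {
  actor.provider = "akka.remote.RemoteActorRefProvider"
  remote.netty.tcp {
    hostname = "10.0.0.12"  # this machine's own reachable IP, not localhost
    port     = 2552
  }
}
```

Each machine sets its own address here, so remote nodes can reach it back.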
On Thu, Apr 21, 2016 at 11:53 PM, Yan Pei wrote:
> Hi All,
>
>    If I have lots of persistent actors in the ActorSystem, and during
> recovery each actor instance reads the same set of data into
> memory, could that be an issue? I'm concerned it might cause an
> OutOfMemory exceptio
> On 22 Apr 2016, at 12:11, Edmondo Porcu wrote:
>
> Shouldn't it be possible to lift the unmarshaller with an implicit conversion?
How? Do you want to block?
Heiko
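Heiko's objection is that an implicit conversion would have to produce a plain value synchronously, i.e. block on the Future an unmarshaller returns. The non-blocking alternative is to map over the result instead, sketched here in plain Scala (the `Unmarshal` type alias and function names are illustrative, not Akka HTTP's real API):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object LiftSketch extends App {
  // Toy stand-in for an unmarshaller: an async A => Future[B].
  type Unmarshal[A, B] = A => Future[B]

  val stringToInt: Unmarshal[String, Int] = s => Future(s.trim.toInt)

  // "Lifting" by mapping over the Future: no blocking involved.
  def lift[A, B, C](u: Unmarshal[A, B])(f: B => C): Unmarshal[A, C] =
    a => u(a).map(f)

  val stringToDouble: Unmarshal[String, Double] = lift(stringToInt)(_.toDouble)

  // An implicit conversion to a plain value would instead need
  // Await.result, i.e. it would block -- which is the objection above.
  println(Await.result(stringToDouble(" 42 "), 2.seconds)) // 42.0
}
```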
Shouldn't it be possible to lift the unmarshaller with an implicit conversion?
--
>> Read the docs: http://akka.io/docs/
>> Check the FAQ:
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>> Search the archives: https://groups.google.com/gro
Well, I kinda get the "level of abstraction" but what about a cluster
node failure running my Akka streams stages?
They are strictly local – how would you deal with an Iterator's iteration
blowing up?
Having that said, we do actually work on "Acknowledged Sources / Sinks", the
first of which
Thanks Konrad.
Well, I kinda get the "level of abstraction" but what about a cluster
node failure running my Akka streams stages? How would you cope with
it? Now, is my only option to use Kafka and some kind of partition and
offset syncing? Please elaborate on that!
Are you suggesting that eve
In terms of level of abstraction Akka Streams are "like Iterator" ;-)
They're not as high-level as you might expect them to be, it seems to me.
They're a building block; using them, together with Sinks/Sources to Actors,
you can integrate the two.
The root cause of the rift there is that Actor semantics do no
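The "like Iterator" analogy can be made concrete: an Iterator is strictly local, and a failure mid-iteration is an exception you handle in-process; there is no built-in notion of resuming on another node. A plain Scala sketch of local failure handling (the data is made up for illustration):

```scala
import scala.util.{Success, Try}

object IteratorSketch extends App {
  // A local "stream": a parsing stage that can blow up mid-iteration.
  val input = Iterator("1", "2", "oops", "4")

  // As in a stream stage, failure handling is local: wrap each step in Try.
  val parsed: List[Try[Int]] = input.map(s => Try(s.toInt)).toList

  val ok = parsed.collect { case Success(n) => n }
  val failed = parsed.count(_.isFailure)

  println(s"parsed=$ok failures=$failed") // parsed=List(1, 2, 4) failures=1
}
```

Surviving the loss of the node running the iteration is outside this model, which is the gap the question is probing.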
No idea what's bad from just that snippet.
That template is maintained by The Iterators, so you should probably tell them
it's broken in their repo:
https://github.com/theiterators/akka-http-microservice by opening an issue,
thanks!
--
Konrad `ktoso` Malawski
Akka @ Lightbend
On 22 April 2016
Error running akka-http-microservice Activator template
Unable to run the example template akka-http-microservice from the Activator.
The project builds, but on running I am unable to access the URL
http://localhost:9000/ip/8.8.8.8, getting 'Internal Server Error'.
Error on running service: [ERROR] [04/21/201
Hi,
I'd like to better understand the relationship of Akka streams and Akka
cluster including persistency. What I mean by that is how would you
make an Akka streams resilient in a Akka cluster setup? How would I use
persistent actors with Akka streams? Is the use of `Source.actorRef`
and `Sink.
val src = someSource.map(processJSON).alsoTo(somewhereWhichWorksFine)
--
Cheers,
√
On Apr 22, 2016 9:14 AM, "clca" wrote:
> I'm struggling to figure out how to build a stream that does the following:
>
> Source ~> process (ex. XML2Json) ~> Broadcast[String](2) then out(0) sends
> data somewhere
I'm struggling to figure out how to build a stream that does the following:
Source ~> process (ex. XML2Json) ~> Broadcast[String](2) then out(0) sends
data somewhere (this is working fine)
out(1) needs to become a Source for a WebSocket endpoint. In this case I
can set the buffer to keep only t
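The `alsoTo` suggestion can be modeled in plain Scala (a toy list-based sketch, not akka-stream's real operator; `processJSON` here is a made-up stand-in for the XML2Json stage): each element is handed to a side branch and also continues downstream unchanged.

```scala
object AlsoToSketch extends App {
  import scala.collection.mutable.ListBuffer

  // Side branch, standing in for the "send data somewhere" sink.
  val sideBranch = ListBuffer.empty[String]

  // alsoTo semantics on a List: run the side effect, pass the element on.
  def alsoTo[A](xs: List[A])(side: A => Unit): List[A] =
    xs.map { a => side(a); a }

  // Hypothetical stand-in for the XML2Json processing stage.
  val processJSON: String => String = xml => s"""{"tag": "$xml"}"""

  val out = alsoTo(List("<a/>", "<b/>").map(processJSON))(sideBranch += _)

  assert(out == sideBranch.toList) // both branches see the same elements
  println(out)
}
```

In the real stream, `out` would continue on to feed the WebSocket endpoint while the side branch keeps working as before.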