[akka-user] Does Akka suit parallelism?

2016-07-26 Thread Qr Wang
Hello everyone,
I know Akka is a concurrency framework. But does it fit parallel programming? Or just use 

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


[akka-user] Re: akka http dealing with chunked content while already inside an existing stream

2016-07-26 Thread Jun
I worked towards option 2, making a nested Source[Source[_]] (I miss using 
Scala). This works, but I get a dead letter and I am unsure what is 
causing it. I would appreciate some feedback on whether I am heading down 
the correct path, or whether I am facing a painful path and might as well 
shoot my own foot.

final Source, NotUsed> result =
  Source.fromGraph(
    GraphDSL.create(builder -> {
      final UniformFanOutShape bcast = builder.add(Broadcast.create(2));
      final FanInShape2, Message, Pair, Message>> zip = builder.add(Zip.create());

      final Outlet>> source = builder.add(kafkaConsumer).out();
      builder.from(source).via(builder.add(extractMessage))
        .viaFanOut(bcast);

      builder.from(bcast).via(builder.add(requestDataFromRestApi)).toInlet(zip.in0());
      builder.from(bcast).via(builder.add(passMessage)).toInlet(zip.in1());

      final FlowShape, Message>, Source> shape = builder.add(toProducerRecord);
      builder.from(zip.out()).via(shape);

      return SourceShape.of(shape.out());
    }));


result.runForeach(source -> {
  source.map(message -> {
    ProducerRecord record = new ProducerRecord<>(KafkaConnector.topic, message.correlationId(), message);
    logger.debug("sending message" + record.value().toString());
    return record;
  }).to(Producer.plainSink(KafkaConnector.producerSettings)).run(KafkaConnector.materializer);
}, KafkaConnector.materializer);




[akka-user] akka http dealing with chunked content while already inside an existing stream

2016-07-26 Thread Jun
Hi,

I have been trying to use Akka Streams to process our Kafka-based 
microservices. One microservice gets further data from an older REST-based 
service. I have put some Java code below (sorry, not doing Scala for 
this code base: I am working with another developer, and learning a new 
language, architecture, etc. would be overwhelming). Below is 
simplified code that gets a message from Kafka, broadcasts/fans out, one path 
goes to Akka HTTP, then zips and creates a new Kafka record that is sent to 
the Kafka producer. This works fine if the REST-based service sends a 
Content-Length. Unfortunately it's an older existing service: it just spits 
out data, and even though it's not really streaming data, all HTTP responses 
are chunked. So the code below just times out when I force a Strict entity. 
It works fine for a REST API that sends a Content-Length.

I need some help in deciding where to go:

1. Fix the old service to send a Content-Length. Not ideal, as it's being 
used by other systems in production, and there might be downstream services 
that count on the old chunked behaviour.
2. Change the flow so it passes the Source down the stream, and 
consume it when needed. I would like to avoid this, as it would be nice to 
keep requestData as a Flow[Message, String], so we can just 
plug in other flows.
3. Somehow force the StrictEntity not to time out; I don't know how to do 
this.

Are there other options? Is option 2 the best thing to do? That way 
it's clear that requestData is really a Source of data, rather than hiding it. 
But does this mean I actually have two graphs instead of one? One graph gets 
the message from Kafka and produces a source; the other takes the source 
and pushes messages to Kafka.

Thanks; some code snippets are below in case the above is confusing.

Jun



final RunnableGraph result =
  RunnableGraph.fromGraph(
    GraphDSL.create(builder -> {
      final UniformFanOutShape bcast = builder.add(Broadcast.create(2));
      final FanInShape2> zip = builder.add(Zip.create());

      final Outlet>> source = builder.add(kafkaConsumer).out();
      builder.from(source).via(builder.add(extractMessage))
        .viaFanOut(bcast);

      builder.from(bcast).via(builder.add(requestData)).toInlet(zip.in0());
      builder.from(bcast).via(builder.add(passMessage)).toInlet(zip.in1());

      final SinkShape> sink = builder.add(kafkaProducer);
      builder.from(zip.out()).via(builder.add(toProducerRecord)).to(sink);

      return ClosedShape.getInstance();
    }));



final Flow requestDataFromRestApi =
  Flow.create().map(message -> processMessage(message))
      .map(instrument -> HttpRequest.create("/someapi/" + instrument))
      .via(connectionFlow);



final Flow> connectionFlow =
  Http.get(system).outgoingConnection(ConnectHttp.toHost("localhost", 9007))
    .map(response -> {
      final Source dataBytes = response.entity().getDataBytes();

      final Sink> sink =
        Sink.fold("", (body, bytes) -> {
          return body + bytes.decodeString("UTF-8");
        });

      final CompletionStage completionStage =
        dataBytes.runWith(sink, KafkaConnector.materializer);
      final String body =
        completionStage.toCompletableFuture().get(5, TimeUnit.SECONDS);

      return body;
    });
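
The connectionFlow above folds each chunked response body into a single String and then blocks on the resulting future with a 5-second timeout. Stripped of the Akka types (which the archive has mangled), the aggregation amounts to the plain-JDK sketch below; the `ChunkFold` class and the sample chunks are illustrative, not part of the original code:

```java
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class ChunkFold {
    // Equivalent of Sink.fold("", (body, bytes) -> body + bytes.decodeString("UTF-8")):
    // concatenate the UTF-8 decoding of each chunk into one body string.
    static String foldChunks(List<byte[]> chunks) {
        StringBuilder body = new StringBuilder();
        for (byte[] chunk : chunks) {
            body.append(new String(chunk, StandardCharsets.UTF_8));
        }
        return body.toString();
    }

    public static void main(String[] args) throws Exception {
        List<byte[]> chunks = List.of(
            "hello ".getBytes(StandardCharsets.UTF_8),
            "world".getBytes(StandardCharsets.UTF_8));

        // Mirror completionStage.toCompletableFuture().get(5, TimeUnit.SECONDS):
        // run the fold asynchronously, then block with a timeout.
        CompletableFuture<String> bodyFuture =
            CompletableFuture.supplyAsync(() -> foldChunks(chunks));
        String body = bodyFuture.get(5, TimeUnit.SECONDS);
        System.out.println(body);  // prints "hello world"
    }
}
```

Note that the fold only completes once the chunk stream completes; if the server keeps the chunked entity open, the blocking get hits its timeout, which is consistent with the behaviour described above.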



[akka-user] Re: Tcp TLS example with client auth

2016-07-26 Thread Vinay Gajjala
Hello Magnus

This is not an answer to your problem, but I am trying to get an answer to 
mine. I am working on a POC developing a TCP server which listens for 
device traffic. I was able to implement the server using Akka IO (v2.3.14) 
and am trying to figure out how to configure TLS for this TCP server. I read 
the documentation and online posts but could not figure it out. Maybe I am 
missing the obvious.

I see your question is around TCP+TLS, so I decided to ask.

Hoping you have something that might help me.

Thanks
Vinay

On Friday, May 13, 2016 at 3:10:29 AM UTC-5, Magnus Andersson wrote:
>
> Hi
>
> I have a TCP+TLS pipeline with Akka Streams that is not working. It 
> does work when creating a test and wrapping a client TLS stage; I can read 
> the client certificate without problems. But when I run it and try to 
> connect with client software using the same keys/certificates for openssl, 
> I get connection refused. If I remove the TLS wrapper and turn off TLS on 
> the client side, the TCP server works fine.
>
> Since the TLS stage has no end-user documentation today, I'm hoping 
> someone has built a (simple) sample app that uses TLS over TCP and doesn't 
> mind sharing it? Preferably something that uses client certificates; I am 
> probably just getting some config wrong.
>
> Regards
> Magnus Andersson
>
>
>



[akka-user] Cannot figure out why ActorPublisher stopped?

2016-07-26 Thread Guofeng Zhang
Hi,

I tried a sample that uses range requests in an ActorPublisher to download a
large file. But only one range is downloaded: the publisher is stopped while
another range request is downloading data.

I set the akka logger to "DEBUG". I also printed messages when the publisher
received messages, but I still cannot tell what stopped the publisher.

This is the source code (very short)

https://github.com/guofengzh/AkkaStreamsChunkedDownloader/blob/master/src/main/scala/com/jasonmartens/s3downloader/ChunkPublisher.scala

This is the output:

chunkList: List(RequestChunk(1,0,1024), RequestChunk(2,1024,1024),
RequestChunk(3,2048,1024), RequestChunk(4,3072,120))
Request
requestChunks
requesting chunks: totalDemand: 16, inFlightDemand: 0
Downloading chunk 1
ChunkDownloaded
chunkComplete: 1:true
ChunkData
chunkData 1 succeeded
emitChunks - Active: true
emitting chunk 1
requestChunks
requesting chunks: totalDemand: 15, inFlightDemand: 0
Downloading chunk 2
Stopped
Got 1024 bytes
[ERROR] [07/27/2016 09:37:30.508] [default-akka.actor.default-dispatcher-9]
[akka.actor.ActorSystemImpl(default)] Outgoing request stream error
(akka.stream.AbruptTerminationException)

Is the publisher implemented incorrectly, or has it timed out?

Your help is appreciated.

Thanks

Guofeng



[akka-user] Akka in a home automation system

2016-07-26 Thread Michael Hallock
Greetings, hope I'm asking in the right venue here.

I'm playing around with writing a home automation system, mostly as a 
learning experience, and partially because I don't really like anything 
that exists right now in that space. I'm targeting .NET (C#) just because 
it's the language I know best.
And I'm completely new to the Actor model (though I've been doing plenty of 
research).

I'm strongly considering Akka.NET as the platform to use as a backbone 
for everything, but I'm trying to decide if it's overkill, since I will 
likely never use remoting, etc. I'm pretty much just looking at it 
from an architectural standpoint, for the following reasons:

1. I need to support concurrent event publishing from all of the devices 
that are monitored (e.g., a long-lived thread per device / device hub 
listening for changes).
2. I need to support multiple clients requesting changes to the system at 
the same time (concurrent message processing, though I'm not talking 
hundreds of clients, so the single-threaded nature of an Actor is ok here).
3. I need to support an internal tree structure of devices and their 
current state (i.e. a single threaded Actor that listens for all device 
changes, and modifies an internal state, and then possibly publishes a 
"DeviceStateUpdated" event for clients?).

So something like

ActorSystem
- HarmonyHubActor (Example device hub, which would receive GENERIC messages 
for things connected through the Harmony Hub, and parse it for the specific 
device and state to change, etc.)
- ZWaveActor (Example device hub, which would receive GENERIC messages for 
things connected through ZWave controller, and parse it for the specific 
device and state to change, etc.)
- DeviceStateActor (Holds the system's current state internally, responds 
to broadcasts about state changes, publishes events for ClientActor to 
publish out to clients through web sockets, etc.)
- CommandActor (for handling commands from an API, etc., to change device 
states)
- ClientActor (Handles requests from an API, or a SignalR front end, etc., 
for communication with clients)

And really that's about it at this early stage. The only other caveat is 
that I want to make the inclusion of device-hub actors dynamic by 
configuration (a config file for now, most likely, with dynamic config stored 
in SQLite or similar later), and most likely allow
for inclusion of new "modules" as they are written that could then be 
configured (i.e., I add a "NestThermostat" module DLL, it gets loaded, and 
now I can add "Nest" device hubs in my config, etc.)

I'm really just looking for someone to tell me whether Akka.NET would be 
overkill for this, or whether I'm barking up the right tree. Otherwise I 
think I'd have to do a lot of thread management and ConcurrentQueue work, 
and it really seemed like Akka / actors might be a 
good alternative. But there are a lot of pieces of Akka that I will almost 
certainly never touch, like remoting, so I was just worried it might be too 
much.

Thoughts?



[akka-user] Re: Unit test akka persistence against the inmem journal

2016-07-26 Thread Paweł Kaczor
This could be helpful: 
https://github.com/pawelkaczor/akka-ddd/blob/master/akka-ddd-test/src/test/scala/pl/newicom//test/dummy/DummyOfficeSpec.scala



Re: [akka-user] How to disable TLSv1 when I configure "akka.remote.netty.ssl.security.protocol" property as TLSv1.2.

2016-07-26 Thread Will Sargent
You can set the "jdk.tls.client.protocols" system property to set the
enabled protocols for the JVM -- this feature is only available in JDK 1.8,
though.

https://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html#SunJSSE_Protocols

Otherwise, you would have to set the security
property jdk.tls.disabledAlgorithms to add TLSv1 to the disabled list.
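
A sketch of both options; the two property names are standard JSSE settings (see the Oracle link above), but the `TlsConfig` class and the exact value strings here are illustrative, so check your JDK version's JSSE guide before relying on them:

```java
import java.security.Security;

public class TlsConfig {
    public static void main(String[] args) {
        // Option 1: restrict client-side TLS protocol versions JVM-wide.
        // Recognised by SunJSSE from JDK 1.8 onwards.
        System.setProperty("jdk.tls.client.protocols", "TLSv1.2");

        // Option 2: extend the disabled-algorithms security property so
        // TLSv1 (and TLSv1.1) are rejected on both client and server side.
        String disabled = Security.getProperty("jdk.tls.disabledAlgorithms");
        Security.setProperty("jdk.tls.disabledAlgorithms",
            (disabled == null || disabled.isEmpty())
                ? "TLSv1, TLSv1.1"
                : disabled + ", TLSv1, TLSv1.1");

        System.out.println(System.getProperty("jdk.tls.client.protocols"));
    }
}
```

Both must run before the first TLS handshake is attempted (or be passed with -D on the command line for the system property), since JSSE reads them when the SSL context is initialised.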


Will Sargent
Engineer, Lightbend, Inc.


On Tue, Jul 26, 2016 at 1:12 AM,  wrote:

> Configure file as follow:
> # Protocol to use for SSL encryption, choose from:
> # Java 6 & 7:
> #   'SSLv3', 'TLSv1'
> # Java 7:
> #   'TLSv1.1', 'TLSv1.2'
> protocol = "TLSv1.2"
>
>
> When I use nmap to scan, I find that TLSv1 is enabled:
> D:\softwares\nmap-7.12>nmap -p  --script=ssl* x.x.x.x --unprivileged
>
>
> Starting Nmap 7.12 ( https://nmap.org ) at 2016-07-26 15:33
> Nmap scan report for x.x.x.x
> Host is up (1.0s latency).
> PORT STATE  SERVICE
> /tcp open unknown
> | ssl-enum-ciphers:
> |  TLSv1.0:
> |ciphers:
> |  TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) -A
> |compressors:
> |  NULL
> |cipher preference: indeterminate
> |cipher preference error: Too few ciphers supported
> |  TLSv1.1:
> |ciphers:
> |  TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) -A
> |compressors:
> |  NULL
> |cipher preference: indeterminate
> |cipher preference error: Too few ciphers supported
> |  TLSv1.2:
> |ciphers:
> |  TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) -A
> |compressors:
> |  NULL
> |cipher preference: indeterminate
> |cipher preference error: Too few ciphers supported
> |_ least strength: A
> MAC Address: xx:xx:xx:xx:xx:xx
>
>
> Nmap done: 1 IP address (1 host up) scanned in 3.88 seconds
>
>
> D:\softwares\nmap-7.12>
>
> I want to disable TLSv1. Any method?
> Thank you.
>
>



Re: [akka-user] High-level Server-Side API Minimal Example not working

2016-07-26 Thread Konrad Malawski
Hi there,
I just copy-pasted the file and have to say "it works here".
Are you sure you have the dependency on akka-http-experimental in your build?

It works because ContentTypes is part of the akka.http.scaladsl.model package,
and we've imported all of its symbols at the top of the file:

import akka.http.scaladsl.model._


Hope this helps.

-- 
Konrad `ktoso` Malawski
Akka  @ Lightbend 




[akka-user] High-level Server-Side API Minimal Example not working

2016-07-26 Thread Ajinkya Shukla
The minimal example given at the following 
http://doc.akka.io/docs/akka/2.4.8/scala/http/routing-dsl/index.html does 
not work. 

It gives an error on line 19.

Error:(19, 31) not found: value ContentTypes
  complete(HttpEntity(ContentTypes.`text/html(UTF-8)`, "Say 
hello to akka-http"))


   1. import akka.actor.ActorSystem
   2. import akka.http.scaladsl.Http
   3. import akka.http.scaladsl.model._
   4. import akka.http.scaladsl.server.Directives._
   5. import akka.stream.ActorMaterializer
   6. import scala.io.StdIn
   7. 
   8. object WebServer {
   9.  def main(args: Array[String]) {
   10. 
   11. implicit val system = ActorSystem("my-system")
   12.implicit val materializer = ActorMaterializer()
   13.// needed for the future flatMap/onComplete in the end
   14.implicit val executionContext = system.dispatcher
   15. 
   16. val route =
   17.  path("hello") {
   18.get {
   19.  complete(HttpEntity(ContentTypes.`text/html(UTF-8)`, "Say 
   hello to akka-http"))
   20.}
   21.  }
   22. 
   23. val bindingFuture = Http().bindAndHandle(route, "localhost", 8080
   )
   24. 
   25. println(s"Server online at http://localhost:8080/\nPress RETURN 
   to stop...")
   26.StdIn.readLine() // let it run until user presses return
   27.bindingFuture
   28.  .flatMap(_.unbind()) // trigger unbinding from the port
   29.  .onComplete(_ => system.terminate()) // and shutdown when done
   30.  }
   31. }



Can you please help me fix this error?



[akka-user] Type mismatch, expected: ToResponseMarshallable, actual: WebServer.Item

2016-07-26 Thread Ajinkya Shukla
The following example at 
http://doc.akka.io/docs/akka/2.4/scala/http/common/json-support.html gives 
an error at lines 43 and 44.

Error:(43, 41) type mismatch;
 found   : WebServer.Item
 required: akka.http.scaladsl.marshalling.ToResponseMarshallable
case Some(item) => complete(item)


   1.  import akka.actor.ActorSystem
   2.import akka.http.scaladsl.Http
   3.import akka.stream.ActorMaterializer
   4.import akka.Done
   5.import akka.http.scaladsl.server.Route
   6.import akka.http.scaladsl.server.Directives._
   7.import akka.http.scaladsl.model.StatusCodes
   8.import akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport._
   9.import spray.json.DefaultJsonProtocol._
   10. 
   11. import scala.io.StdIn
   12. 
   13. import scala.concurrent.Future
   14. 
   15. object WebServer {
   16. 
   17.   // domain model
   18.  final case class Item(name: String, id: Long)
   19.  final case class Order(items: List[Item])
   20. 
   21.   // formats for unmarshalling and marshalling
   22.  implicit val itemFormat = jsonFormat2(Item)
   23.  implicit val orderFormat = jsonFormat1(Order)
   24. 
   25.   // (fake) async database query api
   26.  def fetchItem(itemId: Long): Future[Option[Item]] = ???
   27.  def saveOrder(order: Order): Future[Done] = ???
   28. 
   29.   def main(args: Array[String]) {
   30. 
   31. // needed to run the route
   32.implicit val system = ActorSystem()
   33.implicit val materializer = ActorMaterializer()
   34.// needed for the future map/flatmap in the end
   35.implicit val executionContext = system.dispatcher
   36. 
   37. val route: Route =
   38.  get {
   39.pathPrefix("item" / LongNumber) { id =>
   40.  // there might be no item for a given id
   41.  val maybeItem: Future[Option[Item]] = fetchItem(id)
   42. 
   43.   onSuccess(maybeItem) {
   44.case Some(item) => complete(item)
   45.   case None   => complete(StatusCodes.NotFound)
   46.  }
   47.}
   48.  } ~
   49.post {
   50.  path("create-order") {
   51.entity(as[Order]) { order =>
   52.  val saved: Future[Done] = saveOrder(order)
   53.  onComplete(saved) { done =>
   54.complete("order created")
   55.  }
   56.}
   57.  }
   58.}
   59. 
   60. val bindingFuture = Http().bindAndHandle(route, "localhost", 
   8080)
   61.println(s"Server online at http://localhost:8080/\nPress 
   RETURN to stop...")
   62.StdIn.readLine() // let it run until user presses return
   63.bindingFuture
   64.  .flatMap(_.unbind()) // trigger unbinding from the port
   65.  .onComplete(_ ⇒ system.terminate()) // and shutdown when 
   done
   66. 
   67.   }
   68.}
   69.  }


How can I pass an argument of type ToResponseMarshallable to the complete method?



Re: [akka-user] akka-cluster split brain when enable auto-down and set a node random package loss rate 50%

2016-07-26 Thread Siva Kommuri
What's the min-nr-of-members set to?

Is node A the only seed node in the cluster? Which ones are the seed nodes?

Best wishes,
Siva on 6+

> On Jul 22, 2016, at 11:45 PM, Patrik Nordwall  
> wrote:
> 
> It's one of Lightbend's commercial offerings. Contact Lightbend for more 
> information and pricing https://www.lightbend.com/contact
> 
>> On Sat, 23 July 2016 at 06:26, Yutao Shuai wrote:
>> Can I use the SBR for free?
>> 
>> On Friday, 22 July 2016 at 22:02:13 UTC+8, Patrik Nordwall wrote:
>> 
>>> Yes, it exists for 2.3.x
>> 
>>> On Fri, 22 July 2016 at 13:01, Yutao Shuai wrote:
 Can I use split brain resolver in version 2.3.10 ?
 
On Friday, 22 July 2016 at 18:08:22 UTC+8, Akka Team wrote:
 
> Hi Yutao,
> 
> Please refer to the documentation page Patrik has linked. It explains the 
> problem in detail and various approaches to handle it. SBR is available 
> for Lightbend customers and provides ready-made solutions (the linked 
> pages), but if you want to go with the open source version, you can 
> implement similar strategies yourself.
> 
> -Endre
 
> 
>> On Fri, Jul 22, 2016 at 11:18 AM, Yutao Shuai  wrote:
>> The result we expected is a split into two clusters, where one of the 
>> clusters has three or four nodes and works normally.
>> 
>> On Friday, 22 July 2016 at 15:41:32 UTC+8, √ wrote:
>>> 
>>> What behavior are you looking to achieve?
>>> 
>>> -- 
>>> Cheers,
>>> √
>>> 
>>> 
 On Jul 22, 2016 5:11 AM, "Yutao Shuai"  wrote:
  In a 5-node cluster, suppose the nodes are A, B, C, D, E. All nodes 
 monitor each other, auto-down is enabled, and we set node A's random 
 packet loss rate to 50%. After a period of time, the cluster 
   will divide into two or more clusters. How can I fix this problem?
>> 
> 
> 
> 
> -- 
 
> Akka Team
> Lightbend - Reactive apps on the JVM
> Twitter: @akkateam
 
>> 
> 

Re: [akka-user] Passivate Persistent Cluster Singleton

2016-07-26 Thread Siva Kommuri
Good idea. Thanks!

Best wishes,
Siva on 6+

> On Jul 22, 2016, at 3:28 AM, Akka Team  wrote:
> 
> Hi Siva,
> 
> I don't recall cluster singleton having a passivation feature; I think this 
> feature simply does not exist. On the other hand, there is not much reason to 
> passivate: an idle actor has no runtime cost apart from its memory usage. If 
> the singleton's state is so large that it becomes an issue (though this is 
> likely an anti-pattern!), you can simply create the "real" singleton as a 
> child of the cluster singleton and proxy messages to it via the parent. Then 
> you can stop and start the child as you wish.
> 
> -Endre
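For the record, the parent-as-proxy pattern Endre describes might look roughly like this (a sketch only; the `workerProps`, the `Stop`/`ReceiveTimeout` handling, and the supervision details are assumptions, built from plain actor APIs rather than any dedicated Akka feature):

```scala
import akka.actor.{ Actor, ActorRef, Props, ReceiveTimeout }

case object Stop

// Runs as the actual cluster singleton. The heavy "real" singleton lives
// in a child actor, which this parent can stop when idle and respawn
// lazily on the next message.
class SingletonParent(workerProps: Props) extends Actor {
  private var worker: Option[ActorRef] = None

  def receive = {
    case ReceiveTimeout | Stop =>
      worker.foreach(context.stop) // passivate the child, keep the singleton
      worker = None
    case msg =>
      val w = worker.getOrElse {
        val ref = context.actorOf(workerProps, "worker")
        worker = Some(ref)
        ref
      }
      w.forward(msg)               // proxy, preserving the original sender
  }
}
```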
> 
>> On Wed, Jul 20, 2016 at 12:45 AM, Siva Kommuri  wrote:
>> Hi all!
>> 
>> What is the proper way to passivate a persistent cluster singleton? I have 
>> the following snippet in the cluster singleton:
>> 
>> receive = {
>>   case ReceiveTimeout => context.parent ! Passivate(stopMessage = Stop)
>>   case Stop => context.stop(self)
>> }
>> I get the following warning in the logs and the Stop message never gets 
>> processed:
>> 
>> unhandled event Passivate(Stop) in state Oldest
>> Or is passivating a persistent cluster singleton considered to be an 
>> anti-pattern?
>> 
>> Best Wishes,
>> Siva Kommuri
> 
> 
> 
> -- 
> Akka Team
> Lightbend - Reactive apps on the JVM
> Twitter: @akkateam



[akka-user] getting out of memory - not able to create new native threads in akka java

2016-07-26 Thread Jitu Thakur
 

Hi,


In our system we have a few blocking actors (service calls) and we were using 
the default dispatcher. When we increase the load, the system becomes very 
slow, and we observed that a message waits a long time to reach the next 
actor (which means the dispatcher is taking time to transfer messages), even 
though the next actor is sitting idle.


To solve this, we used a pinned dispatcher for the actors that make blocking 
calls. But we noticed that a pinned-dispatcher instance (and its thread) was 
created for each actor instance, and after processing a few messages the 
application crashes with an OutOfMemoryError ("unable to create new native 
thread").

We also tried the balancing dispatcher, but again it ends up with an 
OutOfMemoryError.


As the pinned dispatcher seems best suited for actors that make blocking 
calls, is there a limit on the number of actor instances for each actor type? 
Is there any property through which we can limit the number of 
pinned-dispatcher instances?

Or is there another approach that would be better suited to our application?
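One common alternative (not from this thread, just a sketch): a pinned dispatcher dedicates one thread per actor, so the thread count, and hence native-memory use, grows with the number of actors. Running the blocking actors on a separate, bounded thread-pool dispatcher keeps the thread count fixed no matter how many actors exist. The dispatcher name and pool sizes below are placeholders:

```hocon
# application.conf: a dedicated, bounded dispatcher for blocking actors
blocking-io-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    core-pool-size-min = 4
    core-pool-size-max = 16
  }
  throughput = 1
}
```

Assign it per actor with `props.withDispatcher("blocking-io-dispatcher")`; the default dispatcher then stays free for the non-blocking actors.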


Thanks,

JT



[akka-user] Cleanup after HttpResponse

2016-07-26 Thread daleksan
Hello,

I was wondering what is the best way to clean up resources after an 
HttpResponse was sent over the network.

I need to use a legacy file-backed implementation of a buffer to get the 
data for HTTP responses. The API gives an InputStream to the data and 
eventually I need to call the cleanup methods (so the underlying file is 
deleted etc.). I can construct a source from the InputStream 
with StreamConverters.fromInputStream, so far so good. The source then is 
passed to the response entity: HttpEntity.Default(contentType, 
contentLength, respSource). The tricky bit is when to invoke the cleanup. 
It cannot be done as part of the flow which constructs the HttpResponse, 
because at this point the response entity hasn't read the data. The source 
passed to the entity is not yet materialised as I understand, so I don't 
have the Future[IOResult] instance, which I could wait for to execute the 
cleanup.

I found something here, but it doesn't seem to give me the answer:
https://github.com/akka/akka/issues/18571
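One possible approach (a sketch; `legacyBuffer` with its `inputStream()` and `cleanup()` methods is a hypothetical stand-in for the legacy API): attach the cleanup to the source itself with `watchTermination`, so it runs once the stream terminates (after the entity has been fully read, or on failure), without needing the `Future[IOResult]` at construction time:

```scala
import akka.Done
import akka.stream.scaladsl.{ Source, StreamConverters }
import akka.util.ByteString

import scala.concurrent.Future

// `legacyBuffer` and the ActorSystem `system` are assumed to be in scope.
val respSource: Source[ByteString, _] =
  StreamConverters.fromInputStream(() => legacyBuffer.inputStream())
    .watchTermination() { (mat, done: Future[Done]) =>
      // Runs on completion *and* on failure, so the backing file
      // gets deleted even if the client disconnects mid-response.
      done.onComplete(_ => legacyBuffer.cleanup())(system.dispatcher)
      mat
    }
```

The source can then be passed to `HttpEntity.Default(contentType, contentLength, respSource)` as before; the cleanup fires whenever Akka HTTP actually materialises and drains it.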

Thank you

 David



[akka-user] Akka TCP with Actors vs Akka Streams and Flows/Graphs

2016-07-26 Thread Jarl André Hübenthal
Has someone developed an Akka Streams based TCP server where connections are 
kept open, frame-delimited messages pour in, and the messages are eventually 
stored in a database? It seems such a simple pattern that I am surprised not 
to find any examples of it. But maybe the world is not so simple, or maybe 
Akka Streams isn't targeted at mainstream development with semi-complex 
business logic. I need session management, (crude) authentication, and a 
cross-connection session buffer, where a client can connect, send a partial 
message, disconnect for some reason, then reconnect and send the last part of 
the message. The first message is always a HELO (login) message. I feel the 
actor way of doing it is too fragile and resource-consuming: I have a river 
dance of actors doing all the wiring to keep a session buffer for each 
connection, store the data, and make sure each message is stored at least 
once (exactly once in fact, since I use a unique index on the table).

Today I have:

ActorSystem1(ServerActor -> ConnectionActor * N -> SessionActor * N) -> 
ActorSystem2(BackendRouterActor -> BackendWorkerActor)

The ServerActor is where Akka TCP sets up the server, with host and port.

ConnectionActor is one per connection, and validates that the first message 
received is a login message. If it is not, it disconnects; if it is, it 
creates a new SessionActor (or finds an existing one) and then passes all 
further messages to that SessionActor.

SessionActor is a persistent actor; it keeps in its journal the message 
buffer, the last message received, and a message counter, and it processes 
the buffer on each incoming message chunk. It searches with a tail-recursive 
function (though I could also have done look-ahead) for messages that end 
with the delimiter, parses these messages into Event objects, and sends them 
off to the BackendRouterActor.

BackendRouterActor is a persistent at-least-once-delivery actor that persists 
each Event object it receives and sends it off to the worker. If delivery is 
not confirmed within a time limit, it resends, as usual with at-least-once 
delivery (ALOD).

I know there may be some weaknesses in this architecture, so general advice 
on the architecture would be great.

But my main question, in direct or abstract lines: how can this be done with 
Akka Streams?

I see three problems (things I don't know the solution for) with Akka 
Streams:

1) The cross-connection session buffer and other session-related parameters
2) Connection authentication (the HELO message, which must have a specific 
format)
3) At-least-once delivery, where all parsed events MUST be stored.

Can someone shed some insight on possible solutions to these problems and on 
the topic in general: how to convert such an actor-based stateful app to 
Akka Streams?
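For the framing part at least, Akka Streams has a ready-made stage. A minimal sketch of a delimiter-framed TCP server (host, port, delimiter, and the echo handler are placeholders; the session, authentication, and at-least-once concerns from 1-3 are not addressed here):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Framing, Tcp }
import akka.util.ByteString

implicit val system = ActorSystem("tcp-server")
implicit val mat = ActorMaterializer()

Tcp().bind("127.0.0.1", 8888).runForeach { connection =>
  // Re-chunk the raw byte stream into complete, delimiter-terminated frames,
  // regardless of how the bytes were split across TCP packets.
  val handler = Framing.delimiter(ByteString("\n"), maximumFrameLength = 65536)
    .map(_.utf8String)
    .map(msg => ByteString(s"ack: $msg\n")) // placeholder for real processing
  connection.handleWith(handler)
}
```

Cross-connection session state and at-least-once delivery are, as you suspect, the hard parts; one pragmatic pattern is to keep those in your existing (persistent) actors behind the stream, e.g. feeding parsed events to an actor via `Sink.actorRefWithAck` for backpressure, rather than forcing everything into pure streams.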



[akka-user] How to disable TLSv1 when I configure "akka.remote.netty.ssl.security.protocol" property as TLSv1.2.

2016-07-26 Thread yinzhonghong
Configure file as follow:
# Protocol to use for SSL encryption, choose from:
# Java 6 & 7:
#   'SSLv3', 'TLSv1'
# Java 7:
#   'TLSv1.1', 'TLSv1.2'
protocol = "TLSv1.2"


When I use nmap to scan, I find that TLSv1 is enabled:
D:\softwares\nmap-7.12>nmap -p  --script=ssl* x.x.x.x --unprivileged


Starting Nmap 7.12 ( https://nmap.org ) at 2016-07-26 15:33 
Nmap scan report for x.x.x.x
Host is up (1.0s latency).
PORT STATE  SERVICE
/tcp open unknown
| ssl-enum-ciphers:
|  TLSv1.0:
|ciphers:
|  TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) -A
|compressors:
|  NULL
|cipher preference: indeterminate
|cipher preference error: Too few ciphers supported
|  TLSv1.1:
|ciphers:
|  TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) -A
|compressors:
|  NULL
|cipher preference: indeterminate
|cipher preference error: Too few ciphers supported
|  TLSv1.2:
|ciphers:
|  TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) -A
|compressors:
|  NULL
|cipher preference: indeterminate
|cipher preference error: Too few ciphers supported
|_ least strength: A
MAC Address: xx:xx:xx:xx:xx:xx


Nmap done: 1 IP address (1 host up) scanned in 3.88 seconds


D:\softwares\nmap-7.12>

I want to disable TLSv1. Any method?
Thank you.
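One JVM-level workaround (an assumption, not verified against this setup): the `protocol` setting picks the `SSLContext`, but the `SSLEngine` may still enable the older protocols it supports. Recent JDKs (7u76+/8u31+) let you disable protocols globally via the `jdk.tls.disabledAlgorithms` security property:

```properties
# In $JAVA_HOME/jre/lib/security/java.security (or an overlay file passed
# with -Djava.security.properties=<file>), extend the disabled list:
jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1
```

After restarting the node, the same nmap ssl-enum-ciphers scan should report only TLSv1.2. Note this is a sketch of a JVM-wide setting: it disables the old protocols for the whole process, not just for Akka remoting.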

