Re: [akka-user] Re: Akka-http: how to deal with incorrect http headers

2017-05-15 Thread johannes . rudolph
Hi Florian,

can you clarify what needs improvement? Is that about client or server side?

Johannes

On Saturday, May 13, 2017 at 10:47:52 AM UTC+2, Florian Rosenberg wrote:
>
> I'm seeing a similar problem, but not with the URI but with the 
> Strict-Transport-Security header: it seems to be invalid, and changing the 
> conf to ignore it has no effect. I'm using a websocket though.
>
> After debugging around for a while, I instrumented some code and see this:
>
> Illegal 'strict-transport-security' header: Invalid input 'EOI', expected 
> OWS, 'i' or token0 (line 1, column 18): max-age=31536000;
>
> It may well be that the endpoint I'm sending it to does not treat this 
> correctly and returns a 400 (Bad request), but I need to be able to ignore 
> this somehow. 
>

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] Re: Akka Http how to add charset utf-8 to the content-type header

2017-05-15 Thread johannes . rudolph
Hi Thibault,

you are right, there's currently no built-in way to do this. To achieve it, 
you could e.g. copy the Jackson marshaller from the sources to use a custom 
media type. See 
here: 
https://github.com/akka/akka-http/blob/5932237a86a432d623fafb1e84eeeff56d7485fe/akka-http-marshallers-java/akka-http-jackson/src/main/java/akka/http/javadsl/marshallers/jackson/Jackson.java#L27-L27

Johannes

On Saturday, May 6, 2017 at 8:12:06 AM UTC+2, Thibault Meyer wrote:
>
> Hi Greg,
>
>
> how to do this ? I see no arguments in completeOKWithFuture 
> or Jackson.marshaller() to do this. I'm using Java version of akka-http.
>
> Thanks
>
> On Saturday, May 6, 2017 at 00:49:14 UTC+2, Greg Methvin wrote:
>>
>> There is no charset parameter defined for application/json. See 
>> https://www.iana.org/assignments/media-types/application/json
>>
>> The encoding should not be determined by looking at the charset 
>> parameter, but rather by looking at the first four octets: 
>> https://tools.ietf.org/html/rfc4627#section-3
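The four-octet rule from RFC 4627 §3 can be sketched in plain Scala: because the first two characters of JSON text (per that RFC) are ASCII, the pattern of NUL bytes in the first four octets identifies the Unicode encoding. This is a simplified illustration, not a full decoder:

```scala
// RFC 4627 §3 encoding detection: inspect which of the first four
// octets are zero to pick the Unicode encoding of JSON text.
def detectJsonEncoding(bytes: Array[Byte]): String = {
  def z(i: Int) = i < bytes.length && bytes(i) == 0
  if (z(0) && z(1) && z(2))      "UTF-32BE" // 00 00 00 xx
  else if (z(0) && z(2))         "UTF-16BE" // 00 xx 00 xx
  else if (z(1) && z(2) && z(3)) "UTF-32LE" // xx 00 00 00
  else if (z(1) && z(3))         "UTF-16LE" // xx 00 xx 00
  else                           "UTF-8"
}
```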
>>
>> If you absolutely have to deal with non-conforming parsers, the custom 
>> media type is probably the right way to go.
>>
>> On Fri, May 5, 2017 at 2:23 PM, Alan Burlison  
>> wrote:
>>
>>> On 05/05/17 18:42, Alex Cozzi wrote:
>>>
>>>> funny, I was looking into this yesterday!
>>>> I am by no means an expert, but I think that what is happening is the
>>>> following:
>>>> application/json implies UTF-8: if you look at the MediaType class from
>>>> akka http you see the declaration:

>>>
>>> I believe you are correct, http://www.ietf.org/rfc/rfc4627.txt says:
>>>
>>> 3.  Encoding
>>> JSON text SHALL be encoded in Unicode.  The default encoding is UTF-8.
>>>
>>> so an explicit charset parameter isn't required if the content is UTF-8 
>>> - but other JSON frameworks do often seem to include the UTF-8 charset.
>>>
>>> As far as I know there's no "proper" way of forcing the addition of the 
>>> charset to the application/json content type in Akka-HTTP but it is 
>>> possible to hack around it:
>>>
>>> val ct = ContentType(MediaType.custom("application/json", 
>>> false).asInstanceOf[MediaType.WithOpenCharset], HttpCharset.custom("utf-8"))
>>>
>>> -- 
>>> Alan Burlison
>>> --
>>>
>>>
>>
>>
>>
>> -- 
>> *Greg Methvin*
>> *Tech Lead - Play Framework*
>> Lightbend, Inc. 
>>
>>



[akka-user] Re: akka-http ssl setup documentation / examples isn't very clear. Is config-based

2017-05-22 Thread johannes . rudolph
Hi Andrew,

your observation is correct. Server-side TLS configuration is only possible 
through code right now. We have tickets to track improving the documentation 
and maybe adding a configuration-based approach:

https://github.com/akka/akka-http/issues/55
https://github.com/akka/akka-http/issues/237

The basic problem is that security recommendations change all the time and 
people will just copy and paste any code we give, so we need to make sure 
to provide the right amount of information without claiming it to be the 
recommendation for best security.

In our test suite we have an example of just creating the data structures 
from certificates / keys in one particular format here:

https://github.com/akka/akka-http/blob/5932237a86a432d623fafb1e84eeeff56d7485fe/akka-http-core/src/test/scala/akka/http/impl/util/ExampleHttpContexts.scala#L21-L21

For better security you should also adapt the set of ciphers, etc.
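As a rough sketch of the wiring involved, here is the by-hand `SSLContext` setup using only the standard `javax.net.ssl` API. The keystore type and algorithm choices are illustrative and explicitly not a security recommendation:

```scala
import java.security.{KeyStore, SecureRandom}
import javax.net.ssl.{KeyManagerFactory, SSLContext, TrustManagerFactory}

// Builds an initialized SSLContext from a loaded KeyStore. Note the
// final `init` call — omitting it is the common cause of the
// "SSLContext is not initialized" error.
def buildSslContext(keyStore: KeyStore, password: Array[Char]): SSLContext = {
  val kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm)
  kmf.init(keyStore, password)

  val tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm)
  tmf.init(keyStore)

  val ctx = SSLContext.getInstance("TLS")
  ctx.init(kmf.getKeyManagers, tmf.getTrustManagers, new SecureRandom)
  ctx
}
```

The resulting context can then be wrapped for akka-http, e.g. via `ConnectionContext.https(ctx)` when binding the server.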

Johannes


On Wednesday, May 17, 2017 at 10:20:51 AM UTC+2, Andrew Norman wrote:
>
> The information for setting up akka-http SSL is very cluttered, 
> inaccurate, dated, and references mismatched links from other systems 
> (such as Play WS SSL client configuration), which doesn't really tell you 
> how to implement server-side SSL. Every code example I see out there on how 
> to set up SSL with akka-http doesn't use the "config-based" setup but does 
> the setup in code. Those examples are actually missing the critical 
> last piece: initializing the sslContext with the keyManagers, 
> trustManagers, and SecureRandom settings to make it run. (Since the 
> sslContext was never initialized, it throws an initialization error.)
>
>
>   sslContext.init(keyManagerFactory.getKeyManagers, tmf.getTrustManagers, 
> SecureRandom.getInstanceStrong)
>
>
> So putting this together I'm drawing the conclusion:
>
>- the config-based approach to enabling SSL is not completely wired 
>into a functional solution for akka-http
>- the examples on the website need to be updated to show a true 
>working setup (see the above code snippet that needs to be included to make 
>that happen)
>- documentation should be added so as not to send users down a wild goose 
>chase of trying to implement a config-based HTTPS setup with Play's WS 
>ssl-config (*at least not until this is officially supported by 
>akka-http*) 
>
>
> Am I right with my assumptions, or am I missing something here?
>
> Also, is there a timeline on when a true config-based ssl will be 
> functionally complete for akka-http?
>



[akka-user] Re: has anyone attempted a multi-source authentication?

2017-05-22 Thread johannes . rudolph
Hi Andrew,

here's a general idea of how it could work:

If you model each authentication method as a `Directive1[Session]` that 
returns the session (or user, principal, etc.) for that authentication 
method, and all of the directives return a value of the same type (or a 
type with a common supertype), then you can combine them with `|`:


def authentication: Directive1[Session] =
  cookieAuthentication | tokenAuthentication | basicAuthentication

The first one is tried with the highest precedence; the following ones are 
only tried if all of the previous ones have rejected the request.
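The fallback behavior of `|` can be sketched with a simplified plain-Scala model. This is not the actual akka-http Directive API — authenticators are modeled as plain functions, and all names are illustrative:

```scala
// Each "authenticator" either yields a Session or a rejection message;
// orElse tries the second one only if the first rejected.
final case class Session(user: String)

type Authenticator = Map[String, String] => Either[String, Session]

def orElse(a: Authenticator, b: Authenticator): Authenticator =
  headers => a(headers) match {
    case Left(_) => b(headers) // only tried if `a` rejected
    case ok      => ok
  }

val cookieAuth: Authenticator =
  h => h.get("Cookie").map(c => Session("cookie:" + c)).toRight("no cookie")
val tokenAuth: Authenticator =
  h => h.get("Authorization").map(t => Session("token:" + t)).toRight("no token")

val combined = orElse(cookieAuth, tokenAuth)
```

A request carrying a cookie is resolved by the first authenticator; one carrying only a token falls through to the second; one carrying neither is rejected.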

Is that what you are looking for?

Johannes

On Saturday, May 20, 2017 at 12:47:03 AM UTC+2, Andrew Norman wrote:
>
> This would be for a system that services requests from users directly and 
> also with other systems. Users would have a session cookie and non-user 
> systems a token. 
>
> The idea / paradigm here is the old java interceptor authentication 
> approach where authentication was attempted a number of ways before the 
> action is handled. If any of the various types of authentication was 
> successful then the action can be invoked (provided the authenticated 
> entity is authorized to invoke the target action).
>
> So looking at the akka-http setup, I'm seeing that there is cookie 
> handling directive and there are authentication directive (that look like 
> they are wired to handle usernames and passwords). Is there some sort of 
> directive that resembles or can work with the multi-source auth paradigm?
>



[akka-user] Re: How to map unstructured Json to a case class in Akka-Http

2017-05-22 Thread johannes . rudolph
Hi,

try this case class structure instead:

case class Customer(name: String, jsonData: JsValue)

Johannes

On Saturday, May 13, 2017 at 10:47:52 AM UTC+2, vishal...@exadatum.com 
wrote:
>
>
> I am trying to create a REST service using Akka HTTP and Slick. I am 
> trying to persist the raw JSON in a MySQL table, in String or BLOB 
> format, whichever can solve my problem.
> The point where I am facing issues is: every time I try to store the JSON 
> via Akka HTTP routes, it throws an error about the JSON being malformed.
>
> object Models {
>   case class Customer(name: String,jsonData:String)
>   object ServiceJsonProtoocol extends DefaultJsonProtocol {
> implicit val customerProtocol = jsonFormat2(Customer)
>   }
> }
>
> object AkkaJsonParsing {
>
>
>
>   def main(args: Array[String]) {
>
> implicit val actorSystem = ActorSystem("rest-api")
>
> implicit val actorMaterializer = ActorMaterializer()
>
> val list = new ConcurrentLinkedDeque[Customer]()
>
> import ServiceJsonProtoocol.customerProtocol
> val route =
>   path("customer") {
> post {
>   entity(as[Customer]) {
> customer => complete {
>   list.add(customer)
>   s"got customer json ${customer.jsonData}"
> }
>   }
> }
>   }
>
>
>
>
>
> JSON: 
> {
>   "name": "test",
>   "json": [{"type": "alpha", "gross": "alpha"}]
> }
>
>
>
> Error :Request was rejected with rejection 
> MalformedRequestContentRejection(Expected String as JsString, but got 
> [{"type":"alpha","gross":"alpha"}],None)
>
>
> I tried changing the jsonData datatype to Byte and Array[Byte], but 
> nothing helps. 
> What is the right way to persist the whole JSON?
>
>



[akka-user] Re: Connection retry for TCP stream.

2017-05-23 Thread johannes . rudolph
Hi Ivan,

I guess it depends on how you want to use this connection exactly. As TCP 
is a stream-based protocol, there's usually some state associated with a 
connection that needs to be taken into account when a new connection is 
opened. Therefore, there cannot be a general solution to the problem that 
works for all kinds of bidirectional stream-based protocols.

E.g. for HTTP you need to keep track of which requests are still 
outstanding and resubmit them on the next connection. For an FTP download 
you would have to remember how much you had already downloaded and resume 
there. Each protocol needs its own rules that you would have to implement.
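Stripped of all protocol-specific state, the "retry with delay" part alone can be sketched in plain, blocking Scala. This is deliberately not the akka-streams way (in a stream you would re-materialize the Tcp flow instead; later Akka versions also offer RestartFlow/RestartSource with backoff for this) — the names here are illustrative:

```scala
import scala.util.{Failure, Success, Try}

// Retries `connect` up to `maxRetries` additional times, sleeping
// `delayMillis` between attempts; rethrows the last failure when
// retries are exhausted.
def retryWithDelay[A](maxRetries: Int, delayMillis: Long)(connect: () => A): A =
  Try(connect()) match {
    case Success(a) => a
    case Failure(_) if maxRetries > 0 =>
      Thread.sleep(delayMillis)
      retryWithDelay(maxRetries - 1, delayMillis)(connect)
    case Failure(e) => throw e
  }
```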

Depending on the requirements the stream setup needs to be customized. E.g. 
here's part of the setup we use for the HTTP client connection pool:

https://github.com/akka/akka-http/blob/a399d99eddd7ba83121ac75ff479fe31d8e2aa7d/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolFlow.scala#L27

Each slot is implemented as a `GraphStage` that keeps track of the state 
of one connection. If that connection fails or is closed for any reason, 
the slot is responsible for creating a new connection.

Depending on your use-case there might be much simpler approaches but as so 
often, it depends ;)

Johannes


On Tuesday, May 23, 2017 at 2:33:11 PM UTC+2, ivan morozov wrote:
>
> Hi,
>
> I'm using tcp for outgoing connection in akka streams.
>
> Tcp().outgoingConnection(...)
>
> If the target connection drops, you can see something like this in the 
> debug logs:
>
> [DEBUG] [05/23/2017 14:08:41.968] [default-akka.actor.default-dispatcher-5
> ] [akka://default/system/IO-TCP/selectors/$a/0] Closing connection due to 
> IO error java.io.IOException: Broken pipe
>
>
> How to configure a retry with a delay?
>
> To reproduce, run something like 
>
> Source.repeat(1).via(Tcp.outgoingConnection("localhost",port)).to(Sink.
> ignore).run()
>
> nc -lkv port
>
> and then kill netcat with CTRL + C and retry.
>
> Thanks
>



[akka-user] Re: ActorSystem Uncaught fatal error shutting down ActorSystem !

2017-06-06 Thread johannes . rudolph
Hi,

in cases of fatal errors, the error and stack trace are logged to stderr 
(not using the logging framework). Note that in some cases, the logging 
itself may fail (that's why the error is fatal: after it happens, the state 
of the JVM might be corrupted and operations like logging may fail for 
various reasons).

Johannes

On Monday, June 5, 2017 at 9:16:44 AM UTC+2, cie...@gmail.com wrote:
>
>
> Hi all, this is my first post; I am poor in English, sorry~
>
>
>
> [ERROR] [SECURITY][05/24/2017 16:41:54.422] 
> [wssystem-akka.remote.default-remote-dispatcher-26] 
> [akka.actor.ActorSystemImpl(wssystem)] Uncaught fatal error from thread 
> [wssystem-akka.remote.default-remote-dispatcher-26] shutting down 
> ActorSystem [wssystem]
>   [INFO] [05/24/2017 16:41:54.434] 
> [wssystem-akka.remote.default-remote-dispatcher-5] [akka.tcp://
> wssystem@192.168.0.56:20300/system/remoting-terminator] Shutting down 
> remote daemon.
>   [INFO] [05/24/2017 16:41:54.434] 
> [wssystem-akka.remote.default-remote-dispatcher-5] [akka.tcp://
> wssystem@192.168.0.56:20300/system/remoting-terminator] Remote daemon 
> shut down; proceeding with flushing remote transports.
> [WARN] [05/24/2017 16:41:56.464] 
> [wssystem-akka.actor.default-dispatcher-34] [akka.remote.Remoting] Shutdown 
> finished, but flushing might not have been successful and some messages 
> might have been dropped. Increase akka.remote.flush-wait-on-shutdown to a 
> larger value to avoid this.
>   [INFO] [05/24/2017 16:41:56.464] 
> [wssystem-akka.remote.default-remote-dispatcher-5] [akka.tcp://
> wssystem@192.168.0.56:20300/system/remoting-terminator] Remoting shut 
> down.
>
>
> I got an error when using Akka and I have no idea why; the logs do not 
> show in detail what the error is. Please help me, thanks.
>
>
>
>



[akka-user] Re: Akka cluster http management

2017-06-06 Thread johannes . rudolph
Hi,

the hostname setting is a bit misnamed. It defines the interface the server 
binds to.

So, you can put "0.0.0.0" in there to make sure the management interface is 
bound on all interfaces (but make sure not to expose it publicly) or put 
some other interface address in there.
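For example, in `application.conf` (the setting path is the one from the question above; the port shown is an assumption — verify the default against your akka-management version):

```hocon
akka.cluster.http.management {
  hostname = "0.0.0.0"  # bind on all interfaces — do not expose this publicly
  port = 19999
}
```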

I filed https://github.com/akka/akka-management/issues/30 to clarify the 
confusion.

Johannes

On Thursday, June 1, 2017 at 12:23:55 PM UTC+2, Dai Yinhua wrote:
>
> How can I enable the akka cluster http management?
>
> Should I call ClusterHttpManagement(cluster1).start() to enable it? 
> And does it supported by a singleton actor internally so that I can call 
> it on every node of akka cluster instance?
>
> Also how should I set akka.cluster.http.management.hostname if akka 
> cluster is running in multiple servers?
>
>
> Thanks.
>



[akka-user] Re: ActorSystem Uncaught fatal error shutting down ActorSystem !

2017-06-06 Thread johannes . rudolph
Hi,

my colleague Arnout just found out that the error will only be logged to 
stderr if you enable the `akka.jvm-exit-on-fatal-error` setting. Can you 
try enabling this setting and then run again?  

I also filed an issue to improve the logging of fatal 
errors: https://github.com/akka/akka/issues/23107

Johannes

On Tuesday, June 6, 2017 at 12:55:27 PM UTC+2, johannes...@lightbend.com 
wrote:
>
> Hi,
>
> in cases of fatal errors, the error and stack trace is logged to stderr 
> (not using the logging framework). Note that in some cases, the logging 
> itself may fail (that's why the error is fatal: after it happens, the state 
> of the JVM might be corrupted and operations like logging may fail for 
> various reasons).
>
> Johannes
>



Re: [akka-user] Re: ActorSystem Uncaught fatal error shutting down ActorSystem !

2017-06-07 Thread Johannes Rudolph
The rest of the stack here will tell you where the problem comes from:

On Wed, Jun 7, 2017 at 10:40 AM,  wrote:

> *Situation 1 (it has shown me the stack, and it told me "Shutdown 
> finished"):*
> INFO   | jvm 1| 2017/04/07 15:49:52 | java.lang.NoClassDefFoundError:
> Lws/protos/EnumsProtos$HardTypeEnum;
> INFO   | jvm 1| 2017/04/07 15:49:52 |   at 
> java.lang.Class.getDeclaredFields0(Native
> Method)
> INFO   | jvm 1| 2017/04/07 15:49:52 |   at java.lang.Class.
> privateGetDeclaredFields(Class.java:2583)
> INFO   | jvm 1| 2017/04/07 15:49:52 |   at java.lang.Class.
> getDeclaredField(Class.java:2068)
> INFO   | jvm 1| 2017/04/07 15:49:52 |   at
> java.io.ObjectStreamClass.getDeclaredSUID(ObjectStreamClass.java:1703)
> INFO   | jvm 1| 2017/04/07 15:49:52 |   at
> java.io.ObjectStreamClass.access$700(ObjectStreamClass.java:72)
> INFO   | jvm 1| 2017/04/07 15:49:52 |   at
> java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:484)
> INFO   | jvm 1| 2017/04/07 15:49:52 |   at
> java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:472)
> INFO   | jvm 1| 2017/04/07 15:49:52 |   at java.security.
> AccessController.doPrivileged(Native Method)
> INFO   | jvm 1| 2017/04/07 15:49:52 |   at
> java.io.ObjectStreamClass.(ObjectStreamClass.java:472)
> ..
>
>
It might be related to actor messages received over remoting that cannot be
deserialized because classpaths differ between the two nodes.

Johannes



Re: [akka-user] Re: ActorSystem Uncaught fatal error shutting down ActorSystem !

2017-06-07 Thread Johannes Rudolph
If you set `akka.jvm-exit-on-fatal-error = on` it should log stack traces
in all cases to stderr.

Johannes

On Wed, Jun 7, 2017 at 12:16 PM,  wrote:

> Yes, for *situation 1*, the log has told me the reason that caused the 
> error, and I know how to solve *situation 1*.
> But for *situations 2, 3, 4*, it didn't tell me the reason (the stack); I 
> don't know how to solve this problem.
>
> On Wednesday, June 7, 2017 at 5:00:07 PM UTC+8, Johannes Rudolph wrote:
>>



[akka-user] Re: Akka-Http Entity Stream Truncation

2017-07-10 Thread johannes . rudolph
Hi Michael,

On Monday, July 10, 2017 at 9:01:00 AM UTC+2, Michael Pisula wrote:
>
> As far as I saw from the source code, it could point to a problem with 
> header parsing, but I am not exactly sure what could cause the problem.
>

The place in the code is actually misleading, as the error is only 
prepared at that place after all headers have been read. The error will 
only be reported later on, if the connection is closed while data is still 
expected on it. That will be the case if a `Content-Length` was specified 
but fewer than the given number of bytes were read before the connection 
was closed, or, if `chunked` transfer encoding was used, if the connection 
was closed before the final empty chunk was sent.

You could set `akka.http.server.log-unencrypted-network-bytes = 1000` to 
see all data that was sent on the connection which might help with 
debugging the issue.



[akka-user] Re: [akka-streams] Generic streams and abstract types

2017-07-12 Thread johannes . rudolph
Hi Jeff,

your API seems quite complex. I don't know the purpose of it so I cannot 
suggest anything but I'd try to simplify. :)

That said, your problem seems to be that you cannot write a concrete type 
that would express the dependency between the two components of the tuple 
`(RequestBuilder, Promise[???])`. There are two ways to solve it:

1) make `Out` a type parameter and convert `sink` to a `def sink[Out]`, 
then you can use the tuple `(RequestBuilder[Out], Promise[Out])`
2) create your custom tuple type that allows to express the dependency:

case class BuilderWithPromise[O](builder: RequestBuilder { type Out = O }, 
promise: Promise[O])

and then use it as MergeHub.source[BuilderWithPromise[_]]

But I can only repeat that getting rid of the dependent types if possible 
is usually the best solution ;)
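Option 2) spelled out as a self-contained plain-Scala sketch — the MergeHub/Sink machinery is replaced by a plain `List` so it stays runnable, and the `run()` helper is an illustrative addition:

```scala
import scala.concurrent.Promise

// Abstract-type-member builder, as in the original strawman.
trait RequestBuilder {
  type Out
  def complete(p: Promise[Out]): Unit
}

// The case class captures the dependency between the builder's Out type
// and its Promise, so a heterogeneous collection can carry
// BuilderWithPromise[_] without losing that link.
final case class BuilderWithPromise[O](
  builder: RequestBuilder { type Out = O },
  promise: Promise[O]
) {
  def run(): Unit = builder.complete(promise)
}

val intBuilder: RequestBuilder { type Out = Int } = new RequestBuilder {
  type Out = Int
  def complete(p: Promise[Int]): Unit = p.success(1)
}

val item = BuilderWithPromise(intBuilder, Promise[Int]())

// Heterogeneous "queue", standing in for what MergeHub.source would carry:
val queue: List[BuilderWithPromise[_]] = List(item)
queue.foreach(_.run())
```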

Johannes

On Thursday, July 6, 2017 at 11:23:50 PM UTC+2, Jeff wrote:
>
> Here is a strawman program which illustrates the issue I am having
>
> trait RequestBuilder {
>   type Out
>
>   def complete(p: Promise[Out]): Unit
> }
>
> def makeRequest(in: RequestBuilder): Source[(RequestBuilder, 
> Promise[in.Out]), Future[in.Out]] = {
>   val p = Promise[in.Out]
>
>   Source.single(in -> p).mapMaterializedValue(_ => p.future)
> }
>
> val sink = MergeHub.source[(RequestBuilder, Promise[???])].to(Sink.foreach {
>   case (r, p) => r.complete(p)
> }).run()
>
> sink.runWith(makeRequest(new RequestBuilder {
>   type Out = Int
>
>   def complete(p: Promise[Out]): Unit = p.success(1)
> }))
>
>
> The issue is, how do I type the Promise[???]  in the sink? I have been 
> able to work around this by making the Promise a part of the RequestBuilder 
> trait itself, but this seems like a code smell to me
>



[akka-user] Re: [akka-streams] Generic streams and abstract types

2017-07-13 Thread johannes . rudolph
On Wednesday, July 12, 2017 at 9:08:52 PM UTC+2, Jeff wrote:
>
> As for the issue of complexity, it's actually not as complex as it sounds. 
> I'm using Http().superPool() to make api requests and I wanted to avoid 
> having to create a separate stream for every single iteration of api 
> request when the only thing that changed was the Unmarshaller. Instead of 
> materializing multiple streams where the only thing that changed was the 
> Sink, I just created one stream where the Sink.foreach(...) takes the 
> Unmarshaller function and resolves the Promise. 
>

You could just use Http().singleRequest in that case, because it implements 
almost the same kind of logic and uses the same underlying pool as superPool.
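For illustration, a hedged sketch of the `singleRequest` route (assuming Akka HTTP 10.0.x; the helper name `fetchBody` is made up here):

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.HttpRequest
import akka.http.scaladsl.unmarshalling.Unmarshal
import akka.stream.ActorMaterializer

import scala.concurrent.Future

implicit val system = ActorSystem("client")
implicit val materializer = ActorMaterializer()
import system.dispatcher

// singleRequest dispatches the request on the shared cached connection
// pool (the same infrastructure superPool uses) and hands the response
// back as a Future -- no MergeHub/Promise plumbing required.
def fetchBody(request: HttpRequest): Future[String] =
  Http()
    .singleRequest(request)
    .flatMap(response => Unmarshal(response.entity).to[String])
```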

Johannes



[akka-user] Re: Akka Cluster Pub/Sub performance for chat-room like applications

2017-07-13 Thread johannes . rudolph
On Thursday, July 13, 2017 at 1:08:19 PM UTC+2, Alexander Lukyanchikov 
wrote:
>
> *The only question, is it capable to manage tens of millions of topics? 
> Would it perform better then our current solution?*
>

No, most likely it currently won't scale up to 1 million active topics. In 
Akka's pubsub, each node keeps a topic actor that manages subscriptions of 
local actors to this topic. Then the information about which node is 
interested in which topics is replicated across the whole cluster.

We plan to test the actual memory consumption under realistic conditions 
but we haven't gotten to that so far.

I created a ticket to estimate the memory usage and to collect ideas about 
how to optimize for bigger workloads:

https://github.com/akka/akka/issues/23357

Johannes



Re: [akka-user] Re: Akka Cluster Pub/Sub performance for chat-room like applications

2017-07-13 Thread johannes . rudolph
On Thursday, July 13, 2017 at 2:56:52 PM UTC+2, Justin du coeur wrote:
>
> (I should note: I don't use Akka Pub/Sub myself, but I'm wondering whether 
> Cluster Sharding actually fits your use case well.  Depending on the 
> details, it might.)
>

Yep, I guess that's true. With cluster sharding each topic would be managed 
on a single node. If that node goes down you either lose your subscriptions 
or you have them persisted in which case another node will pick them up 
after a while. Each message travels from the node where it is ingested to 
the node with the topic actor and from there to all the nodes that manage 
the external connections (like WS). Unless you do extra work to deduplicate 
that traffic, each message will internally be sent multiple times, once per 
external connection. If a topic is busy, the single topic actor might become 
a bottleneck.

With PubSub, when a node goes down, all subscriptions that had been managed 
at that node are gone. Each message is in the worst case broadcasted to the 
topic actor of each other node and from there locally to the subscribers.

Johannes



[akka-user] Re: akka-http - Setting TLS Server Name Indicator (SNI) explicitly for Client-Side HTTPS

2017-07-13 Thread johannes . rudolph
Hi Shayan,

this seems like an uncommon usage for an HTTP client. Basically you want to 
connect to a server that presents a certificate for the wrong host name. 
This is unsupported out of the box because it would be an unsafe thing to 
do in general.

The way you tried it does not work because the server name is later 
overwritten by the actual host name. It might work if you turn off 
akka-http's own SNI support by setting `ssl-config.loose.disableSNI = true`.
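For reference, a sketch of the corresponding setting in application.conf (note: when configured through Akka, the ssl-config section usually sits under the `akka` root key; please verify the exact path against the Lightbend ssl-config documentation for your version):

```hocon
# Disables Akka HTTP's own SNI handling so the host name from the URL is
# no longer forced into the TLS ClientHello.
akka.ssl-config.loose.disableSNI = true
```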

Johannes



On Monday, June 19, 2017 at 6:33:07 AM UTC+2, sha...@leaprail.com wrote:
>
> I am trying to connect to internal microservices using the akka-http 
> client-side HTTPS support.
>
> These secure microservices are hosted behind a proxy (HAProxy in tcp mode 
> passing TLS traffic through) with traffic routed to the appropriate service 
> through TLS SNI.
>
> In order to have akka-http properly connect to the services, we need to be 
> able to set the TLS extension servername in ClientHello (SNI) to be 
> different from the host in the URL it is connecting to. 
>
> Let's say the microservice has a certificate for "bar.com" and the proxy 
> is listening on "foo.com". We have setup proxy such that if the SNI in 
> TLS handshake is set to "bar.com" when connecting to "foo.com", it 
> properly routes traffic to the right place. We can verify this easily using 
> openssl with -servername argument:
>
> openssl s_client -showcerts -servername bar.com -connect foo.com:443
>
> When we try to attain the same outcome using akka-http, we are not able to 
> alter the SNI in the TLS ClientHello trying something like this:
>
> // sslContext created with internal CA Root loaded into the trust store
> val params = sslContext.getDefaultSSLParameters
> val serverName: SNIHostName = new SNIHostName("bar.com")
> val serverNames = new java.util.ArrayList[SNIServerName](1)
> serverNames.add(serverName)
> params.setServerNames(serverNames)
> val ctx = ConnectionContext.https(sslContext, sslParameters = Some(params))
> Http(system).cachedHostConnectionPoolHttps[ActorRef](host = "foo.com", 
> port = 443, connectionContext = ctx)
>
> the client still uses the value in host (foo.com) for SNI and fails to 
> get routed to the correct service.
>
> Any idea how this can be accomplished?
>
> Many thanks in advance,
> Shayan
>
>



[akka-user] Re: Akka http client dispatcher

2017-07-13 Thread johannes . rudolph
Hi Diego,

it seems to be an oversight that the dispatcher for (at least) the actor 
parts of the pool infrastructure cannot be configured anywhere. I created a 
ticket to track adding this feature:

https://github.com/akka/akka-http/issues/1278

Thanks,
Johannes



On Thursday, June 22, 2017 at 12:44:53 PM UTC+2, Diego Martinoia wrote:
>
> Hi there,
>
> I'm wondering how to set the dispatcher on which to run all the 
> akka-http-client operations. There doesn't seem to be any config option for 
> it as far as I can tell.
>
> I have assigned a separate dispatcher at the materializer that I'm using 
> in the client, but I'm unsure if this is sufficient because the logs still 
> point to the default dispatcher threads.
>
> Is there something else that needs to be done? I have seen inside the 
> library that the PoolGateway might still be running on the default 
> dispatcher even with the materializer one re-assigned.
>
> Thanks,
>
> D.
>



[akka-user] Re: Akka-Http Livelock with 100% CPU consumption when creating connection to non-existing host

2017-07-13 Thread johannes . rudolph
Hi Vanger,

thanks for the report. Have you changed the value of 
akka.http.host-connection-pool.min-connections 
(https://github.com/akka/akka-http/blob/master/akka-http-core/src/main/resources/reference.conf#L233)?
 
That would explain the behavior where the pool tries to keep connections 
alive to hosts it connected to previously.

In any case, it shouldn't loop infinitely in such a case. Can you create an 
issue with that problem?

Thanks,
Johannes


On Monday, July 10, 2017 at 5:52:34 PM UTC+2, Vanger B wrote:
>
> We have a simple proxy based on the Akka Http low-level API. From time to 
> time we can observe very strange behavior of the proxy. We are still 
> gathering info, but I think we already have enough for a post in this 
> google group.
> At any time, our service can start consuming CPU and never stop doing so 
> by itself, but it can be "cured" with a restart. We found no correlation 
> with traffic volumes nor with particular HTTP endpoints. But it is more 
> likely to be related to DNS manipulations: we are hosted in AWS, and the 
> proxy routes traffic between resources that appear and disappear 
> dynamically (for instance: DNS records and LoadBalancers), and it seems 
> that the connection pool could have a bad time when that happens. However, 
> that assumption may be wrong.
>  
> It looks like it only consumes available CPU time but doesn't steal time 
> from other threads; the proxy works normally (if you can call it "normal") 
> with 100% CPU consumption. When this happens, only the "average response 
> time" is affected - some constant time (tens of ms) is added by this 
> useless activity. However, we don't agree to live with that :)
>
> What do we have:
>
>1. Akka-http 10.0.8
>2. We setup our proxy as shown below:
>
>val src = Http(system).bind(interface = host, port = port)
>
>src.to(Sink.foreach { connection =>
>  connection.handleWithAsyncHandler { request =>
>  //*rerouting original request here*
>  }
>}).run()
>
>3. We completely isolated the machine from the outside world and we are pretty 
>sure that no external requests get to the proxy. However, different 
>monitoring utilities show there are incoming and outgoing "traffic" on the 
>instance.
>4. There are no application logs at all without external requests. If 
>we turn the root log level to TRACE (without a service restart), we only see 
>Akka logging with a repeating pattern:
>5. [07.07.2017 18:42:21.796] DEBUG 
>[Proxy-akka.actor.default-dispatcher-13915] [akka.stream.impl.io.TLSActor] 
>closing output
>[07.07.2017 18:42:21.797] DEBUG 
>[Proxy-akka.actor.default-dispatcher-13915] 
>[a.h.impl.engine.client.PoolGateway] [1]  connection was 
>closed by peer while no requests were in flight
>[07.07.2017 18:42:21.797] DEBUG 
>[Proxy-akka.actor.default-dispatcher-13915] 
>[a.h.impl.engine.client.PoolGateway] [1] Idle -> Unconnected
>[07.07.2017 18:42:21.797] DEBUG 
>[Proxy-akka.actor.default-dispatcher-13915] 
>[a.h.impl.engine.client.PoolGateway] [1] Unconnected -> Idle
>[07.07.2017 18:42:21.797] DEBUG 
>[Proxy-akka.actor.default-dispatcher-13915] 
>[a.h.impl.engine.client.PoolGateway] [1]  Establishing 
>connection...
>[07.07.2017 18:42:21.797] DEBUG 
>[Proxy-akka.actor.default-dispatcher-13915] [akka.stream.impl.io.TLSActor] 
>closing output
>[07.07.2017 18:42:21.814] DEBUG 
>[Proxy-akka.actor.default-dispatcher-16344] 
>[a.h.impl.engine.client.PoolGateway] [2]  connection was 
>closed by peer while no requests were in flight
>[07.07.2017 18:42:21.814] DEBUG 
>[Proxy-akka.actor.default-dispatcher-16344] [akka.stream.impl.io.TLSActor] 
>closing output
>[07.07.2017 18:42:21.814] DEBUG 
>[Proxy-akka.actor.default-dispatcher-16344] 
>[a.h.impl.engine.client.PoolGateway] [2] Idle -> Unconnected
>[07.07.2017 18:42:21.814] DEBUG 
>[Proxy-akka.actor.default-dispatcher-16344] 
>[a.h.impl.engine.client.PoolGateway] [2] Unconnected -> Idle
>[07.07.2017 18:42:21.814] DEBUG 
>[Proxy-akka.actor.default-dispatcher-16344] 
>[a.h.impl.engine.client.PoolGateway] [2]  Establishing 
>connection...
>[07.07.2017 18:42:21.814] DEBUG 
>[Proxy-akka.actor.default-dispatcher-16344] [akka.stream.impl.io.TLSActor] 
>closing output
>[07.07.2017 18:42:21.814] DEBUG 
>[Proxy-akka.actor.default-dispatcher-16344] 
> [akka.io.TcpOutgoingConnection] 
>Resolving loadbala-1x4ty7dih7yp4-545989797.us-east-1.elb.amazonaws.com 
>before connecting
>[07.07.2017 18:42:21.814] DEBUG 
>[Proxy-akka.actor.default-dispatcher-16344] 
> [akka.io.TcpOutgoingConnection] 
>Could not establish connection to [
>loadbala-1x4ty7dih7yp4-545989797.us-east-1.elb.amazonaws.com:443] due 
>to java.net.UnknownHostException: 
>loadbala-1x4ty7dih7yp4-545989797.us-east-1.elb.amazonaws.com
>[07.07.2017 18:42:21.815] DEBUG 
>[Proxy-akka.actor.default-dispatcher

[akka-user] sbt-revolver 0.9.0 released with sbt 1.0.0 support

2017-08-15 Thread johannes . rudolph
Dear fast application restarters,

we just released sbt-revolver 0.9.0 which is the first version of
sbt-revolver cross-built for sbt 0.13.x and 1.0.x. Thanks go to Olli
Helenius / @liff who contributed the sbt 1.0 compatibility changes
(#62).

We also merged a long-standing PR that allows customizing environment
variables for the run (#63). Thanks to Daniel Moran / @danxmoran for
this contribution.

Happy revolving,

Johannes for the Akka Team



[akka-user] Re: Akka HTTP usage of HttpEntity.toStrict

2017-08-15 Thread johannes . rudolph
Hi Yannick,

if you want to log the complete request contents, then there is no other 
way than to collect everything into memory (actually, that's a consequence 
of logging, not of the API).

In that case, you can use the toStrict method or the toStrictEntity directive 
at the root of your routing tree to make sure the whole contents are loaded 
into memory once and can then be used from different places.
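A sketch of the directive variant (the route, the 3-second timeout, and the String entity are illustrative, not taken from the original post):

```scala
import scala.concurrent.duration._
import akka.http.scaladsl.server.Directives._

// toStrictEntity buffers the complete request entity in memory (rejecting
// the request if collecting it takes longer than the given timeout).
// Afterwards the entity is Strict and can safely be consumed more than
// once, e.g. by a logging directive and by entity(...) unmarshalling.
val route =
  toStrictEntity(3.seconds) {
    logRequest("incoming") {
      path("someapi") {
        post {
          entity(as[String]) { body =>
            complete(s"received ${body.length} bytes")
          }
        }
      }
    }
  }
```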

Johannes

On Monday, August 14, 2017 at 7:48:54 PM UTC+2, Yannick Lazzari wrote:
>
> Hi,
>
> There is an opened ticket in akka-http regarding exceptions being thrown 
> after using HttpEntity.toStrict: 
> https://github.com/akka/akka-http/issues/1177.
>
> My question is regarding Johannes Rudolph's comment about it being a 
> workaround:
>
> Using toStrictEntity is basically only a workaround and not a solution 
>> because it will mean that all entity data will be buffered in memory which 
>> means that your server will need more memory per concurrent request than 
>> necessary.
>
>
> Let's say you have the need to log the complete request payload of all 
> requests coming in to a service. If you want to allow the logging logic to 
> fully consume to request payload and still allow the entity to be 
> unmarshalled to the desired object later on, what would be another way to 
> ensure this never fails other than using toStrict, forcing the entire 
> payload into memory and allowing it to be consumed more than once?
>
> For example, consider the following route:
>
>
> val route = extractExecutionContext & extractMaterializer & extractLog { 
> (ec, mat, log) =>
>   logRequest(LoggingMagnet(_ => logRequestDetails(_)(ec, mat, log))) {
> path("someapi") {
>   post {
> entity(as[SomeInputClass]) { input =>
>   // do something with input
> }
>   }
> } 
>   }
> }
>
> def logRequestDetails(req: HttpRequest)(implicit ec: ExecutionContext, 
> materializer: Materializer, log: LoggingAdapter): Unit =
>   Unmarshal(req.entity).to[String].map(b => s"The request body is 
> $b").onComplete {
> case Success(s) => log.info(s)
> case Failure(e) => log.error(e, "Error")
>   }
>
>
> This is obviously a contrived example, but this logs the request body. It 
> works, but we've seen such a pattern inconsistently yield 
> java.lang.IllegalStateException: 
> Substream Source cannot be materialized more than once errors. Isn't this 
> a case where converting to a strict entity the correct way to address such 
> problems, i.e. allow the request entity to be consumed more than once? If 
> not, what would be the correct way?
>
> Thanks for any insight you might have!
>
> Yannick
>



[akka-user] Re: [akka-http] Http().superPool() and MergeHub.source backpressure

2017-08-15 Thread johannes . rudolph
Hi Jeff,

if you don't read the response bodies of all the responses, your pipeline 
will stall because the super pool connection are still waiting for your 
code to actually read the responses. In your example, try to add 
`x.discardEntityBytes` (or actually read the entity) inside of the 
`Sink.foreach` block.

See 
http://doc.akka.io/docs/akka-http/current/scala/http/implications-of-streaming-http-entity.html
 
for more information on that topic.
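Applied to the strawman above, a hedged sketch of the fix (assuming the implicit system and materializer from the original example; note that superPool emits `(Try[HttpResponse], T)` pairs, so the cases are matched explicitly here):

```scala
import scala.util.{Failure, Success}
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.HttpRequest
import akka.stream.scaladsl.{MergeHub, Sink}

val sink = MergeHub.source[(HttpRequest, Int)]
  .via(Http().superPool[Int]())
  .to(Sink.foreach {
    case (Success(response), id) =>
      // Draining the entity hands the pooled connection back, so the
      // pool no longer stalls once max-connections responses are open.
      response.discardEntityBytes()
      println(s"finished $id")
    case (Failure(error), id) =>
      println(s"failed $id: $error")
  })
  .run()
```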

Johannes

On Thursday, August 10, 2017 at 1:53:18 AM UTC+2, Jeff wrote:
>
> I am getting behavior that I do not  understand combining 
> Http().superPool() with MergeHub.source. Here is a sample application
>
> import akka.actor.ActorSystem
> import akka.http.scaladsl.Http
> import akka.http.scaladsl.model.{HttpRequest, Uri}
> import akka.stream.{ActorMaterializer, ThrottleMode}
> import akka.stream.scaladsl.{MergeHub, Sink, Source}
> import com.typesafe.config.ConfigFactory
>
> import scala.concurrent.duration._
>
> object Main extends App {
>   val config = ConfigFactory.load()
>
>   implicit val system = ActorSystem("test-system", config)
>   implicit val executionContext = system.dispatcher
>   implicit val materializer = ActorMaterializer()
>
>   val request = HttpRequest(uri = Uri("http://www.google.com"))
>
>   val sink = MergeHub.source[(HttpRequest, Int)]
> .throttle(1, 1.second, 1, ThrottleMode.shaping)
> .map { x =>
>   println(s"started ${x._2}")
>   x
> }
> .via(Http().superPool())
> .to(Sink.foreach { x =>
>   println(s"finished ${x._2}")
> }).run()
>
>   for (x <- 1 to 100) Source.single(request -> x).runWith(sink)
> }
>
>
> The output of which is 
>
> started 2
> finished 2
> started 3
> finished 3
> started 4
> finished 4
> started 1
> finished 1
> started 5
> started 6
> started 7
> started 8
>
> My understanding from the documentation is that Http().superPool() will 
> backpressure. However, after 8 iterations, nothing else happens. Thoughts?
>
> Thanks
> Jeff
>



[akka-user] Re: [akka-http] adding cookie attributes

2017-08-15 Thread johannes . rudolph
Hi Christophe,

yes, that's correct. There seems to be no way to model custom cookie 
attributes right now. Using RawHeader is the right workaround for now. I 
filed https://github.com/akka/akka-http/issues/1354 to discuss improvements.

Johannes

On Monday, August 14, 2017 at 10:31:11 AM UTC+2, Christophe Pache wrote:
>
> Hello everyone!
>
> I'm trying to add a sameSite attribute to cookies using the scaladsl and 
> it looks like the Cookie API is quite closed. What I would like to do is 
> something like 
> `Set-Cookie: key=value; HttpOnly; SameSite=strict`
> My feeling is that the only current way to do so is to craft a header with 
> something like 
> `RawHeader("Set-Cookie", "key=value; HttpOnly; SameSite=strict")`
> If so, what about adding attributes to cookie header?
>
> Thanks for any comment on that! Have a nice akking day!
> Christophe
>



[akka-user] Re: Akka Http & Streams interaction, http requests getting blocked, program is stuck

2017-08-22 Thread johannes . rudolph
Hi Jerry,

your explanation is spot-on. You need to make sure that the entities of 
all responses are consumed. In your case, that may not happen because of a 
race-condition: `take(2)` will cancel the stream and responses might get 
discarded between the first and the second `mapAsyncUnordered`. That this 
will permanently stall the host connection pool is a bug 
(https://github.com/akka/akka-http/issues/1248).

It might help if you combine the two mapAsyncUnordered stream elements into 
one to make sure that every entity is consumed like this:

source
  .mapAsyncUnordered(paralellism)(x =>
    http.singleRequest(x).flatMap(response =>
      Unmarshal(response).to[PareResult]))
  .take(2)
  .to(Sink.ignore)

Johannes

On Monday, August 21, 2017 at 10:54:51 PM UTC+2, Jerry Tworek wrote:
>
> Hi,
>
> I'm constructing a pipeline where I send a series of requests to a host, 
> parse them, process and save into a database and after some iteration it 
> seems my program is getting stuck. 
> I've tried to dive as deeply as I can into the cause, but there are enough 
> moving parts that it's hard to pinpoint exactly what's happening. It seems 
> that I'm exhausting the amount of available http connections to given host, 
> and new requests are just stalled while the old ones for some reason don't 
> get "drained".
>
> The code functions in the same way, whether I use connection pool or a 
> single request. The easiest way to replicate the issue is to use .take(n) 
> on the stream e.g. in the following way:
>
> source.mapAsyncUnordered(paralellism)(x => 
> http.singleRequest(x)).mapAsyncUnordered(paralellism)(x => 
> Unmarshal(x).to[PareResult]).take(2).to(Sink.ignore)
>
> My production case was slightly more complex, but after some time I 
> managed to reduce it to the above. I don't know if my understanding is 
> correct, but I guess, that more requests are requested from source than two 
> and land in a buffer somewhere between stream stages. Then after 2 reach 
> the stage .take(2), downstream finishes the stream which means the 
> unmarshalling stage is finished and some requests stay in the "limbo" never 
> reaching unmarshalling stage, thus never being drained and therefore 
> blocking http requests. 
>
> So my question is, is my interpretation above correct or is there another 
> possible issue in play that I don't understand? If it is, is it my bug in 
> the code of constructing the stream incorrectly, or is it a bug in Akka 
> streams? 
>
> Regards,
> Jerry
>



Re: [akka-user] Re: Akka Http & Streams interaction, http requests getting blocked, program is stuck

2017-08-22 Thread Johannes Rudolph
Hi Jerry,

On Tue, Aug 22, 2017 at 12:20 PM, Jerry Tworek 
wrote:

> Do I understand it correctly, that in this case cachedHostConnectionPool
> is basically unusable? I assume it will always be executed in a separate
> stage from the next stage, that actually consumes the request, and it can
> always happen that somewhere further down the stream something will finish
> or stop the stream because of an exception and cause the connections to
> leak?
>

Yes, until https://github.com/akka/akka-http/issues/1248 is fixed, a
connection can leak.



[akka-user] Re: Akka Http Performance with high-latency route.

2017-09-11 Thread johannes . rudolph
Hi Dominic,

it depends on what you mean by a "high-latency" API. If you mean that some 
external service is called which takes a long while, then you need to 
ensure that executing this external call does not block the thread and the 
thread can be used for other tasks while waiting for the result of the 
external call. The same consideration applies if the API call is local but 
is heavy on IO.

Here's more information about blocking operations in Akka / Akka-Http: 
http://doc.akka.io/docs/akka-http/current/scala/http/handling-blocking-operations-in-akka-http-routes.html
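A hedged sketch of that pattern (the dispatcher name `my-blocking-dispatcher` and `callSlowExternalApi` are illustrative; the dispatcher itself has to be defined in application.conf as shown in the linked documentation):

```scala
import scala.concurrent.{ExecutionContext, Future}
import akka.actor.ActorSystem
import akka.http.scaladsl.server.Directives._

implicit val system = ActorSystem()

// A dedicated, bounded dispatcher keeps long-running calls from starving
// the default dispatcher that drives the HTTP connection handling.
val blockingEc: ExecutionContext =
  system.dispatchers.lookup("my-blocking-dispatcher")

def callSlowExternalApi(): String = "result" // placeholder for the slow call

val route =
  path("actions") {
    get {
      // Completing with a Future keeps the route itself non-blocking;
      // the slow work runs on the dedicated dispatcher.
      complete(Future(callSlowExternalApi())(blockingEc))
    }
  }
```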

Johannes

On Monday, September 11, 2017 at 10:58:44 AM UTC+2, Dominic Kim wrote:
>
> Hi.
>
> I am newbie to Akka-http.
> I got about  only 1,500 TPS with Akka-http.
>
> I am running performance benchmark on Apahce OpenWhisk which uses 
> Akka-http as a main web server.
> In OpenWhisk, there are many routes; some have low latency and a few 
> have relatively high latency.
> For example,
>
>  1. Ping API: 25ms
>  2. Invoking an action API: 250ms
>  3. Get all actions API: 500ms.
>
> I got about 15000 TPS with ping API but got about 1500 TPS with Invoking 
> an action API.
> I used 40cores CPU with 128GB memory.
> I monitored the number of thread and it was max 140, though there were 
> idle CPU and memory.
>
> Since their latency difference and TPS difference are about 10 times 
> between Ping(20ms, 15000TPS) and Invoking API(200ms, 1500TPS),
> I think this is because there are fixed number of threads(140) and they 
> could not handle more requests due to high latency of each requests.
> (When I run `get all actions API` which has about 20 times slower than 
> `Ping API`, I got about 800 TPS which is also 20 times lesser than 15000 
> TPS)
>
> Since there were idle CPU (50%) and memory (120GB), it should create more 
> threads and throughput should be higher even though latency is high.
>
> Do I have to configure anything to increase the number of concurrent 
> threads?
>
> I am using this version of libraries.
>
> compile 'com.typesafe.akka:akka-actor_2.11:2.4.16'
> compile 'com.typesafe.akka:akka-slf4j_2.11:2.4.16'
> compile 'com.typesafe.akka:akka-http-core_2.11:10.0.9'
> compile 'com.typesafe.akka:akka-http-spray-json_2.11:10.0.2'
>
>
>
> This is my(OpenWhisk`s) akka-http configurations.
> akka.http {
>   server {
> request-timeout = 90s
> max-connections = 8192
> stats-support = off
> idle-timeout = 120s
>
> parsing {
>   max-uri-length = 8k   
>   max-content-length = 50m
> }
>   }
> }
>
>
>
> Thanks in advance
> Regards
> Dominic
>
>



[akka-user] Re: Akka Http: What does the source returned by http://doc.akka.io/japi/akka/2.4.5/akka/http/scaladsl/model/HttpEntity.html#getDataBytes-- materialize to?

2017-09-11 Thread johannes . rudolph
Hi Bwmat,

On Saturday, September 9, 2017 at 2:32:13 AM UTC+2, Bwmat wrote:
>
> The type is just Object, and it's not documented in the linked javadoc.
>

It is undefined. It needs to have this type so that users can pass in 
sources with any materialized value. The Akka Http implementation will 
mostly return `NotUsed`.

And btw: make sure you use the latest version of Akka-Http, which is 
10.0.10.

Johannes



[akka-user] Re: Akka Http memory leak suspect

2017-09-25 Thread johannes . rudolph
Hi Bartosz,

I can look into the heap dump. You can send it to me privately. If that's 
not possible could you post an histogram? It would be great if that could 
be filtered once for subclasses of `Actor` (which will probably be 
dominated by `ActorGraphInterpreter`) and once filtered by `GraphStage` 
which might show which streams stay open.

Johannes

On Monday, September 25, 2017 at 10:39:01 AM UTC+2, Bartosz Jankiewicz 
wrote:
>
> I have been running an app with Akka Http 1.0.9.
>
> It had only a single endpoint responding with JSON. The service returned the 
> value as a future, therefore I used onComplete semantics.
>
> The app was consistently running into OoM issues. Heap dump analysis has 
> led me to 1,536,693 instances of akka.actor.ActorCell. Along with 
> accompanying objects (scala.collection.immutable.RedBlackTree$BlackTree) 
>  it saturated the heap. All ActorCell objects seem to be related to 
> Akka-Streams - their names are: StreamSupervisor-xx
>
> Has anyone fallen into a similar issue?
>



[akka-user] Re: Cluster: Losing messages when rebalancing

2017-09-25 Thread johannes . rudolph
Hi Eduardo,

cluster sharding has at-most-once delivery (as most of Akka), so losing some 
messages is to be expected. Persistent actors can opt in to at-least-once 
delivery (see 
http://doc.akka.io/docs/akka/current/scala/persistence.html#at-least-once-delivery); 
for other actors, you need to make sure to confirm and resend messages 
manually.
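For the manual case, the confirm-and-resend bookkeeping boils down to keeping unacknowledged messages around until a confirmation arrives. A minimal plain-Java sketch of the idea (class and method names here are illustrative, not an Akka API):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of at-least-once bookkeeping; not an Akka API.
class AtLeastOnceBuffer {
    private long nextId = 0;
    private final Map<Long, String> unacked = new LinkedHashMap<>();

    // Record a message before sending it; returns the delivery id.
    long deliver(String payload) {
        long id = nextId++;
        unacked.put(id, payload);
        return id;
    }

    // Called when the receiver confirms a delivery id.
    void confirm(long id) {
        unacked.remove(id);
    }

    // Messages that still need to be resent (e.g. on a timer tick).
    List<String> pendingRedeliveries() {
        return new ArrayList<>(unacked.values());
    }
}

public class Main {
    public static void main(String[] args) {
        AtLeastOnceBuffer buffer = new AtLeastOnceBuffer();
        long a = buffer.deliver("job-1");
        buffer.deliver("job-2");
        buffer.confirm(a); // receiver acknowledged job-1
        System.out.println(buffer.pendingRedeliveries()); // [job-2]
    }
}
```

On a timer tick the sender would re-send everything in `pendingRedeliveries()`; the receiver must then deduplicate by delivery id, since at-least-once implies possible duplicates.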

Johannes

On Friday, September 22, 2017 at 6:02:48 PM UTC+2, Eduardo Fernandes wrote:
>
> Hi all.
>
> I'm using Akka 2.3.13 [I know... a bit old :(  ] with Java.
>
> A few times I'm losing messages (in a very intensive concurrent 
> environment).  In my case a message is sent to the actor just before the 
> postStop() method is called, indicating that the kill action was sent to it 
> due to the rebalancing logic. 
>
> By documentation the PoisonPill is sent and then the ShardRegion actor 
> (proxy) stops sending messages until the shard is completely rebalanced. 
>  Following my traces, the only case where I lose a message is when 
> I'm sending the message using the shard region actor which is physically in 
> the same process as the destination actor. I have a unique id in my 
> messages and the actor prints it on reception. Other commands sent to the same 
> actor from other nodes after rebalancing work fine. 
>
> I suppose the command would be queued and then sent automatically after 
> rebalancing but I can't observe this.
>
> Do I have to do something programmatically to make this work?
>
> Many thanks in advance for any help
>
> /Eduardo
>



Re: [akka-user] Re: Akka Http memory leak suspect

2017-09-26 Thread Johannes Rudolph
On Tue, Sep 26, 2017 at 7:18 AM, Patrik Nordwall 
wrote:

> If the names are StreamSupervisor- I think it can be that a new
> Materializer is created for each request. I don’t know if that is done by
> your application or by Akka Http. Does that ring any bells? Do you have any
> creation of stream materializers in your code?
>
>
Ah good point. I just assumed that it would be child actors of the
supervisor since they also have that in the name but if it's the supervisor
itself, creating too many Materializers could really be the cause.

Johannes



[akka-user] Re: Materialize future after content negotiation

2017-09-26 Thread johannes . rudolph
Hi Muntis,

you are right, `Marshaller.withFixedContentType` is a bit restricted in 
that regard. Fortunately, it is only a shortcut for simpler cases and the 
full asynchronous functionality is available for marshallers. Try something 
like

Marshaller[Iterator[Data], HttpResponse] { implicit ec => it =>
  doSomethingWhichReturnsFutureOfHttpResponse.map { response =>
    Marshalling.WithFixedContentType(contentType, () => response) :: Nil
  }
}

Cheers,
Johannes

On Wednesday, September 20, 2017 at 12:10:24 PM UTC+2, Muntis Grube wrote:
>
> Hello
>
> Is there a way to trick marshallers to accept my Future[HttpResponse] 
> in withFixedContentType?
>
> To provide large responses from a cursor I'm trying to materialize a class 
> that extends Iterator. To do that we created a SourceShape[ByteString] that 
> wraps the iterator and a 
> corresponding GraphStageWithMaterializedValue[SinkShape[ByteString], 
> Future[MessageEntity]]
> that pulls the first values from the source and decides if HttpEntity.Strict 
> or HttpEntity.Chunked should be created. 
>
>
>   def httpResponse(iterator: Iterator[Data])(implicit ec: ExecutionContext) = {
>     Await.result(Source.fromGraph(sourceGrap(iterator)).runWith(entitySink)
>       .map(entity => HttpResponse(entity = entity)), 60 seconds)
>   }
>
>   val toResponseIteratorJsonMarshaller: ToResponseMarshaller[Iterator[Data]] =
>     Marshaller.withFixedContentType(`application/json`) {
>       result => httpResponse(result)
>     }
>
>   implicit val toResponseIteratorMarshaller: ToResponseMarshaller[Iterator[Data]] =
>     Marshaller.oneOf(
>       toResponseIteratorJsonMarshaller,
>       toResponseIteratorOdsMarshaller,
>       toResponseIteratorExcelMarshaller
>     )
>
>
> As you can probably guess, the biggest concern for me is the 
> unnecessary Await.result. One of the promising solutions was to drop the 
> await and to rewrite the marshaller as: 
>
>  def httpResponse(iterator: Iterator[Data])(implicit ec: ExecutionContext) = {
>    Source.fromGraph(sourceGrap(iterator)).runWith(entitySink)
>      .map(entity => HttpResponse(entity = entity))
>  }
>
>  val toResponseIteratorJsonMarshaller: ToResponseMarshaller[Iterator[Data]] =
>    Marshaller { implicit ec => result =>
>      httpResponse(result).map(response =>
>        Marshalling.WithFixedContentType(`application/json`, () => response) :: Nil)
>    }
>
> Unfortunately the project is structured in a way that the query is executed 
> and the cursor is opened when the iterator's hasNext or next methods are 
> called. And in this solution that happens before content negotiation is 
> done, which causes multiple queries for each response type at best, or 
> multiple queries on one connection at worst. 
>
>
> Thanks.
> Muntis
>



[akka-user] Re: Materialize future after content negotiation

2017-09-26 Thread johannes . rudolph
Oops, one should read the whole question before answering... Just saw that 
you already tried that. Unfortunately, it seems that this is indeed a 
shortcoming of the current model.

I guess with a bit of fiddling you could try making all of those 
marshallers marshal to `Future[HttpResponse]` instead of `HttpResponse` and 
then use something like



val toResponseIteratorJsonMarshaller: Marshaller[Iterator[Data], Future[HttpResponse]] =
  Marshaller.withFixedContentType(`application/json`) {
    result => httpResponse(result)
  }

implicit val toResponseIteratorMarshaller: Marshaller[Iterator[Data], Future[HttpResponse]] =
  Marshaller.oneOf(
    toResponseIteratorJsonMarshaller,
    toResponseIteratorOdsMarshaller,
    toResponseIteratorExcelMarshaller
  )

and then in your route:

val responseFuture =
  Marshal(data).toResponseFor(request)(toResponseIteratorMarshaller) // Future[Future[HttpResponse]]
    .flatMap(identity)

complete(responseFuture)

Would be interesting to know if that works.
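The flatten step translates directly to plain Java futures, which may make the shape clearer; `thenCompose(Function.identity())` plays the role of `flatMap(identity)` here (the `String` payload merely stands in for `HttpResponse`):

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

public class Main {
    public static void main(String[] args) {
        // Stand-in for Future[Future[HttpResponse]]: a future of a future.
        CompletableFuture<CompletableFuture<String>> nested =
            CompletableFuture.supplyAsync(() ->
                CompletableFuture.supplyAsync(() -> "HTTP/1.1 200 OK"));

        // thenCompose(identity) flattens one level, like flatMap(identity).
        CompletableFuture<String> flat = nested.thenCompose(Function.identity());

        System.out.println(flat.join()); // HTTP/1.1 200 OK
    }
}
```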

Johannes



Re: [akka-user] Re: Akka Http memory leak suspect

2017-09-27 Thread Johannes Rudolph
Hi Bartosz,

I had a quick look into the dump. It contains >317000 StreamSupervisors, so
creating too many materializers really is the issue. Note that the
materializer itself might go out of scope but the engine still stays alive
if the materializer has not been shut down manually.

I created https://github.com/akka/akka/issues/23736 to discuss if we could
warn if the `Materializer` reference is not referenced any more but the
infrastructure is still alive.
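The same leak pattern exists with plain thread pools, which makes for a rough analogy (this is Java stdlib code, not Akka): a pool created per request keeps its non-daemon threads alive until it is explicitly shut down, just as a Materializer that merely goes out of scope keeps its StreamSupervisor running. The fix in both cases is one shared, long-lived instance:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Main {
    // One shared pool for the whole application, mirroring a single
    // long-lived Materializer instead of one per request.
    private static final ExecutorService POOL = Executors.newFixedThreadPool(4);

    public static void main(String[] args) throws Exception {
        // Anti-pattern (don't do this): creating
        // Executors.newFixedThreadPool(4) per request leaks non-daemon
        // threads unless shutdown() is called -- just like an unshutdown
        // Materializer keeps its stream infrastructure alive.

        int result = POOL.submit(() -> 21 * 2).get();
        System.out.println(result); // 42

        POOL.shutdown(); // release threads when the application stops
    }
}
```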

Johannes



On Tue, Sep 26, 2017 at 1:59 PM, Bartosz Jankiewicz <
bartosz.jankiew...@gmail.com> wrote:

> I have verified that but there are 2 places where I declare the
> materializers. Both are declared as vals. I will verify the number of
> materializer instances in my heap-dump to confirm.
>
> On Tue, 26 Sep 2017 at 13:24 Johannes Rudolph wrote:
>
>> On Tue, Sep 26, 2017 at 7:18 AM, Patrik Nordwall <
>> patrik.nordw...@gmail.com> wrote:
>>
>>> If the names are StreamSupervisor- I think it can be that a new
>>> Materializer is created for each request. I don’t know if that is done by
>>> your application or by Akka Http. Does that ring any bells? Do you have any
>>> creation of stream materializers in your code?
>>>
>>>
>> Ah good point. I just assumed that it would be child actors of the
>> supervisor since they also have that in the name but if it's the supervisor
>> itself, creating too many Materializers could really be the cause.
>>
>> Johannes
>>
>>



Re: [akka-user] Tuning default dispatcher seems to have no effect

2017-09-28 Thread johannes . rudolph

Hi Kilic,

Try looking at stack traces during the busy periods (e.g. use `jstack` on 
the command line to gather some), that should give you a clue what's going 
on. In the picture you sent in your first email there were actually only 8 
regular pool threads. Are there times where more is going on?

Johannes

On Thursday, September 28, 2017 at 11:25:26 AM UTC+2, Kilic Ali-Firat wrote:
>
> Hi, 
>
> I know that I need to be careful when I have blocking tasks but in the 
> context of my app, I have no blocking tasks (or blocking calls). 
>
> To give more details, my two worker nodes have a router (round robin 
> group) with 10 worker actors each. All my workers are stateless and 
> execute most of the time a job which is to read data from redis and put it 
> in s3. So, in the worst case, I can have 10 jobs in parallel per node. The 
> thing is that my jobs are using Futures (reading in an async way from redis, 
> uploading in an async way to s3) like below, and reading the logs, it seems 
> that sometimes a worker executing a job can receive another job to 
> execute. This causes a single worker to execute more than one job and 
> to use CPU resources. 
>
> override def receive = {
> case jDump: JobDump =>
>   this.logReceiveMsg(jDump)
>   this.executeJobDump(jDump) pipeTo sender()
> }
>
> I'm doing some benchmarks to see how my cluster supports long running 
> tests with a high number of reads/writes from redis/to s3. Monitoring my 
> worker with jvisualvm, I see that my CPU (during reads/writes) increases to 
> ~100 % and then decreases to 0 % when no job is available. This is the 
> expected behavior but over time, my CPU seems to not support such a load 
> (please see the screenshot that I attached to this answer). My test crashed 
> when the CPU did not decrease as expected.
>
> The reason why I want to tune my dispatcher is that maybe with a bounded 
> thread pool size, I can also limit the number of jobs executed in parallel 
> without spamming workers already busy with a job.
>  
> Le jeudi 28 septembre 2017 10:39:13 UTC+2, Konrad Malawski a écrit :
>>
>> This is not how a fork-join pool works, basically.
>>
>> Please read the docs about managing blocking: 
>>
>> https://doc.akka.io/docs/akka/2.5/scala/dispatchers.html#blocking-needs-careful-management
>>
>> Tuning the default dispatcher like you’re attempting to here would not 
>> have a positive effect by the way.
>> Please simply separate blocking or other behaviours on a different 
>> dispatcher, make it a thread-pool one etc.
>>
>> —
>> Konrad `kto.so` Malawski
>> Akka  @ Lightbend 
>>
>> On 28 September 2017 at 17:35:19, Kilic Ali-Firat (kilic.a...@gmail.com) 
>> wrote:
>>
>> Scala version : 2.11.7
>> Akka version : 2.4.17
>>
>> - 
>>
>> Hi Akka Team, 
>>
>> I'm writing an Akka application using Akka Cluster, Akka Actors and Akka 
>> Stream. I have a cluster of 3 nodes : 2 workers and 1 node consuming data 
>> from a source. 
>>
>> I want three configs files : application.conf, node.worker.conf and 
>> node.source.conf. I limit the scope of this message to application.conf and 
>> node.worker.conf. 
>>
>> Below you can find the content of node.worker.conf and application.conf : 
>>
>> akka.cluster.roles = [workers]
>>
>>
>> akka {
>>   loglevel = "INFO"
>>   loglevel = ${?FGS_AKKA_LOG_LEVEL}
>>
>>   actor {
>> provider = "akka.cluster.ClusterActorRefProvider"
>>
>> default-dispatcher {
>>   # This will be used if you have set "executor = "fork-join-executor""
>>   fork-join-executor {
>> # Min number of threads to cap factor-based parallelism number to
>> parallelism-min = 1
>>
>> # The parallelism factor is used to determine thread pool size using the
>> # following formula: ceil(available processors * factor). Resulting size
>> # is then bounded by the parallelism-min and parallelism-max values.
>> parallelism-factor = 1
>>
>> # Max number of threads to cap factor-based parallelism number to
>> parallelism-max = 8
>>   }
>>   throughput = 1024
>> }
>>
>>   }
>>
>>
>>   remote {
>> enabled-transports = ["akka.remote.netty.tcp"]
>> netty.tcp {
>>   hostname = ${clustering.node-ip}
>>   port = ${clustering.node-port}
>>
>>
>>
>>
>>   bind-hostname = "0.0.0.0"
>>   bind-port = ${clustering.node-port}
>> }
>>   }
>>
>>
>>   cluster {
>> seed-nodes = [
>>   "akka.tcp://"${clustering.cluster.name}"@"${clustering.fst-seed-node-ip}":"${clustering.fst-seed-node-port}
>>   "akka.tcp://"${clustering.cluster.name}"@"${clustering.snd-seed-node-ip}":"${clustering.snd-seed-node-port}
>> ]
>>   }
>> }
>>
>>
>> clustering {
>>   # Choose the same need port for the seed nodes. Seed nodes must be on
>>   # different machines.
>>   fst-seed-node-port = 2551
>>   fst-seed-node-port = ${?FGS_FIRST_SEED_NODE_PORT}
>>
>>   snd-seed-node-port = 2552
>>   snd

[akka-user] Re: GraphStageActor and ActorGraphInterpreter

2017-09-28 Thread johannes . rudolph
Hi Unmesh,

On Wednesday, September 27, 2017 at 3:01:24 PM UTC+2, Unmesh Joshi wrote:
>
> I was trying to go through the code to understand how GraphStages receive 
> actor messages. I see that GraphStageActor is an actor not created like 
> normal actors. It looks like all the messages to GraphStageActors are 
> wrapped in AsyncInput and passed to ActorGraphInterpreter. This means 
> messages to all the graph stages will essentially be executed by a single 
> ActorGraphInterpreter actor. Is this understanding true? If so, is there 
> any specific reason for creating GraphStageActor the way it is?
>

I guess you mean the ActorRef created when calling `getStageActor` in a 
GraphStage. Yes, your understanding is correct here. The basic reason is 
that stages that belong to one "fused island" are run together in a single 
actor. Consequently, messages received for those stages also need to be 
handled in the context of that actor to ensure the thread-safety of 
GraphStages. The benefit is that you can access and modify internal state 
of your GraphStage from within the message handler of the stage actor.

Does that make sense? What would you have expected instead?

Johannes
 



Re: [akka-user] Ask a lot of Actors inside an Iteration in an Akka-Scheduler and wait for reply of everyone results in a stop of actor-system and webserver

2017-09-28 Thread johannes . rudolph
Hi Simon,

as Johan said, you shouldn't use `get` to wait for the result of a future. 
That just synchronously blocks the thread from doing any other useful work. 
Instead, you can asynchronously handle the result of the future once it is 
available. Because it is so common, we have a pattern for this situation: 
you can use `pipeTo` to send the result of a future calculation back to 
your own actor and then handle it once it is there. If you structure your 
processing code in this way, everything is asynchronous and no threads need 
to be blocked while waiting for other work to be finished. You can read more 
about it 
here: 
https://doc.akka.io/docs/akka/current/java/actors.html#ask-send-and-receive-future

To do that you need to structure your `onReceive` method in a way that 
allows handling different kinds of messages. In the code you posted you 
seem to ignore the message that was sent to it?

Another, completely different way to structure this kind of code, is not to 
use any actor at all but to just write a method using the 
`CompletableFuture` combinators that returns a `CompletableFuture` in the 
end. Then the user of the method can decide when and how to deal with the 
result.
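A small sketch of that actor-less style (the method and names are illustrative, not taken from the original code): the method returns a `CompletableFuture` and the caller composes on it instead of blocking in `.get`:

```java
import java.util.concurrent.CompletableFuture;

public class Main {
    // Returns a future instead of blocking; the caller decides when and
    // how to consume the result.
    static CompletableFuture<String> reindexUser(int userId) {
        return CompletableFuture
            .supplyAsync(() -> "user-" + userId)   // e.g. load the user
            .thenApply(name -> name + ":indexed"); // e.g. index it
    }

    public static void main(String[] args) {
        // Compose asynchronously; no thread sits blocked in a .get() call.
        reindexUser(7)
            .thenAccept(System.out::println) // user-7:indexed
            .join(); // the demo's main thread only waits at the very end
    }
}
```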

Johannes

On Friday, September 15, 2017 at 10:34:17 AM UTC+2, Simon Jacobs wrote:
>
>
> Hey Johan,
>
> thanks for your reply! 
>
> Calling .get on a CompletableFuture blocks the thread until the future is 
> completed, don't do that. 
>
> That's the point. I need to call .get because otherwise the actor stops 
> working. I also tried to collect the stages and wait then for the Response:
>
>
> CompletableFuture.allOf(stages.toArray(new CompletableFuture[stages.size()])).toCompletableFuture().get()
>
>
> But also this results only in TimeoutExceptions (no matter what timeout is 
> given). The actors simply do nothing if I do not call .get. And that is the 
> point, that I do not understand.
>
> Best regards 
> Simon
>
>
> Am Dienstag, 12. September 2017 15:33:35 UTC+2 schrieb Akka Team:
>>
>> Calling .get on a CompletableFuture blocks the thread until the future is 
>> completed, don't do that. User.find.findPagedList also looks suspiciously 
>> like something blocking.
>> See this section of the docs for more info: 
>> http://doc.akka.io/docs/akka/current/java/dispatchers.html#problem-blocking-on-default-dispatcher
>>
>> --
>> Johan
>> Akka Team
>>
>>
>> On Fri, Sep 8, 2017 at 10:00 AM, 'Simon Jacobs' via Akka User List <
>> akka...@googlegroups.com> wrote:
>>
>>> Hi there,
>>>
>>> I have an akka scheduler to reindex some data in the search-engine and 
>>> to resize the images of every user.
>>>
>>> It will be called directly by
>>>
>>> system.scheduler().scheduleOnce(FiniteDuration.create(Long.valueOf(1), TimeUnit.SECONDS), reindexActor, "", system.dispatcher(), null);
>>>
>>> The (flatted to one method for better readability) actor:
>>>  
>>>@Override
>>> public void onReceive(Object msg) throws Exception
>>> {
>>> long start = System.currentTimeMillis();
>>> logger.info("Start reindexing of all User");
>>> 
>>> if (indexing) {
>>> logger.info("Already reindexing. Skip");
>>> return;
>>> }
>>> 
>>> try {
>>> int page = 0;
>>> PagedList<User> users;
>>> 
>>> do {
>>> users = User.find.findPagedList(page, COUNT_OF_ROWS);
>>> List<User> active = users.getList().stream().filter(g -> g.isActive()).collect(Collectors.toList());
>>> 
>>> List<CompletionStage<?>> stages = new ArrayList<>(COUNT_OF_ROWS);
>>> for (User user : active) {
>>> ActorRef userActor = system.actorOf(Props.create(DependencyInjector.class, injector, UpdateRedisActor.class));
>>> userActor.tell(new UpdateRedisActor.Index(user.getId()), null);
>>> 
>>> if (user.hasProfilePicture()) {
>>> /**
>>>  * get the image as FilePart (imitate http upload)
>>>  */
>>> File image = new File(user.getProfilePicture().getFileName());
>>> FilePart<Object> filePart = new FilePart<Object>("", user.getFirstname(), "image/jpg", image);
>>> 
>>> String randomFileName = UUID.randomUUID().toString();
>>> 
>>> /**
>>>  * Create new actor
>>>  */
>>> ActorRef imgActor = system.actorOf(Props.create(DependencyInjector.class, injector, ImageActor.class));
>>> 
>>> /**
>>>

[akka-user] Re: GraphStageActor and ActorGraphInterpreter

2017-09-28 Thread johannes . rudolph
Correct, it will limit parallelism. I usually see the streams 
infrastructure more as a control channel that makes sure that data flows 
correctly. These kinds of control tasks shouldn't require much overall CPU 
share, so it should not matter much. If you want to do CPU-intensive work 
you need to decide where and how to run it anyway (run it asynchronously 
with `mapAsync`, insert extra async boundaries in the stream, use extra 
dispatchers, etc).

In the end, the streams infrastructure introduces "just another" layer of 
CPU scheduling infrastructure into what you have anyways:

 * OS thread scheduler
 * Fork-Join-Pool task scheduler
 * Actor mailbox
 * and now the GraphInterpreter event execution queue

The latter ones are not preemptive which can lead to thread starvation 
issues when threads (or ActorGraphInterpreters) are blocked. Also fairness 
can be an issue. An ActorGraphInterpreter has a mailbox but also an 
internal queue, the mailbox has a throughput setting which defines how long 
to work on messages before allowing other actors to do other work. 
Similarly the GraphInterpreter has an "event horizon" which is basically 
the same but for stream signals.
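The throughput idea can be modeled with a toy drain loop that handles at most N queued events before yielding, which is the bound that the mailbox throughput setting (and, analogously, the interpreter's event horizon) imposes. This is a simplified model, not the actual Akka implementation:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class Main {
    // Drain at most `throughput` events, then yield so others can run.
    static int drain(Queue<String> mailbox, int throughput) {
        int processed = 0;
        while (processed < throughput && !mailbox.isEmpty()) {
            mailbox.poll(); // "handle" one message
            processed++;
        }
        return processed;
    }

    public static void main(String[] args) {
        Queue<String> mailbox = new ArrayDeque<>();
        for (int i = 0; i < 10; i++) mailbox.add("msg-" + i);

        // With throughput = 4, draining 10 messages takes three passes
        // (4 + 4 + 2), and other "actors" get a turn between passes.
        int passes = 0;
        while (!mailbox.isEmpty()) {
            drain(mailbox, 4);
            passes++;
        }
        System.out.println(passes); // 3
    }
}
```

A higher throughput value improves locality (fewer yields) at the cost of fairness toward other actors sharing the dispatcher.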

Regarding Akka HTTP, this should not be a problem because data traffic on a 
single HTTP connection is usually pretty linear: first a request needs to 
be parsed, then it needs to be handled, then the response needs to be sent 
out. All of those happen one after the other. But you are right that e.g. 
for HTTP/2 things can be different. On the other hand, introducing extra 
asynchronous boundaries for parallelism has a cost, and often it is better 
to spend that cost on the big picture, e.g. parallelize over multiple 
connections instead of processing within a single connection.

There's a related tricky question whether we should surround all user code 
with async boundaries to avoid unexpected deadlocks.

Johannes


On Thursday, September 28, 2017 at 1:37:22 PM UTC+2, Unmesh Joshi wrote:
>
> Yeah. I meant the ActorRef for a GraphStage. My only question then is, if 
> messages to all the GraphStage actors get serialized to the 
> ActorGraphInterpreter, will that potentially limit the possible parallel 
> execution? e.g. if HttpRequestParserStage and HttpResponseRendererStage 
> both receive actor messages, they will get executed sequentially where it 
> was potentially possible to handle them in parallel. It's hard to say how 
> much benefit this gives, but conceptually, thinking of each graph stage 
> as a separate actor is simpler than thinking of a graph of stages 
> backed by an Actor.
>
> On Thursday, 28 September 2017 15:59:53 UTC+5:30, 
> johannes...@lightbend.com wrote:
>>
>> Hi Unmesh,
>>
>> On Wednesday, September 27, 2017 at 3:01:24 PM UTC+2, Unmesh Joshi wrote:
>>>
>>> I was trying to go through the code to understand how GraphStages 
>>> receive actor messages. I see that GraphStageActor is a actor not created 
>>> like normal actors. I looks like all the messages to GraphStageActors are 
>>> wrapped in AsyncInput and passed to ActorGraphInterpreter. This means 
>>> messages to all the graph stages will essentially be executed by a single 
>>> ActorGraphInterpreter actor. Is this understanding true? If so, is there is 
>>> any specific reason for create GraphStageActor the way it is?
>>>
>>
>> I guess you mean the ActorRef created when calling `getStageActor` in a 
>> GraphStage. Yes, your understanding is correct here. The basic reason is 
>> that stages that belong to one "fused island" are run together in a single 
>> actor. Consequently, messages received for those stages also need to be 
>> handled in the context of that actor to ensure the thread-safety of 
>> GraphStages. The benefit is that you can access and modify internal state 
>> of your GraphStage from within the message handler of the stage actor.
>>
>> Does that make sense? What would you have expected instead?
>>
>> Johannes
>>  
>>
>



[akka-user] Re: GraphStageActor and ActorGraphInterpreter

2017-10-04 Thread johannes . rudolph
Cool, thanks for sharing, nice stuff :) We know, btw., that some pieces of 
the architecture are not completely optimal but are consequences of the 
history of the projects. E.g. it could make sense to write a streams-only 
implementation of the TCP layer instead of putting it on top of the actor 
based Akka IO.

One thing that seems to be missing in your project is support for streaming 
HTTP bodies. This is somewhat essential to the whole architecture because 
it shows how, while reading the request body (or writing the response body), 
backpressure needs to be maintained through all the layers.
Johannes

On Thursday, September 28, 2017 at 7:09:43 PM UTC+2, Unmesh Joshi wrote:
>
> Thanks, this makes it clear.
> Btw, one thing I am trying to understand is, when will be the stream not 
> 'fused'?  When 'async' boundaries are inserted?
>
> On a separate note, I am trying to create a very thin slice through Akka 
> HTTP, Akka Streams, Akka IO and Java NIO: a very simple Http server which 
> handles only GET requests to return a Hello World response. The aim is to 
> give developers minimal executable code (< 20) to look at and experiment 
> with to understand Akka Streams, Http and Akka IO. 
>
> I am hoping to improve it with some documentation. Please let me know if 
> you think this will be helpful.
> https://github.com/unmeshjoshi/reactiveio
>
>
>
> On Thursday, 28 September 2017 17:25:33 UTC+5:30, 
> johannes...@lightbend.com wrote:
>>
>> Correct, it will limit parallelism. I usually see the streams 
>> infrastructure more as a control channel that makes sure that data flows 
>> correctly. These kinds of control tasks shouldn't require much overall CPU 
>> share, so it should not matter much. If you want to do CPU-intensive work 
>> you need to decide where and how to run it anyway (run it asynchronously 
>> with `mapAsync`, insert extra async boundaries in the stream, use extra 
>> dispatchers, etc).
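The options listed in the quoted paragraph can be sketched as follows (a sketch only; `expensiveWork` and the dedicated blocking dispatcher are assumptions, not part of the original message):

```scala
import scala.concurrent.{ExecutionContext, Future}

import akka.stream.Materializer
import akka.stream.scaladsl.{Sink, Source}

def expensiveWork(n: Int): Int = n * n // placeholder for CPU-heavy work

def run(implicit mat: Materializer, blockingEc: ExecutionContext) =
  Source(1 to 100)
    .async                               // explicit asynchronous boundary
    .mapAsync(parallelism = 4) { n =>
      Future(expensiveWork(n))(blockingEc) // run on a dedicated dispatcher
    }
    .runWith(Sink.ignore)
```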
>>
>> In the end, the streams infrastructure introduces "just another" layer of 
>> CPU scheduling infrastructure into what you have anyways:
>>
>>  * OS thread scheduler
>>  * Fork-Join-Pool task scheduler
>>  * Actor mailbox
>>  * and now the GraphInterpreter event execution queue
>>
>> The latter ones are not preemptive which can lead to thread starvation 
>> issues when threads (or ActorGraphInterpreters) are blocked. Also fairness 
>> can be an issue. An ActorGraphInterpreter has a mailbox but also an 
>> internal queue, the mailbox has a throughput setting which defines how long 
>> to work on messages before allowing other actors to do other work. 
>> Similarly the GraphInterpreter has an "event horizon" which is basically 
>> the same but for stream signals.
>>
>> Regarding Akka HTTP, this should not be a problem because data traffic on 
>> a single HTTP connection is usually pretty linear: first a request needs to 
>> be parsed, then it needs to be handled, then the response needs to be sent 
>> out. All of those happen one after each other. But you are right that e.g. 
>> for HTTP/2 things can be different. On the other hand, introducing extra 
>> asynchronous boundaries for parallelism has a cost, and often it is better 
>> to spent that cost on the big picture, e.g. parallelize over multiple 
>> connections instead of processing within a single connection.
>>
>> There's a related tricky question of whether we should surround all user 
>> code with async boundaries to avoid unexpected deadlocks.
>>
>> Johannes
>>
>>
>> On Thursday, September 28, 2017 at 1:37:22 PM UTC+2, Unmesh Joshi wrote:
>>>
>>> Yeah. I meant ActorRef for GraphStage. My only question then is: if 
>>> messages to all the GraphStage actors get serialized to the 
>>> ActorGraphInterpreter, will that potentially limit the possible parallel 
>>> execution? E.g. if HttpRequestParserStage and HttpResponseRendererStage 
>>> both receive actor messages, they will get executed sequentially where it 
>>> was potentially possible to handle them in parallel. It's hard to say how 
>>> much benefit this gives, but conceptually, thinking of each graph stage 
>>> as a separate actor is simpler as opposed to thinking of a graph of stages 
>>> backed by an Actor.
>>>
>>> On Thursday, 28 September 2017 15:59:53 UTC+5:30, 
>>> johannes...@lightbend.com wrote:

 Hi Unmesh,

 On Wednesday, September 27, 2017 at 3:01:24 PM UTC+2, Unmesh Joshi 
 wrote:
>
> I was trying to go through the code to understand how GraphStages 
> receive actor messages. I see that GraphStageActor is an actor not created 
> like normal actors. It looks like all the messages to GraphStageActors are 
> wrapped in AsyncInput and passed to ActorGraphInterpreter. This means 
> messages to all the graph stages will essentially be executed by a single 
> ActorGraphInterpreter actor. Is this understanding true? If so, is there 
> any specific reason for creating GraphStageActor the way it is?
>

 I guess you mean the A

Re: [akka-user] Performance of Akka-Http 2.5.4

2017-11-13 Thread johannes . rudolph
I missed this post before. 

I'd like to add another point. Akka Http hasn't been performance tested on 
a 40 core machine. The high idle CPU percentage means that either Akka / 
Akka Http is not configured correctly for this amount of cores or that 
there are actual contention issues at these levels of scale. It would be 
definitely interesting to know what the problem is to offer a better 
default experience for running Akka Http on this kind of hardware.

If you are still listening in, Jakub, it would be nice if you could set 
parallelism-max to the number of cores on your machine and/or set 
`parallelism-factor = 1` as Patrik suggested. One reason for bad 
performance could be that the default parallelism-factor of 3 would lead to 
120 threads battling for resources, starving each other of CPU time, maybe 
even while holding some resource. If this alone doesn't increase 
performance, a few stack dumps from the server process during steady state 
would help because that would likely point out places with high contention.
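For reference, a configuration sketch of what is suggested above (the concrete values are illustrative for a 40-core machine, not a recommendation):

```hocon
# application.conf
akka.actor.default-dispatcher {
  fork-join-executor {
    parallelism-factor = 1
    parallelism-min    = 8
    parallelism-max    = 40
  }
}
```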

For anyone else listening in here, I also wanted to stress that you need 
to put any kind of performance numbers into perspective. We cannot test 
everything in every environment and details usually matter in benchmarks. 
High CPU idle times like in this case mean that something currently just 
doesn't work correctly in this setting. For best performance, you need to 
benchmark for yourself on your own hardware and then be prepared to dig 
into issues.

Johannes



Re: [akka-user] Streaming proxy/tunel on top of akka-http

2017-11-13 Thread johannes . rudolph
I tried your code and it doesn't OOM for me. Have you tried it outside of a 
test suite? It might be that the test infrastructure is collecting all the 
data when you use something like `response.entity`. If that doesn't 
help, try capturing a heap dump on OOM and see where the memory is spent.
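Regarding the EntityStreamSizeException quoted below: for a proxy you usually don't want any entity size limit at all, since the body is streamed through rather than buffered. A sketch based on the quoted proxy code (a fragment, reusing `context` and `flow` from that snippet):

```scala
// Lift the size limit on the proxied response entity per request instead of
// raising the global akka.http.[server|client].parsing.max-content-length.
Source.single(context.request)
  .via(flow)
  .map(response => response.mapEntity(_.withoutSizeLimit))
  .runWith(Sink.head)
  .flatMap(response => context.complete(response))
```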

Johannes

On Wednesday, November 8, 2017 at 11:01:18 AM UTC+1, Jozsef Zsido wrote:
>
> Hi,
>
> Not much code I have, basically I took a sample for akka-http proxy: 
> https://gist.github.com/kovacshuni/afb7d53f40f501d0ab82
>
> trait BrowseServerRoutes3 {
>
>   implicit val system = ActorSystem("Browse")
>   implicit val materializer = ActorMaterializer()
>   implicit val ec = system.dispatcher
>
>   val proxy = Route { context =>
>
> val request = context.request
> println("Processing " + request)
>
> val flow = Http(system).outgoingConnection("10.66.0.4", 80)
>
> Source.single(context.request)
>   .map {
> _.withHeaders(HttpUtils.completeHeaderList(request))
>   .withUri(request.uri.path.toString())
>   }
>   .via(flow)
>   .runWith(Sink.head)
>   .flatMap(f => {
> context.complete(f)
>   })
>   }
> }
>
> object BrowseServer extends App with BrowseServerRoutes3 {
>
>   val binding = Http(system).bindAndHandle(handler = proxy, interface = 
> "0.0.0.0", port = 8080)
>   println(s"Server online.")
> }
>
> "Download" should "return OK" in {
>   Get("/test/huge_MP3WRAP.mp3") ~> proxy ~> check {
> response.status shouldEqual StatusCodes.OK
>   }
> }
>
>
> The first problem I have with this code is that I have a 2GB file at the 
> destination and I get the following exception:
> EntityStreamSizeException: actual entity size (Some(2272610895)) exceeded 
> content length limit (8388608 bytes)! You can configure this by setting 
> `akka.http.[server|client].parsing.max-content-length` or calling 
> `HttpEntity.withSizeLimit` before materializing the dataBytes stream.
>
> I see that the limit could be raised but this seems like akka wants to load 
> the entire data into memory. Actually I don't have a concrete value to put 
> there.
>



[akka-user] Re: Akka-HTTP compilation error using custom requireParam() directive

2017-11-13 Thread johannes . rudolph
Hi Evgeny,
you discovered one of the reasons for the magnet pattern. If you use 
`requireParam("param".as[Int]) { abc => ... }` then the `{ abc => }` block 
is mistaken for the implicit argument of `requireParam`. So, either you are 
ok with that and require users to use extra parentheses 
(`(requireParam("param".as[Int])) { abc => ... }`) or you use the magnet 
pattern at this level as well. You can read about the magnet pattern on the 
old spray blog (http://spray.io/blog/2012-12-13-the-magnet-pattern/) or 
just look into akka-http sources.
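As a tiny, self-contained illustration of the pattern (plain Scala, not the akka-http API; all names here are made up): the implicit conversion to the "magnet" is resolved before the trailing block is parsed, so the block can no longer be mistaken for an implicit argument.

```scala
trait ParamMagnet {
  type Out
  def apply(): Out
}

object ParamMagnet {
  // The implicit conversion captures the argument and fixes the result type.
  implicit def fromString(name: String): ParamMagnet { type Out = (String => String) => String } =
    new ParamMagnet {
      type Out = (String => String) => String
      def apply(): Out = inner => inner(name)
    }
}

// The dependent result type magnet.Out makes the trailing block an ordinary
// argument, so no extra parentheses are needed at the call site.
def requireParam(magnet: ParamMagnet): magnet.Out = magnet()

val result = requireParam("param") { value => s"got $value" }
// result == "got param"
```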

A better way to implement the feature would be just to use the existing 
implementation ;) and install a custom rejection handler that produces 
whatever output you would like. 
See 
https://doc.akka.io/docs/akka-http/10.0.10/scala/http/routing-dsl/rejections.html#customizing-rejection-handling
 
for how to do that.
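A sketch of that approach (assuming akka-http 10.0.x; `paramRejectionHandler` and the response text are made-up names, and the error message mirrors the one from the question):

```scala
import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.server._
import akka.http.scaladsl.server.Directives._

// Reuse the built-in parameter directive and customize the output produced
// for a missing parameter, instead of writing a new directive.
val paramRejectionHandler: RejectionHandler =
  RejectionHandler.newBuilder()
    .handle { case MissingQueryParamRejection(name) =>
      complete(StatusCodes.BadRequest, s"$name is missed!")
    }
    .result()

val negParam: Route =
  handleRejections(paramRejectionHandler) {
    parameter("param".as[Int]) { param =>
      complete((-param).toString)
    }
  }
```

A malformed (non-Int) value would surface as a MalformedQueryParamRejection, which can be handled in the same builder.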


Johannes


On Monday, November 13, 2017 at 9:22:19 AM UTC+1, Evgeny Veretennikov wrote:
>
> I developed a custom generic directive, which will provide a param of a 
> given type, if it exists, or else reject with my custom exception.
>
> import akka.http.scaladsl.common.NameReceptacle
> import akka.http.scaladsl.server.Directives._
> import akka.http.scaladsl.server.directives.ParameterDirectives.
> ParamDefAux
> import akka.http.scaladsl.server.{Directive1, Route}
> 
> class MyCustomException(msg: String) extends Exception(msg)
> 
> def requireParam[T](name: NameReceptacle[T])
>(implicit pdef: ParamDefAux[NameReceptacle[T], 
> Directive1[T]]): Directive1[T] =
>   parameter(name).recover { _ =>
> throw new MyCustomException(s"${name.name} is missed!")
>   }
>
> It works ok, if I want to create route, using two parameters, for example:
>
> val negSumParams: Route =
>   (requireParam("param1".as[Int]) & requireParam("param2".as[Int])) {
> (param1, param2) =>
>   complete((-param1-param2).toString)
>   }
>
> But if I try to use exactly one parameter, this doesn't compile:
>
> val negParamCompilationFail: Route =
>   requireParam("param".as[Int]) {
> param => // scalac complains about missing type param here
>   complete((-param).toString)
>   }
>
> If I use it with `pass` directive, it works:
>
> val negParamWithPass: Route =
>   (pass & requireParam("param".as[Int])) { // this pass usage looks hacky
> param =>
>   complete((-param).toString)
>   }
>
> If I write `requireParam()` return type explicitly, it works too:
>
> val negParamWithExplicitType: Route =
>   (requireParam("param".as[Int]): Directive1[Int]) { // DRY violation
> param =>
>   complete((-param).toString)
>   }
>
> Why do I need these tricks? Why can't it work just with 
> `requireParam("param".as[Int])`?
>
> Scala version 2.12.1, Akka-HTTP 10.0.10.
>



Re: [akka-user] Spray->Akka-Http Migration - seeing high 99th percentile latencies post-migration

2017-11-16 Thread johannes . rudolph
Hi Gary,

did you find out what's going on by now? If I understand correctly, you get 
latency spikes as soon as you use the `entity(as[String])` directive? Could 
you narrow down if there's anything special to those requests? I guess you 
monitor your GC times?

Johannes

On Wednesday, November 1, 2017 at 8:56:50 PM UTC+1, Gary Malouf wrote:
>
> So the only way I was able to successfully identify the suspicious code 
> was to route a percentage of my production traffic to a stubbed route that 
> I incrementally added back pieces of our implementation into.  What I found 
> was that we started getting spikes when the entity(as[CaseClassFromJson]) 
> stubbed 
> was added back in.  To figure out if it was the json parsing or 'POST' 
> entity consumption itself, I replaced that class with a string - turns out 
> we experience the latency spikes with that as well (on low traffic as noted 
> earlier in this thread).  
>
> I by no means have a deep understanding of streams, but it makes me wonder 
> if the way I have our code consuming the entity is not correct.
>
> On Monday, October 30, 2017 at 4:27:13 PM UTC-4, Gary Malouf wrote:
>>
>> Hi Roland - thank you for the tip.  We shrunk the thread pool size down 
>> to 1, but were disheartened to still see the latency spikes.  Using Kamon's 
>> tracing library (which we validated with various tests to ensure it's own 
>> numbers are most likely correct), we could not find anything in our code 
>> within the route that was causing the latency (it all appeared to be 
>> classified to be that route but no code segments within it).  
>>
>> As mentioned earlier, running loads of 100-1000 requests/second 
>> completely hides the issue (save for the max latency) as everything through 
>> 99th percentiles is under a few milliseconds.
>>
>> On Tuesday, October 24, 2017 at 2:23:07 AM UTC-4, rkuhn wrote:
>>>
>>> You could try to decrease your thread pool size to 1 to exclude wakeup 
>>> latencies when things (like CPU cores) have gone to sleep.
>>>
>>> Regards, Roland 
>>>
>>> Sent from my iPhone
>>>
>>> On 23. Oct 2017, at 22:49, Gary Malouf  wrote:
>>>
>>> Yes, it gets parsed using entity(as[]) with spray-json support.  Under a 
>>> load test of say 1000 requests/second these latencies are not visible in 
>>> the percentiles - they are easy to see because this web server is getting 
>>> 10-20 requests/second currently.  Trying to brainstorm if a dispatcher 
>>> needed to be tuned or something of that sort but have yet to see evidence 
>>> supporting that.
>>>
>>> path("foos") { 
>>> traceName("FooSelection") {
>>> entity(as[ExternalPageRequest]) { pr => 
>>> val spr = toSelectionPageRequest(pr) 
>>> shouldTracePageId(spr.pageId).fold( 
>>> Tracer.currentContext.withNewSegment(s"Page-${pr.pageId}", "PageTrace", 
>>> "kamon") { 
>>> processPageRequestAndComplete(pr, spr) 
>>> }, 
>>> processPageRequestAndComplete(pr, spr) 
>>> ) 
>>> }
>>> } 
>>>
>>> }
>>>
>>> On Mon, Oct 23, 2017 at 4:42 PM, Viktor Klang  
>>> wrote:
>>>
 And you consume the entityBytes I presume?

 On Mon, Oct 23, 2017 at 10:35 PM, Gary Malouf  
 wrote:

> It is from when I start the Kamon trace (just inside of my 
> path("myawesomepath") declaration until (theoretically) a 'complete' call 
> is made.  
>
> path("myawesomepath") {
>   traceName("CoolStory") {
> ///do some stuff
>  complete("This is great")
> } }
>
> For what it's worth, this route is a 'POST' call.
>
> On Mon, Oct 23, 2017 at 4:30 PM, Viktor Klang  
> wrote:
>
>> No, I mean, is it from first-byte-received to last-byte-sent or what?
>>
>> On Mon, Oct 23, 2017 at 10:22 PM, Gary Malouf  
>> wrote:
>>
>>> We are using percentiles computed via Kamon 0.6.8.  In a very low 
>>> request rate environment like this, it takes roughly 1 super slow 
>>> request/second to throw off the percentiles (which is what I think is 
>>> happening).  
>>>
>>>
>>>
>>> On Mon, Oct 23, 2017 at 4:20 PM, Viktor Klang  
>>> wrote:
>>>
 What definition of latency are you using? (i.e. how is it derived)

 On Mon, Oct 23, 2017 at 10:11 PM, Gary Malouf  
 wrote:

> Hi Konrad,
>
> Our real issue is that we can not reproduce the results.  The web 
> server we are having latency issues with is under peak load of 10-15 
> requests/second - obviously not much to deal with.  
>
> When we use load tests (https://github.com/apigee/apib), it's 
> easy for us to throw a few thousand requests/second at it and get 
> latencies 
> in the ~ 3 ms range.  We use kamon to track internal metrics - what 
> we see 
> is that our 95th and 99th percentiles only look bad under the 
> production 
> traffic but not under load tests.  
>
> I've since used kamon to print out the actual requests try

Re: [akka-user] Spray->Akka-Http Migration - seeing high 99th percentile latencies post-migration

2017-11-16 Thread johannes . rudolph
I wonder if you could start a timer when you enter the trace block and then 
e.g. after 200ms trigger one or multiple stack dumps (using JMX or just by 
printing out the result of `Thread.getAllStackTraces`). It's not super 
likely that something will turn up but it seems like a simple enough thing 
to try.
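A sketch of that suggestion (plain JVM code, Scala 2.12-era; `SlowRequestDump` and `withStackDumpAfter` are made-up names):

```scala
import java.util.concurrent.{Executors, TimeUnit}
import scala.collection.JavaConverters._

object SlowRequestDump {
  private val scheduler = Executors.newSingleThreadScheduledExecutor()

  // Dump all thread stacks if the wrapped block is still running after
  // the given delay; cancel the dump if the block finishes in time.
  def withStackDumpAfter[T](millis: Long)(block: => T): T = {
    val pending = scheduler.schedule(new Runnable {
      def run(): Unit =
        Thread.getAllStackTraces.asScala.foreach { case (thread, frames) =>
          println(s"--- ${thread.getName}")
          frames.foreach(frame => println(s"  at $frame"))
        }
    }, millis, TimeUnit.MILLISECONDS)
    try block
    finally pending.cancel(false)
  }
}
```

Wrapping the traced route body in `SlowRequestDump.withStackDumpAfter(200) { ... }` would then capture stacks only for the slow requests.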

Johannes

On Thursday, November 16, 2017 at 1:28:23 PM UTC+1, Gary Malouf wrote:
>
> Hi Johannes,
>
> Yes; we are seeing 2-3 requests/second (only in production) with the 
> latency spikes.  We found no correlation between the gc times and these 
> request latencies, nor between the size/type of requests.
>
> We had to pause the migration effort for 2 weeks because of the time being 
> taken, but just jumped back on it the other day.  
>
> Our current strategy is to implement this with the low level api to see if 
> we get the same results.
>
> Gary
>
> On Nov 16, 2017 6:57 AM, > wrote:
>
> Hi Gary,
>
> did you find out what's going on by now? If I understand correctly, you 
> get latency spikes as soon as you use the `entity[as[String]]` directive? 
> Could you narrow down if there's anything special to those requests? I 
> guess you monitor your GC times?
>
> Johannes
>
>
> On Wednesday, November 1, 2017 at 8:56:50 PM UTC+1, Gary Malouf wrote:
>>
>> So the only way I was able to successfully identify the suspicious code 
>> was to route a percentage of my production traffic to a stubbed route that 
>> I incrementally added back pieces of our implementation into.  What I found 
>> was that we started getting spikes when the entity(as[CaseClassFromJson
>> ]) stubbed was added back in.  To figure out if it was the json parsing 
>> or 'POST' entity consumption itself, I replaced that class with a string - 
>> turns out we experience the latency spikes with that as well (on low 
>> traffic as noted earlier in this thread).  
>>
>> I by no means have a deep understanding of streams, but it makes me 
>> wonder if the way I have our code consuming the entity is not correct.
>>
>> On Monday, October 30, 2017 at 4:27:13 PM UTC-4, Gary Malouf wrote:
>>>
>>> Hi Roland - thank you for the tip.  We shrunk the thread pool size down 
>>> to 1, but were disheartened to still see the latency spikes.  Using Kamon's 
>>> tracing library (which we validated with various tests to ensure it's own 
>>> numbers are most likely correct), we could not find anything in our code 
>>> within the route that was causing the latency (it all appeared to be 
>>> classified to be that route but no code segments within it).  
>>>
>>> As mentioned earlier, running loads of 100-1000 requests/second 
>>> completely hides the issue (save for the max latency) as everything through 
>>> 99th percentiles is under a few milliseconds.
>>>
>>> On Tuesday, October 24, 2017 at 2:23:07 AM UTC-4, rkuhn wrote:

 You could try to decrease your thread pool size to 1 to exclude wakeup 
 latencies when things (like CPU cores) have gone to sleep.

 Regards, Roland 

 Sent from my iPhone

 On 23. Oct 2017, at 22:49, Gary Malouf  wrote:

 Yes, it gets parsed using entity(as[]) with spray-json support.  Under 
 a load test of say 1000 requests/second these latencies are not visible in 
 the percentiles - they are easy to see because this web server is getting 
 10-20 requests/second currently.  Trying to brainstorm if a dispatcher 
 needed to be tuned or something of that sort but have yet to see evidence 
 supporting that.

 path("foos") { 
 traceName("FooSelection") {
 entity(as[ExternalPageRequest]) { pr => 
 val spr = toSelectionPageRequest(pr) 
 shouldTracePageId(spr.pageId).fold( 
 Tracer.currentContext.withNewSegment(s"Page-${pr.pageId}", "PageTrace", 
 "kamon") { 
 processPageRequestAndComplete(pr, spr) 
 }, 
 processPageRequestAndComplete(pr, spr) 
 ) 
 }
 } 

 }

 On Mon, Oct 23, 2017 at 4:42 PM, Viktor Klang  
 wrote:

> And you consume the entityBytes I presume?
>
> On Mon, Oct 23, 2017 at 10:35 PM, Gary Malouf  
> wrote:
>
>> It is from when I start the Kamon trace (just inside of my 
>> path("myawesomepath") declaration until (theoretically) a 'complete' 
>> call 
>> is made.  
>>
>> path("myawesomepath") {
>>   traceName("CoolStory") {
>> ///do some stuff
>>  complete("This is great")
>> } }
>>
>> For what it's worth, this route is a 'POST' call.
>>
>> On Mon, Oct 23, 2017 at 4:30 PM, Viktor Klang  
>> wrote:
>>
>>> No, I mean, is it from first-byte-received to last-byte-sent or what?
>>>
>>> On Mon, Oct 23, 2017 at 10:22 PM, Gary Malouf  
>>> wrote:
>>>
 We are using percentiles computed via Kamon 0.6.8.  In a very low 
 request rate environment like this, it takes roughly 1 super slow 
 request/second to throw off the percentiles (which is what I think is 
 happening)

[akka-user] Akka Http 10.1.0-RC1 Released

2017-12-22 Thread johannes . rudolph

Documentation
   
   - Almost the complete directive documentation has now been consolidated 
   between Java and Scala. More than hundred pages have been simplified like 
   this.
   - The top-level documentation structure has been clarified.

Bug Fixesakka-http-core
   
   - Fix return type for withStatus methods to return proper scaladsl types 
   (#1623 <https://github.com/akka/akka-http/issues/1623>)
   - Fix "Cannot push port ... twice" in NewHostConnectionPool (#1610 
   <https://github.com/akka/akka-http/issues/1610>) and several other fixes

Documentation
   
   - Fix generation/publication of javadoc (10.0.11 javadoc link pointed to 
   scaladoc instead)

Credits

A total of 22 issues were closed since 10.0.11.

The complete list of closed issues can be found on the 10.1.0-RC1 milestone 
<https://github.com/akka/akka-http/milestone/26?closed=1> milestones on 
GitHub.

For this release we had the help of 13 contributors – thank you all very 
much!

commits added removed
   4448414860 Arnout Engelen
   321196 613 Johannes Rudolph
   22 7202789 Josep Prat
3 122 349 Jonas Fonseca
2  28  17 Pavel Boldyrev
1   6   4 Johan Andrén
1 128   4 Martynas Mickevičius
1 111   1 Ivano Pagano
1  21   1 Kentaro Maeda
1   1   1 Philippus Baalman
1   2   0 Jimin Hsieh
1   1   1 David Francoeur
1  23   2 Catalin Ursachi

Happy hakking and happy holidays to everyone!

– The Akka Team



Re: [akka-user][deprecated] Re: [akka-user] Spray->Akka-Http Migration - seeing high 99th percentile latencies post-migration

2018-10-09 Thread Johannes Rudolph
That the entity directive is part of the picture could be a hint that
indeed streaming requests might be the cause of this. In spray, there was
no request streaming enabled by default and the engine just collected the
complete stream into a buffer and dispatched it to the app only after
everything was received. This has changed in akka-http where streaming is
on by default if the complete request wasn't received in one go from the
network. In this case the streaming case is actually more likely to happen
on low-traffic servers with a real network where network packages are not
aggregated in lower levels but are really processed immediately when they
are received.

The question is still if the 200ms are really added latency in akka-http or
just an artifact of how request processing time is measured. There's
definitely *some* overhead of processing a request in streaming fashion but
it's not 200ms. I haven't checked seriously but it seems that Kamon might
be measuring something else than you are thinking in akka-http: it seems to
start measuring the time from when the request is dispatched to your app
but at this point the request body might not have been received fully. That
means that whenever the HTTP client is slow with sending a request for
whatever reason, it will show in your request processing times.
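If spray-like behavior is wanted, the body can be collected up front so that timing started inside the route no longer includes the time the client takes to send it. A sketch (assuming akka-http 10.0.x; timeout value is illustrative):

```scala
import scala.concurrent.duration._

import akka.http.scaladsl.server.Directives._

// toStrictEntity buffers the full request body before the inner route runs,
// restoring the buffer-then-dispatch behavior spray had by default.
val route =
  toStrictEntity(2.seconds) {
    entity(as[String]) { body =>
      complete(s"received ${body.length} bytes")
    }
  }
```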

Johannes

On Mon, Oct 8, 2018 at 10:42 PM Gary Malouf  wrote:

> We ultimately decided to rollout despite this glitch.  Not happy about it,
> and hoping whatever is causing this gets resolved in a future release.  My
> hunch is that it's a fixed price being paid that if 1000's of more
> requests/second were sent to the app would make this unnoticeable.
>
>
>
> On Sun, Oct 7, 2018 at 11:18 AM Avshalom Manevich 
> wrote:
>
>> Hi Gary,
>>
>> Did you end up finding a solution to this?
>>
>> We're hitting a similar issue with Akka HTTP (10.0.11) and a low-load
>> server.
>>
>> Average latency is great but 99th percentile is horrible (~200ms).
>>
>> Appreciate your input.
>>
>> Regards,
>> Avshalom
>>
>>
>> I wonder if you could start a timer when you enter the trace block and
>>> then e.g. after 200ms trigger one or multiple stack dumps (using JMX or
>>> just by printing out the result of `Thread.getAllStackTraces`). It's not
>>> super likely that something will turn up but it seems like a simple enough
>>> thing to try.
>>>
>>> Johannes
>>>
>>> On Thursday, November 16, 2017 at 1:28:23 PM UTC+1, Gary Malouf wrote:
>>>
 Hi Johannes,

 Yes; we are seeing 2-3 requests/second (only in production) with the
 latency spikes.  We found no correlation between the gc times and these
 request latencies, nor between the size/type of requests.

 We had to pause the migration effort for 2 weeks because of the time
 being taken, but just jumped back on it the other day.

 Our current strategy is to implement this with the low level api to see
 if we get the same results.

 Gary

 On Nov 16, 2017 6:57 AM,  wrote:

 Hi Gary,

 did you find out what's going on by now? If I understand correctly, you
 get latency spikes as soon as you use the `entity[as[String]]` directive?
 Could you narrow down if there's anything special to those requests? I
 guess you monitor your GC times?

 Johannes


 On Wednesday, November 1, 2017 at 8:56:50 PM UTC+1, Gary Malouf wrote:

> So the only way I was able to successfully identify the suspicious
> code was to route a percentage of my production traffic to a stubbed route
> that I incrementally added back pieces of our implementation into.  What I
> found was that we started getting spikes when the entity(as[Case
> ClassFromJson]) stubbed was added back in.  To figure out if it was
> the json parsing or 'POST' entity consumption itself, I replaced that 
> class
> with a string - turns out we experience the latency spikes with that as
> well (on low traffic as noted earlier in this thread).
>
> I by no means have a deep understanding of streams, but it makes me
> wonder if the way I have our code consuming the entity is not correct.
>
> On Monday, October 30, 2017 at 4:27:13 PM UTC-4, Gary Malouf wrote:
>
>> Hi Roland - thank you for the tip.  We shrunk the thread pool size
>> down to 1, but were disheartened to still see the latency spikes.  Using
>> Kamon's tracing library (which we validated with various tests to ensure
>> it's own numbers are most likely correct), we could not find anything in
>> our code within the route that was causing the latency (it all appeared 
>> to
>> be classified to be that route but no code segments within it).
>>
>> As mentioned earlier, running loads of 100-1000 requests/second
>> completely hides the issue (save for the max latency) as everything 
>> through
>> 99th percentiles is under a few milliseconds.
>>
>> On Tuesday, October 24,

[akka-user][deprecated] Re: Akka SSLSession leak when running Akka with native TLS

2018-11-22 Thread johannes . rudolph
Hi Sean,

thanks for the comprehensive report. What do you mean by a native vs. 
non-native TLS server? Is the example app for the "native TLS" server?

Johannes

On Wednesday, November 21, 2018 at 6:26:09 PM UTC+1, Sean Gibbons wrote:
>
> Hi all,
>
> I have been working with a native TLS Akka HTTP service deployed to 
> Production. We have noticed memory increasing consistently throughout the 
> week until our Akka service died due to memory constraints. 
> Running a JProfiler locally I've managed to reproduce what I believe to be 
> a leak in SSL related classes just using Akka code. 
>  
> I am using Akka HTTP version 10.1.5 and Akka version 2.5.18.
>
> I ran a comparison of a non native TLS Akka server vs a native TLS Akka 
> server. The load test consisted of sending around 20 req/s to a dummy 
> endpoint that simply just returns a hardcoded "hello" string.
>
> A *complete* *runnable* *example* of the Akka code used to produce this 
> leak can be found as *dummy-app.zip *along with JProfiler snapshots in 
> this *Google Drive link* - 
> https://drive.google.com/drive/folders/1Q1zgN4m5J4oI_S0TupMs1LO1UVlJfh87?usp=sharing
>
>
> *With Native TLS:*
>
> [image: Screen Shot 2018-11-21 at 12.09.17 PM.png]
>
> JProfiler snapshots associated with this image are *included *in the* 
> Google Drive link *above and named:
>  - TLSLocal1.jps
>  - TLSLocal2.jps
>  - TLSLocal3.jps
> Each blue line above is when a snapshot was taken.
>
>
> *Without Native TLS:*
>
> [image: Screen Shot 2018-11-21 at 12.12.58 PM.png]
>
> JProfiler snapshots associated with this image are *included *in the* Google 
> Drive link *above and named:
>  - NoTLSLocal1.jps
>  - NoTLSLocal2.jps
>  - NoTLSLocal3.jps
> Each blue line above is when a snapshot was taken.
>
>
> Any help is much appreciated on this topic thanks.
>
>
> *Séanadh Ríomhphoist/Email DisclaimerTá an ríomhphost seo agus aon chomhad 
> a sheoltar leis faoi rún agus is lena úsáid ag an seolaí agus sin amháin 
> é. Is féidir tuilleadh a léamh anseo. 
>   
> This e-mail and any 
> files transmitted with it are confidential and are intended solely for use 
> by the addressee. Read more here. 
>  *
>
>

-- 
*
** New discussion forum: https://discuss.akka.io/ replacing akka-user 
google-group soon.
** This group will soon be put into read-only mode, and replaced by 
discuss.akka.io
** More details: https://akka.io/blog/news/2018/03/13/discuss.akka.io-announced
*
>> 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user][deprecated] Re: Akka SSLSession leak when running Akka with native TLS

2018-11-22 Thread Johannes Rudolph
I see. Thanks.

With the provided code I couldn't reproduce the issue, at least in the quick
tests I did. Could you run

jmap -histo:live <pid>

on the command line when some memory has accrued and
send the output here (or in private)?

Johannes


On Thu, Nov 22, 2018 at 1:16 PM Sean Gibbons 
wrote:

> And by native TLS I just mean a standard TLS AKKA Server, apologies for
> any confusion.
>
>
>



[akka-user][deprecated] Re: Aggregating millions of records using Akka-Streams

2019-08-08 Thread Johannes Rudolph
Reposted here: 
https://discuss.lightbend.com/t/aggregating-a-million-records-using-akka-streams/4781

On Thursday, August 8, 2019 at 11:02:13 AM UTC+2, Aditya pavan Kumar wrote:
>
> I am running a simulation which generates a million records every second.
> I’m writing them to Kafka and reading them through Akka Streams. I’m 
> performing a few aggregations on this data and writing the output back to 
> Kafka.
>
>
> The data contains a timestamp based on which the aggregations are grouped. 
> Using the timestamps, I’m creating windows of data and performing 
> aggregation on these windows. 
> Since there are a million records each second, the aggregations are taking 
> about 40 seconds for one million records. 
> This is really slow because new data is being generated and written to 
> Kafka every second.
>
>
> I referred this blog post for performing window aggregations: 
> https://dvirgiln.github.io/akka-streams-windowing/
>
> Is there any better way to perform these aggregations in less time
> (preferably less than one second) using Akka Streams?
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/akka-user/48b8d04a-d1ef-4e70-8129-e85a22c7f7cf%40googlegroups.com.


[akka-user] Re: [scala-user] Re: Akka Streams & HTTP 1.0-RC1 Announcement

2015-04-27 Thread Johannes Rudolph
Hi Anton,

On Sun, Apr 26, 2015 at 9:31 PM, Anton Kulaga  wrote:
> What about Websockets, as I understood they are supported in RC-1 but is
> there anything in documentation that explains how to use them in akka-http?

Server-side websocket support is included in RC1 but not yet
documented. An example usage of the low-level API can be seen here:

https://github.com/akka/akka/blob/release-2.3-dev/akka-http-core/src/test/scala/akka/http/scaladsl/TestServer.scala

In the high-level part a new directive `handleWebsocketMessages` is
now available, see

https://github.com/akka/akka/blob/release-2.3-dev/akka-http-scala/src/main/scala/akka/http/scaladsl/server/directives/WebsocketDirectives.scala

We'll add proper documentation asap.

Johannes



[akka-user] Re: [scala-user] Re: Akka Streams & HTTP 1.0-RC1 Announcement

2015-04-27 Thread Johannes Rudolph
Hi Anton,

I created a small standalone chat test application. See here:

https://github.com/jrudolph/akka-http-scala-js-websocket-chat

Johannes


On Sun, Apr 26, 2015 at 9:31 PM, Anton Kulaga  wrote:
> What about Websockets, as I understood they are supported in RC-1 but is
> there anything in documentation that explains how to use them in akka-http?
>
> On Friday, April 24, 2015 at 10:19:13 PM UTC+3, Patrik Nordwall wrote:
>>
>> Dear Hakkers,
>>
>>
>> we—the Akka committers—are exceptionally proud to present the first
>> RELEASE CANDIDATE of Akka Streams & HTTP. While this is not the end of the
>> journey—several features are going to be added after 1.0—the time has come
>> to declare a (very) useful subset of the intended functionality ready for
>> public consumption. Since the last milestone we have added the following
>> high-level features:
>>
>>
>> a TestKit for streams
>>
>> proper naming for all parts of a flow topology (see .named)
>>
>> add SslTls stage including support for session renegotiation
>>
>> added ActorRefSink and ActorRefSource for simple Actor integration
>>
>> data flow logging by a prepackaged combinator (see .log)
>>
>> Source and Sink for files (using FileChannel) as well as for
>> InputStream/OutputStream
>>
>> HTTP client with connection pooling and idempotent request retry
>>
>> … and (wait for it) … Websockets :-)
>>
>>
>> In addition we fixed many small things, as usual, and we also did some
>> last renames and reorganizations in order to offer a consistent API:
>>
>>
>> Java functional interfaces moved to akka-actor (2.3.10, see
>> akka.japi.function)
>>
>> improved Java compilation error messages by adding arity to method name in
>> flow factories
>>
>> made OperationAttributes language-independent and also extensible,
>> dispatcher and supervision properties moved to ActorOperationAttributes
>>
>> removed .section in favor of .withAttributes and .via
>>
>> moved FlattenStrategy into the Java/Scala DSLs
>>
>> reorganized the project structure and package hierarchy of HTTP to offer
>> consistent and equivalent Java & Scala APIs
>>
>> relaxed method signatures to accept Graphs instead of the more specific
>> Source/Flow/Sink types to enable free reuse of blueprints between Java &
>> Scala
>>
>> renamed StreamTcp to Tcp and the bind method takes interface and port
>> parameters instead of InetSocketAddress
>>
>>
>> On the State of HTTPS
>>
>> While we now have all the ingredients—SslTls and HTTP are BidiFlows that
>> can be connected—we do not yet have nice convenience APIs for using HTTP and
>> SSL together. This will come in one of the next releases, perhaps even
>> before 1.0.
>>
>>
>> Things that are Known Missing
>>
>> The cookbook section of the streams documentation has not yet been ported
>> to Java, but the text of the Scala version applies to both languages. More
>> documentation will follow in general; in particular, SslTls currently only
>> has API docs.
>>
>>
>> The akka-http-core module is still missing the Java side of the multipart
>> model (#15674). Working with HTTPS (client- and server-side) is not yet as
>> easy as it will be. Additionally not all of the directives that make up the
>> high-level server-side API in akka-http-scala have proper counterparts in
>> akka-http-java (#16436). We will close these gaps shortly.
>>
>>
>> General Notices
>>
>> The complete list of closed tickets can be found in the streams-1.0-RC1
>> and http-1.0-RC1 github issues milestones.
>>
>>
>> For the full stats see the announcement on the website.
>>
>>
>> The activator templates have also been updated:
>>
>>
>> Akka Streams with Java8!
>>
>>
>> Akka Streams with Scala!
>>
>>
>> We’d like to thank all of you for testing and for providing feedback on
>> our progress.
>>
>>
>> Happy hAkking!
>>
>> --
>>
>> Patrik Nordwall
>> Typesafe -  Reactive apps on the JVM
>> Twitter: @patriknw
>
> --
> You received this message because you are subscribed to the Google Groups
> "scala-user" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to scala-user+unsubscr...@googlegroups.com.
>
> For more options, visit https://groups.google.com/d/optout.



-- 
Johannes

---
Johannes Rudolph
http://virtual-void.net



Re: [akka-user] Akka Http Gzip Json

2015-05-07 Thread Johannes Rudolph
Hi Paul,

thanks for the report. Yes, I can confirm that it's exactly as you
say. I filed an issue to track the problem:

https://github.com/akka/akka/issues/17417

Johannes


On Thu, May 7, 2015 at 4:40 PM, Roland Kuhn  wrote:
> Sorry, this fell through the cracks.
>
> Mathias/Johannes, have you seen this?
>
> Regards,
>
> Roland
>
> 25 mar 2015 kl. 03:53 skrev Paul Cleary :
>
> I have gone back to Spray 1.3.2 for the time being as this is a stopper for
> me.
>
> I spent a lot of time trying to sort through this Akka Http issue and threw
> in the towel.
>
> On Tuesday, March 24, 2015 at 8:23:01 AM UTC-4, Paul Cleary wrote:
>>
>> One other note with this issue, it seems when I get it, the java process
>> never dies, and starts hogging all of my CPU.  Perhaps there is some kind of
>> out-of-control looping that is going on?
>>
>> On Monday, March 23, 2015 at 9:26:06 AM UTC-4, Paul Cleary wrote:
>>>
>>> I am trying to put together a route that does a decode with Gzip, and
>>> then uses an entity(as.
>>>
>>> With gzip decoding on, my route is not running.  If I remove Gzip, then
>>> everything works fine.
>>>
>>> Here is the error I am getting:
>>>>
>>>> Request was neither completed nor rejected within 1 second
>>>> (RouteTest.scala:53)
>>>
>>>
>>> Here is the route:
>>>
>>> // this is the route, "spanRouter" is an actor ref, does nothing right
>>> now
>>>
>>> path("spans") {
>>>   post {
>>> decodeRequestWith(Gzip) {
>>>   entity(as[Span]) { span =>
>>> complete {
>>>   spanRouter ! span
>>>   Future.successful(HttpResponse(StatusCodes.Accepted))
>>> }
>>>   }
>>> }
>>>   }
>>> }
>>>
>>>
>>> // and here is the test
>>>
>>> def compress(json:String) =
>>> Gzip.newCompressor.compress(ByteString(json.getBytes))
>>>
>>> ...
>>>
>>>
>>> val compressedBody = HttpEntity(ContentTypes.`application/json`,
>>> compress(json))
>>> Post("/spans",
>>> compressedBody).withHeaders(`Content-Encoding`(HttpEncodings.gzip)) ~>
>>> underTest ~> check {
>>>   status shouldEqual StatusCodes.Accepted
>>>   probe.receiveOne(3.seconds)
>>> }
>
>
>
>
>
>
> Dr. Roland Kuhn
> Akka Tech Lead
> Typesafe – Reactive apps on the JVM.
> twitter: @rolandkuhn
>
>



-- 
Johannes

---
Johannes Rudolph
http://virtual-void.net



[akka-user] Re: Akka Streams Merge variance

2015-05-18 Thread Johannes Rudolph
On Saturday, May 16, 2015 at 3:47:51 PM UTC+2, Oliver Winks wrote:
>
> If you look at the implementation of Merge you find that it is a generic 
> type, but without any defined variance:
>
> class Merge[T] extends Graph[UniformFanInShape[T, T], Unit]
>
> Is there any reason for this? Would it not make sense to define it as a 
> covariant type? e.g:
>
> class Merge[+T] extends Graph[UniformFanInShape[T, T], Unit]
>


Merge is both an input and an output element, so invariance is indeed the 
correct variance configuration: if it were covariant, you could feed it 
supertype elements that its output side cannot consume. When you use it 
while building a graph, you can access its inlets and outlets, which have 
the right variances set, so invariance of the `Merge` type itself shouldn't 
be a constraint.
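As a small, akka-free illustration of why a type parameter that occurs on both the input and the output side has to stay invariant (the `Stage` trait below is made up for this example, not an Akka type):

```scala
// T occurs both in contravariant (parameter) position and covariant
// (result) position, so the compiler forces T to be invariant.
trait Stage[T] {
  def push(elem: T): T
}

// A covariant declaration would be rejected by the compiler:
//   trait BadStage[+T] { def push(elem: T): T }
//   error: covariant type T occurs in contravariant position

val identityStage: Stage[Int] = new Stage[Int] {
  def push(elem: Int): Int = elem
}
```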



[akka-user] Re: akka-http-RC3 can't set the content type

2015-06-12 Thread Johannes Rudolph

On Friday, June 12, 2015 at 12:30:16 AM UTC+2, ke...@aptoma.com wrote:
>
> I think ContentType is modeled directly on HttpEntity. Try a combination 
> of the mapResponseEntity-directive and the .withContentType-method on 
> HttpEntity.
>

To expand on this: 

In spray/akka-http some headers are treated specially because they 
logically belong somewhere else in the model. The content type is one of 
them: in our view it tightly belongs to the entity (= body) of an HTTP 
message. (Also see the docs at [1] on this topic.)

So, how do you work with content-types?

There are basically two levels:

1) you construct the response manually with types from the model
2) you use the routing layer with `complete` and marshalling

In the first case you need to supply the content-type with the data, 
usually using one of the `HttpEntity.apply` methods [2]:

HttpEntity(MediaTypes.`text/plain`, "test")

In the second case you can still just pass an entity to `complete`: 

complete(HttpEntity(MediaTypes.`text/plain`, "test"))

or you make use of the marshalling infrastructure, which uses type classes 
to convert a value of your domain model to an acceptable entity. So, using a 
marshaller you can also be sure that content-types and charsets are 
properly negotiated with the client.

E.g.

complete("test")

will use the predefined `StringMarshaller` [3] which will answer requests 
that accept a `text/plain` response (rejecting others) and apply a charset 
that the client understands.

HTH
Johannes


[1] 
http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC3/scala/http/common/http-model.html#HTTP_Headers
[2] 
http://doc.akka.io/api/akka-stream-and-http-experimental/1.0-RC3/#akka.http.scaladsl.model.HttpEntity$
[3] 
https://github.com/akka/akka/blob/release-2.3-dev/akka-http/src/main/scala/akka/http/scaladsl/marshalling/PredefinedToEntityMarshallers.scala#L50



[akka-user] Re: akka-http-testkit responseAs cannot unmarshal the body

2015-06-12 Thread Johannes Rudolph
On Thursday, June 11, 2015 at 12:04:28 PM UTC+2, yar@gmail.com wrote:
>
> So if the response has content type text/plain, why the heck does the 
> testkit try to unmarshal it with an application/json unmarshaller and how 
> to point it in the right direction?
>

Good question. Do you use one of the json support classes?



Re: [akka-user] Re: Too many scan calls when profiling my akka application

2015-06-12 Thread Johannes Rudolph

On Sunday, June 7, 2015 at 1:42:52 PM UTC+2, Akka Team wrote:
>
> The scan method of FJP will very likely be your top method unless you have 
> very favorable load patterns for FJP. I wouldn't worry about it, unless you 
> see a performance problem with your application.
>
>
4% is indeed not much for FJP. Still, there's a trade-off with FJP that you 
need to be aware of: FJP keeps threads spinning for a while to be able to 
handle just-in-time arriving work faster. This ensures good latency in many 
cases. Especially if your pool is overprovisioned, however, you trade a bit 
of latency for more power consumption in almost-idle situations.



[akka-user] Re: Akka-http 1.0-RC4 on android 5.0.1 : that rocks !

2015-07-10 Thread Johannes Rudolph
Hi Alain,

On Friday, July 3, 2015 at 9:05:50 AM UTC+2, alain marcel wrote:
>
> Following code is a server realized with akka-http on android that serves 
> files.
> For this to work, just call new ServerForDownloadFile() in an android 
> AsyncTask.
>

Thanks for sharing! Any reason you are using an iterator over a 
memory-mapped file instead of a SynchronousFileSource from akka-stream?

Johannes



[akka-user] Re: How to download large file with Akka-http V1.0-RC4 ?

2015-07-10 Thread Johannes Rudolph
Hi Alain,

On Thursday, July 2, 2015 at 10:39:27 AM UTC+2, alain marcel wrote:
>
> If I replace Play server with Akka-http server (see source bellow), then I 
> can download large file (about 650Mb).
>

That's probably because you use chunked transfer encoding in your Akka HTTP 
server, for which the max-content-length is not enforced (yet! see 
https://github.com/akka/akka/issues/16472). The problem with Play, rather, 
seems to be that your setting isn't taking effect for some reason. The 
"configured limit of 8388608" is the default one, so you need to check that 
your configuration change is really effective (first, try invalid values and 
see if startup fails; if it doesn't, make it fail first before changing the 
value to something reasonable, to be sure it is picked up).

HTH
Johannes



[akka-user] Re: akka-http respond with result of an actor call (java-api)

2015-07-10 Thread Johannes Rudolph
Hi,

in the Java API there are currently two ways to deal with Futures. You can 
use `RequestContext.completeWith` to "transform" a `Future` 
into a `RouteResult`. Or, if you use the reflective `handleWith` directive, 
you can point it to a method that returns a `Future` instead 
of a `RouteResult`. Both ways allow you to pass the `RequestContext` around 
and complete asynchronously while instantly returning a `Future`.

HTH
Johannes



[akka-user] Re: akka-http-testkit route produces "internal service error" but live route works (?!?)

2015-07-12 Thread Johannes Rudolph
Hi Austin,

On Saturday, July 11, 2015 at 11:22:44 AM UTC+2, Austin Guest wrote:
>
> (1) The test for the "hello" route fails because the actual response is an 
> internal server error. (Despite the fact that this isn't true running the 
> live code!) And I get the following error messages:
>

That's probably because you are relying on a `val` that is defined inside 
an `App` class, which extends `DelayedInit` and has surprising 
initialization semantics. Try making `route` a `def`. 
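To illustrate the pitfall with a minimal, akka-free sketch (the names are invented for this example): vals in the body of an `App` are only initialized when `main` runs, so a test that touches such a val before then sees an uninitialized value, while a `def` is always safe.

```scala
// Minimal sketch of the App/DelayedInit pitfall (names made up).
object MyApp extends App {
  // On Scala 2.x this val is only initialized once main() runs,
  // because the body of an App is deferred via DelayedInit.
  val routeVal: String = "hello"

  // A def is evaluated on every access and therefore always safe,
  // which is why turning `route` into a `def` fixes the test.
  def routeDef: String = "hello"
}
```

Before `MyApp.main` has run, `MyApp.routeVal` is typically still `null` on Scala 2.x, while `MyApp.routeDef` already returns the value.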
 

> (2) I get a compile error for the backtick notation used in 
> `application/json`. Not too surprising as that's a bit black-magic-looking. 
> I'm okay with using another way to specify this is a JSON request, I just 
> can't find any in the docs.
>

It's hard to say because you don't provide any details about the error or 
about the types of the objects you pass in. Have you imported 
`application/json` from somewhere? Try 
`akka.http.scaladsl.model.MediaTypes.`application/json``.

HTH
Johannes 



[akka-user] Re: [akka-http] Splitting an incoming Source stream to two Sources

2015-07-13 Thread Johannes Rudolph
Hi Michael,

leaving aside the API for a moment, the question is how the streams should 
behave. The actual live streams are not immutable but are running things 
that need to receive data from the original request and send this data out 
with another request. (I haven't completely understood in which way you 
want to "split" the source, broadcasting seems to hint that you are going 
to duplicate data and maybe filter afterwards? It may not matter at this 
point.)

Let's assume you want to duplicate the stream. How should the duplicated 
streams behave wrt backpressure when they are actually consumed? The 
simplest strategy is to assume that you want them to backpressure together, 
i.e. the slower consumer will set the speed for both of them and the two 
"stream pointers" only differ as far as the bounded buffers after the 
broadcast allow. Is that what you are after?

So far for the theoretical operation of the streams. The problem with the 
API is, that the FlowGraph API requires that the whole graph is 
materialized together. This means that you cannot possibly return two 
separate unmaterialized Sources from a graph building operation which is 
what you need for creating two HttpEntities. One workaround could be to 
wire the two outputs to two `Sink.publisher`s during graph building, and 
then directly run the graph to obtain two publishers, which you then wrap 
into two sources that you can put into two new requests. This works 
because PublisherSinks can already be materialized on their sink side while 
still waiting for a subscriber on the publisher side. In any case, 
processing will only start when both sources are running (aside from 
buffers already filling up before that).

Does that make sense?

If you actually meant splitting the stream by some criteria, i.e. you will 
first receive data for creating the first request and afterwards for 
creating the second request, there may be another solution where you try to 
create a `Flow[ByteString, HttpRequest]` and feed that into something that 
actually runs the requests. The challenging thing here is that you need to 
create an intermediate operation `Flow[ByteString, Source[ByteString]]` 
which can only be done using groupBy, splitAfter, or splitWhen combinators 
which, however, cannot be stateful, so you may need some boilerplate to get 
that scheme implemented.

Cheers,
Johannes


On Monday, July 13, 2015 at 7:04:21 PM UTC+2, Michael Hamrah wrote:
>
> I'm having trouble wiring the following logical flow:
>
> akka-http route -> grab request entity's source stream -> split the source 
> stream in two -> pass the source stream to two new http requests with the 
> source stream in the http entity.
>
> I can easily grab the incoming request entity's source stream. I can then 
> create a FlowGraph with a Broadcast stage to split it. However, what I 
> can't seem to do
> is wire up the rest. I need two Source[ByteString]s to build the 
> HttpEntity to make the upstream request. Using broadcast.out(0) gives me an 
> outlet which I can't convert to a Stream; if I create a Flow, I have a 
> bunch of ByteStrings generating sources. 
>
> I believe I need to "wrap" the entire split incoming stream--not the
> specific byte string chunks--in a new http request, but not sure how.
>
> Any ideas?
>
> Thanks,
>
> Mike
>



[akka-user] Re: SslTls example

2015-07-14 Thread Johannes Rudolph
Hi Mathias,

there may not be any good examples yet.

On the server side you need to supply an `HttpsContext` to the 
`Http.bind()` method with all the SSL settings.

On the client side there are either HTTPS variants like 
`Http.newHostConnectionPoolTls` or if you use the highest-level API you can 
just supply a request with an https URI to `Http.singleRequest`. The client 
side methods take an optional `HttpsContext` as well but will also use the 
default Java SSL settings (like root certificates, etc.) when none is 
supplied.

HTH
Johannes

On Tuesday, July 14, 2015 at 11:57:08 AM UTC+2, Mathias Bogaert wrote:
>
> Hi,
>
> Is there an example available where SslTls is used with Akka HTTP 
> client/server?
>
> Thanks,
>
> Mathias
>



[akka-user] Re: akka streams - for comprehension counterpart

2015-07-14 Thread Johannes Rudolph
Hi Leslie,

On Tuesday, July 14, 2015 at 1:38:02 PM UTC+2, leslie...@googlemail.com 
wrote:
>
> When programming with functions this kind of issue is solved quite 
> elegantly by using a for comprehension:
>
>
Not a solution but a comment. In a for comprehension with usual types 
(Future/Option/Either/Try) the calculation is also cut short on the first 
error. So, it may not differ so much in that regard.

However, streams are different in another way: a Flow[T, U] is more than 
just a function T => U, it's more like a T => Seq[U] that can create any 
number of results for any input element. This makes it hard to create 
something like an `eitherFlow(leftFlow: Flow[L, U], rightFlow: Flow[R, U]): 
Flow[Either[L, R], U]` that would bypass errors around some components 
because in general you somehow need to constrain the argument flows to 
produce exactly one output element for each input element. Even then you 
need to prevent two subsequent elements `Right(r)`, `Left(l)` from starting 
a race between the left and right branches (if you are interested in keeping 
the order). I think that's the main problem: it's hard to come up with an 
exact specification for a general element that would solve the problem in 
all cases.
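
To make the contrast concrete: the restricted case where both sides are plain one-to-one functions (not Flows) collapses to an ordinary pattern match, with no ordering races possible. A minimal sketch in plain Scala:

```scala
// Sketch: the degenerate one-to-one case. When the "left" and "right"
// branches are plain functions L => U and R => U, routing an Either
// through them is just a pattern match — the difficulty described above
// only appears once the branches become Flows with arbitrary fan-out.
def eitherMap[L, R, U](left: L => U, right: R => U): Either[L, R] => U = {
  case Left(l)  => left(l)
  case Right(r) => right(r)
}

// usage
val describe = eitherMap[Throwable, Int, String](_.getMessage, n => s"value $n")
describe(Right(42))                           // "value 42"
describe(Left(new RuntimeException("boom")))  // "boom"
```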

akka-http has lots of places where we do pass "control information" around 
actual processing. However, in the end we had to hand-tune the bypass in 
most places because more general solutions didn't work out because of some 
pecularities of the needed semantics.

Johannes



[akka-user] Re: Connecting over Https using Akka-Http client

2015-07-15 Thread Johannes Rudolph
Hi Jeroen,

is this a virtual host you are connecting against? This may hint towards 
the client not sending the TLS SNI extension correctly which could be a bug 
in akka-http or due to an old JDK version on your client. Which JDK version 
do you use?

https://en.wikipedia.org/wiki/Server_Name_Indication says that you would 
need at least JDK 7 to make SNI work (only relevant if the host you connect 
against is an HTTPS virtual host).

Also, you could try the just released 1.0 version (though I cannot think of 
a reason why that should fix it).

Johannes

On Wednesday, July 15, 2015 at 4:13:14 PM UTC+2, Jeroen Rosenberg wrote:
>
> I'm trying to connect to a third party streaming API over HTTPS using 
> "akka-stream-experimental" % "1.0-RC4" and "akka-http-experimental" % 
> "1.0-RC4"
>
> My code looks like this
>
>> class GnipStreamHttpClient(host: String, account: String, processor: ActorRef)
>>     extends Actor with ActorLogging {
>>   this: Authorization =>
>>
>>   private val system = context.system
>>   private val endpoint = Uri(s"https://$host/somepath")
>>   private implicit val executionContext = system.dispatcher
>>   private implicit val flowMaterializer: Materializer =
>>     ActorMaterializer(ActorMaterializerSettings(system))
>>
>>   val client = Http(system).outgoingConnectionTls(host, port,
>>     settings = ClientConnectionSettings(system))
>>
>>   override def receive: Receive = {
>>     case response: HttpResponse if response.status.intValue / 100 == 2 =>
>>       response.entity.dataBytes.map(processor ! _).runWith(Sink.ignore)
>>     case response: HttpResponse =>
>>       log.info(s"Got unsuccessful response $response")
>>     case _ =>
>>       val req = HttpRequest(GET, endpoint)
>>         .withHeaders(`Accept-Encoding`(gzip), Connection("Keep-Alive")) ~> authorize
>>       log.info(s"Making request: $req")
>>       Source.single(req)
>>         .via(client)
>>         .runWith(Sink.head)
>>         .pipeTo(self)
>>   }
>> }
>>
> As a result I'm getting an Http 404 response. This doesn't make much sense 
> to me as when I copy the full url to curl it just works
>
>> curl --compressed -v -uuser:pass https://my.streaming.api.com/somepath
>>
>
> Also when I connect to a mock implementation of this streaming API using 
> Http protocol, my code works fine (using outgoingConnection instead of 
> outgoingConnectionTls).
>
> What am I doing wrong when making the HTTPS request? As far as I understand, 
> changing to outgoingConnectionTls should be enough for most cases. 
>
> Any help is appreciated!
>



Re: [akka-user] Re: Server attacked. What could have been done?

2015-07-20 Thread Johannes Rudolph
Hi,

On Sunday, July 19, 2015 at 10:43:07 PM UTC+2, Adam Shannon wrote:
>
> I have some logs of this happening to me as well. I'm running on EC2 in 
> us-east-1. I've got nginx and elb in front of these akka-http instances. 
> Here's a few stack traces from the instances as well as nginx access logs.
>
> https://gist.github.com/SpicyMonadz/b844ce4503e145fda7ee
>
> These requests aren't killing my jvm instances fyi. I'm on akka-http-* 1.0 
>
>

We recently fixed a problem that would unbind the listening socket under 
some conditions [1]. It was most likely to occur when a connection wasn't 
closed regularly. But it was fixed for 1.0, so I wonder whether what you are 
seeing is something else (or what exactly you are seeing), Adam. Are you 
able to reliably reproduce the issue? Does it stop your server from 
answering new connections?

Johannes

[1] https://github.com/akka/akka/issues/17992



[akka-user] Re: Implicit value issue for FromRequestUnmarshaller in akka-http with spray-json

2015-07-24 Thread Johannes Rudolph
I agree, this is more confusing than necessary. I created 
https://github.com/akka/akka/issues/18064 to improve the situation.

Johannes

On Thursday, July 23, 2015 at 3:27:07 PM UTC+2, Kabir Idris wrote:
>
> Hi all, I'm facing the same error in akka-http 1.0 
>
>
>
> On Monday, March 16, 2015 at 12:16:37 AM UTC+1, Arthur Kushka wrote:
>>
>> Hi. I ran into a typical error with spray-json and akka-http that is 
>> easy to find in Google, but none of the solutions helps me.
>>
>> Scala fails to compile where I parse the request body with the `entity` 
>> directive. As the error says, it can't find an implicit value for the 
>> unmarshaller, but the formatter and `SprayJsonSupport` are present in 
>> context (the documentation says that's enough).
>>
>> Below are the compiler message and my code.
>> Please help me find a solution to this problem.
>>
>> [error] /home/arhelmus/techmedia/akka-http/src/main/scala/utils/CustomDirectives.scala:31: could not find implicit value for parameter um: akka.http.unmarshalling.FromRequestUnmarshaller[utils.Credentials]
>> [error] entity(as[Credentials]).flatMap { credentials =>
>>
>>
>> package utils
>>
>> import akka.http.marshallers.sprayjson.SprayJsonSupport
>> import akka.http.server.Directive1
>> import models.User
>> import spray.json._
>>
>> case class Credentials(login: String, password: String)
>>
>> trait Protocols extends DefaultJsonProtocol {
>>   implicit val formatter: RootJsonFormat[Credentials] = jsonFormat2(
>> Credentials.apply)
>> }
>>
>> object CustomDirectives extends CustomDirectives
>>
>> trait CustomDirectives extends SprayJsonSupport with Protocols {
>>
>>   import akka.http.server.directives.BasicDirectives._
>>   import akka.http.server.directives.MarshallingDirectives._
>>   import akka.http.server.directives.ParameterDirectives._
>>
>>   def authenticateByToken: Directive1[User] =
>> parameter("token").flatMap { code =>
>>   provide(User(code, "asdsada", "asdsadasd"))
>> }
>>
>>   def authenticateByCredentials: Directive1[User] = {
>> entity(as[Credentials]).flatMap { credentials =>
>>   provide(User(credentials.login, credentials.login, credentials.
>> password))
>> }
>>   }
>>
>> }
>>
>



[akka-user] Re: Akka-http websockets connection fails 95% of the time

2015-07-24 Thread Johannes Rudolph
Hi Luc,

I'd like to get to the bottom of this problem to make sure we find the 
problem if there's one in akka-http. Could you provide an executable 
reproduction?

Johannes

On Thursday, July 23, 2015 at 3:05:29 PM UTC+2, Luc Klaassen wrote:
>
> Hello all,
>
> I'm having trouble setting up a basic websocket connection to an akka-http 
> websocket server. Almost all connections seem to fail, and are timed out by 
> akka-http after 5 seconds. Sometimes (seems completely random) the 
> connection succeeds though and the message sending works flawlessly. This 
> is really confusing me, since the problem is not 100% reproducible and 
> seems completely random.
>
> See stackoverflow for my current code:
>
> http://stackoverflow.com/questions/31579385/connection-between-js-and-akka-http-websockets-fails-95-of-the-time
>
> Any suggestions are greatly appreciated.
>



[akka-user] Re: memory issue in akka application

2015-07-24 Thread Johannes Rudolph
Hi Somak,

it seems like an actor isn't keeping pace with its incoming messages. You 
can try to follow references from the mailbox to the actor instance to 
find out which actor it is. As you are talking about log messages, my guess 
would be that your logging actor is too slow writing logs to disk. You may 
also look at some stack traces at runtime, which may reveal what this actor 
is doing.

HTH,
Johannes

On Friday, July 24, 2015 at 1:52:02 PM UTC+2, slowhandblues wrote:
>
> We have an application in Akka and having a problem with increasing heap 
> memory that is growing with time and eventually crashing the application. 
> On analyzing the heap dump we can see all our log entries are getting 
> persisted in the memory. Below is object tree for the same
>
> Akka dispatcher --> Unbounded mail box --> concurrent linked queue --> 
> envelop object consisting of all the log information
>
> Our actors are untyped actor and we are using akka default event logger 
> for logging
>
> Any idea what could be the problem with this?
>
>
> Thanks,
>
> Somak
>

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


[akka-user] Re: Akka-http websockets connection fails 95% of the time

2015-07-25 Thread Johannes Rudolph
Thanks Luc. I'll have a look next week.

On Saturday, July 25, 2015 at 9:42:42 AM UTC+2, Luc Klaassen wrote:
>
> I've pushed the repository to github at 
> https://github.com/Luckl/AkkaHttpWebsockets
>
> Luc
>
> Op vrijdag 24 juli 2015 10:07:53 UTC+2 schreef Johannes Rudolph:
>>
>> Hi Luc,
>>
>> I'd like to get to the bottom of this problem to make sure we find the 
>> problem if there's one in akka-http. Could you provide an executable 
>> reproduction?
>>
>> Johannes
>>
>> On Thursday, July 23, 2015 at 3:05:29 PM UTC+2, Luc Klaassen wrote:
>>>
>>> Hello all,
>>>
>>> I'm having trouble setting up a basic websocket connection to an 
>>> akka-http websocket server. Almost all connections seem to fail, and are 
>>> timed out by akka-http after 5 seconds. Sometimes (seems completely random) 
>>> the connection succeeds though and the message sending works flawlessly. 
>>> This is really confusing me, since the problem is not 100% reproducible and 
>>> seems completely random.
>>>
>>> See stackoverflow for my current code:
>>>
>>> http://stackoverflow.com/questions/31579385/connection-between-js-and-akka-http-websockets-fails-95-of-the-time
>>>
>>> Any suggestions are greatly appreciated.
>>>
>>



[akka-user] Re: Can PathMatcher match string as a path parameter?

2015-07-29 Thread Johannes Rudolph
Hi John,

yes, this is called PathMatchers.segment() (to be consistent with the Scala 
side). It may make sense to create an alias or have another look at the 
names in general to see if they can be made more consistent.
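
A sketch in the javadsl, using the class names of the later stabilized API (the experimental 1.0 names may differ slightly):

```java
// Matches any single path segment and extracts it as a String,
// e.g. "test111" in /service/test111/car.
PathMatcher1<String> b = PathMatchers.segment();
Route route = path(
    PathMatchers.segment("service").slash(b).slash("car"),
    name -> complete("segment was: " + name));
```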

Johannes

On Wednesday, July 29, 2015 at 7:06:16 AM UTC+2, john@gmail.com wrote:
>
> Hi,
> I  can get do
>
> PathMatcher a = PathMatchers.longValue();
>
> Can I also get a String like with the following pseudo-code?
>
> PathMatcher b = PathMatchers.stringValue();
>
> so b would match "test111" in  /service/test111/car
>
>
>
>
>



[akka-user] Re: How do I create an Unmarshaller / Marshaller in java?

2015-08-04 Thread Johannes Rudolph
Hi John,

On Monday, July 27, 2015 at 11:00:57 AM UTC+2, john@gmail.com wrote:
>
> or am I supposed to use Marshallers.toEntity() but this seems quite a lot 
> of work 
>

Yes, to create custom marshallers you would need to use one of the methods 
in `Marshallers`, to create custom unmarshallers you would use one of the 
methods in `Unmarshallers`. 
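
As a sketch of what such a custom unmarshaller can look like — written against the later stabilized javadsl (`Unmarshaller.entityToString`, `thenApply`), whose names differ slightly from the experimental `Unmarshallers` factories discussed here; `Pet` and `parsePet` are hypothetical placeholders:

```java
// Build a custom Unmarshaller by post-processing the built-in
// entity-to-String one. Pet and parsePet stand in for your own
// type and parsing logic.
Unmarshaller<HttpEntity, Pet> petUnmarshaller =
    Unmarshaller.entityToString().thenApply(json -> parsePet(json));
```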

From your other topic, I see that you are trying to deal with JSON data. The 
Jackson support (while written in Scala) is supposed to be used from Java. 
See this example about how to make use of it:

https://github.com/akka/akka/blob/release-2.3-dev/akka-http-tests/src/main/java/akka/http/javadsl/server/examples/petstore/PetStoreExample.java#L21

HTH
Johannes



[akka-user] Re: Akka HTTP back-pressured connections

2015-08-10 Thread Johannes Rudolph
Thanks for the reminder Hunor and Chris.

The comment is indeed a left-over from previous versions. For a while now, 
the number of concurrently accepted connections for one `bindAndHandle` 
invocation can be configured by setting 
`akka.http.server.max-connections`.
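
Concretely, this is a plain configuration entry in application.conf (the value shown is illustrative, not a recommendation):

```hocon
# Upper bound on concurrently open connections per server binding;
# further incoming connections are back-pressured until a slot frees up.
akka.http.server.max-connections = 1024
```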

I added a commit to PR 18143 [1] that fixes Scaladoc.

Johannes

[1] https://github.com/akka/akka/pull/18143

On Monday, August 10, 2015 at 3:00:47 PM UTC+2, Hunor Kovács wrote:
>
> +1
>
> On Monday, August 10, 2015 at 3:36:26 PM UTC+3, Chris Baxter wrote:
>>
>> Just as an FYI, I saw your question on SO and posted my own here as it 
>> interested me as well.
>>
>> https://groups.google.com/forum/#!topic/akka-user/DjZDR-VWX20
>>
>> On Monday, August 10, 2015 at 8:23:05 AM UTC-4, Hunor Kovács wrote:
>>>
>>> Hey peeps, got a question here 
>>> .
>>>  
>>> It's about akka-http back-pressured connections.
>>>
>>



[akka-user] Re: akka streams 1.0 and http routing - in java

2015-08-11 Thread Johannes Rudolph
Hi paweł,

you are on the right track. You need to provide the data as a 
`Source<ByteString, ?>`. Then, you can create a response like this:

Source<ByteString, ?> data = ...
HttpResponse response = 
HttpResponse.create().withEntity(HttpEntities.createChunked(contentType, 
data));

If you know the length of the data (i.e. the aggregated number of octets in 
all the ByteStrings the Source provides), you can also use

HttpEntities.create(contentType, length, data)

to create a (non-chunked) streamed default entity.

HTH
Johannes



On Monday, August 10, 2015 at 10:38:40 PM UTC+2, paweł kamiński wrote:
hi,
probably again I've overlooked it in the documentation, but I cannot find a 
way to respond to an http request with a streamed response. Probably 
HttpEntityChunked should be used, but again I cannot figure out how to set 
up entities and flows correctly.


this is basic example


private RouteResult getEvents(RequestContext ctx)
{

final Source source = producer.produce(10, 100);
return ctx.complete(HttpResponse.create());
}


thanks for any hint on that



[akka-user] Re: Can't find akka.http.javadsl.testkit.JUnitRouteTest?

2015-08-11 Thread Johannes Rudolph
Hi john,

it seems that this is a bug that we thought had been fixed but which has 
reappeared. I filed it as https://github.com/akka/akka/issues/18178

Johannes

On Tuesday, August 11, 2015 at 7:08:44 AM UTC+2, john@gmail.com wrote:
>
> Ok I realized that it is in the release-2.3-dev branch.
>
> Out of the box with sbt package I can create a 2.10 
> akka-http-testkit-experimental_2.10 version, 
> but I need a 2.11 version.
>
> Can somebody help? With just java know-how I have no idea how to make a 
> 2.11 package 
>
> Am Montag, 10. August 2015 23:05:39 UTC+2 schrieb john@gmail.com:
>>
>> I wanted to use akka.http.javadsl.testkit.JUnitRouteTest. But I can only 
>> find the scala dsl  one in akka-http-testkit-experimental_2.11?
>>
>> If its not ready can I use the scala one from java? 
>>
>



Re: [akka-user] Re: akka-http 1.0 RC and SSL and Java

2015-08-13 Thread Johannes Rudolph
Hi Rob,

here's some recent testing code that creates and uses certificates that are 
signed by a custom CA. It doesn't require fiddling with Java's `keytool`. 
The instructions should work similarly for proper certificates in which 
case you leave out the CA and "Create server certificate" steps and replace 
the CA key and certificate with the ones you got from your CA.

https://github.com/akka/akka/pull/18200/files

Here's the usual disclaimer: never copy and paste security-relevant code or 
key material directly, but try to understand what you are doing. The problem 
with security-related things is that any single compromised step of the 
process may put the security of the whole process at risk.

Johannes



On Thursday, August 13, 2015 at 2:28:25 AM UTC+2, Will Sargent wrote:
>
> The SSLContext is responsible for handling the trust store -- you set it 
> up and pass that into akka-http using HttpsContext.create(sslContext,...).
>
> How to set up the SSLContext is a bit confusing.  There are examples in 
> the guides for Android:
>
>
> https://developer.android.com/training/articles/security-ssl.html#HttpsExample
>
> and Fedora:
>
>
> https://docs.fedoraproject.org/en-US/Fedora_Security_Team/1/html/Defensive_Coding/sect-Defensive_Coding-TLS-Client-OpenJDK.html
>
> And you can also check out the "Understanding JSSE" section of the blog 
> post:
>
>
> https://tersesystems.com/2014/01/13/fixing-the-most-dangerous-code-in-the-world/
>
> On Tue, Aug 11, 2015 at 12:47 AM, > wrote:
>
>> Ehm, nobody?
>>
>>
>> Am Montag, 10. August 2015 16:11:45 UTC+2 schrieb akk...@gmail.com:
>>>
>>> I try to created a REST server with akka-http / SSL / Java.
>>>
>>> I'm a little bit stuck with the SSL part as following article doesn't 
>>> provide too much information: 
>>> http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/java/http/server-side/low-level-server-side-api.html
>>> (Paragraph *Server-Side HTTPS Support*)
>>>
>>> Could somebody link me to an example where I see how to configure SSL 
>>> (truststore.jks) with akka-http?
>>>
>>> Thanky you.
>>> Rob
>>>
>> -- 
>> >> Read the docs: http://akka.io/docs/
>> >> Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>> >> Search the archives: https://groups.google.com/group/akka-user
>> --- 
>> You received this message because you are subscribed to the Google Groups 
>> "Akka User List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to akka-user+...@googlegroups.com .
>> To post to this group, send email to akka...@googlegroups.com 
>> .
>> Visit this group at http://groups.google.com/group/akka-user.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>



Re: [akka-user] Re: Looking for Code example to create akka.http.javadsl.testkitTestResponse

2015-08-13 Thread Johannes Rudolph
Yes, good point.

https://github.com/akka/akka/pull/18201

On the other hand, I wonder why you need to implement TestResponse, john? 
You should only ever need to implement it if you want to support another 
kind of testing framework. Is that what you are trying to achieve? To write 
tests with JUnit just derive your test class from JUnitRouteTest.

Have you seen 
http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/java/http/routing-dsl/testkit.html?

Johannes



On Thursday, August 13, 2015 at 9:01:40 AM UTC+2, Akka Team wrote:
>
> Yes, that is a little unfortunate: the Scala type system allows the 
> expression of non-termination (the bottom type—Nothing), which Java does not 
> know about, so this is what happens “under the hood” (i.e. in the sausage 
> factory).
>
> We might want to change the return type of `fail` from Nothing to Unit, 
> what do you think, Johannes?
>
> Regards,
>
> Roland
>
> On Tue, Aug 11, 2015 at 1:52 PM, > wrote:
>
>> ok I found out:
>>
>> return new TestResponse(response,null,null,null){
>>   public  scala.runtime.Nothing$ fail( String message) { return null;}
>>
>>   public void assertEquals(int expected, int actual, String message) { }
>>
>>
>>   public void assertEquals(Object expected, Object actual, String 
>> message) { }
>>
>>   @Override
>>   public void assertTrue(boolean predicate, String message) {}
>>};
>> }
>>
>>
>>
>> The "scala.runtime.Nothing$" return type on fail was not obvious to me.
>>  
>>
>>
>> Am Dienstag, 11. August 2015 12:24:32 UTC+2 schrieb john@gmail.com:
>>>
>>> I am extending  
>>>
>>> akka.http.javadsl.testkit.RouteTest.
>>>
>>> In one of the methods I need to return a TestResponse.
>>>
>>>  How can I do this with java 
>>>
>>> public TestResponse createTestResponse(HttpResponse response) {
>>>
>>>return new TestResponse(response,null,null,null);
>>> }
>>>
>>>
>>> does not work?
>>>
>>> -- 
>> >> Read the docs: http://akka.io/docs/
>> >> Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>> >> Search the archives: https://groups.google.com/group/akka-user
>> --- 
>> You received this message because you are subscribed to the Google Groups 
>> "Akka User List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to akka-user+...@googlegroups.com .
>> To post to this group, send email to akka...@googlegroups.com 
>> .
>> Visit this group at http://groups.google.com/group/akka-user.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
>
> -- 
> Akka Team
> Typesafe - Reactive apps on the JVM
> Blog: letitcrash.com
> Twitter: @akkateam
>



[akka-user] Re: [akka-http][websocket] how to connect two independent WebSocket streams - include an actor as bridge between streams?

2015-08-14 Thread Johannes Rudolph
Hi Michael,

here's an idea of how it could work without actors:

 * when the first client connects, create two as-yet-unconnected 
Publisher/Subscriber pairs and wrap them into a Flow[Message, Message] while 
crossing the links; pass one of the flows as the websocket handler
 * when the second client connects, pass the other flow as the websocket 
handler
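
The steps above can be sketched as follows, using current operator names (in the 1.0-era API discussed here these were Sink.publisher, Source.subscriber and Flow.wrap):

```scala
import akka.http.scaladsl.model.ws.Message
import akka.stream.Materializer
import akka.stream.scaladsl.{Flow, Keep, Sink, Source}
import org.reactivestreams.{Publisher, Subscriber}

def bridge()(implicit mat: Materializer)
    : (Flow[Message, Message, Any], Flow[Message, Message, Any]) = {
  // one detached Subscriber/Publisher pair per direction; elements are
  // back-pressured until both ends are attached
  def pair(): (Subscriber[Message], Publisher[Message]) =
    Source.asSubscriber[Message]
      .toMat(Sink.asPublisher(fanout = false))(Keep.both)
      .run()

  val (aIn, aOut) = pair() // carries messages A -> B
  val (bIn, bOut) = pair() // carries messages B -> A

  // "cross the links": each client writes into its own pair
  // and reads from the other's
  val handlerForA =
    Flow.fromSinkAndSource(Sink.fromSubscriber(aIn), Source.fromPublisher(bOut))
  val handlerForB =
    Flow.fromSinkAndSource(Sink.fromSubscriber(bIn), Source.fromPublisher(aOut))
  (handlerForA, handlerForB)
}
```

Hand `handlerForA` to the first websocket upgrade and `handlerForB` to the second; the detached pairs hold back demand until both clients are connected, which gives the "pause" behaviour described in the question.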

See this comment about how to do it: 
https://github.com/akka/akka/issues/17610#issuecomment-113127961

and

https://github.com/akka/akka/issues/17769

In fact, it seemed easy and interesting enough to try, so I gave it a quick 
shot, and it does indeed work:

https://github.com/spray/akka/commit/720b06dd78b65f96ddf316f63108dd959b896ac1

(I made the screenshot with websocketd's dev console 
(https://github.com/joewalnes/websocketd/wiki/Developer-console))

Johannes

On Friday, August 14, 2015 at 1:31:03 PM UTC+2, Michael Zinsmaier wrote:
>
> Hey together,
>
> I have a use case where I want to connect two akka-http WebSocket streams 
> on a server + maintain backpressure. But the two
> streams might be opened with some delay (i.e. the first one is opened some 
> minutes before the 2nd and should be "paused").
>
> What should I use as Sink / Source to answer the HTTP Upgrade Request? An 
> Actor as bridge between two streams?
>
> *Problem:*
>
> let's assume I have the following setup:
>
>   client A  < - - WS - - >  Akka HTTP server  < - - WS - - >  client B
>
>
>
> Client A and Client B both connect to the server at roughly the same point 
> in time, either one could be the first to actually establish the connection.
> What I want to do is forward all data that comes from A to B and the other 
> way around.
>
> But I want to make use of backpressure such that neither of the two can 
> get overwhelmed. Especially during the connection setup backpressure should 
> be applied to
> throttle the client that connected first until the 2nd client is connected 
> and data can be forwarded at all.
>
> To complete the WebSocket Upgrade request I need a Source and a Sink. If I 
> cannot/do not want to keep the Upgrade Request open until the 2nd client 
> connects the question arises
> what should I use as Source and Sink. My current solution uses an 
> intermediate actor to bridge between two streams, kind of a dynamic network.
>
> *From the stream documentation: "Dynamic networks need to be modeled by 
> explicitly using the Reactive Streams interfaces for plugging different 
> engines together."*
>
> This way the first client will connect and two ActorBridges will be 
> created: one serves as Source, the other one as Sink. The Source does 
> nothing and the Sink accepts a few items but later on throttles the 
> first client by applying backpressure. Once the second client connects it 
> will use the same two ActorBridges but in reverse roles. Thus sources and 
> sinks are interconnected and data flows.
>
> Is that the correct way to do it? Is there a solution with less custom 
> code? Are there any caveats that I miss?
>
>
>
> best regards Michael
>
>
> *Example Code:*
>
> class StreamBridge extends ActorSubscriber with ActorPublisher[Int] {
>
>   val msgQueue = mutable.Queue[Int]()
>   val MaxInFlight = 5
>
>   override def receive: Receive = {
> // Subscriber
> case OnNext(in: Int) => {
>   msgQueue.enqueue(in)
>   deliver()
> }
> case OnError(err: Throwable) => {
>   onError(err)
>   context.stop(self)
> }
> case OnComplete => {
>   onComplete()
>   context.stop(self)
> }
> // Publisher
> case Request(num) => {
>   deliver()
> }
> case Cancel => {
>   cancel()
>   context.stop(self)
> }
>   }
>
>   def deliver(): Unit = {
> while (isActive && totalDemand > 0 && msgQueue.nonEmpty) {
>   onNext(msgQueue.dequeue())
> }
>   }
>
>   val requestStrategy = new MaxInFlightRequestStrategy(MaxInFlight) {
> def inFlightInternally: Int = {
>   msgQueue.size
> }
>   }
> }
>
>
>

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.

[akka-user] Re: Specs2RouteTest equivalent for Akka HTTP

2015-08-18 Thread Johannes Rudolph
Hi,

specs2 testing support hasn't been ported from spray yet. The basic issue 
is that akka modules weren't allowed to depend on Scala dependencies 
because of bootstrapping issues when releasing a new Scala version that 
includes Akka. I'm not completely sure if this is still the most recent 
information on the topic so I created 
https://github.com/akka/akka/issues/18246 to clarify the issue.

Thanks for the report,
Johannes

On Tuesday, August 18, 2015 at 4:38:58 AM UTC+2, Everson Alves da Silva 
wrote:
>
> Hi,
>
> We are starting a new project at work and we chose to use Akka HTTP for 
> the API. So far everything is working as expected and the docs/tutorials 
> for spray work without changes or with minimal changes.
>
> However, now that we want to add integration tests for the routes, the 
> examples that fits our needs in spray are done using Specs2RouteTest or 
> Scalatest. We have a somewhat large codebase that uses Specs2 and don't 
> want to introduce Scalatest. Is there an equivalent for Specs2RouteTest on 
> Akka HTTP?
>
> Regards,
> @johnny_everson 
>
>
>



Re: [akka-user] [akka-streams] Http source design question?

2015-08-20 Thread Johannes Rudolph
Hi john,

AFAIU your question was not about existing code in akka-http but how to put 
a queue before a superpool, right? And no, putting a blocking queue inside 
of an Iterator is likely not the best solution because it permanently 
occupies one thread that is blocked in `queue.take` almost all the time. 
akka-http already provides `Http.singleRequest` which implements exactly 
what you are looking for so that should be your default entry point for 
running single requests. 

If this isn't enough (the best example IMO would be the need to provide 
custom queuing logic) akka-stream should provide a general facility that 
allows users to put a queue around one-to-one flows (with or without 
context).
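
To make the trade-off concrete, here is a plain-Scala sketch (no Akka or akka-stream APIs; `BoundedQueue` is a made-up name) of a queue that makes the overflow decision explicit instead of blocking a thread, which is the decision any custom queuing logic in front of a pool has to make:

```scala
import scala.collection.immutable.Queue

// Minimal sketch: a bounded queue where overflow is an explicit outcome
// the caller must handle (fail, drop, or retry) rather than a blocked thread.
final case class BoundedQueue[A](capacity: Int, elems: Queue[A] = Queue.empty[A]) {
  // Enqueue returns Left when the queue is full instead of blocking.
  def enqueue(a: A): Either[String, BoundedQueue[A]] =
    if (elems.size >= capacity) Left(s"queue full, dropping $a")
    else Right(copy(elems = elems.enqueue(a)))

  // Dequeue models downstream demand: only called when there is demand.
  def dequeue: Option[(A, BoundedQueue[A])] =
    elems.dequeueOption.map { case (a, rest) => (a, copy(elems = rest)) }
}
```

The point of the sketch is that backpressure-friendly queuing surfaces "what happens on overflow" as a value the caller decides on, rather than hiding it in a blocked `queue.take`.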

Johannes

On Thursday, August 20, 2015 at 11:56:06 AM UTC+2, Konrad Malawski wrote:
>
> Hi John, 
> good observation, however this has not shown up in our initial 
> benchmarking as a bottleneck so far AFAIR.
> We're taking a systematic approach and first improving the perf of the 
> "biggest bang for the buck" elements of Akka Http.
> Currently this means we're focusing on fusing and need to implement fusing 
> of fan-in and fan-out stages, which will save a lot of actor creation 
> during request handling.
>
> From there we'll benchmark again and see what then becomes the bottleneck 
> :-)
>
> If you'd like to help out, we'd definitely welcome contributions (backed 
> with benchmarks in case of perf improvements).
>
> Thanks!
>
> On Sat, Aug 15, 2015 at 1:01 PM, > wrote:
>
>> Is this source sound for a "flow" based on Http().superPool(...) :
>>
>> BlockingQueue<HttpRequest> queue = new LinkedBlockingQueue<>();
>>
>> Source<HttpRequest, ?> source = Source.fromIterator(() ->
>>   new Iterator<HttpRequest>() {
>>  public boolean hasNext() {return true;}
>>
>>  public HttpRequest next() {
>> try {
>>return queue.take();
>> } catch (InterruptedException e) {}
>> return null;
>>  }
>>  });
>>
>>
>> So many clients will be asynchronously adding HttpRequests to the single 
>> queue and the flow (based on Http().superPool(...)) should then process 
>> these requests.
>>
>> I do think that using a concurrent LinkedBlockingQueue does scale. But 
>> is there an alternative?
>>
>>
>>
>>
>
>
>
> -- 
> Cheers,
> Konrad 'ktoso' Malawski
> Akka  @ Typesafe 
>



[akka-user] Re: [akka-http-experimental-1.0] More Fine granular logging of accepted connections needed

2015-08-22 Thread Johannes Rudolph
Yes, the solution is not to log at DEBUG level. :)
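
For example, assuming the standard Typesafe config setup, raising the log level in `application.conf` silences these messages (a sketch; adjust to your own logging backend):

```hocon
akka {
  # DEBUG enables the per-connection and per-stream messages shown below
  loglevel = "INFO"
  stdout-loglevel = "INFO"
}
```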

Johannes

On Saturday, August 22, 2015 at 1:32:36 PM UTC+2, Simon Schäfer wrote:
>
> I use akka-http to build a webservice. This works great so far, I just 
> don't like the noise of logging. The webservice not only answers to 
> websockets requests but also serves *.js and *.css files. The problem is 
> that every request creates one of these logging messages:
>
> backend [DEBUG] [08/22/2015 13:23:01.262] 
> [default-akka.actor.default-dispatcher-2] 
> [akka://default/system/IO-TCP/selectors/$a/0] New connection accepted
> ...
> backend [DEBUG] [08/22/2015 13:23:01.672] 
> [default-akka.stream.default-file-io-dispatcher-20] 
> [akka://default/user/$a/flow-12-1-concatMatSource-singleSource] No more 
> bytes available to read (got `-1` or `0` from `read`), marking final bytes 
> of file @ -1
> ...
> backend [DEBUG] [08/22/2015 13:23:06.677] 
> [default-akka.actor.default-dispatcher-10] 
> [akka://default/user/$a/flow-10-3-publisherSource-prefixAndTail] Cancelling 
> akka.stream.impl.MultiStreamOutputProcessor$SubstreamOutput@48be1c2d 
> (after: 5000 ms)
>
> It tells me that the connection is accepted and because no further 
> requests arrive it closes the connection after 5s. And it even tells me 
> that no more bytes are available. All of these messages are unnecessary. 
> Can I disable them somehow?
>
> I created the server with this:
>
>   val binding = Http().bindAndHandle(service.route, interface, port)
>
> where route looks like this:
>
>   def route = get {
> pathSingleSlash(complete {
>   val content = Content.indexPage(
> cssDeps = Seq("default.css", "codemirror.css", "solarized.css"),
> jsDeps = Seq("clike.js", "markdown.js", 
> "scalajs-test-ui-fastopt.js", "scalajs-test-ui-launcher.js")
>   )
>   HttpEntity(MediaTypes.`text/html`, content)
> }) ~
> 
> path("scalajs-test-ui-jsdeps.js")(getFromResource("scalajs-test-ui-jsdeps.js"))
>  
> ~
> 
> path("scalajs-test-ui-fastopt.js")(getFromResource("scalajs-test-ui-fastopt.js"))
>  
> ~
> 
> path("scalajs-test-ui-launcher.js")(getFromResource("scalajs-test-ui-launcher.js"))
>  
> ~
> path("marked.js")(getFromResource("marked/lib/marked.js")) ~
> path("clike.js")(getFromResource("codemirror/mode/clike/clike.js")) ~
> 
> path("markdown.js")(getFromResource("codemirror/mode/markdown/markdown.js")) 
> ~
> path("default.css")(getFromResource("default.css")) ~
> 
> path("codemirror.css")(getFromResource("codemirror/lib/codemirror.css")) ~
> 
> path("solarized.css")(getFromResource("codemirror/theme/solarized.css")) ~
> path("auth") {
>   handleWebsocketMessages(authClientFlow())
> } ~
> path("communication") {
>   parameter('name) { name ⇒
> handleWebsocketMessages(communicationFlow(sender = name))
>   }
> } ~
> rejectEmptyResponse {
>   path("favicon.ico")(getFromResource("favicon.ico", 
> MediaTypes.`image/x-icon`))
> }
>   }
>
> Maybe I need to split things up and handle websockets by a different 
> routing?
>



Re: [akka-user] Re: akka-http and selenium

2015-08-24 Thread Johannes Rudolph
Hi Rafał,

On Monday, August 24, 2015 at 3:49:57 PM UTC+2, Rafał Krzewski wrote:
>
> Oh, I see. OneServerPerSuite / OneServerPerTest traits must be really 
> handy. Providing similar helpers for testing akka-http would be hard, 
> because Play is a framework that mandates a well defined entry point and 
> configuration strategy, whereas akka-http is not.
>

I wonder what you mean exactly? Maybe you mean that spray/akka-http doesn't 
really promote a "convention over configuration" approach to a degree that 
many modern web frameworks do. But that doesn't mean that there are no 
entry points or best practices upon which you could build similar testing 
functionality. It may mean that you need to be a bit more explicit about 
what you are currently testing and where to find the application components.

Or am I missing something?

Johannes



Re: [akka-user] [akka-stream-experimental-1.0] How to reuse akka.stream.scaladsl.Tcp connections?

2015-08-25 Thread Johannes Rudolph
Hi Simon,

I think there are two conceptual difficulties you need to tackle:

The first is the problem which you describe with infinite / finite streams 
which is actually more one of the "traditional" (= actor based) push-style 
asynchronous programming versus the "new" [*] pull-style of reactive/akka 
streams which was introduced to deal with backpressure. The issue with 
backpressure is that it only works if all components take part in it. If 
you have one component that opts-out of backpressure it will have to fail 
or drop elements if it becomes overloaded and this component will become 
the weakest link (or the "Sollbruchstelle", the predetermined breaking point) of your application under load. 
Akka currently supports `Source.actorRef` (and `Sink.actorRef` 
respectively) which does exactly this translation from a push-style Actor 
API to the pull-style streams API. You usually don't want to use them as 
they will be limited by design and bound to fail under load.

Pull-style means that you need to write your program so that it is 
completely passive and waits for demand (you could also call that style 
"reactive", you setup your program to passively wait for a source to 
provide elements and then react to them). Writing "passive" programs is 
perfectly suited to services that follow the request / response principle. 
You setup your handler as a flow and just put it between the 
Source[Request] / Sink[Response].

But what does it mean for a client program which usually actively tries to 
achieve something? I think you can also build such a program in a passive 
style: if it doesn't take any dynamic input it is easy as you can create 
all the sources and sinks from static data. If it does take dynamic input 
(like user input), you just need a Source of that user input that only 
prompts the user for more input if there's demand. It should be possible to 
structure a program like this but it will be a pervasive change that cannot 
be recommended in all cases.
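
As a plain-Scala illustration of that pull style (nothing here is an Akka API): a source that does no work at all until a consumer actually demands elements, the way a demand-driven user-input source would only prompt when asked.

```scala
// "Pull style" in miniature: LazyList evaluates its elements only when a
// downstream consumer requests them, so no work happens without demand.
object PullStyle {
  // `produce` is a stand-in for any effectful input source, e.g. prompting
  // the user for more input only when there is downstream demand.
  def demandDriven[A](produce: () => A): LazyList[A] =
    LazyList.continually(produce())
}
```

Creating such a source performs no side effects; only `take`/`foreach` and friends trigger `produce`, which is exactly the "passive, waits for demand" structure described above.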

So, in reality for client applications you will probably use things like 
the brittle `Source.actorRef` and just statically configure the size of the 
buffers and queues to be big enough for the common use cases. (You could 
say that `Source.actorRef` is not more brittle than the JVM itself which 
you also need to configure with a maximum heap size.) In any case using 
streams will force you to think about these kind of issues.

The second difficulty is a shortcoming in your description (IMO) regarding 
your notion of "reusing a connection" that is also uncovered by your use of 
streams. Look at what this line means:

val resp = Source(byteString).via(tcpFlow).runFold(ByteString.empty)(_ ++ _)

It says, "open a TCP connection, stream the source byteString to the 
connection, read all data *until the connection is closed by the other side* 
and return this data". So, the end of the response is determined by looking 
for the end of the TCP stream. To be able to reuse a connection you will 
need a different end-of-response marker than the signal that the TCP connection 
has been closed. You will need some framing protocol on top of TCP that can 
discern where one response ends and the next one starts and implement a 
streaming parser for that. You would start by implementing a

def requestRenderer: Flow[Request, ByteString]

and a

def responseParser: Flow[ByteString, Response]

Between those you can put the TCP connection:

def pipeline: Flow[Request, Response] = 
Flow[Request].via(requestRenderer).via(Tcp.outgoingConnection).via(responseParser)
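
To make the framing idea concrete, here is a self-contained sketch (plain Scala over `Array[Byte]`, not the akka-stream flows above; `Framing` and its method names are made up) of a 4-byte length-prefixed protocol: `renderFrame` is the kind of thing a requestRenderer emits per request, and `parseFrames` is the streaming-parser side that extracts complete responses and keeps the leftover bytes for the next network chunk.

```scala
// Illustrative length-prefixed framing: the end-of-response marker that
// lets you reuse a TCP connection instead of relying on stream close.
object Framing {
  // Prefix the payload with its length as 4 big-endian bytes.
  def renderFrame(payload: Array[Byte]): Array[Byte] = {
    val len = payload.length
    Array[Byte](
      (len >>> 24).toByte, (len >>> 16).toByte,
      (len >>> 8).toByte, len.toByte
    ) ++ payload
  }

  // Extract all complete frames; return them plus the unconsumed remainder
  // (an incomplete frame stays buffered until more bytes arrive).
  def parseFrames(buf: Array[Byte]): (List[Array[Byte]], Array[Byte]) =
    if (buf.length < 4) (Nil, buf)
    else {
      val len =
        ((buf(0) & 0xff) << 24) | ((buf(1) & 0xff) << 16) |
        ((buf(2) & 0xff) << 8) | (buf(3) & 0xff)
      if (buf.length < 4 + len) (Nil, buf)
      else {
        val frame = buf.slice(4, 4 + len)
        val (rest, leftover) = parseFrames(buf.drop(4 + len))
        (frame :: rest, leftover)
      }
    }
}
```

A real responseParser flow is this logic lifted into a stateful stream stage, carrying the leftover buffer between incoming chunks.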

Now you still have the problem of how to interface with that Flow. (And maybe 
that is what your question is really about.) If you can structure your program 
as hinted above then you could create a

// prompts user for more input
def userInput: Source[UserInput]

and a 

def userInputParser: Flow[UserInput, Request]

and a

def output: Sink[Response]

so you could finally create and run your program as 

userInput.via(userInputParser).via(pipeline).to(output).run()

(If you are into functional programming, that may be actually very similar 
to how you would have structured your program in any case).

For the rest of us, it would be nice if we could wrap the `pipeline` above 
with something to either get a function `Request => Future[Response]` or an 
ActorRef to which requests could be sent and which would send back a 
message after the response was received. Unfortunately, The Right Solution 
(TM) for that case is still missing. It would be nice if there was a 
one-to-one Flow wrapper in akka-stream that would do exactly this 
translation but unfortunately there currently isn't one readily available. 
You can build such a component yourself (Mathias actually built a specific 
solution for akka-http to implement `Http.singleRequest()` which has 
exactly the same problem).

So, how can you build something like that? Here is a sketch:

class RequestResponseActor extends Actor {
  val pipelineActor = 
Source.actorRef[Request].via(pipeline).to(Sink.

[akka-user] Re: [akka-http] Custom Directives - Scala

2015-08-27 Thread Johannes Rudolph
Hi,

use the combinators of the Directive class to make custom directives. E.g.

extractClientIp.flatMap { ip =>
  val cond = // something
  if (cond) pass else reject(...)
}

or if you want to return a value, i.e. create a Directive1[T], use 
`provide` instead of `pass`.
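
To see why `flatMap` composes directives this way, here is a tiny stand-alone model (plain Scala; none of these are akka-http's real types, and `requireLocal` is a hypothetical check) of the pass/provide/reject semantics:

```scala
// Not akka-http's actual types: a minimal model of directive composition.
object DirectiveModel {
  sealed trait Outcome[+T]
  final case class Passed[T](value: T) extends Outcome[T]
  final case class Rejected(reason: String) extends Outcome[Nothing]

  final case class Directive1[+T](run: () => Outcome[T]) {
    // A rejection short-circuits the chain, like `reject(...)` in a route.
    def flatMap[U](f: T => Directive1[U]): Directive1[U] =
      Directive1(() => run() match {
        case Passed(t)   => f(t).run()
        case r: Rejected => r
      })
  }

  def provide[T](t: T): Directive1[T] = Directive1(() => Passed(t))
  def rejectWith(reason: String): Directive1[Nothing] =
    Directive1(() => Rejected(reason))

  // Hypothetical example: only pass requests from the loopback address.
  def requireLocal(ip: String): Directive1[String] =
    provide(ip).flatMap { addr =>
      if (addr == "127.0.0.1") provide(addr) else rejectWith(s"non-local: $addr")
    }
}
```

The inner route only ever runs in the `Passed` branch, which is exactly what chaining `extractClientIp.flatMap { ... }` achieves in a real route tree.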

HTH
Johannes

On Tuesday, August 25, 2015 at 8:23:54 AM UTC+2, tdroz...@gmail.com wrote:
>
> I'm new to working with akka-http and have a question that I hope someone 
> can provide some guidance with.
>
> I've been trying to build my own custom directive - that would take the 
> results of an existing directive - perform some logic and then move onto 
> the next directive, or reject route. 
>
> For example...
>
> Let's say I want to take the extractClientIp directive, use the 
> RemoteAddress value to perform some logic and either continue on with 
> further directives or reject the request.  I don't intend to complete the 
> request here.
>
> Is this possible?  To make this re-usable is a directive what I'd want to 
> do here?  Or am I over thinking this?
>
> Any help is appreciated!
>
> -t
>



Re: [akka-user] Akka-http 1.0-RC4 ssl session info/cert info accessible?

2015-08-27 Thread Johannes Rudolph
Hi,

this is now tracked as https://github.com/akka/akka/issues/18349.

Johannes

On Thursday, August 27, 2015 at 4:03:26 PM UTC+2, Mark van Buskirk wrote:
>
> Hi Dr. Kuhn,
> How is this issue progressing, is there a ticket I can watch for updates? 
> It's hard to use ssl without being able to get the peerprincipal/ cert 
> info/ dn. 
> Thanks!
> Mark
>
> On Thursday, July 2, 2015 at 8:11:40 AM UTC-4, rkuhn wrote:
>>
>> Hi Mark,
>>
>> this is indeed not yet implemented: high-level HTTPS support will be done 
>> after 1.0, including session renegotiation.
>>
>> Regards,
>>
>> Roland
>>
>> 1 jul 2015 kl. 22:02 skrev Mark van Buskirk :
>>
>> Seems like the only place the term "principal" is located in the code is 
>> in the akka.stream.io.SessionBytes. I am guessing this isn't possible with 
>> RC4?
>>
>>
>> https://github.com/akka/akka/blob/releasing-akka-stream-and-http-experimental-1.0-RC4/akka-stream/src/main/scala/akka/stream/io/SslTls.scala
>>
>>
>>
>> On Tuesday, June 30, 2015 at 3:51:44 PM UTC-4, Mark van Buskirk wrote:
>>>
>>> Using spray we could get at the ssl-session info and find out the cert 
>>> principal name and stuff using by setting "ssl-session-info-header = 
>>> on" in the config and then extracting a "SSLSessionInfo" object from the 
>>> httprequest's header. Can we get this info using RC4? If so how?
>>>
>>> Here is a link to the SSLSessionInfo object we extract:
>>> http://spray.io/documentation/1.1-SNAPSHOT/api/#spray.util.SSLSessionInfo
>>>
>>> Here is the spray config documentation, search for 
>>> "ssl-session-info-header".
>>> http://spray.io/documentation/1.2.3/spray-can/configuration/
>>>
>>>
>>>
>>
>>
>>
>>
>> *Dr. Roland Kuhn*
>> *Akka Tech Lead*
>> Typesafe  – Reactive apps on the JVM.
>> twitter: @rolandkuhn
>> 
>>
>>



[akka-user] Re: Debugging `jstack PID` on Non-Completing Future?

2015-09-15 Thread Johannes Rudolph
Is the code that does the DB query on the stack of any thread? If not, 
google for thread starvation :)
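
For illustration, here is a minimal, self-contained sketch (plain Scala, not the original poster's code) of what thread starvation looks like: a pool with a single thread whose only thread blocks waiting for an inner task that can never be scheduled on the same, fully occupied pool. A jstack dump of such a process shows the outer task blocked and no thread running the query at all.

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future, TimeoutException}
import scala.concurrent.duration._

object StarvationDemo {
  // Returns true if the outer future starved (never completed in time).
  def run(): Boolean = {
    val pool = Executors.newFixedThreadPool(1)
    implicit val ec: ExecutionContext = ExecutionContext.fromExecutorService(pool)
    val outer = Future {
      val inner = Future(42)           // queued behind us, never runs
      Await.result(inner, 1.second)    // blocks the pool's only thread
    }
    val starved =
      try { Await.result(outer, 2.seconds); false }
      catch { case _: TimeoutException => true }
    pool.shutdownNow()
    starved
  }
}
```

The fix is the same as on a real dispatcher: never block inside tasks running on a small shared pool, or isolate blocking calls on a dedicated dispatcher.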

Johannes

On Sunday, September 13, 2015 at 6:22:25 PM UTC+2, Kevin Meredith wrote:
>
> My DB calls, which returns a `Future[...]` is not completing in time for 
> my remote spray service's time-out. Note that the time-out is 30 seconds, 
> which should be more than sufficient for my simply DB query.
>
> As a result, I'm trying to figure out why the Future is not completing.
>
> I SSH'd onto my server that's running `java`. Then, I ran *jstack PID*.
>
> What should I be looking for this in this output to try to understand why 
> this Future[...] isn't completing?
>



[akka-user] Re: Akka-Http handle Multipart formData

2015-09-18 Thread Johannes Rudolph
Hi,

the approach that Konrad showed will work, however, the problem with the 
suggested approach is that it will load the complete file into memory which 
limits the number of concurrent uploading requests severely. Any multipart 
entity is a natural stream of parts so your original approach is a good 
start.

On Friday, September 11, 2015 at 4:12:02 PM UTC+2, lucian enache wrote:
>
> Hello everyone, for akka-http when handling a POST request, if I have my 
> entity as a Multipart.FormData , 
> and I POST some JSON and a binary file, how can I split the two when I 
> parse the formData ?
>

What do you mean by "split"? If you post them as multipart you will have 
several parts which are split automatically and will be delivered as 
elements on the `parts` stream. Inside of your `mapAsync` you can inspect 
the metadata (like the filename of a part, or the field name for which it 
was posted) before doing anything with the data (but then don't do 
`.map(_.entity.dataBytes)` first), so there's really no need to "split" 
processing but just decide which processing you want to do based on the 
metadata of each part. If you need to correlate both parts with some 
context things will be a bit harder, I would then suggest to return some 
result for each part inside of mapAsync (e.g. the filename of the part 
together with the future representing the result of processing that part) 
and then collect all these result and do whatever is necessary before 
completing.

Here's a sketch:

case class ProcessingResult(partFileName: String, result: Result)
def processBinaryPart(part: Multipart.FormData.BodyPart): Future[ProcessingResult] = ...
def processJsonPart(part: Multipart.FormData.BodyPart): Future[ProcessingResult] = ...

entity(as[Multipart.FormData]) { (formData) =>
  val processedParts = formData.parts.mapAsync(1) { part =>
part.filename match {
  case Some("xyz.bin") => processBinaryPart(part)
  case Some("abc.json") => processJsonPart(part)
}
  } // result of mapAsync is Source[ProcessingResult, ...]
  // now collect all results into a map for further processing
  .runFold(Map.empty[String, ProcessingResult])((map, res) => map + (res.partFileName -> res))
  // result is a Future[Map[String, ProcessingResult]]
  
  // wait for processing to have finished and then create response
  onSuccess(processedParts) { resultMap =>
complete { ...
}
  }
}

Cheers,
Johannes



Re: [akka-user] How do I attach multiple subscribers to a stream using fanoutPublisher, actorSubscriber and FlowGraph.partial?

2015-10-05 Thread Johannes Rudolph
Hi Abhijit,

from a quick glance at your code it seems that you are broadcasting an 
HttpResponse and then access its `entity.dataBytes` in several branches. 
This is not supported. You need to put the broadcast behind the 
`entity.dataBytes` and not behind the stream of responses.

We prepared a similar example for scala.world but didn't get to show the 
fanoutPublisher in the end. There's still an early version of it in the 
example repository:

https://github.com/jrudolph/scala-world-2015/blob/43e6a4e50c68fdf22e70e0137d6bb54e9614e808/backend/src/main/scala/example/repoanalyzer/Webservice.scala

I'd suggest using `Http.singleRequest` for creating the single request to 
meetup to simplify things. If you use fanoutPublisher you need to be aware 
of several potential traps:

 * it backpressures over all its subscribers, if one is slow or stalled, 
this will stop all the others as well at some point
 * this means you may want to add a `.buffer` directly behind each branch 
to relieve the fanoutPublisher from backpressure and add an OverflowStrategy 
that suits your needs
 * OverflowStrategies may not be what you want, so you may want to use 
`conflate`, instead, (which is somewhat harder to use) to implement custom 
overflow strategies
 * the fanoutPublisher shuts down if the last subscriber cancels its 
subscription, to make it persistent you need to attach a dummy sink to it 
to keep it alive

HTH
Johannes

On Tuesday, October 6, 2015 at 6:21:52 AM UTC+2, Abhijit Sarkar wrote:
>
> Hi Victor,
>
> The reason I didn't want to totally do away with streaming and create some 
> local source was that the point of my question would be lost. In an effort 
> to simplify, I might end up with a working but completely unrelated and 
> somewhat trivial example, which is IMHO the issue with the examples on most 
> blogs.
>
> So I converted to Twitter streaming example to a minimal Meetup streaming 
> example. The latter doesn't require any auth and hence is a similar but 
> simpler representation of what I'm after. And guess what, I was able to 
> reproduce the issue again. That project is attached (meetup-streaming.zip). 
> All you need to do is download it and execute the class 
> name.abhijitsarkar.scala.meetup.MeetupStreamingApp.
>
> Just for kicks, I did also run an example without using a streaming HTTP 
> source and I didn't get the IllegalStateException. That's precisely the 
> reason I didn't want to deviate too far from my actual use case.
>
> Thanks for your time. If you need anything else from me, please let me 
> know.
>
> BTW, in your sample code below allTweets ~> broadcast doesn't compile. 
> They're different types.
>
> Regards,
> Abhijit
>
> On Monday, October 5, 2015 at 1:44:43 AM UTC-7, √ wrote:
>
> Hi Abhijit,
>
> I think it would be much more helpful with a *minimized* reproducer.
> Have you tried something similar to:
>
>   val partial = FlowGraph.partial() { implicit builder =>
> import FlowGraph.Implicits._
> val allTweets = Flow[HttpResponse].map { _.entity.dataBytes 
> }.flatten(FlattenStrategy.concat).map {
>   b => parseTweet(b.utf8String)
> }
>
> val isAfterEpoch = (t: Tweet) => t.createdAt.getYear > 1970
> val broadcast = builder.add(Broadcast[HttpResponse](2))
>
> allTweets ~> broadcast
>  broadcast ~> Flow[Tweet].filter(isAfterEpoch) ~> 
> Sink.actorSubscriber(TwitterSubscriber.props("good"))
>  broadcast ~> Flow[Tweet].filter(!isAfterEpoch(_)) ~> 
> Sink.actorSubscriber(TwitterSubscriber.props("bad"))
>
> SinkShape(broadcast.in)
>   }
>
> On Mon, Oct 5, 2015 at 10:28 AM, Abhijit Sarkar  
> wrote:
>
> I've attached the maven project for picture 1. If you run the class 
> name.abhijitsarkar.scala.scauth.TwitterStreamingApp, you'll get an 
> IllegalStateException as I've posted before. There's no code for picture 2 
> as I couldn't figure out how to attach multiple subscribers to a fanout 
> publisher (as shown by ? in the pic).
>
> The only problem is that in order to connect to Twitter and get data, you 
> need OAuth credentials. Without those, you can't actually execute the 
> program (well, technically you can but Twitter will return auth error).
>
> On Monday, October 5, 2015 at 12:55:27 AM UTC-7, √ wrote:
>
> Hi,
>
> both of those should work, so I'm looking fwd to the code.
>
> -- 
> Cheers,
> √
> On 5 Oct 2015 09:46, "Abhijit Sarkar"  wrote:
>
>
> I read that 
> .
>  
> In fact, I always change the version in the URL to "current" before reading 
> any Akka Streams doc as I don't want to waste time reading outdated ones.
>
> I've attached pictures showing what I'm trying to do. The gray bounding 
> boxes enclosing the graphs show

Re: [akka-user] How do I attach multiple subscribers to a stream using fanoutPublisher, actorSubscriber and FlowGraph.partial?

2015-10-06 Thread Johannes Rudolph
Here is a sketch:

val dataStream: Future[Source[ByteString, Any]] =
  Http().singleRequest(...).map(_.entity.dataBytes)
val parseData: Flow[ByteString, DomainEvent, Unit] = ...
val parsedDataStream: Future[Source[DomainEvent, Any]] =
  dataStream.map(_.via(parseData))

val broadcastSink: Graph[SinkShape[DomainEvent], Unit] =
  FlowGraph.partial() { implicit builder =>
    import FlowGraph.Implicits._
    val broadcast = builder.add(Broadcast[DomainEvent](2))

    broadcast ~> sink1
    broadcast ~> sink2

    SinkShape(broadcast.in)
  }

parsedDataStream.onComplete {
  case Success(parsedStream) => parsedStream.runWith(broadcastSink)
  case Failure(e) => // handleFailure
}
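
The underlying constraint is that `entity.dataBytes` is a single-pass stream: it can only be consumed once, which is why the broadcast has to sit behind the parsed elements rather than duplicating the response. A dependency-free analogy in plain Scala (an `Iterator` standing in for the byte stream; none of this is akka API):

```scala
// A single-pass "stream" of chunks, standing in for entity.dataBytes.
def dataBytes: Iterator[String] = Iterator("chunk1", "chunk2", "chunk3")

// Wrong shape: two branches each try to consume the same single-pass stream;
// the second branch finds it already drained.
val shared = dataBytes
val branch1 = shared.toList // List(chunk1, chunk2, chunk3)
val branch2 = shared.toList // List() -- already exhausted

// Right shape: consume the bytes once, parse, then fan the *elements* out.
val parsed = dataBytes.toList.map(_.toUpperCase) // stand-in for parseData
val (matching, rest) = parsed.partition(_.endsWith("1"))
```

The same single-consumption rule is what makes the "broadcast the HttpResponse" variant fail at materialization time.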


On Tuesday, October 6, 2015 at 9:23:23 AM UTC+2, Abhijit Sarkar wrote:
>
> Hi Johannes,
> Thank you for your response. Can you show me what do you mean by "put the 
> broadcast behind the entity.dataByes'? Please pardon me if this is obvious 
> as I'm just learning.
>
> I'm using Source.single(httpRequest), not fanoutPub. The example you 
> pointed me to use fanout, which I intend to study but not apply in this 
> case.
>
> Regards,
> Abhijit
>
>
> On Monday, October 5, 2015 at 11:29:54 PM UTC-7, Johannes Rudolph wrote:
>
> Hi Abhijit,
>
> from a quick glance to your code it seems that you are broadcasting an 
> HttpResponse and then access its `entity.dataBytes` in several branches. 
> This is not supported. You need to put the broadcast behind the 
> `entity.dataBytes` and not behind the stream of responses.
>
> We prepared a similar example for scala.world but didn't get to show the 
> fanoutPublisher in the end. There's still in an early version of it in the 
> example repository:
>
>
> https://github.com/jrudolph/scala-world-2015/blob/43e6a4e50c68fdf22e70e0137d6bb54e9614e808/backend/src/main/scala/example/repoanalyzer/Webservice.scala
>
> I'd suggest using `Http.singleRequest` for creating the single request to 
> meetup to simplify things. If you use fanoutPublisher you need to be aware 
> of its several potential traps:
>
>  * it backpressures over all its subscribers, if one is slow or stalled, 
> this will stop all the others as well at some point
>  * this means you may want to add a `.buffer` directly behind each branch 
> to relieve the fanoutPublisher from backpressure and add a OverflowStrategy 
> that suits your needs
>  * OverflowStrategies may not be what you want, so you may want to use 
> `conflate` instead (which is somewhat harder to use) to implement custom 
> overflow strategies
>  * the fanoutPublisher shuts down if the last subscriber cancels its 
> subscription, to make it persistent you need to attach a dummy sink to it 
> to keep it alive
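
The first trap above (backpressure over all subscribers) can be sketched without akka: a toy fan-out that may only emit when every subscriber's buffer has room, so one stalled subscriber eventually stalls all of them. All names here are illustrative, not akka API:

```scala
// Toy fan-out stage: emission succeeds only if *every* consumer buffer has room.
final case class Consumer(capacity: Int) { var buffered: Int = 0 }

def tryEmit(consumers: Seq[Consumer]): Boolean =
  if (consumers.forall(c => c.buffered < c.capacity)) {
    consumers.foreach(c => c.buffered += 1)
    true
  } else false // backpressured: the slowest/stalled consumer gates everyone

val fast    = Consumer(capacity = 16) // would happily take much more
val stalled = Consumer(capacity = 2)  // never drains its buffer
val delivered = (1 to 10).count(_ => tryEmit(Seq(fast, stalled)))
// delivered == 2: once the stalled consumer's buffer fills, nothing flows at
// all, which is why the advice above is to add a `.buffer` with an
// OverflowStrategy behind each branch.
```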
>
> HTH
> Johannes
>
> On Tuesday, October 6, 2015 at 6:21:52 AM UTC+2, Abhijit Sarkar wrote:
>
> Hi Victor,
>
> The reason I didn't want to totally do away with streaming and create some 
> local source was that the point of my question would be lost. In an effort 
> to simplify, I might end up with a working but completely unrelated and 
> somewhat trivial example, which is IMHO the issue with the examples on most 
> blogs.
>
> So I converted to Twitter streaming example to a minimal Meetup streaming 
> example. The later doesn't require any auth and hence is a similar but 
> simpler representation of what I'm after. And guess what, I was able to 
> reproduce the issue again. That project is attached (meetup-streaming.zip). 
> All you need to do is download it and execute the class 
> name.abhijitsarkar.scala.meetup.MeetupStreamingApp.
>
> Just for kicks, I did also run an example without using a streaming HTTP 
> source and I didn't get the IllegalStateException. That's precisely the 
> reason I didn't want to deviate too far from my actual use case.
>
> Thanks for your time. If you need anything else from me, please let me 
> know.
>
> BTW, in your sample code below allTweets ~> broadcast doesn't compile. 
> They're different types.
>
> Regards,
> Abhijit
>
> On Monday, October 5, 2015 at 1:44:43 AM UTC-7, √ wrote:
>
> Hi Abhijit,
>
> I think it would be much more helpful with a *minimized* reproducer.
> Have you tried something similar to:
>
>   val partial = FlowGraph.partial() { implicit builder =>
> import FlowGraph.Implicits._
> val allTweets = Flow[HttpResponse].map { _.entity.dataBytes 
> }.flatten(FlattenStrategy.concat).map {
>   b => parseTweet(b.utf8String)
> }
>
> val isAfterEpoch = (t: Tweet) => t.createdAt.getYear > 1970
> val broadcast = builder.add(Broadcast[HttpResponse](2))
>
> allTweets ~> broadcast
>  broa

[akka-user] Re: akka-http tcp proxy protocol support

2015-10-06 Thread Johannes Rudolph
Hi Julian,

see scaladsl/Http.scala for all the glue code between TCP and HTTP. The 
hard thing will be getting the metadata from the proxy implementation into 
the requests but I guess putting it in a @volatile var after reading it and 
mapping each request would make for a simple solution.

In case you get this done, I guess your PROXY implementation and its 
integration with HTTP would make a nice PR ;)
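
For context, the metadata in question arrives as the PROXY protocol v1 preamble, a single human-readable text line sent before any application bytes. A minimal sketch of pulling the original client address out of it (a simplified toy parser, not a complete implementation of the spec):

```scala
final case class ProxyInfo(protocol: String, sourceIp: String, destIp: String,
                           sourcePort: Int, destPort: Int)

// Parses the v1 header line, e.g. "PROXY TCP4 192.0.2.1 198.51.100.1 56324 443\r\n".
def parseProxyV1(line: String): Option[ProxyInfo] =
  line.trim.split(" ") match {
    case Array("PROXY", proto, src, dst, srcPort, dstPort) =>
      scala.util.Try(ProxyInfo(proto, src, dst, srcPort.toInt, dstPort.toInt)).toOption
    case _ => None // includes "PROXY UNKNOWN" and malformed lines
  }

val info = parseProxyV1("PROXY TCP4 192.0.2.1 198.51.100.1 56324 443\r\n")
```

In the scheme suggested above, the parsed `ProxyInfo` would be stashed (e.g. in a `@volatile var`) after reading the preamble off the TCP stream, and attached to each subsequent request.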

HTH
Johannes

On Monday, October 5, 2015 at 6:50:03 PM UTC+2, Julian Howarth wrote:
>
> Are there any plans to add proxy protocol support for akka-http? If not, 
> how difficult would it be to manually add support via a flow/stage?
>
>
> The reason we need it is specific but possibly not uncommon:
>
>
>  - we currently use akka-http to provide a websocket api which works very 
> well
>
>  - we deploy on AWS 
>
>  - we use an AWS Elastic load balancer to distribute traffic to our 
> websocket instances
>
>
> The above all works without issue, but we now need to identify the IP 
> addresses that the websocket connections originate from. 
>
>
> For HTTP(S) connections, AWS ELB adds an X-Forwarded-For header which 
> is already supported in akka-http. However, in order to use AWS ELB for 
> websocket connections, the ELB needs to be configured to listen using TCP 
> rather than HTTP which means there is no X-Forwarded-For header and instead 
> the proxy protocol is used.
>
>
> We already have a stateful stage that manages the proxy protocol for our 
> TCP connections but what I need some guidance with is how to use that when 
> using the HTTP bindings. Any ideas?
>
>
> Thanks,
>
>
> Julian
>

-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] Marshalling FormData to multi-part mime with one field per part

2015-10-15 Thread Johannes Rudolph


On Wednesday, October 14, 2015 at 7:17:14 PM UTC+2, Eric Swenson wrote:
>
> AWS/S3 doesn’t want the form data parts, nor the whole HTTP POST to be 
> chunked. So those entities have to be made “strict”.  So I had to make each 
> MIME part be strict, as well as the final Multipart.FormData entity.
>

Have you seen/tried 
http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html? 
It seems to suggest that chunked transfer-encoding is supported, though in 
a somewhat complicated format where, on top of chunked transfer-encoding, 
AWS introduces its own chunked content-encoding layer in which each chunk 
is signed separately.
 

> The point of my original post was that it seems the akka-http library has 
> made a policy decision
>

WDYM? akka-http supports both `FormData` (www-url-encoding) and 
`multipart/formdata` encodings, so I wonder which "policy decision" you 
mean?
 

> that the only way to convert a FormData to an entity (in a multi-part MIME 
> body) is to create a single part, using www-url-encoding, and include all 
> fields in query string syntax. This doesn’t work for AWS and therefore it 
> seems a limiting policy decision.
>

Yes, because `scaladsl.model.FormData` models www-url-encoding and not 
multipart/formdata. You seem to suggest that there should be a more general 
encoding that would allow marshalling to either of the two variants. In any 
case, there's no policy decision here but, as explained above, two different 
models for two different encodings.

> It should be equally possible/easy to convert a FormData to a collection of 
> parts. 
>

Yes, that could be useful, or otherwise a more general representation of 
form data that unifies both kinds.
 

> The requirement that there should be a Content-Type header in these parts, 
> also should not be dictated by the framework.
>

 I see that you have to fight a bit to get akka-http to render exactly what 
you want, but the reason seems to be mostly that AWS has very specific 
rules and interpretations of how to use HTTP. akka-http is a general 
purpose HTTP library and cannot foresee all possible deviations or 
additional constraints HTTP APIs try to enforce. So, in the end, akka-http 
should make it possible to connect to these APIs (as long as they support 
HTTP to a large enough degree) but it may mean that the extra complexity 
the API enforces lies in your code. You could see that as a feature.

That said, I'm pretty sure that there's some sugar missing in the APIs that 
would help you build those requests more easily. If you can distill the 
generic helpers that are missing from your particular use case we could 
discuss how to improve things.

Here are some things I can see:

 * you can directly build a `Multipart.FormData.Strict` which can be 
directly converted into a Strict entity. I guess one problem that we could 
solve is that there's only a non-strict `FormData.BodyPart.fromFile` which 
builds a streamed part to prevent loading the complete file into memory. 
There's no variant that would actually load the file and create a strict 
version of it. We could add that (even if it wouldn't be recommended to 
load files into memory...)
 * having to run marshalling and unmarshalling manually could be replaced 
by some sugar
 * dealing with those chains of `T => Future[U]` functions is cumbersome, 
for this and the previous point, we had the simple `pipelining` DSL in 
spray which seems to have been lost in parts in the transition to akka-http

Btw. I don't think your code is too bad; if you break it down into methods 
for every step and put them into a utility object, you can reuse it and 
won't need to deal with it any more.

Johannes



Re: [akka-user] Fusing in akka-streams 2.0

2015-12-23 Thread Johannes Rudolph
On Tuesday, December 22, 2015 at 8:37:08 PM UTC+1, Adam Warski wrote:
>
> Having conflate only makes sense if there's some non-fused component out 
> there I guess, but you are correct of course that it can be fused with 
> things before/after it. 
>

I thought a bit about this. Actually, it is not really about fusing but 
about which resources the producer and consumer of `conflate` are bounded 
by. If the input and output of conflate are constrained by the same 
resource (e.g. CPU), the actual result will mostly depend on the fairness 
of the scheduling algorithm that provisions the (little) existing capacity 
of the resource to the (many) tasks at hand. Usually, that happens at the 
run queues in the scheduler, by choosing how long one task is allowed to 
run before it is preempted to let another task run.

Fusing "just" adds another run queue so we now have 
 * the kernel queue that schedules RUNNABLE threads on cores
 * the dispatcher that schedules tasks to pooled threads
 * the actor mailbox that processes a certain amount of messages in a batch 
before rescheduling on the dispatcher
 * the fusing GraphInterpreter queue that executes events up to an event 
limit before it looks at the actor mailbox again

So, the behavior of conflate probably changes if producers and consumers 
live in the same fused part or if they were previously also bounded by CPU. 
In many cases, however, input and output of conflate are not constrained by 
the same resource (and as you say are asynchronous for other reasons) so my 
guess would be that the result won't change much in these scenarios.
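
For intuition, here is `conflate`'s contract in miniature, independent of fusing (plain Scala, not akka code): the producer is never backpressured, and whenever the consumer lags, pending elements are merged with a user-supplied function, so a slower consumer sees fewer, aggregated elements:

```scala
// Simulate a consumer that takes one element per `batch` produced elements;
// everything produced in between is conflated with `merge`.
def conflateRun[T](produced: Seq[T], batch: Int)(merge: (T, T) => T): Seq[T] =
  produced.grouped(batch).map(_.reduce(merge)).toList

val input = Seq(1, 2, 3, 4)
val keptUp  = conflateRun(input, batch = 1)(_ + _) // consumer keeps up: 1, 2, 3, 4
val lagging = conflateRun(input, batch = 2)(_ + _) // half as fast: sums 3, 7
```

Which of the two behaviors you observe in practice depends, as described above, on which run queue ends up rationing the shared resource.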

Johannes



[akka-user] Re: Memory leak with akka-streams 2.0.1?

2016-01-08 Thread Johannes Rudolph
Hi Leo,

when do you want the WS connections to be closed? I'm probably missing 
something, but I don't see code on either side which would attempt to shut 
down the connection.

Johannes

On Friday, January 8, 2016 at 2:02:12 PM UTC+1, Clelio De Souza wrote:
>
>
> Hi,
>
> We use Akka Streams for streaming live data points via Websocket client 
> connections in our project. We have this in production and it has been 
> working fine for some time.
>
> I have been working on upgrading to Akka Streams 2.0.1 and we noticed a 
> significant degradation of resource management and/or a potential memory leak.
>
> To demonstrate that I have pushed a small project called 
> "websocket-stream" to my Github account 
> https://github.com/cleliofs/websocket-stream - I have also taken some 
> screen shots showing my findings.
>
> The scenario I ran was 3 sequential executions for 500 requests each first 
> with Akka Streams 2.0.1 and then I repeated the test with Akka Streams 1.0:
>
>
> 1) Akka Streams 2.0.1 
>
> a) See attachment "akka-streams-2.0.1-mem.png" (JMX Heap graph): Here we 
> can see humps showing the memory allocation. Also after each execution (~ 
> 2mins) I trigger GC to see how much of memory would be recollected. On this 
> image we can see that progressively the amount of memory re-collected by GC 
> was less and less.
>
> b) See attachment "akka-streams-2.0.1-histogram.png" (JMX memory 
> histogram): Here we can see a memory histogram of the allocated memory at 
> the end of all 3 executions (~ 6mins). I would like to draw your attention 
> to "akka.stream.FlowShape" class. For this class there are > 64k allocated 
> objects occupying 1.7% of the whole heap size (and of course, this is also 
> cascading to other objects, such as "akka.stream.GraphStageModule", 
> "akka.stream.Outlet", "akka.stream.AbstractStage", "akka.stream.Inlet")
>
>
> 2) Akka Streams 1.0
>
> a) See attachment "akka-streams-1.0-mem.png" (JMX Heap graph): Here at the 
> start of the test, a similar pattern happens with some memory being 
> allocated, but the huge difference occurs when I trigger GC (I did 3 times 
> by clicking "Perform GC"). Visually we can see that all the time the memory 
> being freed up via GC is significantly better with Akka Streams 1.0 in 
> comparison with Akka Streams 2.0.1
>
> b) See attachment "akka-streams-1.0-histogram.png" (JMX memory histogram): 
> The same improvement can be seen for the memory histogram when the number 
> of live objects for "akka.stream.FlowShape" at the end of the test (after 
> performing 3 GCs) was reduced to only 3 (with only 96 bytes allocated - 
> 0.0%).
>
>
>
> So, clearly, based on my tests, Akka Streams 1.0 manages resources better, 
> freeing them up when the websocket connections are closed, whereas the same 
> does not apply to 2.0.1.
>
> I added on my flow a PushStage to hook up the onPush and other methods 
> such as onDownstreamFinish and postStop. Interestingly, for my Flow with 
> 2.0.1 I can not see the "onDownstreamFinish" nor "postStop" being printed, 
> only for version 1.0 of Akka Streams I can see those methods being invoked 
> after a websocket client disconnection.
>
> At the moment, unfortunately, we postponed the task for upgrading Akka 
> Streams to 2.0 until we are confident this potential memory leak issue is 
> fixed.
>
> The source code is available at my Github repository (
> https://github.com/cleliofs/websocket-stream) if anyone would like to try 
> that out.
>
>
> Thanks,
> Leo
>



Re: [akka-user] Akka HTTP File Not Found Exception

2016-01-19 Thread Johannes Rudolph
I see several issues here:

 * Entity stream failures don't go through the routing layer exception 
handling code. It also wouldn't make any sense because we are already 
sending out the response here and we cannot change the response at that 
time any more. We should document that somewhere.
 * In this particular case, however, the error could have been reported 
earlier, i.e. either before or at materialization time. There should be 
some FilePublisher constructor which immediately reports the error.
 * It would be nice if exceptions happening during materialization could be 
handled by the routing infrastructure. Not sure though how to implement 
this because stream construction and materialization are currently done at 
quite different places.

Johannes


On Saturday, January 16, 2016 at 12:17:15 AM UTC+1, Konrad Malawski wrote:
>
> Oh wait you should check 
> http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-M2/scala/http/directives/file-and-resource-directives/getFromFile.html
>
> I'm already in bed though ;)
> On Jan 16, 2016 00:01, "Konrad Malawski" wrote:
>
>> This may be a bug, I'll look into it soon so thanks for reporting!
>> Glad you liked the talk :-)
>>
>> -- 
>> Cheers,
>> Konrad 'ktoso’ Malawski
>> Akka  @ Typesafe 
>>
>> On 15 January 2016 at 23:57:01, Gustavo Politano (gustavo@gmail.com 
>> ) wrote:
>>
>> Hi! 
>>
>> I was watching Konrad's workshop from Scala eXchange - amazing talk, btw 
>> - (
>> https://skillsmatter.com/skillscasts/6869-workshop-end-to-end-asynchronous-back-pressure-with-akka-streams)
>>  
>> and latter playing with the code.
>>
>> Step 3 demos a stream completed with a file from disk. I changed the name 
>> of the file to a nonexistent one, so I could force an exception and check 
>> the exception handling. My expectation was that the browser would 
>> receive an HTML with the exception details, since the following handler is 
>> used: 
>>
>>
>>  val myExceptionHandler = ExceptionHandler {
>>
>>    case ex: Exception =>
>>
>>      complete {
>>
>>        <html>
>>
>>          <body>
>>
>>            {ex.getMessage}
>>
>>          </body>
>>
>>        </html>
>>
>>      }
>>
>>  }
>>
>>
>>  // our routes:
>>
>> val route: Route = handleExceptions(myExceptionHandler) {
>>
>>   helloRoutes ~ simpleStreamRoutes
>>
>> }
>>
>>
>>  But what happened was that the connection was reset in Safari and it 
>> received nothing. When I tested on Chrome, I saw that it started the 
>> download of a file with the wrong name, but it failed with a network error 
>> as well. On the console I could see the following:
>>
>>
>>  
>>  [ERROR] [01/15/2016 13:45:06.181] 
>> [default-akka.actor.default-dispatcher-12] 
>> [akka.actor.ActorSystemImpl(default)] Outgoing response stream error
>>
>> java.io.FileNotFoundException: .data.csv (No such file or directory)
>>
>>  at java.io.RandomAccessFile.open(Native Method)
>>
>> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
>>
>> at akka.stream.impl.io.FilePublisher.preStart(FilePublisher.scala:49)
>>
>>   at akka.actor.Actor$class.aroundPreStart(Actor.scala:485)
>>
>>   at 
>> akka.stream.impl.io.FilePublisher.akka$stream$actor$ActorPublisher$$super$aroundPreStart(FilePublisher.scala:34)
>>
>> at 
>> akka.stream.actor.ActorPublisher$class.aroundPreStart(ActorPublisher.scala:322)
>>
>>  at akka.stream.impl.io.FilePublisher.aroundPreStart(FilePublisher.scala:34)
>>
>> at akka.actor.ActorCell.create(ActorCell.scala:590)
>>
>> at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:461)
>>
>>at akka.actor.ActorCell.systemInvoke(ActorCell.scala:483)
>>
>>   at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:282)
>>
>>at akka.dispatch.Mailbox.run(Mailbox.scala:223)
>>
>> at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>
>>  at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>
>>  at java.lang.Thread.run(Thread.java:745)
>>
>>
>>  
>>  So my question is if this is the expected behavior or if it's a bug of some 
>> sorts. And, if it is the expected behavior, then I don't really understand 
>> when the exception handling would kick in, and where I could handle 
>> exceptions like this.
>>
>>
>>  Thanks,
>>
>> Gus.
>>
>>
>>  
>>  
>>  PS: link to the original code: 
>> https://github.com/ktoso/akka-scala-exchange/blob/master/src/main/scala/samples/scalaexchange/step3/SimpleStreamHttpServiceApp.scala
>>
>>

[akka-user] Re: HTTP header 'UpgradeToWebSocketResponseHeader: ' is not allowed in responses

2016-01-28 Thread Johannes Rudolph
Hi Shayan,

this seems to be a bug. I filed it 
here: https://github.com/akka/akka/issues/19639

The functionality shouldn't have changed, though, so you can safely ignore 
this output.

Johannes


On Wednesday, January 27, 2016 at 9:19:46 AM UTC+1, sha...@gearzero.com 
wrote:
>
> Team,
>
> I am trying out akka-http 2.4.2-RC1 for providing a websocket service. I 
> am getting the following warning in logs on socket connect:
>
> WARN  akka.actor.ActorSystemImpl - HTTP header 
> 'UpgradeToWebSocketResponseHeader: 
> ' is not allowed in responses
>
> Any idea where this might from.
>
> I am using a simple route with this directive:
>
> handleWebsocketMessages
>
>
> Thanks in advance,
> Shayan
>
>



[akka-user] Re: Dynamically add HTTP Routes

2016-02-01 Thread Johannes Rudolph
Hi Ubaldo,

yes, that's certainly possible. A `Route` is just a regular Java object, so 
you can choose a new `Route` to do the processing for every new request. 
You just need to make sure that you switch out the Route in a thread-safe 
way.

E.g.

val routeHolder = new AtomicReference[Route](initialRoute)
def updateRoute(newRoute: Route): Unit = routeHolder.set(newRoute)
def currentRoute(): Route = routeHolder.get

Http.bindAndHandle(currentRoute(), ...)

Probably even a `volatile var` would suffice if you don't care about the 
old value. The question then becomes how to organize your routes in an 
efficient way but that depends on how your routes and route updates would 
look like.

(You may find answers about this topic on the old spray mailing list as 
well.)

Johannes

On Friday, January 29, 2016 at 4:13:09 PM UTC+1, Ubaldo Taladríz wrote:
>
> Hi, I'm trying to dynamically add http routes to a server without 
> restarting
> Is it possible?
> I'm using Akka-http
>
> Regards
> Ubaldo
>



[akka-user] Re: Dynamically add HTTP Routes

2016-02-01 Thread Johannes Rudolph
On Monday, February 1, 2016 at 11:28:58 AM UTC+1, Johannes Rudolph wrote:
>
> Http.bindAndHandle(currentRoute(), ...)
>

Actually, I just figured that this won't work, because currentRoute() will 
only be evaluated once.

You will need a simple wrapper which is evaluated anew for every request. 
This can be any directive, e.g.

val wrapper: Route = get { currentRoute() }
Http.bindAndHandle(wrapper, ...)

or something simple like

val wrapper: Route = ctx => currentRoute()(ctx)
Http.bindAndHandle(wrapper, ...)
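
Taken together, the pattern is an atomically swappable holder plus a wrapper that re-reads it per request. A dependency-free sketch with `Route` modeled as a plain function (the real akka `Route` type differs, so treat the names as illustrative):

```scala
import java.util.concurrent.atomic.AtomicReference

// Stand-in for akka's Route: a request-to-response function.
type Route = String => String

val routeHolder = new AtomicReference[Route]((req: String) => s"v1: $req")
def updateRoute(newRoute: Route): Unit = routeHolder.set(newRoute)
def currentRoute(): Route = routeHolder.get()

// The wrapper delays the lookup until a request actually arrives, so swaps
// take effect immediately -- unlike evaluating currentRoute() once at bind time.
val wrapper: Route = req => currentRoute()(req)

val before = wrapper("GET /")              // served by the initial route
updateRoute((req: String) => s"v2: $req")  // hot-swap, thread-safe
val after  = wrapper("GET /")              // served by the new route
```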



[akka-user] Re: ('=') sign still not allowed in query string even with Uri.ParsingMode.Relaxed? (akka/akka#18479)

2016-10-17 Thread johannes . rudolph
Great, thanks Richard for tackling this and André for the explanations!

On Saturday, October 15, 2016 at 10:05:59 PM UTC+2, Richard Imaoka wrote:
>
> Sorry I had read the full discussion but I think I didn't correctly 
> understand what is allowed in Akka, and what is RFC 3986 compliant.
> Now it's clearer for me - double "=" isn't even RFC 3986 compliant, so it 
> shouldn't be allowed in the relaxed mode.
>
> I will try to add decent amount of examples for #276 so that other people 
> don't need to ask the same or similar question as I did !
>
>
> Thanks
>
> 2016年10月12日水曜日 16時31分59秒 UTC+9 André:
>>
>> I know it's long but please read the full discussion in 
>> https://github.com/akka/akka/issues/18479.
>>
>> The problem isn't the question mark but the double "=". That's why you 
>> get an IllegalUriException: Illegal query: Invalid input '='.
>>
>> > will try to work on #276 if no one is picking it up yet
>>
>> Excellent! :)
>>
>> On Tuesday, October 11, 2016 at 7:34:40 PM UTC+2, Richard Imaoka wrote:
>>>
>>> Ah, I thought akka/akka#18479 was saying that "GROUP=10380?page=2" is a 
>>> valid case for the relaxed mode, as RomanIakovlev said:
>>>
>>> > I still think it's a workaround rather than proper solution though. 
>>> Question marks and equal signs are not prohibited inside the query. Just 
>>> sayin'.
>>>
>>> If that is still not allowed, and if you need your own parsing 
>>> implementation to allow that, then clarification in the documentation helps 
>>> us more, which is what akka/akka-http#276 is asking for. (I will try to 
>>> work on #276 if no one is picking it up yet.)
>>>
>>> Thanks
>>>
>>> On Monday, October 10, 2016 at 17:23:00 UTC+9, André wrote:

 Hi Richard,

 "GROUP=10380?page=2" isn't a well-formed query and therefore can't be 
 parsed even in relaxed mode. akka/akka#18479 provides a way to prevent 
 parsing of the query component and to just look at the raw query string. 
 You can access your query via the queryString() method and parse it 
 yourself.

 HTH
 André

 On Sunday, October 9, 2016 at 6:38:34 PM UTC+2, Richard Imaoka wrote:
>
> Hi,
>
> While I tried to work on akka/akka-http#276 and looked at akka/akka#18479 
> (PR to fix #18479 = akka/akka#18715), I found that the following code:
>
> Query("GROUP=10380?page=2",mode = Uri.ParsingMode.Relaxed)
>
>
> throws an exception:
>
>   IllegalUriException: Illegal query: Invalid input '=', expected 
> '+', query-char, 'EOI', '&' or pct-encoded (line 1, column 17): 
> GROUP=10380?page=
>
>
> Wasn't akka/akka#18479 (PR akka/akka#18715) intended to make this a valid 
> query string with Uri.ParsingMode.Relaxed?
>
>
> *Additional Info*
> 1. I looked for a test case for this. According to UriSpec.scala, 
> Uri("?a=b=c").query() is still invalid, although the query() method has 
> default mode = Uri.ParsingMode.Relaxed.
>
>
> 2. I didn't fully understand how parsing works, but probably it's 
> because CharacterClasses.scala defines:
>
>   val `relaxed-query-char` = VCHAR -- "%&=#"
>
> where ('=') is made invalid?
>
> UriParser has 
>
> private[this] val `query-char` = uriParsingMode match {
>   case Uri.ParsingMode.Strict  ⇒ `strict-query-char`
>   case Uri.ParsingMode.Relaxed ⇒ `relaxed-query-char`
> }
>
> where `relaxed-query-char` is defined as above.
>
> Thanks,
> Richard
>
>
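
To make the quoted character-class rule concrete, here is a toy relaxed-mode query parser: '?' is an ordinary query character, but a second '=' within one key=value pair is rejected, which matches the reported error for "GROUP=10380?page=2". (Illustrative only, not akka's actual parser.)

```scala
// Toy relaxed-mode parser: '&' separates pairs, at most one '=' per pair.
def parseRelaxed(query: String): Either[String, Map[String, String]] =
  query.split("&").foldLeft[Either[String, Map[String, String]]](Right(Map.empty)) {
    case (Left(err), _) => Left(err)
    case (Right(acc), pair) =>
      pair.split("=", -1) match {
        case Array(k, v) => Right(acc + (k -> v))
        case Array(k)    => Right(acc + (k -> ""))
        case _           => Left(s"Illegal query: extra '=' in '$pair'")
      }
  }

val ok  = parseRelaxed("a=b?c&d=e")          // '?' is fine in relaxed mode
val bad = parseRelaxed("GROUP=10380?page=2") // second '=' in one pair: rejected
```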



Re: [akka-user] How to handle web socket message to upload base64 encoded string of 10mb file

2016-10-17 Thread johannes . rudolph
Thanks Rafał for these explanations. Just a small correction:

On Tuesday, October 11, 2016 at 3:58:09 PM UTC+2, Rafał Krzewski wrote:
>
> An alternative solution would be looking up websocket buffering settings 
> and jacking it up enough to receive all messages as Strict :)
>

Unfortunately, no, you cannot control whether you get Strict or Streamed. 
It only depends on what has already arrived in the OS-level network buffer 
and you cannot control how data might be split up there because it depends 
on the timing of network packets arriving there.

(We want to add alternative ways of handling websocket traffic so that you 
can actually specify a buffer size (and a timeout) for which akka-http will 
collect data for you and you are guaranteed to receive only Strict 
messages. We didn't get to that yet.)
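A hypothetical sketch of that proposed "collect up to a buffer size" behavior, in plain Java over an iterator of already-decoded text frames (akka-http had no such setting at the time, and the method and class names here are invented for illustration):

```java
// Hypothetical sketch: concatenate streamed frames into one strict message,
// giving up once a size limit is exceeded. Not akka-http API.
import java.util.Iterator;
import java.util.List;
import java.util.Optional;

public class StrictCollector {
    /** Returns the full message, or empty if the buffer limit was hit. */
    public static Optional<String> collectStrict(Iterator<String> frames, int maxChars) {
        StringBuilder sb = new StringBuilder();
        while (frames.hasNext()) {
            sb.append(frames.next());
            if (sb.length() > maxChars) return Optional.empty();
        }
        return Optional.of(sb.toString());
    }

    public static void main(String[] args) {
        System.out.println(collectStrict(List.of("ab", "cd").iterator(), 10)); // Optional[abcd]
        System.out.println(collectStrict(List.of("abcdef").iterator(), 3).isEmpty()); // true
    }
}
```

A timeout would additionally be needed in a real implementation, since a slow producer could otherwise stall the collection indefinitely.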

Johannes




[akka-user] Re: [akka-stream] Detect failure in MergeHub

2016-10-18 Thread johannes . rudolph
Hi Victor,

good point. I think the Scaladoc is wrong there. Could you raise an issue 
at akka/akka?

Johannes

On Tuesday, October 18, 2016 at 2:28:14 PM UTC+2, Victor wrote:
>
> Hi,
>
> It's written in the ScalaDoc of the *MergeHub.source* method that:
>
> If one of the inputs fails the Sink, the Source is failed in turn
>
>
> But, in the MergeHub source code, the *onUpstreamFailure* method only 
> throws an exception:
>
> override def onUpstreamFailure(ex: Throwable): Unit = {
>   throw new MergeHub.ProducerFailed("Upstream producer failed with 
> exception, " +
> "removing from MergeHub now", ex)
> }
>
> So maybe I'm missing something, but why would failing an input fail the 
> Source?
> I need the MergeHub to fail when an input fails, but it doesn't seem to 
> work.
>
> Thanks,
> Victor
>



[akka-user] Re: Fetching remote file via akka-http singleRequest results in extremely high allocation rate

2016-10-19 Thread johannes . rudolph
Hi Vladyslav,

this sounds like a worst-case scenario for the Framing stages: 45M lines 
and each line 2-3 characters long will put a lot of pressure on the streams 
infrastructure and the framing stage. It might still make sense to 
benchmark and optimize the Framing stage. One optimization could be to 
collect a few elements in one go (the current implementation only looks for 
one match at a time).
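The delimiter-framing work that dominates here can be sketched as a plain-Java stand-in for akka-stream's Framing stage (class and method names are illustrative, not akka API): a partial trailing line is buffered across chunk boundaries and complete lines are emitted as they appear.

```java
// Minimal stand-in for delimiter framing over "\n". With 45M tiny lines,
// each emitted line is a fresh allocation, which is the pressure point
// discussed above. Not akka API.
import java.util.ArrayList;
import java.util.List;

public class LineFramer {
    private final StringBuilder buffer = new StringBuilder();

    /** Appends a chunk and returns all complete lines found so far. */
    public List<String> onChunk(String chunk) {
        buffer.append(chunk);
        List<String> lines = new ArrayList<>();
        int nl;
        while ((nl = buffer.indexOf("\n")) >= 0) {
            lines.add(buffer.substring(0, nl));
            buffer.delete(0, nl + 1);
        }
        return lines;
    }

    public static void main(String[] args) {
        LineFramer f = new LineFramer();
        System.out.println(f.onChunk("a\nb")); // [a]  ("b" stays buffered)
        System.out.println(f.onChunk("c\n"));  // [bc]
    }
}
```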

Johannes


On Wednesday, October 19, 2016 at 4:11:29 PM UTC+2, 
vladysla...@rtsmunity.com wrote:
>
>
> After some playing with JMC and async boundaries it turned out the hottest 
> methods are related to ByteString splitting and delimiting (for example, 
> scala.collection.IndexedSeqOptimized$class.sameElements(IndexedSeqOptimized, 
> GenIterable) takes 73-75% of processor time). The input file is 45M lines, 
> 2-3-characters each. Is it at all possible to make this work?
>



[akka-user] Re: Fetching remote file via akka-http singleRequest results in extremely high allocation rate

2016-10-19 Thread johannes . rudolph
Using mapConcat instead may even be faster ;)

On Wednesday, October 19, 2016 at 5:05:22 PM UTC+2, 
vladysla...@rtsmunity.com wrote:
>
> Okay, so the issue really was Framing performance. Changing the Framing 
> stage to
>
> flatMapConcat(chunk -> Source.from(Arrays.asList(chunk.split("\n"))))
>
> actually made it reasonably fast.
>
>
> On Wednesday, October 19, 2016 at 3:25:50 PM UTC+2, 
> vladysla...@rtsmunity.com wrote:
>>
>> Hello,
>>
>> I have the following problem. I'm trying to fetch a file via HTTP in 
>> streaming fashion, processing it line by line (code: 
>> https://gist.github.com/anonymous/c737fa388b55181dcdab9fa6cb8cb2bc), but 
>> it is extremely slow (about a minute to fetch a 150MB file to a t2.micro 
>> instance from S3). I guess something is reallocated for each input line 
>> (input files have very short lines), resulting in chainsaw-like heap size 
>> graph and extremely high allocation rate (JMC screenshot: 
>> http://prnt.sc/cw8si6). What am I doing wrong?
>>
>> Thanks
>>
>


