[akka-user] Re: ConductR sandbox linking to another container

2016-08-15 Thread Chris Baxter
Hey Christopher.  Thanks for getting back to me.  Glad to hear that you 
have this use case on your roadmap.  In the meantime, I will use the -e 
option workaround that you suggested.  Thanks again.

On Monday, August 15, 2016 at 4:04:10 AM UTC-4, Christopher Hunt wrote:
>
> Hi Chris,
>
> There's nothing built in to do this right now. Sounds like what we need to 
> do here is allow the service locator to be configured with an external 
> service. We have this on our to-do list.
>
> However, you should be able to pass an environment variable via the -e 
> option and have it resolve to the URI of your Cassandra instance. Your 
> Typesafe Config file could then use environment variable substitution to 
> declare the location of your Cassandra service. How does that sound?
>
> Kind regards,
> Christopher
>
> On Monday, 15 August 2016 00:13:41 UTC+10, Chris Baxter wrote:
>>
>> I don't know of any ConductR user group or forum out there, so I am 
>> asking here.  I am playing around with the ConductR sandbox on my Mac and I 
>> want to be able to have my 3 ConductR nodes communicate with Cassandra, 
>> which is running in another local container.  Usually, this can be 
>> accomplished with links (--link) established when starting up the 
>> containers.  But it seems that you cannot use the --link option when 
>> running the sandbox via "sandbox run ...".  I know you can deploy Cassandra 
>> as another bundle into ConductR but I don't want to go that route.  Does 
>> anyone have any expertise or suggestions on how to set up a networking 
>> link between a ConductR node's container in the sandbox and my 
>> Cassandra container?
>>
>
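The environment-variable substitution Christopher describes can be sketched in Typesafe Config's HOCON syntax. The key name `cassandra.uri` and the variable name `CASSANDRA_URI` below are illustrative assumptions, not names taken from ConductR itself:

```hocon
# application.conf -- hypothetical key names, for illustration only
cassandra {
  # default used when no override is supplied
  uri = "http://127.0.0.1:9042"
  # optional override: only applied when CASSANDRA_URI is set in the environment
  uri = ${?CASSANDRA_URI}
}
```

The sandbox could then be started with something like `sandbox run ... -e CASSANDRA_URI=http://<cassandra-host>:9042`, following the -e workaround suggested above.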

-- 
>>>>>>>>>>  Read the docs: http://akka.io/docs/
>>>>>>>>>>  Check the FAQ: 
>>>>>>>>>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>>>>>>>>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


[akka-user] ConductR sandbox linking to another container

2016-08-14 Thread Chris Baxter
I don't know of any ConductR user group or forum out there, so I am asking 
here.  I am playing around with the ConductR sandbox on my Mac and I want 
to be able to have my 3 ConductR nodes communicate with Cassandra, which is 
running in another local container.  Usually, this can be accomplished with 
links (--link) established when starting up the containers.  But it seems 
that you cannot use the --link option when running the sandbox via 
"sandbox run ...".  I know you can deploy Cassandra as another bundle into 
ConductR but I don't want to go that route.  Does anyone have any expertise 
or suggestions on how to set up a networking link between a ConductR node's 
container in the sandbox and my Cassandra container?



[akka-user] Handling timeouts when consuming http services via pooled connections

2016-05-05 Thread Chris Baxter
I realized recently that if I add a completionTimeout to a Flow set up to 
make HTTP requests through a host connection pool, I can wedge the pool 
when enough of these timeouts happen (4 for the default config).  I believe 
this is because the entity associated with the eventual response is not 
consumed or cancelled.  What's the proper way to handle this use case when 
I want to add a timeout-oriented combinator to the processing Flow that 
won't cause the pool to wedge?  Also, this may only happen when the 
response is chunked, but I still have to confirm that.
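For context, the "Best practice" thread below shows a second pattern that bounds the read per response with toStrict(timeout) inside mapAsync, so the entity is consumed (or failed) within the stage rather than being abandoned by an outer completionTimeout. A rough, untested sketch along those lines, assuming akka-http's host connection pool API (`pool`, `req`, and `timeout` are placeholders):

```scala
// Sketch only (assumes akka-http / akka-streams on the classpath).
// toStrict(timeout) reads the entity within the given time limit, so the
// response stream is consumed either way and the pool slot can be reused.
val f: Future[ByteString] =
  Source.single(req)
    .via(pool)
    .mapAsync(1)(resp => resp.entity.toStrict(timeout).map(_.data))
    .runWith(Sink.head)
```

Whether this fully avoids the wedge described here is exactly the open question in this message; treat it as a direction to test, not a confirmed fix.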



Re: [akka-user] Best practice for consuming a Http response entity's data stream

2016-04-13 Thread Chris Baxter
Bumping this back up to the top to see if I can get an answer from someone 
on the Akka team.  If the intention is to parse the entity's response body 
as JSON into an object structure, what's the best way to handle that 
without having to eagerly read the entire response into memory?

On Friday, April 8, 2016 at 9:57:27 AM UTC-4, Chris Baxter wrote:
>
> None of the out-of-the-box Unmarshallers for JSON (like SprayJsonSupport) 
> support parsing in a lazy, streaming way.  I did find this repo, which 
> looks promising:
>
> https://github.com/knutwalker/akka-stream-json
>
> Is this the best kind of approach?  It's certainly nice in that you don't 
> have to read all of the data into memory, but large responses are the 
> exception rather than the norm for us.
>
> On Friday, April 8, 2016 at 8:59:10 AM UTC-4, Chris Baxter wrote:
>>
>> Thanks for responding Viktor.
>>
>> 1) I see the flaw in this design (flatMapConcat) now.  If the response is 
>> chunked, you only read the first chunk.
>> 2) I want to be able to parse the body as json and have the final result 
>> of the flow be a Future for some object that I have mapped the response 
>> json to.  Any suggestions for doing that w/o reading the entire byte string 
>> into memory?  Are you maybe suggesting that instead of feeding something 
>> complete into my json parser (String, Array[Byte]) that I should instead 
>> try and feed in something that is more stream oriented, such as an 
>> InputStream and then find a way to plumb that together with the response 
>> stream?
>>
>> Any suggestions for a Flow that can deal with chunks and feed the data 
>> into a parsing stage w/o having to read it all into memory would be greatly 
>> appreciated.  I don't need perfect code, just an approach so I can take it 
>> from there.
>>
>> On Friday, April 8, 2016 at 7:13:51 AM UTC-4, √ wrote:
>>>
>>>
>>>
>>> On Fri, Apr 8, 2016 at 1:06 PM, Chris Baxter <cba...@gmail.com> wrote:
>>>
>>>> If I want to consume an HTTP service and then do something with the 
>>>> response body, there are a couple of ways to go about doing that.  The 
>>>> two I am trying to decide between are:
>>>>
>>>> val f: Future[ByteString] =
>>>>   Source.single(req).
>>>>     via(outgoingConn).
>>>>     flatMapConcat(_.entity.dataBytes).
>>>>     completionTimeout(timeout).
>>>>     runWith(Sink.head)
>>>>
>>>
>>> This does not return the entire body as a ByteString.
>>>  
>>>
>>>> and
>>>>
>>>>
>>>> val f: Future[ByteString] =
>>>>   Source.single(req).
>>>>     via(pool).
>>>>     mapAsync(1) { resp =>
>>>>       resp.entity.toStrict(timeout).map(_.data)
>>>>     }.
>>>>     completionTimeout(timeout).
>>>>     runWith(Sink.head)
>>>>
>>>>
>>>> I'm thinking the first approach is the better one.  Up until now, my 
>>>> common code for making outbound requests has used the second approach.  
>>>> I'm about to refactor that code to use the first approach, as it seems 
>>>> cleaner and requires less use of Futures.  Just wanted to see what the 
>>>> consensus from the Akka team and others was on this.
>>>>
>>> My question is: why do you need to eagerly read everything into memory 
>>> to "do something with the response body"?
>>>  
>>>
>>>>
>>>>
>>>
>>>
>>>
>>> -- 
>>> Cheers,
>>> √
>>>
>>



Re: [akka-user] Best practice for consuming a Http response entity's data stream

2016-04-08 Thread Chris Baxter
None of the out-of-the-box Unmarshallers for JSON (like SprayJsonSupport) 
support parsing in a lazy, streaming way.  I did find this repo, which 
looks promising:

https://github.com/knutwalker/akka-stream-json

Is this the best kind of approach?  It's certainly nice in that you don't 
have to read all of the data into memory, but large responses are the 
exception rather than the norm for us.

On Friday, April 8, 2016 at 8:59:10 AM UTC-4, Chris Baxter wrote:
>
> Thanks for responding Viktor.
>
> 1) I see the flaw in this design (flatMapConcat) now.  If the response is 
> chunked, you only read the first chunk.
> 2) I want to be able to parse the body as json and have the final result 
> of the flow be a Future for some object that I have mapped the response 
> json to.  Any suggestions for doing that w/o reading the entire byte string 
> into memory?  Are you maybe suggesting that instead of feeding something 
> complete into my json parser (String, Array[Byte]) that I should instead 
> try and feed in something that is more stream oriented, such as an 
> InputStream and then find a way to plumb that together with the response 
> stream?
>
> Any suggestions for a Flow that can deal with chunks and feed the data into 
> a parsing stage w/o having to read it all into memory would be greatly 
> appreciated.  I don't need perfect code, just an approach so I can take it 
> from there.
>
> On Friday, April 8, 2016 at 7:13:51 AM UTC-4, √ wrote:
>>
>>
>>
>> On Fri, Apr 8, 2016 at 1:06 PM, Chris Baxter <cba...@gmail.com> wrote:
>>
>>> If I want to consume an HTTP service and then do something with the 
>>> response body, there are a couple of ways to go about doing that.  The 
>>> two I am trying to decide between are:
>>>
>>> val f: Future[ByteString] =
>>>   Source.single(req).
>>>     via(outgoingConn).
>>>     flatMapConcat(_.entity.dataBytes).
>>>     completionTimeout(timeout).
>>>     runWith(Sink.head)
>>>
>>
>> This does not return the entire body as a ByteString.
>>  
>>
>>> and
>>>
>>>
>>> val f: Future[ByteString] =
>>>   Source.single(req).
>>>     via(pool).
>>>     mapAsync(1) { resp =>
>>>       resp.entity.toStrict(timeout).map(_.data)
>>>     }.
>>>     completionTimeout(timeout).
>>>     runWith(Sink.head)
>>>
>>>
>>> I'm thinking the first approach is the better one.  Up until now, my 
>>> common code for making outbound requests has used the second approach.  
>>> I'm about to refactor that code to use the first approach, as it seems 
>>> cleaner and requires less use of Futures.  Just wanted to see what the 
>>> consensus from the Akka team and others was on this.
>>>
>> My question is: why do you need to eagerly read everything into memory to 
>> "do something with the response body"?
>>  
>>
>>>
>>>
>>
>>
>>
>> -- 
>> Cheers,
>> √
>>
>



Re: [akka-user] Best practice for consuming a Http response entity's data stream

2016-04-08 Thread Chris Baxter
Thanks for responding Viktor.

1) I see the flaw in this design (flatMapConcat) now.  If the response is 
chunked, you only read the first chunk.
2) I want to be able to parse the body as json and have the final result of 
the flow be a Future for some object that I have mapped the response json 
to.  Any suggestions for doing that w/o reading the entire byte string into 
memory?  Are you maybe suggesting that instead of feeding something 
complete into my json parser (String, Array[Byte]) that I should instead 
try and feed in something that is more stream oriented, such as an 
InputStream and then find a way to plumb that together with the response 
stream?

Any suggestions for a Flow that can deal with chunks and feed the data into 
a parsing stage w/o having to read it all into memory would be greatly 
appreciated.  I don't need perfect code, just an approach so I can take it 
from there.

On Friday, April 8, 2016 at 7:13:51 AM UTC-4, √ wrote:
>
>
>
> On Fri, Apr 8, 2016 at 1:06 PM, Chris Baxter <cba...@gmail.com> wrote:
>
>> If I want to consume an HTTP service and then do something with the 
>> response body, there are a couple of ways to go about doing that.  The 
>> two I am trying to decide between are:
>>
>> val f: Future[ByteString] =
>>   Source.single(req).
>>     via(outgoingConn).
>>     flatMapConcat(_.entity.dataBytes).
>>     completionTimeout(timeout).
>>     runWith(Sink.head)
>>
>
> This does not return the entire body as a ByteString.
>  
>
>> and
>>
>>
>> val f: Future[ByteString] =
>>   Source.single(req).
>>     via(pool).
>>     mapAsync(1) { resp =>
>>       resp.entity.toStrict(timeout).map(_.data)
>>     }.
>>     completionTimeout(timeout).
>>     runWith(Sink.head)
>>
>>
>> I'm thinking the first approach is the better one.  Up until now, my 
>> common code for making outbound requests has used the second approach.  
>> I'm about to refactor that code to use the first approach, as it seems 
>> cleaner and requires less use of Futures.  Just wanted to see what the 
>> consensus from the Akka team and others was on this.
>>
> My question is: why do you need to eagerly read everything into memory to 
> "do something with the response body"?
>  
>
>>
>>
>
>
>
> -- 
> Cheers,
> √
>



[akka-user] Best practice for consuming a Http response entity's data stream

2016-04-08 Thread Chris Baxter
If I want to consume an HTTP service and then do something with the response 
body, there are a couple of ways to go about doing that.  The two that I am 
trying to decide between are:

val f: Future[ByteString] =
  Source.single(req).
    via(outgoingConn).
    flatMapConcat(_.entity.dataBytes).
    completionTimeout(timeout).
    runWith(Sink.head)


and


val f: Future[ByteString] =
  Source.single(req).
    via(pool).
    mapAsync(1) { resp =>
      resp.entity.toStrict(timeout).map(_.data)
    }.
    completionTimeout(timeout).
    runWith(Sink.head)


I'm thinking the first approach is the better one.  Up until now, my common 
code for making outbound requests has used the second approach.  I'm about 
to refactor that code to use the first approach, as it seems cleaner and 
requires less use of Futures.  Just wanted to see what the consensus from 
the Akka team and others was on this.
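As Viktor points out in the replies, Sink.head after flatMapConcat(_.entity.dataBytes) yields only the first ByteString chunk, not the whole body. A hedged variant of the first snippet that folds all the chunks together (an untested sketch, reusing the names from the snippet above):

```scala
// Sketch: accumulate every entity chunk instead of taking only the first.
// runFold concatenates the ByteString chunks emitted by dataBytes.
val f: Future[ByteString] =
  Source.single(req)
    .via(outgoingConn)
    .flatMapConcat(_.entity.dataBytes)
    .completionTimeout(timeout)
    .runFold(ByteString.empty)(_ ++ _)
```

Note this still reads the entire body into memory, which is the trade-off Viktor questions in this thread.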





[akka-user] Behavior of akka 2.4.3 with modeled custom headers

2016-04-05 Thread Chris Baxter
So I have some custom modeled headers, with one of them being defined like 
so:

object `X-Access-Token` extends ModeledCustomHeaderCompanion[`X-Access-Token`] {

  override val name = "X-Access-Token"

  override def parse(value: String) = util.Try(`X-Access-Token`(value))

}

case class `X-Access-Token`(token: String) extends ModeledCustomHeader[`X-Access-Token`] 
    with DefaultHeaderRendering {

  override val companion = `X-Access-Token`

  val value = token

}


Using akka 2.4.3, if I pass it in on a request as:


X-Access-Token: abc123


and then in a route extract it via headerValueByType[`X-Access-Token`], the 
value of the token on the case class produced is "X-Access-Token: abc123" 
(including the header name) instead of the expected "abc123".  I tracked 
this down to the HeaderMagnet used for modeled headers:


implicit def fromUnitForModeledCustomHeader[T <: ModeledCustomHeader[T], H <: ModeledCustomHeaderCompanion[T]]
    (u: Unit)(implicit tag: ClassTag[T], companion: ModeledCustomHeaderCompanion[T]): HeaderMagnet[T] =
  new HeaderMagnet[T] {

    override def runtimeClass = tag.runtimeClass.asInstanceOf[Class[T]]

    override def classTag = tag

    override def extractPF = {
      case h if h.is(companion.lowercaseName) => companion.apply(h.toString)
    }

  }


In extractPF, it's using companion.apply with h.toString instead of just 
the .value of the header.  Am I supposed to parse the value out of that 
string in .parse on my companion?  If that's the case, it's not clear from 
the docs.  This all worked correctly in 2.4.2.
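Until the regression described above is resolved, one possible stopgap is to make the companion's parse tolerant of receiving the fully rendered header (name included). This is a workaround sketch based only on the behaviour reported in this message, not the intended API contract:

```scala
override def parse(value: String) = util.Try {
  // 2.4.3 appears to hand parse the rendered header ("X-Access-Token: abc123"),
  // while 2.4.2 handed it just the value; accept both forms defensively.
  val v = if (value.startsWith(name + ": ")) value.drop(name.length + 2) else value
  `X-Access-Token`(v)
}
```

Once the regression is fixed upstream, the original one-line parse should suffice again.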



Re: [akka-user] Issue with Smallest Mailbox router pool introduced in akka 2.4.2

2016-03-04 Thread Chris Baxter
Should I create a separate ticket for this?  It appeared to me that the 
issue with AbstractNodeQueue was fixed, though.  Am I reading this wrong?

On Friday, March 4, 2016 at 7:38:47 AM UTC-5, √ wrote:
>
> Could be related to: https://github.com/akka/akka/issues/19216
>
> On Fri, Mar 4, 2016 at 1:26 PM, Chris Baxter <cba...@gmail.com> wrote:
>
>> I have noticed a strange issue after we upgraded to akka 2.4.2.  It 
>> appears sporadically, but once it rears its ugly head it basically causes 
>> the server to consume a ton of CPU and we have to restart.  The issue 
>> appears to be with the smallest mailbox pool router.  It gets hung up in 
>> peekNode in AbstractNodeQueue, in that do/while loop:
>>
>> protected final Node peekNode() {
>>     final Node tail = ((Node) Unsafe.instance.getObjectVolatile(this, tailOffset));
>>     Node next = tail.next();
>>     if (next == null && get() != tail) {
>>         // if tail != head this is not going to change until producer makes progress
>>         // we can avoid reading the head and just spin on next until it shows up
>>         do {
>>             next = tail.next();
>>         } while (next == null);
>>     }
>>     return next;
>> }
>>
>>
>> I have attached a screenshot of the thread dump from jconsole.  I'm still 
>> in the early stages of debugging this, but would appreciate any info from 
>> the akka team on this.
>>
>
>
>
> -- 
> Cheers,
> √
>



[akka-user] Issue with Smallest Mailbox router pool introduced in akka 2.4.2

2016-03-04 Thread Chris Baxter
I have noticed a strange issue after we upgraded to akka 2.4.2.  It appears 
sporadically, but once it rears its ugly head it basically causes the 
server to consume a ton of CPU and we have to restart.  The issue appears 
to be with the smallest mailbox pool router.  It gets hung up in peekNode 
in AbstractNodeQueue, in that do/while loop:

protected final Node peekNode() {
    final Node tail = ((Node) Unsafe.instance.getObjectVolatile(this, tailOffset));
    Node next = tail.next();
    if (next == null && get() != tail) {
        // if tail != head this is not going to change until producer makes progress
        // we can avoid reading the head and just spin on next until it shows up
        do {
            next = tail.next();
        } while (next == null);
    }
    return next;
}


I have attached a screenshot of the thread dump from jconsole.  I'm still 
in the early stages of debugging this, but would appreciate any info from 
the akka team on this.



Re: [akka-user] Nginx error when proxying to Akka Http and large response sizes

2016-01-20 Thread Chris Baxter
Issue created, including a code sample to reproduce the issue:

https://github.com/akka/akka/issues/19542

On Wednesday, January 20, 2016 at 6:38:50 AM UTC-5, Akka Team wrote:
>
> Hi Chris,
>
> This looks like a weird issue to me.  The linked issue in aleph might or 
> might not really matter here; they seem to have ignored backpressure (the 
> ChannelFuture from Netty, if I understood correctly).
>
> Akka should handle these cases correctly, even if you send enormous 
> ByteStrings to its IO layer (it will write it in chunks), but we can take 
> an extra look.  Can you please open a ticket and continue the discussion 
> there?
>
> -Endre
>
>
> On Fri, Jan 15, 2016 at 4:42 PM, Chris Baxter <cba...@gmail.com> wrote:
>
>> We just finished our conversion of all of our rest endpoints from 
>> Unfiltered to Akka Http.  In doing so, we noticed a strange error when we 
>> have our nginx (version 1.6) proxy sitting in front of Akka Http.  If the 
>> response is a bit large, over say 30K, nginx fails with the following error:
>>
>> upstream prematurely closed connection while reading upstream
>>
>>
>> The only way we could find to fix this was to explicitly chunk responses 
>> that were larger than a specified size.  I used a piece of code like this 
>> to handle the chunking.
>>
>>   def chunkResponses(chunkThreshold: Int) = mapResponseEntity {
>>     case h @ HttpEntity.Strict(ct, data) if data.size > chunkThreshold =>
>>       val chunks =
>>         Source.
>>           single(data).
>>           mapConcat(_.grouped(chunkThreshold).toList).
>>           map(HttpEntity.Chunk.apply(_))
>>       HttpEntity.Chunked(ct, chunks)
>>     case other => other
>>   }
>>
>>
>> All of the responses we had been producing were always HttpEntity.Strict 
>> as we always had all of the json that we were going to respond with in 
>> memory already.  I did not see any need to chunk.  I want to better 
>> understand why this is happening.  Has anyone else run into this?  I 
>> couldn't find much info out there on this.  The closest thing I found was 
>> someone else seeing this a while back with their own http server impl:
>>
>> https://github.com/ztellman/aleph/issues/169
>>
>
>
>
> -- 
> Akka Team
> Typesafe - Reactive apps on the JVM
> Blog: letitcrash.com
> Twitter: @akkateam
>



[akka-user] Nginx error when proxying to Akka Http and large response sizes

2016-01-15 Thread Chris Baxter
We just finished our conversion of all of our REST endpoints from 
Unfiltered to Akka Http.  In doing so, we noticed a strange error when we 
have our nginx (version 1.6) proxy sitting in front of Akka Http.  If the 
response is a bit large, say over 30K, nginx fails with the following error:

upstream prematurely closed connection while reading upstream


The only way we could find to fix this was to explicitly chunk responses 
larger than a specified size.  I used a piece of code like this to handle 
the chunking:

  def chunkResponses(chunkThreshold: Int) = mapResponseEntity {
    case h @ HttpEntity.Strict(ct, data) if data.size > chunkThreshold =>
      val chunks =
        Source.
          single(data).
          mapConcat(_.grouped(chunkThreshold).toList).
          map(HttpEntity.Chunk.apply(_))
      HttpEntity.Chunked(ct, chunks)
    case other => other
  }


All of the responses we had been producing were HttpEntity.Strict, as we 
already had all of the JSON we were going to respond with in memory.  I did 
not see any need to chunk.  I want to better understand why this is 
happening.  Has anyone else run into this?  I couldn't find much info out 
there on this.  The closest thing I found was someone else seeing this a 
while back with their own HTTP server impl:

https://github.com/ztellman/aleph/issues/169
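The core of the chunking workaround is the grouped(chunkThreshold) step, which splits a strict payload into bounded-size pieces. A plain-Scala illustration of just that splitting, using a String in place of a ByteString:

```scala
// Split a 10-character payload into chunks of at most 4 characters,
// mirroring mapConcat(_.grouped(chunkThreshold).toList) above.
val payload = "x" * 10
val chunkThreshold = 4
val chunks = payload.grouped(chunkThreshold).toList
// chunks == List("xxxx", "xxxx", "xx") -- the last chunk carries the remainder
```

ByteString supports the same grouped operation, which is why the workaround can turn one Strict entity into a stream of bounded Chunk elements.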



[akka-user] Nginx error when proxying to Akka Http and large response sizes

2016-01-15 Thread Chris Baxter
We just finished converting all of our REST endpoints from Unfiltered to 
Akka Http.  In doing so, we noticed a strange error when our nginx (version 
1.6) proxy sits in front of Akka Http.  If the response is somewhat large, 
say over 30K, nginx fails with the following error:

upstream prematurely closed connection while reading upstream


The only way we could find to fix this was to explicitly chunk responses 
that were larger than a specified size.  I used a piece of code like this 
to handle the chunking.

  def chunkResponses(chunkThreshold: Int) = mapResponseEntity {
    case HttpEntity.Strict(ct, data) if data.size > chunkThreshold =>
      val chunks =
        Source.single(data)
          .mapConcat(_.grouped(chunkThreshold).toList)
          .map(HttpEntity.Chunk(_))
      HttpEntity.Chunked(ct, chunks)
    case other => other
  }
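For anyone wanting to sanity-check the chunk sizing without spinning up Akka, the split that grouped(chunkThreshold) performs on the ByteString can be sketched in plain Java (class and method names here are mine, purely illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Standalone sketch of the chunking rule used above: split a payload into
// successive slices of at most chunkThreshold bytes, which is what
// grouped(n) does to the strict response entity in the Scala snippet.
public class ChunkDemo {
    static List<byte[]> chunk(byte[] data, int chunkThreshold) {
        List<byte[]> chunks = new ArrayList<>();
        for (int from = 0; from < data.length; from += chunkThreshold) {
            int to = Math.min(from + chunkThreshold, data.length);
            chunks.add(Arrays.copyOfRange(data, from, to));
        }
        return chunks;
    }

    public static void main(String[] args) {
        byte[] payload = new byte[10];
        List<byte[]> chunks = chunk(payload, 4);
        System.out.println(chunks.size());        // 4 + 4 + 2 bytes -> 3 chunks
        System.out.println(chunks.get(2).length); // last chunk holds the remainder
    }
}
```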


All of the responses we had been producing were HttpEntity.Strict, since we 
always had the full JSON payload in memory already, so I did not see any 
need to chunk.  I want to better understand why
this is happening.  Has anyone else run into this?  I couldn't find much 
info out there on this.  The closest thing I found was someone else seeing 
this a while back with their own http server impl:

https://github.com/ztellman/aleph/issues/169
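One nginx-side experiment that may be worth ruling out (an assumption on my part, not a confirmed fix): nginx talks HTTP/1.0 to upstreams by default, which changes how it treats upstream connection close, and forcing HTTP/1.1 toward the upstream is low risk to try.  A hypothetical proxy block, with a placeholder upstream name:

```nginx
# Hypothetical location block; upstream name and path are placeholders.
location / {
    proxy_pass http://akka_http_backend;
    proxy_http_version 1.1;           # nginx defaults to HTTP/1.0 for proxied requests
    proxy_set_header Connection "";   # allow the upstream connection to stay alive
}
```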


-- 
>>  Read the docs: http://akka.io/docs/
>>  Check the FAQ: 
>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>  Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.


Re: [akka-user] Akka Http Parsing of X-Forwarded-For and DNS lookups

2016-01-08 Thread Chris Baxter
Done.  Ticket is:

https://github.com/akka/akka/issues/19388

I described the potential solution in there and also included possible 
regular expressions to use.  If you want me to take the next step, which 
would be making the change myself, let me know.
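For reference, the octet-splitting approach described in the quoted discussion below can be sketched in plain Java (the helper name and shape are mine, not the actual Akka code):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Optional;

// Accept only literal IPv4 addresses; hostnames are rejected instead of
// being handed to InetAddress.getByName, which may trigger a DNS lookup.
public class Ipv4Literal {
    static Optional<InetAddress> parse(String s) {
        String[] parts = s.split("\\.");
        if (parts.length != 4) return Optional.empty();
        byte[] octets = new byte[4];
        for (int i = 0; i < 4; i++) {
            try {
                int n = Integer.parseInt(parts[i]);
                if (n < 0 || n > 255) return Optional.empty();
                octets[i] = (byte) n;
            } catch (NumberFormatException e) {
                return Optional.empty();
            }
        }
        try {
            // getByAddress never resolves anything; it only validates length.
            return Optional.of(InetAddress.getByAddress(octets));
        } catch (UnknownHostException e) {
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        System.out.println(parse("192.168.0.1").map(InetAddress::getHostAddress).orElse("rejected"));
        System.out.println(parse("example.com").map(InetAddress::getHostAddress).orElse("rejected"));
        System.out.println(parse("300.1.1.1").map(InetAddress::getHostAddress).orElse("rejected"));
    }
}
```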

On Friday, January 8, 2016 at 2:15:18 AM UTC-5, rkuhn wrote:
>
> This sounds like a bug, please open a ticket about this and thanks for 
> noticing! If you want to follow up with a PR that would be even more 
> awesome :-)
>
> Regards,
>
> Roland 
>
> Sent from my iPhone
>
> On 07 Jan 2016, at 16:36, Chris Baxter <cba...@gmail.com > 
> wrote:
>
> I was looking at the Akka Http code recently, specifically how the 
> X-Forwarded-For header is parsed into the model header class of the same 
> name.  I noticed that the list of strings that represent the header value 
> are mapped into RemoteAddress classes, using the companion's apply method 
> that take a String.  In that code, I see InetAddress.getByName being called 
> which worries me a bit.  If what is being passed in there is already an IP 
> address (not a host name), then no DNS lookup will occur.  But, if someone 
> supplied an explicit X-Forwarded-For header on the request and put a 
> hostname in there then getByName will do a DNS lookup and that can be slow 
> and potentially dangerous from a denial of service perspective.  From my 
> experience, the only safe way to take a String and get it into a 
> InetAddress is to break it down into the individual octet pieces (split on 
> ".") and then convert those into bytes and then use 
> InetAddress.getByAddress.  If it happened to be a hostname, we throw it out 
> because none of our proxy servers would ever append a hostname anyway so 
> it's probably garbage.  
>
> Is this something you guys have given consideration to?  Is there any way 
> I can change how X-Forwarded-For is parsed to avoid such a potential issue?
>



[akka-user] Idle connection handling for non-pooled http connections

2015-10-19 Thread Chris Baxter
Referencing this ticket, I can see that the idle connection/timeout 
handling is not yet in place when using pooled connections.  I added a 
comment there about the behavior of non-pooled outbound connections with 
timeouts, hoping to get a response, but nothing so far, so I'm asking here 
in hopes of getting some clarification.
 If I use a non-pooled connection and use a takeWithin on the Flow to 
handle a timeout, and it does hit that timeout condition, failing the Flow, 
will the underlying single connection be closed immediately or will it stay 
open until a response is received from the remote server even though 
control does return to the code using the connection?  From the debug 
logging it does not look like it closed the connection until a response is 
received.  This seems to run counter to what @sirthias says about using the 
low level Http().outboundConnection API.  How can I get this to work as 
expected, where my takeWithin timeout handling will close the underlying 
connection immediately after hitting the timeout condition?



[akka-user] Using CircuitBreaker within outbound Http stream Flow

2015-09-02 Thread Chris Baxter
Are there any additional plans to provide enhancements to the 
CircuitBreaker to allow it to more tightly integrate into outbound Http 
stream Flows?  Right now we can make use of the current CircuitBreaker with 
a mapAsync step as the current breaker supports a Future based 
withCircuitBreaker method and this works fine.  It seems however that 
something like this (protecting external http calls) could be built right 
into the host connection pool flows as an optional input to allow the 
caller to supply a breaker to use for that pool flow.  Are there any plans 
to do something like this?  If not, might I suggest it as a feature for an 
upcoming release, because I think it would be very useful.  Perhaps if the 
breaker is open, the Try that flows downstream after the http request is 
executed would always fail until the breaker closes again.  I think 
something like that fits the existing model pretty well.



[akka-user] Unhandled ResumeReading messages

2015-07-10 Thread Chris Baxter
Recently, I have started seeing an increase in unhandled ResumeReading 
messages in my actor system.  We are using Akka Streams and Akka Http (RC3) 
for a small number of things and I'm assuming this is coming from 
Streams/Http.  The log message from my custom unhandled message listener 
looks like this:

Unhandled message received by actor: 
Actor[akka://AqutoServices/system/IO-TCP/selectors/$a/2#128257626]: 
ResumeReading

While I don't think this is causing issues, it is cluttering my log.  I can 
silence it if it's not really something to be concerned about, but I wanted 
to check here first.  Anyone have any thoughts on this?  We are using Akka 
2.3.11.






[akka-user] Re: Unhandled ResumeReading messages

2015-07-10 Thread Chris Baxter
I've dug in a bit more and I am thinking this is coming from our use of 
outbound HTTP.  I am using the host level way of consuming http services, 
setting up a host level pool of connections.  Is this expected behavior? 
 Also, I am seeing deadletters that I think are related that look like this 
in my logs:

Message [akka.io.SelectionHandler$ChannelReadable$] from 
Actor[akka://AqutoServices/deadLetters] to 
Actor[akka://AqutoServices/system/IO-TCP/selectors/$a/8#2114675738] was not 
delivered. [8] dead letters encountered. This logging can be turned off or 
adjusted with configuration settings 'akka.log-dead-letters' and 
'akka.log-dead-letters-during-shutdown'.

On Friday, July 10, 2015 at 8:51:54 AM UTC-4, Chris Baxter wrote:

 Recently, I have started seeing an increase in unhandled ResumeReading 
 messages in my actor system.  We are using Akka Streams and Akka Http (RC3) 
 for a small number of things and I'm assuming this is coming from 
 Streams/Http.  The log message from my custom unhandled message listener 
 looks like this:

 Unhandled message received by actor: 
 Actor[akka://AqutoServices/system/IO-TCP/selectors/$a/2#128257626]: 
 ResumeReading

 While I don't think this is causing issues, it is cluttering my log.  I 
 can silence it if it's not really something to be concerned about, but I 
 wanted to check here first.  Anyone have any thoughts on this?  We are 
 using Akka 2.3.11.








[akka-user] Handling request timeouts with client side http

2015-06-04 Thread Chris Baxter
I know Spray had a request-timeout setting that could be used to time out a 
request that has been sent but has not yet received a response within a 
specified timeframe.  I don't see this setting anywhere in Akka Http.  For 
a single request, I can handle this by adding 
takeWithin to my stream processing like so:

  val poolClientFlow = Http().cachedHostConnectionPool[Int]("localhost", 9200)
  val responseFuture: Future[(Try[HttpResponse], Int)] =
    Source.single(HttpRequest(uri = "/foo/bar/16") -> 42)
      .via(poolClientFlow)
      .takeWithin(5.seconds)
      .runWith(Sink.head)


In this case, the Future would be failed with a NoSuchElementException 
which I suppose I can interpret to mean the timeout occurred.  Is there a 
better way to do this?  What happens to the underlying connection from the 
pool in this case?  If it was really and truly hung (as opposed to just 
being a little late with the response) I would want it closed.  Is this 
kind of stuff possible?
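As an aside, the bare "give up after N millis" semantics can be put on any future with just the JDK (Java 9+); this is a standalone sketch, and note that, like takeWithin, it only fails the future and cannot close the underlying connection, which is exactly my question:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Standalone sketch (no Akka): fail any future that has not completed
// within the given timeout.  This fails the caller's view of the request
// but cannot close a hung connection underneath it.
public class TimeoutDemo {
    public static void main(String[] args) throws Exception {
        CompletableFuture<Integer> fast = CompletableFuture.completedFuture(42)
                .orTimeout(1, TimeUnit.SECONDS);
        System.out.println(fast.get());

        CompletableFuture<Integer> slow = CompletableFuture.supplyAsync(() -> {
            try { Thread.sleep(2000); } catch (InterruptedException e) { }
            return 1;
        }).orTimeout(100, TimeUnit.MILLISECONDS);
        try {
            slow.get();
        } catch (Exception e) {
            // TimeoutException arrives wrapped in an ExecutionException
            System.out.println("timed out");
        }
    }
}
```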



[akka-user] Re: Config for adding the Remote-Address header in Akka Http

2015-06-01 Thread Chris Baxter
Great.  I will keep an eye out for RC4 to see if it makes it in there or 
not.

On Thursday, May 28, 2015 at 6:53:51 AM UTC-4, Chris Baxter wrote:

 I noticed there is a setting (akka.http.server.remote-address-header) that 
 can be turned to on that should automatically add the Remote-Address 
 header to all incoming requests.  I set this to on using RC3 but I do not 
 see the header being added. I searched through the source and saw the 
 setting in ServerSettings but I did not see that field that ties to this 
 config setting being used anywhere in the code.  Is this something that has 
 yet to be implemented?




[akka-user] Config for adding the Remote-Address header in Akka Http

2015-05-28 Thread Chris Baxter
I noticed there is a setting (akka.http.server.remote-address-header) that 
can be turned on and should automatically add the Remote-Address header to 
all incoming requests.  I set this to on using RC3 but I do not see the 
header being added.  I searched through the source and saw the setting in 
ServerSettings, but I did not see the field tied to this config setting 
used anywhere in the code.  Is this something that has yet to be 
implemented?



[akka-user] Re: RoundRobinRountingLogic: A slightly faster version?

2015-05-28 Thread Chris Baxter
Have you profiled this to quantify the performance improvement?  And if so, 
did you run it under high contention to make sure the CAS retries on the 
atomic don't become an issue?
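For what it's worth, the wrap-around update from the quoted proposal can be exercised standalone, without Akka on the classpath (a sketch; accumulateAndGet retries with CAS internally rather than locking):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Same wrap-around update as the quoted proposal: advance the index and
// reset to 0 once it reaches the routee count.
public class CircularIndexDemo {
    public static void main(String[] args) {
        AtomicInteger next = new AtomicInteger(-1);
        final int size = 3; // stand-in for routees.size()
        StringBuilder order = new StringBuilder();
        for (int i = 0; i < 7; i++) {
            order.append(next.accumulateAndGet(1,
                    (index, increment) -> ++index >= size ? 0 : index));
        }
        System.out.println(order); // cycles 0,1,2,0,1,2,0
    }
}
```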

On Thursday, May 28, 2015 at 11:43:24 AM UTC-4, Guido Medina wrote:

 Hi,

 I have my own version of the round-robin routing logic which I believe is 
 faster and less CPU intensive because it avoids the mod operation.  Here 
 is my version in Java 8.  It has one small issue: the NoRoutee instance is 
 private and I don't know much Scala, so here it is as a copy/proposal:

 public class CircularRoutingLogic implements RoutingLogic {

   final AtomicInteger next = new AtomicInteger(-1);

   @Override
   public Routee select(Object message, IndexedSeq<Routee> routees) {
     final int size = routees.size();
     // null should be replaced by the NoRoutee Akka/Scala instance.
     return size == 0
         ? null
         : routees.apply(next.accumulateAndGet(1, (index, increment) -> ++index >= size ? 0 : index));
   }
 }





[akka-user] Bottleneck with StreamTcp and long lived connection?

2014-10-02 Thread Chris Baxter
I've been playing with the Tcp module for a while now, trying to use it to 
build a Memcached client that performs as well as the spymemcached java 
client does.  I got stuff working a while back with the plain Tcp extension 
but the code was tricky (back pressure handling) and always seemed to hit a 
bottleneck at around 10K QPS; it will plateau there and never go higher. 
 The spymemcached client, which uses a single I/O thread can get going up 
to 100K QPS (pipelining multiple requests into a single write to the 
outbound channel).  I recently converted my code to use StreamTcp now, 
hoping this would be cleaner (it is) and also possibly remove that 
unexplained bottleneck.  Unfortunately it still hits the same bottleneck, 
even though I am pipelining too (via a groupWithin).  The only way I can 
get above 10K QPS is to create multiple clients, each opening its own 
StreamTcp connection, but this is not a great idea; I should be able to 
achieve high throughput on this long lived connection with only a single 
connection into memcached.

A few things to note:

   - My Memcached client actor connects via StreamTcp.Connect
   - On the resulting OutboundTcpConnection, I create two Flows; one for 
   outbound data, one for inbound
   - The outbound Flow uses a custom ActorPublisher at the head of the Flow 
   and the Flow looks like this: ActorPublisher - groupWithin - map (to fold 
   the multiple requests into one mega request) - 
   produceTo(outboundConnection)
   - The Custom Publisher uses an internal Queue (a mutable java.util.Queue 
   for perf reasons) that it will add to if there is no demand
   - The inbound Flow uses a custom ActorSubscriber at the tail of the Flow 
   and the Flow looks like this: Inbound Connection - transform (for 
   memcached frame decoding) - produceTo(Custom Subscriber)

I know it's going to be tough to completely diagnose this issue without 
seeing my code, and I'm willing to share it, but I wanted to reach out 
first and see if this type of situation is a known issue or not.  Is there 
an issue trying to do a very high throughput single StreamTcp connection? 
 FWIW, on the custom publisher, I generally only ever see the demand at 4 
when I get a Request message so it seems as if the downstream stuff is 
exhibiting significant back pressure on me.  



[akka-user] Unbinding a StreamTcp server binding

2014-09-23 Thread Chris Baxter
I can't currently find a way to perform an Unbind with a StreamTcp server 
binding.  What if for some reason I no longer want to be connected to that 
socket.  The regular Tcp binding has a way to do this (by sending an Unbind 
to whoever responded to the original Bind request), but I don't see this 
same system in place for StreamTcp.  Am I missing something?



[akka-user] Re: How to split an inbound stream on a delimiter character using Akka Streams

2014-09-05 Thread Chris Baxter
Thanks Viktor.  That certainly looks more succinct.  My actual use case 
(this one was intentionally simplified to make the example easier) is that 
I have messages coming in in protobuf format, each one preceded by a length 
indicator.  So my Transformer basically toggles back and forth between two 
states; looking for the length of the next message or reading the current 
message.  I got the code working correctly but it's definitely verbose. 
 It's recursive (and tail recursive at that) but it still needs 
improvement.  I want to add sufficient unit testing to it and then I want 
to start to tighten up the internals of the code.  I will use this code 
here as a guideline.  Thanks.

On Wednesday, September 3, 2014 8:15:33 AM UTC-4, Chris Baxter wrote:

 Posted this on Stackoverflow but haven't seen any activity on it so I 
 figured I'd post it here as well.

 I've been playing around with the experimental Akka Streams API a bit and 
 I have a use case that I wanted to see how to implement.  For my use case, 
 I have a `StreamTcp` based `Flow` that is being fed from binding the input 
 stream of connections to my server socket.  The Flow that I have is based 
 on `ByteString` data coming into it.  The data that is coming in is going 
 to have a delimiter in it that means I should treat everything before the 
 delimiter as one message and everything after and up to the next delimiter 
 as the next message.  So playing around with a simpler example, using no 
 sockets and just static text, this is what I came up with:

 import akka.actor.ActorSystem
 import akka.stream.{ FlowMaterializer, MaterializerSettings }
 import akka.stream.scaladsl.Flow
 import scala.util.{ Failure, Success }
 import akka.util.ByteString

 object BasicTransformation {
   def main(args: Array[String]): Unit = {
     implicit val system = ActorSystem("Sys")
     val data = ByteString("Lorem Ipsum is simply.Dummy text of the printing.And typesetting industry.")
     Flow(data).
       splitWhen(c => c == '.').
       foreach { producer =>
         Flow(producer).
           filter(c => c != '.').
           fold(new StringBuilder)((sb, c) => sb.append(c.toChar)).
           map(_.toString).
           filter(!_.isEmpty).
           foreach(println(_)).
           consume(FlowMaterializer(MaterializerSettings()))
       }.
       onComplete(FlowMaterializer(MaterializerSettings())) {
         case any =>
           system.shutdown()
       }
   }
 }

 The main function on the `Flow` that I found to accomplish my goal was 
 `splitWhen`, which then produces additional sub-flows, one for each message 
 per that `.` delimiter.  I then process each sub-flow with another pipeline 
 of steps, finally printing the individual messages at the end.

 This all seems a bit verbose, to accomplish what I thought to be a pretty 
 simple and common use case.  So my question is, is there a cleaner and less 
 verbose way to do this or is this the correct and preferred way to split a 
 stream up by a delimiter?

 The link to the SO question is: 
 http://stackoverflow.com/questions/25631099/how-to-split-an-inbound-stream-on-a-delimiter-character-using-akka-streams




[akka-user] How to split an inbound stream on a delimiter character using Akka Streams

2014-09-03 Thread Chris Baxter
Posted this on Stackoverflow but haven't seen any activity on it so I 
figured I'd post it here as well.

I've been playing around with the experimental Akka Streams API a bit and I 
have a use case that I wanted to see how to implement.  For my use case, I 
have a `StreamTcp` based `Flow` that is being fed from binding the input 
stream of connections to my server socket.  The Flow that I have is based 
on `ByteString` data coming into it.  The data that is coming in is going 
to have a delimiter in it that means I should treat everything before the 
delimiter as one message and everything after and up to the next delimiter 
as the next message.  So playing around with a simpler example, using no 
sockets and just static text, this is what I came up with:

import akka.actor.ActorSystem
import akka.stream.{ FlowMaterializer, MaterializerSettings }
import akka.stream.scaladsl.Flow
import scala.util.{ Failure, Success }
import akka.util.ByteString

object BasicTransformation {
  def main(args: Array[String]): Unit = {
    implicit val system = ActorSystem("Sys")
    val data = ByteString("Lorem Ipsum is simply.Dummy text of the printing.And typesetting industry.")
    Flow(data).
      splitWhen(c => c == '.').
      foreach { producer =>
        Flow(producer).
          filter(c => c != '.').
          fold(new StringBuilder)((sb, c) => sb.append(c.toChar)).
          map(_.toString).
          filter(!_.isEmpty).
          foreach(println(_)).
          consume(FlowMaterializer(MaterializerSettings()))
      }.
      onComplete(FlowMaterializer(MaterializerSettings())) {
        case any =>
          system.shutdown()
      }
  }
}

The main function on the `Flow` that I found to accomplish my goal was 
`splitWhen`, which then produces additional sub-flows, one for each message 
per that `.` delimiter.  I then process each sub-flow with another pipeline 
of steps, finally printing the individual messages at the end.

This all seems a bit verbose, to accomplish what I thought to be a pretty 
simple and common use case.  So my question is, is there a cleaner and less 
verbose way to do this or is this the correct and preferred way to split a 
stream up by a delimiter?

The link to the SO question 
is: 
http://stackoverflow.com/questions/25631099/how-to-split-an-inbound-stream-on-a-delimiter-character-using-akka-streams
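For anyone who wants to play with the delimiter behavior without pulling in Akka, here is a standalone sketch in plain Java that mimics what the pipeline above produces per message (everything between '.' delimiters, empty chunks dropped; class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

// Standalone sketch of the delimiter split: buffer characters until the
// delimiter, emit the buffered chunk as one message, and skip empty chunks.
public class ChopDemo {
    static List<String> chop(char delim, String in) {
        List<String> out = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (char c : in.toCharArray()) {
            if (c == delim) {
                if (current.length() > 0) out.add(current.toString());
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        if (current.length() > 0) out.add(current.toString());
        return out;
    }

    public static void main(String[] args) {
        String data = "Lorem Ipsum is simply.Dummy text of the printing.And typesetting industry.";
        for (String msg : chop('.', data)) System.out.println(msg);
    }
}
```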



[akka-user] Re: How to split an inbound stream on a delimiter character using Akka Streams

2014-09-03 Thread Chris Baxter
Thanks for the suggestions Viktor and Endre.  I will try Viktor's chop 
solution as well as looking into the Endre's Transformer solution (and the 
decoding DSL) and then post back with my results.

On Wednesday, September 3, 2014 8:15:33 AM UTC-4, Chris Baxter wrote:

 Posted this on Stackoverflow but haven't seen any activity on it so I 
 figured I'd post it here as well.

 I've been playing around with the experimental Akka Streams API a bit and 
 I have a use case that I wanted to see how to implement.  For my use case, 
 I have a `StreamTcp` based `Flow` that is being fed from binding the input 
 stream of connections to my server socket.  The Flow that I have is based 
 on `ByteString` data coming into it.  The data that is coming in is going 
 to have a delimiter in it that means I should treat everything before the 
 delimiter as one message and everything after and up to the next delimiter 
 as the next message.  So playing around with a simpler example, using no 
 sockets and just static text, this is what I came up with:

 import akka.actor.ActorSystem
 import akka.stream.{ FlowMaterializer, MaterializerSettings }
 import akka.stream.scaladsl.Flow
 import scala.util.{ Failure, Success }
 import akka.util.ByteString

 object BasicTransformation {
   def main(args: Array[String]): Unit = {
     implicit val system = ActorSystem("Sys")
     val data = ByteString("Lorem Ipsum is simply.Dummy text of the printing.And typesetting industry.")
     Flow(data).
       splitWhen(c => c == '.').
       foreach { producer =>
         Flow(producer).
           filter(c => c != '.').
           fold(new StringBuilder)((sb, c) => sb.append(c.toChar)).
           map(_.toString).
           filter(!_.isEmpty).
           foreach(println(_)).
           consume(FlowMaterializer(MaterializerSettings()))
       }.
       onComplete(FlowMaterializer(MaterializerSettings())) {
         case any =>
           system.shutdown()
       }
   }
 }

 The main function on the `Flow` that I found to accomplish my goal was 
 `splitWhen`, which then produces additional sub-flows, one for each message 
 per that `.` delimiter.  I then process each sub-flow with another pipeline 
 of steps, finally printing the individual messages at the end.

 This all seems a bit verbose, to accomplish what I thought to be a pretty 
 simple and common use case.  So my question is, is there a cleaner and less 
 verbose way to do this or is this the correct and preferred way to split a 
 stream up by a delimiter?

 The link to the SO question is: 
 http://stackoverflow.com/questions/25631099/how-to-split-an-inbound-stream-on-a-delimiter-character-using-akka-streams




Re: [akka-user] How to split an inbound stream on a delimiter character using Akka Streams

2014-09-03 Thread Chris Baxter
My bad ;)

Okay, I will proceed with a Transformer approach.  Thanks for the push in 
the right direction.

On Wednesday, September 3, 2014 10:04:30 AM UTC-4, √ wrote:

 That was not the thing you asked for, you had a single ByteString. :)

 If you want to do that then you need to create a Transformer and use the 
 `transform` method.


 On Wed, Sep 3, 2014 at 3:55 PM, Chris Baxter cba...@gmail.com wrote:

 Viktor, how would this work if the delimiter was not in the current 
 ByteString, meaning that it's coming in a subsequent ByteString and I need 
 to buffer this ByteString until the next one comes through?


 On Wednesday, September 3, 2014 8:42:46 AM UTC-4, √ wrote:

 def chop(find: Byte, in: ByteString, res: Seq[ByteString] = Nil): 
 Seq[ByteString] = in.indexOf(find) match {
   case -1 => res
   case x  =>
     val chunk = in.take(x)
     chop(find, in.drop(x + 1), if (chunk.isEmpty) res else res :+ chunk)
 }

 scala> chop('.', ByteString(""))
 res10: Seq[akka.util.ByteString] = List()

 scala> chop('.', ByteString("."))
 res11: Seq[akka.util.ByteString] = List()

 scala> chop('.', ByteString("Lorem Ipsum is simply.Dummy text of the 
 printing.And typesetting industry.")).map(_.utf8String)
 res12: Seq[String] = List(Lorem Ipsum is simply, Dummy text of the 
 printing, And typesetting industry)

 Flow(data).mapConcat(bs => chop('.', bs)).etc
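
 For anyone following along without Akka on the classpath, the same chopping 
 logic can be sketched over a plain `String` (a hedged adaptation of the 
 `chop` above; the recursion and the empty-chunk filtering are unchanged, 
 only `ByteString` is swapped for `String`):

```scala
object Chop {
  // Recursively split `in` on `find`, dropping empty chunks,
  // mirroring the ByteString-based chop above.
  @annotation.tailrec
  def chop(find: Char, in: String, res: Seq[String] = Nil): Seq[String] =
    in.indexOf(find) match {
      case -1 => res                 // no more delimiters: done
      case x  =>
        val chunk = in.take(x)       // everything before the delimiter
        chop(find, in.drop(x + 1), if (chunk.isEmpty) res else res :+ chunk)
    }
}
```

 Note that, like the original, a trailing fragment with no closing delimiter 
 is silently dropped, which is exactly the cross-chunk buffering concern 
 raised earlier in the thread.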


  On Wed, Sep 3, 2014 at 2:15 PM, Chris Baxter cba...@gmail.com wrote:

 -- 
 Cheers,
 √
  

[akka-user] Re: How to split an inbound stream on a delimiter character using Akka Streams

2014-09-03 Thread Chris Baxter
So transform worked for me.  Here is my quick and dirty impl of a 
Transformer and then using that Transformer in a Flow.  Thanks again for 
the help guys.

class PeriodDelimitedTransformer extends Transformer[ByteString, String] {
  val buffer = new ByteStringBuilder

  def onNext(msg: ByteString): List[String] = {
    val msgString = msg.utf8String
    val delimIndex = msgString.indexOf('.')
    if (delimIndex == -1) {
      // No delimiter in this chunk; buffer it until more data arrives.
      buffer.append(msg)
      List.empty
    }
    else {
      val parts = msgString.split("\\.")
      val endsWithDelim = msgString.endsWith(".")

      // The first part completes whatever was previously buffered.
      buffer.putBytes(parts.head.getBytes())
      val currentPiece = buffer.result.utf8String
      val otherPieces = parts.tail.dropRight(1).toList

      buffer.clear
      val lastPart =
        if (endsWithDelim) {
          List(parts.last)
        }
        else {
          // Trailing fragment has no delimiter yet; keep it buffered.
          buffer.putBytes(parts.last.getBytes())
          List.empty
        }

      currentPiece :: otherPieces ::: lastPart
    }
  }
}

object BasicTransformation {
  def main(args: Array[String]): Unit = {
    implicit val system = ActorSystem("Sys")
    implicit val mater = FlowMaterializer(MaterializerSettings())

    val data = List(ByteString("Lorem Ipsum is"), ByteString(" simply.Dummy 
      text of.The prin"), ByteString("ting.And typesetting industry."))
    Flow(data).transform(new PeriodDelimitedTransformer).foreach(println(_))
  }
}
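
The buffering logic above can be exercised without Akka at all: the sketch 
below (my adaptation for illustration, not the author's exact code, and 
`DelimitedSplitter` is a name I made up) carries a `StringBuilder` across 
chunks the way the Transformer carries its `ByteStringBuilder`, emitting a 
complete message each time a `.` is seen:

```scala
// Stateful delimiter splitter: feed chunks in arrival order and collect
// complete messages; a partial message stays buffered until the chunk
// containing its closing delimiter arrives.
class DelimitedSplitter(delim: Char = '.') {
  private val buffer = new StringBuilder

  def onNext(chunk: String): List[String] = {
    buffer.append(chunk)
    var out = List.empty[String]
    var idx = buffer.indexOf(delim.toString)
    while (idx != -1) {
      out = out :+ buffer.substring(0, idx)  // emit a complete message
      buffer.delete(0, idx + 1)              // drop the message + delimiter
      idx = buffer.indexOf(delim.toString)
    }
    out
  }
}
```

Feeding it the three chunks from the example above yields the four 
period-delimited messages, with "The printing" correctly reassembled across 
the chunk boundary.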

On Wednesday, September 3, 2014 8:15:33 AM UTC-4, Chris Baxter wrote:





[akka-user] Re: Strange behavior with Akka Tcp

2014-04-25 Thread Chris Baxter
Hi Patrick.  I will try and boil this down to something much simpler before 
taking out a ticket.  Thanks for looking into it for me.

Yes, I'm just starting to familiarize myself with the new Akka reactive 
streams api.  This is just a prototype to get myself familiar with the 
inner workings of the core IO stuff.  I'm really waiting for reactive 
streams to come out and then I plan on building on top of that as it will 
be much cleaner and simpler.

On Friday, April 11, 2014 2:32:37 PM UTC-4, Chris Baxter wrote:





[akka-user] Strange behavior with Akka Tcp

2014-04-11 Thread Chris Baxter
I'm using the latest release of Akka (2.3.2) and I've been playing around 
with the Tcp API a bit, trying to get familiar with how it works.  In doing 
so, I've been writing a little Memcached binary client.  This usage of Tcp 
is a little different than the examples in the docs in that the connection 
is kept alive and never closed (unless the peer closes the connection, in 
which case a reconnect will happen).  In my little prototype, I'm using the 
Ack-based back-pressure solution.  I also recently switched to pullMode for 
reads because I found that when I didn't, so many reads were coming in when 
I was hammering it with load that it took forever to receive the write ack, 
thus delaying the next write and slowing throughput down.  When I switched 
to pullMode, things sped up, but now I'm running into a strange issue where 
I eventually do not receive an Ack for one of the writes that I made, which 
pretty much kills the flow as it's the acks that keep data flowing from my 
memcached node actor into the connection actor.  I enabled trace logging 
and, more often than not, when this happens, I see this in the log as the 
last log message from the selection handler:

[DEBUG] [04/11/2014 14:11:02.281] [couch-akka.actor.default-dispatcher-22] 
[akka://couch/system/IO-TCP/selectors/$a/0] Wrote [0] bytes to channel

I can post my code if need be, but I just wanted to first see if anyone 
else has ever seen this behavior.  
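
The ack-based scheme described here has a simple invariant: the writer holds 
at most one write in flight and submits the next chunk only once the previous 
one is acknowledged.  A minimal single-threaded sketch of that invariant 
(illustrative only — `AckedWriter` is my name, not an Akka I/O type):

```scala
// Ack-gated writer: at most one unacknowledged write in flight.
// Chunks that arrive while a write is outstanding are queued.
class AckedWriter(send: String => Unit) {
  private var awaitingAck = false
  private val pending = scala.collection.mutable.Queue.empty[String]

  def write(chunk: String): Unit =
    if (awaitingAck) pending.enqueue(chunk)  // back-pressure: buffer it
    else { awaitingAck = true; send(chunk) }

  def ack(): Unit = {                        // peer confirmed the write
    awaitingAck = false
    if (pending.nonEmpty) { awaitingAck = true; send(pending.dequeue()) }
  }
}
```

A single lost ack leaves `awaitingAck` stuck at `true` forever, so nothing 
ever drains `pending` again — which is exactly the stall described above.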
