I think the disk read speed will be the limiting factor. I am not sure
using actors is the best way to solve this. Akka Streams will hold back
(backpressure) if you read faster than you process, so I would take a look
at that. Or just try the Java async file read API. Reading and processing
2 GB files should only take
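For what it's worth, here is a minimal sketch of the Java async file read API mentioned above (`java.nio.channels.AsynchronousFileChannel`). The chunk size is arbitrary, and for simplicity the sketch assumes chunk boundaries never split a multi-byte character, which holds for ASCII logs:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

public class AsyncRead {

    // Reads the whole file chunk by chunk through the async channel API.
    // Toy version: blocks on each Future; a real reader would use a
    // CompletionHandler to stay fully asynchronous.
    static String readAll(Path path) throws Exception {
        try (AsynchronousFileChannel ch =
                 AsynchronousFileChannel.open(path, StandardOpenOption.READ)) {
            StringBuilder out = new StringBuilder();
            ByteBuffer buf = ByteBuffer.allocate(8192); // chunk size is illustrative
            long pos = 0;
            while (true) {
                Future<Integer> pending = ch.read(buf, pos); // async read at offset
                int n = pending.get();                       // wait for completion
                if (n < 0) break;                            // EOF
                pos += n;
                buf.flip();
                out.append(StandardCharsets.UTF_8.decode(buf));
                buf.clear();
            }
            return out.toString();
        }
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("demo", ".log");
        Files.writeString(tmp, "line1\nline2\n");
        System.out.println(readAll(tmp));
    }
}
```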
Hi Harit,
You should try akka streams:
http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC2/
It has all you need.
There are also activator tutorials:
http://www.typesafe.com/activator/template/akka-stream-java8
http://www.typesafe.com/activator/template/akka-stream-scala
-Endre
Hi Idar,
I just confirmed with some of our teammates that it depends on our
customers.
1. Some customers use local disk and remove the logs after processing.
There are customers who use NAS-based storage. None use SSDs, as far as I
understand.
2. The log files vary a lot in size.
@AkkaTeam, thank you very much; it looks like weekend reading :). I will
get back with my progress and questions.
Thank you
On Thursday, May 7, 2015 at 11:55:01 PM UTC-7, Akka Team wrote:
What is the result of the log processing of a single file? Is it some
aggregation or summary, or are you performing some action for each log line?
It seems to me the most performant solution would be to not use actors
at all, but to create a dedicated dispatcher and process each log file
in
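To illustrate the "no actors, dedicated dispatcher" idea, here is a rough sketch using a plain fixed-size thread pool as a stand-in for a dedicated dispatcher. The pool size and the per-file "processing" (just the file name's length here) are placeholders for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FilePool {

    // Submits one task per log file to a fixed-size pool, so files are
    // processed in parallel without any actors in the picture.
    static List<Integer> process(List<String> files) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4); // size is illustrative
        try {
            List<Future<Integer>> pending = new ArrayList<>();
            for (String f : files) {
                // Placeholder "processing": a real task would parse the file.
                pending.add(pool.submit(() -> f.length()));
            }
            List<Integer> results = new ArrayList<>();
            for (Future<Integer> p : pending) {
                results.add(p.get()); // collect each file's result
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(process(List.of("a.log", "bb.log")));
    }
}
```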
Hello
This is what my use case looks like:
*Use Case*
- Given many log files in the range of 2 MB to 2 GB, I need to parse each
of these logs, apply some processing, and generate Java POJOs.
- For this problem, let's assume that we have just one log file.
- Also, the idea is to make the best use of the system.
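A minimal sketch of that use case, with assumed names (a toy `LogEntry` record and a made-up `"LEVEL message"` line format): `Files.lines` streams the file lazily, so a 2 GB log never has to fit in memory at once. The sketch collects into a list only for demonstration; for a truly large file you would consume the stream instead of collecting it:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

public class LogParser {

    // Toy POJO standing in for the real parsed record.
    record LogEntry(String level, String message) {}

    // Parses one "LEVEL message" line; this format is assumed for illustration.
    static LogEntry parse(String line) {
        String[] parts = line.split(" ", 2);
        return new LogEntry(parts[0], parts.length > 1 ? parts[1] : "");
    }

    // Files.lines reads lazily, so only one line is held at a time while
    // streaming; collecting the results is only for this small demo.
    static List<LogEntry> parseFile(Path path) throws IOException {
        try (var lines = Files.lines(path)) {
            return lines.map(LogParser::parse).collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".log");
        Files.writeString(tmp, "INFO started\nWARN low disk\n");
        System.out.println(parseFile(tmp));
    }
}
```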