Hello All, 
   I deal with a lot of large files on Amazon S3 and on disk, which have been 
indexed into chunks that can be used individually. What I would like to do 
is have a Source of (offset, size) index records which can be filtered and 
then passed into another stream element that reads the corresponding byte 
range from S3 or from local disk. However, my attempts so far have failed: 
the stream element that does the reading seems to block after the first 
read. In effect, I guess this is equivalent to having both an upstream and 
a downstream source, which is perhaps not supported. Does anyone 
have any suggestions for how I could accomplish this? The key feature is 
the ability to select a subset of the index records so that the entire file 
does not have to be read from S3. In my imagination it would look something 
like this:

val indexSource: Source[(Long, Long), NotUsed] = Source.fromIterator(() => index.iterator)
def readFileRange(offset: Long, size: Long): Array[Byte]

indexSource.zipWithIndex
  .filter { case (_, i) => i % 10 == 0 }                    // keep every 10th record
  .map { case ((offset, size), _) => readFileRange(offset, size) }
  .map(println)
  .runWith(Sink.ignore)
// Prints out a byte array for every 10th chunk in the file
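To make the range-read part concrete, here is a plain-Scala sketch of the kind of selective read I have in mind, using only the standard library against a local file (an S3 version would issue a ranged GET with a "Range: bytes=..." header instead). `ChunkReadSketch`, `selectEveryNth`, and the throwaway demo file are just illustrations, not anything from my real code:

```scala
import java.io.{File, FileOutputStream, RandomAccessFile}

object ChunkReadSketch {
  // Keep every nth record of an (offset, size) index, starting with the first.
  def selectEveryNth(index: Seq[(Long, Long)], n: Int): Seq[(Long, Long)] =
    index.zipWithIndex.collect { case (rec, i) if i % n == 0 => rec }

  // Read `size` bytes at `offset` from a local file. An S3 version would
  // issue a ranged GET for the same byte range instead of seeking on disk.
  def readFileRange(path: String, offset: Long, size: Long): Array[Byte] = {
    val raf = new RandomAccessFile(path, "r")
    try {
      raf.seek(offset)
      val buf = new Array[Byte](size.toInt)
      raf.readFully(buf)
      buf
    } finally raf.close()
  }

  def main(args: Array[String]): Unit = {
    // Throwaway file of 100 one-byte "chunks" so the sketch is runnable.
    val f = File.createTempFile("chunks", ".bin")
    f.deleteOnExit()
    val out = new FileOutputStream(f)
    try out.write(Array.tabulate[Byte](100)(_.toByte)) finally out.close()

    // Hypothetical index: one (offset, size) record per chunk.
    val index: Seq[(Long, Long)] = (0 until 100).map(i => (i.toLong, 1L))

    // Select every 10th record and read only those byte ranges.
    val bytes = selectEveryNth(index, 10).map { case (off, sz) =>
      readFileRange(f.getPath, off, sz)
    }
    println(bytes.map(_.head & 0xff).mkString(","))  // 0,10,20,...,90
  }
}
```

The point is that only the selected byte ranges are ever read; the rest of the file is never touched, which is exactly what I want to avoid full-file downloads from S3.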

Thanks,
Jason

-- 
>>>>>>>>>>      Read the docs: http://akka.io/docs/
>>>>>>>>>>      Check the FAQ: 
>>>>>>>>>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>>>>>>>>>      Search the archives: https://groups.google.com/group/akka-user
