Yes, concurrent mapping of parts (pages) of files into memory for use. I.e. we only load the pages that need to be accessed, flush them back after updates, and simply drop them at eviction if there have been no updates.
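A minimal sketch of that idea, under the stated behaviour above (load on demand, write back only dirty pages at eviction). The class and field names here (CachedPage, dirty, etc.) are illustrative only, not Neo4j's actual page cache classes:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Hypothetical sketch: pages are loaded on demand, written back at eviction
// only if modified, and simply dropped otherwise.
class CachedPage {
    final long filePageId;   // which page of the file this buffer holds
    final ByteBuffer buffer; // in-memory copy of the page
    boolean dirty;           // set when the page is modified after loading

    CachedPage(long filePageId, int pageSize) {
        this.filePageId = filePageId;
        this.buffer = ByteBuffer.allocateDirect(pageSize);
    }

    // Swap the page in: read its contents from the file at the page offset.
    void load(FileChannel channel, int pageSize) throws IOException {
        buffer.clear();
        channel.read(buffer, filePageId * (long) pageSize);
        dirty = false;
    }

    // Evict the page: flush it back only if it was updated, otherwise just drop it.
    void evict(FileChannel channel, int pageSize) throws IOException {
        if (dirty) {
            buffer.flip();
            channel.write(buffer, filePageId * (long) pageSize);
        }
        // Either way the buffer can now be reused for another page.
    }
}
```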
What is your actual problem?

On Tue, Apr 4, 2017 at 1:10 AM, kan wu <[email protected]> wrote:
> Hi there,
>
> I'm an application developer working on Neo4j, trying to tune the performance of my current application. This question came up while I was exploring how Neo4j loads records from files into memory.
>
> As I understand it, each "mapped" file (of the store) has a PageSwapper, which swaps pages in and out between the file and memory. The swapper uses the nio.channels API to do the IO.
>
> My question is: why does each swapper create an array of channels (64, as I observed) connected to its corresponding file? I found that different pages of the file are read from / written to the file through different channels. I cannot understand the intention of this design. To improve parallelism, perhaps? Thanks.
>
> I appreciate all your help.
>
> Best,
> Kan
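The thread doesn't spell out the rationale for the channel array, but the question's own guess, parallelism, is one plausible reading. The sketch below assumes that reading: striping page reads/writes across several independent FileChannels opened on the same file, so concurrent threads touching different pages tend to hit different channels and contend less on any single one. StripedChannelReader, STRIPE_COUNT, and readPage are hypothetical names for illustration, not Neo4j's actual swapper API:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical illustration of striping page I/O over several channels to one file.
class StripedChannelReader {
    static final int STRIPE_COUNT = 64; // mirrors the observed array size
    final FileChannel[] channels = new FileChannel[STRIPE_COUNT];

    StripedChannelReader(Path file) throws IOException {
        for (int i = 0; i < STRIPE_COUNT; i++) {
            // Each channel is an independent handle on the same file.
            channels[i] = FileChannel.open(file,
                    StandardOpenOption.READ, StandardOpenOption.WRITE);
        }
    }

    // Pick a channel from the page id, so threads reading different pages
    // are spread across different channels.
    void readPage(long filePageId, ByteBuffer target, int pageSize) throws IOException {
        FileChannel channel = channels[(int) (filePageId % STRIPE_COUNT)];
        channel.read(target, filePageId * (long) pageSize);
    }
}
```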
