26.05.2017 2:17, Leyne, Sean wrote:

- Asynchronous File I/O

    It is not really asynchronous, as it waits for the completion of every single IO request.

True, but it allows the storage controller to decide the best order in which to perform the operations...


  Order of what? IO requests are queued serially by the same thread. It is exactly the same as without this code ;)
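  To illustrate, here is a minimal sketch (my own illustration, not the actual SUPERSERVER_V2 code) of the pattern in question: the file is opened overlapped, but the thread blocks on every single request, so the requests still reach the device one at a time, in issue order.

#include <windows.h>

// Issue one overlapped write and immediately wait for it to finish.
bool writePage(HANDLE file, const void* page, DWORD pageSize, ULONGLONG offset)
{
    OVERLAPPED ov = {};
    ov.Offset     = static_cast<DWORD>(offset & 0xFFFFFFFF);
    ov.OffsetHigh = static_cast<DWORD>(offset >> 32);

    DWORD written = 0;
    if (!WriteFile(file, page, pageSize, nullptr, &ov) &&
        GetLastError() != ERROR_IO_PENDING)
    {
        return false;
    }

    // bWait == TRUE: block until this single request completes,
    // which makes the call effectively synchronous.
    return GetOverlappedResult(file, &ov, &written, TRUE) != 0;
}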


  Also, note that it completely disables file system caching, which kills write performance.

Really?

  Yes. You may try running Firebird with the file system cache disabled to evaluate it.
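  For reference, the cache bypass comes from the file open flags; this is a sketch of the usual Windows combination, not a quote of the exact SUPERSERVER_V2 call. FILE_FLAG_NO_BUFFERING turns the OS file cache off completely and requires sector-aligned buffers, offsets and transfer sizes.

#include <windows.h>

// Hypothetical helper: open the database file bypassing the file system cache.
HANDLE openDatabaseUncached(const wchar_t* path)
{
    return CreateFileW(path,
                       GENERIC_READ | GENERIC_WRITE,
                       FILE_SHARE_READ,
                       nullptr,
                       OPEN_EXISTING,
                       FILE_FLAG_OVERLAPPED | FILE_FLAG_NO_BUFFERING,
                       nullptr);
}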

I could see a benefit in writing several pages of the same 'priority/level' in the "carefully ordered writes" through a single operation, for any storage device.  Fewer calls would improve performance.

  Sure. But it is not present in SUPERSERVER_V2 (which is what we are speaking about).
And you missed one important word: consecutive. To write a few pages at once,
they must be consecutive in physical order.
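  Just to make that concrete, here is a sketch (mine, not existing Firebird code) of what the coalescing step would have to do: take the dirty page numbers sorted in ascending order, group them into runs of physically consecutive pages, and only then issue one write per run.

#include <cstddef>
#include <utility>
#include <vector>

// Returns (firstPage, pageCount) pairs, one per run of consecutive pages.
std::vector<std::pair<std::size_t, std::size_t>>
groupConsecutive(const std::vector<std::size_t>& sortedDirtyPages)
{
    std::vector<std::pair<std::size_t, std::size_t>> runs;
    for (std::size_t i = 0; i < sortedDirtyPages.size(); )
    {
        const std::size_t start = i;
        while (i + 1 < sortedDirtyPages.size() &&
               sortedDirtyPages[i + 1] == sortedDirtyPages[i] + 1)
        {
            ++i;
        }
        runs.emplace_back(sortedDirtyPages[start], i - start + 1);
        ++i;
    }
    return runs;
}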

Separately, I seem to recall that the feature was completed and released in an IB 6.5+ release.

  It is easy to check.

There is no UNIX part of it, btw.

Unix/Linux... Smm-*nix!!

Don't you know, Windows is the only platform that matters!

  Sure ;)

- Defer Header Page Write (i.e. reduce the number of times that the header page is written to disk)

    This is the most mature piece of code and I'm going to use it as a base for
our implementation. It has no support for CS and, of course, it must be
tested very carefully.

Interesting that you want to continue with it...

  Why throw away good ideas? :)


Although, I see the benefits of avoiding the "sun-level hotspot" that is the DB header page write operations.

I don't see how database integrity can be maintained if the header page changes are not persisted to disk immediately -- aside from an MPI-based multi-node cluster where page changes are sent to other nodes (as witnesses for safekeeping)*.

  The idea is to defer the header page write until the write of any other page,
in the hope that other transactions could start in between.
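  Roughly like this (a sketch of the idea only; the names and structure are mine, not the code I'm going to reuse): the header is only marked dirty, and the actual write is piggybacked on the next write of any other page, so several transaction starts can be absorbed into one header write.

#include <cstddef>

struct PageCache
{
    bool headerDirty = false;

    void onTransactionStart()
    {
        headerDirty = true;                 // defer: don't write the header yet
    }

    void writePage(std::size_t pageNo, const void* data)
    {
        if (headerDirty)
        {
            flushHeaderPage();              // piggyback the deferred header write
            headerDirty = false;
        }
        flushDataPage(pageNo, data);
    }

    void flushHeaderPage() { /* actual disk write of the header page */ }
    void flushDataPage(std::size_t, const void*) { /* actual disk write of a data page */ }
};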

* This approach could actually provide several possible benefits...

Even over a 10Gb/s TCP connection (with a properly configured MTU), MPI latencies (for 4KB messages) are < 250 microseconds (us), whereas SSD/PCIe SSD latencies are 10-5 (ms).

Using 10Gb/s RDMA, latencies are < 100 microseconds (us).

Using the latest RDMA NICs, latencies are < 15 microseconds (us).

  Good to know ;)

Regards,
Vlad
