> On 1/31/16 7:38 PM, Kouhei Kaigai wrote:
> > I'm investigating an SSD-to-GPU direct feature on top of the
> > custom-scan interface. It intends to load data blocks from NVMe-SSD
> > into GPU RAM using P2P DMA, prior to loading the data into CPU/RAM,
> > so that the data to be filtered out can be preprocessed on the GPU.
> > This only makes sense if the target blocks are not yet loaded into
> > CPU/RAM, because the SSD device is essentially slower than RAM.
> > So I would like a reliable way to check the latest status of the
> > shared buffers, to know whether a particular block is already
> > loaded or not.
>
> That completely ignores the OS cache though... wouldn't that be a major
> issue?
>
Once we can ensure the target block is not cached in the shared buffers, handling the OS page cache is the job of the driver that supports P2P DMA. When the driver receives a P2P DMA request from PostgreSQL, it checks the OS page cache status and determines the DMA source: either the OS buffer or the SSD block.
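To make the intended decision flow concrete, here is a minimal C sketch of the three-way source selection (shared buffers, OS page cache, or direct SSD read). The function `choose_dma_source` and its boolean flags are hypothetical stand-ins for illustration only, not real PostgreSQL or driver APIs; in the real flow the first check would be done by the backend against the shared buffers, and the second check would happen inside the driver after the request is enqueued.

```c
#include <stdbool.h>

/* Hypothetical three-way outcome -- an illustration of the decision
 * described above, not an actual PostgreSQL or driver interface. */
typedef enum
{
    SRC_SHARED_BUFFER,  /* block already resides in PostgreSQL shared buffers */
    SRC_OS_PAGE_CACHE,  /* driver resolves the request from the OS page cache */
    SRC_SSD_P2P_DMA     /* genuinely cold block: direct SSD-to-GPU P2P DMA */
} DmaSource;

/*
 * Decide where the block's data should come from.  In the real flow,
 * the shared-buffer check is performed by the backend (e.g. via a
 * buffer-table lookup), and the OS page cache check is performed by
 * the P2P DMA driver once it receives the request.
 */
static DmaSource
choose_dma_source(bool in_shared_buffers, bool in_os_page_cache)
{
    if (in_shared_buffers)
        return SRC_SHARED_BUFFER;   /* backend just reads and pins the buffer */
    if (in_os_page_cache)
        return SRC_OS_PAGE_CACHE;   /* driver copies from the OS buffer */
    return SRC_SSD_P2P_DMA;         /* driver issues the P2P DMA from SSD */
}
```

The ordering matters: the shared-buffer check must come first, because RAM-resident data makes the slower SSD path pointless, which is exactly the premise of the quoted proposal.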
> To answer your direct question, I'm no expert, but I haven't seen any
> functions that do exactly what you want. You'd have to pull relevant
> bits from ReadBuffer_*. Or maybe a better method would just be to call
> BufTableLookup() without any locks and if you get a result > -1 just
> call the relevant ReadBuffer function. Sometimes you'll end up calling
> ReadBuffer even though the buffer isn't in shared buffers, but I would
> think that would be a rare occurrence.
>
Thanks. Indeed, an extension can call BufTableLookup(); PrefetchBuffer() is a good example of this. If it returns a valid buf_id, nothing is difficult: we just call ReadBuffer() to pin the buffer. Otherwise, when BufTableLookup() returns a negative value, it means the (relation, forknum, blocknum) pair does not exist in the shared buffers. In that case, the extension enqueues a P2P DMA request for asynchronous execution, and the driver processes it shortly afterwards.

Concurrent access can always happen. PostgreSQL uses MVCC, so the backend that issued the P2P DMA does not need to pay attention to new tuples that did not exist at executor start time, even if another backend loads and updates the same buffer just after the BufTableLookup() above. On the other hand, we do have to pay attention to whether a fraction of the buffer page is partially written to the OS buffer or to storage. That is in the scope of the operating system, so it is not controllable from our side.

One idea I can come up with is a temporary suspension of FlushBuffer() for particular (relation, forknum, blocknum) pairs until the P2P DMA completes. Even if a concurrent backend updates the buffer page after the BufTableLookup(), this prevents the OS caches and storage from becoming dirty during the P2P DMA.

What are people's thoughts?

--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei <kai...@ak.jp.nec.com>

--
Sent via pgsql-hackers mailing list (email@example.com)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers