At 03:42 PM 3/20/2006, Dror Goldenberg wrote:
>I'd rather call this a tradeoff between performance and exposure.
>You probably also trust your NFS server not to return you garbage
>or malicious executable files when you're reading from the network
>filesystem. You might as well just trust it not to do other malicious
>stuff.
Well, for that, the client code implements a "full-frontal mode" - it
just does a single ib_get_dma_mr() with REMOTE_READ|REMOTE_WRITE
privileges, then performs all addressing as offsets against that single
rkey (sketched below). What I'm looking for is *alternatives* to this -
ones which provide RDMA protection but do not suffer such extreme
performance penalties. It's not even the overhead of the memory
registration itself; it's the fact that everything single-threads on
the TPT loads/invalidates. Throughput grinds down to tens of MB/sec,
from hundreds.

Here are the memory registration modes, selected by the module
parameter "Memreg" (the second sketch below shows how such a knob
might be wired up). The idea is that they go faster as the setting
increases:

  0 - no memory registration (all inline, via bounce buffers)
  1 - memory registration per RDMA chunk (very correct, but slow)
  2 - memory window bind per chunk (not supported by OpenIB)
  3 - memory window bind with async unbind (ditto)
  4 - registration with FMR (what I'm attempting to implement)
  5 - "persistent" memory registration (all physical)

Modes 0, 1, 2 and 4 were thought to be safe. Now I'm not so sure
about 4. Maybe implementing memory windows isn't looking so bad
after all! :-)

>I'd leave it for you to decide, but I strongly recommend on trying the
>API even just for the sake of performance assessment.

I suppose. My concern, again, is that I expect stronger protection in
exchange for the overhead. I don't think the mempool gives me that
(see the FMR pool sketch below).

Tom.
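For reference, a minimal sketch of the "full-frontal" registration
described above, assuming a kernel client with an already-allocated
protection domain (pd); the function name is illustrative and error
handling is abbreviated:

#include <linux/err.h>
#include <rdma/ib_verbs.h>

/* One DMA MR covering all of physical memory; the returned rkey is
 * then used with byte offsets for every RDMA, so there is no
 * per-chunk registration cost - and no per-chunk protection. */
static struct ib_mr *full_frontal_mr(struct ib_pd *pd)
{
	struct ib_mr *mr;

	mr = ib_get_dma_mr(pd, IB_ACCESS_LOCAL_WRITE |
			       IB_ACCESS_REMOTE_READ |
			       IB_ACCESS_REMOTE_WRITE);
	if (IS_ERR(mr))
		return NULL;

	/* All subsequent RDMA addressing: mr->rkey + offset. */
	return mr;
}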
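And a hypothetical sketch of how a "Memreg" knob could be exposed as a
module parameter; the variable name, default, and permissions here are
illustrative, not the actual client code:

#include <linux/module.h>
#include <linux/moduleparam.h>

/* Registration strategy, 0-5 as listed above; read-only at runtime,
 * defaulting to per-chunk registration (mode 1). */
static int memreg = 1;
module_param(memreg, int, 0444);
MODULE_PARM_DESC(memreg, "Memory registration mode (0-5)");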
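Finally, a hedged sketch of what mode 4 looks like against the kernel
FMR pool API (<rdma/ib_fmr_pool.h>); the pool sizing numbers are
illustrative guesses, not tuned values:

#include <linux/err.h>
#include <rdma/ib_fmr_pool.h>

/* Illustrative sizing, not tuned. */
#define CHUNK_POOL_SIZE		256
#define CHUNK_MAX_PAGES		64

static struct ib_fmr_pool *create_chunk_pool(struct ib_pd *pd)
{
	struct ib_fmr_pool_param param = {
		.max_pages_per_fmr = CHUNK_MAX_PAGES,
		.access            = IB_ACCESS_REMOTE_READ |
				     IB_ACCESS_REMOTE_WRITE,
		.pool_size         = CHUNK_POOL_SIZE,
		.dirty_watermark   = 32,	/* unmaps batch up to here */
		.cache             = 0,
	};

	return ib_create_fmr_pool(pd, &param);
}

/* Map one RDMA chunk; page_list[] holds the chunk's page DMA
 * addresses.  This is where the protection concern lives: after
 * ib_fmr_pool_unmap(), the rkey stays valid until the dirty
 * watermark triggers a flush, leaving a window in which the peer
 * can still touch the pages. */
static u32 map_chunk(struct ib_fmr_pool *pool, u64 *page_list,
		     int npages, u64 iova, struct ib_pool_fmr **out)
{
	struct ib_pool_fmr *fmr;

	/* Note: later kernels pass iova by value, not by pointer. */
	fmr = ib_fmr_pool_map_phys(pool, page_list, npages, &iova);
	if (IS_ERR(fmr))
		return 0;

	*out = fmr;
	return fmr->fmr->rkey;	/* rkey to advertise for this chunk */
}

When the chunk completes, ib_fmr_pool_unmap() returns the FMR to the
pool - lazily, which is exactly the weaker-than-expected protection
being questioned above.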
