Hi
In Ceph's source code (https://github.com/ceph/ceph), we can see:
1. src/os/bluestore/BlockDevice.cc
(https://github.com/ceph/ceph/blob/master/src/os/bluestore/BlockDevice.cc):
#if defined(HAVE_SPDK)
  if (type == "ust-nvme") {
    return new NVMEDevice(cct, cb, cbpriv);
  }
#endif
A comment in the code suggests this path has no effect. I couldn't find
anything else about accelerating Ceph via SPDK in the code, so I guess no
work has been done in BlueStore for this yet. Accelerating the Ceph OSD
backend relies on BlueStore, which uses a new store to implement a lockless,
asynchronous, high-performance storage service.
2. lib/bdev/rbd/bdev_rbd.c in SPDK
(https://github.com/ceph/spdk/tree/7b7f2aa6854745caf6e2803133043132ca400285/lib/bdev/rbd):
#include "spdk/conf.h”
#include "spdk/env.h”
#include "spdk/log.h”
#include "spdk/bdev.h”
#include "spdk/io_channel.h”
This source file implements a series of Ceph client operations, such as
rados_create(cluster, NULL), rados_conf_read_file(*cluster, NULL),
rados_connect(*cluster), spdk_io_channel_get_ctx(ch), etc. So there is some
acceleration of client I/O performance against a Ceph cluster in the
bdev_rbd.c file, achieved by using SPDK's poller mode rather than interrupt
mode.
Therefore, we can guess that SPDK is used not in the OSD backend, but rather
in front of the OSD. And in the latest version of SPDK, Blobstore has
arrived; it also implements a lockless, asynchronous, high-performance
storage service.
Can we replace BlueStore with Blobstore in the OSD backend via SPDK?
Am I wrong? Could someone help me out and tell me more details?
Thanks.
Helloway
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com