> In this case, can a single block device (for example a huge virtual
> machine image) be striped across many OSDs to achieve better
> read performance?
> An image striped across 3 disks should get 3x the IOPS when reading.
Yes, but the network (and many other issues) must be considered.
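
For what it's worth, RBD already stripes by design: an image is chunked
into fixed-size objects (4 MB by default) that CRUSH distributes across
the OSDs, so reads of a large image naturally fan out. A minimal sketch
using the python-rbd bindings (the pool name 'rbd' and image name
'vm-image' are just placeholders):

    import rados
    import rbd

    # connect to the cluster using the local ceph.conf
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')   # default RBD pool
        try:
            # 10 GiB image; order=22 means 2^22 = 4 MiB objects.
            # Each object is placed independently by CRUSH, usually on
            # a different OSD, so I/O on the image spreads across disks.
            rbd.RBD().create(ioctx, 'vm-image', 10 * 1024**3, order=22)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

Whether you actually see 3x the IOPS depends on the access pattern:
large sequential or parallel reads fan out across OSDs, but a single
small random read still touches only one object (and therefore one
OSD) at a time.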


> Another question: in a standard RGW/RBD infrastructure (no CephFS), I
> have to configure only "mon" and "osd" nodes, right?
Yes.

> How many monitor nodes are suggested?
3 is suggested; an odd number lets the monitors keep a quorum when one
of them fails.
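
For illustration, a minimal ceph.conf for such a cluster only needs
[mon.*] and [osd.*] sections (the hostnames and addresses below are
placeholders):

    [mon.a]
        host = mon-host-1
        mon addr = 192.168.0.10:6789

    [mon.b]
        host = mon-host-2
        mon addr = 192.168.0.11:6789

    [mon.c]
        host = mon-host-3
        mon addr = 192.168.0.12:6789

    [osd.0]
        host = osd-host-1

    [osd.1]
        host = osd-host-2

No MDS section is needed, since the metadata server is only used by
CephFS.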


On 30 October 2012 22:31, Gandalf Corvotempesta
<[email protected]> wrote:
> 2012/10/30 袁冬 <[email protected]>:
>> RGW and libRBD do not use the same pool; you can't access an RBD volume with
>> RGW. RADOS treats the RBD volume as just a large object.
>
> Ok, I think I have understood. RADOS only stores objects on an existing
> filesystem (this is why I have to create an FS to use RADOS/Ceph).
> Now, if that object is accessed via RGW, it will be a single
> file stored on the FS, but if I'm accessing it
> with RBD, RBD masks a very large object, stored on the FS, as if it
> were a single block device.
> In this case, can a single block device (for example a huge virtual
> machine image) be striped across many OSDs to achieve better
> read performance?
> An image striped across 3 disks should get 3x the IOPS when reading.
>
>
> Another question: in a standard RGW/RBD infrastructure (no CephFS), I
> have to configure only "mon" and "osd" nodes, right?
> How many monitor nodes are suggested?



-- 
袁冬
Email:[email protected]
QQ:10200230