Can Ceph act in a RAID-5 (or RAID-6) mode, storing objects so that the storage 
overhead is n/(n-1)?   For some systems where the underlying OSDs are known to 
be very reliable, but where storage is very tight, this could be useful.
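[Editor's note: a minimal sketch of the overhead arithmetic the question is asking about. Ceph's answer to this is erasure-coded pools (a k-data + m-parity scheme, RAID-5-like when m=1, RAID-6-like when m=2), which were not yet in the 0.72 release discussed below; the function names here are illustrative, not Ceph API.]

```python
def replication_overhead(replicas):
    # Raw bytes stored per byte of user data under n-way replication.
    return float(replicas)

def erasure_overhead(k, m):
    # Raw bytes stored per byte of user data for a k data + m parity
    # erasure-coded layout; with m=1 this is the RAID-5-like n/(n-1).
    return (k + m) / k

# 2-way replication costs 2x raw space; a 4+1 erasure layout costs 1.25x.
print(replication_overhead(2))   # 2.0
print(erasure_overhead(4, 1))    # 1.25
```

With k=4, m=1 the overhead is 5/4 = 1.25x, i.e. n/(n-1) for n=5, versus 2x for the 2-way replication discussed in this thread.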

-----Original Message-----
From: ceph-users-boun...@lists.ceph.com 
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark Kirkwood
Sent: Thursday, April 10, 2014 7:21 PM
To: Udo Lembke; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] OSD space usage 2x object size after rados put

On 11/04/14 06:35, Udo Lembke wrote:
> Hi,
>
> On 10.04.2014 20:03, Russell E. Glaue wrote:
>> I am seeing the same thing, and was wondering the same.
>>
>> We have 16 OSDs on 4 hosts. The File system is Xfs. The OS is CentOS 6.4. 
>> ceph version 0.72.2
>>
>> I am importing a 3.3TB disk image into a rbd image.
>> At 2.6TB, and still importing, 5.197TB is used according to `rados -p <pool> 
>> df`
> that looks normal to me. With replication of 2 you have all data
> twice: 2*2.6TB = 5.2TB
>
>

No, the replication of the pg goes to different OSDs - in my case they 
are on different hosts (and I have replication level 3 as well).
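
[Editor's note: a quick sketch of the expected raw usage, assuming `rados df` reports total raw space including all replicas; the function name is illustrative.]

```python
def expected_raw_usage_tb(data_tb, replication):
    # `rados df` counts raw cluster space, so each byte of user data
    # written appears `replication` times in the total.
    return data_tb * replication

# 2.6 TB imported so far into a size=2 pool -> ~5.2 TB raw, matching
# the 5.197 TB reported above.
print(expected_raw_usage_tb(2.6, 2))
```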
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
