2013/1/10 Gregory Farnum <[email protected]>:
> On Thu, Jan 10, 2013 at 8:56 AM, Marcin Szukala
> <[email protected]> wrote:
>> Hi,
>>
>> The scenario is correct except for the last line. I can mount the image, but
>> the data that was written to the image before the power failure is lost.
>>
>> Currently the ceph cluster is not healthy, but I don't think that is
>> related, because I had this issue before the cluster itself had issues
>> (I will write about that in a separate post so as not to mix topics).
>
> This sounds like one of two possibilities:
> 1) You aren't actually committing data to RADOS very often, so when
> the power fails you lose several minutes of writes. How much data are
> you losing, how is it generated, and is whatever you're doing running
> any kind of fsync or sync? And what filesystem are you using?
> 2) Your cluster is actually not accepting writes, so RBD never
> manages to do a write, but you aren't doing much and so you don't
> notice. What's the output of ceph -s?
> -Greg
Hi,
Today I created a new ceph cluster from scratch.
root@ceph-1:~# ceph -s
   health HEALTH_OK
   monmap e1: 3 mons at {a=10.3.82.102:6789/0,b=10.3.82.103:6789/0,d=10.3.82.105:6789/0}, election epoch 4, quorum 0,1,2 a,b,d
   osdmap e65: 56 osds: 56 up, 56 in
   pgmap v3892: 13744 pgs: 13744 active+clean; 73060 MB data, 147 GB used, 51983 GB / 52131 GB avail
   mdsmap e1: 0/0/1 up
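
As a sanity check that the cluster is actually accepting writes (Greg's possibility 2), one option is to push a test object into a pool with the rados tool and read it back; the pool and object names below are only examples:

root@ceph-1:~# rados -p rbd put powertest /etc/hostname
root@ceph-1:~# rados -p rbd ls | grep powertest
root@ceph-1:~# rados -p rbd get powertest /tmp/powertest.out

If the put and get complete without errors, the cluster itself is taking writes and the problem is more likely on the client side.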
The issue persists.
I am losing all of the data on the image. On the mounted image I have 5
logical volumes.
root@compute-9:~# mount
(snip)
/dev/mapper/compute--9-nova on /var/lib/nova type xfs (rw)
/dev/mapper/compute--9-tmp on /tmp type xfs (rw)
/dev/mapper/compute--9-libvirt on /etc/libvirt type xfs (rw)
/dev/mapper/compute--9-log on /var/log type xfs (rw)
/dev/mapper/compute--9-openvswitch on /var/lib/openvswitch type xfs (rw)
So I have directories with little to no data being written and directories
with a lot of writes (the logs). There is no fsync or sync. The filesystem is xfs.
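
If the lost writes are only sitting in the page cache, flushing explicitly before cutting the power should make the difference visible. A minimal test, assuming /var/log is one of the xfs mounts on the image and writetest is just a throwaway file name:

root@compute-9:~# dd if=/dev/zero of=/var/log/writetest bs=1M count=100
root@compute-9:~# sync
root@compute-9:~# xfs_freeze -f /var/log && xfs_freeze -u /var/log

sync pushes the dirty pages down through LVM and rbd to the OSDs, and the xfs_freeze pair additionally quiesces the filesystem. If writetest survives a power cut after that but normal data does not, it would point at possibility 1 from Greg's mail rather than at RBD itself.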
Regards,
Marcin