On Wed, 16 Jan 2013, Wido den Hollander wrote:
>
> On 16 Jan 2013, at 18:00, Sage Weil <[email protected]> wrote:
>
> > On Wed, 16 Jan 2013, Wido den Hollander wrote:
> >>
> >> On 01/16/2013 11:50 AM, Marcin Szukala wrote:
> >>> Hi all,
> >>>
> >>> Any ideas how can I resolve my issue? Or where the problem is?
> >>>
> >>> Let me describe the issue.
> >>> Host boots up and maps an RBD image containing XFS filesystems
> >>> Host mounts the filesystems from the RBD image
> >>> Host starts to write data to the mounted filesystems
> >>> Host experiences a power failure
> >>> Host comes up and maps the RBD image again
> >>> Host mounts the filesystems from the RBD image
> >>> All data from all filesystems is lost
> >>> Host is able to keep using the filesystems with no problems afterwards.
> >>>
> >>> The filesystem is XFS, and there are no errors on the filesystem.
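
(For reference, my reading of those steps in terms of the kernel RBD client
is roughly the following; the pool, image and mount point names are only
examples:)

    rbd map myimage --pool rbd     # kernel client; the device appears as e.g. /dev/rbd0
    mount /dev/rbd0 /mnt/data      # the image already carries an XFS filesystem
    dd if=/dev/zero of=/mnt/data/testfile bs=1M count=100   # write some data
    # ... power failure ...
    rbd map myimage --pool rbd
    mount /dev/rbd0 /mnt/data      # mounts cleanly, but the data written above is gone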
> >>
> >> That simply does not make sense to me. How can all data be gone while the FS
> >> still mounts cleanly?
> >>
> >> Can you try to format the RBD image with ext4 and see if that makes any
> >> difference?
> >>
> >> Could you also try running a "sync" prior to pulling the power from the
> >> host, to see if that makes any difference?
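
(A minimal sketch of that test, assuming the image is already mapped as
/dev/rbd0:)

    mkfs.ext4 /dev/rbd0            # ext4 instead of XFS, to rule out an XFS-specific issue
    mount /dev/rbd0 /mnt/data
    cp -a /path/to/testdata /mnt/data/    # write some data
    sync                           # flush dirty pages and the journal down to the RBD device
    # now pull the power from the host and check whether the data survives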
> >
> > A few other quick questions:
> >
> > What versions of qemu and librbd are you using? What command line is used
> > to start the VM? This could be a problem with the qemu and librbd caching
> > configuration.
> >
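
(For context: if qemu+librbd were in the picture, the caching behaviour would
be controlled on the -drive line, along these lines; the pool/image names are
placeholders and the exact options depend on the qemu version:)

    qemu-system-x86_64 -m 1024 \
        -drive format=raw,file=rbd:rbd/myimage:rbd_cache=true,if=virtio,cache=writeback

With rbd_cache=true and cache=writeback, writes are only considered durable
after the guest issues a flush, which is why the caching configuration matters
for power-failure behaviour.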
>
> I don't think he uses qemu. From what I understand he uses kernel RBD,
> since he uses the words 'map' and 'unmap'.
That's what I originally thought too, and then I saw
> >>> root@openstack-1:/etc/init# ceph -s
and wasn't sure...
Marcin?
sage
> >>> health HEALTH_OK
> >>> monmap e1: 3 mons at
> >>> {a=10.3.82.102:6789/0,b=10.3.82.103:6789/0,d=10.3.82.105:6789/0},
> >>> election epoch 10, quorum 0,1,2 a,b,d
> >>> osdmap e132: 56 osds: 56 up, 56 in
> >>> pgmap v87165: 13744 pgs: 13744 active+clean; 52727 MB data, 102 GB
> >>> used, 52028 GB / 52131 GB avail
> >>> mdsmap e1: 0/0/1 up
> >>>
> >>> Regards,
> >>> Marcin
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html