Re: [ceph-users] XFS attempt to access beyond end of device

2017-07-22 Thread Brad Hubbard
Blair, I should clarify that I am *now* aware of your support case =D For anyone willing to run a systemtap, the following should give us more information about the problem: stap --all-modules -e 'probe kernel.function("handle_bad_sector"){ printf("handle_bad_sector(): ARGS is %s\n", $$parms$$);
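(The preview cuts the command off mid-probe. A minimal sketch of what the complete invocation might look like, assuming the probe body only prints the pretty-printed parameters; the closing brace and quote are my guess at the truncated remainder:)

    # Sketch only: print the arguments each time handle_bad_sector() fires.
    # $$parms$$ is systemtap's fully-expanded pretty-print of the function parameters.
    stap --all-modules -e '
    probe kernel.function("handle_bad_sector") {
        printf("handle_bad_sector(): ARGS is %s\n", $$parms$$)
    }'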

Re: [ceph-users] dealing with incomplete PGs while using bluestore

2017-07-22 Thread mofta7y
That's exactly what I am doing. The only difference is that I didn't need to do step 1, since for me the device was already mounted in /var/lib/ceph/ceph-###, but the remaining steps are exactly what I am doing. It seems to me the PG got corrupted in all copies in my case, and that's what is causing it to
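(Not part of the original message: a quick hedged sketch of how one might confirm which OSD directories are already mounted, assuming the standard /var/lib/ceph layout; exact paths vary per deployment.)

    # Show which ceph OSD directories are backed by a mounted device on this host.
    mount | grep /var/lib/ceph
    # List the ceph-<id> directories present locally.
    ls /var/lib/ceph/osd/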

Re: [ceph-users] dealing with incomplete PGs while using bluestore

2017-07-22 Thread Daniel K
I am in the process of doing exactly what you are -- this worked for me:
1. Mount the first partition of the bluestore drive that holds the missing PGs (if it's not already mounted):
> mkdir /mnt/tmp
> mount /dev/sdb1 /mnt/tmp
2. Export the PG to a suitable temporary storage location:
>
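(The preview is truncated at step 2. A sketch of how that export step is commonly done with ceph-objectstore-tool; the OSD id, PG id, and file path below are placeholders, not taken from the original message.)

    # Stop the source OSD before touching its object store.
    systemctl stop ceph-osd@12
    # Export the PG from the OSD's data path to a file.
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
        --pgid 7.1a --op export --file /mnt/tmp/pg7.1a.export
    # Later, import it into another (stopped) OSD that should hold the PG.
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-20 \
        --op import --file /mnt/tmp/pg7.1a.export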

[ceph-users] dealing with incomplete PGs while using bluestore

2017-07-22 Thread mofta7y
Hi All, I have a situation here. I have an EC pool that has a cache tier pool (the cache tier is replicated with size 2). I had an issue on the pool and the crush map got changed after rebooting some OSDs; in any case, I lost 4 cache tier OSDs. Those lost OSDs are not really lost, they look
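(A hedged sketch of the usual first diagnostic steps for incomplete PGs in a situation like this; standard ceph CLI, with a placeholder PG id.)

    # List the PGs reported as incomplete.
    ceph health detail | grep incomplete
    # Query one of them and look at "peering_blocked_by" / "down_osds_we_would_probe".
    ceph pg 7.1a query | less
    # Check which of the affected OSDs are down or out.
    ceph osd tree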

[ceph-users] Luminous: ceph mgr create error - mon disconnected

2017-07-22 Thread Oscar Segarra
Hi, I have upgraded from the kraken version with a simple "yum upgrade" command. After the upgrade, I'd like to deploy the mgr daemon on one node of my ceph infrastructure. But, for some reason, it gets stuck! Let's see the complete set of commands: [root@vdicnode01 ~]# ceph -s cluster: id:
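(The preview is truncated before the failing command. As a hedged sketch, this is how a mgr daemon is typically brought up manually on Luminous; the hostname vdicnode01 comes from the prompt above, while the keyring path follows the standard layout and is an assumption, not taken from the original message.)

    # Create the mgr data directory and a keyring with the documented mgr caps.
    mkdir -p /var/lib/ceph/mgr/ceph-vdicnode01
    ceph auth get-or-create mgr.vdicnode01 \
        mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
        > /var/lib/ceph/mgr/ceph-vdicnode01/keyring
    chown -R ceph:ceph /var/lib/ceph/mgr/ceph-vdicnode01
    # Enable and start the daemon.
    systemctl enable --now ceph-mgr@vdicnode01
    # Or, with ceph-deploy: ceph-deploy mgr create vdicnode01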