[ceph-users] ceph-deploy osd creation failed with multipath and dmcrypt

2018-11-06 Thread Pavan, Krish
Trying to create an OSD on a multipath device with dmcrypt, and it failed. Any suggestions, please?

ceph-deploy --overwrite-conf osd create ceph-store1:/dev/mapper/mpathr --bluestore --dmcrypt  -- failed
ceph-deploy --overwrite-conf osd create ceph-store1:/dev/mapper/mpathr --bluestore             -- worked

The logs from the failure:
[ceph-store12][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/osd-lockbox/e15f1adc-feff-4890-a617-adc473e7331e/magic.68428.tmp
[ceph-store12][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd-lockbox/e15f1adc-feff-4890-a617-adc473e7331e/magic.68428.tmp
[ceph-store12][WARNIN] Traceback (most recent call last):
[ceph-store12][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
[ceph-store12][WARNIN]     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[ceph-store12][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5736, in run
[ceph-store12][WARNIN]     main(sys.argv[1:])
[ceph-store12][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5687, in main
[ceph-store12][WARNIN]     args.func(args)
[ceph-store12][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2108, in main
[ceph-store12][WARNIN]     Prepare.factory(args).prepare()
[ceph-store12][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2097, in prepare
[ceph-store12][WARNIN]     self._prepare()
[ceph-store12][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2171, in _prepare
[ceph-store12][WARNIN]     self.lockbox.prepare()
[ceph-store12][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2861, in prepare
[ceph-store12][WARNIN]     self.populate()
[ceph-store12][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2818, in populate
[ceph-store12][WARNIN]     get_partition_base(self.partition.get_dev()),
[ceph-store12][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 844, in get_partition_base
[ceph-store12][WARNIN]     raise Error('not a partition', dev)
[ceph-store12][WARNIN] ceph_disk.main.Error: Error: not a partition: /dev/dm-215
[ceph-store12][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-disk -v prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --bluestore --cluster ceph --fs-type btrfs -- /dev/mapper/mpathr
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
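
Some context on the failure: the traceback points at ceph-disk's get_partition_base(). The lockbox partition it creates on the multipath map resolves to a plain /dev/dm-NNN device, and sysfs exposes no 'partition' attribute for such a device, so ceph-disk decides it is not a partition. A minimal check along those lines (illustrative only; the device name is taken from the traceback above):

# Illustrative check, not a fix: see whether sysfs regards the device as a partition.
dev=/dev/dm-215                                   # device named in the traceback
name=$(basename "$(readlink -f "$dev")")          # e.g. dm-215
if [ -e "/sys/class/block/$name/partition" ]; then
    echo "$dev is seen as a partition"
else
    echo "$dev has no 'partition' attribute in sysfs, so ceph-disk raises 'not a partition'"
fi

One avenue that may be worth trying (an assumption on my part, not something verified here) is ceph-volume's LVM path, e.g. "ceph-volume lvm create --bluestore --dmcrypt --data /dev/mapper/mpathr", since it does not depend on GPT partitions the way ceph-disk does.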




[ceph-users] loaded dup inode

2018-05-17 Thread Pavan, Krish
I am seeing a ceph-mds error, "loaded dup inode":

2018-05-17 09:33:25.141358 [ERR]  loaded dup inode 0x1000212acba [2,head] v7879 at /static/x/A/B/A3AF99016CC90CA60CEFBA1A0696/20180515, but inode 0x1000212acba.head v3911 already exists at /static/X/A/20180515/014


How do I clean this up? I tried to delete /static/X/A/20180515/014, but it failed.

ceph-mds is 12.2.4 and the OSD servers are still 12.2.2. We have two active MDSs; I believe this happened after one MDS server failed over to standby.
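
For reference, this is what I was planning to try next (a sketch only: the mds name is a placeholder, and whether an online repair is the right fix here is not something I have confirmed):

# Sketch, not verified: list recorded damage, then scrub and repair from the root.
ceph daemon mds.<name> damage ls
ceph daemon mds.<name> scrub_path / force recursive repair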



Regards
Krish


[ceph-users] MDS is Readonly

2018-05-02 Thread Pavan, Krish

We have a Ceph 12.2.4 CephFS cluster with two active MDS servers, and directories are pinned to the MDS servers. Yesterday an MDS server crashed. Once all FUSE clients had unmounted, we brought the MDS back online. Both MDSs are active now.

Once it came back, we started to see that one MDS is read-only.
...
2018-05-01 23:41:22.765920 7f71481b8700  1 mds.0.cache.dir(0x1002fc5d22d) commit error -2 v 3
2018-05-01 23:41:22.765964 7f71481b8700 -1 log_channel(cluster) log [ERR] : failed to commit dir 0x1002fc5d22d object, errno -2
2018-05-01 23:41:22.765974 7f71481b8700 -1 mds.0.222755 unhandled write error (2) No such file or directory, force readonly...
2018-05-01 23:41:22.766013 7f71481b8700  1 mds.0.cache force file system read-only
2018-05-01 23:41:22.766019 7f71481b8700  0 log_channel(cluster) log [WRN] : force file system read-only


In the health warnings I see:
health: HEALTH_WARN
1 MDSs are read only
1 MDSs behind on trimming

There are no errors on the OSDs backing the metadata pool.
Will "ceph daemon mds.x scrub_path / force recursive repair" fix it, or does an offline data-scan need to be done?
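
Since errno -2 is ENOENT, one thing I was going to check first (a sketch only: the pool name below is a placeholder for our real metadata pool, and the object name assumes the usual <inode-hex>.<frag-hex> naming of dirfrag objects):

# Sketch: does the dirfrag object the MDS failed to commit exist in the metadata pool?
rados -p cephfs_metadata stat 1002fc5d22d.00000000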



Regards
Krish


[ceph-users] cephfs metadata dump

2018-03-09 Thread Pavan, Krish
Hi All,
We have a CephFS file system that is already large (> 1 PB) and is expected to grow further. I need to dump the metadata (CInode, CDir, with ACLs, size, ctime, ...) weekly to find and report usage as well as ACLs.
Is there any tool to dump and decode the metadata pool without going through the MDS servers?
What is the best way to do this?
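
One direction I have been looking at is reading the metadata pool directly with rados (a rough sketch, not a polished tool: the pool name is an assumption, check 'ceph fs ls' for the real one, and it only lists the dentry keys without decoding the inode values):

# Dentries are stored as omap key/value pairs on the dirfrag objects in the
# metadata pool, so they can at least be listed with rados, bypassing the MDS.
# Decoding the values (sizes, ctimes, ACL xattrs, ...) is not shown here.
pool=cephfs_metadata            # placeholder; use the metadata pool from 'ceph fs ls'
for obj in $(rados -p "$pool" ls); do
    echo "== $obj =="
    rados -p "$pool" listomapkeys "$obj"
done

At this scale a full listing will obviously take a long time, so this is more a proof of concept than a weekly reporting job.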

Regards
Krish



[ceph-users] ceph 12.2.2 release date

2017-10-31 Thread Pavan, Krish
Do you have any approximate release date for 12.2.2?

Krish