[ceph-users] Checking cephfs compression is working

2018-11-16 Thread Rhian Resnick
How do you confirm that cephfs files and rados objects are being compressed? I don't see how in the docs.
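One way to check (a sketch, assuming BlueStore OSDs and a data pool named cephfs_data; substitute your own pool name and OSD ids):

    # confirm compression is actually enabled on the pool
    ceph osd pool get cephfs_data compression_mode
    ceph osd pool get cephfs_data compression_algorithm

    # on an OSD host, look at BlueStore's compression counters
    ceph daemon osd.0 perf dump | grep -E 'bluestore_compressed'
    # bluestore_compressed_original vs bluestore_compressed_allocated gives the achieved ratio

If the compressed counters stay at zero while new data is written, the pool is not compressing.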

Re: [ceph-users] Recover files from cephfs data pool

2018-11-05 Thread Rhian Resnick
You need to somehow replay the journal (I'm unsure whether the cephfs-data-scan tool operates on journal entries).
> On 6.11.2018, at 03:43, Rhian Resnick wrote:
> Workload is mixed. We ran a rados cppool to back up the metadata pool. So your thinking t

Re: [ceph-users] Recover files from cephfs data pool

2018-11-05 Thread Rhian Resnick
Use "rados export" in order to preserve omap data, then try truncating journals (along with the purge queue if supported by your ceph version), wiping the session table, and resetting the fs.
> On 6.11.2018, at 03:26, Rhian Resnick wrote:
> That was our original plan. So we
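For reference, a minimal sketch of that backup step (assuming the metadata pool is named cephfs_metadata; unlike "rados cppool", export/import also carries the omap data the MDS depends on):

    rados -p cephfs_metadata export /backup/cephfs_metadata.export
    # and, if the recovery goes wrong, it can be pushed back with:
    rados -p cephfs_metadata import /backup/cephfs_metadata.export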

Re: [ceph-users] Recover files from cephfs data pool

2018-11-05 Thread Rhian Resnick
doing "recover dentries", truncating the journal, and then > "fs reset". After that I was able to revert to single-active MDS and kept > on running for a year until it failed on 13.2.2 upgrade :)) > > > On 6.11.2018, at 03:18, Rhian Resnick wrote: > > Our

Re: [ceph-users] Recover files from cephfs data pool

2018-11-05 Thread Rhian Resnick
...trying to get MDS to start and back up valuable data instead of doing a long-running recovery?
> On 6.11.2018, at 02:59, Rhian Resnick wrote:
> Sounds like I get to have some fun tonight.
> On Mon, Nov 5, 2018, 6:39 PM Sergey Malinin
>> inode linkage (i.e. folder hie

[ceph-users] Recover files from cephfs data pool

2018-11-05 Thread Rhian Resnick
Does a tool exist to recover files from a cephfs data partition? We are rebuilding metadata but have a user who needs data asap.
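cephfs-data-scan can rebuild the whole tree, but if a single user needs one file urgently and its inode number is known, the data objects can be pulled straight out of the data pool (a sketch, assuming a pool named cephfs_data and the default 4 MB object size; data objects are named <inode-in-hex>.<block-index>):

    # example for inode 0x10000000000 - list its chunks, fetch them, stitch them together
    rados -p cephfs_data ls | grep '^10000000000\.' | sort > objects.txt
    while read obj; do
        rados -p cephfs_data get "$obj" "part.${obj##*.}"
    done < objects.txt
    cat part.* > recovered_file

Sparse regions and non-default layouts complicate this, so it is only a stop-gap while the metadata rebuild runs.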

Re: [ceph-users] speeding up ceph

2018-11-05 Thread Rhian Resnick
after disabling it.
> On 5.11.2018, at 17:32, Rhian Resnick wrote:
> We are running cephfs-data-scan to rebuild metadata. Would changing the cache tier mode of our cephfs data partition improve performance? If so what should we swi

[ceph-users] speeding up ceph

2018-11-05 Thread Rhian Resnick
We are running cephfs-data-scan to rebuild metadata. Would changing the cache tier mode of our cephfs data partition improve performance? If so what should we switch to? Thanks Rhian
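For context, the knob being discussed is the tier's cache-mode (a sketch, assuming a cache pool named cephfs_cache layered over cephfs_data; readproxy stops promoting objects into the cache while still serving what is already there):

    ceph osd pool ls detail | grep cache                 # see the current cache-mode
    ceph osd tier cache-mode cephfs_cache readproxy      # stop new promotions, proxy I/O to the base pool
    # to drain the tier completely before removing it:
    rados -p cephfs_cache cache-flush-evict-all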

Re: [ceph-users] cephfs-data-scan

2018-11-03 Thread Rhian Resnick
scan_extents & scan_inodes can be done in a few hours by running the tool on each OSD node, but scan_links will be painfully slow due to its single-threaded nature. In my case I ended up getting MDS to start and copied all data to a fresh filesystem, ignoring errors.
> On Nov 4,
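The parallelism comes from the worker flags; on each node it looks roughly like this (a sketch, assuming the data pool is named cephfs_data and 16 workers per phase; every worker of a phase must finish before the next phase starts):

    for n in $(seq 0 15); do
        cephfs-data-scan scan_extents --worker_n $n --worker_m 16 cephfs_data &
    done
    wait
    # repeat the same pattern for scan_inodes, then run the single-threaded scan_links once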

[ceph-users] cephfs-data-scan

2018-11-03 Thread Rhian Resnick
For a 150 TB file system with 40 million files, how many cephfs-data-scan threads should be used? Or what is the expected run time? (We have 160 OSDs with 4 TB disks.)

[ceph-users] Snapshot cephfs data pool from ceph cmd

2018-11-03 Thread Rhian Resnick
Is it possible to snapshot the cephfs data pool?
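For what it's worth, the generic pool-snapshot command is below (a sketch; note that CephFS pools use self-managed snapshots internally, so most releases will refuse a pool snapshot on a pool that belongs to a filesystem):

    ceph osd pool mksnap cephfs_data before-recovery    # likely rejected on a CephFS data pool
    ceph osd pool rmsnap cephfs_data before-recovery

A "rados export" of the pool, as discussed in the recovery thread above, is the more realistic way to get a point-in-time copy.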

Re: [ceph-users] cephfs-journal-tool event recover_dentries summary killed due to memory usage

2018-11-03 Thread Rhian Resnick
PM Rhian Resnick wrote:
> I was posting with my office account but I think it is being blocked.
> Our cephfs's metadata pool went from 1 GB to 1 TB in a matter of hours and, after using all storage on the OSDs, it reports two damaged ranks.
> The cephfs-journal-tool crashes w

Re: [ceph-users] cephfs-journal-tool event recover_dentries summary killed due to memory usage

2018-11-03 Thread Rhian Resnick
...the new metadata pool to the original filesystem? How do we remove the new cephfs so the original mounts work?

[ceph-users] cephfs-journal-tool event recover_dentries summary killed due to memory usage

2018-11-02 Thread Rhian Resnick
I was posting with my office account but I think it is being blocked. Our cephfs's metadata pool went from 1 GB to 1 TB in a matter of hours and, after using all storage on the OSDs, it reports two damaged ranks. The cephfs-journal-tool crashes when performing any operation due to memory utilization.
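With a journal that has grown that large, it may help to point the tool at one rank at a time and export it to a file instead of replaying events in memory (a sketch, assuming the filesystem is named cephfs and rank 0 is one of the damaged ranks):

    cephfs-journal-tool --rank=cephfs:0 journal inspect                      # integrity check without loading every event
    cephfs-journal-tool --rank=cephfs:0 journal export /backup/mds0.journal  # raw copy of the journal objects
    cephfs-journal-tool --rank=cephfs:0 event recover_dentries summary       # the step that was being killed for memory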

[ceph-users] Damaged MDS Ranks will not start / recover

2018-11-02 Thread Rhian Resnick
[osd.5]
host = ceph-storage2
crush_location = root=ssds host=ceph-storage2-ssd

[osd.68]
host = ceph-storage2
crush_location = root=ssds host=ceph-storage2-ssd

[osd.87]
host = ceph-storage2
crush_location = root=ssds host=ceph-storage2-ssd

Re: [ceph-users] Removing MDS

2018-11-02 Thread Rhian Resnick
journal reset --rank=4
12. cephfs-table-tool all reset session
13. Start metadata servers
14. Scrub mds:
    * ceph daemon mds.{hostname} scrub_path / recursive
    * ceph daemon mds.{hostname} scrub_path / force
15.

Re: [ceph-users] Removing MDS

2018-11-01 Thread Rhian Resnick
host=ceph-storage2-ssd

[osd.87]
host = ceph-storage2
crush_location = root=ssds host=ceph-storage2-ssd

Re: [ceph-users] Removing MDS

2018-10-30 Thread Rhian Resnick
That is what I thought. I am increasing debug to see where we are getting stuck. I am not sure if it is an issue deactivating or an rdlock issue. Thanks, if we discover more we will post a question with details.

[ceph-users] Removing MDS

2018-10-30 Thread Rhian Resnick
Evening, We are running into issues deactivating MDS ranks. Is there a way to safely force-remove a rank?
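For the record, the supported path is to deactivate rather than force-remove; the force options exist but can lose metadata (a sketch, assuming a filesystem named cephfs and a stuck rank 4 on a pre-Nautilus release):

    ceph mds deactivate cephfs:4                 # the normal way to retire a rank
    # only if the rank is already lost and max_mds has been lowered below it:
    ceph mds rmfailed 4 --yes-i-really-mean-it   # drops the rank from the failed set; anything in that rank's journal is abandoned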

Re: [ceph-users] Reducing Max_mds

2018-10-30 Thread Rhian Resnick
John, Thanks!

[ceph-users] Reducing Max_mds

2018-10-30 Thread Rhian Resnick
Evening, I am looking to decrease our max mds servers as we had a server failure and need to remove a node. When we attempt to decrease the number of mds servers from 5 to 4 (or any other number) they never transition to standby. They just stay active. ceph fs set cephfs max_mds X
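On the releases in use here (pre-Nautilus), lowering max_mds does not stop the extra ranks by itself; each rank numbered max_mds or higher has to be deactivated explicitly (a sketch, assuming the filesystem is named cephfs and you are going from 5 active ranks to 4):

    ceph fs set cephfs max_mds 4
    ceph mds deactivate cephfs:4      # repeat for every rank >= the new max_mds
    ceph fs get cephfs                # watch the rank leave the "up" map and the daemon return to standby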

Re: [ceph-users] Error Creating OSD

2018-04-14 Thread Rhian Resnick
You need to use the following commands to delete them:

vgdisplay                                  # find the bad volume groups
vgremove --select vg_uuid=<your uuid> -f   # -f forces it to be removed

Re: [ceph-users] Error Creating OSD

2018-04-14 Thread Rhian Resnick
    return vgs.get(vg_name=vg_name, vg_tags=vg_tags)
  File "/usr/lib/python2.7/site-packages/ceph_volume/api/lvm.py", line 429, in get
    raise MultipleVGsError(vg_name)
ceph_volume.exceptions.MultipleVGsError: Got more than 1 result looking for volume group: ceph-6a2e8f21-bca2-492b-8869-eecc9

[ceph-users] Error Creating OSD

2018-04-13 Thread Rhian Resnick
...ed osd.140
--> MultipleVGsError: Got more than 1 result looking for volume group: ceph-6a2e8f21-bca2-492b-8869-eecc995216cc
Any hints on what to do? This occurs when we attempt to create OSDs on this node.

[ceph-users] cephfs increase max file size

2017-08-04 Thread Rhian Resnick
Morning, We ran into an issue with the default max file size of a cephfs file. Is it possible to increase this value to 20 TB from 1 TB without recreating the file system?
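For reference, max_file_size is a live filesystem setting rather than part of the on-disk format, so it can be raised without recreating anything (a sketch, assuming the filesystem is named cephfs; the value is in bytes):

    ceph fs set cephfs max_file_size 21990232555520   # 20 TiB; the default is 1 TiB (1099511627776)
    ceph fs get cephfs | grep max_file_size           # confirm the new limit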

Re: [ceph-users] Inconsistent pgs with size_mismatch_oi

2017-07-03 Thread Rhian Resnick
I didn't see any guidance online on how to resolve the checksum error. Any hints?

Re: [ceph-users] Inconsistent pgs with size_mismatch_oi

2017-07-03 Thread Rhian Resnick
7f95a4be6700 -1 log_channel(cluster) log [ERR] : 1.15f repair 3 errors, 0 fixed

Is it possible this thread is related to the error we are seeing?
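A sketch of how one might dig further into that PG (assuming Jewel or later, where list-inconsistent-obj is available; pg 1.15f taken from the log line above):

    rados list-inconsistent-obj 1.15f --format=json-pretty   # shows which object/shard carries the size_mismatch_oi
    ceph pg repair 1.15f                                     # asks the primary to rewrite from the authoritative copy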

Re: [ceph-users] Odd latency numbers

2017-03-16 Thread Rhian Resnick
10  0.27219  osd.10  up  1.0  1.0
11  0.27219  osd.11  up  1.0  1.0

[ceph-users] Odd latency numbers

2017-03-15 Thread Rhian Resnick
71151M   209G  278G  [1,2,3,4,5,7,8,9,10,11]   32
779459M  201G  278G  [0,1,2,4,5,6,8,9,10,11]   40
923961M  112G  136G  [0,1,2,4,5,6,7,8,10,11]   21
8  104G  174G  278G  [0,1,2,3,4,5,7,9,10,11]   34
sum  970G  2231G  3202G

Re: [ceph-users] cephfs and erasure coding

2017-03-09 Thread Rhian Resnick
Thanks everyone for the input. We are online in our test environment and are running user workflows to make sure everything is running as expected. Rhian

[ceph-users] cephfs and erasure coding

2017-03-08 Thread Rhian Resnick
Two questions on Cephfs and erasure coding that Google couldn't answer.
1) How well does cephfs work with erasure coding?
2) How would you move an existing cephfs pool that uses replication to erasure coding?
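Roughly, the migration usually described looks like this (a sketch for Luminous or later, where EC overwrites exist, assuming a filesystem named cephfs and a new EC pool cephfs_data_ec; on the releases current when this was asked, an EC data pool instead needed a cache tier in front of it):

    ceph osd erasure-code-profile set ec42 k=4 m=2
    ceph osd pool create cephfs_data_ec 256 256 erasure ec42
    ceph osd pool set cephfs_data_ec allow_ec_overwrites true
    ceph fs add_data_pool cephfs cephfs_data_ec
    # new files under this directory land in the EC pool; existing files
    # must be copied (e.g. cp + rename) to pick up the new layout
    setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/archive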

Re: [ceph-users] Cephfs with large numbers of files per directory

2017-02-21 Thread Rhian Resnick
Logan, Thank you for the feedback.

[ceph-users] Cephfs with large numbers of files per directory

2017-02-21 Thread Rhian Resnick
had with large numbers of files in a single directory (500,000 - 5 million). We know that directory fragmentation will be required but are concerned about the stability of the implementation. Your opinions and suggestions are welcome. Thank you
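If it helps: on the releases current at the time, the MDS only splits large directories once fragmentation is switched on for the filesystem (a hedged sketch, assuming a filesystem named cephfs; from Luminous onward fragmentation is on by default and this flag is gone, and the exact flag name or required safety switch may differ by version):

    ceph fs set cephfs allow_dirfrags true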