How do you confirm that CephFS files and RADOS objects are actually being compressed?
I don't see how in the docs.
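One way to check, as a sketch: BlueStore keeps per-OSD compression counters that appear in a perf dump (`ceph daemon osd.N perf dump`, under the `bluestore` section). The parsing below assumes those field names (present in recent BlueStore releases); the sample numbers are made up.

```python
def compression_ratio(perf_dump):
    """Effective compression ratio from a BlueStore OSD perf dump.

    bluestore_compressed_original  = bytes before compression
    bluestore_compressed_allocated = bytes allocated on disk after
    A nonzero 'allocated' confirms compression is actually happening.
    """
    bs = perf_dump["bluestore"]
    allocated = bs["bluestore_compressed_allocated"]
    if allocated == 0:
        return None  # no compressed data on this OSD
    return bs["bluestore_compressed_original"] / float(allocated)

# Made-up sample in the shape 'ceph daemon osd.N perf dump' returns:
sample = {"bluestore": {"bluestore_compressed": 1 << 20,
                        "bluestore_compressed_allocated": 2 << 20,
                        "bluestore_compressed_original": 8 << 20}}
print(compression_ratio(sample))
```

Repeating this across OSDs (or grepping the perf dump for "compress") shows whether data in the pool is being compressed at all, and how well.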
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> need to somehow replay the journal (I'm unsure whether cephfs-data-scan
> tool operates on journal entries).
>
>
>
> On 6.11.2018, at 03:43, Rhian Resnick wrote:
>
> Workload is mixed.
>
> We ran a rados cpool to backup the metadata pool.
>
> So your thinking
> "rados export" in order to preserve omap data*, then try truncating
> journals (along with purge queue if supported by your ceph version), wiping
> session table, and resetting the fs.
>
>
> On 6.11.2018, at 03:26, Rhian Resnick wrote:
>
> That was our original plan. So we
> doing "recover dentries", truncating the journal, and then
> "fs reset". After that I was able to revert to single-active MDS and kept
> on running for a year until it failed on 13.2.2 upgrade :))
>
>
> On 6.11.2018, at 03:18, Rhian Resnick wrote:
>
> Our
> Is it worth trying to get MDS to
> start and backup valuable data instead of doing long running recovery?
>
>
> On 6.11.2018, at 02:59, Rhian Resnick wrote:
>
> Sounds like I get to have some fun tonight.
>
> On Mon, Nov 5, 2018, 6:39 PM Sergey Malinin
>> inode linkage (i.e. folder hierarchy)
Does a tool exist to recover files from a cephfs data partition? We are
rebuilding metadata but have a user who needs data asap.
after disabling it.
>
>
> > On 5.11.2018, at 17:32, Rhian Resnick wrote:
> >
> > We are running cephfs-data-scan to rebuild metadata. Would changing the
> cache tier mode of our cephfs data partition improve performance? If so
> what should we switch to?
We are running cephfs-data-scan to rebuild metadata. Would changing the
cache tier mode of our cephfs data partition improve performance? If so
what should we switch to?
Thanks
Rhian
> scan_extents & scan_inodes can be done in a few
> hours by running the tool on each OSD node, but scan_links will be
> painfully slow due to its single-threaded nature.
> In my case I ended up getting MDS to start and copied all data to a fresh
> filesystem ignoring errors.
> On Nov 4,
For a 150 TB file system with 40 million files, how many cephfs-data-scan
threads should be used? Or what is the expected run time? (We have 160 OSDs
with 4 TB disks.)
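For reference, the scan_extents and scan_inodes phases can be sharded across workers with the tool's --worker_n/--worker_m flags. A small sketch that generates one invocation per worker (the pool name and worker count are placeholders):

```python
def data_scan_commands(phase, data_pool, workers):
    """One cephfs-data-scan invocation per worker: worker i of m handles
    a disjoint slice of the pool's objects via --worker_n i --worker_m m.
    Only scan_extents and scan_inodes parallelize this way; scan_links
    runs single-threaded."""
    return ["cephfs-data-scan {} --worker_n {} --worker_m {} {}".format(
                phase, i, workers, data_pool)
            for i in range(workers)]

for cmd in data_scan_commands("scan_extents", "cephfs_data", 4):
    print(cmd)
```

Each generated command would be run in parallel (typically spread across the OSD nodes), and all workers of one phase must finish before the next phase starts.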
is it possible to snapshot the cephfs data pool?
Rhian Resnick wrote:
> I was posting with my office account but I think it is being blocked.
>
> Our cephfs's metadata pool went from 1 GB to 1 TB in a matter of hours, and
> after using all storage on the OSDs the filesystem reports two damaged ranks.
>
> The cephfs-journal-tool crashes when performing any operations due to
> memory utilization.
the new metadata pool to the original filesystem?
How do we remove the new cephfs so the original mounts work?
Rhian Resnick
Associate Director Research Computing
Enterprise Systems
Office of Information Technology
Florida Atlantic University
777 Glades Road, CM22, Rm 173B
Boca Raton, FL 33431
I was posting with my office account but I think it is being blocked.
Our cephfs's metadata pool went from 1 GB to 1 TB in a matter of hours, and
after using all storage on the OSDs the filesystem reports two damaged ranks.
The cephfs-journal-tool crashes when performing any operations due to
memory utilization.
[osd.5]
host = ceph-storage2
crush_location = root=ssds host=ceph-storage2-ssd
[osd.68]
host = ceph-storage2
crush_location = root=ssds host=ceph-storage2-ssd
[osd.87]
host = ceph-storage2
crush_location = root=ssds host=ceph-storage2-ssd
Rhian Resnick
11. cephfs-journal-tool journal reset --rank=4
12. cephfs-table-tool all reset session
13. Start metadata servers
14. Scrub mds:
* ceph daemon mds.{hostname} scrub_path / recursive
* ceph daemon mds.{hostname} scrub_path / force
Rhian Resnick
Rhian Resnick
That is what I thought. I am increasing debug to see where we are getting stuck.
I am not sure if it is an issue deactivating or an rdlock issue.
Thanks. If we discover more, we will post a question with details.
Rhian Resnick
Evening,
We are running into issues deactivating MDS ranks. Is there a way to safely
force-remove a rank?
Rhian Resnick
John,
Thanks!
Rhian Resnick
Evening,
I am looking to decrease our max MDS count, as we had a server failure and
need to remove a node.
When we attempt to decrease the number of MDS servers from 5 to 4 (or any other
number), they never transition to standby; they just stay active.
ceph fs set cephfs max_mds X
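On pre-Nautilus releases, lowering max_mds does not stop the surplus ranks by itself; each one has to be deactivated explicitly with `ceph mds deactivate`, highest rank first. A sketch of the command sequence (the filesystem name is assumed):

```python
def shrink_mds_commands(fs_name, current_ranks, target_max):
    """On pre-Nautilus releases, lowering max_mds leaves the extra
    ranks active; each surplus rank must then be deactivated
    explicitly, highest rank first, letting each one flush and stop
    before moving to the next."""
    cmds = ["ceph fs set {} max_mds {}".format(fs_name, target_max)]
    for rank in range(current_ranks - 1, target_max - 1, -1):
        cmds.append("ceph mds deactivate {}:{}".format(fs_name, rank))
    return cmds

print(shrink_mds_commands("cephfs", 5, 4))
```

(From Nautilus onward, reducing max_mds deactivates the extra ranks automatically and the deactivate command is gone.)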
You need to use the following commands to delete them:
vgdisplay  # find the bad volume groups
vgremove --select vg_uuid=<uuid> -f  # -f forces removal
Rhian Resnick
    return vgs.get(vg_name=vg_name, vg_tags=vg_tags)
  File "/usr/lib/python2.7/site-packages/ceph_volume/api/lvm.py", line 429, in get
    raise MultipleVGsError(vg_name)
ceph_volume.exceptions.MultipleVGsError: Got more than 1 result looking for
volume group: ceph-6a2e8f21-bca2-492b-8869-eecc995216cc
ed osd.140
--> MultipleVGsError: Got more than 1 result looking for volume group:
ceph-6a2e8f21-bca2-492b-8869-eecc995216cc
Any hints on what to do? This occurs when we attempt to create osd's on this
node.
Rhian Resnick
Morning,
We ran into an issue with the default max file size of a CephFS file. Is it
possible to increase this value from 1 TB to 20 TB without recreating the file
system?
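The limit is a runtime filesystem setting, changed with `ceph fs set <fs> max_file_size <bytes>`; no recreation needed. A sketch that builds the command (the filesystem name "cephfs" and the TiB interpretation of "TB" are assumptions):

```python
def max_file_size_command(fs_name, tib):
    """'ceph fs set <fs> max_file_size <bytes>' changes the limit at
    runtime. The value is given in bytes and only caps how large files
    are allowed to grow; existing data is untouched."""
    return "ceph fs set {} max_file_size {}".format(fs_name, tib * 2**40)

print(max_file_size_command("cephfs", 20))
```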
Rhian Resnick
I didn't see any guidance on how to resolve the checksum error online. Any
hints?
Rhian Resnick
7f95a4be6700 -1 log_channel(cluster) log [ERR] : 1.15f repair 3 errors, 0 fixed
Is it possible this thread is related to the error we are seeing?
Rhian Resnick
10 0.27219 osd.10 up 1.0 1.0
11 0.27219 osd.11 up 1.0 1.0
Rhian Resnick
71151M 209G 278G [1,2,3,4,5,7,8,9,10,11] 32
779459M 201G 278G [0,1,2,4,5,6,8,9,10,11] 40
923961M 112G 136G [0,1,2,4,5,6,7,8,10,11] 21
8 104G 174G 278G [0,1,2,3,4,5,7,9,10,11] 34
sum 970G 2231G 3202G
Rhian Resnick
Thanks everyone for the input. We are online in our test environment and are
running user workflows to make sure everything is running as expected.
Rhian
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Rhian
Resnick
Sent: Thursday, March 9, 2017 8:31 AM
To: Maxime
Two questions on CephFS and erasure coding that Google couldn't answer.
1) How well does CephFS work with erasure coding?
2) How would you move an existing CephFS pool that uses replication to erasure
coding?
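On question 2: a pool's replication/EC type cannot be changed in place. From Luminous onward the usual route is an EC pool with overwrites enabled, added as an extra CephFS data pool, with a directory layout pointing at it, then copying the data. A sketch of the sequence (pool, mount, and directory names are placeholders; pg counts are illustrative):

```python
def ec_migration_commands(fs_name, ec_pool, mount, subdir):
    """Sketch: migrate CephFS data to erasure coding (Luminous+).
    Create the EC pool, enable overwrites (required for CephFS/RBD on
    EC), attach it as an additional data pool, point a directory at it
    via a file layout, then copy data in; new files under that
    directory land in the EC pool."""
    return [
        "ceph osd pool create {} 128 128 erasure".format(ec_pool),
        "ceph osd pool set {} allow_ec_overwrites true".format(ec_pool),
        "ceph fs add_data_pool {} {}".format(fs_name, ec_pool),
        "setfattr -n ceph.dir.layout.pool -v {} {}/{}".format(
            ec_pool, mount, subdir),
    ]

for c in ec_migration_commands("cephfs", "cephfs_data_ec",
                               "/mnt/cephfs", "ec_data"):
    print(c)
```

The metadata pool must stay replicated; only data pools can be erasure coded.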
Rhian Resnick
Logan,
Thank you for the feedback.
Rhian Resnick
had with large numbers of files in a single
directory (500,000 - 5 million). We know that directory fragmentation will be
required but are concerned about the stability of the implementation.
Your opinions and suggestions are welcome.
Thank you
Rhian Resnick