[ceph-users] Re: Recovering from a Failed Disk (replication 1)

2019-10-16 Thread Ashley Merrick
I think you're better off doing the DD method. You can export and import a PG at a time (ceph-objectstore-tool), but if the disk is failing, a DD is probably your best method. On Thu, 17 Oct 2019 11:44:20 +0800 vladimir franciz blando wrote Sorry
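A rough sketch of both approaches mentioned above; the OSD ids, PG id and device paths are placeholders:

    # stop the OSDs involved, then move one PG at a time with ceph-objectstore-tool
    systemctl stop ceph-osd@12
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
        --op export --pgid 3.1f --file /backup/3.1f.export
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-45 \
        --op import --file /backup/3.1f.export

    # or clone the whole failing disk block for block, skipping unreadable sectors
    dd if=/dev/sdX of=/dev/sdY bs=1M conv=noerror,sync status=progress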

[ceph-users] Re: Recovering from a Failed Disk (replication 1)

2019-10-16 Thread vladimir franciz blando
Sorry for not being clear. When I say healthy disks, I mean those are already OSDs, so I need to transfer the data from the failed OSD to the other OSDs that are healthy. - Vlad On Thu, Oct 17, 2019 at 11:31 AM Konstantin Shalygin wrote: > > On 10/17/19 10:29 AM, vladimir franciz blando

[ceph-users] Recovering from a Failed Disk (replication 1)

2019-10-16 Thread vladimir franciz blando
Hi, I have a less than ideal setup on one of my clusters: 3 ceph nodes but using replication 1 on all pools (don't ask me why replication 1, it's a long story). It has come to the point where a disk keeps crashing, possibly a hardware failure, and I need to recover from that. What's my best

[ceph-users] Re: CephFS and 32-bit Inode Numbers

2019-10-16 Thread Yan, Zheng
On Mon, Oct 14, 2019 at 7:19 PM Dan van der Ster wrote: > > Hi all, > > One of our users has some 32-bit commercial software that they want to > use with CephFS, but it's not working because our inode numbers are > too large. E.g. his application gets a "file too big" error trying to > stat inode

[ceph-users] Re: RGW blocking on large objects

2019-10-16 Thread Robert LeBlanc
On Wed, Oct 16, 2019 at 2:50 PM Paul Emmerich wrote: > > On Wed, Oct 16, 2019 at 11:23 PM Robert LeBlanc wrote: > > > > On Tue, Oct 15, 2019 at 8:05 AM Robert LeBlanc wrote: > > > > > > On Mon, Oct 14, 2019 at 2:58 PM Paul Emmerich > > > wrote: > > > > > > > > Could the 4 GB GET limit

[ceph-users] Re: RGW blocking on large objects

2019-10-16 Thread Paul Emmerich
On Wed, Oct 16, 2019 at 11:23 PM Robert LeBlanc wrote: > > On Tue, Oct 15, 2019 at 8:05 AM Robert LeBlanc wrote: > > > > On Mon, Oct 14, 2019 at 2:58 PM Paul Emmerich > > wrote: > > > > > > Could the 4 GB GET limit saturate the connection from rgw to Ceph? > > > Simple to test: just rate-limit

[ceph-users] Re: Dealing with changing EC Rules with drive classifications

2019-10-16 Thread Paul Emmerich
On Wed, Oct 16, 2019 at 9:51 PM Robert LeBlanc wrote: > > On Wed, Oct 16, 2019 at 12:15 PM Jeremi Avenant wrote: >> >> Thanks for the reply Robert. >> >> Version: 12.2.12 Luminous, using ceph-ansible containerised. >> >> I was told that we're using a pre-luminous CRUSH format. I don't know if

[ceph-users] Re: RGW blocking on large objects

2019-10-16 Thread Robert LeBlanc
On Tue, Oct 15, 2019 at 8:05 AM Robert LeBlanc wrote: > > On Mon, Oct 14, 2019 at 2:58 PM Paul Emmerich wrote: > > > > Could the 4 GB GET limit saturate the connection from rgw to Ceph? > > Simple to test: just rate-limit the health check GET > > I don't think so, we have dual 25Gbps in a LAG, so
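For reference, a back-of-the-envelope check of that claim: a 4 GB object at a single 25 Gbit/s line rate takes on the order of a second, so one GET alone should not saturate the LAG.

    # rough transfer time: size_in_GB * 8 bits / link_speed_in_Gbit
    awk 'BEGIN { printf "%.1f seconds\n", 4 * 8 / 25 }'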

[ceph-users] Please help me understand this large omap object found message.

2019-10-16 Thread Robert LeBlanc
I've been searching around trying to learn about this, but it doesn't seem to be an index sharding problem, so I'm not sure how to approach it and I'm still new to RGW. This is what is in the cluster logs: Large omap object found. Object:
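A few hedged commands for narrowing this down; the index pool, object and bucket names below are placeholders, not the ones from the log:

    # how many omap keys does the reported object actually hold?
    rados -p default.rgw.buckets.index listomapkeys <object-name> | wc -l

    # are any buckets over the per-shard object limit?
    radosgw-admin bucket limit check

    # queue a manual reshard if needed
    radosgw-admin reshard add --bucket <bucket-name> --num-shards 64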

[ceph-users] Re: Dealing with changing EC Rules with drive classifications

2019-10-16 Thread Robert LeBlanc
On Wed, Oct 16, 2019 at 12:15 PM Jeremi Avenant wrote: > Thanks for the reply Robert. > > Version: 12.2.12 Luminous, using ceph-ansible containerised. > > I was told that we're using a pre-luminous CRUSH format. I don't know if > you can "migrate" or upgrade it to a Luminous based one? > Only
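For reference, a hedged sketch of how device classes and a class-aware EC rule look on Luminous; the class assignment, profile name, rule name and k/m values are made up:

    ceph osd crush class ls
    ceph osd crush set-device-class hdd osd.7
    ceph osd erasure-code-profile set ec-hdd k=4 m=2 crush-device-class=hdd
    ceph osd crush rule create-erasure ec-hdd-rule ec-hdd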

[ceph-users] Re: mix sata/sas same pool

2019-10-16 Thread Anthony D'Atri
I have clusters mixing them with no problems. At one point we bought a SAS model; when we don't have an exact local spare we replace dead ones with SATA. Some HBAs or backplanes, though, might not support mixing. Unlike crappy HBA RoC vols, Ceph doesn't care. > On Oct 16, 2019, at 1:39 PM,

[ceph-users] Re: mix sata/sas same pool

2019-10-16 Thread Nathan Fish
I would benchmark them both, and see how similar they are. On Wed, Oct 16, 2019 at 1:38 PM Frank R wrote: > > I have inherited a cluster where about 30% of the osds in a pool are 7200 > SAS. The other 70% are 7200 SATA. > > Should I look into creating 2 pools or will this likely not be a huge
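A hedged way to run that comparison with Ceph's built-in per-OSD benchmark (OSD ids are placeholders; by default it writes 1 GiB in 4 MiB blocks to the OSD):

    ceph tell osd.3 bench     # an OSD backed by a SAS drive
    ceph tell osd.17 bench    # an OSD backed by a SATA drive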

[ceph-users] mix sata/sas same pool

2019-10-16 Thread Frank R
I have inherited a cluster where about 30% of the osds in a pool are 7200 SAS. The other 70% are 7200 SATA. Should I look into creating 2 pools, or will this likely not be a huge deal?

[ceph-users] Re: Monitor unable to join existing cluster, stuck at probing

2019-10-16 Thread Paul Emmerich
This does sound like a network problem. Try increasing log levels for mon (debug_mon = 10/10) and maybe the messenger (debug_ms=5/5 or 10/10, very noisy, to see where it is stuck) Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH
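A hedged way to apply those levels on the stuck monitor (the mon id is a placeholder):

    # at runtime, via the admin socket on the node running the mon
    ceph daemon mon.mon3 config set debug_mon 10/10
    ceph daemon mon.mon3 config set debug_ms 5/5

    # or persistently in that node's ceph.conf
    [mon]
        debug mon = 10/10
        debug ms = 5/5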

[ceph-users] Re: File listing with browser

2019-10-16 Thread Janne Johansson
Den ons 16 okt. 2019 kl 15:43 skrev Daniel Gryniewicz : > S3 is not a browser friendly protocol. There isn't a way to get > user-friendly output via the browser alone, you need some form of > client that speaks the S3 REST protocol. The most commonly used one > by us is s3cmd, which is a

[ceph-users] Re: File listing with browser

2019-10-16 Thread Daniel Gryniewicz
S3 is not a browser friendly protocol. There isn't a way to get user-friendly output via the browser alone, you need some form of client that speaks the S3 REST protocol. The most commonly used one by us is s3cmd, which is a command line utility. A quick google search finds some web-based
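A minimal s3cmd session against an RGW endpoint might look like this (bucket and file names are placeholders; --configure asks for the endpoint and the user's access/secret keys):

    s3cmd --configure
    s3cmd mb s3://mybucket
    s3cmd put report.pdf s3://mybucket/
    s3cmd ls s3://mybucket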

[ceph-users] Occasionally ceph.dir.rctime is incorrect (14.2.4 nautilus)

2019-10-16 Thread Toby Darling
Hi, Occasionally a directory's ceph.dir.rctime isn't updated when it is copied (rsync'd) to ceph; it reads the same as the time from stat, not the time it was created on ceph. The cluster is all Scientific Linux release 7.7 with ceph 14.2.4 nautilus (stable); the problem happened with the kernel client
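A hedged way to compare the two values after an rsync run (mount point and directory are placeholders):

    # recursive ctime as seen by CephFS vs. the plain stat timestamps
    getfattr -n ceph.dir.rctime /mnt/cephfs/somedir
    stat /mnt/cephfs/somedir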

[ceph-users] Re: File listing with browser

2019-10-16 Thread rjaklic
How do I set up ceph so that a user can access objects from one pool from a browser?

[ceph-users] File listing with browser

2019-10-16 Thread Rok Jaklič
Hi, I installed the ceph object gateway and I have put one test object onto storage. I can see it with rados -p mytest ls. How do I set up ceph so that users can access (download, upload) files in this pool? Kind regards, Rok
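Objects in an RGW pool are normally reached through the gateway with S3 credentials rather than read from the pool directly; a hedged sketch of creating such a user (the uid and display name are made up):

    radosgw-admin user create --uid=rok --display-name="Rok Test"
    # note the access_key and secret_key in the output and feed them to an S3 client such as s3cmd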

[ceph-users] Monitor unable to join existing cluster, stuck at probing

2019-10-16 Thread msmit
Hi, I'm currently working on upgrading the existing monitors within my cluster. During the first deployment of this production cluster I made some choices that in hindsight were not the best. But it worked, I learned, and now I wish to remediate my previous bad choices. The cluster consists of
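As a first hedged check for the probing state, the admin socket on the new monitor shows the monmap it thinks it should join and which peers it can reach (the mon id is a placeholder):

    ceph daemon mon.newmon1 mon_status
    ceph mon stat    # run on a node that is in quorum, for comparison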

[ceph-users] MDS Crashes at “ceph fs volume v011”

2019-10-16 Thread Guilherme Geronimo
Dear ceph users, we're experiencing a segfault during MDS startup (replay process) which is making our FS inaccessible. MDS log messages: Oct 15 03:41:39.894584 mds1 ceph-mds: -472> 2019-10-15 00:40:30.201 7f3c08f49700 1 -- 192.168.8.195:6800/3181891717 <== osd.26
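When replay itself segfaults, a hedged starting point is to inspect the MDS journal; the filesystem name below is a placeholder, and exporting a backup before any recovery step is strongly advisable:

    cephfs-journal-tool --rank=myfs:0 journal inspect
    cephfs-journal-tool --rank=myfs:0 journal export /root/mds0-journal.bin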

[ceph-users] Re: CephFS and 32-bit Inode Numbers

2019-10-16 Thread Ingo Schmidt
This is not quite true. The number space of MD5 is much greater than 2³² (2¹²⁸ exactly), and as long as you don't exhaust this number space, a collision is roughly as likely as with any other input. There might be collisions, and the more data you have, i.e. the