Re: [ceph-users] Bug in OSD Maps

2017-05-25 Thread Gregory Farnum
On Thu, May 25, 2017 at 8:39 AM Stuart Harland <s.harl...@livelinktechnology.net> wrote: > Has no-one any idea about this? If needed I can produce more information > or diagnostics on request. I find it hard to believe that we are the only > people experiencing this, and thus far we have lost

[ceph-users] Multi-Tenancy: Network Isolation

2017-05-25 Thread Deepak Naidu
I am trying to understand how multi-tenancy can be, or has been, solved for network interfaces or isolation. I can run Ceph in a virtualized environment and achieve the isolation, but my question is more about a physical Ceph deployment. Is there a way we can have multiple
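
One building block that does exist on a physical deployment is splitting client-facing and replication traffic in ceph.conf; this is per-cluster rather than per-tenant isolation, but it shows the kind of network separation Ceph offers out of the box. The subnets below are placeholders (a sketch, not a recommendation for this setup):

    [global]
    # client-facing traffic (clients <-> MON/OSD/MDS)
    public network = 10.0.1.0/24
    # OSD-to-OSD replication and heartbeat traffic only
    cluster network = 10.0.2.0/24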

Re: [ceph-users] Upper limit of MONs and MDSs in a Cluster

2017-05-25 Thread Gregory Farnum
You absolutely cannot do this with your monitors -- as David says every node would have to participate in every monitor decision; the long tails would be horrifying and I expect it would collapse in ignominious defeat very quickly. Your MDSes should be fine since they are indeed just a bunch of
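
Gregory's warning follows from the monitor quorum rule: every map update has to be acknowledged by a strict majority of all monitors in the monmap, so adding monitors adds acknowledgements to every decision instead of spreading the load. A back-of-envelope illustration (plain Python, not Ceph code):

    # majority quorum for N monitors is floor(N/2) + 1
    for n in (3, 5, 7, 100):
        print("%d mons -> %d acks needed per map update" % (n, n // 2 + 1))
    # 3 -> 2, 5 -> 3, 7 -> 4, 100 -> 51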

Re: [ceph-users] Bug in OSD Maps

2017-05-25 Thread Stuart Harland
Has no-one any idea about this? If needed I can produce more information or diagnostics on request. I find it hard to believe that we are the only people experiencing this, and thus far we have lost about 40 OSDs to corruption due to this. Regards Stuart Harland > On 24 May 2017, at 10:32,

Re: [ceph-users] cephfs file size limit of 1.1TB?

2017-05-25 Thread Jake Grimmett
Hi John, Sorry, I'm not sure what the largest file is on our systems. We have lots of data sets that are ~8TB uncompressed; these typically compress 3:1, so if a user wants a single file we hit ~3TB. I'm rsyncing 360TB of data from an Isilon to CephFS; it'll be interesting to see how
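
For reference, the arithmetic behind that figure (a rough estimate assuming the 3:1 ratio holds across the whole data set):

    # an ~8 TB data set compressed 3:1
    print(8 / 3.0)   # ~2.67 TB for a single compressed file, well past the ~1 TiB default limit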

Re: [ceph-users] Upper limit of MONs and MDSs in a Cluster

2017-05-25 Thread David Turner
For the MDS, the primary doesn't hold state data that needs to be replayed to a standby; the information exists in the cluster. Your setup would be 1 active, 100 standby. If the active went down, one of the standbys would be promoted and would read the information from the cluster. With Mons, it's
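
A quick way to see that split on a running cluster is 'ceph mds stat', which reports the active rank and the standby count; the sample output below only illustrates the general format, which varies slightly between releases:

    ceph mds stat
    # e.g.  e42: 1/1/1 up {0=mds-a=up:active}, 2 up:standby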

[ceph-users] Upper limit of MONs and MDSs in a Cluster

2017-05-25 Thread Wes Dillingham
How much testing has there been, and what are the implications of having a large number of monitor and metadata daemons running in a cluster? Thus far I have deployed all of our Ceph clusters as a single service type per physical machine, but I am interested in a use case where we deploy

[ceph-users] Prometheus RADOSGW usage exporter

2017-05-25 Thread Berant Lemmenes
Hello all, I've created a Prometheus exporter that scrapes the RADOSGW Admin Ops API and exports the usage information for all users and buckets. This is my first Prometheus exporter, so if anyone has feedback I'd greatly appreciate it. I've tested it against Hammer, and will shortly test against
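
For anyone curious what such an exporter boils down to: the Admin Ops usage endpoint (GET /admin/usage) returns per-user, per-bucket counters that map naturally onto labelled Prometheus metrics. Below is a minimal independent sketch of that idea, not Berant's actual code; it assumes an admin user whose key has usage=read caps, the requests, requests-aws (awsauth) and prometheus_client packages, and JSON field names recalled from memory that may differ between releases. The endpoint, port and keys are placeholders.

    #!/usr/bin/env python
    # Minimal RGW usage -> Prometheus sketch (all endpoints/keys below are placeholders).
    import time
    import requests
    from awsauth import S3Auth                      # S3-style request signing
    from prometheus_client import start_http_server, Gauge

    RGW = 'rgw.example.com:7480'                    # hypothetical RGW endpoint
    ACCESS_KEY = 'ADMIN_ACCESS_KEY'
    SECRET_KEY = 'ADMIN_SECRET_KEY'

    ops = Gauge('radosgw_usage_ops', 'Operations per user/bucket/category',
                ['user', 'bucket', 'category'])
    bytes_sent = Gauge('radosgw_usage_bytes_sent', 'Bytes sent per user/bucket/category',
                       ['user', 'bucket', 'category'])

    def scrape():
        # /admin/usage lists usage entries per user, bucket and operation category
        r = requests.get('http://%s/admin/usage?format=json' % RGW,
                         auth=S3Auth(ACCESS_KEY, SECRET_KEY, service_url=RGW))
        r.raise_for_status()
        for entry in r.json().get('entries', []):
            for bucket in entry.get('buckets', []):
                for cat in bucket.get('categories', []):
                    labels = (entry['user'], bucket['bucket'], cat['category'])
                    ops.labels(*labels).set(cat['ops'])
                    bytes_sent.labels(*labels).set(cat['bytes_sent'])

    if __name__ == '__main__':
        start_http_server(9242)                     # arbitrary exporter port
        while True:
            scrape()
            time.sleep(60)

A production exporter would likely expose these as counters rather than gauges and collect on scrape rather than on a timer, but the shape of the data is the same.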

Re: [ceph-users] cephfs file size limit of 1.1TB?

2017-05-25 Thread John Spray
On Thu, May 25, 2017 at 2:14 PM, Ken Dreyer wrote: > On Wed, May 24, 2017 at 12:36 PM, John Spray wrote: >> >> CephFS has a configurable maximum file size, it's 1TB by default. >> >> Change it with: >> ceph fs set max_file_size > > How does this command
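
For completeness, the placeholders that the archive stripped from that command look roughly like this; the filesystem name and the byte value below are illustrative, not taken from the original mail:

    ceph fs set <fs name> max_file_size <size in bytes>
    # e.g. raise the cap to 4 TiB on a filesystem named "cephfs"
    ceph fs set cephfs max_file_size 4398046511104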

Re: [ceph-users] cephfs file size limit of 1.1TB?

2017-05-25 Thread Ken Dreyer
On Wed, May 24, 2017 at 12:36 PM, John Spray wrote: > > CephFS has a configurable maximum file size, it's 1TB by default. > > Change it with: > ceph fs set max_file_size How does this command relate to "ceph mds set max_file_size"? Is it different? I've put some of the