[ceph-users] SSD sizing for Bluestore

2018-11-12 Thread Brendan Moloney
Hi, I have been reading up on this a bit, and found one particularly useful mailing list thread [1]. The fact that there is such a large jump when your DB fits into 3 levels (30GB) vs 4 levels (300GB) makes it hard to choose SSDs of an appropriate size. My workload is all RBD, so objects shoul
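For reference, a minimal sketch of provisioning a block.db volume sized so the first three RocksDB levels (~30 GB) fit with some headroom, assuming an LVM volume group named ceph-db on the SSD and a data disk /dev/sdb (all names illustrative, not from the thread):

    # carve a ~60 GiB DB LV per OSD so levels L0-L3 (~30 GB) fit with headroom
    lvcreate -L 60G -n db-osd0 ceph-db
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db ceph-db/db-osd0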

Re: [ceph-users] searching mailing list archives

2018-11-12 Thread Marc Roos
This one I am using: https://www.mail-archive.com/ceph-users@lists.ceph.com/ On Nov 12, 2018 10:32 PM, Bryan Henderson wrote: > > Is it possible to search the mailing list archives? > > http://lists.ceph.com/pipermail/ceph-users-ceph.com/ > > seems to have a search function, but in my experien

[ceph-users] Ceph BoF at SC18

2018-11-12 Thread Douglas Fuller
Hi ceph-users, If you’re in Dallas for SC18, please join us for the Ceph Community BoF, Ceph Applications in HPC Environments. It’s tomorrow night, from 5:15-6:45PM Central. See below for all the details! https://sc18.supercomputing.org/presentation/?id=bof103&sess=sess364 Cheers, —Doug

[ceph-users] searching mailing list archives

2018-11-12 Thread Bryan Henderson
Is it possible to search the mailing list archives? http://lists.ceph.com/pipermail/ceph-users-ceph.com/ seems to have a search function, but in my experience never finds anything. -- Bryan Henderson San Jose, California

Re: [ceph-users] Ensure Hammer client compatibility

2018-11-12 Thread Kees Meijs
Hi again, I just read (and reread, and again) the chapter of Ceph Cookbook on upgrades and http://docs.ceph.com/docs/jewel/rados/operations/crush-map/#tunables and figured there's a way back if needed. The sortbitwise flag is set (re-peering was almost instant) and tunables to "hammer". There's a
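The flags mentioned above correspond to these commands (a sketch, assuming a Jewel-era cluster; only run them after reading the release upgrade notes):

    ceph osd set sortbitwise              # required before moving past Jewel
    ceph osd crush tunables hammer        # keep CRUSH compatible with Hammer clients
    ceph osd crush show-tunables          # verify the resulting profile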

[ceph-users] RGW and keystone integration requiring admin credentials

2018-11-12 Thread Ronnie Lazar
Hello, The documentation mentions that in order to integrate RGW with Keystone, we need to supply an admin user. We are using S3 APIs only and don't require OpenStack integration, except for Keystone. We can make authentication requests to Keystone without requiring an admin token (POST v3/s3tokens
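The documented setup does expect service credentials in ceph.conf; a minimal sketch of the usual Keystone v3 settings, with placeholder values:

    [client.rgw.gateway]
    rgw keystone url = http://keystone.example.com:5000
    rgw keystone api version = 3
    rgw keystone admin user = rgw-svc
    rgw keystone admin password = secret
    rgw keystone admin domain = default
    rgw keystone admin project = service
    rgw s3 auth use keystone = true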

Re: [ceph-users] Automated Deep Scrub always inconsistent

2018-11-12 Thread Ashley Merrick
Thanks, does look like it ticks all the boxes. As it’s been merged I’ll hold off till the next release rather than rebuilding from source. From what it seems it won’t cause an issue outside of just re-running the deep-scrub manually, which is what the fix is basically doing (but isolated to just the fai

Re: [ceph-users] Automated Deep Scrub always inconsistent

2018-11-12 Thread Jonas Jelten
Maybe you are hitting the kernel bug worked around by https://github.com/ceph/ceph/pull/23273 -- Jonas On 12/11/2018 16.39, Ashley Merrick wrote: > Is anyone else seeing this? > > I have just setup another cluster to check on completely different hardware > and everything running EC still. >

Re: [ceph-users] Automated Deep Scrub always inconsistent

2018-11-12 Thread Ashley Merrick
Is anyone else seeing this? I have just set up another cluster to check on completely different hardware and everything running EC still. And getting inconsistent PG’s flagged after an auto deep scrub, which can be fixed by just running another deep-scrub. On Thu, 8 Nov 2018 at 4:23 PM, Ashley Mer
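For anyone hitting the same thing, the manual workaround described in this thread amounts to (a sketch; <pgid> is the PG reported inconsistent):

    rados list-inconsistent-obj <pgid> --format=json-pretty   # inspect what the scrub flagged
    ceph pg deep-scrub <pgid>                                 # re-run the deep scrub on that PG
    ceph pg repair <pgid>                                     # only if the inconsistency persists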

Re: [ceph-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Dan van der Ster
Hi, Is it identical? In the places we use sync=disabled (e.g. analysis scratch areas), we're totally content with losing the last x seconds/minutes of writes, and understood that on-disk consistency is not impacted. Cheers, Dan On Mon, Nov 12, 2018 at 3:16 PM Kevin Olbrich wrote: > > Hi Dan, > > ZFS w

Re: [ceph-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Premysl Kouril
Yes, the access VM layer is there because of multi-tenancy - we need to provide parts of the storage to different private environments (which can potentially be on private IP addresses). And we need both - NFS as well as CIFS. On Mon, Nov 12, 2018 at 3:54 PM Ashley Merrick wrote: > Does your use cas

Re: [ceph-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Ashley Merrick
Does your use case mean you need something like NFS/CIFS and can’t use a CephFS mount directly? There have been quite a few advances in that area with quotas and user management in recent versions. But obviously it all depends on your use case at the client end. On Mon, 12 Nov 2018 at 10:51 PM, Premysl Kouril
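The quota and per-path user management mentioned here look roughly like this (a sketch, assuming a filesystem named cephfs and a client that honours quotas; names and sizes are illustrative):

    ceph fs authorize cephfs client.tenant1 /tenant1 rw                      # per-directory client caps
    setfattr -n ceph.quota.max_bytes -v 1099511627776 /mnt/cephfs/tenant1   # 1 TiB quota on that directory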

Re: [ceph-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Premysl Kouril
Some kind of single point will always be there I guess. Because even if we go with the distributed filesystem, it will be mounted on the access VM and this access VM will be providing NFS/CIFS protocol access. So this machine is a single point of failure (indeed we would be running two of them for ac

Re: [ceph-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Ashley Merrick
My 2 cents would be it depends how H/A you need to be. Going with the monster VM you have a single point of failure and a single point of network congestion. If you go the CephFS route you remove that single point of failure if you mount on clients directly. And can also remove that single point of networ

Re: [ceph-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Premysl Kouril
Hi Kevin, I should have also said that we are internally inclined towards the "monster VM" approach due to its seemingly simpler architecture (data distribution on the block layer rather than on the file system layer). So my original question is more about comparing the two approaches (distribution on block

Re: [ceph-users] Using Cephfs Snapshots in Luminous

2018-11-12 Thread Marc Roos
>> >> is anybody using cephfs with snapshots on luminous? Cephfs snapshots >> are declared stable in mimic, but I'd like to know about the risks >> using them on luminous. Do I risk a complete cephfs failure or just >> some not working snapshots? It is one namespace, one fs, one data and >>

Re: [ceph-users] Using Cephfs Snapshots in Luminous

2018-11-12 Thread Yan, Zheng
On Mon, Nov 12, 2018 at 3:53 PM Felix Stolte wrote: > > Hi folks, > > is anybody using cephfs with snapshots on luminous? Cephfs snapshots are > declared stable in mimic, but I'd like to know about the risks using > them on luminous. Do I risk a complete cephfs failure or just some not > working s
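For context, on Luminous snapshots are off by default and are enabled per filesystem; creating one is just a mkdir in the hidden .snap directory (a sketch, fs and directory names illustrative):

    ceph fs set cephfs allow_new_snaps true          # Luminous may also require --yes-i-really-mean-it
    mkdir /mnt/cephfs/mydir/.snap/before-upgrade     # create a snapshot of mydir
    rmdir /mnt/cephfs/mydir/.snap/before-upgrade     # remove it again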

Re: [ceph-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Kevin Olbrich
Hi Dan, ZFS without sync would be very much identical to ext2/ext4 without journals or XFS with barriers disabled. The ARC cache in ZFS is awesome but disabling sync on ZFS is a very high risk (using ext4 with kvm-mode unsafe would be similar I think). Also, ZFS only works as expected with schedu

Re: [ceph-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Dan van der Ster
We've done ZFS on RBD in a VM, exported via NFS, for a couple years. It's very stable and if your use-case permits you can set zfs sync=disabled to get very fast write performance that's tough to beat. But if you're building something new today and have *only* the NAS use-case then it would make b
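The sync=disabled tuning referenced here is a single ZFS property (a sketch; pool and dataset names are illustrative, and as noted it trades the last few seconds of writes for speed):

    zpool create tank /dev/rbd0            # ZFS pool on top of a mapped RBD image
    zfs create tank/scratch                # scratch dataset exported over NFS
    zfs set sync=disabled tank/scratch     # accept losing the last few seconds of writes
    zfs get sync tank/scratch              # confirm the setting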

Re: [ceph-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Kevin Olbrich
Hi! ZFS won't play nice on Ceph. Best would be to mount CephFS directly with the ceph-fuse driver on the endpoint. If you definitely want to put a storage gateway between the data and the compute nodes, then go with nfs-ganesha, which can export CephFS directly without a local ("proxy") mount. I had
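A minimal sketch of the nfs-ganesha approach mentioned here, using the Ceph FSAL so no local mount is needed (ganesha.conf excerpt; paths and the cephx user are placeholders):

    EXPORT {
        Export_ID = 1;
        Path = /;                  # path inside CephFS to export
        Pseudo = /cephfs;          # NFSv4 pseudo path seen by clients
        Access_Type = RW;
        FSAL {
            Name = CEPH;
            User_Id = "ganesha";   # cephx user with appropriate MDS/OSD caps
        }
    }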

Re: [ceph-users] Ceph Influx Plugin in luminous

2018-11-12 Thread Wido den Hollander
On 11/12/18 12:54 PM, mart.v wrote: > Hi, > > I'm trying to set up an Influx plugin > (http://docs.ceph.com/docs/mimic/mgr/influx/). The docs say that it > will be available in the Mimic release, but I can see it (and enable it) in > current Luminous. It seems that someone else actually used it in > Lu

[ceph-users] Ceph Influx Plugin in luminous

2018-11-12 Thread mart.v
Hi, I'm trying to set up an Influx plugin (http://docs.ceph.com/docs/mimic/mgr/influx/). The docs say that it will be available in the Mimic release, but I can see it (and enable it) in current Luminous. It seems that someone else actually used it in Luminous (http://lists.ceph.com/pipermail/ceph-us
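For what it's worth, enabling it on Luminous looks roughly like this (a sketch; on Luminous the mgr module reads its settings from config-key, and the hostname/database/credentials below are placeholders):

    ceph mgr module enable influx
    ceph config-key set mgr/influx/hostname influx.example.com
    ceph config-key set mgr/influx/database ceph
    ceph config-key set mgr/influx/username admin
    ceph config-key set mgr/influx/password secret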

[ceph-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Premysl Kouril
Hi, We are planning to build a NAS solution which will be primarily used via NFS and CIFS, with workloads ranging from various archival applications to more “real-time processing”. The NAS will not be used as block storage for virtual machines, so the access really will always be file oriented. We a

Re: [ceph-users] Effects of restoring a cluster's mon from an older backup

2018-11-12 Thread Hector Martin
On 10/11/2018 06:35, Gregory Farnum wrote: Yes, do that, don't try and back up your monitor. If you restore a monitor from backup then the monitor — your authoritative data source — will warp back in time on what the OSD peering intervals look like, which snapshots have been deleted and created

Re: [ceph-users] Ensure Hammer client compatibility

2018-11-12 Thread Kees Meijs
Hi list, Having finished our adventures with Infernalis we're now finally running Jewel (10.2.11) on all Ceph nodes. Woohoo! However, there are still KVM production boxes with block-rbd.so being linked to librados 0.94.10, which is Hammer. Current relevant status parts: health HEALTH_WA

Re: [ceph-users] I can't find the configuration of user connection log in RADOSGW

2018-11-12 Thread Janne Johansson
On Mon, 12 Nov 2018 at 06:19, 대무무 wrote: > > Hello. > I installed the Ceph framework on 6 servers and I want to manage the user access > log. So I configured ceph.conf on the server which is running the RGW. > > ceph.conf > [client.rgw.~~~] > ... > rgw enable usage log = True > > However, I c
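For reference, the usage log needs both the ceph.conf switch and a query through radosgw-admin; a minimal sketch (the rgw section name and uid are placeholders, and the rgw must be restarted after the config change):

    [client.rgw.gateway1]
    rgw enable usage log = true
    rgw usage log tick interval = 30
    rgw usage log flush threshold = 1024

    # after restarting the rgw, per-user usage can be queried with:
    radosgw-admin usage show --uid=testuser --show-log-entries=true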