[ceph-users] Near Perfect PG distribution apart from two OSDs

2020-01-09 Thread Ashley Merrick
Hey, I have a cluster of 30 OSDs with near-perfect distribution apart from two OSDs. I am running ceph version 14.2.6, but the behaviour has been the same on previous versions. I have the balancer module enabled in upmap mode and it reports no improvements; I have also tried crush-compat mode. ceph
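A few standard commands that help confirm what the balancer sees in a case like this (a minimal sketch; nothing here is specific to that cluster):

    # Per-OSD utilization and PG counts; the two outliers should stand out here
    ceph osd df tree
    # Current balancer state and mode (upmap needs all clients >= luminous)
    ceph balancer status
    ceph osd set-require-min-compat-client luminous
    # Score the current distribution without changing anything
    ceph balancer eval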

Re: [ceph-users] Looking for experience

2020-01-09 Thread Mainor Daly
Hi Stefan, before I give some suggestions, can you first describe the use case for which you want to use that setup, and which aspects are important for you? Stefan Priebe - Profihost AG <s.pri...@profihost.ag> wrote on 9 January 2020 at

Re: [ceph-users] RBD EC images for a ZFS pool

2020-01-09 Thread JC Lopez
Hi, you can actually specify the features you want at creation time, so there is no need to remove features afterwards. To illustrate Ilya’s message: rbd create rbd/test --size=128M --image-feature=layering,striping --stripe-count=8 --stripe-unit=4K The object size is hereby left to the
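A quick way to verify the result, assuming the rbd/test image from the command above:

    # Shows the enabled features plus stripe unit/count when fancy striping is on
    rbd info rbd/test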

Re: [ceph-users] RBD EC images for a ZFS pool

2020-01-09 Thread Kyriazis, George
On Jan 9, 2020, at 2:16 PM, Ilya Dryomov <idryo...@gmail.com> wrote: On Thu, Jan 9, 2020 at 2:52 PM Kyriazis, George <george.kyria...@intel.com> wrote: Hello ceph-users! My setup is that I’d like to use RBD images as a replication target of a FreeNAS zfs pool. I have a 2nd

Re: [ceph-users] Looking for experience

2020-01-09 Thread Ed Kalk
It sounds like an I/O bottleneck (either max IOPS or max throughput) in the making. If you are looking for cold-storage archival data only, then it may be OK (if it doesn't matter how long it takes to write the data). If this is production data with any sort of IOPS load or data change rate,

Re: [ceph-users] Looking for experience

2020-01-09 Thread Stefan Priebe - Profihost AG
As a starting point the current idea is to use something like: 4-6 nodes with 12x 12TB disks each, AMD EPYC 7302P 3GHz, 16C/32T, 128GB RAM. Things to discuss: EC or go with 3 replicas (we'll use bluestore with compression), and do we need something like Intel Optane for WAL / DB or
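If EC plus bluestore compression is the direction, a hedged sketch of the pool setup being discussed; the profile name, k/m values and compression algorithm below are illustrative assumptions, not a recommendation:

    # 4+2 EC profile with host as the failure domain
    ceph osd erasure-code-profile set backup-ec k=4 m=2 crush-failure-domain=host
    ceph osd pool create backup-data 1024 1024 erasure backup-ec
    # Bluestore compression on the data pool
    ceph osd pool set backup-data compression_mode aggressive
    ceph osd pool set backup-data compression_algorithm zstd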

Re: [ceph-users] RBD EC images for a ZFS pool

2020-01-09 Thread Ilya Dryomov
On Thu, Jan 9, 2020 at 2:52 PM Kyriazis, George wrote: > > Hello ceph-users! > > My setup is that I’d like to use RBD images as a replication target of a > FreeNAS zfs pool. I have a 2nd FreeNAS (in a VM) to act as a backup target > in which I mount the RBD image. All this (except the source
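For reference, the usual pattern for RBD on EC is a replicated pool for the image metadata plus an EC pool for the data; pool and image names below are just placeholders:

    # allow_ec_overwrites is required before RBD can use an EC pool for data
    ceph osd pool set ecpool allow_ec_overwrites true
    rbd create rbd/backup-image --size 10T --data-pool ecpool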

Re: [ceph-users] RBD EC images for a ZFS pool

2020-01-09 Thread Stefan Kooman
Quoting Kyriazis, George (george.kyria...@intel.com): > > Hmm, I meant you can use large block size for the large files and small > block size for the small files. > > Sure, but how do I do that? As far as I know block size is a property of the > pool, not a single file. recordsize:
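For what it's worth, recordsize in ZFS is a per-dataset property rather than a per-pool one, so something along these lines is possible (dataset names are made up):

    # Large records for the dataset holding big files
    zfs set recordsize=1M tank/backups/large
    # Smaller records for the dataset with many tiny files
    zfs set recordsize=16K tank/backups/small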

Re: [ceph-users] RBD EC images for a ZFS pool

2020-01-09 Thread Kyriazis, George
On Jan 9, 2020, at 9:27 AM, Stefan Kooman <ste...@bit.nl> wrote: Quoting Kyriazis, George (george.kyria...@intel.com): On Jan 9, 2020, at 8:00 AM, Stefan Kooman <ste...@bit.nl> wrote: Quoting Kyriazis, George

Re: [ceph-users] Looking for experience

2020-01-09 Thread Stefan Priebe - Profihost AG
> On 09.01.2020 at 16:10, Wido den Hollander wrote: > >  > >> On 1/9/20 2:27 PM, Stefan Priebe - Profihost AG wrote: >> Hi Wido, >>> On 09.01.20 at 14:18, Wido den Hollander wrote: >>> >>> >>> On 1/9/20 2:07 PM, Daniel Aberger - Profihost AG wrote: On 09.01.20 at 13:39,

Re: [ceph-users] RBD EC images for a ZFS pool

2020-01-09 Thread Stefan Kooman
Quoting Kyriazis, George (george.kyria...@intel.com): > > > > On Jan 9, 2020, at 8:00 AM, Stefan Kooman wrote: > > > > Quoting Kyriazis, George (george.kyria...@intel.com): > > > >> The source pool has mainly big files, but there are quite a few > >> smaller (<4KB) files that I’m afraid will

Re: [ceph-users] Looking for experience

2020-01-09 Thread Joachim Kraftmayer
I would try to scale horizontally with smaller Ceph nodes, so you have the advantage of being able to choose an EC profile that does not require too much overhead, and you can use failure domain host. Joachim On 09.01.2020 at 15:31, Wido den Hollander wrote: On 1/9/20 2:27 PM, Stefan
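A rough comparison of the raw-to-usable ratio behind that suggestion, using the 500 TByte target from the original post (simple arithmetic, ignoring the headroom Ceph needs for rebalancing):

    3x replication: 500 TB usable x 3.0   = 1500 TB raw
    EC 4+2:         500 TB usable x 1.5   =  750 TB raw
    EC 8+3:         500 TB usable x 1.375 ~  688 TB raw (needs at least 11 hosts for failure domain host)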

Re: [ceph-users] RBD EC images for a ZFS pool

2020-01-09 Thread Kyriazis, George
> On Jan 9, 2020, at 8:00 AM, Stefan Kooman wrote: > > Quoting Kyriazis, George (george.kyria...@intel.com): > >> The source pool has mainly big files, but there are quite a few >> smaller (<4KB) files that I’m afraid will create waste if I create the >> destination zpool with ashift > 12

Re: [ceph-users] Looking for experience

2020-01-09 Thread Wido den Hollander
On 1/9/20 2:27 PM, Stefan Priebe - Profihost AG wrote: > Hi Wido, > On 09.01.20 at 14:18, Wido den Hollander wrote: >> >> >> On 1/9/20 2:07 PM, Daniel Aberger - Profihost AG wrote: >>> >>> On 09.01.20 at 13:39, Janne Johansson wrote: I'm currently trying to work out a concept for

Re: [ceph-users] RBD EC images for a ZFS pool

2020-01-09 Thread Stefan Kooman
Quoting Kyriazis, George (george.kyria...@intel.com): > The source pool has mainly big files, but there are quite a few > smaller (<4KB) files that I’m afraid will create waste if I create the > destination zpool with ashift > 12 (>4K blocks). I am not sure, > though, if ZFS will actually write
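Ashift is fixed per vdev at pool creation time, so if small-file waste is the concern it has to be chosen up front; a minimal sketch, assuming the RBD image is mapped as /dev/rbd0:

    # ashift=12 -> 4 KiB sectors; smaller files still consume at least one block
    zpool create -o ashift=12 backuppool /dev/rbd0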

[ceph-users] RBD EC images for a ZFS pool

2020-01-09 Thread Kyriazis, George
Hello ceph-users! My setup is that I’d like to use RBD images as a replication target of a FreeNAS zfs pool. I have a 2nd FreeNAS (in a VM) to act as a backup target in which I mount the RBD image. All this (except the source FreeNAS server) is in Proxmox. Since I am using RBD as a backup
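A sketch of the replication flow being described, with all host, pool and snapshot names made up:

    # On the source FreeNAS box: incremental send of a snapshot to the backup VM
    zfs snapshot tank/data@2020-01-09
    zfs send -i tank/data@2020-01-08 tank/data@2020-01-09 | \
        ssh backup-freenas zfs receive backuppool/data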

Re: [ceph-users] monitor ghosted

2020-01-09 Thread Peter Eisch
As oddly as it drifted away, it came back. Next time, should there be a next time, I will snag logs as suggested by Sascha. The window for all this was, local time: 9:02 am - disassociated; 11:20 pm - associated. No changes were made; I did reboot the mon02 host at 1 pm. No other network or
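If it happens again, a few quick checks alongside the logs Sascha mentioned (standard commands; the mon.mon02 daemon name is an assumption based on the hostname above):

    # Which mons are in quorum right now, and which one is missing
    ceph quorum_status --format json-pretty
    ceph mon stat
    # Ask the affected mon directly via its admin socket (run on mon02 itself)
    ceph daemon mon.mon02 mon_status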

[ceph-users] OSD Marked down unable to restart continuously failing

2020-01-09 Thread Radhakrishnan2 S
Hello everyone, One of the 16 OSD nodes has 12 OSDs with an NVMe bcache; locally those OSD daemons seem to be up and running, while ceph osd tree shows them as down. Logs show that the OSDs have had IO stuck for over 4096 sec. I tried checking iostat, netstat, and ceph -w along with
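A couple of admin-socket checks that usually help narrow down where the IO is stuck (replace osd.0 with an affected OSD id):

    # On the affected node: which ops are blocked and for how long
    ceph daemon osd.0 dump_blocked_ops
    ceph daemon osd.0 dump_ops_in_flight
    # Cluster-side view of the down OSDs and the related health warnings
    ceph osd tree down
    ceph health detail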

Re: [ceph-users] Looking for experience

2020-01-09 Thread Stefan Priebe - Profihost AG
Hi Wido, On 09.01.20 at 14:18, Wido den Hollander wrote: > > > On 1/9/20 2:07 PM, Daniel Aberger - Profihost AG wrote: >> >> On 09.01.20 at 13:39, Janne Johansson wrote: >>> >>> I'm currently trying to work out a concept for a ceph cluster which can >>> be used as a target for backups

Re: [ceph-users] Looking for experience

2020-01-09 Thread Wido den Hollander
On 1/9/20 2:07 PM, Daniel Aberger - Profihost AG wrote: > > On 09.01.20 at 13:39, Janne Johansson wrote: >> >> I'm currently trying to work out a concept for a ceph cluster which can >> be used as a target for backups which satisfies the following >> requirements: >> >> -

Re: [ceph-users] Looking for experience

2020-01-09 Thread Daniel Aberger - Profihost AG
On 09.01.20 at 13:39, Janne Johansson wrote: > > I'm currently trying to work out a concept for a ceph cluster which can > be used as a target for backups which satisfies the following > requirements: > > - approx. write speed of 40.000 IOP/s and 2500 Mbyte/s > > > You might

Re: [ceph-users] Looking for experience

2020-01-09 Thread Janne Johansson
> > > I'm currently trying to workout a concept for a ceph cluster which can > be used as a target for backups which satisfies the following requirements: > > - approx. write speed of 40.000 IOP/s and 2500 Mbyte/s > You might need to have a large (at least non-1) number of writers to get to that
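One way to see the effect of writer count against an RBD-backed target, as a rough sketch (device path, block size, job count and runtime are arbitrary examples):

    # Many parallel writers against the same backing device; compare with --numjobs=1
    fio --name=backup-sim --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=4k --iodepth=32 --numjobs=8 --group_reporting \
        --runtime=60 --time_based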

[ceph-users] Looking for experience

2020-01-09 Thread Daniel Aberger - Profihost AG
Hello, I'm currently trying to work out a concept for a ceph cluster which can be used as a target for backups and which satisfies the following requirements: - approx. write speed of 40,000 IOPS and 2500 MByte/s - 500 TByte total available space Does anyone have experience with a ceph cluster
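Back-of-the-envelope numbers for those targets, assuming 3x replication and typical HDD figures (roughly 150 MB/s sequential and 150 random IOPS per spindle):

    2500 MB/s client writes x 3 replicas = 7500 MB/s on disk  -> ~50 HDDs for throughput alone
    40,000 write IOPS x 3 replicas = 120,000 disk IOPS        -> far beyond what spinning disks deliver
                                                                 at this scale; flash or at least NVMe
                                                                 WAL/DB is effectively implied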

Re: [ceph-users] Install specific version using ansible

2020-01-09 Thread Konstantin Shalygin
Hello all! I'm trying to install a specific version of luminous (12.2.4). In group_vars/all.yml I can specify the luminous release, but I didn't find a place where I can be more specific about the version. Ansible installs the latest version (12.2.12 at this time). I'm using
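ceph-ansible mostly selects the release (e.g. luminous) rather than a point release; one common workaround is to pin the package version on the hosts themselves, for example with an apt preferences file on Debian/Ubuntu (a sketch; the version string is the one from the question):

    # /etc/apt/preferences.d/ceph
    Package: ceph*
    Pin: version 12.2.4*
    Pin-Priority: 1001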

Re: [ceph-users] CRUSH rebalance all at once or host-by-host?

2020-01-09 Thread Stefan Kooman
Quoting Sean Matheny (s.math...@auckland.ac.nz): > I tested this out by setting norebalance and norecover, moving the host > buckets under the rack buckets (all of them), and then unsetting. Ceph starts > melting down with escalating slow requests, even with backfill and recovery > parameters
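For gradual data movement the usual throttling knobs look something like this (values are conservative examples, not tuned recommendations):

    ceph osd set norebalance
    # ... move the host buckets under the rack buckets in the CRUSH map ...
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-sleep 0.1'
    ceph osd unset norebalance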