[ceph-users] CEPH ISCSI LIO multipath change delay

2019-03-19 Thread li jerry
Hi all, I've deployed a mimic (13.2.5) cluster on 3 CentOS 7.6 servers, then configured an iscsi-target and created a LUN, referring to http://docs.ceph.com/docs/mimic/rbd/iscsi-target-cli/. I have another server running CentOS 7.4, on which I configured and mounted the LUN I've just created, referring to
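
For reference, failover delay on the initiator side is mostly governed by /etc/multipath.conf. A minimal device stanza in the spirit of the upstream Linux iSCSI initiator docs might look like the sketch below; the values are illustrative assumptions, not taken from the post:

    devices {
            device {
                    vendor                 "LIO-ORG"
                    hardware_handler       "1 alua"
                    path_grouping_policy   "failover"
                    path_selector          "queue-length 0"
                    path_checker           tur
                    prio                   alua
                    prio_args              exclusive_pref_bit
                    failback               60
                    fast_io_fail_tmo       25
                    no_path_retry          queue
            }
    }

fast_io_fail_tmo and no_path_retry are the settings that most directly affect how quickly I/O moves to another path; "multipath -ll" shows the current path states.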

Re: [ceph-users] SSD Recovery Settings

2019-03-19 Thread Konstantin Shalygin
I set up an SSD Luminous 12.2.11 cluster and realized after data had been added that pg_num was not set properly on the default.rgw.buckets.data pool (where all the data goes). I adjusted the settings upward, but recovery is going really slow (like 56-110 MiB/s), ticking down at .002 per log

Re: [ceph-users] CephFS: effects of using hard links

2019-03-19 Thread Gregory Farnum
On Tue, Mar 19, 2019 at 2:13 PM Erwin Bogaard wrote: > Hi, > > > > For a number of applications we use, there is a lot of file duplication. > This wastes precious storage space, which I would like to avoid. > > When using a local disk, I can use a hard link to let all duplicate files > point to

[ceph-users] SSD Recovery Settings

2019-03-19 Thread Brent Kennedy
I set up an SSD Luminous 12.2.11 cluster and realized after data had been added that pg_num was not set properly on the default.rgw.buckets.data pool (where all the data goes). I adjusted the settings upward, but recovery is going really slow (like 56-110 MiB/s), ticking down at .002 per log
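
When recovery after a pg_num bump crawls like this, the usual first step is to temporarily raise the recovery/backfill throttles and watch the effect. A sketch (values are illustrative only, not from the thread; remember to revert them once recovery settles):

    # raise backfill/recovery limits at runtime
    ceph tell 'osd.*' injectargs '--osd_max_backfills 4 --osd_recovery_max_active 8 --osd_recovery_sleep 0'

    # watch progress and confirm the pool's new pg_num
    ceph -s
    ceph osd pool get default.rgw.buckets.data pg_num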

Re: [ceph-users] v14.2.0 Nautilus released

2019-03-19 Thread Konstantin Shalygin
On 3/19/19 2:52 PM, Benjamin Cherian wrote: > Hi, > > I'm getting an error when trying to use the APT repo for Ubuntu bionic. > Does anyone else have this issue? Is the mirror sync actually still in > progress? Or was something set up incorrectly? > > E: Failed to fetch

Re: [ceph-users] leak memory when mount cephfs

2019-03-19 Thread Brad Hubbard
On Tue, Mar 19, 2019 at 7:54 PM Zhenshi Zhou wrote: > > Hi, > > I mount cephfs on my client servers. Some of the servers mount without any > error whereas others don't. > > The error: > # ceph-fuse -n client.kvm -m ceph.somedomain.com:6789 /mnt/kvm -r /kvm -d > 2019-03-19 17:03:29.136

Re: [ceph-users] Can CephFS Kernel Client Not Read & Write at the Same Time?

2019-03-19 Thread Andrew Richards
I don't think file locks are to blame. I tried to control for that in my tests; I was reading with fio from one set of files (multiple fio pids spawned from a single command) while writing with dd to an entirely different file using a different shell on the same host. So one CephFS kernel
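
For anyone wanting to reproduce the pattern described, it is roughly the following (paths and sizes are placeholders, not the original test):

    # shell 1: parallel readers over an existing file set on the CephFS mount
    fio --name=readers --directory=/mnt/cephfs/readset --rw=read --bs=4M \
        --size=2G --numjobs=4 --group_reporting

    # shell 2: a writer to a completely different file on the same mount
    dd if=/dev/zero of=/mnt/cephfs/writedir/bigfile bs=1M count=8192 oflag=direct

If the readers stall only while the writer runs, that is consistent with the poster's point that plain file locks are not the cause.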

Re: [ceph-users] v14.2.0 Nautilus released

2019-03-19 Thread Sarunas Burdulis
On 3/19/19 2:52 PM, Benjamin Cherian wrote: > Hi, > > I'm getting an error when trying to use the APT repo for Ubuntu bionic. > Does anyone else have this issue? Is the mirror sync actually still in > progress? Or was something set up incorrectly? > > E: Failed to fetch >

Re: [ceph-users] fio test rbd - single thread - qd1

2019-03-19 Thread jesper
> One thing you can check is the CPU performance (cpu governor in > particular). > On such light loads I've seen CPUs sitting in low performance mode (slower > clocks), giving MUCH worse performance results than when tried with > heavier > loads. Try "cpupower monitor" on OSD nodes in a loop and

Re: [ceph-users] v14.2.0 Nautilus released

2019-03-19 Thread Benjamin Cherian
Hi, I'm getting an error when trying to use the APT repo for Ubuntu bionic. Does anyone else have this issue? Is the mirror sync actually still in progress? Or was something set up incorrectly? E: Failed to fetch
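
For reference, the usual way to wire bionic up to the Nautilus repo (a sketch, assuming the standard download.ceph.com layout):

    wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
    echo deb https://download.ceph.com/debian-nautilus/ bionic main | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update

If apt still fails to fetch after that, the mirror sync mentioned above may simply not have finished for this distribution yet.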

Re: [ceph-users] ceph-volume lvm batch OSD replacement

2019-03-19 Thread Jan Fajerski
On Tue, Mar 19, 2019 at 02:17:56PM +0100, Dan van der Ster wrote: On Tue, Mar 19, 2019 at 1:05 PM Alfredo Deza wrote: On Tue, Mar 19, 2019 at 7:26 AM Dan van der Ster wrote: > > On Tue, Mar 19, 2019 at 12:17 PM Alfredo Deza wrote: > > > > On Tue, Mar 19, 2019 at 7:00 AM Alfredo Deza wrote:

Re: [ceph-users] Looking up buckets in multi-site radosgw configuration

2019-03-19 Thread Casey Bodley
On 3/19/19 12:05 AM, David Coles wrote: I'm looking at setting up a multi-site radosgw configuration where data is sharded over multiple clusters in a single physical location; and would like to understand how Ceph handles requests in this configuration. Looking through the radosgw source[1]
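
A few radosgw-admin calls that help inspect how buckets map onto zones and zonegroups in a layout like this (a sketch; the bucket name is a placeholder):

    radosgw-admin zonegroup list
    radosgw-admin zone list
    radosgw-admin period get                     # realm/zonegroup/zone topology currently in effect
    radosgw-admin metadata get bucket:mybucket   # bucket metadata, including the owning instance
    radosgw-admin bucket stats --bucket=mybucket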

Re: [ceph-users] fio test rbd - single thread - qd1

2019-03-19 Thread Piotr Dałek
One thing you can check is the CPU performance (cpu governor in particular). On such light loads I've seen CPUs sitting in low performance mode (slower clocks), giving MUCH worse performance results than when tried with heavier loads. Try "cpupower monitor" on OSD nodes in a loop and observe
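
A quick way to run that check, and to pin the governor for the duration of a test (assuming the cpupower utility from the kernel tools package is installed):

    # observe C-states and core frequencies while the benchmark runs
    watch -n1 cpupower monitor

    # inspect the current policy, then force the performance governor for the test
    cpupower frequency-info
    sudo cpupower frequency-set -g performance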

Re: [ceph-users] v14.2.0 Nautilus released

2019-03-19 Thread Sean Purdy
Hi, Will Debian packages be released? I don't see them in the nautilus repo. I thought that Nautilus was going to be Debian-friendly, unlike Mimic. Sean On Tue, 19 Mar 2019 14:58:41 +0100 Abhishek Lekshmanan wrote: > > We're glad to announce the first release of the Nautilus v14.2.0 stable

[ceph-users] fio test rbd - single thread - qd1

2019-03-19 Thread jesper
Hi All. I'm trying to get my head around where we can stretch our Ceph cluster and into which applications. Parallelism works excellently, but baseline throughput is - perhaps - not what I would expect it to be. Luminous cluster running bluestore - all OSD-daemons have 16GB of cache. Fio files
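
For context, a single-thread, queue-depth-1 fio run against an RBD image looks roughly like this (pool, image and client names are placeholders, not taken from the post):

    fio --name=qd1 --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg \
        --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based

At queue depth 1 the result is dominated by per-operation latency (network round trips plus OSD commit latency), so it will look far lower than parallel numbers from the same cluster.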

[ceph-users] v14.2.0 Nautilus released

2019-03-19 Thread Abhishek Lekshmanan
We're glad to announce the first release of the Nautilus v14.2.0 stable series. There have been a lot of changes across components from the previous Ceph releases, and we advise everyone to go through the release and upgrade notes carefully. The release also saw commits from over 300 contributors

Re: [ceph-users] ceph-volume lvm batch OSD replacement

2019-03-19 Thread Dan van der Ster
On Tue, Mar 19, 2019 at 1:05 PM Alfredo Deza wrote: > > On Tue, Mar 19, 2019 at 7:26 AM Dan van der Ster wrote: > > > > On Tue, Mar 19, 2019 at 12:17 PM Alfredo Deza wrote: > > > > > > On Tue, Mar 19, 2019 at 7:00 AM Alfredo Deza wrote: > > > > > > > > On Tue, Mar 19, 2019 at 6:47 AM Dan van

Re: [ceph-users] ceph-volume lvm batch OSD replacement

2019-03-19 Thread Alfredo Deza
On Tue, Mar 19, 2019 at 7:26 AM Dan van der Ster wrote: > > On Tue, Mar 19, 2019 at 12:17 PM Alfredo Deza wrote: > > > > On Tue, Mar 19, 2019 at 7:00 AM Alfredo Deza wrote: > > > > > > On Tue, Mar 19, 2019 at 6:47 AM Dan van der Ster > > > wrote: > > > > > > > > Hi all, > > > > > > > > We've

Re: [ceph-users] ceph-volume lvm batch OSD replacement

2019-03-19 Thread Dan van der Ster
On Tue, Mar 19, 2019 at 12:17 PM Alfredo Deza wrote: > > On Tue, Mar 19, 2019 at 7:00 AM Alfredo Deza wrote: > > > > On Tue, Mar 19, 2019 at 6:47 AM Dan van der Ster > > wrote: > > > > > > Hi all, > > > > > > We've just hit our first OSD replacement on a host created with > > > `ceph-volume

Re: [ceph-users] ceph-volume lvm batch OSD replacement

2019-03-19 Thread Alfredo Deza
On Tue, Mar 19, 2019 at 7:00 AM Alfredo Deza wrote: > > On Tue, Mar 19, 2019 at 6:47 AM Dan van der Ster wrote: > > > > Hi all, > > > > We've just hit our first OSD replacement on a host created with > > `ceph-volume lvm batch` with mixed hdds+ssds. > > > > The hdd /dev/sdq was prepared like

Re: [ceph-users] ceph-volume lvm batch OSD replacement

2019-03-19 Thread Alfredo Deza
On Tue, Mar 19, 2019 at 6:47 AM Dan van der Ster wrote: > > Hi all, > > We've just hit our first OSD replacement on a host created with > `ceph-volume lvm batch` with mixed hdds+ssds. > > The hdd /dev/sdq was prepared like this: ># ceph-volume lvm batch /dev/sd[m-r] /dev/sdac --yes > > Then

[ceph-users] ceph-volume lvm batch OSD replacement

2019-03-19 Thread Dan van der Ster
Hi all, We've just hit our first OSD replacement on a host created with `ceph-volume lvm batch` with mixed hdds+ssds. The hdd /dev/sdq was prepared like this: # ceph-volume lvm batch /dev/sd[m-r] /dev/sdac --yes Then /dev/sdq failed and was zapped like this: # ceph-volume lvm zap
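
For readers in the same spot, the general shape of replacing a single data disk on a batch-created host is sketched below; the db VG/LV name is hypothetical and has to be taken from "ceph-volume lvm list" on that host, and the exact flags vary between ceph-volume versions:

    # find which db LV on the shared SSD belonged to the failed OSD
    ceph-volume lvm list

    # wipe (but keep) the old db LV, and zap the replacement data disk
    ceph-volume lvm zap ceph-db-vg/db-for-osd17      # hypothetical vg/lv name
    ceph-volume lvm zap --destroy /dev/sdq

    # recreate the OSD with data on the new disk and block.db back on the SSD
    ceph-volume lvm create --data /dev/sdq --block.db ceph-db-vg/db-for-osd17

Treat this as an outline rather than a recipe; the exact steps depend on the Ceph version and on how the db LVs were laid out by batch.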

[ceph-users] leak memory when mount cephfs

2019-03-19 Thread Zhenshi Zhou
Hi, I mount cephfs on my client servers. Some of the servers mount without any error whereas others don't. The error: # ceph-fuse -n client.kvm -m ceph.somedomain.com:6789 /mnt/kvm -r /kvm -d 2019-03-19 17:03:29.136 7f8c80eddc80 -1 deliberately leaking some memory 2019-03-19 17:03:29.137
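
Two things worth checking when only some clients fail to mount: that the client key is present on those hosts and that its caps actually cover the -r path. A sketch, reusing the names from the command above (the extra debug flag is just to get more client-side logging):

    # verify the key and its caps as the cluster sees them
    ceph auth get client.kvm

    # re-run the mount in the foreground with verbose client logging
    ceph-fuse -n client.kvm -m ceph.somedomain.com:6789 /mnt/kvm -r /kvm -d --debug-client=20

The "deliberately leaking some memory" lines are, as far as I know, normal ceph-fuse startup output rather than the failure itself.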

[ceph-users] CephFS: effects of using hard links

2019-03-19 Thread Erwin Bogaard
Hi, For a number of applications we use, there is a lot of file duplication. This wastes precious storage space, which I would like to avoid. When using a local disk, I can use a hard link to let all duplicate files point to the same inode (use "rdfind", for example). As there isn't any
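
For reference, the local-disk approach mentioned above typically looks like this (the path is a placeholder):

    # dry run: only report which files would be linked
    rdfind -dryrun true /srv/appdata

    # replace duplicate files with hard links to a single inode
    rdfind -makehardlinks true /srv/appdata

The open question in the thread is whether doing the same on CephFS has any unwanted side effects.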