Re: [ceph-users] Upgrade Luminous to mimic on Ubuntu 18.04

2019-02-18 Thread Ketil Froyn
I think there may be something wrong with the apt repository for bionic, actually. Compare the packages available for Xenial: https://download.ceph.com/debian-luminous/dists/xenial/main/binary-amd64/Packages to the ones available for Bionic:
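
A quick way to check what each dist actually ships (a sketch, not from the original mail; the bionic URL is assumed to follow the same pattern as the xenial one) is to pull both Packages indexes and list package names and versions:

    curl -s https://download.ceph.com/debian-luminous/dists/xenial/main/binary-amd64/Packages \
      | grep -E '^(Package|Version):'
    curl -s https://download.ceph.com/debian-luminous/dists/bionic/main/binary-amd64/Packages \
      | grep -E '^(Package|Version):'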

Re: [ceph-users] Placing replaced disks to correct buckets.

2019-02-18 Thread Eugen Block
Hi, We skipped stage 1 and replaced the UUIDs of the old disks with the new ones in the policy.cfg. We ran salt '*' pillar.items and confirmed that the output was correct. It showed the new UUIDs in the correct places. Next we ran salt-run state.orch ceph.stage.3 PS: All of the above ran

Re: [ceph-users] Placing replaced disks to correct buckets.

2019-02-18 Thread John Molefe
Hi David, removal process/commands ran as follows:
#ceph osd crush reweight osd. 0
#ceph osd out
#systemctl stop ceph-osd@
#umount /var/lib/ceph/osd/ceph-
#ceph osd crush remove osd.
#ceph auth del osd.
#ceph osd rm
#ceph-disk zap /dev/sd??
Adding them back on: We skipped stage 1 and
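
For placing the replacements back under the right host bucket, a minimal sketch (the OSD id, weight and host name below are placeholders; check ceph osd tree for the real values):

    ceph osd tree                                  # confirm where the new OSD landed
    ceph osd crush add osd.12 7.27739 host=node01  # add it under its host bucket
    ceph osd crush set osd.12 7.27739 host=node01  # or re-place/reweight it if it already exists in the map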

Re: [ceph-users] [Bluestore] Some of my osd's uses BlueFS slow storage for db - why?

2019-02-18 Thread Konstantin Shalygin
On 2/18/19 9:43 PM, David Turner wrote: Do you have historical data from these OSDs to see when/if the DB used on osd.73 ever filled up?  To account for this OSD using the slow storage for DB, all we need to do is show that it filled up the fast DB at least once.  If that happened, then

Re: [ceph-users] Upgrade Luminous to mimic on Ubuntu 18.04

2019-02-18 Thread David Turner
Everybody is just confused that you don't have a newer version of Ceph available. Are you running `apt-get dist-upgrade` to upgrade ceph? Do you have any packages being held back? There is no reason that Ubuntu 18.04 shouldn't be able to upgrade to 12.2.11. On Mon, Feb 18, 2019, 4:38 PM Hello
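
A quick way to check for held-back packages and which version apt would actually install (a generic sketch, not from the original mail):

    apt-mark showhold              # list any packages pinned/held back
    apt-cache policy ceph-osd      # show the candidate version and which repo it comes from
    sudo apt-get update
    sudo apt-get dist-upgrade      # also upgrades packages whose dependencies changed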

Re: [ceph-users] Upgrade Luminous to mimic on Ubuntu 18.04

2019-02-18 Thread ceph
Hello people, On 11 February 2019 at 12:47:36 CET, c...@elchaka.de wrote: >Hello Ashley, > >On 9 February 2019 at 17:30:31 CET, Ashley Merrick wrote: >>What does the output of apt-get update look like on one of the nodes? >> >>You can just list the lines that mention CEPH >> > >... .. . >Get:6

Re: [ceph-users] ceph mon_data_size_warn limits for large cluster

2019-02-18 Thread Anthony D'Atri
On older releases, at least, inflated DBs correlated with miserable recovery performance and lots of slow requests. The DB and OSDs were also on HDD FWIW. A single drive failure would result in substantial RBD impact. > On Feb 18, 2019, at 3:28 AM, Dan van der Ster wrote: > > Not

Re: [ceph-users] Understanding EC properties for CephFS / small files.

2019-02-18 Thread Patrick Donnelly
Hello Jesper, On Sat, Feb 16, 2019 at 11:11 PM wrote: > > Hi List. > > I'm trying to understand the nuts and bolts of EC / CephFS > We're running an EC4+2 pool on top of 72 x 7.2K rpm 10TB drives. Pretty > slow bulk / archive storage. > > # getfattr -n ceph.dir.layout
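
For reference, the layout vxattrs can be read and set per directory like this (a sketch; the mount point, directory and pool name are made up):

    getfattr -n ceph.dir.layout /mnt/cephfs/archive
    # route new files created in this directory to the EC data pool:
    setfattr -n ceph.dir.layout.pool -v cephfs_ec42 /mnt/cephfs/archive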

Re: [ceph-users] IRC channels now require registered and identified users

2019-02-18 Thread David Turner
Is this still broken in the 1-way direction where Slack users' comments do not show up in IRC? That would explain why nothing I ever type (either helping someone or asking a question) ever gets a response. On Tue, Dec 18, 2018 at 6:50 AM Joao Eduardo Luis wrote: > On 12/18/2018

Re: [ceph-users] Doubts about parameter "osd sleep recovery"

2019-02-18 Thread Fabio Abreu
Hi Jean-Charles, I will validate this config in my laboratory and in production, and share the results here. Thanks. Regards, Fabio Abreu On Mon, Feb 18, 2019 at 3:18 PM Jean-Charles Lopez wrote: > Hi Fabio, > > have a look here: >

Re: [ceph-users] CephFS - read latency.

2019-02-18 Thread Patrick Donnelly
On Sun, Feb 17, 2019 at 9:51 PM wrote: > > > Probably not related to CephFS. Try to compare the latency you are > > seeing to the op_r_latency reported by the OSDs. > > > > The fast_read option on the pool can also help a lot for this IO pattern. > > Magic, that actually cut the read-latency in

Re: [ceph-users] Doubts about parameter "osd sleep recovery"

2019-02-18 Thread Jean-Charles Lopez
Hi Fabio, have a look here: https://github.com/ceph/ceph/blob/luminous/src/common/options.cc#L2355 It’s designed to relieve the pressure generated by the recovery and backfill on both the drives and the network as it
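
In Luminous the option exists per device class and can be changed at runtime; a sketch (the 0.1s value is just an example, not a recommendation from the thread):

    # persistent, in ceph.conf under [osd]:
    #   osd_recovery_sleep_hdd = 0.1
    # runtime only, lost on OSD restart:
    ceph tell osd.* injectargs '--osd_recovery_sleep_hdd 0.1'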

Re: [ceph-users] Ceph auth caps 'create rbd image' permission

2019-02-18 Thread Jason Dillaman
You could try something similar to what was described here [1]: mon 'profile rbd' osd 'allow class-read object_prefix rbd_children, allow r class-read object_prefix rbd_directory, allow r class-read object_prefix rbd_id.', allow rwx object_prefix rbd_header., allow rwx object_prefix rbd_data.,
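
Applied with ceph auth caps, that would look roughly like the following (the client name is hypothetical, and the osd caps string mirrors the one quoted above):

    ceph auth caps client.rbd-user mon 'profile rbd' \
      osd 'allow class-read object_prefix rbd_children, allow r class-read object_prefix rbd_directory, allow r class-read object_prefix rbd_id., allow rwx object_prefix rbd_header., allow rwx object_prefix rbd_data.'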

Re: [ceph-users] Some ceph config parameters default values

2019-02-18 Thread Neha Ojha
On Sat, Feb 16, 2019 at 12:44 PM Oliver Freyermuth wrote: > > Dear Cephalopodians, > > in some recent threads on this list, I have read about the "knobs": > > pglog_hardlimit (false by default, available at least with 12.2.11 and > 13.2.5) > bdev_enable_discard (false by default,
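
For context, the two knobs are applied quite differently; a sketch, assuming all OSDs already run 12.2.11 / 13.2.5:

    ceph osd set pglog_hardlimit          # cluster-wide flag, set once every OSD is new enough
    # bdev_enable_discard is a per-OSD option, e.g. in ceph.conf:
    #   [osd]
    #   bdev_enable_discard = true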

[ceph-users] Doubts about parameter "osd sleep recovery"

2019-02-18 Thread Fabio Abreu
Hi everybody! I am configuring my cluster to receive new disks and PGs, and after setting up the main standard configuration I looked at the parameter "osd sleep recovery" to use in the production environment, but I only found sparse documentation about this config. Does anyone have experience with this

Re: [ceph-users] Migrating a baremetal Ceph cluster into K8s + Rook

2019-02-18 Thread Marc Roos
Why not just keep it bare metal? Especially with future ceph upgrading/testing in mind. I am running CentOS 7 with Luminous and run libvirt on the nodes as well. If you configure them with a TLS/SSL connection, you can even nicely migrate a VM from one host/ceph node to the other. Next thing I

Re: [ceph-users] Fwd: NAS solution for CephFS

2019-02-18 Thread Jeff Layton
On Mon, 2019-02-18 at 17:02 +0100, Paul Emmerich wrote: > > > I've benchmarked a ~15% performance difference in IOPS between cache > > > expiration time of 0 and 10 when running fio on a single file from a > > > single client. > > > > > > > > > > NFS iops? I'd guess more READ ops in particular?
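
The cache expiration being discussed corresponds to the attribute cache timeout on the ganesha export; a hedged sketch of the relevant ganesha.conf fragment (export id and paths are made up):

    EXPORT {
        Export_ID = 100;
        Path = /;
        Pseudo = /cephfs;
        Access_Type = RW;
        Attr_Expiration_Time = 0;    # 0 disables ganesha's own attribute caching
        FSAL {
            Name = CEPH;
        }
    }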

[ceph-users] Migrating a baremetal Ceph cluster into K8s + Rook

2019-02-18 Thread David Turner
I'm getting some "new" (to me) hardware that I'm going to upgrade my home Ceph cluster with. Currently it's running a Proxmox cluster (Debian) which precludes me from upgrading to Mimic. I am thinking about taking the opportunity to convert most of my VMs into containers and migrate my cluster

Re: [ceph-users] Fwd: NAS solution for CephFS

2019-02-18 Thread Paul Emmerich
> > > > I've benchmarked a ~15% performance difference in IOPS between cache > > expiration time of 0 and 10 when running fio on a single file from a > > single client. > > > > > > NFS iops? I'd guess more READ ops in particular? Is that with a > FSAL_CEPH backend? Yes. But take that with a

Re: [ceph-users] CephFS: client hangs

2019-02-18 Thread Ashley Merrick
Correct, yes, from my experience the OSDs as well. On Mon, 18 Feb 2019 at 11:51 PM, Hennen, Christian < christian.hen...@uni-trier.de> wrote: > Hi! > > >mon_max_pg_per_osd = 400 > > > >In the ceph.conf and then restart all the services / or inject the config > >into the running admin > > I restarted all

Re: [ceph-users] CephFS: client hangs

2019-02-18 Thread Hennen, Christian
Hi! >mon_max_pg_per_osd = 400 > >In the ceph.conf and then restart all the services / or inject the config >into the running admin I restarted all MONs, but I assume the OSDs need to be restarted as well? > MDS show a client got evicted. Nothing else looks abnormal. Do new cephfs > clients
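
To see whether clients were actually evicted (and blacklisted), something along these lines can help (the MDS name is a placeholder):

    ceph osd blacklist ls                 # evicted CephFS clients usually show up here
    ceph daemon mds.<name> session ls     # on the active MDS host, list current client sessions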

Re: [ceph-users] Fwd: NAS solution for CephFS

2019-02-18 Thread Jeff Layton
On Mon, 2019-02-18 at 16:40 +0100, Paul Emmerich wrote: > > A call into libcephfs from ganesha to retrieve cached attributes is > > mostly just in-memory copies within the same process, so any performance > > overhead there is pretty minimal. If we need to go to the network to get > > the

[ceph-users] Intel P4600 3.2TB U.2 form factor NVMe firmware problems causing dead disks

2019-02-18 Thread David Turner
We have 2 clusters of [1] these disks that have 2 Bluestore OSDs per disk (partitioned), 3 disks per node, 5 nodes per cluster. The clusters are 12.2.4 running CephFS and RBDs. So in total we have 15 NVMe's per cluster and 30 NVMe's in total. They were all built at the same time and were

Re: [ceph-users] Fwd: NAS solution for CephFS

2019-02-18 Thread Paul Emmerich
> > A call into libcephfs from ganesha to retrieve cached attributes is > mostly just in-memory copies within the same process, so any performance > overhead there is pretty minimal. If we need to go to the network to get > the attributes, then that was a case where the cache should have been >

Re: [ceph-users] CephFS: client hangs

2019-02-18 Thread Yan, Zheng
On Mon, Feb 18, 2019 at 10:55 PM Hennen, Christian wrote: > > Dear Community, > > > > we are running a Ceph Luminous Cluster with CephFS (Bluestore OSDs). During > setup, we made the mistake of configuring the OSDs on RAID Volumes. Initially > our cluster consisted of 3 nodes, each housing 1

Re: [ceph-users] Placing replaced disks to correct buckets.

2019-02-18 Thread David Turner
Also what commands did you run to remove the failed HDDs and the commands you have so far run to add their replacements back in? On Sat, Feb 16, 2019 at 9:55 PM Konstantin Shalygin wrote: > I recently replaced failed HDDs and removed them from their respective > buckets as per procedure. > >

Re: [ceph-users] CephFS: client hangs

2019-02-18 Thread Ashley Merrick
I know this may sound simple. Have you tried raising the PG-per-OSD limit? I'm sure I have seen people in the past with the same kind of issue as you, and it was just I/O being blocked due to a limit but not actively logged. mon_max_pg_per_osd = 400 In the ceph.conf and then restart all the
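
A minimal sketch of the two ways to apply it (the value is the one from the message; the injectargs form is runtime-only and the mon id is a placeholder):

    # ceph.conf, [global] section, then restart mons/osds:
    #   mon_max_pg_per_osd = 400
    # or inject at runtime:
    ceph tell mon.<id> injectargs '--mon_max_pg_per_osd 400'
    ceph tell osd.* injectargs '--mon_max_pg_per_osd 400'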

[ceph-users] CephFS: client hangs

2019-02-18 Thread Hennen, Christian
Dear Community, we are running a Ceph Luminous Cluster with CephFS (Bluestore OSDs). During setup, we made the mistake of configuring the OSDs on RAID Volumes. Initially our cluster consisted of 3 nodes, each housing 1 OSD. Currently, we are in the process of remediating this. After a loss of

Re: [ceph-users] [Bluestore] Some of my osd's uses BlueFS slow storage for db - why?

2019-02-18 Thread David Turner
Do you have historical data from these OSDs to see when/if the DB used on osd.73 ever filled up? To account for this OSD using the slow storage for DB, all we need to do is show that it filled up the fast DB at least once. If that happened, then something spilled over to the slow storage and has
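
One way to check this on a running OSD is via the BlueFS perf counters (osd.73 taken from the thread; run on the host carrying that OSD):

    ceph daemon osd.73 perf dump bluefs
    # compare db_used_bytes against db_total_bytes, and check whether
    # slow_used_bytes is non-zero (i.e. the DB spilled over to the slow device)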

Re: [ceph-users] Problems with osd creation in Ubuntu 18.04, ceph 13.2.4-1bionic

2019-02-18 Thread Alfredo Deza
On Mon, Feb 18, 2019 at 2:46 AM Rainer Krienke wrote: > > Hello, > > thanks for your answer, but zapping the disk did not make any > difference. I still get the same error. Looking at the debug output I > found this error message that is probably the root of all trouble: > > # ceph-volume lvm
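
For reference, with ceph-volume the zap-and-recreate cycle usually looks like this (the device name is a placeholder; --destroy also removes leftover LVM metadata):

    ceph-volume lvm zap /dev/sdX --destroy
    ceph-volume lvm create --bluestore --data /dev/sdX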

[ceph-users] Setting rados_osd_op_timeout with RGW

2019-02-18 Thread Wido den Hollander
Hi, Has anybody ever tried or does know how safe it is to set 'rados_osd_op_timeout' in a RGW-only situation? Right now, if one PG becomes inactive or OSDs are super slow the RGW will start to block at some point since the RADOS operations will never time out. Using rados_osd_op_timeout you can
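
If set, it would go into the RGW client section of ceph.conf; a sketch with a made-up section name and timeout value:

    [client.rgw.gateway1]
    rados_osd_op_timeout = 30    # seconds; librados ops return an error instead of blocking forever
    rados_mon_op_timeout = 30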

Re: [ceph-users] ceph mon_data_size_warn limits for large cluster

2019-02-18 Thread M Ranga Swami Reddy
OK, sure, I will restart the ceph-mons (starting with the non-leader mons first, and the leader node last). On Mon, Feb 18, 2019 at 4:59 PM Dan van der Ster wrote: > > Not really. > > You should just restart your mons though -- if done one at a time it > has zero impact on your clients. > > -- dan > > > On
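
A rolling restart along those lines might look like this (the mon id is a placeholder; wait for quorum between restarts):

    ceph quorum_status | grep quorum_leader_name   # identify the current leader
    systemctl restart ceph-mon@<id>                # non-leaders first, one at a time
    ceph -s                                        # wait until the mon has rejoined quorum
    ceph tell mon.<id> compact                     # optionally compact an inflated mon store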

Re: [ceph-users] ceph mon_data_size_warn limits for large cluster

2019-02-18 Thread Dan van der Ster
Not really. You should just restart your mons though -- if done one at a time it has zero impact on your clients. -- dan On Mon, Feb 18, 2019 at 12:11 PM M Ranga Swami Reddy wrote: > > Hi Sage - If the mon data increases, is this impacts the ceph cluster > performance (ie on ceph osd bench,

Re: [ceph-users] ceph mon_data_size_warn limits for large cluster

2019-02-18 Thread Dan van der Ster
On Thu, Feb 14, 2019 at 2:31 PM Sage Weil wrote: > > On Thu, 7 Feb 2019, Dan van der Ster wrote: > > On Thu, Feb 7, 2019 at 12:17 PM M Ranga Swami Reddy > > wrote: > > > > > > Hi Dan, > > > >During backfilling scenarios, the mons keep old maps and grow quite > > > >quickly. So if you have

Re: [ceph-users] ceph mon_data_size_warn limits for large cluster

2019-02-18 Thread M Ranga Swami Reddy
Hi Sage - If the mon data increases, does this impact the ceph cluster performance (i.e. on ceph osd bench, etc.)? On Fri, Feb 15, 2019 at 3:13 PM M Ranga Swami Reddy wrote: > > today I again hit the warn with 30G also... > > On Thu, Feb 14, 2019 at 7:39 PM Sage Weil wrote: > > > > On Thu, 7 Feb

Re: [ceph-users] Bluestore increased disk usage

2019-02-18 Thread Jan Kasprzak
Jakub Jaszewski wrote: : Hi Yenya, : : I guess Ceph adds the size of all your data.db devices to the cluster : total used space. Jakub, thanks for the hint. The disk usage increase almost corresponds to that - I have added about 7.5 TB of data.db devices with the last batch of OSDs.

Re: [ceph-users] Understanding EC properties for CephFS / small files.

2019-02-18 Thread Paul Emmerich
Inline data is officially an experimental feature. I know of a production cluster that's running with inline data enabled, no problems so far (but it was only enabled two months ago or so). You can reduce the bluestore min alloc size; it's only 16kb for SSDs by default. But the main overhead will
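
For completeness, the relevant options and their Luminous/Mimic defaults (lowering them only affects OSDs created afterwards; the 4 KB value is just an example):

    [osd]
    bluestore_min_alloc_size_ssd = 4096     # default 16384 (16 KB)
    # bluestore_min_alloc_size_hdd defaults to 65536 (64 KB)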