[ceph-users] Zabbix sender issue

2021-06-04 Thread Bob Loi
Hi, I managed to build a Ceph cluster with the help of the cephadm tool. It works like a charm. I have a problem that I'm still not able to fix: I know that the zabbix-sender executable is not integrated into the cephadm image of ceph-mgr pulled and started by podman because of this choice.
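For anyone hitting the same wall, here is a rough sketch of how the zabbix mgr module is normally wired up, plus a check for whether zabbix_sender is present inside the running mgr container (the Zabbix host, identifier and container filter below are placeholders, not values from the original report, and this assumes a single mgr container on the host):

    # verify the binary exists in the mgr image
    podman exec $(podman ps -q --filter name=mgr) sh -c 'command -v zabbix_sender'
    # enable and configure the zabbix mgr module
    ceph mgr module enable zabbix
    ceph zabbix config-set zabbix_host zabbix.example.com
    ceph zabbix config-set identifier my-ceph-cluster
    # force an immediate send to test the path end to end
    ceph zabbix send

If the first command finds nothing, the module has no way to push data from inside that image, which matches the problem described above.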

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-04 Thread 胡 玮文
> On 4 Jun 2021, at 21:51, Eneko Lacunza wrote: > > Hi, > > We operate a few Ceph hyperconverged clusters with Proxmox, which provides a > custom Ceph package repository. They do great work, and deployment is a > breeze. > > So, even as currently we would rely on Proxmox packages/distribution and

[ceph-users] Re: SSD recommendations for RBD and VM's

2021-06-04 Thread huxia...@horebdata.cn
One could use an enterprise NVMe SSD (with PLP) as DB/WAL for those consumer SSDs. huxia...@horebdata.cn From: mj Date: 2021-06-04 11:23 To: ceph-users Subject: [ceph-users] Re: SSD recommendations for RBD and VM's Hi, On 5/30/21 8:45 PM, mhnx wrote: > Hello Samuel. Thanks for the answer. > >
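Just to illustrate the suggestion: with ceph-volume the RocksDB/WAL can be placed on a separate PLP-protected NVMe partition when the OSD is created. Device names below are placeholders:

    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1

With cephadm the same layout can be expressed through an OSD service spec that lists the NVMe under db_devices, so the orchestrator carves it up automatically.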

[ceph-users] Re: SSD recommendations for RBD and VM's

2021-06-04 Thread mhnx
I wonder: when an OSD comes back from a power loss, all the data gets scrubbed and there are 2 other copies. PLP is mostly important for block storage, and Ceph should easily recover from that situation. That's why I don't understand why I should pay more for PLP and other protections. In my use case 90%

[ceph-users] Re: SSD recommendations for RBD and VM's

2021-06-04 Thread mj
Hi, On 6/4/21 12:57 PM, mhnx wrote: I wonder: when an OSD comes back from a power loss, all the data gets scrubbed and there are 2 other copies. PLP is mostly important for block storage, and Ceph should easily recover from that situation. That's why I don't understand why I should pay more for PLP and

[ceph-users] Re: CRUSH rule for EC 6+2 on 6-node cluster

2021-06-04 Thread Fulvio Galeazzi
Hallo Dan, I am using Nautilus with a slightly outdated version, 14.2.16, and I don't remember playing with upmaps in the past. Following your suggestion, I removed a bunch of upmaps (the "longer" lines) and after a while I verified that all PGs are properly mapped. Thanks!
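For the archive, a short sketch of the commands involved (the PG id is a placeholder): upmap exceptions show up in the osdmap dump and can be dropped one entry at a time, after which the balancer/CRUSH mapping takes over again:

    # list current upmap exceptions
    ceph osd dump | grep pg_upmap_items
    # remove the exception for a single PG
    ceph osd rm-pg-upmap-items 86.2ff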

[ceph-users] Re: SSD recommendations for RBD and VM's

2021-06-04 Thread mj
Hi, On 5/30/21 8:45 PM, mhnx wrote: Hello Samuel. Thanks for the answer. Yes, the Intel S4510 series is a good choice but it's expensive. I have 21 servers and data distribution is quite good. At power loss I don't think I'll lose data. All the VMs use the same image and the rest is cookie. In

[ceph-users] Re: SSD recommendations for RBD and VM's

2021-06-04 Thread mhnx
"Plus how I understand it also: using SSDs with PLP also reduces latency, as the SSDs don't need to flush after each write." I didn't know that but it makes sense. I should dig into this. Thanks. mj , 4 Haz 2021 Cum, 14:24 tarihinde şunu yazdı: > > Hi, > > On 6/4/21 12:57 PM, mhnx wrote: > > I

[ceph-users] Re: Rolling upgrade model to new OS

2021-06-04 Thread Vladimir Sigunov
Hi Drew, I performed the upgrade from Nautilus (bare-metal deployment) -> Octopus (podman containerization) and RHEL-7 -> RHEL-8. Everything was done in-place. My sequence was: ceph osd noout/norebalance; shutdown/disable running services; perform full OS upgrade; install necessary software like
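For reference, a minimal sketch of the flag handling around each node; the OS upgrade and daemon redeployment steps in between are whatever your distribution and orchestrator require:

    ceph osd set noout
    ceph osd set norebalance
    systemctl stop ceph.target        # on the node being upgraded
    # ... in-place OS upgrade, reboot, reinstall/redeploy the Ceph daemons ...
    ceph osd unset norebalance
    ceph osd unset noout

Waiting for HEALTH_OK (apart from the flag warnings) before moving to the next node keeps the redundancy intact throughout.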

[ceph-users] Turning on "compression_algorithm" old pool with 500TB usage

2021-06-04 Thread mhnx
Hello. I have an erasure-coded pool and I didn't turn on compression at the beginning. Now I'm writing a new type of very small data and the overhead is becoming an issue. I'm thinking of turning on compression on the pool, but in most filesystems it will affect only the new data. What is the behavior in Ceph?
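As far as I know, BlueStore behaves the same way as those filesystems: compression is applied when data is written, so existing objects stay uncompressed until they are rewritten. A sketch of enabling it and watching the effect (the pool name is a placeholder):

    ceph osd pool set <pool> compression_algorithm snappy
    ceph osd pool set <pool> compression_mode aggressive
    ceph df detail        # the USED COMPR / UNDER COMPR columns show how much is actually compressed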

[ceph-users] Rolling upgrade model to new OS

2021-06-04 Thread Drew Weaver
Hello, I need to upgrade the OS that our Ceph cluster is running on to support new versions of Ceph. Has anyone devised a model for how you handle this? Do you just: install some new nodes with the new OS, install the old version of Ceph on the new nodes, add those nodes/OSDs to the cluster
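A hedged sketch of that "add new nodes, drain old ones" variant (the OSD id and device path are placeholders):

    # on a freshly installed node running the new OS, with the cluster's current Ceph release installed:
    ceph-volume lvm create --data /dev/sdX
    # then drain an old OSD gradually and remove it once it is empty:
    ceph osd crush reweight osd.12 0
    ceph osd out 12
    ceph osd purge 12 --yes-i-really-mean-it

The replies below describe the alternative of upgrading the OS in place on the existing nodes instead.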

[ceph-users] Creating a role in another tenant seems to be possible

2021-06-04 Thread Daniel Iwan
Hi It seems that with a command like this aws --profile=my-user-tenant1 --endpoint=$HOST_S3_API --region="" iam create-role --role-name="tenant2\$TemporaryRole" --assume-role-policy-document file://json/trust-policy-assume-role.json I can create a role in another tenant. The executing user has
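For context, role operations through the RGW IAM API are gated by the "roles" admin cap on the calling user; a sketch of how that cap is usually granted (the uid is a placeholder, and the cap as written is not scoped to a tenant):

    radosgw-admin caps add --uid="tenant1\$my-user" --caps="roles=*"

That cap is what allows iam create-role at all; whether (and how) it is scoped per tenant is exactly the open question in this thread.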

[ceph-users] Re: Creating a role in another tenant seems to be possible

2021-06-04 Thread Pritha Srivastava
On Fri, Jun 4, 2021 at 5:06 PM Daniel Iwan wrote: > Hi > > It seems that with a command like this > > aws --profile=my-user-tenant1 --endpoint=$HOST_S3_API --region="" iam > create-role --role-name="tenant2\$TemporaryRole" > --assume-role-policy-document file://json/trust-policy-assume-role.json >

[ceph-users] Re: Rolling upgrade model to new OS

2021-06-04 Thread Martin Verges
Hello Drew, our whole deployment and management solution is built on just replacing the OS whenever there is an update. We at croit.io even provide Debian and SUSE based OS images, and you can switch between them per host at any time. No problem. Just go and reinstall a node, install Ceph and the

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-04 Thread Marc
Do you use RBD images in containers that reside on OSD nodes? Does this give any problems? I used to have kernel-mounted CephFS on an OSD node; after a specific Luminous release this was giving me problems. > -Original Message- > From: Eneko Lacunza > Sent: Friday, 4 June 2021

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-04 Thread Eneko Lacunza
Hi, We operate a few Ceph hyperconverged clusters with Proxmox, which provides a custom Ceph package repository. They do great work, and deployment is a breeze. So, even as currently we would rely on Proxmox packages/distribution and not upstream, we have a number of other projects