[ceph-users] Re: iDRAC 9 version 6.10 shows 0% for write endurance on non-dell drives, work around?

2023-02-14 Thread Konstantin Shalygin
 Hi, you can use smartctl_exporter [1] for all your media, not only the SSDs. k [1] https://github.com/prometheus-community/smartctl_exporter
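A minimal sketch of wiring that up, assuming the exporter's default listen port (9633) and current metric naming, both of which may differ by version:

    # run the exporter on each storage node
    ./smartctl_exporter

    # spot-check the endurance metrics it exposes
    curl -s http://localhost:9633/metrics | grep -i -e percentage_used -e wear

Prometheus then scrapes each node's :9633 endpoint like any other exporter.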

[ceph-users] Announcing go-ceph v0.20.0

2023-02-14 Thread Sven Anderson
We are happy to announce another release of the go-ceph API library. This is a regular release following our every-two-months release cadence. https://github.com/ceph/go-ceph/releases/tag/v0.20.0 Changes include additions to the rbd, rgw and cephfs packages. More details are available at the …
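For module users, pulling the new release in is the usual one-liner:

    go get github.com/ceph/go-ceph@v0.20.0

Note that go-ceph binds to the native Ceph libraries via cgo, so the matching librados/librbd/libcephfs development packages must be installed to build against it.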

[ceph-users] Re: iDRAC 9 version 6.10 shows 0% for write endurance on non-dell drives, work around? [EXT]

2023-02-14 Thread Drew Weaver
That is pretty awesome; I will look into doing it that way. All of our monitoring is integrated with the very, very expensive iDRAC Enterprise license we pay for (my fault for trusting Dell). We are looking for a new hardware vendor, but this will likely work for the mistake we already made.

[ceph-users] Re: Missing object in bucket list

2023-02-14 Thread J. Eric Ivancich
A bug was reported recently where, if an object PUT occurs while bucket resharding is finishing up, the write goes to the old bucket shard rather than the new one. From your logs there is evidence that resharding was underway alongside the PUT. A fix for that bug is on main and pacific, and …
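A hedged sketch of commands for inspecting the affected bucket; spellings are from recent radosgw-admin releases and should be verified against yours:

    # show whether a reshard is in progress or pending for the bucket
    radosgw-admin reshard status --bucket=<bucket>

    # compare the bucket index against the actual objects, then repair it
    radosgw-admin bucket check --bucket=<bucket>
    radosgw-admin bucket check --fix --bucket=<bucket>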

[ceph-users] Re: iDRAC 9 version 6.10 shows 0% for write endurance on non-dell drives, work around? [EXT]

2023-02-14 Thread Dave Holland
On Tue, Feb 14, 2023 at 04:00:30PM +, Drew Weaver wrote: > What are you folks using to monitor your write endurance on your SSDs that you couldn't buy from Dell because they had a 16 week lead time while the MFG could deliver the drives in 3 days? Our Ceph servers are SuperMicro not …
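For a manual check, smartctl reports endurance regardless of vendor certification; the attribute name varies by drive model (a sketch, with example device paths):

    # NVMe: 'Percentage Used' appears in the SMART/Health section
    smartctl -a /dev/nvme0

    # SATA SSDs: look for vendor attributes such as Media_Wearout_Indicator
    # or Wear_Leveling_Count
    smartctl -A /dev/sda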

[ceph-users] iDRAC 9 version 6.10 shows 0% for write endurance on non-dell drives, work around?

2023-02-14 Thread Drew Weaver
Hello, After upgrading a lot of iDRAC9 modules to version 6.10 in servers that are involved in a Ceph cluster, we noticed that iDRAC9 shows the write endurance as 0% on any non-certified disk. OMSA still shows the correct remaining write endurance, but I am assuming that they are working …

[ceph-users] Re: Cephalocon 2023 Amsterdam Call For Proposals Extended to February 19!

2023-02-14 Thread Satoru Takeuchi
Hi Mike, I have two questions about Cephalocon 2023. 1. Will this event be held on-site only (no virtual platform)? 2. Will the session recordings be available on YouTube, as with other Ceph events? Thanks, Satoru

[ceph-users] Renaming a ceph node

2023-02-14 Thread Manuel Lausch
Hi, yes, you can rename a node without massive rebalancing. I tested the following with Pacific, but I think it should work with older versions as well. You need to rename the node in the crushmap between shutting it down under the old name and starting it with the new name. You only must …
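A minimal sketch of the rename itself, assuming the host shows up as a CRUSH bucket under its old name (run while the node is down):

    ceph osd crush rename-bucket <old-hostname> <new-hostname>

Since the bucket keeps its ID and the OSDs keep their weights and positions, the CRUSH mapping is unchanged and no data movement should result.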

[ceph-users] Re: Frequent calling monitor election

2023-02-14 Thread Frank Schilder
Hi Stefan, thanks for that hint. We use xfs on a dedicated RAID array for the MON stores. I'm not sure if I have seen elections caused by trimming; I will keep an eye on it. Best regards, Frank Schilder, AIT Risø Campus, Bygning 109, rum S14