Hello,
Ran into an interesting error today and I'm not sure of the best way to fix it.
When I run 'ceph orch device ls', I get the following error on every HD:
"Insufficient space (<10 extents) on vgs, LVM detected, locked".
Here's the output of 'ceph-volume lvm list', in case it helps:
=== osd.0 ===
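For reference, that rejection message usually means cephadm's disk probe found existing LVM volumes (or too little free space) on the drives, so it refuses to reuse them. A hedged sketch of how one might inspect and, only if the data is disposable, wipe such a drive; the host name 'ceph-node1' and device '/dev/sdb' are placeholders:

```shell
# Inspect the LVM state that may be blocking the disk
lsblk
pvs
vgs
lvs

# If the drive is safe to wipe, let cephadm clear it
# (this destroys all data on the device!)
ceph orch device zap ceph-node1 /dev/sdb --force

# Alternative, run directly on the host:
ceph-volume lvm zap --destroy /dev/sdb
```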
Do you have mon_compact_on_start set to true, and have you tried a mon restart?
Just a guess
Hth
Mehmet
On 27 June 2022 16:46:26 CEST, Wyll Ingersoll wrote:
>
>Running Ceph Pacific 16.2.7
>
>We have a very large cluster with 3 monitors. One of the monitor DBs is > 2x
>the size of the other 2 and is growing
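Mehmet's suggestion above can be sketched with standard ceph CLI commands; the mon id 'ceph01' is a placeholder:

```shell
# Check whether compaction on monitor startup is enabled
ceph config get mon mon_compact_on_start

# Enable it, then restart the affected monitor
ceph config set mon mon_compact_on_start true
ceph orch daemon restart mon.ceph01

# A one-off compaction can also be triggered without a restart
ceph tell mon.ceph01 compact
```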
Hello folks,
We're operating a small Ceph test cluster made up of 5 VMs: 1
monitor/manager, 3 OSDs and 1 RADOS Gateway, used for
ownCloud S3 external storage. This works almost fine.
We're planning to use RBD too, to provide a block device for a Linux server.
In order to do that, we installed ceph-common
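A minimal sketch of the usual RBD workflow after installing ceph-common on the client; the pool and image names ('rbd', 'disk1') and the mount point are placeholders:

```shell
# On the cluster: create and initialize a pool for RBD
ceph osd pool create rbd
rbd pool init rbd

# Create a 10 GiB image
rbd create rbd/disk1 --size 10G

# On the Linux client (needs ceph.conf and a keyring in /etc/ceph):
rbd map rbd/disk1            # exposes e.g. /dev/rbd0
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /mnt/disk1
```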
Dear Burkhard,
Thanks for the help, it also works when specified in the mount option.
Best
Robert
On Fri, Jul 8, 2022 at 11:39 AM Burkhard Linke <
burkhard.li...@computational.bio.uni-giessen.de> wrote:
> Hi,
>
> On 08.07.22 11:34, Robert Reihs wrote:
> > Hi,
> > I am very new to the ceph world, and
Hello,
Does the MGR node have an "_admin" label on it?
Thanks,
- Adam King
On Fri, Jul 8, 2022 at 4:23 AM E Taka <0eta...@gmail.com> wrote:
> Hi,
>
> since updating to 17.2.1, we get the following message 5 – 10 times per day:
>
> [WARN] CEPHADM_REFRESH_FAILED: failed to probe daemons or devices
>
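For context on Adam's question: cephadm distributes the admin keyring and ceph.conf to hosts carrying the _admin label. A sketch for checking and adding it (the hostname 'mgr-host' is a placeholder):

```shell
# List hosts and their labels
ceph orch host ls

# Add the _admin label to the MGR node
ceph orch host label add mgr-host _admin
```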
Hi,
On 08.07.22 11:34, Robert Reihs wrote:
Hi,
I am very new to the ceph world, and working on setting up a cluster. We
have two cephfs filesystems (slow and fast); everything is running and
showing up in the dashboard. I can mount one of the filesystems (it mounts
it as the default). How can I specify the filesystem in the mount command?
Hi,
I am very new to the ceph world, and working on setting up a cluster. We
have two cephfs filesystems (slow and fast); everything is running and
showing up in the dashboard. I can mount one of the filesystems (it mounts
it as the default). How can I specify the filesystem in the mount command?
Ceph V
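The mount option Robert confirms above is the one that selects a filesystem; a sketch, assuming a filesystem named 'fast' (on kernels older than 5.14 the option is spelled mds_namespace= instead of fs=):

```shell
# Kernel client: pick the filesystem with the fs= option
mount -t ceph :/ /mnt/fast -o name=admin,fs=fast

# FUSE client equivalent
ceph-fuse /mnt/fast --client_fs fast
```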
On Fri, Jul 8, 2022 at 1:58 PM Santhosh Alugubelly
wrote:
>
> Hello ceph community,
>
> Our Ceph cluster is running version 16.2.4. The cluster is around 1
> year old. All of a sudden the MDS daemons went into a failed state, and
> it also shows that 1 filesystem is degraded. We tried to
Hello ceph community,
Our Ceph cluster is running version 16.2.4. The cluster is around 1
year old. All of a sudden the MDS daemons went into a failed state, and
it also shows that 1 filesystem is degraded. We tried to
restart the daemons; they then came back into the running state, and then with
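First-look diagnostics for a degraded filesystem with failed MDS daemons, sketched with standard commands:

```shell
# Overall health and the specific warnings
ceph health detail

# Per-filesystem MDS states (active/standby/failed)
ceph fs status
ceph mds stat

# Full MDS map, useful for spotting damaged or stopped ranks
ceph fs dump
```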
Hi,
since updating to 17.2.1, we get the following message 5 – 10 times per day:
[WARN] CEPHADM_REFRESH_FAILED: failed to probe daemons or devices
host cephXX `cephadm gather-facts` failed: Unable to reach remote
host cephXX.
(cephXX is not always the same node).
This status is cleared after one o
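A sketch of how one might narrow this down; 'cephXX' stands for whichever host the warning names:

```shell
# Verify cephadm's SSH connectivity to the host
ceph cephadm check-host cephXX

# Show the full warning text and the affected host
ceph health detail

# If it turns out to be a known transient (e.g. during backups),
# the warning can be muted for a while
ceph health mute CEPHADM_REFRESH_FAILED 1h
```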