On Fri, Sep 30, 2022 at 7:36 PM Filipe Mendes wrote:
>
> Hello!
>
>
> I'm considering switching my current storage solution to Ceph. Today we use
> iSCSI as the communication protocol, and we use several different hypervisors:
> VMware, Hyper-V, xcp-ng, etc.
Hi Filipe,
Ceph's main hypervisor
Hello!
I was reading that the current version of Ceph has discontinued iSCSI
support in favor of RBD or NVMe-oF.
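On Linux hosts (e.g. QEMU/KVM-based hypervisors) RBD can be consumed natively, so no
gateway is needed at all; a rough sketch, assuming a hypothetical pool "vms" and image
"vm-disk01":

  ceph osd pool create vms
  rbd pool init vms
  rbd create vms/vm-disk01 --size 100G
  rbd map vms/vm-disk01        # kernel RBD mapping on the hypervisor host

VMware and Hyper-V have no native RBD client, which is where the (deprecated) iSCSI
gateway or the newer NVMe-oF gateway comes into play.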
On Thu, Sep 29, 2022 at 12:40 PM Neha Ojha wrote:
>
>
>
> On Mon, Sep 19, 2022 at 9:38 AM Yuri Weinstein wrote:
>>
>> Update:
>>
>> Remaining =>
>> upgrade/octopus-x - Neha pls review/approve
>
>
> Both the failures in
>
Hello!
I have a client with Ubuntu 22.04 where I'd like to mount a CephFS
volume. I got the error: mount error 1 = Operation not permitted
When I run the mount command in verbose mode I see an extra field (key) in
the mount options. As far as I can see, this field has exactly the same value
as the name field. What's this
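The usual first suspect for "Operation not permitted" is the client key missing caps for
the filesystem; a minimal sketch, assuming a hypothetical filesystem "cephfs", client
"foo" and a monitor at 192.168.1.10:

  # on a node with admin access:
  ceph fs authorize cephfs client.foo / rw
  ceph auth get-key client.foo > /etc/ceph/foo.secret   # bare key only, no keyring wrapper
  # on the Ubuntu 22.04 client:
  mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=foo,secretfile=/etc/ceph/foo.secret

If the caps already look right, comparing the client's secret with 'ceph auth get
client.foo' on the cluster is a quick sanity check.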
Hi Dominique,
How do I check that using cephadm shell? I am new to cephadm :)
https://paste.opendev.org/show/b4egkEdAkCWSkT3VRyO9/
On Fri, Sep 30, 2022 at 6:20 AM Dominique Ramaekers <
dominique.ramaek...@cometal.be> wrote:
>
> Ceph.conf isn't available on that node/container.
>
> What happens if
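As a general pointer on the cephadm question: 'cephadm shell' starts a container that
already has the cluster's ceph.conf and admin keyring mounted, so checks can be run from
inside it; a small sketch (run on a host that holds the admin keyring):

  cephadm shell                      # interactive shell inside the container
  ceph -s                            # then use the normal CLI
  # or as a one-shot:
  cephadm shell -- ceph health detail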
Hi Alvaro,
I have seen these errors on every node, even on functional and working nodes, so
I am assuming it's not important: "ceph_volume.exceptions.ConfigurationError:
Unable to load expected Ceph config at: /etc/ceph/ceph.conf"
Maybe cephadm runs inside docker and that is why it's just giving this
warning.
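If a plain /etc/ceph/ceph.conf is wanted on the host anyway, just to silence that warning,
one option (a sketch, not a requirement) is to write out the minimal config that cephadm
manages:

  cephadm shell -- ceph config generate-minimal-conf > /etc/ceph/ceph.conf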
Hello all,
We have been trying to implement RDMA for the Ceph public + cluster network.
However, in terms of troubleshooting, we can't get any further than this. Do you
have any idea how I am supposed to dig in more to reveal the underlying issue?
Thanks and Regards,
ms_type=async+rdma
ms_cluster_type =
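For reference, the RDMA-related settings usually all sit in the global section; an
illustrative snippet (the device name is host-specific and only an example, check
ibv_devinfo for the real one):

  [global]
  ms_type = async+rdma
  ms_cluster_type = async+rdma
  ms_async_rdma_device_name = mlx5_0   # example RNIC, adjust per host

RDMA messengers also need the daemons to be allowed to lock memory (e.g.
LimitMEMLOCK=infinity in the systemd unit overrides), which is a common stumbling block
when the daemons fail to start.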
> I used to create Bluestore OSDs using commands such as this one:
>
> ceph-volume lvm create --bluestore --data ceph-block-50/block-50 --block.db
> ceph-db-50-54/db-50
> with the goal of having block.db and the WAL co-located on the same LV
> (ceph-db-50-54/db-5 in my example, which is on an SSD
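For context, a sketch of that workflow with the same (example) VG/LV names; note that
when only --block.db is given, BlueStore places the WAL on the same device as the DB, so
a separate --block.wal is not needed for co-location (the 60G size below is arbitrary):

  lvcreate -L 60G -n db-50 ceph-db-50-54
  ceph-volume lvm create --bluestore \
      --data ceph-block-50/block-50 \
      --block.db ceph-db-50-54/db-50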
What is your cluster status (ceph -s)? I assume that either your
cluster is not healthy or your CRUSH rules don't cover an OSD failure.
Sometimes it helps to fail the active mgr (ceph mgr fail). Can you
also share your 'ceph osd tree'? Do you use the default
replicated_rule or any
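For completeness, the checks mentioned above as a copy-paste list:

  ceph -s                      # overall health and recovery state
  ceph osd tree                # placement, weights, which OSDs are down/out
  ceph osd crush rule dump     # whether the rule in use can tolerate an OSD failure
  ceph mgr fail                # fail over the active mgr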
Hi Stefan,
Thanks for your feedback!
On Thu, Sep 29, 2022 at 10:28 AM Stefan Kooman wrote:
> On 9/26/22 18:04, Gauvain Pocentek wrote:
>
> >
> >
> > We are running a Ceph Octopus (15.2.16) cluster with similar
> > configuration. We have *a lot* of slow ops when starting OSDs. Also
> >
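For anyone digging into the slow-ops side, a few common starting points (osd.12 below is
just a placeholder for an affected OSD):

  ceph health detail                      # lists OSDs currently reporting slow ops
  ceph daemon osd.12 dump_ops_in_flight   # run on the host carrying osd.12
  ceph daemon osd.12 dump_historic_ops    # recently completed ops with durations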