Yes, I did.
The mentioned command works fine on the node where ceph-common is
installed.
# sudo mount -t ceph
:/volumes/hns/conf/2ee9c2d0-873b-4d04-8c46-4c0da02787b8 /mnt/imgs -o
name=foo,secret=AQABDzRkTaJCEhAAC7rC6E68ofwULnx6qX/VDA== -v
parsing options:
Thanks, I tried to change the pg and pgp numbers to a higher value, but the pg
count does not increase.
data:
pools: 8 pools, 1085 pgs
objects: 242.28M objects, 177 TiB
usage: 553 TiB used, 521 TiB / 1.0 PiB avail
pgs: 635281/726849381 objects degraded (0.087%)
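For reference, a minimal sketch of how the values can be bumped and checked afterwards (pool name and target count are placeholders; note that recent releases raise pg_num gradually, and the autoscaler can override a manual setting):
```
ceph osd pool set <pool> pg_num 2048
ceph osd pool set <pool> pgp_num 2048
ceph osd pool get <pool> pg_num
ceph osd pool autoscale-status
```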
Hi,
> > Matthias suggests to enable the write cache, you suggest to disable it... or am I
> > cache-confused?! ;-)
there were some discussions about write cache settings last year, e.g.
https://www.spinics.net/lists/ceph-users/msg73263.html
https://www.spinics.net/lists/ceph-users/msg69489.html
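For what it's worth, a sketch of how the volatile write cache can be checked and toggled with smartctl (the device name is a placeholder; re-run the benchmark after changing it):
```
smartctl -g wcache /dev/sdX       # show the current write cache setting
smartctl -s wcache,off /dev/sdX   # disable the volatile write cache
smartctl -s wcache,on /dev/sdX    # re-enable it
```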
That was all that it logged.
In the meantime I did some further tests. I've created a new erasure coded
datapool 'ecpool_test', and if I create a new rbd image with this data pool
I can create snapshots, but I can't create snapshots on either new or
existing images that use the existing data pool.
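For reference, roughly how the test pool and image were set up (a sketch, not the exact commands from my shell history; the image name and size are placeholders):
```
ceph osd pool create ecpool_test erasure
ceph osd pool set ecpool_test allow_ec_overwrites true
rbd create --size 10G --data-pool ecpool_test rbd/ceph-dev-test
rbd snap create rbd/ceph-dev-test@backup_test   # this works against the new data pool
```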
Hi,
After doing a 'radosgw-admin metadata sync init' on a secondary site in a
2-cluster multisite configuration (see my previous post on doing a Quincy
upgrade with STS roles manually synced between primary and secondary), and
letting it sync all metadata from the master zone, I get the following
On Mon, Apr 17, 2023 at 6:37 PM Reto Gysi wrote:
>
> Hi Ilya,
>
> Thanks for the reply. Here's the output:
>
> root@zephir:~# rbd status ceph-dev
> Watchers:
> watcher=192.168.1.1:0/338620854 client.19264246
> cookie=18446462598732840969
>
> root@zephir:~# rbd snap create
Hi Ilya,
Thanks for the reply. Here's the output:
root@zephir:~# rbd status ceph-dev
Watchers:
watcher=192.168.1.1:0/338620854 client.19264246
cookie=18446462598732840969
root@zephir:~# rbd snap create ceph-dev@backup --debug-ms 1 --debug-rbd 20
2023-04-17T18:23:16.211+0200
On Mon, Apr 17, 2023 at 2:01 PM Reto Gysi wrote:
>
> Dear Ceph Users,
>
> After upgrading from version 17.2.5 to 17.2.6 I no longer seem to be able
> to create snapshots
> of images that have an erasure coded datapool.
>
> root@zephir:~# rbd snap create ceph-dev@backup_20230417
> Creating snap:
I've just tried this on 17.2.6 and it worked fine
On 17/04/2023 12:57, Reto Gysi wrote:
Dear Ceph Users,
After upgrading from version 17.2.5 to 17.2.6 I no longer seem to be able
to create snapshots
of images that have an erasure coded datapool.
root@zephir:~# rbd snap create
We've hit a memory leak in the Manager Restful interface, in versions
17.2.5 & 17.2.6. On our main production cluster the active MGR grew to
about 60G until the oom_reaper killed it, causing a successful failover
and restart of the failed one. We can then see that the problem is
recurring,
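For anyone hitting the same thing, a sketch of the usual stopgap (failing over the active mgr, or disabling the restful module if nothing depends on it):
```
ceph mgr fail                      # fails over the active mgr to a standby
ceph mgr module disable restful    # only if the restful API is not needed
```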
when adding OSDs, the OSDs on the first host get created as expected, but
while creating OSDs on the second host the output gets weird; even when
adding each device separately, the output shows that `ceph orch` tries to
create multiple OSDs at once
```
root@test1:~# for xxx in j k l m; do ceph orch
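# (Hedged aside, not from the original post:) a single device can also be added
# explicitly per host, which should result in exactly one OSD per call:
ceph orch daemon add osd <host>:/dev/<device>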
Dear Ceph Users,
After upgrading from version 17.2.5 to 17.2.6 I no longer seem to be able
to create snapshots
of images that have an erasure coded datapool.
root@zephir:~# rbd snap create ceph-dev@backup_20230417
Creating snap: 10% complete...failed.
rbd: failed to create snapshot: (95)
Hi everyone, quick question regarding radosgw zone data-pool.
I’m currently planning to migrate an old data-pool that was created with
inappropriate failure-domain to a newly created pool with appropriate
failure-domain.
If I’m doing something like:
radosgw-admin zone modify --rgw-zone default
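(For context, a hedged sketch of how a zone's data pool is usually repointed; the zone and pool names are placeholders, and in a multisite setup the period has to be committed afterwards:)
```
radosgw-admin zone get --rgw-zone default > zone.json
# edit the data_pool entries under placement_pools in zone.json to the new pool
radosgw-admin zone set --rgw-zone default --infile zone.json
radosgw-admin period update --commit
```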
On 14.04.23 12:17, Lokendra Rathour wrote:
mount: /mnt/image: mount point does not exist.
Have you created the mount point?
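A minimal check, assuming the path from the error message:
```
sudo mkdir -p /mnt/image
```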
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Thanks for your responses.
Their names are exactly the same, and they also have exactly the same object
entries, such as tag, etag, mtime, etc.
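(For reference, this is roughly how the bucket metadata can be dumped and compared; the bucket name is a placeholder:)
```
radosgw-admin metadata get bucket:<bucket-name>
radosgw-admin bucket stats --bucket=<bucket-name>
```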
Best Regards,
Mahnoosh
On Sat, Apr 15, 2023 at 7:33 PM Ramin Najjarbashi <
ramin.najarba...@gmail.com> wrote:
> Hi,
>
> to verify if the bucket names are
Hi,
Thank you all for your help, I was able to fix the issue (in a dirty way, but
it worked). Here is a quick summary of the steps:
- create a CentOS 8 Stream VM (I took the cloudimg from
https://cloud.centos.org/centos/8-stream/x86_64/images/), to match what the
container is using
- git
On EL7 only Nautilus was present. Pacific was from EL8
k
> On 17 Apr 2023, at 11:29, Marc wrote:
>
>
> Is there ever going to be rpms in
>
> https://download.ceph.com/rpm-pacific/el7/
Hi,
thank you for your hint Burkhard!
For de.ceph.com I changed the sync source from eu to us (
download.ceph.com ).
So at least de.ceph.com should be in sync with the main source within
the next 24h.
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
Layer7 Networks
Dear Ceph-users,
we have trouble with a Ceph cluster after a full shutdown. A couple of OSDs
don't start anymore, exiting with SIGABRT very quickly. With debug logs and
lots of work (I find cephadm clusters hard to debug btw) we received the
following stack trace:
debug-16>
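(Aside, for others digging through cephadm-managed OSDs: a sketch of how the daemon logs can be pulled; the OSD id and fsid are placeholders:)
```
cephadm logs --name osd.<id>
# or directly via journald on the OSD host:
journalctl -u ceph-<fsid>@osd.<id>
```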
Is there ever going to be rpms in
https://download.ceph.com/rpm-pacific/el7/
The ones you are mentioning are missing even the pacific release, so maybe try
https://download.ceph.com/
>
>
> at least eu.ceph.com and de.ceph.com are lacking packages for the
> pacific release. All packages not starting with "c" (e.g. librbd, librados,
> radosgw) are missing.
>
Hi,
at least eu.ceph.com and de.ceph.com are lacking packages for the
pacific release. All packages not starting with "c" (e.g. librbd, librados,
radosgw) are missing.
Best regards,
Burkhard Linke
Hi Marco.
>> For your disk type I saw "volatile write cache available = yes" on "the
>> internet". This looks a bit odd, but maybe these HDDs do have some volatile
>> cache. Try to disable it with smartctl and do the benchmark again.
>
> Sorry, I'm a bit puzzled here.
>
> Matthias suggests to
Hi,
This is because of DNS. Something in userland has to provide the IP
addresses to the kernel.
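(As an illustration, not from the original mail: the kernel client can be given resolved monitor addresses directly, so no DNS lookup is needed in kernel space; the monitor IPs and secret file are placeholders:)
```
sudo mount -t ceph 10.0.0.1,10.0.0.2:/volumes/hns/conf/2ee9c2d0-873b-4d04-8c46-4c0da02787b8 /mnt/imgs \
  -o name=foo,secretfile=/etc/ceph/foo.secret
```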
k
Sent from my iPhone
> On 17 Apr 2023, at 05:56, Lokendra Rathour wrote:
>
> Hi Team,
> The mount at the client side should be independent of Ceph, but here in
> this case of DNS SRV-based