Ok, thanks Venky!
On Thu, Apr 20, 2023 at 6:12 AM Venky Shankar <vshan...@redhat.com> wrote:
> Hi Reto,
>
> …
Hi Reto,
On Wed, Apr 19, 2023 at 9:34 PM Ilya Dryomov wrote:
> …
Hi Ilya,
Ok, I've migrated the ceph-dev image to a separate ecpool for rbd and now
the backup works fine again.
root@zephir:~# umount /opt/ceph-dev
root@zephir:~# rbd unmap ceph-dev
root@zephir:~# rbd migration prepare --data-pool rbd_ecpool ceph-dev
root@zephir:~# rbd migration execute ceph-dev
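For completeness, a live migration is normally finalized with a commit once the
execute step has finished, before the image is remapped and mounted again; a
rough sketch (the /dev/rbd0 device name is an assumption, rbd map prints the
actual one):
root@zephir:~# rbd migration commit ceph-dev
root@zephir:~# rbd map ceph-dev
root@zephir:~# mount /dev/rbd0 /opt/ceph-dev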
On Wed, Apr 19, 2023 at 5:57 PM Reto Gysi wrote:
> …
Hi,
On Wed, Apr 19, 2023 at 11:02 AM Ilya Dryomov wrote:
> …
On Wed, Apr 19, 2023 at 10:29 AM Reto Gysi wrote:
> …
This is not …
Yes, I used the same ecpool_hdd also for cephfs file systems. The new pool
ecpool_test I created for a test; I also created it with application profile
'cephfs', but there isn't any cephfs filesystem attached to it.
root@zephir:~# ceph fs status
backups - 2 clients
===
RANK STATE …
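Which applications a pool is tagged for can be checked directly; a quick sketch
with the pool names from above (output format varies by release):
root@zephir:~# ceph osd pool application get ecpool_hdd
root@zephir:~# ceph osd pool application get ecpool_test
A missing tag could then be added with, for example:
root@zephir:~# ceph osd pool application enable ecpool_hdd rbd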
On Tue, Apr 18, 2023 at 11:34 PM Reto Gysi wrote:
> …
Hi Reto,
So "rbd snap
Ah, yes, indeed I had disabled log-to-stderr in the cluster-wide config.
root@zephir:~# rbd -p rbd snap create ceph-dev@backup --id admin --debug-ms 1 --debug-rbd 20 --log-to-stderr=true >/home/rgysi/log.txt 2>&1
root@zephir:~#
Here's the log.txt
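If it helps, the interesting lines can usually be grepped out of a debug log
like this one; the pattern below is only a guess at useful keywords:
root@zephir:~# grep -nE 'snap_create|error|EPERM' /home/rgysi/log.txt | head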
On Tue, Apr 18, 2023 at 6:36 PM Ilya Dryomov wrote: …
Hi Eugen,
Yes, I used the default setting of rbd_default_pool='rbd'. I don't have
anything set for default_data_pool.
root@zephir:~# ceph config show-with-defaults mon.zephir | grep -E "default(_data)*_pool"
osd_default_data_pool_replay_window 45
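The two rbd options can also be queried one by one; a minimal sketch, assuming
the centralized config database is what's in effect here:
root@zephir:~# ceph config get client rbd_default_pool
root@zephir:~# ceph config get client rbd_default_data_pool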
You don't seem to specify a pool name in the snap create command; does your
rbd_default_pool match the desired pool? And does rbd_default_data_pool match
what you expect (if those values are even set)? I've never used custom values
for those configs, but if you don't specify a pool …
On Tue, Apr 18, 2023 at 5:45 PM Reto Gysi wrote:
> …
You probably have custom log settings in the cluster-wide config. Please
append "--log-to-stderr true".
Hi Ilya,
Sure.
root@zephir:~# rbd snap create ceph-dev@backup --id admin --debug-ms 1 --debug-rbd 20 >/home/rgysi/log.txt 2>&1
root@zephir:~#
On Tue, Apr 18, 2023 at 4:19 PM Ilya Dryomov wrote:
> …
On Tue, Apr 18, 2023 at 3:21 PM Reto Gysi wrote:
> …
Hi,
Yes, both snap create commands were executed as user admin:
client.admin
caps: [mds] allow *
caps: [mgr] allow *
caps: [mon] allow *
caps: [osd] allow *
Deep scrubbing + repair of ecpool_hdd is still ongoing, but so far the
problem still exists.
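For reference, a caps listing like the one above can be re-checked with
ceph auth get, and if the deep scrub flags an inconsistent PG it can be
repaired individually (the pg id below is a placeholder):
root@zephir:~# ceph auth get client.admin
root@zephir:~# ceph pg repair 2.1f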
On Tue, Apr 18, 2023 …
Hi,
In the meantime I did some further tests. I've created a new erasure-coded
data pool 'ecpool_test', and if I create a new rbd image with this data pool
I can create snapshots, but I can't create snapshots on either new or
existing images with the existing data pool 'ecpool_hdd'.
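A test along those lines might look roughly like this (names and size are
illustrative; an EC pool needs overwrites enabled before rbd can use it as a
data pool):
root@zephir:~# ceph osd pool create ecpool_test erasure
root@zephir:~# ceph osd pool set ecpool_test allow_ec_overwrites true
root@zephir:~# rbd create --size 10G --data-pool ecpool_test rbd/test-img
root@zephir:~# rbd snap create rbd/test-img@s1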
Just one thought, …
That was all that it logged.
On Mon, Apr 17, 2023 at 6:37 PM Reto Gysi wrote:
> …
Hi Ilya,
Thanks for the reply. Here's the output:
root@zephir:~# rbd status ceph-dev
Watchers:
watcher=192.168.1.1:0/338620854 client.19264246 cookie=18446462598732840969
root@zephir:~# rbd snap create ceph-dev@backup --debug-ms 1 --debug-rbd 20
2023-04-17T18:23:16.211+0200 …
On Mon, Apr 17, 2023 at 2:01 PM Reto Gysi wrote:
> …
I've just tried this on 17.2.6 and it worked fine.
On 17/04/2023 12:57, Reto Gysi wrote:
Dear Ceph Users,
After upgrading from version 17.2.5 to 17.2.6 I no longer seem to be able to
create snapshots of images that have an erasure-coded data pool.
root@zephir:~# rbd snap create ceph-dev@backup_20230417
Creating snap: …