Re: [ceph-users] RBD journal feature

2018-08-17 Thread Jason Dillaman
On Fri, Aug 17, 2018 at 12:25 PM David Turner wrote:

> > I'll open a tracker ticket to fix the issue.
>
> Is there a tracker URL we can follow along with?
>

It's here: [1]


Re: [ceph-users] RBD journal feature

2018-08-17 Thread David Turner
> I'll open a tracker ticket to fix the issue.

Is there a tracker URL we can follow along with?

Re: [ceph-users] RBD journal feature

2018-08-16 Thread Glen Baars
Thanks for your help.
Kind regards,
Glen Baars

Re: [ceph-users] RBD journal feature

2018-08-16 Thread Jason Dillaman
On Thu, Aug 16, 2018 at 2:37 AM Glen Baars wrote:

> Is there any workaround that you can think of to correctly enable
> journaling on locked images?
>

You could add the "rbd journal pool = XYZ" configuration option to the
ceph.conf on the hosts currently using the images (or use 'rbd image-meta
set <image> conf_rbd_journal_pool SSDPOOL' on each image),
restart/live-migrate the affected VMs(?) to pick up the config changes, and
enable journaling.
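
Put together, the workaround would look roughly like this (a sketch only, reusing
the RBD_HDD / RBD_SSD pool names and the image ID quoted elsewhere in this thread):

$ rbd image-meta set RBD_HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a conf_rbd_journal_pool RBD_SSD
      # or add "rbd journal pool = RBD_SSD" under [client] in ceph.conf on the hypervisors,
      # then restart or live-migrate the VM so the lock-holding client re-reads the setting
$ rbd feature enable RBD_HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a journaling --journal-pool RBD_SSD
$ rbd journal info --pool RBD_HDD --image 2ef34a96-27e0-4ae7-9888-fd33c38f657a
      # should now report an "object_pool: RBD_SSD" line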


Re: [ceph-users] RBD journal feature

2018-08-16 Thread Glen Baars
Is there any workaround that you can think of to correctly enable journaling on 
locked images?
Kind regards,
Glen Baars

Re: [ceph-users] RBD journal feature

2018-08-14 Thread Glen Baars
Hello Jason,

Thanks for your help. Here is the output you asked for also.

https://pastebin.com/dKH6mpwk
Kind regards,
Glen Baars

Re: [ceph-users] RBD journal feature

2018-08-14 Thread Jason Dillaman
On Tue, Aug 14, 2018 at 9:31 AM Glen Baars wrote:

> Hello Jason,
>
>
>
> I have now narrowed it down.
>
>
>
> If the image has an exclusive lock – the journal doesn’t go on the correct
> pool.
>

OK, that makes sense. If you have an active client on the image holding the
lock, the request to enable journaling is sent over to that client but it's
missing all the journal options. I'll open a tracker ticket to fix the
issue.

Thanks.
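
For anyone wanting to confirm they are hitting this path, the lock state can be
checked first and the behaviour reproduced on a throwaway image while a client
holds the lock (a sketch, assuming an image RBD_HDD/test and an RBD_SSD journal pool):

$ rbd lock ls RBD_HDD/test        # shows any exclusive-lock holder
$ rbd status RBD_HDD/test         # shows active watchers, e.g. a running VM
$ rbd feature enable RBD_HDD/test journaling --journal-pool RBD_SSD
$ rbd journal info --pool RBD_HDD --image test
      # with an active lock holder, the expected "object_pool: RBD_SSD" line is missing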


Re: [ceph-users] RBD journal feature

2018-08-14 Thread Glen Baars
Hello Jason,

I have now narrowed it down.

If the image has an exclusive lock – the journal doesn’t go on the correct pool.
Kind regards,
Glen Baars

Re: [ceph-users] RBD journal feature

2018-08-14 Thread Jason Dillaman
On Tue, Aug 14, 2018 at 9:19 AM Glen Baars wrote:

> Hello Jason,
>
>
>
> I have tried with and without ‘rbd journal pool = rbd’ in the ceph.conf.
> it doesn’t seem to make a difference.
>

It should be SSDPOOL, but regardless, I am at a loss as to why it's not
working for you. You can try appending "--debug-rbd=20" to the end of the
"rbd feature enable" command and provide the generated logs in a pastebin
link.
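
Concretely, that would be something along these lines (a sketch; where the debug
output lands depends on the client's log settings, so the stderr redirect and the
--log-to-stderr override here are assumptions):

$ rbd feature enable RBD_HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a journaling \
      --journal-pool RBD_SSD --debug-rbd=20 --log-to-stderr=true 2> enable-journaling.log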


> Also, here is the output:
>
>
>
> rbd image-meta list RBD-HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a
>
> There are 0 metadata on this image.
>
> Kind regards,
>
> *Glen Baars*
>
>
>
> *From:* Jason Dillaman 
> *Sent:* Tuesday, 14 August 2018 9:00 PM
> *To:* Glen Baars 
> *Cc:* dillaman ; ceph-users <
> ceph-users@lists.ceph.com>
> *Subject:* Re: [ceph-users] RBD journal feature
>
>
>
> I tried w/ a rbd CLI from 12.2.7 and I still don't have an issue enabling
> journaling on a different pool:
>
>
>
> $ rbd info rbd/foo
>
> rbd image 'foo':
>
>size 1024 MB in 256 objects
>
>order 22 (4096 kB objects)
>
>block_name_prefix: rbd_data.101e6b8b4567
>
>format: 2
>
>features: layering, exclusive-lock, object-map, fast-diff,
> deep-flatten
>
>flags:
>
>create_timestamp: Tue Aug 14 08:51:19 2018
>
> $ rbd feature enable rbd/foo journaling --journal-pool rbd_ssd
>
> $ rbd journal info --pool rbd --image foo
>
> rbd journal '101e6b8b4567':
>
>header_oid: journal.101e6b8b4567
>
>object_oid_prefix: journal_data.1.101e6b8b4567.
>
>order: 24 (16384 kB objects)
>
>splay_width: 4
>
>object_pool: rbd_ssd
>
>
>
> Can you please run "rbd image-meta list " to see if you are
> overwriting any configuration settings? Do you have any client
> configuration overrides in your "/etc/ceph/ceph.conf"?
>
>
>
> On Tue, Aug 14, 2018 at 8:25 AM Glen Baars 
> wrote:
>
> Hello Jason,
>
>
>
> I will also complete testing of a few combinations tomorrow to try and
> isolate the issue now that we can get it to work with a new image.
>
>
>
> The cluster started out at 12.2.3 bluestore so there shouldn’t be any old
> issues from previous versions.
>
> Kind regards,
>
> *Glen Baars*
>
>
>
> *From:* Jason Dillaman 
> *Sent:* Tuesday, 14 August 2018 7:43 PM
> *To:* Glen Baars 
> *Cc:* dillaman ; ceph-users <
> ceph-users@lists.ceph.com>
> *Subject:* Re: [ceph-users] RBD journal feature
>
>
>
> On Tue, Aug 14, 2018 at 4:08 AM Glen Baars 
> wrote:
>
> Hello Jason,
>
>
>
> I can confirm that your tests work on our cluster with a newly created
> image.
>
>
>
> We still can’t get the current images to use a different object pool. Do
> you think that maybe another feature is incompatible with this feature?
> Below is a log of the issue.
>
>
>
> I wouldn't think so. I used master branch for my testing but I'll try
> 12.2.7 just in case it's an issue that's only in the luminous release.
>
>
>
> :~# rbd info RBD_HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a
>
> rbd image '2ef34a96-27e0-4ae7-9888-fd33c38f657a':
>
> size 51200 MB in 12800 objects
>
> order 22 (4096 kB objects)
>
> block_name_prefix: rbd_data.37c8974b0dc51
>
> format: 2
>
> features: layering, exclusive-lock, object-map, fast-diff,
> deep-flatten
>
> flags:
>
> create_timestamp: Sat May  5 11:39:07 2018
>
>
>
> :~# rbd journal info --pool RBD_HDD --image
> 2ef34a96-27e0-4ae7-9888-fd33c38f657a
>
> rbd: journaling is not enabled for image
> 2ef34a96-27e0-4ae7-9888-fd33c38f657a
>
>
>
> :~# rbd feature enable RBD_HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a
> journaling --journal-pool RBD_SSD
>
>
>
> :~# rbd journal info --pool RBD_HDD --image
> 2ef34a96-27e0-4ae7-9888-fd33c38f657a
>
> rbd journal '37c8974b0dc51':
>
> header_oid: journal.37c8974b0dc51
>
> object_oid_prefix: journal_data.1.37c8974b0dc51.
>
> order: 24 (16384 kB objects)
>
> splay_width: 4
>
> *** 
>
>
>
> :~# rbd info RBD_HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a
>
> rbd image '2ef34a96-27e0-4ae7-9888-fd33c38f657a':
>
>     size 51200 MB in 12800 objects
>
> order 22 (4096 kB objects)
>
> block_name_prefix: rbd_data.37c8974b0dc51
>
> format: 2
>
> features: layering, exc

Re: [ceph-users] RBD journal feature

2018-08-14 Thread Glen Baars
Hello Jason,

I have tried with and without ‘rbd journal pool = rbd’ in the ceph.conf. It
doesn’t seem to make a difference.

Also, here is the output:

rbd image-meta list RBD-HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a
There are 0 metadata on this image.
Kind regards,
Glen Baars

Re: [ceph-users] RBD journal feature

2018-08-14 Thread Jason Dillaman
I tried with an rbd CLI from 12.2.7 and I still don't have an issue enabling
journaling on a different pool:

$ rbd info rbd/foo
rbd image 'foo':
size 1024 MB in 256 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.101e6b8b4567
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
create_timestamp: Tue Aug 14 08:51:19 2018
$ rbd feature enable rbd/foo journaling --journal-pool rbd_ssd
$ rbd journal info --pool rbd --image foo
rbd journal '101e6b8b4567':
header_oid: journal.101e6b8b4567
object_oid_prefix: journal_data.1.101e6b8b4567.
order: 24 (16384 kB objects)
splay_width: 4
object_pool: rbd_ssd

Can you please run "rbd image-meta list <image>" to see if you are
overwriting any configuration settings? Do you have any client
configuration overrides in your "/etc/ceph/ceph.conf"?
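
Both checks are quick to run against the affected image (a sketch, using the image
ID from Glen's log; the grep is just a crude way to spot client-side journal overrides):

$ rbd image-meta list RBD_HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a
      # any conf_* key listed here overrides ceph.conf for this image
$ grep -nE 'rbd[ _]journal' /etc/ceph/ceph.conf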

Re: [ceph-users] RBD journal feature

2018-08-14 Thread Glen Baars
Hello Jason,

I will also complete testing of a few combinations tomorrow to try and isolate 
the issue now that we can get it to work with a new image.

The cluster started out at 12.2.3 bluestore so there shouldn’t be any old 
issues from previous versions.
Kind regards,
Glen Baars

Re: [ceph-users] RBD journal feature

2018-08-14 Thread Jason Dillaman
On Tue, Aug 14, 2018 at 4:08 AM Glen Baars wrote:

> Hello Jason,
>
>
>
> I can confirm that your tests work on our cluster with a newly created
> image.
>
>
>
> We still can’t get the current images to use a different object pool. Do
> you think that maybe another feature is incompatible with this feature?
> Below is a log of the issue.
>

I wouldn't think so. I used master branch for my testing but I'll try
12.2.7 just in case it's an issue that's only in the luminous release.


Re: [ceph-users] RBD journal feature

2018-08-14 Thread Glen Baars
Hello Jason,

I can confirm that your tests work on our cluster with a newly created image.

We still can’t get the current images to use a different object pool. Do you 
think that maybe another feature is incompatible with this feature? Below is a 
log of the issue.


:~# rbd info RBD_HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a
rbd image '2ef34a96-27e0-4ae7-9888-fd33c38f657a':
size 51200 MB in 12800 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.37c8974b0dc51
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
create_timestamp: Sat May  5 11:39:07 2018

:~# rbd journal info --pool RBD_HDD --image 2ef34a96-27e0-4ae7-9888-fd33c38f657a
rbd: journaling is not enabled for image 2ef34a96-27e0-4ae7-9888-fd33c38f657a

:~# rbd feature enable RBD_HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a journaling 
--journal-pool RBD_SSD

:~# rbd journal info --pool RBD_HDD --image 2ef34a96-27e0-4ae7-9888-fd33c38f657a
rbd journal '37c8974b0dc51':
header_oid: journal.37c8974b0dc51
object_oid_prefix: journal_data.1.37c8974b0dc51.
order: 24 (16384 kB objects)
splay_width: 4
*** 
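
Note: when the journal data is actually redirected, this output includes an extra 
line (as in the working example further down in this thread), e.g.:

object_pool: RBD_SSD

That object_pool line is exactly what is missing here.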

:~# rbd info RBD_HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a
rbd image '2ef34a96-27e0-4ae7-9888-fd33c38f657a':
size 51200 MB in 12800 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.37c8974b0dc51
format: 2
features: layering, exclusive-lock, object-map, fast-diff, 
deep-flatten, journaling
flags:
create_timestamp: Sat May  5 11:39:07 2018
journal: 37c8974b0dc51
mirroring state: disabled

Kind regards,
Glen Baars
From: Jason Dillaman 
Sent: Tuesday, 14 August 2018 12:04 AM
To: Glen Baars 
Cc: dillaman ; ceph-users 
Subject: Re: [ceph-users] RBD journal feature

On Sun, Aug 12, 2018 at 12:13 AM Glen Baars <g...@onsitecomputers.com.au> wrote:
Hello Jason,

Interesting, I used ‘rados ls’ to view the SSDPOOL and can’t see any objects. 
Is this the correct way to view the journal objects?

You won't see any journal objects in the SSDPOOL until you issue a write:

$ rbd create --size 1G --image-feature exclusive-lock rbd_hdd/test
$ rbd bench --io-type=write --io-pattern=rand --io-size=4K --io-total=16M 
rbd_hdd/test --rbd-cache=false
bench  type write io_size 4096 io_threads 16 bytes 16777216 pattern random
  SEC    OPS   OPS/SEC    BYTES/SEC
    1    320    332.01   1359896.98
    2    736    360.83   1477975.96
    3   1040    351.17   1438393.57
    4   1392    350.94   1437437.51
    5   1744    350.24   1434576.94
    6   2080    349.82   1432866.06
    7   2416    341.73   1399731.23
    8   2784    348.37   1426930.69
    9   3152    347.40   1422966.67
   10   3520    356.04   1458356.70
   11   3920    361.34   1480050.97
elapsed:    11  ops: 4096  ops/sec:   353.61  bytes/sec: 1448392.06
$ rbd feature enable rbd_hdd/test journaling --journal-pool rbd_ssd
$ rbd journal info --pool rbd_hdd --image test
rbd journal '10746b8b4567':
header_oid: journal.10746b8b4567
object_oid_prefix: journal_data.2.10746b8b4567.
order: 24 (16 MiB objects)
splay_width: 4
object_pool: rbd_ssd
$ rbd bench --io-type=write --io-pattern=rand --io-size=4K --io-total=16M 
rbd_hdd/test --rbd-cache=false
bench  type write io_size 4096 io_threads 16 bytes 16777216 pattern random
  SEC    OPS   OPS/SEC    BYTES/SEC
    1    240    248.54   1018005.17
    2    512    263.47   1079154.06
    3    768    258.74   1059792.10
    4   1040    258.50   1058812.60
    5   1312    258.06   1057001.34
    6   1536    258.21   1057633.14
    7   1792    253.81   1039604.73
    8   2032    253.66   1038971.01
    9   2256    241.41    988800.93
   10   2480    237.87    974335.65
   11   2752    239.41    980624.20
   12   2992    239.61    981440.94
   13   3200    233.13    954887.84
   14   3440    237.36    972237.80
   15   3680    239.47    980853.37
   16   3920    238.75    977920.70
elapsed:    16  ops: 4096  ops/sec:   245.04  bytes/sec: 1003692.81
$ rados -p rbd_ssd ls | grep journal_data.2.10746b8b4567.
journal_data.2.10746b8b4567.3
journal_data.2.10746b8b4567.0
journal_data.2.10746b8b4567.2
journal_data.2.10746b8b4567.1

rbd feature enable SLOWPOOL/RBDImage journaling --journal-pool SSDPOOL
The symptom we are experiencing is a huge decrease in write speed (QD1 128K 
writes drop from 160MB/s down to 14MB/s). We see no improvement when moving 
the journal to SSDPOOL (but we don’t think it is really moving).

If you are trying to optimize for 128KiB writes, you might need to tweak the 
"rbd_journal_max_payload_bytes" setting, since it currently defaults to splitting 
journal write events into a maximum 16KiB payload [1] in order to optimize the 
worst-case memory usage of the rbd-mirror daemon for environments w/ hundreds or 
thousands of replicated images.

Re: [ceph-users] RBD journal feature

2018-08-13 Thread Jason Dillaman
On Sun, Aug 12, 2018 at 12:13 AM Glen Baars 
wrote:

> Hello Jason,
>
>
>
> Interesting, I used ‘rados ls’ to view the SSDPOOL and can’t see any
> objects. Is this the correct way to view the journal objects?
>

You won't see any journal objects in the SSDPOOL until you issue a write:

$ rbd create --size 1G --image-feature exclusive-lock rbd_hdd/test
$ rbd bench --io-type=write --io-pattern=rand --io-size=4K --io-total=16M
rbd_hdd/test --rbd-cache=false
bench  type write io_size 4096 io_threads 16 bytes 16777216 pattern random
  SEC    OPS   OPS/SEC    BYTES/SEC
    1    320    332.01   1359896.98
    2    736    360.83   1477975.96
    3   1040    351.17   1438393.57
    4   1392    350.94   1437437.51
    5   1744    350.24   1434576.94
    6   2080    349.82   1432866.06
    7   2416    341.73   1399731.23
    8   2784    348.37   1426930.69
    9   3152    347.40   1422966.67
   10   3520    356.04   1458356.70
   11   3920    361.34   1480050.97
elapsed:    11  ops: 4096  ops/sec:   353.61  bytes/sec: 1448392.06
$ rbd feature enable rbd_hdd/test journaling --journal-pool rbd_ssd
$ rbd journal info --pool rbd_hdd --image test
rbd journal '10746b8b4567':
header_oid: journal.10746b8b4567
object_oid_prefix: journal_data.2.10746b8b4567.
order: 24 (16 MiB objects)
splay_width: 4
object_pool: rbd_ssd
$ rbd bench --io-type=write --io-pattern=rand --io-size=4K --io-total=16M
rbd_hdd/test --rbd-cache=false
bench  type write io_size 4096 io_threads 16 bytes 16777216 pattern random
  SEC    OPS   OPS/SEC    BYTES/SEC
    1    240    248.54   1018005.17
    2    512    263.47   1079154.06
    3    768    258.74   1059792.10
    4   1040    258.50   1058812.60
    5   1312    258.06   1057001.34
    6   1536    258.21   1057633.14
    7   1792    253.81   1039604.73
    8   2032    253.66   1038971.01
    9   2256    241.41    988800.93
   10   2480    237.87    974335.65
   11   2752    239.41    980624.20
   12   2992    239.61    981440.94
   13   3200    233.13    954887.84
   14   3440    237.36    972237.80
   15   3680    239.47    980853.37
   16   3920    238.75    977920.70
elapsed:    16  ops: 4096  ops/sec:   245.04  bytes/sec: 1003692.81
$ rados -p rbd_ssd ls | grep journal_data.2.10746b8b4567.
journal_data.2.10746b8b4567.3
journal_data.2.10746b8b4567.0
journal_data.2.10746b8b4567.2
journal_data.2.10746b8b4567.1


> rbd feature enable SLOWPOOL/RBDImage journaling --journal-pool SSDPOOL
>
> The symptom we are experiencing is a huge decrease in write speed (QD1 128K
> writes drop from 160MB/s down to 14MB/s). We see no improvement when moving
> the journal to SSDPOOL (but we don’t think it is really moving).
>

If you are trying to optimize for 128KiB writes, you might need to tweak the
"rbd_journal_max_payload_bytes" setting, since it currently defaults to splitting
journal write events into a maximum 16KiB payload [1] in order to optimize the
worst-case memory usage of the rbd-mirror daemon for environments w/ hundreds or
thousands of replicated images.
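
As a rough sketch of that tweak (illustrative only, not something benchmarked in
this thread; 131072 = 128KiB, and the option has to be set for the librbd clients
that actually perform the writes, e.g. in the [client] section of their ceph.conf,
with the clients restarted so they pick it up):

[client]
    rbd journal max payload bytes = 131072

It should also be accepted as a one-off override on a test command via the same
mechanism as --rbd-cache=false above, e.g. appending
--rbd-journal-max-payload-bytes=131072 to an rbd bench run.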


> Kind regards,
>
> *Glen Baars*
>
>
>
> *From:* Jason Dillaman 
> *Sent:* Saturday, 11 August 2018 11:28 PM
> *To:* Glen Baars 
> *Cc:* ceph-users 
> *Subject:* Re: [ceph-users] RBD journal feature
>
>
>
> On Fri, Aug 10, 2018 at 3:01 AM Glen Baars 
> wrote:
>
> Hello Ceph Users,
>
>
>
> I am trying to implement image journals for our RBD images ( required for
> mirroring )
>
>
>
> rbd feature enable SLOWPOOL/RBDImage journaling --journal-pool SSDPOOL
>
>
>
> When we run the above command we still find the journal on the SLOWPOOL
> and not on the SSDPOOL. We are running 12.2.7 and all bluestore. We have
> also tried the ceph.conf option (rbd journal pool = SSDPOOL )
>
> Has anyone else gotten this working?
>
> Was it the journal header that was on SLOWPOOL, or the journal data objects? I
> would expect the journal metadata header to be located on SLOWPOOL, but all
> data objects should be created on SSDPOOL as needed.
>
>
>
> Kind regards,
>
> *Glen Baars*
>
>
>
>
>
> --
>
> Jason

Re: [ceph-users] RBD journal feature

2018-08-11 Thread Glen Baars
Hello Jason,

Interesting, I used ‘rados ls’ to view the SSDPOOL and can’t see any objects. 
Is this the correct way to view the journal objects?
rbd feature enable SLOWPOOL/RBDImage journaling --journal-pool SSDPOOL
The symptom we are experiencing is a huge decrease in write speed (QD1 128K 
writes drop from 160MB/s down to 14MB/s). We see no improvement when moving 
the journal to SSDPOOL (but we don’t think it is really moving).
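
For a quick before/after comparison of that 128K, QD1 case, something like the 
following could be run against a scratch image (a sketch only: the pool/image 
names follow the rbd_hdd/rbd_ssd examples earlier in the thread, --io-threads=1 
stands in for QD1, and the sequential pattern is assumed rather than taken from 
our real workload):

$ rbd create --size 10G rbd_hdd/test
$ rbd bench --io-type=write --io-pattern=seq --io-size=128K --io-total=1G --io-threads=1 rbd_hdd/test --rbd-cache=false
$ rbd feature enable rbd_hdd/test journaling --journal-pool rbd_ssd
$ rbd bench --io-type=write --io-pattern=seq --io-size=128K --io-total=1G --io-threads=1 rbd_hdd/test --rbd-cache=false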
Kind regards,
Glen Baars

From: Jason Dillaman 
Sent: Saturday, 11 August 2018 11:28 PM
To: Glen Baars 
Cc: ceph-users 
Subject: Re: [ceph-users] RBD journal feature

On Fri, Aug 10, 2018 at 3:01 AM Glen Baars <g...@onsitecomputers.com.au> wrote:
Hello Ceph Users,

I am trying to implement image journals for our RBD images ( required for 
mirroring )

rbd feature enable SLOWPOOL/RBDImage journaling --journal-pool SSDPOOL

When we run the above command we still find the journal on the SLOWPOOL and not 
on the SSDPOOL. We are running 12.2.7 and all bluestore. We have also tried the 
ceph.conf option (rbd journal pool = SSDPOOL )
Has anyone else gotten this working?
Was it the journal header that was on SLOWPOOL, or the journal data objects? I 
would expect the journal metadata header to be located on SLOWPOOL, but all data 
objects should be created on SSDPOOL as needed.

Kind regards,
Glen Baars


--
Jason


Re: [ceph-users] RBD journal feature

2018-08-11 Thread Jason Dillaman
On Fri, Aug 10, 2018 at 3:01 AM Glen Baars 
wrote:

> Hello Ceph Users,
>
>
>
> I am trying to implement image journals for our RBD images ( required for
> mirroring )
>
>
>
> rbd feature enable SLOWPOOL/RBDImage journaling --journal-pool SSDPOOL
>
>
>
> When we run the above command we still find the journal on the SLOWPOOL
> and not on the SSDPOOL. We are running 12.2.7 and all bluestore. We have
> also tried the ceph.conf option (rbd journal pool = SSDPOOL )
>
> Has anyone else gotten this working?
>
Was it the journal header that was on SLOWPOOL, or the journal data objects? I
would expect the journal metadata header to be located on SLOWPOOL, but all data
objects should be created on SSDPOOL as needed.
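
A quick way to confirm where the journal pieces actually end up (a sketch; 
substitute the real pool and image names, and remember the data objects only 
appear once writes are issued):

$ rbd journal info --pool SLOWPOOL --image RBDImage   # shows an object_pool: SSDPOOL line if the data objects were redirected
$ rados -p SSDPOOL ls | grep journal_data.            # journal data objects
$ rados -p SLOWPOOL ls | grep journal.                # journal metadata header stays in the image's pool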


> Kind regards,
>
> *Glen Baars*
>


-- 
Jason