Re: [ceph-users] Out-of-date RBD client libraries

2016-10-25 Thread Steve Taylor
CRUSH is what determines where data gets stored, so if you enable newer CRUSH
tunables prematurely against older clients that don't support them, you run
the risk of your clients not being able to find or place objects correctly. I
don't know Ceph's internals well enough to say everything that might result at
a lower level from such a scenario, but clients not knowing where data belongs
seems bad enough. I wouldn't necessarily expect data loss, but potentially a
lot of client errors.
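
For what it's worth, you can check what the cluster is currently using and pin
it to a profile your oldest clients understand before upgrading. A rough
sketch with the standard ceph CLI (the hammer profile here is just an example;
pick whatever matches your oldest clients):

  # Show the tunables profile and individual tunable values in effect
  ceph osd crush show-tunables

  # Pin tunables to an older profile until all clients are upgraded
  ceph osd crush tunables hammer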

From: jdavidli...@gmail.com [mailto:jdavidli...@gmail.com] On Behalf Of J David
Sent: Tuesday, October 25, 2016 1:27 PM
To: Steve Taylor <steve.tay...@storagecraft.com>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Out-of-date RBD client libraries


On Tue, Oct 25, 2016 at 3:10 PM, Steve Taylor
<steve.tay...@storagecraft.com> wrote:
Recently we tested an upgrade from 0.94.7 to 10.2.3 and found exactly the
opposite. Upgrading the clients first worked for many operations, but we got
"function not implemented" errors when we tried to clone RBD snapshots.

Yes, we have seen “function not implemented” in the past as well when 
connecting new clients to old clusters.

you must keep your CRUSH tunables at firefly or hammer until the clients are 
upgraded.

Not that I am proposing to try it, but… or else what?

Whatever the “or else!” is, the same would apply, I assume, to connecting old 
clients to a brand-new jewel cluster which would have been created with jewel 
tunables in the first place?

Thanks!




Steve Taylor | Senior Software Engineer | StorageCraft Technology
Corporation <https://storagecraft.com>
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2799







Re: [ceph-users] Out-of-date RBD client libraries

2016-10-25 Thread J David
On Tue, Oct 25, 2016 at 3:10 PM, Steve Taylor wrote:

> Recently we tested an upgrade from 0.94.7 to 10.2.3 and found exactly the
> opposite. Upgrading the clients first worked for many operations, but we
> got "function not implemented" errors when we tried to clone RBD
> snapshots.
>

Yes, we have seen “function not implemented” in the past as well when
connecting new clients to old clusters.


> you must keep your CRUSH tunables at firefly or hammer until the clients
> are upgraded.
>

Not that I am proposing to try it, but… or else what?

Whatever the “or else!” is, the same would apply, I assume, to connecting
old clients to a brand-new jewel cluster which would have been created with
jewel tunables in the first place?

Thanks!


Re: [ceph-users] Out-of-date RBD client libraries

2016-10-25 Thread Jason Dillaman
On Tue, Oct 25, 2016 at 2:46 PM, J David wrote:
> Are long-running RBD clients (like Qemu virtual machines) placed at
> risk of instability or data corruption if they are not updated and
> restarted before, during, or after such an upgrade?

No, we try very hard to ensure forward and backward compatibility.
However, since firefly is EOL [1] and our testing capacity is finite, I
don't believe we perform any direct tests between firefly clients and
jewel clusters.
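
If you want to confirm what is actually connecting before you change
anything, the monitor admin socket can list client sessions along with
the feature bits they advertise. A sketch, assuming you run it on a
monitor host (mon.a is a placeholder for your mon ID):

  # List connected sessions, including client addresses and feature bits
  ceph daemon mon.a sessions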

> If so, what are the potential consequences, and where in the process
> should they be upgraded to avoid those consequences?

In general, I would recommend upgrading the librbd clients after the
cluster is fully upgraded. It really shouldn't matter unless you are
attempting to use new CRUSH map / RBD features without the necessary
backing support in the cluster. Assuming your VM environment is
properly set up, you can use live migration to transparently upgrade
the running librbd version within VMs.
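
With libvirt, for example, that could be as simple as upgrading the
packages on the destination host and then migrating (guest1 and
dest-host are placeholders; this assumes shared RBD storage, so no disk
copy is involved):

  # Move the running VM to a host with the newer librbd/qemu
  virsh migrate --live guest1 qemu+ssh://dest-host/system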

[1] http://docs.ceph.com/docs/master/releases/

-- 
Jason


Re: [ceph-users] Out-of-date RBD client libraries

2016-10-25 Thread Steve Taylor
We tested an upgrade from 0.94.3 to 0.94.7 and experienced issues when the 
librbd clients were not upgraded first in the process. It was a while back and 
I don't remember the specific issues, but upgrading the clients prior to 
upgrading any services worked in that case.

Recently we tested an upgrade from 0.94.7 to 10.2.3 and found exactly the
opposite. Upgrading the clients first worked for many operations, but we got
"function not implemented" errors when we tried to clone RBD snapshots. We
re-tested that upgrade, upgrading the clients after all of the services, and
everything worked fine in that case. The caveat is that you must keep your
CRUSH tunables at firefly or hammer until the clients are upgraded.
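
For reference, the failures showed up in the standard snapshot/clone
sequence, something along these lines (pool and image names are
placeholders):

  rbd snap create rbd/image1@snap1
  rbd snap protect rbd/image1@snap1
  rbd clone rbd/image1@snap1 rbd/clone1

The tunables can be pinned with e.g. "ceph osd crush tunables hammer"
until the clients catch up.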

At any rate, we've had different experiences upgrading the clients at different 
points in the process depending on the releases involved. The key is to test 
first and make sure you have a sane upgrade path before doing anything in 
production.




Steve Taylor | Senior Software Engineer | StorageCraft Technology
Corporation <https://storagecraft.com>
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2799






-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of J David
Sent: Tuesday, October 25, 2016 12:46 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Out-of-date RBD client libraries

What are the potential consequences of using out-of-date client libraries with 
RBD against newer clusters?

Specifically, what are the potential ill effects of using Firefly client
libraries (0.80.7 and 0.80.8) to access Hammer or Jewel (10.2.3) clusters?

The upgrading instructions
(http://docs.ceph.com/docs/jewel/install/upgrading-ceph/) don't actually
mention clients; they just give the recommended order as: ceph-deploy,
mons, osds, mds, object gateways.

Are long-running RBD clients (like Qemu virtual machines) placed at risk of 
instability or data corruption if they are not updated and restarted before, 
during, or after such an upgrade?

If so, what are the potential consequences, and where in the process should 
they be upgraded to avoid those consequences?

Thanks for any advice!


[ceph-users] Out-of-date RBD client libraries

2016-10-25 Thread J David
What are the potential consequences of using out-of-date client
libraries with RBD against newer clusters?

Specifically, what are the potential ill effects of using Firefly
client libraries (0.80.7 and 0.80.8) to access Hammer or Jewel (10.2.3)
clusters?
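
For inventory purposes, the installed client version is easy to check,
e.g. on a Debian-based client host (rpm -q librbd1 on RPM systems):

  # Report the release the client tools were built from
  rbd --version
  dpkg -l librbd1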

The upgrading instructions
(http://docs.ceph.com/docs/jewel/install/upgrading-ceph/) don't
actually mention clients; they just give the recommended order as:
ceph-deploy, mons, osds, mds, object gateways.
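
Presumably that means, per host, upgrading the packages and then
restarting daemons in that order; on a systemd-based jewel install I'd
guess something like this (package-manager commands vary by distro):

  # On each monitor host first:
  sudo apt-get install ceph && sudo systemctl restart ceph-mon.target
  # Then on each OSD host:
  sudo apt-get install ceph && sudo systemctl restart ceph-osd.target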

Are long-running RBD clients (like Qemu virtual machines) placed at
risk of instability or data corruption if they are not updated and
restarted before, during, or after such an upgrade?

If so, what are the potential consequences, and where in the process
should they be upgraded to avoid those consequences?

Thanks for any advice!