Anecdotally, I see the same behaviour, but there seem to be no negative
side-effects. The “jewel” clients below are more than likely Linux kernel
clients:

[cinder] root@aurae-dashboard:~# ceph features
{
    "mon": [
        {
            "features": "0x3ffddff8ffacffff",
            "release": "luminous",
            "num": 1
        }
    ],
    "mds": [
        {
            "features": "0x3ffddff8ffacffff",
            "release": "luminous",
            "num": 1
        }
    ],
    "osd": [
        {
            "features": "0x3ffddff8ffacffff",
            "release": "luminous",
            "num": 1
        }
    ],
    "client": [
        {
            "features": "0x27018fb86aa42ada",
            "release": "jewel",
            "num": 5
        },
        {
            "features": "0x3ffddff8ffacffff",
            "release": "luminous",
            "num": 8
        }
    ],
    "mgr": [
        {
            "features": "0x3ffddff8ffacffff",
            "release": "luminous",
            "num": 1
        }
    ]
}
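
If you want to confirm where those jewel feature bits are coming from, dumping
the monitor's sessions should list each connected client along with its address
and negotiated features. This is only a rough sketch (I haven't re-run it for
this mail); it assumes you run it on the mon host and that the mon id is
aurae-storage-1, as in the status output below:

# ceph daemon mon.aurae-storage-1 sessions

Kernel CephFS/RBD clients normally show up there with the smaller feature set,
which is what gets reported as “jewel”.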
[cinder] root@aurae-dashboard:~# ceph -s
  cluster:
    id:     650c5366-efa8-4636-a1a1-08740513ac3c
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum aurae-storage-1 (age 45h)
    mgr: aurae-storage-1(active, since 45h)
    mds: cephfs:1 {0=aurae-storage-1=up:active}
    osd: 1 osds: 1 up (since 45h), 1 in (since 43h)
    rgw: 1 daemon active (radosgw.aurae-storage-1)

  data:
    pools:   10 pools, 832 pgs
    objects: 1.42k objects, 3.0 GiB
    usage:   4.1 GiB used, 91 GiB / 95 GiB avail
    pgs:     832 active+clean

  io:
    client:   36 KiB/s wr, 0 op/s rd, 3 op/s wr

[cinder] root@aurae-dashboard:~# ceph versions
{
    "mon": {
        "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) 
nautilus (stable)": 1
    },
    "mgr": {
        "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) 
nautilus (stable)": 1
    },
    "osd": {
        "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) 
nautilus (stable)": 1
    },
    "mds": {
        "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) 
nautilus (stable)": 1
    },
    "rgw": {
        "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) 
nautilus (stable)": 1
    },
    "overall": {
        "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) 
nautilus (stable)": 5
    }
}
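
As an aside, if you want to double-check an individual daemon rather than the
cluster-wide summary, asking it directly should work as well (a small sketch,
assuming osd.0 exists):

# ceph tell osd.0 version

As far as I understand it, `ceph versions` reports the binaries the daemons are
actually running, while `ceph features` reports the protocol feature bits each
connection advertises, which is why the two can disagree for older (e.g. kernel)
clients.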

Thanks,

--
Kenneth Van Alstyne
Systems Architect
Knight Point Systems, LLC
Service-Disabled Veteran-Owned Business
1775 Wiehle Avenue Suite 101 | Reston, VA 20190
c: 228-547-8045 f: 571-266-3106
www.knightpoint.com

On Mar 27, 2019, at 6:52 AM, John Hearns 
<[email protected]<mailto:[email protected]>> wrote:

Sure

# ceph versions
{
    "mon": {
        "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) 
nautilus (stable)": 3
    },
    "mgr": {
        "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) 
nautilus (stable)": 2
    },
    "osd": {
        "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) 
nautilus (stable)": 12
    },
    "mds": {
        "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) 
nautilus (stable)": 3
    },
    "rgw": {
        "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) 
nautilus (stable)": 4
    },
    "overall": {
        "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) 
nautilus (stable)": 24
    }
}


On Wed, 27 Mar 2019 at 11:20, Konstantin Shalygin 
<[email protected]<mailto:[email protected]>> wrote:


We recently updated a cluster to the Nautilus release by updating the Debian
packages from the Ceph site, then rebooted all servers.

ceph features still reports older releases, for example the osd

    "osd": [
        {
            "features": "0x3ffddff8ffacffff",
            "release": "luminous",
            "num": 12
        }

I think I am not understanding what exactly is meant by release here.
Can we alter the osd (mon, clients, etc.) so that they report nautilus?


Show your `ceph versions` please.



k

_______________________________________________
ceph-users mailing list
[email protected]<mailto:[email protected]>
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
