On Tuesday, September 9, 2025 3:16:18 PM Eastern Daylight Time Eugen Block 
wrote:
> Hi,
> 
> thanks for the info, I'm excited to test the new release candidate!
> 
> And I already hit my first issue: upgrading a tiny single-node
> cluster from 19.2.3 to 20.1.0 fails (health detail at the end).
> The host is a VM running openSUSE Leap 15.6. The first MGR seems to
> have been upgraded successfully:
> 
> soc9-ceph:~ # ceph versions -f json | jq -r '.mgr'
> {
>    "ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)": 1,
>    "ceph version 20.1.0 (010a3ad647c9962d47812a66ad6feda26ab28aa4) tentacle (rc - RelWithDebInfo)": 1
> }
> 
> 
> Is this already a known issue?
> 
> Thanks!
> Eugen
> 
> # ceph orch upgrade status
> {
>      "in_progress": true,
>      "target_image": "quay.io/ceph/ceph@sha256:06b298a25e7cee11677f06a54ad90bb69f9b295e0d5482663f26b439d14d4045",
>      "services_complete": [],
>      "which": "Upgrading all daemon types on all hosts",
>      "progress": "1/10 daemons upgraded",
>      "message": "Error: UPGRADE_FAILED_PULL: Upgrade: failed to pull target image",
>      "is_paused": true
> }
> 
> 
> soc9-ceph:~ # ceph health detail
> HEALTH_WARN failed to probe daemons or devices; Upgrade: failed to pull target image
> [WRN] CEPHADM_REFRESH_FAILED: failed to probe daemons or devices
>      host soc9-ceph `cephadm ls` failed: cephadm exited with an error code: 1, stderr: Traceback (most recent call last):
>    File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
>      "__main__", mod_spec)
>    File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
>      exec(code, run_globals)
>    File "/var/lib/ceph/95f0ae1e-7d97-11f0-918f-fa163e2ad8c5/cephadm.07ba6d1a818cf6ded98e09fc882a9f4f1772aeb82d4664369096243131efe94f/__main__.py", line 190, in <module>
>    File "/var/lib/ceph/95f0ae1e-7d97-11f0-918f-fa163e2ad8c5/cephadm.07ba6d1a818cf6ded98e09fc882a9f4f1772aeb82d4664369096243131efe94f/cephadmlib/listing.py", line 53, in <module>
> ImportError: cannot import name 'TypedDict'
>      host soc9-ceph `cephadm gather-facts` failed: cephadm exited with an error code: 1, stderr: Traceback (most recent call last):
>    File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
>      "__main__", mod_spec)
>    File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
>      exec(code, run_globals)
>    File "/var/lib/ceph/95f0ae1e-7d97-11f0-918f-fa163e2ad8c5/cephadm.07ba6d1a818cf6ded98e09fc882a9f4f1772aeb82d4664369096243131efe94f/__main__.py", line 190, in <module>
>    File "/var/lib/ceph/95f0ae1e-7d97-11f0-918f-fa163e2ad8c5/cephadm.07ba6d1a818cf6ded98e09fc882a9f4f1772aeb82d4664369096243131efe94f/cephadmlib/listing.py", line 53, in <module>
> ImportError: cannot import name 'TypedDict'
>      host soc9-ceph `cephadm list-networks` failed: cephadm exited with an error code: 1, stderr: Traceback (most recent call last):
>    File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
>      "__main__", mod_spec)
>    File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
>      exec(code, run_globals)
>    File "/var/lib/ceph/95f0ae1e-7d97-11f0-918f-fa163e2ad8c5/cephadm.07ba6d1a818cf6ded98e09fc882a9f4f1772aeb82d4664369096243131efe94f/__main__.py", line 190, in <module>
>    File "/var/lib/ceph/95f0ae1e-7d97-11f0-918f-fa163e2ad8c5/cephadm.07ba6d1a818cf6ded98e09fc882a9f4f1772aeb82d4664369096243131efe94f/cephadmlib/listing.py", line 53, in <module>
> ImportError: cannot import name 'TypedDict'
> [WRN] UPGRADE_FAILED_PULL: Upgrade: failed to pull target image
>      failed to pull quay.io/ceph/ceph@sha256:06b298a25e7cee11677f06a54ad90bb69f9b295e0d5482663f26b439d14d4045 on host soc9-ceph
> 
> Quoting Yuri Weinstein <ywein...@redhat.com>:
> > This is the first release candidate for Tentacle.
> > Packages were built for Ubuntu 22.04, 24.04, and CentOS 9 for this RC0
> > 
> > Feature highlights:
> > 
> > * RADOS: New features include long-expected performance optimizations
> >   (FastEC) for EC pools, including partial reads and partial writes.
> >   Users can also expect to see BlueStore improvements such as
> >   compression and a new, faster WAL (write-ahead log). Additional
> >   improvements include switching all components to the faster OMAP
> >   iteration interface, bypassable ceph_assert()s, fixed mclock bugs and
> >   configuration defaults, and testing improvements for dencoding
> >   verification.
> > 
> > * MGR: Highlights include the ability to force-disable always-on
> >   modules and the removal of the restful and zabbix modules (both
> >   deprecated since 2020). Note that the dashboard module's richer and
> >   better-maintained RESTful API can be used as an alternative to the
> >   restful module, and the prometheus module can be used as an
> >   alternative monitoring solution for zabbix (see the example below).
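
For anyone swapping zabbix out, enabling the replacement is the standard
mgr module command (this is existing CLI, not new in Tentacle):

    ceph mgr module enable prometheus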
> > 
> > * RGW: Multiple fixes: Lua scripts will no longer run against health
> >   checks, and ETag values returned in S3 CopyPart, PostObject, and
> >   CompleteMultipartUpload responses are now properly quoted.
> > 
> > * RGW: IAM policy evaluation now supports the conditions ArnEquals and
> >   ArnLike, along with their Not and IfExists variants (a sketch of a
> >   policy using ArnLike follows below).
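
For illustration, a hypothetical bucket policy fragment using the new
ArnLike condition; the condition key and ARNs below are made-up examples,
not taken from the release notes:

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam::123456789012:root"]},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
        "Condition": {
          "ArnLike": {"aws:SourceArn": "arn:aws:sns:*:123456789012:*"}
        }
      }]
    }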
> > 
> > * RBD: New live migration features: RBD images can now be instantly
> >   imported from another Ceph cluster (native format) or from a wide
> >   variety of external sources/formats with the help of the new NBD
> >   stream and an appropriately capable NBD server such as `qemu-nbd`
> >   (see the sketch below). Also added: support for RBD namespace
> >   remapping while mirroring between Ceph clusters, new `rbd group info`
> >   and `rbd group snap info` commands, and an enhanced `rbd group snap ls`
> >   command. The `rbd device map` command now defaults to msgr2.
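
A rough sketch of the external-import flow, assuming the usual `rbd
migration` prepare/execute/commit sequence and guessing that the new NBD
stream takes a URI in the --source-spec JSON (paths, names, and the spec
layout here are illustrative assumptions, not from the announcement):

    # serve a local qcow2 image read-only over NBD (default port 10809)
    qemu-nbd --read-only --persistent --format qcow2 /tmp/vm-disk.qcow2

    # import it into pool "rbd" via live migration
    rbd migration prepare --import-only \
      --source-spec '{"type": "raw", "stream": {"type": "nbd", "uri": "nbd://localhost:10809"}}' \
      rbd/imported-image
    rbd migration execute rbd/imported-image
    rbd migration commit rbd/imported-image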
> > 
> > * CephFS: Directories may now be configured with case-insensitive or
> >   normalized directory entry names. This is an inheritable
> >   configuration, making it apply to an entire directory tree. For more
> >   information, see https://docs.ceph.com/en/latest/cephfs/charmap/
> >   (a small example follows below).
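
If I'm reading the linked charmap docs right, this is driven by vxattrs
on the directory, so marking a new, empty directory case-insensitive
should look something like this (vxattr name assumed from those docs,
mount point made up):

    mkdir /mnt/cephfs/shared
    setfattr -n ceph.dir.casesensitive -v 0 /mnt/cephfs/shared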
> > 
> > * CephFS: Modifying the FS setting variable "max_mds" when a cluster
> >   is unhealthy now requires users to pass the confirmation flag
> >   (--yes-i-really-mean-it). This has been added as a precaution to
> >   tell users that modifying "max_mds" may not help with
> >   troubleshooting or recovery efforts; instead, it might further
> >   destabilize the cluster (example below).
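
Concretely, on an unhealthy cluster the command would now look like this
(the filesystem name "cephfs" is a placeholder):

    ceph fs set cephfs max_mds 2 --yes-i-really-mean-it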
> > 
> > * CephFS: EOPNOTSUPP (Operation not supported) is now returned by the
> >   CephFS FUSE client for `fallocate` in the default case (i.e. mode ==
> >   0), since CephFS does not support disk space reservation. The only
> >   supported flags are `FALLOC_FL_KEEP_SIZE` and `FALLOC_FL_PUNCH_HOLE`
> >   (see below for what that looks like from the shell).
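
With util-linux fallocate(1) on a FUSE mount, I'd expect the change to
look roughly like this (mount point made up):

    # plain preallocation (mode == 0) is now rejected
    $ fallocate -l 1G /mnt/cephfs/file
    fallocate: fallocate failed: Operation not supported

    # punching a hole (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE) still works
    $ fallocate --punch-hole --keep-size --offset 0 --length 4096 /mnt/cephfs/file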
> > 
> > * Dashboard: Added support for NVMe/TCP (gateway groups, multiple
> >   namespaces), multi-cluster management, oAuth2 integration, and
> >   enhanced RGW/SMB features including multi-site automation, tiering,
> >   policies, lifecycles, notifications, and granular replication.
> > 
> > * Git at git://github.com/ceph/ceph.git
> > * Tarball at https://download.ceph.com/tarballs/ceph-20.1.0.tar.gz
> > * Containers at https://quay.io/repository/ceph/ceph
> > * For packages, see https://docs.ceph.com/en/latest/install/get-packages/
> > * Release git sha1: 010a3ad647c9962d47812a66ad6feda26ab28aa4


The minimum Python version for cephadm is now 3.9 in Tentacle. Does your
distro ship a Python older than that?

We probably need to update the matrix at
https://docs.ceph.com/en/tentacle/start/os-recommendations/ to include
tentacle itself, too.
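
For what it's worth, that matches the traceback: typing.TypedDict only
appeared in Python 3.8, and Leap 15.6's default python3 is, as far as I
know, still 3.6. On the host this should reproduce it:

    $ python3 --version
    Python 3.6.15
    $ python3 -c "from typing import TypedDict"
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
    ImportError: cannot import name 'TypedDict'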


_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
