Are you able to run the following command successfully?

rados -p glebe-sata get rbd_id.hypervtst-lun04 -

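(Note that "rados get" takes an output file argument; the "-" above sends the
object's contents to stdout.) If that read also fails with an I/O error, the
problem would appear to be at the RADOS level rather than in librbd. It may
also be worth checking that the pool's RBD directory object is readable, since
the "error listing image in directory" message is produced while reading the
rbd_directory omap. A minimal sketch of those checks, using your glebe-sata
pool and the standard v2 image metadata object names:

rados -p glebe-sata stat rbd_directory
rados -p glebe-sata listomapvals rbd_directory
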
On Sun, Jun 5, 2016 at 8:49 PM, Adrian Saul
<adrian.s...@tpgtelecom.com.au> wrote:
>
> I upgraded my Infernalis semi-production cluster to Jewel on Friday. The
> upgrade went through smoothly (aside from a time-wasting restorecon of
> /var/lib/ceph in the selinux package upgrade) and the services continued
> running without interruption. However, this morning when I went to create
> some new RBD images, I found I was unable to do much at all with RBD.
>
> Just about any rbd command fails with an I/O error. I can run showmapped,
> but that is about it - anything like an ls, info or status fails. This
> applies to all my pools.
>
> I can see no errors in any log files that suggest an issue. I have also
> tried the commands on other cluster members that have never done anything
> with RBD before (I was wondering if perhaps the kernel rbd client was
> pinning the old library version open or something), but the same error
> occurs.
>
> Where can I start trying to resolve this?
>
> Cheers,
>  Adrian
>
>
> [root@ceph-glb-fec-01 ceph]# rbd ls glebe-sata
> rbd: list: (5) Input/output error
> 2016-06-06 10:41:31.792720 7f53c06a2d80 -1 librbd: error listing image in 
> directory: (5) Input/output error
> 2016-06-06 10:41:31.792749 7f53c06a2d80 -1 librbd: error listing v2 images: 
> (5) Input/output error
>
> [root@ceph-glb-fec-01 ceph]# rbd ls glebe-ssd
> rbd: list: (5) Input/output error
> 2016-06-06 10:41:33.956648 7f90de663d80 -1 librbd: error listing image in 
> directory: (5) Input/output error
> 2016-06-06 10:41:33.956672 7f90de663d80 -1 librbd: error listing v2 images: 
> (5) Input/output error
>
> [root@ceph-glb-fec-02 ~]# rbd showmapped
> id pool       image                 snap device
> 0  glebe-sata test02                -    /dev/rbd0
> 1  glebe-ssd  zfstest               -    /dev/rbd1
> 10 glebe-sata hypervtst-lun00       -    /dev/rbd10
> 11 glebe-sata hypervtst-lun02       -    /dev/rbd11
> 12 glebe-sata hypervtst-lun03       -    /dev/rbd12
> 13 glebe-ssd  nspprd01_lun00        -    /dev/rbd13
> 14 glebe-sata cirrux-nfs01          -    /dev/rbd14
> 15 glebe-sata hypervtst-lun04       -    /dev/rbd15
> 16 glebe-sata hypervtst-lun05       -    /dev/rbd16
> 17 glebe-sata pvtcloud-nfs01        -    /dev/rbd17
> 18 glebe-sata cloud2sql-lun00       -    /dev/rbd18
> 19 glebe-sata cloud2sql-lun01       -    /dev/rbd19
> 2  glebe-sata radmast02-lun00       -    /dev/rbd2
> 20 glebe-sata cloud2sql-lun02       -    /dev/rbd20
> 21 glebe-sata cloud2fs-lun00        -    /dev/rbd21
> 22 glebe-sata cloud2fs-lun01        -    /dev/rbd22
> 3  glebe-sata radmast02-lun01       -    /dev/rbd3
> 4  glebe-sata radmast02-lun02       -    /dev/rbd4
> 5  glebe-sata radmast02-lun03       -    /dev/rbd5
> 6  glebe-sata radmast02-lun04       -    /dev/rbd6
> 7  glebe-ssd  sybase_iquser02_lun00 -    /dev/rbd7
> 8  glebe-ssd  sybase_iquser03_lun00 -    /dev/rbd8
> 9  glebe-ssd  sybase_iquser04_lun00 -    /dev/rbd9
>
> [root@ceph-glb-fec-02 ~]# rbd status glebe-sata/hypervtst-lun04
> 2016-06-06 10:47:30.221453 7fc0030dc700 -1 librbd::image::OpenRequest: failed 
> to retrieve image id: (5) Input/output error
> 2016-06-06 10:47:30.221556 7fc0028db700 -1 librbd::ImageState: failed to open 
> image: (5) Input/output error
> rbd: error opening image hypervtst-lun04: (5) Input/output error



-- 
Jason