[ceph-users] Fwd: Error
Dear ceph-users,

I'm preparing a Ceph cluster on my Debian machines. I successfully walked through the Preflight installation guide on the website. Now I'm stuck at the Storage Cluster Quick Start. When I enter the following command:

ceph-deploy mon create-initial

I get the following errors:

[cephsrv1][ERROR ] "ceph auth get-or-create for keytype admin returned 1
[ceph_deploy.gatherkeys][ERROR ] Failed to connect to host:cephsrv1
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpK5yQHi
[ceph_deploy][ERROR ] RuntimeError: Failed to connect any mon

I already changed line 78 from "allow *" to "allow" in the gatherkeys.py file under /usr/lib/python2.7/dist-packages/ceph_deploy/ (on the admin node), following this report: http://tracker.ceph.com/issues/16443

Best regards,
Thomas
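Since "Failed to connect any mon" usually means the monitor never came up rather than a keyring problem, it may be worth confirming the mon on cephsrv1 is actually running before patching gatherkeys.py. A minimal check, assuming the default cluster name "ceph" and that the mon host is cephsrv1 (adjust names to your setup):

# On cephsrv1: is the mon daemon up and answering on its admin socket?
ps aux | grep ceph-mon
sudo ceph --admin-daemon /var/run/ceph/ceph-mon.cephsrv1.asok mon_status

# From the admin node: can the ceph-deploy user reach the mon host at all?
ssh cephsrv1 'hostname && sudo ls /var/lib/ceph/mon/'

If mon_status does not report the monitor in quorum, gatherkeys will keep failing regardless of any keyring patch.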
[ceph-users] Fwd: error opening rbd image
Hello,

I can't open an rbd image after restarting the cluster. I use the rbd image for a KVM virtual machine.

ceph version 0.87

uname -a
Linux ceph4 3.14.31-gentoo #1 SMP Fri Jan 30 22:24:11 YEKT 2015 x86_64 Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz GenuineIntel GNU/Linux

rbd info raid0n/homes
rbd: error opening image homes: (6) No such device or address
2015-02-02 07:13:41.334712 7f5e190ff780 -1 librbd: unrecognized header format
2015-02-02 07:13:41.334726 7f5e190ff780 -1 librbd: Error reading header: (6) No such device or address

rados get -p raid0n rbd_directory - | strings
homes

rados get -p raid0n homes - | strings
error getting raid0n/homes: (2) No such file or directory

ceph pg stat
v37728538: 784 pgs: 784 active+clean; 1447 GB data, 3614 GB used, 3186 GB / 6801 GB avail; 21305 B/s rd, 24634 B/s wr, 10 op/s

rbd export raid0n/homes
rbd: error opening image homes: (6) No such device or address
2015-02-02 07:17:19.188832 7f203ad17780 -1 librbd: unrecognized header format
2015-02-02 07:17:19.188844 7f203ad17780 -1 librbd: Error reading header: (6) No such device or address

How can I repair this?

Aleksey Leonov
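One avenue worth checking (a sketch, not a guaranteed fix): in ceph 0.87 images default to format 1, and librbd reads a format 1 header from an object named "<image>.rbd", so "rados get -p raid0n homes" failing is expected; the object to look for is homes.rbd. Assuming the pool and image names above:

# Look for the header object (format 1: "<name>.rbd"; format 2: "rbd_header.<id>"):
rados -p raid0n ls | grep -E '\.rbd$|^rbd_header\.'

# If homes.rbd exists, "unrecognized header format" suggests its contents
# are damaged or truncated; inspect it:
rados -p raid0n stat homes.rbd
rados -p raid0n get homes.rbd - | strings | head

If the header object is missing or corrupt but the rb.* data objects survive, the data may still be recoverable by rewriting a header, but that is beyond this sketch.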
[ceph-users] Fwd: Error creating monitors
Can someone please help out? I am stuck.

Regards,
Sakhi Hadebe
Engineer: South African National Research Network (SANReN) Competency Area, Meraka, CSIR
Tel: +27 12 841 2308  Fax: +27 12 841 4223  Cell: +27 71 331 9622
Email: shad...@csir.co.za

---------- Forwarded message ----------
From: Sakhi Hadebe, 10/31/2014 1:28 PM

Hi Support,

I am testing a Ceph storage cluster on a 3-node cluster, with Ubuntu 12.04 LTS installed on all 3 nodes. While attempting to create the monitors for node 2 and node 3, I am getting the error below:

[ceph-node3][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory

But mon.ceph1 gets created with no errors. These commands are executed on the primary node, node1. What could I be doing wrong?

Please help.

Regards,
Sakhi Hadebe
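Not a definitive diagnosis, but this admin_socket error typically means ceph-deploy could not reach the monitor's admin socket because the mon daemon on that node never started, often due to a hostname mismatch. A quick check on ceph-node3, assuming the default cluster name "ceph":

# The short hostname must match the mon name ceph-deploy created:
hostname -s

# If the daemon started, its data directory and admin socket should exist:
ls /var/lib/ceph/mon/
ls /var/run/ceph/

# Query the mon directly if its socket is present (adjust the socket name):
sudo ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node3.asok mon_status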
[ceph-users] Fwd: Error zapping the disk
Hi Support,

Can someone please help me with the error below so I can proceed with my cluster installation? It has been a week now and I don't know how to carry on.

Regards,
Sakhi Hadebe
Engineer: South African National Research Network (SANReN) Competency Area, Meraka, CSIR
Tel: +27 12 841 2308  Fax: +27 12 841 4223  Cell: +27 71 331 9622
Email: shad...@csir.co.za

---------- Forwarded message ----------
From: Sakhi Hadebe, 10/22/2014 1:56 PM

Hi,

I am building a three-node cluster on Debian 7.7. I have a problem zapping the disk of the very first node.

ERROR:

[ceph1][WARNIN] Error: Partition(s) 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64 on /dev/sda3 have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
[ceph1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: partprobe /dev/sda3

Please help.

Regards,
Sakhi Hadebe
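A sketch of how to investigate, not a definitive fix: the message means the kernel refused to re-read the partition table because the device is in use. Note that /dev/sda3 is itself a partition, so the zap wrote a new partition label inside a partition; finding what holds it open, then rebooting, is usually the path (device name taken from the log above):

# What is holding the device open? (mounted filesystems, swap, open handles)
mount | grep sda3
swapon -s
sudo lsof /dev/sda3

# Unmount or disable whatever uses it, then ask the kernel to re-read:
sudo partprobe /dev/sda3

# If the kernel still refuses, reboot as the error message suggests.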
Re: [ceph-users] Fwd: Error zapping the disk
Hi Sakhi,

I ran into this problem before, on Ubuntu 14.04 with kernel 3.13.0-24-generic. In the end I used fdisk on /dev/sdX, deleted all the partitions, and rebooted. Maybe you can try that.

Best wishes,
Mika

2014-10-29 17:13 GMT+08:00 Sakhi Hadebe <shad...@csir.co.za>:
> Hi Support,
> Can someone please help me with the below error so I can proceed with my
> cluster installation? [...]
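For completeness, a rough sketch of the sequence Mika describes, keeping his /dev/sdX placeholder for the disk to be zapped (this destroys all data on that disk):

# Interactively: run fdisk /dev/sdX, press 'd' for each partition, then 'w'.
# A non-interactive equivalent is sgdisk from the gdisk package, which also
# clears GPT structures that fdisk may not touch:
sudo sgdisk --zap-all /dev/sdX

# Reboot so the kernel picks up the cleared partition table:
sudo reboot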