[ceph-users] ceph mount error

2015-06-11 Thread
Hi,
My ceph health is OK, and now I want to build a filesystem, following the
CEPH FS QUICK START guide:
http://ceph.com/docs/master/start/quick-cephfs/
However, I get an error when I run the command
mount -t ceph 192.168.1.105:6789:/ /mnt/mycephfs
The error is: mount error 22 = Invalid argument
I have checked the manual, but I still don't know how to solve it.
I am looking forward to your reply!
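
A common cause of mount error 22 on a cluster with cephx authentication
enabled is mounting without a client name and secret. A sketch following
the quick-start guide (the /etc/ceph/admin.secret path is an assumption;
the file holds only the key string from ceph.client.admin.keyring):

mount -t ceph 192.168.1.105:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret

If cephx is disabled, checking dmesg after the failed mount usually shows
the kernel client's actual complaint.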



[ceph-users] Error in sys.exitfunc

2015-06-11 Thread
OS:  CentOS release 6.6 (Final)
kernel : 3.10.77-1.el6.elrepo.x86_64
Installed:   ceph-deploy.noarch 0:1.5.23-0
Dependency Installed:python-argparse.noarch 0:1.2.1-2.el6.centos

I installed ceph-deploy following the manual:
http://ceph.com/docs/master/start/quick-start-preflight/
However, when I run ceph-deploy, the error "Error in sys.exitfunc:"
appears. How can I solve it?

 I found the same error message on the web,
http://www.spinics.net/lists/ceph-devel/msg21388.html , but I cannot
find a way to solve this problem.

 I am looking forward to your reply!
 Best wishes!

zhongbo


error message:

[root@node1 ~]# ceph-deploy
usage: ceph-deploy [-h] [-v | -q] [--version] [--username USERNAME]
                   [--overwrite-conf] [--cluster NAME] [--ceph-conf CEPH_CONF]
                   COMMAND ...

Easy Ceph deployment

       -^-
      /   \
      |O o|  ceph-deploy v1.5.23
      ).-.(
     '/|||\`
     | '|` |
       '|`

Full documentation can be found at: http://ceph.com/ceph-deploy/docs

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose         be more verbose
  -q, --quiet           be less verbose
  --version             the current installed version of ceph-deploy
  --username USERNAME   the username to connect to the remote host
  --overwrite-conf      overwrite an existing conf file on remote host (if
                        present)
  --cluster NAME        name of the cluster
  --ceph-conf CEPH_CONF
                        use (or reuse) a given ceph.conf file

commands:
  COMMAND               description
    new                 Start deploying a new cluster, and write a
                        CLUSTER.conf and keyring for it.
    install             Install Ceph packages on remote hosts.
    rgw                 Deploy ceph RGW on remote hosts.
    mds                 Deploy ceph MDS on remote hosts.
    mon                 Deploy ceph monitor on remote hosts.
    gatherkeys          Gather authentication keys for provisioning new
                        nodes.
    disk                Manage disks on a remote host.
    osd                 Prepare a data disk on remote host.
    admin               Push configuration and client.admin key to a remote
                        host.
    config              Push configuration file to a remote host.
    uninstall           Remove Ceph packages from remote hosts.
    purgedata           Purge (delete, destroy, discard, shred) any Ceph data
                        from /var/lib/ceph
    purge               Remove Ceph packages from remote hosts and purge all
                        data.
    forgetkeys          Remove authentication keys from the local directory.
    pkg                 Manage packages on remote hosts.
    calamari            Install and configure Calamari nodes
Error in sys.exitfunc:


Re: [ceph-users] Error in sys.exitfunc

2015-06-11 Thread


I look forward to hearing from you soon. 
Best Regards!
zhongbo 



On 2015-05-13 21:21:23, Alfredo Deza ad...@redhat.com wrote:


- Original Message -
From: Patrick McGarry pmcga...@redhat.com
To: 张忠波 zhangzhongbo2...@163.com, Ceph-User ceph-us...@ceph.com
Cc: community commun...@ceph.com
Sent: Tuesday, May 12, 2015 1:23:36 PM
Subject: Re: [ceph-users] Error in sys.exitfunc

Moving this to ceph-user where it belongs for eyeballs and responses.


On Mon, May 11, 2015 at 10:39 PM, 张忠波 zhangzhongbo2...@163.com wrote:
 Hi,
   When I run ceph-deploy, the error "Error in sys.exitfunc:" appears.
 I found the same error message on the web,
 http://www.spinics.net/lists/ceph-devel/msg21388.html , but I cannot find
 a way to solve this problem.

It is not a problem; it is just an artifact of the poor way Python has to
terminate threads.

This is safe to ignore.
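
For illustration, a minimal Python 2 sketch (not from this thread; the
function name is made up): if the hook installed as sys.exitfunc raises
while the interpreter is shutting down, Python 2 prints the
"Error in sys.exitfunc:" banner on stderr. With background threads still
alive at teardown, as in ceph-deploy, the traceback itself is often lost,
leaving only the bare message.

#!/usr/bin/env python2
import sys

def exitfunc():
    # Runs during interpreter shutdown; any exception raised here is
    # reported under the "Error in sys.exitfunc:" banner on stderr.
    raise RuntimeError("raised during interpreter shutdown")

sys.exitfunc = exitfunc  # the atexit module installs its hook the same way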


 CentOS release 6.6 (Final)

 Python 2.6.6

 ceph-deploy v1.5.19

 Linux ceph1 3.10.77-1.el6.elrepo.x86_64


 I am looking forward to your reply!
 Best wishes!

 zhongbo

 error message:
 [root@ceph1 leadorceph]# ceph-deploy new mdsnode
 [ceph_deploy.conf][DEBUG ] found configuration file at:
 /root/.cephdeploy.conf
 [ceph_deploy.cli][INFO  ] Invoked (1.5.23): /usr/bin/ceph-deploy new mdsnode
 [ceph_deploy.new][DEBUG ] Creating new cluster named ceph
 [ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
 [mdsnode][DEBUG ] connected to host: ceph1
 [mdsnode][INFO  ] Running command: ssh -CT -o BatchMode=yes mdsnode
 [ceph_deploy.new][WARNIN] could not connect via SSH
 [ceph_deploy.new][INFO  ] will connect again with password prompt
 root@mdsnode's password:
 [mdsnode][DEBUG ] connected to host: mdsnode
 [mdsnode][DEBUG ] detect platform information from remote host
 [mdsnode][DEBUG ] detect machine type
 [mdsnode][WARNIN] .ssh/authorized_keys does not exist, will skip adding keys
 root@mdsnode's password:
 root@mdsnode's password:
 [mdsnode][DEBUG ] connected to host: mdsnode
 [mdsnode][DEBUG ] detect platform information from remote host
 [mdsnode][DEBUG ] detect machine type
 [mdsnode][DEBUG ] find the location of an executable
 [mdsnode][INFO  ] Running command: /sbin/ip link show
 [mdsnode][INFO  ] Running command: /sbin/ip addr show
 [mdsnode][DEBUG ] IP addresses found: ['192.168.72.70']
 [ceph_deploy.new][DEBUG ] Resolving host mdsnode
 [ceph_deploy.new][DEBUG ] Monitor mdsnode at 192.168.72.70
 [ceph_deploy.new][DEBUG ] Monitor initial members are ['mdsnode']
 [ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.72.70']
 [ceph_deploy.new][DEBUG ] Creating a random mon key...
 [ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
 [ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
 Error in sys.exitfunc:
-- 

Best Regards,

Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com  ||  http://community.redhat.com
@scuttlemonkey || @ceph


[ceph-users] ceph-deploy osd activate ERROR

2015-05-14 Thread
Hi,
I encountered more problems when I installed ceph.
#1. When I run the command 'ceph-deploy new ceph-0', I get the ceph.conf
file. However, it contains no information about 'osd pool default size'
or 'public network'.
[root@ceph-2 my-cluster]# more ceph.conf
[global]
auth_service_required = cephx
filestore_xattr_use_omap = true
auth_client_required = cephx
auth_cluster_required = cephx
mon_host = 192.168.72.33
mon_initial_members = ceph-0
fsid = 74d682b5-2bf2-464c-8462-740f96bcc525
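
That is expected: 'ceph-deploy new' writes only the basics, and the
quick-start guide has you add those two settings to the [global] section
by hand. A sketch (the values below are only illustrations; pick the
replica count and subnet that match your cluster):

osd pool default size = 2
public network = 192.168.72.0/24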

#2. I ignored problem #1 and continued setting up the Ceph Storage
Cluster, but encountered an error when running the command 'ceph-deploy
osd activate ceph-2:/mnt/sda'.
I followed the manual:
http://ceph.com/docs/master/start/quick-ceph-deploy/
error message:
[root@ceph-0 my-cluster]#ceph-deploy osd prepare ceph-2:/mnt/sda
[ceph_deploy.conf][DEBUG ] found configuration file at:
/root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.23): /usr/bin/ceph-deploy osd
prepare ceph-2:/mnt/sda
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-2:/mnt/sda:
[ceph-2][DEBUG ] connected to host: ceph-2
[ceph-2][DEBUG ] detect platform information from remote host
[ceph-2][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-2
[ceph-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-2][INFO  ] Running command: udevadm trigger --subsystem-match=block
--action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-2 disk /mnt/sda journal None
activate False
[ceph-2][INFO  ] Running command: ceph-disk -v prepare --fs-type xfs
--cluster ceph -- /mnt/sda
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd
--cluster=ceph --show-config-value=fsid
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd
--cluster=ceph --show-config-value=osd_journal_size
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[ceph-2][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /mnt/sda
[ceph-2][INFO  ] checking OSD status...
[ceph-2][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-2 is now ready for osd use.
Error in sys.exitfunc:
[root@ceph-0 my-cluster]# ceph-deploy osd activate  ceph-2:/mnt/sda
[ceph_deploy.conf][DEBUG ] found configuration file at:
/root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.23): /usr/bin/ceph-deploy osd
activate ceph-2:/mnt/sda
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-2:/mnt/sda:
[ceph-2][DEBUG ] connected to host: ceph-2
[ceph-2][DEBUG ] detect platform information from remote host
[ceph-2][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] activating host ceph-2 disk /mnt/sda
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph-2][INFO  ] Running command: ceph-disk -v activate --mark-init
sysvinit --mount /mnt/sda
[ceph-2][WARNIN] DEBUG:ceph-disk:Cluster uuid is
af23707d-325f-4846-bba9-b88ec953be80
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd
--cluster=ceph --show-config-value=fsid
[ceph-2][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[ceph-2][WARNIN] DEBUG:ceph-disk:OSD uuid is
ca9f6649-b4b8-46ce-a860-1d81eed4fd5e
[ceph-2][WARNIN] DEBUG:ceph-disk:Allocating OSD id...
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster
ceph --name client.bootstrap-osd --keyring
/var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise
ca9f6649-b4b8-46ce-a860-1d81eed4fd5e
[ceph-2][WARNIN] 2015-05-14 17:37:10.988914 7f373bd34700  0 librados:
client.bootstrap-osd authentication error (1) Operation not permitted
[ceph-2][WARNIN] Error connecting to cluster: PermissionError
[ceph-2][WARNIN] ceph-disk: Error: ceph osd create failed: Command
'/usr/bin/ceph' returned non-zero exit status 1:
[ceph-2][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v
activate --mark-init sysvinit --mount /mnt/sda

Error in sys.exitfunc:

I look forward to hearing from you soon.
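
The activate step fails because client.bootstrap-osd cannot authenticate
to the monitors. A hedged sketch of one common check (hostnames taken from
the transcript above; standard ceph-deploy key locations assumed):

[root@ceph-0 my-cluster]# ceph-deploy gatherkeys ceph-0
[root@ceph-0 my-cluster]# ceph auth get client.bootstrap-osd
[root@ceph-0 my-cluster]# ssh ceph-2 cat /var/lib/ceph/bootstrap-osd/ceph.keyring

If the key shown by 'ceph auth get' differs from the one in the keyring on
ceph-2, copy the gathered ceph.bootstrap-osd.keyring to
ceph-2:/var/lib/ceph/bootstrap-osd/ceph.keyring and re-run the activate
command.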