ceph-deploy 1.3.3 was just released; you should not see this error with the
new version.
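For context, the tracker issue referenced below (http://tracker.ceph.com/issues/6701) is the classic EXDEV failure: os.rename() cannot move a file across different file systems and raises OSError with Errno 18. A minimal sketch of the usual workaround, using a hypothetical helper name `move_file` (this is illustrative, not ceph-deploy's actual code):

```python
import errno
import os
import shutil


def move_file(src, dst):
    """Move src to dst, falling back to copy+remove when the rename
    would cross a file-system boundary (illustrative helper only)."""
    try:
        os.rename(src, dst)  # fast path; fails with EXDEV across devices
    except OSError as e:
        if e.errno != errno.EXDEV:
            raise
        # copy2 preserves metadata and works across file systems
        shutil.copy2(src, dst)
        os.remove(src)
```

This is essentially what shutil.move() does internally, which is why fixes for this class of bug typically replace os.rename() with shutil.move().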




On Tue, Nov 26, 2013 at 9:56 AM, Alfredo Deza <[email protected]> wrote:

>
>
>
> On Tue, Nov 26, 2013 at 9:19 AM, upendrayadav.u <
> [email protected]> wrote:
>
>> Dear Team,
>> After executing *ceph-deploy -v osd prepare ceph-node2:/home/ceph/osd1*
>>
>> I get the following error:
>>
>> [ceph-node2][DEBUG ] connected to host: ceph-node2
>> [ceph-node2][DEBUG ] detect platform information from remote host
>> [ceph-node2][DEBUG ] detect machine type
>> [ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
>> [ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node2
>> [ceph-node2][DEBUG ] write cluster configuration to
>> /etc/ceph/{cluster}.conf
>> [ceph-node2][WARNIN] osd keyring does not exist yet, creating one
>> [ceph-node2][DEBUG ] create a keyring file
>> [ceph_deploy.osd][ERROR ] OSError: [Errno 18] Invalid cross-device link
>> [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
>>
>
> You are hitting a bug in ceph-deploy where it fails to copy files across
> different file systems. This is fixed and should
> be released soon: http://tracker.ceph.com/issues/6701
>
>
>
>>
>> and same error for *ceph-deploy -v osd prepare
>> ceph-node3:/home/ceph/osd2*
>> ===============================================================
>>
>> The 1st OSD was prepared successfully: *ceph-deploy -v osd prepare
>> ceph-node1:/home/ceph/osd0*
>> [ceph-node1][DEBUG ] connected to host: ceph-node1
>> [ceph-node1][DEBUG ] detect platform information from remote host
>> [ceph-node1][DEBUG ] detect machine type
>> [ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
>> [ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node1
>> [ceph-node1][DEBUG ] write cluster configuration to
>> /etc/ceph/{cluster}.conf
>> [ceph-node1][INFO  ] Running command: sudo udevadm trigger
>> --subsystem-match=block --action=add
>> [ceph_deploy.osd][DEBUG ] Preparing host ceph-node1 disk /home/ceph/osd0
>> journal None activate False
>> [ceph-node1][INFO  ] Running command: sudo ceph-disk-prepare --fs-type
>> xfs --cluster ceph -- /home/ceph/osd0
>> [ceph_deploy.osd][DEBUG ] Host ceph-node1 is now ready for osd use.
>>
>> *********************
>> I have 1 mon and 3 OSDs, where the monitor and the 1st OSD share the same machine:
>>
>> mon and osd0 -       ceph-node1
>> osd1 -                     ceph-node2
>> osd2 -                     ceph-node3
>>
>> ceph-deploy - admin-node
>>
>> ====================================
>> Please help me solve this problem. Thanks for your precious time
>> and kind attention.
>>
>>
>> *Regards,*
>> *Upendra Yadav*
>> *DFS*
>>
>>
>> _______________________________________________
>> ceph-users mailing list
>> [email protected]
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>