[Re-added the list.]

I believe you'll find everything you need at
http://ceph.com/docs/master/cephfs/createfs/
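
In case it saves some digging, the gist of that page is roughly the
following (the pool names and PG counts are just placeholders; pick
values that make sense for your cluster):

  # create a data pool and a metadata pool for the filesystem
  ceph osd pool create cephfs_data 64
  ceph osd pool create cephfs_metadata 64
  # then create the filesystem on top of those pools
  ceph fs new cephfs cephfs_metadata cephfs_data

Once that's done (and an MDS daemon is running), ceph -s should show an
mdsmap line and ceph-fuse should be able to mount the filesystem.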
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Tue, Aug 26, 2014 at 1:25 PM, LaBarre, James (CTR) A6IT
<james.laba...@cigna.com> wrote:
> So is there a link for documentation on the newer versions? (We're doing
> evaluations at present, so I wanted to work with newer versions, since that
> would be closer to what we would end up using.)
>
>
> -----Original Message-----
> From: Gregory Farnum [mailto:g...@inktank.com]
> Sent: Tuesday, August 26, 2014 4:05 PM
> To: Sean Crosby
> Cc: LaBarre, James (CTR) A6IT; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Ceph-fuse fails to mount
>
> In particular, we changed things post-Firefly so that the filesystem isn't
> created automatically. You'll need to set it up (and its pools, etc.)
> explicitly before you can use it.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Mon, Aug 25, 2014 at 2:40 PM, Sean Crosby <richardnixonsh...@gmail.com> 
> wrote:
>> Hi James,
>>
>>
>> On 26 August 2014 07:17, LaBarre, James (CTR) A6IT
>> <james.laba...@cigna.com>
>> wrote:
>>>
>>> [ceph@first_cluster ~]$ ceph -s
>>>     cluster e0433b49-d64c-4c3e-8ad9-59a47d84142d
>>>      health HEALTH_OK
>>>      monmap e1: 1 mons at {first_cluster=10.25.164.192:6789/0}, election epoch 2, quorum 0 first_cluster
>>>      mdsmap e4: 1/1/1 up {0=first_cluster=up:active}
>>>      osdmap e13: 3 osds: 3 up, 3 in
>>>       pgmap v480: 192 pgs, 3 pools, 1417 MB data, 4851 objects
>>>             19835 MB used, 56927 MB / 76762 MB avail
>>>                  192 active+clean
>>
>>
>> This cluster has an MDS. It should mount.
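>> For reference, mounting it with ceph-fuse should be something along these
>> lines (the mount point is just an example; the monitor address comes from
>> the monmap line above):
>>
>>   sudo mkdir -p /mnt/cephfs
>>   sudo ceph-fuse -m 10.25.164.192:6789 /mnt/cephfs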
>>
>>>
>>> [ceph@second_cluster ~]$ ceph -s
>>>     cluster 06f655b7-e147-4790-ad52-c57dcbf160b7
>>>      health HEALTH_OK
>>>      monmap e1: 1 mons at {second_cluster=10.25.165.91:6789/0}, election epoch 1, quorum 0 cilsdbxd1768
>>>      osdmap e16: 7 osds: 7 up, 7 in
>>>       pgmap v539: 192 pgs, 3 pools, 0 bytes data, 0 objects
>>>             252 MB used, 194 GB / 194 GB avail
>>>                  192 active+clean
>>
>>
>> No mdsmap line for this cluster, so the filesystem won't mount. Have you
>> added an MDS for this cluster, or has the MDS daemon died? You'll have to
>> get the mdsmap line to show up before it will mount.
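>>
>> If an MDS just hasn't been deployed yet, something along these lines
>> should get one going (substitute whichever node you want to run the MDS
>> on; I'm using the node name from your prompt purely as an example):
>>
>>   ceph-deploy mds create second_cluster
>>   ceph mds stat    # check that the new daemon shows up here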
>>
>> Sean