On Tue, Apr 12, 2016 at 3:56 PM, Gregory Farnum <gfar...@redhat.com> wrote:
> On Tue, Apr 12, 2016 at 12:20 PM, Nate Curry <cu...@mosaicatm.com> wrote:
> > I am seeing an issue with cephfs where I am unable to write changes to
> > the filesystem, though I can unmount and remount
> > the filesystem without any issues. It also reboots and mounts no problem.
> > I am not sure what this could be caused by. Any ideas?
*Nate Curry*
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
You are correct sir. I modified the user capabilities by adding the mds
cap with the 'allow r' permission using the following command.
*ceph auth caps client.cephfs mon 'allow r' mds 'allow r' osd 'allow rwx
pool=cephfs_metadata,allow rwx pool=cephfs_data'*
Thanks,
*Nate Curry*
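For anyone hitting the same problem, a minimal sketch of the fix above plus a quick verification step. The client and pool names are the ones used in this thread; the commands are printed rather than executed so the sketch is safe to read through before applying it against a live cluster.

```shell
# Sketch: grant the cephfs client read access to the MDS alongside its
# existing mon/osd caps, then confirm what the cluster recorded.
CLIENT="client.cephfs"
MON_CAPS="allow r"
MDS_CAPS="allow r"
OSD_CAPS="allow rwx pool=cephfs_metadata,allow rwx pool=cephfs_data"

# Drop the echos to run the commands for real against your cluster.
echo "ceph auth caps $CLIENT mon '$MON_CAPS' mds '$MDS_CAPS' osd '$OSD_CAPS'"
echo "ceph auth get $CLIENT   # verify the caps took effect"
```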
On Thu, Apr
pools:
*client.cephfs
key: #
caps: [mon] allow r
caps: [osd] allow rwx pool=datastore_metadata,allow rwx
pool=datastore_data*
Could someone tell me what else I would need to grant this user in order
to be able to mount the filesystem?
Thanks,
That was it. I had recently rebuilt the OSD hosts and completely forgot to
configure the firewall.
Thanks,
*Nate Curry*
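For reference on the firewall fix: Ceph monitors listen on 6789/tcp and OSD daemons bind within the 6800-7300/tcp range by default. A firewalld sketch for an OSD host (the `public` zone is an assumption; adapt for iptables if that is what your hosts run). The commands are printed rather than executed so this stays a reviewable sketch.

```shell
# Sketch: open the default Ceph ports with firewalld on an OSD host.
# Monitors listen on 6789/tcp; OSDs bind within 6800-7300/tcp.
MON_PORT="6789/tcp"
OSD_PORTS="6800-7300/tcp"

# Drop the echos to apply for real (requires root and a running firewalld).
echo "firewall-cmd --zone=public --add-port=$OSD_PORTS --permanent"
echo "firewall-cmd --zone=public --add-port=$MON_PORT --permanent"
echo "firewall-cmd --reload"
```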
26   down   0   1.00000
*Nate Curry*
Yes that was what I meant. Thanks. Was that in a production environment?
Nate Curry
On Jul 10, 2015 11:21 AM, Quentin Hartman qhart...@direwolfdigital.com
wrote:
You mean the hardware config? They are older Core2-based servers with 4GB
of RAM. Nothing special. I have one running mon and rgw
What was your monitor node's configuration when you had multiple ceph
daemons running on them?
*Nate Curry*
IT Manager
ISSM
*Mosaic ATM*
mobile: 240.285.7341
office: 571.223.7036 x226
cu...@mosaicatm.com
On Thu, Jul 9, 2015 at 5:36 PM, Quentin Hartman
qhart...@direwolfdigital.com wrote:
I
supposed to straddle both the ceph only network and the storage network or
just in the ceph network?
Another question is can I run multiple things on the monitor nodes? Like
the RADOS GW and the MDS?
Thanks,
*Nate Curry*
64 GB of memory per monitor? I don't think that would scale well past a
certain point, so I am thinking that is not correct. Can I get some
clarification?
Thanks,
*Nate Curry*
Are you using the 4TB disks for the journal?
*Nate Curry*
IT Manager
ISSM
*Mosaic ATM*
mobile: 240.285.7341
office: 571.223.7036 x226
cu...@mosaicatm.com
On Thu, Jul 2, 2015 at 12:16 PM, Shane Gibson shane_gib...@symantec.com
wrote:
I'd def be happy to share what numbers I can get out
hot spares for the
6TB drives and 2 drives for the OS. I was thinking of 400GB SSD drives but
am wondering if that is too much. Any informed opinions would be
appreciated.
Thanks,
*Nate Curry*
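On the SSD sizing question: the upstream rule of thumb for filestore journals is `osd journal size = 2 * (expected throughput * filestore max sync interval)`. With spinning drives pushing on the order of 100 MB/s and the default 5 s sync interval, that works out to roughly 2 * 100 * 5 = 1000 MB per journal, so even with several journals sharing one device a 400GB SSD is far more than the journals themselves need (though the spare capacity does help SSD endurance). A hedged ceph.conf fragment with a comfortably padded value:

```ini
[osd]
# Rule of thumb: 2 * (throughput in MB/s * filestore max sync interval in s).
# ~100 MB/s spinner * 5 s default interval -> ~1000 MB; 10 GB adds headroom.
osd journal size = 10240
filestore max sync interval = 5
```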