> I'm trying to understand the nuts and bolts of EC / CephFS
> We're running an EC4+2 pool on top of 72 x 7.2K rpm 10TB drives. Pretty
> slow bulk / archive storage.
Ok, did some more searching and found this:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021642.html.
Which to
Hi List.
I'm trying to understand the nuts and bolts of EC / CephFS
We're running an EC4+2 pool on top of 72 x 7.2K rpm 10TB drives. Pretty
slow bulk / archive storage.
# getfattr -n ceph.dir.layout /mnt/home/cluster/mysqlbackup
getfattr: Removing leading '/' from absolute path names
# file:
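For reference, a directory layout can be pointed at an EC data pool with the
layout vxattrs once that pool has been added to the filesystem. A minimal
sketch, with the pool name cephfs_ec42 and filesystem name cephfs as
placeholders (EC data pools also need overwrites enabled):
# ceph osd pool set cephfs_ec42 allow_ec_overwrites true
# ceph fs add_data_pool cephfs cephfs_ec42
# setfattr -n ceph.dir.layout.pool -v cephfs_ec42 /mnt/home/cluster/mysqlbackup
# getfattr -n ceph.dir.layout /mnt/home/cluster/mysqlbackup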
Hi all,
I know that it seems like a stupid question, but I have some concerns
about this; maybe someone can clear things up for me.
I read in the official docs that, when I create an rgw server with
'ceph-deploy rgw create', the rgw scripts will automatically create the
rgw system pools.
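The pools rgw creates on first start can simply be listed afterwards; for
example (pool names vary with the release and zone, the ones below are the
usual defaults):
# ceph osd pool ls | grep rgw
On a default setup this typically shows .rgw.root plus default.rgw.control,
default.rgw.meta and default.rgw.log.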
Clients' experience depends on whether, at that very moment, they need to
read from or write to the particular PGs involved in peering.
If their objects are placed in other PGs, then I/O operations shouldn't
be impacted.
If clients were performing I/O ops on the PGs that went into peering,
then they
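One way to check whether a given object falls into an affected PG is to map
it and compare against the PGs currently peering; a sketch, with pool and
object names as placeholders:
# ceph osd map mypool myobject
# ceph pg ls peering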
Dear Cephalopodians,
in some recent threads on this list, I have read about the "knobs":
pglog_hardlimit (false by default, available at least with 12.2.11 and
13.2.5)
bdev_enable_discard (false by default, advanced option, no description)
bdev_async_discard (false by default,
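For reference, pglog_hardlimit is a cluster flag that can be set once all
daemons run a release that supports it, while the bdev discard options are
per-OSD settings; a sketch (the ceph.conf placement is an assumption, check
the release notes for your version first):
# ceph osd set pglog_hardlimit
[osd]
bdev_enable_discard = true
bdev_async_discard = true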
Hello,
your log extract shows that:
2019-02-15 21:40:08 OSD.29 DOWN
2019-02-15 21:40:09 PG_AVAILABILITY warning start
2019-02-15 21:40:15 PG_AVAILABILITY warning cleared
2019-02-15 21:44:06 OSD.29 UP
2019-02-15 21:44:08 PG_AVAILABILITY warning start
2019-02-15 21:44:15 PG_AVAILABILITY warning
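The two short PG_AVAILABILITY windows line up with the PGs mapped to osd.29
re-peering when it goes down and again when it comes back up. To see which
PGs those are, something like:
# ceph pg ls-by-osd 29
# ceph health detail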
I recently replaced failed HDDs and removed them from their respective
buckets as per procedure.
But I’m now facing an issue when trying to place new ones back into the
buckets. I’m getting an error of ‘osd nr not found’ OR ‘file or
directory not found’ OR a command syntax error.
I have been
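For comparison, re-adding a freshly created OSD to its host bucket normally
looks like the following; the osd id, weight and host name here are just
placeholders:
# ceph osd crush add osd.12 9.1 host=storage-node-01
# ceph osd tree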
Currently I am using 'profile rbd' on mon and osd. Is it possible with
the caps to allow a user to
- List rbd images
- get state of images
- write/read to images
etc.
But not allow it to create new images?
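For context, the caps currently granted can be dumped and compared with what
the rbd profiles expand to; a sketch with a placeholder client name:
# ceph auth get client.rbd-user
which for 'profile rbd' typically shows something like
caps mon = "profile rbd"
caps osd = "profile rbd pool=rbd"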
>>There are 10 OSDs in these systems with 96GB of memory in total. We are
>>running with a memory target of 6G right now to make sure there is no
>>leakage. If this runs fine for a longer period we will go to 8GB per OSD,
>>so it will max out at 80GB, leaving 16GB as spare.
Thanks Wido. I send
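For reference, 8GB per OSD corresponds to an osd_memory_target of 8589934592
bytes; one way to apply it, assuming a release with the central config store
(otherwise it goes into ceph.conf under [osd]):
# ceph config set osd osd_memory_target 8589934592
# ceph config get osd.0 osd_memory_target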
### ceph.conf
[global]
fsid = b5e30221-a214-353c-b66b-8c37b4349123
mon host = ceph-mon.service.i.ewcs.ch
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
###
## ceph.ec.conf
[global]
fsid =
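With two config files like this side by side, a client can be pointed at
either cluster explicitly; the paths below are illustrative:
# ceph --conf /etc/ceph/ceph.conf status
# ceph --conf /etc/ceph/ceph.ec.conf status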