Hi Yehuda,
I have used the method in this blog post to pass the header information
http://blog.defunct.ca/2013/10/10/using-the-radosgw-admin-api/ and set the header like
Authorization: AWS {access-key}:{hash-of-header-and-secret}
and the complete command was like
curl -i 'http://s3.linux.com/admin/usage?format=j
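For anyone following along, here is a rough sketch of how that signature can be
built with openssl (the host name and keys below are placeholders, not the real ones):

access_key="<your-access-key>"
secret_key="<your-secret-key>"
resource="/admin/usage"
date=$(date -u "+%a, %d %b %Y %H:%M:%S GMT")
# canonical string for a plain GET: verb, empty content-md5/content-type, date, resource
string_to_sign="GET\n\n\n${date}\n${resource}"
signature=$(echo -en "${string_to_sign}" | openssl dgst -sha1 -hmac "${secret_key}" -binary | base64)
curl -i "http://gateway.example.com${resource}?format=json" \
  -H "Date: ${date}" \
  -H "Authorization: AWS ${access_key}:${signature}"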
On 04/18/2014 10:47 AM, Alexandre DERUMIER wrote:
Thanks Kevin for the full explanation!
cache.writeback=on,cache.direct=off,cache.no-flush=off
I didn't know about the cache options split, thanks.
rbd does, to my knowledge, not use the kernel page cache, so we're safe
from that part. It
When you increase the number of PGs, don't just go to the max value.
Step into it.
You'll want to end up around 2048, so do 400 -> 512, wait for it to
finish, -> 1024, wait, -> 2048.
Also remember that you don't need a lot of PGs if you don't have much
data in the pools. My .rgw.buckets pool
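To make that concrete, a sketch of the stepwise increase (pool name taken from
this thread, adjust to yours; bump pgp_num along with pg_num and wait for the
cluster to settle between steps):

ceph osd pool set .rgw.buckets pg_num 512
ceph osd pool set .rgw.buckets pgp_num 512
# wait for creation/backfill to finish, then repeat with 1024, then 2048
ceph osd pool set .rgw.buckets pg_num 1024
ceph osd pool set .rgw.buckets pgp_num 1024
ceph osd pool set .rgw.buckets pg_num 2048
ceph osd pool set .rgw.buckets pgp_num 2048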
Right now, you still have a chance of recovering this data, if you can
find a copy or fix some of the OSDs. If you're ready to give up on the
data, there is a command to tell Ceph to recreate the missing PGs. Read
up on force_create_pg at
http://ceph.com/docs/master/rados/troubleshooting/trou
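If you do go down that road, the command takes a single pgid, e.g. for one of
the incomplete PGs from this thread (only after you have given up on the data):

ceph pg force_create_pg 14.7c8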
Thanks Kevin for the full explanation!
>>cache.writeback=on,cache.direct=off,cache.no-flush=off
I didn't know about the cache options split, thanks.
>>rbd does, to my knowledge, not use the kernel page cache, so we're safe
>>from that part. It does however honour the cache.direct flag when it
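As a side note, a sketch of the equivalence (the rbd image name is just a
placeholder):

# cache=writeback is shorthand for the three split options
qemu-system-x86_64 ... -drive file=rbd:rbd/vm-disk1,format=raw,cache=writeback
# which should be equivalent to
qemu-system-x86_64 ... -drive file=rbd:rbd/vm-disk1,format=raw,cache.writeback=on,cache.direct=off,cache.no-flush=off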
Hi Cedric,
use the rados command to remove the empty pool name if you need to.
rados rmpool '' '' --yes-i-really-really-mean-it
You won't be able to remove it with the ceph command
JC
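Afterwards you can double check that the empty-named pool is gone with either of:

rados lspools
ceph osd lspools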
On Apr 18, 2014, at 03:51, Cedric Lemarchand wrote:
> Hi,
>
> I am facing a strange behaviour where a pool
Hi all,
One goal of the storage system is to achieve certain durability SLAs, so we
replicate data with multiple copies and check consistency on a regular basis
(e.g. scrubbing). However, replication increases cost (a tradeoff between
cost & durability), and cluster wide consistency check
Hi Ирек Фасихов,
# ls -lsa /var/lib/ceph/osd/cloud-26/current/14.7c8_*/
total 16
0 drwxr-xr-x 2 root root 6 Apr 9 22:49 .
16 drwxr-xr-x 443 root root 12288 Apr 18 18:46 ..
# ls -lsa /var/lib/ceph/osd/cloud-82/current/14.7c8_*/
total 16
0 drwxr-xr-x 2 root root 6 Apr 18 11:25 .
That is rather low; increasing the PG count should help with the data
distribution.
The documentation recommends starting with (100 * (number of OSDs)) / (replicas),
rounded up to the nearest power of two.
https://ceph.com/docs/master/rados/operations/placement-groups/
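For example, with 24 OSDs and 3 replicas (made-up numbers, just to illustrate),
(100 * 24) / 3 = 800, which rounds up to 1024 placement groups.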
On Fri, Apr 18, 2014 at 4:54 AM,
- Message from Tyler Brekke -
Date: Fri, 18 Apr 2014 04:37:26 -0700
From: Tyler Brekke
Subject: Re: [ceph-users] OSD distribution unequally
To: Dan Van Der Ster
Cc: Kenneth Waegeman , ceph-users
How many placement groups do you have in your pool containing the data, and
what is the replication level of that pool?
Looks like you have too few placement groups which is causing the data
distribution to be off.
-Tyler
On Fri, Apr 18, 2014 at 4:12 AM, Dan Van Der Ster wrote:
> ceph osd re
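A quick way to check both values (substitute your data pool's name):

ceph osd pool get <pool> pg_num
ceph osd pool get <pool> size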
These pools are created automatically when the S3 gateway (ceph-radosgw)
starts. By default, your configuration file indicates the number of pgs =
333, but that is a lot for your configuration.
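Assuming that value comes from the usual ceph.conf setting, lowering it before
the gateway creates its pools would look something like this (the numbers are
only an example):

[global]
osd pool default pg num = 64
osd pool default pgp num = 64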
2014-04-18 15:28 GMT+04:00 Cedric Lemarchand :
> Hi,
>
> On 18/04/2014 13:14, Ирек Фасихов wrote:
>
> S
Hi,
On 18/04/2014 13:14, Ирек Фасихов wrote:
> Show command please: ceph osd tree.
Sure :
root@node1:~# ceph osd tree
# id    weight  type name       up/down reweight
-1      3       root default
-2      3       host node1
0       1       osd.0           up      1
1       1       osd.1           up      1
Please show the output of: ceph osd tree.
2014-04-18 14:51 GMT+04:00 Cedric Lemarchand :
> Hi,
>
> I am facing a strange behaviour where a pool is stuck, and I have no idea
> how this pool appeared in the cluster, since I have not played with pool
> creation, *yet*.
>
> # root@node1:~# ceph -s
>
ceph osd reweight-by-utilization
Is that still in 0.79?
I'd start with reweight-by-utilization 200 and then adjust that number down
until you get to 120 or so.
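For example, the progression could look like this (each run only reweights OSDs
whose utilization is above the given percentage of the cluster average):

ceph osd reweight-by-utilization 200
ceph osd reweight-by-utilization 150
ceph osd reweight-by-utilization 120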
Cheers, Dan
On Apr 18, 2014 12:49 PM, Kenneth Waegeman wrote:
Hi,
Some osds of our cluster filled up:
health HEALTH_ERR 1 full
Is there any data in:
ls -lsa /var/lib/ceph/osd/ceph-82/current/14.7c8_*/
ls -lsa /var/lib/ceph/osd/ceph-26/current/14.7c8_*/
2014-04-18 14:36 GMT+04:00 Ta Ba Tuan :
> Hi Ирек Фасихов
>
> I sent it to you :D,
> Thank you!
>
> { "state": "incomplete",
> "epoch": 42880,
> "up": [
>
Hi,
I am facing a strange behaviour where a pool is stuck, and I have no idea
how this pool appeared in the cluster, since I have not played with
pool creation, *yet*.
# root@node1:~# ceph -s
cluster 1b147882-722c-43d8-8dfb-38b78d9fbec3
health HEALTH_WARN 333 pgs degraded; 333 pgs st
Hi,
Some osds of our cluster filled up:
health HEALTH_ERR 1 full osd(s); 4 near full osd(s)
monmap e1: 3 mons at
{ceph001=10.141.8.180:6789/0,ceph002=10.141.8.181:6789/0,ceph003=10.141.8.182:6789/0}, election epoch 96, quorum 0,1,2
ceph001,ceph002,ceph003
mdsmap e93: 1/1/1 up
Hi Ирек Фасихов
I sent it to you :D,
Thank you!
{ "state": "incomplete",
"epoch": 42880,
"up": [
82,
26],
"acting": [
82,
26],
"info": { "pgid": "14.7c8",
"last_update": "0'0",
"last_complete": "0'0",
"log_tail": "0'0",
"last_user_v
Please show the output of 'ceph health detail'.
2014-04-18 12:48 GMT+04:00 Ta Ba Tuan :
> Yes, I restarted all ceph-osds (22,23,82). But:
>
> cluster
> health HEALTH_WARN 75 pgs backfill; 1 pgs backfilling; 76 pgs
> degraded; *5 pgs incomplete*;
> ..
>
>
>
> On 04/18/201
Yes, I restarted all ceph-osds (22,23,82). But:
cluster
health HEALTH_WARN 75 pgs backfill; 1 pgs backfilling; 76 pgs
degraded; *5 pgs incomplete*;
..
On 04/18/2014 03:42 PM, Ирек Фасихов wrote:
Did you restart the OSDs on all disks that hold your unfinished PGs? (22,23,
Did you restart the OSDs on all disks that hold your unfinished PGs? (22,23,82)
2014-04-18 12:35 GMT+04:00 Ta Ba Tuan :
> Thanks Ирек Фасихов for your reply.
> I restarted the OSDs that contain incomplete PGs, but it still doesn't work :(
>
>
>
> On 04/18/2014 03:16 PM, Ирек Фасихов wrote:
>
> Ceph detects that a pla
On Thu, 17 Apr 2014 08:14:04 -0500 John-Paul Robinson wrote:
> So in the mean time, are there any common work-arounds?
>
> I'm assuming monitoring imageused/imagesize ratio and if it's greater
> than some tolerance create a new image and move file system content over
> is an effective, if crude a
Thanks Ирек Фасихов for your reply.
I restarted the OSDs that contain incomplete PGs, but it still doesn't work :(
On 04/18/2014 03:16 PM, Ирек Фасихов wrote:
Ceph detects that a placement group is missing a necessary period of
history from its log. If you see this state, report a bug, and try to
start any fa
Ceph detects that a placement group is missing a necessary period of
history from its log. If you see this state, report a bug, and try to start
any failed OSDs that may contain the needed information.
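On the sysvinit-style setups common with this release, starting a possibly
failed OSD would look roughly like this, run on the node that hosts it (the
OSD id is just an example from this thread):

sudo /etc/init.d/ceph start osd.22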
2014-04-18 12:15 GMT+04:00 Ирек Фасихов :
> Oh, sorry, confused with inconsistent. :)
>
>
> 20
Oh, sorry, confused with inconsistent. :)
2014-04-18 12:13 GMT+04:00 Ирек Фасихов :
> You need to repair the PG. This is the first sign that your hard drive is
> failing.
> ceph pg repair 14.a5a
> ceph pg repair 14.aa8
>
>
> 2014-04-18 12:09 GMT+04:00 Ta Ba Tuan :
>
>> Dear everyone,
>>
>>
You need to repair the PG. This is the first sign that your hard drive is
failing.
ceph pg repair 14.a5a
ceph pg repair 14.aa8
2014-04-18 12:09 GMT+04:00 Ta Ba Tuan :
> Dear everyone,
>
> I lost 2 OSDs and my '.rgw.buckets' pool is using 2 replicas, therefore it
> has some incomplete PGs
>
Dear everyone,
I lost 2 OSDs and my '.rgw.buckets' pool is using 2 replicas,
therefore it has some incomplete PGs
cluster
health HEALTH_WARN 88 pgs backfill; 1 pgs backfilling; 89 pgs
degraded; 5 pgs incomplete;
14.aa8   39930   0   0   1457965487