Thanks a lot guys.
Best,
*German*
2016-03-24 15:55 GMT-03:00 Sean Redmond :
> Hi German,
>
> For data to be split over the racks, you should set the CRUSH rule to
> 'step chooseleaf firstn 0 type rack' instead of 'step chooseleaf firstn 0
> type host'.
>
> Thanks
>
> On Wed, Mar 23, 2016 at
Hello,
On Thu, 24 Mar 2016 10:11:09 +0100 Jacek Jarosiewicz wrote:
> Hi!
>
> I have a problem with the osds getting full on our cluster.
>
> I've read all the topics on this list on how to deal with that, but I
> have a few questions.
>
"All" is probably a misnomer here, your situation isn't
Is your Ceph cluster stuck in a recovery state?
Did you try the commands "ceph pg repair" or "ceph pg query" to
trace its state?
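A minimal sketch of that inspect-then-repair sequence (pg 4.438 is the id mentioned later in this thread; substitute your own, and note repair should only be run on PGs reported inconsistent):

```
# Find the affected PG ids
ceph health detail
# Query the internal state of one PG
ceph pg 4.438 query
# Ask the primary OSD to repair it if it is inconsistent
ceph pg repair 4.438
```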
2016-03-24 22:36 GMT+08:00 yang sheng :
> Hi all,
>
> I am testing Ceph right now using 4 servers with 8 OSDs (all OSDs are
> up and in). I have 3 pools in my cluster (image
Hello,
On Thu, 24 Mar 2016 13:44:17 +0100 (CET) Erik Schwalbe wrote:
> Hi,
>
> I have a pg calc question.
>
> https://ceph.com/pgcalc/
>
> We have 45 OSD's.
> 30 OSD's as SSD's and 15 as normal SATA disks.
>
> 2 rulesets:
> ssd and sata
>
Firstly you treat this as what it is, 2 separa
Hi German,
For data to be split over the racks, you should set the CRUSH rule to
'step chooseleaf firstn 0 type rack' instead of 'step chooseleaf firstn 0
type host'.
Thanks
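In a decompiled CRUSH map, such a rule would look roughly like this (the rule name and ruleset number here are hypothetical; only the chooseleaf type is what matters):

```
rule replicated_rack {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type rack
        step emit
}
```

With `type rack`, CRUSH picks each replica's leaf from a different rack bucket instead of merely a different host.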
On Wed, Mar 23, 2016 at 3:50 PM, German Anders wrote:
> Hi all,
>
> I had a question, I'm in the middle of a new ceph
Space on hosts in rack2 does not add up to cover the space in rack1. After
enough data is written to the cluster, all PGs on rack2 would be
allocated and the cluster won't be able to find a free PG to map new
data to for the 3rd replica.
Bottom line: spread your big disks to all 4 hosts, or add some mo
Hey cephers,
Just a reminder, if you hadn’t already noticed, there will be no Ceph
Tech Talk today due to a last minute speaker cancellation. We’ll see
you next month live from OpenStack! Thanks.
--
Best Regards,
Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com || http://c
Hi all,
I am testing Ceph right now using 4 servers with 8 OSDs (all OSDs are
up and in). I have 3 pools in my cluster (image pool, volume pool and
default rbd pool); both the image and volume pools have replication size = 3.
Based on the pg equation, there are 448 pgs in my cluster.
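The "pg equation" referred to here is presumably the pgcalc heuristic: roughly 100 PGs per OSD, divided by the replication size, rounded up to a power of two. A small sketch with the numbers from this thread (8 OSDs, size 3); the 100-PGs-per-OSD target is the usual rule of thumb, not something fixed:

```shell
# pgcalc heuristic: total_pgs ≈ (OSDs * 100) / replica_size,
# then round up to the next power of two.
osds=8
size=3
target_per_osd=100

raw=$(( osds * target_per_osd / size ))   # 800 / 3 = 266

# Round up to the next power of two
pg_num=1
while [ "$pg_num" -lt "$raw" ]; do
  pg_num=$(( pg_num * 2 ))
done

echo "$pg_num"   # 512
```

The result is then spread across the pools in the cluster, which is why the per-pool pg_num values here come out lower.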
$ ceph osd tre
[root@cf01 ceph]# ceph osd pool ls detail
pool 0 'vms' replicated size 2 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 64 pgp_num 64 last_change 67 flags hashpspool stripe_width 0
pool 1 '.rgw.root' replicated size 2 min_size 1 crush_ruleset 0
object_hash rjenkins pg_num 8 pgp_num 8 las
>First of all - the cluster itself is nowhere near being filled (about
>55% data is used), but the osds don't get filled equally.
The most important thing is why your data filled unequally.
I think maybe your pg_num is not set correctly.
Can you paste the output of: ceph osd dump | grep '^pool'
P
Hi,
I have a pg calc question.
https://ceph.com/pgcalc/
We have 45 OSD's.
30 OSD's as SSD's and 15 as normal SATA disks.
2 rulesets:
ssd and sata
We have 6 pools.
data
metadata
ssd
sata
ssd-bench
sata-bench
So some pools point to the same ruleset (OSDs).
What is the best pg
What is your pool size? 304 PGs sounds awfully small for 20 OSDs.
More PGs will help distribute full PGs better.
But with a full or near-full OSD in hand, increasing PGs is a no-no
operation. If you search the list archive, I believe there was a
thread last month or so which provided a walkthrough
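The usual short-term alternative to raising pg_num on a cluster that already has a near-full OSD is reweighting; a sketch (the threshold and osd id below are illustrative):

```
# See how unevenly data is spread across OSDs
ceph osd df
# Lower the weight of any OSD above 120% of average utilization,
# so CRUSH maps fewer PGs to it
ceph osd reweight-by-utilization 120
# Or manually reweight a single full OSD (osd.3 is just an example)
ceph osd reweight 3 0.9
```

Reweighting triggers backfill, so it should be done gradually while watching `ceph -s`.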
disk usage on the full osd is as below. What are the *_TEMP directories
for? How can I make sure which pg directories are safe to remove?
[root@cf04 current]# du -hs *
156G    0.14_head
156G    0.21_head
155G    0.32_head
157G    0.3a_head
155G    0.e_head
156G    0.f_head
40K     10.2_head
4.0K
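Before removing anything by hand, one way to check whether a PG directory still belongs to an OSD is to compare it against what CRUSH currently maps there; a sketch, assuming your Ceph version has `ceph pg ls-by-osd` (the osd id and pg id are examples):

```
# List the PGs that CRUSH currently maps to this OSD
ceph pg ls-by-osd osd.4
# Show which OSDs a particular PG from the listing maps to
ceph pg map 0.14
```

A directory whose PG still maps to the OSD is not safe to remove, whatever its size.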
But I face an unfound object issue,
and the command ceph pg 4.438 mark_unfound_lost revert/delete does not work.
Details: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-March/008453.html
2016-03-23 8:38 GMT+08:00 David Wang :
> Hi zhou,
> put the cluster to HEALTH_OK status first and then
Hi, Mika,
By default, single MDS is used unless you set max_mds to a value bigger than 1.
Creating more than one MDS is okay, as these will by default simply become
standbys.
All the standby MDSs can be down as long as the active MDS (up:active) works.
Thanks,
yang
-- Original
Hi Mika,
As I understand the situation, the important thing is just that you have at
least one MDS running.
You don't have a quorum there, so it's just about making sure that there is
always at least one MDS ready.
I read that, for now, running multiple MDSs active at the same time might
cause problems.
So you
Hi cephers,
If I want to activate more than one MDS node, does Ceph need an odd
number of MDSs?
Another question: if more than one MDS is activated, how many MDSs can be
lost? (If 3 MDS nodes are activated at the same time.)
Best wishes,
Mika
Hi!
I have a problem with the osds getting full on our cluster.
I've read all the topics on this list on how to deal with that, but I
have a few questions.
First of all - the cluster itself is nowhere near being filled (about
55% data is used), but the osds don't get filled equally.
I've t