After googling and digging I found this BUG; why is it not pushed to all branches?
https://github.com/ceph/ceph-ansible/commit/d3b427e16990f9ebcde7575aae367fd7dfe36a8d#diff-34d2eea5f7de9a9e89c1e66b15b4cd0a
On Fri, Jul 20, 2018 at 11:26 PM, Satish Patel wrote:
> My Ceph version is
>
>
My Ceph version is
[root@ceph-osd-02 ~]# ceph -v
ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable)
On Fri, Jul 20, 2018 at 11:24 PM, Satish Patel wrote:
> I am using openstack-ansible with ceph-ansible to deploy my Ceph
> cluster, and here is my config in the yml file
>
I am using openstack-ansible with ceph-ansible to deploy my Ceph
cluster, and here is my config in the yml file:
---
osd_objectstore: bluestore
osd_scenario: lvm
lvm_volumes:
- data: /dev/sdb
- data: /dev/sdc
- data: /dev/sdd
- data: /dev/sde
This is the error I am getting:
TASK [ceph-osd :
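For reference, with osd_scenario: lvm each entry in lvm_volumes is handed to
ceph-volume; the rough manual equivalent (device name here is just an example,
not taken from the error) would be something like:

ceph-volume lvm create --bluestore --data /dev/sdb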
Hello Ceph Users,
We added more SSD storage to our Ceph cluster last night. We added 4 x 1TB
drives, and the available space went from 1.6TB to 0.6TB (in `ceph df` for the
SSD pool).
I would assume that the weight needs to be changed but I didn't think I would
need to? Should I change
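As a sketch of how one could inspect and adjust weights (the OSD id and the
weight below are placeholders, not values from this cluster):

ceph osd df tree                       # per-OSD weight, reweight and utilization
ceph osd crush reweight osd.12 0.909   # adjust the CRUSH weight of one OSD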
Hi Satish,
that really depends completely on your controller.
For what it's worth: We have AVAGO MegaRAID controllers (9361 series).
They can be switched to a "JBOD personality". After doing so and reinitializing
(powercycling),
the cards change PCI-ID and run a different firmware, optimized
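As an illustration only (exact storcli syntax varies by controller model and
firmware revision, so treat these commands as assumptions to verify):

storcli /c0 show personality
storcli /c0 set personality=JBOD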
Thanks Brian,
That makes sense, because I was reading the documentation and found you can
either choose RAID or JBOD
On Fri, Jul 20, 2018 at 5:33 PM, Brian : wrote:
> Hi Satish
>
> You should be able to choose different modes of operation for each
> port / disk. Most dell servers will let you do RAID and
Hi Satish
You should be able to choose different modes of operation for each
port / disk. Most dell servers will let you do RAID and JBOD in
parallel.
If you can't do that and can only either turn RAID on or off then you
can use SW RAID for your OS
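A minimal sketch of such a software mirror for the OS disk, assuming /dev/sda2
and /dev/sdb2 are the OS partitions (placeholder names):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2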
On Fri, Jul 20, 2018 at 9:01 PM, Satish Patel
I am getting this error; why is it complaining about the disk even though we
have enough space?
2018-07-20 16:04:58.313331 7f0c047f8ec0 0 set uid:gid to 167:167 (ceph:ceph)
2018-07-20 16:04:58.313350 7f0c047f8ec0 0 ceph version 12.2.7
(3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable), process
Folks,
I have never used JBOD mode before and now I am planning to, so I have a stupid
question: if I switch the RAID controller to JBOD mode,
how will my OS disk get mirrored?
Do I need to use software RAID for the OS disk when I use JBOD mode?
On Fri, Jul 20, 2018 at 7:29 AM, Thode Jocelyn wrote:
> Hi,
>
>
>
> I noticed that in commit
> https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f2553af2487a98023b60efe421f3,
> the ability to specify a cluster name was removed. Is there a reason for
> this removal ?
>
>
>
> Because right
Hi,
I noticed that in commit
https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f2553af2487a98023b60efe421f3,
the ability to specify a cluster name was removed. Is there a reason for this
removal ?
Because right now there is no way to create a Ceph cluster with a
different name
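For context, before that commit something like the following used to work
(cluster and host names here are only examples):

ceph-deploy --cluster mycluster new mon1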
Hello Caspar
That makes a great deal of sense, thank you for elaborating. Am I correct to
assume that if we were to use a k=2, m=2 profile, it would be identical to a
replicated pool (since there would be equal numbers of data and parity
chunks)? Furthermore, how should the proper erasure
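For what it's worth, a k=2/m=2 profile and pool could be created like this
(profile name, pool name and PG counts are just examples):

ceph osd erasure-code-profile set ec22 k=2 m=2 crush-failure-domain=host
ceph osd pool create ecpool 128 128 erasure ec22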
In response to my own questions, I read that you shouldn't separate your
journal / rocksDB from the disks where your data resides, with bluestore. And
the general rule of one core per OSD seems to be unnecessary, since in the
current clusters we've got 4 cores with 5 disks and CPU usage never
Ziggy,
For EC pools: min_size = k+1.
So in your case (m=1) -> min_size is 3, which is the same as the number of
shards. So if ANY shard goes down, IO freezes.
If you choose m=2, min_size will still be 3 but you now have 4 shards (k+m =
4), so you can lose a shard and still remain available.
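You can check and, if needed, change it per pool (the pool name is an example):

ceph osd pool get ecpool min_size
ceph osd pool set ecpool min_size 3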
Caspar,
Thank you for your reply. I’m in all honesty still not clear on what value to
use for min_size. From what I understand, it should be set to the sum of k+m
for erasure coded pools, as it is set by default.
Additionally, could you elaborate why m=2 would be able to sustain a node
Correct, sorry, I had just read the first question and answered too quickly.
As far as I know the available space is "shared" (the space is a combination of
OSD drives and the crushmap) between pools using the same device class, but you
can define a quota for each pool if needed.
ceph osd pool
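For example (pool name and size are placeholders):

ceph osd pool set-quota mypool max_bytes 1099511627776   # 1 TiB quota
ceph osd pool get-quota mypool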
Ziggy,
The default min_size for your pool is 3, so losing ANY single OSD (not even a
whole host) will result in reduced data availability:
https://patchwork.kernel.org/patch/8546771/
Use m=2 to be able to handle a node failure.
Kind regards,
Caspar Smit
Systemengineer
SuperNAS
Hello
I am currently trying to find out if Ceph can sustain the loss of a full host
(containing 4 OSDs) in a default erasure coded pool (k=2, m=1). We
currently have a production EC pool with the default erasure profile, but would
like to make sure the data on this pool remains accessible
No way am I an expert, so let's see what other folks suggest, but I would say
go with Intel if you only care about performance.
Sent from my iPhone
> On Jul 19, 2018, at 12:54 PM, Steven Vacaroaia wrote:
>
> Hi,
> I would appreciate any advice ( with arguments , if possible) regarding the
>
I had a similar question a while ago; maybe you want to read these.
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg46768.html
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg46799.html
-Original Message-
From: Satish Patel [mailto:satish@gmail.com]
Sent:
What is the use of LVM in bluestore? I have seen people using LVM but don't
know why.
Sent from my iPhone
> On Jul 19, 2018, at 10:00 AM, Eugen Block wrote:
>
> Hi,
>
> if you have SSDs for RocksDB, you should provide that in the command
> (--block.db $DEV), otherwise Ceph will use the one
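A sketch of what that could look like, assuming /dev/sdc is the data disk and
/dev/nvme0n1p1 the SSD partition for RocksDB (both names are placeholders):

ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p1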
That is the used column, is it not?
[@c01 ~]# ceph df
GLOBAL:
    SIZE    AVAIL    RAW USED    %RAW USED
    G       G        G           60.78
POOLS:
    NAME           ID    USED    %USED    MAX AVAIL    OBJECTS
    iscsi-images   16
Hi Sebastien,
Your command(s) return the replication size and not the size in terms of
bytes.
I want to see the size of a pool in terms of bytes.
The MAX AVAIL in "ceph df" is:
[empty space of the OSD disk with the least empty space] multiplied by
[the number of OSDs]
That is not what I am looking
Hi Matthew,
Thanks for the advice but we are no longer using orphans find since the problem
does not seem to be solved with it.
Regards,
-Original Message-
From: Matthew Vernon
Sent: 20 July 2018 11:03
To: CUZA Frédéric ; ceph-users@lists.ceph.com
Subject: Be careful with orphans
Hi,
On 19/07/18 17:19, CUZA Frédéric wrote:
> After that we tried to remove the orphans :
>
> radosgw-admin orphans find --pool=default.rgw.buckets.data
> --job-id=ophans_clean
>
> radosgw-admin orphans finish --job-id=ophans_clean
>
> It finds some orphans: 85, but the finish command seems
Hi,
ceph osd pool get your_pool_name size
ceph osd pool ls detail
these are the commands to get the size of a pool in terms of
replication, not the available storage.
So the capacity in 'ceph df' is returning the space left on the pool and
not the 'capacity size'.
I'm not aware of a
In the meantime I upgraded the cluster to 12.2.7 and added the osd
distrust data digest = true setting in ceph.conf because it's a mixed
cluster.
But I still see a constantly growing number of inconsistent PG's and
Scrub errors.
If I check the running ceph config with ceph --admin-daemon
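For example, to check the running value (the socket path and OSD id are
placeholders):

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep data_digest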
# for a specific pool:
ceph osd pool get your_pool_name size
> Le 20 juil. 2018 à 10:32, Sébastien VIGNERON a
> écrit :
>
> #for all pools:
> ceph osd pool ls detail
>
>
>> Le 20 juil. 2018 à 09:02, si...@turka.nl a écrit :
>>
>> Hi,
>>
>> How can I see the size of a pool? When I create
#for all pools:
ceph osd pool ls detail
> Le 20 juil. 2018 à 09:02, si...@turka.nl a écrit :
>
> Hi,
>
> How can I see the size of a pool? When I create a new empty pool I can see
> the capacity of the pool using 'ceph df', but as I start putting data in
> the pool the capacity is decreasing.
Thanks, we are fully bluestore and therefore just set osd skip data digest =
true
Kind regards,
Glen Baars
-Original Message-
From: Dan van der Ster
Sent: Friday, 20 July 2018 4:08 PM
To: Glen Baars
Cc: ceph-users
Subject: Re: [ceph-users] 12.2.6 upgrade
That's right. But please
That's right. But please read the notes carefully to understand if you
need to set
osd skip data digest = true
or
osd distrust data digest = true
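In ceph.conf that would look roughly like this (pick one, per the 12.2.7
release notes; the bluestore-only vs. mixed-cluster split follows this thread):

[osd]
# fully bluestore cluster:
osd skip data digest = true
# mixed filestore/bluestore cluster:
#osd distrust data digest = true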
.. dan
On Fri, Jul 20, 2018 at 10:02 AM Glen Baars wrote:
>
> I saw that on the release notes.
>
> Does that mean that the
I saw that on the release notes.
Does that mean that the active+clean+inconsistent PGs will be OK?
Is the data still getting replicated even if inconsistent?
Kind regards,
Glen Baars
-Original Message-
From: Dan van der Ster
Sent: Friday, 20 July 2018 3:57 PM
To: Glen Baars
Cc:
CRC errors are expected in 12.2.7 if you ran 12.2.6 with bluestore. See
https://ceph.com/releases/12-2-7-luminous-released/#upgrading-from-v12-2-6
On Fri, Jul 20, 2018 at 8:30 AM Glen Baars wrote:
>
> Hello Ceph Users,
>
>
>
> We have upgraded all nodes to 12.2.7 now. We have 90PGs ( ~2000 scrub
On Thu, Jul 19, 2018 at 11:51 AM Robert Sander
wrote:
>
> On 19.07.2018 11:15, Ronny Aasen wrote:
>
> > Did you upgrade from 12.2.5 or 12.2.6 ?
>
> Yes.
>
> > sounds like you hit the reason for the 12.2.7 release
> >
> > read : https://ceph.com/releases/12-2-7-luminous-released/
> >
> > there
Hi,
How can I see the size of a pool? When I create a new empty pool I can see
the capacity of the pool using 'ceph df', but as I start putting data in
the pool the capacity is decreasing.
So the capacity in 'ceph df' is returning the space left on the pool and
not the 'capacity size'.
Thanks!
Hi,
I am trying to understand what happens when an OSD fails.
A few days back I wanted to check what happens when an OSD goes down; what I
did was I just went to the node and stopped one of the OSD
services. When the OSD went into the down state, PGs started recovering and
after some time everything
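A simple way to reproduce that test (the OSD id is a placeholder):

systemctl stop ceph-osd@3   # take one OSD down
ceph -s                     # watch PGs peer and recover
ceph osd tree               # the stopped OSD shows as 'down'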
Hello Ceph Users,
We have upgraded all nodes to 12.2.7 now. We have 90 PGs (~2000 scrub errors)
to fix from the time when we ran 12.2.6. It doesn't seem to be affecting
production at this time.
Below is the log of a PG repair. What is the best way to correct these errors?
Is there any
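In general (not specific to this log; the pgid below is a placeholder),
inconsistent PGs can be inspected and repaired with:

rados list-inconsistent-obj 2.5 --format=json-pretty
ceph pg repair 2.5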