Quoting Robert LeBlanc (rob...@leblancnet.us):
> The link that you referenced above is no longer available, do you have a
> new link? We upgraded from 12.2.8 to 12.2.12 and the MDS metrics all
> changed, so I'm trying to map the old values to the new values. Might just
> have to look in the code.
Hello,
does anybody have real-life experience with an external block.db?
Greets,
Stefan
On 13.01.20 at 08:09, Stefan Priebe - Profihost AG wrote:
> Hello,
>
> I'm planning to split the block db to a separate flash device which I
> also would like to use as an OSD for erasure coding metadata for rbd
On 1/13/20 6:37 PM, vita...@yourcmc.ru wrote:
>> Hi,
>>
>> we're playing around with ceph but are not quite happy with the IOPS:
>> on average 5000 iops / write
>> on average 13000 iops / read
>>
>> We're expecting more. :( Any ideas, or is that all we can expect?
>
> With server SSD you can expe
On Mon, 13 Jan 2020 at 08:09, Stefan Priebe - Profihost AG <
s.pri...@profihost.ag> wrote:
> Hello,
>
> I'm planning to split the block db to a separate flash device which I
> also would like to use as an OSD for erasure coding metadata for rbd
> devices.
>
> If I want to use 14x 14TB HDDs per Node
On 1/10/20 7:43 PM, Philip Brown wrote:
> Surprisingly, a Google search didn't seem to find the answer on this, so I guess
> I should ask here:
>
> what determines if an RBD is "100% busy"?
>
> I have some backend OSDs, and an iSCSI gateway, serving out some RBDs.
>
> iostat on the gateway says
(sorry for empty mail just before)
>> I'm planning to split the block db to a separate flash device which I
>> also would like to use as an OSD for erasure coding metadata for rbd
>> devices.
>>
>> If I want to use 14x 14TB HDDs per Node
>>
>> https://docs.ceph.com/docs/master/rados/configuration/
On 1/10/20 5:32 PM, Stefan Priebe - Profihost AG wrote:
> Hi,
>
> we're currently in the process of building a new ceph cluster to back up rbd
> images from multiple ceph clusters.
>
> We would like to start with just a single ceph cluster to back up, which is
> about 50tb. Compression ratio of
One tricky thing is that each RocksDB level lives either 100% on the SSD or
100% on the HDD, so either you need to tweak the RocksDB configuration, or
there will be a huge waste: e.g. a 20GB DB partition makes no difference
compared to a 3GB one (under the default RocksDB configuration).
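For illustration (rough numbers, assuming RocksDB's defaults of
max_bytes_for_level_base = 256 MB and max_bytes_for_level_multiplier = 10),
the level sizes come out roughly as:

  L1 ~ 256 MB
  L2 ~ 2.56 GB
  L3 ~ 25.6 GB
  L4 ~ 256 GB

A level only stays on the fast device if it fits there entirely, so the DB
partition sizes that actually help are roughly the cumulative sums plus WAL
and some slack, i.e. about 3, 30 or 300 GB; a 20 GB partition still only
holds up to L2 and behaves much like a 3 GB one.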
On Tue, 14 Jan 2020 at 4 PM, Janne Johansson wrote:
...disable signatures and rbd cache. I didn't mention it in the email to not
repeat myself. But I have it in the article :-)
--
With best regards,
Vitaliy Filippov
I'm planning to split the block db to a separate flash device which I
also would like to use as an OSD for erasure coding metadata for rbd
devices.
If I want to use 14x 14TB HDDs per Node
https://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#sizing
recommends a minimum size
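In case a concrete example helps, a minimal sketch of creating one such OSD
with ceph-volume (device and VG/LV names are placeholders; with 14 HDDs per
node you would pre-create one DB LV per OSD on the flash device):

  ceph-volume lvm create --bluestore --data /dev/sdc --block.db flash-vg/db-sdc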
I have just finished the update of a ceph cluster from Luminous to Nautilus.
Everything seems to be running, but I keep receiving notifications (about ~10 so
far, involving different PGs and different OSDs) of PGs in inconsistent
state.
rados list-inconsistent-obj pg-id --format=json-pretty (an exampl
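For reference, the usual sequence for chasing these down is roughly the
following (pg id is a placeholder; repair only once the nature of the error
is understood):

  ceph health detail                                    # lists the inconsistent PGs
  rados list-inconsistent-obj <pg-id> --format=json-pretty
  ceph pg repair <pg-id>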
This is what I see in the osd.54 log file:
2020-01-14 10:35:04.986 7f0c20dca700 -1 log_channel(cluster) log [ERR] :
13.4 soid
13:20fbec66:::%2fhbWPh36KajAKcJUlCjG9XdqLGQMzkwn3NDrrLDi_mTM%2ffile2:head :
size 385888256 > 134217728 is too large
2020-01-14 10:35:08.534 7f0c20dca700 -1 log_channel(clust
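For context, 134217728 bytes is 128 MiB, which matches the default
osd_max_object_size in recent releases, so scrub flags any RADOS object
bigger than that. A sketch of how to check and, if really needed, raise the
limit (the better fix is usually whatever client is writing >128 MiB objects):

  ceph config get osd osd_max_object_size
  ceph config set osd osd_max_object_size 536870912    # e.g. 512 MiB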
Hi Stefan,
thank you for your time.
"temporary write through" does not seem to be a legit parameter.
However, write through is already set:
root@proxmox61:~# echo "temporary write through" >
/sys/block/sdb/device/scsi_disk/*/cache_type
root@proxmox61:~# cat /sys/block/sdb/device/scsi_di
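In case it helps, a sketch of applying the setting to all SCSI disks at once
(bash assumed; the "temporary" prefix is only honoured by newer kernels,
while plain "write through" also tells the drive to disable its volatile
write cache):

  for f in /sys/block/sd*/device/scsi_disk/*/cache_type; do
      echo "write through" | tee "$f"
  done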
Hi Konstantin!
Quoting Konstantin Shalygin (k0...@k0ste.ru):
> >Is there any recommendation for how many OSDs a single flash device can
> >serve? The Optane ones can do 2000MB/s write + 500,000 IOPS.
>
> Any DB size other than 3/30/300 GB is useless.
I have this from Mattia Belluco in my notes wh
Hi Vitaliy,
thank you for your time. Do you mean
cephx sign messages = false
with "diable signatures" ?
KR
Stefan
-Original Message-
From: Виталий Филиппов
Sent: Tuesday, 14 January 2020 10:28
To: Wido den Hollander ; Stefan Bauer
CC: ceph-users@lists.ceph.com
Subject:
The odd thing is:
the network interfaces on the gateways don't seem to be at 100% capacity
and the OSD disks don't seem to be at 100% utilization.
so I'm confused where this could be getting held up.
- Original Message -
From: "Wido den Hollander"
To: "Philip Brown" , "ceph-users"
Sent:
Also..
"It seems like your RBD can't flush it's I/O fast enough"
implies that there is some particular measure of "fast enough", that is a
tunable value somewhere.
If my network cards arent blocked, and my OSDs arent blocked...
then doesnt that mean that I can and should "turn that knob" up?
--
Yes, that's it, see the end of the article. You'll have to disable
signature checks, too.
cephx_require_signatures = false
cephx_cluster_require_signatures = false
cephx_sign_messages = false
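On Mimic or newer the same can also be set through the config database
instead of ceph.conf (a sketch; clients typically need to reconnect or
restart to pick it up):

  ceph config set global cephx_require_signatures false
  ceph config set global cephx_cluster_require_signatures false
  ceph config set global cephx_sign_messages false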
Hi Vitaliy,
thank you for your time. Do you mean
cephx sign messages = false
with "diable signature
Thank you all,
performance is indeed better now. Can now go back to sleep ;)
KR
Stefan
-Original Message-
From: Виталий Филиппов
Sent: Tuesday, 14 January 2020 10:28
To: Wido den Hollander ; Stefan Bauer
CC: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] low io w
Hi Philip,
I'm not sure if we're talking about the same thing, but I was also
confused when I didn't see 100% OSD drive utilization during my first
RBD write benchmark. Since then I've been collecting all my confusion here:
https://yourcmc.ru/wiki/Ceph_performance :)
100% RBD utilization means that somet
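In case a concrete test helps, a sketch of two fio runs against an RBD image
that separate single-stream latency from aggregate iops (pool, image and
client names are placeholders):

  # QD=1 sync random write latency - the number that usually disappoints
  fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg \
      --name=lat --rw=randwrite --bs=4k --iodepth=1 --direct=1 \
      --runtime=60 --time_based

  # high queue depth - shows what the cluster can do in parallel
  fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg \
      --name=iops --rw=randwrite --bs=4k --iodepth=128 --direct=1 \
      --runtime=60 --time_based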
On Tue, Jan 14, 2020 at 12:30 AM Stefan Kooman wrote:
> Quoting Robert LeBlanc (rob...@leblancnet.us):
> > The link that you referenced above is no longer available, do you have a
> > new link? We upgraded from 12.2.8 to 12.2.12 and the MDS metrics all
> > changed, so I'm trying to map the old v
Hi,
I am getting one inconsistent object on our cluster with an inconsistency error
that I haven’t seen before. This started happening during a rolling upgrade of
the cluster from 14.2.3 -> 14.2.6, but I am not sure that’s related.
I was hoping to know what the error means before trying a repa
Does anyone know if this also respects the nearfull values?
Thank you in advance
Mehmet
On 14 January 2020 15:20:39 CET, Stephan Mueller wrote:
>Hi,
>I sent out this message on the 19th of December and somehow it didn't
>get onto the list and I just noticed it now. Sorry for the delay.
>I t
Quoting Robert LeBlanc (rob...@leblancnet.us):
>
> req_create
> req_getattr
> req_readdir
> req_lookupino
> req_open
> req_unlink
>
> We were graphing these as ops, but using the new avgcount, we are getting
> very different values, so I'm wondering if we are choosing the wrong new
> value, or we
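A sketch of how to inspect what the counters expose now (mds name is a
placeholder; if the new req_* entries are the usual avgcount/sum latency
pairs, the old per-op rate corresponds to the delta of avgcount between two
samples, not its absolute value):

  ceph daemon mds.a perf dump | jq '.mds_server | with_entries(select(.key | startswith("req_")))'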
Hi all,
I'm on 13.2.6. My cephfs has managed to lose one single object from
its data pool. All the cephfs docs I'm finding show me how to recover
from an entire lost PG, but the rest of the PG checks out as far as I
can tell. How can I track down which file that object belongs to?
I'm missin
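A sketch of the usual mapping (object name and mountpoint are placeholders):
cephfs data-pool objects are named <inode-in-hex>.<block-index>, so the part
before the dot identifies the file's inode, which find can then locate:

  # e.g. for a missing object named 10000003f5c.00000000
  printf '%d\n' 0x10000003f5c                         # inode number in decimal
  find /mnt/cephfs -inum $(printf '%d' 0x10000003f5c)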
As I wrote here:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2020-January/037909.html
I saw the same after an update from Luminous to Nautilus 14.2.6
Cheers, Massimo
On Tue, Jan 14, 2020 at 7:45 PM Liam Monahan wrote:
> Hi,
>
> I am getting one inconsistent object on our cluster with