On Wed 3 Jul 2019 at 09:01, Luk wrote:
> Hello,
>
> I have a strange problem with scrubbing.
>
> When scrubbing starts on a PG which belongs to the default.rgw.buckets.index
> pool, I can see that this OSD is very busy (see attachment), and starts
> showing many
> slow requests, after the scrubbing
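A minimal sketch of how scrub impact is commonly throttled while the root
cause is investigated (the option names are standard Ceph settings, the
values are only illustrative):

  # at most one concurrent scrub per OSD, with a pause between scrub chunks
  ceph tell 'osd.*' injectargs '--osd-max-scrubs 1 --osd-scrub-sleep 0.2'
  # optionally confine scrubs to off-peak hours
  ceph tell 'osd.*' injectargs '--osd-scrub-begin-hour 22 --osd-scrub-end-hour 6'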
On Wed 3 Jul 2019 at 20:51, Austin Workman wrote:
>
> But a very strange number shows up in the active sections of the PGs,
> roughly the same number as 2147483648. This seems very odd,
> and maybe the value got lodged somewhere it doesn't belong, which is causing
> an issue.
>
>
That
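For context, 2147483648 is exactly 2^31; numbers in that neighbourhood in an
up/acting set are usually the "no OSD mapped" placeholder rather than a real
OSD id (in the CRUSH source the marker CRUSH_ITEM_NONE is 0x7fffffff, i.e.
2147483647, which is probably the value being seen "roughly"). A quick check
of the arithmetic:

  echo $((2**31))           # 2147483648
  printf '%d\n' 0x7fffffff  # 2147483647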
On Tue 16 Jul 2019 at 16:16, M Ranga Swami Reddy <
swamire...@gmail.com> wrote:
> Hello - I have created a 10-node Ceph cluster with version 14.x. Can you
> please confirm the below:
> Q1 - Can I create 100+ pools (or more) on the cluster? (The reason is
> creating a pool per project.) Any limitation
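There is no hard cap on the number of pools as such, but every pool adds PGs
and the mons limit PGs per OSD (mon_max_pg_per_osd, around 250 by default in
recent releases). A rough back-of-the-envelope check with purely illustrative
numbers:

  100 pools x 32 pg_num x 3 replicas = 9600 PG instances
  9600 / (10 nodes x 10 OSDs)        = 96 PGs per OSD   (comfortably under the limit)

The per-OSD PG count shows up in the PGS column of:

  ceph osd df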
On Mon 15 Jul 2019 at 23:05, Oscar Segarra wrote:
> Hi Frank,
> Thanks a lot for your quick response.
> Yes, the use case that concerns me is the following:
> 1.- I bootstrap a complete cluster (mons, osds, mgr, mds, nfs, etc.) using
> etcd as a key store
>
as a key store ... for what? Are you stu
v/vdd \
> -e KV_TYPE=etcd \
> -e KV_IP=192.168.0.20 \
> ceph/daemon osd
>
> Thanks a lot for your help,
>
> Óscar
>
>
>
>
> On Tue, 16 Jul 2019 17:34, Janne Johansson wrote:
>
>> On Mon 15 Jul 2019 at 23:05, Oscar Segarra <
>>
On Tue 16 Jul 2019 at 18:15, Oscar Segarra wrote:
> Hi Paul,
> That is the initial question: is it possible to recover my Ceph cluster
> (docker based) if I lose all information stored in etcd...
> I don't know if anyone has a clear answer to these questions..
> 1.- I bootstrap a complete c
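A hedged note: with the ceph/daemon containers the KV store mainly holds the
bootstrap configuration and keys, while the authoritative cluster state lives
in the mon stores and on the OSDs themselves. Assuming the image keeps its
data under an etcd prefix such as /ceph-config (an assumption, check your
deployment), a periodic dump is cheap insurance:

  etcdctl get --prefix /ceph-config > ceph-kv-backup.txt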
On Fri 19 Jul 2019 at 12:43, Marc Roos wrote:
>
> Maybe a bit off topic, just curious what speeds did you get previously?
> Depending on how you test your native 5400 rpm drive, the performance
> could be similar. 4k random read on my 7200 rpm/5400 rpm drives results in
> ~60 IOPS at 260 kB/s.
> I also w
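For reference, 60 IOPS x 4 kB is about 240 kB/s, so the ~260 kB/s figure is
what a single spindle delivers on 4k random reads. A sketch of how such a
baseline is often measured with fio (the device name is only an example):

  fio --name=randread --filename=/dev/sdX --rw=randread --bs=4k \
      --iodepth=1 --numjobs=1 --direct=1 --ioengine=libaio \
      --runtime=60 --time_based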
On Wed 24 Jul 2019 at 21:48, Wido den Hollander wrote:
> Right now I'm just trying to find a clever solution to this. It's a 2k
> OSD cluster and the likelihood of a host or OSD crashing is reasonable
> while you are performing maintenance on a different host.
>
> All kinds of things have cross
On Thu 25 Jul 2019 at 04:36, zhanrzh...@teamsun.com.cn <
zhanrzh...@teamsun.com.cn> wrote:
> I think you should set "osd_pool_default_min_size=1" before you add OSDs,
> and the OSDs that you add at a time should be in the same failure domain.
>
That sounds like weird or even bad advice?
What is the
On Thu 25 Jul 2019 at 10:47, 展荣臻(信泰) wrote:
>
> 1. Adding OSDs in one failure domain at a time is to ensure that only one PG
> in the PG up set (as ceph pg dump shows) has to remap.
> 2. Setting "osd_pool_default_min_size=1" is to ensure objects can be read and
> written uninterruptedly while PGs remap.
> Is this wrong?
>
How d
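A short sketch of why min_size=1 is usually discouraged: with size=3 and
min_size=1 a PG keeps accepting writes even when only one copy is left, so a
single further failure during the remap can lose data. The per-pool values
can be checked and raised back (the pool name is only an example):

  ceph osd pool get rbd min_size
  ceph osd pool set rbd min_size 2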
On Tue 30 Jul 2019 at 10:33, Massimo Sgaravatto <
massimo.sgarava...@gmail.com> wrote:
> The documentation that I have seen says that the minimum requirements for
> clients to use upmap are:
> - CentOS 7.5 or kernel 4.5
> - Luminous version
> E.g. right now I am interested in 0x1ffddff8eea4fff
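For completeness, a sketch of the usual checks before turning on upmap
(standard commands; interpreting the output depends on your clients):
ceph features lists the feature bitmasks, such as the 0x1ffddff8eea4fff
above, reported by connected clients, and the balancer refuses upmap mode
until the minimum client release has been asserted:

  ceph features
  ceph osd set-require-min-compat-client luminous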
On Wed 31 Jul 2019 at 06:55, Muhammad Junaid wrote:
> The question is about the RBD cache in write-back mode using KVM/libvirt. If
> we enable this, it uses the local KVM host's RAM as a cache for the VM's write
> requests. And the KVM host immediately responds to the VM's OS that data has
> been written to disk (Act
On Thu 1 Aug 2019 at 07:31, Muhammad Junaid wrote:
> Your email has cleared up many things for me. Let me repeat my understanding.
> Writes of critical data (like Oracle or any other DB) will be done with
> sync/fsync flags, meaning they will only be confirmed to the DB/app after it
> is actually writ
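A hedged illustration of the client-side knobs involved (values illustrative;
they go in the [client] section on the KVM host): rbd cache enables the
write-back cache, and rbd cache writethrough until flush keeps it safe for
guests that never send flushes:

  [client]
  rbd cache = true
  rbd cache writethrough until flush = true
  rbd cache size = 33554432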
On Thu 1 Aug 2019 at 11:31, dannyyang(杨耿丹) wrote:
> Hi all:
>
> We have a CephFS environment, the Ceph version is 12.2.10, the servers are
> ARM but the FUSE clients are x86.
> OSD disk size is 8 TB; some OSDs use 12 GB of memory. Is that normal?
>
>
For bluestore, there are certain tuneables you can use to limit memory a
bit.
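A minimal sketch, assuming a recent enough Luminous build (the option was
backported during the 12.2.x series): osd_memory_target makes bluestore
shrink its caches to aim at a total per-OSD RSS, e.g. 4 GB:

  [osd]
  osd_memory_target = 4294967296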
On Wed 14 Aug 2019 at 09:49, Simon Oosthoek wrote:
> Hi all,
>
> Yesterday I marked out all the osds on one node in our new cluster to
> reconfigure them with WAL/DB on their NVMe devices, but it is taking
> ages to rebalance.
>
> > ceph tell 'osd.*' injectargs '--osd-max-backfills 16'
> > c
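Alongside osd-max-backfills, a couple of other throttles are commonly raised
temporarily for a planned rebalance (values illustrative, worth dialing back
afterwards):

  ceph tell 'osd.*' injectargs '--osd-recovery-max-active 4'
  ceph tell 'osd.*' injectargs '--osd-recovery-sleep-hdd 0 --osd-recovery-sleep-ssd 0'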
On Thu 15 Aug 2019 at 00:16, Anthony D'Atri wrote:
> Good points in both posts, but I think there’s still some unclarity.
>
...
> We’ve seen good explanations on the list of why only specific DB sizes,
> say 30GB, are actually used _for the DB_.
> If the WAL goes along with the DB, shouldn’t
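The usual reasoning, sketched with round numbers (exact level sizes depend on
the rocksdb settings): the levels grow roughly as 300 MB, 3 GB, 30 GB, 300 GB,
and a level only lands on the DB device if it fits there entirely. A 60 GB
partition therefore holds L0-L3 (some 30-36 GB) but not L4, and the remaining
~25 GB sits unused; hence the often-quoted useful sizes of roughly 3, 30 or
300 GB, plus a few GB for the WAL.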
On Wed 11 Sep 2019 at 12:18, Matthew Vernon wrote:
> We keep finding part-made OSDs (they appear not attached to any host,
> and down and out; but still counting towards the number of OSDs); we
> never saw this with ceph-disk. On investigation, this is because
> ceph-volume lvm create makes the
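When one of those part-made OSDs turns up, the usual cleanup (the id and
device name are only examples) is to purge the dangling id and wipe the LVM
pieces before retrying:

  ceph osd purge osd.123 --yes-i-really-mean-it
  ceph-volume lvm zap /dev/sdX --destroy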
>
> I don't remember where I read it, but it was said that the cluster
> migrates its complete traffic over to the public network when the cluster
> network goes down. So this seems not to be the case?
>
Be careful with generalizations like "when a network acts up, it will be
completely down
(Slightly abbreviated)
On Thu 24 Oct 2019 at 09:24, Frank Schilder wrote:
> What I learned is the following:
>
> 1) Avoid this work-around (too few hosts for the EC rule) at all cost.
>
> 2) Do not use EC 2+1. It does not offer anything interesting for
> production. Use 4+2 (or 8+2, 8+3 if you hav
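As a concrete sketch of the recommended shape (profile and pool names are
examples), a 4+2 profile with host as the failure domain needs at least six
hosts:

  ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
  ceph osd pool create ecpool 128 128 erasure ec42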
On Thu 31 Oct 2019 at 04:22, soumya tr wrote:
> Thanks 潘东元 for the response.
>
> The creation of a new pool works, and all the PGs corresponding to that
> pool have active+clean state.
>
> When I initially set up the 3-node Ceph cluster using juju charms (the
> replication count per object was set to 3),
On Thu 31 Oct 2019 at 15:07, George Shuklin <
george.shuk...@gmail.com> wrote:
> Thank you everyone, I got it. There is no way to fix out-of-space
> bluestore without expanding it.
>
> Therefore, in production we would stick with a 99%FREE size for the LV, as it
> gives operators a 'last chance' to repa
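A sketch of the 'last chance' path that the 99%FREE sizing leaves open (VG/LV
names and the OSD id are examples): with the OSD stopped, grow the LV into
the reserved space and let bluestore pick it up:

  lvextend -l +100%FREE /dev/ceph-vg/osd-block-123
  ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-123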
On Tue 5 Nov 2019 at 19:10, J David wrote:
> On Tue, Nov 5, 2019 at 3:18 AM Paul Emmerich
> wrote:
> > could be a new feature, I've only realized this exists/works since
> Nautilus.
> > You seem to be on a relatively old version since you still have ceph-disk
> installed
>
> The next approach may
Is the flip between the client names "rz" and "user" also a typo? It's
hard to tell whether it is intentional or not since you mix them throughout.
On Fri 15 Nov 2019 at 10:57, Rainer Krienke wrote:
> I found a typo in my post:
>
> Of course I tried
>
> export CEPH_ARGS="-n client.rz --keyrin
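For reference, the full form being aimed at would look something like this
(the keyring path is an assumption):

  export CEPH_ARGS="-n client.rz --keyring=/etc/ceph/ceph.client.rz.keyring"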
On Fri 15 Nov 2019 at 19:40, Mike Cave wrote:
> So would you recommend doing an entire node at the same time or per-osd?
>
You should be able to do it per-OSD (or per-disk in case you run more than
one OSD per disk), to minimize data movement over the network, letting
other OSDs on the same hos
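A hedged outline of that per-OSD flow (the id is an example): mark the OSD
out, let the data drain while the rest of the host keeps serving, then
destroy and redeploy it reusing the same id so the CRUSH weights barely move:

  ceph osd out osd.42
  # wait for backfill to finish / HEALTH_OK
  ceph osd destroy osd.42 --yes-i-really-mean-it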
It's mentioned here among other places
https://books.google.se/books?id=vuiLDwAAQBAJ&pg=PA79&lpg=PA79&dq=rocksdb+sizes+3+30+300+g&source=bl&ots=TlH4GR0E8P&sig=ACfU3U0QOJQZ05POZL9DQFBVwTapML81Ew&hl=en&sa=X&ved=2ahUKEwiPscq57YfmAhVkwosKHY1bB1YQ6AEwAnoECAoQAQ#v=onepage&q=rocksdb%20sizes%203%2030%20300
On Wed 4 Dec 2019 at 01:37, Milan Kupcevic <
milan_kupce...@harvard.edu> wrote:
> This cluster can handle this case at the moment as it has plenty of
> free space. I wonder how this is going to play out when we get to 90%
> usage on the whole cluster. A single backplane failure in a node
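Worth keeping in mind for the 90% scenario (a general note, not specific to
this cluster): the defaults are nearfull at 0.85, backfillfull at 0.90 and
full at 0.95, so at 90% average usage individual OSDs start refusing backfill
well before the cluster reports itself full. The current values are visible
with:

  ceph osd dump | grep ratio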
On Wed 4 Dec 2019 at 09:57, Marc Roos wrote:
>
> But I guess that in 'ceph osd tree' the SSDs were then also displayed
> as hdd?
>
Probably, and the difference in performance would be due to the different
defaults HDD OSDs get vs SSD OSDs with regard to the bluestore caches.
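If the class really was mis-detected, it can be corrected per OSD (the id is
an example); the old class has to be removed before a new one is set:

  ceph osd crush rm-device-class osd.7
  ceph osd crush set-device-class ssd osd.7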
--
May the most significant bit of yo
On Thu 5 Dec 2019 at 00:28, Milan Kupcevic <
milan_kupce...@harvard.edu> wrote:
>
>
> There is plenty of space to take more than a few failed nodes. But the
> question was about what is going on inside a node with a few failed
> drives. Current Ceph behavior keeps increasing the number of placement
>
>
> I'm currently trying to work out a concept for a Ceph cluster which can
> be used as a target for backups and which satisfies the following requirements:
>
> - approx. write speed of 40,000 IOPS and 2500 MByte/s
>
You might need a large number of writers (certainly more than one) to get to
that s
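For a sense of scale: 2500 MByte/s over 40,000 IOPS is roughly 64 kB per
write, and a single sequential writer rarely drives that on its own. A hedged
way to probe how parallelism changes the picture (the pool name is an
example) is rados bench with varying object size and thread count:

  rados bench -p backuptest 60 write -b 65536 -t 32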
On Mon 13 Jan 2020 at 08:09, Stefan Priebe - Profihost AG <
s.pri...@profihost.ag> wrote:
> Hello,
>
> I'm planning to split the block DB onto a separate flash device which I
> also would like to use as an OSD for erasure coding metadata for rbd
> devices.
>
> If I want to use 14x 14TB HDDs per Node
(sorry for empty mail just before)
> I'm planning to split the block DB onto a separate flash device which I
>> also would like to use as an OSD for erasure coding metadata for rbd
>> devices.
>>
>> If I want to use 14x 14TB HDDs per Node
>>
>> https://docs.ceph.com/docs/master/rados/configuration/
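Rough arithmetic for that layout, assuming the often-quoted ~30 GB of usable
DB per HDD OSD: 14 OSDs x 30 GB is about 420 GB of flash for block.db, so on,
say, a 1 TB flash device roughly half could still be carved out as a separate
OSD for the EC/rbd metadata pool. The caveat is that one flash failure then
takes down all 14 HDD OSDs plus that metadata OSD at once.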
On Mon 20 Jan 2020 at 09:03, Dave Hall wrote:
> Hello,
>
> Since upgrading to Nautilus (+ Debian 10 Backports), when I issue
> 'ceph-volume lvm batch --bluestore ' it fails with
>
> bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
>
> I previously had Luminous + Debian 9 running
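A hedged note: the _read_fsid message is also printed during a normal mkfs of
a fresh OSD, so the real failure may be further down in the output. A common
first step when the device has seen an earlier deployment is to zap it and
retry (the device name is an example):

  ceph-volume lvm zap /dev/sdX --destroy
  ceph-volume lvm batch --bluestore /dev/sdX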