Hello Brad,
> I see others have answered these questions but I'll provide the link
> to the relevant section of the docs here for those that may read this
> later.
>
> http://docs.ceph.com/docs/master/rados/operations/add-or-rm-mons/#adding-monitors
>
thanks for the link, i think i have read that section already.
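for my own notes (and for whoever reads this later), the gist of that docs
section is roughly the following; mon id "x" and the paths are only
placeholders here, the docs have the exact steps:

    # on the new monitor host
    $ sudo mkdir /var/lib/ceph/mon/ceph-x
    # fetch the mon keyring and the current monmap from the cluster
    $ ceph auth get mon. -o /tmp/mon-keyring
    $ ceph mon getmap -o /tmp/monmap
    # initialise the new mon's data directory with both
    $ sudo ceph-mon -i x --mkfs --monmap /tmp/monmap --keyring /tmp/mon-keyring
    # start it; it should then join the existing quorum
    $ sudo systemctl start ceph-mon@x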
On Thu, Dec 7, 2017 at 10:24 PM, Marcus Priesch wrote:
> Hello Alwin, Dear All,
[snip]
>> Mixing of spinners with SSDs is not recommended, as spinners will slow
>> down the pools residing on that root.
>
> why should this happen? i would assume that osd's are separate parts
> running on hosts -
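if the spinners and the ssd's really end up under the same crush root, one way
to keep them apart since luminous is the device class in the crush rule; a
rough sketch (rule and pool names below are made up, not from this cluster):

    # the CLASS column shows what ceph detected per osd (hdd / ssd)
    $ ceph osd tree
    # one rule per device class (rule names are just examples)
    $ ceph osd crush rule create-replicated replicated-ssd default host ssd
    $ ceph osd crush rule create-replicated replicated-hdd default host hdd
    # pin a pool to one class ("vm-pool" is a placeholder)
    $ ceph osd pool set vm-pool crush_rule replicated-ssd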
On Thu, Dec 7, 2017 at 6:59 PM, Marcus Priesch wrote:
> Hello Brad,
Hi,
>> You don't really have six MONs do you (although I know the answer to
>> this question)? I think you need to take another look at some of the
>> docs about monitors.
Sorry, I could have phrased this much better in hindsight.
Hello Marcus,
On Thu, Dec 07, 2017 at 10:24:13AM +0100, Marcus Priesch wrote:
> Hello Alwin, Dear All,
>
> yesterday we finished cluster migration to proxmox and i had the same
> problem again:
>
> A couple of osd's down and out and a stuck request on a completely
> different osd which blocked the vm's.
as the 1Gb network is completely busy in such a scenario, i would assume that
maybe some network communication got stuck somewhere.
1Gbit is nothing for ceph OSD hosts, even if you use only spinners.
Don't forget that 1Gbit has much higher latency and less bandwidth (obviously)
compared with 10Gbit.
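just to put numbers on it instead of guessing, throughput and latency between
two osd hosts can be measured while recovery is running; the host names here
are placeholders:

    # on host-a
    $ iperf3 -s
    # on host-b
    $ iperf3 -c host-a
    # round-trip latency, watch avg/max while the cluster is busy
    $ ping -c 20 host-a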
the rule of thumb is 3 for a small to mid-sized cluster.
3 mons work with 1+ OSD with Luminous:
http://ceph.com/community/new-luminous-scalability/
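if in doubt how many mons there actually are and which of them form the
quorum right now, this shows it:

    $ ceph mon stat
    $ ceph quorum_status --format json-pretty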
Hello Fabian,
> an even number is always bad for quorum based systems (6 is no better
> than 5, as you can only tolerate a loss of 2 before losing quorum).
>
> in Ceph, additional monitors require additional resources AND generate
> additional overhead (more mons -> more communication). the rule of thumb
> is 3 for small to mid-sized clusters.
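for reference, the arithmetic behind that: quorum needs floor(n/2)+1 mons
alive, so the number of mon failures you can tolerate is:

    mons  quorum  tolerated failures
    3     2       1
    4     3       1
    5     3       2
    6     4       2
    7     4       3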
On Thu, Dec 07, 2017 at 09:59:43AM +0100, Marcus Priesch wrote:
> Hello Brad,
>
> thanks for your answer !
>
> >> at least the point of all is that a single host should be allowed to
> >> fail and the vm's continue running ... ;)
> >
> > You don't really have six MONs do you (although I know the answer to
> > this question)? I think you need to take another look at some of the
> > docs about monitors.
Hello Alwin, Dear All,
yesterday we finished cluster migration to proxmox and i had the same
problem again:
A couple of osd's down and out and a stuck request on a completely
different osd which blocked the vm's.
i tried to put this specific osd out (ceph osd out xx) and voila, the
problem was gone.
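before taking an osd out it can help to see what the blocked request is
actually waiting for; osd.12 below is just an example id, and the daemon
commands have to be run on the host that carries that osd:

    # shows which osds have slow / blocked requests
    $ ceph health detail
    # on the host of the affected osd
    $ sudo ceph daemon osd.12 dump_ops_in_flight
    $ sudo ceph daemon osd.12 dump_historic_ops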
Hello Brad,
thanks for your answer !
>> at least the point of all is that a single host should be allowed to
>> fail and the vm's continue running ... ;)
>
> You don't really have six MONs do you (although I know the answer to
> this question)? I think you need to take another look at some of the
> docs about monitors.
Hello Marcus,
On Tue, Dec 05, 2017 at 07:09:35PM +0100, Marcus Priesch wrote:
> Dear Ceph Users,
>
> first of all, big thanks to all the devs and people who made all this
> possible, ceph is amazing !!!
>
> ok, so let me get to the point where i need your help:
>
> i have a cluster of 6 hosts, mixed with ssd's and hdd's.
On Wed, Dec 6, 2017 at 4:09 AM, Marcus Priesch wrote:
> Dear Ceph Users,
>
> first of all, big thanks to all the devs and people who made all this
> possible, ceph is amazing !!!
>
> ok, so let me get to the point where i need your help:
>
> i have a cluster of 6 hosts, mixed with ssd's and hdd's.
Dear Ceph Users,
first of all, big thanks to all the devs and people who made all this
possible, ceph is amazing !!!
ok, so let me get to the point where i need your help:
i have a cluster of 6 hosts, mixed with ssd's and hdd's.
on 4 of the 6 hosts, 21 vm's are running in total with little to no
workload.
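for reference, the mixed layout and the current usage can be seen with the
usual status commands (nothing specific to my setup):

    $ ceph osd tree       # CLASS column shows hdd / ssd per osd
    $ ceph osd df tree    # usage per osd and per host
    $ ceph df             # usage per pool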