Hi Alvaro:
From the code, I see: unsigned need = monmap->size() / 2 + 1; So
for 2 mons, the quorum must be 2 before an election can start.
That's why I use 3 mons. I know that if I stop mon.0 or mon.1, everything
will work fine. But if this failure happens, must it be handled by a
human? Is the
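(For reference, a quick sketch of the quorum sizes that line implies; this is
my own arithmetic illustrating the quoted code, not taken from the thread:)

# quorum = n/2 + 1 with integer division, per the line quoted above
for n in 2 3 4 5; do echo "$n mons -> quorum $((n / 2 + 1))"; done
# 2 mons -> quorum 2, so a single mon failure already blocks elections;
# 3 mons -> quorum 2, so one mon may fail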
I think it is really a bug, and I have tested it.
If the network between mon.0 and mon.1 is cut off, it is easy to reproduce:
mon.0
\
\
\
\
mon.1 -- mon.2
mon.0 wins the election
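(A hedged sketch of how such a partition can be forced; the address is an
assumption, not taken from the original report:)

# on the mon.0 host, drop only mon.0 <-> mon.1 traffic (mon.1 IP assumed)
iptables -A INPUT -s <mon.1-ip> -j DROP
iptables -A OUTPUT -d <mon.1-ip> -j DROP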
Z,
You are forcing a Byzantine failure; the Paxos implementation that forms the
consensus ring of the mon daemons does not support this kind of failure, which
is why you see erratic behaviour. I believe it is the common Paxos algorithm
that is implemented in the mon daemon code.
If you just gracefully shutdown a m
Hi:
I am testing ceph-mon split brain. I have read the code, and if I
understand it right, it won't split-brain. But I think
there is still another problem. My Ceph version is 0.94.10, and here
are my test details:
3 ceph-mons, their ranks are 0, 1, 2 respectively. I stop the rank 1
mon
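(For context, a hedged sketch of stopping a single mon on a 0.94.x cluster;
the exact command depends on the init system of the deployment:)

systemctl stop ceph-mon@1    # systemd hosts
service ceph stop mon.1      # sysvinit hosts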
Hello,
On a Luminous cluster upgraded from Jewel, I am seeing the following in ceph -s:
"Full ratio(s) out of order"
and
ceph pg dump | head
dumped all
version 44281
stamp 2017-07-04 05:52:08.337258
last_osdmap_epoch 0
last_pg_scan 0
full_ratio 0
nearfull_ratio 0
I have tried to inject the values
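(The message breaks off here; for reference, the Luminous-native commands for
setting these ratios, offered as a hedged pointer rather than recovered from
the truncated text:)

ceph osd set-nearfull-ratio 0.85
ceph osd set-full-ratio 0.95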
On Mon, Jul 3, 2017 at 10:17 AM, Martin Emrich wrote:
> Hi!
>
>
>
> Having to interrupt my bluestore test, I have another issue since upgrading
> from Jewel to Luminous: My backup system (Bareos with RadosFile backend) can
> no longer write Volumes (objects) larger than around 128MB.
>
> (Of cours
Hi!
Having to interrupt my bluestore test, I have another issue since upgrading
from Jewel to Luminous: My backup system (Bareos with RadosFile backend) can no
longer write Volumes (objects) larger than around 128MB.
(Of course, I did not test that on my test cluster prior to upgrading the
prod
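(A hedged pointer, not from the message itself: Luminous lowered the default
cap on single RADOS object size, osd_max_object_size, to 128 MiB, which
matches the symptom; it can be inspected on a running OSD:)

ceph daemon osd.0 config get osd_max_object_size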
I didn't see any guidance online on how to resolve the checksum error. Any
hints?
Rhian Resnick
Assistant Director Middleware and HPC
Office of Information Technology
Florida Atlantic University
777 Glades Road, CM22, Rm 173B
Boca Raton, FL 33431
Phone 561.297.2647
Fax 561.297.0222
On Mon, Jul 3, 2017 at 6:02 AM, Rhian Resnick wrote:
>
> Sorry to bring up an old post, but on Kraken I am unable to repair a PG that
> is inconsistent in a cache tier. We removed the bad object but are still
> seeing the following error in the OSD's logs.
It's possible, but the digest error mea
I would also recommend keeping each pool at a power-of-two number of PGs. So
with the 512-PG example, use 512 PGs for the data pool and 64 PGs for the
metadata pool.
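(As a hedged illustration of that advice, with pool names assumed rather than
taken from the thread:)

ceph osd pool create cephfs_data 512 512
ceph osd pool create cephfs_metadata 64 64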
On Sat, Jul 1, 2017 at 9:01 AM Wido den Hollander wrote:
>
> > On 1 July 2017 at 1:04, Tu Holmes wrote:
> >
> >
> > I would use the ca
On Mon, 3 Jul 2017 12:32:20 +0000,
Martin Emrich wrote:
> Hi!
>
> Thanks for the super-fast response!
>
> That did work somehow... Here's my command line (as Bluestore seems to
> still require a journal,
No, it doesn't. :D
> I repurposed the SSD partitions for it and
> put the DB/WAL on th
Hi,
On 06/27/2017 11:54 PM, Daniel K wrote:
> Is there anywhere that details the various compression settings for
> bluestore backed pools?
>
> I can see compression in the list of options when I run ceph osd pool
> set, but can't find anything that details what valid settings are.
>
> I've tr
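(For reference, the per-pool compression knobs as documented for Luminous;
this is a hedged summary, not recovered from the truncated reply:)

ceph osd pool set <pool> compression_algorithm snappy    # snappy|zlib|lz4|zstd
ceph osd pool set <pool> compression_mode aggressive     # none|passive|aggressive|force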
Also please search the ML for min_size=1. Curiously, you're doing it with
size=3.
On Mon, Jul 3, 2017, 8:33 AM Christian Balzer wrote:
>
> Hello,
>
> On Mon, 3 Jul 2017 14:18:27 +0200 Mateusz Skała wrote:
>
> > @Christian, thanks for the quick answer, please look below.
> >
> > > -Original Mess
Hi,
we have a 144-OSD, 6-node Ceph cluster with some pools (2x replicated and EC).
Today I did a Ceph (10.2.5 -> 10.2.7) and kernel update and rebooted
two nodes.
On both nodes some OSDs don't get mounted, and on one node I get some
dmesg entries like:
attempt to access beyond end of device
Currently the Clust
Hello,
How do I view the latest OSD epoch value in Luminous? Normally this can be
found with the commands below:
#ceph -s | grep osd
or
#ceph osd stat
I need to know how to find this in v12.1.0.
Thanks
Jayaram
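(A hedged suggestion, not from the thread: the epoch is still carried in the
osdmap, so it can be read directly:)

ceph osd dump | head -1    # first line reads "epoch <N>"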
On Sun, Jul 2, 2017 at 6:11 PM, Marc Roos wrote:
>
> I have updated a test cluster by just updat
Sorry to bring up an old post, but on Kraken I am unable to repair a PG that is
inconsistent in a cache tier. We removed the bad object but are still seeing
the following error in the OSD's logs.
Prior to removing invalid object:
/var/log/ceph/ceph-osd.126.log:928:2017-07-03 08:07:55.331479 7f
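(For context, the usual commands for chasing an inconsistent PG on Kraken,
offered as a hedged sketch rather than taken from this report; <pgid> is
whatever ceph health detail names:)

ceph health detail | grep inconsistent    # find the affected PG
rados list-inconsistent-obj <pgid> --format=json-pretty
ceph pg repair <pgid>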
Hello,
On Mon, 3 Jul 2017 14:18:27 +0200 Mateusz Skała wrote:
> @Christian, thanks for the quick answer, please look below.
>
> > -Original Message-
> > From: Christian Balzer [mailto:ch...@gol.com]
> > Sent: Monday, July 3, 2017 1:39 PM
> > To: ceph-users@lists.ceph.com
> > Cc: Mateusz Sk
Hi!
Thanks for the super-fast response!
That did work somehow... Here's my command line (as Bluestore seems to still
require a journal, I repurposed the SSD partitions for it and put the DB/WAL on
the spinning disk):
ceph-deploy osd create --bluestore
:/dev/sdc:/dev/mapper/cl-ceph_journal_s
@Christian, thanks for the quick answer, please look below.
> -Original Message-
> From: Christian Balzer [mailto:ch...@gol.com]
> Sent: Monday, July 3, 2017 1:39 PM
> To: ceph-users@lists.ceph.com
> Cc: Mateusz Skała
> Subject: Re: [ceph-users] Cache Tier or any other possibility to acceler
> On 3 July 2017 at 13:01, Mateusz Skała wrote:
>
>
> Hello,
>
> We are using a cache tier in readforward mode (replica 3) to accelerate
> reads, and journals on SSD to accelerate writes. We are using only RBD. Based
> on the ceph docs, RBD has a bad I/O pattern for cache tiering. I'm looking fo
Hello,
On Mon, 3 Jul 2017 13:01:06 +0200 Mateusz Skała wrote:
> Hello,
>
> We are using a cache tier in readforward mode (replica 3) to accelerate
> reads, and journals on SSD to accelerate writes.
OK, lots of things wrong with this statement, but firstly, Ceph version
(it is relevant) and mo
On Mon, 3 Jul 2017 11:30:04 +0000,
Martin Emrich wrote:
> Hi!
>
> Thanks for the hint, but I get this error:
>
> [ceph_deploy][ERROR ] ConfigError: Cannot load config: [Errno 2] No
> such file or directory: 'ceph.conf'; has `ceph-deploy new` been run
> in this directory?
>
> Obviously, ceph
Hi!
Thanks for the hint, but I get this error:
[ceph_deploy][ERROR ] ConfigError: Cannot load config: [Errno 2] No such file
or directory: 'ceph.conf'; has `ceph-deploy new` been run in this directory?
Obviously, ceph-deploy only works if the cluster has been managed with
ceph-deploy all along
Hello,
We are using a cache tier in readforward mode (replica 3) to accelerate
reads, and journals on SSD to accelerate writes. We are using only RBD. Based
on the ceph docs, RBD has a bad I/O pattern for cache tiering. I'm looking for
information about other possibilities to accelerate reads on RBD wi
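(For reference, the cache mode under discussion is set per tier; the pool name
here is an assumption:)

ceph osd tier cache-mode <hot-pool> readforward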
Good to know. Frankly, the RGW isn’t my major concern at the moment; it seems
to be able to handle things well enough.
It’s the RBD/CephFS side of things for one cluster where we will eventually
need to support IPv6 clients but will not necessarily be able to switch
everyone to IPv6 in one go.
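(A hedged note on the mechanics: pre-Nautilus daemons bind a single address
family, controlled by ms_bind_ipv6; the current value can be checked over the
admin socket:)

ceph daemon osd.0 config get ms_bind_ipv6
# switching means setting "ms bind ipv6 = true" in ceph.conf [global]
# and restarting the daemons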
Hi Sage,
On 06/30/2017 06:48 PM, Sage Weil wrote:
> Ah, crap. This is what happens when devs don't read their own
> documentation. I recommend against btrfs every time it ever comes
> up, the downstream distributions all support only xfs, but yes, it
> looks like the docs never got updated...