On Thu, 2015-11-26 at 22:13 +0300, Andrey Korolyov wrote:
> On Thu, Nov 26, 2015 at 1:29 AM, Laurent GUERBY wrote:
> > Hi,
> >
> > After our trouble with the ext4/xattr soft lockup kernel bug we started
> > moving some of our OSDs to XFS; we're using the Ubuntu 14.04 3.19 kernel
> >
I'm a bit confused about this setting with regard to XFS. The docs state:
"Enables writeahead journaling, default for xfs.", which implies to me that
it is on by default for XFS, but right after that they state:
"Default: false"
So is it on or off by default for XFS? And is there a way to tell?
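If it helps, you can read the value a running OSD actually uses via the admin
socket (assuming you run this on the node hosting osd.0):

ceph daemon osd.0 config get filestore_journal_writeahead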
Also -
Thanks, Wido!
Could you please explain a bit more about the relationship between user-created
buckets and the objects within the .bucket.index pool?
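As I understand it, each bucket should have one index object named
.dir.<bucket id> in the index pool, so listing it should show them
(hypothetical bucket id in the comment):

rados -p .bucket.index ls | head   # expect objects like .dir.default.4110.1, one per bucket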
However, I am not seeing one entry created within the .bucket.index pool for each bucket.
Regards
Somnath
On Thu, 2015-11-26 at 07:52 +0000, Межов Игорь Александрович wrote:
> Hi!
>
> >After our trouble with the ext4/xattr soft lockup kernel bug we started
> >moving some of our OSDs to XFS; we're using the Ubuntu 14.04 3.19 kernel
> >and ceph 0.94.5.
>
> It was a rather serious bug, but there is a small
This has nothing to do with the number of seconds between backfills. It is
actually the number of objects from a PG scanned during a single op
when the PG is backfilled. From what I can tell by looking at the source code,
the impact on performance comes from the fact that during this scanning the PG
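For reference, the two settings in question, with their Hammer defaults, can be
checked and adjusted at runtime like this (assuming a local osd.0):

ceph daemon osd.0 config get osd_backfill_scan_min   # default 64
ceph daemon osd.0 config get osd_backfill_scan_max   # default 512
ceph tell osd.* injectargs '--osd-backfill-scan-min 64 --osd-backfill-scan-max 512'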
It seems that you played around with the crushmap and did something wrong.
Compare the output of 'ceph osd tree' with the crushmap. There are some 'osd' devices
renamed to 'device'; I think there is your problem.
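To compare the two, dump the tree and decompile the installed crushmap with the
standard tools:

ceph osd tree
ceph osd getcrushmap -o crush.bin     # fetch the compiled map
crushtool -d crush.bin -o crush.txt   # decompile it for reading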
Sent from a mobile device.
Hi,
I want to know the best practices for starting or stopping all OSDs of a node
with Infernalis.
Before, with init, we used "/etc/init.d/ceph start"; now with systemd I have a
script per OSD: "systemctl start ceph-osd@171.service"
Where is the global one?
Thanks in advance!
SUSE has pretty good documentation about interacting with Ceph using
systemctl -
https://www.suse.com/documentation/ses-1/book_storage_admin/data/ceph_operating_services.html
The following should work:
systemctl start ceph-osd*
On 26/11/15 12:46, Marc Boisis wrote:
>
> Hi,
>
> I want to know
Hi.
Vasiliy, yes, it is a problem with the crushmap. Look at the weights:
" -3 14.56000 host slpeah001
-2 14.56000 host slpeah002
"
Best regards, Irek Fasikhov
Mobile: +79229045757
On 2015-11-26 at 13:16 GMT+03:00, Kamil Kuramshin (ЦИТ РТ) <kamil.kurams...@tatar.ru> wrote:
> It seems that
The documentation is good, but it doesn't work on my CentOS 7:
root@cephrr1n8:/root > systemctl status "ceph*"
ceph\x2a.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
Maybe a bug with CentOS's systemd release. Is there anybody with CentOS 7 +
Infernalis
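One note: systemctl only learned shell-style glob patterns in systemd 209, and
CentOS 7 shipped systemd 208 at the time, which is why the pattern shows up
escaped as the literal unit name ceph\x2a.service. A possible workaround,
assuming the Infernalis packages install these units, is the target or the
per-daemon template:

systemctl start ceph.target            # all Ceph daemons on this node
systemctl start ceph-osd@8.service     # a single OSD (hypothetical id 8)
systemctl list-unit-files | grep ceph  # check which units are installed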
Hi guys. For some months I had a simple working Ceph cluster with 3 nodes and 3
monitors inside. Client, monitor, and cluster networks were on redundant 10Gbps
ports in the same subnet, 10.10.10.0/24.
Here is the conf
#
[global]
auth client required = cephx
auth cluster
Hi,
On 11/25/2015 06:41 PM, Robert LeBlanc wrote:
Since the one that is different is not your primary for the pg, pg
repair is safe.
Ok, that's clear thanks.
I think we managed to identify the root cause of the scrubbing errors,
even though the files are identical.
It seems to be a hardware
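For reference, the commands involved look like this (hypothetical pg id):

ceph health detail | grep inconsistent   # e.g. 'pg 2.37 is active+clean+inconsistent'
ceph pg repair 2.37                      # ask the primary to repair it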
Hi,
I have also seen inconsistent PGs despite the md5 being the same on all
objects; however, all my hardware uses ECC RAM, which as I understand
should prevent this type of error. To be clear: in your case, were you
using ECC or non-ECC modules?
--
Tomasz Kuzemko
tomasz.kuze...@ovh.net
On
Hi,
We don't use ECC modules, but ECC doesn't mean you're safe.
See the presentation I linked earlier:
https://www.nsc.liu.se/lcsc2007/presentations/LCSC_2007-kelemen.pdf
Hi all,
I'm using Python scripts to create RBD images as described here:
http://docs.ceph.com/docs/giant/rbd/librbdpy/
rbd_inst.create(ioctx, 'myimage', size, old_format=False, features=1) seems to
create a layering image
rbd_inst.create(ioctx, 'myimage', size, old_format=False, features=2)
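The features argument is a bitmask of librbd feature flags: 1 is layering and 2
is striping v2, so features=2 should give you a striping-v2 image without
layering. A minimal sketch using the named constants from the rbd module
(assuming a reachable cluster, the default conf path, and a pool named 'rbd'):

import rados
import rbd

# Connect using the default ceph.conf; assumes a readable client keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')
try:
    rbd_inst = rbd.RBD()
    # RBD_FEATURE_LAYERING == 1, RBD_FEATURE_STRIPINGV2 == 2; flags can be
    # OR'd together (striping v2 also wants stripe_unit/stripe_count set).
    rbd_inst.create(ioctx, 'myimage', 4 * 1024 ** 3,  # 4 GiB image
                    old_format=False, features=rbd.RBD_FEATURE_LAYERING)
finally:
    ioctx.close()
    cluster.shutdown()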
ECC will not be able to recover the data, but it will always be able to
detect that data is corrupted. AFAIK under Linux this results in an
immediate halt of the system, so it would not be able to report bad
checksum data during deep-scrub.
--
Tomasz Kuzemko
tomasz.kuze...@corp.ovh.com
On
On 26/11/2015 15:53, Tomasz Kuzemko wrote:
> ECC will not be able to recover the data, but it will always be able to
> detect that data is corrupted.
No. That's a theoretical impossibility, as the detection is done by some
kind of hash over the memory content, which brings the possibility of
Based on [1] and my experience with Hammer, it is seconds. After
adjusting this back to the defaults and doing recovery in our
production cluster I saw batches of recovery start every 64 seconds.
It initially started out nice and distributed, but
Hi there,
I am using Ceph Hammer and I am wondering about the following:
What is the recommended way to find out when an RBD image was last modified?
Thanks
Christoph
--
Christoph Adomeit
GATWORKS GmbH
Reststrauch 191
41199 Moenchengladbach
Sitz: Moenchengladbach
Amtsgericht
Find the block in which the filesystem on your RBD image stores its journal,
find the object hosting that block in RADOS, and use its mtime :-)
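A rough sketch of that, with hypothetical pool and image names (finding which
backing object holds the journal block depends on your filesystem; here I just
stat one object):

rbd info rbd/myimage                               # note block_name_prefix, e.g. rb.0.1234.5678
rados -p rbd ls | grep rb.0.1234.5678              # the objects backing the image
rados -p rbd stat rb.0.1234.5678.0000000000000000  # prints the object's size and mtime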
Jan
> On 26 Nov 2015, at 18:49, Gregory Farnum wrote:
>
> I don't think anything tracks this explicitly for RBD, but each RADOS object
>
On Thu, Nov 26, 2015 at 1:29 AM, Laurent GUERBY wrote:
> Hi,
>
> After our trouble with the ext4/xattr soft lockup kernel bug we started
> moving some of our OSDs to XFS; we're using the Ubuntu 14.04 3.19 kernel
> and ceph 0.94.5.
>
> We have two out of 28 rotational OSDs running XFS