performance
degradation) or will there be mostly one thread?
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
HEALTH_OK, tunables optimal.
What is it?
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
How can I avoid slow requests on rbd v1 snapshot delete? A while ago this
looked solved, but on Emperor it appears again.
Can migrating to rbd format 2 solve it?
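In case it helps, a minimal sketch of moving an image to format 2 via
export/import (pool and image names are hypothetical; the image must be
unused during the copy, and the exact flag name may differ on older rbd
tools):

  # check the current image format
  rbd info mypool/myimage          # look for "format: 1"
  # recreate the image as format 2
  rbd export mypool/myimage - | rbd import --image-format 2 - mypool/myimage.v2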
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
in the write task (if the
code enables it).
So write requests will always be protected from data loss (of course, there is
still a possibility to swap written and offline OSDs in one pass in a large
cluster, but that is a minor concern as long as you mind min_size).
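As a reminder, min_size is a per-pool setting (a sketch; "mypool" is a
hypothetical name):

  # for a size=3 pool, refuse I/O when fewer than 2 replicas are up
  ceph osd pool set mypool min_size 2
  ceph osd pool get mypool min_size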
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
Has anybody tried ceph + suspend|hibernate (for UPS power-off)? Can it cause
problems with ceph sync in case of an async poweroff? I am afraid to try it on
production (v2) first!
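One precaution I would take either way (a sketch, standard commands): tell
the cluster not to rebalance while nodes are suspended:

  ceph osd set noout     # suspended OSDs stay "down" but are not marked out
  # ... suspend/hibernate, later resume ...
  ceph osd unset noout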
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
On May 18, 2013 at 5:48 PM, Dzianis Kahanovich
<maha...@bspu.unibel.by> wrote:
IMHO interaction between QEMU and the kernel's FREEZER (part of the
hibernation cgroups) can solve many of these problems. It can be done via QEMU
host-to-guest sockets and scripts,
or embedded into the virtual hardware (simulating a real suspend
That would require a cooperating VM. What I was looking at was how to do this
for non-cooperating VMs.
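For the cooperating-VM case, a sketch using the QEMU guest agent (the domain
name "myvm" is hypothetical; this requires qemu-guest-agent running inside
the guest):

  # flush and freeze guest filesystems before suspend/snapshot
  virsh qemu-agent-command myvm '{"execute":"guest-fsfreeze-freeze"}'
  # ... suspend or snapshot the VM ...
  virsh qemu-agent-command myvm '{"execute":"guest-fsfreeze-thaw"}'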
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
Gentoo and simple
heartbeat, no corosync, so I don't want to do too much work).
So, if you don't need byte-range locking, I suggest using OCFS2 with the
simple O2CB stack.
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
serious overheads?
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
this with pools, enabling
other ceph features for users, but that is just a prospective goal, not for
current (our) users.
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
-working: on quota overflow nothing is limited, but ceph health shows a
warning. With no other way to enforce quotas, it may qualify as a bug; it is
less pressing only because of the performance limitation with a big number of
pools. So, FYI.
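For reference, a sketch of how the pool quota gets set (pool name
hypothetical); in this era exceeding it only raises the health warning
described above:

  ceph osd pool set-quota mypool max_bytes 107374182400   # 100 GiB
  ceph osd pool set-quota mypool max_objects 1000000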
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by
daemon per node (but still
disk-per-OSD itself). But if you have relatively few [planned] cores per task
on a node, you can think about it.
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
the cluster in any way.
Usually data is distributed per-host, so a whole-array failure causes only a
longer cluster resync, but nothing new cluster-wide.
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by
I plan to migrate the cluster from straw to straw2 mapping. Ceph and kernels
are up to date (kernel 4.1.0), so I want to change straw to straw2 directly in
the crush map and load the changed crush map (in steps: per host and rack). Is
this relatively safe, and will it be remapped at runtime?
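A sketch of the edit cycle I have in mind (standard tools; file names
arbitrary):

  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush.txt
  # edit crush.txt: change "alg straw" to "alg straw2" one bucket at a time
  crushtool -c crush.txt -o crush.new
  ceph osd setcrushmap -i crush.new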
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
PS I started to use these patches with samba 4.1. IMHO some of the problems
may (or must) be solved not inside the vfs code, but outside, in the samba
core; but I still use both in samba 4.2.3 without verification.
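For context, a minimal smb.conf sketch of exporting cephfs through the
vfs_ceph module (share name, path and ceph user are hypothetical):

  [cephshare]
      path = /
      vfs objects = ceph
      ceph:config_file = /etc/ceph/ceph.conf
      ceph:user_id = samba
      read only = no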
Dzianis Kahanovich writes:
I use cephfs over the samba vfs and have some issues.
1) If I use 1
Reading the SourceForge blog, they experienced ceph corruption. IMHO it would
be a good idea to know the technical details: version, what happened...
http://sourceforge.net/blog/sourceforge-infrastructure-and-service-restoration/
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
stuck on mount, but xfs_repair is still required.
PPPS Use swap and avoid forced kills.
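A sketch of the repair sequence implied here (device and mount point are
hypothetical):

  umount /var/lib/ceph/osd/ceph-0
  xfs_repair /dev/sdX1       # try without -L first
  xfs_repair -L /dev/sdX1    # last resort: zeroes the log, recent metadata may be lost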
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
files.
During mds failover, the mds needs to open these files, which takes a long
time.
Can some kind of cache improve the behaviour?
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
John Spray writes:
On Tue, Oct 6, 2015 at 2:21 PM, Dzianis Kahanovich
<maha...@bspu.unibel.by> wrote:
John Spray writes:
On Tue, Oct 6, 2015 at 1:22 PM, Dzianis Kahanovich
<maha...@bspu.unibel.by> wrote:
Even now, with "mds standby replay = true" removed:
e7151: 1/1/1
0700 7 mds.0.cache.dir(19e3a8c) already
fetching; waiting
2015-10-06 23:43:40.929537 7f255eb50700 7 mds.0.cache.dir(19a66a4) already
fetching; waiting
2015-10-06 23:43:40.936432 7f255eb50700 7 mds.0.cache.dir(19c8188) already
fetching; waiting
2015-10-06 23:43:40.975802 7f255ca4b700 -1
2) On 2 additional out-of-cluster (service) nodes:
4.1.8 (now 4.2.3) kernel mount;
4.1.0 both mounts;
3) 2 VMs:
kernel mounts (most active: web & mail);
4.2.3;
fuse mounts - same version as ceph;
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
er 2xVMs) on the apache root. In this place the described
CLONE_FS -> CLONE_VFORK deadlocks used to occur (no more now). But 4.2.3 was
installed just before the tests; it was 4.1.8 with similar effects (but the
log is from 4.2.3 on the VM clients).
Waiting for this night's MDS restart.
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
looks like frozen by CLONE_VFORK: (B) freezes (A) & others (B) (sometimes on
PREEMPT, always on PREEMPT_NONE).
I will restart the mds this night and will look at the restart time.
Dzianis Kahanovich writes:
Yan, Zheng writes:
It seems you have 16 mounts. Are you using kernel client or fuse
client,
John Spray writes:
On Tue, Oct 6, 2015 at 11:43 AM, Dzianis Kahanovich
<maha...@bspu.unibel.by> wrote:
Short: how can I reliably avoid (if possible) fs freezes when 1 of 3 mds
rejoins?
ceph version 0.94.3-242-g79385a8 (79385a85beea9bccd82c99b6bda653f0224c4fcd)
I am moving 2 VM clients from ocfs2 (st
Sorry, skipped some...
John Spray writes:
On Tue, Oct 6, 2015 at 1:22 PM, Dzianis Kahanovich
<maha...@bspu.unibel.by> wrote:
Even now, with "mds standby replay = true" removed:
e7151: 1/1/1 up {0=b=up:active}, 2 up:standby
The cluster gets stuck on KILL of the active mds.b. How do I correctly stop an mds
# vs. laggy beacon
mds decay halflife = 9
mds beacon interval = 8
mds beacon grace = 30
[mds.a]
host = megaserver1
[mds.b]
host = megaserver3
[mds.c]
host = megaserver4
(I tried turning off all the non-defaults; IMHO no results - fixme)
Or maybe I need special care on mds stop (now - SIGKILL).
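A sketch of a cleaner stop than SIGKILL (daemon name from the config above;
the init flavour is an assumption):

  # hand the active rank over to a standby, then stop the daemon
  ceph mds fail b
  /etc/init.d/ceph stop mds.b    # or your distro's equivalent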
S "setuser match path = /var/lib/ceph/$type/$cluster-$id" added to config.
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
I have content for apache 2.4 in cephfs, trying to be scalable, with
"EnableMMAP On". Some environments are known to be unfriendly to MMAP for SMP
scalability (more locks). What are the cephfs-specific recommendations about
apache's EnableMMAP setting?
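For comparison, the conservative httpd.conf settings often suggested for
network filesystems in general (not cephfs-specific advice from this thread):

  EnableMMAP Off
  EnableSendfile Off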
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
Fix: not only Megaraid SAS is affected. For the tested period:
Affected: MegaRAID SAS 2108, Intel 82801JI
Unaffected: Intel C602
Both Intels in AHCI mode.
So the hardware is possibly not important.
Dzianis Kahanovich writes:
> This issue was fixed by "xfs_repair -L".
>
> 1) Megaraid SAS
Dzianis Kahanovich writes:
> Christian Balzer writes:
>
>>> New problem (unsure, but probably not observed in Hammer, but definitely
>>> in Infernalis): copying large (tens of GB) files into kernel cephfs (from
>>> outside the cluster, iron - non-VM, preempt kernel) - makes
eph/commit/24de350d936e5ed70835d0ab2ad6b0b4f506123f.patch
, the previous incident was older & without the patch.
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
tros
reality...
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
PS Now I stopped this mds; the active one migrated and the warning was removed. Cannot try more.
Dzianis Kahanovich writes:
> John Spray writes:
>
>>> It looks like it happened both times at night - probably during long
>>> backup/write operations (something like a compressed local root backup to cephfs). Al
with cluster) I mounted with
"wsize=131072,rsize=131072,write_congestion_kb=128,readdir_max_bytes=131072"
(and net.ipv4.tcp_notsent_lowat = 131072) to conserve RAM. After obtaining
good servers for the VMs I removed it. Maybe it is better to turn it back on
for a better congestion quantum.
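For illustration, the kind of kernel-client mount this describes (monitor
address taken from elsewhere in the thread; mount point and auth options are
hypothetical):

  mount -t ceph 10.227.227.103:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/secret,wsize=131072,rsize=131072,write_congestion_kb=128,readdir_max_bytes=131072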
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
thread counts. Scheduler=noop. size=3 min_size=2
No such problem with fuse.
Looks like a broken or unbalanced congestion mechanism, or I don't know how to
moderate it. I tried write_congestion_kb low (=1) - nothing interesting.
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
after repair too).
PS hammer from git.
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
rge writes (how much memory does your test machine have) when it
> gets flushed.
I bounded all read/write values in the kernel client more than in fuse.
Mostly I understand - the problem is fast writes & slow HDDs. But IMHO some
mechanism must prevent it (congestion-like). And earlier I didn't observe thi
nning mds).
>
> This is the first time you've upgraded your pool to jewel right?
> Straight from 9.X to 10.2.2?
>
Yes
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.by/
Gregory Farnum writes:
> On Thu, Jun 30, 2016 at 1:03 PM, Dzianis Kahanovich <maha...@bspu.by> wrote:
>> Upgraded infernalis->jewel (git, Gentoo). The upgrade was done via a global
>> stop/restart of everything in one shot.
>>
>> Infernalis: e5165: 1/1/1 up {0=c=up:active}, 1
pools 5
metadata_pool 6
inline_data disabled
3104110:10.227.227.103:6800/14627 'a' mds.0.5436 up:active seq 30
3084126:10.227.227.104:6800/24069 'c' mds.0.0 up:standby-replay seq 1
If standby-replay is false, all is OK: 1/1/1 up {0=a=up:active}, 2 up:standby
How to fix this 3-m
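For reference, a sketch of the pre-Luminous standby-replay configuration being
toggled here (host name from the earlier config; exact values are an
assumption):

  [mds.c]
      host = megaserver4
      mds standby replay = true
      mds standby for rank = 0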
If you say cmake is preferred by the developers (and can solve some of this),
I will try to rework the Gentoo ebuild to use it (my own, and I will report it
to the Gentoo bugzilla).
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.by/
virtio-blk benefit from multiple queues?
>
> (I'm hopeful because virtio-scsi had multi-queue support for a while,
> and someone reported increased IOPS even with RBD devices behind those.)
>
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.by/
16 : cluster [ERR]
repair 3.4e 3:73d0516f:::rbd_data.2d2082ae8944a.3239:2368 is an
unexpected clone
2016-09-09 17:24:26.490788 osd.1 10.227.227.103:6802/5237 17 : cluster [ERR]
3.4e repair 0 missing, 1 inconsistent objects
2016-09-09 17:24:26.490807 osd.1 10.227.227.103:6802/5237
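The usual inspection/repair cycle for this kind of inconsistency (PG id taken
from the log above; list-inconsistent-obj assumes jewel-era tools):

  rados list-inconsistent-obj 3.4e --format=json-pretty
  ceph pg deep-scrub 3.4e
  ceph pg repair 3.4e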
Dzianis Kahanovich writes:
>
> I have 1 active+clean+inconsistent PG (from the metadata pool) without real
> error reporting or any other symptoms. All 3 copies are the same (md5sum).
> Deep-scrub, repair, etc. just say "1 errors 0 fixed" in the end. I remember
> this PG may be han