ees, entire filesystem?
* other?
TIA,
[1] http://docs.ceph.com/docs/luminous/cephfs/experimental-features/#snapshots
--
Nicolas Huillard
forget to
install the qemu-block-extra package (Debian stretch) along with qemu-
utils, which contains the qemu-img command.
This command is actually compiled with rbd support (hence the output
above), but needs this extra package to pull in the actual supporting
code and dependencies...
--
Nicolas Huillard
clear: we love xfs, and do not like btrfs very much.
Thanks for your anecdote ;-)
Could it be that I stack too many things (XFS in LVM in md-RAID in the
SSD's FTL)?
--
Nicolas Huillard
This FS has a lot of very promising features. I view it
as the single-host-Ceph-like FS, and do not see any equivalent (apart
from ZFS, which will also never be included in the kernel).
--
Nicolas Huillard
-var          253:6    0  4,7G  0 lvm  /var
├─nvme0n1p2   259:2    0 29,8G  0 part [SWAP]
TIA !
--
Nicolas Huillard
On Sunday 2 September 2018 at 11:31 +0200, Nicolas Huillard wrote:
> I just noticed that 12.2.8 was available on the repositories, without
> any announcement. Since upgrading to the unannounced 12.2.6 was a bad
> idea, I'll wait a bit anyway ;-)
> Where can I find info on this bu
> stion, but don't know if that's the best
> approach in this case.
>
> Patrick
>
>
> On 21.09.2018 10:49, Nicolas Huillard wrote:
> > Hi all,
> >
> > One of my servers crashed its root filesystem, i.e. the currently
> > open
> > sh
"in"
immediately.
Ceph 12.2.7 on Debian
TIA,
--
Nicolas Huillard
... This Intel microcode + vendor
BIOS may have mitigated the problem, and postponed hardware
replacement...
On Tuesday 24 July 2018 at 12:18 +0200, Nicolas Huillard wrote:
> Hi all,
>
> The same server did it again with the same CATERR exactly 3 days
> after
> rebooting
video
record, etc.)...
Thanks in advance for any hint ;-)
On Saturday 21 July 2018 at 10:31 +0200, Nicolas Huillard wrote:
> Hi all,
>
> One of my servers silently shut down last night, with no explanation
> whatsoever in any logs. According to the existing logs, the shutdown
>
On Monday 23 July 2018 at 12:40 +0200, Oliver Freyermuth wrote:
> On 23.07.2018 at 11:18, Nicolas Huillard wrote:
> > On Monday 23 July 2018 at 18:23 +1000, Brad Hubbard wrote:
> > > Ceph doesn't shut down systems as in kill or reboot the box if
> > > that
> current
> best theory is that it's related to CPU power management. Disabling
> it
> in BIOS seems to have helped.
Too bad my hardware design relies heavily on power management, and thus
silence...
--
Nicolas Huillard
Event Data (RAW)       : 00
Event Interpretation   : Missing
Description            :
Sensor ID              : CPU CATERR (0x76)
Entity ID              : 26.1
Sensor Type (Discrete) : Unknown
--
Nicolas Huillard
> > https://www.mail-archive.com/ceph-users@lists.ceph.com/msg46768.html
> >
> > ~S
--
Nicolas Huillard
> ECC memory.
> Kind regards,
Many thanks!
>
> Caspar
>
> 2018-07-21 10:31 GMT+02:00 Nicolas Huillard :
>
> > Hi all,
> >
> > One of my servers silently shut down last night, with no explanation
> > whatsoever in any logs. According to the existing logs
y help next time this event occurs... Triggers look like
they're tuned for Windows BSOD though...
Thanks for all answers ;-)
> On Mon, Jul 23, 2018 at 5:04 PM, Nicolas Huillard
> <nhuillard@dolomede.fr> wrote:
> > On Monday 23 July 2018 at 11:07 +0700, Konstantin Shalygin wrote:
> e bugs in this
> release.
That was done (cf. subject).
This is happening with 12.2.7, on a fresh install that is 6 days old.
--
Nicolas Huillard
s!
>
> Cheers,
> Oliver
>
> On 21.07.2018 at 14:34, Nicolas Huillard wrote:
> > I forgot to mention that this server, along with all the other Ceph
> > servers in my cluster, does not run anything other than Ceph, and
> > each runs
> > all the Ceph da
I forgot to mention that this server, along with all the other Ceph
servers in my cluster, does not run anything other than Ceph, and each
runs all the Ceph daemons (mon, mgr, mds, 2×osd).
On Saturday 21 July 2018 at 10:31 +0200, Nicolas Huillard wrote:
> Hi all,
>
> One of my server
mon.oxygene@3(peon).data_health(66142) update_stats avail 79% total 4758 MB,
used 991 MB, avail 3766 MB
2018-07-21 03:57:27.636464 7f25bdc17700 0
mon.oxygene@3(peon).data_health(66142) update_stats avail 79% total 4758 MB,
used 991 MB, avail 3766 MB
I can see no evidence of intrusion or anything (
> ect for pool ops
> (issue#24838, Jason Dillaman)
>
>
> - The config-key interface can store arbitrary binary blobs but JSON
> can only express printable strings. If binary blobs are present,
> the 'ceph config-key dump' command will show them as something like
> ``<<<
dditional space.
Using that space may be on the TODO, though, so this may not be a
complete waste of space...
--
Nicolas Huillard
I should monitor the ML more closely.
This means I'll just wait for the fix with 12.2.7 ;-)
Have a nice day!
--
Nicolas Huillard
> not sure if it will work as it's apparently doing nothing at
> the
> moment (maybe it's just very slow).
>
> Any help is appreciated, thanks!
>
>
> Alessandro
>
--show-choose-tries --show-statistics | less
etc.
This helped me validate the placement on different hosts and
datacenters.
--
Nicolas Huillard
changed, and may or may not trigger a problem upon
reboot (I don't know if this UUID is part of the dd'ed data).
You effectively helped a lot! Thanks.
--
Nicolas Huillard
ceph-mgr[1324]:   File "/usr/lib/ceph/mgr/dashboard/module.py", line 268,
in get_rate
ceph-mgr[1324]:     return (data[-1][1] - data[-2][1]) / float(data[-1][0] -
data[-2][0])
ceph-mgr[1324]: ZeroDivisionError: float division by zero
HTH,
--
Nicolas Huillard
code != 0.
Is there a standard Pacemaker/OCF way to report a proper exit code /
test for failure another way / anything else?
TIA,
--
Nicolas Huillard
Thanks for your answer.
On Thursday 29 March 2018 at 13:51 -0700, Patrick Donnelly wrote:
> On Thu, Mar 29, 2018 at 1:02 PM, Nicolas Huillard
> <nhuillard@dolomede.fr> wrote:
> > I manage my 2 datacenters with Pacemaker and Booth. One of them is
> > the
> > publi
ilable MON is always the leader. It's also probably less of a
problem since MON traffic is low.
--
Nicolas Huillard
kets in certain cases).
Sorry for the noise. I'll continue testing.
--
Nicolas Huillard
"
}
Since the issue was solved more than a year ago and I use 12.2.4, I
guess that's not the issue here. I may change that value to something
else ("simple" or a more recent setting?).
TIA,
On Thursday 29 March 2018 at 00:40 +0200, Nicolas Huillard wrote:
> Hi all,
>
> I didn't
> problem for the cephfs kernel client.
> Regards
> JC
>
> > On Mar 28, 2018, at 15:40, Nicolas Huillard <nhuill...@dolomede.fr>
> > wrote:
> >
> > Hi all,
> >
> > I didn't find much information regarding this kernel client loop in
> > the
>
k re-opens. I have experienced problems with some daemons
like dnsmasq, ntpd, etc. The only solution seems to be to restart those
daemons.
I may have to unmount/remount cephfs to have the same effect. I'll also
try the cephfs FUSE client.
Did anyone dig into the cause of this flurry of message
ts in the kernel that
would justify using 4.14 instead of 4.9? I can't find any such list
anywhere...
Since I'm building a new cluster, I'd rather choose the latest software
from the start if it's justified.
--
Nicolas Huillard
server-side kernel don't really matter.
TIA,
--
Nicolas Huillard
On Monday, March 19, 2018 at 18:45, Nicolas Huillard wrote:
> > Then I tried to reduce the number of MDS, from 4 to 1,
> On Monday 19 March 2018 at 19:15 +0300, Sergey Malinin wrote:
> Forgot to mention that in my setup the issue went away when I had
> reverted back to single M
your advice!
--
Nicolas Huillard
elaborate on "original value multiplied several times"?
I'm just seeing more MDS_TRIM warnings now. Maybe restarting the MDSs
just delayed re-emergence of the initial problem.
>
> From: ceph-users <ceph-users-boun...@lists.ceph.com> on behal
health message (mds.2): Behind on trimming (64/30)".
I wonder why cephfs would write anything to the metadata (I'm mounting
on the clients with "noatime"), while I'm just reading data from it...
What could I tune to reduce that write-load-while-reading?
ow-level
problems, but may also be used in my case, obviously not at the same
scale.
Thanks in advance for your reading patience and your answers!
--
Nicolas Huillard
Founding partner - Technical Director - Dolomède
nhuill...@dolomede.fr
Landline: +33 9 52 31 06 10
Mobile: +33 6 50 27