Re: [ceph-users] Using Cephfs Snapshots in Luminous

2019-02-06 Thread Nicolas Huillard
ees, entire filesystem ? * other ? TIA, [1] http://docs.ceph.com/docs/luminous/cephfs/experimental-features/#snapshots
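Since the question is about trying CephFS snapshots on Luminous, here is a minimal sketch of the usual workflow, assuming a filesystem named "cephfs" mounted at /mnt/cephfs (both names are placeholders, not from the thread); snapshots were still experimental in Luminous and had to be enabled explicitly:

  # Snapshots are experimental in Luminous and must be enabled per filesystem
  ceph fs set cephfs allow_new_snaps true --yes-i-really-mean-it

  # A snapshot is created by making a directory inside the special .snap dir
  mkdir /mnt/cephfs/mydata/.snap/before-change

  # List and remove snapshots the same way
  ls /mnt/cephfs/mydata/.snap
  rmdir /mnt/cephfs/mydata/.snap/before-change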

Re: [ceph-users] Packages for debian in Ceph repo

2018-11-06 Thread Nicolas Huillard
forget to install the qemu-block-extra package (Debian stretch) along with qemu-utils, which contains the qemu-img command. This command is actually compiled with rbd support (hence the output above), but needs this extra package to pull in the actual support code and dependencies...
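A brief sketch of what that looks like on Debian stretch; the pool and image names below are placeholders, not taken from the thread:

  # qemu-img comes from qemu-utils, but the rbd block driver is split out
  # into qemu-block-extra on Debian stretch
  apt install qemu-utils qemu-block-extra

  # Without qemu-block-extra, rbd-backed operations fail even though "rbd"
  # appears in the list of supported formats
  qemu-img info rbd:rbd/test-image
  qemu-img convert -f raw -O raw disk.img rbd:rbd/test-image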

Re: [ceph-users] [slightly OT] XFS vs. BTRFS vs. others as root/usr/var/tmp filesystems ?

2018-09-24 Thread Nicolas Huillard
clear: we love xfs, and do not like btrfs very much. Thanks for your anecdote ;-) Could it be that I stack too many things (XFS in LVM in md-RAID in the SSD's FTL)?

Re: [ceph-users] [slightly OT] XFS vs. BTRFS vs. others as root/usr/var/tmp filesystems ?

2018-09-24 Thread Nicolas Huillard
s FS have a lot of very promising features. I view it as the single-host Ceph-like FS, and do not see any equivalent (apart from ZFS, which will also never be included in the kernel).

[ceph-users] [slightly OT] XFS vs. BTRFS vs. others as root/usr/var/tmp filesystems ?

2018-09-22 Thread Nicolas Huillard
-var  253:60   4,7G  0 lvm   /var ├─nvme0n1p2   259:20  29,8G  0 part  [SWAP] TIA !

Re: [ceph-users] No announce for 12.2.8 / available in repositories

2018-09-22 Thread Nicolas Huillard
On Sunday, September 2, 2018 at 11:31 +0200, Nicolas Huillard wrote: > I just noticed that 12.2.8 was available in the repositories, without > any announcement. Since upgrading to unannounced 12.2.6 was a bad idea, > I'll wait a bit anyway ;-) > Where can I find info on this bu

Re: [ceph-users] Remotely tell an OSD to stop ?

2018-09-21 Thread Nicolas Huillard
stion, but don't know if that's the best > approach in this case. > > Patrick > > > On 21.09.2018 10:49, Nicolas Huillard wrote: > > Hi all, > > > > One of my servers crashed its root filesystem, i.e. the currently > > open > > sh

[ceph-users] Remotely tell an OSD to stop ?

2018-09-21 Thread Nicolas Huillard
"in" immediately. Ceph 12.2.7 on Debian TIA, -- Nicolas Huillard ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] No announce for 12.2.8 / available in repositories

2018-09-02 Thread Nicolas Huillard

Re: [ceph-users] Self shutdown of 1 whole system: Oops, it did it again (not yet anymore)

2018-07-31 Thread Nicolas Huillard
... This Intel microcode + vendor BIOS may have mitigated the problem, and postpones hardware replacement... On Tuesday, July 24, 2018 at 12:18 +0200, Nicolas Huillard wrote: > Hi all, > > The same server did it again with the same CATERR exactly 3 days > after > rebooting

Re: [ceph-users] Self shutdown of 1 whole system: Oops, it did it again

2018-07-24 Thread Nicolas Huillard
video record, etc.)... Thanks in advance for any hint ;-) On Saturday, July 21, 2018 at 10:31 +0200, Nicolas Huillard wrote: > Hi all, > > One of my servers silently shut down last night, with no explanation > whatsoever in any logs. According to the existing logs, the shutdown >

Re: [ceph-users] Self shutdown of 1 whole system (Derbian stretch/Ceph 12.2.7/bluestore)

2018-07-23 Thread Nicolas Huillard
On Monday, July 23, 2018 at 12:40 +0200, Oliver Freyermuth wrote: > On 23.07.2018 at 11:18, Nicolas Huillard wrote: > > On Monday, July 23, 2018 at 18:23 +1000, Brad Hubbard wrote: > > > Ceph doesn't shut down systems as in kill or reboot the box if > > > that

Re: [ceph-users] Self shutdown of 1 whole system (Derbian stretch/Ceph 12.2.7/bluestore)

2018-07-23 Thread Nicolas Huillard
current > best theory is that it's related to CPU power management. Disabling > it > in BIOS seems to have helped. Too bad my hardware design relies heavily on power management, and thus on silence...

Re: [ceph-users] "CPU CATERR Fault" Was: Self shutdown of 1 whole system (Derbian stretch/Ceph 12.2.7/bluestore)

2018-07-23 Thread Nicolas Huillard
vent Data (RAW)  : 00  Event Interpretation  : Missing  Description   :  Sensor ID  : CPU CATERR (0x76)  Entity ID : 26.1  Sensor Type (Discrete): Unknown
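The record above is the sort of entry the BMC keeps in its system event log; a hedged sketch of how such output is typically obtained with ipmitool (the record id is an example, not from the thread):

  # List system event log entries, then decode one record in detail
  ipmitool sel elist
  ipmitool sel get 0x0042

  # Query the sensor that raised the event
  ipmitool sensor get "CPU CATERR"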

Re: [ceph-users] Why lvm is recommended method for bleustore

2018-07-23 Thread Nicolas Huillard
; https://www.mail-archive.com/ceph-users@lists.ceph.com/msg46768.html > > ~S

Re: [ceph-users] "CPU CATERR Fault" Was: Self shutdown of 1 whole system (Derbian stretch/Ceph 12.2.7/bluestore)

2018-07-23 Thread Nicolas Huillard
ECC memory. > Kind regards, Many thanks! > > Caspar > > 2018-07-21 10:31 GMT+02:00 Nicolas Huillard : > > > Hi all, > > > > One of my servers silently shut down last night, with no explanation > > whatsoever in any logs. According to the existing logs

Re: [ceph-users] Self shutdown of 1 whole system (Derbian stretch/Ceph 12.2.7/bluestore)

2018-07-23 Thread Nicolas Huillard
y help next time this event occurs... Triggers look like they're tuned for Windows BSOD though... Thanks for all answers ;-) > On Mon, Jul 23, 2018 at 5:04 PM, Nicolas Huillard <nhuillard@dolomede.fr> wrote: > > On Monday, July 23, 2018 at 11:07 +0700, Konstantin Shalygin wrote: > > >

Re: [ceph-users] Self shutdown of 1 whole system (Derbian stretch/Ceph 12.2.7/bluestore)

2018-07-23 Thread Nicolas Huillard
e bugs in this > release. That was done (cf. subject). This is happening with 12.2.7, fresh and 6 days old.

Re: [ceph-users] Self shutdown of 1 whole system (Derbian stretch/Ceph 12.2.7/bluestore)

2018-07-22 Thread Nicolas Huillard
s ! > > Cheers, > Oliver > > On 21.07.2018 at 14:34, Nicolas Huillard wrote: > > I forgot to mention that this server, along with all the other Ceph > > servers in my cluster, does not run anything other than Ceph, and each > > runs > >  all the Ceph da

Re: [ceph-users] Self shutdown of 1 whole system (Derbian stretch/Ceph 12.2.7/bluestore)

2018-07-21 Thread Nicolas Huillard
I forgot to mention that this server, along with all the other Ceph servers in my cluster, does not run anything other than Ceph, and each runs all the Ceph daemons (mon, mgr, mds, 2×osd). On Saturday, July 21, 2018 at 10:31 +0200, Nicolas Huillard wrote: > Hi all, > > One of my server

[ceph-users] Self shutdown of 1 whole system (Derbian stretch/Ceph 12.2.7/bluestore)

2018-07-21 Thread Nicolas Huillard
n.oxygene@3(peon).data_health(66142) update_stats avail 79% total 4758 MB, used 991 MB, avail 3766 MB 2018-07-21 03:57:27.636464 7f25bdc17700 0 mon.oxygene@3(peon).data_health(66142) update_stats avail 79% total 4758 MB, used 991 MB, avail 3766 MB I can see no evidence of intrusion or anything (

Re: [ceph-users] v12.2.7 Luminous released

2018-07-18 Thread Nicolas Huillard
ect for pool ops > (issue#24838, Jason Dillaman) > > > - The config-key interface can store arbitrary binary blobs but JSON >   can only express printable strings.  If binary blobs are present, >   the 'ceph config-key dump' command will show them as something like >   ``<<&l
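For context, a short hedged sketch of the config-key interface the release note refers to; the key name and input file are placeholders:

  # Store a value (possibly a binary blob) under an arbitrary key
  ceph config-key set mgr/example/key -i some-blob.bin

  # Dump all keys; binary values cannot be expressed as printable JSON
  # strings, which is what the note above is about
  ceph config-key dump

  # Retrieve or delete a single key
  ceph config-key get mgr/example/key
  ceph config-key rm mgr/example/key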

Re: [ceph-users] resize wal/db

2018-07-17 Thread Nicolas Huillard
dditional space. Using that space may be on the TODO, though, so this may not be a complete waste of space...
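A hedged way to see how much of a block.db/WAL partition BlueFS actually uses; osd.0 is a placeholder, and the counters come from the OSD's admin socket:

  # BlueFS statistics are exposed through the OSD admin socket; look at the
  # "bluefs" section of the perf dump output
  ceph daemon osd.0 perf dump | grep -A 16 '"bluefs"'
  # db_total_bytes / db_used_bytes show how much of the DB partition is used;
  # wal_total_bytes / wal_used_bytes report the same for the WAL device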

Re: [ceph-users] MDS damaged

2018-07-15 Thread Nicolas Huillard
I should monitor the ML more closely. This means I'll just wait for the fix in 12.2.7 ;-) Have a nice day!

Re: [ceph-users] MDS damaged

2018-07-15 Thread Nicolas Huillard
not sure if it will work as it's apparently doing nothing at > the  > moment (maybe it's just very slow). > > Any help is appreciated, thanks! > > >      Alessandro

Re: [ceph-users] Place on separate hosts?

2018-05-04 Thread Nicolas Huillard
w-choose-tries --show-statistics | less etc. This helped me validate the placement on different hosts and datacenters.
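A hedged sketch of the kind of crushtool run referred to above; the rule id and replica count are placeholders, and the point is to simulate placements offline against the compiled CRUSH map:

  # Grab the cluster's compiled CRUSH map
  ceph osd getcrushmap -o crushmap.bin

  # Simulate placements for a rule and inspect how inputs map to hosts/DCs
  crushtool -i crushmap.bin --test --rule 0 --num-rep 3 \
      --show-statistics --show-mappings | less

  # --show-bad-mappings highlights inputs that could not be fully mapped
  crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-bad-mappings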

Re: [ceph-users] Proper procedure to replace DB/WAL SSD

2018-05-02 Thread Nicolas Huillard
anged, and may or may not trigger a problem upon reboot (I don't know if this UUID is part of the dd'ed data). You did indeed help a lot! Thanks.
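A hedged sketch of the check being discussed, i.e. whether the OSD still finds its DB device after cloning the partition onto a new SSD; device names and the OSD id are placeholders:

  # Clone the old DB partition onto a same-sized partition on the new SSD
  dd if=/dev/sdc1 of=/dev/sdd1 bs=1M status=progress

  # The OSD references its DB device through a symlink; check where it points
  ls -l /var/lib/ceph/osd/ceph-3/block.db

  # The GPT partition unique GUID (PARTUUID) lives in the partition table,
  # not inside the partition, so dd'ing the contents does not copy it
  blkid /dev/sdd1
  sgdisk --info=1 /dev/sdd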

[ceph-users] ZeroDivisionError: float division by zero in /usr/lib/ceph/mgr/dashboard/module.py (12.2.4)

2018-04-15 Thread Nicolas Huillard
24]: File "/usr/lib/ceph/mgr/dashboard/module.py", line 268, in get_rate ceph-mgr[1324]: return (data[-1][1] - data[-2][1]) / float(data[-1][0] - data[-2][0]) ceph-mgr[1324]: ZeroDivisionError: float division by zero HTH,

[ceph-users] "ceph-fuse" / "mount -t fuse.ceph" do not report a failed mount on exit (Pacemaker OCF "Filesystem" resource)

2018-04-11 Thread Nicolas Huillard
code != 0. Is there a Pacemaker/OCF-standard way to report a proper exit code / test for failure another way / anything else? TIA,
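A hedged sketch of a check that sidesteps the unreliable exit code: verify that the mount actually exists rather than trusting the mount helper's return status (the mount point is a placeholder, and the exit codes follow the OCF convention):

  #!/bin/sh
  # Try the CephFS FUSE mount, then verify it independently of the helper's
  # exit code, since the thread reports a zero status on failed mounts
  ceph-fuse -n client.admin /mnt/cephfs

  if mountpoint -q /mnt/cephfs; then
      exit 0    # OCF_SUCCESS
  else
      echo "cephfs mount failed" >&2
      exit 1    # OCF_ERR_GENERIC
  fi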

Re: [ceph-users] Is it possible to suggest the active MDS to move to a datacenter ?

2018-03-29 Thread Nicolas Huillard
Thanks for your answer. On Thursday, March 29, 2018 at 13:51 -0700, Patrick Donnelly wrote: > On Thu, Mar 29, 2018 at 1:02 PM, Nicolas Huillard <nhuillard@dolomede.fr> wrote: > > I manage my 2 datacenters with Pacemaker and Booth. One of them is > > the > > publi

[ceph-users] Is it possible to suggest the active MDS to move to a datacenter ?

2018-03-29 Thread Nicolas Huillard
ilable MON is always the leader. It's also probably less of a problem since MON traffic is low.
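As far as I know there is no direct "prefer this datacenter" setting for the active MDS in Luminous; a hedged sketch of the usual workaround is to fail the current active so that a standby in the preferred datacenter takes over (MDS names below are placeholders):

  # See which MDS is active and which are standby
  ceph fs status

  # Fail the active MDS; a standby (ideally one in the preferred DC) takes over
  ceph mds fail dc2-node1

  # Standby preferences can also be hinted in ceph.conf on each MDS, e.g.:
  #   [mds.dc1-node1]
  #   mds_standby_for_rank = 0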

Re: [ceph-users] session lost, hunting for new mon / session established : every 30s until unmount/remount

2018-03-29 Thread Nicolas Huillard
kets in certain cases). Sorry for the disturbance. I'll continue to test.

Re: [ceph-users] session lost, hunting for new mon / session established : every 30s until unmount/remount

2018-03-29 Thread Nicolas Huillard
" } Since the issue was solved more than a year ago and I use 12.2.4, I guess that's not the issue here. I may change that velue to something else ("simple" or more recent setting?). TIA, Le jeudi 29 mars 2018 à 00:40 +0200, Nicolas Huillard a écrit : > Hi all, > > I didn't

Re: [ceph-users] session lost, hunting for new mon / session established : every 30s until unmount/remount

2018-03-29 Thread Nicolas Huillard
roblem for the cephfs kernel client. > Regards > JC > > > On Mar 28, 2018, at 15:40, Nicolas Huillard <nhuill...@dolomede.fr> > > wrote: > > > > Hi all, > > > > I didn't find much information regarding this kernel client loop in > > the >

[ceph-users] session lost, hunting for new mon / session established : every 30s until unmount/remount

2018-03-28 Thread Nicolas Huillard
k re-opens. I have experienced problems with some daemons like dnsmasq, ntpd, etc. The only solution seems to be to restart those daemons. I may have to unmount/remount cephfs to have the same effect. I'll also try cephfs/FUSE. Did anyone dig into the cause of this flurry of message

Re: [ceph-users] Kernel version for Debian 9 CephFS/RBD clients

2018-03-23 Thread Nicolas Huillard
ts in the kernel that would justify using 4.14 instead of 4.9 ? I can't find any list anywhere... Since I'm building a new cluster, I'd rather choose the latest software from the start if it's justified.

[ceph-users] Kernel version for Debian 9 CephFS/RBD clients

2018-03-23 Thread Nicolas Huillard
server-side kernel don't really matter. TIA,
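For reference, a hedged sketch of the usual way to get a newer kernel onto a Debian 9 (stretch) client, via stretch-backports; the exact kernel version served by backports changes over time:

  # Enable stretch-backports, then install the backported kernel metapackage
  echo 'deb http://deb.debian.org/debian stretch-backports main' \
      > /etc/apt/sources.list.d/backports.list
  apt update
  apt install -t stretch-backports linux-image-amd64

  # After a reboot, check which kernel the CephFS/RBD client is running
  uname -r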

Re: [ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-19 Thread Nicolas Huillard
On Monday, March 19, 2018 at 18:45, Nicolas Huillard wrote: > > Then I tried to reduce the number of MDS, from 4 to 1,  > On Monday, March 19, 2018 at 19:15 +0300, Sergey Malinin wrote: > Forgot to mention that in my setup the issue was gone when I had > reverted back to single M

Re: [ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-19 Thread Nicolas Huillard
your advice!

Re: [ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-19 Thread Nicolas Huillard
elaborate on "original value multiplied several times"? I'm just seeing more MDS_TRIM warnings now. Maybe restarting the MDSs just delayed the re-emergence of the initial problem. > > From: ceph-users <ceph-users-boun...@lists.ceph.com> on behal

[ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-19 Thread Nicolas Huillard
health message (mds.2): Behind on trimming (64/30)". I wonder why cephfs would write anything to the metadata (I'm mounting on the clients with "noatime"), while I'm just reading data from it... What could I tune to reduce that write-load-while-readin
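For context on the "(64/30)" figure: the second number is the mds_log_max_segments threshold (30 by default in Luminous, matching the warning) and the first is how many journal segments the MDS currently holds. A hedged sketch of checking and temporarily raising it while investigating; raising it only hides the symptom, and the MDS name below is a placeholder:

  # Show the current trimming threshold on a running MDS
  ceph daemon mds.a config get mds_log_max_segments

  # Temporarily raise it if the MDS keeps falling behind
  ceph tell mds.a injectargs '--mds_log_max_segments=128'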

[ceph-users] Multiple storage sites for disaster recovery and/or active-active failover

2016-10-06 Thread Nicolas Huillard
ow-level problems, but may also be used in my case, obviously not at the same scale. Thanks in advance, for your reading patience and answers!