[ceph-users] Antw: cephx error - renew key

2015-06-12 Thread Steffen Weißgerber
tombo to...@scs.sk wrote on Tuesday, 9 June 2015 at 21:44: Hello guys, Hi tombo, that seems to be related to http://tracker.ceph.com/issues/4282. We had the same effects but limited to 1 hour. After that the authentication works again. When increasing the log level when the

[ceph-users] SLES Packages

2015-06-01 Thread Steffen Weißgerber
Hi, I'm searching for current packages for SLES11 SP3. Via the SMT update server it seems that only version 0.80.8 is available. Are there other package sources available (at least for Giant)? What I want to do is mount ceph via rbd map natively instead of mounting nfs from another host on which
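A minimal sketch of mapping and mounting an image natively via rbd map (pool, image and mount point names are placeholders, not from the original mail):
  rbd create datapool/myimage --size 102400        # size in MB
  rbd map datapool/myimage --id admin
  mkfs.xfs /dev/rbd/datapool/myimage
  mount /dev/rbd/datapool/myimage /mnt/ceph-rbd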

[ceph-users] Antw: Re: clock skew detected

2015-06-11 Thread Steffen Weißgerber
Andrey Korolyov and...@xdel.ru wrote on Wednesday, 10 June 2015 at 15:29: On Wed, Jun 10, 2015 at 4:11 PM, Pavel V. Kaygorodov pa...@inasan.ru wrote: Hi, for us a restart of the monitor solved this. Regards Steffen Hi! Immediately after a reboot of mon.3 host its clock was

[ceph-users] Antw: Hammer reduce recovery impact

2015-09-23 Thread Steffen Weißgerber
Based on the book 'Learning Ceph' (https://www.packtpub.com/application-development/learning-ceph), chapter on performance tuning, we swapped the values for osd_recovery_op_priority and osd_client_op_priority to 60 and 40. "... osd recovery op priority: This is the priority set for recovery
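Read literally, the change described would amount to the following ceph.conf sketch (the mapping of 60/40 onto the two options follows the order of the sentence, and whether these values actually reduce recovery impact is what the thread goes on to question):
  [osd]
  osd recovery op priority = 60
  osd client op priority = 40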

[ceph-users] Antw: Re: Antw: Hammer reduce recovery impact

2015-09-23 Thread Steffen Weißgerber
>>> Dan van der Ster <d...@vanderster.com> wrote on Wednesday, 23 September 2015 at 14:04: > On Wed, Sep 23, 2015 at 1:44 PM, Steffen Weißgerber > <weissgerb...@ksnb.de> wrote: >> "... osd recovery op priority: This is >> the priority set for

[ceph-users] Antw: Re: Ceph + Libvirt + QEMU-KVM

2016-01-28 Thread Steffen Weißgerber
>>> Bill WONG wrote on Thursday, 28 January 2016 at 09:30: > Hi Marius, > Hello, > with ceph rbd, it looks like it can support qcow2 as well, as per its document: - > http://docs.ceph.com/docs/master/rbd/qemu-rbd/ > -- > Important The raw data format is really the only

[ceph-users] Antw: Re: plain upgrade hammer to infernalis?

2016-02-19 Thread Steffen Weißgerber
>>> Gregory Farnum wrote on Monday, 8 February 2016 at 19:10: > On Mon, Feb 8, 2016 at 10:00 AM, Dzianis Kahanovich > wrote: >> I want to know about plain (not systemd, no deployment tools, only own > simple >> "start-stop-daemon" scripts under

[ceph-users] Antw: Question: replacing all OSDs of one node in 3node cluster

2016-02-19 Thread Steffen Weißgerber
Hi Daniel, we had the same problem with a SataDom on our ceph nodes. After write errors the root partition was mounted read only and the monitor died because logging was not possible anymore. The osd's, however, kept running. For minimal downtime of the node I backed up the system disk via ssh to
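A system-disk backup over ssh of the kind mentioned could look roughly like this (device, compression and target host are placeholders; the exact command used is not in the excerpt):
  dd if=/dev/sda bs=4M | gzip -c | ssh backuphost 'cat > /backup/cephnode-sda.img.gz'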

[ceph-users] Antw: Re: Deprecating ext4 support

2016-04-14 Thread Steffen Weißgerber
>>> Christian Balzer wrote on Tuesday, 12 April 2016 at 01:39: > Hello, > Hi, > I'm officially only allowed to do (preventative) maintenance during weekend > nights on our main production cluster. > That would mean 13 ruined weekends at the realistic rate of 1 OSD

[ceph-users] Antw: Advice on OSD upgrades

2016-04-14 Thread Steffen Weißgerber
Hi, that's how I did it for my osd's 25 to 30 (you can include as many osd numbers as you like as long as you have free space). First you can reweight the osd's to 0 to move their copies to other osd's: for i in {25..30}; do ceph osd crush reweight osd.$i 0; done and then have to wait until it's done (when
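Once the backfill triggered by the reweight has finished, the usual follow-up (sketched from the standard manual removal procedure, not quoted from the mail) is to stop and remove each OSD:
  for i in {25..30}; do
    service ceph stop osd.$i          # stop the daemon on its host
    ceph osd out $i
    ceph osd crush remove osd.$i
    ceph auth del osd.$i
    ceph osd rm $i
  done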

[ceph-users] remote logging

2016-04-14 Thread Steffen Weißgerber
Hello, I tried to configure ceph logging to a remote syslog host based on Sébastien Han's blog (http://www.sebastien-han.fr/blog/2013/01/07/logging-in-ceph/): ceph.conf [global] ... log_file = none log_to_syslog = true err_to_syslog = true [mon] mon_cluster_log_to_syslog = true
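On the syslog side, a minimal rsyslog sketch (host name, port, file path and the program-name match are assumptions, not from the original mail):
  # on each ceph node: forward everything to the log host via UDP
  *.*  @loghost.example.com:514
  # on the log host: accept UDP and write ceph messages to their own file
  $ModLoad imudp
  $UDPServerRun 514
  :programname, startswith, "ceph"   /var/log/ceph-remote.log
  & stop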

[ceph-users] Antw: Re: Deprecating ext4 support

2016-04-15 Thread Steffen Weißgerber
>>> Christian Balzer <ch...@gol.com> wrote on Thursday, 14 April 2016 at 17:00: > Hello, > > [reduced to ceph-users] > > On Thu, 14 Apr 2016 11:43:07 +0200 Steffen Weißgerber wrote: > >> >> >> >>> Christian Balzer <ch.

[ceph-users] Antw: Re: librados: client.admin authentication error

2016-04-15 Thread Steffen Weißgerber
>>> "leon...@gstarcloud.com" schrieb am Freitag, 15. April 2016 um 11:33: > Hello Daniel, > > I'm a newbie to Ceph, and when i config the storage cluster on CentOS 7 VMs, > i encontered the same problem as you posted on >

[ceph-users] Antw: Re: Flood of 'failed to encode map X with expected crc' on 1800 OSD cluster after upgrade

2016-07-12 Thread Steffen Weißgerber
>>> Christian Balzer wrote on Tuesday, 12 July 2016 at 08:47: > Hello, > > On Tue, 12 Jul 2016 08:39:16 +0200 (CEST) Wido den Hollander wrote: > >> Hi, >> >> I am upgrading a 1800 OSD cluster from Hammer 0.94.5 to 0.94.7 prior to > going to Jewel and while doing so

[ceph-users] Antw: Ceph : Generic Query : Raw Format of images

2016-07-21 Thread Steffen Weißgerber
>>> Gaurav Goyal wrote on Wednesday, 20 July 2016 at 17:41: > Dear Ceph User, > Hi, > I want to ask a very generic query regarding ceph. > > Ceph does use the .raw format. But every single company is providing qcow2 > images. > It takes a lot of time to convert
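The conversion itself is a single qemu-img call; a sketch with placeholder names (the second form writes straight into an rbd pool and skips the intermediate file, assuming qemu-img was built with rbd support):
  qemu-img convert -f qcow2 -O raw image.qcow2 image.raw
  qemu-img convert -f qcow2 -O raw image.qcow2 rbd:volumes/image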

[ceph-users] Antw: Re: SSD Journal

2016-07-15 Thread Steffen Weißgerber
>>> Christian Balzer <ch...@gol.com> wrote on Thursday, 14 July 2016 at 17:06: > Hello, > > On Thu, 14 Jul 2016 13:37:54 +0200 Steffen Weißgerber wrote: > >> >> >> >>> Christian Balzer <ch...@gol.com> schrieb am Donners

[ceph-users] Antw: Re: Mounting Ceph RBD image to XenServer 7 as SR

2016-07-05 Thread Steffen Weißgerber
>>> Jake Young wrote on Thursday, 30 June 2016 at 00:28: > On Wednesday, June 29, 2016, Mike Jacobacci wrote: > Hi, >> Hi all, >> >> Is there anyone using rbd for xenserver vm storage? I have XenServer 7 >> and the latest Ceph, I am looking for the

[ceph-users] Antw: Re: Running ceph in docker

2016-07-05 Thread Steffen Weißgerber
>>> Josef Johansson wrote on Thursday, 30 June 2016 at 15:23: > Hi, > Hi, > You could actually manage every osd and mon and mds through docker swarm, > since it's all just software it makes sense to deploy it through docker where you > add the disk that is needed. > >

[ceph-users] Antw: Re: Best practices for extending a ceph cluster with minimal client impact data movement

2016-08-25 Thread Steffen Weißgerber
Hi, >>> Wido den Hollander wrote on Tuesday, 9 August 2016 at 10:05: >> On 8 August 2016 at 16:45, Martin Palma wrote: >> >> >> Hi all, >> >> we are in the process of expanding our cluster and I would like to >> know if there are some best

[ceph-users] rbd cache mode with qemu

2016-08-30 Thread Steffen Weißgerber
Hello, after correcting the configuration for different qemu vm's with rbd disks (we removed the cache=writethrough option to have the default writeback mode) we have a strange behaviour after restarting the vm's. For most of them the cache mode is now writeback as expected. But some nevertheless
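For reference, the cache mode ends up on the qemu command line as part of the rbd drive definition; a sketch with placeholder pool/image names (following the qemu-rbd documentation, not taken from the affected VMs):
  qemu -m 2048 -drive format=raw,file=rbd:rbd/vm-disk-1,cache=writeback,if=virtio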

[ceph-users] Antw: Re: rbd cache mode with qemu

2016-08-31 Thread Steffen Weißgerber
>>> Loris Cuoghi <l...@stella-telecom.fr> wrote on Tuesday, 30 August 2016 at 16:34: > Hello, > Hi Loris, thank you for your answer. > On 30/08/2016 at 14:08, Steffen Weißgerber wrote: >> Hello, >> >> after correcting the configuration for

[ceph-users] Antw: Re: Antw: Re: rbd cache mode with qemu

2016-08-31 Thread Steffen Weißgerber
t rbd_cache_writethrough_until_flush force > writethrough in this case. That makes sense. The time reference of the bug report matches the driver version 61.63.103.3000 from 03.07.2012 distributed with virtio-win-0.1-30.iso from Fedora. Thank you. Regards > > - Original Message - >
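The safety valve being discussed is a client-side option; a minimal ceph.conf sketch (rbd_cache_writethrough_until_flush defaults to true: the cache stays in writethrough until the guest sends its first flush, which old virtio drivers never do):
  [client]
  rbd cache = true
  rbd cache writethrough until flush = true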

[ceph-users] building ceph from source (exorbitant space requirements)

2016-10-10 Thread Steffen Weißgerber
Hi, while using the ceph client also on gentoo, and because I'm a friend of building from source within a ram based filesystem, since ceph release 9.x I'm wondering about the exorbitant space requirements when building the ceph components. Up to hammer, 3GB were sufficient to complete the compile.
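The ram based build area described is just a tmpfs mount over the package build directory; a gentoo-flavoured sketch (path and size are assumptions, the point of the mail being that post-hammer builds no longer fit in a few GB):
  mount -t tmpfs -o size=24G tmpfs /var/tmp/portage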

[ceph-users] Antw: Re: SSS Caching

2016-10-27 Thread Steffen Weißgerber
>>> Christian Balzer wrote on Thursday, 27 October 2016 at 04:07: Hi, > Hello, > > On Wed, 26 Oct 2016 15:40:00 + Ashley Merrick wrote: > >> Hello All, >> >> Currently running a CEPH cluster connected to KVM via the KRBD and used only > for this purpose. >> >> Is

[ceph-users] Antw: Re: reliable monitor restarts

2016-10-25 Thread Steffen Weißgerber
>>> Wido den Hollander <w...@42on.com> wrote on Saturday, 22 October 2016 at 08:35: >> On 21 October 2016 at 21:31, Steffen Weißgerber <weissgerb...@ksnb.de> wrote: >> >> >> Hello, >> >> we're running a 6 node ceph cluster with

[ceph-users] Antw: Re: reliable monitor restarts

2016-10-25 Thread Steffen Weißgerber
ays and will be back at the office tomorrow. Thank you for your help. Regards Steffen > On Sat, Oct 22, 2016 at 6:57 AM, Ruben Kerkhof <ru...@rubenkerkhof.com> wrote: >> On Fri, Oct 21, 2016 at 9:31 PM, Steffen Weißgerber >> <weissgerb...@ksnb.de> wrote: >>> He

[ceph-users] Antw: Re: reliable monitor restarts

2016-10-25 Thread Steffen Weißgerber
Hi, >>> Ruben Kerkhof <ru...@rubenkerkhof.com> wrote on Saturday, 22 October 2016 at 12:57: > On Fri, Oct 21, 2016 at 9:31 PM, Steffen Weißgerber > <weissgerb...@ksnb.de> wrote: >> Hello, >> >> we're running a 6 node ceph cluster with 3 mons on

[ceph-users] reliable monitor restarts

2016-10-21 Thread Steffen Weißgerber
Hello, we're running a 6 node ceph cluster with 3 mons on Ubuntu (14.04.4). Sometimes it happens that the mon services die and have to be restarted manually. To have reliable service restarts I normally use D.J. Bernstein's daemontools on other Linux distributions. Until now I never did this on
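A daemontools run script for a monitor would be a minimal sketch along these lines (service directory and mon id are assumptions; the important part is running ceph-mon in the foreground with -f so supervise can track it):
  #!/bin/sh
  # /etc/service/ceph-mon/run
  exec 2>&1
  exec /usr/bin/ceph-mon -f --cluster ceph --id mon1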

[ceph-users] Antw: Re: hammer on xenial

2016-11-16 Thread Steffen Weißgerber
>>> Robert Sander <r.san...@heinlein-support.de> wrote on Wednesday, 16 November 2016 at 10:23: > On 16.11.2016 09:05, Steffen Weißgerber wrote: >> Hello, >> Hello, >> we started upgrading ubuntu on our ceph nodes to Xenial and had to see that > d

[ceph-users] hammer on xenial

2016-11-16 Thread Steffen Weißgerber
Hello, we started upgrading ubuntu on our ceph nodes to Xenial and found that during the upgrade ceph was automatically upgraded from hammer to jewel as well. Because we don't want to upgrade ceph and the OS at the same time we uninstalled the ceph jewel components and reactivated

[ceph-users] Antw: Re: hammer on xenial

2016-11-16 Thread Steffen Weißgerber
Hi, after doing 'apt-mark hold ceph' the upgrade failed. It seems to be due to some kind of fetch failure: ... OK http://archive.ubuntu.com trusty-backports/universe amd64 Packages Err http://ceph.com xenial/main Translation-en

[ceph-users] Antw: Re: hammer on xenial

2016-11-16 Thread Steffen Weißgerber
config as before the upgrade with iburst entries to the local time server and the rest of the cluster nodes. Regards Steffen >>> Robert Sander <r.san...@heinlein-support.de> 16.11.2016 10:23 >>> On 16.11.2016 09:05, Steffen Weißgerber wrote: > Hello, > > we started u

[ceph-users] Antw: Re: hammer on xenial

2016-11-16 Thread Steffen Weißgerber
Hi, looks good. Because I've made an image of the node's system disk I can revert to the state before the upgrade and restart the whole process. Thank you. Steffen >>> "钟佳佳" 16.11.2016 09:32 >>> hi: you can google apt-mark apt-mark hold PACKAGENAME
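For reference, the hold/unhold commands as a quick sketch (depending on what is installed, ceph-common and related packages may need to be held as well):
  apt-mark hold ceph ceph-common
  apt-mark showhold
  apt-mark unhold ceph ceph-common    # later, when the ceph upgrade is wanted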

[ceph-users] Antw: ceph osd down

2016-11-21 Thread Steffen Weißgerber
>>> 马忠明 wrote on Sunday, 20 November 2016 at 12:16: > Hi guys, > So our cluster always got osd down due to medium errors. Our current action > plan is to replace the defective disk drive. But I was wondering whether it's > too sensitive for ceph to take it down. Or whether

[ceph-users] Antw: Re: SSS Caching

2016-10-27 Thread Steffen Weißgerber
>>> Christian Balzer <ch...@gol.com> wrote on Thursday, 27 October 2016 at 13:55: Hi Christian, > > Hello, > > On Thu, 27 Oct 2016 11:30:29 +0200 Steffen Weißgerber wrote: > >> >> >> >> >>> Christian Balzer <

[ceph-users] Antw: Safely Upgrading OS on a live Ceph Cluster

2017-02-28 Thread Steffen Weißgerber
Hello, some time ago I upgraded our 6 node cluster (0.94.9) running on Ubuntu from Trusty to Xenial. The problem here was that with the OS update ceph was also upgraded, which we did not want in the same step because then we would have had to upgrade all nodes at the same time. Therefore we did it node by

[ceph-users] Probleme mit Pathologie-Rechner (Job: 116.152)

2017-08-01 Thread Steffen Weißgerber
block. That would also have to be released again. Many thanks for your support. Kind regards Steffen Weißgerber IT-Zentrum -- Klinik-Service Neubrandenburg GmbH Allendestr. 30, 17036 Neubrandenburg Amtsgericht Neubrandenburg, HRB 2457 Geschaeftsfuehrerin: Gudrun Kappich

[ceph-users] Antw: Re: Performance after adding a node

2017-05-09 Thread Steffen Weißgerber
Hi, checking the current value for osd_max_backfills on our cluster (0.94.9) I also made a config diff of the osd configuration (ceph daemon osd.0 config diff) and wondered why there's a displayed default of 10 which differs from the documented default at
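For reference, a quick sketch of inspecting the value via the admin socket on the OSD host (the osd id is just an example):
  ceph daemon osd.0 config get osd_max_backfills
  ceph daemon osd.0 config diff | grep -A 2 osd_max_backfills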

[ceph-users] Antw: Re: Antw: Re: Antw: problem with automounting cephfs on KVM VM boot

2018-02-01 Thread Steffen Weißgerber
ets to mount cephfs on boot. > > Thanks again for your replies and questions which directed me to the right way > and let me find a workaround! > > Steffen Weißgerber wrote on 01/02/18 13:30: >> Ok, so it seems that all things necessary for mount are configured for >> now bu

[ceph-users] Antw: problem with automounting cephfs on KVM VM boot

2018-02-01 Thread Steffen Weißgerber
Hello, and what happens when you mount it manually using the fstab entry with 'mount /mnt/ceph'? Regards Steffen >>> wrote on Wednesday, 31 January 2018 at 16:19: > Hello! > > I need to mount cephfs automatically on KVM VM boot. > > I tried to follow recommendations
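An example fstab entry for a kernel cephfs mount, as a sketch (monitor addresses, secret file and mount point are placeholders; the poster's actual entry is not shown in the excerpt, and _netdev is the part relevant to mounting at boot):
  192.168.0.1:6789,192.168.0.2:6789:/  /mnt/ceph  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  0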

[ceph-users] Antw: Re: Antw: problem with automounting cephfs on KVM VM boot

2018-02-01 Thread Steffen Weißgerber
/rc.log? Regards Steffen >>> <kna...@gmail.com> wrote on Thursday, 1 February 2018 at 09:24: > Hello, Steffen! > > Thanks for the reply! > > Please, see my comments inline. > > Steffen Weißgerber wrote on 01/02/18 11:16: >> Hello, >> >> and w

[ceph-users] Antw: Re: Luminous/Ubuntu 16.04 kernel recommendation ?

2018-02-09 Thread Steffen Weißgerber
Hello, >>> Kevin Olbrich wrote on Thursday, 8 February 2018 at 12:54: > 2018-02-08 11:20 GMT+01:00 Martin Emrich : > >> I have a machine here mounting a Ceph RBD from luminous 12.2.2 locally, >> running linux-generic-hwe-16.04

[ceph-users] Antw: RBD device as SBD device for pacemaker cluster

2018-02-07 Thread Steffen Weißgerber
Hello Kai, we have been using RBD's as part of pacemaker resource groups for 2 years on Hammer with no problems. The resource is always configured in active/passive mode due to the fact that the filesystem is not cluster aware. Therefore during switchover the RBD's are unmapped cleanly on the active node