Re: [ceph-users] SSDs behind Hardware Raid

2019-12-04 Thread Stolte, Felix
smime.p7m Description: S/MIME encrypted message

[ceph-users] SSDs behind Hardware Raid

2019-12-04 Thread Stolte, Felix
smime.p7m Description: S/MIME encrypted message

[ceph-users] New best practices for osds???

2019-07-16 Thread Stolte, Felix
Hi guys, our ceph cluster is performing far below what it could, based on the disks we are using. We could narrow it down to the storage controller (LSI SAS3008 HBA) in combination with an SAS expander. Yesterday we had a meeting with our hardware reseller and sales representatives of the
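
One way to check whether the SAS3008/expander path is the bottleneck is a raw, single-disk fio run against an unused device, compared with the same run on a port that bypasses the expander. A minimal sketch, assuming fio is installed and /dev/sdX is a hypothetical, unmounted test disk (the run writes to it destructively):

    # 4k synchronous writes straight to the raw device, bypassing the page cache
    fio --name=raw-write-test --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 \
        --runtime=60 --time_based --group_reporting

If the same disk delivers noticeably more IOPS when attached without the expander (or on a different controller), the HBA/expander combination is the place to look.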

[ceph-users] Missing Ubuntu Packages on Luminous

2019-07-08 Thread Stolte, Felix
Hi folks, I want to use the community repository http://download.ceph.com/debian-luminous for my Luminous cluster instead of the packages provided by Ubuntu itself. But apparently only the ceph-deploy package is available for bionic (Ubuntu 18.04). All packages exist for trusty though. Is this
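
For reference, the usual way to point a host at the community repository is sketched below; whether the packages actually exist for bionic is exactly what the question is about, so this only shows the mechanics (release key URL and suite name as commonly documented):

    wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
    echo deb https://download.ceph.com/debian-luminous/ bionic main | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt update
    apt-cache policy ceph-osd    # shows which repository would actually supply the package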

Re: [ceph-users] Ceph-volume ignores cluster name from ceph.conf

2019-06-28 Thread Stolte, Felix
smime.p7m Description: S/MIME encrypted message

[ceph-users] Ceph-volume ignores cluster name from ceph.conf

2019-06-27 Thread Stolte, Felix
Hi folks, I have a Nautilus 14.2.1 cluster with a non-default cluster name (ceph_stag instead of ceph). I set “cluster = ceph_stag” in /etc/ceph/ceph_stag.conf. ceph-volume is using the correct config file but does not use the specified cluster name. Did I hit a bug or do I need to define the
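
For context, a sketch of the setup being described plus an explicit override; whether ceph-volume honours --cluster depends on the release (custom cluster names were later deprecated), so treat the flag as an assumption to verify:

    # /etc/ceph/ceph_stag.conf
    [global]
    cluster = ceph_stag
    fsid = <cluster fsid>

    # tell ceph-volume the cluster name explicitly instead of relying on the conf file
    ceph-volume --cluster ceph_stag lvm list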

Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-11 Thread Stolte, Felix
- - From: John Petrini Date: Friday, 7 June 2019 at 15:49 To: "Stolte, Felix" Cc: Sinan Polat, ceph-users Subject: Re: [

Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-07 Thread Stolte, Felix
nd regards, Sinan Polat > On 7 June 2019 at 12:47, "Stolte, Felix" wrote: > > > Hi Sinan, > > that would be great. The numbers should differ a lot, since you have an all > flash pool, but it would

Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-07 Thread Stolte, Felix
ur command on my cluster? Sinan > On 7 Jun 2019 at 08:52, Stolte, Felix wrote the following: > > I have no performance data from before we migrated to bluestore. You should start a separate topic regarding your question. > > Could anyone wit

Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-07 Thread Stolte, Felix
know what the difference in IOPS is? And is the advantage more or less when your SATA HDDs are slower? -Original Message- From: Stolte, Felix [mailto:f.sto...@fz-juelich.de] Sent: Thursday, 6 June 2019 10:47 To: ceph-users Subject: [ceph-users] Expected IO in lum

[ceph-users] Expected IO in luminous Ceph Cluster

2019-06-06 Thread Stolte, Felix
Hello folks, we are running a Ceph cluster on Luminous consisting of 21 OSD nodes, each with 9 x 8TB SATA drives and 3 Intel 3700 SSDs for BlueStore WAL and DB (1:3 ratio). OSDs have 10Gb for the public and cluster networks. The cluster has been running stable for over a year. We hadn't taken a closer look at IO
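
A quick way to put numbers on such a cluster is rados bench against a throwaway pool; a minimal sketch, with pool name and PG count as placeholder values:

    ceph osd pool create bench 128 128
    rados bench -p bench 60 write -b 4M -t 16 --no-cleanup
    rados bench -p bench 60 seq -t 16
    rados bench -p bench 60 rand -t 16
    rados -p bench cleanup
    ceph osd pool delete bench bench --yes-i-really-really-mean-it

Comparing the write result with (number of SATA spindles x per-disk throughput / replica count) gives a rough idea of how far off the cluster is.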

[ceph-users] Nfs-ganesha with rados_kv backend

2019-05-29 Thread Stolte, Felix
Hi, is anyone running an active-passive nfs-ganesha cluster with a cephfs backend and using the rados_kv recovery backend? My setup runs fine, but takeover is giving me a headache. On takeover I see the following messages in ganesha's log file: 29/05/2019 15:38:21 : epoch 5cee88c4 : cephgw-e2-1 :
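
For comparison, the ganesha.conf pieces relevant to the rados_kv recovery backend look roughly like the sketch below; pool, userid and nodeid are assumptions to adapt, and the parameter names should be checked against the nfs-ganesha version in use:

    NFSv4 {
        RecoveryBackend = rados_kv;
    }

    RADOS_KV {
        ceph_conf = "/etc/ceph/ceph.conf";
        userid = "nfs-ganesha";
        pool = "nfs-ganesha";
        nodeid = "cephgw-e2-1";
    }

Takeover trouble with rados_kv usually comes down to the standby not reading the recovery records written under the failed node's identity, so the nodeid handling is the first thing to check.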

Re: [ceph-users] Clients failing to respond to cache pressure

2019-05-09 Thread Stolte, Felix
08.05.19, 18:33, "Patrick Donnelly" wrote: On Wed, May 8, 2019 at 4:10 AM Stolte, Felix wrote: > > Hi folks, > > we are running a luminous cluster and using the cephfs for file services. We use Tivoli Storage Manager to back up all data in the ceph filesystem

Re: [ceph-users] Clients failing to respond to cache pressure

2019-05-08 Thread Stolte, Felix
ay 8, 2019 at 1:10 PM Stolte, Felix wrote: > > Hi folks, > > we are running a luminous cluster and using the cephfs for file services. We use Tivoli Storage Manager to back up all data in the ceph filesystem to tape for disaster recovery. Backup runs on two dedicated

Re: [ceph-users] Clients failing to respond to cache pressure

2019-05-08 Thread Stolte, Felix
smime.p7m Description: S/MIME encrypted message

[ceph-users] Clients failing to respond to cache pressure

2019-05-08 Thread Stolte, Felix
Hi folks, we are running a luminous cluster and using the cephfs for file services. We use Tivoli Storage Manager to back up all data in the ceph filesystem to tape for disaster recovery. Backup runs on two dedicated servers, which mount the cephfs via the kernel client. In order to complete the
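
The usual first knob for this warning on Luminous is the MDS cache size; a sketch, assuming the MDS host has RAM to spare and 4 GiB is only an illustrative value:

    # persistent, in ceph.conf on the MDS hosts
    [mds]
    mds_cache_memory_limit = 4294967296

    # or applied at runtime
    ceph tell mds.* injectargs '--mds_cache_memory_limit=4294967296'

If the backup clients hold a huge number of caps while walking the whole tree, a larger MDS cache only goes so far; remounting the cephfs on the backup servers between runs releases their caps.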

[ceph-users] clients failing to respond to cache pressure

2019-05-08 Thread Stolte, Felix
smime.p7m Description: S/MIME encrypted message

[ceph-users] Required caps for cephfs

2019-04-30 Thread Stolte, Felix
Hi folks, we are using nfs-ganesha to expose cephfs (Luminous) to NFS clients. I want to make use of snapshots, but limit the creation of snapshots to ceph admins. I read a while ago about cephx capabilities which allow/deny the creation of snapshots, but I can’t find the info anymore. Can
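
The capability in question is the 's' flag in the MDS caps; a sketch, assuming a release whose "ceph fs authorize" already supports the flag (snapshot handling matured after Luminous), with client names and paths as placeholders:

    # client that may create and delete snapshots below /admin
    ceph fs authorize cephfs client.snapadmin /admin rws

    # ordinary client without the snapshot flag
    ceph fs authorize cephfs client.user / rw

Clients without the 's' flag should still be able to read existing snapshots under .snap, but cannot create or remove them.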

[ceph-users] Fujitsu

2017-04-20 Thread Stolte, Felix
Hello cephers, is anyone using Fujitsu hardware for Ceph OSDs with the PRAID EP400i RAID controller in JBOD mode? We have three identical servers with identical disk placement. The first three slots are SSDs for journaling and the remaining nine slots hold SATA disks. The problem is that in Ubuntu (and
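
If the problem turns out to be device names or ordering differing between the otherwise identical boxes, the persistent links udev creates are the usual way around it; a sketch, with lsscsi as an optional extra package:

    # stable names derived from the physical slot / SAS address
    ls -l /dev/disk/by-path/
    ls -l /dev/disk/by-id/

    # map kernel names back to controller and slot, if lsscsi is installed
    lsscsi -g

Referencing journal SSDs and data disks by /dev/disk/by-id instead of /dev/sdX keeps the layout identical across reboots and across all three servers.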

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-11 Thread Stolte, Felix
: Friday, 11 December 2015 15:17 To: Stolte, Felix; Jens Rosenboom Cc: ceph-us...@ceph.com Subject: Re: AW: [ceph-users] ceph-disk list crashes in infernalis Hi Felix, Could you try again? Hopefully that's the right one :-) https://raw.githubusercontent.com/dachary/ceph

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-11 Thread Stolte, Felix
Hi Jens, output is attached (stderr + stdout) Regards -Original Message- From: Jens Rosenboom [mailto:j.rosenb...@x-ion.de] Sent: Friday, 11 December 2015 09:10 To: Stolte, Felix Cc: Loic Dachary; ceph-us...@ceph.com Subject: Re: [ceph-users] ceph-disk list crashes

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-10 Thread Stolte, Felix
, 9 December 2015 23:55 To: Stolte, Felix; ceph-us...@ceph.com Subject: Re: AW: [ceph-users] ceph-disk list crashes in infernalis Hi Felix, It would be great if you could try the fix from https://github.com/dachary/ceph/commit/7395a6a0c5776d4a92728f1abf0e8a87e5d5e4bb . It's only changing the ceph

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-10 Thread Stolte, Felix
-Original Message- From: Loic Dachary [mailto:l...@dachary.org] Sent: Friday, 11 December 2015 02:12 To: Stolte, Felix; ceph-us...@ceph.com Subject

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-08 Thread Stolte, Felix
-Original Message- From: Loic Dachary [mailto:l...@dachary.org] Sent: Tuesday, 8 December 2015 15:17 To: Stolte, Felix; ceph-us...@ceph.com Subject: Re: [

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-08 Thread Stolte, Felix
-Original Message- From: Loic Dachary [mailto:l...@dachary.org] Sent: Tuesday, 8 December 2015 15:06 To: Stolte, Felix; ceph-us...@ceph.com Subject: Re: [ceph-users] ceph-disk list crashes in infernalis Hi Felix,

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-06 Thread Stolte, Felix
-Original Message- From: Loic Dachary [mailto:l...@dachary.org] Sent: Saturday, 5 December 2015 19:29 To: Stolte, Felix; ceph-us...@ceph.com Subject: Re: AW: [ceph-users] ceph-

[ceph-users] ceph-disk list crashes in infernalis

2015-12-03 Thread Stolte, Felix
Hi all, I upgraded from Hammer to Infernalis today, and even though I had a hard time doing so, I finally got my cluster running in a healthy state (mainly my fault, because I did not read the release notes carefully). But when I try to list my disks with "ceph-disk list" I get the following
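
When reporting such a crash, the full traceback with verbose output is what the follow-ups in this thread work from; a sketch, assuming the Infernalis ceph-disk still accepts the --verbose switch:

    ceph-disk --verbose list 2>&1 | tee /tmp/ceph-disk-list.log

The captured log can then be attached to the mailing list reply or a tracker ticket.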

[ceph-users] Running Openstack Nova and Ceph OSD on same machine

2015-10-26 Thread Stolte, Felix
Hi all, is anyone running nova-compute on Ceph OSD servers who could share their experience? Thanks and regards, Felix

[ceph-users] RadosGW problems on Ubuntu

2015-08-14 Thread Stolte, Felix
Hello everyone, we are currently testing Ceph (Hammer) and OpenStack (Kilo) on Ubuntu 14.04 LTS servers. Yesterday I tried to set up the RADOS Gateway with Keystone integration for Swift via ceph-deploy. I followed the instructions on http://ceph.com/docs/master/radosgw/keystone/ and
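
For orientation, the Keystone-related settings Hammer's radosgw expected in ceph.conf look roughly like the sketch below; the section name and all values are placeholders to adapt:

    [client.radosgw.gateway]
    rgw keystone url = http://keystone-host:35357
    rgw keystone admin token = <admin token>
    rgw keystone accepted roles = Member, admin
    rgw keystone token cache size = 500
    rgw s3 auth use keystone = true

With PKI tokens, Hammer additionally needed the Keystone certificates converted into an NSS database referenced by "nss db path".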