Re: [ceph-users] EC pools grinding to a screeching halt on Luminous

2018-12-31 Thread Marcus Murwall
please let us know. I wish you all a happy new year. Regards, Marcus. Mohamad Gebai <mge...@suse.de>, 28 December 2018 at 16:10: Hi Marcus, On 12/27/18 4:21 PM, Marcus Murwall wrote: Hey Mohamad, I work with Florian on this issue. Just reinstalled the ceph cluster and triggered the error

Re: [ceph-users] Migration of a Ceph cluster to a new datacenter and new IPs

2018-12-27 Thread Marcus Müller
On Wed, Dec 19, 2018 at 8:55 PM Marcus Müller wrote: >> Hi all, we’re running a ceph hammer cluster with 3 mons and 24 osds (3 same nodes) and need to m

[ceph-users] Migration of a Ceph cluster to a new datacenter and new IPs

2018-12-19 Thread Marcus Müller
Hi all, we’re running a ceph hammer cluster with 3 mons and 24 osds (3 same nodes) and need to migrate all servers to a new datacenter and change the IPs of the nodes. I found this tutorial:
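For reference, the monmap-editing route such tutorials describe usually amounts to something like the following; this is only a rough sketch with hypothetical mon names (mon1..mon3) and placeholder new addresses, to be verified against the documentation for your release:

  ceph mon getmap -o /tmp/monmap                     # export the current monmap
  monmaptool --print /tmp/monmap                     # check what is in it
  monmaptool --rm mon1 --rm mon2 --rm mon3 /tmp/monmap
  monmaptool --add mon1 10.0.1.1:6789 --add mon2 10.0.1.2:6789 --add mon3 10.0.1.3:6789 /tmp/monmap
  # stop all monitors, then inject the edited map on every mon host and restart:
  ceph-mon -i mon1 --inject-monmap /tmp/monmap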

[ceph-users] Purge Ceph Node and reuse it for another cluster

2018-09-26 Thread Marcus Müller
Hi all, Is it safe to purge a ceph osd / mon node as described here: http://docs.ceph.com/docs/giant/rados/deployment/ceph-deploy-purge/ and later use this node with the same OS again for another production ceph cluster?
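For what it's worth, the purge workflow in that document boils down to roughly the following (node name is hypothetical; the disk-zap syntax varies between ceph-deploy versions):

  ceph-deploy purge node1          # remove the ceph packages from the node
  ceph-deploy purgedata node1      # wipe /var/lib/ceph and /etc/ceph on the node
  ceph-deploy forgetkeys           # drop the old cluster's keyrings on the admin node
  ceph-deploy disk zap node1:sdb   # optionally clear old OSD disks before reuse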

[ceph-users] Ceph mon quorum problems under load

2018-07-06 Thread Marcus Haarmann
we are a little stuck on where to search for a solution. What debug output would help to see whether we have a disk or network problem here? Thanks for your input! Marcus Haarmann
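One way to get more signal, sketched from memory (raise the levels only temporarily, they are very chatty):

  ceph tell mon.* injectargs '--debug_mon 10 --debug_paxos 10 --debug_ms 1'
  ceph daemon mon.$(hostname -s) mon_status       # run on a mon host; shows its election state
  ceph daemon mon.$(hostname -s) quorum_status    # who is in quorum and who keeps dropping out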

Re: [ceph-users] Hangs with qemu/libvirt/rbd when one host disappears

2017-12-08 Thread Marcus Priesch
think I have read the wrong part ... http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-mon/ Thanks for the link! Regards, Marcus.

Re: [ceph-users] Hangs with qemu/libvirt/rbd when one host disappears

2017-12-07 Thread Marcus Priesch
> For really big clusters, you can then start splitting out the mgr instances to reduce the load further. OK, so I will turn back to only having three mons. Does this also hold for mgrs? And, most important: can this lead to my problems? Just by having

Re: [ceph-users] Hangs with qemu/libvirt/rbd when one host disappears

2017-12-07 Thread Marcus Priesch
> have a look at the docs & the forum. > Docs: https://pve.proxmox.com/pve-docs/ > Forum: https://forum.proxmox.com thanks, been there ... > Some more useful information on PVE + Ceph: > https://forum.proxmox.com/threads/ceph-raw-usage-grows-by-itself.38395/#post-189842 haven't read this ... thanks a lot! Marcus.

Re: [ceph-users] Hangs with qemu/libvirt/rbd when one host disappears

2017-12-07 Thread Marcus Priesch
> another look at some of the docs about monitors. However, I don't get the point here ... because it's an even number? I read the docs ... but don't get any hints on the number of mons ... I would assume the more the better ... is this wrong? Regards, Marcus.

[ceph-users] Hangs with qemu/libvirt/rbd when one host disappears

2017-12-05 Thread Marcus Priesch
can help me shed any light on this ... at least the point of it all is that a single host should be allowed to fail and the VMs continue running ... ;) Regards and thanks in advance, Marcus.

Re: [ceph-users] Help change civetweb front port error: Permission denied

2017-09-18 Thread Marcus Haarmann
Ceph runs as a non-root user, and a non-root process is normally not permitted to listen on a port < 1024. This is not specific to Ceph. You could trick a listener on port 80 with a redirect via iptables, or you might proxy the connection through an apache/nginx instance.
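A minimal illustration of the iptables variant, assuming radosgw's civetweb stays on its unprivileged default port 7480 and port 80 is simply redirected to it:

  iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 7480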

Re: [ceph-users] cephfs(Kraken 11.2.1), Unable to write more file when one dir more than 100000 files, mds_bal_fragment_size_max = 5000000

2017-09-07 Thread Marcus Haarmann
It's a feature ... http://docs.ceph.com/docs/master/cephfs/dirfrags/ https://www.spinics.net/lists/ceph-users/msg31473.html Marcus Haarmann From: donglifec...@gmail.com To: "zyan" <z...@redhat.com> CC: "ceph-users" <ceph-users@lists.ceph.com> Sent: Fri
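Roughly, the two knobs touched in that thread look like this; a sketch only, assuming a filesystem named cephfs (on Kraken, directory fragmentation is off unless explicitly enabled):

  ceph fs set cephfs allow_dirfrags true       # let the MDS split large directories into fragments
  # ceph.conf, [mds] section - the per-fragment cap that was raised in the subject line:
  mds_bal_fragment_size_max = 5000000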

Re: [ceph-users] luminous/bluetsore osd memory requirements

2017-08-10 Thread Marcus Haarmann
stores each file as a single object, while rbd is configured to allocate larger objects. Marcus Haarmann From: "Stijn De Weirdt" <stijn.dewei...@ugent.be> To: "ceph-users" <ceph-users@lists.ceph.com> Sent: Thursday, 10 August 2017 10:34:48 Subject:
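For context, the object size an RBD image is striped into is chosen per image at creation time; a hypothetical example (image and pool names are placeholders, --size is in MB, order 23 corresponds to 8 MB objects):

  rbd create --size 10240 --order 23 rbd/testimage   # 10 GB image striped into 8 MB objects
  rbd info rbd/testimage                             # shows the chosen order / object size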

Re: [ceph-users] CEPH bluestore space consumption with small objects

2017-08-08 Thread Marcus Haarmann
that a filesystem or a database could become inconsistent more easily than a rados-only approach. Even cephfs was not the right approach, since the space consumption would be the same as with rados directly. Thanks to everybody, Marcus Haarmann From: "Pavel Shub" <pa...@citymaps.com> To:

[ceph-users] CEPH bluestore space consumption with small objects

2017-08-02 Thread Marcus Haarmann
Thank you all in advance for your support. Marcus Haarmann

Re: [ceph-users] XFS attempt to access beyond end of device

2017-07-21 Thread Marcus Furlong
On 20 July 2017 at 18:48, Brad Hubbard <bhubb...@redhat.com> wrote: > On Fri, Jul 21, 2017 at 4:23 AM, Marcus Furlong <furlo...@gmail.com> wrote: >> On 20 July 2017 at 12:49, Matthew Vernon <m...@sanger.ac.uk> wrote: >>> Hi, >>> >>> On 18/07

Re: [ceph-users] XFS attempt to access beyond end of device

2017-07-20 Thread Marcus Furlong
On 20 July 2017 at 12:49, Matthew Vernon <m...@sanger.ac.uk> wrote: > Hi, > > On 18/07/17 05:08, Marcus Furlong wrote: >> >> On 22 March 2017 at 05:51, Dan van der Ster <d...@vanderster.com >> <mailto:d...@vanderster.com>> wrote: > > >> Apo

Re: [ceph-users] XFS attempt to access beyond end of device

2017-07-17 Thread Marcus Furlong
On 22 March 2017 at 05:51, Dan van der Ster <d...@vanderster.com> wrote: > On Wed, Mar 22, 2017 at 8:24 AM, Marcus Furlong <furlo...@gmail.com> wrote: >> Hi, >> >> I'm experiencing the same issue as outlined in this post: >> >> http://lists.ceph.com

Re: [ceph-users] Ceph newbie thoughts and questions

2017-05-04 Thread Marcus
got this tip from anybody else. Thanks again! We will start using ceph fs, because this goes hand in hand with our future needs. Best regards Marcus On 04/05/17 06:30, David Turner wrote: The clients will need to be able to contact the mons and the osds. NEVER use 2 mons. Mons are a quorum

[ceph-users] Ceph newbie thoughts and questions

2017-05-03 Thread Marcus Pedersén
the second osd. Of course I will test this out before I bring it to production. Many thanks in advance! Best regards, Marcus

Re: [ceph-users] XFS attempt to access beyond end of device

2017-03-28 Thread Marcus Furlong
On 22 March 2017 at 19:36, Brad Hubbard <bhubb...@redhat.com> wrote: > On Wed, Mar 22, 2017 at 5:24 PM, Marcus Furlong <furlo...@gmail.com> wrote: >> [435339.965817] [ cut here ] >> [435339.965874] WARNING: at fs/xfs/xfs_aops.c:1244 >> x

Re: [ceph-users] XFS attempt to access beyond end of device

2017-03-22 Thread Marcus Furlong
> partition always on the same disk and only for OSDs which we added after our cluster was upgraded to jewel. On OSDs which use a dedicated SSD journal we do not see them. Hmm, this occurs for us on OSDs with a dedicated journal on a different disk. Ch

[ceph-users] XFS attempt to access beyond end of device

2017-03-22 Thread Marcus Furlong
-disk in ceph-deploy? I didn't change these, so it used the defaults of: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdi1 Any pointers for how to debug it further and/or fix it? Cheers, Marcus. [435339.965817] [ cut here ] [435339.965874] WARNING: at fs/xfs/xf
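One hedged way to start debugging this is to check whether the filesystem's own idea of its size still matches the partition underneath it (device name below is just the one from the post):

  blockdev --getsize64 /dev/sdi1                      # partition size in bytes
  xfs_db -r -c "sb 0" -c "print dblocks" -c "print blocksize" /dev/sdi1
  # dblocks * blocksize should not exceed the partition size reported above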

Re: [ceph-users] Latency between datacenters

2017-02-08 Thread Marcus Furlong
that note, is anyone aware of documentation that details the differences between federated gateway and multisite? And where each would be most appropriate? Regards, Marcus.

Re: [ceph-users] PGs stuck active+remapped and osds lose data?!

2017-01-11 Thread Marcus Müller
Yes, but all I want to know is whether my way of changing the tunables is right or not. > On 11.01.2017 at 13:11, Shinobu Kinjo <ski...@redhat.com> wrote: > Please refer to Jens's message. > Regards, >> On Wed, Jan 11, 2017 at 8:53 PM, Marcus Mü
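For the record, the tunables change itself is a one-liner; a sketch only (pick the profile deliberately, it triggers data movement across the cluster):

  ceph osd crush show-tunables        # what the cluster currently uses
  ceph osd crush tunables hammer      # or firefly/optimal/... - expect rebalancing afterwards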

Re: [ceph-users] PGs stuck active+remapped and osds lose data?!

2017-01-11 Thread Marcus Müller
> You likely need to tweak your crushmap to handle this configuration > better or, preferably, move to a more uniform configuration. > > > On Wed, Jan 11, 2017 at 5:38 PM, Marcus Müller <mueller.mar...@posteo.de> > wrote: >> I have to thank you all. You give free support and

Re: [ceph-users] PGs stuck active+remapped and osds lose data?!

2017-01-10 Thread Marcus Müller
st. > -Sam > On Mon, Jan 9, 2017 at 11:08 PM, Marcus Müller <mueller.mar...@posteo.de> wrote: >> OK, I understand, but how can I debug why they are not running as they should? For me I thought everything was fine because ceph -s said they are up an

Re: [ceph-users] PGs stuck active+remapped and osds lose data?!

2017-01-10 Thread Marcus Müller
t is being padded out with an extra osd > which happens to have the data to keep you up to the right number of > replicas. Please refer back to Brad's post. > -Sam > >> On Mon, Jan 9, 2017 at 11:08 PM, Marcus Müller <mueller.mar...@posteo.de> >> wrote: >> Ok,

Re: [ceph-users] PGs stuck active+remapped and osds lose data?!

2017-01-09 Thread Marcus Müller
>> ], > Here is an example: > "up": [ 1, 0, 2 ], > "acting": [ 1, 0, 2 ], > Regards, > On Tue, Jan 10, 2017 at 3:52 PM, Marcus Müller <mueller.mar...@posteo.de

Re: [ceph-users] PGs stuck active+remapped and osds lose data?!

2017-01-09 Thread Marcus Müller
On Tue, Jan 10, 2017 at 3:44 PM, Marcus Müller <mueller.mar...@posteo.de> wrote: >> All osds are currently up: >> health HEALTH_WARN >> 4 pgs stuck unclean >> recovery 4482/58798254 objects degraded (0.008

Re: [ceph-users] PGs stuck active+remapped and osds lose data?!

2017-01-09 Thread Marcus Müller
All osds are currently up:
   health HEALTH_WARN
          4 pgs stuck unclean
          recovery 4482/58798254 objects degraded (0.008%)
          recovery 420522/58798254 objects misplaced (0.715%)
          noscrub,nodeep-scrub flag(s) set
   monmap e9: 5 mons at
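To dig into pgs like these, something along these lines usually helps (the pg id below is purely hypothetical):

  ceph health detail                  # lists the stuck pg ids
  ceph pg dump_stuck unclean
  ceph pg 3.5a query                  # shows up/acting sets and why recovery is blocked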

Re: [ceph-users] PGs stuck active+remapped and osds lose data?!

2017-01-09 Thread Marcus Müller
Wuerdig <christian.wuer...@gmail.com>: > On Tue, Jan 10, 2017 at 8:23 AM, Marcus Müller <mueller.mar...@posteo.de> wrote: > Hi all, > Recently I added a new node with new osds to my cluster, which, of course

[ceph-users] PGs stuck active+remapped and osds lose data?!

2017-01-09 Thread Marcus Müller
bug and I really lost important data or is this a ceph cleanup action after the backfill? Thanks and regards, Marcus

[ceph-users] Failed to install ceph via ceph-deploy on Ubuntu 14.04 trusty

2017-01-02 Thread Marcus Müller
016-September/012909.html> Running apt-get install ceph manually did the job for me on the node. As far as I know this is all ceph-deploy would do, right? Has someone had the same issue in the past? I thought I had read that this error had already been fixed. Regards,
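For comparison, what ceph-deploy does on the node is essentially the following (sketched; the release name is a placeholder):

  ceph-deploy install --release hammer node1
  # roughly the manual equivalent, after adding the ceph.com apt repository and release key:
  apt-get update && apt-get install -y ceph ceph-common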

Re: [ceph-users] docs.ceph.com down?

2017-01-02 Thread Marcus Müller
/master/doc <https://github.com/ceph/ceph/tree/master/doc> > On Mon, Jan 2, 2017 at 7:45 PM, Andre Forigato <andre.forig...@rnp.br> wrote: > Hello Marcus, > Yes, it's down. :-( > André

[ceph-users] docs.ceph.com down?

2017-01-02 Thread Marcus Müller
Hi all, I have not been able to reach docs.ceph.com for some days. Is the site really down or do I have a problem here? Regards, Marcus

[ceph-users] docs.ceph.com down?

2017-01-02 Thread Marcus Müller
Hi all, I have not been able to reach docs.ceph.com for some days. Is the site really down or do I have a problem? Regards, Marcus

[ceph-users] Need help! Ceph backfill_toofull and recovery_wait+degraded

2016-11-01 Thread Marcus Müller
Hi all, I have a big problem and I really hope someone can help me! We have been running a ceph cluster for a year now. Version: 0.94.7 (Hammer). Here is some info. Our osd map is: ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY -1 26.67998 root default

Re: [ceph-users] Single-node Ceph & Systemd shutdown

2016-08-20 Thread Marcus
t.com> wrote: > On Sat, Aug 20, 2016 at 10:35 AM, Marcus <letharg...@gmail.com> wrote: > > For a home server project I've set up a single-node ceph system. > > > > Everything works just fine; I can mount block devices and store stuff on > > them, however the sys

Re: [ceph-users] Single-node Ceph & Systemd shutdown

2016-08-20 Thread Marcus Cobden
e systemd > > Tested and approved on all my ceph nodes, and all my servers :) > > On 20/08/2016 19:35, Marcus wrote: >> Blablabla systemd blablabla

[ceph-users] Single-node Ceph & Systemd shutdown

2016-08-20 Thread Marcus
the block devices, so when it tries to do so it hangs. Does anyone have any idea how I might fix this? If this weren't systemd I'm sure I could find a place to kludge it, but I've had no such luck. Thanks! Marcus
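One possible kludge, assuming the image is mapped through /etc/ceph/rbdmap and the rbdmap.service shipped with ceph-common: tie the mount's ordering to that unit so systemd tears the mount down before the ceph daemons go away. A hypothetical /etc/fstab line:

  /dev/rbd/rbd/myimage  /srv/data  xfs  defaults,noatime,_netdev,x-systemd.requires=rbdmap.service  0 0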

[ceph-users] Perfomance issue.

2015-06-30 Thread Marcus Forness
hi! Anyone able to provide some tips on a performance issue on a newly installed all-flash ceph cluster? When we do write tests we get 900 MB/s write, but read tests are only 200 MB/s. All servers are on 10Gbit connections. [global] fsid = 453d2db9-c764-4921-8f3c-ee0f75412e19 mon_initial_members =
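A first step that usually narrows this down is benchmarking at the RADOS layer, below any filesystem or RBD caching; a sketch (the pool name is a placeholder):

  rados bench -p testpool 60 write --no-cleanup   # leave the objects in place for the read runs
  rados bench -p testpool 60 seq                  # sequential reads of what was just written
  rados bench -p testpool 60 rand                 # random reads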

Re: [ceph-users] Basic Ceph questions

2014-10-13 Thread Marcus White
?.. 4. In QEMU, is it a SCSI device? MW Mukul On Mon, Oct 13, 2014 at 6:44 AM, Robert Sander r.san...@heinlein-support.de wrote: On 10.10.2014 02:19, Marcus White wrote: For VMs, I am trying to visualize how the RBD device would be exposed. Where does the driver live exactly? If it's exposed via

Re: [ceph-users] Basic Ceph questions

2014-10-12 Thread Marcus White
Thanks :) If someone can help regarding the question below, that would be great! For VMs, I am trying to visualize how the RBD device would be exposed. Where does the driver live exactly? If it's exposed via libvirt and QEMU, does the kernel driver run in the host OS, and communicate with a
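For reference: with the librbd backend the RBD client runs in userspace inside the QEMU process, so no kernel rbd module and no /dev/rbd* device are involved on the host; only the krbd path maps a kernel block device. An illustrative fragment (pool/image names are hypothetical, other VM options omitted):

  # userspace librbd, linked into QEMU:
  qemu-system-x86_64 -drive format=raw,if=virtio,file=rbd:rbdpool/vm-disk:id=admin
  # kernel (krbd) alternative - the host maps a block device and QEMU uses it like any disk:
  rbd map rbdpool/vm-disk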

Re: [ceph-users] Basic Ceph questions

2014-10-09 Thread Marcus White
.. MW On Wed, Oct 8, 2014 at 6:37 PM, Craig Lewis cle...@centraldesktop.com wrote: Comments inline. On Tue, Oct 7, 2014 at 5:51 PM, Marcus White roastedseawee...@gmail.com wrote: Hello, Some basic Ceph questions, would appreciate your help:) Sorry about the number and detail in advance

Re: [ceph-users] Basic Ceph questions

2014-10-08 Thread Marcus White
Just a bump:) Is this the right list or should I be posting in devel? MW On Tue, Oct 7, 2014 at 5:51 PM, Marcus White roastedseawee...@gmail.com wrote: Hello, Some basic Ceph questions, would appreciate your help:) Sorry about the number and detail in advance! a. Ceph RADOS is strongly

[ceph-users] Basic Ceph questions

2014-10-07 Thread Marcus White
Hello, Some basic Ceph questions, would appreciate your help :) Sorry about the number and detail in advance! a. Ceph RADOS is strongly consistent and different from the usual object store; does that mean all metadata, container and account etc., is also consistent and everything is updated in the path of