Re: [ceph-users] Sizing your MON storage with a large cluster

2018-02-05 Thread Wes Dillingham
... period of backfills/recoveries and also have a large number of OSDs, you'll see the DB grow quite big. This has improved significantly going to Jewel and Luminous, but it is still something to watch out for. Make sure your MONs have enough free space to handle this! ...
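A quick way to keep an eye on this (a sketch; the paths assume a default /var/lib/ceph layout and a monitor named after the short hostname):

    # How big has the monitor's key/value store grown?
    du -sh /var/lib/ceph/mon/ceph-$(hostname -s)/store.db

    # Once the backfill/recovery settles, ask the monitor to compact its store
    ceph tell mon.$(hostname -s) compact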

Re: [ceph-users] slow requests on a specific osd

2018-01-15 Thread Wes Dillingham
On 15-1-2018 20:32, Wes Dillingham wrote: I don't hear a lot of people discuss using xfs_fsr on OSDs, and going over the mailing list history it seems to have been brought up very infrequently and never as a suggestion for regular maintenance ...
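For anyone who does want to experiment with it, a minimal sketch (the device and mount point are placeholders for an XFS-backed OSD):

    # Report extent fragmentation on the block device backing the OSD
    xfs_db -r -c frag /dev/sdb1

    # Defragment the mounted OSD filesystem, capped at 30 minutes of work
    xfs_fsr -v -t 1800 /var/lib/ceph/osd/ceph-0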

Re: [ceph-users] slow requests on a specific osd

2018-01-15 Thread Wes Dillingham
... read across OSDs brings also better distribution of load between the OSDs). Or other ideas to check out? MJ
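One knob that tends to come up for shifting read load off a busy OSD (a sketch, not necessarily what the thread settled on; osd.12 is a placeholder):

    # Make osd.12 less likely to be chosen as primary, so it serves fewer reads
    ceph osd primary-affinity osd.12 0.5

    # Verify the change (recent releases show a PRI-AFF column)
    ceph osd tree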

Re: [ceph-users] Switching a pool from EC to replicated online ?

2018-01-15 Thread Wes Dillingham

Re: [ceph-users] Open Compute (OCP) servers for Ceph

2018-01-10 Thread Wes Dillingham
... vendor and what are your experiences? Thanks! Wido [0]: http://www.opencompute.org/ [1]: http://www.wiwynn.com/ [2]: http://www.wiwynn.com/english/product/type/details/65?ptype=2

Re: [ceph-users] removing cluster name support

2017-06-09 Thread Wes Dillingham
... I'm guessing you're in the (1) case anyway and this doesn't affect you at all :) sage

[ceph-users] Upper limit of MONs and MDSs in a Cluster

2017-05-25 Thread Wes Dillingham
... or aware of any testing with very high numbers of each? At the MDS level I would just be looking for 1 Active, 1 Standby-replay and X standby until multiple active MDSs are production ready. Thanks!
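For reference, a minimal sketch of that layout using the pre-Luminous style options (the daemon name, rank and filesystem name are illustrative):

    # ceph.conf on the standby MDS: follow rank 0 in standby-replay mode
    [mds.b]
        mds standby replay = true
        mds standby for rank = 0

    # Keep a single active MDS until multi-active is considered production ready
    ceph fs set cephfs max_mds 1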

Re: [ceph-users] Available tools for deploying ceph cluster as a backend storage ?

2017-05-18 Thread Wes Dillingham

Re: [ceph-users] Client's read affinity

2017-04-05 Thread Wes Dillingham

Re: [ceph-users] INFO:ceph-create-keys:ceph-mon admin socket not ready yet.

2017-03-21 Thread Wes Dillingham
... Main PID: 2576 (code=exited, status=0/SUCCESS) === Has anyone faced this error before?
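When chasing this, it can help to confirm the monitor actually created its admin socket and reached quorum (a sketch; the socket path assumes the default naming scheme):

    # The admin socket should exist once ceph-mon is running
    ls -l /var/run/ceph/ceph-mon.$(hostname -s).asok

    # Query the monitor directly over the socket and check "state" and "quorum"
    ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname -s).asok mon_status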

Re: [ceph-users] I/O hangs with 2 node failure even if one node isn't involved in I/O

2017-03-21 Thread Wes Dillingham
... was under the false impression that my rbd device was a single object. That explains what all those other things are on a test cluster where I only created a single object! -- Adam Carheden. On 03/20/2017 08:24 PM, Wes Dillingham wrote: This is because ...
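To see that striping for yourself (a sketch; the pool and image names are placeholders):

    # Shows the object size (order) and the block_name_prefix for the image
    rbd info rbd/myimage

    # List the RADOS objects backing it, matched by that prefix
    rados -p rbd ls | grep "$(rbd info rbd/myimage | awk '/block_name_prefix/ {print $2}')"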

Re: [ceph-users] I/O hangs with 2 node failure even if one node isn't involved in I/O

2017-03-20 Thread Wes Dillingham

Re: [ceph-users] Cephalocon Sponsorships Open

2016-12-22 Thread Wes Dillingham

Re: [ceph-users] Question about writing a program that transfer snapshot diffs between ceph clusters

2016-11-01 Thread Wes Dillingham
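The usual building blocks for a tool like this are rbd's incremental diff commands; a minimal sketch of one transfer (image, snapshot and host names are placeholders):

    # Ship only the changes between snap1 and snap2 to the remote cluster
    rbd export-diff --from-snap snap1 rbd/myimage@snap2 - | \
        ssh backup-host rbd import-diff - rbd/myimage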

Re: [ceph-users] reliable monitor restarts

2016-10-24 Thread Wes Dillingham
... see systemd.service(5) for details. Regards and have a nice weekend. Steffen -- Kind regards, Ruben Kerkhof
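Along those lines, a sketch of a systemd drop-in that loosens the default restart limits for a monitor unit (values are illustrative; option placement varies by systemd version, see systemd.service(5)):

    # systemctl edit ceph-mon@$(hostname -s), then add:
    [Service]
    Restart=on-failure
    RestartSec=10
    StartLimitInterval=30min
    StartLimitBurst=5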

Re: [ceph-users] ceph on two data centers far away

2016-10-21 Thread Wes Dillingham

Re: [ceph-users] Erasure coding general information Openstack+kvm virtual machine block storage

2016-09-16 Thread Wes Dillingham