> If you hit a long period of backfills/recoveries and also have a large number
> of OSDs, you'll see the MON DB grow quite big.
>
> This has improved significantly going to Jewel and Luminous, but it is
> still something to watch out for.
>
> Make sure your MONs have enough free space to handle this!
>
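A quick way to keep an eye on that mon store growth, as a rough sketch (the mon id, path and threshold below are examples from a typical Jewel/Luminous setup):

  # size of the monitor's backing store on disk
  du -sh /var/lib/ceph/mon/ceph-mon01/store.db

  # Ceph warns on its own once the store passes mon_data_size_warn (15 GB by default)
  ceph health detail | grep -i 'store is getting too big'

  # once backfill/recovery has finished, the store can usually be compacted back down
  ceph tell mon.mon01 compact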
wrote:
> Hi Wes,
>
> On 15-1-2018 20:32, Wes Dillingham wrote:
>
>> I don't hear a lot of people discuss using xfs_fsr on OSDs, and going over
>> the mailing list history it seems to have been brought up very infrequently
>> and never as a suggestion for regular maintenance.
read across OSDs also brings better
> distribution of load between the OSDs)
>
> Or other ideas to check out?
>
> MJ
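If anyone wants to experiment with xfs_fsr on FileStore OSDs, a minimal sketch (the device and mount point are only examples; best run on one OSD at a time during a quiet window):

  # read-only report of how fragmented the OSD's XFS filesystem is
  xfs_db -c frag -r /dev/sdb1

  # defragment the mounted OSD filesystem, verbose, for at most two hours
  xfs_fsr -v -t 7200 /var/lib/ceph/osd/ceph-0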
--
Respectfully,
Wes Dillingham
wes_dilling...@harvard.edu
Research Computing | Senior CyberInfrastructure Storage Engineer
Harvard Univ
>>> vendor
>>> and what are your experiences?
>>>
>>> Thanks!
>>>
>>> Wido
>>>
>>> [0]: http://www.opencompute.org/
>>> [1]: http://www.wiwynn.com/
>>> [2]: http://www.wiwynn.com/english/product/type/details/65?ptype=2
> > I'm guessing you're in the (1) case anyway and this doesn't affect you at
> > all :)
> >
> > sage
or aware of
any testing with very high numbers of each? At the MDS level I would just
be looking for 1 Active, 1 Standby-replay and X standby until multiple
active MDSs are production ready. Thanks!
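For reference, that 1 active / 1 standby-replay / X standby layout can be expressed roughly like this on Jewel/Luminous (daemon and fs names are examples; mds_standby_replay / mds_standby_for_rank are the pre-Mimic style options):

  # ceph.conf section for the daemon that should follow rank 0 in standby-replay
  [mds.b]
      mds standby replay = true
      mds standby for rank = 0

  # keep a single active MDS; any further daemons simply remain plain standbys
  ceph fs set cephfs max_mds 1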
--
Respectfully,
Wes Dillingham
wes_dilling...@harvard.edu
Research Computing | Infrastructure
--
Respectfully,
Wes Dillingham
wes_dilling...@harvard.edu
Research Computing | Infrastructure Engineer
Harvard University | 38 Oxford Street, Cambridge, Ma 02138 | Room 102
> E-mail: alejandro@nubeliu.com  Cell: +54 9 11 3770 1857
> >> > _
> >> > www.nubeliu.com
met
> Main PID: 2576 (code=exited, status=0/SUCCESS)
> ===
>
> Has anyone faced this error before?
>
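Note that code=exited, status=0/SUCCESS means the daemon stopped cleanly rather than crashing, so the reason is usually in its own log. A rough sketch of where to look next (the unit name is only an example):

  # full unit status, including the last few log lines
  systemctl status ceph-osd@3.service -l

  # everything the unit logged around the time it exited
  journalctl -u ceph-osd@3.service --since "1 hour ago" --no-pager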
was under the false impression that my rbd device was a
> single object. That explains what all those other things are on a test
> cluster where I only created a single object!
>
>
> --
> Adam Carheden
>
> On 03/20/2017 08:24 PM, Wes Dillingham wrote:
> > This is becaus
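For anyone who wants to see this on their own cluster: an RBD image is striped over many RADOS objects that all share the image's block_name_prefix. A rough sketch (pool, image name and prefix are just examples):

  # shows size, object size/count and the block_name_prefix (e.g. rbd_data.1021643c9869)
  rbd info rbd/testimage

  # count the backing objects that have actually been written so far (allocation is lazy)
  rados -p rbd ls | grep rbd_data.1021643c9869 | wc -l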
up 1.0 1.0
>>
>>
>> --
>> Adam Carheden
--
Respectfully,
Wes Dillingham
wes_dilling...@harvard.edu
Research Computing | Infrastructure Engineer
Harvard University | 38 Oxford Street, Cambridge, Ma 02138 | Room 210
--
Respectfully,
Wes Dillingham
wes_dilling...@harvard.edu
Research Computi
see systemd.service(5) for details.
>
>>
>> Regards and have a nice weekend.
>>
>> Steffen
>
> Kind regards,
>
> Ruben Kerkhof
-
>>>>> Think big; Dream impossible; Make it happen.
>>>>>
--
Respectfully,
Wes Dillingham
wes_dilling...@harvard.edu
Research Computing | Infrastructure Engineer
Harvard University | 38 Oxford Street, Cambridge, Ma 02138 | Room 210