Hello,

On Mon, 6 Mar 2017 16:06:51 +0700 Vy Nguyen Tan wrote:

> Hi Jiajia zhong,
> 
> I'm running mixed SSDs and HDDs on the same node, following
> https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/,
> and I have had no problems running SSDs and HDDs on the same node. Now I
> want to increase Ceph throughput by upgrading the network to 20Gb/s (I
> want a single network stream to reach 20Gb/s, as measured with iperf).
> Could you please share your experience with HA networking for Ceph? What
> type of bonding do you use? Are you using stackable switches?
> 
This has been discussed here a few times, time to google. ^_^
https://www.spinics.net/lists/ceph-users/msg31660.html

In short:
1. Are you _sure_ you actually need the additional bandwidth
instead of more PPS/lower latency?

2. Faster links (20Gb/s, 40Gb/s, InfiniBand) also mean lower latency.

3. Bonding will not improve latency, nor will it give you higher bandwidth
for a single stream. 

4. Any (Ethernet) switches that support MC-LAG (there are some inexpensive
ones) will let you build an HA, bonded network.
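
As a concrete sketch of point 4, an LACP (802.3ad) bond over two links to a
pair of MC-LAG switches could look like this in Debian/Ubuntu ifupdown style
(the interface names and address are assumptions, not from this thread):

  auto bond0
  iface bond0 inet static
      address 192.168.0.10/24
      bond-slaves eth0 eth1
      bond-mode 802.3ad
      bond-miimon 100
      bond-lacp-rate fast

Note that 802.3ad hashes traffic per flow, so a single stream still tops out
at one link's speed, as point 3 says.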

Christian

> I very appreciate your help.
> 
> On Mon, Mar 6, 2017 at 11:45 AM, jiajia zhong <zhong2p...@gmail.com> wrote:
> 
> > we are using a mixed setup too: 8 x Intel 400GB PCIe SSDs for the
> > metadata pool and the cache tier pool of our CephFS.
> >
> > plus: 'osd crush update on start = false', as Vladimir replied.
> >
> > 2017-03-03 20:33 GMT+08:00 Дробышевский, Владимир <v...@itgorod.ru>:
> >  
> >> Hi, Matteo!
> >>
> >>   Yes, I'm using a mixed cluster in production, but it's pretty small at
> >> the moment. I made a small step-by-step manual for myself when I did this
> >> for the first time and have now published it as a gist:
> >> https://gist.github.com/vheathen/cf2203aeb53e33e3f80c8c64a02263bc#file-manual-txt.
> >> It may be a little outdated, since that was some time ago.
> >>
> >>   CRUSH map modifications will persist across reboots and maintenance if
> >> you put 'osd crush update on start = false' in the [osd] section of
> >> ceph.conf.
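> >>
> >>   For reference, the corresponding ceph.conf fragment would be (a
> >> minimal sketch):
> >>
> >>     [osd]
> >>     osd crush update on start = false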
> >>
> >>   But I would also recommend starting with this article:
> >> https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
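> >>
> >>   The CLI route from that article boils down to something like the
> >> following (a sketch for pre-Luminous Ceph; the bucket, host, and pool
> >> names are examples, and the weight must match your drives):
> >>
> >>     ceph osd crush add-bucket ssd-root root
> >>     ceph osd crush add-bucket node1-ssd host
> >>     ceph osd crush move node1-ssd root=ssd-root
> >>     ceph osd crush create-or-move osd.0 0.4 host=node1-ssd
> >>     ceph osd crush rule create-simple ssd-rule ssd-root host
> >>     ceph osd pool set ssd-pool crush_ruleset <rule-id>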
> >>
> >>   P.S. While I was writing this message I saw Maxime Guyot's reply. His
> >> method seems much easier, if it leads to the same results.
> >>
> >> Best regards,
> >> Vladimir
> >>
> >> With best regards,
> >> Vladimir Drobyshevskiy
> >> "IT Gorod" company
> >> +7 343 2222192
> >>
> >> Hardware and software:
> >> IBM, Microsoft, Eset
> >> Turnkey project delivery
> >> IT services outsourcing
> >>
> >> 2017-03-03 16:30 GMT+05:00 Matteo Dacrema <mdacr...@enter.eu>:
> >>  
> >>> Hi all,
> >>>
> >>> Does anyone run a production cluster with a CRUSH map modified to
> >>> create two pools, one backed by HDDs and one by SSDs?
> >>> What's the best method: modifying the CRUSH map via the ceph CLI or via
> >>> a text editor?
> >>> Will the modifications to the CRUSH map persist across reboots and
> >>> maintenance operations?
> >>> Is there anything to consider when doing upgrades or other operations,
> >>> compared to having the "original" CRUSH map?
> >>>
> >>> Thank you
> >>> Matteo
> >>>
> >>>
> >>> _______________________________________________
> >>> ceph-users mailing list
> >>> ceph-users@lists.ceph.com
> >>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>>
> >


-- 
Christian Balzer        Network/Systems Engineer                
ch...@gol.com           Global OnLine Japan/Rakuten Communications
http://www.gol.com/