Re: [ceph-users] OSD node type/count mixes in the cluster

2017-06-18 Thread Mehmet
Hi,

We are currently using three Intel servers with 12 OSDs each and one Supermicro server with 24 OSDs in one Ceph cluster, with the journals on NVMe in each server. We have not seen any issues yet.
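For anyone wondering how the journals end up on the NVMe: with FileStore you normally just point ceph-disk at the NVMe as the journal device when preparing each OSD. A minimal sketch, with placeholder device names and an illustrative journal size:

    # prepare an OSD on /dev/sdb, carving its journal partition out of the NVMe
    ceph-disk prepare /dev/sdb /dev/nvme0n1

    # journal partition size is taken from ceph.conf at prepare time
    [osd]
        osd journal size = 10240    # MB, illustrative value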

Best
Mehmet

On June 9, 2017 at 19:24:40 CEST, Deepak Naidu <dna...@nvidia.com> wrote:
>Thanks David for sharing your experience, appreciate it.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] OSD node type/count mixes in the cluster

2017-06-09 Thread Deepak Naidu
Thanks David for sharing your experience, appreciate it.

--
Deepak

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] OSD node type/count mixes in the cluster

2017-06-09 Thread David Turner
I ran a cluster with 2 generations of hardware from the same vendor: 24-OSD Supermicro nodes and 32-OSD Supermicro nodes (with faster/more RAM and CPU cores). The cluster itself ran decently well, but the load differences were drastic between the 2 types of nodes. It required me to run the cluster with 2 separate config files, one for each type of node, and it was an utter PITA when troubleshooting bottlenecks.
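A minimal sketch of what that looks like in practice: a different /etc/ceph/ceph.conf per node type with different [osd] tuning. The values below are purely illustrative, not the actual settings from that cluster:

    # ceph.conf deployed to the 24-OSD nodes (less CPU/RAM headroom)
    [osd]
        osd max backfills = 1
        osd recovery max active = 1

    # ceph.conf deployed to the 32-OSD nodes (more CPU/RAM headroom)
    [osd]
        osd max backfills = 2
        osd recovery max active = 3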

Ultimately I moved hardware around, creating a legacy cluster on the old hardware and a new cluster using the newer configuration.  In general it was very hard to diagnose certain bottlenecks because everything just looked so different.  The primary one I encountered was snap trimming, caused by deleting thousands of snapshots per day.
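For reference, the usual knob for taming snap trim load is the snap trim sleep; a sketch of adjusting it at runtime and persisting it (0.1 is just a starting point to experiment with, not a recommendation):

    # throttle snap trimming on all OSDs at runtime
    ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'

    # persist it in ceph.conf
    [osd]
        osd snap trim sleep = 0.1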

If you aren't pushing any limits of Ceph, you will probably be fine.  But
if you have a really large cluster, use a lot of snapshots, or are pushing
your cluster harder than the average user, then I'd avoid mixing server
configurations in a cluster.

On Fri, Jun 9, 2017, 1:36 AM Deepak Naidu  wrote:

> Wanted to check if anyone has a Ceph cluster with mixed-vendor servers
> that all use the same disk size (i.e. 8TB) but a different disk count per
> server, e.g. 10 OSD servers from Dell with 60 disks per server and another
> 10 OSD servers from HP with 26 disks per server.
>
> If so, does that change any performance dynamics, or is it not advisable?
>
> --
> Deepak
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] OSD node type/count mixes in the cluster

2017-06-08 Thread Deepak Naidu
Wanted to check if anyone has a Ceph cluster with mixed-vendor servers that all use the same disk size (i.e. 8TB) but a different disk count per server, e.g. 10 OSD servers from Dell with 60 disks per server and another 10 OSD servers from HP with 26 disks per server.

If so, does that change any performance dynamics, or is it not advisable?
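For context on where the concern comes from: if CRUSH weights simply follow raw capacity (the default), every 8TB disk gets roughly the same share of data, so each 60-disk Dell host would carry about 60/26 ≈ 2.3x the data, and presumably the client I/O, of each 26-disk HP host, and losing one of the big hosts would trigger a correspondingly larger rebalance. The per-host spread is easy to inspect with:

    # show weight, utilization and PG count for each host and OSD
    ceph osd df tree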

--
Deepak
---
This email message is for the sole use of the intended recipient(s) and may contain confidential information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message.
---
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com