Re: [ceph-users] ceph - even filling disks

2016-12-05 Thread David Turner
Ceph distributes data with the CRUSH algorithm.  While your use case is
simple, the algorithm is complex because it has to handle far more complex
scenarios, and the resulting distribution is not perfectly even.  The variable
you have access to is the CRUSH weight of each OSD.  If one OSD, like ceph-3,
has more data than the rest and another, like ceph-2, has less, you can
increase ceph-2's CRUSH weight to have CRUSH assign it more placement groups,
or inversely reduce ceph-3's weight to have that OSD lose some placement groups.

As has already been mentioned, to do this by hand you would use the command
`ceph osd crush reweight osd.<id> <weight>`.  You don't want to adjust these
weights by more than about 0.05 at a time.  Every time you change the weight
of something in the crush map, the cluster will backfill until the data is
where the updated crush map says it should be.
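
A minimal sketch of one manual pass, using the OSD names from this thread; the
weights below are placeholders only (a 4TB drive normally gets a default CRUSH
weight of about 3.64), so substitute the values `ceph osd df` reports for your
cluster:

  # ceph-3 is the fullest OSD here; lower its CRUSH weight slightly
  ceph osd crush reweight osd.3 3.59

  # watch the backfill and wait for HEALTH_OK before the next small step
  ceph -w

  # or, going the other way, give the emptiest OSD (ceph-2) a small bump
  ceph osd crush reweight osd.2 3.69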

To have Ceph try to do this for you, you would use `ceph osd 
reweight-by-utilization`.  I don't have experience with this method, but a lot 
of the community uses it.
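
For reference, a rough sketch of that route.  The threshold of 120 means "only
touch OSDs more than 20% above the average utilization" and is the usual
default rather than a tuned value; the dry-run subcommand is only present in
newer releases (Jewel and later, as far as I know):

  # dry run: report which OSDs would be adjusted and by how much
  ceph osd test-reweight-by-utilization 120

  # apply: lowers the 0.0-1.0 override reweight of OSDs above the threshold
  ceph osd reweight-by-utilization 120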



David Turner | Cloud Operations Engineer | StorageCraft Technology
Corporation <https://storagecraft.com>
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943



If you are not the intended recipient of this message or received it 
erroneously, please notify the sender and delete it, together with any 
attachments, and be advised that any dissemination or copying of this message 
is prohibited.




From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Volkov Pavel 
[vol...@mobilon.ru]
Sent: Monday, December 05, 2016 3:11 AM
To: 'John Petrini'
Cc: 'ceph-users'
Subject: Re: [ceph-users] ceph - even filling disks

OSDs of different sizes are used for different tasks, such as cache.  My concern
is the 4TB OSDs used as the storage pool; the space used on them is not the same.

/dev/sdf1   4.0T  1.7T  2.4T  42%  /var/lib/ceph/osd/ceph-4
/dev/sdd1   4.0T  1.7T  2.4T  41%  /var/lib/ceph/osd/ceph-2
/dev/sdb1   4.0T  1.9T  2.2T  46%  /var/lib/ceph/osd/ceph-0
/dev/sde1   4.0T  2.1T  2.0T  51%  /var/lib/ceph/osd/ceph-3
/dev/sdc1   4.0T  1.8T  2.3T  45%  /var/lib/ceph/osd/ceph-1

For example, /dev/sdd1 is at 42% while /dev/sde1 is at 51%, and they belong to the same pool.
Is there an option that can be used to fill the OSDs evenly?

From: John Petrini [mailto:jpetr...@coredial.com]
Sent: Friday, December 02, 2016 1:04 PM
To: Волков Павел (Мобилон)
Cc: ceph-users
Subject: Re: [ceph-users] ceph - even filling disks

You can reweight the OSDs either automatically based on utilization (ceph osd
reweight-by-utilization) or by hand.

See:
https://ceph.com/planet/ceph-osd-reweight/
http://docs.ceph.com/docs/master/rados/operations/control/#osd-subsystem

It's probably not ideal to have OSDs of such different sizes on a node.


___

John Petrini

NOC Systems Administrator   //   CoreDial, LLC   //   coredial.com <http://coredial.com/>
Hillcrest I, 751 Arbor Way, Suite 150, Blue Bell PA, 19422
P: 215.297.4400 x232   //   F: 215.297.4401   //   E: jpetr...@coredial.com <mailto:jpetr...@coredial.com>

The information transmitted is intended only for the person or entity to which 
it is addressed and may contain confidential and/or privileged material. Any 
review, retransmission,  dissemination or other use of, or taking of any action 
in reliance upon, this information by persons or entities other than the 
intended recipient is prohibited. If you received this in error, please contact 
the sender and delete the material from any computer.

On Fri, Dec 2, 2016 at 12:36 AM, Волков Павел (Мобилон) 
<vol...@mobilon.ru<mailto:vol...@mobilon.ru>> wrote:
Good day.
I have set up a Ceph cluster and created several pools on 4TB HDDs.  My
problem is that the HDDs fill unevenly.

root@ceph-node1:~# df -H
Filesystem  Size  Used Avail Use% Mounted on
/dev/sda1   236G  2.7G  221G   2% /
none4.1k 0  4.1k   0% /sys/fs/cgroup
udev 30G  4.1k   30G   1% /dev
tmpfs   6.0G  1.1M  6.0G   1% /run
none5.3M 0  5.3M   0% /run/lock
none 30G  8.2k   30G   1% /run/shm

Re: [ceph-users] ceph - even filling disks

2016-12-05 Thread Volkov Pavel
OSDs of different sizes are used for different tasks, such as cache.  My concern
is the 4TB OSDs used as the storage pool; the space used on them is not the same.

 

/dev/sdf1   4.0T  1.7T  2.4T  42%  /var/lib/ceph/osd/ceph-4
/dev/sdd1   4.0T  1.7T  2.4T  41%  /var/lib/ceph/osd/ceph-2
/dev/sdb1   4.0T  1.9T  2.2T  46%  /var/lib/ceph/osd/ceph-0
/dev/sde1   4.0T  2.1T  2.0T  51%  /var/lib/ceph/osd/ceph-3
/dev/sdc1   4.0T  1.8T  2.3T  45%  /var/lib/ceph/osd/ceph-1

 

For example, /dev/sdd1 is at 42% while /dev/sde1 is at 51%, and they belong to the same pool.

Is there an option that can be used to fill the OSDs evenly?

 

From: John Petrini [mailto:jpetr...@coredial.com] 
Sent: Friday, December 02, 2016 1:04 PM
To: Волков Павел (Мобилон)
Cc: ceph-users
Subject: Re: [ceph-users] ceph - even filling disks

 

You can reweight the OSDs either automatically based on utilization (ceph osd
reweight-by-utilization) or by hand.

 

See: 

https://ceph.com/planet/ceph-osd-reweight/

http://docs.ceph.com/docs/master/rados/operations/control/#osd-subsystem

 

It's probably not ideal to have OSDs of such different sizes on a node.




___

John Petrini

NOC Systems Administrator   //   CoreDial, LLC   //   coredial.com <http://coredial.com/>
Hillcrest I, 751 Arbor Way, Suite 150, Blue Bell PA, 19422
P: 215.297.4400 x232   //   F: 215.297.4401   //   E: jpetr...@coredial.com <mailto:jpetr...@coredial.com>

The information transmitted is intended only for the person or entity to which 
it is addressed and may contain confidential and/or privileged material. Any 
review, retransmission,  dissemination or other use of, or taking of any action 
in reliance upon, this information by persons or entities other than the 
intended recipient is prohibited. If you received this in error, please contact 
the sender and delete the material from any computer.

 

On Fri, Dec 2, 2016 at 12:36 AM, Волков Павел (Мобилон) <vol...@mobilon.ru> 
wrote:

Good day.

I have set up a Ceph cluster and created several pools on 4TB HDDs.  My
problem is that the HDDs fill unevenly.

 

root@ceph-node1:~# df -H

Filesystem  Size  Used Avail Use% Mounted on

/dev/sda1   236G  2.7G  221G   2% /

none4.1k 0  4.1k   0% /sys/fs/cgroup

udev 30G  4.1k   30G   1% /dev

tmpfs   6.0G  1.1M  6.0G   1% /run

none5.3M 0  5.3M   0% /run/lock

none 30G  8.2k   30G   1% /run/shm

none105M 0  105M   0% /run/user

/dev/sdf1   4.0T  1.7T  2.4T  42% /var/lib/ceph/osd/ceph-4

/dev/sdg1   395G  329G   66G  84% /var/lib/ceph/osd/ceph-5

/dev/sdi1   195G  152G   44G  78% /var/lib/ceph/osd/ceph-7

/dev/sdd1   4.0T  1.7T  2.4T  41% /var/lib/ceph/osd/ceph-2

/dev/sdh1   395G  330G   65G  84% /var/lib/ceph/osd/ceph-6

/dev/sdb1   4.0T  1.9T  2.2T  46% /var/lib/ceph/osd/ceph-0

/dev/sde1   4.0T  2.1T  2.0T  51% /var/lib/ceph/osd/ceph-3

/dev/sdc1   4.0T  1.8T  2.3T  45% /var/lib/ceph/osd/ceph-1

 

 

On the test machine this leads to a CDM overflow error and further incorrect
operation.

How can I make all of the HDDs fill equally?


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph - even filling disks

2016-12-01 Thread John Petrini
You can reweight the OSDs either automatically based on utilization (ceph
osd reweight-by-utilization) or by hand.

See:
https://ceph.com/planet/ceph-osd-reweight/
http://docs.ceph.com/docs/master/rados/operations/control/#osd-subsystem

It's probably not ideal to have OSDs of such different sizes on a node.
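
As a rough illustration of the difference the links above cover (the numbers
below are placeholders only, not recommendations):

  # "ceph osd reweight" sets a temporary 0.0-1.0 override on top of the CRUSH
  # weight; this is the value reweight-by-utilization adjusts
  ceph osd reweight 3 0.95

  # "ceph osd crush reweight" changes the CRUSH weight itself, which normally
  # reflects the disk size in TiB
  ceph osd crush reweight osd.3 3.59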

___

John Petrini

NOC Systems Administrator   //   CoreDial, LLC   //   coredial.com
Hillcrest I, 751 Arbor Way, Suite 150, Blue Bell PA, 19422
P: 215.297.4400 x232   //   F: 215.297.4401   //   E: jpetr...@coredial.com


The information transmitted is intended only for the person or entity to
which it is addressed and may contain confidential and/or privileged
material. Any review, retransmission,  dissemination or other use of, or
taking of any action in reliance upon, this information by persons or
entities other than the intended recipient is prohibited. If you received
this in error, please contact the sender and delete the material from any
computer.

On Fri, Dec 2, 2016 at 12:36 AM, Волков Павел (Мобилон) 
wrote:

> Good day.
>
> I have set up a Ceph cluster and created several pools on 4TB HDDs.  My
> problem is that the HDDs fill unevenly.
>
>
>
> root@ceph-node1:~# df -H
>
> Filesystem  Size  Used Avail Use% Mounted on
>
> /dev/sda1   236G  2.7G  221G   2% /
>
> none4.1k 0  4.1k   0% /sys/fs/cgroup
>
> udev 30G  4.1k   30G   1% /dev
>
> tmpfs   6.0G  1.1M  6.0G   1% /run
>
> none5.3M 0  5.3M   0% /run/lock
>
> none 30G  8.2k   30G   1% /run/shm
>
> none105M 0  105M   0% /run/user
>
> */dev/sdf1   4.0T  1.7T  2.4T  42% /var/lib/ceph/osd/ceph-4*
>
> /dev/sdg1   395G  329G   66G  84% /var/lib/ceph/osd/ceph-5
>
> /dev/sdi1   195G  152G   44G  78% /var/lib/ceph/osd/ceph-7
>
> */dev/sdd1   4.0T  1.7T  2.4T  41% /var/lib/ceph/osd/ceph-2*
>
> /dev/sdh1   395G  330G   65G  84% /var/lib/ceph/osd/ceph-6
>
> */dev/sdb1   4.0T  1.9T  2.2T  46% /var/lib/ceph/osd/ceph-0*
>
> */dev/sde1   4.0T  2.1T  2.0T  51% /var/lib/ceph/osd/ceph-3*
>
> */dev/sdc1   4.0T  1.8T  2.3T  45% /var/lib/ceph/osd/ceph-1*
>
>
>
>
>
> On the test machine this leads to a CDM overflow error and further
> incorrect operation.
>
> How can I make all of the HDDs fill equally?
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph - even filling disks

2016-12-01 Thread Мобилон
Good day.

I have set up a Ceph cluster and created several pools on 4TB HDDs.
My problem is that the HDDs fill unevenly.

 

root@ceph-node1:~# df -H

Filesystem  Size  Used Avail Use% Mounted on

/dev/sda1   236G  2.7G  221G   2% /

none4.1k 0  4.1k   0% /sys/fs/cgroup

udev 30G  4.1k   30G   1% /dev

tmpfs   6.0G  1.1M  6.0G   1% /run

none5.3M 0  5.3M   0% /run/lock

none 30G  8.2k   30G   1% /run/shm

none105M 0  105M   0% /run/user

/dev/sdf1   4.0T  1.7T  2.4T  42% /var/lib/ceph/osd/ceph-4

/dev/sdg1   395G  329G   66G  84% /var/lib/ceph/osd/ceph-5

/dev/sdi1   195G  152G   44G  78% /var/lib/ceph/osd/ceph-7

/dev/sdd1   4.0T  1.7T  2.4T  41% /var/lib/ceph/osd/ceph-2

/dev/sdh1   395G  330G   65G  84% /var/lib/ceph/osd/ceph-6

/dev/sdb1   4.0T  1.9T  2.2T  46% /var/lib/ceph/osd/ceph-0

/dev/sde1   4.0T  2.1T  2.0T  51% /var/lib/ceph/osd/ceph-3

/dev/sdc1   4.0T  1.8T  2.3T  45% /var/lib/ceph/osd/ceph-1

 

 

On the test machine this leads to a CDM overflow error and further
incorrect operation.

How can I make all of the HDDs fill equally?

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com