Re: [ceph-users] Increase PG number

2016-09-20 Thread Matteo Dacrema
Thanks a lot guys.

I’ll try to do as you told me.

Best Regards
Matteo




Re: [ceph-users] Increase PG number

2016-09-20 Thread Vincent Godin
Hi,

In fact, when you increase the PG number, the new PGs have to peer first,
and during this time many PGs will be unreachable. The best way to increase
the number of PGs in a cluster (you'll need to adjust the number of PGPs
too) is:


   - Don't forget to apply Goncalo's advice to keep your cluster responsive
   to client operations. Otherwise, all the I/O and CPU will be used for
   recovery operations and your cluster will be unreachable. Be sure that all
   these new parameters are in place before increasing the PG count.


   - prevent new scrub and deep-scrub operations from starting:

ceph osd set noscrub
ceph osd set nodeep-scrub

   - put your cluster in maintenance mode with:

ceph osd set norecover
ceph osd set nobackfill
ceph osd set nodown
ceph osd set noout
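(For reference: norecover and nobackfill pause data movement, while nodown
and noout keep OSDs from being marked down or out if they become briefly
unresponsive while the new PGs peer.)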

   - wait until your cluster no longer has any scrub or deep-scrub operation in progress
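One way to check (a sketch; a scrubbing PG reports it in its state string,
so no output means nothing is in flight):

ceph pg dump pgs_brief | grep -i scrub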

   - increase the pg_num by a small increment such as 256


   - wait for the cluster to create and peer the new PGs (about 30 seconds)


   - increase the pgp_num by the same increment


   - wait for the cluster to create and peer (about 30 seconds)

Repeat the last four operations until you reach the pg_num and pgp_num you
want (a scripted version of these four steps is sketched below).
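As a concrete illustration, the loop can be scripted as follows; the pool
name "volumes" and the 2048 target are assumptions for the example, not
values from this thread:

#!/bin/sh
# Grow pg_num/pgp_num in small steps, pausing so the new PGs can be
# created and peer before the next step.
POOL=volumes    # assumed pool name
TARGET=2048     # assumed final pg_num
STEP=256
CUR=$(ceph osd pool get "$POOL" pg_num | awk '{print $2}')
while [ "$CUR" -lt "$TARGET" ]; do
  CUR=$((CUR + STEP))
  [ "$CUR" -gt "$TARGET" ] && CUR=$TARGET
  ceph osd pool set "$POOL" pg_num "$CUR"
  sleep 30      # wait for creation and peering
  ceph osd pool set "$POOL" pgp_num "$CUR"
  sleep 30
done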

At this time, your cluster is still functional.

   - Now you have to unset the maintenance mode

ceph osd unset noout
ceph osd unset nodown
ceph osd unset nobackfill
ceph osd unset norecover

It will take some time to recover and backfill all the PGs, but at the end
you will have a cluster with all PGs active+clean. During the whole
operation, your cluster will remain functional if you have respected
Goncalo's parameters.
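Progress can be followed with the standard status commands:

ceph -s        # overall health plus recovery/backfill progress
ceph pg stat   # PG counts by state; done when everything is active+clean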


   - When all the PGs are active+clean, you can re-enable the scrub and
   deep-scrub operations:

ceph osd unset noscrub
ceph osd unset nodeep-scrub
Vincent


Re: [ceph-users] Increase PG number

2016-09-18 Thread Matteo Dacrema
Hi, thanks for your reply.

Yes, I don't have any near-full OSD.

The problem is not the rebalancing process but the creation of the new PGs.

I have only 2 hosts running the Ceph Firefly release, each with 3 SSDs for
journaling.
During the creation of the new PGs, all the attached volumes stop reading
and writing and show high iowait.
ceph -s tells me that there are thousands of slow requests.

When all the PGs are created, the slow requests begin to decrease and the
cluster starts the rebalancing process.

Matteo



Re: [ceph-users] Increase PG number

2016-09-18 Thread Goncalo Borges
Hi
I am assuming that you do not have any near-full OSD (either before or
during the PG splitting process) and that your cluster is healthy.

To minimize the impact on clients during recovery or operations like PG
splitting, it is good to set the following configs. The whole operation will
take longer, but the impact on clients will be minimized.

#  ceph daemon mon.rccephmon1 config show | egrep
"(osd_max_backfills|osd_recovery_threads|osd_recovery_op_priority|osd_client_op_priority|osd_recovery_max_active)"
"osd_max_backfills": "1",
"osd_recovery_threads": "1",
"osd_recovery_max_active": "1",
"osd_client_op_priority": "63",
"osd_recovery_op_priority": "1"

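These values can be injected into the running OSDs and made persistent in
ceph.conf (a sketch; whether every option takes effect without an OSD
restart depends on the release):

# runtime: push the throttles to all OSDs
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1 --osd-client-op-priority 63'

# persistent: /etc/ceph/ceph.conf on each OSD host
[osd]
osd_max_backfills = 1
osd_recovery_threads = 1
osd_recovery_max_active = 1
osd_client_op_priority = 63
osd_recovery_op_priority = 1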
Cheers
G.

From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Matteo 
Dacrema [mdacr...@enter.eu]
Sent: 18 September 2016 03:42
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Increase PG number

Hi All,

I need to expand my Ceph cluster and I also need to increase the PG number.
In a test environment I see that during PG creation all read and write
operations are stopped.

Is that normal behavior?

Thanks
Matteo

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com