Hi, thanks for your reply.

Yes, I don't have any near-full OSDs.

The problem is not the rebalancing process but the creation of the new PGs.

I have only 2 hosts running the Ceph Firefly release, each with 3 SSDs for journaling.
During the creation of the new PGs, all the attached volumes stop reading and writing and show high iowait.
ceph -s tells me there are thousands of slow requests.

Once all the PGs are created, the slow requests begin to decrease and the cluster 
starts the rebalancing process.
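
A minimal sketch of staging the increase in smaller steps instead of one big jump (the pool name "volumes" and the numbers here are only placeholders, not the actual values from this cluster):

   # ceph osd pool get volumes pg_num
   # ceph osd pool set volumes pg_num 1088
   # ceph osd pool set volumes pgp_num 1088

Raising pg_num (and then pgp_num) by a small amount at a time, and waiting for the cluster to settle between steps, means each step only creates a small batch of new PGs, so the peering and creation work (and the slow requests that come with it) is spread out over time instead of hitting all clients at once.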

Matteo


> On 18 Sep 2016, at 13:08, Goncalo Borges 
> <goncalo.bor...@sydney.edu.au> wrote:
> 
> Hi
> I am assuming that you do not have any near-full OSDs (either before or during 
> the PG splitting process) and that your cluster is healthy.
> 
> To minimize the impact on the clients during recovery or operations like PG 
> splitting, it is good to set the following configs. Obviously the whole 
> operation will take longer, but the impact on clients will be minimized.
> 
> #  ceph daemon mon.rccephmon1 config show | egrep 
> "(osd_max_backfills|osd_recovery_threads|osd_recovery_op_priority|osd_client_op_priority|osd_recovery_max_active)"
>    "osd_max_backfills": "1",
>    "osd_recovery_threads": "1",
>    "osd_recovery_max_active": "1"
>    "osd_client_op_priority": "63",
>    "osd_recovery_op_priority": "1"
> 
> Cheers
> G.
> ________________________________________
> From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Matteo 
> Dacrema [mdacr...@enter.eu]
> Sent: 18 September 2016 03:42
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] Increase PG number
> 
> Hi All,
> 
> I need to expand my Ceph cluster and I also need to increase the PG number.
> In a test environment I see that during PG creation all read and write 
> operations are stopped.
> 
> Is that normal behavior?
> 
> Thanks
> Matteo
> 
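
For reference, the throttling values listed in the quoted message can also be injected into the running OSDs without a restart. A minimal sketch, assuming the osd.* wildcard target and purely illustrative values (not taken from this thread):

   # ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'

Putting the same values in the [osd] section of ceph.conf makes them persistent across OSD restarts.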
