Hi Eric,

I can see celery and mongod on the capsule using a lot of CPU time (celery 
jumps between 50-100% and has accumulated 435:32.54 of CPU time; mongod 
sits at ~30-40%, 286:18.74). Here's the current memory usage on the box:

free -h
              total        used        free      shared  buff/cache   available
Mem:           5.7G        3.5G        166M         37M        2.0G        1.9G
Swap:          7.8G        2.4G        5.4G
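For reference, here's a quick way to snapshot the top consumers (just plain `ps` from procps, so it should work as-is on the capsule):

```shell
# List the six heaviest processes by CPU, with memory share and elapsed
# time - handy for confirming it really is celery and mongod on top.
ps -eo pid,comm,%cpu,%mem,etime --sort=-%cpu | head -n 6
```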

and the current I/O:
[root@wellcapsuleext foreman-proxy]# sar
Linux 3.10.0-327.el7.x86_64 (wellcapsuleext.niwa.co.nz)         07/28/2016      _x86_64_        (6 CPU)

12:00:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
12:10:01 AM     all      0.76      0.03      0.41      0.02      0.00     98.78
12:20:01 AM     all      1.11      0.03      0.51      0.01      0.00     98.35
12:30:01 AM     all      0.94      0.03      0.46      0.01      0.00     98.57
12:40:01 AM     all      0.77      0.03      0.41      0.01      0.00     98.78
12:50:01 AM     all      1.09      0.03      0.50      0.02      0.00     98.37
01:00:01 AM     all      0.95      0.03      0.47      0.01      0.00     98.54
01:10:01 AM     all      0.74      0.03      0.39      0.01      0.00     98.83
01:20:01 AM     all      1.07      0.02      0.48      0.02      0.00     98.40
01:30:01 AM     all      0.92      0.03      0.45      0.06      0.00     98.54
01:40:01 AM     all      0.77      0.03      0.41      0.01      0.00     98.79
01:50:01 AM     all      1.07      0.03      0.49      0.01      0.00     98.41
02:00:01 AM     all      1.00      0.03      0.49      0.08      0.00     98.41
02:10:01 AM     all      0.76      0.03      0.40      0.01      0.00     98.81
02:20:01 AM     all      1.09      0.02      0.50      0.01      0.00     98.38
02:30:01 AM     all      0.94      0.03      0.45      0.01      0.00     98.57
02:40:02 AM     all      0.79      0.03      0.42      0.01      0.00     98.76
02:50:01 AM     all      1.09      0.03      0.50      0.02      0.00     98.36
03:00:01 AM     all      0.93      0.03      0.46      0.01      0.00     98.57
03:10:01 AM     all      0.79      0.03      0.40      0.01      0.00     98.77
03:20:01 AM     all      1.09      0.03      0.50      0.01      0.00     98.37
03:30:01 AM     all      0.96      0.02      0.50      0.01      0.00     98.51
03:40:01 AM     all      1.15      0.06      1.96      1.39      0.00     95.44
03:50:01 AM     all      1.08      0.00      0.43      0.18      0.00     98.31
04:00:01 AM     all      0.99      0.00      0.40      0.18      0.00     98.43
04:10:01 AM     all      0.75      0.03      0.41      0.02      0.00     98.80
04:20:01 AM     all      1.06      0.03      0.50      0.03      0.00     98.39
04:30:01 AM     all      0.93      0.03      0.47      0.01      0.00     98.56
04:40:01 AM     all      0.74      0.03      0.39      0.02      0.00     98.83
04:50:01 AM     all      1.09      0.03      0.51      0.05      0.00     98.33
05:00:01 AM     all      0.91      0.03      0.45      0.02      0.00     98.60
05:10:01 AM     all      0.76      0.03      0.41      0.02      0.00     98.79
05:20:01 AM     all      1.06      0.03      0.49      0.02      0.00     98.41
05:30:01 AM     all      0.93      0.03      0.47      0.02      0.00     98.55
05:40:01 AM     all      0.74      0.02      0.40      0.02      0.00     98.82
05:50:01 AM     all      1.08      0.03      0.52      0.04      0.00     98.34
06:00:01 AM     all      0.96      0.02      0.47      0.03      0.00     98.52
06:10:01 AM     all      0.74      0.02      0.41      0.01      0.00     98.81
06:20:01 AM     all      1.06      0.03      0.49      0.01      0.00     98.41
06:30:01 AM     all      0.93      0.02      0.46      0.01      0.00     98.58
06:40:01 AM     all      0.75      0.03      0.40      0.01      0.00     98.81
06:50:02 AM     all      1.07      0.02      0.50      0.02      0.00     98.38
07:00:01 AM     all      0.96      0.02      0.46      0.01      0.00     98.54
07:10:02 AM     all      0.77      0.02      0.42      0.01      0.00     98.78
07:20:01 AM     all      1.06      0.03      0.48      0.01      0.00     98.42
07:30:01 AM     all      0.94      0.03      0.47      0.01      0.00     98.56
07:40:01 AM     all      0.78      0.03      0.41      0.01      0.00     98.77
07:50:01 AM     all      1.06      0.03      0.50      0.01      0.00     98.41
08:00:02 AM     all      0.96      0.03      0.47      0.01      0.00     98.53
08:10:01 AM     all      0.75      0.02      0.40      0.01      0.00     98.82
08:20:01 AM     all      1.08      0.03      0.49      0.01      0.00     98.40
08:30:02 AM     all     10.48      0.02      2.18      1.58      0.00     85.73
08:40:02 AM     all      9.51      0.02      2.08      1.99      0.00     86.40
08:50:01 AM     all     10.72      0.02      0.97      0.73      0.00     87.56
09:00:01 AM     all     10.93      0.02      1.71      1.16      0.00     86.18
09:10:01 AM     all     14.54      0.02      1.99      1.94      0.00     81.52
09:20:01 AM     all     14.60      0.02      2.07      1.82      0.00     81.50
Average:        all      2.10      0.02      0.63      0.21      0.00     97.04

Various sync jobs have now been running for nearly 24 hours; the last one is 
below:

Id: cbda7f78-dd70-48f6-ae09-67f29b0a5746
Label: Actions::Katello::Repository::CapsuleGenerateAndSync
Name: Sync Repository on Capsule(s)
Owner: foreman_admin
Execution type: Delayed
Start at: 2016-07-27 20:39:57 UTC
Start before: -
Started at: 2016-07-27 20:39:57 UTC
Ended at:
State: running
Result: -
Params: {"services_checked"=>["pulp", "pulp_auth"], "locale"=>"en"}
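In case anyone wants to poll it, the task state can also be pulled from the foreman_tasks API rather than the UI (hostname and credentials below are placeholders, adjust for your install):

```shell
# Fetch the task record and print its state and progress fraction.
# "katello.example.com" and "admin:changeme" are illustrative only.
curl -sk -u admin:changeme \
  "https://katello.example.com/foreman_tasks/api/tasks/cbda7f78-dd70-48f6-ae09-67f29b0a5746" |
  python3 -c 'import json, sys; t = json.load(sys.stdin); print(t["state"], t.get("progress"))'
```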



I've upped the CPU count on the capsule and added some extra RAM; we'll see 
how it goes...

Dylan

On Thursday, July 28, 2016 at 3:24:52 AM UTC+12, Eric Helms wrote:
>
> Dylan,
>
> Are you able to determine the network traffic between the server and 
> capsule? In general I think there are two possibilities: slow transfer of 
> data to the Capsule, or slowness on the Capsule itself. I would recommend 
> checking the network speeds and the Capsule resources to see whether it's 
> constrained I/O-wise or processes are eating the CPU.
>
>
> Eric
>
> On Tue, Jul 26, 2016 at 10:52 PM, Dylan Baars <[email protected]> wrote:
>
>> Hi all,
>>
>> We have a Katello 3.0.1 install with a Katello capsule in our DMZ. 
>>
>> We've noticed that when we publish/promote content, the main Katello server 
>> finishes reasonably quickly, but the tasks (to sync content) on the capsule 
>> take hours to complete to match the change on the main server. Does 
>> anyone know why this is? Can it be improved?
>>
>> Thanks!
>> Dylan
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Foreman users" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to [email protected].
>> To post to this group, send email to [email protected].
>> Visit this group at https://groups.google.com/group/foreman-users.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
>
> -- 
> Eric D. Helms
> Red Hat Engineering
> Ph.D. Student - North Carolina State University
>
