I'm swinging back the other way.

From 
http://en.wikipedia.org/wiki/Load_(computing)#Unix-style_load_calculation:
One should divide the load average by the number of CPUs you have. 
Therefore, on my 64-bit Fedora K12LTSP 4-proc system, a normal 15-minute 
load average was about .25 (meaning a "real" per-CPU average of 
.25/4 = .0625). On my i386 8-proc Ubuntu 8.04 system I'm seeing a 1.25 
15-minute load average (meaning a "real" per-CPU average of 1.25/8 = .15625).
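The normalization rule above can be sketched as a small helper (a minimal sketch; the function name is my own, not from the Wikipedia article):

```python
import os

def per_cpu_load(load_avg, ncpus=None):
    """Normalize a raw load average by CPU count (the rule quoted above)."""
    if ncpus is None:
        ncpus = os.cpu_count() or 1  # fall back to 1 if undetectable
    return load_avg / ncpus

# The two systems from this thread:
print(per_cpu_load(0.25, 4))   # Fedora 4-proc: 0.0625
print(per_cpu_load(1.25, 8))   # Ubuntu 8-proc: 0.15625
```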

Also, mpstat shows:

10:19:31 AM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal   %idle    intr/s
10:19:31 AM  all    0.64    0.02    0.25    0.08    0.01    0.07    0.00   98.94    649.29

So while there is a noticeable difference - the load average is roughly 
200% higher - I don't believe that my CPUs are I/O-bound or waiting on 
memory to the point of affecting system responsiveness.
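For what it's worth, the mpstat line above can be parsed mechanically to pull out the two figures that matter for this argument (%iowait and %idle). This is just an illustrative sketch, not part of the original discussion:

```python
# Column names from the mpstat header line quoted above.
HEADER = "CPU %user %nice %sys %iowait %irq %soft %steal %idle intr/s".split()

def parse_mpstat(line):
    """Turn one mpstat summary line into a {column: value} dict."""
    fields = line.split()[2:]  # drop the leading timestamp ("10:19:31 AM")
    return dict(zip(HEADER, fields))

stats = parse_mpstat("10:19:31 AM  all  0.64  0.02  0.25  0.08  0.01  0.07  0.00  98.94  649.29")
print(stats["%iowait"], stats["%idle"])   # 0.08 98.94 -> nearly idle, not I/O-bound
```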



The question then becomes: why are my clients so damn sluggish? Maybe 
just the additional GNOME candy over FC6...? For my 15 100Mbit users I 
have gigabit NICs and switches, with only 5-8 clients on each NIC and switch.



-Michael

Michael Blinn wrote:
> I guess that makes sense.. I just didn't think the PAE stuff made _that_ 
> much of a difference.
>
> So... if I can have my sound and eat it too, anyone know an easy method 
> of i386->amd64 conversion of a production server running 20 or 30 live 
> services? (;
>
> jam wrote:
>   
>> On Wednesday 21 May 2008 02:47:23 [EMAIL PROTECTED] 
>> wrote:
>> [snip]
>>   
>>     
>>> If you have references for the 'slowness and other issues' or 'hardware
>>> support issues' I'd love to see them. I've not found any quantitative
>>> analysis of i386+PAE vs. x86_64/amd64 performance. I have to assume that
>>> while there is necessarily some slowdown in rewriting memory addresses
>>> to access +3GB RAM, it's not so significant that it causes the measured
>>> load average to go up 400%.
>> [snip]
>> It is significant, especially doing **exactly** what you are doing:
>>
>> 64-bit: a large footy field
>> 32-bit: a small room with lots of drawers; you can only open one drawer at a time:
>>
>> usual big mem scene: open a drawer and do lots of work, open another ...
>>
>> ltsp: open a drawer, do a little work, close the drawer, open another 
>> drawer ...
>>
>> The more separate pieces of work you do, as opposed to one sustained piece of hard work, the more you need 64-bit.
>>
>> In your case, while you are 'opening the drawer' and 'closing the drawer', the
>> work is piling up (load average up). Open and close are non-trivial, so if you
>> do it too often ... and you have 8 CPUs all waiting in line for their own
>> drawer ...
>>
>> James
>>     
>
>



_____________________________________________________________________
Ltsp-discuss mailing list.   To un-subscribe, or change prefs, goto:
      https://lists.sourceforge.net/lists/listinfo/ltsp-discuss
For additional LTSP help,   try #ltsp channel on irc.freenode.net
