I would like to share some thoughts and research on this.

First, at SHARE last week, only two numbers presented caught my
interest. They came from a WebSphere presentation by Tom Pajak
and Tom Ronksley. Two tasks were measured on an Intel 1.5GHz
box versus an MP50, wall clock only:

1) Intel took 28 seconds, the MP50 took 614 seconds: a ratio of 1:22.
2) Intel took 2.5 seconds, the MP50 took 178 seconds: a ratio of 1:70.
No other comparisons were presented....

No information was given about what else was happening on the
MP50 (Tom does not have performance tools....). Performance
numbers would hopefully explain why the MP50 was so slow; my
home-grown model says the ratio should have been between 1:8
and 1:10....

As for how to evaluate scalability, here is the research
I've been working on.

The following split screen comes from ESAMON at an installation
running a mail program under Linux on S/390.  The top half of
the screen shows the Linux CPU loads (what Linux thinks is
happening); the bottom half is a measure of load.  Each TCP
connection is likely one piece of mail delivered.

Following is a quick-and-dirty analysis of CPU per connect.
It appears that one connect per minute utilizes between 1 and
1.5% of the G6 processor.  The second calculation subtracts
1.5%, chosen as an average overhead value for the system plus
the measurement code.

This analysis (once the tools were installed) took all of
5 minutes, not a lot of effort, but gave me some very quick
boundaries on how this work would scale.  Doing the same for
Samba would not be that difficult....
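As a back-of-envelope check of those boundaries, here is a short sketch (my own, not from the presentation) that turns the 1-1.5% per connect-per-minute figure into a saturation estimate, assuming CPU is the only constraint:

```python
def max_connects_per_minute(pct_per_connect):
    """Connects/minute that would saturate one engine at 100% CPU,
    given the %CPU cost of one connect per minute."""
    return 100.0 / pct_per_connect

# Using the 1% and 1.5% boundary values measured above:
for pct in (1.0, 1.5):
    per_min = max_connects_per_minute(pct)
    print(f"{pct}%/connect -> ~{per_min:.0f} connects/min "
          f"(~{per_min * 60:.0f} pieces of mail/hour)")
```

So, very roughly, one G6 engine tops out somewhere between 67 and 100 pieces of mail per minute for this workload.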

So, all you need to perform this scaling analysis is a system
with the application, and of course the tool set and maybe
a little education on what to look for (did I mention Velocity
Software is hosting a performance workshop in June???).

The best part of developing the tools? Using them....

******************************************************************

 Screen: ESAUCD4  xxxxxxxxxxxxxxxx
 1 of 2  LINUX UCD System Statistics

                   <Processor Load->
 Time     Node     Syst  User  Idle
 -------- -------- ----- ----- -----
 13:38:00 FOXMAIL   1.50  0.52   197
 13:37:00 FOXMAIL   2.92 13.45   183
 13:36:00 FOXMAIL   4.04 21.46   174
 13:35:00 FOXMAIL   1.40  0.58   198
 13:34:00 FOXMAIL   1.62  6.32   192
 13:33:00 FOXMAIL   1.54  2.89   195
 13:32:00 FOXMAIL   3.17 15.47   181
 13:31:00 FOXMAIL   7.73 26.34   165
 13:30:00 FOXMAIL   2.14 10.51   187
 13:29:00 FOXMAIL   1.07  0.82   198
 13:28:00 FOXMAIL   0.86  0.50   198
 13:27:00 FOXMAIL   2.42  6.52   191
PF1=Help                   PF3=Quit
PF7=Backward  PF8=Forward        PF10
 ====>
 Screen: ESATCP1  xxxxxxxxxxxxxxxxxx
 1 of 2  TCPIP Transport Layer Data

                   <----- TCP Connect
                   Curr  <Opens/Min>
 Time     Node     Conn  Activ
 -------- -------- ----  -----
 13:38:00 FOXMAIL     0    0.0
 13:37:00 FOXMAIL     0   24.0
 13:36:00 FOXMAIL     3   19.0
 13:35:00 FOXMAIL     0    0.0
 13:34:00 FOXMAIL     2    4.0
 13:33:00 FOXMAIL     2    0.0
 13:32:00 FOXMAIL     1   15.0
 13:31:00 FOXMAIL     6   25.0
 13:30:00 FOXMAIL     2    7.0
 13:29:00 FOXMAIL     0    0.0
 13:28:00 FOXMAIL     0    0.0

******************************************************************

Calculation 1:  (SYST + USER      ) / TCPOPENS = cpu/connect
Calculation 2:  (SYST + USER - 1.5) / TCPOPENS = cpu/connect

<Processor    TCP      CPU per
   Load>   Opens/Min  connection
Syst  User   Activ    Calc1   Calc2
----- -----  -----    -----   -----
 3.17 15.47   15.0    1.242   1.142
 7.73 26.34   25.0    1.362   1.302
 2.14 10.51    7.0    1.807   1.592
 1.07  0.82    0.0    0       0
 0.86  0.50    0.0    0       0
 2.42  6.52    2.0    4.47    3.72
 3.78 11.10   17.0    0.875   0.787
 5.09  7.99   10.0    1.308   1.158
 5.94 10.75   11.0    1.517   1.380
 7.59  9.54    8.0    2.141   1.953
 7.73 13.07   14.0    1.485   1.378
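The two calculations above can be sketched in a few lines of Python; the function name and sample tuples are my own naming, and the 1.5% overhead constant is the assumed system-plus-measurement overhead from the analysis:

```python
OVERHEAD = 1.5  # assumed avg %CPU for system + measurement code

def cpu_per_connect(syst, user, opens, overhead=0.0):
    """%CPU consumed per TCP connect per minute (Calc1 with
    overhead=0, Calc2 with overhead=OVERHEAD)."""
    if opens == 0:
        return 0.0
    return (syst + user - overhead) / opens

# A few (syst%, user%, TCP opens/min) rows from the table above:
samples = [(3.17, 15.47, 15.0), (7.73, 26.34, 25.0), (1.07, 0.82, 0.0)]

for syst, user, opens in samples:
    c1 = cpu_per_connect(syst, user, opens)
    c2 = cpu_per_connect(syst, user, opens, OVERHEAD)
    print(f"{syst:5.2f} {user:5.2f} {opens:5.1f}  {c1:.3f}  {c2:.3f}")
```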


>From:         Chuck Gray <[EMAIL PROTECTED]>
>
>
>Chuck Gray
>IT Architect
>Seattle Office (206) 587-3091
>Fax Number (206) 587-3091
>Tie Line 277-3091
>email: [EMAIL PROTECTED]
>
>
>Hi gang,
>
>I am right now canvassing interested parties trying to find an
>answer to a philosophical issue.  One of the plays we are looking
>at is Server Consolidation onto Linux, across brands.  When I
>look at Samba though, I don't have a defined group to interface
>with (internal, another company, whatever).  At best I have
>samba.org and some IBMers on the Samba Team who are making best
>effort to help.
>
>My problem is that there is no place to go for scaling numbers.
>How many print queues can I configure under Linux and at what
>rate can I spool to them?  (There appears to be no answer to
>this.) Given any disk array, how much data can I push out
>through the pipe?  (There are some very limited answers using a
>tool called smbtorture.) How do file serving and print serving
>interact when run at capacity?  And what about when we virtualize
>this load, either using z/VM or VMware.
>
>I know the immediate answer, those are great questions and
>unfortunately we don't know the answers.  So how should we go
>about getting them?  In a perfect world, I could look at a chart
>that would answer them for both the z800 and the x360.  From what
>I can see & verify, we don't even have the tools to go about
>answering these questions.


"If you can't measure it, I'm Just NOT interested!"(tm)

/************************************************************/
Barton Robinson - CBW     Internet: [EMAIL PROTECTED]
Velocity Software, Inc    Mailing Address:
 196-D Castro Street       P.O. Box 390640
 Mountain View, CA 94041   Mountain View, CA 94039-0640

VM Performance Hotline:   650-964-8867
Fax: 650-964-9012         Web Page:  WWW.VELOCITY-SOFTWARE.COM
/************************************************************/
