Over the weekend I re-purposed my Solaris build machine as a compute node. Based on raw numbers and past experience, I expected it to build the current C++ code base I'm working with in 50-60% of the time taken by either of my current build slaves.

Reality hasn't matched expectations, big time!

An existing system, using the same motherboard but slower CPUs, does the build in about 9 minutes. After 40 minutes the build was only a little over half done on the "fast" box...

Both builds were run in Ubuntu 16.04 lx zones with more than enough memory.

Older box:

# sysinfo
{
...
  "Manufacturer": "Supermicro",
  "Product": "X9DRH-7TF/7F/iTF/iF",
...
  "CPU Type": "Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz",
  "CPU Virtualization": "vmx",
  "CPU Physical Cores": 2,
...
  "CPU Total Cores": 24,
  "MiB of Memory": "131044",

The newer box:

  "CPU Type": "Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz",
  "CPU Virtualization": "vmx",
  "CPU Physical Cores": 2,
...
  "CPU Total Cores": 32,
  "MiB of Memory": "65501",

I noticed this when comparing prstat output on the two hosts:

Older:

PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
 33887 1005      294M  205M cpu1     1    0   0:00:04 1.6% clang/1
 33919 1005      251M  156M cpu5     1    0   0:00:02 1.0% clang/1
 33934 1005      238M  146M cpu0     1    0   0:00:02 0.8% clang/1
 33949 1005      215M  123M cpu9     1    0   0:00:01 0.6% clang/1
 33952 1005      213M  118M cpu4     1    0   0:00:01 0.6% clang/1
 33955 1005      201M  107M cpu3     1    0   0:00:01 0.5% clang/1
 33958 1005      200M  107M cpu10    1    0   0:00:01 0.4% clang/1
 33963 1005      180M   82M cpu8     1    0   0:00:00 0.2% clang/1
 33964 1005      179M   81M run      1    0   0:00:00 0.2% clang/1
 33967 1005      179M   81M cpu6     1    0   0:00:00 0.2% clang/1
 33186 1005       51M   38M sleep    1    0   0:00:01 0.2% ninja/1
 33970 1005      174M   75M cpu2     1    0   0:00:00 0.1% clang/1

Newer:

PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
 59248 1005      220M  145M wait     4    0   0:00:03 0.3% clang/1
 59221 1005      240M  165M wait     4    0   0:00:04 0.2% clang/1
 59236 1005      208M  135M wait     4    0   0:00:02 0.2% clang/1
 59251 1005      172M   96M wait     4    0   0:00:02 0.2% clang/1
 59254 1005      176M  101M wait     4    0   0:00:02 0.2% clang/1
 58811 1005      651M  576M wait     4    0   0:00:20 0.2% clang/1
 59043 1005      353M  278M wait     4    0   0:00:08 0.2% clang/1
 59259 1005      124M   48M wait     4    0   0:00:00 0.1% clang/1
 59264 1005      121M   45M wait     8    0   0:00:00 0.1% clang/1
 59265 1005      119M   42M wait     8    0   0:00:00 0.1% clang/1
 59200 1005      194M  119M wait     4    0   0:00:02 0.1% clang/1
 59180 1005      163M   87M wait     4    0   0:00:02 0.0% clang/1
 59099 1005      252M  177M wait     4    0   0:00:04 0.0% clang/1

All the "wait" states look odd to me.

Ideas?
--
Ian.


-------------------------------------------
smartos-discuss
Archives: https://www.listbox.com/member/archive/184463/=now