Re: Big LPAR vs small LPAR and DataSharing.

2024-04-21 Thread Scott Chapman
In short, giant LPARs can definitely be problematic. Similarly, LPARs that are 
too small can be problematic. Somewhere in the middle is ideal, but where that 
is will depend. 

First off, the most significant issue: you don't want LPARs whose processor 
count is so high that it crosses drawers. (In most cases this count would be 
zIIP + GP.) That's a high-water mark you definitely want to avoid, and it sits 
around the low 30s on relatively recent machines. Similarly, you don't want 
LPARs with so much memory that they cross drawers, although that may be 
slightly less problematic. In both cases, if you have CPs referencing memory or 
L4 cache on another drawer, access times will be longer. How big the impact 
will be depends on how much of that cross-drawer access is going on.

Secondly, there's internal housekeeping, and locks and latches, that are 
impacted by having more CPs. How much this matters depends, again, on the 
workload. 

On the other hand, more LPARs do add more MSU consumption for certain system 
tasks like monitors and the like. I.e., add an extra LPAR and that's an extra 
RMF/CMF running, collecting and writing data. And an extra copy of whatever 
other monitors you have. And while adding a third system to the sysplex adds 
nowhere near the overhead that adding a second does, there still can be some. 
E.g., resolving lock contention now potentially involves 3 systems instead of 
2. So I wouldn't look to add systems willy-nilly where I have a sysplex of lots 
of small LPARs, all of which are part of a single data sharing group. (I'm less 
concerned about adding dev/test LPARs that have minimal sharing with the 
production LPARs already part of the sysplex.)

My general rule of thumb is: <10 busy CPs, I'm not concerned; more than 20 
busy CPs, I might start to wonder (not necessarily worry) a bit; more than a 
drawer, I definitely want to have a discussion about it. I'm not sure those are 
perfect thresholds, and I'd generally temper them with specific details about 
the situation. 
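
That rule of thumb can be encoded as a quick sketch. The cores-per-drawer 
value of 32 here is an assumed, illustrative stand-in for "around the low 30s"; 
the real per-drawer count varies by machine generation and configuration:

```python
def lpar_size_concern(busy_cps: int, cores_per_drawer: int = 32) -> str:
    """Classify an LPAR's size per the rule of thumb above.

    busy_cps: busy GP + zIIP count for the LPAR.
    cores_per_drawer: assumed ~low 30s on recent machines; varies by model.
    """
    if busy_cps > cores_per_drawer:
        return "discuss"   # crosses a drawer: definitely want a discussion
    if busy_cps > 20:
        return "wonder"    # start to wonder, not necessarily worry
    if busy_cps >= 10:
        return "watch"     # between thresholds: temper with specifics
    return "fine"          # <10 busy CPs: not concerned
```

Again, these are discussion triggers, not hard limits.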

Somebody mentioned not having enough CPs, such that production LPARs can't get 
any high-polarity CPs. I agree that's a growing problem we're seeing with 
multiple customers, due to the wider steps between the sub-capacity settings on 
the large machines. In general, going from zero to multiple high-polarity 
processors for important LPARs is important, but tweaking weights to convert a 
medium to a high on an LPAR that already has multiple highs is probably not 
going to be significant. (That's the general conclusion of my GSE presentation 
this Thursday.)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Anyone exploiting ZEDC?

2024-04-19 Thread Scott Chapman
Agreed. It's been many years since I ran a similar test with our software and I 
don't remember the exact details, but my recollection is that I saw no 
noticeable change in CPU consumption. Which I remember being particularly 
interested in, because I'd heard of issues reading that compressed data. 

IIRC, I was only looking at CPU because I/O time can be significantly variable 
depending on where we're reading the data from. And doing less I/O is obviously 
always better, and can significantly impact runtime in some cases. So I/O time 
wasn't really a question in my mind. 

Scott Chapman

On Thu, 18 Apr 2024 14:42:47 +1000, Andrew Rowley 
 wrote:

>On 18/04/2024 12:04 am, Michael Oujesky wrote:
>> Just a thought, but anyone processing internally compressed CICS or
>> DB2 data on a non-z/OS platform (Windows/Unix) might see substantial
>> CPU usage from RLE decompression.
>
>If the compression is lightweight, decompression should be too. I can't
>speak for any other product, but I did an experiment with the EasySMF
>Java API.
>
>Running a CICS report on my laptop I got:
>
>Processing CICS compressed data: 1.2 GB/s (size after decompression)
>
>Processing uncompressed data: 800 MB/s
>
>So processing the compressed data was actually about 50% faster.
>
>--
>Andrew Rowley
>Black Hill Software
>
>--
>For IBM-MAIN subscribe / signoff / archive access instructions,
>send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Anyone exploiting ZEDC?

2024-04-17 Thread Scott Chapman
My recommendation has always been to leave Db2/CICS's RLE compression of SMF 
data enabled even with zEDC compression of the data.

1) Less data will be sent to the zEDC compression engine, which will then 
process faster. I believe at one point I had an IBM chart that showed this. 
2) The data might (likely) compress better, because intervening repeated values 
are removed before it goes through the zEDC compression. (As Andrew shows 
below.) It may depend on the data, but it makes sense when you realize that 
LZ77 compresses within a 32K window, and by removing the duplicate zeros you 
potentially get more interesting repeated data into that 32K window.  
3) When the data is read back from the zEDC-compressed store to be sent 
someplace for processing it will be smaller if the RLE compression was enabled. 
Depending on what you're doing with the data, that might be significant. 
4) The RLE compression is extremely lightweight in terms of CPU. I do not 
expect it to be noticeable: it's going to disappear in the normal variation in 
CPU time seen for running the same work on any shared system. The only 
CICS/Db2s that I would expect could have a measurable increase in CPU would be 
those that are completely idle and doing nothing but writing interval SMF 
records to say they haven't processed any data. 
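
Points 1 and 3 can be illustrated with a toy sketch. To be clear, this is NOT 
the actual CICS/Db2 RLE format or zEDC: it's a made-up zero-run encoding with 
ordinary zlib standing in for a DEFLATE-style compressor. The point is just 
that stripping runs of zeros is lossless and dramatically shrinks the data 
handed to the compression engine (and read back later):

```python
import zlib

def rle_zeros(data: bytes) -> bytes:
    """Toy RLE: a run of zero bytes becomes 0x00 + run length (1..255).
    Illustrative only; not the real CICS/Db2 SMF compression format."""
    out, i = bytearray(), 0
    while i < len(data):
        if data[i] == 0:
            run = 1
            while i + run < len(data) and data[i + run] == 0 and run < 255:
                run += 1
            out += bytes([0, run])
            i += run
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

def unrle_zeros(data: bytes) -> bytes:
    """Inverse of rle_zeros: expand 0x00+count tokens back to zero runs."""
    out, i = bytearray(), 0
    while i < len(data):
        if data[i] == 0:
            out += b"\x00" * data[i + 1]
            i += 2
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

# A fake "SMF record": a few payload fields separated by a long run of zeros.
record = b"CPU=001234;IO=56" + b"\x00" * 400
data = record * 200

rle = rle_zeros(data)
assert unrle_zeros(rle) == data       # round trip is lossless
assert len(rle) < len(data) // 10     # far less data reaches the engine
sizes = (len(zlib.compress(data)), len(zlib.compress(rle)))
```

On real SMF data the compressed-size benefit depends on how the zero runs 
interleave with the repeated non-zero fields, as discussed above.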

Scott Chapman

On Wed, 17 Apr 2024 16:36:34 +1000, Andrew Rowley 
 wrote:

>On 17/04/2024 12:09 pm, Michael Oujesky wrote:
>> Yes and zEDC poorly compresses internally RLE compressed records.
>
>I was surprised how well zEDC compressed the already compressed records.
>Using my data:
>
>zEDC alone : 52000 tracks
>
>CICS compression + zEDC : 22000 tracks
>
>zEDC seems to be biased towards speed rather than compression ratio, so
>maybe the RLE compression helps by packing more repeated bytes into
>whatever compression window zEDC uses?
>
>> Plus CSRCESRV uses GP engine cycles
>
>That's true - CPU is probably more expensive than storage, so this could
>be just an interesting side-track. On the other hand, I think zEDC has
>to decompress and recompress the data for SMF dump etc. so CICS
>compression might save some overhead for SMF housekeeping type
>operations, reducing the amount of data going through zEDC?
>
>--
>Andrew Rowley
>Black Hill Software
>
>--
>For IBM-MAIN subscribe / signoff / archive access instructions,
>send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: WLM - service class and Dispatch priority

2024-03-06 Thread Scott Chapman
Well, a significant portion of the value proposition for WLM when it was 
introduced in the mid-90s was in fact to eliminate the static assignment of 
dispatching priorities. WLM will potentially adjust dispatching priorities 
every 10 seconds, attempting to balance the performance of different workloads 
at different importances and so optimize overall throughput of the system. 

WLM makes those decisions based on how the different workloads are performing 
relative to their goals. But of course if the goals and importances are set 
"poorly" the results may not be ideal. 

The CPU Critical attribute can be set for service classes to keep a service 
class at a dispatching priority above all SCs at a lower importance. Well 
mostly... except for promotion that can happen for a variety of reasons to help 
resolve things like resource contention. But CPU Critical is generally not the 
first tool to be pulled out of the tool box.

If you want a (relatively) quick overview of WLM, you might check the 
presentations section of our website (https://pivotor.com/content.html). You 
might want to click on the topic view button at the top and scroll down to the 
WLM section. The "Introduction to the WLM" presentation might be a good place 
to start. "WLM’s Algorithms – How WLM Works" might be another good early one to 
look at. It sounds like "Revisiting Goals over Time" might also be of interest. 
:)

Scott Chapman


On Wed, 6 Mar 2024 08:33:14 +0400, Peter  wrote:

>Hello
>
>I must confess that I am not a WLM expert but I just wanted to understand
>how this works
>
>In our environment we have few started where their Service class(Velocity)
>and Dispatch priority keeps changing on its own.
>
>Based on what constraint or definition in WLM the service class and
>Dispatch priority are dynamic? Keeping a static value would be right thing
>to do ?
>Sometimes those task loop and freezes the entire zOS. So If I make those
>started task Service class and DP static then will it help consuming the
>zOS memory due to looping?
>
>Sorry if this question are basic and lacks some information
>
>Any suggestions or advice are much appreciated
>
>Peter
>
>--
>For IBM-MAIN subscribe / signoff / archive access instructions,
>send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Masking SMF data internally

2024-01-22 Thread Scott Chapman
On Sun, 21 Jan 2024 07:38:44 -0600, Paul Feller  wrote:

>Jake, I agree you need to identify what record types are needed for the
>sizing operation.  After you know which record types (and subtypes) you may
>not need to do anything.  As an example, I can't think of any sensitive data
>that might be in the SMF type 7x records.

While I agree that the 7x records generally have nothing that should be 
considered "sensitive", some organizations consider system names sensitive. 
Seems overkill to me, but... 

Scott Chapman

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SMF Interval

2023-12-30 Thread Scott Chapman
Yeah, the stuff I said there, plus... we see data from a lot of systems. 
Periodically I sample that data for interesting values, including what the SMF 
interval is. Historically that has been 15 minutes for right around 90% of the 
systems we see. Next most popular is 30 minutes in about 5% of the customers, 
although I actively discourage that when I see it. A few will show up with 5 or 
10 minute intervals. And some customers will run with 1 minute intervals for 
all or part of the day. 

It should be noted that on the creation side, the interval only drives the data 
storage required and the tiny bit of CPU/IO required to write those records. 
But of course when you go to process the data, having more intervals will mean 
that processing the data will take longer and consume more resources. That's 
usually the bigger concern. 

Opinion: In general, if I were running a system today, I'd probably use 5 
minutes. Yes, it's 3x the data (vs. 15 minutes), but for most systems that 
shouldn't really be a problem. And it may better align with your non-mainframe 
platforms. One-minute intervals are, I think, overkill, especially given the 
data you can get out of the SMF records that have shorter (sub-minute) 
intervals, like the 99s and 98s. When you have 1 minute SMF intervals, you can 
generate a whole lot of SMF 30 interval records. And if you carry that forward 
to RMF/CMF, the 74s can get quite voluminous. I'm not saying 1 minute intervals 
are never useful, but that's a rather exceptional condition. Day-in, day-out, 
most customers manage their systems just fine with 15 minute intervals, diving 
down to event or shorter-interval records (like the 99s) as needed. 
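
The "3x the data" arithmetic is simply the count of interval cut points per 
day:

```python
def intervals_per_day(interval_minutes: int) -> int:
    """Number of SMF interval records cut per day at a given interval length."""
    return (24 * 60) // interval_minutes

# 15-minute intervals: 96/day; 5-minute: 288/day -- i.e. 3x the interval
# records, and roughly 3x the storage and downstream processing cost.
per_day_15 = intervals_per_day(15)   # 96
per_day_5 = intervals_per_day(5)     # 288
```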

Because everything has exceptions... one exception to my above thinking might 
be for certain customers with extreme spikes for important events. I believe I 
once saw a financial services customer that set a 1 minute interval for a few 
minutes around market open/close. That makes sense to me. The "we needed 1 
minute intervals for a particular problem we had several years ago and just 
left it set to that" makes much less sense to me. 


Scott Chapman

On Fri, 29 Dec 2023 17:35:56 -0800, Ed Jaffe  
wrote:

>On 12/29/2023 3:20 PM, Mark Zelden wrote:
>> This paper from Scott Chapman of EPS talks about the subject and he agrees 
>> with
>> me that it should be no longer than 15 minutes and that RMF/SMF should be 
>> synced.
>>
>> https://www.pivotor.com/library/content/Chapman_SMFRecommendations_2022.pdf
>
>Super helpful. Thanks, Mark!
>
>--
>Phoenix Software International
>Edward E. Jaffe
>831 Parkview Drive North
>El Segundo, CA 90245
>https://www.phoenixsoftware.com/
>
>
>--
>For IBM-MAIN subscribe / signoff / archive access instructions,
>send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: AI System Services on z/OS 3.1 - is a CF really mandatory?

2023-11-21 Thread Scott Chapman
I think the better question is why EzNoSQL requires RLS. It probably makes 
things easier because they don't have to handle different sharing issues, but 
it seems possible that some might be interested in using the EzNoSQL API from a 
single task without sharing implications. Of course, I don't know how 
interested people are in EzNoSQL in general. 

Scott Chapman


On Mon, 20 Nov 2023 17:28:17 -0600, Peter Bishop  wrote:

>Also, given it's just SMF data being used here, surely there's a way for z/OS 
>to process that without VSAM RLS and EzNoSQL (?).  Perhaps they are using 
>"ported" code, i.e. not native to z/OS, for the AI inferencing part and hence 
>must have EzNoSQL and thus VSAM RLS.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: AI System Services on z/OS 3.1 - is a CF really mandatory?

2023-11-20 Thread Scott Chapman
Just to add to the point about "or general purpose engine": a CF LPAR doing 
relatively little activity for non-critical work can run just fine on a shared 
GP engine, assuming you have some available capacity. CF LPARs don't generally 
consume much CPU if they're not being driven by intensive data sharing. 

Unlike ~30 years ago when sysplex first came out, things like thin interrupts 
and sub-capacity pricing as well as faster CPUs means that it is plausible 
today to run CF LPARs on GPs. The most extreme case I've seen has 3 z/OS LPARs 
and 2 CF LPARs running on a single sub-capacity engine. Obviously a very small 
environment, and not a configuration I'd recommend, but it functions. 

But I do think there might be performance advantages available to some 
customers who don't have CF LPARs defined today if they just would stand up a 
small CF LPAR running on a GP. But it requires some effort to configure and 
manage. 

Scott Chapman

On Mon, 20 Nov 2023 06:32:08 +, Timothy Sipples  wrote:

>The z/OS AI Framework requires EzNoSQL, EzNoSQL requires VSAM Record-Level 
>Sharing (RLS), and VSAM RLS requires a Coupling Facility (internal or 
>external) running on either a CF or general purpose engine.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Switching between SMT-1 and SMT-2

2023-09-07 Thread Scott Chapman
Prior to those functions being zIIP-eligible, they equally depended upon access 
to CPU. Yet there are, unfortunately, limited calls for running GCPs less busy, 
which would also be good for performance overall. 

It's also worth noting that not all Db2 subsystems are production, and the 
non-production ones don't need production levels of performance. 

But I agree that many environments would benefit from more zIIPs. Almost as 
many would benefit from more GCPs as well, but that's usually even more 
difficult to do because of the way software pricing works on the mainframe and 
the fact that it's often more than the hardware cost. And the z16 A02 machines 
remain limited to only 6 GCPs. Which is a shame: I was just working with a 
customer whose environment cries out for more than 6 CPs and a processor 
capacity setting that's between the A01 5xx and 6xx. Their situation was 
particularly problematic, but they're not the only ones that could benefit from 
more than 6 CPs that could be finely adjusted in terms of capacity. 

Scott Chapman


On Wed, 6 Sep 2023 09:52:47 +, Martin Packer  
wrote:

>I really hope you're not advising customers to run the zIIP pool at 100%. Key 
>functions such as Db2 DBM1 Deferred Write and Prefetch, as well as Db2 Log 
>Writes, depend on very good access to zIIP.
>
>(This is actually why I first wrote my zIIP Capacity & Performance 
>presentation 10 years ago.)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Switching between SMT-1 and SMT-2

2023-09-05 Thread Scott Chapman
It's more complicated than that. Although I would agree that if an LPAR has 
only a single zIIP, SMT is likely a good idea. But (b) is not true over the 
intervals people usually consider when looking at utilization levels, because 
at the level of dispatch intervals it's much more likely there are at least 
some periods where the zIIPs are 100% busy. It really depends on arrival 
patterns and how much you care about very short transactions that may be 
running on the zIIPs. 

On Mon, 4 Sep 2023 20:10:31 +1000, Andrew Rowley  
wrote:

>The current situation sounds like SMT-2 should only be used if you
>
>a) have a single zIIP
>
>or b) are running your zIIPs consistently 100% busy
>
>and for b) you need to turn it off when the workload reduces?
>
>--
>Andrew Rowley
>Black Hill Software
>
>--
>For IBM-MAIN subscribe / signoff / archive access instructions,
>send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Switching between SMT-1 and SMT-2

2023-09-01 Thread Scott Chapman
There are two levels of dispatching here: PR/SM dispatches zIIP cores for the 
LPARs to use. Whether the LPAR uses both threads on that core or not depends on 
the z/OS setting. With SMT enabled, it looks like you have twice as many zIIPs 
as the LPAR has online zIIP cores. But (IIRC) the even-odd pairs are really two 
threads on the same zIIP core. 

So for example, say a z/OS LPAR has two zIIPs online. (Maybe you paid for 3 
zIIPs and you give 2 logical zIIPs to each LPAR on the HMC.) With SMT enabled 
and active, that will show up as 4 zIIPs. zIIPs 0 and 1 are threads 0 and 1 on 
one zIIP core, and zIIPs 2 and 3 are threads 0 and 1 on a second zIIP core. 

At any given point in time, PR/SM will have dispatched 0 to 2 zIIP cores to the 
LPAR. So pretend that at this moment PR/SM has given it 2 cores, so from z/OS's 
perspective there are 4 zIIPs to dispatch work on. If there are two units of 
work to run on the zIIP, those units of work will likely go to zIIPs 0 and 1 
which are the two threads on the one zIIP core (densely packing work to that 
core) vs. putting the work on zIIPs 0 and 2 which are different zIIP cores 
(sparsely packing the work on those cores). 
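
To make the even/odd pairing and the dense-vs-sparse distinction concrete, 
here's a toy sketch. The numbering scheme is illustrative, per my recollection 
above; actual logical CPU numbering isn't guaranteed to look exactly like this 
in every configuration:

```python
def core_and_thread(logical_ziip: int) -> tuple:
    """Map a logical zIIP number to (core, thread) under SMT-2,
    assuming the even/odd pairing described above."""
    return (logical_ziip // 2, logical_ziip % 2)

def dense_pack(n_units: int, n_logical: int) -> list:
    """Dense packing: fill both threads of a core before the next core,
    i.e. just take logical CPUs in numeric order."""
    return list(range(min(n_units, n_logical)))

def sparse_pack(n_units: int, n_logical: int) -> list:
    """Sparse packing (what z/OS does NOT do): thread 0 of each core first."""
    order = [c * 2 for c in range(n_logical // 2)] + \
            [c * 2 + 1 for c in range(n_logical // 2)]
    return order[:n_units]

# Two units of work, 4 logical zIIPs (2 cores):
assert dense_pack(2, 4) == [0, 1]    # both threads of core 0
assert sparse_pack(2, 4) == [0, 2]   # thread 0 of core 0 and of core 1
```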

In real life things are of course more chaotic. But that's the general idea. I 
just presented on PR/SM and z/OS dispatching and managing of CPUs at SHARE New 
Orleans. If you're a SHARE member you should be able to access it from their 
website I believe. Hopefully we'll get it up on our website next week. 

Oh and why do we (mainframe) care more than the other platforms? I'd say there 
are multiple reasons for that:
- Most other OSes sparsely pack the cores, so the degradation from multiple 
threads sharing the same core isn't felt as soon as it is under z/OS's dense 
packing.

- We measure things at a finer level of detail than the other platforms 
(typically) do. And then we obsess over them because our individual unit costs 
are much higher. When a CPU core costs you a few hundred dollars your need to 
worry about the fine details of optimizing the usage of that resource decreases 
significantly. (Now in aggregate they may spend more to do the same amount of 
work because they have to buy so many more cores, but... I think the point 
remains that you don't worry so much about individual core performance. 
Although maybe they should!)

- On your own PC/laptop it matters relatively little and at this point in time, 
people may just assume it's "ok" there and so is "ok" on the server that's 
running somewhat similar feeling hardware. This may not be a good assumption.

- Actually, when Intel's Hyper-Threading came out there were server situations 
where the recommendation was to disable it. I'm not sure if that's still the 
case or not. But I do know you can disable it if you're running (at least) 
bare-metal instances in AWS and a quick google shows that at least IBM cloud 
also allows you to disable it. So it may be that the other platforms also 
sometimes have valid reasons to disable it. 

Scott Chapman


On Thu, 31 Aug 2023 13:35:11 +, kekronbekron  
wrote:

>Hi Scott,
>
>Could you expand on this please.
>
>> But z/OS "densely packs" the cores, meaning that if a work unit is running 
>> on a zIIP core and another zIIP eligible work unit comes in it will run on 
>> the second thread on the already busy zIIP core instead of being dispatched 
>> to an available but unused zIIP core. As I understand it, this was done 
>> because PR/SM dispatches cores, not threads, to the LPARs and this dense 
>> packing makes that easier.
>
>What does "dispatches cores" mean, and how is "run on second thread on already 
>busy zIIP" an example of that (dispatching cores), and the second part 
>(dispatch to a new core) isn't?
>
>
>Also a general Q to all - why is SMT a big topic with mainframes?
>Distributed's hyperthreading is everywhere.
>
>- KB
>
>--- Original Message ---
>On Thursday, August 31st, 2023 at 18:20, Scott Chapman 
><03fffd029d68-dmarc-requ...@listserv.ua.edu> wrote:
>
>
>> On Wed, 30 Aug 2023 12:14:29 +, Peter Relson rel...@us.ibm.com wrote:
>> 
>> > I'll bite. Why would you want to switch? Activating it is one thing.
>> > 
>> > There are situations where a job might run better not multi-threaded.
>> > It's not clear that the system ever would run better not multi-threaded.
>> 
>> 
>> There are multiple considerations as to whether SMT should be enabled. As Ed 
>> Jaffe said, my preference would be to add real zIIPs if I at all could and 
>> only use SMT when that was not (or no longer) feasible. My recommendation is 
>> to not enable SMT until you have a defined reason to and where it's then 
>> proven to be beneficial.
>> 
>> As a review for those stumbling acr

Re: Switching between SMT-1 and SMT-2

2023-08-31 Thread Scott Chapman
On Wed, 30 Aug 2023 12:14:29 +, Peter Relson  wrote:

>I'll bite. Why would you want to switch? Activating it is one thing.
>
>There are situations where a job might run better not multi-threaded.
>It's not clear that the system ever would run better not multi-threaded.

There are multiple considerations as to whether SMT should be enabled. As Ed 
Jaffe said, my preference would be to add real zIIPs if I at all could and only 
use SMT when that was not (or no longer) feasible. My recommendation is to not 
enable SMT until you have a defined reason to and where it's then proven to be 
beneficial. 

As a review for those stumbling across this who might not know:
If there's only single unit of work running on the zIIP, SMT matters not at all 
because there's no contention for that zIIP core. But when there's two active 
threads on a zIIP, both will contend for the common core resources and so run 
somewhat slower. So it is never a single job that runs worse multi-threaded; if 
SMT is negatively impacting individual workloads, it will always be impacting 
work in groups of two work units. 

But z/OS "densely packs" the cores, meaning that if a work unit is running on a 
zIIP core and another zIIP eligible work unit comes in it will run on the 
second thread on the already busy zIIP core instead of being dispatched to an 
available but unused zIIP core. As I understand it, this was done because PR/SM 
dispatches cores, not threads, to the LPARs and this dense packing makes that 
easier. 

So depending on the arrival pattern and volume of the work, how busy the zIIPs 
are, what the LPAR configuration is like, etc., it is possible that work could 
be densely packed on the zIIPs while there are unused zIIP cores that would 
allow the work to run better. zIIPs are often lowly utilized compared to GCPs, 
and at certain points in time it's entirely conceivable that it would be better 
to utilize under-utilized zIIP cores without SMT. 

In general, SMT is more valuable at higher zIIP utilization levels. However 
(depending on lots of things) it can be useful at lower utilizations where, for 
example, there's spikes in the arrival patterns of very short-running 
transactions. That can certainly happen in DDF environments, but the most 
egregious cases of this I've seen have been in Websphere environments. 

The threshold for "at higher zIIP utilization levels" is variable, again 
depending. E.g., in a configuration with low zIIP utilization levels where 
there's only 1 or 2 zIIPs shared amongst several LPARs, SMT might be useful 
because an LPAR may not have access to both zIIP cores simultaneously, so 
having that extra thread on the single core that PR/SM gave it could be useful. 

Another, less significant, consideration is capacity planning. Because 
performance and zIIP consumption are so variable with SMT enabled, and because 
each workload's zIIP consumption in the SMF 30 and SMF 72 records is recorded 
as an estimate of what it would have consumed had SMT not been enabled, 
accurate zIIP capacity planning (especially at the workload level) is pretty 
much impossible with SMT enabled. But this is of relatively little concern if 
your zIIP capacity planning is "we'll buy more when we start to see problems... 
or when we do the next upgrade". Which, to be fair, most customers are in that 
situation: they don't do any real detailed planning for zIIP capacity.

Scott Chapman

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Switching between SMT-1 and SMT-2

2023-08-30 Thread Scott Chapman
Note that you must also IPL with PROCVIEW CORE (optionally appending ,CPU_OK) 
in LOADxx before you can switch back and forth via the setting in IEAOPTxx.
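
For reference, the parmlib pieces involved look roughly like this. This is a 
hedged sketch: the xx suffixes are placeholders for your installation's member 
naming, and the current MVS Initialization and Tuning Reference should be 
checked for exact syntax:

```
LOADxx   (requires an IPL to change):
  PROCVIEW CORE,CPU_OK

IEAOPTxx (switchable without an IPL once PROCVIEW CORE is in effect):
  MT_ZIIP_MODE=2     two threads per zIIP core (SMT-2)
  MT_ZIIP_MODE=1     one thread per zIIP core (SMT-1)

Activated with the SET OPT=xx operator command.
```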

Scott Chapman

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Calculating SIIS% as MSUs or MIPS

2023-05-06 Thread Scott Chapman
I agree. I don't think you can readily deduce the potential absolute CPU 
capacity impact from the SIIS % number. That's no doubt why IBM gave general 
guidance on how serious you should consider the problem rather than saying "oh 
this is costing you this many CPU seconds/MSUs/MIPS". My recommendation is to 
also consider the actual CPU consumption during the periods where there is a 
high SIIS% issue. I've seen cases where the SIIS % is quite high but the actual 
CPU consumption of the LPAR is relatively minor (e.g. perhaps a sysprog test 
LPAR or a relatively idle production LPAR) meaning that the net savings will be 
also relatively minor in the overall scheme of things. 

Scott Chapman

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Question on use of LPARNAME, SYSNAME and SMFID

2023-02-11 Thread Scott Chapman
Having looked at data from a whole lot of customer systems, I can say that 
SMFID and SYSNAME are often (but not always) the same. LPARNAME is very often 
different, although I appreciate it when there's at least some sort of visual 
link between it and SMFID/SYSNAME. E.G. SYSA and C1SYSA vs SYSA and C1LP4. Most 
sites do tend to have that sort of link between them, but some don't. It seems 
like that would make it easier to make a mistake while working on the HMC. 

There's a whole lot of PRODPLEX and SYSA and similar out there. But there's 
also a fair number of more creative names too. 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Transmitting SMF records

2022-12-17 Thread Scott Chapman
Ah, I missed that or had forgotten it by the time I got to posting. I haven't 
tried it myself, but I have heard it is problematic to get the data back into a 
z/OS dataset in a usable fashion. 

On Sat, 17 Dec 2022 09:47:39 +1100, Andrew Rowley 
 wrote:

>The OP specified being able to reverse the process, so my understanding
>was it needed to transfer back to a dataset on z/OS.
>
>This seems to be surprisingly difficult - IBM doesn't seem to have
>considered round trip capability when they wrote the FTP functions.
>(Although I haven't tried it with a RECFM U transfer.)
>
>--
>Andrew Rowley
>Black Hill Software
>
>--
>For IBM-MAIN subscribe / signoff / archive access instructions,
>send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Transmitting SMF records

2022-12-16 Thread Scott Chapman
We have customers (S)FTP(S)ing us data every day directly referencing the 
output of IFASMFDP with DCB=RECFM=U with no problem. From our standard 
instructions:

//SENDFTP EXEC PGM=FTP,PARM='(EXIT=12'
//SYSPRINT DD SYSOUT=*
//OUTPUT DD SYSOUT=*
//SMFFILEA DD DSN=*.SMFDMP.DUMPOUT,DCB=RECFM=U,BLKSIZE=32760,
// DISP=(OLD,DELETE) 
//SYSIN DD *
ftp.pivotor.com userid password
BINARY
LOCSITE EPSV4
QUOTE EPSV 
PUT //DD:SMFFILEA Company.Dyymmdd.SysplexName.SystemName.SMFU

BDWs and RDWs come through fine. (I think "quote epsv" is extraneous but it's 
been in the instructions forever. LOCSITE EPSV4 can be important for getting 
through some firewalls properly.)

Scott Chapman 

On Thu, 15 Dec 2022 12:27:56 -0600, Paul Gilmartin  wrote:

>>From Barry Merrill :
>>...
>>But if the destination is for ASCII and SAS, you can use IEBGENER to create a
>>copy of the data on z/OS using RECFM=U, which ftp can't muck up. Since the
>>file has the BDW and RDW, SAS on ASCII can read the downloaded RECFM=U file
>>directly by processing it as RECFM=S370VBS.
>>
>> // EXEC PGM=IEBGENER
>> //SYSUT1 DD DSN=YOUR.VB.FILE,DISP=SHR,RECFM=U,BLKSIZE=32760
>> ...
>I have tried to shortcut such a process at this point by:
> // EXEC PGM=FTP
> //SYSUT1 DD DSN=YOUR.VB.FILE,DISP=SHR,RECFM=U,BLKSIZE=32760
> //INPUT DD *
> ...
> binary
> put DD:SYSUT1 ...
> ...
>Only to be dismayed that FTP apparently reads the DSCB and lets that dominate
>attributes coded on the DD statement.
>
>This is a significant transgression of MVS conventions.
>
>-- 
>gil
>



Re: FYI: IBM sales jump shows the mainframe is not dead, with hybrid cloud alive and well | Network World

2022-10-21 Thread Scott Chapman
Which may be part of the reason for releasing the smaller version* of a 
particular generation some months after the larger version. 

* - The models formerly known as "Business Class" that are now seeking a handy 
name. 


On Thu, 20 Oct 2022 17:07:14 -0500, Mike Schwab  wrote:

>The big problem with mainframe reporting is the finances compare one
>year apart while models come out 2 years apart.  Useful reporting
>would be by months since model release.
>



Re: How to calculate MIPS or SU's from user CPU time statistics?

2022-02-09 Thread Scott Chapman
Presumably what management really would like to know is how many dollars 
(euros, whatever currency you're working in) was saved. How you go about 
calculating that depends...

If you're under Tailored Fit Pricing with IBM your IBM software bill is based 
on the CPU time you consume over the year. You should have two numbers from IBM: 
the baseline cost per "MSU" (really MSU hour) and the lower (50%?) 
incremental cost per MSU that kicks in after you've reached your baseline for 
the year. Note that TFP contracts are individually negotiated but my 
understanding is that in most contracts, reducing your MSU-hours consumed below 
the annual baseline won't reduce the actual money you're committed to send to 
IBM. But you could still argue that the value of MSU-hours saved is that 
baseline cost per MSU. 

Converting from CPU time to MSU-hours is relatively straight-forward:

MSU-hours consumed = (MSU Rating of Machine / GPs of the machine) * CPU-seconds 
/ 3600
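As a sketch in Python (the machine figures in the example are made up purely for illustration):

```python
def msu_hours(cpu_seconds, machine_msu_rating, gp_count):
    """CPU-seconds consumed, converted to MSU-hours: the machine's
    per-GP MSU rating times the CPU time expressed in hours."""
    return (machine_msu_rating / gp_count) * (cpu_seconds / 3600.0)

# Illustrative (made-up) numbers: a 1000-MSU, 10-GP machine and a
# workload that consumed 7200 CPU-seconds:
print(msu_hours(7200, 1000, 10))  # 200.0 MSU-hours
```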

If your TFP agreement says that you pay $x / MSU for your baseline and $y / MSU 
for your incremental beyond the baseline, multiply by the appropriate number 
depending on whether you're going to be above or below the baseline for the 
year. (Recognizing that if you're below, you may not actually be reducing the 
money sent to IBM.)

That's one of TFP's selling points: you can more directly relate CPU 
consumption to your software costs. 

If you don't have TFP you're most likely under WLC (Workload License Charges) 
and your IBM MLC software bill is based on the peak rolling 4 hour average for 
the month. If that's the case, you first have to determine if you reduced that 
peak. If you didn't, you didn't save anything. If you did, then you need to 
find your incremental MLC cost per MSU. That is not the average cost/MSU. Your 
IBM MLC rep should be able to help with this. 

There may be an argument to be made that if you removed workload from your 
peak, but the peak didn't go down because other latent demand immediately 
consumed that capacity, well presumably there was some business value in that 
other work getting done sooner. (If there wasn't then think about whether that 
work needs to run in the peak!)

If you're not under TFP, separate from MLC, your IBM IPLA/PPA software is 
likely not based on the R4HA and is likely based on some amount of paid-for 
capacity (which could be less than your hardware capacity). Typically, reducing 
your usage won't reduce those (usually annual) maintenance charges until/unless 
you give back some of your purchased license capacity. But then you'll have to 
re-purchase that license capacity if you need it back in the future. 

ISV software costs are of course dependent on your software contracts. You may 
possibly have a combination of all 3 of the aforementioned models. 

Then there's the whole hardware cost point of view too. For many customers this 
is somewhat abstract and is something along the lines of "if we're reducing our 
utilization maybe we can delay an eventual hardware upgrade". Putting a dollar 
value on that may be... difficult. 

But if you're regularly doing Capacity On Demand to add more capacity, and 
you've enabled efficiencies that have let you avoid a COD event, then the 
savings from that should be fairly obviously the cost of that COD that you 
avoided. 

Understanding the real dollar impact of tuning efforts is important but 
obviously requires knowledge of how your billing is arranged. We've seen 
customers who've saved significant real dollars from tuning to avoid COD or 
reducing their peak utilization under R4HA. But we've also seen customers 
who've not saved real dollars at all because they were paying for the R4HA peak 
and they were only affecting things that were off-peak. For customers trying to 
move workload off from the mainframe, it can sometimes be hard to reduce 
mainframe costs until they've moved off really significant amounts of workload. 
(Notice I didn't use the word "savings" in the previous sentence: whether the 
mainframe is cheaper or more expensive than the environment they're moving too 
is a yet deeper discussion!)

Scott Chapman



Re: Software drag racing

2021-06-25 Thread Scott Chapman
That is a bit surprising given that it looks like that test only runs for 5 
seconds. But there's not much code there and it's all in a loop. My recollection 
is that the JIT compiler will step in after x repeated executions, which will 
happen pretty quickly here. And the JIT-compiled code has the potential to be as 
fast (or maybe faster) than any other compiled code. Without looking too hard, it 
doesn't appear that there's really any object allocation going on inside that 
loop, so the overhead of Java managing objects on the heap appears to be a 
non-factor here as well.

So I think it would certainly be possible that Java would be similar to any 
other compiled language, if the test ran sufficiently long such that the time 
to get the code JITed is relatively short compared to the overall execution 
time. And IBM did a whole lot of work to speed up JVM startup. Still, it is 
surprising to me that it works that well over a 5 second test. 

If other platforms don't JIT as quickly or aggressively, or if their JIT 
compiler isn't as smart as IBM's then their results may not be the same. 
Similarly, if the IBM C compiler isn't as optimized as it is on other 
platforms, it might underperform.

Scott Chapman



Re: Java 8 (latest!) and TLSv1.3 - anyone got it working?

2021-06-19 Thread Scott Chapman
For those wondering: Java 9 changed some fundamental things and is not 
necessarily drop-in compatible with Java 8, making migration from 8 to 9 (or 
above) something that can take some real effort. There were always potential 
issues going between Java versions but the 8 to 9 transition is especially 
painful. 

After 9 they also went to a 6 month release cadence, but most of those releases 
are only supported for 6 months. But about every 3 years there's a long term 
support (LTS) release that's supported for years. Version 11 was the first of 
those, 17 (this fall) will be the next. 

Not really important on z/OS in particular, but in 2019 Oracle also changed 
its licensing such that Oracle Java is no longer free for commercial use. 
Those using Java commercially can continue to use OpenJDK (the reference 
implementation) or one of the other free alternatives though. 

In short "they" made a mess of Java after 8. There's reasons for it and there's 
some good things in Java 9+, but... things are definitely different.

Now why it's taken IBM >2 years to support Java 11, I don't know. One guess 
might be that they haven't put much effort into it because there's not a lot of 
demand for it as long as 8 is viable and getting people to migrate to 11 from 8 
may be non-trivial. (How many sites are still using old COBOL compilers despite 
better more modern alternatives being available?) At this point OpenJDK shows 
Java 8 being supported until "at least May 2026" and Java 11 until "at least 
October 2024". So given that 17 is potentially coming available in September, 
and given that I think the migration from 11 to 17 will likely be easier than 8 
to 9+, I wouldn't be surprised if they just hold off for 17. 

Scott Chapman



Re: IBM Zcloud - is it just outsourcing ?

2021-05-29 Thread Scott Chapman
I think one important distinction of cloud vs. outsourcing is the ephemeral 
nature of the resources in cloud computing. I.E. the ability to start from 
zero, provision compute and storage resources of some type (either manually or 
automatically in response to changing conditions) and then deprovision them 
similarly after using the resources for perhaps mere minutes or hours. The cost 
is determined by what you used for the duration you used it, typically billed 
to an interval of minutes or sometimes even seconds. And since it has on-ramp 
starting at zero infrastructure and zero cost, you can easily try out ideas at 
a cost of something you can put on a credit card. Infrastructure is charged in 
increments of pennies. And if it doesn't work out, you turn it off and your 
charges stop.*

Last I knew, and I would like to be proven wrong, zCloud didn't embody the idea 
of "I want to play with z/OS for a few hours, stand up a z/OS image with x CPU 
and y GB of disk and put it on my credit card". 

*-Remember: in the cloud, you pay for what you forgot to turn off. And those 
pennies can add up shockingly fast in some cases! 

Scott Chapman



Re: Low softcapping on high capacity CEC

2021-04-01 Thread Scott Chapman
Absolute CP capping caps the LPAR at the specified number of CP's worth of 
capacity. It avoids the issues with initial capping (by weight) in which LPAR 
A's available capacity can change when LPAR B or C is activated or deactivated 
if LPAR A's weight isn't readjusted too. 



Re: Low softcapping on high capacity CEC

2021-03-31 Thread Scott Chapman
As was mentioned, ABSMSUCapping may be useful in that you can limit the entire 
group to a specific number of MSUs. I.E. if you defined the group of 4 LPARs in 
a capacity group at 24 MSUs (or whatever) and enabled ABSMSUCapping on all of 
them, all 4 LPARs would be capped (in total) at that 24 MSUs. 

The Absolute CP Capping can work too, but you can't set the limit across the 4 
LPARs, so they can't borrow capacity from each other. I.E. if you want the 
limit to be a combined 24 MSUs but you want each to be able to consume up to 24 
MSUs, then a group cap with Absolute MSU Capping is the way to go.

If you want to limit individual workloads within the LPARs, then WLM resource 
groups can do that for you. That's at a service class by service class basis 
and most often are specified by SU/s (not the same as MSUs). So assign the SAS 
work to a specific SC (BATHOGS?) and make sure that SC has a resource group 
that limits how much CPU the work can consume. 
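As a rough aid when thinking about resource group limits: an MSU is nominally one million service units per hour, so an MSU figure converts to the SU/s a resource group expects like this (my back-of-envelope arithmetic, ignoring the technology-dividend derating, so treat it as ballpark only):

```python
def msu_to_su_per_sec(msus):
    """Ballpark conversion: 1 MSU is nominally one million service
    units per hour, so divide by 3600 to get an SU/s figure of the
    kind a WLM resource group limit uses. (Technology-dividend
    derating on recent machines makes this approximate.)"""
    return msus * 1_000_000 / 3600.0

print(round(msu_to_su_per_sec(24)))  # a 24-MSU cap is roughly 6667 SU/s
```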

Scott Chapman



Re: Querying WLM address space CPU delays

2021-03-17 Thread Scott Chapman
Alas no, but there's a number of products out there that will read said 
records, including our own. ;) Pivotor does have a free tier, but it's not open 
source. 

Scott Chapman



Re: Querying WLM address space CPU delays

2021-03-16 Thread Scott Chapman
SRM/WLM are already sampling the work running on the system. SMF 72 contains 
delay samples. Including by report class. You can define up to 2047 report 
classes so you can get a good bit of granularity. Maybe not down to a specific 
batch job, but probably more than granular enough to understand how the work 
overall is performing and monitor for the work degrading over time. 

Monitoring the delay samples over time is one of the things I highly recommend, 
especially in the situations where you're always running at 100% busy or always 
running at cap or something like that. 
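For reference, the using and delay samples in the SMF 72 records are the same numbers that feed WLM's execution velocity; the arithmetic is just:

```python
def execution_velocity(using_samples, delay_samples):
    """WLM execution velocity: percentage of samples in which the work
    was using resources, out of all samples where it was either using
    or delayed (idle/unknown samples don't count at all)."""
    total = using_samples + delay_samples
    return 100.0 * using_samples / total if total else 0.0

print(execution_velocity(300, 700))  # 30.0
```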

Scott Chapman



Re: zIIP MultiThreads

2020-09-05 Thread Scott Chapman
On Fri, 4 Sep 2020 09:22:09 -0700, Ed Jaffe  wrote:


>IMHO, if you need additional zIIP capacity for a production workload, it
>probably makes more sense to configure another zIIP core online than it
>does to enable MT=2.

Agreed. SMT is a good thing to keep in your pocket for the emergency of "we 
need a little more zIIP capacity, but we can't purchase another zIIP right 
now". Don't use it as a substitute for real zIIP capacity. If you need zIIP 
capacity buy more zIIP capacity. It's relatively inexpensive. 

1) The amount of extra throughput you'll get is variable depending on your 
workload mix. And the timing of the mix of workload. You can't use test to 
accurately predict production, you just have to try it. 

2) MT=2 is effectively more/slower engines vs. fewer/faster engines. This is a 
trade-off that's ok to good for most workloads, but is worth bearing in mind.

3) The measurements are complicated estimates based on instruction counts seen 
with 1 and 2 threads active that are not well documented. Most importantly, the 
reported zIIP time becomes MT1ET: the estimated time the work would have 
consumed on the zIIP had the zIIP been running in MT=1 instead of MT=2. But 
remember it's an estimate and based on numbers that change based on the 
instruction mix. Doing accurate capacity planning with estimates that vary this 
way is difficult. Which may not be a problem if your capacity planning for 
zIIPs is "when we start seeing cross-over to the GCPs we buy more zIIPs" (which 
is not necessarily a bad policy). 

SMT is a useful tool in certain situations, but IMHO, I wouldn't consider 
enabling MT=2 as the default. If you're having a problem that more zIIP 
capacity might help, then more real zIIP capacity is the best answer. In cases 
where that's impractical, MT=2 might be useful.

Scott Chapman



Re: Free Mainframe Stuff 2020: Reply Here with Nominations

2020-07-10 Thread Scott Chapman
We have people using our free WLM to HTML tool pretty much every day:
https://www.pivotor.com/wlm2html.html

There is also a free tier to our Pivotor performance reporting service as well:
https://www.pivotor.com/freeTier.html

We also do free cursory performance reviews and of course do regular free 
webinars as well. 

Scott Chapman



Re: New Mainframe Community

2020-06-16 Thread Scott Chapman
Increasingly rare though. 

Looking across all of our customer requests for the last month or so, I do see 
users who came in on IE, but in every one of those instances, somebody else 
from the company came in on Chrome, Firefox, or Edge (whose earlier versions 
weren't much better than IE). There are even a few Safari and Opera 
users out there. 

The numbers imply that IE may be the corporate default but most (and apparently 
all our customers) are allowing Chrome or Firefox as well. Chrome seems to be 
the most popular option. Even Microsoft saw the light: the current version of 
Edge is based on the open-source Chromium project under the covers. Some have 
suggested it would be better to have more diversity in the underlying browser 
technology, but Chromium generally is pretty good. 

Scott Chapman


On Mon, 15 Jun 2020 10:46:13 -0400, Gord Tomlin 
 wrote:

>On 2020-06-15 00:49, David Crayford wrote:
>> Wow, "corporate-required Internet Explorer"! Your company needs to
>> review some of it's standards!!
>
>We still encounter this now and then in our customer base.
>
>--
>
>Regards, Gord Tomlin
>Action Software International
>(a division of Mazda Computer Corporation)
>Tel: (905) 470-7113, Fax: (905) 470-6507
>Support: https://actionsoftware.com/support/
>



Re: What is GRXBIMG

2020-06-04 Thread Scott Chapman
I was making a similar point to somebody recently: the majority of the words in 
the manuals do not change between versions, and just because some words were 
written back in the 90s (or earlier?) and are still in the manual doesn't mean 
they're equally applicable in today's world. Especially if they're performance 
advice. 

Scott Chapman
 

On Wed, 3 Jun 2020 07:00:19 -0700, Charles Mills  wrote:

>I must have time on my hands. I just dragged out the OS/390 V2R8 CDs from 
>1999, and the sentence is there verbatim.
>
>It's the only hit on GRXBIMG on CD #1.
>
>Charles
>
>
>-Original Message-
>From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
>Behalf Of Steve Smith
>Sent: Wednesday, June 3, 2020 6:37 AM
>To: IBM-MAIN@LISTSERV.UA.EDU
>Subject: Re: What is GRXBIMG
>
>It's still there in V2R4... and I am appalled that I've been running REXX
>incorrectly for decades now.
>
>sas
>
>
>On Wed, Jun 3, 2020 at 9:35 AM Charles Mills  wrote:
>
>> Fascinating!
>>
>> I'm looking at a V1R4 TSO/E Rexx manual and the sentence is in there.
>> Chapter 8, Using Rexx in Different Address Spaces.
>>
>> Charles
>>
>>
>> -Original Message-
>> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
>> Behalf Of Seymour J Metz
>> Sent: Tuesday, June 2, 2020 9:31 PM
>> To: IBM-MAIN@LISTSERV.UA.EDU
>> Subject: What is GRXBIMG
>>
>> In the REXX Reference I saw this: "You can invoke a REXX exec in the TSO/E
>> address space in several ways. To invoke an exec in TSO/E
>> foreground, use the TSO/E EXEC command processor to either implicitly or
>> explicitly invoke the exec and
>> you must have ddname GRXBIMG allocated." What is GRXBIMG?
>>
>>
>>
>> --
>> Shmuel (Seymour J.) Metz
>> http://mason.gmu.edu/~smetz3
>>
>>
>



Re: New Jersey Pleas for COBOL Coders for Mainframes Amid Coronavirus Pandemic

2020-04-11 Thread Scott Chapman
On Fri, 10 Apr 2020 16:10:02 -0400, Phil Smith III  wrote:

>Sigh:
>
>https://www.popularmechanics.com/science/a32095395/cobol-programming-language-covid-19/
>

At the end it really goes off the rails when it starts making performance 
assumptions that Java would be impossibly slow, and maybe Python would be 
better. 

Python is on my list of "one of these days I should probably learn that", but 
from my limited Python knowledge it seemed really unlikely that Python would 
be faster than Java. Some simple googling around confirms that it's really hard 
to find any references to Python being faster than Java for anything but 
trivial scripts. Although there are apparently some ways to compile Python, 
Python's loose typing is often cited as a potential performance limitation as 
well. 

And in real life Java is definitely not always slower than COBOL. Just-in-time 
compilation can result in more performant object code than ahead-of-time 
compilation, especially when "ahead-of-time" means "a decade ago for a target 
machine that was not real new even then". And Java runs on the zIIPs, which can 
be a significant advantage in some environments. 

But I would be surprised to find a case where COBOL isn't the most memory 
efficient. 

Scott Chapman



Re: How do I compare CPU times on two machines?

2019-12-15 Thread Scott Chapman
>> The numbers below (from IBM.com) do not seem to support what you are saying 
>> however: "if you're trying to convert CPU time between machines, the ratio 
>> of any of SUs, MSUs, or PCI will be pretty much equally "fine"." The ratio 
>> of the PCI's of the two machines is about eight-to-one but they seem in 
>> practice to be *about* the same speed: that is, a job that uses about 1 CPU 
>> second on one seems to use about 1 CPU second on the other (certainly not 
>> eight times as much!). The SU/SEC ratio for the two machines is 40404/3 
>> which seems to more accurately reflect observed reality (although way less 
>> than perfectly! -- less perfectly than a guess of "oh, I guess they are 
>> about the same speed").
>>
>> Processor#CP PCI MSU MSUps   Low Average High
>> 2817-730 30  23,929  2,855   2,370   49.54   42.75   37.96
>>
>> Processor#CP PCI MSU Low Average High
>> 2818-Z05 5   3,139   388 6.185.61
>> 4.77
>>

Sorry... I failed to mention that you have to use the Per CPU ratings. SU/sec 
is already on a per CPU basis, which is why that number seems more in line with 
what you expect. 

23929 / 30 = 797.6     2855 / 30 = 95.1
3139 / 5  = 627.8      388 / 5  = 77.6

797.6 / 627.8 = 1.27
95.1 / 77.6 = 1.22
40404 / 3 = 1.21

The PCI ratio is a bit farther off from the other two, but again, these are 
rough estimates and to that degree they're reasonably close. We're drawing with 
the fat crayons here, not fine drafting pens. 
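The per-CP arithmetic above can be wrapped up as follows (the figures are the LSPR numbers quoted at the top of this message):

```python
def per_engine_ratio(metric_big, engines_big, metric_small, engines_small):
    """Ratio of per-CP capacity between two machines for one
    single-number metric (PCI, MSU, ...)."""
    return (metric_big / engines_big) / (metric_small / engines_small)

# 2817-730: 30 CPs, PCI 23929, MSU 2855; 2818-Z05: 5 CPs, PCI 3139, MSU 388
print(round(per_engine_ratio(23929, 30, 3139, 5), 2))  # 1.27
print(round(per_engine_ratio(2855, 30, 388, 5), 2))    # 1.23 (1.22 if you
                                                       # round the per-CP
                                                       # values first)
```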

But... I just realized you used the SU/sec from the 2818-Z04, not the Z05, 
which is 32258.

40404 / 32258 = 1.25

Which is pretty much in the middle of the other two ratios, so it all seems to 
match up as I'd expect now. 

Re. your "a job on one machine uses about 1 second of CPU and uses about 1 
second of CPU on the other": if an observed 1.00 counts as "about" the expected 
1.25, then I think all is as one might expect. 

But a 1 second job is relatively quick. And there's probably other work on the 
systems that could be influencing both. For example, the larger machine may 
have more work running that's having a larger negative impact on the test job 
running on that machine, so it could actually consume more CPU time than the 
test job running on the notionally slower machine if the slower machine is 
relatively idle when the test job runs.  LPAR configurations can also play in 
here, sometimes significantly. 

Remember, your CPU time increases as your application has to go further into 
the memory hierarchy to find the data. (I.E. if the instructions/data weren't 
in L1 cache.) So on a busier system, other (especially higher priority) work 
may be making it harder for a particular test job to keep its data closer 
to the processor core. That's also why you'll see potentially significant 
variations between runs of the same exact job.  That's why I always want to see 
multiple re-runs so I can understand the "normal" variation. (But one still 
needs to take into account the current system activity: "normal" variation will 
itself vary.)

Nothing is simple...

Scott Chapman



Re: How do I compare CPU times on two machines?

2019-12-14 Thread Scott Chapman
SUs, MSU, PCI (IBM MIPS) ratings are all just different magnitudes of the same 
number. What I mean is that they all are calculated from the same LSPR tests 
and exist in relatively fixed ratios to each other. There may be some slight 
variations because (for example) MSUs and PCIs are quoted in whole numbers and 
IBM seems to tweak them very slightly. The ratios differ slightly between 
single and dual frame, full speed and sub-cap engines, and number of engines. 
But for all practical purposes, the ratios hold within a couple percent.

Re. the "technology dividend" where they derated the MSUs relative to the SUs 
(before they decided to deliver software price improvements by software price 
changes), that only changed the ratios between MSUs and the SUs. 

In the old days (before the "technology dividend"): MSUs ~ SU/sec * CPUs * 3600 / 1,000,000
Now, for the last several generations of machines: MSUs ~ SU/sec * CPUs * 3600 / 
1,000,000 * 0.664
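As a sanity check, that relationship in code (my sketch; the division by 1,000,000 is what turns service units per hour into the "millions" of MSU):

```python
def approx_msu_rating(su_per_sec, cpus, dividend_factor=0.664):
    """Approximate a machine's MSU rating from its per-engine SU/sec
    rating: SU/sec * engines * 3600 is service units per hour, divided
    by 1,000,000 to get 'millions of service units', derated by the
    technology-dividend factor on recent machines."""
    return su_per_sec * cpus * 3600 / 1_000_000 * dividend_factor

# Using the 2817-730 figures quoted earlier in this thread
# (40404 SU/sec, 30 GPs, rated 2855 MSUs):
print(approx_msu_rating(40404, 30))  # ~2897, within a couple percent of 2855
```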

With the z15, the ratios between SUs and PCI are even flatter than they were in 
prior generations. I.E. my understanding is that they're doing much more of a 
straight-up calculation and not "tweaking" (my term) the results as much as 
they did prior. (And to be clear: the prior variation due to "tweaking" was not 
very much, it's just less with the z15.) 

So in short, if you're trying to convert CPU time between machines, the ratio 
of any of SUs, MSUs, or PCI will be pretty much equally "fine". Not necessarily 
accurate, but all of them will be about the same. 

If you want to be more accurate about it (such as evaluating whether an upgrade 
delivered the expected results), then build zPCR models of the two machines in 
question and use the ratios that it produces. But in all cases, the expectation 
is that the ratio between the two machines is an average of many different 
types of work. Individual work units will over- or under-perform expectations. 
The hope (and real expectation) is that across all the work on the system you 
come close (+/- 5%) to the ratio provided by zPCR. Reality may differ more 
significantly from expectation if you're just using one of the single-number 
metrics without regard to the RNI of the work and the LPAR configuration 
(factors that zPCR takes into account). 

Scott Chapman


On Fri, 13 Dec 2019 20:55:54 -0500, Phil Smith III  wrote:

>I don't think service units necessarily work, since there's the "technology 
>dividend", where IBM admits (heck, touts) that 1 MSU on
>generation n is capable of more work than 1 MSU on generation n-1. They don't 
>always do this, but have more than once.
>
>
>
>.phsiii
>
>



Re: How display level of paging?

2019-11-04 Thread Scott Chapman
Fair enough. My reply was in response to the comment (which I apparently forgot 
to quote) about having 3x real memory for paging space. 

Depending on where the problem is, even if you're not paging, more memory could 
be beneficial if the system was configured to use it appropriately. I.E. if the 
problem is related to doing excessive I/O and you could eliminate that with 
additional memory for buffering/caching, then maybe adding memory might make 
sense. 

But you do need to figure out why people are complaining before you can come up 
with a plan for making it better. I'd start with trying to compare a "good" 
time to a "bad" time to see what is different. Is the difference in paging 
rates? I/O rates? CPU consumption? CPU Delay? Looking at those at a high level 
should be relatively straight-forward. Digging into the "why" of the particular 
metric changed gets more interesting. 

If your performance reporting tool makes that onerous, remember that we 
(Enterprise Performance Strategies) do offer free cursory performance reviews 
and our performance reporting service does have a free tier as well. Contact me 
off-list if you're interested. 

Scott Chapman

On Sun, 3 Nov 2019 06:16:42 -0800, Charles Mills  wrote:

>"Large memory" is not the situation I am dealing with. It is a modern system 
>but it is at a service bureau and there is a substantial charge associated 
>with real memory. My management does not want to just throw money at the 
>system; he wants some way of seeing whether real memory constraint is a 
>problem and whether additional real memory improves the problem. "Performance" 
>is hard to measure because the workload is extremely varied and not directly 
>under our control, so mostly what we have is subjective: "it's really slow 
>today."
>
>Charles
>
>
>-Original Message-
>From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
>Behalf Of Scott Chapman
>Sent: Sunday, November 3, 2019 5:08 AM
>To: IBM-MAIN@LISTSERV.UA.EDU
>Subject: Re: How display level of paging?
>
>I'm not so sure that's practical and necessary for the large memory systems 
>that we have today. 
>
>Last time I looked across a number of customers it was fairly common for LPARs 
>with hundreds of GB of memory to have paging space < 1x memory. Sometimes much 
>less. Those with Storage Class Memory were more likely to have paging space >= 
>real storage. But even there, we've seen >1TB LPARs with only a few 
>hundred GB of paging space, including SCM. 
>
>Of course it is also fairly common for those large memory systems to be 
>running with large amounts of that memory being available. 
>
>--
>For IBM-MAIN subscribe / signoff / archive access instructions,
>send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: How display level of paging?

2019-11-03 Thread Scott Chapman
I'm not so sure that's practical and necessary for the large memory systems 
that we have today. 

Last time I looked across a number of customers it was fairly common for LPARs 
with hundreds of GB of memory to have paging space < 1x memory. Sometimes much 
less. Those with Storage Class Memory were more likely to have paging space >= 
real storage. But even there, we've seen >1TB LPARs with only a few 
hundred GB of paging space, including SCM. 

Of course it is also fairly common for those large memory systems to be running 
with large amounts of that memory being available. 

Scott Chapman


On Sat, 2 Nov 2019 12:30:25 -0500, Mike Schwab  wrote:

>Paging packs should be a minimum of 3X real memory.  1 to back main memory,
>then another copy of both for a system dump.
>
>On Sat, Nov 2, 2019, 08:17 Charles Mills  wrote:
>
>> Please forgive the basic question: I'm a product developer, not a sysprog.
>>
>> Two part question:
>>
>> 1. What command, panel or report would show the level of paging in a z/OS
>> system? Ideally I would like something that would show the instantaneous
>> level and some sort of "period" level (yesterday, last week, last month,
>> etc.).
>>
>> 2. What is a good value? (You have plenty of real memory for your level of
>> activity.) What is a bad value? ("Oh-oh. That's a lot of paging.")
>>
>> Thanks,
>>
>> Charles
>>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Can you update WLM via a batch program?

2019-09-17 Thread Scott Chapman
On Tue, 17 Sep 2019 11:53:35 +0100, Martin Packer  
wrote:

>Thanks!
>
>But I believe you can load WLM XML into the app - and also into z/OSMF.
>
>(Yes, I'll admit I only ever parse the XML (with PHP) - having fixed up 
>some breakage. But the point of unloading is reloading.)

Yes, the extracted XML is, a bit annoyingly, not quite valid XML: there are 
extraneous characters between the XML elements, and those seem to differ from 
site to site by the time the file gets downloaded, presumably due to codepage 
differences. 

But in principle, it seems that one could change the XML (presumably being 
careful with those extraneous characters) and then simply re-import the XML 
from the panels. Not fully ideal, but better than nothing. 

However, I don't know that the XML file is meant to be a programming interface, 
so I'm not sure whether I'd actually trust that. It seems like it should be a 
fine enough idea, but I'd check the results closely the first few times at 
least. And I'd trust it more if it were valid XML, which, at least from what 
I've seen by the time it gets downloaded, it's not quite.

Scott Chapman



Re: Using bpxbatch to compress an MVS dataset

2019-07-01 Thread Scott Chapman
On Sun, 30 Jun 2019 15:46:17 -0500, Paul Gilmartin  wrote:

>On Sun, 30 Jun 2019 20:34:19 +, Denis wrote:
>
>>I wonder if the jar command of the Java SDK on z/OS creates gzip compatible 
>>jar files, which in Windows can be extracted/used by renaming from .jar to 
>>.zip. I cannot remember if the jar compression uses zIIP. Any file type can be 
>>added to a jar, but I have not tried DD names.
>> 
>I believe jar uses zip deflate.  I've used it to unpack zipped files from 
>cbttape.org.
>I believe jar isn't fussy about filenames -- you can bypass the rename.
>Jar syntax is most similar to tar.

I've used jar to zip up files and transfer them to/from the mainframe. You do 
need to handle ASCII/EBCDIC translation on one side or the other. My 
recollection is that it does run at least partially on the zIIP. The command is 
something like:
jar -cfM mydata.jar mydata.txt

The "M" option says don't produce the manifest file, which you don't need if 
the goal is just to zip up the data. 

The zip file API in Java is not that difficult to use and combined with the 
JZOS library, it's not hard to write a little utility that will read/write from 
DDs instead of file system files. I did so at my past job, but alas the source 
for that was left with the previous employer. 
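For what it's worth, here is a minimal sketch of that idea using only the 
standard java.util.zip API, working against in-memory streams. The class and 
method names are hypothetical, and the DD-based variant would substitute JZOS 
record readers/writers, which are not shown:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class ZipSketch {
    // Compress a byte payload into a single-entry zip archive.
    public static byte[] zip(String entryName, byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(bos)) {
            zos.putNextEntry(new ZipEntry(entryName));
            zos.write(data);
            zos.closeEntry();
        }
        return bos.toByteArray();
    }

    // Read back the payload of the first (only) entry.
    public static byte[] unzip(byte[] zipped) throws IOException {
        try (ZipInputStream zis = new ZipInputStream(new ByteArrayInputStream(zipped))) {
            zis.getNextEntry();           // position at the first entry
            return zis.readAllBytes();    // reads until end of that entry
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = "some record data".getBytes(StandardCharsets.UTF_8);
        byte[] restored = unzip(zip("mydata.txt", payload));
        System.out.println(new String(restored, StandardCharsets.UTF_8));
    }
}
```

In a real utility, the byte arrays would be replaced by streams over DDs, and 
the EBCDIC/ASCII translation mentioned above would happen on one side of the 
zip or the other.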

Scott Chapman



Re: honorpriority=no in WLM

2019-06-20 Thread Scott Chapman
On Wed, 19 Jun 2019 13:34:09 -0500, Horst Sinram  wrote:

>The OP's question was about DB2 workloads. Resource group  capping for DB2 
>workloads would be pretty risky unless you could really guarantee that you do 
>not share resources with your production work.
>

Although I haven't counted them all, I maintain that there are more dev/test DB2 
subsystems in the world than production ones.



Re: honorpriority=no in WLM

2019-06-19 Thread Scott Chapman
>How about submitting a requirement to IBM that would  add a control to WLM
>This control would re-classify a ZIIP eligible  workload to a different
>service class if it spills over to a GCP because you are running your ZIIPS
>hot (or hit the "generosity factor" for DB2 work). This service class could
>have a lower importance/goal than the original service class. You could
>also restrict its CPU consumption using a resource group.

My understanding of how it works internally suggests that would be very 
difficult. When the system determines that the zIIPs need help, one or more 
GCPs simply select work from the zIIP queue as well as the GCP queue 
(although the GCP queue gets preference). So the system doesn't know what 
zIIP-eligible work is going to be dispatched on the GCP until it actually gets 
dispatched there, at which point it wouldn't make sense to reclassify it. 

>In other words, have a workload run at high priority on ZIIP, but limit it
>if it crosses over to GCP.

Along those lines, I've always wanted resource groups at the period level, so 
that once something has aged down into a penalty period it could also be 
constrained. 

>https://www.ibm.com/support/knowledgecenter/en/SSEPEK_11.0.0/perf/src/tpc/db2z_ibmziip.html
>
>IIPHONORPRIORITY parameter
>
>The IIPHONORPRIORITY parameter in the IEAOPTxx member of SYS1.PARMLIB
>determines whether processes are routed from zIIP specialty engines to
...
>I would be very very careful about setting IIPHONORPRIORITY to No in a Db2
>environment

Agreed: don't use IIPHONORPRIORITY=NO in IEAOPT if you're on Db2 11 or 12. 
HonorPriority at the service class level isn't mistreated by Db2 in the same 
way though, at least as of the last I knew. Using it on a service class basis 
to restrict your lower-importance zIIP-eligible work to the zIIPs, to help 
prevent that work from increasing the R4HA, is a reasonable strategy. Any time 
you restrict the capacity available to a workload you are potentially impacting 
performance, but sites make the choice to restrict available capacity to 
workloads to reduce their software costs all the time. 



Re: honorpriority=no in WLM

2019-06-18 Thread Scott Chapman
True, relative to the zIIP workload. But if that zIIP workload is relatively 
low importance and is crossing over to the GCPs and raising your R4HA, it may 
make sense to restrict the low importance work instead of letting the R4HA 
increase, depending on what your business requirements are. And keeping the low 
importance workload off the GCPs can be a good thing if the zIIP-eligible work 
is already driving the GCPs relatively busy.
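For readers less familiar with it: the R4HA is the rolling four-hour average of 
MSU consumption that sub-capacity software pricing keys on, which is why 
zIIP-eligible work spilling onto the GCPs can cost real money. A toy sketch of 
that rolling average, using hypothetical MSU figures and 48 five-minute samples 
per four hours:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class R4haSketch {
    private final Deque<Double> window = new ArrayDeque<>();
    private final int size;   // samples per four hours, e.g. 48 five-minute samples
    private double sum;

    public R4haSketch(int size) { this.size = size; }

    // Feed one interval's MSU figure; returns the current rolling average.
    public double add(double msu) {
        window.addLast(msu);
        sum += msu;
        if (window.size() > size) sum -= window.removeFirst();
        return sum / window.size();
    }

    public static void main(String[] args) {
        R4haSketch r4ha = new R4haSketch(48);
        double avg = 0;
        for (int i = 0; i < 48; i++) avg = r4ha.add(100);  // steady 100 MSU
        System.out.println(avg);                           // 100.0
        avg = r4ha.add(400);                               // one crossover spike
        System.out.println(avg);                           // 106.25
    }
}
```

The point of the sketch: a single interval of crossover lifts the average only 
a little, but it stays in the window (and on the bill) for the next four hours.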

Performance decisions are often dictated by financial concerns. More zIIP 
capacity is always good, but it costs money. And for some machines IBM won't 
sell you more zIIPs on that generation of machine, making them even more 
difficult to obtain. (I.E. machine replacements are not quick and simple 
solutions.) Lowering software costs to make the platform more cost-competitive 
is good, but that can cost performance. Unfortunately, the dollar increments 
that we deal with on the mainframe make such decisions more difficult than just 
"add another CPU" or "spin up another instance".



Re: SAP Processor Utilization

2019-06-05 Thread Scott Chapman
Might the theory be if R783IIPB is large relative to R893IIFS, maybe you have 
an issue?

According to this:
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.erbb200/erbb200209.htm

RMF calculates "Percent I/O processor busy" as (IIPB * 100) / (IIPB + IIPI)

Which would make sense if those are samples of some type. Looking across 
several customers, I do see that they tend to be about 240 samples per second, 
within about +/- 1%. So it does seem like they're samples of some type. 
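Restated as code, the documented RMF calculation is just busy samples over 
total samples (the sample counts below are made up for illustration):

```java
public class SapBusy {
    // RMF's "Percent I/O processor busy": busy samples over total samples.
    static double pctBusy(long iipb, long iipi) {
        return (iipb * 100.0) / (iipb + iipi);
    }

    public static void main(String[] args) {
        // Hypothetical 15-minute interval at ~240 samples/second: 216,000 samples.
        long busySamples = 10_800, idleSamples = 205_200;
        System.out.println(pctBusy(busySamples, idleSamples)); // 5.0 -> lightly used SAP
    }
}
```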

I see very few intervals with an I/O processor more than 10% busy in the 
customers I sampled though, which matches my historical understanding that most 
customers don't have an issue with SAP capacity.

Scott Chapman



Re: Effective of SMT with high zIIP usage

2019-03-14 Thread Scott Chapman
My recommendation is to leave SMT turned off until you have a defined need for 
it. Then when/if you turn it on, evaluate both application responsiveness as 
well as the standard SMT measurement values. And then generally keep an eye on 
those values over time.

With SMT enabled, understanding effective zIIP capacity becomes "problematic": 
zIIP CPU time measurements are "adjusted" to MT1 equivalent time (MT1ET), an 
estimate of what the CPU time would have been with SMT disabled. 

I'd always prefer to buy more real zIIP capacity vs. enabling some additional 
variable and hard to understand amount of capacity via SMT. 

We've seen cases where SMT was useful and we've seen cases where SMT was 
problematic. The majority of the cases are somewhere in-between and may not be 
worth the measurement issues. I'd say we see more sites with it disabled than 
enabled. 

But... if you need more zIIP capacity and can't immediately acquire more real 
zIIP capacity, it can be a very useful stop-gap measure. 

That may be the case for you, I don't know. Are you getting much cross-over to 
the GPs? Do you have large spikes of work units waiting for a zIIP? 

I have a presentation out there about SMT that explains the measurements and 
gives my recommendation for when you might want to try it. Go to 
https://www.pivotor.com and click on the "Free!" button then find and click 
through to our presentations. You probably can find it on a number of the 
conference web sites as I've presented it at the major conferences as well. 

Scott Chapman
Enterprise Performance Strategies


On Thu, 14 Mar 2019 15:08:36 +, Bill Bishop (TMNA)  
wrote:

>Anyone have any real-world experience with SMT?  
>
>We have very high zIIP usage, 70% to 80% across 3 zIIPs, right now and have 
>been asked to evaluate turning on SMT.
>
>One response was that with high zIIP usage, SMT might not be as effective as 
>could be, and the other response is that it will make more efficient use of 
>the zIIPS and allow them to drive higher.
>
>We are aware of the impact of zIIP overflow to CPs.
>
>Thanks
>
>Bill Bishop
>Consultant, Mainframe Engineer
>Mainframe and Scheduling | Infrastructure Technology Services 
>Toyota Motor North America
> bill.bis...@toyota.com
>Office:  (469) 292-5149
>Cell:  (502) 316-4386
>


Re: What tools are available for z/OS programs to access a z/Linux database?

2019-03-10 Thread Scott Chapman
Write the code in Java and use a JDBC Type 4 driver. Should be quite doable for 
most popular relational databases. 

If you have to get around an EBCDIC-to-ASCII issue, you might need to run the 
JVM with the file encoding flag set so that I/O defaults to an ASCII codepage 
(keeping the JDBC driver happy), and then specifically use an EBCDIC codepage 
when doing I/O on the mainframe side. 

If you need to come from another language it's probably going to get trickier. 
And there are options for calling between (for example) Java and COBOL, but 
again, trickier than just doing everything in Java. However, it might not be 
too bad if you can create one generic module (or even copybook) for doing I/O 
to the database and let it handle getting to/from the database, whether that be 
directly or indirectly through Java somehow. 

Note: you don't want to spin up a new JVM for every database call! So you might 
consider running the database calls through some sort of started task that sits 
there waiting for requests. That could be CICS, WebSphere, Tomcat, or even 
something you write yourself, although I suspect getting over to WebSphere via 
WOLA or CICS will be the most performant. 

Scott Chapman



Re: Internal Coupling Channel on z13

2019-01-30 Thread Scott Chapman
I haven't seen many single-CP boxes in general, and haven't seen one running 
both CFCC and z/OS on that single CP. My expectation is that this would perform 
poorly. Sync requests would be impossible, since PR/SM can't have both z/OS 
and the CFCC dispatched on the single CP at the same time, so all requests 
would have to be converted to async. 

My expectation is that just using CTCs would probably be faster. I would 
exercise caution in testing this though, and if it was my production system I 
probably wouldn't even try. But I'd certainly be curious about the results if 
you do try it. 

Note that my reluctance applies specifically to single-CP machines. If I had 
even two CPs, then I'd be much more willing to give it a shot, depending on 
current utilization levels and so forth. With dyndisp=thin of course. 

Scott Chapman

On Wed, 30 Jan 2019 00:10:26 -0600, Brian Westerman 
 wrote:

>Do you have any figures for how much "more" friendly the CPU usage is?
>
>This box is a single CPU, no ICF, zIIP, or zAAP.
>
>Brian
>


Re: Service class changes

2018-11-10 Thread Scott Chapman
Yup, it should be SMF30PF1; the "1" and "I" look pretty close at the font size 
I had up. 

Scott

On Fri, 9 Nov 2018 07:16:03 -0600, Elardus Engelbrecht 
 wrote:
>Sorry, but is it not SMF30PF1  (number one instead of letter i)?



Re: Service class changes

2018-11-09 Thread Scott Chapman
SMF30PFI includes flags indicating that the service class was changed either 
during or before execution.

Scott Chapman
Enterprise Performance Strategies


On Thu, 8 Nov 2018 17:59:19 +, Sankaranarayanan, Vignesh 
 wrote:

>Hello list,
>
>Apart from 99.2, can I find info on service class changes to jobs on any other 
>record type?
>



Re: I/O priority

2018-11-09 Thread Scott Chapman
Interesting you should mention that. 

Most systems we see do in fact have I/O priority management enabled. (Groups 
less so.) 

However, that will affect the computed velocity. And in some cases it can skew 
the velocity in ways that may be counter-productive. And the problems that I/O 
priority management solves are generally much less of a problem today than in 
the past. A long-standing recommendation has been to enable I/O priority 
management, but I'm now researching whether or not that recommendation is still 
the correct one. I'm leaning that is not (i.e. I/O priority management may be 
better left disabled), but I've been busy the past several weeks and haven't 
finished all the research I want to do.

So the short answer for now is that if you don't have it enabled, I wouldn't 
enable it today. If you have it enabled, I can't really say if it's problematic 
for you without looking at your data. If you do enable (or disable) I/O 
priority management you will need to re-evaluate your velocity goals. In most, 
but probably not all cases, I suspect the issues are relatively minor either 
way. In at least a few specific cases, having I/O priority management enabled 
may be leading to WLM not being able to effectively manage certain service 
classes. 

Note again: this is something I'm still researching and I hope/expect to 
present on the topic at the next SHARE. 

Scott Chapman
Enterprise Performance Strategies

On Fri, 9 Nov 2018 04:52:01 +, Sankaranarayanan, Vignesh 
 wrote:

>Hello everyone,
>
>Has anyone here had any experience with enabling I/O priority 
>groups/management in WLM.. ?
>I understand it has to be enabled in all systems connected to the DASD... what 
>are the gotchas or things to watch out for.
>



Re: CPU Consuption on ZIIP - SMF / RMF

2018-07-26 Thread Scott Chapman
The type 72s will give you zIIP utilization on a service class and report class 
basis. You can use the RMF post processor Workload Activity Report (WLMGL) to 
report on those records. 

The type 30s will give you utilization on an address space level. You'll need 
to use some other tool for reporting on those.

Scott Chapman



Re: Why are highly busy zIIPs worse than highly busy CPs?

2018-06-08 Thread Scott Chapman
This seems to come up a lot. 

I'm going to start by taking the opposite tack: you probably shouldn't run your 
GCPs at 90-100% busy either. Busier CPUs generally have more cache contention, 
which means the work is going to run "somewhat" less efficiently (i.e. more CPU 
time per unit of work) than if the CPUs were running less busy. Apparently some 
IBMer at some point measured that as a 4% increase in CPU time per unit of work 
for each 10% increase in overall utilization. Alas, I have no further details 
about the source of that number, but I believe it's at least directionally 
correct.
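Taking that rule of thumb at face value (and it is only hearsay, so treat the 
compounding model and the 70%-busy baseline below as assumptions), the effect 
compounds like this:

```java
public class CacheInflation {
    // Rule of thumb from the discussion: ~4% more CPU time per unit of work
    // for each additional 10% of overall utilization, compounded from a
    // chosen baseline utilization.
    static double relativeCpuPerUnit(double utilPct, double baselinePct) {
        return Math.pow(1.04, (utilPct - baselinePct) / 10.0);
    }

    public static void main(String[] args) {
        // Same work at 90% busy vs a 70%-busy baseline: ~8% more CPU per unit.
        System.out.printf("%.4f%n", relativeCpuPerUnit(90, 70)); // 1.0816
    }
}
```

So by this model, work that consumed 1000 MSU on a 70%-busy machine would 
consume roughly 1082 MSU to get the same work done at 90% busy.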

Now if you have a mix of different importances / priorities for the work that 
is driving the machine to 100% busy, then likely the most (but not only) 
impacted work is the lower importance work. So maybe that's ok. But, in all 
likelihood, if the machine had more capacity and was running at only say 70% 
busy, then likely the same work would consume fewer MSUs. Which may be a good 
thing. 

From purely a performance perspective, running less busy is always better as 
there's less chance for queueing for a processor. But rarely is "as fast as 
possible" the required and most cost effective answer. 

But this question is about zIIPs. zIIPs, though, are the same processors as 
GCPs, and the aforementioned discussion mostly applies: you can run the zIIPs 
busy, but things may not be as efficient. Of course "less efficient" matters a 
little less on the zIIPs, given that the hardware is cheaper and they don't 
increase software costs. 

The primary issue as I see it comes in where the zIIPs are running busier than 
the GCPs and so work is more delayed trying to get through the zIIP queue than 
if they had been just dispatched directly to the GCP. If that work is very 
important work (such as perhaps DB2 system tasks) then that could have 
relatively widespread negative impacts. Are those impacts greater than if there 
were no zIIPs in the configuration at all and the GCPs were running at a 
similarly busy level? Maybe a bit, due to the extra overhead of 
(unsuccessfully) trying to schedule work onto the zIIP. 

But I believe the potential for harm is very situation dependent. In particular 
with the way the rules are today, and with the mix of the different workloads 
that are zIIP eligible, if you have zIIP capacity (both in terms of MIPS and 
engines) greater or equal to your GCP capacity, I'm hard pressed to believe 
that there's a significant risk to running the zIIPs as busy as you're 
comfortable running your GCPs. But my belief is also that many are comfortable 
running their GCPs hotter than is really ideal. 

In Kathy Walsh's presentation from Orlando one of her slides has the statement: 
"Can run zIIPs very busy IF there are multiple classes of work with different 
response time objectives, but watch IIPCP time". I think that's a very 
reasonable statement. If you're starting to see significant crossover from the 
zIIPs to the GCPs, you're probably running the zIIPs too busy for that 
particular workload. Note that the crossover amounts are not necessarily 
well-correlated with zIIP busy, although generally when zIIPs are busy they are 
more likely to see significant crossover. 

A potential issue is that some systems only have a single zIIP. When you have a 
single CP, the threshold at which you'll start feeling pain is significantly 
lower than with even 2 CPs. Having a 710 with a single zIIP is a significantly 
different situation than having, say, a 505 with 4 zIIPs. I'm going to be a lot 
more concerned about zIIP utilization in the former than in the latter. In the 
former, that zIIP very well might become a bottleneck that could be more 
problematic than not having a zIIP at all. That seems much less likely in the 
second case. 
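The single-engine sensitivity can be made concrete with textbook queueing math. 
This is an M/M/c sketch, not a claim about how PR/SM or WLM actually dispatch 
work: at the same utilization, the probability that arriving work has to wait 
for an engine drops sharply as engines are added.

```java
public class ErlangC {
    // Erlang C: probability that an arriving request must queue, given c
    // servers each running at utilization rho (offered load a = c * rho).
    static double waitProb(int c, double rho) {
        double a = c * rho;
        double term = 1.0, sum = 1.0;          // k = 0 term of sum(a^k / k!)
        for (int k = 1; k < c; k++) {
            term *= a / k;                     // now a^k / k!
            sum += term;
        }
        double top = term * a / c / (1.0 - rho);   // (a^c / c!) / (1 - rho)
        return top / (sum + top);
    }

    public static void main(String[] args) {
        System.out.printf("1 engine  @ 70%% busy: P(wait) = %.2f%n", waitProb(1, 0.70)); // 0.70
        System.out.printf("4 engines @ 70%% busy: P(wait) = %.2f%n", waitProb(4, 0.70)); // ~0.43
    }
}
```

Under these (simplistic) assumptions, work arriving at a single 70%-busy zIIP 
queues 70% of the time; with four engines at the same per-engine utilization it 
queues well under half the time.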

My personal belief is that given the amount of work that's zIIP eligible today, 
most systems should have at least 2 zIIPs, and ideally the system should have 
zIIP capacity at least similar to the GCP capacity. Yes, there's a hardware 
cost to that, but in the grand scheme of things, the costs are not nearly as 
significant as GCP capacity, so err on the side of having too much zIIP 
capacity. That would be an interesting study: what's common ratio of zIIP 
capacity to GCP capacity? I suspect that that ratio has been creeping up over 
the years. 

Scott Chapman



Re: ZIIP engine utilization

2018-05-18 Thread Scott Chapman
Since you have at least the RMF post processor reports available to you, the 
CPU Activity report is a high-level place to start. In particular I would start 
at the Partition Data Report section. Look for the total physical utilization 
of the zIIP(s). 

That's not the only place you may see an indication of a problem. I.E. your 
zIIP (or your GPs for that matter) may be much less than 100% busy and you may 
still be having performance problems. There's lots of things to consider. 

One example is that the RMF interval length can hide spikes that may be causing 
problems. E.G. the 15 minute average utilization may be 40% busy, but that may 
be because it's 100% busy for a solid 5 minutes, and mostly unused for the 
remainder of the interval. Looking at the RMF III panels interactively can be 
useful to help see this. (And there's some other very useful SMF records as 
well.)
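A trivial worked version of that example, with hypothetical minute-by-minute 
busy figures:

```java
public class IntervalSmoothing {
    public static void main(String[] args) {
        // Hypothetical minute-by-minute zIIP busy: 5 minutes pegged, 10 near idle.
        double[] minuteBusy = new double[15];
        for (int i = 0; i < 5; i++)  minuteBusy[i] = 100.0;
        for (int i = 5; i < 15; i++) minuteBusy[i] = 10.0;

        double sum = 0;
        for (double b : minuteBusy) sum += b;
        // The 15-minute interval average reports 40% busy; the solid
        // 5-minute saturation (and whatever queued behind it) is invisible.
        System.out.println(sum / minuteBusy.length); // 40.0
    }
}
```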

Or maybe the LPAR weights are not correct for the workload and your business 
needs. 

Or maybe the WLM policy is not protecting/helping the work that really needs to 
be protected/helped. 

Or maybe it's not so much a zIIP capacity problem as it is a GP capacity issue. 

Regardless, we're always happy to do a free cursory performance review. We 
often find "interesting" things during those reviews. See: 
https://www.pivotor.com/cursoryReview.html 

Scott Chapman



Re: IRS Assembler Java Big Plus Jobs

2018-04-19 Thread Scott Chapman
Agreed, that would be interesting: same (significant and reasonable) Java 
workload on Linux on Intel, Windows on Intel, z/OS, and zLinux. And throw Linux 
on AWS in there as well.

I'm almost in a position where I could do most of those. I think I did some of 
those tests a few years ago, but I can't seem to find the results at the 
moment. My (vague) recollection is that there wasn't a whole lot of significant 
difference. But certainly technology has marched on in the intervening years. 

I will say AWS generally seems a little slower than running locally on my main 
Windows machine, primarily (I believe) due to I/O being slower. 

Scott Chapman

On Thu, 19 Apr 2018 12:13:38 +0800, David Crayford <dcrayf...@gmail.com> wrote:

>
>I've yet to see a Java workload run faster on z/OS then on x86. And our
>x86 servers are heavily virtualized using Hyper-V. Our zIIP runs at
>below 10% so there is plenty
>of capacity available. It would be interesting to compare the
>performance of Java running on z/OS vs zLinux.
>



Re: Graph database on z/OS?

2018-03-23 Thread Scott Chapman
The default encoding on z/OS occasionally causes problems. Particularly when 
doing network I/O. Adding option "-Dfile.encoding=ISO8859-1" in my experience 
takes care of those issues. Of course you have to deal with ASCII files then, 
but that's a minor issue.

Scott Chapman

On Thu, 22 Mar 2018 22:27:44 +, Graham Harris <harris...@gmail.com> wrote:

>Any pure java library (typically in the form of jar file[s]) should 'just
>work' on z/OS (i.e. no porting required).



Re: Interesting - NODE for z/OS

2018-02-23 Thread Scott Chapman
While ES6 does add some interesting features to the language, people did useful 
server-side JavaScript work well before ES6 came along. 

Java 8 contained Nashorn with ES5 support. 
Java 6 & 7 contained the Rhino JavaScript engine (although at what level of 
Javascript, I forget at the moment). You could also use the external Rhino Jars 
if you needed a version later than the embedded Rhino version.

I remember being surprised several years ago at how well JavaScript (using 
Rhino) ran on z/OS.

Scott Chapman 

On Fri, 23 Feb 2018 14:16:18 +0800, David Crayford <dcrayf...@gmail.com> wrote:

>Java 9 comes with an ES6 compliant Nashorn JavaScript engine
>https://www.oracle.com/corporate/features/nashorn-javascript-engine-jdk9.html.
>ES6 is essential as it
>turns JavaScript into a reasonable language. Those poor souls without a
>zIIP might be a bit gun shy of running Java though!
>


Re: SMF advice on additional logstreams

2018-02-09 Thread Scott Chapman
Remember when looking at SMF volume, record counts are interesting, but the 
bigger issue is the number of bytes written. 

We (Peter Enrico and myself) recommend collecting at least 99 subtypes 6, 10, 
11, 12, and 14. 

6 is especially important as it's the summary service class period information 
(every 10 seconds)
10 is dynamic processor speed changes, which you hopefully don't see
11 is for Group Capacity limits, and is written every 5 minutes
12 is HiperDispatch interval data, written every 2 seconds, which can show you 
utilization information on a 2-second basis; that can be quite interesting
14 is HiperDispatch topology data written every 5 minutes or when a change 
occurs

The 6s and 12s are in fact high volume in terms of the number of records, but 
the records themselves are relatively short. In terms of bytes, from what I've 
seen the subtype 6 is somewhere between 40 and 100 MB of additional SMF data 
per system per day. Subtype 12 seems to run around 40 to 50MB. I expect that's 
not noticeable in most environments. Indeed, the type 30s can easily be more 
data than that. Not to mention the 101s, 110s, and 120s. I actually have a 
slide on this in an upcoming conference presentation. 

The other 99 subtypes are used less often and some can be more voluminous than 
the 6 summary records. If you don't want to record those subtypes all the time, 
I'm ok with that. But OTOH, if you need them to do a deep dive on WLM to try to 
understand why things worked the way they did, then having them handy is better 
than having to turn them on and recreate the issue. We don't formally recommend 
people keep them enabled, but if it was me, I'd probably keep at least most of 
them enabled. 

The 92s are file system information. The subtypes 10 and 11 are written every 
time a file is opened/closed. In large Websphere Application Server 
environments I've seen these being very voluminous. I haven't looked at them 
lately, but my recollection from quite some time ago is that directory 
traversal (at least in the HFS file systems) triggered these records as well. 
I've seen the 92s in such an environment being much more voluminous than the 
99s. In that environment, I did have the 92s turned off because of this.

There are relatively new subtypes (at least 50-59) in the 92s; that may be why 
the OP is seeing more 92s. It looks like possibly useful information if you're 
tuning zFS performance, but I personally haven't spent any time yet 
investigating them. 


Scott Chapman


On Thu, 8 Feb 2018 16:17:47 +, Allan Staller <allan.stal...@hcl.com> wrote:

>Not sure about SMF92, but SMF99 are "WLM decision records".
>
>Yes they are large volume, but somewhat indispensable.
>
>Generally when there is a WLM problem it is extremely difficult or impossible 
>to reproduce.
>If the SMF99's are not available "during the problem" it is virtually 
>impossible to debug.
>
>IMO, SMF99's should be recorded.  I know Cheryl Watson and others may disagree.
>
>My US$ $0.02 worth,



Re: System z & Meltdown/Spectre

2018-01-11 Thread Scott Chapman
I agree: from a practical perspective, unauthorized user code running on a 
typical z/OS system has many more practical barriers to being able to use the 
side-channel communication methods involved in Meltdown/Spectre. However, 
"difficult" and "unlikely" don't mean "impossible"; and that's assuming, of 
course, that it's even possible at all. But because it seems that there are some patches 
coming to Linux on the platform, it seems like there's at least some 
possibility that recent hardware is at least somewhat susceptible. I suspect 
(but don't know) that the impacts on z, and z/OS in particular, are not as 
catastrophically bad as some other platforms. 

Having said all that, if there is a chance of it being exploited, it will have 
to be mitigated. The question comes from how much of an impact that mitigation 
is going to have and whether it's made optional or not. (Some of the Linux 
mitigations can be selectively disabled.) If it's optional, then it comes down 
to a risk assessment, and it's going to take much more information before sites 
can make a choice. If it's not optional (or if the risk assessment indicates 
many sites need to enable the mitigation) and if the impact is more than a 
small single digit impact for typical workloads, then... this will get really 
"interesting". 

I've personally experienced that "interesting" on our main AWS processing 
server, which was using the older PV virtualization. The impact was significant 
and definitely pushed us over the edge in dramatic fashion. Fortunately, moving 
to current-generation HVM instances has more than offset the performance 
impacts that we were seeing. Unfortunately that move was more than a shutdown 
and start back up and ran into more "interesting" AWS issues/limitations. It's 
been an "interesting" week. 

Hopefully this all will be less painful on z/OS.

Scott Chapman


On Wed, 10 Jan 2018 15:26:04 -0600, Tom Marchant <m42tom-ibmm...@yahoo.com> 
wrote:

>On Wed, 10 Jan 2018 21:44:29 +0100, R.S. wrote:
>
>>BTW: It's worth to remember chances the vulnerability would really
>>compromise system security are really small. (IMHO)
>
>I agree. Especially since the method of exploiting it involves flushing 
>cache and testing to see what memory location was reloaded into 
>cache. In a real system the amount of cache activity is too high for 
>the technique to be reliable. And the attacking task would use a lot 
>of CPU, making it unlikely that WLM would allow it to complete its 
>work without being interrupted frequently.
>
>-- 
>Tom Marchant
>



Re: SAS - DB2 conversion to Java

2017-11-30 Thread Scott Chapman
I'll have to defend SQL in general... despite the current popularity of NoSQL 
databases I've used SQL for going on 3 decades now for all sorts of things, 
including now SMF reporting. And PROC SQL was the primary way I (at least 
initially) survived getting data out of MXG's PDBs. You can do a whole lot in 
SQL. We have over 1000 reports that we've created in our product, all of which 
use SQL to retrieve the SMF data that has been loaded into a relational 
database.
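
As a small illustration of how far plain SQL goes for this kind of reporting, 
here is a toy sketch using Python's built-in sqlite3. The table and column 
names are invented for illustration only; they are not actual SMF field names 
or our product's schema.

```python
import sqlite3

# Toy stand-in for SMF data loaded into a relational database.
# Table/column names here are invented, not real SMF field names.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE smf70 (
    system TEXT, interval_start TEXT, cpu_busy_pct REAL)""")
conn.executemany(
    "INSERT INTO smf70 VALUES (?, ?, ?)",
    [("SYSA", "2017-11-30T09:00", 85.2),
     ("SYSA", "2017-11-30T09:15", 91.8),
     ("SYSB", "2017-11-30T09:00", 42.1),
     ("SYSB", "2017-11-30T09:15", 47.3)])

# A typical report: average and peak utilization per system.
rows = conn.execute("""
    SELECT system, ROUND(AVG(cpu_busy_pct), 1), MAX(cpu_busy_pct)
    FROM smf70
    GROUP BY system
    ORDER BY system""").fetchall()
```

The same GROUP BY/aggregate pattern carries a long way once the SMF data has 
been loaded into tables.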

However, I do agree that the straight mapping of "here's an SMF record segment 
as a record in a table" does leave something to be desired and likely will make 
it more difficult to build the necessary reports as there will likely be more 
work necessary than if the data was enriched. In a few cases, as you suggest, 
it may be almost (but probably not) impossible. That's (in part) why our 
extractor is a bit more sophisticated than just segment -> record and why we 
have a process to further rejoin, deduplicate, deaccumulate and generally 
enrich the records in the database before we begin reporting on the data. I'm 
also not sure how sophisticated the SQL supported by MDSS is. For example, 
support for temporary tables would make certain things easier. 
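
To give a feel for the "deaccumulate" step mentioned above: many SMF counters 
are cumulative across intervals, so reporting needs successive differences. 
Here is a minimal sketch (my own simplification, not our actual code):

```python
def deaccumulate(samples):
    """Turn cumulative interval counters into per-interval deltas.
    If a counter goes backwards (e.g. after a restart or counter
    reset), treat the new raw value as that interval's delta."""
    deltas = []
    prev = None
    for value in samples:
        if prev is None or value < prev:
            deltas.append(value)   # first sample, or reset detected
        else:
            deltas.append(value - prev)
        prev = value
    return deltas

# deaccumulate([100, 250, 400, 50]) -> [100, 150, 150, 50]
```

Real records need this per key (system, address space, etc.) and with more 
careful reset handling, but the shape of the transformation is the same.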

For small shops that only want a few reports, and have the time for playing 
with it, the Spark access to SMF is probably sufficient for doing that. In that 
situation you can potentially take some short cuts and make some assumptions 
that may not be valid in all installations. 

But I'm not convinced that that's the right answer for most SMF reporting 
needs. At least not for the performance data that I'm most familiar with. 
Especially when there are (multiple, low-cost) other alternatives that can do 
that in a zIIP-offloaded or completely offloaded fashion. The use case of 
grabbing certain event records in near realtime from the SMF buffers might be 
an interesting use case, for example. Of course that also envisions running 
those queries repeatedly on some relatively short interval basis, and I'm not 
sure what the implications of that might be. 

Obviously I might be slightly (although I hope not much) biased at this point, 
but I think the points are worth consideration. 

Scott Chapman
www.pivotor.com



On Wed, 29 Nov 2017 21:16:00 +1100, Andrew Rowley 
<and...@blackhillsoftware.com> wrote:

>On 29/11/2017 11:59 AM, Anthony Thompson wrote:
>> http://s3-us-west-1.amazonaws.com/watsonwalker/ww/wp-content/uploads/2016/03/28073651/Spark-and-SMF.pdf
>I have seen various presentations. My experience has been that SQL is
>very limiting when it comes to SMF reports. Most of the reports I do
>would be difficult or impossible, or would require multiple data passes
>to get the data using SQL.
>
>--
>Andrew Rowley
>and...@blackhillsoftware.com
>



Re: SAS - DB2 conversion to Java

2017-11-24 Thread Scott Chapman
Indeed, I don't think that's what the OP is (mostly) looking for, but Pivotor 
is written in Java and can store the extracted SMF data to DB2. The combination 
of which was part of my original attraction to the product. 

But it appears the OP wants a more general purpose SAS code replacement option. 

Scott Chapman
https://www.pivotor.com

On Thu, 23 Nov 2017 09:04:40 +, Martin Packer <martin_pac...@uk.ibm.com> 
wrote:

>When you say "planning to" I hope you really mean "beginning to consider".
>There are just so many ways this could turn out to be a bad idea.
>
>But there is at least one company I know of that makes SMF-through-java
>software.
>
>Cheers, Martin
>
>Martin Packer
>
>zChampion, Systems Investigator & Performance Troubleshooter, IBM



Re: UTF-8 woes on z/OS, a solution - comments invited

2017-09-05 Thread Scott Chapman
On Mon, 4 Sep 2017 21:02:29 -0500, Paul Gilmartin <paulgboul...@aim.com> wrote:

>(What does Java use internally?)
>
>-- gil

Currently Java does use UTF-16, but Java 9 will get a little smarter about 
that, storing strings as 1 byte/character ISO8859-1/Latin-1 where it can. 
http://openjdk.java.net/jeps/254
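
The saving JEP 254 targets is easy to see by comparing a 1-byte-per-character 
encoding to UTF-16. A quick sketch in Python (this only shows the size 
arithmetic; it says nothing about any particular JVM's internals):

```python
text = "Hello, IBM-MAIN"            # every character fits in Latin-1
latin1 = text.encode("latin-1")     # 1 byte per character
utf16 = text.encode("utf-16-be")    # 2 bytes per character (no BOM)
# For Latin-1-compatible strings, the compact form halves the storage.

# A string outside Latin-1 forces the two-byte representation:
try:
    "メインフレーム".encode("latin-1")
    compact = True
except UnicodeEncodeError:
    compact = False                 # the JVM would keep UTF-16 here
```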

The G1 garbage collector (which I believe will be the new default) will also 
get string deduplication:
http://openjdk.java.net/jeps/192

Since those are internal JVM things, whether they will make it into the IBM 
JVM I of course don't know. 

Scott Chapman



Re: dumb VSAM KSDS & AIX question.

2017-08-31 Thread Scott Chapman
On Wed, 30 Aug 2017 22:27:52 +0100, David W Noon  
wrote:

>> the keys. Yes, I know, if this sort of thing is a requirement, we need Db2
>> (or is it DB2), but that is _never_ going to happen around here.
>
>It is DB2. But that isn't really necessary.

Except they really did change the branding to Db2. 

http://www.ibmbigdatahub.com/blog/announcing-db2-family-hybrid-data-management-offerings

"The modernization of “Db2” with a capital “D” and lowercase “b” places all 
emphasis on ‘Data’ — your data. At the same time, the new design represents the 
elemental nature of Db2 (think periodic table) and connotes the fundamental 
importance of hybrid data management."

I think it may be some time before I retrain myself to write Db2 though...



Re: Why buy / rent z for small users was Re: z/OS 2.3 announcement

2017-07-18 Thread Scott Chapman
On Mon, 17 Jul 2017 16:44:10 -0300, Clark Morris <cfmpub...@ns.sympatico.ca> 
wrote:

>would it ever go to z?  Given the volatility of organizations these
>days, not having a good entry level offering seems to be long term
>suicide.

Exactly, there needs to be an on-ramp that starts at zero (or very nearly so). 
Specifically for z/OS. (Does Linux for Z survive without z/OS cash to help back 
new machine development?)

I know it happens on occasion, but "on occasion" doesn't seem like a long-term 
survival (let alone growth) strategy. It's too easy to get started on other 
platforms. Which also have way more mind share among the smaller/startup 
organizations. 

Scott Chapman



Re: AW: BPXWDYN and AVGREC usage

2017-04-30 Thread Scott Chapman
On Sat, 29 Apr 2017 11:17:43 -0500, Paul Gilmartin <paulgboul...@aim.com> wrote:

>On Sat, 29 Apr 2017 12:39:00 +0200, Peter Hunkeler wrote:
>>
>>AVGREC is a funny beast. it specifies the multiplier to be applied to the 
>>primary and secondary space figures. The resulting number is the number of 
>>records that must fit into the final space allocation. To calculate this 
>>value, the system needs the (average) record length. This value, however, is 
>>not taken neither from the LRECL, nor from the BLKSIZE. In JCL, it is the 
>>first sub-operand of the SPACE= specification.
>> 
>This is a nightmare.
>
>When they say "record" do they mean "block?  Usually?

I can't say for sure, but I believe (and it's how we always used it, going 
back to c. 1994 when we started) that the design goal for this support was to 
give the application programmer a way of specifying space that they 
understood. After all, they should inherently have an idea of how many records 
their job creates or consumes, because that has some sort of business 
connection that they understand. I.e., this job normally processes "x" 
accounts, each account has "y" records in this file, so we need a file that 
can hold about x*y records. 

If you have fixed-length records, you know the length of each record. But if 
you have variable length records, it's a little more variable. But the 
application programmer should have a decent idea of the average length of those 
records. 

From there you need to convert those numbers to some number of tracks, based 
on the block size and the average number of records per block (derived from 
the average record length). By using the AVGREC allocation method, the 
application 
programmer can just give the information to the system and let it figure it out 
instead of doing the math themselves. Which seems like it should make life 
easier for everybody.
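
The arithmetic the system does on the programmer's behalf looks roughly like 
this. The 3390 geometry here (27,998-byte half-track blocks, two per track) is 
a common blocking choice I'm assuming purely for illustration:

```python
import math

def tracks_needed(records, avg_reclen, blksize=27998, blocks_per_track=2):
    """Rough 3390 space estimate: tracks needed to hold `records`
    records of average length `avg_reclen`, assuming half-track
    blocking. Ignores keys, RDWs, and other per-record overhead."""
    recs_per_block = blksize // avg_reclen
    blocks = math.ceil(records / recs_per_block)
    return math.ceil(blocks / blocks_per_track)

# e.g. 100,000 accounts * 5 records each, averaging 120 bytes:
# tracks_needed(500_000, 120) -> 1073 tracks
```

AVGREC lets the programmer state just the record count and average length and 
leaves this conversion to the system.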

At least that seems like a good theory. In practice I found a distressing 
number of application programmers who didn't actually know the business 
numbers--i.e. that they were going to process x accounts each night. To me this 
seems to be a larger problem than just not being able to allocate space 
correctly. One might argue that if one is guessing at space allocations, it 
doesn't really matter what units one is guessing in. 

Over 20 years later, disk is drastically cheaper and larger and so the need to 
allocate the data set size correctly is probably reduced in many shops. But my 
guess is that most also don't want every space allocation to be 
SPACE=(CYL,(1000,1000)).

Having extensively used the AVGREC allocation method 20+ years ago I probably 
knew the answer to your questions at one time, but the only one I remember for 
sure is that it works fine with SMS. 

Scott Chapman



Re: SMS STORGRP question

2017-04-22 Thread Scott Chapman
My recollection is that we used to solve this sort of problem (only use these 
volumes if you absolutely have to) with what we called an "internal spill 
pool". Basically we added volumes to the storage group that were in status 
quiesce(new). If something appeared on those volumes we became interested in 
why. (E.G. it may be time to expand the storage group, or maybe it was an 
exceptionally large allocation.)

I don't recall the mechanism in SMS for the order that eligible storage groups 
are selected in, if there is one. I rather believe that all volumes in the 
eligible storage groups are added to the eligible list and evaluated the same 
regardless of which SG they're in. But I could be wrong about that. 

Scott Chapman



Re: UTC and Daylight Savings Time

2017-01-06 Thread Scott Chapman
On Thu, 5 Jan 2017 17:02:23 +, Bill Bishop (TMNA) <bill.bis...@toyota.com> 
wrote:

>Using STP, the local time changes automatically and I am not aware of any way 
>prevent it.  
>
>Is anyone aware of how to accomplish this?

My recollection was that you could disable the automatic adjustment of DST, and 
the Redbook supports that memory...

"If the selected time zone does not support automatic adjustment or if the user 
does not wish to use automatic adjustment of daylight saving time, select Set 
standard time or Set daylight saving time depending on what is in effect at the 
time that the change is made. "

See PDF page 93, indicated page 75 in: 
http://www.redbooks.ibm.com/redbooks/pdfs/sg247281.pdf

Scott Chapman



Re: Why Can't You Buy z Mainframe Services from Amazon Cloud Services?

2016-12-09 Thread Scott Chapman
> I would point out that the cost to provide z/OS services, or any computing 
> services for that matter, is greater than zero, especially but not only for 
> "real production business work." If you'd like to suggest that any company 
> price its set of products and associated services below cost, it wouldn't 
> shock me if that company disagrees with your suggestion. 

Starting at $0 doesn't mean that it stays at $0. And it may be largely 
marketing hyperbole, but it certainly catches your attention. 

But actually AWS offers a free tier for many of their services and you can in 
fact use them for whatever you want. The free tiers are limited in some way. 
For disk/CPU/memory this is a relatively small configuration free for 12 
months. Granted, you aren't going to get real heavy lifting done on the free 
tier, but the point is you have something to try and play with. Other services 
just give you the first x operations for free and only start charging after 
that. For example, Amazon SNS gives you 1 million publishes free per month. AWS 
Lambda gives you 1 million free requests per month and 400,000 GB-seconds of 
compute time per month. I have no idea how much useful work this really equates 
to but it's non-zero. 
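
Back-of-the-envelope, that Lambda allowance is a meaningful amount of compute. 
A sketch of the arithmetic (billing granularity and other pricing details 
deliberately ignored):

```python
free_gb_seconds = 400_000        # AWS Lambda free tier per month
mem_gb = 1.0                     # assume a 1 GB function
seconds = free_gb_seconds / mem_gb
hours = seconds / 3600           # roughly 111 hours of compute per month
```

At smaller memory sizes the same allowance stretches proportionally further.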

The real point here is that services like AWS give new businesses an easy on 
ramp with costs that scale pretty linearly from $0 in small, easily digested 
increments. Those pennies certainly add up to real significant dollars in the 
long run, but once you've built your architecture & infrastructure, you're 
less inclined to make a radical change to something else. 

A reasonable question might be how does an organization transition something 
that started in Linux in AWS to z/OS? Where is the point where that makes 
sense? Who would even know to consider that? What is the real entry cost and 
TCA for z/OS versus other options? What can be done in AWS that can't be done 
in z/OS easily, and vice-versa? These are all questions that hopefully IBM 
product/platform planners are seriously considering. 



Re: Why Can't You Buy z Mainframe Services from Amazon Cloud Services?

2016-12-08 Thread Scott Chapman
I don't see anything there that says one can do real production business work 
using z/OS, starting at $0. Or $500. Or really any amount. 

Would be happy to be shown otherwise.

On Thu, 8 Dec 2016 13:37:37 +0800, Timothy Sipples  wrote:

>Charles Mills asks:
>>Is there any good reason IBM could not offer Cloud Z starting at $0?
>
>IBM already does. See here for more information:
>
>http://millennialmainframer.com/2016/08/mainframe-free-stuff-2016-edition/
>
>
>Timothy Sipples
>IT Architect Executive, Industry Solutions, IBM z Systems, AP/GCG/MEA
>E-Mail: sipp...@sg.ibm.com
>



Re: Why Can't You Buy z Mainframe Services from Amazon Cloud Services?

2016-12-07 Thread Scott Chapman
On Tue, 6 Dec 2016 13:54:27 -0500, Steve  wrote:

>If you look at the sheer cost if setting up a zOS ecosystem, its not cheap.  

Yeah, that's the key point not mentioned in the article: building your system 
on AWS starts at $0. However... AWS costs can add up too. Most of their rates 
are in pennies, but those pennies do add up eventually. But the free tiers do 
help get customers hooked. So in total the idea of the article that z scales 
more cost effectively than AWS may be fair, but doing direct apples-to-apples 
comparisons is difficult. 

But until we can say z/OS starts at $0, and we make it as easy to try as AWS, 
getting new z/OS customers is going to be a struggle because new organizations 
are going to start in the cloud. 



Re: Watson

2016-11-07 Thread Scott Chapman
Since Alexa is mentioned...

http://www.theverge.com/2016/11/4/13525172/amazon-alexa-big-mouth-billy-bass-hack-api



Re: HV Common in SMF71.

2016-09-29 Thread Scott Chapman
These sound promising...

SMF71C1A "Average total number of high virtual common memory pages (in units of 
4 KB)."
SMF71CPA "Average number of high virtual common pages in-use" 

The latter is apparently new in z/OS 2.2. 

Min and max fields available too. 
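
Since those fields are in 4 KB units, converting them to something comparable 
to the D VS,HVCOMMON output is simple arithmetic:

```python
def pages_to_gib(pages, page_size_kib=4):
    """Convert an SMF71 page count (4 KiB units) to GiB."""
    return pages * page_size_kib / (1024 * 1024)

# The 66,193M allocated shown in the display corresponds to
# 66,193 * 256 = 16,945,408 four-KiB pages:
# pages_to_gib(16_945_408) -> ~64.6 GiB
```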

Scott


On Wed, 28 Sep 2016 11:31:03 +, Vernooij, Kees (ITOPT1) - KLM 
 wrote:

>Hello,
>
>I want to check from SMF71 if our HVCOMMON parameter is still sufficient or 
>has been close to its limits in the past.
>
>It can be displayed by:
>D VS,HVCOMMON
>IAR019I  13.24.41 DISPLAY VIRTSTOR
> SOURCE =  00
> TOTAL 64-BIT COMMON = 152G
> 64-BIT COMMON RANGE = 1896G-2048G
> 64-BIT COMMON ALLOCATED = 66193M
>
>In this display 66G of the 152G has been allocated. However, this figure is 
>not present in the numerous HV COMMON figures in SMF71, only the really used 
>pages backed in real storage, SCM storage and Aux stor. The problem will be 
>that DB2 won't start if the 66G is too close to the 152G, but I cannot track 
>how close these values have been in past.
>Am I overlooking something?
>
>Thanks,
>Kees.
>
>



Re: 2FA in the Real World

2016-09-14 Thread Scott Chapman
Their purchase of EMC just closed, so I guess Dell now also makes mainframe 
disk subsystems. Will be interesting to see what they do with that. 

Scott

On Tue, 13 Sep 2016 21:57:22 +0100, Vince Coen  wrote:

>Quest.
>
>Seem to recall some other m/f products as well.  Toad ?
>
>Vince
>
>
>On 13/09/16 18:31, Steve wrote:
>> NC-Pass was purchased some time ago by Dell, and I don't remember who wrote 
>> itt
>>
>>
>> Steve
>> -Original Message-
>> From: "Vince Coen" 
>> Sent: Tuesday, September 13, 2016 1:23pm
>> To: IBM-MAIN@LISTSERV.UA.EDU
>> Subject: Re: 2FA in the Real World
>>
>>
>>
>> For me the problem is that it is a Dell product.
>>
>> Previous experience with them just leave a bitter taste in the mouth and
>> one I have no intention of repeating.
>>
>> Vincent
>>
>>
>> On 13/09/16 17:49, Steve wrote:
>>> Is anyone in the real not government world using this product?
>>>
>>> [ https://software.dell.com/products/defender-mainframe-edition/ ]( 
>>> https://software.dell.com/products/defender-mainframe-edition/ )
>>>
>>>
>>> Steve
>



Re: Bypassing s322

2016-09-14 Thread Scott Chapman
You can't really bypass the system exits, but that doesn't mean that the exits 
might not include certain "secret" triggers that might allow you to specify a 
higher time value on the job card.  E.G. if the job is in this class and it's 
this time of day and this job name, then allow/set something higher. Talk to 
your friendly system programmer responsible for maintaining such controls. (If 
that's you because you've inherited a situation, you'll have to go do some 
research.)

In the distant past I remember using Omegamon to dynamically extend the limit 
of a running job. I don't remember the details at this point, but I think it 
was just adjusting the existing time limit, not doing something like taking it 
out of control of the exit or anything like that. 

Of course, when anybody came to me complaining about an S322, assuming it was 
already in one of the classes that allowed them to get the max we allowed of 1 
or 2 hours of CPU time, my first reaction was always something along the lines 
of "Are you sure you aren't looping? Are you sure you don't have a tuning 
opportunity that needs to be fixed?" An hour of CPU time is usually a whole lot 
of work.

Scott

On Tue, 13 Sep 2016 18:35:15 +0530, Peter  wrote:

>Hello
>
>I am running which is a long running job but it keeps abending with s322. I
>have used all the long running WLM initiators but still abends. I am not
>sure if IEFUTL exit is restricting it.
>
>The error message doesn't produce much information to diagnose.
>
>Is there a way to bypass any EXIT which might be timing out the Jobs ?
>
>Peter
>



Re: System Automation Question

2016-09-12 Thread Scott Chapman
Nice. I had thought there was a physical dongle involved. The lack of such 
certainly makes things much easier. 

On Sun, 11 Sep 2016 10:55:28 -0400, Scott Ford <idfzos...@gmail.com> wrote:

>zPDT versions
>
>On Sunday, September 11, 2016, Scott Chapman <scott.chap...@epstrategies.com>
>wrote:
>
>> On Sat, 10 Sep 2016 13:12:37 -0400, Scott Ford <idfzos...@gmail.com> wrote:
>>
>> >We have multiple z/OS images running in AWS ( Amazon ) and I have a
>>
>> Sorry, I just have to ask: how do you have z/OS running in AWS??
>>
>>
>



Re: System Automation Question

2016-09-11 Thread Scott Chapman
On Sat, 10 Sep 2016 13:12:37 -0400, Scott Ford  wrote:

>We have multiple z/OS images running in AWS ( Amazon ) and I have a

Sorry, I just have to ask: how do you have z/OS running in AWS??



Re: RD/z

2016-08-31 Thread Scott Chapman
On Wed, 31 Aug 2016 15:51:07 +0800, David Crayford  wrote:

>Is anybody outside of IBMs customers still using Eclipse? I was under
>the impression that IntelliJ was the dominant force
>in the fat IDE space now. And you get the same editor and user
>experience no matter what language you're editing -
>Java, JSP, HTML/CSS, Javascript, PHP, Scala, C/C++, Actionscript etc. It
>also indexes the world so navigating around complex code
>bases is seamless. Now that's progress...

Maybe a little off-topic but judging by the current questions on StackOverflow, 
I'm going to say yes, plenty of people are using Eclipse for a variety of 
projects and certainly not all of them can be IBM customers. Free is a strong 
competitive advantage. Personally I use both Eclipse and NetBeans, with a 
growing preference for NetBeans for most Java & web work, but there's a few 
things that Eclipse does better. I've thought about giving IntelliJ a try, but 
the free options generally work pretty well for me. 

Scott



Re: WLM Question

2016-08-27 Thread Scott Chapman
To extract the policy in an automated fashion, I believe you'll have to write 
some code--I don't believe there's a provided utility to do so.

You could use the IWMDEXTR macro to extract the current service definition from 
the WLM couple dataset. See: 
http://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.2.0/com.ibm.zos.v2r2.ieaw200/ieaw200108.htm
There is also an IWMDINST:
http://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.2.0/com.ibm.zos.v2r2.ieaw200/dinst.htm

In principle, it seems that z/OSMF should have a REST API for extracting and 
installing the policy, but I can't find it documented.

It may be also possible to automate driving the ISPF application. 

Note that I haven't personally tried any of these. 

Scott Chapman



Re: Resource Group Limits as a cost containment method?

2016-07-22 Thread Scott Chapman
Yes, resource groups might be able to help if you don't want to lower the DC 
itself. But it's very dependent on the situation. My impression from your 
discussion is that addressing the scheduling issues may be the best first step 
though.

Remember that it's an average over 4 hours. So if you're running at 100% for 
two hours and then 50% for the next two hours, smearing the work out so it runs 
more evenly at 75% for 4 hours doesn't change that 4 hour average. Well, at 
least from the MLC perspective. But you mention a chargeback system, so I'm 
not sure whether lowering the R4HA itself is the goal, or whether the real 
goal is lowering some other chargeback number. 
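
The rolling-average point is easy to demonstrate. Here's a small sketch using 
hourly figures (the real R4HA is computed on 5-minute intervals, but the shape 
of the calculation is the same):

```python
def peak_r4ha(hourly_msu):
    """Peak rolling 4-hour average over a list of hourly MSU figures."""
    windows = [hourly_msu[i:i + 4] for i in range(len(hourly_msu) - 3)]
    return max(sum(w) / 4 for w in windows)

peaky   = [100, 100, 50, 50]   # 100 MSU for two hours, then 50
smeared = [75, 75, 75, 75]     # same total work spread evenly
# Both profiles yield a peak 4-hour average of 75.
```

Smearing only helps the peak R4HA if it moves work out of the peak 4-hour 
window entirely, not if it just rearranges work within it.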

If the end goal is to lower the actual amount owed IBM, don't forget that a 10% 
reduction in the R4HA almost always equates to something less than a 10% cost 
reduction. And understand your ELA terms if you have one.

Scott


On Thu, 21 Jul 2016 09:59:10 -0500, Tim Hare  
wrote:

>There's a system where their 4-hour rolling average maximum is always during 
>their batch window, when they routinely reach their defined capacity limit,  
>which in turn affects what they pay in a chargeback system.
>
>Because of scheduling dumbness (another story) they have several gaps in their 
>batch window where nothing runs, giving some leeway for jobs to increase their 
>elapsed time - not the usual thing you want to do, but bear with us.
>
>Would it be worth investigating setting a resource group limit for the batch 
>service class(es) to hold the total 4HRA down as a cost-saving measure, or are 
>resource group limits going to cause more trouble than they are worth?
>



Re: Mirror/back up your Development DASD

2016-05-20 Thread Scott Chapman
In addition to the other reasons cited, one of my arguments was that I wasn't 
100.00% sure that the app team hadn't squirreled something away in a 
non-production storage group that really was needed for either running or 
fixing or recovering production. And the chance for human error sneaking in 
when you're setting up the list of volumes to replicate likely increases if the 
rule is "replicate only some of the volumes" vs. "replicate all volumes".

Scott



Re: Java problem

2016-05-08 Thread Scott Chapman
On Sat, 7 May 2016 11:25:40 -0400, Phil Smith III  wrote:

>P.S. Scott, the same command still failed after -help was working. Do you
>know what's wrong with it? Would love to grok this in fullness (well, "more
>completely" -- I know I'll never grok in fullness!)

>>/u/Java6_64/J6.0_64/bin/javac -J-Xmx64m help
>error: Class names, 'help', are only accepted if annotation processing is
>explicitly requested

Typo. It should have been a "-help" instead of "help". I was trying to suggest 
simply adding "-J-Xmx64m" to your original command that failed. The -J passes 
parameters to the JVM that javac runs in. In this case the parameter, 
-Xmx64m, limits the maximum heap size to 64M instead of the default, which in 
Java 6 is 512M; that default is probably why changing the memlimit to 512M 
allowed it to work. 

What's curious is that the 64-bit Java 5 apparently worked with memlimit=0. 
Perhaps Java 5 allocated heap below the bar even in 64-bit, if it could. Or 
perhaps you have an IEFUSI exit that changes things for memlimit=0 to allow 
some small (>=64M, <512M) amount anyways. 

Scott



Re: Java problem

2016-05-07 Thread Scott Chapman
If my google-fu is good, max heap size default on z/OS for IBM Java 5 was 64M 
and on Java 6 it was increased to "half of real storage with a maximum of 
512M". Min size also increased from 1M to 4M.

Just to make sure that it's the heap size change between 5 and 6, did you try 
"javac -J-Xmx64M help"?

If that works, it of course won't tell you why it won't run with the default, 
but it will at least verify the problem area.

Because the break occurs between 31- and 64-bit versions, I would suspect 
memlimit. 

Scott



Re: Storage Increment

2016-05-06 Thread Scott Chapman
The Technical Guide Redbooks for the relevant processor typically has this. 
Might be in the planning guide too. Looks like minimum granularity for the z13 
is 512MB.

Scott

On Thu, 5 May 2016 22:52:19 -0400, phil yogendran  wrote:

>Thanks but where is that documented? How can I tell beforehand that my z13
> model 123 will have an increments size of xyz?
>
>
>On Thu, May 5, 2016 at 4:54 PM, Jesse 1 Robinson 
>wrote:
>
>> Storage increment is determined by hardware. That's why you see a
>> difference on your existing boxes. We don't have a z13 in house yet, but
>> z196 and z12 both show 256 MB in D M=STOR.
>>
>> .
>> .
>> .
>> J.O.Skip Robinson
>> Southern California Edison Company
>> Electric Dragon Team Paddler
>> SHARE MVS Program Co-Manager
>> 323-715-0595 Mobile
>> 626-302-7535 Office
>> robin...@sce.com
>>
>> -Original Message-
>> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
>> Behalf Of phil yogendran
>> Sent: Thursday, May 05, 2016 11:49 AM
>> To: IBM-MAIN@LISTSERV.UA.EDU
>> Subject: (External):Storage Increment
>>
>> Hello,
>>
>> We currently have 3 processors where 2 have a storage increment size of
>> 256 and the 3rd has a storage increment size of 128 defined.
>>
>> We are going to be migrating to z13's shortly. My question is where do I
>> find what the correct storage increment sizes should be? I understand it is
>> specified in the activation profile but how does one determine what the
>> increment size should be? Is it based on the hardware model? we're going to
>> two z13-605 and one z13-404 processors.
>>
>> Thanks,
>>
>> Phil
>>
>>
>>
>



Re: WLM issue with a proposed solution

2016-04-29 Thread Scott Chapman
>If your batch jobs are running Dicretionary at a DP lower than CICS, it is 
>very 
>unlikely that they are causing significant CICS delays.

True from a CPU perspective. But the batch jobs could be locking resources in 
DB2 that are delaying the CICS transactions. And if the batch jobs holding 
those locks are progressing very slowly due to running in discretionary when 
there's little CPU available, the locks may persist for an extended period of 
time, elongating CICS transaction response time. 

Or I saw a similar situation once where some batch queries exhausted the RID 
pool, which caused sub-second CICS transactions to start taking over 60 
seconds. That's fortunately harder to do on the later versions of DB2. 

In short, while adjusting the goals very well may be in order, I'd be inclined 
to first look into the apparently unusually long running CICS transactions to 
identify why those particular transactions are taking a long time.

Scott



Re: WLM Resource Group Capping: Dispatching and un-dispatching

2016-03-19 Thread Scott Chapman
My understanding is that it’s somewhat simpler than cycling dispatchability 
flags off and on: as work comes to the front of the queue, if it’s subject to a 
resource group, it is checked to see if the current slice is a cap slice or an 
awake slice. If it’s an awake slice, it’s dispatched. If the current slice is a 
cap slice, the work unit is moved to the back of the queue (subject to priority 
order). 
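The check described above can be sketched as a toy model. This is purely illustrative of my understanding of the mechanism, not actual dispatcher logic; all names are invented:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of the dispatch check described above: during a "cap" slice,
// work subject to a resource group is skipped and cycled to the back of
// the queue; during an "awake" slice it dispatches normally.
public class CapSliceQueue {

    static final class WorkUnit {
        final String name;
        final boolean inResourceGroup;
        WorkUnit(String name, boolean inResourceGroup) {
            this.name = name;
            this.inResourceGroup = inResourceGroup;
        }
    }

    // Returns the next unit to dispatch, or null if everything is capped.
    static WorkUnit nextDispatchable(Deque<WorkUnit> queue, boolean capSlice) {
        int n = queue.size();
        for (int i = 0; i < n; i++) {
            WorkUnit head = queue.pollFirst();
            if (capSlice && head.inResourceGroup) {
                queue.addLast(head);  // not eligible this slice; recycle it
            } else {
                return head;          // dispatch
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Deque<WorkUnit> q = new ArrayDeque<>();
        q.add(new WorkUnit("CAPPED1", true));
        q.add(new WorkUnit("FREE1", false));
        System.out.println(nextDispatchable(q, true).name);   // FREE1
        System.out.println(nextDispatchable(q, false).name);  // CAPPED1
    }
}
```

The point of the model is that nothing is marked non-dispatchable; capped work simply never comes up for dispatch while a cap slice is in effect.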

Scott Chapman



Re: Does everybody use chargeback?

2016-03-09 Thread Scott Chapman
>Scott Chapman wrote:
>>Software billing is based on available/consumed capacity.
>
>IBM's is/are not. It's based on *peak* four hour rolling average
>utilization per month -- or, effectively, per subscription year for
>products that are not Monthly License Charge products.

Peak R4HA is a measure of consumed capacity; it just happens to end up being a 
subset of the utilization for the month. And very commonly there are multiple 
periods that reach that same peak value. 
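The peak R4HA is just the maximum of a rolling four-hour average of MSU consumption. A back-of-the-envelope sketch of the computation (the sample spacing and the handling of partial windows at the start are simplifying assumptions, not how SCRT does it):

```java
// Sketch: peak rolling four-hour average (R4HA) from interval MSU samples.
// Assumes equally spaced samples: samplesPerFourHours would be 48 for
// 5-minute RMF intervals, or 4 for hourly ones. Partial windows at the
// start are averaged over the samples available, a simplification.
public class PeakR4ha {
    static double peakR4ha(double[] msuSamples, int samplesPerFourHours) {
        double windowSum = 0, peak = 0;
        for (int i = 0; i < msuSamples.length; i++) {
            windowSum += msuSamples[i];
            if (i >= samplesPerFourHours) {
                windowSum -= msuSamples[i - samplesPerFourHours];
            }
            int n = Math.min(i + 1, samplesPerFourHours);
            peak = Math.max(peak, windowSum / n);
        }
        return peak;
    }

    public static void main(String[] args) {
        // Four hourly samples of 100 MSU surrounded by quieter hours:
        double[] day = {10, 10, 100, 100, 100, 100, 10, 10};
        System.out.println(peakR4ha(day, 4));  // 100.0
    }
}
```

Note how a sustained four-hour burst sets the peak, and any other period that sustains the same level produces the same peak value, which is why multiple intervals commonly tie.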

zOTC costs can be based on usage as well--with the appropriate subcap agreement 
in place, you only have to pre-purchase entitlement to match your peak R4HA 
too. But if you don't purchase enough entitlement to satisfy that, then you'll 
have to purchase the appropriate amount when/if the monthly peak R4HA is 
greater than your entitlement. My understanding is that that is billed at list 
price. Because of the R4HA variability, and the ability to negotiate better 
prices during zOTC product acquisition and/or the ELA process, it may often be 
better just to license zOTC for the installed capacity. 

ELAs complicate things of course, but they still come back to if you use more 
capacity and/or you have more capacity installed, you pay more.

Moving to a straight per-core charge that ignores the theoretical capacity of 
the core (which, as Ed Jaffe pointed out, is common on other platforms) is 
possibly what should be done. Or, where possible, tie the software costs to the 
business value--for example, a provider of print management software might base 
the charge on the number of printers. But when the charge is based on some 
measure of utilization, it occurs to me that perhaps that encourages one to 
find ways to not utilize the platform. 



Re: Does everybody use chargeback?

2016-03-08 Thread Scott Chapman
I believe that while chargeback is an important issue that SMT messes up, 
it's already somewhat messed up today because there's more variance from 
execution to execution. I.e., run the exact same job twice and, even absent 
SMT, you'll get different CPU measurements. That's always been the case, but 
the variation is much larger today than when I started measuring such things in 
some detail in the 90s. Add SMT into the mix and it gets even more 
"interesting". 

But I believe the bigger issue for SMT is the inability to determine the impact 
on available capacity within an acceptable tolerance. Software billing is based 
on available/consumed capacity. (Consumed capacity may just be a function of 
using x% of the available capacity.) So how would they set the MSU/MIPS rating of a 
machine with SMT enabled on the GCPs? IBM could be generous and simply say "SMT 
capacity is free", in other words it doesn't affect the MSU ratings. But 
whether all ISVs would go along with that point of view is a big question. And 
for those that don't want to go along with it, what would they do? 

But software is bigger money than hardware, so IBM may not want to be as 
generous as giving away approximately 0 to 30% of their software revenue. In 
that case, the calculations for the R4HA gets even more complicated. Although I 
believe RMF contains the numbers that could be used for doing that, the basis 
for some of the numbers have not been fully explained (to my knowledge--it's 
also possible my brain hasn't fully understood the explanations that are 
available). I believe they're coming out of CPU-MF but I don't think they're 
externalized in the 113s. 

Even absent the chargeback and software cost issues, how do you do capacity 
planning with that level of variability? How do you do performance testing? Of 
course the other platforms that have this sort of technology seem to largely 
just say something like "oh, we just buy a 4-, 8- or 16-way machine"--but that's 
possible because the hardware and software costs come in smaller increments. 
I'm not sure we can make that work in the current z world. 

But this sort of technology seems likely to be part of the way forward for 
increasing CPU capacity, so I suspect that we (the z community) will have to 
address the issues eventually. 



Re: Introducing the New z13s: Tim's Hardware Highlights

2016-02-19 Thread Scott Chapman
On Fri, 19 Feb 2016 18:16:53 +1100, Andrew Rowley 
 wrote:

>Memory leaks are not a usual case, but I would suggest you will still
>want to garbage collect.
>
>I'm not arguing against large memory - I am all in favour of as much as
>you can afford. It's just the suggestion that avoiding Java GC is a good
>idea.
>
>I believe that Java is one of the keys to the future of z/OS.
>Suggestions like "allocate enough memory that you don't have to GC"
>contribute to the view that Java is a memory hog and performs poorly.
>That is damaging to Java on z/OS and as a result damaging to z/OS as a
>whole.

I agree--I echo Martin's point that you should make sure GC isn't getting in 
your way, and your point that small working sets can be advantageous. 

But I'm also for loosening up IEFUSI limits. If somebody is trying to force all 
of their Java batch to run in 32MB heaps, well... my guess would be that 
loosening that up to 64-128MB could make a significant change. And when you 
have 64GB, you have lots of room to run a lot of Java batch with 128MB heaps. 
Unfortunately it's not possible to predict optimal heap requirements: you have 
to actually test. But my recommendation (for batch) is give 'em 128MB to start 
and only investigate in detail if you really need to. I would not be in favor 
of starting every Java batch job off at a GB or more of heap--my guess is that 
I could put that memory to better use as DB2 buffer pools or something like 
that. 

I'm also all for Java on z/OS. Just make sure you have a zIIP (or preferably 
more) to support it.

Scott



Re: Introducing the New z13s: Tim's Hardware Highlights

2016-02-19 Thread Scott Chapman
On Fri, 19 Feb 2016 08:56:17 +, Martin Packer  
wrote:

>And if you think that's bad try making your favourite slide or email
>editor keep the "z" lower case. Permanent nightmare. :-)

Amen. But the Ctrl-z every time after you type it reinforces what platform 
you're writing about. :)

Scott



Re: Java (was: DFSORT - SMF Records - GMT To EST)

2016-02-07 Thread Scott Chapman
On Sat, 6 Feb 2016 16:02:52 -0600, Kirk Wolf  wrote:

>I doubt that there is a significant difference in CPU resources between
>running the JVM in JZOS vs BPXBATC**.

I was surprised too.

>Perhaps the differences that you are seeing have to do with not measuring
>all of the address spaces?

That's a very good point, but I believe I got them all, as the original point 
of the exercise was to find all the type 30 records that corresponded to the 
activity. And in the second case, where some amount of real work was happening, 
if I somehow missed a record I would be surprised if it accounted for as much 
as ~10 CPU seconds. 

Unfortunately it appears I don't have the original data collected from the 
experiments as I'm now kind of curious about the recorded elapsed time too.

>For JZOS, the JVM will be in the same address space as the JES2 initiator.
>For BPXBATCH, it will be in a forked OMVS address space.

BPXBATSL uses a local spawn. Also, it appeared in my testing that JZOS did 
trigger a second, very short-running address space, but there was almost no CPU 
time recorded there (0.02s).

The difference is within normal variation that one can see today in measured 
CPU time. Or at least that I could in that particular configuration at the 
time, at least for Java workloads. But I was also very well aware of that, and 
the presented figures were averages over multiple runs (5, IIRC). Of course it 
is possible that of those executions, the JZOS ones were on the high side of 
the variability range and the BPXBAT* ones were on the low side of the range. 
Seems unlikely, but certainly possible. 

Actually, rereading my CMG paper that came from this I see that I did do a 
larger comparison between BPXBATSL and JZOS and found BPXBATSL to exhibit less 
variation.  "I ran one final test comparing BPXBATSL to JZOS and found that 
BPXBATSL jobs seemed to exhibit only about half as much variability. 
Unfortunately, JZOS is so much more convenient to use that it’s hard to 
recommend using BPXBATSL instead and the reduced variability is still quite 
significant."

All of which was puzzling, but that's what the measurements showed. Remember of 
course this was some time ago: z10 hardware, Java 6, z/OS 1.10. Past results 
may not be indicative of future performance. :)



Re: Java (was: DFSORT - SMF Records - GMT To EST)

2016-02-05 Thread Scott Chapman
On Fri, 5 Feb 2016 11:41:52 +1100, Andrew Rowley <and...@blackhillsoftware.com> 
wrote:

>I am using JZOS to run Java as a batch job, and these are my tests for
>general processing of SMF data rather than time zone conversion
>specifically. It wouldn't surprise me if the batch job is better than
>running under the shell.

Tests I did a few years ago seemed to indicate that there was some additional 
overhead from running under JZOS vs. BPXBAT*

Workload #1:
           Average CPU secs (multiple runs)
           zAAP   GCP
BPXBATCH   0.50   0.20
JZOS       0.73   0.12
BPXBATSL   0.52   0.14

I figured that maybe that was just a minor startup difference, but surprisingly 
a much longer workload followed the same pattern:

Workload #2:
           Average CPU secs (multiple runs)
           zAAP     GCP
BPXBATCH   141.72   0.52
JZOS       153.49   0.39
BPXBATSL   142.09   0.45

But the JZOS launcher is more convenient, and unless you're very sensitive 
about the consumed zAAP (now likely zIIP) time, the difference probably doesn't 
matter. 

This was under Java 6, but I don't recall what the exact processor model was. 
My guess is that it was a z10 5xx.

Interestingly, IBM Java 7 seemed to add a little additional overhead. That was 
somewhat expected for short-running tasks, but it seemed to be there for 
long-running started tasks too, which was unexpected. It was in the single 
digit percentage range, but it was consistent across multiple different 
workloads. I never did get that difference understood to my satisfaction.

Scott Chapman



Re: RMF monitor III panel restriction

2016-02-03 Thread Scott Chapman
The RMF Distributed Data Portal emits XML, so you can write your own app to 
issue requests to it and interpret the results--perhaps building friendlier 
HTML pages from a subset of values that are of interest to the target audience. 
You could either do that all in the browser (relatively easy) or do it server 
side, extracting the relevant data once instead of once per user (more 
complicated). I've done both. It is some amount of effort, and it doesn't mean 
that you've necessarily secured RMF, but at least you've given your target 
audience some information while making it less obvious how to get to 
potentially more confusing information.



Re: Any Git compliant Client for OMVS

2015-12-19 Thread Scott Chapman
I can't offer any specific help, but did you try running jgit with 
"-Dfile.encoding=ISO8859-1"? I think every open source Java package I've tried 
running on the mainframe has just worked when run like that. Of course all its 
I/O is then in ASCII, so all the files it needs to read from the zFS need to 
be in ASCII, but that's usually not too big of a deal--ISPF can edit ASCII 
OK, you just need to tell it that it's ASCII. (Although at the moment I can't 
remember exactly where/how that's done.) 
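The reason the flag matters is that the same byte means different characters in EBCDIC and ISO-8859-1, so a file in the "wrong" encoding decodes to garbage. A small demonstration (the byte value is a known fact: 0xC1 is 'A' in EBCDIC, 'Á' in ISO-8859-1):

```java
import java.nio.charset.StandardCharsets;

// Demonstrates the EBCDIC vs. ASCII mismatch behind -Dfile.encoding:
// byte 0xC1 is 'A' in EBCDIC (e.g. IBM-1047) but 'Á' (U+00C1) in
// ISO-8859-1, so decoding EBCDIC data as Latin-1 garbles it.
public class EncodingDemo {
    public static void main(String[] args) {
        byte[] ebcdicA = { (byte) 0xC1 };  // EBCDIC 'A'
        String asLatin1 = new String(ebcdicA, StandardCharsets.ISO_8859_1);
        System.out.println(asLatin1);       // 'Á' (U+00C1), not 'A'
        System.out.println("Default file.encoding: "
                + System.getProperty("file.encoding"));
    }
}
```

So with -Dfile.encoding=ISO8859-1, everything the tool reads and writes has to agree on ASCII, which is exactly why the zFS files need converting.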

Scott Chapman

On Fri, 18 Dec 2015 01:46:04 -0600, Munif Sadek <munif.sa...@gmail.com> wrote:

>Dear Listers
>
>I am trying to find any Git client to run under OMVS (z/OS 2.1) to automate 
>couple of Java objects deployment. I have tried 
>https://eclipse.org/jgit/download/ but not able to make it work.
>
>regards 
>Munif
>



Re: SMF/RMF Reporting question

2015-11-21 Thread Scott Chapman
As others have said, there are multiple ways to go look at what was using the 
CPU during a particular interval, depending on what tools you have access to 
and what the system configuration is. To recap:

SMF 30 interval (subtype 2 & 3) records will show CPU utilization by interval 
by address space.
SMF/RMF 72 subtype 3 records will have utilization by service class and report 
class. Depending on how WLM is configured the report classes may be fairly 
granular or may be almost non-existent. 
RMF III's PROCU panel can be very handy for looking at processor utilization on 
a more granular timeframe in the recent past. (Assuming it is so configured.)
Other system monitors such as SysView (which I know nothing of) or Omegamon 
usually have history recording capabilities that can be leveraged.

However, I would submit that perhaps the more interesting question in this case 
is: "was the service class that the affected work was running in significantly 
delayed for some reason?" RMF III will again give you some insight into this 
via the DELAY panel. You have to be a little careful with that as there's no 
clear indication of the number of samples. (Although this tends to be a bigger 
issue on the enclave panel, because some enclaves tend to come and go rather 
quickly.)

From an SMF perspective, the 72.3 records include details about the delay 
samples on a service class / report class basis. 

Finally, you mention that you know the code is good because it hasn't changed. 
But remember that the data may have changed over time; in particular the data 
volume may have changed. I have seen instances where the volume grew to a point 
that performance changed noticeably and relatively suddenly--because an extra 
level was added to an index or the data had grown disorganized or large enough 
that it crossed some threshold and the database optimizer started using a 
different access path. I'm not saying this is the case here, I just mention it 
as something to keep in mind. 

Scott



Re: A New Perfromance Model ?

2015-04-09 Thread Scott Chapman
That is a very important point. I have been involved in more than one upgrade 
(or lack thereof) where we chose a less than technically ideal configuration 
because the MSU/MIPS of said configuration were somehow more favorable. 

Once though, the optimal technical and software cost solutions did align. 
Strange but true. 

In my experience, when adding capacity, the software cost is the biggest 
component. And ISV one-time charges are often the largest component of that. 
The most galling part is when the software that's the biggest cost has nothing 
to do with the reason you're adding capacity and is not going to be used any 
more or less after the upgrade.

On Wed, 8 Apr 2015 22:11:05 +, J O Skip Robinson jo.skip.robin...@sce.com 
wrote:

I have not mined this thread meticulously, but I did not see mention of 
software costs. If you upgrade your CEC, the ISV (V for vulture) folks will 
descend upon you as in the Hitchcock movie and peck your corpse clean to the 
bone. IBM will be there too with beak in motion. 

The software costs of a hardware upgrade can be stunning, especially if the 
bean counters budgeted only the hardware portion. Those additional costs live 
on forever because the annual maintenance fees go up as well. 

Which is to say that no matter how cheaply memory or storage can be obtained, 
the cost of additional MIPS looms larger than it appears in the mirror.  

.
.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
626-302-7535 Office
323-715-0595 Mobile
jo.skip.robin...@sce.com

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
Behalf Of Roger W. Suhr (GMail)
Sent: Wednesday, April 08, 2015 2:30 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: A New Perfromance Model ?

I think this nailed it!  Clueless is also correct.  It did start back in the 
90's with 'disk space is cheap', then it went to 'memory is cheap', and now 
it's 'MIPS is a commodity'--so is the manpower to maintain all that 
stuff.
It's all in the CLOUD now anyway - who cares!

Roger

On 4/8/2015 4:11 PM, Dave Barry wrote:
 In the old paradigm, technology was managed by technologists.  In the new 
 paradigm, technology is managed by accountants.  Computer hardware and labor 
 costs wind up on different lines of the general ledger.  They have different 
 budgetary constraints and are treated differently for tax purposes depending 
 on whether they are capitalized or expensed.

 I've heard the new regime refer to MIPS as a commodity.  Talk about 
 clueless...!

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] 
 On Behalf Of esst...@juno.com
 Sent: Saturday, April 04, 2015 4:12 PM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: A New Perfromance Model ?

 .
 Can someone explain and rationalize for this new paradyne ?
 .
 cheaper to Upgrade the mainfame than to have the application programmers 
 review their code for performance oppurtunities.

 .
 Im clueless .  ??




Re: Unload DB2 table in CSV format

2015-02-24 Thread Scott Chapman
I believe the DB2 Unload utility will also allow you to create an extract in a 
delimited form. Seems like there were some idiosyncrasies with it, but I don't 
recall the details at the moment. But you might start here:

http://www-01.ibm.com/support/knowledgecenter/SSEPEK_10.0.0/com.ibm.db2z10.doc.ugref/src/tpc/db2z_unloadsamples.dita

Scott



Re: Consolidated LPAR monitoring solution

2015-01-28 Thread Scott Chapman
On Tue, 27 Jan 2015 14:24:01 +, Staller, Allan allan.stal...@kbmg.com 
wrote:
 Check out the GPMSERVE function of RMF (no additional $$$) 

I did exactly that at AEP, merging the data from multiple sysplexes into a 
single 'one pane of glass' view. Everything passed between the LPARs/sysplexes 
over TCP/IP, with the data stored in a Derby database. The vast majority of the 
process ran inside a JVM, so the GCP impact was minimal.

I was pretty happy with it. Alas, now that I've moved on, that code seems 
destined to die. (I tried to get them to release it as open source so I could 
continue to support it for them, but they weren't interested in that.) I did 
write a CMG paper in 2007 about an earlier, more limited version of the 
process.

Scott



Re: z13 unanswered question.

2015-01-16 Thread Scott Chapman
On Fri, 16 Jan 2015 19:27:50 +0800, Timothy Sipples sipp...@sg.ibm.com wrote:

Kees Vernooij wrote:
It again looks like small customers are becoming less and less
interesting for IBM.

Your guess is 180 degrees wrong.

I hope you're right. I believe there's a good number of sub-100 MSU customers 
out there. And software costs per unit of capacity are highest for those 
customers. Dealing with that situation would be a good step toward addressing 
those customers' concerns and keeping them in the fold. (Important since the small 
customers are probably most likely to be able to complete a migration to a new 
platform.)

Hopefully IBM releases 1xx, 2xx, and 3xx capacity settings in the not 
too-distant future. Perhaps they didn't announce them immediately because they 
expect to need their manufacturing capacity to satisfy the demand for larger 
capacity (and larger profit) machines. 

Scott



Re: DB2 query estimator (was: How Does Your Shop Limit Testing in the Production LPAR)

2015-01-11 Thread Scott Chapman
On Sat, 10 Jan 2015 14:40:19 +0100, Bernd Oppolzer bernd.oppol...@t-online.de 
wrote:

for batch programs, a table space scan on large tables may well be
the best access strategy, if the related SQL is the overall cursor
controlling
the batch program, and if large portions of the table is used. So you

That is a very good point. I don't remember what we did with those 
exceptions--I want to say there was a separate exception table, but I don't 
recall the particulars now.

Scott



Re: DB2 query estimator (was: How Does Your Shop Limit Testing in the Production LPAR)

2015-01-10 Thread Scott Chapman
On Sat, 10 Jan 2015 01:44:35 +0100, Bernd Oppolzer bernd.oppol...@t-online.de 
wrote:

It is normal practice at the shops I work to do EXPLAIN regularly on all
programs that go into production and to store the PLAN TABLE results
for later trouble shooting ... if there is trouble. The developers at
our sites

Probably 20 years ago one of our DBAs added a step to the migration process 
that checked the plan table in the QA environment, looking for certain obvious 
problems. For example a tablespace scan on a table larger than x. Such packages 
were flagged and migration to production halted, IIRC. It didn't catch 
everything, but it was helpful.

Scott



Re: How Does Your Shop Limit Testing in the Production LPAR

2015-01-10 Thread Scott Chapman
On Fri, 9 Jan 2015 09:58:03 -0500, Ted MacNEIL eamacn...@yahoo.ca wrote:

I'm one of those types.
The governor pretty well guarantees a re-submission.
Which means twice the resources (or more) spent to do nothing!

How can they debug/tune something if we don't let complete?

That does seem to be a prevailing thought. And it's not an inherently bad one. 
To me the decision to use the governor in production is a matter of risk 
management. 

I should have mentioned that the governor limit in production should likely be 
very high. My point with it is to stop those things that individual users have 
submitted that have run for hours. How many hours is acceptable? That depends 
on the shop and your philosophy for how many hours you're willing to let 
something run that either may never complete or may not have a user sitting 
behind it waiting anymore.

I have seen queries run all weekend. I've seen queries run all night. Such 
cases can have an impact on the R4H. As well as other work that needs the DB2 
resources it's consuming. I'm not in favor of allowing things that aren't 
pre-approved to run for days, or even a substantial portion of a day.

Note that the governor is controllable by authid. IIRC, you can have an overall 
default limit and specific limits for specific ids. So if you have that one 
process that really needs to run a 36 hour query, then you could make sure that 
they're running in their own process id and give that id a limit that allows 
such.

Best of course is to have your DBAs actively watching for DDF queries that have 
been on the system for more than an hour (or two or whatever) and have them 
investigate them to see if there really still is a user waiting at the other 
end and if the query has a chance of completing. But such a manual process will 
not catch everything.

Of course, having an IDAA in the mix can help avoid utilization on z/OS as well 
as improve response time to the business users. But that's not free, nor 
something you can turn on in a day. 

Scott



Re: How Does Your Shop Limit Testing in the Production LPAR

2015-01-09 Thread Scott Chapman
On Thu, 8 Jan 2015 14:38:40 +, Martin Packer martin_pac...@uk.ibm.com 
wrote:

Here your multitenancy concerns seem to be at the DB2 subsystem level. I
don't think many would advocate sharing a DB2 subsystem between Prod and
Test.

Nor would I, but it happens. Especially for off-platform things coming up via 
DDF. Oh, there's a test system?

Also as business units get more functional capability to solve their own 
problems by either writing their own queries or employing clever software that 
does such things for them, we see more iterative testing work coming from 
business users. It is hard to determine if a real business user is doing 
important productive work today or is simply trying out a new idea. (Which is 
of course also quite possibly important work as well.) And some of those 
initial iterations can be pretty terrible queries and after the first 5 minutes 
the user cancelled it from their end but DB2 doesn't know that and keeps on 
chugging for hours or days. Having the final period in your DDF service class 
be discretionary or essentially so can be a very good thing, but as pointed out 
it doesn't always solve all of the issues. Enabling DB2's governor is a good 
idea as well.

Scott



Re: MIPS, CEC, LPARs, PRISM, WLM

2014-12-18 Thread Scott Chapman
You've had some great responses. I have seen this exact situation in the past.

As others have said, I'd start by looking at the relative percentage of its 
share the production LPAR is normally consuming. For example, if the prod LPAR 
has a share equal to 60% of the box but is normally consuming 80%, it's 
normally using 133% of its share. If the other LPARs demand their 40%, it's 
going to be restricted to its 60%, which in this example is a 25% reduction 
from what it normally gets. Certainly enough to be noticeable, and it would 
explain the reported symptoms. 
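The weight arithmetic above, as a worked example (method names are mine, purely for illustration):

```java
// Worked example of the LPAR weight arithmetic: an LPAR entitled to 60%
// of the box but normally consuming 80% is using 133% of its share;
// squeezed back to its entitlement, it loses 25% of what it had been
// getting (1 - 60/80).
public class LparShareMath {
    static double percentOfShareUsed(double sharePct, double usedPct) {
        return usedPct / sharePct * 100.0;
    }
    static double reductionWhenSqueezed(double sharePct, double usedPct) {
        return (usedPct - sharePct) / usedPct * 100.0;
    }
    public static void main(String[] args) {
        System.out.printf("Using %.0f%% of share%n",
                percentOfShareUsed(60, 80));
        System.out.printf("Reduction if squeezed: %.0f%%%n",
                reductionWhenSqueezed(60, 80));
    }
}
```

Running the same arithmetic against your own weights and typical utilization gives a quick feel for how exposed the production LPAR is when the other LPARs start demanding their full shares.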

If soft caps are in place then the situation gets more complicated and you have 
to look at the R4H utilization over time. My guess from the described symptoms 
that's probably not the situation here though.

Some remediation possibilities depending on the situation and the business 
requirements for the workloads in question...

-- Fix the weights so that the production LPAR's weight assignment meets its 
normal requirement.

-- If the problem is occurring when soft capping, and the dev LPARs R4H is 
driving up the group's R4H, causing the group (including prod) to be capped, 
consider adding a defined cap limit for the dev LPAR to limit the amount of the 
group capacity it can consume. An LPAR can have a defined cap limit as well as 
belong to a capacity group.

-- Since these seem to happen regularly and truly are bad SQLs, and in dev, 
consider using DB2's governor in the dev region to cancel threads that have 
consumed some significant amount of resources.

-- Consider putting the DEV DDF work into a WLM resource group to limit the 
amount of CPU that work can consume.

Note I always recommend having multiple periods for DDF with the last period 
being a very low importance (likely discretionary for non-prod work) so that 
those bad SQLs that will eventually show up on your system and consume 
excessive amounts of resources will have very limited impact on other workloads 
on the system. While that's a good practice, it won't stop dev DDF work from 
consuming the dev LPAR's entire (essentially) assigned capacity. That's where 
the resource group may be useful. 

Scott Chapman

On Wed, 17 Dec 2014 23:40:18 -0500, Linda Hagedorn linda.haged...@gmail.com 
wrote:

The bad SQL is usually table space scans and/or Cartesian products.  They are 
relatively easy to identify and cancel.  

MVS reports the stress in prod and the high CPU use on the dev LPAR; I find 
the misbehaving thread and cancel it.  The MVS reports then return to 
normal.  

The perplexing part is that the bad SQL running on LPARA is affecting its own 
LPAR and the major LPAR on the CEC.  Its own LPAR I can understand, but the 
other one too? 

The prefetches (dynamic, list, and sequential) are zIIP eligible in DB2 V10, 
so the comment about the bad SQL taking the zIIPs from prod is possible.  I'm 
adding that to my list as something to check.  

The I/O comment is interesting. I'll add it to my list to watch for also.  

I'm hitting the books tonight.  Thanks for all the ideas and references. 

Sent from my iPad

 On Dec 17, 2014, at 9:48 PM, Clark Morris cfmpub...@ns.sympatico.ca wrote:
 
 On 17 Dec 2014 14:13:46 -0800, in bit.listserv.ibm-main you wrote:
 
 I'm pretty good with DB2, and Craig is wonderful.  
 
 It's the intricacies of MVS performance I need to bring into focus.  I have a 
 lot of reading and research to do so I can collect appropriate doc the next 
 time one of these hits.  
 
 After reading most of this thread, two things hit this retired systems
 programmer.  The first is that with all DASD shared, runaway bad SQL
 may be doing a number on your disk performance due to contention, and I
 would look at I/O on both production and test.  DB2 and other experts
 who are more familiar with current DASD technology and contention can
 give more help.  The other is the role played on both LPARs by the use
 of zAAP and zIIP processors, which run at full speed and reduced cost
 for selected workloads.  The bad SQL may be eligible to run on those
 processors, taking away their availability from production.  This is
 just a guess based on a dangerous (inadequate) amount of knowledge.
 
 Clark Morris
 
 Linda 
 Sent from my iPhone
 
 On Dec 17, 2014, at 2:34 PM, Ed Finnell 
 000248cce9f3-dmarc-requ...@listserv.ua.edu wrote:
 
 Craig Mullins' DB2 books are really educational in scope and insight (and 
 heavy). Fundamental understanding of the interoperation is key to 
 identifying and tuning problems. He was with Platinum when he first began the 
 series and moved on after the acquisition by CA. (He and other vendors were 
 presenting at our ADUG conference on 9/11/01. Haven't seen him since but 
 still get the updates.)
 
 The CA tools are really good at pinpointing problems. Detector and Log 
 Analyzer are key. For SQL there's the SQL analyzer (priced) component. 
 Sometimes if it's third party software there may be known issues with certain

Re: Death of spinning disk?

2014-11-29 Thread Scott Chapman
On Fri, 28 Nov 2014 16:14:03 +0800, Timothy Sipples sipp...@sg.ibm.com wrote:

Setting aside current pricing, what are the characteristics of hard disks
that make them better suited to particular use cases than (modern, current)
SSD?

You make a fair point that if pricing is not an object, then there's no 
particular use case for spinning disk that comes to mind. If you figure that 
the durability issues can be overcome with over-provisioning, then the 
durability issue is simply a price issue as well. (Mostly, I'm not sure if SSDs 
are more or less susceptible to data rot simply sitting unused on a shelf.)

So the question would be how quickly SSD pricing can catch disk pricing (and it 
would likely have to pass it, to cover the durability/over-provisioning issue). 
If SSD capacity follows Moore's law and disk doesn't improve substantially, 
that could be as soon as 6 years or so. It would be interesting to find some 
historical disk and SSD capacity pricing over the last 6 years to see if that 
looks plausible. 

Hmmm... According to the Wayback Machine, Newegg's best SSD price in November 
2008 was $2.25/GB. The best hard drive price easily accessible was $0.12/GB. 
Today Newegg's best SSD prices are about $0.38/GB and the best HDD prices are 
just over $0.03/GB. Make of that what you will, but my guess is that over the 
next 10 years, spinning HD capacity will remain cheaper than SSD. But at 
relatively small capacities, the price difference will likely become immaterial.
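A quick back-of-the-envelope on those quoted figures (taking "just over $0.03/GB" as roughly $0.031, which is my assumption, and naively assuming both price trends continue unchanged):

```python
import math

# Newegg best $/GB figures quoted above (November 2008 vs. 2014, ~6 years apart)
years = 6
ssd_2008, ssd_2014 = 2.25, 0.38
hdd_2008, hdd_2014 = 0.12, 0.031   # "just over $0.03/GB" (assumed value)

# Annualized price decline factor for each technology
ssd_rate = (ssd_2014 / ssd_2008) ** (1 / years)  # ~0.74: ~26% cheaper per year
hdd_rate = (hdd_2014 / hdd_2008) ** (1 / years)  # ~0.80: ~20% cheaper per year

# Naive extrapolation: years until SSD $/GB matches HDD $/GB, i.e. solve
# ssd_2014 * ssd_rate**t == hdd_2014 * hdd_rate**t for t.
crossover = math.log(hdd_2014 / ssd_2014) / math.log(ssd_rate / hdd_rate)
```

At these rates the gap closes only slowly (the crossover lands decades out), which is consistent with the guess above that spinning disk stays cheaper per GB for at least the next 10 years.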

Scott

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
