Brian Westerman wrote, in part:
>What collateral damage is caused by a vendor's use of keys in their software?  
>The keys are there to "lock" the software to the system it was licensed for.  
>If the software is moved, or used in other creative means without permission 
>from the vendor (who we must remember, owns the software), then it 
>(theoretically) won't work on that "other" platform.  I guess I'm missing the 
>damage part of that.  Do you mean disaster recovery keys?  I think every 
>vendor has that covered by now, but maybe they don't, and again, it's their 
>software, if they don't want to allow that use, and they let you know up 
>front, then what's the damage?

This is (mostly) a good discussion, despite violating Charles' original plea 
not to have it!

Having spent time carrying the beeper back at Sterling, I'd say that the cost 
is downtime when Stuff Happens. Even well-managed sites would have problems 
that caused 3AM calls for an emergency key.

What I think is important is that, if keys are going to be used, the vendor 
understands that:

1)     This is a cost to the vendor: they need to be serious about it, with 
folks available 24x7 to deal with problems. Not "That won't happen". Not 
"Customers should plan ahead". Not "Somebody is usually available". Serious, 
24x7x365+, and the staffers should be compensated for that service, even if it's 
$25/day, just something to make them take it seriously. Otherwise it's too easy 
to say "Oh, heck, I'm going for a run, I'll leave the phone here, what are the 
odds?" and miss a call.

2)     CPUIDs (as I still call them, lo these 20+ years later) need to be as 
transparent and bulletproof as possible. One vendor I worked for had license 
files that had to have Unix-style line endings. So if a customer was sent one via 
email and it passed through a Windows box, it wouldn't work until they did a 
dos2unix on it. Not hard to do, but when you're trying to get the damned 
software to come up and you just got a new key, you don't think of it: you call 
the vendor back. Dumb, and I couldn't get the Unix-heads to understand it. 
Similarly, the license files should be text, not binary, and ideally should be 
self-checking; that is, the license checker should clearly distinguish between 
"This is a valid license, just not for this system" and "This is not a valid 
license" (and of course "Valid but expired"). There's a rough sketch of what I 
mean below, after point 4.

3)     The license checking should be proactive, and warn well in advance of 
failure. This can be tricky for some products that Just Run with no real UI. 
ObAnecdote: many years ago, a friend was in his data center, glanced at the 
console, and saw "<BackupProductName> WILL EXPIRE IN 14 DAYS!" He pointed at it 
and said something to the operator that probably started with "What the 
****..."; the operator glanced at it and said "Oh, it always says that." Um, 
no, just for the last 16 days, moron... You can't save some folks from 
themselves!

4)     The license should be as permissive as possible. One of the ones I 
always liked was the SAS C compiler, which, whenever you compiled something, 
said "This compiler is licensed to <companyname>". But it would always run (if 
not expired): it didn't actually check CPUID, just date (so there was a license 
key, and it would expire). The theory was that a company might need to use it 
on the wrong system in an emergency, but no real enterprise would accept it 
reporting another company's name. Obviously that will vary with the product 
type, but for a compiler it felt about right. Critical products should allow 
some form of operation even if expired, if possible, and should accept 
temporary, short-term, universal keys, so when that 3AM call happens, a 24-hour 
key can be provided without relying on the staffer's ability to get through 
firewalls, VPNs, etc. to cut a new one.
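
To make point 2 concrete, here's the rough sketch I promised (Python, purely 
for illustration; the CPUID=, EXPIRES=, and CHECKSUM= field names are invented 
for the example, and a real product would verify a proper signature, not a toy 
hash). The point is the error reporting, not the file format:

import datetime
import hashlib

def parse_license(text):
    """Parse a KEY=VALUE text license file."""
    fields = {}
    for raw in text.splitlines():
        # .strip() eats any stray \r, so a key that passed through a
        # Windows mailbox (the dos2unix problem above) still parses.
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        fields[key.strip().upper()] = value.strip()
    return fields

def checksum(fields):
    """Toy integrity check, so 'garbled' can be told apart from 'wrong system'."""
    body = ";".join("%s=%s" % (k, v) for k, v in sorted(fields.items())
                    if k != "CHECKSUM")
    return hashlib.sha256(body.encode()).hexdigest()[:16]

def check_license(text, this_cpuid, today=None):
    """Return a human-readable verdict, not a bare yes/no."""
    today = today or datetime.date.today()
    lic = parse_license(text)
    if "CPUID" not in lic or "EXPIRES" not in lic:
        return "This is not a valid license"
    if lic.get("CHECKSUM") != checksum(lic):
        return "This is not a valid license (integrity check failed)"
    if lic["CPUID"] != this_cpuid:
        return ("This is a valid license, just not for this system "
                "(it is for CPUID %s)" % lic["CPUID"])
    try:
        expires = datetime.date.fromisoformat(lic["EXPIRES"])
    except ValueError:
        return "This is not a valid license (bad expiration date)"
    if expires < today:
        return "This is a valid license for this system, but it expired on %s" % expires
    return "OK"

The .strip() in the parser is the whole dos2unix problem above: two lines of 
tolerance at parse time beat a 3AM callback.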
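
Point 3 is even less code; something like this (the 30-day window is an 
arbitrary choice), written to the console or log at startup and once a day 
thereafter, is enough even for products with no real UI:

import datetime

def expiry_warning(expires, today=None, warn_days=30):
    """Return a console warning if the license is inside the warning window."""
    today = today or datetime.date.today()
    days_left = (expires - today).days
    if days_left < 0:
        return "LICENSE EXPIRED %d DAY(S) AGO" % -days_left
    if days_left <= warn_days:
        return "LICENSE WILL EXPIRE IN %d DAY(S); contact the vendor for a new key" % days_left
    return None  # outside the window: stay quiet

Whether anyone actually reads it is, per the anecdote above, another matter.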
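
And for the 3AM case in point 4: the checker can also accept a temporary, 
universal key that isn't tied to a CPUID at all. The EMERGENCY- prefix and the 
24-hour window here are invented for the sketch, and the real thing would need 
to be unforgeable, but the shape is simple enough that support can read one 
over the phone:

import datetime

def accept_emergency_key(key, now=None):
    """Accept a short-term universal key of the form EMERGENCY-YYYYMMDDHH.

    Good for 24 hours from the embedded issue time, on any CPUID. A real
    one would have to be signed so customers can't simply type one in.
    """
    now = now or datetime.datetime.utcnow()
    prefix, _, stamp = key.partition("-")
    if prefix != "EMERGENCY":
        return False
    try:
        issued = datetime.datetime.strptime(stamp, "%Y%m%d%H")
    except ValueError:
        return False
    age = now - issued
    return datetime.timedelta(0) <= age <= datetime.timedelta(hours=24)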

Bottom line is that CPUIDs are not that hard to break, if someone wants to. So 
the real effort should be on usability, not on getting super-clever and 
complicated. (Another ObAnecdote: we had a "senior" developer who was having a 
problem. This was a VM product, and the CPUID checking would turn off all 
tracing as part of its operation. Said developer was thus stymied from 
debugging it! I pointed out that (a) it was our code, so worst case, a hacked 
version of the CPUID checker could be used, and (b) it took all of 30 seconds 
to break, especially when you could see how it worked...)

>There are many parts (I guess types of keys makes more sense) of vendors keys 
>that I don't agree with, and I don't personally think that software in and of 
>itself should cost more for one processor than another, regardless of 
>processor size, but that's just my personal feeling.  If a vendor wishes to 
>price their software that way, then it's completely their decision.

Not to get into this aspect of the religious war, but that depends on the 
product. For some, I agree; for others, not so much. Usage is, of course, the 
real metric that makes sense: a tiny DB2 (or "Db2", as IBM insanely wants us to 
call it now) database on a huge system will be charged unfairly at the moment, 
but it's hard to argue that a 50TB DB2 installation used by 1,000 applications 
should not cost more in license fees than a tiny one used by a single 
application, no? For utilities, "one size fits all" pricing often seems easier 
to justify.

Another aspect of this whole thing is trial versions of software. For DB2, 
that's easy: you try it, and if you don't buy it, you uninstall it. For things 
like, say, a performance tool, it's harder: if you allow trials, you risk 
customers trialing it, solving their problem, and then not buying. That's an 
unsustainable business model.
--

...phsiii

Phil Smith III
Senior Architect & Product Manager, Mainframe & Enterprise
Distinguished Technologist
Micro Focus


