On Tue, Oct 21, 2008 at 12:39 PM, Thomas Kern <[EMAIL PROTECTED]> wrote:
> I think that minute-by-minute usage of main memory could be accounted
> for in the same manner as CPU can be, if you start with the MONITOR
> data instead of the ACCOUNTING data. I think the current ACCOUNTING data
> is insufficient for a good chargeback system. Although with today's

Yes, my statement implied that the details about that are in the
monitor data. The "insufficient" is subjective. There is always the
risk that you spend dollars to charge for a few cents. Especially for
internal accounts you may not care about the cents, knowing that it
will average out somehow.
In the end you need something that distributes the total charges over
your customers in a realistic way. When the distribution follows the
perceived benefit rather than the actual cost, that often makes it
more acceptable to the customer. So you charge them double for CPU
hours that go beyond the budget. Not because those CPU hours cost you
more, but because they're willing to pay for you helping them out. It
also encourages serious forecasting and planning, because that saves
them money.
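
To make that concrete, a small sketch of such a tiered charge (the
rates, the budget figure and the function name are my own invented
illustration, not anything taken from the accounting or monitor data):

def cpu_charge(cpu_hours, budgeted_hours, base_rate=100.0):
    # Charge the base rate up to the agreed budget, double beyond it.
    within = min(cpu_hours, budgeted_hours)
    beyond = max(cpu_hours - budgeted_hours, 0.0)
    return within * base_rate + beyond * base_rate * 2.0

print(cpu_charge(cpu_hours=120, budgeted_hours=100))   # 10000 + 4000 = 14000.0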

OT: I have my doubts when I see the specification on my cell phone
bill. Even from a financial point of view it does not seem practical
to keep records for 6 months showing that I called for 10 seconds to
announce when I would be home. I wonder how much lower my bill could
be if they would summarize and subtotal some things. I suspect the
"flat rate for 5 contacts" schemes may be driven by such motives.
OTOH it sells DFP hardware for mainframes, so some benefit from it...

> LINUX workloads with each linux system wanting to use ALL of its memory
> definition, you could charge for the definition of memory because that
> is the potential for impact on the system in main memory use and paging
> area requirements. So add something to your accounting to indicate that
> one day a linux system used 3/4 GB memory definition, and 1 GB the next
> day and two weeks later they wanted more and got bumped up to 1 1/2 GB,
> etc. Charge them at each increment. In the same vein, you should not
> charge a user for how much of a minidisk they really use, charge them
> for the whole allocation and for changes in the allocation. Although
> these 'allocation' charges do not reflect true usage, they do account
> for a level of system management that is required to maintain the system
> for the user and therefore are valid for charging.

Indeed, to charge on disk space you must feed the directory
information into your accounting process. We used to do that on
Thursday, so each Wednesday night folks would sendfile the contents to
themselves and release the minidisk. And allocate one again the next
morning. :-)
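
A minimal sketch of that kind of allocation-based charging, assuming
the directory has already been reduced to (userid, allocated cylinders)
records; the records and the per-cylinder rate below are invented for
illustration:

from collections import defaultdict

allocations = [            # invented extract of minidisk allocations
    ("LINUX01", 3338),
    ("LINUX01", 10016),
    ("TSMSRV", 32760),
]
RATE_PER_CYL = 0.002       # invented rate

totals = defaultdict(int)
for user, cyls in allocations:
    totals[user] += cyls
for user, cyls in sorted(totals.items()):
    print(f"{user:<8} {cyls:>8} cyl  {cyls * RATE_PER_CYL:10.2f}")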

I disagree with your view on main memory. Your model only fits LPAR.
Memory on System z is too expensive not to share, so you must
encourage Linux servers to behave well and drop from queue so that
their footprint is reduced. That means their charges for memory should
be less. Your TSM server could be made to release some memory early in
the morning and use it only during the night when backups are running.
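
To show why the footprint matters, a sketch that charges on sampled
resident memory (the kind of figure you would take from the monitor
data) rather than on the defined virtual storage; the samples and the
rate are invented:

# Two guests with 1 GB defined; hourly samples of resident MB.
samples = {
    "TSMSRV": [900] * 8 + [100] * 16,   # busy at night, dropped from queue by day
    "WEBSRV": [1000] * 24,              # holds its full definition all day
}
RATE_PER_MB_HOUR = 0.0005               # invented rate

for guest, mb in samples.items():
    print(f"{guest:<8} avg {sum(mb)/len(mb):6.1f} MB  charge {sum(mb)*RATE_PER_MB_HOUR:6.2f}")

The guest that releases its memory ends up paying roughly a third of
what the one that keeps its full definition resident pays.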

Another challenge is cost recovery versus utilization. System upgrades
come in discrete steps, but usage growth is often gradual. In the early
days some shops worked with varying rates just to recover the cost.
Customers tend to dislike varying rates because they can't do budgets
that way. A model based on a fixed subscription plus a variable portion
is becoming more popular: the fixed part covers your average usage,
while the variable part deals with the (not guaranteed) extra usage.
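
A worked sketch of that fixed-plus-variable model (all figures
invented):

def monthly_charge(usage, included=100.0, fixed_fee=5000.0, overage_rate=75.0):
    # Fixed subscription covers usage up to 'included'; only the extra varies.
    extra = max(usage - included, 0.0)
    return fixed_fee + extra * overage_rate

print(monthly_charge(usage=90))    # 5000.0 -- the customer can budget this part
print(monthly_charge(usage=130))   # 5000 + 30*75 = 7250.0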

-Rob
