Thank you, gents.
The '3240' is the number of 8K blocks for the 36-cylinder 'growth' value
mentioned in the APAR.
I'd created the LDS with CYLS(857,857) and had expected it to grow in
chunks of 77130 blocks.
Then in my IOEAGFMT job, I'd originally specified '-size 77130 -grow 77130'.
When I didn't see growth occurring as I'd expected, I played with those two
numbers, but nothing seemed to affect the 3240-block growth size.
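For what it's worth, the figures in this thread are only consistent with
standard 3390 geometry of 15 tracks per cylinder and six 8K blocks per
track; a quick sanity check (those two constants are my assumption, not
taken from any manual):

```python
# Sanity check of the zFS growth numbers, assuming 3390 DASD geometry:
# 15 tracks per cylinder and six 8K blocks per track.
TRACKS_PER_CYL = 15
BLOCKS_8K_PER_TRACK = 6

def cyls_to_8k_blocks(cyls: int) -> int:
    """Convert a cylinder allocation to a count of 8K blocks."""
    return cyls * TRACKS_PER_CYL * BLOCKS_8K_PER_TRACK

print(cyls_to_8k_blocks(857))  # CYLS(857,857) allocation -> 77130
print(cyls_to_8k_blocks(36))   # the APAR's 36-cylinder growth -> 3240
```

So the 36-cylinder growth value maps exactly onto the increment visible in
the console messages (1289160 - 1285920 = 3240).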

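For anyone chasing the same thing, the growth increment can be read
straight off two successive IOEZ00078E warnings; a small sketch (the
aggregate name OMVS.TEST.ZFS is a made-up placeholder, message text
copied from the thread below):

```python
import re

# Pull the used/total 8K-block counts out of an IOEZ00078E message and
# compute the growth increment between two successive warnings.
# The aggregate name below is a placeholder, not a real cluster.
MSG_RE = re.compile(r"IOEZ00078E .* \((\d+)/(\d+)\)")

def total_blocks(msg: str) -> int:
    """Return the aggregate's total-block count from an IOEZ00078E line."""
    m = MSG_RE.search(msg)
    if m is None:
        raise ValueError("not an IOEZ00078E message")
    return int(m.group(2))

before = ("IOEZ00078E zFS aggregate OMVS.TEST.ZFS exceeds 99% full "
          "(1282681/1285920) (WARNING)")
after = ("IOEZ00078E zFS aggregate OMVS.TEST.ZFS exceeds 99% full "
         "(1285920/1289160) (WARNING)")
print(total_blocks(after) - total_blocks(before))  # -> 3240
```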

On Mon, Mar 3, 2014 at 2:43 PM, Tom Ambros <[email protected]> wrote:

> Yeah, this is one of the things that has me muttering darkly upon
> occasion.
>
> The mount parm is "AGGRGROW".
>
> The command is "zfsadm grow".  No aggr.
>
> Thomas Ambros
> Operating Systems and Connectivity Engineering
> 518-436-6433
>
>
>
>
>
> From:   "Staller, Allan" <[email protected]>
> To:     [email protected]
> Date:   03/03/2014 09:31
> Subject:        Re: zFS aggregate growth amount query
> Sent by:        IBM Mainframe Discussion List <[email protected]>
>
>
>
> There is a currently open APAR on zFS secondary allocation. See OA44214.
>
> I have always had success w/ zfsadm grow -size (final desired size).
> Check the fine manual for syntax; I am not sure the above is exactly
> correct.
> <snip>
> Listcat the zFS.  It is the secondary allocation by default, I do believe.
>
>  You can override this by hand with the zfsadm grow command but if it is
> done as a result of the AGGRGROW parm on the mount it is the secondary
> allocation of the linear VSAM cluster, IIRC.
> </snip>
>
> <snip>
> I've developed a zFS aggregate & am loading it up with a fairly large
> amount of data.
> I keep on seeing console messages sequences like:
>
> IOEZ00078E zFS aggregate <cluster name> exceeds 99% full (1282681/1285920)
> (WARNING)
> IOEZ00312I Dynamic growth of aggregate <cluster name> in progress, (by
> user xxxx).
> IOEZ00309I Aggregate <cluster name> successfully dynamically grown (by
> user xxxx).
> IOEZ00078E zFS aggregate <cluster name> exceeds 99% full (1285920/1289160)
> (WARNING)
>
> My question is with regard to the amount that the aggregate has grown by.
> The difference between '1285920' and '1289160' is 3240; presumably that is
> a count of 8K-byte blocks, but the available documentation appears to be
> woefully inadequate.
> Who defines that '3240'? I certainly didn't specify it when creating the
> cluster, nor with any parm value when formatting it with IOEAGFMT. When I
> look at either IOEFSPRM or IOEPRMxx, all I see is a bunch of comment
> lines.
> </snip>
>
> ----------------------------------------------------------------------
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to [email protected] with the message: INFO IBM-MAIN
>
>
>
>
>

