Charles,

I think that you misunderstood me.
I'm suggesting that the cheap first hash (used to rule out common changes)
would be over a combination of:

- the first n bytes of the data set (just like you suggested)
- the F1/F8 DSCB (which has fields like DS1LSTAR and DS1TRBAL that help
detect common changes at the end of the data set)
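
To illustrate the idea, here is a minimal sketch (in Python, just for
clarity) of a cheap first-pass fingerprint. It assumes the caller has
already read the first n bytes and obtained the DSCB end-of-data fields
(DS1LSTAR, the TTR of the last used track, and DS1TRBAL, the remaining
bytes on that track) by whatever means, e.g. OBTAIN on z/OS; the function
name and parameters are hypothetical:

```python
import hashlib

def quick_fingerprint(head: bytes, ds1lstar: int, ds1trbal: int,
                      n: int = 4096) -> str:
    """Cheap first-pass hash used to rule out common changes.

    Combines the first n bytes of the data set with end-of-data
    metadata (modeled on the DS1LSTAR / DS1TRBAL fields of the
    format-1 DSCB), so that both edits near the start and appends
    at the end change the fingerprint without reading the whole file.
    """
    h = hashlib.sha256()
    h.update(head[:n])
    # Fold in the end-of-data indicators; on z/OS these would come
    # from the F1/F8 DSCB, not be computed here.
    h.update(ds1lstar.to_bytes(3, "big"))   # TTR of last used track
    h.update(ds1trbal.to_bytes(2, "big"))   # bytes remaining on it
    return h.hexdigest()
```

If a change merely appends records, DS1LSTAR/DS1TRBAL move and the
fingerprint changes even though the first n bytes are identical; only
when the cheap fingerprints match would you fall back to hashing the
full data set.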
Kirk Wolf
Dovetailed Technologies
http://dovetail.com

On Thu, Aug 20, 2015 at 12:41 PM, Charles Mills <[email protected]> wrote:

> Right. That's a better idea. Seek to end minus 'n' and go from there.
>
> Only small negative is that if you did the first 'n' bytes and then needed
> to do the whole file, you could just keep going rather than starting over.
>
> Charles
>
> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:[email protected]] On
> Behalf Of Kirk Wolf
> Sent: Thursday, August 20, 2015 10:31 AM
> To: [email protected]
> Subject: Re: Ideas for hash of a sequential data set
>
> That is certainly a possibility, but wouldn't help in the (common) case of
> a change that just appends to the end.   Perhaps hashing both the "first n
> bytes" with the F1/8 DSCB (which has information about the last TTR and
> bytes in the last track) would cover more of the common changes than either
> alone.
>
> ----------------------------------------------------------------------
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to [email protected] with the message: INFO IBM-MAIN
>

