Richard Sims wrote:

> On Mar 13, 2006, at 6:00 AM, Remco Post wrote:
>
>> Hi all,
>>
>> I'm a bit confused, so I was hoping maybe the list could help.
>>
>> When I read the help on def stg/upd stg for the MAXSIze parameter, it
>> mentions two things:
>>
>> 1- It's the size of the physical file (aggregate)
>> 2- It's the size of the file before compression, if compression is used.
>>
>> Now, during backup I can imagine that 2 is being used, but during
>> migration/backup stg/whatever I can only imagine TSM using either the
>> size of the aggregate or the size of the file on the filesystem, not
>> both. So which is it? Is using the size of the file before compression
>> the old (pre-aggregate) way of doing things? Is the manual wrong? Is
>> the help wrong?
>>
>> (And yes, I've read the QuickFacts, and no, they don't make things any
>> clearer.)
>
> Hi, Remco -
>
> I think you picked up the item about compression being a participant
> in storage pool operations from the TSM Concepts redbook.
> Unfortunately, that part of the redbook is poorly written, failing to
> explain the context of its discussion, leading to confusion.
>
> Compression is a factor only when a file is being backed up; at that
> point the TSM server is evaluating the size reported by the client in
> deciding which storage pool the new object (actually, an Aggregate for
> B/A; an individual file for HSM and perhaps TDPs)
Well, actually, I can imagine the TSM server allocating the destination
resource on a per-file basis even for B/A client backups. This I get,
not from any redbook, but from both the QuickFacts and 'help def stg'.
Or is there a verb in the TSM protocol that says something like "Hey
server! Here's a bunch of files, the grand total is x bytes, make sure
you're ready to store it", where 'bunch' is defined by TXNGROUPMAX and
MOVEBATCHSIZE?

The reason I'm asking: I've done a query on the contents table that
tells me:

1- the number of files in an aggregate
2- the size of the aggregate

This is about as much info as the server has during
reclamation/migration, etc. I'm trying to determine, for a given
maxsize setting, what percentage of the total number of files and what
percentage of the total number of bytes stored it would cover (so with,
e.g., a maxsize of 10 MB, I have 73% of the total number of files, and
they take up about 10% of the total data volume). I could then
determine the size of a 'FILE' pool to keep all 'small' files on-line
for my environment at this point in time.

Now, if the maxsize is _always_ the size of the aggregate, this is a
correct figure (in my environment), but if in one case it is the size
of an individual file (B/A client) to be aggregated, and in another it
is the size of the aggregate... I'm, uhhh, in trouble, because I'll
need a larger file pool for that setting (or I'll end up migrating
files to tape that I don't want to store on tape).

> will land in. Once in TSM server storage, the object is just a clump
> of bits: no considerations for compression prevail. Where it will fit
> thereafter is a function of Aggregate size (which can shrink during
> reclamation operations, most visibly via MOVe Data RECONStruct=Yes).

So with reclamation, migration and move data, TSM working on aggregates
makes sense. But for B/A client activity both make sense, so which is
it?
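For what it's worth, the files-vs-bytes trade-off I'm computing from the
contents table can be sketched like this (illustrative Python; the
aggregate sizes and counts below are made up, not taken from my actual
query):

```python
# Sketch: given per-aggregate (size_bytes, file_count) rows pulled from
# the TSM contents table, compute what fraction of files and of bytes a
# candidate MAXSIZE cutoff would keep in the disk/FILE pool.

def coverage(rows, maxsize):
    """Return (% of files, % of bytes) for aggregates of size <= maxsize."""
    total_files = sum(count for _, count in rows)
    total_bytes = sum(size for size, _ in rows)
    kept_files = sum(count for size, count in rows if size <= maxsize)
    kept_bytes = sum(size for size, _ in rows if size <= maxsize)
    return (100.0 * kept_files / total_files,
            100.0 * kept_bytes / total_bytes)

# Made-up sample data: (aggregate size in bytes, files in that aggregate)
rows = [
    (512 * 1024, 40),    # 40 small files in a 512 KB aggregate
    (8 * 1024**2, 25),   # 25 files in an 8 MB aggregate
    (60 * 1024**2, 3),   # a few larger files
    (2 * 1024**3, 1),    # one big file, effectively its own aggregate
]

files_pct, bytes_pct = coverage(rows, maxsize=10 * 1024**2)
print(f"{files_pct:.1f}% of files, {bytes_pct:.1f}% of bytes")
# prints: 94.2% of files, 0.4% of bytes
```

Of course, this only gives the right answer if maxsize is always
compared against the aggregate size, which is exactly the question.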
> Richard Sims

We could of course just test and see what happens, but maybe somebody
already knows (developers? anyone?)...

--
Kind regards,

Remco Post

SARA - Reken- en Netwerkdiensten          http://www.sara.nl
High Performance Computing    Tel. +31 20 592 3000    Fax. +31 20 668 3167

"I really didn't foresee the Internet. But then, neither did the
computer industry. Not that that tells us very much of course - the
computer industry didn't even foresee that the century was going to
end." -- Douglas Adams
