We currently run a TSM 5.1 server under OS/390. Our client population is divided among roughly twenty policy domains, each with its own primary and copy tape pools. This is done primarily to allow reasonably fast restores without the outlandish number of tapes that collocation by node would require. Because of the way migration works, each policy domain also has its own disk storage pool, used as the initial destination for incoming backup data. As you would expect, day-to-day variations in client workloads are a major nuisance: on any given day a few disk pools run out of space and spill to tape during the backup window, while other disk pools have sizable amounts of unused space.
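For concreteness, the per-domain plumbing on the current server amounts to something like the following administrative commands. The pool and device class names (and the thresholds) are invented for illustration; one such set exists for each of the twenty domains:

    /* One disk pool per domain, migrating to that domain's tape pool */
    DEFINE STGPOOL ENG_DISK DISK DESCRIPTION="Engineering backup disk" -
       NEXTSTGPOOL=ENG_TAPE HIGHMIG=90 LOWMIG=70
    /* Primary tape pool; node collocation deliberately left off, since */
    /* the per-domain pools already give us coarse grouping             */
    DEFINE STGPOOL ENG_TAPE 3590CLASS DESCRIPTION="Engineering primary tape" -
       MAXSCRATCH=50 COLLOCATE=NO
    /* Copy pool for the same domain, written by BACKUP STGPOOL         */
    DEFINE STGPOOL ENG_COPYTAPE 3590CLASS POOLTYPE=COPY MAXSCRATCH=50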
We are now preparing to migrate to a 5.2 server under mainframe Linux, and we are considering the following arrangement (sketched in commands at the end of this note):

1. Use LVM to create a large (hundreds of gigabytes) file system.
2. Define a FILE device class backed by that file system.
3. For each policy domain, create a sequential storage pool in the FILE device class to serve as the initial destination for backups.

We are already aware of two potential problems: the file system behind the device class running out of space, and processes hanging while waiting for access to volumes when backups run late. Are there any other pitfalls we should be aware of? In particular, what happens when a client sends multiple streams of backup data from the same filespace? Will the server mount multiple FILE volumes concurrently, or will it force the streams to queue up waiting for access to a single volume?
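Here is roughly how we expect to define the new arrangement. The names, directory, capacity, and MOUNTLIMIT value below are placeholders, not settings we have tested:

    /* Hypothetical definitions for the proposed layout */
    DEFINE DEVCLASS BIGFILE DEVTYPE=FILE DIRECTORY=/tsmpool -
       MAXCAPACITY=2G MOUNTLIMIT=20
    /* One pool per policy domain, all sharing the BIGFILE device class, */
    /* each still migrating to its own domain's tape pool                */
    DEFINE STGPOOL ENG_FILE BIGFILE MAXSCRATCH=100 NEXTSTGPOOL=ENG_TAPE
    DEFINE STGPOOL HR_FILE  BIGFILE MAXSCRATCH=100 NEXTSTGPOOL=HR_TAPE

Our (possibly naive) understanding is that MOUNTLIMIT on the device class caps how many FILE volumes can be open concurrently across all of these pools, which is why the queuing behavior asked about above matters so much to us.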
