Ever thought about storing the bulk items outside of the database, and creating links within the database to the files stored in TSM via the API? This would have a twofold effect.
1) Keep your database doing what it does best: referencing data, not storing bulk objects.
2) Speed up access, depending on how you store the data: on tape, in a dedicated storage pool, HSM, or whatever you decide on given your circumstances.

I.e. store the object's name in the database and reference the TSM path name in the database. These items would have to be loaded into TSM first, then referenced in the database for later retrieval. Of course, it depends on how far you are into the project, and whether you have already explored this and discounted it. IMHO..

regards
Stephen

-----Original Message-----
From: Zlatko Krastev [mailto:[EMAIL PROTECTED]
Sent: 10 August, 2003 11:08 PM
To: [EMAIL PROTECTED]
Subject: Re: Large file backup strategies

Many comments:

1. A multi-gigabyte backup file needs time to be created. Afterwards you need to back up that file using the backup product (be it TSM or not). Hope you are not also adding backup to diskpool and migration to tape on top of that. Use TDP for SQL as already suggested. It is not that expensive compared to the *restore* you would otherwise have to perform: restore the file, and only after that restore the database. And applying transaction logs might be very useful, isn't it?

2. Using a single backup file forces a single-threaded backup, and thus a single-threaded restore. As a result, even if the database data files are spread across several spindles, the speed is bottlenecked by the performance of the filesystem the backup file is created in. That same filesystem must also accommodate the whole database, so it must be big, and is therefore probably on RAID-5 - again a backup performance decrease. The answer is again TDP for SQL with its multi-stripe backup/restore capabilities.

3. What is the business requirement justifying those 60 versions of a file? It is hard to understand for both data and OS files. Maybe the actual requirement is to keep the last 60 days, but the usage of some other product transformed the requirement into 60 versions.
Find the root cause! Ask for the requirements to be defined in *business* terms, not IT terms (from your signature it seems you ought to know the ArcView requirements, but maybe again in IT terms).

4. Your administrator is wrong. TSM is a mature product and can definitely distinguish between different files and their corresponding requirements. Ask your administrator to become familiar with the redbook TSM Concepts (SG24-4877), or recommend that he/she take TSM education.

Zlatko Krastev
IT Consultant

Randy Kreuziger <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
08.08.2003 17:58
Please respond to "ADSM: Dist Stor Manager"

To: [EMAIL PROTECTED]
cc:
Subject: Large file backup strategies

We recently purchased a couple of SQL Server boxes for use in storing ESRI GIS layers. This includes storing orthophotography images covering our entire state. The orthos are stored in SQL Server, with the initial load creating a backup file 55 GB in size. As we load more TIFFs, this backup file will only grow.

According to the administrator of our backups, the policy governing this machine is set to maintain 60 versions of a file. I thought this was overkill before, but with SQL Server backups that will approach 100 GB, our backup server will probably drown in orthophoto backups. My administrator states that the system cannot be configured to retain x versions of one file type and y versions for all others. Are there any workarounds so that I don't keep 60 versions of .bak files while retaining that number for all other files?

Thank you,
Randy Kreuziger
ArcSDE Administrator
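[Editor's note: to illustrate Zlatko's point 4, per-file-type retention in TSM is done by binding files to different management classes via include-exclude options. The sketch below uses hypothetical domain, class, and pool names (standard, bakclass, tapepool) and assumes the default policy keeps 60 versions; adjust to your own policy domain.]

```
* Server side (dsmadmc): define a management class for SQL dump files
* with far fewer versions than the 60 kept by the default class.
DEFINE MGMTCLASS standard standard bakclass DESCRIPTION="Short retention for .bak dumps"
DEFINE COPYGROUP standard standard bakclass TYPE=BACKUP DESTINATION=tapepool VEREXISTS=3 VERDELETED=1 RETEXTRA=30 RETONLY=60
VALIDATE POLICYSET standard standard
ACTIVATE POLICYSET standard standard

* Client side (dsm.opt on the SQL Server box): bind .bak files in any
* directory to that class; all other files keep the default class.
INCLUDE *:\...\*.bak bakclass
```

With this in place, incremental backups retain only 3 versions of each .bak file while everything else stays at the default 60, which is exactly the x-versions/y-versions split the administrator claimed was impossible.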
