I have been considering using this option on all of our server backups once we move to 
a 100% disk environment for our primary pools.  This would potentially save us a lot 
of disk space, while still allowing for fairly rapid restores.  But if there are 
limitations that would prevent us from restoring, obviously that will not work.  I 
agree that the developers should start looking at enabling this option at the 
enterprise level, not just for laptops.

Steve Schaub
Systems Engineer
Haworth, Inc
616-393-1457 (desk)
616-412-0544 (numeric page)
[EMAIL PROTECTED] (text page)
WWJWMTD


-----Original Message-----
From: Salak Juraj [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, August 12, 2003 3:26 AM
To: [EMAIL PROTECTED]
Subject: RE: SUBFILEBACKUP problems and issues


Hi,

I have a couple of small file servers being backed up over VPNs of 128-256 kbit/s.

All share the same business requirement: to be able to restore single files rapidly, while 
restoration of the whole server (considered to be rare) may take a week.

So raising these limits would help me and my network utilisation.
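As a back-of-the-envelope illustration of why a whole-server restore over such a link can take a week (the 10 GB server size here is a hypothetical figure, not from this thread):

```python
# Rough restore-time estimate over a slow VPN link.
# Assumed figures (hypothetical): 10 GB of data, 128 kbit/s line.
server_bytes = 10 * 10**9        # 10 GB of server data
line_bits_per_sec = 128 * 1000   # 128 kbit/s VPN link

seconds = server_bytes * 8 / line_bits_per_sec
days = seconds / 86400
print(f"full restore: about {days:.1f} days")  # about 7.2 days
```

A single file, by contrast, transfers in minutes, which is exactly the split in the business requirement above.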

Maybe it would help if some of us (i.e. more than one) opened 
an official enhancement request with IBM?

There was a brief exchange with one of the developers on this forum earlier this year; he 
indicated there were no major technical obstacles to it.

regards
Juraj Salak



-----Original Message-----
From: Len Boyle [mailto:[EMAIL PROTECTED]
Sent: Monday, August 11, 2003 11:22 PM
To: [EMAIL PROTECTED]
Subject: SUBFILEBACKUP problems and issues


Hello

We have been using the subfilebackup feature for Windows 2000 clients located out on the 
WAN. This feature has helped quite a bit as the Windows clients' disk usage has grown and 
the WAN line speeds have grown, but not at the same rate.

Side note: this is where the TSM client, on an incremental backup, backs up only changed 
blocks and not the whole file. This is a huge savings in network bandwidth and TSM server 
media. To do this, the TSM client creates a checksum for each file and stores them in a 
directory on the client.
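The idea behind that checksum-based delta detection can be sketched as follows (a minimal illustration, not TSM's actual implementation; the block size and SHA-1 choice are assumptions):

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical block size; TSM's real chunking is internal

def block_digests(data: bytes) -> list:
    """Checksum each fixed-size block of a file's contents."""
    return [hashlib.sha1(data[i:i + BLOCK_SIZE]).digest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(stored: list, current: bytes) -> list:
    """Indices of blocks whose checksum differs from the stored set."""
    fresh = block_digests(current)
    return [i for i, d in enumerate(fresh)
            if i >= len(stored) or d != stored[i]]

# Base backup: compute and store digests for the whole file.
base = b"a" * BLOCK_SIZE + b"b" * BLOCK_SIZE
stored = block_digests(base)

# Next incremental: only the second block changed, so only it is sent.
modified = b"a" * BLOCK_SIZE + b"c" * BLOCK_SIZE
print(changed_blocks(stored, modified))  # [1]
```

Only the blocks whose digests differ need to cross the wire, which is where the network and media savings come from.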

We have run up against several design limits, which were discussed on this listserv a 
few months back. Now we have run into a problem that appears to be the same as the one 
described in APAR IC36552.

Both the limits and the problem are related to those magic 1 GB and 2 GB numbers.

The problem occurs when the base file and delta files, each under the 2 GB support limit, 
in total exceed 2 GB. Then the client cannot put the data together to restore the 
file. It appears that the client uses a 2 GB memory map of the data.

The first limit is that the files used to store the checksums are limited to 1 
gigabyte.

The second limit is that files covered under subfile backup are limited to 2 GB.

The original design thinking for this feature seems to have been laptops with small disk 
drives. Of course we are using this for Windows file servers with users who store 
files greater than 2 GB.

So we have run up against both of the above limits. And now we are finding that the 
file limit really has to be something much smaller than 2 GB, because of 
APAR IC36552.

If there is a test fix for APAR IC36552, may I ask to be put on the list of folks 
who are testing it?

Also, let me put in a vote for raising the limits on the file size and the checksum file size.

How many folks out there are using this feature?
For servers?
On the LAN rather than the WAN?


-------------------------------------------------------------------------
Leonard Boyle                               [EMAIL PROTECTED]
SAS Institute Inc.                          [EMAIL PROTECTED]
Room RB448                                  [EMAIL PROTECTED]
1 SAS Campus Drive                          (919) 531-6241
Cary NC 27513
