Moving from TSMV5 to V6 / question on DIRMC

2013-10-30 Thread Rainer Wolf

Hello All,

we are currently using a TSM V5 server and will soon be moving to
TSM V6, starting again from scratch with a brand-new TS3500 library
and the new 3592-C drives.

The new TSM server setups are to be reviewed, and now I have some questions
about the DIRMC feature, which so far we distribute via a server-defined
client option set.

There are two reasons why I use DIRMC:
a) some tape mounts can be avoided by keeping directories, links, etc. in online pools
   ... we do not have many drives
b) the quite easily produced figures on the balance
   between 'normal files' and 'directories + 0-byte files + links + ...'
   can be shown on a per-storage-pool basis
   ( select sum(num_files),stgpool_name from occupancy group by stgpool_name )
   or simply displayed on a per-node basis
   ( with simply 'query occu stg=diskdirpool' )

Especially feature b) has often directly helped to find the root cause
of strange problems as they occur.

One question now is: with TSM 6 there may be no need to use DIRMC anymore -
is it possible that directory entries (with extended ACL info) are
then always stored in the database and never go to tape?

If it is okay and not unusual to use DIRMC, then the other question is:
is it still okay to define a two-stage storage pool setup in the following way,
using 'DATAFormat=nonblock' for the file volumes that receive the diskdir data?
 like
  define stg filedirs FILEDIR maxscr=0 reused=3 hi=100 lo=30 COLlocate=group reclaim=100 DATAFormat=nonblock
  define STG DISKDIRS DISK hi=60 lo=20 nextstg=filedirs

any hints are welcome
Rainer




--

Rainer Wolf  eMail:   rainer.w...@uni-ulm.de
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm


Re: Moving from TSMV5 to V6 / question on DIRMC

2013-10-30 Thread Rainer Wolf

Hello Nick,

thanks for your answer - because of the two-stage storage pool the device class
definitions were missing there (I add them at the bottom).
So the primary pool is a random-access storage pool with a device class of DISK,
to be accessed by lots of simultaneous sessions, and really not very large -
two volumes with together 10 GB are quite enough for a small/medium server.
The final destination of the 'dir data' is reached by migrating from the disk
pool into a storage pool with a device class of FILE. I was using this because,
at least with the TSM V5 version, it soon takes too much time to run a copy
operation on such a disk pool - it simply took an enormous time for the
'backup stg' process to traverse such a thing.
It may be that sequential-access media performs better for such data, but I
haven't measured it; at least I could not see a difference.
So it works much better with the file volumes in the end, but DIRMC itself
needs to point to the DISKDIRS pool to handle a lot of sessions for lots of
simultaneously incoming data.
The reason for using DATAFormat=nonblock was an older IBM V5 setup-preferences
cookbook ... which I cannot find anymore ... It still works without problems -
but possibly this 'DATAFormat=nonblock' is no longer suitable/needed with
TSM V6?

Rainer

define STG DISKDIRS DISK hi=60 lo=20 nextstg=filedirs
DEFINE DEVCLASS FILEDIR DEVTYPE=FILE FORMAT=DRIVE MAXCAPACITY=1000M MOUNTLIMIT=200 DIRECTORY=/filedir SHARED=NO
define stg filedirs FILEDIR maxscr=0 reused=3 hi=100 lo=30 COLlocate=group reclaim=100 DATAFormat=nonblock


On 30.10.2013 13:09, Marouf, Nick wrote:

Hi Rainer,
We had to use DIRMC, even though I've heard the same - that it is no
longer needed. Some of the servers I back up are so large that without DIRMC
the restores would take a substantial time to complete. I've also had an issue
where the GUI had stopped working, and enabling DIRMC on these very large
servers solved the problem
(http://www-01.ibm.com/support/docview.wss?uid=swg21162784).

With only 3 servers using DIRMC, using your query below I have over 5,000,000
objects in the DIRMC storage pool. This really saves time laying out the
directory structure when restoring large folders/subfolders.

As far as using a different data format: IBM's preferred format is native.
Are you seeing considerable changes with the nonblock option?

What would be the benefit of the two-stage storage pool? Are both using a
devclass of disk?

-Nick






--

Rainer Wolf  eMail:   rainer.w...@uni-ulm.de
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm


Re: million files backup

2012-02-02 Thread Rainer Wolf

Hi,
it depends on how many changes happen daily, and
the other thing is how fast the TSM client can scan the filesystem.
We have some clients with 30 million files, and with TSM they can scan more
than 5 million files per hour - but there is not very much change in the files.

On some clients with a similar mass of files but more changes, we mix the
'normal' incremental scans, run on weekends when activity is lower, with a
schedule made for 'incremental by date',
so the client is associated with two schedules like

   Policy Domain Name: U15040_SM
        Schedule Name: MO_FR_MAIL
          Description: Mon-Fri backup from 22:00:00 ... incrbydate
               Action: Incremental
              Options: -incrbydate
              Objects:
          Day of Week: Weekday
...
and the normal one
   Policy Domain Name: U15040_SM
        Schedule Name: SA_SO_MAIL
          Description: Weekend backup from 22:00:00
               Action: Incremental
              Options:
              Objects:
             Priority: 5
      Start Date/Time: 04/18/08   22:00:00
             Duration: 2 Hour(s)
       Schedule Style: Classic
               Period: 1 Day(s)
          Day of Week: Weekend

That's nothing special, just classic TSM ... with much bigger ones
you may have to split, or TSM might not be suitable.
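A sketch of how such a pair of schedules might be defined on the server (the node name is hypothetical; the actual times and windows are in the query output above):

   define schedule U15040_SM MO_FR_MAIL action=incremental options='-incrbydate' starttime=22:00 dayofweek=weekday
   define schedule U15040_SM SA_SO_MAIL action=incremental starttime=22:00 dayofweek=weekend
   define association U15040_SM MO_FR_MAIL somenode
   define association U15040_SM SA_SO_MAIL somenode

With both associations in place, the node gets the fast incrbydate scan on weekdays and the full incremental scan on weekends.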

best regards
Rainer

On 02.02.2012 15:30, Jorge Amil wrote:

Hi everybody,

Does anyone know the best way to back up a filesystem that contains millions
of files?

Image backup is not possible, because it is a GPFS filesystem and that is not
supported.

Thanks in advance

Jorge



--

Rainer Wolf  eMail:   rainer.w...@uni-ulm.de
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm


TSM V6.3 Client/Server for x86_64: Solaris ... missing?

2011-10-28 Thread Rainer Wolf

Hi All,
just wanted to call IBM service, but it is currently saying
'this service is not available at the moment' ...

Does someone know why 'x86_64: Solaris' is missing from the support matrix,
both for client and server 6.3?

Only SPARC is listed - will x86 support become available at a later time?


thanks in advance for
any hints

Rainer

--

Rainer Wolf  eMail:   rainer.w...@uni-ulm.de
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm


Mac client: problem with domain statement after update to Version 6

2009-09-02 Thread Rainer Wolf

Hi All,
We recently upgraded a Mac client from 5.5.1.6
to 6.1.0.2 (because of the GUI-not-running problem).

Now we have a problem with the backup of a domain which has a space inside.
The domain entries in dsm.opt look like:
 DOMAIN /
 DOMAIN /Volumes/Promise RAID
... this worked so far.

Now with Version 6 we get the error:
 02.09.2009 10:28:41 ANS1071E Invalid domain name entered: '/Volumes/Promise'
 02.09.2009 10:28:41 ANS1071E Invalid domain name entered: 'RAID'
 02.09.2009 10:28:43 Filespace is invalid
... when doing a 'backup domain' from the GUI.

Is this a known problem ?

thanks in advance
Rainer

--

Rainer Wolf  eMail:   rainer.w...@uni-ulm.de
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


Re: Mac client: problem with domain statement after update to Version 6

2009-09-02 Thread Rainer Wolf

Hi Andy,

DOMAIN '/Volumes/Promise RAID' -- it works :-)

thanks a lot
Rainer



Andrew Raibeck wrote:

Hi Rainer,

I'm not sure yet what changed, but try putting single quotes (') around the
name to bypass the problem:

DOMAIN '/Volumes/Promise RAID'

Best regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Product Development
Level 3 Team Lead
Internal Notes e-mail: Andrew Raibeck/Hartford/i...@ibmus
Internet e-mail: stor...@us.ibm.com

IBM Tivoli Storage Manager support web page:
http://www.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html


The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.



--

Rainer Wolf  eMail:   rainer.w...@uni-ulm.de
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


Re: SV: Move data from DISK pool to FILE pool

2009-07-28 Thread Rainer Wolf

Hi,
you can prevent fragmentation of filepool volumes if you define/create them
assigned to the pool, instead of using scratch volumes for FILE storage pools.
Using a command like ...
define volume Storagepoolname volumename numberofvolumes=10 formatsize=2 wait=yes
... defines 10 volumes with a prefix of 'volumename', each at 20 GB.
It is important to use 'wait=yes' so those 10 volumes are created sequentially.
Without 'wait=yes' all 10 volumes would be created in parallel, and
fragmentation could occur.
You can really have filepool volumes without fragmentation (wait=yes);
scratch volumes are certainly not recommended here.

Cheers Rainer


Christian Svensson wrote:


Hi,
If I were you I would do this instead:

1) Rename your old disk pool
2) Create your file pool with the same name as the old disk pool
3) Update your disk pool with HIGH=0 LOW=0 and NEXT=FILEPOOL (disable caching
too, if you have any)
4) Wait 2-3 days
5) Delete all volumes in your disk pool, and delete the disk pool

Now the data will migrate, and you can still back up to your new storage pool
without any issue and with no extra work for you.

But why do you want to move to a FILE device class? Do you want to use
deduplication? You know you will delay your backups and get a lot of
fragmentation if you don't pre-create all volumes. This is not really a TSM
issue but a filesystem one: TSM needs to create the file first before it can
save any data to that volume, and that creates a delay. If TSM creates
multiple volumes at once, you will get fragmentation.
I normally don't recommend anyone to use a FILE class as the first storage
pool. I only use it if a customer wants a VTL but doesn't have the money to
buy a real one.

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
Skype: cristie.christian.svensson

From: ADSM: Dist Stor Manager [ads...@vm.marist.edu] on behalf of Grigori Solonovitch [g.solonovi...@bkme.com]
Sent: 28 July 2009 12:35
To: ADSM-L@VM.MARIST.EDU
Subject: Move data from DISK pool to FILE pool

Dear TSMers,
We have TSM 5.5.1.1 under AIX 5.3.
I need to move data from a DISK primary storage pool (raw logical volumes) to
a FILE primary storage pool (JFS2 file system).
There are 3 tape copy pools for the DISK primary pool.
I am going to use the following procedure:
1) create the FILE primary storage pool with an appropriate size;
2) move data from the DISK primary storage pool to the FILE pool with 'move
data vol stg=FILE reconstruct=no' (volume by volume);
3) delete the DISK primary storage pool;
4) rename the FILE storage pool to the old DISK pool name.
My expectations:
1) all existing tape copy pools made for the DISK pool will still be valid
for the FILE primary storage pool (no need to make extra backups, and it is
possible to restore any data in the FILE storage pool);
2) no need to modify any copy groups connected to the old DISK pool, because
the storage pool name stays the same;
3) the expiry process will continue to work normally;
4) the full/incremental backup history will be untouched by the data movement.
Am I right?
I will deeply appreciate any comments.
Thank you very much in advance.
Kindest regards,


Grigori G. Solonovitch

Senior Technical Architect

Information Technology  Bank of Kuwait and Middle East  http://www.bkme.com

Phone: (+965) 2231-2274  Mobile: (+965) 99798073  E-Mail: g.solonovi...@bkme.com

Please consider the environment before printing this Email.


This email message and any attachments transmitted with it may contain confidential 
and proprietary information, intended only for the named recipient(s). If you have 
received this message in error, or if you are not the named recipient(s), please delete 
this email after notifying the sender immediately. BKME cannot guarantee the integrity of 
this communication and accepts no liability for any damage caused by this email or its 
attachments due to viruses, any other defects, interception or unauthorized modification. 
The information, views, opinions and comments of this message are those of the individual 
and not necessarily endorsed by BKME.


--

Rainer Wolf  eMail:   rainer.w...@uni-ulm.de
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


Re: Max number of Filespaces?

2009-06-30 Thread Rainer Wolf

Hi Michael,
... no problem!
It is simply because the client acts as a Solaris file server.
Running Solaris 10 + ZFS there is unfortunately currently no way to set up
per-user disk quotas (a friend of backup).
But it is easy to group things with ZFS, to get a kind of group quota,
and you can even easily set up a filespace for every user with ZFS.
This extreme splitting on a per-user basis leads to a user quota, and I think
there are about 4500 user accounts on that file server.

At least the grouping is a good thing, as it leads to a splitting of big
filespaces, which anyway is better for TSM, both on backup and restore.
We split our all-in-one NFS into handy parts, thus moving from 1 to around 15
filespaces, and it runs very well.

Such extreme per-user splitting (4500 filespaces) - I don't know what to
think about it.
I must say that the other extreme, handling very big single ZFS filespaces,
is also no problem if you look only at the backup itself: we can see big
single ZFS filespaces with 30 million files running at a backup speed
(file scan) of 6 million objects per hour.
So backup on big ZFS spaces is in principle no problem, but the restore is:
while the backup is okay, it is unfortunately not possible to restore such a
filespace completely with a single restore (32-bit clients).
So on those big single-filespace extremes we have to split up complete
restores anyway.

Possibly it is best to be somewhere in between, but we are still waiting for
64-bit clients because of those occasional 4 GB dsmc core dumps during full
restores ... does someone know about 64-bit clients?

Cheers
Rainer

Petrullo, Michael G. wrote:

Rainer,

WOW! That is an incredible number of filespaces under one node! If you
don't mind me asking, what is that client backing up?

Michael Petrullo
Storage Support Administrator
Legg Mason Technology Services
Phone: 410.580.7381
Email:  mpetru...@leggmason.com



IMPORTANT:  E-mail sent through the Internet is not secure. Legg Mason 
therefore recommends that you do not send any confidential or sensitive 
information to us via electronic mail, including social security numbers, 
account numbers, or personal identification numbers. Delivery, and or timely 
delivery of Internet mail is not guaranteed. Legg Mason therefore recommends 
that you do not send time sensitive
or action-oriented messages to us via electronic mail.

This message is intended for the addressee only and may contain privileged or 
confidential information. Unless you are the intended recipient, you may not 
use, copy or disclose to anyone any information contained in this message. If 
you have received this message in error, please notify the author by replying 
to this message and then kindly delete the message. Thank you.


--

Rainer Wolf  eMail:   rainer.w...@uni-ulm.de
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


Re: Max number of Filespaces?

2009-06-26 Thread Rainer Wolf

Hi,
I recently asked IBM and no known limit was given. But maybe someone knows.
We currently have one user with around 4500 filespaces and I cannot see any
problems so far.

Another question is how to do a full restore of those split per-user ZFS
filespaces (Solaris):
because tapes are used in the end ... I ask myself how to do a successful
'complete restore' on that thing.
Is it only possible (if using tapes) with a preceding 'move nodedata',
like 'move nodedata NODE fromstg=TAPEPOOL tostg=FILEPOOL'?
Or something else?

Cheers Rainer


Christian Svensson wrote:

Hi *SMers,
Does anyone know the maximum number of filespaces per node?

Using TSM Server 5.5.2.0 on MVS.

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
Skype: cristie.christian.svensson


--

Rainer Wolf  eMail:   rainer.w...@uni-ulm.de
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


Re: is TSM Client-encrypted data still compressible on the 3592 Drives?

2009-03-19 Thread Rainer Wolf

Hi Wanda,
thanks a lot for this detailed clarification and your additional idea -
it all makes sense and is quite comprehensible now.

Rainer

Wanda Prather wrote:


Compression algorithms work by removing repeating patterns in the data.
Encryption also works by removing repeating patterns in the data.

So you are correct that no matter what type of drives you are using, the
drives are unlikely to be able to do any significant amount of compression
on encrypted data.

Thus you should turn on compression in the client as well as encryption.
The client is smart enough to compress first, then encrypt.  If you don't
turn on compression first, you will use up twice as much tape on the TSM
server end (assuming an avg. compression ratio of 2:1). The biggest drawback
of doing this is that it will slow down your backups, and especially slow
down restores.

Regardless of what combination of encryption/compression you use, whether
client or drive, or how many of those options you have turned on, you won't
have any problems doing a restore.  Everything that was done to the client
data during backup, will get undone, correctly, in order, during a restore.

If you have compressalways yes, the client sends the data even if it detects
that compression makes the data grow some.
If you have compressalways no, if the client detects that compressing the
data makes it grow, it will stop compressing and resend the data
uncompressed.  That will be a bit slower, so you have the option to use it
or not use it.

The statistics will be misleading, no matter what you do, because:
- The TSM server only knows/records how many bytes the client sends to it.
- The TSM server only knows/records how many bytes it sends out to the tape
drive; it doesn't know how much the drive may or may not compress it on the
other end of the cable.

So (ignoring encryption for a minute):
Assume you are using the 3592 500 GB cartridges, and your data compresses
2:1.

If the client is not doing compression:
The client sends 500 GB of data to the server disk pool during a backup, and
the TSM server later migrates out to the tape drive, which compresses the
data 2:1.
The client stats (in accounting records or the activity log) will show 500GB
of data sent to the server.  Q OCC will show 500GB of data for that client.
Migration stats will show 500GB of data migrated.  But your 500GB cartridge
will only be half full.

If the client is doing compression at 2:1:
The client backup sends 250GB of data across the network to the server disk
pool.  The TSM server later migrates that data out to the tape drive, which
is unable to compress the data again.  The client stats will show 250 GB of
data sent to the server.  Q OCC will show 250GB of data for that client.
Migration stats will show 250GB of data migrated.  But your 500 GB cartridge
will still be half full.

For capacity planning purposes, you just need to keep in your head whether
your data gets compressed when it's going out to tape.  If you have a
mixture of clients that are compressing and not compressing, it's nearly
impossible to make a capacity planning estimate, you just have to track your
growth from week to week.
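To make the bookkeeping concrete, the two scenarios above can be sketched in a few lines of Python (a hypothetical helper, using the 500 GB source and 2:1 ratio from the example):

```python
# Sketch of the accounting above: TSM only records what the client sends;
# drive-side compression is invisible to server statistics.

def backed_up(source_gb, ratio, client_compresses):
    """Return (GB reported in server stats, GB actually landing on tape)."""
    if client_compresses:
        sent = source_gb / ratio      # client compresses before sending
        on_tape = sent                # drive cannot compress it further
    else:
        sent = source_gb              # raw data crosses the network
        on_tape = source_gb / ratio   # drive compresses on the way to tape
    return sent, on_tape

# 500 GB of 2:1-compressible data: reported occupancy differs,
# but the cartridge holds 250 GB either way.
print(backed_up(500, 2, client_compresses=False))  # (500, 250.0)
print(backed_up(500, 2, client_compresses=True))   # (250.0, 250.0)
```

Either way the tape holds the same number of bytes; only the reported statistics differ, which is why capacity planning has to track which clients compress.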

Now here's an additional idea that might make life easier:

If your customers are worried about transmission of the data to the TSM
server, they need to encrypt at the client level.  But if their worry is
about the vulnerability of data once it's on tape, just use the encryption
in your TS1120 drives.  It's very easy to set up application-managed
encryption.  The keys stay in the TSM data base, you never need to know what
they are.  You can even encrypt at the storage-pool level.  (Some of my
customers only encrypt their copy pools which are going offsite.).  And the
tape encryption is done with an extra processor  buffer in the drive, so it
doesn't slow down reads or writes.

W







is TSM Client-encrypted data still compressible on the 3592 Drives?

2009-03-18 Thread Rainer Wolf

Hi All,

we normally recommend not using TSM compression, because the
fantastic 3592 drives do the compression very well and fast.

If users want to encrypt their data with the TSM client, I tend to
recommend also using compression, because the data would first get compressed
and then encrypted (on the client).
This should help save some space on the tapes, but it is only an assumption,
and possibly compression is not essential.

My question is:
if I have a 10 GB file and it would appear (without client compression and
without client encryption) as 6 GB on the tape (after hardware compression) ...

... is it possible to say what happens if I set up
TSM encryption (AES128) and send this file again - now encrypted?
Will this data appear at around 6 GB, at around 10 GB, or somewhere in
between? Or is it something completely unpredictable?
Statistics would also be interesting.

If it is more like 10 GB, it makes sense to use TSM client compression just
to save space.
Because of the recently discussed problems with restoring TSM-compressed
data that is already compressed by other software, should the compressalways
option then also not be used, to avoid problems during the restore process?

thanks in advance for any hints
Rainer


--

Rainer Wolf  eMail:   rainer.w...@uni-ulm.de
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


Re: slow restore

2009-03-13 Thread Rainer Wolf

Hi,
you may try instead:
dsmc restore /oracle/backup/fedwp1/bkup.hpdw1p.200903081920.fedwp1.hot/?* /oracle/backup/paul/ -inactive -subdir=yes -replace=no

This deactivates the no-query-restore (NQR) feature; we use it often when
restoring a rather small number of files out of bigger filespaces.
number of files out of bigger file-spaces.

regards
Rainer

Richard Rhodes wrote:


We are performing a restore that is running really slowly.

client:  HP-UX 11.11, TSM v5.5.0
tsm server:  v5.4.1

The restore operation is about 1000 files totaling about 600 GB. The TSM
server is sitting with the session in SendW. When dsmc is first started
it runs the restore at about 10-15 MB/s.  After some time (an hour or so) it
slows to a crawl.  The problem appears to be that dsmc is running an entire
processor flat out at 100%.  In other words, dsmc becomes CPU bound.  We
have killed and restarted it several times and the same pattern occurs.

Here is the dsmc command we ran:
dsmc restore /oracle/backup/fedwp1/bkup.hpdw1p.200903081920.fedwp1.hot/ /oracle/backup/paul/ -inactive -subdir=yes -replace=no

Any thoughts on what's happening would be appreciated!

Rick


-
The information contained in this message is intended only for the
personal and confidential use of the recipient(s) named above. If
the reader of this message is not the intended recipient or an
agent responsible for delivering it to the intended recipient, you
are hereby notified that you have received this document in error
and that any review, dissemination, distribution, or copying of
this message is strictly prohibited. If you have received this
communication in error, please notify us immediately, and delete
the original message.


--

Rainer Wolf  eMail:   rainer.w...@uni-ulm.de
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


Re: slow restore

2009-03-13 Thread Rainer Wolf

Okay - you could change COMPRESSALWAYS to 'no'; the default is 'yes'.
Already compressed files normally grow under TSM compression, so
'COMPRESSALWAYS no' can disable compression for those files.

regards
Rainer




Richard Rhodes wrote:


Thanks . . .

Grrr . . . I figured out what is happening.  The files that are backed
up are unix-compressed (foo.Z).  The dsm.sys file is set with 'compression
yes'.  Of course the CPU is spinning: it is uncompressing files which are
already compressed.
Rick








--

Rainer Wolf  eMail:   rainer.w...@uni-ulm.de
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


Re: slow restore

2009-03-13 Thread Rainer Wolf

thanks - one question I have on that flash:
We have several Win2000 clients still running 5.3.6.4
and some WinNT systems possibly running 5.1.8.2.
Reading this flash, I take it those clients are not affected by that problem
-- are they?

If they are affected but not fixed, then 'COMPRESSALWAYS NO' is surely to be
avoided ... so is it a 'must' to have 'COMPRESSALWAYS YES' set for those
old ones?

regards
Rainer


Andrew Raibeck wrote:


okay - you could change the COMPRESSALWAYS to 'NO'; the default is 'YES'.
Otherwise already compressed files normally grow with the tsm compression,
and so 'COMPRESSALWAYS NO' can disable compression for those files.



Using EXCLUDE.COMPRESSION is better performance-wise for known file
specifications that do not compress well, since COMPRESSALWAYS NO will
require redrive of the transaction if the file does not compress well.
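Andy's advice can be sketched as a client-options fragment (a hypothetical dsm.sys/dsm.opt excerpt, not from the original thread; the extensions are placeholders, and `/.../` is the standard TSM include-exclude wildcard matching any number of directory levels):

```text
* Keep client compression on in general, but skip it entirely for
* file types known to compress poorly (already-compressed data).
compression         yes
compressalways      yes
exclude.compression /.../*.Z
exclude.compression /.../*.gz
exclude.compression /.../*.zip
```

With EXCLUDE.COMPRESSION the client never attempts compression on those files, so no transaction redrive can occur for them.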

Since we're talking about COMPRESSALWAYS NO, make sure you keep this flash
in mind before implementing it:
http://www-01.ibm.com/support/docview.wss?uid=swg21322625

Best regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Product Development
Level 3 Team Lead
Internal Notes e-mail: Andrew Raibeck/Tucson/i...@ibmus
Internet e-mail: stor...@us.ibm.com

IBM Tivoli Storage Manager support web page:
http://www.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html


The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.


--

Rainer Wolf  eMail:   rainer.w...@uni-ulm.de
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


Re: Question about group collocation

2008-12-03 Thread Rainer Wolf

Hi Farren,
you can have it all. If you set the stg to group collocation and
want a group or even a single node to be collocated, you set
up the collocation groups with as many members as needed.
We only use group collocation, and if a single node should be collocated,
we just define one collocation group with one member. This acts like
node collocation.

When you change your stg from node to group collocation,
note that the current filling volumes possibly won't be written again. That's
no problem, but after some time you may identify those volumes (by the
filling status and an old 'Approx. Date Last Written') and simply move them
with 'move data volumename reconstruct=yes'. The output of that data will be
written to the volumes you expect (group collocation).
It's just for saving empty slots ...
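The setup described above might look roughly like this on the server (a sketch with placeholder group, node, and pool names, not the poster's actual configuration):

```text
/* A normal multi-member collocation group */
define collocgroup WEBSERVERS
define collocmember WEBSERVERS NODE_A,NODE_B,NODE_C

/* A one-member group, which behaves like node collocation */
define collocgroup BIGNODE_GRP
define collocmember BIGNODE_GRP BIGNODE

update stgpool TAPEPOOL collocate=group
```

Nodes not in any group are then stored on as few volumes as possible, as the Admin Guide passage quoted below notes.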

Rainer




Sam Rawlins wrote:


Hi Farren,

As per the Admin Guide > Configuring and Managing Server Storage > Chapter
10 > Keeping a Client's Files Together > Planning for and Enabling
Collocation, located here:
http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/topic/com.ibm.itsmcw.doc/anrwgd55330.htm#colplan
you are correct; when collocation is set to 'group,' data belonging to nodes
which are not members of a collocation group is stored on as few volumes as
possible.
This text is found in the 5.4 and 5.5 Admin Guides. The Admin Guide for 5.3
was not as specific.

On Tue, Nov 25, 2008 at 3:56 AM, Minns, Farren - Chichester 
[EMAIL PROTECTED] wrote:



Hi all

I currently have a tapepool collocated by node.

I have created my first collocation group and added three client nodes to
it.

So, if I now update the tapepool to collocate by 'group', am I right in
thinking it will still collocate all nodes NOT in the collocation group by
node?

Thanks

Farren Minns



This email (and any attachment) is confidential, may be legally privileged
and is intended solely for the
use of the individual or entity to whom it is addressed. If you are not the
intended recipient please do
not disclose, copy or take any action in reliance on it. If you have
received this message in error please
tell us by reply and delete all copies on your system.

Although this email has been scanned for viruses you should rely on your
own virus check as the sender
accepts no liability for any damage arising out of any bug or virus
infection. Please note that email
traffic data may be monitored and that emails may be viewed for security
reasons.

John Wiley & Sons Limited is a private limited company registered in
England with registered number 641132.

Registered office address: The Atrium, Southern Gate, Chichester, West
Sussex, PO19 8SQ.








--
Sam Rawlins


--

Rainer Wolf  eMail:   [EMAIL PROTECTED]
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


Re: Old copypool data question

2008-11-20 Thread Rainer Wolf

Hi,
first step:
 backup stg primary-pool new-copy-pool
If any data in this first step turns out not to be readable from the
primary stg, you can still restore that data from the old copy pool,
so be sure that the first step has finished without failures.

and then the second step:
  delete the old copy-storagepool volumes by creating and executing a macro
  del.mac consisting of lines like
  'del vol xxx discarddata=yes wait=yes'
  ... one for every volume in that old copy pool.
  You can easily run that macro from a dsmadmc shell using the '-itemcommit'
  option, as 'macro del.mac'
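A minimal shell sketch of generating such a macro (the volume names here are placeholders; in practice you would pipe in the real volume list of the old copy pool, e.g. from a dsmadmc select):

```shell
# Build del.mac with one 'del vol' line per old copy-pool volume.
# VOL001..VOL003 are placeholder names standing in for the real list.
printf '%s\n' VOL001 VOL002 VOL003 |
while read -r vol; do
    echo "del vol $vol discarddata=yes wait=yes"
done > del.mac

cat del.mac
```

The macro is then executed from a dsmadmc session started with '-itemcommit', as described above.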

regards
Rainer


Minns, Farren - Chichester wrote:

 Hi All

 A few months ago I renamed our old copypool and created a new one with a new 
 device class to make use of our 3592 tape drives.

 So I now have lots of 3590 copypool volumes with data on them, and I want to 
 move them to the new 3592 media.

 What is the best way to do this?

 Many thanks in advance

 Farren Minns
 John Wiley & Sons Ltd
 
 This email (and any attachment) is confidential, may be legally privileged 
 and is intended solely for the
 use of the individual or entity to whom it is addressed. If you are not the 
 intended recipient please do
 not disclose, copy or take any action in reliance on it. If you have received 
 this message in error please
 tell us by reply and delete all copies on your system.

 Although this email has been scanned for viruses you should rely on your own 
 virus check as the sender
 accepts no liability for any damage arising out of any bug or virus 
 infection. Please note that email
 traffic data may be monitored and that emails may be viewed for security 
 reasons.

 John Wiley & Sons Limited is a private limited company registered in England 
 with registered number 641132.

 Registered office address: The Atrium, Southern Gate, Chichester, West 
 Sussex, PO19 8SQ.
 

--

Rainer Wolf  eMail:   [EMAIL PROTECTED]
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


Re: Old copypool data question

2008-11-20 Thread Rainer Wolf

Hi,
for the DRM issue, you can check the settings of
'Primary Storage Pools:' and 'Copy Storage Pools:'
in the output of 'query drmstatus' - if you change the pool names, you should
change them there as well, with 'SET DRMPRIMSTGPOOL ...' and
'SET DRMCOPYSTGPOOL ...' pointing to the new names.

If unsure, you can check by doing a simple 'query occupancy' on any node that
should have copy data -- the output should show the new copy stg as well as
the old one ... but the old copy pool is in the state of the last
'backup stg'. The new one should conform to the primary pool.

regards
Rainer


Minns, Farren - Chichester wrote:


OK, that's cool, I guess I'm in good shape then :-)

It just confuses me why TSM lists any of the old 3590 media in the DRM plan as 
there can't be anything on those volumes that are required?

Cheers

Farren



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Mark 
Stapleton
Sent: 20 November 2008 12:02
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Old copypool data question

As long as there is a copy of every file in (at least) one primary
storage pool, and a copy of the same file on any copy storage pool,
you're in good shape. It doesn't matter which pools, because your
database tracks where it is.

--
Mark Stapleton ([EMAIL PROTECTED])
CDW Berbee
System engineer
7145 Boone Avenue North, Suite 140
Brooklyn Park MN 55428-1511
763-592-5963
www.berbee.com




-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
Of Minns, Farren - Chichester
Sent: Thursday, November 20, 2008 6:55 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Old copypool data question

Thanks Rainer

But I'm wondering if I'm still in a position to do that.

Here's what we did. Rename our original tapepool and copypool to
tapepool_3590 and copypool_3590.

Then created new tapepool and copypool with device class of 3592.

I then went through the long process of moving data from the
tapepool_3590 volumes to tapepool volumes (now using 3592 media), one
tape at a time. This has all been done.

So now I have all the old on-site primary pool 3590 data on 3592


media,


and all offsite data being written to 3592 as well.

Will this have taken into account the old 3590 offsite copypool_3590
media?

From a DRM point of view those tapes are still required in the event


of


a disaster.

Thanks again

Farren



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
Of Rainer Wolf
Sent: 20 November 2008 11:42
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Old copypool data question


Hi,
first step:
backup stg primary-pool new-copy-pool
If any data in this first step may not be readable from the


primary-stg


you still can restore that data from the old copy-pool,
so be sure that the first step has finished without failures

and then second step:
  delete the old copy-storagepool volumes by creating and executing
a macro del.mac
 consisting of lines like
 'del vol xxx discarddata=yes wait=yes'
 ... one for every volume in that old copy pool.
 You can easily run that macro from a dsmadmc shell using the
'-itemcommit' option
 as 'macro del.mac'

regards
Rainer


Minns, Farren - Chichester wrote:



Hi All

A few months ago I renamed our old copypool and created a new one


with a new device class to make use of our 3592 tape drives.


So I now have lots of 3590 copypool volumes with data on them, and I


want to move them to the new 3592 media.


What is the best way to do this?

Many thanks in advance

Farren Minns
John Wiley & Sons Ltd




question on new dynamic encryption

2008-11-20 Thread Rainer Wolf

Hi,
I have two questions on the new dynamic encryption that comes
with the 5.5 client/server and the option 'encryptkey generate'.

Is there any way for the client to verify/check whether the data is encrypted?
The only way we have found - not to verify, but just to see that
something is happening with encryption - is using 'TESTFLAG INSTRUMENT:DETAIL'
and checking the value of 'Frequency used'/'Encryption' in the
dsminstr.report.xx file.
Nice would be something in the dsmsched.log like 'Objects compressed by:',
for example something like 'Objects encrypted:' ... but maybe there is
something?

The other question - just to be sure - is: is anything besides the
node password necessary to have encrypted data restored?
Example: doing an encrypted backup on a Solaris node - can I expect to
restore/decrypt that data without any problem on a Linux system, using the
nodename/password of the Solaris node (all nodes using the same
tsm client version)?

thanks
rainer



--

Rainer Wolf  eMail:   [EMAIL PROTECTED]
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


usage of collocatebyfilespec + resourceutilization ?

2008-09-24 Thread Rainer Wolf

Hi,
we have tsm5.5.1 / solaris + 3494/4*J1A+4*E05

I have a question about our general client option set defined for all clients.
We have around 600 clients; backups run
into file pools and then migrate onto tapes.

The only 2 options we currently distribute via the client option set are
compressalways no
dirmc directory

To make restores faster it seems generally okay to use
'resourceutilization 3' - enabling the client to read from 3
tapes at a time.
To have this feature just 'available' I would like to move the option
'resourceutilization 3'
into the default client option set.
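Distributing such options server-side might look like this (a hypothetical sketch; the option set name and node name are placeholders):

```text
/* A server-defined client option set and its options */
define cloptset STANDARD description="default options for all clients"
define clientopt STANDARD compressalways no
define clientopt STANDARD dirmc directory
define clientopt STANDARD resourceutilization 3

/* Attach the option set to a node */
update node SOMENODE cloptset=STANDARD
```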
The problem now is: we are using group collocation, and resourceutilization
also affects backup; it is not necessary to have the client backups running
in parallel and opening more than one file (20GB) in the primary file pool.
This may also split data across more tapes and may waste file volumes.

The Option
collocatebyfilespec yes
seems to do what I want regarding backup.

The question is:
if a client has 3 filespaces and the backup runs with
the options ...
'resourceutilization 3'
'collocatebyfilespec yes'
... will the backup run with 1 or 3 sessions?

Are you distributing 'resourceutilization' via the client option set?

thanks for any hints
Rainer




--

Rainer Wolf  eMail:   [EMAIL PROTECTED]
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


Re: Auditdb help

2008-09-19 Thread Rainer Wolf

Hi,
i think you have to examine around 3*61 Mio =183 mio ,
as far as I can remember there are 3 objects for every file/directory -object
 - the objects counted at the auditdb are not the just those objects  
we usually think of.


No one can say how long this will take - it may happen
that this process slows down, or it may get faster.

You may think - with your IBM support - about the following:
 - though it is not recommended to interrupt an audit process, it is
 neither impossible nor forbidden
 - it may be that the server cannot be started after the interrupt because it
 asks for a successful audit - in this case, just run a partial
 audit with
 dsmserv auditdb admin fix=yes detail=yes > auditadmin.log
 It should take only a very short time and afterwards it should be
 possible to start the server (if the interrupt didn't cause any other
 issues preventing restart of the TSM server)

good luck
Rainer


Quoting Ochs, Duane [EMAIL PROTECTED]:


The results from the select: 60,963,651

Expiration results for the same TSM server expiration: examined
124,304,930 objects

This TSM instance is approximately 1/2 the size of the DB being audited
and that has processed over 450 million entries.

I'm assuming the count of entries identified in the db audit covers more than
just the number of files or objects.

Any other ideas ?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Sims
Sent: Friday, September 19, 2008 10:33 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Auditdb help

Select Sum(NUM_FILES) as "Total Files" from OCCUPANCY will quickly tell
you the count in your storage pools.
Add some factor for Unix directories and empty files, which are solely
within the TSM database.

 Richard Sims



Re: TSM database size?

2008-08-29 Thread Rainer Wolf
. This
database may generally be queried via an emulated SQL-98 compliant
interface, or through undocumented SHOW, CREATE or DELETE commands.


Also, see IBM site:
http://www-01.ibm.com/support/docview.wss?rs=0&q1=Maximum+TSM+DB+Size&uid=swg21243509&loc=en_AU&cs=utf-8&cc=au&lang=en
The Administrator's Guide specifies that the maximum size of the
database is 530 GB. This is, however, a rounded figure. The actual maximum
size of the database is 543184 MB, which equates to about 530.5 GB.




Regards,



Ankur Patel
TSM Administrator


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Pahari, Dinesh P
Sent: Friday, 29 August 2008 11:37 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM database size?

Hi All,
I have a TSM server with an 80GB database. It is already
utilized above 80%. Could someone please let me know what the exact
database size recommended by IBM is? Any links with such information would
be good.

Kind Regards,

Dinesh Pahari


DISCLAIMER

Confidential Communication: This email and any attachments are intended for
the addressee(s)
only and are confidential. They may contain legally privileged or copyright
material. If you
are not the intended recipient, please contact the sender immediately by
reply email and
delete this email and any attachments. You must not read, copy, use,
distribute or disclose
the contents of this email without consent and Harvey Norman Holdings
Limited ACN 003 237 545
(and its related subsidiaries) (Harvey Norman) does not accept
responsibility for any
unauthorised use or reliance on the contents of this email.

Harvey Norman does not represent or warrant that the integrity of this
email has been maintained
or that it is free from errors, viruses, interceptions or interference. Any
views expressed by
the sender do not necessarily represent the views of Harvey Norman.

This notice should not be removed from this email.







--

Rainer Wolf  eMail:   [EMAIL PROTECTED]
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


don't get mails from adsm-l since 3-may-2008 ?

2008-06-27 Thread Rainer Wolf

Dear TSMers,
since 3-May-2008 I have received only 8 mails from this list. I already tried
to contact the list's mail admin, without response.
There are at least 2 other sites in southern Germany with independent mail
servers that see the same effect (since that date).
Can someone help, or does someone know who can?
I really would like to stay with this list and any help is
appreciated ... please send e-mail directly to me, for I might not get it.

thanks a lot ... and have a nice weekend
rainer



--

Rainer Wolf  eMail:   [EMAIL PROTECTED]
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


Re: Upgrade client from 5.4.0.0 to 5.4.1.5 causes full backup

2008-02-16 Thread Rainer Wolf

Hi Lance,
we have the same problem. It is caused because you
are using a zfs filesystem (I assume), which appears in 5.4.0 as UNKNOWN;
now you should see the filespace as ZFS.
This support comes with 5.4.1.x.
I don't believe that the sending of all data can be suppressed.
A much bigger problem for us here is that the backup performance
is strongly degraded for filespaces > 5 million files.
Even skipacl and other things didn't help with this performance
problem. The backup session takes around 50% more time to get through.

After your 'full backup': can you please take a look at the
total elapsed time before and after the update?
I would be interested whether you get about the same times running incrementals
on the 'tsm-zfs' filespaces as before.
It would be interesting if you have > 5 million files to scan.
On smaller filespaces we have no problems at all.

I had a PMR open for around 9 months and closed it now because
we want to start over with 5.5 server+clients and I am
tired of pursuing this terrible performance problem,
and IBM cannot reproduce it ... maybe they don't have those
fast Solaris servers (to run the tsm clients) where these things happen?
Currently we are using the 5.3 server and 5.4 clients.

regards
Rainer

kiz - Abt. Infrastruktur
Universitaet Ulm




Quoting Lance Nakata [EMAIL PROTECTED]:


On one host, I recently upgraded the TSM client from 5.4.0.0 to
5.4.1.5.  During the next backup run, it proceeded to backup all
files rather than just changes.  Has anyone else seen this behavior?
Is a simple client version switch enough to trigger a full backup?

I was able to repeat the behavior by reverting back to 5.4.0.0, at
which point it again backed up everything.  I then ran it again at
5.4.0.0 and nothing was backed up, as expected.  Then I upgraded
again to 5.4.1.5.  Everything was backed up.  Ran it again at
5.4.1.5.  Nothing was backed up, as expected.

TSM Server: Sun SPARC V880, Solaris 9, TSM EE Server 5.2.9.0
TSM Client: Sun X4500 Thumper, Solaris 10 x86, TSM client 5.4.0.0
and 5.4.1.5

The reason why this would be a huge problem for us is that our
X4500s (and other file servers) have many TBs of data on them.  I
don't want that going to tape for a second time after a TSM client
upgrade.

Any ideas?

Lance Nakata
Stanford Linear Accelerator Center





moving solaris tsm-server from sparc to x86 ?

2007-11-30 Thread Rainer Wolf

Hi All,

our tsm-server (tsm 5.3.6.0)  is currently running on a sparc machine
and we use a 3494-library with 3592/3592e drives.
Now we want to move the server to a new x86 machine also running solaris10.
I may simply try it with some test server ... but maybe someone has done
this already and can share the experience?

My idea would be somehow as described in the section 'moving the server
to another machine with the same operating system'.
I don't want to export all the tapes - just a 'restore db' would be
really okay. We are a bit unsure because of the byte swap on x86,
but the tsm server might handle this at the time of the restore db?
It may even be possible to plug the tsmdb raid from the sparc
into the x86 system?

Thanks a lot for any hints !
Rainer




--

Rainer Wolf  eMail:   [EMAIL PROTECTED]
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


slow 5.4.1 tsm-client ?

2007-06-22 Thread Rainer Wolf

Hi All,
we have tsm-server 5.3.4.0 on solaris with 3494 and 3592/3592e drives .
The mail-Client is a mail-server with TSMClient 5.3.4.6/5.4.1.0 -- solaris10_x86

Since the new tsm client (5.4.1.0) was installed on our mail server,
the incremental backup takes about 50% or more longer in 'Total elapsed time'
than before.
I have looked at all side effects, but the +50% ratio under the same
conditions is quite persistent.
So I have stayed with TSM 5.3.4.6 and done the downgrade again.

My question: anyone seen the same problem ? Is it possibly a known problem ?
Often such performance-issues are common to all platforms so maybe
someone experiences this on other platforms as well ?

Thanks for any hints in advance !
Rainer


some time-values are:
------------------------------------------------------------------------------
end-date            files-backed-up  fileSize  fileScan  deleted  fail  Total-elapsed
------------------------------------------------------------------------------
at night (with busy tsm-server):

TSM V5.3.4.6:
06/20/07  01:19:50  92534            13.55 GB  6222581   52496    14    03:15:29
06/21/07  01:43:40  88732            13.75 GB  6287429   68954    12    03:16:05

TSM V5.4.1.0:
06/22/07  03:24:59  57731             9.37 GB  6284128   31764    9     05:00:49

------------------------------------------------------------------------------
at daylight (with relaxed tsm-server):

TSM V5.3.4.6:
06/21/07  15:06:13  58002            10.75 GB  6272542   19755    49    01:07:14

TSM V5.4.1.0:
06/22/07  14:54:49  ... stopped after 1 hour and a fileScan of 700k

------------------------------------------------------------------------------


--

Rainer Wolf  eMail:   [EMAIL PROTECTED]
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


tsm-database Volumes on zfs

2007-01-26 Thread Rainer Wolf

Dear TSMers,
we have tsm server 5.3.4.0 on solaris10 and already using zfs
for all backup-diskPools and filepools on disk.
We currently use the tsm-db-volumes ( define dbvol ) on UFS .

Because we just got a new raid for the tsm-db we now want to use
the new one with 'zfs' and move the tsmdb-volumes onto that.

The question now is: are there any special things/settings
to think about when using 'zfs' for the tsmdb-volumes instead of 'ufs' ?
... any caveats ?

best regards
 and thanks in advance !
Rainer



--

Rainer Wolf  eMail:   [EMAIL PROTECTED]
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


Re: client session stopps with 'no space available in storage... and all successor pools'

2006-09-11 Thread Rainer Wolf

Hi all,

since changing some things as pointed out (thanks again for all the hints),
the error has now luckily disappeared :-) . Although it may happen again,
I just want to give short feedback on what we've changed:
- increased the mountlimit from the default 20 -> 200 on the devclass of the
file pool
- increased the random-access backup pool
- decreased the migration thresholds for both random access ( 90/50 -> 60/20 )
and the sequential file pool ( 90/70 -> 90/50 )
- moved from 'random migration by thresholds' to 'scheduled migration by time'
with the 'duration=xx' option, running through admin schedules
at times without much activity.
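The last change - scheduled migration with a duration limit - can be sketched as an administrative schedule (a hypothetical example; the schedule name, pool name, start time, and duration are placeholders):

```text
/* Drive migration down at a quiet time, for at most 120 minutes,
   instead of relying only on thresholds during the backup window. */
define schedule migrate_backuppool2 type=administrative cmd="migrate stgpool backuppool2 lowmig=0 duration=120" active=yes starttime=06:00 period=1 perunits=days
```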

So until now everything is fine and the overall throughput rate is much
better now; these are things we wanted to do anyway, so as not to weigh
down the nightly backups with migrations.

the stages now appear something like ...
Storage      Device       Estimated   Pct    Pct    High   Low    Next
Pool Name    Class Name   Capacity    Util   Migr   Mig    Mig    Storage
                                                    Pct    Pct    Pool
-----------  -----------  ----------  -----  -----  -----  -----  ---------
BACKUPPOOL2  DISK         200 G       19.4   19.4   60     20     FILEPOOL2
FILEPOOL2    FILE2        502 G       48.0   53.2   90     50     TAPEPOOL2
TAPEPOOL2    3592         24,620 G    8.0    25.0   100    70


Cheers
Rainer


client session stopps with 'no space available in storage... and all successor pools'

2006-08-30 Thread Rainer Wolf
   Reclamation Threshold: 100
   Offsite Reclamation Limit:
 Maximum Scratch Volumes Allowed: 50
  Number of Scratch Volumes Used: 32
   Delay Period for Volume Reuse: 8 Day(s)
  Migration in Progress?: No
Amount Migrated (MB): 0.00
Elapsed Migration Time (seconds): 0
Reclamation in Progress?: No
  Last Update by (administrator): xx
   Last Update Date/Time: 08/29/06   16:33:34
Storage Pool Data Format: Native
Copy Storage Pool(s):
 Continue Copy on Error?:
CRC Data: No
Reclamation Type: Threshold




--

Rainer Wolf  eMail:   [EMAIL PROTECTED]
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


Re: client session stopps with 'no space available in storage... and all successor pools'

2006-08-30 Thread Rainer Wolf

Arnaud,
no, that's not hurting us, because
we have no cache enabled on the disk storage pools ... I checked the other
things in the tech note and still nothing applies to this problem.
In particular, scratch tapes are available and the number of available tapes
in the tape pool is high enough.
I think I'll look through the 'fixed things' of the 5.3 client
... it seems to only happen on the 5.2 clients.

Cheers
Rainer


PAC Brion Arnaud wrote:


Rainer,

Just found this technote :
http://www-1.ibm.com/support/docview.wss?uid=swg21079391 which refers to
ANS1311E and ANR0522W  problems, and states this possible reason :

- Cache is enabled on the disk storage pool that the TSM Client backup
data is being sent to, and cached data cannot be deleted quickly enough
to allow the backup data to be written to the disk storage pool. In this
case, update the storage pool so that cache is not enabled.

Couldn't this be hurting you ?

Cheers


Arnaud


**
Panalpina Management Ltd., Basle, Switzerland,
CIT Department Viadukstrasse 42, P.O. Box 4002 Basel/CH
Phone:  +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
Direct: +41 (61) 226 19 78
e-mail: [EMAIL PROTECTED]

**

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Rainer Wolf
Sent: Wednesday, 30 August, 2006 09:31
To: ADSM-L@VM.MARIST.EDU
Subject: client session stopps with 'no space available in storage...
and all successor pools'

Dear TSMers,

this happens on tsm server 5.3.3.2 / solaris ,3494

and Clients: linux86 5.2.3.1 , linux86 5.2.3.0 , solaris 5.2.5.0 ,
solaris 5.2.2.6 , winnt 5.2.3.11

we have a strange problem with occasionally stopped client sessions and
the message 'no space available in storage pool BACKUPPOOL and all
successor pools'.
When it happens, it happens with clients running transfers that are big in
time and data - mostly on initial backups.
The data flow is set up as
random access disk pool --> sequential file pool --> sequential tape
pool

It may happen that the first 2 stages are going to be full but the
tapepool always has free and usable scratch volumes available.

The question is: is this a bug in the server, or do I have to change
something in the setup of the pools?
The random-access pools normally migrate down to
about 50% -- is it better to bring this down to 0% usage as a daily
task?

I thought that sessions that don't have enough space in the
backup/file pools would write directly to tape if needed.
But when this stopping happens, it seems to happen only on long-running,
large sessions that start writing to the backup pool and then
switch to the file pool ... it seems that a second
switch, onto the tape pool, is not possible?

I just checked the client versions of all nodes where this happens and
all of them are 5.2.x ... so is it just a client problem with the old
5.2.x clients?

Thanks a lot in advance for any hints !
Rainer


tsm: TSM1> q actlog begint=-20 search=94090


Date/TimeMessage

--
08/29/06   08:13:01  ANR0406I Session 94090 started for node
ULLI187.CHEMIE
   (Linux86) (Tcp/Ip
134.60.42.187(1039)).(SESSION: 94090)
08/29/06   20:17:08  ANR8340I FILE volume
/tsmdata3/tsm1/file8/6B4D.BFS
   mounted.(SESSION: 94090)
08/29/06   20:17:08  ANR0511I Session 94090 opened output volume
   /tsmdata3/tsm1/file8/6B4D.BFS.(SESSION:
94090)
08/29/06   20:17:24  ANR8341I End-of-volume reached for FILE volume
   /tsmdata3/tsm1/file8/6B4D.BFS.(SESSION:
94090)
08/29/06   20:17:24  ANR0514I Session 94090 closed volume
   /tsmdata3/tsm1/file8/6B4D.BFS.(SESSION:
94090)
08/29/06   20:17:24  ANR0522W Transaction failed for session 94090
for node
   ULLI187.CHEMIE (Linux86) - no space available
in storage
   pool BACKUPPOOL8 and all successor
pools.(SESSION: 94090)
08/29/06   20:17:53  ANR0403I Session 94090 ended for node
ULLI187.CHEMIE
   (Linux86).(SESSION: 94090)



tsm: TSM1> q actlog search=94086 begind=-2

Date/TimeMessage

--
08/29/06   08:10:22  ANR0406I Session 94086 started for node
ULLI187.CHEMIE
   (Linux86) (Tcp/Ip
134.60.42.187(1038)).(SESSION: 94086)
08/29/06   20:17:54  ANE4952I (Session: 94086, Node: ULLI187.CHEMIE)
Total
   number of objects inspected:
1,458,833(SESSION: 94086)
08/29/06   20:17:54  ANE4954I (Session: 94086, Node: ULLI187.CHEMIE)
Total
   number of objects backed up:
1,457,166(SESSION: 94086)
08/29

Re: client session stopps with 'no space available in storage... and all successor pools'

2006-08-30 Thread Rainer Wolf

David,
the second pool is of type FILE, and the mount limit is set to 20
in the FILE device class.
The third and last pool is a tape pool, with a maximum of 4 drives available.

From the help of...

UPDATE DEVCLASS -- FILE
...
MOUNTLimit
 Specifies the maximum number of files that can be simultaneously open
 for input/output. This parameter is optional. You can specify a number
 from 1 to 4096.

 If you plan to use the simultaneous write function, ensure that
 sufficient drives are available for the write operation. If the number
 of drives needed for a simultaneous write operation is greater than the
 value of the MOUNTLIMIT parameter for a device class, the transaction
 will fail. For details about the simultaneous write function, refer to
 the Administrator's Guide.
...
... I don't understand what the 'drives' mentioned here refer to.
So I'm confused now: should I increase the mount limit to e.g. 40?
Or rather decrease it? ... to the maximum number of drives available for the
tape destination that comes after the file pool?
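For a FILE device class, the 'drives' in that help text are just the simultaneously open file volumes, so raising the limit is usually the safer direction; a hedged sketch (the device-class name FILECLASS is a placeholder):

```
query devclass fileclass format=detailed
update devclass fileclass mountlimit=40
```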

Cheers
Rainer

David le Blanc schrieb:

I believe this can happen at 5.3 clients

Are any of your pools (in the chain of pools the client writes to) of
type FILE ?

Try increasing the number of mount points for the device class for that
pool.





-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
On Behalf Of Rainer Wolf
Sent: Wednesday, 30 August 2006 7:34 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] client session stopps with 'no space
available in storage... and all successor pools'

Arnaud,
no, that's not hurting us, because
we have no cache enabled on the disk storage pools ...
I checked the other items
of the technote and still nothing applies to this problem.
In particular, there are scratch tapes available and the number
of available tapes
in the tapepool is high enough.
I think I'll look through the 'fixed things' of the 5.3 client
... it seems to only happen with the 5.2 clients.

Cheers
Rainer


PAC Brion Arnaud schrieb:

Rainer,

Just found this technote :
http://www-1.ibm.com/support/docview.wss?uid=swg21079391
which refers to ANS1311E and ANR0522W problems, and states this possible
reason :

- Cache is enabled on the disk storage pool that the TSM client backup
  data is being sent to, and cached data cannot be deleted quickly enough
  to allow the backup data to be written to the disk storage pool. In this
  case, update the storage pool so that cache is not enabled.

Couldn't this be hurting you ?

Cheers

Arnaud




************************************************************
Panalpina Management Ltd., Basle, Switzerland,
CIT Department Viadukstrasse 42, P.O. Box 4002 Basel/CH
Phone:  +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
Direct: +41 (61) 226 19 78
e-mail: [EMAIL PROTECTED]
************************************************************

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Rainer Wolf
Sent: Wednesday, 30 August, 2006 09:31
To: ADSM-L@VM.MARIST.EDU
Subject: client session stopps with 'no space available in storage... and all successor pools'

Dear TSMers,

this happens on tsm server 5.3.3.2 / solaris, 3494
and clients: linux86 5.2.3.1, linux86 5.2.3.0, solaris 5.2.5.0,
solaris 5.2.2.6, winnt 5.2.3.11

we have a strange problem with occasionally stopped client sessions with
the message 'no space available in storage pool BACKUPPOOL and all
successor pools'.
If this happens, it happens with clients running bigger transfers in
time and data - mostly on initial backups.
The data flow is set up as
random access disk pool -- sequential file pool -- sequential tape pool

It may happen that the first 2 stages fill up even though the
tapepool always has free and usable scratch volumes available.

The question is: is this a bug in the server, or do I have to change
something in the setup of the pools?
The random-access pools normally migrate down to about 50%
utilization -- is it better to bring this down to 0% usage as a daily
task?

I thought that sessions that don't find enough space in the
backup/file pools would write directly to tape if needed.
But when this stall happens, it seems to affect only long-running,
large sessions that start writing to the backup pool and then
switch to the file pool ... apparently no second
switch to the tape pool is possible?

I just checked the client versions of all nodes where this happens, and
all of them run 5.2.X.X ... so is this just a problem with the old
5.2.X.X clients?

Thanks a lot in advance for any hints !
Rainer


tsm: TSM1> q actlog begint=-20 search=94090


Date/Time            Message
-------------------- ----------------------------------------------------------
08/29/06 08:13:01    ANR0406I Session 94090 started for node ULLI187.CHEMIE

Re: client session stopps with 'no space available in storage... and all successor pools'

2006-08-30 Thread Rainer Wolf

Hi All,
I just found two more things:
the client got the following 2 messages in its dsmerror.log file:

08/29/06   20:17:53 ANS5092S Server out of data storage space.
08/29/06   20:17:54 ANS5092S Server out of data storage space

Another thing is that shortly before the client session stopped, an
automatic migration process started on the server that effectively
wrote the data out to the tapepool and ran concurrently with the
client session.
Could the mount limit have been reached then?

The other thing I am thinking of is the following:
the reusedelay value on the filepool is '1' ... so somehow I feel it
'may not have happened' if the reusedelay had been '0'.

I have put the log of that migration process in here - maybe someone has
another idea?

So now I may try to increase the mount limit - another option is to change the
reclamation threshold from currently 90/50 to about 60/20 - that may decrease
the chance of such stopped sessions?
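Both candidate changes are single UPDATE STGPOOL parameters; a hedged sketch (the pool name is a placeholder; the 90/50 -> 60/20 pair reads like the file pool's migration thresholds, while pure reclamation is the single RECLAIM percentage):

```
update stgpool filepool8 reusedelay=0
update stgpool filepool8 highmig=60 lowmig=20
/* if the reclamation threshold itself is meant: */
update stgpool filepool8 reclaim=60
```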


Cheers
Rainer

tsm: TSM1> q actlog begint=-29 search=PROCESS: 789

Date/Time            Message
-------------------- ----------------------------------------------------------
08/29/06 19:43:30    ANR0984I Process 789 for MIGRATION started in the
                       BACKGROUND at 19:43:30. (PROCESS: 789)
08/29/06 19:43:30    ANR1000I Migration process 789 started for storage pool
                       BACKUPPOOL8 automatically, highMig=90, lowMig=50,
                       duration=No. (PROCESS: 789)
08/29/06 19:43:31    ANR8340I FILE volume /tsmdata3/tsm1/file8/6B4D.BFS mounted. (PROCESS: 789)
08/29/06 19:43:31    ANR0513I Process 789 opened output volume /tsmdata3/tsm1/file8/6B4D.BFS. (PROCESS: 789)
08/29/06 19:43:43    ANR8341I End-of-volume reached for FILE volume /tsmdata3/tsm1/file8/6B4D.BFS. (PROCESS: 789)
08/29/06 19:43:43    ANR0515I Process 789 closed volume /tsmdata3/tsm1/file8/6B4D.BFS. (PROCESS: 789)
08/29/06 19:44:23    ANR8337I 3592 volume IS0146 mounted in drive 3592_2 (/dev/rmt/9stcbn). (PROCESS: 789)
08/29/06 19:44:23    ANR0513I Process 789 opened output volume IS0146. (PROCESS: 789)
08/29/06 19:49:56    ANR0515I Process 789 closed volume IS0146. (PROCESS: 789)
08/29/06 19:49:56    ANR8340I FILE volume /tsmdata3/tsm1/file8/6B4D.BFS mounted. (PROCESS: 789)
08/29/06 19:49:56    ANR0513I Process 789 opened output volume /tsmdata3/tsm1/file8/6B4D.BFS. (PROCESS: 789)
08/29/06 19:50:03    ANR8341I End-of-volume reached for FILE volume /tsmdata3/tsm1/file8/6B4D.BFS. (PROCESS: 789)
08/29/06 19:50:03    ANR0515I Process 789 closed volume /tsmdata3/tsm1/file8/6B4D.BFS. (PROCESS: 789)
08/29/06 19:50:04    ANR0513I Process 789 opened output volume IS0146. (PROCESS: 789)
08/29/06 19:55:27    ANR0515I Process 789 closed volume IS0146. (PROCESS: 789)
08/29/06 19:55:27    ANR8340I FILE volume /tsmdata3/tsm1/file8/6B4D.BFS mounted. (PROCESS: 789)
08/29/06 19:55:27    ANR0513I Process 789 opened output volume /tsmdata3/tsm1/file8/6B4D.BFS. (PROCESS: 789)
08/29/06 19:55:35    ANR8341I End-of-volume reached for FILE volume /tsmdata3/tsm1/file8/6B4D.BFS. (PROCESS: 789)
08/29/06 19:55:35    ANR0515I Process 789 closed volume /tsmdata3/tsm1/file8/6B4D.BFS. (PROCESS: 789)
08/29/06 19:55:36    ANR0513I Process 789 opened output volume IS0146. (PROCESS: 789)
08/29/06 19:59:02    ANR0515I Process 789 closed volume IS0146. (PROCESS: 789)
08/29/06 19:59:02    ANR8340I FILE volume /tsmdata3/tsm1/file8/6B4D.BFS mounted. (PROCESS: 789)
08/29/06 19:59:02    ANR0513I Process 789 opened output volume /tsmdata3/tsm1/file8/6B4D.BFS. (PROCESS: 789)
08/29/06 19:59:08    ANR8341I End-of-volume reached for FILE volume /tsmdata3/tsm1/file8/6B4D.BFS. (PROCESS: 789)
08/29/06 19:59:08    ANR0515I Process 789 closed volume /tsmdata3/tsm1/file8/6B4D.BFS. (PROCESS: 789)
08/29/06 19:59:08    ANR0513I Process 789 opened output volume IS0146. (PROCESS: 789)
08/29/06 20:03:26    ANR0515I Process 789 closed volume IS0146. (PROCESS: 789)
08/29/06 20:03:26    ANR8340I FILE volume /tsmdata3/tsm1/file8/6B4D.BFS mounted. (PROCESS: 789)
08/29/06 20:03:26    ANR0513I Process 789 opened output volume /tsmdata3/tsm1/file8/6B4D.BFS. (PROCESS: 789)
08/29/06 20:03:32    ANR8341I End-of-volume reached for FILE volume

Re: Using tsm-encryption and want to change the hostname at the Client

2006-08-01 Thread Rainer Wolf

Alexei,
thanks a lot for your detailed explanation! It's clearer to me now :-)
... just two more questions:
What about the Windows clients - when changing the Windows system name,
do I also have to manually remove the equivalent 'TSM.PWD' entry
in the registry or elsewhere?
If so: is that something to be done with the Windows registry editor,
or is there a TSM Windows client function that can do the
renaming/refresh of the locally stored TSM passwords for me, so I can
reenter the (same) encryption key password once again?

About the 'using some garbage encryption key': isn't that something
where the TSM client really should say 'NO',
stop the backup and generate an error message?
... preventing the user from ending up with something unrecoverable
- is there an existing APAR?

Best regards
Rainer


Alexei Kojenov schrieb:


Rainer,

Your data is always encrypted with the key generated from the password that
you enter, regardless of the hostname. The hostname is only used to store
the password locally. For example,

1) Let's say the hostname is 'mercury'
2) You run your first backup and are prompted for encryption key password.
Let's say you enter 'secret'
3) The string 'secret' is encrypted with 'mercury' and is stored in TSM.PWD
4) The data are encrypted with 'secret'.
5) On the next backup, the stored password is retrieved from TSM.PWD and
decrypted with 'mercury', and 'secret' is used for data backup.

6) Let's say you change the hostname to 'venus' and delete/rename existing
TSM.PWD
7) TSM prompts you for encryption key password and you enter 'secret'
again.
8) 'secret' is encrypted with 'venus' and is stored in TSM.PWD (note,
TSM.PWD will binary differ from the one from step 3, because the key, which
is dependent on hostname, is different)
9) The data are encrypted with 'secret' (the same as in step 4, regardless
of hostname).
10) On the next backup, stored password is decrypted with 'venus', and the
same password 'secret' is used for backup.

So you shouldn't worry about validity of your old backups as long as you
use the same encryption password and you deleted/renamed TSM.PWD when
changing the hostname.

The problems come when someone changes the hostname but does not delete
TSM.PWD. In the example above, a backup following the hostname change will
try to decrypt stored password with 'venus' and will get an incorrect
result (because 'secret' was originally encrypted with 'mercury'!), so the
new backups will be using some garbage encryption key, and it would be
really hard to restore the new data correctly if TSM.PWD is lost or if the
restore happens on a different machine.
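Alexei's steps can be illustrated with a toy model. This is not TSM's real cipher - a simple XOR sketch with hypothetical helper names (`derive`, `xor`), only to show why a stale TSM.PWD plus a new hostname yields a garbage key:

```python
import hashlib

def derive(name: str) -> bytes:
    """Toy key derivation from a hostname (stand-in for TSM's scheme)."""
    return hashlib.sha256(name.encode()).digest()

def xor(data: bytes, key: bytes) -> bytes:
    """Toy reversible 'encryption': XOR with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Steps 2-3: the entered password 'secret' is stored wrapped
# under a key derived from the hostname 'mercury' (the TSM.PWD file).
stored = xor(b"secret", derive("mercury"))

# Step 5: same hostname -> unwrapping recovers the real password.
assert xor(stored, derive("mercury")) == b"secret"

# Hostname changed to 'venus' but TSM.PWD was NOT deleted: unwrapping
# with the wrong hostname key yields garbage, and new backups would
# silently encrypt data under that garbage key.
garbage = xor(stored, derive("venus"))
assert garbage != b"secret"
```

Deleting/renaming TSM.PWD corresponds to throwing `stored` away and re-wrapping 'secret' under `derive("venus")`, after which unwrapping works again.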

Alexei


ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 07/27/2006
06:31:17 AM:



Hi Alexei,

thanks for your hint - now I come with a new question concerning the
'restore':
Because nothing changes other than the 'hostname' of that linux system ...
... what about the data that has been backed up prior to the time
I rename the hostname and reenter the 'encryption key password'?

Because I stay with 'encryptkey save', what happens when (some time)
I do a full restore of the '/home/' filespace?

This filespace '/home/' has data backed up that is encrypted
under both the old and the new 'hostname' keys
(but always the same TSM nodename)
... will I be able to restore (and decrypt) all of it?

... I fear running into problems - or do I have to start backup again
from 'zero' - for example
by renaming the filespace on the server
at the time of changing the hostname?

Thanks again for any hints!
-- this is something really confusing to me :-|

Rainer



Alexei Kojenov schrieb:



Rainer,

You need to make TSM client prompt you for encryption key password on the
next backup after you changed the hostname. The only way to do this is to
rename/remove the existing TSM.PWD file (this is the file where TSM client
stores its passwords). You should rename this file rather than delete it,
in case you have problems and want to revert.

Alexei

---

Dear TSMers,

we have tsmserver 5.3.3.2 /solaris and tsm-Client 5.3.4.0 /linux.

On the client we use tsm-encryption:
The 'nodename' option is set in the dsm.sys, the 'encryptkey save'
option is set, and 'encryptiontype AES128' is also set.
The inclexc file contains a line like 'include.encrypt *'
So far everything runs fine :-)

Problem: Next week we have to change the 'hostname' of that linux-server.
The question now is: what steps - if any - are to be done at the
tsm-client? ... and even at the tsm-server?
The (tsm) nodename won't be changed.
Do I need to manually give the TSM client the encryption key password
once again to let the encryption key be generated?
Or is there nothing to be done at the client?

I have looked through the lists and docs and haven't found any
'procedures' for that scenario - just pointers to dependencies on the
system's hostname.

Thanks in advance for any 

Re: 3592 tape read performance

2006-07-31 Thread Rainer Wolf

Hi Thomas,
3592-J1A, tsmserver 5.3.3.2 on Solaris 10:
the same thing happened to us - we also removed 2 or 3 tapes
from the library.
It was really annoying, because a tape could end up reading
constantly for 24 hours, but at only a few kBytes per second.

In our case the cause seems to have been both the old firmware in the
tape drives and the tapes themselves (here: IBM labeled - not the Fuji ones).

The tape support technician who checked the drives described
two things that can happen with the tapes:
one is that the built-in brakes of a cartridge may, in rare cases,
malfunction, leading to heavy repositioning work in the drive.
The other is that the tape material may be slightly stuck - that
can happen with brand-new tapes and might disappear once the
tape has been used over its whole length.

The firmware update here was a little complicated, because
at first the drives seemed to be gone.
After a reset of the drives, the server system also had to be restarted.
You should also check for the latest tape driver (IBMTape).

Because everything now seems to be fine, we may test the
problem tapes again to see whether they work better now.


With the latest tape driver and Solaris version, it is very convenient
for me that the unix 'iostat' utility now also shows
the current statistics of the tape drives ... not only of the disks, as
before our update.
I recently spotted one drive running a migration process constantly
at nearly 100% busy, with a write speed of roughly 5 MB/s over time.
After moving that process to another drive (same data, same destination
tape volume), it ran normally and was 10 times faster. No errors at all -
I just called the service.
... so you may also take a look at iostat (e.g. 'iostat -x 5') to see
whether the drives show up there for you too.

for example that is really no 'problem-output' :-) :
  extended device statistics
device   r/sw/s   kr/s   kw/s wait actv  svc_t  %w  %b
IBMtape6 0.0  348.30.0 89166.2  0.0  0.61.7   0  58


Greetings
Rainer





Thomas Denier schrieb:

We are seeing increasingly frequent problems reading data from 3592
tapes. TSM sometimes spends as much as a couple of hours reading a
single file with a size of a few hundred megabytes. In some cases,
TSM reports a hardware or media error at the end of that time. In
other cases TSM eventually reads the file successfully. In the
latter case there are, as far as we can tell, no error indications
at all: no TSM messages, nothing logged by the OS, and no indicators
on the front panel of the tape drive. In some cases the same tape
volume suffers this type of problem repeatedly. The problems seem
to spread roughly evenly over our whole population of 3592 drives.

We have just removed one 3592 volume from service because of
recurrent read problems, and are about to remove a second volume
from service. We only have about 120 3592 volumes, and losing two
of them within a week is disturbing, to put it mildly. The
possibility that the volumes with non-recurring (so far) problems
will eventually need replacement is even more disturbing.

Our TSM server is at 5.2.6.0, running under mainframe Linux. The
3592 tape drives are all the J1A model.
Does anyone have any suggestions for getting to the bottom of this?





--

Rainer Wolf  eMail:   [EMAIL PROTECTED]
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


Re: Using tsm-encryption and want to change the hostname at the Client

2006-07-27 Thread Rainer Wolf

Hi Alexei,

thanks for your hint - now I come with a new question concerning the
'restore':
Because nothing changes other than the 'hostname' of that linux system ...
... what about the data that has been backed up prior to the time
I rename the hostname and reenter the 'encryption key password'?

Because I stay with 'encryptkey save', what happens when (some time)
I do a full restore of the '/home/' filespace?

This filespace '/home/' has data backed up that is encrypted
under both the old and the new 'hostname' keys
(but always the same TSM nodename)
... will I be able to restore (and decrypt) all of it?

... I fear running into problems - or do I have to start backup again
from 'zero' - for example
by renaming the filespace on the server
at the time of changing the hostname?

Thanks again for any hints!
-- this is something really confusing to me :-|

Rainer



Alexei Kojenov schrieb:


Rainer,

You need to make TSM client prompt you for encryption key password on the
next backup after you changed the hostname. The only way to do this is to
rename/remove the existing TSM.PWD file (this is the file where TSM client
stores its passwords). You should rename this file rather than delete it,
in case you have problems and want to revert.
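The rename-don't-delete step can be scripted; a minimal sketch in a scratch directory (the real location of TSM.PWD varies by platform and installation - /etc/adsm is common on Unix clients, but treat the path as an assumption):

```python
import os
import tempfile

# Demonstrate the rename-don't-delete step in a scratch directory.
# The real TSM.PWD path is an assumption (often /etc/adsm on Unix
# clients); adjust to your installation before doing this for real.
demo = tempfile.mkdtemp()
open(os.path.join(demo, "TSM.PWD"), "w").close()     # stand-in password file
os.rename(os.path.join(demo, "TSM.PWD"),
          os.path.join(demo, "TSM.PWD.old-hostname"))  # revertible rename
print(sorted(os.listdir(demo)))   # ['TSM.PWD.old-hostname']
```

On the next backup after the rename, the client no longer finds TSM.PWD and prompts for the encryption key password again; restoring the saved copy reverts the change.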

Alexei

---

Dear TSMers,

we have tsmserver 5.3.3.2 /solaris and tsm-Client 5.3.4.0 /linux.

On the Client we use tsm-encryption :
The 'nodename' Option is set in the dsm.sys and also the
'encryptkey save' OPtion is set  and  'encryptiontype AES128' is also set.
The inclexc-File contains a line like 'include.encrypt *'
So far everything runs fine :-)

Problem: Next week we have to change the 'hostname' of that linux-server.
The question now is: what steps - if any - are to be done at the
tsm-client? ... and even at the tsm-server?
The (tsm) nodename won't be changed.
Do I need to manually give the TSM client the encryption key password
once again to let the encryption key be generated?
Or is there nothing to be done at the client?

I have looked through the lists and docs and haven't found any
'procedures' for that scenario - just pointers to dependencies on the
system's hostname.

Thanks in advance for any hints, recipes or links ... !
Rainer








Using tsm-encryption and want to change the hostname at the Client

2006-07-24 Thread Rainer Wolf

Dear TSMers,

we have tsmserver 5.3.3.2 /solaris and tsm-Client 5.3.4.0 /linux.

On the Client we use tsm-encryption :
The 'nodename' Option is set in the dsm.sys and also the
'encryptkey save' OPtion is set  and  'encryptiontype AES128' is also set.
The inclexc-File contains a line like 'include.encrypt *'
So far everything runs fine :-)

Problem: Next week we have to change the 'hostname' of that linux-server.
The question now is: what steps - if any - are to be done at the
tsm-client? ... and even at the tsm-server?
The (tsm) nodename won't be changed.
Do I need to manually give the TSM client the encryption key password
once again to let the encryption key be generated?
Or is there nothing to be done at the client?

I have looked through the lists and docs and haven't found any
'procedures' for that scenario - just pointers to dependencies on the
system's hostname.

Thanks in advance for any hints , recipe or links ... !
Rainer




Question on inactive files / can all inactive files be restored ?

2006-07-05 Thread Rainer Wolf

Hi,
we have tsmserver 5.3.3.2 / Client 5.3.3.0 - on solaris

The Copy-Group looks like:

Policy    Policy    Mgmt       Copy      Versions  Versions  Retain    Retain
Domain    Set Name  Class      Group     Data      Data      Extra     Only
Name                Name       Name      Exists    Deleted   Versions  Version
--------  --------  ---------  --------  --------  --------  --------  -------
AAA000    ACTIVE    DIRECTORY  STANDARD  No Limit  5         60        60
AAA000    ACTIVE    STANDARD   STANDARD  5         5         60        60
AAA000    STANDARD  DIRECTORY  STANDARD  No Limit  5         60        60
AAA000    STANDARD  STANDARD   STANDARD  5         5         60        60

I have tested the 'verexists' option and can see that
(without running expiration on the server)
I can do, for example, 6 incremental backups of the same constantly
changing file.

Now the query on inactive files only shows 5 versions of that file,
even though 6 versions are really stored on the server.
I am not sure, but I think in former TSM versions I could
query/restore all 6 versions from the client.
Is that behavior now normal?


Consequently, for the retonly option (here 60 days):
if no expiration runs on the server, am I able to restore
an inactive/deleted file that expired more than 60 days ago?

I always thought all files that haven't left TSM through
expiration could be restored at the client - but that does not seem to be so?

Thanks in advance
for any hints
Rainer





Re: group collocation on virtual volumes/ remote-server volumes ?

2006-05-23 Thread Rainer Wolf

Allen - thanks a lot - I expected that, but wanted to be sure.
Yes, I agree, it might be too much pain there :-)


Allen S. Rout schrieb:

On Mon, 22 May 2006 16:14:28 +0200, Rainer Wolf [EMAIL PROTECTED] said:






the question is now:
is it possible to make use of the group-collocation-feature when
having disk-cache as Primary StoragePools and the next-storagepool
is on a remote-tsm-server's virtual volumes ?


If you want to get this kind of distinction working on the other side
of a virtual-volume link, you're going to have to split things up by
node on the virtual-volume target side.  Then, you'll have different
SERVER definitions on the source server, and different devclasses, and
different stgpools.

Probably more pain in the patoot than you desire.



recommendations for remote backup ?

2006-05-23 Thread Rainer Wolf

Hi TSMers,

I want to ask a more kind of general question for any recommendations

Currently we have one local TSM server and a library with quite a lot of
capacity.
We want to back up another site some hundreds of km away, with good Gbit
network connectivity to our site. The remote site has about 200 clients -
a mixture of desktops, file servers and so on - all 'normal' TSM clients.

Because an additional tape library is not desired/needed there,
there seem to be two principal possibilities:
A) Placing a new tsm-server near the clients at the remote site
   (acting as source server), having no library but a big disk cache
   that can hold the backup data of the last 4 weeks.
   The next storage pools would be at our local site, also on disk,
   finally migrating to tape.
   For this, the setup of an additional logical tsm-server
   acting as target server at the library site is assumed.

B) Placing the new tsm-server near the library, on the same
   machine - having direct access to the tapes.

So the questions are:
Are both possibilities reasonable?
Is one of them strongly preferred? - any caveats?
From the network view: is one solution much easier to handle?
 My thoughts on A): Running long distances, it seems easier to have the
 tsm-server near the clients, because only this server has to be tuned
 to send/receive data to/from the library-link node ... if that is needed.
 On the other side: if the server is near the clients, this will
 lead to both short-distance client connections and long-distance
 target-server connections. So here I am concerned about
 setting the TCP window size of that tsm-server, because it should be
 small for the clients and at the same time large for the target server
 ... because of the so-called long fat pipes
- is this a problem?
 On B): do all clients have to be tuned for the window size?

From the TSM view, B) seems easier - for example for the use of group
collocation.
From the network-throughput view, A) seems better because
data transfers can be bundled and the transfer can be done when
it is a good time to do so.

The last question is:
when using A),
is it a good idea, or perhaps even 'a must', to make use of 'Cached Copy'?
... or, on the contrary, is 'Cached Copy' something to avoid - especially
when using virtual volumes?


best regards
an thanks in advance for any hints !
Rainer


group collocation on virtual volumes/ remote-server volumes ?

2006-05-22 Thread Rainer Wolf

Hi TSMers,

I have looked through the archive and docu but did not find :
Scenario:
nodes at a local tsm-server with only disk-cache (limited space) and
migrating data via server-server to a remote-tsm-server
having the tapes (with much more space)

the question is now:
is it possible to make use of the group-collocation feature when
the primary storage pools are disk cache and the next storage pool
is on a remote tsm-server's virtual volumes?
... or does it just have no effect to set this server-device-class storage
pool with those virtual volumes to 'collocation=group'?

best regards and thanks in advance
Rainer




Re: Seeking thoughts on Cyrus email backup/restore

2006-05-19 Thread Rainer Wolf

Hi Richard ,
tsm-server: V5.3.3.0 on solaris v440/16GB ram, 3494lib, 3592drives
tsm-client: V5.3.3.0 on solaris v440/16GB ram

Our cyrus mail server is set up on Solaris, and the whole mail data
is kept permanently synced, as a copy, over a private FC link to another
building - it's a poor man's solution just using
the available system services - but it runs okay.
You may also consider cloning the mail-server machine too, so in case
of a catastrophic scenario the mail service can swap completely to
another location. Our mail data itself is always in a synchronized
state - others introduce delays to have a kind of restore window
from that copy.
Because of this HA config, we hope never to have to restore the full
mail data from TSM :-)

With TSM we back up as a normal incremental (no snapshot ...)
and do the 'normal' restores of user folders occasionally deleted by users.

Lately I have done a lot of TSM restore tests
of our cyrus mail server (currently 1 filesystem, 5 million files, 280 GB)
and had these experiences:
A complete backup of the whole filesystem (with just one session)
runs in about 18 hours, but we normally don't do that - just incrementals.
The incrementals, with around 80,000 files/10 GB, take about 4 hours per night.

Currently our best restore time for that single mail-server filesystem
is 03:49:51 for 4.4 million objects/280 GB
- that's pretty good for us, and I am trying to get it down to about 3 hours
by balancing the data over more input volumes/disk cache.
That best restore time was the result of a 'fresh' full backup placed
mainly on 2 3592 tapes and only very little on disk cache.
These values (average objects/hour - average data/hour) are
finally the facts showing what has at least been possible once.

Doing a full restore from the normal backup data
(not the 'fresh' one), with all the real holes and
the aggregate holes within, and so on ... takes about 70-80% more time
compared with the best really achievable one.

In case of a full disaster - with the HA solution also not working - we
would finally do that full restore, and while the restore is running no
sending of mail would be possible - incoming mails would be queued.
Given that pause, the cyrus reconstruct
of all folders is not necessary, which itself can take a very long time.

I think everyone should do one full restore test of the mail server at
some point - using a TSM snapshot or the 'normal'
TSM backup data from incrementals, whatever -
just to prove what is going on.

The other thing I came to is to measure the best
full-restore throughput that is possible - in practice -
just to verify the overall status and identify possible bottlenecks.


Currently I have two problems with the cyrus backup:
1) Full restore: compared with our individual
best-possible full mail restore time,
the +80% is not bad, but it seems that TSM slows the restore down in some
way. I always measure a fast start of the restore
in the first 2-3 hours (counting restored files and restored data),
and the restore forecast then looks like a
total restore elapsed time of about +30% (compared to the best possible).
In the end the restore slows down without any obvious reason,
the restore process on the client rises in its CPU usage,
and it is no wonder that the TSM server shows
more and more 'SendWait' states for the sessions.
To me the bottleneck seems to be 'inside' the
TSM software, and I currently have an open PMR on that.

2) Partial restore:
Restoring just a few hundred files/a few MB may take far too long
... TSM is doing things that are not understandable ... maybe it's a
deep/architectural problem.
Here we help ourselves by disabling the NQR (no-query) restore,
using for example
dsmc restore /mail/imap/j/user/juser/?*
... which runs pretty fast ...
instead of  dsmc restore /mail/imap/j/user/juser/
... which may take 10 times longer ...
I hope IBM is aware of that problem
- because it's a really painful and annoying one



...just some thoughts
Rainer


Re: Redirecting Output of NetWare Console dsmc to a File ?

2006-05-12 Thread Rainer Wolf

Hi,
thanks a lot ! it worked with
 dsmc q backup sys:system/tsm/ -sub=yes -ina (CLIB_OPT)/sys:test.out

best regards
Rainer


Matt Zufelt schrieb:

is there a way to put the output of, for example,
dsmc q backup on the NetWare 6 console into a file?

Or, if not, can I use another OS's TSM client?
I didn't succeed with Linux and MS Windows TSM clients.




Try appending (CLIB_OPT)/sys:test.out to your command.  Something like:

dsmc q backup (CLIB_OPT)/sys:test.out

--Matt



--

Rainer Wolf  eMail:   [EMAIL PROTECTED]
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


Redirecting Output of NetWare Console dsmc to a File ?

2006-05-09 Thread Rainer Wolf

Hello TSM+NetWare Experts,

is there a way to put the output of, for example,
dsmc q backup on the NetWare 6 console into a file?

Or, if not, can I use another OS's TSM client?
I didn't succeed with Linux and MS Windows TSM clients.

Thank you all.
Rainer



Re: Update - Tape problem after moving TSM to new server

2006-03-28 Thread Rainer Wolf
Hi,
It's really no problem, but something has changed since we started using the
bigger 3592 drives: the volume capacity increased from 40 GB to 300 GB,
and at the same time we started to use group collocation.
After one year in use, volumes now stay in 'filling' state much longer.
So of course there is nothing to do but keep writing to them.

But anyway something else is happening: at the moment the last
byte is written to a volume and it changes to state 'FULL',
that same volume will often already show, for example, 80% reclaimable space,
just because the longer filling time lets expiration put holes in it.

The bigger the volumes in use, the more you may think about those
filling volumes. And I really would not mind if a filling
tape that already holds much more than the estimated capacity and
shows, let's say, 80% reclaimable space were reclaimed
by the reclamation process - not by me :-)

Regards,
Rainer



Roger Deschner wrote:

 I have never seen reclamation take a Filling volume. I thought the Big
 Idea ever since the product was called WDSF was to fill up the Filling
 volumes until they are Full, let expiration gradually eat holes in them,
 and then reclaim them. What's the point in reclaiming a tape that isn't
 full yet? Even with collocation, I don't get it. If it isn't full yet,
 the only thing you should be doing to it is writing more data to it.

 Roger Deschner  University of Illinois at Chicago [EMAIL PROTECTED]
 = What if there were no rhetorical questions? ==

 On Mon, 27 Mar 2006, Richard Sims wrote:

 On Mar 27, 2006, at 1:56 AM, Rainer Wolf wrote:
 
  Hi,
 
  Roger Deschner wrote:
 
 
  3. The Full tapes should reclaim themselves normally. However,
  reclamation will not select any tape that is still marked as
  Filling, so
  you've got to reclaim them manually yourself with MOVE DATA. Might
  take
  a while, which is OK as long as you don't run out of tapes.
 
  Is it true ?  I thought that reclamation can also affect
  volumes in filling state - why not ?
 
 Reclamation operates on any usable volume, regardless of Full or
 Filling. Of course, when you get to the point of reclaiming Filling
 volumes, reclamation may not then be a productive thing to do.
 
Richard Sims
 



Re: Update - Tape problem after moving TSM to new server

2006-03-26 Thread Rainer Wolf
Hi,

Roger Deschner wrote:


 3. The Full tapes should reclaim themselves normally. However,
 reclamation will not select any tape that is still marked as Filling, so
 you've got to reclaim them manually yourself with MOVE DATA. Might take
 a while, which is OK as long as you don't run out of tapes.

Is it true ?  I thought that reclamation can also affect
volumes in filling state - why not ?

Regards,
Rainer


Question on recommended excludes for TSM-Mac Clients with MacOS 10.4 ?

2006-03-09 Thread Rainer Wolf
Hi TSMers,

we have a TSM server (Solaris, 5.3.2.1) and Macintosh TSM clients at 5.3.2.1.

Since starting with Mac OS 10.4, those Macintosh clients have been filling
the TSM server with quite a lot of TSM DB entries.
Doing just normal incremental backups, those TSM clients appear
with up to 100,000 directories and in the range of up to 500,000 files in the
TSM DB.
These are no servers - they just act as laptops. The default excludes that came
with the TSM client installation are active but don't seem to be very
effective.

My question is: does anyone have tips on how to handle that - are
we doing something wrong?
Has someone found extended, reasonable excludes?
Is there another recommended way of doing the backup? Is it reasonable
to propagate something else, for example backing up just and only
one directory like /backup?
We currently don't have very many of those Mac OS 10.4 clients
- so it's not a real problem for us yet - but I wonder
how to back up, let's say, 200 Mac laptops ... a new TSM DB?

Regards - and thanks in advance !
Rainer





Re: Q STG hangs during reclamation

2006-03-01 Thread Rainer Wolf
Hi,
We recently had the same effect. There are various reasons for this behaviour -
often hardware-related,
sometimes maybe just software.
Our last 'hang of query stg' resulted from a filesystem that was offline
because of a defective FC adapter, so
first I would check (at OS level) whether all
related disk hardware - filesystems or raw partitions - is OK and really usable.
In TSM you may also check the client events of the last 24 hours.
You should call IBM for support.

Greetings
Rainer

Orville Lantto wrote:

 We are having problems on a TSM server instance on AIX.  Some simple commands
 such as 'Q STG' appear to hang, along with their sessions, while reclamation
 is in process.  Other commands like 'Q PR' do not hang.  The server is AIX
 5.2.5 with six instances of TSM Server 5.3.2.0 on it.  The disk is DS4300
 with a 3584 tape library shared between the instances.

 Anyone have a guess as to why?

 Orville L. Lantto
 Glasshouse Technologies, Inc.
 Cell:  952-738-1933


 

 From: ADSM: Dist Stor Manager on behalf of Prather, Wanda
 Sent: Tue 2/28/2006 4:50 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] 3584 help

 Having a problem with the Tape Library Specialist/web app, so I want to
 know what level of microcode is on this 3584.

 Can you get that information from the web app, or for that matter, from
 the front LED panel of the 3584?

 Thanks!



Re: AW: (Too) long no query restore?

2006-02-27 Thread Rainer Wolf
Hi,
I don't think so. I had this problem some years ago on an AIX TSM 5.1.x server,
and now the same thing on a Solaris TSM 5.3.2.1 server.

When migrating from AIX to Solaris I first thought this problem
had gone away. After moving our Cyrus mail server to the new platform,
restore performance was consistently very good - even after 8 months
(we have RETVER=45). The data to be restored is located
on 2-4 tapes (3592) and disk cache.
The ratio of active to inactive files for this node is about
4.2 million active files + 1.1 million inactive files.
Because of the fantastic new drives we did not do any tape reclamation,
and we had no problems at all.
Because my colleague who normally does the restores of mail folders
(mostly just a few hundred files up to some thousand, using
a restore command like 'dsmc restore /mail/imap/x/user/xuser/remfolder/')
sits opposite me, I know that the restore performance was always very good.
Then I said to him - oh, we should use the tapes a bit more before
they run out of warranty - and started some reclamations on tapes
that were full and had a usage of, let's say, 20% or less.
Thus one of the tapes the mail server's TSM client has data on
(the storage pool has node collocation) was reclaimed ...
... and oops: the old problem (waiting for files ...) suddenly arose
again.
So another 'workaround' might be not to use reclamation ;-)

Greetings
Rainer


Whitlock, Brett wrote:

 Is this problem platform specific?

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
 Richard Sims
 Sent: Friday, February 24, 2006 9:47 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] AW: (Too) long no query restore?

 Ironically, the most crucial part of the TSM product - restoral - has
 become its most troublesome.  Since the advent of the well-intentioned
 NQR, when customers perform qualified restorals they never know what to
 expect in terms of restoral speed.
 Indeed, restorals can end up being prohibitively long.

 I'm incredulous that IBM *still* has not tackled and resolved this
 long-festering problem in the product, which has simply lingered for
 years now.  Descriptive APARs and Technotes only tell of alternatives
 for when the customer runs into a debilitating restoral, but we see no
 initiative to address the architectural problems.  TSM is an Enterprise
 level product, and yet a crippling problem of this severity remains in
 it?
 Not good.

 Richard Sims

 On Feb 24, 2006, at 10:22 AM, Rainer Wolf wrote:

  Hi,
 
  I often experienced this and was in discussion with IBM - at last it
  was closed by ibm-support with a point to
  http://www-1.ibm.com/support/docview.wss?uid=swg1IC34713
  IC34713: PERFORMANCE
   DEGRADATION WHEN
   RUNNING NO QUERY
   RESTORES
  I don't know why this old one is still active ( tsm 4.2 ) ??
 
  In our case the performance-problem arised at that point when the
  first reclamation-process has run on a tape on which the client has
  data on ... maybe also by hazard.
  For me that problem sometimes seems to be the most alarming one in
  TSM.
 
  Greetings
  Rainer



Re: AW: (Too) long no query restore?

2006-02-24 Thread Rainer Wolf
Hi,

I often experienced this and was in discussion with IBM - in
the end it was closed by IBM support with a pointer to
http://www-1.ibm.com/support/docview.wss?uid=swg1IC34713
(IC34713: PERFORMANCE DEGRADATION WHEN RUNNING NO QUERY RESTORES).
I don't know why this old one (TSM 4.2) is still active ??

In our case the performance problem arose at the point
when the first reclamation process had run on a tape on which
the client has data ... maybe also by chance.
For me that problem sometimes seems to be the most alarming one in TSM.

Greetings
Rainer  


Christoph Pilgram wrote:
 
 Hi
 I had the same problem, and as Wanda wrote, I tried it with the TESTFLAG
 DISABLENQR and it finished in about 5 minutes. Another test was to do the
 complete restore with the exception of one file in one of the subdirectories:
 it ran in 5 minutes.
 
 Best wishes
 Christoph
 
 -----Original Message-----
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
 Prather, Wanda
 Sent: Wednesday, 22 February 2006 17:06
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: (Too) long no query restore?
 
 If you have the luxury of re-creating the problem, try again and put
 TESTFLAG DISABLENQR in the dsm.opt file (search on DISABLENQR in the
 archives to see what people have said about this before).
 
 That turns off NQR and uses CLASSIC restore.  It is known that sometimes
 CLASSIC will outperform NQR.
 
 If you get no difference in performance between the two, then you have
 something in your hardware config that needs tuning.
 
 Wanda Prather
 I/O, I/O, It's all about I/O  -(me)
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
 Thomas Rupp
 Sent: Wednesday, February 22, 2006 10:26 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: AW: [ADSM-L] (Too) long no query restore?
 
 Some additional information:
 
 Server: IBM eserver xSeries 346, Intel Xeon CPU, 3.2GHz
 Database: 20GB on 4 Volumes (EMC AX100)
 Storagepool: 1317GB on 13 Volumes (EMC AX100)
 
 No tape activity is involved as all data (40MB) is restored from disk.
 
 So TSM seems to spend most of the time scanning through the database.
 And I think more than 4.5 hours to scan 9.5 million files is way to
 long.
 
 Thomas



question on select for migration statistics / summary table ?

2006-02-07 Thread Rainer Wolf
Hello all,
we currently have TSM server 5.3.2.1 on Solaris, and I have difficulties with
the following select:

 select start_time, end_time, end_time-start_time as "Elapsed Time", -
 entity, processes, bytes, -
 (cast(bytes as decimal(18,0)) / -
 cast((end_time-start_time)seconds as decimal(18,0))) / 1024 -
 as "KB/second" -
 from summary where activity='MIGRATION' -
 and cast((current_timestamp-end_time)minutes as decimal) > 0 -
 and cast((current_timestamp-start_time)hours as decimal) < 168 -
 and cast((end_time-start_time)seconds as decimal) > 0

It should give back the throughput values in KB/s for migration processes - all
migration processes of the last
7 days should be listed. The purpose is to keep an eye on the
migration performance, knowing
that the time of the whole process, including mount (wait) times, is used.
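The figure the SELECT is after can be cross-checked by hand from a single SUMMARY row. A minimal sketch (the row values below are taken from the RECLAMATION example shown further down; the helper itself is only an illustration, not a TSM API):

```python
from datetime import datetime

def kb_per_second(start_time, end_time, nbytes):
    """KB/s over the whole process, mount/media waits included --
    the same quantity as bytes / (end_time-start_time)seconds / 1024."""
    fmt = "%Y-%m-%d %H:%M:%S"
    elapsed = (datetime.strptime(end_time, fmt)
               - datetime.strptime(start_time, fmt)).total_seconds()
    if elapsed <= 0:
        raise ValueError("process has not finished or timestamps are bad")
    return nbytes / elapsed / 1024

# SUMMARY row of a finished process: start, end, BYTES
print(round(kb_per_second("2006-02-05 11:40:52",
                          "2006-02-05 16:03:43", 52093334736)))
```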

It formerly ran fine, but I think since one TSM update (5.3.2?) I have gotten no
output anymore,
although migration happens.
The output is only:
ANR2034E SELECT: No match found using this criteria.
ANS8001I Return code 11.

The same SQL query still runs fine for 'RECLAMATION' processes instead of
'MIGRATION'.

Can anyone help? I am not very familiar with SELECT.
It looks like the start_time for migrations is somehow cumulative in the
summary table?
Can I use some other select to get the migration performance (in KB/s) for the
whole migration process? Maybe not from the summary table?

Thanks in advance for any help !
Rainer



tsm: TSM1> select * from summary where activity='MIGRATION'

...

      START_TIME: 1900-01-01 00:00:00.00
        END_TIME: 2006-02-07 07:38:57.00
        ACTIVITY: MIGRATION
          NUMBER: 393
          ENTITY: BACKUPPOOL2
        COMMMETH:
         ADDRESS:
   SCHEDULE_NAME:
        EXAMINED: 4114238
        AFFECTED: 4114238
          FAILED: 0
           BYTES: 836764217344
            IDLE: 0
          MEDIAW: 1106
       PROCESSES: 14
      SUCCESSFUL: YES
     VOLUME_NAME:
      DRIVE_NAME:
    LIBRARY_NAME:
        LAST_USE:
       COMM_WAIT: 0
NUM_OFFSITE_VOLS:


tsm: TSM1

For the RECLAMATION processes, in contrast, the start_time looks 'normal':

tsm: TSM1> select * from summary where activity='RECLAMATION'

...

      START_TIME: 2006-02-05 11:40:52.00
        END_TIME: 2006-02-05 16:03:43.00
        ACTIVITY: RECLAMATION
          NUMBER: 346
          ENTITY: DA-COPY-VBA
        COMMMETH:
         ADDRESS:
   SCHEDULE_NAME:
        EXAMINED: 203830
        AFFECTED: 203830
          FAILED: 1
           BYTES: 52093334736
            IDLE: 0
          MEDIAW: 1055
       PROCESSES: 1
      SUCCESSFUL: NO
     VOLUME_NAME:
      DRIVE_NAME:
    LIBRARY_NAME:
        LAST_USE:
       COMM_WAIT:
NUM_OFFSITE_VOLS: 68


Re: how to speed up ...

2005-09-28 Thread Rainer Wolf
Hi,
TCPWINDOWSIZE is interesting to adjust if you have so-called 'LFPs ... long
fat pipes', that means
a lot of data over a long-distance network - something like a WAN, not a LAN -
AFAIK.
You may check the manual about TCPWINDOWSIZE.

Goran, I think you have missed something: besides the 'elapsed processing time'
and 'objects compressed' you should give the other values - what actually
happened.
It's not clear how many files are really backed up - or do you want to back up
all the files every time
(selective or incremental backup)?
Are both client and server connected to the same LAN, and what network
interfaces do they have?
Another question: how many filesystems does the client have - all files
in one?

Greetings
Rainer



Kurt Beyers wrote:

 See the TSM 5.3 Performance Guide for recommended values of e.g. TCPWINDOWSIZE,
 which can improve the performance.

 But an additional question I was just dealing with: the file system backup
 handles each domain sequentially - first the backup of domain A, then the
 backup of domain B, and so on. Can the backups of e.g. 4 domains be started
 simultaneously if the backup goes to a disk pool? I might have to check the
 manual on RESOURCEUTILIZATION first.

 best regards,
 Kurt

 

 From: ADSM: Dist Stor Manager, on behalf of goc
 Sent: Wed 28/09/2005 16:51
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] how to speed up ...

 hi, i have a question :

 any ideas how to speed up an AIX client backup of 8,259,711 files?
 it's not about size - these 8 million files are ca. 3 GB or even less ...

 client aix 5.2 via TCP network LAN to aix 5.2 with TSM 5.3.1.4
 with SATA cache pools ... data compression forced by server

 Elapsed processing time: 12:40:47
 objects compressed by LOUSY 17%

 it can be it, right ? anyone has something similar ? i hope yes :-)

 thanks
 goran

 PS: i'll try to off data compression when the node will not be accessing
 server, which is rarely ... LOOL



Re: Question about DR TSM site

2005-09-14 Thread Rainer Wolf
Stapleton, Mark wrote:

 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On
 Behalf Of Jon Evans
 Following on from this... If you had a failure of one of the libraries
 (say it burnt!) and the primary storage pool is lost, would it be
 possible to have a second TSM server (other than the one that owns the
 storage pool) take care of recreating the primary pool from the copy
 pool?.. in other words, take the processing required to recreate the
 primary storage pool away from the production TSM server, and
 hand it to
 another server that's not so busy? I have read through the manual on
 virtual volumes, and it seems
 That this is maybe possible, but I'm not sure.. can anyone confirm if
 this is the case?

 No, that's exactly what you *can't* do with virtual volumes. Virtual
 volume usage requires not only the server where the data physically
 resides, but also the server that created the virtual volumes in the
 first place. Otherwise there will be no access to the data.


Hi all,

I have 2 questions on this: if you only have the virtual volumes because
the primary site (library + TSM server) is completely destroyed, but you get the
TSM DB recreated on some box, you will have access to the data through the
virtual volumes.
Question 1: can the recreation of the primary data only be done using the same
hardware
- same library, same drives?
If the answer is yes: is there a technical reason?
If no: why couldn't the primary pool be recreated on an alternative
remote TSM server, just using virtual volumes of that server?
I thought virtual volumes on a remote server could be used for primary pools?

Greetings
Rainer





Re: Tape Question

2005-07-27 Thread Rainer Wolf
Hi Debbie,
you can mix the tapes for 3590 and 3592 drives inside the library as you like,
but only 3592 drives can use the 300 GB tapes, and the
20/40 GB tapes are only usable by the 3590 Magstar drives.
You can also mix 3592 drives and 3590 drives in the library, but not in one
frame.
We upgraded our L12 frame to an L22 frame: this includes a new OS/2 PC
and the frames needed for the new 3592 drives (maximum 4 drives in an L22), and
placed
the new 3592 drives in there.
You may move the old 3590 drives into another D12 drive frame: we have done
that.
One thing you should not forget is the total number of drives you plan
and the number of serial ports for the drives ... if you exceed 8 drives
and currently have 8 ports available, you have to extend the number of ports too.

Greetings
Rainer

Debbie Bassler wrote:

 Thanks for the reference information, RichardWe had an IBM rep come in
 and he said we could still use the 3494 library with 3592 tape drives, but
 I wasn't sure about the tapes.

 Debbie

 Richard Sims [EMAIL PROTECTED]
 Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
 07/27/2005 08:34 AM
 Please respond to ADSM: Dist Stor Manager

 To: ADSM-L@VM.MARIST.EDU
 cc:
 Subject:Re: [ADSM-L] Tape Question

 On Jul 27, 2005, at 8:17 AM, Debbie Bassler wrote:

  Currently, we have a 3494 tape library and 3590 tape drives and
  3590 tapes.
  We are planning to get a couple of 3592 tape drives. My question
  is, will
  we be able to use the 3590 tapes in the 3592 tape drives? I would
  think
  since the capacity of the 3590 tapes is only 20G/40G it would not
  be a good
  idea to use them.  Also, I'm not sure how the write speed would be
  effected
  if we use these tapes. I assume there are 3592 tapes which have a
  larger
  capacity and write speed.

 Debbie -

 Refer to the manual IBM TotalStorage Enterprise Tape System 3592
 Introduction and Planning Guide, where on page 5 it says:

 Model 3592 tape cartridges are not compatible with 3590 tape drives,
 and, likewise, 3590 tapes cannot be used in the 3592 drives.

 3592 is a technology departure, as 3590 was to 3490.

 Another good reference is redpaper IBM TotalStorage Enterprise Tape
 3592: Presentation Guide;
 and there is the IBM TotalStorage Enterprise Tape Cartridge 3592
 brochure (G225-6987).

 Richard Sims



How to erase magstar-Tapes ?

2005-06-15 Thread Rainer Wolf
Hello All ,

we have a 3494 library with currently both 3590 and 3592 drives.
Everything has now been moved to 3592 tapes, and we are retiring the 3590
drives.
On the TSM side this is no problem.
Because we want to give the now unused J-label (20 GB) and K-label (40 GB)
cartridges to another company, I need to erase/zero the data on the old tapes -
about 1000 of them.
Now my question: is there an efficient way to make the data on those
old tapes unreadable, and not only by relabelling?

Has someone done this?
I have never done anything outside ADSM/TSM with the tapes, and I think
it is better done from AIX?

Thanks a lot for any hints
Rainer





End-of-Support Matrix for TSM(/adsm) Klients

2005-05-30 Thread Rainer Wolf
hello all,

I remember a matrix that formerly showed
the ADSM/TSM client and server versions with the associated
operating-system versions that are (or have been) supported,
together with the date on which the official TSM (ADSM) support ends.

I cannot find this matrix anymore. Can someone send the link
or just a copy of that list?
It would be ideal if already out-of-date clients were also
listed (ADSM V3 and so on).

Greetings and thanks in advance !
Rainer




Re: Restore performance problem

2005-04-01 Thread Rainer Wolf
Hi,
I have done quite similar restores on our mail server.
You may also look at what happens to the
restore process on the client. It may happen that the CPU is at 100% for
the 'dsmc restore ...'? Another thing is the filesystem on the
client: you may check the filesystem/disk activity/service time for any
'weakness' that may result from creating that many inodes.

I have recently done a lot of mail-server restores (always 3.5 million files /
140 GB)
using an old TSM server (v5.1.9.5, with K tapes and the same config as you ...
10 tapes)
and observed that especially this old TSM server was at its limit.
In particular the I/O configuration of that old TSM server was very bad:
DB, log, and disk cache are mixed up. This decreases the restore performance,
especially
when other activity (backups at night) happens.
So we used
dsmc restore -quiet /mail/ /data2/mail/
(tcpwindowsize 64, tcpbuffsize 32, largecommbuffers no, txnbytelimit 25600,
resourceutilization 3)
and received the 3.5 million files / 140 GB finally in 09:53:34.
For me that was OK because I know about the bad server constitution.
The restore time would be much worse if the restore ran at a time
when the TSM DB gets a lot of other transactions - like nightly backups.
... restoring the same with only one drive results in 51 hours.


Running the same mail-restore test on new hardware (new DB, TSM 5.3, with
3592 drives)
- using the same restore client - we finally got 3.5 million files / 150 GB
restored
in 04:52:00
... using just 1 drive, because the data fits on one 3599 tape.
But here I have experienced a reproducible bug/behaviour (at the moment it is
'closed' because
Solaris 10 is not yet supported): when starting the restore, everything runs
fine and
fast (with a restore performance of about 1 million files/hour) ... after some
time - maybe 40%
of the total restore time - the CPU of the client rises to 100% and the
restore performance (data/files) thus slows down - no reason for this is found
on the server
or on the client.
... maybe it happens when a very big directory with a lot of directories in it
is in progress ...
In the end I found a 'workaround': I cancelled this slowed-down restore process
running at 100% CPU
('dsmc restore -quiet /mail/ /data2/mail/')
with Control-C and let it shut down ... and then I just restarted the restore
with
'dsmc restart restore -quiet'. This 'restarted restore' works fast again and
finally
ends with 04:52:00 (total time).
If I did not stop/restart the client restore session, the restore would
finish in 06:49:09.
That is reproducible, and it is quite a big difference
(30% faster with interrupting and restarting),
but maybe it's because of our unsupported TSM version
... or has someone else seen this CPU-crunching behaviour?

Greetings 
Rainer



Thomas Denier wrote:
 
 We recently restored a large mail server. We restored about nine million
 files with a total size of about ninety gigabytes. These were read from
 nine 3490 K tapes. The node we were restoring is the only node using the
 storage pool involved. We ran three parallel streams. The restore took
 just over 24 hours.
 
 The client is Intel Linux with 5.2.3.0 client code. The server is mainframe
 Linux with 5.2.2.0 server code.
 
 'Query session' commands run during the restore showed the sessions in 'Run'
 status most of the time. Accounting records reported the sessions in media
 wait most of the time. We think most of this time was spent waiting for
 movement of tape within a drive, not waiting for tape mounts.
 
 Our analysis has so far turned up only two obvious problems: the
 movebatchsize and movesizethreshold options were smaller than IBM
 recommends. On the face of it, these options affect server housekeeping
 operations rather than restores. Could these options have any sort of
 indirect impact on restore performance? For example, one of my co-workers
 speculated that the option values might be forcing migration to write
 smaller blocks on tape, and that the restore performance might be
 degraded by reading a larger number of blocks.
 
 We are thinking of running a test restore with tracing enabled on the
 client, the server, or both. Which trace classes are likely to be
 informative without adding too much overhead? We are particularly
 interested in information on the server side. The IBM documentation for
 most of the server trace classes seems to be limited to the names of the
 trace classes.



Re: select question: Finding files bound to an archive management class per node ?

2005-03-21 Thread Rainer Wolf
Hi Andrew,
thanks a lot - all of your examples help a lot!

What I am really trying to find out is the time/date when I can delete
some obsolete archive copy groups, like:
            Policy Domain Name: U12345
               Policy Set Name: STANDARD
               Mgmt Class Name: ARCHIVE_10Y
               Copy Group Name: STANDARD
               Copy Group Type: Archive
                Retain Version: 3,653
          Retention Initiation: Creation
           Retain Minimum Days:
            Copy Serialization: Static
                Copy Frequency: CMD
                     Copy Mode: Absolute
              Copy Destination: NONE
Last Update by (administrator): MANAG_W
         Last Update Date/Time: 03/16/05   12:49:15
              Managing profile:

It is just for 'cleanup' - not now but someday.
At the same time some other archive classes are still used to archive data
and therefore must stay untouched.
(Unfortunately those obsolete archive classes have their data mixed inside
one tape storage pool with still-usable ones, so I cannot identify
the obsolete archive classes just by the occupancy...)

Maybe it will take some years until the files that are bound to those 'old'
classes
leave through the expiration process.
My solution is to simply check from time to time whether there are still files
bound
to a specific (obsolete) archive class - and if no files are bound (by any
node in that domain), then I can delete it.

Really great would be the following:
is it possible to get the date/time when the 'last'
file will expire for a given (archive) class name and a given
domain name or node name?
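A rough answer to that last question can be computed client-side: take the newest ARCHIVE_DATE found for the class and add its RETVER days. A minimal sketch under those assumptions; the sample date is hypothetical, and this only applies with Retention Initiation: Creation, as in the copy group above:

```python
from datetime import datetime, timedelta

def last_expiration(newest_archive_date, retver_days):
    """Approximate date when the last archive object bound to a class
    becomes eligible for expiration (retention initiated at creation)."""
    fmt = "%Y-%m-%d"
    when = datetime.strptime(newest_archive_date, fmt) + timedelta(days=retver_days)
    return when.strftime(fmt)

# Hypothetical: newest ARCHIVE_10Y object archived 2005-03-01, RETVER = 3653 days
print(last_expiration("2005-03-01", 3653))  # -> 2015-03-02
```

The actual removal still depends on when the expiration process next runs, so this gives the earliest possible date, not a guaranteed one.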

Greetings
Rainer





Andrew Raibeck wrote:

 Hi Rainer,

 What is it you are *really* trying to find out? Depending on the size of
 your database, mining the ARCHIVES table can be a slow process when maybe
 you don't quite need that level of detail?

 You can find out who has any archived data by querying the occupancy
 table:

query occupancy type=archive

 That will show you which nodes have any archived data, and how many
 objects they have.

 If you still want to mine the ARCHIVES table, then perhaps it would be
 best for you to run one SELECT against the entire table, store the results
 in a file, then use some other utiity to mine that data:

dsmadmc -id=you -pa=xx -comma select * from archives  archives.out

 With that said...

 If you want to find out how many objects a given node has archived, you
 can do something like this:

select node_name, class_name, count(*) \
   from archives \
   group by node_name, class_name

 If you want to know what specific object by node name:

select filespace_name || hl_name || ll_name \
   as "FILE NAME" \
   from archives \
   where node_name='YOURNODE' and class_name='YOURCLASS'

 Depending on your specific needs, you can create variations of these.
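The comma-delimited dump can then be mined offline. A minimal sketch that tallies objects per node and management class; the column positions are parameters rather than assertions about the ARCHIVES layout, so check your own dump and adjust:

```python
import csv
from collections import Counter

def count_by_node_and_class(path, node_col=0, class_col=-1):
    """Tally archive objects per (node, class) from a 'dsmadmc -comma' dump.

    node_col / class_col give the positions of NODE_NAME and CLASS_NAME
    in the dump; they may differ between server levels, so verify first."""
    tally = Counter()
    with open(path, newline="") as fh:
        for row in csv.reader(fh):
            if len(row) >= 2:  # skip blank or truncated rows
                tally[(row[node_col], row[class_col])] += 1
    return tally
```

The per-node totals can then be sanity-checked against 'query occupancy type=archive'.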

 For your other question, the BACKRETENTION and ARCHRETENTION settings for
 the policy domain might be useful.

 Regards,

 Andy

 Andy Raibeck
 IBM Software Group
 Tivoli Storage Manager Client Development
 Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
 Internet e-mail: [EMAIL PROTECTED]

 The only dumb question is the one that goes unasked.
 The command line is your friend.
 Good enough is the enemy of excellence.

 ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 2005-03-15
 06:25:40:

  Hello,
  we have discontinued a way to archive data on some special hardware
  and the archive-copygrous are now modified with a 'copy
 destination'named NONE
  that points to nothing and will give an error message if someone is
  using that.
  One question is : can someone help me with a select command that shows
  those files or just the number of files  that are bound to an
  archive management class  in a
  policy domain  and is stored by a node  ?
  Alternatively a select that shows those files  or just the number of
  files that are
  bound to an archive management class  in a policy domain  and is
  stored  in a given storagepool  ?
  I need that because some day those obsolete archive copygroups
  should be deleted
  -the archive data is not unlimited by time-  if not needed anymore.
 
  My other question is: is there a way to make management classes kind of
  unseen to the client if those archive managementclasses /  archive
 copygroups
  are not usable anymore and just defined at the server for not to
  lose the data ?
 
  ... or is there something else what can be done ?
 
  Greetings and Thanks a lot in advance !
  Rainer
 
  --
 
 --
  Rainer Wolf  Mail:
 [EMAIL PROTECTED]
  Kommunikations und Informationszentrum   Tel/Fax:  ++49 731
 50-22482/22471
  Abt. Infrastruktur, Uni Ulm  Web: http://kiz.uni-ulm.de/

--
--
Rainer

select question: Finding files bound to an archive management class per node ?

2005-03-15 Thread Rainer Wolf
Hello,
we have discontinued a way to archive data on some special hardware
and the archive copygroups are now modified with a 'copy destination' named NONE
that points to nothing and will give an error message if someone tries to use it.
One question: can someone help me with a select command that shows
those files, or just the number of files, that are bound to an archive
management class in a policy domain and stored by a node?
Alternatively, a select that shows those files, or just the number of files,
that are bound to an archive management class in a policy domain and
stored in a given storage pool?
I need this because some day those obsolete archive copygroups should be deleted
(the archive data is not unlimited in time) once they are no longer needed.

My other question is: is there a way to make management classes invisible
to the client if those archive management classes / archive copygroups
are no longer usable and are only kept defined on the server so the data is not lost?

... or is there something else what can be done ?

Greetings and Thanks a lot in advance !
Rainer

--
--
Rainer Wolf  Mail:  [EMAIL PROTECTED]
Kommunikations und Informationszentrum   Tel/Fax:  ++49 731 50-22482/22471
Abt. Infrastruktur, Uni Ulm  Web:   http://kiz.uni-ulm.de/


select question on migration processes

2005-02-09 Thread Rainer Wolf
Hello ,

can someone help me ?
I am looking for a select statement that outputs three totals:
'items', 'bytes' and 'processing-time'
for all the migration processes that have finished in the
past 'n' hours.
My problem is getting the processing-time.
Has someone got this or something similar ?

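Getting the processing time out of the SUMMARY table can be sketched like
this (an untested sketch: it assumes SUMMARY is populated for migration, and
it borrows the elapsed-seconds cast used elsewhere on this list; adjust the
24-hour window to your 'n'):

```sql
-- Totals for migration processes that ended within the last 24 hours.
-- EXAMINED holds the item count and BYTES the amount moved; the cast
-- turns the end/start timestamp difference into seconds.
select count(*) as "Processes",
       sum(examined) as "Items",
       sum(bytes) as "Bytes",
       sum(cast((end_time-start_time) seconds as decimal(18,0))) as "Seconds"
   from summary
   where activity='MIGRATION' and end_time>=current_timestamp - 24 hours
```
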
Thanks in Advance
Rainer


--
--
Rainer Wolf  Mail:  [EMAIL PROTECTED]
Kommunikations und Informationszentrum   Tel/Fax:  ++49 731 50-22482/22471
Abt. Infrastruktur, Uni Ulm  Web:   http://kiz.uni-ulm.de/


Restore problem with big filesystem

2005-02-02 Thread Rainer Wolf
Hello,
has someone successfully restored a filesystem with more than 3 million files ?
Currently our 'restore' has been running for 20 hours and it still shows

ANS1899I * Examined 2,689,000 files *
ANS1899I * Examined 2,690,000 files *
ANS1899I * Examined 2,691,000 files *
...
Because there are about 3.5 million files to be restored it may run for a
very long time. At this stage not a single byte has been 'restored' yet --- it is
just doing this 'Examination'.

The dsmc on the solaris(10) client-system ( v440, 4 cpus, 16GB mem ) is showing

CPUPID User  NI  State   SizeRSS  CPU% SCPU%   CPU-Time Command
  4T  8970 root   0 on  1011m  1009m 100.4   0.0   12:30:48 dsmc

... there is nothing else running there.

It seems that the 'restore' slows down with the number of files 'examined'.

We have tried the V5.3.0.0 Solaris client as well, and this current session
has client version 5.1.6.5.
The process is excessively doing string operations like 'strcmp' and 'strlen'.

The restore - command is:
dsmc restore -quiet -virtualnode=mail /mail/ /data2/mail/

The /data2 -fs is empty and our dsm.sys/dsm.opt looks like:

blackhole:...//# cat /opt/tivoli/tsm/client/ba/bin/dsm.opt
subdir  yes
testflag disablenqr
blackhole:...//# cat /opt/tivoli/tsm/client/ba/bin/dsm.sys
servername adsmaix
commmethod tcpip
tcpport1500
tcpserveraddress   xxx
passwordaccess generate
schedlogname   /var/adm/dsmsched.log
errorlogname   /var/adm/dsmerror.log
schedlogretention  7
errorlogretention  14
inclexcl   /opt/tivoli/tsm/client/ba/bin/exclude.lis
tcpnodelay yes
blackhole:...//#

The 'testflag disablenqr' we used because at least for a smaller number of
files ( e.g. 1 files ) it had a tremendous performance effect.

The server often shows SendWait - currently up to 20 s WaitTime:

  Sess   Comm.    Sess    Wait     Bytes    Bytes   Sess   Platform   Client Name
Number   Method   State   Time     Sent     Recvd   Type
------   ------   -----   -----   -------   -----   ----   --------   -----------
26,610   Tcp/Ip   SendW   20 S    689.8 M     496   Node   SUN SOL-   MAIL
                                                           ARIS


Anyone knows whats going on here ?

Greetings,
Rainer



--
--
Rainer Wolf  Mail:  [EMAIL PROTECTED]
Kommunikations und Informationszentrum   Tel/Fax:  ++49 731 50-22482/22471
Abt. Infrastruktur, Uni Ulm  Web:   http://kiz.uni-ulm.de/


Re: Restore problem with big filesystem

2005-02-02 Thread Rainer Wolf
Hi Richard,
started again the whole thing and
-enabled the NQR again
-get again the v5.3.0.0 client
-added  tcpwindowsize  64
tcpbuffsize32
largecommbuffers   no
txnbytelimit   25600
to dsm.sys ...
The  client and Server have both gigabit.
Finally I raised the BufPoolSize on the server.
The restore-command without any options
'dsmc restore -quiet /mail/ /data2/mail/'
now works quite ok - it is still running at about 7 GB/h
( slightly getting faster ).

Not very much --- but the data was really transferred right from the beginning.
There are 140 GB / 3.5 million files to be restored. ( it is just a test )

Another general restore question here is: the server knows which files are to be
restored and the server also knows which tapes are needed ...
... so why is only one tape mounted at a time ?
The backup data of this client is in a node-collocated tape pool
and I have free unmounted tapes. The whole data of the client
resides on about 10 Magstar cartridges. I think in this
example I would get about 14 GB / hour using two tape drives
with one restore session.
I think there is a need to use more than one drive in a
single restore session, for filesystems are growing and growing.


Greetings ,
Rainer



Richard Sims wrote:

 Hi, Rainer -

  Unfortunately, implicit restorals (where you do not explicitly name
  objects to be restored) have become a difficult challenge in the
 product, and remains a muddled area which sorely needs to finally be
 straightened out by Development, as the product should figure out the
 best approach: the mess should not be left to the befuddled,
 exasperated customer.

 In some cases, suppression of the more modern No Query Restore, by
 explicit suppression or involvement of qualifying restoral options, can
 improve restoral time, as Classic/Standard protocols are in play and a
 files list is promptly sent to the client and it can make its choice.
 This is what prompted you to add DISABLENQR to your option file. More
 often, you want No Query Restore to be in effect, though, such that the
 server generates the list of files to send to the client. However, the
 client's memory and processing power may be overwhelmed by the volume
 of files information (not to mention the time to send it over the
 network). With NQR in effect, customers get concerned as they see
 nothing coming back from the server for some time and wonder what's
 going on.

 Whereas you suppressed NQR, you are seeing the client examining the
 inventory list, and the client is probably slowing as its virtual
 storage is taxed and paging increases. In your wholesale restoral, NQR
 may be the better choice, as the server may have better resources to
 generate the list. Then again, it may take as long.

 I wish there were a better answer: none of us wants to have to deal
 with such quandries.

 Richard Sims

--
--
Rainer Wolf  Mail:  [EMAIL PROTECTED]
Kommunikations und Informationszentrum   Tel/Fax:  ++49 731 50-22482/22471
Abt. Infrastruktur, Uni Ulm  Web:   http://kiz.uni-ulm.de/


Re: Drives become unavailable

2004-12-17 Thread Rainer Wolf
Hello,
you should check your paths with 'query path'
and update the path once possible hardware failures have been resolved.
You might find hardware failures logged on the system with
errpt ( on AIX ).

Greetings
Rainer

Mohsin Saleem Khan wrote:

 Hi,

 We have two drives defined in TSM 5.1 Level 6.5. It has been
 happening now for a few days that the drives become unavailable quite
 often, and when I update them with online=yes they become available again. I am
 not sure why it is happening and what I should do to get stable drives -
 any help?

 Regards
 Mohsin

--
--
Rainer Wolf  Mail:  [EMAIL PROTECTED]
Kommunikations und Informationszentrum   Tel/Fax:  ++49 731 50-22482/22471
Abt. Infrastruktur, Uni Ulm  Web:   http://kiz.uni-ulm.de/


Re: Tape status

2003-01-29 Thread Rainer Wolf
Hello,
you may also try
query drmedia d00600
... to see if it's used, for example, by database backup ?

and a
query libv LIBRARY_NAME d00600
( instead of what you mentioned)

Greetings
Rainer


Bruce Kamp wrote:

 In my case:
 tsm: TSMSERVq vol d00600 f=d
 ANR2034E QUERY VOLUME: No match found using this criteria.
 ANS8001I Return code 11.

 tsm: TSMSERVq libv d00600 f=d
 ANR2034E QUERY LIBVOLUME: No match found using this criteria.
 ANS8001I Return code 11.

 --
 Bruce Kamp
 Midrange Systems Analyst II
 Memorial Healthcare System
 E: [EMAIL PROTECTED]
 P: (954) 987-2020 x4597
 F: (954) 985-1404
 ---

 -Original Message-
 From: Richard Sims [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, January 29, 2003 8:43 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Tape status

 Re: Your mystery Private status on volumes...

 Show us the output of a 'Query Volume __ F=D' on one of them. Of
 particular interest is Scratch Volume? (to reveal whether the tape is from
 a scratch pool or was Defined to the storage pool, in which case empty and
 Private makes sense); and Date Last Written, to see if any data had been
 written to the tape.

--
---
Rainer Wolf mail: [EMAIL PROTECTED]
tel: ++49 731 50-22482  fax:  ++49 731 50-22471
Computing Center, University of Ulm, Germanyweb: http://www.uni-ulm.de/urz



question on backupset volumes: how to count them ?

2002-12-11 Thread Rainer Wolf
Hello,
sorry if this is often asked or I haven't found the doc...
I have the following problem:
We are using an AIX TSM server 4.2.2.13 and a 3494 library with K- and J-tapes.
Backupsets of clients are created with a retention of 999 days, and the
only device class for backupset volumes is the devclass of the 3494 library.
Now some of the backupset tapes are checked out and some are still in
the library. Some of the online backupset volumes have once been
checked out and are checked in again with status=private.
I have not found something equivalent to 'q drmed ...' (as for DBBackup volumes)
for those backupset volumes.

Now I am looking for a good way to ...
... to count the total number of those 3494 Backupset volumes
  ( online OR offline )
... only to get the number of the currently checked-in backupset volumes
  in the library.
  Maybe a problem is, that some of the online backup-set-volumes
  have been once checked out and then checked in again and may appear
  just as private ( in the q libvol ... ) ?
  So do I have to combine something like 'q libvol' and 'q volhist t=backupset'
  to get it ?
...  count the number ( online or offline  )
  of tapes depending on the node, for whom this backupset is created ?

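Lacking a 'q drmed'-style command for backup sets, one hedged starting point
is to drive everything from the volume history; this is an untested sketch
(table and column names assumed from the VOLHISTORY and LIBVOLUMES views, and
older server levels may not accept the IN subselect):

```sql
-- Total number of backup-set volumes known to the server (online or offline):
select count(distinct volume_name) as "Backupset vols"
   from volhistory
   where type='BACKUPSET'

-- Cross-check against the library to see which of them are checked in:
select count(*) as "Checked in"
   from libvolumes
   where volume_name in
     (select volume_name from volhistory where type='BACKUPSET')
```
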
Thanks for any help/scripts/macros :-)
Rainer


--
---
Rainer Wolf mail: [EMAIL PROTECTED]
tel: ++49 731 50-22482  fax:  ++49 731 50-22471
Computing Center, University of Ulm, Germanyweb: http://www.uni-ulm.de/urz



Re: private volume returns into scratch?

2002-11-26 Thread Rainer Wolf
Hello Kurt,
I think it depends on the time the db-volume was created and on your
deletion-policy of db-volumes like
  'del volh todate=today-10   t=dbb'
... for example freeing volumes older than 10 days.
If you just manually check in/out those volumes you
may experience the following: if the date of
the-checkin-again-of-the-db-volume
minus the date of the-creation-of-the-db-volume
is LOWER than your deletion-policy then that volume may come automatically
into scratch when the 'del volh t=dbb' runs again.
If the time is HIGHER than your deletion-policy, then your
volume may stay private until you update it to scratch.
Knowing that you are checking in db-volumes you may easily, but manually,
use the command you have mentioned and after that
check the state of the successfully-checked-in tape with
a command like
'query drmed  DB_MON '
... if this command shows up something like
 ' No match found using this criteria'
the volume won't go to scratch automatically, and you have checked in the
volume after the date when your expiration had wiped it out.
... if this command shows the volume with something like 'Mountable ...'
the volume will go automatically into scratch because
the expiration will run in future and your volume will be online at that time.

So your problem may be gone if you just check in the volumes
some days later ?

Greetings Rainer




[EMAIL PROTECTED] wrote:

 Hi everybody,

 My environment is TSM 5.1.1.6 on a Win2k server.

 I take every day a full TSM db backup to a private tape volume. The tapes are 
checked in as private.

 However, the past week it happened twice that a database tape was allocated in the
storage pool for the backup of the clients. If I check the activity log, it says
indeed "Scratch volume DB_MON is now defined in storage pool SSL_POOL1."

 I've checked in the tape with a status private, but somehow it was returned as being 
scratch.  Am I'm missing something here? When I perform the command 'checkin libv 
ssl2020 search=bulk status=private', the tape is checked in as private and  it 
shouldn't return to a status scratch. Has anybody else experienced the same behaviour?

 Thanks in advance,
 Kurt

--
---
Rainer Wolf mail: [EMAIL PROTECTED]
tel: ++49 731 50-22482  fax:  ++49 731 50-22471
Computing Center, University of Ulm, Germanyweb: http://www.uni-ulm.de/urz



Re: HELP! Faster Restore than Backups over Gigabit?

2002-10-31 Thread Rainer Wolf

--
---
Rainer Wolf mail: [EMAIL PROTECTED]
tel: ++49 731 50-22482  fax:  ++49 731 50-22471
Computing Center, University of Ulm, Germanyweb: http://www.uni-ulm.de/urz



Re: Backup reporting

2002-09-19 Thread Rainer Wolf

Hello All,

I have two questions on this:
In our server all and only the backup data goes through migration pools.
Comparing this size with the
"Amount of backup files, in kilobytes, sent by the client to the server",
this (I believe) more realistic backup-data size counted by migration
shows a nearly constant difference of about 15-20 %, no matter what time period
is selected.

for example:
tsm: ADSMAIX> select sum(cast(bytes/1024/1024/1024 as decimal(6,3))) "Total GB
Backup from backup_activity" from summary where start_time>=current_timestamp
-30 day and activity='BACKUP'

Total GB Backup from backup_activity

1928.968

tsm: ADSMAIX> select sum(cast(bytes/1024/1024/1024 as decimal(6,3))) "Total GB
Backup from migration_activity" from summary where start_time>=current_timestamp
-30 day and activity='MIGRATION'

Total GB Backup from migration_activity
---
   1631.496

Question 1: is it ok to get the data size of what is being backed up from
the migration ( assuming there is really nothing else doing migration,
like 'move data' ... and accepting that the time period is slightly shifted ),
or is there another way to get the size of backup data ?

Question 2: we are writing archives directly onto tapes, and if using
the summary with activity='ARCHIVE' the output may also be too high.
My idea is to get the amount of archive data from the summary of
activity 'STGPOOL BACKUP' together with entity=... .
Is this ok ( accepting some time shifting ) or is there another
way to get the number of bytes being archived, some SQL script ?


Thanks for any hint !
Rainer


Rushforth, Tim wrote:

 The Amount of backup files, in kilobytes, sent by the client to the server
 does include client retries (at least at 4.2.0 level).  We've just
 experienced this when a node that retries a huge file shows sending 19 GB
 instead of the normal 8 GB.

 Field 17 from the accounting record below shows this (19,350,787).

 4,0,ADSM,09/12/2002,01:46:22,COWSVP08,,WinNT,1,Tcp/Ip,1,0,0,0,0,2569,
 19350787,0,0,19355216,9883,205,9094,0,4,0,0,0,0,2,0

 Tim Rushforth
 City of Winnipeg
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
  Bill Boyer
  Sent: Wednesday, September 18, 2002 3:10 PM
  To: [EMAIL PROTECTED]
  Subject: Re: Backup reporting
 
 
  There was a problem where the bytes transferred in the summary table was
  zero. It has been fixed in later patch levels. I'm not sure what the APAR
  number is or the level where it was fixed.
 
  If you need this data, turn on the accounting records. There is an
  additional field Amount of backup files, in kilobytes, sent by the client
  to the server in addition to the Amount of data, in kilobytes,
  communicated between the client node and the server during the
  session. The
  bytes communicated is the total bytes transferred and includes any
  re-transmissions/retries. I believe the Amount of backup files, in
  kilobytes, sent by the client to the server is just what was sent AND
  stored in TSM.
 
  I haven't fully looked into this, but if I'm trying to get a total for the
  amount of data backed up I would be using this field as opposed
  to the bytes
  transmitted field. Something for me to add to my Honey-Do list..:-)
 
  Bill Boyer
  DSS, Inc.
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
  Mark Bertrand
  Sent: Wednesday, September 18, 2002 2:39 PM
  To: [EMAIL PROTECTED]
  Subject: Re: Backup reporting
 
 
  Paul and all,
 
  When I attempt to use any of the following select statements my Total MB
  returned is always 0. I get my list of nodes but there is never
  any numbers
  for size.
 
  Since this is my first attempt at select statements, I am sure I doing
  something wrong. I have tried from command line and through macro's.
 
  I am trying this on a W2K TSM v4.2.2 server.
 
  Thanks,
  Mark B.
 
  -Original Message-
  From: Seay, Paul [mailto:[EMAIL PROTECTED]]
  Sent: Monday, September 16, 2002 11:43 PM
  To: [EMAIL PROTECTED]
  Subject: Re: Backup reporting
 
 
  See if these will help:
 
  /* SQL Script:   */
  /*   */
  /* backup_volume_last_24hours.sql*/
  /* Date   Description*/
  /* 2002-06-10 PDS Created*/
 
  /* Create Report of total MBs per each session */
 
  select entity as "Node Name", cast(bytes/1024/1024 as decimal(10,3))
  as "Total MB", cast(substr(cast(end_time-start_time as
  char(17)),3,8) as
  char(8)) as "Elapsed", substr(cast(start_time as char(26)),1,19) as
  "Date/Time", case when cast((end_time-start_time) seconds as
  decimal) > 0 then

Re: Bug or request for design?

2002-03-18 Thread Rainer Wolf

Hi,
i understand -- that's one reason why I prefer the command line :-)
A solution may be to change the locations in such a way
that the dsmc/dsm program is in the same directory/filesystem
where your option files and include/exclude file are located.
Using this ( default ) location won't get you into trouble.
For example, when running multiple '*SM nodes' on one machine it is important
to have strict separation, I think.
So a solution may be: if not located on the system disk,
the config + include/exclude file ( + possibly symbolic links to dsmc/dsm
when using multiple *SM nodes on one machine )
should be located together on the same filesystem, to avoid them
needing to be restored first
( if you don't have a copy elsewhere ) prior to the restore of the
crashed filesystem.

Greetings Rainer




Loon, E.J. van - SPLXM wrote:

 Hi Rainer!
 In our case the include/exclude file was located on a crashed disk they were
 trying to restore using TSM. That's why the TSM client reported a options
 file error.
 But could you imagine the panic that our UNIX guys felt when they saw no
 files to restore?
 If Tivoli sees this as a critical error, the GUI should not start at all,
 like the command line interface. It just quits then the options file
 contains errors.
 Kindest regards,
 Eric van Loon
 KLM Royal Dutch Airlines

 -Original Message-
 From: Rainer Wolf [mailto:[EMAIL PROTECTED]]
 Sent: Monday, March 18, 2002 10:50
 To: [EMAIL PROTECTED]
 Subject: Re: Bug or request for design?

 Loon, E.J. van - SPLXM wrote:
 
  Hi *SM-ers!
  I would like to know your opinion on something:
  I opened a PNR for the following TSM client behavior:
  The TSM client (AIX and NT tested) doesn't list any restorable objects
 when
  the options file contains an error.
  Try the following on AIX: rename your include/exclude file and start the
 TSM
  client. It returns the error: ANS1036S Invalid option 'INCLEXCL' found in
  options file. Click on OK and now click the restore button on the main
 GUI.
  No files are listed!
  This was quite disturbing for our UNIX people! When you correct the
  include/exclude and you restart the GUI all files are listed again.
  To my opinion, this is clearly a bug, but this was the response form the
  Tivoli lab:
  With both the command line and the GUI, the TSM client informed the user
 of
  the option file problem and location.  If the option file problem is
  resolved, normal client behavior is returned. This appears to be a request
  for a design change.
  What do you think? Is it a bug or am I a nit-picker?
  Kindest regards,

 Hi ...
  because you then not only cannot restore, but also cannot back up anything,
  I would believe this is really not a bug but a feature.
  So for restore AND backup you have to repair the error, and nothing serious
  happens until then ... and no errors should be allowed ...

 Greetings Rainer

 --

  _
 Rainer Wolf [EMAIL PROTECTED]
 Tel: 0731-50-22482  Fax: 0731-50-22471
 University Computing Center http://www.uni-ulm.de/urz

 **
 For information, services and offers, please visit our web site: http://www.klm.com. 
This e-mail and any attachment may contain confidential and privileged material 
intended for the addressee only. If you are not the addressee, you are notified that 
no part of the e-mail or any attachment may be disclosed, copied or distributed, and 
that any other action related to this e-mail or attachment is strictly prohibited, 
and may be unlawful. If you have received this e-mail by error, please notify the 
sender immediately by return e-mail, and delete this message. Koninklijke Luchtvaart 
Maatschappij NV (KLM), its subsidiaries and/or its employees shall not be liable for 
the incorrect or incomplete transmission of this e-mail or any attachments, nor 
responsible for any delay in receipt.
 **

--

 _
Rainer Wolf [EMAIL PROTECTED]
Tel: 0731-50-22482  Fax: 0731-50-22471
University Computing Center http://www.uni-ulm.de/urz



Re: TSM trying to backup its own, open files..........

2002-02-22 Thread Rainer Wolf

Zoltan Forray/AC/VCU wrote:

 Why is TSM trying to backup its own, open files ?  I thought it was smart
 enough to not do this ?

 I am getting this error message from an NT 6a node !  The client is
 4.2.1.20

  02/21/2002 20:47:46
   ANE4037E (Session: 11678, Node: INFO-OFFICE)  File
 '\\info-office_vcu\c$\Program
 Files\Tivoli\TSM\baclient\dsmsched.log' changed during
 processing.
  File skipped.

Hello,
maybe your
 'Copy Serialization'
... for the used backup management class is set to 'static'
and you may try 'shared dynamic' if you want.

You can check this setting with a
query copy DOMAINNAME  active MANAGEMENTCLASSNAME t=backup f=d


 Also, what is wrong with this exclude statement in my client options file:

 Exclude *:\WINNT\Profiles\*

 I get these errors:

 02/21/2002 20:49:54 Retry # 1  Normal File-- 1,024
 \\info-office_vcu\c$\WINNT\Profiles\Administrator\ntuser.dat.LOG  **
 Unsuccessful **

 

 Zoltan Forray
 Virginia Commonwealth University - University Computing Center
 e-mail: [EMAIL PROTECTED]  -  voice: 804-828-4807



Re: Problem with german

2002-02-07 Thread Rainer Wolf

sorry - it's not the LANG variable as I thought,
but you may get the files backed up if you instead set
the LC_CTYPE variable, for example to en_US.

This works for umlaut files on a SuSE 7.1 / TSM client 4.2.0.0 + TSM client 4.2.1.0.

The LC_CTYPE environment variable on SuSE Linux is normally set to
POSIX.
Also you may check the env variable LC_ALL ... this could override
all the LC variables, which may be the reason a setting of
LC_CTYPE does not work ... so you may also check LC_ALL ( should stay unset ).

Greetings 
Rainer


...
ENVIRONMENT VARIABLES
   LC_CTYPE
   Character classification and case conversion.

for example:
# locale 
LANG=POSIX
LC_CTYPE=en_US
LC_NUMERIC=POSIX
LC_TIME=POSIX
LC_COLLATE=POSIX
LC_MONETARY=POSIX
LC_MESSAGES=POSIX
LC_PAPER=POSIX
LC_NAME=POSIX
LC_ADDRESS=POSIX
LC_TELEPHONE=POSIX
LC_MEASUREMENT=POSIX
LC_IDENTIFICATION=POSIX
LC_ALL=

# dsmc inc /Disks/tst/
Tivoli Storage Manager
Command Line Backup Client Interface - Version 4, Release 2, Level 0.0  
(C) Copyright IBM Corporation, 1990, 2001, All Rights Reserved.

Node Name: TEST
Session established with server ADSMAIX: AIX-RS/6000
  Server Version 4, Release 1, Level 3.0
  Server date/time: 2002-02-07 15:03:11  Last access: 2002-02-07 15:02:47


Incremental backup of volume '/Disks/tst/'
Expiring--5 /Disks/tst/tst [Sent]  
Successful incremental backup of '/Disks/tst/*'

-- 

 _
Rainer Wolf [EMAIL PROTECTED]  
Tel: 0731-50-22482  Fax: 0731-50-22471  
University Computing Center http://www.uni-ulm.de/urz


Stumpf, Joachim wrote:
 
 Hi together,
 
 I told our Linux-gurus to use the export LANG-Parm, but it doesnt work...
 
 root@p25002t0:/var/adm  export LANG=en_US
 root@p25002t0:/var/adm  echo $LANG
 en_US
 root@p25002t0:/var/adm 
 
 After backup the errorlog still contains messages like this:
 23.11.2001 10:07:40 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.FRANZÖSISCH.txt' contains unrecognized symbols 
for current locale, skipping...
 
 I'm not sure...
 Why should setting LANG=en_US solve the problem with german symbols?
 
 Any ideas?
 Perhaps I should open a problem at IBM/Tivoli?
 
 Thanks for help!
 
 regards,
 Joachim Stumpf
 Datev eG
 
 Rainer Wolf [EMAIL PROTECTED] (Dienstag, 20. November 2001, 13:39:45, CET):
 
  Hi,
 
  as in the readme mentioned you must set your LANG
  
  - The TSM Linux x86 B/A client is now enabled to handle file names using
multibyte character sets. The characters are displayed correctly, if the
LANG environment is set to the appropriate language.
  ...
 
  backup on those files will fail, if LANG is not set
   If LANG is not system-wide you may set it in the start script of the scheduler
   and always define it before you use dsmc interactively.
   for example
   bash> export LANG=en_US; dsmc inc
 
  Greetings
  Rainer Wolf
 
  Stumpf, Joachim wrote:
  
   Hi together,
  
   I didnt find something in the list-archive related to the new versions.
   Is there a known problem with TSM-Client for Linux 4.2.1.0 and german
   umlauts (äÄöÖüÜ,ß)?
   We have TSM-Server 4.1.3.0 on OS/390 2.10.
   The Linux version is SuSE 7.3.
  
   If we do a backup we get this in the error.log:
   05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.URSPRÜNGLICHEN.txt' contains unrecognized symbols 
for current locale, skipping...
   05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.URSPRÜNGLICHEN.html' contains unrecognized 
symbols for current locale, skipping...
   05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.MENÜ.txt' contains unrecognized symbols for 
current locale, skipping...
   05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.WEIß.txt' contains unrecognized symbols for 
current locale, skipping...
   05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.GELÖSCHT.txt' contains unrecognized symbols for 
current locale, skipping...
   05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.GEBÜHREN.html' contains unrecognized symbols for 
current locale, skipping...
   05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.SCHLÜSSEL.txt' contains unrecognized symbols for 
current locale, skipping...
   05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.NICHTUNTERSTÜTZT.html' contains unrecognized 
symbols for current locale, skipping...
   05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.HÄNGEN.txt' contains unrecognized symbols for 
current locale, skipping...
   05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.HÄNGER.txt' contains unrecognized symbols for 
current locale, skipping...
   05.11.2001 16:28

Re: Strange 3590/3494-volume-behavior..?

2002-01-07 Thread Rainer Wolf

Hello,
maybe you just need to halt and restart the server.
I had the same phenomenon some weeks ago with volumes that had been
written directly by clients while at the same time the log file had reached
100 % before the triggered DB backup ended ...
( this should not normally happen )
First I tried the same as you ( 3494/3590/tsm4.1.3.0 ) and the restart did it
... the volumes became accessible as usual.

-- 

Mit freundlichen Grüßen / best regards
Rainer Wolf


 __
   
 Rainer Wolf  [EMAIL PROTECTED]  
 Tel: 0731-50-22482   Fax: 0731-50-22471   
 University of Ulmhttp://www.uni-ulm.de/urz
 University Computing Center  Albert-Einstein-Allee 11   
 AG Basissysteme  89069 Ulm


Tom Tann{s wrote:
 
 Hello TSM'ers!
 
 I have a problem with a volume in my 3494-library..
 
 Tsm-server is 4.2.1.7, AIX.
 
 I discovered the problem a few days ago, when trying to move data from the
 volume to another stg-pool.
 Tsm incists that the volume is inaccessible.
 An audit also result in the same:
 
 01/07/2002 16:57:05  ANR2321W Audit volume process terminated for volume ORA052
   - storage media inaccessible.
 
 The volume was dropped by the gripper a week or so ago, but it was
 re-entered via the recovery-cell. I have inventoried the frame, audited
 the library on the tsm-server. A manual mount,
 mtlib -l /dev/lmcp0 -m -f/dev/rmt1 -VORA052 /mtlib -l /dev/lmcp0 -d -f/dev/rmt1
 work fine.
 I've even done checkout/checkin libvol.
 
 But now I'm stuck...
 
 ANy suggestions on where to look/what to try, would be appreciated...
 
  Tom
 
 tsm: SUMOq libvol 3494 ORA052
 
 Library Name   Volume Name   Status   OwnerLast UseHome Element
    ---   --   --   -   
 3494   ORA052Private
 
 sumo# mtlib -l /dev/lmcp0 -qV -VORA052
 Volume Data:
volume state.00
logical volume...No
volume class.3590 1/2 inch cartridge tape
volume type..HPCT 320m nominal length
volser...ORA052
category.012C
subsystem affinity...03 04 01 02 00 00 00 00
 00 00 00 00 00 00 00 00
 00 00 00 00 00 00 00 00
 00 00 00 00 00 00 00 00
 
 tsm: SUMOq vol ora052 f=d
 
Volume Name: ORA052
  Storage Pool Name: ORATAPE
  Device Class Name: BCKTAPE
Estimated Capacity (MB): 40,960.0
   Pct Util: 42.1
  Volume Status: Filling
 Access: Read/Write
 Pct. Reclaimable Space: 0.0
Scratch Volume?: No
In Error State?: No
   Number of Writable Sides: 1
Number of Times Mounted: 35
  Write Pass Number: 2
  Approx. Date Last Written: 12/24/2001 05:29:05
 Approx. Date Last Read: 12/11/2001 04:50:20
Date Became Pending:
 Number of Write Errors: 0
  Number of Read Errors: 0
Volume Location:
 Last Update by (administrator): TOM
  Last Update Date/Time: 01/07/2002 16:49:34



world-writable dsmsched.log

2001-11-20 Thread Rainer Wolf

Hello ,
I wonder if someone has had the same problem, that on unix clients
the dsmc scheduler leaves dsmsched.log / dsmerror.log files
whose protection ends up as a world-writable permission.
Has someone seen this effect and stopped it ?

for example to produce this effect on unix:
unix-client:...//# ls -l dsmerror.log
dsmerror.log: No such file or directory
unix-client:...//# umask
022
unix-client:...//# dsmc q gjdfsgl
ANS1138E The 'QUERY' command must be followed by a subcommand
unix-client:...//# ls -l dsmerror.log
-rw-rw-rw-   1 root other 82 Nov 20 12:46 dsmerror.log
unix-client:...//# dsmc
Tivoli Storage Manager
Command Line Backup Client Interface - Version 4, Release 2, Level 0.0  
(C) Copyright IBM Corporation, 1990, 2001, All Rights Reserved.

tsm> 
 
how can I get these files without the world-write permissions -
any hints?
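What I'm considering as a workaround (just a sketch - whether dsmc preserves
the mode of a pre-existing log file is an assumption I'd still need to verify):
pre-create the log files with tight permissions before the scheduler starts.

```shell
#!/bin/sh
# Pre-create the client log files with owner-only permissions so a
# later append by dsmc does not (re)create them world-writable.
# LOGDIR is an example path - adjust to where your dsm.opt points.
LOGDIR=${LOGDIR:-/tmp/tsm-logs}
mkdir -p "$LOGDIR"
for f in dsmerror.log dsmsched.log; do
    touch "$LOGDIR/$f"
    chmod 600 "$LOGDIR/$f"      # owner read/write only
done
ls -l "$LOGDIR"
```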

-- 

Mit freundlichen Grüßen / best regards
Rainer Wolf


 __
   
 Rainer Wolf  [EMAIL PROTECTED]  
 Tel: 0731-50-22482   Fax: 0731-50-22471   
 University of Ulmhttp://www.uni-ulm.de/urz
 University Computing Center  Albert-Einstein-Allee 11   
 AG Basissysteme  89069 Ulm



Re: Problem with german umlaut in Linux-Client?

2001-11-20 Thread Rainer Wolf

Hi,

As mentioned in the readme, you must set your LANG:

- The TSM Linux x86 B/A client is now enabled to handle file names using
  multibyte character sets. The characters are displayed correctly, if the
  LANG environment is set to the appropriate language.
...

Backup of those files will fail if LANG is not set.
If LANG is not set system-wide, you may set it in the start script of the scheduler,
and always define it before you use dsmc interactively,
for example:
bash# export LANG=en_US; dsmc inc
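A minimal start-script sketch (the path in the comment and the LANG value are
examples, not taken from any particular distribution):

```shell
#!/bin/sh
# Sketch of a scheduler start script that pins the locale before the
# client runs, so multibyte file names are handled. de_DE is an example.
LANG=de_DE
export LANG
echo "scheduler would start with LANG=$LANG"
# exec dsmc schedule    # real start line, commented out in this sketch
```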

Greetings
Rainer Wolf

Stumpf, Joachim wrote:
 
 Hi together,
 
 I didnt find something in the list-archive related to the new versions.
 Is there a known problem with TSM-Client for Linux 4.2.1.0 and german
 umlauts (äÄöÖüÜ,ß)?
 We have TSM-Server 4.1.3.0 on OS/390 2.10.
 The Linux version is SuSE 7.3.
 
 If we do a backup we get this in the error.log:
 05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.URSPRÜNGLICHEN.txt' contains unrecognized symbols 
for current locale, skipping...
 05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.URSPRÜNGLICHEN.html' contains unrecognized 
symbols for current locale, skipping...
 05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.MENÜ.txt' contains unrecognized symbols for 
current locale, skipping...
 05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.WEIß.txt' contains unrecognized symbols for 
current locale, skipping...
 05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.GELÖSCHT.txt' contains unrecognized symbols for 
current locale, skipping...
 05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.GEBÜHREN.html' contains unrecognized symbols for 
current locale, skipping...
 05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.SCHLÜSSEL.txt' contains unrecognized symbols for 
current locale, skipping...
 05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.NICHTUNTERSTÜTZT.html' contains unrecognized 
symbols for current locale, skipping...
 05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.HÄNGEN.txt' contains unrecognized symbols for 
current locale, skipping...
 05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.HÄNGER.txt' contains unrecognized symbols for 
current locale, skipping...
 05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.AUSWÄHLEN.html' contains unrecognized symbols for 
current locale, skipping...
 05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.TÄST.html' contains unrecognized symbols for 
current locale, skipping...
 05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.AUSFÜHRUNG.html' contains unrecognized symbols 
for current locale, skipping...
 05.11.2001 16:28:28 fioScanDirEntry(): Object 
'/usr/share/doc/sdb/de/html/keylist.FRANZÖSISCH.txt' contains unrecognized symbols 
for current locale, skipping...
 05.11.2001 16:28:32 ANS1228E Sending of object '/home/t06286a/KDesktop/Mülleimer' 
failed
 05.11.2001 16:28:32 ANS1304W Active object not found
 
 05.11.2001 16:28:32 ANS1802E Incremental backup of '/' finished with 1 failure
 
 05.11.2001 16:28:34 ANS1512E Scheduled event 'P25002T0' failed.  Return code = 4.
 
 Any help would be great.
 
 --
 Joachim Stumpf
 Datev eG




complete change of hardware and server-platform ?

2001-06-28 Thread Rainer Wolf

Dwight, we tested moving the TSM DB from AIX to Solaris and it worked very well -
so maybe we will use this as a regular 'quick stand-by server',
only for restore/retrieve purposes (on data located on an
offsite copy server) and *nothing else*
(no backup - no archive - no dbbackup ...), and only for
a short time in case of a CPU crash or in case of disaster.

It's not imminent right now, but my question is: if there is a need to change
the complete hardware (server platform, backup and archive libraries),
and assuming all data is (maybe temporarily for this action)
located on an offsite copy server ...
... can I then move the DB to a new server platform to get access
to the DB and to the offsite copy pools,
then create new device classes pointing to the completely new
library hardware, create a new primary STG on these newlib device classes,
and restore it all with a command like
restore stgpool OLDSTG copystgpool=offsite-copy-pool newstgpool=NEWSTG

Is this a usual way to change to new server platforms/libraries/tapes ... ?
- it seems to be easy and straightforward -
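For the archives, the sequence I have in mind would look roughly like this
(a sketch only - the devclass parameters and pool names are placeholders,
and the exact parameters should be checked against the Administrator's
Reference for the server level in use):

```
define devclass NEWLIB_CLASS devtype=... library=NEWLIB
define stgpool NEWSTG NEWLIB_CLASS maxscratch=...
update volume * access=destroyed wherestgpool=OLDSTG
restore stgpool OLDSTG copystgpool=OFFSITECOPY newstgpool=NEWSTG
```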


Thanks in advance for any hints !
Rainer

---
the test we have done:
- on production server-a (AIX, TSM 4.1.3.0), doing some (not all) of the
full DBBs into a flatfile located on a Solaris system
- on this Solaris system, server-b (TSM 4.1.3.0) with no tapes - only disk -
and just the Solaris server package installed, we load the DB exported
by the production server-a
(this Solaris system is some km away from the production server)
- after shutting down the production TSM server on the AIX system
- not to run both at once - we can start this Solaris
'quick stand-by server' and have access
- to all the DB entries
- to all data located on the offsite copy server (server-c)
The dsmserv.opt on the Solaris server was started with
DISABLESCHEDS set to YES to avoid any automatic processes,
and
EXPINTERVAL set to 0 to avoid any DB expiration processes;
additionally we deleted all admin schedules on server-b after starting.
All primary volumes had their access changed to destroyed.
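The relevant dsmserv.opt fragment for such a stand-by server would be
(a sketch - verify the option names against the server documentation for
your level before relying on them):

```
* stand-by server: no automatic activity
DISABLESCHEDS YES    * neither client nor admin schedules start
EXPINTERVAL   0      * no automatic inventory expiration
```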

So the only thing would then be to change the clients'
'TCPServeraddress' option to the IP address of the Solaris server.
Any client with data that has a copy on the 'offsite copy server'
can then immediately restore/retrieve data (as of the time of the
last flatfile full backup - which may not be the most recent one).

end of test
-




 __

 Rainer Wolf  [EMAIL PROTECTED]
 Tel: 0731-50-22482   Fax: 0731-50-22471
 University of Ulmhttp://www.uni-ulm.de/urz
 University Computing Center  Albert-Einstein-Allee 11
 AG Basissysteme  89069 Ulm




Cook, Dwight E wrote:

 Uhm I might just have to put me together a sun box  try this...
 Dwight

 -Original Message-
 From: France, Don G (Pace) [mailto:[EMAIL PROTECTED]]
 Sent: Friday, April 13, 2001 4:50 PM
 To: [EMAIL PROTECTED]
 Subject: Re: ADSM database backup to a file.

 There are specific-path references, and file-system-dependent things in the
 TSM data base - which makes each server you install PLATFORM-SPECIFIC.
 So... the short answer is no.

  -Original Message-
 From:   Rainer Wolf [mailto:[EMAIL PROTECTED]]
 Sent:   Wednesday, April 11, 2001 6:04 AM
 To: [EMAIL PROTECTED]
 Subject:Re: ADSM database backup to a file.

 Hi ,

 Can I also create this 'flatfile' on a AIX system ( server-a )
  and restore the server on a Solaris system ( server-b ) ?

 I would like to use this solaris-adsm/tsm- server only for a quick restore
 of data previously backed up on server-a which uses copy-Storagepools
 via server-server on a third system- (server-c)  and I don't want to use
 this solaris system for backup, because it only has disks and no library
 ... someone using such a configuration -or is this quite anomalous ?

 ( Szenario : server-a and Clients from this server-a
 (with 'client-data-copys-send-to-server-c' )
 are completely destroyed - then trying to restore latest
 active backups for the Clients as fast as possible on server-b using
 just the copy from server-c )

 Thanks in advance  for any hints !

 Rainer

 Cook, Dwight E wrote:
 
  Sure, to move an adsm environment across town where I was a few states
 away
  and didn't want to fly in for a half day...
  define a device class of FILE and use it to backup the DB.
  I did a full, then FTP'ed it over to a new machine that was to become the
  server... as soon as I got the full FTP'ed over I did a restore with
  commit=no, then I locked out clients, did

Re: Recovery Log

2001-05-21 Thread Rainer Wolf

Gill, Geoffrey L. wrote:
 
 This morning I noticed my recovery log had risen to 83.2 at some point over
 the weekend. I'm already doing a full and one incremental backup of the log
 every day to keep it's size down. As you can see it's only 4GB and I still
 have room.
 
 The real question is, how and why did it get that high in the first place?
 The previous high had been 39.3%. Is there any way to find out what caused
 this or exactly when it happened?

Hello Gill,
you may check how long a second version of a file stays
in your default backup management class. If you count back that time
from the date your log file was so high and you arrive at a date when
a lot of files changed at once (e.g. when daylight saving time came) ...
then the reason may be that the date when a lot of those versions
expired from your server had just passed,
and you had a huge expiration process running on that day(s).
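To pin down exactly when it happened, the activity log can be searched
around the suspect weekend (a sketch - adjust the date range as needed):

```
query actlog begindate=today-7 search=EXPIRATION
```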

-- 

Mit freundlichen Grüßen / best regards
Rainer Wolf


 __
   
 Rainer Wolf  [EMAIL PROTECTED]  
 Tel: 0731-50-22482   Fax: 0731-50-22471   
 University of Ulmhttp://www.uni-ulm.de/urz
 University Computing Center  Albert-Einstein-Allee 11   
 AG Basissysteme  89069 Ulm



Re: 3494 setup on new host server

2001-05-18 Thread Rainer Wolf

Gina Glenewinkel wrote:
 
 This weekend I'm moving my ADSM server to a new AIX box and the only part
 I'm really nervous about is the 3494 library.  Can someone point me to a
 good set of procedures for configuring and setting up this library on a new
 host?  The particular piece that concerns me is that the new host is on a
 different subnet than the current server.  Will that involve some routing
 configuration changes on the 3494 itself?
 
 thanks,
 /gina

... using a direct rs232 connection you may just check the current config
i.e.
# lsdev -C -c tape
... something like 
rmt0  Available 10-60-00-5,0 4.0 GB 4mm Tape Drive
rmt1  Available 30-68-00-0,0 IBM 3590 Tape Drive and Medium Changer
rmt2  Available 30-68-00-1,0 IBM 3590 Tape Drive and Medium Changer
rmt3  Available 10-70-00-2,0 IBM 3590 Tape Drive and Medium Changer
rmt4  Available 10-70-00-3,0 IBM 3590 Tape Drive and Medium Changer
lmcp0 Available  LAN/TTY Library Management Control Point
# 

and see also the file /etc/ibmatl.conf 
i.e.
# cat /etc/ibmatl.conf
3494   /dev/tty1   dev-null
#
... to get the  symbolic name of the library you are using

To add again the library manager controlpoint (i.e. /dev/lmcp0 )
 
-first install the drivers for drives and library you can use   
-add the drives with cfgmgr
-add the direct rs232 ... library manager controlpoint
in this example '3494' is the symbolic name of the library - you 
just replace it with the name of your library
with  
/etc/methods/defatl -ctape -slibrary -tatl -a library_name='3494'

... this should create the /dev/lmcp0 entry for the library named in the 
/etc/ibmatl.conf file 

or 
/etc/methods/defatl -ctape -slibrary -tatl -a library_name='3494' -l'lmcp#'

... by replacing # with any number other than 0,
if you have other definitions

It should just simply match your current definition in adsm 

i.e.
tsm: ADSMAIX> q libr

Library Name   Library   Device       Private    Scratch    External
               Type                   Category   Category   Manager
------------   -------   ----------   --------   --------   --------
3494           349X      /dev/lmcp0   300        301

 




- 

Mit freundlichen Grüßen / best regards
Rainer Wolf


 __
   
 Rainer Wolf  [EMAIL PROTECTED]  
 Tel: 0731-50-22482   Fax: 0731-50-22471   
 University of Ulmhttp://www.uni-ulm.de/urz
 University Computing Center  Albert-Einstein-Allee 11   
 AG Basissysteme  89069 Ulm



Re: Archive retention

2001-04-27 Thread Rainer Wolf

Hi Steve, we use the same scheme - 
'archive_3y, archive_5y, archive_8y etc' with final storagepools based on 
mo-drives in 3995-C66  - all these will have 2 copies - one on an offsite 
copypool and one in an offsite local safe.

... beside this we also use a second series like
'offline_2d, offline_3m, offline_6m etc.', the difference being that
these are shorter ones and the final STGs are
just tapes which have one (cheaper) copy in a local safe -
that series is just useful for some kind of snapshot archives, some
kind of scratch archives, or something like a panic archive etc.
- the associated copy pool in this series is not as highly available
as those for the long-term archives.


Mit freundlichen Grüßen / best regards
Rainer Wolf


 __
   
 Rainer Wolf  [EMAIL PROTECTED]  
 Tel: 0731-50-22482   Fax: 0731-50-22471   
 University of Ulmhttp://www.uni-ulm.de/urz
 University Computing Center  Albert-Einstein-Allee 11   
 AG Basissysteme  89069 Ulm


Steve Harris wrote:
 
 Ok, so the implication here is that there should be multiple Archive management 
classes named for say each application, rather than a single set on Archmcs for all 
applications.
 
 I currently have set up arch_1y, arch_9m, arch_6m, arch_3m etc
 if anyone archives something to say arch_1Y and then wants the retention extended 
then they can only retrieve and re-archive (or have the archmc names not match their 
retention periods, ugh!)
 
 whereas if I had appA_longterm and appB_longterm both set at 1 year I could change 
these independently as the application archive requirements changed.
 
 Does anyone use such a scheme?
 
 Steve Harris
 AIX and ADSM Admin
 Queensland Health, Brisbane Australia
 
  Richard Sims [EMAIL PROTECTED] 25/04/2001 6:44:00 
 Archived copies of files have their expiration date set upon the file being
 archived based on current date plus the duration of the archive management
 class...
 
 Dwight - The Archives table contains an Archive_Date column: there is no
  Expiration_Date column, as the files conform to whatever the
 prevailing management class retention rules are at the time.  So if you
 extend your retention policy, it pertains to all archive files, old
 and new.
 
Richard Sims, BU

--



Re: ADSM database backup to a file.

2001-04-11 Thread Rainer Wolf

Hi ,

Can I also create this 'flatfile' on a AIX system ( server-a ) 
 and restore the server on a Solaris system ( server-b ) ?

I would like to use this solaris-adsm/tsm- server only for a quick restore 
of data previously backed up on server-a which uses copy-Storagepools 
via server-server on a third system- (server-c)  and I don't want to use 
this solaris system for backup, because it only has disks and no library 
... someone using such a configuration -or is this quite anomalous ?

( Szenario : server-a and Clients from this server-a 
(with 'client-data-copys-send-to-server-c' ) 
are completely destroyed - then trying to restore latest 
active backups for the Clients as fast as possible on server-b using
just the copy from server-c )  


Thanks in advance  for any hints !

Rainer

"Cook, Dwight E" wrote:
 
 Sure, to move an adsm environment across town where I was a few states away
 and didn't want to fly in for a half day...
 define a device class of "FILE" and use it to backup the DB.
 I did a full, then FTP'ed it over to a new machine that was to become the
 server... as soon as I got the full FTP'ed over I did a restore with
 commit=no, then I locked out clients, did an incremental, FTP'ed that one
 over, did a restore with commit=yes and started up TSM.  (while I was doing
 that the DNS folks were doing there thing, then the clients just had to
 bounce their schedulers...)
  DEFine DEVclass device_class_name DEVType=FILE
     [MOUNTLimit=mountlimitvalue]    (default: MOUNTLimit=1)
     [MAXCAPacity=size]              (default: MAXCAPacity=4M)
     [DIRectory=directory_name]      (default: the current directory)
 
 so something like
 def devc FLATFILE devt=file maxcap=4096M dir=/usr/adsm/flatfile
 then just use it like
 backup db t=f s=y dev=flatfile
 and it will create a file in /usr/adsm/flatfile
 to automatically get rid of the file in that directory, do like you would
 normally... del volhist t=dbb tod=-x and any db backup files in
 /usr/adsm/flatfile older than "x" will be deleted...
 
 Dwight
 
 -Original Message-
 From: Zosimo Noriega (ADNOC IST) [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, April 10, 2001 1:10 AM
 To: [EMAIL PROTECTED]
 Subject: ADSM database backup to a file.
 
 Hi everyone,
 Can i backup my adsm db into a file because i usually backed up into tapes
 using the devclass.
 if possible, please provide the commands or steps how to do it and how to
 restore it.
 
 thanks,
 zosi

-- 

Mit freundlichen Grüßen / best regards
Rainer Wolf


 __
       
 Rainer Wolf  [EMAIL PROTECTED]  
 Tel: 0731-50-22482   Fax: 0731-50-22471   
 University of Ulmhttp://www.uni-ulm.de/urz
 University Computing Center  Albert-Einstein-Allee 11   
 AG Basissysteme  89069 Ulm



Re: Backup Policies - Kind of FAQs

2001-04-04 Thread Rainer Wolf

Hi Mahesh, 

Mahesh Babbar wrote:
3. A file backed up and not changed is called an ACTIVE version and
remain there in the backup system FOREVER. 

... I would keep in mind that a file could also change from
ACTIVE to INACTIVE without the file being removed or changed,
but through a change of an include/exclude statement, which might be
initiated by the user/admin of the client system.
So, presuming there is also no change of include/exclude statements
that affects the file ...
... it remains in the backup system FOREVER ...

I think an active file just remains FOREVER in the backup system
as long as you don't run any more incremental backups, e.g. because
disaster happened to the client machine.
  

-- 

Mit freundlichen Grüßen / best regards
Rainer Wolf


 __
   
 Rainer Wolf  [EMAIL PROTECTED]  
 Tel: 0731-50-22482   Fax: 0731-50-22471   
 University of Ulmhttp://www.uni-ulm.de/urz
 University Computing Center  Albert-Einstein-Allee 11   
 AG Basissysteme  89069 Ulm



Re: Longlasting tape-reclamation run

2001-03-05 Thread Rainer Wolf

Hello Geoff,

Richard Sims wrote:
 
 Geoff - Greetings from Up Over.  ;-)
 
 2. Influence of client type.
 
I have clients of the following types: Novell Netware, Unix, NT, and
 also NT with the Lotus Notes agent.  Since I have collocation on my
 onsite tape pool, I was able to determine that the tapes causing trouble
 all belonged to Notes clients.  Looking at a list of my tape pool today
 (about 200 volumes), I can say that for the non-Notes clients, the
 number of clusters is always less than 10.  The Notes client volumes have
 HUNDREDS (highest today is 967).
I don't know if this is something to do with the Notes agent itself,
 or just a result of the fact that Notes seems to generate vast numbers of
 very small documents.
 
 Though we may have collocation activated in the server, I believe it to be
 the general case that API-based clients either cannot or do not collocate.
 (This is the case with HSM, at least.)  API-based clients which back up
 numerous small client files thus pose a special burden on the server.
 
Setting up several STGs on our server (ADSM 3.1.2.40), I only have one STG with
(client) collocation - this is the only one which shows 'clusters',
between 2 and just 4 ... This pool consists of just 2 clients (a Solaris
multipurpose file server, client compression turned on) and at the moment
has 19 3590E volumes / 3 million files.
- the reclamation threshold is at 50% and it runs without problems -
I just believe it has to do with the number of clients, the kind of collocation
in the STG, and finally the total number of tapes in that pool ...


-- 

Mit freundlichen Grüßen / best regards
Rainer Wolf


 __
   
 Rainer Wolf  [EMAIL PROTECTED]  
 Tel: 0731-50-22482   Fax: 0731-50-22471   
 University of Ulmhttp://www.uni-ulm.de/urz
 University Computing Center  Albert-Einstein-Allee 11   
 AG Basissysteme  89069 Ulm



Re: ANS1304W on 4.1

2001-02-26 Thread Rainer Wolf

Reinhold Wagner wrote:
 
 Folks,
 
 i searched the archives and found several threads about ANS1304W, but non of
 them seem to be
 related to Version 4.
 
 We migrated a week ago from 3.1 to 4.1.2 and now see _ANS1304W Active object not
 found_ in a
 client's error log. In the archives i learned that it's a problem with national
 language
 characters and this could be true in this case also.
 
 Our Environment:
 
 Server: TSM 4.1.2.0, AIX 4.3.3
 Client: TSM 4.1.2.0, Solaris 7
 
 'couse it's 18:29 here and we don't have a support contract which covers this
 time I'll try
 my luck here.
 
 TIA
 
 Reinhold Wagner, Zeuna Staerker GmbH  Co. KG

Hello Reinhold,

reading your mail, I just found the same:

Our Environment:
Server: TSM 3.1.2.40, AIX 4.3.3
Client: TSM 4.1.2.0, Solaris 8

at LEAST all files containing a "?" are affected, and that's easy to reproduce:

# echo "ob das auch gut geht ? " > echt\?gut
# dsmc inc
...
# rm echt\?gut
# dsmc inc 
...
Retry # 1  Directory--   1,024 /priv/ [Sent]  
Retry # 1  Expiring--   34 /priv/echt?gut  ** Unsuccessful **
ANS1228E Sending of object '/priv/echt?gut' failed
ANS1304W Active object not found

restore, and older versions of those files, seem to work  ... ?
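To see up front which files would be affected, one can scan for names
containing a literal '?' (a sketch; /priv is the example path from above):

```shell
# List files whose names contain a literal '?'; the backslash makes
# find treat '?' as a literal character instead of a wildcard.
find /priv -name '*\?*' -print
```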

-- 

Mit freundlichen Grüßen / best regards
Rainer Wolf


 __
       
 Rainer Wolf  [EMAIL PROTECTED]  
 Tel: 0731-50-22482   Fax: 0731-50-22471   
 University of Ulmhttp://www.uni-ulm.de/urz
 University Computing Center  Albert-Einstein-Allee 11   
 AG Basissysteme  89069 Ulm



Re: Linux, TSM problem.

2001-02-05 Thread Rainer Wolf

Hi Anders ,
the problem is known and will be corrected in Version 4.2  
( ... don't know when ? )



Mit freundlichen Grüßen / best regards
Rainer Wolf





APAR Text


+++

ERROR DESCRIPTION:
Working with the TSM 4.1.2 client for SuSE Linux fails when
trying to back up objects containing German umlauts.
Dsmerror.log shows messages:
"fioScanDirEntry(): Object '/local/xcc4010/notes/hs.rs.id' con-
tains unrecognized symbols for current locale, skipping..."
or "ANS1228E Sending of object '/pub/FAZMM/xcc4007/lotus/arbeit/
smartctr/Gesch"ftliches' failed "
Similar problem noted with Red Hat Linux v6.2 (Japanese)
running TSM Client v4.1.2 during backup. dsmerror.log shows:
fioScanDirEntry(): Object '/home/' contains unrecognized sym
ANS1228E Sending of object '/home/' failed


LOCAL FIX:
Temporary workaround is to move back to 4.1.1 client

+




Bäcklund Anders wrote:
 
 I got the same problem here at volvo in Sweden. After a lot of same messages
 the scheduler stop with following error:
 
 2001-02-02 23:31:18 fioScanDirEntry(): Object
 '/usr/lib/linuxconf/images/no.gif' contains unrecognized symbols for
 current locale, skipping...
 2001-02-02 23:31:18 fioScanDirEntry(): Object
 '/usr/lib/linuxconf/images/lohy.gif' contains unrecognized symbols for
 current locale, skipping...
 2001-02-02 23:31:23 B/A Txn Producer thread, fatal error, signal 6
 
 This is our first running system on Linux with tivoli and have no idea how I
 shall solve this problem, so please if you get any answer send a not to me.
 
 Best regards,
 ___
 Anders Bäcklund
 Volvo Information Technology AB
 System Progr. Storage, dept 8255, DA1S
 SE-405 08 Göteborg, Sweden
 
  Telephone: +46 31 7651586
  E-mail: [EMAIL PROTECTED]
 
 

-- 

 __
   
 Rainer Wolf  [EMAIL PROTECTED]  
 Tel: 0731-50-22482   Fax: 0731-50-22471   
 University of Ulmhttp://www.uni-ulm.de/urz
 University Computing Center  Albert-Einstein-Allee 11   
 AG Basissysteme  89069 Ulm



LinuxClient fails on some files

2001-01-25 Thread Rainer Wolf

Hello All!

on a linux client
Tivoli Storage Manager (TSM)
Linux86 Backup-Archive Client
Version 4, Release 1, Level 2

we have the problem that files which contain
German Umlaute (e.g. ä, ö, ü) are not backed up.

The corresponding error message in the dsmerror.log shows eg.

08-01-2001 21:38:07 fioScanDirEntry(): Object '/home/weber/Backup
Folder/23.10.2000/alte Projekte/WoTel/Verffentlichungen' contains
unrecognized symbols for current locale, skipping...

Can someone give me a hint where to look for any tsm option ... ?
or do I need a workaround?
... I have not found any message concerning this 'current locale'
in this list --
Is this message just referring to settings of the OS?
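A quick check of whether the OS-level locale is the issue (a sketch):

```shell
# Show the effective locale of the current shell; an unset or
# C/POSIX LANG makes the client skip names with non-ASCII bytes.
locale
echo "LANG=${LANG:-<unset>}"
```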
  -- 

Mit freundlichen Grüßen / best regards
Rainer Wolf


 __
   
 Rainer Wolf  [EMAIL PROTECTED]  
 Tel: 0731-50-22482   Fax: 0731-50-22471   
 University of Ulmhttp://www.uni-ulm.de/urz
 University Computing Center  Albert-Einstein-Allee 11   
 AG Basissysteme  89069 Ulm



Windows is freezing after deinstalling ADSM3.1.X and Installing 4.1.1

2000-10-20 Thread Rainer Wolf

Hi  *SMer,

can someone please help me with the following symptom, which occurs on:
(Server: 3.1.2.40)
Windows32: NT SP4 system

The client had a 3.1.* version installed - my steps were
- uninstall adsm3.1.*
reboot
- got tsm4.1.1 Client for Windows - unpack ...
reboot
- continued installation
- select all Components and different Location for Software
reboot

... I expected now to continue with the config, and that's it -
but: after a double-click on the TSM icon, or starting the graphical dsm
another way, the system freezes and no reset is possible.
Only a hard reset works.

After reboot the freezing system the only message
I found in the dsmsched.log was:

ReadPswdFromRegistry(): RegOpenPathEx(): Win32 RC=2

the passwordaccess option in dsm.opt is set to generate, and
at this point I checked that the
non-graphical interface worked without problems
- I can set a new password and do incremental backups.
Trying the GUI again just leads to the hard-reset button.
What could I try?
... btw ... the GUI starts to come up and
I can see someone who I don't know, but the person looks into another
computer, and that's just the point where no-change-will-come ...

Thanks in advance
Rainer Wolf



--
 __

 Rainer Wolf   [EMAIL PROTECTED]
 Tel: 0731-50-22482  Fax: 0731-50-22471
 Universitaet Ulm
 Universitaetsrechenzentrum Albert-Einstein-Allee 11
 AG Basissysteme  89069 Ulm



TSANDS-version for ADSM-Client for Netware 4.10

2000-08-24 Thread Rainer Wolf

Hello,

Using ADSM for Netware 4.10 I have a question about the SMS
requirements:

The README of the ADSM client version 3.1.0.8 for Novell Netware says
that version 4.14 of the module TSANDS.NLM is needed when using
Netware 4.10 or 4.11. But such a version does not exist for Netware 4.10.

Is it ok to use version 4.13? Or should I use version 4.14, which is
made for Netware 4.11 and 4.2?

rgds

Martin