Re: Question about "Storage hierarchy"

2014-04-16 Thread bjoern.nacht...@gwdg.de

Dear all,
hi Rick,

thanks for the answer.

In the past I observed that migrating from a SAS-based DISK pool to a 
SATA-based FILE pool is much faster than DISK => LTO4, and even 
FILE pool => LTO4 runs much faster than DISK => LTO4.


So now I'm thinking of a setup with DISK and FILE pools in front of the 
tapes:


1) diskpool, size ~ 100% of a **typical** daily backup amount, based on SAS
2) filepool, size ~ 3--4x the diskpool size, based on SATA (nlSAS)
=> with this size we can handle a library failure over a weekend as well 
as some clients doing large backups

3) tapepools

Because the tapes are shared between several TSM servers, I don't get many 
mount points, especially for bypassing the staging pools.


I thought of DISK because, thanks to random access, each volume can handle 
more concurrent sessions,


but:
maybe it's smarter to have a FILE pool only -- with enough 
mount points for backup sessions and migrations?
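
To make that concrete, here's a rough sketch of how such a hierarchy could 
be defined. All pool, device class and path names below are placeholders, 
not our real config, and the LTO4 device class (lto4class) is assumed to 
exist already:

    define stgpool diskpool disk nextstgpool=filepool highmig=90 lowmig=0
    define volume diskpool /tsm/disk/vol01.dsm formatsize=50000

    define devclass fileclass devtype=file maxcapacity=50g mountlimit=32 directory=/tsm/filepool
    define stgpool filepool fileclass maxscratch=200 nextstgpool=tapepool

    define stgpool tapepool lto4class maxscratch=500

The mountlimit on the FILE device class is what would give us the extra 
"mount points" compared to the real tape drives.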


=> any comments on this?

Thanks & best regards,
Bjørn

Rhodes, Richard L. wrote:

Hi,

Isn't reading IBM docs fun!


=> does this really mean that when one client is flooding my staging pools,
the migration processes become concurrent with new data that is written directly
to the NEXTSTGPOOL because the first pool is full? If the next pool has a limited
number of mount points (e.g. tape drives), this will cause the next, even bigger 
problem.

In the "normal" TSM setup you make your disk pool big enough to hold a night's
worth of backups (or whatever your backup window is).
Then during the day (backups not running) you migrate from the disk pool to the
NEXTSTGPOOL (tape, Data Domain), emptying the disk pool.  Repeat every day.

If the disk pool runs out of space, TSM will attempt to send files directly 
to the next pool.
This situation will flood your tape drives.  You really don't want this to 
happen.

If you specify a file size limit on the disk pool, files bigger than the limit 
go directly to the next pool.  Yes, this uses some/many tape drives, and you 
can run out.  Think of this option as protecting your disk pool from being 
flooded by really big backups, or as a way to keep the size of your disk pool 
down.

This is a resource balancing act that you have to watch and tune for your 
environment.
How big to make your disk pool.
How many tape drives you have available.
Enough tape drives, disk bandwidth and time to empty the disk pools.
Enough tape drives, disk bandwidth and time to update copy pools.

There's no one right answer ... it depends!

Rick




Re: Question about "Storage hierarchy"

2014-04-16 Thread Rhodes, Richard L.
Hi,

Isn't reading IBM docs fun!

> => does this really mean that when one client is flooding my staging pools,
> the migration processes become concurrent with new data that is written
> directly to the NEXTSTGPOOL because the first pool is full? If the next pool
> has a limited number of mount points (e.g. tape drives), this will cause the
> next, even bigger problem.

In the "normal" TSM setup you make your disk pool big enough to hold a night's
worth of backups (or whatever your backup window is).
Then during the day (backups not running) you migrate from the disk pool to the
NEXTSTGPOOL (tape, Data Domain), emptying the disk pool.  Repeat every day.
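
As a sketch (pool and schedule names are made up), that daily drain can be an
administrative schedule that forces migration all the way down to empty:

    define schedule drain_disk type=administrative cmd="migrate stgpool diskpool lowmig=0" active=yes starttime=08:00 period=1 perunits=days

The lowmig=0 on the migrate command overrides the pool's normal low-migration
threshold for that one run, so the pool really gets emptied.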

If the disk pool runs out of space, TSM will attempt to send files directly 
to the next pool.
This situation will flood your tape drives.  You really don't want this to 
happen.

If you specify a file size limit on the disk pool, files bigger than the limit 
go directly to the next pool.  Yes, this uses some/many tape drives, and you 
can run out.  Think of this option as protecting your disk pool from being 
flooded by really big backups, or as a way to keep the size of your disk pool 
down.
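
A sketch, with a made-up limit:

    update stgpool diskpool maxsize=5g

With that, any single file larger than 5 GB bypasses the disk pool and goes 
straight to the next pool in the hierarchy.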

This is a resource balancing act that you have to watch and tune for your 
environment.
How big to make your disk pool.
How many tape drives you have available.
Enough tape drives, disk bandwidth and time to empty the disk pools.
Enough tape drives, disk bandwidth and time to update copy pools.
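
A few standard queries are enough to keep an eye on all of this:

    query stgpool f=d     (utilization, migration thresholds, next pool)
    query process         (migrations and reclamations in flight)
    query mount           (which drives are tied up right now)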

There's no one right answer ... it depends!

Rick



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
bjoern.nacht...@gwdg.de
Sent: Wednesday, April 16, 2014 2:37 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Question about "Storage hierarchy"

Dear all,

switching to a new employer allows me to read all^H^H^H some of IBM's 
documentation about TSM ;-)

Inside the server documentation there's a short description, "Example: 
How the server determines where to store files in a hierarchy" [1], 
in which is written:

"If the DISKPOOL storage pool has no maximum file size specified, the server 
checks for enough space in the pool to store the physical file. 
If there is not enough space for the physical file, the server uses the next 
storage pool in the storage hierarchy to store the file."

=> does this really mean that when one client is flooding my staging pools, 
the migration processes become concurrent with new data that is written 
directly to the NEXTSTGPOOL because the first pool is full? If the next pool 
has a limited number of mount points (e.g. tape drives), this will cause the 
next, even bigger problem.

If so, what's the best practice to empty the staging pool?
- setting a max filesize does not solve the problem, especially if there are 
many files
- perhaps "DISABLE SESSIONS CLIENT" -- but this doesn't seem to be a really 
good answer :-(

Thanks & best regards,
Bjørn

[1]
http://pic.dhe.ibm.com/infocenter/tsminfo/v7r1/index.jsp?topic=%2Fcom.ibm.itsm.srv.doc%2Ft_volume_seq_define.html
(short: http://tinyurl.com/p23j5eo)

-- 

Bjørn Nachtwey
Arbeitsgruppe IT-Infrastruktur   Tel. +49 551 201-2181
- G W D G -
Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen
Am Fassberg 11, 37077 Göttingen
E-Mail: g...@gwdg.de   Tel.:   +49 (0)551 201-1510
URL:http://www.gwdg.de Fax:+49 (0)551 201-2150
Geschäftsführer:Prof. Dr. Ramin Yahyapour
Aufsichtsratsvorsitzender:  Dipl.-Kfm. Markus Hoppe
Sitz der Gesellschaft:  Göttingen
Registergericht:Göttingen  Handelsregister-Nr. B 598