Re: VTL's and D2D solutions

2012-07-03 Thread Daniel Sparrman
HP has StoreOnce B6000 series, and you also have the Sepaton S2100-ES2.

Both are stable, high-performing alternatives with encryption and VTL-to-VTL
replication.


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Richard Rhodes 
Sent by: ADSM: Dist Stor Manager 
Date: 07/03/2012 14:39
Subject: Re: VTL's and D2D solutions

We use DataDomain with the NFS interface.  When we did our evaluation we
were only interested in NFS interface (not VTL).  We looked at  DataDomain
and Quantum DXi8500.   We wanted Exagrid to take part in the evaluation
but they had just released TSM support and decided not to respond to the
RFP.

Rick





From:   Kevin Boatright boatr...@memorialhealth.com
To:     ADSM-L@VM.MARIST.EDU
Date:   07/02/2012 10:49 AM
Subject:        VTL's and D2D solutions
Sent by:        ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



We are currently looking at adding a Disk to Disk backup solution.  Our
current solution has a 3584 tape library with LTO-5 drives using TKLM.

We have looked at Exagrid and Data Domain.  Also, I believe HP has a
solution.

We will need to have encryption on the device and the ability to replicate
between the two disk units.

Anyone have any comments or recommendations?

Thanks,
Kevin







SV: migration threads for random access pool and backups issue

2012-03-15 Thread Daniel Sparrman
Hi 

A migration process will always use only 1 process per node, independent of the
storage media. So if you migrate disk > tape, it's 1 process per node, and if
you do tape > tape, it's still only 1 process per node.

As for the original poster, you claim that you need to backup 300TB of data. 
I'm guessing this is your entire environment and not a single server. However, 
you describe the process of backing up a fileserver. How large is this server? 

Generally speaking, I'd say that a 500GB disk pool is way too small to handle a
300TB environment. Traditionally, you're supposed to size your disk pool to hold 1
day of incremental backups. Since the daily change rate is usually around 5-10%,
that would be 15-30TB of disk to hold 1 day.

However, I recommend sending your database/mail/application backups (actually,
all large chunks of data) straight to tape, since there is no performance
benefit in sending them to a random-access pool (as long as you can stream the
data continuously, the tape drive performance should be good enough).

Sending all of these large chunks straight to tape should reduce the amount of
disk you need to hold 1 day of incremental backups from your fileservers. 500GB
is still going to be too small, though; I would expect a couple of TB of disk to
handle the 1 day of incrementals.

Other options to reduce the disk storage needed on your TSM server are to
introduce client-side deduplication or compression. Besides reducing the
number of actual TB stored on your disk storage, that will also reduce the
amount of data you send over your network.
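If you go the client-side route, a minimal sketch of the relevant client options might look like the stanza below. The server name and address are placeholders, and note (as an assumption about the environment, since the thread mentions TSM 6.2.2) that client-side deduplication requires a 6.x client writing to a deduplication-enabled FILE storage pool:

```
* dsm.sys - illustrative client system options stanza (names are made up)
SErvername  TSMSRV1
   COMMMethod         TCPip
   TCPServeraddress   tsm.example.com
   DEDUPLICATION      YES    * client-side deduplication (6.x client needed)
   COMPRESSIon        YES    * compress data before sending it to the server
```

Either option trades client CPU for network bandwidth and server disk, so test on one node before rolling out.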

Regards

Daniel Sparrman


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Christian Svensson 
Sent by: ADSM: Dist Stor Manager 
Date: 03/15/2012 09:40
Subject: SV: migration threads for random access pool and backups issue


Hi Amit,
After some investigation I found that TSM can only use one process (1 drive)
per node, or one process per collocation group.
This applies only when you are using random-access disk. If you want to use
multiple drives, you need to change from random-access disk to sequential disk.
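A sketch of that change using standard administrative commands. The device class, pool, directory, and size values here are all assumptions to be adapted; the point is that a FILE device class makes the pool sequential-access, which allows MIGPROCESS above 1:

```
/* A sequential-access disk pool backed by a FILE device class */
define devclass filedev devtype=file mountlimit=10 maxcapacity=50g directory=/tsm/filepool
define stgpool filepool filedev maxscratch=200 nextstgpool=tapepool migprocess=4
```

Each migration process then needs its own tape drive, so MIGPROCESS should not exceed the drives you can spare.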

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
CPU2TSM Support: http://www.cristie.se/cpu2tsm-supported-platforms



From: Christian Svensson
Sent: 14 March 2012 07:07
To: ADSM: Dist Stor Manager
Subject: SV: migration threads for random access pool and backups issue

Hi Wanda,
Glad to see you at Pulse, but I have a similar problem here in Sweden, where
migration only uses one process.
I was thinking of opening a PMR to understand why it only uses one drive.

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
CPU2TSM Support: http://www.cristie.se/cpu2tsm-supported-platforms



From: Prather, Wanda [wprat...@icfi.com]
Sent: 13 March 2012 21:36
To: ADSM-L@VM.MARIST.EDU
Subject: Re: migration threads for random access pool and backups issue

Since your disk pool is much smaller than what you are backing up, plus you 
have plenty of tape drives, it doesn't make much sense to send those 3 large 
filespaces to the diskpool.

Create a new management class, send each of those 3 filespaces directly to the 
tape drives, bypassing the disk pool.
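Wanda's suggestion sketched as hedged administrative commands. The STANDARD domain/policy set and the pool name TAPEPOOL are assumptions; substitute your own names:

```
/* New management class whose backup copy group points straight at tape */
define mgmtclass standard standard directtotape
define copygroup standard standard directtotape type=backup destination=tapepool
validate policyset standard standard
activate policyset standard standard
```

On the client, bind the large filespaces to the new class with an include statement in the include-exclude list, e.g. `include /largefs/.../* DIRECTTOTAPE` (path illustrative).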

W

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of amit 
jain
Sent: Sunday, March 11, 2012 12:09 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] migration threads for random access pool and backups issue

Hi,

I have to back up a large amount of data (~300TB) and have a small disk pool
(500GB). I have 3 filespaces, backing up on a single node. I am triggering
multiple dsmc sessions, dividing the filespaces up by directories. I have 15
E06 tape drives and can allocate 5 drives for this backup.

If I run multiple dsmc sessions, the server starts only one migration process
and one tape mount.
As per the Admin Guide, migration for random-access pools is performed by node,
and migration from random-access pools can use multiple processes.

My Question:
1. On random-access pools, can multiple migration sessions be generated only if
we back up from multiple nodes? Is my understanding correct, or is there any
way to increase the number of tape mounts?

2. Is the only way to speed up with current resources to back up to a FILE
device class, so that I can have multiple tape mounts?

3. Any inputs to speed up this backup?

Server and client both are on Linux, running TSM version 6.2.2

Any suggestions are welcome.

Thanks
Amit

Ang: [ADSM-L] Multiple restore session fails in TSM5.3 on AIX5.5

2012-03-15 Thread Daniel Sparrman
You have a restartable restore saved in the TSM server.

Do a q restore on your TSM server prompt and you will see it.

Until you have either a) restarted the restore from where you left off, or b)
cancelled the restartable restore, you won't be able to initiate a new restore.
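The two options above map to these server commands; the restore number 1 is illustrative, QUERY RESTORE shows the real ones:

```
/* List restartable restore sessions and their restore numbers */
query restore
/* Cancel a specific one, freeing the node for new restores */
cancel restore 1
```

Restarting from the client instead (re-issuing the same dsmc restore) picks the session up where it left off, which is usually preferable to cancelling.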

Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: devesh 
Sent by: ADSM: Dist Stor Manager 
Date: 03/15/2012 11:33
Subject: [ADSM-L] Multiple restore session fails in TSM5.3 on AIX5.5


Hello All,

I am trying to perform multiple restore operations in parallel.
But whenever I try, only one session starts; the other sessions terminate with
the following error:

---
ANS1247I Waiting for files from the server...

 Restore Processing Interrupted!! 
ANS1330S This node currently has a pending restartable restore session.

The requested operation cannot complete until this session either
completes or is canceled.
---

I have changed the following settings to perform multiple restores:

update node  maxnummp=10
maxsession 25

My device type is FILE and I have set the mount limit to 10.

Also, I have changed resourceutilization to 10 (in the dsm.sys file).

thanks and regards,
Devesh Gupta


Re: SV: migration threads for random access pool and backups issue

2012-03-15 Thread Daniel Sparrman
That's also correct. The hugely negative impact of running migration while
performing backups is due to the heavy load migration puts on the database
(as well as stealing valuable I/O from your disks, which should be used for
backups).

Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Rick Adamson 
Sent by: ADSM: Dist Stor Manager 
Date: 03/15/2012 16:57
Subject: Re: SV: migration threads for random access pool and backups issue


Also, using a random access disk pool that is undersized will result in
migration potentially kicking off while the backup/archive process is still
running. It has been my experience, and I have often read, that this situation
has an enormous negative impact on TSM server performance.


~Rick

Re: migration threads for random access pool and backups issue

2012-03-15 Thread Daniel Sparrman
Well,

a) Large servers with small file sizes should always go to a random-access disk
pool. Sending them to tape (or a sequential file pool) will almost certainly
reduce performance. So NO, keep sending those small files to a random-access
disk pool. If it's not big enough, increase the size; don't try sending the
data somewhere else.

b) For ALL storage pools, there will be 1 migration process per node. It
doesn't matter if it's random, sequential or a CD. It's always going to be 1
process per node.

c) The migration processes are usually not the issue; the issue is somewhere
else. So don't fixate on the 1 migration process per node. If you're having
performance issues, they're most likely not caused by the migration process.

d) If you get I/O wait, the disks containing your disk pools are most likely
not properly configured. Remember that the basic idea of performance for a
random-access disk pool is to have enough spindles (as in, having enough hard
drives in whatever disk system you're using). The only way to increase
performance long-term is to increase the number of spindles. The disk system's
memory cache is always helpful, but when it gets filled (and it will), it will
need to be emptied to physical spindles. If your disks can't handle that load,
you have a bottleneck.

e) When you increase your disk pools to 10TB, make sure that the increase is
done over new spindles (hard drives) and not the same ones you're already
using. Expanding the array/LUN across the already-used spindles won't increase
your performance.

Regards

Daniel

Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: amit jain 
Sent by: ADSM: Dist Stor Manager 
Date: 03/15/2012 17:46
Subject: Re: migration threads for random access pool and backups issue


Thanks to all for these valuable inputs. Appreciate it a lot.

Well this is first time this data is getting backed up.

For now, here is what I was not aware of and what I have done:
1. On random-access pools, can multiple migration sessions be generated only if
we back up from multiple nodes? Is my understanding correct, or is there any
way to increase the number of tape mounts?

Now I know: migration processes depend on nodes, and random-access storage has
a limitation with regard to migration processes. If I had more than 2 nodes
backing up to the same random-access storage pools, then I could have more than
two migration processes, depending on the configuration settings. If the disk
pool gets filled up, data goes to the next pool and backups won't fail.

As our environment has a large number of small files, a FILE-type disk pool is
not a good idea. Improving back-end speed does not always help, because the
speed coming into the TSM server will not be fast enough, or will only be equal
to or slightly better than the speed of dumping data from disk to tape. This
all depends on the type of data being backed up. If there is a huge number of
files, one migration process is good enough, with 2 or 3 additional tape drives
allocated for direct backup when the disk pool overflows. That was much faster
than using a FILE-type device class with multiple migration processes.

We are currently able to back up ~4TB a day. I will be increasing the storage
pool size to 10TB and hope I will get better performance. There is also a
bottleneck on the TSM client side; we are seeing I/O wait from the client.


Thanks

Amit





Ang: [ADSM-L] TDP 6.3 and Exchange 2010?

2012-03-12 Thread Daniel Sparrman
Hi

That won't work, since you need domain-level administrator rights to back up
Exchange 2010.

Either create a new account with domain-level administrator rights and the
Administrator role in Exchange, or use your existing Administrator account.

Local System doesn't have domain-level administrator rights.

Regards

Daniel Sparrman


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Vandeventer, Harold [BS] 
Sent by: ADSM: Dist Stor Manager 
Date: 03/12/2012 15:35
Subject: [ADSM-L] TDP 6.3 and Exchange 2010?


Is anyone using TDP 6.3 to run VSS backup against Exchange 2010 DAG?

We're fighting a problem where a scheduled backup is reported as successful 
from the TSM server perspective, but according to the TDP/Exchange manager the 
backup never runs.  He doesn't see any corresponding backup history in the TDP 
GUI.

Setting the TDP Scheduler service to log on as a domain account (not Local
System) seems to let the scheduled backup work, but the TDP/Exchange manager
wants to avoid password change policies by using Local System, as he has for
years with Exchange 2007.

We've got a PMR open, with no progress other than "try a domain account".


Harold Vandeventer

Re: TSM Export issue unavailable tapes causing the exports to suspend themselves

2012-03-07 Thread Daniel Sparrman
If you do the audit using checkl=barcode, the process should take no more than
a few minutes to complete, so it should also be one of the activities you
perform.

Audit library only takes a (very) long time to complete when you do checkl=yes,
since that means TSM needs to mount each tape and check the written label to do
the audit. Since audit library isn't multi-threaded, only 1 drive is used,
hence the long time to complete.
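The difference can be sketched as follows; the library name lto_lib is a placeholder:

```
/* Fast: trusts the barcode labels, normally completes in minutes */
audit library lto_lib checklabel=barcode
/* Slow: mounts every tape and reads the written label, one drive at a time */
audit library lto_lib checklabel=yes
```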

Regards

Daniel Sparrman 


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Hughes, Timothy 
Sent by: ADSM: Dist Stor Manager 
Date: 03/07/2012 15:46
Subject: Re: [ADSM-L] TSM Export issue unavailable tapes causing the exports to
suspend themselves

Thanks, David! No, I didn't try this. I thought about it, but I have to discuss
it with the other admins. I believe this is a long process, and if I am not
mistaken, it was performed recently.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Ehresman,David E.
Sent: Tuesday, March 06, 2012 3:23 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM Export issue unavailable tapes causing the exports to suspend 
themselves

Have you tried an audit library? That would tell you whether TSM can see the
tapes or not.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Hughes, Timothy
Sent: Tuesday, March 06, 2012 3:04 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Export issue unavailable tapes causing the exports to 
suspend themselves

Thanks again George!

I think I am going to try this:


run the audit command   AUDIT VOLUME volser FIX=YES


That shouldn't hurt anything.


regards

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of George 
Huebschman
Sent: Tuesday, March 06, 2012 1:38 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM Export issue unavailable tapes causing the exports to suspend 
themselves

That is a q vol, not q libvol.

It sounds like TSM or the library can not find the tapes.


   - The location is blank, so TSM believes the tapes are home.  That does
   not guarantee they are, or that they are in the right address in the
   library.


   - Then the actlog should show if there are read errors (although these
   tapes are not in an error state and show no read or write errors).

   - I don't know what library you have and I only have experience with a
   few, but libraries do lose tapes.  You could try a library audit.

These tapes were last read/written from half a year to almost a year ago.
Have there been any changes to the library or virtual library since then?


   - Try ejecting the tapes from the library (checking them out), then
   checking them back in.
      - If they won't check out, try ejecting them via the library. Then
      check them out with remove=no... then check them back in.


   - You could try a move data on one of the tapes as a test.  Put the
   console up and look for error/warning messages as the command processes.
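George's checklist translated into hedged commands. The library name lto_lib is a placeholder; C07498L3 is one of the volumes from this thread:

```
/* Verify the library inventory against the barcodes */
audit library lto_lib checklabel=barcode
/* Logically eject a problem volume while leaving it in its slot */
checkout libvolume lto_lib C07498L3 remove=no
/* Check library volumes back in, trusting barcodes */
checkin libvolume lto_lib search=yes status=private checklabel=barcode
/* Test-read the volume by moving its data elsewhere */
move data C07498L3
```

Watch the activity log while MOVE DATA runs; a mount or positioning failure there usually pinpoints whether the library or the volume is at fault.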



George H

On Tue, Mar 6, 2012 at 12:03 PM, Hughes, Timothy 
timothy.hug...@oit.state.nj.us wrote:

 George, Thanks for your reply

 Yes they are there


      VOLUME_NAME: C07498L3
     STGPOOL_NAME: LTO64POOL
    DEVCLASS_NAME: LTO64CLASS
  EST_CAPACITY_MB: 102400.0
 SCALEDCAP_APPLIED:
     PCT_UTILIZED: 0.0
           STATUS: FILLING
           ACCESS: UNAVAILABLE
      PCT_RECLAIM: 0.1
          SCRATCH: YES
      ERROR_STATE: NO
        NUM_SIDES: 1
    TIMES_MOUNTED: 1
       WRITE_PASS: 1
  LAST_WRITE_DATE: 2011-05-06 07:25:51.00
   LAST_READ_DATE: 2011-05-06 05:11:00.00
     PENDING_DATE:
     WRITE_ERRORS: 0
      READ_ERRORS: 0
         LOCATION:
    MVSLF_CAPABLE: No
         CHG_TIME: 2012-03-06 07:52:03.00
  BEGIN_RCLM_DATE:
    END_RCLM_DATE:
  VOL_ENCR_KEYMGR: None



Re: TSM Export issue unavailable tapes causing the exports to suspend themselves

2012-03-06 Thread Daniel Sparrman
Hi

I'm guessing you've already checked 

a) Element numbers (they CAN change)

b) That your drives haven't gone overdue for cleaning

c) That the tapes have been usable before

d) That your library device is available (if not, it's going to put your
volumes in an unavailable state)

Not questioning your competence, but have you tried running an audit library?

Most issues with TSM and tape volumes usually arise from configuration
problems. Either you had the problem before, or it is new. If it's new, a
configuration change has happened. If you had it before, I'm sure it's one of
the above.

If it's nothing of the above, describe your problem more in detail and I might 
be able to help you.

Regards

Daniel Sparrman


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Hughes, Timothy 
Sent by: ADSM: Dist Stor Manager 
Date: 03/06/2012 21:03
Subject: Re: TSM Export issue unavailable tapes causing the exports to
suspend themselves


Richard thanks,

When the volumes are updated to access=readwrite, the commands seem to work,
and the error log doesn't show much. It seems the tapes go offline fairly
quickly, as soon as the request for a mount of the tape is made; at least
that's the way it seems to me.

Thanks again


03/06/12   14:35:23  ANR1402W Mount request denied for volume C07498L3 - 
volume
  unavailable. (SESSION: 113956, PROCESS: 5603)
03/06/12   14:35:23  ANR1420W Read access denied for volume C07498L3 - 
volume
  access mode = unavailable. (SESSION: 113956, 
PROCESS:
  5603)
03/06/12   14:35:38  ANR1402W Mount request denied for volume C23919L3 - 
volume
  unavailable. (SESSION: 113956, PROCESS: 5604)
03/06/12   14:35:38  ANR1420W Read access denied for volume C23919L3 - 
volume
  access mode = unavailable. (SESSION: 113956, 
PROCESS:
  5604)
03/06/12   14:35:46  ANR1402W Mount request denied for volume C23727L3 - 
volume
  unavailable. (SESSION: 113956, PROCESS: 5605)
03/06/12   14:35:46  ANR1420W Read access denied for volume C23727L3 - 
volume
  access mode = unavailable. (SESSION: 113956, 
PROCESS:
  5605)
03/06/12   14:36:43  ANR2017I Administrator ADMIN1 issued command: QUERY
  ACTLOG search=denied  (SESSION: 113956)

tsm q actlog search=update

Date/TimeMessage
 
--
03/06/12   14:27:22  ANR2017I Administrator ADMIN1 issued command: UPDATE
  VOLUME c23727l3 access=readwrite  (SESSION: 113956)
03/06/12   14:27:22  ANR2207I Volume C23727L3 updated. (SESSION: 113956)
03/06/12   14:33:23  ANR2017I Administrator ADMIN1 issued command: UPDATE
  VOLUME c23919l3 access=readwrite  (SESSION: 113956)
03/06/12   14:33:23  ANR2207I Volume C23919L3 updated. (SESSION: 113956)
03/06/12   14:33:40  ANR2017I Administrator ADMIN1 issued command: UPDATE
  VOLUME c07498l3 access=readwrite  (SESSION: 113956)
03/06/12   14:33:40  ANR2207I Volume C07498L3 updated. (SESSION: 113956)
03/06/12   14:33:42  ANR2017I Administrator ADMIN1 issued command: UPDATE
  VOLUME c07498l3 access=readwrite  (SESSION: 113956)
03/06/12   14:33:42  ANR2207I Volume C07498L3 updated. (SESSION: 113956)
03/06/12   14:35:01  ANR2017I Administrator ADMIN1 issued command: UPDATE
  VOLUME c23727l3 access=readwrite  (SESSION: 113956)
03/06/12   14:35:01  ANR2207I Volume C23727L3 updated. (SESSION: 113956)
03/06/12   14:35:06  ANR2017I Administrator ADMIN1 issued command: UPDATE
  VOLUME c23727l3 access=readwrite  (SESSION: 113956)
03/06/12   14:35:06  ANR2207I Volume C23727L3 updated. (SESSION: 113956)
03/06/12   14:35:07  ANR2017I Administrator ADMIN1 issued command: UPDATE
  VOLUME c23727l3 access=readwrite  (SESSION: 113956)


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Richard Sims
Sent: Tuesday, March 06, 2012 2:04 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM Export issue unavailable tapes causing the exports to suspend 
themselves

The Activity Log should have messages for when the tapes went to Unavailable, 
thus suggesting a cause.  If a Library Manager is involved, it may be having 
issues which need investigation.

 Richard Sims

Ang: logical libraries in VTL

2012-02-23 Thread Daniel Sparrman
Hi

Yes, you can share it with multiple backup applications by defining more than 1
logical library. Each library needs its own dedicated drives, though (as with
all VTLs).

Since TSM & BackupExec write information differently to the VTL, data backed
up using TSM won't be deduped against data backed up with BackupExec. Whether
this has any impact on your dedup ratio depends on the amount of information
you store with each application. At a certain point, TSM will have enough of
its own information stored to dedup against, and so will BackupExec.

Hope that answers your question.

Best Regards

Daniel Sparrman


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Mehdi Salehi 
Sent by: ADSM: Dist Stor Manager 
Date: 02/23/2012 09:10
Subject: logical libraries in VTL

Hi,
Can a VTL like the TS7650G be presented to more than one backup server (such as
TSM and BackupExec) simultaneously? If yes, will this affect the deduplication
ratio? Please compare it with the case where all systems send their backups to
a single solution: TSM+VTL.

Thank you,
Mehdi


Ang: Expiration performance TSM 5.5 (request)

2012-02-16 Thread Daniel Sparrman
Hi

To begin with, the guidelines for database setup are something similar to:

a) 8-12 primary database volumes (since you're on 5.x, you can still use TSM
mirroring). Each volume should be in its own filesystem, preferably on its own
spindles (hard drives). If possible, make sure that your storage group assigns
you 8-12 volumes from different arrays within the VMAX, or at least from as
many arrays as possible. If they assign you 8 volumes from the same array,
performance will be horrible.

b) The log should reside on its own volume(s). Since the log is sequential,
RAID-10 is the optimal setup.

c) Using DB mirroring in parallel mode will increase performance.

d) Using log mirroring in normal mode will also increase performance.

e) Make sure you have a large enough bufferpool.

From your description, it sounds like a) is the place to start; I wouldn't be
surprised if some or all of your DB volumes are located within the same array.

What operating system are you using? If you're on AIX, try checking I/O
statistics during expiration to see if your queues are getting full (as in 100%
utilization of the disks, using iostat). If so, try increasing the queue
depths, and have your storage guys look at the underlying VMAX to determine
if there are any configuration issues there.
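On AIX, a hedged example of what to look at (interval, sample count, and the hdisk name are arbitrary; verify the flags against your AIX level):

```
# Extended per-disk statistics, 5-second intervals, 12 samples
iostat -D 5 12
# In the queue section, a steadily growing "sqfull" count suggests the
# service queue is filling; one common response is to raise queue_depth
# on the affected hdisk (takes effect after the device is reconfigured):
#   chdev -l hdisk4 -a queue_depth=16 -P
```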

Performance issues with expiration and database backups usually come down to
limited read performance of the underlying array.

Best Regards

Daniel Sparrman


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU skrev: -
Till: ADSM-L@VM.MARIST.EDU
Från: Loon, EJ van - SPLXO 
Sänt av: ADSM: Dist Stor Manager 
Datum: 02/16/2012 15:02
Ärende: Expiration performance TSM 5.5 (request)

Hi TSM-ers!
I'm struggling with the performance of our expiration process. I can't
get it any faster than 100 objects/second max. We tried everything, like
using more or fewer database volumes, multiple volumes per filesystem,
mirroring, unmirroring, but nothing seems to have any positive effect.
We are using SAN-attached enterprise-class storage (EMC VMAX) with the
fastest disks available.
I have seen other users with similar (or larger) databases with much
higher figures, like more than 1000 objects/sec, so there must be
something I can do to achieve this. In 2007 at the Oxford TSM Symposium
(http://tsm-symposium.oucs.ox.ac.uk/2007/papers/Dave%20Canan%20-%20Disk%
20Tuning%20and%20TSM.pdf page 25) IBM also stated that 1000 objects/sec
is possible.
I would really like to know from other TSM 5.5 users how their
expiration is performing. Could you please let me know by sending me the
output from the following two SQL queries, along with the platform you
are using:

select activity, cast((end_time) as date) as "Date",
(examined/cast((end_time-start_time) seconds as decimal(18,13))*3600)
"Objects Examined/Hr" from summary where activity='EXPIRATION' and
days(end_time)-days(start_time)=0

select capacity_mb as "Capacity MB", pct_utilized as "Percentage in use",
cast(capacity_mb*pct_utilized/100 as integer) as "Used MB" from db

Thank you VERY much for your help in advance
Kind regards,
Eric van Loon
KLM Royal Dutch Airlines
For information, services and offers, please visit our web site: 
http://www.klm.com. This e-mail and any attachment may contain confidential and 
privileged material intended for the addressee only. If you are not the 
addressee, you are notified that no part of the e-mail or any attachment may be 
disclosed, copied or distributed, and that any other action related to this 
e-mail or attachment is strictly prohibited, and may be unlawful. If you have 
received this e-mail by error, please notify the sender immediately by return 
e-mail, and delete this message.

Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or its 
employees shall not be liable for the incorrect or incomplete transmission of 
this e-mail or any attachments, nor responsible for any delay in receipt.

Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal Dutch 
Airlines) is registered in Amstelveen, The Netherlands, with registered 
number 33014286.


Re: Expiration performance TSM 5.5 (request)

2012-02-16 Thread Daniel Sparrman
Well, since TSM v6 has multi-threaded expiration (basically one thread 
per volume), expiration is a lot faster.

Going to v6 could be an easy way of handling your expiration problems, 
but if your database volumes are located on the same arrays, you'll still get 
lousy performance, just spread over multiple threads instead of one ;)

Best Regards

Daniel 


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Lee, Gary 
Sent by: ADSM: Dist Stor Manager 
Date: 02/16/2012 15:32
Subject: Re: Expiration performance TSM 5.5 (request)

Do you have many win 2008 and win 7 clients with client version 6.2.2?

For some reason (I forget the APAR number), expiration is very slow with these clients.

I am going to 6.3 soon, and hope to solve this with that move.
I have a 6.2 server and can't get your script to run, but observation tells me 
that expirations that run for hours on 5.5 run in minutes on 6.2.

 


Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310

 
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Loon, 
EJ van - SPLXO
Sent: Thursday, February 16, 2012 9:02 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Expiration performance TSM 5.5 (request)

Hi TSM-ers!
I'm struggling with the performance of our expiration process. I can't
get it any faster than 100 object/second max. We tried everything, like
using more or less database volumes, multiple volumes per filesystem,
mirroring, unmirroring, but nothing seems to have any positive effect.
We are using SAN attached enterprise class storage (EMC Vmax) with the
fastest disks available.
I have seen other users with similar (or larger) databases with much
higher figures, like more than 1000 objects/sec, so there must be
something I can do to achieve this. In 2007 at the Oxford TSM Symposium
(http://tsm-symposium.oucs.ox.ac.uk/2007/papers/Dave%20Canan%20-%20Disk%
20Tuning%20and%20TSM.pdf page 25) IBM also stated that 1000 object/sec
is possible.
I would really like to know from other TSM 5.5 users how their
expiration is performing. Could you please let me know by sending me the
output from the following two SQL queries, along with the platform you
are using:

select activity, cast((end_time) as date) as "Date",
(examined/cast((end_time-start_time) seconds as decimal(18,13))*3600)
"Objects Examined/Hr" from summary where activity='EXPIRATION' and
days(end_time)-days(start_time)=0
select capacity_mb as "Capacity MB", pct_utilized as "Percentage in use",
cast(capacity_mb*pct_utilized/100 as integer) as "Used MB" from db

Thank you VERY much for your help in advance
Kind regards,
Eric van Loon
KLM Royal Dutch Airlines


Re: Expiration performance TSM 5.5 (request)

2012-02-16 Thread Daniel Sparrman
Hi Eric

Out of curiosity, how long has the TSM server existed, and how long has it been 
since you last did an unload/load of the database?

Fragmentation could also be the root cause of expiration taking too long.

Regards

Daniel 


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Loon, EJ van - SPLXO 
Sent by: ADSM: Dist Stor Manager 
Date: 02/16/2012 16:19
Subject: Re: Expiration performance TSM 5.5 (request)

Hi Daniel!
Been there, done that... 
a) We completely redesigned our database layout. Each database file is located 
on one single hdisk, one single vg and one single filesystem. We are using 10 
database volumes and tried everything. LUNs are striped across multiple 
arrays, and the backend is using RAID 1. Performance of the disks outside of TSM 
is fine: 120 MB/sec read as long as we do not use mirroring, and writes are even 
better (because of cache) at around 180 MB/sec. Even when we try to imitate TSM 
(by doing 4k reads using dd), read performance is fine.
b) Tried several setups for the log too, still no improvement.
c) I think you mean the way the vg is mirrored? All vg's are using parallel.
d) Tried that too; the only effect is that log utilization during the day is 
much lower (of course).
e) Cache hit ratio is 99.9%, so the bufferpool should be large enough.
I have done extensive I/O analysis along with TSM support; there is no queuing, 
0.0 most of the time...
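The 4k dd read test Eric mentions can be sketched like this (the device name is a placeholder; point it at a raw DB volume, not a mounted filesystem, and run it as root):

```shell
#!/bin/sh
# Read a device (or file) in 4 KB chunks, roughly mimicking TSM 5.x database
# page reads; timing the run gives an effective small-block read rate.
read_4k() {
    # $1 = device or file, $2 = number of 4 KB blocks (default ~1 GB)
    dd if="$1" of=/dev/null bs=4k count="${2:-262144}"
}
# Example (placeholder AIX raw device):  time read_4k /dev/rhdisk4
```

Dividing the bytes read by the elapsed time gives the MB/sec figure to compare against the array's expected small-block performance.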
Thanks for your reply!
Kind regards,
Eric van Loon

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Daniel 
Sparrman
Sent: Thursday, 16 February 2012 15:12
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Expiration performance TSM 5.5 (request)

Hi

To begin with, the guidelines for database setup are something similar to:

a) 8-12 primary database volumes (since you're on 5.x you can still use TSM 
mirroring). Each volume should be in its own filesystem, preferably on its 
own spindles (hard drives). If possible, make sure that your storage group 
assigns you 8-12 volumes from different arrays within the Vmax, or 
at least from as many arrays as possible. If they assign you 8 volumes from the 
same array, performance will be horrible.

b) The log should reside on its own volume(s). Since the log is sequential, 
RAID 10 is the optimal setup. 

c) Using DB mirroring in parallel mode will increase performance.

d) Using LOG mirroring in normal mode will also increase performance.

e) Make sure your buffer pool is large enough.

From your description, it sounds like a) is the place to start; I wouldn't be 
surprised if some or all of your db volumes are located within 
the same array.

What operating system are you using? If you're on AIX, try checking I/O 
statistics during expiration to see if your queues are getting full (as in 100% 
utilization of the disks using iostat).  If so, try increasing the queues, and 
go to your storage guys and have them look at the underlying Vmax to determine 
if there are any configuration issues there.

Performance issues with expiration and database backups usually come down to 
limited read performance of the underlying array.

Best Regards

Daniel Sparrman


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Loon, EJ van - SPLXO 
Sent by: ADSM: Dist Stor Manager 
Date: 02/16/2012 15:02
Subject: Expiration performance TSM 5.5 (request)

Hi TSM-ers!
I'm struggling with the performance of our expiration process. I can't
get it any faster than 100 object/second max. We tried everything, like
using more or less database volumes, multiple volumes per filesystem,
mirroring, unmirroring, but nothing seems to have any positive effect.
We are using SAN attached enterprise class storage (EMC Vmax) with the
fastest disks available.
I have seen other users with similar (or larger) databases with much
higher figures, like more than 1000 objects/sec, so there must be
something I can do to achieve this. In 2007 at the Oxford TSM Symposium
(http://tsm-symposium.oucs.ox.ac.uk/2007/papers/Dave%20Canan%20-%20Disk%
20Tuning%20and%20TSM.pdf page 25) IBM also stated that 1000 object/sec
is possible.
I would really like to know from other TSM 5.5 users how their
expiration is performing. Could you please let me know by sending me the
output from the following two SQL queries, along with the platform you
are using:

select activity, cast((end_time) as date) as "Date",
(examined/cast((end_time-start_time) seconds as decimal(18,13))*3600)
"Objects Examined/Hr" from summary where activity='EXPIRATION' and
days(end_time)-days(start_time)=0

select capacity_mb as "Capacity MB"

Re: Expiration performance TSM 5.5 (request)

2012-02-16 Thread Daniel Sparrman
I'm pretty sure we can agree that expiration should be a lot higher than 100 
objects/sec.

The question is rather WHY he's getting 100 objects/sec. According to previous 
information, the disks should be in order, so the next questions would be:

a) What platform are you on?

b) How long has this TSM server been running, and when did you last do an 
unload/load of the DB (since it's TSM 5)?

If the disks are in order, the load isn't too heavy, and the disks are OK and not 
congested, the only reasons why expiration would be slow are one of the following:

a) Your database is heavily fragmented, causing your expiration to scan 
a lot of empty space (since expiration is sequential)

b) You have a bug in your current TSM code

c) Your server is running out of resources other than disk (memory/RAM), though 
I think you would have checked this by now

The expiration rate in objects/sec isn't related to the number of objects; total 
expiration time is. So even with a billion objects to look 
through, objects/sec could be high while the time to complete would still be long.
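To illustrate the distinction with made-up numbers: at a fixed examination rate, the rate stays constant while total runtime scales with object count.

```shell
#!/bin/sh
# Total expiration runtime at a fixed rate of 1000 objects/sec
# (both figures are illustrative, not measurements from this thread).
rate=1000
for objects in 1000000 1000000000; do
    awk -v n="$objects" -v r="$rate" \
        'BEGIN { printf "%d objects -> %.1f hours\n", n, n / r / 3600 }'
done
# 1 million objects finish in well under an hour; 1 billion take hundreds.
```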

Best Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Allen S. Rout 
Sent by: ADSM: Dist Stor Manager 
Date: 02/16/2012 18:55
Subject: Re: Expiration performance TSM 5.5 (request)


On 02/16/2012 09:02 AM, Loon, EJ van - SPLXO wrote:


 select activity, cast((end_time) as date) as "Date",
 (examined/cast((end_time-start_time) seconds as decimal(18,13))*3600)
 "Objects Examined/Hr" from summary where activity='EXPIRATION' and
 days(end_time)-days(start_time)=0

I'm getting between 900/s and 1300/s.

AIX, 5.5, on an EMC VNX with Fast Cache (SSD caching).

- Allen S. Rout

Re: Best way to check size of TSM Client ?

2012-02-15 Thread Daniel Sparrman
Hi

If you're wondering how much data is actually stored for your client, then for 
fileservers you could just do a query filespace for all nodes or a specific node. 
It will show you the size of the node's filespaces, and how much space is 
utilized in each filesystem.

As for using export node: filedata=backup will show you the size of all 
versions stored in TSM, not only the active data (active data being the latest 
version of each file, i.e. what is actually on your server).

So to get only the latest version of each file in an export, use filedata=allactive 
instead of filedata=backup. This will show you what TSM thinks should be 
restored in case of a server failure.

One thing though: if the filespace query is enough for you, I'd use that 
instead of export node, since export node consumes more resources on your TSM 
server and takes longer to complete (especially for a lot of nodes).

export node will not give you any information about NAS filers (if you used 
NDMP to back them up). 

Best Regards

Daniel Sparrman




Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Horst Scherzer 
Sent by: ADSM: Dist Stor Manager 
Date: 02/15/2012 10:47
Subject: Re: [ADSM-L] Best way to check size of TSM Client ?


On 15.02.2012 10:38, Minns, Farren - Chichester wrote:
 Hi All,

 What is the best way to get a list of all client nodes and how much disk 
 space would be required for a rebuild?

 I'm not looking for the total amount of data in backup, but how much data 
 each client actually has on it (roughly) currently?

 Farren
 
 John Wiley & Sons Limited is a private limited company registered in England 
 with registered number 641132.
 Registered office address: The Atrium, Southern Gate, Chichester, West 
 Sussex, United Kingdom. PO19 8SQ.
 

export node xx.yy.zz filed=backup preview=yes

for each node should do the job
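Horst's per-node preview can be scripted; a sketch, with placeholder admin credentials, using dsmadmc's -dataonly=yes to get a clean node list (preview=yes reports sizes without moving any data):

```shell
#!/bin/sh
# Run an export-node preview for every registered node.
DSMADMC="dsmadmc -id=admin -password=secret -dataonly=yes"
if command -v dsmadmc >/dev/null 2>&1; then
    $DSMADMC "select node_name from nodes" |
    while read -r node; do
        if [ -n "$node" ]; then
            $DSMADMC "export node $node filedata=backup preview=yes"
        fi
    done
fi
```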


Hth,
-- 



Horst SCHERZER  e-Mail:  horst.scher...@univie.ac.at
Vienna University Computer Center   Phone:  (+43 1) 4 277 x14053
Ebendorferstraße 10 Cellular:   (+43) 0664/60 277  14053
A-1010 Wien/Vienna  Fax:(+43 1) 4 277  x9140
Oesterreich/Austria URL: http://homepage.univie.ac.at/horst.scherzer


Re: TSM 6.2.3 down, DB2DAS service will not start...

2012-02-15 Thread Daniel Sparrman
Hi

db2das is the DB2 Administration Server and shouldn't be the root cause of why 
you can't start the TSM server.

SQL1219N indicates that your DB2 instance has issues allocating memory 
(especially shared memory). 

Is the user you're trying to start TSM with a member of the DB2ADMIN group? 
Does the hung db2das service allocate a lot of memory? How much free memory is 
available on the machine for use by TSM?

Have any updates been applied to the machine since the last restart (such as a 
service pack update)?

Either something is broken (which is not very likely) or you have a memory 
issue on your TSM server.

Best Regards

Daniel Sparrman


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Prather, Wanda 
Sent by: ADSM: Dist Stor Manager 
Date: 02/15/2012 15:55
Subject: Re: TSM 6.2.3 down, DB2DAS service will not start...


TSM 6.2.3 on Win2K3.
Rebooted the box, TSM will not start as service or in the foreground.
Error below, but I suspect the TSM failure to activate is due to a DB2 service 
that is hung in the starting state: DB2DAS - DB2DAS00.
Nothing in the Windows event log except "The DB2DAS - DB2DAS00 service hung on 
starting."

Same result on 2nd reboot.
db2diag.log has not been updated since 2/12.

Anybody seen this before, or have any insight on what that service does or how 
to deal with a service hung on starting?
Start/stop/restart not available for that service now.

IBM support line dropped the ball on the last update to the PMR, so I'm 
starting over with a new tech in a new time zone on the SEV1 PMR.

Error in the foreground:
++
Tivoli Storage Manager for Windows
Version 6, Release 2, Level 3.0

Licensed Materials - Property of IBM

(C) Copyright IBM Corporation 1990, 2010.
All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR0900I Processing options file d:\program 
files\tivoli\tsm\server1\dsmserv.opt.
ANR4726I The ICC support module has been loaded.
ANR0990I Server restart-recovery in progress.
ANR0152I Database manager successfully started.
ANR0172I rdbdb.c(1839): Error encountered performing action ActivateDatabase.
ANR0162W Supplemental database diagnostic information:  -1219:SQLSTATE 57011: 
Virtual storage or
database resource is not available.
:-1219 (SQL1219N  The request failed because private virtual
memory could not be allocated.  SQLSTATE=57011
).
+

Re: TSM 6.2.3 down, DB2DAS service will not start...

2012-02-15 Thread Daniel Sparrman
As a side note:

Have you checked if this APAR relates to your problem?

http://www-01.ibm.com/support/docview.wss?uid=swg21438530

Best Regards

Daniel Sparrman


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Prather, Wanda 
Sent by: ADSM: Dist Stor Manager 
Date: 02/15/2012 15:55
Subject: Re: TSM 6.2.3 down, DB2DAS service will not start...



Re: Low level tape drives for TSM

2012-01-19 Thread Daniel Sparrman
Hi

I'm taking it that you'd like standalone tape drives connected 
through your SAN/network environment. Most standalone tape drives I 
know of use SAS or LVD interfaces. FC is most commonly used in tape libraries, 
since FC is aimed at larger, centralized environments.

I don't know of any tape drives that use a network connection, since 
a network connection would require intelligent hardware to drive it 
(standalone tape drives usually don't have a TCP/IP stack built into them).

If you explained what you are trying to accomplish with the standalone 
tape drives, perhaps it would be easier to help you out. As far as I know, TSM 
can't utilize a network-attached tape drive, so I'm assuming you're planning 
something else.

Best Regards

Daniel Sparrman




Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Grigori Solonovitch 
Sent by: ADSM: Dist Stor Manager 
Date: 01/19/2012 11:25
Subject: Low level tape drives for TSM


Hello Everybody,
We are using disk pools at Head Office and Disaster Site connected by fiber 
optics links (DS8100 online mirroring and 1GB Ethernet).
We are using 3590 SCSI tape drives only for out-of-country copies, with a daily 
amount of less than 1TB. We need to replace the 3590 drives with something newer.
We are looking for tape drives:

1)  supported by TSM 5.5 and 6.X;

2)  supported by AIX on Power6 and Power7 logical partitions;

3)  connected via fiber optics switch (at least 2Gb/s) or via network (at 
least 1Gb/s);

4)  with volume capacity 128GB or more (not compressed);

5)  just drive or manual library like 3590.
I will deeply appreciate any suggestions or links to documents.
Kindest regards,

Grigori G. Solonovitch
Senior Technical Architect  Ahli United Bank Kuwait  www.ahliunited.com.kw

Please consider the environment before printing this E-mail



CONFIDENTIALITY AND WAIVER: The information contained in this electronic mail 
message and any attachments hereto may be legally privileged and confidential. 
The information is intended only for the recipient(s) named in this message. If 
you are not the intended recipient you are notified that any use, disclosure, 
copying or distribution is prohibited. If you have received this in error 
please contact the sender and delete this message and any attachments from your 
computer system. We do not guarantee that this message or any attachment to it 
is secure or free from errors, computer viruses or other conditions that may 
damage or interfere with data, hardware or software.

Please consider the environment before printing this Email.

Re: Datamover on Library Manager Library Client

2012-01-11 Thread Daniel Sparrman
Hi

Since you also have to define paths for the datamover, don't you have to define 
it on the library manager (which usually controls all the drive paths and the 
mounts in the library)?

When you asked about library manager / library client, I'm assuming you're in a 
multi-TSM-server environment with one TSM library manager and one or more TSM 
library clients (the NAS appliance, defined as a datamover with drive paths, 
could also be seen as a library client). 

Best Regards

Daniel Sparrman


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: George Huebschman 
Sent by: ADSM: Dist Stor Manager 
Date: 01/11/2012 16:11
Subject: Re: [ADSM-L] Datamover on Library Manager Library Client

You can do that on the Library Client.

On Wed, Jan 11, 2012 at 10:01 AM, Meuleman, Ruud 
ruud.meule...@tatasteel.com wrote:

 Hi,

 We are going to configure datamover for backup with ndmp. Where do I
 have to define the datamover? TSM Library Manager or TSM Library Client?

 Kind Regards,
 Ruud Meuleman

 **


 This transmission is confidential and must not be used or disclosed by
 anyone other than the intended recipient. Neither Tata Steel Europe Limited
 nor any of its subsidiaries can accept any responsibility for any use or
 misuse of the transmission by anyone.

 For address and company registration details of certain entities within
 the Tata Steel Europe group of companies, please visit
 http://www.tatasteeleurope.com/entities


 **



Re: 3494 library questions

2011-12-14 Thread Daniel Sparrman
Hi Gary

The ibmatl.conf file can hold more than one 3494 library; just add a new line 
with the new library definition.

If I'm not mistaken, each line should contain 4 or 5 fields separated by tabs. 
Field 1 is the 3494 symbolic name, field 2 is the IP address (or tty 
if serial attached), field 3 is the short hostname, and field 4 (if applicable) 
is the HA 3494 IP address (if HA is used). 

That's for AIX; if you're running something else, it might be different.
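A hypothetical two-library ibmatl.conf along the lines Daniel describes (symbolic names, addresses, and hostnames are made up, and the field layout follows his description, so verify it against your atldd documentation):

```
# symbolic_name   address       hostname   ha_address (optional)
3494lib1          192.0.2.10    tsmsrv1
3494lib2          192.0.2.20    tsmsrv1    192.0.2.21
```

Each library's symbolic name is then what you reference when defining the library to TSM.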

Best Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Lee, Gary D. 
Sent by: ADSM: Dist Stor Manager 
Date: 12/14/2011 15:24
Subject: 3494 library questions

I have two 3494 libraries at different locations.
I would like to have both defined to both of my tsm servers.

Is it possible to define two libraries in the ibmatl.conf file, and if so, how 
to define them to tsm?

If not, am I correct that the other option is to share the libraries as 
necessary defining appropriate tsm servers as library managers?

Thanks for the assistance.



Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310

 

Re: 3494 library questions

2011-12-14 Thread Daniel Sparrman
Hi Steven

A TSM library manager isn't mandatory in a 3494 library environment. If the 
libraries are partitioned, he can just connect his two hosts to each library 
and assign access through the 3494 CU.

Which is simplest is just a matter of how the existing environment looks. If 
the libraries are already partitioned for the two hosts, no TSM library manager 
is needed; just make an entry for each library in ibmatl.conf and assign 
access in the 3494 CU.

If the libraries are not partitioned, and there is limited 3494 competence 
on-site, perhaps TSM library sharing is the way to go. It all depends on Gary's 
requirements (e.g. can volumes be shared between the two TSM servers, or do 
they need to be divided?).

Best Regards

Daniel Sparrman


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Steven Langdale 
Sent by: ADSM: Dist Stor Manager 
Date: 12/14/2011 15:39
Subject: Re: [ADSM-L] 3494 library questions

Gary

A library manager is mandatory to coordinate library access.  That can be a
standalone instance, or the existing instance that has exclusive access (the
latter being the easier option with an environment of this size)

Thanks

Steven

On 14 December 2011 14:22, Lee, Gary D. g...@bsu.edu wrote:

 I have two 3494 libraries at different locations.
 I would like to have both defined to both of my tsm servers.

 Is it possible to define two libraries in the ibmatl.conf file, and if so,
 how to define them to tsm?

 If not, am I correct that the other option is to share the libraries as
 necessary defining appropriate tsm servers as library managers?

 Thanks for the assistance.



 Gary Lee
 Senior System Programmer
 Ball State University
 phone: 765-285-1310




Re: Restoring (DB2) API data using the BA client

2011-12-13 Thread Daniel Sparrman
Since the DB2 API only acts as a media manager for DB2, the B/A client won't be 
able to read what has actually been backed up.

So the answer to your question is no: the B/A client can't restore data that has 
been backed up by DB2 (through the TSM API).

What you can do to restore the data is:

a) Install a DB2 server and then restore the data using that instance

b) Write your own software to retrieve the stored objects from the TSM 
server through the TSM API.

Of the alternatives, I'd say a) is a lot easier.

Best Regards

Daniel Sparrman 


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Steven Langdale 
Sent by: ADSM: Dist Stor Manager 
Date: 12/13/2011 10:50
Subject: Re: Restoring (DB2) API data using the BA client

Stefan

Anyone feel free to correct me, but I don't think you can.  Have you tried
running up a BA client to see what is actually there?

Steven

On 13 December 2011 08:29, Stefan Folkerts stefan.folke...@gmail.comwrote:

 Hi all,

 I am looking into restoring DB2 (version 7) data of a node that has since
 been physically removed but still has data in TSM.
 Has anybody ever restored API data using a BA client to disk, it doesn't
 have to be restored to DB2, no logs tricks..just the plain data to disk
 restore using the same platform BA client.

 Please advise.

 Regards,
  Stefan



Question concerning large filepools on HP EVA

2011-12-07 Thread Daniel Sparrman
Hi

We have a customer running TSM with a large (130+ TB) HP EVA 64000 as a large 
filepool (multi-directory). After a few months, they got problems with disks 
breaking down in the EVA box, ending up with RAID arrays in an almost constant 
state of rebuild. After talking to HP, the advice they got was to reduce 
the load on the box, since it shouldn't be used more than 30% of the time over a 
24-hour period.

Initially, the customer used deduplication on the box, which probably put even 
more stress on it, but this is now turned off. The problem still exists, however.

Has anyone else had issues with large disk boxes in combination with file device 
pools? According to HP, this is due to the high amount of I/O that TSM 
produces, but I've seen non-SATA boxes handle a lot more I/O than this. So the 
question is: is it because of the use of SATA drives, or is this a problem with 
just this model/box?

The box is equipped with 1TB HP-labeled SATA disks, and the customer has a 
small SAS-based disk box to handle daily backups, which then migrate to the HP 
EVA box. Data is then backed up to a remote LTO-based tape library.

Another problem related to the same pool is that file device volumes that have 
been reclaimed (0.0% usage) are not returned as scratch and deleted, but are held 
within the storage pool as volumes with 0.0% usage. Does anyone know of any 
related issues with file device volumes not being deleted?

The customer is at v6.2 on Red Hat Enterprise Linux, and we've checked permissions 
on both the directories and the files of the file device volumes, as well as the 
TSM activity log, but cannot see any relevant issues.
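To chase the undeleted volumes, a query along these lines can list them (the credentials and storage pool name are placeholders; VOLUMES is the standard TSM SQL table):

```shell
#!/bin/sh
# List volumes that are empty but still defined in a given storage pool.
if command -v dsmadmc >/dev/null 2>&1; then
    dsmadmc -id=admin -password=secret -dataonly=yes \
        "select volume_name, stgpool_name, pct_utilized from volumes \
         where stgpool_name='FILEPOOL' and pct_utilized=0"
fi
```

Comparing that list against the activity log's DELETE VOLUME messages shows whether reclamation ever attempted to delete them.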

Best Regards

Daniel Sparrman



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

Re: multi-threaded Exchange backups

2011-11-28 Thread Daniel Sparrman
Hi

I wouldn't mind getting a copy of that script; I'm not much of a fan of multiple 
nodes/schedules either.

Best Regards

Daniel Sparrman


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Schaub, Steve 
Sent by: ADSM: Dist Stor Manager 
Date: 11/28/2011 15:07
Subject: multi-threaded Exchange backups

In case anyone is interested, I modified our PowerShell script for full backups
of our Exchange databases so that it now backs up multiple storage groups
concurrently in a single script (because I'm too darn lazy to deal with
multiple node names, schedules, etc.).

In our case, it dropped elapsed times from 36hr to 11hr, running 4 concurrent 
threads.
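Steve's script is PowerShell, but the concurrency pattern itself is simple. A
hypothetical shell sketch of the same idea (backup_one and the storage-group
names are placeholders; a real version would invoke the TDP for Exchange
backup command per storage group):

```shell
#!/bin/sh
# Run one backup per storage group in the background, then wait for all.
backup_one() {
    echo "backing up $1"
    # the real per-storage-group TDP backup command would go here
}

out=$(
    for sg in SG1 SG2 SG3 SG4; do
        backup_one "$sg" &    # 4 storage groups back up concurrently
    done
    wait                      # block until every background backup finishes
)
echo "$out"
```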

If anyone would like a copy, just contact me.

Steve Schaub
Systems Engineer II, Windows Backup/Recovery
BlueCross BlueShield of Tennessee
steve_schaub at bcbst.com

-
Please see the following link for the BlueCross BlueShield of Tennessee E-mail 
disclaimer:  http://www.bcbst.com/email_disclaimer.shtm


Re: Migrating from AIX to Linux (again)

2011-11-16 Thread Daniel Sparrman
Hi John

Not sure where you've read that cross-platform migration by DB backup/restore
has been possible previously. Since TSM was called ADSM 3.1, exporting nodes
has been the only available option for a cross-platform move of TSM.

Previously, you could only export nodes to sequential media, but in newer
versions (I think it was somewhere around v5) you can export directly to a
target server.

Most of my customers have chosen option 1 below, due to the amount of
historical data that needs to be kept for a long time. However, if your
versioning and retention times are short, redirecting backups to the new TSM
server and simply deleting the old one once new historical data has built up
is a viable option. Most people want to get rid of the old TSM server as fast
as possible, though, since it ties up storage. So make sure the timeframe for
building up historical data on your new TSM server is fairly short.
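For option 1, the server-to-server variant looks roughly like this (server and
node names are examples; NEWLINUX would first be defined with DEFINE SERVER,
and the command would be issued through dsmadmc on the source server -- here it
is only built and displayed):

```shell
#!/bin/sh
# Server-to-server export of one node, all file data, merging filespaces
# on the target. NODE1 and NEWLINUX are placeholder names.
cmd="export node NODE1 filedata=all toserver=NEWLINUX mergefilespaces=yes"
echo "$cmd"
```

FILEDATA=ACTIVE instead of ALL is the usual shortcut when retention allows
exporting only active versions.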

Regards

Daniel 


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Dury, John C. 
Sent by: ADSM: Dist Stor Manager 
Date: 11/16/2011 16:48
Subject: Migrating from AIX to Linux (again)

Our current environment looks like this:
We have a production TSM server that all of our clients backup to throughout 
the day. This server has 2 SL500 tape libraries attached via fiber. One is 
local and the other at a remote site which is connected by dark fiber. The 
backup data is sent to the remote SL500 library several times a day in an 
effort to keep them in sync.  The strategy is to bring up the TSM DR server at 
the remote site and have it do backups and recovers from the SL500 at that site 
in case of a DR scenario.

I've done a lot of reading in the past, and some just recently, on the possible
ways to migrate from an AIX TSM server to a Linux TSM server. I understand that
earlier versions of the TSM server (we are currently at 5.5.5.2) allowed you to
back up the DB on one platform (AIX, for instance) and restore it on another
(Linux, for instance), and if you kept the same library it would just work.
Apparently that was removed by IBM in the TSM server code, presumably to
prevent customers from moving to less expensive hardware. (Gee, thanks IBM!
Sigh.)
I posted several years ago about any possible ways to migrate the TSM Server 
from AIX to Linux.
The feasible solutions were as follows:

1.       Build a new Linux server with access to the same tape library, export
nodes from one server to the other, and change each node as it's exported to
back up to the new TSM server instead. The old data on the old server can then
be purged. A lengthy, time-consuming process, depending on the amount of data
in your tape library.

2.       Build a new TSM Linux server and point all TSM clients to it, but
keep the old TSM server around for restores for a specified period of time,
until it can be removed.

There may have been more options, but those seemed the most reasonable given
our environment. Our biggest problem with scenario 1 above is that exporting
the data that lives on the remote SL500 tape library would take much longer,
as the connection to that library is slower than to the local one. I can
probably get some of our SLAs adjusted so that we only have to export active
data rather than all data, but that remains to be seen.

My question: has any of this changed with v6 TSM, or has anyone come up with a
less painful and time-consuming way to do this? Hacking the DB so the platform
check doesn't block restoring an AIX TSM DB on a Linux box? Anything?

Thanks again and sorry to revisit all of this again. Just hoping something has 
changed in the last few years.
John


Re: [ADSM-L] ANR3181E: Replication server server name has the same replication key as this server.

2011-11-08 Thread Daniel Sparrman
ANR3181E: Replication server server name has the same replication key as this
server.

Explanation: During replication initialization, the server detected that the
replication key of the target server is the same as this server's. Each server
must have a unique replication key.

System action: Replication to the specified server is stopped.

User response: Issue the SET REPLSERVER command to specify a different server
for the target of the replication.

Parent topic: Version 6.3.0 ANR messages


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Stefan Folkerts 
Sent by: ADSM: Dist Stor Manager 
Date: 11/08/2011 10:31
Subject: [ADSM-L] ANR3181E: Replication server server name has the same 
replication key as this server.

After cloning a virtual machine with TSM installed and setting up
replication, I got this error message:

ANR3181E: Replication server server name has the same replication key as
this server.

Explanation: During replication initialization, the server detected that the
replication key of the target server is the same as this server's. Each server
must have a unique replication key.

System action: Replication to the specified server is stopped.

My question is: how can I change the replication key of a TSM instance?


Re: [ADSM-L] ANR3181E: Replication server server name has the same replication key as this server.

2011-11-08 Thread Daniel Sparrman
Hi Stefan

I assume you have changed the TSM server name of the cloned TSM server to
something else, and that you have registered the new name as a server on the
old TSM server?

In that case, you need to issue the SET REPLSERVER command; this will also
update the replication key. There is no command that sets only the replication
key.

Regards

Daniel Sparrman 


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Stefan Folkerts 
Sent by: ADSM: Dist Stor Manager 
Date: 11/08/2011 10:57
Subject: Re: [ADSM-L] ANR3181E: Replication server server name has the same 
replication key as this server.

This is only valid if you defined the source as the replication target; it
doesn't update the replication key of the TSM server, it changes the
replication target to a new server, and I don't want to do that.

Because I cloned the VM, the whole TSM configuration was copied, including
something new: the replication key. I can't find a command or setting to
change this. I'm hoping it's not generated at install time and fixed, because
that would break the ability to clone TSM servers at the VM level.


On Tue, Nov 8, 2011 at 10:38 AM, Daniel Sparrman
daniel.sparr...@exist.sewrote:

 ANR3181E: Replication server server name has the same replication key as
 this server.
 Explanation During replication initialization, the server detected that
 the replication key of the target server is the same as this server. Each
 server must have a unique replication key.

 System action Replication to the specified server is stopped.

 User response Issue the SET REPLSERVER command to specify a different
 server for the target of the replication.



 Parent topic: Version 6.3.0 ANR messages


 Daniel Sparrman
 Exist i Stockholm AB
 Växel: 08-754 98 00
 Fax: 08-754 97 30
 daniel.sparr...@exist.se
 http://www.existgruppen.se
 Posthusgatan 1 761 30 NORRTÄLJE

 -ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
 To: ADSM-L@VM.MARIST.EDU
 From: Stefan Folkerts
 Sent by: ADSM: Dist Stor Manager
 Date: 11/08/2011 10:31
 Subject: [ADSM-L] ANR3181E: Replication server server name has the same
 replication key as this server.

 After cloning a virtual machine with TSM installed and setting up
 replication I got this error message ;

 ANR3181E: Replication server server name has the same replication key as
 this server.
 Explanation

 During replication initialization, the server detected that the replication
 key of the target server is the same as this server. Each server must have
 a unique replication key.
 System action

 Replication to the specified server is stopped.
 User response

 My question is ; How can i change the replication key of a TSM instance?



Re: [ADSM-L] ANR3181E: Replication server server name has the same replication key as this server.

2011-11-08 Thread Daniel Sparrman
As I understand it, the replication key is the same thing as the SSL key, which
is stored in the SSL keyring database. Since you've cloned the TSM server, both
servers now have the same SSL key, making it impossible to replicate between
the two.

I found the following commands which might help you resolve the issue:

QUERY SSLKEYRINGPW
SET SSLKEYRINGPW newpw UPDATE=Y

I'm not sure they will resolve the issue with identical keys, since I think
they only set the password for the keyring database. That's why I figured
updating the replication status would also update the SSL key.

I'm also not sure whether FORCESYNC=YES on the UPDATE SERVER command would do
the same thing, since it refreshes the verification key.


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Stefan Folkerts 
Sent by: ADSM: Dist Stor Manager 
Date: 11/08/2011 12:07
Subject: Re: [ADSM-L] ANR3181E: Replication server server name has the same 
replication key as this server.

Daniel,

Ok, so you are saying that by changing the replication target of an instance
you also change the replication key. Why would you want that to happen with the
same command? It seems a bit odd, but OK.

I tried it (it's a lab setup anyway). I also issued the command on the
tsm63node3 server, with a different target, and it does not appear to work; I
still get the same message:

11/08/2011 12:03:48  ANR2017I Administrator ADMIN issued command: SET REPLSERVER tsm63node3  (SESSION: 281)
11/08/2011 12:03:48  ANR1634I Default replication server name set to TSM63NODE3.  (SESSION: 281)
11/08/2011 12:03:48  ANR0405I Session 281 ended for administrator ADMIN (WinNT). (SESSION: 281)
11/08/2011 12:03:52  ANR0407I Session 282 started for administrator ADMIN (WinNT) (Tcp/Ip 172.28.15.16(3564)). (SESSION: 282)
11/08/2011 12:03:52  ANR2017I Administrator ADMIN issued command: REPLICATE NODE REPGROUP01  (SESSION: 282)
11/08/2011 12:03:52  ANR0984I Process 10 for Replicate Node started in the BACKGROUND at 12:03:52 PM. (SESSION: 282, PROCESS: 10)
11/08/2011 12:03:52  ANR2110I REPLICATE NODE started as process 10. (SESSION: 282, PROCESS: 10)
11/08/2011 12:03:52  ANR0405I Session 282 ended for administrator ADMIN (WinNT). (SESSION: 282)
11/08/2011 12:03:52  ANR0408I Session 283 started for server TSM63NODE3 (Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 282, PROCESS: 10)
11/08/2011 12:03:52  ANR3181E Replication server TSM63NODE3 has the same replication key as this server. (SESSION: 282, PROCESS: 10)
11/08/2011 12:03:52  ANR0409I Session 283 ended for server TSM63NODE3 (Linux/x86_64). (SESSION: 282, PROCESS: 10)
11/08/2011 12:03:52  ANR3179E Server TSM63NODE3 does not support replication or is not initialized for replication. (SESSION: 282, PROCESS: 10)
11/08/2011 12:03:52  ANR0327I Replication of node LOADGEN completed. Files current: 0. Files replicated: 0 of 0. Files updated: 0 of 0. Files deleted: 0 of 0. Amount replicated: 0 KB of 0 KB. Amount transferred: 0 KB. Elapsed time: 0 Day(s), 0 Hour(s), 0 Minute(s). (SESSION: 282, PROCESS: 10)
11/08/2011 12:03:52  ANR0985I Process 10 for Replicate Node running in the BACKGROUND completed with completion state FAILURE at 12:03:52 PM. (SESSION: 282, PROCESS: 10)
11/08/2011 12:03:52  ANR1893E Process 10 for Replicate Node completed with a completion state of FAILURE. (SESSION: 282, PROCESS: 10)
11/08/2011 12:03:52  ANR0408I Session 284 started for server TSM63NODE3 (Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 282)
11/08/2011 12:03:52  ANR3181E Replication server TSM63NODE3 has the same replication key as this server. (SESSION: 282)
11/08/2011 12:03:52  ANR0409I Session 284 ended for server TSM63NODE3 (Linux/x86_64). (SESSION: 282)



On Tue, Nov 8, 2011 at 11:03 AM, Daniel Sparrman
daniel.sparr...@exist.sewrote:

 Hi Stefan

 I assume that you have changed the TSM servername of the cloned TSM server
 to something else? And that you have registred the new name as a server on
 the old TSM server?

 In that case, you need to update the replication key by issuing the below
 command. This will also update the replication key. There is no command to
 just set the replication key.

 Regards

 Daniel Sparrman


 Daniel Sparrman

TSM Performance 5.5 vs 6.2

2011-10-27 Thread Daniel Sparrman
First off, to determine whether your hardware is sufficient, it would be useful
to know the size of your environment (DB size, amount of daily data, total
amount of data).

As for 6.2 in general, TSM's internal housekeeping is a lot faster. One of the
main issues for a lot of people on 5.5 was expiration processing; with the new
features such as multi-threading, expiration now runs a lot faster.
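For example, on a 6.x server expiration can be told to use several threads
(the thread count here is illustrative; in real use the command goes through
dsmadmc -- here it is only built and displayed):

```shell
#!/bin/sh
# Multi-threaded expiration: RESOURCE sets the number of parallel threads.
cmd="expire inventory resource=4 wait=no"
echo "$cmd"
```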

TSM 6.2 requires a bit more hardware, but overall, all the environments I've
upgraded so far have seen performance increases across the board. So I believe
the risk of your performance going down is very, very small.

I think IBM mentioned somewhere that overall database performance roughly
tripled with the move to DB2 as the database engine.

Best Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Druckenmiller, David 
Sent by: ADSM: Dist Stor Manager 
Date: 10/26/2011 20:41
Subject: [ADSM-L] TSM Performance 5.5 vs 6.2


I need to show management that simply upgrading TSM from 5.5 to 6.2 will not 
cause a degradation in performance.  I know a lot of upgrades are done by 
moving to new hardware.  We don't have that luxury.  We are currently running 
on AIX 6.1 on p520 server.  I've already upped memory to 32gb.

Anyone have any experience to share?

Thanks
Dave

-
CONFIDENTIALITY NOTICE: This email and any attachments may contain
confidential information that is protected by law and is for the
sole use of the individuals or entities to which it is addressed.
If you are not the intended recipient, please notify the sender by
replying to this email and destroying all copies of the
communication and attachments. Further use, disclosure, copying,
distribution of, or reliance upon the contents of this email and
attachments is strictly prohibited. To contact Albany Medical
Center, or for a copy of our privacy practices, please visit us on
the Internet at www.amc.edu.

Re: [ADSM-L] Dedup for DB's?

2011-10-18 Thread Daniel Sparrman
We have customers whose Exchange, DB2, and SQL backups dedupe just fine. The
Exchange databases total 2TB per machine, with (if I'm not recalling wrong)
200GB per storage group.

So unless you're talking huge sizes: no, dedup works fine for databases. And
the dedup ratio on databases is really good, to be honest.
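For reference, client-side deduplication is switched on in the client options
file; a minimal illustrative fragment (the node must also be permitted to
dedup on the server side, e.g. UPDATE NODE ... DEDUPLICATION=CLIENTORSERVER,
and the target pool must be a dedup-enabled FILE pool):

```
* dsm.sys / dsm.opt fragment (illustrative)
DEDUPLICATION      YES
* optional local cache of known chunks to cut server round-trips
ENABLEDEDUPCACHE   YES
```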

Best Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Prather, Wanda 
Sent by: ADSM: Dist Stor Manager 
Date: 10/18/2011 19:24
Subject: [ADSM-L] Dedup for DB's?


Just asking.
As I recall, the announcements for client-side dedup said it is supported by
the API, so it should therefore work with the TDPs.
Has anybody achieved (or attempted) significant improvements in throughput
when backing up large DBs using client-side dedup?

Wanda Prather  |  Senior Technical Specialist  |  wprat...@icfi.com  |  www.jasi.com
ICF Jacob & Sundstrom  |  401 E. Pratt St, Suite 2214, Baltimore, MD 21202  |  410.539.1135

Re: Check signals on Power vs. x86...

2011-10-12 Thread Daniel Sparrman
A bit off-topic, but I have no problem convincing the wife of the safety of
using a Mercedes to take the children to school instead of a Skoda ;)

Same comparison: would your wife accept you taking the kids to school in a
rusty old Skoda, or would she prefer you driving them in your new Mercedes? If
you can't convince your wife of that, you should probably attend my seminars ;)


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Schaub, Steve 
Sent by: ADSM: Dist Stor Manager 
Date: 10/12/2011 13:41
Subject: Re: [ADSM-L] Check signals on Power vs. x86...


If you can convince your wife that you need a Porsche to take the kids to 
school, you need to start offering seminars.
-steve

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Daniel 
Sparrman
Sent: Tuesday, October 11, 2011 10:46 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Check signals on Power vs. x86...

Yepp, but when I pass you on the freeway in my Porsche, your keychain won't be
much help ;)

Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Shawn Drew 
Sent by: ADSM: Dist Stor Manager 
Date: 10/11/2011 16:31
Subject: Re: [ADSM-L] Check signals on Power vs. x86...


Hehe, I wouldn't have to be seen with x86 hardware.  I could still have a 
power7 keychain if I wanted to. 

Regards, 
Shawn

Shawn Drew





Internet: daniel.sparr...@exist.se
Sent by: ADSM-L@VM.MARIST.EDU, 10/08/2011 03:46 AM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: Re: [ADSM-L] Check signals on Power vs. x86...

A Toyota Prius easily beats a Porsche in a price / performance comparison.

However, would you still buy the Prius?


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Shawn Drew 
Sent by: ADSM: Dist Stor Manager 
Date: 10/07/2011 17:20
Subject: Re: [ADSM-L] Check signals on Power vs. x86...


I think he was looking for Power vs x86 in price/performance.
I.E  If you spend 50K on Power systems and 50K on x86 systems.  which
could produce more I/O throughput.
(If not, that's what I'd like to hear an update on)

From what I remember from previous discussions, x86/linux would come out
on top for pure price/performance,
but managing many x86 systems vs a single/fewer Power systems certainly
has some value.
I'm sure there are many other intangibles when it comes to value.


Regards,
Shawn

Shawn Drew





Internet: howard.co...@ardenthealth.com
Sent by: ADSM-L@VM.MARIST.EDU, 10/07/2011 09:44 AM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: Re: [ADSM-L] Check signals on Power vs. x86...

Yes, it is.  There are very few things I would say this about, but this is
one of 'em.

For a RHEL box to match the performance capabilities it would have to be
installed on Power as well (which it can be).  I think the evidence I've
seen both in experience and raw numbers has shown the power boxes can
sustain higher levels of throughput and performance.  Because to get an
x86 box to perform at those levels you would have to spend just as much,
buying one really powerful x86 box, or spreading it across multiple boxes.

All that said, if we didn't have a good AIX guy here, I'd go RHEL on
multiple boxes, or on a really powerful x86 box.  I can admin AIX, but I'm
much more comfortable on Linux.  So, as someone else said, it all depends
on what you're good at.


See Ya'
Howard Coles Jr.
John 3:16!

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Allen S. Rout
Sent: Thursday, October 06, 2011 3:46 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Check signals on Power vs. x86...

I'm looking around for an update on my expectations that power hardware
and AIX is more performant per memory/CPU/IO than x86 and RHEL.

I know this topic comes up from time to time; I don't think I've seen it
rehashed particularly recently.

I'm an advocate of AIX for this, but I wanted to check signals and
experiences, again.


- Allen S. Rout


DISCLAIMER:
This communication, along with any documents, files or attachments, is
intended only for the use of the addressee and may contain legally
privileged and confidential information. If you are not the intended
recipient, you are hereby notified that any dissemination, distribution or
copying

Re: Data change rate

2011-10-12 Thread Daniel Sparrman
Hi Allen

Out of curiosity: in 2), that isn't the total filespace capacity, right, but
the used amount on each filespace? Or are you looking at some other number?

Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Allen S. Rout 
Sent by: ADSM: Dist Stor Manager 
Date: 10/12/2011 15:41
Subject: Re: [ADSM-L] Data change rate


On 10/12/2011 08:53 AM, Ehresman,David E. wrote:

 Is there any way from the server side to get an estimate of overall
 client data change rate?

I'm doing this, crudely, right now as I try to characterize our
hosting customers' behavior, to identify outliers.

Here's what I'm doing:


1: sum Yesterday's sessions, per-node.  This is from the actlogs.

2: total filespace capacity, per-node.

3: Divide 1 by 2.

4: Discard outliers.  I have some zeroes, and I have some 100+%: the
latter tend to be database backups and erroneous reads on capacity
(like a NAS filespace which is technically terabytes large, but
looks small in a 'q filespace').

5: Do (elementary) statistics on the results.
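Steps 3-5 above can be sketched in a few lines of awk over made-up per-node
numbers (columns: node, bytes sent yesterday, filespace capacity; the data and
thresholds are illustrative):

```shell
#!/bin/sh
# Per-node percent changed, outliers discarded, then the mean.
result=$(awk '
    { pct = 100 * $2 / $3 }                    # step 3: sent / capacity
    pct > 0 && pct <= 100 { sum += pct; n++ }  # step 4: drop zeroes and 100%+
    END { if (n) printf "mean change rate: %.1f%%\n", sum / n }  # step 5
' <<'EOF'
nodeA 50 1000
nodeB 0 1000
nodeC 2000 1000
nodeD 150 1000
EOF
)
echo "$result"
```

With this toy input, nodeB (0%) and nodeC (200%) are discarded and the mean of
the remaining 5% and 15% is reported as 10.0%.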



- Allen S. Rout

Re: Weird move nodedata with maxproc?

2011-10-11 Thread Daniel Sparrman
That would've been me ;)

Since this:

Specifies the maximum number of parallel processes to use for moving data. 
This parameter is optional. You can specify a value from 1–999, inclusive. The 
default value is 1. Increasing the number of parallel processes should improve 
throughput

is mentioned under Move Node Data - Moving selected filespaces for one node,

I would assume the command handles one filespace per process, not (as with
migration and backup storage pool) one node per process.

Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Sascha Askani 
Sent by: ADSM: Dist Stor Manager 
Date: 10/11/2011 13:39
Subject: Re: [ADSM-L] Weird move nodedata with maxproc?


Am 11.10.2011 13:31, schrieb Richard Sims:
 The TSM documentation fails to say just how Maxprocess is honored in the Move 
 Nodedata context.  In other commands, such as Backup Stgpool, it is known 
 that operation is by clusters - a non-grouped node or a collocation group 
 of nodes.  Whereas your task involves a single node, it looks like you are 
 getting a single thread.

 Richard Sims, at Boston University

Richard,

thanks for your reply. I also got a mail from a fellow listmember
suggesting that the problem could also arise from the node only having
one (1) filespace.

Best,

Sascha

Re: Check signals on Power vs. x86...

2011-10-11 Thread Daniel Sparrman
Yepp, but when I pass you on the freeway in my Porsche, your keychain won't be
much help ;)

Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Shawn Drew 
Sent by: ADSM: Dist Stor Manager 
Date: 10/11/2011 16:31
Subject: Re: [ADSM-L] Check signals on Power vs. x86...


Hehe, I wouldn't have to be seen with x86 hardware.  I could still have a 
power7 keychain if I wanted to. 

Regards, 
Shawn

Shawn Drew





Internet: daniel.sparr...@exist.se
Sent by: ADSM-L@VM.MARIST.EDU, 10/08/2011 03:46 AM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: Re: [ADSM-L] Check signals on Power vs. x86...

A Toyota Prius easily beats a Porsche in a price / performance comparison.

However, would you still buy the Prius?


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Shawn Drew 
Sent by: ADSM: Dist Stor Manager 
Date: 10/07/2011 17:20
Subject: Re: [ADSM-L] Check signals on Power vs. x86...


I think he was looking for Power vs x86 in price/performance.
I.E  If you spend 50K on Power systems and 50K on x86 systems.  which
could produce more I/O throughput.
(If not, that's what I'd like to hear an update on)

From what I remember from previous discussions, x86/linux would come out
on top for pure price/performance,
but managing many x86 systems vs a single/fewer Power systems certainly
has some value.
I'm sure there are many other intangibles when it comes to value.


Regards,
Shawn

Shawn Drew





Internet: howard.co...@ardenthealth.com
Sent by: ADSM-L@VM.MARIST.EDU, 10/07/2011 09:44 AM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: Re: [ADSM-L] Check signals on Power vs. x86...

Yes, it is.  There are very few things I would say this about, but this is
one of 'em.

For a RHEL box to match the performance capabilities it would have to be
installed on Power as well (which it can be).  I think the evidence I've
seen both in experience and raw numbers has shown the power boxes can
sustain higher levels of throughput and performance.  Because to get an
x86 box to perform at those levels you would have to spend just as much,
buying one really powerful x86 box, or spreading it across multiple boxes.

All that said, if we didn't have a good AIX guy here, I'd go RHEL on
multiple boxes, or on a really powerful x86 box.  I can admin AIX, but I'm
much more comfortable on Linux.  So, as someone else said, it all depends
on what you're good at.


See Ya'
Howard Coles Jr.
John 3:16!

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Allen S. Rout
Sent: Thursday, October 06, 2011 3:46 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Check signals on Power vs. x86...

I'm looking around for an update on my expectations that power hardware
and AIX is more performant per memory/CPU/IO than x86 and RHEL.

I know this topic comes up from time to time; I don't think I've seen it
rehashed particularly recently.

I'm an advocate of AIX for this, but I wanted to check signals and
experiences, again.


- Allen S. Rout





This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or 
partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that 
certain
functions and services for BNP Paribas may be performed by BNP Paribas 
RCC, Inc.



Re: Check signals on Power vs. x86...

2011-10-08 Thread Daniel Sparrman
A Toyota Prius easily beats a Porsche in a price / performance comparison.

However, would you still buy the Prius?


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Shawn Drew 
Sent by: ADSM: Dist Stor Manager 
Date: 10/07/2011 17:20
Subject: Re: [ADSM-L] Check signals on Power vs. x86...


I think he was looking for Power vs x86 in price/performance.
I.E  If you spend 50K on Power systems and 50K on x86 systems.  which
could produce more I/O throughput.
(If not, that's what I'd like to hear an update on)

From what I remember from previous discussions, x86/linux would come out
on top for pure price/performance,
but managing many x86 systems vs a single/fewer Power systems certainly
has some value.
I'm sure there are many other intangibles when it comes to value.


Regards,
Shawn

Shawn Drew





Internet: howard.co...@ardenthealth.com
Sent by: ADSM-L@VM.MARIST.EDU, 10/07/2011 09:44 AM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: Re: [ADSM-L] Check signals on Power vs. x86...

Yes, it is.  There are very few things I would say this about, but this is
one of 'em.

For a RHEL box to match the performance capabilities, it would have to be
installed on Power as well (which it can be).  I think the evidence I've
seen, both in experience and raw numbers, has shown the Power boxes can
sustain higher levels of throughput and performance, because to get an
x86 box to perform at those levels you would have to spend just as much,
either buying one really powerful x86 box or spreading the load across
multiple boxes.

All that said, if we didn't have a good AIX guy here, I'd go RHEL on
multiple boxes, or on a really powerful x86 box.  I can admin AIX, but I'm
much more comfortable on Linux.  So, as someone else said, it all depends
on what you're good at.


See Ya'
Howard Coles Jr.
John 3:16!

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Allen S. Rout
Sent: Thursday, October 06, 2011 3:46 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Check signals on Power vs. x86...

I'm looking around for an update on my expectations that power hardware
and AIX is more performant per memory/CPU/IO than x86 and RHEL.

I know this topic comes up from time to time; I don't think I've seen it
rehashed particularly recently.

I'm an advocate of AIX for this, but I wanted to check signals and
experiences, again.


- Allen S. Rout


DISCLAIMER:
This communication, along with any documents, files or attachments, is
intended only for the use of the addressee and may contain legally
privileged and confidential information. If you are not the intended
recipient, you are hereby notified that any dissemination, distribution or
copying of any information contained in or attached to this communication
is strictly prohibited. If you have received this message in error, please
notify the sender immediately and destroy the original communication and
its attachments without reading, printing or saving in any manner.

Please consider the environment before printing this e-mail.



This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that certain
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.

Ang: [ADSM-L] Check signals on Power vs. x86...

2011-10-07 Thread Daniel Sparrman
Well, I have customers running both AIX and RHEL, and my experience so far is 
that AIX still outperforms RHEL on I/O performance. It might be that the 
systems running AIX usually have more expensive/higher-performance equipment 
connected to them than their RHEL counterparts.

I also prefer the device management on AIX compared to Linux (IBMTape, storage 
device management, LVM) as well as the cluster capabilities that are provided. 

But in the end, I guess it all comes down to what competence you have in-house, 
and what the budget is. Linux is still a very competitive option.

Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Allen S. Rout 
Sent by: ADSM: Dist Stor Manager 
Date: 10/06/2011 22:46
Subject: [ADSM-L] Check signals on Power vs. x86...

I'm looking around for an update on my expectations that power hardware
and AIX is more performant per memory/CPU/IO than x86 and RHEL.

I know this topic comes up from time to time; I don't think I've seen it
rehashed particularly recently.

I'm an advocate of AIX for this, but I wanted to check signals and
experiences, again.


- Allen S. Rout

Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

2011-10-06 Thread Daniel Sparrman
I have customers who, during an audit, have seen the object count go from 0 to +2 
billion and then start counting backwards with a minus sign (that was during TSM 5.5), 
several times. So no, it's not bollocks. I did, however, mean million (as in, 
thousands of millions, i.e. several billion), so a mistake on my side there.

Some of those customers also hit the technical limit during 5.5 for the 
database size (524GB) on several of their TSM instances, and thus have even more 
instances of TSM today.

Sorry for the mistake of saying billion and not million. As for how many 
objects they actually have in each TSM instance, it's fairly hard to tell, since 
there is no way to do a select on contents, for example, to count the 
number of objects. Those kinds of SQL statements just hang. And like I said, 
during the last audit we did on one of the TSM instances, it went up to 
+2 billion objects and then started counting backwards several times, so we 
actually have no clue about the exact number of objects in that database. 

Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Ben Bullock 
Sent by: ADSM: Dist Stor Manager 
Date: 10/06/2011 16:11
Subject: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

OK, I have been following this thread with some interest, since I have a dedupe 
appliance. From the conversation, I've come to the conclusion that Daniel is a 
very cautious administrator who would like to eliminate any risk of data loss. 
Don't we all; it's a noble and worthwhile endeavor. All the discussed options 
are worthwhile if you are concerned about hash collisions (copy pools, async 
replication, reuse delay, etc.).

At some point in the pursuit you reach diminishing returns, and it is not 
worth the money to eliminate the next 0.01% probability of failure. Everyone 
will have a different stopping point.  

We get it. I think we have beaten this horse within an inch of its life.

But I gotta ask...
Daniel, you said "but I've got several TSM customers who have several 
thousands of billions of objects". Are you telling us that someone has a TSM 
server with multiple ~TRILLIONS~ of objects backed up? Is that hyperbole or 
truth?  

Ben


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Daniel 
Sparrman
Sent: Wednesday, October 05, 2011 3:26 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

Hi Remco

Not sure if you're talking about hardware de-dup or TSM de-dup (which uses 
a larger block size due to the load), but:

Relatively small? I've only seen it happen once, but then I live in a 
relatively small market, since I live in Sweden. So you're telling me (based on 
facts) that this hasn't happened elsewhere? I seriously have to disagree. In 
my opinion, I think it's more likely that others that had this issue have 
decided to keep it in the dark. Sweden is a relatively small market, and the 
odds that it would have happened here, but nowhere else, are quite small.

Not sure about the size or anything in your TSM comparison, but I've got 
several TSM customers who have several thousands of billions of objects ... And 
like I said, if the chance is 1 in 1,000,000,000,000, it's much more likely to hit 
you by 1,000,000. It's not a quota that needs to be filled before it hits you; 
it's a random chance.

And, like the customer I had who got it, if it's a very common block getting 
that hash conflict, yes, it will hit you badly, since every file that contains 
that block will be invalid.

I do agree with your comment about TSM v6, though; I'd consider it very stable. 
I'd actually (today, with the amount of checking being done) consider it more 
stable than still being at version 5.5.

Regards

Daniel

Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Remco Post
Sent by: ADSM: Dist Stor Manager 
Date: 10/05/2011 21:11
Subject: Re: [ADSM-L] vtl versus file systems for pirmary pool

Hi,

I saw last week that about half of the people visiting the TSM Symposium were 
running V6; it's been stable for me so far.

The likelihood of an accidental SHA-1 hash collision is relatively small; even 
compared to the total number of objects that a TSM server could possibly ever 
store during its entire lifetime, it is insignificant. That being said, if you 
think your data is too valuable to even risk that, don't dedup. 


-- 

Gr., Remco

Op 5 okt. 2011 om 19:24 heeft Shawn Drew shawn.d...@americas.bnpparibas.com 
het volgende geschreven:

 Along this line, we are still using TSM5.5   Some

Ang: [ADSM-L] Label volumes in a scsi library?

2011-10-06 Thread Daniel Sparrman
Use SEARCH=YES and it will search within the library, not the bulk entry/exit port.
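As a concrete sketch (the LABELSOURCE and CHECKIN parameters here are my assumptions, not from Joni's post; adjust to taste), the whole-library variant of the command would look something like:

```
label libvolume NAS_QI6000 search=yes labelsource=barcode checkin=scratch
```

With SEARCH=YES the server scans the library slots for unlabeled cartridges instead of prompting for the entry/exit port, which is exactly what the ANR8323I/ANR8385E sequence below ran into.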

Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Moyer, Joni M 
Sent by: ADSM: Dist Stor Manager 
Date: 10/06/2011 16:59
Subject: [ADSM-L] Label volumes in a scsi library?

Hi Everyone,

I was trying to label a volume in a SCSI library. I used the LABEL LIBVOLUME 
command to try to label Q0 in the library NAS_QI6000, and it is failing.  
The tape is already physically in a slot in the library, so could someone 
please explain how I am to label tapes within this library in TSM?

Any help is greatly appreciated

Date/Time          Message
-----------------  ----------------------------------------------------------
10/06/11 10:24:59 ANR0984I Process 33581 for LABEL LIBVOLUME started in the
   BACKGROUND at 10:24:59. (SESSION: 58024, PROCESS: 33581)
10/06/11 10:24:59 ANR8799I LABEL LIBVOLUME: Operation for library NAS_QI6000
   started as process 33581. (SESSION: 58024, PROCESS:
   33581)
10/06/11 10:24:59 ANR0609I LABEL LIBVOLUME started as process 33581.
   (SESSION: 58024, PROCESS: 33581)
10/06/11 10:25:10 ANR8323I 006: Insert ANY volume Q0 R/W into entry/exit
   port of library NAS_QI6000 within 60 minute(s); issue
   'REPLY' along with the request ID when ready. (SESSION:
   58024, PROCESS: 33581)
10/06/11 10:26:45 ANR2017I Administrator LIDZR8V issued command: QUERY
   ACTLOG search=process: 33581  (SESSION: 58028)
10/06/11 10:32:21 ANR2017I Administrator LIDZR8V issued command: QUERY
   ACTLOG search=process: 33581  (SESSION: 58031)
10/06/11 10:32:30 ANR8385E All entry/exit ports of library NAS_QI6000 are
   empty. (SESSION: 58024, PROCESS: 33581)
10/06/11 10:32:30 ANR8802E LABEL LIBVOLUME process 33581 for library
   NAS_QI6000 failed. (SESSION: 58024, PROCESS: 33581)
10/06/11 10:32:30 ANR0985I Process 33581 for LABEL LIBVOLUME running in the
   BACKGROUND completed with completion state FAILURE at
   10:32:30. (SESSION: 58024, PROCESS: 33581)





This e-mail and any attachments to it are confidential and are intended solely 
for use of the individual or entity to whom they are addressed. If you have 
received this e-mail in error, please notify the sender immediately and then 
delete it. If you are not the intended recipient, you must not keep, use, 
disclose, copy or distribute this e-mail without the author's prior permission. 
The views expressed in this e-mail message do not necessarily represent the 
views of Highmark Inc., its subsidiaries, or affiliates.

Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

2011-10-05 Thread Daniel Sparrman
Not really sure why you would need 24+ hours to handle copying data to your 
secondary site.

With features like:

- Copy storage pool on client backups (simultaneous write)
- Copy storage pool on migration
- Normal backup storage pool

with the first two there shouldn't really be much data left to copy for the third. 
I've seen sites handling up to 35TB per day using these features, and they 
never had to spend 24 hours doing backup storage pool.

Of course, it also depends on the throughput you're getting. There are VTLs out 
there that can handle up to 45TB/hour and cost a lot less than 1 million. However, 
it does require the infrastructure (fibre) connections to reach those numbers.
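The simultaneous-write setup described above can be sketched with TSM 6.2 admin commands roughly as follows (the pool names are made up for illustration; AUTOCOPY on migration requires 6.2):

```
/* attach a copy pool for simultaneous write on both client
   sessions and server migration (TSM 6.2+) */
update stgpool DISKPOOL copystgpools=OFFSITEPOOL autocopy=all

/* a nightly backup stgpool then only has the leftovers to copy */
backup stgpool DISKPOOL OFFSITEPOOL maxprocess=4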

As for the hash conflict, the DD uses SHA-1 with a variable block length for 
deduplication. Theoretically, the chance of any given collision is 1 in 2^160. 
That doesn't seem bad, but because of the birthday paradox your first hash 
collision is likely to happen far earlier than that number suggests.
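The birthday effect alluded to here is easy to put a number on. As an illustration (my own, not from the thread): with n random chunks and a b-bit hash, the chance of at least one collision is roughly 1 - exp(-n(n-1)/2^(b+1)), which stays vanishingly small for SHA-1 even at a trillion chunks:

```python
import math

def collision_probability(n: int, bits: int = 160) -> float:
    """Approximate birthday-bound probability of at least one
    collision among n random values in a 2**bits hash space."""
    d = 2 ** bits
    # p = 1 - exp(-n*(n-1)/(2d)); expm1 keeps precision for tiny values
    return -math.expm1(-n * (n - 1) / (2 * d))

# Even a trillion deduplicated chunks leave the odds negligible
for n in (10**9, 10**12):
    print(f"{n:>15,d} objects -> p = {collision_probability(n):.3e}")
```

So the first collision is indeed "more likely than 1 in 2^160" in the birthday sense, yet still astronomically unlikely at any realistic object count.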

Best Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Shawn Drew 
Sent by: ADSM: Dist Stor Manager 
Date: 10/05/2011 00:14
Subject: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: 
Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

We were using the copystg feature of the storage pools before, and even 
then our TSM cycle was growing, growing, and eventually passed 24 hours. 
For us, it wasn't 4 hours vs. 1 hour.  It was 24+ hours plus finger-crossing vs. 
one hour.  We reached the point where we had to kill the backup-stg's and 
try to catch up on the weekends.

As far as the hash conflict, I think all of these systems use 
cryptographic-level hashes (MD5/SHA-1). From my understanding, the chances 
of a collision are way, way, exponentially lower than 0.1%.  Where did that 
number come from? Is that a commonly accepted number for this issue? Maybe 
I'm not understanding the proposed error. Are you referring to a 
cryptographic hash collision, or something else? 

In order to update our environment to handle the traditional TSM copypool 
architecture, we would have had to spend an additional million dollars (at 
a minimum) and many more expensive WAN links.   Considering this is for 
short-term data (30 days or less) , we still have weekly tape backups, and 
I've never seen anything to convince me that the chances of these specific 
errors are anywhere near the quoted .1%,  I'm still confident this was the 
right move for us. 

I do keep looking for where I could be wrong, so I want to make sure I 
completely understand what you are saying.

Regards, 
Shawn

Shawn Drew





From: daniel.sparr...@exist.se
Sent by: ADSM-L@VM.MARIST.EDU
Date: 10/04/2011 04:44 PM
To: ADSM-L
Subject: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: 
[ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool






Not entirely true, since you don't have to do backup stgpool as the only resort 
with TSM 6.2. Since the simultaneous-copy feature is now available not only 
for clients but for server processes, there is nothing saying 
you have to copy all the data by using backup stgpool.

And it's not a matter of whether the risk of a logical error exists. The 
risk is there, and if it hits you, you don't lose a few hours of data. A 
hash conflict would strike you across both primary and secondary storage, 
and it could strike a huge amount of data if you're unlucky. You 
get a hash conflict on a common hash key, and it could strike you all 
across the board.

So, having your offsite replicated within the hour instead of 4 hours, with 
the risk of losing most of it, or letting it take 4 hours but being sure 
you'll be able to recover?

The descriptions around here sometimes scare me (not in particular this 
description). Some people seem to think that a 0.1% chance of losing it all 
is worth it for other benefits.

If you told your boss (not your IT manager, but your business developer, 
for example) that there's a 0.1% chance you'll lose all your backups and 
all your versions, think he'd be OK with it? My guess? I'd say he wouldn't 
even answer it...

If you feel replication is such a good way to go, you can always do 
sync or async mirroring instead. You're doing just the same (since you can 
actually have verification on your mirroring). You're syncing/replicating 
bits and bytes, but there's no way for the mirroring/replication to be 
sure it's actually readable by the app, unlike having your app create a 
secondary copy.

Best Regards & nice evening

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30

Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

2011-10-05 Thread Daniel Sparrman
I'm not entirely focused on the plausible risk of a hash collision. Broken 
firmware or a logical error during de-dup/replication destroying 
your data is far more likely to happen.

Putting all your eggs in one basket is what I'm against. Putting all your data 
in a duplicated disk-system solution is putting all your eggs in one basket. 
When TSM is duplicating your data (aka backing up storage pools), there is no 
logical connection between your primary storage pool and your copy pool. In a 
replicated/mirrored solution, you have a logical connection (not only a 
physical one), which creates the risk of striking out not only your primary 
storage but also your copy-pool storage in the same process.

In contrast to a replicated/mirrored solution, TSM actually needs to be able to 
read the logical part of the data, aka the files, while a replicated solution with 
no application awareness doesn't read the logical part, only the bits and bytes.

Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-Allen S. Rout a...@ufl.edu wrote: -
To: ADSM: Dist Stor Manager ADSM-L@vm.marist.edu
From: Allen S. Rout a...@ufl.edu
Date: 10/05/2011 14:43
Cc: Daniel Sparrman daniel.sparr...@exist.se
Subject: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: 
Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for 
pirmary pool

Extensive top-post trail deleted.


On 10/05/2011 02:39 AM, Daniel Sparrman wrote:

 As with the hash conflict, the DD uses SHA-1 with a variable block
 length for deduplication. Theoretically, there is a 1 in 2^160 chance it
 will happen. Doesn't seem to be that bad, but your first hash
 collision is randomly more likely to happen than that number
 suggests.


I agree with your technical analysis, and I feel your disquiet.  Waay
back in the '80s, I brought an (8mm :) tape to a meeting with a dept
official to say "One chance in a billion means to me that there are
five broken files on this tape."  The topic then was "should we make
copies of these?"

But I feel that you express these numbers in a vacuum, which misleads.
The appropriate judgement has to be not "Is an error possible?" but
"How risky is this?"; and that risk has to be compared to the other
risks you're taking.

I feel that you are focused on the unpredictably large impact of a
collision.  "All my backups are gone!" is emotionally accessible to
any of us, and makes me shudder.  But that scenario is not a plausible
result of a hash collision.  Not that the reality is peachy: "Some
difficult-to-identify set of my files are now corrupt" is quite bad
enough, thank you.

A 1/10^30 risk just doesn't have the same emotional availability.  But
the homeopathic chances of it happening ought to temper the
resistance.


I would invoke the analogy of driving your car across the country
vs. taking an airplane; Many are paralyzed by the risks of air travel,
when the actuaries will tell you with great precision that you've a
better chance of dying in the drive _to the airport_ than once you've
taken off.  Similarly, I'd guess that more DD failures have happened
due to physical violence than due to hash collisions.


- Allen S. Rout





Ang: Re: [ADSM-L] Strange behaviour...please Help

2011-10-05 Thread Daniel Sparrman
Since the library volume is called L2, the volume should also be named L2 
(library volume name = volume name). If it isn't, you certainly have a weird 
error in the TSM server :)

Best Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: Peter Dümpert 
Sent by: ADSM: Dist Stor Manager 
Date: 10/05/2011 16:05
Subject: Re: [ADSM-L] Strange behaviour...please Help

Robert,
instead of trying
    q vol 01L2 f=d

try the following:

    q vol 01 f=d

i.e. WITHOUT the L2, i.e., the 6-char volser.
I assume the L2 is the appended LTO media-type suffix,
according to Richard Sims' ADSM QuickFacts entry on
    LTO barcode format

Being retired, I can't run dsmadmc any longer, so I can't prove my
assumption.

Regards, Peter

On Tue, 4 Oct 2011, Robert Ouzen wrote:

 Hi all
 
 I run 'q libv i2000lib' on my library; here is some of the output:
 
 tsm: ADSM> q libv i2000lib
 
 Library Name   Volume Name   Status    Owner   Last Use   Home Element
 ------------   -----------   -------   -----   --------   ------------
 I2000LIB       01L2          Private   ADSM    Data       4,150
 I2000LIB       02L2          Private   ADSM    Data       4,121
 I2000LIB       04L2          Private   ADSM    Data       4,110
 I2000LIB       22L2          Private   ADSM    Data       4,132
 I2000LIB       23L2          Private   ADSM    Data       4,116
 I2000LIB       26L2          Private   ADSM    Data       4,131
 I2000LIB       29L2          Private   ADSM    Data       4,100
 I2000LIB       30L2          Scratch                      4,122
 I2000LIB       32L2          Scratch                      4,174
 I2000LIB       34L2          Private   ADSM    Data       4,141
 I2000LIB       35L2          Scratch                      4,171
 I2000LIB       59L2          Private   ADSM    DbBackup   4,128
 I2000LIB       65L2          Scratch                      4,106
 I2000LIB       66L2          Scratch                      4,102
 I2000LIB       71L2          Private   ADSM    Data       4,115
 I2000LIB       72L2          Private   ADSM    Data       4,112
 I2000LIB       75L2          Scratch                      4,164
 I2000LIB       80L2          Private   ADSM    Data       4,098
 I2000LIB       82L2          Scratch                      4,178
 I2000LIB       84L2          Private   ADSM    Data       4,127
 I2000LIB       87L2          Private   ADSM    Data       4,124
 I2000LIB       89L2          Private   ADSM    Data       4,109
 I2000LIB       93L2          Scratch                      4,138
 I2000LIB       97L2          Private   ADSM    Data       4,189
 I2000LIB       000100L2      Private   ADSM    Data       4,105
 I2000LIB       000103L2      Private   ADSM    DbBackup   4,104
 
 But when trying to view more details of some volumes (a few), such as volume 
 01L2, running the command: q vol 01L2 f=d
 
 I got:
 tsm: ADSM> q vol 01L2 f=d
 ANR2034E QUERY VOLUME: No match found using this criteria.
 ANS8001I Return code 11.
 
 I did more investigation on each storage pool using this library, e.g. 
 q vol * stg=I-SAPDB …
 
 No record at all of volume 01L2, and another few volumes with 
 the same behavior, in any storage pool.
 
 How can I eliminate those volumes?
 
 My environment is TSM V5.5.2.0 on AIX-RS/6000
 
 Any help will be really appreciated.
 
 Regards, Robert Ouzen

Ang: [ADSM-L] Defining Multiple filesystems to one FILEDEVCLASS in TSM v5.5

2011-10-05 Thread Daniel Sparrman
a) Yes, you can; you just have to use "," to separate each path/directory in the 
DIRECTORY option.

b) When you need to add a directory, you have to replace the current 
definition. That is, you can't just add a directory to the DIRECTORY option; you 
need to specify all existing directories plus the new directory you want to add.
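A minimal sketch of both points in admin-command form (the device-class name, directories, and size parameters are made up for illustration; "-" is the macro continuation character):

```
/* a) multiple file systems in one FILE device class */
define devclass FILECLASS devtype=file mountlimit=32 maxcapacity=50G -
   directory=/tsm/file01,/tsm/file02

/* b) adding /tsm/file03 later means re-specifying the full list */
update devclass FILECLASS directory=/tsm/file01,/tsm/file02,/tsm/file03
```

Spreading the directories across separate file systems lets TSM balance volumes across spindles, which is the usual "pro"; the "con" is that a full file system can fail volume allocation even while other directories have space.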

Regards

Daniel 


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -
To: ADSM-L@VM.MARIST.EDU
From: David W Daniels/AC/VCU 
Sent by: ADSM: Dist Stor Manager 
Date: 10/05/2011 16:22
Subject: [ADSM-L] Defining Multiple filesystems to one FILEDEVCLASS in TSM v5.5

We have a Red Hat Linux server running TSM server 5.5.  The question
is: can you define multiple file systems to one FILE device class? If so, what
are the pros/cons?

** Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html


Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

2011-10-05 Thread Daniel Sparrman
Hi Remco

Not sure if you're talking about hardware de-dup or TSM de-dup (which uses 
a larger block size due to the load), but:

Relatively small? I've only seen it happen once, but then I live in a 
relatively small market, since I live in Sweden. So you're telling me (based on 
facts) that this hasn't happened elsewhere? I seriously have to disagree. In 
my opinion, I think it's more likely that others that had this issue have 
decided to keep it in the dark. Sweden is a relatively small market, and the 
odds that it would have happened here, but nowhere else, are quite small.

Not sure about the size or anything in your TSM comparison, but I've got 
several TSM customers who have several thousands of billions of objects ... And 
like I said, if the chance is 1 in 1,000,000,000,000, it's much more likely to hit 
you by 1,000,000. It's not a quota that needs to be filled before it hits you; 
it's a random chance.

And, like the customer I had who got it, if it's a very common block getting 
that hash conflict, yes, it will hit you badly, since every file that contains 
that block will be invalid.

I do agree with your comment about TSM v6, though; I'd consider it very stable. 
I'd actually (today, with the amount of checking being done) consider it more 
stable than still being at version 5.5.

Regards

Daniel

Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Remco Post 
Sent by: ADSM: Dist Stor Manager 
Date: 10/05/2011 21:11
Subject: Re: [ADSM-L] vtl versus file systems for pirmary pool

Hi,

I saw last week that about half of the people visiting the TSM Symposium were 
running V6; it's been stable for me so far.

The likelihood of an accidental SHA-1 hash collision is relatively small; even 
compared to the total number of objects that a TSM server could possibly ever 
store during its entire lifetime, it is insignificant. That being said, if you 
think your data is too valuable to even risk that, don't dedup. 


-- 

Gr., Remco

Op 5 okt. 2011 om 19:24 heeft Shawn Drew shawn.d...@americas.bnpparibas.com 
het volgende geschreven:

 Along this line, we are still using TSM 5.5.   Some of the features
 mentioned previously require TSM 6.  TSM 6 still feels risky to me.  Maybe
 more risky than a hash collision.
 Just looking for a consensus: do people think it's mature enough now that
 it is as stable/reliable as TSM 5?
 
 PS. Test restores are the only way to be sure your backups are good.  You
 shouldn't just trust TSM.
 
 Regards,
 Shawn
 
 Shawn Drew
 
 
 
 
 
 From: rrho...@firstenergycorp.com
 Sent by: ADSM-L@VM.MARIST.EDU
 Date: 10/05/2011 11:03 AM
 To: ADSM-L
 Subject: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang:
 Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl
 versus file systems for pirmary pool
 
 
 
 
 
 
 When TSM is duplicating your data (aka backing
 up storage pools), there is no logical connection between your
 primary storage pool and your copypool.
 
 Well . . . yes . . . no . . .
 
 All our eggs are in one basket no matter what.  The logical connection
 between pri and copy pools is TSM itself.  A logical corruption in TSM can
 take out both. Your data could be sitting there on tape and completely
 useless.  Yes, that's why we have TSM db backups, but are they good?  What
 if there is a TSM bug that renders all your backups bad - we don't find
 out until we need it!
 
 At some point you have to trust something.  We all trust TSM.  Yes, we do
 the db backup, create pri and copy pools, use reuse delay . . . everything
 to allow for problems . . . but we are still trusting that TSM works as
 advertised.  A really, really paranoid admin would run two completely
 separate/different backup systems - but who can afford that, or want to?
 But then, we do do that for our biggest SAP/Oracle systems.  We use
 Oracle/RMAN-to-flasharea, RMAN-to-TDPO/TSM, but we also run EMC clone
 backups off our DR site's R2s . . . but also to TSM.
 
 
 Rick
 
 
 
 
 
 
 
 

Ang: [ADSM-L] Migrate TSM server ( Win to Linux)

2011-10-04 Thread Daniel Sparrman
If you don't have space in your library, neither option will work. Both 
require you to export/import data, which in turn requires space for the new TSM 
server in your library.

No way of making space? Perhaps some old data that you can check out of your 
library and keep outside the library? A copy pool you can remove?

If there's no way of making space in the library, it's going to be hard to 
migrate your TSM server from the Windows box to the Linux box; all available 
options require that you do an export/import, since moving the database from 
Windows to Linux isn't supported.
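A hedged sketch of the server-to-server export route (server names, addresses, and the password are made up for illustration):

```
/* on the old Windows server: define the new Linux server as a target */
define server LINUXTSM hladdress=linuxtsm.example.com lladdress=1500 -
   serverpassword=secret

/* then push definitions and data across */
export policy * toserver=LINUXTSM
export node * filedata=all toserver=LINUXTSM
```

Server-to-server export avoids staging export volumes, but the node data still has to land in storage pools on the new server, which is why library space is the limiting factor here.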

Best Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Gibin 
Sent by: ADSM: Dist Stor Manager 
Date: 10/04/2011 08:06
Subject: [ADSM-L] Migrate TSM server ( Win to Linux)

The library we have does support partitioning, but it is packed to capacity, with 
the ability to accommodate just 5 more tapes. So I don't think a secondary library 
for the Linux server will help.

So, Daniel, as you were saying, I will:
1. Set up library manager/client between my new & old TSM servers

2. Import the TSM database/policies/schedules info from the old TSM server to the 
new TSM server

3. Switch the library manager to the new TSM server
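Step 1 maps onto admin commands roughly as follows (the library and server names are made up; drive and path definitions are omitted, and each side still needs those defined):

```
/* on the library manager instance */
define library I3584 libtype=scsi shared=yes

/* on the library client instance */
define server LIBMGR hladdress=libmgr.example.com lladdress=1500 -
   serverpassword=secret
define library I3584 libtype=shared primarylibmanager=LIBMGR
```

With LIBTYPE=SHARED the client instance asks the library manager for mounts, so both the old and new servers can use the one physical library during the migration.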

+--
|This was sent by gibi...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--

Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

2011-10-04 Thread Daniel Sparrman
Not entirely true, since with TSM 6.2 you don't have to use backup stgpool as 
the only resort. The simultaneous-write feature is now available not only for 
client sessions but for server processes as well, so there is nothing saying 
you have to copy all the data with backup stgpool.

And it's not a matter of whether there's a risk of a logical error. The risk 
is there, and if it hits you, you don't lose just a few hours of data. A hash 
conflict would strike across both primary and secondary storage, and it could 
affect a huge amount of data if you're unlucky: a collision on a common hash 
key could hit you all across the board.
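
For scale, the chance of a random collision in a hashed-chunk store can be estimated with the birthday bound. The sketch below is illustrative only; the chunk count and 160-bit (SHA-1) hash width are assumptions, and it says nothing about collisions caused by software bugs:

```python
import math

def collision_probability(n_chunks: int, hash_bits: int) -> float:
    # Birthday bound: p ~= 1 - exp(-n(n-1) / 2^(bits+1))
    return -math.expm1(-n_chunks * (n_chunks - 1) / 2.0 ** (hash_bits + 1))

# e.g. 1 PB deduplicated in 128 KB chunks ~= 8e9 chunks, 160-bit hash space
p = collision_probability(8 * 10**9, 160)
print(f"{p:.2e}")  # vanishingly small for truly random data
```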

So, would you rather have your offsite copy replicated within the hour instead 
of four, at the risk of losing most of it, or let it take four hours but be 
sure you'll be able to recover?

The descriptions around here sometimes scare me (not this one in particular). 
Some people seem to think a 0.1% chance of losing it all is worth the other 
benefits.

If you told your boss (not your IT manager, but your business developer, for 
example) that there's a 0.1% chance you'll lose all your backups and all your 
versions, do you think he'd be OK with it? My guess? He wouldn't even answer...

If you feel replication is such a good way to go, you can always use synchronous 
or asynchronous mirroring instead. You're doing essentially the same thing 
(and you can actually have verification on your mirroring). You're 
syncing/replicating bits and bytes, but there's no way for the 
mirroring/replication to verify the data is actually readable by the 
application, unlike having the application create a secondary copy.

Best Regards and a nice evening

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Shawn Drew 
Sent by: ADSM: Dist Stor Manager 
Date: 10/04/2011 20:17
Subject: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

The other side of this is that with third-party deduplicated replication, my 
offsite copy is rarely more than one hour behind the source copy.  Before 
I moved to this, we would schedule our backup stgpool processes to run once a 
day, and they would have to run several hours before everything was in sync. 
In a real DR situation, we would lose much more data by being up to 24 hours 
out of sync with the old backup stgpool solution.  And it's always the 
recently backed-up data that is the most desirable after a DR situation.

The chances of a real DR situation seem higher to me than a logical error. 
Perhaps I just feel that way after seeing a minor earthquake and a hurricane 
within a couple of weeks of each other.

I have seen a few logical errors and a few DR situations in my career, and 
even in my conservative bank environment, a single pool with third-party 
replication is lower risk than the slow backup stgpool solution.  We still 
keep periodic longer-term backups on tape, so we would have a last-resort 
restore source if there were a logical error, but it is still not a daily 
backup.

In the end, it's a risk trade-off decision that you have to make for 
yourself.  And decide whether you can afford the time that a backup stgpool 
takes each day.
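
The trade-off can be put in rough numbers: the data at risk in a DR event is approximately the daily change rate times the replication lag. The 2 TB/day figure below is an assumed example, not from the thread:

```python
def data_at_risk_gb(daily_change_gb: float, lag_hours: float) -> float:
    # New backup data not yet at the DR site when disaster strikes
    return daily_change_gb * lag_hours / 24.0

daily = 2000.0  # assumed: 2 TB of new backup data per day
print(data_at_risk_gb(daily, 1))   # ~83 GB exposed with a 1-hour replication lag
print(data_at_risk_gb(daily, 24))  # up to 2 TB exposed with a daily backup stgpool
```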


Regards, 
Shawn

Shawn Drew





From: steven.langd...@gmail.com (Internet)
Sent by: ADSM-L@VM.MARIST.EDU
Date: 10/04/2011 02:57 AM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool






The logical-error question has come up before.  With no TSM-managed copy
pool you are perhaps at a slightly higher risk.

An option is to still have a copy pool, but on the same DD.  So little real
disk usage, but some protection from a TSM logical error.  That obviously
does not protect you from a DD-induced one.

FWIW, when we implement VTLs, if the bandwidth allows, we use TSM to
create the copy.  At small sites with limited bandwidth, we rely on the
appliance.

Steven



On 4 October 2011 06:41, Daniel Sparrman daniel.sparr...@exist.se wrote:

  If someone puts a high-caliber bullet through my Gainesville DD, then
  I recover it from the replicated offsite DD, perhaps selecting a
 snapshot.
 
  If someone puts a high-caliber bullet through both of them, then I
  have lost my backups of a bunch of important databases.

 And if you have a logical error on your primary box, which is then
 replicated to your second box? Or, even worse, a hash conflict?

 I don't consider someone putting a bullet through both boxes a high
 risk; I do, however, consider other errors to be a higher risk.

 Best Regards

 Daniel





 Daniel Sparrman
 Exist i Stockholm AB
 Växel: 08-754 98 00
 Fax: 08-754 97 30
 daniel.sparr

Ang: [ADSM-L] Migrate TSM server ( Win to Linux)

2011-10-03 Thread Daniel Sparrman
The simple answer is no.

You can't backup/restore the database (which contains all the information) 
between Windows and Linux. That's why you need to do the export/import: you 
import the information stored in your TSM database into a new, empty database 
on the Linux machine.

I don't know what kind of library you have, but can't you partition it and 
create a secondary logical library for your Linux server?

You could also set up a library manager/library client configuration so that 
your new TSM server has access to the library, and then just switch the library 
manager role to the new TSM server as soon as you've done the export/import step.

Best Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Gibin tsm-fo...@backupcentral.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 10/02/2011 14:01
Subject: [ADSM-L] Migrate TSM server ( Win to Linux)

We are planning to migrate our TSM server V6.2.3 (Win2k3) to a Linux server 
(Red Hat), but the major problem is the availability of a single tape library, 
which is currently used by the TSM production server; all our backup data is 
held in this tape library.


I think one of the ways to go about this would be a server-to-server export, 
but as we have a single tape library that is almost filled to capacity, I was 
wondering if it is possible to have my new Linux TSM server just take control 
of all the tapes/media/database held by the current Windows TSM server without 
actually doing an export/import.


Please let me know your views on this if anyone has tried this out.

Thanks!! :)

+--
|This was sent by gibi...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--

Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

2011-10-03 Thread Daniel Sparrman
 If someone puts a high-caliber bullet through my Gainesville DD, then
 I recover it from the replicated offsite DD, perhaps selecting a snapshot.

 If someone puts a high-caliber bullet through both of them, then I
 have lost my backups of a bunch of important databases.

And if you have a logical error on your primary box, which is then replicated 
to your second box? Or, even worse, a hash conflict?

I don't consider someone putting a bullet through both boxes a high risk; I 
do, however, consider other errors to be a higher risk.

Best Regards

Daniel





Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE


-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: - 
To: ADSM-L@VM.MARIST.EDU
From: Allen S. Rout 
Sent by: ADSM: Dist Stor Manager 
Date: 10/03/2011 23:38
Subject: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

On 09/28/2011 02:16 AM, Daniel Sparrman wrote:

 In this mail, it really sounds like you're using your DD as both
 primary storage and for TSM storage.

I am, right now, using the DD as a target for direct-written database
backups, only.  So that's not really primary storage, as I think
about it.


 If the DD box fails, what are your losses?

If someone puts a high-caliber bullet through my Gainesville DD, then
I recover it from the replicated offsite DD, perhaps selecting a snapshot.

If someone puts a high-caliber bullet through both of them, then I
have lost my backups of a bunch of important databases.



 Sorry for all the questions, I'm just trying to get an idea how
 you're using this box.

No problem. Our conversation is fuzzed by the fact that I am also
talking about how one _might_ use it for TSM storage.  I'm
contemplating it, but not doing it at the moment.

 [ ... if you lose a DD, then ... ] you have to restore the data from
 somewhere else (tape?).


In my planning, the DD gets copied / offsited to a remote DD, so
that's the somewhere else.

- Allen S. Rout

Ang: [ADSM-L] the production date of a cartridge

2011-10-01 Thread Daniel Sparrman
Not sure how long your regulatory rules state that you need to keep the 
archived data on your 3592 cartridges, but:

a) The lifetime of a cartridge (or rather, of the data stored on it) should be 
counted from first use, not from the production date. The physical cartridge 
itself won't break down into dust, but the magnetic coating on the tape will 
sooner or later become unreadable.

b) If you need to keep data for a very long time, I suggest you use a medium 
other than magnetic tape. The lifetime of data stored on tape is quite limited 
compared to MO media or something similar.

Best Regards

Daniel



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Mehdi Salehi ezzo...@gmail.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 10/01/2011 10:36
Subject: [ADSM-L] the production date of a cartridge

Hi,
Is there any way to determine the age of a cartridge? We know the purchase
date, but that does not necessarily mean the cartridge was manufactured
around the same date; maybe it was stored for a long time before we got it.
To be more precise, we have thousands of 3592 cartridges for old J1A
cartridges. In order to make sure the archive data is safe during the period
the regulations state, it is essential to know when a cartridge is physically
dead.

Regards,
Mehdi

Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

2011-09-29 Thread Daniel Sparrman
As it says in the document, it's a recommendation, not a technical limit.

However, having the server running at 100% utilization all the time doesn't 
seem like a healthy scenario.

Why aren't you deduplicating files larger than 1 GB? In my experience, data 
files from SQL, Exchange and the like have a very high dedup ratio, while 
TSM's deduplication already skips files smaller than 2 KB.

I have a customer up north who used this configuration on an HP EVA-based box 
with SATA disks. The disks were breaking down so fast that the arrays within 
the box were in a constant rebuild phase. HP claimed it was TSM dedup that was 
breaking the disks (they actually claimed TSM was writing so often that the 
disks broke), a scenario I find very hard to believe.

Best Regards

Daniel



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Colwell, William F. bcolw...@draper.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/28/2011 20:43
Subject: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

Hi Daniel,

 

I remember hearing about a 6 TB limit for dedup in a webinar or conference 
call, but what I recall is that it was a daily throughput limit.  In the same 
section of the redbook you quote is this paragraph -

 

Experienced administrators already know that Tivoli Storage Manager database 
expiration was one of the more processor-intensive activities on a Tivoli 
Storage Manager server. Expiration is still processor intensive, albeit less 
so in Tivoli Storage Manager V6.1, but it is now second to deduplication in 
terms of consumption of processor cycles. Calculating the MD5 hash for each 
object and the SHA1 hash for each chunk is a processor-intensive activity.

 

I can say this is absolutely correct; my processor is frequently running at or 
near 100%.
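
The hashing work described above can be sketched with Python's hashlib. This is a simplification for illustration: the 64 KB chunk size is an assumption, and TSM actually uses variable, content-defined chunk boundaries rather than fixed offsets:

```python
import hashlib

CHUNK = 64 * 1024  # assumed average chunk size

def dedup_fingerprints(data: bytes):
    # One MD5 over the whole object, plus one SHA-1 per chunk -- this is
    # the per-byte hashing cost the redbook warns about
    whole = hashlib.md5(data).hexdigest()
    chunks = [hashlib.sha1(data[i:i + CHUNK]).hexdigest()
              for i in range(0, len(data), CHUNK)]
    return whole, chunks

md5sum, chunk_ids = dedup_fingerprints(b"\x00" * (256 * 1024))
print(len(chunk_ids))        # 4 chunks for a 256 KB object
print(len(set(chunk_ids)))   # 1: identical chunks share one fingerprint
```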

 

I have gone way beyond 6 TB of storage in dedup storage pools, as this SQL 
shows for the two instances on my server -

 

select cast(stgpool_name as char(12)) as Stgpool, -
   cast(sum(num_files) / 1024 / 1024 as decimal(4,1)) as "Mil Files", -
   cast(sum(physical_mb) / 1024 / 1024 as decimal(4,1)) as "Physical_TB", -
   cast(sum(logical_mb) / 1024 / 1024 as decimal(4,1)) as "Logical_TB", -
   cast(sum(reporting_mb) / 1024 / 1024 as decimal(4,1)) as "Reporting_TB" -
from occupancy -
  where stgpool_name in (select stgpool_name from stgpools where deduplicate = 'YES') -
   group by stgpool_name

 

 

Stgpool       Mil Files  Physical_TB  Logical_TB  Reporting_TB
------------  ---------  -----------  ----------  ------------
BKP_2             368.0          0.0        30.0          95.8
BKP_2X            341.0          0.0        23.9          58.6


Stgpool       Mil Files  Physical_TB  Logical_TB  Reporting_TB
------------  ---------  -----------  ----------  ------------
BKP_2             224.0          0.0        35.7          74.1
BKP_FS_2           49.0          0.0        21.0          45.5

 

 

Also, I am not using any random-access disk pool; all the disk storage is 
scratch-allocated FILE-class volumes.  There is also a tape library (LTO5) 
for files larger than 1 GB, which are excluded from deduplication.

 

 

Regards,

 

Bill Colwell

Draper Lab

 

 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Daniel 
Sparrman
Sent: Wednesday, September 28, 2011 3:49 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file 
systems for pirmary pool

 

To be honest, it doesn't really say. The information is from the Tivoli 
Storage Manager Technical Guide:

 

Note: In terms of sizing Tivoli Storage Manager V6.1 deduplication, we 
currently recommend using Tivoli Storage Manager to deduplicate up to 6 TB 
total of storage pool space for the deduplicated pools. This is a rule of 
thumb only and exists solely to give an indication of where to start 
investigating VTL or filer deduplication. The reason that a particular figure 
is mentioned is for guidance in typical scenarios on commodity hardware. If 
more than 6 TB of real disk space is to be deduplicated, you can either use 
Tivoli Storage Manager or a hardware deduplication device. The 6 TB is in 
addition to whatever disk is required by non-deduplicated storage pools. This 
rule of thumb will change as processor and disk technologies advance, because 
the recommendation is not an architectural, support, or testing limit.

 

http://www.redbooks.ibm.com/redbooks/pdfs/sg247718.pdf

 

I'm guessing it's server-side, since client-side shouldn't use any resources 
on the server. I'm

Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

2011-09-29 Thread Daniel Sparrman
I'm not fully aware of how the DD replicates data, but if you have 15-20 TB/day 
being written to your main DD, and that data is then replicated to the offsite 
DD, how much data is actually replicated?
 
With a 1 Gb/s connection, you could hit values up to 360 GB per hour (expecting 
100 MB/s, which should be theoretically possible, but it's usually lower than 
that on a 1 Gb/s connection), which means about 8.6 TB per 24 hours. So the 
data is both deduplicated and compressed before you send it offsite?
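 
The arithmetic above can be checked quickly. The 80% efficiency factor is an assumption that corresponds to the ~100 MB/s figure in the text:

```python
def daily_capacity_tb(link_gbps: float, efficiency: float) -> float:
    # Gb/s -> GB/s (divide by 8), apply an efficiency factor, scale to 24 h
    gb_per_second = link_gbps / 8.0 * efficiency
    return gb_per_second * 86_400 / 1000.0  # decimal TB per day

print(round(daily_capacity_tb(1.0, 1.0), 1))  # 10.8 TB/day theoretical maximum
print(round(daily_capacity_tb(1.0, 0.8), 1))  # 8.6 TB/day at ~100 MB/s realistic
```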
 
Does the DD do the dedup within the same box, or does it require a separate 
box for dedup?
 
You're also running with the same risk as the previous poster: you're relying 
entirely on your DD setup not breaking. Is this how the DD is sold ("buy two 
DDs, replicate between them and you're safe")? I know it's (as the previous 
poster stated) always a question of cost vs. mitigating risk, but if I got to 
choose, I'd rather have fast restores from my main site and slow restores from 
my offsite, as long as I can restore the data, instead of fast from main and 
fast from offsite but with a chance I might not be able to restore at all.

If DD claims "data invulnerability", I'd really like to see how they achieve 
100% protection, since it would be the first system in the world to actually 
have secured that last 0.0001% of risk ;) RAID was usually secure until 
someone made an error, put in a blank disk and forgot to rebuild :)
 
Best Regards
 
Daniel



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Shawn Drew shawn.d...@americas.bnpparibas.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/28/2011 22:26
Subject: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

We average between 15-20 TB/day at our main site, and that goes directly to 
a single DD890 (no random pool): single pool, FILE devclass, NFS mounted 
over 2x10Gb crossover connections. It replicates over a 1 Gb WAN link to 
another DD890.  (I spent all the money on the DD boxes; I didn't have enough 
left over for 10Gb switches!)

That other DD890 backs up another 7-10 TB/day, replicating to the main site 
(bi-directional replication). 

All with FILE devclasses, and there is not more than a one-hour lag in 
replication by the time I show up in the morning.  TSM doesn't have to do 
replication or backup stgpools anymore, so I can actually afford to do full 
DB backups every day now.  (I was doing an incremental scheme before.)

IBM has a similar recommended configuration with their ProtecTIER solution, 
so they do support a single-pool, backend-replication solution.  Data Domain 
also claims that its data invulnerability architecture should catch any data 
corruption issue as soon as the data is written, and not later, when you try 
to restore.


Regards, 
Shawn

Shawn Drew





From: daniel.sparr...@exist.se (Internet)
Sent by: ADSM-L@VM.MARIST.EDU
Date: 09/28/2011 02:13 AM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool






How many TB of data is common in this configuration? In a large 
environment, where databases are 5-10 TB each and you have a demand to 
back up 5-20 TB of data each night, this would require 10 Gb/s for 
every host, something that would also cost a pretty penny, especially 
since the DD needs to be configured with the throughput to write all 
those TB within a limited amount of time.
 
Does the DD do dedup within the same box (meaning, can I have one box that 
handles normal storage and does dedup), or do I need a second box?
 
The same issue also arises with the file pool: you're moving a lot of 
data around completely unnecessarily every day when you do reclamation. 
 
If I'm right, it also sounds (from your description in the previous 
mails) like you're not only using the DD for TSM storage. That sounds like 
putting all the eggs in the same basket.
 
Best Regards
 
Daniel



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Allen S. Rout a...@ufl.edu
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/27/2011 18:55
Subject: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

On 09/27/2011 12:02 PM, Rick Adamson wrote:


 The bigger question I have is since the file based storage is
  native to TSM why exactly is using a file based storage
  not supported?

Not supported by what?

If you've got a DD, then the simplest way to connect it to TSM is via
files.  Some backup apps require

Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

2011-09-29 Thread Daniel Sparrman
Yepp, we have the same thing with our Sepaton; all deduplication is done 
inline. The reason I asked is that there seem to be other manufacturers who 
need a second box to do deduplication.

Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -

To: ADSM-L@VM.MARIST.EDU
From: Nick Laflamme dplafla...@gmail.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/29/2011 13:34
Subject: Re: [ADSM-L] vtl versus file systems for pirmary pool

On Sep 29, 2011, at 12:30 AM, Daniel Sparrman wrote:

 I'm not fully aware of how the DD replicates data, but if you have 
 15-20TB/day being written to your main DD, and that data is then replicated 
 to the off-site DD, how much data is actually replicated?
 
 With a 1Gbs connection, you could hit values up to 360GB/s hour (expecting 
 100MB/s which should be theoretically possible, but it's usually lower than 
 that on a 1Gbs connections) which means 8.6TB per 24 hours. So the data is 
 both deduplicated and compressed before you send it offsite?

It's certainly de-duped before being replicated; it's probably compressed as 
well, but that's less obvious to me. 

 Does the DD do the dedup within the same box, or require a separate box for 
 dedup?

Same box, as an in-line process. They're very proud of that. 

Nick

 Daniel Sparrman
 Exist i Stockholm AB
 Växel: 08-754 98 00
 Fax: 08-754 97 30
 daniel.sparr...@exist.se
 http://www.existgruppen.se
 Posthusgatan 1 761 30 NORRTÄLJE
 


Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

2011-09-29 Thread Daniel Sparrman
The elephant has left the building.

Do you get the same advanced features by just dumping data onto a DD as you do 
with the TSM TDP clients? Exmerge, anyone? Or perhaps an SQL dump?

You still have to do file backups. Or wait, why not just use robocopy and copy 
everything onto the DD? Or what the heck, just place the file server on the 
DD; that way you don't have to do backups at all, the data is already on the DD.

As for TSM losing data: what tells you that the DD dedup algorithm has never 
lost data? I bet I can prove you wrong.

Well, when the DD hits the wall, at least you won't have to do an fsck, since 
there won't be anything left that needs one.

DD replication is not application-aware, so it does not detect software-based 
discrepancies. That's why I'd never replace TSM's backup storage pool or 
copy pool features with a replicated solution.

If you're OK with replication, why don't you just mirror the solution? (If you 
want the errors to hit both boxes at the same time, make sure to use 
synchronous rather than asynchronous mirroring; god knows, with async you 
might not get the error mirrored in time.)

It's OK to make things easy, but when the shit hits the fan, make sure you 
actually know what you sacrificed (having "I destroyed a datacenter" in your 
resume probably won't make it easier to find a new job).

Scary *shrugs*





Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: robert_clark robert_cl...@mac.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/29/2011 19:34
Subject: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool


The elephants in the room:

It is tempting, once DD gets in the door, to move all database backups (the 
typical TDP/RMAN and SQL LiteSpeed stuff) directly to the DD. (No TSM 
involved, so save money on licenses?)

Combinations that have more advanced communication with the back-end storage 
(OST / Boost / Avamar+DD) may be able to get hints about what is already 
stored on the dedupe device. It seems unlikely that TSM will gain any features 
like this any time soon. (NDMP? VTL? These features are pretty dated.)

Is TSM 6 not losing data via dedupe this week?

How problematic is many TB of data in a FILE device class on file systems 
when it comes time to do an fsck after a system crash?

[RC]

On Sep 27, 2011, at 03:06 PM, Prather, Wanda wprat...@icfi.com wrote:

Actually I have more customers using Data Domains without the VTL license than 
with it.

With a Windows TSM server, you can just write to it via TCP/IP using a CIFS 
share (or an NFS mount with an AIX TSM server).
If you have sufficient TCP/IP bandwidth for your load, no fibre connections 
are needed.
From the TSM point of view, you configure it as a file pool.

You get the benefits of dedup and (if you have a second one at your DR site) 
replication. 
Neither good nor bad, just different.
Very simple setup; works great if it meets your throughput requirements.

W

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Daniel 
Sparrman
Sent: Tuesday, September 27, 2011 2:49 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems 
for pirmary pool

The fact that you actually need to pay for a VTL license is just plain scary.

When you bought it, did they think you were going to use it as a file server? 
I'm not too specialized in Data Domain, but aren't they marketed as backup 
hardware? So you get a disk box, but if you want to use it for something other 
than that, you need to pay a license?

Sorry for sounding bitter, but I've always heard people refer to Data Domain 
as a VTL.



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Allen S. Rout a...@ufl.edu
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/27/2011 18:55
Subject: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

On 09/27/2011 12:02 PM, Rick Adamson wrote:


 The bigger question I have is since the file based storage is
 native to TSM why exactly is using a file based storage  not supported?

Not supported by what?

If you've got a DD, then the simplest way to connect it to TSM is via files. 
Some backup apps require something that looks like a library, in which case 
you'd be buying the VTL license.

FWIW, if you're already in DD space, you're paying a pretty penny. The VTL 
license isn't chicken feed, I agree, but it's not a major component of the 
total cost.


- Allen S. Rout

Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

2011-09-28 Thread Daniel Sparrman
How many TB of data is common in this configuration? In a large environment, 
where databases are 5-10 TB each and you have a demand to back up 5-20 TB of 
data each night, this would require 10 Gb/s for every host, something that 
would also cost a pretty penny, especially since the DD needs to be configured 
with the throughput to write all those TB within a limited amount of time.
 
Does the DD do dedup within the same box (meaning, can I have one box that 
handles normal storage and does dedup), or do I need a second box?
 
The same issue also arises with the file pool: you're moving a lot of data 
around completely unnecessarily every day when you do reclamation. 
 
If I'm right, it also sounds (from your description in the previous mails) 
like you're not only using the DD for TSM storage. That sounds like putting 
all the eggs in the same basket.
 
Best Regards
 
Daniel



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Allen S. Rout a...@ufl.edu
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/27/2011 18:55
Subject: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

On 09/27/2011 12:02 PM, Rick Adamson wrote:


 The bigger question I have is since the file based storage is
  native to TSM why exactly is using a file based storage
  not supported?

Not supported by what?

If you've got a DD, then the simplest way to connect it to TSM is via
files.  Some backup apps require something that looks like a library, in
which case you'd be buying the VTL license.

FWIW, if you're already in DD space, you're paying a pretty penny.  The
VTL license isn't chicken feed, I agree, but it's not a major component
of the total cost.


- Allen S. Rout

Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

2011-09-28 Thread Daniel Sparrman
In this mail, it really sounds like you're using your DD both as primary 
storage and as TSM storage.

If the DD box fails, what are your losses?

Sorry for all the questions; I'm just trying to get an idea of how you're 
using this box. It sounds a lot like you're using it both as a filer and as 
TSM storage. That means if you lose it, you've lost both the primary storage 
(the filer part) and your primary storage within TSM (the storage pool), so 
you'd have to restore the data from somewhere else (tape?).

Best Regards

Daniel



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Allen S. Rout a...@ufl.edu
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/27/2011 22:59
Subject: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for pirmary pool

On 09/27/2011 02:48 PM, Daniel Sparrman wrote:

 When u bought it, did they think you're gonna use it as a
 fileserver?

Well, exactly.

 I'm not to specialized into Data Domain, but arent they marketed as
 backup hardware? So you get a disk, but if you want to use it for
 something else than that, you need to pay a license?

 Sorry for sounding bitter, but I've always heard people referring to
 Data Domain as a VTL.


DD is deduplicated storage, plain and simple.  There are more general
ways to use it, and more specific.

For example: say you dump a MySQL database to disk; where do you put
it?  You can dump daily, compress, stick it in a directory and manage
it.  I do this for several customers.

With a DD in the picture, you NFS-mount a share and dump to NFS.  You
don't compress; you let the DD dedupe and compress thereafter.  The
resulting space occupation is substantially smaller than the bzip2 -9'ed
allocation of the same set of files.
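
The reason not to pre-compress is that compression smears even a tiny change across the whole output stream, so a dedupe device can no longer match the unchanged regions between two dumps. A toy illustration (sizes and chunk boundaries are arbitrary):

```python
import hashlib
import zlib

bulk = b"A" * 100_000                 # the part of the dump that never changes
day1 = b"dump-header day-1\n" + bulk
day2 = b"dump-header day-2\n" + bulk  # one byte of real change

# Raw dumps: the unchanged tail hashes identically, so chunk dedup matches it
tails_match = (hashlib.sha1(day1[-65_536:]).digest()
               == hashlib.sha1(day2[-65_536:]).digest())

# Pre-compressed dumps: the early difference (and the Adler-32 trailer)
# changes the compressed bytes, so the tails no longer match
z1, z2 = zlib.compress(day1), zlib.compress(day2)
ztails_match = z1[-1024:] == z2[-1024:]

print(tails_match, ztails_match)  # True False
```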


Less simple: say you've got an Oracle database.  Anyone else out
there with one of those? ;)  RMAN dumps its output to somewhere.  We
now use a "somewhere" on the DD.  We're currently getting dedupe numbers
in excess of 30:1.

Different use-case: lots of VMware VStorage backups dump their image
files to... somewhere.  Most of them are happy with a CIFS share.
If that's provided by the DD, then it gets deduplicated in a manner
that leverages all the other stuff on the same DD device.


So, these are very general methods: I've got an NFS mount; I've got a
CIFS mount.  They bootstrap common, existing hardware and software.
Your dedupe appliance already talks to the network, so why not make it
talk fast, and then use that for service provision?


Exposing that filesystem via FC requires a bunch of additional code,
and a bunch of additional hardware.  So it's a separate feature, and
you can unbundle it if you like.  We did.

Some applications are unwilling or unable to talk to a share (I'm
lookin' at you, DPM...) so you have to get the extra features to
enable the additional type of fabric connection and the extra protocol
translation.



- Allen S. Rout

Re: [ADSM-L] vtl versus file systems for primary pool

2011-09-28 Thread Daniel Sparrman
Like I said in my previous mail, it all depends on the size of your TSM 
server. Transferring (and storing) smaller amounts of data across the network is 
one thing, but when you sit there with a 10 TB DB2 server that needs a 
full backup at least 2-3 days a week, transferring it across the network will 
never be an option compared to using LAN-free.

The size and cost of a DD able to handle that kind of load would never be 
justified by TSM storage alone, which puts you in a situation where the DD is 
bought for multiple uses (filer, backup and so forth). That means you're 
putting your only lifeline (the backup) in the same box as your primary 
storage (filers, databases). That's a risky situation, considering what would 
happen if the box fails.

In my opinion, TSM storage should be quarantined: if your primary 
storage fails, TSM is what's going to bring you back on track.

Best Regards

Daniel



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Prather, Wanda wprat...@icfi.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/28/2011 00:06
Subject: Re: [ADSM-L] vtl versus file systems for primary pool

Actually, I have more customers using Data Domains without the VTL license than 
with it.

With a Windows TSM server, you can just write to it via TCP/IP using a CIFS 
share (NFS mount with an AIX TSM server).
If you have sufficient TCP/IP bandwidth for your load, no fibre connections are 
needed.
From the TSM point of view, you configure it as a file pool.

You get the benefits of dedup and (if you have a 2nd one at your DR site) 
replication.  
Neither good nor bad, just different.
Very simple setup; works great if it meets your throughput requirements.
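In TSM terms, "configure it as a file pool" comes down to a FILE device class whose DIRECTORY points at the CIFS/NFS mount. The names, sizes and mount point below are illustrative, not taken from Wanda's setup; DEFINE DEVCLASS DEVTYPE=FILE and DEFINE STGPOOL are standard TSM server commands:

```
DEFINE DEVCLASS ddclass DEVTYPE=FILE MAXCAPACITY=50G MOUNTLIMIT=64 DIRECTORY=/mnt/dd
DEFINE STGPOOL ddpool ddclass MAXSCRATCH=500
```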

W

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Daniel 
Sparrman
Sent: Tuesday, September 27, 2011 2:49 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] vtl versus file systems for primary pool


Re: [ADSM-L] vtl versus file systems for primary pool

2011-09-28 Thread Daniel Sparrman
To be honest, it doesn't really say. The information is from the Tivoli Storage 
Manager Technical Guide:

Note: In terms of sizing Tivoli Storage Manager V6.1 deduplication, we currently 
recommend using Tivoli Storage Manager to deduplicate up to 6 TB total of 
storage pool space for the deduplicated pools. This is a rule of thumb only and 
exists solely to give an indication of where to start investigating VTL or filer 
deduplication. The reason that a particular figure is mentioned is for guidance 
in typical scenarios on commodity hardware. If more than 6 TB of real disk space 
is to be deduplicated, you can either use Tivoli Storage Manager or a hardware 
deduplication device. The 6 TB is in addition to whatever disk is required by 
non-deduplicated storage pools. This rule of thumb will change as processor and 
disk technologies advance, because the recommendation is not an architectural, 
support, or testing limit.

http://www.redbooks.ibm.com/redbooks/pdfs/sg247718.pdf

I'm guessing it's server-side, since client-side shouldn't use any resources on 
the server. I'm also guessing you could do 8 or 10 TB, but not 60 TB.

Best Regards

Daniel Sparrman



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Hans Christian Riksheim bull...@gmail.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/28/2011 09:56
Subject: Re: [ADSM-L] vtl versus file systems for primary pool

This 6 TB supported limit for deduplicated FILE pools: does it apply
when one does client-side deduplication only?

Just wondering, since I have just set up a 30 TB FILE pool for this purpose.

Regards

Hans Chr.


Re: [ADSM-L] vtl versus file systems for primary pool

2011-09-28 Thread Daniel Sparrman
It's not so much the amount of data as the number of clients, though, Dave. I 
don't mind having a few nodes backing up straight to the VTL (my database 
servers do just that), but having 1000 nodes mounting 1000 virtual tapes in the 
VTL is what I'm trying to avoid by having a small random pool in front. 
Especially if you're using RESOURCEUTILIZATION on your nodes.
 
Regards
 
Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Ehresman,David E. deehr...@louisville.edu
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/28/2011 14:46
Subject: Re: [ADSM-L] vtl versus file systems for primary pool

I guess it just goes to show that 'your mileage may vary'.  We've been happily 
backing up 3-5 TB a night to VTL as primary storage, with no random disk 
frontend, for four years now.

David


Re: [ADSM-L] vtl versus file systems for primary pool

2011-09-28 Thread Daniel Sparrman
The 6 TB limit (or recommendation, as described) applies when you're using TSM's 
dedup capabilities on the file pool, not when using the DD dedup.

The setup sounds great as long as your TSM system doesn't suffer a logical 
error (since logical errors will be replicated from your primary DD to your 
secondary DD). In that case, you risk losing data. Since TSM verifies the data 
being written between primary and copy pools, duplicating a logical error is 
much less likely with BACKUP STGPOOL than with DD replication (which doesn't 
take TSM logical errors into account). The same would apply if your DD 
environment suffers a hash conflict (not likely to happen, but still a real 
threat).

I'm guessing you're only using your DD setup for TSM then?
 
Best Regards
 
Daniel





Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Rick Adamson rickadam...@winn-dixie.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/28/2011 15:17
Subject: Re: [ADSM-L] vtl versus file systems for primary pool

Really appreciate everyone's feedback.

In an attempt to clear up a few points and questions that Daniel (and others) 
brought up: something that has surprisingly been left out of the conversation 
is DD's replication capability.

Since TSM uses the infamous incremental-forever approach (very well, I may add), 
de-duplication is a plus, but being able to use an efficient method to get the 
data to a DR site as an exact duplicate of your production TSM data is huge. 
Our DD system gets an average 10:1 de-dup ratio on storage across all data 
types. Comparatively, my mainframe counterpart gets about 35:1, which 
emphasizes the job TSM incremental forever does.

If there is a 6 TB limit on file device storage, IBM said nothing about it when 
they approved our design, and I have several pools well above that, and 
performance could not be better. In reference to LAN vs. LAN-free: the only 
data that traverses the LAN is from client to TSM server, except for 
special-needs systems (VMware, Exchange, etc.).

The TSM servers have a 10 Gb FCoE back-end network that is used strictly for 
data transport between TSM and the DD (plus the special cases mentioned above). 
The storage hierarchy uses fibre-attached EMC CLARiiON for the random-access 
disk pools, which then migrate to the DD880, where deduplication and 
compression take place.

Once in the DD880, all data, including the TSM server database backups, is 
replicated to another DD device at our DR facility, where I have virtual TSM 
servers as warm standbys. In case we declare a disaster, or for a BCP test, the 
TSM databases are restored to the warm TSM servers and I'm up and running with 
all of my TSM servers in under an hour (yes, I've tested it). As far as TSM 
support goes, their comment was that they will not support the replication 
component, simply because that piece is DD, not IBM.

Lastly, I guess I always looked at the VTL choice as something additional to 
maintain (define drives, paths, etc.). The file device class for our operation 
just works, and works, and works :)

As always, questions and comments appreciated.

~Rick Adamson
Jax, FL


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Hans 
Christian Riksheim
Sent: Wednesday, September 28, 2011 3:57 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] vtl versus file systems for primary pool


Re: [ADSM-L] Merging nodedata

2011-09-27 Thread Daniel Sparrman
Hi

Instead of merging the nodes before exporting them to the 6.2 server, why don't 
you use the MERGEFILESPACES option of the EXPORT command?
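For a server-to-server export, the syntax Daniel is referring to looks roughly like this (the target server name is made up; MERGEFILESPACES and TOSERVER are real EXPORT NODE parameters, and MERGEFILESPACES folds imported filespaces into a node that already exists on the target instead of creating renamed duplicates):

```
EXPORT NODE node1,node2 FILEDATA=ALL TOSERVER=tsm62srv MERGEFILESPACES=YES
```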

Best Regards

Daniel Sparrman



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Dean Landry deanlan...@nobletrade.ca
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/27/2011 15:47
Subject: [ADSM-L] Merging nodedata

Hi All,

TSM 5.5

Nodename node1 (Archive data pre 2010)
Nodename node2 (Archive data post 2010)

Node1 and node2 are the same server; at some point the nodename was
different, causing two separate sets of archives.

What I want to do is merge these two together, so that all archive data
is under the nodename node2.

I will then be doing a server import/export to TSM 6.2.

I have been over the forums and tried the commands I thought would do it,
but I'm not getting anywhere. Anybody have any ideas or suggestions?

Thanks,
Dean.

Re: [ADSM-L] vtl versus file systems for primary pool

2011-09-27 Thread Daniel Sparrman
Not really sure where the general idea that a VTL will limit the number of 
available mount points comes from.

I'm not familiar with Data Domain, but generally speaking, the number of 
virtual tape drives configured within a VTL is usually in the thousands. Not 
sure why you'd want that many though; I always prefer having a small disk pool 
in front of whatever sequential pool I have, and letting the bigger files 
bypass the disk pool and go straight to the sequential pool.
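That routing can be expressed directly in the pool definition: a small random-access pool whose MAXSIZE sends objects above the threshold straight to the sequential pool behind it (the pool names and the 2 GB cutoff below are just examples; DEFINE STGPOOL with NEXTSTGPOOL, MAXSIZE and the migration thresholds is standard TSM syntax):

```
DEFINE STGPOOL diskpool DISK NEXTSTGPOOL=vtlpool MAXSIZE=2G HIGHMIG=70 LOWMIG=30
```

Files larger than MAXSIZE are stored directly in the next pool in the hierarchy; everything else lands on disk and migrates between the HIGHMIG/LOWMIG thresholds.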

As far as LAN-free goes, the only available option I know of is SANergy. And 
going down that road (concerning both price and complexity) will probably make 
the VTL look cheap.

Not sure what kind of licensing you're talking about concerning the VTL, but I 
assume it's a Data Domain license and not a TSM license?

Best Regards

Daniel Sparrman



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Rick Adamson rickadam...@winn-dixie.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/27/2011 16:52
Subject: Re: [ADSM-L] vtl versus file systems for primary pool

A couple of things that I did not see mentioned here, which I experienced:
for Data Domain, the VTL is an additional license, and it does limit the
available mount points (or emulated drives), where a TSM file-based pool
does not. Like Wanda stated earlier, it depends what you can afford!

I myself have grown fond of the file-based approach: easy to manage, easy
to configure, and you never worry about an available tape drive (virtual
or otherwise). The LAN-free issue is something to consider, but from what
I have heard lately, it can still be accomplished using file-based
storage. If anyone has any info on it, I would appreciate it.

~Rick
Jax, Fl.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Tim Brown
Sent: Monday, September 26, 2011 4:05 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] vtl versus file systems for primary pool

What advantage does VTL emulation on a disk primary storage pool have
compared to a disk storage pool that is non-VTL?

It appears to me that a non-VTL system would not require the daily
reclamation process and would also allow more client backups to occur
simultaneously.



Thanks,



Tim Brown
Systems Specialist - Project Leader
Central Hudson Gas  Electric
284 South Ave
Poughkeepsie, NY 12601
Email: tbr...@cenhud.com mailto:tbr...@cenhud.com
Phone: 845-486-5643
Fax: 845-486-5921
Cell: 845-235-4255




This message contains confidential information and is only for the
intended recipient. If the reader of this message is not the intended
recipient, or an employee or agent responsible for delivering this
message to the intended recipient, please notify the sender immediately
by replying to this note and deleting all copies and attachments.

Re: [ADSM-L] vtl versus file systems for primary pool

2011-09-27 Thread Daniel Sparrman
Just to put an end to this discussion, we're kinda running out of limits here:
 
a) No VTL solution, neither DD, nor Sepaton, nor anyone else, is a replacement 
for random disk pools. It doesn't matter if you can configure 50 drives, 500 
drives or 5000 drives; the way TSM works, you're going to make the system go 
bad, since the system is designed around having random pools in front and 
sequential pools in the back. A sequential device is not going to replace that, 
independent of whether it's a sequential file pool or a VTL (or, for that 
matter, a tape library).
 
b) VTLs were invented because most backup software (I've only worked with TSM, 
Legato and Veritas aka Symantec) is used to working with sequential devices. 
That hasn't changed, and won't change in the near future. VTLs (and the file 
device option) are just a replacement. Performance-wise, VTLs are going to win 
every time compared to a file device; the question you need to ask yourself is: 
do I need the VTL, or can I get along with file devices? According to the TSM 
manual (don't have the link, but if you want I'll find it), the maximum 
supported file device pool for deduplication is 6 TB... so if you're thinking 
of replacing a VTL with a sequential file pool, keep that in mind. The limit 
exists because the amount of resources needed by TSM to do file deduplication 
is limited, or as the manual says, until new technologies are available.
 
The discussion here, where people are actually planning on having just a 
sequential pool (since no one is actually mentioning a random pool in front), 
is plain scary. No sequential device is going to have the time of its life with 
a fileserver serving 50K blocks at a time.
 
So my last 50 cents' worth is:
 
a) Have a random pool in front.
 
b) Depending on the size of your environment, you're either going to go with a 
file pool and use dedup (the limit is 6 TB for each pool; you might not want to 
dedup everything), or you're going to go with a full-scale VTL. The choice here 
is size vs. cost.
 
I've seen a lot of posts here lately about the disadvantages of VTLs... well, 
I haven't seen one this far with mine. I have a colleague who bought a VTL and 
found out he needed another VTL just to do the dedup, since one VTL wasn't a 
supported configuration for dedup. I have another colleague who bought a very 
cheap VTL solution (from a much-mentioned name around here) and ended up with 
the same hashes but different data, leaving him with unrestorable data.
 
Comparing eggs to apples just isn't fair. Different manufacturers of VTLs do 
different things, meaning both performance and availability are completely 
different.
 
Just to sum up, we've had both 3584s and (back in the days) 3575s, and I've 
never been happier with our VTL (and yes, we do restore tests).
 
Best Regards
 
Daniel



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Rick Adamson rickadam...@winn-dixie.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/27/2011 18:02
Subject: Re: [ADSM-L] vtl versus file systems for primary pool

Interesting. Every VTL-based solution, including Data Domain, that I looked at 
had limits on the number of drives that could be emulated, which were nowhere 
near a hundred, let alone a thousand. Perhaps it's time to revisit this.

The license is a Data Domain fee, and a hefty one at that. 

The bigger question I have is: since file-based storage is native to TSM, why 
exactly is using file-based storage not supported?

~Rick


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Daniel 
Sparrman
Sent: Tuesday, September 27, 2011 10:30 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] vtl versus file systems for primary pool


Re: [ADSM-L] vtl versus file systems for primary pool

2011-09-27 Thread Daniel Sparrman
The fact that you actually need to pay a VTL license is just plain scary.

When you bought it, did they think you were going to use it as a fileserver? 
I'm not too specialized in Data Domain, but aren't they marketed as backup 
hardware? So you get a disk, but if you want to use it for anything else, you 
need to pay a license?

Sorry for sounding bitter, but I've always heard people refer to Data Domain 
as a VTL.



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Allen S. Rout a...@ufl.edu
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/27/2011 18:55
Subject: Re: [ADSM-L] vtl versus file systems for primary pool

On 09/27/2011 12:02 PM, Rick Adamson wrote:

 The bigger question I have is: since file-based storage is native to
 TSM, why exactly is using file-based storage not supported?

Not supported by what?

If you've got a DD, then the simplest way to connect it to TSM is via
files.  Some backup apps require something that looks like a library, in
which case you'd be buying the VTL license.

FWIW, if you're already in DD space, you're paying a pretty penny.  The
VTL license isn't chicken feed, I agree, but it's not a major component
of the total cost.


- Allen S. Rout

Re: [ADSM-L] AIXASYNCIO

2011-09-26 Thread Daniel Sparrman
Hi
 
We have a site backing up 20-25 TB of data every night, spread across 3 
different instances running in 2 clusters. We use AIXASYNCIO on all of these 
servers and it certainly is an I/O increase. If you're going to use it, make 
sure you increase the number of AIO servers. You will notice a bit more memory 
being used due to async I/O, but it's well worth the memory.
 
If you're using SAN volumes, another thing to look at is increasing the 
queue_depth of your hdisks (the default is usually too low). If you're using 
raw devices for storage pools, you should also look at lowering the filesystem 
cache, since there's no real point in caching the database volumes, and raw 
devices aren't handled by the filesystem cache.
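On the TSM side, AIXASYNCIO is a dsmserv.opt option; the AIX-side knobs mentioned above look roughly like this. The values are illustrative only, and the exact AIO tunable name can vary by AIX level, so check against your system and your array's queue-depth limits:

```
# dsmserv.opt -- enable async I/O in the TSM server
AIXASYNCIO YES

# AIX: raise the SCSI queue depth on a SAN hdisk, then verify
chdev -l hdisk2 -a queue_depth=32 -P   # -P: takes effect at next reboot/varyon
lsattr -El hdisk2 -a queue_depth
```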
 
Best Regards
 
Daniel Sparrman





Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Pagnotta, Pam (CONTR) pam.pagno...@hq.doe.gov
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/27/2011 01:27
Subject: [ADSM-L] AIXASYNCIO

Hello,

Anyone with experience using the AIXASYNCIO server parameter on a version 6.2 
TSM server running on an AIX v6.1 OS? We back up approximately 5 TB of data 
nightly, and although the performance is not bad, if this would make it better, 
we might have more time for administrative work.

Just looking for some opinions from all the wonderful experience out there.

Thank you,
Pam




Pam Pagnotta
Sr. Systems Engineer
Energy Enterprise Solutions (EES), LLC
Supporting IM-621.1, Enterprise Service Center East
Contractor to the U.S. Department of Energy
Office: 301-903-5508
Email: pam.pagno...@hq.doe.gov
Location: USA (EST/EDT)

Re: [ADSM-L] vtl versus file systems for primary pool

2011-09-26 Thread Daniel Sparrman
I'm trying to figure out your question here:

Are you comparing a primary sequential pool located on a VTL with a primary 
disk pool? In that case, it's like comparing an apple to a pear. Using disk 
pools for long-term storage is not something I'd suggest, since your disk pools 
will get fragmented. Also, sequential write speed is usually faster on a 
sequential pool than on a random-access disk pool.

Or are you comparing a sequential pool located on a VTL with a FILE device 
sequential pool? In that case, you will have reclamation on both pools. The 
difference, however (and I can only speak for our SEPATON VTL), is that on the 
SEPATON there is never any data movement during reclaim, since it's 
application-aware. With the FILE device pool, you will have reclamation, and 
you will have unnecessary data movement across the same disks when doing it.

As for performance, our 2-port SEPATON VTL easily hits 700-800MB/s with 2x4Gb 
ports, and it's a small one. It's upgradable to a total speed of 43.2TB/hour 
(however, you will need a server, network and SAN HBAs that can actually 
achieve that amount of throughput).

As for deduplication, no, a VTL can't do client-side deduplication. However, 
the achieved dedup ratio is a lot higher than you will see with TSM client- or 
server-side dedup. And unless your network is congested, there is really no 
point in doing client-side dedup, since the dedup load isn't placed on the TSM 
server but on the VTL hardware (a congested network is one of the few cases 
where you would want client-side dedup). I've seen other VTLs where you need a 
separate VTL to do dedup. That isn't the case with our VTL, though; it's all 
done within the same box.

On a second note, when using a FILE device pool or a disk pool, you're also 
missing out on the hardware compression offered by a standard tape library or 
VTL. You can use client-side compression, but from experience it's never as 
good as hardware compression, and it puts an unnecessary load on your backup 
clients (which wasn't a problem 10 years ago when working hours were during the 
day; today, however, a lot of systems are online serving customers around the 
clock).
 
My 5 cents worth.
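
For comparison, defining the two pool flavors on the TSM server side looks roughly like this (all names, sizes and paths are illustrative, not from the thread):

```
/* Sequential FILE device class backed by a disk filesystem */
DEFINE DEVCLASS filedev DEVTYPE=FILE MOUNTLIMIT=20 MAXCAPACITY=50G DIRECTORY=/tsm/filepool
DEFINE STGPOOL filepool filedev MAXSCRATCH=200

/* VTL presented to TSM as a SCSI library; drives and paths are defined as for real tape */
DEFINE LIBRARY vtl1 LIBTYPE=SCSI
DEFINE DEVCLASS vtlclass DEVTYPE=LTO LIBRARY=vtl1
DEFINE STGPOOL vtlpool vtlclass MAXSCRATCH=500
```

Both pool types are sequential-access and subject to reclamation; the difference discussed above is where the reclamation data movement actually happens.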
 
Best Regards
 
Daniel Sparrman



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Tim Brown tbr...@cenhud.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/26/2011 22:05
Subject: [ADSM-L] vtl versus file systems for primary pool

What advantage does VTL emulation on a disk primary storage pool have
compared to a disk storage pool that is non-VTL?

It appears to me that a non-VTL system would not require the daily reclamation
process, and would also allow more client backups to occur simultaneously.



Thanks,



Tim Brown
Systems Specialist - Project Leader
Central Hudson Gas & Electric
284 South Ave
Poughkeepsie, NY 12601
Email: tbr...@cenhud.com mailto:tbr...@cenhud.com
Phone: 845-486-5643
Fax: 845-486-5921
Cell: 845-235-4255




This message contains confidential information and is only for the intended 
recipient. If the reader of this message is not the intended recipient, or an 
employee or agent responsible for delivering this message to the intended 
recipient, please notify the sender immediately by replying to this note and 
deleting all copies and attachments.

Re: [ADSM-L] Issue with tape drive - TSM Polling it but can't access it (TSM 5.5.2.0)

2011-09-20 Thread Daniel Sparrman
a) Have you checked that the 3592E is using the correct driver (the IBM device 
driver, not the OS or TSM driver)? This is usually the issue when a tape drive 
can be mounted/unmounted in the OS, but not within TSM.

b) What error are you receiving within TSM when it tries mounting the drive?

c) Have you checked that nothing has happened to the library/drives during 
the outage? When TSM tries mounting a drive, it will first speak to your 
library manager over TCP/IP, and then try reading the drive.

You say that both the drive and the path are online, so what happens during 
use? If the path isn't being set offline and the drive just ends up in a 
polling state, you should see a lot of errors in the actlog.
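
A minimal checklist of admin commands for working through (a)-(c); the library, drive and device names are examples, not from the thread:

```
QUERY PATH f=d                    /* is the server-to-drive path online, and which device? */
QUERY DRIVE lib1 f=d              /* drive state, element address, device type */
QUERY ACTLOG BEGINTIME=-01:00 SEARCH=mount
UPDATE PATH server1 drive1 SRCTYPE=SERVER DESTTYPE=DRIVE LIBRARY=lib1 DEVICE=/dev/rmt0 ONLINE=YES
```

If the OS-level device name changed after the power outage (common after a SAN or library restart), the UPDATE PATH with the new device name is usually what fixes the endless polling.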

Best Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -

To: ADSM-L@VM.MARIST.EDU
From: Minns, Farren - Chichester fmi...@wiley.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/20/2011 11:13
Subject: [ADSM-L] Issue with tape drive - TSM Polling it but can't access it 
(TSM 5.5.2.0)

Hi All

Been ages since I have been on here as I don't really have a lot of dealings 
with TSM these days but this issue has come my way :-/

Basically, a 3592E drive in our 3494 library cannot be seen by TSM (it just 
keeps polling). This has occurred since a power outage at the weekend (don't 
ask).

Now, from the Solaris server, I can mount/unmount a tape with the mtlib 
command, so that would say to me the tape drive itself is OK. But if I 
initiate anything from within TSM, I have no joy.

I have checked that the path and drive are both online.

Can anyone advise on this please?

Regards

Farren Minns 

John Wiley  Sons Limited is a private limited company registered in England 
with registered number 641132.
Registered office address: The Atrium, Southern Gate, Chichester, West Sussex, 
United Kingdom. PO19 8SQ.



Re: [ADSM-L] Exchange Legacy 2007 to VSS 2010 backup performance

2011-08-25 Thread Daniel Sparrman
Ben, not sure if you have counted it in, but have you turned off the mailbox 
history part? We have an Exchange environment running 4x2TB, and the integrity 
check wasn't really the big issue, but with 19,000 users, the mailbox history was.


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Del Hoobler hoob...@us.ibm.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 08/25/2011 22:22
Subject: Re: [ADSM-L] Exchange Legacy 2007 to VSS 2010 backup performance

For the most part, your description is accurate...
but I have a few comments:

- Actually... an integrity check was performed for legacy backups,
  but the Exchange Server did it while it was reading the data,
  so the penalty was not as bad. If the Exchange server found
  an integrity problem while running the check, it would
  fail the backup.

- The integrity check for VSS backups is done after the snapshot,
  and it must read all the pages of the .EDB and .LOG files
  using an Exchange interface to do it

- Microsoft has relaxed their requirement for certain Exchange 2010
  environments. They say: "Database mobility features, including
  Database Availability Groups (DAGs), provide more flexible and more
  reliable database replication. For databases in a DAG that have
  two or more healthy copies, the database consistency
  checking step can even be skipped."
  Source:

http://msdn.microsoft.com/en-us/library/dd877010%28v=exchg.140%29.aspx

- Since VSS operations use the BA client (DSMAGENT) to perform the
  backup of files, you can try increasing the RESOURCEUTILIZATION
  in the DSM.OPT file for the DSMAGENT (not the TDP) to allocate
  more threads to handle the backup load.

- You can also create separate instances for backing up,
  but... if you do this, they should be snapping separate
  volumes. If all your databases or logs occupy the same
  volume, you cannot split them up. In addition, you will
  want to stagger your backup starts so that the snaps of
  the different volumes do not overlap.

- Perform backups from the DAG passive copies so that
  it offloads the impact to production databases.
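
As a sketch, the RESOURCEUTILIZATION change described above goes in the option file of the DSMAGENT node (the BA client that moves the VSS files), not the TDP's; the node name and values below are illustrative:

```
* dsm.opt for the DSMAGENT node on the Exchange server
NODENAME            EXCH01
RESOURCEUTILIZATION 10
* larger TCP buffers can also help sustain throughput on fast links
TCPBUFFSIZE         512
TCPWINDOWSIZE       512
```

RESOURCEUTILIZATION accepts 1-10; higher values let the client open more producer/consumer sessions against the server.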

Thanks,

Del




ADSM: Dist Stor Manager ADSM-L@vm.marist.edu wrote on 08/25/2011
12:24:42 PM:

 From: Ben Bullock bbull...@bcidaho.com
 To: ADSM-L@vm.marist.edu
 Date: 08/25/2011 03:37 PM
 Subject: Re: Exchange Legacy 2007 to VSS 2010 backup performance
 Sent by: ADSM: Dist Stor Manager ADSM-L@vm.marist.edu

 We recently moved from Exchange 2007 to 2010 on our environment.
 Just for a reference, we have about 3.3 TB of exchange data. Our
 TSM server is v5.5.5.0 running on AIX 6.1. The exchange servers
 have v6.1.3.0 of the TDP client, v6.2.2.0 of the BA client, Windows
 2008 R2 SP1 and Exchange 2010 SP1.

 We have found that the transfer of the data runs about the same
 once it starts moving data to TSM, but the VSS parts (which are no
 longer optional) caused our backups to take almost twice as long.

 We weren't able to find anything in the TDP documentation to tell
 us why backups were now taking twice as long, however by looking at
 the IO on the SAN and Exchange logs, we believe we were able to
 determine what was going on. Feel free to chime in if you think our
 assessment is incorrect.

 By default, the TDP causes the Exchange software to do its own
 integrity check of the database every time it does a backup
 (either full, incremental or differential). We found that this
 essentially doubles the time the backup takes, because it seems to
 read/check what it wants to send for the integrity check and then
 read/send the data to TDP/TSM on another pass. So you are
 essentially reading all the bits twice for every backup. It seems
 like a rather inefficient way to run it, but perhaps that's the way
 it has to be done.

 There's a flag you can put in so that the integrity check is not
 called (/SKIPINTEGRITYCHECK), but there is an inherent risk in
 skipping it. Then again, it didn't do this check for the Legacy
 backups and we never had a corruption problem with the backups, but
 YMMV. Currently, a weekly full and daily differentials meet our
 SLA, and we are toying with the idea of leaving the integrity check
 on the fulls (and have it take twice as long as it used to) and
 turning it off for the differentials. Your choice may be different
 depending on your risk assessment.

 Test it out for yourself and let us know if your experiences are
 any different from ours.

 Ben

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On
 Behalf Of Ian Smith
 Sent: Wednesday, August 24, 2011 10:12 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Exchange Legacy 2007 to VSS 2010 backup performance

 Our Mail/GroupWare service is migrating from Exchange 2007

Re: [ADSM-L] Poll: What Windows client features do you install by default

2011-08-24 Thread Daniel Sparrman
Question 1: Journaling, but not OFS as standard.

Question 2: a) I don't keep the journal database on a reboot or service 
shutdown, and since the Windows machines generally don't have an uptime of a 
year or more, each time the machines are rebooted we get a normal incremental 
for free. I've never seen any problem with the journal not backing up all files, though.

Best Regards

Daniel Sparrman



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -

To: ADSM-L@VM.MARIST.EDU
From: Zoltan Forray/AC/VCU zfor...@vcu.edu
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 08/24/2011 16:27
Subject: [ADSM-L] Poll: What Windows client features do you install by default

I would like to get some idea of what features of the Windows client do you 
install, by default, when doing a brand-new client install.

Folks here are just starting to realize the benefits of using the Journaling 
feature (yeah, I know, it is not a new feature - I have been trying to get 
folks to use it, but only recently, when more clients with millions of objects 
began complaining about 12-hour elapsed times to back up 20K files, did they 
start realizing the benefit).

So, do you automatically/by default install the:

A - Journaling
B - Open File Support
C - Both
D - Neither

Speaking of the journal, the docs say you should occasionally run a backup 
without the journal, just to make sure nothing is missed.  How do you do this?

Zoltan Forray
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will never 
use email to request that you reply with your password, social security number 
or confidential personal information. For more details visit 
http://infosecurity.vcu.edu/phishing.html


Re: [ADSM-L] Re: [ADSM-L] Poll: What Windows client features do you install by default

2011-08-24 Thread Daniel Sparrman
What I meant was that I have not set the option to retain the journal database 
when the service is stopped (by a reboot, a restart of the service, or whatever 
the cause is), since I consider that a much higher risk than the journal 
missing certain files. If you retain the database while the service is stopped, 
chances are a lot of files have been changed that the journal didn't take 
notice of.

The option to retain the journal database is set in tsmjbbd.ini. It defaults 
to off, but in some rare cases you might want to retain the database so that 
you don't have to redo a full incremental backup.
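
For illustration, the retain-on-exit setting lives in tsmjbbd.ini; the stanza layout below reflects the 5.x/6.x Windows client, and the drive list is a placeholder, so verify both against your client level:

```
[JournalSettings]
Errorlog=jbberror.log

[JournaledFileSystemSettings]
JournaledFileSystems=C: D:
; keep (1) or discard (0, the default) the journal DB when the service stops
PreserveDbOnExit=1
```

For the occasional full-scan backup the docs recommend, a scheduled or manual `dsmc incremental -nojournal` forces a traditional incremental without touching the journal configuration.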

Best Regards

Daniel Sparrman



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Zoltan Forray/AC/VCU zfor...@vcu.edu
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 08/24/2011 17:00
Subject: Re: [ADSM-L] Re: [ADSM-L] Poll: What Windows client features do you 
install by default

Question - how do you get rid of the journal on reboot?  Is there an option 
to trigger this?


Re: [ADSM-L] How to completely get rid of TSM6.2/DB2 on AIX

2011-08-22 Thread Daniel Sparrman
The installer places files under /var/ibm that need to be removed if the 
installation fails. I'm not in front of a machine at the moment, so I can't give 
you the exact path, but previously, when installations failed, I made sure to 
remove the entire directory containing the configuration files (which tell the 
installer what is installed) and the log files. I believe there are also files 
under /etc that need to be removed, but like I said, I'm at home and have no 
access to any production machines, so I can't give you the exact path.
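
From memory (so verify every path on your own system before deleting anything), the leftovers that typically confuse a TSM 6.2 reinstall on AIX are the installer's Deployment Engine registry and the half-created DB2 instance:

```
# Deployment Engine registry kept by the installer (paths from memory - verify first)
ls /var/ibm/common/acsi /usr/ibm/common/acsi

# List and drop the leftover DB2 instance, then remove its home-directory remnants
/opt/tivoli/tsm/db2/instance/db2ilist
/opt/tivoli/tsm/db2/instance/db2idrop tsminst1
rm -rf /home/tsminst1/sqllib

# DB2 also registers service ports in /etc/services, named after the instance
grep tsminst1 /etc/services
```

The instance name tsminst1 is the common default; substitute whatever the failed install asked you for.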

Best Regards

Daniel Sparrman



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Hans Christian Riksheim bull...@gmail.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 08/22/2011 20:41
Subject: [ADSM-L] How to completely get rid of TSM6.2/DB2 on AIX

Hi,

I've got a problem. An installation failed when creating an instance and I
decided to uninstall and start over. I removed the installation using the
uninstall script provided. But when I ran install.bin the next time it asked
for the password of the instance owner and then failed.

So it seems something is left behind after uninstall confusing the
installer.

Any idea what it might be? Expect reinstalling AIX of course.

AIX61 TL03

Hans Chr.

Re: [ADSM-L] Backupset restore question - different server OS's

2011-08-22 Thread Daniel Sparrman
I've read through your question 3 times now and I'm still trying to figure out 
what you're doing :)

You mention creating backupsets. For me, a backupset is a way of restoring a 
client without having to transfer the data across the LAN. But you're also 
mentioning TSM servers on AIX and Linux respectively. Are you trying to move a 
TSM server from AIX to Linux? I've never heard of anyone transferring client 
data between TSM servers using backupsets; I didn't even know it was possible. 
Did you mix it up and mean export tapes?

I can't answer for AIX and Linux, but AIX and Windows, for example, won't work, 
since the way the OS writes/reads tape labels isn't the same (I tried it and it 
didn't work for me; perhaps someone else was more lucky).

Is there a reason you don't want to do a server-to-server export?

Generally (and I'm only talking from my own experience), tapes, databases and 
normal volumes aren't compatible between OSes. The way they handle tapes is 
just too different.

If you try to explain what you're trying to accomplish, perhaps it's easier to 
help. Or it's just getting late and I'm too tired :)
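
For the record, the server-to-server export alternative looks like this on the old server (server name, password, address and node name are all placeholders; a matching DEFINE SERVER is needed on the target side too):

```
/* on the old (AIX, V5.5) server: define the target, then push the node's data */
DEFINE SERVER newtsm SERVERPASSWORD=secret HLADDRESS=newtsm.example.com LLADDRESS=1500
EXPORT NODE client01 FILEDATA=ALL TOSERVER=newtsm
```

This avoids the cross-OS tape-label question entirely, since the data travels over the server-to-server connection rather than on physical media.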

Best Regards

Daniel Sparrman



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Strand, Neil B. nbstr...@leggmason.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 08/22/2011 20:25
Subject: [ADSM-L] Backupset restore question - different server OS's

I am currently running TSM V5.5 on AIX.  I am setting up a TSM V6.2 server on 
Linux at a new data center.  I would like to use backup sets to transfer client 
data from the old to the new data centers. This backupset data will be used to 
populate the newly built clients - not as a backup data store.  Both the old 
and new data centers have IBM TS1120 drives in their TS3500 libraries.  I don't 
plan to attach each client to tape drives and would perform the backupset 
restore via TSM server.

Does anyone know of, or have experience with, creating a backup set on a TSM V5 
server on AIX and then recovering that backup set to a TSM V6 server on Linux? 
The Linux server would need to generate a TOC from the tape created by the AIX server.

The TSM V6.2 server should be able to work with the backup set.  It is the OS 
tape read/write compatibility that I am unsure of.  I don't currently have a 
Linux box to play with and cannot test this scenario.

Your comment is highly welcomed.

Thank you,
Neil Strand 
Storage Engineer - Legg Mason 
Baltimore, MD. 
(410) 580-7491 
Whatever you can do or believe you can, begin it. 
Boldness has genius, power and magic. 


IMPORTANT:  E-mail sent through the Internet is not secure. Legg Mason 
therefore recommends that you do not send any confidential or sensitive 
information to us via electronic mail, including social security numbers, 
account numbers, or personal identification numbers. Delivery, and or timely 
delivery of Internet mail is not guaranteed. Legg Mason therefore recommends 
that you do not send time sensitive 
or action-oriented messages to us via electronic mail.

This message is intended for the addressee only and may contain privileged or 
confidential information. Unless you are the intended recipient, you may not 
use, copy or disclose to anyone any information contained in this message. If 
you have received this message in error, please notify the author by replying 
to this message and then kindly delete the message. Thank you.

Re: [ADSM-L] Help tracking down spurious Failed 12 errors

2011-08-17 Thread Daniel Sparrman
"Failed 12" is a common error when TSM fails to back up a few files. Usual 
reasons are that a file is locked, a file was removed during the backup, or the 
account running the TSM scheduler did not have access to a file.

When you looked through your dsmerror.log and dsmsched.log, couldn't you see 
any failed files at all? Nor any issues with backing up specific filespaces 
(such as parts of systemstate)?
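
Skipped-file causes like these normally leave ANS...E message lines behind; a quick way to spot-check a scheduler log (the log content below is a made-up sample, not real TSM output for any one client):

```shell
# Write a made-up dsmsched.log sample, then count the error-class (E-suffix) messages
cat > /tmp/dsmsched.log <<'EOF'
ANS1898I ***** Processed 5,000 files *****
ANS4007E Error processing '/etc/shadow': access to the object is denied
ANS1228E Sending of object '/var/lock/subsys/x' failed
ANS1802E Incremental backup of '/' finished with 2 failure(s)
EOF
grep -c 'ANS[0-9]\{4\}E' /tmp/dsmsched.log    # counts only the E (error) messages
```

An informational ANS...I line is not counted, so a zero here on a real log supports the "no failed files visible" observation above.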

Best Regards

Daniel Sparrman



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Lee, Gary D. g...@bsu.edu
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 08/17/2011 12:42
Subject: Re: [ADSM-L] Help tracking down spurious Failed 12 errors

Do a "q act s=clientname" for the backup times in question.
You may see something there.

Also, check the windows event log if the clients are windows.

 


Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310

 
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Davis, 
Adrian
Sent: Wednesday, August 17, 2011 4:41 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Help tracking down spurious Failed 12 errors

When using "query event" to check my scheduled backups, I seem to be getting 
"Failed 12" errors for a couple of backups which appear to have been 
successful (this occurs every day for the same servers).

I've checked the client schedule and error logs - but there is no record of an 
error and all messages are what I would expect to see for a successful backup.

Any ideas where else I should look?

Many Thanks,
   =Adrian=

DISCLAIMER

This message is confidential and intended solely for the use of the
individual or entity it is addressed to. If you have received it in
error, please contact the sender and delete the e-mail. Please note 
that we may monitor and check emails to safeguard the Council network
from viruses, hoax messages or other abuse of the Council's systems.
To see the full version of this disclaimer please visit the following 
address: http://www.lewisham.gov.uk/AboutThisSite/EmailDisclaimer.htm

For advice and assistance about online security and protection from
internet threats visit the Get Safe Online website at
http://www.getsafeonline.org

Re: [ADSM-L] Re: [ADSM-L] Help tracking down spurious Failed 12 errors

2011-08-17 Thread Daniel Sparrman
The most common reason I see Failed 12 at our customers is when parts of the 
systemstate backup fail.

I just checked a customer's TSM server, and they had several Windows boxes 
failing with Failed 12. Going through the activity log, there isn't actually an 
error, but I can see that the client is backing up 6 parts of the systemstate 
object. The result from the backup says 4 objects were successfully backed up 
and 2 objects failed. Unfortunately, the activity log doesn't tell me which 
parts of the systemstate backup failed, only that it tried backing them up. I 
have no access at the moment to the dsmerror.log / dsmsched.log, so I cannot 
tell you what kind of error (if any) to look for.

Best Regards

Daniel Sparrman



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Andrew Raibeck stor...@us.ibm.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 08/17/2011 14:10
Subject: Re: [ADSM-L] Re: [ADSM-L] Help tracking down spurious Failed 12 
errors

 Failed 12 is a common error when TSM failed backuping a few files.
 Usual reasons are the file is locked, the file was removed during
 backup and the account running the TSM scheduler did not have access
 to the file.

In the cases you mention, I would expect the return code to be 4, not 12,
provided that there are no other warnings or errors that would cause a
higher return code. RC 4 is very specific: except for some skipped files,
the backup was otherwise successful.

Best regards,

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Product Development
Level 3 Team Lead
Internal Notes e-mail: Andrew Raibeck/Hartford/IBM@IBMUS
Internet e-mail: stor...@us.ibm.com

IBM Tivoli Storage Manager support web page:
http://www.ibm.com/support/entry/portal/Overview/Software/Tivoli/Tivoli_Storage_Manager


Re: [ADSM-L] Re: [ADSM-L] Re: [ADSM-L] Help tracking down spurious Failed 12 errors

2011-08-17 Thread Daniel Sparrman
I fully agree with you, Richard, and we do have central log storage at this 
customer. However, in this case it's simply a matter of my only being able to 
access the TSM server via remote HTTP (I'm not onsite) ;)

Best Regards

Daniel Sparrman


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Richard Sims r...@bu.edu
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 08/17/2011 14:59
Subject: Re: [ADSM-L] Re: [ADSM-L] Re: [ADSM-L] Help tracking down 
spurious Failed 12 errors

On Aug 17, 2011, at 7:34 AM, Daniel Sparrman wrote:

 ... I have no access to the dsmerror.log / dsmsched.log so I cannot tell you 
 what kind of error (if any) to look for.
 

This is a common situation at most sites, where the TSM administrator is not 
afforded access to client systems, and yet is usually the only person with the 
knowledge to make sense of TSM processing issues.

Sites might consider instituting a standard client schedule model where a 
POSTSchedulecmd transmits a copy of the dsmerror.log and dsierror.log to a 
central location which allows such review by the TSM specialist.  Conveyance 
might be via scp, NFS, or even a 'dsmc archive' with a modest retention period 
and Set Access permission.  This should be an easy sell in most environments, 
given that logs are typically ignored by client administrators, and yet often 
reveal files which everyone thinks are being backed up but never are (as in 
Linux LANG/locale conflicts).

 Richard Sims

Re: [ADSM-L] The volume has data but I get this: ANR8941W

2011-08-14 Thread Daniel Sparrman
It's either corrupt or actually blank, meaning that TSM has written data on it, 
but the hardware didnt actually write the data (common when using an incorrect 
FC driver for example).

The q volume command only gathers data from the database, thus not proving 
that the cartridge actually contains data. I'm guessing you didnt do 
checkl=barcode when auditing the library. That means that TSM actually tried 
reading the data on the cartridge, but failed.

Best Regards

Daniel Sparrman



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Mehdi Salehi ezzo...@gmail.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 08/14/2011 08:15
Subject: [ADSM-L] The volume has data but I get this: ANR8941W

Hi,
After audit library:
ANR8941W The volume from slot-element 4098 in drive LTODRIVE0 (/dev/rmt0) in
library LIB3310 is
blank.

But the volume is not only labeled, it also has data:

Volume Name      Storage        Device        Estimated    Pct     Volume
                 Pool Name      Class Name    Capacity     Util    Status
---------------  -------------  ------------  -----------  ------  ------
TS0045           POOL3310       CLASS3310     919,886.1    92.5    Full

Does it mean the volume is corrupted?

Thanks.

Re: [ADSM-L] The volume has data but I get this: ANR8941W

2011-08-14 Thread Daniel Sparrman
Hi

What result does audit vol give you?

No label on the tape / blank tapes usually points towards an error in the FC 
HBA driver: TSM and the tape driver think they are writing data to the tape, 
but the FC HBA driver does nothing. I've seen it before - a customer tried 
using JNI FC HBAs under AIX, but the driver didn't have the correct FC tape 
level support.

If the tape is really empty, there's not much you can do except a delete vol 
XXX discard=yes, then re-label the tape with checkin=scratch.
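
Spelled out as admin commands, that recovery sequence looks roughly like this. The volume (XXX) and library (LIBNAME) names are placeholders, and the checkout step is included because a defined volume usually has to leave the library inventory before relabeling:

```
# Remove the volume and its (unreadable) inventory from TSM:
delete volume XXX discard=yes
# Eject it from TSM's library inventory, then relabel and check in as scratch:
checkout libvolume LIBNAME XXX remove=no
label libvolume LIBNAME XXX checkin=scratch overwrite=yes
```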

Best Regards

Daniel
 


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Mehdi Salehi ezzo...@gmail.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 08/14/2011 11:11
Subject: Re: [ADSM-L] The volume has data but I get this: ANR8941W

Hi Steven,
The volume cannot be labeled:
ANR8816E LABEL LIBVOLUME: Volume TS0045 in library LIB3310 cannot be labeled
because it is currently defined in a storage pool or in the volume history
file.

Re: [ADSM-L] The volume has data but I get this: ANR8941W

2011-08-14 Thread Daniel Sparrman
TSM doesn't allow you to re-label tapes, or change their status to scratch, 
while the volume still has an inventory.


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Steven Langdale steven.langd...@gmail.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 08/14/2011 10:09
Subject: Re: [ADSM-L] The volume has data but I get this: ANR8941W

That error is saying that there is no label. As you have nothing to lose,
you could always check it out and back in again with a 'label libvol ...'
and see what happens.

Steven

On 14 August 2011 09:02, Mehdi Salehi ezzo...@gmail.com wrote:

 Thanks Daniel,
 Yes, what the TSM database shows means that the volume contains data. We found
 this problem when a client tried to restore their data, but part of it was
 unavailable! As the data is not critical, we have not implemented a backup
 pool, which means backup data loss :(


Re: [ADSM-L] performance issue with commonstore for sap and tsm 6.2.x

2011-08-11 Thread Daniel Sparrman
The Commonstore log doesn't really say much, since there can be a number of 
reasons unrelated to TSM why Commonstore is pausing after every 2nd document.

In what way are you seeing the performance decrease when importing data from 
another server? How big is the difference in time or MB/s? Have you tried 
importing exactly the same data earlier? Or are you comparing two different 
imports?

Are the backups also affected, or have you only noticed the performance issue 
with Commonstore and importing from the other TSM server?

You didn't mention the version of the TSM API on your Commonstore machine.

As a side note, data archiving from Commonstore directly to TSM works great, 
since you're only storing large BLOBs from SAP which are never checked or 
accessed from SAP unless SAP needs to bring back old archived table data. 
However, when using Commonstore for document management of SAP (that is, 
storing individual documents), I'd recommend having a content system in 
between, such as IBM CM OnDemand or IBM CM. SAP regularly checks the archive 
for availability of documents, and storing them directly in TSM might be a 
potential performance issue.

Best Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -

To: ADSM-L@VM.MARIST.EDU
From: TSM t...@profi-ag.de
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 08/11/2011 11:34
Subject: [ADSM-L] performance issue with commonstore for sap and tsm 6.2.x

hello,

with new tsm server 6 (now the newest version 6.2.3 on windows 2008) we
see performance issues on archiving sap data with commonstore (cssap)

here is a part of the cssap log

18:44:25; 0; 7827;  ARCHIVE;A3;28.751;14.306;06076;
;4E42810B97C906E5E10080010AFAFA3D;;0.0;data;;;application/pdf;D:\CSSAP\temp\cs_HTTP60145.tmp;52682.0;
18:44:25; 0; 7828;  ARCHIVE;A3;28.751;14.290;03028;
;4E416BE5170D67BFE10080010AFAFA3D;;0.0;data;;;application/pdf;D:\CSSAP\temp\cs_HTTP60146.tmp;16084.0;
18:44:41; 0; 7829;  ARCHIVE;A3;29.469;15.148;06076;
;4E4283AC97C906E5E10080010AFAFA3D;;0.0;data;;;application/pdf;D:\CSSAP\temp\cs_HTTP60148.tmp;120799.0;
18:44:41; 0; 7830;  ARCHIVE;A3;29.469;15.148;03028;
;4E416BF7170D67BFE10080010AFAFA3D;;0.0;data;;;application/pdf;D:\CSSAP\temp\cs_HTTP60147.tmp;49075.0;
18:44:55; 0; 7832;  ARCHIVE;A3;29.500;14.337;03028;
;4E42875997C906E5E10080010AFAFA3D;;0.0;data;;;application/pdf;D:\CSSAP\temp\cs_HTTP60150.tmp;79822.0;
18:44:55; 0; 7831;  ARCHIVE;A3;29.531;14.352;06076;
;4E42875097C906E5E10080010AFAFA3D;;0.0;data;;;application/pdf;D:\CSSAP\temp\cs_HTTP60149.tmp;16420.0;
18:45:09; 0; 7834;  ARCHIVE;A3;28.579;14.227;06076;
;4E30DFA40FF92F32E10080030AFAFA3D;;0.0;data;;;application/pdf;D:\CSSAP\temp\cs_HTTP60151.tmp;23139.0;
18:45:09; 0; 7833;  ARCHIVE;A3;28.595;14.227;03028;
;4E428ACC97C906E5E10080010AFAFA3D;;0.0;data;;;application/pdf;D:\CSSAP\temp\cs_HTTP60152.tmp;109633.0;


Interesting: after archiving 2 files, there is a pause of 13 to 16 seconds
every time, so the import of a few thousand files takes about 6 hours
or more.


environment
IBM CommonStore for SAP - Server 8.4.0.13, installed on tsm server
TSM Server 6.2.3 , target destination for cssap archives = diskpool

We also see performance issues when importing other data from another TSM
server.
Reorg of the TSM database is working.
We do not see any performance issues with migrate, copy, or anything else.

any ideas?

with best regards
stefan savoric


Re: Restore trouble on virtual W2K3 server.

2011-06-16 Thread Daniel Sparrman
Some more information on what happens during boot would help in resolving 
the issue.

Is the new VM identical to the old one? (SCSI adapter, VM version)?

Are you restoring to the same ESXi server or to a different one?

What is the error you're getting when trying to boot the newly restored VM?

Regards

Daniel



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Bo Carsten Krogholm Nielsen bo...@dongenergy.dk
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 06/16/2011 13:52
Subject: Restore trouble on virtual W2K3 server.

Hi all,

I have restored a W2K3 server in a VM environment, but now it will not back up.
It is a new server, where I restored the backup from another server.
1. Restored C-drive
2. SystemState restored.

Then boot the server, which then stops during boot.
What went wrong?


Have fun

Bo Nielsen
Senior Technology Consultant

DONG Energy A/S
Klædemålet 9
2100 København Ø
Denmark

Tlf.:  +45 9955 
Mobile +45 9955 5434

bo...@dongenergy.dk
www.dongenergy.dk

Re: [adsm] Re: TSM 6.x and HADR

2011-06-16 Thread Daniel Sparrman
Since Wanda said she didn't have any in-house competence on DB2, I will again 
point out that if you're going to use HADR with your TSM server, make sure 
you have someone in-house who can diagnose/troubleshoot a DB2 server, or 
someone you can hire.

As long as it works, monitoring (as people have pointed out) is a simple thing. 
Both Veritas and Tivoli support monitoring it (as well as Nagios, I believe), 
and writing scripts that check for rc=0 isn't much of a task. However, if 
something goes wrong (and hopefully it doesn't), you'll need someone who knows 
DB2 to sort things out. It's not just a matter of starting your HADR-connected 
DB2 servers back up again.

I've yet to see a TSM server ending up with a split-brain, but I've seen 
several large DB2 servers in that scenario, and it's not a fun thing to try to 
sort out.

Best Regards

Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Francisco Molero fmol...@yahoo.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 06/16/2011 17:24
Subject: Re: [adsm] Re: TSM 6.x and HADR

This is a very good document - the installation is easy and it works 
perfectly:

https://www.ibm.com/developerworks/wikis/display/tivolistoragemanager/Electronic+vaulting+using+deduplicated+remote+copy+storage+pools#Electronicvaultingusingdeduplicatedremotecopystoragepools-HADRconfigurationinformation




From: Lloyd Dieter ldie...@rochester.rr.com
To: ADSM-L@VM.MARIST.EDU
Sent: Thursday, June 16, 2011 15:41
Subject: Re: [adsm] Re: TSM 6.x and HADR

I've been playing with this recently...as Daniel indicated, it's pretty
easy to set up.

db2pd -hadr -db tsmdb1

Run from either the primary or secondary will give the status of the
connection, and which log files it's working on.

Failing over from primary to secondary is also straightforward.  Where
I ran into a problem was trying to fail back... that didn't seem to be as
easy.

I don't recall the exact steps, but I think what I did was to stop DB2
on both the primary and secondary, then restart DB2 and HADR on the
secondary as standby.

On the primary, I tried to start HADR as primary, and it wouldn't start.
I wound up doing a 'db2 rollforward db tsmdb1 to end of logs and complete',
after which it came back up, and I was able to get it to resume its role
as primary.
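
For reference, the takeover commands involved look roughly like this (a sketch; the exact failback sequence described above should be rehearsed in your own environment before you rely on it):

```
# Planned role switch (run on the current standby):
db2 takeover hadr on db TSMDB1
# Unplanned takeover when the primary is unreachable:
db2 takeover hadr on db TSMDB1 by force
```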

-Lloyd



On 06/15/2011 10:51 AM, Prather, Wanda wrote:
 I'm interested in hearing from folks using it.

  From the presentation, I am uneasy at all the cmd-line DB2 setup commands 
  required to use it, and wonder if it's suitable for a shop with no in-house 
  DB2 expertise.

 Once it's set up, how much time/expertise does it take to monitor/manage it?
 In fact, how do you monitor it at all?



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
 Steven Langdale
 Sent: Tuesday, June 14, 2011 3:19 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] TSM 6.x and HADR

 I might be hijacking the thread (excuse me) so I'll change the subject too.
 Is there already an official announcement of TSM 6.x and replicating
 the database by means of HADR?

 It is supported:
 https://www-304.ibm.com/support/docview.wss?uid=swg27021382wv=1
 But it aint free, you have to purchase a DB2EE license for it.

 As for who is using it, I'm sure I recall someone on the mail list was doing 
 it a few weeks back.

 Steven


Re: TSM 6.x and HADR

2011-06-15 Thread Daniel Sparrman
The work to set it up really isn't that bad.

You set up the main TSM database, take an offline backup of it, and restore it 
to your secondary (standby) DB2 instance.

You then configure the following parameters on each database (you can see the 
current values with 'db2 get db cfg for TSMDB1'):

HADR_LOCAL_HOST = hostname of the server you're configuring
HADR_REMOTE_HOST = hostname of the other DB2 server
HADR_LOCAL_SVC = Port to use on the host you're configuring
HADR_REMOTE_SVC = TCP/IP port to use on the other DB2 server
HADR_REMOTE_INST = Name of the DB2 instance on the other DB2 server

There are also a few optional configuration parameters on the database:

HADR_SYNC_MODE = The default is NEAR_SYNC; you can change it to SYNC or ASYNC. 
For a description of the alternatives, see: 
http://publib.boulder.ibm.com/infocenter/db2luw/v9/index.jsp?topic=/com.ibm.db2.udb.admin.doc/doc/c0021056.htm

HADR_TIMEOUT_VALUE = Should be left at the default, or lowered. Tells DB2 when 
the primary node should be considered down and a forced HADR takeover should be 
done.

When the above configuration is set, you start the standby node first by 
doing:

db2 start hadr on TSMDB1 as standby

and then follow up by starting the primary node (on the server where TSM is 
supposed to be active initially):

db2 start hadr on TSMDB1 as primary
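
Taken together, the steps above amount to something like the following on each node. The hostnames, ports, and instance name below are placeholders, not recommendations:

```
# Example values only -- substitute your own hosts, ports, and instance:
db2 update db cfg for TSMDB1 using HADR_LOCAL_HOST  tsmsrv1
db2 update db cfg for TSMDB1 using HADR_REMOTE_HOST tsmsrv2
db2 update db cfg for TSMDB1 using HADR_LOCAL_SVC   51012
db2 update db cfg for TSMDB1 using HADR_REMOTE_SVC  51012
db2 update db cfg for TSMDB1 using HADR_REMOTE_INST tsminst1

# Repeat on the other node with LOCAL/REMOTE swapped, then:
db2 start hadr on db TSMDB1 as standby    # on the standby node first
db2 start hadr on db TSMDB1 as primary    # then on the primary
```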

There are several gotchas when using HADR. For example, if you end up in a 
situation where both nodes are down and you start the wrong node first as 
primary, you might end up in a split-brain, i.e. two nodes that both think 
they have the primary copy.

There are several ways of monitoring HADR, and it all depends on what kind of 
monitoring tools you're using today. I've seen HACMP handling HADR through 
scripting, Veritas handling it with Veritas Cluster, and other solutions.

I wouldn't recommend using HADR if your shop has no DB2 competence at all. Not 
because it's hard to set up, but because if something happens, you might find 
yourself in a situation where the lack of DB2 competence is a severe issue.

Best Regards

Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Prather, Wanda wprat...@icfi.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 06/15/2011 16:51
Subject: Re: TSM 6.x and HADR

I'm interested in hearing from folks using it.

From the presentation, I am uneasy at all the cmd-line DB2 setup commands 
required to use it, and wonder if it's suitable for a shop with no in-house 
DB2 expertise.  

Once it's set up, how much time/expertise does it take to monitor/manage it?
In fact, how do you monitor it at all?



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Steven 
Langdale
Sent: Tuesday, June 14, 2011 3:19 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM 6.x and HADR


 I might be hijacking the thread (excuse me) so I'll change the subject too.
 Is there already an official announcement of TSM 6.x and replicating 
 the database by means of HADR?

 It is supported:
https://www-304.ibm.com/support/docview.wss?uid=swg27021382wv=1
But it ain't free; you have to purchase a DB2 EE license for it.

As for who is using it, I'm sure I recall someone on the mail list was doing it 
a few weeks back.

Steven

Re: Identify Duplicates Idle vs Active state?

2011-05-25 Thread Daniel Sparrman
Hi

Have you tried using 

select * from processes where PROCESS='Deduplication' and status like '%idle%'

and checking the RC status of that? I don't run dedup on my test server, but I 
think the process is called Deduplication in the processes table.
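
A sketch of how that check could gate the next step in a server script; the process name, the status text, and the follow-on command are all assumptions to verify against your own 'query process' output first:

```
/* Hypothetical TSM server script fragment: proceed only when idle */
select * from processes where process='Identify Duplicates' and status like '%idle%'
if (rc_ok) goto nextwork
exit
nextwork:
backup stgpool deduppool copypool   /* placeholder for the next set of work */
```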

Regards


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -

To: ADSM-L@VM.MARIST.EDU
From: Vandeventer, Harold [BS] harold.vandeven...@da.ks.gov
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 05/24/2011 20:59
Subject: Identify Duplicates Idle vs Active state?

I'm working up scripting for our TSM 6.2 system where dedup will be implemented.

Is there a way to test for the IDLE state of an IDENTIFY DUPLICATES process?

I'd like to have our script test for the idle state to allow the next set of 
work to proceed as soon as possible.

We've used IF(RC_OK) in TSM 5.x scripts to test for upper(process) = BACKUP 
STORAGE POOL or upper(session_type) = NODE, but I don't see a way to detect 
the idle vs. active state on identify duplicates processes.

Thanks...



Harold Vandeventer
Systems Programmer
State of Kansas - DISC
harold.vandeven...@da.ks.gov
(785) 296-0631


Re: VM restore failed..

2011-05-18 Thread Daniel Sparrman
Hi

Is the restore being made to an identical environment (same datastores, same 
virtual switches, same port groups) or a dissimilar one?

Best Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -

To: ADSM-L@VM.MARIST.EDU
From: Bo Carsten Krogholm Nielsen bo...@dongenergy.dk
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 05/18/2011 15:11
Subject: VM restore failed..

Hi,

When I try to restore a VM backup made with the BA client,
the restore fails with the following error: ANS1416E Creating a Virtual 
Machine, but the datastore was not found.

From the opt file:


VMBACKUPTYPE FULLVM
VMFULLTYPE VSTOR
VMCHOST xx.x.dk
VMCUSER de-prod\userAccount
VMCPW 
DATEFORMAT 5
DOMAIN.VMFULL VM=tsttsm01;


What am I missing?


Have fun

Bo Nielsen
Senior Technology Consultant
DONG Energy A/S

DONG Energy A/S
Klædemålet 9
2100 København Ø
Denmark

Tlf.:  +45 9955 
Mobile +45 9955 5434

bo...@dongenergy.dk
www.dongenergy.dk


Re: tdp for domino LAN free backup poor performance

2011-05-18 Thread Daniel Sparrman
Hi Sandeep

If your hardware is working correctly, and tape operations from the TSM server 
show good performance, I can only imagine two possible reasons for the low 
throughput:

a) The disks connected to your Domino server aren't able to push the necessary 
MB/s to the tape drives, thus creating start/write/stop/rewind/write/stop/rewind 
(and so on) sequences on your tape drives, a.k.a. the shoeshine effect.
b) Your Domino server contains a lot of small files. LAN-free to tape drives is 
only a benefit when transferring large chunks of data at a time, since all the 
metadata goes over the LAN anyway. So having a large number of small files is 
certainly going to give you shoeshine on your tape drives. This only applies to 
LAN-free to tape drives; with a file device class you won't suffer throughput 
loss, but the metadata will still be sent over the LAN.

Best Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -

To: ADSM-L@VM.MARIST.EDU
From: Sandeep Jain sandeep.j...@dcmds.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 05/18/2011 09:01
Subject: Re: tdp for domino LAN free backup poor performance

Hi Daniel

TDP through the LAN takes 38 hours and LAN-free takes 30 hours.
I am also working on the OS/hardware performance factor; I too suspect that 
the read I/O is slow...

The server is an HP ProLiant DL180 G6 running Windows 2003 64-bit:

OS: WIN2k3 64-bit Ent Ed.
Lotus: 8.5.2
TDP for mail: 5.5.3
Storage agent: 5.5
Version of the ibmtape device driver on the Domino Windows host that controls 
the tape drives: 6.2.1.5

Regards
Sandeep


- Original Message - 
From: Daniel Sparrman daniel.sparr...@exist.se
To: ADSM-L@vm.marist.edu
Sent: Wednesday, May 18, 2011 10:20 AM
Subject: [ADSM-L] Re: tdp for domino LAN free backup poor performance


 Hi

 First off, there are several things that affect the performance except for 
 just the optfile, software versions and your operating system:

 * What speed are you having making the backup over the LAN?
 * How fast are the disks on your Domino server, are they able to transfer 
 data at the speeds your tape drives are writing?
 * How large are the files on your Domino server (average) ?
 * What's the version of the IBMtape driver on your Domino host? The 
 version on your TSM server (which controls the library) really doesnt 
 affect read/write performance since it only controls the robotics.

 Have you verified that the backup is actually being done across the SAN 
 and not the LAN? 700GB in 30 hours is under 7MB/s, which looks more like a 
 slower-than-average LAN backup over gigabit ethernet.

 If your Domino server cant deliver data to the tapes drives with enough 
 speed, you'll end up with a shoeshine effect, which will seriously reduce 
 performance. Since you're using IBMtape, I'm assuming you're using some 
 sort of LTO drives. If you for example have a LTO3 drive, it will write 
 data @ 80MB/s natively. If your host only delivers data at 40MB/s, you 
 will have a drive that is spending more time rewinding than actually 
 writing the data.

 Some more information about your environment (hardware on both client  
 server incl tape technology, your TDP configuration file) would be helpful 
 in determining the error.

 Best Regards

 Daniel Sparrman
 Exist i Stockholm AB
 Växel: 08-754 98 00
 Fax: 08-754 97 30
 daniel.sparr...@exist.se
 http://www.existgruppen.se
 Posthusgatan 1 761 30 NORRTÄLJE



 -ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


 To: ADSM-L@VM.MARIST.EDU
 From: Sandeep Jain sandeep.j...@dcmds.com
 Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
 Date: 05/18/2011 06:54
 Subject: tdp for domino LAN free backup poor performance

 HI friends

 i am experiencing very very poor backup performance while taking backup of 
 domino server.
 It is having around 700GB of data and LAN FREE backup on 2 tapes 
 completing in 30 hours.

 OS WIN2k3 64_bit Ent Ed.
 Lotus. 8.5.2
 TDP for mail 5.5.3
 Storage agent 5.5
 Version of ibmtape device driver on the Domino windows host that controls 
 tape drives---6.2.1.5

 i have also tried performance tunning parameters but no luck.

 DOMTXNGROUPMAX= 64
 DOMTXNBYTELIMIT =2097152


 dsm.opt


 *==*

 * *

 * IBM Tivoli Storage Manager for Mail *

 * Data Protection for Lotus Domino *

 * *

 * Sample Options File *

 * *

 *==*

 COMMMethod TCPip

 TCPPort 1500

 TCPServeraddress 10.3.3.34

 TCPWindowsize 63

 TCPBuffSize 32

 TXNBYTELIMIT 2097152


 NODename domino_PDCHM_tdp

 PASSWORDAccess Generate


 SCHEDMODE Polling

 *SCHEDLOGRetention 14

 *SCHEDMODE Prompted

 *TCPCLIENTADDRESS yy.yy.yy.yy

 *TCPCLIENTPORT 1502

 enablelanfree yes

Re: linux client for RHES v3

2011-05-18 Thread Daniel Sparrman
If you mean support as in IBM supporting it, I'd say no.

If you're asking whether you can run it: yes, you should be able to run older 
code; it's mostly just a matter of the kernel level. Check the FTP site. I know 
5.2.2.6 states that it runs on RHEL 3.0; perhaps an even newer client does.

ftp://ftp.software.ibm.com/storage/tivoli-storage-management/patches/client/v5r2/Linux/Linux86/v522/

Best Regards

Daniel



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Richard Rhodes rrho...@firstenergycorp.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 05/18/2011 21:18
Subject: linux client for RHES v3

Hi everyone,

We have just found that we have to start TSM backups of a RedHat
Enterprise Linux ES Release 3 server (that's what I'm told the version
is). I went looking on IBM's web site - the oldest compatibility info I
can come up with is for TSM client v5.4.1, supporting RH v5.

Does anyone have any idea what TSM client would work on RedHat
Enterprise Linux ES Release 3 . . . if any?

(Then I get to try and find that client)

Thanks

Rick


-
The information contained in this message is intended only for the
personal and confidential use of the recipient(s) named above. If
the reader of this message is not the intended recipient or an
agent responsible for delivering it to the intended recipient, you
are hereby notified that you have received this document in error
and that any review, dissemination, distribution, or copying of
this message is strictly prohibited. If you have received this
communication in error, please notify us immediately, and delete
the original message.

Re: tdp for domino LAN free backup poor performance

2011-05-17 Thread Daniel Sparrman
Hi
 
First off, there are several things that affect performance besides the opt 
file, software versions, and your operating system:
 
* What speed do you get when making the backup over the LAN?
* How fast are the disks on your Domino server - are they able to transfer data 
at the speeds your tape drives are writing?
* How large are the files on your Domino server (on average)?
* What's the version of the IBMtape driver on your Domino host? The version on 
your TSM server (which controls the library) really doesn't affect read/write 
performance, since it only controls the robotics.
 
Have you verified that the backup is actually being done across the SAN and not 
the LAN? 700GB in 30 hours is under 7MB/s, which looks more like a 
slower-than-average LAN backup over gigabit ethernet.
 
If your Domino server can't deliver data to the tape drives with enough speed, 
you'll end up with a shoeshine effect, which will seriously reduce performance. 
Since you're using IBMtape, I'm assuming you're using some sort of LTO drives. 
If, for example, you have an LTO3 drive, it will write data at 80MB/s natively. 
If your host only delivers data at 40MB/s, you will have a drive that spends 
more time rewinding than actually writing data. 

Some more information about your environment (hardware on both client and 
server, including tape technology, plus your TDP configuration file) would be 
helpful in determining the error.
 
Best Regards

Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Sandeep Jain sandeep.j...@dcmds.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 05/18/2011 06:54
Subject: tdp for domino LAN free backup poor performance

Hi friends,

I am experiencing very poor backup performance while taking a backup of a 
Domino server.
It has around 700GB of data, and a LAN-free backup to 2 tapes completes in 
30 hours.

OS WIN2k3 64_bit Ent Ed.
Lotus. 8.5.2
TDP for mail 5.5.3
Storage agent 5.5
Version of ibmtape device driver on the Domino windows host that controls tape 
drives---6.2.1.5

I have also tried performance tuning parameters, but no luck.

DOMTXNGROUPMAX= 64
DOMTXNBYTELIMIT =2097152


dsm.opt


*==*

* *

* IBM Tivoli Storage Manager for Mail *

* Data Protection for Lotus Domino *

* *

* Sample Options File *

* *

*==*

COMMMethod TCPip

TCPPort 1500

TCPServeraddress 10.3.3.34

TCPWindowsize 63

TCPBuffSize 32

TXNBYTELIMIT 2097152


NODename domino_PDCHM_tdp

PASSWORDAccess Generate


SCHEDMODE Polling

*SCHEDLOGRetention 14

*SCHEDMODE Prompted

*TCPCLIENTADDRESS yy.yy.yy.yy

*TCPCLIENTPORT 1502

enablelanfree yes

*lanfreecommmethod tcpip

*lanfreetcpport 1500


COMPRESSIon NO

COMPRESSAlways NO


* Exclude all databases named db1.nsf regardless of where they appear

*EXCLUDE db1.nsf

* Exclude all databases that match help5_* in the help subdirectory

*EXCLUDE help\help5_*

* Include all databases in the mail6 directory

*INCLUDE mail6\...\*

* Assign all databases that match *.nsf in the mail subdirectory

* to the MAILDB management class

*INCLUDE mail\*.nsf* MAILDB

* Exclude all databases in the mail6 subdirectory from compression

*EXCLUDE.COMPRESSION mail6\...\*

* Encrypt all databases in the mail5 directory

*INCLUDE.ENCRYPT mail5\...\*

* The Default include/exclude list follows:

*

* Note: You can back up the log.nsf database but you can only restore

* it to an alternate name.

*

EXCLUDE log.nsf

EXCLUDE mail.box

* Include all transaction logs

INCLUDE S*.TXN

TCPNODELAY YES


Regards
Sandeep jain

Disclaimer: This e-mail is intended for the sole use of the recipient(s) and 
may contain confidential or privileged information. If you are not the intended 
recipient and receive this message in error, any dissemination, use, review, 
distribution, printing or copying of this message is strictly prohibited, and 
you are requested to notify the sender and destroy all copies of the original 
message. Thank you

Re: Trying to delete a filespace???

2011-05-16 Thread Daniel Sparrman
Hi

What result are you getting from executing the delete filespace command? Are 
you getting an error, or a successful completion status while the filespace is 
still there? The actual output from the command would be helpful.

Have you tried doing delete filespace CHI-AS-SCCMEUR 7 nametype=fsid ?

Best Regards


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -

To: ADSM-L@VM.MARIST.EDU
From: Minns, Farren - Chichester fmi...@wiley.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 05/16/2011 11:09
Subject: Trying to delete a filespace???

Hi all

I'm trying to delete the following filespace and having no luck at all.

The command I'm using is as below and I'm not getting any joy.

Any ideas?

Regards

Farren



delete filespace CHI-AS-SCCMEUR 
CHI-AS-SCCMEUR\SystemState\NULL\SystemState\SystemState type=any namet=uni




                       Node Name: CHI-AS-SCCMEUR
                  Filespace Name: CHI-AS-SCCMEUR\SystemState\NULL\System State\SystemState
      Hexadecimal Filespace Name: 4348492d41532d5343434d425c53797374656d53746174655c4e554c4c5c53797374656d2053746174655c53797374656d5374617465
                            FSID: 7
                        Platform: WinNT
                  Filespace Type: VSS
           Is Filespace Unicode?: Yes
                   Capacity (MB): 0.0
                        Pct Util: 0.0
     Last Backup Start Date/Time: 05/13/11   18:28:59
  Days Since Last Backup Started: 3
Last Backup Completion Date/Time: 05/14/11   00:45:55
Days Since Last Backup Completed: 2
Last Full NAS Image Backup Completion Date/Time:


John Wiley  Sons Limited is a private limited company registered in England 
with registered number 641132.
Registered office address: The Atrium, Southern Gate, Chichester, West Sussex, 
United Kingdom. PO19 8SQ.



Re: TSM Recovery log is pinning since upgrade to 5.5.5.0 code

2011-05-15 Thread Daniel Sparrman
Background operations shouldn't really be the reason for this, since most of 
them are actually faster on 6.x than 5.5. Expiration should be faster due to 
the ability to control resources for expiration, and copy/move processes 
should also be faster due to the performance upgrade of the database.

To find the issue, I'd look for clients that aren't behaving normally. Check 
your timings (which clients aren't performing as well as pre-upgrade), since 
that's usually the reason.

My 5 cents' worth.
 
Best Regards



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Prather, Wanda wprat...@icfi.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 05/15/2011 22:17
Subject: Re: TSM Recovery log is pinning since upgrade to 5.5.5.0 code

Am I correct that scheduling of background processes is important as well?
Don't expiration, migration, backup stgpool also create log transactions?
I'm wondering if it is the housekeeping, rather than the number of client 
backups, that is sending my server over the edge.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray/AC/VCU
Sent: Friday, May 13, 2011 3:50 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Recovery log is pinning since upgrade to 5.5.5.0 code

When I went through this with Andy R, the only solutions we came up with
were:

1. Redistribute the workload/schedules to smooth out the spikes in the number 
of nodes connecting to perform backups
2. Redistribute / move nodes to another TSM server
3. In cases where we were able to identify nodes causing the pinning (usually 
Oracle TDP backups), make changes to break up the backups into multiple 
smaller transactions/chunks

This eventually smoothed out the stair-stepping of the transaction load 
against the logs.
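
One TSM-side way to break backups into smaller transactions, as in item 3, is to lower the transaction-size settings. A sketch (the values are illustrative, not recommendations; TXNGROUPMAX is a server option, TXNBYTELIMIT a client option in KB):

```
setopt txngroupmax 256           /* server: files per transaction */
TXNBYTELIMIT 25600               /* client dsm.sys / dsm.opt: KB per transaction */
```

Smaller transactions commit and release log space sooner, which reduces how long any one session can pin the head of the log.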

This basically accelerated / gave me an excuse to move nodes to my 6.1.4 server, 
and any new nodes are created there (or on our 6.2.x servers).
Zoltan Forray
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will never 
use email to request that you reply with your password, social security number 
or confidential personal information. For more details visit 
http://infosecurity.vcu.edu/phishing.html



From:
Meadows Andrew andrew.mead...@hcahealthcare.com
To:
ADSM-L@VM.MARIST.EDU
Date:
05/13/2011 03:24 PM
Subject:
Re: [ADSM-L] TSM Recovery log is pinning since upgrade to 5.5.5.0 code Sent by:
ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



We have seen this for quite a while with our TSM servers/clients. The only 
thing we have found that works as a workaround is to point our clients to back 
up directly to tape. If you are able to find a resolution to this issue, please 
include me on the resolution as well, as I would rather not write backup data 
directly to tape during backups.

Thanks,

Andrew

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Dave 
Canan
Sent: Friday, May 13, 2011 10:52 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Recovery log is pinning since upgrade to 5.5.5.0 code

Andy Raibeck and I have been monitoring this thread for the past few weeks and 
have been having numerous discussions about what we need to do to address these 
problems. In addition, we have noted the many PMRs coming into the IBM support 
queue, and many of these PMRs are now on my queue. Andy will be assisting me 
along with other support people as needed.

As part of the analysis we will be doing, we have a new questionnaire (titled 
"Items to Gather for a PMR with Log Pinning Condition for TSM V5") that 
customers will be filling out, which asks several questions about your 
environment. We understand that this involves extra work to gather this 
information, but there are many different areas that can cause log pinning 
conditions, so this information is needed. In addition, there will be a script 
provided (named serverperf5a.pl) that will help us gather additional data. Both 
of these will be provided to you by support. When these PMRs are opened, 
please make sure that level 1 support adds the keyword LOGPIN55 to the PMR. 
This will allow Andy and me to quickly find all the PMRs being worked for this 
issue.

Eric, your PMR is now one that I have on my queue (or I will shortly today).
We will be contacting you to work the PMR.

Dave Canan
IBM ATS TSM Performance
ddcananATUSDOTIBMDOTCOM
916-723-2410

On Thu, May 12, 2011 at 7:53 AM, Loon, EJ van - SPLXO eric-van.l...@klm.com
 wrote:

 Hi Rick!
 You are running in normal mode. In this mode the recovery log only 
 contains uncommitted transactions

Re: Problem installing TSM server Instance 6.2

2011-05-13 Thread Daniel Sparrman
Since the format seems to be OK, have you tried starting the TSM server 
instance after the error below?
 
The error you're getting below could very well be from the following error 
which returns a non-zero return code:
 
Could not open file C:\Program Files\Tivoli\TSM\server1\Format.Out

Have you tried manually formatting the database by using:
 
dsmserv format -k server1 dbdir=C:\Program Files\Tivoli\TSM\server1\dbfile.1\ 
activelogdir=C:\TSMACTLOG\log\  
archlogdir=C:\SEQDISK\SAT705\Archivelog\primary\ 
mirrorlogdir=C:\TSMACTLOG\mirror\ 
archfailoverlogdir=C:\SEQDISK\SAT705\Archivelog\secondary\ activelogsize=16384

What results do you get from the above command?

Does the path C:\Program Files\Tivoli\TSM\server1\dbfile.1 exist and is 
writable by the instance owner?

From a more cosmetic standpoint, collecting your different files & directories 
under a common path usually makes administration a bit easier. For example, 
create a directory named C:\ITSM and then put your different paths under 
that directory (including your server instance, which could go in, for 
example, C:\ITSM\home). That usually makes it a bit easier to keep track of 
files and avoids program updates affecting your instance.

Best Regards

Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Leif Torstensen l...@athena.dk
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 05/13/2011 07:23
Subject: Problem installing TSM server Instance 6.2

Hi

I'm trying to install a new TSM server 6.2 on Windows 2003 64-bit, but it 
always stops with the following in the log view:

ANR2976I Offline DB backup for database TSMDB1 started.

Format completed with return code 499

And the dsmicfgx.trc shows:

Thu May 12 14:02:27 CEST 2011 com.tivoli.dsm.ServerConfig.ServerDB.doDb2Set(): 
enter, Setting DB2CODEPAGE=819
Thu May 12 14:02:27 CEST 2011 com.tivoli.dsm.ServerConfig.ServerDB.doDb2Set(): 
Issuing cmd: C:\Program Files\Tivoli\TSM\db2\bin\db2set -i Server1 
DB2CODEPAGE=819 21
Thu May 12 14:02:27 CEST 2011 com.tivoli.dsm.ServerConfig.ServerDB.doDb2Set(): 
exit, rc 0
Thu May 12 14:02:27 CEST 2011 
com.tivoli.dsm.ServerConfig.ServerDB.getShortPath(): enter, resolving: 
C:\Program Files\Tivoli\TSM\server1
Thu May 12 14:02:27 CEST 2011 
com.tivoli.dsm.ServerConfig.ServerDB.getShortPath(): exit, result: 
C:\PROGRA~1\Tivoli\TSM\server1
Thu May 12 14:02:27 CEST 2011 com.tivoli.dsm.ServerConfig.ServerDB.doDb2Set(): 
enter, Setting DB2_VENDOR_INI=C:\PROGRA~1\Tivoli\TSM\server1\tsmdbmgr.env
Thu May 12 14:02:27 CEST 2011 com.tivoli.dsm.ServerConfig.ServerDB.doDb2Set(): 
Issuing cmd: C:\Program Files\Tivoli\TSM\db2\bin\db2set -i Server1 
DB2_VENDOR_INI=C:\PROGRA~1\Tivoli\TSM\server1\tsmdbmgr.env 21
Thu May 12 14:02:28 CEST 2011 com.tivoli.dsm.ServerConfig.ServerDB.doDb2Set(): 
exit, rc 0
Thu May 12 14:02:28 CEST 2011 
com.tivoli.dsm.ServerConfig.ServerDB.createDb2Instance(): exit, rc 301
Thu May 12 14:02:28 CEST 2011 
com.tivoli.dsm.ConfigWizard.DoFormatPanel.signalEvent(): enter, 
event=createInstanceDone, rc=301
Thu May 12 14:02:28 CEST 2011 
com.tivoli.dsm.ServerConfig.ServerDB$FormatThread.run(): Starting remote 
session for format
Thu May 12 14:02:28 CEST 2011 
com.tivoli.dsm.ServerConfig.ServerDB$FormatThread.run(): Starting remote 
session for monitor
Thu May 12 14:02:28 CEST 2011 
com.tivoli.dsm.ServerConfig.ProcessMonitor.Constructor(): enter
Thu May 12 14:02:28 CEST 2011 
com.tivoli.dsm.ServerConfig.ProcessMonitor.Constructor(): exit
Thu May 12 14:02:28 CEST 2011 
com.tivoli.dsm.ServerConfig.ServerDB$FormatThread.run(): Issuing cmd: 
C:\Program Files\Tivoli\TSM\Server\dsmserv -k Server1 FORMAT  
dbfile=\C:\Program Files\Tivoli\TSM\server1\dbfile.1\ 
activelogdir=\C:\TSMACTLOG\log\ 
archlogdir=\C:\SEQDISK\SAT705\Archivelog\primary\ 
mirrorlogdir=\C:\TSMACTLOG\mirror\ 
archfailoverlogdir=\C:\SEQDISK\SAT705\Archivelog\secondary\ 
activelogsize=16384 Format.Out 21
Thu May 12 14:02:28 CEST 2011 com.tivoli.dsm.ServerConfig.ProcessMonitor.run(): 
enter, Monitoring file C:\Program Files\Tivoli\TSM\server1\Format.Out
Thu May 12 14:02:28 CEST 2011 
com.tivoli.dsm.ServerConfig.ProcessMonitor.read(): Could not open file 
C:\Program Files\Tivoli\TSM\server1\Format.Out
Thu May 12 14:05:20 CEST 2011 
com.tivoli.dsm.ServerConfig.ProcessMonitor.read(): enter
Thu May 12 14:05:20 CEST 2011 
com.tivoli.dsm.ServerConfig.ProcessMonitor.read(): exit
Thu May 12 14:10:30 CEST 2011 
com.tivoli.dsm.ServerConfig.ProcessMonitor.read(): enter
Thu May 12 14:10:30 CEST 2011 
com.tivoli.dsm.ServerConfig.ProcessMonitor.read(): exit
Thu May 12 14:15:40 CEST 2011 
com.tivoli.dsm.ServerConfig.ProcessMonitor.read(): enter
Thu May 12 14:15:40 CEST 2011 
com.tivoli.dsm.ServerConfig.ProcessMonitor.read(): exit
Thu May 12 14:20:50 CEST 2011

Re: Question about Hybrid method upgrade to TSM V6.2.2 and server-to-server export

2011-05-12 Thread Daniel Sparrman
It'll work (since, like you say, the Windows clients won't see the old box 
again). I'm assuming you're switching names when doing the DNS redirect to 
the new server. In that case, you could end up with a Windows client 
needing a restore of a file which hasn't been exported yet. That could put you 
in the situation you're describing (Windows clients vs. the name-changed TSM 
server), since you'll have to redirect the restore to the old TSM server (it's 
the only TSM server holding the actual file).
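
The rename-then-export idea could look roughly like this on the old V5.5 server (server name, password and addresses are placeholders):

```
set servername V55SERVER_OLD
define server TSM62 serverpassword=secret hladdress=newhost.example.com lladdress=1500
export node * filedata=all toserver=TSM62
```

An incremental pass can be limited with the FROMDATE/FROMTIME parameters on EXPORT NODE, so only the backups taken since the database extract are shipped across.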

Best Regards


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Howard Coles howard.co...@ardenthealth.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 05/12/2011 16:45
Subject: Re: Question about Hybrid method upgrade to TSM V6.2.2 and 
server-to-server export

While we didn't do the hybrid method, I don't see any reason why that
wouldn't work.  As long as the windows boxes aren't connecting back up,
you should be good.  

See Ya'
Howard Coles Jr., RHCE, CNE, CDE
John 3:16!

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Prather, Wanda
Sent: Wednesday, May 11, 2011 10:00 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Question about Hybrid method upgrade to TSM V6.2.2
and server-to-server export

I am planning to do some upgrades using the hybrid method.
The V5.5 server will be shut down, the DB extracted, and the V5.5 server
will be brought back up and clients will run backups while we load the
DB into 6.2.

Then the V5.5 server will be disabled for client sessions, and an
incremental export/import used to get the backup inventory in sync with
6.2, then clients will be cut over to the 6.2 server via a DNS change.

Here's the question:

The description of the hybrid method assumes that you will do the
incremental export to media.  I'd rather use server-to-server.

I can't set up the server-to-server communication while the server names
(the TSM server name, not the host name) are the same.  Any reason I
can't get around that by doing a SET SERVERNAME V55SERVERNAME_OLD on the
V55 server, once all client sessions are cancelled?  I know that causes
havoc with Windows clients, but they should never be connecting to the
old server again.

Any suggestions/anybody tried this?

Thanks
W



Wanda Prather  |  Senior Technical Specialist  |  wprat...@icfi.com  |  www.jasi.com
ICF Jacob & Sundstrom  | 401 E. Pratt St, Suite 2214, Baltimore, MD
21202 | 410.539.1135

Re: TSM V6 Instance ID

2011-04-19 Thread Daniel Sparrman
As for directories, I usually try to collect all my directories (DB, log, 
mirror log, archlog dir, failover dir, stgpool dir and config dir) under a 
specific path such as itsm. When I have only one instance, I use something like:
 
/itsm/home for instance home directory
/itsm/db/dbXX for database directories
/itsm/log
/itsm/logmir
 
and so on. All of the directories are still filesystems of their own, placed on 
arrays/spindles of their own, but it keeps it simple.
 
When running multiple instances, I still try to collect them under a single 
path and just add the instance owner:
 
/itsm/tsmsrv1
/itsm/tsmsrv2
 
The biggest reason for doing this is to keep it simple. I don't want to mix up a DB2 
instance home dir with other home directories under /home, and I want a clear 
structure for my TSM configuration.
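 
As a concrete sketch of that layout (the instance owner and group names are examples only; each directory would still be its own filesystem on its own spindles/arrays):

```
mkdir -p /itsm/home /itsm/db/db01 /itsm/db/db02 /itsm/log /itsm/logmir
chown -R tsminst1:tsmsrvrs /itsm
```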
 
Best Regards
 
Daniel





Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Mike De Gasperis mdegaspe...@comcast.net
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 04/19/2011 20:04
Subject: TSM V6 Instance ID

I was wondering what everyone else has done for instance IDs on their AIX or 
other UNIX systems for the DB2 instance ID. Are there any issues with the ID 
having no password but not being allowed to log in via telnet/ssh, outside of 
su'ing from root?

I was also going to request that the file ulimit be set to unlimited; are there 
any other specific ulimits I should change to unlimited or increase in 
general? For the user home directory, did you just use the default /home, or did 
you split it off into a separate filesystem? We're thinking a separate 
filesystem at this time.

I wasn't able to find too much on specifics for the user ID so I apologize if 
this has been asked before or covered in depth somewhere.

- Mike

Re: How do you allocate costs for deduplicated data?

2011-04-18 Thread Daniel Sparrman
REPORTING_MB is the amount of space that would be occupied if the data wasn't 
placed in a deduplication-enabled storage pool.
 
So on the first node below, you saved 5 MB out of 1.8 GB.
 
Your PHYSICAL_MB shouldn't be 0 though, which is weird (unless I'm mixing up the 
columns; they look a bit odd in my mail client).
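
One way to sanity-check the per-node figures is to aggregate the occupancy table yourself (a sketch using the same occupancy columns discussed in this thread; the pool name is from the example below):

```
select node_name, sum(logical_mb) as logical_mb, sum(reporting_mb) as reporting_mb
  from occupancy
 where stgpool_name='DISK-DEDUPE'
 group by node_name
```

The difference between the two sums is the space deduplication saved per node, which is the number most cost-allocation schemes want.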

Best Regards

Daniel


Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Hart, Charles A charles_h...@uhc.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 04/18/2011 21:08
Subject: Re: How do you allocate costs for deduplicated data?

It appears, by doing a select on the occupancy table, that the full amount of
data is reported in the LOGICAL column, and REPORTING is deduped:

tsm: TSMLAB1> select NODE_NAME, STGPOOL_NAME, NUM_FILES, PHYSICAL_MB,
LOGICAL_MB, REPORTING_MB from occupancy where STGPOOL_NAME='DISK-DEDUPE'

NODE_NAME  STGPOOL_NAME   NUM_FILES  PHYSICAL_MB  LOGICAL_MB  REPORTING_MB
---------- ------------- ---------- ------------ ----------- -------------
LABW9236   DISK-DEDUPE        -2620         0.00     1852.71          5.10
LABW9236   DISK-DEDUPE            0         0.00        0.01          0.00
LABW9236   DISK-DEDUPE            0         0.00        0.04          0.00
LABW9236   DISK-DEDUPE            0         0.00        0.10          0.00
LABW9236   DISK-DEDUPE            0         0.00        0.13          0.00
LABW9236   DISK-DEDUPE            0         0.00        0.23          0.00
LABW9236   DISK-DEDUPE            0         0.00        0.87          0.00
LABW9236   DISK-DEDUPE            0         0.00        0.94          0.00
LABW9236   DISK-DEDUPE            0         0.00        1.19          0.00
LABW9236   DISK-DEDUPE            0         0.00       14.51          0.00
LABW9236   DISK-DEDUPE            0         0.00       17.57          0.00
LABW9236   DISK-DEDUPE            0         0.00       31.43          0.00
LABW9236   DISK-DEDUPE            0         0.00       24.59          0.01
LABW9236   DISK-DEDUPE            0         0.00       45.27          0.01
LABW9236   DISK-DEDUPE            0         0.00       82.46          0.01
LABW9236   DISK-DEDUPE            0         0.00      229.62          0.03
LABW9236   DISK-DEDUPE            0         0.00      300.47          0.04
LABW9236   DISK-DEDUPE            0         0.00      418.44          0.05
LABW9236   DISK-DEDUPE            0         0.00      470.74          0.06
LABW9236   DISK-DEDUPE            0         0.00      925.03          0.12
LABW9236   DISK-DEDUPE            0         0.00      939.91          0.12
LABW9236   DISK-DEDUPE            0         0.00     1257.26          0.16
LABW9236   DISK-DEDUPE            0         0.00     1572.13          0.19
LABW9236   DISK-DEDUPE            0         0.00     2152.09          0.28
LABW9236   DISK-DEDUPE            0         0.00     3131.19          0.39
LABW9236   DISK-DEDUPE            0         0.00     3191.17          0.39
LABW9236   DISK-DEDUPE            0         0.00     3322.11          0.41
LABW9236   DISK-DEDUPE            0         0.00     5474.88          0.67
LABW9236   DISK-DEDUPE            0         0.00     6981.77          0.87
LABW9236   DISK-DEDUPE            0         0.00     7247.60          0.89

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Jim Neal
Sent: Monday, April 18, 2011 1:51 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] How do you allocate costs for deduplicated data?
Importance: High

Hi All,





We are in the process of testing TSM's Deduplication
feature and I have a question about how the cost of the deduplicated
data gets allocated.  For example:



  Two

Re: TBMR or CBMR

2011-04-13 Thread Daniel Sparrman
I don't work with or use CBMR myself, but I did evaluate it once.
 
It does actually integrate with TSM, since it uses TSM as a repository. So 
saying it is a standalone product is wrong (unless something changed in a newer 
version).
 
I'd still select TBMR for several reasons though: a) same producer, easier 
support; b) it uses MS standards (such as ASR); c) it's easier to handle 
licensing since, again, it's the same producer; d) it does what it's intended to do.
 
I have also heard from several customers that they had issues related 
to network cards with CBMR; I guess I'll be corrected on this one by 
Christian.
 
Best Regards
 
Daniel Sparrman




Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Grigori Solonovitch grigori.solonovi...@ahliunited.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 04/13/2011 17:47
Subject: Re: TBMR or CBMR

Of course TBMR (Tivoli Bare Machine Recovery), which is integrated with the TSM server.
CBMR (Cristie Bare Machine Recovery) is a backup system like TSM (to disks, USB 
disks, tapes). It is not integrated with TSM directly.

Grigori G. Solonovitch

Please consider the environment before printing this Email.

CONFIDENTIALITY AND WAIVER: The information contained in this electronic mail 
message and any attachments hereto may be legally privileged and confidential. 
The information is intended only for the recipient(s) named in this message. If 
you are not the intended recipient you are notified that any use, disclosure, 
copying or distribution is prohibited. If you have received this in error 
please contact the sender and delete this message and any attachments from your 
computer system. We do not guarantee that this message or any attachment to it 
is secure or free from errors, computer viruses or other conditions that may 
damage or interfere with data, hardware or software.

Re: TBMR or CBMR

2011-04-13 Thread Daniel Sparrman
Well, when I used it, the only data stored on the CBMR server was operating 
system data, not user data. Unless that changed, the bulk of the data is 
still restored from the TSM server.

Like I said, it's been a while since I used it, but back then OS data was 
gathered from the CBMR server and user data (that is, the large amount of data 
you will be restoring, such as databases or user files, whatever server you are 
restoring) was restored from TSM.

Again, Christian Svensson knows this a lot better than I do; I evaluated it once 
and decided to go with TBMR for my customers out of simplicity.



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Grigori Solonovitch grigori.solonovi...@ahliunited.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 04/13/2011 22:07
Subject: Re: TBMR or CBMR

I was using CBMR for quite a long time, and we are now using TBMR for quite a 
big number of servers (upgraded from CBMR).
CBMR can keep its configuration on diskette, USB drive, network drive or on the 
TSM server, but that doesn't mean it keeps backup data on the TSM server.
TBMR keeps configuration and backup data on the TSM server and can read them 
from TSM after booting from CD using WinPE1 or WinPE2 (depending on the OS).
So only TBMR is totally integrated with TSM, and it can't keep backup data on 
disks or tapes.
We are using TBMR 6.3.1 and TSM clients 6.1.3.3 and 6.2.2.0 with TSM server 
5.5.4.1. Everything is working perfectly.


From: ADSM: Dist Stor Manager [ADSM-L@VM.MARIST.EDU] On Behalf Of Daniel 
Sparrman [daniel.sparr...@exist.se]
Sent: Wednesday, April 13, 2011 9:54 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Re: TBMR or CBMR

I don't work with or use CBMR myself, but I did evaluate it once.

It does actually integrate with TSM, since it uses TSM as a repository. So 
saying it is a standalone product is wrong (unless something changed in a newer 
version).

I'd still select TBMR for several reasons though: a) same producer, easier 
support; b) it uses MS standards (such as ASR); c) it's easier to handle 
licensing since, again, it's the same producer; d) it does what it's intended to do.

I have also heard from several customers that they had issues related 
to network cards with CBMR; I guess I'll be corrected on this one by 
Christian.

Best Regards

Daniel Sparrman




Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Grigori Solonovitch grigori.solonovi...@ahliunited.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 04/13/2011 17:47
Subject: Re: TBMR or CBMR

Of course, TBMR (TIVOLI Bare Machine Recovery - integrated with TSM Server)
CBMR (Cristie Bare Machine Recovery) is a backup system like TSM (to disks, USB 
disks, tapes). It is not integrated with TSM directly.

Grigori G. Solonovitch

Please consider the environment before printing this Email.

CONFIDENTIALITY AND WAIVER: The information contained in this electronic mail 
message and any attachments hereto may be legally privileged and confidential. 
The information is intended only for the recipient(s) named in this message. If 
you are not the intended recipient you are notified that any use, disclosure, 
copying or distribution is prohibited. If you have received this in error 
please contact the sender and delete this message and any attachments from your 
computer system. We do not guarantee that this message or any attachment to it 
is secure or free from errors, computer viruses or other conditions that may 
damage or interfere with data, hardware or software.


Re: reclamation question

2011-04-11 Thread Daniel Sparrman
As the previous two posters mentioned, if the tape is available, TSM will use it. 
If it's offsite, TSM will try to collect the data from primary volumes. So your 
issue is that the copypool tapes are actually available. It's kind of weird that 
you're reclaiming them, though; they should have been moved offsite long before 
reclamation is needed. Are you storing large database/mail/other application 
backups on them which expire regularly?
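
To the onsite-reclamation question itself: TSM only reads from primary volumes when the copy-pool volume is unavailable. A workaround sometimes used, with care since it changes volume state, is to mark the onsite copy volumes offsite so reclamation pulls from the primary pool instead (the pool name is a placeholder):

```
update volume * access=offsite wherestgpool=ONSITE_COPYPOOL whereaccess=readwrite,readonly
```

Once the volumes are emptied by reclamation, they can be set back to READWRITE or returned to scratch.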

Best Regards

Daniel Sparrman

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Tyree, David david.ty...@sgmc.org
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 04/11/2011 18:53
Subject: reclamation question

We have three storage pools: the primary pool is devtype FILE 
and the two copy pools are devtype LTO3. The offsite copy pool gets 
transferred offsite via DRM.
Whenever I run a reclamation on the offsite copy pool, the 
system grabs a scratch tape, copies files from the primary pool and 
starts filling up tapes. I'm perfectly happy with that process.
I have an issue when I do reclamation of onsite tapes. It 
loads up a tape (might be scratch) and then another tape to copy the data from, 
ending up doing a tape-to-tape copy. In a way it uses twice as many tape 
mounts as an offsite reclamation, and since I only have 6 drives it kinda 
cramps my options sometimes.
Is there a way to do reclamation of the onsite copy pool and 
have it pull data from the primary pool instead of doing a tape-to-tape copy? I 
mean, it can do it for the offsite pool, why not the onsite as well?


David Tyree
Interface Analyst
South Georgia Medical Center
229.333.1155
Confidential Notice:  This e-mail message, including any attachments, is for 
the sole use of the intended recipient(s) and may contain confidential and 
privileged information.  Any unauthorized review, use,  disclosure or 
distribution is prohibited.  If you are not the intended recipient, please 
contact the sender by reply e-mail and destroy all copies of the original 
message.

Re: Trying to install lin_tape

2011-04-11 Thread Daniel Sparrman
Which version are you running? You should have full support from 1.49. 

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Lee, Gary D. g...@bsu.edu
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 04/11/2011 18:58
Subject: Trying to install lin_tape

Has anyone out there solved how to install the lin_tape package on Red Hat 
Enterprise Linux 6?

Apparently, rebuild is not supported in their implementation of rpm.

I've been looking at the rpm man page, but the light hasn't come on yet.
Any pointers at this time would be helpful.
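
On RHEL 6 the old `rpm --rebuild` switch was dropped; rebuilding the lin_tape source RPM needs the separate rpm-build package and the `rpmbuild` command instead. A sketch (the version string and resulting RPM path are placeholders for whatever your build produces):

```
yum install rpm-build
rpmbuild --rebuild lin_tape-1.x.x-1.src.rpm
rpm -ivh /root/rpmbuild/RPMS/x86_64/lin_tape-*.rpm
```

Since lin_tape builds a kernel module, the rebuild typically also wants kernel-devel and gcc installed for the running kernel.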


Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310

 

Re: Copypool storage advice

2011-04-08 Thread Daniel Sparrman
Hi Paul

So I assume you don't have any database dumps or TDPs from, for example, SQL, 
DB2, Oracle, Exchange or Domino; everything is just simple file backups?

In that case, there are probably only two options to reduce the number of 
copypool tapes:

a) Divide your servers into two groups, one with a large daily incremental 
change and one with more static servers, and direct them to two different 
copypools.

b) Like I said in my previous message, lower your reclamation threshold to 
around 30%, and let the TSM server reduce the number of tapes by completing the 
operation. This option will, however, probably land you in the same 
situation again in the future. 
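
Option b) as commands (the pool name is a placeholder; drop the threshold, let reclamation run to completion, then raise it back):

```
update stgpool COPYPOOL reclaim=30
/* let reclamation run until complete, then restore the normal threshold */
update stgpool COPYPOOL reclaim=60
```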

The reason you have so many copypool tapes with a high pct reclaim is the 
large amount of change in your environment, leading to data expiring on 
your copypool tapes. What does your primary pool look like? Are you seeing the 
same issue there, with a large number of tapes having a high percentage of 
change?

Do you have more copypool tapes than nodes?

Best Regards

Daniel Sparrman

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Paul_Dudley pdud...@anl.com.au
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 04/08/2011 08:06
Subject: Re: Copypool storage advice

The copypool is on LTO3 tapes. They are not database backups, just incremental 
server backups.

Thanks & Regards
Paul


 -Original Message-

 It would be helpful to know what kind of the tape technology you're using 
 since the
 reclamation threshold % is usually based off which technology is being used.
 Smaller tapes can usually have a small threshold while larger tapes requires a
 larger threshold.

 One way to reduce the amount of tapes is simply to reduce the threshold to
 something like 30 and let the reclaim process run until it's complete. This 
 will require
 enough free tape drives to a) let reclamation run until it's complete b) do 
 normal
 operations.

 There can be several reasons why you get so high pct reclaim. One is that 
 you're
 running full database or application backups. Since this will expire a full 
 backup
 every day, it will cause the reclaim on your tapes to rise. Splitting your 
 copypool into
 separate ones categorized on the type of data stored (one for fileservers, 
 one for
 application servers for example) is one way to go, using collocation is 
 another.

 Best Regards

 Daniel Sparrman

 -ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


 I currently have a lot of copypool storage tapes which are between 50 - 60%
 utilization. Expiration runs daily and I run reclamation daily on this 
 copypool, set to
 50.

 Is there anything I can do to try and consolidate the data onto fewer copypool
 tapes?



 Thanks & Regards

 Paul



 Paul Dudley

 Senior IT Systems Administrator

 ANL Container Line Pty Limited

 Email:  mailto:pdud...@anl.com.au pdud...@anl.com.au








 ANL DISCLAIMER

 This e-mail and any file attached is confidential, and intended solely to the 
 named
 addressees. Any unauthorised dissemination or use is strictly prohibited. If 
 you
 received this e-mail in error, please immediately notify the sender by return 
 e-mail
 from your system. Please do not copy, use or make reference to it for any 
 purpose,
 or disclose its contents to any person.






Re: Copypool storage advice

2011-04-07 Thread Daniel Sparrman
Hi Paul

It would be helpful to know what kind of tape technology you're using, since 
the reclamation threshold % is usually based on which technology is being 
used. Smaller tapes can usually have a small threshold, while larger tapes 
require a larger threshold. 

One way to reduce the number of tapes is simply to lower the threshold to 
something like 30 and let the reclaim process run until it's complete. This 
will require enough free tape drives to a) let reclamation run to 
completion and b) do normal operations.

There can be several reasons why you get such a high pct reclaim. One is that 
you're running full database or application backups. Since this expires a 
full backup every day, it will cause the reclaim percentage on your tapes to rise. 
Splitting your copypool into separate pools categorized by the type of data 
stored (one for file servers, one for application servers, for example) is one 
way to go; using collocation is another.

Best Regards

Daniel Sparrman

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU skrev: -


To: ADSM-L@VM.MARIST.EDU
From: Paul_Dudley pdud...@anl.com.au
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 04/08/2011 06:07
Subject: Copypool storage advice

I currently have a lot of copypool storage tapes at between 50 and 60%
utilization. Expiration runs daily, and I run reclamation daily on this
copypool with the threshold set to 50%.

Is there anything I can do to try and consolidate the data onto fewer copypool 
tapes?



Thanks & Regards

Paul



Paul Dudley

Senior IT Systems Administrator

ANL Container Line Pty Limited

Email: pdud...@anl.com.au









Ang: Re: Ang: Re: Tsm v6 migration question

2011-03-25 Thread Daniel Sparrman
Hi Gregory


My comment was to this previous comment:

But, the most important thing with 6.x is the architecture of the database
files. You have to reserve one disk for the database, one for the actlog, and
one other for the archlog. Having these disks on the same axis may result in
poor efficiency.

What I meant was that it has never been recommended to share the same array /
spindle / filesystem between the logs and the database.

If you go back to the TSM v5.5 performance guide, you'll notice the
recommendation to keep the log and DB on separate spindles / filesystems /
arrays.

As for DB2, yes, DB2 is new for v6, but the performance setup isn't.

My guess is that if your setup crashed after 2 months, whoever implemented your
solution probably shared filesystem space between the database and some other
part of TSM, such as the archive log or active log. Since a) the database can
now grow on its own, I wouldn't place it with anything else, since you might
run out of space, and b) I wouldn't share archive log space with anything else
either, since it also grows on its own.

I can't imagine that your TSM implementation crashed just because the person
who implemented it placed the database on a single filesystem. That is
technically possible, though not something I would recommend due to performance
issues.

Best Regards

Daniel Sparrman

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -

To: ADSM-L@VM.MARIST.EDU
From: molin gregory gregory.mo...@afnor.org
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 03/25/2011 14:47
Subject: Re: Ang: Re: Tsm v6 migration question

Hello Daniel,

You say: 'That's not really something new to TSM v6.'

DB2 is now the DBMS for TSM, and while DB2 improves the database, it has system
requirements that were not the same in previous versions.

My experience: the installer who set up the solution at our site did not
implement the recommendations related to DB2. TSM crashed after two months.


Best Regards,
Grégory Molin
Tel : 0141628162
gregory.mo...@afnor.org

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Daniel
Sparrman
Sent: Thursday, March 24, 2011 10:17
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Ang: Re: Tsm v6 migration question

That's not really something new to TSM v6. For performance reasons, it has
always been recommended to separate the database and log onto different
spindles.

As with v6, at least for larger implementations, the same rule of thumb
applies: try spreading your DB paths over at least 4 different
arrays/spindles, preferably 8 or more. Too many will not increase performance,
but rather reduce it. Put your active log / log mirror on spindles of their
own, as well as your archive log / failover paths.
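Laid out as a v6 server-format invocation, that separation might look like the following sketch (all paths are hypothetical, and the parameter names should be checked against your server level's `dsmserv format` reference):

```
# Database spread across four separate arrays/spindles;
# active log, its mirror, and the archive log each on their own filesystem.
dsmserv format dbdir=/tsm/db01,/tsm/db02,/tsm/db03,/tsm/db04 \
  activelogdirectory=/tsm/activelog mirrorlogdirectory=/tsm/mirrorlog \
  archlogdirectory=/tsm/archlog archfailoverlogdirectory=/tsm/archfailover
```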

Watch out for sharing the archive log with anything else (like storage pools),
since the database backup trigger looks at the % used on the filesystem. So
putting storage pools and your archive log in the same filesystem probably
isn't the best of ideas.

Best Regards

Daniel Sparrman
-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -

To: ADSM-L@VM.MARIST.EDU
From: molin gregory gregory.mo...@afnor.org
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 03/24/2011 11:06
Subject: Re: Tsm v6 migration question

Hello Gary,

Effectively, the database grows between 10 and 20% (more if new features, like
dedup, are installed).
But, the most important thing with 6.x is the architecture of the database
files. You have to reserve one disk for the database, one for the actlog, and
one other for the archlog. Having these disks on the same axis may result in
poor efficiency.

Best regards,
Grégory Molin
Tel : 0141628162
gregory.mo...@afnor.org

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee,
Gary D.
Sent: Wednesday, March 23, 2011 18:44
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Tsm v6 migration question

We are preparing to migrate our TSM v5.5.4 system to 6.2.2.


Are there any rules of thumb relating DB size in v5.5.4 to what we will need
under v6?

I assume it will be somewhat larger, being a truly relational structure.

Thanks for any help.


Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310

 

ATTENTION.

This message and any attachments are confidential and intended to be received
only by the addressee. If you are not the intended recipient, please notify
the sender immediately by reply and delete the message and any attachments
from your system.


Ang: TSM for DB password expiration

2011-03-25 Thread Daniel Sparrman
Hi David

Correct me if I'm wrong, but when the node's password has expired,
passwordaccess generate will not set a new password for the client. You'll have
to set a new password for the node, and then use tdpoconf to update the stored
password on the client machine.
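The recovery sequence described above might look like this sketch (node name and option-file path are hypothetical; verify the `tdpoconf` syntax at your TDP for Oracle level):

```
# On the TSM server, from an administrative client:
#   UPDATE NODE dbfintest newpassword
# Then on the Oracle host, re-store the password for the TDPO API:
tdpoconf password -tdpo_optfile=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt
```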

Best Regards

Daniel Sparrman
-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -

To: ADSM-L@VM.MARIST.EDU
From: David E Ehresman deehr...@louisville.edu
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 03/25/2011 15:41
Subject: TSM for DB password expiration

Has anyone had problems with TSM for DB (Oracle), aka TDPO, and password
expiration?

We are running TSM for DB v5.5.1.0 with TSM api 6.2.2.0 on AIX 5.3 to a
TSM server at 6.2.2.0 also on AIX 5.3. We are running with password
generate. Last night we got two ANR0425W "password has expired" messages
for node DBFINTEST. I assume I got two messages because the rman command
uses two channels, thus two concurrent TSM sessions. At that point the
password should have been set to something new and life should go on.
Instead, subsequent sessions for that node got the ANS0282E "Password file
is not available." message, i.e., the password is incorrect.

Any thoughts on what might be going wrong?

David Ehresman


Ang: Re: Tsm v6 migration question

2011-03-24 Thread Daniel Sparrman
That's not really something new to TSM v6. For performance reasons, it has
always been recommended to separate the database and log onto different
spindles.

As with v6, at least for larger implementations, the same rule of thumb
applies: try spreading your DB paths over at least 4 different
arrays/spindles, preferably 8 or more. Too many will not increase performance,
but rather reduce it. Put your active log / log mirror on spindles of their
own, as well as your archive log / failover paths.

Watch out for sharing the archive log with anything else (like storage pools),
since the database backup trigger looks at the % used on the filesystem. So
putting storage pools and your archive log in the same filesystem probably
isn't the best of ideas.

Best Regards

Daniel Sparrman
-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -

To: ADSM-L@VM.MARIST.EDU
From: molin gregory gregory.mo...@afnor.org
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 03/24/2011 11:06
Subject: Re: Tsm v6 migration question

Hello Gary,

Effectively, the database grows between 10 and 20% (more if new features, like
dedup, are installed).
But, the most important thing with 6.x is the architecture of the database
files. You have to reserve one disk for the database, one for the actlog, and
one other for the archlog. Having these disks on the same axis may result in
poor efficiency.

Best regards,
Grégory Molin
Tel : 0141628162
gregory.mo...@afnor.org

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee,
Gary D.
Sent: Wednesday, March 23, 2011 18:44
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Tsm v6 migration question

We are preparing to migrate our TSM v5.5.4 system to 6.2.2.


Are there any rules of thumb relating DB size in v5.5.4 to what we will need
under v6?

I assume it will be somewhat larger, being a truly relational structure.

Thanks for any help.


Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310

 



Ang: Re: draining a diskpool when using a DD and no copypool

2011-03-22 Thread Daniel Sparrman
It's not always such a good idea to use a FILE device class as the primary
storage device for backups. It all depends on the number of clients, whether
you're using multi-session backups, and the size of your file volumes.

Remember that even though TSM has concurrent access to file volumes, only one
session at a time is actually allowed to write to a volume. This means that
backup sessions, migration TO a file device, and the like will effectively
lock that specific volume for write access.

So basically, if you run 1000 sessions against a file device pool, you will be
creating 1000 volumes.
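The knobs that govern this live on the FILE device class. A sketch (the class name, directory, and values are placeholders, not a tested configuration):

```
/* FILE device class on the NFS-mounted filesystem. MOUNTLIMIT caps how */
/* many volumes can be open at once (one writer per volume), and        */
/* MAXCAPACITY sets the size of each FILE volume.                       */
DEFINE DEVCLASS ddfile DEVTYPE=FILE DIRECTORY=/dd/tsmpool -
  MOUNTLIMIT=128 MAXCAPACITY=50G
```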
-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -

To: ADSM-L@VM.MARIST.EDU
From: Prather, Wanda wprat...@icfi.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 03/22/2011 15:25
Subject: Re: draining a diskpool when using a DD and no copypool

Curious as to why you are using the disk pool with migration to DD instead of 
having your backups write directly to the DD.  

Wanda


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Richard Rhodes
Sent: Tuesday, March 22, 2011 9:13 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] draining a diskpool when using a DD and no copypool

Hello everyone,

An interesting problem . . . we are implementing a Data Domain system. It will
be configured as a file device via an NFS mount. DD replication of the primary
file device pool to the DR site will be used, so there will be no copypool. We
will still use a diskpool with migration into the file device pool, and this
brings up a problem. The majority of backups come in overnight into the
diskpool and get migrated. But some backups (Oracle archive logs, long-running
backups from remote sites, and some others) come in at any time around the
clock. Since DR relies on DD replication of the primary file device pool, we
MUST make sure that at some point every file in the diskpool gets migrated.
With ongoing backups coming into the diskpool, migration to 0% may never
complete. The one thought we had was to run, once a day (after migration
completes), a MOVE DATA on each diskpool volume.
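A once-a-day drain along those lines might be scripted roughly like this (admin id, password, and pool names are placeholders; an untested sketch, not a production script):

```shell
#!/bin/sh
# Hypothetical drain: after scheduled migration finishes, push any straggler
# data off every DISKPOOL volume into the DD-backed file device pool.
ADMC="dsmadmc -id=admin -password=secret -dataonly=yes"

$ADMC "select volume_name from volumes where stgpool_name='DISKPOOL'" |
while read vol; do
    [ -n "$vol" ] && $ADMC "move data $vol stgpool=DDPOOL wait=yes"
done
```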

We've asked DD this question, but so far they haven't provided an answer.


Thanks!

Rick


-
The information contained in this message is intended only for the personal and 
confidential use of the recipient(s) named above. If the reader of this message 
is not the intended recipient or an agent responsible for delivering it to the 
intended recipient, you are hereby notified that you have received this 
document in error and that any review, dissemination, distribution, or copying 
of this message is strictly prohibited. If you have received this communication 
in error, please notify us immediately, and delete the original message.

