Re: v6.3.5 hung db2??

2015-02-23 Thread Colwell, William F.
Rick,

ask L1/L2 about how to make DB2 on AIX use TCP/IP to communicate with dsmserv.
AIX has a problem handling the massive amount of IPC processing which DB2
generates.

I learned of this recently when the rhel 6.6 kernel bug bit us.
The switch to TCP/IP is available on all platforms.  TSM will run slower, but at
least it will run.
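
For anyone searching the archives later, the change support walked us through
was roughly this (TSM 6.3 on AIX; the port number is just an example, so
confirm the exact steps with L1/L2 before touching production).  As the
instance owner, enable the DB2 TCP/IP listener:

db2set DB2COMM=TCPIP
db2 update dbm cfg using SVCENAME 51500

then point the server at the same port with one line in dsmserv.opt:

DBMTCPPORT 51500

and restart the instance.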

Good luck!

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Rhodes, Richard L.
Sent: Friday, February 13, 2015 9:49 AM
To: ADSM-L@VM.MARIST.EDU
Subject: v6.3.5 hung db2??

Two days ago we upgraded one of our TSM instances to v6.3.5 (from v6.3.4).
This is our first v6.3.5 instance.   It runs on an AIX server.

Last night at 19:32 it looks like DB2 went into some kind of a loop.
The instance became unresponsive.  Dsmadmc cmds hung (didn't error, just hung).
The dsmserv process was getting almost no cpu, while db2sysc was running the box
at 65-70% but had no disk I/O.  I killed dsmserv, but db2 didn't go down.
I tried db2stop but it did nothing.  Finally rebooted to get everything up.
The actlog shows no nasty errors.

Just wondering if anyone else has had a runaway db2.

Thanks

Rick









Re: TSM 7.1 usage of volumes for dedupe

2014-10-30 Thread Colwell, William F.
Hi Martha,

I am glad this was useful to you.

I have not reported this as a bug; I expect they would say working-as-designed
and suggest submitting an RFE.

- bill

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Martha 
M McConaghy
Sent: Thursday, October 30, 2014 10:09 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM 7.1 usage of volumes for dedupe

Bill,

I just wanted to let you know how much this information helped.   I was
able to clear out all the problem volumes and have removed the full LUNs
from the devclass until there is enough space on them to be used again.

This situation really seems strange to me.  Why has TSM not been updated
to handle the out of space condition better?  If it has a command that
shows how much space is left on the LUN, why can't TSM understand it is
time to stop allocating volumes on it?  Forcing admins to do manual
clean up like this just to keep things healthy seems inconsistent with
how the rest of TSM functions.

Has anyone ever reported this as a bug?

Martha

On 10/22/2014 2:38 PM, Colwell, William F. wrote:
 Hi Martha,

 I see this situation occur when a filesystem gets almost completely full.

 Do 'q dirsp dev-class-name' to check for nearly full filesystems.

 The server doesn't fence off a filesystem like this, instead it keeps
 hammering on it, allocating new volumes.  When it tries to write to a volume
 and gets an immediate out-of-space error, it marks the volume full so it won't
 try to use it again.

 I run this sql to find such volumes and delete them -

 select 'del v '||cast(volume_name as char(40)), cast(stgpool_name as 
 char(30)), last_write_date -
   from volumes where upper(status) = 'FULL' and pct_utilized = 0 and 
 pct_reclaim = 0 order by 2, 3

 You should remove such filesystems from the devclass directory list until
 reclaim has emptied them a little bit.

 Hope this helps,

 Bill Colwell
 Draper Lab



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
 Martha M McConaghy
 Sent: Wednesday, October 22, 2014 2:23 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: TSM 7.1 usage of volumes for dedupe

 Interesting.  Seems very similar, except the status of these volumes is
 FULL, not EMPTY.  However, the %reclaimable space is 0.0.

 I think this is a bug.  I would expect the volume to leave the pool once
 it is reclaimed.  It would be OK with me if it did not. However, since
 the status is FULL, it will never be reused. That seems wrong.  If it
 is going to remain attached to the dedupepool, the status should convert
 to EMPTY so the file can be reused.  Or, go away altogether so the space
 can be reclaimed and reused.

 In looking at the filesystem on the Linux side (sorry I didn't mention
 this is running on RHEL), the file exists on /data0, but with no size:

 [urmm@tsmserver data0]$ ls -l *d57*
 -rw------- 1 tsminst1 tsmsrvrs 0 Oct 10 20:22 0d57.bfs

 /data0 is 100% utilized, so this file can never grow.  Seems like it
 should get cleaned up rather than continue to exist.

 Martha

 On 10/22/2014 1:58 PM, Erwann SIMON wrote:
 hi Martha,

 See if this can apply :
 www-01.ibm.com/support/docview.wss?uid=swg21685554

 Note that I had a situation where Q CONT returned that the volume was empty 
 but it wasn't in reality, since it was impossible to delete it (without 
 discarding data). A select statement against the contents showed some files. 
 Unfortunately, I don't know how this story finished...

 --
 Martha McConaghy
 Marist: System Architect/Technical Lead
 SHARE: Director of Operations
 Marist College IT
 Poughkeepsie, NY  12601
 



Re: TSM 7.1 usage of volumes for dedupe

2014-10-22 Thread Colwell, William F.
Hi Martha,

I see this situation occur when a filesystem gets almost completely full.

Do 'q dirsp dev-class-name' to check for nearly full filesystems.

The server doesn't fence off a filesystem like this, instead it keeps
hammering on it, allocating new volumes.  When it tries to write to a volume
and gets an immediate out-of-space error, it marks the volume full so it won't
try to use it again.

I run this sql to find such volumes and delete them -

select 'del v '||cast(volume_name as char(40)), cast(stgpool_name as char(30)), 
last_write_date -
 from volumes where upper(status) = 'FULL' and pct_utilized = 0 and pct_reclaim 
= 0 order by 2, 3

You should remove such filesystems from the devclass directory list until
reclaim has emptied them a little bit.
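
If you want to run the whole list in one shot, something like this works
(a sketch - substitute your own admin id and password, and review the
generated macro before running it):

dsmadmc -id=admin -password=xxx -dataonly=yes "select 'del v '||cast(volume_name as char(40)) from volumes where upper(status) = 'FULL' and pct_utilized = 0 and pct_reclaim = 0" > delvols.mac
dsmadmc -id=admin -password=xxx -itemcommit macro delvols.mac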

Hope this helps,

Bill Colwell
Draper Lab



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Martha 
M McConaghy
Sent: Wednesday, October 22, 2014 2:23 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM 7.1 usage of volumes for dedupe

Interesting.  Seems very similar, except the status of these volumes is
FULL, not EMPTY.  However, the %reclaimable space is 0.0.

I think this is a bug.  I would expect the volume to leave the pool once
it is reclaimed.  It would be OK with me if it did not. However, since
the status is FULL, it will never be reused. That seems wrong.  If it
is going to remain attached to the dedupepool, the status should convert
to EMPTY so the file can be reused.  Or, go away altogether so the space
can be reclaimed and reused.

In looking at the filesystem on the Linux side (sorry I didn't mention
this is running on RHEL), the file exists on /data0, but with no size:

[urmm@tsmserver data0]$ ls -l *d57*
-rw------- 1 tsminst1 tsmsrvrs 0 Oct 10 20:22 0d57.bfs

/data0 is 100% utilized, so this file can never grow.  Seems like it
should get cleaned up rather than continue to exist.

Martha

On 10/22/2014 1:58 PM, Erwann SIMON wrote:
 hi Martha,

 See if this can apply :
 www-01.ibm.com/support/docview.wss?uid=swg21685554

 Note that I had a situation where Q CONT returned that the volume was empty 
 but it wasn't in reality, since it was impossible to delete it (without 
 discarding data). A select statement against the contents showed some files. 
 Unfortunately, I don't know how this story finished...


--
Martha McConaghy
Marist: System Architect/Technical Lead
SHARE: Director of Operations
Marist College IT
Poughkeepsie, NY  12601




Re: 7.1.1 / 4.1.1 Product documentation collections for download

2014-10-21 Thread Colwell, William F.
Hi Angela,

I downloaded the kc.zip file to a Linux server.  After unzipping, the /bin
directory doesn't have any .sh files, only .bat files, so I can't
run the kc as a local server.

Thanks,

Bill Colwell
Draper lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Angela 
Robertson
Sent: Monday, October 13, 2014 7:48 PM
To: ADSM-L@VM.MARIST.EDU
Subject: 7.1.1 / 4.1.1 Product documentation collections for download

The  downloadable IBM Knowledge Center contains the latest set of
information about Tivoli Storage Manager products.

You can download and display an instance of IBM Knowledge Center either
locally on a workstation, or from a server where it can be accessed by
others through a web address.

To download and run the Customer Installable IBM Knowledge Center, complete
the following steps:
   1.   Download the tsm711kc.zip file from the following location:
  ftp://public.dhe.ibm.com/software/products/TSM/current_kc/tsm711kc.zip
   2.   Extract the files to a location of your choice.
   3.   Read the Terms of Use statement (termsofuse.html), and the
  NOTICES.txt file in the KnowledgeCenter directory.
   4.   Review the procedures in the download_dir
  /KnowledgeCenter/knowledgecenter_instructions.html file.
   5.   Start IBM Knowledge Center by following the instructions in the
  knowledgecenter_instructions.html file.

Angela Robertson
IBM Software Group
Durham, NC 27703
aprob...@us.ibm.com





Re: TSM and VTL Deduplication

2014-06-12 Thread Colwell, William F.
IBM supplies a perl script to measure the cost of dedup.
See http://www-01.ibm.com/support/docview.wss?uid=swg21596944

I just ran it in an instance with an 800 GB db, here are the final summary 
lines -


Final Dedup and Database Impact Report


  Deduplication Database Totals
  -
  Total Dedup Chunks in DB   :  1171344436
  Average Dedup Chunk Size   :  447243.5

  Deduplication Impact to Database and Storage Pools
  ---
  Estimated DB Cost of Deduplication:  796.51 GB
  Total Storage Pool Savings:   230466.30 GB

That works out to ~3.5 GB per TB saved.
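(The arithmetic, for anyone checking: 230466.30 GB of pool savings is ~225 TB,
and 796.51 GB of database divided by 225 is ~3.5 GB of db per TB saved.)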

The db is not on SSD.  It is on a 6 disk raid 10 array internal on a Dell 
server.

Overall I am very happy with TSM dedup.

Thanks,

Bill Colwell
Draper lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Dan 
Haufer
Sent: Thursday, June 12, 2014 4:31 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM and VTL Deduplication

Yes, one of the two. If TSM deduplication is enabled and the target is a 
virtual tape, I doubt the VTL can deduplicate anything from the write data.


On Thu, 6/12/14, Ehresman,David E. deehr...@louisville.edu wrote:

 Subject: Re: [ADSM-L] TSM and VTL Deduplication
 To: ADSM-L@VM.MARIST.EDU
 Date: Thursday, June 12, 2014, 12:51 PM
 
 Unless you have a specific requirement, I would suggest you choose either TSM
 dedup to disk or go straight to virtual tape.  There is not usually a need to
 do both.

 David

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Dan Haufer
 Sent: Thursday, June 12, 2014 2:41 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] TSM and VTL Deduplication

 Thanks for all the answers.  So SSDs (looking at SSD caching) for the
 database storage, and 10GB per TB of total backup data on the safer side.

 On Thu, 6/12/14, Erwann Simon erwann.si...@free.fr wrote:

  Subject: Re: [ADSM-L] TSM and VTL Deduplication
  To: ADSM-L@VM.MARIST.EDU
  Date: Thursday, June 12, 2014, 8:47 AM

  Hi,

  I'd rather say 6 to 10 times, or 10 GB of DB for each 1 TB of data
  (native, not deduped) stored.

  --
  Best regards / Cordialement / مع تحياتي
  Erwann SIMON

  - Original message -
  From: Norman Gee norman@lc.ca.gov
  To: ADSM-L@VM.MARIST.EDU
  Sent: Thursday, June 12, 2014 16:55:29
  Subject: Re: [ADSM-L] TSM and VTL Deduplication

  Be prepared for your database size to double or triple if you are
  using TSM deduplication.

  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Prather, Wanda
  Sent: Thursday, June 12, 2014 7:15 AM
  To: ADSM-L@VM.MARIST.EDU
  Subject: Re: TSM and VTL Deduplication

  And if you are on the licensing-by-TB model, when it gets un-deduped
  (reduped, rehydrated, whatever), your costs go up!

  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Dan Haufer
  Sent: Thursday, June 12, 2014 9:48 AM
  To: ADSM-L@VM.MARIST.EDU
  Subject: Re: [ADSM-L] TSM and VTL Deduplication

  Understood.  Thanks!

  On Thu, 6/12/14, Ehresman,David E. deehr...@louisville.edu wrote:

   Subject: Re: [ADSM-L] TSM and VTL Deduplication
   To: ADSM-L@VM.MARIST.EDU
   Date: Thursday, June 12, 2014, 5:33 AM

   If TSM moves data from a (disk) dedup pool to tape, TSM has to
   un-dedup the data as it reads it
 


Re: Relabel/checkin Tapes marked as empty

2014-04-16 Thread Colwell, William F.
Are these tapes by any chance ejected from the library with the 'move media' 
command?



When I have tapes go empty which are racked out of the library, I run a script

to get the media tracking to forget about them and make them scratch.



the script -



tsm: LM2> run qscr mmi



Description

---

Do move media commands to bring volumes in from the rack



Line num Command

- -

1 move media $1 stg=$2 wherestate=mountablenotinlib wherestatus=$3





And sample executions -



run mmi 33L3 UNIX1_CPPT_ORA_PRD   EMPTY

run mmi 38L3 UNIX1_CPPT_ORA_PRD   EMPTY





These tapes can now be entered as scratch.
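
If you want to recreate the script, the definition is a one-liner (sketch;
add an upd script line per extra command if yours has more than one):

def script mmi "move media $1 stg=$2 wherestate=mountablenotinlib wherestatus=$3" desc="Do move media commands to bring volumes in from the rack"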



Bill Colwell

Draper Lab



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lamb, 
Charles P.
Sent: Wednesday, April 16, 2014 2:14 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Relabel/checkin Tapes marked as empty



Here is what we use -



LABEL  LIBVOLUME 3584lib checkin=scr overwrite=yes search=bulk  
labelsource=barcode



-Original Message-

From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Sims, 
Richard B

Sent: Wednesday, April 16, 2014 1:00 PM

To: ADSM-L@VM.MARIST.EDU

Subject: Re: [ADSM-L] Relabel/checkin Tapes marked as empty



On Apr 16, 2014, at 12:15 PM, Nick Laflamme n...@laflamme.us wrote:



 I'll bet today's lunch money that what's going on is that these aren't

 scratch volumes; they're volumes that were assigned to the pool with a

 DEF VOLUME command.



But then the volumes would not show as 'Scratch Volume?: Yes'



Leonard needs to do Activity Log research on the volumes in question.



   sorry about your lunch,



Richard Sims


Re: Moving the TSM DB2 database (AIX)

2014-03-27 Thread Colwell, William F.
Hi Kevin,

glad to be of help.

If you haven't seen it yet, you can setup schedules in the netapp cmd line
admin windows (maybe in a gui too, but I wouldn't know about that) for doing 
reallocates.
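
From memory, the 7-mode commands were roughly these (a sketch - I no longer
have the netapp to verify against, so check the syntax before trusting it):

reallocate measure /vol/tsmdb
reallocate start -f /vol/tsmdb
reallocate schedule -s "0 23 * 6" /vol/tsmdb

where /vol/tsmdb stands in for whatever volume holds the database files.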

I looked for a script I made to do this, but I guess I deleted it when I got off
the netapp.

For comparison, and future planning, my databases are on raid 10 arrays, with 
15k 600 GB sas disks.

For a db on a 2 x 7 array, here are 5 dbb backup points.  The array is san 
attached.
I use 2 db streams, the output files are on sata raid.

Activity       Start Time   End Time     Elapsed (hh:mm:ss)     Gigs
-------------  -----------  -----------  ------------------  -------
FULL_DBBACKUP  02-23-12.31  02-23-17.46            05:14:49  2338.08
FULL_DBBACKUP  03-02-09.38  03-02-13.02            03:24:30  2335.73
FULL_DBBACKUP  03-09-10.48  03-09-15.33            04:44:29  2437.75
FULL_DBBACKUP  03-16-11.11  03-16-14.46            03:35:18  2371.32
FULL_DBBACKUP  03-23-11.19  03-23-15.31            04:12:48  2363.04

And for a 2 X 3 array internal to a Dell r910 using 1 stream to sata raid -

Activity   Start Time   End Time  Elapsed Min   Gigs
--    --
FULL_DBBACKUP  02-23-11.00  02-23-12.40 100.6 817.73
FULL_DBBACKUP  03-02-11.00  03-02-12.25  85.9 772.45
FULL_DBBACKUP  03-09-11.00  03-09-12.27  87.6 774.07
FULL_DBBACKUP  03-16-11.00  03-16-12.33  93.0 789.22
FULL_DBBACKUP  03-24-13.53  03-24-15.21  88.2 795.65

Good Luck!

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Kevin 
Kettner
Sent: Thursday, March 27, 2014 12:34 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Moving the TSM DB2 database (AIX)

Thanks a ton for that info!

This is hugely promising. My volumes are definitely fragmented. I have 5
200 GB vols that make up my DB volumes on my worst server. Four of them
were recommended for reallocation. I started a reallocate on one last
night and it's 80% done. I can already see a huge improvement in my DB
backup rate. I expect it will slow down to my normal rates when it hits
the next volume, but I can already see it's helping:

DB backup rate graph

The DB backup rate started out nearly twice as fast today.



On 3/26/2014 13:04, Colwell, William F. wrote:
 When I had TSM databases on Netapp - both v5 & v6 - I had to do frequent 
 netapp
 'reallocate' commands to get the physical order in the netapp to match
 the logical order of db2.

 The db2 backup is reading the database sequentially, but within the netapp it
 is completely out of order.

 Try doing ' wafl scan measure_layout' to get a measure of the disorder.

 Bill Colwell
 Draper Lab

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
 Kevin Kettner
 Sent: Wednesday, March 26, 2014 1:24 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: Moving the TSM DB2 database (AIX)

 That is what I'm leaning towards as well. I've got NetApp looking at the
 disk end to see if its getting hit hard. I also want to find out if the
 work load is heavier on reads or write (I'm guessing read) to know what
 sort of hardware fix is best for this, cache, flash, or more spindles, etc.

   From Wanda's email, they're using a DS3512. I wouldn't expect that to
 be much different, performance wise, than the NetApp 3160 that I'm
 using. That leads me to think that maybe it's not the disk afterall...

 On 3/26/2014 11:30, Ehresman,David E. wrote:
 Kevin,

 My gut reaction is that your disk drives can't feed the data fast enough.  
 If it were me, I would open up a PMR to find out what the real bottleneck is.

 David Ehresman
 University of Louisville.

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
 Kevin Kettner
 Sent: Wednesday, March 26, 2014 11:34 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Moving the TSM DB2 database (AIX)

 On the original question, I have moved DBs on AIX with LVM several times
 with good success. The only real concern is the performance impact. The
 benefit over mirroring is you can do it with no outage at all.

 I have 3 servers with similar sized DBs on AIX with NetApp SAS disk on
 the back end, backing up to IBM 3592 drives, and my DB backups take 4-6
 hours. I'm on 6.3.4 now and I've tried using more streams but that has
 not made much difference.

 My smallest production DB is around 200 GB and it takes about an hour to
 backup.

 I wonder what's going wrong. Do you have any advice?

 Thanks!

 On 3/21/2014 15:42, Prather, Wanda wrote:
 Was just thinking the same -
 It's only the conversion from V5

Re: Moving the TSM DB2 database (AIX)

2014-03-26 Thread Colwell, William F.
When I had TSM databases on Netapp - both v5 & v6 - I had to do frequent netapp
'reallocate' commands to get the physical order in the netapp to match
the logical order of db2.  

The db2 backup is reading the database sequentially, but within the netapp it
is completely out of order.

Try doing ' wafl scan measure_layout' to get a measure of the disorder.
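
(wafl commands need the advanced privilege level, roughly:

priv set advanced
wafl scan measure_layout tsmdb
priv set admin

with tsmdb standing in for your db volume name; as I recall, the layout ratio
is reported in the console log when the scan finishes.)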

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Kevin 
Kettner
Sent: Wednesday, March 26, 2014 1:24 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Moving the TSM DB2 database (AIX)

That is what I'm leaning towards as well. I've got NetApp looking at the
disk end to see if its getting hit hard. I also want to find out if the
work load is heavier on reads or write (I'm guessing read) to know what
sort of hardware fix is best for this, cache, flash, or more spindles, etc.

 From Wanda's email, they're using a DS3512. I wouldn't expect that to
be much different, performance wise, than the NetApp 3160 that I'm
using. That leads me to think that maybe it's not the disk afterall...

On 3/26/2014 11:30, Ehresman,David E. wrote:
 Kevin,

 My gut reaction is that your disk drives can't feed the data fast enough.  If 
 it were me, I would open up a PMR to find out what the real bottleneck is.

 David Ehresman
 University of Louisville.

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
 Kevin Kettner
 Sent: Wednesday, March 26, 2014 11:34 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Moving the TSM DB2 database (AIX)

 On the original question, I have moved DBs on AIX with LVM several times
 with good success. The only real concern is the performance impact. The
 benefit over mirroring is you can do it with no outage at all.

 I have 3 servers with similar sized DBs on AIX with NetApp SAS disk on
 the back end, backing up to IBM 3592 drives, and my DB backups take 4-6
 hours. I'm on 6.3.4 now and I've tried using more streams but that has
 not made much difference.

 My smallest production DB is around 200 GB and it takes about an hour to
 backup.

 I wonder what's going wrong. Do you have any advice?

 Thanks!

 On 3/21/2014 15:42, Prather, Wanda wrote:
 Was just thinking the same -
 It's only the conversion from V5 to V6 that takes forever.  Once you are 
 V6/DB2, DB backup-restore is fast again.

 I have TSM 6.3.4 on Windows, DB is 930G used, DS3512 disk, and it will back 
 up to LTO5 in 90 minutes if the server isn't doing a lot else at the time.  
 Restore takes maybe 15 minutes longer.

 You've got other issues you should address, if your DB backup is taking many 
 hours @ 300GB

 W


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
 Ehresman,David E.
 Sent: Friday, March 21, 2014 9:11 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Moving the TSM DB2 database (AIX)

 I've used migratepv to move oracle DBs around with no problems.  I would not 
 expect any issues with using LVM mirroring or migratepv to move the TSM DB.  
 That is what I would do in your situation.

 But your comments about taking days to backup and restore your TSM DB 
 worries me.  How long does it take to backup your DB?  I have a 600G 
 allocated/415G used TSM DB.  It backs up in under an hour and restore time 
 is about the same.

 David Ehresman
 University of Louisville

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
 Roger Deschner
 Sent: Thursday, March 20, 2014 11:35 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Moving the TSM DB2 database (AIX)

 Now that TSM V5 is gone from our shop and we're all TSM V6.2, it's time to 
 move some things around. Such as the TSM DB2 database. The manual says to do 
 a full database backup and restore. That could take days of downtime with 
 our 150-300GB databases, and a lot of angst, so that is not really 
 acceptable.

 What I'm planning to do instead, is what I've always done on AIX. It's one 
 of the reasons I like AIX for hosting something like TSM. That is, to 
 basically walk the database over to the new location using AIX LVM 
 mirroring. All this with TSM up and running, albeit with a performance 
 impact. (It's Spring Break, so the performance impact is acceptable.) The 
 end result will be that the database has exactly the same Unix filesystem 
 names, path names, and file names as before, except that it will be on a 
 nice new faster disk subsystem.

 Other than the obvious performance impact while AIX LVM is doing the 
 mirroring, is there anything wrong with moving a TSM DB2 database by this 
 method? Anybody done this and had problems?

 Roger Deschner  University of Illinois at Chicago rog...@uic.edu
 ==I have not lost my mind -- it is backed up on tape somewhere.=


Re: Informal Poll: General question about use of policysets

2014-01-31 Thread Colwell, William F.
I have 2 policysets in each domain.  They are identical except for the 
copygroup destination
parameter.

This is the design I came up with to implement 6.1 with dedup.  Backups come 
into an ingest
pool on hi-speed disk (bkp_1a) while policyset set_a is active.  At 5 am, a 
schedule/script
activates policyset set_b so that sessions for the next 24 hours will write to 
pool bkp_1b.

After the switch, more schedules/scripts do id dup, copypooling and migration 
so that pool bkp_1a
is empty and ready for the next policyset flip.

That was the theory.  Unfortunately, the server resources couldn't keep up.  
And migrated volumes
weren't deleted because of the whole dereferenced chunk problem set.

I still do the policyset flip/flop (on 4 servers), but id dup runs continuously 
and 
copypooling runs many times a day.
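
The flip itself is just a pair of admin schedules per domain, along these
lines (a sketch for one domain - the domain and set names are examples, and
the two startdates are offset by a day so they alternate):

def sched flip_a type=administrative cmd="activate policyset prod_dom set_a" active=yes startdate=01/01/2014 starttime=05:00 period=2 perunits=days
def sched flip_b type=administrative cmd="activate policyset prod_dom set_b" active=yes startdate=01/02/2014 starttime=05:00 period=2 perunits=days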

Bill Colwell
Draper lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Prather, Wanda
Sent: Friday, January 31, 2014 4:22 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Informal Poll: General question about use of policysets

The TSM books show that you can have multiple policysets per domain.
I don't mean just the active vs inactive, but you can have multiple policysets 
like NORMAL, OFFHOUR, WEEKEND, within one domain, and switch them back and 
forth.

I've never done that, or had a reason to.  Seems inordinately confusing to me.
All my customers just have one policyset per domain, with the active and 
inactive copy.

The inactive is the one you update, then you validate it and activate it.

Can I get some feedback on what other people do?

Do you have just one unique policyset per domain?  Or what is your use case for 
having multiples?

Thank you!!!

Wanda




**Please note new office phone:
Wanda Prather  |  Senior Technical Specialist  | wanda.prat...@icfi.com  |  
www.icfi.comhttp://www.icfi.com | 410-868-4872 (m)
ICF International  | 7125 Thomas Edison Dr., Suite 100, Columbia, Md 
|443-718-4900 (o)


Re: How well do .pst files dedup?

2014-01-24 Thread Colwell, William F.
Wanda,

I tried deduping them and got < 50% savings.  I expected much more, thinking 
that from
one day to the next, a pst should be 99% the same.  I suspect that outlook makes
little updates all over the file which makes it hard for tsm to find duplicate 
chunks.

Since I only keep 3 versions, and they backup every day, I quickly realized it 
just isn't 
worth the extra server cycles.  And also, as we know, deleting deduped versions 
is 
expensive too.

Regards,

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Prather, Wanda
Sent: Friday, January 24, 2014 3:44 PM
To: ADSM-L@VM.MARIST.EDU
Subject: How well do .pst files dedup?

Anybody looked in detail at how well .pst files dedup with TSM - (client or 
server-end)?

**Please note new office phone:
Wanda Prather  |  Senior Technical Specialist  | wanda.prat...@icfi.com  |  
www.icfi.comhttp://www.icfi.com | 410-868-4872 (m)
ICF International  | 7125 Thomas Edison Dr., Suite 100, Columbia, Md 
|443-718-4900 (o)


Re: TSM 7.1

2014-01-17 Thread Colwell, William F.
Hello,

I don't know what the baseline for the claimed 10x improvement is.  I hope there
is an ATS webinar soon to explain it.

I did serious amounts of dedup on 6.1 servers.  They are now at 6.3.4.2+ and I 
don't
remember a big improvement from the upgrade.

This url, http://www-01.ibm.com/support/docview.wss?uid=swg21452146 says that 
the
big tables and indexes related to dedup are put in separate tablespaces.  So I 
can
guess that if you can commit the disk resources to separate them there could be 
a big
improvement.  Wanda used to have a sig line about i/o and it is still true.

If the separation of tables and indexes accounts for the bulk of the 10x, I 
don't know
how an upgrade from 6.3 will get the dedup performance improvement without some
downtime and direct db2 manipulation of the tables and indexes.

Regards,

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Prather, Wanda
Sent: Friday, January 17, 2014 1:29 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM 7.1

That's actually good to hear - did you indeed see significant dedup speed 
improvements in 6.3.4.200?

Someone on this list said 6.3.4.200 made it worse for them.

W

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Tristan Kelkermans
Sent: Friday, January 17, 2014 12:02 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM 7.1

Hi all,

Don't expect 10X improvement from 6.3.4.200; I guess it's pretty much the same
dedup speed between both versions:

[image: inline image 1]

___

Tristan KELKERMANS

Storage & Security Engineer
+ 33 (0)1 81 08 21 09 | Direct line
+ 33 (0)6 80 36 87 88 | Mobile
+ 33 (0)1 70 24 73 86 | Fax

ATOO SYSTEMES & SERVICES
9 bis rue du Général Leclerc - 91230 MONTGERON
www.atoosys.fr  |  www.tsmservice.fr


2014/1/17 Prather, Wanda wanda.prat...@icfi.com

 Tivoli has promised a 10X improvement in dedup speed.  (Yes, I've seen 
 that in writing.) Need it.  Want it.
 Would like to know if anybody is seeing it...

 We also need the 7.1 client for VSphere 5.5 support...

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf 
 Of Skylar Thompson
 Sent: Friday, January 17, 2014 9:49 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] TSM 7.1

 For those of you upgrading or looking at upgrading, what 
 features/fixes are motivating the decision? We'll probably sit at 
 v6.3.4 for now, so I'm mostly curious.

 Thanks,

 --
 -- Skylar Thompson (skyl...@u.washington.edu)
 -- Genome Sciences Department, System Administrator
 -- Foege Building S046, (206)-685-7354
 -- University of Washington School of Medicine



Re: Deduplication number of chunks waiting in queue continues to rise?

2013-12-20 Thread Colwell, William F.
Hi Wanda,

some quick rambling thoughts about dereferenced chunk cleanup.

Do you know about the 'show banner' command?  If IBM sends you an e-fix, this
will tell you what it is fixing.

tsm: x> show banner

* EFIX Cumulative level 6.3.4.207  *
* This is a Limited Availability TEMPORARY fix for *
* IC94121 - ANR2033E DEFINE ASSOCIATION: Command failed - lock con *
*   when def assoc immediately follows def sched.  *
* IC95890 - Allow numeric volser for zOS Media server volumes. *
* IC93279 - Redrive failed outbound replication connect requests.  *
* IC93850 - PAM authentication login protocol exchange failure *
* wi3187  - AUDIT LIBVOLUME new command*
* IC96637 - SERVER CAN HANG WHEN USING OPERATION CENTER*
* IC95938 - ANRD_2644193874 BFCHECKENDTOEND DURING RESTORE/RET *
* IC96993 - MOVE NODEDATA OPERATION MIGHT RESULT IN INVALID LINKS  *
* IC91138 - Enable audit volume to mark one more kind invalid link *
*   THE RESTARTED RESTORE OPERATION MAY BE SINGLE-THREADED *
*   Avoid restore stgpool linking to orphaned base chunks  *
* WI3236  - Oracle T1D tape drive support  *
* 94297   - Add a parameter DELETEALIASES for DELETE BITFILE utili *
* IC96462 - Mount failure retry for zOS Media server tape volumes. *
* IC96993 - SLOW DELETION OF DEREFERENCED DEDUPLICATED CHUNKS  *
* This cumulative efix server is based on code level   *
* made generally available with FixPack 6.3.4.200  *
*  *



I have 2 servers on 6342.006 and 2 on 6342.007.  I have .009 efix waiting to be 
installed
on my biggest, oldest, baddest server to fix the chunks in queue problem.

On 3 servers, the queue is down to 0, and they usually run without a problem.  
On the big bad
one, here are the stats -

tsm: WIN1> show dedupdeleteinfo
 Dedup Deletion General Status
 Number of worker threads  : 15
 Number of active worker threads   : 1
 Number of chunks waiting in queue : 11326513

Dedup Deletion Worker Info
Dedup deletion worker id: 1
Total chunks queued : 0
Total chunks deleted: 0
Deleting AF Entries?: Yes
In error state? : No

Worker thread 2 is not active

Worker thread 3 is not active

Worker thread 4 is not active

Worker thread 5 is not active

Worker thread 6 is not active

Worker thread 7 is not active

Worker thread 8 is not active

Worker thread 9 is not active

Worker thread 10 is not active

Worker thread 11 is not active

Worker thread 12 is not active

Worker thread 13 is not active

Worker thread 14 is not active

Worker thread 15 is not active

--
Total worker chunks queued : 0
Total worker chunks deleted: 0


The cleanup of reclaimed volumes is done by the thread which has 
' Deleting AF Entries?: Yes'.  The pending efix is supposed to
get this process to finish.  It never finishes on this server, something about 
a bad
access plan.

When I have a lot of volumes which are empty but won't delete, I generate
move data commands for them.  Move data to the same pool will manually do what
the chunk cleanup process is trying to do.
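
The move data list is generated with sql along these lines (a sketch - the
pool name is an example; review the output before running it as a macro):

select 'move data '||cast(volume_name as char(40)) from volumes where stgpool_name = 'BKP_2' and upper(status) = 'FULL' and pct_utilized = 0

Move data without a stg= parameter rewrites the volume into its own pool,
which, as noted above, manually does what the cleanup thread is trying to do.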

Regards,

Bill Colwell
Draper lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Prather, Wanda
Sent: Thursday, December 19, 2013 11:36 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Deduplication number of chunks waiting in queue continues to rise?

TSM 6.3.4.00 on Win2K8
Perhaps some of you that have dealt with the dedup chunking problem can 
enlighten me.
TSM/VE backs up to a dedup file pool, about 4 TB of changed blocks per day

I currently have more than 2 TB (yep, terabytes)  of volumes in that file pool 
that will not reclaim.
We were told by support that when you do:

SHOW DEDUPDELETEINFO
That the number of chunks waiting in queue has to go to zero for those 
volumes to reclaim.

(I know that there is a fix at 6.3.4.200 to improve the chunking process, but 
that has been APARed, and waiting on 6.3.4.300.)

I have shut down IDENTIFY DUPLICATES and reclamation for this pool.
There are no clients writing into the pool, we have redirected backups to a 
non-dedup pool for now to try and get this cleared up.
There is no client-side dedup here, only server side.
I've also set deduprequiresbackup to NO for now, although I hate doing that, to 
make sure that doesn't interfere with the reclaim process.

But SHOW DEDUPDELETEINFO shows that the number of chunks waiting in queue is 
*still* increasing.
So, WHAT is putting stuff on that dedup delete queue?
And how do I ever 

Re: SQL query in v6 server

2013-11-20 Thread Colwell, William F.
Hi Eric,

the timestampdiff function will do what you need.  This works -

select node_name, platform_name, date(lastacc_time) -
 from nodes -
  where cast(timestampdiff(16, char(current_timestamp - lastacc_time)) as decimal(4,1)) > 2

The first number in timestampdiff can be -

1 Fractions of a second 
2 Seconds 
4 Minutes 
8 Hours 
16 Days 
32 Weeks 
64 Months 
128 Quarters 
256 Years

For full details on this and other functions, download the db2 9.7 sql 
reference volume 1.
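
So your original statement, rewritten for v6, becomes something like this
(untested against your exact setup, but this is the shape of it):

select node_name, platform_name, date(lastacc_time) from nodes where timestampdiff(16, char(current_timestamp - lastacc_time)) >= 2 and contact like 'Component Team Linux%'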


Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Loon, 
EJ van - SPLXM
Sent: Wednesday, November 20, 2013 10:40 AM
To: ADSM-L@VM.MARIST.EDU
Subject: SQL query in v6 server

Hi TSM-ers!

We just migrated a second server to v6 and now I need to 'patch' TSM
Operational Reporter. Among others, the following SQL statement no
longer works:

 

select node_name, platform_name, date(lastacc_time) from nodes where
cast((current_timestamp-lastacc_time)days as decimal) >= 2 and contact
like 'Component Team Linux%%'

 

It must have something to do with the cast part, because when I leave
that out it works fine.

I have a hard time finding the correct information about rewriting your
SQL queries, so if somebody could help me out, I'll appreciate it!

Kind regards,

Eric van Loon

AF/KLM Storage Engineering





Re: TSM Dedup stgpool target

2013-11-18 Thread Colwell, William F.
Paul,

I describe my copypool setup in a previous reply, last Friday.
If you lost it somehow, it is on adsm.org.

But quickly, they are on virtual volumes.  I have never seen any issues
related to the primary pool volume size.

- bill

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Paul 
Zarnowski
Sent: Monday, November 18, 2013 9:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM Dedup stgpool target

One other question, if you don't mind Bill:  Do you have Copy Storage Pools?  
If so, are they on tape or file?  If tape, is the small volume size on the 
primary pool an issue?  I.e., does TSM optimize output tape mounts?

Thanks.
..Paul

At 05:48 PM 11/14/2013, Colwell, William F. wrote:
Paul,

I am using 4 GB volumes on the 15k disks (aka ingest pool).  Since each disk 
is ~576 GiB
and there are 16 disks assigned to this server, that's a lot of volumes!

On the sata based pools I am using 50 GiB volumes.

All volumes are scratch allocated not pre-allocated.

I know scratch volumes are supposed to perform less well, but I haven't heard 
how much less and I did ask.
I couldn't run the way I do and manage pre-allocation.  There are 2 very big 
and very busy instances on the
processor and both share all the filesystems.  And each instance has multiple 
storage hierarchies so
mapping out pre-allocation would be a nightmare.

thanks,

- bill

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Paul 
Zarnowski
Sent: Thursday, November 14, 2013 2:33 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM Dedup stgpool target

Hi Bill,

Can I ask what size volumes you use for the ingest pool (on 15k disks) and 
also on your 4TB sata pool?  I assume you are pre-allocating volumes and not 
using scratch?

Thanks.
..Paul

At 02:13 PM 11/14/2013, Colwell, William F. wrote:
Hi Sergio,

I faced the same questions 3 years ago and settled on the products from 
Nexsan (now owned by Imation) for
massive bulk storage.

You can get a 4u 60 drive head unit with 4TB sata disks (the E60 model), and 
later attach 2 60 drive expansion
units to it (the E60X model).

I have 3 head units now, not with the configuration above because they are 
older.

1 unit is direct attached with fiber and the other 2 are san attached.  I am 
planning to convert the
direct unit to san attached to facilitate a processor upgrade.

There are 2 server instances on the processor sharing the filesystems.  The 
OS is Linux rhel 5.

All volumes are scratch allocated.

The backups first land on non raid 15k 600GB disks in an Infortrend device.  
The copypooling is done from there
and also the identify processing.  Then they are migrated to the Nexsan based 
storagepools.

There is also a tape library.  Really big files are excluded from dedup via 
the stgpool MAXSIZE parameter and
land on a separate pool on the Nexsan storage which then migrates to tape.

Hope this helps,

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Sergio O. Fuentes
Sent: Wednesday, November 13, 2013 10:32 AM
To: ADSM-L@VM.MARIST.EDU
Subject: TSM Dedup stgpool target

In an earlier thread, I polled this group on whether people recommend going 
with an array-based dedup solution or doing a TSM dedup solution.  Well, the 
answers came back mixed, obviously with an 'It depends'-type clause.

So, moving on...  assuming that I'm using TSM dedup, what sort of target 
arrays are people putting behind their TSM servers.   Assume here, also, that 
you'll be having multiple TSM servers,  another backup product, *coughveeam 
and potentially having to do backup stgpools on the dedup stgpools.  I ask 
because I've been barking up the mid-tier storage array market as our 
potential disk based backup target simply because of the combination of cost, 
performance, and scalability.  I'd prefer something that is dense I.e. more 
capacity less footprint and can scale up to 400TB.  It seems like vendors get 
disappointed when you're asking for a 400TB array with just SATA disk simply 
for backup targets.  None of that fancy array intelligence like auto-tiering, 
large caches, replication, dedup, etc.. is required.

Is there another storage market I should be looking at, I.e. really dumb raid 
arrays, direct attached, NAS, etc...

Any feedback is appreciated, even the 'it depends'-type.

Thanks!
Sergio


--
Paul ZarnowskiPh: 607-255-4757
Manager of Storage Services   Fx: 607-255-8521
IT at Cornell / InfrastructureEm: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


--
Paul ZarnowskiPh: 607-255-4757
Manager of Storage Services   Fx: 607-255-8521
IT at Cornell / InfrastructureEm: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: TSM Dedup stgpool target

2013-11-18 Thread Colwell, William F.
Paul,

the virtual volumes land on a disk buffer in the target server.
Actually there are 3 filesystems of raid 5 sata which are round
robin'ed as the target; a script updates the devclass directory
at midnight.  Then the previous day's virtual volumes are migrated to
tape.  My intent is to minimize head contention and let ingest and migration
proceed independently.
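
(The midnight script is nothing fancy.  Three admin schedules on the target
server would do the same job - the devclass and directory names here are made
up:

def sched vvbuf1 type=administrative cmd="upd devclass vvfile directory=/tsmbuf1" active=yes starttime=00:00 period=3 perunits=days

plus two more, startdates offset by a day, pointing at /tsmbuf2 and /tsmbuf3.)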

The BA STG processes on the source move from one primary volume to the next
without ending the session with the VV server.  I don't think this would
be any different if the target server wrote directly on to tape.

The bkp_1[A|B] pools are identical and serve the same purpose.  But they are 
targeted by different policysets in each domain.  Every day I flip flop them so 
that 
Id dup and BA STG and migration can be done isolated from new backups coming in.

This was the design I came up with for an early implementation of V6.1 servers
doing lots of dedup.  The problem to be solved is to get everything dedup'ed on 
the high
speed disk before the files migrated to the slower sata disks.  This realizes
the space saving as soon as possible and I don't have to do reclaim of large
volumes on slow disk to save space.

I gave a presentation at Pulse 2011 about my early experiences.  I can send you 
the
PowerPoint if you like.

Thanks,

- bill

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Paul 
Zarnowski
Sent: Monday, November 18, 2013 3:54 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM Dedup stgpool target

Bill,

Are your virtual volumes purely on tape on the target server, or are they 
fronted by some sort of disk storage pool?  I am trying to understand whether a 
small volume size for the ingest dedup file pool will cause a lot of tape 
mounts on the copy storage pool during a backup storage pool process, or 
whether TSM is smart enough to optimize output tape volume mounts.  If your 
virtual volumes are fronted by some sort of disk, or if you have a plethora of 
tape drives, you might not notice this even if TSM was dumb in this regard.  Do 
you use collocation (in order to collocate volumes in your copy storage pool)?  
If not, that could be another reason why you wouldn't notice it.

One other question, if I may.  Why do you have a BKP_1A and BKP_1B storage 
pool?  They seem to have the same attributes and both funnel into BKP_2.

I'm sure you've put a lot of thought into this, but I'm not sure I'm getting 
everything you did, and why.

..Paul



At 10:24 AM 11/18/2013, Colwell, William F. wrote:
Paul,

I describe my copypool setup in a previous reply, last Friday.
If you lost it somehow, it is on adsm.org.

But quickly, they are on virtual volumes.  I have never seen any issues
related to the primary pool volume size.

- bill

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Paul 
Zarnowski
Sent: Monday, November 18, 2013 9:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM Dedup stgpool target

One other question, if you don't mind Bill:  Do you have Copy Storage Pools?  
If so, are they on tape or file?  If tape, is the small volume size on the 
primary pool an issue?  I.e., does TSM optimize output tape mounts?

Thanks.
..Paul

At 05:48 PM 11/14/2013, Colwell, William F. wrote:
Paul,

I am using 4 GB volumes on the 15k disks (aka ingest pool).  Since each disk 
is ~576 GiB
and there are 16 disks assigned to this server, that's a lot of volumes!

On the sata based pools I am using 50 GiB volumes.

All volumes are scratch allocated not pre-allocated.

I know scratch volumes are supposed to perform less well, but I haven't heard 
how much less and I did ask.
I couldn't run the way I do and manage pre-allocation.  There are 2 very big 
and very busy instances on the
processor and both share all the filesystems.  And each instance has multiple 
storage hierarchies so
mapping out pre-allocation would be a nightmare.

thanks,

- bill

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Paul 
Zarnowski
Sent: Thursday, November 14, 2013 2:33 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM Dedup stgpool target

Hi Bill,

Can I ask what size volumes you use for the ingest pool (on 15k disks) and 
also on your 4TB sata pool?  I assume you are pre-allocating volumes and not 
using scratch?

Thanks.
..Paul

At 02:13 PM 11/14/2013, Colwell, William F. wrote:
Hi Sergio,

I faced the same questions 3 years ago and settled on the products from 
Nexsan (now owned by Imation) for
massive bulk storage.

You can get a 4u 60 drive head unit with 4TB sata disks (the E60 model), and 
later attach 2 60 drive expansion
units to it (the E60X model).

I have 3 head units now, not with the configuration above because they are 
older.

1 unit is direct attached with fiber and the other 2 are san attached.  I am 
planning to convert the
direct unit to san attached to facilitate a processor upgrade.

There are 2 server instances

Re: TSM Dedup stgpool target

2013-11-15 Thread Colwell, William F.
Hi Sergio,

my first server started at 6.1 so it was all server side dedup.  I have not let 
any
of its clients do client side.  The separation based on maxsize is working as 
designed.

My 2nd server started at 6.3 and I do use client side.  The clients do not 
react well
when a file bigger than the maxsize needs to be backed up.  It gets backed up 
but the client
does not reset for subsequent files which are under the maxsize.  I have 
adjusted to this by making nextpools for the unlimited pool which recreate the maxsize 
separation during migration
of the unlimited maxsize pool.  Here is a script output of the storagepools in 
one of
the newer servers -

Name    Numscr  Device  PoolSzGb  PctUtil  Migpr  Next    MaxSz
------  ------  ------  --------  -------  -----  ------  -----
BKP_1A      28  DD_L1     2094.6      5.1      4  BKP_2    1.00
BKP_1B       6  DD_L1     2003.0      0.8      4  BKP_2    1.00
BKP_2      405  DD_L2    46062.9     52.3      1  BKP_3    1.00
BKP_3        8  NDD_L3   22279.4      1.5      1  BKP_3A
BKP_3A       0  DD_L2        0.0      0.0      1  BKP_3B   1.00
BKP_3B     121  NDD_L3   29347.5     25.2      1  BKP_4
BKP_4        0  LTO5A        0.0      0.0      1

BKP_3 is unlimited but when it migrates the files separate into bkp_3a with a 
maxsize of 1 GB and
bkp_3b which is unlimited.  The reclaim target pool for bkp_3a is bkp_2, so 
that gets all the files
I intended to dedup back together.
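
In command terms the separation is just the maxsize/nextstgpool attributes,
e.g. (a sketch using the pool names above):

upd stgpool bkp_3 nextstgpool=bkp_3a
upd stgpool bkp_3a maxsize=1G nextstgpool=bkp_3b

so migration out of bkp_3 lands files of 1 GB or less in bkp_3a and lets the
big ones fall through to bkp_3b.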

I reported the issue to IBM and I think it will be fixed in 6.3.5; I don't know 
if it is
in 7.1, but it should be.

Copypools go to virtual volumes hosted by a small server with a small tape 
library.
Since the volumes are never marked off-site, reclamation doesn't recreate the 
unexpired
files from the dedup'ed primary pools.  And I wouldn't want this anyway.  The 
point
of the deduprequiresbackup server parameter is to have a version of the file in 
its
original never-ripped-apart state.  I have developed a process to reclaim the 
copypool
volumes in time order because they are really stored on racked tapes.  The 
reclaim command
would jump all over the range of volumes causing constant requests for tapes to 
be entered.
I don't actually do a reclaim process, instead I issue move data commands in 
the order
that the volumes were created.


Thanks,

- bill

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Sergio 
O. Fuentes
Sent: Friday, November 15, 2013 12:15 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM Dedup stgpool target

Bill,

Thanks for info.  Just curious, are you utilizing source-side dedupe or
relying on the TSM identify to identify all your duplicates
(post-process)? How does the maxsize parameter interact with source-side
dedup?  I'll have to look that up.

Eventually you have to reclaim your copy pools and based on your
hierarchies, it looks like reclamation would be feeding off from the large
4TB drives.  Have you had issues reclaiming from those pools?

Thanks!
Sergio


Re: TSM Dedup stgpool target

2013-11-14 Thread Colwell, William F.
Hi Sergio,

I faced the same questions 3 years ago and settled on the products from Nexsan 
(now owned by Imation) for
massive bulk storage.

You can get a 4u 60 drive head unit with 4TB sata disks (the E60 model), and 
later attach 2 60 drive expansion 
units to it (the E60X model).

I have 3 head units now, not with the configuration above because they are 
older.

1 unit is direct attached with fiber and the other 2 are san attached.  I am 
planning to convert the
direct unit to san attached to facilitate a processor upgrade.

There are 2 server instances on the processor sharing the filesystems.  The OS 
is Linux rhel 5.

All volumes are scratch allocated. 

The backups first land on non raid 15k 600GB disks in an Infortrend device.  
The copypooling is done from there
and also the identify processing.  Then they are migrated to the Nexsan based 
storagepools.

There is also a tape library.  Really big files are excluded from dedup via the 
stgpool MAXSIZE parameter and
land on a separate pool on the Nexsan storage which then migrates to tape.

Hope this helps,

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Sergio 
O. Fuentes
Sent: Wednesday, November 13, 2013 10:32 AM
To: ADSM-L@VM.MARIST.EDU
Subject: TSM Dedup stgpool target

In an earlier thread, I polled this group on whether people recommend going 
with an array-based dedup solution or doing a TSM dedup solution.  Well, the 
answers came back mixed, obviously with an 'It depends'-type clause.

So, moving on...  assuming that I'm using TSM dedup, what sort of target arrays 
are people putting behind their TSM servers.   Assume here, also, that you'll 
be having multiple TSM servers,  another backup product, *coughveeam and 
potentially having to do backup stgpools on the dedup stgpools.  I ask because 
I've been barking up the mid-tier storage array market as our potential disk 
based backup target simply because of the combination of cost, performance, and 
scalability.  I'd prefer something that is dense I.e. more capacity less 
footprint and can scale up to 400TB.  It seems like vendors get disappointed 
when you're asking for a 400TB array with just SATA disk simply for backup 
targets.  None of that fancy array intelligence like auto-tiering, large 
caches, replication, dedup, etc.. is required.

Is there another storage market I should be looking at, I.e. really dumb raid 
arrays, direct attached, NAS, etc...

Any feedback is appreciated, even the 'it depends'-type.

Thanks!
Sergio


Re: TSM Dedup stgpool target

2013-11-14 Thread Colwell, William F.
Paul,

I am using 4 GB volumes on the 15k disks (aka ingest pool).  Since each disk is 
~576 GiB
and there are 16 disks assigned to this server, that's a lot of volumes!

On the sata based pools I am using 50 GiB volumes.

All volumes are scratch allocated not pre-allocated.

I know scratch volumes are supposed to perform less well, but I haven't heard 
how much less and I did ask.
I couldn't run the way I do and manage pre-allocation.  There are 2 very big 
and very busy instances on the
processor and both share all the filesystems.  And each instance has multiple 
storage hierarchies so 
mapping out pre-allocation would be a nightmare.

thanks,

- bill

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Paul 
Zarnowski
Sent: Thursday, November 14, 2013 2:33 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM Dedup stgpool target

Hi Bill,

Can I ask what size volumes you use for the ingest pool (on 15k disks) and also 
on your 4TB sata pool?  I assume you are pre-allocating volumes and not using 
scratch?

Thanks.
..Paul

At 02:13 PM 11/14/2013, Colwell, William F. wrote:
Hi Sergio,

I faced the same questions 3 years ago and settled on the products from Nexsan 
(now owned by Imation) for
massive bulk storage.

You can get a 4u 60 drive head unit with 4TB sata disks (the E60 model), and 
later attach 2 60 drive expansion
units to it (the E60X model).

I have 3 head units now, not with the configuration above because they are 
older.

1 unit is direct attached with fiber and the other 2 are san attached.  I am 
planning to convert the
direct unit to san attached to facilitate a processor upgrade.

There are 2 server instances on the processor sharing the filesystems.  The OS 
is Linux rhel 5.

All volumes are scratch allocated.

The backups first land on non raid 15k 600GB disks in an Infortrend device.  
The copypooling is done from there
and also the identify processing.  Then they are migrated to the Nexsan based 
storagepools.

There is also a tape library.  Really big files are excluded from dedup via 
the stgpool MAXSIZE parameter and
land on a separate pool on the Nexsan storage which then migrates to tape.

Hope this helps,

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Sergio O. Fuentes
Sent: Wednesday, November 13, 2013 10:32 AM
To: ADSM-L@VM.MARIST.EDU
Subject: TSM Dedup stgpool target

In an earlier thread, I polled this group on whether people recommend going 
with an array-based dedup solution or doing a TSM dedup solution.  Well, the 
answers came back mixed, obviously with an 'It depends'-type clause.

So, moving on...  assuming that I'm using TSM dedup, what sort of target
arrays are people putting behind their TSM servers.   Assume here, also, that
you'll be having multiple TSM servers, another backup product (*cough* Veeam),
and potentially having to do backup stgpools on the dedup stgpools.  I ask
because I've been barking up the mid-tier storage array market as our
potential disk based backup target simply because of the combination of cost,
performance, and scalability.  I'd prefer something that is dense, i.e. more
capacity, less footprint, and can scale up to 400TB.  It seems like vendors get
disappointed when you're asking for a 400TB array with just SATA disk simply
for backup targets.  None of that fancy array intelligence like auto-tiering,
large caches, replication, dedup, etc. is required.

Is there another storage market I should be looking at, i.e. really dumb raid
arrays, direct attached, NAS, etc.?

Any feedback is appreciated, even the 'it depends'-type.

Thanks!
Sergio


--
Paul Zarnowski                   Ph: 607-255-4757
Manager of Storage Services      Fx: 607-255-8521
IT at Cornell / Infrastructure   Em: p...@cornell.edu
719 Rhodes Hall, Ithaca, NY 14853-3801


Re: What's New in TSM 7.1?

2013-10-30 Thread Colwell, William F.
Hi Nick,

there is a webinar next week from the Tivoli User Community.



What's New in Tivoli Storage Manager V7.1

November 7, 2013 at 11:00 AM, ET USA

Join Ian T. Smith, Director of IBM Storage Software, to learn more about how 
Tivoli Storage Manager V7.1 
dramatically increases scalability and performance while providing backup 
infrastructure cost savings up to 38%.

You can register at 
http://tivoli-ug.org/tech-zones/storage-management/c/e/912.aspx

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Nick 
Laflamme
Sent: Wednesday, October 30, 2013 1:39 PM
To: ADSM-L@VM.MARIST.EDU
Subject: What's New in TSM 7.1?

It's surprisingly hard to find a "What's New in TSM 7.1" page or list on
IBM's web site. It's not in the TSM Wiki, and it's not on the product pages
that I can find, although the data sheets refer to 7.1.

I'll muddle through, but this shouldn't be hard, should it?

(Y'all heard that they announced TSM 7.1, right?)

Nick


Re: V6.2.5 to V6.3.4.200 Linux Server Upgrade

2013-10-30 Thread Colwell, William F.
Hi Zoltan,

when I went to 6.3.4.0 from 6.3.somewhere-lower, an index reorg started in all 
my servers.
The reorg was of a big table involved in dedup.  It caused the active log
to fill up and all the servers crashed more than once.

I opened a pmr;  IBM was aware of the problem, see 
http://www-01.ibm.com/support/docview.wss?uid=swg1IC91190

To fix it, I had to max out the active log at 128GB, and stop all the other big
log generators like expiration, reclaim, migration, id dup.  Then the reorg had
enough log to finish.
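For reference, the active log size is set in dsmserv.opt and picked up at the
next server restart; a sketch of maxing it out the way I describe (the
directory is hypothetical; 131072 MB = 128 GB is the ceiling, if I remember
the 6.3 limit right):

  ACTIVELOGSIZE 131072
  ACTIVELOGDIRECTORY /tsmlog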

When it was all done, there was a side benefit.  These indexes had gotten 
really big.
After the reorg there was a lot of freespace in the db.  For example -

tsm: q db f=d

Database Name: TSMDB1
...
  Total Pages: 49,053,700
 Usable Pages: 49,053,500
   Used Pages: 22,947,116
   Free Pages: 26,106,384

Good luck!

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Wednesday, October 30, 2013 10:30 AM
To: ADSM-L@VM.MARIST.EDU
Subject: V6.2.5 to V6.3.4.200 Linux Server Upgrade

Just checking for any issues/gotchas in performing these upgrades.  I want
to get all my servers up to the latest.

From what I found in books online, this should be simple: 1) upload and
install base 6.3.4 (from Passport), 2) install the 6.3.4.200 patch,
3) re-activate licenses.  None of the mess of upgrading from 6.1 to 6.3.
Of course, I will backup the DB, devconfig, volhist.

Am I missing anything?  Anyone else do this on Linux?  Any war-stories?

--
*Zoltan Forray*
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html


Re: Deduplication/replication options

2013-07-24 Thread Colwell, William F.
Hi Norman,

that is incorrect.  IBM doesn't care what the hardware is when measuring used 
capacity
in the Suite for Unified Recovery licensing model.

A description of the measurement process and the sql to do it is at
http://www-01.ibm.com/support/docview.wss?uid=swg21500482
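The technote has the exact SQL; the general shape of the measurement is just a
sum over the occupancy table, something like this rough sketch (not the
official query):

  select sum(reporting_mb) / 1024 / 1024 as "Backup TB" from occupancy where type='Bkup'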

Thanks,

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Gee, 
Norman
Sent: Wednesday, July 24, 2013 11:29 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Deduplication/replication options

This why IBM is pushing their VTL solution.  IBM will only charge for the net 
amount using an all IBM solution.  At least that is what I was told.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Loon, 
EJ van - SPLXM
Sent: Tuesday, July 23, 2013 11:59 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Deduplication/replication options

Hi Sergio!
Another thing to take into consideration: if you have switched from PVU
licensing to sub-capacity licensing in the past: TSM sub-capacity
licensing is based on the amount of data stored in your primary pool. If
this data is stored on a de-duplicating storage device you will be
charged for the gross amount of data. If you are using TSM
de-duplication you will have to pay for the de-duplicated amount. This
will probably save you a lot of money...
Kind regards,
Eric van Loon
AF/KLM Storage Engineering

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Sergio O. Fuentes
Sent: dinsdag 23 juli 2013 19:20
To: ADSM-L@VM.MARIST.EDU
Subject: Deduplication/replication options

Hello all,

We're currently faced with a decision: go with a dedupe storage array or
with TSM dedupe for our backup storage targets.  There are some very
critical pros and cons going with one or the other.  For example, TSM
dedupe will reduce overall network throughput both for backups and
replication (source-side dedupe would be used).  A dedupe storage array
won't do that for backup, but it would be possible if we replicated to
an identical array (but TSM replication would be bandwidth intensive).
TSM dedupe might not scale as well and may necessitate more TSM servers
to distribute the load.  Overall, though, I think the cost of additional
servers is way less than what a native dedupe array would cost so I
don't think that's a big hit.

Replication is key. We have two datacenters where I would love it if TSM
replication could be used in order to quickly (still manually, though)
activate the replication server for production if necessary.  Having a
dedupe storage array kind of removes that option, unless we want to
replicate the whole rehydrated backup data via TSM.

I'm going on and on here, but has anybody had to make a decision to go
one way or the other? Would it make sense to do a hybrid deployment
(combination of TSM Dedupe and Array dedupe)?  Any thoughts or tales of
woes and forewarnings are appreciated.

Thanks!
Sergio





Re: Migrating last 5.5 server to 6.3.3 and new hardware

2013-06-20 Thread Colwell, William F.
Hi Zoltan,

regarding the upgrade of the 6.1 servers, if you are doing dedup, pay close
attention to apar IC90488 - 
http://www-01.ibm.com/support/docview.wss?uid=swg1IC90488

If you upgrade to 6.3.4.0, the problem is fixed, otherwise you will need to 
build
an index manually.

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Friday, June 14, 2013 10:17 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Migrating last 5.5 server to 6.3.3 and new hardware

I am scheduling to do the above and want to make sure I am not missing
something, especially since the upgrade guide is a little confusing.

Current config - RedHat 5.9, TSM 5.5.6.100
New config - RedHat 6.4, TSM 6.3.3.200

What I am proposing (all done on the new, virgin server)

1.  Install 5.5.7 server and server upgrade
2.  Backup DB, devconfig and volhist on 5.5 server and halt/shut down
3.  Restore 5.5 server DB backup using last devconfig and volhist
4.  Install 6.3.3
5.  Run dsmupgdx process which does the UPGRADEDB, EXTRACTDB and then
configs and loads the new DB
6.  Switch over network connections/config

The last version of the upgrade book I have has you going through the
upgradedb and extractdb manually, but then when you run dsmupgdx it does
it all over again...

Am I missing anything?

FWIW, next up is replacing my 6.1 serverfun.

--
*Zoltan Forray*
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html


Re: Collocation anomaly report

2013-04-17 Thread Colwell, William F.
Hi Grant,

I used to track collocation group spill overs when my servers were version 5
and used tapes.  Now I am on v6 and almost all disk, so I don't do that anymore.

Anyway, I used a mysql database on my desktop system.  I would dump data from
the tsm servers and load it into mysql where I could do manipulations not
allowed in the tsm servers.  Then I would run a report which showed
among other things volumes which have data from more than 1 collocation group.

The key bit of data from tsm is 'q nodedata *' which provides
almost all the same info as a select from volumeusage, but is much faster.

I can send you a sample report if you are interested.

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Grant 
Street
Sent: Tuesday, April 16, 2013 7:40 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Collocation anomaly report

Hello

We use collocation to segment data into collocation groups and nodes,
but recently found that collocation is on a best efforts basis and
will use any tape if there is not enough space.

I understand the theory behind this but it does not help with compliance
requirements. I know that we should make sure that there are always
enough free tapes, but without any way to know we have no proof that we
are in compliance.

I have created an RFE
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=33537
. Please vote if you agree:-)

While I wait a more than two years for this to be implemented, I was
wondering if anyone had a way to report on any Collocation anomalies?
I created the following but still not complete enough

select volume_name, count(volume_name) as Nodes_per_volume from
(select unique volume_name, volumeusage.node_name from volumeusage,
nodes where nodes.node_name = volumeusage.node_name and nodes.
collocgroup_name is null) group by (volume_name) having count
(volume_name) > 1

and

select unique volume_name, count(volume_name) as Groups_per_volume
from (select unique volume_name, collocgroup_name from volumeusage,
nodes where nodes.node_name = volumeusage.node_name) group by
(volume_name) having count(volume_name) > 1

Thanks in advance

Grant


Re: GPFS & TSM

2013-02-01 Thread Colwell, William F.
IBM has an apar open to fix a performance issue with mmbackup; see IC86976.

We are seriously looking at gpfs to replace our current file server on Netapp.

Prior to the v6 snapshot enhancements it would take 4 days to do a backup
of the fileserver via the b/a client over cifs.  With the snapshot enhancements
it takes about 3 hours most nights.  I wouldn't want to go back to walking
the tree every night even with a faster walker.  I hope the mmbackup will work.

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Friday, February 01, 2013 2:13 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: GPFS & TSM

mmbackup is part of GPFS, not TSM. There's some docs here:

http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=%2Fcom.ibm.cluster.gpfs.v3r5.0.7.gpfs100.doc%2Fbl1adm_mmbackup.htm

We experimented using mmbackup, but found it didn't scale well and had
some reliability issues. We ended up partitioning our data up into
separate directories, and backing those up as separate filespaces using
the standard dsmc. That worked much better.

-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine

On 02/ 1/13 07:48 AM, Rick Adamson wrote:
> Our AIX admins are building a multiple node GPFS cluster and mention the
> desire to use the mmbackup command to accomplish backups to TSM.
> In reading over the manual I find no mention of it, but searching IBM support
> there are plenty of docs which I just started reading.
> My question is what is the preferred way to do this amongst those who have
> dealt with it?
> Win 2008 with TSM Server 6.3
> AIX 7.1 with BA client 6.3
>
> All comments welcome
> Thanks !
> ~Rick



Re: Server media mount not possible

2012-10-12 Thread Colwell, William F.
Hi Geoff,

The messages manual says "Ensure that the MAXNUMMP (maximum number of mount
points) defined on the server for this node is greater than 0."

What is the maxnummp for the node?

In version 6, I set it for all nodes to 6.
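Checking and raising it is one command each; the node name is hypothetical:

  q node GEOFFS_NAS f=d          (look for "Maximum Mount Points Allowed")
  update node GEOFFS_NAS maxnummp=6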

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Geoff 
Gill
Sent: Thursday, October 11, 2012 8:49 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Server media mount not possible

I thought I'd throw this out there for ideas since I'm just being exposed to
NAS backups and restores. I believe I got a previous post yesterday figured out
and was finally able to move on to testing a restore. Now I'm getting a 
different error but I'm a bit confused as to why. I went through lots of config 
and permissions info already with a valuable source and got these errors once 
that was complete.

 Here's what I did once I was able to see the vol the data needs to be restored 
to.

1. open the web gui, login with an admin id to restore some NAS data.
2. Used point in time to define which Full would have this nodes data on. At 
this point the server mounted a tape to display the TOC.
3. I found the client machine in which I wanted to restore data and checked the 
C drive to restore.
4. Selected a vol from the dropdown to restore to and clicked restore.
5. Within a few seconds I received a popup with the error "Server media mount
not possible".

I watched this process from the TSM server and during step 2 I watched the 
server mount the tape to draw the TOC from. While the tape was still mounted, 
Idle, I clicked the restore and while waiting saw the tape was still idle and 
at that point received the error. Within a few seconds the tape dismounted, 
which makes me believe it was not requested for anything.

I tried this a second time with a completely different node and file and can 
see in the activity log it tried to mount a tape that was not available in the 
library. Interestingly enough I received the same error message but it I see in 
the activity log the tape it was looking for. I tried a second time to restore 
the node I really need data from. This time the TOC seemed to be still in 
memory so it did not mount the tape initially. When I actually started the 
restore I watched the server again and it never even received a request to 
mount a tape. No mount messages, no tape unavailable messages in the logs and 
it failed immediately also.

I'm confused as to why no tape mount request happened either time and it's more 
confusing because there WAS a tape mounted to build the TOC. I assume the rest 
of the data is on that tape, and it proves the system is actually mounting 
tapes, but even if the data spans multiple tapes there is no indication in the 
logs stating a tape is not available.

Anyone have any idea what else I can look at? I already have an open PMR which 
I will continue to work on tomorrow but I thought I'd throw this out there 
anyway.



Thank You
Geoff Gill


Re: TSM for SharePoint vs Docave version numbers

2012-08-24 Thread Colwell, William F.
Hi David,

last month IBM withdrew the TSM for SharePoint product.

- - -

Software withdrawal and support discontinuance:  IBM Tivoli Storage Manager for 
Microsoft SharePoint V6.x

http://www.ibm.com/vrm/newsletter_10577_10362_232814_email_DYN_1IN/BColwell13712838


At the same time, they announced reselling of the DocAve product.  A bullet
point in the announcement says DocAve version 6 can write its output to TSM.

- - -

AvePoint DocAve Backup and Restore offers a fast,
flexible, and intelligent backup solution for Microsoft SharePoint

http://www.ibm.com/vrm/newsletter_10577_10362_233613_email_DYN_1IN/BColwell13712838


Bill Colwell
Draper Lab



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Ehresman,David E.
Sent: Friday, August 24, 2012 9:05 AM
To: ADSM-L@VM.MARIST.EDU
Subject: TSM for SharePoint vs Docave version numbers

Can anyone tell me what the newest TSM for Sharepoint version/release is?  And 
what Docave version/release is distributed with that TSM for Sharepoint level?


Re: select or other command

2012-07-02 Thread Colwell, William F.
Hi Geoff,



there isn't one command to do this, but a select and then 1 or 2 show commands will
find the volume name.  Here is an example.

tsm: WIN2> select object_id from backups where node_name = 'A-NODE-NAME' and ll_name = 'OUTLOOK.PST'

   OBJECT_ID
------------
      590316

tsm: WIN2> show bfo 590316

Bitfile Object: 590316
  Active
**Archival Bitfile Entry
  Bitfile Type: PRIMARY  Storage Format: 22
  Bitfile Size: 10480218  Number of Segments: 1, flags: 0
  Storage Pool ID: 8  Volume ID: 127  Volume Name: /tsm_nx331/win2/007F.BFS

tsm: WIN2> show invo 590316

Inventory object 590316 of copy type Backup has attributes:
  NodeName: A-NODE-NAME, Filespace(1): \\a-node-name\c$,
  ObjName: \USERS\A-NODE-NAME\APPDATA\LOCAL\MICROSOFT\OUTLOOK\OUTLOOK.PST.
  hlID: 0292E2463557C3F93E61EB8A7821EA1BE364A261
  llID: 93534B77D3C2BC010BCA14FAF8EB123DC0ED5D7B
  Type: 2 (File)  MC: 18 (OUTLOOK3) CG: 1  Size: 32016384  HeaderSize: 0
  Active, Inserted 03/06/2012 11:25:28 AM (CUT Not Set)
  GroupMap   , bypassRecogToken NULL

Bitfile Object: 590316
  Active
**Archival Bitfile Entry
  Bitfile Type: PRIMARY  Storage Format: 22
  Bitfile Size: 10480218  Number of Segments: 1, flags: 0
  Storage Pool ID: 8  Volume ID: 127  Volume Name: /tsm_nx331/win2/007F.BFS

Sometimes the 'show bfo' is sufficient and sometimes the 'show invo' is required,
depending on whether the file is in an aggregate or by itself.

I have used this process too many times to find files which should not have been
backed up and thoroughly expunge all traces of them.

You will probably need to expand the select to distinguish active and inactive
versions and to improve performance on version 5 servers.  On version 6 servers
just supplying the ll_name is
very quick.
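As a sketch of that expansion - the state column is the part that matters, and
the node and file names are from the example above:

  select object_id, state from backups where node_name='A-NODE-NAME' and ll_name='OUTLOOK.PST' and state='ACTIVE_VERSION'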



Bill Colwell

Draper Lab



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Geoff 
Gill
Sent: Thursday, June 28, 2012 8:38 PM
To: ADSM-L@VM.MARIST.EDU
Subject: select or other command



Hello,

Over the years I've saved a lot of commands but never saw one for this. I'm not
sure if it is possible but I thought I'd ask to see if anyone has it.

Is it possible to create a select command where I input the name of a file that
was backed up and have the output tell me what tape(s) it would be on? I have a
command that will tell me all the tapes a node has data on, and another to spit
out the contents of a tape to a file, and from there search for the filename,
but I'm curious if there is an easier way.

Thank You
Geoff Gill


Re: More on Library Manager/Library Client

2012-06-07 Thread Colwell, William F.
Hi Geoff,

are you aware of the new command in 6.3, perform libaction?  I haven't run it 
yet, but the help
seems to be saying that if you have san discovery running, then just define the 
library and then
run the command and it will create all the drives and paths.
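Going by the help text, the sequence would look something like this; the device
special file is a guess for your system, and san discovery has to be on:

  define library LIB1 libtype=scsi
  perform libaction LIB1 action=define device=/dev/smc0 prefix=DRV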


I have 2 scripts which run in the library manager to give the status of things.

First, 'qdr'. It selects from the drives and paths table to get the status of 
each drive.
It also calls a script in the clients to get info about what they are doing 
with the drives.
Notice that it finds an error, a path is offline (and will be forever, the 
drive is busted and off
maintenance).

tsm: LIBRARY_MANAGER> run qdr

DRIVE    Online?  ELEMENT  Serial number  DEVICE         STATUS  VOLUME  USER
-------  -------  -------  -------------  -------------  ------  ------  -------------------------
DRIVE00  NO       500      MXP9A00Q1K     /dev/rmt/3mt   EMPTY
DRIVE01  YES      501      MXP9C03CFC     /dev/rmt/7mt   LOADED  004213  TSM_SERVER_2_FOR_DESKTOPS
DRIVE02  NO       502      HUL2L00510     /dev/rmt/4mt   EMPTY
DRIVE03  NO       503      MXP6K01S8B     /dev/rmt/8mt   EMPTY
DRIVE04  YES      504      MXP081372B     /dev/rmt/15mt  EMPTY
DRIVE05  YES      505      MXP07455EV     /dev/rmt/12mt  EMPTY
DRIVE06  YES      506      MXP0913CKG     /dev/rmt/13mt  EMPTY
DRIVE07  NO       507      HU10847L10     /dev/rmt/14mt  EMPTY

Paths offline
-------------
Path from server TSM_SERVER_2_FOR_DESKTOPS to drive DRIVE00 is not online
ANR1699I Resolved TSM1 to 4 server(s) - issuing command RUN tapes against server(s).
ANR1687I Output for command 'RUN tapes' issued against server TSM_SERVER_2_FOR_DESKTOPS follows:

process  Tape in use
-------  --------------------------------------------
EXP N    Current input volumes: 004213,(2689 Seconds)


Second, 'qlib'.  This just lists the counts of tapes and which client server 
owns them.  This is a dual media
setup, lto2 and lto3.

tsm: LIBRARY_MANAGER> run qlib

Free cells
----------
        23

STATUS   MEDIATYPE  Count of tapes
-------  ---------  --------------
Cleaner  387                     2
Private  394                    87
Private  417                   383
Scratch  394                   169
Scratch  417                    14

Server name                Tapes owned by server
-------------------------  ---------------------
LIBRARY_MANAGER                                4
TSM_SERVER_2_FOR_DESKTOPS                    183
TSM_SERVER_FOR_DESKTOPS                       58
TSM_SERVER_FOR_SERVERS                       225



I am willing to send you the scripts, or put them in a common download site.

Hope this helps,

Bill Colwell
Draper Lab






-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Geoff 
Gill
Sent: Thursday, June 07, 2012 11:25 AM
To: ADSM-L@VM.MARIST.EDU
Subject: More on Library Manager/Library Client

Hi again,

I'd like to go back to my post yesterday on this subject to see if there are 
folks out there using this functionality who would be willing to share their 
typical daily problems. This kind of goes along with the other post from 
yesterday when Nick brought up the Teaching Problem Solving issue. Since I'm 
what you would call one of those contractors now, and even my contacts are 
half a world away, I will be relied upon to work the other half of the 
schedule. I'm also asking because I have no experience with this configuration 
but will soon be involved in troubleshooting problems. While I'm sure I will 
initially look fairly ignorant on the subject, even though they knew up front 
my experience, I'd prefer to be proactive and find out as much as possible, 
from wherever possible, what folks consider to be their daily routine in this 
area and how they go about locating and fixing problems. I'm also wondering if 
the problems are the same sort of issues we see with a single TSM
server/library/drive(s) configuration. These won't be all
the questions so please add what you wish. As with all IBM publications what I 
see it the how to's related to initializing systems, at least I haven't seen 
any troubleshooting advice for those who are coming into new situations. Hence 
the reason I go here.


So here are some of my questions:

1. What are some of the daily issues you see related to the manager/client 
setup/communication if any and what commands do you 

Re: Thoughts and experiences on Technote: Local fix information for APAR IC82886

2012-06-05 Thread Colwell, William F.
Hi Sergio,

I ran the fix up procedure on 2 small 6.3.1 instances and at went well, no 
problems.
I didn't have to run anything more than the directions.

If you plan to do a lot of dedup running this is a good idea before your 
instances
gets too big.

I will not be running it on my 6.1 servers where the table in each instance has
more than 3 billion rows,
the outage would be too long.

The text says "Because of the required server outage and the fact that not
all server users experience the problem, the server does not perform this
reconfiguration automatically", so the 6.3.3 fix will not do this automatically.


Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Sergio 
O. Fuentes
Sent: Tuesday, June 05, 2012 4:24 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Thoughts and experiences on Technote: Local fix information for APAR 
IC82886

Hello all,

We have three TSM servers with versions between 6.3.0 to 6.3.1 range.  
According to the technote here:

http://www-304.ibm.com/support/docview.wss?uid=swg21592404&myns=swgtiv&mynp=OCSSGSG7&mync=E

it states that any server CREATED on a TSM version below the fix for APAR
IC82886 (6.3.3 is targeted) should apply the local fix regardless of whether
you're experiencing errant DB growth and utilization.   That would include
anyone on versions TSM 6.1, 6.2, and 6.3.2 or below.  Anyone out there have
experience in
implementing this fix?  Is the local fix complete, or is there something to do 
after all the reorgs, runstats and create index processes run to reclaim space? 
 Were there major outages for your environment?  Our largest DB is 250GB but 
the BF_AGGREGATED_BITFILES table is relatively small (about 10 million 
objects).   Do you recommend opening a PMR with IBM to hold my hand during the 
process?  Would the fix in 6.3.3 actually do the local fix for us?

Considering that EVERYONE who has TSM V6 has created a DB on a non-patched
version of TSM, everyone should probably consider running the local fix...
unless the 6.3.3 level fixes earlier versions of the DB.

Thoughts, experiences?  Thanks for your help!

Sergio
U. of Maryland


Re: Occupancy discrepancy between 6.1.5.10 and 6.2.3.0 server

2012-03-07 Thread Colwell, William F.
Zoltan,

occupancy numbers were made incorrect by various bugs in early 6.1 code,
see apar ic73005.  There is a special utility to fix the numbers, repair 
occupancy.
It was supposed to be in 6.1.5.10 but isn't, you need an e-fix for 6.1.5.102.

Of course, you can ignore the errors unless you are using the unified
recovery license.


Bill Colwell
Draper Lab



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray/AC/VCU
Sent: Wednesday, March 07, 2012 8:26 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Occupancy discrepancy between 6.1.5.10 and 6.2.3.0 server

Doing some reorganization, we recently moved (server-to-server export)
some nodes from a 6.1.5.10 server to a 6.2.3.0 server.  Now, the occupancy
numbers on the 6.2 (71mb) server are lower than the 6.1.5 (83mb) server,
even though the file/object counts are identical (static file system)?

All of the apars I found (so far) that address occupancy information are
at (supposedly) patch levels below these levels.

Anyone else see this kind of discrepancy?

Zoltan Forray
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html


Re: Re-activating files?

2012-03-06 Thread Colwell, William F.
Allen,

after the pending big backup is done, and if the copygroup keeps
enough versions, you can delete the active backups using the client.
This action will promote the most recent inactive backup back up to
the active state.

See the b/a client guide, 'delete backup', especially the note
under 'deltype'.
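A sketch of the client-side command - the filespec is hypothetical, and the
node must be registered with backdel=yes for the server to allow it:

  dsmc delete backup "D:\restored\*" -deltype=active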

Good Luck!

Bill Colwell
Draper Lab



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Karel 
Bos
Sent: Tuesday, March 06, 2012 12:21 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Re-activating files?

Of course there is, just restore the TSM database to a point before the data
was inactivated :)

Kind regards,
karel

Verzonden vanaf mijn HTC

- Oorspronkelijk bericht -
Van: Allen S. Rout a...@ufl.edu
Verzonden: dinsdag 6 maart 2012 15:09
Aan: ADSM-L@VM.MARIST.EDU
Onderwerp: [ADSM-L] Re-activating files?

I think I know the answer to this question, but I'm asking just in case
someone's got a trick...

I've got a customer, who's got a user who deleted a third of a TB of
Stuff.  He's completed his restore, but: between the deletion and the
restore an incr ran.  So as currently configured, next incr another
326GB, formally sworn to be 'the same' 326GB as was there last time,
will get re-backed up.

We're a chargeback service, so this represents not-trivial money.

So, the question:  Is there any way to 're-activate' inactive backups?

I'm not aware of any such, but I figured I'd ask the assemblage Just In
Case.


- Allen S. Rout


Re: Deployment Engine Failed to initialize

2012-02-28 Thread Colwell, William F.
I agree with Zoltan.  I have 2 very large instances at 6.1.5.10 in production
doing large amounts of dedup processing.  I am aware of the reorg issues but it
doesn't bother me, I am not interested in reorging the tables.  In any case
6.3 doesn't solve all the reorg issues, see apar ic81261 and flash 1580639.

Thanks,

Bill Colwell
Draper Lab



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray/AC/VCU
Sent: Tuesday, February 28, 2012 9:39 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Deployment Engine Failed to initialize



WOW - such harsh words about 6.1!   I don't agree..my main production
6.x system is 6.1.5.10 with no issues.  At least it hasn't had this wacky
problem my other 6.2.x servers have had with a DB backup randomly,
intermittently failing with no discernible reason.  (Note, there are docs
that say you really need to be at least at 6.1.4.1 to resolve some big
problems, especially with reorgs.)

Zoltan Forray
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html







From:   Prather, Wanda wprat...@icfi.com
To:     ADSM-L@VM.MARIST.EDU
Date:   02/28/2012 05:57 AM
Subject: Re: [ADSM-L] Deployment Engine Failed to initialize
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU







What Remco said.
Nothing Good will Happen on 6.1.
I finally got a production system stable on 6.1.3 by disabling reorgs, but
that was Windows.
I wouldn't even think of doing it on Linux.

W



-Original Message-

From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of

Remco Post

Sent: Monday, February 27, 2012 5:10 PM

To: ADSM-L@VM.MARIST.EDU

Subject: Re: [ADSM-L] Deployment Engine Failed to initialize



Hi,

do not use TSM server 6.1, not even if you have no other options. 6.1 does
not even begin to approach alpha quality software. IBM should never have
shipped it. I can't think of a single good reason to install 6.1. Go with
6.2.3 or newer or 6.3 something.

On 27 feb. 2012, at 22:57, George Huebschman wrote:

> We are getting the "Deployment Engine Failed to Initialize" when
> running ./install.bin for TSM Server 6.1 on a clean new RHEL server.
> I see lots of noise out here about this error, in and out of the TSM world.
>
> (We have another TSM installation of TSM 6.3 on a VM that isn't even
> QA as such, just a practice install.) Documentation specifies that
> there be 2GB available in the home directory.
> We only have 1.6 GB, BUT so does the successful 6.3 install.
> We had the error on the first and subsequent 3 attempts to run the
> install.  We did not find any .lock or .lck files.
> I am told that SELINUX is set to permissive.
>
> Except for the home directory, the other space guidelines were met.
> The install is being done as root.
>
> Looking at the TSM related posts about this issue, I didn't notice any
> for releases after 6.1.
> Is that because I didn't look hard enough?  Or, was documentation
> improved, or was a bug fixed?
> Should I talk someone into 6.2 to get past this?
>
> --
> George Huebschman
>
> "When you have a choice, spend money where you would prefer to work if
> you had NO choice."

--
Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622


Re: Deleting option sets

2011-12-15 Thread Colwell, William F.
Harold,

After recreating the optionset, remember to update the nodes to use it.
When you deleted it, the server implicitly updated the nodes to not use any
optionset.
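Something like this, with hypothetical node and set names; 'q node f=d' shows
the optionset a node is pointing at:

  update node NODE1 cloptset=COMMON_OPTS
  q node NODE1 f=d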

Bill Colwell
Draper lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Vandeventer, Harold [BS]
Sent: Thursday, December 15, 2011 4:26 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Deleting option sets

Thanks to everyone for the quick replies.

I'd never worked in a new domain setting. Did the stupid and didn't read the 
manual first.

Fortunately, on this server, all the nodes were using one option set that is 
common across all our TSMs; thus easy to recreate.




Harold Vandeventer
Systems Programmer
State of Kansas - Department of Administration - Office of Information 
Technology Services
harold.vandeven...@da.ks.gov
(785) 296-0631


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee, 
Gary D.
Sent: Thursday, December 15, 2011 1:35 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Deleting option sets

Option sets are not domain specific.

Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310

 


Re: Stupid question about TSM server-side dedup

2011-11-22 Thread Colwell, William F.
Wanda,

when id dup finds duplicate chunks in the same storagepool, it will
raise the pct_reclaim
value for the volume it is working on.  If the pct_reclaim isn't going
up, that means there
are no duplicate chunks being found.  Id dup is still chunking the
backups up (watch your database grow!)
but all the chunks are unique.

Is it possible that the ndmp agent in the storage appliance is putting
in unique metadata with each file?
This would make every backup appear to be unique in chunk-speak.

I remember from the v6 beta that the standard v6 clients were enhanced
so that the metadata could
be better identified by id dup and skipped over so that it could just
work on the files and get
better dedup ratios.  If id dup doesn't know how to skip over the
metadata in an ndmp stream, and
the metadata is always changed, then you will get very low dedup ratios.

If you do a 'q pr' while the id dup is running, do the processes say
they are finding duplicates?
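One way to check without eyeballing 'q pr' is a select against the processes
table; a sketch:

  select process_num, status from processes where process='Identify Duplicates'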

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Prather, Wanda
Sent: Monday, November 21, 2011 11:41 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Stupid question about TSM server-side dedup

Have a customer would like to go all disk backups using TSM dedup.  This
would be a benefit to them in several respects, not the least in having
the ability to replicate to another TSM server using the features in
6.3.

The customer has a requirement to keep their NDMP dumps 6 months.  (I
know that's not desirable, but the backup group has no choice in the
matter right now, it's imposed by a higher level of management.)

The NDMP dumps come via TCP/IP into a regular TSM sequential filepool.
They should dedup like crazy, but client-side dedup is not an option (as
there is no client).

So here's the question.  NDMP backups come into the filepool and
identify duplicates is running.  But because of those long retention
times, all the volumes in the filepool are FULL, but 0% reclaimable, and
they will continue to be that way for 6 months, as no dumps will expire
until then.  Since the dedup occurs as part of reclaim, and the volumes
won't reclaim -how do we prime the pump and get this data to dedup?
Should we do a few MOVE DATAs to get the volumes partially empty?


Wanda Prather  |  Senior Technical Specialist  |
wprat...@icfi.commailto:wprat...@icfi.com  |
www.icf.comhttp://www.icf.com
ICF International  | 401 E. Pratt St, Suite 2214, Baltimore, MD 21202 |
410.539.1135 (o)
Connect with us on social mediahttp://www.icfi.com/social


Re: Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for primary pool

2011-09-29 Thread Colwell, William F.
Hi Daniel,

My main point was to say that your previous posts seemed to be saying that 
dedup storagepools
were recommended to be 6 TB in size at most.  It is my understanding the 6TB 
recommendation was 
a daily server thruput maximum design target when dedup is in use.

I agree, a processor at 100% is not good and I have been adjusting the server 
design to reduce
the load.

I started re-hosting our backup service on v6 as soon as v6 was available.  I 
started out
deduping everything but quickly ran into performance problems.  To solve them I 
started excluding
classes of data from dedup - all Oracle backups, all outlook PST files and any 
other file larger
than 1 GB.  I also replaced all the disks I started with over 12 months and 
greatly expanded the
total storage.

Where the Redbook says that expiration is much improved, that is only partly 
true.  If dedup is involved,
a hidden process starts after the visible expiration process is done and runs 
on for quite a while longer.
This process has to check if a chuck in an expired file can truly be removed 
from storage because
it could be that other files are pointing to that chunk.  You can see the 
process by entering
'show dedupdeleteinfo' after expiration completes.

The thing about big files is that they are broken into lots of chunks.  When a 
big file is expired,
this hidden process will take a long time to complete and can bog down the 
system.  This is the
real reason I exclude some files from dedup.

As for SATA, I have been using some big arrays (20 2TB disks, raid 6), 8 such 
arrays, for 18 months
and have had only 1 disk fail.  But I try not to abuse them.  Backups first go 
onto jbod
disks - 15K rpm, 600GB - and all the dedup activity is done there.  The 
storagepools on those disks
are then migrated to storagepools on the SATA arrays.  It is a mostly 
sequential process.

I can only suggest that if your customer does storagepool backup from the SATA 
arrays after migration or
reclaim, and the copypool is not dedup, then there would be a lot of random 
requests to the SATA storagepools
to rehydrate the backups.

Regards,

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Daniel 
Sparrman
Sent: Thursday, September 29, 2011 1:24 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file 
systems for primary pool

Like it says in the document, it's a recommendation and not a technical limit.

However, having the server running at 100% utilization all the time doesn't
seem like a healthy scenario.

Why aren't you deduplicating files larger than 1GB? From my experience,
datafiles from SQL, Exchange and such have a very large de-dup ratio, while
TSM's deduplication skips files smaller than 2KB.

I have a customer up north who used this configuration on an HP EVA based box
with SATA disks.  The disks were breaking down so fast that the arrays within
the box were in a constant rebuild phase.  HP claimed it was TSM dedup that was
breaking the disks (they actually claimed TSM was writing so often that the
disks broke), a scenario I find very hard to believe.

Best Regards

Daniel



Daniel Sparrman
Exist i Stockholm AB
Växel: 08-754 98 00
Fax: 08-754 97 30
daniel.sparr...@exist.se
http://www.existgruppen.se
Posthusgatan 1 761 30 NORRTÄLJE



-----ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -----

To: ADSM-L@VM.MARIST.EDU
From: Colwell, William F. bcolw...@draper.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/28/2011 20:43
Subject: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for primary pool

Hi Daniel,

 

I remember hearing about a 6 TB limit for dedup in a webinar or conference call,

but what I recall is that that was a daily thruput limit.  In the same section 
of the

redbook as you quote is this paragraph -

 

Experienced administrators already know that Tivoli Storage Manager database 
expiration

was one of the more processor-intensive activities on a Tivoli Storage Manager 
Server.

Expiration is still processor intensive, albeit less so in Tivoli Storage 
Manager V6.1, but this is

now second to deduplication in terms of consumption of processor cycles. 
Calculating the

MD5 hash for each object and the SHA1 hash for each chunk is a processor 
intensive activity.

 

I can say this is absolutely correct; my processor is frequently running at or 
near 100%.

 

I have gone way beyond 6 TB of storage for dedup storagepools as this sql shows

for the 2 instances on my server -

 

select cast(stgpool_name as char(12)) as "Stgpool", -
   cast(sum(num_files) / 1024 / 1024 as decimal(4,1)) as "Mil Files", -
   cast(sum(physical_mb) / 1024 / 1024 as decimal(4,1)) as "Physical_TB", -
   cast(sum(logical_mb) / 1024 / 1024 as decimal(4,1)) as "Logical_TB", -
   cast(sum(reporting_mb) / 1024 / 1024 as decimal(4,1)) as "Reporting_TB" -
from occupancy

Re: Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for primary pool

2011-09-28 Thread Colwell, William F.
Hi Daniel,

 

I remember hearing about a 6 TB limit for dedup in a webinar or conference call,

but what I recall is that that was a daily thruput limit.  In the same section 
of the

redbook as you quote is this paragraph -

 

Experienced administrators already know that Tivoli Storage Manager database 
expiration

was one of the more processor-intensive activities on a Tivoli Storage Manager 
Server.

Expiration is still processor intensive, albeit less so in Tivoli Storage 
Manager V6.1, but this is

now second to deduplication in terms of consumption of processor cycles. 
Calculating the

MD5 hash for each object and the SHA1 hash for each chunk is a processor 
intensive activity.

 

I can say this is absolutely correct; my processor is frequently running at or 
near 100%.

 

I have gone way beyond 6 TB of storage for dedup storagepools as this sql shows

for the 2 instances on my server -

 

select cast(stgpool_name as char(12)) as "Stgpool", -
   cast(sum(num_files) / 1024 / 1024 as decimal(4,1)) as "Mil Files", -
   cast(sum(physical_mb) / 1024 / 1024 as decimal(4,1)) as "Physical_TB", -
   cast(sum(logical_mb) / 1024 / 1024 as decimal(4,1)) as "Logical_TB", -
   cast(sum(reporting_mb) / 1024 / 1024 as decimal(4,1)) as "Reporting_TB" -
from occupancy -
  where stgpool_name in (select stgpool_name from stgpools where deduplicate = 'YES') -
   group by stgpool_name

Stgpool       Mil Files  Physical_TB  Logical_TB  Reporting_TB
------------  ---------  -----------  ----------  ------------
BKP_2             368.0          0.0        30.0          95.8
BKP_2X            341.0          0.0        23.9          58.6

Stgpool       Mil Files  Physical_TB  Logical_TB  Reporting_TB
------------  ---------  -----------  ----------  ------------
BKP_2             224.0          0.0        35.7          74.1
BKP_FS_2           49.0          0.0        21.0          45.5

 

 

Also, I am not using any random disk pool, all the disk storage is scratch 
allocated

file class volumes.  There is also a tape library (lto5) for files larger than 
1GB

which are excluded from deduplication.

 

 

Regards,

 

Bill Colwell

Draper Lab

 

 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Daniel 
Sparrman
Sent: Wednesday, September 28, 2011 3:49 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file 
systems for primary pool

 

To be honest, it doesnt really say. The information is from the Tivoli Storage 
Manager Technical Guide:

 

Note: In terms of sizing Tivoli Storage Manager V6.1 deduplication, we currently

recommend using Tivoli Storage Manager to deduplicate up to 6 TB total of 
storage pool

space for the deduplicated pools. This is a rule of thumb only and exists 
solely to give an

indication of where to start investigating VTL or filer deduplication. The 
reason that a

particular figure is mentioned is for guidance in typical scenarios on 
commodity hardware.

If more than 6 TB of real diskspace is to be duplicated, you can either use 
Tivoli Storage

Manager or a hardware deduplication device. The 6 TB is in addition to whatever 
disk is

required by non-deduplicated storage pools. This rule of thumb will change as 
processor

and disk technologies advance, because the recommendation is not an 
architectural,

support, or testing limit.

 

http://www.redbooks.ibm.com/redbooks/pdfs/sg247718.pdf

 

I'm guessing it's server-side since client-side shouldn't use any resources at
the server. I'm also guessing you could do 8TB or 10, but not 60TB.

 

Best Regards

 

Daniel Sparrman

 

 

 

Daniel Sparrman

Exist i Stockholm AB

Växel: 08-754 98 00

Fax: 08-754 97 30

daniel.sparr...@exist.se

http://www.existgruppen.se

Posthusgatan 1 761 30 NORRTÄLJE

 

 

 

-----ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -----

To: ADSM-L@VM.MARIST.EDU
From: Hans Christian Riksheim bull...@gmail.com
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 09/28/2011 09:56
Subject: Re: [ADSM-L] Ang: Re: [ADSM-L] Ang: Re: [ADSM-L] vtl versus file systems for primary pool

 

This 6 TB supported limit for a deduplicated FILEPOOL: does this limit
apply when one does client-side deduplication only?

 

Just wondering since I have just set up a 30 TB FILEPOOL for this purpose.

 

Regards

 

Hans Chr.

 

On Tue, Sep 27, 2011 at 8:44 PM, Daniel Sparrman

daniel.sparr...@exist.se wrote:

 Just to put an end to this discussion, we're kinda running out of limits here:

 

 a) No VTL solution, neither DD, neither Sepaton, neither anyone, is a 
 replacement for random diskpools. Doesnt matter if you can configure 50 
 drives, 500 drives or 5000 drives, the way TSM works, you're gonna make the 
 system go bad since the system 

Re: snapdiff advice

2011-06-28 Thread Colwell, William F.
Hi Dave,

 

I can't comment on your error messages, but you asked how I schedule
snapdiff backups.

The schedule invokes a command on the client.  Here is a shortened
version of the command file.

echo on
for /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set date=%%a-%%b-%%c)
echo %date%

net use share-name
... 12 more net use statements ...

dsmc i -snapdiff share-name -optfile=dsm-unix1.opt > c:\backuplogs\xxx\snapdiff-%date%.txt
... 12 more dsmc commands ...
dsmc i c: -optfile=dsm-unix1.opt > c:\backuplogs\vscan64\local-%date%.txt

The last line backs up the local file system.

 

 

Regards,

 

Bill Colwell

Draper Lab

 

 

 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
David Bronder
Sent: Monday, June 27, 2011 4:24 PM
To: ADSM-L@VM.MARIST.EDU
Subject: snapdiff advice

 

Hi folks.

I'm trying to get snapdiff backups of our NetApp (OnTAP version 8.0.1P5)
working so I can move away from everybody's favorite NDMP backups...

So far, I'm not having much luck.  I don't know whether I'm just Doing
It Wrong (tm) or if something else is going on.  In particular, on both
Windows 2008 R2 (6.2.3.0) and RHEL 5.6 (6.2.2.0), I'm getting failures
like the following, depending on the dsmc invocation:

  ANS1670E The file specification is not valid. Specify a valid Network
   Appliance or N-Series NFS (AIX, Linux) or CIFS (Windows) volume.

  ANS2831E  Incremental by snapshot difference cannot be performed on
   'volume-name' as it is not a NetApp NFS or CIFS volume.

(These are shares at the root of full volumes, not Q-trees.  I'm using a
CIFS share for the Windows client, and an NFS share for the Linux client,
with the correct respective permission/security styles.  TSM server is
still 5.5, but my understanding is that that should be OK.)

For those of you who have snapdiff working, could you share any examples
of how you're actually doing it?  E.g., your dsmc invocation, how you're
mounting the share (must a Windows share be mapped to a drive letter?),
or anything relevant in the dsm.opt or dsm.sys (other than the requisite
testflags if using an older OnTAP).  Or anything else you think is useful
that the documentation left out.

(Also of interest would be how you're scheduling your snapdiff backups,
and how you have that coexisting with local filesystems on the client
running the snapdiff backups.)

Thanks,
=Dave

--
Hello World.                                 David Bronder - Systems Admin
Segmentation Fault                           ITS-EI, Univ. of Iowa
Core dumped, disk trashed, quota filled, soda warm.   david-bron...@uiowa.edu


Re: Identify Duplicates Idle vs Active state?

2011-05-25 Thread Colwell, William F.
Hi Harold,

 

I am running 6.1 with dedup and have coded scripts to check the id dup
processes before proceeding.

Here is a snippet -

 

upd scr start_migration 'select count(*) from processes where substr(process,1,1)=''I'' -'
upd scr start_migration ' and status like ''%1A.%active%'' having count(*) < 1 '
upd scr start_migration 'if(rc_notfound) goto reschedule'

 

The sql is looking for any id dup processes still active - I run 3 processes
for each 'landing zone' pool.  If none are active, the select returns a
value (0) and the logic falls thru to start migration.

 

Hope this helps,

 

Bill Colwell

Draper Lab

 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Vandeventer, Harold [BS]
Sent: Tuesday, May 24, 2011 2:59 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Identify Duplicates Idle vs Active state?

 

I'm working up scripting for our TSM 6.2 system where dedup will be
implemented.

 

Is there a way to test for the IDLE state of an IDENTIFY DUPLICATES
process?

 

I'd like to have our script test for the idle state to allow the next
set of work to proceed as soon as possible.

 

We've used  IF(RC_OK) in TSM 5.x scripts to test for upper(process) =
BACKUP STORGE POOL or upper(session_type) = NODE, but I don't see a
way to detect that idle vs. active state on identify duplicates
processes.

 

Thanks...

 

 



Harold Vandeventer

Systems Programmer

State of Kansas - DISC

harold.vandeven...@da.ks.govmailto:dane.woodr...@da.ks.gov

(785) 296-0631


Re: Identify Duplicates Idle vs Active state?

2011-05-25 Thread Colwell, William F.
I have 2 pools to receive the backups, and I flip/flop them daily.  So the
%1A etc is to test for the processes associated with the pool which is now idle,
which is the one I want to migrate down to the sata based storagepools.

There are other lines in the script which test for %1B etc.

Here is the output of a script which makes a consolidated display of
processes.  It reformats the status column from 'select * from processes' -

Num  Process     Status
---  ----------  ------------------------------------------------------------
1    Identify D  BKP_1A. Volume: NONE. State: idle. Total Duplicate Bytes Found: 932,910,685,096.
2    Identify D  BKP_1A. Volume: NONE. State: idle. Total Duplicate Bytes Found: 788,584,107,631.
3    Identify D  BKP_1A. Volume: NONE. State: idle. Total Duplicate Bytes Found: 736,142,766,941.
4    Identify D  BKP_1B. Volume: /tsm_es115/win1/00072EF1.BFS. State: active. Total Duplicate Bytes Found: 803,112,627,190.
5    Identify D  BKP_1B. Volume: /tsm_es118/win1/00072FE3.BFS. State: active. Total Duplicate Bytes Found: 625,537,450,389.
6    Identify D  BKP_1B. Volume: /tsm_es123/win1/00072E56.BFS. State: active. Total Duplicate Bytes Found: 521,743,380,790.



Thanks,

Bill Colwell
Draper Lab



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
David E Ehresman
Sent: Wednesday, May 25, 2011 11:52 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Identify Duplicates Idle vs Active state?

What does the %1A. in like ''%1A.%active%'' having count(*) < 1 '
test for?

 Colwell, William F. bcolw...@draper.com 5/25/2011 11:17 AM 
Hi Harold,



I am running 6.1 with dedup and have coded scripts to check the id dup
processes before proceeding.

Here is a snippet -



upd scr start_migration 'select count(*) from processes where substr(process,1,1)=''I'' -'
upd scr start_migration ' and status like ''%1A.%active%'' having count(*) < 1 '
upd scr start_migration 'if(rc_notfound) goto reschedule'

The sql is looking for any id dup processes still active - I run 3 processes
for each 'landing zone' pool.  If none are active, the select returns a
value (0) and the logic falls thru to start migration.



Hope this helps,



Bill Colwell

Draper Lab



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
Of
Vandeventer, Harold [BS]
Sent: Tuesday, May 24, 2011 2:59 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Identify Duplicates Idle vs Active state?



I'm working up scripting for our TSM 6.2 system where dedup will be
implemented.



Is there a way to test for the IDLE state of an IDENTIFY DUPLICATES
process?



I'd like to have our script test for the idle state to allow the next
set of work to proceed as soon as possible.



We've used  IF(RC_OK) in TSM 5.x scripts to test for upper(process) =
BACKUP STORGE POOL or upper(session_type) = NODE, but I don't see
a
way to detect that idle vs. active state on identify duplicates
processes.



Thanks...







Harold Vandeventer

Systems Programmer

State of Kansas - DISC

harold.vandeven...@da.ks.govmailto:dane.woodr...@da.ks.gov

(785) 296-0631


Re: Filesystem preferences for tsm 6 pools

2011-04-22 Thread Colwell, William F.
Hi,

I used ext3 for the first storage attached to the server, but I switched to 
ext4 for the second
storage purchase.  Both file systems work fine, but the documentation for ext4 
say it is designed
to support large files better than ext3.  Scratch volumes delete much faster 
from the ext4 filesystems.

The TSM databases are also on ext4.


Regards,

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Stefán 
Þór Hreinsson
Sent: Thursday, April 21, 2011 8:26 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Filesystem preferences for tsm 6 pools

I've been running on EXT3 for now 5 years, one year on 6.1 and 6.2 on several 
servers, it's solid, no complaints.  Performance has always been enough, from 
where I'm standing you go with the most commonly used solid filesystem in 
Linux, for me that's EXT3.

regards
stefan thor hreinsson
basis

From: ADSM: Dist Stor Manager [ADSM-L@VM.MARIST.EDU] on behalf of Lee, Gary D. 
[g...@bsu.edu]
Sent: Thursday, April 21, 2011 14:01
To: ADSM-L@VM.MARIST.EDU
Subject: Filesystem preferences for tsm 6 pools

Setting up a tsm 6.2.2 server under redhat enterprise linux 6 on the intel 
platform.

Wondering what was the group's opinion on which type of file system to use for 
storage pools?
Since raw devices are not supported, I am looking to maximize space and 
performance as much as possible.



Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310


Re: Tsm 6.22 server script problem.

2011-02-09 Thread Colwell, William F.
Hi Gary,

in v6 expiration puts a row in the summary table for every node, plus a
summary
row for the whole process.  Here is output from a script which displays
rows from summary.
As you can see, some of the elapsed times are 0 -

Activity    Target  Start Time   End Time     Elapsed (hh:mm:ss)  Gigs  Examined  Affected
----------  ------  -----------  -----------  ------------------  ----  --------  --------
EXPIRATION  node1   02-09-07.21  02-09-07.21  00:00:07            0.00       111       111
EXPIRATION  node2   02-09-07.21  02-09-07.21  00:00:07            0.00       145       145
EXPIRATION  node3   02-09-07.21  02-09-07.53  00:32:11            0.00    129775    129775
EXPIRATION          02-09-07.21  02-09-07.53  00:32:11            0.00    129775    129775
EXPIRATION  node4   02-09-07.21  02-09-07.22  00:01:04            0.00      3164      3164
EXPIRATION  node5   02-09-07.21  02-09-07.26  00:05:50            0.00     22557     22557
EXPIRATION  node6   02-09-07.21  02-09-07.21  00:00:00            0.00       145       145
EXPIRATION  node7   02-09-07.21  02-09-07.21  00:00:08            0.00       387       387
EXPIRATION  node8   02-09-07.21  02-09-07.21  00:00:09            0.00       522       522
EXPIRATION  node9   02-09-07.21  02-09-07.21  00:00:00            0.00       387       387


To select the summary row add 'and entity is null' to the where clause.
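As for the SQL0419N, I believe the divide by a decimal(18,13) is what
trips DB2's result-scale rule now that examined is a bigint.  Going
through float sidesteps the rule - a sketch, untested:

select activity, cast(end_time as date) as Date, -
(cast(examined as float) / cast((end_time-start_time) seconds as float) * 3600) -
as "Objects Examined Up/Hr" from summary where -
activity='EXPIRATION' and days(end_time)-days(start_time)=0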


Best regards,

Bill Colwell
Draper Lab



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Lee, Gary D.
Sent: Wednesday, February 09, 2011 2:12 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Tsm 6.22 server script problem.

I ported the following script from my tsm server v5.5.4  to the 6.2.2
server.
Wanted to compare expiration performance between the two servers.
However, script errors out with the following message.

ANR0162W Supplemental database diagnostic information: -1:42911:-419
([IBM][CLI Driver][DB2/LINUXX8664] SQL0419N A decimal divide operation
is not
valid because the result would have a negative scale. SQLSTATE=42911
).


 script follows 


select activity, cast ((end_time) as date) as Date, -
(examined/cast ((end_time-start_time) seconds as decimal (18,13)) *3600) -
as "Objects Examined Up/Hr" from summary where -
activity='EXPIRATION' and days (end_time) -days (start_time)=0


Thanks for any help.



Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310

 


Re: Managing/deleting DBSNAPSHOTs

2010-12-21 Thread Colwell, William F.
Zoltan,

you will also need to run expiration on the target server to delete what
the server thinks are archive files.

Regards,


Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
J. Pohlmann
Sent: Tuesday, December 21, 2010 1:41 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Managing/deleting DBSNAPSHOTs

Zoltan, use reconcile volumes fix=yes at the source server to get rid of
dangling archive objects at the target server. I have a couple of
customers
that have a PMR open for the archive objects of virtual volumes not
being
deleted at the target server. You might want to open a PMR too so that
there
is more evidence of this phenomenon. This started with v6.1 and in one
installation that is not using tape but instead file device class at the
target server I had to run regular reconcile volumes fix=yes commands
because they ran out of space. Regardless of whether you are using file
or
disk device class to store the data at the target server, reconcile
volumes
fix=yes at the source server will communicate with the target server to
inactivate the dangling archive objects. Then run expire inv at the
target
server to physically remove the objects. For file device class, the flat
files will be deleted (assuming they are scratch volumes) and for disk
device class volumes the pages will be freed up.

 And, yes, the comment about delg=0 is quite correct. Use q server f=s
to
find out what it is, then update server name delg=0.
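Putting it together, the sequence is roughly this (the server name is
whatever yours is defined as):

reconcile volumes fix=yes                    /* on the source server */
update server OFFSITESRV delgraceperiod=0    /* on the source server */
expire inventory                             /* on the target server */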

Regards,

Joerg Pohlmann

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Zoltan Forray/AC/VCU
Sent: Tuesday, December 21, 2010 07:18
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Managing/deleting DBSNAPSHOTs

We have setup a small, offsite TSM server to function as a repository of
across-the-wire DBSNAPSHOT database backups for our production TSM
servers.
We want each server to perform daily dbsnapshot backups to this server.
Since the offsite server currently has limited disk space (2-dbsnapshots
for
all servers), we need to expire/purge the previous
dbsnapshot-1 before we can perform another dbsnapshot.

I have everything configured and the dbsnapshots work as expected.

My problem is this.  How to I get the dbsnapshots to expire/go away on
the
offsite server?  What controls the expiration of the dbsnapshots?  I
have
run delete volhist devclass todate=today-1 type=dbsnapshot commands
and it
says they are deleted but the space is not release on the offsite
server?  I
have had to perform manual delete filespace ... type=server
on the offsite server but that deletes everything in the filespace.

What am I missing?
Zoltan Forray
TSM Software  Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit
http://infosecurity.vcu.edu/phishing.html


Re: DB2: SSD vs more RAM

2010-11-22 Thread Colwell, William F.
Hi Henrik,

I have 2 TSM (6.1.4.2) instances on one server.  One instance db size
(the size of the full db backup) is
558 GB, the other is 1,448 GB.

The server (IBM x3850 m2, running RHEL 5.5) started with 16 GB of ram, I
bumped it to 40 GB and then max'ed it out
with 128 GB.  I can't say I did a thorough performance analysis because it
was such a cheap thing to do.
When there are 2 or more instances on a server you need to use the
DBMEMPERCENT parameter in
dsmserv.opt to keep the instances from fighting for the memory and leave
some for the OS.  I have 
each set to 45%.
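The option itself is a single line in each instance's dsmserv.opt:

* cap DB2 at 45% of real memory for this instance
DBMEMPERCENT 45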

I started out with both databases on a Netapp, sharing 1 aggregate.  The
aggregate was 27 300 GB 15k sas
disks.  I wasn't satisfied with the performance and the usage was up to
70% so I bought a 
Nexsan SASbeast unit with 2 raid 10 arrays.  12 600GB 15k disks for
the smaller db and 16 disks for the 
larger DB.  I just finished moving the databases on to the arrays.  The
speed of the db backups increased dramatically.
Here is an sql query showing the last 6 dbbackups -

Activity       Start Time   End Time     Elapsed (hh:mm:ss)     Gigs
-------------  -----------  -----------  ------------------  -------
FULL_DBBACKUP  10-24-13.00  10-24-23.10  10:10:15            1390.90
FULL_DBBACKUP  10-31-15.52  11-01-00.46  08:54:39            1399.24
FULL_DBBACKUP  11-07-13.00  11-07-23.12  10:11:46            1432.42
FULL_DBBACKUP  11-14-13.00  11-14-21.22  08:21:49            1436.85
FULL_DBBACKUP  11-20-07.04  11-20-13.46  06:42:09            1442.77
FULL_DBBACKUP  11-21-15.00  11-21-17.35  02:35:54            1448.55

Line 4 is the last 'normal' backup from the Netapp (other things going
on during the backup).
Line 5 is the 'special' backup just before the restore (nothing else
going on)
Line 6 is the first 'normal' backup from the raid 10 array.  Much
faster.

Since the topic is about SSD or RAM, I can say I never considered SSD.
I expected it would be too expensive
for DB's this size.  If you are planning on doing dedup, expect the db
to grow very big very fast.

Thanks,

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Henrik Ahlgren
Sent: Monday, November 22, 2010 4:25 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: DB2: SSD vs more RAM

Or maybe he has a huge amount of DB entries?  If his options are either
six SAS 15K or eight SSDs (50GB each), it means his DB is propably in
the multi-hundred gigabyte range. If he just needs the IOPS for smaller
DB, then he would not need 8 SSDs to beat 6 platters, even one or two
could be enough. (Just one Intel X25E does 35K IOPS random 4K read.) I'm
not sure how much doubling the RAM would help with operations such as
expiration, DB backup etc. compared to nice SSD setup.

I'm wondering why so little discussion here on using solid state devices
for TSM databases? Some of you must be doing it, right?

On Nov 17, 2010, at 7:50 PM, Remco Post wrote:

 SSD to me seems overkill if you already have 24 GB of RAM, unless you
need superfast performance and are going to run a very busy TSM server
with a huge amount of concurrent sessions.
 
 -- 
 
 Gr., Remco
 
 On 17 nov. 2010, at 12:16, Pretorius, Louw l...@sun.ac.za
l...@sun.ac.za wrote:
 
 Hi all,
 
 I am currently in the process of setting up specifications for our
new TSM6.2 server.  
 
 I started by adding 8 x SSD 50GB disks to hold OS and DB, but because
of the high costs was wondering if it's possible to rather buy more RAM
and increase the DB2 cache to speed up the database.
 
 Currently I have RAM set at 24GB but its way cheaper doubling the RAM
than to buy 8 x SSD's
 Currently I have 8 x SSD vs 6 x SAS 15K 


-- 
Henrik Ahlgren
Seestieto
+358-50-3866200


Re: De-dup ratio's

2010-11-16 Thread Colwell, William F.
Hi Eric,

I started doing dedup fairly soon after 6.1 became available.  What I
found is that
the server had a lot of trouble expiring large files.  After expire runs
and appears to
be done, the server has a lot of extra work to do before it actually
deletes chunks from
storagepools.  And early in 6.1, this code had problems and was
inefficient.  So I had
to stop deduping big files to get the server to run smoothly.  Oracle
backups were very
bad this way and they weren't getting spectacular dedup ratios so I
stopped deduping and
returned to doing client compression which gets about 80% compression.

The process is much better now - I know this because I tested a lot of
patches to it - so I am thinking of deduping < 2GB files, and if all goes well then < 4 GB etc.
am thinking of deduping 2GB files, and if all goes well then 4 GB etc.

But I won't start deduping PST's again because they are backed up every
day and I only keep 3 versions
so why do all the dedup effort only to have to go thru the chunk
deletion effort 3 days later?
Then I would have to reclaim the volumes to actually get the space back.

What I do now is back them up to their own storagepool directly to my
Sata filesystems using scratch
volumes.  The storagepool is collocated by node.  Every day at 17:30 I
update all volumes in the
pool to be readonly.  This sets up a state of one file per volume.  When
expiration runs and a PST
is deleted, the volume is deleted too and I get the space back
immediately - no reclaim process is needed.
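The 17:30 update is just an admin schedule - roughly like this, with the
pool name being whatever you call yours:

def sched pst_lockdown type=administrative active=yes startt=17:30 -
 period=1 perunits=days -
 cmd="upd vol * access=readonly wherestgpool=PST_POOL whereaccess=readwrite"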

All together this storage pool needs about 15 TB of space.

Thanks,

Bill Colwell

 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Loon, EJ van - SPLXO
Sent: Tuesday, November 16, 2010 8:24 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: De-dup ratio's

Hi Bill!
Just out of curiosity, why do you exclude large files from dedup? When
for example a large PST file changes, probably only a small portion of
the file changes, so the rest of the file should be 'deduplicatable',
right?
Kind regards,
Eric van Loon

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Colwell, William F.
Sent: maandag 15 november 2010 16:12
To: ADSM-L@VM.MARIST.EDU
Subject: Re: De-dup ratio's

Hi David,

I am doing dedup with v6, no appliance involved.

On a server for windows systems, I am getting 3 to 1 savings.  The 'q
stg f=d' command
shows the savings -

   Duplicate Data Not Stored: 77,638 G (67%)

I exclude pst files and any other file larger than 1 GB from dedup.


On another server for linux, solaris, mac clients, the savings are -

   Duplicate Data Not Stored: 26,558 G (58%)

I also exclude > 1 GB files and the oracle/rman backups.

Thanks,

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Druckenmiller, David
Sent: Friday, November 12, 2010 11:24 AM
To: ADSM-L@VM.MARIST.EDU
Subject: De-dup ratio's

I'm curious what others are seeing for de-dup ratios for various
methods.

We're using IBM's ProtecTier for our TSM (5.5) primary pools and only
see about a 4 to 1 ratio.  This is less than half of what IBM was
projecting for us.  We have roughly 400 clients (mostly Windows servers)
totalling about 135TB of data.  Biggest individual uses are Exchange and
SQL Dumps.

Just wondering what others might be getting for other appliances or with
TSM v6?

Thanks
Dave




Re: De-dup ratio's

2010-11-16 Thread Colwell, William F.
Hi Paul,

I haven't installed 6.2 yet so I haven't tested client side dedup (CSD).
But I don't think
I would apply it to PST files anyway.  CSD doesn't solve the problem of
chunk expiration.
But if the network folk told me the network was overloaded and could
prove that backups were
the problem, then yes I would try CSD.

6 years ago Draper Lab was using Eudora for an email client.  Eudora
detached attachments
to a folder where they were backed up once; backups of email were not a
problem for TSM.
But then we wanted to get into calendaring.  So we tried the Oracle
Collaboration Suite
which required outlook as a client.  So everyone's Eudora folders were
sucked into PST files.
Since our users are not restricted about how much email to keep, they
kept pretty much
every email and still do.  The result was huge PST files; there are
100's of PST files 
larger than 10 GB.

Of course there was no planning for backing up the new PST files;  I had
to scramble.  I
directed them to a separate storage hierarchy and changed the policy
from 10-90-5-180 to 3-7-5-180
to expire them much quicker.  This made for a pool of tapes which rolled
over
quicker so I could keep my media expenses under control.  My current
policy on v6 is very similar.

Well, the OCS was a failure so in came Exchange about 5 years ago.

My idea of the best way to deal with PST files is to ban them entirely.
Instead have unlimited
quotas in Exchange and deploy an exchange backend archiving product to
keep the exchange db
manageable.  Some of the mail managers think this is a good idea too,
but we are now in the
midst of a drawn out exchange 2010 implementation so there are no
resources to get serious
about an archiving backend.

Thanks,

- bill



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Paul Zarnowski
Sent: Tuesday, November 16, 2010 1:42 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: De-dup ratio's

By using source-mode deduplication, you could avoid backing up the
entire PST files every day.  We've just introduced Exchange here, within
the last year, and are still figuring out the best way to deal with PST
files.

At 10:51 AM 11/16/2010, Colwell, William F. wrote:
But I won't start deduping PST's again because they are backed up every
day and I only keep 3 versions
so why do all the dedup effort only to have to go thru the chunk
deletion effort 3 days later?
Then I would have to reclaim the volumes to actually get the space
back.


--
Paul ZarnowskiPh: 607-255-4757
Manager, Storage Services Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801Em: p...@cornell.edu


Re: De-dup ratio's

2010-11-15 Thread Colwell, William F.
Hi David,

I am doing dedup with v6, no appliance involved.

On a server for windows systems, I am getting 3 to 1 savings.  The 'q
stg f=d' command
shows the savings -

   Duplicate Data Not Stored: 77,638 G (67%)

I exclude pst files and any other file larger than 1 GB from dedup.


On another server for linux, solaris, mac clients, the savings are -

   Duplicate Data Not Stored: 26,558 G (58%)

I also exclude > 1 GB files and the oracle/rman backups.

Thanks,

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Druckenmiller, David
Sent: Friday, November 12, 2010 11:24 AM
To: ADSM-L@VM.MARIST.EDU
Subject: De-dup ratio's

I'm curious what others are seeing for de-dup ratios for various
methods.

We're using IBM's ProtecTier for our TSM (5.5) primary pools and only
see about a 4 to 1 ratio.  This is less than half of what IBM was
projecting for us.  We have roughly 400 clients (mostly Windows servers)
totalling about 135TB of data.  Biggest individual uses are Exchange and
SQL Dumps.

Just wondering what others might be getting for other appliances or with
TSM v6?

Thanks
Dave





Re: Linux ext4 filesystems - is anyone using them for devt=file storage?

2010-10-29 Thread Colwell, William F.
Hi Christian,

thanks for the reply, I am glad to hear that someone else is using it.

I haven't had a problem using scratch volumes.  Could things be faster?  Sure, 
but
all the work is getting done.  And I am expecting ext4 to improve things with
extent allocation.  I am also changing the hardware for the db which should
speed things up too.

Thanks again,

- bill

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Christian Svensson
Sent: Friday, October 29, 2010 3:49 AM
To: ADSM-L@VM.MARIST.EDU
Subject: SV: Linux ext4 filesystems - is anyone using them for devt=file 
storage?

Hi Colwell,
I'm using EXT4 on 2 TSM Servers. One of them do I have full controll of and it 
works fine.
The other Linux system to I only see twice a year. But the customer normally 
drop me emails if he got something wrong.

But the same problem with EXT4 as with EXT3 is that you need to pre allocate 
all volumes before and not let TSM create them on-demand.
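For example, a volume can be pre-allocated server-side - the pool name,
path and size here are just placeholders:

def vol BKP_1A /tsm_es115/win1/vol00001.bfs formatsize=50000 wait=yes

Setting maxscratch=0 on the pool then keeps TSM from creating on-demand
volumes next to them.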

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
Skype: cristie.christian.svensson
Supported Platform for CPU2TSM:: 
http://www.cristie.se/cpu2tsm-supported-platforms


From: Colwell, William F. [bcolw...@draper.com]
Sent: 28 October 2010 22:08
To: ADSM-L@VM.MARIST.EDU
Subject: Linux ext4 filesystems - is anyone using them for devt=file storage?

Hi,

I am running 2 6.1 servers on rhel 5.5.  I am doing a lot of dedup.
All primary storagepools are devicetype file.  Currently I have 10 16TB
ext3 filesystems on raid 6 Sata.  All volumes are scratch allocations.

I have another 96TB ready to go.  I haven't made the filesystems yet.
So my question is if anyone is using ext4 yet as the filesystem type for
TSM storagepools.

From my initial reading, I think the extent allocation feature would be
very useful.  See http://en.wikipedia.org/wiki/Ext4

I opened a pmr today to ask if IBM would support servers using ext4, and
they just called back!  They will support servers using ext4 for file
storage.  (But not for client backups yet.)

Also, is anyone using ext4 for the database?

Thanks,

Bill Colwell

Draper Lab


Linux ext4 filesystems - is anyone using them for devt=file storage?

2010-10-28 Thread Colwell, William F.
Hi,

I am running 2 6.1 servers on rhel 5.5.  I am doing a lot of dedup.
All primary storagepools are devicetype file.  Currently I have 10 16TB
ext3 filesystems on raid 6 Sata.  All volumes are scratch allocations.

I have another 96TB ready to go.  I haven't made the filesystems yet.
So my question is if anyone is using ext4 yet as the filesystem type for
TSM storagepools.

From my initial reading, I think the extent allocation feature would be
very useful.  See http://en.wikipedia.org/wiki/Ext4

I opened a pmr today to ask if IBM would support servers using ext4, and
they just called back!  They will support servers using ext4 for file
storage.  (But not for client backups yet.)

Also, is anyone using ext4 for the database?

Thanks,

Bill Colwell

Draper Lab


Re: Deduplication Status

2010-04-21 Thread Colwell, William F.
Hi Andy,

there are 2 sources for this information.  A column in the stgpools table has 
the MB saved -

tsm: select cast(stgpool_name as char(20)) as Name, -
     cast(space_saved_mb / 1024.0 / 1024.0 as decimal(6,2)) as "T Saved" from stgpools

Name                  T Saved
--------------------  -------
BKP_0
BKP_1A                   0.00
BKP_1B                   0.00
BKP_2                   24.38


Or 'q stg f=d' will show it -

tsm: q stg bkp_2 f=d

   Storage Pool Name: BKP_2
   Storage Pool Type: Primary
   Device Class Name: VT01_50GB
  Estimated Capacity: 50,775 G
...
...
...
   Deduplicate Data?: Yes
Processes For Identifying Duplicates: 0
   Duplicate Data Not Stored: 24,972 G (56%)


Hope this helps,

Bill Colwell
Draper Lab


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Andrew 
Carlson
Sent: Wednesday, April 21, 2010 4:13 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Deduplication Status

Server side dedup, Server V6.2, client V6.2.

On Wed, Apr 21, 2010 at 2:39 PM, Mark Yakushev bar...@us.ibm.com wrote:
 Hi Andy,

 Are you doing server- or client-side deduplication? What are the versions
 of your TSM Client and Server?

 Regards,
 Mark L. Yakushev




 From: Andrew Carlson naclos...@gmail.com
 To:   ADSM-L@vm.marist.edu
 Date: 04/21/2010 12:36 PM
 Subject:    [ADSM-L] Deduplication Status



 I have been looking through the commands and outputs of commands,
 trying to find something to tell me how much deduplication has
 occurred.  Is there one there I am missing?  Thanks.

 --
 Andy Carlson
 ---
 Gamecube:$150,PSO:$50,Broadband Adapter: $35, Hunters License: $8.95/month,
 The feeling of seeing the red box with the item you want in it:Priceless.




-- 
Andy Carlson
---
Gamecube:$150,PSO:$50,Broadband Adapter: $35, Hunters License: $8.95/month,
The feeling of seeing the red box with the item you want in it:Priceless.


Re: Sessions idle for silly periods...

2010-02-12 Thread Colwell, William F.
Hi Allen,

yes, I am seeing sessions hang like this.  The sending server is version
6.
The receiver is 5.5.  I am making the copypools for the v6 servers on
virtual
volumes.  I get hanging sessions like this when doing backup stgpool and
also doing
tsm db backups to the same 5.5 server.  I monitor it every morning and
cancel the
hanging sessions. I have a pmr open.

If you look in the sending server, you should see messages like this -

tsm: q act begint=00:00 s=socket
Session established with server : Linux/x86_64
  Server Version 6, Release 1, Level 3.0
  Server date/time: 02/12/2010 14:05:36  Last access: 02/12/2010
13:51:06

02/12/2010 01:37:22 ANR8213E Socket 19 aborted due to send error;
error 110. (SESSION: 12169, PROCESS: 248)
02/12/2010 06:07:21 ANR8213E Socket 12 aborted due to send error;
error 110. (SESSION: 13343)
02/12/2010 06:37:14 ANR8213E Socket 6 aborted due to send error;
error 110. (SESSION: 13510)
02/12/2010 14:05:36 ANR2017I Administrator id issued command:
QUERY ACTLOG begint=00:00 s=socket  (SESSION: 16213)
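I find them with a select along these lines before cancelling - column
names from memory, so check 'select * from sessions' on your level
first:

select session_id, client_name, wait_seconds from sessions -
 where state='IdleW' and wait_seconds > 6000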


Bill Colwell

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Allen S. Rout
Sent: Friday, February 12, 2010 12:08 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Sessions idle for silly periods...

Have any of you seen server-to-server sessions stay around, idle, for
ridiculous periods?

I have a server with


tsm: ATLCOPY> q opt idletimeout

Server Option Option Setting
- 
IdleTimeOut   60


but it accumulates idle sessions:


   Sess  Comm.   Sess      Wait   Bytes   Bytes  Sess   Platform  Client Name
 Number  Method  State     Time    Sent   Recvd  Type
-------  ------  ------  ------  ------  ------  -----  --------  -----------
 89,734  Tcp/Ip  IdleW    250.7   1.2 K   7.4 G  Node   Windows   UFF-OFFH
 93,028  Tcp/Ip  IdleW    226.4   1.9 K  20.7 G  Node   Windows   UFF-OFFH
 96,362  Tcp/Ip  IdleW    202.9   1.2 K  60.4 M  Node   Windows   UFF-OFFH
 99,649  Tcp/Ip  IdleW    180.8   1.4 K  18.8 G  Node   Windows   UFF-OFFH
100,751  Tcp/Ip  IdleW    172.1   1.4 K  48.1 G  Node   Windows   UFF-OFFH

[ ... ]


Makes me wish for

CANCEL SESS wherestate=IdleW wherewait >= 6000

or some such.


- Allen S. Rout


Re: Formating SQL query

2009-10-15 Thread Colwell, William F.
Grigori,

 

I assume the sql*plus feature you use is the break statement which

by default does outlines on break columns.

 

Besides submitting sql and retrieving results sets, Sql*plus includes

a lot of report writer functions which are not strictly SQL.

 

So I don't know any way to do outlining with just sql.

 

In version 6 you can make the db2 databases visible to other tools

using jdbc or odbc.  See the wiki for directions.  I use a free tool -
DB visualizer

(http://www.minq.se/products/dbvis/download/index.jsp)

to examine tables to attempt to understand what is going on.  All the

views on the tsm db are there.  I haven't looked for a tool which does

outlining but I am sure there is one.
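That said, since the 6.1 db is db2 you can fake the outline in plain sql
with an olap function - a sketch, assuming a and b are character
columns:

select case when row_number() over (partition by a order by b) = 1 -
 then a else '' end as a_grp, b -
 from c order by a, b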

 

Bill Colwell

Draper Lab.

 

 

 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Grigori Solonovitch
Sent: Thursday, October 15, 2009 8:49 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Formating SQL query

 

I am not very cool in SQL and I need help.

I have a query like select distinct a,b from c group by a,b

Response on this SQL query in  TSM Server is:

A1 B1

A1 B2

A2 B3

A2 B4

A2 B5

I would like to have:

A1 B1

 B2

A2 B3

 B4

 B5

I know exactly it is possible in Oracle SQL*Plus.

Is it possible in TSM Server 5.5.3?

Is it possible in TSM Server 6.1.2 (DB2)?

What is the way, if possible?

 

 

 

Grigori G. Solonovitch

 

Senior Technical Architect

 

Information Technology  Bank of Kuwait and Middle East
http://www.bkme.com

 

Phone: (+965) 2231-2274  Mobile: (+965) 99798073  E-Mail:
g.solonovi...@bkme.com

 



Re: Windows TSM server 6.1.2.0 after clean install : ANR2968E Database backup terminated. DB2 sqlcode: -2033.

2009-08-27 Thread Colwell, William F.
Stefan,

all my executions of the wizard were at the 6.1.0.0 level and on 64bit
Linux.
I hope they haven't introduced a bug in 6.1.2.0.  

- bill

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Stefan Folkerts
Sent: Thursday, August 27, 2009 2:48 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Windows TSM server 6.1.2.0 after clean install : ANR2968E
Database backup terminated. DB2 sqlcode: -2033.

I was in the beta as well Bill. :)

But I am sorry to say IBM did not do a good job on the 6.1.2.0 Windows
release instance wizard, Wanda pointed me the problem on the IBM page :
http://www-01.ibm.com/support/docview.wss?uid=swg21390301

There they confirm the problem, I did two clean installs and it just
doesn't work out of the box.
I can confirm that it still doesn't work on the TSM 6.1.2.0 64bit
install package.

Stefan

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Colwell, William F.
Sent: Wednesday, August 26, 2009 17:55
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Windows TSM server 6.1.2.0 after clean install :
ANR2968E Database backup terminated. DB2 sqlcode: -2033.

Stefan,

I was in the beta and I never got a database to backup because of the
api config
difficulties.  Fortunately this is all handled now by the instance
creation wizard, dsmicfgx.
I have run it 3 times and in every case the instance is created
successfully, starts up and
the db backs up because the api is configured.  I normally don't do
wizards, but IBM did a good
job on this one.

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Wanda Prather
Sent: Wednesday, August 26, 2009 9:40 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Windows TSM server 6.1.2.0 after clean install : ANR2968E
Database backup terminated. DB2 sqlcode: -2033.

Been there done that.
Go to www.ibm.com, in the search window put: 1390301
That says what you need to do to get the problem fixed.
What it omits to say, is that you must be logged in with the DB2 userid
when you do it.
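The gist of it, from memory (unix shown, and the paths are examples -
the technote has the real steps): as the instance owner, set the API
variables DB2 inherits, e.g. in sqllib/userprofile -

export DSMI_CONFIG=/home/tsminst1/tsmdbmgr.opt
export DSMI_DIR=/opt/tivoli/tsm/client/api/bin64
export DSMI_LOG=/home/tsminst1

- then restart the instance (db2stop/db2start) under that userid so
'backup db' can find the API.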


On Wed, Aug 26, 2009 at 6:24 AM, Stefan Folkerts
stefan.folke...@itaa.nlwrote:

 I have done a clean install of the 64bit windows version of 64bit TSM
 6.1.2.0 on Windows 2008 DC +sp1 + windows patches
 After I do a minimal setup of the server instance I am able to connect
 to the instance using TSMmanager.
 When I set the dbrecovery option to the default file device class I
 should be able to backup the TSM database with the 'ba db type=full
 devcass=FILEDEV1' command (FILEDEV1 is the name of the default file
 device class with TSM 6.1.2.0)

 However, what happens is this ;

 08/26/2009 12:11:29  ANR2017I Administrator ADMIN issued command:
BACKUP
 DB
  type=full devclass=filedev1  (SESSION: 12)

 08/26/2009 12:11:29  ANR4559I Backup DB is in progress. (SESSION: 12)

 08/26/2009 12:11:29  ANR0984I Process 3 for DATABASE BACKUP started in
 the
  BACKGROUND at 12:11:29. (SESSION: 12, PROCESS: 3)

 08/26/2009 12:11:29  ANR2280I Full database backup started as process
3.

  (SESSION: 12, PROCESS: 3)

 08/26/2009 12:11:29  ANR0405I Session 12 ended for administrator ADMIN
 (WinNT).
  (SESSION: 12)

 08/26/2009 12:11:30  ANR2968E Database backup terminated. DB2 sqlcode:
 -2033.
  DB2 sqlerrmc: 406. (SESSION: 12, PROCESS: 3)

 08/26/2009 12:11:30  ANR0985I Process 3 for DATABASE BACKUP running in
 the
  BACKGROUND completed with completion state
FAILURE
 at
  12:11:30. (SESSION: 12, PROCESS: 3)



 Here :

http://publib.boulder.ibm.com/infocenter/tsminfo/v6/index.jsp?topic=/com.ibm.itsm.messages.doc/msgs2417.html

 I find this information ;


 ===

 ANR2968E: Database backup terminated. DB2 sqlcode: sqlcode. DB2
 sqlerrmc: sqlerrmc.
 Explanation
 DB2(r) detected a problem during the backup operation. Sources of the
 problem might include:

   1. Tivoli(r) Storage Manager API configuration errors for the DB2
 instance in which the Tivoli Storage Manager server database resides.
   2. An error related to the DB2 backup operation.
   3. Errors related to the target backup device.

 System action

 The database backup is terminated.
 User response

 If the message indicates DB2 sqlcode 2033, then the problem is
probably
 the Tivoli Storage Manager API configuration. The DB2 instance uses
the
 Tivoli Storage Manager API to copy the Tivoli Storage Manager
 database-backup image to the Tivoli Storage Manager server-attached
 storage devices. Common sqlerrmc codes include:

   1. 50 - To determine whether an error created the API timeout
 condition, look for any Tivoli Storage Manager server messages that
 occurred during the database-backup process and that were issued
before
 ANR2968E. Ensure that the Tivoli Storage Manager API options file has

Re: SQL SELECT to show what's mounted and why

2009-02-12 Thread Colwell, William F.
Roger,

I have a script called 'qdr' to show the drives and the tapes on each.
The script also
executes scripts in the library manager client servers to get details of
the usage.

The scripts and sample output are in the attached file.

Hope this helps,

Bill Colwell
Draper Lab



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Roger Deschner
Sent: Thursday, February 12, 2009 1:16 PM
To: ADSM-L@VM.MARIST.EDU
Subject: SQL SELECT to show what's mounted and why

I'm looking for an SQL SELECT that will display a list of what tape is
mounted on each drive, and which session or process it's mounted for.

I've looked at SHOW ASMOUNTED, SHOW ASVOL, SHOW MP and they don't really
do it.

I'm dealing with a drive-constrained system and no budget to add
drives, so I'm trying to manage the situation with better automation.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
   Academic Computing  Communications Center
==I have not lost my mind -- it is backed up on tape somewhere.=
-- qdr script ---
-- save to a file, run the file as a macro in the library manager server 
-

del scr qdr
commit
def scr qdr desc='Quick display of drives'
upd scr qdr 'set sqldisplaymode w'
upd scr qdr 'select left(dr.drive_name, 10) as Drive, left(dr.online, 7) as "Online?", -'
upd scr qdr ' dr.element, left(drive_serial, 13) as "Serial number", -'
upd scr qdr ' left(p.device, 16) as device, left(dr.drive_state, 10) as Status, left(dr.volume_name, 6) as Volume, -'
upd scr qdr ' left(dr.allocated_to, 32) as User -'
upd scr qdr ' from drives dr, paths p where drive_name = p.destination_name and p.source_name = ''LIBRARY_MANAGER'''
upd scr qdr 'select ''Path from server ''||left(source_name, 28)||'' to drive ''||left(destination_name, 20)||'' is not online'' as "Paths offline" from paths -'
upd scr qdr ' where online <> ''YES'' order by 1'
upd scr QDR 'label: tsm1:run tapes'

--- tapes script 
-- save to a file, run the file as a macro in the client servers -

del scr tapes
def scr TAPES 'Show tapes used by sessions and processes'
upd scr TAPES '/*  -------------------------------------  */' line=1
upd scr TAPES '/*  Script Name:  TAPES                    */' line=5
upd scr TAPES '/*  Description:  Display ses & proc tapes */' line=10
upd scr TAPES '/*  Parameter:    none                     */' line=15
upd scr TAPES '/*  Example:      run tapes                */' line=20
upd scr TAPES '/*  -------------------------------------  */' line=25
upd scr TAPES 'set sqldisplaymode w' line=30
upd scr TAPES 'commit' line=35
upd scr TAPES 'select cast(process as char(28)) as process, -' line=40
upd scr TAPES 'cast(substr(status,posstr(status,''put vol'')-11,length(status)) as char(82)) as "Tape in use" -' line=45
upd scr TAPES '  from processes -' line=50

Re: TSM database on NETAPP

2008-12-24 Thread Colwell, William F.
Hi Sam,

I went thru this about two years ago, moving db volumes from raw volumes
on Solaris to a netapp.  My experience was that it got faster as the
number of db volumes decreased.  They decreased because the old volumes
were 6GB and the netapp volumes were 20GB.  What I think the problem is,
is that TSM updates every dbvolume and log volume every time 4MB is
moved.  You know that 1 MB reserved at the start of each volume?  I
vaguely remember hearing that every db volume knows about every other
volume and the info is in the 1MB reserved.

How many db volumes to you have?

Here are some stats I gathered at the time to report to IBM.  They show
the
megs/minute going up as the number of db volumes decreases.  If you are
in this situation
there is nothing to do but push through it.

thanks,

Bill Colwell
Draper Lab

   stats -
minutes   dbv size (bytes)   MB/min   Comments
   1021      5,658,116,096     5.29   68 db volumes
   1114      6,446,645,248     5.52
   1024      6,446,645,248     6.00
    898      6,446,645,248     6.85
    913      6,446,645,248     6.73
    843      6,446,645,248     7.29
    837      6,446,645,248     7.35
    853      6,446,645,248     7.21
    819      6,446,645,248     7.51
    769      6,446,645,248     7.99
    722      6,446,645,248     8.52
    623      6,446,645,248     9.87   44 db volumes
    683      6,446,645,248     9.00
    583      6,446,645,248    10.55
    589      6,446,645,248    10.44
    536      6,446,645,248    11.47
    492      6,446,645,248    12.50
    467      6,446,645,248    13.16
    375      6,446,645,248    16.39
    366      6,446,645,248    16.80   28 db volumes
    326      6,446,645,248    18.86
    293      6,446,645,248    20.98
    319      6,446,645,248    19.27
    231      6,446,645,248    26.61   21 db volumes
    187      6,446,645,248    32.88
    335     12,096,372,736    34.44
    299     12,096,372,736    38.58
    199     12,096,372,736    57.97
    168     12,096,372,736    68.67   14 db volumes


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Sam Sheppard
Sent: Wednesday, December 17, 2008 10:59 PM
To: ADSM-L@VM.MARIST.EDU
Subject: TSM database on NETAPP

TSM Server 5.5.1 on Solaris10, Sun V240.
I need to relocate my database volumes and have been assigned a LUN on
one of our Netapp FAS6030 devices.  I allocated 6 new volumes and then
deleted one of the old volumes (5GB) and the move process started and is
running at about 600MB/hour. I'm not real familiar with the performance
characteristics of the Netapp box, but am assured by the Unix/Netapp
guys that it's a great performer (it's FC-connected).

This seems way slow to me and I haven't seen this poor performance
doing the same kind of operation on my ESS. Anyone have any experience
with the TSM database on one of these devices? I know the original
network-attached storage had write performance problems, but this seems
ridiculous.

TIA

Sam Sheppard
San Diego Data Processing Corp.
(858-581-9668)


Re: TSM being abandoned?

2008-04-16 Thread Colwell, William F.
I have been configuring a new TSM server since last November.  At first
I wanted a VTL.  But when I learned from the Oxford symposium
presentations
that TSM would have its own dedup in version 6,
and considering the cost of the vtl, I ditched it and ordered a lot more
of SATA arrays for less money.

I think in a few years after v6 is widely installed, VTL's won't look so
good
for TSM sites.  Assuming it all works of course.

your VTL vendor may just have been whistling past the graveyard.

Bill Colwell

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Paul Zarnowski
Sent: Wednesday, April 16, 2008 12:08 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM being abandoned?

Deduplicating VTLs fit better into NBU sites.  TSM's progressive
incremental methodology already reduces the data stream, making deduping
VTLs less of a win, though it can still be beneficial.  My point is
that
VTL vendors may not look as positively on TSM as they do on other
less-efficient backup solutions, because they don't sell as much VTL
product to them.  IMHO.
..Paul

 A VTL vendor said he is seeing a number of mid-sized businesses
 migrating from TSM to NBU (Symantec). Do you think this is true? My
 concern is that the pool of support techs will shrink and put us in a
 bind.

 Regards,
 Orin

 Orin Rehorst
 Port of Houston



Re: TSM being abandoned?

2008-04-16 Thread Colwell, William F.
Timothy,

I don't remember where I heard it and of course IBM can change it,
but I heard it is scheduled for late this year.

- bill

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Timothy Hughes
Sent: Wednesday, April 16, 2008 2:19 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM being abandoned?

Is Version 6 going to be released this Year or Next?

regards

Colwell, William F. wrote:

I have been configuring a new TSM server since last November.  At first
I wanted a VTL.  But when I learned from the Oxford symposium
presentations
that TSM would have its own dedup in version 6,
and considering the cost of the vtl, I ditched it and ordered a lot
more
of SATA arrays for less money.

I think in a few years after v6 is widely installed, VTL's won't look
so
good
for TSM sites.  Assuming it all works of course.

your VTL vendor may just have been whistling past the graveyard.

Bill Colwell

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
Of
Paul Zarnowski
Sent: Wednesday, April 16, 2008 12:08 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM being abandoned?

Deduplicating VTLs fit better into NBU sites.  TSM's progressive
incremental methodology already reduces the data stream, making
deduping
VTLs less of a win, though it can still be beneficial.  My point is
that
VTL vendors may not look as positively on TSM as they do on other
less-efficient backup solutions, because they don't sell as much VTL
product to them.  IMHO.
..Paul



A VTL vendor said he is seeing a number of mid-sized businesses
migrating from TSM to NBU (Symantec). Do you think this is true? My
concern is that the pool of support techs will shrink and put us in a
bind.

Regards,
Orin

Orin Rehorst
Port of Houston







Re: TSM dream setup

2008-02-15 Thread Colwell, William F.
We backup complete end user desktops.  Ever since the advent of
TSM - actually adsm 1.1 - some people, mostly managers, have
asked how many copies of any file, for example winword.exe, are
stored in tsm.  When I tell the 1,200, I can see they are thinking
'what a waste, what's wrong with tsm'.  So I am looking forward to
being able to say 'just one copy'.

According to the Oxford presentations, tsm software dedup will only
happen during reclaim, and only in a devtype=file sequential disk pool.
I don't see any need for new targeting features, they wouldn't
do much in my 'dream system'.

I am spec'ing out a new system now.  Before hearing about version 6, I
wanted a vtl.  Now my dream system is an x-series running linux and
version 6 with midrange raid for the database and backuppool and
50 - 100T of sata arrays.  No tapes for primary pools.

Thanks,

Bill Colwell
Draper lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Paul Zarnowski
Sent: Friday, February 15, 2008 8:54 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM dream setup

About deduplication, Mark Stapleton said:

  It's highly overrated with TSM, since TSM doesn't do absolute (full)
  backups unless such are forced.

At 12:04 AM 2/15/2008, Curtis Preston wrote:
Depending on your mix of databases and other application backup data,
you can actually get quite a bit of commonality in a TSM datastore.

I've been thinking a lot about dedup in a TSM environment.  While it's
true
that TSM has progressive-incremental and no full backups, in our
environment anyway, we have hundreds or thousands of systems with lots
of
common files across them.  We have hundreds of desktop systems that have
a
lot of common OS and application files.  We have local e-mail stores
that
have a lot of common attachments.

While it may be true that overall, you will see less duplication in a
TSM
environment than with other backup applications, with TSM you also have
the
ability to associate different management classes with different files,
and
thereby target different files to different storage pools.  Wouldn't it
be
great if we could target only the files/directories that we *know* have
a
high likelihood of duplication to a storage pool that has deduplication
capability?  You can actually do this with TSM.  I'd like to see an
option
in TSM that can target files/directories to different back-end storage
pools that is independent of the management class concept, which also
affects versions  retentions and other management attributes.


..Paul



--
Paul ZarnowskiPh: 607-255-4757
Manager, Storage Services Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801Em: [EMAIL PROTECTED]


Re: Backing up PST files

2008-02-14 Thread Colwell, William F.
Hi Paul,

the subfile cache doesn't hold the whole file, it holds signatures of
some sort, probably a checksum for each page.  I use subfile for pst
files.  My pst is 75 meg but the cache folder is 5 meg.

We went thru a conversion to outlook 2 years ago.  First, you're lucky
to be involved
before the conversion is done.  

The key is the attachments.  If the conversion from the old mail client
can leave
the attachments out of the pst then you will have much smaller pst
files.  Unfortunately
we loaded the attachments in.  As I said, I wasn't asked about the
conversion until
it was done.

Also, there are programs which extract attachments from the pst.  We use
EzDetach.  It helped a lot but the deployment of it wasn't completed so
we
have lots of psts > 2gig, some up to 10 gig.  Yes, they backup every
night.

I made a special management class for them with separate disk and tape
pool.
The tapes cycle faster because expiration deletes everything from them.
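The binding and the cache are all client options - along these lines,
where PST_MC and the cache path are made-up names:

include ?:\...\*.pst PST_MC
subfilebackup yes
subfilecachepath c:\tsmsubfilecache
subfilecachesize 1024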

Good luck!

Bill Colwell
Draper lab



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Paul Zarnowski
Sent: Thursday, February 14, 2008 11:16 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Backing up PST files

At 11:04 PM 2/13/2008, Wanda Prather wrote:
Subfile backup was designed to work on laptops, not servers.

Yes, that is what we are trying to do.  Sam, on the other hand

The base or initial backup copy of a file is stored in a cache
directory
on the client, which is also limited in size to 2 GB.

I didn't realize the cache directory was limited in size and that it
held
complete copies of files.  Checking this, it seems you may be correct
about
full copies of files being made in the cache directory.  The size of the
cache directory is only 1GB according to the Users Guide, and that is
what
the EditPreferences GUI restricts you to as well  I don't understand
how subfile backup can work for files up to 2GB in size, if the
subfilecachesize is restricted to 1024 MB.

In short, you can only manage a total of about 2 GB of files for
subfile
backup, and only files less than 2 GB in size are eligible.

Bummer.  Thanks Wanda.

..Paul


--
Paul ZarnowskiPh: 607-255-4757
Manager, Storage Services Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801Em: [EMAIL PROTECTED]


Re: Data Domain Question

2007-12-14 Thread Colwell, William F.
Curtis,

the Oxford 2007 presentations are available at
http://tsm-symposium.oucs.ox.ac.uk/2007contributions.html

Review the ones by Dave Cannon and Freddy Saldana, they are very good
with lots of information about possible future tsm features.

Bill Colwell

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Curtis Preston
Sent: Friday, December 14, 2007 4:23 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Data Domain Question

Really!  What did they say?  They've been rather tight-lipped elsewhere.

---
W. Curtis Preston
Backup Blog @ www.backupcentral.com
VP Data Protection, GlassHouse Technologies 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Paul Zarnowski
Sent: Wednesday, December 12, 2007 11:27 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Data Domain Question

At 01:58 PM 12/12/2007, Hart, Charles A wrote:
We are using the Other de-duping product Diligent Protectier

There are actually quite a few de-duping storage products on the
market now.  IBM also discussed TSM-based dedup possibilities at the
Oxford TSM Symposium a couple of months ago.



--
Paul ZarnowskiPh: 607-255-4757
Manager, Storage Services Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801Em: [EMAIL PROTECTED]


Re: backup nasnode questions

2007-10-04 Thread Colwell, William F.
Hi Dirk,

the backup node admin command has a mgmtclass parameter.  You could
put the full backups to a separate mgmtclass which keeps them long enough.

For scheduling fulls I suggest you prefix the vfs mapping name with some
indicator of a cycle like cycle01/filespace.  Then a script could check
the date and run a full when the cycle comes around.
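For example - node, filespace and class names are placeholders:

backup node nasfiler1 /vol/vol1 mode=full mgmtclass=NAS_LONG_MC
backup node nasfiler1 /vol/vol1 mode=differential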

Regards,

bill Colwell

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
 Of Dirk Kastens
 Sent: Thursday, October 04, 2007 8:53 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: backup nasnode questions
 
 Larry,
 
 thanks for your reply, but:
 
  Per the TSM 5.4 Admin reference: If you backup up at an attached
 library
  you
  need to create a copypool with the same NDMP data format...in your
 case
  NETAPPDUMP. In other situations, the format needs to be NATIVE.
 
 I backup directly to the TSM server. There is no library or drive
 attached to the filers.
 
  verexists refers to active copies. What is your retain extra set
for?
 The
  full may still exist as an inactive file. If you want a full and two
  incrementals active you'll have to change your verexists value. Do a
 query
  with inact=yes to see if the full is still available for restore.
 
 The q nasbackup command doesn't have an inactive option. When I
set
 the verexists=2 option, after the first backup, the full image is the
 active version. After the second backup, the full image changes to
 inactive and the differential is the active version. After the third
 backup, the first differential changes to inactive and the second
 differential is active. The full backup has disappeared from the list,
 although it must be there because the differential depends on the full
 backup. I don't see a command that can list the expired full backup.
 On the other hand, I don't want to keep 14 differential backups until
I
 make the next full backup. I just want to keep (and see) the full
 backup
 and the last two differentials.
 
  Scheduling full backups every two weeks: Use a separate schedule for
 full
  backups with a perunits=weeks period=2
 
 Yes, but I don't want to backup the whole filer once every two weeks.
I
 want to backup vol1 on Monday, vol2 on Tuesday, and so on, what means,
 that I had to define a single schedule for each filesystem. My idea
was
 to set an options on a filespace that automatically arranges a full
 backup after a defined number of days. So I only need one schedule
with
 the backup node command, TSM looks up the date of the last full
 backup
 of each filesystem and handles the next full backup dependent on the
 defined option.
 
 --
 Regards,
 
 Dirk Kastens
 Universitaet Osnabrueck, Rechenzentrum (Computer Center)
 Albrechtstr. 28, 49069 Osnabrueck, Germany
 Tel.: +49-541-969-2347, FAX: -2470


Re: deleted management class in database

2007-09-07 Thread Colwell, William F.
Keith,

Did you put the node name in upper case?  The only way you can
get no rows returned from the query is if the node name is cased wrong
or misspelled.
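To take the case issue out of the picture, e.g.

select distinct class_name, state from backups -
 where node_name = upper('node-name')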

- bill

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
 Of Keith Arbogast
 Sent: Friday, September 07, 2007 9:44 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: deleted management class in database
 
 Bill,
 By 'current nodes' I mean ones we are backing up daily, not ones
 retired but somehow still in the database.
 
 I had misread or misremembered the description of what happens when a
 management class is deleted, and expected any files, inactive or
 active, bound to a deleted to be rebound to the default management
 class for the domain, etc.  In a hurry, I couldn't find the
 documentation on that to clarify the behavior.
 
 I did run the query you suggested; select ll_name, state,
 backup_date, deactivate_date, class_name from backups where node_name
 = 'node-name'   The result was 'ANR2034E SELECT: No match found
 using this criteria'.
 
 This makes me wonder whether the original query had a, subtle to me
 but glaring to others, logic error.  I am now running a simpler
 query; select node_name from backups where class_name =
 'mystery_class'.  It may run for awhile, so I am sending this
 ahead in hope of additional suggestions.
 
 With my thanks,
 Keith Arbogast


Re: deleted management class in database

2007-09-06 Thread Colwell, William F.
Keith,

I'm not sure what you mean by 'current nodes', but if they
backed up before you last changed management classes, and if there
are files that have never changed, then they are still in the active set
and will still have the surprising mgmtclass.

I am just finishing up a ppt for managers to try once again to
explain tsm and backup policy.  This exercise reminded me that
there is an important policy built in and there is nothing for
us to specify about it and it can therefore be invisible, namely
that the active set is not managed.  I know it is invisible to
my managers for sure.

To check, try this select on one of the nodes,

select ll_name, state, backup_date, deactivate_date, class_name
 from backups
  where node_name = 'THE_NODE_NAME'

I expect you will see files in the active state with no deactivate date
and the mystery class.

Bill Colwell
Draper lab

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
 Of Keith Arbogast
 Sent: Thursday, September 06, 2007 5:30 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: deleted management class in database
 
 I ran this query to determine which management classes are being used
 by our clients; select distinct node_name, class_name from backups.
 
 The query itself may be wrong or wrong for the purpose. However, the
 output contained a surprise. Several current nodes had management
 classes listed with them that are no longer defined, and haven't been
 for some years. That is, they do not appear in the output of 'q mg'.
 
 Under what other circumstances could management classes be in the
 database, but not in 'q mg' output? The TSM server is at level
 5.3.1.4 on AIX.
 
 With my thanks,
 Keith Arbogast
 Indiana University


Re: First Backup day

2007-08-23 Thread Colwell, William F.
Hi Shawn,

use this sql to find the oldest backup date -

select min(backup_date) from backups
 where node_name = 'NODE' and filespace_name = '\\node\c$'

It will return just one line of output. The filespace criteria is
optional.
I use it because we have nodes with old filespaces like '\\node-nt\c$'.
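For a whole list of nodes at once, group it - node names are examples:

select node_name, min(backup_date) as "first backup" -
 from backups where node_name in ('NODE1','NODE2') -
 group by node_name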

Bill Colwell
Draper lab



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
 Of Shawn Drew
 Sent: Thursday, August 23, 2007 11:48 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: First Backup day
 
 We have a request to provide the first day backups were run for a list
 of
 nodes.  The registration date isn't good, as there is a
 several day lag from registration to first backup.
 These dates are older than the age of the event and activity log.  Can
 anyone suggest a select statement for this?
 
 
 
 Shawn Drew
 Data Protection Engineer
 Core IT Production
 Office:   212.471.6998
 Mobile: 917.774.8141
 


Re: Volumeusage plus occupancy equals shrug?

2007-07-19 Thread Colwell, William F.
 
Allen,

use the 'query nodedata' command, which was added in 5.3(?)
as part of the collocation group feature.  For example -

tsm: serverx> q nodedata * vol=84

Node Name        Volume Name      Storage Pool     Physical Space
                                  Name             Occupied (MB)
---------------  ---------------  ---------------  --------------
node1            84               TP2_PRIME              9,723.18
node2            84               TP2_PRIME                268.29
node22           84               TP2_PRIME                167.32
node99           84               TP2_PRIME              2,889.26
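
If you only want the list of nodes on the volume, without the space
figures, a select against the volumeusage table is another way (an
untested sketch; plug in your volume name):

select distinct node_name, filespace_name
 from volumeusage
  where volume_name = '84'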



Bill Colwell


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Allen S. Rout
Sent: Thursday, July 19, 2007 12:32 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Volumeusage plus occupancy equals shrug?

I'm interested in asking my TSM server, "Of the 1TB on volume T9, how
much is associated with each of the two dozen nodes I know to be
present on the volume?".

I know I can calculate that by performing a prohibitive amount of work
on e.g. CONTENTS.  Not interested. :)

Are there any queries (or perhaps SHOWs) anyone's ever come up with to
get that additional detail out of the server?


- Allen S .Rout


Solaris restore problem

2007-07-16 Thread Colwell, William F.
Hi,
 
one of my Solaris admins had to restore the boot disk recently.  It
didn't go well!  Everything was restored eventually, but it took a long
time because the mount points and symbolic links were never backed up.
He had to make them manually and restart the restore.  The client OS is
Solaris 8, TSM client 5.1.6.0.  (My TSM server is on Solaris 9, level
5.3.4.3.)

I see a feature added at the 5.2.2 client level called
'include.attribute.symlink'.
My question for anyone supporting Solaris is: did you have this problem,
and did 'include.attribute.symlink' fix it?
 
Thanks,
 
Bill Colwell
Draper Lab
 
Here is the admins note to me --
 
We have a Solaris 8 machine with the following filesystems:
 
# df -k
Filesystem                       kbytes      used      avail  capacity  Mounted on
/dev/dsk/c0t0d0s0               4131866   3423633     666915       84%  /
/proc                                 0         0          0        0%  /proc
mnttab                                0         0          0        0%  /etc/mnttab
fd                                    0         0          0        0%  /dev/fd
/dev/dsk/c0t0d0s5                  4920   7211849    3791922       66%  /var
swap                          207058792        40  207058752        1%  /var/run
swap                          207902688    843936  207058752        1%  /tmp
/dev/md/dsk/d3                246393677 117406778  126522963       49%  /vsmodels
/dev/md/dsk/d4                246393677  60366051  183563690       25%  /vstools
/dev/md/dsk/d5                702176891 612652932   82502191       89%  /home
gbc:/vol/vol1/tools           250524060 230791752   19732308       93%  /nfs/tools
fs1:/export/pub/SunOS-5.8-sparc
                              390523840 211332012  179191828       55%  /nfs/pub
 
We lost the boot disk (c0t0d0) and had to restore the / and /var
filesystems from TSM backups (by attaching the replacement disk to
another machine and performing a cross-restore).  All of the filesystems
listed above (except for / itself) are mounted on empty directories
(mount points) in either / or /var.  But, because TSM backs up each
filesystem as a separate filespace, and because it does not back up
explicitly excluded filesystems (/tmp, /var/run), NFS filesystems
(/nfs/*), or special system filesystems (/proc, /etc/mnttab, /dev/fd) at
all, the empty directories that serve as the mount points do not get
backed up as part of the parent filespace backup.  Therefore, when one
does a full restore of any filesystem, it is necessary to manually
re-create all of the mount points for anything that mounts within it (not
something one wants to have to remember to do when one has been up all
night waiting for the restore to complete, with angry users clamoring to
get on!).

Since TSM is smart enough to determine what to exclude (either explicitly
or implicitly) and what is a separate filespace, it ought to be clever
enough to put into the backup for any filespace all of the empty
directories that will be necessary to mount the things that were excluded
or backed up separately.


Re: HSM for Windows

2007-03-29 Thread Colwell, William F.
Debbie,

I have a suggestion to get you 6 months on disk, then push to tape.
This will work especially well with FILE-type disk.  Make multiple
stgpools, for example march, april, etc.  Using a script or schedule,
update the archive destination to use the current month's stgpool.  Using
another script or schedule, migrate the pool 6 months later.  The FILE
type scratch volumes will be freed, so you will have only a net of 6
months' worth of migrated files stored on TSM disk.
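
In command terms, the monthly rotation could look roughly like this (a
sketch only - the pool, devclass and domain names are made up, and each
monthly pool is assumed to have the tape pool as its nextstgpool):

def stgpool arch_april fileclass maxscratch=100 nextstgpool=archtape
upd copygroup standard standard standard type=archive destination=arch_april
activate policyset standard standard

and six months later, to push a month's pool out to tape:

upd stgpool arch_october hi=0 lo=0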

Thanks,

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Weeks, 
Debbie
Sent: Wednesday, March 28, 2007 8:26 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: HSM for Windows

The problem is that to satisfy customer requests for archived data, it has to 
stay on disk.  Tape retrievals take too long.  We would prefer a hierarchical 
system that would allow, for example, files not touched in 6 months to go to 
disk, then if still not touched for another 6 months they migrate to tape.  
With the way this product works we will have to either manually create the 
hierarchy, or leave everything on disk.  Not much of a savings there.  Might as 
well just add the extra disk to the file server. 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Francisco 
Molero
Sent: Wednesday, March 28, 2007 6:22 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: HSM for Windows

Hi,

I don't agree with you; I think it is more or less a good product.  The
only problem I have found is with Reconcile Files.  But you can back up
and restore stub files from the TSM client, and you can restore one stub
and recall the file from HSM; you don't need to recall all the files.  If
you lose a directory with many files, restoring just the stub files is much
quicker.  The other point is that you need to establish an archive copy
group with a long retention, because recalling a file depends on the data
behind the stub, and that data can be expired in the TSM server.

- Original Message -
From: Allen S. Rout [EMAIL PROTECTED]
To: ADSM-L@VM.MARIST.EDU
Sent: Wednesday, March 28, 2007 15:10:27
Subject: Re: HSM for Windows


 On Wed, 28 Mar 2007 08:49:33 -0400, Weeks, Debbie [EMAIL PROTECTED] said:


 Thanks.  We are HIGHLY disappointed with this product.

When I heard about it ( Long Time Ago: Last Oxford ) my comment was that it is 
not really like anything that I've seen labeled HSM before.

Previous discussion, last January, starts:

http://www2.marist.edu/htbin/wlvtype?ADSM-L.118435

I piped up here:

http://www2.marist.edu/htbin/wlvtype?ADSM-L.118453


I think your observations mesh well with mine, though you're looking at it from 
a slightly different perspective.


Beware about the back up the migrated stub file problem.


- Allen S. Rout







Re: Shrinking scratch pools - tips?

2007-03-23 Thread Colwell, William F.
Chip,

I would check first for volume leaks.  If this select returns anything
it is bad -

select volume_name from libvolumes where status = 'Private' and owner is
null

I have also had a different kind of leak, where I have too many filling
tapes.  If you aren't collocating (!), then you should have only as many
filling tapes as migration tasks you run.  If collocating by node, only
as many as you have nodes; if by group, only as many as you have groups.

I have seen - and I don't know why - every group in a server stop
writing to the current filling tape and start another one.
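
A quick census of filling tapes per pool (an untested sketch):

select stgpool_name, count(*) as "Filling"
 from volumes
  where status = 'FILLING'
   group by stgpool_name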

Thanks,

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Bell, Charles (Chip)
Sent: Friday, March 23, 2007 10:41 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Shrinking scratch pools - tips?

Since this is a GREAT place for info, etc., I thought I would ask for
tips/how-to's on tracking down why my scratch pools are dwindling, for
LTO/LTO2/VTL. My guess is I have a couple of clients that are sending
out a
vast amount of data to primary/copy. But without a good reporting tool,
how
can I tell? Expiration/reclamation runs fine, and I am going to run a
check
against my Iron Mountain inventory to see if there is anything there
that
should be here. What else would you guys/gals look at?  :-)  Thanks in
advance!

 

God bless you!!! 

Chip Bell 
Network Engineer I
IBM Tivoli Certified Deployment Professional 
Baptist Health System 
Birmingham, AL 



 



-
Confidentiality Notice:
The information contained in this email message is privileged and
confidential information and intended only for the use of the
individual or entity named in the address. If you are not the
intended recipient, you are hereby notified that any dissemination,
distribution, or copying of this information is strictly
prohibited. If you received this information in error, please
notify the sender and delete this information from your computer
and retain no copies of any of this information.


Re: Active Only Storage Pools for DR

2007-02-16 Thread Colwell, William F.
Hi,

I did a little test of active-only pools and they do have inactive files
in them.
The way they differ from ordinary pools is that during reclaim all
inactive
versions will be squeezed out, whereas with ordinary pools only expired
versions
are.  Except at the very start of the AOP implementation there will
always be inactive versions.
You just can't reclaim that quickly.

Your copypool will still have inactive versions too unless you reclaim it
aggressively.

The feature is mis-named; it should be 'almost active-only if
aggressively reclaimed'.

I hope someone else will run some simple test of this.  When I did mine,
'query contents' of the volumes showed the inactive files until the
reclaim was done.
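
For anyone who wants to repeat the test, the outline is short (a sketch
only - the pool, devclass and domain names are made up):

def stgpool activepool ltoclass pooltype=activedata maxscratch=20
upd domain standard activedestination=activepool
copy activedata backuppool activepool
reclaim stgpool activepool threshold=60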

Bill Colwell
Draper Lab

 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
TSM_User
Sent: Thursday, February 15, 2007 2:02 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Active Only Storage Pools for DR

For years I've been asked by my customers if they could have many
versions for files in their primary pools while limiting the versions in
their copy pools to 1 for disaster recovery.
   
  In reading up on the new TSM V5.4 feature Active-Only Storage Pools
it looks like this is now a reality. I could create an Active-Only
storage pool (limited to backup data, no archive data). This new pool
would now become my new destination pool for my backup storage pool
command.  I could even go one step further and choose to collocate this
data by node. The end result would be a set of tapes at DR that would
not have to skip over any files when performing a restore.
   
  I realize great consideration has to be given before implementing
something like this because if the active file is corrupt you wouldn't
be able to recover a previous version. Still, in the case of DR I know I
have many customers that would accept the risk in order to reduce the
amount of data they have offsite and to speed up their restores.
   
  I know that you can set a tape in an active only storage pool to
offsite so I'm assuming that it will be included with move drm.  I still
haven't completed testing myself yet though.
   
  I'm wondering if anyone out there is considering this as well?

 


Re: Active Only Storage Pools for DR

2007-02-16 Thread Colwell, William F.
Helder,

You have to use 'copy activedata' first.  But then 'backup stgpool'
will,
will,
on the same day, copy only active data, because that is all there is
until the next nights backups occur. But this is no different than
backing
up from the standard backuppool.  After that, time marches on and the
copypool tapes, regardless of which pool is backed up, will develop
holes of inactive versions.

On first hearing, active-data only sounds great, but there isn't any
magic to it.

Bill Colwell
Draper Lab 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Helder Garcia
Sent: Friday, February 16, 2007 5:54 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Active Only Storage Pools for DR

Which command did you use? copy activedata or backup stgpool?

On 2/16/07, Colwell, William F. [EMAIL PROTECTED] wrote:

 Hi,

 I did a little test of active-only pools and they do have inactive
files
 in them.
 The way they differ from ordinary pools is that during reclaim all
 inactive
 versions will be squeezed out, whereas with ordinary pools only
expired
 versions
 are.  Except at the very start of the AOP implementation there will
 always be inactive versions.
 You just can't reclaim that quickly.

 You copypool will still have inactive version too unless you reclaim
it
 aggressively.

 The feature is mis-named; it should be 'almost active-only if
 aggressively reclaimed'.

 I hope someone else will run some simple test of this.  When I did
mine,
 query contents
 of the volumes showed the inactive files until the reclaim was done.

 Bill Colwell
 Draper Lab



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
Of
 TSM_User
 Sent: Thursday, February 15, 2007 2:02 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Active Only Storage Pools for DR

 For years I've been asked by my customers if they could have many
 versions for files in their primary pools while limiting the versions
in
 their copy pools to 1 for disaster recovery.

   In reading up on the new TSM V5.4 feature Active-Only Storage
Pools
 it looks like this is now a reality. I could create an Active-Only
 storage pool (limited to backup data, no archive data). This new pool
 would now become my new destination pool for my backup storage pool
 command.  I could even go one step further and choose to collocate
this
 data by node. The end result would be a set of tapes at DR that would
 not have to skip over any files when performing a restore.

  I realize great consideration has to be given before implementing
 something like this because if the active file is corrupt you wouldn't
 be able to recover a previous version. Still, in the case of DR I know
I
 have many customers that would accept the risk in order to reduce the
 amount of data they have offsite and to speed up their restores.

   I know that you can set a tape in an active only storage pool to
 offsite so I'm assuming that it will be included with move drm.  I
still
 haven't completed testing myself yet though.

   I'm wondering if anyone out there is considering this as well?






--
Helder Garcia


Re: Getting and using time values with sql

2007-01-31 Thread Colwell, William F.
Gary,

I have this sql in a script I am using right now to move the database to
new volumes.

upd scr add-dbcopy 'select ''ok'' from status where hour(current_time) -
7 > 0'
upd scr add-dbcopy 'if(rc_notfound) goto do_moves' 

To do what you want I suggest -

upd scr add-dbcopy 'select ''ok'' from status where hour(current_time) -
4 < 0'
upd scr add-dbcopy 'if(rc_ok) goto process'
upd scr add-dbcopy 'select ''ok'' from status where hour(current_time) -
9 > 0'
upd scr add-dbcopy 'if(rc_ok) goto process'
upd scr add-dbcopy 'exit'

(
ok, I just tested something better -
upd scr add-dbcopy 'select ''ok'' from status where hour(current_time)
between 4 and 8'
upd scr add-dbcopy 'if (rc_ok) exit'
)


Hope this helps,

Bill Colwell
Draper Lab



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Lee, Gary D.
Sent: Wednesday, January 31, 2007 8:03 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Getting and using time values with sql

Tsm server 5.2.9.0, running on solaris 2.8.
I would like to restrict a tsm script from running between the hours of
4:00 and 9:00 a.m.

I haven't figured out how to test the value of time in a script, and
branch on that test.
Any pointers would be helpful.


Gary Lee
Senior System Programmer
Ball State University
 


[no subject]

2006-11-08 Thread Colwell, William F.
Hi Matt,

I run this command in a script to check free cells

tsm: LIBRARY_MANAGER> select 678 - count(*) as "Free cells" from libvolumes

 Free cells
-----------
         44

'678' is the number of cells in my library.  Plug in the number of cells
in
a 3584.
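
If the server owns more than one library, a where clause keeps the count
to the one you care about (a sketch; the library name is made up):

select 678 - count(*) as "Free cells" from libvolumes
 where library_name = 'LIB3584'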

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Martinez, Matt
Sent: Wednesday, November 08, 2006 11:23 AM
To: ADSM-L@VM.MARIST.EDU
Subject:

Hi all,

I am having problems writing a TSM script or command to find the
number of empty slots in an IBM 3584 library.  I am running TSM 5.2.4 on
Win2K, and any help will be appreciated.

Thank You,
Matt Martinez
Systems Administrator
IDEXX Laboratories, Inc.
Phone:207-856-0656
Fax:207-856-8320
[EMAIL PROTECTED]


Re: Backup failure

2006-11-02 Thread Colwell, William F.
Hi,

message anr1639i provides details about node changes.  I report on them
every day with TSM Operational Reporting.

tsm: xxx> help anr1639i

---

ANR1639I Attributes changed for node nodeName: changed attribute list.

Explanation: The TCP/IP name or address, or the globally unique
identifier
(GUID) has changed for the specified node. The old and new values are
displayed for entries that have changed.

System Action: Server operation continues.

User Response: None.


Bill Colwell


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Roger Deschner
Sent: Thursday, November 02, 2006 12:56 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Backup failure

Looks to me like it could be two different machines trying to use one
node name. The biggest clue that this is the problem is your message
ANR2576W. Reboot the client machine, in case it really did get two
schedulers running. But it's more likely two different machines trying
to share one nodename. At any rate, do

q actlog begindate=-2 search=GBTLSWIOA001

and just check the IP addresses it is connecting from. This is one way
to catch cheaters who use one node name on two machines. Here's what
you're looking for:


2006-11-01 23:15:25  ANR0406I Session 81867 started for node
GBTLSWIOA001
  (WinNT) (Tcp/Ip 111.222.111.16(2566)).
then a bit later...
2006-11-01 23:20:25  ANR0403I Session 81867 ended for node
GBTLSWIOA001
  (WinNT).

This is as it should be. But if you have

2006-11-01 23:15:25  ANR0406I Session 81867 started for node
GBTLSWIOA001
  (WinNT) (Tcp/Ip 111.222.111.16(2566)).
2006-11-01 23:25:25  ANR0406I Session 81895 started for node
GBTLSWIOA001
  (WinNT) (Tcp/Ip 111.222.111.50(2566)).

...Now you've got your perpetrator. They're running two machines on one
nodename, and you now have the IP addresses of both of them. Go gettum!

Q NODE GBTLSWIOA001 F=D shows the IP address this node last connected
from, which can be useful.

Beware that IP addresses can change if the client node uses DHCP (e.g., a
laptop), but even if the IP addresses change, you should see a start and
an end. If you see several starts from the same IP address, that is
normal too, especially if they have RESOURCEUTILIZATION set higher than
1. What you are looking for is several starts from different IP
addresses before you get the matching ends.
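
If eyeballing the actlog gets tedious, a select can pull out just the
session-start messages for the node (an untested sketch; 406 is the
message number of ANR0406I):

select date_time, message from actlog
 where msgno = 406 and message like '%GBTLSWIOA001%'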

Also try

q filespace GBTLSWIOA001

and see if you can eyeball what looks like two different primary drives:

ROGERD.ADSM1   \\rogerd\c$   1   WinME
ROGERD.ADSM1   \\861077\c$   3   WinNT

Aha! This one has two different Windows C:\ drives. This is harder to
spot on unix-ish systems (including Mac OSX) because they all have the
same filespace names. You can eliminate false positives here by doing
Q FILESPACE F=D which shows the last backup start and end dates. It's
possible that one of those duplicate C:\ drives is from an old OS or
old machine that got upgraded, and backup dates will tell you that.

The most frequent cause of cheating like this is not people who set
out to beat the system, but rather people who replace their computer
with a new one and give away their old computer to a lucky colleague.
It still has the old TSM client including the scheduler on it, and
like the Energizer Bunny, it keeps going, and going, and going. We
find that most people with these Energizer Bunny nodes don't even know
they're backing up. The key message to look for to spot this kind of
problem is the ANR2576W you show below.

You might want to set minimum throughput thresholds, also. See the TSM
Admin Guide.

Roger Deschner  University of Illinois at Chicago [EMAIL PROTECTED]


On Wed, 1 Nov 2006, Gopinathan, Srinath wrote:

Hi All,

I am having a backup which is failing regularly.  There are no objects
showing as failed.  However, the backup is failing with the
following errors.

Any help on this would be appreciated.

Regards,
Srinath G

10/31/2006 15:51:51  ANR0482W Session 694902 for node GBTLSWIOA001
(WinNT)
  terminated - idle for more than 750 minutes.
(SESSION:
  694902)
10/31/2006 15:51:51  ANR2579E Schedule TBO_SEV3_INCR_2 in domain
SWITBOW
for
  node GBTLSWIOA001 failed (return code 12).
(SESSION:
  694889)
10/31/2006 15:51:51  ANR2576W An attempt was made to update an event
record for
  a scheduled operation which has already been
executed -
  multiple client schedulers may be active for node
  GBTLSWIOA001. (SESSION: 694889)
10/31/2006 15:51:51  ANR0480W Session 694889 for node GBTLSWIOA001
(WinNT)
  terminated - connection with client severed.
(SESSION:
  694889)


Re: Mixed drives in a library

2006-10-04 Thread Colwell, William F.
Hi James,

I went thru this 6 months ago, going from 8 lto2 to 4 lto3 and 4 lto2.

I can't say this is a complete procedure, but here are some key points.

Go to version 5.3.3.* for the server and device driver.  lto3 media
isn't
recognized at a lower version.

My lto2 device classes had format=drive.  This will make trouble in a
mixed
library.  Change the lto2 devclass(s) to format=ultrium2c  and restart
your
server.  Since I run with 4 servers - 1 library manager and 3 backup
servers -
I had to stop all of them and restart the library manager first, then
the other
3.

Define new devclass(es) for lto3 with format=ultrium3c.  Then define
drives and paths.  After that, the server and/or device driver knows what
tape to put on what drive.  Be aware that while lto3 tapes won't be
mounted on lto2 drives, lto2 tapes will be mounted on lto3 drives for
both reading and writing.
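
In command terms it went roughly like this (a sketch only - the class,
library, drive and device names are made up):

upd devclass lto2class format=ultrium2c
def devclass lto3class devtype=lto format=ultrium3c library=lib1
def drive lib1 lto3_drv1
def path server1 lto3_drv1 srctype=server desttype=drive library=lib1 device=/dev/rmt/8st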

My drives are all scsi, so I don't know if the scsi and fiber mix will
have
issues.

My servers are on Solaris with an STK l700e.

Hope this helps,

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
James Choate
Sent: Wednesday, October 04, 2006 11:44 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Mixed drives in a library

Anyone have any suggestions for how to add lto3 drives into an existing
library that has lto2 drives. I'm trying to figure out how to add the
lto3 drives / devclass for lto3 drives / new stgpools while at the same
time ensuring that the new lto3 media gets used by lto3 drives only.

Any ideas or suggestions are welcomed.

The TSM server is 5.2.7.3
OS: AIX 5.2 ML8
LTO2 drives: SCSI
LTO3 drives: fibre

Thanks,
James Choate


Re: Adding Tables to TSM Database

2006-09-26 Thread Colwell, William F.
Hi Adrian,

You could create a master script which runs daily at 00:00 and
checks for holidays, and then inactivates schedules.  A sql
statement like this can find any particular date -

select 'HOLIDAY' from status where year(current_date) = 2006 and
month(current_date) = 9 and day(current_date) = 26

Unnamed[1]
--
HOLIDAY
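
The master-script idea, in outline (a sketch only - the script name and
the gated command are made up, and you would repeat the holiday test for
each date that matters):

def script holiday_gate 'select ''HOLIDAY'' from status where month(current_date) = 12 and day(current_date) = 25'
upd scr holiday_gate 'if (rc_ok) exit'
upd scr holiday_gate 'run nightly_maintenance'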


Hope this helps,

Bill Colwell

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Adrian Compton
Sent: Tuesday, September 26, 2006 4:38 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Adding Tables to TSM Database
Importance: High

Hi all,

I would like to use a calendar in TSM to determine when to run (or not
run) scripts and schedules.  The use of TSM's dayofweek, month, etc. does
not go deep enough, and there are certain things I would like to schedule
based on a calendar specific to my organisation, e.g. public holidays.

Can one easily add tables to the TSM database, or should this be avoided?
Has anyone tried something similar?  I am using 5.3 on AIX 5.3 (p500 LPAR)

Thanks in advance
Adrian Compton


Solaris server option 'DISKMAP' - is anyone using it?

2006-09-22 Thread Colwell, William F.
Hi,

I was looking over the options ('q opt') and saw the diskmap option.  I
looked it up in the
admin ref and it sounds interesting.  Is anyone using it?  Does it do
any good?  Here is
the text from the online manual -


DISKMAP


Specifies one of two ways the server performs I/O to a disk storage
pool:

*   Maps the client data to memory.
*   Writes the client data directly to disk.

You can switch from one method to the other. The default is to write
directly to disk. To determine the best method for your system, perform
the same operation (for example, a client file backup) for each setting.



Thanks,

Bill Colwell