Re: Restoring virus infected file halts TSM client

2023-01-24 Thread Bent Christensen (BVC)
Hi Andrew,

Thanks for your response and suggestion.

I have been using the weekend to dig a little deeper into the issue, and it 
turns out that if I just restore the folder containing the infected file, TSM 
restores all the other files and just responds with:

01/19/2023 15:58:57 ANSE ..\..\common\winnt\ntrc.cpp(784): Received Win32 
RC 225 (0x00e1) from HlClose(): CreateFile. Error description: Operation 
did not complete successfully because the file contains a virus or potentially 
unwanted software.

But if I run 3-4 or more DSMC RESTORE sessions simultaneously, the session 
that has the infected file terminates with this in DSMERROR.LOG:
01/18/2023 17:27:31 ANSE ..\..\common\winnt\ntrc.cpp(784): Received Win32 
RC 225 (0x00e1) from HlClose(): CreateFile. Error description: Operation 
did not complete successfully because the file contains a virus or potentially 
unwanted software.
01/18/2023 17:27:42 ANS1028S An internal program error occurred.

In the latter scenario the server receiving the restore is under heavy CPU 
load, with Windows Defender using the major part of the CPUs.
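For reference, the parallel sessions can be reproduced with a simple loop. This is a dry-run sketch that only prints the commands (folder paths are illustrative; replace echo with the real dsmc to actually launch the sessions):

```shell
# Print the dsmc commands that would start four concurrent restore
# sessions, one per top-level folder (dry run: echo instead of dsmc).
for i in 1 2 3 4; do
  echo "dsmc restore E:\\data$i\\* -subdir=yes -replace=all &"
done
```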

So I will open a case with IBM Support and report this.

 - Bent

-Original Message-
From: ADSM: Dist Stor Manager  On Behalf Of Andrew Raibeck
Sent: Thursday, January 19, 2023 2:19 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Restoring virus infected file halts TSM client

Hello Bent,

Without knowing the specific details of the errors you see, one thing you can 
try is to add this line to the dsm.opt file:

TESTFLAGS CONTINUERESTORE

Restart the client, and see if that causes the operation to continue with the 
next file after an error is reported.
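In context, the option sits in the client options file alongside the usual entries. An illustrative dsm.opt excerpt (the node name and server address are examples; TESTFLAGS values are undocumented diagnostic switches, so confirm their use with IBM Support):

```text
* dsm.opt - illustrative excerpt
NODENAME          MYNODE
TCPSERVERADDRESS  tsm.example.com
TESTFLAGS         CONTINUERESTORE
```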

If that does not work, then what error message(s) do you see? What messages, 
coincident with the failed restore, are logged to dsmerror.log? Be sure to 
include the full text, though you can redact user names and file names, as 
appropriate.

Based on that, I might have some other ideas, or else I will suggest opening a 
case with IBM Support.

Unsolicited thought that might be redundant, but I mention it anyway :-) please 
use appropriate care when restoring the files, even if the AV software is 
guarding against suspicious files.

Regards,

Andy

Andrew Raibeck
IBM Spectrum Protect Level 3
IBM Storage
stor...@us.ibm.com

IBM

-Original Message-
From: ADSM: Dist Stor Manager  On Behalf Of Bent 
Christensen (BVC)
Sent: Thursday, 19 January, 2023 06:02
To: ADSM-L@VM.MARIST.EDU
Subject: [EXTERNAL] Restoring virus infected file halts TSM client

Hello,

Just wondered if anyone has had the same issue and maybe found a solution for 
it:

Now and then we are tasked with restoring data that was backed up a very long 
time ago back to Windows file shares. In a few cases it turns out that some of 
these old files are infected by a virus or malware that was not detected by the 
AV application at the time the malicious file was written.

When the TSM client tries to restore an infected file back to a Windows server, 
the AV application on the Windows server will of course prevent the file from 
being written. However, the TSM client interprets this as a disk error (or 
something similar) and terminates the restore process, so any subsequent 
non-infected files are not restored, making it almost impossible to do 
unmonitored restores of these data sets.

I would really appreciate any ideas for circumventing this (other than 
disabling the AV application while restoring).
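Until the client handles this better, one workaround for unmonitored restores is to scan dsmerror.log afterwards for the Windows "file contains a virus" return code (Win32 RC 225). A sketch, using a canned log line for the demo (point the grep at the real dsmerror.log in practice):

```shell
# Flag restores where the AV blocked a file: Win32 RC 225 (0x00e1)
# is "Operation did not complete ... because the file contains a virus".
log_line='Received Win32 RC 225 (0x00e1) from HlClose(): CreateFile.'
if printf '%s\n' "$log_line" | grep -q 'RC 225 (0x00e1)'; then
  echo 'virus-blocked file(s) were skipped during this restore'
fi
```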

Regards

Bent


COWI handles personal data as stated in our Privacy 
Notice<https://www.cowi.com/privacy>.


Restoring virus infected file halts TSM client

2023-01-19 Thread Bent Christensen (BVC)
Hello,

Just wondered if anyone has had the same issue and maybe found a solution for 
it:

Now and then we are tasked with restoring data that was backed up a very long 
time ago back to Windows file shares. In a few cases it turns out that some of 
these old files are infected by a virus or malware that was not detected by the 
AV application at the time the malicious file was written.

When the TSM client tries to restore an infected file back to a Windows server, 
the AV application on the Windows server will of course prevent the file from 
being written. However, the TSM client interprets this as a disk error (or 
something similar) and terminates the restore process, so any subsequent 
non-infected files are not restored, making it almost impossible to do 
unmonitored restores of these data sets.

I would really appreciate any ideas for circumventing this (other than 
disabling the AV application while restoring).

Regards

Bent




SQL query for outstanding requests?

2020-07-06 Thread Bent Christensen (BVC)
Hi,

Does anyone know how to query the TSM DB2 for outstanding requests with a 
SELECT statement?

I basically need to do much the same as QUERY REQUEST does, but I only need a 
Yes or No and would like to avoid parsing the output of QUERY REQUEST.
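I don't know of a supported catalog table for requests, but one pragmatic fallback that avoids parsing the formatted output is to reduce QUERY REQUEST to a yes/no in a small wrapper. A sketch: it assumes the ANR8346I "no requests are outstanding" message id (verify it on your server level) and uses canned output for the demo; in practice you would capture the output with dsmadmc -dataonly=yes.

```shell
# Reduce QUERY REQUEST output to YES/NO: ANR8346I means "QUERY REQUEST:
# No requests are outstanding."; anything else implies pending requests.
has_requests() {
  if printf '%s\n' "$1" | grep -q 'ANR8346I'; then
    echo NO
  else
    echo YES
  fi
}

# Demo with canned output; for real use, something like:
#   out=$(dsmadmc -id=admin -password=xxx -dataonly=yes "query request")
has_requests 'ANR8346I QUERY REQUEST: No requests are outstanding.'
```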


 - Bent




Re: server interim fix level 8.1.9.300

2020-04-21 Thread Bent Christensen (BVC)
Due to:

IT31576: CONTAINER COPY TAPES DO NOT RETURN FROM PENDING STATE PROPERLY

IT32181: AFTER SERVER UPGRADE TO 8.1.9.100 "PROTECT STGPOOL" DOES NOT PROTECT 
ANY EXTENTS

I guess the latter one is why 8.1.9.100 was withdrawn?

 - Bent

-Original Message-
From: ADSM: Dist Stor Manager  On Behalf Of Christian 
Scheffczyk
Sent: Tuesday, April 21, 2020 12:39 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] server interim fix level 8.1.9.300

Hello Bent,

thanks for the info! Do you have more details?
I'm using PROTECT STGPOOL on some instances but I don't see any errors?
O.k., I will schedule downtime.

Thanks and kind regards, Christian


Am 21.04.2020 um 12:27 schrieb Bent Christensen (BVC):
> If you are using PROTECT STGPOOL, either local or to another server, then: 
> Update!!!
>
>   - Bent
>
> -Original Message-
> From: ADSM: Dist Stor Manager  On Behalf Of
> Christian Scheffczyk
> Sent: Tuesday, April 21, 2020 11:54 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] server interim fix level 8.1.9.300
>
> Hello,
>
> yesterday I received emails from "IBM My Notifications" that server fix pack 
> 8.1.9.300 appeared, but I cannot find anywhere what changed?
> Document https://www.ibm.com/support/pages/node/1275274 has the download 
> links and document https://www.ibm.com/support/pages/node/1171654 says 
> "Interim fix 8.1.9.300 did not include APAR updates."
> So, what's new and is it important?
> Furthermore I have some of my servers running at level 8.1.9.100 which is 
> withdrawn from the ftp server, why?
> Is it safe to run 8.1.9.100 or should I update asap?
> Could someone from IBM give a comment on this, thank you very much!
>
> Kind regards, Christian
> --
> Dr. Christian Scheffczyk 
> Philipps-Universität, Hochschulrechenzentrum (HRZ)
> Hans-Meerwein-Straße 6, Raum 05A06, D-35043 Marburg
> Fon: +49 (0)6421 28-23519, Fax: +49 (0)6421 28-26994
>


--
Dr. Christian Scheffczyk  Philipps-Universität, 
Hochschulrechenzentrum (HRZ) Hans-Meerwein-Straße 6, Raum 05A06, D-35043 Marburg
Fon: +49 (0)6421 28-23519, Fax: +49 (0)6421 28-26994


Re: server interim fix level 8.1.9.300

2020-04-21 Thread Bent Christensen (BVC)
If you are using PROTECT STGPOOL, either local or to another server, then: 
Update!!!

 - Bent

-Original Message-
From: ADSM: Dist Stor Manager  On Behalf Of Christian 
Scheffczyk
Sent: Tuesday, April 21, 2020 11:54 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] server interim fix level 8.1.9.300

Hello,

yesterday I received emails from "IBM My Notifications" that server fix pack 
8.1.9.300 appeared, but I cannot find anywhere what changed?
Document https://www.ibm.com/support/pages/node/1275274 has the download links 
and document https://www.ibm.com/support/pages/node/1171654 says "Interim fix 
8.1.9.300 did not include APAR updates."
So, what's new and is it important?
Furthermore I have some of my servers running at level 8.1.9.100 which is 
withdrawn from the ftp server, why?
Is it safe to run 8.1.9.100 or should I update asap?
Could someone from IBM give a comment on this, thank you very much!

Kind regards, Christian
--
Dr. Christian Scheffczyk  Philipps-Universität, 
Hochschulrechenzentrum (HRZ) Hans-Meerwein-Straße 6, Raum 05A06, D-35043 Marburg
Fon: +49 (0)6421 28-23519, Fax: +49 (0)6421 28-26994


Re: Spectrum Protect PVU licensing

2019-07-09 Thread Bent Christensen (BVC)
Thanks, Del
I have requested them to do so.

 - Bent

-Original Message-
From: ADSM: Dist Stor Manager  On Behalf Of Del Hoobler
Sent: Monday, July 8, 2019 5:19 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Spectrum Protect PVU licensing

Hi Bent,

Please have the auditors contact me directly ASAP.

Thank you.

Del



"ADSM: Dist Stor Manager"  wrote on 07/08/2019
11:08:58 AM:

> From: "Bent Christensen (BVC)" 
> To: ADSM-L@VM.MARIST.EDU
> Date: 07/08/2019 11:15 AM
> Subject: [EXTERNAL] Spectrum Protect PVU licensing Sent by: "ADSM:
> Dist Stor Manager" 
>
> Hi,
>
> We are just going through an IBM license audit and to our overwhelming
> astonishment the IBM auditors want to charge us for licenses for test
> nodes - that is, the computers where we test the installation of new
> fix packs and interim fixes before deploying to our production
> environment.
> In our company, deploying untested software is a very efficient way of
> getting a permanent and self-paid vacation, and we really do not want
> to pay for evaluating other people's faulty software. If it was not
> for the test node licenses, we are fully compliant.
>
> So, have any of you been subject to demands for licensing test nodes?
>
> I should mention that we are currently PVU licensed but looking into
> capacity licensing so we can evolve our virtual environment backup
> strategy - but because of the audit results there is a more than fair
> chance that Veeam is going to run with that☹
>
>  - Bent, loyal TSM customer for 15+ years
>
>
>




Spectrum Protect PVU licensing

2019-07-08 Thread Bent Christensen (BVC)
Hi,

We are just going through an IBM license audit and to our overwhelming 
astonishment the IBM auditors want to charge us for licenses for test nodes - 
that is, the computers where we test the installation of new fix packs and 
interim fixes before deploying to our production environment.
In our company, deploying untested software is a very efficient way of getting 
a permanent and self-paid vacation, and we really do not want to pay for 
evaluating other people's faulty software. If it was not for the test node 
licenses, we are fully compliant.

So, have any of you been subject to demands for licensing test nodes?

I should mention that we are currently PVU licensed but looking into capacity 
licensing so we can evolve our virtual environment backup strategy - but 
because of the audit results there is a more than fair chance that Veeam is 
going to run with that☹

 - Bent, loyal TSM customer for 15+ years




Re: NtfsDisableLastAccessUpdate

2015-10-08 Thread Bent Christensen
Your offhand thought is correct.

NtfsDisableLastAccessUpdate goes all the way back to Windows 2000 where the 
time stamp of a directory would change if you did something with the files in 
it.

I find it hard to imagine that anyone backs up based on the last-access time 
stamp and not the last-written time stamp.

 - Bent


-Original Message-

All,

TSM 7.1.1 environment.
My Windows fileserver colleague is asking me about any potential backup 
ramifications from disabling the LastAccessUpdate on our new Windows 2012R2 
fileservers.
I didn't find any definitive answers via Mr. Google so I thought I'd reach out 
and see if anyone in the group had some knowledge.
My offhand thought is that since we don't do any HSM, we don't really use this 
particular bit, but...

Here is what he asked:

By default, Windows Server 2012 R2 disables Last Access Update on files and 
folders.
This is the registry location where this is stored:
HKLM\SYSTEM\CurrentControlSet\Control\FileSystem\NtfsDisableLastAccessUpdate
This KB article discusses this and says the following:
https://technet.microsoft.com/en-us/library/cc785435.aspx
The disablelastaccess parameter can affect programs such as Backup and Remote 
Storage that rely on this feature.
Does the TSM Baclient rely on this value? With it disabled do we lose anything 
with TSM?

Thanks,
Steve Schaub
Systems Engineer II, Backup/Recovery
Blue Cross Blue Shield of Tennessee


--
Please see the following link for the BlueCross BlueShield of Tennessee E-mail 
disclaimer:  http://www.bcbst.com/email_disclaimer.shtm


Re: Increase performance with SSD disks

2015-03-10 Thread Bent Christensen
Just to chip in my experiences after almost a year with TSM on enterprise SSDs:

OS drive (Windows): 100 GB RAID 1 - wear at 99.78 %
TSM log disk: 100 GB RAID 1 - wear at 99.49 % (log disk may be 128 GB - I know, 
but this was what I had)
TSM DB disks: 8 containers on 500 GB each, all on a 4 TB RAID5 - wear is hardly 
measurable. 

Db size is 1.3 TB and we are using both TSM dedup and node replication.

 - Bent 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Tuesday, March 10, 2015 3:15 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Increase performance with SSD disks

We have two large TSM servers, the larger with a database on the order of 1.4TB.
In the latest hardware refresh, I debated whether to buy SSDs for the database. 
I ended up sticking with the tried-and-true 15K SAS disks since I wasn't sure 
of the consequences of the high write load from DB2 on the lifespan of the SSDs.

On Tue, Mar 10, 2015 at 07:52:39AM -0600, Matthew McGeary wrote:
 Hello Robert,
 
 We use SSD arrays for both our database and our active log.  That 
 said, unless you are using TSM deduplication or node replication, SSD 
 disks should not be required for good server performance.  Standard 
 15K SAS drives are more than sufficient for regular server operations 
 when dedup and replication are not used.
 
 Your restore steps appear correct and should function as desired.
 
 Regards,
 __
 Matthew McGeary
 Technical Specialist - Operations
 PotashCorp
 T: (306) 933-8921
 www.potashcorp.com
 
 
 
 
 From:   Robert Ouzen rou...@univ.haifa.ac.il
 To: ADSM-L@VM.MARIST.EDU
 Date:   03/10/2015 07:28 AM
 Subject:[ADSM-L] Increase performance with SSD disks
 Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
 
 
 
 Hi to all
 
 To increase performance on my  TSM servers 7.1.1.0 , I think to 
 install my databases on SSD disks.
 
 I have some questions:
 
 
 - Will it be more efficient to put the active log on the SSD disks too?
 
 - Can I move the database and the active log in the same step, as follows:
 
 o   In dsmserv.opt, change the activelogdir to the new path on the SSD disk
 (before I run the restore db)
 
 o   dsmserv restore db todate=today on=dbdir.file (where dbdir.file contains
 the new paths on the SSD disks)
 Best Regards
 
 Robert
 



--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine


Re: DEVCLASS=FILE - what am I missing

2015-02-15 Thread Bent Christensen
Rumour has it that there is a dedupicable (is that a word?) DISK devclass 
coming up in one of this year's TSM releases - if it makes it through the beta.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Nick 
Laflamme
Sent: Friday, February 13, 2015 7:37 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] DEVCLASS=FILE - what am I missing

FILE allows deduplication; DISK doesn't.

My impression after some experimenting is that FILE wasn't meant to replace 
DISK; it was solely meant to replace tape device classes. We didn't need to, so 
those experiments quietly ended.
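For readers comparing the two, a minimal sketch of the server-side definitions being discussed (the device class name, pool names, sizes and the TAPEPOOL target are all illustrative):

```text
* FILE: sequential-access, deduplication-capable, but on out-of-space
* the server fails the transaction instead of spilling to NEXTSTGPOOL.
DEFINE DEVCLASS FILEDEV DEVTYPE=FILE DIRECTORY=S:\TSMDATA\STAGE_FILE MAXCAPACITY=50G MOUNTLIMIT=64
DEFINE STGPOOL FILEPOOL FILEDEV MAXSCRATCH=200 DEDUPLICATE=YES NEXTSTGPOOL=TAPEPOOL

* DISK: random-access, no deduplication, but overflow to NEXTSTGPOOL
* works the way most people expect.
DEFINE STGPOOL DISKPOOL DISK NEXTSTGPOOL=TAPEPOOL
```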


On Fri, Feb 13, 2015 at 12:30 PM, Zoltan Forray zfor...@vcu.edu wrote:

 WOW - I didn't realize that.  Thanks for pointing that out.

 Won't automatically go to nextstgpool,  didn't automatically reclaim?  
 So, what is the advantage/benefit of DEVCLASS=FILE?  Sounds like time 
 to go back to DEVCLASS=DISK

 On Fri, Feb 13, 2015 at 1:22 PM, Paul Zarnowski p...@cornell.edu wrote:

  At 12:12 PM 2/13/2015, Zoltan Forray wrote:
  Well, last night became a disaster.  Backups failing all over 
  because it couldn't allocate any more files and also would not 
  automatically shift
 to
  use the nextpool which is defined as a tape pool.
 
  Alas, TSM doesn't automatically roll over when the ingest pool is FILE.
  I really wish that it did.  Here's the relevant documentation for 
  NEXTSTG for FILE stgpools:
 
 When there is insufficient space available in the current storage
  pool, the NEXTSTGPOOL parameter for sequential access storage pools does
  not allow data to be stored into the next pool. In this case, the server
  issues a message and the transaction fails.
 
  ..Paul
 
 
  --
  Paul ZarnowskiPh: 607-255-4757
  Assistant Director for Storage Services   Fx: 607-255-8521
  IT at Cornell / InfrastructureEm: p...@cornell.edu
  719 Rhodes Hall, Ithaca, NY 14853-3801
 



 --
 *Zoltan Forray*
 TSM Software & Hardware Administrator
 BigBro / Hobbit / Xymon Administrator
 Virginia Commonwealth University
 UCC/Office of Technology Services
 zfor...@vcu.edu - 804-828-4807
 Don't be a phishing victim - VCU and other reputable organizations 
 will never use email to request that you reply with your password, 
 social security number or confidential personal information. For more 
 details visit http://infosecurity.vcu.edu/phishing.html



SV: TSM level for deduplication

2014-12-06 Thread Bent Christensen
Hi Thomas,

When you call 7.1.1 "an utter disaster" when it comes to dedup, what issues 
are you referring to?

I have been using 7.1.1 in a production environment dedupping some 500 TB, 
approx 400 nodes, without any major issues for more than a year now.

Surely, there are still lots of not-very-well-documented features in TSM 7, 
and I am not at all impressed by IBM support, especially not DB2 support and 
their lack of willingness to recognize TSM DB2 as a production environment, 
but when it comes to dedupping it has been smooth sailing for us up until now.


 - Bent


From: ADSM: Dist Stor Manager [ADSM-L@VM.MARIST.EDU] On Behalf Of Thomas 
Denier [thomas.den...@jefferson.edu]
Sent: 5 December 2014 20:56
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM level for deduplication

My management is very eager to deploy TSM deduplication in our production
environment. We have been testing deduplication on a TSM 6.2.5.0 test server,
but the list of known bugs makes me very uncomfortable about using that
level for production deployment of deduplication. The same is true of later
Version 6 levels and TSM 7.1.0. TSM 7.1.1.000 was an utter disaster. Is there
any currently available level in which the deduplication code is really fit
for production use?

IBM has historically described patch levels as being less thoroughly tested
than maintenance levels. Because of that I have avoided patch levels unless they
were the only option for fixing crippling bugs in code we were already using.
Is that attitude still warranted? In particular, is that attitude warranted for
TSM 7.1.1.100?

Has IBM dropped any hints about the likely availability date for TSM 7.1.2.000?

Thomas Denier
Thomas Jefferson University Hospital


The information contained in this transmission contains privileged and 
confidential information. It is intended only for the use of the person named 
above. If you are not the intended recipient, you are hereby notified that any 
review, dissemination, distribution or duplication of this communication is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender by reply email and destroy all copies of the original message.

CAUTION: Intended recipients should NOT use email communication for emergent or 
urgent health care matters.

Re: tsm administration via web

2014-09-25 Thread Bent Christensen
I am running TOC 7.1.1 on a virtual Windows 2008 R2, 4 vCPU, 12 GB memory, 50 
GB laid out for DB, 40 GB for active log, 100 GB for archive log. Monitoring 5 
TSM servers, approx. 300 clients.
Most of the time the CPUs are flatlining at 0 % and memory consumption at 4 GB. 
The database is just 3 GB.

I have been using this vm for some internal TSM education as well so it is 
probably grossly overspec'ed just for the TOC. 

But hey - once in a while the Vmware admins actually give you what you're 
asking for and then you just got to cling on to it :-)

 - Bent


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Rhodes, Richard L.
Sent: Tuesday, September 23, 2014 9:25 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] tsm administration via web


What has been suggested by sneaky folks, is that you could very easily 
install TSM 7.1.1 on a tiny doesn't-run-any-backups TSM server, to be 
your HUB server.  Then it could manage a spoke TSM 6.3.4 or higher 
server.

Gee, just what we were talking about! 

To ask the next sneaky/unofficial question, just what would the resources on a 
VM need to be for a minimal TSM that just runs the TOC (memory, disk for DB2, 
other)? 

Thanks 

Rick




-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Prather, Wanda
Sent: Tuesday, September 23, 2014 3:00 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: tsm administration via web

For all the things they are adding into the TOC (TSM Operations Center), they 
had to add new tables and functions in the TSM server itself (Which is why it 
has some cool stuff we've never had before!  The TOC developers are just 
awesome!)  

That means you always have to have the TOC talking to a TSM server at a level 
that has those mods, shown in the table that Erwann provided the link for 
below, and I suspect it will be the same for 7.1.1 and likely going forward.  
But that's just for the top level or hub TOC TSM server.  You can have a 
hub-and-spoke configuration.

The TOC is very lightweight, and if you have a single TSM sever the most common 
thing is to run the TOC on the same machine as your TSM server.  But the TOC 
can manage other spoke TSM servers, as that link below shows. 

What has been suggested by sneaky folks, is that you could very easily install 
TSM 7.1.1 on a tiny doesn't-run-any-backups TSM server, to be your HUB server.  
Then it could manage a spoke TSM 6.3.4 or higher server.  

And FWIW that tiny TSM can even be a VM.  The TSM server is supported on VMware 
if there are no tape drivers in use.  

W

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Erwann 
SIMON
Sent: Tuesday, September 23, 2014 2:21 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] tsm administration via web

Hi David,

See :
http://www-01.ibm.com/support/docview.wss?uid=swg21640917

Unfortunately, this technote has not been updated with 7.1.1 information yet.

Le 23/09/2014 19:17, David Ehresman a écrit :
 Can the 7.1.1 OpCenter be run with pre-7.1 servers?

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf 
 Of Prather, Wanda
 Sent: Tuesday, September 23, 2014 12:49 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] tsm administration via web

 The ISC is on the FTP site, under Maintenance\admincenter.
 But I don't recommend it.

 You should know it has been stabilized at 6.3.  There is a 7.1 version on 
 the site which has been patched to work with a 7.1 server, but it has no 7.1 
 function and it will never be updated any more.

 It also has never been upgraded to run with any IE later than IE8 or 
 Firefox 3.6.  (Although I've gotten good results using IE10, press 
 F12, then choosing IE8 compatibility mode.)

 Just don't go the ISC route.  It's dead. (FINALLY)

 And, the Operations Center which came out Sep 12 (the 7.1.1 version) really 
 has a lot of the functions you want, including client scheduling now, and 
 will be further developed.
 They have a live demo up that you can show your Winders folks.
 https://www.ibmserviceengage.com/on-premises-solutions


 Wanda


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf 
 Of Lee, Gary
 Sent: Tuesday, September 23, 2014 10:58 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] tsm administration via web

 Guess I am referring to the isc.

 We are running tsm 6.2.5 on redhat enterprise 6.1.

 I use the command line client exclusively, but the windows folks want 
 something gui.

 I went looking, but its been so long since I used it, where do you find the 
 isc?

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf 
 Of Skylar Thompson
 Sent: Tuesday, September 23, 2014 9:32 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] tsm administration via web

 Are you thinking of the ISC? I started out administering TSM 

Possible performance issue with TSM 7.1.0.x / DB2 10.5.x in large dedup environment

2014-08-19 Thread Bent Christensen
Hi all,

Just wanted to direct your attention to this blog entry by Josh-Daniel Davis if 
you are using TSM 7.1 in a large dedupped environment: 
http://omnitech.net/reference/2014/08/04/db2-10-5-0-1-negative-colcard

We had a backup system in flames and 2 severity 1 PMRs open for more than 6 
weeks with TSM L2 support and DEV being pretty clueless, and yesterday the 
problem was solved (fingers crossed) by changing the COLCARD value from a 
negative to a positive value as described in the above blog.

We experienced this problem as db2syscs.exe constantly utilizing 80-85 % CPU on 
a 32-core server, growing queue of dereferenced chunks, fragments in FILE 
devclass pool volumes not being removed which renders those volumes unusable 
and expiration, reclamation and client backups failing or running very slow.

There are no fixes (yet), and TSM support recommends regular monitoring of the 
DB2 tables for negative COLCARD values.

Apparently a DB2 APAR IT03792 exists but it is pretty new and has not been 
published to external IBM websites yet. A TSM APAR is on its way.
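For the monitoring part, a sketch of the check as a DB2 catalog query (run from a DB2 CLP session connected to the server database, typically TSMDB1; note that COLCARD = -1 by itself just means statistics were never collected, so the filter looks for values below that):

```sql
-- List columns whose cardinality statistic has gone negative
-- (the symptom described in the blog entry above).
SELECT TABSCHEMA, TABNAME, COLNAME, COLCARD
FROM SYSCAT.COLUMNS
WHERE COLCARD < -1;
```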

 - Bent 




Re: backup files after ntfs security

2014-07-19 Thread Bent Christensen
If your target pool is dedupped it is less of a problem :-)

 - Bent

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Tim 
Brown
Sent: Friday, July 18, 2014 6:04 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] backup files after ntfs security

Can a folder be prevented from backup if the only change is NTFS security?

Thanks,

Tim Brown
Supervisor Computer Operations
Central Hudson Gas & Electric
284 South Ave
Poughkeepsie, NY 12601
Email: tbr...@cenhud.com
Phone: 845-486-5643
Fax: 845-486-5921
Cell: 845-235-4255


This message contains confidential information and is only for the intended 
recipient. If the reader of this message is not the intended recipient, or an 
employee or agent responsible for delivering this message to the intended 
recipient, please notify the sender immediately by replying to this note and 
deleting all copies and attachments.


Skipped because of pending fragments?

2014-07-19 Thread Bent Christensen
Hi,

On my two FILE devclass storage pools I am starting to get lots of these:

19-07-2014 21:51:32 ANR3247W Process 44 skipped 1 files on volume 
S:\TSMDATA\STAGE_FILE\FSP0102.DSM because of pending fragments.

A MOVE DATA generates these entries:

19-07-2014 22:28:47 ANR2017I Administrator ADMIN issued command: MOVE DATA 
S:\TSMDATA\STAGE_FILE\B8F2.BFS recons=NO shredtono=NO 
19-07-2014 22:28:47 ANR1157I Removable volume S:\TSMDATA\STAGE_FILE\FSP0035.DSM 
is required for move process.
19-07-2014 22:28:47 ANR1157I Removable volume 
S:\TSMDATA\STAGE_FILE\B8F2.BFS is required for move process.
19-07-2014 22:28:47 ANR1157I Removable volume S:\TSMDATA\STAGE_FILE\FSP0116.DSM 
is required for move process.
19-07-2014 22:28:47 ANR0984I Process 55 for MOVE DATA started in the BACKGROUND 
at 22:28:48.
19-07-2014 22:28:47 ANR1140I Move data process started for volume 
S:\TSMDATA\STAGE_FILE\B8F2.BFS (process ID 55).
19-07-2014 22:28:47 ANR1176I Moving data for collocation set 1 of 1 on volume 
S:\TSMDATA\STAGE_FILE\B8F2.BFS.
19-07-2014 22:28:47 ANR3247W Process 55 skipped 3 files on volume 
S:\TSMDATA\STAGE_FILE\B8F2.BFS because of pending fragments. 
19-07-2014 22:28:47 ANR1141I Move data process ended for volume 
S:\TSMDATA\STAGE_FILE\B8F2.BFS.
19-07-2014 22:28:47 ANR0405I Session 2007 ended for administrator ADMIN (WinNT).
19-07-2014 22:28:47 ANR0985I Process 55 for MOVE DATA running in the BACKGROUND 
completed with completion state SUCCESS at 22:28:48.

The volumes being hit by this are more or less useless - you cannot read from 
them and you cannot write to them. And yet TSM claims that the operation 
completed successfully so our monitoring team didn't discover it right away...

Any thoughts? Anyone seen something similar?

A PMR is open with IBM and was immediately escalated to L2, but they don't 
seem to react during weekends.

 - Bent


TSM 7.1 and dedup chunking issues

2014-07-02 Thread Bent Christensen
Hi all,

I remember noticing that there were some dedup housekeeping (removal of 
dereferenced chunks) issues with TSM Server 6.3.4.200 and that a fix was 
released. We used 6.3.4.200 for a while as a stepping stone on our road from 5.5 
to 7.1 but without the fix.

Now, on 7.1, I am seeing some stuff that makes me worry a bit - initiated by a 
gut feeling that there are more data in my dedup pool than there should be.

SHOW DEDUPDELETEINFO shows that I have 30M+ chunks waiting in the queue and the 
number is increasing. It also shows that I currently have 8 active worker 
threads with a total of 5.8M chunks queued, but only approx. 4000 chunks/hour 
get deleted.

Anyone know if these numbers make sense? 
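For what it's worth, simple arithmetic on the figures above suggests the queue can never drain at that rate:

```shell
# 30M chunks pending, ~4000 deleted per hour:
pending=30000000
per_hour=4000
hours=$((pending / per_hour))
echo "$hours hours, ~$((hours / 24)) days to drain the backlog"
```

That is roughly 7500 hours, the better part of a year of continuous deletion, before counting newly queued chunks.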

We use a full-blown TSM server on Windows, 32 cores, 256 GB RAM, DB and 
activelog on SSD. 

 - Bent 


Re: TSM 7.1 and dedup chunking issues

2014-07-02 Thread Bent Christensen
This server manages 200 clients, approx. 500,000,000 files occupying approx. 
800 TB, a mixture of various databases, Exchange and files, all Windows 
clients. Not all client data end up in a dedup pool. 

Daily backup ingest to the dedup pool is 1.5 TB on average. We aim to receive 
backup data on an internal SAS array, backup to tape copypool and then migrate 
to the dedup pool. The dedup pool storage is Hitachi AMS, 2000 family with SATA 
disks, fiber attached, should be able to deliver at least 2-300K IOPS in this 
configuration.

 - Bent

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Stefan 
Folkerts
Sent: Wednesday, July 02, 2014 2:13 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM 7.1 and dedup chunking issues

Also :

32 cores, 256 GB RAM, DB and activelog on SSD.

Wow, that's pretty serious.

Also, if I might ask, what is your daily backup/archive ingest, how much data 
do you manage and what type of disk (system/config) do you use for your 
filepool storage?






Re: TSM server upgrade from v5.5 to v6.2

2014-06-27 Thread Bent Christensen
"A major show stopper is that TSM 7.1 does not support TSM 5 
clients/servers/agents."

One should note that "not supported" is not the same as "not working", which 
makes this a not-so-major show stopper.

We still have a bunch of TSM 5.4.x clients and even a 5.3.0.15 happily backing 
up to a 7.1 server.


 - Bent


Re: Cancelling REPLICATION Processes via script.

2014-06-19 Thread Bent Christensen
Back in the day we used to do something like this to be able to bail out of 
long-running housekeeping scripts:

repl node node1,node2,node3 wait=yes
select * from scripts where name='CANCEL_REPLI'
if (rc_ok) goto cancel

repl node node4,node5,node6 wait=yes
select * from scripts where name='CANCEL_REPLI'
if (rc_ok) goto cancel

exit

cancel:
cancel replication
del script CANCEL_REPLI

Then it was just a matter of defining a dummy script named CANCEL_REPLI ... 
the SELECT returns RC_OK only while that script exists, and the EXIT keeps a 
normal run from falling through into the cancel label.


 - Bent

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Vandeventer, Harold [BS]
Sent: Thursday, June 19, 2014 3:22 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Cancelling REPLICATION Processes via script.

I have an admin script that issues several node replicate commands, with 
WAIT=YES on each to control bandwidth/processor burden.

If I issue CANCEL REPLICATION, only the currently running process is 
cancelled. The next REPLICATE NODE (pending via the script) then starts.

Is there a way to cancel all of those pending processes?

I tried a DELETE SCHEDULE schedule_name TYPE=A (it did delete the schedule), 
followed by CANCEL PROCESS (on the currently running process), but the next 
replication immediately started.

The only solution coming to mind is to write a script that repeatedly issues 
Q REPLICATION * and a CANCEL REPLICATION if one is active, exiting when none 
are found.

Any of you TSM wizards have a solution you're willing to share? ... thanks.


Harold Vandeventer
Systems Programmer
State of Kansas - Office of Information Technology Services STE 751-S
910 SW Jackson
(785) 296-0631




Re: [ADSM-L] out of TSM compliance period question

2014-06-02 Thread Bent Christensen
I do not think there is such a thing as a true-up with IBM. 

In a PVU licensing environment, if you get hit by an IBM audit (as we did last 
year), you are expected to have all TSM servers and clients properly licensed 
at all times, except for:

 - test/development TSM servers
 - servers/nodes being migrated (it is correct that you have a 3-month 'grace' 
period if you are updating/migrating nodes, servers or data centers; there is 
an IBM document (which I can't find right now) from last year explaining this)
 - decommissioned nodes/servers

With new nodes you are actually expected to have the required PVUs in your PVU 
pool *before* you install the TSM client (or at least before you do the first 
backup).

IBM is really doing what it can to 'persuade' its customers to join the 
capacity model :-) 

 - Bent


From: ADSM: Dist Stor Manager [ADSM-L@VM.MARIST.EDU] On behalf of Del 
Hoobler [hoob...@us.ibm.com]
Sent: 29 May 2014 20:28
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] out of TSM compliance period question

There is no IBM document for this.

You need to work with your IBM account team to make sure
you are getting the answers you need to stay compliant.


Del




ADSM: Dist Stor Manager ADSM-L@vm.marist.edu wrote on 05/29/2014
01:57:41 PM:

 From: Martin Harriss mar...@princeton.edu
 To: ADSM-L@vm.marist.edu
 Date: 05/29/2014 01:59 PM
 Subject: Re: out of TSM compliance period question
 Sent by: ADSM: Dist Stor Manager ADSM-L@vm.marist.edu

 On 03/05/2014 02:36 PM, Mark Blunden wrote:
  The IBM requirement is a true-up once a year, on your renewal anniversary.
 
  However, TSM licenses are perpetual, which means they never expire and
  never stop working. The renewal process allows you to call support in the
  event of an issue, and also allows you to download new versions and fixes.
 
  If you are migrating from one machine to another, you do not have to buy
  licenses to cover both machines. IBM allows you to exceed your license
  count in this instance as you will eventually remove the old one after
  migration. I think you have 3 months to migrate, but that number may be
  wrong.
 
  If you are using the old PVU license method, then obviously newer machines
  generally have larger core counts and thus incur a higher value.
  If you are on the TB license model, then you can deploy as many clients
  as you wish.


 This is a question that we've wanted an official answer to for some
 time -- is there an IBM document somewhere that says you need to true up
 once a year?

 Martin

Re: Bandwidth

2014-05-15 Thread Bent Christensen
Hi Thomas,

Just to be sure, are you talking about LAN or WAN backup traffic? Is your 
bottleneck in the TSM-client-to-switch connection or the switch-to-tsm-server 
connection?

If TSM is saturating your WAN lines, looking into dedup and compression is the 
best you can do. If your problem is within a LAN, you might have to reconsider 
your backbone and network design; if that is not an option, spreading the 
client start times might do the trick.

But there is no such thing in TSM as bandwidth throttling like in, e.g., 
Symantec NetBackup.

 - Bent





-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Tom 
Taylor
Sent: Wednesday, May 14, 2014 5:56 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Bandwidth

Good morning,

I run TSM 6.3.4


How do I throttle bandwidth so that the clients don't choke the network during 
backups? I have already set a large window for the clients to use, and I am 
reading about client-side deduplication and adaptive subfile backup. Are these 
the only two avenues to reduce the bandwidth used by TSM?








Thomas Taylor
System Administrator
Jos. A. Bank Clothiers
Cell (443)-974-5768


Re: Question about TSM 7.1 Node Replication

2014-03-25 Thread Bent Christensen
Hi,

That is right: run REMOVE REPLNODE node_name on the target server, redirect 
the nodes to the target server (setting TCPSERVERADDRESS and TCPPORT), AND do 
remember to check that the node and the target server agree on the node 
password.

If/when the source server becomes available again, set up replication from the 
target server to the source server:

On the source server: 
REMOVE REPLNODE node_name for all previously replicated nodes

On the target server:
SET REPLSERVER source_server
UPDATE NODE node_name REPLSTATE=enabled REPLMODE=syncsend

When the target and the source are in sync, remove replication again, 
re-enable it from the source server with REPLMODE=syncsend - and redirect the 
client nodes back to the source server.

 - Bent

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Wolfgang Sprenger
Sent: Tuesday, March 25, 2014 7:55 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Question about TSM 7.1 Node Replication

Hi Mike,

was looking for the same thing some days ago.
Found this in the V7.1 Info Center:

Converting client nodes for store operations on a target replication server:
If a source replication server is unavailable, you can convert client nodes to 
be non-replicating nodes. Non-replicating client nodes can back up, archive, or 
migrate data to a target replication server.

Source:
http://pic.dhe.ibm.com/infocenter/tsminfo/v7r1/topic/com.ibm.itsm.srv.doc/t_repl_dr_failover.html

Best regards,
Wolfgang


On Tue, Mar 25, 2014 at 12:46 AM, Ryder, Michael S  michael_s.ry...@roche.com 
wrote:

 Hello Folks:

 I've been searching all day, but haven't been able to find an answer 
 on my own yet.

 Is anyone using Node Replication on TSM 7.1?

 I am tasked with coming up with some sort of disaster recovery 
 solution for TSM 7.1 -- we are currently running server v6.2, and I 
 want to take this opportunity to come up with something a little better.

 I have all the infrastructure necessary to setup a pair of TSM 7.1 
 servers, and am thinking that Node Replication will satisfy most of 
 the requirements.

 The one thing I can't figure out is this -- say the worst happens, and 
 the source server (primary) is destroyed.  Now I have a read-only 
 target server, from which I can restore my servers...  but, is there a 
 way to convert this target server into a source server as well, so I 
 can start using it to backup my surviving and restored nodes?

 Part of recovering from a disaster, is being able to continually 
 protect data, especially of surviving and restored nodes.

 Best regards,

 Mike Ryder
 RMD IT Client Services



Re: EXPORTs versus NODE REPLICATION

2013-12-15 Thread Bent Christensen
Thanks Wanda,

Now I am kind of hoping I am not the only one who thinks export/import is slow 
- that would imply that I might be doing it wrong somehow :-) 

My recent experience with exports was from a TSM 5.5 server (4 cores, 8 GB 
RAM) to a 6.3.4 server (32 cores, 256 GB RAM), servers and nodes Gbit-LAN 
connected. What I saw was that the TSM 5 server was able to back up a node 
with a 1 TB filespace in just under 4 hours, but when I exported that node to 
the TSM 6 server it took almost 24 hours - and both servers even have trunked 
Gbit NICs. Absolutely nothing was maxed out on the servers. 
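Putting rough numbers on that comparison (a sketch; decimal units and a single Gbit link assumed - the trunking only strengthens the point):

```python
# Effective throughput of the two transfers described above
# (1 TB filespace; Gbit Ethernet carries at most ~125 MB/s raw).
filespace_bytes = 1e12

backup_mb_s = filespace_bytes / (4 * 3600) / 1e6    # ~4 h backup on TSM 5
export_mb_s = filespace_bytes / (24 * 3600) / 1e6   # ~24 h export to TSM 6
gbit_mb_s = 125

print(f"backup ~{backup_mb_s:.0f} MB/s ({backup_mb_s / gbit_mb_s:.0%} of line rate)")
print(f"export ~{export_mb_s:.0f} MB/s ({export_mb_s / gbit_mb_s:.0%} of line rate)")
```

The export runs at under a tenth of what a single Gbit wire can carry, which fits the observation that nothing on the servers was maxed out.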

The file spaces I need to bring back home are between 2 TB and 8 TB, most 
branch offices only have one file space.

We tried the copy solution some years ago but were not too happy with it; TSM 
decided to back up approx. 1/3 of the node data anyway. It might have been an 
attribute issue - we didn't dig much into it back then.

I should mention the portable TSM server I am playing with: it is actually a 
small QNAP NAS device with a big iSCSI-published disk for the storage pool and 
a VMware image containing a Windows server with TSM 5.5.7, which connects to 
the iSCSI disk by its DNS name. 
If the branch office already has a VMware host running, I just mount the image 
on that; otherwise it is a matter of borrowing a capable workstation for a 
weekend, installing VMware Player and spinning the image up on that - works 
like a charm :-)
The main reason for this solution is footprint: travelling employees can carry 
it in their luggage, and the risk of the equipment getting stuck in customs in 
certain countries is much reduced. TSM 5.5.7 was chosen because it requires 
fewer resources than TSM 6 - it is just easier to find a host with sufficient 
RAM and disk at the branch office.

 - Bent



Subject: Re: [ADSM-L] EXPORTs versus NODE REPLICATION

That's a curious question, and an interesting idea.

I see the advantages of 'set it and forget it' with replication: let it catch 
up on its own, without manual intervention.

But I'm also wondering *why* your export/import is so slow. There is no 
inherent reason it should be.

I'm assuming your portable TSM server just has a really big disk to hold all 
the data from the remote client?
Do you start multiple exports? If you start multiple filespaces concurrently 
you should be able to run at the full bandwidth available between your portable 
and home TSM server, on your in-house network.

If that's not happening, I wonder if the problem could be the disk speed on the 
portable server, or whether the NIC is already too busy on your home server.
If the problem is a resource bottleneck, then you'll have the same problem with 
either replication or export/import.

Also, if your portable TSM server has enough disk to hold all the data from 
the remote client, have you ever tried just copying the data wholesale to the 
disk, with drag-and-drop or xcopy (assuming Windows here as an example)? Then 
bring it back, do the first backup in house, and rename filespaces on the 
server end as needed.

W




EXPORTs versus NODE REPLICATION

2013-12-12 Thread Bent Christensen
Hi guys,

We just started a project around consolidated backup of WAN-connected branch 
offices to a central TSM server. As always distant nodes, the first problem to 
cope with is how to get the first full backup of the node without waiting for 
days, weeks or months. We usually do that by sending a small TSM server to the 
branch office, do a first full backup, send it back to HQ and import the 
node(s) to the production TSM server.

However, export/import is sooo ridiculously slow so the export often takes 
days. So we have discussed upgrading the temp server to v6.3.4 and use node 
replication to copy the nodes. Lots of advantages, it can be put in a 
set-and-forget configuration until the nodes are fully replicated so switching 
the nodes to the prod server is pretty easy, but how fast is node replication 
actually?

Is node replication faster or slower than exports in a setup like this? Any 
thoughts or real-life experience?

- Bent


Exchange information store restore speed

2013-02-22 Thread Bent Christensen
So, it happened for us also :(

Due to a series of unlikely events, our oldest and largest Exchange Public 
Folder information store got corrupted and went down with a bang. According to 
the on-site Microsoft Premier Field Engineer the store is fubar, so we have 
started a restore. The size of the store is 3.5 TB (!), and the restore started 
out pretty well, at the speed we would expect from the combination of server, 
storage and network.

However, after a few hours the restore speed began to slowly decrease, and 
now, after almost 24 hours, it is down to a crawl. No errors to be seen 
anywhere; no network, CPU or storage system is maxed out - the restore just 
continues flawlessly, but at something like a tenth of the expected speed.

TSM client and server software are rather old: the Exchange TDP is ver. 
5.2.1.0 and the TSM server is ver. 5.5.2.1. The hardware is 2-3 years old and 
gigabit connected. We do not see this as a hardware issue.

Anyone seen something like this before, and does anyone have any idea why the 
restore runs at turtle speed?

- Bent


Re: TSM 1st full backup of remote low-bandwidth nodes

2013-01-18 Thread Bent Christensen
Thanks to everyone for thoughts and ideas.

Most planning, sizing, bandwidth usage forecasting, SLAs and stuff like that 
are almost done, it was how to do the initial full backup in the fastest and 
cheapest way that bugged my mind.

One possible solution that came to mind the other day is to create a virtual 
TSM server, put the virtual image file on a USB disk and ship that to the 
remote office. Then mount the image at the remote site (2/3 are VMware sites 
already; use VirtualBox on the rest), create a backup pool on the USB disk, 
back up the clients, put the image file back on the USB disk and ship the disk 
back home. This saves the laptops. TSM performance might be crawling in this 
setup, but we can live with that.

At the larger sites it is probably going to be a setup like Neil's suggestion.

 - Bent

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Neil 
Strand
Sent: Thursday, January 17, 2013 7:04 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM 1st full backup of remote low-bandwidth nodes

NAS device replication may not be a real option. The initial seed still has to 
transfer all of the data. Assuming a T1 operating at 1.5 Mbit/s, it would take 
at least 14,814 hours (about 2 years) to transfer 10 TB.
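Neil's figure is easy to verify (a quick sketch; assumes decimal terabytes and zero protocol overhead):

```python
# Time to seed 10 TB over a T1 line (ideal case, no protocol overhead).
data_bits = 10e12 * 8    # 10 TB (decimal) in bits
t1_bits_s = 1.5e6        # T1 ~ 1.5 Mbit/s

hours = data_bits / t1_bits_s / 3600
print(f"~{hours:,.0f} hours, ~{hours / (24 * 365):.1f} years")
```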

You may consider shipping a SAS-attached LTO5 tape drive with a small form 
factor (mini-ITX motherboard) PC.

At the remote site:
- Back up locally to encrypted LTO5 media
- Back up the TSM server DB to LTO5 media
- Ship the encrypted tapes to the home office
- Ship the TSM server and tape drive to the next remote office and repeat

At the home office:
- Perform a TSM server recovery at the home office
- Export/replicate remote client data to the primary TSM server
- Point the remote client to the primary TSM server, which is now seeded with 
the remote client's data


LTO5 is less expensive than 10 TB of JBOD, is probably what you use at the 
home office anyway, and can be easily encrypted and shipped.

A small form factor headless PC is less expensive than a laptop and can be 
managed remotely. All you need is someone to plug it into the Ethernet, attach 
the tape drive and swap tapes when requested.

Turnaround is faster since the travelling TSM server(s) only return to the 
home office when all remote site backups are complete.

Thank You,
Neil Strand



From:   Bill Boyer bjdbo...@comcast.net
To: ADSM-L@VM.MARIST.EDU,
Date:   01/16/2013 01:04 PM
Subject:Re: [ADSM-L] TSM 1st full backup of remote low-bandwidth
nodes
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Have you looked at replication of those remote sites as opposed to backing up 
the sites directly?

Those sites could use a storage replication device to replace the file server 
(NetApp, Data Domain, ...) and replicate it to a central or hub site - then 
back up from there. Replace the file server with a NAS CIFS device and let it 
do the replication. If you use a solution like NetApp, snapshots can even be 
your backup solution for the site.

Possibly cloud solutions - CarbonCopy and DATTO, for example, just to name 
them rather than recommend those specific products.

Or (and I can't believe I'm going to suggest this!) Microsoft DFS replication.

Just some other thoughts on the subject.

Bill


Re: TSM 1st full backup of remote low-bandwidth nodes

2013-01-16 Thread Bent Christensen
Andy,

I do not totally agree with you here.

The main issue for us is to get all 107 remote sites converted to TSM 
reasonably fast, to save maintenance and service fees on the existing backup 
solutions. With the laptop server solution we predict the turn-around time for 
each laptop to be around 2 weeks, which includes sending the laptop to the 
remote site, backing up all data, sending the laptop back to the backup 
center and exporting the node. With, say, 10 laptops this will take at least 
6 months. We could buy more laptops, but we cannot charge the expenses to the 
remote sites, and we are stuck with the laptops afterwards ...
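The laptop arithmetic above can be sketched as follows (assumes perfect pipelining of the 2-week turnaround quoted; real-world shipping and scheduling gaps push this toward the "at least 6 months" estimate):

```python
import math

# Best-case project duration with rotating seed-backup laptops
# (figures from the post above; perfect pipelining assumed).
sites = 107
laptops = 10
weeks_per_round = 2

rounds = math.ceil(sites / laptops)   # full rotations of the laptop fleet
weeks = rounds * weeks_per_round
print(f"{rounds} rounds, ~{weeks} weeks minimum")
```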

Disaster restores are a very different ball game. Costs will not be a big 
issue, and we have approved plans for recovering any remote site within 48 
hours, which for a few sites includes chartering an aircraft to transport 
hardware and a technician.

 - Bent



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Huebner, Andy
Sent: Tuesday, January 15, 2013 5:17 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM 1st full backup of remote low-bandwidth nodes

You should use the same method to seed the first backup as you plan to use to 
restore the data.
When you look at it that way a laptop and big external drive is not that 
expensive.


Andy Huebner



TSM 1st full backup of remote low-bandwidth nodes

2013-01-15 Thread Bent Christensen
Hi,

We are starting a backup consolidation project where we are going to implement 
TSM 6.3 clients at all our 100+ remote sites and have them back up over the 
WAN to a few well-placed TSM backup datacenters.

We have been through similar projects with selected sites a few times before, 
but this time the sites are larger and the bandwidth/latency worse, so there is 
little room for configuration mishaps ;-)

One question always pops up early in the process: How are we going to do the 
first full TSM backup of the remote site nodes? 
So far we have tried:

 - copy data from the new node (including all attributes and permissions) to 
USB disks, mount those on a TSM server (as drive X) and do a 'dsmc incr 
\\newnode\z$ -snapshotroot=X:\newnode_zdrive -asnodename=newnode'. This works 
OK and only requires a bunch of cheap high-capacity USB disks, but our 
experience is that when we afterwards do the first incremental backup of the 
new node, 20-40% of the files get backed up again - and we can't figure out 
why.

 - build a temp TSM laptop server, send it to the remote site, direct the 
first full backup to this server, send it back to the backup datacenter and 
export the node(s). Nice and easy, but it requires a lot of expensive laptops 
(and USB disks - the remote sites typically contain 2 to 10 TB of file data) 
to finish the project in a reasonable time frame.

So how are you guys doing the first full backup of a remote node when using the 
WAN is not an option?

 - Bent


Re: Mix PVU and Terabyte licensing

2012-12-04 Thread Bent Christensen
Hi Hans Christian,
Earlier this year we were considering going from PVU to capacity licensing but 
as 2/3 of our 1 PB primary pool capacity at that time was 'owned' by two nodes 
only, we asked IBM if we somehow could do both licensing schemes.

The answer was we would need two Passport Advantage sites and that the two 
servers + TSM server servicing those had to be in a completely separate 
location, and just moving the servers to our fail-over facility a few hundred 
meters away from the main facility would not be considered 'separate' enough.

Regards,

 - Bent

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Hans 
Christian Riksheim
Sent: Tuesday, December 04, 2012 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Mix PVU and Terabyte licensing

We are on a volume licensing scheme but we are considering PVU licensing for a 
new project since the volume/CPU ratio there is very high. As I understand we 
can do this as long as the two different environments are separate.

I haven't yet got a definitive answer on what IBM means by separate 
environments. It sounds reasonable that the PVU clients must reside on 
dedicated TSM servers but then I also hear some mumbling about separate 
libraries as well which would be a drag since we use library sharing.

Any comments or experience? We would prefer to be on the safe side here for 
obvious reasons.

Regards,

Hans Chr.