Re: Restore TSM DB without instance directory

2017-11-24 Thread Roger Deschner
One of the nice things about having more than one TSM/ISP server is that
they can be ordinary backup clients of one another. Items 2-4 can be
restored from regular TSM backups on the other server, along with the
entire Instance Directory, the executable binaries, etc. It really makes
for a much faster, less painful, restore of the server that failed. You
probably still have to do a database restore, but everything for that
will be there in the right places. I've been there.

Even though I do that, I still make daily copies of devconfig and
volhist to 3 different locations, right after the database backup.
They're that important.
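The multiple-copy habit can be enforced directly on the server; a hedged sketch of both approaches (the file paths are placeholders, not taken from this thread):

```
* dsmserv.opt: the server keeps every listed file updated continuously
VOLUMEHISTORY /tsminst1/volhist.out
VOLUMEHISTORY /backup2/volhist.out
DEVCONFIG     /tsminst1/devconfig.out
DEVCONFIG     /backup2/devconfig.out

* or explicit point-in-time copies, right after BACKUP DB in the
* maintenance script:
BACKUP VOLHISTORY FILENAMES=/tsminst1/volhist.out,/backup2/volhist.out,/nfs/dr/volhist.out
BACKUP DEVCONFIG  FILENAMES=/tsminst1/devconfig.out,/backup2/devconfig.out,/nfs/dr/devconfig.out
```

The options in dsmserv.opt keep the files current as things change; the explicit commands give copies tied to a specific database backup.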

BTW I have tried doing database backups via server-to-server, but I
found that it was slower and less flexible, so I do not recommend that.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Thu, 23 Nov 2017, Sasa Drnjevic wrote:

>You need the following to be able to restore:
>
>1) DB backup (full + logs for PIT restore or just DBsnapshot)
>2) dsmserv.opt
>3) volhistory backup (or at least the last DBBackup entry)
>4) devconfig backup
>
>
>No.2 was probably in your /tsminst1 home, but if you know your
>installation you can recreate dsmserv.opt - just use the correct options
>and values.
>
>
>No.3 - in what location did you back up your volhistory? You can also
>recreate this one - you need just the last entry with the right DB
>backup...but you must know exactly when and where it was backed up, and
>which devclass...
>
>No.4 - same as No.3 regarding the location where you backed up devconfig.
>This one could also be recreated, but depending on your configuration it
>could be the most complicated of all three...maybe even impossible :-(
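With those four items in place, a point-in-time restore is typically driven from the instance directory with the dsmserv utility; a hedged sketch (paths and timestamps are placeholders):

```shell
cd /tsminst1
# Restore to the most recent backup recorded in the volume history:
dsmserv restore db
# Or to a specific point in time (needs the full backup plus logs):
# dsmserv restore db todate=11/22/2017 totime=23:00:00
```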
>
>Wish you luck.
>
>Regards,
>
>--
>Sasa Drnjevic
>www.srce.unizg.hr
>
>
>
>
>On 2017-11-23 20:25, Richard van Denzel wrote:
>> Hi All,
>>
>> A while ago my Linux system (running TSM 7.1.3) died and was unable to boot
>> anymore.
>> Sadly enough I had to reinstall it, overwriting the system disk (where my
>> /tsminst1 resided).
>>
>> Is there a way to recover my TSM DB without the /tsminst1 present? My
>> data disks (DB, LOG, Arch, DBBackup and StoragePools) were in different
>> volume groups and are still there.
>>
>> Thanks in advance,
>>
>> Richard
>


Re: 7.1.8/8.1.3 Security Upgrade Install Issues

2017-10-07 Thread Roger Deschner
Thanks to all for this discussion of this 7.1.8/8.1.3 issue. I've heard
enough to postpone our production upgrade to 7.1.8, scheduled for
tomorrow. We've got to set up a test server and fiddle around with it
and see what it breaks in our environment.

I'm considering a strategy of upgrading all servers first, while keeping
all clients on "Old" versions. Then when we've got the Admin ID and
certificate issues ironed out between the servers, start upgrading
clients, carefully. It will be a minefield. I'm going to save copies of
"Old" dsmadmc in a very safe place.

This difficulty comes up while there are open, now-published security
vulnerabilities out there inviting exploits, and making our Security
people very nervous. But the considerations described in
http://www-01.ibm.com/support/docview.wss?uid=swg22004844 make it very
difficult and risky to proceed with 7.1.8/8.1.3 as though it was just a
patch. It's a major upgrade, requiring major research and planning, with
the threat of an exploit constantly hanging over our heads. I really
wish this had been handled differently.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu


On Fri, 6 Oct 2017, Sergio O. Fuentes wrote:

>Hello all!
>
>I just discovered this thread today because I had been testing 8.1.1 server
>very recently.  I had some issues with that on Thursday and then Friday I
>went further down the rabbit hole.  Now I'm finding that major portions of
>our environment will have to be upgraded very soon.
>
>I'm just starting testing all sorts of client versions against the new
>8.1.3 instance I have, however, I found this tidbit on "q opt tcpport":
>
>> If you specify the same port number for both the SSLTCPPORT and TCPPORT
>> options, only SSL connections are accepted and TCP/IP connections are
>> disabled for the port.
>
>I read this as, "If you have SSLTCPPORT and TCPPORT as different numbers,
>TCPPORT will allow non-SSL TCP/IP connections".
>
>We actually do this.  Does that mean our "old" clients will still be able
>to use TCPPORT without any issues?
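That reading matches a split-port setup; a hypothetical dsmserv.opt fragment (the port numbers are illustrative):

```
* Plain TCP/IP sessions for "old" clients:
TCPPORT     1500
* SSL sessions on a dedicated port:
SSLTCPPORT  1543
```

Per the quoted documentation, only when both options name the same port does that port become SSL-only.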
>
>Now to find out where this SESSIONSECURITY parameter is located...
>
>Thanks!
>
>SF
>
>
>On Fri, Oct 6, 2017 at 3:30 PM, Zoltan Forray <zfor...@vcu.edu> wrote:
>
>> Well, my testing of upgrading to 8.1.2/3 is not going well.  Sure glad I am
>> doing this on a test server, since it doesn't bode well for a production
>> system.  This is what we did in our testing.
>>
>> 1.  Server was upgraded from 8.1.1 to 8.1.3
>> 2.  Created a new node.  Installed 7.1.6 client on a W10E workstation.
>> Connected to the 8.1.3 server with no issues.  Performed backups, etc.
>> Even tried the webGUI/client with no issues.
>> 3.  Upgraded workstation client to 8.1.2 and now we can't connect to the
>> server.  Keeps giving us an SSL error.  Checked all configuration for the
>> node and opt file.  Everything was set to SESSIONSECURITY Transitional.
>> Now all we get (using the default client) is:  10/06/2017 15:09:25 ANS1592E
>> Failed to initialize SSL protocol.
>>
>> I thought you were supposed to be able to upgrade the server to 8.1.2+ and
>> then all of the clients would automagically get the cert/key from the
>> server once they upgraded to 8.1.2+
>>
>> What am I missing?
>>
>> On Fri, Oct 6, 2017 at 10:00 AM, Skylar Thompson <skyl...@u.washington.edu
>> >
>> wrote:
>>
>> > We recently went from 7.1.7.300 to 7.1.8 in a 3-server environment
>> > (one library manager, two library clients). As always, do the library
>> > manager before any of the clients. We had some communication problems
>> > with one of the library clients that we ended up solving like so: [...]

7.1.8/8.1.3 Security Upgrade Install Issues

2017-10-05 Thread Roger Deschner
Versions 7.1.8 and 8.1.3 of WDSF/ADSM/TSM/SP have now been made
available containing substantial security upgrades. A bunch of security
advisories were sent this week containing details of the vulnerabilities
patched. Some are serious; our security folks are pushing to get patches
applied.

For the sake of discussion, I will simply call versions 7.1.7 and before
and 8.1.1 "Old", and I'll call 7.1.8 and 8.1.3 "New". (Not really sure
where 8.1.2 falls, because some of the security issues are only fixed in
8.1.3.)

There are some totally unclear details outlined in
http://www-01.ibm.com/support/docview.wss?uid=swg22004844. What's most
unclear is how to upgrade a complex, multi-server, library-manager
configuration. It appears from this document that you must jump in all
at once, and upgrade all servers and clients from Old to New at the same
time. That is simply impractical. There is extensive discussion of the
new SESSIONSECURITY parameter, but no discussion of what happens when
connecting to an Old client or server that does not even have the
SESSIONSECURITY parameter.

We have 4 TSM servers. One is a library manager. Two of them are clients
of the manager. The 4th server manages its tapes by itself, though it
still communicates with all the other servers. That 4th server, the
independent one, is what I'm going to upgrade first, because it is the
easiest. All our clients are Old.

The question is, what's going to happen next? Will this one New server
still be able to communicate with the other Old servers?

Once my administrator id connects to a New server, this document says
that my admin id can no longer connect to Old servers. (SESSIONSECURITY
is automatically changed to STRICT.) Or does that restriction only apply
if I connect from a New client? This could be an issue since I regularly
connect to all servers in a normal day's work. We also have automated
scripts driven by cron that fetch information from each of the servers.
The bypass of creating another administrator ID is also not practical,
because that would involve tracking down and changing all of these
cron-driven scripts. So, the question here is, at the intermediate phase
where some servers are Old and some New, can I circumvent this Old/New
administrator ID issue by only connecting using dsmadmc on Old clients?
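The IBM document cited above describes resetting an administrator ID that was auto-promoted to STRICT so it can again reach back-level servers; a hedged example (the admin name is a placeholder):

```
UPDATE ADMIN myadmin SESSIONSECURITY=TRANSITIONAL
```

As I read the document, this has to be issued on each server where the ID was promoted, and the ID is promoted to STRICT again the next time it authenticates through a New client or server.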

This has also got to have an impact on users of software like
Servergraph.

There's also the issue of having to manually configure certificates
between our library managers and library clients, but at least the steps
to do that are listed in that document. (Comments? Circumventions?)

We're plunging ahead regardless, because of a general policy to apply
patches quickly for all published security issues. (Like Equifax didn't
do for Apache.) I'm trying to figure this out fast, because we're doing
it this coming weekend. I'm sure there are parts of this I don't
understand. I'm trying to figure out how ugly it's going to be.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu


Tivoli Monitoring/Cognos for Spectrum Protect 8.1?

2017-09-07 Thread Roger Deschner
"Tivoli Monitoring for TSM" (the monitoring and analysis tool that uses
Cognos) is no longer mentioned in anything about Spectrum Protect 8.1.
There is a tiny note that Tivoli Monitoring for TSM V7 can monitor a
Spectrum Protect V8 server, but that's all.

Has this feature been deprecated? (i.e. Could it be unwise to start
using it now if we've never used it before?)

Has this feature been combined into the new 8.1 Operations Center?

Has this feature been spun off as a separate product?

We are trying to figure out a future course here as we plan for 8.1. We
currently use IBM SPSS for this purpose, as we have since the product
was called WDSF, then ADSM, and TSM, and had planned on migrating to
Tivoli Monitoring/Cognos. But now I'm thinking of just keeping our
substantial base of SPSS procedures for monitoring, long-term trends,
and the like. As a university, we will always be licensed for IBM SPSS
for academic research. Sticking with our homebrewed SPSS procedures
might be better future-proofing if the Tivoli Monitoring/Cognos feature
is going away at some point.

Servergraph is always a possibility, but the cost has been prohibitive.

I cannot find any information about this. Either I'm missing something
obvious, or there really is not much happening here.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu


Re: How to backup Billions of files ??

2017-03-22 Thread Roger Deschner
How about periodic image backups, with daily journal-based backups to
catch new/changed files?

OS-based filesystems are sometimes abused as databases, and have been
for many years. (e.g. Z/VM VMSES) When that happens, such a filesystem
needs to be backed up more like a database than a filesystem. The answer
there is ISP image backups.
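Expressed as client commands, the suggested combination might look like this; a sketch assuming a Unix client and a filesystem /bigfs (the name is illustrative), with the journal daemon (tsmjbbd) configured to watch that filesystem:

```shell
# Periodic (e.g. weekly) image backup of the whole filesystem:
dsmc backup image /bigfs

# Daily incremental; with /bigfs listed in tsmjbbd.ini this is
# journal-based, so it avoids scanning every file:
dsmc incremental /bigfs
```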

Roger Deschner  University of Illinois at Chicago rog...@uic.edu


On Mon, 20 Mar 2017, Harris, Steven wrote:

>Bo
>
>The problem with small files is that the TSM database entry may well be larger 
>than the file you are storing. If your files are less than about 3000 bytes 
>that will be the case.
>
>What is happening is that the file system is being used as a database.  A 
>complex file path becomes the key and the file content is the data.
>I realize this has probably been dumped on you without consultation, but a 
>database is probably a better fit.  It could be something as simple as a 
>key/value store (maybe one per day) or as complex as a document DB like 
>Couchbase.
>
>A previous customer of mine did something similar.  It was logs of ecommerce 
>transactions that averaged about 1500 bytes each and had to be kept for 7 
>years.  A million transactions a day and growing.  They killed a TSM 5.5 
>database in 2 years, and when I left were well on the way to killing a TSM 6.3 
>database as well.  Any requests to alter the application were met with active 
>hostility.
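A back-of-envelope calculation makes the scale problem concrete. Assume, purely for illustration (the figure is not from this thread), roughly 1,000 bytes of TSM database space per stored file version; applied to the 80 billion files in the original question below, the database alone becomes enormous:

```shell
# Rough TSM database sizing for very many small files.
# ASSUMPTION (illustrative only): ~1000 bytes of DB space per stored
# file version -- the ballpark behind "the DB entry may well be larger
# than the file" for files under a few KB.
files=80000000000        # 80 billion files, from the original question
bytes_per_entry=1000     # assumed average DB cost per file

db_tb=$(( files * bytes_per_entry / 1000000000000 ))
echo "Estimated database size: ${db_tb} TB"
```

Even if the per-entry assumption is off by an order of magnitude, the result is far beyond what a single TSM server database can hold, which supports the point that a purpose-built database is a better fit.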
>
>Good Luck
>
>Steve
>Steven Harris
>TSM Admin/Consultant
>Canberra Australia
>
>
>
>
>
>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Rick 
>Adamson
>Sent: Tuesday, 21 March 2017 12:08 AM
>To: ADSM-L@VM.MARIST.EDU
>Subject: Re: [ADSM-L] How to backup Billions of files ??
>
>Bo
>I suggest you provide a few more details about the data and your backup 
>environment.
>For example: what is this data, how frequently will it be accessed on average, 
>what are its total space requirements, and what is the source stored on?
>Type of backup storage: tape, disk, cloud? (specifics) Bandwidth/network speed 
>between the data and the target backup server?
>
>-Rick Adamson
> Jacksonville,Fl.
>
>
>
>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Bo 
>Nielsen
>Sent: Monday, March 20, 2017 7:20 AM
>To: ADSM-L@VM.MARIST.EDU
>Subject: [ADSM-L] How to backup Billions of files ??
>
>Hi TSM's
>
>I have earlier asked for help with archiving 80 billion very small files, 
>but now they want the files backed up. They expect an average change rate of 3 
>percent/month.
>
>Does anyone have experience with such an exercise, and would you share it with me?
>
>Regards
>
>Bo
>
>
>Bo Nielsen
>
>
>IT Service
>
>
>
>Technical University of Denmark
>
>IT Service
>
>Frederiksborgvej 399
>
>Building 109
>
>DK - 4000 Roskilde
>
>Denmark
>
>Mobil +45 2337 0271
>
>boa...@dtu.dk<mailto:boa...@dtu.dk>
>
>


Re: delete volume

2017-03-22 Thread Roger Deschner
I'm confused. Is this an actual tape, or a "tape" in a VTL? What do you
mean by "VTL requirements"? If this stgpool consists of real tapes, the
existence of a VTL elsewhere in your configuration should not matter at
all.

I've done this several times with worn-out real tapes. DELETE VOL
DISCARDDATA=YES and it has always gone into Pending status.

Sounds like a defect. Preserve the ACTLOG and call IBM support.
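The Pending window that makes an accidental DELETE VOLUME survivable is controlled by the pool's REUSEDELAY; a hedged sketch (pool and volume names are placeholders):

```
UPDATE STGPOOL TAPEPOOL REUSEDELAY=3
DELETE VOLUME A00123L4 DISCARDDATA=YES
QUERY VOLUME A00123L4 FORMAT=DETAILED
```

A common rule of thumb is to set REUSEDELAY at least as long as the database backup series you keep, so a restored database never points at a volume that has already been relabeled.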

Roger Deschner  University of Illinois at Chicago rog...@uic.edu


On Tue, 21 Mar 2017, Loon, Eric van (ITOPT3) - KLM wrote:

>Hi Gary!
>It's three days and it's working fine for all other volumes which become empty 
>the normal way.
>Kind regards,
>Eric van Loon
>Air France/KLM Storage Engineering
>
>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee, 
>Gary
>Sent: dinsdag 21 maart 2017 16:29
>To: ADSM-L@VM.MARIST.EDU
>Subject: Re: delete volume
>
>What is the reusedelay on that storage pool?
>
>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Loon, 
>Eric van (ITOPT3) - KLM
>Sent: Tuesday, March 21, 2017 11:05 AM
>To: ADSM-L@VM.MARIST.EDU
>Subject: [ADSM-L] delete volume
>
>Hi all!
>I just had to delete a tape with discarddata=yes because an audit wasn't able 
>to fix the volume. I witnessed something unexpected: as soon as you delete a 
>volume, it immediately becomes scratch and it is being relabeled (because of 
>VTL requirements). This means that if you accidentally delete a volume and you 
>want to recover it, let's say one day later, by restoring the database to a 
>point before the deletion, chances are high the tape was already reused and 
>overwritten. In fact, I think the relabel process itself will probably render 
>the data unrecoverable.
>Shouldn't a delete volume make a volume pending instead of scratch?
>Kind regards,
>Eric van Loon
>Air France/KLM Storage Engineering
>


Re: LTO4 Tape Recovery

2017-03-09 Thread Roger Deschner
Index Engines machines can do this. While they can't recover all of the
metadata (which TSM mostly stores in its database) they can recover the
data itself from a TSM backup tape. You will lose some information such
as file names and owner nodenames, but you can get the data. We use
Index Engines. Go to www.indexengines.com and click on "Partners and
Resources" for companies that can do this as a service.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu


On Fri, 10 Mar 2017, Saravanan Palanisamy wrote:

>Folks,
>
>I badly need some help
>
>One piece of data got deleted from the TSM database, but the tape was not re-used 
>and is still available in PCT reclaimable. This tape was also never overwritten. 
>Unfortunately I don't have a TSM DB backup for the same date.
>
>Has anyone tried to recover data in such a case before? We need this data.
>
>Does IBM have any other tool to recover such cases ?
>
>Regards
>Sarav
>+65 9857 8665
>


Restoring Mac HFS files to CIFS

2017-02-07 Thread Roger Deschner
I'm on a Mac (which is already a problem; I'm unfamiliar with them) and this Mac
backed up a lot of data (4 TB, 900,000 files) that was in native Apple
HFS+ format. That 4 TB HFS RAID array crashed and burned, so now the
last copy of this data is in our TSM server. It exists there as HFS+
files with HFS+ style metadata. We'd like to restore this data directly
to a new CIFS share that lives on a NetApp, which will be a much better
place for it in the future.

When I attempt a test restore of a single file dsm.sys, I get:

  ANS5036I DIAG: Error for file:
  Parameter error
  ANS5036I DIAG: Error for file: /Volumes/CIFSshare/test/dsm.sys
  Parameter error
  ANS5036I DIAG: Session function thread, fatal error, signal 11

(/Volumes/CIFSshare/test/dsm.sys was the full filename I was trying to
restore one test file to.)

Then it crashed and created a core file.

Anybody had to do this before? Is the Mac ISP Client (which is v7.1.5)
trying to set HFS+ style ACLs on a CIFS filesystem during the restore,
and failing? The SKIPACL option is only for backup/archive, not for
restore. Fishing for ideas here.
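For context, the restore being attempted is essentially this; a hedged sketch (filespace and share paths are illustrative, not the actual names):

```shell
# Restore the backed-up HFS+ filespace directly onto the mounted CIFS share:
dsmc restore "/Volumes/OldHFSVolume/*" /Volumes/CIFSshare/ -subdir=yes
```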

I want to avoid an intermediate step of restoring to an external 6 TB
HFS+ USB drive, and then using Unix cp to copy it to CIFS. That would be
a double copy of 4 TB / 900,000 files. It's already going to take a
while to restore this, and I don't want to double my time to do this.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu


Re: Move Data from STG pool directory to another one.

2017-02-06 Thread Roger Deschner
Wouldn't ordinary migration work for this?

UPDATE STGPOOL stg1 HI=1 LO=0 NEXTSTGPOOL=stg2

Roger Deschner  University of Illinois at Chicago rog...@uic.edu



On Sun, 5 Feb 2017, rou...@univ.haifa.ac.il wrote:

>Hello all
>Working with TSM server version 7.1.7.0 and Directory stg pools.
>
>I  have a stg pool with a large capacity and I want to create a new stg pool 
>smaller. I wonder what will be  the correct process to move the data from stg1 
>to stg2.
>
>Both of them are with type directory.
>
>To use MOVE CONTAINER ???  or MOVE DATA ??? or another command ?
>
>Any suggestion , tips or commands
>
>Best Regards
>
>Robert
>
>
>


TSM Server on CentOS Linux

2017-01-19 Thread Roger Deschner
Management here is contemplating having us move our production TSM
servers to the CentOS Linux operating system, which is a free branch of
Red Hat.

Has anybody done this? What are the support issues with IBM?

(TSM Client is already supported on CentOS via "Best effort".)

Roger Deschner  University of Illinois at Chicago rog...@uic.edu


Re: Documentation IBM Spectrum Protect V8.1 PDF Bundle

2016-12-28 Thread Roger Deschner
That's very helpful. I had been looking for it.

It answered a critical question for me. Did the default client
installation directory change? Such a change would have been extremely
disruptive. The answer to the question is that fortunately it did not
change. For instance on Windows, it is still
c:\Program Files\Tivoli\TSM. Whew! That will make ISP v8.1 deployment
MUCH easier! Thank you to whoever in IBM insisted that it not change.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu


On Tue, 27 Dec 2016, Zoltan Forray wrote:

>Thanks a lot for those links. Really helpful!
>
>On Tue, Dec 27, 2016 at 8:54 AM, Gerd W Becker <gerd.bec...@empalis.com>
>wrote:
>
>> Hi,
>> it is unbelievable, but I found it. Even if IBM has hidden the current
>> documentation behind a new link, I found it.
>> The TSM-Publication PDF Bundle (up to 7.1.7) you will find here:
>> ftp://public.dhe.ibm.com/software/products/TSM/current/
>> The IBM Spectrum Protect PDF Bundle (starting at 8.1) you will find here:
>>
>> ftp://public.dhe.ibm.com/software/products/ISP/current/
>>
>> Watch the difference between TSM and ISP.
>>
>> Why can IBM not put a link to the TSM site, so that the current 8.1
>> documentation can be found easily?
>>
>> Regards
>>
>> Gerd
>>
>
>
>
>--
>*Zoltan Forray*
>Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
>Xymon Monitor Administrator
>VMware Administrator (in training)
>Virginia Commonwealth University
>UCC/Office of Technology Services
>www.ucc.vcu.edu
>zfor...@vcu.edu - 804-828-4807
>Don't be a phishing victim - VCU and other reputable organizations will
>never use email to request that you reply with your password, social
>security number or confidential personal information. For more details
>visit http://infosecurity.vcu.edu/phishing.html
>


Re: Upgrading servers from 6.3 to 7.1

2016-09-19 Thread Roger Deschner
My understanding is you can do almost any combination of versions as
long as the Library Manager servers are always at an equal or higher
release than their Library Client servers. You should be able to upgrade
the Library Managers one at a time, unless they are clients of one
another, which I'm not sure is possible.

It gets trickier when you have multiple TSM servers on a single OS
image, one of which is a Library Manager. This is my configuration. (It
seemed like a good idea at the time...)

Roger Deschner  University of Illinois at Chicago rog...@uic.edu


On Mon, 19 Sep 2016, Zoltan Forray wrote:

>Just a few questions to confirm my upgrade process.
>
>I have 6-RH Linux TSM servers.  All are at 6.3.5.100.  2-of these servers
>are Library Managers for my TS3500.
>
>From what I understand, I will have to upgrade BOTH LM servers to 7.1.3.x
>(and then jump to 7.1.6.1) at the same time BEFORE I can upgrade the rest
>of the servers/Library Clients to 7.1.6.1.
>
>Since this is a big jump and I am having lots of problems with replication
>(most of which are addressed in 6.3.6), I am going to upgrade to 6.3.6
>first and then do the 7.1 upgrades at a later date.
>
>Anything else I should be concerned with or do differently?
>
>--
>*Zoltan Forray*
>TSM Software & Hardware Administrator
>Xymon Monitor Administrator
>VMware Administrator (in training)
>Virginia Commonwealth University
>UCC/Office of Technology Services
>www.ucc.vcu.edu
>zfor...@vcu.edu - 804-828-4807
>Don't be a phishing victim - VCU and other reputable organizations will
>never use email to request that you reply with your password, social
>security number or confidential personal information. For more details
>visit http://infosecurity.vcu.edu/phishing.html
>


Re: Windows Client Upgrades

2016-08-18 Thread Roger Deschner
My experience has been that you can avoid the reboot IFF you stop all
scheduler and/or Client Acceptor Daemon processes before you begin
installing the new version. It also helps to remove schedulers and/or
C.A.D., using the wizard in the old-version GUI client before upgrading.
Then put them back after the upgrade, so that they are guaranteed to be
running on the new version, and you shouldn't have to reboot. This
actually makes sense.
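On Windows, the remove/reinstall steps can be scripted with dsmcutil, the client's service-configuration utility; a hedged sketch (service, node, and password values are placeholders):

```
REM Before running the new-version installer:
dsmcutil stop   /name:"TSM Client Scheduler"
dsmcutil remove /name:"TSM Client Scheduler"

REM ...install the new client version here...

REM Re-create the scheduler on the new binaries (no reboot expected):
dsmcutil install scheduler /name:"TSM Client Scheduler" /node:MYNODE /password:xxxxx /autostart:yes
```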

Roger Deschner  University of Illinois at Chicago rog...@uic.edu


On Thu, 18 Aug 2016, Rick Adamson wrote:

>I have experienced random "unprompted" reboots performing manual installs on 
>Windows clients from 7.1.1.0 to 7.1.6.0
>Haven't nailed down exactly why yet; still gathering info.
>On one machine MS updates had been applied and may not have been restarted, on 
>another it seemed to happen after installing the first C++ prerequisite.
>
>-Rick Adamson
>
>
>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Kamp, 
>Bruce (Ext)
>Sent: Thursday, August 18, 2016 10:40 AM
>To: ADSM-L@VM.MARIST.EDU
>Subject: Re: [ADSM-L] Windows Client Upgrades
>
>If all prerequisites are already installed & all TSM processes are stopped you 
>shouldn't need to.
>
>From what I have seen, even if it asks to reboot, basic functionality remains.
>
>
>
>Bruce Kamp
>GIS Backup & Recovery
>(817) 568-7331
>e-mail: mailto:bruce.k...@novartis.com
>
>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of David 
>Ehresman
>Sent: Thursday, August 18, 2016 9:31 AM
>To: ADSM-L@VM.MARIST.EDU
>Subject: [ADSM-L] Windows Client Upgrades
>
>Can one upgrade a Windows 7.1.x client to a 7.1.somethinghigher client without 
>a reboot or do all Windows 7.1 upgrades require a reboot?
>
>David
>


Re: autodeploy of 7.1.6 client updates fails

2016-06-30 Thread Roger Deschner
I'm seeing the same thing on a regular manual install of Windows 7.1.6.0
client. Both 32 bit and 64 bit. I already opened a PMR.

7.1.6.0 fixed a critical security problem in the InstallShield program
that installs TSM. Something seems to have gotten loused up in the
packaging of the patched level of InstallShield with TSM.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu


On Wed, 29 Jun 2016, Erwann SIMON wrote:

>Hi all,
>
>I'm trying to deploy the 7.1.6 client updates on Linux and Windows boxes using 
>the autodeploy packages from the FTP site. Unfortunately, without success on both 
>platforms (while I succeeded in deploying 7.1.4 and earlier versions on the same 
>boxes).
>
>On Windows, it fails while extracting the installation image for the Backup-Archive 
>client: only a few folders and files are extracted. Manual extraction from 
>the same packages retrieved from the server works fine.
>
>
>On Linux, it fails on running updatemgr: no updatemgr process is running and 
>the mutex file is left behind. A successful workaround is to release the lock 
>(./updatemgr/managemutex Release) and manually re-run the deployclient.sh 
>script (./deployclient.sh).
>
>
>--
>Best regards / Cordialement / مع تحياتي
>Erwann SIMON
>


Re: trouble with db backups inside scripts

2016-06-24 Thread Roger Deschner
This error message points to how the database is configured. The HELP
ANR4588E info has some useful things to check. Review Chapter 3 of the
Installation Guide. There's a lot of interconnected things here.
Something may be changing them. Could be an ID/permissions issue. Is the
ID of the cron process that triggers the script the one you intended? It
could be an issue with the ID you use to run the maintenance script, an
issue that is circumvented when you log into dsmadmc and manually type a
BACKUP DB command. Just some random ideas.
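One way to rule the environment in or out is to make the cron job run the backup exactly the way an interactive session does, under an explicit user, with output captured; a hedged sketch (IDs, device class, and paths are placeholders):

```shell
#!/bin/sh
# Run under the instance owner so the DB2 environment matches an
# interactive login; log everything so the next ANR4588E has context.
su - tsminst1 -c 'dsmadmc -id=admin -password=xxxxx \
    "backup db devclass=ltodev type=full"' >> /var/log/tsm/backupdb.log 2>&1
```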

Roger Deschner  University of Illinois at Chicago rog...@uic.edu


On Thu, 23 Jun 2016, Lee, Gary wrote:

>Tsm server 6.3.4 under redhat 6.7.
>
>About every third time our daily maintenance script runs, the db backup fails 
>with the message
>
>ANR4588E A database configuration might be incorrect.
>
>However, anywhere from a couple of minutes to an hour or so after I see it, 
>I can run the same exact backup by hand and it works fine, giving only the 
>expected ANR4976 message.
>
>Anyone out there have a clue?
>I'm stumped.
>


Re: Spectrum Protect documentatie on Knowledgecenter site

2016-06-07 Thread Roger Deschner
Thanks Andy. You've always been helpful to WDSF/ADSM/TSM/ISP customers;
I still remember meeting you in person at a SHARE conference some years
ago. These links are helpful while IBM sorts out its documentation mess.

This discussion is why I have always saved PDFs whenever I ran across
them, to an encrypted USB thumb drive that I carry with me, especially
at places like heavily firewalled data centers or DR sites. I shudder to
think what recovery would be like in an actual disaster with only online
web "knowledge center" documentation to tell me how to proceed to
recover my employer's critical business information as quickly as
possible - if I could access the web at all. As we see here, even
bookmarking those online sites is a pointless exercise, since the URLs
change so frequently. Keeping this "bookshelf" of PDFs (backed up to TSM
and to PCs both at work and at home) should be a crucial part of any DR
plan. I'll be using this list of URLs to update my bookshelf.

It is also why I have written previously (at length, to the point of
admittedly becoming redundant) about the removal of the Administrator's
Guide as a single book, and scattering its priceless collection of
content to the winds in the form of thousands of online web pages.

Again, thanks Andy!

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Mon, 6 Jun 2016, Andrew Raibeck wrote:

>I hear and feel (and see) the pain, and have passed the comments on. Let's
>just say that some things are not immediately in our control...
>
>At any rate, for now I suggest using this landing page:
>
>https://www.ibm.com/support/knowledgecenter/SSGSG7
>
>From there:
>
>1. Select a product version from the drop-down list near the left-hand
>upper corner of the page. For example, if I select 7.1.5, it will take me
>to this page:
>
>https://www.ibm.com/support/knowledgecenter/SSGSG7_7.1.5/tsm/welcome.html
>
>2. In the upper left corner of the page you should see a "breadcrumb" trail
>like this:
>
>   icon   >   Tivoli Storage Manager   >   Tivoli Storage Manager 7.1.5   >
>Welcome
>
>If you click on that icon, that will open a table-of-contents (TOC) that
>should look familiar.
>
>3. If you do not immediately see what you are looking for (like info about
>Data Protection for Microsoft SQL Server), click on the "Product suites and
>related products" link in the TOC, and in the right hand side you should
>see links to the products you are looking for.
>
>ALTERNATIVELY
>
>1. From the landing page, click a product (like Tivoli Storage Manager for
>Databases) in the "Find related products" section. It will take you to a
>page like this:
>
>http://www.ibm.com/support/knowledgecenter/SSTFZR/landing/welcome_sstfzr.html
>
>2. Select a product version from the drop-down list (this time it will show
>7.1.4 as the latest because there is no 7.1.5 for this product).
>
>3. Once again you will see the breadcrumb trail in the upper left corner.
>Click on the icon to see the TOC.
>
>Best regards,
>
>Andy
>
>
>
>Andrew Raibeck | IBM Spectrum Protect Level 3 | stor...@us.ibm.com
>
>IBM Tivoli Storage Manager links:
>Product support:
>https://www.ibm.com/support/entry/portal/product/tivoli/tivoli_storage_manager
>
>Online documentation:
>http://www.ibm.com/support/knowledgecenter/SSGSG7/landing/welcome_ssgsg7.html
>
>Product Wiki:
>https://www.ibm.com/developerworks/community/wikis/home/wiki/Tivoli%20Storage%20Manager
>
>"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 2016-06-06
>13:16:02:
>
>> From: Stefan Folkerts <stefan.folke...@gmail.com>
>> To: ADSM-L@VM.MARIST.EDU
>> Date: 2016-06-06 13:18
>> Subject: Re: Spectrum Protect documentatie on Knowledgecenter site
>> Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
>>
>> Thanks all, saved my day tomorrow with the pdf links, guys.
>>
>> Please IBM don't mess around with the documentation, this stuff is
>> essential to get right and keeping simple and fast access is a necessity.
>>
>> So please keep it simple, keep it in one place, give us a web interface
>> that makes it easy to select versions and find the stuff we need. HTML is
>> fine, but also give us complete pdf files without going to some wiki or
>> ftp link. Bundles of all documentation per version are a nice addition,
>> but just make it easy to find later on, not just an ftp directory and/or
>> some obscure wiki page somewhere.
>> Without quick and easy access to pdf files we

Re: EoS for TSM 6.3

2016-05-03 Thread Roger Deschner
OK, now I'm confused all over again. There are lots of v6.4 clients in
use, but you are saying that, unless we license Tivoli Storage Manager
Suite for Unified Recovery 6.4 (whatever that is) then our v6.4 servers
(aka 6.3.3+) will have EOL on April 30, 2017?

I was confused back when v6.4 was announced with a v6.3.3 server
component, and now I'm even more confused.

Bottom line: Our servers are at 6.3.5.100. Will they be EOL on April 30,
2017? I see both yes and no answers below in this thread. This confusion
is interfering with planning. A complete clarification would be
appreciated, since IBM introduced this confusion back with v6.4.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Mon, 2 May 2016, Del Hoobler wrote:

>As you may or may not have seen, EOS (end of support) was announced for
>Tivoli Storage Manager 6.3 for April 30, 2017.
>
> http://www-01.ibm.com/common/ssi/rep_ca/2/897/ENUS916-072/index.html
>
>There has been some confusion around this because Tivoli Storage Manager
>6.4 did not release a "server" component.
>The Tivoli Storage Manager Suite for Unified Recovery 6.4 product(s)
>shipped a Tivoli Storage Manager 6.3.3 server.
>
>For those customers that have purchased any of the Tivoli Storage Manager
>Suite for Unified Recovery 6.4 products,
>they are also entitled to service for the Tivoli Storage Manager 6.3.x
>server component until the EOS date for 6.4.
>
>
>Thank you,
>
>Del
>
>
>
>"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 04/20/2016
>09:21:20 AM:
>
>> From: Erwann SIMON <erwann.si...@free.fr>
>> To: ADSM-L@VM.MARIST.EDU
>> Date: 04/20/2016 09:22 AM
>> Subject: Re: EoS for TSM 6.3
>> Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
>>
>> Thanks Del, so there's no hurry to upgrade. But it's still a good
>> idea to do so in order to benefit from an enhanced Operations
>> Center, effective alert triggering, or status thresholds.
>>
>> --
>> Best regards / Cordialement / مع تحياتي
>> Erwann SIMON
>>
>> - Mail original -
>> De: "Del Hoobler" <hoob...@us.ibm.com>
>> À: ADSM-L@VM.MARIST.EDU
>> Envoyé: Mardi 19 Avril 2016 18:15:48
>> Objet: Re: [ADSM-L] EoS for TSM 6.3
>>
>> Yes, the 6.3.x server will be supported until the 6.4 is EOS.
>>
>>
>>
>> Del
>>
>> ---
>>
>>
>> "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 04/19/2016
>> 09:42:41 AM:
>>
>> > From: Erwann SIMON <erwann.si...@free.fr>
>> > To: ADSM-L@VM.MARIST.EDU
>> > Date: 04/19/2016 09:44 AM
>> > Subject: Re: EoS for TSM 6.3
>> > Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
>> >
>> > Hi all, and IBMers particularly,
>> >
>> > Can someone clarify this point ? Will 6.3.3 server be supported
>> > along with 6.4 version ?
>> >
>> > --
>> > Best regards / Cordialement / مع تحياتي
>> > Erwann SIMON
>> >
>> > - Mail original -
>> > De: "Skylar Thompson" <skyl...@u.washington.edu>
>> > À: ADSM-L@VM.MARIST.EDU
>> > Envoyé: Lundi 18 Avril 2016 18:12:10
>> > Objet: Re: [ADSM-L] EoS for TSM 6.3
>> >
>> > Given that the TSM v6.4 "release" shipped the v6.3 server[1], does
>this
>> > mean that the v6.3 server will continue to be supported until v6.4 is
>> > no longer supported?
>> >
>> > [1] http://www-01.ibm.com/support/docview.wss?uid=swg21243309
>> >
>> > On Mon, Apr 18, 2016 at 04:56:16PM +0100, Schofield, Neil (Storage &
>> > Middleware, Backup & Restore) wrote:
>> > > In case anyone missed it, IBM last week announced the End-of-
>> > Support date for TSM 6.3 would be April 30th 2017:
>> > > http://www.ibm.com/common/ssi/rep_ca/2/897/ENUS916-072/index.html
>> >
>> > --
>> > -- Skylar Thompson (skyl...@u.washington.edu)
>> > -- Genome Sciences Department, System Administrator
>> > -- Foege Building S046, (206)-685-7354
>> > -- University of Washington School of Medicine
>> >
>>
>
>
>


Re: SQL QUERY FOR AMOUNT OF ACTIVE VS INACTIVE DATA

2016-04-29 Thread Roger Deschner
This is pretty much what I do as well. A huge advantage in doing it this
way, at the filespace level, is that it is MUCH faster than counting
individual files. We have about 1 billion files divided between 3 TSM
servers, compared to only several thousand filespaces.

I call it CHURN rather than ABUSE. Some servers and some applications
are going to have a high churn ratio (occupancy/filespace), with a lot
of inactive versions as a rule, such as email servers. The ratio can be
very high (5+) and still be reasonable for an email server's mail
storage filespace. Our email servers are in a management class where the
copy groups have a higher RETEXTRA setting, so when a user calls and
says "Thunderbird ate my inbox! It happened last week. Help!" I can
restore it for them from the inactive versions. OTOH, the ratio for a
typical Windows PC client workstation should be around 0.6 to 0.8, and
for a relatively static file server should be close to 1.0. I study
these numbers a lot, and the sample numbers Maurice has below are in the
same range that I see. I only get suspicious when ((occupancy /
filespace) > (RETEXTRA / 2)). YMMV.
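The arithmetic behind that rule of thumb is simple enough to sketch. A hypothetical helper, using numbers from Maurice's sample output below; the RETEXTRA values are illustrative assumptions:

```python
def churn(occupancy_gb, filespace_gb):
    """Churn (a.k.a. abuse) ratio: TSM occupancy vs. client filespace size."""
    return occupancy_gb / filespace_gb

def suspicious(occupancy_gb, filespace_gb, retextra):
    """Rule of thumb above: flag when churn exceeds RETEXTRA / 2."""
    return churn(occupancy_gb, filespace_gb) > retextra / 2

# 149.36 GB stored for a 35.06 GB filespace is a churn of about 4.3:
# reasonable for a mail store with a high RETEXTRA, suspicious for a
# typical PC client with few retained versions.
print(round(churn(149.36, 35.06), 1))   # -> 4.3
print(suspicious(149.36, 35.06, 14))    # -> False (threshold 7.0)
print(suspicious(149.36, 35.06, 2))     # -> True  (threshold 1.0)
```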

I may adopt your SELECT, since it is much simpler than what I do to
arrive at the same number. Thanks, Maurice van 't Loo!

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Thu, 28 Apr 2016, Maurice van 't Loo wrote:

>Hello Gary,
>
>Just guessing the actual reason, it might be that they want to know the
>amount of TSM storage compared with the amount of storage on the clients.
>While counting the number of objects doesn't give you much information,
>maybe it's best to compare the filespaces with occupancy. That is what I
>call the "abuse factor"
>
>NODE   FS                         FS_GB   OCC_GB  ABUSE
>-----  -------------------------  ------  ------  -----
>XX     /csminstall/AIX/images      35.06  149.36    4.2
>XXX    /nim/aix54                  39.13   39.15    1.0
>       /csminstall/AIX/aix610      37.07   36.99    0.9
>       /home/dvpt                  14.07   35.39    2.5
>X      /build                      24.28   33.24    1.3
>       /db2/dvptdb                  1.57   30.48   19.3
>XX     /home/dvpt                  30.57   30.42    0.9
>X      /csminstall/AIX/products    25.65   25.77    1.0
>XX     /build                      22.53   21.89    0.9
>       /build                      22.53   21.89    0.9
>
>Copy/paste this into a monospaced font (notepad) to make it more readable.
>I use this to hunt for missing excludes (mssql databases not excluded), but
>I can also use it to calculate roughly how much active and inactive space I
>have. Of course, filespaces with excludes give some mismatch.
>
>SQL used for above output:
>select cast(substr(f.NODE_NAME,1,30) as char(30)) as NODE,
>       cast(substr(f.FILESPACE_NAME,1,30) as char(30)) as FS,
>       dec(f.CAPACITY*f.PCT_UTIL/100/1024,14,2) as FS_GB,
>       dec(sum(o.PHYSICAL_MB)/1024,12,2) as OCC_GB,
>       dec(dec(sum(o.PHYSICAL_MB),14,1)/dec(f.CAPACITY*f.PCT_UTIL/100,16,1),14,1) as ABUSE
>  from filespaces as f, occupancy as o
> where f.NODE_NAME=o.NODE_NAME
>   and f.FILESPACE_NAME=o.FILESPACE_NAME
>   and f.CAPACITY>0 and f.PCT_UTIL>0
>   and o.STGPOOL_NAME in (select stgpool_name from stgpools
>                          where pooltype='PRIMARY')
>   and o.TYPE='Bkup'
> group by o.NODE_NAME, o.FILESPACE_NAME, f.NODE_NAME, f.FILESPACE_NAME,
>          f.CAPACITY, f.PCT_UTIL
> order by 4 desc
> fetch first 10 rows only
>
>For all nodes ordered by nodename:
>select cast(substr(f.NODE_NAME,1,30) as char(30)) as NODE,
>       cast(substr(f.FILESPACE_NAME,1,30) as char(30)) as FS,
>       dec(f.CAPACITY*f.PCT_UTIL/100/1024,14,2) as FS_GB,
>       dec(sum(o.PHYSICAL_MB)/1024,12,2) as OCC_GB,
>       dec(dec(sum(o.PHYSICAL_MB),14,1)/dec(f.CAPACITY*f.PCT_UTIL/100,16,1),14,1) as ABUSE
>  from filespaces as f, occupancy as o
> where f.NODE_NAME=o.NODE_NAME
>   and f.FILESPACE_NAME=o.FILESPACE_NAME
>   and f.CAPACITY>0 and f.PCT_UTIL>0
>   and o.STGPOOL_NAME in (select stgpool_name from stgpools
>                          where pooltype='PRIMARY')
>   and o.TYPE='Bkup'
> group by o.NODE_NAME, o.FILESPACE_NAME, f.NODE_NAME, f.FILESPACE_NAME,
>          f.CAPACITY, f.PCT_UTIL
> order by 1
>
>Regards,
>Maurice van 't Loo
>
>http://mvantloo.nl/maupack.php
>Personal pack of selects (in scripts)
>
>
>2016-04-22 16:35 GMT+02:00 Schneider, Jim <jschnei...@essendant.com>:
>
>> You can also 

Re: SP 7.1.5 and tape preemption

2016-04-17 Thread Roger Deschner
There seems to be a more overall issue with preemption not happening
when it should in Whatchamacallit 7.1.*. A few days ago, there was a
posting here that Client Restore was not preempting BACKUP STGPOOL.

DBB _must_ be able to preempt anything else, in order to prevent a log
fillup crash.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Fri, 15 Apr 2016, J. Pohlmann wrote:

>IHAC where we just upgraded to SP 7.1.5.000 and LTO7. It used to be that a
>DBB pre-empted space reclamation for a mount point. I have had the DBB
>waiting for a mount point for several hours. The q opt output shows NOPREEMPT
>(No) - the default. Has anyone else noticed this changed behavior?
>
>
>
>Best regards,
>
>Joerg Pohlmann
>
>+1-250-585-3711
>


Re: Preempt

2016-04-09 Thread Roger Deschner
This would appear to be a bug, so submit it to IBM as a bug.

The doc (Administrator's Guide) is very clear on this issue. "The server
can preempt server or client operations for a higher priority operation
when a mount point is in use and no others are available, or access to a
specific volume is required." It then goes on to list Restore as a
high-priority operation that can preempt Backup among other
lower-priority operations, for either a mount point or a volume.

Preemption SHOULD work similarly to, and at the same speed as, CANCEL
PROCESS. Preemption should happen as soon as the backup of an object
(which could be large) is completed. Then the BACKUP STG command should
be cancelled to allow the Restore to have access to the resources it
needs. I have watched a Reclamation process get cancelled by a Restore,
sometimes to my surprise ("Where did it go?") until I discovered the
Restore which preempted it.

Yet another proof of the value of the Administrator's Guide manual. You
can find this - sort of - in the online doc, but a direct search does
not bring it up, and it's broken apart into pieces so that it's
difficult to gain a comprehensive understanding. This is a very specific
case where the Administrator's Guide manual is very much superior to the
online web-based doc which sort-of contains the same information. Keep
and update the Administrator's Guide!

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Fri, 8 Apr 2016, Hans Christian Riksheim wrote:

>Hi Pam,
>
>yes it is. At first I thought the backup stg had been cancelled while it was
>in the middle of backing up a large object, but that was not the case. It
>was moving object after object to the copy stg while the restore was waiting
>for that tape. After one hour, backup stg continued with another primary
>volume and the restore session could use it.
>
>Hans Chr.
>
>On Fri, Apr 8, 2016 at 5:40 PM, Pagnotta, Pamela (CONTR) <
>pamela.pagno...@hq.doe.gov> wrote:
>
>> Han,
>>
>> Is it waiting for a tape that is in use in the backup operation? Restores
>> should have top priority, as far as I know.
>>
>> Regards,
>>
>> Pam Pagnotta
>> Sr. System Engineer
>> Criterion Systems, Inc./ActioNet
>> Contractor to US. Department of Energy
>> Office of the CIO/IM-622
>> Office: 301-903-5508
>> Mobile: 301-335-8177
>>
>>
>> -Original Message-
>> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
>> Hans Christian Riksheim
>> Sent: Friday, April 08, 2016 11:28 AM
>> To: ADSM-L@VM.MARIST.EDU
>> Subject: [ADSM-L] Preempt
>>
>> Shouldn't a restore have first priority for a tape volume? I have a restore
>> waiting an hour for a backup stg operation.  The option NOPREEMPT is at
>> default(No).
>>
>> Hans Chr.
>>
>


Migration should preempt reclamation

2016-02-17 Thread Roger Deschner
I was under the impression that higher priority tasks could preempt
lower priority tasks. That is, migration should be able to preempt
reclamation. But it doesn't. A very careful reading of Administrator's
Guide tells me that it does not.

We're having a problem with a large client backup that fails, due to a
disk stgpool filling. (It's a new client, and this is its initial large
backup.) It fills up because the migration process can not get a tape
drive, due to their all being used for reclamation. This also prevents
the client backup from getting a tape drive directly. Does anybody have
a way for migration to get resources (drives, volumes, etc) when a
storage pool reaches its high migration threshold, and reclamation is
using those resources? "Careful scheduling" is the usual answer, but you
can't always schedule what client nodes do. Back on TSM 5.4 I built a
Unix cron job to look for this condition and cancel reclamation
processes, but it was a real Rube Goldberg contraption, so I'm reluctant
to revive it now in the TSM 6+ era. Anybody have a better way?
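The condition such a watchdog tests reduces to a few lines. A hedged sketch with illustrative numbers, not the actual script, which also had to parse dsmadmc output:

```python
def should_cancel_reclamation(pct_utilized, high_mig_pct,
                              reclaim_drives_busy, free_drives):
    """Decide whether reclamation should yield its drives to migration.

    Sketch only: pct_utilized and high_mig_pct would come from QUERY
    STGPOOL, drive counts from QUERY PROCESS and QUERY MOUNT, and the
    actual cancel would be a CANCEL PROCESS issued through dsmadmc.
    """
    over_threshold = pct_utilized >= high_mig_pct
    drives_starved = free_drives == 0 and reclaim_drives_busy > 0
    return over_threshold and drives_starved

print(should_cancel_reclamation(92, 90, reclaim_drives_busy=4, free_drives=0))  # -> True
print(should_cancel_reclamation(70, 90, reclaim_drives_busy=4, free_drives=0))  # -> False
```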

BTW, I thought I'd give the 7.1.4 Information center a try to answer
this. I searched on "preemption". 10 hits none of which were the answer.
So I went to the PDF of the old Administrator's Guide and found it right
away. We need that book!

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


Re: Bring back TSM Administrator's Guide

2016-02-03 Thread Roger Deschner
I'm glad that the last PDFs of this valuable book at Version 7.1.1 are
being made more widely available. I have already downloaded that PDF to
the USB thumb drive I carry around everywhere, for access regardless of
the network condition. This could be in a disaster recovery scenario, or
at a client node site where I might need to create a new account and
password just to access the net and read the doc. The fact that it was
deemed necessary to make this PDF more widely available, points out the
problem.

If you are hinting that you are considering publishing an updated 7.1.5
Administrator's Guide, that would be great! An important thing to update
would be container storage pools. When to use them, how they compare to
other types of storage pools (disk, file, tape, VTL...), and how to set
them up and manage them. Perhaps include them in a new column in Table 5
on Page 77 of the 7.1.1 Admin Guide. Another would be deduplication,
which has a lot of new powerful features in 7.1.4, which will make it
attractive to users who have not deduplicated before.

I rarely use the online Information Center. I find its hierarchical
micro-partitioning of information to be way too narrowly focused,
leading me to overlook obvious solutions to the problem at hand.
Especially new solutions such as container storage pools and the new
features of deduplication. The Information Center is also especially
poor in the client area. I always go to the PDFs first, even when online
with fast network access. Therefore I can't give feedback in the way you
suggest.

I tried the Custom PDF file creation facility, and I found it
disappointing. The resulting PDF has only topics I could think of, and
omits the ones I really need to solve a problem. It requires
clairvoyance to make a useful PDF containing the answer to a problem
that I haven't had yet, and it has no index.

So I am going to reiterate what I said before: Update the
Administrator's Guide for 7.1.5, and keep it up-to-date going forward as
a single book. This book is a key asset for the entire Spectrum Protect
product, especially as existing customers seek to utilize features they
haven't used before, and as new customers explore the incredible depth
of features in the product.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Wed, 27 Jan 2016, Clare Byrne wrote:

>Following up on this thread, the 7.1.1 Administrator's Guide PDFs are now
>available in IBM Knowledge Center at
>http://www.ibm.com/support/knowledgecenter/SSGSG7_7.1.1/com.ibm.itsm.tsm.doc/r_pdfs_tsm.html
>This is in addition to where they were available before, in the .zip file
>and on the Tivoli Storage Manager wiki. We hope this makes it easier to
>find and access the 7.1.1 guides.
>
>A reminder: We are still very much interested in hearing about what the
>most important subjects from the 7.1.1 Administrator's Guide are for us to
>consider for updates. Knowing more specifics about your priorities is very
>important to us to set priorities.
>
>One way to give such feedback privately: Go to any topic (in any version)
>for Tivoli Storage Manager in IBM Knowledge Center and click the Feedback
>link in the menu bar at the very bottom. If your comments exceed the
>1000-character limit in the feedback form, you can include your email
>address in the feedback and ask us to contact you.
>
>Clare Byrne
>Information Development for IBM Spectrum Protect
>Tucson, AZ  US
>
>
>__
>
>Please be assured that we in IBM development are listening and are
>discussing future actions regarding the Administrator's Guide content
>since the feedback about the 7.1.3 release that did not include that book.
>I gave some information in a response to that earlier feedback:
>http://www.mail-archive.com/adsm-l@vm.marist.edu/msg99286.html
>
>We are working on a plan to publish a solution guide for the disk-to-tape
>architecture next. This should fill in some gaps left by the removal of
>the Administrator's Guide. Timing of the publication is yet to be
>determined.
>
>We are also discussing how to fill the information needs for those who
>will not be using one of the four documented architectures. We already
>point to the 7.1.1 guides from newer releases but we realize this is not
>ideal. Possible actions include creating additional, supplemental
>information outside of the solution guides, similar to what we have done
>for the information about NDMP (
>http://www.ibm.com/support/knowledgecenter/SSGSG7_7.1.4/srv.admin/r_ndmp.html
>).
>
>What are the most important subjects from the old Administrator's Guide
>for us to consider? Knowing more specifics about your priorities would be
>very helpful to us for setting our priorities and plans. Web traffic for
>the content on IBM Knowled

TSM Encryption security gap?

2016-01-07 Thread Roger Deschner
We are starting to make more use of TSM Encryption. There is a
combination of features that appears to leave a security gap.

We have decided to use ENCRYPTKEY GENERATE, because it provides what is
in effect encryption key escrow. We require key escrow whenever
encryption is used for university data - it's surprising how many times
encryption keys get lost. We also use PASSWORDACCESS GENERATE, in order
to enable automatic scheduled backups.
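For reference, that option combination amounts to a few lines in the client options file. A sketch; the include pattern is a hypothetical example, not from any real configuration:

```
PASSWORDACCESS  GENERATE
ENCRYPTKEY      GENERATE
* hypothetical pattern - encrypt backups of anything under /sensitive
INCLUDE.ENCRYPT /sensitive/.../*
```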

The gap is in restore. If I have an encrypted drive, whose contents are
backed up using TSM encryption, and then I unplug that drive thinking it
is secure, it is not. Anyone who can boot the machine can restore
everything from the encrypted drive, without entering any key or
password, due to PASSWORDACCESS GENERATE.

We are thinking of instructing users to always do a complete shutdown
(not sleep or hibernate), and to encrypt their boot drive if they have
any sensitive data, even if that data resides somewhere other than the
boot drive. However, this is herding cats. It's unlikely to be followed
in all cases.

A possible solution would be to require re-entry of the TSM password to
restore encrypted data, if both ENCRYPTKEY GENERATE and PASSWORDACCESS
GENERATE are in effect.

Am I understanding this correctly? Is there something I am missing here?

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


Re: Bring back TSM Administrator's Guide

2015-12-16 Thread Roger Deschner
Thank you for your response Clare. It's good to know these comments are
being read.

There are a large number of areas that cannot be covered separately. A
very large topic is Policy, which is one of Spectrum Protect's hardest
to understand areas, yet it affects and is affected by everything else.
For instance, it has been essential to understand Policy in crafting our
hybrid D2D2D2T solution. Or when the auditor came and said we needed to
keep just a certain set of files for five years. The Admin Guide covered
Policy well enough that you could make your way through its many
complexities and get it to work for you. But to cover Policy adequately,
it had to refer to many other areas of the book.

Policy is just one example, which really needs everything else in the
book, in order to explain it well enough. But it needs to all be in one
place. I carry 4 PDFs (Admin Guide, Admin Ref, Windows Client, Unix
Client) around on a USB stick so I can refer to them wherever I am and
whatever the network is there.

Another topic that the solution guides simply do not cover, is the
complex interrelationships between storage pools of different kinds -
tape, VTL, disk, file, optical, etc, and how to structure migration,
reclamation, collocation, and storage pool backup to make the data flow
among them the way you want.

There are several other topics of similar complexity, such as managing the
client scheduler, and how to build a daily schedule of server
activities.

There are of course specialized topics that shouldn't be in there. You
mentioned NDMP. Another is V5 to V6/7 migration. Virtual Machines could
be its own book.

The basic issue here is that, as the sophisticated and fully evolved
"best of breed", Spectrum Protect has a rather steep learning curve. It
always has, since it was WDSF. After 20 years with WDSF/ADSM/TSM/SP I am
still learning things, and I still refer to the Admin Guide frequently.

Doing away with its best documentation can only hurt this highly
sophisticated product. Coherent how-to knowledge is the essential weapon
in the face of complexity. Without the Administrator's Guide, more will
simply declare Spectrum Protect to be "too complicated to understand",
and settle for an inferior solution that is easier to understand. It's a
battle that those of us who know the product well, constantly fight,
which is why this book is so essential to the product.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Tue, 15 Dec 2015, J. Pohlmann wrote:

>Hi Clare. The Admin Guide provided a one-stop shopping approach to all
>functional areas of Spectrum Protect. I think this is what's missing from
>the current documentation. Clearly the Admin Guide was aimed at an
>experienced person as opposed to a novice. In an initial Spectrum Protect
>implementation, the Solutions Guides are of value. However they fall short
>in supplying guidance and "how-to" or "setup" information to an existing
>installation. So, even though the Admin Guide became bigger over the
>releases, the topics are still of extreme value. For example tape is still
>a viable medium, especially in the light of LTO 7 technology. It fits into
>large to very large enterprises, yet when it comes to media management of
>disaster protection implementation, we have no guidance in the new
>publications. So, I would vote for the return of the Admin Guide, it still
>is the best collection of implementation and usage guidance for all the
>various Spectrum Protect aspects. I would prefer one book to a large set of
>publications that are topic oriented.
>
>Best regards,
>Joerg Pohlmann
>
>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
>Clare Byrne
>Sent: December 15, 2015 13:59
>To: ADSM-L@VM.MARIST.EDU
>Subject: Re: [ADSM-L] Bring back TSM Administrator's Guide
>
>Please be assured that we in IBM development are listening and are
>discussing future actions regarding the Administrator's Guide content since
>the feedback about the 7.1.3 release that did not include that book.
>I gave some information in a response to that earlier feedback:
>http://www.mail-archive.com/adsm-l@vm.marist.edu/msg99286.html
>
>We are working on a plan to publish a solution guide for the disk-to-tape
>architecture next. This should fill in some gaps left by the removal of the
>Administrator's Guide. Timing of the publication is yet to be determined.
>
>We are also discussing how to fill the information needs for those who will
>not be using one of the four documented architectures. We already point to
>the 7.1.1 guides from newer releases but we realize this is not ideal.
>Possible actions include creating additional, supplemental information
>outside of the solution 

Re: tsm client 7.1.3.1 and windows 10

2015-12-14 Thread Roger Deschner
Windows 10 is not supported for that client version. You must use client
version 7.1.3.2 or 7.1.4.0, both of which are now available. Try
updating.

Version 7.1.3.2 and 7.1.4.0 also introduced support for Mac OSX 10.11
"El Capitan".

See http://www-01.ibm.com/support/docview.wss?uid=swg21197133

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Fri, 11 Dec 2015, Sami Ahomaa wrote:

>Hi,
>
>Have you tried schedmode=polling?
>Another thing you could do is take all power savings away from network
>card(s).
>
>Regards,
>Sami
>On 11 Dec 2015 6:01 pm, "Lee, Gary" <g...@bsu.edu> wrote:
>
>> Have a client with a new laptop.  He installed tsm client 7.1.3.1.
>> Set up client acceptor as usual, running as system account.
>> Start acceptor, all appears fine, client sees its schedule, and
>> dsmwebcl.log ends with
>> "waiting to be contacted by the server".
>>
>> However, client misses its backup, even with firewall disabled.
>>
>> Any help appreciated.
>>
>


Bring back TSM Administrator's Guide

2015-12-11 Thread Roger Deschner
A great book is gone. The TSM Administrator's Guide has been obsoleted
as of v7.1.3. Its priceless collection of how-to information has been
scattered to the winds, which basically means it is lost. A pity,
because this book had been a model of complete, well-organized,
documentation.

The "Solution Guides" are suitable only for planning a new TSM
installation, and do not address existing TSM sites. Furthermore, they
are much too narrow, and do not cover real-world existing TSM customers.
For instance, we use a blended disk and tape solution (D2D2D2T) to
create a good working compromise between faster restore and storage
cost.

Following links to topics in the online Information Center is a
haphazard process at best, which is never repeatable. There is no Index
or Table of Contents for the online doc - so you cannot even see what
information is there. Unless you actually log in, there is no way to
even leave a "trail of breadcrumbs". Browser bookmarks are useless here,
due to IBM's habit of changing URLs frequently. This is an extremely
inefficient use of my time in finding out how to do something in TSM.

Search is not an acceptable replacement for good organization. Search is
necessary, but it cannot stand alone.

Building a "collection" is not an answer to this requirement. It still
lacks a coherent Index or Table of Contents, so once my collection gets
sizeable, it is also unuseable. And with each successive version, I will
be required to rebuild my collection from scratch all over again.

Despite the fact that it had become fairly large, I humbly ask that the
Administrator's Guide be published again, as a single PDF, in v7.1.4.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


Re: Restore the database TSM V6 with "DB2 restore"

2015-10-14 Thread Roger Deschner
The TSM database is not restored via DB2 commands. Instead, you should
restore it using the "dsmserv restore db" command. This is described in
the "TSM Administrator's Guide" for TSM V6, Chapter 33. The exact syntax
is in "TSM Administrator's Reference", Chapter 4.

Many of us have had to do this, and it works. It is going to be very
difficult or impossible to do this restore using DB2 commands, because
this is not really a DB2 backup. It has to be restored in a particular
way to restore a TSM server that will run.

I would suggest calling IBM Support for assistance.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=



On Tue, 13 Oct 2015, Abdelwahed RABBAB wrote:

>Hello everyone
>
>please someone help me, I look for the way to restore the database TSM V6
>with "DB2 restore" command from a band (TAPE)
>
>Any idea on the DB2 restore syntax?
>
>--
>


Re: Client for Windows 10

2015-09-27 Thread Roger Deschner
I did not encounter the two-step install. I installed 64-bit Windows 10
from a DVD onto bare metal. Then I installed TSM Client V7.1.3 which was
only a one-step process. It appears to work, but I'm nervous about using
it in production until I hear from IBM. I assume it will have the same
restrictions as noted for Windows 8.1.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Wed, 23 Sep 2015, J. Pohlmann wrote:

>I haven't seen a support statement. Here is my experience (sample of 1 - use
>your judgement). My 7.1.2 clients survived the upgrade to Windows 10 just
>fine. I have upgraded one Windows 10 client to 7.1.3 by installing over the
>7.1.2 client. It was a two-step install: 1) install the prereq - then the
>installer quit; 2) install the client - the installer now saw the pre-req
>and the install went fine. I also set up the services and for the first time
>in many months, the Web client/Java worked (wonders never cease to happen).
>Backups and restore worked fine. My intent is to upgrade once there is a
>7.1.3.x with official support for Windows 10. Maybe the install will then
>just be a one-step install.
>
>Regards,
>Joerg Pohlmann
>
>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
>Roger Deschner
>Sent: September 23, 2015 07:48
>To: ADSM-L@VM.MARIST.EDU
>Subject: [ADSM-L] Client for Windows 10
>
>We are starting to get requests for a TSM client to back up Windows 10.
>I've been testing V7.1.3 on a 64-bit Windows 10 test machine which
>*appears* to work. With Microsoft pushing Windows 10 upgrades as fast as
>they can, backed by slick television advertising, and with unhappy Windows
>8/8.1 users eager for relief, this is starting to come fast.
>
>When might we see a client version that supports Windows 10? What have
>others experienced backing up AND RESTORING Windows 10 with a TSM client?
>
>Roger Deschner  University of Illinois at Chicago rog...@uic.edu
>==I have not lost my mind -- it is backed up on tape somewhere.=
>


Client for Windows 10

2015-09-23 Thread Roger Deschner
We are starting to get requests for a TSM client to back up Windows 10.
I've been testing V7.1.3 on a 64-bit Windows 10 test machine which
*appears* to work. With Microsoft pushing Windows 10 upgrades as fast as
they can, backed by slick television advertising, and with unhappy
Windows 8/8.1 users eager for relief, this is starting to come fast.

When might we see a client version that supports Windows 10? What have
others experienced backing up AND RESTORING Windows 10 with a TSM
client?

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


MS Patch 3092627 fixes hang in TSM Windows Client

2015-09-08 Thread Roger Deschner
If you are running the TSM Backup Client on any current Windows
platform, be sure to check out Microsoft Windows patch 3092627. It
claims to fix a hang in Tivoli Storage Manager (among other specific
programs listed in the MS document). The hang was apparently introduced
by MS Security Patch 3076895.

The fixing patch, 3092627, is included in today's "2nd Tuesday" updates.

For more information, see the Microsoft description at:
https://support.microsoft.com/en-us/kb/3092627

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
"The only problem with troubleshooting is that sometimes trouble
shoots back."


Miscalculation of Estimated Capacity for DEVCLASS=FILE

2015-03-16 Thread Roger Deschner
I just had a TSM server (v6.3.5 on AIX) grind to a halt because all
directories of a FILE devclass had become full. However, the Q STGPOOL
command showed it only 83% full. I have followed all the rules about
each directory being a separate filesystem that contains nothing else,
and not overestimating MAXSCRATCH. There is only one storage pool in
this devclass. In fact, changing the MAXSCRATCH value does not change
Estimated Capacity, despite what the manuals say.

The Q DIRSPACE and Unix df commands both show the same true status.

This situation makes setting migration and reclamation thresholds
difficult.

So, what's going wrong here? How can I make the Estimated Capacity of a
FILE storage pool match reality?

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


Ransomware deleted TSM backups from node

2015-01-30 Thread Roger Deschner
I'm not sure there's anything that can be done about this, but take it
as a warning anyway.

A Windows 7 desktop node here was attacked by CryptoWare 3.0 ransomware.
They encrypted all files on the node, and left a ransom note.

The node owner called me because they were having trouble restoring
their files from TSM using a point-in-time restore. The files were gone!
Apparently this villain located which backup program was installed,
found it was TSM, and issued actual dsmc delete backup commands, which
they were allowed to do since PASSWORDACCESS GENERATE was in effect. So
this attack vector is not limited to TSM; it would work with any backup
program that the villain can figure out how to use.

I have moved this node to a domain that includes VEREXISTS=NOLIMIT
VERDELETED=NOLIMIT RETEXTRA=NOLIMIT RETONLY=NOLIMIT for that Copy Group,
while our data security people investigate.

I am planning to change all TSM client nodes to BACKDEL=NO ARCHDEL=NO to
prevent a hacker from deleting backups. Anybody got a better idea?
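For anyone wanting to try the same thing, the setting is applied per node with something like the following; the node name here is hypothetical:

```
update node SOMENODE backdelete=no archdelete=no
query node SOMENODE format=detailed
```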

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
=== ALL YUOR BASE ARE BELONG TO US!! ===


Re: req: tsm db backup completed with failure and taking all scratch tapes for DB BC

2015-01-27 Thread Roger Deschner
You've got to free up one or more scratch tape(s) somehow. Either that
or decide to write your TSM DB backups somewhere else, such as SAN.
There is no way around this.

How many Full DB backups are you keeping? Is that more than you need? If
so, see the DELETE VOLHISTORY command to get rid of old backups and
release scratch tapes. A typical approach is to run a scheduled DELETE
VOLHISTORY TYPE=DBBACKUP TODATE=TODAY-8, or something like that, after a
successful (rc=0) full database backup. Always make sure you keep your
most recent FULL database backup and all the incremental backups after it.
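As a sketch of that approach (the schedule name and start time are made up; adjust the retention window to keep your last full backup plus its incrementals):

```
/* One-off cleanup of DB backup entries older than 8 days: */
delete volhistory type=dbbackup todate=today-8

/* Or run it daily as an administrative schedule: */
define schedule del_dbb type=administrative cmd="delete volhistory type=dbbackup todate=today-8" active=yes starttime=06:00 period=1 perunits=days
```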

Do your own inventory of tapes in your tape library, and make sure you
can identify what each of them is used for. They will either be VOLUMEs
or they'll be listed in the Volume History File for DB backups etc. You
may find some free tapes where you did not expect them. Or you may find
library slots taken up by defective tapes that can be replaced with good
new ones. Just be careful here, not to free a tape with needed data on
it.
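A starting point for that inventory, using placeholder library and pool names:

```
query libvolume LIB1                  /* everything TSM knows is checked in */
query volume stgpool=TAPEPOOL         /* volumes owned by storage pools */
query volhistory type=dbbackup        /* volumes holding DB backups */
query volume access=destroyed         /* defective volumes taking up slots */
```

Any checked-in volume that appears in none of these lists is a candidate free tape; anything listed as destroyed is a candidate for replacement.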

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Wed, 28 Jan 2015, Srikanth Kola23 wrote:

Hi Team,

Can any one get me out of this problem.

tsm server - Version 7, Release 1, Level 1.0

   Platform: WinNT (windows 2008)
   Client OS Level: 6.01

DESCRIPTION: DB backup is completing with FAILURE, and all scratch
volumes are being consumed by DB backups.

act log
==

01/27/2015 14:08:26  ANR0985I Process 28 for Database Backup running in the BACKGROUND completed with completion state FAILURE at 14:08:26. (PROCESS: 28)
01/27/2015 14:18:23  ANR0299I A full database backup will be started. The archive log space used is 82% and the archive log space used threshold is 80%. (PROCESS: 28)
01/27/2015 14:18:23  ANR0984I Process 29 for Database Backup started in the BACKGROUND at 14:18:23. (PROCESS: 29)
01/27/2015 14:18:23  ANR4559I Backup DB is in progress. (PROCESS: 29)
01/27/2015 14:18:28  ANR1405W Scratch volume mount request denied - no scratch volume available. (PROCESS: 29)
01/27/2015 14:18:30  ANR1626I The previous message (message number 1405) was repeated 1 times.
01/27/2015 14:18:30  ANR4578E Database backup/restore terminated - required volume was not mounted. (PROCESS: 29)
01/27/2015 14:18:30  ANR0985I Process 29 for Database Backup running in the BACKGROUND completed with completion state FAILURE at 14:18:30. (PROCESS: 29)
01/27/2015 14:18:30  ANR1893E Process 29 for Database Backup completed with a completion state of FAILURE. (PROCESS: 29)
01/27/2015 14:21:59  ANR1959I Status monitor collecting current data at 14:21:59.
01/27/2015 14:22:15  ANR1960I Status monitor finished collecting data at 14:22:15 and will sleep for 15 minutes.
01/27/2015 14:28:26  ANR0299I A full database backup will be started. The archive log space used is 82% and the archive log space used threshold is 80%. (PROCESS: 29)



Thanks  Regards,

Srikanth kola
Backup  Recovery
IBM India Pvt Ltd, Chennai
Mobile: +91 9885473450



Re: Drive preference in a mixed-media library sharing environment

2014-12-10 Thread Roger Deschner
It won't work. I tried and failed in a StorageTek SL500 library with
LTO4 and LTO5. Just like you are reporting, the LTO4 tapes would get
mounted in the LTO5 drives first, and then there was no free drive in
which to mount a LTO5 tape. I tried all kinds of tricks to make it work,
but it did not work.

Furthermore, despite claims of compatibility, I found that there was a
much higher media error rate when using LTO4 tapes in LTO5 drives,
compared to using the same LTO4 tapes in LTO4 drives. These were HP
drives.

The only way around it is to define two libraries in TSM, one consisting
of the LTO5 drives and tapes, and the other consisting of the LTO6
drives and tapes. Hopefully your LTO5 and LTO6 tapes can be identified
by unique sequences of volsers, e.g. L50001 versus L60001, which will
greatly simplify TSM CHECKIN commands, because then you can use ranges
instead of specifying lists of individual volsers. To check tapes into
that mixed-media library I use something like VOLRANGE=L5,L5 on
the CHECKIN and LABEL commands to make sure the right tapes get checked
into the right TSM Library. Fortunately the different generations of
tape cartridges are different colors.
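The commands look roughly like this; the library name and the volser range are hypothetical:

```
label libvolume LIB1 search=yes labelsource=barcode checkin=scratch volrange=L50000,L59999
checkin libvolume LIB1 search=yes status=scratch checklabel=barcode volrange=L50000,L59999
```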

You can read all about what I went through, and the good, helpful
recommendations from others on this list, by searching the ADSM-L
archives for "UN-mixing LTO-4 and LTO-5". Thanks again to Remco Post
and Wanda Prather for their help back then in 2012!

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Wed, 10 Dec 2014, Grant Street wrote:

On 10/12/14 02:40, Skylar Thompson wrote:
 Hi folks,

 We have two TSM 6.3.4.300 servers connected to a STK SL3000 with 8x LTO5
 drives, and 8x LTO6 drives. One of the TSM servers is the library manager,
 and the other is a client. I'm seeing odd behavior when the client requests
 mounts from the server. My understanding is that a mount request for a
 volume will be placed preferentially in the least-capable drive for that
 volume; that is, a LTO5 volume mounted for write will be placed in a LTO5
 drive if it's available, and in a LTO6 drive if no LTO5 drives are
 available.

 What I'm seeing is that LTO5 volumes are ending up in LTO6 drives first,
 even with no LTO5 drives in use at all. I've verified that all the LTO5
 drives and paths are online for both servers.

 I haven't played with MOUNTLIMIT yet, but I don't think it'll do any good
 since I think that still depends on the mounts ending up in the
 least-capable drives first.

 Any thoughts?

 --
 -- Skylar Thompson (skyl...@u.washington.edu)
 -- Genome Sciences Department, System Administrator
 -- Foege Building S046, (206)-685-7354
 -- University of Washington School of Medicine
Might be a stab in the dark: try numbering the drives such that the
LTO5s are first in the drive list, or vice versa.
That way, when TSM scans for an available drive, it will always try the
LTO5s first.

HTH

Grant



Archive Log Overflow never empties

2014-11-24 Thread Roger Deschner
We have four instances of TSM Server 6.2.5 on AIX, which basically
behave pretty well. But on all of them, we occasionally get a little bit
of data overflowing into the Archive Overflow Log filesystem. I don't
know why, because as far as I can tell the Archive Log filesystem has
never been allowed to fill up. But anyway, it never empties out. Full
database backups reliably free space in the Archive Log filesystem, but
the Archive Overflow Log just gradually fills up. One of them is up to
20%, over a period of a year. I saw somewhere that it takes two full
database backups to free the space, but there have been many dozens of
full backups since.

Any idea how I can get the Archive Overflow Log to empty out?

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
== You will finish your project ahead of schedule. ===
= (Best fortune-cookie fortune ever.) ==


Re: Strange tcp_address value

2014-11-06 Thread Roger Deschner
Don't worry. 192.168.*.* addresses are typically assigned locally by
something like DHCP. They are not exposed outside your local network. In
fact they can't be, because each of us has our own set of 192.168
addresses. Most home routers use this address range. This is quite
normal. Read http://en.wikipedia.org/wiki/Private_network

This could mean that somebody on staff took their TSM-backed-up laptop
home, and while it was connected to their home network, the automatic
scheduled backup happened. This may be something you want to allow.

For serious tracking of nodes and their hardware, I use GUIDs, not IP
addresses.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Thu, 6 Nov 2014, Anjaneyulu Pentyala wrote:

Please check the /etc/hosts entries on your TSM server. The /etc/hosts
file may contain a wrong IP address entry, while the nodes are actually
resolving through DNS. Check the nodes' IP addresses in DNS.

nslookup tcp_name

Regards
Anjen

Aanjaneyulu Penttyala
Technical Services Team Leader
SSO MR Storage
Regional Delivery Center Bangalore
Delivery Center India, GTS Service Delivery
Phone: +91-80-40258197
Mobile: +91- 849781
e-mail: anjan.penty...@in.ibm.com
MTP, K Block, 4F, Bangalore, India




From:
Saravanan evergreen.sa...@gmail.com
To:
ADSM-L@VM.MARIST.EDU
Date:
11/06/2014 08:40 AM
Subject:
Re: [ADSM-L] Strange tcp_address value
Sent by:
ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



The TCP address is mostly chosen by the network; you can try traceroute
from both ends to identify the issue.

By Sarav
+65-82284384


 On 6 Nov 2014, at 12:45 am, Thomas Denier thomas.den...@jefferson.edu
wrote:

 If I execute the command:

 select node_name,tcp_address from nodes

 on one of our TSM servers, two nodes have the same, very strange, value
for the
 address: 192.168.30.4. The same address appears in the corresponding
output
 fields from 'query node' with 'format=detailed'.

 This address does not belong to my employer. All of the network
interfaces on
 the TSM server have addresses in one the officially defined private
address
 ranges. This has been the case since the TSM server code was first
installed.
 Given that, I don't see how a system with the address 192.168.30.4 could
ever
 have connected to the TSM server.

 I see session start messages for both nodes on a daily basis. There are
no error
 messages for these sessions except for an occasional expired password
 message. Even when that happens, subsequent sessions run without errors,
 indicating that a new password was negotiated successfully. The origin
 addresses for the sessions look perfectly reasonable. They are in the
same
 private address range as the TSM server addresses, and in the right
subnet
 for the building the client systems are in. Every relevant statement I
have
 found in the TSM documentation indicates that the tcp_address field
should
 be updated to match the session origin address.

 When the TSM central scheduler attempts to request a backup of one of
the
 nodes it attempts to contact an address in the same subnet as the
session
 origin addresses.

 The TSM server is running TSM 6.2.5.0 server code under zSeries Linux.
The
 two clients are running Windows XP and using TSM 6.2.2.0 client code.
The
 two clients are administered by the same group of people.

 Does anyone know where the strange address could have come from, or
 how to get the TSM to track the node addresses correctly in the future?

 Thomas Denier
 Thomas Jefferson University Hospital
 The information contained in this transmission contains privileged and
confidential information. It is intended only for the use of the person
named above. If you are not the intended recipient, you are hereby
notified that any review, dissemination, distribution or duplication of
this communication is strictly prohibited. If you are not the intended
recipient, please contact the sender by reply email and destroy all copies
of the original message.

 CAUTION: Intended recipients should NOT use email communication for
emergent or urgent health care matters.



Re: Restore of drives containing multi-millions of files

2014-09-15 Thread Roger Deschner
What is the OS platform of the client? It matters. The reason is that
the restore process must rebuild the directory tree on the client system
before it can repopulate it by restoring the files, and with that many
files it is likely a complex directory tree.

If it's Windows, you may be seeing a problem of restoring the directory
objects first. This can cause the same set of tapes to be mounted over
and over, each time restoring only a tiny amount of data. This age-old
ADSM and TSM problem is typically bypassed by using a separate
management class and DIRMC in the client options file. You can find a
lot of information about setting up DIRMC by searching the archives of
this list. Collocation can also help here. But, now that you've got the
problem, the only bypass is to use MOVE NODEDATA to a FILE storage pool
before restoring. I had to do this in order to restore a large Windows
client just last week.
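A sketch of that bypass, assuming a FILE device class (here called FILEDEV) already exists; the pool, node, and source pool names are placeholders:

```
/* Temporary disk pool just for this restore: */
define stgpool restpool FILEDEV maxscratch=100

/* Drain the node's data off tape onto disk: */
move nodedata BIGNODE fromstgpool=TAPEPOOL tostgpool=restpool
```

Once the restore finishes, normal migration can carry the data back to tape and the temporary pool can be deleted.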

Unix-family clients, including Linux and MacOSX, do not have this issue,
because the metadata for Unix directory objects fits into the TSM
Database itself, instead of being written into the storage pool
hierarchy and ending up on tape as happens with the larger Windows
directory objects.

Another issue you may be encountering is that, if the TSM client program
is 32-bit, it is unable to restore multi-millions of files at once due
to address space limits. Memory-efficient settings seemed not to help,
because they forced the use of a large scratch disk area that was
accessed so heavily that the restore was taking days. We encountered this with a
32-bit Solaris client and 18,000,000 files, that ultimately had to be
restored in parts. 64-bit clients do not have this limitation.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Sun, 14 Sep 2014, Geoff Gill wrote:

I was wondering if anyone has come across this in the past. I don't have 
client side access but looks like they are at client level 6.2.0.0, which is 
about all I can say right now.

This client I'm told has over 14 million files on it. The restore which has 
been running for some days has not completely stopped moving data but there 
are 2 sessions that had tapes mounted moving data that are in a Run state 
which show tapes mounted that no longer are. One of those actually dismounted 
that tape 2 days ago, the other just yesterday. Another session is moving data 
and the last is in a wait state for that tape.

My question is what may be going on with the 2 that actually do NOT have the 
tape mounted any longer but f=d shows they do.While I have seen customers with 
clients like this in the past none have ever attempted such a large restore.


Thank You
Geoff Gill



Re: reclamation process fails, no indication why

2014-09-02 Thread Roger Deschner
Try:

q actlog search=recla begintime=08:24:30 endtime=08:24:40
(Might also need to specify begindate and enddate by the time you read
this.)

The message you are interested in was probably produced right before the
message that reclamation had failed. It is quite possible it failed due
to being cancelled by preemption for a higher-priority process, such as
database backup or client restore using the same tape drive. Another
frequent cause is media or drive errors.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Tue, 2 Sep 2014, Lindes, Michael wrote:

Search the actlog for another error message or warning message that goes along 
with the error you mention.
You can search for the session and or process number to help as well.
q actlog sear=ANRE
q actlog sear=ANRW
q actlog sear=36980
q actlog sear=276   ... will probably have to use startime/endtime parameters 
on this one



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@vm.marist.edu] On Behalf Of Tim 
Brown
Sent: Tuesday, September 02, 2014 9:31 AM
To: ADSM-L@vm.marist.edu
Subject: [ADSM-L] reclamation process fails, no indication why

The RECLAMATION process is failing, but I don't see any entries in the
actlog indicating the reason for the failure:

ANR0986I Process 276 for SPACE RECLAMATION running in the BACKGROUND processed 
22,523 items for a total of
3,233,900,670 bytes with a completion state of FAILURE at 08:24:36. (SESSION: 
36980, PROCESS: 276) ANR1893E Process 276 for SPACE RECLAMATION completed with 
a completion state of FAILURE. (SESSION: 36980, PROCESS:
276)




Thanks,

Tim Brown
Supervisor Computer Operations
Central Hudson Gas  Electric
284 South Ave
Poughkeepsie, NY 12601
Email: tbr...@cenhud.commailto:tbr...@cenhud.com mailto:tbr...@cenhud.com
Phone: 845-486-5643
Fax: 845-486-5921
Cell: 845-235-4255


This message contains confidential information and is only for the intended 
recipient. If the reader of this message is not the intended recipient, or an 
employee or agent responsible for delivering this message to the intended 
recipient, please notify the sender immediately by replying to this note and 
deleting all copies and attachments.




Re: Restoration Issue in TSM 5.5...

2014-06-09 Thread Roger Deschner
Is the client Windows? If so, you should have been using the DIRMC
technique to back up Windows directory objects to an online storage
pool.

But, now that you are restoring, it's too late for that. Halt the
restore, and run MOVE NODEDATA to an online DISK or FILE storage pool.
It can be a temporary online storage pool just for this restore. Then
run your restore, and it will go fast. When this restore is done, let it
migrate back to tape.

BACKGROUND: Windows directory objects are too large to fit in the TSM
database, so they are written to the storage pools along with the data,
and eventually migrated to tape. When restoring, TSM has to restore all
the directory objects first, which may only be a few bytes per tape, so
it's going to mount tapes over and over restoring only a very small
amount of data each time, until it rebuilds the entire directory tree so
it can start restoring actual data. OTOH, Unix (AIX, Solaris, Linux,
MacOSX...) directory objects are a lot smaller, and fit in the TSM
database itself, so you won't experience this problem with Unix clients.

The workaround has always been to use DIRMC to direct Windows directory
objects to a different Management Class that is stored online. They're
not that large, so you can afford to keep them online. (Search the
ADSM-L archives for DIRMC and you'll find lots of information about it.)
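In outline, the DIRMC setup is a management class whose backup copy group points at a disk pool; every domain, policy set, class, and pool name below is hypothetical:

```
/* Server side: management class with a disk destination */
define mgmtclass MYDOM MYSET DIRCLASS
define copygroup MYDOM MYSET DIRCLASS type=backup destination=DISKPOOL
activate policyset MYDOM MYSET
```

Then add `DIRMC DIRCLASS` to dsm.opt on the Windows client, so directory objects are bound to that class and stay on disk.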

But if you are stuck with this problem, and you are doing a restore, the
only immediate workaround to get your restore to finish in a reasonable
amount of time is MOVE NODEDATA to an online storage pool. This applies
to all releases of TSM, Server v5.5 or later.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Tue, 10 Jun 2014, Kiran wrote:

Hi, We are using TSM Server and client 5.5 .5 . We are facing slow
restoration issue.



Our restore size is 4TB.



Usually whenever we perform restore (dsm restore) or with GUI BA Client it
should display the following



ANR1182I Removable volume 000519L2 is required for a restore request from
session 553



But restore session is mounting all tapes with creating directory structure
instead of data along with the directories.



Please help.





Regards,



Kiran

Disclaimer: http://www.dqentertainment.com/e-mail_disclaimer.html



Re: Bandwidth

2014-05-16 Thread Roger Deschner
If the switch-to-server links are saturated, you can gang several of
them together with etherchannel.

It's too easy to blame the network for everything, including
inadequately cold soda in the vending machine. So before blaming the
network, make sure that the TSM server processor is not the actual
bottleneck. I have also relieved what appeared to be network bottlenecks
by adding TSM server memory. We have also seen bottlenecks in the links
to the TSM server's disk subsystem. If these kinds of server issues are
the bottleneck, dedup will just make it worse.

And as usual, remember that each time you remove a bottleneck, you just
expose another one - which could actually be the network. If you can't
back up as much as you want, look at the whole system, not just the
network.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu


On Thu, 15 May 2014, Tom Taylor wrote:

Thanks for all the advice, I am talking about WAN traffic.








Thomas Taylor
System Administrator
Jos. A. Bank Clothiers
Cell (443)-974-5768



From:
Bent Christensen b...@cowi.dk
To:
ADSM-L@VM.MARIST.EDU,
Date:
05/15/2014 08:54 AM
Subject:
Re: [ADSM-L] Bandwidth
Sent by:
ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Hi Thomas,

Just to be sure, are you talking about LAN or WAN backup traffic? Is your
bottleneck in the TSM-client-to-switch connection or the
switch-to-tsm-server connection?

If TSM is saturating your WAN lines, looking into dedup and compression is
the best you can do. If your problem is within a LAN you might have to
reconsider your backbone and network design, if that is not an option
spreading the client start times might do the trick.

But there is no such thing in TSM as bandwidth throttling like in i.e.
Symantec Netbackup.

 - Bent





-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Tom Taylor
Sent: Wednesday, May 14, 2014 5:56 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Bandwidth

Good morning,

I run TSM 6.3.4


How do I throttle bandwidth so that the clients don't choke the network
during backups. I have already set a large window for the clients to use,
and I am reading about client side de-duplication, and adaptive file
backup. Are these the only two avenues to reduce the bandwidth used by
TSM?








Thomas Taylor
System Administrator
Jos. A. Bank Clothiers
Cell (443)-974-5768



Re: Moving the TSM DB2 database (AIX)

2014-03-22 Thread Roger Deschner
Thanks to all. AIX migratepv is the make/break mirror method. We will
probably proceed that way during the relative calm of Spring Break.

As for the database restore, we've never done one under V6 yet. Our full
DB backups are running about 45 minutes to local LTO5 or 1:15 to an
offsite SAN over the hill and through the woods. Sounds like it will be
a lot better than with V5. My only experience has been the V5 to V6
conversion.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=



On Fri, 21 Mar 2014, Prather, Wanda wrote:

Was just thinking the same -
It's only the conversion from V5 to V6 that takes forever.  Once you are 
V6/DB2, DB backup-restore is fast again.

I have TSM 6.3.4 on Windows, DB is 930G used, DS3512 disk, and it will back up 
to LTO5 in 90 minutes if the server isn't doing a lot else at the time.  
Restore takes maybe 15 minutes longer.

You've got other issues you should address, if your DB backup is taking many 
hours @ 300GB

W


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Ehresman,David E.
Sent: Friday, March 21, 2014 9:11 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Moving the TSM DB2 database (AIX)

I've used migratepv to move oracle DBs around with no problems.  I would not 
expect any issues with using LVM mirroring or migratepv to move the TSM DB.  
That is what I would do in your situation.

But your comments about taking days to backup and restore your TSM DB worries 
me.  How long does it take to backup your DB?  I have a 600G allocated/415G 
used TSM DB.  It backs up in under an hour and restore time is about the same.

David Ehresman
University of Louisville

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Roger 
Deschner
Sent: Thursday, March 20, 2014 11:35 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Moving the TSM DB2 database (AIX)

Now that TSM V5 is gone from our shop and we're all TSM V6.2, it's time to 
move some things around. Such as the TSM DB2 database. The manual says to do a 
full database backup and restore. That could take days of downtime with our 
150-300GB databases, and a lot of angst, so that is not really acceptable.

What I'm planning to do instead, is what I've always done on AIX. It's one of 
the reasons I like AIX for hosting something like TSM. That is, to basically 
walk the database over to the new location using AIX LVM mirroring. All this 
with TSM up and running, albeit with a performance impact. (It's Spring Break, 
so the performance impact is acceptable.) The end result will be that the 
database has exactly the same Unix filesystem names, path names, and file 
names as before, except that it will be on a nice new faster disk subsystem.

Other than the obvious performance impact while AIX LVM is doing the 
mirroring, is there anything wrong with moving a TSM DB2 database by this 
method? Anybody done this and had problems?

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=



Moving the TSM DB2 database (AIX)

2014-03-20 Thread Roger Deschner
Now that TSM V5 is gone from our shop and we're all TSM V6.2, it's time
to move some things around. Such as the TSM DB2 database. The manual
says to do a full database backup and restore. That could take days of
downtime with our 150-300GB databases, and a lot of angst, so that is
not really acceptable.

What I'm planning to do instead, is what I've always done on AIX. It's
one of the reasons I like AIX for hosting something like TSM. That is,
to basically walk the database over to the new location using AIX LVM
mirroring. All this with TSM up and running, albeit with a performance
impact. (It's Spring Break, so the performance impact is acceptable.)
The end result will be that the database has exactly the same Unix
filesystem names, path names, and file names as before, except that it
will be on a nice new faster disk subsystem.

Other than the obvious performance impact while AIX LVM is doing the
mirroring, is there anything wrong with moving a TSM DB2 database by
this method? Anybody done this and had problems?
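For reference, the walk-over Roger describes can be sketched with standard AIX LVM commands. This is only a sketch: the volume group (tsmvg), logical volume (tsmdblv), and hdisk names are placeholders, and you would repeat the move for every LV holding database, log, or instance filesystems.

```shell
# Add the new (faster) disk to the volume group:
extendvg tsmvg hdisk9

# Option 1: one-shot move of a logical volume to the new disk:
migratepv -l tsmdblv hdisk4 hdisk9

# Option 2: mirror, synchronize, then drop the old copy
# (TSM stays up the whole time, at some performance cost):
mklvcopy tsmdblv 2 hdisk9   # add a second physical copy on the new disk
syncvg -l tsmdblv           # synchronize the mirror
rmlvcopy tsmdblv 1 hdisk4   # remove the copy on the old disk

# When no LVs remain on the old disk, retire it:
reducevg tsmvg hdisk4
```

Because the filesystem and path names never change, DB2 and TSM are unaware that anything moved.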

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


Re: Bad tape question

2014-01-29 Thread Roger Deschner
Try AUDIT VOL ... FIX=NO before giving up on any data. That can clear
files previously marked as damaged.

Try a different tape drive. In general, I have found that a tape which is
starting to go bad can always be read by trying it on each of the other
drives until one works. Then, after you get the data moved off, discard the
tape.
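As an administrative-command sketch of that procedure (the library name, drive name, volume name, and credentials below are placeholders):

```shell
# FIX=NO deletes nothing; if the files read cleanly, damaged flags are cleared:
dsmadmc -id=admin -password=secret "audit volume A00015L3 fix=no"

# Force the retry onto a different drive by taking the suspect one offline:
dsmadmc -id=admin -password=secret "update drive LIB1 DRIVE1 online=no"
dsmadmc -id=admin -password=secret "move data A00015L3"
```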

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Wed, 29 Jan 2014, mik wrote:

Hi people,

I have a question. I have a bad tape with 5 files on it. I ran an audit with 
FIX=YES, which marked them as damaged; I then tried a MOVE DATA, which failed 
immediately after skipping the damaged files.
Does anybody have a solution to make the MOVE DATA succeed (or to return the 
tape to scratch so that TSM no longer asks for the damaged files on it)?
I want to export the node, and I don't want the export to fail because it 
needs the 5 damaged files.

The results of the audit and the move data are below.

For the Audit
ANR4132I Audit volume process ended for volume A00015L3; 5 files inspected, 0
damaged files deleted, 5 damaged files marked as damaged, 0 files previously
marked as damaged reset to undamaged, 0 objects updated.
ANR0987I Process 52 for AUDIT VOLUME (REPAIR) running in the BACKGROUND
processed 5 items with a completion state of SUCCESS at 13:18:57.

For the move data
ANR2017I Administrator ADMIN issued command: MOVE DATA A00015L3 stg=tapepool2
reconst=yes
ANR1157I Removable volume A00015L3 is required for move process.
ANR0984I Process 57 for MOVE DATA started in the BACKGROUND at 13:45:56.
ANR1140I Move data process started for volume A00015L3 (process ID 57).
ANR1176I Moving data for collocation set 1 of 1 on volume A00015L3.
ANR1161W The move process is skipping a damaged file on volume A00015L3: Node
node_name, Type Backup, File space \\fold\f$, File name \folder\folder
- folder\101 folder - folder - file.jpg.
ANR1161W The move process is skipping a damaged file on volume A00015L3: Node
node_name, Type Backup, File space \\fold\f$, File name \folder\folder
- folder\101 folder - folder - file2.jpg.
ANR1161W The move process is skipping a damaged file on volume A00015L3: Node
node_name, Type Backup, File space \\fold\f$, File name \folder\folder
- folder\101 folder - folder - file3.jpg.
ANR1161W The move process is skipping a damaged file on volume A00015L3: Node
node_name, Type Backup, File space \\fold\f$, File name \folder\folder
- folder\101 folder - folder - file4.jpg..
ANR1161W The move process is skipping a damaged file on volume A00015L3: Node
node_name, Type Backup, File space \\fold\f$, File name \folder\folder
- folder\101 folder - folder - file5.jpg.
ANR1141I Move data process ended for volume A00015L3.
ANR0985I Process 57 for MOVE DATA running in the BACKGROUND completed with
completion state FAILURE at 13:45:57.

I hope somebody has a solution.

Regards, Mickael.

+--
|This was sent by bobpatrick808...@yahoo.fr via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



Re: Who might be running multiple V6 TSM instances on the same LPAR/system???

2014-01-14 Thread Roger Deschner
We are running three TSM 6.2 server instances on one AIX machine,
without LPAR, HACMP, or VM. One of them is Library Manager for the other
two.

It works great if you are very careful with directory and file names,
and Unix permissions, so as to keep everything separate. Run each
instance as non-root, which mostly keeps them from walking on each
other. Map out your directory and file naming conventions in advance of
installing ANY of them, using the worksheets in the manuals as a
starting point. The large amount of time I spent in this basic planning
stage has been very worthwhile.

Just follow the instructions in the Installation Guide and it works. The
manuals and installation procedures were mostly written with multiple
instances in mind, so this works much better than it did in TSM 5. The
only change I made is that the manual suggests making the Instance
Directory /home/tsminst1/tsminst1; however, I have found it is much
easier to keep track of it all by naming them /home/tsminst1/instancedir,
/home/tsminst2/instancedir, ...

Everything else (database, logs, etc...) is stored under /var/tsminst1/
or /var/tsminst2/...

However, DISK and FILE storage pools are generic, because sometimes I
move them from one server to another as load dictates. (Changing Unix
ownership when I do move them.)

Our client nodes all expect to connect via port 1500, and with hundreds
of nodes we cannot get them to change; it would take me a year of going
office to office, and it couldn't work anyway because I'd need to change
them all in one day. Therefore we use xinetd to forward port 1500 to the
other instances' port numbers. This requires one NIC per instance, which
is better for performance reasons anyway.

Your biggest pitfall will be Unix permissions. It works if you are
careful and get it right. Make each instance ID the owner of everything
it actually owns. Set the Unix Group to be tsmsrvrs everywhere, but be
careful not to give one instance write access to another instance's
things by setting group-write. Let the Library Manager keep each
instance's tapes separate.

DO NOT let permissions issues tempt you to run any instance as root!
That is an invitation to disaster.

The result is that in the future we can combine multiple TSM servers
onto fewer machines, or split them apart onto separate machines, with
minimal disruption, all without the overhead of virtual machines.

P.S. After trying for months to migrate nodes from an old V5 server to a
new V6 server, we gave up and simply upgraded the V5 server using the
upgrade procedure. (We used the New System, Network method.) Moving
nodes individually turned out to be worse than herding cats. (I'm
dealing with college faculty!) A V5 to V6 upgrade is a couple of days of
downtime and very hard work, but then you're done. This decision to
upgrade instead of moving nodes is why we now have multiple instances.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Mon, 13 Jan 2014, Meunier, Yann wrote:

Hi,

I'm in this configuration! :)
I have 4 TSM 6.4 instances on 2 AIX servers in HACMP.
We went into production last month, so I don't have much feedback yet.
We begin the migration of our nodes this week, and we have more than 2000 nodes 
to do!

For the moment we don't have:
- VTL
- Active-data pools
- Dedup
- Replication

We hope to implement replication eventually.

Best regards,

Yann

-Message d'origine-
De : ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] De la part de 
Dwight Cook
Envoyé : vendredi 10 janvier 2014 21:00
À : ADSM-L@VM.MARIST.EDU
Objet : [ADSM-L] Who might be running multiple V6 TSM instances on the same 
LPAR/system???

Would greatly appreciate any input/feedback on what to look out for, tips, 
hints, etc. in an AIX environment.

Re: running two distinct tsm client instances pointing to two different servers on windows box

2014-01-08 Thread Roger Deschner
We do this, using the method Wanda suggests. We tried it using just
different management classes, but there was bleed-over and we had some
backed-up data retained for a lot longer than we had intended. Using
different domains, as Wanda suggests, made it work the way we wanted.

Just use include/exclude to make sure each one is backing up only the
data you want it to back up. It's easy to make a mistake and either back
up some data twice, or never.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Thu, 9 Jan 2014, Prather, Wanda wrote:

I have several customers doing this.
One, for example, has a machine called CLIENT.
It is registered as CLIENT-DAILY to run daily backups, and also as 
CLIENT-MONTHLY to run monthly backups.

You just have 2 different schedulers, pointing to 2 different dsm.opt files.
The first dsm.opt specifies NODENAME CLIENT-DAILY
The second dsm.opt specifies NODENAME CLIENT-MONTHLY
CLIENT-DAILY is registered in the DAILY domain
CLIENT-MONTHLY is registered in the MONTHLY domain.

Domains have different schedules, and mgmt. classes have copy groups pointing 
to different destination disk and tape pools.
To simplify, I use classic schedulers (running without dsmcad and 
MANAGEDSERVICES SCHEDULE), although it should be possible to set up multiple 
dsmcads too (but that makes my head hurt, so I don't).
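As a sketch, the two options files might look like this (server address, port, and log paths are hypothetical):

```
* dsm-daily.opt
NODENAME          CLIENT-DAILY
TCPSERVERADDRESS  tsm.example.edu
TCPPORT           1500
SCHEDLOGNAME      "C:\tsm\dsmsched-daily.log"

* dsm-monthly.opt
NODENAME          CLIENT-MONTHLY
TCPSERVERADDRESS  tsm.example.edu
TCPPORT           1500
SCHEDLOGNAME      "C:\tsm\dsmsched-monthly.log"
```

Each scheduler service is then installed against its own options file, e.g. dsmcutil install scheduler /name:"TSM Sched DAILY" /optfile:"C:\tsm\dsm-daily.opt" /node:CLIENT-DAILY on Windows.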

But I have never had a problem getting it to work.
Can you be more specific about the problems you are having?

W



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Ehresman,David E.
Sent: Wednesday, January 08, 2014 12:48 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] running two distinct tsm client instances pointing to 
two different servers on windows box

Gary,

Maybe a different approach would work.

Use the same TSM node but use INCLUDE statements to point different 
filesystems to different management classes pointing to different disk/tape 
combinations.
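A hypothetical dsm.opt fragment for that approach (the paths and management class names are made up):

```
* Bind filespaces to different management classes; each class's backup
* copy group points at a different disk/tape destination.
INCLUDE  D:\data\...\*    DISKPOOL_MC
INCLUDE  E:\images\...\*  TAPEPOOL_MC
```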

David

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee, 
Gary
Sent: Wednesday, January 08, 2014 11:11 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] running two distinct tsm client instances pointing to two 
different servers on windows box

I need to do some testing where a client machine must be two distinct tsm 
nodes on different servers; or on the same server but a different domain.

I've been playing with this, but can't seem to get it working.

I want machine client-a backing up to one disk / tape combination, while its 
alter ego client-b backs up to another disk / tape combination on the same tsm 
server.

Most clients will be windows, with a few linux thrown in.

Any ideas would be helpful.



Re: Export node bad tape

2014-01-04 Thread Roger Deschner
Try exporting the node directly from one server to the other via the
network. Getting the two servers to talk to each other over the net is
not difficult. V5 talks to V6 OK.

BTW just finished an upgrade of our largest TSM server (300GB database)
from V5 to V6, and it went very well. I have concluded that upgrading a
TSM server from V5 to V6 is much better, faster, and easier than moving
the nodes via export or having them do new backups. I wish I had known
this two years ago, because I could have saved a lot of work. The
upgrade process is long and complicated, taking us 49 hours, but it
works and then you're done.

We now have a v6.2.5 TSM server with the Server Installation Date/Time
in 1999.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Fri, 3 Jan 2014, mik wrote:

Hi, and thanks for the reply, Rick.

The audit volume is in progress; I hope the audit will turn out well.

Happy new year to you and your family.

Regards, Mickael.




Re: ANR8944E and move data

2013-12-16 Thread Roger Deschner
First run SHOW DAMAGED stgpool to see what's damaged.

It might be interesting to run CLEAN DRIVE.

Then run AUDIT VOL A00015L3 FIX=NO. You need FIX=NO to prevent deletion
of damaged files that might still be readable on a different drive; it
will still recover damaged files. Once you find a drive that can read
this worn-out tape, make sure TSM uses that same drive for the MOVE
DATA. AUDIT VOL with FIX=NO is a crucial tool if you've got a flaky
drive.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=



On Mon, 16 Dec 2013, Skylar Thompson wrote:

You can also try doing Q CON volume-name DAM=Y to see if TSM sees any
damaged files on that volume. It could be that the MOVE DATA managed to
read the data after all, and it's now on a different volume. In any case,
you have a suspect volume.

More concerning, though, is the state of your copy pool. Without your copy
pools, you're vulnerable to losing all data in your primary pools.

On Mon, Dec 16, 2013 at 05:30:09AM -0800, mik wrote:
 Skylar Thompson

 The MOVE DATA encountered a problem with either the tape, or the drive
 itself. A couple things to try:

 * Try marking DRIVE2 as offline (UPD DR library DRIVE2 ONL=N), and
 then running the MOVE DATA again. This will force the operation to use a
 different drive. If you get the same problem, then it points to a
 problem with the tape, not the drive.

 -- Drive 1 is already down.

 * Try running an audit on the tape (AUDIT VOL A00015L3 F=Y) and watch
 the activity log for problems

 -- I tried it; there were no errors during the audit (maybe the move data 
 after it will behave differently).

 * If you do have a bad tape, you can use your copy pool to restore it
 with RESTORE VOLUME.

 -- One drive is down, and the copy pool is down too.

 You might also check the logs on your library; some of them will give an
 indication of whether the problem is media- or drive-related.

 -- Just basic information, nothing more precise.

 For Richard B: the same response applies.

 Thanks for the reply.

 Regards, Mickael.


--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine



Re: Recover Archived Data from Tapes without Catalog

2013-12-12 Thread Roger Deschner
Back to the original question. You've got to fix that original Solaris
server somehow. You don't need to access the whole tape library, just
one LTO4 drive, and then mount tapes manually.

The tapes themselves are utterly useless without the original TSM
database. There is no way to reconstruct the database from the tapes;
the metadata, such as file names and owner node names, is not there.
Files that were encrypted at the client will be very difficult, if not
impossible, to recover without the database.

There is an appliance called Index Engines that you can buy which can
read TSM data from tapes. However, it still cannot recover the metadata,
such as whom a file belongs to, what the filenames were, or whether it
was the most recent copy. File formats that contain binary metadata
themselves, such as Microsoft Word documents, will not come back in any
usable form. It is mostly used in forensic exploration such as legal
e-discovery. These machines are not cheap; repairing your broken Solaris
machine would likely cost less, take less time and effort, and give
better results.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Wed, 11 Dec 2013, Nora wrote:

Hello,
We recently lost a TSM server because of a severe server hardware problem that 
makes the tape library completely inaccessible to the server. We are now 
building a new TSM server and connecting the library to it. However, the new 
TSM server we are building is on a different OS (Red Hat Linux), while the 
original server was on Solaris 10. The difference in OSes between the two 
servers makes it impossible to restore the existing TSM database, although we 
have it. And the old server's inability to use the tape library is also 
stopping us from using the Export Node functionality for data we want to save.
We are mainly concerned about 120 GB of archived data that we want to retrieve 
from this tape library with TSM before migrating to the new TSM server.
Does anyone know of a way to read TSM data off LTO4 tapes without the catalog, 
so we can save this data? Or does IBM provide such a service?

+--
|This was sent by noran...@adma.ae via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



Patch this TSM Server vulnerability now

2013-12-04 Thread Roger Deschner
On Monday IBM sent a Flash to many of us announcing a security
vulnerability in the TSM Server. Regular non-administrator end-users on
a multi-user system can restore files belonging to other users,
including userid root. For instance, this could be a Unix system that
hosts shell accounts. Dissecting the CVSS scoring reveals Access
Complexity: Low and Authentication: None - which basically means
anyone can do it. Obviously, this is an opportunity for a breach of
confidentiality.

If you back up any multi-user clients which have non-administrative
accounts, this applies to you. It definitely applied to us, so I updated
all our TSM server instances immediately.

The Flash containing the full description and a list of fixing releases
is at http://www-01.ibm.com/support/docview.wss?uid=swg21657726

Kudos to IBM for making well-tested fixes widely available before
publishing the vulnerability, and also for announcing it after the
Thanksgiving holiday rather than before.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


Re: Help on q nodedata

2013-10-31 Thread Roger Deschner
It looks like the strange behavior is in MOVE NODEDATA, not Q NODEDATA.
You should believe what Q NODEDATA says. Something went wrong with the
move - even though it says SUCCESS. Do something like

Q ACTLOG BEGINT=xx:xx BEGIND=xx/xx/2013 SEARCH='MOVE NODEDATA'

to see why it stopped and thought it had succeeded before moving all of
the data. Perhaps a restore or reclamation underway? Beware that it
likely has to un-dedup the data as a part of the MOVE NODEDATA process.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=



On Thu, 31 Oct 2013, Robert Ouzen wrote:

Hi to all

I am seeing strange behavior on one client. Running Q NODEDATA EDUNW gives 
this output:



Node Name  Volume Name                                  Storage Pool Name  Physical Space Occupied (MB)
---------  -------------------------------------------  -----------------  ----------------------------
EDUNW      T:\DEDUP6\00053C08.BFS                       DEDUP6TSM                                  18.16
EDUNW      T:\DEDUP6\00053C21.BFS                       DEDUP6TSM                                  71.95
EDUNW      T:\DEDUP6\00053C25.BFS                       DEDUP6TSM                               4,964.89
EDUNW      T:\DEDUP6\00053C39.BFS                       DEDUP6TSM                                   0.06
EDUNW      T:\DEDUP6\00053C44.BFS                       DEDUP6TSM                                 699.58
EDUNW      \\DD580G\BACKUP\POSTBACK\EDUCATION\.BFS      DD_EDUCATION                           36,533.57
EDUNW      \\DD580G\BACKUP\POSTBACK\EDUCATION\0001.BFS  DD_EDUCATION                           37,187.14
EDUNW      \\DD580G\BACKUP\POSTBACK\EDUCATION\0002.BFS  DD_EDUCATION                           38,004.69
EDUNW      \\DD580G\BACKUP\POSTBACK\EDUCATION\0003.BFS  DD_EDUCATION                           33,756.19
EDUNW      \\DD580G\BACKUP\POSTBACK\EDUCATION\0004.BFS  DD_EDUCATION                           24,543.91
EDUNW      \\DD580G\BACKUP\POSTBACK\EDUCATION\0005.BFS  DD_EDUCATION                           32,374.40

The correct storage pool for this data is DD_EDUCATION.

I have run move nodedata edunw from=dedup6tsm to=dd_education recons=yes 
several times, and every time the process completes with SUCCESS.

But I still get this output?

I also did a q nodedata * vol=T:\DEDUP6\00053C08.BFS and still see a 
reference to node EDUNW on it:

tsm: POSTBACK> q nodedata edunw vol=S:\DEDUP6\00053BB2.BFS

Node Name  Volume Name             Storage Pool Name  Physical Space Occupied (MB)
---------  ----------------------  -----------------  ----------------------------
EDUNW      S:\DEDUP6\00053BB2.BFS  DEDUP6TSM                                 57.52

How can I get rid of those entries?

My TSM server version is 6.3.4.200.
My TSM client version is 6.2.2.0.

T.I.A  Regards

Robert



Mac OSX 10.9 Mavericks in TSM Backup?

2013-10-29 Thread Roger Deschner
We've got Mac users who are showing up with OSX 10.9 Mavericks, and
who would like to back these up to TSM. I am seeing in
http://www-01.ibm.com/support/docview.wss?uid=swg21053584 that even the
TSM V7.1 client will not support OSX 10.9, but only supports 10.8.

Does anybody have any experience backing up (and of course also
restoring) OSX 10.9 Mavericks with TSM? Is something about to come,
such as a clarification when TSM 7.1 is GA in a couple of days?

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


Re: TSM v5-v6 upgrade - permissions of raw disk pool vols

2013-09-20 Thread Roger Deschner
I have two TSM V6 instances on one computer. I always make the instance
owner userid to be the owner of its raw logical volumes, as a matter of
safety, with chmod 600. This way I don't accidentally allow more than
one TSM instance to get access to the same raw LV.

This is not required, but it sure is a good idea! And it's a much better
idea than 'chmod 666' suggested in that IBM document. The sign of the
devil should be a warning of data loss if you use it, especially with
multiple TSM instances. (And I'm not just being irrationally
hexakosioihexekontahexaphobic.)

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Thu, 19 Sep 2013, Shawn DREW wrote:

Yes, permission needs to be considered for v6 resource access, although you 
don't necessarily need to reassign ownership.

http://www-01.ibm.com/support/docview.wss?uid=swg21394164


Regards,
Shawn

Shawn Drew


 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Thursday, September 19, 2013 1:03 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] TSM v5-v6 upgrade - permissions of raw disk pool vols

 Our TSM v5 servers all run as root.  After the conversion to v6 they will be
 running as a non-root account which is the tsm/db2 instance owner.

 Our disk pools are all raw logical volumes.  Do we need to change ownership
 of the raw volumes to the new instance owner so dsmserv can access the
 LV's?
 Along the same lines, is the new v6 dsmserv  able to access the RMT tape
 devices, or do I have to change their ownership also?

 Thanks

 Rick










Re: strange tape handling

2013-09-20 Thread Roger Deschner
I see this strange behavior as well. All the tapes are READWRITE and
FILLING. It's the same on TSM V5 or V6.

Sometimes this arises when you have multiple migration processes. A
guess is that you will have (migprocesses * collocation groups) FILLING
tapes. That could be a lot of FILLING tapes. But in real life, there are
often more than (migprocesses * collocation groups) FILLING tapes.

I think this can also come about due to tape thrashing as migration
works its way through collocation groups. If you have two tape drives,
and the FILLING tape for a collocation group is in use on one, but the
second migration process wants to use that tape to migrate files for
another node in that same collocation group, it allocates a new scratch
tape. So one change I'd suggest for TSM development would be to migrate
all nodes in a collocation group together. This would improve both tape
utilization, and performance.

Even that does not explain it all. I still see scratch tapes allocated
when there are FILLING READWRITE tapes available for the same
collocation group. It is still strange.

If the number of scratch tapes is getting low, I MOVE DATA off of the 1%
full volumes. There is so little data involved in a 1% full tape, that
you can typically afford to move it back into a disk stgpool where it
will be re-migrated.

I wish it didn't work this way, but I don't worry about it too much. The
99% empty FILLING tapes will be used as soon as really needed. I start
to worry when I see that the situation is forcing the server to ignore
collocation boundaries. That can happen when the number of scratch tapes
reaches 0. Pay more attention to setting MAXSCRATCH to the actual number
of tapes in your tape library, and to setting ESTCAP in the devclass to
how much data you can actually put on a tape after compression, and then
your %full statistics will accurately reflect the total amount of unused
space left in the stgpool on FILLING tapes as well as SCRATCH tapes. Our
real-life compression ratio runs about 1.4:1.
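The ESTCAP arithmetic is simple enough to sanity-check in a shell. The native capacity and device class name below are hypothetical; the 1.4:1 ratio is the real-life figure mentioned above:

```shell
# Effective capacity = native capacity * measured compression ratio.
native_gb=1500   # hypothetical native cartridge capacity in GB
ratio="1.4"      # measured real-life compression ratio
estcap_gb=$(awk -v n="$native_gb" -v r="$ratio" 'BEGIN { printf "%.0f", n * r }')
echo "UPDATE DEVCLASS LTO5CLASS ESTCAPACITY=${estcap_gb}G"   # prints ...ESTCAPACITY=2100G
```

With ESTCAP set this way, %full on FILLING tapes reflects space that will actually be usable after compression.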

This annoying but not critical problem is even worse with FILE stgpools.
Another reason we use scratch FILE volumes, not preallocated ones. A 1%
full scratch FILE volume only takes up 1% of the space, whereas a 1%
full preallocated volume (like a tape) takes up 100% of the space.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Thu, 19 Sep 2013, Lee, Gary wrote:

A few were READONLY. However, that did not explain all of them by a long way.
Others are FILLING.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Prather, Wanda
Sent: Thursday, September 19, 2013 4:05 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] strange tape handling

What is the access setting on the tape, is it READWRITE? And is status FULL or 
FILLING?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee, 
Gary
Sent: Thursday, September 19, 2013 2:05 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] strange tape handling

Tsm server 6.2.4 on RHEL 6

Tape storage pool on ts1120 tapes, collocation set to group.

Node libprint does not belong to any group.  Its data is now on two volumes.
First volume has estimated capacity 1 tB.
It is 1.4% full.

This morning, tsm was migrating its data from disk pool to tape.  It pulled 
another scratch to store systemstate data even though the original volume was 
only  1.4% full as stated above.

What is up with this? It is causing my scratch tapes to be eaten up to no 
purpose.



Migration tape mount thrashing despite collocation

2013-08-05 Thread Roger Deschner
I've been going over logs of tape mounts, looking for a tape performance
issue. I watched the tape library for a while during migration, and the
robot was being kept very busy, mounting and dismounting the same set of
tapes over and over. When each tape volume is mounted and dismounted as
many as 6 times during a single migration, something is not right. Not
only is this rather slow, but it is wearing out the tapes.

Group collocation is set for both the DEVTYPE=FILE disk stgpool and the
next one in the hierarchy, which is tape. The collocation groups are
sensibly sized: the total size of each group is about (capacity of one tape
* number of tape drives), to optimize large restores.
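That sizing rule works out as follows; the per-tape capacity and drive count below are placeholders, since both vary by site:

```shell
# Rule of thumb from the post: group size ~= one tape's capacity * number of drives,
# so a large restore of one group can keep every drive busy on its own tape.
tape_gb=2100   # effective LTO-5 capacity after compression (assumption)
drives=4       # tape drives available for parallel restores (assumption)
group_gb=$((tape_gb * drives))
echo "target collocation group size: ${group_gb} GB"   # prints 8400 GB here
```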

However, I'm seeing this tape thrashing as migration proceeds. Why is
this happening with the same collocation in effect for both stgpools?
Shouldn't migration be optimized for tape mounts when the collocation
definitions are the same?

BTW, I looked over the tape mounts for a different server which migrates
from DEVCLASS=DISK (random, no collocation) to tape, and I saw the same
effect, though not quite as bad as DEVTYPE=FILE (sequential, group
collocation) to tape.

I have set MAXCAP on the devclass to 25gb. Would making that larger
help? I'm not sure it would, since the random-access stgpool acted
mostly the same way.

This is TSM 6.2 on AIX, with LTO-4 and LTO-5 tape.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


Re: Windows client memory leak

2013-07-27 Thread Roger Deschner
The problem should go away if you use the Client Acceptor Daemon. The
scheduler client program is notorious for memory leaks at various
release levels. With the CAD, dsmsched terminates after each backup,
releasing all its memory. We now use the CAD on all Windows client
machines.
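For reference, a CAD-managed scheduler is set up roughly like this on Windows; the service names, node name, and password below are placeholders:

```
rem dsm.opt must also contain:  MANAGEDSERVICES SCHEDULE
dsmcutil install scheduler /name:"TSM Scheduler" /node:MYNODE /password:secret /autostart:no
dsmcutil install cad /name:"TSM CAD" /node:MYNODE /password:secret /autostart:yes /cadschedname:"TSM Scheduler"
```

With MANAGEDSERVICES SCHEDULE in effect, the CAD starts dsmsched for each window and lets it exit afterward, so leaked memory is released between backups.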

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Fri, 26 Jul 2013, Thomas Denier wrote:

We have a number of Windows systems with TSM 6.2.2.0 client code. It is
by now well known that the client scheduler service at this level retains,
through the intervals between backup windows, the maximum amount of memory
it needed during the backup window.

We have installed 6.2.4.0 client code on a few of our Windows systems.
This eliminates the behavior described above, but our Windows administrators
are reporting evidence of a memory leak. The amount of memory used by the
client scheduler service when it is waiting for the next backup to start
seems to grow by about 10 KB per day. Are other sites seeing this? Is
there an available service level with a fix for this, or a prediction
for a fix in a future service level?

Thomas Denier
Thomas Jefferson University Hospital



Re: : TSM Running out of Scratch Tapes

2013-07-08 Thread Roger Deschner
You should use the Q ACTLOG command with a begintime and begindate just
before a bunch of tapes were marked unavailable to see more information
about why this is happening. Though you are searching for a needle in a
haystack, you may be able to narrow it down with the SEARCH option on
the Q ACTLOG command. For instance, search on the volume name of a tape
that you know got marked unavailable during that time period. You could
also search on UNAVAILABLE. The error messages giving the reasons are
in the ACTLOG, though finding them may take some searching.
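
For example, if the tapes went unavailable shortly after 02:00 on July 7th, queries along these lines narrow things down (the dates, times, and volume name here are placeholders, not from the original post):

```text
query actlog begindate=07/07/2013 begintime=01:45 endtime=02:30 search=UNAVAILABLE
query actlog begindate=07/07/2013 begintime=01:45 search=A00123L4
```

The first form finds every message mentioning UNAVAILABLE in the window; the second pulls the full history for one suspect volume.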

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
== You will finish your project ahead of schedule. ===
= (Best fortune-cookie fortune ever.) ==


On Mon, 8 Jul 2013, white jeff wrote:

Hi

It would be most useful if you stated which version of TSM server you are
running. The OS it runs on and the tape media (LTO3/4/5 etc.) would also be helpful.

What retention policies are in place for the domain copygroups? If NOLIMIT
is in force, then data will never expire and will eventually use all of
your tapes

When you say tapes are becoming 'unavailable', is this the status of the
volume within TSM? (Do a 'q vol f=d' to check.) If so, this is often
because you are trying to use tapes that have been physically removed from
the library. If TSM wants to mount such a tape, it will eventually place
it in an 'UNAVAILABLE' state, because the volume it needs is no longer
in the library.

In terms of running low on scratch, as the previous poster said, make sure
expiration and reclamation are running. In particular, make sure reclamation
is running and actually completing. The SUMMARY table will report on
expiration and reclamation activity over a period of time (usually 28 days).

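One way to check that from the SUMMARY table (the ACTIVITY values and column names here are as I recall them from recent server levels; verify against your own schema):

```text
SELECT ACTIVITY, START_TIME, END_TIME, SUCCESSFUL FROM SUMMARY
  WHERE ACTIVITY IN ('EXPIRATION','RECLAMATION')
  ORDER BY START_TIME DESC
```

If reclamation rows are missing entirely, or show unsuccessful completions night after night, that alone can explain a shrinking scratch pool.
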
Establish the percent-reclaimable space of your tape volumes. Use this
command:

SELECT VOLUME_NAME, STGPOOL_NAME, PCT_RECLAIM, STATUS FROM VOLUMES WHERE
VOLUME_NAME LIKE '%L3'   (This is assuming you are using LTO3 tape. Change
accordingly)

How many versions of database backups do you retain?

These are common causes of tape problems.








On 8 July 2013 10:13, Adeel Mehmood ad.mehm...@diyarme.com wrote:

 Dears,

 We are facing a problem: the TSM scratch tapes are running out, and
 tapes are becoming unavailable.
 Please advise how to handle this issue.

 Thanks, Adeel

 





Re: Clients using DHCP vs Fixed IP addresses

2013-05-25 Thread Roger Deschner
Us too. We have always forced all clients to use polling sched mode, and
have never had to hassle with IP addresses. When DHCP and VLANs came along,
it was no problem. Our client node administrators have always been happy
to hear that TSM backup is independent of IP address.

We use other methods to detect two or more machines backing up to one
nodename, specifically whether the GUID changes repeatedly.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
=== Klein bottle for rent -- inquire within. ===


On Fri, 24 May 2013, Karel Bos wrote:

Hi,

One of the nice things about having all clients in polling sched mode is that
there is no IP administration to do: a client's IP can change without restarting
TSM services (or manually contacting the TSM server).

Kind regards,
Karel

- Oorspronkelijk bericht -
Van: Zoltan Forray zfor...@vcu.edu
Verzonden: donderdag 23 mei 2013 18:13
Aan: ADSM-L@VM.MARIST.EDU
Onderwerp: [ADSM-L] Clients using DHCP vs Fixed IP addresses

Up until now, we have always assigned fixed IP addresses to all TSM
clients/nodes.

Now they want to reconfigure all backup networking to use VLANs and have
asked the question, do the clients need to be fixed IP addresses or can
they be dynamically assigned from a specific DHCP server?

Is this do-able?  How does it effect scheduling? What configuration changes
would be needed (SCHEDMODE?  SESSIONINITIATION?)

How do most folks manage backup scheduling and client addressing?

--
*Zoltan Forray*
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html



Re: Library manager question

2013-04-21 Thread Roger Deschner
This thread has strayed.

There's no special magic here. On the client server, you define the
library as SHARED. That's how it knows. Then you run CHECKIN on the
Library Manager. Just remember: The Library Manager owns the LIBVOLS and
the DRIVES, and the Library Client owns the VOLUMES.
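
In outline, the definitions look roughly like this (the library, device, and server names are made up for illustration; the drives and their paths also need defining on the manager, which is omitted here):

```text
On the library manager:
  define library lib3584 libtype=scsi shared=yes
  define path manager1 lib3584 srctype=server desttype=library device=/dev/smc0
  checkin libvolume lib3584 search=yes status=scratch checklabel=barcode

On the library client:
  define library lib3584 libtype=shared primarylibmanager=manager1
  define devclass ltoclass devtype=lto library=lib3584
```

Note which side each command runs on: SHARED=YES and CHECKIN belong to the manager, LIBTYPE=SHARED with PRIMARYLIBMANAGER belongs to the client.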

It's all pretty well described in TSM Administrators Guide. It even
contains a step-by-step procedure for doing exactly what you are doing.
The only tricky part is knowing which TSM server (manager or client) you
need to run which commands on. Get that right and everything just works,
though when I did it, I had a strong feeling like I was the proverbial
restaurant waiter who pulls the tablecloth out without disturbing the
place setting.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
No one may throw an old computer across the street at their neighbor.
-- city ordinance, Warsaw, Indiana



On Sat, 20 Apr 2013, Remco Post wrote:

Haven't tried it, but I'm quite eager to hear from you once you're done. 
It doesn't seem to be very complicated to implement, and I've always wondered 
why TSM requires a restart when changing anything in the library configuration 
(or deleting all paths, all drives, and the library, and then rebuilding everything; 
imagine doing that on a LM or a TSM server that shares 18 drives with 9 NAS 
filers ;-) )

On 19 apr. 2013, at 15:11, Zoltan Forray zfor...@vcu.edu wrote:

 For this issue of tape library replacements/swaps (since we are about to do
 this), has anyone tried the AUDIT LIBRARY command with REFRESHSTATE=YES?  I
 stumbled upon this new option in this doc:

 http://www-01.ibm.com/support/docview.wss?uid=swg21203271

 On Thu, Apr 18, 2013 at 2:02 PM, Remco Post r.p...@plcs.nl wrote:

 On 18 apr. 2013, at 16:50, Heikel, Cory chei...@hmc.psu.edu wrote:

 I am trying to bring up a library manager to share my 3584 between 2 tsm
 servers. My question is this:

 How do I tell the library client that all the tapes for a certain
 storage pool are now contained, for lack of a better word, on the library
 manager? If it were on the same server I would just run a checkin, but I
 cannot run a checkin on a library client. Perhaps I am missing something
 fundamental?

 check the tapes into the library manager as private, then run an audit
 libr on the lib from the library client to take ownership.



 Cory L Heikel
 Business Continuity Team
 Hershey Medical Center
 (717)531-7972
 Next week there can't be any crisis. My schedule is already full Henry
 Kissinger

 --

 Met vriendelijke groeten/Kind regards,

 Remco Post
 r.p...@plcs.nl
 +31 6 24821622




 --
 *Zoltan Forray*
 TSM Software & Hardware Administrator
 Virginia Commonwealth University
 UCC/Office of Technology Services
 zfor...@vcu.edu - 804-828-4807
 Don't be a phishing victim - VCU and other reputable organizations will
 never use email to request that you reply with your password, social
 security number or confidential personal information. For more details
 visit http://infosecurity.vcu.edu/phishing.html

--

Met vriendelijke groeten/Kind regards,

Remco Post
r.p...@plcs.nl
+31 6 24821622



Re: TSM 6.3.3

2013-03-27 Thread Roger Deschner
Having just done an upgrade of a 120 GB TSM 5.5 server to 6.2, IBM's
time estimates were surprisingly accurate. The process was long and
labor intensive, but in the end it worked. You can see my notes in the
archives of this list from the end of December.

We're even considering altering our strategy for our oldest 5.5 server
with its bloated 300 GB database and 1000 clients. Plan A was to put up
the new V6 server alongside it and do exports and new backups, like
you're planning. That process is underway, but it may take a whole year,
and I want to junk the old hardware before then. Plan B is to get the
database of the 5.5 server down from 300 GB to about 150 GB by exporting
or doing new backups of the easy, large clients, and then do an upgrade
for the rest using the new system/media method to another instance on
the new hardware. The largest savings in my Plan B would be in not
having to change anything on most of those 1000 clients, which sit on
the desktops of professors, researchers, and deans, some of whose
offices I'd probably have to visit in person. That terrible prospect of
spending weeks walking around campus switching client nodes one at a
time, makes the long and somewhat excruciating TSM upgrade process look
like a walk in the park. Instead I will just have to change one DNS
definition after the upgrade is done.

In summary, do the upgrade. I think it will be easier for you.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Wed, 27 Mar 2013, Alex Paschal wrote:

Hi, Jerome.  I've found IBM's quote of 5GB/hr is pretty accurate across
a variety of hardware architectures, OSs, and disk arrays. Figure your
200GB database would take 40-ish hours to upgrade, possibly less if you
feel a reorg would shrink your database significantly.  That means you'd
probably miss two nights' backups, IF you left the old system down.
Here are some thoughts.

Migration takes a lot of work and time.  If you can possibly swing the
40-hour upgrade, I highly recommend it.  To help convince management,
figure the time you'd spend migrating and how much that would cost in
consultant time, vs 2-3 days of consultant time for just the upgrade.
Try to convince them your time isn't free and should be factored in
because, while you're migrating, there's other stuff you aren't doing.

If you use the New System/Media method, which is my personal preference,
you can even bring your 5.5 server back up after the extract is
complete, which means you can take backups and be able to do restores
for those two days.  This will not be possible if you use the Network
method.  This covers the "what if someone needs a restore during those
two days" and the "that leaves us unprotected for two days" complaints.
You would want to make sure your reusedelay is set correctly, though,
and don't bother with reclamation.  You would have to manually track
volumes that are sent offsite during those two days, but maybe you could
just skip the offsite process those two days.

If you can convince management to abandon that two days' worth of
backups to the old 5.5 server, e.g. shut down the 5.5 server, switch
production to the 6.x server, resume incrementals with a 2-day gap, this
is just as good because you don't have to do any migration. Again, save
time, effort, and years of premature aging.  Try using business phrases
like "migration cost not commensurate with the benefits", "migration
increases risk", "lost opportunity cost of the migration time", and any
other BS-Bingo terms you can repurpose for the fight against evil.

If you can't convince management to abandon that two days' worth of
backups, see if you can convince them there's only a few nodes that you
can't abandon.  If you can limit it to just a few nodes, you could do an
export node filedata=all fromdate= fromtime= todate= totime= to skim
only the data from the time of the extract and import it into the new
server.  That will drastically reduce the amount of data you have to
migrate.  If you took those backups to new media, like a new library
partition or something, or a bunch of spare disk (if there is such a
thing), you could do it server-to-server.  If not, simply export to
tape, shut down the 5.5 server, and import those tapes into the new server.
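
A skim-the-tail export of that sort looks something like this (the node name, date, time, and target server name are placeholders for illustration):

```text
export node PROD_NODE filedata=all fromdate=12/20/2012 fromtime=08:00:00 toserver=NEWTSM
```

Set the fromdate/fromtime to the moment of the extract, so only backups taken after that point are copied to the new server.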

If all of the above falls through, only then consider the migration.
shudder  But since you have only 60 servers, it might be worth
considering doing the export node fromdate/fromtime for all 60, rather
than migrating all that historical data.  One other thing - since you're
getting a new system, extract the DB, then test the upgrade a few times
before doing it for real.

Good luck, and may your management be amenable.

Oh, and for all you other TSM consultants out there (and my sales guys),
I apologize; I know those migration engagements are mighty lucrative.  :-)

Alex


On 3/27/2013 12:26 PM, Swartz, Jerome wrote:
 Thanks Erwann,

 Currently just

Second full DB backup triggered while first was still underway

2013-03-22 Thread Roger Deschner
The Archive Log got to 80% full, so it triggered a full TSM DB backup.
So far, so good.

However, as the first one was finishing, and before it could empty the
Archive Log, a second full TSM DB backup was triggered. For a short
while, there were two full TSM DB backups running. This second backup
was a pointless duplicate, and may possibly be corrupted. It appears
there is a window where the trigger does not know that a full DB backup
has just been run.

TSM Server 6.2.2.30 on AIX 5.3.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


Re: Second full DB backup triggered while first was still underway

2013-03-22 Thread Roger Deschner
The Active Log was never more than 2% full, and yes, I have an Archive
Failover Log, which was never written to and remained empty. Q DB F=D
shows blanks for Last Database Reorganization, which means it has never
been done? Hopefully never just since the last reboot. At any rate, DB
reorg was not what caused this to happen. (Odd, because our other
production V6.2 server does reorgs fairly often.)

What happened was that some client backed up 3,000,000 small files.
(That's a separate problem; it happens.) Since they were all small, the
Active Log never got pinned for more than a second or two, and the
storage pools never got full, but the Archive Log sure got hammered.

On our system, it appears that db2dump/db2diag.0.log is in the instance
owner ID's homedir, not the instance directory which I have set up to be
a subdir of its homedir. But anyway, I found it, at
/home/adsm_4/sqllib/db2dump/db2diag.0.log and it was very interesting.
(Among other things, it looks like it did its last DB reorg over a year
ago.) It appears from this log file that the first DB backup was
cleaning up and futzing with the Volume History File over and over, when
the second DB backup was triggered. I guess I'm lucky it only happened
twice.

Thanks Wanda and Skylar, it looks like we need to go to 6.2.5 soon. And
also increase the size of the main Archive Log because 3,000,000 small
files can happen again.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=



Skylar Thompson wrote:
We had the same issue until going to v6.3. DB reorgs were frequently the
cause of the problems. We had the ALLOWREORGTABLE server option off to
help prevent this, although there's a long-term performance hit from
that as well.

If you look in your TSM instance directory, you'll find a
db2dump/db2diag.0.log file that you can tail to confirm that. It'll log
every time reorganization starts and stops, along with other interesting
details.

-- Skylar Thompson (skylar2 AT u.washington DOT edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine

On 03/22/13 01:38 PM, Prather, Wanda wrote:

Could the active log also be filling because the archive log was full?
There are known issues with DB backup triggers in 6.2.2.  (although we're
Windows, I think the problem is the same)

We've seen cases where the server would just fire db backup after db backup
after dbbackup, because the active log was getting full.
As the db backup doesn't clear the active log anyway, it was pointless.
That's fixed in 6.2.5, we haven't seen it since the upgrade.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On 
 Behalf Of
Roger Deschner
Sent: Friday, March 22, 2013 2:01 PM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: [ADSM-L] Second full DB backup triggered while first was still 
 underway

The Archive Log got to 80% full, so it triggered a full TSM DB backup.
So far, so good.

However, as the first one was finishing, and before it could empty the 
 Archive
Log, a second full TSM DB backup was triggered. For a short while, there 
 were
two full TSM DB backups running. This second backup was a pointless 
 duplicate,
and may possibly be corrupted. It appears there is a window where the 
 trigger
does not know that a full DB backup has just been run.

TSM Server 6.2.2.30 on AIX 5.3.

Roger Deschner  University of Illinois at Chicago rogerd AT uic 
 DOT edu
==I have not lost my mind -- it is backed up on tape somewhere.=






Re: upgrade from 5.5.4 to 6.2.4

2013-02-28 Thread Roger Deschner
We did this, more or less. Our old V5.5.6 system was perfectly useable
until I wiped it out. The preparedb and extract processes did not harm
the 5.5.6 system at all.

We did it cold turkey, and the only change I made to your procedure
was a couple more database backups between steps. Search the archives
for my posts to this list December 20-27, 2012. Our experience was that
IBM's time estimates in the manuals are fairly close.

I would suggest a preliminary step of upgrading from 5.5.4 to 5.5.7.
There have been patches in the upgrade area.

Definitely make your target 6.2.5. Our newly-upgraded 6.2.3 system has
issues that appear to be fixed in 6.2.5.

Prepare yourself for it physically. Be rested. Stock the fridge at work
with sandwiches and soda. This is a marathon.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Wed, 27 Feb 2013, James Choate wrote:

We have done the same thing as well.
If you want more detail on the plan, I would do the following:

1. Only needs to be done once: load the upgrade utilities on the
server to be upgraded.

2. Back up the 5.5.4 database.

4. Extract the database. (When I extract the DB I always put the
manifest file on the /upgrade filesystem.)
4.1 Copy devconfig & volhist to /upgrade (grabs the devclass for the
extracted file).

5.1 Copy devconfig & volhist from /upgrade to /home/tsminst1 (user
name of the new 6.2.4 instance owner).
    Check permissions on the files.

5.2 dsmserv removedb TSMDB1
5.3 Clean up the directories that hold the DB2 database, DB2 log, DB2 log
mirror, DB2 archlog, and DB2 archive failover log by removing their files.
5.4 Format the database to meet sizing requirements.
6.1 SET DBRECOVERY
6.2 Back up the new TSM DB.

No idea what the dsmupgrd preparedb step does.

~james
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee, 
Gary
Sent: Wednesday, February 27, 2013 10:03 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] upgrae from 5.5.4 to 6.2.4

So far, sounds good.
This is my plan.

1. Run dsmupgrd preparedb on the 5.5.4 server.

2. Create a file devclass on the 5.5.4 server pointing to /upgrade.

3. mount a file system from the new tsm server onto /upgrade.

4. extract the db to /upgrade.

5. restart the 5.5.4 server for production work.

6. do the insert on the 6.2.4 server.

Does this look reasonable?

I have so far not found an explanation of what dsmupgrd preparedb does.

Any ideas?

Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Huebner, Andy
Sent: Wednesday, February 27, 2013 9:41 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] upgrae from 5.5.4 to 6.2.4

We did an upgrade from 5.4 to 6.2.
I used a copy (SAN trick) for the source of the upgrade.  I do not think the 
5.5 to 6.2 upgrade changed the copy, I did not check, but I did not see any 
writes to the disks.  The extract was written to a new disk.  We created a new 
devclass (file) to write the data to.
Our about 200GB DB took about 3 hours to extract, the insert to 6.2 took 
10+hours.  This was done on a P6.
We did 3 practice upgrades for each server.  Only 1 upgrade test failed 
because we did not get a clean copy.

You might consider 6.2.5 as your target.

Good Luck.

Andy Huebner


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee, 
Gary
Sent: Wednesday, February 27, 2013 8:20 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] upgrae from 5.5.4 to 6.2.4

I want to do a couple of practice upgrades of our tsm server v5.5.4 to a 
different box running v6.2.4.

Reading the upgrade guide, I have to do the

dsmupgrd preparedb

Then a

Dsmupgrd extractdb

I have not yet determined whether these will render the 5.5.4 server unusable?

Has anyone out there done this and if so, what should I expect?

Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310





Re: Versions for Web Client security hole

2013-02-06 Thread Roger Deschner
Markus, I wonder if you are confusing the two IBM TSM security notices
that were both sent on the same day. The other one, a denial-of-service
exposure in the Classic Scheduler, mentioned v5.5, 6.1, and 6.2, and it
also mentioned several easy workarounds. We circumvented it by SET
SCHEDMODE POLLING on all our TSM servers.

This one, involving unauthorized information disclosure in the Web
Client, did not mention those earlier versions. It is harder to deal
with, because there are no workarounds, it is a more serious issue, and
the only possible remediation is at the client level. Upgrading clients
to 6.3.1.0 or 6.4.0.1 to fix this, is not supported for Windows XP
clients (we still have a lot of XP clients) or V5.5 servers. Plus, it
involves the cooperation of clients, which can be difficult.

So, I still need to know if this affects 5.5, 6.1, or 6.2, because if it
does, I have a much larger number of clients to individually remediate.
Our clients are mostly 5.5 or 6.2.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
   Academic Computing & Communications Center
==I have not lost my mind -- it is backed up on tape somewhere.=


On Tue, 5 Feb 2013, Zoltan Forray wrote:

Where did you get this information?  When I read the Security Bulletin it
only addresses 6.3.x and 6.4.0.  Searching for patches I can only find
6.4.0.1 and 6.3.1.0, per the bulletin.  None of the older versions have
been updated.

2013/2/5 Markus Engelhard markus.engelh...@bundesbank.de

 Hi Roger,

 according to my infos, the vulnerability is reported in versions 5.5.0.0
 through 5.5.4.x, 6.1.0.0 through 6.1.5.x, 6.2.0.0 through 6.2.4.x, 6.3.0.x,
 and 6.4.0.0.

 Regards, Markus







--
*Zoltan Forray*
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html



Versions for Web Client security hole

2013-02-04 Thread Roger Deschner
In http://www-01.ibm.com/support/docview.wss?uid=swg21624118
(CVE-2013-0472), IBM warned us of a security exposure in the TSM Web
Client. That document says the vulnerable versions are 6.3.0.x and
6.4.0.0, and the fixing versions are 6.3.1.0 and 6.4.0.1.

It does not answer the question whether versions prior to 6.3 are
vulnerable, or if this exposure was a new one introduced in 6.3 which is
now fixed. That document implies, but does not say, that prior versions
are not included in this notice, but it does not definitively answer the
question whether they are also at risk.

To simplify my question, are client versions 5.5, 6.1, or 6.2 vulnerable
to this security issue?

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
   Academic Computing & Communications Center
==I have not lost my mind -- it is backed up on tape somewhere.=


Re: issue with lto devclass and mount limits

2013-01-10 Thread Roger Deschner
Did you redo the autodetect to detect the serial number of the new tape
drive?

 adsm> q drive libname drivename f=d  (note the serial number)
 adsm> update path adsm drivename srctype=server desttype=drive -
 cont> library=libname device=/dev/mtN autodetect=yes online=yes
 adsm> q drive libname drivename f=d  (serial number should have changed)

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Wed, 9 Jan 2013, Huebner, Andy wrote:

You probably already checked, but I would also check to make sure the drives 
and paths are online.

Andy Huebner

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Tim 
Brown
Sent: Wednesday, January 09, 2013 12:52 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] issue with lto devclass and mount limits

Using IBM 3310 LTO tape library for copy storage pools with TSM 6.3 Had recent 
issues with tapes and we had 1 drive replaced. Now we have updated all the 
firmware and it appears that there aren't any tape issues but cant get the 
backup stgpool to work.

ANR2017I Administrator ADMIN issued command: BACKUP STGPOOL oms_prim oms_copy 
ANR0984I Process 18 for BACKUP STORAGE POOL started in the BACKGROUND at 
13:50:06.
ANR2110I BACKUP STGPOOL started as process 18.
ANR1210I Backup of primary storage pool OMS_PRIM to copy storage pool OMS_COPY 
started as process 18.
ANR1228I Removable volume D:\OMS_PRIM1\BA02.BFS is required for storage 
pool backup.
ANR4743W An insufficient number of mount points are available in device class 
LTO_TAPE.
ANR1626I The previous message (message number 4743) was repeated 3 times.
ANR1217E BACKUP STGPOOL: Process 18 stopped - insufficient number of mount 
points available for removable media.
ANR0985I Process 18 for BACKUP STORAGE POOL running in the BACKGROUND 
completed with completion state FAILURE at 13:50:06.
ANR1893E Process 18 for BACKUP STORAGE POOL completed with a completion state 
of FAILURE.
ANR1214I Backup of primary storage pool OMS_PRIM to copy storage pool OMS_COPY 
has ended.  Files Backed Up: 0, Bytes Backed Up: 0, Unreadable Files: 0, 
Unreadable Bytes: 0.

tsm: TSMPOK_SERVER1> q devc lto_tape f=d

 Device Class Name: LTO_TAPE
Device Access Strategy: Sequential
Storage Pool Count: 8
   Device Type: LTO
Format: DRIVE
 Est/Max Capacity (MB): 1,536,000.0
   Mount Limit: DRIVES
  Mount Wait (min): 15
 Mount Retention (min): 0
  Label Prefix: ADSM
  Drive Letter:
   Library: TSMLIBR
 Directory:
   Server Name:
  Retry Period:
Retry Interval:
  Twosided:
Shared:
High-level Address:
  Minimum Capacity:
  WORM: No
  Drive Encryption: Allow
   Scaled Capacity:
   Primary Allocation (MB):
 Secondary Allocation (MB):
   Compression:
 Retention:
Protection:
   Expiration Date:
  Unit:
  Logical Block Protection: No
Last Update by (administrator): ADMIN
 Last Update Date/Time: 01/09/2013 06:39:24

Thanks,

Tim



Re: Happy Holiday TSM Server upgrade V5 to V6

2012-12-26 Thread Roger Deschner
Merry Christmas, or rather Happy Boxing Day, to all. I have a new V6 TSM
server under my tree. I used the media method, and it worked. Still
sorting out a number of lesser issues. With a 90 GB database, the extract
took 3.5 hours and the insert took 12.5 hours, for a rate of about
6 GB/hour, which is within the range the documentation says to expect.
The media was a devtype=file RAID array mounted via NFS, which worked
fine. The whole process took quite a bit longer, because of the work
necessary between the extract and the insert steps to redefine the disk
space from the V5 DB and logs for V6 use. In V5 we had been using Raw
Logical Volumes for DB and Log, and I had to define filesystems. Unix
permissions were also a pain, going from running V5 under the root user,
to V6 under a non-root user. No rocket science here; you just have to
slog through a whole bunch of routine Unix permission issues.

1. You will need more disk space than you thought. Have extra on hand.

2. Do not ignore the time it will take you for the manual steps. Add a
number of hours to your downtime window. IBM's time estimates for the
extract and insert steps are fairly accurate, but they do not take into
account your time performing manual steps.

3. The following is left out of the Upgrade manuals: Before extracting
the database, you must see document
http://www-01.ibm.com/support/docview.wss?uid=swg21443606

which says to do the following before extracting the database:

export DSMSERV_DIR=/usr/tivoli/tsm/upgrade/bin

4. I think I am seeing some V6.2.3 issues that are fixed in 6.2.5. I am
considering a quick update to V6.2.5. I stumbled across a GUID string
compare issue that was supposedly fixed in 6.2.2, described in APAR
IC68005. This prevented me from running dsmapipw to set the API password
for database backups. TSM Service directed me to the local fix described
in IC68005, which worked, which was to add DBMTRUSTEDGUIDIGNORE YES to
dsmserv.opt.

5. MEMORIZE the upgrade manual! Print the relevant chapter for your
scenario and check off the steps. About halfway through, I got tired and
my brain turned to jelly, and I forgot where I was. Having a paper
record of where you are will be very helpful. This TSM V6 upgrade thing
is a marathon. Do not embark on it unless you are very prepared,
organized, and well-rested.

This is the only time I will do this. Our other TSM5 server is being
upgraded by the replacement method, and its replacement is already up
and running, with clients being moved over piecemeal, some by export
and some by a new full backup. It was older hardware that would not have
been capable of running TSM6. The server I just upgraded in-place was
newer and is doing OK on TSM6. Comparing the two methods, I can
recommend this in-place upgrade if you have a large number of clients.
You don't have to change anything on the clients, of which this server
has about 1,000.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
== Chicago: What other great city would erect ==
=== its statue of Abraham Lincoln in Grant Park, and ===
= its statue of Ulysses Grant in Lincoln Park? =


On Tue, 25 Dec 2012, Lee Calhoun wrote:

Hi Roger,

I did a migration from TSM 5.5.2.0 to 6.2.2.2 back in 2011.  I tried to use the 
media method, but it did not work. I then learned of APAR IC61396, which said 
that our then-current version of TSM (5.5.2.0) did not support the media method.  
This was 18 months ago, and I don't know if that APAR has since been updated.  
Since you are at 5.5.1.0 you might also be affected.

At the time the consultant we were using told us in his experience most people 
use the network method for this type of upgrade.  I ended up doing the network 
method and it worked just fine.

Good luck with it in any case,

Lee
Lee Calhoun
Lead Systems Engineer,
Storage & Backups Group
Columbia University Information Technology
612 West 115th Street,  Room #703
Mail Code:  6001
New York NY 10025-7799
phone: 212-854-5119 fax: 212-662-6442
email: l...@columbia.edu

--

Date:Sun, 23 Dec 2012 08:53:59 +
From:Steven Langdale steven.langd...@gmail.com
Subject: Re: Happy Holiday TSM Server upgrade V5 to V6

Not me Roger, but good luck to you and I hope it goes well!

Keep us informed how you are getting on, I'd be interested in how long it
takes.

All the best, Steven.
On Dec 23, 2012 4:16 AM, Roger Deschner rog...@uic.edu wrote:

 Anybody else out there spending the holiday upgrading a TSM V5 server
 to TSM V6? The one I'm doing has a 150GB database, and I'm using the
 media method, so it's going to take a while. It's going from
 V5.5.1.0 to V6.2.3.0, on AIX 5.3. This was the only time I could get
 permission to bring this server down for several days.

 No huge problems yet; just wondering how many others out there are
 doing the same thing right now?

 Roger Deschner

Re: How to schedule new task in case of previous one completed

2012-12-24 Thread Roger Deschner
Not all commands have a WAIT=YES option available. If the command has
WAIT=YES, use it. It's really the easiest way. PITFALL: A TSM script
containing a long-running command with WAIT=YES should never be run from
a live console, not even for testing and debugging. It should only be
started by an administrative (T=A) schedule. If you accidentally start
it from a live dsmadmc console session, your terminal session will be
stuck until it finishes. Attempts to break out can result in an infinite
loop on your client machine. The last time I did this, my SSH client
crashed and it was ugly. Unix tricks like Ctrl-Z and bg don't work.

One command in particular that lacks WAIT=YES, which I have posted about
here in the past, is AUDIT VOL. I had to invoke it from a Unix script
that loops on a 60-second sleep, checking whether the process is still
running; once it is gone, the script queries the actlog to see if it
ended OK.

Crude, but it works.
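A hedged sketch of such a wrapper (the credentials, volume name, and the
ANR4133I message number are assumptions to verify against your own
server, not verbatim from the script described above):

```sh
#!/bin/sh
# Sketch: start a FIX=NO audit, poll QUERY PROCESS every 60 seconds
# until no audit process remains, then pull the result message from
# the activity log. Adjust credentials, volume, and search string.
dsmadmc -id=admin -pa=secret "audit volume /tsmpool/vol001 fix=no"
sleep 60
while dsmadmc -id=admin -pa=secret -comma "query process" | grep -qi audit
do
    sleep 60
done
# ANR4133I is assumed to be the audit summary message at this level.
dsmadmc -id=admin -pa=secret "query actlog begindate=today search=ANR4133I"
```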

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
=== Beware of Geeks bearing grifts. 


On Mon, 24 Dec 2012, Nick Laflamme wrote:

On Dec 24, 2012, at 12:31 PM, nkir tsm-fo...@backupcentral.com wrote:

 Thank you! I know about these options. But how exactly can I apply them?
 How can I check that the DB backup was successful so I can proceed to the next task?

 +--
 |This was sent by kniko...@gmail.com via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--

I'm not sure what you mean by these options, so I'll start with the obvious 
ones and let you tell us if these are what you already know.

Presumably you're talking about running a script as an administrative schedule.

Scripts can run in two modes: parallel and serial. If you want to make sure 
one command finishes before another starts, you must be in serial mode. Also, 
my observations make me doubt that serial mode is honored if a script calls 
another script.

A script can branch based on the return code from a command. One way to do it 
would be to branch for a successful backup, where the non-branch code-path 
would be a very general error routine. Alternately, you could branch on 
various return codes you expect to get and have different error routines for 
each error, but that is more than I do.

Finally, remember that many commands by default run in the background. BACKUP 
DB by default only tells you that it started. If you want to wait for it to 
finish and check its return code then, you need to use the WAIT=YES option on 
it. This is true for many long-running administrative commands.
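A hedged sketch of such a script (the script name, device class, and
label are hypothetical; IF/GOTO and symbolic return codes are standard
TSM server-script features, but verify the exact syntax against the
Administrator's Reference for your level):

```
define script dbb_daily "backup db devclass=dbfile type=full wait=yes"
update script dbb_daily "if (error) goto failed" line=10
update script dbb_daily "expire inventory wait=yes" line=15
update script dbb_daily "exit" line=20
update script dbb_daily "failed: issue message e 'DB backup failed'" line=25
```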

Nick


Happy Holiday TSM Server upgrade V5 to V6

2012-12-22 Thread Roger Deschner
Anybody else out there spending the holiday upgrading a TSM V5 server to
TSM V6? The one I'm doing has a 150GB database, and I'm using the media
method, so it's going to take a while. It's going from V5.5.1.0 to
V6.2.3.0, on AIX 5.3. This was the only time I could get permission to
bring this server down for several days.

No huge problems yet; just wondering how many others out there are doing
the same thing right now?

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
= While it may be true that a watched pot never boils, the one =
 you don't keep an eye on can make an awful mess of your stove. 
= -- Edward Stevenson ==


Re: Stable Version of TSM 6

2012-12-01 Thread Roger Deschner
I'm on the somewhat cautious side too. The latest 6.2.3 is good and
stable. V5.5 clients are still fully supported with a V6.2 server, as
are V6.3 clients, so it is a good bridge release that will allow you to
migrate clients later individually on a sensible schedule. And once
you're on 6.2.3, going beyond that will be easy.

Windows is OK for small servers and light loads. Windows still does not
scale well with heavy processing and I/O loads such as a TSM Server.
Those of us running larger TSM systems tend to prefer AIX.

TSM Server will run rather poorly in a virtual machine. It really needs
a real machine given the extremely high amount of I/O it does, by the
nature of the job it is doing. Wanda mentioned that tape is unsupported
in a VM, but even an all-disk system will run a lot better in a real
machine. OTOH, if you go with AIX, TSM Server is fine in an LPAR, even
with tape.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
   Academic Computing & Communications Center
==I have not lost my mind -- it is backed up on tape somewhere.=




On Sat, 1 Dec 2012, Welton, Charles wrote:

Hello:

I have a few questions about TSM 6.

# 1: Is there a STABLE version of 6 that would be recommended?  Right now, we 
are running TSM at version 5.5.2.0, but we are thinking about upgrading.  I am 
not interested in getting to the latest and greatest version.  I am 
interested in getting to a supported, stable version.

# 2: Based on the recommendation in question # 1, can we run that version of 
TSM 6 on Windows?  What about a VM?

Thank you...


Charles
This email contains information which may be PROPRIETARY IN NATURE OR 
OTHERWISE PROTECTED BY LAW FROM DISCLOSURE and is intended only for the use of 
the addresses(s) named above.  If you have received this email in error, 
please contact the sender immediately.



Re: AUDIT VOL with WAIT=YES ?

2012-11-08 Thread Roger Deschner
I wound up using Alex Paschal's method - thanks!

I am not worried (yet!) about damaged files. There are probably very few
of them anyway. I am only doing a FIX=NO audit to see where the problems
are. I'm capturing the results by Q ACTLOG ... SEARCH='PROCESS: '
and parsing msg ANR4133I for the audit results. If FIX=NO does not find
any damaged files, as reported in ANR4133I, then the disk file volume is
changed to READWRITE automatically. If the FIX=NO audit does find
something wrong, then I will deal with it manually and carefully.

I did not want to use Eric van Loon's suggestion of MOVE DATA, because
that would really fill up this storage pool, considering we use a reuse
delay. The advantage of AUDIT is that it does not cause any data
movement other than one read pass, and most of the volumes will not have
any problems.

Thanks again, to both Eric and Alex. That was the obvious solution - a
sleep loop querying processes after starting the AUDIT command. The
improvement I added was to get the actual results via Q ACTLOG after it
was finished.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Tue, 6 Nov 2012, Alex Paschal wrote:

Hi, Roger.  I don't have one already made, but this should get you
started.  Hopefully there won't be too many typos.

#!/usr/bin/ksh
# Start the audit, then poll QUERY PROCESS until no audit process remains.
dsmadmc -id=id -pa=pa "audit vol /path/file1"
sleep 10
while dsmadmc -id=id -pa=pa -comma q pr | grep -qi audit ; do
    sleep 10
done
# Audit finished; put the volume back into read/write.
dsmadmc -id=id -pa=pa "update vol /path/file1 acc=readw"

On 11/6/2012 3:17 PM, Roger Deschner wrote:
 Does anybody have a script or program that can issue a TSM AUDIT VOLUME
 command and wait for it to finish - as though WAIT=YES existed?

 I keep getting r/o vols in my DEVCLASS FILE storage pools. I want to
 audit them before changing them back to r/w. I want an automatic process
 to do that, one at a time.

 I could have set up this storage pool with preallocated files instead of
 letting the operating system allocate and remove scratch volume files,
 but dsmfmt on 55TB of space to prepare the fixed volume files would take
 a very long time, like about a CPU-year.

 Roger Deschner  University of Illinois at Chicago rog...@uic.edu
 ==I have not lost my mind -- it is backed up on tape somewhere.=




AUDIT VOL with WAIT=YES ?

2012-11-06 Thread Roger Deschner
Does anybody have a script or program that can issue a TSM AUDIT VOLUME
command and wait for it to finish - as though WAIT=YES existed?

I keep getting r/o vols in my DEVCLASS FILE storage pools. I want to
audit them before changing them back to r/w. I want an automatic process
to do that, one at a time.

I could have set up this storage pool with preallocated files instead of
letting the operating system allocate and remove scratch volume files,
but dsmfmt on 55TB of space to prepare the fixed volume files would take
a very long time, like about a CPU-year.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


Re: consolidating file type primary pools

2012-10-23 Thread Roger Deschner
Yes, that's how to do it. Manipulating the list of directories in the
device class is an essential part of managing a sequential file disk
storage pool. I change mine fairly often to reflect hardware changes or
any other issues.

After you update the device class to remove the one you want to be
read-only (E), and if you're not in a hurry, you can let movement from E
to D happen by itself with normal reclamation. In my experience, if I can
wait a week, half of the .bfs files will have walked over by themselves.
(YMMV!) Then use MOVE DATA for the rest, once you get tired of waiting
for reclamation. It is certainly the lazy way to move this kind of data,
reducing the need for MOVE DATA processes.
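In command form, the sequence above amounts to roughly the following
(the device class name, volume name, and paths are hypothetical):

```
/* Drop E: from the directory list; volumes already there stay readable */
update devclass filedev directory=D:\DEVT_PRIM
/* Later, move whatever reclamation has not emptied off the E: drive */
move data E:\DEVT_PRIM\00000A1B.BFS wait=yes
```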

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Tue, 23 Oct 2012, Shawn Drew wrote:

Yes, just update the device class  to only have the directory you want to
write to.  You will still be able to read from volumes that were on the E
drive.  Then just move data to clear off the E drive.

Regards,
Shawn

Shawn Drew





Internet
tbr...@cenhud.com

Sent by: ADSM-L@VM.MARIST.EDU
10/23/2012 04:41 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
[ADSM-L] consolidating file type primary pools






Say you have a file based storage pool, with the devclass coded as

D:\DEVT_PRIM,E:\DEVT_PRIM

Over time they accumulate a number of .bfs files in each, and then due
to application retirement or other circumstances you don't need all
the space that this consumes. Is there a way to move the .bfs files
from E:\DEVT_PRIM to D:\DEVT_PRIM?

Can you update the DEVC to just D:\DEVT_PRIM and then use MOVE DATA to
move the .bfs files from E:\DEVT_PRIM to D:\DEVT_PRIM?

Thanks,

Tim Brown
Supervisor Computer Operations
Central Hudson Gas & Electric
284 South Ave
Poughkeepsie, NY 12601
Email: tbr...@cenhud.com
Phone: 845-486-5643
Fax: 845-486-5921
Cell: 845-235-4255


This message contains confidential information and is only for the
intended recipient. If the reader of this message is not the intended
recipient, or an employee or agent responsible for delivering this message
to the intended recipient, please notify the sender immediately by
replying to this note and deleting all copies and attachments.



This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that certain
functions and services for BNP Paribas may be performed by BNP Paribas RCC, 
Inc.



Re: Nodes per TSM server

2012-10-11 Thread Roger Deschner
Size of node can be misleading. Number of objects (sum of files,
subdirectories, symlinks) is more important than sheer quantity of data.
This is what takes up space in the database. We've got nodes with huge
quantities of data in a relatively small number of files, which are no
problem for TSM. We've got other nodes (e.g. Blackboard server) with
tens of millions of files that are a constant headache, especially when
the need to restore comes.

TSM Version 6 scales much better than Version 5. The DB2 database really
does work better under heavy load with a large number of node files. The
maximum practical database size is much larger on V6 than on V5.
Consolidating several V5 servers onto a single V6 server is good.

Also, the largest TSM servers are predominantly on AIX systems, not
Windows or Linux. The AIX platform scales better than others. See the
poll on adsm.org.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Wed, 10 Oct 2012, Vandeventer, Harold [BS] wrote:

Thanks to everyone.

The comments about DB limits and size of nodes were in my thoughts, but didn't 
make it to my original post.

I have a collection of TSM V5, on older Windows boxes running W2k3.  New 
gear is much more powerful, Win2k8, and running TSM V6, so there is some room 
to grow.  I'm keeping detailed stats on how long it takes to get through the 
backup window and then through overnight processing tasks.

Thanks again.


Harold Vandeventer
Systems Programmer
State of Kansas - Office of Information Technology Services
harold.vandeven...@ks.gov
(785) 296-0631


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Ian 
Smith
Sent: Wednesday, October 10, 2012 3:47 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Nodes per TSM server

harold,

just for reference, we have more than 1000 nodes on some servers - these are v5 
servers migrated to v6 and *not* consolidated. However, the average occupancy 
and number of objects for each node is probably much smaller than many other 
sites - we are an academic site with lots of smaller clients (research, admin, 
learning materials, etc) with a very limited number of versions and retention 
period.
Wanda is correct - the real limitations are size of DB (related to the number 
of objects stored) and I/O. Memory, cpu and fast disks will all have a 
bearing on what you can achieve - there are published IBM recommendations on 
memory (and CPU ?) - search under 'server requirements'.

Regards
Ian Smith
IT Services, | University of Oxford


From: ADSM: Dist Stor Manager [ADSM-L@VM.MARIST.EDU] on behalf of Christian 
Svensson [christian.svens...@cristie.se]
Sent: 09 October 2012 07:06
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] SV: Nodes per TSM server

Hi Harold,
Everything depending on your hardware.
But I have everything from 300 nodes up to approx. 500 nodes, and every TSM 
Server is running on Wintel.

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se

Secure restores.



-Original Message-
From: Vandeventer, Harold [BS] [mailto:harold.vandeven...@ks.gov]
Sent: 8 October 2012 22:01
To: ADSM-L@VM.MARIST.EDU
Subject: Nodes per TSM server

There are all kinds of measures involved in setting up a TSM server; 
processor, RAM, disk I/O, stg pool design, reclamation, migration, all the 
bits and pieces.

But, I'm curious about how many nodes some of you have on your TSM servers?

I'm in a Windows environment, and have been tasked with consolidating.

Also, about how much memory is on those systems.

Thanks.


Harold Vandeventer
Systems Programmer
State of Kansas - Office of Information Technology Services 
harold.vandeven...@ks.gov
(785) 296-0631


[Confidentiality notice:]
***
This e-mail message, including attachments, if any, is intended for the person 
or entity to which it is addressed and may contain confidential or privileged 
information.  Any unauthorized review, use, or disclosure is prohibited.  If 
you are not the intended recipient, please contact the sender and destroy the 
original message, including all copies, Thank you.
***



Re: unaccounted for volumes

2012-10-06 Thread Roger Deschner
This just happens, for a variety of reasons, ranging from human error to
the normal operation of Murphy's Law.

Periodically (about once per week) I run an audit program (written in
SPSS) that compares the lists of libvols, volumes, and database backup
(only!) entries in volhist, just to make sure everything is OK. I don't
want orphan tapes, but I really don't want anything to be claimed twice.
I investigate anything it flags. When I find what I can confirm to be an
orphan tape, I usually just UPDATE LIBVOL lib vol STATUS=SCRATCH. It is
not necessary to check it out and back in, unless of course the problem
is that it wasn't labeled correctly in the first place.
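A cross-check along these lines can be approximated with the server's
own SQL interface (V6 table and column names; note that database backup
and export tapes live only in the volume history, so they will appear in
this result and must be excluded by hand before declaring an orphan):

```
select volume_name from libvolumes
 where status='Private'
   and volume_name not in (select volume_name from volumes)
```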

This audit procedure got a lot more complex, and also a lot more vital,
after we went to a Library Manager environment. Now there are a lot more
ways something could go wrong.

Everybody must do a periodic audit like the one Gary Lee just did. Make
sure you get it right. Cheers!

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
 Help stamp out and abolish redundancy. 



On Fri, 5 Oct 2012, Bob Levad wrote:

The most obvious reasons would be a tape operator running a check-in script 
instead of a label script or checking in with a status=private instead of 
status=scratch.

TSM will not initialize a tape with data it knows about, so checking them out 
and labeling or checking back in as scratch (whichever is appropriate) will 
work.  If you check everything out with remove=no, and checkin scratch first 
and then private, you should get back all the volumes that have been 
initialized.  If you are still missing some, they probably haven't been 
initialized.

As for the volhist, a volume isn't listed there until it is actually used for 
data of some kind.

Bob.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee, 
Gary
Sent: Friday, October 05, 2012 1:36 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] unaccounted for volumes

Tsm server 6.2.2 running on RHEL v6 with a 3494 library and 8 ts1120 drives.

Was doing some checking on tape inventory and usage for one of our tsm servers.

A

Select count(*) from libvolumes

Shows 298 volumes in the library.

Checking volhistory for database backup volumes shows 3 tapes in that capacity.

Check of pending volumes shows 1

And summing up all tape pool volumes gives 118.

A
Select count(*) from libvolumes where status='Private'

Shows 160
And
Select for scratch volumes gives 137.

The upshot is that I have around 40 volumes in the libvolumes table listed as 
private, but not accounted for in the volumes or volhistory.
Audit library yields no errors.

Can I just check these out and check them back in as scratch?

What would account for this?

p.s.

I have the volhist file from the moment the server was placed online.  The 
lost volumes do not appear there anywhere.

Thanks for any help.


Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310


This electronic transmission and any documents accompanying this electronic 
transmission contain confidential information belonging to the sender. This 
information may be legally privileged. The information is intended only for 
the use of the individual or entity named above. If you are not the intended 
recipient, you are hereby notified that any disclosure, copying, distribution, 
or the taking of any action in reliance on or regarding the contents of this 
electronically transmitted information is strictly prohibited.



Re: Slow backup

2012-08-07 Thread Roger Deschner
We've got a similar beast. The problem is the sheer number of files,
which the client must keep a list of, in order to decide which files
need to be backed up. This is the whole idea behind TSM's Progressive
Incremental backup model. We have found that as file counts reach the
many millions, backup (and restore!) performance degrades exponentially.

One fact you must face is that a filespace with 21,000,000 files almost
cannot be restored. We tried it with 18,000,000 and it counted and
sorted its list for about a day before restoring any files - slowly. The
restore would have taken many days when we stopped it to reconsider.
There is no restore equivalent of MEMORYEFFICIENTBACKUP YES or
MEMORYEFFICIENTBACKUP DISKCACHEMETHOD. The ability to restore is
the whole point of backup, so take restore issues seriously.

The only workable answer we have found, without drastically changing the
thing you are backing up, is Virtual Mount Points. The idea is to keep
the file count in each Virtual Mount Point low enough that the client
can work efficiently.
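On a Unix client these are dsm.sys options in the server stanza; a
sketch with hypothetical paths (each virtual mount point then becomes
its own filespace, with its own smaller file list):

```
VIRTUALMOUNTPOINT /ip/groupa
VIRTUALMOUNTPOINT /ip/groupb
VIRTUALMOUNTPOINT /ip/groupc
```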

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
   Academic Computing & Communications Center
==I have not lost my mind -- it is backed up on tape somewhere.=



On Mon, 6 Aug 2012, Arbogast, Warren K wrote:

There is a Linux fileserver here that serves web content. It has 21 million 
files in one filesystem named /ip. There are over 4,500 directories at the 
second level of the filesystem. The server is running the 6.3.0.0 client, and 
has 2 virtual cpus and 16 GB of RAM.  RESOURCEUTILIZATION is set to 10, and 
currently there are six client sessions running.  I am looking for ways to 
accelerate the backup of this server since currently it never ends.

The filesystem is NFS mounted so a journal based backup won't work. Recently, 
we added four proxy agents, and are splitting up the one big filesystem among 
them using include/exclude statements. Here is one of the agent's 
include/exclude files.

exclude /ip/[g-z]*/.../*
include /ip/[a-f]*/.../*

- Since we added the proxies, the proxy backups are copying many thousands of 
files, as if this were the first backup of the server as a whole. Is that 
expected behavior?

- Recently, the TSM server database is growing faster than it usually does, 
and I'm wondering whether there could be any correlation between the ultra 
long running backup, many thousands of files copied, and the faster pace of 
the database growth.

- The four proxies haven't made a big difference in the run time of the 
backup. Could something else be done to speed it up?

Thank you,
Keith Arbogast
Indiana University





TSM on Windows 8 and Mac Mountain Lion

2012-08-03 Thread Roger Deschner
We're starting to see people on Mac OS X 10.8 Mountain Lion, and we
will very soon start having people on Windows 8. What is the status of
these two newest OSs as TSM client systems?

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
   Academic Computing & Communications Center
==I have not lost my mind -- it is backed up on tape somewhere.=


Re: Do you perform Windows SYSTEMSTATE backups?

2012-07-25 Thread Roger Deschner
We do not back up the Windows System State. I run a program to find
such backups, delete them, and change the cloptset for offending nodes
to one that forbids them. The problem started with Vista, and it hit us real
hard. It's a problem that keeps on giving, as people gradually upgrade
from XP to Win7 and from Server 2003 to Server 2008.

A pet peeve of mine in this regard is that XP calls it System Object,
and TSM very inflexibly enforces this semantic detail. If you assign a
Vista+ node to a cloptset that contains DOMAIN -SYSTEMOBJECT you get a
fatal error and no backup. If you assign an XP node to a cloptset that
contains DOMAIN -SYSTEMSTATE you also get a fatal error. TSM should
accept SYSTEMSTATE and SYSTEMOBJECT as aliases for one another. This
would save me a ***___HUGE___*** amount of time and effort.
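A client option set that enforces this could look something like the
following (the set and node names are hypothetical; FORCE=YES keeps the
client's own dsm.opt from overriding the option):

```
define cloptset nosysstate description="Forbid system state backup"
define clientopt nosysstate domain "-systemstate" force=yes
update node somenode cloptset=nosysstate
```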

Whether or not you can handle it depends on the TSM server version as
well as the client version. In general, neither V5 servers
nor V5 clients can handle System State backup, while V6.2.3+ servers can
deal with it reasonably well if the client is also V6.2.3+. This is
IBM's recommendation.

We have never found the System Object/State backup to be useful for a
desktop node restore. OTOH if the node is a server, we have started to
back up the System State to our new V6.2.3 server using V6.2.4+ clients.
We continue to absolutely forbid System State backups on our old V5.5
servers. They simply cannot handle it.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
My business, is to teach my aspirations to conform themselves to fact,
not to try and make facts harmonize with my aspirations.
-- Thomas Huxley, 1860


On Wed, 25 Jul 2012, Zoltan Forray wrote:

We are constantly seeing problems with Windows VSS and systemstate
backups.  The recent black Tuesday updates have caused numerous Windows
servers to start having backup problems where they previously worked just
fine.

In most cases they require the VSS hotfixes/fixpacks, etc.  The quickest
fix is to simply stop systemstate backups since VSS patches usually require
reboots.

I brought up the issue of whether we have ever restored the systemstate or any
objects within it, and the response was No, Never.

So I am wondering, how many folks here who backup Windows servers, include
SYSTEMSTATE backups and why?

--
*Zoltan Forray*
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html



Re: how to restore files from TSM backup tapes w/o TSM db

2012-07-21 Thread Roger Deschner
It won't be cheap, but an Index Engines machine can do this. We have one
that we use for e-discovery. It can actually read TSM backup tapes
without a TSM DB. http://www.indexengines.com/

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
Walking on water and developing software from a specification are
easy if both are frozen.
- Edward V Berard


On Fri, 20 Jul 2012, Shawn Drew wrote:

TSM is not designed to do this and there is no documentation on recovering
data without the appropriate database.  This is why the manuals stress the
importance of protecting the database.

That said, there are binaries out there, which I have seen on SourceForge
or Google Code, that can dump the contents of an ADSM/TSM tape.  I never
tried it myself and it would definitely take some time to figure out.

There are also some archiving/recovery companies that are able to read TSM
tapes for indexing and restoring using their proprietary software.

Regards,
Shawn

Shawn Drew





Internet
tsm-fo...@backupcentral.com

Sent by: ADSM-L@VM.MARIST.EDU
07/11/2012 11:10 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
[ADSM-L] how to restore files from TSM backup tapes w/o TSM db






Hi all,
We had a monthly backup job in our TSM system and the data must be
kept forever without purging the TSM database.  Recently, we found that we
couldn't restore some users' database in 2010.  During the troubleshooting
process, we further found that some users' backup records for 2008, 2009,
2010 were missing in TSM database.  We reported the case to IBM and they
said that the data could not be restored without a healthy TSM database.

We have all monthly backup tapes on-hand.  Would you all kindly advise me
how to restore the files in the monthly backup tapes without using the TSM
database?

Thanks a lot.

Regards,
KK

+--
|This was sent by kcheu...@yahoo.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--






Re: UN-mixing LTO-4 and LTO-5

2012-06-13 Thread Roger Deschner
Thanks Remco and Wanda! I am proceeding to define two logical libraries.

We do have nicely arranged volsers. However, with the TSM STK SL500
library driver we do not see the L4 or L5 suffix. We just see XXX999. So
I'm going to change the XXX part to be distinctive between LTO4 and LTO5
just to reduce the chances that _I_ will make a mistake during checkin.
At least the cartridges are different colors.

After several false starts integrating LTO5 (which was only physically
installed a week ago), I'm starting over for about the 3rd time, towards
making LTO5 work in our operation in a rational way. It looks like two
logical libraries is indeed the way to go. That way, both Library Client
servers can actually access all tapes, unlike some of my other ideas
such as taking paths offline and online under control of some
likely-to-misbehave daemon.

The fact that this is a Library Manager setup has not added technical
hurdles, though it has definitely made it more complicated.

BTW, mounting LTO4 tapes in LTO5 drives is supposed to work, and most of
the time it does. However, we have experienced a much higher I/O error
rate when LTO4 tapes are mounted in LTO5 drives, compared to mounting
the same LTO4 tapes in LTO4 drives. (All drives are HP.) This is another
reason I want to prevent it, in addition to keeping the LTO5 drives free
for LTO5 tape operations.

I came into this upgrade assuming everything would just work, according
to the TSM documentation, STK documentation, and the LTO consortium
website. It doesn't.

This list remains a priceless resource, to which I try to give back.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu


On Wed, 13 Jun 2012, Prather, Wanda wrote:

Hi Roger,
I've done this many times (creating 2 logical libraries in 1 physical) to 
separate LTO types at various customers.
I don't know anything dangerous about it.  (Granted, I've never tried to have 
the 2 logical libraries run a CHECKIN at EXACTLY the same time, but I don't 
see that as being a likely occurrence.)

If you do a search=yes (or search =bulk) with no other parms, then yes indeed, 
the checkin for that logical library will grab all the tapes that are 
available for checkin.

However, if you have nicely arranged numeric volsers (see the manual for 
requirements, alpha prefix and alpha suffix are allowed), you can use VOLRANGE 
and automate everything.

For example, you can throw a mixture of carts into the bulk I/O door, then run

CHECKIN LIBV SCSILIB4 SEARCH=BULK CHECKLABEL=BARCODE VOLRANGE=XXX000L4,XXX999L4
CHECKIN LIBV SCSILIB5 SEARCH=BULK CHECKLABEL=BARCODE VOLRANGE=YYY000L5,YYY999L5

The first checkin will pick up all the XXXnnnL4's, and the second checkin will 
pick up all the YYYnnnL5's.

If your volsers are all alpha, you'll probably need to have your operators put 
the L4's in the I/O door and run that checkin, then put the L5's in the I/O 
door and run the L5 checkin.

You won't run into any worse situation than what you have today, with both 
types of carts mixed in the same logical library.  The worst thing that's 
going to happen, is that you get the wrong tapes checked into the library, 
which means TSM can try to mount LTO5 carts in the LTO4 drives, and you'll get 
an I/O error when it tries to read or write.  If you have LTO4 and LTO5 
carts in the same library today, and you run an audit library with 
checklabel=yes, I think you are subject to the same problem, unless you take 
the LTO4 drives offline.  With separated logical libraries, you won't have 
those issues.

You can't get any data damage, TSM still won't let you overwrite any data it 
shouldn't, even if the tape is checked into the wrong library, as both 
libraries and all data volumes belong to the same TSM DB.  So it's a totally 
harmless thing to try, as far as I know.

It's really no different than having 2 separate physical libraries attached to 
a TSM server; somebody can still throw the wrong cartridges into the I/O door!

I'm not sure if that was the answer you were looking for, if you have other 
specific questions or situations, feel free to mail back -
Wanda



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Roger 
Deschner
Sent: Tuesday, June 12, 2012 6:27 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] UN-mixing LTO-4 and LTO-5

Defining separate libraries had occurred to me, but it seems like it could 
also be dangerous, if both logical libraries were to try to check in the same 
volume using a search. How do I avoid that? That is, if I create two 
libraries, one of them with the LTO-4 drives and the other with the LTO-5 
drives, both libraries will have access to all the tape slots. Does this mean 
I cannot use CHECKIN LIBVOL SEARCH=YES but instead I should check in new 
volumes by name? Would this be similar to sharing a physical library with 
another application, except that the other application happens to be the same 
TSM server?

This configuration

UN-mixing LTO-4 and LTO-5

2012-06-12 Thread Roger Deschner
We have a tape library (Oracle/Sun/STK SL500) which contains both LTO-4
and LTO-5 drives, and both LTO-4 and LTO-5 media. I am trying to keep
TSM from mounting an LTO-4 cartridge in an LTO-5 drive, but it is
insisting on doing it anyway.

We have 4 LTO-4 drives and 3 LTO-5 drives. The mount limits for the two
devclasses are set accordingly - to 4 for the LTO-4 devclass and 3 for
the LTO-5 devclass. When a request to mount an LTO-4 cartridge comes, it
seems to use any of the 7 drives, regardless of whether it is an LTO-4
or LTO-5 drive. Therefore some tape mounts for LTO-5 cartridges are
failing or being delayed due to there being no available LTO-5 drives
when some of them are occupied by LTO-4 tapes. This is despite the claim
in the section of the TSM Administrator's Guide for AIX servers titled
Mount limits in LTO mixed-media environments (on book page 221 /
physical page 249 in TSM V6.2 for AIX, or book page 198 / physical page
232 at V6.3) that setting the mountlimit to the actual number of
earlier-generation drives will prevent the use of later-generation
drives for the earlier-generation devclass.

BTW, this is a library manager configuration, which may complicate
things. The devclass definitions do match on the library manager and
both of its library clients. The library manager is at 6.2.2.30, and its
two clients are at 5.5.6.0 and 6.2.2.30.

So, the question is, how do I prevent LTO-4 cartridges from being
mounted in LTO-5 drives? I would prefer not to use the hardware library
partitioning feature, which has its own set of hassles.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
 NO OVERNIGHT CAMPING AUTOMATIC SPRINKLER SYSTEM IN OPERATION 
 --sign, I-70 rest area, Parachute, Colorado ===


Re: UN-mixing LTO-4 and LTO-5

2012-06-12 Thread Roger Deschner
On the LTO4 devclass it is ULTRIUM4C. On the LTO5 devclass it is
ULTRIUM5C.

Q DRIVE shows that the LTO-5 drives can both read and write the format
ULTRIUM4C.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu


On Tue, 12 Jun 2012, Remco Post wrote:

check the format= option on your device class...

On 12 jun. 2012, at 08:02, Roger Deschner wrote:

 We have a tape library (Oracle/Sun/STK SL500) which contains both LTO-4
 and LTO-5 drives, and both LTO-4 and LTO-5 media. I am trying to keep
 TSM from mounting an LTO-4 cartridge in an LTO-5 drive, but it is
 insisting on doing it anyway.

 We have 4 LTO-4 drives and 3 LTO-5 drives. The mount limits for the two
 devclasses are set accordingly - to 4 for the LTO-4 devclass and 3 for
 the LTO-5 devclass. When a request to mount an LTO-4 cartridge comes, it
 seems to use any of the 7 drives, regardless of whether it is an LTO-4
 or LTO-5 drive. Therefore some tape mounts for LTO-5 cartridges are
 failing or being delayed due to there being no available LTO-5 drives
 when some of them are occupied by LTO-4 tapes. This is despite the claim
 in the section of the TSM Administrator's Guide for AIX servers titled
 Mount limits in LTO mixed-media environments (on book page 221 /
 physical page 249 in TSM V6.2 for AIX, or book page 198 / physical page
 232 at V6.3) that setting the mountlimit to the actual number of
 earlier-generation drives will prevent the use of later-generation
 drives for the earlier-generation devclass.

 BTW, this is a library manager configuration, which may complicate
 things. The devclass definitions do match on the library manager and
 both of its library clients. The library manager is at 6.2.2.30, and its
 two clients are at 5.5.6.0 and 6.2.2.30.

 So, the question is, how do I prevent LTO-4 cartridges from being
 mounted in LTO-5 drives? I would prefer not to use the hardware library
 partitioning feature, which has its own set of hassles.

 Roger Deschner  University of Illinois at Chicago rog...@uic.edu
  NO OVERNIGHT CAMPING AUTOMATIC SPRINKLER SYSTEM IN OPERATION 
  --sign, I-70 rest area, Parachute, Colorado ===

--
Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622



Re: UN-mixing LTO-4 and LTO-5

2012-06-12 Thread Roger Deschner
Defining separate libraries had occurred to me, but it seems like it
could also be dangerous, if both logical libraries were to try to check
in the same volume using a search. How do I avoid that? That is, if I
create two libraries, one of them with the LTO-4 drives and the other
with the LTO-5 drives, both libraries will have access to all the tape
slots. Does this mean I cannot use CHECKIN LIBVOL SEARCH=YES but instead
I should check in new volumes by name? Would this be similar to sharing
a physical library with another application, except that the other
application happens to be the same TSM server?

This configuration of two logical library definitions for one physical
library would appear to have some risks. Or is it safe? Are there any
other problems I should expect with this kind of configuration?

Roger Deschner  University of Illinois at Chicago rog...@uic.edu



On Tue, 12 Jun 2012, Remco Post wrote:

did you define separate libraries for those device classes?

On 12 jun. 2012, at 08:15, Remco Post wrote:

 check the format= option on your device class...

 On 12 jun. 2012, at 08:02, Roger Deschner wrote:

 We have a tape library (Oracle/Sun/STK SL500) which contains both LTO-4
 and LTO-5 drives, and both LTO-4 and LTO-5 media. I am trying to keep
 TSM from mounting an LTO-4 cartridge in an LTO-5 drive, but it is
 insisting on doing it anyway.

 We have 4 LTO-4 drives and 3 LTO-5 drives. The mount limits for the two
 devclasses are set accordingly - to 4 for the LTO-4 devclass and 3 for
 the LTO-5 devclass. When a request to mount an LTO-4 cartridge comes, it
 seems to use any of the 7 drives, regardless of whether it is an LTO-4
 or LTO-5 drive. Therefore some tape mounts for LTO-5 cartridges are
 failing or being delayed due to there being no available LTO-5 drives
 when some of them are occupied by LTO-4 tapes. This is despite the claim
 in the section of the TSM Administrator's Guide for AIX servers titled
 Mount limits in LTO mixed-media environments (on book page 221 /
 physical page 249 in TSM V6.2 for AIX, or book page 198 / physical page
 232 at V6.3) that setting the mountlimit to the actual number of
 earlier-generation drives will prevent the use of later-generation
 drives for the earlier-generation devclass.

 BTW, this is a library manager configuration, which may complicate
 things. The devclass definitions do match on the library manager and
 both of its library clients. The library manager is at 6.2.2.30, and its
 two clients are at 5.5.6.0 and 6.2.2.30.

 So, the question is, how do I prevent LTO-4 cartridges from being
 mounted in LTO-5 drives? I would prefer not to use the hardware library
 partitioning feature, which has its own set of hassles.

 Roger Deschner  University of Illinois at Chicago rog...@uic.edu
  NO OVERNIGHT CAMPING AUTOMATIC SPRINKLER SYSTEM IN OPERATION 
  --sign, I-70 rest area, Parachute, Colorado ===

 --
 Met vriendelijke groeten/Kind Regards,

 Remco Post
 r.p...@plcs.nl
 +31 6 248 21 622

--
Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622



Re: There are not enough scratch volumes available

2012-06-02 Thread Roger Deschner
Happens to me too, and it is a manageable situation.

Look carefully for unavailable or readonly tapes. Watch I/O errors. I
once had a flaky drive, and it would mount a scratch tape, start to
write on it, have an I/O error, make it readonly, and mount another.
This consumed 25 scratch tapes in only a couple of hours, and then none
were left and it was a crisis. (during a vacation, of course) You need
to track all tape I/O errors, so you can identify either media or drives
that are going bad and need to be replaced.

Our reusedelay is set to 2 days, and I know I can always delete any
pending tapes that became pending before the most recent database
backup. This is my emergency scratch tape supply.

I am currently managing a very full library, and I am finding that I can
always keep scratch tapes available by limiting MAXSCRATCH to less than
library capacity. This causes the gradual elimination of collocation as
the library fills, but it keeps the lights on until the library can be
expanded. I tweak MAXSCRATCH frequently.

With some effort, full library situations can be managed. I've been
doing it for quite a while due to delays in approving upgrades.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
   Academic Computing & Communications Center
 NO OVERNIGHT CAMPING AUTOMATIC SPRINKLER SYSTEM IN OPERATION 
 --sign, I-70 rest area, Parachute, Colorado ===




On Fri, 1 Jun 2012, Richard Sims wrote:

As an administrator, you need to perform the following regularly:
 'Query Volume ACCess=UNAVailable,DESTroyed'
 'Query Volume ACCess=READOnly STATus=FIlling'
 'Query Volume ACCess=READOnly STATus=EMPty'
 'Query Volume STatus=PENding Format=Detailed'
to find tapes which TSM has given up on, as per messages like ANR1411W.
Also check for tapes which may have been dedicated via Define Volume and which 
might be put back into the scratch pool.
This is to say that you may be artificially out of immediately usable scratch 
tapes, but that potentially usable tapes may be identified and rendered 
candidates once again.
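Once such a tape is identified, returning it to service is straightforward (volume and library names below are hypothetical placeholders):

   UPDATE VOLUME A00123 ACCESS=READWRITE         (re-enable a READONLY tape)
   DELETE VOLUME A00124                          (remove an empty dedicated volume from its pool)
   UPDATE LIBVOLUME LIB1 A00124 STATUS=SCRATCH   (return it to the scratch pool)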

You should not check primary storage tapes out of a library except in dire 
circumstances, and then judiciously.  Capacity planning monitoring should be 
in place to anticipate the need for more library resources in advance of 
exhaustion.

Richard Sims



Re: server to server dbbackup

2012-05-01 Thread Roger Deschner
We have two separate locations that house TSM servers. They back up
their databases to each other. Mixture of TSM V5.5 and V6.2 is not a
problem. The VOLHISTORY and DEVCONFIG statements in the dsmserv.opt file
each define two separate files - one is local and the other is an NFS
mount in the campus cloud. Plus each server does its normal file backup
as an ordinary TSM client to a TSM server at the other site. It all
works fine.
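A dsmserv.opt fragment for that two-location arrangement might look like the following (the paths are invented for illustration; the point is that the VOLUMEHISTORY and DEVCONFIG options can each be specified more than once, and the server writes to every listed file):

   VOLUMEHISTORY /tsminst1/volhist.local
   VOLUMEHISTORY /nfs/cloudsite/tsminst1/volhist.remote
   DEVCONFIG     /tsminst1/devconfig.local
   DEVCONFIG     /nfs/cloudsite/tsminst1/devconfig.remote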

The documentation in the TSM Administrator's Guide is pretty good on
this topic.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Mon, 30 Apr 2012, Zoltan Forray/AC/VCU wrote:

V5.5 (and probably older - I don't have anything older) and higher have
always been able to do this..   I have 5-onsite servers that daily do a
SNAPSHOT backup of the database (plus other things like devconfig, volhist
but that is another story) to an offsite TSM server that does nothing but
hold 2-days DB backups and the config files.


Zoltan Forray
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html



From:   Tim Brown tbr...@cenhud.com
To: ADSM-L@VM.MARIST.EDU
Date:   04/30/2012 01:22 PM
Subject:[ADSM-L] server to server dbbackup
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



Can TSM 6.3 be used to back up server A's database to a location on another
TSM server (server B)?



Thanks,



Tim Brown
Supervisor Computer Operations

Central Hudson Gas & Electric
284 South Ave
Poughkeepsie, NY 12601
Email: tbr...@cenhud.com
Phone: 845-486-5643
Fax: 845-486-5921
Cell: 845-235-4255




This message contains confidential information and is only for the
intended recipient. If the reader of this message is not the intended
recipient, or an employee or agent responsible for delivering this message
to the intended recipient, please notify the sender immediately by
replying to this note and deleting all copies and attachments.



Re: Objects Assigned vs. Your Database.

2012-04-16 Thread Roger Deschner
We do not allow Windows Vista/2008/7 System State backups to our
remaining V5.5 servers, and are reluctant to allow them on our new
V6.2.3 server. We enforce this with client option sets. Ever since Vista
landed, we have had extreme problems with the System State, and we
decided to simply prohibit backing it up. For even a V6.2 server,
backing up the Vista/2008/7 System State is like a snake swallowing a
whole pig.

OTOH, I too am seeing much faster overall DB operations on the V6.2
server, as Paul and Wanda have reported. But we've got a lot of V5.5
clients left, which means I have a lot of visits to reluctant faculty
members' offices ahead, who are still scared off by the abysmal
performance of the Windows Java GUI client. Couldn't something be done
about this, such as bring back the native Windows dsmmfc client which
was an order of magnitude faster at V6.1?

(I do wish that systemstate and systemobject were aliases to one another
on the DOMAIN statement, because I have to constantly watch for people
who upgrade from Win XP (which has System Object) to Win 7 (which has
System State), and change their cloptset.)

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
=== Java is a 4-letter word 



On Mon, 16 Apr 2012, Paul Zarnowski wrote:

At 05:08 PM 4/16/2012, Prather, Wanda wrote:
But from experience at a customer where we had similar problems (Win2K8 
clients were taking 8 hours for the incremental systemstate backup), the 
long-term solution is to get your clients to a V6.2+ TSM server.


I'll echo Wanda's observations.  We've been at 6.2 for awhile now, and it has 
solved a lot of these kinds of issues.  Expiration and DB backups fly now.  
System State backups (and expirations) are no longer the problem that they 
were.  I should mention that we did more than just upgrade to 6.2 - we also 
upgraded our server at the same time, so that might have helped a bit too.

..Paul



--
Paul ZarnowskiPh: 607-255-4757
CIT Infrastructure / Storage Services Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801Em: p...@cornell.edu



Controlling FILLING tapes at end of Migration

2012-03-12 Thread Roger Deschner
I'm having a problem with FILLING tapes multiplying out of control,
which have very little data on them.

It appears that this happens at the end of migration, when that one last
collocation group is being migrated, and all others have finished. TSM
sees empty tape drives, and less than the defined number of migration
processes running, and it decides it can use the drives, so it mounts
fresh scratch tapes to fill up all the drives. This only happens when
the remaining data to be migrated belongs to more than one node - but
that's still fairly often. The result is a large number of FILLING tapes
that contain almost no data. A rough formula for these almost-empty
wasted filling tapes is:

 (number of migration processes - 1) * number of collocation groups

Is there a way, short of combining collocation groups, to deal with this
problem? We've got a very full tape library, and I'm looking for any
obvious ways to get more data onto the same number of tapes. Ideally,
I'd like there to be only one FILLING tape per collocation group.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
   Academic Computing & Communications Center
==I have not lost my mind -- it is backed up on tape somewhere.=


Re: TSM in AIX WPARS

2012-01-25 Thread Roger Deschner
Run them as multiple TSM instances in a single AIX image. Works fine
under TSM 5.5, but it's even easier in TSM 6 than in previous versions,
and it's lower overhead, easier management, etc. as long as you get your
naming conventions right to keep them separate. Set it up with a Library
Manager and then you won't be wasting HBAs or tapes in the scratch pool.
The only downside is all the TSM instances in a single AIX instance have
to be at the exact same TSM release level.

So a strategy that might serve your client well into the future could be
to set up two LPARs, one with TSM 5.5, and the other with TSM 6.X. This
only wastes one set of HBAs etc. A Library Manager must be at the same
or higher release level than all of its Library Clients. You could set
up the Library Manager as the first TSM 6 instance in the TSM 6 LPAR,
which could then allow for an orderly future migration of all instances
to TSM 6. We currently have a 6.2 Library Manager serving a mix of 6.2
and 5.5 Library Clients. This is how we are migrating to TSM 6 - Library
Manager first.

xinetd can solve your port 1500 collisions. All you need is separate
NICs. The users continue to specify port 1500, but xinetd can change
that to 1501, 1502, 9997... within AIX so that each goes to the intended
TSM server image. We do this on one AIX system that hosts two TSM 5.5
instances, and on another that hosts two TSM 6.2 instances. Just beware
of how much CPU xinetd can consume with the heavy net traffic of TSM
backups, however CPU cycles may not be an issue on that new P7 machine.
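A sketch of one such xinetd stanza, assuming a second instance listening on 1501 and a NIC dedicated to it (the IP address and service name are hypothetical; you would add one stanza per instance/NIC pair):

   service tsm-inst2
   {
       type         = UNLISTED
       socket_type  = stream
       protocol     = tcp
       wait         = no
       user         = root
       bind         = 192.0.2.11        # NIC dedicated to instance 2
       port         = 1500              # what clients specify
       redirect     = 127.0.0.1 1501    # where this instance actually listens
       disable      = no
   }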

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=




On Tue, 24 Jan 2012, Steve Harris wrote:

Hi All

I'm involved in a consolidation exercise of a TSM environment that has
just growed over the last 15 years, and features multiple TSM 5.5
servers running on AIX on various P4 and P5 hardware.  These are to be
moved, with as few changes as possible, to new P7 hardware, at a new
data centre.  I'm also under the pump to get this done because of the
contract ending with the old data centre.

Migration to TSM 6 would take too long and is outside my brief. Because
of the unplanned server sprawl, most of the servers use port 1500, and I
don't have time to visit all the clients to update them to use a
different port. Also, given that the new hardware is so much better than
the old, it seems a waste to have two disk and two tape HBA ports
dedicated to each server in its own LPAR. So, I've hit on the idea of a
reasonably large TSM LPAR with the instances running in WPARS inside it.

Is anyone running TSM 5 in WPARS?  I'm interested in any problems you
might have had, war stories, gotchas, and also any "yep, works fine for
me" reports.


Many Thanks


Steve

Steven Harris
TSM Admin, Canberra Australia.



Re: Drive and Path Definition

2012-01-18 Thread Roger Deschner
I used to believe in all sorts of hokus-pokus in this regard, but I have
found that if you do things in an organized way in the correct order,
everything just works, without restarting dsmserv, on both TSM 5.5 and
6.2. See the TSM for AIX Administrator's Guide.

I recently moved a library from a simple configuration with one server,
to a Library Manager configuration with multiple server clients, which
involved a lot of undefining and redefining of paths and drives, and it
all just worked. The Library Manager is TSM 6.2, and it has a 5.5 client
and a 6.2 client. There were surprisingly few surprises.

Add the drive, run AIX cfgmgr, go into smit and add Tivoli Storage
Manager devices, define the drive and path in TSM, and it just works.
Pay attention to the fact that typically rmtN, mtN, and TSM driveN
device numbers will be different. Make sure the serial numbers in TSM Q
DRIVE F=D are correct.
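In outline, that sequence looks something like this (library, server, drive, and device names are placeholders; your rmtN number will almost certainly differ from the TSM drive number):

   cfgmgr                                        (AIX: discover the new device)
   DEFINE DRIVE LIB1 DRIVE05 SERIAL=AUTODETECT
   DEFINE PATH SERVER1 DRIVE05 SRCTYPE=SERVER DESTTYPE=DRIVE LIBRARY=LIB1 DEVICE=/dev/rmt4 AUTODETECT=YES
   Q DRIVE F=D                                   (verify serial numbers match)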

Do things in the wrong order, and sometimes restarting dsmserv or even
AIX can make up for your disorganization, making it appear that those
restarts were necessary. In my experience, this is an area that has
become a lot more stable and predictable in recent releases of TSM 5.5
and 6.2.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
== You will finish your project ahead of schedule. ===
= (Best fortune-cookie fortune ever.) ==


On Wed, 18 Jan 2012, David Bronder wrote:

Richard Rhodes wrote:

  In the past we've never had to restart TSM to add a new drive.  As
  long as the drive was discovered and available at the AIX layer, we
  could define the new drive.  Has this changed with v6.2.x?

 It's my experience that any change to the library (adding slots or drives)
 requires cycling TSM before it can access the new resource.

Seriously?  Is this behavior due to changes in TSM 6.x, or has it
always been this way for SCSI-style (smcX) libraries?

I've never had this issue with my 3494 library and any TSM version
from ADSM 2.1 through TSM 5.5.

If it's a 6.x change, it makes me nervous about the upgrade from 5.5.
If a SCSI library limitation, it makes me hesitant to give up my 3494
(looking at dedupe VTL appliances and/or newer physical libraries).
Getting TSM downtime is like pulling teeth, as our DBAs run log backups
for Oracle and MS-SQL servers almost continuously, 24x7.

--
Hello World.David Bronder - Systems Architect
Segmentation Fault  ITS-EI, Univ. of Iowa
Core dumped, disk trashed, quota filled, soda warm.   david-bron...@uiowa.edu



Re: TSM Migration Related

2011-12-06 Thread Roger Deschner
This has been the migration strategy we adopted, and are right now in
the middle of. New server machine, new TSM version, the client nodes
make fresh, new backups spread over a period of time. The old system
gradually expires away until you can get rid of it.

Run the new server on 6.2.3, instead of 6.3.0, for better supported
working with existing V5 servers. V6.3 with V5.5 is not supported.
However V6.2.3 does support working with V5.5, and it will be supported
for a while. You could do this migration first to V6.2.3, and then
upgrade to the very latest version later easily.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
You know you're a geek when... You try to shoo a fly away from the
monitor with your cursor. That just happened to me. It was scary.

On Tue, 6 Dec 2011, Stefan Folkerts wrote:


But there is also another route: let most of the data die on the
old system, migrate via server-to-server what needs to be saved, and start
over. That way you don't have that much work on the old environment (you do
need to upgrade TSM) and it's not such a big-bang migration.
However you do this, it's going to take some solid preparation and
planning.


Mac OSX 10.7 Lion backup to V5.5 server

2011-11-29 Thread Roger Deschner
This is more of an interoperability issue than a problem within the new
TSM 6.3 Mac client. Mac OSX 10.7 Lion is supported only with the V6.3
client. The issue is that two of our three servers are still running
V5.5. Trillions of bits have been devoted on this list to issues of
migrating TSM servers from V5 to V6 - it is not easy, no matter which
strategy you use. (Ours was to buy a new computer to run V6.)

The time required to migrate TSM servers to V6 should not matter to our
Mac user base, which is acquiring Lion-based Macs in large numbers right
now. You can't buy a new Mac with anything else. When I discover them
backing up to our old V5.5 servers, I am facing a prospect of
immediately moving them to our new V6.2 server - and also forcing them
to update their client to V6.3. This is more cats than I care to herd
around at once - literally.

What has been the experience so far running V6.3 clients on a V5.5
server? Is this something we need to actively prevent?

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
   Academic Computing & Communications Center
==I have not lost my mind -- it is backed up on tape somewhere.=


Re: delete filespace and LOGMODE NORMAL

2011-11-22 Thread Roger Deschner
In TSM V5, DELETE FILESPACE is extremely resource-intensive. To get rid
of this huge filespace you may have to plan to schedule it in pieces.
Set a schedule to start it every day at a quiet time, and then cancel
the process when the server needs to do something else. Doing it in
small pieces will also keep it from filling the log. Repeat for as many
days as it takes to get rid of the filespace. I don't know if it's
faster in V6, but I sure hope so, since the average Apple Mac client
node has about 1,000,000 files, and we've got many Macs. The larger log
size in V6 should help.
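One way to run it in pieces is an administrative schedule that kicks off the delete during a quiet window each day, paired with a manual CANCEL PROCESS when the server needs the resources back (the schedule, node, and filespace names here are invented examples):

   DEFINE SCHEDULE DELFS_BIGNODE TYPE=ADMINISTRATIVE -
     CMD="DELETE FILESPACE BIGNODE /Users" -
     ACTIVE=YES STARTTIME=03:00 PERIOD=1 PERUNITS=DAYS
   CANCEL PROCESS <process_number>     (when the quiet window ends)

DELETE FILESPACE picks up where it left off on the next run, so each daily slice makes forward progress without pinning the log for hours.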

More comments below.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
== You will finish your project ahead of schedule. ===
= (Best fortune-cookie fortune ever.) ==


On Tue, 22 Nov 2011, Sascha Askani wrote:

Am 22.11.2011 12:27, schrieb Loon, EJ van - SPLXO:
 Hi Sascha!
 Indeed this sounds strange. I can imagine that the delete filespace pins
 the log, which causes the log to grow, but as soon as you cancel the
 delete filespace, the pinning should be gone and thus the log
 utilization should be back to 0.

Yes, that's what I was expecting.

 This only proves my point: I have a PMR open for months about log
 utilization. Our log continues to grow and triggers a backup several
 times a night. We switched to normal mode, just to see what happened,
 but this also causes the log to grow. Less (up to 60/70%) but still, the
 log grows more than expected. When running in normal mode, the log only
 contains uncommitted transaction. Typically large SQL Server client
 backups tend to pin the log for a long time, but I also saw that the log
 space isn't returned after a pinning transaction is completed.
 Development explained that the recovery log uses a sort of a round robin
 method and that this is the reason why space isn't returned straight
 away.
 The fact that a canceled delete filespace doesn't free the log only
 proves to me that something is definitely broken/not working correctly
 in TSM, but I can't seem convince development...

While thinking about your answer I remembered I had a strange behaviour
(yes, yet another one!):

After cancelling the DELETE FILESPACE and the log not returning to
zero, I tried a DELETE VOL n DISCARDD=yes in the STGPool affected;
after that the log returned to zero immediately, but unfortunately, I
could not reproduce this, so maybe it was jst coincidence, who knows?

I have seen this kind of behavior. It was a coincidence. The thing that
caused the log to go back down to 0 was the simple passage of time, so
don't bother with DELETE VOL or any other trick. Sometimes it can take
several hours.

This is how TSM behaves about a lot of things, including CANCEL PROCESS.
It acts on them when it gets around to it and it decides that all the
related locks have expired, or it gets to the next file to be moved, or
whatever else it is that makes it take its time. A lot of things can be
held up by a client node restore that is underway, which will preempt
most other processes. Relax and go with the flow.

--Roger



However, it feels good not being the only one having this type of problem :)

BR,

Sascha



Re: Migrating from AIX to Linux (again)

2011-11-17 Thread Roger Deschner
We buy slightly used Power equipment for TSM, and are extremely happy
with the cost comparisons and the performance. You can get a lot of work
done with a used, higher-end P6.

When TSM gets down to some of its serious computation tasks such as
expiration, delete filespace, reclamation, and deduplication, you need a
lot more processing and I/O horsepower than you can get from commodity
equipment. The only alternative we would seriously consider is Sun
SPARC.

Some tasks such as web page serving and email can easily be broken up
into chunks and run on commodity equipment working against a central
NAS/SAN infrastructure, and we do this. TSM does not fit this model
without a LOT of additional management effort. TSM needs fewer, larger,
faster computers, which is also cheaper in terms of environmentals such
as power, cooling, and floor space.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Wed, 16 Nov 2011, Hans Christian Riksheim wrote:

I am not of any help here but you say you are moving to Linux because
it is cheaper.

Our Power servers running TSM accounts for less than 3% of the yearly
total cost for our backup infrastructure. Then we include licenses and
man hours in addition to hardware and data center costs(floor space,
power and cooling).

Cutting off a little of those 3% is not an option for us if it means
moving away from a rock solid platform. Even if Linux on Dell was
handed to us free of charge we would stay on Power. But YMMV.

Anyone else done the same calculation and found out what the cost of
the physical servers amount to compared to total cost for the TSM
infrastructure? Maybe you should.

Hans Chr.



On Wed, Nov 16, 2011 at 4:47 PM, Dury, John C. jd...@duqlight.com wrote:
 Our current environment looks like this:
 We have a production TSM server that all of our clients backup to throughout 
 the day. This server has 2 SL500 tape libraries attached via fiber. One is 
 local and the other at a remote site which is connected by dark fiber. The 
 backup data is sent to the remote SL500 library several times a day in an 
 effort to keep them in sync.  The strategy is to bring up the TSM DR server 
 at the remote site and have it do backups and recovers from the SL500 at 
 that site in case of a DR scenario.

 I've done a lot of reading in the past and some just recently on the 
 possible ways to migrate from an AIX TSM server to a Linux TSM server. I 
 understand that earlier versions of the TSM server (we are currently at 
 5.5.5.2) allowed you to back up the DB on one platform (AIX, for instance) 
 and restore it on another (Linux, for instance), and if you kept the same 
 library it would just work. Apparently that was removed from the TSM server 
 code, presumably to prevent customers from moving to less expensive 
 hardware. (Gee, thanks IBM! sigh).
 I posted several years ago about any possible ways to migrate the TSM Server 
 from AIX to Linux.
 The feasible solutions were as follows:

 1.       Build new linux server with access to same tape library and then 
 export nodes from one server to the other and then change each node as it's 
 exported, to backup to the new TSM Server instead.  Then the old data in the 
 old server can be purged. A lengthy and time consuming process depending on 
 the amount of data in your tape library.

 2.       Build a new TSM linux server and point all TSM clients to it but 
 keep the old TSM server around in case of restores for a specified period of 
 time until it can be removed.

 There may have been more options but those seemed the most reasonable given 
 our environment. Our biggest problem with scenario 1 above is exporting the 
 data that lives on the remote SL500 tape library would take much longer as 
 the connection to that tape library is slower than the local library.  I can 
 probably get some of our SLAs adjusted to not have to export all data and 
 only the active data but that remains to be seen.
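For reference, option 1 above boils down to a server-to-server export per node. A minimal sketch, assuming server-to-server communication is already defined between the two instances; the node and target server names are hypothetical placeholders:

```shell
# Export one node's definitions and all its backup/archive data directly
# to the new Linux server instance, then repoint the client there.
dsmadmc -id=admin -password=secret \
  "export node NODE1 filedata=all toserver=NEWLINUX"
```

FILEDATA=BACKUPACTIVE would limit the export to active backup versions, which is the SLA relaxation John mentions.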

 My question. Has any of this changed with v6 TSM or has anyone come up with 
 a way to do this in a less painful and time consuming way? Hacking the DB so 
 the other platform code doesn't block restoring an AIX TSM DB on a Linux 
 box? Anything?

 Thanks again and sorry to revisit all of this again. Just hoping something 
 has changed in the last few years.
 John




Re: TSM v6.3 is out there to download from the Passport Advantage page

2011-10-22 Thread Roger Deschner
That's wonderful. And the TSM 6.3 Mac client supports OSX 10.7 Lion,
which is important to us.

But there's plenty of rather bad news here. V6.3 servers and clients
cannot interoperate with V5.5 clients and servers. So Mac Lion cannot
back up to a V5.5 server. This also makes migration from V5.5 to V6.3 a
lot harder, because a V5.5 server cannot be a Library Client to a V6.3
Library Manager. The restriction that the Library Manager must be at the
same or higher release level than all of its clients means we will have
to completely eliminate all of our V5.5 TSM server instances before we
can migrate any of our instances to V6.3. That's going to be hard.

V6.3 has dropped support for a number of client and server platforms
that are still rather mainstream, such as AIX 5.3, Windows XP (still 50%
of our clients), Windows Server 2003, any 32-bit Linux. See it all at:

http://www-01.ibm.com/support/docview.wss?uid=swg21243309

http://www-01.ibm.com/support/docview.wss?uid=swg21302789

Looks like we can't go beyond V6.2 for quite a while, except for a few
V6.3 clients who will be restricted to our new V6.2 server that's
currently in test mode.

IBM: Supporting V6.3 interoperability with V5.5 would help a LOT, at
least for some limited pain areas, such as Library Manager
configurations, and an Apple Mac client that can run on OSX 10.7 Lion
with a V5.5 server.

P.S. No documentation yet. The V6.3 infocenter is a broken link.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
   Academic Computing & Communications Center


On Fri, 21 Oct 2011, Oscar Kolsteren wrote:

Hi all,
Don't know if this was already posted here, but TSM 6.3 is now available
on the Passport Advantage page.
Best Regards,
Oscar



Re: Label volumes in a scsi library?

2011-10-12 Thread Roger Deschner
You probably want to pipe the output of SHOW SLOTS into a file, and then
examine that file carefully. The output can be voluminous, but it can be
very useful, so do it.

You can also run the lbtest utility. Call IBM TSM Support for detailed
instructions, because it's an extremely powerful and useful program with
a not-too-friendly user interface. However, with lbtest you're talking
directly to the device driver, without TSM getting in between. Another
program whose output you want to somehow save in a file for more
contemplative examination. I save lbtest output via my SSH client.
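One hedged way to capture that output non-interactively (the admin ID, password, and library name are placeholders; SHOW SLOTS is an undocumented diagnostic, so its output format may vary by release):

```shell
# Run the diagnostic from the administrative CLI and keep the
# voluminous result in a file for contemplative examination.
dsmadmc -id=admin -password=secret "show slots LIBE" > show_slots.out 2>&1
```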

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=



On Wed, 12 Oct 2011, Jerry Michalak wrote:

try  show slots library_name in TSM to see the library info.

 
Jerry Michalak
jerry_...@yahoo.com



From: Moyer, Joni M joni.mo...@highmark.com
To: ADSM-L@VM.MARIST.EDU
Sent: Wednesday, October 12, 2011 12:57 PM
Subject: Re: [ADSM-L] Label volumes in a scsi library?

Hi,

I've tried to do this and I'm now getting the following errors:

10/06/11 12:50:34     ANR0609I LABEL LIBVOLUME started as process 33588.       
                       (SESSION: 58343, PROCESS: 33588)                       
10/06/11 12:50:46     ANR2017I Administrator LIDZR8V issued command: QUERY     
                       PROCESS  (SESSION: 58344)                               
10/06/11 12:50:47     ANR8300E I/O error on library NAS_QI6000 (OP=6C03,   
                       CC=207, KEY=05, ASC=21, ASCQ=01, SENSE=70.00.05.00.00.00-
                       .00.0A.00.00.00.00.21.01.00.CF.00.06., Description=Device
                       is not in a state capable of performing request).  Refer
                       to Appendix C in the 'Messages' manual for recommended 
                       action. (SESSION: 58343, PROCESS: 33588)               
10/06/11 12:50:48     ANR8942E Could not move volume Q0 from slot-element 
                       4096 to slot-element 65535. (SESSION: 58343, PROCESS:   
                       33588)                                                 
10/06/11 12:50:48     ANR8802E LABEL LIBVOLUME process 33588 for library       
                       NAS_QI6000 failed. (SESSION: 58343, PROCESS: 33588)     
10/06/11 12:50:48     ANR0985I Process 33588 for LABEL LIBVOLUME running in the
                       BACKGROUND completed with completion state FAILURE at   

It seems like it doesn't want to move the volume to a drive to label it, but 
I'm having issues trying to figure out what that error message means from 
Appendix C.

How do I tell where those slot elements are?  It appears as if something is 
still not working correctly, but I'm trying to figure out if this is a set up 
issue?  Or if it's something else entirely.

I defined a scsi library as follows for my nas partition which the drives are 
zoned from the library to the datamovers and I have the library  path and 
drives  paths defined within TSM:

                 Library Name: NAS_QI6000
                  Library Type: SCSI
                        ACS Id:
              Private Category:
              Scratch Category:
         WORM Scratch Category:
              External Manager:
                        Shared: No
                       LanFree:
            ObeyMountRetention:
       Primary Library Manager:
                           WWN:
                 Serial Number: QUANTUM273100111_LL2
                     AutoLabel: No
                  Reset Drives: No
               Relabel Scratch:

                  Source Name: TSMPROD3
                   Source Type: SERVER
              Destination Name: NAS_QI6000
              Destination Type: LIBRARY
                       Library:
                     Node Name:
                        Device: /dev/lb0
              External Manager:
                           LUN:
                     Initiator: 0
                     Directory:
                       On-Line: Yes

If anyone has any suggestions/ideas as to what I might be doing wrong I'd 
really appreciate the help.  Thanks

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of James 
Choate
Sent: Friday, October 07, 2011 9:53 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Label volumes in a scsi library?

How did tape Q0 get into the library? Did you manually load a batch of 
tapes in this library without checking them in thru the bulk I/O?

If that is the case, you can just checkin the tapes.
You can try:
Check in volumes that are already labeled:
checkin libvolume NAS_QI6000 search=yes status=scratch checklabel=barcode

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Moyer, 
Joni M
Sent: Thursday, October 06, 2011 8:52 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Label volumes in a scsi library?

Hi Everyone,

I was trying

Re: Recovering orphaned tape from Library Server

2011-10-11 Thread Roger Deschner
That AUDIT LIBRARY on the library client system worked! Thank you. This
is a hint to file away, as I'm sure this will come up again as we expand
our use of TSM library sharing. It would not have occurred to me to
try that, although it's the first thing the doc for AUDIT LIBRARY says.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
There are only 10 types of people in the world: Those who understand
binary, and those who don't.


On Fri, 7 Oct 2011, Neil Schofield wrote:

Roger

Before running the DELete VOLHistory with FORCE=YES, try running an AUDIT
LIBRary LIBE CHECKLabel=Barcode from the library client.

Perversely, it doesn't report the discrepancies it discovers in the
activity log on either the library client or the library manager (at least
on v5.5), but for me it's always returned the library volume on the
library manager to Scratch 'in the background' as part of the processing
on the library client.
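Put as a command (a sketch; LIBE and the admin credentials are placeholders):

```shell
# Run on the LIBRARY CLIENT, not the library manager -- reconciling the
# client's view is what returns the volume to scratch on the manager
# as a side effect.
dsmadmc -id=admin -password=secret "audit library LIBE checklabel=barcode"
```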

Regards
Neil Schofield
Technical Leader, Data Centre Services Engineering Team
Yorkshire Water

 




Recovering orphaned tape from Library Server

2011-10-06 Thread Roger Deschner
I'm not having much luck recovering a tape which seems to be orphaned.
It's a library manager setup, with both Library Server and Library
Client at TSM 6.2.30.

The Library Client had mounted this tape to run a database backup, but
the backup failed with a tape I/O error. So it didn't make it into the
Library Client's Volume History File. I reran the DB backup, which
worked the second time using a different tape. The Library Client knows
nothing of the first tape anymore.

However, the Library Server still has it listed in its Volume History
File as Volume Type REMOTE, with a date more than 8 days ago. Q LIBVOL
shows:

  Library Name: LIBE
   Volume Name: TE0105
Status: Private
 Owner: ADSM-4   (That's the name of the Library Client)
  Last Use: DbBackup
  Home Element: 1,105
   Device Type: LTO
 Cleanings Left:
Media Type: LTO-4

On the Library Server system, I tried UPDATE LIBVOL STATUS=SCRATCH, and
DELETE VOLHISTORY TYPE=ALL TODATE=TODAY-8 to no avail. That didn't
remove it from the VolHist file. It seems stuck in limbo.

This tape has nothing useful on it - an old and incomplete database
backup. How can I recover it? I'd hate to hand-edit the VolHist file.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


Re: Getting unix permissions back from TSM

2011-09-17 Thread Roger Deschner
This happened to us only about two weeks ago, on a Solaris system. Just
go ahead and do a full restore back to the original locations,
specifying replace=all. Restoring to an alternate location and trying to
merge the metadata back in is a mess that will take much more time than
a simple full restore.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=


On Fri, 16 Sep 2011, Strand, Neil B. wrote:

You might want to look into the AIX commands:
lppchk and/or tcbck


Thank you,
Neil Strand
Storage Engineer - Legg Mason
Baltimore, MD.
(410) 580-7491
Whatever you can do or believe you can, begin it.
Boldness has genius, power and magic.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Steve 
Harris
Sent: Friday, September 16, 2011 1:15 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Getting unix permissions back from TSM

Hi All

One of my accounts has just had a unix admin try to run something
like

chown -R something:something /home/fred/*

but he had an extra space in there and ran it from the root directory

chown -R something:something /home/fred/ *

This has destroyed the ownership of the operating system binaries and
trashed the system.  Worse it was done using a distributed tool, so
quite a number of AIX lpars are affected including the TSM server.

Once we get the TSM Server back up, is there any way to restore just
the file permissions without restoring the data?  I can't think of a
way.  Maybe there is a testflag to do this?  Even a listing of the file
and permissions for all active files would be enough to be able to fix
the problem.

TSM Server 5.5 AIX 5.3

Thanks

Steve

TSM Admin
Canberra Australia.
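There is no TSM testflag for this that I know of, but for the future a plain metadata snapshot taken outside TSM would let you replay ownership after such an accident without a full restore. A minimal sketch using GNU findutils' -printf (AIX's native find lacks -printf, so on AIX this assumes GNU findutils is installed; TOP defaults to the current directory):

```shell
#!/bin/sh
# Snapshot mode, owner:group and path for everything on one filesystem
# under $TOP, one line per file, so chmod/chown can be replayed later.
TOP=${TOP:-.}
find "$TOP" -xdev -printf '%m %u:%g %p\n' > perms_snapshot.txt
wc -l perms_snapshot.txt
```

Run nightly from cron alongside the TSM schedule, this listing would have been enough to repair the chown -R accident directly.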




Re: Comment about the TSM gui from a IBM conference

2011-09-07 Thread Roger Deschner
I've been ranting about that here for a while now. Yes, it sucks! And
removing the dsmmfc native GUI client in 6.2 just added insult to
injury. I did a benchmark of the Windows v6.1 java client versus the
v6.1 native dsmmfc client on the same computer, and the java client took
10 times longer to initialize. TEN TIMES. It's so slow that when you ask
it to do something, you find yourself frequently clicking again because
nothing happened, but that sometimes causes malfunctions. I should
submit a performance APAR against it. It's so bad that we are still
distributing the v5.5 client to all our Windows users, despite the other
improvements in 6.2 such as incremental system state backup.

Until this improved and hopefully fast new client is developed with all
that new technology, please update and return the dsmmfc client to
distributions in v6.3, and make it the default. It was fast; THAT was
the product that raised the bar for performance expectations for TSM
client programs - TSM v5.5 dsmmfc, not XIV.

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape somewhere.=



On Wed, 7 Sep 2011, Richard Rhodes wrote:

Interesting question/comment on the IBM Storage blog by Tony Pearson at
https://www.ibm.com/developerworks/mydeveloperworks/blogs/InsideSystemStorage/?lang=en

In talking about events on Day 4 of the IBM Storage University, he
describes
a Question/Answer event with this Q+A . . .

Q)  The TSM GUI sucks!  Are there any plans to improve it?

A)  Yes, we are aware that products like IBM XIV have raised the bar for
what people expect for graphical user interfaces.  We have plans to
improve the TSM GUI.  IBM's new GUI for the SAN Volume Controller and
Storwize V7000 has been well-received, and will be used as a template
for the GUIs of other storage hardware and software products.  The GUI
uses the latest HTML5 Dojo widgets and AJAX technologies, eliminating
Java dependencies on the client browser.



Rick







Active Log Mirror in 6.2?

2011-07-08 Thread Roger Deschner
Which configuration is better on TSM 6.2:

1. TSM Active Log on mirrored RAID (RAID1/RAID10) with no TSM Active Log
Mirror.

2. TSM Active Log, and TSM Active Log Mirror, each on separate JBOD
disks. (JBOD is faster than RAID, which compensates for the performance
hit of the second write by TSM. This moves the second write from the
RAID controller to TSM itself.)

The second arrangement is how we've always done it on TSM 5 and before,
because it offered better recoverability. This is how I set up my first
TSM 6.2 server, and it works OK. As I set up my second one, I'm
wondering if it's the best way.
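For reference, arrangement 2 maps onto two v6 server options (a sketch; the directory paths are placeholders, and the active log directory is normally fixed at DSMSERV FORMAT time):

```
ACTIVELOGDIRECTORY /tsmlog
MIRRORLOGDIRECTORY /tsmlogmirror
```

With each directory on its own JBOD spindle, TSM itself performs the second write, exactly as in the TSM 5 log-mirror arrangement.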

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
== History does not repeat itself, but it does rhyme. -- Mark Twain ==


Re: Deduplication and Collocation

2011-06-22 Thread Roger Deschner
Back to client side dedupe, which we're about to deploy for a branch
campus 90 miles away in Rockford IL.

The data is sent from the clients in Rockford via tin cans and string to
the TSM server in Chicago already dedpued. We're using source dedupe
because the network bandwidth is somewhat limited. So if it is received
into a DEVCLASS DISK stgpool, then I assume it is still deduped, because
that's how it arrived. Then finally when it's migrated to tape, we've
already established that it gets reinflated, and then you can collocate
or not as you wish.

But the question is, does this imply that deduped data CAN exist in
random access DEVCLASS DISK stgpools if client-side dedupe is being
used? I sure hope so, because that's what we're planning to do.
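For anyone following along, client-side dedup is enabled in two places; a sketch with a hypothetical node name (note that, as far as I know, the destination must still be a deduplication-enabled FILE storage pool rather than a random-access DISK pool):

```shell
# On the server: permit client-side dedup for the node.
dsmadmc -id=admin -password=secret \
  "update node ROCKFORD1 deduplication=clientorserver"
# On the client, in dsm.sys (or dsm.opt on Windows):
#   DEDUPLICATION YES
```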

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
== You will finish your project ahead of schedule. ===
= (Best fortune-cookie fortune ever.) ==


On Tue, 21 Jun 2011, Paul Zarnowski wrote:

Even if a FILE devclass has dedup turned on, when the data is migrated, 
reclaimed, or backed up (backup stgpool) to tape, then the files are 
reconstructed from their pieces.

You cannot dedup on DISK stgpools.
DISK implies random access disk - e.g., devclass DISK.
FILE implies serial access disk - e.g., devclass FILE.

But I think there is still an open question about collocation and 
deduplication.  Deduplication must be done using FILE stgpools, but FILE 
stgpools CAN use collocation.  I don't know what happens in this case.

..Paul

At 02:38 PM 6/21/2011, Prather, Wanda wrote:
If it is a file device class with dedup turned off, yes.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Mark 
Mooney
Sent: Tuesday, June 21, 2011 2:29 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Deduplication and Collocation

So data is deduplicated in a disk storage pool but when it is written to tape 
the entire reconstructed file is written out?  Is this the same for file 
device classes?



--
Paul Zarnowski             Ph: 607-255-4757
Manager, Storage Services  Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801  Em: p...@cornell.edu


