Re: Best Practices/Best Performance SP/TSM B/A Client Settings ?
Hi Tom,

One other option, if you have SQL 2008 or above, is to set "sqlcompression" in tdpsql.cfg. On some databases we were able to cut the size of a full backup in half.

On Fri, Mar 31, 2017 at 9:44 AM, Hans Christian Riksheim wrote:

Any reason to set TCPWINDOWSIZE lower than maximum? And on the same note, why not let the OS handle it (TCPWINDOWSIZE 0)?

Hans Chr.

On Tue, Mar 28, 2017 at 9:54 PM, Matthew McGeary <matthew.mcge...@potashcorp.com> wrote:

Hello Tom,

Yes, you will need a mount point for each stripe. Unlike resourceutilization, stripes represent client sessions that send data, not data and control sessions combined.

Since we're totally in the container-class pool world, all my nodes have maxnummp=100 because I heavily use multiple sessions to increase throughput.

__
Matthew McGeary
Senior Technical Specialist – Infrastructure Management Services
PotashCorp
T: (306) 933-8921
www.potashcorp.com

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Tom Alverson
Sent: Tuesday, March 28, 2017 1:43 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Best Practices/Best Performance SP/TSM B/A Client Settings ?

I tried 10 and the backup failed due to not enough mount points. I set it to 2 and that did speed things up. Do I need one mount point for each stripe? We normally set the mount points to 2. Does this mean that I need one mount point for my conventional TSM backup and 10 more to do 10 stripes? I notice that when I set RESOURCEUTILIZATION to 10 for the conventional backups I get four parallel sessions. Do I need 4 mount points just for that (plus whatever I need for SQL)?

Thanks!

On Mon, Mar 27, 2017 at 3:24 PM, Matthew McGeary <matthew.mcge...@potashcorp.com> wrote:

If you're using TDP for SQL you can specify how many stripes to use in the tdpo.cfg file.

For our large SQL backups, I use 10 stripes.

__
Matthew McGeary
Senior Technical Specialist – Infrastructure Management Services
PotashCorp
T: (306) 933-8921
www.potashcorp.com

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Tom Alverson
Sent: Monday, March 27, 2017 1:11 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Best Practices/Best Performance SP/TSM B/A Client Settings ?

Our biggest performance issue is with SQL backups of large databases. Our DBAs all want full backups every night (and log backups every hour), and for the databases that are around 1 TB the backup will start at midnight and finish 5 to 13 hours later (it varies day to day). When these backups start extending into the daytime hours they complain, but I don't know how we could improve the speed. Our storage servers all have 10 Gb interfaces, but they are backing up hundreds of clients every night (mostly incremental file-level backups). I am running a test right now to see if RESOURCEUTILIZATION 10 helps one of these database backups, but I suspect it will make no difference, as 99% of the data is all in one DB and I don't think SQL/TSM will split that into multiple streams (will it?).

On Sun, Mar 26, 2017 at 6:57 AM, Del Hoobler wrote:

Hi Tom,

My original posting was an excerpt from best practices for container pools, and does not necessarily apply to other storage pool types.

Yes, client-side deduplication and compression options should be avoided with a Data Domain storage pool.

A fixed resourceutilization setting of 2 may underperform for clients that have a lot of data to back up and fast network connections, but this is not a black-and-white answer. There are various other conditions that can affect this, and trying to narrow in on them in ADSM-L would be difficult. If you want some help with a performance issue, please open a PMR.

Del

"ADSM: Dist Stor Manager" wrote on 03/25/2017 12:20:43 AM:

From: Tom Alverson
To: ADSM-L@VM.MARIST.EDU
Date: 03/25/2017 05:40 AM
Subject: Re: Best Practices/Best Performance SP/TSM B/A Client Settings ?
Sent by: "ADSM: Dist Stor Manager"

Del:

We have been using these settings as our defaults. Is our TCPWINDOWSIZE too large?
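Pulling the settings discussed in this thread together, a hedged sketch of what the client-side pieces might look like. The option names (SQLCOMPression, STRIPes, RESOURceutilization, TCPWindowsize, maxnummp) are the real ones being discussed; the values are just the examples from the thread, and the node name is hypothetical, so tune all of it for your own environment:

```
* tdpsql.cfg (TDP for SQL) -- SQL 2008+ backup compression, 10 stripes
SQLCOMPression      Yes
STRIPes             10

* dsm.opt (B/A client)
RESOURceutilization 10
TCPWindowsize       0

* on the server: one mount point per stripe, plus headroom for the
* B/A client's parallel data sessions (node name hypothetical)
update node SQLNODE1 maxnummp=14
```

Per Matthew's note, each stripe is a data-sending session, so maxnummp has to cover the stripe count on its own, unlike resourceutilization, which counts control sessions too.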
Re: TSM VE 6.4: How to kill a running backup
I've been looking for some way of cancelling the backup too, but no luck. Removing the snapshot takes more time than the backup itself.

On Tue, May 28, 2013 at 2:43 AM, Stefan Folkerts <stefan.folke...@gmail.com> wrote:

I use something like your STPPP procedure and don't know of any way to do it using the plug-in; I don't think it can be done. Part of your hypothetical problem might be too-aggressive vSphere DRS settings that triggered these purely hypothetical vMotion jobs in vCenter. I find the default DRS setting to be way too jumpy and tone that setting back a bit.

Stefan

On Mon, May 27, 2013 at 8:42 PM, Prather, Wanda <wanda.prat...@icfi.com> wrote:

If I do a "run now" backup task/schedule from the plug-in, and decide it's a bad thing (say, hypothetically, that I started a multiple-VM backup task and noticed that it caused massive vMotion activity - just saying hypothetically, not admitting anything, oh no, nothing to see here), how do you cancel a backup task from the plug-in?

If hypothetically I really REALLY wanted to stop the active backup task, one might suppose I could have hypothetically done it in a panic by screaming STPPP!! several times loudly and killing the session from the server end and shutting down dsmcad on the data mover VM, which is also my plug-in server. But hypothetically there should be something I could do instead from the plug-in, right?

Wanda Prather | Senior Technical Specialist | wanda.prat...@icfi.com | www.icfi.com
ICF International | 401 E. Pratt St, Suite 2214, Baltimore, MD 21202 | 410.539.1135 (o)

--
__
Leandro Mazur
IBM Certified Deployment Professional - Tivoli Storage Manager V6.3
Linkedin: http://br.linkedin.com/in/leandromazur
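For the server-end kill Wanda describes, a hedged sketch of the admin-side commands; the session and process numbers are hypothetical, so check your own query output first:

```
/* from an administrative client (dsmadmc) */
query session format=detailed   /* find the data mover's sessions */
cancel session 1234             /* number taken from the query output */

query process                   /* a VM backup may also run as a process */
cancel process 56
```

This cancels the TSM side only; as noted above, vCenter may still spend a long time removing the snapshot afterwards.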
Re: Learning resources for VMware
Hello Steven,

It's not necessary to have deep knowledge of VMware in order to install TSM for VE. I think there are two things it's good to understand: how snapshots work, and the inventory hierarchy. For both, you can find a lot of information on the VMware site.

Hope this helps.

On Wed, Nov 14, 2012 at 4:41 AM, Steven Harris <st...@stevenharris.info> wrote:

Hi Gang

I have a customer who is bursting to get TSM for VE 6.4 up and running. So it's time to get me some book-learnin'. What does a TSM admin need to know about VMware and especially vStorage in order to get a TSM for VE installation working properly? Where is the best place to obtain such info?

Yes, I've done some searches, but for example one document that looked promising turned out to be dated 2006, and in this fast-moving environment that is positively stone-age.

Thanks

Steve.

Steven Harris
TSM Admin
Canberra Australia

--
__
Leandro Mazur
Re: Reserved mount point
Jim,

Use the command "show library" to see if there's anything that might help you. To release it from this state, I think the only way is to restart TSM.

On Mon, Jan 31, 2011 at 6:59 PM, Jim Davis <jjda...@email.arizona.edu> wrote:

A "q mount" returns "ANR8376I Mount point reserved in device class LTODEV, status: RESERVED." It's been in that state for quite some time. I'm trying to figure out how to identify what put that mount point into a reserved state, and how to release it from that state. This is on a version 5.4 server.

Thanks!

--
Jim Davis
Biotechnology Computing Facility
Arizona Research Labs

--
__
Leandro Mazur
Re: Strange Behaviour: backup STG primary storage pool
Rajesh,

You can set the COPYSTGPOOLS option on your primary storage pool (simultaneous write to the copy pool).

On Tue, Nov 2, 2010 at 6:19 AM, Lakshminarayanan, Rajesh <rajesh.lakshminaraya...@dfs.com> wrote:

Hi,

Your theory seems to be correct. I just queried my activity log to check how my admin schedule to copy the primary stg to the copy pool is working. It indeed submits three separate processes and works in parallel. One of my client nodes pushed 300 GB of data to the primary pool, and when I ran backup stg I saw this behavior.

Is there a workaround to copy a single node's data to the tape pool in parallel, other than changing the primary pool to a FILE device class? I see my resource (tape) not getting fully utilized.

Regards,
Rajesh

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Remco Post
Sent: Tuesday, November 02, 2010 3:02 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Strange Behaviour: backup STG primary storage pool

Hi Rajesh,

could it be that you only have data for a small number of nodes in your server? Or that there is one exceptionally large node? IIRC backup stg for DISK works by node, so if one process is working on the data of the last node, the other process will finish if there is no data of other nodes to back up. This is different for FILE and tape volumes; there, backup stg works per volume.

On 2 nov 2010, at 07:04, Lakshminarayanan, Rajesh wrote:

Hi,

When I trigger "backup stg <primary stg pool> <tape copy stg pool> maxproc=2", I see two processes getting submitted to back up the primary storage pool (DISK device class) to my tape copy pool. After a while one of the processes completes normally while the other keeps running until it has fully created a copy in the secondary (copy) storage pool. I use MAXPROC=2 to speed up my work, but I don't see that happening. I don't have any issues with mount points; I have the needed tape mount points available during this backup. It would be great if someone can share their experience.

Or is it a bug in TSM?

My environment details: TSM 5.4.1.2 (TSM server running on AIX 6 TL5).

Regards,
Rajesh

--
Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622

--
__
Leandro Mazur
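The two approaches from this thread, as a hedged command sketch; the pool names are hypothetical, and as Remco notes, backup stg parallelism is per node for DISK pools and per volume for FILE/tape pools, so MAXPROCESS alone won't split one node's data:

```
/* copy the primary pool with multiple processes */
backup stgpool DISKPOOL COPYPOOL maxprocess=2 wait=no

/* or: simultaneous write -- copy-pool copies are made at the same
   time the client stores the data in the primary pool */
update stgpool DISKPOOL copystgpools=COPYPOOL
```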
Re: incremental backup of many millions of very small files
Has anybody experienced this same situation on Linux? We have one server with around 50 M files.

On Thu, Jun 24, 2010 at 5:13 PM, Zoltan Forray/AC/VCU <zfor...@vcu.edu> wrote:

Sorry for the lack of clarification. I was talking about regular TSM and nodes with millions of objects. Across my 5 servers, I have 10 nodes with 20M+ objects, and the highest is 97M.

From: Lindsay Morris <lind...@tsmworks.com>
To: ADSM-L@VM.MARIST.EDU
Date: 06/24/2010 11:16 AM
Subject: Re: [ADSM-L] incremental backup of many millions of very small files
Sent by: ADSM: Dist Stor Manager <ADSM-L@VM.MARIST.EDU>

You say "Been there, done that." You mean with FastBack, not TSM? When you talk about NQR, No Query Restore, I don't think you're talking about FastBack anymore.

Lindsay Morris
CEO, TSMworks
Tel. 1-859-539-9900
lind...@tsmworks.com

On Thu, Jun 24, 2010 at 9:19 AM, Zoltan Forray/AC/VCU <zfor...@vcu.edu> wrote:

Been there - done that - went through a complete restore that took days (could not do NQR for some of it). Why is journaling not feasible? I have a Windows box with 97M total files (including offsite copy) that uses journaling and backs up every day. Granted, it takes 7 hours and uses the minimum memory model (the box is still 2K3 32-bit with 4GB RAM).

From: Mehdi Salehi <ezzo...@googlemail.com>
To: ADSM-L@VM.MARIST.EDU
Date: 06/24/2010 09:05 AM
Subject: [ADSM-L] incremental backup of many millions of very small files
Sent by: ADSM: Dist Stor Manager <ADSM-L@VM.MARIST.EDU>

Hi,

Can TSM FastBack be a good solution to back up an NTFS filesystem (about 500GB) with tens of millions of files? The daily increment of this filesystem is about 10-15 GB. Currently we use full daily image backups with the B/A client. Because incremental (even journaling) is not feasible, and furthermore a restore would take days, I wonder whether the block-level incremental of FastBack can help here?

Thanks so much

--
__
Leandro Mazur
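For the journaling setup Zoltan mentions, the Windows client reads its journal daemon settings from tsmjbbd.ini, and the "minimum memory model" is the memoryefficientbackup client option. A hedged sketch only; the paths and monitored filesystem are assumptions, so check the journal daemon documentation for your client level:

```
; tsmjbbd.ini -- journal-based backup daemon (Windows client)
[JournalSettings]
Errorlog=c:\tsm\jbberror.log

[JournaledFileSystemSettings]
JournaledFileSystems=C:

; dsm.opt -- the "minimum memory model" Zoltan refers to
MEMORYEFficientbackup yes
```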
Re: Unicode on UNIX
We had a lot of problems like that... the solution, at least in the cases where a locale change was not possible, was to create a client schedule (with action=command), create a script, and put export lines in this script... something like this:

export LC_ALL=pt_BR.ISO8859-1
export LANG=pt_BR.ISO8859-1
dsmc inc -verbose -sub=yes > some_log

Not exactly the same, but you've got the idea... We tried to put the export lines in the dsmc sched script, but it didn't work... when you do an export on the command line, it is only valid for that session, and I think the two scripts run as two separate sessions (in the OS).

In the cases where the locale change was possible, the sysadmins put the lines above in /etc/environment... but the locales had to be installed first.

On Tue, May 18, 2010 at 6:02 PM, Michael Green <mishagr...@gmail.com> wrote:

On Tue, May 18, 2010 at 10:49 PM, km <k...@grogg.org> wrote:

> I would advise against overriding the default settings in a script; instead, set the correct locale for the system. Most system settings in RHEL-based distros are made in the sysconfig directory:
> http://www.centos.org/docs/5/html/5.1/Deployment_Guide/s2-sysconfig-i18n.html

Please let me disagree with you. I think it's the wrong approach to change the locale for the entire OS for the sake of backups only. Besides, I'm not fully aware of the consequences of changing the locale system-wide. Are you?

> In this case, if the locale does not exist, just install it. Since the en_US locale is included in the glibc-common RPM, try to reinstall or update that RPM.

I didn't say the en_US locale doesn't exist. On the contrary, it does. What I said is that the Linux TSM client will not back up files with funny characters in the filename after dsmcad is started from an init script on _bootup_ with the LC_CTYPE and LANG locales set to en_US on RHEL and SLES. I challenge anyone to show that it works for him/her in any version of RHEL or SLES.

> However, a user running CentOS thinks that en_US does not exist in that flavor of Linux, so he misses 1000s of files each night. Anyone have any thoughts on this?

Fred has touched here on a major problem that has plagued the TSM product line for ages and continues to go unresolved. It is absolutely unacceptable that the TSM client skips files whose filenames do not conform to a specific locale. In my view, every file that can be registered in a file system (ext3/reiser/xfs) supported by the major commercial Linux distributions (RHEL/SLES) must be backed up no matter what. As long as the file system itself is consistent and the underlying physical media is not damaged, everything should just work.

Around 2008 IBM published a paper called "Tivoli Storage Manager and Localization". The paper contains explanations of why it doesn't work and describes at length how to deal with files named in various barbarian languages. It's a fascinating read, but doesn't help much in my situation. And besides, with all due respect, IMO that's not something I, as an administrator, should be dealing with. If GNU tar can swallow and restore these files without messing with the locale or anything else, why can't TSM?

--
Warm regards,
Michael Green

--
__
Leandro Mazur
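The scheduler-driven workaround Leandro describes can be packaged as a small wrapper invoked by a client schedule defined with action=command. A sketch only: the locale, log path, and schedule wiring are assumptions for your environment, and the point is that the exports live in the same process that runs dsmc, unlike dsmcad's boot-time environment:

```
#!/bin/sh
# Wrapper run by a TSM client schedule (action=command) so the
# incremental inherits the right locale even though dsmcad was
# started at boot with a minimal environment.
LC_ALL=pt_BR.ISO8859-1
LANG=pt_BR.ISO8859-1
export LC_ALL LANG

dsmc incremental -verbose -subdir=yes >> /var/log/dsmc_sched_locale.log 2>&1
```

On the server side, the matching schedule would be something like `define schedule DOM ACTION=command OBJECTS="/usr/local/bin/backup_locale.sh" ...` (names hypothetical).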
Re: Securing TSM Client
Humm... that's interesting... I'll take a look! Thanks Richard!

On Wed, May 12, 2010 at 8:18 AM, Richard Sims <r...@bu.edu> wrote:

Leandro - The problem you're dealing with is a personnel management one, which can't be solved by technology alone. The management there is the only avenue of solution.

Technology can help, though. You can harvest records from the dsmaccnt.log to compile a report to management demonstrating the times and data amounts that inappropriate people have been performing TSM actions on that client, by virtue of field 7 containing a username whenever dsmc is invoked by an individual. You could go further by having a dsmadmc-based monitor performing Query SEssion Format=Detailed to look for sessions from that node: where the User Name is inappropriate, the monitor could then cancel the session and send a notification of the usage violation.

Richard Sims at Boston University

--
__
Leandro Mazur
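Richard's accounting-log idea can be sketched in a few lines of awk. Everything below is a hypothetical illustration: the sample records are made up, and the assumption that field 7 of a comma-separated dsmaccnt.log record carries the user name comes straight from Richard's note, so verify the record layout for your server level before relying on it.

```shell
#!/bin/sh
# Build a tiny, made-up sample accounting log so the sketch is
# self-contained; real records have many more fields.
cat > /tmp/dsmaccnt.sample <<'EOF'
5,0,ADSM,05/11/2010,02:00:01,NODE1,tsmsched,142,0,1
5,0,ADSM,05/11/2010,14:32:10,NODE1,jsmith,98304,0,1
5,0,ADSM,05/11/2010,23:00:02,NODE1,tsmsched,157,0,1
EOF

# Report date, time, node, and user for every session whose user name
# (field 7, per Richard) is not the expected scheduler account.
awk -F, '$7 != "tsmsched" { print $4, $5, $6, $7 }' /tmp/dsmaccnt.sample
```

Fed the sample above, this prints only the 14:32 session started by "jsmith"; pointed at the real dsmaccnt.log, it would make the basis of the management report Richard suggests.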
Re: Securing TSM Client
I've tested this option... it works well, but in cases where the schedule calls a script it didn't work, because the client could not start the session...

On Wed, May 12, 2010 at 12:32 PM, Shawn Drew <shawn.d...@americas.bnpparibas.com> wrote:

Intended for firewall security, the sessioninitiation property on the nodes may work for this. You will have to disable the setting if anyone actually wanted to do a manual backup or restore from the client side, but that's a quick "upd n". Only the TSM server would be allowed to start client sessions.

Regards,
Shawn

Shawn Drew

Internet
leandroma...@gmail.com
Sent by: ADSM-L@VM.MARIST.EDU
05/11/2010 04:08 PM

Please respond to ADSM-L@VM.MARIST.EDU

To: ADSM-L
Subject: [ADSM-L] Securing TSM Client

Hello everyone!

I don't know if somebody else has this kind of problem, but I have the following situation in the company I work for:

- We have a TSM team to install, configure and maintain the whole backup process, server and client;
- We have sysadmins that take care of the operating system and the applications;
- When there's a need for any action to do with backup, they should open a ticket for the TSM team.

The problem we have is that the sysadmins are doing backups/archives and restores/retrieves without our knowledge, with great impact on our database (among other things...). We would like to block this access on the client, but we were not successful. If we use "passwordaccess generate" in dsm.sys, the password is prompted only at first access. If we use "passwordaccess prompt", the scheduler doesn't work (ANS2050E)...

Any suggestions from the experts? Maybe it could be an improvement for IBM to implement in the future...

__
Leandro Mazur

--
__
Leandro Mazur
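Shawn's suggestion, as server-side commands; the node name and addresses are hypothetical. With serveronly, the server needs to know where to reach the client scheduler, which is why the high-level and low-level addresses go on the node:

```
/* only the server may open sessions for this node */
update node WEBSRV01 sessioninitiation=serveronly hladdress=websrv01.example.com lladdress=1501

/* flip it back temporarily when a legitimate manual restore is needed */
update node WEBSRV01 sessioninitiation=clientorserver
```

Note Leandro's caveat above: schedules that run action=command scripts which themselves call dsmc may fail in this mode, since the script's dsmc invocation is a client-initiated session.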
Securing TSM Client
Hello everyone!

I don't know if somebody else has this kind of problem, but I have the following situation in the company I work for:

- We have a TSM team to install, configure and maintain the whole backup process, server and client;
- We have sysadmins that take care of the operating system and the applications;
- When there's a need for any action to do with backup, they should open a ticket for the TSM team.

The problem we have is that the sysadmins are doing backups/archives and restores/retrieves without our knowledge, with great impact on our database (among other things...). We would like to block this access on the client, but we were not successful. If we use "passwordaccess generate" in dsm.sys, the password is prompted only at first access. If we use "passwordaccess prompt", the scheduler doesn't work (ANS2050E)...

Any suggestions from the experts? Maybe it could be an improvement for IBM to implement in the future...

__
Leandro Mazur
Re: Securing TSM Client
Thanks for the answers! About the suggestions:

- I can't lock the nodes during the day, because several backups run every 2, 4 and 6 hours;
- Locking the admin is a good suggestion, although not possible...
- The admins have the administrator/root password, so they can do anything...
- It's not the occasional backups that worry me... instead of using the TSM client scheduler, we just found out that they are using crontab/task scheduler to do backups (a lot of them!). Our ticket response time is 1 hour at most... for 99% of the cases we have, that is more than acceptable;
- We are having considerable growth of our data, which causes the impact I mentioned, but it is manageable as long as we don't have surprises like that.

It seems the only thing I can do is convince the admins not to do it... anyway, thanks for the help!

On Tue, May 11, 2010 at 7:22 PM, Remco Post <r.p...@plcs.nl> wrote:

On 11 mei 2010, at 22:08, Leandro Mazur wrote:

> Hello everyone !
>
> I don't know if somebody has this kind of problem, but I have the following situation in the company I work for:
> - We have a TSM team to install, configure and maintain the whole backup process, server and client;
> - We have sysadmins that take care of the operating system and the applications;
> - When there's a need for any action to do with backup, they should open a ticket for the TSM team;
>
> The problem that we have is that the sysadmins are doing backups/archives and restores/retrieves without our knowledge, with great impact on our database (among other things...).

if a system administrator running an occasional backup has _great_ impact on your database, you need to reconsider your TSM infrastructure. I'm assuming here that your system administrators have better things to do with their time than running backups all day, so when they do, there is an actual need for it.

> We would like to block the access on the client, but we were not successful.
> If we use "passwordaccess generate" in dsm.sys, the password is prompted only at first access. If we use "passwordaccess prompt", the scheduler doesn't work (ANS2050E)...
>
> Any suggestions from the experts? Maybe it could be an improvement for IBM to implement in the future...
>
> __
> Leandro Mazur

have you considered cattle prods? Except for Lindsay's suggestion of locking everything down during the day (disable sessions at 7:00, enable sessions at 18:00) there is no way. You may want to think about your procedures, since they probably do this because raising a ticket takes too long, and they need to get on with their work.

--
Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622

--
__
Leandro Mazur
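Lindsay's daytime-lockdown idea, mentioned by Remco, could be wired up as a pair of administrative schedules. A hedged sketch; the schedule names are hypothetical, and as Leandro points out above, it doesn't fit an environment with legitimate client backups every few hours:

```
/* hypothetical admin schedules: block client sessions 07:00-18:00 */
define schedule LOCK_CLIENTS type=administrative cmd="disable sessions client" starttime=07:00 period=1 perunits=days active=yes

define schedule OPEN_CLIENTS type=administrative cmd="enable sessions client" starttime=18:00 period=1 perunits=days active=yes
```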
Re: Incl/Excl Problem
-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Patryk Bobak
Sent: 23 July 2009 09:37 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Incl/Excl Problem

Hi,

I have a problem with the Incl/Excl list on a client. I've searched the forum and Google but I can't find a good solution. I want to back up only one selected folder (with subfolders and files) from C:, for example C:\test\. I tried a lot of configurations with exclude, exclude.dir and includes, and it either backs up my whole C: or doesn't back up anything.

TSM Client: 6.1 Windows, TSM Server: 5.5.0 z/OS

Patryk.

--
__
Leandro Mazur
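Patryk's goal (back up only C:\test) is the classic case for a file-level exclude plus an include. A hedged dsm.opt sketch; remember the client processes the include/exclude list from the bottom up, first match wins:

```
* dsm.opt sketch: back up only C:\test and everything under it.
* Use a plain EXCLUDE here, not EXCLUDE.DIR -- an INCLUDE cannot
* override EXCLUDE.DIR, which is one way to end up with the
* "whole drive or nothing" behavior described above.
EXCLUDE "C:\...\*"
INCLUDE "C:\test\...\*"
```

The drive is still traversed (every file is examined and excluded), so this trims what is stored, not how long the scan takes.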
Re: export/import nodes
I think you can use the command "export node" with the parameter filedata=none. Example:

export node test filed=none tos=server1

Please correct me if I'm wrong.

On Thu, Nov 20, 2008 at 9:13 PM, Wanda Prather [EMAIL PROTECTED] wrote:

You can't copy just some DB entries. Other people have reported success with restoring the TSM DB to the new server, then deleting the unwanted filespaces/nodes on the new server.

But you need to review your tape inventory carefully. The fact that your storage pool is collocated doesn't mean that tapes necessarily have data for only one client. If there was ever a day when you ran short of scratch tapes, TSM would have started stacking multiple clients per tape. I'd recommend checking each tape before moving it. If there are multiple nodes on a tape, you can use MOVE DATA to force the data to other volumes in the storage pool; if scratch tapes are available, collocation will be honored on output. I think that would be less trouble than cleaning up the mess after a move.

On Thu, Nov 20, 2008 at 9:32 AM, Henrik Ahlgren [EMAIL PROTECTED] wrote:

Hi,

I have a simple question. Let's say that I'm splitting a large TSM server instance into a few smaller servers, using a shared library with a library manager/client architecture. Now, if there are pretty big client nodes whose data is on collocated tapes, can I move a node between servers without copying the data to new tapes with export? Since collocated tapes have only data from one node, it should be possible to copy just the DB entries and change the ownership of the volumes in the library manager. Is there a way to do it? Or any other good ideas on how to migrate nodes between TSM instances without using too much tape media/network bandwidth/time/effort?

--
Terveisin
Henrik Ahlgren
Technical Manager
Itella Information Oy
Tietäjäntie 2
FI-02130 Espoo
GSM +358-50-3866200

--
__
Leandro Mazur
Re: export/import nodes
I never used it, but I think it might work for you... If you use the command UPDATE LIBVOLUME, you can change the owner of the volumes:

UPDate LIBVolume library_name volume_name [STATus=PRIvate|SCRatch] [OWNer=server_name]

OWNer
Specifies which server owns a private volume in a shared library that is shared across a SAN. You can change the owner of a private volume in a shared library (SAN) when you issue the command from the library manager server. If you do not specify this parameter, the library manager server owns the private volume.

Note: OWNER is invalid for all scratch volumes, but is valid when changing a scratch volume to private.

On Fri, Nov 21, 2008 at 8:00 AM, Henrik Ahlgren [EMAIL PROTECTED] wrote:

Thanks for the comments so far.

Yeah, but wouldn't that lead to an empty node on the target server? Of course I want to migrate the data; I just don't like the idea of copying ten tapes' worth of data to a set of temporary tapes and then again to new storage pool tapes (I have not tried it yet, but I assume TSM doesn't just use the export tapes as stgpool tapes, right?), when I could in principle just move the DB entries and keep the data on the tapes it was on in the first place. I mean, with reusedelay and all, I would have the same data on 30 tapes instead of 10 for some time (not counting copy pool volumes). Multiply that by a couple of nodes, and soon you run out of library space. Well, at least it would have the side effect of reclamation. I guess the next best thing to do would be to use FILEs on disk.

Importing the whole DB to a new server is also a nice hack, but I would like to start from scratch, since I'm planning an upgrade from a pretty ancient TSM version with a DB that has lived for about eight years.

BTW, I haven't heard or noticed before that TSM might not adhere to collocation in case of running out of scratch volumes. Is that behavior version dependent? I believe everyone runs out of scratch volumes every now and then...

On Fri, Nov 21, 2008 at 06:51:23AM -0200, Leandro Mazur wrote:

> I think you can use the command "export node" with the parameter filedata=none. Example:
>
> export node test filed=none tos=server1
>
> Please correct me if I'm wrong.
>
> On Thu, Nov 20, 2008 at 9:13 PM, Wanda Prather [EMAIL PROTECTED] wrote:
>
> You can't copy just some DB entries. Other people have reported success with restoring the TSM DB to the new server, then deleting the unwanted filespaces/nodes on the new server.
>
> But you need to review your tape inventory carefully. The fact that your storage pool is collocated doesn't mean that tapes necessarily have data for only one client. If there was ever a day when you ran short of scratch tapes, TSM would have started stacking multiple clients per tape. I'd recommend checking each tape before moving it. If there are multiple nodes on a tape, you can use MOVE DATA to force the data to other volumes in the storage pool; if scratch tapes are available, collocation will be honored on output. I think that would be less trouble than cleaning up the mess after a move.

--
Terveisin
Henrik Ahlgren
Technical Manager
Itella Information Oy
Tietäjäntie 2
FI-02130 Espoo
GSM +358-50-3866200

--
__
Leandro Mazur
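Put together, the steps discussed in this thread might look like the sketch below (server, library, and volume names are hypothetical). Note that this only covers what was actually suggested: the node definition moves, the library manager reassigns the volume, but the DB entries for the file data do not follow, which is exactly the "empty node" gap Henrik raises, and as Wanda warns it is only safe once you have verified each tape really holds a single node's data:

```
/* on the source server: send only the node definition, no file data */
export node BIGNODE1 filedata=none toserver=TSM2

/* on the library manager: hand the node's volume to the new server */
update libvolume SHAREDLIB A00017 owner=TSM2
```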
Re: Expiration Query
I think the two numbers would only match if nothing at all was backed up after you ran the first command...

On Thu, Apr 17, 2008 at 5:59 AM, Jeff White [EMAIL PROTECTED] wrote:

Hi,

TSM v5.4.0. I have a query regarding expiration.

We recently implemented new retention policies to reduce the number of versions of Windows files we have in TSM storage. Before implementing, I ran this script to get the total number of files in storage at that time:

SELECT SUM(NUM_FILES), NODE_NAME FROM OCCUPANCY GROUP BY NODE_NAME ORDER BY 1 DESC

This told me I had 125,171,962 files in storage.

I run expiry every day, and capture and record from the activity log the number of backup objects deleted each day. Over the first 10 days, this totalled 6,652,218 files. I then re-ran the above script and the total number of files was 113,796,651, a difference of 11,375,311.

Was I wrong to expect the number of expired files to match the actual difference?

Thanks

Jeff

--
__
Leandro Mazur
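Beyond new backups landing between the two runs, OCCUPANCY counts archive objects as well as backup objects, and copies in copy storage pools are counted per pool, while the activity-log figure Jeff tracked covered backup objects only. A hedged sketch of queries to break the total down, using standard OCCUPANCY columns:

```
/* split the object count by data type (backup vs archive) */
SELECT TYPE, SUM(NUM_FILES) FROM OCCUPANCY GROUP BY TYPE

/* and by storage pool, to separate primary from copy pool counts */
SELECT STGPOOL_NAME, SUM(NUM_FILES) FROM OCCUPANCY GROUP BY STGPOOL_NAME
```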
Re: TSM Server and BA Client support for Ubuntu
Unfortunately, even with the configuration recommended in number 3, we had problems with the locale on Debian Sarge. Upgrading to Etch, it worked fine.

On Wed, Apr 16, 2008 at 6:13 AM, Peter Jones [EMAIL PROTECTED] wrote:

Hi,

If you Google "tsm ubuntu" you will see that many folks have gotten it to work using varying methods, such as alien. Here is an example:
http://lists.ibiblio.org/pipermail/unclug/2007-August/000404.html

We re-package the TSM client for our environment, including packaging it for our debian/ubuntu users. If you want to get the client working on Ubuntu there are a few things to note:

1. The dsmj script requires ksh (sudo apt-get install ksh)

2. dsmj needs Sun Java, not gij (check with java -version):
   1. check you have the multiverse enabled in /etc/apt/sources.list
   2. sudo apt-get install sun-java6-jre
   3. double-check java runs the Sun JRE, not gij: update-alternatives --display java; java -version

3. Ubuntu does not seem to have 8-bit ISO8859-1 locales installed by default. If you only have UTF8 filenames on the disk then a UTF8 locale (the locale command shows ??_??.UTF-8) will suffice when backing up. However, if you have any filenames in non-UTF8 encodings then you probably want to use en_US so you do not get any "ANS4042E Object name contains unrecognized characters" errors. If the en_US locale is not listed with "locale -a" you need to install it with:

   sudo locale-gen en_US

   then you can run backups with:

   LANG=en_US LC_ALL=en_US dsmc incr

   With this, characters in all encodings will get backed up: ISO8859, UTF-8, etc. If you try en_US and it isn't installed, it looks like it defaults back to 7-bit C/POSIX and all characters above 127 get skipped.

Aside from these little gotchas, and the fact that this is officially unsupported, the client functions fine on all recent Debian and Ubuntu machines.

Hope this helps,
Pete

--
Peter Jones
Senior Specialist (HFS)
Oxford University Computing Services

--
__
Leandro Mazur
Re: tdpsql in a cluster environment
I'll tell you what we have here:

- One .cmd per instance;
- The .cmd file is located on the drive associated with the instance. Same thing with the .opt.

Hope it helps,

On Wed, Mar 19, 2008 at 8:36 PM, Gill, Geoffrey L. [EMAIL PROTECTED] wrote:

> Perhaps others have already dealt with this. Is there a way to create a
> single .cmd file that can be used for one schedule with multiple SQL
> servers in a cluster environment, so it will run properly no matter what
> instance is on any one node, or may have failed over to a node it normally
> does not live on, and not report a failure? Or do I have to make separate
> schedules for every node, each with its own .cmd file, and make sure the
> .cmd files are on all cluster nodes?
>
> Geoff Gill
> TSM Administrator
> PeopleSoft Sr. Systems Administrator
> SAIC M/S-G1b
> (858)826-4062
> Email: [EMAIL PROTECTED]

--
__
Leandro Mazur
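As a sketch of the per-instance layout described above: keeping the .cmd and .opt on the instance's own clustered drive means they fail over together with the instance, so the same schedule works on whichever node currently owns the disk group. All paths, filenames, and options below are hypothetical examples, not a tested configuration; check the tdpsqlc options against your Data Protection for SQL version:

```
REM Hypothetical G:\tsm\sqlfull.cmd for the instance whose data lives on G:
REM Both this file and its dsm.opt ride along with the clustered G: drive.
G:
cd \tsm
tdpsqlc backup * full /tsmoptfile=G:\tsm\dsm.opt /logfile=G:\tsm\sqlfull.log
```

With one such .cmd per instance, each schedule's command path points at the instance's drive rather than at a node-local disk.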
Re: Upgrade of Linux Client 5.3-5.4 and filesystem ACLs
We had the same problem here with a Linux client; the solution was to include these two lines in dsm.opt:

SKIPACL yes
SKIPACLUPdatecheck yes

On Feb 5, 2008 8:49 AM, Rainer Schöpf [EMAIL PROTECTED] wrote:

> Hello TSMers!
>
> After upgrading the TSM Linux client on one of our fileservers (x86_64)
> from 5.3.5 to 5.4.1, the next incremental started to back up all files
> again. I traced this to a change in the handling of filesystem ACLs by the
> TSM client. The filesystem in question is xfs. For the time being, I went
> back to 5.3.5 (server is 5.4.1.0 on W2K3). I can live with that for a while.
>
> I traced the problem with strace and the client's -TRACEFLags=service
> option. This shows that the 5.4 client does not use the libacl interface,
> but accesses the extended attributes directly with the getxattr system
> call. And indeed, the TSM client trace shows this:
>
> 04.02.2008 12:48:39.468 : unxfilio.cpp(1571): fioCmpAttribs: Attribute comparison of two directories
>
>   Attribute       Old         New
>   --------------  ----------  ----------
>   File mode       16893       16893
>   uid             501         501
>   gid             100         100
>   ACL size        424         0
>   ACL checksum    3721320641  0
>   Xattr size      0           494
>   Xattr checksum  0           3013615671
>
> 04.02.2008 12:48:39.468 : fileio.cpp (4627): fioCmpAttribs(): old attrib's data from build (IBM TSM 5.4.1.2)
> 04.02.2008 12:48:39.468 : unxfilio.cpp(1825): --Attribs different: returning ATTRIBS_BACKUP
>
> It is interesting that in Technote swg21249081 libacl is mentioned as a
> potential problem to look for, but not getxattr. I made sure that libacl
> is present, but the strace output shows that it isn't used.
>
> This is very annoying. It means that the TSM client will probably force a
> backup of every file with an ACL when I upgrade from 5.3 to 5.4. Is this
> change documented somewhere? Is there a way to go back to either using
> libacl with the 5.4 client, or get the same ACL size/checksum from the
> attributes comparison?
>
> Rainer Schöpf
> ProteoSys AG
> Carl-Zeiss-Straße 51
> 55129 Mainz
>
> Dr. Rainer Schöpf, Leiter Software/Softwareentwicklung
> Mail: [EMAIL PROTECTED]
> Phone: +49-(0)6131-50192-41  Fax: +49-(0)6131-50192-11
> WWW: http://www.proteosys.com/
> ProteoSys AG - Carl-Zeiss-Str. 51 - D-55129 Mainz
> Amtsgericht Mainz HRB 7508 - USt.-Id Nr.: DE213940570
> Vorstand: Helmut Matthies (Vorsitzender), Prof. Dr. André Schrattenholz
> Vorsitzender des Aufsichtsrates: Dr. Werner Zöllner

--
__
Leandro Mazur
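The trace above shows why every ACL-bearing file gets re-sent: the 5.4 client stores the ACL fingerprint in different fields (Xattr size/checksum via getxattr) than 5.3 did (ACL size/checksum via libacl), so the stored and current attributes can never match. A toy Python model of that compare-and-decide step; the field names mirror the trace output, but the logic is our illustration, not IBM's actual implementation:

```python
# Toy model of the fioCmpAttribs decision shown in the trace above.
# Field names mirror the trace; the code is illustrative, not IBM's.
FIELDS = ("mode", "uid", "gid", "acl_size", "acl_checksum",
          "xattr_size", "xattr_checksum")

def needs_backup(old: dict, new: dict) -> bool:
    """Return True (i.e. ATTRIBS_BACKUP) if any stored attribute differs."""
    return any(old.get(f) != new.get(f) for f in FIELDS)

# Values taken from the trace: a 5.3-era record vs. a 5.4-era scan
# of the *same unchanged* directory.
old = {"mode": 16893, "uid": 501, "gid": 100,
       "acl_size": 424, "acl_checksum": 3721320641,
       "xattr_size": 0, "xattr_checksum": 0}
new = {"mode": 16893, "uid": 501, "gid": 100,
       "acl_size": 0, "acl_checksum": 0,
       "xattr_size": 494, "xattr_checksum": 3013615671}

print(needs_backup(old, new))  # True: the file is re-backed up despite no change
```

Since the mismatch is in how the ACL is recorded rather than in the data, every file carrying an ACL trips this check once after the upgrade.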
Re: Fwd: Upgrade client from 5.4.0.0 to 5.4.1.5 causes full backup
I must correct and add some information... if you use SKIPACL, there's no need for SKIPACLUPdatecheck (one overrides the other):

- With SKIPACL, no ACLs or extended attributes will be backed up or restored;
- With SKIPACLUPdatecheck, ACLs and extended attributes will be backed up and restored, but a file will not be backed up if only its ACL or extended attributes have been updated.

On Fri, Feb 15, 2008 at 6:25 PM, Leandro Mazur [EMAIL PROTECTED] wrote:

> We had the same problem here with a Linux client; the solution was to
> include these two lines in dsm.opt:
>
> SKIPACL yes
> SKIPACLUPdatecheck yes
>
> Hope it helps,
>
> On Fri, Feb 15, 2008 at 5:34 PM, Richard Sims [EMAIL PROTECTED] wrote:
>
> > ... See APAR 1292891 ...
> > Sorry: make that Technote 1292891. I need a 3-day weekend. R.
>
> --
> __
> Leandro Mazur
> Datacenter Supervisor

--
__
Leandro Mazur
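To make the distinction above concrete, a dsm.opt would carry one option or the other, not both (illustrative fragment; TSM options files use `*` for comments):

```
* Option A: skip ACLs and extended attributes entirely.
* They are neither backed up nor restored.
SKIPACL yes

* Option B: back up and restore ACLs and extended attributes, but do
* not re-send a file whose only change is an ACL/xattr update.
SKIPACLUPDATECHECK yes
```

Option B is the gentler workaround for the 5.3-to-5.4 upgrade problem, since it still preserves the ACL data in the backups.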
Re: Fw: Time-To-Transfer calculator
Very good... Congratulations!

Leandro Mazur

On Dec 21, 2007 9:12 PM, Nicholas Cassimatis [EMAIL PROTECTED] wrote:

> Dave,
>
> That's very slick - I bet that page will be bookmarked by thousands (ok,
> maybe hundreds) before the end of the week!
>
> Nick Cassimatis
>
> ----- Forwarded by Nicholas Cassimatis/Raleigh/IBM on 12/21/2007 06:11 PM -----
>
> ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 12/21/2007 01:21:59 PM:
>
> > Hey admins,
> >
> > First off, warm holiday wishes. If you're whittling down the hours until
> > vacation, here's something to kill some time that might be useful to you
> > at some point.
> >
> > It seems like I am always asked "How long is that going to take?",
> > especially when it comes to full backups or restores. Users don't seem
> > to realize what a multi-faceted question that can be... For restores, I
> > need to look up how much data is stored on the server, subtract out how
> > much has already been restored, calculate time based on its restore rate
> > (that isn't always wire speed, for many reasons), and then do the time
> > math to say, "Yes, your restore should be done by 3:30."
> >
> > I finally got the time to knock out a small JavaScript calculator to do
> > the heavy lifting for me. Maybe it will be useful for you.
> >
> > My Time-To-Transfer Calculator: http://www-dave.cs.uiuc.edu/timer.html
> >
> > It's self-contained HTML/JavaScript so you can copy it locally if you
> > like, or bookmark it. It works in Firefox and IE, I hope. If you find
> > bugs, let me know.
> >
> > Enjoy, and happy new year!
> > Dave

--
__
Leandro Mazur
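The arithmetic Dave describes (data left to move, divided by the observed transfer rate, then the time math) can be sketched in a few lines. This is our reconstruction of the idea in Python, not his actual JavaScript:

```python
# Sketch of the time-to-transfer estimate described above:
# remaining data / observed throughput, expressed as a duration.
from datetime import timedelta

def time_to_transfer(total_gb: float, done_gb: float,
                     rate_mb_per_s: float) -> timedelta:
    """Estimate remaining transfer time from data left and observed rate."""
    remaining_mb = (total_gb - done_gb) * 1024.0
    return timedelta(seconds=remaining_mb / rate_mb_per_s)

# E.g. a 500 GB restore with 380 GB already done, averaging 80 MB/s:
eta = time_to_transfer(500, 380, 80)
print(eta)  # 0:25:36
```

Adding the result to the current clock time gives the "done by 3:30" answer; the usual caveat is that the observed rate, not the wire speed, is what belongs in the denominator.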