Re: R: [ADSM-L] Can't start the server

2018-05-23 Thread Shawn Drew
I had this problem when testing a restore to a different server with a 
slightly different version of the GSKit installed as part of the client.

Check if this is the version you have installed:

http://www-01.ibm.com/support/docview.wss?uid=swg22007298

I was able to fix it by downgrading the GSKit that comes with the TSM 
client/baclient.
I am not sure if the appropriate fix levels are available. The article says 
AIX, but it affected me on RHEL 6/7.
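
If it helps, here is a quick way to check which GSKit level is actually installed 
(RHEL and AIX respectively; the package/fileset names can vary a bit by release, so 
treat this as a sketch):

rpm -qa | grep -i gsk     # RHEL: lists the gskcrypt/gskssl packages and versions
lslpp -l | grep -i gsk    # AIX: lists the GSKit filesets and levels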




On 5/23/2018 11:09 AM, Tommaso Bollini wrote:

Hi Eric, yes I can do db2start or log on with the instance user on DB2. My get 
dbm cfg output is below.
I've found that the stash file is written by the server: if I remove that 
file, Spectrum Protect recreates it.
That is my problem: the recreated file is zero bytes in size, so no stash -> no 
login, and finally the error SQL30082N for an invalid username/password.

Do you know how Spectrum Protect reads that password?

I have tried this: http://www-01.ibm.com/support/docview.wss?uid=swg21642264
but without success. It seems that the SRVCON_PW_PLUGIN parameter is mandatory.

db2 => get dbm cfg

Database Manager Configuration

  Node type = Enterprise Server Edition with local and remote clients

  Database manager configuration release level= 0x1000

  CPU speed (millisec/instruction) (CPUSPEED) = 3.267047e-07
  Communications bandwidth (MB/sec)  (COMM_BANDWIDTH) = 1.00e+02

  Max number of concurrently active databases (NUMDB) = 32
  Federated Database System Support   (FEDERATED) = NO
  Transaction processor monitor name(TP_MON_NAME) =

  Default charge-back account   (DFT_ACCOUNT_STR) =

  Java Development Kit installation path   (JDK_PATH) = 
/home/tsmadmin/sqllib/java/jdk64

  Diagnostic error capture level  (DIAGLEVEL) = 3
  Notify Level  (NOTIFYLEVEL) = 3
  Diagnostic data directory path   (DIAGPATH) = 
/home/tsmadmin/sqllib/db2dump/
  Current member resolved DIAGPATH= 
/home/tsmadmin/sqllib/db2dump/
  Alternate diagnostic data directory path (ALT_DIAGPATH) =
  Current member resolved ALT_DIAGPATH=
  Size of rotating db2diag & notify logs (MB)  (DIAGSIZE) = 1024

  Default database monitor switches
Buffer pool (DFT_MON_BUFPOOL) = ON
Lock   (DFT_MON_LOCK) = OFF
Sort   (DFT_MON_SORT) = OFF
Statement  (DFT_MON_STMT) = OFF
Table (DFT_MON_TABLE) = OFF
Timestamp (DFT_MON_TIMESTAMP) = ON
Unit of work(DFT_MON_UOW) = OFF
  Monitor health of instance and databases   (HEALTH_MON) = OFF

  SYSADM group name(SYSADM_GROUP) = TSMSRVRS
  SYSCTRL group name  (SYSCTRL_GROUP) =
  SYSMAINT group name(SYSMAINT_GROUP) =
  SYSMON group name(SYSMON_GROUP) =

  Client Userid-Password Plugin  (CLNT_PW_PLUGIN) =
  Client Kerberos Plugin(CLNT_KRB_PLUGIN) =
  Group Plugin (GROUP_PLUGIN) =
  GSS Plugin for Local Authorization(LOCAL_GSSPLUGIN) =
  Server Plugin Mode(SRV_PLUGIN_MODE) = UNFENCED
  Server List of GSS Plugins  (SRVCON_GSSPLUGIN_LIST) =
  Server Userid-Password Plugin(SRVCON_PW_PLUGIN) = dsmdb2pw
  Server Connection Authentication  (SRVCON_AUTH) = NOT_SPECIFIED
  Cluster manager =

  Database manager authentication(AUTHENTICATION) = SERVER
  Alternate authentication   (ALTERNATE_AUTH_ENC) = NOT_SPECIFIED
  Cataloging allowed without authority   (CATALOG_NOAUTH) = NO
  Trust all clients  (TRUST_ALLCLNTS) = YES
  Trusted client authentication  (TRUST_CLNTAUTH) = CLIENT
  Bypass federated authentication(FED_NOAUTH) = NO

  Default database path   (DFTDBPATH) = /home/tsmadmin

  Database monitor heap size (4KB)  (MON_HEAP_SZ) = AUTOMATIC(90)
  Java Virtual Machine heap size (4KB) (JAVA_HEAP_SZ) = 2048
  Audit buffer size (4KB)  (AUDIT_BUF_SZ) = 0
  Global instance memory (4KB)  (INSTANCE_MEMORY) = AUTOMATIC(22523032)
  Member instance memory (4KB)= GLOBAL
  Agent stack size   (AGENT_STACK_SZ) = 1024
  Sort heap threshold (4KB)  (SHEAPTHRES) = 0

  Directory cache support (DIR_CACHE) = YES

  Application support layer heap size (4KB)   (ASLHEAPSZ) = 15
  Max requester I/O block size (bytes) (RQRIOBLK) = 65535
  Workload impact by throttled utilities(UTIL_IMPACT_LIM) = 10

  Priority of agents   (AGENTPRI) = SYSTEM
  Agent pool size(NUM_POOLAGENTS) = AUTOMATIC(100)
  Initial number of 

Re: NDMP backup over LAN.

2017-12-20 Thread Shawn Drew
Backup over the LAN needs a NATIVE data format destination storage pool. Can you 
confirm the storage pool's data format?
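
You can check it from the admin CLI with something like this (NASPOOL is a 
placeholder for your destination pool):

query stgpool NASPOOL format=detailed

Look at the "Data Format" field: NATIVE (or NONBLOCK) pools can take the 
LAN/filer-to-server path, while NETAPPDUMP/CELERRADUMP/NDMPDUMP pools are for the 
filer-to-tape path.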

Thanks,
-Shawn

On Dec 20, 2017, 4:42 AM -0500, Bo Nielsen , wrote:
> I need to back up an Isilon storage system over the LAN, since there are approx. 
> 40 km between the TSM server and the storage system. I have created a dedicated domain 
> for this backup, with its own storage pool.
> I have defined a node and a datamover with type=NAS.
>
> When I start a backup:
> backup node dcc-isilon01 /ifs toc=no mode=full
>
> I get this error:
>
> 12/06/2017 08:52:18 ANR0984I Process 155 for BACKUP NAS (FULL) started in the
> BACKGROUND at 08:52:18. (SESSION: 78516, PROCESS: 155)
> 12/06/2017 08:52:18 ANR1063I Full backup of NAS node DCC-ISILON01, file
> system /ifs, started as process 155 by administrator
> BOANIE. (SESSION: 78516, PROCESS: 155)
> 12/06/2017 08:52:21 ANR4399E NAS file server 130.225.82.184 is reporting NDMP
> error number 23: A data connection could not be
> established. (SESSION: 78516, PROCESS: 155)
> 12/06/2017 08:52:21 ANR4794E The NAS file server 130.225.82.184 failed to
> open an NDMP data connection to the TSM tape server.
> Please verify that the file server is capable of
> outbound data connections. (SESSION: 78516, PROCESS:
> 155)
> 12/06/2017 08:52:21 ANR4728E Server connection to file server DCC-ISILON01
> failed. Please check the attributes of the file server
> specified during definition of the data transfer. (SESSION:
> 78516, PROCESS: 155)
> 12/06/2017 08:52:21 ANR1096E NAS Backup to TSM Storage Process 155 terminated
> - storage media inaccessible. (SESSION: 78516, PROCESS:
> 155)
> 12/06/2017 08:52:21 ANR0985I Process 155 for BACKUP NAS (FULL) running in the
> BACKGROUND completed with completion state FAILURE at
> 8:52:21. (SESSION: 78516, PROCESS: 155)
> 12/06/2017 08:52:21 ANR1893E Process 155 for BACKUP NAS (FULL) completed with
> a completion state of FAILURE. (SESSION: 78516, PROCESS:
> 155)
>
> Errors from the Filer:
>
> 2017-12-06 08:08:14 dcchome-2-1 isi_ndmp_d[21867]: ERRO:NDMP 
> data.c:358:ndmpdDataAbort Cannot abort Data Server in idle state
> 2017-12-06 08:08:14 dcchome-2-1 isi_ndmp_d[21867]: ERRO:NDMP 
> data.c:311:ndmpdDataStop Data Server not in halted state; state=IDLE
> 2017-12-06 10:35:29 dcchome-2-1 isi_ndmp_d[46349]: ALRT:NDMP 
> data.c:918:ndmpdDataConnect_v4 Unable to connect to addr:xxx.xx.xx.xx 
> port:60817 - Operation timed out
> 2017-12-06 10:35:29 dcchome-2-1 isi_ndmp_d[46349]: ALRT:NDMP 
> data.c:952:ndmpdDataConnect_v4 Could not connect to server
> 2017-12-06 10:35:29 dcchome-2-1 isi_ndmp_d[46349]: ALRT:NDMP 
> data.c:352:ndmpdDataAbort Received NDMP_DATA_ABORT
>
> All ports between Filer and TSM are open. Firewall on windows server hosting 
> TSM is disabled.
>
> Regards
>
> Bo Nielsen
>
>
> IT Service
>
>
>
> Technical University of Denmark
>
> IT Service
>
> Frederiksborgvej 399
>
> Building 109
>
> DK - 4000 Roskilde
>
> Denmark
>
> Mobil +45 2337 0271
>
> boa...@dtu.dk

Re: syslog

2017-08-24 Thread Shawn Drew
*rsyslog syntax


Re: syslog

2017-08-24 Thread Shawn Drew
Right, when trying to figure this out I tried all the local facilities but 
couldn't find the TSM messages. I gave up on the facilities when I found the 
rsync syntax.

On Aug 24, 2017, 3:48 AM -0400, Remco Post <r.p...@plcs.nl>, wrote:
> Hi Shawn,
>
> great! thanks! This is really useful. I guess only IBM knows what syslog 
> facility is being used…
>
>
> > On 24 Aug 2017, at 02:29, Shawn Drew <shaw...@gmail.com> wrote:
> >
> > I think this syntax is specific to rsyslog (which you probably have)
> > When you put it in the conf, make sure it is above the line for the
> > messages file
> >
> > if $programname == 'dsmserv' and not ($msg contains 'REPORTING_ADMIN')
> > and not ($msg contains 'ANR8592I') then /var/log/dsmserv.log
> > & @splunkserver.intranet
> > & ~
> >
> > That is 3 lines, in case it wraps.
> > Line 1) I am filtering out messages that are created by a specific
> > data-collector service account (connects every 5 minutes) and a specific
> > informational message. Make sure to set up log rotation for this log
> > Line 2) Duplicate the log msg previously described and also send it to
> > "splunkserver.intranet"
> > Line 3) Any log already filtered, do not include in any further logging.
> > This prevents TSM logs from also showing up in the messages file but
> > needs to be before the messages line in the conf for this to work.
> >
> >
> > This sends the message using the standard syslog protocol to
> > "splunkserver.intranet". That server receives the message using the its
> > own standard rsyslog installation (needs to be configured to receive
> > syslog) Then splunk will monitor the messages file and load it into the
> > index. You can then use splunk filters if you want to move it to a
> > separate index or whatever. I have all the TSM/DataDomain stuff going
> > into an isolated index. I think splunk can be configured to receive
> > syslog messages directly but we don't do it that way (I don't run the
> > splunk server)
> >
> >
> >
> > On 8/23/2017 3:56 PM, Remco Post wrote:
> > > Tell me more, please. I'm quite sure that there is Splunk in my future as 
> > > well, can you share your syslog config?
> > >
>
> --
>
> Met vriendelijke groeten/Kind Regards,
>
> Remco Post
> r.p...@plcs.nl
> +31 6 248 21 622


Re: syslog

2017-08-23 Thread Shawn Drew

I think this syntax is specific to rsyslog (which you probably have)
When you put it in the conf, make sure it is above the line for the
messages file

if $programname == 'dsmserv' and not ($msg contains 'REPORTING_ADMIN')
and not ($msg contains 'ANR8592I') then /var/log/dsmserv.log
& @splunkserver.intranet
& ~

That is 3 lines, in case it wraps.
Line 1) I am filtering out messages that are created by a specific
data-collector service account (connects every 5 minutes) and a specific
informational message.  Make sure to set up log rotation for this log (a sample
logrotate stanza is below).
Line 2) Duplicate the log msg previously described and also send it to
"splunkserver.intranet"
Line 3) Any log already filtered, do not include in any further logging.
This prevents TSM logs from also showing up in the messages file but
needs to be before the messages line in the conf for this to work.
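
If anyone wants a starting point for the rotation piece, something like this has
worked for me (a sketch only; the path and retention are assumptions, adjust to
taste):

cat > /etc/logrotate.d/dsmserv <<'EOF'
/var/log/dsmserv.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
    copytruncate
}
EOF

copytruncate avoids having to signal rsyslog on rotation, at the cost of possibly
losing a few lines during the truncate.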


This sends the message using the standard syslog protocol to
"splunkserver.intranet".  That server receives the message using its
own standard rsyslog installation (which needs to be configured to receive
syslog).  Then splunk will monitor the messages file and load it into the
index.  You can then use splunk filters if you want to move it to a
separate index or whatever. I have all the TSM/DataDomain stuff going
into an isolated index.  I think splunk can be configured to receive
syslog messages directly but we don't do it that way (I don't run the
splunk server)



On 8/23/2017 3:56 PM, Remco Post wrote:

Tell me more, please. I'm quite sure that there is Splunk in my future as well, 
can you share your syslog config?



Re: syslog

2017-08-23 Thread Shawn Drew
Yes, they ninja-added this in 7.1.4.
I enabled it and now I can collect all TSM actlogs and use syslog to forward to 
splunk without a splunk agent and without the weird formatting of the 
filetextexit. I wish they had this on all the architectures.
You can also use the syslog utilities to do filtering if you want.
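
If memory serves, turning it on is just the usual event-logging commands, something
like the following (I'm going from memory here, so verify with "help enable events"
on a 7.1.4 or later server before relying on it):

begin eventlogging syslog
enable events syslog all

After that the server messages show up via the local syslog daemon, and the rsyslog
filtering described in the other messages in this thread applies.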

On Aug 23, 2017, 11:08 AM -0400, Remco Post , wrote:
> Hi all,
>
> I read something nice in the TSM server manual: TSM now supports logging to 
> syslog on Linux (and, it no longer supports SNMP). Now, my customer has a 
> very specific policy: certain events must be logged to an off-host logging 
> facility. The current solution is to write those events to a filetextexit and 
> copy these events once a minute to a remote host. Brrr. So, I was wondering: 
> has anyone done anything with TSM and syslog and forwarded the syslog events 
> to another host for further processing?
>
> --
>
> Met vriendelijke groeten/Kind Regards,
>
> Remco Post
> r.p...@plcs.nl
> +31 6 248 21 622


Re: [EXTERNAL] Re: [ADSM-L] sp 8.1.2

2017-08-09 Thread Shawn Drew

The web client is the mini-webserver that runs a Java B/A client and is
installed optionally as a part of the client software.
As far as I know, this is the only way to get a GUI to perform NDMP
restores and is the only way to access the NDMP DAR functionality.
"restore node" commands on the server do not perform DAR restores, so
restoring just one file from an NDMP backup will result in a complete
image scan.
Hopefully I'm wrong and someone will correct me.
-Shawn


On 8/9/2017 1:24 PM, Zoltan Forray wrote:

Thanks for clarifying that.  I didn't know the "web client" on the server
was still available?  I haven't used that since we got TSMManager to manage
the TSM/SP servers.

With all the required security enhancements, it looks like we won't be going to
8.1.2 for a while.  Still need to get to 8.1.1.



Re: [EXTERNAL] Re: [ADSM-L] sp 8.1.2

2017-08-09 Thread Shawn Drew
www.ibm.com/support/knowledgecenter/SSEQVQ_8.1.2/client/r_new_for_version.html

"You can no longer use the web client to connect to IBM Spectrum Protect V8.1.2 
or later"

On Aug 9, 2017, 1:04 PM -0400, Zoltan Forray <zfor...@vcu.edu>, wrote:
> Say what? If there is no longer a web-client interface, that will totally
> destroy our current process of having 1-server with 40-TSM clients/nodes
> and all access for restores is via the web-interface. Can you point me to
> the document you are referring to?
>
> On Wed, Aug 9, 2017 at 12:53 PM, Shawn Drew <shaw...@gmail.com> wrote:
>
> > Geez, just read the client section. The web client is deprecated, which
> > means ndmp is a command-line-only thing now?
> >
> > On Aug 9, 2017, 12:34 PM -0400, Shawn DREW <shawn.d...@us.bnpparibas.com>,
> > wrote:
> > > Probably just referring to the documentation:
> > >
> > > https://www.ibm.com/support/knowledgecenter/en/SSEQVQ_8.1.
> > 2/srv.common/r_wn_tsmserver.html
> > >
> > >
> > > -Original Message-
> > > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
> > Of Zoltan Forray
> > > Sent: Wednesday, August 09, 2017 11:51 AM
> > > To: ADSM-L@VM.MARIST.EDU
> > > Subject: [EXTERNAL] Re: [ADSM-L] sp 8.1.2
> > >
> > > I am curious how you got 8.1.2? I just searched and it isn't on the FTP
> > site or Passport? Are you a beta tester?
> > >
> >
>
>
>
> --
> *Zoltan Forray*
> Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
> Xymon Monitor Administrator
> VMware Administrator
> Virginia Commonwealth University
> UCC/Office of Technology Services
> www.ucc.vcu.edu
> zfor...@vcu.edu - 804-828-4807
> Don't be a phishing victim - VCU and other reputable organizations will
> never use email to request that you reply with your password, social
> security number or confidential personal information. For more details
> visit http://infosecurity.vcu.edu/phishing.html


Re: [EXTERNAL] Re: [ADSM-L] sp 8.1.2

2017-08-09 Thread Shawn Drew
Geez, just read the client section. The web client is deprecated, which means 
ndmp is a command-line-only thing now?

On Aug 9, 2017, 12:34 PM -0400, Shawn DREW <shawn.d...@us.bnpparibas.com>, 
wrote:
> Probably just referring to the documentation:
>
> https://www.ibm.com/support/knowledgecenter/en/SSEQVQ_8.1.2/srv.common/r_wn_tsmserver.html
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
> Zoltan Forray
> Sent: Wednesday, August 09, 2017 11:51 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [EXTERNAL] Re: [ADSM-L] sp 8.1.2
>
> I am curious how you got 8.1.2? I just searched and it isn't on the FTP site 
> or Passport? Are you a beta tester?
>


Re: [EXTERNAL] Re: [ADSM-L] sp 8.1.2

2017-08-09 Thread Shawn DREW
Probably just referring to the documentation:

https://www.ibm.com/support/knowledgecenter/en/SSEQVQ_8.1.2/srv.common/r_wn_tsmserver.html


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Wednesday, August 09, 2017 11:51 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [EXTERNAL] Re: [ADSM-L] sp 8.1.2

I am curious how you got 8.1.2?  I just searched and it isn't on the FTP site 
or Passport?  Are you a beta tester?



Re: Open File Support and CIFS/DFS shares/mounts

2017-06-02 Thread Shawn Drew
Without a snapshot API (like netapp) the last resort is NDMP, unfortunately.

On Jun 2, 2017, 8:50 AM -0400, Zoltan Forray , wrote:
> We have 3-Windows 2012R2 servers (current client is 7.1.3) that are used to
> backup numerous (>50-each) DFS (were CIFS) mounts, each having their own
> TSM NODE and schedule that includes Objects that point to the filesystem
> (i.e. \\rams.adp.vcu.edu\SOM\TSM\GA\*).
>
> This is done so each mount/group/area/directory/department can administer
> their own backups via the webclient and unique ports for each.
>
> One problem we have always struggled with is Open Files. Invariably, we
> constantly get errors like:
>
> ANE4987E (Session: 278819, Node: ISILON-DWS) Error processing '\\
> rams.adp.vcu.edu\dws\WilderSchool\Shared\Administrative\GVPA_FIN\GPA
> Department\Center for Public Policy\CPP Fiscal Officer\Com
> Logs\FY17\~$CURA_Com Log_FY17_2017.04.15.xlsx': *the object is in use by
> another process*
>
> Since snapshotproviderfs doesn't work, how else can we handle open files
> and getting backups, albeit fuzzy, of these files?
>
> --
> *Zoltan Forray*
> Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
> Xymon Monitor Administrator
> VMware Administrator
> Virginia Commonwealth University
> UCC/Office of Technology Services
> www.ucc.vcu.edu
> zfor...@vcu.edu - 804-828-4807
> Don't be a phishing victim - VCU and other reputable organizations will
> never use email to request that you reply with your password, social
> security number or confidential personal information. For more details
> visit http://infosecurity.vcu.edu/phishing.html


Re: TDP for SQL and UAC

2017-06-02 Thread Shawn Drew
Are you executing tdp with "runas" in the script or is control-m agent service 
logged on as the desired domain account?

Thanks,
-Shawn

On Jun 1, 2017, 8:05 PM -0400, Harris, Steven 
, wrote:
> Hi All
>
> I've got a bit of a show-stopper here that I could use some help with.
>
> Previously, SQL backups have been mostly handled by dump to disk and backup 
> with BA Client. Database sizes have grown over time and now we cannot get 
> through the whole dump/backup cycle in a reasonable window, so we have been 
> pushing to get TDP used as the standard backup mechanism for SQL. There are a 
> couple of issues with this ; I'll be submitting an RFE for one of them 
> shortly however the big one is to do with security.
>
> The TDP for SQL backup is to be run using the control-m scheduling tool as 
> the backup has to run at a particular point in the processing cycle (also for 
> restores as we can't afford to have a large restore fail because someone's 
> RDP session timed out) . It must use a domain account, nothing else is 
> permissible. When doing this we get a windows UAC pop-up. Again we are not 
> permitted to turn off UAC.. don't worry about how reasonable that is, IT 
> Security and Auditors don't cope well with arguments about 'reasonable' or 
> 'low risk' it's all black and white.
>
> So, has anyone else solved this issue: run a TDP backup from a domain account 
> and somehow bypass the UAC prompt without turning it off in a blanket fashion?
>
> Any and all suggestions gratefully received.
>
> Steve
>
> Steven Harris
> TSM Admin/Consultant
> Canberra Australia
>
> This message and any attachment is confidential and may be privileged or 
> otherwise protected from disclosure. You should immediately delete the 
> message if you are not the intended recipient. If you have received this 
> email by mistake please delete it from your system; you should not copy the 
> message or disclose its content to anyone.
>
> This electronic communication may contain general financial product advice 
> but should not be relied upon or construed as a recommendation of any 
> financial product. The information has been prepared without taking into 
> account your objectives, financial situation or needs. You should consider 
> the Product Disclosure Statement relating to the financial product and 
> consult your financial adviser before making a decision about whether to 
> acquire, hold or dispose of a financial product.
>
> For further details on the financial product please go to http://www.bt.com.au
>
> Past performance is not a reliable indicator of future performance.


Re: Restore NDMP backups from Netapp 7 mode to CIFS shares ?

2017-03-22 Thread Shawn Drew
To be clear, you are saying you can restore NDMP data without a file server 
providing NDMP services as long as the data was stored in a native pool?

-Shawn


Re: ISP 81 Discontinued functions

2016-12-12 Thread Shawn Drew
This just means it is not supported in 8.1.
Microsoft isn't releasing anything new on these OSes either, and TSM 7.1 (with 
XP and 2008 support) will still be around for quite a while.
I remember carrying around Windows 2003 for a long, long time on TSM 6.3 (which is 
still supported, by the way).

-Shawn

On Dec 12, 2016, 11:15 AM -0500, Rainer Tammer , 
wrote:
> Hello,
> The new support matrix is a BIG joke
> Dropping support for Windows 7 / Windows 2008 R2!
>
> Bye
> Rainer Tammer
>
> On 12.12.2016 16:18, Martin Janosik wrote:
> > I'm also a bit nervous and curious at the same time about discontinued
> > functions, namely:
> > Online system state restores - You can no longer restore the system state
> > on a system that is online. Instead, use the Automated System Recovery
> > (ASR) based recovery method to restore the system state in offline Windows
> > Preinstallation Environment (PE) mode.
> > and
> > The following operating systems are no longer supported by the
> > backup-archive client:
> > Linux on Power Systems™ (big endian). You can still use the IBM Spectrum
> > Protect API on Linux on Power Systems (big endian).
> >
> > What is the idea behind "to be competitive with the x86 platform and get
> > compatibility certification for SAP HANA with Linux on Power platform (Big
> > Endian)" (ref.
> > https://blogs.saphana.com/2015/08/21/announcing-general-availability-of-sap-hana-on-ibm-power-systems/
> > ) and then 1.5 year later drop support of BAclient for OS"?
> >
> > We deployed 10+ big SAP HANA on Power8 instances last year, and now we will
> > be getting "not supported" for new releases?
> >
> > M. Janosik
> >
> > "ADSM: Dist Stor Manager"  wrote on 12/09/2016
> > 02:45:34 PM:
> >
> > > From: Chavdar Cholev  > > To: ADSM-L@VM.MARIST.EDU
> > > Date: 12/09/2016 02:47 PM
> > > Subject: [ADSM-L] ISP 81 Discontinued functions
> > > Sent by: "ADSM: Dist Stor Manager"  > >
> > > Does some one check discontinued functions in new version...
> > > especially part of no VM backup as standard function in BA client
> > > It is not good at all. TDP for Virtual Environments for Hyper-V creates
> > diff
> > > hdds (.avhdx)
> > > and my customers will not be happy with this, because you have to merge
> > > these files,
> > > when you need to expand a .vhdx disk for example 
> > >
> > > :(
> > > Regards
> > > Cahvdar
> > >
> >


Re: *EXTERNAL* TDP Oracle best practice

2016-11-11 Thread Shawn Drew
We have regular 1 Gb connections. They only get about 40 MB/s with one channel 
and are able to max it out with 4-6 channels. We have maybe 20-30 DBs in the 1-3 TB 
range, so basically just the right numbers to want to avoid storage agents.

On Nov 11, 2016, 9:49 AM -0500, Rhodes, Richard L. 
, wrote:
> What's the connection between the Oracle server and the TSM server? Whether you use 
> multiple channels without multiplexing, or one channel with multiplexing (or some 
> other combination), it may not speed up the backup/restore, depending on the 
> weakest link (source disks, LAN, disk pool target disks, tape). Our DBAs 
> performed multiple tests to figure out the best speed. Once you hit a 
> throughput limit, I don't think the number of channels or multiplexing would 
> matter.
>
> I agree, if you stage to a disk and migrate, then I know of no way to control 
> what tapes multiple TDP files would be placed on. Since we TDPO direct to 
> tape via storage agents, we can always give a restore the same channels the 
> backup used. I don't know how the number of backup channels would effect a 
> restore if all the files were on the same tape. This is confusing to me.
>
>
> Rick
>
>


TDP Oracle best practice

2016-11-11 Thread Shawn Drew
I’m looking for best-practice advice for TDP for Oracle with regard to the number 
of channels.

If you have a classic TSM environment with a disk pool that migrates to tape, 
it would seem that using one channel for TDPO backups makes the most sense, to 
prevent the case where the pieces of a multi-channel backup get migrated to the same 
tape.  Our DBAs are quite opposed to using only one channel (particularly for large 
multi-TB databases) but I can’t find any official best-practice statement from IBM 
that I can use as a response. 
Unfortunately IBM support worded it “you might want to only use one channel”, 
which didn’t sound strong enough for our DBAs.
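
For reference, the multi-channel setup the DBAs want looks roughly like this (a
sketch; the tdpo.opt path and the channel count are just examples based on a default
Linux x64 install):

rman target / <<'EOF'
run {
  allocate channel t1 device type 'sbt_tape'
    parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
  allocate channel t2 device type 'sbt_tape'
    parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
  backup database;
}
EOF

Each channel writes its own backup pieces, which is exactly what can end up packed
onto the same tape after migration from the disk pool.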

There is no “reverse-collocation” to ensure the data ends up on different 
tapes, as far as I know, so what is the best practice for backing up Oracle 
data in an environment with data migrations?

-Thanks
Shawn


Re: Client issues after upgrade

2016-11-03 Thread Shawn DREW
I have run into this problem when the install was done using the silent install 
automated through a script instead of the GUI install.
When I redid the install through the GUI, it offered to install those MS 
libraries and that fixed everything.

You can also perform silent installs of the vcredist*.exe binaries as well if you 
need to automate it.
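
For the record, the silent switches look roughly like this (they vary by which
Visual C++ runtime the client level ships, so treat these as examples to verify):

vcredist_x86.exe /q /norestart
vcredist_x64.exe /install /quiet /norestart

The older 2008/2010-era packages take /q; the 2012-and-later ones take
/install /quiet /norestart.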
-Shawn

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Robert 
Talda
Sent: Thursday, November 03, 2016 3:52 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Client issues after upgrade

Zoltan:
  Just a hunch, but those files sound like the Microsoft redistributable 
libraries that are required for the client.  So the upgrade consists of more 
than just running msiexec; there are a couple of vcredist_x86.exe calls to make 
as well.  It just feels like those calls were missed.


FWIW,
Robert Talda
EZ-Backup Systems Engineer
Cornell University
+1 607-255-8280
r...@cornell.edu




Re: NDMP backups Restore

2016-09-06 Thread Shawn DREW
This is a long way from TSM allowing you to restore a backup from a "NetApp 
Dump" storage pool to a non-Netapp appliance.
I am skeptical but haven't tried it myself.



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Rhodes, Richard L.
Sent: Tuesday, September 06, 2016 11:46 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] NDMP backups Restore

I found this comment from NetApp:

>Dump Format:
>Before we begin, it is important to realize that NetApp dump 
>(whether initiated using NDMP or using the filer console) adheres to 
>the ufsdump (Solaris dump) format. 

So it sounds like the NetApp data format is the Solaris dump format.


Rick



Basic select help

2016-08-22 Thread Shawn Drew
I am trying to get a list of nodes that have no filespaces and I am getting 
stuck on what seems to be a very basic select statement.  Can someone tell me 
where I am going wrong?
The way I understand it, the select should at least show the node I just 
created with no filespaces.


tsm: TSM1500>reg n shawntest  do=admin userid=none
ANR2060I Node SHAWNTEST registered in policy domain ADMIN.
 
tsm: TSM1500>select node_name from nodes where node_name NOT IN (select 
distinct(node_name) from filespaces)
ANR2034E SELECT: No match found using this criteria.
ANS8001I Return code 11.
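
For what it's worth, one alternative formulation to try is a correlated NOT EXISTS,
which sidesteps some of the quirks of NOT IN against a subselect (not guaranteed to
be the fix, just another angle):

select n.node_name from nodes n where not exists (select 1 from filespaces f where f.node_name = n.node_name)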


Re: TSM 7.1.6 binary truncated?

2016-07-07 Thread Shawn DREW
This has always been a roll of the dice for me.  I always got the impression 
it was related to the ongoing war between IBM/IE/Firefox.
Sometimes one browser works and the other doesn't.
-Shawn



Re: Restore TSM data with no TSM database.

2016-06-14 Thread Shawn DREW
Just run regular dbbackups and "prepare"'s to the data domain and you'll be 
covered.
-Shawn



Re: MSSQL 2012 SQL permissions

2016-06-14 Thread Shawn DREW
Sysadmin role is required according to the documentation:

/SQLAUTHentication=INTegrated | SQLuserid
This parameter specifies the authorization mode used when logging on to the 
SQL Server. The integrated value specifies Windows authentication. The user id 
you use to log on to Windows is the same id you will use to log on to the SQL 
Server. This is the default value.

Use the sqluserid value to specify SQL Server user id authorization. The 
user id specified by the /sqluserid parameter is the id you use to log on to 
the SQL Server. Any SQL Server user id must have the SQL Server SYSADMIN fixed 
server role.

http://www.ibm.com/support/knowledgecenter/SSTFZR_7.1.4/db.sql/dps_ref_opt_backupoptional.html



Re: FW: v7 upgrade woes

2016-06-13 Thread Shawn DREW
We just went through a complete upgrade. 6.3.5 -> 7.1.5

23 instances on 11 servers: 1 AIX, 2 Windows, the rest Linux.
What worked for us was installing only the license package from the Passport 
7.1.3 package, then upgrading directly to 7.1.5.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Rhodes, Richard L.
Sent: Friday, June 10, 2016 3:10 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] FW: v7 upgrade woes

We just brought up new pSeries chassis to rollover our existing pSeries 
chassis.  The new lpars will be AIX v7.  The plan is to get up to TSM v7, then 
swing the storage to new lpars on the new chassis.  Of course, testing all this 
swing is yet to occur.  We’ve got stuck just on the TSM v6-v7 upgrade.

Rick



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Nick 
Marouf
Sent: Friday, June 10, 2016 2:37 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: FW: v7 upgrade woes

Fyi,
 When I upgraded on AIX I went from 6.3.4.0 to 7.1.3.0 and then to
7.1.5.100 with no issues.

Date (GMT)   Version  Pre-release Driver
  
2016/06/01 18:29:52  7.1.3.0
2016/06/07 17:11:31  7.1.5.0


AIX 6100-09

next two upgrade are on AIX 7100-04

On Fri, Jun 10, 2016 at 12:28 PM, Zoltan Forray  wrote:

> Since I need to upgrade 6-TSM servers (2-of them at the same time since
> they are Library Managers) to 7.1.5.200, I am closely following this
> thread/discussion about upgrade issues.
>
> Have all these woes only been on AIX system?
>
> I tested an old decommissioned server (all user data had been exported to
> another TSM server) and upgraded from 6.3.5.100 to 7.1.3.00 base (Passport
> downloaded with license keys) and then to 7.1.5.100 with no issues.  Of
> course, mine are all RedHat 6 Linux.
>
> On Fri, Jun 10, 2016 at 10:59 AM, Rhodes, Richard L. <
> rrho...@firstenergycorp.com> wrote:
>
> > We've been testing . . . we tried walking all the upgrades:
> >
> > - brought up a 6.3.5 tsm db
> > - tried upgrade to 7.1.4, it failed/hung (yup, responds like all our
> tests)
> > - restored snapshots back to the 6.3.5 setup
> > - tried upgrade v7.1- WORKED, TSM comes up fine
> > - tried upgrade to v7.1.1.1 - upg utility said there was nothing to
> > upgrade!
> > - tried upgrade to v7.3 - Upgrade worked, but TSM hangs on startup
> >
> > The v7.3 upgrade installed just fine and the upgrade utility ended.  But
> > when we try to start TSM it just sits there hung.  While TSM is sitting
> > there "hung" trying to come up, we do a "ps -ef | grep db2" and found
> that
> > DB2 was NOT UP. (should have checked that sooner!)
> >
> >   ps -ef | egrep -i "db2|dsm"
> > tsmuser 12910806 14221316   0 10:26:23  pts/4  0:00
> > /opt/tivoli/tsm/server/bin/dsmserv -i /tsmdata/tsmsap1/config
> > tsmuser 17432632 12910806   0 10:26:24  pts/4  0:00 db2set -i tsmuser
> > DB2_PMODEL_SETTINGS
> >root 199885461   0 09:55:59  -  0:00
> > /opt/tivoli/tsm/db2/bin/db2fmcd
> >
> >
> > You can leave the TSM startup "hung" like this for hours and it never
> goes
> > any further.
> >
> > We keep digging.  Support sent us to the DB2 team, hopefully they can
> help.
> >
> > Rick
> >
> >
> >
> >
> >
> > From: Rhodes, Richard L.
> > Sent: Tuesday, June 07, 2016 3:32 PM
> > To: 'ADSM: Dist Stor Manager' 
> > Cc: Ake, Elizabeth K. 
> > Subject: v7 upgrade woes
> >
> > This is half plea for help and half rant (grr).
> >
> > For the past two months we've been trying to test a TSM v6.3.5 to
> > v7.1.4/v7.1.5 upgrade . . . . and have completely failed!
> >
> > TSM v6.3.5
> > AIX 6100-09
> > upgrade to TSM v7.1.4 and/or v7.1.5
> >
> > When we run the upgrade it runs to this status message:
> > Installing: [(99%) com.tivoli.dsm.server.db2.DB2PostInstall ]
> > Then it sits, doing nothing.  AIX sees no db2 activity, no java activity,
> > no nothing.  It sits for hours!  We've let it sit once for 13 hours
> > (overnight) and the above message was still there.  Eventually we kill
> it.
> >
> > We've tried:
> >   two separate AIX systems
> >   two separate TSM's (one 400gb db and one 100gb db)
> >   probably tried 8-10 upgrades, all failed
> >
> > Support had us try a DB2 standalone upgrade and it also never finished.
> >
> > We've worked with support and got nowhere.  They have no idea what is
> > happening, let alone any idea of how to figure out what is happening.
> They
> > did find one issue - we have to install a base XLC runtime v13.1.0.0
> before
> > the v7 upgrade.
> >
> > Any thoughts are welcome.
> >
> > Rick
> >
> >
> >
> >
> >

Re: Restore Individual File from NDMP COPYPOOL

2016-05-12 Thread Shawn DREW
You can figure out which tapes the backup occupies by looking at the backups 
and contents  tables.  Since NDMP backups are single objects it isn't as 
unpleasant as looking through the backups table for regular backup files. 

Just a quick overview:
- Get the object IDs for the backup image(s) (get the filespace ID from "q fi"):
select NODE_NAME, FILESPACE_NAME, FILESPACE_ID, cast(BACKUP_DATE as date) as "Date", OBJECT_ID
  from backups
  where node_name='NDMP_NODENAME' and FILESPACE_ID=16
    and BACKUP_DATE >= '2011-06-20' and BACKUP_DATE <= '2012-02-15'
  order by "Date"
- Get the volumes that hold those objects:
select volume_name, object_id from contents where OBJECT_ID=NUMBERs_FROM_PREV_STEP

This will give you the volume names for both pools and TOC's if you have them. 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Plair, 
Ricky
Sent: Thursday, May 12, 2016 11:18 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Restore Individual File from NDMP COPYPOOL

The copy pool is pushed offsite every day.  And that is one of the problems: 
I don’t know what the tape number is in the copy pool.

In the past, for different kinds of restores, if the primary pool media is not 
available it usually tells you a different tape number from the copy pool and 
you can recall it.


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Thursday, May 12, 2016 10:57 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Restore Individual File from NDMP COPYPOOL

Dumb question but is the copypool tape marked as offsite/offline (I know all of 
my offsite tapes are marked as such)

On Thu, May 12, 2016 at 10:52 AM, Plair, Ricky 
wrote:

> HELP PLEASE!
>
> I have an EMC Isilon that I'm backing up with TSM 7.1.10.
>
> Using TSM I can restore a file back to the Isilon, with no problem.
>
> My problem is, if I take the tape media that TSM used during the 
> successful restore to the Isilon from the primary pool and change it to 
> unavailable, TSM should call on the tape media from the copy pool, right?
>
> The restore fails with media not available.
>
> We are backing up the Isilon with data movers straight to tape with 
> schedules on the TSM server. We are not using EMC Backup Accelerator.
>
> I appreciate any help I can get.
>
>
>
>
> Ricky M. Plair
> Storage Engineer
> HealthPlan Services
> Office: 813 289 1000 Ext 2273
> Mobile: 813 357 9673
>
>
>
>



--
*Zoltan Forray*
TSM Software & Hardware Administrator
Xymon Monitor Administrator
VMware Administrator (in training)
Virginia Commonwealth University
UCC/Office of Technology Services
www.ucc.vcu.edu
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will never 
use email to request that you reply with your password, social security number 
or confidential personal information. For more details visit 
http://infosecurity.vcu.edu/phishing.html


Re: Rolling a TSM instance to a new server/lpar

2016-05-09 Thread Shawn DREW
I go through this exercise when performing DR tests and we swing over the 
storage.  Here is my cheat sheet:

- create instance owner user 

db2icrt -a server -s ese -u tsminstance tsminstance
su - tsminstance

#copy/validate:
~/sqllib/usercshrc
~/sqllib/userprofile
~/.bash_profile #or whatever shell profile to source the db2 env (. 
~/sqllib/db2profile)

db2set -i tsminstance DB2CODEPAGE=819
db2 update dbm cfg using dftdbpath /instanceDIR
db2 catalog database tsmdb1
db2 list database directory

# copy/validate db backup stanza
vi /opt/tivoli/tsm/client/api/bin64/dsm.sys




Re: TSM RHEL7 Linux server and EMC Isilon

2016-03-28 Thread Shawn DREW
We use NFS for large (>500 TB) device classes (on Data Domain, not Isilon, 
though).
A FILE device class is the way to go.  Just review the NFS performance settings 
and get the best-practice material from EMC for your implementation (e.g. trunked 
10 Gb connections might have different settings than 1 Gb).
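
For what it's worth, the definitions end up looking something like this (the names,
sizes, and mount point are placeholders):

define devclass ISILONFILE devtype=file mountlimit=64 maxcapacity=50G directory=/mnt/isilon/tsmpool
define stgpool ISILONPOOL ISILONFILE maxscratch=5000

Scratch FILE volumes then get created in that directory as needed; just make sure the
NFS mount is in fstab so it is there before the server starts.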

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Monday, March 28, 2016 11:02 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM RHEL7 Linux server and EMC Isilon

We are working on a project to beef up offsite backups using an EMC Isilon
box attached to a RHEL TSM server.

Anybody doing this kind of configuration?  I have concerns since it will be
connecting >300TB via NFS mount to use for the TSM storage. I am assuming
it will be best to define the stgpool as a FILE format?


--
*Zoltan Forray*
TSM Software & Hardware Administrator
Xymon Monitor Administrator
VMware Administrator (in training)
Virginia Commonwealth University
UCC/Office of Technology Services
www.ucc.vcu.edu
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html



Re: TS3500 library changes to Max Cartridges and Max VIO slots

2015-10-12 Thread Shawn DREW
They added "refreshstate=yes" to the audit library command at a certain version 
so that you will not need to restart/redefine anything when changing slot 
counts on a library.
Do a "help audit libr" to check if your version has that option.


-Original Message-
From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU] 
Sent: Monday, October 12, 2015 8:01 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TS3500 library changes to Max Cartridges and Max VIO slots

Folks,

Our TS3500 is configured as 2-logical libraries with n-slots configured for 
each.  We just reached the maximum number of cartridges I can load into one of 
these libraries due to the Max Cartridges value.  I want to adjust the 
2-logical libraries to shift slots from one to the other.  Also want to reduce 
the VIO Slots (currently at default/max of 255 for each library).

When I tried to change the Max. Cartridges value for one of the libraries, I 
got the message "WARNING - Changing Maximum settings for a logical library may 
require reconfiguration of the host applications for selected logical library"

The TS3500 is solely used for TSM.  2-of my 7-TSM servers are assigned as 
library managers of the 2-logical libraries (1-onsite and 1-offsite tapes).  
All TSM servers are RH Linux.

Will I need to restart/reboot the 2-TSM servers that manage the libraries if I 
make this change?  Will it impact all of the TSM servers?

--
*Zoltan Forray*
TSM Software & Hardware Administrator
Xymon Monitor Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
www.ucc.vcu.edu
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will never 
use email to request that you reply with your password, social security number 
or confidential personal information. For more details visit 
http://infosecurity.vcu.edu/phishing.html



Re: nfs mounted tsm storage becoming unavailable

2015-09-09 Thread Shawn Drew

It depends on the file system the volume is created on. Ext4 and JFS support
sparse file creation and TSM takes advantage of it. NFS does not. Search the
admin guide for "preallocated" for info.
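
An easy way to see the difference on a given volume is to compare the apparent size
with what is actually allocated (the path is just an example):

ls -lhs /tsmpool/00000123.bfs
du -h --apparent-size /tsmpool/00000123.bfs
du -h /tsmpool/00000123.bfs

A sparse (preallocated) volume shows the full size from ls but only a small number of
allocated blocks from du.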

On Wed, Sep 9, 2015 at 10:51 AM, Loon, EJ van (ITOPT3) - KLM
 wrote:
Smarter way of formatting? Either it fills the files with zeros or with a
certain text string like in 5.5; formatting takes time. Allocating a 100 GB
file currently takes a few seconds, so it's impossible that TSM physically
formats a file.
Kind regards,
Eric van Loon
AF/KLM Storage Engineering

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Remco
Post
Sent: woensdag 9 september 2015 14:17
To: ADSM-L@VM.MARIST.EDU
Subject: Re: nfs mounted tsm storage becoming unavailable

On 9 Sep 2015, at 09:11, "Loon, EJ van (ITOPT3) - KLM"
 wrote:

> Hi Rick!
> Formatting the whole volume was the case in the TSM 5.5 era, but not
anymore.

not true, but TSM does attempt smarter ways of formatting, so it seems
faster
from your perspective.

What is important is that TSM uses a particular kind of file locking,
something
that doesn't work for every combination of NFS client and server. See:
https://www-304.ibm.com/support/docview.wss?uid=swg21470193

> Formatting a diskpool or filepool volume took ages back then because it was
filled with a text string (my name backwards :-), but with version 6 this has
changed and it just pre-allocates the requested space without formatting.
Creating a volume is now a matter of seconds, no matter how large it is.
> Kind regards,
> Eric van Loon
> AF/KLM Storage Engineering
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Rhodes, Richard L.
> Sent: dinsdag 8 september 2015 18:49
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: nfs mounted tsm storage becoming unavailable
>
> It sounds like you are pre-defining the file vols on the NFS share.
> If you pre-define vols I believe TSM writes the entire vol to initialize it.
That would be a very heavy write load. It sounds like maybe the NFS server is
overloaded and maybe causing I/O errors to the other TSM's.
>
> Are you seeing ANR errors in the actlog of any of the TSM servers when this
problem occurs?
>
> I'd check the network stats for your TSM server and NFS server.
> Possibly getting lan errors under heavy load is causing problems.
>
> Are your NFS shares mounted hard? I believe they should be.
>
> Run around - don't pre-define vols; let TSM create scratch vols as needed.
> That's what we do for our NFS DataDomain shares.
>
> Just some thoughts!
>
> Rick
>
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Lee,
Gary
> Sent: Tuesday, September 08, 2015 10:38 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: nfs mounted tsm storage becoming unavailable
>
> We are trying to troubleshoot a problem with our tsm disk storage.
>
> Tsm server 6.3.4 running on RHEL 6.5.
> Our storage is a set of subdirectories on a file system mounted via NFS v3 to
all three of our TSM servers.
>
> All goes fine until an attempt is made to define one or more volumes to a
storage pool.
> At that time, if other tsm servers are writing to the storage, those volumes
will become read-only while the volume define is in process.
>
> Other operations, such as reclamation, and normal writing do not cause
problems.
>
> Our storage provider (Nexenta) is asking what TSM does when creating a
storage volume.
> So, I am coming to the list for information. Just what does TSM do under the
covers when defining a storage volume?
>
> Thanks for any help.
>
>

Re: Reg: script to get library drive Serial Nos from OS level

2015-08-07 Thread Shawn Drew
AIX:
lscfg -vl rmt* | awk '/rmt/ {printf $1};/Serial/ {print $0};' | sed 
's/Serial.*\.//'

Linux
(ibm driver):
cat /proc/scsi/IBMtape
(tsm driver):
for each in /dev/tsmscsi/mt*; do echo $each $(sginfo -s $each); done

http://www-01.ibm.com/support/docview.wss?uid=swg21425983


Also, here is an old AIX shell script I used to change all the device names so 
they match on every host. It uses chdev instead of rendev (I had never heard of 
rendev until today!)
For Linux, you would use the udev facility; there is a rough sketch after the script below.

#!/bin/sh

TEMPFILE=/tmp/drive-renumber.tmp1

# Change devs to temp names so nothing collides during the renumbering
for i in `lsdev -Cc tape | awk '/LTO Ultrium Tape Drive/ {print $1}'`; do
    chdev -l $i -a new_name=$i-temp
done

# End result file: desired device name followed by that drive's serial number
cat > $TEMPFILE <<EOF
rmt0 F001CD4001
rmt1 F001CD4007
rmt2 F001CD400D
rmt3 F001CD4013
rmt4 F001CD4019
rmt5 F001CD401F
rmt6 F001CD4025
EOF

# Get serial numbers and rename each temp device to the name mapped to its serial
lscfg -vl rmt* | awk '/rmt/ {printf $1};/Serial/ {print $0};' | sed 's/Serial.*\.//' |
while read TEMPDEVICE SERIAL; do
    if grep $SERIAL $TEMPFILE > /dev/null; then
        chdev -l $TEMPDEVICE -a new_name=`grep $SERIAL $TEMPFILE | awk '{print $1}'`
    fi
done

rm $TEMPFILE
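
For the Linux/udev route mentioned above, a rough sketch of a rule that pins a
name to a serial number (the attribute names, helper invocation and serial
format are illustrative only - check what "udevadm info" actually exposes for
your driver before trusting any of this):

# /etc/udev/rules.d/98-tapenames.rules  (illustrative only)
KERNEL=="st*[0-9]", SUBSYSTEMS=="scsi", IMPORT{program}="scsi_id --whitelisted --export --device=%N"
KERNEL=="st*[0-9]", ENV{ID_SERIAL_SHORT}=="F001CD4001", SYMLINK+="tape/rmt0"

Then reload with "udevadm control --reload-rules" and "udevadm trigger".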



 On Aug 6, 2015, at 3:51 PM, Srikanth Kola23 srkol...@in.ibm.com wrote:
 
 Hi Team,
 
 I am in need of collecting drive & library serial numbers for 50
 servers from the OS level for a LAN-free setup
 
 any scripts for me to fetch data
 
 I have AIX , Linux boxes
 
 library 3573 SCSI (Linux & AIX)
 
 EMC Data Domain (VTL) (Linux)
 
 Thanks & Regards,
 
 Srikanth kola
 Backup & Recovery
 IBM India Pvt Ltd, Chennai
 Mobile: +91 9885473450


Re: server out of storage space when really not missing vmware backups

2015-06-08 Thread Shawn DREW
Assuming it is in a filling state, check out the Filling section here to see 
why a filling tape might not be used

http://people.bu.edu/rbs/ADSM.QuickFacts


-Original Message-
From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU] 
Sent: Monday, June 08, 2015 1:38 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] server out of storage space when really not missing 
vmware backups

Gary

That depends on the volume status.

If FILLING, then it is eligible to be written to. Example: a volume in FILLING 
status which is 75% utilised has 25% free space and is therefore a candidate for 
writing to.

If FULL, the volume was 100% full at one stage. The volume remains in that 
state until all of the space is reclaimed and is 0% full, then PENDING state 
and finally SCRATCH. Example, a 75% utilised volume that is in FULL state is 
NOT eligible to be written to.

Hitting maxscratch indeed renders the storage pool full, which it effectively 
is.

select volume_name, status, pct_utilized, pct_reclaim from volumes order by
2,3

This will show you which volumes are in FILLING state and therefore eligible to 
be written to. What are your reclaim values - are you freeing up tapes via 
reclaim in the VM pool?
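
If not, something along these lines (the pool name is made up) frees up FULL
volumes once enough of their data has expired - the first sets the automatic
threshold, the second kicks off a one-off pass:

update stgpool VM_FILEPOOL reclaim=60
reclaim stgpool VM_FILEPOOL threshold=60 wait=no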



On 8 June 2015 at 18:15, Lee, Gary g...@bsu.edu wrote:

 My question here is that 59 out of 195 volumes in the pool are less 
 than 75% utilized.
 Shouldn't tsm fill them before declaring itself out of space?

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf 
 Of David Ehresman
 Sent: Monday, June 08, 2015 12:07 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] server out of storage space when really not 
 missing vmware backups

 Maxscratch defines how many scratch volumes you can have in a 
 storagepool.  When you reach maxscratch, assuming you do not have 
 volumes predefined, you are by definition out of data storage space 
 regardless of how much space is in your filesystem.


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf 
 Of Lee, Gary
 Sent: Monday, June 08, 2015 10:32 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] server out of storage space when really not missing 
 vmware backups

 Tsm server 6.2.5
 TDP for vmware 6.4

 I have the vmware data pool defined as a sequential file pool, to 
 facilitate migration.

 However, last night the server claimed it was out of data storage 
 space, but there is 18% of a 14 tb pool free.

 I checked, and the only limit reached was maxscratch.

 Is there a good way around this?

 We want to migrate to server 7.x, but could not get an install failure 
 resolved on RHEL 6.5 for server version 7.1.1.

 I have increased maxscratch, but this cannot go on indefinitely.




Re: NAS Backup

2015-05-19 Thread Shawn DREW
Check the access state of the storage pool and make sure it is READWRITE.
I get this error when I forget and leave them in a READONLY state.
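
For example (pool name is hypothetical):

query stgpool NASTAPEPOOL format=detailed
update stgpool NASTAPEPOOL access=readwrite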


-Original Message-
From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU] 
Sent: Tuesday, May 19, 2015 10:00 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] NAS Backup

The max scratch volumes allowed is 1000 but only 362 are used.  I don't have 
any with the status of empty.


Eric 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Nick 
Marouf
Sent: Monday, May 18, 2015 1:57 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] NAS Backup

Hi Eric,

 Does the storage pool have a limit to the number of scratch tapes available? 
Or do you have to manually define tapes to that pool?

q stg pool_name f=d

Maximum Scratch Volumes Allowed: 2,000
Number of Scratch Volumes Used: 1,160


If you manually define tapes to the storage pool, try this query below.
Does it return any available tapes in the pool?

q v * stg=pool_name status=Empty


On Mon, May 18, 2015 at 12:46 PM, McWilliams, Eric  
emcwilli...@medsynergies.com wrote:

 How does the NAS backup determine if there is enough space to back up 
 the data to?  I'm currently backing up an EMC Isilon directly to tape 
 (I know, I know, you don't have to tell me!) and am getting an error 
 that there is not enough space in the storage pool.

 ANR1072E NAS Backup to TSM Storage process 8 terminated - insufficient 
 space in destination storage pool. (SESSION: 5226, PROCESS: 8)

 I'm only backing up around 9TB so there should be more than enough 
 space in the tape library.  This has worked well up until last week.

 Thanks

 Eric



Re: Changing domain of CIFS backup server

2015-03-24 Thread Shawn DREW
The thing to worry about is the permission to the CIFS paths.  Domain trust 
relationships aside, the account that the TSM scheduler/CAD is running under 
will still need to be able to access all the CIFS paths that you are backing 
up.  

If you only change the domain of the server, but everything else remains the 
same (i.e. the old domain stays up and running), there *shouldn't* be a problem 
because the old service account should still be in there, referring to the 
original domain service account.  Just make sure that migration tool doesn't 
mess with the TSM scheduler's "log on as" account in the Windows services control 
panel.
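
If the service does end up pointing at the wrong account, something like this
(service name, account and password are made up; check dsmcutil help for the
exact options at your client level) re-points it from the baclient directory:

dsmcutil update scheduler /name:"TSM Scheduler" /ntaccount:NEWDOM\svc_tsm /ntpassword:NewPassw0rd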

If they plan on decommissioning the old domain, then you will need to get a new 
service account on the new domain and make sure that account is permissioned 
for all the shares you want to back up. 
-Shawn


-Original Message-
From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU] 
Sent: Tuesday, March 24, 2015 4:16 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Changing domain of CIFS backup server

To clarify, the Windows Domain that this backup server logs into - not TSM 
domain.

On Tue, Mar 24, 2015 at 4:00 PM, Prather, Wanda wanda.prat...@icfi.com
wrote:

 Zoltan,

 If you are talking about Windows domains, it's all about permissions.
 If you are talking about TSM domains, it's all about retention/backup 
 copy groups.

 Which kind of domain are you referring to?

 Wanda


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf 
 Of Zoltan Forray
 Sent: Tuesday, March 24, 2015 3:25 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Changing domain of CIFS backup server

 We have a Windows server whose sole purpose is to backup CIFS mount points.
 This server is currently in domain x and needs to be moved to domain y.

 Will changing the domain of this box have any effect on the CIFS mount 
 points it backs up? They are using some kind of migration tool.

 Zoltan Forray
 TSM Software  Hardware Administrator
 Xymon Administrator
 VCU Computer Center
 zfor...@vcu.edu - 804-828-4807
 Don't be a phishing victim - VCU and other reputable organizations 
 will never use email to request that you reply with your password, 
 social security number or confidential personal information. For more 
 details visit http://infosecurity.vcu.edu/phishing.html




--
*Zoltan Forray*
TSM Software  Hardware Administrator
Xymon Monitor Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
www.ucc.vcu.edu
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will never 
use email to request that you reply with your password, social security number 
or confidential personal information. For more details visit 
http://infosecurity.vcu.edu/phishing.html




Re: EKM to SKLM Migration Path

2014-12-26 Thread Shawn Drew
Yes there is, but you need to upgrade EKM to 2.1 if not already.  The SKLM 
install asks you to provide the location of the EKM db if migrating.
 
http://www-01.ibm.com/support/knowledgecenter/#!/SSWPVP_2.5.0/com.ibm.sklm.doc_2.5/cpt/cpt_ic_plan_migration.html?cp=SSWPVP_2.5.0%2F5-0-2
 
I would copy the EKM dir to a VM and test the upgrade separately.  I was even 
able to move from a Windows to a Linux server.


 On Dec 19, 2014, at 9:47 AM, Bill Boyer bjdbo...@comcast.net wrote:
 
 I have a customer that is still using the original Java based EKM for serving
 up the LTO keys. The servers running the EKM software are being retired due
 to OS level. Is there a migration path to get my EKM keystore in to SKLM?
 The customer's last Passport renewal has them licensed for SKLM.
 
 
 
 Any help or suggestions is very appreciated!
 
 
 
 Bill Boyer
 DSS, Inc.
 (610) 927-4407
 Enjoy life. It has an expiration date. - ??


Re: Re-hosting

2014-10-28 Thread Shawn DREW
If the new server is keeping the same hostname, then just follow the normal 
recovery process described in the documentation. 

If not, then do the normal recovery process, but also take a look at this 
article:
http://www.tsmblog.org/how-to-change-a-tsm-v6-x-server-hostname-on-windows/



Regards, 
Shawn

Shawn Drew
 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Tuesday, October 28, 2014 9:40 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Re-hosting
 
 I need to re-host a TSM 7.1.1 server from one physical server to another.
 It is currently running RHEL 6, and will be on RHEL 6 on the new box.
 
 If it helps, I am actually doing this for a pair of servers, that use each
 other for dbbackup to virtual volumes.  Node replication is turned on.
 
 Each server has storage pools on dedicated block-level SAN storage (EVAs)
 -- the new servers would continue to use the SANs, so ideally I would not
 have to do anything with this other than connect it to the new servers.
 
 What is the best way to do this?
 
 Best regards,
 
 Mike Ryder
 RMD IT Client Services




Re: tsm tcp for vm versus veeam

2014-04-29 Thread Shawn DREW
Veeam was designed as a backup product (not archive).  The way Veeam 
historically recommended longer-term tape backups (which still works and is still 
criticized) is for you to use TSM (or any product that supports tape properly) 
to archive the data from the Veeam repository. The image files are dated and 
identifiable (full backups vs incrementals, etc).  
A restore would be a 2 step process of first retrieving from the TSM archive 
then restoring from Veeam.

The idea is that you would run a TSM archive job on the Veeam repository and 
look for all *vbk files or whatever pattern you wanted to keep.  
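
For illustration (the path, management class and file name are made up), the
archive job and the later first-step retrieve could be as simple as:

dsmc archive "D:\VeeamRepo\*.vbk" -subdir=yes -archmc=VEEAM_1YR -description="Veeam weekly fulls"
dsmc retrieve "D:\VeeamRepo\JobName2014-04-25.vbk" -replace=yes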
The new version of Veeam does support basic library control, but there is no 
vault tracking or anything like that.


Regards, 
Shawn

Shawn Drew

 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Monday, April 28, 2014 11:13 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] tsm tcp for vm versus veeam
 
 No, Veeam is not able to store backups within TSM Server. Veeam and TSM
 are totally independent. Veeam is capable of using tape drives to store
 backup data, but we are not using tapes. Veeam supports data encryption
 and compression for data repositories on disks under Windows. In
 addition, it is compatible with Windows 2012 drive deduplication. We are
 using four Veeam repositories on Windows 2012 R2 VMs running on VMware
 hosts directly connected to an inexpensive SAN disk subsystem. Each
 repository has a 2TB de-duplicated Windows 2012 drive (de-duplication rate
 60-80%).
 
 Grigori Solonovitch, Senior Systems Architect, IT, Ahli United Bank
 Kuwait, www.ahliunited.com.kw
 
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Tim Brown
 Sent: 28 04 2014 11:47 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] tsm tcp for vm versus veeam
 
 So your Veeam backups don't end up within TSM but are stored and
 replicated totally within the Veeam environment.
 Isn't Veeam capable of storing the backed-up objects within TSM itself?
 
 Tim
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Grigori Solonovitch
 Sent: Monday, 28 April, 2014 2:11 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] tsm tcp for vm versus veeam
 
 We are using Veeam Backup and Replication and Veeam ONE as a management
 software for VMware itself and Veeam Backups for VMs. Perfect
 combination! I would like to recommend to use Windows 2012 R2 as Veeam
 Repository servers with native Windows deduplication for drives with
 backup data to reduce required storage size.
  By the way, most of the critical production VMs we are backing up using
 traditional TSM backups used on source standalone servers before
 virtualization. Yes, it is a little bit expensive solution, but
 combination Veeam + TSM backups (without TDP for VM) is extremely
 reliable and I do not think TDP for VM is less expensive. We are using
 only traditional TSM backups for MS Clusters with RDMs, where Veeam is
 not working.
 
 Grigori Solonovitch, Senior Systems Architect, IT, Ahli United Bank
 Kuwait, www.ahliunited.com.kw
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Tim Brown
 Sent: 28 04 2014 4:48 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] tsm tcp for vm versus veeam
 
 We are looking to start using TSM TDP for VM for our VM backup strategy.
 Our VM support staff has looked into using Veeam as an alternative. Does
 any have opinions on which approach they would recommend.
 
 Thanks,
 
 Tim
 
 
 

Re: slow restore performance on tsmve

2014-04-29 Thread Shawn DREW
If you are using the mode=ifincr parameter, it might be the same problem we had 
a while ago. 

We started using ifincr as soon as it was added, but found our restores were 
horribly slow if we did not perform a periodic full backup.  We added in a 
weekly mode=iffull backup which addressed our problem.  
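
For reference (the VM name is a placeholder), the two flavours we ended up
alternating look like this:

dsmc backup vm "MYVM01" -vmbackuptype=fullvm -mode=ifincremental
dsmc backup vm "MYVM01" -vmbackuptype=fullvm -mode=iffull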


Regards, 
Shawn

Shawn Drew


 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Saturday, April 26, 2014 2:23 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] slow restore performance on tsmve
 
 Hello,
 
 our restore performance (restore vm) is very poor. We see this on custom
 installations and also in some test areas.
 
 Environment:TSMVE 7.1 / 7.1.0.1, tsm ba client 7.1.0 and 7.1.0.2
 TSM-Server 6.3.4 and also 7.1
 
 rate restore only 8 MB/s, backups are much faster 50 to 100 MB/s
 
 
 Should we use the integrated ba client in the tsmve installation software?
 
 with best regards
 Stefan Savoric




Re: What is the newest client version supported on Windowsserver 2003?

2014-03-07 Thread Shawn DREW
The Remote Client Agent is not relevant to performing scheduled backups, 
which would make sense that the reporting says the job is successful.  Are you 
saying there is another reason to believe the db backups are failing?

Otherwise, this error just indicates that the webclient wasn't started, which 
shouldn't affect anything other than the web client (The java client that 
listens on port 1581 (by default)) 

I run into this error when I have MANAGEDServices webclient in the opt file 
but I didn't install the remoteagent (dsmcutil install remoteagent...)  I 
usually don't use this client and just remove the webclient from the optfile 
to suppress the error.
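
In dsm.opt terms, this is the line that triggers the error when the agent
service is missing:

managedservices schedule webclient

and this is what I change it to:

managedservices schedule

Alternatively, keep the web client and install the agent so the CAD can start
it (node name, password and service names below are examples; check dsmcutil
help for the exact options at your client level):

dsmcutil install remoteagent /name:"TSM Remote Client Agent" /node:MYNODE /password:secret /partnername:"TSM Client Acceptor"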



Regards, 
Shawn

Shawn Drew

 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Friday, March 07, 2014 5:17 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] What is the newest client version supported on
 Windowsserver 2003?
 
 I don't have the PMR numbers as a co-worker is working with IBM on the
 issue.  We are trying to use TSM DP SQL 6.3.1 and are seeing this error:
 
 ANS2619S (RC2036) The Client Acceptor Daemon was unable to start the
 Remote Client Agent.
 
 Despite this being a severe error the backups were still being reported
 as successful to the TSM server.  We found this while trying to upgrade
 old clients that are still running TSM 5.5 (which is EOL soon). If we
 remove and re add the services it seems to help but the problem always
 comes back.
 
 
 On Fri, Mar 7, 2014 at 3:52 PM, Dave Canan ddca...@gmail.com wrote:
 
  Tom, can you give a brief description of the database backup issues
  you've been working on with IBM? Do you have PMR numbers I could review?
 
  Dave Canan
  IBM TSM Performance
  916-723-2410
  Office hrs: 9:00 - 5:00
 
  -Original Message-
  From: Tom Alverson tom.alver...@gmail.com
  Sent: 3/7/2014 11:29 AM
  To: ADSM-L@VM.MARIST.EDU ADSM-L@VM.MARIST.EDU
  Subject: Re: [ADSM-L] What is the newest client version supported on
  Windowsserver 2003?
 
  We have been forced onto 7.x to support our 2012R2 servers (that is
  the only client that is supported) but apparently still need 6.3.x for
  Windows
  2003 (which still has over a year of support left from Microsoft).  We
  may go to 6.4.x or 7.x on 2008/2012 servers if it will cure our
  database backup issues (which we have been working with IBM for a few
  months on).  I have a funny feeling that 7.x would work on 2003 but we
  do not want any unsupported scenarios for backups.
 
  Tom
 
 
  On Fri, Mar 7, 2014 at 1:28 PM, J. Pohlmann jpohlm...@shaw.ca wrote:
 
   Last one is 6.3.1.2 on the ftp site. I agree, I wish IBM CSI
   (former
   Tivoli) would keep on supporting Windows 2003 a bit longer, like
   into v7.1+.
  
   Regards,
  
   Joerg Pohlmann
  
   -Original Message-
   From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On
   Behalf Of Tom Alverson
   Sent: Friday, March 07, 2014 10:18
   To: ADSM-L@VM.MARIST.EDU
   Subject: [ADSM-L] What is the newest client version supported on
   Windows server 2003?
  
   I can't make sense of the IBM web page with the compatibility matrix.
   What
   is the highest version client that is supported for Windows 2003
   server
  for
   both the standard BACLIENT and also the TDP database client?   We are
   having a lot of problems with the TDP 6.3.1 client and were hoping
   that moving to BAC 6.4.0.1 / TDP 6.4.1.3 would help but it appears
   that will only work on Server 2008?
  
   Tom
  
 




Re: Database Backup best practice

2014-02-18 Thread Shawn DREW
You should set the reusedelay and the number of daily database backups to the 
same number, and set both based on how far back you would want to go for any 
particular reason (i.e. how many days might pass before you notice there is a 
problem).

I've done restores for a number of reasons, not all were only for recovery 
after some disk/filesystem problem.  If you are large enough to have multiple 
TSM admins, (perhaps a junior admin on the team) one might make a mistake and 
might take a few days for the other admins to notice.  

I keep a week's worth of backups around and 7 days on the reusedelay for the 
offsite pool only.  In case someone makes a del fi mistake, we have a week to 
recover.  Or we can restore to another instance and export the needed data.
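
As a concrete sketch (the copy pool name is made up), that pairing is just:

update stgpool OFFSITE_COPY reusedelay=7
set drmdbbackupexpiredays 7

If you are not using DRM, the equivalent manual cleanup of old database backups
would be something like:

delete volhistory type=dbbackup todate=today-7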


Regards, 
Shawn

Shawn Drew
 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Tuesday, February 18, 2014 4:00 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Database Backup best practice
 
 Well, isn't it actually the other way around? You should set your
 REUSEDelay to the amount of days you keep your database backups.
 In case of a failure you will probably always want to restore the most
 recent database backup, but if that one fails too for some reason, you
 will have to fall back on older database backups. Of course this means
 losing backup information and it's for your shop to decide what is
 acceptable.
 We keep our database backups for 3 days (twice a day) and thus the
 REUSEDelay is set to 3 days. Since we don't use physical tape I think 3
 days (equals 6 backups) is long enough.
 Keeping database backups for a longer period of time means a higher
 REUSEDelay and thus requires more scratch volumes.
 Kind regards,
 Eric van Loon
 AF/KLM Storage Engineering
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Ehresman,David E.
 Sent: Monday, 17 February 2014 15:31
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: Database Backup best practice
 
 You should keep at least as many TSM DB backups as the largest REUSEDelay
 on your storagepools.
 
 David
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Tom Taylor
 Sent: Monday, February 17, 2014 9:27 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Database Backup best practice
 
 Good morning
 
 
 Running TSM 6.3.4
 
 
 
 
 I have read the admin guide best practice section on backing up
 the database and I have also done some google searching but I cannot find
 a straight answer on this... How many backups of the database should I
 keep available? I always do a full backup, is there any reason to keep any
 of the older full backups if I have a recent good one?
 
 
 
 
 
 
 
 
 Thomas Taylor
 System Administrator
 Jos. A. Bank Clothiers
 Cell (443)-974-5768
 
 
 




Re: Re: moving to new copy storage pool

2014-01-23 Thread Shawn DREW
 Updated the device class to refer to the second library?  The pools would
 remain associated with the device class, the (new|moved) drives would be
 defined to the new library, and the tapes would be checked out of the old
 library and checked into the new one (with the device class change in
 between).

This would work, but it would also change the library for the primary pool 
since they used the same device class which would complicate the migrations, 
especially if the libraries are in different locations.  

Ideally you would have every pool use a unique device class.  You can have 
multiple device classes point to the same library.
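
For instance (the names are invented), two classes against the one library:

define devclass LTO_ONSITE devtype=lto format=drive library=LTO5LIB
define devclass LTO_OFFSITE devtype=lto format=drive library=LTO5LIB

Each storage pool then gets defined against its own device class, which is what
keeps a later library move for one pool from dragging the other pool along.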

Regards, 
Shawn

Shawn Drew






Re: Moving a nodes data/filespaces to a different node on the same server

2014-01-22 Thread Shawn DREW
You can do this in 2 steps, export the filespace and reimport under the 
different nodename.  You will have to do some renames.

rename n nodeb nodeb-temp
rename n nodeA nodeB
export n nodeb
rename n nodeB nodeA
rename n nodeb-temp nodeb
import n nodeb 
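
The export/import lines above are abbreviated; in practice they need the file
data and somewhere to land, e.g. (the device class and export volume name are
placeholders):

export node nodeb filedata=all devclass=LTOCLASS scratch=yes
import node nodeb filedata=all devclass=LTOCLASS volumenames=EXP001 mergefilespaces=yes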


Regards, 
Shawn

Shawn Drew

 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Wednesday, January 22, 2014 2:12 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Moving a nodes data/filespaces to a different node on
 the same server
 
 Is this possible with the newer TSM server levels?  I Google'd and it is a
 old question/desire (move the filespaces from NODEA to NODEB so NODEB
 doesn't have to re-backup everything on things like shared/movable
 NAS/CIFS/NFS/SAN mountpoints)
 
 I know you can rename a filespace but it has always been within the same
 NODE?
 
 --
 *Zoltan Forray*
 TSM Software  Hardware Administrator
 Virginia Commonwealth University
 UCC/Office of Technology Services
 zfor...@vcu.edu - 804-828-4807
 Don't be a phishing victim - VCU and other reputable organizations will
 never use email to request that you reply with your password, social
 security number or confidential personal information. For more details
 visit http://infosecurity.vcu.edu/phishing.html




Re: moving to new copy storage pool

2014-01-22 Thread Shawn DREW
The fact that both pools use the same device class is the problem.  While you 
can change the library a device class points to, you can't change the device 
class a stgpool points to.  
You are changing from a DRM/vault flow to 2 online libraries.  This will 
require a new backup stg if you want all of your copy pool data in the new 
location. 

What I would do is:
- rename both pools to something like tape_c1.old and tape_c2.old
- Create new pools using the desired new device classes so that all new backup 
data moving forward works how you want it, 
- Maintain the old pools as you have in the past (using a vault, DRM and 
reclamation)

You can slowly migrate data from tape_c1.old to tape_c1 over time and slowly 
reclaim all the tapes in the old pools.  You will have trouble reclaiming tapes 
from the tape_c2.old pool until all the data is moved (or work some collocation 
trickery and manually delete volumes that you know are in the new pools).
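
The drain itself can be done with either migration or per-volume moves, e.g.
(the volume name is a placeholder):

update stgpool tape_c1.old nextstgpool=tape_c1 highmig=0 lowmig=0
move data VOL001 stgpool=tape_c1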


Regards, 
Shawn

Shawn Drew


 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Wednesday, January 22, 2014 11:53 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] moving to new copy storage pool
 
 We are upgrading our server from v5.5.4 to 6.2.5.  At the same time, we
 are changing platforms to redhat on x86 from suse under vm.
 
 I have tested the upgrade process, and it seems to work well.
 In the current 5.5 server, the primary tape pool uses the same devclass as
 the offsite copy pool.
 I now will have access to an offsite library from the new upgraded server.
 
 Since a new library requires a new devclass, I assume this will also
 require a new offsite pool. Is there a way to avoid a complete new ba
 stgpool from having to be done?
 At approximately 60 tB, I don't have time or tape cartridges to do this.
 
 I thought of reclaiming the old pool to the new pool for the first two or
 three weeks, then starting to backup to the new pool, hoping to get tapes
 back from the old to reuse.
 
 Any other ideas out there?




Re: Cannot bring back a volume to scratch

2013-12-30 Thread Shawn DREW
I've had this happen when the media was ejected using move media but checked 
back in after the volume went empty.  There must have been some kind of 
disconnect between the library manager and client but it has happened to me 
multiple times in this specific scenario.  
An audit library from the library client may help, but to fix this for sure, 
check out the tape first (checkout libv) so that q media does show mountable not 
in lib. Then do a move media command on the volume and that should delete it.


Regards, 
Shawn

Shawn Drew


 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Sunday, December 29, 2013 4:19 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Cannot bring back a volume to scratch
 
 Hello
  Tried to make a vol in status scratch again; the volume is in an LTO5
  robot in status private.
 
 Run a del volh 15l5 discardd=yes and got:
 
 12/29/2013 11:12:44  ANR1425W Scratch volume 15L5 is empty but
 will not be   deleted - volume state is mountablenotinlib. (SESSION:
 16397)
 
 As you see said the volume is empty but in state mountablenotinlib ,
 running q media 15l5 give me state: Mountable in library ! and
 volume status: Filling !
 
 
  I am stuck ... Look at the output of the commands below
 
 tsm: ADSM2q media 15l5 stg=i-drm f=d
 
   Volume Name: 15L5
 State: Mountable in library Last Update Date/Time:
 09/09/2013 07:14:10
  Location:
 Storage Pool Name: I-DRM
 Automated LibName: LTO5LIB
 Volume Status: Filling
Access: Read-Only
   Last Reference Date: 09/08/2013 13:24:56
 
 
 tsm: ADSM2q vol 15l5 f=d
 
Volume Name: 15L5
  Storage Pool Name: I-DRM
  Device Class Name: LTO5CLASS
 Estimated Capacity: 2.9 T
Scaled Capacity Applied:
   Pct Util: 0.0
  Volume Status: Filling
 Access: Read-Only
 Pct. Reclaimable Space: 43.1
Scratch Volume?: Yes
In Error State?: No
   Number of Writable Sides: 1
Number of Times Mounted: 4,575
  Write Pass Number: 1
  Approx. Date Last Written: 09/08/2013 13:24:56
 Approx. Date Last Read: 08/31/2013 10:48:28
Date Became Pending:
 Number of Write Errors: 0
  Number of Read Errors: 0
Volume Location:
 Volume is MVS Lanfree Capable : No
 Last Update by (administrator): ROBERT
  Last Update Date/Time: 12/29/2013 10:03:50
   Begin Reclaim Period:
 End Reclaim Period:
   Drive Encryption Key Manager: None
Logical Block Protected: No
 
 
 Regards Robert




Re: Restore TSM DB to different OS server

2013-12-03 Thread Shawn Drew
The only supported option I am aware of is when moving from TSM 5 to TSM 6.3.4. 
I've successfully migrated from AIX TSM 5 to Linux TSM 6.3.4, and if your Solaris 
server is TSM 5.5 I would consider it. 

http://pic.dhe.ibm.com/infocenter/tsminfo/v6r4/index.jsp?topic=%2Fcom.ibm.itsm.srv.upgrd.doc%2Ft_xplat_mig.html

Otherwise the export node would be the another good solution.
-Shawn


On Dec 3, 2013, at 4:14 PM, Skylar Thompson wrote:

 I believe the platform must be the same on both the source and target end
 of the database restore[1].
 
 As an alternative to a database restore, you could use EXPORT NODE either
 directly or via removable sequential media.
 
 [1] 
 http://pic.dhe.ibm.com/infocenter/tsminfo/v6r3/topic/com.ibm.itsm.srv.doc/t_db_move_new_loc.html
 
 On Tue, Dec 03, 2013 at 12:23:12PM -0800, Nora wrote:
 Hello,
 We currently have a TSM instance running on a Solaris x86 machine, and we 
 would like to migrate it to a Red Hat Enterprise Linux server. But we are 
 wondering if we can restore the orginal TSM database to the other server 
 running a different OS (Linux)?
 If it is not possible, how can we migrate all data in our archival storage 
 pool (120 GB only) to from the old TSM instance running on Solaris to the 
 new TSM instance on Linux?
 
 +--
 |This was sent by noran...@adma.ae via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--
 
 --
 -- Skylar Thompson (skyl...@u.washington.edu)
 -- Genome Sciences Department, System Administrator
 -- Foege Building S046, (206)-685-7354
 -- University of Washington School of Medicine


Re: Perform Restore in a different TSM server

2013-11-08 Thread Shawn DREW
That is almost correct. After doing this, shut off the old tsm instance 
immediately to make sure there is no conflict for the tapes. (Not sure if you 
are using a shared tape environment)

You will first need to format db/log volumes on the new server (with dsmfmt or 
use raw logical volumes), 
then run a dsmserv format command with all the appropriate options to generate 
the dsmserv.dsk and prepare it for the db restore. 
After that, you run the restore, but you will need to use dsmserv restore db 
todate=today, since you probably won't be using the roll-forward function. 
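
A rough TSM 5.x sketch of that sequence (paths and sizes are examples only, not
a recipe for your environment):

dsmfmt -m -db /tsm/db01.dsm 4096
dsmfmt -m -log /tsm/log01.dsm 1024
dsmserv format 1 /tsm/log01.dsm 1 /tsm/db01.dsm
dsmserv restore db todate=today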


Also, I prefer to backup the database to disk for a migration and just put it 
in the same path on the new server. 
Otherwise, you might need to deal with devconfig problems trying to get the new 
server to connect to the library and mount the right tape from the right slot 
into the right drive for the db restore. The devconfig file should have all 
that info, but it will only work if the library config is exactly the same as 
the original tsm server, same paths, same drive bindings, etc.  I prefer to 
deal with library configuration issues after the server is restored, up and 
running.

Regards, 
Shawn

Shawn Drew


 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Friday, November 08, 2013 2:44 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Perform Restore in a different TSM server
 
 Hi Shawn,
 
 Many Thanks for the details.
 
 Still have some concerns. please consider the below.
 
 1. We need to move TSM to a different physical host, since it is a
 physical migration process.
 2. The OLD TSM will be decommissioned, so we don't have to think about
 further backups. We just need to perform restores of the existing
 backups, BUT on the new TSM server.
 
 As per my understanding so far, I think I have to execute the below steps.
 Please correct me if I am wrong.
 
 Let's assume the existing TSM server is TSM1 and the new host where we are
 migrating is TSM2.
 
 1. Install the TSM server on the new host (TSM2).
 2. Back up the TSM database of TSM1 to sequential media. For example, issue
 the following command (it is assumed the device class for the LTO4 storage
 pool is lto4):
 
 backup db devclass=lto4 type=full
 
 3. Move copies of the volume history file, device configuration file, and
 server options file from TSM1 to TSM2.
 4. Restore the backed-up database on the target server:
 
 dsmserv restore db
 
 5. Now we can perform the restore of any data in TSM2 which was backed up
 by TSM1.
 
 +--
 |This was sent by sumitk@tcs.com via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--




Re: Active Storage Pools

2013-11-08 Thread Shawn DREW
I would just mark all the tapes that are not in the library as unavailable, 
then run your copy activedata.  When it finishes, eject as many tapes as you 
can and put in other tapes.  The actlog will tell you which tapes were needed but 
were marked unavailable.  Change the unavailable/readw flags 
around and repeat until you get through all the tapes it asks for. 
At the end, mark all of them readw again and rerun the copy activedata to catch 
any stragglers.
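
A sketch of that loop (pool names are hypothetical; the unavailable flag would
be set per offsite/out-of-library volume, typically driven off a select):

update volume VOLA01 access=unavailable
copy activedata TAPE_PRIMARY ACTIVE_POOL maxprocess=2 wait=no
update volume * access=readwrite wherestgpool=TAPE_PRIMARY whereaccess=unavailable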

I don't know how expire inv affects the active data pool though.  The word 
expire is used in two contexts in TSM and is confusing.  The TSM client 
expires data in its logs, but it is actually making it inactive.  This is 
different than the expiration that happens with the expire inventory process. 
Hopefully, the active data pool only needs to identify inactive files during 
the reclamation. 

Insert usual comments about not expiring data in TSM here

Regards, 
Shawn

Shawn Drew


 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Friday, November 08, 2013 6:42 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Active Storage Pools
 
 Does anyone have much experience with active storage pools?
 
 My current customer isn't allowed to reclaim tapes, so their inventory of
 primary storage volumes is much larger than their ATL. This makes client
 restores prone to failure because of the number of primary volumes that
 need manual intervention as well as the lack of routine operational
 support at the site.
 
 I'm considering creating an active storage pool. Once fully populated, its
 tapes would remain in the ATL; mere primary volumes would be on-site but
 not in the ATL. I know I can start to create it by defining an active data
 pool to be written to as primary tapes are being written to (we already
 use this for copy volumes), but there's still the matter of copying into
 the active pool the active data on the existing primary storage volumes.
Is there any way to build the active storage pool in phases, such as "all
the data that's four weeks old or younger" or "all the data on volumes
that are available; don't flinch at volumes that don't mount"? Otherwise,
 the COPY ACTIVEDATA process is going to run 24x7 for a long, long time. If
 I cancel an COPY ACTIVEDATA command and then start a new one, does it
 correctly understand what it no longer has to copy?
 
 Do reclaims of active data storage pools rely upon EXPIRE INVENTORY
 running regularly? We haven't been running EXPIRE INVENTORY out of an
 abundance of caution to avoid any risk that we lose track of an inactive
 object someone might demand from us, but I suspect we'll need EXPIRE
 INVENTORY to keep the active pool correctly populated. I can imagine how
 to set the retention policies to mimic not running EXPIRE INVENTORY, but
 that still might make Some People nervous.
 
 Thanks to Wanda's tip about EXPORT NODE a couple of weeks ago, I think I
 know how large (roughly) my active data pool would be if I create it and
 populate it.
 
 What am I forgetting? Is my plan fatally flawed?
 
 TSM Server 6.3.4, if it matters.
 
 Thanks,
 Nick


This message and any attachments (the message) is intended solely for 
the addressees and is confidential. If you receive this message in error, 
please delete it and immediately notify the sender. Any use not in accord 
with its purpose, any dissemination or disclosure, either whole or partial, 
is prohibited except formal approval. The internet can not guarantee the 
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will) 
not therefore be liable for the message if modified. Please note that certain 
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.


FTP proxy for Auto client deployment

2013-11-08 Thread Shawn DREW
The Auto Client Deployment function in the TSM Admin Center has the ability to 
connect to the IBM ftp servers and automatically download client packages, 
which assists managing large corporate environments.

Unfortunately, large corporate environments will also typically have a web 
proxy in order to access the internet and sometimes authentication to access 
the web proxy.
IBM support just told me there is no way to add proxy configuration for this 
function
I have to imagine you just need to put in the proxy information in a config 
file somewhere.

Does anyone know how this might be done?

(I know the workaround is to manually download the files and put them in the 
right place)


Regards,
Shawn

Shawn Drew





Re: Perform Restore in a different TSM server

2013-11-07 Thread Shawn DREW
The metadata/catalog information for all data is stored in the TSM database 
(except for exports and backupsets)
In order to restore from a separate TSM server, you need to get this metadata 
(and possibly associated data) into the new tsm server and there are a number 
of ways to do that. 

You can NOT import catalog information from tapes in TSM.  If you lose your TSM 
database, you lose your data (undocumented hacks aside). Protecting the TSM 
database is much more critical than with other backup applications. Also you cannot 
do a partial TSM db restore.  If you need to restore a TSM DB, it is a full 
recovery and the new copy will think it owns all the tapes, so no indexing is 
possible and you will need to be careful if both TSM servers are sharing the 
same library.

Some choices to do what you are talking about:
- export/import the data from the old server to the new server before a restore 
is needed. (help export node)  This will copy the metadata and the data to the 
new server (will require an additional storage for the data copy)
- Perform a TSM db recovery on the new server each time you need a restore. 
This will copy the metadata, but use the same tapes
- Upgrade to TSM6.3 on both and use the node replication feature. This will 
keep the metadata and data synced between the two servers but again, will use 
2x the space. 

I wouldn't do any of these though.  I don't think the TSM architecture was 
built for this type of tactics.  If the underlying requirement is to ensure 
restore performance, I would look at the following ways that don't technically 
involve a completely separate server:

- Create a restore-only storage agent.  This is the lan-free solution and 
similar to using a media server in Netbackup.  The metadata will still be 
processed by the TSM server, but the storage access will be direct.
- Look into using an Active data pool.  This maintains a separate storage pool 
of current data, so most restores from the most recent backups will be 
performed from this pool and wouldn't touch the main backup pool, where you may 
have backups running.

(and possibly using both a storage agent and an active pool)


Regards, 
Shawn

Shawn Drew
 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Thursday, November 07, 2013 2:12 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Perform Restore in a different TSM server
 
 Hello,
 
 I am very new in TSM  environment but not a newbie in Backup
 administration. We are about to build a new TSM environment which would be
 used only for restore. we have EXISTING TSM 5.5 running on AIX 5.3. The
 NEW TSM server (different host name/IP) where we would be running only the
 restore ops will have TSM 5.5 on AIX 7.x.
 
 It would be very helpful if I get the setps to achive this.
 
 Also is there any requirement of performing individual index build for the
 tapes from where performing the restore in the new restore environment
 even if after we are doing the TSM DB restore from the existing TSM to the
 new TSM server.
 
 Thanks,
 
 Sumit
 
 +--
 |This was sent by sumitk@tcs.com via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--




Re: www.ibm.com/support TSM client downloads

2013-09-19 Thread Shawn DREW
I always use this link for client code and server updates. I find it convenient 
because they put the updated date next to the versions: 

http://www-01.ibm.com/support/docview.wss?rs=663&uid=swg21239415


Regards, 
Shawn

Shawn Drew

 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Thursday, September 19, 2013 12:09 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] www.ibm.com/support TSM client downloads
 
 IBM has changed their support portal again; they now are trying to restrict
 customers only to software downloads to which they're entitled.
 
 I can't figure out how to get them to offer me client code, and I can't figure
 out how to get them to offer me 6.2 code.
 
 Does anyone have any suggestions about how to get specific client fixes
 through sanctioned IBM channels?
 
 I apologize if this is an FAQ. If it's been covered here lately, I apparently
 missed the importance of it. :-(
 
 Thanks,
 Nick




Re: TSM v5-v6 upgrade - permissions of raw disk pool vols

2013-09-19 Thread Shawn DREW
Yes, permissions need to be considered for v6 resource access, although you 
don't necessarily need to reassign ownership. 

http://www-01.ibm.com/support/docview.wss?uid=swg21394164
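
On AIX this usually boils down to giving the instance owner's group read/write
access to the special files, roughly like the sketch below (device names and
the group name are placeholders; follow the technote above for your actual
setup):

  # hypothetical raw LV used as a disk pool volume; tsmsrvrs is the instance group
  chgrp tsmsrvrs /dev/rdiskpool_lv01
  chmod 660 /dev/rdiskpool_lv01
  # same idea for the tape and medium changer special files if dsmserv can no
  # longer open them as the non-root instance user
  chgrp tsmsrvrs /dev/rmt0 /dev/smc0
  chmod 660 /dev/rmt0 /dev/smc0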


Regards, 
Shawn

Shawn Drew


 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Thursday, September 19, 2013 1:03 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] TSM v5-v6 upgrade - permissions of raw disk pool vols
 
 Our TSM v5 servers all run as root.  After the conversion to v6 they will be
 running as a non-root account which is the tsm/db2 instance owner.
 
 Our disk pools are all raw logical volumes.  Do we need to change ownership
 of the raw volumes to the new instance owner so dsmserv can access the
 LV's?
 Along the same lines, is the new v6 dsmserv  able to access the RMT tape
 devices, or do I have to change their ownership also?
 
 Thanks
 
 Rick
 
 
 
 
 




Re: Expanding ProtecTIER Library

2013-08-27 Thread Shawn DREW
I've always had to restart the instance in order to detect new elements. 
(deleting and redefining everything also works)
I was never able to detect new slots with just an audit.


Regards, 
Shawn

Shawn Drew


 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Tuesday, August 27, 2013 3:27 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Expanding ProtecTIER Library
 
 If you're expanding the library dimensions (drives and slots) you'll want to do:
 
 1) An audit lib once you bring that lib mngr back up, of course making sure
 there's no library activity.  (I like to leave the lib mngr down during the
 process.)
 
 2) Def drives and paths
 
 That should do it - the audit lib lets TSM know about the new Element #'s
 which are slots and drives.
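 
 In command terms that's roughly the following (library, drive and device names
 here are placeholders, not from this thread):
 
   audit library VTLLIB checklabel=barcode
   define drive VTLLIB DRIVE09
   define path TSMSRV1 DRIVE09 srctype=server desttype=drive library=VTLLIB device=/dev/rmt8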
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On
 Behalf Of Steven Langdale
 Sent: Tuesday, August 27, 2013 2:04 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Expanding ProtecTIER Library
 
 Guys.
 
 I'm about to expand an existing virtual library.
 
 The lib manager is running 5.5.5.2
 
 I'm assuming I'll just need to restart the lib manager and define the extra
 drives  paths (also adding slots).  Anyone done this and had to do any more
 to get it working i.e. delete and reconfig the whole library?
 
 Thanks
 
 Steven
 




Re: tsm v6 - show logpin

2013-08-26 Thread Shawn DREW
I can't find my reference anymore, but I remember reading somewhere that due to 
the fundamental change in the database, pinning the log in TSM 6 is not an issue 
anymore.  Consider that a possibility in your search.
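
If you just want a scripted utilization check in v6, something like this should
do it (a sketch; the admin ID/password are placeholders, and the column names
should be verified with a plain select * from log on your level):

  dsmadmc -id=admin -password=xxxxx -dataonly=yes \
    "select used_space_mb, total_space_mb from log"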


Regards, 
Shawn

Shawn Drew
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Monday, August 26, 2013 2:45 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] tsm v6 - show logpin
 
 q) Is there a show logpin in TSM v6?
 
 In our adventure to TSM v6 I found that issuing a show logpin returns that it
 is an unknown command.  If I look in the Problem Determination Guide (v6.2
 and v6.3) it lists show logpin as one of the show cmds.
 
 q)  How to identify sessions that are pinning the log in v6.
 
 We have a script we run against our TSM v5 servers that checks for the log
 being 75% full and issues a show logpin cancel.  Is there a way to identify
 pinning sessions in v6 so I can implement a check like this?
 
 Thanks
 
 Rick
 




Re: TDP SQL backup performance

2013-08-23 Thread Shawn DREW
Everywhere I look shows a low end of 40-50MB/s as the minimum speed of an LTO5, 
depending on the vendor.   I'm sure compression will also mess with that 
number.  This is probably just classic tape shoe-shining.


Regards, 
Shawn

Shawn Drew

 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Friday, August 23, 2013 3:23 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] TDP SQL backup performance
 
 I have a question about the target device, can an LTO-5 write at 20MB/sec?
 Perhaps an intermediate landing zone that can write that slow might help.
 
 Andy Huebner
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On
 Behalf Of Håkon Phillip Tønder-Keul
 Sent: Friday, August 23, 2013 12:08 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] TDP SQL backup performance
 
 On 08/23/2013 04:07 PM, Sven Seefeld wrote:
  Hi,
 
  we're experiencing slow performance while backing up a 1,5 TByte MS
  SQL 2008R2 database with the TDP SQL Agent 6.3. As for DR-scenarios,
  we're doing legacy backups only. The actual speed is usually around 20
  MByte/s, backups go to
  LTO-5 tape directly. The backup stream has to pass through 3 different
  firewalls, so we're not quite sure where the bottle neck is located,
  MS SQL API, TDP Agent or IP-firewall (SAN-Storage and TSM-Server
  should outperform everything else).
 
  Maybe someone can share his performance values with me. Any hints are
 welcome.
 
 
  Regards,
 
 
 Sven
 
 Hi,
 
 Can you post your dsm.opt and tdpsql.cfg?
 
 There is a lot of tuning that can boost your performance.
 
 --
 Mvh/Rgds
 Håkon Phillip Tønder-Keul
 Senior Consultant




Re: TSM VE backup of Orcale Windows server

2013-07-18 Thread Shawn DREW
To expand on this...  As I understand it, the newer versions of TSM for VE and 
Virtualcenter are trying to offer a better-than-crash-consistent snapshot.  
When an application supports it, VMware communicates through VSS to the 
application (MSSQL for example) and it will attempt to quiesce the database 
before the VMware snapshot occurs.  This adds complexity and head-scratching 
when it doesn't work.   I'm not sure if Oracle has this integration and you 
don't need it if you are manually quiescing the database.  Maybe there is a 
driver in there that is attempting to make it work with Oracle.  I never 
figured out how to disable this integration.


Regards, 
Shawn



 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Thursday, July 18, 2013 3:14 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] TSM VE backup of Orcale Windows server
 
 TSM4VE does get VMWare to create a snapshot, it's VMWare that then
 integrates with the VM to do the VSS stuff.
 
 As you don't have VSS, VMWare will use its own driver to do this (SYNC).
 It has been known for this quiesce stage to kill a busy server:
 
 http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=5962168
 
 Feedback if it makes a difference.
 
 Steven
 
 P.S. loved your comment about old W2K servers!
 
 
 On 17 July 2013 19:13, Huebner, Andy andy.hueb...@alcon.com wrote:
 
  In the physical world, we stopped the application, used the SAN to
  make a snapshot of the disks, then restarted the application.  The
  snaps were then given to another server where they are backed up.
  The virtual version would be similar, stop the application, start the
  backup, start the application.
 
  Our understanding of the TSM VE process is that TSM has VMWare make a
  snap of the disks then TSM backs up the snap.  If that is the case why
  would VSS matter on the guest?  Or do we have it wrong?
 
  Our problem is we have been given about 30 minutes to backup an
  application that spans 3 servers and dozens of disks.  On the disks we
  have Oracle and millions of files.  All have to be in sync to be able
  to do a complete restoration of the application.
  The physical version of this process has worked for more than 10
  years, now we need to convert it to virtual.
 
 
  Andy Huebner
 
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On
 Behalf
  Of Ryder, Michael S
  Sent: Wednesday, July 17, 2013 11:25 AM
  To: ADSM-L@VM.MARIST.EDU
  Subject: Re: [ADSM-L] TSM VE backup of Orcale Windows server
 
  Andy:
 
  VE uses Microsoft VSS (Volume Shadow copy Service), which was not
  available with Windows 2000.
  Oracle VSS Writer is only available with Oracle 9i or later.
 
  On Windows 2003 and newer, and Oracle 9i and later, we have no trouble
  with hot-backups of Oracle systems where we have the Oracle writer for
  VSS installed.
 
  I can't explain why your DB would get corrupt, but without VSS and
  VSS-Writer for Oracle, there isn't any integration between Oracle and
  the snapshot process.  That in itself might be enough to explain it.
 
  On servers where we couldn't get VSS Writer for Oracle installed, we
  only do cold-backups by using VMware tools to execute batch-commands
  to properly shutdown and restart applications and their Oracle databases.
 
  I think you will only be able to get away with cold-backups in your
  current configuration.
 
  Mike
 
  Best regards,
 
  Mike
  RMD IT, x7942
 
 
  On Wed, Jul 17, 2013 at 11:20 AM, Huebner, Andy
  andy.hueb...@alcon.com
  wrote:
 
   We ran our first backup of a Oracle server using TSM VE and the
   Oracle DB reported many errors and it caused the Oracle DB to become
 corrupt.
   I believe Oracle crashed and it was later recovered.
  
   Has anyone had any issues backing up a live Oracle system with TSM VE?
  
   Oracle - v.Old
   Windows - 2000 (laugh if you want, but I bet you have some too) TSM
   agent 6.4.0.0 TSM Server 6.2.3.100
  
   There is far more to the process and well thought out reasons, but
   this is the bit that is having an issue.
  
  
   Andy Huebner
  
 




Re: TSM NDMP separate data and tape server

2013-06-07 Thread Shawn DREW
He is talking about a 3-way NDMP backup introduced with NDMP v2. 
It lets you use a second file server as a remote storage agent.  
File Server 1 -- LAN -- File Server 2 -- Local/SAN attached tape drive.

The following thread discusses it, but I never got it working and I don't think 
there was a definite resolution or documentation on it.  I was able to get the 
Remote NDMP over IP variant working that is described in the TSM 5.4/5.5 
Technical Guide:
File Server 1 -- LAN -- TSM Server -- TSM native storage pool (not 
NETAPPDUMP format), but not the classic 3-way backup.

http://adsm.org/lists/html/ADSM-L/2009-10/msg00255.html

Regards, 
Shawn

Shawn Drew

 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Friday, June 07, 2013 3:18 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] SV: TSM NDMP separate data and tape server
 
 Hi Grant,
 What do you mean?
 You can backup a NetApp direct to Tape but still keep track in TSM.
 
 /Christian
 
 -Ursprungligt meddelande-
 Från: Grant Street [mailto:gra...@al.com.au]
 Skickat: den 5 juni 2013 08:59
 Till: ADSM-L@VM.MARIST.EDU
 Ämne: TSM NDMP separate data and tape server
 
 Hi
 
 I just want to confirm, my research
 
 Can you have the NDMP Data Server on a Windows server backing up to a
 the Tape server on a Netapp using TSM?
 
 I know a Netapp can be a DATA and TAPE server in one but according to the
 NDMP Definitions you should be able to separate them.
 
 Thanks
 
 Grant




Re: Planning for NDMP backup

2013-06-07 Thread Shawn DREW
You can use disk or tape.  Disk can be used through a VTL or as a normal FILE 
device class.  
If you want to use FILE device classes, look in the manual for backing up a NAS 
file server to native pools.
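
A rough outline of that native-pool setup (all names are placeholders; the
manual section above has the full steps, including policy set activation):

  define devclass NASFILE devtype=file directory=/tsm/nasfile mountlimit=20 maxcapacity=50G
  define stgpool NASPOOL NASFILE maxscratch=500
  define stgpool NASTOC NASFILE maxscratch=100
  update copygroup NASDOM STANDARD STANDARD type=backup destination=NASPOOL tocdestination=NASTOC
  backup node NASNODE1 /vol/vol1 toc=yes mode=full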


Regards, 
Shawn

Shawn Drew


 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Friday, June 07, 2013 3:51 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Planning for NDMP backup
 
 Hi everyone,
 
 we're planning on implementing a TSM server that also does backup NetApp
 Filers and we've run into a few questions. Hope that you can help me with
 these.
 
 1. Is it possible to store the NDMP backups on disk or is only tape possible?
 2. If only tape, how many drives are recommended for NDMP backup? One
 per Filer if they backup at the same time?
 
 Thanks in advance for any hints.
 
 Regards,
 Michael




Re: Planning for NDMP backup

2013-06-07 Thread Shawn DREW
To paraphrase Churchill, NDMP is the worst form of NAS backups, except for all 
the others.. 

I keep testing snapdiff with every new TSM client and/or ONTAP update.  It's 
just not as reliable as NDMP for me.
Bottom line is that if you can't handle the periodic full-scan backups (like 
journaling) then don't bother.  At least for our (250TB) environment.  

You can make NDMP on TSM bearable with lots of scripting and classic backup 
thinking (i.e. regular fulls and no copy pools).  Also, use separate NDMP storage 
pools and group backups with the same retention, e.g. NDMP_35day, NDMP_1year, 
etc.  This lets the tapes expire regularly without reclamation.

As far as drives per filer, it seems like a standard to not go over 4 drives 
per filer concurrently but, as always, it depends.  Some file servers are more 
powerful and can handle more, some less.  Some volumes are just plain slower 
than others and you need to plan around that.

Someone also mentioned something about not being able to create a TOC because of too 
many files.  During an NDMP backup, the TOC is temporarily stored in the DB 
temp space, then dumped to the actual TOCDestination after the backup finishes.  
(If your DB is 80% utilized, then the 20% unused space is the temp space.)  
I ran our NDMP TSM server/library manager with a 150GB DB that was .05 pct 
utilized.  This solved all of our TOC issues, so it may help. (This was TSM 5.5, 
so I'm not sure how that is handled in TSM 6.)

That said, we scrapped TSM NDMP here.  We have a small dedicated Netbackup 
environment just for NAS now.  It uses a heck of a lot of tapes, but it 
finishes reliably.  
It was like magic when I was able to preview the tape I would need for a 
restore without having to run selects against the backups/contents tables and 
not having to wait a couple hours to load a particularly large TOC.
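
For what it's worth, the selects in question look roughly like this (node and
filespace names are made up; VOLUMEUSAGE is usually much faster than digging
through the CONTENTS table):

  select distinct volume_name from volumeusage where node_name='NASNODE1' and filespace_name='/vol/vol1'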

Regards, 
Shawn

Shawn Drew

 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Friday, June 07, 2013 11:30 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Planning for NDMP backup
 
 If you have a NetApp, is there any particular reason you're not using
 snapdiff?  My experiences with NDMP are all bad.  Troubles with reclamation,
 troubles with creating copy pools, having to cancel backups, reclamation and
 backup stg because the recovery log was getting way too full.
 
 -
 Cameron Hanover
 chano...@umich.edu
 
 Reminds me of my safari in Africa. Somebody forgot the corkscrew and for
 several days we had to live on nothing but food and water.
 --W. C. Fields
 
 
 On Jun 7, 2013, at 3:51 AM, Michael Roesch michael.roe...@gmail.com
 wrote:
 
  Hi everyone,
 
  we're planning on implementing a TSM server that also does backup
  NetApp Filers and we've run into a few questions. Hope that you can
  help me with these.
 
  1. Is it possible to store the NDMP backups on disk or is only tape
  possible?
  2. If only tape, how many drives are recommended for NDMP backup? One
  per Filer if they backup at the same time?
 
  Thanks in advance for any hints.
 
  Regards,
  Michael




Re: Log pinning, transactions, and sequential media

2013-05-13 Thread Shawn DREW
This may be true, although I didn't make the correlation until I read this.
We had log pinning problems periodically for several years, but they stopped 
happening at some point even though we are still on 5.5.
I assumed they had refreshed the slow Windows (100mbit) clients.   Thinking back, 
this was around the time we switched to Data Domain and sequential-file device 
classes.
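
Along the lines of that technote, you can also push large files straight to
sequential media by capping the disk pool; a minimal sketch with made-up pool
names and an arbitrary cutoff:

  update stgpool DISKPOOL maxsize=5G nextstgpool=FILEPOOL

Anything over 5GB then bypasses the disk pool and goes directly to the FILE pool.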


Regards, 
Shawn

Shawn Drew

 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Monday, May 13, 2013 9:46 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Log pinning, transactions, and sequential media
 
 Hi Everyone!
 
 Environment:  TSM server v5.5.6
 
 We've been fighting log pin issues for some time.  They are being caused by
 folks installing Windows servers at remote sites that have slow data circuits.
 Many of these servers are trying to backup big files, which pin the log for 
 long
 stretches of time.  We are working to set excludes, break up big files, etc, 
 etc.
 
 As our team lead was reading up on log pin issues and came across document
 http://www-01.ibm.com/support/docview.wss?uid=swg21584401
 
 It makes the following statement:
 
 2. Sequential media should be used for large files:
 Any node storing large files should be sent directly to sequential
 media
 - tape or file device classes. When using sequential media a transaction is
 not sent until the transaction completes or when the object spans to the
 next volume. This can reduce and in some cases eliminate the total pinning
 time for a long running transaction compared to a disk device class. In
 comparison, backups to disk storage pools immediately pin the recovery log.
 To further exacerbate the problem, backups to disk storage pools require
 more unique transactions compared to backups to sequential media.
 Reducing the length of time that a long running transaction is pinning the log
 and the number of transactions helps to reduce the likelihood of log
 exhaustion.
 
 Is this really saying that when using sequential media TSM doesn't create a
 transaction for a file until it has FINISHED being sent (or changes to the next
 volume)?
 
 Is this also true when sending a file to a diskpool with a Max Size Threshold,
 where a file exceeding that size limit is sent directly to the nextpool and
 the nextpool is of type FILE?
 
 
 
 Thanks
 
 Rick
 
 
 
 
 
 
 
 




Re: Odd Migration Behaviour

2013-02-14 Thread Shawn DREW
Do you have a migration delay setting on the disk pool by any chance?
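
A quick way to check (the pool name is a placeholder); if Migration Delay is
non-zero, files newer than that many days are skipped by migration:

  query stgpool DISKPOOL format=detailed
  update stgpool DISKPOOL migdelay=0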


Regards, 
Shawn

Shawn Drew


 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Thursday, February 14, 2013 5:41 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Odd Migration Behaviour
 
 Hi
 
 TSM Server v6.3.1
 
 Some odd behaviour when migrating a disk pool to tape
 
 The disk pool (devicetype=disk) is 6tb in size and has approximately 2.5tb of
 data in it from last nights backups.
 
 
 I backup the stgpool, it copies 2.5tb to tape. Fine with this
 
 I mig stgpool lo=0 maxpr=3. Start fine.
 
 (I do not use a duration parameter)
 
 First migration finishes after a few minutes, migrating 500mb. Lower than I
 expected, I guess. Second migration finishes after 30 minutes, migrating
 138gb.
 Third migration finishes after 1.5 hours, migrating 819gb.
 
 But the pool now shows about 24% utilised, so still about 1.5tb of data
 remaining
 
 
 The tape pool the migrations are writing to has collocation=group specified.
 I have three collocgroups, containing approximately 250 nodes. All of the
 nodes on the server are within these 3 groups. I noticed that one of the
 clients was still backing up. It's a slow backup, always has been. That node is
 in one of the collocgroups.
 When that client backup completed, I ran migration again with lo=0 and it is
 now beyond 1tb and still running. Pct utilisation of the disk pool is now down
 to 10%
 
 
 So, because the backup of that node is still running, will that prevent
 migration from migrating data from that specific collocgroup while a backup
 of a client within that group is in process?
 
 Any comments welcome.
 
 Regards




Re: 1 GBit FC sufficient for tape library?

2013-02-12 Thread Shawn DREW
LTO4 is rated at 120MB/s max speed, which comes out to just under 1gbit/sec 
(1gbit/s = 125MB/s).  I normally reserve 1gbit for each LTO4 drive.  (i.e. 4 
drives on a 4gb HBA)
I do see 110+MB/s streaming on those guys.  (using topas -T on AIX, so not sure 
about the accuracy)


Regards, 
Shawn

Shawn Drew
 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Tuesday, February 12, 2013 12:04 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] 1 GBit FC sufficient for tape library?
 
 Another thought:  You really want to avoid the LTO4 drives dropping out of
 streaming mode, even if 1Gb/s is enough bandwidth to theoretically move
 the amount of data you need to move.  I believe LTO4 will operate down to
 30-40MB/s.  Worth checking, since I'm not sure, but I'm pretty sure you can
 drop as low as 40MB/s and still stay in streaming mode.  Once you drop out of
 streaming mode, your throughput will go in the dumper, and you won't get
 anything close to 1Gb/s throughput.
 
 At 10:15 AM 2/12/2013, Michael Roesch wrote:
 Hi Stefan,
 
 one quick question: what are the numbers you are using for your
 calculation?
 
 Did you convert 240 MB/s into Gigabit/s?
 
 Thanks
 
 Regards,
 Michael
 
 
 
 On Tue, Feb 12, 2013 at 1:01 PM, Stefan Folkerts
 stefan.folke...@gmail.comwrote:
 
  Yes, two LTO4 drives would not be happy at all behind a 1Gb/s HBA,
  even two 1Gb/s HBA's are not enough to fully utilize the drives.
  I would imagine you could also run into driver/firmware issues with
  this combination since it's a very strange one, but then again it
  would probably just work slowly.
 
 
  On Tue, Feb 12, 2013 at 12:34 PM, Michael Roesch
  michael.roe...@gmail.comwrote:
 
   Hi Stefan,
  
   the drives are LTO4 ones. So the HBA would be the bottleneck
  
  
   On Tue, Feb 12, 2013 at 11:36 AM, Stefan Folkerts 
   stefan.folke...@gmail.com
wrote:
  
If it is LTO1 or LTO2 you would be OK, if it is LTO3 or higher
you are limiting your drivers with your HBA.
   
   
On Tue, Feb 12, 2013 at 10:39 AM, Michael Roesch
michael.roe...@gmail.comwrote:
   
 Hi all,

 we have an old HP MSL 6030 with two drives that share one 1
 GBit FC
   port.
 Would that be enough to use with TSM? I have a feeling that says
 no
but I
 couldn't find any minimum FC requirements.
 Anyone having more info? Maybe even an IBM technote?

 Thanks

 Regards,
 Michael Roesch

   
  
 
 
 
 --
 Paul ZarnowskiPh: 607-255-4757
 CIT Infrastructure / Storage Services Fx: 607-255-8521
 719 Rhodes Hall, Ithaca, NY 14853-3801Em: p...@cornell.edu




Re: TSM Device Driver

2013-01-14 Thread Shawn DREW
You will use Atape for the library control, but you didn't mention what drive 
type you are using.  Assuming it is an IBM drive, then you will use Atape for 
that as well.

I have a quantum library with IBM drives, so I use the TSM driver for the 
library control and Atape for the drives. 



Regards, 
Shawn

Shawn Drew
 -Original Message-
 From: ADSM-L@VM.MARIST.EDU [mailto:ADSM-L@VM.MARIST.EDU]
 Sent: Monday, January 14, 2013 7:28 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] TSM Device Driver
 
 Hi TSM-ers!
 
 We have been using TSM on AIX for quite a while (since ADSM 2.1) and we
 always used the Atape driver. In fact I always thought one had to use the
 Atape driver, unless you use a drive which isn't supported by Atape, in that
 case you should use the TSM device driver.
 
 The 6.3 installation manual states: The Tivoli Storage Manager device driver
 is preferred for use with the Tivoli Storage Manager server.
 
 We are using a (virtual) 3584 library, should we use Atape or the TSM device
 driver?
 
 Thanks for your help!
 
 Kind regards,
 
 Eric van Loon
 
 KLM Royal Dutch Airlines
 
 
 




Re: consolidating file type primary pools

2012-10-23 Thread Shawn Drew
Yes, just update the device class to only have the directory you want to
write to.  You will still be able to read from volumes that were on the E
drive.  Then just use move data to clear off the E drive.
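
Roughly like this (device class, pool and volume names are placeholders):

  update devclass DEVT_FILE directory=D:\DEVT_PRIM
  query volume stgpool=DEVT_POOL
  move data E:\DEVT_PRIM\00000123.BFS

Each move data run drains one E: volume onto new D: volumes; once the E:
volumes are empty and deleted, the directory can be dropped.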

Regards,
Shawn

Shawn Drew





Internet
tbr...@cenhud.com

Sent by: ADSM-L@VM.MARIST.EDU
10/23/2012 04:41 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
[ADSM-L] consolidating file type primary pools






Say you have a file based storage pool, with the devclass coded as

D:\DEVT_PRIM,E:\DEVT_PRIM

Over time they accumulate a number of .bfs files in each and then  due
to application retirement or other circumstances you don't need all
the space that this consumes. Is there a way to move the bfs files
from E:\DEVT_PRIM to D:\DEVT_PRIM

Can you update the DEVC to just D:\DEVT_PRIM and then use MOVE DATA to
move the .bfs files from E:\DEVT_PRIM to D:\DEVT_PRIM

Thanks,

Tim Brown
Supervisor Computer Operations
Central Hudson Gas  Electric
284 South Ave
Poughkeepsie, NY 12601
Email: tbr...@cenhud.commailto:tbr...@cenhud.com 
mailto:tbr...@cenhud.com
Phone: 845-486-5643
Fax: 845-486-5921
Cell: 845-235-4255







Re: Bad rman/TSM for Oracle restore performance

2012-10-19 Thread Shawn Drew
Is this VTL or physical tape?

Just as a troubleshooting step,  I would attempt a single-channel
backup/restore to see if that makes a difference. (whether it's physical
or virtual)

It's been a while, but I remember having weird problems like this in the
past with multi-channel restores on physical tape.
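
A single-channel test from RMAN would look something like this (the tdpo.opt
path is an assumption; point it at wherever TDPO is installed on the Linux
box):

  run {
    allocate channel t1 device type sbt
      parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
    restore database;
    recover database;
  }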

Regards,
Shawn

Shawn Drew





Internet
deehr...@louisville.edu

Sent by: ADSM-L@VM.MARIST.EDU
10/19/2012 03:35 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
[ADSM-L] Bad rman/TSM for Oracle restore performance






I have a Linux box running oracle and using rman/TSM for Oracle for backup
and restores.

TDPO is 5.5.1.0.  The TSM client API is 6.3.0.0.  The TSM Server, running
on AIX,  is 6.2.4.0.

Backup times are acceptable, approximately 250GB in an hour using two rman
channels.  This drives the TSM server 1G Ethernet link at about 80MB/sec.

Restores are so slow I do not have a time yet for restoring the 250GB. The
linux server has no load; top shows:

top - 14:54:09 up  4:39,  8 users,  load average: 0.03, 0.02, 0.00
Tasks: 274 total,   1 running, 273 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.1%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,
0.0%st
Mem:  16463088k total, 16368212k used,94876k free,   305156k buffers
Swap:  8388600k total,0k used,  8388600k free, 15201560k cached

TSM q sess f=d commands against the two TDPO sessions issued every 15
seconds shows each thread in a SendW state that lasts about 3.5 minutes,
then the client receives a chunk of data, then goes into another 3.5
minute SendW.  This repeats:

IBM Tivoli Storage Manager
Command Line Administrative Interface - Version 6, Release 2, Level 2.0
(c) Copyright by IBM Corporation and other(s) 1990, 2010. All Rights
Reserved.

Session established with server ULTSM: AIX
  Server Version 6, Release 2, Level 4.0
  Server date/time: 10/19/12   14:53:02  Last access: 10/19/12   14:43:04

ANS8000I Server command: 'q sess 2508358 f=d'

  Sess Number: 2,508,358
 Comm. Method: Tcp/Ip
   Sess State: SendW
Wait Time: 3.6 M
   Bytes Sent: 477.4 M
  Bytes Recvd: 693
Sess Type: Node
 Platform: TDPO LinuxAMD64
  Client Name: DBULSD4T
  Media Access Status: Current input volumes:  VR0742L3,(2615 Seconds)
User Name:
Date/Time First Data Sent:
   Proxy By Storage Agent:
  Actions: ObjRtrv


ANS8002I Highest return code was 0.

IBM Tivoli Storage Manager
Command Line Administrative Interface - Version 6, Release 2, Level 2.0
(c) Copyright by IBM Corporation and other(s) 1990, 2010. All Rights
Reserved.

Session established with server ULTSM: AIX
  Server Version 6, Release 2, Level 4.0
  Server date/time: 10/19/12   14:53:02  Last access: 10/19/12   14:43:04

ANS8000I Server command: 'q sess 2508359 f=d'

  Sess Number: 2,508,359
 Comm. Method: Tcp/Ip
   Sess State: SendW
Wait Time: 3.6 M
   Bytes Sent: 546.1 M
  Bytes Recvd: 693
Sess Type: Node
 Platform: TDPO LinuxAMD64
  Client Name: DBULSD4T
  Media Access Status: Current input volumes:  VR0746L3,(2615 Seconds)
User Name:
Date/Time First Data Sent:
   Proxy By Storage Agent:
  Actions: ObjRtrv


ANS8002I Highest return code was 0.

IBM Tivoli Storage Manager
Command Line Administrative Interface - Version 6, Release 2, Level 2.0
(c) Copyright by IBM Corporation and other(s) 1990, 2010. All Rights
Reserved.

Session established with server ULTSM: AIX
  Server Version 6, Release 2, Level 4.0
  Server date/time: 10/19/12   14:53:17  Last access: 10/19/12   14:43:04

ANS8000I Server command: 'q sess 2508358 f=d'

  Sess Number: 2,508,358
 Comm. Method: Tcp/Ip
   Sess State: SendW
Wait Time: 9 S
   Bytes Sent: 517.4 M
  Bytes Recvd: 693
Sess Type: Node
 Platform: TDPO LinuxAMD64
  Client Name: DBULSD4T
  Media Access Status: Current input volumes:  VR0742L3,(2630 Seconds)
User Name:
Date/Time First Data Sent:
   Proxy By Storage Agent:
  Actions: ObjRtrv


ANS8002I Highest return code was 0.

IBM Tivoli Storage Manager
Command Line Administrative Interface - Version 6, Release 2, Level 2.0
(c) Copyright by IBM Corporation and other(s) 1990, 2010. All Rights
Reserved.

Session established with server ULTSM: AIX
  Server Version 6, Release 2, Level 4.0
  Server date/time: 10/19/12   14:53:17  Last access: 10/19/12   14:53:17

ANS8000I Server command: 'q sess 2508359 f=d'

  Sess Number: 2,508,359
 Comm. Method: Tcp/Ip
   Sess State: SendW
Wait Time: 9 S
   Bytes Sent: 591.8 M

Re: EMERGENCY -- NEED TO RESTORE ANOTHER NODE'S DATA IN WINDOWS

2012-10-18 Thread Shawn Drew
Use the grant proxynode command and the -asnodename client option.

PS Don't do the CAPS thing.
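
Something along these lines (node names, date and paths are placeholders):

  grant proxynode target=NODE1 agent=NODE2

then, from the client on node2:

  dsmc restore -asnodename=NODE1 -pitdate=10/17/2012 -subdir=yes "\\node1\c$\data\*" d:\restore\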

Regards,
Shawn

Shawn Drew





Internet
g...@bsu.edu

Sent by: ADSM-L@VM.MARIST.EDU
10/18/2012 02:55 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
[ADSM-L] EMERGENCY -- NEED TO RESTORE ANOTHER NODE'S DATA IN WINDOWS






We have had a serious corruption issue, and need to do a point-in-time
restore of data backed up by node1 onto node2.  However, we cannot be logged in
as node1, because it is still active and will be backed up tonight during
the restore.

Need a quick answer PLEASE.





Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310







Re: Server media mount not possible

2012-10-12 Thread Shawn Drew
TOC data is different than the restore data.   Look at the copygroup for
the NAS node.  You will see a Copy Destination which shows the storage
pool that the backups go to, and a TOC Destination which shows the
storage pool that the TOC will go to.  If you are using a native NDMP dump
format for the NDMP backups, the TOC will have to be a different storage
pool than the actual backup data.

It is normally best practice to store TOC data on disk with a FILE
deviceclass so browsing backups is slightly less horrible.  Once you
start the restore, it will need to mount the actual tape that has the
data.  Once a mount fails, TSM will mark the tape unavailable.  You need
to manually set the tape back to READONLY or READWRITE with upd vol.
Do a q vol acc=unav to find any tapes that are set to unavailable after
failing a mount.

Also, just some random info: look at q status.  There is a line in there
related to how long a TOC will stay in memory.
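
For example (the volume name is a placeholder):

  q vol access=unavailable
  update volume AB1234L4 access=readwrite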


Regards,
Shawn

Shawn Drew





Internet
avalnch...@yahoo.com

Sent by: ADSM-L@VM.MARIST.EDU
10/11/2012 08:49 PM
Please respond to
avalnch...@yahoo.com


To
ADSM-L
cc

Subject
[ADSM-L] Server media mount not possible






I thought I'd throw this out there for ideas since I'm just being exposed
to NAS backups and restores. I believe I got a previous post yesterday
figured out and was finally able to move on to testing a restore. Now I'm
getting a different error but I'm a bit confused as to why. I went through
lots of config and permissions info already with a valuable source and got
these errors once that was complete.

 Here's what I did once I was able to see the vol the data needs to be
restored to.

1. open the web gui, login with an admin id to restore some NAS data.
2. Used point in time to define which Full would have this nodes data on.
At this point the server mounted a tape to display the TOC.
3. I found the client machine in which I wanted to restore data and
checked the C drive to restore.
4. Selected a vol from the dropdown to restore to and clicked restore.
5. Within a few seconds I received a popup with the error Server media
mount not possible.

I watched this process from the TSM server and during step 2 I watched the
server mount the tape to draw the TOC from. While the tape was still
mounted, Idle, I clicked the restore and while waiting saw the tape was
still idle and at that point received the error. Within a few seconds the
tape dismounted, which makes me believe it was not requested for anything.

I tried this a second time with a completely different node and file and
can see in the activity log it tried to mount a tape that was not
available in the library. Interestingly enough I received the same error
message but it I see in the activity log the tape it was looking for. I
tried a second time to restore the node I really need data from. This time
the TOC seemed to be still in memory so it did not mount the tape
initially. When I actually started the restore I watched the server again
and it never even received a request to mount a tape. No mount messages,
no tape unavailable messages in the logs and it failed immediately also.

I'm confused as to why no tape mount request happened either time and it's
more confusing because there WAS a tape mounted to build the TOC. I assume
the rest of the data is on that tape, and it proves the system is actually
mounting tapes, but even if the data spans multiple tapes there is no
indication in the logs stating a tape is not available.

Anyone have any idea what else I can look at? I already have an open PMR
which I will continue to work on tomorrow but I thought I'd throw this out
there anyway.



Thank You
Geoff Gill






Re: Server media mount not possible

2012-10-12 Thread Shawn Drew
FYI,

maxnummp defaults to 1 but is ignored by nodes of type NAS or SERVER
(help update node)


Regards,
Shawn

Shawn Drew




Internet
bcolw...@draper.com

Sent by: ADSM-L@VM.MARIST.EDU
10/12/2012 05:19 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
Re: [ADSM-L] Server media mount not possible






Hi Geoff,

The messages manual says: "Ensure that the MAXNUMMP (maximum number of
mount points) defined on the server for this node is greater than 0."

What is the maxnummp for the node?

In version 6, I set it for all nodes to 6.

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Geoff Gill
Sent: Thursday, October 11, 2012 8:49 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Server media mount not possible

I thought I'd throw this out there for ideas since I'm just being exposed
to NAS backups and restores. I believe I got a previous post yesterday
figured out and was finally able to move on to testing a restore. Now I'm
getting a different error but I'm a bit confused as to why. I went through
lots of config and permissions info already with a valuable source and got
these errors once that was complete.

 Here's what I did once I was able to see the vol the data needs to be
restored to.

1. open the web gui, login with an admin id to restore some NAS data.
2. Used point in time to define which Full would have this nodes data on.
At this point the server mounted a tape to display the TOC.
3. I found the client machine in which I wanted to restore data and
checked the C drive to restore.
4. Selected a vol from the dropdown to restore to and clicked restore.
5. Within a few seconds I received a popup with the error Server media
mount not possible.

I watched this process from the TSM server and during step 2 I watched the
server mount the tape to draw the TOC from. While the tape was still
mounted, Idle, I clicked the restore and while waiting saw the tape was
still idle and at that point received the error. Within a few seconds the
tape dismounted, which makes me believe it was not requested for anything.

I tried this a second time with a completely different node and file and
can see in the activity log it tried to mount a tape that was not
available in the library. Interestingly enough I received the same error
message but it I see in the activity log the tape it was looking for. I
tried a second time to restore the node I really need data from. This time
the TOC seemed to be still in memory so it did not mount the tape
initially. When I actually started the restore I watched the server again
and it never even received a request to mount a tape. No mount messages,
no tape unavailable messages in the logs and it failed immediately also.

I'm confused as to why no tape mount request happened either time and it's
more confusing because there WAS a tape mounted to build the TOC. I assume
the rest of the data is on that tape, and it proves the system is actually
mounting tapes, but even if the data spans multiple tapes there is no
indication in the logs stating a tape is not available.

Anyone have any idea what else I can look at? I already have an open PMR
which I will continue to work on tomorrow but I thought I'd throw this out
there anyway.



Thank You
Geoff Gill





Re: Server initialization messages

2012-10-02 Thread Shawn Drew
You can start the process in the foreground within a GNU screen session.
 If your ssh window detaches, it just detaches from screen, but the
terminal stays alive.   (Sort of like RDP or VNC, only a text version of
it.)

Otherwise, I use dsmulog to keep this stuff.
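
A minimal sketch of the screen approach (paths are assumptions; run it as the
instance owner):

  screen -S tsm1                                    # start a named session
  cd /home/tsminst1
  /opt/tivoli/tsm/server/bin/dsmserv                # server in the foreground
  # detach with Ctrl-a d; the server keeps running even if the VPN drops
  screen -r tsm1                                    # reattach later from any login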

Regards,
Shawn

Shawn Drew





Internet
thomas.den...@jeffersonhospital.org

Sent by: ADSM-L@VM.MARIST.EDU
10/02/2012 04:17 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
[ADSM-L] Server initialization messages






We sometimes start a TSM server in the foreground so that we can see
the initialization messages. However, this is potentially risky in
our environment if a TSM problem occurs outside of office hours.
The VPN facilities I and my coworkers use to log on to Unix and Linux
servers from our homes are subject to timeouts based on the time
since the connection was made rather than the time since the last
activity.

Is there any way to make the initialization messages available for
inspection without the risk that the termination of a specific
terminal connection will bring down the TSM server?

We have a TSM 5.5 server and several TSM 6.2 servers. All of our
TSM servers run under zSeries Linux.

Thomas Denier
Thomas Jefferson University Hospital




TSM for Virtual Environments - Reporting

2012-09-28 Thread Shawn Drew
Quick question, may be a long answer.
How is everyone reporting on successes/failures for Full VM backups using
TSM for VE?  My reports are grabbing lines from the actlog and it is
ugly.

2012-09-25  11:40:27ANE4142I   virtual machine NODE1 backed up
to nodename DATAMOVER1
2012-09-25  11:40:27ANE4142I   virtual machine NODE2 backed up
to nodename DATAMOVER1
2012-09-25  11:40:27ANE4143I Total number of virtual machines
failed: 0
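
One way to pull just those lines out in a script (a sketch, not how the report
above was built; adjust the message number and time window to taste):

  select date_time, nodename, message from actlog where originator='CLIENT' and msgno=4142 and date_time>current_timestamp-24 hours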



Regards,
Shawn

Shawn Drew




Re: TSM for Virtual Environments - Reporting

2012-09-28 Thread Shawn Drew
Understood.  I'm looking for more of a q event type output.  Maybe from
a third party reporting system.  Maybe even the IBM reporting and
monitoring if anyone can suggest something.   Just a one-liner, vm name,
success, fail, date, etc.

Regards,
Shawn

Shawn Drew





Internet
rco...@cppassociates.com

Sent by: ADSM-L@VM.MARIST.EDU
09/28/2012 05:16 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
Re: [ADSM-L] TSM for Virtual Environments - Reporting






Shawn,

Are you seeing these messages?

ANE4146I Starting Full VM backup of Virtual Machine 'vmname'
ANE4147I Successful Full VM backup of Virtual Machine 'vmname'
ANE4148E Full VM backup of Virtual Machine 'vmname' failed with RC rc

Do they have session/process numbers?

There are a host of other VM related messages that should help describe
the
status...

Richard

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Shawn Drew
Sent: Friday, September 28, 2012 4:15 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM for Virtual Environments - Reporting

Quick question, may be a long answer.
How is everyone reporting on success/failures for Full VM backups using
TSM
for VE.  My reports are grabbing lines from the actlog and it is uugly.

2012-09-25  11:40:27ANE4142I   virtual machine NODE1 backed up
to nodename DATAMOVER1
2012-09-25  11:40:27ANE4142I   virtual machine NODE2 backed up
to nodename DATAMOVER1
2012-09-25  11:40:27ANE4143I Total number of virtual machines
failed: 0



Regards,
Shawn

Shawn Drew







Re: NDMP Restore

2012-09-24 Thread Shawn Drew
Yes, this will work.  You just need to update the datamover definitions
with the IP/user/password of the NetApp you are keeping.  You will still
perform the restore as if you were restoring to the old server, but it
will redirect itself if the datamover definition is set up correctly.

I have only done this on TSM 5.x.  I've backed up from NetApp 7.x and
restored to 8.x.
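
The update itself is roughly this (datamover name, address and credentials are
placeholders for the filer you are keeping):

  update datamover OLDFILER1 hladdress=10.1.2.50 userid=ndmpuser password=xxxxx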

Regards,
Shawn

Shawn Drew





Internet
christian.svens...@cristie.se

Sent by: ADSM-L@VM.MARIST.EDU
09/24/2012 04:23 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
[ADSM-L] NDMP Restore






Hi Everyone,
I hope the summer has been great for you all.

We are now looking to remove all of a customer's NetApps and replace them with
much better storage from another vendor.
But we need to keep the backups for those NetApps for 6 months to 1 year; instead
of keeping all of them, which would cost a lot of $$$, we are thinking of keeping
only one NetApp and not all of them.

My question to you all is whether anyone has tested restoring one NetApp's NDMP
dumps to another NetApp with a different hostname and OS level.

The source NetApps have versions 7.x and 8.x, and the one we are keeping will have OS
level 8.0.1.
The TSM server version will be 6.2 or 6.3, depending on what the requirement
will be.

Thanks in advance
Christian Svensson



This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that certain
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.


Re: Actual TSM client storage utilization using Data Domain

2012-09-20 Thread Shawn Drew
I'm still not 100% sure where that deduplicated data is accounted for in the
global statistic.  The way it is described is that it is the size of the
file after deduplication.
Does that mean it is the amount of unique data in that file?  If so, does
that mean the data that was not unique is accounted for in another file
that was, presumably, the first to add that data to the repository?



Regards,
Shawn

Shawn Drew




Internet
rrho...@firstenergycorp.com

Sent by: ADSM-L@VM.MARIST.EDU
09/20/2012 11:03 AM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
Re: [ADSM-L] Actual TSM client storage utilization using Data Domain






I recently got a first cut at some scripts that give us the DD dedup
stats per TSM node.  It's not pretty, but it does seem to work.  But it
requires that the file pool be collocated.  That way each node uses
separate file volumes.

The logic goes like this:

- file pool on the DD MUST be collocated - each node has its own vols
- for each node
  - get list of vols via q nodedata node
  - for each volume
    - on the DD run filesys show compression vol_file_name
    - sum the cmd output for Original Bytes, Global Comp Bytes, Local
      Comp Bytes
  - after all vols for the node have been processed,
    compute the overall comp ratio as
    (sum of Original Bytes / sum of Local Comp Bytes)

So basically it's just: get a list of vols per node and sum the results of
the filesys show comp commands.  The fun is translating the TSM vol name
into the DD internal path for the filesys cmd.
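
A stripped-down sketch of that loop (the DD login, the volume-name-to-DD-path
translation, and the field labels in the filesys show compression output are
all site-specific assumptions here - adjust them to your setup):

#!/bin/ksh
# Sum DD compression stats per TSM node (file pool must be collocated by node)
ADMC="dsmadmc -id=admin -password=xxxxx -dataonly=yes"
for node in NODE1 NODE2 NODE3; do
  $ADMC "select distinct volume_name from volumeusage where node_name='$node' and stgpool_name='DDPOOL'" |
  while read vol; do
    # translate the TSM volume name into the DD internal path (site-specific)
    ddpath=$(echo "$vol" | sed 's#^/tsmpool#/backup/ost/tsmpool#')
    ssh sysadmin@dd880 "filesys show compression $ddpath"
  done |
  awk -v n="$node" '/Original Bytes/      {ob  += $NF}
                    /Globally Compressed/ {gcb += $NF}
                    /Locally Compressed/  {lcb += $NF}
                    END {printf "%s OB=%s GCB=%s LCB=%s CR=%.2f\n",
                         n, ob, gcb, lcb, (lcb ? ob/lcb : 0)}'
done

I actually pulled the volume list with q nodedata rather than the volumeusage
table, but either gets you the per-node volumes on a collocated pool.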

Here are a few lines from my report (with names changed to protect the
innocent).

  Original Bytes = what comes into the DD
  Global Comp Bytes = size after dedup
  Local Comp Bytes = size after zip - this is what gets written to disk

tsm   node    #vols     SumOrigBytes     SumGlobalCompBytes  SumLocalCompBytes  Ratio
----  -----   --------  ---------------  ------------------  -----------------  ---------
tsm7  node1   Vols= 1   OBmb= 24954.14   GCBmb= 3736.81      LCBmb= 1576.03     CR= 15.83
tsm7  mode2   Vols= 1   OBmb= 20747.89   GCBmb= 2632.50      LCBmb= 1116.19     CR= 18.58
tsm7  node3   Vols= 1   OBmb= 28528.93   GCBmb= 5200.65      LCBmb= 2609.92     CR= 10.93
tsm1  node4   Vols= 9   OBmb= 250332.31  GCBmb= 7868.56      LCBmb= 4221.41     CR= 59.30
tsm1  node5   Vols= 17  OBmb= 495973.34  GCBmb= 43150.37     LCBmb= 18792.24    CR= 26.39
tsm1  node6   Vols= 29  OBmb= 853369.75  GCBmb= 126286.69    LCBmb= 36064.45    CR= 23.66
tsm6  node7   Vols= 18  OBmb= 502341.18  GCBmb= 16647.87     LCBmb= 8263.54     CR= 60.79
tsm2  node8   Vols= 2   OBmb= 43620.57   GCBmb= 11829.33     LCBmb= 2366.72     CR= 18.43
tsm1  node9   Vols= 3   OBmb= 65267.99   GCBmb= 11109.95     LCBmb= 3286.02     CR= 19.86
(and on and on)

The filesys show comp cmd gives the stats as of when the file was WRITTEN
to the DD.  I suggest reading up on it and the quirks of what/how it
reports the comp info.

Anyway, that's what I did.

Rick






From:   Rick Adamson rickadam...@winn-dixie.com
To: ADSM-L@VM.MARIST.EDU
Date:   09/20/2012 10:08 AM
Subject:Actual TSM client storage utilization using Data Domain
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



I have a situation where I suspect one, or more, of my TSM clients is
rapidly consuming a large amount of storage space and I am at a loss as to
how to accurately determine the culprit.

The TSM server is configured with a Data Domain DD880 as primary storage,
using a FILE device type, so obviously when I query the occupancy table in
TSM it provides raw numbers that do not reflect the de-dup and compression
of the DD device. Querying compression on the DD only provides numbers per
storage area, or context.

Has anyone found a way to determine the actual amount of storage that
a particular client is using within the Data Domain?

All comments welcome.

TSM Server 5.5 on Windows and Data Domain ddos 5.1.

~Rick




-
The information contained in this message is intended only for the
personal and confidential use of the recipient(s) named above. If
the reader of this message is not the intended recipient or an
agent responsible for delivering it to the intended recipient, you
are hereby notified that you have received this document in error
and that any review, dissemination, distribution, or copying of
this message is strictly prohibited. If you have received this
communication in error, please notify us immediately, and delete
the original message.




Re: Unexplained ANS1071E Invalid domain Name entered: errors

2012-09-12 Thread Shawn Drew
Post the contents of your dsm.opt, specifically the DOMAIN lines.  The
problem is most likely there.
If that was the contents of the DOMAIN, /.../ is not valid.  That is
syntax for include/exclude, not domains.
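
For reference, a rough example of the difference (filespace names here are
made up; check the B/A client manual for your level):

* DOMAIN statements take filespace names or ALL-LOCAL, optionally with a
* leading "-" to drop a filespace from the default domain:
DOMAIN ALL-LOCAL -/u01 -/u02

* the /.../ wildcard only belongs in include/exclude patterns:
EXCLUDE /u01/.../*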

Regards,
Shawn

Shawn Drew




Internet
george.huebsch...@gmail.com

Sent by: ADSM-L@VM.MARIST.EDU
09/12/2012 10:34 AM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
[ADSM-L] Unexplained ANS1071E Invalid domain Name entered: errors






Can anyone help me identify the cause of these error messages?

Client AIX 6.1.0.0, TSM 6.2.1.0
Server AIX 6.1.0.0, TSM 5.5.1.0


   - I am getting failed backups on a daily basis.
   - I get a consistent list of invalid domain names.
   - There ARE exclude statements naming these filesystems.
   - There is no explicit domain statement in the dsm.sys.
   - The filesystems do exist:

/u01
/u02
/.../
/u19
/u20


   - There is a client optionset, but it does not seem relevant to the
   issue:

OptionsetDescription

---
AIX_OPTIONS  AIX Server-based Client
  Option Set

  Option: INCLEXCL
 Sequence number: 0
Use Option Set Value (FORCE): No
Option Value: exclude /.../core

  Option: INCLEXCL
 Sequence number: 1
Use Option Set Value (FORCE): No
Option Value: exclude.dir /unix/

  Option: INCLEXCL
 Sequence number: 2
Use Option Set Value (FORCE): No
Option Value: exclude /unix

  Option: INCLEXCL
 Sequence number: 3
Use Option Set Value (FORCE): No
Option Value: exclude /var/adm/pacct

  Option: INCLEXCL
 Sequence number: 4
Use Option Set Value (FORCE): No
Option Value: include /cdunix/.../* seq=4

  Option: INCLEXCL
 Sequence number: 5
Use Option Set Value (FORCE): No
Option Value: include /extcdp/.../* seq=5

  Option: QUIET
 Sequence number: 0
Use Option Set Value (FORCE): No
Option Value: yes




   - dsm.sys:

root@SomeXYZServername:/ grep -v \*
/usr/tivoli/tsm/client/ba/bin/dsm.sys|
grep \
SErvername  Server1
   COMMMethod TCPip
   TCPPort1500
   TCPServeraddress  10.xxx.xxx.xx
NODENAME   SomeXYZServername
PASSWORDDIR /etc/security
PASSWORDAccess Generate
Encryptkey Generate
QUERYSCHedperiod 1
SCHEDMODe POLLING
SCHEDLOGName /sys/logs/tsm/dsmsched.log
SCHEDLOGRetention 14 D
ERRORLOGName /sys/logs/tsm/dsmerror.log
ERRORLOGRetention 14 D
COMPression No
LARGECOMmbuffers yes
TCPB 32
TCPWindowsize 64
TCPNodelay Yes
TXNBytelimit 25600
  (((Previously, these excludes were written as exclude /uxx/.../*)))
fyi this line is not part of the dsm.sys
exclude.fs /u01
exclude.fs /u02
exclude.fs /u03
exclude.fs /u04
exclude.fs /u05
exclude.fs /u06
exclude.fs /u07
exclude.fs /u08
exclude.fs /u09
exclude.fs /u10
exclude.fs /u11
exclude.fs /u12
exclude.fs /u13
exclude.fs /u14
exclude.fs /u15
exclude.fs /u16
exclude.fs /u17
exclude.fs /u18
exclude.fs /u19
exclude.fs /u20
include/u07/.../aprefix*
include/u07/.../bprefix*
include/u07/.../cprefix*

include.encrypt /.../*

ServerName ORA
TCPServeraddress   10.xxx.xxx.xx
TCPPort1500
COMMmethod TCPip
HTTPport   1581
nodename   SomeXYZServername-TDP
PASSWORDDIR /usr/tivoli/tsm/client/oracle/bin64
PASSWORDAccess prompt
QUERYSCHedperiod 1
SCHEDMODe PRompted
SCHEDLOGName /sys/logs/tsm/orasched-dsmsched.log
SCHEDLOGRetention 30 D
ERRORLOGName /sys/logs/tsm/orasched-dsmerr.log
ERRORLOGRetention 30 D
COMPression No
LARGECOMmbuffers yes
TCPB 32
TCPWindowsize 64
TCPNodelay Yes
TXNBytelimit 25600

ServerName ORASCHED
TCPServeraddress   10.xxx.xxx.xx
TCPPort1500
COMMmethod TCPip
HTTPport   1581
nodename  SomeXYZServername-TDP
PASSWORDDIR /usr/tivoli/tsm/client/oracle/bin64
PASSWORDAccess prompt
QUERYSCHedperiod 1
SCHEDMODe PRompted
SCHEDLOGName /sys/logs/tsm/orasched-dsmsched.log
SCHEDLOGRetention 30 D
ERRORLOGName /sys/logs/tsm/orasched-dsmerr.log
ERRORLOGRetention 30 D
COMPression No
LARGECOMmbuffers yes
TCPB 32
TCPWindowsize 64
TCPNodelay Yes
TXNBytelimit 25600


--
George Huebschman
Cell (301) 875-1227


When you have a choice, spend money where you would prefer to work if you
had NO choice.




Re: Unexplained ANS1071E Invalid domain Name entered: errors

2012-09-12 Thread Shawn Drew
Perhaps IC68460 is related:

http://www-01.ibm.com/support/docview.wss?uid=swg1IC68460

It says the issue is addressed in 6.2.2


Regards,
Shawn

Shawn Drew





Internet
george.huebsch...@gmail.com

Sent by: ADSM-L@VM.MARIST.EDU
09/12/2012 11:47 AM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
Re: [ADSM-L] Unexplained ANS1071E Invalid domain Name entered: errors






Richard,
Ahh, exactly like exclude.dir, the exclude.fs is processed first.  I'll
revert to the old dsm.sys immediately.
I have done the dsmc q inclexcl previously.  I had not looked at q opt.
Is there anything specific I should look for in the options?  I have
included a snippet below.
This is the result of the q inclexcl AFTER I reverted to the old dsm.sys.
This should still be relevant, because the problem existed before I edited
the excludes to exclude.fs.


Snippet from dsmc q opt
 DOMAIN: Default ALL-LOCAL
 DOMAIN.IMAGE:
   DOMAIN.NAS:
  DOMAIN.SNAPSHOT:
DOMAIN.VMFILE:
DOMAIN.VMFULL:
  DOMNODE:

tsm q inclexcl
*** FILE INCLUDE/EXCLUDE ***
Mode Function  Pattern (match from top down)  Source File
 - -- -
No exclude filespace statements defined.
Excl Directory /unix/ Server
Excl Directory /.../.TsmCacheDir  TSM
Excl Directory /.../.SpaceMan Operating System
Include All   /extcdp/.../*  Server
Include All   /cdunix/.../*  Server
Exclude All   /var/adm/pacct Server
Exclude All   /unix  Server
Exclude All   /.../core  Server
Exclude HSM   /etc/adsm/SpaceMan/config/.../* Operating System
Exclude HSM   /.../.SpaceMan/.../*   Operating System
Exclude Restore   /.../.SpaceMan/.../*   Operating System
Exclude Archive   /.../.SpaceMan/.../*   Operating System
Include Encrypt   /.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Include All   /u0?/.../ias*
 /usr/tivoli/tsm/client/ba/bin/dsm.sys
Include All   /u0?/.../rman*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Include All   /u0?/.../fra*
 /usr/tivoli/tsm/client/ba/bin/dsm.sys
Exclude All   /u20/.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Exclude All   /u19/.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Exclude All   /u18/.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Exclude All   /u17/.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Exclude All   /u16/.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Exclude All   /u15/.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Exclude All   /u14/.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Exclude All   /u13/.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Exclude All   /u12/.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Exclude All   /u11/.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Exclude All   /u10/.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Exclude All   /u09/.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Exclude All   /u08/.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Exclude All   /u07/.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Exclude All   /u06/.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Exclude All   /u05/.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Exclude All   /u04/.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Exclude All   /u03/.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Exclude All   /u02/.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
Exclude All   /u01/.../*
/usr/tivoli/tsm/client/ba/bin/dsm.sys
No DFS include/exclude statements defined.

Shawn,  Nothing here:

root@SomeXYZServerName:/ cat /usr/tivoli/tsm/client/ba/bin/dsm.opt

* Tivoli Storage Manager   *
*  *
* Sample Client User Options file for AIX and SunOS (dsm.opt.smp)  *


*  This file contains an option you can use to specify the TSM
*  server to contact if more than one is defined in your client
*  system options file (dsm.sys).  Copy dsm.opt.smp to dsm.opt.
*  If you enter a server name for the option below, remove the
*  leading asterisk (*).



* SErvername   A server name defined in the dsm.sys file
SErvername  Server1

root@SomeXYZServerName:/ 

On Wed, Sep 12, 2012 at 11:07 AM, Richard Sims r...@bu.edu wrote:

 As always, when pursuing an error relating to options files, do 'dsmc q
 opt' and 'dsmc q inclexcl' rather than look at files, so that TSM has an
 opportunity to point out what's wrong.  It's good practice to perform
those
 queries after making changes.  (There's no guarantee that the commands

Re: AIX Atape request

2012-09-05 Thread Shawn Drew
It is a setmode problem that resulted from an incompatibility between the
atape driver and firmware.  I assumed it was the drive firmware, but now
I'm thinking it might have something to do with the library.  It
apparently only happens in a Quantum i6000 with IBM LTO5 drives.
IBM says that it is a quantum problem, but quantum says it's IBM that
provides them with the firmware in the first place.

Here is an output from the trace:
16:35:38.737 [131][psntpop.c][2871][DetermineResult]:Error performing
SETMODE operation on drive WB_58 (/dev/rmt58); errno = 22
16:35:38.738 [131][psntpop.c][2931][DetermineResult]:Drive WB_58
(/dev/rmt58):
sense=70.00.05.00.00.00.00.58.00.00.00.00.1A.00.30.00.10.41.80.0A.00.01.
54.31.35.32.33.31.4C.00.00.00.07.BC.26.00.00.00.00.26.80.55.30.4C.34.00.
00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.
00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.45.50.35.34.52.46.31.4A.
52.31.
16:35:38.738 [131][output.c][6404][PutConsoleMsg]:ANR8302E I/O error on
drive WB_58 (/dev/rmt58) with volume  (OP=SETMODE, Error Number=22,
CC=0, rc = 1, KEY=05, ASC=1A,
ASCQ=00,~SENSE=70.00.05.00.00.00.00.58.00.00.00.00.1A.00.30.00.10.41.80.
0A.00.01.54.31.35.32.33.31.4C.00.00.00.07.BC.26.00.00.00.00.26.80.55.30.
4C.34.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.
00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.00.45.50.35.34.52.
46.31.4A.52.31,~Description=An undetermined error has occurred).  Refer
to Appendix C in the 'Messages' manual for recommended action.~


I tried every atape version going backwards from the current 12.5.2 down,
and 12.3.7.0 is the newest version that does not have this bug.  I also
tried every combination of drive firmware with each of these versions
going back to Jan 2011.   All had this problem after mounting a tape.

12.3.7.0 didn't work for me, because there was a separate
encryption-related bug that was fixed in 12.3.9!
I finally downgraded to 12.3.2.0 and everything is working now.   This is
only a temporary environment so it won't affect me in the long-run.
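
For anyone who ends up doing the same dance, the check/downgrade on AIX is
roughly (fileset name and installp flags from memory - stop TSM, vary the
drives off first, and verify the flags before running):

lslpp -l Atape.driver            # show the currently installed level
installp -u Atape.driver         # remove the current driver
installp -acgXd . Atape.driver   # install the older level from the current directory
cfgmgr                           # rebuild the rmt/smc devices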



Regards,
Shawn





Internet
y...@npp-asia.com

Sent by: ADSM-L@VM.MARIST.EDU
09/04/2012 11:42 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
Re: [ADSM-L] AIX Atape request






Hi Shawn,

Can you share exactly what bug necessitated downgrading
your Atape driver version?


Rgrds,

Yudi

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Shawn Drew
Sent: Wednesday, 29 August, 2012 1:24 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] AIX Atape request

Greetings all,

I ran into a bug which necessitates downgrading my Atape version.  The
only older version I have is 12.3.7.0, which is subject to another bug
that is also affecting me.  I need to get an Atape version between 12.3.9
and 12.4.8.0
IBM fix central does not have it and IBM support tells me Quantum will
have it since it is the result of their incompatibility or something.
Quantum only links to IBM.  Google is not helping either.

Does someone have their own little archive history of Atape?  If so,
please send me a copy or send me a link.  I'm shocked that this stuff is
not available out-there
After this, I will definitely be maintaining my own archive.

Regards,
Shawn

Shawn Drew





This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that certain
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.


AIX Atape request

2012-08-28 Thread Shawn Drew
Greetings all,

I ran into a bug which necessitates downgrading my Atape version.  The
only older version I have is 12.3.7.0, which is subject to another bug
that is also affecting me.  I need to get an Atape version between 12.3.9
and 12.4.8.0
IBM fix central does not have it and IBM support tells me Quantum will
have it since it is the result of their incompatibility or something.
Quantum only links to IBM.  Google is not helping either.

Does someone have their own little archive history of Atape?  If so,
please send me a copy or send me a link.  I'm shocked that this stuff is
not available out there.
After this, I will definitely be maintaining my own archive.

Regards,
Shawn

Shawn Drew


This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that certain
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.


Re: AIX Atape request

2012-08-28 Thread Shawn Drew
Thanks all,
I received several versions of Atape.  ADSM-L is great!

Regards,
Shawn

Shawn Drew





Internet
Shawn DREW

Sent by: ADSM-L@VM.MARIST.EDU
08/28/2012 02:24 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
[ADSM-L] AIX Atape request






Greetings all,

I ran into a bug which necessitates downgrading my Atape version.  The
only older version I have is 12.3.7.0, which is subject to another bug
that is also affecting me.  I need to get an Atape version between 12.3.9
and 12.4.8.0
IBM fix central does not have it and IBM support tells me Quantum will
have it since it is the result of their incompatibility or something.
Quantum only links to IBM.  Google is not helping either.

Does someone have their own little archive history of Atape?  If so,
please send me a copy or send me a link.  I'm shocked that this stuff is
not available out-there
After this, I will definitely be maintaining my own archive.

Regards,
Shawn

Shawn Drew





This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that certain
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.


Re: NetApp for Primary Disk Pool

2012-08-28 Thread Shawn Drew
Block storage was mentioned, so I'm assuming SAN/iSCSI.   We do have a
branch location using NetApp SAN LUNs for db/log/primary disk.
We are also using remote NFS for a copy pool.  The SAN stuff works fine;
the remote NFS is super slow, but I'll blame that on the latency.
A lot of people use NetApp for SAN/block access nowadays.  I don't think
it's anything to be worried about.


Regards,
Shawn

Shawn Drew





Internet
rrho...@firstenergycorp.com

Sent by: ADSM-L@VM.MARIST.EDU
08/28/2012 08:37 AM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
Re: [ADSM-L] NetApp for Primary Disk Pool






as a block storage TSM primary disk pool

Do you mean that the disk pool would be on san/iscsi luns, or, nfs/cifs
shares?






From:   Mayhew, James jmay...@healthplan.com
To: ADSM-L@VM.MARIST.EDU
Date:   08/27/2012 04:28 PM
Subject:Re: NetApp for Primary Disk Pool
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



BUMP... Does anyone have thoughts on this?

From: Mayhew, James
Sent: Thursday, August 23, 2012 6:42 PM
To: 'ADSM-L@vm.marist.edu'
Subject: NetApp for Primary Disk Pool

Hello All,

We are considering using a NetApp V6210 with some attached shelves as a
block storage TSM primary disk pool. Do any of you have any experience
using NetApp storage as a TSM primary disk pool? If so, how was your
experience with this solution? Did you have any performance issues? How
was it with sequential workloads? Any insight that you all can provide is
greatly appreciated.

Best Regards,

James Mayhew
Storage Engineer
HealthPlan Services
E-mail: jmay...@healthplan.com


_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_
CONFIDENTIALITY NOTICE: If you have received this email in error, please
immediately notify the sender by e-mail at the address shown.This email
transmission may contain confidential information.This information is
intended only for the use of the individual(s) or entity to whom it is
intended even if addressed incorrectly. Please delete it from your files
if you are not the intended recipient. Thank you for your compliance.




-
The information contained in this message is intended only for the
personal and confidential use of the recipient(s) named above. If
the reader of this message is not the intended recipient or an
agent responsible for delivering it to the intended recipient, you
are hereby notified that you have received this document in error
and that any review, dissemination, distribution, or copying of
this message is strictly prohibited. If you have received this
communication in error, please notify us immediately, and delete
the original message.



This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that certain
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.


Re: NAS Restore failure, any thoughts?

2012-08-24 Thread Shawn Drew
Here are just a couple which are even in the latest 5.5.6.  You will need
an efix to address them.  You should open a case with IBM

http://www-01.ibm.com/support/docview.wss?uid=swg1IC83561
http://www-01.ibm.com/support/docview.wss?uid=swg1IC79667


Regards,
Shawn

Shawn Drew





Internet
joni.mo...@highmark.com

Sent by: ADSM-L@VM.MARIST.EDU
08/24/2012 09:52 AM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
Re: [ADSM-L] NAS Restore failure, any thoughts?






Does it say what server level the fixes are in?  I'm currently at 5.5.5.0.


Thanks again!

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Remco Post
Sent: Friday, August 24, 2012 9:12 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: NAS Restore failure, any thoughts?

Hi,

IIRC there are some NDMP bugs in your TSM server level, apply hot fixes or
upgrade before continuing.

--

Met vriendelijke groeten/Kind regards,

Remco Post
r.p...@plcs.nl
+31 6 24821622



On 24 aug. 2012, at 14:46, Moyer, Joni M joni.mo...@highmark.com
wrote:

 Hi Jeff,

 Here is the everything associated with that process.  Any ideas?  Thanks
again!

 Date/Time Message
 
--
 08/23/12 13:55:54 ANR0984I Process 23348 for RESTORE NAS (SELECTIVE)
started
   in the BACKGROUND at 13:55:54. (SESSION: 55984,
PROCESS:
   23348)
 08/23/12 13:55:54 ANR1059I Selective restore of NAS node
VNX5481_NAS_3, file
   system /root_vdm_12/HMCH1026_I_bkup, started as
process
   23348 by administrator LIDZR8V.  Specified files
and/or
   directory trees will be restored to destination
/temp_3.
   (SESSION: 55984, PROCESS: 23348)
 08/23/12 13:55:54 ANR0403I Session 55984 ended for node
VNX5481_NAS_3
   (TSMNAS). (SESSION: 55984, PROCESS: 23348)
 08/23/12 13:56:22 ANR8337I NAS volume QA0258 mounted in drive LTO5_5

   (c256t0l0). (SESSION: 55984, PROCESS: 23348)
 08/23/12 13:56:22 ANR0512I Process 23348 opened input volume QA0258.

   (SESSION: 55984, PROCESS: 23348)
 08/23/12 13:57:16 ANR0515I Process 23348 closed volume QA0258.
(SESSION:
   55984, PROCESS: 23348)
 08/23/12 13:58:18 ANR8468I NAS volume QA0258 dismounted from drive
LTO5_5
   (c256t0l0) in library NAS_QI6000_ONSITE. (SESSION:
55984,
   PROCESS: 23348)
 08/23/12 13:58:44 ANR8337I NAS volume QA0265 mounted in drive LTO5_4

   (c272t0l0). (SESSION: 55984, PROCESS: 23348)
 08/23/12 13:58:44 ANR0512I Process 23348 opened input volume QA0265.

   (SESSION: 55984, PROCESS: 23348)
 08/23/12 14:01:28 ANRD_3279216401 ssRtrvRemote(ssremote.c:1811)

   Thread175705: Invalid offset 755.1600443456 for
image
   restore(SESSION: 55984, PROCESS: 23348)
 08/23/12 14:01:28 ANRD Thread175705 issued message  from:

   (SESSION: 55984, PROCESS: 23348)
 08/23/12 14:01:28 ANRD Thread175705  0001c7e8
StdPutText
   (SESSION: 55984, PROCESS: 23348)
 08/23/12 14:01:28 ANRD Thread175705  0001fb90
OutDiagToCons
   (SESSION: 55984, PROCESS: 23348)
 08/23/12 14:01:28 ANRD Thread175705  0001a2d0
outDiagfExt
   (SESSION: 55984, PROCESS: 23348)
 08/23/12 14:01:28 ANRD Thread175705  0001004f8ab8
ssRtrvRemote
   (SESSION: 55984, PROCESS: 23348)
 08/23/12 14:01:28 ANRD Thread175705  0001007f9838
AfRtrvRemoteThr-
   ead  (SESSION: 55984, PROCESS: 23348)
 08/23/12 14:01:28 ANRD Thread175705  00010001509c
StartThread
   (SESSION: 55984, PROCESS: 23348)
 08/23/12 14:01:28 ANR1078E NAS Restore process 23348 terminated -
internal
   server error detected. (SESSION: 55984, PROCESS:
23348)
 08/23/12 14:01:28 ANRD Thread175705 issued message 1078 from:

   (SESSION: 55984, PROCESS: 23348)
 08/23/12 14:01:28 ANRD Thread175705  0001e138
StdPutMsg
   (SESSION: 55984, PROCESS: 23348)
 08/23/12 14:01:28 ANRD Thread175705  000100013cd8 outRptf

   (SESSION: 55984, PROCESS: 23348)
 08/23/12 14:01:28 ANRD Thread175705  0001007f81e4
EndRemoteProc
   (SESSION: 55984, PROCESS: 23348)
 08/23/12 14:01:28 ANRD Thread175705  0001007f98f0
AfRtrvRemoteThr-
   ead  (SESSION: 55984, PROCESS: 23348)
 08/23/12 14:01:28 ANRD Thread175705  00010001509c
StartThread
   (SESSION: 55984, PROCESS: 23348)
 08/23

Re: migrating tape storage pools

2012-08-10 Thread Shawn Drew
Further, it will use the copy pool volumes if they are available and have
an access of readw/reado.  If they are offsite, then it will use the
primary volumes.

Regards,
Shawn

Shawn Drew





Internet
kurt.bey...@vrt.be

Sent by: ADSM-L@VM.MARIST.EDU
08/09/2012 10:15 AM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
Re: [ADSM-L] migrating tape storage pools






Hi Andy,

A 'move data' on a copy stg pool volume works; the primary stg pool
volume(s) are used for the operation. It just does not work for tapes
that contain NDMP backups (primary or copy).

regards,
Kurt

Van: ADSM: Dist Stor Manager [ADSM-L@VM.MARIST.EDU] namens
Huebner,Andy,FORT WORTH,IT [andy.hueb...@alconlabs.com]
Verzonden: donderdag 9 augustus 2012 16:00
Aan: ADSM-L@VM.MARIST.EDU
Onderwerp: Re: [ADSM-L] migrating tape storage pools

You cannot use move data on a copy tape.  I have tried.
I am very interested if you find a good solution.  We are moving some of
our copies to a different drive type.


Andy Huebner

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
BEYERS Kurt
Sent: Thursday, August 09, 2012 3:09 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] migrating tape storage pools

Good morning,

We are in the process of migrating several tape storage pools, both
primary and copy, from LTO generation x to LTO generation y.

It is easy for primary storage pools, since the incremental backup
mechanism is taking all the primary storage pools in scope:

* Redirect the backups to an LTO_Y storage pool

* Migrate in the background  the LTO_X storage pool to the LTO_Y
with a duration of x minutes

However this does not work for copy storage pools, since there is a valid
reason why a backup would be kept in multiple copy storage pool volumes.
But this implies that the copy storage pool for generation LTO_Y needs to
be rebuilt from scratch, which is time consuming and expensive (more tape
volumes, more slots, more offsite volumes). Are there really no other
workarounds available?

An option might be that given the fact we use dedicated device classes for
each  sequential storage pool and that multiple libraries will be or are
defined for each LTO generation:


* A DRM volume is linked to a copy storage pool

* The copy storage pool is linked to a device class

* Hence change the library in the device class from LTO_X to LTO_Y
for the copy storage pool

Would this workaround work? Then I could perform a daily move data in the
background to get rid of the LTO_X copy storage pool volumes. Will test
it myself of course.
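
If the server allows it, the change itself would just be (device class and
library names below are placeholders):

update devclass LTOX_COPY_DC library=LTOY_LIB

followed by a daily "move data VOLNAME" against the remaining LTO_X copy
volumes, which should then write onto scratch from the new library.
Whether TSM accepts the device class change while LTO_X volumes are still
defined is exactly the open question, so test before relying on it.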

It would be great too if IBM could consider introducing the concept of a
'copy storage pool group' consisting of multiple copy storage pools that
contains only one backup of each item.  Perhaps I should raise an RFC for it
if other TSM users also find it a good feature. So please provide me some
feedback. Thanks in advance!

Regards,
Kurt





*** Disclaimer ***
Vlaamse Radio- en Televisieomroeporganisatie
Auguste Reyerslaan 52, 1043 Brussel

nv van publiek recht
BTW BE 0244.142.664
RPR Brussel
http://www.vrt.be/gebruiksvoorwaarden

This e-mail (including any attachments) is confidential and may be legally
privileged. If you are not an intended recipient or an authorized
representative of an intended recipient, you are prohibited from using,
copying or distributing the information in this e-mail or its attachments.
If you have received this e-mail in error, please notify the sender
immediately by return e-mail and delete all copies of this message and any
attachments.

Thank you.
*** Disclaimer ***
Vlaamse Radio- en Televisieomroeporganisatie
Auguste Reyerslaan 52, 1043 Brussel

nv van publiek recht
BTW BE 0244.142.664
RPR Brussel
http://www.vrt.be/gebruiksvoorwaarden



This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that certain
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.


Re: snapshotroot for scheduled backups

2012-08-08 Thread Shawn Drew
Support for vFiler volumes will be in the TSM 6.40 client. Note that this
support will require ONTAP version 8.1.1 or greater.

Wow, this is huge!  Is there a feature list posted for 6.4 somewhere?

Regards,
Shawn

Shawn Drew





Internet
tanen...@us.ibm.com

Sent by: ADSM-L@VM.MARIST.EDU
08/08/2012 12:05 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
Re: [ADSM-L] snapshotroot for scheduled backups






Thanks for taking the time to respond.  I'm thinking that if I want to do
this, I'll probably abandon the TSM Scheduler in favor of home-grown
scripting.  But I may just wait to see if snapdiff gets supported on
vFilers, at which point this issue becomes moot for me.

Support for vFiler volumes will be in the TSM 6.40 client. Note that this
support will require ONTAP version 8.1.1 or greater.

Regards,

Pete Tanenhaus
Tivoli Storage Manager Client Development
email: tanen...@us.ibm.com
tieline: 320.8778, external: 607.754.4213

Those who refuse to challenge authority are condemned to conform to it



From:   Paul Zarnowski p...@cornell.edu
To: ADSM-L@vm.marist.edu,
Date:   08/08/2012 11:57 AM
Subject:Re: snapshotroot for scheduled backups
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



Thanks Allen.

At 10:05 AM 8/8/2012, Allen S. Rout wrote:
You're in the position that the snapshot you want to use has a stable
name.  There are folks who have snapshots named related to the date of
consistency-point.

As it turns out, NetApp / nSeries snapshots do have predictable/static
names.

If you're already preschedcmding, you might use that step to calculate
the command lines to run the per-filespace dsmc incr lines, drop
them in a temporary script, and then run that as a COMMAND schedule
instead of an INCREMENTAL sched.

At that point, I'm not sure what the value would be of using the TSM
scheduler, as opposed to (e.g.) cron.

Thanks for taking the time to respond.  I'm thinking that if I want to do
this, I'll probably abandon the TSM Scheduler in favor of home-grown
scripting.  But I may just wait to see if snapdiff gets supported on
vFilers, at which point this issue becomes moot for me.

..Paul

--
Paul ZarnowskiPh: 607-255-4757
CIT Infrastructure / Storage Services Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801Em: p...@cornell.edu



This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that certain
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.


Re: VM Archive

2012-08-01 Thread Shawn Drew
That is a good idea for one-off archives. I might use that for my current
need, but probably just easier installing a client in the guest for
one-off requests.
Automating it, on the other hand,  with windows scripting, recovery agent
cli, and dynamic disks seems way tough

I remember in the old vcb days, you would have to script the whole
process.  i.e. mount the VM's from the data store with vcbmounter and run
a backup from there.
Is it still possible to mount the VM's like that?  I would think the
binaries are included in the VMware tools optional install in the BA
client since that is pretty much what it does for the file-level backup
vm but not sure if that is accessible.  Is there a modern replacement for
vcbmounter ?

Regards,
Shawn

Shawn Drew





Internet
kenbu...@gmail.com

Sent by: ADSM-L@VM.MARIST.EDU
07/31/2012 09:54 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
Re: [ADSM-L] VM Archive






Have you considered mounting the VM backup and then running the archive?
It's similar to a feature Fastback has for moving data to TSM for long
term
storage.

On Mon, Jul 30, 2012 at 1:22 PM, Shawn Drew 
shawn.d...@americas.bnpparibas.com wrote:

 Looked through the manual today and can't find any information on
 archiving VM data.Is there a way to archive data from a VMware guest
 (file-level or image) without installing a client on the actual guest?
 There doesn't seem to be an archive vm command

 Regards,
 Shawn
 
 Shawn Drew


 This message and any attachments (the message) is intended solely for
 the addressees and is confidential. If you receive this message in
error,
 please delete it and immediately notify the sender. Any use not in
accord
 with its purpose, any dissemination or disclosure, either whole or
partial,
 is prohibited except formal approval. The internet can not guarantee the
 integrity of this message. BNP PARIBAS (and its subsidiaries) shall
(will)
 not therefore be liable for the message if modified. Please note that
 certain
 functions and services for BNP Paribas may be performed by BNP Paribas
 RCC, Inc.




--
Ken Bury



This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that certain
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.


Re: VM Archive

2012-07-31 Thread Shawn Drew
IBM just told me this wasn't possible.  If anyone is interested, please
vote on this RFE I just submitted

RFE ID 25065:
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfeCR_ID=25065


Regards,
Shawn

Shawn Drew





Internet
Shawn DREW

Sent by: ADSM-L@VM.MARIST.EDU
07/30/2012 01:22 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
[ADSM-L] VM Archive






Looked through the manual today and can't find any information on
archiving VM data.Is there a way to archive data from a VMware guest
(file-level or image) without installing a client on the actual guest?
There doesn't seem to be an archive vm command

Regards,
Shawn

Shawn Drew





This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that certain
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.


VM Archive

2012-07-30 Thread Shawn Drew
Looked through the manual today and can't find any information on
archiving VM data.Is there a way to archive data from a VMware guest
(file-level or image) without installing a client on the actual guest?
There doesn't seem to be an archive vm command

Regards,
Shawn

Shawn Drew


This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that certain
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.


Re: how to restore files from TSM backup tapes w/o TSM db

2012-07-20 Thread Shawn Drew
TSM is not designed to do this and there is no documentation on recovering
data without the appropriate database.  This is why the manuals stress the
importance of protecting the database.

That said, there are binaries out there - I have seen them on SourceForge
or Google Code - that can dump the contents of an ADSM/TSM tape.  I never
tried them myself, and it would definitely take some time to figure out.

There are also some archiving/recovery companies that are able to read TSM
tapes for indexing and restoring using their proprietary software.

Regards,
Shawn

Shawn Drew





Internet
tsm-fo...@backupcentral.com

Sent by: ADSM-L@VM.MARIST.EDU
07/11/2012 11:10 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
[ADSM-L] how to restore files from TSM backup tapes w/o TSM db






Hi all,
We had a monthly backup job in our TSM system and the data must be
kept forever without purging the TSM database.  Recently, we found that we
couldn't restore some users' databases from 2010.  During the troubleshooting
process, we further found that some users' backup records for 2008, 2009, and
2010 were missing in the TSM database.  We reported the case to IBM and they
said that the data could not be restored without a healthy TSM database.

We have all monthly backup tapes on-hand.  Would you all kindly advise me
how to restore the files in the monthly backup tapes without using the TSM
database?

Thanks a lot.

Regards,
KK

+--
|This was sent by kcheu...@yahoo.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that certain
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.


Re: RMAN direct to NFS

2012-07-11 Thread Shawn Drew
I haven't tried it myself on a file pool, but couldn't you update the
device class of the Data Domain pool to have a mountlimit of 0?

- No way to take the DD file pool offline.  You can mark it unavailable,
but that only affects client sessions, not reclamation or other internal
processes.
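
i.e. something along these lines (the device class name is a placeholder;
mountlimit is a parameter of FILE device classes too):

update devclass DDFILE mountlimit=0

In theory that should stop new mounts for reclamation and migration as well
as client sessions, but again, I haven't tested it on a file pool.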



Regards,
Shawn

Shawn Drew





Internet
rrho...@firstenergycorp.com

Sent by: ADSM-L@VM.MARIST.EDU
07/11/2012 09:07 AM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
Re: [ADSM-L] RMAN direct to NFS






I don't know if we'd have gone with VTLs if we were architecting
this from scratch, but as we went from tape-based to virtual
technology, the VTL interfaces made the transition logically
simpler, and it appeased the one team member who has an irrational
hatred of NFS. We're now under pressure to adopt a new reference
architecture that is NFS based, not VTL based. I'm skeptical about
that will work, but because we're changing everything except the
fact that we're still a TSM shop, if it doesn't go well, everyone
will have a chance to blame someone else for any problems.

Compared to what you guys are describing, we are a small.
We run 10 main TSM servers, 50tb/night, 3000 nodes, 2 x 3584 libs with 50
drives each,
and now we've added 4 DataDomains.  We replicate between the DD's.

For the first two, we decided to use the NFS interface.  Our experience is
that we are now ONLY interested in the NFS file-based interface.   For the
TSM instances we've moved to the DD, it has GREATLY simplified our TSM
instances and processing.

The Good:
- no tape (zoning, paths, stuck tapes, scsi reservation errors,
   rmt/smc devices, atape, etc)
- no copy pool  (we use DD replication.  This cuts the I/O load in half.)
- quick migration (we migrate the disk pool at 10% to the DD.
   It runs all night, so migration in the morning is minimal.)
- protect disk pool with a lower max file size (we pass any file over 5gb
   directly to the DD pool.)
- simpler batch processing.  No copy pool!!!  We let reclamation run
  automatically whenever a vol needs it.  We are using 30gb volumes,
  so many need little reclamation.
- We collocate the DD pool by node.
  I'm working on a script to see the DD compression per node.
- NFS has been very reliable.  Our TSM servers are in lpars on several
  chassis.  We're using VIO to share a 10g adapter per chassis.
  I'm seeing 150-250mb/s during migrations per TSM instance.
  (Jumbo packets are a MUST.)
- DR is simpler.  The DB and recplan get backed up to the DD along with
  other stuff, which all gets replicated to the DR site.
  When brought up we have the PRI pool.

The Not-so-good:
- Yes, it's NFS.  AIX can be tied in a knot if the NFS server (the DD in
  this case) has a problem.  Since the DD is a non-redundant architecture
  (not a cluster) I DO expect problems if the DD dies.  The one change I've
  made that DD doesn't recommend is that I mount the shares soft.
- No way to take the DD file pool offline.  You can mark it unavailable,
  but that only affects client sessions, not reclamation or other internal
  processes.
- When you take the DD down for some reason, you have to kill
  sessions/processes using it, mark the pool unavailable, then umount the
  share on all servers.
- As mentioned above, the architecture of the DD is non-redundant.  That
  was a kind of comfort with all the tape pieces/parts.  An individual
  piece/part can break, but it only affected the one part.  With the DD, if
  it crashes all users have problems.

For us, this has been a major step forward.  It's not often that a product
truly simplifies what we do, but the DD with the NFS interface is one
that stands out.
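
For anyone wanting to try a similar layout, a minimal sketch of the
server-side definitions (names, paths and sizes are placeholders; the 30gb
volumes and 10% migration trigger are the ones described above):

define devclass ddfile devtype=file directory=/ddnfs/tsm1 maxcapacity=30g mountlimit=100
define stgpool ddpool ddfile maxscratch=5000 collocate=node
update stgpool diskpool nextstgpool=ddpool highmig=10 lowmig=0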

Rick






From:   Nick Laflamme dplafla...@gmail.com
To: ADSM-L@VM.MARIST.EDU
Date:   07/10/2012 08:11 PM
Subject:Re: RMAN direct to NFS
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



This is more about VTLs than TSM, but I have a couple of questions,
influenced by my shop's experience with VTLs.

1) When you say 40 VTLs, I presume that's on far fewer frames, that
you're using several virtual libraries on each ProtecTier or whatever
you're using?
2) I see that as 128 tape drives per library. Do you ever use that many,
or is this just a because we could situation? (We use 48 per library,
and that may be overkill, but we're on EMC hardware, not IBM, so the
performance curves may be different.)
3) Do I read 1) and 4) to mean that you're sharing VTLs among TSM servers?
Why, man, why? Can't you give each TSM server its own VTL and be done with
it? Or are you counting storage agents as TSM instances?

I don't know if we'd have gone with VTLs if we were architecting this from
scratch, but as we went from tape-based to virtual technology, the VTL
interfaces made the transition logically simpler, and it appeased the one
team member who has an irrational hatred

Re: TSM backup marked inactive

2012-07-02 Thread Shawn Drew
A few diagnostic questions.

- You said the files are backed up individually (dsmc backup file1).  Are
there ever any backups that happen with wildcards, or any normal
incremental that happens on the file system?
- Is this directory included in the domain (opt file)?
- Is there an include/exclude file configured?
- Could you provide the output of a dsmc q inclexcl (maybe the dsm.sys
and dsm.opt while you're at it)?

Regards,
Shawn

Shawn Drew





Internet
tsm-fo...@backupcentral.com

Sent by: ADSM-L@VM.MARIST.EDU
06/27/2012 11:54 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
[ADSM-L] TSM backup marked inactive






Here is the situation:
AIX 5.3
TSM 5.2
To a certain directory files are added daily and backed up (manual
selective full backup:  dsmc backup file1)
A given file will never be backed up again
All files in this directory are kept on disk indefinitely
Each file has one and only one full backup in TSM
It has been my understanding that as long as a file is on disk, it will
remain active in TSM.
However, we are seeing that every day, files backed up yesterday are marked
inactive in TSM.
Here is what it looks like:
= dsmc q ba -ina /D01/user01/REG15/SECT57/
IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2, Level
2.0
(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights
Reserved.

Node Name: NODE01
Session established with server TSM: AIX-RS/6000
  Server Version 5, Release 3, Level 0.0
  Server date/time: 06/27/12   22:49:45  Last access: 06/27/12   22:46:18

 Size  Backup DateMgmt Class A/I File
   ----- --- 
   204,808,192  B  06/27/12   21:44:37DEFAULT A
/D01/user01/REG15/SECT57/Z25147.TXT
22,171,648  B  06/26/12   10:20:09DEFAULT I
/D01/user01/REG15/SECT57/Z25116.TXT
33,841,152  B  06/26/12   10:20:13DEFAULT I
/D01/user01/REG15/SECT57/Z25117.TXT
27,746,304  B  06/26/12   10:20:20DEFAULT I
/D01/user01/REG15/SECT57/Z25118.TXT
12,288  B  06/26/12   10:20:25DEFAULT I
/D01/user01/REG15/SECT57/Z25119.TXT
   204,808,192  B  06/26/12   23:44:37DEFAULT I
/D01/user01/REG15/SECT57/Z25130.TXT
   204,808,192  B  06/26/12   23:44:56DEFAULT I
/D01/user01/REG15/SECT57/Z25131.TXT
   204,808,192  B  06/26/12   23:45:14DEFAULT I
/D01/user01/REG15/SECT57/Z25132.TXT
   115,646,464  B  06/27/12   01:44:38DEFAULT I
/D01/user01/REG15/SECT57/Z25133.TXT
12,288  B  06/27/12   01:45:23DEFAULT I
/D01/user01/REG15/SECT57/Z25134.TXT
   204,808,192  B  06/27/12   03:44:38DEFAULT I
/D01/user01/REG15/SECT57/Z25135.TXT
   204,808,192  B  06/27/12   13:44:37DEFAULT I
/D01/user01/REG15/SECT57/Z25144.TXT
40,316,928  B  06/27/12   14:36:38DEFAULT I
/D01/user01/REG15/SECT57/Z25145.TXT
 7,290,880  B  06/27/12   14:50:16DEFAULT I
/D01/user01/REG15/SECT57/Z25146.TXT

Why are these files marked inactive?
Thanks in advance

+--
|This was sent by 256...@ukrpost.net via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that certain
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.


Re: VTL's and D2D solutions

2012-07-02 Thread Shawn Drew
If someone pulls a disk out of the array (replacing a bad disk, etc.), you
can't tell a regulator/auditor that it was encrypted.  A purely
bureaucratic reason, but still valid.
Regulations pop up all the time without actual technical consideration. (I
want to punch anyone who says the words "7 years" to me!)

The OP's email address sounds like he's involved in the health care
industry.  They have the worst of it.  Almost as bad as the financial
industry.


Regards,
Shawn

Shawn Drew





Internet
dplafla...@gmail.com

Sent by: ADSM-L@VM.MARIST.EDU
07/02/2012 05:35 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
Re: [ADSM-L] VTL's and D2D solutions






On Jul 2, 2012, at 9:35 AM, Kevin Boatright wrote:

 We are currently looking at adding a Disk to Disk backup solution.  Our
current solution has a 3584 tape library with LTO-5 drives using TKLM.

 We have looked at Exagrid and Data Domain.  Also, I believe HP has a
solution.

 We will need to have encryption on the device and the ability to
replicate between the two disk units.

Why do you have to have encryption on the device?

No, that wasn't a sarcastic question.

If someone pulls a disk out of your DataDomain RAID, what can they do with
it? Your data is striped across many drives, in chunks that are admittedly
large enough to have a whole mailing address on it. Is someone afraid that
someone else will steal one or more drives and then read unstructured
streams of data looking for PII? Really?

There's no chance that a tape will fall off a truck as you ship your
backups off site. Sure, encrypt the VPN between sites, or use a dedicated
network. But that doesn't mean you have to encrypt your data on the
appliance, unless you're more paranoid than I am (or answer to people who
are more paranoid than I am). At this point, I start worrying more about
debacles from poor implementation or management of encryption than I do
about loss of unencrypted data.

 Anyone have any comments or recommendations?

Besides DataDomain, HP, and IBM, I'm sure the rest of EMC, Oracle, and
even small brands like Coraid would propose different solutions. For
example, why not replicate cheap disk, on top of which you build FILE
devices? Do you need the cost of a DataDomain or ProtecTier front-end, or
do you just replicate unduplicated data? Oracle and Coraid will sell you
large arrays of cheap disk with ZFS front-ends that could replicate data
if you need it and could deduplicate the data as justified. I'm not saying
I'd want to bet my job on Coraid, but others find their cost advantage
over DataDomain attractive.

 Thanks,
 Kevin

Nick


This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that certain
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.


Re: select or other command

2012-06-29 Thread Shawn Drew
The file name and node are in the contents table, but I don't believe
there is any way to tell which date it was backed up, so you couldn't tell
which version of the file it is.  (i.e. it will just say
/var/log/messages, but not the date it was backed up.)

The backups table does have this information, but not the volume the file
is sitting on.   This is extremely tough on the database, but if you
really have to, have to do this.

You would select the object_id from the BACKUPS table using the file
name and path (node_name, hl_name, ll_name).
Then you can use the object_id to select the volume_name from the CONTENTS
table.
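
A rough sketch of that two-step lookup from the admin command line; the
node, filespace, and path values below are placeholders, and the exact
columns available in CONTENTS vary by server level, so treat this as an
outline rather than a drop-in query:

  select object_id, backup_date from backups where node_name='MYNODE' and filespace_name='/var' and hl_name='/log/' and ll_name='messages'

  select volume_name from contents where node_name='MYNODE' and filespace_name='/var' and file_name like '%messages%'

If your server level does not expose a usable join column between BACKUPS
and CONTENTS, the second select has to match on the file name instead,
which is part of what makes this so expensive.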

I once witnessed a select for all *.nsf files in the BACKUPS table.  It
took more than a month to finish the command (but it did work).
I would avoid this at most costs.


Regards,
Shawn

Shawn Drew





From: avalnch...@yahoo.com (Internet)
Sent by: ADSM-L@VM.MARIST.EDU, 06/28/2012 08:38 PM
Please respond to: avalnch...@yahoo.com
To: ADSM-L
Subject: [ADSM-L] select or other command






Hello,

Over the years I've saved a lot of commands but never saw one for this.
I'm not sure if it is possible but I thought I'd ask to see if anyone has
it.

Is it possible to create a select command where I input the name of a file
that was backed up and the output tells me what tape(s) it would be
on?  I have a command that will tell me all the tapes a node has data
on, and another to spit out the contents of a tape to a file, and from
there search for the filename, but I'm curious if there is an easier way.

Thank You
Geoff Gill






Re: TSM V6 tape labels

2012-06-28 Thread Shawn Drew
If you really want to do this, the only way I can think of is to partition
your library.  I believe the TS3200 does support this.

You said the tape "may be reused by the primary pool."
What is the concern with this?  This is the standard way TSM operates, and
I'm not sure why you would want to keep them separate.

Regards,
Shawn

Shawn Drew





From: andy.hueb...@alconlabs.com (Internet)
Sent by: ADSM-L@VM.MARIST.EDU, 06/28/2012 09:24 AM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: Re: [ADSM-L] TSM V6 tape labels






I may be wrong, but the only way I know to divide media is by device
class, so I do not think you can do it in 1 library with 1 drive type.
Why the concern about using 1 pool of media for both primary and copy? DRM
will take care of what tapes go where.

Andy Huebner

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Victor Shum
Sent: Thursday, June 28, 2012 2:24 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM V6 tape labels

Dear All:

I am implementing TSM V6 EE on AIX with an IBM TS3200 with 4 x LTO5 drives.
Is there any easy way I can use a different LTO barcode labeling scheme for
the primary and copy pools?

My concern is: DRM will be implemented to handle offsite tape cycling; when
an offsite tape comes back to the tape library, it becomes a scratch tape,
so it may be reused by the primary pool.  So it seems that I cannot use a
different tape labeling scheme for the primary and copy pools.

e.g. BON000 ~ BON999 labels would be used for primary pool tape media;
BOF000 ~ BOF999 labels would be used for the copy pool.



Best regards,

Victor Shum






Re: OpenVMS Client

2012-06-26 Thread Shawn Drew
I noticed this a little late, but saw that no one responded.

If you are talking about the STORServer ABC client, then yes, this is
true.  It connects to TSM through the API and not the typical
backup/archive method, so TSM scheduling isn't possible.

Regards,
Shawn





From: avalnch...@yahoo.com (Internet)
Sent by: ADSM-L@VM.MARIST.EDU, 06/20/2012 05:40 PM
Please respond to: avalnch...@yahoo.com
To: ADSM-L
Subject: [ADSM-L] OpenVMS Client






Looking for info on something I used to understand about this client: that
backups must be run from the client side, as it does not have a scheduler,
so the TSM server cannot contact it.
Is this still true?




Thank You
Geoff Gill






Re: What does volume status of OFFLINE mean/do?

2012-06-04 Thread Shawn Drew
Just remember, status and access are 2 different things.  You can set
the access to unavailable, but that has nothing to do with the status of
a volume.  The status is not set with upd volume and is usually set
automatically by TSM.  (i.e. pending, full, filling, etc.)

You can vary online/offline volumes, but that applies to random access
volumes and not sequential.
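
A quick illustration of the two independent knobs, using a made-up DISK
volume path:

  upd vol /tsmdisk/vol01.dsm access=unavailable
  vary offline /tsmdisk/vol01.dsm
  q vol /tsmdisk/vol01.dsm f=d

The first changes only the Access setting, the second applies only to
random-access (DISK) volumes (and per the help text, a random-access volume
should be varied offline before it is made unavailable), and the query
shows Access and Volume Status as separate fields.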


Regards,
Shawn

Shawn Drew
Data Storage/Protection
IT Production
BNP Paribas RCC, Inc.
Office: 201.850.6998  Mobile: 917.774.8141
Storage Hotline: 212.841.2300




From: rick.harderw...@gmail.com (Internet)
Sent by: ADSM-L@VM.MARIST.EDU, 06/04/2012 10:57 AM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: Re: [ADSM-L] What does volume status of OFFLINE mean/do?






Clicked send too soon.

I believe it does not do anything to the files on the volumes, it just
tells TSM that it cannot access the files. I think it will generate errors
when trying to move/copy files in space reclamation jobs of off-site
media.

Cheers,

Rick

On Mon, Jun 4, 2012 at 4:54 PM, Rick Harderwijk
rick.harderw...@gmail.com wrote:

 Zoltan,

 Update volume wherestatus=offline would mean to do something to volumes
 that have a status of offline. I'd think you would have to use

 update volume * access=unavailable wherestgpool=SANSTGVOLUMEPOOL
 wherestatus=offline

 Output from help update volume:

   UNAVailable
Specifies that neither client nodes nor server processes can
access files stored on the volume.
Before making a random access volume unavailable, you must vary
the volume offline. After you make a random access volume
unavailable, you cannot vary the volume online.
If you make a sequential access volume unavailable, the server
does not attempt to mount the volume.
 If the volume being updated is an empty scratch volume that had an
 access mode of offsite, the server deletes the volume from the
 database.
 Cheers,

 Rick

 On Mon, Jun 4, 2012 at 4:42 PM, Zoltan Forray/AC/VCU
zfor...@vcu.edu wrote:

 A recent SAN system upgrade sort-of killed 19TB of storage volumes,
making
 them read only.

 Since we are having pathing issues, plus we need to perform fsck on
these
 volumes due to the journal being screwed-up,  I wanted to disable use
of
 these disk volumes without killing the server, if possible.

 When I look in the book for UPDATE VOLUME WHERESTATUS=OFFLINE, all it
 says is "Update volumes with a status of OFFLINE".  What exactly does that
 mean?  What happens to the contents?


 Zoltan Forray
 TSM Software  Hardware Administrator
 Virginia Commonwealth University
 UCC/Office of Technology Services
 zfor...@vcu.edu - 804-828-4807
 Don't be a phishing victim - VCU and other reputable organizations will
 never use email to request that you reply with your password, social
 security number or confidential personal information. For more details
 visit http://infosecurity.vcu.edu/phishing.html








Re: Backup data access issue w/recovered DR server

2012-05-25 Thread Shawn Drew
When we implemented data domain here, EMC recommended using a snapshot,
then a fastcopy for a DR test with TSM.

We tested this during one of our DR tests successfully and didn't have any
issues.


Regards,
Shawn

Shawn Drew




From: bjdbo...@comcast.net (Internet)
Sent by: ADSM-L@VM.MARIST.EDU, 05/24/2012 07:31 AM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: Re: [ADSM-L] Backup data access issue w/recovered DR server






By default, the target of the DDR replication is read-only. You can either
mark all your volumes in TSM read-only or you could create a snapshot of
the target and use that for TSM. You can always break the replication in a
true D/R.
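
For the read-only route, something like this (the storage pool name is a
placeholder) marks every volume in the pool that lives on the replicated
Data Domain target:

  update volume * access=readonly wherestgpool=ddfilepool

Using a snapshot/fastcopy of the target instead, as mentioned elsewhere in
the thread, gives TSM a copy it can treat as its own.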






Re: Re: [ADSM-L] backup stgpool - command issue

2012-05-18 Thread Shawn Drew
What you are doing will technically work although, as other people have
mentioned, this is not how TSM was designed to work, and you will run into
random issues like this that not many have experience with, since no one
else runs it like this.  You are making it work like NetBackup or
Networker.

You mentioned you are doing a daily selective and keeping it for 14 days. 
You also said that you would be able to fit everything if you changed to 
10 days. 

I think you should change to a daily incremental (instead of selective) 
and keep your 14 days.  Is there a specific reason you are doing selective 
backups instead of incremental?

The number of slots required will drop significantly.  You will then be
able to keep all primary pool tapes in the library and have DRM manage the
offsite copy pools as designed.  You will have to switch to the typical
TSM daily lifecycle and start using reclamation.
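
As a rough sketch of that daily cycle (the pool, device class, and
threshold names here are placeholders, not your configuration):

  backup stgpool tapepool copypool maxprocess=2
  backup db devclass=lto5class type=full
  move drmedia * wherestate=mountable tostate=vault
  move drmedia * wherestate=vaultretrieve tostate=onsiteretrieve
  reclaim stgpool copypool threshold=60

With the primary tapes staying in the library, reclamation of the offsite
copy pool rebuilds new copy volumes from the onsite primaries instead of
recalling tapes from the vault.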

Regards, 
Shawn

Shawn Drew





From: victors...@cadex.com.hk (Internet)
Sent by: ADSM-L@VM.MARIST.EDU, 05/17/2012 08:39 PM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: [ADSM-L] Re: [ADSM-L] backup stgpool - command issue






Dear Shawn:

Thanks for the reply.

The ANR1228I message refers to one of the tapes in the primary pool.

So is backup stgpool used to create another set of tapes, the same as the
primary pool?  The existing tape library (48 slots) is not enough to hold
all the backups of the primary pool: 14 versions of user data (each version
requires 4 to 6 LTO5 tapes).  So we try to implement the backup like this:


1.  daily scheduled backup of data to the primary pool (daily selective /
full backup)
2.  backup stgpool to the copy pool
3.  backup the TSM DB
4.  check out all tapes in the library
5.  check in new tapes in the library
6.  check for any expired tapes
then recycle the daily job

The copy pool tapes will be transported to the DR site, while the primary
pool tapes will be stored in the safe and reused after 14 days.

We think we can just duplicate the primary pool to the copy pool daily
(using backup stgpool) and check out all active tapes, manually tracking
when each tape expires, then reuse (cycle) those tapes again to make things
work...

So if we need a data retention period of 14 days but don't have enough
slots in the tape library to hold all the primary pool tape media, we have
no way out!!

Or shall we just shorten the retention period, e.g. from 14 to 10 days, so
that the tape library can hold all the tapes inside, and use DRM to
generate and manage offsite tapes for us??

Best regards,

Victor Shum


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Shawn Drew
Sent: May 18, 2012 3:00
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] backup stgpool - command issue

He mentioned "Due to tape library slot limitation" and "Check out all tape
(current cycle)", so I think he is checking out all tapes (including primary
copies).
I believe he is trying to manage around primary tapes being requested and
wondering why one was needed in this case.   Also, I think ANR1228I only
refers to primary volumes.
backup stgpool backs up ALL files that are not already in the copy pool, not
just active files.  Nothing that happened on the client could have caused
this tape to be requested.  The only thing that could have caused an older
primary tape to be requested in a backup stgpool process is that, for some
reason, the files that were on that tape were not in the copy pool.
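
One way to see this up front (pool names are placeholders) is a preview
run, which lists the primary volumes that would have to be mounted without
actually copying anything:

  backup stgpool tapepool copypool preview=volumesonly

If an older primary tape shows up in that list, the files on it are missing
from the copy pool for one of the reasons below.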

This could happen from either:
1. The copy failed the first time: either the tape was marked unavailable or
destroyed, or the backup stgpool failed for another reason.  You would
receive log messages about the tape not being available, but the process may
not have explicitly failed.  If these tapes were changed between readwrite
and readonly in the last couple of days, this could have caused the backup
stgpool process to request the old tape.
or
2. Something happened to a copy pool volume so that TSM decided that the
data needed to be recopied.  Did you delete any copy pool volumes
recently?  Any failed reclamations or any processes that could have detected
a fault in a copy pool volume?


As mentioned before, TSM is designed to work with all the primary volumes
available in the library.  Failing that, it's assumed an operator is
monitoring the activity log to satisfy mount requests.  If you are checking
tapes out every day like this, how are you handling reclamation?


Regards,
Shawn

Shawn Drew





From: rickadam...@winn-dixie.com (Internet)
Sent by: ADSM-L@VM.MARIST.EDU, 05/17/2012 09:10 AM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: Re: [ADSM-L] backup stgpool - command issue






Victor,

Assuming it is a copy pool volume (since primary storage volumes should
always be available in your library).

Investigate the volume status on your TSM server:
Use the query drmedia command to determine the volume's current state; if
in fact it should be out of the library, its state would be either

Re: NDMP - TOC? upgrade?

2012-05-17 Thread Shawn Drew
We use file device classes for TOC.  No problems.  NFS-mounted Data Domain
storage.  Maxcap of 400GB.  We have several TB of TOC data and don't have
problems, except for hating life when I have to do a restore.  It works
fine; it just can be painfully slow if you have very large TOCs (file
systems with 20 million+ files).
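
For reference, a minimal sketch of that kind of setup; the class, pool,
directory, and mount limit values here are illustrative, not our actual
definitions:

  define devclass tocfile devtype=file maxcapacity=400G mountlimit=32 directory=/ddnfs/tsm/toc
  define stgpool tocpool tocfile maxscratch=999

The TOCDESTINATION in the backup copy group used for the NAS data then
points at that FILE pool.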

I haven't looked into the v6 upgrade process yet, but I would try and
pursue the standard upgrade for the library manager/NDMP instance, then
use export for everyone else.   I was under the impression the standard
upgrade process would work for ndmp data.

Regards,
Shawn

Shawn Drew





From: sto...@mail.nih.gov (Internet)
Sent by: ADSM-L@VM.MARIST.EDU, 05/17/2012 01:08 PM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: [ADSM-L] NDMP - TOC? upgrade?






Environment:  AIX 6.1, TSM 5.5.4

We've been backing up a couple of NAS filers via NDMP for around 18 months
but recently find that sending the data to a storage pool of 12 500G
rlv-formatted disks has been diagnosed by L2 as the cause of occasional
problems with the recovery log when it changes volumes -- holding the
recovery log too long and crashing the instance.

The rlv disk data has been moved off in preparation to rebuild the disks
as FILE devclass volumes...but think I remember reading something about
TOCs being troublesome/incompatible with that config?  Since one of these
NAS boxes has millions (literally, I'm told) of small documents --
versions of which are required to be kept forever -- TOC is a
'must-have' for file level restore, else we would have to restore the
whole filesystem, the largest of which has about 5-6TB in use.

Is anyone using NDMP writing to file devclass with TOC successfully?  (any
suggestions on MAXCAP?  default is 4M which seems 'small')

Side issue:  think I heard there's no data export upgrade path for
NDMP...'take a fresh full backup' isn't going to work for millions of
keep-forever documents...we are planning to move to AIX 7.1 and TSM 6.2.x
later this year when the new hardware arrives so this will be upon us
soon.  Any comments/ideas?

Thanks, Susie





Re: NDMP - TOC? upgrade?

2012-05-17 Thread Shawn Drew
FYI, This was a problem when TSM 6 was first released, but was resolved.
It shouldn't have any relevance anymore.

Regards,
Shawn

Shawn Drew





From: knau...@npisorters.com (Internet)
Sent by: ADSM-L@VM.MARIST.EDU, 05/17/2012 01:46 PM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: Re: [ADSM-L] NDMP - TOC? upgrade?






Susie,
I figured I would reply direct to you since I am new to TSM and not
really sure if this is on target, but here is an article on TOC (and
Backup sets) when using NAS backups.

http://www.ibm.com/developerworks/wikis/display/tivolistoragemanager/Bac
kup+set+and+Table+of+Contents+support+in+Tivoli+Storage+Manager+Version+
V6.1

Looks like this problem (IBM reference: IC62418) was fixed w/ interim
fixes in 6.1.3.x and 6.1.0.x, but had trouble finding original IBM
article.  Here is the cached google article for IC62418:
http://webcache.googleusercontent.com/search?q=cache:M_O_nqX7ZIYJ:www-01
.ibm.com/support/docview.wss%3Fuid%3Dswg1IC62418+Reenable+BACKUPSETS,+us
e+of+TOC%27scd=2hl=enct=clnkgl=usclient=firefox-a

I'm not really sure if this helps your upgrade path problems, but maybe
it gets you a little closer.


Regards,
Ken Naugle
Direct: (214) 634-2288 x122
Mobile: (214) 636-3775
Fax: (682) 503-8210
E-mail: knau...@npisorters.com

-Original Message-

From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Stout, Susie (NIH/CIT) [E]
Sent: Thursday, May 17, 2012 12:08 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] NDMP - TOC? upgrade?

Environment:  AIX 6.1, TSM 5.5.4

We've been backing up a couple of NAS filers via NDMP for around 18
months but recently find that sending the data to a storage pool of 12
500G rlv-formatted disks has been diagnosed by L2 as the cause of
occasional problems with the recovery log when it changes volumes --
holding the recovery log too long and crashing the instance.

The rlv disk data has been moved off in preparation to rebuild the disks
as FILE devclass volumes...but think I remember reading something about
TOCs being troublesome/incompatible with that config?  Since one of
these NAS boxes has millions (literally, I'm told) of small documents --
versions of which are required to be kept forever -- TOC is a
'must-have' for file level restore, else we would have to restore the
whole filesystem, the largest of which has about 5-6TB in use.

Is anyone using NDMP writing to file devclass with TOC successfully?
(any suggestions on MAXCAP?  default is 4M which seems 'small')

Side issue:  think I heard there's no data export upgrade path for
NDMP...'take a fresh full backup' isn't going to work for millions of
keep-forever documents...we are planning to move to AIX 7.1 and TSM
6.2.x later this year when the new hardware arrives so this will be upon
us soon.  Any comments/ideas?

Thanks, Susie





Re: move DRM error

2012-05-17 Thread Shawn Drew
What is the output of q drmstatus?  (See the commands sketched below.)

- The pools DRM manages are defined there (or it defaults to all copy pools)
- Pools link to device classes
- Device classes link to the library
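
For example (the copy pool name is a placeholder), to check and, if needed,
set what MOVE DRMEDIA will operate on:

  query drmstatus
  set drmcopystgpool copypool

If the copy pool list in q drmstatus is empty, or points at a pool whose
device class no longer resolves to a library on the newly upgraded
instance, the checkout step can fail with exactly this kind of error.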

Regards,
Shawn

Shawn Drew





From: andy.hueb...@alconlabs.com (Internet)
Sent by: ADSM-L@VM.MARIST.EDU, 05/17/2012 05:26 PM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: [ADSM-L] move DRM error






I have an error when I run MOVE DRMEDIA that I have not seen before and
have not found a fix for:
05/17/2012 13:51:03  ANR2017I Administrator OPERATOR issued command: MOVE
  DRMEDIA * wherestate=mountable tostate=vault wait=no
  (SESSION: 46672)

05/17/2012 13:51:05  ANR8409E CHECKOUT LIBVOLUME: Library  is not defined.
  (SESSION: 46672, PROCESS: 458)


I am not sure where to define the library for DRM.  Since TSM is mounting
tapes in the library and there are not any other problems I assume the
library is properly defined.  This is happening on the library manager and
the other three servers do not have any problems ejecting tapes.

The library has not been changed in a while:
tsm: VOODOO> q library ibm3494a f=d

  Library Name: IBM3494A
  Library Type: 349X
ACS Id:
  Private Category: 300
  Scratch Category: 302
 WORM Scratch Category:
  External Manager:
Shared: Yes
   LanFree:
ObeyMountRetention:
   Primary Library Manager:
   WWN:
 Serial Number:
 AutoLabel: Yes
  Reset Drives: Yes
   Relabel Scratch:
Last Update by (administrator): OneCoolDude
 Last Update Date/Time: 01/21/2008 22:12:25

However, the server was 5.4 Sunday, and on Tuesday it was 6.2.3.0. (but
nothing else changed, except the hardware)  3 other TSM servers share this
library and they are 5.4.



Andy Huebner







Re: TSM V6.2 DB Directories

2012-05-15 Thread Shawn Drew
I've been told this is the only supported way for now

Regards, 
Shawn

Shawn Drew





From: erwann.si...@free.fr (Internet)
Sent by: ADSM-L@VM.MARIST.EDU, 05/15/2012 03:55 AM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: Re: [ADSM-L] TSM V6.2 DB Directories






Hi David,

Yes, I think there is no (supported) other way.

--
Best regards / Cordialement / مع تحياتي
Erwann SIMON

On 14/05/2012 16:38, Ehresman, David E. wrote:
 Is doing a TSM DB backup/restore the only way to remove a TSM DB 
directory in V6.2?

 David






Re: What Class to use

2012-05-08 Thread Shawn Drew
Just register an admin, but don't grant any authority.  That will work
as a read-only account.
Regards,
Shawn

Shawn Drew




From: c...@indiana.edu (Internet)
Sent by: ADSM-L@VM.MARIST.EDU, 05/08/2012 11:59 AM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: [ADSM-L] What Class to use






Fellow TSM Admins,

I am stuck in a pickle barrel it would seem.  Next week we have IBM
partner consultants coming onsite to perform a health check of our TSM
environments.  In order to accomplish this task they require that a TSM
admin account be created on each active server with a privilege class of
analyst or (Read only).  Problem is that with versions of TSM 6.2(+) there
is no more analyst class, and I don't seem to be finding anything that
would be a suitable replacement.  Am I just missing something obvious?  I
could just assign them to the Operator class but for me that is like
handing them a sledge hammer when all they needed was a hammer.

Thanks much,
Chad Harris





Re: Pct Migr=0.0, Caching=No but there is data in the disk stgpool

2012-05-03 Thread Shawn Drew
I've seen this when there is a current backup active and that's nodes
files are the only ones in the disk pool.
do a q se and see if anything is active.

Regards,
Shawn

Shawn Drew





From: zfor...@vcu.edu (Internet)
Sent by: ADSM-L@VM.MARIST.EDU, 05/03/2012 01:37 PM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: [ADSM-L] Pct Migr=0.0, Caching=No but there is data in the disk stgpool






I have a very odd situation on my new Linux 6.2.3.100 server/disk pool, as
described in the subject.

Here is the Q STGPOOL

1:30:21 PM   FIREBALL : q stg backuppool f=d
Storage Pool Name: BACKUPPOOL
Storage Pool Type: Primary
Device Class Name: DISK
Estimated Capacity: 5,238 G
Space Trigger Util: 77.4
Pct Util: 77.4
Pct Migr: 0.0
Pct Logical: 100.0
High Mig Pct: 90
Low Mig Pct: 50
Migration Delay: 0
Migration Continue: Yes
Migration Processes: 2
Reclamation Processes:
Next Storage Pool: TS1120
Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
Description:
Overflow Location:
Cache Migrated Files?: No
Collocate?:
Reclamation Threshold:
Offsite Reclamation Limit:
Maximum Scratch Volumes Allowed:
Number of Scratch Volumes Used:
Delay Period for Volume Reuse:
Migration in Progress?: No
Amount Migrated (MB): 0.00
Elapsed Migration Time (seconds): 0
Reclamation in Progress?:
Last Update by (administrator): ZFORRAY
Last Update Date/Time: 04/27/2012 06:25:02
Storage Pool Data Format: Native
Copy Storage Pool(s):
Active Data Pool(s):
Continue Copy on Error?: Yes
CRC Data: No
Reclamation Type:
Overwrite Data when Deleted:


It says 77% utilized but nothing migratable.  Trying a MIG STGPOOL says
there is nothing to migrate, yet a BACKUP STGPOOL runs and backs data up.
I did an audit on one of the disk volumes and it said yup - there is
something in there and it is OK/no problems.

As you can see, caching is NOT turned on (never have used that feature -
usually too much traffic flowing through to make it of any value).  And
what is with Pct Logical being 100%?

This is a server I upgraded from 5.5.  The disk pools were recreated from
scratch so it's not like there was anything left behind..


Zoltan Forray
TSM Software  Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html





Re: Pct Migr=0.0, Caching=No but there is data in the disk stgpool

2012-05-03 Thread Shawn Drew
What is the syntax of the migrate stg command you are using?  Are you
specifying a low setting, or letting it use the storage pool thresholds?
(i.e. try it, specifying low=0 if not)
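
For example, using the pool name from your q stg output:

  migrate stgpool backuppool lowmig=0 duration=60

lowmig=0 tells this run to migrate down to 0% regardless of the pool's own
high/low thresholds; duration is optional.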


Regards,
Shawn

Shawn Drew





From: zfor...@vcu.edu (Internet)
Sent by: ADSM-L@VM.MARIST.EDU, 05/03/2012 02:25 PM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: Re: [ADSM-L] Pct Migr=0.0, Caching=No but there is data in the disk stgpool






I thought about that too (should have mentioned it).  We just recently
rebooted the server (network issue - don't ask...) and there are no
active backups/sessions.  No processes other than the movedata I just
started.  I did a Q CONTENT and all of the disk volumes show something in
them from dozens of different nodes.


Zoltan Forray
TSM Software  Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html



From:   Shawn Drew shawn.d...@americas.bnpparibas.com
To: ADSM-L@VM.MARIST.EDU
Date:   05/03/2012 02:22 PM
Subject:Re: [ADSM-L] Pct Migr=0.0, Caching=No but there is data in
the disk stgpool
Sent by:ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU



I've seen this when there is a current backup active and that's nodes
files are the only ones in the disk pool.
do a q se and see if anything is active.

Regards,
Shawn

Shawn Drew





Internet
zfor...@vcu.edu

Sent by: ADSM-L@VM.MARIST.EDU
05/03/2012 01:37 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
[ADSM-L] Pct Migr=0.0, Caching=No but there is data in the disk stgpool






I have a very odd situation on my new Linux 6.2.3.100 server/disk pool, as
described in the subject.

Here is the Q STGPOOL

1:30:21 PM   FIREBALL : q stg backuppool f=d
Storage Pool Name: BACKUPPOOL
Storage Pool Type: Primary
Device Class Name: DISK
   Estimated Capacity: 5,238 G
   Space Trigger Util: 77.4
 Pct Util: 77.4
 Pct Migr: 0.0
  Pct Logical: 100.0
 High Mig Pct: 90
  Low Mig Pct: 50
  Migration Delay: 0
   Migration Continue: Yes
  Migration Processes: 2
Reclamation Processes:
Next Storage Pool: TS1120
 Reclaim Storage Pool:
   Maximum Size Threshold: No Limit
   Access: Read/Write
  Description:
Overflow Location:
Cache Migrated Files?: No
   Collocate?:
Reclamation Threshold:
Offsite Reclamation Limit:
  Maximum Scratch Volumes Allowed:
   Number of Scratch Volumes Used:
Delay Period for Volume Reuse:
   Migration in Progress?: No
 Amount Migrated (MB): 0.00
 Elapsed Migration Time (seconds): 0
 Reclamation in Progress?:
   Last Update by (administrator): ZFORRAY
Last Update Date/Time: 04/27/2012 06:25:02
 Storage Pool Data Format: Native
 Copy Storage Pool(s):
  Active Data Pool(s):
  Continue Copy on Error?: Yes
 CRC Data: No
 Reclamation Type:
  Overwrite Data when Deleted:


It says 77% utilized but nothing migratable.  Trying to MIG STGPOOL says
nothing to migrate yet doing a BACKUP STGPOOL is running and backing up.
 I did an audit on one of the disk volumes and it said yup - there is
something in there and it is OK/no problems.

As you can see, caching is NOT turned on (never have used that feature -
usually too much traffic flowing through to make it of any value).and
what is with the Pct Logical is 100%?

This is a server I upgraded from 5.5.  The disk pools were recreated from
scratch so it's not like there was anything left behind..


Zoltan Forray
TSM Software  Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html



This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately

Re: Size of active data pool?

2012-04-27 Thread Shawn Drew
You can run a preview on the copy activedata command (i.e. copy activedata
xxx preview=yes).  This should tell you what you want to know.
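
Roughly, with placeholder pool names:

  copy activedata primaryfilepool activepool preview=yes wait=yes

The files/bytes totals reported in the activity log for the preview give a
first estimate of how big the active data pool would need to be.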

Regards, 
Shawn

Shawn Drew





From: guenther_bergm...@gbergmann.de (Internet)
Sent by: ADSM-L@VM.MARIST.EDU, 04/27/2012 11:35 AM
Please respond to: ADSM-L@VM.MARIST.EDU
To: ADSM-L
Subject: [ADSM-L] Size of active data pool?






Hi all,

given the following setup:
TSM 5.5
Primary storage pool of device class FILE
Copy pool on LTO tapes, managed by DRM
Primary pool is copied daily to the copy pool

Since the copy pool is reaching some limitations (# of tapes, vault
capacity), we are thinking about setting up an active data pool and copying
the active data pool to the copy pool.

The question: how do we estimate the size of the active data pool compared
to the primary pool?

Is there something like:
QUERY OCCUPANCY ACTIVEDATAONLY=YES ? 
Can it be done with some SQL query?
I can get the numbers (active/inactive) from the backups table, but it does
not give the size of the backed-up objects.

Any hints on this?

regards Günther

-- 
Guenther Bergmann, Am Kreuzacker 10, 63150 Heusenstamm, Germany
Guenther_Bergmann at gbergmann dot de




