Re: Nodes per TSM server

2012-10-10 Thread Ian Smith
harold,

just for reference, we have > 1000 nodes on some servers - these are v5 servers 
migrated to v6 and *not* consolidated. However, the average occupancy and 
number of objects for each node is probably much smaller than at many other 
sites - we are an academic site with lots of smaller clients (research, admin, 
learning materials, etc.) with a very limited number of versions and retention 
period. 
Wanda is correct - the real limitations are the size of the DB (related to the 
number of objects stored) and I/O. Memory, CPU and fast disks will all have a 
bearing on what you can achieve - there are published IBM recommendations on 
memory (and CPU?) - search under 'server requirements'.
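
Ian's point that DB size tracks object count can be turned into a 
back-of-envelope sizing check. The bytes-per-object figure below is purely an 
illustrative assumption (check IBM's published server requirements for real 
numbers):

```python
# Rough TSM v6 DB sizing from stored-object counts, as discussed above.
# The ~1000 bytes/object figure is an assumption for illustration only,
# not an official IBM number.
def estimate_db_gb(stored_objects, bytes_per_object=1000):
    """Approximate DB size in GB for a given number of stored objects."""
    return stored_objects * bytes_per_object / 1024**3

# e.g. 500 million objects across all nodes on one server instance
print(round(estimate_db_gb(500_000_000), 1))
```

At that assumed rate, half a billion objects implies a DB in the hundreds of 
GB - which is why object count, rather than node count, is the limit that 
bites first.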

Regards
Ian Smith
IT Services, | University of Oxford


From: ADSM: Dist Stor Manager [ADSM-L@VM.MARIST.EDU] on behalf of Christian 
Svensson [christian.svens...@cristie.se]
Sent: 09 October 2012 07:06
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Re: Nodes per TSM server

Hi Harold,
Everything depends on your hardware.
But I have everything from 300 nodes up to approx. 500 nodes, and every TSM 
server is running on Wintel.

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se

Secure restores.



-Original Message-
From: Vandeventer, Harold [BS] [mailto:harold.vandeven...@ks.gov]
Sent: 8 October 2012 22:01
To: ADSM-L@VM.MARIST.EDU
Subject: Nodes per TSM server

There are all kinds of measures involved in setting up a TSM server: processor, 
RAM, disk I/O, stg pool design, reclamation, migration - all the bits and pieces.

But, I'm curious about how many nodes some of you have on your TSM servers?

I'm in a Windows environment, and have been tasked with consolidating.

Also, about how much memory is on those systems.

Thanks.


Harold Vandeventer
Systems Programmer
State of Kansas - Office of Information Technology Services 
harold.vandeven...@ks.gov
(785) 296-0631




Re: Moving to TSM Capacity based licensing from PVU - experiences

2012-07-18 Thread Ian Smith
Hi all,

thanks supremely for some interesting, and varied takes, on this thorny 
subject. With thousands of clients in a distributed site, we are losing the 
battle to properly audit the PVU entitlement we require. For a brief second it 
looked like the client-side reporting of PVU might be our panacea, but no: it 
is an estimate, as it has to assume what is a client device and what is a 
client to be licensed by PVU. Shame on IBM for constructing a licensing model 
so unfit for purpose.

TBH we are well down the path of despair with this whole issue, and the 
assorted replies strongly suggest that we either pay more via the per-TB model 
or stick with the impossible PVU model. It's not much of a choice, even if we, 
like others, are actively looking at client-side dedupe (particularly with TSM 
for VE) and possibly dedupe of Exchange data.

One thing that vexes me is that it appears that, once committed to per-TB 
licensing, you can never reduce the entitlement. If you do succeed in reducing 
your primary pool occupancy, it seems that this can only be used as 'headroom' 
for future growth. That is, you cannot pay for occupancy minus 10% next year 
even if your storage requirements only amount to that. Furthermore, it seems 
that there is a mandatory growth figure of at least 10% (others have mentioned 
20%) year on year in the occupancy entitlement - even if you don't or won't 
need it (for example, by pursuing aggressive strategies of dedupe, compression 
and quota management). That vexes my management too, and I fear that we will, 
in the course of the next couple of years, turn our backs on this product as 
soon as we have rolled our own solution. Which will be a shame, as it is a good 
product and we won't be able to recreate it.
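
The ratchet described above is easy to model; a minimal sketch assuming a 10% 
mandatory year-on-year floor and a never-decreasing entitlement (both figures 
as reported in this thread, not taken from IBM's actual terms):

```python
# Per-TB entitlement that can only ratchet upward: each year it is the
# larger of measured occupancy and last year's entitlement grown by the
# mandatory floor (10% here, per the thread).
def entitlement_over_time(occupancy_tb, floor=0.10):
    ents = [occupancy_tb[0]]
    for occ in occupancy_tb[1:]:
        ents.append(max(occ, ents[-1] * (1 + floor)))
    return ents

# Occupancy actually shrinks (dedupe, quotas) yet entitlement still grows:
print(entitlement_over_time([100, 80, 90]))
```

Even though occupancy drops from 100 TB to 80-90 TB, the modelled entitlement 
climbs 100 -> 110 -> 121 TB - exactly the 'headroom only' behaviour Ian 
objects to.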

Many thanks once again.
Ian Smith
Oxford University

From: ADSM: Dist Stor Manager [ADSM-L@VM.MARIST.EDU] on behalf of Thomas Denier 
[thomas.den...@jeffersonhospital.org]
Sent: 17 July 2012 16:19
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Moving to TSM Capacity based licensing from PVU - 
experiences

-Keith Arbogast wrote: -

For those who are wavering between that and PVU based, it may help,
depending on your client mix, to know that the 6.3 client reports
Processor Vendor, Processor Brand, Processor Type, Processor Model
and Processor Count. So one doesn't need to install the License
Metric Tool on clients at that level. I'm presuming IBM would accept
its own determination of that data, and not insist on it coming from
the LMT.

From the documentation of the 'query pvuestimate' command in the
Version 6.3 Administrator's Reference:

Note: The PVU information reported by Tivoli Storage Manager is
not considered an acceptable substitute for the IBM License
Metric Tool.

Thomas Denier,
Thomas Jefferson University Hospital


Re: Moving to TSM Capacity based licensing from PVU - experiences

2012-07-16 Thread Ian Smith
Hi,

We are in the midst of discussions on moving to capacity-based licensing from 
the standard PVU-based method for our site. We have a large number of clients ( 
licensed via TSM-EE, TDP agents, and on client-device basis ) and around 1PB of 
primary pool data. As I understand it, there is no published metric for the 
conversion from PVU to per TB licensing so I would be really interested and 
grateful if anyone would like to share their experiences of that conversion in 
a private email to me. 

Many thanks in advance.
Ian Smith
Oxford University
England


Re: Stupid question about TSM server-side dedup

2011-11-22 Thread Ian Smith

Wanda,

Are the identify processes issuing any failure notices in the activity log?

You can check whether identify-duplicates processes have found duplicate
chunks yet to be reclaimed by running 'show deduppending <stgpoolname>'.
WARNING: this can take a long time to return if the stgpool is large -
don't panic!

I am unfamiliar with NDMP backup, but off the top of my head a couple of
other (simple) things to check would be:
Is the server-side SERVERDEDUPTXNLIMIT option set very low and
preventing dedup identification?

Have these dumps been backed up to the copypool yet? (Perhaps you've
overlooked the DEDUPREQUIRESBACKUP option at the server?)
- IIRC the identify processes run but find nothing if this option is set
and the data has not yet been backed up to copypool.

Ian Smith


On 22/11/11 15:17, Colwell, William F. wrote:

Wanda,

when identify duplicates finds duplicate chunks in the same storage pool,
it will raise the pct_reclaim value for the volume it is working on. If
the pct_reclaim isn't going up, that means there are no duplicate chunks
being found. Identify duplicates is still chunking the backups up (watch
your database grow!) but all the chunks are unique.

Is it possible that the ndmp agent in the storage appliance is putting
in unique metadata with each file?
This would make every backup appear to be unique in chunk-speak.

I remember from the v6 beta that the standard v6 clients were enhanced
so that the metadata could
be better identified by id dup and skipped over so that it could just
work on the files and get
better dedup ratios.  If id dup doesn't know how to skip over the
metadata in an ndmp stream, and
the metadata is always changed, then you will get very low dedup ratios.
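
Bill's metadata point can be illustrated with a toy fixed-size chunk-and-hash 
sketch (purely illustrative - TSM's real identify-duplicates processing is 
more sophisticated than this):

```python
import hashlib
import random

def unique_chunks(streams, chunk_size=4096):
    """Hash fixed-size chunks across all streams; count distinct chunks."""
    seen = set()
    for data in streams:
        for i in range(0, len(data), chunk_size):
            seen.add(hashlib.sha256(data[i:i + chunk_size]).digest())
    return len(seen)

rng = random.Random(0)
payload = bytes(rng.getrandbits(8) for _ in range(16 * 4096))  # shared body

# Two identical dumps dedup perfectly:
identical = [payload, payload]

# Different-length per-dump headers shift every chunk boundary:
with_headers = [b"A" * 500 + payload, b"B" * 700 + payload]

print(unique_chunks(identical), unique_chunks(with_headers))
```

With the seeded data, the two identical streams collapse to 16 unique chunks, 
while the mismatched 500- and 700-byte headers misalign every chunk boundary: 
the two streams share no chunks at all (34 unique), i.e. a near-zero dedup 
ratio from per-stream metadata alone.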

If you do a 'q pr' while the id dup is running, do the processes say
they are finding duplicates?

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Prather, Wanda
Sent: Monday, November 21, 2011 11:41 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Stupid question about TSM server-side dedup

I have a customer who would like to go all-disk backups using TSM dedup.
This would be a benefit to them in several respects, not least in having
the ability to replicate to another TSM server using the features in
6.3.

The customer has a requirement to keep their NDMP dumps 6 months.  (I
know that's not desirable, but the backup group has no choice in the
matter right now, it's imposed by a higher level of management.)

The NDMP dumps come via TCP/IP into a regular TSM sequential filepool.
They should dedup like crazy, but client-side dedup is not an option (as
there is no client).

So here's the question.  NDMP backups come into the filepool and
identify duplicates is running.  But because of those long retention
times, all the volumes in the filepool are FULL, but 0% reclaimable, and
they will continue to be that way for 6 months, as no dumps will expire
until then. Since the dedup occurs as part of reclaim, and the volumes
won't reclaim - how do we prime the pump and get this data to dedup?
Should we do a few MOVE DATAs to get the volumes partially empty?


Wanda Prather  |  Senior Technical Specialist  |  wprat...@icfi.com  |
www.icf.com
ICF International  | 401 E. Pratt St, Suite 2214, Baltimore, MD 21202 |
410.539.1135 (o)
Connect with us on social media: http://www.icfi.com/social


Re: External Disk Unit and TSM Deduplication

2011-11-16 Thread Ian Smith

Alper,

We have tested TSM server-side deduplication and, yes, it works, but at 
a cost. Specifically, in our opinion the sum of
the additional  resource requirements  (CPU power and memory) and the 
time required for the identification and reclamation processes meant 
that we couldn't see how server-side dedupe would scale on busy systems 
such as ours. Another factor that coloured our opinion was that not all 
our data deduped very well. As a result we are looking at client-side 
dedupe for those clients that we think will attain good dedupe ratios - 
however, client-side dedupe means longer client sessions and still 
impacts the TSM server a bit. I'm guessing that the apparent prevalence 
of hardware-based dedupe on this list is due to people realising that 
dedupe on a busy system needs to be offloaded from the TSM server.


HTH
Ian Smith

On 16/11/11 06:32, Alper DİLEKTAŞLI wrote:

Hi Neil,

We don't need LAN-free backup, and we won't need it.
There is no problem with using a file device class.
We won't change our TSM license model.
We wonder whether TSM deduplication is good and safe or not. We tested it in 
the test system, but we haven't got experience with it in production. If it 
can do deduplication well (like the hardware solutions), we will buy an 
external disk device. If not, we will buy a new LTO library and we won't use 
deduplication.

Thanks
Alper

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Neil 
Schofield
Sent: Tuesday, November 15, 2011 8:17 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] External Disk Unit and TSM Deduplication

Alper

I've been giving a bit of thought to this myself recently, although it's more 
of an academic exercise since we're not in the market for a new library at the 
moment.

Some things you may want to consider:

Do you (now or in the future) have a requirement for LAN-free backup? I would 
expect this to be significantly harder with a FILE device class compared to a 
physical/virtual tape library solution.

Are you considering switching to a capacity-based TSM license model? If so then 
I've been told that it's the volume of primary storage after TSM de-dupe but 
before hardware-level de-dupe/compression that counts, which may incline you 
more towards the disk-based solution.

Either way I'd be interested to hear what you decide.

Regards
Neil Schofield
Technical Leader
Data Centre Services Engineering Team
Yorkshire Water Services Ltd.

  




Re: Dev Class type FILE with multiple directories

2011-11-14 Thread Ian Smith

Keith,

the difference is that you are using pre-defined volumes - which the doc 
says will be used in alphabetical order - see the worked example at the 
bottom of the technote (steps 1 - 8 ). The true round-robin across 
directories only works when using scratch volumes (that are auto-created 
on demand).


Ian Smith
Oxford University.


On 12/11/11 15:33, Arbogast, Warren K wrote:

Rick,
I am just getting started with FILE deviceclass storage pools, so take the 
following with ample grains of salt. However, my experience contradicts 
swg21497567. Or, I might misunderstand that Technote.

This is a TSM 6.2 RHEL5 server. The file deviceclass has fourteen 2 TB 
filesystems on raid10 ldevs.  Volumes were predefined to the storage pool with 
'wait=yes', so they were created one at a time in round-robin order across the 
filesystems.

A small amount of client backups are being written to the file pool at this 
time, so it is clear how volumes are being used.  Client backup files are 
written to the storage pool in volume name order, not in filesystem name sort 
order.   Today I see one full volume in each directory, and many empty volumes. 
 If the Technote applied, all full volumes would be in the first directory.

As far as I know,
Keith


Re: vtl versus file systems for pirmary pool

2011-11-03 Thread Ian Smith

On 18/10/11 16:06, Allen S. Rout wrote:


That's fantastic.  Do you chance to recall a first approximation of what
your DB sizes were when you left V5?

- Allen S. Rout

Catching up with the list so apologies for being late on this thread ...

We ran server-to-server migrations from v5.5 to 6.1 in October last year,
and I've listed the stats we gathered at the time below. The extracted
figure was the one reported by the extractdb function - the script
reports at the end something like:
ANR1389I EXTRACTDB: Wrote <n> bytes
This extracted figure was very much lower than the figure we got from a
quick 'total used pages x 4k' calculation at v5.5 - indicating either a
significant amount of empty space / fragmentation in the old DB, or
just a lot of duplicate records/data.

The V6 figure is (if memory serves) the result of the calculation from
'q db f=d' of the Total Used Pages, on first run-up after migration to
v6, times 16k (page size).
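
The two page-size calculations described above (4 KB pages at v5.5, 16 KB at 
v6) amount to the following, sketched with hypothetical page counts:

```python
# Convert a used-page count from 'q db f=d' into gigabytes, as described
# above: v5.5 used 4 KB pages, the v6 (DB2) figure uses 16 KB pages.
def db_size_gb(used_pages, page_kb):
    return used_pages * page_kb * 1024 / 1024**3

print(round(db_size_gb(30_000_000, 4), 1))   # hypothetical v5.5 page count
print(round(db_size_gb(11_000_000, 16), 1))  # hypothetical v6 page count
```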

Server Instance       Extracted (GB)   V6 DB (GB)
Archive server              5.96           9.9
Desktop backup 1          109.18         172.03
Server backup 1           114.24         197.70
Desktop backup 2          104.77         165.66
Server backup 2           131.03         229.22
Desktop backup 3          105.25         159.91
Server backup 3            99.37         173.47
Desktop backup 4           76.02         118
Server backup 4            29.38          47.3438


Hopefully the table above will display properly. There was a mix of
client data from several versions within each server instance, but with
a generally 'older' client profile in the Archive server and instances
#1 with some of this data originating from version 2 clients. Instances
#4 on the other hand held data only from version 5 clients.

To briefly characterise the data, it was derived from a mix of windows,
linux, osx, solaris and netware clients (in decreasing numbers
respectively - windows clients make up around 55% of our client base).
The desktop backups contained only a few System Object / State type
backups - these were the only backup groups we support (i.e. no
backupsets). Server backups consisted of significantly more System
Objects, plus a handful of backups from SQL-DB and Exchange-DB clients.
I don't know how, or if, any of the above had any bearing on the sizings
we saw.

Finally, for validation, we ran a bunch of select count(*) queries on the
v5.5 and v6.1 DBs, ran them through some sed | grep | awk 'thing' to
cleanse and standardise the output, and then ran diff. We also did some
random content queries as well. All was well and we were happy.

Finally finally, I concur with the comments on resourcing your TSM DB
instances - the figures reported in the best practices only ever seem to
move in one direction ...

HTH
Ian Smith
Oxford University Computing Services.
England


Re: Deduplication question

2011-09-07 Thread Ian Smith

On 06/09/11 22:40, Richard van Denzel wrote:

Hi All,

Just a question about the internal dedup of TSM. When I dedup a storage
pool and then back up the pool to a dedup copy pool, will the data in the
storage pool backup be transferred deduped, or will it get undeduped first,
then transferred and then deduped again?

Richard.

Richard,

The chapter 'managing storage pools and volumes' in the TSM 6.1  Admin
Guide has a decent section on deduplicating data. We are investigating
server- and client-side dedupe but we haven't tried what you want to do
in our test environment. However, the above document suggests in two
places that the data will be  copied/moved in its deduplicated 'state'
rather than being rebuilt, copied and deduplicated again.

Ian Smith
Oxford University


Re: confused about deduprequiresbackup

2011-09-07 Thread Ian Smith

Alex,

I can't find the precise reference at the moment, but I think the copy
backup must be to a non-deduplicated pool for this flag to be satisfied.
Accordingly, I suspect reclamation, which is the process that
deconstructs the duplicate data, is disabled until the flag setting is
properly honoured.

Ian Smith

On 07/09/11 16:18, Alexander Heindl wrote:

Hi,

I'm a bit confused about this option.
When I set it to 'no', reclamation on my primary dedup filepool works;
when set to 'yes', it does not, although all data is copied to a copypool.
Reclamation on the copypool works in both situations...

Could it have to do with the fact that the copypool is also a file
device (on a share) with deduplication activated?

Regards,
Alex Heindl


Re: TSM v6.2 Diaster Recovery Restore

2011-09-05 Thread Ian Smith

David,

make sure that the new instance user has the same group name as the old;
i.e., as the old instance user do:
'db2 get dbm cfg' and examine the value for the SYSADM group name.
If it is different, update it to the new value with 'db2 upd dbm cfg
using SYSADM_GROUP xx' - otherwise you will get this when starting DB2:

SQL1092N XXX does not have the authority to perform the requested
command or operation.
HTH
Ian Smith

On 02/09/11 20:44, Ehresman,David E. wrote:

Can I restore a TSM v6.2 database to a tsm instance with a different name?  
That is, could I restore a db backup from a tsm with a instance user id of 
libmgr1 to a new machine which has been configured with a instance user id of 
libmgr2?


Exchange Legacy 2007 to VSS 2010 backup performance

2011-08-24 Thread Ian Smith

Our Mail/GroupWare service is migrating from Exchange 2007 to 2010 in
the next few months. Currently we employ streaming (Legacy) backups
across a 2Gbit bonded private link direct to LTO5 tape and get around
50MByte/s for fulls and around 16-20MByte/s for incrementals. In all we
have about 20TB of mailstore.

I've searched the threads and the user docs, developerworks etc etc but
can't find any real figures on how legacy versus vss backups compare. I
suspect they will be slower, because of the additional steps in the
whole VSS backup protocol but I'm happy to be surprised. If anybody has
any real world figures that address the questions below, I'd be truly
grateful.

1. Legacy versus VSS-assisted (MS VSS provider) backups of Exchange 2007
to TSM server.
2. VSS-assisted backups of Exchange 2007 versus 2010.
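
For scale, the figures quoted above imply a long serial backup window; a quick 
sketch (decimal units, single stream - real backups run many parallel 
sessions):

```python
# Naive serial backup window: total data divided by sustained throughput.
def backup_hours(total_tb, mb_per_s):
    return total_tb * 1_000_000 / mb_per_s / 3600

# 20 TB of mailstore at the quoted 50 MB/s full-backup rate
print(round(backup_hours(20, 50), 1))
```

That works out to roughly 111 hours for a single stream, which is why even a 
modest percentage difference between Legacy and VSS throughput matters here.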

This is a big migration for all of us and I'd like to have an idea of
what we should expect in testing.

Thanks
Ian Smith
Oxford University/ England.


Re: Version 3 client on Version 6 server

2011-04-06 Thread Ian Smith

On 05/04/11 19:10, Thomas Denier wrote:

We are in the process of migrating from a TSM 5.5.4.0 server under
mainframe Linux to a pair of TSM 6.2.2.0 servers, also under mainframe
Linux. Our organization has an application running under SCO Open Server
3.2.5. The SCO system uses TSM 3.1.0.8 client code (the highest client
level that was ever supported for the OS level). A replacement system
running Windows 2003 has been installed, but we recently learned that
current plans call for spreading the migration of the SCO system's
remaining workload over a period of a calendar year or more. This raises
the possibility that the SCO TSM client will still be around when we are
ready to shut down the TSM 5.5 server. Has anyone successfully used this
client configuration with a TSM 6.2 server?

Thomas,

we have 4 old clients on HP-UX, IRIX, Solaris 8 and Redhat 4(?) that are
running 3.1.0.7 and .8 clients with no ill-effects. We migrated to
server version 6 several months ago and are now on 6.2.2. The migration
handled the 'old' version 3 data without problem (the only errors we
ever saw were for orphaned entries for old storage pools that we had
deleted). Basically we say to these clients: while it works, fine, but
you are on your own, and we will probably lock you out if there are any
server-side effects in the future.

Ian Smith
Oxford University


Re: Client support for Ubuntu?

2010-04-30 Thread Ian Smith
Allen,

perhaps a little tangential to your request, but this announcement (?)
certainly passed me by at the time.

http://www-01.ibm.com/support/docview.wss?rs=663&tcss=Newsletter&uid=swg21417165

Regards
Ian Smith
Oxford University
UK

On Friday 30 Apr 2010 3:41 pm, Allen S. Rout wrote:
 So I just submitted a PMR asking IBM to officially bless some
 procedure for getting the client installed on an Ubuntu box.

 I was working on a local doc, which was going to be largely cadged
 from the advice in the archives of this list, but figured I'd at least
 attempt to do the Right Thing.

 Last time I did that, (the TS3500 CLI) the development team addressed
 the issue even before the poor entitlement staffer could figure out
 how to label the reqest. :) That one is still pending in entitlement,
 by the way.  I think she's stumped, but I intend to follow the PMR all
 the way to the bitter end.


 So I figured I'd talk about it here, too, in case some TSM Client type
 is listening. *waggle eyebrows*.

 ---

 I also thought I'd try to put my nascent doc up on the WIKI.  'cause
 that's what the wiki is for, right?  Customer-sourced information at
 the fringes of the official work?

 But it took me 30 seconds to get a page image from

 http://www.ibm.com/developerworks/wikis/display/tivolistoragemanager/Home

 which means to me that none of my customers will see whatever I stick
 up there.  They'll conclude that something in IBM is broken, and move
 on, and I'll get complaints that my link is busted.


 I would love to hear what subterranean forces conspire to make a wiki
 from IBM, theoretically an enterprise-class software company, perform
 worse than Wikipedia.  I've no question they can do it right.  But why
 did they choose to do it badly?  Project out of favor?  Too many
 public interfaces, so one gets snowed under?

 ---

 So, rather than whining here, I was going to contact someone
 associated with the wiki, but I can't find a contact point.  I may
 just be obtuse: my wife says I'm excellent at not finding things in
 plain sight.  But the closest I come is the wiki's home page's
 creator, dwblogadmin.  But that links to a developerworks page for
 one Jay Allen, whose profile and VCARD do not include e.g. email,
 phone, whatever.

 So I guess I'll leave an update on his wall.




 - Allen S. Rout
 - Really trying to go through channels.


Re: 5.5 - 6.2

2010-03-05 Thread Ian Smith
On Thursday 04 Mar 2010 11:46 pm, John D. Schneider wrote:
 Because we have as many as 4
 instances on one server, we will be forced by the upgrade requirements
 to upgrade all of those on the same day.  At least, I think that is
 true.  When I upgraded a test server from 5.4 to 6.1 a couple weeks ago,
 I was told to uninstall 5.4 before installing 6.1, and that they could
 not exist on the same server.  Am I understanding this correctly?

John,

the problem of co-existence - at least on AIX - is actually at installation
time. On test and beta-test systems we successfully ran 6.1 and 5.5
servers. If my memory serves me, installing 6.1 trashed the existing 5.5
code - so copy the code in /usr/tivoli/tsm/server/bin to a separate area
before installing 6.1, and point your environment variables - DSMSERV_DIR
and DSMSERV_CONFIG - appropriately.

Of course, this leaves you unable to upgrade 5.5 on this box, but if you
have another LPAR / machine available, you can always install the upgrade
version there and copy across the binaries.

As I said, we ran this in a test environment, its not something I'd
willingly do in a production one, but sometimes needs must when the devil
drives ...

Ian Smith
Oxford University Computing Services


Re: date check in a select

2010-02-11 Thread Ian Smith
Richard,

the column reg_time is in timestamp format, so you need to 'cast' it into the
correct format. Further, you are comparing against an integer. What you
probably want is:

select node_name, date(reg_time) from nodes \
where cast(current_date - date(reg_time) as integer) >30

The first date() is not really necessary.
Note that if you put a space between the greater-than operator and the
number, TSM will treat this as a redirect and write the output to a file
called '30' - so no space between > and 30.

There is a manual on pre-version 6 TSM SQL syntax ... somewhere.

HTH
Ian Smith
Oxford. UK
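
The cast trick above can be mirrored off-server; a minimal sketch of the same 
age filter (hypothetical node names and registration dates, not TSM API calls):

```python
from datetime import date

def nodes_older_than(nodes, days, today=None):
    """Mirror of the select: keep nodes registered more than `days` ago."""
    today = today or date.today()
    return [name for name, reg in nodes if (today - reg).days > days]

# Hypothetical registration data
nodes = [("NODE_A", date(2009, 6, 1)), ("NODE_B", date(2010, 2, 1))]
print(nodes_older_than(nodes, 30, today=date(2010, 2, 11)))
```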

On Thursday 11 Feb 2010 4:46 pm, Richard Rhodes wrote:
 What's the best way to write a date check like this?I'm looking
 for any nodes registered after some number of days (the 30 is just an
 example).

dsmadmc -se=$i -id=$adminid -password=$adminpwd -tab -noc <<EOD
  select node_name, reg_time from nodes where reg_time > 30 days
EOD


ANR2916E The SQL data types TIMESTAMP and INTERVAL DAY(9) are
incompatible for operator '>'.



 Thanks

 Rick



Re: Searching for TUG Meeting for CPU2TSM

2009-09-10 Thread Ian Smith
Is TSM not moving towards a per TB licensing model?


Ian Smith

Consultant

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Christian Svensson
Sent: 10 September 2009 07:11
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Searching for TUG Meeting for CPU2TSM

Hi,
Many of you out there know me either as the one writing here at ADSM.ORG,
or as Mr. BMR, going around the world doing PoCs and technical
presentations at Tivoli User Groups.

These days I am working with a new product called CPU2TSM.
CPU2TSM is a small file that you implement on your TSM server, from which
you can count the number of CPUs and cores and convert that to IBM PVUs.
More info at http://www.cristie.se/utility-box-english/

But instead of promoting this product on ADSM.ORG, I was wondering how
many of you would be interested in getting a PoC and more information at
your TUG meeting.
I'm looking to do an around-the-world trip in Nov 2009, starting in
Stockholm, Sweden on the 22nd of Oct.

If you are interested in having me at your TUG meeting in Nov, please drop
me an email and tell me where and what date, so I can start planning my
around-the-world trip in Nov.




Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
Skype: cristie.christian.svensson


Re: Virtual Volume Restore Speed?

2009-08-04 Thread Ian Smith
Allen,

My only thought - given that you are restoring lots of small files -
is that you may be thrashing the target disk. Have you looked at that ?

Ian Smith
Oxford
England


On Monday 03 Aug 2009 5:07 pm, Allen S. Rout wrote:
 Howdy, all.

 I've done a decent amount of small-scale online restores from my
 offsite virtual volumes, and never been particularly unhappy with the
 speed.  (though come to think of it I've never timed it either)

 But I'm restoring a pretty big volume now, and it's taking a L-O-N-G
 time.  I was hoping to elicit war stories from some of you, and see if
 my expectations are just out of whack.


 Environment: Everything is TSM 5.5.3 on AIX.  I've got about 350 miles
 between primary and secondary site.


 I regularly get 80 MB/s sustained running tape to tape from primary to
 offsite over this link.  I've got plenty of TCP buffer space, and I've
 set my TCP windows to be 2M on all the servers.

 When I use iperf to check just TCP/IP throughput, I get 800-900Mb
 avg. over 30 seconds, with sustained 1Gb plateaus.  So the network
 level seems reasonable.

 But the restore is rattling around in the vicinity of 2.5 MB/s; just
 incredibly slow.

 Now, these are small files; average is just 300K, and most of them are
 much smaller (email messages).  But the database on the restoring
 server isn't thrashing, I don't find any bottlenecks at first look.


 So.  What have you-all seen out of your restores?  I really need to
 characterise this; It would Not Be Good if this were the best I could
 do in a real emergency.


 - Allen S. Rout
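
A quick sanity check on the numbers in Allen's post above (a sketch using his 
quoted figures):

```python
# Approximate small-file restore rate implied by a throughput figure.
def files_per_second(mb_per_s, avg_file_kb):
    return mb_per_s * 1024 / avg_file_kb

# 2.5 MB/s restoring files averaging ~300 KB
print(round(files_per_second(2.5, 300), 1))
```

That is under ten files per second, which points at a per-file cost (metadata 
handling, seeks on the target disk) rather than raw bandwidth - consistent 
with Ian's thrashing suggestion.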


Re: Change TSM Platform

2009-08-04 Thread Ian Smith
I believe there is a tool called Backup Migrator that can do automated,
hands-off migrations of legacy data. There is an initial policy setup
stage, then the appliance moves the data between the environments,
meaning the old environment can be decommissioned.



Ian Smith

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Shawn Drew
Sent: 04 August 2009 15:46
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Change TSM Platform

You will have to move the NAS clients over to the new TSM server,
and wait for the old backups to expire before you retire the old backup
server.

That's what I figured, but I'm not keeping that thing around for 7
years.
I guess we'll have to stick with Windows :(


Regards,
Shawn

Shawn Drew





Internet
john.schnei...@computercoachingcommunity.com

Sent by: ADSM-L@VM.MARIST.EDU
08/03/2009 06:38 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
Re: [ADSM-L] Change TSM Platform






Shawn,
From my understanding, you are going to have to set up a new TSM server,
and migrate the clients over to it.  You can use the export commands to
export policies and client data from one TSM server to another directly
across the LAN.  That will make it somewhat less painful, but depending
on how many clients you have, this could take a few weeks.  You will
have to have enough storage capacity on the new system to absorb all
this data.  If you only have one tape library, you will have to set up
library sharing, and have enough tapes to have two copies of some
clients' data as you migrate clients over.  If you want more detailed
instructions, please ask. Many of us have been through such migrations
before.
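The server-to-server route described above might look like the following admin-command sketch. The server name, address, password and node name are hypothetical placeholders; check DEFINE SERVER and EXPORT NODE in the Administrator's Reference for your level before relying on it.

```
/* On the source server: define the target, then push policy definitions
   and client data across the LAN.  NEWTSM and MYNODE are placeholders. */
define server NEWTSM serverpassword=secret hladdress=newtsm.example.com lladdress=1500
export policy toserver=NEWTSM
export node MYNODE filedata=all toserver=NEWTSM
```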

According to the help on EXPORT NODE, you can't use it on nodes of type
NAS.  You will have to move the NAS clients over to the new TSM server,
and wait for the old backups to expire before you retire the old backup
server.

Best Regards,

 John D. Schneider
 The Computer Coaching Community, LLC
 Office: (314) 635-5424 / Toll Free: (866) 796-9226
 Cell: (314) 750-8721


   Original Message 
Subject: Re: [ADSM-L] Change TSM Platform
From: Michael Green mishagr...@gmail.com
Date: Mon, August 03, 2009 1:32 pm
To: ADSM-L@VM.MARIST.EDU

You will be moving from x86/64 architecture to Power. They are binary
incompatible. You cannot load a DB from x86 onto Power (which is what
backup/restore essentially does). Your only option is to use export.

Don't know about the NDMP.
--
Warm regards,
Michael Green



On Mon, Aug 3, 2009 at 9:06 PM, Shawn
Drewshawn.d...@americas.bnpparibas.com wrote:
 We are looking at the possibility of changing a branch's TSM 5.4 server
 from Windows to AIX.
 As far as I know, you can NOT backup a DB on Windows and restore it to an
 AIX platform. Is this still the case?

 If not, can anyone think of a way to move NDMP toc/backups from one TSM
 server to another?




This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in
error,
please delete it and immediately notify the sender. Any use not in
accord
with its purpose, any dissemination or disclosure, either whole or
partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall
(will)
not therefore be liable for the message if modified. Please note that
certain
functions and services for BNP Paribas may be performed by BNP Paribas
RCC, Inc.
Dell Corporation Limited is registered in England and Wales. Company 
Registration Number: 2081369
Registered address: Dell House, The Boulevard, Cain Road, Bracknell,  
Berkshire, RG12 1LF, UK.
Company details for other Dell UK entities can be found on  www.dell.co.uk.


Re: TSM Server 5.5, AIX and xlC version

2009-05-20 Thread Ian Smith
David,

AIX installp will say something like 'this version is superseded by
a later version' and ignore it. In our experience the TSM server filesets
will install fine and will work with xlC 9.0 ( this is on AIX 5.3 TL 8 SP 5
).

HTH
Ian Smith
Oxford University


On Wednesday 20 May 2009 2:50 am, David Longo wrote:
 I currently have TSM server 5.4.3.0 on AIX 5.3 TL8-2.

 Will be upgrading server to 5.5.2.0 in a few weeks.
 5.5 includes, for AIX platform, xlC 8.0.  My AIX already
 is at xlC 9.0.

 Install doc just says what's on CD.  Can this run with xlC
 9.0 or do I have to downgrade?

 I have a few AIX Clients with 5.5.0 and xlC 9.0 and no problem,
 but server is different.

 Thanks,
 David Longo


 #
 This message is for the named person's use only.  It may
 contain private, proprietary, or legally privileged information.
 No privilege is waived or lost by any mistransmission.  If you
 receive this message in error, please immediately delete it and
 all copies of it from your system, destroy any hard copies of it,
 and notify the sender.  You must not, directly or indirectly, use,
 disclose, distribute, print, or copy any part of this message if you
 are not the intended recipient.  Health First reserves the right to
 monitor all e-mail communications through its networks.  Any views
 or opinions expressed in this message are solely those of the
 individual sender, except (1) where the message states such views
 or opinions are on behalf of a particular entity;  and (2) the sender
 is authorized by the entity to give such views or opinions.
 #


Re: restoring a single file from an image backup, possible?

2009-04-29 Thread Ian Smith
What version of TSM Server are you running?

To restore a single file from an image you need a TOC, which was available
for NDMP backups but not for OLVSA images at TSM version 5.


Ian Smith

Consultant
Global Consulting Services
DELL|Solutions


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Mehdi Salehi
Sent: 29 April 2009 07:13
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] restoring a single file from an image backup,
possible?

Hi,
I made a full image backup of drive f: in Windows. In the restore window
of the B/A client, drive f: appears under both the File Level and Image
hierarchies. Is it possible to restore selected files and directories from
this image backup?

Thanks so much,
Mehdi Salehi


Re: Green issue: How to avoid leaving clients on all night

2009-04-23 Thread Ian Smith
Hi Roger,

I can see you have had a few replies: we at Oxford have been chewing on this
for a while and have two solutions:

1. quick and dirty: run a PostNSchedulecmd as below ( the machine only shuts
down after the scheduled backup ) see:
http://www.oucs.ox.ac.uk/hfs/help/faq/index.xml.ID=scheduled#shutdown

2. implement a wake-on-lan solution - this is more involved, especially if
you are crossing subnets ( the wol packet doesn't travel across subnets )
see http://www.oucs.ox.ac.uk/wol/ for more info on our solution.
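For anyone rolling their own, the wake-on-lan "magic packet" itself is trivial to construct. This is a generic sketch, not the OUCS implementation; the MAC address shown is a placeholder.

```python
import socket

def magic_packet(mac: str) -> bytes:
    """A WOL magic packet: 6 bytes of 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the packet as UDP.  It must be sent on (or relayed to) the
    target's subnet, since broadcasts don't cross subnet boundaries."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# send_wol("00:11:22:33:44:55")  # placeholder MAC
```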

Cheers
Ian Smith
Oxford University Computing Services:
--


On Wednesday 22 Apr 2009 1:55 am, Roger Deschner wrote:
 We're getting requests from a number of people who have their desktop
 computers (mix of Macs and Windows XP/Vista) backed up to TSM, for a way
 to avoid leaving them on all night.

 The issue is simple energy conservation. Even with the monitor off, and
 the disk drives spun down, a live PC still consumes quite a bit of
 electricity. If you put it into either Hibernate or Standby mode, the
 TSM Scheduler cannot run the backup.

 We had thought of setting a POSTSCHEDULECOMMAND of shutdown, but that
 has a severe problem. What if you were working late, because of an
 urgent project, and backup ran. Your computer would then shut down
 without saving what you were working on, and precisely because it was
 urgent enough for you to be working on it late, this would be very
 valuable work that would be lost.

 Has anybody figured out a way around this basic problem?

 Roger Deschner  University of Illinois at Chicago rog...@uic.edu
  Research is what I'm doing when I don't know what I'm doing. 
 = -- Wernher von Braun =


Percent complete

2009-04-17 Thread Ian Smith
Hi

I need a little reminder on how to do a calculation in a SQL statement:

I want to divide:

Select count(*) from events where scheduled_start > current_timestamp - 1 day

BY

Select count(*) from events where status not like '%Completed%' and
scheduled_start > current_timestamp - 1 day

THEN * 100

I have done this before by putting each statement in brackets but can't
quite remember!!!

Thanks

Ian



Re: Percent complete

2009-04-17 Thread Ian Smith
Excellent, I have had to spin the selects round - but that works nicely.

Ian


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Bob Levad
Sent: 17 April 2009 14:11
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Percent complete

select DISTINCT (100.0*(Select count(*) from events where scheduled_start >
current_timestamp - 1 day)/(Select count(*) from events where status not
like '%Completed%' and scheduled_start > current_timestamp - 1 day)) AS
Percent FROM events
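The shape of the calculation - one bracketed subselect divided by another, scaled by 100.0 to force decimal rather than integer arithmetic - can be sketched outside TSM too. A toy reproduction in SQLite (the table and rows are made up, and TSM's SQL dialect differs in details such as timestamp arithmetic):

```python
import sqlite3

# Toy EVENTS table: three completed schedules, one missed.
con = sqlite3.connect(":memory:")
con.execute("create table events (status text)")
con.executemany("insert into events values (?)",
                [("Completed",), ("Completed",), ("Completed",), ("Missed",)])

# Percentage completed: bracketed subselects, multiplied by 100.0 so the
# division is done in floating point rather than integer arithmetic.
pct = con.execute(
    "select 100.0 * (select count(*) from events where status like '%Completed%')"
    " / (select count(*) from events)"
).fetchone()[0]
print(pct)  # -> 75.0
```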



This electronic transmission and any documents accompanying this
electronic transmission contain confidential information belonging to
the sender.  This information may be legally privileged.  The
information is intended only for the use of the individual or entity
named above.  If you are not the intended recipient, you are hereby
notified that any disclosure, copying, distribution, or the taking of
any action in reliance on or regarding the contents of this
electronically transmitted information is strictly prohibited.


Preferred TSM Platform

2009-02-25 Thread Ian Smith
Hi

I am sure this question has been asked many times, however with server
and OS development, what is the favoured OS for TSM v5? I have always
preferred AIX, have never been keen on Solaris, and am now considering
Windows instead.

Will v6 be compatible with the Windows platform?

Ian Smith



Re: TSM v6 - article up on SearchStorage

2009-02-06 Thread Ian Smith
27 March release date stated on Register

http://www.theregister.co.uk/2009/02/06/tsmv6_gets_dedupe/

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Remco Post
Sent: 06 February 2009 10:21
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM v6 - article up on SearchStorage

On 5 feb 2009, at 21:15, Richard Rhodes wrote:


http://searchdatabackup.techtarget.com/news/article/0,289142,sid187_gci1
346956,00.html



fortunately, there is no mention of an actual release date, just that
it's coming. ;-)

ps, I found a similar posting on internetnews.com

--
Met vriendelijke groeten/Kind regards

Remco Post
r.p...@plcs.nl
+31 6 24821 622


TSM Symposium 2009

2008-12-19 Thread Ian Smith
Hi,

many of you on this list have contacted us at Oxford in the last few months
about the arrangements for the biennial TSM Symposium in 2009. We have
been a bit coy about committing ourselves to this event for a number of
reasons, not least that Sheelagh, the driving force behind the previous
Symposia, has retired. Additionally, we can no longer so easily rely on
the local administrative support that was offered in the past and that
forms a major part of putting on the event. This means that the
administrative and logistical demands will be too high on our small team
(even with the in-house experience we now have) and we therefore have to
announce, with considerable regret, that we will not be able to host the
TSM Symposium next year.

However, we have been in discussion with other educational establishments
regarding this and the happy result is that Claus Kalle at the Computing
Centre at the University of Cologne ( aka the RRZK - Regionales
Rechenzentrum Koln ) Germany has agreed to host the event in Sept 2009.

More precise details are not yet available but we wanted to inform you as
soon as possible of at least the probable future existence of the event
so that you yourselves can pencil in appropriate plans. We'd like to take
the opportunity of thanking Claus for taking on this task, as well as
Kirsten Gloeer at the University of Heidelberg for her help and
cooperation.

I'll leave Claus to make formal, further announcements, when he has them.
We at Oxford hope that we can offer him and his team as much assistance as
possible and I'm sure many of you will join us in contributing to a really
first-class event.

Finally, just to be clear, we also hope to stage the event again in the
future - but just not in 2009.

Regards
Ian Smith
 the TSM team at Oxford, England


Re: How to exclude network drives

2008-12-15 Thread Ian Smith
Roger,

we use the following - cribbed, I think, from the client manual.

Exclude.dir \\*\*\...\*

This works (if memory serves correctly) due to the client addressing
non-local drives using the UNC convention while addressing local drives
using drive letters.
Try it and see. You will see the network shares listed at the server
when you do 'q files NODE', but a 'query occ NODE' should
report no occupancy under these drives.
Unfortunately, I've never worked out how to exclude removable drives :(

HTH
Ian Smith
Oxford University
UK/England/GB/*

On Saturday 13 Dec 2008 1:50 am, Roger Deschner wrote:
 We have some users who are (ab)using TSM to back up network drives that
 they have mapped. I want to stop this, via something in a Client Option
 Set. How would I go about coding such a blanket exclusion, when I don't
 know what the name of the system hosting the drive, or its drive letter,
 might be?

 Roger Deschner  University of Illinois at Chicago rog...@uic.edu
Academic Computing  Communications Center
 == You are making progress when every mistake you make is a new one. ===


TSM Library manager v Gresham

2008-11-20 Thread Ian Smith
Hi

What experiences do people have of library/drive control for the SL8500?

I am trying to work out the pros and cons of either using TSM Library
manager with ACSLS or the Gresham agents. Does anyone have any thoughts
or experiences regarding performance, stability, ease of use etc?

Thanks

Ian Smith



Re: TSM Library manager v Gresham

2008-11-20 Thread Ian Smith
Thanks

Do you currently use TSM Library Manager with ACSLS?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Mark Stapleton
Sent: 20 November 2008 12:40
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Library manager v Gresham

From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
Of Ian Smith
 What experiences do people have of library/drive control for SL8500?
 
 I am trying to work out the pros and cons of either using TSM Library
 manager with ACSLS or the Gresham agents. Does anyone have any
thoughts
 or experiences regarding performance, stability, ease of use etc?

There isn't anything that Gresham offers that TSM's built-in library
management lacks. Indeed, getting multiple TSM instances resync'd with
Gresham can cause (temporary) inability to use the library during that
resync. Stability isn't usually an issue with either TSM library
management or Gresham. Gresham requires additional administration that
TSM's library management doesn't need.

And TSM's library management doesn't have an outrageous price tag.

(You'll still need to use ACSLS, of course.)
 
--
Mark Stapleton ([EMAIL PROTECTED])
CDW Berbee
System engineer
7145 Boone Avenue North, Suite 140
Brooklyn Park MN 55428-1511
763-592-5963
www.berbee.com
 


Re: TSM Library manager v Gresham

2008-11-20 Thread Ian Smith
Accepted that the single point of failure of using a single TSM library manager
instance is a risk. It seems that this is the major benefit of Gresham: its
ability to distribute shared library control. This then needs to be weighed
against cost.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Copin, 
Christophe (Ext)
Sent: 20 November 2008 13:29
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Library manager v Gresham

Hi

I disagree. Except about the price tag ...

Unfortunately, TSM's built-in library manager is not smart enough for large
ACSLS libraries.
How do you handle multiple ACSLS scratch pools without an external library
manager (Gresham or whatever)?
What path optimization can be done without an external library manager (to
avoid using the pass-through)? Let's say I would like TSM to choose the
closest free drive (same LSM) to mount a tape ... TSM uses tape drives in
sequential order.

If your TSM server hosts several instances, Gresham allows you not to depend
on one *critical* TSM library manager instance.
With an external manager, each instance is able to mount/dismount tapes as a
stand-alone server would do.

In a nutshell, the larger your library (or libraries), the more worthwhile
Gresham is.
Actually I would say the drawback is only the price, and the fact you'll
have to get a licence for each storage agent. (;_;) If you don't have to
manage SL8500s and other such large libraries, forget about Gresham and use
the TSM built-in manager.

Christophe.

 


Afin de préserver l'environnement, merci de n'imprimer ce courriel qu'en cas de 
nécessité.

Please consider the environment before printing this mail.



Re: What happened to 5.5.1.2 client?

2008-09-17 Thread Ian Smith
Hi Zoltan,

we wondered the same, enquired and were told it had been pulled
because of 'issues' - I don't know more.
Don't touch it.

Cheers
Ian Smith
Oxford University

On Tuesday 16 Sep 2008 7:37 pm, Zoltan Forray/AC/VCU wrote:
 Anyone know what happened to the 5.5.1.2 patch for the Windows client?

 I downloaded it last week but the readme for what was new with this
 update would never list anything beyond 5.5.1.1.

 Now when I went back to check again, 5.5.1.2 is MIA?


Re: Updated VS Backed Up

2008-08-15 Thread Ian Smith
Shawn,

in the dsm.opt the syntax is, I think

testflag  DISABLEATTRIBUPDATE

i.e. no '='

Ian Smith

On Thursday 14 Aug 2008 7:53 pm, Shawn Drew wrote:
 Unfortunately that didn't work when I tried a backup.  Anyone else with
 an idea on how to get a testflag into a client option set?

 Invalid trace flag: TESTFLAGS=DISABLEATTRIBUPDATE


 Regards,
 Shawn
 
 Shawn Drew





 Internet
 [EMAIL PROTECTED]

 Sent by: ADSM-L@VM.MARIST.EDU
 08/14/2008 02:37 AM
 Please respond to
 ADSM-L@VM.MARIST.EDU


 To
 ADSM-L
 cc

 Subject
 Re: [ADSM-L] Updated VS Backed Up





 Ok, as I said I was a little bit tired after a 14-hour shift..

 Your question was Does anyone know how to get a testflag into a client
 option set?

 tsm: def clientopt test TRACEFLAGS -testflags=DISABLEATTRIBUPDATE
 ANR2050I DEFINE CLIENTOPT: Option TRACEFLAGS defined in optionset TEST.


 //Henrik

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
 Shawn Drew
 Sent: den 13 augusti 2008 16:19
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Updated VS Backed Up

 Skipntpermissions addresses a different issue.  DISABLEATTRIBUPDATE
 addresses permissions that are saved in the TSM DB I believe as opposed
 to the ntfs security which stores the metadata in the storagepool.

 Regards,
 Shawn
 
 Shawn Drew





 Internet
 [EMAIL PROTECTED]

 Sent by: ADSM-L@VM.MARIST.EDU
 08/12/2008 06:29 PM
 Please respond to
 ADSM-L@VM.MARIST.EDU


 To
 ADSM-L
 cc

 Subject
 Re: [ADSM-L] Updated VS Backed Up





 Hi,

 Check 'Skipntpermissions' in the client manual.
 There could be a traceclass too but I am too tired to look..

 Thanks
 Henrik

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
 Shawn Drew
 Sent: den 13 augusti 2008 00:14
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Updated VS Backed Up

 testflag DISABLEATTRIBUPDATE

 I read this tsmblog article:
 http://tsmblog.org/serendipity/index.php?/archives/81-UPDATED-vs-BACKED-
 UP.html

 I remember reading about this testflag on the IBM Support site  and I
 thought I remember reading about it on adsm-l somewhere, but searching
 Google or IBM for DISABLEATTRIBUPDATE doesn't return anything anymore.

 Anyway, for posterity:

 By default TSM will not keep a history of permission changes, which
 doesn't make sense to me!
 In the past I've had to do complete system restores after a virus messed
 up permissions.

 This testflag in the dsm.opt file will disable that and backup the file
 for any change (including permission changes)


 Does anyone know how to get a testflag into a client option set?
 (My last resort will be to try adding -testflags=DISABLEATTRIBUPDATE
 to the options of all my schedules)



 Regards,
 Shawn
 
 Shawn Drew




 ---
 The information contained in this message may be CONFIDENTIAL and is
 intended for the addressee only. Any unauthorised use, dissemination of
 the information or copying of this message is prohibited. If you are not
 the addressee, please notify the sender immediately by return e-mail and
 delete this message.
 Thank you.




Re: Erroneous files in client log directory

2008-07-16 Thread Ian Smith
Is there a schedlogmax defined, or any trace flag that may be configured
on the client?

What ID is the client running under, does it have the privileges to
remove temp files it may be able to create?

Ian


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Matthew Large
Sent: 16 July 2008 14:19
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Erroneous files in client log directory

Hello Storage Folk,

I can't seem to find any documentation or anyone else reporting these
files being created in the log directories.

16/07/2008  14:09         3,303 dsmerror.log
16/07/2008  14:10        31,912 dsmsched.log
16/07/2008  14:10        16,350 dsmwebcl.log
16/06/2008  21:56            22 s2i4
30/06/2008  21:41            22 s2ig
20/06/2008  21:58            22 s2io
10/07/2008  01:21            22 s2j4
14/07/2008  01:19            22 s2j8
22/06/2008  21:18            22 s2jc
18/06/2008  21:50            22 s2k0
16/07/2008  01:31            22 s2kk
15/07/2008  01:42            22 s2l0


We've all seen the change in the way the scheduler updates the log files
(seems to write to temp file before appending to the actual dsmsched.log
file itself) but my customer is complaining that this machine is falling
over in the middle of the night, and I need to disprove TSM as being a
cause. I can't figure out why these files are lying around here - having
checked their contents they contain messages like:

EXECUTE PROMPTED 1116

And

EXECUTE PROMPTED 1179

What are they? MS return codes?

Anyone seen this kind of behaviour in the past?

Cheers,
Matthew


--
Matthew Large
TSM Consultant
Storage Services
Barclays Wealth Technology

Desk: +44 (0) 207 977 3262
Mobile: +44 (0) 7736 44 8808
Alpha Room, Ground Floor Murray House
1 Royal Mint Court
London EC3N 4HH



Barclays Wealth is the wealth management division of Barclays Bank PLC.
This email may relate to or be sent from other members of the Barclays
Group.

The availability of products and services may be limited by the
applicable laws and regulations in certain jurisdictions. The Barclays
Group does not normally accept or offer business instructions via
internet email. Any action that you might take upon this message might
be at your own risk.

This email and any attachments are confidential and intended solely for
the addressee and may also be privileged or exempt from disclosure under
applicable law. If you are not the addressee, or have received this
email in error, please notify the sender immediately, delete it from
your system and do not copy, disclose or otherwise act upon any part of
this email or its attachments.

Internet communications are not guaranteed to be secure or without
viruses. The Barclays Group does not accept responsibility for any loss
arising from unauthorised access to, or interference with, any Internet
communications by any third party, or from the transmission of any
viruses. Replies to this email may be monitored by the Barclays Group
for operational or business reasons.

Any opinion or other information in this email or its attachments that
does not relate to the business of the Barclays Group is personal to the
sender and is not given or endorsed by the Barclays Group.

Barclays Bank PLC. Registered in England and Wales (registered no.
1026167).
Registered Office: 1 Churchill Place, London, E14 5HP, United Kingdom.

Barclays Bank PLC is authorised and regulated by the Financial Services
Authority.


Re: Erroneous files in client log directory

2008-07-16 Thread Ian Smith
In the Windows application log, is there any evidence of the acceptor
daemon crashing, and possibly orphaning the files? What version of TSM
is the client?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Sims
Sent: 16 July 2008 14:37
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Erroneous files in client log directory

On Jul 16, 2008, at 9:18 AM, Matthew Large wrote:

 ... I can't figure out why these files are lying around here -
 having checked their contents they contain messages like:

 EXECUTE PROMPTED 1116

 And

 EXECUTE PROMPTED 1179

 What are they? MS return codes?


Matthew -

See Client Acceptor Daemon
in  http://people.bu.edu/rbs/ADSM.QuickFacts

where I summarize the nature of those files.  I'm surprised to see
them in that directory, and lingering.

I don't see how a few tiny files can cause a computer system to be
falling over.  The customer has to do some actual looksee into the
failure condition.

Richard Sims


Re: Problems after upgrading 5.3 Linux client to 5.4

2007-12-12 Thread Ian Smith
Hi Michael,

we experienced the same thing when we asked clients to upgrade to 5.4.1.2
( to avoid the security vulnerability in the dsmcad ): Incrementals started
backing up the entire filestore - which gave us a headache at the server end.
Running: dsmc incr -traceflags=service -tracefile=/path/to/file  on the
client showed that it was ACL-related extended attributes that were causing
files to be sent again to the server ( see trace output below my signature ):
essentially, the client sees the file ACL attributes as 'changed' ( because
it has, as yet, no ACL attribute data on the server ) and instead of updating
the file attributes at the server end, it sends the whole file again.

try searching for the string ATTRIBS_BACKUP and ATTRIBS_EQUAL in the output
tracefile.

IBM support state that ACL support was introduced in 5.3, but we're pretty
sure that we saw this behaviour even in clients moving from 5.3.x to 5.4.x,
although we didn't pursue this.

If this is your experience, you can bypass this with the 'SKIPACL' and
'SKIPACLUPDATECHECK' options __but__ our experience has shown that this
dramatically slows the incremental processing on the client. At the very
least, take a look at the manual on these options, and test them if unsure.
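For reference, these options go in the client options file. A hypothetical dsm.opt excerpt (do check the B/A client manual first, since the two options behave quite differently):

```
* Hypothetical dsm.opt excerpt - see the B/A client manual before using.
* SKIPACL YES skips ACL processing entirely (ACLs are not backed up);
* SKIPACLUPDATECHECK YES only skips the ACL comparison, so an ACL change
* alone won't trigger a re-backup of the file.
skipaclupdatecheck yes
```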

HTH
Ian Smith
Oxford University Computing Services. UK

-- Begin trace file excerpt --

06-11-2007 12:08:02.353 [016096] [4144462752] : incrdrv.cpp (14330):
Incremental attrib compare for: /blah/man/man1/vacuumdb.1
06-11-2007 12:08:02.353 [016096] [4144462752] : unxfilio.cpp(1556):
fioCmpAttribs: skipACL:'0', skipACLUpdateCheck
:'0'
06-11-2007 12:08:02.353 [016096] [4144462752] : unxfilio.cpp(1673):
fioCmpAttribs: Attribute comparison of two fil
es
Attribute        Old          New
---------        ---          ---
ctime            1191576650   1191576650
mtime            1187781793   1187781793
atime            1191576650   1192659176
File mode OCT    100644       100644
uid              0            0
gid              0            0
size             4172         4172
ACL size         0            0
ACL checksum     0            0
Xattr size       54
Xattr checksum   0            1188981555
inode            91373877     91373877
06-11-2007 12:08:02.353 [016096] [4144462752] : fileio.cpp  (4627):
fioCmpAttribs(): old attrib's data from build
(IBM TSM 5.4.1.2)
06-11-2007 12:08:02.353 [016096] [4144462752] : unxfilio.cpp(1765):
--ACL different: returning ATTRIBS_BACKUP
==

-- End trace file excerpt --

On Tuesday 11 Dec 2007 8:03 pm, Michael Glad wrote:
 I run a TSM 5.3.4.0 server. I recently encountered a problem where
 the 5.3.4.0 linux TSM client aborted every time it processed a given file
 system on one of my servers.

 Instead of upgrading to a newer 5.3.4 client level, as would probably have
 been the wisest thing to do, I promptly uninstalled it and installed
 a 5.4.1.5 client.

 Unfortunately, it seems that the 5.4.1.5 client wants to back up a lot of
 files that have already been backed up, possibly the entire server.
 As this server contains an 8-digit number of files, this is far from
 ideal.

 Does somebody have an idea of what's going on?

   -- Michael


Re: Orphaned filespaces

2007-04-05 Thread Ian Smith
Bill,

We too have this problem - the task of identifying 'old' or 'dead'
data is made considerably more difficult by the null start and end
dates on filespaces.

We get round this by querying the latest backup date of any object in
each client filespace, for each node, from the backups table - as below:

select filespace_name, max(backup_date) from backups -
where node_name='NODENAME' group by filespace_name

However, the backups table is the largest table in our view of the
TSM DB and the queries take a corresponding amount of time to run -
so run this at a 'quiet' time on your server.
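As a sketch, the query can be parameterised in a small wrapper. NODENAME is a placeholder, and the dsmadmc invocation is left commented out since connection flags vary by site:

```shell
# Build the "latest backup per filespace" query for one node
# (NODENAME is a placeholder - substitute the real node name)
node="NODENAME"
sql="select filespace_name, max(backup_date) from backups where node_name='${node}' group by filespace_name"
echo "$sql"
# Run it for real with something like:
#   dsmadmc -id=admin -password=secret "$sql"
```

Looping this over the output of 'query node' gives the whole-server picture, at the cost of one slow backups-table scan per node.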

Regards
Ian Smith
Oxford University Computing Services


On Wednesday 04 Apr 2007 4:28 pm, Bill Kelly wrote:
 An important little caveat to using the backup_start and/or backup_end
 columns in the filespaces table: these values are only updated when a
 full incremental backup is run.  From what I can tell, that means
 when:

 1) a client schedule with action=incremental runs, or
 2) in the GUI, Actions/Backup Domain is clicked, or
 3) a command line dsmc incremental is run (not a selective one,
 either - no paths can be specified)

 We have a bunch of people who don't like/don't want to run scheduled
 backups (don't ask me...we're a university), and who just do occasional
 backups of C: drives and such via the GUI.  These people's filespaces
 *always* show null backup_start and backup_end dates.  Richard, I
 suspect this is what's happening occasionally at your site.

 These null date fields are a real pain for me; there's no good way to
 determine whether such filespaces have been 'abandoned' (e.g., old PC
 goes away, new PC comes in and filespace name changes) or not.

 -Bill

 Bill Kelly
 Auburn University OIT
 334-844-9917

  Richard Rhodes [EMAIL PROTECTED] 4/4/2007 9:36 AM 

 I've come at orphaned filespaces from a little different angle.
 What I do is get a list of filespaces that have not
 been backed up in some number of days.  This helps for when
 filesystems
 on nodes get removed and the filespace with active files just sits in
 tsm .
 . . forever.

 This sql produces a list of filespaces that have not been backed up in
 7
 days.

 (note_1:  The check for '%ZZRT%' excludes retired nodes - we rename
 retired
 nodes to
 a zzrt_ prefix so they can be easily identified.)

 (note_2:  run from within a ksh script on aix)

 dsmadmc -se=$tsm -id=$adminid -password=$adminpwd -tab -noc <<EOD
   select -
   node_name, -
   filespace_name, -
   filespace_id, -
   unicode_filespace, -
   cast((current_timestamp-backup_end)days   as decimal(5,0)) -
from -
   filespaces -
where -
 node_name not like '%ZZRT%' -
 and cast((current_timestamp-backup_end)days  as decimal(5,0)) > 7 -
order by node_name, -
   filespace_id
 EOD



 I also run this . . .a variation of the above.  It attempts to find
 filespaces that have never been backed up.  Same notes from above
 apply.
 I'm not sure where/how these come about, but we seem to get a few.


 dsmadmc -se=$tsm -id=$adminid -password=$adminpwd  -tab -noc <<EOD
   select -
   node_name, -
   filespace_name, -
   filespace_id, -
   unicode_filespace, -
   backup_start, -
   backup_end -
from -
   filespaces -
where -
 node_name not like '%ZZRT%' -
 and ( backup_start is null -
  or backup_end is null )  -
order by node_name, -
   filespace_id
 EOD




Re: Migrating an HSM filesystem

2007-02-09 Thread Ian Smith
Anker,

I too am very afraid of HSM and always approach any HSM-related upgrade
( even an OS patch ) with a mixture of fear and loathing :)

By way of background: We moved our HSM data ( c 20 filesystems ) across
physical hosts _and_ from 5.2.x client level to 5.3.0.

We also kept the same client identity on the TSM server - and I'm
not clear what you will gain by renaming the client as you propose ?

Anyway, some general points:

First - the AIX client as of 5.3.0 is JFS2 only - the procedure for
migrating from JFS to JFS2 is documented in the TSM 5.3.0 AIX client
README_hsm_enu.htm file.
( Note that this procedure advises you to create the JFS2 FS and add
HSM management to it _before_ restoring the stub files. )

Second - I personally would not use scp to copy the files - I would
use dsmc to restore the (stub) files to the target filesystems.
If you keep all the operations within the tsm client then you reduce
the risk of accidental corruption by utilities 'outside' of TSM / HSM.

Third - Create a test FS with as much data as you can create and then
dry-run with this before messing with your live data.

Your challenge is ultimately to ensure that the migrated data transfers
successfully and uncorrupted. How do you ensure this ? Well, as a first
step _before_ migrating files on the source system, I would take
the md5sum signatures of a number of random files and save this to
a file. Then proceed as per the README_hsm_enu.htm file and at the end,
recall these files and compare their md5sum signatures to those pre-move.
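A minimal sketch of that checksum round-trip, using throwaway paths (/tmp/hsm-sample stands in for a sample of files on your HSM filesystem):

```shell
# Before the move: record checksums for a sample of files
mkdir -p /tmp/hsm-sample
echo 'payload' > /tmp/hsm-sample/file1
( cd /tmp/hsm-sample && md5sum file1 ) > /tmp/pre-move.md5

# After the move and recall: verify every sampled file against the list
( cd /tmp/hsm-sample && md5sum -c /tmp/pre-move.md5 )   # -> file1: OK
```

md5sum -c exits non-zero if any file differs, so the check is easy to script into the dry run suggested above.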

Our md5sums, unfortunately weren't the same ... because we ran into
APAR IC50103 ( to be fixed in TSM 5.3.5 ) - this occurs only when all
of the following occur:
a) a file is a sparse file with a hole ( or more than 1023 zero bytes )
at the end of the file
b) the file size is not block-aligned
c) the stub file has been restored via 'dsmc restore' using the options
RESToremigstate  Yes and MAKesparsefile Yes (the default).

Can you see why I have the fear now ?
Anyway, the easy way out of this is to add 'MakeSparseFile  No' to the
dsm.opt file.

The other gotcha comes when you finish: all your files will have a different
inode number. This does not normally occasion a fresh incremental backup of
the file with the standard TSM client - however, it does with the HSM client !
This won't occur until you run dsmreconcile across the filesystems, but when
you do, you will see that every single file is updated on the server. We are
told that this updates the inode info for the migrated copy but not the
backup copy. So once this has happened, the next backup will perform an
'inline' ( i.e. server-side only ) backup of each file from the migrated
stgpool to the backup stgpool - all 4TB in your case ( 6TB in ours ).
Apparently this is Working As Designed ..

Apart from that, it's easy - that README_hsm_enu.htm file is your friend.

HTH
Ian Smith
Oxford University Computing Services
England.




On Thursday 08 Feb 2007 9:51 pm, Anker Lerret wrote:
 We need to move an HSM-enabled filesystem from one TSM client node to
 another.  The old client node is a venerable S70 (Regatta class machine)
 running AIX 5.2 with a 5.2.2.0 TSM client.  The new client node will be
 running AIX 5.3 with (unless I hear recommendations to the contrary from
 the list) the latest TSM client compatible with the server (I think that
 will be 5.3.4.0.).  The TSM server is running AIX 5.3 with TSM 5.3.3.0.

 After some poking around in the list archives, I get the impression that
 we have two options.  The first is to setup the new TSM client and then
 copy the files across (scp, I suppose?).  This would cause each file to be
 recalled, transferred across the network and then re-migrated.  It would
 have the advantage of leaving a pristine copy on the old client, in case
 of disaster or even of questions about the veracity of the copy.  The
 disadvantage is that we're talking about 4 terabytes of data, so it could
 take a while.

 The second option is to issue RENAME NODE OLDCLIENT NEWCLIENT and restore
 the stub files on the new client.  Richard Sims outlines the technique in
 http://www.mail-archive.com/adsm-l@vm.marist.edu/msg56480.html .  Here's
 my understanding of what Richard proposes:

 Step 1: Unmount the HSM filesystem on OLDCLIENT, exposing the stub files.
 Step 2: RENAME NODE OLDCLIENT NEWCLIENT on the TSM server.
 Step 3: Use something (scp or tar) to move the stub files to NEWCLIENT.
 Step 4: Mount the HSM filesystem on NEWCLIENT.

 Do I have the steps right?
 Has anyone done this successfully?
 What problems did you have?

 One last note: We are deeply, almost superstitiously, afraid of HSM.  We
 have had numerous problems with HSM and have never felt comfortable with
 it.  Our only wholesale loss of TSM data involved HSM; our users still
 remember it and still have unkind feelings toward TSM because of it--and
 it happened five years ago.  We generally keep all of our software quite
 current, but you will notice that we

Re: Tape drive zones for FC drives - best practices

2007-02-07 Thread Ian Smith
I second what John wrote. We have never experienced a device taking out all
devices in a zone - because we went with the best practice of one adapter,
one target per zone from the beginning.
For clarity's and sanity's sake, use aliases for each device, host side and
target side, with a naming convention like 'host-fcs2' or 'switch2-rmt12';
create the zones using the alias names and roll everything up into a 'config'
that you enable (load into flash memory on the switch).
Thus, six months down the line, when an adapter or device fails and is
replaced and you are scratching your head at the schema you drew for
yourself on a scrap of A4, you just log in to the switch(es) and do:
 admin> alishow                           # list your device aliases
 admin> aliadd "aliasname", "new_WWN"     # add the new WWN
 admin> aliremove "aliasname", "old_WWN"  # remove the old WWN
 admin> cfgshow                           # show the new alias in the defined config
 admin> cfgsave                           # save the config to internal flash
 admin> cfgenable                         # enable the changed config
 admin> cfgactvshow                       # sanity check

HTH
Ian Smith
Oxford University Computing Services
England.


On Wednesday 07 Feb 2007 1:57 am, John Monahan wrote:
 It is best practice to put one initiator and one target in each zone.  It
 may seem cumbersome but it's really not that bad.  You'll be happy you did
 it if you ever have SAN problems down the road.  I have seen one device
 take out all other devices within the same zone before, more than once.
 Just pick a good naming convention for your zones so you can tell exactly
 what is in each zone just from the name.  I also prefer to use aliases so
 when you replace a HBA or tape drive you just update the alias with the
 new PWWN instead of going in and changing 20 different zones.


 ***Please note new address, phone number, and email below***
 __
 John Monahan
 Consultant
 Logicalis
 5500 Wayzata Blvd Suite 315
 Golden Valley, MN 55416
 Office: 763-417-0552
 Cell: 952-221-6938
 Fax:  952-833-0931
 [EMAIL PROTECTED]
 http://www.us.logicalis.com




 Schneider, John [EMAIL PROTECTED]
 Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
 02/06/2007 05:05 PM
 Please respond to
 ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU


 To
 ADSM-L@VM.MARIST.EDU
 cc

 Subject
 Tape drive zones for FC drives - best practices






 Greetings,
 My habit in regards to zoning FC tape drives has always been to
 put
 one host HBA in a zone with all the tape drives it should see, and to have
 a
 separate zone for each host HBA.  For example, in a situation with 2 host
 HBAs and 10 tape drives, I would have two zones, one with one host HBA and
 5
 tape drives, and the other with the other host HBA and 5 tape drives.
 Pretty simple.

 But an IBM consultant working here is telling me that the best
 practice is to have a separate zone for each HBA/tape drive pair.  So in
 my
 example above, I would have 20 zones instead of two.   His claim is that
 an
 individual tape drive can hang all the other drives if they are in the
 same
 zone, but not if they are in separate ones.  Has anyone seen this in real
 life?

 This becomes important to me because I am about to put in new SAN
 switches, and he wants me to follow this recommendation.  I have 2 TSM
 servers with 4 HBAs each, 4 NDMP nodes, and 14 tape drives.  Using my
 scheme, I would have 12 zones, with his scheme I would have 56 zones. That
 seems like a lot of zones, and unnecessarily cumbersome.

 Is it really necessary to isolate each HBA/Tape drive into a
 separate zone?  Do individual tape drives really hang other drives in
 their
 zone?

 Best Regards,

 John D. Schneider
 Sr. System Administrator - Storage
 Sisters of Mercy Health System
 3637 South Geyer Road
 St. Louis, MO.  63127
 Email:  [EMAIL PROTECTED]
 Office: 314-364-3150, Cell:  314-486-2359


Re: TSM 5.4 cautions

2007-01-29 Thread Ian Smith
FYI
One gotcha that has only recently come to light with the 5.4 client
for all Windows platforms:

Setting up a new Scheduler via the Setup Wizard deletes
your include/exclude options in the dsm.opt file.

This has just been raised as an APAR IC51657 but you won't find it
listed on the TSM site yet.

I haven't tested whether this occurs when running the command-line
equivalent dsmcutil to do the same. It only appears to happen when
setting up a _new_ service. That is, updating an existing service
does not have this problem.

Ian Smith
Oxford University Computing Services
England

On Friday 26 Jan 2007 7:33 pm, Richard Sims wrote:
 Look, before leaping into TSM 5.4 ...
 Cautionary/advisory Technotes are starting to appear for 5.4,
 as for example 1249083.  Do a search, and pore over the Readme
 file, before committing to an install.

Richard Sims


Re: J vs K 3590 Tapes

2004-02-24 Thread Ian Smith
David,

We have two copy stgpools, one on-site and one off-site. We
use K tapes for the primary and on-site copy stgpool and J tapes for the
off-site.
We have found the K tapes require much more careful handling - dropping one
on its edge will almost certainly cause the tape to come into contact
with the inside edge of the cartridge case - this being due to the reel of
tape sitting much closer to the edge, because there is twice as much of it
in a K tape as in a J tape.
The J tapes are comparatively much more robust and better able to stand the
carriage to the off-site storage.

Regards
---
Ian Smith
Oxford University Computing Services, Oxford, UK.
---






~Date: Mon, 23 Feb 2004 10:30:01 -0500
~From: David E Ehresman [EMAIL PROTECTED]
~Subject: J vs K 3590 Tapes
~To: [EMAIL PROTECTED]
~
~We are backing up to 3590 tapes.  We currently use K (extended length)
~tapes for onsite tape storage pool and J (standard length) tapes for
~offsite copy storage pool.
~
~We will soon start replacing some of our older J tapes and are trying
~to decide whether to replace them with Js or Ks.  Has anyone weighed the
~pros and cons of using Js vs Ks for offsite tapes.  Which did you decide
~on and why?
~
~David Ehresman
~University of Louisville


Re: dsmscoutd running too long

2004-02-04 Thread Ian Smith
Mark,

check out the Maxcandidates value for the HSM-managed filesystem
It is an option given to dsmmigfs and specifies the number of migration
candidates dsmscoutd looks for in any one pass over the filesystem.
The range of values is 9 - 9,999,999 (!!) with a default of 10,000 - which
sounds like it's too high for you.

Of course, by not migrating all the candidate files, you may find that
the occupancy of the filesystem rises to trip the automigration threshold.
This may or may not be what you require. You will have to monitor and reset
the migration parameters on this FS quite carefully, according to patterns
of use on it.
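A sketch of how that tuning might look. The option name follows this post and the values are illustrative assumptions, so verify the syntax against the dsmmigfs documentation for your client level before running anything:

```shell
# Illustrative only: lower the candidate-list size for one HSM filesystem.
# Option and filesystem names here are assumptions - check 'dsmmigfs help'
# on your client before using them.
cmd="dsmmigfs update -maxcandidates=1000 /hsmfs"
echo "$cmd"
# eval "$cmd"   # uncomment only on a real HSM client, after verifying syntax
```

Starting well below the 10,000 default and watching how long the scout pass takes is a reasonable way to find the sweet spot.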

HTH
---
Ian Smith
Oxford University Computing Services, Oxford, UK.
---


~
~Does anyone have any suggestions that would reduce the time it takes the scout
daemon to produce a candidates list on a HSM file system that contains a large
number of files ?
~
~We currently have the candidatesinterval option at 12, however it takes
approximately 8 to 10 hours for the scout process to complete. So in effect, the
scoutd process is just about running all of the time. Other Details below:
~
~Client OS: AIX 4.3.3
~TSM Client Version: 5.1
~Server OS: AIX 5.1
~The HSM managed file system has approximately 75 files all under a single
directory.
~
~
~
~
~
~
~Thanks,
~Mark.


Re: 2 fibre interfaces per drive

2003-11-18 Thread Ian Smith
A recent version of Atape is required for dual path support. We use
Atape 8.3.1.0.

You also need to enable alternative pathing support on the rmt device,
which I think is set to off by default:
i.e. chdev -l <device> -a alt_pathing=yes
Then when you make the devices available to the host you will notice
twice the number of drives. If you look at the device attributes with
lsattr -El rmtx
you will notice the location code xx-xx-xx-ALT or PRI ,
the primary device value and the WWN. The latter will be 0xnn4n for
the primary device and 0xnn8n for the alternative device.
These allow you to 'pair' up the devices.

If you have more than 3 or 4 tape devices I would recommend you apply a
naming convention to the devices to make these pairs more transparent, by
renaming them after cfgmgr has brought them into the system.
Thus, we do a rmdev -l rmtX ; chdev -l rmtX -a new_name=rmtY
such that the secondary / Alternative device name is 50 greater than primary
device name  - i.e:
rmt2 -> rmt52
rmt3 -> rmt53   and so on.
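The +50 pairing convention can be captured in a tiny helper. This is a sketch that only computes names; it does not touch any devices:

```shell
# Map a primary device name to its alternate-path partner name,
# following the rmtN -> rmt(N+50) convention described above.
alt_name() {
  n=${1#rmt}                 # strip the "rmt" prefix to get the number
  echo "rmt$((n + 50))"
}

alt_name rmt2    # -> rmt52
alt_name rmt3    # -> rmt53
```

Such a helper keeps scripted rmdev/chdev renames consistent across all drives.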

Note that errpt will log errors against both device names, depending on which
interface the drive was being accessed when the error occurred.

Finally, our experience tallies with Steve Harris' - failover is seamless
- we pulled the active cable from a drive while backing up (test !) client
data to tape and the backup carried on over the other path. TSM didn't blink.
We subsequently audited the tape and all was OK.

Regards
---
Ian Smith
Oxford University Computing Services, Oxford, UK.
---





~Date: Tue, 18 Nov 2003 08:48:41 +1000
~From: Steve Harris [EMAIL PROTECTED]
~Subject: Re: 2 fibre interfaces per drive
~To: [EMAIL PROTECTED]
~
~Joni,
~
~Later versions of the Atape driver support dual pathing and failover.
~Once set up, AIX defines two RMTn drives for each physical drive, one with a
device path ending in -PRI and the other in -ALT.
~
~As I understand it rudimentary load balancing is done - when the tape is
opened, the least busy path is used.
~
~Failover is also neat.  I was running the tapeutil test command and
deliberately varied off the adapter in use.  The io just picked up where it left
off on the other path.
~
~See the Totalstorage Tape Installation and Users Guide, available from
ftp://service.boulder.ibm.com  for details
~
~Steve Harris
~AIX and TSM Admin
~Queensland Health, Brisbane Australia
~
~ [EMAIL PROTECTED] 18/11/2003 7:13:13 
~IBM 3590 and 3592 drives have 2 fibre interfaces.  I am unsure how to
~manage the multipathing yet, but we are going to look at configuring our
~3590H drives that way next year to eliminate that single point of
~failure.  Our TSM server is running AIX 5.2ML2 which is connected to a
~3494 library with 4 3590H drives which are connected to a Brocade 2800.
~
~
~Julian Armendariz
~System Engineer - UNIX
~H.B. Fuller
~(651) 236-4043
~
~
~
~ [EMAIL PROTECTED] 11/17/03 12:48PM 
~Hi everyone!
~
~We are in the process of formulating an AIX environment instead of our
~current MVS TSM environment.  We would have 2 TSM servers (one being
~the
~library manager) with 2 directors, 8 lan-free clients and 24 tape
~drives,
~which we hope to share between the 2 TSM servers.  We would like to
~have 2
~fibre interfaces per drive in order to eliminate a single point of
~failure.
~My question is, how does TSM know which path it is using to the drive
~and
~how does it know that one of the paths to an individual drive is
~already in
~use?  Is there software out there to manage this or is it done through
~the
~hardware configuration?  If anyone has any suggestions I would
~appreciate
~it!  Thanks!
~
~
~***
~Joni Moyer
~Highmark
~Storage Systems
~Work:(717)302-6603 **New as of 11/1/03
~Fax:(717)302-5974
~[EMAIL PROTECTED]
~***
~
~
~


Re: From TIVsm 4.2.1.8 to TIVsm 5.1.6.1 on IBM H70, AIX 4.3.3

2003-02-10 Thread Ian Smith
Peter,

On an H70 AIX 4.3.3 we tried to upgrade as follows:
4.2.1.9 -> 5.1.5.0 -> 5.1.6.0

However, the upgradedb hung at 5.1.5.0 three times.
Conversations with TSM Support said this was a known problem - something to do 
with hanging while examining the Groups table for orphaned System Objects.
In the meantime we installed 5.1.6.0 ( without ever getting the server
up at 5.1.5.0 ) and this was fine. We got a message as below:
 
 ANRD iminit.c(1540): ThreadId0 Started conversion of Groups table at 103 2 3 9 12 6

 then

ANRD iminit.c(1826): ThreadId0 Completed conversion of Groups table at 103 2 3 9 19 33

some minutes later. At our server level we had missed the (well-documented)
SYSTEM OBJECTS problem introduced at 4.2.2.? . I would guess that if your DB
did contain orphaned SYSTEM OBJECTS then the conversion above might take
longer. Even with a 'clean' DB of 102GB this took several minutes - so have
patience !

Subsequently we were advised that 5.1.6.1 was a better level and so upgraded
to that.

Tomorrow we are upgrading our other two production TSM servers and we plan 
to go:

4.2.1.9 -> 5.1.6.0 -> 5.1.6.1

HTH
---
Ian Smith  
Oxford University Computing Services, Oxford, UK.
---






~Date: Fri, 7 Feb 2003 16:54:53 +0100
~From: Peter Duempert [EMAIL PROTECTED]
~Subject: From TIVsm 4.2.1.8 to TIVsm 5.1.6.1 on IBM H70, AIX 4.3.3
~To: [EMAIL PROTECTED]
~
~Hi *SM-ers,
~we want to do the above mentioned migration.
~
~Mig-1:  4.2.1.8 --> 5.1.0.0 --> 5.1.6.0 --> 5.1.6.1
~
~Mig-2   4.2.1.8 --> 5.1.5.0 --> 5.1.6.0 --> 5.1.6.1
~
~Q: Which one would You prefer ?
~-- 
~MfG / Ciao - - - - - - - - - - - - - - - - - - - - - - - - - - - -
~Peter Dümpert   Email: [EMAIL PROTECTED]
~Rechenzentrum der Technischen Universität   Fax  : ++49/531/391-5549
~D 38092 BraunschweigTel  : ++49/531/391-5535



Re: Label new 3590 cartridge

2002-07-12 Thread Ian Smith

Thomas,

If you read along the line of SENSE data you will see
FE.0A.31.18.xx.xx.xx.

The FE is the FID (FRU identification number);
3118 is the first fault symptom code -
see the 3590 Maintenance manual ( SA37-0301-04 ).

IBM Hardware support informed us that codes such as
FE 3118
FE 3117
FE 301C
FE 3542
FE 3A5B
were errors writing the servo-track along the tape.
You will need a new tape - send this one back under its
warranty.

We experienced this mainly with the early use of 256 track
3590-E1A drives ( along with early microcode ). We still see
this occasionally.

HTH
Ian Smith
---
Ian Smith
Oxford University Computing Services, Oxford, UK.
---





~Date: Fri, 12 Jul 2002 15:32:31 +0200
~From: Rupp Thomas (Illwerke) [EMAIL PROTECTED]
~Subject: Label new 3590 cartridge
~To: [EMAIL PROTECTED]
~
~Dear (I)*SM-ers,
~
~a defect 3590 extended length cartridge was replaces by IBM.
~When I try to label this cartridge for our 3494 library with
~LABEL libvolume lib3494 010251 checkin=scratch devtype=3590 I get the
~following errors:
~
~07/11/02 17:23:48 ANR2017I Administrator SYSA issued command: LABEL
~libvolume lib3494 010251 checkin=scratch devtype=3590
~07/11/02 17:23:48 ANR0984I Process 16 for LABEL LIBVOLUME started in the
~BACKGROUND at 17:23:48.
~07/11/02 17:23:48 ANR8799I LABEL LIBVOLUME: Operation for library LIB3494
~started as process 16.
~07/11/02 17:23:48 ANR0609I LABEL LIBVOLUME started as process 16.
~07/11/02 17:31:44 ANR8302E I/O error on drive DRIVE2 (/dev/rmt2) (OP=WEOF,
~Error Number=110, CC=0, KEY=03, ASC=09, ASCQ=00,
~SENSE=F0.00.03.00.00.00.04.58.00.00.00.00.09.00.FE.0A.3-
~1.18.50.00.00.03.01.31.08.0A.42.48.22.80.10.00.10.35.42-
~.12.01.33.12.00.80.33.54.80.06.00.00.00.00.04.00.00.00.-
~00.00.00.00.80.00.00.00.00.00.00.01.00.00.00.3C.00.20.0-
~0.32.32.45.20.20.20.20.00.A0.00.4B.F0.F1.F0.F2.F5.F1.00-
~.00.00.00.00.00.00, Description=An undetermined error has
~occurred). Refer to Appendix D in the 'Messages' manual for recommended
~action.
~07/11/02 17:31:44 ANR8806E Could not write volume label 010251 on the tape
~in library LIB3494.
~07/11/02 17:34:09 ANR8802E LABEL LIBVOLUME process 16 for library LIB3494
~failed.
~07/11/02 17:34:09 ANR0985I Process 16 for LABEL LIBVOLUME running in the
~BACKGROUND completed with completion state FAILURE at 17:34:09.
~
~KEY=03 means: medium error
~ASC=09 and ASCQ=00 mean: Track following error.
~
~Does this mean that the new cartridge is defect as well?
~
~Kind regards
~Thomas Rupp
~Vorarlberger Illwerke AG
~MAIL:   [EMAIL PROTECTED]
~TEL:++43/5574/4991-251
~FAX:++43/5574/4991-820-8251
~
~
~




Re: a client could not backup?

2002-04-23 Thread Ian Smith

Julie,

Win32 Return Code 123 means 'The filename, directory name or volume label
syntax is incorrect'.

If this is happening at the start of the backup I would guess that the
current partition doesn't have a label - this was an issue with ADSM v3
Windows clients - you have to label the volume before ADSM can back it up.
Does this seem possible ? Otherwise, check your folder names for something
with a nasty character.

---
Ian Smith
Oxford University Computing Services, Oxford, UK.
---





~Date: Tue, 23 Apr 2002 11:18:24 +0200
~From: Loon, E.J. van - SPLXM [EMAIL PROTECTED]
~Subject: Re: a client could not backup?
~To: [EMAIL PROTECTED]
~
~Hi Julie!
~I'm pretty sure it's a security issue on the specific drive's root.
~Check for which drive this error is reported.
~Is you scheduler using the SYSTEM account? Than is most likely that the
~SYSTEM account has no access on the drive's root. So, if it's drive c: that
~can't be backed up, check the security settings on c: and give the SYSTEM
~account full control access.
~Kindest regards,
~Eric van Loon
~KLM Royal Dutch Airlines
~
~
~-Original Message-
~From: Julie Xu [mailto:[EMAIL PROTECTED]]
~Sent: Tuesday, April 23, 2002 02:45
~To: [EMAIL PROTECTED]
~Subject: a client could not backup?
~
~
~There is a problem with one of our client. we have reinstalled the client
~and we could not find the problem.
~
~The following is the shedule error log in the client:
~04/22/2002 14:58:37 TransWin32RC(): Win32 RC 123 from fioScanDirEntry():
~getFileSecuritySize
~04/22/2002 14:59:11 TransWin32RC(): Win32 RC 123 from fioScanDirEntry():
~getFileSecuritySize
~04/22/2002 16:51:03 ntConsoleEventHandler(): Caught Logoff console event .
~04/22/2002 16:51:03 ntConsoleEventHandler(): Process Detached.
~04/23/2002 03:25:31 TcpRead: Zero byte buffer read.
~04/23/2002 03:25:31 sessRecvVerb: Error -50 from call to 'readRtn'.
~04/23/2002 03:25:31 ANS1017E Session rejected: TCP/IP connection failure
~04/23/2002 03:25:31 ANS1017E Session rejected: TCP/IP connection failure
~
~04/23/2002 09:06:32 TcpRead: Zero byte buffer read.
~04/23/2002 09:06:32 sessRecvVerb: Error -50 from call to 'readRtn'.
~04/23/2002 09:06:32 ANS1017E Session rejected: TCP/IP connection failure
~04/23/2002 09:06:32 ANS1017E Session rejected: TCP/IP connection failure
~
~04/23/2002 09:46:47 TcpRead: Zero byte buffer read.
~04/23/2002 09:46:47 sessRecvVerb: Error -50 from call to 'readRtn'.
~04/23/2002 09:46:47 ANS1017E Session rejected: TCP/IP connection failure
~04/23/2002 09:46:47 ANS1017E Session rejected: TCP/IP connection failure
~
~04/23/2002 09:46:48 ANS1512E Scheduled event 'BACKUP' failed.  Return code
~= 1.
~04/23/2002 09:46:48 cuSignOnResp: Server rejected session; result code: 55
~04/23/2002 09:46:48 sessOpen: Error 55 receiving SignOnResp verb from server
~04/23/2002 09:46:53 ntConsoleEventHandler(): Caught Logoff console event .
~04/23/2002 09:46:53 ntConsoleEventHandler(): Process Detached.
~
~This client has been retried to backup many times until I cancel the
~session on server.
~
~Any comments will be appreciated
~
~Thanks in advance
~
~
~
~Julie Xu
~
~Unix/Network Administrator
~Information Technology Directorate
~University of Western Sydney, Campbelltown
~Campbelltown NSW 2560
~
~Phone: 61 02 4620-3098
~Mobile: 0416 179 868
~Email: [EMAIL PROTECTED]
~
~



Re: connection retry from client.

2002-02-07 Thread Ian Smith

Finn,

I don't know about manual starts of the client, but for scheduled events
the parameters MAXCMDRETRIES and RETRYPERIOD work together to
control the number of times a client scheduler attempts to contact the server
if the server is not responding - for whatever reason - at the allotted
scheduled time. The client establishes that a connection cannot be made (this
in itself may take some time, depending on O/S and network setup), then counts
down RETRYPERIOD minutes and attempts to reconnect; if that fails,
it counts down RETRYPERIOD minutes and attempts again, up to
MAXCMDRETRIES times. Note that these parameters only manage the behaviour
of client schedules that have not yet started to back up, or schedules that
have completed the backup and are attempting to report the results back to
the server. Note also that these parameters can be set at the server end
as well, and if so they override the settings at the client end.
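
The countdown arithmetic above can be sketched quickly - a minimal
illustration, assuming evenly spaced retries and ignoring the time taken to
establish that a connection cannot be made:

```python
def retry_attempt_times(maxcmdretries, retryperiod):
    """Minutes after the missed contact at which each retry occurs.

    Sketch only: assumes retries are exactly RETRYPERIOD minutes apart
    and ignores connection-timeout delays.
    """
    return [retryperiod * n for n in range(1, maxcmdretries + 1)]

# e.g. MAXCMDRETRIES 4, RETRYPERIOD 20 -> retries at 20, 40, 60 and 80 minutes
print(retry_attempt_times(4, 20))
```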

If a scheduled session has already started and is interrupted - for example
by a network outage - the COMMRESTARTDURATION and COMMRESTARTINTERVAL
parameters specify the number of minutes for which, and the interval in
seconds at which, the client should attempt to reconnect. These are set
just at the client end.

Note that the operation of these parameters can mean that a scheduled
backup takes place *outside* of the scheduled backup window at the
server - something we've found difficult to handle. The 4.1 client
documentation does say that the schedule fails if a communication
failure occurs and the client cannot reconnect with the server before the
startup window for the schedule ends. However, this is wrong - the schedule
status in the events table now gets set to restarted at the server and the
schedule will complete - this is documented by PC44954 and APARs IC31350 and
IC31464.

This behaviour can be a complete pain if, for example, your server hangs
midway through a bunch of scheduled client backup sessions: you
restart the server and probably want to do something to redress the
conditions that caused the hang - maybe a BACKUP DB or a MIGRATION -
however, clients retrying and reconnecting to start or restart their backups
can seriously interfere with this. The solution is to set
MAXCMDRETRIES to zero, or to set MAXCMDRETRIES and RETRYPERIOD to something
like 0 and 720 - this will give you 12 hours free of client retries - and to
set COMMRESTARTDURATION to something small.
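
For reference, the client-side settings discussed above might look like this
in dsm.opt - a sketch only, so check option names, placement and defaults
against the B/A client manual for your version (and remember the first two
can be overridden from the server):

```
* Sketch: quieten client retries after a server restart
MAXCMDRETRIES        0
* ... or keep retries but push them 12 hours apart:
* RETRYPERIOD        720
* Give up quickly on reconnecting an interrupted session
COMMRESTARTDURATION  1
COMMRESTARTINTERVAL  15
```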

HTH
---
Ian Smith
Oxford University Computing Services, Oxford, UK.
---







~MIME-Version: 1.0
~Date: Thu, 7 Feb 2002 11:23:45 +0100
~From: Leijnse, Finn F SITI-ITDSES31 [EMAIL PROTECTED]
~Subject: connection retry from client.
~To: [EMAIL PROTECTED]
~
~Hi all,
~
~I was wondering what the setting in the dsm.opt is to make a client retry
~its connection if, for example, one of the two IP addresses used to connect
~to the TSM server fails.
~
~ met vriendelijke groeten, regards et salutations,
~
~ Finn Leijnse
~ ISES/31 - Central Data Storage Management
~ Shell Services International bv.
~



Re: Ejecting a Tape (Category FF00) from 3494 Library

2002-01-16 Thread Ian Smith

What the CE should have done, and what you can do now, is put the tape
in the 'recovery slot', which is the top-left slot in
the rack facing the input door (check your 3494 manual if this isn't clear).
So pause the robot, open the I/O door and put the tape in there.
Close the door and set the robot back to auto. You'll notice that about
the second thing the gripper does is go and check the recovery slot.
Eventually, after a couple more checks, it will retrieve the tape from this
slot and put it back into its original slot.
Once this is done you can eject the tape via

mtlib -l/dev/lmcp0 -C -VS0 -s FF00 -t FF10

and the tape should come out.

I wouldn't do a full inventory unless you have to, as this takes time
and means taking the Library Manager offline.

HTH


~MIME-Version: 1.0
~X-MTHubFilter-1.5: mail-srv1
~Date: Wed, 16 Jan 2002 11:33:42 +0800
~From: mobeenm [EMAIL PROTECTED]
~Subject: Ejecting a Tape (Category FF00) from 3494 Library
~To: [EMAIL PROTECTED]
~
~Hello All,
~I am running a 3494 library with 3590 E1A tape drives. Recently I had an
~issue with a tape drive, and the IBM CE who came on site physically removed
~a damaged volume from the library; however, he doesn't know how to get this
~tape removed from the inventory, as he thinks that's a separate issue.
~
~Now the following are the details of my problem
~
~1. The tape S0 was deemed damaged by the IBM CE and was physically
~removed from the tape library. So there is no S0 in the
~library.
~2. When I use the mtlib command and query for this volume, the following are
~the details:
~root:#mtlib -l/dev/lmcp0 -q V -VS0
~   Volume Data:
~   volume state.Volume present in Library, but Inaccessible
~   logical volume...No
~   volume class.3590 1/2 inch cartridge tape
~   volume type..HPCT 320m nominal length
~   volser...S0
~   category.FF00
~   subsystem affinity...01 02 03 04 05 06 00 00
~00 00 00 00 00 00 00 00
~00 00 00 00 00 00 00 00
~00 00 00 00 00 00 00 00
~3. I tried to change the CATEGORY for this volume from FF00 to FF10, FFFA
~and FFFB and failed. The following are the details
~
~root:#mtlib -l/dev/lmcp0 -C -VS0 -s FF00 -t FF10
~  Change Category  operation Failed, ERPA code - 75,  Library VOLSER
~Inaccessible.
~
~root:#mtlib -l/dev/lmcp0 -C -VS0 -s FF00 -t FFFA
~  Change Category  operation Failed, ERPA code - 27,  Command
~Reject.
~  Subcode - 23,
~
~root:#mtlib -l/dev/lmcp0 -C -VS0 -s FF00 -t FFFB
~  Change Category  operation Failed, ERPA code - 27,  Command
~Reject.
~  Subcode - 43,
~
~
~I would like to remove this volume from my inventory. I would appreciate it
~if anyone can help me determine what the problem is. I tried several postings
~from ADSM.ORG, and when I try to do the same I get the errors shown above.
~
~With warm regards
~Mobeen



Re: Strange 3590/3494-volume-behavior..?

2002-01-08 Thread Ian Smith

Tom,

Did the checkout / checkin libvol work ?
have you tried a checkout using mtlib -l /dev/lmcp0 -C -tFF10 -Vvol ?
Which cell does the tape library think the tape is in ( via the
Search database option from one of the menus on the library PC ) - is
the tape actually in that cell ?
We had a similar - if not identical - problem to this, when the gripper
had trouble turning 180 degrees to load / unload tapes. The library seemed
to think the tapes were still in the gripper (or the mouth of the tape
drive) and thus refused to recognize any operation on the tape, stating it
was inaccessible. Repeated use of the
above mtlib command to manually change the category of the tape
to Eject (FF10) and back to Insert (FF00) seemed to fix it.
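
For anyone scripting this, a tiny wrapper that just builds the mtlib argument
list (a hypothetical helper - the flags mirror the invocations shown in this
thread) lets you inspect the command before touching the library:

```python
def mtlib_change_category(volser, from_cat, to_cat, lmcp="/dev/lmcp0"):
    """Build the mtlib change-category command line used in this thread.

    Hypothetical convenience wrapper: returns the argument list so it can
    be checked (or handed to subprocess.run) before the library is touched.
    """
    return ["mtlib", f"-l{lmcp}", "-C", f"-V{volser}", "-s", from_cat, "-t", to_cat]

# Eject (FF00 -> FF10), as in the examples above:
cmd = mtlib_change_category("ORA052", "FF00", "FF10")
print(" ".join(cmd))
# On a host with the library attached: subprocess.run(cmd, check=True)
```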

HTH
Ian Smith
---
Ian Smithemail: [EMAIL PROTECTED]
Oxford University Computing Services, Oxford, UK.
---







~X-Sender: [EMAIL PROTECTED]
~MIME-Version: 1.0
~Date: Mon, 7 Jan 2002 17:21:13 +0100
~From: Tom Tann{s [EMAIL PROTECTED]
~Subject: Strange 3590/3494-volume-behavior..?
~To: [EMAIL PROTECTED]
~
~Hello TSM'ers!
~
~I have a problem with a volume in my 3494-library..
~
~Tsm-server is 4.2.1.7, AIX.
~
~I discovered the problem a few days ago, when trying to move data from the
~volume to another stg-pool.
~TSM insists that the volume is inaccessible.
~An audit also results in the same:
~
~01/07/2002 16:57:05  ANR2321W Audit volume process terminated for volume
ORA052
~  - storage media inaccessible.
~
~The volume was dropped by the gripper a week or so ago, but it was
~re-entered via the recovery-cell. I have inventoried the frame, audited
~the library on the tsm-server. A manual mount,
~mtlib -l /dev/lmcp0 -m -f/dev/rmt1 -VORA052 /mtlib -l /dev/lmcp0 -d
-f/dev/rmt1
~work fine.
~I've even done checkout/checkin libvol.
~
~But now I'm stuck...
~
~ANy suggestions on where to look/what to try, would be appreciated...
~
~ Tom
~
~
~
~tsm: SUMOq libvol 3494 ORA052
~
~Library Name   Volume Name   Status   OwnerLast UseHome
Element
~   ---   --   --   -

~3494   ORA052Private
~
~sumo# mtlib -l /dev/lmcp0 -qV -VORA052
~Volume Data:
~   volume state.00
~   logical volume...No
~   volume class.3590 1/2 inch cartridge tape
~   volume type..HPCT 320m nominal length
~   volser...ORA052
~   category.012C
~   subsystem affinity...03 04 01 02 00 00 00 00
~00 00 00 00 00 00 00 00
~00 00 00 00 00 00 00 00
~00 00 00 00 00 00 00 00
~
~tsm: SUMOq vol ora052 f=d
~
~   Volume Name: ORA052
~ Storage Pool Name: ORATAPE
~ Device Class Name: BCKTAPE
~   Estimated Capacity (MB): 40,960.0
~  Pct Util: 42.1
~ Volume Status: Filling
~Access: Read/Write
~Pct. Reclaimable Space: 0.0
~   Scratch Volume?: No
~   In Error State?: No
~  Number of Writable Sides: 1
~   Number of Times Mounted: 35
~ Write Pass Number: 2
~ Approx. Date Last Written: 12/24/2001 05:29:05
~Approx. Date Last Read: 12/11/2001 04:50:20
~   Date Became Pending:
~Number of Write Errors: 0
~ Number of Read Errors: 0
~   Volume Location:
~Last Update by (administrator): TOM
~ Last Update Date/Time: 01/07/2002 16:49:34



linux client backup of Windows vfat partition

2001-10-05 Thread Ian Smith

Hi,

We have a client having problems backing up the Windows partition
with the Linux client.

Specifically; linux client is 4.1.2.99 ( the fixtest that cured the
'special' characters bug )
With the win partition mounted as type vfat the dsmerror.log fills with

ANS1228E Sending of object '/c/blah/blah' failed
ANS4037E File '/c/blah/blah' changed during processing.  File skipped.

Yet these files are not being accessed.

Can someone tell me if vfat is supported by the Linux client, and/or whether
anyone has seen this before?

A workaround has been to mount the partition readonly, but I'm curious
whether anyone else has seen this from their users.

Thanks
Ian Smith

Ian Smith - HFS Backup / Archive Services
Oxford University Computing Services, Oxford, England.




Re: Include/Exclude needed for Mac?

2001-07-27 Thread Ian Smith

Wanda,

Here is our default list that we install on all Mac clients.
It's almost certainly not exhaustive, and I'd be interested
in any comments and/or further additions to the exclude list.
A point of note - the weird Ä character is the ASCII representation
of CHR(196) and on the Mac looks like a leant-over f (I'm told it's
the florin character).
I too am no Mac guru, but am told that a wildcarded exclusion of all
Cache folders is not advisable, as some Cache folders on the Mac hold
important app/user data.

Exclude ...:Desktop DB
Exclude ...:Desktop DF
Exclude ...:Desktop
Exclude ...:Trash:...:*
Exclude ...:Wastebasket:...:*
Exclude ...:VM Storage
Exclude ...:Norton FileSaver Data
Exclude ...:Norton VolumeSaver Data
Exclude ...:Norton VolumeSaver Index
Exclude.dir...:System Folder:Preferences:cache-cache
Exclude.dir...:System Folder:Preferences:Netscape Users:...:Cache Ä
Exclude.dir...:System Folder:Preferences:Netscape Ä:Cache Ä
Exclude.dir...:System Folder:Preferences:Explorer:Temporary Files
Exclude ...:Temporary Items:...:*
Exclude ...:...:TheFindByContentIndex
Exclude ...:aaa?*
Exclude ...:...:TSM Sched*
Exclude ...:...:TSM Error*

HTH
Ian Smith

Ian Smith - HFS Backup / Archive Services   
Oxford University Computing Services








MIME-Version: 1.0
Date: Thu, 26 Jul 2001 17:31:34 -0400
From: Prather, Wanda [EMAIL PROTECTED]
Subject: Include/Exclude needed for Mac?
To: [EMAIL PROTECTED]

I am NOT Mac literate -
Would someone kindly share with me their INCLUDE/EXCLUDE list for Mac
clients?

There is a small set in the Using the Mac Clients book, but it looks to me
like there are other directories that can be excluded, like Netscape cache?
or the Spool Folder? Temporary Items?

Thanks!



Re: mac client restores failing + mountret

2001-05-11 Thread Ian Smith

Joe,

Testing the V4.1 Mac client, we came up against a similar problem
- the client established, then closed, the connection. Tivoli logged
this under an already-existing APAR, IC27847, which states:

  Data backed up with the 3.1 client can not be restored with the
   3.7 client.  Client reports 'Error reading data.'  With a
   Return code of 268435556.

But the real problem was that the data channel could not handle a
tcp_ip WAIT signal when the server tried to restore from tape.

Anyway, the problem was corrected in the latest 4.1 patch for Mac,
4.1.2.15 - IP22153_15.hqx. It doesn't look as though this was
corrected at 3.7, though :(.

Regards
Ian Smith

Ian Smith - HFS Backup / Archive Services
Oxford University Computing Services



MIME-Version: 1.0
Date: Thu, 10 May 2001 17:53:03 -0700
From: Joe Faracchio [EMAIL PROTECTED]
Subject: mac client restores failing + mountret
Comments: cc: Joe Faracchio [EMAIL PROTECTED]
To: [EMAIL PROTECTED]

I just had a weird problem brought to my attention and it got weirder.
I plan to call this in to IBM support tomorrow. ASAP.

A user called to say he was trying to do a restore on a  Mac and it
failed with connecting to server.  Suspecting the mac restore problem
I said try a new folder backup to see if you can connect. It worked.

I was about to say: sounds like the bad mac client ... upgrad ..
When they said that a subsequent restore attempt started to work.
Kinda.  It was looping on restore requests, giving me 6 mount requests
in the queue.

I changed mountret to something greater than zero and it restored the
file!

This shouldn't be!  That you can have mountret at zero and it causes
all restore requests to fail on timeout of mount wait.

There's more than one problem here?  Any experience like this?
Suggestions?

thanks  joe.f.


Joseph A Faracchio,  Systems Programmer, UC Berkeley
Private mail on any topic should be directed to :
   [EMAIL PROTECTED]



Win 2000 Client ANS1075E - Program memory exhausted

2000-11-13 Thread Ian Smith

A client testing the 3.7.2.18 Win client on a machine running
Win 2000 SP1 with 128Mb RAM is getting the following error message
when performing an incremental backup

ANS1075E *** Program memory exhausted ***

in the dsmerror.log. The backup terminates.

This client is backing up a large number of files - 10's of thousands -
but he assures me that the performance monitor indicates TSM is using
40Mb of memory, that 85Mb is committed and that the machine has 128Mb
installed, so the memory-exhausted message seems spurious.
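
A back-of-envelope check supports that reading - the per-file figure below is
purely an assumed placeholder, not a documented value:

```python
def estimated_memory_mb(num_files, bytes_per_file_entry):
    """Rough RAM needed if the client holds one entry per file in memory.

    bytes_per_file_entry is an assumption; the real footprint depends on
    path lengths, client version and platform.
    """
    return num_files * bytes_per_file_entry / (1024 * 1024)

# Tens of thousands of files at a few hundred bytes each is well under
# the ~40Mb the performance monitor reported:
print(round(estimated_memory_mb(50_000, 500), 1))
```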

I have heard of restore failures on large filesystems - indeed I believe
the new 3.7.2.19 Win client addresses such an issue - but I haven't
seen such a memory problem with backup.

Has anyone come across this with the recent Win clients ?

Thanks
Ian Smith

Ian Smith
Oxford University Computing Services