Re: nfstimeout on server ISILON storage

2018-09-12 Thread Grant Street
> did a test copy of a large file to the NFS mount, they were getting
> upwards of 8G/s vs 1.5-3G/s when TSM/ISP writes to it (via EMC monitoring tools).


Re: nfstimeout on server ISILON storage

2018-09-12 Thread Grant Street
If anyone is getting slow backup performance, then either:
 - you are not using it right, or
 - a NAS is not right for your workflow.

We archive 50+ TB of source data per day, written to both onsite and offsite
tape.  We have a method of scaling that further, but it is sufficient for us at
the moment.  Our biggest bottleneck is the TSM database, not how fast we can
get the data off the storage.

Slight corrections:
- "each client can only talk to one Isilon node PER MOUNT" (see the mount
sketch below)
- NFS and Isilon are not slow, nor are they inherently sequential
- It is the lack of scalable multithreading in the TSM agent that makes it slow
and cumbersome, not Isilon or NFS
- It is the lack of snapshot/snapdiff-aware backups in the TSM agent that makes
complete backups happen in an "inefficient way"
- Isilon is a scalable NAS that can be very fast. Being a NAS, it is bound by
the latencies of TCP networking. If you're after storage that is faster than
what network speeds/throughputs can provide, you should be looking at other
storage solutions.
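
To illustrate the PER MOUNT point: a rough sketch of how multiple mounts plus a
FILE device class can spread storage-pool I/O across Isilon nodes. Everything
named here (hostnames, paths, device class, credentials, mount options) is
invented, so treat it as a starting point only:

# /etc/fstab - each mount gets its own TCP connection, and SmartConnect can hand
# each one a different Isilon node
isilon.example.com:/ifs/tsm/pool01  /tsm/pool01  nfs  rw,hard,vers=3,rsize=1048576,wsize=1048576  0 0
isilon.example.com:/ifs/tsm/pool02  /tsm/pool02  nfs  rw,hard,vers=3,rsize=1048576,wsize=1048576  0 0
isilon.example.com:/ifs/tsm/pool03  /tsm/pool03  nfs  rw,hard,vers=3,rsize=1048576,wsize=1048576  0 0

# point a FILE device class at all of the mounts so volumes are spread across them
dsmadmc -id=admin -password=XXXXX \
  "define devclass isilon_file devtype=file mountlimit=32 maxcapacity=50G \
   directory=/tsm/pool01,/tsm/pool02,/tsm/pool03"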

If anyone would like further clarification on these points, I'm only too happy
to provide more information or share our experience.

Grant


From: ADSM: Dist Stor Manager  on behalf of Frank Kraemer 

Sent: Friday, 7 September 2018 7:50 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] nfstimeout on server ISILON storage

Isilon = Slow Performance

- Although there is a parallel filesystem inside (OneFS), each client node can
only talk to a single Isilon node using standard NAS protocols; that node then
performs parallel I/O across the internal high-speed IB network to the other
storage nodes.

- NFS client nodes (= the TSM server) have to use slow, non-parallel data access
over Ethernet to the Isilon. NFS v3 is technology from 1986, designed with the
networks of that time in mind.

- No direct client IB or RDMA-enabled high-speed network I/O is supported, so
single-client performance is poor in comparison to other real filesystems that
scale.

- Multiple NFS mounts from the same client (TSM Server) to the Isilon box
can help a little but the setup is clumsy and this is not real parallel I/O
- it's a hack! Still slow.

- "Magic tools" like dsmisi from (?) can NOT fix this problem, they just
hide the multiple NFS mount mess a little bit and cost way to much money.

- For backups, where large I/Os are the norm, NFS is the most inefficient way
of using your resources.

- Get a real scalable filesystem, use a single mountpoint and drive your
networks with optimal I/O speed.

-frank-

Frank Kraemer
IBM Consulting IT Specialist  / Client Technical Architect
Am Weiher 24, 65451 Kelsterbach
mailto:kraem...@de.ibm.com
voice: +49-(0)171-3043699 / +4970342741078
IBM Germany
--
Grant Street
Senior Systems Engineer

T: +61 2 9383 4800 (main)
D: +61 2 8310 3582 (direct)
E: grant.str...@al.com.au

Building 54 / FSA #19, Fox Studios Australia, 38 Driver Avenue
Moore Park, NSW 2021
AUSTRALIA

  [LinkedIn] <https://www.linkedin.com/company/animal-logic>   [Facebook] 
<https://www.facebook.com/Animal-Logic-129284263808191/>   [Twitter] 
<https://twitter.com/AnimalLogic>   [Instagram] 
<https://www.instagram.com/animallogicstudios/>

[Animal Logic]<http://www.animallogic.com>

www.animallogic.com<http://www.animallogic.com>

CONFIDENTIALITY AND PRIVILEGE NOTICE
This email is intended only to be read or used by the addressee. It is 
confidential and may contain privileged information. If you are not the 
intended recipient, any use, distribution, disclosure or copying of this email 
is strictly prohibited. Confidentiality and legal privilege attached to this 
communication are not waived or lost by reason of the mistaken delivery to you. 
If you have received this email in error, please delete it and notify us 
immediately by telephone or email.


Re: Looking for suggestions to deal with large backups not completing in 24-hours

2018-07-11 Thread Grant Street
> >>>> "ADSM: Dist Stor Manager"  wrote on 07/05/2018
> >>>> 02:52:27 PM:
> >>>>
> >>>>> From: Zoltan Forray 
> >>>>> To: ADSM-L@VM.MARIST.EDU
> >>>>> Date: 07/05/2018 02:53 PM
> >>>>> Subject: Looking for suggestions to deal with large backups not
> >>>>> completing in 24-hours
> >>>>> Sent by: "ADSM: Dist Stor Manager" 
> >>>>>
> >>>>> As I have mentioned in the past, we have gone through large migrations
> >>>>> to DFS-based storage on EMC ISILON hardware.  As you may recall, we
> >>>>> back up these DFS mounts (about 90 at last count) using multiple
> >>>>> Windows servers that run multiple ISP nodes (about 30 each), and they
> >>>>> access each DFS mount/filesystem via -object=\\rams.adp.vcu.edu\departmentname.
> >>>>>
> >>>>> This has led to lots of performance issues with backups, and some
> >>>>> departments are now complaining that their backups are running into
> >>>>> multiple days in some cases.
> >>>>>
> >>>>> One such case is a department with 2 nodes with over 30 million objects
> >>>>> for each node.  In the past, their backups were able to finish quicker
> >>>>> since they were accessed via dedicated servers and were able to use
> >>>>> Journaling to reduce the scan times.  Unless things have changed, I
> >>>>> believe Journaling is not an option due to how the files are accessed.
> >>>>>
> >>>>> FWIW, average backups are usually <50k files and <200GB once they
> >>>>> finish scanning.
> >>>>>
> >>>>> Also, the idea of HSM/SPACEMANAGEMENT has reared its ugly head since
> >>>>> many of these objects haven't been accessed in many years. But as I
> >>>>> understand it, that won't work either given our current configuration.
> >>>>>
> >>>>> Given the current DFS configuration (previously CIFS), what can we do
> >>>>> to improve backup performance?
> >>>>>
> >>>>> So, any-and-all ideas are up for discussion.  There is even discussion
> >>>>> of replacing ISP/TSM due to these issues/limitations.
> >>>>>
ANR2033E Command failed - lock conflict

2015-07-07 Thread Grant Street

Hi All

We are running some NDMP backups in parallel using the PARALLEL
functionality in a TSM script.
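
A sketch of what such a script can look like (node names, filespaces and the
admin credentials below are invented placeholders, so treat this as an example
only, not our exact setup):

# build a server script whose BACKUP NODE commands run concurrently,
# then kick it off
cat > /tmp/ndmp_parallel.mac <<'EOF'
define script ndmp_parallel "parallel" line=1 desc="Concurrent NDMP backups"
update script ndmp_parallel "backup node nas01 /vol/projects mode=differential wait=yes" line=5
update script ndmp_parallel "backup node nas01 /vol/scratch mode=differential wait=yes" line=10
update script ndmp_parallel "backup node nas02 /vol/archive mode=differential wait=yes" line=15
update script ndmp_parallel "serial" line=20
EOF
dsmadmc -id=admin -password=XXXXX macro /tmp/ndmp_parallel.mac
dsmadmc -id=admin -password=XXXXX "run ndmp_parallel"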

Since moving to 7.1.1.300 from 6.3 we have noticed that we are getting

ANR2033E BACKUP NODE: Command failed - lock conflict.

Has anyone else seen this? or have some advice?

We can't change it to a single stream as it will take close to a week in
order to do the backup

Thanks in advance

Grant


Re: TSM server V7.1.1.300

2015-07-07 Thread Grant Street

On 07/07/15 17:00, Robert Ouzen wrote:

Hello all

Considering upgrading my TSM servers to version 7.1.1.300; I want to know if
someone has already done it and had any issues?

Best Regards

Robert Ouzen

I did find that after upgrading from 6.3, trying to do a restore straight
after the install took ages, e.g. 30 minutes instead of 1 minute. The db2
processes were going nuts during the restore. This was repeatable, but it
resolved itself by the next morning.

I'm guessing that it was still re-initializing indexes etc or some other
housekeeping task.

Apart from that no issues

Grant


Re: ANR2033E Command failed - lock conflict

2015-07-07 Thread Grant Street

Cheers

I might see if redoing the master script to spawn sub scripts will help

Grant


On 08/07/15 02:20, Gee, Norman wrote:

I have been running 4 streams in parallel against a single NAS server.  There
are parameters within the NAS server that specify how many background tasks
can be started.  I use a single master script that starts 4 other scripts in
parallel, each of which runs sequentially. I am not at 7.1.1.300 yet, but I am
running 7.1.0.0.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Nick 
Marouf
Sent: Tuesday, July 07, 2015 7:22 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: ANR2033E Command failed - lock conflict

Hi Grant,

I've been interested in pursuing NDMP backups in parallel. How does it work
overall?

Is this lock conflict something that you have experienced specifically with
the new version of tsm, and not with version 6.3?

Thanks,
-Nick


On Tue, Jul 7, 2015 at 12:41 AM, Grant Street grant.str...@al.com.au
wrote:


Hi All

We are running some NDMP backups in parallel using the PARALLEL
functionality in a TSM script.

Since moving to 7.1.1.300 from 6.3 we have noticed that we are getting

ANR2033E BACKUP NODE: Command failed - lock conflict.

Has anyone else seen this? or have some advice?

We can't change it to a single stream as it will take close to a week in
order to do the backup

Thanks in advance

Grant



Re: ANR2033E Command failed - lock conflict

2015-07-07 Thread Grant Street

Hi Nick

It is a bit cumbersome to do parallel backups, but it does work.
Essentially you run a BACKUP NODE process per NAS volume, depending on
your NAS's definition of volume.
This is tricky when you have few volumes or volumes that vary greatly in
size. If you only have two volumes, you can only ever create two streams.
If you have two volumes, one 100GB and one 1TB, and you want to do a
backup storage pool afterwards, you have to wait for the largest to finish.

We never saw it in 6.3, which we ran for 18 months or more. We have seen
it every time in 7.1.1.300; we have to keep retrying until it works.
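
For what it's worth, a dumb retry wrapper along these lines is one way to
automate the retries (untested sketch; node name, filespace and credentials are
invented, and dsmadmc simply returns a non-zero exit code when the command
fails):

#!/bin/sh
# retry a single BACKUP NODE until it no longer fails with the lock conflict
for attempt in 1 2 3 4 5; do
    dsmadmc -id=admin -password=XXXXX \
        "backup node nas01 /vol/projects mode=differential wait=yes" && exit 0
    echo "attempt $attempt failed, retrying in 60s" >&2
    sleep 60
done
exit 1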

Grant

On 08/07/15 00:22, Nick Marouf wrote:

Hi Grant,

I've been interested in pursuing NDMP backups in parallel. How does it work
overall?

Is this lock conflict something that you have experienced specifically with
the new version of tsm, and not with version 6.3?

Thanks,
-Nick


On Tue, Jul 7, 2015 at 12:41 AM, Grant Street grant.str...@al.com.au
wrote:


Hi All

We are running some NDMP backups in parallel using the PARALLEL
functionality in a TSM script.

Since moving to 7.1.1.300 from 6.3 we have noticed that we are getting

ANR2033E BACKUP NODE: Command failed - lock conflict.

Has anyone else seen this? or have some advice?

We can't change it to a single stream as it will take close to a week in
order to do the backup

Thanks in advance

Grant



Re: ANR2033E Command failed - lock conflict

2015-07-07 Thread Grant Street

Sorry.. to clarify
If you only have two volumes you can only ever create a maximum of two
streams.

On 08/07/15 10:07, Grant Street wrote:

Hi Nick

It is a bit cumbersome to do parallel backups, but it does work.
Essentially you run a backup node process per NAS volume depending on
your NAS's definition of volume.
This is tricky when you have few volumes or volumes that vary greatly in
size. If you only have two volumes you can only ever create two streams.
If you have two volumes, one 100GB and one 1TB and you want to do a
backup storage pool after, you have to wait for the largest to finish.

We never saw it in 6.3 that we ran for 18 months or more. We have seen
it every time in 7.1.1.300. We have to keep retrying until it works.

Grant

On 08/07/15 00:22, Nick Marouf wrote:

Hi Grant,

I've been interested in pursuing NDMP backups in parallel. How does it work
overall?

Is this lock conflict something that you have experienced
specifically with
the new version of tsm, and not with version 6.3?

Thanks,
-Nick


On Tue, Jul 7, 2015 at 12:41 AM, Grant Street grant.str...@al.com.au
wrote:


Hi All

We are running some NDMP backups in parallel using the PARALLEL
functionality in a TSM script.

Since moving to 7.1.1.300 from 6.3 we have noticed that we are getting

ANR2033E BACKUP NODE: Command failed - lock conflict.

Has anyone else seen this? or have some advice?

We can't change it to a single stream as it will take close to a
week in
order to do the backup

Thanks in advance

Grant



Re: Trailing .000000

2015-07-05 Thread Grant Street

try this

select varchar_format(actlog.date_time, 'YYYY-MM-DD HH24:MI:SS') as date_time,
  actlog.nodename as Nodename, actlog.message as Message, nodes.node_name, nodes.contact
from nodes, actlog
where nodes.contact like 'Component Team Windows%%' and actlog.nodename = nodes.node_name
  and actlog.originator = 'CLIENT' and actlog.severity <> 'I'
  and cast(timestampdiff(16, current_timestamp - actlog.date_time) as decimal(4,1)) = 2
group by actlog.nodename, actlog.date_time, actlog.message, nodes.node_name, nodes.contact
order by actlog.nodename
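
(If you want to run that non-interactively, e.g. from cron, something like the
following keeps the output clean - the admin id and password are placeholders:)

dsmadmc -id=reporter -password=XXXXX -dataonly=yes -commadelimited \
    "select varchar_format(date_time,'YYYY-MM-DD HH24:MI:SS') from actlog fetch first 5 rows only"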






On 02/07/15 01:08, Loon, EJ van (ITOPT3) - KLM wrote:

Hi Kurt!
Your SQL does indeed return the output I'm looking for, but how do I embed it
in the following SQL statement?

select actlog.date_time, actlog.nodename as Nodename, actlog.message as Message,
  nodes.node_name, nodes.contact from nodes, actlog
where nodes.contact like 'Component Team Windows%%' and actlog.nodename = nodes.node_name
  and actlog.originator = 'CLIENT' and actlog.severity <> 'I'
  and cast(timestampdiff(16, current_timestamp - actlog.date_time) as decimal(4,1)) = 2
group by actlog.nodename, actlog.date_time, actlog.message, nodes.node_name, nodes.contact
order by actlog.nodename

Again, thanks for your help!
Kind regards,
Eric van Loon
AF/KLM Storage Engineering


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of BEYERS 
Kurt
Sent: woensdag 1 juli 2015 15:57
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Trailing .000000

Hi Eric,

It's always fun to play and calculate with timestamps in db2, here is one of 
the solutions to get the desired format:

tsm: TSMLABO3> select date_time from actlog fetch first 2 rows only

   DATE_TIME
--------------------------
  2015-03-27 07:06:10.000000
  2015-03-27 07:06:10.000000

tsm: TSMLABO3> select varchar_format(date_time, 'YYYY-MM-DD HH24:MI:SS') as
date_time from actlog fetch first 2 rows only

DATE_TIME: 2015-03-27 07:06:10
DATE_TIME: 2015-03-27 07:06:10

Regards,
Kurt

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Loon, EJ van
(ITOPT3) - KLM
Sent: Wednesday, 1 July 2015 15:17
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Trailing .000000

Hi guys!
Now that I have my SQL statement working I noticed that a select date_time from
actlog returns the following format:

2015-06-29 23:27:04.000000

What is the proper way to remove the trailing .000000?
Again, thanks for your help!!!
Kind regards,
Eric van Loon
AF/KLM Storage Engineering


For information, services and offers, please visit our web site: 
http://www.klm.com. This e-mail and any attachment may contain confidential and 
privileged material intended for the addressee only. If you are not the 
addressee, you are notified that no part of the e-mail or any attachment may be 
disclosed, copied or distributed, and that any other action related to this 
e-mail or attachment is strictly prohibited, and may be unlawful. If you have 
received this e-mail by error, please notify the sender immediately by return 
e-mail, and delete this message.

Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or its 
employees shall not be liable for the incorrect or incomplete transmission of 
this e-mail or any attachments, nor responsible for any delay in receipt.
Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal Dutch 
Airlines) is registered in Amstelveen, The Netherlands, with registered number 
33014286


*** Disclaimer ***
Vlaamse Radio- en Televisieomroeporganisatie Auguste Reyerslaan 52
1043 Brussel

nv van publiek recht
BTW BE 0244.142.664
RPR Brussel
VRT Gebruikersvoorwaarden http://www.vrt.be/gebruiksvoorwaarden





Re: Automatic startup of dsmcad daemon

2015-05-12 Thread Grant Street

Create an init script.
Use chkconfig to enable and disable it.
Use service to start/stop it and check its status.

The following is an OK starting point:
https://www-304.ibm.com/support/docview.wss?uid=swg21358414

I have a server that runs multiple dsmcads at once, so I created a more
flexible version, attached below.

HTH

Grant

On 12/05/15 16:20, Grigori Solonovitch wrote:

In AIX I am using "nohup /usr/bin/dsmcad &"

Grigori Solonovitch, Senior Systems Architect, IT, Ahli United Bank Kuwait, 
www.ahliunited.com.kw

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
madu...@gmail.com
Sent: 12 05 2015 8:24 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Automatic startup of dsmcad daemon

Good Day,

I would like to have automatic startup of dsmcad daemon on system reboot on Red 
Hat Linux, would be this the best approach:
cad::once:/opt/tivoli/tsm/client/ba/bin/dsmcad > /dev/null 2>&1 # TSM Webclient

-mad


Please consider the environment before printing this Email.



CONFIDENTIALITY AND WAIVER: The information contained in this electronic mail 
message and any attachments hereto may be legally privileged and confidential. 
The information is intended only for the recipient(s) named in this message. If 
you are not the intended recipient you are notified that any use, disclosure, 
copying or distribution is prohibited. If you have received this in error 
please contact the sender and delete this message and any attachments from your 
computer system. We do not guarantee that this message or any attachment to it 
is secure or free from errors, computer viruses or other conditions that may 
damage or interfere with data, hardware or software.





#!/bin/sh
#
# Taken from https://www-304.ibm.com/support/docview.wss?uid=swg21358414
# start and stop the client daemon for a node
# Fixed so multiple could be run at once
#
# To create a new node
# 1. Create the node definition in the dsm.sys
# 2. Create an opt file called <node name lowercase>.opt
# 3. Copy this file to /etc/init.d/dsmcad-<node name lowercase>
# 4. chkconfig --add dsmcad-<node name lowercase>
# 5. service dsmcad-<node name lowercase> start
#
# chkconfig: 345 93 35
# description: Starts and stops TSM client acceptor daemon
#
# added new chkconfig
### BEGIN INIT INFO
# Provides: 
# Required-Start: $local_fs $network $remote_fs $autofs
# Required-Stop: $local_fs $network $remote_fs $autofs
# Default-Start:   3 4 5
# Default-Stop: 0 1 2 6
# Short-Description: TSM infr Instance
# Description: Tivoli Storage Manager Infrastructure Instance
### END INIT INFO

#Source function library.
. /etc/rc.d/init.d/functions

[ -f /opt/tivoli/tsm/client/ba/bin/dsmc ] || exit 0
[ -f /opt/tivoli/tsm/client/ba/bin/dsmcad ] || exit 0

prog=dsmcad

service=`basename $0`

# see if $0 starts with Snn or Knn, where 'n' is a digit.  If it does, then
# strip off the prefix and use the remainder as the instance name.
if [[ ${service:0:1} == S ]]
then
  service=${service#S[0123456789][0123456789]}
elif [[ ${service:0:1} == K ]]
then
  service=${service#K[0123456789][0123456789]}
fi


node=${service#dsmcad-}
pidfile=/var/run/${service}.pid

export DSM_DIR=/opt/tivoli/tsm/client/ba/bin
export DSM_CONFIG=/opt/tivoli/tsm/client/ba/bin/$node.opt

# To not skip  unrecognized characters during sched or web client backups
export LANG=en_AU
export LC_ALL=en_AU

start() {
  echo -n $"Starting $prog for tsm node $node: "
  cd $DSM_DIR
  daemon $DSM_DIR/dsmcad -optfile=$DSM_CONFIG
  # capture the daemon function's return code before echo overwrites $?
  RETVAL=$?
  echo
#  echo daemon $DSM_DIR/dsmcad
  [ $RETVAL -eq 0 ] && touch /var/lock/subsys/$service && \
    ps -fe | grep -v grep | grep "$DSM_DIR/dsmcad -optfile=$DSM_CONFIG" | awk '{print $2}' > $pidfile
  return $RETVAL
}

stop() {
  if [ -f $pidfile ] ; then
    echo -n $"Stopping $prog for tsm node $node: "
    killproc -p $pidfile $prog
    # capture killproc's return code before echo overwrites $?
    RETVAL=$?
    #echo killproc dsmcad
    echo
  else
    echo -n "$service not running"
    echo
    RETVAL=0
  fi
  [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$service && rm -f $pidfile
  return $RETVAL
}

case $1 in
start)
start
;;

stop)
stop
;;

status)
status -p $pidfile $prog
;;
restart)
stop
start
;;
condrestart)
if [ -f /var/lock/subsys/$service ] ; then
  stop
  start
else 
  echo $service not running
fi
;;

*)
echo $"Usage: $0 {start|stop|restart|condrestart|status}"
exit 1

esac

exit 0


Speed of NDMP with TOC data over WAN

2015-01-22 Thread Grant Street

Hello

We have a backup scenario where we have a NetApp filer with a Tape
library attached in the US and our TSM server in Australia (250ms round
trip time).

We are seeing some very slow NDMP backup times. Anyone have a similar
setup? Does the link between the TSM server and the Filer affect backup
speed, even though it's only TOC/control data? Are there any
configurations that can help?

Feel free to contact me directly.

Grant


Re: Drive preference in a mixed-media library sharing environment

2015-01-06 Thread Grant Street

Could you post the PMR number so others can track it? Also, if it becomes
an RFE for some reason, can you post the RFE number so that others (i.e.
me) can vote for it?

Even though this is functionality that I don't need now, it is something
that may be of use and help in future architecture designs. We tend to
use mixed-generation media, i.e. LTO4, LTO5 and LTO6, because of our
mostly archival nature. Being able to extend the range of media by using
a mix of drives in a sane way would definitely be of interest for us.

Thanks

Grant

  On 07/01/15 01:47, Skylar Thompson wrote:

Good to know. Unfortunately, while we have discrete barcode ranges for
each media type, it would be a big change for our checkin/checkout
procedures so I don't know that we'll be able to go that route. We'll live
with it for now, and file a PMR with IBM if it does start impacting us
more. Based on the documentation, it does seem like the current behavior is
a defect.

On Fri, Dec 12, 2014 at 04:29:18PM +, Prather, Wanda wrote:

I've never had a problem defining multiple TSM (logical) libraries on one 
device address (but I can't say I've tried it since 6.2, and that was on 
Windows).

What you can't do is have one device class pointing to 2 different libraries, 
so you'll also have to do some juggling there, create some new devclasses and 
storage pools to use going forward.


Wanda Prather
TSM Consultant
ICF International Cybersecurity Division





-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Thursday, December 11, 2014 10:15 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Drive preference in a mixed-media library sharing 
environment

Interesting. I hadn't considered using different libraries to solve this.
It was a little unclear from the thread - does this require partitioning on the 
library side? I wasn't aware that two different libraries (presumably with two 
different paths) could share a single device special node.

On Wed, Dec 10, 2014 at 06:23:10PM -0600, Roger Deschner wrote:

It won't work. I tried and failed in a StorageTek SL500 library with
LTO4 and LTO5. Just like you are reporting, the LTO4 tapes would get
mounted in the LTO5 drives first, and then there was no free drive in
which to mount a LTO5 tape. I tried all kinds of tricks to make it
work, but it did not work.

Furthermore, despite claims of compatibility, I found that there was a
much higher media error rate when using LTO4 tapes in LTO5 drives,
compared to using the same LTO4 tapes in LTO4 drives. These were HP
drives.

The only way around it is to define two libraries in TSM, one
consisting of the LTO5 drives and tapes, and the other consisting of
the LTO6 drives and tapes. Hopefully your LTO5 and LTO6 tapes can be
identified by unique sequences of volsers, e.g. L50001 versus L60001,
which will greatly simplify TSM CHECKIN commands, because then you can
use ranges instead of specifying lists of individual volsers. To check
tapes into that mixed-media library I use something like
VOLRANGE=L5,L5 on the CHECKIN and LABEL commands to make sure
the right tapes get checked into the right TSM Library. Fortunately
the different generations of tape cartridges are different colors.
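
(For illustration only - the library names and volser ranges below are
invented, but the commands are the ones described above:)

dsmadmc -id=admin -password=XXXXX \
    "label libvolume lib_lto5 search=yes labelsource=barcode checkin=scratch volrange=L50000,L59999"
dsmadmc -id=admin -password=XXXXX \
    "checkin libvolume lib_lto6 search=yes checklabel=barcode status=scratch volrange=L60000,L69999"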

You can read all about what I went through, and the good, helpful
recommendations from others on this list, by searching the ADSM-L
archives for UN-mixing LTO-4 and LTO-5. Thanks again to Remco Post
and Wanda Prather for their help back then in 2012!

Roger Deschner  University of Illinois at Chicago rog...@uic.edu
==I have not lost my mind -- it is backed up on tape
somewhere.=


On Wed, 10 Dec 2014, Grant Street wrote:


On 10/12/14 02:40, Skylar Thompson wrote:

Hi folks,

We have two TSM 6.3.4.300 servers connected to a STK SL3000 with 8x
LTO5 drives, and 8x LTO6 drives. One of the TSM servers is the
library manager, and the other is a client. I'm seeing odd behavior
when the client requests mounts from the server. My understanding
is that a mount request for a volume will be placed preferentially
in the least-capable drive for that volume; that is, a LTO5 volume
mounted for write will be placed in a LTO5 drive if it's available,
and in a LTO6 drive if no LTO5 drives are available.

What I'm seeing is that LTO5 volumes are ending up in LTO6 drives
first, even with no LTO5 drives in use at all. I've verified that
all the LTO5 drives and paths are online for both servers.

I haven't played with MOUNTLIMIT yet, but I don't think it'll do
any good since I think that still depends on the mounts ending up
in the least-capable drives first.

Any thoughts?

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine

might be a stab in the dark . try numbering the drives such that
the LTO5's are first in the drive list or vice versa.
That way when tsm

Re: use of a tape library from two NetApps

2014-12-09 Thread Grant Street

On 10/12/14 04:36, TSM wrote:

Hello TSMers,

who has knowledge or experience with this topic?
Is it possible, to use the same tape library for NDMP backups from two
NetApps without access from the TSM server to the tape library?

The TSM admin guide says:
When the library is attached directly to the NAS file server, the Tivoli
Storage Manager server controls the library by passing SCSI commands to
the library through the NAS file server.

But is this also possible with two NetApps?


Best regards
Andreas.

I know you can do this if you are able to partition your library. Most
libraries can do this these days, even though there is only one robot.

This does mean, obviously, that drives and slots cannot be used or SEEN
by both NetApps at the same time. You would be able to repartition the
library, but there would need to be a rediscovery process so that the
clients (i.e. the NetApps) can see the new configuration.

HTH

Grant


Re: Drive preference in a mixed-media library sharing environment

2014-12-09 Thread Grant Street

On 10/12/14 02:40, Skylar Thompson wrote:

Hi folks,

We have two TSM 6.3.4.300 servers connected to a STK SL3000 with 8x LTO5
drives, and 8x LTO6 drives. One of the TSM servers is the library manager,
and the other is a client. I'm seeing odd behavior when the client requests
mounts from the server. My understanding is that a mount request for a
volume will be placed preferentially in the least-capable drive for that
volume; that is, a LTO5 volume mounted for write will be placed in a LTO5
drive if it's available, and in a LTO6 drive if no LTO5 drives are
available.

What I'm seeing is that LTO5 volumes are ending up in LTO6 drives first,
even with no LTO5 drives in use at all. I've verified that all the LTO5
drives and paths are online for both servers.

I haven't played with MOUNTLIMIT yet, but I don't think it'll do any good
since I think that still depends on the mounts ending up in the
least-capable drives first.

Any thoughts?

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine

Might be a stab in the dark: try numbering the drives such that the
LTO5s are first in the drive list, or vice versa.
That way, when TSM scans for an available drive it will always try the
LTO5s first.

HTH

Grant


Re: Backing up Isilons with TSM

2014-11-25 Thread Grant Street

Are you using NFS or CIFS?

If it's NFS make sure you export the mount point or /ifs with at least
read and root access to the TSM node(s).

Grant

On 26/11/14 03:34, Zoltan Forray wrote:

Skylar,

Can you give us details on how you set up the Isilon to give a TSM node
authority to back up the individual mount points?  My SAN/Isilon guy has
tried setting up a standard TSM node, but all attempts at backing up mounted
filesystems block him with access-denied errors.  We are very new to
Isilon, so any help would be greatly appreciated.

Feel free to email me directly if you want to take this discussion
offline/off-list.

On Fri, Nov 21, 2014 at 11:08 AM, Skylar Thompson skyl...@u.washington.edu
wrote:


Hi Zoltan,

We back up two large Isilon clusters (one 715TB, the other 2PB) using TSM.
We looked at NDMP and ran away quickly due to the full backup requirement.
Instead, we worked with the data owners to set up a data organization scheme
beforehand, then back up individual directories as filespaces using TSM
NFS clients connected over 10GbE. Currently we have five of these clients,
and have accommodated bursts of as much as 50TB of changed/new data in a day
using them. The only nemesis we have is folks who create lots of tiny
files; fortunately, though, over time these incidents have gotten less
frequent due to education and improvements in genomic tools.

We've been asking for Isilon to make a native TSM client since before they
were bought by EMC, and unfortunately they're just not interested. Now
that EMC owns them, I think the possibility of a native client is even more
remote.

On Fri, Nov 21, 2014 at 11:00:28AM -0500, Zoltan Forray wrote:

Anyone have experience backing up an EMC Isilon and can share

war-stories,

methods, etc?

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine




--
*Zoltan Forray*
TSM Software & Hardware Administrator
BigBro / Hobbit / Xymon Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html


Re: Spectra T-Finity/TS1140/TSM 6.3/RHEL6

2014-09-21 Thread Grant Street

On 11/09/14 09:01, Tom Tann{s wrote:

Hi!

Anyone using this combination of library/drives/OS?

I try using the tsm-device-driver to control the library, and lin_tape to
control the drives.

The library and the drives connect OK and seem to work fine by themselves.

Defining paths to the drives in the library fails, because the library uses only
the first 10 digits of the drive S/N, while lin_tape uses all 12. So there is a
mismatch, and the drive is not found in the library.

PMR's have been opened against IBM and Spectra..

Has anyone got a similar configuration and made it work?

I have a similar setup with a different library vendor, but...
Just be careful with compatible versions; the compatibility matrix gets
very complicated. Last I heard you had to use RHEL 6.4...

Grant


Re: Wrong estimation for VTL

2014-08-06 Thread Grant Street

The following is how TSM handles tapes; I assume a VTL would be the same.

Essentially, the estimated capacity is assigned to the tape when it is
allocated to a stgpool and loaded into the drive.
This value is taken from the device class's Est/Max Capacity (MB) setting.

TSM will not update the capacity again until it has reached the End Of
Tape.

I have put in RFE 31662 so that TSM would (better) estimate tape
capacity.
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfeCR_ID=31662

The best solution would be to align the estimated capacity for the
device class with the actual data size you expect to fit on a tape.
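
For example (the device class name and size here are invented):

dsmadmc -id=admin -password=XXXXX "update devclass vtl_lto4 estcapacity=800G"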

Grant

On 06/08/14 15:37, Grigori Solonovitch wrote:

TSM 6.3.3.100
VTL on Data Domain
I have found a wrong Estimated Capacity for primary storage pools on VTL (for example,
for a pool with 128 * 64 = 8192GB it gives an estimated capacity of 18,191 GB in query stg,
but that is impossible).
As a result, storage pool utilization is calculated wrongly as well.
What is the source of problem?
Data Domain deduplication or something else?
Shall I ignore this problem?

Grigori Solonovitch, Senior Systems Architect, IT, Ahli United Bank Kuwait, 
www.ahliunited.com.kw




CONFIDENTIALITY AND WAIVER: The information contained in this electronic mail 
message and any attachments hereto may be legally privileged and confidential. 
The information is intended only for the recipient(s) named in this message. If 
you are not the intended recipient you are notified that any use, disclosure, 
copying or distribution is prohibited. If you have received this in error 
please contact the sender and delete this message and any attachments from your 
computer system. We do not guarantee that this message or any attachment to it 
is secure or free from errors, computer viruses or other conditions that may 
damage or interfere with data, hardware or software.


Please consider the environment before printing this Email.


Re: Exchange TDP and Export weirdness

2014-08-04 Thread Grant Street

On 05/08/14 07:14, Prather, Wanda wrote:

TSM server 6.3.4 on Win2K8 64
Exchange 2010 in a DAG configuration, 22 DB's
TDP for Exchange 6.4.0
We run fulls on Saturday, incrementals Sun-Fri.

Trying to Export a set of Exchange backups for *one* DB/filespace to sequential 
media for legal hold.
Here's the command:

EXPORT NODE x-dag1 fsid=19 filedata=all fromdate=05/10/2014 
todate=05/16/2014 fromtime=04:00 totime=23:59
devclass=lto

x-dag1 is the storage node that holds all the Exchange DB filespaces.
There was a full backup run starting 05/10/2014 at 08:00.

So here's the weirdness:
The Export runs a while, mounts some of the tapes you'd expect, then calls for 
a tape whose last write date is 04/25/2014.

That fails the job because that tape is offsite.  I don't mind getting that 
tape back from the vault for processing, except that something is clearly hosed 
here, and I wonder if *any* of the data going on my export tape is correct.

Anybody seen something like this before?
I have even done a query on the BACKUPs table to verify there are no objects of 
type DIR in that filespace.
I'm flummoxed.


The tape that causes the failure - is it a primary volume or a copy volume?
If it's a copy volume, then TSM can't access the primary volume, so there
is something wrong.

The tape could also belong to a TOC/Windows metadata pool.
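
A quick way to check (volume name and credentials invented) is to ask the
server which pool the tape belongs to, then look at that pool's type:

dsmadmc -id=admin -password=XXXXX "query volume VOL123L5 format=detailed"
dsmadmc -id=admin -password=XXXXX "query stgpool format=detailed"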

Grant


Re: Question about NDMP and TSM.

2014-07-16 Thread Grant Street

Hi

First check compatibility with IBM and EMC.

Ron has mentioned a solution that applies to NetApp but may not apply
to EMC Isilon, e.g. Isilon does not have a vfiler concept.


I found that the restrictions of NDMP did not warrant the effort of using
it on something like an Isilon cluster.
The benefit of Isilon is the size you can grow it to and the bandwidth
you get to storage across multiple nodes.


Using NDMP negates all of this and reduces the functionality:
- you are limited to the bandwidth a single host (TSM server) and a single
Isilon node can service.
- There is no incremental, only differential; there is no snapdiff-type
functionality like NetApps have.

- You can't use NDMP to back up just part of it.
- The time it would take to do a full backup.
- The time it would take to do a restore.

One other niggle is that you need a Windows machine to do a graphical
restore (Web/Java UI).


That said, if your data is small then these probably don't matter so much.


Grant

On 16/07/14 13:51, Gee, Norman wrote:

I found out that this is true a long time ago, but it does not stop you from 
manually doing a move data to empty out a tape volume.  It is just a very 
manual form of reclaim.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Steven 
Harris
Sent: Tuesday, July 15, 2014 6:48 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Question about NDMP and TSM.

And as a bonus, ndmp storage pools cannot be reclaimed and this means that
they hold tapes until the last data has expired.  TSM format storage pools
can be reclaimed, and if they are file storage pools can be deduped as well.


Regards

Steve

Steven Harris
TSM Admin
Canberra Australia



On 16 July 2014 05:38, Ron Delaware ron.delaw...@us.ibm.com wrote:


Ricky,

The configuration that you are referring to is what could be considered
the 'Traditional' implementation of NDMP.  As you have found for yourself,
there are a number of restrictions on how the data can be managed though.

If you configure the NDMP environment so that the Tivoli Storage Manager
server controls the data flow instead of the NetApp appliance, you have more
options.


This configuration will allow you to back up to TSM storage pools (disk,
VTL, tape) and send copies offsite, because the TSM server controls the
destination.  You have the option to use a traditional TSM client utilizing
the NDMP protocol, or to have the TSM server perform the backups and restores
using the BACKUP NODE and RESTORE NODE commands. With a table-of-contents
storage pool (disk based only, highly recommended) you can perform
single-file restores.  You can also create virtual filespace pointers to
your vfiler that will allow you to run simultaneous backups of the vfiler,
which could shorten your backup and restore times.
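
As a sketch of what Ron describes (NAS node, path and mapping names are
invented, and TOC=YES assumes a table-of-contents capable storage pool is
already set up):

dsmadmc -id=admin -password=XXXXX \
    "define virtualfsmapping nasnode /vfs_proj1 /vol/vol1 /projects1"
dsmadmc -id=admin -password=XXXXX \
    "define virtualfsmapping nasnode /vfs_proj2 /vol/vol1 /projects2"
dsmadmc -id=admin -password=XXXXX \
    "backup node nasnode /vfs_proj1,/vfs_proj2 mode=differential toc=yes wait=no"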





Best Regards,

_
* Ronald C. Delaware*
IBM Level 2 - IT Plus Certified Specialist – Expert
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified Deployment Professional
Butterfly Solutions Professional
916-458-5726 (Office
925-457-9221 (cell phone)

email: *ron.delaw...@us.ibm.com* ron.delaw...@us.ibm.com
*Storage Services Offerings*
http://www-01.ibm.com/software/tivoli/services/consulting/offers-storage-optimization.html





From:Schneider, Jim jschnei...@ussco.com
To:ADSM-L@vm.marist.edu
Date:07/15/2014 12:19 PM
Subject:Re: [ADSM-L] Question about NDMP and TSM.
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu
--



Ricky,

The Isilon uses the OneFS file system and TSM views it as one huge file
system.  If backing up to disk, TSM will attempt to preallocate enough
space to back up the entire allocated space on the Isilon. Defining Virtual
File systems will not help because directory quota information is not
passed to TSM, and TSM only sees the total allocated space.

We were able to back up the Isilon to disk when we started on a test
system with little data on it, around 25 GB.  When we attempted to
implement the same backups on a second, well-populated Isilon we ran into
the space allocation problem.

When backing up to tape, TSM assumes you have unlimited storage available
and is able to run VFS backups.  We use Virtual File Space Mapping (VFS)
and back up to tape.

Refer to EMC SR#4646, TSM PMR 23808,122,000.

Jim Schneider
United Stationers

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU
ADSM-L@VM.MARIST.EDU] On Behalf Of Plair, Ricky
Sent: Tuesday, July 15, 2014 1:21 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Question about NDMP and TSM.

I have been asked to look into backing up our EMC Isilon using our TSM
server.

Everything I read seems to point to backing this NDMP device up to tape.

Problem is, we do not use tape to back up production.

I have researched and found a few articles about backing the NDMP device
up to tape, but there seem to 

Re: SQL statement

2014-03-16 Thread Grant Street

Be aware that this does not work for snapdiff backups.

RFE 13145 : snapdiff to update last backup fields in filespace data

Grant

On 14/03/14 03:37, Skylar Thompson wrote:

You'll want to do a join across both tables on the node name. Something like 
this:

SELECT f.node_name,f.filespace_name,o.physical_mb -
FROM filespaces f -
INNER JOIN occupancy o ON f.node_name=o.node_name -
WHERE -
(days(f.backup_end) < (days(current_date)-30)) -
ORDER BY o.physical_mb DESC

On Thu, Mar 13, 2014 at 05:16:37PM +0100, Loon, EJ van (SPLXM) - KLM wrote:

Dear TSM-ers,

I'm trying to generate a SQL statement to create a list of filespaces
which are not backed up for more than 30 days, sorted on their occupancy
size. This is what I've got so far:



select node_name, filespace_name, physical_mb from occupancy where
filespace_name in (select filespace_name from filespaces where
(days(filespaces.backup_end) < (days(current_date)-30))) order by
physical_mb desc



It doesn't work however, because filespace names are not unique. As soon
as a different node is found with the same filespace_name it's listed
too and that's not what I'm aiming for. I guess nested SQL is not the
way to go, but I don't know the solution.

Thanks you very much for your help in advance!!!

Kind regards,

Eric van Loon

AF/KLM Storage Engineering






--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine


Re: NFS only supported on AIX.

2013-10-08 Thread Grant Street

Got confirmation this morning. Essentially they will only offer Best
Efforts with any NFS that is not NFSv4 on AIX. The following are quotes
from the ticket:

"As my colleague Dave pointed out we will give our best effort to
resolve any problems you may have with NFS, but since it is not
supported in most environments except AIX, we cannot guarantee that we
will be able to resolve all issues related to NFS."

"My understanding is that snapdiff will work in the environments
specified in the restriction but it is not fully supported, in other
words only on a best effort basis."
Grant


On 09/10/13 01:53, Paul Zarnowski wrote:

You may be confusing NFSv3 with v4.  I can believe that v4 support is limited,
but v3 is supported on AIX, Solaris, Linux, et al.  Snapdiff incrementals are
only supported to a NetApp from AIX and Linux for NFS (v3).

..Paul
(excuse my brevity & typos - sent from my phone)


On Oct 8, 2013, at 1:11 AM, Grant Street gra...@al.com.au wrote:

I'm confirming this now ... but it doesn't look good. I'm not talking about
whether it will or won't work; I'm more concerned about technical support. I
had an issue with restoring data to an NFS file system on a Mac and was
told that it wasn't supported. That's when I started getting concerned.

The following does not mention NFS except for AIX; e.g. I would expect it to
list NFS against Linux with ACL support as NO:

http://pic.dhe.ibm.com/infocenter/tsminfo/v6r3/index.jsp?topic=%2Fcom.ibm.itsm.client.doc%2Fc_bac_aclsupt.html

Grant



On 08/10/13 14:38, Alex Paschal wrote:
Hello, Grant.  I'm certain NFS filesystems are supported on clients
other than AIX.  In fact, the URL below links to the UNIX BAClient
manual, which contains the sentence:
Note: On Solaris and HP-UX, the nfstimeout option can fail if the NFS
mount is hard.

http://pic.dhe.ibm.com/infocenter/tsminfo/v6r4/topic/com.ibm.itsm.client.doc/c_bac_nfshsmounts.html

Perhaps your source confused NFS support with NFS Version 4 ACL support?

http://pic.dhe.ibm.com/infocenter/tsminfo/v6r4/topic/com.ibm.itsm.client.doc/c_bac_nasfs.html




On 10/7/2013 4:41 PM, Grant Street wrote:
Hello All

Just a heads up to something I found out last week. I have been informed
that backing up an NFS server from a non AIX client is NOT supported.

This could also include using the snapdiff functionality on Netapps.
This is being confirmed now.

This may be old news to you, in which case , sorry, but this is a big
concern for us. I have created an RFE
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfeCR_ID=40014


Thanks

Grant




NFS only supported on AIX.

2013-10-07 Thread Grant Street

Hello All

Just a heads up to something I found out last week. I have been informed
that backing up an NFS server from a non AIX client is NOT supported.

This could also include using the snapdiff functionality on Netapps.
This is being confirmed now.

This may be old news to you, in which case , sorry, but this is a big
concern for us. I have created an RFE
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfeCR_ID=40014

Thanks

Grant


Re: NFS only supported on AIX.

2013-10-07 Thread Grant Street

I'm confirming this now ... but it doesn't look good. I'm not talking about
whether it will or won't work; I'm more concerned about technical support. I
had an issue with restoring data to an NFS file system on a Mac and was
told that it wasn't supported. That's when I started getting concerned.

The following does not mention NFS except for AIX; e.g. I would expect it to
list NFS against Linux with ACL support as NO:

http://pic.dhe.ibm.com/infocenter/tsminfo/v6r3/index.jsp?topic=%2Fcom.ibm.itsm.client.doc%2Fc_bac_aclsupt.html

Grant


On 08/10/13 14:38, Alex Paschal wrote:

Hello, Grant.  I'm certain NFS filesystems are supported on clients
other than AIX.  In fact, the URL below links to the UNIX BAClient
manual, which contains the sentence:
Note: On Solaris and HP-UX, the nfstimeout option can fail if the NFS
mount is hard.

http://pic.dhe.ibm.com/infocenter/tsminfo/v6r4/topic/com.ibm.itsm.client.doc/c_bac_nfshsmounts.html

Perhaps your source confused NFS support with NFS Version 4 ACL support?

http://pic.dhe.ibm.com/infocenter/tsminfo/v6r4/topic/com.ibm.itsm.client.doc/c_bac_nasfs.html



On 10/7/2013 4:41 PM, Grant Street wrote:

Hello All

Just a heads up to something I found out last week. I have been informed
that backing up an NFS server from a non AIX client is NOT supported.

This could also include using the snapdiff functionality on Netapps.
This is being confirmed now.

This may be old news to you, in which case , sorry, but this is a big
concern for us. I have created an RFE
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfeCR_ID=40014


Thanks

Grant



Re: DISK vs FILE DevClass

2013-09-19 Thread Grant Street

It very much depends on the disk you are putting these on. It is a lot
easier to get bigger cheaper disks to stream data using FILE than to do
lots of small IOPS using DISK.

We were mainly concerned about getting a large amount data to the disk
storage pool and to tape as quickly as possible.

We tested the throughput of our disk using fio and found that it was
able to stream data much, much faster than it could do random I/O. That is
why we chose FILE. Remember that FILE storage pools use 256KB chunks and,
IIRC, DISK storage pools use 64KB chunks.
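
For reference, runs of this shape compare sequential against random throughput
(paths, sizes and job counts are only an example):

# sequential streaming at the FILE pool block size
fio --name=seqwrite --directory=/tsm/pool01 --rw=write --bs=256k \
    --size=10g --numjobs=4 --direct=1 --ioengine=libaio --group_reporting

# random I/O at the DISK pool block size, for comparison
fio --name=randwrite --directory=/tsm/pool01 --rw=randwrite --bs=64k \
    --size=10g --numjobs=4 --direct=1 --ioengine=libaio --group_reporting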

During the tests we also determined the maximum number of threads that
each volume could support and based the migration/backup stgpool threads
around that. NOTE: in the worst case TSM can use all the threads on the same
volume, unless you stripe across them.

We were not concerned with the maximum number of mount points because our
backup/archive sessions would never max out the disk I/O.

Pre-defining the volumes currently has a bug if you are using collocation:
http://www-01.ibm.com/support/docview.wss?uid=swg1IC95089

HTH

Grant
On 20/09/13 04:40, Paul Zarnowski wrote:

Just to add a few more thoughts to the discussion...

If you ever have to restore your TSM database, you will need to audit all of 
your DISK volumes.  If you set reusedelay appropriately, you can avoid having 
to audit FILE volumes.  Yes, this requires a bit more space, because you'll 
have volumes in PENDING status for a time.

One reason to limit mountlimit would be to try to avoid head thrashing.  
Generally, backup data sent to FILE pools should get good performance because 
you have a stream of data coming into a sequential volume.  DISK pools are 
random, so more head movement.  If you have a high mountlimit, then you could 
offset the benefits of writing sequentially to disk.

We find that running Backup Stgpool from FILE is faster than from DISK.  Our 
Copy stgpools are on remote tape.

..Paul

At 01:20 PM 9/19/2013, Prather, Wanda wrote:

For file devclass, I generally don't worry about maximum volumes because I 
don't set the volumes up as scratch, I predefine them.
Just something else that can cause issues for the customer, and there are reports of 
other people seeing the coming and going of scratch file volumes causing 
fragmentation in the filesystem.  Better to define the volumes, the same as for a random 
DISK pool.
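For example (pool and path names are placeholders), volumes can be pre-formatted with something like:

define volume FILEPOOL /tsmstg1/vol001.dsm formatsize=50000

formatsize is in megabytes, so each predefined volume here comes out at roughly 50 GB.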

For mountlimit, it's just the maximum number of client processes you expect to 
be writing to that drive at once. Or set to 999, no reason to restrict it.

For maxcapacity, it just has to be larger than the largest container volume you 
plan to create in that pool.
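As a rough sketch (device class, pool, sizes and directories are only placeholders for your environment):

define devclass FILECLASS devtype=file mountlimit=999 maxcapacity=50g directory=/tsmstg1,/tsmstg2
define stgpool FILEPOOL FILECLASS maxscratch=0

With maxscratch=0 the pool only uses the volumes you predefine, which matches the approach above.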

If you have no plans for dedup, you have no REQUIREment for the file devclass.

And what I HATE about the file devclass is that you don't get pool failover.  
If the pool fills up before you can migrate out, your backups fail rather than 
waiting for a tape from the NEXTSTGPOOL.

If the data is going to migrate off to another pool, so the disk pool gets 
emptied frequently anyway, what is the benefit of having a file pool?
And if it isn't emptied every day, you will have to run reclamation on it.

So when it's just a buffer diskpool, I prefer to use DISK rather than FILE.






-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Thursday, September 19, 2013 11:34 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] DISK vs FILE DevClass

We are in a transition of our SAN storage from one EMC box to another.
Since this requires my relocating 18TB of TSM server storage, I thought I would 
take this opportunity to revisit FILE devclass vs DISK, which we are using now.

I have been reading through the Linux Server Admin Guide on the pros and 
cons of both devclasses.  Still not sure if it would be better to go with
FILE.   Here is some info on what this server does.

For the server that would be using this storage, the sole backups are Lotus 
Notes/Domino servers, so the backup data profile is not your usual data mix 
(largely Notes TDP).

No dedupe and no plans to dedupe.
No active storage and no need for it.
4-5TB daily with spikes to 15TB on weekends - 95%+ is TDP

When creating/updating the FILE devclass, how do I calculate/guesstimate the 
values for MOUNTLIMIT and MAXIMUM CAPACITY as well as the MAXIMUM VOLUMES?

Unfortunately, the storage they assigned to me on the VNX5700 is broken up into 
8 pieces/LUNs, varying from 2.2TB to 2.4TB each.

Looking for some feedback on which way we should go and why one is preferable 
to the other.

--
*Zoltan Forray*
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will never 
use email to request that you reply with your password, social security number 
or confidential personal information. For more details visit 
http://infosecurity.vcu.edu/phishing.html



--
Paul Zarnowski  

Re: stgp volumes

2013-08-30 Thread Grant Street

This is my observation for FILE-based storage pools, so YMMV.
For FILE-based storage pools, TSM seems to create the volumes in a round 
robin fashion across the storage directories.
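For example (paths are placeholders), with a device class defined as

define devclass FILECLASS devtype=file directory=/san1/tsmstg,/san2/tsmstg

new FILE volumes tend to be created alternately under /san1/tsmstg and /san2/tsmstg.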


Beyond the initial client sessions, TSM is ignorant of the underlying 
storage. It does not care where the data comes from, or about other processes 
using the same location or files.


For single processes it doesn't matter, but all bets are off for multiple 
threads or processes. So it is best to plan for the worst-case scenario, e.g. 
all threads/processes/sessions hitting the same location.


So it's better to have a single location/directory/LUN that is twice as 
fast, rather than two locations.


Grant
On 30/08/13 15:47, Dierk Harbort wrote:

Hello all!


Let's say a TSM server has 1 storage pool, type DISK. This stgpool has 5
volumes: 3 of them reside on SAN1, and the other 2 on SAN2. The pool reaches up to 65%
utilisation every day. Migration to tape runs properly, so all
5 volumes end up empty.
Now I would like to understand how the TSM server decides the usage of the stgpool
volumes. Is it round robin, or whatever?

Does anybody know the answer, or a place to read about it?

TIA
Dierk





Re: TS3310 Tape Library Cleaning Slot

2013-08-19 Thread Grant Street

In a perfect world you would restart the whole server and library JIC
... but ...

The TS3310 is normally configured to use physical partitions, i.e. the OS
and software only see the slots in the partitions, not the cleaning slot,
so effectively the library loses a slot from TSM's point of view. This would
mean that the library geometry would change.

Provided no other changes are made (e.g. control path, cabling, OS
configuration), there shouldn't be any change in the tape drive ordering.

Deleting the partition and recreating it will make the library go
offline for a period, so I would take the TSM library path offline while
this is done.

Are you using the IBM tape and library drivers? What OS are you using?

After the change you should be able to use the OS tools to get
information about library slots and check that it is correct.

Once that is working, set the TSM library path back online.

Run an audit library.

Run SHOW SLOTS and SHOW LIBRARY in TSM to confirm that the changes
have been picked up.
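As a rough sketch (library and path names are placeholders), the TSM side of that would look something like:

update path LIBMGR TS3310LIB srctype=server desttype=library online=no
(reconfigure the partition / cleaning slots at the TS3310 web interface)
update path LIBMGR TS3310LIB srctype=server desttype=library online=yes
audit library TS3310LIB checklabel=barcode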

I haven't tried this so YMMV

Grant




On 20/08/13 05:25, Geoff Gill wrote:

Hello,

I posted a question which didn't really get any response so I'd like to reword 
this to see if anyone has a comment.

This is a TS3310 Tape library. Whoever defined everything initially removed the default 
cleaning slot and there is no way to insert a cleaning tape because there are no cleaning 
slots defined. There are empty slots available so here is the question. If I go to modify 
the logical library and bring the Configure Storage Slots number down by 1 
slot to free up an empty slot, does anyone know if there is a reboot of the library or 
library audit or any issues I need to worry about?

Thank You
Geoff Gill



Re: restore requires cartridge to mount R/W

2013-08-04 Thread Grant Street
We have seen an issue when using Mixed Generation media. eg LTO4 and 
LTO6 using LTO6 drives.


The library was complaining that TSM was generating a write to the LTO4
drives, even though the tapes were identified as LTO4 and set to ReadOnly.

It turns out that a verification command, which checks that the tape is loaded
correctly, was trying to open the tape for write. This was
fixed in a hot fix for the lin_tape driver, 1.76.10-1.


HTH

Grant


On 02/08/13 19:01, Tuncel Mutlu (BT İşletim ve Teknik Destek Bölümü) wrote:

Hi,

I have EMC Data Domain here and a remote one and I am doing replication between 
them. TSM is v5.5.6.100 on AIX, as is the Storage Agent on both ends. The client 
being backed up is also AIX; the file client is v6.2.4.4. These are file backups.

To prepare for a restore on the remote location:
  - previously copied the rootvg (AIX) a couple of days ago (DB and LOG disks 
had not changed); the remote virtual library and drives were available
  - copied the DB and LOG disks after the backup (SVC replication)
  - copied VOLHIST and DEVCONFIG to the appropriate directory
  - changed DSMSERV.OPT, added DISABLESCHEDS YES and NOMIGRRECL
  - opened TSM and, using a script, disabled sessions, deleted schedules, disk 
pool volumes, paths, drives and libraries, then redefined the virtual library and drives

When I tried to do a test restore of a small file (there are 5 of them and 42 of 
the big ones), it failed saying it cannot find the file; on TSM it 
says it cannot mount a read-only cartridge (the cartridges are READWRITE in TSM, but 
on the VTL they are RORD - Read Only / Replication Destination). I tried all kinds 
of things but couldn't get through, so I deleted the replication pair relationship 
and then it worked. After starting the restore with 12 concurrent sessions, I 
saw the following:

ANR1699I Resolved MUHORAP11_STG to 1 server(s) - issuing command Q MOUNT  
against server(s).
ANR1687I Output for command 'Q MOUNT ' issued against server MUHORAP11_STG 
follows:
ANR8330I LTO volume CB0060L3 is mounted R/O in drive AKDD5_DB_DRV03 
(/dev/rmt14), status: IN USE.
ANR8329I LTO volume CB0100L3 is mounted R/W in drive AKDD5_DB_DRV02 
(/dev/rmt13), status: IDLE.
ANR8330I LTO volume CB0067L3 is mounted R/O in drive AKDD5_DB_DRV04 
(/dev/rmt15), status: IN USE.
ANR8330I LTO volume CB0069L3 is mounted R/O in drive AKDD5_DB_DRV05 
(/dev/rmt16), status: IN USE.
ANR8330I LTO volume CB0068L3 is mounted R/O in drive AKDD5_DB_DRV06 
(/dev/rmt17), status: IN USE.
ANR8330I LTO volume CB0064L3 is mounted R/O in drive AKDD5_DB_DRV07 
(/dev/rmt18), status: IN USE.
ANR8330I LTO volume CB0073L3 is mounted R/O in drive AKDD5_DB_DRV08 
(/dev/rmt19), status: IN USE.
ANR8330I LTO volume CB0065L3 is mounted R/O in drive AKDD5_DB_DRV09 
(/dev/rmt20), status: IN USE.
ANR8330I LTO volume CB0004L3 is mounted R/O in drive AKDD5_DB_DRV10 
(/dev/rmt21), status: IN USE.
ANR8330I LTO volume CB0080L3 is mounted R/O in drive AKDD5_DB_DRV11 
(/dev/rmt22), status: IN USE.
ANR8330I LTO volume CB0075L3 is mounted R/O in drive AKDD5_DB_DRV12 
(/dev/rmt23), status: IN USE.
ANR8330I LTO volume CB0079L3 is mounted R/O in drive AKDD5_DB_DRV13 
(/dev/rmt0), status: IN USE.
ANR8330I LTO volume CB0074L3 is mounted R/O in drive AKDD5_DB_DRV14 
(/dev/rmt1), status: IN USE.
ANR8334I 13 matches found.
ANR1688I Output for command 'Q MOUNT ' issued against server MUHORAP11_STG 
completed.
ANR1694I Server MUHORAP11_STG processed command 'Q MOUNT ' and completed 
successfully.
ANR1697I Command 'Q MOUNT ' processed by 1 server(s):  1 successful, 0 with 
warnings, and 0 with errors.

The cartridge which is mounted R/W is the same one which was requested before. 
It contains all of the small files and pieces of 2 big ones.

My question is: why would a restore require a cartridge to be mounted R/W? The pools on 
the destination side of the VTL replication are R/O, and only deleting the pair 
relationship makes them R/W, but I don't want to do that again.

Regards,

Tuncel







Re: IBM TS3310 Cleaning slot not defined

2013-07-24 Thread Grant Street

I think you have to
* remove the data/normal partition
* Increase the cleaning slots
* recreate the data/normal partition using remaining tapes.

HTH

Grant


On 25/07/13 08:15, Geoff Gill wrote:

Hello,

From what I have read about the TS3310, by default there is one cleaning slot defined and you can 
add a few more through the interface. On the library in question, whoever set it up removed that 
cleaning slot and defined all slots as data slots. Going through the web interface to change that 
under Manage Library > Cleaning Slots is impossible. The drop-down says None 
(AutoClean will not be available) and offers nothing else to choose from.

Hopefully someone can answer this for this type library. Since there is at 
least one empty slot in the library is there a way to remove one slot from the 
one defined library so I can then turn around and define that as a cleaning 
slot without having to do anything on the TSM server related to the library or 
the inventory of it since TSM should not know about an empty slot?


Thank You
Geoff Gill



Archive WEB UI not showing available Management classes in windows.

2013-07-08 Thread Grant Street

Hello

Has anyone seen an issue where the Java-based web client on a
Windows client does not show anything in the Management Class drop-down
box on the Archive window?

Maybe I'm missing something?

Grant


Re: SV: TSM NDMP separate data and tape server

2013-06-10 Thread Grant Street
According to the NDMP spec you can have separate data and tape servers. 
What I was looking for was a way for the NetApp to be the NDMP tape 
server, with locally attached tapes, and another server to be the NDMP data 
server, with TSM keeping track of the metadata.


This is called a Three-Way Configuration in the NDMPv4 Protocol
2.2.4. Three-Way Configuration
One may back up an NDMP Server that supports NDMP but does not have a 
locally attached backup device by sending the data through a TCP/IP 
connection to another NDMP Server. In this configuration, the NDMP data 
service exists on one NDMP Server and the NDMP tape service exists on a 
separate server. Both the NDMP control connections (to server 1 and 
server 2 and the NDMP data connection (between server 1 and server 2 
exist across the network boundary.


According to the infocenter, Tivoli Storage Manager supports NDMP Version 4 
for all NDMP operations.


Thanks anyway

Grant

On 07/06/13 17:18, Christian Svensson wrote:

Hi Grant,
What do you mean?
You can backup a NetApp direct to Tape but still keep track in TSM.

/Christian

-Ursprungligt meddelande-
Från: Grant Street [mailto:gra...@al.com.au]
Skickat: den 5 juni 2013 08:59
Till: ADSM-L@VM.MARIST.EDU
Ämne: TSM NDMP separate data and tape server

Hi

I just want to confirm, my research

Can you have the NDMP data server on a Windows server backing up to the tape 
server on a NetApp using TSM?

I know a Netapp can be a DATA and TAPE server in one but according to the NDMP 
Definitions you should be able to separate them.

Thanks

Grant



TSM NDMP separate data and tape server

2013-06-05 Thread Grant Street

Hi

I just want to confirm, my research

Can you have the NDMP data server on a Windows server backing up to
the tape server on a NetApp using TSM?

I know a Netapp can be a DATA and TAPE server in one but according to
the NDMP Definitions you should be able to separate them.

Thanks

Grant


Re: SNAPDIFF BACKUP FAILURE

2013-05-30 Thread Grant Street

Hi

Snapdiff backups can only be done at the volume level. E.g. on NetApps we
have

/vol/volume_name/qtree_name

Normally we export at the qtree level, but in order to do snapdiff you
need to MOUNT it at the VOLUME level.
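For example, on a Linux client (filer and path names are placeholders), something like:

mount -t nfs filer01:/vol/volume_name /mnt/volume_name
dsmc incremental /mnt/volume_name -snapdiff

i.e. the mount point corresponds to the whole NetApp volume, not a qtree inside it.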

I also created a wiki which has some other snapdiff features.

https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli+Storage+Manager/page/Snapshot+Differencing+Caveats

Grant

On 30/05/13 12:41, joemon wrote:

Hi

I am facing an issue with Snapdiff backup of NAS filers using TSM. I have added 
the shares in NAS and associated TSM schedules. The issue is with names I 
think. My Share name and Volume name are different. But if I am not wrong, 
Snapdiff is purely based on CIFS share name and not volume name. Is there any 
relation to that? Can anyone help me in this regard.

I am getting the following error.

05/28/2013 00:00:08 ANS2836E Incremental backup operation using snapshot 
difference is only available for full volumes. \\nasXX\ is a partial volume 
or qtree.
05/28/2013 00:00:08 ANS5283E The operation was unsuccessful.

05/28/2013 00:00:08 ANS1512E Scheduled event Schedule name' failed.  Return 
code = 12.

where nasxx is NAS filer name and  is share name.

Expecting suggestions.

+--
|This was sent by joe86ant...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



Collocation anomaly report

2013-04-16 Thread Grant Street

Hello

We use collocation to segment data into collocation groups and nodes,
but recently found that collocation is on a best efforts basis and
will use any tape if there is not enough space.

I understand the theory behind this but it does not help with compliance
requirements. I know that we should make sure that there are always
enough free tapes, but without any way to know we have no proof that we
are in compliance.

I have created an RFE:
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=33537
Please vote if you agree :-)

While I wait more than two years for this to be implemented, I was
wondering if anyone had a way to report on collocation anomalies.
I created the following, but it is still not complete enough:

select volume_name, count(volume_name) as Nodes_per_volume from
(select unique volume_name, volumeusage.node_name from volumeusage,
nodes where nodes.node_name = volumeusage.node_name and
nodes.collocgroup_name is null) group by (volume_name) having
count(volume_name) > 1

and

select unique volume_name, count(volume_name) as Groups_per_volume
from (select unique volume_name, collocgroup_name from volumeusage,
nodes where nodes.node_name = volumeusage.node_name) group by
(volume_name) having count(volume_name) > 1
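If it helps, either query can be run in batch mode from the shell (admin ID and password are placeholders), e.g.:

dsmadmc -id=admin -password=xxxxx -displaymode=list "select volume_name, count(volume_name) as Nodes_per_volume from (select unique volume_name, volumeusage.node_name from volumeusage, nodes where nodes.node_name = volumeusage.node_name and nodes.collocgroup_name is null) group by (volume_name) having count(volume_name) > 1"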

Thanks in advance

Grant


Re: Collocation anomaly report

2013-04-16 Thread Grant Street

I understand that getting a good backup is the most important thing, but when
you offer a feature that is best-effort with no way to verify it, why
have the feature?
For example:
TSM will use best efforts to do a backup of a client.
As you know there are times when for one reason or another a backup
cannot be done.
Imagine now, TSM does not give you a way to report if the backup
succeeded, if it generated an error or when it started the backup?

That is what happens when it fails to collocate: there is no report, it
does not generate an error, warning or informational message, nor does it post
a message to any log saying that it has had to resort to non-collocation.

Does that make sense?

I will need to look at the implications of splitting it out based on
domains, thanks for the heads up.

Grant

On 17/04/13 10:15, Nick Laflamme wrote:

If you absolutely need for nodes to be isolated on their own media, why aren't 
they in their own individual domains which point to their own storage pools,  
all of which might share a library?

Frankly, I like that TSM will override collocation preferences when it's at 
MAXSCR for volumes in a pool.

Just a thought,
Nick

On Apr 16, 2013, at 6:40 PM, Grant Street gra...@al.com.au wrote:


Hello

We use collocation to segment data into collocation groups and nodes,
but recently found that collocation is on a best efforts basis and
will use any tape if there is not enough space.

I understand the theory behind this but it does not help with compliance
requirements. I know that we should make sure that there are always
enough free tapes, but without any way to know we have no proof that we
are in compliance.

snip


Grant


Tape drive encryption solutions - slightly OT

2013-03-13 Thread Grant Street

Hello

Just wanted to get some feedback from anyone with experience in doing
tape drive encryption.

We normally run TSM using IBM tape drives in a quantum library. But we
have a contractual requirement to deliver encrypted tapes in a standard
tar format.

Because this is a last-minute, one-off thing we are looking for an easy
and cheap way to set up tape drive encryption at the system level.

We are using IBM drivers, and I can see there is an EKM config file that
points to an EKM server. But it's not clear what protocols/standards are
used/required by the EKM server and which ones work, etc.

Any help would be appreciated

--
Grant Street
Senior Systems Engineer

T: +61 2 9383 4800 (main)
T: +61 2 938 34882 (direct)
F: +61 2 9383 4801 (fax)


*See our latest work at http://www.animallogic.com/work*


Re: LTFS user ?

2012-12-18 Thread Grant Street

I haven't used it, but have a look at Crossroads' StrongBox. It looks to
be very good at LTFS tiered storage and presents as a NAS. It has a
caching NAS head, multiple copies, etc.

http://www.crossroads.com/products/strongbox/

Grant

On 19/12/12 08:27, ritchi64 wrote:

Hello,

I want to know if some of you use LTFS for tiering storage (like tier 3).
Are you able to share the library with TSM backup?
Any gotchas?

We already have a solution for tier 1 and tier 2, but for tier 3 we are wondering 
whether we should look at 4TB SATA drives or go with tape and LTFS.

+--
|This was sent by alainrich...@hotmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



--


Re: -snapdiff incremental backups on Windows and include/exclude

2012-12-06 Thread Grant Street

Just be careful when you mix snapdiff and include/exclude.

To state the obvious: the include/exclude list only operates on the
changed files, so should you change the include/exclude list it is
suggested that you run a backup using createnewbase=yes.

For example: file ABC.txt is excluded on the first backup and the file does not change.
You then change the include/exclude so it would be included. In this
situation the file would never be backed up. To solve it you would need
to run a backup with createnewbase=yes or without the snapdiff option.
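In other words (volume path is a placeholder), after changing the include/exclude list you would run something like:

dsmc incremental /mnt/volume_name -snapdiff -createnewbase=yes

which takes a new base snapshot and runs a full incremental against it, so previously excluded but unchanged files get picked up.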

HTH

Grant

On 06/12/12 16:23, Prather, Wanda wrote:

Thanks very much!

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Clark, 
Margaret
Sent: Tuesday, December 04, 2012 3:15 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] -snapdiff incremental backups on Windows and 
include/exclude

We run snapdiff backups using 6.2.4 client, 6.2.3.1 server, Ontap 7.3.3, and 
exclude and exclude.dir work as expected.  - Margaret

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Prather, Wanda
Sent: Monday, December 03, 2012 10:38 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] -snapdiff incremental backups on Windows and include/exclude

Do:
include
exclude
exclude.dir

Work when you are doing an incremental with -snapdiff  ?

(Netapp Vfiler, TSM 6.4 client on Win2K8, Ontap 8.1, TSM 6.3.0 server on Win2K8)

Thanks!



Wanda Prather  |  Senior Technical Specialist  | wanda.prat...@icfi.com  |  
www.icfi.com ICF International  | 401 E. Pratt St, Suite 2214, Baltimore, MD 
21202 | 410.539.1135 (o)


Re: V6 DR Test

2012-11-21 Thread Grant Street

Really good starting point
http://pic.dhe.ibm.com/infocenter/tsminfo/v6r3/index.jsp?topic=%2Fcom.ibm.itsm.srv.doc%2Ft_scen_srv_recover.html

On 22/11/12 02:46, Geoff Gill wrote:

I was wondering if anyone had a doc saved with the steps for bringing up a 
V6 TSM server for a DR test that you could share with me. My only experience is with 
V5 and I'm looking for notes on the steps and what's necessary for V6.

Thanks for any assistance you might provide.



Thank You
Geoff Gill


Re: Error when installing TSM client on RHES 5.7

2012-09-12 Thread Grant Street

yum whatprovides */libstdc++-libc6.1-1.so.2
yum whatprovides */libXp.so.6

will tell you what is missing.
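Once you know the providing packages, installing them should resolve the dependencies. On RHEL 5 the 32-bit libXp package normally provides libXp.so.6 (check the whatprovides output on your box before taking my word for it), e.g.:

yum install libXp.i386

and likewise install whatever package whatprovides reports for libstdc++-libc6.1-1.so.2.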

HTH

Grant

On 13/09/12 11:43, Paul_Dudley wrote:

Hi all,



I am trying to install the TSM client on Red Hat Linux ES 5.7. 
TIVsm-API-5.2.0-0 has installed OK via the rpm command; however, I have run into 
the problem below:



rpm -i TIVsm-BA.i386.rpm

error: Failed dependencies:

 libstdc++-libc6.1-1.so.2 is needed by TIVsm-BA-5.2.0-0.i386

 libXp.so.6 is needed by TIVsm-BA-5.2.0-0.i386



Any suggestions? I assume there are some other linux packages I have to install?





Thanks & Regards

Paul



Paul Dudley

IT Operations

ANL Container Line

pdud...@anl.com.au









Re: Export node to new server

2012-08-23 Thread Grant Street

Do you have any information on doing server-to-server export? Is there
a TSM name for it?

Thanks
Grant

On 24/08/12 09:31, Prather, Wanda wrote:

Better if you do that in reverse order, unless the clients are trivial in size.
If you let the client scheduled backup run, it will do a full, which for many 
clients can be painful.

If you get the export done first, all the existing data will be represented in 
the new server DB, and the client's scheduled backup to the new server will be 
incremental.

If there is a time lag between the time you do the first export and the time 
you get the clients dsm.opt file pointing to the new server, you can do an 
incremental export to catch up - add the FROMDATE=today-blah fromtime=xx:yy:zz 
to the EXPORT command.

Also IMHO it's easier to do what you suggest using server-to-server export than 
using media.  Then it's just one step.
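As a rough sketch (server and node names are placeholders, and the target server has to be defined first with DEFINE SERVER), the server-to-server form looks something like:

export node NODENAME filedata=backup toserver=TARGETSRV mergefilespaces=yes

and a later catch-up run can add fromdate/fromtime to the same command.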

Wanda

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Vandeventer, Harold [BS]
Sent: Thursday, August 23, 2012 4:19 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Export node to new server

My goal is to relocate all backup data  for a node currently on a TSM 5.5.2 
server to a TSM 6.2.4 server.

There are no archive or space managed data in the system.

First time I've had to try this, so looking to the experts for comment.

Looks like a fairly simple process:
Define the node on the target server with appropriate schedules, policy, etc.
Have the node manager modify the DSM.OPT file to specify the new server.
Let scheduled backups run.

On source TSM: EXPORT NODE nodename FILEDATA=BACKUP DEVCLASS=classname

On target TSM: IMPORT NODE nodename FILEDATA=BACKUP DEVCLASS=classname 
DATES=ABSOLUTE MERGEFILESPACES=YES VOL=volnames

An important parameter is MERGEFILESPACES=YES, to bring the old data into the 
identical filespaces.



Harold Vandeventer
Systems Programmer
State of Kansas - Office of Information Technology Services 
harold.vandeven...@ks.gov
(785) 296-0631




IO profile of TSM storage

2012-06-29 Thread Grant Street

Hi

I'm in the process of revisiting the TSM storage IO requirements and
thought I'd just get a heads up for any previous experience, corrections
etc.

From what I can gather in the documentation

* File based storage pools are read/written in 256KB chunks
* DB is read and written in 64KB chunks.

But:
what is the IO profile of the active logs? Of the archive logs?
Is there a way to increase the IO depth/concurrency when TSM
reads/writes to the storage pool, etc.?

Do MOVESIZETHRESH and TXNBYTELIMIT offer much improvement?

Is there any way to determine if the DB/DB logs are slowing the
streaming from the Disk pools?

Do you know what the equivalent fio job would be to generate the same load
pattern as TSM? Using fio I was able to push a LUN to 300MB/s sequential
read, but with TSM I'm getting about 60-70MB/s for a migration from FILE
to an LTO5 drive. Disk is not contended and CPU is fine... any tips?
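For what it's worth, the closest fio approximation I can think of for a single FILE-to-tape migration stream (path and size are placeholders, and this is an assumption rather than anything documented) would be a single sequential reader in 256KB blocks:

fio --name=tsmmigration --filename=/tsmstg1/fio.test --size=8g --bs=256k --rw=read --direct=1 --ioengine=libaio --iodepth=1 --numjobs=1

If that single-threaded run also lands well below 300MB/s, the gap is more likely per-stream latency than raw disk bandwidth.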

Grant


Disabling the library sharing actlog messages

2012-06-20 Thread Grant Street

Hello

I was wondering if anyone had an idea on how to suppress the library
sharing messages in the actlog? Here are some stats:
I have dsmserv redirect stdout and stderr to a log file.
This log file is currently 158811 lines for less than 8 hours.
If I remove the ANR0409I and ANR0408I lines I get 1645 lines.
So these library sharing messages account for 99% of the actlog's content.

I continually get the following
ANR0408I Session 54932 started for server  (Linux/x86_64) (Tcp/Ip)
for library sharing.
ANR0409I Session 54932 ended for server  (Linux/x86_64).

This makes it nigh on impossible to use q actlog without wading through thousands
of library sharing messages, and the log file does not contain the timestamps.
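For what it's worth, since the server output is already redirected to a file, a crude OS-side workaround (log path is a placeholder) is just to filter those two message numbers out:

grep -vE 'ANR040[89]I' /path/to/dsmserv.log | less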

It doesn't look like I can use ! as a NOT operator, etc.

Any other helpful information?

Grant


Stopping NDMP backup storage pool

2012-06-11 Thread Grant Street

We have an NDMP backup storage pool process running and it is "Waiting for mount
of scratch volume (345071 seconds)".

We have some tapes in the library, but the checkin process is
waiting even though I used checkl=barcode.

How do I get it to die/recover?

This is the library controller instance so I don't want to bounce it if
I don't have to.

Grant


Re: Backup of CIFS

2012-05-10 Thread Grant Street

Hi

Have a look at the following for more information
http://www.ibm.com/developerworks/wikis/display/tivolistoragemanager/Snapshot+Differencing+Caveats

On Windows 2003+ you have to run net use within the same session, otherwise 
the service/process can't see the mount.
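One way to do that (share, account and password are placeholders) is to map the share from the scheduler's own session via a pre-schedule command in dsm.opt:

preschedulecmd "net use \\filer01\share1 MyP@ssw0rd /user:PROD\tsmbackup"

so the mapping is created by the same session that then runs the backup.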


If you want to back up the Windows permissions, check out
https://www-304.ibm.com/support/entdocview.wss?uid=swg1IC79757

There was also a conversation on this just recently on the mailing list.

Grant

On 11/05/12 05:04, George Huebschman wrote:

To back up CIFS shares on a NetApp filer from a Windows host, I believe
that I need to have Domain Admin permission.

I am being told that if I can browse and access the files on the share with
the backup logon, that is enough.



I have two client servers in different locations.  I am using one logon for
both of them, but one client is in a different domain.

I can (more or less) backup  the files in the Prod domain (it takes 8
days…), but I get the error that a required NT Permission is not held for
all of the directories on the client in the DR domain.



The CIFS shares are in the dsm.opt as domains.

I can map to them in the Start > Run > \\filername\cifs_Share_name
fashion from the logon profile that I use for all the other backups.  I can
browse files and access them.

But just because I can browse and access does not mean I can back them up,
correct?  The TSM Client is only able to backup local filesystems unless it
has authority in the domain, correct?  Or not?



George Huebschman



From: , Y

Sent: Thursday, May 10, 2012 1:59 PM

To: Huebschan, George

Subject: Re: TT#02409365



Correct that server is in SJ. I logged into that server with adsm and could
see the SJ filer/shares and could read/copy data from it. So if the app is
using that account to connect it doesn't make much sense that it isn't
working unless there are other requirements...I'll try and do some more
testing later today to see if I can find anything else but from my end so
far everything looks good.










Re: CIFS shares user permissions

2012-04-03 Thread Grant Street

Handy to know, thanks, I'll give it a try. I had to ASK IBM what
permissions were required exactly.

Just make sure that you do not have SkipNTPermissions set to yes in
your opt file.

Using this requires fewer permissions (and is sometimes touted as a
solution/workaround) but does not restore all the NT/Windows permissions.

Grant


On 04/04/12 07:21, Steven Harris wrote:

Hi Grant

It appears that the apar overstates the requirement.

My security people had a lot of issues with making the backup service
runner a domain admin.  Fortunately it is not necessary.  At the filer I
made the id a member of the filer's administrators and backup operators
groups, and all is now working.  Mind you, I am not using snapdiff.


Regards

Steve

Steven Harris
TSM Admin
Canberra Australia

On 3/04/2012 11:31 AM, Grant Street wrote:

The user has to be a domain admin of the domain that the filer is a
trusted member of. The user also needs to be a member of the Backup
Operators group.

https://www-304.ibm.com/support/entdocview.wss?uid=swg1IC79757

If you are thinking about using snapdiff backups have a read of the
following

http://www.ibm.com/developerworks/wikis/display/tivolistoragemanager/Snapshot+Differencing+Caveats


On 03/04/12 11:16, Steve Harris wrote:

Hi All

I'm attempting to backup an IBM N-Series (i.e. Netapp) filer : its the
first time I've attempted this.  Server is TSM 6.3.0.0 on RHEL 5.7
X86_64.  Client is 6.3.0.0 on Windows 2008R2.

I have a domain id that I have mapped to the Filer shares and the
scheduler service is running under.

Backup fails RC12

04/03/2012 01:30:08 ANS5250E An unexpected error was encountered.
  TSM function name : GetFileSecurityInfo
  TSM function  : CreateFile() returned '1314' for file
'\\xx-fil02\nhp_home_directories$\'

  TSM return code   : 268
  TSM file  : ntfileio.cpp (9471)


Any idea what rights the domain admin that this backup is running under
needs to have to back this data up?


TIA

Steve.

Steven Harris
TSM Admin
Canberra Australia






Re: CIFS shares user permissions

2012-04-02 Thread Grant Street

The user has to be a domain admin of the domain that the filer is a
trusted member of. The user also needs to be a member of the Backup
Operators group.

https://www-304.ibm.com/support/entdocview.wss?uid=swg1IC79757

If you are thinking about using snapdiff backups have a read of the
following

http://www.ibm.com/developerworks/wikis/display/tivolistoragemanager/Snapshot+Differencing+Caveats

On 03/04/12 11:16, Steve Harris wrote:

Hi All

I'm attempting to back up an IBM N-Series (i.e. NetApp) filer: it's the
first time I've attempted this.  Server is TSM 6.3.0.0 on RHEL 5.7
X86_64.  Client is 6.3.0.0 on Windows 2008R2.

I have a domain id that I have mapped to the Filer shares and the
scheduler service is running under.

Backup fails RC12

04/03/2012 01:30:08 ANS5250E An unexpected error was encountered.
 TSM function name : GetFileSecurityInfo
 TSM function  : CreateFile() returned '1314' for file
'\\xx-fil02\nhp_home_directories$\'

 TSM return code   : 268
 TSM file  : ntfileio.cpp (9471)


Any idea what rights the domain admin that this backup is running under
needs to have to back this data up?


TIA

Steve.

Steven Harris
TSM Admin
Canberra Australia


Re: Because tape is dead....

2012-03-29 Thread Grant Street

On 30/03/12 08:13, Allen S. Rout wrote:

My apologies:  I try to avoid writing to the readership of a list, but
this is just neat.

http://en.wikipedia.org/wiki/Linear_Tape_File_System

- Allen S. Rout

I like the look of this, based on LTFS:
http://www.crossroads.com/products/strongbox/

It is used in the upcoming Fujifilm cloud solution:
http://www.permivault.com/


Re: TSM 6.2 Administration Center

2012-03-15 Thread Grant Street

FYI,
Red Hat has moved its supported browser to Firefox 10.0.3 ESR as per
its latest security advisory, RHSA-2012:0387-1:
https://rhn.redhat.com/rhn/errata/details/Details.do?eid=14334



On 16/03/12 00:27, Vandeventer, Harold [BS] wrote:

Thanks for the link Neil.

I'm using Firefox 3.6.24.

I'm not seeing the issues you have re Nodes and Tabs on Client Nodes and Backup 
Sets.
Also don't have your  Error 500 Manage Servers.

The only problem I've seen on Admin 6.3 Manage Servers is moving to a second 
server from the same Manage Servers tab. Example:
- Open Manage Servers, see all servers in list.
- Pick one (Server Properties).
- Work on it for awhile, Activity Log, whatever.
- Click OK or Cancel to close that server.
- the Tab returns to the original list of servers.
- Pick another server.
- Searching within the activity log creates a PortletException error.
- Any other work (Sessions, database/log, admin schedules) seems to work fine.

Closing the Manage Servers tab and restarting it always lets me pick a server 
and search the Activity Log; it seems to be only when I'm searching the log on the 
second server chosen that the error occurs.


Harold Vandeventer
Systems Programmer
State of Kansas - Department of Administration - Office of Information 
Technology Services
harold.vandeven...@da.ks.gov
(785) 296-0631


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Neil 
Schofield
Sent: Thursday, March 15, 2012 6:16 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM 6.2 Administration Center

In response to Harold's email, the APAR that describes the extraneous
services with TSM AC 6.3.0 is here:
http://www-01.ibm.com/support/docview.wss?crawler=1&uid=swg1IC80444

I've confirmed that it is resolved in AC 6.3.1.

The problems I've experienced in Chrome and Firefox are:
- Nodes and tabs not displayed correctly on Client Nodes and Backup Sets
page
- Server Properties links on Manage Servers page give the following
errors:
Error 500: SRVE0295E: Error reported: 500

Regards
Neil Schofield
Technical Leader
Yorkshire Water Services Ltd.


  



Re: TSM 6.2 Administration Center supported browsers

2012-03-13 Thread Grant Street

There are RFEs for Firefox:
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=14315
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=17574
and IE:
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=12137

It's crazy in this day and age that the browser version is so important 
in a TSM web application.


Grant


On 14/03/12 09:06, Gergana V Markova wrote:

Michael,

IE7 is actually supported for Admin Center 6.2 -- as per section
Additional Software:

http://www-01.ibm.com/support/docview.wss?uid=swg21410467

MS Internet Explorer starting with 6.0 is supported with Java 1.5.0. MS
Internet Explorer 8.0 and later is not currently supported.



What are the issues you are seeing with IE7 for Admin Center 6.2?
Alternatively,  IE8 is supported in AC 6.3. That version also supports
FF3.5 and 3.6.

IE9 or Chrome are not supported yet.

Thanks.

Regards/Pozdravi, Gergana
| |  ~\\  !  //~
| |   ( ( 'o . o' ) )Imagination can change the equation . .
| |(  w  )
| |  Gergana V Markova   gmark...@us.ibm.com
| |  IBM Tivoli Storage Manager(TSM) - Server Development



From: Michael P Hizny mhi...@binghamton.edu
To: ADSM-L@vm.marist.edu
Date: 03/13/2012 01:23 PM
Subject: [ADSM-L] TSM 6.2 Administration Center



We recently upgraded TSM to version 6.2 and have found out that the
Administration Center web interface will not work with any newer browsers.
It is certified to work with IE 6.0 and Firefox 2.0.  Has anyone come up
with a workaround to get it to work with IE 7, 8, or 9? Or Chrome?






Re: Controling FILLING tapes at end of Migration

2012-03-12 Thread Grant Street

Is there a way, short of combining collocation groups, to deal with this
problem?

Some options:
1. Reduce the number of migration processes.
2. Increase the time between migrations, e.g. run weekly rather than daily.
3. Do reclamation after migration.

AFAIK the combination of migration processes and collocation groups is
what gives you this behavior. If these two are not negotiable, then I'd
recommend option 3 above.

Someone else might have a trickier way.

Grant


Re: SNAPDIFF with CIFS on Linux supported?

2012-02-28 Thread Grant Street

Couple of things:

- You need to do the mount at the NetApp volume level, so the share has
to be at the NetApp volume level. Mounting the volume at any other level
will generate this error.
- If you mount the same volume more than once at different
locations/levels it can get confused and give this error, so make sure
that there is only one mount under that volume.
- When you back up a NetApp volume that carries Windows permissions, backing
it up over NFS will not capture more than standard Unix permissions. In order
to capture full Windows permissions you need to back up the volume from a
Windows host or use NDMP.
- I haven't tried backing up via CIFS on Linux, but after you get it
working make sure you check that the permissions are backed up and restored
properly.
- Make sure you set up the NAS user and password correctly.
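For the last point, the filer credentials are usually stored with something like (filer name, user and password are placeholders):

dsmc set password -type=filer filer01.example.com snapdiffuser filerpassword

so the client can authenticate to the filer when it asks for the snapshot difference.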

see
http://www.ibm.com/developerworks/wikis/display/tivolistoragemanager/Snapshot+Differencing+Caveats

I created it to lessen the pain for others.

On 29/02/12 09:43, Paul Zarnowski wrote:

I would be happy to be proven wrong on this, but I believe that snapdiff for 
CIFS is only supported on Windows.  snapdiff for NFS is supported on AIX and 
Linux.

BTW, that particular error message seems to be issued for a variety of reasons. 
 We have found it hard to figure out what the problem is at times, and we have 
an open PMR for this message right now.

..Paul

At 03:05 PM 2/28/2012, Stackwick, Stephen wrote:

What's in the can...

Obviously, Linux systems can mount CIFS shares, but does anyone have the 
SNAPDIFF option working? I keep getting:

ANS2831E Incremental by snapshot difference cannot be performed on /mountpoint as 
it is not a NetApp NFS or CIFS volume.

It is too! The message kind of implies that CIFS is OK on Linux.

Steve

STEPHEN STACKWICK | Senior Consultant | 301.518.6352 (m) | 
sstackw...@icfi.commailto:sstackw...@icfi.com  | 
icfi.comhttp://www.icfi.com/
ICF INTERNATIONAL | 410 E. Pratt Street Suite 2214, Baltimore, MD 21202 | 
410.539.1135 (o)



--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


Re: TSM 5.5 : NAS share folder backup from TSM client

2012-02-16 Thread Grant Street

If your running snapdiff
https://www-304.ibm.com/support/entdocview.wss?uid=swg1IC79757



On 17/02/12 04:29, Robert A. Clark wrote:

If you haven't already done so, I advise you setup a script to mount the
share and run the backup.
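A minimal sketch of such a script on Windows (share, account, password and install path are placeholders) could be:

@echo off
rem map the share in the same session that will run the backup
net use \\filer01\share1 MyP@ssw0rd /user:PROD\tsmbackup
rem back up the share over the UNC path
"C:\Program Files\Tivoli\TSM\baclient\dsmc.exe" incremental \\filer01\share1\ -subdir=yes
rem drop the mapping again
net use \\filer01\share1 /delete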

If this script is 100% successful, then you can look at whether the
service is gaining access to the share with the permissions you expect.

I've seen situations where the static mount doesn't have the right
permissions when accessed by the service.

[RC]

ADSM-L@VM.MARIST.EDU wrote on 02/15/2012 10:27:58 PM:


Hi,

 I have a situation where my share folder backup through the BA client
fails with an "ANS5174E A required NT privilege is not held" error
message.  The share folder from the NAS server is mounted to a Windows 2003
Standard Edition (SP2) server.  The profile (TSM domain profile) which is used
to perform the TSM backup is also used to mount the share folder to the Windows
server, and it has read & write privileges on the share folder.  I am able
to create & delete a file in the share folder using the TSM profile.


 I am surprised that my backup fails with this error message when I
have the required privilege.  Even at the NAS end I have full permission
on this particular share folder.

 I raised a support call and they suggested I look at the
permissions on the Windows server.  I have checked them and they look fine.

 Has anyone encountered this issue before? Please share how this can
be fixed.  Or do I need to follow the NAS backup approach, and if so can I
perform the backup through the LAN instead of attaching a tape drive to the NAS
server?

below are my environment details

  Server --  TSM server (5.5.5.0) in AIX (6100-04-01-0944).
  Client --  TSM ba-client (5.5.2.12) in Windows 2003 Standard edition
(SP2)

   NAS server --  QNAP TS-809U-RP Turbo

Regards,

Rajesh Lakshminarayanan





Re: Need some support for snapdiff RFE's

2012-02-14 Thread Grant Street

The reason I raised the RFE was that, to me, anything less than
certainty is an incomplete backup.

If I need to do a regular full JIC, then that is a lack of certainty.

So I suggested the following:
1. For include/exclude changes, deleted backups or policy changes, force a
new base, e.g. delete the TSM-created snapshot.
2. For "file skipped" or "file excluded" for a file set, delete the newer
TSM snapshot rather than the older one, so that the problem can be
corrected and the files backed up on the next run while still using
snapdiff. It just means that the snapshot may be a little older.

This will mean that there will be certainty in your backup that is not
reliant on additional schedules.

It will have the downside of more non-scheduled full backups, as a full
backup will be triggered by anything listed in (1) on the next incremental
backup. Another downside is that the TSM snapshot may be larger until
anomalies are resolved.

The downsides in a stable system would be about the same as, if not
less than, the current situation.

Hope that explains my thinking

Grant




On 15/02/12 05:08, Pete Tanenhaus wrote:

Actually this really isn't correct, a snapshot differential backup will
detect deleted files and will expire them on the TSM server.

Periodic full progressive incremental backups are recommended because less
complete backups such as snapdiff,
incremental by date, and JBB can never be as comprehensive as a full
progressive given that a full progressive examines
every file on the local file system and every file in the TSM server
inventory.

Note that less complete implies that changes processed by a full
progressive might be missed by other less complete
backup methods, and this is the reasoning behind recommending periodic full
progressive incrementals.

JBB is somewhat better in that it makes every attempt to detect conditions
which indicate that the change journal is out of
sync with what has previously been backed up and to automatically force the
full progressive incremental backup instead
of requiring the user to manually schedule it.

For reasons that require a very detailed explanation (let me know if you
are interested), the current snapdiff implementation
doesn't have the robustness or resiliency that JBB does and therefore
really requires manually scheduling full progressive
incremental backups via the CreateNewBase option.

Hope this helps ...

Pete Tanenhaus
Tivoli Storage Manager Client Development
email: tanen...@us.ibm.com
tieline: 320.8778, external: 607.754.4213

Those who refuse to challenge authority are condemned to conform to it


From: Paul Zarnowski p...@cornell.edu
To: ADSM-L@vm.marist.edu
Date: 02/14/2012 07:12 AM
Subject: Re: Need some support for snapdiff RFE's
Sent by: ADSM: Dist Stor Manager ADSM-L@vm.marist.edu
  |
   

Need some support for snapdiff RFE's

2012-02-13 Thread Grant Street

Hello

Just thought I would plug two snapdiff RFE's(Request For Enhancement)
that I created on the IBM developer works site.

I am trying to garner some support for these in the hope that they will
be implemented in the future.

I have seen that some of you have had experience with Netapp snapdiff
backups and you may have similar beliefs.

"Define snapdiff and other snap options on a per-filesystem basis rather
than per backup command"
When running an incremental backup, be able to define the snapdiff or
other snap option on a per-filesystem basis rather than for the whole
incremental backup job.

http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=12858

"Snapdiff to update last backup fields in filespace data"
Could you please update the TSM snapdiff client to update the last backup
information of filespaces when a query filespace f=d is issued. This
would be the obvious behaviour and would be in line with the documented
meaning of the columns.

http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=13145

TIA

Grant


Re: Isilon backup

2012-02-12 Thread Grant Street

On 13/02/12 05:48, Prather, Wanda wrote:

I don't have any personal experience with Isilon, so these answers are based on 
other experience with NDMP and filer devices:


Can any of you provide real world backup times for full tree traversal backups 
to TSM using NDMP?  I understand my mileage may vary depending on the data and 
it's size as well as the hardware configuration but there must be a fairly 
standard throughput rate for file server data.


NDMP is a very old backup protocol without much smarts.  It does not traverse 
the file tree, it dumps the entire share as a single object.  The amount of 
time increases with the size of the share, and is affected by whether you are 
doing it via TCP/IP or direct to tape via fibre, the disk in the device, etc.   
The biggest problem is that you can only do fulls and differentials.  That 
means you are pushing backups of the same unchanged data over and over again, 
and if you want to have 6 months coverage of your backup data, you have to keep 
6 months of these enormous full dumps.


Lastly I'm not sure I understand why an incremental backup cannot be run on the 
Isilon.  Please explain.


NAS devices are closed operating systems, so you can't install a TSM client on 
them.  If you are accessing the data via CIFS shares, you can run an 
incremental backup by putting the TSM client on a Windows machine and backing 
up the data via the UNC name or mounted drive letter.  (Or do the equivalent 
using a UNIX client for NFS shares.)  The problem with that, it means that you 
are scanning the file tree via the CIFS share, which is much slower than 
scanning the file tree on a local hard drive.  Then you are pulling the 
identified changed data across the network to the Windows machine running the 
TSM client, then across the network again to the TSM server.  If your file tree 
is millions of files,  it's dirt slow.  But you get a true incremental TSM 
backup that way, and the data can be managed at the file level with  normal TSM 
management classes/retention rules.   Much better than keeping TB of NDMP 
dumps, but the question is how long it takes.   KEEP YOUR SHARES SMALL if
you want to do it that way.  I have customers where we've solved the problem by
running multiple proxy servers, each responsible for 1-2 shares, so that all 
the shares can get backed up daily.  But it's a lot more work for you, than 
using the -snapdiff API, which is designed by Netapp to address the problem 
(and works well).



To be clear, the NDMP protocol can do incrementals (depending on the
storage and its implementation of NDMP). It is TSM that does not
support incremental NDMP.

Grant


Re: Isilon backup

2012-02-09 Thread Grant Street

Isilon is great usable storage that you can scale in any direction. It
lives under one namespace and is super easy to admin.

For backup/restore...
Isilon only provides NDMP or direct backup via CIFS/NFS.
# NDMP in TSM is only full and differential; there is no incremental
option. You need to purchase a backup accelerator for every 2 or so tape
drives you want to physically connect to the Isilon cluster.
# Direct backup requires complete tree traversal but can be scaled. It uses
the same hardware that users benefit from.

How these affect you depends on your data and backup windows, etc.
The newer Isilon nodes should be faster at tree walks as their
metadata is stored on SSD (make sure you confirm this on the models you
get).

Even with the limitations of snapdiff that I have documented here
http://adsm.org/forum/showthread.php?25002-snapdiff-backup-caveats
I would still pick NetApp if you can. If your unstructured data is
large, snapdiff can save a lot of time: essentially it can instantly
determine which files have changed since the last backup and only sends
the changed files through to TSM, eliminating any tree traversal.

Grant

PS ESXi using NFS is GREAT!!!

On 10/02/12 04:26, EVILUTION wrote:

This may be a bit off topic but consider this a thread bump

We have about 25TB worth of unstructured data spread across seven windows 
servers using DFS.  We are using TSM with a monthly image backup as well as 
daily journal based backups to collect the data but I'm concerned about restore 
times.

I have convinced management that we need a filer, but they do not want to 
purchase a filer JUST for file server backup/recovery.   They now want to 
deploy test and dev VMs on the solution and possibly use the platform for 
virtual desktops (VDI) and maybe even a workstation backup solution.

I spoke with Gartner and without hesitation they recommended the ISILON over all 
other solutions.  For those of you that already have ISILON, do you feel it was 
the right choice?

We are an EMC storage shop so ISILON would be easy to sell but I feel better 
about purchasing a proven solutions (NETAPP) rather than something that EMC may 
chew up and significantly change.

My primary concern is still with the backup of the data contained on the device 
along with virus protection and access control.  Please provide your feedback.

+--
|This was sent by jeff.je...@sentry.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--


Re: Isilon backup

2012-02-02 Thread Grant Street

We tested the backup accelerators a few years ago (about 2) and they
didn't fit our requirements. This was before they offered the metadata
on SSD option.

At that time they only supported a few tape drives per accelerator,
and the bottleneck was getting the data out of the cluster.

Because we don't back up everything in the Isilon cluster and we had lots
of user-facing accelerator nodes, we got better performance by running
multiple parallel TSM backup streams from one or two clients.

Again, this is all based on the older Isilon nodes and your mileage may vary.

Grant

On 03/02/12 08:31, Shawn Drew wrote:

It is just NDMP, but you need to get the backup accelerator to do that.
(It adds FC ports to the isilon and enables NDMP)

Here is the isilon guide for this.
http://www.isilon.com/library/configuration-guide-ndmp-ibm-tivoli-storage-manager-isilon


I've just read about it, but haven't done it myself.  I also read about
the IBM SONAS and that looks much more appealing from a TSM perspective.
(Built-in TSM client, uses its super-grid thing for the incremental scan,
hsm-like functionality)


Regards,
Shawn

Shawn Drew





Internet
r.p...@plcs.nl

Sent by: ADSM-L@VM.MARIST.EDU
02/02/2012 04:18 PM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
Re: [ADSM-L] Isilon backup






for what it's worth, NDMP works with TSM, but that's it. NDMP is not the
most advanced backup solution and the TSM implementation is not the most
feature rich. I'd look into other solutions first before reverting to
NDMP.


On 2 feb. 2012, at 22:12, Huebner,Andy,FORT WORTH,IT wrote:


Anyone backup an Isilon array? Using 3592 tape drives?  The sales guys

say it is just NDMP.

I am looking for just basic information (good, bad, Oh Smurf!).  One may

be in our future.



Andy Huebner


This e-mail (including any attachments) is confidential and may be

legally privileged. If you are not an intended recipient or an authorized
representative of an intended recipient, you are prohibited from using,
copying or distributing the information in this e-mail or its attachments.
If you have received this e-mail in error, please notify the sender
immediately by return e-mail and delete all copies of this message and any
attachments.


Thank you.


--
Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622


