Hi,
What is the technical limitation of DISK that makes something like reclamation
not possible? We have filesystem defragmentation (in the OS) and FILE
reclamation, but nothing for DISK pools. Why?
Hi,
DISK is much better for you because it allocates all of its blocks at the same
time.
The only way to avoid fragmentation with FILE is to pre-define all volumes.
Another thing I don't like about FILE is that you need to create at least as
many volumes as you have clients, and then set the number of
Hi,
Try running this on the source server:
query rpfile devclass=* f=d
and check the "Marked for Deletion" field.
If it is Yes, you must run this on the source server:
REConcile Volumes * device_class_name Fix=yes
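For anyone following along, the whole fix on the source server looks something like this (REMOTE_DC is only a placeholder for your own server-to-server device class name):

```
query rpfile devclass=REMOTE_DC f=d
reconcile volumes REMOTE_DC fix=yes
```

Run the query again afterwards to confirm that the "Marked for Deletion" field has cleared.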
Efim
Hi TSM-ers!
At this moment we are using a diskpool with a VTS-like (DL4106 by EMC)
storage pool as nextpool.
I too am looking at a FILE pool to replace this in the future, just to
prevent a vendor lock-in for our TSM environment and of course the
possibility to use de-dup.
The only problem I see
Hi Eric,
If I were you, I would start searching for a much better filesystem to run
your FILE device class on, such as EXT4, TUX or something similar.
Also, split your FILE device class across multiple LUNs. If a filesystem crash
happens, you will get a minimal disk-check time.
Best Regards
Christian Svensson
Cell:
I think the amount of time a filesystem check takes depends on the number
of files (i-nodes), not directly on the size of the filesystem.
Hi Rick,
What do you want to know about VTL?
If you are looking at Quantum or EMC (which OEMs parts of the Quantum VTL and
FalconStor), it is basically a Linux OS running Quantum's own filesystem,
called NextFS or something like that, if I remember correctly.
NextFS is a great file system if you have
Hi,
node1 has the /fs1 and /fs2 filesystems. An image backup of /fs1 exists on the
TSM server, and an image backup of /fs2 is in progress. From another node,
node2, the following restore command is effectively unsuccessful:
# dsmc restore image /fs1 -virtualnodename=node1
It does not fail; it actually transfers
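As an aside, dsmc restore image also accepts an optional destination filespec, so a variant worth testing from node2 might be (the destination path here is only an example):

```
dsmc restore image /fs1 /fs1_restored -virtualnodename=node1
```

That at least separates the restored data from anything mounted locally on node2 while you investigate.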
Hi!
Before they were taken over by EMC, some guys from DataDomain were visiting us
a few months ago. They presented their DD boxes, which offer hardware-based
dedup, compression and defragmentation.
They told us about customers who allocate a DISK (not FILE!) storage pool in a
DD box and just
ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 09/22/2009
10:16:54 AM:
I ran into this just a few weeks ago.
I was backing up the RPFiles to a library manager which previously didn't have
any storage pools, so I wasn't running expiration on the destination side.
If the files are marked for deletion, just make sure your expirations are
running on the destination
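In case it helps anyone, a minimal way to make sure expiration actually runs on the destination server is to run it once by hand and then put it on an administrative schedule, something like this (the schedule name and start time are only examples):

```
expire inventory wait=yes
define schedule DAILY_EXPIRE type=administrative cmd="expire inventory" active=yes starttime=06:00 period=1 perunits=days
```

The first command runs expiration in the foreground so you can watch it; the DEFINE SCHEDULE keeps it running every day after that.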
The only reason I would sell a customer a VTL is if they want to run a
LAN-free backup to disk.
It seems that the value-add of a VTL over FILE devices on
plain DISK is LAN-free, compression, and de-dup. Given the much greater
cost of a VTL over plain disk, we keep looking for how to achieve
I would like to keep 2 drives unused at all times for restores and for
labeling scratch tapes.
TSM 5.5/AIX/Atape
We have 18 drives, 1 library manager, and 5 library clients.
The classic method is to set the mount limit in the device class.
However, we also have NDMP clients which are configured so
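To make the classic method concrete: with 18 drives, reserving 2 means capping the mount limits of the device classes that share the library so they sum to at most 16, e.g. (the device class name is a placeholder):

```
update devclass LTO_CLASS mountlimit=16
```

Keep in mind that MOUNTLIMIT applies per device class on each server instance, so with 1 library manager and 5 library clients it is the sum across all instances that has to stay at or below 16.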
I don't think it's an either/or decision. I believe that tape will always have
a place but that using some file pool will offer very nice RTO/RPO combos for
some data structures. The allure of very inexpensive tape storage should
always be there (and perhaps increase again with LTO5) while
On Tue, Sep 22, 2009 at 7:03 PM, Kelly Lipp l...@storserver.com wrote:
Anyone with some input would be greatly appreciated. Here is my set up and my
issue at hand. Currently we are using server-server communication with
vaulting. Main location is a blade hs21 with fiber attached SAN storage running
win server 2003 r2 with TSM extended edition 5.5.3. Second
HA, perfect! I never knew about the AUTOLABEL option. And wouldn't you
know it, we have it enabled already.
I also just found out about SET DRMCHECKLABEL. I think that will take
care of everything.
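For the archives, the setting in question is just one server command (YES is the default, which makes DRM read and verify the volume label during MOVE DRMEDIA processing):

```
set drmchecklabel no
```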
Thanks!
Shawn
Shawn Drew
Internet
I _almost_ agree with Kelly. A restore will preempt any process
running on the same server, but with 6 TSM instances, chances are that
your restore is running on a TSM instance that has nothing to preempt,
while other instances are busy with e.g. reclamation or other less
essential tasks.
On 22
Has anyone come across this error?
Client OS = Windows 2003 x64
TSM version 5.5.2
TDP version 5.5.2
TSM server 5.5.3
I tried re-installing both TSM and TDP, but it didn't help. I ran chkdsk too.
09/09/2009 12:09:38 ACO5436E A failure occurred on stripe number (0), rc = 418
09/09/2009
Hehe, just as a thought experiment...
Presuming our instances are of equal size, they would all be running jobs
at the same times, so they would all equally have a couple of drives
available to be preempted.
In reality, our instances are not equal in size, and it is the large ones
that would
I'm pretty sure that Autolabel will work on a shared library.
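For reference, AUTOLABEL is set on the library definition on the library manager, something like this (the library name and type are placeholders for your own setup):

```
define library SHAREDLIB libtype=scsi shared=yes autolabel=yes
```

On an existing library, UPDATE LIBRARY takes the same AUTOLABEL=YES parameter.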
Bill Boyer
TEAMWORK...means never having to take all the blame yourself. - ??
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Shawn Drew
Sent: Tuesday, September 22, 2009 1:59 PM
Question: Does TSM support the multi-reader function of FILE devices
with a VTL? In other words, can you get the same tape volume mounted
multiple times - once for WRITING and multiple times for READONLY? Or just
multiple READONLY mounts? This would be great at our DR site for
DR restores.
No. A