From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of
David Browne
My NetWare support guy has identified about 1 TB of data on one server
that has not been accessed for more than 1 year.
He would like to delete the files; however, he would like to be able to
retrieve the data for the
For a given archive package/description, the directories are not archived
a second time, so that is why you see a difference between inspected and
archived the second time around.
I'm not certain why it would be important for the inspected and
archived numbers to match up, but if you specify a
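The behavior described above can be illustrated with a sketch (the path and description below are hypothetical):

```shell
# First run: files and the directories leading to them are archived
# under this description.
dsmc archive -subdir=yes -description="monthly logs" /data/logs/

# Second run with the same description: matching files are archived again,
# but directory entries already present in the package are not, so the
# "inspected" count exceeds the "archived" count.
dsmc archive -subdir=yes -description="monthly logs" /data/logs/
```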
On trying to run an archive on a win2000 using the backup/archive client I
get the following message on clicking the archive button (before file
selection).
ANS5148W The server needs to do a one-time conversion of your archive data
before you can continue. This operation may take a long
Will,
I can't answer your question definitively for the same reason Richard cited,
but I do agree with him that it operates only on the node's data.
In addition, I have seen this same message recently when upgrading TSM clients from
5.1.1.0 to 5.1.5.15. The bottom line is that to complete
Subject: Re: Archive Question
On Wed, Jun 26, 2002 at 12:55:30PM -0400, Lawson, Jerry W (ETSD, IT) wrote:
Dsmc archive -archmc=60days -deletefiles
/opt/file/directory/seven/layers/deep/log.*
There are 2 files to be archived that match the log.* criteria each night.
The first strange thing that I see is that
More than likely, you have a situation where the directories are being bound
to the management class with the longest archive retention in that policy domain.
If you look at the regular backup data, the same thing should be occurring:
directory entries are showing, but the data may already be expired.
Also check the CLEAN ARCHDIR command; it will get rid of all those extra entries:
CLEAN ARCHDIR node_name {DELETEDUPLICATES | SHOWSTATS | RESETDESCR |
DELETEDIRS} [FORMAT=S|D] [WAIT=NO|YES]
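As a sketch of how this might be run from an administrative client (node name and credentials hypothetical), checking statistics before deleting anything:

```shell
# Report statistics first, without changing anything.
dsmadmc -id=admin -password=secret "clean archdir my_node showstats"
# Then remove the duplicate directory entries.
dsmadmc -id=admin -password=secret "clean archdir my_node deleteduplicates"
```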
Dwight
-Original Message-
From: Lawson, Jerry W (ETSD, IT) [mailto:[EMAIL PROTECTED]]
-Gianni-
OBJects='c:\file 2 d:\file 3 4 d:\A B C'
Note: Enclose the file string in double quotes if it contains blank
characters (spaces), and then surround the double quotes with single quotes.
If the file string contains multiple file names, each must be surrounded by
its own pair of double quotes.
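Applying that quoting rule, a server-side schedule definition might look like this (the domain, schedule, and file names are hypothetical):

```shell
define schedule my_domain arch_docs action=archive \
    objects='"c:\My Docs\plan 2001.doc" "d:\A B C\notes.txt"' \
    starttime=21:00
```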
Sent: Tuesday, July 17, 2001 10:09 AM
To: [EMAIL PROTECTED]
Subject: Re: Archive Question
snip
07/17/01 06:49:45 ANE4961I (Session: 832, Node: F50_CLIENT) Total
number of
bytes transferred: 70.15 GB
snip
07/17/01 06:49:45 ANE4964I (Session: 832, Node: F50_CLIENT
Have you looked in your accounting log to see what it
thinks, or checked your Summary table?
Thanks,
Alex
-Original Message-
From: Bill Wheeler [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 16, 2001 10:13 AM
To: [EMAIL PROTECTED]
Subject: Re: Archive Question
The information that we are archiving
Subject: Re: Archive Question
Bill,
Could you give us the snippet of your activity log that you're looking at,
and point out the number that fluctuates?
If, as I suspect, you're looking at the bytes-transferred number, that
number can definitely fluctuate based on retries and network-related
retransmissions.
The only reason I can think of that the amount of data archived off would
be different every day is that people are deleting/renaming/creating
files on that filesystem. There's not much that can go 'wrong' with the
archive function - it just takes what it finds and holds it for you.
What is it you are archiving? Are you sure that the data would
not be less on some days? If there are not a lot of files, then
look at dsmsched.log on the client and see what files and sizes
were archived. This may show why it is less some days, or point
to files/directories where it is less.
Bill,
Are you running any type of compression from the client node? We have
seen this same type of fluctuation on some Solaris boxes. We have run the
same archive at different parts of the day just to get a comparison, and
the amount of data backed up is different every time. My Unix Admin's
Sent: Monday, July 16, 2001 11:15 AM
To: [EMAIL PROTECTED]
Subject: Re: Archive Question
From: [EMAIL PROTECTED]
Reply To: ADSM: Dist Stor Manager
Sent: Monday, July 16, 2001 1:12 PM
To: [EMAIL PROTECTED]
Subject: Re: Archive Question
The information that we are archiving consists of two repositories, DB2
backups and backup of our Pro/I information.
The information
Hi Terry,
There are (up to) two copy groups for each management class: one for backups
and one for archives.
Therefore, you just have to specify the appropriate storage pool for each of
them.
The command DEFINE COPYGROUP has a parameter TYPE=BACKUP (default) or
TYPE=ARCHIVE.
You can look at HELP
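For example, an archive copy group pointing at its own destination pool might be defined like this (all names and the retention value are hypothetical):

```shell
define copygroup my_domain my_set my_class type=archive destination=archpool retver=365
```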
Define a management class (not the default) and set its archive copy
group to point to a new storage pool (I wouldn't even put in a
backup copy group). You could make this new storage pool disk, with a
next pool to tape, OR straight to tape... it just depends on how many concurrent
client
Hi:
-Define your new storage pool
-Define a copy group type=archive dest=your new storage pool
You get to define a separate default copygroup for both backups and archives.
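Putting those steps together, a sketch of the full sequence from an administrative client (all names, paths, and values hypothetical):

```shell
define stgpool archpool disk                    # new disk storage pool
define volume archpool /tsm/archpool/vol01.dsm formatsize=1024
define mgmtclass my_domain my_set archclass     # new, non-default management class
define copygroup my_domain my_set archclass type=archive destination=archpool retver=365
validate policyset my_domain my_set
activate policyset my_domain my_set
```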
[EMAIL PROTECTED] 10/25/00 10:22AM
Hello all - I have a question on how to do the following:
I have many clients that