We have a pair of TSM servers doing backup and replication. Each has a
database over 1 TB on SSD and 512 GB of memory.
Our organization likes to do OS patch maintenance every 90 days, and doing this
requires a stop and restart of DB2. When would it be best to do
maintenance to shorten the rollback time?
For a pure TSM/SP client solution, beyond what can be done with an inclexcl
file or server-side client option sets as you and Skylar have already noted,
I think you're out of luck.
You probably have several options from the Puppet side, though...
You could use the file_line resource in Puppet's
I don't know that this approach would work - Puppet would see that the file
differs from the deployed file, and would just overwrite it the next time
the agent runs. Puppet would need to manage dsm.sys completely[1] with
Rick's desired changes, or those options would have to be taken out of
Rick
The m4 macro processor is a standard Unix offering and can do anything from
simple includes and variable substitutions to Lisp-like processing that will
boggle your mind. An m4 macro with some include files and a makefile with a
cron job to build your dsm.sys might do the job.
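As a minimal sketch of that idea -- the file names, node name, and server
stanza below are hypothetical, and this is far simpler than what m4 can do:

```shell
# Write a tiny m4 template for dsm.sys (names are examples)
cat > dsm.sys.m4 <<'EOF'
define(`NODE', `webserver01')dnl
SErvername  tsmprod
   NODEname          NODE
   TCPServeraddress  tsm.example.com
EOF

# Expand it -- this is the step a makefile rule or cron job would run
m4 dsm.sys.m4 > dsm.sys
grep NODEname dsm.sys
```

A makefile rule that rebuilds dsm.sys from dsm.sys.m4 plus per-host include
files, run from cron, would keep the deployed file current.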
Cheers
Hi Rick,
I'm not aware of a mechanism that allows one to do that with dsmc/dsm.sys,
but Puppet does have the ability to include arbitrary lines in a file,
either via a template or directly in a rule definition.
Another option would be to use server-side client option sets:
Hello,
Our Unix team is implementing a management application named Puppet.
They are running into a problem using Puppet to set up/maintain the
TSM client dsm.sys files. They create/maintain the dsm.sys as
per a template of some kind. If you change a dsm.sys with a unique
option, it gets
Thanks for the confirmations. It took multiple attempts over 3 days to
finally delete all of the systemstate objects for these 4 nodes, totaling
over 1.2B objects deleted. The deletes kept constantly failing with errors
like this:
2/26/2019 2:24:27 PM ANR0106E imfsdel.c(2723): Unexpected error
Zoltan
I had something similar. Prod node of application had a reasonable number of
systemstate objects. Two nonprod nodes of same application had huge numbers of
systemstate. This first came to my attention when daily expiration was taking
3 or more days instead of the usual 30 minutes.
As
I set up a cron job that does filespace and node deletions in a batch on
weekends. Then if it takes a long time, I don't care. I set this up back on
server version 5, when deletions took a REALLY long time, and I kept it in V6
to deal with exactly this issue with System State. We've been using
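The arrangement described above might look something like this -- the macro
contents, schedule, and credentials are placeholders, not the poster's actual
setup. A dsmadmc macro holds the queued deletions:

```
delete filespace NODE_A * type=backup
remove replnode NODE_B
decommission node NODE_B
```

and a crontab entry runs it early Saturday morning, outside the backup window:

```
0 2 * * 6 dsmadmc -id=admin -password=xxxxxxx -noconfirm macro /opt/tsm/weekend_deletes.mac
```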
Hi Andy,
Here are some end-of-session statistics for the node I just deleted 300M
systemstate objects for. Note, 02/20/2019 was when we pushed down via
CLOPT "DOMAIN ALL-LOCAL -SYSTEMSTATE" and the numbers dropped from 400K+ to
120+
02/12/2019 06:08:46 ANE4954I (Session: 77016, Node:
Hi Zoltan,
There is nothing in the client code you are using that would cause
excessive backups. If you happen to have backup logs going far back enough,
those might show an excessive number of system state objects. Normally I
would expect the number of backup objects N to be 9 < N < 15.
I did some research and the application on these servers is called
SolarWinds
On Tue, Feb 26, 2019 at 12:19 PM Andrew Raibeck wrote:
> Hi Zoltan,
>
> What policy was system state bound to for the nodes that exceed 200 million
> objects? Is it possible that the number of versions retained was
Hi Andy,
We do not have any policy/managementclass with versions higher than 5
across our complex, and retain-extra copies is 30 days. I do not know what
these servers/applications do so I will ask about the Crypto Keys you
referred to in the link.
For the first 3 with this issue, the client is
Hi Zoltan,
What policy was system state bound to for the nodes that exceed 200 million
objects? Is it possible that the number of versions retained was very
large?
Another thought is whether the OS of each of these nodes might be affected
by this issue:
Hi Andy,
Thank you for clarifying things - a bit. However, why would certain
nodes have enormously large numbers vs the average when all things are
equal as far as policies are concerned?
I do see average systemstate object delete counts in the *1-2M range* but
these 4 nodes are exceeding
Hi Zoltan,
The large number of objects is normal for system state file spaces. System
state backup uses grouping, with each backed up object being a member of
the group. If the same object is included in multiple groups, then it will
be counted more than once. Each system state backup creates a
Just found another node with a similar issue on a different ISP server with
different software levels (client=7.1.4.4 and OS=Windows 2012R2). The node
name is the same so I think the application is, as well.
2019-02-26 08:57:56 Deleting file space ORIONADDWEB\SystemState\NULL\System
On 26.2.2019. 15:01, Zoltan Forray wrote:
> Since all of these systemstate deletes crashed/failed, I restarted them and
> 2-of the 3 are already up to 5M objects after running for 30-minutes. Will
> this ever end successfully?
All of mine did finish successfully...
But, none of them had more
Since all of these systemstate deletes crashed/failed, I restarted them and
2 of the 3 are already up to 5M objects after running for 30 minutes. Will
this ever end successfully?
On Mon, Feb 25, 2019 at 4:25 PM Sasa Drnjevic wrote:
> FYI,
> same here...but my range/ratio was:
>
> ~2 mil occ to
Oops - meant to say "the deletions all failed" - not expirations. Now I
get to try them again.
On Tue, Feb 26, 2019 at 8:17 AM Zoltan Forray wrote:
> Thanks for the confirmation that I am not the only one seeing it and
> wondering what is going on. FWIW, the expirations all
Thanks for the confirmation that I am not the only one seeing it and
wondering what is going on. FWIW, the expirations all failed/crashed with
strange "unexpected error 4522 fetching row in table Backup.Objects (or
Filespaces)". The last "q proc" I recorded:
2,325 DELETE FILESPACE
FYI,
same here...but my range/ratio was:
~2 mil occ to 25 mil deleted objects...
Never solved the mystery... gave up :->
--
Sasa Drnjevic
www.srce.unizg.hr/en/
On 2019-02-25 20:05, Zoltan Forray wrote:
> Here is a new one...
>
> We turned off backing up SystemState last week. Now I am
You're seeing the same issue I've seen.
Q OCCUP values for SystemState seem to have no relationship to the number that
are eventually deleted.
The value on Q OCCUP is always much lower than what is reported in Q PRO.
And, deleting system state takes a LONG time.
-Original Message-
Here is a new one...
We turned off backing up SystemState last week. Now I am going through and
deleting the SystemState filespaces.
Since I wanted to see how many objects would be deleted, I did a "Q
OCCUPANCY" and preserved the file count numbers for all Windows nodes on
this server.
For
I have this node which is stuck with bad replication information that is
keeping me from deleting its backups/filespaces. Replication has been
turned off on this server. Any attempt to delete the filespaces produces
this error:
2/25/2019 10:35:12 AM ANR2017I Administrator ZFORRAY issued
Hi Hans,
Yes. It's like before. If you do a "Typical" install, it only installs
the base client. It won't be listening on any ports unless you install
and configure services that would be required to listen, like a prompted
scheduler.
The web user interface for restore will only be installed
Is it possible to install the client without a running web-server and
without the client listening to any port? (CAD controlled scheduler,
polling).
Hans Chr.
On Mon, Feb 25, 2019 at 6:07 AM Mark Yakushev wrote:
> Hi Tom,
>
> The increase in size is attributed mostly to this item:
>
>
Hi Tom,
The increase in size is attributed mostly to this item:
"Simplified and secure web-based portal for remote file-level recoveries"
The client includes a web server now.
- Mark
"ADSM: Dist Stor Manager" wrote on 02/22/2019
01:39:57 PM:
> From: Tom Alverson
> To: ADSM-L@VM.MARIST.EDU
Hi all
I'm trying to recommend an automated restore testing tool like TSMWorks' ART to
management.
TSMWorks web site appears to have been hijacked.
Lindsay Morris' Linkedin page seems to be dead. Twitter feed too.
Nothing from TSMWorks by Google search since about 2009
Does anyone know what
Here is the announcement page with what's new:
https://www-01.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/9/897/ENUS219-029/index.html_locale=en
On Fri, Feb 22, 2019 at 4:39 PM Tom Alverson wrote:
> The new version that supports Server 2019 is out and it is about twice as
>
The new version that supports Server 2019 is out and it is about twice as
large as previous versions. No idea why yet. Here are the fixes
https://www-01.ibm.com/support/docview.wss?uid=ibm10872614
Here are the download links:
https://www-01.ibm.com/support/docview.wss?uid=ibm10872618
Hi Eric,
They will still be excluded from dedup:
"Data deduplication is disabled if the enablelanfree option is set.
Backup-archive client encrypted files are excluded from client-side
data deduplication. Files from encrypted file systems are also
excluded."
Source:
Hi Andy,
Thanks again for your help!!!
Kind regards,
Eric van Loon
Air France/KLM Storage & Backup
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Andrew
Raibeck
Sent: vrijdag 22 februari 2019 14:49
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Encrypt
Hi Marc,
I will turn on client dedup and compression for the few nodes that have to use
encryption.
Thanks for your reply!
Kind regards,
Eric van Loon
Air France/KLM Storage & Backup
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Marc
Eric,
Just be aware that if you are backing up to a container pool, encrypted
files will not dedup. If that's the case, you may want to encrypt traffic
instead, and use at-rest encryption on the storage pool.
-
Thanks,
Marc...
Marc
Hi Eric,
To encrypt all files:
include.encrypt *
You should also review the "encryptiontype" and "encryptkey" settings.
"encryptiontype aes256" enables the client's strongest encryption setting.
"encryptkey generate" uses a dynamically created encryption key that is
stored (in encrypted
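Pulled together, a dsm.opt fragment might look like the following; the
include.encrypt line is as above, and the other two are the options just
mentioned -- verify them against the client manual for your level:

```
* Encrypt all data backed up by this node
include.encrypt *
encryptiontype aes256
encryptkey generate
```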
Hi guys,
What would be the best include.encrypt statement(s) to be added to the dsm.opt
when I want everything to be encrypted on a Windows client? So all data,
including system state and no matter how many drives it has.
Thanks for your help in advance!
Kind regards,
Eric van Loon
Air
All servers are RHEL 7 running ISP 7.1.7.400.
We have a server we have been trying to completely remove from replication
since it is being retired (DISABLE REPLICATION and SET REPLSERVER)
Somewhere along the way, a node got stuck and I cannot delete its
filespaces since it has issues with the node
Steve:
This is how I handle scheduling of vmware backups.
Hope it gives you a clue.
I agree that IBM needs to clean up Spectrum Protect for VE and unify the syntax.
Schedule definition follows: sorry for the mangled wrapping; darned Outlook
upd sch VMWARE-PROD -
nightLY-BB-01 -
Ok, so I tried this:
1. added to BCLAB.opt -> Domain.VMFull
all-vm;-vm=guest*,local*,nb_diskexpandtest,rlt00015,tlpvs001,tlsf001,tlsql003,tlsql012,tlsql200,tlsql201,tlxddc001
2. tried this command -> dsmc.exe backup vm -optfile=BCLAB.opt -preview
-Mode=IfIncremental
Got this result:
Command
Got it.
Amazing what you forget if you don't do it often.
Thanks all.
-Original Message-
From: ADSM: Dist Stor Manager On Behalf Of Marc Lanteigne
Sent: Tuesday, February 19, 2019 10:54 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] restoring from old backups
Also, do you see the
I went through this. You cannot exclude VMs in an include exclude file.
-Original Message-
From: ADSM: Dist Stor Manager On Behalf Of Schaub, Steve
Sent: Tuesday, February 19, 2019 3:42 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] how to exclude vm's using sp for ve
8.1.2.0 client
Steve
I think you have to specify what you want to back up in the DOMAIN.VMFULL with
one of the available options before excluding anything.
Then leave off the * in your backup vm command.
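A sketch of that combination, with hypothetical VM names -- the exclusions
live in the option, and the command no longer passes the wildcard:

```
* dsm.opt for the data mover node
DOMAIN.VMFULL "all-vm;-vm=testvm01,testvm02"
```

followed by a command such as dsmc backup vm -VmBackupType=FullVm
-Mode=IfIncremental (without the *).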
Regards
Steve
Steven Harris
TSM Admin/Consultant
Canberra Australia
-Original Message-
From:
8.1.2.0 client
Have a batch script that backs up all VMs in our lab; now I have a list of
VMs I need to exclude.
Here is the backup command:
dsmc.exe backup vm * -VmBackupType=FullVm -Mode=IfIncremental -VmMaxParallel=10
-VmMaxBackupSession=20 -VmLimitPerHost=10 -VmLimitPerDatastore=10
Also, do you see the E: filespace for that node:
Dsmc query filespace -fromnode=fpmstore
And what's the filespace type?
-
Thanks,
Marc...
Marc Lanteigne
Spectrum Protect Specialist AVP / SRT
416.478.0233 | marclantei...@ca.ibm.com
Office
Hi,
did you try it with virtualnodename option?
This was how I did it in CLI successfully.
dsmc.exe restore -optfile=dsm-bkp.opt -virtualnodename=SOMEOTHER.NODE
-password='somepassword' -subdir=yes 'c:\*'
'E:\restore\20151209\SomeOtherNode_C\'
Maybe the Win versions were a bit different than
You also need -fromnode or -virtualnode. With the former, you need to
have access granted, with the latter, you need to know the node password.
I prefer the 2nd method myself.
-
Thanks,
Marc...
Marc Lanteigne
Spectrum Protect Specialist
Thanks, Marc.
Did forget the braces. However,
Dsmc restore {\\fpmstore\e$}\KEYCONTROL\* c:\users\glee\documents\key\
-subdir=yes
Results in an ANS1084 message.
If I query the db directly, I see all sorts of things with an hl_name of
\\fpmstore\e$\KEYCONTROL\
For the node fpmstore
Any help
I'm assuming this was command line. You have to specify the filespace.
Example:
dsmc restore {\\machinename\c$}\* ...
If you try to restore C:, it assumes you are trying to restore the C: on
the machine you are on regardless of the nodename you are restoring from.
-
Thanks,
Marc...
Have a backup from an old windows 2003 server which no longer exists.
I granted my windows 10 client proxy authority to that old server's node.
Now when I try to restore, I get "no matching files or the file system is
incompatible"
Without rebuilding a win 2003 server, how do I get around
We are doing lots of cleanup on our replication target server and have
noticed an issue with stopping/purging replication of specific filespaces
(in this case, SystemState). All servers are RHEL 7 running ISP 7.1.7.400.
As we understand it,
UPDATE FILESPACE insertnodenamehere 1 nametype=fsid
Thanks Sasa
The SKIPNTPermissions option seems to work
Thank you
Genet Begashaw
IT Systems Analyst
5801 University Research Court,
University of Maryland’s Division of Information Technology
College Park, MD 20740
Phone: (301) 405-9555 (W)
(240)660-0024 (Cell)
On Mon, Feb 18, 2019 at 8:19 AM Sasa
On 18.2.2019. 13:54, Genet Begashaw wrote:
> Thanks Steven ,
>
> We did map the share drive and run and run using domain id , we still get
> permission denied and not able to see the share drive to select the drive
> to be included
Did you try the client option SKIPNTPermissions?
If you
Try just using an SMB path... \\servername\share\
Best regards,
Mike, x7942
RMD IT Client Services
On Mon, Feb 18, 2019 at 7:56 AM Genet Begashaw wrote:
> Thanks Steven ,
>
> We did map the share drive and run and run using domain id , we still get
> permission denied and not able to see
Thanks Steven ,
We did map the share drive and ran it using a domain ID; we still get
permission denied and are not able to see the share drive to select the
drive to be included
Thank you
Genet Begashaw
IT Systems Analyst
5801 University Research Court,
University of Maryland’s Division of
It's not hard, Genet.
Did you search the archives first?
The trick is to run the scheduler under a Windows domain user ID, not the usual
System ID. You can either permanently map the share or use a
preschedule/postschedule command pair to map it as you need it.
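For the second variant, the dsm.opt pair might look like this -- drive
letter, share, account, and password are placeholders:

```
preschedulecmd  "net use Z: \\fileserver\share /user:MYDOM\backupsvc placeholder-password"
postschedulecmd "net use Z: /delete"
```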
Cheers
Steve
Steven Harris
Thank you
Genet Begashaw
I have added a second stanza to dsm.sys to give me a scheduler and node for a
slightly different data mover configuration to test.
I created a second client options file pointing to this server stanza.
All works except for the setting of the VMWare password.
Tried the following two commands:
Delete filespaces on the primary that are no longer on the replica?
I have seen a lot of that going on, you get stale filespaces on the replica
with the metadata but due to deduplication the impact is largest on the
database.
If this is not it, I would run the IBM database reorg Perl script, you
Hi,
are you sure that all of the policies are the same?
All retention periods are the same?
And are you sure that the expiration process regularly runs on the target
server? Check for errors in the activity log...
Rgds,
--
Sasa Drnjevic
www.srce.unizg.hr/en/
On 2019-02-02 15:10, Saravanan
Hello Experts,
In what cases would a replication server have more DB usage than the primary
server? Here I go with one setup in a similar situation.
Primary server DB usage - 67% (out of 6TB)
Replication server DB usage - 96% (out of 6TB)
The replication server is used only for replication; there are no native
backups on it
Bjoern,
If I understand your situation correctly: on the server that you need to
transfer data from, you need to set the default replication server to the
desired target; then, on selected nodes, remove them from their current
replication role and update their replication mode.
There may be a
Hi Eric,
This will be a version 8.1.x.
Best regards,
Andy
Andrew Raibeck | IBM Spectrum Protect Level 3 | stor...@us.ibm.com
"ADSM: Dist Stor Manager" wrote on 2019-01-31
06:54:27:
> From: "Loon, Eric van (ITOP
Hi J. Eric,
The normal recovery path is to make use of the "repair stgpool" command, which
should fix the damaged container, making use of the data that has already
been replicated to your destination server.
You might then perform an audit of the involved containers (audit container
Hi Eric!
Checkout the repair stgpool command at
https://www.ibm.com/support/knowledgecenter/SSEQVQ_8.1.6/srv.reference/r_cmd_stgpool_repair.html
Kind regards,
Eric van Loon
Air France/KLM Storage & Backup
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On
Looking thru my actlog I noticed msgs like these:
01/30/2019 08:59:07 ANR4847W REPLICATE NODE detected an extent with ID
9421329176063255953 on container
/orion_c3/18/1843.ncf that is marked
damaged. (SESSION: 65110496, PROCESS: 31257)
01/30/2019 08:59:08 ANR4847W REPLICATE NODE
Hi Andy,
Thank you very much for your reply! Any idea if that will be a 8.1.7 release or
a higher version, like 8.2?
Kind regards,
Eric van Loon
Air France/KLM Storage & Backup
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Andrew
Raibeck
Support for Windows Server 2019 is currently targeted for 1Q2019. Please
note the usual caveats apply: this is not an official announcement, and
information is subject to change at the discretion of IBM.
Regards,
Andy
Hi Venu,
I will not deploy unsupported client versions in our company. Maybe it works,
but not having support when you run into issues is simply not acceptable for us.
I'm hoping for an official statement here from Andy or Del. ;-)
Kind regards,
Eric van Loon
Air France/KLM Storage & Backup
Hello Eric,
Nothing is mentioned about the support level of the Spectrum Protect client
for Windows Server 2019.
You can try the latest Spectrum Protect client available, but make sure
to test both backup & restore capabilities before getting this into a
production environment.
Suggest you open a support
Hi Francisco,
great.
Thanks!
Bjørn
-Original Message-
From: ADSM: Dist Stor Manager On Behalf Of Francisco
Molero
Sent: Thursday, 31 January 2019 11:43
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Question on switching server roles in replication
Hi ,
you can
Dear all,
I set up a new server for one of our customers and will use the old server and
library as target for replication. First I tried to export the data, but this
takes too long due to the amount of data. So my next approach is to set up
replication from the existing server to the new one
Hi guys!
Does anybody know when Windows Server 2019 will be supported by TSM as backup
client?
Thanks for any reply in advance!
Kind regards,
Eric van Loon
Air France/KLM Storage & Backup
For information, services and offers, please visit
Have you looked for jumbo-frame (lack of) configuration everywhere?
That reminds me of failure modes I've seen due to missing (or not
compatible) jumbo frame support. Is the new server on a new switch?
Does the new server support a larger jumbo frame size that might be
rejected between it and
While waiting for IBM to respond to my PMR request, I wonder if any of you
have seen an issue like this - Here is our current setup
Multiple Exchange 2013 CU21 servers running TSM Baclient 7.1.0.1 and
TDP/Exchange 7.1.0.1 backing up to TSM Storage server 7.1.7.2 on AIX.
Backups have been very slow
Rete and reto should be set to nolimit.
Vere and verd should be 1.
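Expressed as dsmadmc commands, in a macro -- using the default STANDARD
policy names as placeholders for your own domain, policy set, and class:

```
update copygroup STANDARD STANDARD STANDARD STANDARD type=backup -
   verexists=1 verdeleted=1 retextra=nolimit retonly=nolimit
validate policyset STANDARD STANDARD
activate policyset STANDARD STANDARD
```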
Sent from my iPhone using IBM Verse
On Jan 17, 2019, 4:41:06 PM, g...@bsu.edu wrote:
From: g...@bsu.edu
To: ADSM-L@VM.MARIST.EDU
Cc:
Date: Jan 17, 2019, 4:41:06 PM
Subject: [ADSM-L] clarification on defining retentions
I
I would like to create a domain to retain the last version of all files on a
server whether active or inactive at the time I place the server in that
domain. If a new backup happens to be taken, all previous versions will go
away.
Would that be as follows?
Vere=0 verdel=0 rete=0 reto=1
If I
Thank you for your research/info.
On Wed, Jan 16, 2019 at 1:22 PM Sasa Drnjevic wrote:
> On 2019-01-15 20:49, Zoltan Forray wrote:
> > Can someone explain what a "join" driver is? Google-ing doesn't bring up
> > anything useful? The lin_tape readme says they tried to introduce it
> back
> >
On 2019-01-15 20:49, Zoltan Forray wrote:
> Can someone explain what a "join" driver is? Google-ing doesn't bring up
> anything useful? The lin_tape readme says they tried to introduce it back
> in 2017 - removed it - and now it is back?
Hi.
Just came across this for the 1st time...with
Can someone explain what a "join" driver is? Google-ing doesn't bring up
anything useful? The lin_tape readme says they tried to introduce it back
in 2017 - removed it - and now it is back?
--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
Xymon Monitor
Hi Folks,
First of all, happy new year to all of you !
I am experiencing a strange behavior with container-based storage pools, which I
would like to share with you. Basically it's all about wasted space in
directory pools, which I would like to reclaim:
1) I do search for some container
Hi Gary
We schedule manually here.
We have one VBS for each of the separate backup streams and edit the
domain.vmfull manually to add VMs. The problem with this approach is that it
is manual, and we end up having the same large VMs running backup all day. We
also have a mixture of some
My VM backup schedule executes a .bat file on the VM proxy server. The file
has one or more 'dsmc backup vm host1,host2,host3.etc' commands.
Jim
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee,
Gary
Sent: Tuesday, January 8, 2019 2:14 PM
Tsm client 8.1.4, running on RHEL 6.9 VMs. TSM server 7.1.7.1 on RHEL 6.9.
We do not use the VMWare plugin to manage our TSM for VE backup schedules.
Schedules backup by host using standard tsm scheduling.
Did this because the vmware client is not very screen reader friendly for a
blind user.
Hello,
I've just discovered that the defaults have been changed in 8.1.6 (I have a test
system just upgraded to 8.1.6.100):
Protect : TSMSRV1>q opt defragCntrTrigger
Server Option Option Setting
-
This one has been on my wish list for several months - so thanks in advance for
the idea!
I am curious, though - has anyone tried to utilize the API to the OC for this?
It might not be possible - that is how little time I’ve had to research that
idea - but can’t hurt to ask…
Robert Talda
Actually you would indeed struggle to pass $1 through to dsmadmc; however, you
could use macros instead:
function delnode {
    if [ -z "$1" ]; then
        # Empty $1 positional parameter. Show some help message
        echo "usage: delnode <nodename>"
    else
        # Create a dsmadmc macro file, then run it
        # (authentication options omitted)
        printf 'remove replnode %s\ndecommission node %s\n' "$1" "$1" > /tmp/delnode.mac
        dsmadmc macro /tmp/delnode.mac
    fi
}
Why not use a function in your .bashrc instead?
Maybe something like this (server stanza names are examples):
function delnode {
    if [ -z "$1" ]; then
        # Empty $1 positional parameter. Show some help message
        echo "usage: delnode <nodename>"
    else
        # Remove and decommission the node from both primary and
        # replica servers (authentication options omitted)
        dsmadmc -se=primary remove replnode "$1"
        dsmadmc -se=replica decommission node "$1"
    fi
}
Yes... I've seen that one before.
However, my bad. The error was coming from a non-container server. As it
turned out, we were out of disk space.
Thanks
On Fri, Dec 14, 2018 at 12:58 PM Deschner, Roger Douglas
wrote:
> Check your MAXSCRATCH settings for that storage pool. This is a gotcha
>
Check your MAXSCRATCH settings for that storage pool. This is a gotcha that has
bitten me in the past.
Roger Deschner
University of Illinois at Chicago
"I have not lost my mind; it is backed up on tape somewhere."
From: J. Eric Wonderley
Sent: Thursday,
Even simpler if you define the source server to itself and create a server
group called both with group members of the source and target servers. Then you
can issue
both:rem replnode
both:decomm node
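A sketch of that setup, with server and node names as examples -- the source
server must already be defined to itself (define server) for the group
routing to reach it:

```
define servergroup both
define grpmember both source_srv target_srv
both: remove replnode somenode
both: decommission node somenode
```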
J. Pohlmann
-Original Message-
From: ADSM: Dist Stor Manager
Dear all,
I just started to use replication and it looks to be a good approach to
replace the copy pools I have used till now.
But one point seems to become much more complicated: deleting nodes.
The only way I see up to now takes several steps and two separate logons:
log on to the primary server:
1)
We are getting failures that show out of space when we are NOT out of
space?:
ORA-19511: non RMAN, but media manager or vendor specific failure, error
text:
ANS1311E (RC11) Server out of data storage space
Eric,
In addition to this, when using TSM client > 8.1.2, as non-root user, watch
for this :
https://www.ibm.com/support/knowledgecenter/SSEQVQ_8.1.6/client/c_cfg_nonadmin.html
The "passworddir" statement is now mandatory in dsm.sys ...
Cheers.
Arnaud
Hi Eric,
The error says you do not have permission to write to the dsmerror.log in
the reports directory.
Solution here: https://www-01.ibm.com/support/docview.wss?uid=swg21633869
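Following that technote, one sketch of a workaround for a non-root user is to
point the client logs at a directory the user can write, via DSM_LOG (the
directory name here is an example):

```shell
# Pick a user-writable directory for dsmerror.log and friends
export DSM_LOG="$HOME/tsmlogs"
mkdir -p "$DSM_LOG"
# dsmadmc should now write its error log there instead of /reports
```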
-
Thanks,
Marc...
Marc Lanteigne
Spectrum Protect
I have TSM v8.1.5 on Linux and it does not run:
[e1derley@yacht ~]$ dsmadmc
ANS1398E Initialization functions cannot open one of the IBM Spectrum
Protect logs or a related file: /reports/dsmerror.log. errno = 13,
Permission denied
[e1derley@yacht ~]$ ll /reports/dsmerror.log
-rw-r--r-- 1 root
Hi Tom,
According to the "Installation & User's Guide", it has changed with 8.1.2.
--
Best regards / Cordialement / مع تحياتي
Erwann SIMON
- Original Message -
From: "Tom Alverson"
To: ADSM-L@VM.MARIST.EDU
Sent: Friday, 7 December 2018 21:47:46
Subject: Re: [ADSM-L] Resourceutilization=100
With great power comes great responsibility.
As before, it doesn't matter what value you specify; it doesn't mean you
will get that many threads or sessions. However, I would tread carefully
when going above 10. I'd go up gradually until you hit a point of
diminishing returns and then
Wait. What??
What version of client do you need for this? Will my storage servers
explode? How many sessions do you need?
On Thu, Dec 6, 2018 at 12:16 PM Erwann SIMON wrote:
> Hi all,
>
> Resourceutilization can now (officially) be set to higher values than in
> the past. Maximum value
Hi all,
Resourceutilization can now (officially) be set to higher values than in the
past. The maximum value is now 100, while it was 10.
How does it now behave ? How many producers and consumers can we expect with
higher values like 100 ?
--
Best regards / Cordialement / مع تحياتي
Erwann