thank you thank you... I would like to see that in IBM documentation
somewhere.
On 3/25/20 11:50 AM, Venkateswara R Puvvada wrote:
Matt,
It is recommended to have dedicated AFM gateway nodes. Memory and CPU
requirements for an AFM gateway node depend on the number of filesets
handled by the node and the inode usage of those filesets. Since AFM keeps
track of changes in memory, any network disturbance can cause the
On 25/03/2020 16:32, Skylar Thompson wrote:
On Wed, Mar 25, 2020 at 04:27:27PM +, Jonathan Buzzard wrote:
On 25/03/2020 14:15, Skylar Thompson wrote:
We execute mmbackup via a regular TSM client schedule with an incremental
action, with a virtualmountpoint set to an empty, local "canary"
Hello,
Sorry, I was wrong. It looks like the timeout already happens in xCAT/rinv and the
GUI just reports it. Which is good in some respects: now this is purely an
xCAT/hardware issue, and the GUI isn't involved any more.
Kind regards
Heiner
/var/log/xcat/command.log:
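A quick way to pull the relevant lines out of that log is a simple grep. This is a hedged sketch: it builds a sample log so it is self-contained, and the log line format is an assumption; on a real system you would point LOG at /var/log/xcat/command.log.

```shell
# Sketch: count timeout lines in the xCAT command log.
# Sample log created here; the line format below is an assumption.
LOG="${LOG:-$(mktemp)}"
cat > "$LOG" <<'EOF'
2020-03-25 11:02:11 rinv: [node01] Error: timeout
2020-03-25 11:02:12 rinv: [node02] ok
EOF
matches=$(grep -ci 'timeout' "$LOG")
echo "timeout lines: $matches"   # prints "timeout lines: 1"
```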
Hello,
I asked before about these timeouts when the GUI runs HW_INVENTORY. Now I
would like to know what the exact timeout value in the GUI code is and whether we
can change it. I want to argue: if an xCAT command takes X seconds but the GUI
code times out after Y, we know the command will
IIRC, you need to set the 2 bit in the bit field of the DEBUGmmbackup
environment variable. I had a long-term task to see what I could get out of
that, but this just reminded me of it, and current events might actually let
me have time to look into it now...
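Enabling that would look roughly like the sketch below. This is a dry run that only echoes the command line it would hand to the scheduler; the fileset path is an assumption, and I have not verified what output the 2 bit actually produces.

```shell
# Sketch: enable extra mmbackup debugging via the DEBUGmmbackup bit field.
export DEBUGmmbackup=2
# Dry run: echo rather than execute (path /gpfs/fs1 is an assumption).
cmd="mmbackup /gpfs/fs1 -t incremental"
echo "DEBUGmmbackup=$DEBUGmmbackup $cmd"
```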
On Wed, Mar 25, 2020 at 10:38:55AM
Additionally, mmbackup by default creates a .mmbackupCfg directory at the root
of the fileset, where it dumps several files and directories tracking the progress
of the backup. For instance: expiredFiles/, prepFiles/, updatedFiles/,
dsminstr.log, ...
You may then create a script to search these
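A minimal sketch of that kind of script, using the directory names listed above. It builds a mock fileset root so it is self-contained; on a real system FSROOT would be the root of the backed-up fileset, and the file layout inside each directory is an assumption.

```shell
# Sketch: summarize the .mmbackupCfg progress directories.
# FSROOT is a mock here; real layout inside the dirs may differ.
FSROOT="${FSROOT:-$(mktemp -d)}"
mkdir -p "$FSROOT/.mmbackupCfg/expiredFiles" \
         "$FSROOT/.mmbackupCfg/prepFiles" \
         "$FSROOT/.mmbackupCfg/updatedFiles"
touch "$FSROOT/.mmbackupCfg/updatedFiles/list.1"   # fake progress file
for d in expiredFiles prepFiles updatedFiles; do
  n=$(find "$FSROOT/.mmbackupCfg/$d" -type f | wc -l | tr -d ' ')
  echo "$d: $n file(s)"
done
```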
We execute mmbackup via a regular TSM client schedule with an incremental
action, with a virtualmountpoint set to an empty, local "canary" directory.
mmbackup runs as a preschedule command, and the client -domain parameter is
set only to backup the canary directory. dsmc will backup the canary
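For reference, the client-side setup described here might look roughly like the dsm.sys stanza below. The option names (PRESCHEDULECMD, VIRTUALMOUNTPOINT, DOMAIN) are real TSM client options, but the server name and paths are assumptions.

```
* dsm.sys stanza (sketch; servername and paths are assumptions)
SERVERNAME        tsmserver
PRESCHEDULECMD    "/usr/lpp/mmfs/bin/mmbackup /gpfs/fs1 -t incremental"
VIRTUALMOUNTPOINT /gpfs/fs1/canary
DOMAIN            /gpfs/fs1/canary
```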
So far we have not revisited the EOS date for 4.2.3, but I would not rule it
out entirely if the lockdown continues well into the summer. If we did, the
next likely EOS date would be April 30th.
Even if we do postpone the date for 4.2.3, keep two other dates in mind for
planning:
- RHEL 6
On 19/02/2020 23:34, Renata Maria Dart wrote:
Hi, I understand gpfs 4.2.3 is end of support this coming September.
A planning question at this stage. Do IBM intend to hold to this date or
is/could there be a relaxation due to COVID-19?
Basically I was planning to do the upgrade this
What is the best way of monitoring whether or not mmbackup has managed
to complete a backup successfully?
Traditionally one uses a TSM monitoring solution of choice to make
sure nodes were backing up (I am assuming mmbackup is being used in
conjunction with TSM here).
However
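One simple complement to TSM-side monitoring is a wrapper that records the mmbackup exit status somewhere a monitor can see it. This is a hedged sketch: `true` stands in for the real mmbackup invocation, and the state-file path and message format are assumptions.

```shell
# Sketch: record mmbackup success/failure for an external monitor.
STATE_FILE="$(mktemp)"          # real path would be a monitored location
if true; then                   # stand-in for: mmbackup /gpfs/fs1 -t incremental
  echo "mmbackup OK $(date -u +%Y-%m-%dT%H:%MZ)" > "$STATE_FILE"
else
  echo "mmbackup FAILED $(date -u +%Y-%m-%dT%H:%MZ)" > "$STATE_FILE"
fi
cat "$STATE_FILE"
```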