On Tue, Dec 8, 2009 at 6:08 PM, Rodery, Floyd A Mr CIV US DISA CDB12
<[email protected]> wrote:
> I was curious if anyone had any experience with running VERITAS
> NetBackup (specifically 6.5.4) for Linux on System z (SLES10)?  We've
> noticed extremely high amounts of memory usage when VERITAS is running a
> full back up of a server; end result is swapping to disk.
>
>
>
> Scenario prior to full back up:
>
> Server has approximately 1 GB of free virtual memory
>
> Server has 256 MB of primary swap, not being used (VDISK)
>
> Server has 512 MB of secondary swap, not being used (VDISK)
>
>
>
> NetBackup starts:
>
> VERITAS ties up the 1 GB of free virtual memory, then the 256 MB of
> primary VDISK is used, spilling over and using up the 512 MB of
> secondary VDISK, and ends up swapping to disk.  This process will last
> approximately 1-2 hours while it backs up approximately 100 GB of data.
> NetBackup is also running in multiplex mode during the full backups.

I don't know the particular application, but it sounds like every
parallel backup stream acquires a fixed amount of virtual memory to
hold an index or similar working set. For such things a few hundred MB
is not absurd. I assume you're talking about a backup client running
on each Linux server?

The big question is whether you're really swapping during those 1-2
hours, or whether all the idle processes are swapped out at the start
and stay there because Linux has other things that need to be in
memory. Swappiness may also be playing a role here. The backup process
reads a lot of data and metadata, and Linux memory management will try
to retain much of that in the page cache at the expense of swapping
out anonymous memory. That is obviously not useful, because what the
backup reads says nothing about future access. This is another case
where you want swappiness set to 0, I think.
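A minimal sketch of checking and lowering swappiness (the sysctl name
is standard Linux; whether 0 is the right value for your workload is a
judgment call):

```shell
# Current policy: 0 = avoid swapping out anonymous pages,
# 60 = the usual distribution default
cat /proc/sys/vm/swappiness

# Set it to 0 on the running system (root only); to persist across
# reboots, put "vm.swappiness = 0" in /etc/sysctl.conf
if [ "$(id -u)" -eq 0 ]; then
    sysctl -w vm.swappiness=0
fi
```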

There are a few more things to look at. Your ESALPS performance data
has the answers to most of my questions. You're most welcome to upload
some of it for me to think about (but not to the list, please :-)
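If you want a quick look outside ESALPS, a generic Linux check is to
sample the kernel's cumulative swap counters during the backup window;
a non-zero delta means pages are really moving to and from swap, as
opposed to idle pages having been pushed out once at the start
(sketch only; the interval is a placeholder):

```shell
#!/bin/sh
# Read the cumulative swap-in/swap-out page counters twice, a few
# seconds apart, and report the delta for the sample window.
INTERVAL=${1:-5}

swap_counter() { awk -v key="$1" '$1 == key { print $2 }' /proc/vmstat; }

in1=$(swap_counter pswpin);  out1=$(swap_counter pswpout)
sleep "$INTERVAL"
in2=$(swap_counter pswpin);  out2=$(swap_counter pswpout)

echo "pages swapped in:  $((in2 - in1))"
echo "pages swapped out: $((out2 - out1))"
```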

> Another note, the incremental backups are running in non-multiplex mode
> and we don't experience the same type of memory usage.  The caveat
> being, obviously, the incremental backups are not backing up as much
> data.

I would expect an incremental backup to also use an index into the
previous full backup (and earlier incrementals). That would raise the
virtual memory requirement per process. But then, if you don't run
multiple backups in parallel, that brings it down again.

Sounds like during the multiplex backups you need more memory / VDISK
to avoid swapping to disk, or a change in backup strategy. I would feel
uncomfortable if the sizing of the virtual machine were determined by
the backup application (or its installation program).

Since this is a scheduled memory requirement, a neat trick is to use
CMM-1 in a programmed fashion: deflate the balloon by the proper
amount before the backup starts, and inflate it again when the backup
completes (or via cron, when you know the workload on z/VM is such
that you don't want to devote the resources to your backup).
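The deflate/inflate can be driven from cron on the guest, assuming the
cmm module is loaded (it exposes /proc/sys/vm/cmm_pages on Linux on
System z). The times and the 131072-page (512 MB) figure below are
placeholders you would size to your own backup footprint:

```shell
# /etc/crontab fragment: give the guest its memory back before the
# nightly full backup, reclaim it afterwards.  cmm_pages is the number
# of 4 KB page frames the CMM balloon is holding away from Linux.

# 01:55 - deflate the balloon so the backup has real memory to use
55 1 * * *  root  echo 0 > /proc/sys/vm/cmm_pages

# 04:30 - backup done, inflate the balloon again
30 4 * * *  root  echo 131072 > /proc/sys/vm/cmm_pages
```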

> We are not seeing the same type of memory issues on the x86, VMware,
> etc.  This issue seems to be isolated to the s390x environment?  I was
> curious if anyone has run into this issue before?

But your x86 systems likely have way more memory...

-- 
Rob van der Heij
Velocity Software
http://www.velocitysoftware.com/

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
