Hello Stef,

Since your active log is filling, do you have the SAN capacity to increase
it?  We use the maximum active log size of 512 GB, which is overkill for
our intake but might be exactly what you need.  I'd also think that with
that much intake, a few more cores wouldn't hurt.  There's a lot of
processing involved with large dedup workloads, and we typically use all
10 of our allocated POWER8 cores just doing server-side dedupe.
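
For reference, the active log size is controlled by the ACTIVELOGSIZE
server option in dsmserv.opt (value in MB; 524288 MB is the 512 GB
maximum).  A sketch of what ours looks like is below -- the directory
path is just an example, not your actual layout, and the server instance
needs a restart to pick up the change:

```text
* dsmserv.opt fragment (example paths; verify against your environment)
ACTIVELOGSIZE 524288
* directory must have enough free space for the full log size
ACTIVELOGDIRECTORY /tsm/activelog
```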

Hope that helps,
__________________________
Matthew McGeary
Technical Specialist - Infrastructure
PotashCorp
T: (306) 933-8921
www.potashcorp.com




From:   Stef Coene <[email protected]>
To:     [email protected]
Date:   12/10/2015 04:05 AM
Subject:        [ADSM-L] TSM troubles
Sent by:        "ADSM: Dist Stor Manager" <[email protected]>



Hi,

Some time ago I mailed in frustration that using DB2 as the TSM backend
was a bad idea.

Well, here I am again with the same frustration.

This time I just want to know: who is using deduplication successfully?
How much data do you process daily? Client, server, or mixed?

We are trying to process a daily intake of between 10 and 40 TB, almost
all file-level backup. The TSM server runs on AIX, 6 x POWER7, 128 GB RAM.
Disk is on SVC with a FlashSystem 840. The disk pool is 250 TB on 2 x
V7000 with 1 TB NL-SAS disks, SAN attached. We are trying to do
client-side dedup.

The problem is that the active log fills up (128 GB) in a few hours, and
this happens up to 2 times per day! DB2 recovery takes 4 hours because we
have to do a 'db2stop force' :(


Stef
