by the way. :-)
Kind regards,
Eric van Loon
Air France/KLM Storage Engineering
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Stefan
Folkerts
Sent: Wednesday, 12 April 2017 17:56
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Deduplication
Eric
> Sent: Monday, 10 April 2017 18:12
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: Deduplication
>
> Hi Eric,
>
> A few things:
>
> - Client-side provides better overall throughput for Spectrum Protect
> because the deduplication is spread across more CPU's. So if you
Thanks again for your help!
Kind regards,
Eric van Loon
Air France/KLM Storage Engineering
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Del
Hoobler
Sent: Monday, 10 April 2017 18:12
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Deduplication
Hi Eric
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Deduplication
Perhaps the client side dedupe is keeping a dedupe hash-bitmap that is not
getting fully refreshed when you purge the backup data from the server?
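If a stale client-side dedup cache is the suspect here, the relevant knobs live in the client options file. A hedged sketch (values illustrative; disabling the cache makes the client re-query the server for every chunk, which avoids mismatches after the server purges data):

```
* Client options (dsm.opt / dsm.sys stanza) - illustrative values
DEDUPLICATION      YES
* Avoid stale-cache mismatches after server-side deletions:
ENABLEDEDUPCACHE   NO
* Or keep the cache but bound its size (MB) and location:
* ENABLEDEDUPCACHE YES
* DEDUPCACHESIZE   256
* DEDUPCACHEPATH   /opt/tivoli/tsm/client/ba/bin
```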
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU
Hi Eric,
A few things:
- Client-side provides better overall throughput for Spectrum Protect
because the deduplication is spread across more CPU's. So if you can
afford to do the deduplication client-side, that is the best overall
result.
- Client-side helps reduce network traffic
- The
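Client-side deduplication as described above has to be enabled on both ends; a minimal sketch, with an illustrative node name:

```
/* On the server: allow the node to dedup on the client side */
/* (the default is server-only)                              */
update node ERICNODE deduplication=clientorserver

/* In that client's dsm.opt / dsm.sys stanza: */
* DEDUPLICATION YES
```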
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Loon,
Eric van (ITOPT3) - KLM
Sent: Monday,
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: Thursday, 31 March 2016 17:55
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: Deduplication and database backups
>
I've seen plenty of databases go to container pools and get fair to good
deduplication results even on the first backup.
It should not matter that it is one large object; it will make the chunks
larger, but normally you should still get some deduplication as long as it's
not encrypted.
It would
I'll third the odd percentages... using 7.1.3.100.
tsm: TSMPRD02>select sum(reporting_mb) from OCCUPANCY where
stgpool_name='SASCONT0'
Unnamed[1]
--
182520798.90
tsm: TSMPRD02>q stg sascont0
Storage Device
excluding empty space within aggregates. For this
value, 1 MB = 1048576 bytes.
I'm lost here ...
Cheers.
Arnaud
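To chase a mismatch like Arnaud's between OCCUPANCY and q stg, it can help to pull the other occupancy columns alongside REPORTING_MB. A sketch, written as an admin macro (the trailing dash is the macro continuation character; pool name taken from the message):

```
/* Break the pool's occupancy down by column */
select stgpool_name, sum(num_files) as files, -
       sum(physical_mb) as phys_mb, -
       sum(logical_mb) as logical_mb, -
       sum(reporting_mb) as reporting_mb -
  from occupancy -
  where stgpool_name='SASCONT0' -
  group by stgpool_name
```

Comparing PHYSICAL_MB/LOGICAL_MB against the pool-level utilization from q stg f=d is, as far as I know, the usual way to see where the "odd percentage" comes from.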
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Matthew McGeary
Sent: Tuesday, March 22, 2016 2:23 PM
To: ADSM-L@VM.MARIST.EDU
Arnaud,
I too am seeing odd percentages where container pools and dedup are
concerned. I have a small remote server pair that protects ~23 TB of
pre-dedup data, but my container pools show an occupancy of ~10 TB, which
should be a data reduction of over 50%. However, a q stg on the
containerpool
39 is actually not a great number; it means you are getting less than 2 for 1
dedup.
Unless you have backups running hard 24 hours a day, those dedup processes
should finish.
When you do Q PROC, if the processes have any work to do, they show as ACTIVE,
if not they show IDLE.
I'd think that
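The dedup identify workload discussed above is driven per pool; a hedged sketch of checking it and adding workers (pool name, process count, and duration are illustrative):

```
/* See whether identify/expiration processes are ACTIVE or IDLE */
query process

/* Start (or add) identify-duplicates workers for one pool; */
/* duration is in minutes                                   */
identify duplicates DEDUPPOOL numprocess=4 duration=60
```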
Hey, Nick, missed your name the first time around!
Being in higher-ed/research, we went the cheap route and actually just
use direct-attach 15K SAS drives on Dell servers, divvied up into
multiple RAID-10 sets. Even a 1 TB database only takes us ~1 hour to
back up or restore, which is well within
Hi Wanda,
I'm using deduplication and have found that TSM life would be much easier if
the stg pool was kept small, under 3 TB in size. I haven't done enough testing
with this, and I know it is slightly counterproductive to achieving the highest
deduplication savings. But it sure does make the
Wanda,
In trying to troubleshoot an unrelated performance PMR, IBM provided me
with an e-fix for the dedupdel bottleneck that it sounds like you're
experiencing. They obviously will want to do their due-diligence on
whether or not this efix will help solve your problems, but it has proved
very
Sergio and Wanda,
Thanks for your posts! I opened PMR 10702,L6Q,000 a couple weeks ago
for slow performance [recently completely fell off the cliff!] with our
SRV3 TSM
v6.3.4.200 service that *was* successfully doing client+server deduplication
for 72TB BackupDedup STGpool on NetApp FC [soon to
Woo hoo!
That's great news.
Will open a ticket and escalate.
Also looking at client-side dedup, but I have to do some architectural
planning, as all the data is coming from one client, the TSM VE data mover,
which is a vm.
Re client-side dedup, do you know if there is any cooperation
Please do post results -
expiration just ran for me, queue 30M!
45 TB dedup pool
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of James
R Owen
Sent: Friday, December 20, 2013 11:19 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L]
Client-side dedup and simultaneous-write to a copy pool are mutually
exclusive. You can't do both, which is the only theoretical way to
enforce deduprequiresbackup with client-side dedup. I suppose IBM could
enhance TSM to do a simultaneous-like operation with client-side dedup,
but that's not
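The server-side safeguard being discussed is, as far as I know, a dsmserv.opt option; a minimal sketch (YES is the documented default):

```
* dsmserv.opt: keep base copies of deduplicated chunks until the
* data has been copied (backup stgpool) to a copy storage pool.
DEDUPREQUIRESBACKUP YES
```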
I can second that Sergio,
Backup stgpools to copy tapes is not pretty, and is an intensive process to
rehydrate all that data.
The one extra thing I did was split the database across multiple folders for
parallel I/O to the database. That has worked out very well, and I currently
have it
While we don't do deduplication (tests show we gain less than 25% from it),
we also split our DB2 instances across multiple, physically-separate
volumes. The one thing to note is that you have to dump and restore the
database to spread existing data across those directories if you add them
Hi All,
Is anyone using this script for reporting purposes?
http://www-01.ibm.com/support/docview.wss?uid=swg21596944
--
Best regards / Cordialement / مع تحياتي
Erwann SIMON
- Original Message -
From: Wanda Prather wanda.prat...@icfi.com
To: ADSM-L@VM.MARIST.EDU
Sent: Friday 20
Is anyone doing stgpool backups to a dedup file copy pool?
At 02:23 PM 12/20/2013, Marouf, Nick wrote:
I can second that Sergio,
Backup stgpools to copy tapes is not pretty, and is an intensive process to
rehydrate all that data.
The one extra thing I did was split the database across
Hi Skylar !
Yes, that would be the easy way to do it; there is an option to rebalance
the I/O after you add the new file systems to the database. I had already set up
TSM before the performance tuning guideline was released. Doing it this way will
require more storage initially and running
Hi Wanda,
some quick rambling thoughts about dereferenced chunk cleanup.
Do you know about the 'show banner' command? If IBM sends you an e-fix, this
will tell you what it is fixing.
tsm: xshow banner
* EFIX Cumulative
Hi Wanda,
Expire inventory is queuing chunks for deletion.
Watch the Q PROC output: at the end of the expire inventory process, once the
total number of nodes has been reached, no more deletion of objects occurs, but
SHOW DEDUPDELETEINFO shows that the deletion threads are still working,
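A short sketch of watching that deletion backlog from the admin console (SHOW commands are undocumented diagnostics, so output format may vary by level):

```
/* Per-thread chunk-deletion statistics */
show dedupdeleteinfo

/* The expiration process itself */
query process
```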
Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Gee, Norman
Sent: Wednesday, July 24, 2013 11:29 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Deduplication/replication options
This is why IBM is pushing their VTL solution. IBM will only charge for the
net amount using an all-IBM solution. At least that is what I was told.
On Jul 26, 2013, at 5:21 AM, Steven Langdale steven.langd...@gmail.com wrote:
Hello Stefan
Have you got cases of this? I ask because I have been specifically told by
our rep that any dedupe saving for capacity licensing is TSM dedupe only,
regardless of the backend storage.
During our last
Hi Sergio!
Another thing to take into consideration: if you have switched from PVU
licensing to sub-capacity licensing in the past: TSM sub-capacity
licensing is based on the amount of data stored in your primary pool. If
this data is stored on a de-duplicating storage device you will be
charged
On 07/23/2013 06:30 PM, Nick Laflamme wrote:
I'm surprised by Allen's comments, given the context of the list.
TSM doesn't support BOOST. It doesn't support it at the server level, and it
doesn't support it for a client writing directly to a Data Domain DDR.
Duh, yes, good point.
Context: We moved
Colwell
Draper Lab
-Original Message-
Hi Sergio,
There are many people more knowledgeable than I am on this topic, and I hope
they contribute to this interesting question. My two cents would be to remember
that the TSM database doesn't know about an array replication, so you'll have
to deal with that issue if you have a massive
On 07/23/2013 01:19 PM, Sergio O. Fuentes wrote:
We're currently faced with a decision go with a dedupe storage array
or with TSM dedupe for our backup storage targets. There are some
very critical pros and cons going with one or the other. For
example, TSM dedupe will reduce overall
I'm using Data Domain as the only dedup component. Mgmt is balking at the cost
of additional disk or tape pools for TSM dedup and the highly desired backup to
a non-dedup pool. Our current tape technology is quite old, and replacing it
with several new drives and library hardware isn't on the
I have no experience with TSM de-dup, but I have plenty with Data Domain.
We have 3 different disaster recovery methods for 3 sites.
1. The largest site is traditional TSM, write the data to a primary pool (DD
VTL) and make copies to physical tape and use a truck to move them away.
2. Medium
I'm surprised by Allen's comments, given the context of the list.
TSM doesn't support BOOST. It doesn't support it at the server level, and it
doesn't support it for a client writing directly to a Data Domain DDR. This may
be obvious to everyone, but I fear for the people who are TSM-centric and
haven't
Thanks, guys, for your input. Nick, your comment is relevant to us.
We're not used to by-passing TSM for any storage management task regarding
backups. We use very little storage-based replication in our environment
as it is, and introducing array-based replication adds a wrinkle to
managing our
Though our TSM systems (6.3 and 5.5) use back-end de-dup (Data Domain), I also
notice that log files for DBs such as pre-2010 Exchange using legacy backups,
and DB2 log files, de-dup very poorly.
Originally I thought that our DBA's or Exchange admins were either compressing
this data or storing
Yep.
Oracle DB's, getting great dedup rates on the DB's (except the ones where they
have turned on Oracle compression to start with - that is, the DB itself is
compressed).
Poor dedup on the Oracle logs either way.
W
-Original Message-
From: ADSM: Dist Stor Manager
I second Wanda on the logs. When you think about it, logs are unique
data, being entirely made of transactions in the order in which they
come in. If they were identical to some other data, I'd start looking
around for Twilight Zone cameras.
On the other hand, I suppose I could imagine a
Also the Files Per Set parameter in Oracle will really get you -
ProtecTIER recommends no more than a setting of 4. We have seen 10, and
we went from 10:1 to 2.5:1.
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Prather, Wanda
Sent: Friday,
Thanks Wanda and Alex,
Yes I too thought about the uniqueness of the data that makes up logs.
I guess I'm just second guessing myself.
One approach I am thinking about in regard to the same issue with pre-2010
Exchange log files (legacy incrementals) is whether it wouldn't be better to
just do full
Interesting idea -- Let us know what you find out!
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Rick
Adamson
Sent: Friday, January 11, 2013 2:03 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Deduplication candidates
Thanks Wanda and
Yes, I agree with you. I can't think of a reason why most of the
database shouldn't dedup out.
On 1/11/2013 11:03 AM, Rick Adamson wrote:
Thanks Wanda and Alex,
Yes I too thought about the uniqueness of the data that makes up logs.
I guess I'm just second guessing myself.
One approach I am
The size of storage is not enough information to size a system.
The number of sessions determines system size.
If you have four clients, 1 gig per night, you could run 8GB RAM, Core2
2GHz and be okay.
Realistically, 32GB per instance is good. db2sysc will use about 20GB per
instance if it's
On 06/09/11 22:40, Richard van Denzel wrote:
Hi All,
Just a question about the internal dedup of TSM. When I dedup a storage
pool and then back up the pool to a dedup copy pool, will the data in the
storage pool backup be transferred deduped or will it get undeduped first,
then transferred and
Hi Richard,
No, the deduplicated data is not recomposed when backing up to a
deduplicated copy storage pool.
Recommended reading:
http://www.ibm.com/developerworks/wikis/pages/viewpage.action?pageId=108134649
http://www.ibm.com/developerworks/wikis/display/tivolistoragemanager/Data
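A minimal sketch of the setup being described - a deduplicated FILE copy pool fed by backup stgpool, so chunks move without being recomposed (device class, pool names, and sizes are illustrative):

```
/* Sequential FILE device class for the copy pool */
define devclass FILEDC devtype=file directory=/tsmfile maxcapacity=50G mountlimit=32

/* Deduplicated copy storage pool */
define stgpool DEDUPCOPY FILEDC pooltype=copy maxscratch=200 deduplicate=yes

/* Protect the primary dedup pool into it */
backup stgpool DEDUPPRIMARY DEDUPCOPY maxprocess=4
```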
Back to client side dedupe, which we're about to deploy for a branch
campus 90 miles away in Rockford IL.
The data is sent from the clients in Rockford via tin cans and string to
the TSM server in Chicago, already deduped. We're using source dedupe
because the network bandwidth is somewhat
As far as I know, client-side de-duplication will not work with primary storage
pool type DISK. It must be FILE, just as for server-side de-duplication.
Am I right?
Grigori G. Solonovitch
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Roger
This is my understanding as well. I'm almost certain this is the case, though
we have not yet used source dedup.
..Paul
On Jun 22, 2011, at 3:34 AM, Grigori Solonovitch
grigori.solonovi...@ahliunited.com wrote:
As far as I know client site de-duplication will not work with primary
Agreed.
AFAIK,
the client-side dedup function is reliant on the dedup information in the
storage pool where the data resides on the server. Which has to be a file pool,
and deduped.
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Paul
Client side dedup is only done to a dedup storagepool which means the
storagepool has to be a FILE type storagepool.
Roger Deschner rog...@uic.edu 6/22/2011 2:37 AM
Back to client side dedupe, which we're about to deploy for a branch
campus 90 miles away in Rockford IL.
The data is sent from
Tape pools are not de-duped, so that is not a consideration.
On Tue, Jun 21, 2011 at 13:17, Mark Mooney mmoo...@aisconsulting.net wrote:
Hello,
I had a student ask me today: "What happens if you have collocation turned on
for a storage pool that you are deduplicating?" I did not know what to
Doesn't it undup when it goes to tape?
Or am I still living in 5.5 and thinking in VTL dedup?
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Mark Mooney
Sent: Tuesday, June 21, 2011 2:17 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L]
Dedup only works in TSM storage pools that reside on disk (specifically
devtype=FILE pools).
If you have data that goes to a dedup pool, then gets migrated off to tape, it
is reduped (rehydrated, reinflated, whatever you want to call it.)
So collocation will still be in effect for that pool.
And that's why storage pool planning is very important. The less re-duping,
hydrating, and inflating you do, the better. Client data goes to a non-deduped
(I guess that would be a duped) pool that migrates to a deduped pool. But run
backup stgpool before the migration happens to avoid the re.
This is where
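The ordering Wanda describes can be scripted from the admin console; a sketch with illustrative pool names:

```
/* Copy the still-hydrated data to the copy pool first... */
backup stgpool DISKPOOL COPYPOOL wait=yes

/* ...then push it down into the deduplicated FILE pool */
migrate stgpool DISKPOOL lowmig=0 wait=yes
```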
Cool, thanks :) I have questions about client dedup. Do you know of any
Redbook detailing that?
Thanks,
Mooney
Prather, Wanda wprat...@icfi.com wrote:
Dedup only works in TSM storage pools that reside on disk (specifically
devtype=FILE pools).
If you have data that goes to a dedup pool,
https://www-304.ibm.com/support/docview.wss?context=SSGSG7&lang=all&rs=2077&wv=1&loc=en_US&cs=UTF-8&uid=swg27018576&q1=tste_webcast&dc=DA410
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Mark
Mooney
Sent: Tuesday, June 21, 2011 2:53 PM
To:
Even if a FILE devclass has dedup turned on, when the data is migrated,
reclaimed, or backed up (backup stgpool) to tape, then the files are
reconstructed from their pieces.
You cannot dedup on DISK stgpools.
DISK implies random access disk - e.g., devclass DISK.
FILE implies serial access disk
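A minimal sketch of a dedup-capable primary pool per Wanda's explanation - sequential-access FILE, not random-access DISK (names and sizes illustrative):

```
/* Dedup requires a sequential-access FILE pool, not DISK */
define devclass BIGFILE devtype=file directory=/tsmfile maxcapacity=50G mountlimit=64

/* Primary pool with server-side dedup and two identify workers */
define stgpool DEDUPPOOL BIGFILE maxscratch=500 deduplicate=yes identifyprocess=2
```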
Thank you Wanda! Much Appreciated!
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Prather, Wanda
Sent: Tuesday, June 21, 2011 9:09 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Deduplication and Collocation
https://www-304.ibm.com/support
Check the MOUNTLIMIT in the client definition.
It controls how many mount points in a sequential pool the client can use at
once.
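For what it's worth, as far as I know the per-node knob is MAXNUMMP, while MOUNTLIMIT sits on the device class; a hedged sketch with illustrative names:

```
/* Let this node use up to 4 mount points at once */
update node JIMNODE maxnummp=4

/* The device-class ceiling shared by all sessions */
update devclass FILEDC mountlimit=32
```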
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Jim
Neal
Sent: Thursday, March 10, 2011 4:16 PM
To:
Thanks Wanda! That worked perfectly! I owe you one!
Jim
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Prather, Wanda
Sent: Thursday, March 10, 2011 1:20 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Deduplication Question
Check
You're welcome. Been there, done that, got the scars to prove it! ;)
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Jim
Neal
Sent: Thursday, March 10, 2011 4:44 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Deduplication Question
Hi Andy,
Are you doing server- or client-side deduplication? What are the versions
of your TSM Client and Server?
Regards,
Mark L. Yakushev
From: Andrew Carlson naclos...@gmail.com
To: ADSM-L@vm.marist.edu
Date: 04/21/2010 12:36 PM
Subject:[ADSM-L] Deduplication Status
I have been
Server side dedup, Server V6.2, client V6.2.
On Wed, Apr 21, 2010 at 2:39 PM, Mark Yakushev bar...@us.ibm.com wrote:
Hi Andy,
Are you doing server- or client-side deduplication? What are the versions
of your TSM Client and Server?
Regards,
Mark L. Yakushev
From: Andrew Carlson
-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Andrew
Carlson
Sent: Wednesday, April 21, 2010 4:13 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Deduplication Status
Server side dedup, Server V6.2, client V6.2.
On Wed, Apr 21, 2010 at 2:39 PM, Mark Yakushev bar...@us.ibm.com