It's a new day, so it must be time for more SPP questions.
For the folks running SPP, what deduplication ratios are you seeing? So far
I'm still in the testing phase, with approximately 10 VMs that I'm running
backup testing on. Two of the VMs have similar database footprints (one is qa
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Stefan
Folkerts
Sent: Thursday, April 12, 2018 2:10 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: disabling compression and/or deduplication for a client backing up
against deduped/compressed directory-based storage pool
I understand, I didn't
To: ADSM-L@VM.MARIST.EDU
Subject: disabling compression and/or deduplication for a client backing up
against deduped/compressed directory-based storage pool
Hi Arnaud,
Did you run TSM server instrumentation?
It could help to identify where the issue is.
We have a TSM server that is connected
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Stefan
Folkerts
Sent: Wednesday, April 11, 2018 7:43 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: disabling compression and/or deduplication for a client backing up
against deduped/compressed directory-based storage pool
That's no fun, maybe we can help!
What storage are you using for your active log
whole working together, so far
> without real success: performance is horrible :-(
>
> Cheers.
>
> Arnaud
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: Thursday, April 05, 2018 5:48 P
Subject: Re: disabling compression and/or deduplication for a client backing up
against deduped/compressed directory-based storage pool
Hi,
With the directory container pool you cannot, as far as I know, disable
the attempt to deduplicate the data, and if the data is able to
settings for deduplication will
have no effect on compression within the pool.
Are you using an IBM blueprint configuration for the Spectrum Protect
environment?
Regards,
Stefan
On Tue, Apr 3, 2018 at 6:06 PM, PAC Brion Arnaud <arnaud.br...@panalpina.com
> wrote:
> Hi All,
>
Hi All,
Following global client backup performance issues on a new TSM server, which
I suspect are related to the workload induced on the TSM instance by
deduplication/compression operations, I would like to do some testing with a
client, selectively disabling compression or deduplication
Hi Anders,
Wished it would be that simple ...
Unfortunately, there are quite a lot of discrepancies between the data
reported by our query and the output from "q stg", as demonstrated here:
Output for "q stg xxx f=d"
DIR_DB2 : Deduplication Sav
Hi
This is simple math:
select stgpool_name,
       DEDUP_SPACE_SAVED_MB / (DEDUP_SPACE_SAVED_MB + COMP_SPACE_SAVED_MB +
         (EST_CAPACITY_MB * PCT_UTILIZED / 100)) * 100 || '%'
       as "Dedup savings"
from stgpools
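The formula in that select is easy to sanity-check outside the server; a minimal Python sketch of the same arithmetic (column names taken from the STGPOOLS view; the sample values below are hypothetical):

```python
# Dedup savings as a percentage of the original (pre-reduction) data,
# mirroring the SELECT against the STGPOOLS view above.
def dedup_savings_pct(dedup_saved_mb, comp_saved_mb, est_capacity_mb, pct_utilized):
    stored_mb = est_capacity_mb * pct_utilized / 100          # physical data in the pool
    original_mb = dedup_saved_mb + comp_saved_mb + stored_mb  # size if nothing were reduced
    return dedup_saved_mb / original_mb * 100

# Hypothetical pool: 500 GB saved by dedup, 300 GB by compression, 200 GB stored.
print(round(dedup_savings_pct(512000, 307200, 1024000, 20), 1))  # 50.0
```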
Hi All,
Simple question: did any of you succeed in building a query that would
provide accurate statistics on deduplication and compression factors for the
new TSM directory-based pools?
I would simply like to get the following:
Stgpool name, space that would be used without dedup & compres
by the way. :-)
Kind regards,
Eric van Loon
Air France/KLM Storage Engineering
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Stefan
Folkerts
Sent: Wednesday, 12 April 2017 17:56
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Deduplication
Eric
Hoobler
> Sent: Monday, 10 April 2017 18:12
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: Deduplication
>
> Hi Eric,
>
> A few things:
>
> - Client-side provides better overall throughput for Spectrum Protect
> because the deduplication is spread across more CPUs. So if you
?
Thanks again for your help!
Kind regards,
Eric van Loon
Air France/KLM Storage Engineering
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Del
Hoobler
Sent: Monday, 10 April 2017 18:12
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Deduplication
Hi Eric
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Deduplication
Perhaps the client side dedupe is keeping a dedupe hash-bitmap that is not
getting fully refreshed when you purge the backup data from the server?
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU
Hi Eric,
A few things:
- Client-side provides better overall throughput for Spectrum Protect
because the deduplication is spread across more CPUs. So if you can
afford to do the deduplication client-side, that is the best overall
result.
- Client-side helps reduce network traffic
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Deduplication
Hi guys!
We are trying to make a fair comparison between server- and client-side
deduplication. I'm running into an 'issue' where I notice that once you created
a backup of a certain set of data, it is always deduplicated 100% afterwards
when you start a new client-side deduped backup. Even
Yes.
On TSM 7.1.6 I'm observing the following savings during a full backup:
09/12/2016 01:33:03 ANR0951I Session 424697 for node SAP_PEP processed
1 files by using inline data deduplication or compression, or both. The
number of original bytes was 319,384,083,580. Inline data deduplication
reduced the d
Hi Martin,
From what I have gathered, it looks like the HANA database is backed up in
such big objects that deduplication fails to deduplicate them; I have seen
that deduplication does something for the transaction logs but not for the
full backups.
Snippet from actlog when transaction l
Hi,
Very often SAP HANA admins use data compression to save memory.
If so, deduplication efficiency will fall.
Efim
> On 7 Sep 2016, at 11:20, Martin Janosik <martin.jano...@cz.ibm.com>
> wrote:
>
> Hello all,
>
> is anyone storing SAP HANA backups (using
Hello all,
is anyone storing SAP HANA backups (using Data Protection for ERP) in
directory storage pools?
What are deduplication savings in your environment?
In our environment we see only 40% savings (35%-50%), compared to the
predicted dedup savings of 1:9 (this ratio is currently valid for backups
That is because the container (cloud or directory) manages deduplication.
As the data is ingested, Spectrum Protect determines if the data is to be
deduplicated. Inside the storage pool, you will see two types of
containers: one that is deduplicated and one that is not. To
answer
Hi
Can I create a deduplication-enabled storage pool on Cleversafe cloud
storage with TSM 7.1.5? I can see that there are flags to enable or disable
encryption for on-premises storage, however there are no flags to enable or
disable deduplication or compression.
Thanks
I am actually seeing the
> same issue when doing Sybase ACE backups, again large objects, but still
> under 50GB
>
> I see good deduplication on MSSQL and Domino backups, in directory
> container pools,
>
> Eric, HANA is SAP's own in-memory database, not Oracle.
>
> I have client
Hi Guys,
Thanks for the feedback. My feeling is that it must be that the HANA API
does not split the objects into smaller chunks; I am actually seeing the
same issue when doing Sybase ACE backups, again large objects, but still
under 50GB.
I see good deduplication on MSSQL and Domino backups
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Deduplication and database backups
I've seen plenty of databases go to container pools and get fair to good
deduplication results even on the first backup.
It should not matter that it is one large object; it will make the chunks
larger, but normally you should still get some deduplication as long as it's
not encrypted.
It would
Hi all,
I want to hear what others are doing with regard to deduplication and large
files / database backups.
On a recent setup we are taking backups of a SAP HANA system to a directory
container; I see great dedup stats when the system is doing log backups,
but I get no deduplication effects
-L@VM.MARIST.EDU
Sent: Wednesday, 23 March 2016 10:27:58
Subject: Re: [ADSM-L] Real world deduplication rates with TSM 7.1 and container
pools
Erwann,
Thanks for your input: I had a look at the video, which clarified a few
points, but not all of them ...
One question : if using deduped and compre
SIMON
Sent: Tuesday, March 22, 2016 11:18 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Real world deduplication rates with TSM 7.1 and container pools
Hi all,
TSM deduplication is effective combined with compression. Without compression,
I'm not sure that it's worth what it costs (a 1:2.5 ratio, or 65%, is what I
generally see with mixed data and "standard" retention).
You should all watch Tricia Jiang's video on YouTube
I'll third the odd percentages... using 7.1.3.100.
tsm: TSMPRD02>select sum(reporting_mb) from OCCUPANCY where
stgpool_name='SASCONT0'
Unnamed[1]
--
182520798.90
tsm: TSMPRD02>q stg sascont0
Storage Device
> From: PAC Brion Arnaud <arnaud.br...@panalpina.com>
> To: ADSM-L@VM.MARIST.EDU
> Date: 03/22/2016 10:48 AM
> Subject: Re: Real world deduplication rates with TSM 7.1 and container
pools
> Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
Total Data Protected (MB): 167
Total Space Used (MB): 36
Total Space Saved (MB): 131
Total Saving Percentage: 78.34
Deduplication Savings: 137,056,854
Deduplication Percentage: 78.34
Non-Deduplicated Extent Count: 8,161
Non-De
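As a sanity check, the "Total Saving Percentage" in output like the above is just saved space over protected space; a quick sketch using the rounded MB figures shown (the small gap versus the reported 78.34 comes from rounding in the MB columns):

```python
# Values copied from the "q stg" output above (rounded to whole MB).
protected_mb = 167   # Total Data Protected (MB)
used_mb = 36         # Total Space Used (MB)
saved_mb = 131       # Total Space Saved (MB)

# Saving percentage = saved / protected; the rounded inputs give ~78.4,
# close to the 78.34 the server reports from unrounded byte counts.
pct = saved_mb / protected_mb * 100
print(round(pct, 2))  # 78.44
```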
Sent: March 22, 2016 07:11
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Real world deduplication rates with TSM 7.1 and
container pools
Hi David,
No. Only newly stored data will be compressed.
Del
"ADSM: Dist Stor Manager"
> Date: 03/22/2016 10:19 AM
> Subject: Re: Real world deduplication rates with TSM 7.1 and container
pools
> Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
>
> Hi,
>
> but data that is already client-side compressed isn't affected, is it?
> what if da
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Real world deduplication rates with TSM 7.1 and container pools
Hi,
but data that is already client-side compressed isn't affected, is it?
What if data is replicated? Is that data then compressed?
Regards,
Alex
From: Del Hoobler <hoob...@us.ibm.
Hi,
but data that is already client-side compressed isn't affected, is it?
What if data is replicated? Is that data then compressed?
Regards,
Alex
From: Del Hoobler <hoob...@us.ibm.com>
To: ADSM-L@VM.MARIST.EDU
Date: 22.03.2016 15:14
Subject: Re: [ADSM-L] Real world dedupli
M-L@VM.MARIST.EDU
> Date: 03/22/2016 09:42 AM
> Subject: Re: Real world deduplication rates with TSM 7.1 and container
pools
> Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
>
> Del,
>
> After upgrading to 7.1.5 is there a way to get pre-existing
] Real world deduplication rates with TSM 7.1 and container
pools
I think most of you know Spectrum Protect just added in-line compression
to the container and cloud deduplicated pools in version 7.1.5:
https://urldefense.proofpoint.com/v2/url?u=http
.br...@panalpina.com>
To: ADSM-L@VM.MARIST.EDU
Date: 03/22/2016 03:52 AM
Subject:[ADSM-L] Deduplication questions, again
Sent by:"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
Hi All,
Another question in regards of TSM container based deduplicated pools ...
feedback.
Thank you,
Del
"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 03/22/2016
05:36:46 AM:
> From: PAC Brion Arnaud <arnaud.br...@panalpina.com>
> To: ADSM-L@VM.MARIST.EDU
> Date: 03/22/2016 05:
few lines ...
Compressed: No
Deduplication Savings: 0 (0%)
Compression Savings: 0 (0%)
Total Space Saved: 0 (0%)
Auto-copy Mode:
Contains Data Deduplicated by Client?:
Maximum Simultaneous Writers: No Limit
improvement by IBM!
Of course, only MHO ...
Cheers.
Arnaud
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Del
Hoobler
Sent: Monday, March 21, 2016 10:53 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Real world deduplication rates with TSM 7.1 and
I think most of you know Spectrum Protect just added in-line compression
to the container and cloud deduplicated pools in version 7.1.5:
http://www.ibm.com/support/knowledgecenter/SSGSG7_7.1.5/srv.common/r_techchg_srv_compress_715.html
Adding incremental forever, the new in-line deduplication
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Real world deduplication rates with TSM 7.1 and container pools
We have around 900 VMs in 2 container pools and we get a 35% dedup
percentage.
Regards,
Bas van Kampen
On 18-3-2016 15:41, PAC Brion Arnaud wrote:
> Hi All,
>
> We are currently te
We have around 900 VMs in 2 container pools and we get a 35% dedup
percentage.
Regards,
Bas van Kampen
On 18-3-2016 15:41, PAC Brion Arnaud wrote:
Hi All,
We are currently testing TSM 7.1 deduplication feature, in conjunction with
container based storage pools.
So far, my test TSM instances
] On Behalf Of Stefan
Folkerts
Sent: Friday, March 18, 2016 5:32 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Real world deduplication rates with TSM 7.1 and container pools
We see around 50-65% deduplication savings on the fileclass storagepools,
most common seems to be around 55%.
It requires what I call
Hi All,
We are currently testing the TSM 7.1 deduplication feature, in conjunction
with container-based storage pools.
So far, my test TSM instances, installed with such a setup, are reporting a
dedup percentage of 45%, meaning a dedup factor of around 1.81, using a
sample of clients which
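That 45% / 1.81 relationship is the standard conversion between a savings percentage and a reduction factor; a minimal sketch of the arithmetic (Python used just for illustration):

```python
def reduction_factor(savings_pct):
    # 45% saved means the pool holds 55% of the original,
    # i.e. a 1 / 0.55 ~= 1.82x reduction factor.
    return 1 / (1 - savings_pct / 100)

def savings_pct(factor):
    # Inverse conversion: a 1:N reduction factor as a savings percentage.
    return (1 - 1 / factor) * 100

print(round(reduction_factor(45), 2))  # 1.82
print(round(savings_pct(9), 1))        # 88.9 -- a predicted 1:9 ratio implies ~89% savings
```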
But I am willing to bet this was a salesman promising
> "similar performance."
>
> There is no technology I know where any deduplication factor can be
> guaranteed. Perhaps "UP to 4" for certain kinds of data... And overall
> reduction of storage is what yo
Hi Arnaud
If IBM made that commitment in black and white, then you should hold their
feet to the fire. But I am willing to bet this was a salesman promising
"similar performance."
There is no technology I know where any deduplication factor can be
guaranteed. Perhaps "UP to
We see around 50-65% deduplication savings on the fileclass storage pools;
the most common seems to be around 55%.
It requires what I call "deep reclaims" with very low threshold values that
need a lot of time.
We are seeing 60-70% on container pools, but on average it is more like 65%,
though that is based
Subject: [ADSM-L] Windows 2012R2 Deduplication
Hi all,
I have a question about how you should handle backups when running file
system de-duplication in Windows.
I have a customer who runs Hyper-V; one guest in this Hyper-V setup is a file
server, with disks too big to handle with TSM
Hi all,
I have a question about how you should handle backups when running file
system de-duplication in Windows.
I have a customer who runs Hyper-V; one guest in this Hyper-V setup is a
file server, with disks too big to handle with TSM for Hyper-V, so that is
not an option (not that I think it
Sergio,
I don't have exact numbers, but from what I recall we were running
150-200MB/s; this is not the network load but the effective throughput using
client-side deduplication.
Use multiple sessions (how many is best is trial and error) and have fast
filepool storage as well, you will see a lot
I've been using source-side deduplication pretty successfully for most
of
my clients (Unix and Windows and TDP for MSSQL) for at least two years
now. The backup window for the source-side is significantly shorter for
Unix clients, minimally shorter for Windows and somewhat longer for
MSSQL
nodes
PM, Sergio O. Fuentes sfuen...@umd.edu
wrote:
Hello folks,
I've been using source-side deduplication pretty successfully for most of
my clients (Unix and Windows and TDP for MSSQL) for at least two years
now. The backup window for the source-side is significantly shorter for
Unix clients
Hello folks,
I've been using source-side deduplication pretty successfully for most of my
clients (Unix and Windows and TDP for MSSQL) for at least two years now. The
backup window for the source-side is significantly shorter for Unix clients,
minimally shorter for Windows and somewhat
I submitted this request to IBM TSM development, and am posting it on their
behalf:
TSM Data deduplication has been in the product since 6.1 (server side data
deduplication) and 6.2 (client side data deduplication) and is therefore
considered mature at the 7.1 version. The data deduplication
Awesome... thank you Dave!
On Tue, Dec 9, 2014 at 9:52 AM, Dave Canan ddca...@outlook.com wrote:
I submitted this request to IBM TSM development, and am posting it on
their behalf:
TSM Data deduplication has been in the product since 6.1 (server side data
deduplication) and 6.2 (client side
Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Bent
Christensen
Sent: Saturday, December 06, 2014 6:38 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] SV: TSM level for deduplication
Hi Thomas,
when you are calling 7.1.1 an utter disaster when it comes to dedup, then
what issues are you
Sent: December 8, 2014 08:34
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM level for deduplication
Bent,
TSM 7.1.1.000 had a bug that sometimes caused restores of large files to
fail. IBM considered the bug serious enough to warrant removing 7.1.1.000
from its software distribution servers.
Thomas Denier
?uid=swg24035122
Best regards,
Joerg Pohlmann
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Thomas Denier
Sent: December 8, 2014 08:34
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM level for deduplication
Bent,
TSM 7.1.1.000 had
, December 08, 2014 12:50 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM level for deduplication
FYI - 7.1.1.000 is still on the FTP site. 7.1.1.100 is also on the FTP site.
Ref http://www-01.ibm.com/support/docview.wss?uid=swg24035122
Best regards,
Joerg Pohlmann
-Original Message
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Thomas
Denier
Sent: Friday, December 05, 2014 2:56 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM level for deduplication
My management is very eager to deploy TSM deduplication in our
.
- Bent
From: ADSM: Dist Stor Manager [ADSM-L@VM.MARIST.EDU] On behalf of Thomas
Denier [thomas.den...@jefferson.edu]
Sent: 5 December 2014 20:56
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM level for deduplication
My management is very eager to deploy TSM
My management is very eager to deploy TSM deduplication in our production
environment. We have been testing deduplication on a TSM 6.2.5.0 test server,
but the list of known bugs makes me very uncomfortable about using that
level for production deployment of deduplication. The same is true
I am trying to determine the causes of two anomalies in the behavior of a
deduplicated storage pool in our TSM test environment. The test environment
uses TSM 6.2.5.0 server code running under zSeries Linux. The environment has
been using only server side deduplication since early September. Some
server does not otherwise know the contents of the files that are stored, so
if the data is in some encrypted state by a 3rd party, the TSM server is not
aware of this, and it could be eligible for deduplication. How effective
deduplication will be with such data depends on how well this encrypted
Here's an excerpt from official TSM documentation for TSM Server 7.1 as a
limitation for deduplication:
Encrypted files
The Tivoli Storage Manager server and the backup-archive client cannot
deduplicate encrypted files. If an encrypted file is encountered during data
deduplication processing
is not aware of this, and it could be eligible for deduplication. How
effective deduplication will be with such data depends on how well this
encrypted data lends itself to being deduped.
Thus the statement does not apply to data encrypted by a 3rd-party tool,
i.e., if the data has already been encrypted
I've searched the archives but I can't really find the answer
I'm looking for.
Running on version 6.3.1.0.
I have a primary storage pool running dedup. I run the ID
DEDUP command, the reclamation command, and the expiration command throughout
the
[mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Tyree,
David
Sent: Monday, July 28, 2014 11:42 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] deduplication status
I've searched the archives but I can't really find the answer
I'm looking for.
Running on version 6.3.1.0
Sent: Thursday, June 12, 2014 4:31 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM and VTL Deduplication
Yes, one of the two. If TSM deduplication is enabled and the target is a
virtual tape, I doubt the VTL can deduplicate anything from the write data.
Of
Ehresman,David E.
Sent: Friday, June 13, 2014 7:53 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM and VTL Deduplication
Just so we're all clear here.
You cannot TSM dedup to virtual tape, even though the virtual tape is
actually disk. TSM dedup has to go to a TSM-defined FILE storage pool, not a TSM
Greetings,
We are trying to evaluate the possibility of introducing deduplication into
our backups. Our initial deployment will be based on quadstor vtl
http://www.quadstor.com/virtual-tape-library.html but at the same time we are
trying to understand the TSM deduplication feature. Could
Subject: [ADSM-L] TSM and VTL Deduplication
Greetings,
We are trying to evaluate the possibility of introducing deduplication into
our backups. Our initial deployment will be based on quadstor vtl
http://www.quadstor.com/virtual-tape-library.html but at the same time we are
trying to understand
Understood. Thanks !
On Thu, 6/12/14, Ehresman,David E. deehr...@louisville.edu wrote:
Subject: Re: [ADSM-L] TSM and VTL Deduplication
To: ADSM-L@VM.MARIST.EDU
Date: Thursday, June 12, 2014, 5:33 AM
If TSM moves data from a
(disk) dedup pool
Subject: Re: [ADSM-L] TSM and VTL Deduplication
Understood. Thanks !
On Thu, 6/12/14, Ehresman,David E. deehr...@louisville.edu wrote:
Subject: Re: [ADSM-L] TSM and VTL Deduplication
To: ADSM-L@VM.MARIST.EDU
Date: Thursday, June 12, 2014, 5:33 AM
Be prepared for your database size to double or triple if you are using TSM
deduplication.
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Prather, Wanda
Sent: Thursday, June 12, 2014 7:15 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM and VTL
-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Gee, Norman
Sent: Thursday, June 12, 2014 10:55 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM and VTL Deduplication
Be prepared for your database size to double or triple if you are using TSM
deduplication
: [ADSM-L] TSM and VTL Deduplication
Be prepared for your database size to double or triple if you are using TSM
deduplication.
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Prather, Wanda
Sent: Thursday, June 12, 2014 7:15 AM
To: ADSM-L
Deduplication
To: ADSM-L@VM.MARIST.EDU
Date: Thursday, June 12, 2014, 8:47 AM
Hi,
I'd rather say 6 to 10 times, or 10 GB of DB for each 1 TB of data (native,
not deduped) stored.
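That rule of thumb (10 GB of TSM database per 1 TB of native, pre-dedup data) is easy to turn into a quick sizing estimate; a minimal sketch, with the 72 TB figure as a purely hypothetical input:

```python
DB_GB_PER_TB = 10  # rule of thumb from the thread: 10 GB of DB per 1 TB of native data

def estimate_db_gb(native_data_tb):
    # Rough TSM database size needed to track dedup chunks for this much data.
    return native_data_tb * DB_GB_PER_TB

# e.g. a hypothetical 72 TB file pool would call for roughly a 720 GB database
print(estimate_db_gb(72))  # 720
```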
--
Best regards / Cordialement / With my regards
Erwann SIMON
----- Original message -----
From: Norman Gee norman
Sent: Thursday, June 12, 2014 2:41 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM and VTL Deduplication
Thanks for all the answers. So SSDs (looking at SSD caching) for the database
storage, and 10GB per TB of total backup data to be on the safe side.
On Thu
Yes, one of the two. If TSM deduplication is enabled and the target is a
virtual tape, I doubt the VTL can deduplicate anything from the write data.
On Thu, 6/12/14, Ehresman,David E. deehr...@louisville.edu wrote:
Subject: Re: [ADSM-L] TSM
Deduplication Database Totals
-
Total Dedup Chunks in DB : 1171344436
Average Dedup Chunk Size : 447243.5
Deduplication Impact to Database and Storage Pools
Hi Christian,
No, it's not possible. The storage pool must be deduplication-enabled to be
able to run identify duplicates against it. If you enable your storage pool
for deduplication and start the identify process, your DB will start to grow
at a very fast rate and you won't be able
You can however use this to get an idea:
http://www-01.ibm.com/support/docview.wss?uid=swg21596944
On Tue, Mar 25, 2014 at 10:25 AM, Erwann Simon erwann.si...@free.fr wrote:
Hi Christian,
No, it's not possible. the Storage Pool must be deduplication enabled to
be able to run identify
Hi *SM-nerds.
I just wonder if it is possible to run a deduplication identify process on an
existing FILE-class storage pool without deduplicating any data?
We just want to know how much data would be deduplicated and whether it is
worth it on this storage pool.
We are running TSM 6.3.4.300
Hey, Nick, missed your name the first time around!
Being in higher-ed/research we went the cheap route and actually just
use direct-attach 15K SAS drives on Dell servers, divvied up into
multiple RAID-10 sets. Even a 1TB database only takes us ~1 hour to
backup or restore, which is well within
Hi Wanda,
I'm using deduplication and have found that TSM life would be much easier if
the stg pool was kept smaller, under 3TB in size. I haven't done enough
testing with this, and I know it is slightly counterproductive for achieving
the highest deduplication savings. But it sure does make
Wanda,
In trying to troubleshoot an unrelated performance PMR, IBM provided me
with an e-fix for the dedupdel bottleneck that it sounds like you're
experiencing. They obviously will want to do their due-diligence on
whether or not this efix will help solve your problems, but it has proved
very
Sergio and Wanda,
Thanks for your posts! I opened PMR 10702,L6Q,000 a couple weeks ago
for slow performance [recently completely fell off the cliff!] with our
SRV3 TSM
v6.3.4.200 service that *was* successfully doing client+server deduplication
for 72TB BackupDedup STGpool on NetApp FC [soon
: Prather, Wanda
Subject: Re: [ADSM-L] Deduplication number of chunks waiting in queue
continues to rise?
Wanda,
In trying to troubleshoot an unrelated performance PMR, IBM provided me with an
e-fix for the dedupdel bottleneck that it sounds like you're experiencing.
They obviously will want to do
] Deduplication number of chunks waiting in queue
continues to rise?
Sergio and Wanda,
Thanks for your posts! I opened PMR 10702,L6Q,000 a couple weeks ago for slow
performance [recently completely fell off the cliff!] with our
SRV3 TSM
v6.3.4.200 service that *was* successfully doing client+server