Re: Deduplication candidates

2013-01-11 Thread Alex Paschal

Yes, I agree with you.  I can't think of a reason why most of the
database shouldn't dedup out.




Re: Deduplication candidates

2013-01-11 Thread Prather, Wanda
Interesting idea  -- Let us know what you find out!



Re: Restore symantec backup with TSM???

2013-01-11 Thread Hart, Charles A
You can convert it with a product known as Butterfly:
http://www.theregister.co.uk/2011/03/22/butterfly_software/ - looks cool.




Re: Restore symantec backup with TSM???

2013-01-11 Thread Chavdar Cholev
No.
You could check the IBM tool called Backup Migrator; it can be used for this,
but you will need TSM and Symantec BE together.

Regards
Chavdar



Re: Restore symantec backup with TSM???

2013-01-11 Thread Rick Adamson
Most likely the only option is to restore the data somewhere and re-back it up 
with TSM.



Re: Deduplication candidates

2013-01-11 Thread Rick Adamson
Thanks Wanda and Alex,
Yes, I too thought about the uniqueness of the data that makes up logs.
I guess I'm just second-guessing myself.
One approach I am thinking about, for the same issue with pre-2010 Exchange 
log files (legacy incrementals), is whether it wouldn't be better to just do 
full backups. Aside from the time to completion, overall storage requirements 
may be about the same, and full backups would in most cases speed recovery.
~Rick
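
To put rough numbers on Rick's point, here is a minimal sketch (all figures
are illustrative assumptions, not measurements from this thread) comparing a
weekly full plus incrementals against daily fulls landing in a dedup pool:

    # Illustrative storage model; every number here is an assumption.
    DB_SIZE_GB = 100     # size of one full backup
    CHANGE_RATE = 0.02   # fraction of the DB that changes per day
    DAYS = 7             # one week of backups

    # Weekly full + 6 daily incrementals (physical storage):
    full_plus_incr = DB_SIZE_GB + (DAYS - 1) * DB_SIZE_GB * CHANGE_RATE

    # Daily fulls into a dedup pool: the first full is stored whole,
    # and each later full adds only its changed (unique) blocks:
    daily_fulls_dedup = DB_SIZE_GB + (DAYS - 1) * DB_SIZE_GB * CHANGE_RATE

    # What the client actually sends with daily fulls:
    daily_fulls_logical = DAYS * DB_SIZE_GB

    print(f"full + incrementals, physical: {full_plus_incr:.0f} GB")
    print(f"daily fulls after dedup:       {daily_fulls_dedup:.0f} GB")
    print(f"daily fulls on the wire:       {daily_fulls_logical:.0f} GB")

Under these assumptions the physical footprints come out the same; the cost
of daily fulls shows up in backup time and network traffic, while restores
win because no incremental chain has to be replayed.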



Re: Restore symantec backup with TSM???

2013-01-11 Thread Stef Coene
On Friday 11 January 2013 03:46:21 you wrote:
> Hello,
>
> I've several media with a full backup previously made with Symantec
> NetBackup Enterprise Server v7.1.0.4... is it possible to import, catalogue
> and restore that backup with IBM TSM?
No.


Stef


Restore symantec backup with TSM???

2013-01-11 Thread dummycerberus
Hello,

I've several media with a full backup previously made with Symantec NetBackup 
Enterprise Server v7.1.0.4... is it possible to import, catalogue and restore 
that backup with IBM TSM?

Thanks in advance and best regards



Re: Deduplication candidates

2013-01-11 Thread Hart, Charles A
Also, the FILESPERSET parameter in Oracle RMAN will really get you.
ProtecTIER recommends a setting of no more than 4; we have seen 10, and the
ratio went from 10:1 to 2.5:1.
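
A minimal sketch of why multiplexing hurts (my illustration; real dedup
engines typically use variable-size chunking, but the effect is the same in
spirit): chunk-level dedup finds duplicates when the same byte stream
repeats, and interleaving several datafiles into one backup piece, which is
what a high FILESPERSET does, shuffles the byte layout between runs so the
chunks no longer line up:

    import hashlib, random

    CHUNK = 4096  # dedup chunk size (assumed)

    def chunks(data):
        return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

    def hit_rate(run1, run2):
        seen = {hashlib.sha1(c).digest() for c in chunks(run1)}
        return sum(hashlib.sha1(c).digest() in seen
                   for c in chunks(run2)) / len(chunks(run2))

    files = [random.Random(i).randbytes(256 * 1024) for i in range(4)]

    def interleave(order, slice_size=1000):
        # emulate multiplexing: pull slices from the files in turn
        pos, out = [0] * len(files), []
        while any(p < len(f) for p, f in zip(pos, files)):
            for i in order:
                if pos[i] < len(files[i]):
                    out.append(files[i][pos[i]:pos[i] + slice_size])
                    pos[i] += slice_size
        return b"".join(out)

    serial = b"".join(files)   # FILESPERSET=1: re-runs are byte-identical
    print("serial re-run hit rate:", hit_rate(serial, serial))
    # Multiplexed runs: same data, but the arrival order differs
    print("multiplexed re-run hit rate:",
          hit_rate(interleave([0, 1, 2, 3]), interleave([2, 0, 3, 1])))

The serial stream dedups completely against its re-run, while the two
multiplexed runs share almost no chunks even though they contain exactly the
same data.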



Re: Deduplication candidates

2013-01-11 Thread Alex Paschal

I second Wanda on the logs.  When you think about it, logs are unique
data, being entirely made of transactions in the order in which they
come in.  If they were identical to some other data, I'd start looking
around for Twilight Zone cameras.

On the other hand, I suppose I could imagine a test harness issuing the
exact same set of transactions to a test system multiple times.
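
A small sketch of that uniqueness (my illustration, with a made-up record
format): even when two systems run the same logical transactions, each log
record carries its own LSN and timestamp, so the byte streams, and therefore
the dedup chunks, differ:

    import hashlib, time

    CHUNK = 4096

    def chunk_hashes(data):
        return {hashlib.sha1(data[i:i + CHUNK]).digest()
                for i in range(0, len(data), CHUNK)}

    def make_log(ops, start_lsn, t0):
        # hypothetical record layout: LSN, timestamp, statement
        return "".join(f"{start_lsn + n:016x} {t0 + n:.6f} {op}\n"
                       for n, op in enumerate(ops)).encode()

    ops = [f"UPDATE accounts SET bal = bal - 1 WHERE id = {i % 50}"
           for i in range(5000)]

    log_a = make_log(ops, start_lsn=0x1000, t0=time.time())
    log_b = make_log(ops, start_lsn=0x9000, t0=time.time() + 3600)

    shared = chunk_hashes(log_a) & chunk_hashes(log_b)
    print(f"identical chunks between the two logs: {len(shared)}")  # ~0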




Re: Deduplication candidates

2013-01-11 Thread Prather, Wanda
Yep.

Oracle DBs: getting great dedup rates on the DBs (except the ones where they 
have turned on Oracle compression to start with - that is, the DB itself is 
compressed).

Poor dedup on the Oracle logs either way.

W 
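
A quick sketch of the compression point (my example, with zlib standing in
for whatever compression the source applies): two versions of a file that
differ by one small edit share almost every chunk in raw form, but almost
none once each version is compressed, because the edit cascades through the
rest of the compressed stream:

    import hashlib, zlib

    CHUNK = 4096

    def chunk_hashes(data):
        return {hashlib.sha1(data[i:i + CHUNK]).digest()
                for i in range(0, len(data), CHUNK)}

    base = "".join(f"INSERT INTO t VALUES ({i}, 'customer {i % 1000}');\n"
                   for i in range(30000)).encode()
    # tomorrow's copy: one small, same-length edit near the start
    changed = base.replace(b"'customer 7'", b"'customer X'", 1)

    raw = chunk_hashes(base) & chunk_hashes(changed)
    zipped = (chunk_hashes(zlib.compress(base)) &
              chunk_hashes(zlib.compress(changed)))

    print(f"shared raw chunks:        {len(raw)} of {len(chunk_hashes(base))}")
    print(f"shared compressed chunks: {len(zipped)}")   # typically ~0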



Re: Deduplication candidates

2013-01-11 Thread Rick Adamson
Though our TSM systems (6.3 and 5.5) use back-end de-dup (Data Domain), I also 
notice that log files for DBs such as pre-2010 Exchange using legacy backups, 
and DB2 log files, de-dup very poorly.
Originally I thought that our DBAs or Exchange admins were either compressing 
this data or storing it on compressed volumes, but I found no evidence of it. 
After seeing this conversation and giving it further thought, I wonder if 
others experience poor de-dup rates on these data types.
Thanks
~Rick



Deduplication candidates

2013-01-11 Thread bkupmstr
Thomas,
First off, with all the great enhancements and current high stability levels, 
I would recommend going straight to version 6.4.
As you have already stated, there are certain data types that are good 
candidates for data deduplication; your database backup data definitely is, 
and image files definitely aren't.

From my experience, Oracle export files are traditionally good dedupe 
candidates as well.
From what you describe, the SQL backup data, minus the compression, would 
also be a good candidate.

The one thing you do not mention is how many versions of this backup data you 
are keeping.

From my experience, unless you are keeping a minimum of 4 backup versions, the 
dedupe ratios will suffer. Too many times I see folks keeping only 2 backup 
versions, and they can't understand why they get very poor dedup rates.

Also be aware that with TSM deduplication you will have to ensure that you 
write the backup data to a target disk pool that has good enough performance 
to not negatively impact backup speed.
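
A rough way to see the version-count effect (a toy model with an assumed
change rate, not figures from this thread): the first version must be stored
whole, while each additional version contributes mostly duplicate chunks, so
the achievable ratio grows with the number of versions kept:

    # Toy model: dedup ratio vs. retained versions (assumed 5% change
    # between versions; adjust to taste).
    def dedup_ratio(versions, change_rate=0.05):
        logical = versions * 1.0                       # N logical copies
        physical = 1.0 + (versions - 1) * change_rate  # 1st copy + deltas
        return logical / physical

    for n in (2, 4, 8, 14):
        print(f"{n:2d} versions kept -> about {dedup_ratio(n):.1f}:1")

With only 2 versions the best this model can show is about 1.9:1, while 8
versions reach about 5.9:1: keeping too few versions caps the ratio no matter
how well the data itself dedupes.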



2nd Call for Papers for 11th TSM Symposium 2013

2013-01-11 Thread Claus Kalle
Hi all,

first of all: best wishes for 2013; may luck and good health accompany
you at all times!

Please find below information about the TSM Symposium to be
hosted by University of Cologne and Guide Share Europe (GSE German
Region) in September.

Registration has just opened - feel free to register right now!

If you plan to exhibit, please consider registering early, since booth
positions are allocated on a first-come, first-served basis.

Thanks and best regards,
Claus
---
11th GSE TSM Symposium 2013
Tivoli Storage Manager : Future Expectations
Hilton Gendarmenmarkt Berlin, 17.-20. September 2013

Call for papers
Call for exhibitors
Call for sponsorship

Background:

Following the ADSM-Workshops (starting 1994 at Karlsruhe University),
the well-received ADSM/TSM-Symposia (since 1999) at Oxford University
and the last TSM-Symposia at the Grandhotel Petersberg in 2009 and the
Hilton Dresden Hotel near the Frauenkirche in the Old town of Dresden in
2011, in 2013  the University of Cologne together with Guide Share
Europe is hosting the eleventh TSM Symposium from September 17th through
September 20th. The Symposium will be held in the Hilton Gendarmenmarkt
Hotel in the center of Berlin.

It will have been two years since the last Symposium in Dresden and
there will be plenty of Tivoli Storage Manager (TSM) related topics to
talk about.

This is your chance to join us in benefitting from the innovations, the
new and changed functionality in the recent and upcoming versions and to
hear what is expected to be implemented in TSM over the next couple of
years.

Come along and keep up with the very latest features and plans of the
TSM-team within IBM, gain hints and tips on upgrading your archive and
backup services, learn how to exploit the forthcoming new functionality,
and acquire additional technical insight into TSM. Take full advantage
of the presentations from IBM Development, as well as contributions from
other experienced and established TSM users. Best of all, meet and get
to know peer professionals in an open, informal atmosphere.

How to make a contribution:

We are seeking technical contributions in the form of papers and
presentations from the TSM user community - the system managers and
administrators who are arguably the real TSM experts - and this is an
excellent chance to share your knowledge and wisdom.

Possible topics / Ideas for a contribution

* Experiences / Usage Scenarios
* Protection of virtualized environments
* Data Protection products for applications (databases, mail, SAP)
* NDMP Backup and Restore Scenarios
* Dealing with Growth: HSM, Deduplication, ...
* New Tape and Library Technologies
* Data Security - Data shredding / Encryption
* Dealing with the Cloud: CRUD devices and Big Data

Technical contributions from TSM solution architects and related storage
system vendors are also sought: this is an opportunity to demonstrate
the flexibility and extensibility of TSM.

The conference language will be English; all talks must be given in English.

Information for Exhibitors and Sponsors:

We also offer excellent opportunities for exhibiting at the Symposium.
Your company may support the symposium by becoming an exhibitor or
sponsor. This will be a fine opportunity to show your company's
supportive engagement in TSM at many places during the conference and on
the conference materials (bags, badges and such).

Please see the registration web-pages for details and options available.

Calendar:

Open now          Call for papers
Open now          Call for exhibitors
Open now          Call for sponsorship
7th January 13    Registration open
30th April        Paper abstracts due
15th May          Notification of Speakers
13th July         Registration closes
16th August       Final papers due
17th September    Symposium begins

The technical committee comprises:

# Gerd Becker, Empalis GmbH
# Rolf Bogus, University of Heidelberg
# Kirsten Glöer, FIZ Karlsruhe
# Peter Groth, BTB GmbH
# Claus Kalle, University of Cologne
# Peter Micke, FRITZ & MACZIOL GmbH

Conference Layout:

The symposium will be held at the Hilton Gendarmenmarkt Berlin Hotel,
with early arrivals' registration opening on the afternoon of Tuesday 17
September 2013 adjacent to the 46th convention of GSE's AK-SMS. The
exhibitions will be open and the exhibitors will give their introductory
presentations before hosting an evening  reception.

The symposium presentations will start early on the following morning
Wednesday 18 September 2013, and conclude at lunchtime on Friday 20
September 2013.

Accommodation for 3 nights in the Hilton Gendarmenmarkt Berlin is included in
the basic symposium registration package for EUR 1150 for GSE members and
EUR 1390 for non-GSE members (the price for non-GSE members qualifies for up
to two years of free membership of GSE; see the web page for details). Other
packages are available. All meals and social events are included.

All arrangements will be handled through the German Cha