Like it says in the document, it's a recommendation and not a technical limit.
However, having the server running at 100% utilization all the time doesn't seem
like a healthy scenario.
Why aren't you deduplicating files larger than 1GB? From my experience,
datafiles from SQL, Exchange and such
I'm not fully aware of how the DD replicates data, but if you have 15-20TB/day
being written to your main DD, and that data is then replicated to the off-site
DD, how much data is actually replicated?
With a 1Gb/s connection, you could hit values up to 360GB/hour (expecting
100MB/s, which
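For reference, the hourly figure simply follows from the assumed sustained rate:
100 MB/s x 3,600 s/hour = 360,000 MB/hour, i.e. roughly 360 GB per hour, assuming
the 1Gb/s link is kept saturated the whole time.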
Hi TSMers,
I have a preschedule command that runs two cmd files to put an Essbase database
in read-only mode and then dump the database into a flat file. When the first
command is finished, it must start another command file to do the same, just on
a different database.
But when the first command is finished
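If the goal is to have the second dump start only after the first one has fully
completed, one common approach is to point PRESCHEDULECMD at a single wrapper
script that runs the two cmd files in sequence and checks the return codes. A
minimal sketch in Python, with the script names being hypothetical placeholders:

  import subprocess
  import sys

  # Run the two Essbase dump scripts one after the other; stop if the first fails.
  # The paths below are placeholders - substitute the real cmd files.
  for script in (r"C:\scripts\dump_db1.cmd", r"C:\scripts\dump_db2.cmd"):
      rc = subprocess.run(["cmd", "/c", script]).returncode
      if rc != 0:
          print(f"{script} failed with return code {rc}")
          sys.exit(rc)   # non-zero exit signals that the preschedule step failed

A plain .cmd wrapper using "call" would do the same job; the point is simply that
one wrapper serializes the two dumps so the second database is not touched until
the first dump has finished.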
Yep, we have the same thing with our Sepaton; all deduplication is done
inline. The reason I asked is because there seem to be other manufacturers who
need a 2nd box to do deduplication.
Regards
Daniel
Daniel Sparrman
Exist i Stockholm AB
Switchboard: 08-754 98 00
Fax: 08-754 97 30
So the data is both deduplicated and compressed before you
send it offsite?
Yes, that is how the DD handles replication.
DD is an inline dedup system. When data comes into the DD
it is deduped, what is left is compressed, and then it
is written to disk. Only the new unique data is replicated.
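To make the earlier 15-20TB/day question concrete: only the post-dedup,
post-compression data crosses the wire, so the replicated volume depends on how
much of each day's ingest is actually new. A back-of-the-envelope sketch in
Python; the dedup and compression ratios are illustrative assumptions only, not
DD specifications:

  # Rough estimate of daily DD replication volume and link time.
  daily_ingest_tb   = 17.5    # midpoint of the 15-20 TB/day figure from the thread
  dedup_ratio       = 10.0    # assumed: 1/10 of the ingest is new, unique data
  compression_ratio = 2.0     # assumed: the unique data compresses 2:1
  link_gb_per_hour  = 360.0   # ~100 MB/s sustained on a 1Gb/s link, as above

  replicated_tb = daily_ingest_tb / dedup_ratio / compression_ratio
  hours_on_link = replicated_tb * 1000 / link_gb_per_hour
  print(f"~{replicated_tb:.2f} TB replicated/day, ~{hours_on_link:.1f} hours of link time")

With those (made-up) ratios, roughly 0.9 TB/day actually gets replicated, which
fits comfortably within a day on the 1Gb/s link; with poor dedup ratios the same
ingest would not.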
Richard, excellent comments!
I will add that to TSM it is just storage; TSM has no idea about the deduplication,
compression, etc. that the DD performs, which makes it challenging to determine
the actual storage utilization from an individual client and/or file space
perspective.
Secondly, aside
The elephants in the room:
It is tempting, once DD gets in the door, to move all database backups (the
typical TDP/RMAN and SQL LiteSpeed stuff) directly to DD. (No TSM involved, so
save money on licenses?)
Combinations that have more advanced communications with the back-end storage
Good questions:
We are currently working on a project that is using ProtecTIER. The
ProtecTIER 7650G does dedup. It looks like a TS3500 with LTO drives. We will
be getting another 7650G at a second data center. The idea is to
cross-replicate between the data centers.
The elephant has left the building.
Do you get the same advanced features by just dumping data onto a DD as you do
with the TSM TDP clients? ExMerge anyone? Or perhaps an SQL dump?
Still have to do file backups, or wait, why not just use robocopy and copy it
onto the DD? Or what the heck, just
Anyone out there backing up a multiple-server SourceOne configuration? This
is the replacement product for EmailXtender. There is a script to run as a
preschedule command to set SourceOne up for backup. And then you also have to
back up the other servers in the SourceOne configuration while this is 'Paused'.
Bill,
We have one SourceOne server using a database on a separate server. We
run the job sequence using BMC's Control-M scheduler.
SourceOne server:
1) Activity Suspend vbs
2) Native Archive Suspend vbs
Database server:
3) Database export
4)
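For what it's worth, the SourceOne-server half of that sequence (steps 1 and 2)
can also be driven from a single preschedule wrapper. A minimal sketch in Python,
with the .vbs paths as hypothetical placeholders; the later steps run on the
database server and are what Control-M coordinates across the two machines:

  import subprocess
  import sys

  # Steps 1) and 2) from the sequence above, run in order on the SourceOne server.
  for script in (r"C:\SourceOne\ActivitySuspend.vbs",
                 r"C:\SourceOne\NativeArchiveSuspend.vbs"):
      rc = subprocess.run(["cscript", "//nologo", script]).returncode
      if rc != 0:
          sys.exit(rc)   # stop the chain if a suspend step fails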
Hi Daniel,
My main point was to say that your previous posts seemed to be saying that
dedup storage pools were recommended to be 6 TB in size at most. It is my
understanding that the 6TB recommendation was a daily server throughput maximum
design target when dedup is in use.
I agree, a processor at