Point taken, but I'd argue your savings over the long haul are
probably fairly small. Real, sure, but not worth planning around. If
this is really what you're looking at, you're better off going to W2K03
to get LVR, SIS, and tunable compression settings. As long as you're not
notifying across the site boundary you won't really get a huge win here,
but maybe that's just my gut; I can't speak to it anecdotally.

That said, unless you have a heavily constrained pipe, replication is
typically scheduled on an as-acceptable-latency basis, not around
compression benefits. Compression is a side benefit, not a reason to
schedule replication. Considering that we replicate per attribute (and,
with LVR, per value), there is very little cost to changing a single
attribute even on a large object: metadata plus the new value. The
savings, if you're looking at aggregate bandwidth, come from the fact
that you replicate the attribute only once per interval rather than once
per change. You could change it 10 times today but replicate it only
once if the interval is 24 hours, vs. an hourly interval where you might
replicate it 10 times. Unless of course the 10 changes are all value
adds/removes using LVR... then it's no different.
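To put rough numbers on the interval effect: here is a hypothetical sketch in Python, not the actual AD wire protocol. The function and its parameters are invented for illustration only.

```python
# Hypothetical model of interval-based replication: a regular attribute
# replicates at most once per interval (only the final value ships),
# while LVR value adds/removes each replicate regardless of interval.
def values_shipped(changes, interval_hours, hours=24, linked_value=False):
    intervals = hours // interval_hours
    if linked_value:
        return changes              # every add/remove goes on the wire
    return min(changes, intervals)  # at most one update per interval

print(values_shipped(10, 24))                     # -> 1  (24h interval)
print(values_shipped(10, 1))                      # -> 10 (hourly)
print(values_shipped(10, 24, linked_value=True))  # -> 10 (LVR: no savings)
```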

~Eric






-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of joe
Sent: Monday, March 08, 2004 11:07 PM
To: [EMAIL PROTECTED]
Subject: RE: [ActiveDir] DC Replication Bandwidth Issue

I was simply going by the idea that the less often you replicate, the
more changes pile up to be replicated. The more data that has to be
compressed, *generally* the better the compression ratios you get - this
is standard compression-algorithm magic: larger data sets expose more
(and longer) repeated patterns, and the more patterns present
themselves, the better your crunch ratio. The exception is when your
data is effectively random, such as password hashes, which tend to
compress horribly due to very little pattern repetition.
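The pattern-repetition point is easy to demonstrate with any general-purpose compressor. A quick sketch using Python's zlib - the sample data is made up, and AD's actual replication codec is unrelated to zlib; this only illustrates the general principle:

```python
import os
import zlib

# Highly repetitive data (think many similar directory attribute values)
repetitive = b"CN=SomeUser,OU=Staff,DC=example,DC=com;" * 1000
# Random data of the same size (like password hashes): barely compresses
random_like = os.urandom(len(repetitive))

for label, blob in [("repetitive", repetitive), ("random", random_like)]:
    packed = zlib.compress(blob, 9)
    print(f"{label}: {len(blob)} -> {len(packed)} bytes "
          f"(ratio {len(blob) / len(packed):.1f}:1)")
```

The repetitive buffer crunches down to a tiny fraction of its size, while the random buffer stays essentially as large as it started.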

Also, until you hit (I think) 32k in data volume, you don't get
compression at all. So if you are replicating a few changes often, your
chances of compression - and the ratios you get out of it, if any - are
lower. Say you make a change that is 10k in size, you make the change 3
times an hour, and you are set to replicate every 15 minutes: you will
not get compression. Set your replication to once an hour and you should
hit the compression point, at which point you will send less data in
that hour (not even counting metadata). Go two hours and you will most
likely get even better compression ratios and an even greater savings in
traffic over replicating every 15 minutes (again, not counting
metadata). If your changes are all password hash changes, all bets are
off, because that is some of the most random data that will go across
the replication thread.
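Running that scenario through a toy model makes the threshold effect concrete. Everything here is assumed for illustration: the ~32k cutoff is a recollection from this thread, and the 3:1 ratio is a placeholder, not a measured AD figure.

```python
THRESHOLD = 32 * 1024   # assumed compression cutoff ("I think 32k")
ASSUMED_RATIO = 3.0     # placeholder compression ratio, not measured

def wire_bytes(change_size, changes_per_hour, interval_minutes, hours=2):
    """Total bytes sent over `hours`, ignoring metadata, in this toy model."""
    cycles = int(hours * 60 / interval_minutes)
    per_cycle = change_size * changes_per_hour * interval_minutes / 60
    if per_cycle >= THRESHOLD:       # batch large enough to compress
        per_cycle /= ASSUMED_RATIO
    return int(per_cycle * cycles)

# 10k changes, 3 per hour, measured over a 2-hour window:
print(wire_bytes(10 * 1024, 3, 15))   # -> 61440 (never crosses the cutoff)
print(wire_bytes(10 * 1024, 3, 120))  # -> 20480 (batched past the cutoff)
```

One quirk of these exact numbers: 3 x 10k per hour lands just under the 32k cutoff, so in this toy arithmetic the hourly interval still sends uncompressed and the savings only kick in at the two-hour interval.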

When we did initial testing of this way back when, I seem to recall
getting some pretty good compression numbers when pushing larger volumes
of data.

I suppose you could argue that the Windows replication compression
algorithms give a static compression ratio across the board, so that you
get a constant savings whether replicating 100k or 200k - and if so,
fair enough - but I seem to recall seeing differently in our testing.

  joe


-------------
http://www.joeware.net   (download joeware)
http://www.cafeshops.com/joewarenet  (wear joeware)
 
 

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Eric Fleischman
Sent: Monday, March 08, 2004 11:43 PM
To: [EMAIL PROTECTED]
Subject: RE: [ActiveDir] DC Replication Bandwidth Issue

I'll bite.

> I wouldn't expect a lot of replication unless you are making lots of 
> changes, but you can tune it by modifying the schedule to get the max 
> benefit out of the replication packet compression

What does that mean? I don't see the relationship between frequency of
replication and compression benefits. Beyond not pushing metadata more
than once when you replicate less frequently for a rapidly changing
object (and a few other small attributes that are brought along for the
ride), what benefit do you realize here? How does frequency of
replication play into compression of changes?

~Eric



-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of joe
Sent: Monday, March 08, 2004 9:33 PM
To: [EMAIL PROTECTED]
Subject: RE: [ActiveDir] DC Replication Bandwidth Issue

LOL.

I wouldn't expect a lot of replication unless you are making lots of
changes, but you can tune it by modifying the schedule to get the max
benefit out of the replication packet compression. Actually, you will
probably have less traffic overall, as your logons and other things
using the DCs won't have to traverse the WAN.

Make sure you have SP4 or the out-of-band quickie password-replication
hotfix in place, unless you make sure that password changes for the
users at the remote location happen on the remote DC. As you turn up the
latency to tune replication, that could become more troublesome, were it
not for the hotfix/SP.

  joe
 


-------------
http://www.joeware.net   (download joeware)
http://www.cafeshops.com/joewarenet  (wear joeware)
 
 

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Network
Administrator
Sent: Wednesday, March 03, 2004 10:17 PM
To: [EMAIL PROTECTED]
Subject: [ActiveDir] DC Replication Bandwidth Issue

I have an upcoming project that I'd like to seek some input on.

I'm looking at building a third domain controller for a tiny domain of
about
250 users.  Currently, we have two domain controllers at a central
location
where approximately 85% of our users reside.  The rest of our users are
at
branch locations connected by 128k links that aren't horribly taxed.

I'd like to place the third domain controller at one of the branch
locations
as a "disaster recovery" box that will be capable of processing domain
authentications and other DC-related functions in case our central
location is hit by some catastrophe.  Since this is a single-site,
single-domain,
single forest topology, I don't necessarily need this box to do anything
other than replicate domain information and critical services (DNS,
WINS,
etc.) on a semi-regular basis.  How much bandwidth do you guys think
this
box will take?  Again, it is a tiny domain with approximately 250 users
and
225 workstations.  It won't hold any FSMO roles; I'll just seize them
from
from
the console at the branch location if Joe's volcano makes it all the way
over to Kalamazoo.

-James R. Rogers

List info   : http://www.activedir.org/mail_list.htm
List FAQ    : http://www.activedir.org/list_faq.htm
List archive:
http://www.mail-archive.com/activedir%40mail.activedir.org/
