While I understand the three separate RAID array design, as
I have previously mentioned I don't think it is necessary for most AD
implementations because in general, the log file drive(s) will be sleeping. Most
people just do not generate enough churn to get IOs bumping on the log drive. The
one exception I have seen was when Eric was inflating his big DIT. The numbers
he was generating for log IOPS were far more than I have ever heard of anywhere
for AD.
With a generic DC across the board, it is the DIT drive
that takes the pounding. I haven't seen any x64 machines with a 64-bit OS on them
yet, so I don't know what that looks like, but obviously if there is enough RAM and
the DIT has gotten into cache, this will dramatically change the footprint; at that
point the OS disk, I would guess, becomes the busiest (excluding
environments with tons of writes to AD). Even so, I haven't seen an OS on a
DC that required its own dedicated spindles. While it is a cute idea for rolling
back from bad updates, I would rather have that figured out in extensive testing
beforehand than go through the extra work in production. I look at DCs as very
expendable: if I hurt one, I don't think twice about rebuilding it and
repromoting it. That is a very different design from, say, a SQL Server or
an Exchange Server, which generally isn't expendable. So anyway, for a generic DC
configuration, anything that increases the number of spindles for the DIT is
where I go. If that means slapping the OS and logs on with it, I am fine with it
because in the hundreds of perf logs I have had to wade through, the OS and logs
are a rounding error in IOPS next to the DIT drive.
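To see this for yourself rather than take my word for it, you can tally the per-drive transfer rates out of a Perfmon/typeperf CSV export. Here is a minimal sketch; the column names and sample numbers are made up for illustration, and a real export would have full counter paths like `\LogicalDisk(D:)\Disk Transfers/sec` instead:

```python
import csv
import io
import statistics

# Illustrative Perfmon-style CSV export. Real counter paths and the
# numbers below are invented for this sketch, not measured data.
SAMPLE = """timestamp,dit_iops,log_iops,os_iops
06/23/2006 06:00:00,310,12,25
06/23/2006 06:00:15,295,8,19
06/23/2006 06:00:30,342,15,31
06/23/2006 06:00:45,288,10,22
"""

def average_iops(csv_text):
    """Return {drive_column: mean IOPS} from a flat counter export."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    drives = [k for k in rows[0] if k != "timestamp"]
    return {d: statistics.mean(float(r[d]) for r in rows) for d in drives}

if __name__ == "__main__":
    avgs = average_iops(SAMPLE)
    for drive, v in sorted(avgs.items(), key=lambda kv: -kv[1]):
        print(f"{drive}: {v:.1f} avg IOPS")
    print("busiest:", max(avgs, key=avgs.get))
```

With numbers shaped like the ones I keep seeing, the DIT column dwarfs the OS and log columns, which is the whole point.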
I believe 5000 is the number mentioned in the guidance from
MSFT, and again, as I said in the last post, it generally isn't great to make a
decision on numbers unless you have a feeling for usage as well. I can pretty much
guarantee that for a DC in a site with 5000 users and also a couple of really busy
Exchange servers, a 32-bit GC will get pounded into performing inadequately. I
have seen it several times, and they are always built per that silly MSFT
deployment doc. Interestingly, I asked 3 MCS folks and Eric how to build a DC for a
given site. The green MCS guy said exactly what the MSFT
doc said: some mirrors. The two other MCS folks, with heavy Exchange Enterprise
experience, said to use RAID 10, 0+1, or 5. Eric said to use x64 (he always has to
be different), but after I pressed him he said to maximize the spindles as well.
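The "maximize the spindles" advice falls out of simple back-of-envelope math. Here is a rough sketch using the classic write penalties (RAID 1/10 write twice, RAID 5 pays roughly four I/Os per write); the per-spindle IOPS figure and workload mix are assumptions, not measurements from any real DC:

```python
# Back-of-envelope RAID throughput estimate. The write penalties are the
# textbook values; spindle speed and read/write mix below are assumed.
WRITE_PENALTY = {"RAID1": 2, "RAID10": 2, "RAID5": 4}

def frontend_iops(spindles, iops_per_spindle, level, write_fraction):
    """Host-visible IOPS an array can sustain for a given read/write mix."""
    raw = spindles * iops_per_spindle
    penalty = WRITE_PENALTY[level]
    # Each logical write costs `penalty` backend I/Os; reads cost one.
    return raw / ((1 - write_fraction) + write_fraction * penalty)

if __name__ == "__main__":
    # e.g. six 10k spindles (~130 IOPS each, assumed), 30%-write workload
    for level in ("RAID10", "RAID5", "RAID1"):
        print(level, round(frontend_iops(6, 130, level, 0.30)), "IOPS")
```

Whatever level you pick, more spindles scale the raw number, which is why every answer except the green MCS guy's boiled down to "add disks."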
If you are speaking with a hardware company for
recommendations, they are pretty much going to quote you what the software
company said; they pretty much need to. If they thought about it and said, "No, you
should change course and buy more hardware at 2000," you might look at them and
say, "Hey now, you are just trying to sell more hardware." If they say, "Oh no, do
it at 10,000," and then it breaks, you use the MSFT guidelines to beat them,
saying they gave bad advice.
Me... I would rather overbuild my DCs and be happy and bored
when utilization goes over what was expected and the DCs are still purring along,
than live on the edge while people wonder what is going on and you start
having to look at every single perf counter that was recorded for a week trying
to work out exactly which component is the one screwing you. Hardware is CHEAP!
Downtime and poor performance are EXPENSIVE. Never mind the outages and slow
email: it is far more expensive to bring in someone like me to spend
hours or days figuring out that you should have bought an extra 1 GB of
RAM or not followed the silly multiple-mirror design. Plus, later,
if you decide to add more functionality or upgrade your OS, you aren't sitting
with a design that was built for that machine at one point in time on the
assumption that nothing would change, scrambling for hardware to
cover whatever new thing you want to do.
The hardest thing is designing for a greenfield
installation... Say you are moving from some other NOS or from a mainframe
environment to Windows. You have no clue what the load is going to be because
there is nothing to look at, so you don't know if you are under- or overbuilding.
Then, unfortunately, the number of users gets more important, as it is the only
real starting point you have.
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Myrick, Todd (NIH/CC/DCRI) [E]
Sent: Friday, June 23, 2006 6:41 AM
To: [email protected]
Subject: RE: [ActiveDir] DC Configuration

Some of my opinions based on my own research.
Follow-up thoughts looking for group input.
With regards to when it is best to use software RAID: I have debated this with several people, and I seem to favor this approach in Virtual Server environments and on the system/boot partition for DR purposes. Another possible use for software-based mirroring might be to create a live copy of a server for duplication purposes (personally, I think there are much better approaches out there). Any thoughts on this?
What disk type do you all recommend? I currently stick with Basic disks for the most part (unless I want to use software-based fault tolerance).
Thanks,
Todd
From: Al Mulnick [mailto:[EMAIL PROTECTED]
Interesting how much traffic this subject has garnered.
But I have to ask, why? I mean, we haven't even heard the performance requirements and you're ready to put this on extra hardware, no questions asked. What if he only had about 500 users? Would that still hold? What if it were a largely distributed environment and they had a network such that they needed many smaller DCs vs. fewer larger ones? Maybe a branch office environment?
I hate software RAID (joe's sure to put that definition in a wiki somewhere) because of the false sense of hope it gives the implementer. But I do understand the idea of using the least amount of hardware for the task at hand and not a penny more than is needed. Not that I'm even coming close to endorsing software-level RAID - far from it.
So why not a RAID 1 partition that holds all the OS, binaries, log files, file and print facilities etc?
It's a distributed app and could very easily work to the
specs needed in a largely distributed architecture. Were RODC available, it
might be chosen for some of the ones I have in mind.
I'm sure you feel I'm baiting you and picking on you, Gil, but I am curious what some of the thinking in the crowd is <G>
On 6/22/06, Gil Kirkpatrick <[EMAIL PROTECTED]> wrote: OS, DIT, logs on separate spindles.