While I understand the three-separate-RAID-array design, as I have previously mentioned I don't think it is necessary for most AD implementations because, in general, the log file drive(s) will be sleeping. Most people just do not generate enough churn to get I/Os bumping on the log drive. The one exception I have seen was when Eric was inflating his big DIT; the log IOPS numbers he was generating were far higher than I have ever heard of anywhere else for AD.
 
With a generic DC, across the board it is the DIT drive that takes the pounding. I haven't seen any x64 machines with a 64-bit OS on them yet to see what that looks like, but obviously if there is enough RAM and the DIT has gotten into cache, this will dramatically change the footprint, and at that point I would guess the OS disk becomes the busiest (excluding environments with tons of writes to AD). Even so, I haven't seen an OS on a DC that required its own dedicated spindles. While it is a cute idea for rolling back from bad updates, I would rather have that figured out in extensive testing beforehand than go through the extra work in production. I look at DCs as very expendable: if I hurt one, I don't think twice about rebuilding it and repromoting it. This is a very different design from, say, a SQL Server or Exchange Server, which generally isn't expendable. So anyway, for a generic DC configuration, anything that increases the number of spindles for the DIT is where I go. If that means slapping the OS and logs on with it, I am fine with that, because in the hundreds of perf logs I have had to wade through, the OS and logs are a rounding error in IOPS next to the DIT drive.
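To make the perf-log wading concrete, here is an illustrative sketch (mine, not from the post) of summarizing average "Disk Transfers/sec" per volume from a Performance Monitor CSV export, to check whether the OS and log volumes really are a rounding error next to the DIT volume. The column names follow perfmon's counter-path format; the server name, drive letters, and sample numbers are made up.

```python
# Hypothetical example: compute per-volume average IOPS from a
# perfmon-style CSV export. The counter paths and values below are
# invented sample data, not real measurements.
import csv
import io

sample_csv = """\
"Time","\\\\DC1\\LogicalDisk(C:)\\Disk Transfers/sec","\\\\DC1\\LogicalDisk(D:)\\Disk Transfers/sec","\\\\DC1\\LogicalDisk(E:)\\Disk Transfers/sec"
"10:00:00","3.2","210.5","1.1"
"10:00:15","2.8","198.7","0.9"
"10:00:30","4.1","225.3","1.4"
"""

def average_iops(csv_text):
    """Return {counter path: mean Disk Transfers/sec} from a perfmon CSV."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    totals = {col: 0.0 for col in header[1:]}  # skip the Time column
    rows = 0
    for row in reader:
        rows += 1
        for col, val in zip(header[1:], row[1:]):
            totals[col] += float(val)
    return {col: total / rows for col, total in totals.items()}

for vol, avg in average_iops(sample_csv).items():
    print(f"{vol}: {avg:.1f} transfers/sec")
```

With numbers like these (a DIT volume two orders of magnitude busier than the OS volume), the "rounding error" argument is easy to verify from your own logs.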
 
I believe 5000 is the number mentioned in the guidance from MSFT, and again, as I said in the last post, it generally isn't great to make a decision on numbers unless you have a feeling for usage as well. I can pretty much guarantee that a 32-bit GC in a site with 5000 users and a couple of really busy Exchange servers will get pounded into performing inadequately; I have seen it several times, and they are always built per that silly MSFT deployment doc. Interestingly, I asked the question of how to build a DC for a given site of three MCS folks and Eric. The green MCS guy said exactly what the MSFT doc said - some mirrors. The two other MCS folks, with heavy Exchange Enterprise experience, indicated to use RAID 10, 0+1, or 5. Eric said to use x64 (he always has to be different), but after I pressed him he said to maximize the spindles as well.
 
If you are speaking with a hardware company for recommendations, they are pretty much going to quote you what the software company said; they pretty much have to. If they said, no, you should change and buy more hardware at 2000 users, you might look at them and say, hey now, you are just trying to sell more hardware. If they say, oh no, do it at 10,000, and then it breaks, you use the MSFT guidelines to beat them up, saying they gave bad advice.
 
Me... I'd rather overbuild my DCs and be happy and bored when the utilization goes over what was expected and the DCs are still purring along, not living on the edge where people are wondering what is going on and you have to look at every single perf counter recorded for a week trying to work out exactly which component is screwing you. Hardware is CHEAP! Downtime and poor performance are EXPENSIVE. Never mind the outages and slow email themselves: it is far more expensive to bring in someone like me to spend hours or days figuring out that you should have bought an extra 1 GB of RAM or not followed the silly multiple-mirror design. Plus, later, if you decide to add more functionality or upgrade your OS, you aren't sitting with a design that was sized for that machine at one point in time on the assumption that nothing would change, and scrambling for hardware to cover whatever new thing you want to do.
 
The hardest thing is designing for a greenfield installation... Say you are moving from some other NOS or from a mainframe environment to Windows. You have no clue what the load is going to be because there is nothing to look at, so you don't know if you are under- or overbuilding. Then, unfortunately, the number of users gets more important, as it is the only real starting point you have.
 
 
--
O'Reilly Active Directory Third Edition - http://www.joeware.net/win/ad3e.htm 
 
 


From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Myrick, Todd (NIH/CC/DCRI) [E]
Sent: Friday, June 23, 2006 6:41 AM
To: [email protected]
Subject: RE: [ActiveDir] DC Configuration

Some of my opinions based on my own research.

 

  1. I prefer hot-swappable hardware RAID 1 for all boot/system partitions no matter what the role of the server is.  To me this gives the fastest disaster recovery option for situations you are unsure about with regard to OS updates and single-drive failures.  On a side note, we used to use three mirrors for our domain controller setups: one for system/boot/syslog, one for transaction logs, and one for data.  We mirrored this after our Exchange setup, except in Exchange we used RAID 5 arrays to store the data.
  2. With regard to number of spindles and performance, I discussed this with someone on the list before (Guido) and with people at HP, and we came to the conclusion that with the latest 15K drives you won't see any tangible performance improvement going with multiple mirrors unless your DCs service more than 5000 people in the location where the DC resides.
  3. Judging from the original poster's SMTP information, it looks like his organization has fewer than 5000 people in it, so I recommend his first option.

 

Follow-up thoughts looking for group input.

 

With regard to when it is best to use software RAID, I have debated this with several people, and I seem to favor this approach in Virtual Server environments and on the system/boot partition for DR purposes.  Another possible use for software-based mirroring might be to create a live copy of a server for duplication purposes (personally I think there are much better approaches out there).  Any thoughts on this?

 

What disk type do you all recommend?  I currently still stick with basic disks for the most part (unless I want to use software-based fault tolerance).

 

Thanks,

 

Todd

 

 

 


From: Al Mulnick [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 22, 2006 11:17 PM
To: [email protected]
Subject: Re: [ActiveDir] DC Configuration

 

Interesting how much traffic this subject has garnered. 

 

But I have to ask, why? I mean, we haven't even heard the performance requirements and you're ready to put this on extra hardware, no questions asked. What if he only had about 500 users? Would that still hold? What if it were a largely distributed environment and the network was such that they needed many smaller DCs vs. fewer larger ones? Maybe a branch office environment?

 

I hate software RAID (joe's sure to put that definition in a wiki somewhere) because of the false sense of hope it gives the implementer.  But I do understand the idea of using the least amount of hardware for the task at hand and not a penny more than is needed.  Not that I'm even coming close to endorsing software-level RAID - far from it. 

 

So why not a single RAID 1 partition that holds the OS, binaries, log files, file and print facilities, etc.?

 

It's a distributed app and could very easily work to the specs needed in a largely distributed architecture. Were RODC available, it might be chosen for some of the ones I have in mind. 
 

I'm sure you feel I'm baiting you and picking on you, Gil, but I am curious what some of the thinking in the crowd is. <G>

 


 

On 6/22/06, Gil Kirkpatrick <[EMAIL PROTECTED]> wrote:

OS, DIT, logs on separate spindles.

Enough memory to store the DIT + overhead.

-gil
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Al Lilianstrom
Sent: Thursday, June 22, 2006 1:24 PM
To: [email protected]
Subject: [ActiveDir] DC Configuration

We have some budget money to replace domain controllers this year. Not
all of them but probably half of them. We've pretty much decided on 64
bit Dell PowerEdge servers. Most of the discussion is about disk
configuration. Two schools of thought exist here.

1) 2x73GB 15K drives in RAID1. Carve up the volume at the OS level with
20GB or so for the OS and the remainder for NTDS, Sysvol, and system
state backups

2) Two sets of 2x73 10K drives in RAID1. The first set is for the OS,
the second is for NTDS, Sysvol, and system state backups.
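As a back-of-the-envelope comparison of the two options above, here is a sketch using rule-of-thumb per-spindle figures (roughly 180 random IOPS for a 15K drive and 130 for a 10K drive; these are my assumed numbers, not anything from this thread). In a RAID 1 pair, reads can be serviced by either member, so read capacity is roughly twice a single spindle.

```python
# Illustrative arithmetic only: the per-spindle IOPS figures are
# assumed rules of thumb, not measured values.
IOPS_15K = 180  # assumed random IOPS per 15K spindle
IOPS_10K = 130  # assumed random IOPS per 10K spindle

def raid1_read_iops(per_spindle):
    # Both mirror members can service reads in a RAID 1 pair.
    return 2 * per_spindle

# Option 1: one 15K mirror shared by OS, NTDS, Sysvol, and backups.
option1_shared = raid1_read_iops(IOPS_15K)

# Option 2: a 10K mirror for the OS plus a dedicated 10K mirror
# for NTDS, Sysvol, and backups.
option2_dit_only = raid1_read_iops(IOPS_10K)

print(f"Option 1: ~{option1_shared} read IOPS, shared with the OS")
print(f"Option 2: ~{option2_dit_only} read IOPS, dedicated to the DIT")
```

Under these assumptions the shared 15K pair actually offers more raw read capacity than the dedicated 10K pair, which is one reason the "more spindles for the DIT" argument elsewhere in this thread can cut against physically separating the OS.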

I've always liked physically separating the OS from the application
data. Others here like carving up the volume at the OS.

Any thoughts, opinions, suggestions?

       tia, al
--

Al Lilianstrom
CD/CSS/CSI
[EMAIL PROTECTED]
List info   : http://www.activedir.org/List.aspx
List FAQ    : http://www.activedir.org/ListFAQ.aspx
List archive: http://www.activedir.org/ml/threads.aspx

 
