First off I will be honest, I don't like GGs. Didn't like them in NT4, don't
like them in NT5.x. However I have done quite a bit of work with
multi-master / multi-resource domain environments. Those who did single
master environments that were all centrally managed for all group
memberships probably like GGs. 


Now that I got that out of the way....

Heavy use of GGs in a multidomain environment (most specifically
Multi-Master/Multi-Resource, aka multi-multi) tends to bite you and tends to
become a security risk. Plus people like (IMO) to use them incorrectly by
assigning permissions directly to them.


Scenario 1.

I have a GG called D1\GG1, I apply that permission to some folder F1. That
is fine and dandy for x months. Then all of a sudden I get a new domain or
someone from another domain needs access. So I either 

A. Create an ID for them in D1
B. Add another ACE to the ACL of F1. 


Issues:

A is a nightmare to manage. Users have to keep in mind which ID to use when,
and what password is involved. It tends to make people duplicate passwords,
which is a security no-no.

B is a nightmare to manage. You have to look at the ACL to determine what
groups have access and then look at the groups and finally have an answer. 


High Level:

A. You don't want to confuse your users unless you have outsourced your help
desk and they are on a fixed price for unlimited calls.

B. Put the groups that get applied to resources as close to the resources
as possible. Keep this standard so that you don't have to worry about the
ACL itself, only about the group that is the main security principal in the
ACL.




Scenario 2.

I have a GG called D1\GG1 and a DLG called D1\DLG1. The permissions are
applied to DLG1. I am following UGLY so I take my role based group GG1 and
add it to my resource based group DLG1. So now GG1 has permissions through
DLG1. I then add D2\GG1 because they have the same need. At the same time
D2\GG1 is used in some DLGs and on resources in D2. 

At some point, someone forgets that D2\GG1 is nested in D1\DLG1 and adds
some people to it who shouldn't have access to the stuff protected by
D1\DLG1. Alternatively, and probably more realistically, the people managing
D2\* are different from those managing D1\*, and by adding D2\GG1 the D1
admins have lost control of who specifically can be added to get access to
the resource.
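The drift is easy to see if you flatten the nesting. Here is a minimal Python sketch (the domain names, group names, and users are all hypothetical) that expands a DLG's nested membership so you can see who really ends up with access:

```python
def effective_members(group, members):
    """Flatten nested group membership.

    members maps a group name to the set of users and groups nested
    directly in it; anything that is not itself a key is treated as a
    user. Returns every user who ultimately gets access via `group`.
    """
    seen, users = set(), set()
    stack = [group]
    while stack:
        g = stack.pop()
        if g in seen:
            continue
        seen.add(g)
        for m in members.get(g, set()):
            if m in members:      # nested group: keep expanding
                stack.append(m)
            else:
                users.add(m)
    return users

# Hypothetical example: D2 admins add "eve" to D2\GG1 without knowing
# it is nested in D1\DLG1 -- eve silently gains access to F1.
members = {
    r"D1\DLG1": {r"D1\GG1", r"D2\GG1"},
    r"D1\GG1": {"alice"},
    r"D2\GG1": {"bob", "eve"},
}
who_has_access = effective_members(r"D1\DLG1", members)
```

Running the flatten against the DLG is the only way to get the real answer; neither domain's admins see it from their side alone.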



Scenario 3.

I have a DLG named D1\DLG1. It is used in the ACL of folder F1. When I want
to add a user to have access to that folder, I add the user to DLG1,
regardless of where in the trusted architecture that user is. If I need to
know who has permissions, I look at DLG1 and there are the people; if
someone nested a group, I can chase that group membership at that time.
Possibly this degrades into Scenario 2.

Outside of degrading into Scenario 2, the place where this may otherwise
start to break down is if you have a lot of turnover and your people are in
lots of resource groups. In that situation I would say you have two other
issues besides GG vs. DLG: you have a lot of turnover, which isn't good, and
you have your resources spread around all over, which isn't good.


I won't get into the aspects of Exchange as that is just ridiculous and
really doesn't play into the GG versus DLG conversation except in
permissioning AD and that is fun all by itself.



In a multidomain environment, as has become obvious to the list at this
point, I am a very strong proponent of using Domain Local Groups. The points
above are some of the reasons. There are times when you can't use them, but
those are mostly, like I mentioned previously, around permissioning AD
itself. At that point you have to look closer at what is being done and
whether it is intelligent anyway.

In a role based ummm role, I can see the value of GGs, but I don't like the
implementation of most role based systems because they like to work on the
80-20 rule. If 80% need it, they all get it. 

I think overall you do better with DLGs and specifically managing resources.
If you have resources all over the place for one group or role, possibly you
should be looking at centralization of the resources, not trying to assign
permissions all over the place for the people.

I have helped many people over the years in newsgroups and other places with
group structuring and have yet to see a really good implementation using
global groups. The use was generally due to misunderstanding, or haphazard
smacking of environments together without cleanup, or someone going to MCSE
school and trying to follow that goofy UGLY model without really
understanding the intent behind it. In some instances (again, I visualize
role based systems) this may work fine; however, as a generic solution, I
don't think it is very good.


Just my .10 cents. (I'm a bit more expensive than some others.) :o) 


   joe

 

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Kern, Tom
Sent: Friday, June 04, 2004 1:22 PM
To: [EMAIL PROTECTED]
Subject: RE: [ActiveDir] AD Health check

joe, near the bottom of your email, you cite heavy use of GGs.
why would heavy use of GGs as opposed to LGs pose a problem?

what issues arise from having a lot of global groups?
thanks

-----Original Message-----
From: joe [mailto:[EMAIL PROTECTED]
Sent: Friday, June 04, 2004 12:47 PM
To: [EMAIL PROTECTED]
Subject: RE: [ActiveDir] AD Health check


> when you've inherited a forest with few domains, what would you check 
> in the first place to make sure, things are running as they should?


I must be weird: given the circumstance (walking in the door on an unknown),
I would tackle this completely differently than the other posters have
mentioned.

I would send an email to a couple of systems people who were already running
it and ask what issues they have been seeing. Get all the documentation they
have for the environment configuration they were aiming for.

I would create a new object in every partition (excluding schema) and in
each sysvol and in WINS (dynamic entry please, not static) and then let it
all replicate.

I would look at the configuration container and the sites / site link layout
to ascertain what the replication topology and theoretical latencies should
be. Looking for any oddball things like weird replication timing, bad
schedules, site link bridges, etc. The schedules would probably be the
biggest pain as I have nothing to decode those currently from the command
line but could probably tie perl and adfind together pretty quickly to do
so. If it was small enough I would just eyeball the outputs or actually use
the GUI.
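To give a flavor of what that perl + adfind decoder would do, here is a hedged Python sketch. Assumption: you have already stripped the schedule blob's header and are left with the 168 data bytes (7 days x 24 hours, starting Sunday 00:00), where each byte's low nibble flags the four 15-minute slots in that hour:

```python
# Decode a week-long replication schedule. Input: 168 bytes, one per
# hour of the week; a nonzero low nibble means replication is allowed
# in at least one 15-minute slot of that hour.
DAYS = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]

def decode_schedule(data):
    """Return 'Day HH:00' for every hour replication is allowed."""
    if len(data) != 168:
        raise ValueError("expected 168 bytes, one per hour of the week")
    return [f"{DAYS[i // 24]} {i % 24:02d}:00"
            for i, b in enumerate(data)
            if b & 0x0F]          # any 15-minute slot set in this hour
```

An always-open schedule decodes to all 168 hours; a mostly-zero blob with odd bytes set is exactly the kind of weird schedule worth eyeballing.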

Then later (not real later; depends on site topology) go looking for all of
the objects I created, and for the AD objects check the whenChanged
attribute so I can compare against the theoretical latencies. This also
tests the LDAP access to every DC, making sure they are responding properly.
Initially I assumed that, but then figured I should document it so it is
obvious this is checking another aspect of the health. Also query every WINS
server for the record I added, with both netsh and NBLOOKUP/NMBLOOKUP (one
is a Samba port to Win32 and one is a Microsoft-supplied tool), to test
functionality on both WINS interfaces.
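The comparison step can be sketched simply. Assuming you have collected the whenChanged value for the test object from each DC (DC names and times below are hypothetical), flag anything missing or slower than the theoretical latency:

```python
from datetime import datetime, timedelta

def flag_slow_dcs(created, observed, max_latency):
    """Compare observed arrival times against the theoretical latency.

    observed maps a DC name to the whenChanged value seen for the test
    object on that DC (None if the object never showed up). Returns the
    DCs that are missing the object or got it later than max_latency.
    """
    return sorted(dc for dc, when in observed.items()
                  if when is None or when - created > max_latency)
```

Anything this flags is where you start chasing replication, FRS, or DNS.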

I would run a little tool I call OOR against all DCs. The name initially
stood for "out of resources", which was a huge issue I hit once when I
walked into an unknown environment: a good 80+ DCs were all reporting out of
resources when trying to do NET API type calls against them. The OOR tool is
a simple perl script that loops through a domain and does a GetUserInfo for
the guest account against all DCs.
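The actual tool is perl against the NET API; as a rough illustration, here is the same loop-and-collect-failures idea in Python, with the actual probe call injected as a function (the probe shown is a fake for demonstration, not a real NET API binding):

```python
def run_oor(dcs, get_user_info, account="Guest"):
    """Probe every DC with a NET API style call; collect the failures.

    get_user_info(dc, account) stands in for the NetUserGetInfo call
    the real perl script makes; it should raise on failure. Returns a
    dict of DC name -> error text for every DC that didn't answer.
    """
    failures = {}
    for dc in dcs:
        try:
            get_user_info(dc, account)
        except Exception as exc:
            failures[dc] = str(exc)
    return failures

# Hypothetical probe for illustration: pretend DC2 is out of resources.
def fake_probe(dc, account):
    if dc == "DC2":
        raise RuntimeError("out of resources")

bad_dcs = run_oor(["DC1", "DC2", "DC3"], fake_probe)
```

The point is the shape: hit every DC with the same cheap call and keep a list of which ones fail and why.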

This gets you a good solid baseline on how things are really working versus
grabbing a ton of info and munging through reports.

For any DC that didn't get the information, I focus on looking at
replication info, chasing into FRS or DNS as necessary.


That would be my first place stuff... After that, then I would dive into the
rest of this. 


After I know everything is basically functioning as expected, you have time
to look at the more detailed stuff that lets you know about things that
aren't stopping functionality but could be impacting performance, etc. This
would be the intensive work the diag tools do: check every DC for every
error, check all of DNS, check all of FRS, etc. I would also start
monitoring the DRA Pending queue of each DC on a 2-3 minute swing to see if
I have any serious bottlenecks there that could be slowing things down.
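Once you are sampling the pending queue every few minutes, spotting a bottleneck is just looking for a queue that never drains. A small sketch, assuming the per-DC queue depths have already been collected into snapshots (the numbers and DC names are made up):

```python
def find_bottlenecks(samples, threshold=10):
    """samples is a list of {dc: pending_queue_depth} snapshots taken a
    few minutes apart. A DC whose DRA pending queue stays at or above
    threshold in every snapshot is a likely replication bottleneck; a
    queue that spikes and then drains is normal."""
    if not samples:
        return []
    dcs = set().union(*samples)
    return sorted(dc for dc in dcs
                  if all(s.get(dc, 0) >= threshold for s in samples))
```

The threshold and sampling window are judgment calls for your topology; the sustained-vs-spiky distinction is what matters.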

What SP and hotfix levels are the DCs at? What functionality modes are you
in? Do you have enough GC coverage for the mode? I.e., in mixed mode you can
have one level of coverage; for native mode you may need more coverage.

Now the real hard work begins... Finding out what AD permissions have been
delegated and to whom, and do they make sense? Do you have any serious
security holes because of it?

Audit all of the computer accounts and remove stale ones. Ditto for user
accounts (maybe look at using oldcmp for BOTH of those things; yes, it will
do user accounts too if you use the -f option and specify a filter that
picks out users).
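The filtering logic behind that kind of staleness audit is simple once you have pulled a last-activity timestamp per account. A sketch (account names hypothetical; in practice the timestamps would come from something like lastLogonTimestamp or pwdLastSet):

```python
from datetime import datetime, timedelta

def stale_accounts(last_seen, now, max_age_days=90):
    """last_seen maps account name -> last activity datetime, or None
    for an account that has never been used. Returns accounts idle
    longer than max_age_days -- candidates for removal after review,
    not automatic deletion.
    """
    cutoff = now - timedelta(days=max_age_days)
    return sorted(name for name, last in last_seen.items()
                  if last is None or last < cutoff)
```

Review the list before acting on it; service accounts and rarely-used machines will show up as false positives.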

Audit the WINS records: any statics? Are they still needed? If not, remove them.

Ditto for DNS. 

What is the group strategy? How are they using them? For DLs? For security?
The old legacy UGLY method or something a little more updated? Heavy use of
GGs? Why? Doing role based stuff? Heavy use of UNIs? Do you have the proper
GC coverage for them?

Try to figure out which groups are and aren't being used. Try to clean that
up; if there aren't owners for all of the groups, find someone to own them.
Every object in AD should have an owner. If you can't find one, you as
Enterprise Admin now own it. Do you personally need it? No? Disable it. For
a group, you can disable a security group by making it a DL; it will keep
the SID, so you can get it back if you need it, it just won't work as a
security group anymore. You can re-enable it as a security group if you find
it is indeed needed and you have a new owner for it.

Look at the naming standards. There had better be some. If not, set some. If
there are some, do they make sense or are they ad hoc? Naming standards
should exist for servers, clients, groups, OUs, sites, and site links.
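Once standards exist, checking for drift is mechanical. A sketch; the patterns below are purely hypothetical examples, and you would substitute whatever standard you actually set:

```python
import re

# Hypothetical naming patterns -- one per object kind. Replace with
# your environment's real standard.
NAME_RULES = {
    "server": re.compile(r"^[A-Z]{3}-(DC|FS|EX)\d{2}$"),   # e.g. DET-DC01
    "group":  re.compile(r"^(GG|DLG|UG)-\w+$"),            # e.g. DLG-Finance
}

def violations(objects):
    """objects: iterable of (kind, name) pairs. Returns the names that
    miss the standard for their kind; kinds without a rule are skipped."""
    return [name for kind, name in objects
            if kind in NAME_RULES and not NAME_RULES[kind].match(name)]
```

Run it over dumps of servers, groups, OUs, etc., and the ad hoc names fall out immediately.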

I have outlined 1-24 months of work here, depending on how large the
environment is, what tools are available, what problems exist, and what
workload there is outside of this. I would make sure there was netmon or
ethereal available on every DC and any other server I supported and start
doing basic network traces of each of the subnets that the machines are on.



Oh if there is Exchange in there that adds a bunch more and needs to be kept
in mind through the whole thing.


Now, if in my next job I start doing hot drops into sites like this
describes, I would probably write a bunch of tools to help this process out,
because at the moment it would be seriously a mishmosh of different things.
Heck, just having a tool to run on a network and get a good understanding of
what is there would be nice, though difficult.



   joe

 

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Svetlana
Kouznetsova
Sent: Friday, June 04, 2004 10:53 AM
To: [EMAIL PROTECTED]
Subject: [ActiveDir] AD Health check


Hi,
In my quest to solve various problems in our forest while promoting a W2K3
DC, I've now come to the point where I want to ascertain the overall current
situation in my AD, and I need more general advice on:
What kind of tests should one do to check the health of AD (W2K native
mode)? As far as I can see, there are no particular compulsory things you
need to run in your AD from time to time - it all depends on time, skills
and, perhaps, one's wish as well.

But maybe people can share their experience - when you've inherited a forest
with few domains, what would you check in the first place to make sure,
things are running as they should?

I can think of the basics, like 

Obvious event logs, dcdiag and netdiag:

netdiag /debug /v - for basically everything?
dcdiag /test:fsmocheck - to test that all global role-holders are known and responding
dcdiag /test:frssysvol - to test FRS
dcdiag /test:registerindns /dnsdomain:domain - to test if the DC can register DC Locator DNS records
nltest /dclist:domain_name - to see if the DC can see the rest of the forest
nltest /dsgetdc:domain_name /gc - to see if the DC can see GC servers in the forest
nslookup -d - for testing DNS queries
repadmin /bind servername.domain - to test if the DC can bind to others for replication.

Perhaps some of them are overkill, but I'm looking for a bit more than just
a routine checkup.

Can you comment, please?

Thanks in advance
Lana.

List info   : http://www.activedir.org/mail_list.htm
List FAQ    : http://www.activedir.org/list_faq.htm
List archive: http://www.mail-archive.com/activedir%40mail.activedir.org/

