>Thanks, but all these "methods" require modification of the scripts 
>already on the server, and it won't ensure any new script being written 
>by a user on my system to comply.

That is correct.

>Are you all saying that there are no logs kept by default of errors 
>generated on php/mysql pages at all unless specifically coded? Wouldn't 
>it be possible then in future PHP releases to have a "set_error_logging" 
>directive in the php.ini file that will automatically run a wrapper 
>function on all mysql_query() functions to do this kind of thing?

There *IS* a setting in php.ini to log every error.

However, it only logs PHP errors, not unreported MySQL errors.
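For the record, the relevant php.ini lines look something like this (the log path is just an example):

```ini
; Log all PHP errors to a file outside the web tree:
log_errors = On
error_log  = /var/log/php_errors.log
; And dial in which errors get reported in the first place:
error_reporting = E_ALL
```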

I think you *MUST* write some PHP code to get MySQL errors to appear in the
first place, and you'd have to write even more code to get them to be
considered PHP errors -- I *THINK*.  I've never turned this feature on, so I
can't be 100% certain.
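To make that concrete, here's a minimal sketch of such a wrapper.  (my_query() is a made-up name, and every script would have to call it instead of mysql_query() -- which is exactly the "modify the scripts" problem you started with.)

```php
<?php
// Hypothetical wrapper: promotes silent MySQL errors to PHP errors.
// Once they're PHP errors, the log_errors/error_log machinery in
// php.ini picks them up like any other error.
function my_query($sql)
{
    $result = mysql_query($sql);
    if ($result === FALSE) {
        // trigger_error() routes the message through PHP's normal
        // error handling, so it lands in the error_log file.
        trigger_error("MySQL: " . mysql_error() . " in query: $sql",
                      E_USER_WARNING);
    }
    return $result;
}
?>
```

Of course, that only helps for scripts that actually use the wrapper, which is the crux of your problem.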

>How are people out there managing the scripts/script errors caused  by 
>users on their systems? Or is it a case of "handling the crisis when it 
>happens"?

In most cases, dedicated applications *ARE* using a common 'include' file
and the Project Manager or Lead Developer will kill you if you don't.

In a shared ISP sort of environment, you just have to educate your users,
and be sure you make it easy for them to Do The Right Thing.

Does their default include_path have a non-web-tree directory conveniently
placed in their home directory for them to throw their db_connect.inc file
into?
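That's one line in php.ini (the path here is hypothetical):

```ini
; Per-user include directory, outside the web tree:
include_path = ".:/home/username/php"
```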

My ISP does that, but I dunno if the rest are that smart or not.

He called the directory 'php' instead of 'include' like I would have, but I
can live with that. :-)

Actually, he provides a db class pre-built in a file in that directory,
along with some custom pre-built PHP scripts like guestbook and Tour
Calendar...  But that's because he focuses on a particular market.

>You see, as administrator, I need to be able to quickly see who are 
>coding in such a way as to leave security holes, or even worse, cause 
>the server to crash due to poor coding. There are almost 1000 individual 
>php files on my server, and it wouldn't be possible for me to scrutinize 
>all of them to make sure they are OK.

You won't catch a security hole by logging anyway, I don't think...

Though I guess you could pull out the file names and make sure they aren't
in the web-tree and aren't, say, world-writable (shudder).

1000 PHP files?  That's not that many :-)

If you need to log every MySQL query specifically, that's *probably* gonna
be a debugging feature (not recommended for production use) in /etc/my.cnf,
assuming a recent install following the instructions that come with MySQL. 
If you installed with Triad or an RPM or anything like that, you're on your
own.
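If your MySQL does support it, it's something like this in /etc/my.cnf (log path is just an example) -- and remember it logs *every* query, so expect a performance hit:

```ini
[mysqld]
log = /var/log/mysql/query.log
```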

It's actually fairly hard to bring down the whole machine using PHP and
MySQL -- You'd have to work at it, or do something incredibly stupid...

Locking up or killing off a single Apache child is more likely (though still
not common), but that usually takes care of itself, since Apache children
are doomed to die within a certain time-frame or number of requests anyway.
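That's the MaxRequestsPerChild knob in httpd.conf -- the number here is just an example, tune it to taste:

```apache
# Recycle each child after N requests, so a wedged one
# can't hang around forever:
MaxRequestsPerChild 1000
```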

So you run enough Apache children that it doesn't matter if a few get locked
up for awhile.

>Are there any admins out there that have policies about scripting 
>practices on their systems; ie, checking a script from a user before it 
>is allowed to be uploaded etc?

Possibly.  But that's *GOT* to be very resource-intensive on the human side,
and probably not useful for most situations.

I think the actual answer is that *MOST* admins are looking at the "big
picture" and monitoring their machines, rather than trying to force users
into a single channel or scrutinize every line of code.

If you try to force users into a single channel, you'll either make them all
frustrated and drive them away, or there will be so many who find some
work-around that it won't really be effective anyway.

I think we can safely say that scrutinizing every line of code is not an
option for most.

Set up a monitoring system of your critical services (HTTP, MySQL) and just
focus on quality of service, rather than perfection of your users.
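A dirt-simple version of that, assuming cron and a mail command are handy (addresses and paths are made up), could be a couple of crontab entries:

```
# Poke HTTP and MySQL every five minutes; holler if either is down.
*/5 * * * * wget -q -O /dev/null http://localhost/ || echo 'HTTP down' | mail -s 'HTTP down' admin@example.com
*/5 * * * * mysqladmin ping > /dev/null 2>&1 || echo 'MySQL down' | mail -s 'MySQL down' admin@example.com
```

Real monitoring packages will do this better, but even that much beats finding out from your users.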

While I understand your concerns, I think you're focusing too much on the
details of what could go wrong, and missing the forest for the trees.

If something does go wrong, there will most likely be physical evidence
(logs, error messages, top output) that you can use to find the problem
quickly.

In the rare cases where it doesn't, be prepared to turn on logging and take
the performance hit until you *CAN* find it.

Another suggestion:  Provide a "development" setup for your users.  If they
have an easy place to put scripts/database calls and pound on them before
they go into production in front of a live audience, they'll be more likely
to use it and make sure the damn thing doesn't take anything down before
going "live".

-- 
Like Music?  http://l-i-e.com/artists.htm
