[STATUS] (flood) Wed Dec 11 23:46:08 EST 2002

2002-12-12 Thread Rodent of Unusual Size
flood STATUS:   -*-text-*-
Last modified at [$Date: 2002/09/06 10:24:42 $]

Release:

1.0:   Released July 23, 2002
milestone-03:  Tagged January 16, 2002
ASF-transfer:  Released July 17, 2001
milestone-02:  Tagged August 13, 2001
milestone-01:  Tagged July 11, 2001 (tag lost during transfer)

RELEASE SHOWSTOPPERS:

* Everything needs to work perfectly

Other bugs that need fixing:

* I get a SIGBUS on Darwin with our examples/round-robin-ssl.xml
  config, on the second URL. I'm using OpenSSL 0.9.6c 21 dec 2001.
  
* iPlanet sends Content-length - there is a hack in there now
  to recognize it.  However, all HTTP headers need to be normalized
  before checking their values.  This isn't easy to do.  Grr.

* OpenSSL 0.9.6
  Segfaults under high load.  Upgrade to OpenSSL 0.9.6b.
   Aaron says: I just found a big bug that might have been causing
   this all along (we weren't closing ssl sockets).
   How can I reproduce the problem you were seeing
   to verify if this was the fix?

* SEGVs when /tmp/.rnd doesn't exist are bad. Make it configurable
  and at least bomb with a good error message. (See Doug's patch.)
   Status: This is fixed, no?

* If APR has disabled threads, flood should as well. We might want
  to have an enable/disable parameter that does this also, providing
  an error if threads are desired but not available.

* flood needs to clear pools more often. With a long running test
  it can chew up memory very quickly. We should just bite the bullet
  and create/destroy/clear pools for each level of our model:
  farm, farmer, profile, url/request-cycle, etc. (see the sketch at the
  end of this list).

* APR needs to have a unified interface for ephemeral port
  exhaustion, but apparently Solaris and Linux return different
  errors at the moment. Fix this in APR then take advantage of
  it in flood.

* The examples/analyze-relative scripts fail when there are fewer
  than 5 unique URLs.
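
  A minimal sketch of the per-level pool idea mentioned above (the
  run_farmer function and its loop are illustrative only, not flood's
  actual code):

    #include "apr_pools.h"

    /* One subpool per level, cleared/destroyed as each unit of work
     * finishes, so long-running tests don't accumulate memory. */
    static apr_status_t run_farmer(apr_pool_t *farm_pool)
    {
        apr_pool_t *farmer_pool, *request_pool;
        apr_status_t rv;

        if ((rv = apr_pool_create(&farmer_pool, farm_pool)) != APR_SUCCESS)
            return rv;
        if ((rv = apr_pool_create(&request_pool, farmer_pool)) != APR_SUCCESS) {
            apr_pool_destroy(farmer_pool);
            return rv;
        }

        while (/* more URLs in the profile */ 0) {
            /* build and send one request, allocating out of request_pool */
            apr_pool_clear(request_pool);   /* drop per-request allocations */
        }

        apr_pool_destroy(farmer_pool);      /* also destroys request_pool */
        return APR_SUCCESS;
    }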

Other features that need writing:

* More analysis and graphing scripts are needed

* Write robust tool (using tethereal perhaps) to take network dumps 
  and convert them to flood's XML format.
Status: Justin volunteers.  Aaron had a script somewhere that is
a start.

* Get chunked encoding support working.
Status: Justin volunteers.  He got sidetracked by the httpd
implementation of input filtering and never finished 
this.  This is a stopgap until apr-serf is completed.

* Maybe we should make randfile and capath runtime directives that
  come out of the XML, instead of autoconf parameters.

* We are using apr_os_thread_current() and getpid() in some places
  when what we really want is a GUID. The GUID will be used to
  correlate raw output data with each farmer. We may wish to print
  a unique ID for each of farm, farmer, profile, and url to help in
  postprocessing.

* We are using strtol() in some places and strtoll() in others.
  Pick one (Aaron says strtol(), but he's not sure).

* Validation of responses (known C-L, specific strings in response)
   Status: Justin volunteers

* HTTP error codes (i.e. teach it about 302s)
   Justin says: Yeah, this won't be with round_robin as implemented.  
Need a linked list-based profile where we can insert 
new URLs into the sequence.

* Farmer (Single-thread, multiple profiles)
   Status: Aaron says: If you have threads, then any Farmer can be
   run as part of any Farm. If you don't have threads, you can
   currently only run one Farmer named Joe right now (this will
   be changed so that if you don't have threads, flood will attempt
   to run all Farmers in serial under one process).

* Collective (Single-host, multiple farms)
  This is a number of Farms that have been fork()ed into child processes.

* Megaconglomerate (Multiple hosts each running a collective)
  This is a number of Collectives running on a number of hosts, invoked
  via RSH/SSH or maybe even some proprietary mechanism.

* Other types of urllists
a) Random / Random-weighted
b) Sequenced (useful with cookie propagation)
c) Round-robin
d) Chaining of the above strategies
  Status: Round-robin is complete.

* Other types of reports
  Status: Aaron says: simple reports are functional. Justin added
  a new type that simply prints the approx. timestamp when
  the test was run, and the result as OK/FAIL; it is called
  easy reports (see flood_easy_reports.h).
  Furthermore, simple_reports and easy_reports both print
  out the current requesting URI line.

Documentation that needs writing:

* Documentation?  

RE: [PATCH] Use mutex locks in mod_specweb99.c

2002-12-12 Thread MATHIHALLI,MADHUSUDAN (HP-Cupertino,ex1)
Same on HP-UX also. This is how it looks:

/* Cross process serialization techniques */
/* #undef USE_FLOCK_SERIALIZE */
#define USE_SYSVSEM_SERIALIZE 1
/* #undef USE_FCNTL_SERIALIZE */
/* #undef USE_PROC_PTHREAD_SERIALIZE */
/* #undef USE_PTHREAD_SERIALIZE */

/* #undef POSIXSEM_IS_GLOBAL */
/* #undef SYSVSEM_IS_GLOBAL */
/* #undef FCNTL_IS_GLOBAL */
/* #undef FLOCK_IS_GLOBAL */

-Madhu

-Original Message-
From: Greg Ames [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 12, 2002 12:35 PM
To: [EMAIL PROTECTED]
Subject: Re: [PATCH] Use mutex locks in mod_specweb99.c


Sander Temme wrote:
I started seeing the following errors in the specweb99 run output when I
use mod_specweb99.c with Apache 2.0.43 and the worker MPM. The following
patch seems to get rid of the problem. If you're thinking that it may
degrade the response - I did not find much difference though.

Can somebody please evaluate and let me know if it's okay?
 
 
 Ha! I have seen this too but have had no time to even think about working
 on it.
 
 I have one question. Your patch mutexes out the acquisition of the file
 lock. Do these thread mutexes apply only within the process, or across
 processes as well? In the latter case, we could do away with the flock
 entirely if we're in a multithreaded environment. In that case the #ifs
 would move to the _dolock function and we'd have an _unlock function with
 its own #ifs.

I dug into APR locks a little bit.  The apr_global_mutex_* functions turn
into two separate syscalls, with #if APR_HAS_THREADS around the thread
mutexing.  So unfortunately they wouldn't save us any syscalls :-( :-(
But they might save a little bit of function call overhead.
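
Spelled out, the double lock those functions wrap amounts to roughly the
following sketch (not the actual APR source; it assumes a threaded build,
and error handling is trimmed):

#include "apr.h"
#include "apr_thread_mutex.h"
#include "apr_proc_mutex.h"

/* Sketch of what apr_global_mutex_lock() boils down to: an in-process
 * thread mutex plus a cross-process lock -- i.e. two syscalls. */
static apr_status_t global_lock_sketch(apr_thread_mutex_t *thread_mutex,
                                       apr_proc_mutex_t *proc_mutex)
{
    apr_status_t rv;

#if APR_HAS_THREADS
    rv = apr_thread_mutex_lock(thread_mutex);   /* serialize threads in this process */
    if (rv != APR_SUCCESS)
        return rv;
#endif

    rv = apr_proc_mutex_lock(proc_mutex);       /* serialize across processes */
#if APR_HAS_THREADS
    if (rv != APR_SUCCESS)
        apr_thread_mutex_unlock(thread_mutex);
#endif
    return rv;
}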

Another interesting place to look is in
srclib/apr/include/arch/unix/apr_private.h .
There are several _IS_GLOBAL symbols for various serialization mechanisms.
On my Linux box, all of them are #undef'ed and commented out, including
fcntl and flock, which are the two choices for apr_file_lock.  Madhu,
could you take a look there and see what you've got?

Thanks,
Greg






Re: [PATCH] Use mutex locks in mod_specweb99.c

2002-12-12 Thread Greg Ames
MATHIHALLI,MADHUSUDAN (HP-Cupertino,ex1) wrote:
same on HP-UX also.. This is how it looks :
/* Cross process serialization techniques */
/* #undef USE_FLOCK_SERIALIZE */
#define USE_SYSVSEM_SERIALIZE 1
/* #undef USE_FCNTL_SERIALIZE */
/* #undef USE_PROC_PTHREAD_SERIALIZE */
/* #undef USE_PTHREAD_SERIALIZE */
/* #undef POSIXSEM_IS_GLOBAL */
/* #undef SYSVSEM_IS_GLOBAL */
/* #undef FCNTL_IS_GLOBAL */
/* #undef FLOCK_IS_GLOBAL */
oh well...sigh...  I guess we're stuck with using double locks for now, either 
as in your patch or as in the apr_global_mutex_xxx functions.  I can configure 
Apache with --disable-threads when I benchmark with prefork and get it back down 
to one lock.  I will be on vacation for the rest of the year after tomorrow, so 
I'd prefer that someone else follow up on this.

Longer term, Dave Hansen, whom I work with at IBM, had a couple of intriguing 
ideas for the SPECWeb99 post log.  One is to implement it in shared memory.  The
current record counter is updated using an atomic_add primitive.  Once you do 
that, you can use the answer as an index into an array of log records and no 
other threads will access that particular record.  A complication is how to 
implement command/Fetch.  Dave has an implementation which uses a separate 
daemon program to retrieve the log.

Dave also asked me if there would be a way to use Apache's regular logging 
functions for the SPEC post log, and cut down on the number of opens and closes 
on the post log file.  That's an interesting idea.  If mod_specweb99 opened the 
post log during the post_config hook, or something similar, and the child 
processes all inherited the post log fd, they could just write to it.  The 
problem would be the current record counter and the record number at the 
beginning of each log record.  We could use a shared memory variable updated 
with atomic_add here too, or maybe pipe the post log to a utility which prepends 
the record number.  I took a quick look at the SPECWeb99 run rules, and they 
seem to be flexible on how you actually implement the post log.
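
A sketch of that idea, using C11 atomics purely for illustration (the
record layout, capacity, and the postlog names are assumptions, not
anything from SPEC or an existing module):

#include <stdatomic.h>
#include <string.h>

#define POSTLOG_CAPACITY 100000
#define POSTLOG_RECLEN   128

/* Lives in shared memory; the atomic counter must be lock-free for this
 * to be safe across processes. */
struct postlog {
    atomic_uint next;                              /* shared record counter */
    char records[POSTLOG_CAPACITY][POSTLOG_RECLEN];
};

/* fetch-and-add hands out each index exactly once, so no other thread or
 * process will touch the slot we were given.  Returns the record number,
 * or -1 if the log is full. */
static long postlog_append(struct postlog *log, const char *data)
{
    unsigned int idx = atomic_fetch_add(&log->next, 1u);
    if (idx >= POSTLOG_CAPACITY)
        return -1;
    strncpy(log->records[idx], data, POSTLOG_RECLEN - 1);
    log->records[idx][POSTLOG_RECLEN - 1] = '\0';
    return (long)idx;
}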

Greg


Re: SSL upgrade [was: Final patch for a long time]

2002-12-12 Thread Joe Orton
On Thu, Dec 12, 2002 at 01:08:08AM -0600, William Rowe wrote:
 My proposed solution is to review the patch and apply it to cvs HEAD.  Get it
 committed.  Of course there are no test suites right now, and there won't be
 for a little while yet.  But once the code exists, it will be simpler to keep the
 SSL upgrade facility maintained, and debug it as the clients become available
 (most especially, libwww exercises through perl-framework.)

I think there were a couple of mistakes in the patch:

 --- modules/ssl/ssl_engine_io.c   23 Nov 2002 21:19:03 -  1.101
 +++ modules/ssl/ssl_engine_io.c   12 Dec 2002 07:06:46 -
 @@ -1181,6 +1181,84 @@
  return APR_SUCCESS;
  }
  
 +static apr_status_t ssl_io_filter_Upgrade(ap_filter_t *f,
 + apr_bucket_brigade *bb)
 +
 +{
 +#define SWITCH_STATUS_LINE "101 Switching Protocols"

Should be "HTTP/1.1 101 Switching Protocols" unless the prefix is added
somewhere I missed; otherwise this isn't a valid status-line.

 +#define UPGRADE_HEADER "Upgrade: TLS/1.0 HTTP/1.1"
 +#define CONNECTION_HEADER "Conenction: Upgrade"

Spot the typo :)

 +connection = apr_table_get(r->headers_in, "Connection");
 +
 +apr_table_unset(r->headers_out, "Upgrade");
 +
 +if (strcmp(connection, "Upgrade") || strcmp(upgrade, "TLS/1.0")) {
 +return ap_pass_brigade(f->next, bb);
 +}

I don't think the requirement that the client sends exactly "Connection:
Upgrade" is correct; the only requirement here is on the client to send
a Connection header including the "upgrade" token.
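
For reference, a token-based check along those lines might look like this
(a sketch using the same r/f/bb/upgrade variables as the patch fragment
above, not a tested replacement; a multi-valued Upgrade header is not
handled):

/* Accept e.g. "Connection: keep-alive, Upgrade" and compare tokens
 * case-insensitively instead of demanding exact header values. */
const char *connection = apr_table_get(r->headers_in, "Connection");
const char *upgrade    = apr_table_get(r->headers_in, "Upgrade");

if (!connection || !upgrade
    || !ap_find_token(r->pool, connection, "upgrade")
    || strcasecmp(upgrade, "TLS/1.0") != 0) {
    return ap_pass_brigade(f->next, bb);   /* not an upgrade request */
}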

joe



Re: request for comments: multiple-connections-per-thread MPM design

2002-12-12 Thread Manoj Kasichainula
Took too long to respond. Oh well, no one else did either...

On Tue, Nov 26, 2002 at 01:14:10AM -0500, Glenn wrote:
 On Mon, Nov 25, 2002 at 08:36:59PM -0800, Manoj Kasichainula wrote:
  BTW, ISTR Ryan commenting a while back that cross-thread signalling
  isn't reliable, and it scares me in general, so I'd lean towards the
  pipe.
  
  I'm pondering what else could be done about this; having to muck with a
  pipe doesn't feel like the right thing to do.
 
 Why not?

Good question. I'm still waffling on this.

 Add a descriptor (pipe, socket, whatever) to the pollset and use
 it to indicate the need to generate a new pollset.  The thread that sends
 info down this descriptor could be programmed to wait a short amount of
 time between sending triggers, so as not to cause the select() to return
 too, too often, but short enough not to delay the handling of new
 connections too long.

But what's a good value? Any value picked is going to be too annoying.
0.1 s means delaying lots of threads up to a tenth of a second. And
there would be good reasons for wanting to lower that value, and to not
lower that value. Which would mean it would need to be a tunable
parameter depending on network and CPU characteristics, and needing a
tunable parameter for this just seems ooky. 

But just picking a good value and sticking with it might not be too bad.
The correct thing to do would be to code it up and test, but I'd rather
have a reasonable idea of the chances for success first. :)

In the perfect case, each poll call would return immediately with lots
of file descriptors ready for work, and they would all get farmed out.
Then before the next poll runs, there are more file descriptors ready to
be polled. 

Hmmm, if the poll is waiting on fds for any length of time, it should be
ok to interrupt it, because by definition it's not doing anything else.

So maybe the way to go is to forget about waiting the 0.1 s to interrupt
poll. Just notify it immediately when there's a fd waiting to be polled.
If no other fds have work to provide, we add the new fds to the poll set
and continue.

Otherwise, just run through all the other fds that need handling first,
then pick off all the fds that are waiting for polling and add them to
the fd set.

So (again using terms from my proposal):

submit_ticket would push fds into a queue and write to new_event_pipe if
the queue was empty when pushing.

get_next_event would do something like:

if (previous_poll_fds_remaining) {
pick one off, call event handler for it
}
else {
clean out new_event_queue and put values into new poll set
poll(pollfds, io_timeout);
call event handler for one of the returned pollfds
}
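
A rough sketch of that loop with plain poll(2) and a notification pipe
(new_event_pipe is from the proposal; the queue handling and error paths
are stubbed out):

#include <poll.h>
#include <unistd.h>

/* pollfds[0] is the read end of new_event_pipe; the rest are connection
 * fds.  Locking and the new_event_queue itself are omitted. */
static void poller_loop(int new_event_pipe_rd, struct pollfd *pollfds,
                        nfds_t *nfds, int io_timeout_ms)
{
    pollfds[0].fd = new_event_pipe_rd;
    pollfds[0].events = POLLIN;

    for (;;) {
        int n = poll(pollfds, *nfds, io_timeout_ms);
        if (n <= 0)
            continue;                      /* timeout or EINTR: just re-poll */

        if (pollfds[0].revents & POLLIN) { /* a ticket was submitted */
            char buf[64];
            (void)read(new_event_pipe_rd, buf, sizeof(buf));   /* drain the wakeup */
            /* pull fds off new_event_queue and append them to pollfds/nfds */
        }

        /* hand each ready connection fd (pollfds[1..]) to an event handler */
    }
}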

Something was bothering me about this earlier, and I can't remember what
it is. Maybe it's that when the server isn't busy, a single ticket
submission will make 2 threads (the ticket submitter and the thread
holding the poll mutex) do stuff. Maybe even 3 threads since a new
thread could take the poll mutex. But since this is the unbusy case,
it's not quite so bad.




Support for ASP

2002-12-12 Thread Karma Dorji



Can anyone help me with how to support ASP in an Apache server running on a
Linux 7.2 box? My Apache is 2.0, and I need to support ASP for one of the
customers hosting on my server. Your help will be highly appreciated.

Thanks.
Karma.



Re: cvs commit: httpd-2.0/server Makefile.in

2002-12-12 Thread Ben Laurie
[EMAIL PROTECTED] wrote:

jerenkrantz2002/12/11 13:09:16

  Modified:server   Makefile.in
  Log:
  Take a stab at fixing the brokenness in our tree (grr!).
  
  ls -1 is bound to be more portable than find -maxdepth, but I suspect it may
  not be as portable as it really should.

man ls says "By default, ls lists one entry per line to standard output;
the exceptions are to terminals or when the -C option is specified."

AFAIK, this is completely standard behaviour.

You shouldn't need the -1.

Also, can I offer, should you be completely paranoid:

echo $$dir/*.h | sed 's/ /\
/g'

Cheers,

Ben.

  
  Revision  ChangesPath
  1.80  +1 -1  httpd-2.0/server/Makefile.in
  
  Index: Makefile.in
  ===
  RCS file: /home/cvs/httpd-2.0/server/Makefile.in,v
  retrieving revision 1.79
  retrieving revision 1.80
  diff -u -u -r1.79 -r1.80
  --- Makefile.in	3 Dec 2002 09:47:11 -	1.79
  +++ Makefile.in	11 Dec 2002 21:09:16 -	1.80
  @@ -47,7 +47,7 @@
   	tmp=export_files_unsorted.txt; \
   	rm -f $$tmp  touch $$tmp; \
   	for dir in $(EXPORT_DIRS); do \
  -	find $$dir -maxdepth 1 -type f -name '*.h'  $$tmp; \
  +	ls -1 $$dir/*.h  $$tmp; \
   	done; \
   	sort -u $$tmp  $@; \
   	rm -f $$tmp
  
  
  



--
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff




mod_cache CacheMaxStreamingBuffer

2002-12-12 Thread Bill Stoddard
The CacheMaxStreamingBuffer function is currently implemented in mod_cache. It
carves out a chunk of RAM to buffer responses, regardless of the actual storage
manager (mod_mem_cache or mod_disk_cache) that will ultimately be used to cache
the response. The function is not really useful if you are using mod_disk_cache.

IMHO, the streaming buffer function belongs in the storage manager
(mod_mem_cache) rather than mod_cache.  I propose we move this function into
mod_mem_cache.  I also question the need for the CacheMaxStreamingBuffer
configuration directive. Why not use MCacheMaxObjectSize as the max streaming
buffer size?  This would eliminate a source of misconfiguration, and
specifically the case of neglecting to include CacheMaxStreamingBuffer in
httpd.conf (I spent maybe 30 minutes trying to figure out why some responses
were not being cached that I knew darn well were within my configured cache size
thresholds. Most Apache users would not have a clue where to start looking for
the cause, nor should they be expected to have a clue).

I'll start working on this if I hear no objections.

Bill





Re: mod_cache CacheMaxStreamingBuffer

2002-12-12 Thread Estrade Matthieu
I confirm the CacheMaxStreamingBuffer is a source of misconfiguration.
Using the MCacheMaxObjectSize could be a good thing.
When CacheMaxStreamingBuffer is used, I don't know what MCacheMaxObjectSize
is used for, because if a document is bigger than MaxStreamingBuffer it will
never be cached.
Are you sure MaxStreamingBuffer is not used in disk_cache?
This directive stores all the brigades before saving the entire data
with the write_headers and write_body functions, which point to the disk or
mem writing functions.
I think it's the same for disk and mem cache.

At the same time, could you answer my old mail about whether the
CacheSlashEndingUrl patch I made is useful or not?

regards

Matthieu


Bill Stoddard wrote:

The CacheMaxStreamingBuffer function is currently implemented in mod_cache. It
carves out a chunk of RAM to buffer responses, regardless of the actual storage
manager (mod_mem_cache or mod_disk_cache) that will ultimately be used cache the
response. The function is not really useful if you are using mod_disk_cache.

IMHO, the streaming buffer function belongs in the storage manager
(mod_mem_cache) rather than mod_cache.  I propose we move this function into
mod_mem_cache.  I also question the need for the CacheMaxStreamingBuffer
configuration directive. Why not use MCacheMaxObjectSize as the max streaming
buffer size?  This would eliminate a source of misconfiguration, and
specifically the case of neglecting to include CacheMaxStreamingBuffer in
httpd.conf (I spent maybe 30 minutes trying to figure out why some responses
were not being cached that I knew darn well were within my configured cache size
thresholds. Most Apache users would not have a clue where to start looking for
the cause, nor should they be expected to have a clue).

I'll start working on this if I hear no objections.

Bill


_
GRAND JEU SMS : Pour gagner un NOKIA 7650, envoyez le mot IF au 61321
(prix d'un SMS + 0.35 euro). Un SMS vous dira si vous avez gagné.
Règlement : http://www.ifrance.com/_reloc/sign.sms

 



_
GRAND JEU SMS : Pour gagner un NOKIA 7650, envoyez le mot IF au 61321
(prix d'un SMS + 0.35 euro). Un SMS vous dira si vous avez gagné.
Règlement : http://www.ifrance.com/_reloc/sign.sms




RE: mod_cache CacheMaxStreamingBuffer

2002-12-12 Thread Bill Stoddard
 I confirm the CacheMaxStreamingBuffer is a source of misconfiguration
 Using the MCacheMaxObjectSize could me a good thing.
 In case of CacheMaxStreamingBuffer used, i don't know how
 MCacheMaxObjectSize is used for because if a document is bigger than
 MaxStreamingBuffer, it will be never cached.
 are you sure MaxStreamingBuffer is not used in disk_cache ?
 this directive is to store all the brigade before saving the entire data
 with the write_headers and write_body functions, pointing on disk or mem
 writing functions.
 I think it's the same for disk and mem cache.

I need to look at the old code, but if i recall correctly, mod_mem_cache would
reject attempting to cache an object whose length was unknown (see the call to
cache_create_entity() in mod_cache). mod_disk_cache otoh would happily attempt
to cache objects whose size was not known at the start. Remember that
mod_disk_cache is just writing cache objects to disk and does not need to
allocate memory to hold/prefetch the object.


 In the same time, could you answer my old mail with the
 CacheSlashEndingUrl patch i made is usefull or not ?

Yea, I was just looking at that. I think we can eliminate that check. I was
originally afraid that this would muck up negotiation but the cache does not
work with negotiated content now anyway.

Bill




Re: mod_cache CacheMaxStreamingBuffer

2002-12-12 Thread Brian Pane
On Thu, 2002-12-12 at 07:53, Bill Stoddard wrote:
 The CacheMaxStreamingBuffer function is currently implemented in mod_cache. It
 carves out a chunk of RAM to buffer responses, regardless of the actual storage
 manager (mod_mem_cache or mod_disk_cache) that will ultimately be used cache the
 response. The function is not really useful if you are using mod_disk_cache.
 
 IMHO, the streaming buffer function belongs in the storage manager
 (mod_mem_cache) rather than mod_cache.  I propose we move this function into
 mod_mem_cache.  I also question the need for the CacheMaxStreamingBuffer
 configuration directive. Why not use MCacheMaxObjectSize as the max streaming
 buffer size?  This would eliminate a source of misconfiguration, and
 specifically the case of neglecting to include CacheMaxStreamingBuffer in
 httpd.conf (I spent maybe 30 minutes trying to figure out why some responses
 were not being cached that I knew darn well were within my configured cache size
 thresholds. Most Apache users would not have a clue where to start looking for
 the cause, nor should they be expected to have a clue).
 
 I'll start working on this if I hear no objections.

When I added CacheMaxStreamingBuffer originally, I had two
reasons for making it a separate directive:

  1. As a separate directive, it could be disabled by default
 to guarantee that the new functionality wouldn't break
 anyone's existing mod_cache setup.
[I'm no longer worried about this issue, now that
the code has been in place for a while.]

  2. There are some extreme cases where the maximum cacheable
 object size could be too large a value for CacheMaxStreamingBuffer.
 For example, if MCacheMaxObjectSize is 20MB, and your server
  is serving a mix of 10MB static files and 30MB streamed
 CGI responses with no content-length information, then each
 CGI response will cause mod_cache to buffer 20MB of content
 before giving up and freeing all that space.
[I am still worried about this issue.]

I suppose we could eliminate my second concern by simply adding
a note to the documentation that says, don't use ridiculously
large values for MCacheMaxObjectSize.  What do you think?

If you move the stream buffering to the storage manager, does
that mean that mod_disk_cache won't be able to cache streamed
responses any more?  Or are you thinking of mirroring the current
buffering logic with something that stores the pending content
in the file rather than in-core?

Brian





RE: Webdav

2002-12-12 Thread Bennett, Tony - CNF
You don't have to run Apache 2.0 as root
in order to provide webdav capability...
...If you are running as user 'nobody',
just ensure that the directory tree that
is dav enabled is owned by user 'nobody'.

-tony

 -Original Message-
 From: Martin Ouimet [mailto:[EMAIL PROTECTED]] 
 Sent: Thursday, December 12, 2002 8:11 AM
 To: Apache dev mailing list (E-mail)
 Subject: Webdav
 
 
 
 Hi folks,
   I'm an network administrator using webdav to share 
 users folder and im running it as root since user need to 
 create, move, copy, delete files.  I was wondering first is 
 there a patch or any hack to lower down priviledge because I 
 dislike my server having apache 2.0 running as root.  If it 
 doesnt seem to exists i'll code one.  So the question is, if 
 I start a patch to lower down down webdav privilege will I 
 loose my time?
 
 Martin Ouimet
 



Re: mod_cache CacheMaxStreamingBuffer

2002-12-12 Thread Estrade Matthieu




What I remember is that the cache is able to cache documents of unknown size
with MaxStreamingBuffer. I will try to sketch it:

first brigade: no length; if no EOS bucket, save the data in a temp brigade
second brigade: if no EOS, concat the data to the temp brigade
...until an EOS bucket is found, or the total length is more than
MaxStreamingBuffer.

All of this starts at line 630 of mod_cache.c.

Then write_headers and write_body are called (with the temp brigade if the
document is handled with more than one brigade), which point to the disk or
mem cache writing functions depending on the setting.
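
In code, that accumulation step amounts to roughly the following sketch
(plain APR brigade calls, not the actual mod_cache source; real code must
also set aside transient buckets, e.g. with ap_save_brigade()):

#include "apr_buckets.h"

static int brigade_has_eos(apr_bucket_brigade *bb)
{
    apr_bucket *e;
    for (e = APR_BRIGADE_FIRST(bb); e != APR_BRIGADE_SENTINEL(bb);
         e = APR_BUCKET_NEXT(e)) {
        if (APR_BUCKET_IS_EOS(e))
            return 1;
    }
    return 0;
}

/* Append the incoming brigade to the saved one, then decide: EOS seen
 * (length now known, hand everything to write_body) or over the streaming
 * buffer limit (give up on caching this response). */
static void accumulate(apr_bucket_brigade *saved, apr_bucket_brigade *in,
                       apr_off_t max_streaming_buffer,
                       int *complete, int *give_up)
{
    apr_off_t len = 0;

    APR_BRIGADE_CONCAT(saved, in);
    apr_brigade_length(saved, 1, &len);

    *complete = brigade_has_eos(saved);
    *give_up  = (!*complete && len > max_streaming_buffer);
}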

How long do you think the cache will stay in experimental?
I am using it a lot in my product, which is used by many customers, and
I haven't found any big problems.
In worker, as a reverse proxy with mem_cache, I have about 300 requests on the
backend for 120,000 requests on the reverse proxy.
It increases performance a lot.

regards,

Matthieu



Bill Stoddard wrote:

  
I confirm the CacheMaxStreamingBuffer is a source of misconfiguration
Using the MCacheMaxObjectSize could me a good thing.
In case of CacheMaxStreamingBuffer used, i don't know how
MCacheMaxObjectSize is used for because if a document is bigger than
MaxStreamingBuffer, it will be never cached.
are you sure MaxStreamingBuffer is not used in disk_cache ?
this directive is to store all the brigade before saving the entire data
with the write_headers and write_body functions, pointing on disk or mem
writing functions.
I think it's the same for disk and mem cache.

  
  
I need to look at the old code, but if i recall correctly, mod_mem_cache would
reject attempting to cache an object whose length was unknown (see the call to
cache_create_entity() in mod_cache). mod_disk_cache otoh would happily attempt
to cache objects whose size was not know at the start. Remember that
mod_disk_cache is just writing cache objects to disk and does not need to
allocate memory to hold/prefetch the object.

  
  
In the same time, could you answer my old mail with the
CacheSlashEndingUrl patch i made is usefull or not ?

  
  
Yea, I was just looking at that. I think we can eliminate that check. I was
originally afraid that this would muck up negotiation but the cache does not
work with negotiated content now anyway.

Bill

_
GRAND JEU SMS : Pour gagner un NOKIA 7650, envoyez le mot IF au 61321
(prix d'un SMS + 0.35 euro). Un SMS vous dira si vous avez gagn.
Rglement : http://www.ifrance.com/_reloc/sign.sms

  






Suppressing Authentication Dialog box

2002-12-12 Thread Laxmikanth M.S.
Hi,
PLEASE REPLY TO LAXMIKANTH.MS@SONATA_SOFTWARECOM

I have set up a site with Basic Authentication...
I want to suppress the dialog box.
I have the password and username with me... is there any way to pass these
values directly from my page instead of through the popup box?
Thanks in advance
Laxmikanth

*
Disclaimer: The information in this e-mail and any attachments is
confidential / privileged. It is intended solely for the addressee or
addressees. If you are not the addressee indicated in this message, you may
not copy or deliver this message to anyone. In such case, you should destroy
this message and kindly notify the sender by reply email. Please advise
immediately if you or your employer does not consent to Internet email for
messages of this kind.
*



Re: [PATCH-3] Allowing extended characters in LDAP authentication...

2002-12-12 Thread Astrid Kessler
This patch eliminates the hardcoded charset table.  Instead it
reads the charset table from a conf file.  The directive
AuthLDAPCharsetConfig allows the admin to specify the charset conf
file.  Is there also a need to specify additional conversions
directly in the httpd.conf file through a different directive?  It
seems that the charset conf file would be sufficient.  If there are
multiple charsets per language, these can be set by specifying the
5 character language ID rather than the 2 character ID similar to
the example in the charset.conv file for chinese. 

As nd said, if someone needs additional conversion, he will scream for it. 
:-)

But something else is going around in my head. Why should this charset
conversion be limited to ldap? Well, I don't know where else we need the
conversion table. But the table itself should be generally available to
all modules. Maybe some other modules would like to do the same.
A core (?) directive like LanguageCharsetConfig might be much more useful
than AuthLDAPCharsetConfig. So the next step would be to move the
conversion function to core or apr or so, too. Each module which needs a
conversion can call this function instead of having its own code.

Maybe there is some overlap with mod_charset_lite, which also does
charset conversion.

Kess



RE: Suppressing Authentication Dialog box

2002-12-12 Thread John K. Sterling
Not using basic authentication.  Basic auth IS the browser-dialog-based
authentication.  You will need to write your own auth module to accept the
username and password from the vars.

sterling
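
For what it's worth, the skeleton of such a module is small.  A sketch
against the 2.0 API, with the query-string parsing and the real credential
check left as stubs (the queryauth names are made up, and of course
credentials in URLs are exposed to anyone watching):

#include "httpd.h"
#include "http_config.h"
#include "http_request.h"
#include "apr_strings.h"

/* Pull credentials from the query string instead of the Authorization
 * header.  Parsing and validation are stubbed out. */
static int queryauth_check_user_id(request_rec *r)
{
    const char *args = r->args;          /* e.g. "user=alice&pass=secret" */

    if (!args) {
        return HTTP_UNAUTHORIZED;
    }

    /* ...parse user/pass out of args and check them against your store... */

    r->user = apr_pstrdup(r->pool, "alice");   /* record the authenticated user */
    return OK;
}

static void queryauth_register_hooks(apr_pool_t *p)
{
    ap_hook_check_user_id(queryauth_check_user_id, NULL, NULL, APR_HOOK_MIDDLE);
}

module AP_MODULE_DECLARE_DATA queryauth_module = {
    STANDARD20_MODULE_STUFF,
    NULL, NULL, NULL, NULL, NULL,
    queryauth_register_hooks
};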

-- Original Message --
Reply-To: [EMAIL PROTECTED]
From: Laxmikanth M.S. [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Suppressing Authentication Dialog box
Date: Thu, 12 Dec 2002 16:11:28 +0530


Hi,
PLEASE REPLT TO LAXMIKANTH.MS@SONATA_SOFTWARECOM

I have setup a site with Basic Autentication...
I want to suppress the dialog box
I have the password and username with meis there anyway to pass these
values directly from my page instead thro' the popup box.
thanks in advacne
Laxmikanth

*
Disclaimer: The information in this e-mail and any attachments is
confidential / privileged. It is intended solely for the addressee or
addressees. If you are not the addressee indicated in this message, you
may
not copy or deliver this message to anyone. In such case, you should destroy
this message and kindly notify the sender by reply email. Please advise
immediately if you or your employer does not consent to Internet email
for
messages of this kind.
*





RE: mod_cache CacheMaxStreamingBuffer

2002-12-12 Thread Bill Stoddard

 When I added CacheMaxStreamingBuffer originally, I had two
 reasons for making it a separate directive:

   1. As a separate directive, it could be disabled by default
  to guarantee that the new functionality wouldn't break
  anyone's existing mod_cache setup.
 [I'm no longer worried about this issue, now that
 the code has been in place for a while.]

   2. There are some extreme cases where the maximum cacheable
  object size could be too large a value for CacheMaxStreamingBuffer.
  For example, if MCacheMaxObjectSize is 20MB, and your server
  is servering a mix of 10MB static files and 30MB streamed
  CGI responses with no content-length information, then each
  CGI response will cause mod_cache to buffer 20MB of content
  before giving up and freeing all that space.
 [I am still worried about this issue.]

 I suppose we could eliminate my second concern by simply adding
 a note to the documentation that says, don't use ridiculously
 large values for MCacheMaxObjectSize.  What do you think?

Documenting the issue might be sufficient.  If we do need a directive, then
perhaps default it to MCacheMaxObjectSize and  use the directive to lower the
streaming threshold to handle the pathological cases.
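
In code, that default could be captured with something along these lines
(a sketch; the parameters are illustrative, not mod_mem_cache's actual
config fields):

#include "apr.h"

/* If no CacheMaxStreamingBuffer was configured, fall back to the storage
 * manager's max object size; an explicit directive can only lower the
 * threshold, never raise it past what we could cache anyway. */
static apr_off_t effective_streaming_buffer(apr_off_t configured_buffer,
                                            int buffer_was_set,
                                            apr_off_t max_object_size)
{
    if (!buffer_was_set || configured_buffer > max_object_size) {
        return max_object_size;
    }
    return configured_buffer;
}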


 If you move the stream buffering to the storage manager, does
 that mean that mod_disk_cache won't be able to cache streamed
 responses any more?  Or are you thinking of mirroring the current
 buffering logic with something that stores the pending content
 in the file rather than in-core?

I haven't looked at mod_disk_cache recently but I seem to recall that it already
handled streaming content.  If it doesn't, it should be a simple matter to fix
by streaming the content to disk as you suggest.

Bill




RE: mod_cache CacheMaxStreamingBuffer

2002-12-12 Thread Bill Stoddard



I expect mod_cache/mod_mem_cache is close to moving out of experimental.  I
have no confidence in mod_disk_cache (mainly because I have not spent much
time on it in months).  None of the changes I am proposing would
substantially impact the stability (famous last words :-).

Bill

  -Original Message-
  From: Estrade Matthieu [mailto:[EMAIL PROTECTED]]
  Sent: Thursday, December 12, 2002 11:54 AM
  To: [EMAIL PROTECTED]
  Subject: Re: mod_cache CacheMaxStreamingBuffer

  what i remember is cache is able to cache unknown size document with
  MaxStreamingBuffer. i will try to draw it:

  first brigade, no length, if no EOS bucket, saving data in temp brigade
  second brigade, if no EOS, concat data to temp_brigade
  until finding an EOS bucket, or total length is more than MaxStreamingBuffer.

  All of this is starting line 630 of mod_cache.c

  then, write_headers, and write_body (with temp brigade if the document is
  handled with more than 1 brigade), which are pointing on disk or mem cache
  writing functions depending on setting.

  how long do you think the cache will stay in experimental
  I am using it a lot in my product which is used by many customers
  I found any big problems
  in worker, reverse_proxy with mem_cache, i have like 300 request on backend
  for 120 000 request on reverse proxy
  It increase performance a lot

  regards,

  Matthieu

  Bill Stoddard wrote:
  
I confirm the CacheMaxStreamingBuffer is a source of misconfiguration
Using the MCacheMaxObjectSize could me a good thing.
In case of CacheMaxStreamingBuffer used, i don't know how
MCacheMaxObjectSize is used for because if a document is bigger than
MaxStreamingBuffer, it will be never cached.
are you sure MaxStreamingBuffer is not used in disk_cache ?
this directive is to store all the brigade before saving the entire data
with the write_headers and write_body functions, pointing on disk or mem
writing functions.
I think it's the same for disk and mem cache.

I need to look at the old code, but if i recall correctly, mod_mem_cache would
reject attempting to cache an object whose length was unknown (see the call to
cache_create_entity() in mod_cache). mod_disk_cache otoh would happily attempt
to cache objects whose size was not know at the start. Remember that
mod_disk_cache is just writing cache objects to disk and does not need to
allocate memory to hold/prefetch the object.

  
In the same time, could you answer my old mail with the
CacheSlashEndingUrl patch i made is usefull or not ?

Yea, I was just looking at that. I think we can eliminate that check. I was
originally afraid that this would muck up negotiation but the cache does not
work with negotiated content now anyway.

Bill

_
GRAND JEU SMS : Pour gagner un NOKIA 7650, envoyez le mot IF au 61321
(prix d'un SMS + 0.35 euro). Un SMS vous dira si vous avez gagné.
Règlement : http://www.ifrance.com/_reloc/sign.sms

  


RE: mod_cache CacheMaxStreamingBuffer

2002-12-12 Thread Brian Pane
On Thu, 2002-12-12 at 09:40, Bill Stoddard wrote:

  I suppose we could eliminate my second concern by simply adding
  a note to the documentation that says, don't use ridiculously
  large values for MCacheMaxObjectSize.  What do you think?
 
 Documenting the issue might be sufficient.  If we do need a directive, then
 perhaps default it to MCacheMaxObjectSize and  use the directive to lower the
 streaming threshold to handle the pathological cases.

Sounds good to me.

Brian





RE: mod_cache CacheMaxStreamingBuffer

2002-12-12 Thread Bill Stoddard

 
 On Thu, 2002-12-12 at 09:40, Bill Stoddard wrote:
 
   I suppose we could eliminate my second concern by simply adding
   a note to the documentation that says, don't use ridiculously
   large values for MCacheMaxObjectSize.  What do you think?
  
  Documenting the issue might be sufficient.  If we do need a directive, then
  perhaps default it to MCacheMaxObjectSize and  use the directive to 
 lower the
  streaming threshold to handle the pathological cases.
 
 Sounds good to me.
 
 Brian
 

Okay, I'll work up a patch next week.

Bill



Re: mod_usertrack patch

2002-12-12 Thread Jim Jagielski
Unless anyone has gas on this, I'm +1 and will be
committing in the next 24-48 hours.

At 9:16 AM -0500 12/6/02, Andrei Zmievski wrote:
Jim,

Resubmitting the patch, as you requested.

--
Andrei Zmievski Mail:   [EMAIL PROTECTED]
Sr. Front End Software Engineer Web:http://www.fast.no/
Fast Search & Transfer Inc  Phone:  781-304-2493
93 Worcester Street Fax:781-304-2410
Wellesley MA 02481-9181, USAMain:   781-304-2400

Attachment converted: PowerMac:mod_usertrack 1.patch (TEXT/R*ch) (000B18F3)


-- 
===
   Jim Jagielski   [|]   [EMAIL PROTECTED]   [|]   http://www.jaguNET.com/
  A society that will trade a little liberty for a little order
 will lose both and deserve neither - T.Jefferson



Re: request for comments: multiple-connections-per-thread MPM design

2002-12-12 Thread Glenn
On Thu, Dec 12, 2002 at 12:39:17AM -0800, Manoj Kasichainula wrote:
...
  Add a descriptor (pipe, socket, whatever) to the pollset and use
  it to indicate the need to generate a new pollset.  The thread that sends
  info down this descriptor could be programmed to wait a short amount of
  time between sending triggers, so as not to cause the select() to return
  too, too often, but short enough not to delay the handling of new
  connections too long.
 
 But what's a good value?
...
 Hmmm, if the poll is waiting on fds for any length of time, it should be
 ok to interrupt it, because by definition it's not doing anything else.
 
 So maybe the way to go is to forget about waiting the 0.1 s to interrupt
 poll. Just notify it immediately when there's a fd waiting to be polled.
 If no other fds have work to provide, we add the new fds to the poll set
 and continue.
 Otherwise, just run through all the other fds that need handling first,
 then pick off all the fds that are waiting for polling and add them to
 the fd set.
 
 So (again using terms from my proposal):
 
 submit_ticket would push fds into a queue and write to new_event_pipe if
 the queue was empty when pushing.
 
 get_next_event would do something like:
 
 if (previous_poll_fds_remaining) {
 pick one off, call event handler for it
 }
 else {
 clean out new_event_queue and put values into new poll set
 poll(pollfds, io_timeout);
 call event handler for one of the returned pollfds
 }
...

+1 on concept with comments:
Each time poll returns to handle ready fds, it should skip new_event_pipe
(it should not send that fd to an event handler), and it should check
new_event_queue for fds to add to the pollset before it returns to polling.

It should always be doing useful work or should be blocking in select(),
because it will always have at least one fd -- its end of new_event_pipe --
in its pollset.


Coding to interrupt the poll immediately is the first thing to do, and
then a max short delay can be added to submit_ticket only if necessary.

As you said, the max short delay would only affect the unbusy case where
the poll is waiting on all current members of the pollset.  The short
delay had been suggested to prevent interrupting select() before select()
had a chance to do any useful work.  We won't know if this is a real or
imagined problem until it is tested.  It sounds like it won't be a
performance problem, although using the max short timer of even 0.05s might
slightly reduce the CPU usage of these threads when under heavy load.

-Glenn



Re: [PATCH-3] Allowing extended characters in LDAP authentication...

2002-12-12 Thread Brad Nicholes
   The charset conversion that is happening in LDAP is actually quite specialized.  
The general functionality of converting from one charset to another already exists in 
APR in the form of apr_xlat_xxx().  LDAP is only interested in converting the user ID 
from a given charset to UTF-8.  Up until auth_ldap calls ap_get_basic_auth_pw(), the 
user ID and password are encrypted in the Authentication header entry.  Until the 
user ID and password have been decrypted, the conversion to UTF-8 can not occur.  
Therefore the conversion must take place from within auth_ldap or any other 
authentication module after decrypting the user information.  A module or filter 
outside of the authentication module that does a blind charset conversion on the 
header information, would not work because it would not be able to decrypt the user ID 
and password, convert it and re-encrypt it in order to make the process transparent to 
all authentication modules.  (Actually you could probably make it work for base64, but 
what about digest?)
   On the other hand, the one place that the conversion could be done is within the 
call to ap_get_basic_auth_pw().  But ap_get_basic_auth_pw() or whatever function 
handles decrypting digest authentication, would have to be modified so that it had 
access to the accept-language header values.  This would allow it to convert from 
the assumed browser's charset to UTF-8 or any other charset.  But the down side is 
that the accept-language header value does not guarantee that that is the charset 
the browser used when it sent the request.  It is simply an indicator of what 
charset(s) the browser will accept.  Auth_LDAP would be utilizing this functionality 
to at least attempt to do the right thing rather than always failing.
   I do agree that we need some type of functionality that will convert requests made 
in a particular charset to a universal charset that Apache can rely on.  I'm just not 
sure this is it.  It seems to work for auth_LDAP, but I'm not sure how to generalize 
it.  This is where a much broader discussion needs to take place.
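
The conversion step itself is a fairly direct use of APR's translation
API.  A sketch (the caller is assumed to have already guessed the source
charset, e.g. from the table discussed above; buffer sizing and multi-pass
conversion are simplified):

#include <string.h>
#include "apr_pools.h"
#include "apr_xlate.h"

/* Convert a decoded user ID from the browser's (guessed) charset to
 * UTF-8.  Requires APR_HAS_XLATE; returns NULL on failure. */
static char *userid_to_utf8(apr_pool_t *p, const char *user,
                            const char *from_charset)
{
    apr_xlate_t *convset;
    apr_size_t inbytes  = strlen(user);
    apr_size_t outbytes = inbytes * 4 + 1;      /* generous worst case */
    char *out = apr_pcalloc(p, outbytes);       /* zero-filled, so NUL-terminated */

    if (apr_xlate_open(&convset, "UTF-8", from_charset, p) != APR_SUCCESS)
        return NULL;
    if (apr_xlate_conv_buffer(convset, user, &inbytes, out, &outbytes)
            != APR_SUCCESS)
        return NULL;

    return out;
}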



Brad Nicholes
Senior Software Engineer
Novell, Inc., the leading provider of Net business solutions
http://www.novell.com 

 [EMAIL PROTECTED] Thursday, December 12, 2002 4:09:57 AM 
This patch eliminates the hardcoded charset table.  Instead it
reads the charset table from a conf file.  The directive
AuthLDAPCharsetConfig allows the admin to specify the charset conf
file.  Is there also a need to specify additional conversions
directly in the httpd.conf file through a different directive?  It
seems that the charset conf file would be sufficient.  If there are
multiple charsets per language, these can be set by specifying the
5 character language ID rather than the 2 character ID similar to
the example in the charset.conv file for chinese. 

As nd said, if someone needs additional conversion, he will scream for it. 
:-)

But something else is going around in my head. Why should this charset 
conversion be limited to ladp? Well, I don't know where we need the 
conversion table too. But the table itself should be general available to 
all modules. Maybe some other modules would like to do the same.
A core (?) directive like LanguageCharsetConfig might be much more useful 
then AuthLDAPCharsetConfig. So the next step would be to move the 
conversion function to core or apr or so, too. Each module, which needs a 
conversion, can call this funtion instead of having its own code.

Maybe there are some overlapping with mod_charset_lite which also does 
charset conversion. 

Kess






Re: Suppressing Authentication Dialog box

2002-12-12 Thread Glenn
 -- Original Message --
 Reply-To: [EMAIL PROTECTED]
 From: Laxmikanth M.S. [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Subject: Suppressing Authentication Dialog box
 Date: Thu, 12 Dec 2002 16:11:28 +0530
 
 Hi,
 PLEASE REPLT TO LAXMIKANTH.MS@SONATA_SOFTWARECOM
 
 I have setup a site with Basic Autentication...
 I want to suppress the dialog box 
 I have the password and username with meis there anyway to pass these
 values directly from my page instead thro' the popup box.
 thanks in advacne
 Laxmikanth

This is not the appropriate list for such a question.  Please see:
  http://httpd.apache.org/lists.html#http-users

What you are asking makes Basic Authentication effectively useless
since you're going to post the username and password in a public link.
In any case, http://username:[EMAIL PROTECTED]/ will do what
I think you are asking.

-Glenn



Re: cvs commit: httpd-2.0/server request.c

2002-12-12 Thread Paul J. Reder


[EMAIL PROTECTED] wrote:


wrowe   2002/12/11 23:05:54

  Modified:server   request.c
  Log:
Make the code simpler to follow, and perhaps clear up the follow-symlink
bug reports we have seen on bugzilla.  e.g. 14206 etc.
  
  Revision  ChangesPath
  1.122 +23 -43httpd-2.0/server/request.c


Sorry to be the bearer of bad news but the problem reported in 14206 still

occurs with this new code. All you have to do is the following:

In your htdocs directory:

mv index.html foo.html
ln -s foo.html index.html

In your httpd.conf:


# Note: Options should not allow FollowSymLinks
<Directory />
Options None
AllowOverride None
</Directory>

<Directory /home/Apache/htdocs>
   Options None
   AllowOverride None
   Order deny,allow
   Allow from all
</Directory>


Now bring up your browser and request:

http://your.machine.name:port/index.html

You'll get a 403:forbidden error.

http://your.machine.name:port/

You'll get the page foo.html.

I can spend more time tracking this if you want, but it won't be
till this afternoon.

--
Paul J. Reder
---
The strength of the Constitution lies entirely in the determination of each
citizen to defend it.  Only if every single citizen feels duty bound to do
his share in this defense are the constitutional rights secure.
-- Albert Einstein






RE: Webdav

2002-12-12 Thread Martin Ouimet

This is impossible because I'm sharing all my users' home directories via webdav.

-Original Message-
From: Bennett, Tony - CNF [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 12, 2002 11:48 AM
To: '[EMAIL PROTECTED]'
Subject: RE: Webdav


You don't have to run Apache 2.0 as root
in order to provide webdav capability...
...If you are running as user 'nobody',
just ensure that the directory tree that
is dav enabled is owned by user 'nobody'.

-tony

 -Original Message-
 From: Martin Ouimet [mailto:[EMAIL PROTECTED]] 
 Sent: Thursday, December 12, 2002 8:11 AM
 To: Apache dev mailing list (E-mail)
 Subject: Webdav
 
 
 
 Hi folks,
   I'm an network administrator using webdav to share 
 users folder and im running it as root since user need to 
 create, move, copy, delete files.  I was wondering first is 
 there a patch or any hack to lower down priviledge because I 
 dislike my server having apache 2.0 running as root.  If it 
 doesnt seem to exists i'll code one.  So the question is, if 
 I start a patch to lower down down webdav privilege will I 
 loose my time?
 
 Martin Ouimet
 



compiling mod_auth_digest fails

2002-12-12 Thread Günter Knauf
Hi,
just tried to compile snapshot apache-1.3_20021212111200.tar.gz for Win32 platform, 
but compilation breaks in mod_auth_digest.c line 378 with 'unexpected #endif'. After 
removing line 378 it compiles.

Guenter.




Re: [PATCH-3] Allowing extended characters in LDAP authentication...

2002-12-12 Thread Astrid Kessler
The charset conversion that is happening in LDAP is actually quite
 specialized.  The general functionality of converting from one charset
 to another already exists in APR in the form of apr_xlat_xxx().  LDAP is
 only interested in converting the user ID from a given charset to UTF-8.
  Up until auth_ldap calls ap_get_basic_auth_pw(), the user ID and
 password are encrypted in the Authentication header entry.  Until the
 user ID and password have been decrypted, the conversion to UTF-8 can
 not occur.  Therefore the conversion must take place from within
 auth_ldap or any other authentication module after decrypting the user
 information.  A module or filter outside of the authentication module
 that does a blind charset conversion on the header information, would
 not work because it would not be able to decrypt the user ID and
 password, convert it and re-encrypt it in order to make the process
 transparent to all authentication modules.  

Well you are right, that you first have to decrypt the authentication 
information before you are able to do charset conversion. And I overlooked 
that a conversion function already exists, which you are using. My 
suggestions were a little bit ill-considered. Let me try to explain.

I do agree that we need some type of functionality that will convert
 requests made in a particular charset to a universal charset that Apache
 can rely on.  I'm just not sure this is it.  It seems to work for
 auth_LDAP, but I'm not sure how to generalize it.  This is where a much
 broader discussion need to take place.

I still think mod_auth_ldap won't be the only module doing charset 
conversion on headers. Or say, the authentication header might not stay the 
only header which needs to be converted. But if we want to convert headers 
and we have to guess the incoming charset, we will need a general 
assignment table, not only for mod_auth_ldap but for all modules interested 
in converting headers. Or in other words, your conf file might move to 
another module at a later time, which could also be done now. 

But maybe this patch is not the right place to discuss a general new 
feature.

Kess



RE: Webdav

2002-12-12 Thread Bennett, Tony - CNF
Notice:  I have cross-posted this into the dav-dev
 list where it more aptly belongs.

mod_dav was designed assuming it owns all resources
in its repository... whether that repository is 
a file-system (like the out-of-the-box version of
mod_dav & mod_dav_fs), or whether that repository
is a data-base (like catacomb: http://www.webdav.org/catacomb/).

mod_dav does not provide authentication, authorization,
or access control.

-tony

 -Original Message-
 From: Martin Ouimet [mailto:[EMAIL PROTECTED]] 
 Sent: Thursday, December 12, 2002 12:33 PM
 To: [EMAIL PROTECTED]
 Subject: RE: Webdav
 
 
 
 this is impossible because im sharing all my user's home 
 directory via webdav.
 
 -Original Message-
 From: Bennett, Tony - CNF [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, December 12, 2002 11:48 AM
 To: '[EMAIL PROTECTED]'
 Subject: RE: Webdav
 
 
 You don't have to run Apache 2.0 as root
 in order to provide webdav capability...
 ...If you are running as user 'nobody',
 just ensure that the directory tree that
 is dav enabled is owned by user 'nobody'.
 
 -tony
 
  -Original Message-
  From: Martin Ouimet [mailto:[EMAIL PROTECTED]] 
  Sent: Thursday, December 12, 2002 8:11 AM
  To: Apache dev mailing list (E-mail)
  Subject: Webdav
  
  
  
  Hi folks,
  I'm an network administrator using webdav to share 
  users folder and im running it as root since user need to 
  create, move, copy, delete files.  I was wondering first is 
  there a patch or any hack to lower down priviledge because I 
  dislike my server having apache 2.0 running as root.  If it 
  doesnt seem to exists i'll code one.  So the question is, if 
  I start a patch to lower down down webdav privilege will I 
  loose my time?
  
  Martin Ouimet
  
 



Re: [PATCH-3] Allowing extended characters in LDAP authentication...

2002-12-12 Thread Brad Nicholes
   You are absolutely right, there are other modules that need to do header 
conversion.  In a previous email, Bill Rowe pointed out that WebDAV also suffers from 
charset mismatch, but in a different way than auth_ldap.  WebDAV needs the URI 
converted as well as other header entries in order to function correctly.  A 
generalized solution needs to be worked out, but even a generalized header conversion 
solution still may not solve the problem for authentication modules because of the 
fact that the authentication data conversion needs to be done at the point when the 
data is decrypted.  In order to solve WebDAV's problem, the scope of this discussion 
needs to be much broader.  Any ideas??





Brad Nicholes
Senior Software Engineer
Novell, Inc., the leading provider of Net business solutions
http://www.novell.com 

 [EMAIL PROTECTED] Thursday, December 12, 2002 2:07:24 PM 
The charset conversion that is happening in LDAP is actually quite
 specialized.  The general functionality of converting from one charset
 to another already exists in APR in the form of apr_xlat_xxx().  LDAP is
 only interested in converting the user ID from a given charset to UTF-8.
  Up until auth_ldap calls ap_get_basic_auth_pw(), the user ID and
 password are encrypted in the Authentication header entry.  Until the
 user ID and password have been decrypted, the conversion to UTF-8 can
 not occur.  Therefore the conversion must take place from within
 auth_ldap or any other authentication module after decrypting the user
 information.  A module or filter outside of the authentication module
 that does a blind charset conversion on the header information, would
 not work because it would not be able to decrypt the user ID and
 password, convert it and re-encrypt it in order to make the process
 transparent to all authentication modules.  

Well you are right, that you first have to decrypt the authentication 
information before you are able to do charset conversion. And I overlooked 
that a conversion function already exists, which you are using. My 
suggestions have been a little bit inconsideratly. Let me try to explain.

I do agree that we need some type of functionality that will convert
 requests made in a particular charset to a universal charset that Apache
 can rely on.  I'm just not sure this is it.  It seems to work for
 auth_LDAP, but I'm not sure how to generalize it.  This is where a much
 broader discussion need to take place.

I still think mod_auth_ldap won't be the only module doing charset 
conversion on headers. Or say, the authentication header might not stay the 
only header which needs to be converted. But if we want to convert headers 
and we have to guess the incoming charset, we will need a general 
assignment table, not only for mod_auth_ldap but for all modules interested 
in converting headers. Or with other words, your conf file might move to 
another module at a later time. Which could also be done now. 

But maybe this patch is not the right place to discuss a general new 
feature.

Kess







Re: [PATCH-3] Allowing extended characters in LDAP authentication...

2002-12-12 Thread Astrid Kessler
Brad Nicholes [EMAIL PROTECTED] wrote in news:sdf8b26d.020@prv-
mail25.provo.novell.com:

 In order to solve WebDAV's problem, the
 scope of this discussion needs to be much broader.  Any ideas??

Hm, not really. :/ 
This should be done by someone with more experience to the code. 

Kess



Re: Support for ASP

2002-12-12 Thread Sabyasachi Chakrabarty
I don't know whether this is the correct forum for this. However, there
are some solutions available, like Chiliasp from Chilisoft. There is also an
open source project called apache-asp.


Hope this helps.

Sachi

On Thu, 12 Dec 2002, Karma Dorji wrote:

 Can anyone help me, how to support ASP, in apache server, running in Linux 7.2 
box, my apache is 2.0, i need to support ASP for one of the customer, hosting in my 
sever, your help will be highly appreciated.
 
 Thanks.
 Karma.