Re: Form of req.filename/req.hlist.directory on Win32 systems.

2006-04-17 Thread Nicolas Lehuen
Hi Graham,

Here is the test handler I've used :

from mod_python import apache

def handler(req):
    req.content_type = 'text/plain'
    req.write(req.hlist.directory+'\n')
    req.write(req.filename+'\n')
    return apache.OK

If I use :

DocumentRoot c:\\apache22\\htdocs
<Directory c:\\apache22\\htdocs>
    # ...
    SetHandler mod_python
    PythonHandler test_handler
</Directory>

I get, when calling http://localhost/index.html :

c:\apache22\htdocs/
C:/apache22/htdocs/index.html

Note that the drive letter has been uppercased and req.filename normalized
to POSIX path names. req.hlist.directory, though supported by Win32, looks
weird.

Now with :

DocumentRoot c:/apache22/htdocs
<Directory c:/apache22/htdocs>
    # ...
    SetHandler mod_python
    PythonHandler test_handler
</Directory>

I get :

c:/apache22/htdocs/
C:/apache22/htdocs/index.html

With :

DocumentRoot c:/apache22/htdocs
<Directory c:\\apache22\\htdocs>
    # ...
    SetHandler mod_python
    PythonHandler test_handler
</Directory>

I get :

c:\apache22\htdocs/
C:/apache22/htdocs/index.html

And finally with :

DocumentRoot c:\\apache22\\htdocs
<Directory c:/apache22/htdocs>
    # ...
    SetHandler mod_python
    PythonHandler test_handler
</Directory>

I get :

c:/apache22/htdocs/
C:/apache22/htdocs/index.html

So req.filename seems always normalized while req.hlist.directory reflects
what was entered in the Directory tag. Both POSIX and Windows forms are
allowed, unfortunately, but the backslash form needs C-style escaping, and
IIRC the Apache documentation recommends using forward slashes.

Regards,
Nicolas

2006/4/16, Graham Dumpleton [EMAIL PROTECTED]:

I am sure I asked this a long time ago, but have forgotten all the
details.

On Win32 systems does req.filename set by Apache always use POSIX
style forward slashes, ie., '/', to separate components of a
directory? Thus:

   /some/path

How does Apache indicate a drive letter when one is necessary? Is it:

   c:/some/path

Does any of the above change based on whether forward or backward
slashes are used in a Directory directive? Ie.,

   <Directory c:/some/path>
   ...
   </Directory>?

vs:

   <Directory c:\\some\\path>
   ...
   </Directory>

Or does Apache not allow the latter anyway?

If Apache does allow the latter, does that mean that req.hlist.directory
is coming through set including backslashes rather than forward
slashes.

I want to get my head around this all again as at different times the
values of req.filename and req.hlist.directory are used to determine the
Python interpreter name. As highlighted in:

   http://issues.apache.org/jira/browse/MODPYTHON-161

If there is a mix of conventions, with user code also being able to
affect these values, there may be no consistency and thus could end up with
scenarios where a different interpreter to one than was expected will be
used.

Any help from Win32 users in understanding all this would be much
appreciated.

Thanks.

Graham
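
(A minimal illustrative sketch, not taken from mod_python's sources and with
a made-up helper name, of the normalisation observed above: backslashes
become forward slashes and the drive letter is upper-cased, so a configured
directory string can be compared reliably against req.filename when deriving
an interpreter name.)

#include <ctype.h>

/* Hypothetical helper: canonicalise a Win32 path in place to the form
 * Apache appears to use for req.filename. */
static void canonicalise_win32_path(char *path)
{
    char *p;

    if (path == NULL || path[0] == '\0')
        return;

    for (p = path; *p; ++p) {
        if (*p == '\\')
            *p = '/';                       /* POSIX-style separators */
    }
    if (path[1] == ':')
        path[0] = (char)toupper((unsigned char)path[0]);   /* C: not c: */
}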


Re: Form of req.filename/req.hlist.directory on Win32 systems.

2006-04-17 Thread Graham Dumpleton

Was this with mod_python from subversion or 3.2.8?

Want to qualify whether the latest set of changes I checked in to support
the Files directive has caused it to behave differently, as how it determines
req.hlist.directory is different to before.

Thanks.

Graham

On 18/04/2006, at 4:33 AM, Nicolas Lehuen wrote:


Hi Graham,

Here is the test handler I've used :

from mod_python import apache

def handler(req):
    req.content_type = 'text/plain'
    req.write(req.hlist.directory+'\n')
    req.write(req.filename+'\n')
    return apache.OK

If I use :

DocumentRoot c:\\apache22\\htdocs
<Directory c:\\apache22\\htdocs>
   # ...
SetHandler mod_python
PythonHandler test_handler
</Directory>

I get, when calling http://localhost/index.html:

c:\apache22\htdocs/
C:/apache22/htdocs/index.html

Note that the drive letter has been uppercased and req.filename normalized
to POSIX path names. req.hlist.directory, though supported by Win32, looks
weird.


Now with :

DocumentRoot c:/apache22/htdocs
<Directory c:/apache22/htdocs>
   # ...
SetHandler mod_python
PythonHandler test_handler
</Directory>

I get :

c:/apache22/htdocs/
C:/apache22/htdocs/index.html
With :

DocumentRoot c:/apache22/htdocs
<Directory c:\\apache22\\htdocs>
   # ...
SetHandler mod_python
PythonHandler test_handler
</Directory>

I get :

c:\apache22\htdocs/
C:/apache22/htdocs/index.html
And finally with :

DocumentRoot c:\\apache22\\htdocs
<Directory c:/apache22/htdocs>
   # ...
SetHandler mod_python
PythonHandler test_handler
</Directory>

I get :
c:/apache22/htdocs/
C:/apache22/htdocs/index.html
So req.filename seems always normalized while req.hlist.directory
reflects what was entered in the Directory tag. Both POSIX and
Windows forms are allowed, unfortunately, but the backslash form
needs C-style escaping, and IIRC the Apache documentation
recommends using forward slashes.


Regards,
Nicolas
2006/4/16, Graham Dumpleton [EMAIL PROTECTED]: I am sure I  
asked this a long time ago, but have forgotten all the

details.

On Win32 systems does req.filename set by Apache always use POSIX
style forward slashes, ie., '/', to separate components of a
directory? Thus:

   /some/path

How does Apache indicate a drive letter when one is necessary? Is it:

   c:/some/path

Does any of the above change based on whether forward or backward
slashes are used in a Directory directive? Ie.,

   <Directory c:/some/path>
   ...
   </Directory>?

vs:

   <Directory c:\\some\\path>
   ...
   </Directory>

Or does Apache not allow the latter anyway?

If Apache does allow the latter, does that mean that req.hlist.directory
is coming through set including backslashes rather than forward
slashes.

I want to get my head around this all again as at different times the
values of req.filename and req.hlist.directory are used to determine the
Python interpreter name. As highlighted in:

   http://issues.apache.org/jira/browse/MODPYTHON-161

If there is a mix of conventions, with user code also being able to
affect these values, there may be no consistency and thus could end up with
scenarios where a different interpreter to one than was expected will be
used.

Any help from Win32 users in understanding all this would be much
appreciated.

Thanks.

Graham





Re: Form of req.filename/req.hlist.directory on Win32 systems.

2006-04-17 Thread Nicolas Lehuen
This was with the Subversion trunk. I'll do some tests with 3.2.8 and tell
you the results.

Nicolas

2006/4/17, Graham Dumpleton [EMAIL PROTECTED]:

Was this with mod_python from subversion or 3.2.8?

Want to qualify whether the latest set of changes I checked in to support
the Files directive has caused it to behave differently, as how it determines
req.hlist.directory is different to before.

Thanks.

Graham

On 18/04/2006, at 4:33 AM, Nicolas Lehuen wrote:

Hi Graham,

Here is the test handler I've used :

from mod_python import apache

def handler(req):
    req.content_type = 'text/plain'
    req.write(req.hlist.directory+'\n')
    req.write(req.filename+'\n')
    return apache.OK

If I use :

DocumentRoot c:\\apache22\\htdocs
<Directory c:\\apache22\\htdocs>
    # ...
    SetHandler mod_python
    PythonHandler test_handler
</Directory>

I get, when calling http://localhost/index.html :

c:\apache22\htdocs/
C:/apache22/htdocs/index.html

Note that the drive letter has been uppercased and req.filename normalized
to POSIX path names. req.hlist.directory, though supported by Win32, looks
weird.

Now with :

DocumentRoot c:/apache22/htdocs
<Directory c:/apache22/htdocs>
    # ...
    SetHandler mod_python
    PythonHandler test_handler
</Directory>

I get :

c:/apache22/htdocs/
C:/apache22/htdocs/index.html

With :

DocumentRoot c:/apache22/htdocs
<Directory c:\\apache22\\htdocs>
    # ...
    SetHandler mod_python
    PythonHandler test_handler
</Directory>

I get :

c:\apache22\htdocs/
C:/apache22/htdocs/index.html

And finally with :

DocumentRoot c:\\apache22\\htdocs
<Directory c:/apache22/htdocs>
    # ...
    SetHandler mod_python
    PythonHandler test_handler
</Directory>

I get :

c:/apache22/htdocs/
C:/apache22/htdocs/index.html

So req.filename seems always normalized while req.hlist.directory reflects
what was entered in the Directory tag. Both POSIX and Windows forms are
allowed, unfortunately, but the backslash form needs C-style escaping, and
IIRC the Apache documentation recommends using forward slashes.

Regards,
Nicolas

2006/4/16, Graham Dumpleton [EMAIL PROTECTED]:

I am sure I asked this a long time ago, but have forgotten all the
details.

On Win32 systems does req.filename set by Apache always use POSIX
style forward slashes, ie., '/', to separate components of a
directory? Thus:

   /some/path

How does Apache indicate a drive letter when one is necessary? Is it:

   c:/some/path

Does any of the above change based on whether forward or backward
slashes are used in a Directory directive? Ie.,

   <Directory c:/some/path>
   ...
   </Directory>?

vs:

   <Directory c:\\some\\path>
   ...
   </Directory>

Or does Apache not allow the latter anyway?

If Apache does allow the latter, does that mean that req.hlist.directory
is coming through set including backslashes rather than forward
slashes.

I want to get my head around this all again as at different times the
values of req.filename and req.hlist.directory are used to determine the
Python interpreter name. As highlighted in:

   http://issues.apache.org/jira/browse/MODPYTHON-161

If there is a mix of conventions, with user code also being able to
affect these values, there may be no consistency and thus could end up with
scenarios where a different interpreter to one than was expected will be
used.

Any help from Win32 users in understanding all this would be much
appreciated.

Thanks.

Graham


It's that time of the year again

2006-04-17 Thread Ian Holsman

Google is about to start its summer of code project

what does this mean for HTTP/APR ?

we need:
- mentors
and
- project ideas.

so.. if there are any niggly things or cool projects you haven't got
the time to do yourself, but could devote 2-3 hrs/week to helping
someone else do, and that could be accomplished by a good student in
10-12 weeks, now's the time.


ideas so far (half joking):
- mod_ircd
- implementing a UDP protocol
- a caching module implementing CML (cache-meta-language)
- a SEDA type MPM


last year's SoC produced:
- mod_smtpd
- mod_mbox enhancements
- mod_cache_requestor (which i don't think really took off)
and 2 active committers.

so.. let's get brainstorming. Let's see HTTP get the prize for most
ideas (and beat those java weenies)


--Ian

--
Ian Holsman   ++61-3-9877-0909
in this place it takes all the running you can do, to keep in the  
same place. - Lewis Carroll






Re: It's that time of the year again

2006-04-17 Thread Nick Kew
On Monday 17 April 2006 07:13, Ian Holsman wrote:

 ideas so far (half joking):
 - mod_ircd
 - implementing a UDP protocol
 - a caching module implement CML (cache-meta-language)
 - a SEDA type MPM

OK, let's play ...

- Language bindings (mod_[perl|python|etc]) for new goodies
  like DBD and XMLNS
- APR build modularisation and dlloading
- Update apxs to search web, download & verify modules,
  get security and license info.

-- 
Nick Kew


Re: It's that time of the year again

2006-04-17 Thread Graham Dumpleton


On 17/04/2006, at 4:27 PM, Nick Kew wrote:


On Monday 17 April 2006 07:13, Ian Holsman wrote:

OK, let's play ...

- Language bindings (mod_[perl|python|etc]) for new goodies
  like DBD and XMLNS


This possibly ties up with something which I was intending to one day
implement in mod_python. That is to export from mod_python, as optional
functions, functions which would allow one to get access to the Python
interpreter instances plus other functions for accessing/creating Python
wrappers for request, server and filter objects.

The original intent of doing this was so that module writers using C code
could access the Python interpreters already managed by mod_python and then
provide a means in their own module of being able to use Python code.

The particular use case I was seeing this as being targeted at was so as
to be able to implement a mod_dav_python. That is, mod_dav_python is
primarily a C based module which hooks into mod_dav and bridges the C hooks
into hooks implemented in Python. The mod_dav_python would need to provide
Python wrapper classes for all the mod_dav structures, but at least it
doesn't have to duplicate what mod_python does in the way of interpreter
management and wrappers for request object etc. Overall this could allow a
mod_dav filesystem to be implemented in Python.

The same set of optional functions exported from mod_python may be useful
for implementing your suggestion. It is certainly a preferable approach to
making mod_python understand something like mod_dav and embedding the
support within it as standard.
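
(To make the idea concrete, a hedged sketch only: the names
mp_acquire_interpreter / mp_release_interpreter are invented here, not actual
mod_python exports. It shows how such optional functions could be declared
and later retrieved by a module like the suggested mod_dav_python, using
APR's existing optional-function mechanism.)

#include "httpd.h"
#include "apr_optional.h"

/* In a header mod_python could ship (names are illustrative only): */
APR_DECLARE_OPTIONAL_FN(void *, mp_acquire_interpreter, (const char *name));
APR_DECLARE_OPTIONAL_FN(void, mp_release_interpreter, (void *interp));

/* mod_python itself would register the implementations at load time:
 *     APR_REGISTER_OPTIONAL_FN(mp_acquire_interpreter);
 *     APR_REGISTER_OPTIONAL_FN(mp_release_interpreter);
 */

/* A consumer such as a hypothetical mod_dav_python would retrieve them: */
static void run_python_hook(const char *interp_name)
{
    APR_OPTIONAL_FN_TYPE(mp_acquire_interpreter) *acquire =
        APR_RETRIEVE_OPTIONAL_FN(mp_acquire_interpreter);
    APR_OPTIONAL_FN_TYPE(mp_release_interpreter) *release =
        APR_RETRIEVE_OPTIONAL_FN(mp_release_interpreter);

    if (acquire && release) {
        void *interp = acquire(interp_name);
        /* ... call into Python wrapper objects here ... */
        release(interp);
    }
}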

Graham






Re: It's that time of the year again

2006-04-17 Thread Nick Kew
On Monday 17 April 2006 07:45, Graham Dumpleton wrote:
 On 17/04/2006, at 4:27 PM, Nick Kew wrote:
  On Monday 17 April 2006 07:13, Ian Holsman wrote:
 
  OK, let's play ...
 
  - Language bindings (mod_[perl|python|etc]) for new goodies
like DBD and XMLNS

 This possibly ties up with something which I was intending to one day
 implement
 in mod_python. That is to export from mod_python, as optional
 functions, functions
 which would allow one to get access to the Python interpreter
 instances plus other
 functions for accessing/creating Python wrappers for request, server
 and filter
 objects.

 The original intent of doing this was so that module writers using C
 code could
 access the Python interpreters already managed by mod_python and then
 provide a means in their own module of being able to use Python code.

We actually have an analogous setup working with Tcl.  It's quite a small
module, but works well to run Tcl script embedded in HTML pages, and
help the Client upgrade from the vignette system they previously used.
A separate module - used by the interpreter - provides Tcl bindings for DBD.

My thought on doing the same with Python is that it really shouldn't need
the full baggage of mod_python just to do this.  All they really have in 
common is the python interpreters.  So maybe the architecture for this
could look something like:

mod_python_base (manage the python interpreters)
mod_python  (as now, less what's moved to python_base) - big
mod_python_embedded (python interpreters for C programmers) - small


 The particular use case I was seeing this as being targeted at was so
 as to be
 able to implement a mod_dav_python. That is, mod_dav_python is primarily
 a C based module which hooks into mod_dav and bridges the C hooks
 into hooks
 implemented in Python. The mod_dav_python would need to provide Python
 wrapper classes for all the mod_dav structures, but at least it
 doesn't have to
 duplicate what mod_python does in the way of interpreter management and
 wrappers for request object etc. Overall this could allow a mod_dav
 filesystem
 to be implemented in Python.

Hmmm.  I'm not sure I see what you mean.  Providing python hooks in mod_dav
ops would surely be a relatively simple extension to the existing mod_python?
Though it does sound like a pretty similar task to providing DBD bindings.
Maybe this is because I've not looked at mod_python from a C perspective.

XMLNS bindings would enable people to script SAX2 callback events in Python,
and mix-and-match with C modules, all running in an XMLNS filter.  How does 
that look from your PoV?

 The same set of optional functions exported from mod_python may be
 useful for
 implementing your suggestion. It is certainly a preferable approach
 to making
 mod_python understand something like mod_dav and embedding the support
 within it as standard.

Right, yes.  It looks like a potential fit:-)


-- 
Nick Kew


Re: It's that time of the year again

2006-04-17 Thread Graham Dumpleton


On 17/04/2006, at 6:43 PM, Nick Kew wrote:


On Monday 17 April 2006 07:45, Graham Dumpleton wrote:

On 17/04/2006, at 4:27 PM, Nick Kew wrote:

On Monday 17 April 2006 07:13, Ian Holsman wrote:

OK, let's play ...

- Language bindings (mod_[perl|python|etc]) for new goodies
  like DBD and XMLNS


This possibly ties up with something which I was intending to one day
implement
in mod_python. That is to export from mod_python, as optional
functions, functions
which would allow one to get access to the Python interpreter
instances plus other
functions for accessing/creating Python wrappers for request, server
and filter
objects.

The original intent of doing this was so that module writers using C
code could
access the Python interpreters already managed by mod_python and then
provide a means in their own module of being able to use Python code.


We actually have an analogous setup working with Tcl.  It's quite a small
module, but works well to run Tcl script embedded in HTML pages, and
help the Client upgrade from the vignette system they previously used.
A separate module - used by the interpreter - provides Tcl bindings
for DBD.

My thought on doing the same with Python is that it really shouldn't need
the full baggage of mod_python just to do this.  All they really have in
common is the python interpreters.  So maybe the architecture for this
could look something like:

mod_python_base (manage the python interpreters)
mod_python  (as now, less what's moved to python_base) - big
mod_python_embedded (python interpreters for C programmers) - small


There isn't actually much in mod_python that could be extracted out into a
base or embedded package. Pretty well all the C code would still be
required in order to support the basic stuff that would be exportable as
optional functions. The only bits that might not move are code for filter
and handler directives for registering and then invoking handlers as part
of normal Apache request processing phases. This would be at most a few
hundred lines of code.

In terms of the package as a whole and what possibly shouldn't be in
mod_python are the mod_python.publisher and mod_python.psp handlers
which are in effect a layer over the core mod_python handler mechanisms.
These amount to mostly Python code, although mod_python.psp has a
C code component, but that isn't part of the Apache module, but a loadable
Python extension module. Thus neither actually affected the footprint of
the Apache module when not in use.

Thus, there probably is very little to be gained from doing a split and in
practice it would probably make it harder to manage the code base as
far as releases and maintenance.


The particular use case I was seeing this as being targeted at was so
as to be able to implement a mod_dav_python. That is, mod_dav_python is
primarily a C based module which hooks into mod_dav and bridges the C hooks
into hooks implemented in Python. The mod_dav_python would need to provide
Python wrapper classes for all the mod_dav structures, but at least it
doesn't have to duplicate what mod_python does in the way of interpreter
management and wrappers for request object etc. Overall this could allow a
mod_dav filesystem to be implemented in Python.

Hmmm.  I'm not sure I see what you mean.  Providing python hooks in
mod_dav ops would surely be a relatively simple extension to the existing
mod_python?


The issue is more that mod_dav support shouldn't belong in mod_python
itself. It should be ignorant of other Apache modules. If mod_python itself
had dependencies on other modules, it makes it harder to develop as you
have to bring into the core development team people who have intimate
knowledge of the other packages. The only exception we have made to that
ideal is that the request object in mod_python in unreleased code hooks in
optional functions of mod_ssl because of its basic importance. That was
purely out of convenience as it had been demonstrated how to make a
standalone Python module which achieved the same end.

XMLNS bindings would enable people to script SAX2 callback events in
Python, and mix-and-match with C modules, all running in an XMLNS filter.
How does that look from your PoV?

Can't see why it couldn't be done. The next version of mod_python hooks
into mod_includes to allow Python code in SSI files. What you are talking
about doesn't sound much different.


The same set of optional functions exported from mod_python may be
useful for implementing your suggestion. It is certainly a preferable
approach to making mod_python understand something like mod_dav and
embedding the support within it as standard.

Right, yes.  It looks like a potential fit :-)

Only problem is that there is a bit of cleanup work in internals of
mod_python before the optional functions could be put in place. I'm slowly
getting there though. :-)

Anyway, in terms of the request for projects, if 

Standard MSIE tweaks for Apache

2006-04-17 Thread Bjørn Stabell
After switching to Apache 2.2.1 (the unreleased version) we found MS
IE could no longer access our site (which has keepalive, mod_deflate,
mod_proxy, and mod_ssl).  At first I thought it was a keepalive or
cipher problem, but it turned out to be a problem with IE (6.0) not
being able to handle compressed (mod_deflate) files other than
text/html.  Does this ring a bell?


We're now using the following IE hacks:

#
# IE hacks
# - IE 1-4 cannot handle keepalive w/ SSL, and doesn't do HTTP 1.1 well
# - all IE has problems with compression of non-html stuff (like css and js),
#   also make sure we vary on User-Agent if we do conditional compression
#
BrowserMatch MSIE ssl-unclean-shutdown gzip-only-text/html
BrowserMatch "MSIE [1-4]" nokeepalive downgrade-1.0 force-response-1.0
SetEnvIfNoCase Request_URI "\.(?:js|css)$" vary-user-agent
Header append Vary User-Agent env=vary-user-agent
[...]

AddOutputFilterByType DEFLATE text/plain text/html text/xml text/xhtml text/javascript application/x-javascript application/xhtml+xml application/xml text/css




So, my questions are:

1) Anything missing from the above config?

2) Shouldn't there be a standard IE hacks configuration for  
Apache?  Maybe even a mod_ie_hacks?


3) Instead of matching on .js and .css in the URI, it would be great
to have something more robust; is there any way to do this?  (Based
on content-type, or based on the fact that the content would've been
compressed but wasn't because some env variable was set.)


IE is still a pretty popular browser, it's relatively important to  
handle it well ;)



Rgds,
Bjorn


Re: It's that time of the year again

2006-04-17 Thread Colm MacCarthaigh
On Mon, Apr 17, 2006 at 06:44:11PM +1000, John Vandenberg wrote:
 A cool project that appears to need a coder is mod_bittorrent.

There's already a mod_bittorrent, but it only produces .torrent files
dynamically, it doesn't act as a seed or participate in the p2p.
There's mod_torrent too which has ceased development.

A working bittorrent module for httpd is directly useful to me, and it's
been on my personal TODO for a while now, so it's something I'd be happy
to mentor.

 The initial objective is to dynamically build a torrent file for a
 user requested file.  This could be integrated into httpd by sending
 large files as torrents when the user agent states that it accepts
 application/x-bittorrent, and/or when the server notices that the
 number of downloads for the file has risen rapidly.

Nah, that's boring and far too trivial imo. We need a bittorrent
protocol implementation, the host needs to act as a seed too :)

-- 
Colm MacCárthaighPublic Key: [EMAIL PROTECTED]


Re: Standard MSIE tweaks for Apache

2006-04-17 Thread Joost de Heer
IE is still a pretty popular browser, it's relatively important to 
handle it well ;)


Shouldn't that read: "IE is still a pretty popular browser, it's relatively 
important that it handles things well."?


Joost


Re: It's that time of the year again

2006-04-17 Thread Jeff McAdams
Ian Holsman wrote:
 ideas so far (half joking):
 - mod_ircd
 - implementing a UDP protocol
 - a caching module implement CML (cache-meta-language)
 - a SEDA type MPM

mod_snmp would be very useful.
-- 
Jeff McAdams
They that can give up essential liberty to obtain a
little temporary safety deserve neither liberty nor safety.
   -- Benjamin Franklin



signature.asc
Description: OpenPGP digital signature


Re: It's that time of the year again

2006-04-17 Thread Jorge Schrauwen
I've seen a mod_snmp somewhere:

http://www.mod-snmp.com/mod_snmp.html

Don't know if it's free and for what version though.

On 4/17/06, Jeff McAdams [EMAIL PROTECTED] wrote:

Ian Holsman wrote:
 ideas so far (half joking):
 - mod_ircd
 - implementing a UDP protocol
 - a caching module implement CML (cache-meta-language)
 - a SEDA type MPM

mod_snmp would be very useful.
--
Jeff McAdams
They that can give up essential liberty to obtain a
little temporary safety deserve neither liberty nor safety.
   -- Benjamin Franklin

--
~Jorge


Re: It's that time of the year again

2006-04-17 Thread Mads Toftum
On Mon, Apr 17, 2006 at 02:25:46PM +0200, Jorge Schrauwen wrote:
 I've seen a mod_snmp somewhere:
 http://www.mod-snmp.com/mod_snmp.html
 don't know how if its free and for what version though.
 
SNMP module for Apache 1.3.x as you can see on www.mod-snmp.com

vh

Mads Toftum
-- 
`Darn it, who spiked my coffee with water?!' - lwall



Re: Large file support in 2.0.56?

2006-04-17 Thread Jeff Trawick
On 4/15/06, Brandon Fosdick [EMAIL PROTECTED] wrote:
 I might have asked this before, but I've forgotten the answer, and so has 
 google. Has any of the large file goodness from 2.2.x made it into 2.0.x? 
 Will it ever?

Different answer than you got before, but I think this is more accurate (Joe?):

Turn on your OS's large file flags in CFLAGS

make distclean && CFLAGS=-D_something ./configure

and you get the support. This isn't the default with APR 0.9.x (and
thus Apache httpd 2.0.x) because it breaks binary compatibility with
existing builds.  As long as you use only modules that you can
recompile and don't have bugs exposed only with large file support
enabled you should be okay.
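
(As a quick sanity check - illustrative only; the exact flag is
platform-specific, e.g. -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE on Linux -
a tiny program compiled with and without the large-file flags shows whether
off_t is actually 64-bit in that build environment:)

#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    /* typically 8 with large file support enabled, 4 on 32-bit builds
     * without it */
    printf("sizeof(off_t) = %lu\n", (unsigned long)sizeof(off_t));
    return 0;
}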


Re: It's that time of the year again

2006-04-17 Thread Mads Toftum
On Mon, Apr 17, 2006 at 07:27:08AM +0100, Nick Kew wrote:
 - Update apxs to search web, download & verify modules,
   get security and license info.
 
More details from the discussions at apachecon EU last year:
http://mail-archives.apache.org/mod_mbox/httpd-dev/200507.mbox/[EMAIL PROTECTED]

vh

Mads Toftum
-- 
`Darn it, who spiked my coffee with water?!' - lwall



Re: Large file support in 2.0.56?

2006-04-17 Thread Colm MacCarthaigh
On Mon, Apr 17, 2006 at 09:09:12AM -0400, Jeff Trawick wrote:
 On 4/15/06, Brandon Fosdick [EMAIL PROTECTED] wrote:
  I might have asked this before, but I've forgotten the answer, and so has 
  google. Has any of the large file goodness from 2.2.x made it into 2.0.x? 
  Will it ever?
 
 Different answer than you got before, but I think this is more accurate 
 (Joe?):
 
 Turn on your OS's large file flags in CFLAGS
 
  make distclean && CFLAGS=-D_something ./configure

That works, but does need Joe's split patch;

http://people.apache.org/~jorton/ap_splitlfs.diff

:)

-- 
Colm MacCárthaighPublic Key: [EMAIL PROTECTED]


Re: Any volunteers?

2006-04-17 Thread William A. Rowe, Jr.

Jorge Schrauwen wrote:
Since there doesn't seem to be much interest in this I'll post what I 
got in hopes of attracting some more people.


FWIW - there is a new project in incubation... sources have recently been
imported by the original authors (a small team of administrators at Merck
who built this to save their own sanity, and did a pretty fantastic job)...

http://incubator.apache.org/projects/lokahi.html

It's still quite early in its incubation, so issue one is committer diversity,
but the management framework already supports Apache's httpd and Tomcat, with
(we hope) more ASF technologies to follow.

Bill


Re: 2.0.56 candidate coming soon

2006-04-17 Thread William A. Rowe, Jr.

Colm MacCarthaigh wrote:


I'm not sure if I'm stepping on anyone's toes here, if I am, I don't
mean to be, I can remember a few different abandoned potential releases
at this point, I just wanna get the basic fixes out there.


Of course not :)  Holiday weekend and all - if you are ready before me then
by all means, be my guest!  In fact, the project's rules are specifically set
up so that more than one RM could roll a competing package (better or worse),
which is why the package release rules are written the way they are.



Re: Large file support in 2.0.56?

2006-04-17 Thread William A. Rowe, Jr.

Brandon Fosdick wrote:

Nick Kew wrote:

I haven't tried files that size, but that's far too small for LARGE_FILE
to be relevant.  I guess you knew that already, so does something else
lead you to suppose you're hitting an Apache limit?


It does seem like a rather small and arbitrary limit. I can't think of 
what else besides apache would cause it, but I could be missing 
something. The files are being dumped into mysql in 64K blocks. The 
machine is an amd64, so that shouldn't be a problem, and 700MB isn't 
near 2 or 4 GB anyway. Uploading from a cable modem doesn't go anywhere 
near saturating the disk, cpu, or network. I've tried OSX, Win2k and 
WinXP, all with the same result. I'm running out of things to check. Any 
suggestions? I guess it could be a limit in mod_dav itself. I'm afraid 
to go there...it looks messy.


Well, the content length is stored as an int in httpd 2.0, so that also is
an issue (dav has so much metadata I've no idea how much real data can be
put up to the server.)

Keep in mind that you are gonna hit client bugs as well ;-)  At least httpd
version 2.2 should give you a good baseline to separate the client from the
server issues.


Re: It's that time of the year again

2006-04-17 Thread William A. Rowe, Jr.

Ian Holsman wrote:


so.. lets get brainstorming. Let's see HTTP get the prize for most  
ideas (and beat those java weanies)


LOL!  Ok, in all seriousness, error message localization is -sooo- long
overdue, and we have a perpetual problem with apr_status_t extension.

The next 2.0 release of apr really needs a 'register error results' for
a given set of codes... with a number-of-errors to be set aside, and the
appropriate callback to convert codes to error messages (notice the
intersection with the localization comment above.)
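
(A hedged sketch of what such a registration could look like; nothing below
exists in APR today, all names are invented purely to illustrate the
"register error results" idea:)

#include "apr_errno.h"

/* Hypothetical API sketch only -- not part of APR. */
typedef const char *(*my_error_to_text_fn)(apr_status_t code, char *buf,
                                           apr_size_t bufsize);

typedef struct my_error_range {
    apr_status_t first;            /* first status code set aside          */
    int count;                     /* number of codes reserved             */
    my_error_to_text_fn to_text;   /* callback converting a code to text,
                                    * which is also where localized message
                                    * catalogs could plug in               */
    struct my_error_range *next;
} my_error_range;

/* A module would call something like:
 *   my_register_error_range(APR_OS_START_USERERR + 1000, 50,
 *                           my_module_strerror);
 * and the strerror path would then dispatch codes in that range to the
 * registered callback.
 */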



Re: It's that time of the year again

2006-04-17 Thread Brian Akins
An example I'd like to do (or mentor someone) is a mod_memcached that 
could serve as the basis of memcached based modules.  It could handle 
all the configuration details, errors, and general heavy lifting of 
memcached.  It would then be very easy to write other modules that had 
hooks into memcached (ie, a mod_cache provider).


To abstract it even further, perhaps mod_memcached should be a 
provider for a more generic caching framework.  Not HTTP specific like 
mod_cache, but a general key based cache.  Could be used for a variety 
of things (including an HTTP cache).



typedef struct {
    apr_status_t (*cache_create)(ap_cache_t **instance, apr_table_t *params);
    apr_status_t (*cache_stats)(ap_cache_t *instance, ap_cache_stats_t **stats);
    apr_status_t (*cache_set)(ap_cache_t *instance, const char *key,
                              void *val, apr_size_t len, apr_time_t expires);

    /*
     * other stuff, like get, replace, etc.
     */

} cache_provider_t;

Just thinking out loud.
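
(For what it's worth, a rough sketch of how a table like the one above could
be wired up through httpd's existing ap_provider mechanism. Assumptions: the
group name "generic-cache", the trimmed-down struct and the stub function are
all illustrative, not an existing API or working mod_memcached code.)

#include "httpd.h"
#include "http_config.h"
#include "ap_provider.h"

typedef struct {
    apr_status_t (*cache_set)(void *instance, const char *key,
                              void *val, apr_size_t len, apr_time_t expires);
    /* create, stats, get, replace, ... as in the struct above */
} generic_cache_provider_t;

static apr_status_t memcached_set(void *instance, const char *key,
                                  void *val, apr_size_t len,
                                  apr_time_t expires)
{
    /* talk to memcached here */
    return APR_SUCCESS;
}

static const generic_cache_provider_t memcached_provider = { memcached_set };

/* registered from the module's register_hooks via ap_hook_pre_config */
static int memcached_pre_config(apr_pool_t *p, apr_pool_t *plog,
                                apr_pool_t *ptemp)
{
    ap_register_provider(p, "generic-cache", "memcached", "0",
                         &memcached_provider);
    return OK;
}

/* A consumer such as an HTTP-specific front end could then do:
 *   const generic_cache_provider_t *ops =
 *       ap_lookup_provider("generic-cache", "memcached", "0");
 */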


Also, mod_cache should be renamed mod_http_cache

--
Brian Akins
Lead Systems Engineer
CNN Internet Technologies


Re: It's that time of the year again

2006-04-17 Thread Rian A Hunter

Quoting Ian Holsman [EMAIL PROTECTED]:

ideas so far (half joking):
- mod_ircd
- implementing a UDP protocol
- a caching module implement CML (cache-meta-language)
- a SEDA type MPM


http://www.kegel.com/c10k.html

I think a SoC project that profiles Apache (and finds out where we fall short)
so that we are able to compete with other lightweight HTTP servers popping up
these days would be a good endeavor for any CS student.

This seems to be more viable for our threaded MPMs. For the prefork MPM,
maybe a goal for 10,000 connections might be impractical.

I haven't done any benchmarks myself, I've just read results so anyone correct
me if I'm wrong.

Rian


Re: Any volunteers?

2006-04-17 Thread Jorge Schrauwen
Very interesting, lots of potential there... Might put mine on hold then...
and see what comes of this.

On 4/17/06, William A. Rowe, Jr. [EMAIL PROTECTED] wrote:

Jorge Schrauwen wrote:
 Since there doesn't seem to be much interest in this I'll post what I
 got in hopes of attracting some more people.

FWIW - there is a new project in incubation... sources have recently been
imported by the original authors (a small team of administrators at Merck
who built this to save their own sanity, and did a pretty fantastic job)...

http://incubator.apache.org/projects/lokahi.html

It's still quite early in its incubation, so issue one is committer diversity,
but the management framework already supports Apache's httpd and Tomcat, with
(we hope) more ASF technologies to follow.

Bill

--
~Jorge


Re: It's that time of the year again

2006-04-17 Thread Colm MacCarthaigh
On Mon, Apr 17, 2006 at 12:34:29PM -0400, Rian A Hunter wrote:
 I think a SoC project that profiles Apache (and finds out where we
 fall short) so that we are able to compete with other lightweight HTTP
 servers popping up these days would be a good endeavor for any CS
 student.

Right now, I'm getting 22k reqs/sec from Apache httpd, and 18k/sec from
lighttpd. Simple things like using epoll, or the way the worker
balancing is done have huge effects compared to the tiny improvements
refactoring code can have.

 This seems to be more viable for our threaded MPMs. For the prefork
 MPM, maybe a goal for 10,000 connections might be impractical.

With prefork I can generally push about 27,000 concurrent connections
before things get hairy. With worker, I have a usable system up to
83,000 concurrent connections, without much effort.

 I haven't done any benchmarks myself, I've just read results so anyone
 correct me if I'm wrong.

Dan Kegel's page is years out of date, and was uninformed even when it
wasn't :)

-- 
Colm MacCárthaighPublic Key: [EMAIL PROTECTED]


Re: It's that time of the year again

2006-04-17 Thread Garrett Rooney
On 4/16/06, Ian Holsman [EMAIL PROTECTED] wrote:
 Google is about to start it's summer of code project

 what does this mean for HTTP/APR ?

 we need:
 - mentors

I'd be willing to help mentor.

 and
 - project ideas.

A few ideas:

in APR:

  - Improve the build system so that it can generate win32 project
files automatically, instead of requiring us to maintain them by hand.
 It also might be nice to allow generation of makefiles on win32, so
we can build via command line tools instead of requiring visual
studio.
  - Add a logging API, abstracting the differences between syslog,
win32 event logs, and basic file backed logs.  This project has the
potential to involve working with the Subversion project as well,
since it has a need for such an API.

in HTTPD:

  - Extend the mod_proxy_fcgi code so that it can manage its own
worker processes, rather than requiring them to be managed externally.
 Would most likely require a bit of refactoring inside of mod_proxy as
well.

-garrett


Re: It's that time of the year again

2006-04-17 Thread Joachim Zobel
On Mon, 17 Apr 2006 16:13:06 +1000
Ian Holsman [EMAIL PROTECTED] wrote:

 so.. if there is any niggly things or cool projects you haven't got  
 the time to do yourself,

Yeah, my no. 1 wish is a mod_tal (the existing sourceforge project was
abandoned before it started) that:

1. acts as an output filter on xml data and
2. behaves streaming with respect to the xml data.

TAL is a spec for a template engine. For details on TAL see
http://www.zope.org/Wikis/DevSite/Projects/ZPT/TAL%20Specification%201.4

Just my 0.02 EUR.
Sincerely,
Joachim


Re: It's that time of the year again

2006-04-17 Thread Garrett Rooney
On 4/17/06, Colm MacCarthaigh [EMAIL PROTECTED] wrote:
 On Mon, Apr 17, 2006 at 12:34:29PM -0400, Rian A Hunter wrote:
  I think a SoC project that profiles Apache (and finds out where we
  fall short) so that we are able to compete with other lightweight HTTP
  servers popping up these days would be a good endeavor for any CS
  student.

 Right now, I'm getting 22k reqs/sec from Apache httpd, and 18k/sec from
 lighttpd. Simple things like using epoll, or the way the worker
 balancing is done have huge effects compared to the tiny improvements
 refactoring code can have.

  This seems to be more viable for our threaded MPMs. For the prefork
  MPM, maybe a goal for 10,000 connections might be impractical.

 With prefork I can generally push about 27,000 concurrent connections
 before things get hairy. With worker, I have a usable system up to
 83,000 concurrent connections, without much effort.

  I haven't done any benchmarks myself, I've just read results so anyone
  correct me if I'm wrong.

 Dan Kegels page is years out of date, and was uninformed even when it
 wasn't :)

I suspect that a significant problem with this sort of project will be
lack of proper hardware for benchmarking purposes.  From everything
I've heard it's not all that hard to totally saturate the kind of
networks you're likely to have sitting around your house with
commodity hardware and no real effort.  To really benchmark it's going
to require more stuff than your average college student has lying
around the house.

-garrett


Re: It's that time of the year again

2006-04-17 Thread Brian McCallister


On Apr 17, 2006, at 10:04 AM, Garrett Rooney wrote:


I suspect that a significant problem with this sort of project will be
lack of proper hardware for benchmarking purposes.  From everything
I've heard it's not all that hard to totally saturate the kind of
networks you're likely to have sitting around your house with
commodity hardware and no real effort.  To really benchmark it's going
to require more stuff than your average college student has lying
around the house.


That would be one of the advantages of it being with the ASF -- in  
theory we can get access to some network gear which might be harder  
for a student to lay hands on.


-Brian




Re: It's that time of the year again

2006-04-17 Thread Garrett Rooney
On 4/17/06, Brian McCallister [EMAIL PROTECTED] wrote:

 On Apr 17, 2006, at 10:04 AM, Garrett Rooney wrote:

  I suspect that a significant problem with this sort of project will be
  lack of proper hardware for benchmarking purposes.  From everything
  I've heard it's not all that hard to totally saturate the kind of
  networks you're likely to have sitting around your house with
  commodity hardware and no real effort.  To really benchmark it's going
  to require more stuff than your average college student has lying
  around the house.

 That would be one of the advantages of it being with the ASF -- in
 theory we can get access to some network gear which might be harder
 for a student to lay hands on.

Perhaps, but AFAICT infra@ doesn't have this kind of thing lying
around at the moment, so unless someone is going to step up with
hardware people can use it's kind of a showstopper.

-garrett


Re: It's that time of the year again

2006-04-17 Thread Mads Toftum
On Mon, Apr 17, 2006 at 10:15:09AM -0700, Garrett Rooney wrote:
 Perhaps, but AFAICT infra@ doesn't have this kind of thing lying
 around at the moment, so unless someone is going to step up with
 hardware people can use it's kind of a showstopper.
 
Correct.

vh

Mads Toftum
-- 
`Darn it, who spiked my coffee with water?!' - lwall



Re: It's that time of the year again

2006-04-17 Thread William A. Rowe, Jr.

FWIW, in our earlier discussions we discussed segmented keys, that any
provider wouldn't have to provide multiple keys (to be hooked) but would
have to support segmented keys such that (for example) three UUID's would
designate highest - narrower - narrowest lookup, and it would (of course)
be expected that a segment 0-only lookup would have clashes/multiple
entities.

What a program does with this situation is up to the user/consumer.  Our
thought was that if (for example http caching) we first did a direct seek
on -one- page based on segment 0 & 1 (the host and the uri) we would get
back the variant headers.  We could then perform a direct hit on segment
0 & 1 & 2 (the variation key) and get precisely the page they needed as
a two-hit-always operation.  If the page doesn't vary, the first hit would
be the actual page.   If variant also varies, well, guess we are SOL ;-)
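
(A tiny sketch of that lookup shape; the names are invented here, not an
existing httpd structure. A lookup on fewer segments (host + uri) may return
the variant headers, while a lookup on all three returns the exact entity --
the two-hit operation described above.)

typedef struct {
    const char *seg[3];   /* [0] host, [1] uri, [2] variation key or NULL */
    int nsegs;            /* number of segments participating in the lookup */
} seg_key_t;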

Bill


Brian Akins wrote:
An example I'd like to do (or mentor someone) is a mod_memcached that 
could serve as the basis of memcached based modules.  It could handle 
all the configuration details, errors, and general heavy lifting of 
memcached.  It would then be very easy to write other modules that had 
hooks into memcached (ie, a mod_cache provider).


Perhaps to abstract it even further, perhaps mod_memcached should be a 
provider for a more generic caching framework.  Not HTTP specific like 
mod_cache, but a general key based cache.  Could be used for a variety 
of things (including an HTTP cache).



typedef struct {
apr_status_t (*cache_create)(ap_cache_t **instance, apr_table_t 
*params);
apr_status_t (*cache_stats)(ap_cache_t *instance, ap_cache_stats_t 
**stats);
apr_status_t (*cache_set)(ap_cache_t *instance, const char *key, 
void *val, apr_size_t len, apr_time_t expires);

/*
other stuff, like get, replace, etc.
*/
   
} cache_provider_t;


Just thinking out loud.


Also, mod_cache should be renamed mod_http_cache





Re: It's that time of the year again

2006-04-17 Thread Brian Akins

Rian A Hunter wrote:


This seems to be more viable for our threaded MPMs. For the prefork
MPM, maybe a
goal for 10,000 connections might be impractical.


Using worker, we do many thousands of connections (ie, much more than 
10k).  I think Colm has published some numbers about his experiences.


--
Brian Akins
Lead Systems Engineer
CNN Internet Technologies


Re: It's that time of the year again

2006-04-17 Thread William A. Rowe, Jr.

Garrett Rooney wrote:


I suspect that a significant problem with this sort of project will be
lack of proper hardware for benchmarking purposes.  From everything
I've heard it's not all that hard to totally saturate the kind of
networks you're likely to have sitting around your house with
commodity hardware and no real effort.  To really benchmark it's going
to require more stuff than your average college student has lying
around the house.


LOL - not to mention the converse.  Even when eating all four 1GB network
segments it's quite difficult to saturate anything but the NICs on one of
Sun's T2000 loaner boxes until you add dynamic content to the mix :)


Re: It's that time of the year again

2006-04-17 Thread Brian Akins

Garrett Rooney wrote:

 To really benchmark it's going
to require more stuff than your average college student has lying
around the house.


Simple dual opteron with GigE networking is more than sufficient.  I can 
mentor by testing some changes if somebody needs it.




--
Brian Akins
Lead Systems Engineer
CNN Internet Technologies


Re: It's that time of the year again

2006-04-17 Thread Brian Akins

William A. Rowe, Jr. wrote:


LOL - not to mention the converse.  Even when eating all four 1GB network
segments it's quite difficult to saturate anything but the NICs on one of
Sun's T2000 loaner boxes until you add dynamic content to the mix :)


Try smaller files, like 1 byte.  That is how I test a bunch of stuff 
here.  It's very hard to saturate a GigE when serving 1 byte files.



--
Brian Akins
Lead Systems Engineer
CNN Internet Technologies


modules/debug/ still - bleh!!!

2006-04-17 Thread William A. Rowe, Jr.

Alright folks, both the Netware and Win32 maintainers asked that the
directory NOT be named debug.  In vengeance, for example, would we
just toss out a .libs/ directory to describe lib resources?  That's
the effect here.

So - can I get everyone to agree on modules/debugging/ before we tar
another tarball?  This will be a noop for binary and source code compat,
it's just a directory rename (with the appropriate tweaks to the top
level makefiles.)


Jon Pokroy/VNUBPL/UK is out of the office.

2006-04-17 Thread Jon Pokroy
I will be Out of the Office
Start Date: 13/04/2006.
End Date: 24/04/2006.


I will respond to your message when I return.

For downloads issues, please contact Steffen Mueller:
[EMAIL PROTECTED]

For Italy relaunch issues, contact Ross McDonald: [EMAIL PROTECTED]

For all other issues, contact David Slade: [EMAIL PROTECTED]







Re: It's that time of the year again

2006-04-17 Thread Esteban Pizzini
On 4/17/06, Jeff McAdams [EMAIL PROTECTED] wrote:

Ian Holsman wrote:
 ideas so far (half joking):
 - mod_ircd
 - implementing a UDP protocol
 - a caching module implement CML (cache-meta-language)
 - a SEDA type MPM

mod_snmp would be very useful.

I have implemented SNMP for Apache 2.0.x (mod-apache-snmp.sourceforge.net).
It's an implementation using net-snmp as agent.

I'm working on the module and there are some features to add.

--
Jeff McAdams
They that can give up essential liberty to obtain a
little temporary safety deserve neither liberty nor safety.
 -- Benjamin Franklin

--
Esteban Pizzini
(http://mod-apache-snmp.sourceforge.net)



Re: [VOTE] Release 2.2.1 as GA

2006-04-17 Thread Steffen

Also all is fine with building against APR 1.2.7.

Steffen

- Original Message - 
From: Steffen [EMAIL PROTECTED]

To: dev@httpd.apache.org
Sent: Tuesday, April 04, 2006 19:39
Subject: Re: [VOTE] Release 2.2.1 as GA



Done.

I built against APR and APR-util 1.3.0 and the Perl scripts are working now.

Also no build error on apu_version anymore.

All tests passed here, including mod_perl and other common mods.


Steffen

http://www.apachelounge.com

- Original Message - 
From: William A. Rowe, Jr. [EMAIL PROTECTED]

To: dev@httpd.apache.org
Sent: Tuesday, April 04, 2006 18:07
Subject: Re: [VOTE] Release 2.2.1 as GA



Steffen wrote:

When building Apache 2.2.1 with APR 1.2.2, the Perl scripts are working.


This is apparent in my test case.

Slowing things down in the debugger - the flaw goes away, which is to say
some blocking logic isn't blocking, probably an attribute of some recent
minor refactoring of the read_with_timeout logic.

Could I ask you to try against APR 1.3 (trunk!) to see if this resolves
your issues?  And I'll do the same.  Perhaps this is justification to backport
the very major cleanups in Win32 read/write logic a bit early.  The major
refactoring has not been backported, just yet.

Bill








generalized cache modules

2006-04-17 Thread Brian Akins

Was playing with memcached and mod_cache when I had some thoughts.

- mod_cache should be renamed to mod_http_cache
- a new module, mod_cache (or some more inventive name), would be a more 
general purpose cache


So mod_http_cache would simply be a front end to mod_cache providers. 
These providers do not have to know anything about http, only how to 
fetch, store, expire, etc. objects.


Some example mod_cache providers could be simplified versions of 
mod_disk_cache and mod_mem_cache as well as the to-be-written 
mod_memcached (mod_memcached_cache?).


So cache providers would be fairly simple to write and mod_http_cache 
could handle all the nastiness with http specific stuff (Vary, etc.)


My biggest mental block is how to have the configuration for this not be 
absolutely horrific.


Thoughts?

--
Brian Akins
Lead Systems Engineer
CNN Internet Technologies


Re: It's that time of the year again

2006-04-17 Thread Parin Shah
 - mod_cache_requestor (which i don't think really took off)
 and 2 active comitters.

I still haven't given up on it. :-) I am trying to remove the libcurl
dependency by creating mocked up connection and request. hopefully, it
would take off one day :-)


Re: generalized cache modules

2006-04-17 Thread Davi Arnaut

On Mon, 17 Apr 2006, Brian Akins wrote:


Was playing with memcached and mod_cache when I had some thoughts.

-mod_cache should be renamed to mod_http_cache
-new modules mod_cache (or some inventive) name would be a more general 
purpose cache)


So mod_http_cache would simply be a front end to mod_cache providers. These 
providers do not have to know anything about http, only how to fetch, store, 
expire, etc. objects.


Something similar to the Squid Storage Interface [1] ?

Some example mod_cache providers could be simplified versions of 
mod_disk_cache and mod_mem_cache as well as the to-be-written mod_memcached 
(mod_memcached_cache?).


So cache providers would be fairly simply to write and mod_http_cache could 
handle all the nastiness with http specific stuff (Vary, etc.)


Also, do you mean to use mod_memcached as an application
cache (mod_cache working as a frontend for a site) or as
a proxy cache ?

My biggest mental block is how to have the configuration for this not be 
absolutely horrific.


Thoughts?




--
Davi Arnaut

[1] http://www.squid-cache.org/Doc/Prog-Guide/prog-guide-12.html


Re: It's that time of the year again

2006-04-17 Thread Parin Shah
 An example I'd like to do (or mentor someone) is a mod_memcached that
 could serve as the basis of memcached based modules.  It could handle
 all the configuration details, errors, and general heavy lifting of
 memcached.  It would then be very easy to write other modules that had
 hooks into memcached (ie, a mod_cache provider).

I have liked the idea of mod_memcached. I can work on it with you (if
we have an SoC student for this project, I can work with him as well).


Re: generalized cache modules

2006-04-17 Thread Brian Akins

Davi Arnaut wrote:


Something similar to the Squid Storage Interface [1] ?


sorta.  maybe more simple.  simple fetch and store for now.


Also, do you mean to use mod_memcached as an application
cache (mod_cache working as a frontend for a site) or as
a proxy cache ?



Doesn't matter.  Current mod_cache can be used as either.  I am talking 
of a completely general purpose cache.  It doesn't care if it's http 
pages, IPs or what.  An http_cache would be as trivial to implement 
as porting current mod_cache to use the generalized cache.




--
Brian Akins
Lead Systems Engineer
CNN Internet Technologies


Re: It's that time of the year again

2006-04-17 Thread Brian Akins

Parin Shah wrote:


I have liked the idea of mod_memcached. I can work on it with you (if
we have Soc student for this project, I can work with him as well )


To clarify, I really meant to say mod_memcache, the client part, not the 
server.


Was just an idea, I can help in some way as well.  I have a working 
prototype now.



--
Brian Akins
Lead Systems Engineer
CNN Internet Technologies


Re: Suggestion to significantly improve performance of apache httpd

2006-04-17 Thread Rodent of Unusual Size
No response has been sent to the originator; feel free to
do so.
-- 
#kenP-)}

Ken Coar, Sanagendamgagwedweinini  http://Ken.Coar.Org/
Author, developer, opinionist  http://Apache-Server.Com/

Millennium hand and shrimp!
---BeginMessage---

Dear Apache,

This isn't a request for help (you don't even need to reply) but I have 
found an opportunity for a large improvement in Apache httpd which your 
developers may be interested in.


I use Apache2 on a site serving some 8,000,000 pages a day - after
optimizing this significantly by using software external to Apache I
feel I have a simple but highly valuable improvement to suggest.

* My site serves much content to Asia, where users are on slow
connections; this results (even with keep-alives turned off) in a large
number of parallel Apache processes 'spoon feeding' many slow connections

* Placing an HTTP cache (Squid) in front of it (even with all caching
turned off) reduces the Apache processes (and therefore memory usage) by
an order of magnitude. This is achieved because Squid waits until it has
received the entire header before passing it (at high speed) to Apache,
and Apache can write the page itself back (at high speed) to Squid, not
worrying about the slow client connection. This has a HUGE impact on
overall server performance, both Apache and MySQL memory use drop by an
order of magnitude, and all memory is now free to cache content!

This is a solution of sorts, but not the most efficient solution.

* Wasted processing overhead: Squid is parsing and reacting to both HTTP
request and response headers and attempting to cache certain items (even
if this isn't really what is wanted). Squid really does use quite a lot
of CPU, even when only doing this simple task!

* Wasted network overhead: I now have client connecting to Squid which
is then connecting to Apache on the same machine.


As an alternative solution, I tried patching Apache 1.3 with lingerd
(http://www.iagora.com/about/software/lingerd/) - this provides half of
the required solution, as kernel tcp write buffers allow Apache to write
a response quickly and lingerd handles closing the connection to a slow
client.

Still unaddressed though is the problem of a request header arriving very
slowly.


Much Better Solution:
***

* Modify Apache so that the entire header is read before the socket (or
possibly just header data) are passed to an Apache process (kind of like
a lingerd in reverse)
* Modify Apache so that once a worker process has generated the response
data, it passes the socket (or possibly just the response data) back to
a single process to finish off (rather like lingerd does)

Regards,
--
philip anderson
invades limited





---End Message---


Re: Suggestion to significantly improve performance of apache httpd

2006-04-17 Thread Colm MacCarthaigh

Hey Phil,

we're always responsive to such suggestions, but I think we've beaten
you to it, at least somewhat, see below for what may be useful
resources.

On Mon, Apr 17, 2006 at 04:16:45PM -0400, Rodent of Unusual Size wrote:
 Much Better Solution:
 ***
 
 * Modify Apache so that the entire header is read before the socket (or
 possibly just header data) are passed to an Apache process (kind of like
 a lingerd in reverse)

Apache httpd 2.2 has support for kernel-level accept filters which do
exactly that;

http://httpd.apache.org/docs/2.2/mod/core.html#acceptfilter

That way the kernel buffers the request until it's ready.
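
(Roughly speaking - and this is an illustrative sketch, not httpd source - on
Linux that AcceptFilter support amounts to setting TCP_DEFER_ACCEPT on the
listening socket so accept() isn't woken until request data has arrived; on
FreeBSD the accf_http/accf_data accept filters are used instead.)

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Sketch: defer accept() on a listening socket until data arrives
 * (Linux-specific; the timeout is in seconds). */
static int defer_accept(int listen_fd, int timeout_secs)
{
    return setsockopt(listen_fd, IPPROTO_TCP, TCP_DEFER_ACCEPT,
                      &timeout_secs, sizeof(timeout_secs));
}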

 * Modify Apache so that once a worker process has generated the
 response data, it passes the socket (or possibly just the response
 data) back to a single process to finish off (rather like lingerd
 does)

See the event mpm;

http://httpd.apache.org/docs/2.2/mod/event.html

It is still under development, but is relatively stable and functional
in non-SSL (or other situations in which input filtering is required)
environments.

-- 
Colm MacCárthaighPublic Key: [EMAIL PROTECTED]


Re: It's that time of the year again

2006-04-17 Thread Ian Holsman

yeah.. that's the hard part of SoC.
not coding it yourself in 20 minutes, and leaving it for your student  
to do ;-)


On 18/04/2006, at 5:43 AM, Brian Akins wrote:


Parin Shah wrote:


I have liked the idea of mod_memcached. I can work on it with you (if
we have Soc student for this project, I can work with him as well )


To clarify, I really meant to say mod_memcache, the client part,  
not the server.


Was just an idea, I can help in some way as well.  I have a working  
prototype now.



--
Brian Akins
Lead Systems Engineer
CNN Internet Technologies


--
Ian Holsman  ++61-3-9818-0132
Good judgment comes from experience, experience comes from bad judgment.




Protocol handlers: Change in connection processing from 2.0 to 2.2?

2006-04-17 Thread Tyler MacDonald
Hi all,

I have a protocol handler that worked perfectly in apache 2.0.53,
but after upgrading to 2.2, something strange is happening. The protocol
handler processed gnudip2 TCP update requests for yi.org dynamic DNS.

The GnuDIP2 protocol requires that a password salt be sent by the
server as soon as a connection is established. That was how it worked under
apache 2.0. However, it looks like the apache 2.2 server is waiting for a
line of input *before* it processes my connection. Once a line of input is
received from the client, my handler runs, prints its greeting message
(which should have printed right away) and reads the line of input.

I've tried bumping the priority of my handler from APR_HOOK_MIDDLE
to APR_HOOK_FIRST, but that didn't do any good.

Has anything changed in connection processing between apache 2.0 and
2.2 that I need to watch out for?

Here's the line that registers the hook:

 ap_hook_process_connection(ap_process_gnudip2_connection,NULL,NULL, 
APR_HOOK_FIRST);

(that was APR_HOOK_MIDDLE before, no difference)

And here's the line that actually sends the text:

 ap_fprintf(conn->output_filters, tmp_bb, "%s\n", gnudip2_salt);

I've attached mod_weedns.c and gnudip2_tcp.c from the source. Can
anybody help me solve this?

Cheers,
Tyler

/* module to handle the gnudip2 dynamic IP update TCP-based protocol from within apache2 */

#include "apr_strings.h"

#define APR_WANT_STRFUNC
#include "apr_want.h"
#include "ap_config.h"

#define CORE_PRIVATE
#include "httpd.h"
#include "http_config.h"
#include "http_connection.h"
#include "http_core.h"
#include "http_protocol.h"	/* For index_of_response().  Grump. */
#include "http_request.h"
#include "http_log.h"
#include "scoreboard.h"

#include "mod_weedns.h"
#include "gnudip2_tcp.h"
#include "gnudip2.h"
#include "db-gnudip2.h"

static request_rec *gnudip2_tcp_read_request(conn_rec *conn)
{
 request_rec *r;
 apr_pool_t *p;
 apr_bucket_brigade *tmp_bb, *in_bb;
 char *req_s = NULL;
 gnudip2_protocol_request tcp_r;
 gnudip2_update g2_u;
 int rv;
 int len;
 int inrv;

 apr_pool_create(&p, conn->pool);
 r = apr_pcalloc(p, sizeof(request_rec));
 r->pool = p;
 r->connection = conn;
 r->server = conn->base_server;

 r->user = NULL;
 r->ap_auth_type = NULL;

 r->allowed_methods = ap_make_method_list(p, 2);

 r->headers_in = apr_table_make(r->pool, 25);
 r->subprocess_env = apr_table_make(r->pool, 25);
 r->headers_out = apr_table_make(r->pool, 12);
 r->err_headers_out = apr_table_make(r->pool, 5);
 r->notes = apr_table_make(r->pool, 5);

 r->request_config = ap_create_request_config(r->pool);
 /* Must be set before we run create request hook */

 r->proto_output_filters = conn->output_filters;
 r->output_filters = r->proto_output_filters;
 r->proto_input_filters = conn->input_filters;
 r->input_filters = r->proto_input_filters;
 ap_run_create_request(r);
 r->per_dir_config = r->server->lookup_defaults;

 r->sent_bodyct = 0;  /* bytect isn't for body */

 r->read_length = 0;
 r->read_body = REQUEST_NO_BODY;

 r->status = 0;  /* Until we get a request */
 r->the_request = NULL;
 r->method = apr_psprintf(r->pool, "GET");
 r->method_number = M_GET;
 r->uri = apr_psprintf(r->pool, "gnudip2://%s:%i/", r->server->server_hostname, r->server->addrs->host_port);
 r->finfo.filetype = 0;
 r->filename = "";
 r->request_time = apr_time_now();
 r->protocol = apr_psprintf(r->pool, "GNUDIP2");


 tmp_bb = apr_brigade_create(r->pool, r->connection->bucket_alloc);
 in_bb = apr_brigade_create(r->pool, r->connection->bucket_alloc);

 gnudip2_new_salt();
 apr_table_setn(r->notes, "gnudip2_salt", apr_pstrdup(r->pool, gnudip2_salt));
 ap_fprintf(conn->output_filters, tmp_bb, "%s\n", gnudip2_salt);
 ap_update_child_status(conn->sbh, SERVER_BUSY_READ, r);
 ap_fflush(conn->output_filters, tmp_bb);
 ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r, " output = %s", gnudip2_salt);
 ap_run_map_to_storage(r);

 /* Get the request... */
 req_s = apr_palloc(r->pool, DEFAULT_LIMIT_REQUEST_LINE + 3);
 rv = ap_rgetline(&req_s, DEFAULT_LIMIT_REQUEST_LINE + 2, &len, r, 0, in_bb);

 if(rv == APR_SUCCESS)
 {
  r->read_length = len;
  inrv = fill_tcp_protocol_request(&tcp_r, req_s);
 }
 else
 {
  inrv = 0;
 }

 if(inrv)
 {
  strcpy(tcp_r.salt, gnudip2_salt);
  ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r, " input = %s (salt = %s, user_name = %s, hashed_password = %s, domain = %s, mode = %i, address = %s)", req_s,
   tcp_r.salt, tcp_r.user_name, tcp_r.hashed_password, tcp_r.domain, tcp_r.mode, tcp_r.address);
  tcp_r.remote_addr = r->connection->remote_addr->sa.sin.sin_addr;

  if((!tcp_r.address[0]) || (!strcmp("0.0.0.0", tcp_r.address))) {
   strcpy(tcp_r.address, r->connection->remote_ip);
   inet_aton(tcp_r.address, &tcp_r.remote_addr);
   ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r, " inferred address is %s", tcp_r.address);
  }

  r->the_request = apr_psprintf(r->pool, "GET /?domn=%s&addr=%s&user=%s GNUDIP2",

Re: Protocol handlers: Change in connection processing from 2.0 to 2.2?

2006-04-17 Thread Paul Querna

Tyler MacDonald wrote:

Hi all,

I have a protocol handler that worked perfectly in apache 2.0.53,
but after upgrading to 2.2, something strange is happening. The protocol
handler processed gnudip2 TCP update requests for yi.org dynamic DNS.

The GnuDIP2 protocol requires that a password salt be sent by the
server as soon as a connection is established. That was how it worked under
apache 2.0. However, it looks like the apache 2.2 server is waiting for a
line of input *before* it processes my connection. Once a line of input is
received from the client, my handler runs, prints its greeting message
(which should have printed right away) and reads the line of input.


Add/Change your configuration to the Following:

Listen 1234 GnuDIP2
AcceptFilter GnuDIP2 none

This will disable the Accept Filter that by default waits for data.

See also the docs:
http://httpd.apache.org/docs/2.2/mod/core.html#acceptfilter
http://httpd.apache.org/docs/2.2/mod/mpm_common.html#listen

-Paul



Re: Protocol handlers: Change in connection processing from 2.0 to 2.2?

2006-04-17 Thread Tyler MacDonald
Paul Querna [EMAIL PROTECTED] wrote:
 Add/Change your configuration to the Following:
 
 Listen 1234 GnuDIP2
 AcceptFilter GnuDIP2 none
 
 This will disable the Accept Filter that by default waits for data.
 
 See also the docs:
 http://httpd.apache.org/docs/2.2/mod/core.html#acceptfilter
 http://httpd.apache.org/docs/2.2/mod/mpm_common.html#listen

That works, thanks!!!

This is a private module so I'm not concerned about complicating the
configuration file, but I'm planning a release of a public module (mod_bt;
http://www.crackerjack.net/mod_bt/) that will have a protocol handler soon.
I'll make sure I document that they need to set that option, but...

Is there any way for a module to turn off the AcceptFilter itself?

Thanks,
Tyler



Re: Protocol handlers: Change in connection processing from 2.0 to 2.2?

2006-04-17 Thread Paul Querna

Tyler MacDonald wrote:

Paul Querna [EMAIL PROTECTED] wrote:

Add/Change your configuration to the Following:

Listen 1234 GnuDIP2
AcceptFilter GnuDIP2 none

This will disable the Accept Filter that by default waits for data.

See also the docs:
http://httpd.apache.org/docs/2.2/mod/core.html#acceptfilter
http://httpd.apache.org/docs/2.2/mod/mpm_common.html#listen


That works, thanks!!!

This is a private module so I'm not concerned about complicating the
configuration file, but I'm planning a release of a public module (mod_bt;
http://www.crackerjack.net/mod_bt/) that will have a protocol handler soon.
I'll make sure I document that they need to set that option, but...

Is there any way for a module to turn off the AcceptFilter itself?


Nope, mostly because with how the current API is structured, we don't 
know what module will handle the protocol until runtime.  It could be 
one that wants Accept Filtering, or one that doesn't.


If you could with 100% accuracy match a single Listener Record to a 
Single Protocol Module, then an API could be created to better handle 
this situation.


-Paul


apxs not compiling with largefile support?

2006-04-17 Thread Tyler MacDonald
Here's another odd one...

My apache is compiled with large file support. However, when apxs goes to
compile a DSO, it doesn't pass the correct cflags in. This results in a
bunch of "syntax error before off64_t" errors.
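
For reference, my assumption about the failure mode (the exact contents of
apr.h vary by build, so treat this purely as a sketch): an LFS-enabled apr.h
refers to off64_t, which is only declared when the large-file feature macros
are defined, and those are exactly what apr-1-config --cppflags supplies.

/* Assumed failure mode, not copied from the real headers: an
 * LFS-enabled apr.h contains something along the lines of
 *
 *     typedef off64_t apr_off_t;
 *
 * and off64_t only becomes visible once the large-file macros are set. */
#define _LARGEFILE64_SOURCE 1   /* normally supplied via apr-1-config --cppflags */
#include <sys/types.h>          /* declares off64_t when the macro above is set */
#include "apr.h"                /* so the apr_off_t typedef now compiles */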

To work around this, I'm doing this:

APXS = /opt/apache2/bin/apxs
APR_CONFIG = /opt/apache2/bin/apr-1-config
APXS_CFLAGS = -I/opt/weedns-4/src/libweedns4  `$(APR_CONFIG) --cppflags`
APXS_LDFLAGS = -L/opt/weedns-4/src -lweedns4 -lmysqlclient
COMPILE_IT = $(APXS) -S CFLAGS="$(APXS_CFLAGS) -Wall -Wimplicit-function-declaration" $(APXS_LDFLAGS) -n $(MODULE_NAME) -c
INSTALL_IT = $(APXS) -i -a -n $(MODULE_NAME) $(TARGETLA)

But bringing apr-config in myself seems to defeat the purpose of using apxs
in the first place. Is this a known issue or something I'm doing wrong? Is
there a better way?

Thanks,
Tyler



Re: Protocol handlers: Change in connection processing from 2.0 to 2.2?

2006-04-17 Thread Tyler MacDonald
Paul Querna [EMAIL PROTECTED] wrote:
 Nope, mostly because with how the current API is structured, we don't 
 know what module will handle the protocol until runtime.  It could be 
 one that wants Accept Filtering, or one that doesn't.
 
 If you could with 100% accuracy match a single Listener Record to a 
 Single Protocol Module, then an API could be created to better handle 
 this situation.

How about meeting you halfway? The Listen directive still needs a protocol
tag; that makes sense, and IMHO it actually reduces the need for comments in
the config file, because when you look at the Listen directive you can tell
right away what it's there for. So say I did

Listen  *:6669 bt_peer

Is there (or should there be) a way to hook in to do the equivalent of
"AcceptFilter bt_peer none" from C, after the configuration file has been
parsed?
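
Something like the following is what I had in mind; completely untested, and
the accf_map table, the core_module lookup, and whether post_config runs
early enough relative to the core's accept-filter setup are all assumptions
on my part:

#define CORE_PRIVATE
#include "httpd.h"
#include "http_config.h"
#include "http_core.h"
#include "apr_tables.h"

static int bt_peer_post_config(apr_pool_t *pconf, apr_pool_t *plog,
                               apr_pool_t *ptemp, server_rec *s)
{
    /* Assumption: the AcceptFilter directive stores its protocol-to-filter
     * mapping in the core server config's accf_map table, so overriding
     * the entry for our protocol should be equivalent to having
     * "AcceptFilter bt_peer none" in the config file. */
    core_server_config *csc =
        ap_get_module_config(s->module_config, &core_module);
    apr_table_setn(csc->accf_map, "bt_peer", "none");
    return OK;
}

static void bt_peer_register_hooks(apr_pool_t *p)
{
    /* APR_HOOK_FIRST in the hope of running before the listening sockets
     * have their accept filters applied. */
    ap_hook_post_config(bt_peer_post_config, NULL, NULL, APR_HOOK_FIRST);
}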

Thanks,
Tyler



Re: [PATCH] #39275 MaxClients on startup [Was: Bug in 2.0.56-dev]

2006-04-17 Thread Chris Darroch
Colm:

The worker and event MPMs would use these to track their
 non-worker threads; and the parent process for these MPMs could
 monitor them as per option C to decide when the child process's
 workers were ready to be counted.
 
 +1, I think this could be very useful, I came across the same kind of
 problems when looking at handling the odd mod_cgid race conditions when
 doing graceful stops, but in the end didn't solve them (we still have
 slightly different death semantics between prefork and worker-like MPMs
 for cgi termination).

   Great, I'll get started on this first thing tomorrow.  I'm
away Thurs-Mon this week but with luck will have some patches
before I go for all these co-mingled issues.

Finally, a question without checking the code first ... I notice
 that worker.c has the following, taken from prefork.c:
 
 /* fork didn't succeed. Fix the scoreboard or else
  * it will say SERVER_STARTING forever and ever
  */
 ap_update_child_status_from_indexes(slot, 0, SERVER_DEAD, NULL);
 
 Is that right?  Should it perhaps cycle through and set all
 non-DEAD workers to DEAD?  I'm thinking that in prefork,
 threads_limit is set to 1, so mod_status only checks the first
 thread in a slot, but here it'll check all of them.
 
 I'm not sure what you mean. As in, all potential workers globally? That
 child isn't going to have any actual workers, since child_main() never
 got run.

   What I noticed is that the prefork MPM seems to consistently
use the first worker_score in the scoreboard for all its record-
keeping; that makes sense because its thread_limit is always 1.
So in make_child(), it sets that worker_score to SERVER_STARTING,
does the fork(), and if that fails, resets the worker_score
to SERVER_DEAD.  No problems there.

   However, the code seems to have been copied over into the
worker MPM, and that particular if-fork()-fails bit still does
the same business, even though other things have changed around
it.  In particular, multiple worker_score structures are in
use for each child (ap_threads_per_child of them), and
make_child() doesn't begin by setting them all to SERVER_STARTING --
or even one of them -- so having it subsequently handle the
fork()-fails error condition by resetting the first one to
SERVER_DEAD seems incorrect.

   Further, make_child() may be creating a child to replace
a gracefully exiting one, and if so, it shouldn't actually
touch any of the worker_score statuses.  That's because what
start_threads() in the child does -- in the case where fork()
succeeds -- is to watch for threads in the old process that
have marked their status as SERVER_GRACEFUL and only then
initiate new workers to replace them.  So in fact, if fork()
fails, such old workers may still be executing and their
status values may not yet be SERVER_GRACEFUL.  As things stand,
if one of them happens to have a scoreboard index of 0, its
value will get overwritten by make_child().

   I think probably the right thing is for make_child() just
to do nothing here.  That makes the same assumption the rest
of the code does, namely, that worker threads will reach their
normal termination and set their statuses to SERVER_DEAD or
SERVER_GRACEFUL; and if a child process does fail, then
the parent process will notice in server_main_loop() and
set all of the child's threads' statuses to SERVER_DEAD.

   Anyway, I'll include that change in my patchset for review;
I may of course have missed something important.
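
   For concreteness, the error path I'm describing looks roughly like
this (paraphrased from worker.c from memory, so not a literal diff):

/* Paraphrase of make_child()'s fork-failure branch in
 * server/mpm/worker/worker.c; the comment marks the proposed change. */
if ((pid = fork()) == -1) {
    ap_log_error(APLOG_MARK, APLOG_ERR, errno, ap_server_conf,
                 "fork: Unable to fork new process");

    /* The current code, inherited from prefork, resets thread 0 of the slot:
     *
     *     ap_update_child_status_from_indexes(slot, 0, SERVER_DEAD, NULL);
     *
     * but in worker no worker_score in this slot was ever marked
     * SERVER_STARTING by make_child(), and index 0 may still belong to a
     * gracefully exiting child, so the proposal is to drop this call and
     * leave the scoreboard untouched here. */

    apr_sleep(apr_time_from_sec(10));  /* existing back-off before retry */
    return -1;
}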

Chris.

-- 
GPG Key ID: 366A375B
GPG Key Fingerprint: 485E 5041 17E1 E2BB C263  E4DE C8E3 FA36 366A 375B