Here's a complete list of the ones we use. I should mention that we usually
use ns_db via a wrapper; it's easily in the top 10.
Not sure how useful this is, since it is a static analysis of source rather
than a dynamic analysis of which routines are actually used the most. Is
there a way to
The script I used is below. It's a crappy TCL reader we wrote for a
specific purpose and doesn't work quite right - doesn't handle strings
correctly, probably other stuff.
We don't use nsv_ routines - still on 2.3.3. Personally, I don't like
the nsv_ interface. With ns_share, a variable is a
At least on Linux, puts is thread-safe if the -nonewline option is used.
I ran tests and used it for a URL logging feature we developed - worked
great. So instead of puts $string, do puts -nonewline $string\n
Jim
I guess I could start with what you've done, clean up the
code, and start
Using the Referer: header is dangerous - won't work for customers
using a privacy filter. -Jim
One option that you could implement would be to make the post focus
go to the same page and add in a hidden form var or use referer to
check if coming from a submit / POST action. At the
The documentation may be a little dated, but we're on 2.3.3 and developed
our web site over a 4 year period with no access to source - only the docs.
There have been a few obscure server bugs where access to the source
would have been very helpful, but the read the source to understand
I think you are putting the register filter commands in nsd.tcl, but
nsd.tcl is a startup file and only certain commands are allowed.
Instead, create a .tcl file with your register filter commands and
procedures, and put that file in servername/modules/tcl
Jim
Hi.
I'm trying to map user
I have a suggestion: these Source Forge cc's to the mailing list are kinda
nice in that they connect the two forms of communication and help keep the
community tied together.
But would it be possible to either:
a) only post the new responses to the non-digest mailing list, or
b) only post
Thanks Rob. I read the Google posts about this. Sounds like it is an
email thing, so we shouldn't be seeing them in HTTP requests, right?
We do send out some links in email, but not the one I copied in my
previous post.
Thanks again for your help,
Jim
+-- On Jun 19, Jim Wilcoxson
There is no direct way to get hex data into a string in TCL (that I know of).
You could do:
set hexcodes f4e301ab
regsub -all {..} $hexcodes {%\0} hextemp
set hexstring [ns_urldecode $hextemp]
This would work as long as you don't have a zero byte. TCL doesn't handle zero
bytes well in
Mutex stands for mutual exclusion, and is designed to let only one
process run in a critical section at a time.
During server initialization, you say:
ns_share lockname
set lockname [ns_mutex create]
To protect a section of code from multiple processes, you use:
ns_mutex lock $lockname
# ... critical section ...
ns_mutex unlock $lockname
(Remember to unlock on error paths too, e.g. by wrapping the critical
section in a catch.)
Pragma should not be sent out from a web server; this is for web clients
to communicate with servers/proxies.
You should send:
1. Cache-control: no-cache
2. Ensure no Last-Modified header (can't use ns_returnfile)
3. Add Expires: Thu, 01 Jan 1999 00:00:00 GMT (don't use a date like
1990; some
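In AOLserver Tcl, the headers above could be set roughly like this (a
sketch only; ns_conn, ns_set, and ns_return are standard AOLserver
commands, but the $page variable is a placeholder):

```tcl
# Sketch: send no-cache headers on the current connection.
set hdrs [ns_conn outputheaders]
ns_set put $hdrs Cache-control no-cache
ns_set put $hdrs Expires "Thu, 01 Jan 1999 00:00:00 GMT"

# Return the page with ns_return rather than ns_returnfile,
# so no Last-Modified header gets added. $page is hypothetical.
ns_return 200 text/html $page
```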
FYI, the ns_set below is not valid syntax...
The correct syntax is ns_set put [ns_conn outputheaders] Pragma no-cache
Jim
Ian Harding wrote:
I am trying to fix it so a page is always fetched fresh from the server, even when
the user hits the back button (or I use it with .history(-1))
My own experience is that logging is essential, even in production. We
log everything - SQL commands, debug logs we've inserted... everything.
The problem is that when a customer writes in and says "I can't log in
to your web site", without logs you are left with telling them "clear
your cache and
releases - dunno.
Jim
Jim Wilcoxson wrote:
The way I approach these things is to do a prototype in TCL using
existing interfaces/functions, develop the whole application/page that
will use it, and THEN if there is a performance problem, make sure
that this thingy is the cause. Most
I vote with Rob to use a global variable for data storage, not a function.
Accessing variables with functions a la [nsv_set/get/whatever] is problematic
because they are not general. You can't pass them to functions. They
aren't first-class variables, so any time you use them, you have to be
I dunno if this happens with later versions, but thought I'd mention it.
This code will crash 2.3.3 every time:
ns_return blah
ns_set update [ns_conn outputheaders] blah blah
I think I remember testing this about 6 mos ago and even a read-reference
to outputheaders causes a crash.
I'm working again on migrating to 3.x from 2.3.3 and could use some advice.
1. I downloaded the ArsDigita version - 3.3ad13 too. Are there any critical
fixes I need to put in 3.4 before using it for production? Or is 3.3ad13
better for production? Or...?
2. Is there a good reference for 2.x
handler to kill the main process
if compiled under Linux. This isn't in the 3.4 version. The hanging
around behavior under Linux means that init won't restart the server
if it segv's. Lots of hassles because of this.
Jim
On 2001.08.07, Jim Wilcoxson [EMAIL PROTECTED] wrote:
Thanks. I found
This isn't a Linux difference - it's an AS difference.
On the same machine, 2.3.3 scheduled procs can do [ns_thread getid] and
will get a number that represents a Linux process, but 3.4 will return
a number that does not represent a Linux process, i.e., there is no
corresponding number in the
Patch for ns_urldecode - it does not decode '+' to space:
in nsd/urlencode.c at line 113; broken version:

    twobytes[2] = '\0';
    while (*string != '\0') {
        if (*string != '%') {
            Ns_DStringNAppend(pds, string, 1);
            ++string;
        } else {
fixed:
twobytes[2] =
On 2001.08.07, Jim Wilcoxson [EMAIL PROTECTED] wrote:
Has anyone ever seen the TCL open command block with TCL 8x? If I use
nsd76, things work fine. With TCL 8x, my startup script hangs at an
open statement, trying to open a file for reading. The only weird thing
is that the file
Is this enforced in AS 3.x? Your note says the web server cannot follow...,
which is only true if it is chrooted or there is some server code checking
links (I think).
Jim
Hi Ellen,
every web server has what's called a pageroot, the directory in the
filesystem where the web pages are
The sample-config.tcl in 3.4x is not totally correct either. I have already
found a couple of parameters in the wrong section and it has delayed our
conversion from 2.3.3 until I can go through the source and figure out ALL
of the parameters and which section they should go in. Maybe the docs
We do this. Register a proc for /dir, put your TCL scripts there, in the
/dir handler look at the URL suffix and do a TCL source command or
ns_returnfile. (Put a catch around the source command - that's the
important part).
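A minimal sketch of that arrangement (the URLs, paths, and proc name are
made up; ns_register_proc, ns_conn, ns_returnfile, and ns_guesstype are
standard AOLserver commands, though argument conventions for registered
procs vary a bit between AOLserver versions):

```tcl
# Sketch: serve /dir/* by sourcing .tcl files from a private directory.
ns_register_proc GET /dir dir_handler

proc dir_handler {args} {
    set suffix [file tail [ns_conn url]]     ;# e.g. foo.tcl
    set path "/web/private/tcl/$suffix"      ;# hypothetical location
    if {[string match *.tcl $suffix]} {
        # The catch is the important part: a script error must not
        # kill the connection without sending a response.
        if {[catch {source $path} err]} {
            ns_return 500 text/plain "error: $err"
        }
    } else {
        ns_returnfile 200 [ns_guesstype $path] $path
    }
}
```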
Jim
Wow, that is a *great* idea. Then you could register a
PageRoot based on the
request URL, or maybe a Host header? Maybe the real PageRoot couldn't be
set, but a falsified or virtualized one could be?
--
Mark Hubbard: [EMAIL PROTECTED]
Microsoft Certified Professional
Knowledge is Power.
-Original Message-
From: Jim Wilcoxson [EMAIL
Oops. This is partially my own fault - I based my config on a 3.2
sample-config.tcl, and things like maxthreads apparently moved from
the ns/threads section to the server/servername section. I knew
that one was wrong so figured others might be too.
Jim
On 2001.08.22, Jim Wilcoxson [EMAIL
This gets a conn:

    Ns_Conn *conn;

    /* get connection structure */
    conn = Ns_TclGetConn(interp);
    if (conn == NULL) {
        Tcl_AppendResult(interp, "NULL conn??", NULL);
        return TCL_ERROR;
    }
But a detached thread won't have a conn structure - it isn't associated
with a connection.
I think
IE looks at the data that is sent back and will change (for example) the
MIME type from text/plain to text/html if you send a HTML document with
the text/plain MIME type. Very stupid of them in my opinion, but what
do I know. It doesn't just look at the extension when deciding what
MIME type it
Oh, that's great. Bad news for those of us who like reliable, predictable
software. What URLs did you get this info from?
Sorry, don't remember. I did a Google Search. The document was on
Microsoft's site. I was trying to get IE to recognize a CSV text
download, but it kept displaying it
Are you sure it isn't your OS killing your server? Unix tends to do
this when it runs out of resources, and it often kills the process
using the most resources first.
Jim
The nssock notice is apparently benign. (I found the code after all. I
needed to include .cpp files in my grep.)
Here's another version:
http://www.rubylane.com/public/nimda.tcl.txt
This adds a 60-second delay before the redirect and has a maximum # of
connections that will be held up on your server. I have our server
set to hold up to 10 attackers. Once this limit is exceeded the
redirect is issued
It appears that delaying this worm on one system is effective, but it is
multi-threaded to some extent because a single attacker is simultaneously
attacking a couple of our machines.
I have 3 in jail on one server, 7 on another, and 3 on another...
Jim
The attack code isn't multi-threaded: if
Another circumstance is if you have a large TCL library code base in
modules/tcl and make heavy use of the thread mechanism, like frequently
starting detached threads.
This is no big deal with 2.3.3 because of the shared TCL interpreter,
but 3.X takes a huge performance hit when a thread starts.
Then whenever a new thread starts, this proc string is evaluated to
define all of the TCL procedures in the new thread; but nothing is
executed other than defining the procs.
Seems like some kind of dynamic proc definition mechanism similar to
autoloading would be useful here...
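One way such lazy definition could look (a sketch only, not Rob's
implementation; the index array, file paths, and proc names are invented,
and this uses Tcl 8 namespace syntax):

```tcl
# Sketch: define procs on first use instead of at thread start.
# Assumes a prebuilt index mapping proc name -> file that defines it.
array set ::proc_index {
    rl_returnredirect /web/tcl/redirect.tcl
    rl_log            /web/tcl/log.tcl
}

rename unknown __orig_unknown
proc unknown {name args} {
    if {[info exists ::proc_index($name)]} {
        source $::proc_index($name)        ;# defines $name
        return [uplevel 1 [linsert $args 0 $name]]
    }
    # Fall back to the original unknown handler.
    return [uplevel 1 [linsert $args 0 __orig_unknown $name]]
}
```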
Jim
Jim Wilcoxson wrote
No, you can do this. But you need to use \" for the quotes
inside a quoted string (like name=\"newEmp\") and if you
have dollar signs or braces inside the quotes and don't mean
to eval TCL variables or call TCL functions, you need to
backslash them too.
Also, a square function brace starts a new
should only bump maxthreads
when the wait time starts to exceed 3 seconds. (I have no idea how
we measure this yet...)
Jim
On Thursday, September 20, 2001, at 09:46 AM, Jim Wilcoxson wrote:
Seems like some kind of dynamic proc definition mechanism similar to
autoloading would be useful
the same thing
could be done with proc so I don't have to edit everything?? Or I
can redefine it just for our library ... think I know how.
Jim
On Thursday, September 20, 2001, at 10:39 AM, Jim Wilcoxson wrote:
I think the idea of redefining the unknown command is good.
OK, I'm sorry to do
Rob amazes me sometimes. :) Someone posts an idea and Rob posts a complete
implementation the same day! Very cool.
Jim
Rob, thanks for fixing my simplistic solutions. I'm woefully ignorant of
namespaces, and your suggestion about how to handle filters and registered
procs is excellent --
For us, slow thread creation is more a scalability concern compared
with 2.3.3. We have a production setup with 2.3.3 and have to evaluate
how 3.X will affect that setup.
Specific concerns:
1. We do launch threads. We have to look through all our code to make
sure we don't do this anywhere
like images and style sheets; one for lightweight
pages - bboard messages/indexes, classified ads, etc.; and one for
heavyweight pages - various types of searches.
+-- On Sep 21, Jim Wilcoxson said:
Rob - if we look at a simple case of all requests being CPU bound,
2 CPUs
If the server is that critical to your operation, you have to do what
we do: monitor it and make sure it restarts quickly if it crashes.
Relying on software to never crash is bad business IMO.
You do not want to be busy doing development work and get a note from
a customer that your site has been
cpus.
It would be interesting to have a stat for how often threads are
blocked while writing to the socket.
J
On Saturday, September 29, 2001, at 03:59 AM, Jim Wilcoxson wrote:
if returning data to the user could go through a single-thread,
multi-connection select loop if it would overflow
I like the queueing behavior rather than "Server Busy". If it takes a
long time, people will know the server is busy and can either wait or
not. If it spits back "Server Busy", they have to keep hitting Reload.
One good thing about having a connection queue separate from the
listen queue is that
Can you duplicate the problem without using a database? We used to think
a lot of our crash problems were database related because they only happened
on the machine where our DB is used. But after further testing, we were
able to cause crashes w/o using the DB.
Do you see the resuming message
It seems there is at least one leftover bug that we also saw in 2.3.3.
malformed bucket chain in Tcl_DeleteHashEntry
Yesterday this was in one of our logs and the server promptly crashed.
Patches/clues welcome.
Jim
Jerry, that's a good alternative. I did some investigation and it
looks like LinuxThreads doesn't support attributes on condition
variables or on threads. POSIX threads support the notion of
attributes on threads and synchronization variables, but the only
attribute they talk much about is
I'm very glad to see that fixes are being backported to a production
version and hope this continues for important bug fixes even after
4.0 is released. Thanks!
Jim
AOLserver 3.4.2 and 3.2.1 have been released along with a totally new
AOLserver.COM web site with new docs!
Check it out at
Check out http://bw.org/whois/ for a program you can exec to do a
whois, figure out which registrar they used, then query that registrar
to get the detailed info. Doing it in a dedicated thread is a totally
separate issue from how you do the query.
ns_addrbyhost will only work with a domain
I think all sites should remove the Server: header. Its only usefulness
I can see is for stats and to help people attack a site more efficiently.
It'll be off ours soon (for the latter reason).
Jim
- Original Message -
From: Dossy [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent:
I think you can just say ld -o blah.so -shared blah.a without extracting
.o files.
The tricky part is going to be ensuring that the files were compiled with
the -D_REENTRANT flag on Linux. Otherwise, the code won't work in weird
cases (like referencing errno).
Jim
On Linux, you should be
ar -x blah.a
On Thu, Oct 25, 2001 at 06:55:50PM -0500, Rob Mayoff wrote:
On Linux, you should be able to extract all the .o files from the .a and
combine them into a .so.
Rob, how would I do that? Can you point me to any info?
--
Andrew Piskorski [EMAIL PROTECTED]
Are you exec'ing anything in the scheduled proc? I've seen some
weirdness in the past with execs getting hung, although not much
recently. I think it was more of an OS (Linux) issue.
We've used scheduled procs for years w/o any problems on all of our
servers, though we only have a handful (of
Is this true of detached threads too - no cleanup? We're probably fine,
but nice to know when to be extra careful. -Jim
Make sure that you explicitly release database handles and ns_sets in
scheduled procs. A connection thread has a bunch of cleanup that it does,
and none of it gets
Microsoft will always own the NT server market and there will be a
minuscule following for AS on NT. A survey would likely show that the
number of active AS users (users, not IP addresses) is already tiny
compared to Apache and IIS. As a product with a smallish audience, it
makes a lot of sense
This is not documented in 3.4 but is present and appears (from limited
testing) to work. It was in 2.3.3 also. It takes a time format string like:
ns_param rollfmt %Y-%m-%d-%H:%M
and will rename the access.log file using this pattern.
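In the config file that might look like the following (a sketch; the
server name is an example, and the section name is the one typically
used for the nslog module in 3.x configs):

```tcl
# Sketch: roll access.log with a timestamp suffix.
ns_section "ns/server/server1/module/nslog"
ns_param rollfmt %Y-%m-%d-%H:%M   ;# pattern used when the log is rolled
```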
Jim
If I understand your question then my guess as to the advantage of
accepting connections even when there are no threads available is thus:
1. We know it's the typical case that you can have 100 connections and only
ten threads, so by default you have to be able to accept more
I agree with Jerry's explanation. -Jim
At 05:42 AM 9/29/01, you wrote:
On 2001.09.29, Jim Wilcoxson [EMAIL PROTECTED] wrote:
But it doesn't seem to buy anything to have users waiting on a connection
queue vs. waiting on the socket listen queue, except they have a different
message
I ran across a single process, non-blocking proxy program that will proxy
multiple TCP and UDP requests. The configuration is very simple - a list of:
listen_on_addr:port = forward_to_addr:port
It's called fastforward at http://www.worldvisions.ca/
Jim
As a suggestion, a -e option with TCL that is eval'd before running nsd.tcl
would be useful. We have 8 servers that are mostly configured the same
except for a handful of parameters, specifically:
- the listening port
- does it need to load nsperms?
- keepalive is not enabled for some
- log
Here's the fix we use. What we found is that if a redirect follows a POST,
then MSIE will ignore the arguments present on the redirect.
Jim
proc rl_returnredirect {location} {
global __did_ns_return rlfont
global __trace_endtime
if {[info exists __did_ns_return]} {
rl_log error
We're finally running 3.4 in production. :)
So far things have gone very well. We've had a situation with
launching threads and I'd like to float a suggestion to handle
it better.
One server was running at around 81MB after it fully started, with
around 12 nsd processes. It has now grown to
We started 3.4 on a production server this morning and after 90 minutes it
looked like this:
Last login: Mon Oct 15 05:29:02 2001
No mail.
$ ps aux|grep nsd
nsadmin 32565 0.0 3.4 40424 36132 ? S 04:15 0:01 bin/nsd -i -t nsd
nsadmin 32568 0.0 3.4 40424 36132 ? S 04:15
I'm curious why you don't just set minthreads = maxthreads at startup to reduce load.
Because a) I don't know what a good value is for maxthreads, so
overestimate it; b) It will take longer to get the server to accept
requests when starting up.
After running 12 hours, we're seeing 28 nsd threads using 253MB. Does
that still seem reasonable for memory usage? Our baseline for this
server is 81MB right after the server starts with around 12 threads.
This server handled 762K requests today, total (less than that in the
12 hour period).
I was travelling yesterday, plus we are still fighting a few fires
since the 3.4 upgrade. To answer some of the questions/suggestions
people have posted:
1. Yes, I'm sure we're running 7.6 TCL. I ran into a few problems
with 8.X because we (intentionally) use poorly-constructed lists
in a
17, 2001, at 10:23 AM, Jim Wilcoxson wrote:
We don't use nsv's - IMO that programming model is broken because
regular TCL constructs can't be used on nsv's.
I probably missed a memo or something, but can you tell me what you have
in mind here?
Pete.
In glancing at the zippy code, it looks like it used a power-of-2
algorithm, so I figured it might cause less heap fragmentation. I
think that might be at least some of the problem. Does the standard
gnu/linux memory allocator handle fragmentation poorly/well?
+-- On Oct 17, Jim
search, but I was getting mostly 'zippy
the pinhead' and other weird stuff!
Anyone have a URL or explanation?
thanks,
--brett
On Wed, 17 Oct 2001 09:54:25 -0500
Rob Mayoff [EMAIL PROTECTED] wrote:
+-- On Oct 17, Jim Wilcoxson said:
In glancing at the zippy code, it looks
On a test server configured with threadtimeout set to 120, minthreads
not set (defaults to 1/0 I think), and maxthreads set to 40, I have
another server reference a URL every 5 seconds. What I see on the
test server is:
[17/Oct/2001:19:17:49][9533.8200][-conn0-] Notice: monitor: returning page
I'm confused. Browsers are supposed to handle relative URL
redirection, so I don't understand why a relative redirect is being
changed to absolute by this routine. If it were just kept relative
(i.e., original code was removed instead of adding new code), then
relative redirect requests could
However, resource starvation/denial of service is a serious
potential problem. Fire up a couple hundred connections where
you feed a very large Host: string ...
Go to any web site and hit its search engine 200 times. It will most
likely die a horrible death. In fact, any routine request to
Dec 2001, Jim Wilcoxson wrote:
I don't know how to do it (tried a bunch of things), but agree it would
be nice to know if the browser side of the connection has died because of
a user hitting escape or whatever. Some processes may take several minutes.
If a user gets impatient or double
I guess I'll have to look at the AOL code sometime when I get a
chance. It seems like when a browser closes a connection, a select
event will occur on that fd, something will get marked, and creating
an ns_conn polling command that said whether the fd is closed would
not be a big deal... But
What would be even better is to just write the code right in the first place
so that no error ever happens. (I tried that. I came close.)
People make mistakes. Server software should do something reasonable. The
problem with just closing the connection is:
- the user will get connection
Maybe there needs to be a flag in ns_write and friends to indicate
that something has been written to the connection. If not, send
out a 500 before closing the connection.
To me, this seems like a general error condition not specific to
filters. I imagine (but don't know for a fact) that there
What controls are there to limit the time a page may take to return?
No limits
What controls are there to kill a page that is running?
No controls. If you want a TCL script to have a limit, the script has
to have tests inside itself to check the limits and abort processing.
It would be
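A script enforcing its own time budget might be sketched like this (the
30-second limit and the $rows loop are invented; ns_time is the standard
AOLserver command returning seconds since the epoch):

```tcl
# Sketch: a page script that aborts itself past a time budget.
set deadline [expr {[ns_time] + 30}]   ;# 30-second budget (example)
foreach row $rows {
    if {[ns_time] > $deadline} {
        error "page exceeded its 30-second limit"
    }
    # ... process $row ...
}
```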
Saw this today in my access log:
XX.XX.XX.XX - - [11/Jan/2002:18:38:07 -0800] 400 534
It's a Bad Request error, but the logging is goofed up - no space after the URL.
This error was caused by telneting into AS 3.4 and not sending any data.
Jim
is running,
or is the fact that we have renamed procedures enough to cause a crash
later at some point?
Thanks,
Jim
Jim Wilcoxson wrote:
Our server is crashing 1-5 times per day with this error in the server log:
malformed bucket chain in Tcl_DeleteHashEntry
the above means the tcl
A while back I reported that we periodically get an error:
malformed bucket chain in Tcl_DeleteHashEntry
Some days this problem occurs 12 times, other days only once. But we
have noticed that when we have network connectivity problems, our server
crashes more often. May be related to this
It would be cool if the ns_stats code tracked CPU time and the amount of
data a script generated.
Jim
Hello.
I've read a bit of the AOLserver sources again. I've been wondering if
someone has used TclX's profiling code in AOLserver.
I've read a bit on ns_stats but this is not what I want
We had a 250MB file with around 1.5M lines of 170 bytes each. A TCL
program to read this file in a ~10-line loop with a few if tests, a
handful of string commands (trim, length, compare), and setting an
ns_share array took 365 seconds. The exact same thing in a 20-line
C program took 40
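The Tcl side of that comparison would have looked roughly like this (a
reconstruction only; the file path, field positions, sentinel value, and
array name are all invented):

```tcl
# Sketch: scan a large fixed-format file into an ns_share array.
ns_share bigarray
set fp [open /data/bigfile r]          ;# hypothetical path
while {[gets $fp line] >= 0} {
    set key [string trim [string range $line 0 15]]
    if {[string length $key] == 0} continue
    if {[string compare $key END] == 0} break
    set bigarray($key) [string trim [string range $line 16 end]]
}
close $fp
```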
Hey Rob,
I just tried your idea and it works correctly if flush is added. I
figured this is probably a libc bug but thought I'd post it here in
case others have trouble with it. Might be a good idea to add a flush
inside ns_ftruncate as a workaround.
Jim
+-- On Feb 22, Jim
When we evaluated 3.4 performance (TCL 7.6 and 8x) vs. 2.3.3 (TCL 7.4),
one thing I compared was a 10-line loop to load an ns_share array. The
loop contained maybe 8-10 string operations on a string of around 200
characters, and a single set command with an ns_share array lvalue.
One execution
possible,
although there is no way to unset a C variable but an ns_share can be
unset. I'm sure many more issues than this... LOL.
FYI, the loop was contained in a proc in a TCL file that is loaded
during startup. So I assume it was compiled. We're using the
standard allocator, not -z.
Jim
Without trying to start a flame war, I'll make these observations:
1. Though it may not be RFC compliant to do what it is doing, it will
still work on 95%+ of the browsers in use.
2. It is only an issue if the developer has a coding error.
3. Taken together, it is extremely unlikely to happen
We get this error quite frequently on one of our servers - 5-10 times
per day. I read somewhere about an error in the TCL 7.6 expr handling
that could cause it. I dunno... There are only a few modules we run
on this server that we don't run on the others, and we have reviewed
them several times
When there is ordering involved, as Jason mentions, hashes/arrays don't
work so great. Random access to an arbitrary element is fast, but there
has to be another data structure to remember the order.
Lots of times we combine the two: put the data in hashes/arrays with keys,
then keep a list
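Combining the two might look like this (a sketch; the proc and variable
names are invented):

```tcl
# Sketch: array for fast lookup by key, list to remember insertion order.
proc store {key value} {
    global data order
    if {![info exists data($key)]} {
        lappend order $key      ;# remember arrival order, once per key
    }
    set data($key) $value
}

# Iterate in original insertion order:
proc dump {} {
    global data order
    foreach key $order {
        puts "$key -> $data($key)"
    }
}
```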
Agree, and I'd advise that you spend a couple of days reading the TCL
book. If you know C, you will pick up TCL in about 5 seconds and be
proficient in a week or two. If you value your time, it's a very wise
investment.
When we first started our site we worried about stupid details like
group comes out with new
releases of software every 3 months, everyone is expected to jump on
the bandwagon. Putting bug fixes back into released versions, to some
reasonable extent, is really a big help to production sites that
aren't able to switch environments every 6-months to a year.
Jim
On 4/18/02 10:49 AM, Jim Wilcoxson [EMAIL PROTECTED] wrote:
This test is a
10-15 line TCL loop to read through a file containing around 500K lines,
do 3-4 string operations, set an ns_share array entry
Can you try the same test with an nsv instead of an ns_share?
Yeah, although we
This bug has been around forever. It also happens, at least in 2.3.3, when
you do an ns_return, forget the TCL return, and do a 2nd ns_return. We
redefined ns_return so that if it is attempted twice, the 2nd call is
ignored because it kept crashing our server.
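A guarded redefinition along those lines might be sketched as follows
(an illustration, not our exact code, though the __did_ns_return flag
name appears elsewhere in these posts; the flag would need to be reset
per connection, e.g. in a filter):

```tcl
# Sketch: make a second ns_return on the same connection a no-op.
rename ns_return __real_ns_return
proc ns_return {args} {
    global __did_ns_return
    if {[info exists __did_ns_return]} {
        return                       ;# already responded; ignore
    }
    set __did_ns_return 1
    return [eval __real_ns_return $args]
}
```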
Jim
I ran the following in a
] still
returns a valid setId, so when you deal in the output headers after an
ns_respond, you crash the server.
-T
On Sun, 9 Jun 2002 18:35:04 -0700, Jim Wilcoxson [EMAIL PROTECTED] wrote:
For an esoteric reason which I won't go into, we were doing something that
occasionally caused server
On 2002.06.10, Jim Wilcoxson [EMAIL PROTECTED] wrote:
I'd welcome some tips from a TCL guru to explain how to throw a TCL
error and/or get some kind of useful trace/debug information, like
what table the thing is barfing on.
A Tcl error from C?
Ya. I wrote a dump routine. Seems
I narrowed this nagging server crash bug down today to a test case:
ns_share sharevar
set sharevar(1) 1
if {[info exists sharevar(2)]} {
}
ns_return 200 text/plain hi
Putting this TCL script in pageroot with no other modules loaded and
hitting it at a rate of about 800 times per
-O1 doesn't work, so I think any optimization is out. -Jim
It seems the problem causing our server crashes is that I recompiled
AOLServer with -O2 on Linux. When compiled with -g, the server
doesn't crash, and the test case seems to be running reliably at the
same speed - actually,
that AOLserver 3 has C API for caches that
automatically flush outdated entries and automatically evict LRU entries
to limit memory use, and that there's an ns_cache module that provides a
Tcl layer for the C API?
+-- On Jul 30, Jim Wilcoxson said:
Just roll your own cache, wherever you
Thanks Rob. -J
+-- On Jul 30, Jim Wilcoxson said:
I'm not complaining, because this isn't particularly important to me,
but I can't imagine a relatively new AOLServer developer figuring any
of this out. I'm lost, and I've been developing on it for 8 years.
The ns_cache module
I just looked at the socket code, the main part being in nsd/sock.c:
---
    Ns_SockSend(SOCKET sock, void *buf, int towrite, int timeout)
    {
        int nwrote;

        nwrote = send(sock, buf, towrite, 0);
        if (nwrote == -1
                && ns_sockerrno == EWOULDBLOCK
                && Ns_SockWait(sock, NS_SOCK_WRITE,
For what it's worth, we're not using any virtual server modules and
see this problem.
Jim
Interesting you link this to virtual servers. What is your virtual
server technology?
Jerry
At 03:32 AM 8/24/2002, you wrote:
Dear all,
I got this error message after setting up my AOLserver
This problem is occurring in ns_getform, so the relevant code is:
    set fp ""
    while {$fp == ""} {
        set tmpfile [ns_tmpnam]
        set fp [ns_openexcl $tmpfile]
    }
    ns_conncptofp $fp
The disk is not full and this problem is very