Re: [Libevent-users] [OT] craigslist: libevent programmer wanted

2007-11-11 Thread Marc Lehmann
On Fri, Nov 09, 2007 at 12:39:37AM +0100, Hannah Schroeter [EMAIL PROTECTED] wrote:
> I see less problems with the writing away of the data sucked from the
> web servers, as most Unix like systems write stuff asynchronously, so
> the open(..., O_CREAT...), write() and close() calls won't be too slow.

Most Unix systems cache data for quite a long time, but when they write, user
mode apps usually halt as well. For throughput this is of little concern, but
in a game server I wrote, even an fsync could freeze the server for 15-20
seconds(!) when another sync was in progress at the same time, or when
some other program generated lots of I/O (for example a backup/restore).

(I hear that Linux is abysmal w.r.t. writing out data (and I agree :),
but I found similar problems with FreeBSD, too, so I guess it is quite
common).

> And if they should be slower than the network interfaces, combine
> things with I/O worker {threads,processes}. BTDT (main program using
> event multiplexing on network + socketpairs to I/O helper processes).
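
For illustration, the setup quoted above might look roughly like this with
libevent 1.x; this is only a sketch, with made-up paths and no error
handling, not code from the original mail:

/* Sketch: the main process keeps libevent on the network and hands
 * blocking file writes to a helper process over a socketpair. */
#include <sys/types.h>
#include <sys/socket.h>
#include <event.h>
#include <unistd.h>

static struct event helper_ev;

/* helper process: does the blocking open/write/fsync/close work */
static void io_helper(int fd)
{
    char path[256];
    ssize_t n;

    while ((n = read(fd, path, sizeof path - 1)) > 0) {
        path[n] = '\0';
        /* ... blocking open(path, ...), write(), fsync(), close() ... */
        write(fd, "done", 4);          /* acknowledge completion */
    }
    _exit(0);
}

/* main loop: runs when the helper acknowledges a finished write */
static void helper_done(int fd, short what, void *arg)
{
    char buf[16];
    read(fd, buf, sizeof buf);
    /* ... mark the request as stored and carry on ... */
}

int main(void)
{
    int sv[2];

    event_init();
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

    if (fork() == 0) {                 /* child: blocking disk I/O only */
        close(sv[0]);
        io_helper(sv[1]);
    }
    close(sv[1]);

    event_set(&helper_ev, sv[0], EV_READ | EV_PERSIST, helper_done, NULL);
    event_add(&helper_ev, NULL);

    write(sv[0], "result/0001", 11);   /* normally done from network callbacks */

    event_dispatch();                  /* never blocks on disk I/O */
    return 0;
}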

If anybody uses perl, there is the IO::AIO module which provides this
quite efficiently (using only a single pipe to report results, and it is
only written/read once per poll, not per result).
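
That reporting trick is worth spelling out: instead of writing to the pipe
once per completed request, the workers write a single wakeup byte only when
the result queue goes from empty to non-empty, and the event loop drains
everything in one go. A rough C sketch of the idea (illustrative only, not
IO::AIO's actual code):

#include <event.h>
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static int q_pending;            /* finished-but-unreported results */
static int wake_pipe[2];         /* [0] read end watched by libevent */

/* worker side: called after a blocking operation completes */
static void report_result(void)
{
    int was_empty;

    pthread_mutex_lock(&q_lock);
    was_empty = (q_pending++ == 0);
    pthread_mutex_unlock(&q_lock);

    if (was_empty)                       /* first result since the last drain */
        write(wake_pipe[1], "", 1);      /* one wakeup byte per batch */
}

/* event loop side: drain the pipe, then handle every queued result */
static void on_wakeup(int fd, short what, void *arg)
{
    char buf[64];
    int n;

    read(fd, buf, sizeof buf);           /* discard the wakeup byte(s) */

    pthread_mutex_lock(&q_lock);
    n = q_pending;
    q_pending = 0;
    pthread_mutex_unlock(&q_lock);

    while (n--) {
        /* ... pop one finished request from the real queue and finish it ... */
    }
}

int main(void)
{
    struct event wake_ev;

    event_init();
    pipe(wake_pipe);

    event_set(&wake_ev, wake_pipe[0], EV_READ | EV_PERSIST, on_wakeup, NULL);
    event_add(&wake_ev, NULL);

    /* ... start worker threads that call report_result() when done ... */
    event_dispatch();
    return 0;
}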

-- 
The choice of a   Deliantra, the free code+content MORPG
  -==- _GNU_  http://www.deliantra.net
  ==-- _   generation
  ---==---(_)__  __   __  Marc Lehmann
  --==---/ / _ \/ // /\ \/ /  [EMAIL PROTECTED]
  -=/_/_//_/\_,_/ /_/\_\


Re: [Libevent-users] [OT] craigslist: libevent programmer wanted

2007-11-11 Thread Christopher Layne
On Mon, Nov 12, 2007 at 06:38:33AM +0100, Marc Lehmann wrote:
> On Fri, Nov 09, 2007 at 12:39:37AM +0100, Hannah Schroeter [EMAIL PROTECTED] wrote:
> > I see less problems with the writing away of the data sucked from the
> > web servers, as most Unix like systems write stuff asynchronously, so
> > the open(..., O_CREAT...), write() and close() calls won't be too slow.
>
> Most Unix systems cache data for quite a long time, but when they write, user
> mode apps usually halt as well. For throughput this is of little concern, but
> in a game server I wrote, even an fsync could freeze the server for 15-20
> seconds(!) when another sync was in progress at the same time, or when
> some other program generated lots of I/O (for example a backup/restore).

BTW: This isn't a global Linux issue, it's specifically an issue with ext3
and the way it handles fsync() on a global scale.

http://kerneltrap.org/node/14148

Personally, I use XFS (awesome design).

-cl


Re: [Libevent-users] [OT] craigslist: libevent programmer wanted

2007-11-11 Thread Marc Lehmann
On Sun, Nov 11, 2007 at 09:46:43PM -0800, Christopher Layne [EMAIL PROTECTED] wrote:
> > Most Unix systems cache data for quite a long time, but when they write, user
> > mode apps usually halt as well. For throughput this is of little concern, but
> > in a game server I wrote, even an fsync could freeze the server for 15-20
> > seconds(!) when another sync was in progress at the same time, or when
> > some other program generated lots of I/O (for example a backup/restore).
>
> BTW: This isn't a global Linux issue, it's specifically an issue with ext3
> and the way it handles fsync() on a global scale.

I am specifically not using ext3 anywhere on any of my systems, so, no,
this has nothing whatsoever to do with ext3 and its many deficiencies.

> http://kerneltrap.org/node/14148
>
> Personally, I use XFS (awesome design).

Yeah, and even slower than ext3. By far. And this issue happens with XFS
just the same. When memory grows tight and Linux needs to flush, the
system more or less freezes (w.r.t. I/O). I even see operations such as
utime() freeze, even when everything is in the cache.

(OK, XFS is in fact the fastest filesystem when all you want to do is
stream very large files, and it can be very space-efficient, but at
*anything* else it rather sucks, speed-wise).

(And it fragments like hell, but at least it has an online defragmenter,
which helps those very large files stream even faster).

:)

-- 
The choice of a   Deliantra, the free code+content MORPG
  -==- _GNU_  http://www.deliantra.net
  ==-- _   generation
  ---==---(_)__  __   __  Marc Lehmann
  --==---/ / _ \/ // /\ \/ /  [EMAIL PROTECTED]
  -=/_/_//_/\_,_/ /_/\_\


[Libevent-users] [OT] craigslist: libevent programmer wanted

2007-11-08 Thread Garth Patil
http://sfbay.craigslist.org/pen/cpg/472325599.html

libevent programmer wanted (san mateo)
Reply to: [EMAIL PROTECTED]
Date: 2007-11-07, 8:33PM PST


I'm guessing if you clicked on this link, you probably know what
libevent is, and you're probably more than proficient in C. I have a
small project for someone who has previous experience with libevent,
and _understands_ HTTP (not just what it is, but has good experience
on the protocol layer). The project is to build a blazingly fast HTTP
client for lots of parallel requests. Here is a short description:

*Build a daemon with libevent that:
1. watches a directory called request
2. when a new file arrives in request, loads the file (format below)
3. augments the content with the appropriate headers
4. makes the HTTP request
5. upon success, writes the contents of the response to a file of the
same name, in a directory called result
6. upon failure, writes an error message to a file of the same name, in
a directory called error

*The program should:
- be written in C
- use libevent for filesystem events
- use libevent for the HTTP request event loop
- support most of HTTP/1.1, especially chunking
- allow at least 5000 concurrent requests without breaking a sweat
- NOT BLOCK ON ANY STEP

*File format:
hostname

http headers

http body
EOF

*Example file:
www.monkey.org

POST /banana HTTP/1.1
Content-Type: text/xml

<?xml version="1.0" ?>
<monkey>
<food>banana</food>
</monkey>
EOF
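
For illustration, the request step for the example above could be issued
without blocking using libevent 1.x's built-in evhttp client; this is only a
sketch (the directory watching and file I/O are left out, and the callback
name and hard-coded body are made up for the example):

#include <event.h>
#include <evhttp.h>
#include <stdio.h>
#include <string.h>

static void on_response(struct evhttp_request *req, void *arg)
{
    if (req == NULL || req->response_code == 0) {
        /* write an error message to error/<name> here */
        fprintf(stderr, "request failed\n");
        return;
    }
    /* write EVBUFFER_DATA(req->input_buffer) to result/<name> here */
    printf("got %d, %zu bytes of body\n", req->response_code,
           (size_t)EVBUFFER_LENGTH(req->input_buffer));
}

int main(void)
{
    const char *body =
        "<?xml version=\"1.0\" ?>\n<monkey>\n<food>banana</food>\n</monkey>\n";
    struct evhttp_connection *conn;
    struct evhttp_request *req;

    event_init();

    conn = evhttp_connection_new("www.monkey.org", 80);
    req  = evhttp_request_new(on_response, NULL);

    evhttp_add_header(req->output_headers, "Host", "www.monkey.org");
    evhttp_add_header(req->output_headers, "Content-Type", "text/xml");
    evbuffer_add(req->output_buffer, body, strlen(body));

    evhttp_make_request(conn, req, EVHTTP_REQ_POST, "/banana");

    event_dispatch();   /* many such requests can be in flight at once */
    return 0;
}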

By the way, someone already has a good start on this in an image
grabbing program called crawl (http://monkey.org/~provos/crawl/). If
anyone is up to this, please send me an email and tell me what you
want in terms of compensation.
Thanks!


Re: [Libevent-users] [OT] craigslist: libevent programmer wanted

2007-11-08 Thread Hannah Schroeter
Hi!

On Thu, Nov 08, 2007 at 02:19:25PM -0800, Christopher Layne wrote:
> On Thu, Nov 08, 2007 at 08:11:55AM -0800, Garth Patil wrote:
> > http://sfbay.craigslist.org/pen/cpg/472325599.html
> >
> > libevent programmer wanted (san mateo)
> > - NOT BLOCK ON ANY STEP
>
> close() can block. *boom tsst*.

On sockets, IIRC only with non-standard settings of the SO_LINGER
option.

The setsockopt(2) manual page, on OpenBSD, says:

 SO_LINGER controls the action taken when unsent messages are queued on
 socket and a close(2) is performed.  If the socket promises reliable
 delivery of data and SO_LINGER is set, the system will block the process on
 the close(2) attempt until it is able to transmit the data or until it
 decides it is unable to deliver the information (a timeout period measured
 in seconds, termed the linger interval, is specified in the setsockopt()
 call when SO_LINGER is requested).  If SO_LINGER is disabled and a close(2)
 is issued, the system will process the close in a manner that allows the
 process to continue as quickly as possible.

And IIRC, SO_LINGER *is* disabled by default. So if you want close not
to block, you either keep SO_LINGER disabled or set the linger timeout
to zero (note that these two specify *different* behaviors!).
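
For illustration, the three cases look like this at the setsockopt() level
(a sketch, not from the original mail):

#include <sys/types.h>
#include <sys/socket.h>

void set_linger_examples(int fd)
{
    struct linger lg;

    /* default behaviour: SO_LINGER disabled, close() returns at once and
     * the kernel keeps trying to deliver queued data in the background */
    lg.l_onoff = 0;
    lg.l_linger = 0;
    setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);

    /* linger enabled with timeout 0: close() still returns at once, but
     * the connection is reset and unsent data is thrown away */
    lg.l_onoff = 1;
    lg.l_linger = 0;
    setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);

    /* linger enabled with a positive timeout: this is the configuration
     * where close() may block for up to l_linger seconds */
    lg.l_onoff = 1;
    lg.l_linger = 10;
    setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);
}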

And then, for HTTP, this should be relatively irrelevant, as usually the
client (and the task specified was an HTTP *client*, not a server)
closes the connection only after receiving the response, and the
server only *starts* sending the response after having received
the whole request, so the transmit buffer of the socket should be empty
anyway when the client closes the socket.

Kind regards,

Hannah.