On Thu, Jun 30, 2005 at 04:58:42PM -0400, Greg A. Woods wrote:

> Get a pair of decent modern fibre-channel PCI host adapters (ideally
> 64-bit for the decently fast 64-bit PCI slots in your decently high-end
> server system) that are supported by your current operating system and
> then go out and buy yourself a nice shiny new Apple Xserve RAID box with
> the maximum possible configuration of cache RAM (or something similar
> but much more expensive if you don't like Apple).
Sounds way more expensive than what I use right now. Most likely that thing
has an internal UPS and mirrored RAM, like the Clariion I had some years
ago -- until its two storage processors said "microkernel panic" and a RAID
group was gone. Oops. But let's keep that expensive stuff for when it is
really needed.

> If you can't afford this kind of solution then you sure as heck can't
> afford the _much_ more expensive programming labour that would be
> necessary to go to the extremes you originally discussed, even if that
> labour were to be donated.  TANSTAAFL

I disagree. I wrote my own FTP/HTTP server for the FTP cluster I run,
because nothing else offered the performance I wanted. No surprise: it's a
fixed set of processes, each serving a few hundred sessions via event
callback functions. Some people say I was crazy to do that, but it squeezes
about a Gbit/s out of rather cheap hardware, it survived being the official
Counter-Strike download mirror a couple of times where more expensive
equipment elsewhere failed, and it is so well balanced that everything
reaches its cap right when the outgoing network links are full. That's what
I consider an economic solution. It wasn't that hard, really.

Thank goodness that cool guy at Cambridge didn't just buy big hardware for
Smail, but wrote an experimental internet mailer instead. I guess he didn't
think that was too hard, either.

Back to the problem: how about not creating and deleting files, but using
preallocated ones to begin with? That would save directory updates and
inode allocations, plus block allocations for small mails. Alternatively, a
helper process could create and flush them on demand, combining multiple
flush requests. If no helper is running, Exim may fork one off; the first
process to bind the port wins and serves all the others. Nothing breaks if
it dies.

Storing multiple mails in one file is tempting, but I am afraid exiscan and
local_scan() are not going to like that a lot.
So is storing data and header in the same directory, having to update just
one, once.

Any more ideas?

Michael

--
## List details at http://www.exim.org/mailman/listinfo/exim-users
## Exim details at http://www.exim.org/
## Please use the Wiki with this list - http://www.exim.org/eximwiki/
