Actually, I did think of another idea, though I can't attest to whether it's any more efficient.
You would need one unified function for file logging; let's call it log_file() for now. Pass a filename argument to log_file(). On the first call it opens the file and saves the filename and fp in a linked list. Later calls to log_file() search the linked list for an entry matching the filename; if one matches, it logs to the saved fp. The function flushes the log after each write but doesn't close the file. The trade-off is that every call does strcmp() searches, and that might end up being more overhead if you log to many files. A rough sketch is below the quoted message.

> -----Original Message-----
> From: Chris "Winston" Litchfield [mailto:[EMAIL PROTECTED]
> Sent: Saturday, October 04, 2003 12:20 AM
> To: [email protected]
> Subject: Log performance.
>
> Greetings,
>
> I have a question/request for ideas. Basically, the problem I have is
> this: after a long time (5+ years) of direct development on my mud, it
> is filled with GREAT log lines. That's not really the problem, but I
> believe I have so many now that the constant writing to disk is
> hurting performance.
>
> Every logfile entry does an open file, write line, close file.
>
> This is a huge performance overhead. I like to know things that
> happen upon a crash.. but still..
>
> So the question is: does anyone do logging differently in a way that
> may give much better performance? (ie: not so many constant writes,
> but still maintaining the same information.)
>
> Chris "Winston" Litchfield.
>
>
> --
> ROM mailing list
> [email protected]
> http://www.rom.org/cgi-bin/mailman/listinfo/rom
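Here's a minimal sketch of what I mean. Untested, and the names (log_file(), LOG_ENTRY, log_list) are just placeholders, not anything from stock ROM:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdarg.h>

/* One node per open log file. */
typedef struct log_entry
{
    char             *filename;
    FILE             *fp;
    struct log_entry *next;
} LOG_ENTRY;

static LOG_ENTRY *log_list = NULL;

/* Log a printf-style message to 'filename'.  The file is opened on
 * first use and kept open; later calls reuse the saved fp. */
void log_file( const char *filename, const char *fmt, ... )
{
    LOG_ENTRY *entry;
    va_list args;

    /* Search the linked list for an entry matching the filename. */
    for ( entry = log_list; entry != NULL; entry = entry->next )
        if ( strcmp( entry->filename, filename ) == 0 )
            break;

    /* Not found: open the file and push a new entry onto the list. */
    if ( entry == NULL )
    {
        FILE *fp = fopen( filename, "a" );
        if ( fp == NULL )
            return;
        entry = malloc( sizeof( *entry ) );
        entry->filename = strdup( filename );
        entry->fp       = fp;
        entry->next     = log_list;
        log_list        = entry;
    }

    va_start( args, fmt );
    vfprintf( entry->fp, fmt, args );
    va_end( args );
    fputc( '\n', entry->fp );

    /* Flush so the line survives a crash, but leave the file open. */
    fflush( entry->fp );
}

Then every logging call in the code would look something like:

    log_file( "log/bugs.log", "bug: %s", msg );

You still get crash-safe logs because of the fflush(), but you skip the open/close on every line. Whether the strcmp() walk down the list costs more than that saves depends on how many distinct files you log to.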

