You don't need the system time in microseconds.  What you need is the
big-O function that describes what is really going on.  Does the running
time grow with N, N log N, N^2, or maybe N^3?  That is the first
question to ask.  Sure, some function calls are more expensive than
others, but the same rule applies to them too.  I don't have the code in
front of me at the moment (it would require rebooting into Linux, since
I'm in Winblows right now), but I can tell you without even looking that
something in there must be operating at O(N^2) or worse in places.
Those are the places that need to be examined first.  That is the point
behind true RDBMS theory, as well as basic code optimization in C/C++
and SQL query optimization.
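
To make that concrete, here is a minimal self-contained sketch (the two
passes are made-up stand-ins for illustration, not anything from the
dbmail source) showing how an O(N^2) loop swamps an O(N) one as N grows:

    #include <stdio.h>
    #include <sys/time.h>

    /* Made-up stand-ins for real work; only the loop shape matters. */
    static long linear_pass(const int *a, int n)
    {
        long sum = 0;
        for (int i = 0; i < n; i++)          /* O(N) */
            sum += a[i];
        return sum;
    }

    static long pairwise_pass(const int *a, int n)
    {
        long sum = 0;
        for (int i = 0; i < n; i++)          /* O(N^2) */
            for (int j = 0; j < n; j++)
                sum += a[i] ^ a[j];
        return sum;
    }

    static double ms_between(struct timeval s, struct timeval e)
    {
        return (e.tv_sec - s.tv_sec) * 1000.0 +
               (e.tv_usec - s.tv_usec) / 1000.0;
    }

    int main(void)
    {
        enum { N = 10000 };
        static int a[N];                     /* zero-filled is fine here */
        struct timeval t0, t1, t2;

        gettimeofday(&t0, NULL);
        long s1 = linear_pass(a, N);
        gettimeofday(&t1, NULL);
        long s2 = pairwise_pass(a, N);
        gettimeofday(&t2, NULL);

        printf("O(N):   sum=%ld  %.3f ms\n", s1, ms_between(t0, t1));
        printf("O(N^2): sum=%ld  %.3f ms\n", s2, ms_between(t1, t2));
        return 0;
    }

Double N and the first pass roughly doubles while the second roughly
quadruples.  No amount of shaving microseconds off individual calls
changes that curve.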

Drew Northup, N1XIM


> -----Original Message-----
> From: Mikhail Ramendik [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, October 26, 2004 2:15 PM
> To: DBMAIL Developers Mailinglist
> Subject: [Dbmail-dev] Help me, I don't speak C...
>
>
> Hello,
>
> Well, this is really not a dbmail question, but since I need it to try
> to speed up dbmail, I hope you'll bear with me.
>
> I am trying to understand why _ic_fetch() is slow. For this I need log
> entries with millisecond, not second, precision. Apparently syslogd
> can't do this, so I tried to add the following to the trace function in
> debug.c:
>
>                 struct timeval _tv;
>                 struct timezone _tz;
>
>                 gettimeofday(&_tv,&_tz);
>                 vsyslog(LOG_NOTICE,"Microseconds: %d", _tv.tv_usec);
>
> (Of course I also added #include <sys/time.h> at the beginning.)
>
> This causes a segfault. Why? And what is the right way of getting any
> system time value in milli- or microseconds? (I don't care if it wraps
> every second or not).
>
> Yours, Mikhail Ramendik
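
As for the segfault itself: vsyslog() takes a va_list as its last
argument, not variadic arguments, so handing it _tv.tv_usec directly is
undefined behavior, and a crash is the usual outcome.  The variadic form
is plain syslog().  A minimal corrected sketch (the function name here
is made up for illustration; it is not dbmail's trace()):

    #include <syslog.h>
    #include <sys/time.h>

    void log_with_usec(void)
    {
        struct timeval tv;

        /* The timezone argument is obsolete; pass NULL. */
        gettimeofday(&tv, NULL);

        /* syslog() is the variadic call; vsyslog() wants a va_list.
         * tv_usec is typically a long, hence the %ld and the cast. */
        syslog(LOG_NOTICE, "Microseconds: %ld", (long)tv.tv_usec);
    }

If timing unaffected by clock adjustments matters more than wall-clock
time, clock_gettime(CLOCK_MONOTONIC, ...) from <time.h> is the other
common choice.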
