Read the code - there are lots of other things being done in that loop. All
myGlobals.stickyHosts does is prevent the host from being selected for purge
in hash.c; it doesn't stop the session purge, etc.

I don't know of anyone who has pushed ntop beyond 2 processors (real or
virtual).  There could well be threading issues, but unless you can donate
some hardware, they're not going to be easy to find.  Running under gdb and
posting the back traces will find some, but gdb itself alters the threading
somewhat, so it can't find them all...  As you find them, please post using
the problem report so we have a tracking #.
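For reference, the usual way to capture those traces is something like the session below (assuming a pthreads build with debug symbols; adjust the binary path for your install):

```
# Start ntop under gdb (-u root avoids the pthread permission issue):
gdb --args /usr/local/bin/ntop -u root

# Inside gdb, after the SIGSEGV:
(gdb) thread apply all bt full    # full back trace of every thread
(gdb) info threads                # which thread took the signal
```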

In all honesty, ntop really can't drive a system that hard.  One processor
per NIC, with dribs and drabs for housekeeping, etc., is about all you
need, AT MOST.

Yeah, I forgot about NPTL when I was roasting them over pthreads in my
earlier reply.  At least -u root gets past the pthread issue in gdb, but
nothing I've seen fixes the tools for NPTL (yet)...

-----Burton

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Dominique
Lalot
Sent: Wednesday, May 28, 2003 8:05 AM
To: [EMAIL PROTECTED]
Subject: [Ntop] BUG ntop new dump


Hello,

That's me again.
When I first used ntop, I played a little with the major stats, without
displaying too many machines or pie charts.
Now that ntop is no longer segfaulting in pbuf.c, I was clicking more
intensively, and many times it crashed or looped.
I went back from the Red Hat 9 glibc to the 8.0 one (problems with the
thread libs? NPTL?).
Anyway, it's bombing again.

Just an idea: my computer is a dual-processor Xeon with hyper-threading
enabled, so top shows me 4 processors instead of 2.
I'm wondering if the code is really thread safe under Linux, as we discussed
regarding the pbuf.c bug.
For me, it's not stable enough to go into production!

As I use -c (sticky hosts), why is there still a purgeIdleHosts?

Thanks

Dom


Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 32771 (LWP 1471)]
updateDeviceThpt (deviceToUpdate=0) at traffic.c:213
213 if(broadcastHost(el)) continue;
(gdb) list
208 }
209
210 for(idx=1; idx<myGlobals.device[deviceToUpdate].actualHashSize; idx++) {
211   if((el = myGlobals.device[deviceToUpdate].hash_hostTraffic[idx]) != NULL) {
212
213     if(broadcastHost(el)) continue;
214
215     el->actualRcvdThpt = (float)(el->bytesRcvd.value-el->lastBytesRcvd.value)/timeDiff;
216     if(el->peakRcvdThpt < el->actualRcvdThpt) el->peakRcvdThpt = el->actualRcvdThpt;
217     el->actualSentThpt = (float)(el->bytesSent.value-el->lastBytesSent.value)/timeDiff;
(gdb) bt full
#0 updateDeviceThpt (deviceToUpdate=0) at traffic.c:213
timeDiff = 60
timeMinDiff = 60
timeHourDiff = 182
totalTime = 182
idx = 1025
el = (HostTraffic *) 0x369
#1 0x400e83a2 in purgeIdleHosts (actDevice=0) at hash.c:458
idx = 2760
numFreedBuckets = 0
maxBucket = 0
theIdx = 1054130988
hashFull = 65549
hashLen = 32768
startTime = 1054130988
purgeTime = 69632
lastPurgeTime = {1054130867, 0 <repeats 31 times>}
firstRun = 0 '\0'
theFlaggedHosts = (HostTraffic **) 0x0
len = 4294963200
newHostsToPurgePerCycle = 1074896256
purgeStats = '\0' <repeats 96 times>,
"[EMAIL PROTECTED]@\001\000\000\000d.\004\000dJ�A%_\020@"
hiresDeltaTime = 24.0000305
hiresTimeStart = {tv_sec = 1054130988, tv_usec = 117607}
hiresTimeEnd = {tv_sec = 0, tv_usec = 0}
#2 0x400efdc9 in scanIdleLoop (notUsed=0x0) at ntop.c:670
i = 0
#3 0x403e9881 in pthread_start_thread () from /lib/i686/libpthread.so.0
No symbol table info available.
#4 0x403e9985 in pthread_start_thread_event () from /lib/i686/libpthread.so.0
No symbol table info available.
(gdb) info stack
#0 updateDeviceThpt (deviceToUpdate=0) at traffic.c:213
#1 0x400e83a2 in purgeIdleHosts (actDevice=0) at hash.c:458
#2 0x400efdc9 in scanIdleLoop (notUsed=0x0) at ntop.c:670
#3 0x403e9881 in pthread_start_thread () from /lib/i686/libpthread.so.0
#4 0x403e9985 in pthread_start_thread_event () from /lib/i686/libpthread.so.0

_______________________________________________
Ntop mailing list
[EMAIL PROTECTED]
http://listgateway.unipi.it/mailman/listinfo/ntop
