Re: Anybody Use 2 or More CPU at Production Env. ( SMP )
On Thu, 18 Dec 2003 21:32:38 +0200 Vahric MUHTARYAN [EMAIL PROTECTED] wrote:

- Hi Everybody,
-
- Watching the freebsd-stable list, I saw that somebody had a problem
- with SMP support on 4.9. I know that some improvements are coming with
- 5.x, but this problem is very important: when somebody enables SMP
- support, the system starts to reset itself or crashes under high load?!

Did you manage to find out why your machines crashed? What are you doing
to load them heavily?

- I wonder, does anybody use SMP support without problems? Because SMP
- is a very important thing...

We run numerous 4.x SMP machines without any problems. Some of them are
heavily CPU loaded, sometimes too loaded. Others are heavily network
loaded. They don't crash, though. We have uptimes of more than one year
on several SMP machines running 4.6.2. I know that's not current, but
I've no reason to update them.

- I wonder too, what about HyperThreading?!

We have this enabled on some machines too. I've often debated the value
of it with colleagues for our particular circumstances. You would have
to elaborate on what you are doing to find out how useful it is for you.

- Second, how can I learn what the 1:1 and M:N thread libraries are? How
- do they work? How does SMP work on FreeBSD?

You might have a look at www.freebsd.org/smp, and also read the smp
newsgroup archives.

- Because I've been using Redhat for a long time and I don't have any
- problems with it, of course, under high load...

Is it doing the same task as your FreeBSD machines? If so, what's the
task?

- Thanks
- Vahric MUHTARYAN

Cheers, John.
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]
Re: How to find out which drive is down?
On Sun, 14 Dec 2003 14:27:23 -0600 Dan Nelson [EMAIL PROTECTED] wrote:

[snipped]

- Vinum ld will list the disks and their status. If you're not sure
- which physical disk is da1, you'll probably have to look at the
- jumper settings on each drive.

If you can see the serial numbers on the drives, how about...

bilko# camcontrol devlist
SEAGATE ST318406LC 010A  at scbus0 target 0 lun 0 (pass0,da0)
SEAGATE ST318406LC 010A  at scbus0 target 1 lun 0 (pass1,da1)
SEAGATE ST318406LC 010A  at scbus0 target 2 lun 0 (pass2,da2)

then...

bilko# camcontrol inquiry 0:0:0
pass0: SEAGATE ST318406LC 010A Fixed Direct Access SCSI-3 device
pass0: Serial Number 3FE294G67342CTJN
pass0: 160.000MB/s transfers (80.000MHz, offset 63, 16bit), Tagged Queueing Enabled

- Dan Nelson

Cheers, John.
Re: Bind query logging stops after a logrotate.
Hello Charlie,

On Mon, 8 Sep 2003 12:04, Charlie Schluting [EMAIL PROTECTED] sent:

- FBSD 5.1: Using Bind 9.2.2, and I have query logging turned on:
-
- logging {
-     channel querylog {
-         file "/var/log/query.log";
-         print-time yes;
-     };
-     category queries { querylog; };
- };
-
- After a logrotate, it stops logging completely. The permissions are
- correct, and all I have to do to make it start logging again is:
- rndc reload. Anyone heard of this? Any ideas?

You could use the built-in log rotation in Bind. Change your file line
to, for example:

    file "/var/log/query.log" versions 5 size 10m;

- TIA,
- -Charlie

Cheers, John.

Message sent via Global Webmail - http://www.global.net.uk/
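[Editor's note: for reference, a complete logging stanza using Bind's
built-in rotation might look like the sketch below; the path, version
count and size limit are only examples, not taken from the original
post.]

```
logging {
    channel querylog {
        // keep 5 rotated copies, roll when the file reaches 10 MB
        file "/var/log/query.log" versions 5 size 10m;
        print-time yes;
    };
    category queries { querylog; };
};
```

With this in place, named rotates the query log itself and an external
logrotate is no longer needed for that file.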
Problem building Perl5.8 port with threads
Hello All,

I'm having a really bad time trying to compile Perl 5.8 with thread
support on FreeBSD 5.1-RELEASE on i386. I've tried the port with
make -DWITH_THREADS, but it eventually bombs out with the following
error:

cd t (rm -f perl; /bin/ln -s ../miniperl perl)
./perl TEST base/*.t comp/*.t cmd/*.t run/*.t io/*.t op/*.t uni/*.t /dev/tty
Out of memory!
*** Error code 1 (ignored)
*** Error code 1 (ignored)
./miniperl -Ilib configpm configpm.tmp
Out of memory!
*** Error code 1

I've tried building it separately from the ports and it does the same
thing. I've tried the WITH_PERL_MALLOC option in the ports but am
getting nowhere. There is no memory shortage on this machine. I've tried
the ports that came with 5.1R and the latest ports, but both result in
the same error.

Does anyone have any advice on how to compile Perl 5.8 on FreeBSD 5.1R
with threads? I don't believe I'm doing anything wrong here, as I
compiled the same thing on Solaris 8 without problems.

Thanks, John.
Re: huge /var/log/exim files
On Wed, 6 Aug 2003 11:42:27 -0400 (EDT) Steve Hovey [EMAIL PROTECTED] wrote:

- I do
-
- cat /dev/null > mainlog
-
- etc

Or how about exicyclog? It's installed as part of exim. It is the more
subtle way.

[snipped]

- My question is this: If I rm mainlog and rejectlog, will they be built
- again, or is there a more subtle way to do this?
-
- Thanks,
-
- Kirk

Cheers, John.
df/nfs anomaly
Hello,

I have some NetApps mounted via NFS on our FreeBSD 4.7 machines. I added
another disk to one of the volumes on the NetApps (needed the I/O rather
than the space) and I find that the size returned by df has wrapped:

netapp5:/vol/vol3 -1019785316 519424028 608274304   -51% /foo
netapp7:/vol/vol3  1071313416 497031824 574281592    46% /bar

I can't see that any functionality is affected by this, but I thought
that NFS in FreeBSD was already using 64-bit sizes, so I guess this is a
limitation in df?

Help/useful comments welcome. Note the REMOVE in the email address.

Cheers, John.
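[Editor's note: the negative figure is what you get when a block count
larger than 2^31 is squeezed into a signed 32-bit field. A minimal
sketch of that reinterpretation, using a hypothetical true volume size
that reproduces the df output above:]

```python
def to_signed32(n):
    # Interpret an unsigned 32-bit value as a signed 32-bit integer,
    # which is what happens when a large 1K-block count lands in a
    # signed 32-bit statfs field.
    return n - 2**32 if n >= 2**31 else n

# Hypothetical true volume size in 1K blocks (~3.1 TB):
blocks = 3275181980
print(to_signed32(blocks))   # -1019785316, the figure df printed
```

Any count below 2^31 (like the second volume's 1071313416) passes
through unchanged, which is why only the enlarged volume shows the
wrap.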
Re: df/nfs anomaly
Grrr,

Apologies, I've just found an item on the lists that describes this.
It's to do with statfs rather than df per se. Apologies for the waste
of time.

John.
Re: Softupdates: df, du, sync and fsck [quite long]
On Sat, 28 Jun 2003 15:12:05 -0400 Bill Moran [EMAIL PROTECTED] wrote:

- Hmmm ... not good. A little more research might qualify this problem
- for a PR.

I was thinking that myself :-)

- Yikes! Is the machine still responsive? Sometimes you can put the load
- that high and still have a functional box.

It was way too sluggish. The machine responded eventually, but I
wouldn't want to run it like that in production (even though I did for
half an hour).

- I'm guessing by the way the conversation is going that you're able to
- grab one of these boxes and make some tweaks. Possibly try putting the
- spool directory on a dedicated partition and mounting it async? If the
- box shuts down dirty, you'll probably have to newfs the partition
- before you can use it again. At least make sure the spool partition is
- separate from your log partition; that should help to mitigate the
- problem (although you may already have done that).

I've ordered some more disks already. I'm going to split off the spool,
the logs and the anti-virus scanner (it creates a temporary file for
every message received). This will definitely help, I'm sure. Still, it
doesn't answer the problem with soft updates I've experienced.

- I was wondering if maybe the syncs were taking longer than the
- shutdown process was willing to wait.

It would certainly seem so, or perhaps it just can't sync for some
reason.

- It may save you some time to look in CVS under the files for the
- drivers for the SCSI subsystem, as well as the drivers for your
- specific cards, to see if any commit messages talk about fixing
- problems like this.

Will do.

- My experience with background fsck is that the machine is slow as hell
- while the background fsck is running. Whether or not this is better or
- worse than what you're experiencing with 4.7 is a question only you
- can answer.

I've played around with background fsck on other machines, but I'm not
sure it's right for these (very busy) machines.

- Well ... I'm really shooting in the dark with these suggestions, but
- hopefully there will be something useful.

Gratefully received...

- --
- Bill Moran

Cheers, John.
Re: Softupdates: df, du, sync and fsck [quite long]
Hello Bill,

On Fri, 27 Jun 2003 23:53:30 -0400 Bill Moran [EMAIL PROTECTED] wrote:

- I don't know what's wrong, but does unmounting and remounting the
- partition reclaim the lost space?

Alas, I can't umount the partition; my guess is because it is unable to
sync (nothing to do with open files, and there's no error message saying
the device is busy). The command just doesn't return after I've issued
it.

- If there's a LOT of inodes with problems, it could easily take a while
- to fix. Also, if you run fsck without specifying a filesystem to fix,
- it exhaustively checks all filesystems. So even if the problem is on
- /var, it might spend a long time checking /usr as well. You can work
- around this by calling fsck with the filesystem to check.

I don't think it's to do with inodes or block size, etc. There are about
2M inodes on /var. A manual fsck after a dirty shutdown on this
partition (ignoring the problem in hand) takes a couple of minutes.

- If these are production boxes, I'd recommend turning it off until you
- resolve the problem.

Indeed, I tried that last night on one machine and it put the load
through the roof (48).

- I don't know if this would qualify as advice, but since nobody else
- seems to have any suggestions, I figured I'd throw my thoughts in.
- Are you using ATA or SCSI drives?

SCSI.

- Does issuing a manual sync once you've stopped the spooling process
- help any?

No. I'd already tried numerous syncs, and of course a clean shutdown
tries that too.

- Are these all identical mobos ... possibly a BIOS update available?

I haven't looked for an update, but I think they're all identical.

- These aren't IBM ATA drives, are they? I had one of those give me
- grief for months (if you look in the archives, you should be able to
- find details on which drives caused problems).

Alas not! They're straightforward Seagates, which in other machines we
use (under much lighter load) don't have this problem.

- Have you tried updating one of the machines to 4.8 to see if the
- problem has been fixed?

I haven't tried that yet but will do so. I'm also going to test a 5.1R
machine; perhaps the background fsck will help when I eventually come to
reboot.

- Like I said, not good advice, just some ideas for you.

All advice and ideas are welcome.

- Bill Moran

Cheers, John.
Softupdates: df, du, sync and fsck [quite long]
Hello,

I've a couple of questions about soft updates. I've Googled heavily on
this but not really found a satisfactory answer.

The story: I'm running numerous FreeBSD 4.7 SMP machines as primary MX
machines. The mail is not stored on the FreeBSD machines but on NetApps
via NFS. However, the mail is temporarily spooled on the FreeBSD
machines during normal MTA handling and while being passed to an
anti-virus scanner. I have one large /var partition on each machine
where basically all the work and temporary/transient files for the MTA
and AV scanner live. These machines are heavily utilised, running quite
hot with a load average of anything from 2 to 8. Many thousands of
temporary files are thus created and deleted a minute. I have no problem
with this, as nearly all email is delivered in under one minute
regardless.

I notice that after a while the amount of free space shown by df varies
considerably from a du on /var. I'm aware of why this happens with soft
updates, but that's not the whole story. If I turn off incoming email on
a machine, the space does not seem to sync back to what it should be. No
matter how long I leave the MTA off, the space is simply not returned,
and df/du show differences of about 5:1. Nothing else is writing to or
holding open files on that partition (I even turned off syslog, cron,
etc. and checked using lsof). In comparison, if, for example, on my
normal desktop machine I create a 500MB file and then delete it, the
space is returned to me shortly afterwards when I run df.

The only way I've been able to recover this space has been to reboot the
machine. Which brings me to the next problem... As an example, here is a
snippet from the console from when I rebooted an affected machine:

boot() called on cpu#2
Waiting (max 60 seconds) for system process `vnlru' to stop...stopped
Waiting (max 60 seconds) for system process `bufdaemon' to stop...stopped
Waiting (max 60 seconds) for system process `syncer' to stop...timed out
syncing disks... 22 22 22 22 22 22 22 22 22 22 22 22 22 22 22 22 22 22 22 22
giving up on 22 buffers
Uptime: 27d23h1m27s
Rebooting...

As you can see, the file system is unable to sync. When the machine
reboots, it literally takes hours to fsck the /var partition (only about
15GB), and the fsck output is full of messages like this:

UNEXPECTED SOFT UPDATE INCONSISTENCY

Now, is there a problem here with soft updates losing track of what is
going on on this busy partition? It would appear so, as quietening the
machine does not lead to a proper sync. Secondly, why does the fsck take
such an inordinate amount of time for a smallish partition?

I really like the performance benefits of soft updates, but it seems
that I'm going to have to turn them off on /var because of the problems
that eventually occur. If anyone has some advice I'd be grateful.

Cheers, John.
Directory hashing question
Hello,

I have a question about best practices for directory hashing. I have
about 80,000 zone files, which are named after their domains and which I
generate using a Perl script. I'm looking for the best hashing scheme to
reduce the start-up time for bind.

I've tried different hashings. Using example.foo as an example (:-)), if
I take the first and second letters of the domain and hash it like this:

/var/named/e/x/example.foo

I still end up with (in a few cases) more than 3000 zones in one
directory. If I hash using the first+second and third+fourth letters
like this:

/var/named/ex/am/example.foo

I end up with far fewer zones in the individual directories, but bind's
start-up time is much longer. I can live with the first hashing if I
need to, but I'm seeking advice and suggestions on what others think (or
know) would be better.

Cheers, John Ekins.

To Unsubscribe: send mail to [EMAIL PROTECTED] with
unsubscribe freebsd-questions in the body of the message
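[Editor's note: the first scheme described above can be sketched in a
few lines of Python; the function name is hypothetical, as the original
poster uses a Perl script that is not shown.]

```python
def zone_path(domain, base="/var/named"):
    # First scheme from the post: one directory level for the first
    # character of the zone name and one for the second, then the
    # zone file named after the domain itself.
    return "/".join([base, domain[0], domain[1], domain])

print(zone_path("example.foo"))  # /var/named/e/x/example.foo
```

The second scheme (first+second and third+fourth characters) is the
same idea with two-character path components, trading deeper fan-out
for more directories to traverse at start-up.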
netstat counters wrapping?
Hello,

I'm guessing that the counters in netstat wrap; can anyone confirm that
this is the case? On a particular machine, here is a sample:

jre@web3:~ 504] netstat -inb
Name Mtu  Network       Address               Ipkts Ierrs     Ibytes     Opkts Oerrs     Obytes Coll
fxp0 1500 Link#1        00:20:ed:1a:11:3a 967502927     0 4198415173 930726328     1 2948161691    0
fxp0 1500 172.16.100/24 172.16.100.185    967046225     - 1695853036 930713017     - 2802327179    -

I know for a fact that there's been more than about 3 GB of traffic
output from this machine, and the counters haven't been reset manually.

Thanks, John.
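[Editor's note: the byte counters above fit in 32 bits, so they wrap at
2^32 (4294967296). A minimal sketch of reconstructing a true total from
a wrapped counter, assuming the wrap count is tracked externally by
polling more often than the counter can wrap:]

```python
WRAP = 2**32  # a 32-bit unsigned counter rolls over at this value

def true_total(counter, wraps):
    # Reconstruct the real byte count from a wrapped 32-bit counter,
    # given how many times it has rolled over.
    return wraps * WRAP + counter

# e.g. if Obytes reads 2948161691 after wrapping once:
print(true_total(2948161691, 1))  # 7243128987 (~6.7 GB)
```

This is essentially what monitoring tools like MRTG do: sample the
counter frequently and accumulate the deltas, so a single wrap between
samples can be detected and corrected.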