Re: Problem with FreeBSD 4.8, ipf, ipfnat and forwarding for pcAnywhere
I am using telnet just to see if the port accepts connections. That test works fine internally. We are not running a telnet server, and we are telnetting to the pcAnywhere port, not the telnet port. :)

----- Original Message -----
From: JJB [EMAIL PROTECTED]
To: adp [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Friday, May 07, 2004 7:47 AM
Subject: RE: Problem with FreeBSD 4.8, ipf, ipfnat and forwarding for pcAnywhere

For your telnet test to the pcAnywhere ports on the target LAN PC to work, you have to tell the target to listen on those ports. I believe pcAnywhere is one of those applications that embeds the IP addresses of the remote and host machines in the packet data, which the application then uses to establish a bi-directional packet exchange. This means that pcAnywhere will not work through a NATed IP address. This is a common design flaw in many third-party applications, mostly seen in games and MS Windows NetMeeting. pcAnywhere only works over the public Internet between two MS Windows boxes that use public routable IP addresses. It will also work between two PCs on the LAN, because NATing only occurs as a packet leaves the LAN headed for the public Internet. If you have a range of static public IP addresses assigned to you by your ISP, you could assign one of those addresses to the LAN PC you want pcAnywhere to work on, and you should be good to go.

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of adp
Sent: Friday, May 07, 2004 12:37 AM
To: [EMAIL PROTECTED]
Subject: Problem with FreeBSD 4.8, ipf, ipfnat and forwarding for pcAnywhere

This shouldn't be that hard, but I can't get it working. I have a FreeBSD firewall with three NICs (Internet, LAN, DMZ). I have bridging enabled between the Internet and DMZ interfaces. I now have an internal computer (LAN) that needs to be accessible via pcAnywhere. I can telnet to the pcAnywhere ports on the internal computer fine from the firewall or the LAN. So that works.
However, when I configure ipnat to forward my pcAnywhere ports, a telnet from the Internet just stalls. My ipnat configuration:

# cat /etc/ipnat.conf
(xl0 = internet, xl1 = lan, xl2 = dmz)
# pcAnywhere
# normal nat for office disabled - this is all i have in ipnat.conf
rdr xl0 public-ip/32 port 5631 -> 192.168.99.9 port 5631
rdr xl0 public-ip/32 port 5632 -> 192.168.99.9 port 5632

And I am allowing access in via ipf:

pass in quick proto tcp from any to public-ip port = 5631 group 200
pass in quick proto udp from any to public-ip port = 5631 group 200
pass in quick proto tcp from any to public-ip port = 5632 group 200
pass in quick proto udp from any to public-ip port = 5632 group 200

(If I take these out I see the ipmon block messages, but with these they go away, so I don't think it's ipf.) Am I missing something here? This should work! A tcpdump. I am remote (remote-client):

% telnet public-ip 5631
Trying public-ip...
(just sits there)

On the FreeBSD box:

# tcpdump -n -i xl0 port 5631
tcpdump: listening on xl0
23:26:41.772801 remote-client.3755 > public-ip.5631: S 2174885259:2174885259(0) win 57344 <mss 1460,nop,wscale 0,nop,nop,timestamp 99416198 0> (DF) [tos 0x10]
23:26:44.772018 remote-client.3755 > public-ip.5631: S 2174885259:2174885259(0) win 57344 <mss 1460,nop,wscale 0,nop,nop,timestamp 99416498 0> (DF) [tos 0x10]
23:26:48.013346 remote-client.3755 > public-ip.5631: S 2174885259:2174885259(0) win 57344 <mss 1460,nop,wscale 0,nop,nop,timestamp 99416818 0> (DF) [tos 0x10]
23:26:51.230241 remote-client.3755 > public-ip.5631: S 2174885259:2174885259(0) win 57344 <mss 1460> (DF) [tos 0x10]
23:26:54.429267 remote-client.3755 > public-ip.5631: S 2174885259:2174885259(0) win 57344 <mss 1460> (DF) [tos 0x10]
23:26:57.596288 remote-client.3755 > public-ip.5631: S 2174885259:2174885259(0) win 57344 <mss 1460> (DF) [tos 0x10]
23:27:03.809921 remote-client.3755 > public-ip.5631: S 2174885259:2174885259(0) win 57344 <mss 1460> (DF) [tos 0x10]
23:27:16.050057 remote-client.3755 > public-ip.5631:
S 2174885259:2174885259(0) win 57344 <mss 1460> (DF) [tos 0x10]
^C
48 packets received by filter
0 packets dropped by kernel

Oh, and again, I do have bridging enabled between Internet and DMZ. My bridge script:

#!/bin/sh
echo -n Enabling bridging:
if sysctl -w net.link.ether.bridge=1 > /dev/null 2>&1; then
    echo activated.
else
    echo failed.
fi
echo -n Enabling bridging between xl0 and xl2 interfaces:
if sysctl -w net.link.ether.bridge_cfg=xl0,xl2 > /dev/null 2>&1; then
    echo activated.
else
    echo failed.
fi

___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]
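[Editor's sketch, not part of the original thread.] For a setup like the one above, a quick sanity check is to confirm that the rdr rules actually loaded and to watch whether translated packets ever reach the inside interface. A rough outline using IPFilter's standard tools and the interface names from the post:

```shell
# Reload the NAT rules, flushing old rules and active mappings first
# (-C clears the rule list, -F flushes the translation table)
ipnat -CF -f /etc/ipnat.conf

# List the active rules -- both rdr entries should appear here
ipnat -l

# While a remote telnet to port 5631 runs, watch the LAN interface.
# If no SYN for 192.168.99.9 appears on xl1, the rdr rule is not
# matching the inbound traffic on xl0.
tcpdump -n -i xl1 port 5631
```

If the SYNs do show up on xl1 but no SYN/ACK comes back, the problem is on the internal host or its return route rather than in the rdr rule itself.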
Problem with FreeBSD 4.8, ipf, ipfnat and forwarding for pcAnywhere
This shouldn't be that hard, but I can't get it working. I have a FreeBSD firewall with three NICs (Internet, LAN, DMZ). I have bridging enabled between the Internet and DMZ interfaces. I now have an internal computer (LAN) that needs to be accessible via pcAnywhere. I can telnet to the pcAnywhere ports on the internal computer fine from the firewall or the LAN. So that works. However, when I configure ipnat to forward my pcAnywhere ports, a telnet from the Internet just stalls. My ipnat configuration:

# cat /etc/ipnat.conf
(xl0 = internet, xl1 = lan, xl2 = dmz)
# pcAnywhere
# normal nat for office disabled - this is all i have in ipnat.conf
rdr xl0 public-ip/32 port 5631 -> 192.168.99.9 port 5631
rdr xl0 public-ip/32 port 5632 -> 192.168.99.9 port 5632

And I am allowing access in via ipf:

pass in quick proto tcp from any to public-ip port = 5631 group 200
pass in quick proto udp from any to public-ip port = 5631 group 200
pass in quick proto tcp from any to public-ip port = 5632 group 200
pass in quick proto udp from any to public-ip port = 5632 group 200

(If I take these out I see the ipmon block messages, but with these they go away, so I don't think it's ipf.) Am I missing something here? This should work! A tcpdump. I am remote (remote-client):

% telnet public-ip 5631
Trying public-ip...
(just sits there)

On the FreeBSD box:

# tcpdump -n -i xl0 port 5631
tcpdump: listening on xl0
23:26:41.772801 remote-client.3755 > public-ip.5631: S 2174885259:2174885259(0) win 57344 <mss 1460,nop,wscale 0,nop,nop,timestamp 99416198 0> (DF) [tos 0x10]
23:26:44.772018 remote-client.3755 > public-ip.5631: S 2174885259:2174885259(0) win 57344 <mss 1460,nop,wscale 0,nop,nop,timestamp 99416498 0> (DF) [tos 0x10]
23:26:48.013346 remote-client.3755 > public-ip.5631: S 2174885259:2174885259(0) win 57344 <mss 1460,nop,wscale 0,nop,nop,timestamp 99416818 0> (DF) [tos 0x10]
23:26:51.230241 remote-client.3755 > public-ip.5631: S 2174885259:2174885259(0) win 57344 <mss 1460> (DF) [tos 0x10]
23:26:54.429267 remote-client.3755 > public-ip.5631: S 2174885259:2174885259(0) win 57344 <mss 1460> (DF) [tos 0x10]
23:26:57.596288 remote-client.3755 > public-ip.5631: S 2174885259:2174885259(0) win 57344 <mss 1460> (DF) [tos 0x10]
23:27:03.809921 remote-client.3755 > public-ip.5631: S 2174885259:2174885259(0) win 57344 <mss 1460> (DF) [tos 0x10]
23:27:16.050057 remote-client.3755 > public-ip.5631: S 2174885259:2174885259(0) win 57344 <mss 1460> (DF) [tos 0x10]
^C
48 packets received by filter
0 packets dropped by kernel

Oh, and again, I do have bridging enabled between Internet and DMZ. My bridge script:

#!/bin/sh
echo -n Enabling bridging:
if sysctl -w net.link.ether.bridge=1 > /dev/null 2>&1; then
    echo activated.
else
    echo failed.
fi
echo -n Enabling bridging between xl0 and xl2 interfaces:
if sysctl -w net.link.ether.bridge_cfg=xl0,xl2 > /dev/null 2>&1; then
    echo activated.
else
    echo failed.
fi
bind 8 slow inside freebsd jail
I am running BIND 8 inside a FreeBSD 4.9 jail. For some reason responses from our internal DNS servers (all of which run in jails) are very slow when resolving external hostnames. Here are some little factoids:

1. Resolution of an internal domain works great. It takes less than 1 second.
2. Resolution of an external domain is very slow or times out.
3. Resolution of an external domain that is in the DNS server's cache is fast.

So the problem is in resolving external domains for the first time. I think this is related to our FreeBSD jail setup in some way, because frankly I can't figure out anything else. We are using forwarders. If I dig with them the response is 1 second. If I just dig for my root hints from our internal DNS servers it takes up to 20 seconds:

# date; dig @ns2; date
Tue May 4 10:50:18 CDT 2004
; DiG 8.3 @ns2
; (1 server found)
;; res options: init recurs defnam dnsrch
;; got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27736
;; flags: qr rd ra; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 13
;; QUERY SECTION:
;;      ., type = NS, class = IN
;; ANSWER SECTION:
.       4d20h36m13s IN NS  L.ROOT-SERVERS.NET.
.       4d20h36m13s IN NS  M.ROOT-SERVERS.NET.
.       4d20h36m13s IN NS  A.ROOT-SERVERS.NET.
...
...
;; Total query time: 6 msec
;; FROM: ns.domain.com to SERVER: 192.168.42.78
;; WHEN: Tue May 4 10:50:38 2004
;; MSG SIZE sent: 17 rcvd: 436
Tue May 4 10:50:38 CDT 2004

Has anyone seen this before? Our DNS servers ran fine, but then we went with FreeBSD jails and our response time seems to have gone way, way down.
The server hosting the DNS server has no real firewall:

# ipfw l
00100 allow ip from any to any via lo0
00200 deny ip from any to 127.0.0.0/8
00300 deny ip from 127.0.0.0/8 to any
65000 allow ip from any to any
65535 deny ip from any to any

And isn't heavily loaded:

# uptime
10:53AM up 13 days, 12:02, 1 user, load averages: 0.19, 0.32, 0.32

Network buffers seem fine:

# netstat -m
32/544/18304 mbufs in use (current/peak/max):
        32 mbufs allocated to data
26/492/4576 mbuf clusters in use (current/peak/max)
1120 Kbytes allocated to network (8% of mb_map in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines

My root hints file was just refreshed. My named.conf options {}:

options {
        directory "/etc/namedb";
        listen-on { 192.168.42.78; };
        forward first;
        forwarders { aa.bb.cc.dd; ee.ff.gg.hh; };
        allow-transfer { 127.0.0.1; 192.168.42.0/24; };
        allow-recursion { 127.0.0.1; 192.168.42.0/24; };
        //fetch-glue no;
        // we have a firewall between us and the Internet, so let's
        // go ahead and define our query source port
        query-source address 192.168.42.78 port 53;
        //named-xfer /usr/libexec/named-xfer;
};
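[Editor's sketch, not part of the original thread.] One way to narrow down where the 20 seconds go is to watch whether the jailed named's queries to the outside world actually leave the box and whether the replies make it back to the jail IP. A rough outline, using the jail address from the named.conf above (the lookup name is only an example):

```shell
# On the jail host, capture the jail's DNS traffic while a cold
# (uncached) external lookup runs in another terminal:
tcpdump -n host 192.168.42.78 and port 53

# In the other terminal, force the lookup through the jailed server:
dig @192.168.42.78 www.example.com
```

If outbound queries appear but no responses return, the suspect is the path back in (a firewall rule, or the fixed "query-source ... port 53" interacting badly with something upstream); if no outbound queries appear at all, the jail itself cannot reach the root servers or forwarders.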
Abnormal network errors?
I am working on some network performance issues. One of the first things I inspected was netstat -s. This is for a FreeBSD 4.9-REL NFS and MySQL server that has been up for 33 days. It seems to me that I have a lot of errors, specifically with UDP (NFS related, I would guess). The server is a dual P3 with 512MB of RAM, software RAID-1 (one of the Promise hybrid hardware/kernel RAID cards), and Fast Ethernet. We are using just a consumer'ish 10/100 switch. (Hope to rectify that soon enough.) On this server I'm thinking I need two things:

1. More sockets available.
2. Larger sockbufs for send and recv.

Is this an accurate assessment? What is "2432320 packets for unknown/unsupported protocol"? What specifically does this mean? (In other words, what should I do to resolve this?) What about "921363 calls to icmp_error"? Under tcp I have "481930 embryonic connections dropped". I assume that means I don't have enough sockets available for when this server gets loaded. Correct? I am including 'ifconfig', 'netstat -m', 'netstat -s', 'nfsstat -s', 'sysctl net' and 'sysctl kern.ipc' below. Is there a way to see the peak number of sockets I have had open? It doesn't look like it.

P.S. I am Cc'ing performance@ since this is both a general administration and performance-related question.

# ifconfig rl0
rl0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        inet 192.168.42.70 netmask 0xffffffe0 broadcast 192.168.42.95
        ..multiple aliases..
        ether 00:50:ba:60:4d:e5
        media: Ethernet autoselect (100baseTX <full-duplex>)
        status: active

# netstat -m
34/736/10112 mbufs in use (current/peak/max):
        34 mbufs allocated to data
0/504/2528 mbuf clusters in use (current/peak/max)
1192 Kbytes allocated to network (15% of mb_map in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines

# netstat -s
tcp:
        332918288 packets sent
                279842528 data packets (1533896593 bytes)
                184127 data packets (216536937 bytes) retransmitted
                2 resends initiated by MTU discovery
                41010478 ack-only packets (14887613 delayed)
                0 URG only packets
                12197 window probe packets
                4899618 window update packets
                6969381 control packets
        298154198 packets received
                237883532 acks (for 1547823911 bytes)
                5548105 duplicate acks
                0 acks for unsent data
                194397219 packets (612913715 bytes) received in-sequence
                175092 completely duplicate packets (40526143 bytes)
                386 old duplicate packets
                7879 packets with some dup. data (1540154 bytes duped)
                1113196 out-of-order packets (1061944624 bytes)
                5645 packets (6794550 bytes) of data after window
                209 window probes
                3540787 window update packets
                217721 packets received after close
                2925 discarded for bad checksums
                0 discarded for bad header offset fields
                0 discarded because packet too short
        1450821 connection requests
        5141899 connection accepts
        1739 bad connection attempts
        0 listen queue overflows
        5676325 connections established (including accepts)
        6592771 connections closed (including 153553 drops)
                2325245 connections updated cached RTT on close
                2325245 connections updated cached RTT variance on close
                2124129 connections updated cached ssthresh on close
        481930 embryonic connections dropped
        232282251 segments updated rtt (of 188115355 attempts)
        2843613 retransmit timeouts
                829 connections dropped by rexmit timeout
        25922 persist timeouts
                0 connections dropped by persist timeout
        528 keepalive timeouts
                111 keepalive probes sent
                417 connections dropped by keepalive
        10677309 correct ACK header predictions
        40631044 correct data packet header predictions
        5143661 syncache entries added
                10004 retransmitted
                6951 dupsyn
                3 dropped
                5141899 completed
                0 bucket overflow
                0 cache overflow
                382 reset
                1327 stale
                0 aborted
                0 badack
                53 unreach
                0 zone failures
        0 cookies sent
        0 cookies received
udp:
        272987897 datagrams received
        0 with incomplete header
        0 with bad data length field
        870 with bad checksum
        682 with no checksum
        921363 dropped due to no socket
        0 broadcast/multicast datagrams dropped due to no socket
        19976574 dropped due to full socket buffers
        0 not for hashed pcb
        252089090 delivered
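[Editor's sketch, not part of the original thread.] For reference, the two changes the post asks about would, on 4.x, roughly take the form of a boot-time tunable plus sysctl settings like these. The values are illustrative assumptions only, not recommendations; tune them against the netstat -m peak figures:

```shell
# /boot/loader.conf -- read at boot; the mbuf cluster pool cannot be
# raised at runtime on 4.x (it can also be set as NMBCLUSTERS in the
# kernel configuration file). Value is an example only.
kern.ipc.nmbclusters="32768"

# /etc/sysctl.conf -- larger socket buffers; again, example values.
kern.ipc.maxsockbuf=1048576
net.inet.udp.recvspace=65536
```

A larger UDP receive buffer is the setting most directly aimed at the "dropped due to full socket buffers" counter in the udp: section above.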
Re: Abnormal network errors?
----- Original Message -----
From: Charles Swiger [EMAIL PROTECTED]

> On May 5, 2004, at 2:27 PM, adp wrote:
>> On this server I'm thinking I need two things:
>> 1. More sockets available.
>> 2. Larger sockbufs for send and recv.
>> Is this an accurate assessment?
>
> Given the application of this system, you might want to up the value of
> kern.ipc.nmbclusters by a factor of four or so (it's NMBCLUSTERS in the
> kernel config file). However, it's not essential -- your netstat -m is
> OK, and your TCP send and receive windows are reasonably sized as-is by
> default.

Several problems. First, we are hosting a DNS server on this box. The DNS server resolves domains we are authoritative for very fast, or anything in its cache very fast, but anything else is SLOW or times out. Also, our www server (another box) is responding slowly in general (4-6 seconds).

>> What is "2432320 packets for unknown/unsupported protocol"? What
>> specifically does this mean? (In other words, what should I do to
>> resolve this?)
>
> It means machines are sending non-IP traffic on your network, which is
> normal if you have Windows protocols (NetBEUI, SPX/IPX) or Macs
> (AppleTalk) around. Or chatty network devices like some printers.

What is 802.1d? I am getting a lot of this:

16:21:16.788617 802.1d config 8000.00:04:27:d1:cb:d3.8019 root 8000.00:03:6c:51:a2:a7 pathcost 8 age 2 max 20 hello 2 fdelay 15

And this:

16:21:15.424508 CDP v2, ttl=180s DevID 'Six2' Addr (1): IPv4 10.2.254.62 PortID 'FastEthernet0/12' CAP 0x0a[|cdp]

I'm at a colo.

> See /usr/include/net/ethernet.h for an idea, or maybe "tcpdump not ip"
> might give some idea of what's going by.
>
>> What about "921363 calls to icmp_error"?
>
> ICMP messages like responding to a ping, or people sending traffic with
> RFC-1918 unroutable addresses (gives dest unreachable)...

That's weird.
I tried 'tcpdump icmp' and see a few errors right off the bat:

# tcpdump -n icmp
tcpdump: listening on rl0
16:27:46.633262 192.168.42.71 > 192.168.42.76: icmp: 192.168.42.71 udp port 1249 unreachable
16:27:53.639237 192.168.42.71 > 192.168.42.76: icmp: 192.168.42.71 udp port 1280 unreachable
16:28:02.579417 192.168.42.71 > 192.168.42.76: icmp: 192.168.42.71 udp port 1204 unreachable
16:28:07.716527 192.168.42.71 > 192.168.42.76: icmp: 192.168.42.71 udp port 1510 unreachable
16:28:08.589910 192.168.42.71 > 192.168.42.76: icmp: 192.168.42.71 udp port 1218 unreachable
16:28:15.668697 192.168.42.71 > 192.168.42.76: icmp: 192.168.42.71 udp port 1327 unreachable
16:28:33.581427 192.168.42.71 > 192.168.42.76: icmp: 192.168.42.71 udp port 1355 unreachable

Hmm. I am running a DNS server in a jail on the NFS server. We have been getting very slow response times from it. Seems related.

>> Under tcp I have "481930 embryonic connections dropped". I assume that
>> means I don't have enough sockets available for when this server gets
>> loaded. Correct?
>
> More likely, these are someone doing a port scan and leaving half-open
> connections lying around to get cleaned up.

We are behind a managed NetScreen firewall. I can't see how anyone is port-scanning us, unless they are just scanning the few ports we have open to the world.

> It might be helpful if you gave us some idea as to what the performance
> problem you were seeing was? Is NFS access slow, or some such? Are you
> seeing errors or collisions in netstat -i or in whatever statistics
> your switch keeps per port? The following areas struck me as being
> relevant:
>
>> # ifconfig rl0
>
> First, consider upgrading to an fxp or dc-based NIC.

Noted.

>> udp:
>> 272987897 datagrams received
>> [ ... ]
>> 19976574 dropped due to full socket buffers
>
> This is high enough to represent a concern, agreed.

How do I fix this then? I assume I don't have enough sockets available. Is there a way to see where I am peaking?
I'm thinking adding more memory will increase the system-set defaults (also read 'man tuning').

>> ip:
>> 578001924 total packets received
>> [ ... ]
>> 4899083 fragments received
>> 4 fragments dropped (dup or out of space)
>> 750 fragments dropped after timeout
>> 842689 packets reassembled ok
>> [ ... ]
>> 609745425 packets sent from this host
>> 1914687 output datagrams fragmented
>> 10496350 fragments created
>
> Second, you're fragmenting a relatively large number of packets going
> by; you ought to see what's going on with your MTU and pMTU discovery.
> I suppose if you're using large UDP datagrams with NFS, that might be
> it. [The machines I've got around with comparable traffic volume might
> have 400 frags received, and 10 transmitted, or some such.]

Could this be related to my switch or anything else?
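[Editor's sketch, not part of the original thread.] On the fragmentation point: with NFS over UDP, an 8 KB rsize/wsize necessarily splits every large read or write into several IP fragments on a 1500-byte MTU, which matches the counters quoted above. One commonly suggested experiment is mounting over TCP instead, which avoids IP fragmentation entirely. Server name and paths here are examples, not from the thread:

```shell
# Hypothetical mount: -T requests TCP transport, -3 requests NFSv3,
# -i makes the mount interruptible. TCP segments the data stream at
# the MSS, so no IP-level fragmentation occurs.
mount_nfs -T -3 -i nfsserver:/export/data /mnt/data
```

Whether UDP or TCP performs better depends on the network; on a clean local switch UDP is often fine, but the fragment-drop counters above are one reason to try TCP.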
Re: Abnormal network errors?
As an added note, I am seeing real issues with the NFS stats on the server:

# nfsstat -s
Server Info:
  Getattr   Setattr    Lookup  Readlink      Read     Write    Create    Remove
502201060  80441153       281   4569327   2420840    270703    462872
   Rename      Link   Symlink     Mkdir     Rmdir   Readdir  RdirPlus    Access
   365160    191938         0      2009       453   1811510         0 115333847
    Mknod    Fsstat    Fsinfo  PathConf    Commit    GLease    Vacate     Evict
        0   4878368       234         0    409440         0         0         0
Server Ret-Failed
         76480937
Server Faults
                0
Server Cache Stats:
   Inprog      Idem  Non-idem    Misses
   351435     97871      1579 211262178
Server Lease Stats:
   Leases     PeakL   GLeases
        0         0         0
Server Write Gathering:
 WriteOps  WriteRPC   Opsaved
  2420836   2420840         4
NFS server fail-over - how do you do it?
I am running a FreeBSD 4.9-REL NFS server. Once every several hours our main NFS server replicates everything to a backup FreeBSD NFS server. We are okay with the gap in time between replications. What we aren't sure about is how to automate the fail-over from the primary to the secondary NFS server. This is for a web cluster. Each client mounts several directories from the NFS server. Let's say that our primary NFS server dies and just goes away. What then? Are you periodically doing a mount or a file look-up on a mounted filesystem to check whether your NFS server died? If so, are you just unmounting and remounting everything using the backup NFS server? Just curious how this problem is being solved.
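[Editor's sketch, not part of the original thread.] One rough shape for the periodic-check approach the question describes. All names here are hypothetical, and the probe is deliberately a thin wrapper you would replace or extend for a real deployment:

```shell
#!/bin/sh
# Decide which NFS server a client should mount: the primary if it
# answers, otherwise the backup.

# probe SERVER -- succeeds if the NFS service on SERVER is reachable.
# Here it asks the portmapper via UDP; a stricter check might also
# test an actual lookup on a mounted path.
probe() {
    rpcinfo -u "$1" nfs >/dev/null 2>&1
}

# choose_server PRIMARY BACKUP -- prints the server to use.
choose_server() {
    primary=$1
    backup=$2
    if probe "$primary"; then
        echo "$primary"
    else
        echo "$backup"
    fi
}

# A cron job could then remount from whatever choose_server prints,
# e.g.: mount -o bg,intr,soft "$(choose_server nfs1 nfs2):/export" /mnt
```

The remount step is the hard part in practice: as noted elsewhere in this thread, clients blocked on a dead hard mount may not release it cleanly, so soft/intr mounts (or an automounter) pair naturally with a script like this.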
NFS server problems?
I occasionally see error messages on NFS clients such as:

nfs server files:/rmt/mnt: not responding
pmap_collect: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC
got bad cookie vp 0xdc92a9c0 bp 0xcc578fac
got bad cookie vp 0xdc92a9c0 bp 0xcc578e60

NFS actually runs fine. We haven't had any real problems. On the client I do see some errors:

# nfsstat -c
Client Info:
...
Rpc Info:
 TimedOut   Invalid X Replies   Retries  Requests
       26         0       162            508405121

My thoughts are a bad cable or a bad switch on the path to the NFS server, but I'm curious what others think. This happens on multiple NFS clients. (Well, I only found the PMAP_SHPGPERPROC entry on one.) The NFS server is FreeBSD 4.9-STABLE and the clients are FreeBSD 4.10.
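[Editor's sketch, not part of the original thread.] The PMAP_SHPGPERPROC message points at a compile-time kernel setting on 4.x. Raising it means rebuilding the kernel; the value and the config name below are examples only:

```shell
# In the kernel configuration file (e.g. /usr/src/sys/i386/conf/MYKERNEL),
# add or raise the option -- 400 here is illustrative:
#
#   options   PMAP_SHPGPERPROC=400
#
# then rebuild and install the kernel and reboot:
cd /usr/src
make buildkernel KERNCONF=MYKERNEL
make installkernel KERNCONF=MYKERNEL
```

The "got bad cookie" and "not responding" messages are separate issues (NFS directory-cookie and reachability complaints) and would not be affected by this change.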
Can I specify the resolver timeout?
We have two internal DNS servers in a FreeBSD web cluster. If the first DNS server fails, then after a timeout period the client's resolver will try the second DNS server. This works fine, but it is a bit slow. It looks like the timeout takes 10 to 15 seconds on FreeBSD 4.9-STABLE. Is there a way to override this timeout value? I know it is possible on other UNIX systems, such as AIX. Basically, we want to get a response within 3 seconds or the resolver should try the second DNS server.
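[Editor's sketch, not part of the original thread.] Where the resolver supports it (support varies across FreeBSD branches, as the replies below this message discuss), the knob lives in resolv.conf. Addresses here are examples:

```shell
# /etc/resolv.conf -- sketch; "options timeout"/"attempts" are honored
# only by resolvers that implement them
nameserver 192.168.42.78
nameserver 192.168.42.79
options timeout:3 attempts:2
```

With these settings the resolver would wait roughly 3 seconds per server per round before moving on, instead of the default retransmission schedule.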
Re: NFS server fail-over - how do you do it?
One of my big problems right now is that if our primary NFS server goes down, then everything using that NFS mount locks up. If I change to the mounted filesystem on the client, it stalls:

# pwd
/root
# cd /nfs-mount-dir
[locks]

If I try to reboot, the reboot fails as well, since FreeBSD can't unmount the filesystem!? How do I stop this from happening? I am using this to mount NFS filesystems:

# mount -o bg,intr,soft ...

----- Original Message -----
From: adp [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, May 30, 2004 2:43 AM
Subject: NFS server fail-over - how do you do it?

> I am running a FreeBSD 4.9-REL NFS server. Once every several hours our
> main NFS server replicates everything to a backup FreeBSD NFS server.
> We are okay with the gap in time between replications. What we aren't
> sure about is how to automate the fail-over from the primary to the
> secondary NFS server. This is for a web cluster. Each client mounts
> several directories from the NFS server. Let's say that our primary NFS
> server dies and just goes away. What then? Are you periodically doing a
> mount or a file look-up on a mounted filesystem to check whether your
> NFS server died? If so, are you just unmounting and remounting
> everything using the backup NFS server? Just curious how this problem
> is being solved.
Re: Can I specify the resolver timeout?
I did in fact look at the manpage and did not find that option. I just looked again and I still can't find it.

# man resolv.conf | grep -i timeout
# uname -r
4.10-BETA

Are you running FreeBSD 5.x perhaps? If the option is available and my manpage is wrong, then that's fine. Just let me know. :)

----- Original Message -----
From: Giorgos Keramidas [EMAIL PROTECTED]
To: adp [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Sunday, May 30, 2004 10:12 PM
Subject: Re: Can I specify the resolver timeout?

> On 2004-05-30 12:04, adp [EMAIL PROTECTED] wrote:
>> Is there a way to override this timeout value? I know it is possible
>> on other UNIX systems, such as AIX. Basically, we want to get a
>> response within 3 seconds or the resolver should try the second DNS
>> server.
>
> Look at resolv.conf(5). More specifically at the "options timeout"
> option.
>
> - Giorgos
Re: NFS server fail-over - how do you do it?
We can live with the chance that a file write might fail, as long as we can switch over to another NFS server if the primary fails. So amd will help us avoid the hung-client issue? I will have to take a look. That is the worst thing of all when it comes to a failed NFS server: you can't even remotely reboot the NFS client! Someone has to power-cycle the damn thing. That's bad.

> On Sun, May 30, 2004 at 02:43:37AM -0500, adp wrote:
>> I am running a FreeBSD 4.9-REL NFS server. Once every several hours
>> our main NFS server replicates everything to a backup FreeBSD NFS
>> server. We are okay with the gap in time between replications. What we
>> aren't sure about is how to automate the fail-over from the primary to
>> the secondary NFS server. This is for a web cluster. Each client
>> mounts several directories from the NFS server. Let's say that our
>> primary NFS server dies and just goes away. What then? Are you
>> periodically doing a mount or a file look-up on a mounted filesystem
>> to check whether your NFS server died? If so, are you just unmounting
>> and remounting everything using the backup NFS server? Just curious
>> how this problem is being solved.
>
> If you're mounting those NFS partitions read/write, then there really
> isn't a good solution for this problem[1] -- you need your NFS server
> up and running 24x7. If you are NFS mounting those partitions
> read-only, then you can in principle construct a fail-over system
> between those servers. Some Unix OSes let you specify a list of servers
> in fstab(5) (eg. Solaris) and clients will mount from one or other of
> them. Unfortunately you can't do that with standard NFS mounts under
> FreeBSD. You could try using VRRP -- see the net/freevrrpd port for
> example -- but I'm not sure how well that would work if the system
> failed over in the middle of an IO transaction.
> In any case -- certainly if your NFS partitions are read/write, but
> also for read-only -- perhaps the best compromise is to use the
> automounter amd(8). This certainly does help with the 'nightmare
> filesystem' scenario, where loss of a server prevents the clients doing
> anything, even rebooting cleanly. You can create a limited and
> rudimentary form of failover by using role-based hostnames in your
> internal DNS -- eg. nfsserv.example.com as a CNAME pointing at your
> main server -- and then modifying the DNS when you need the failover to
> occur. It's a bit clunky and needs manual intervention, but it beats
> having nothing at all.
>
> Cheers,
>
> Matthew
>
> [1] Well, I assume you haven't got the resources to set up a storage
> array with multiple servers accessing the same disk sets.
>
> --
> Dr Matthew J Seaman MA, D.Phil.                  26 The Paddocks
>                                                  Savill Way
> PGP: http://www.infracaninophile.co.uk/pgpkey    Marlow
> Tel: +44 1628 476614                             Bucks., SL7 1TH UK
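[Editor's sketch, not part of the original thread.] A minimal shape for the amd(8) approach described above. The map name, mount options, server, and export path are all hypothetical:

```shell
# /etc/amd.map -- hypothetical entries. amd mounts these on demand
# under its configured mount point and unmounts them when idle, which
# is what avoids the hard hang on a dead server.
/defaults   type:=nfs;opts:=ro,soft,intr
data        rhost:=nfsserv.example.com;rfs:=/export/data

# /etc/rc.conf -- enable amd, serving /host from the map above:
#   amd_enable="YES"
#   amd_flags="-a /.amd_mnt -l syslog /host /etc/amd.map"
```

Combined with the role-based CNAME idea from the message above, failover becomes: update the DNS record, let the idle timeout unmount the dead server, and the next access mounts the replacement.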
Re: NFS server fail-over - how do you do it?
Very useful information, thanks. We have a very stable NFS server, but I am still working hard to put some redundancy into place. I was thinking that since NFS is UDP-based, if the primary NFS server failed and the secondary assumed the primary NFS server's IP address, things would at least return to normal (of course, any writes that had been in progress would fail horribly). That doesn't seem to be the case. During a test we killed the main NFS server and brought up the NFS IP as an alias on the backup. It didn't work. Has anyone tried anything like this?

----- Original Message -----
From: Chuck Swiger [EMAIL PROTECTED]
To: adp [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, May 31, 2004 11:55 AM
Subject: Re: NFS server fail-over - how do you do it?

> adp wrote:
>> One of my big problems right now is that if our primary NFS server
>> goes down then everything using that NFS mount locks up. If I change
>> to the mounted filesystem on the client then it stalls:
>> # pwd
>> /root
>> # cd /nfs-mount-dir
>> [locks]
>> If I try to reboot the reboot fails as well since FreeBSD can't
>> unmount the filesystem!?
>
> Solaris provides mechanisms for NFS failover for read-only NFS shares,
> but FreeBSD doesn't seem to support that. Besides, most people seem to
> want to use read/write filesystems, which makes the former solution not
> very useful to most people's requirements. The solution to the problem
> is to make very certain that your primary NFS server does not go down,
> ever, period. Reasonable people who identify a mission-critical system
> such as a primary NFS server ought to be willing to spend money to get
> really good hardware, have a UPS, and so forth, to facilitate the goal
> of 100% uptime. A Sun E450 still makes a nice primary fileserver,
> although NAS solutions like a NetApp or an Auspex (not cheap!) should
> also be considered. The other choice would be to switch from using NFS
> to a distributed filesystem which implements fileserver redundancy,
> such as AFS and its successor, DFS.
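[Editor's sketch, not part of the original thread.] One common reason an alias-takeover test like the one described fails is stale ARP: clients keep the dead server's MAC address cached and keep sending frames to it. A hedged outline of what to check (interface name and addresses are examples):

```shell
# On the backup server, bring up the service IP as an alias:
ifconfig fxp0 inet 192.168.42.10 netmask 255.255.255.224 alias

# On a client, see which MAC the service IP currently maps to, and
# clear the entry so the next packet re-ARPs and finds the backup:
arp -n 192.168.42.10
arp -d 192.168.42.10
```

Even with ARP sorted out, state held by the old server (file handles, lockd/statd state) can still make clients unhappy, so this is at best a partial takeover.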
> --
> -Chuck
Re: Can I specify the resolver timeout?
It would be nice to see this in 4.10 or 4.11 (if there will be one). We aren't looking to move to 5.x within the next several months, if not longer. FreeBSD is just too stable to upgrade. :)

----- Original Message -----
From: Giorgos Keramidas [EMAIL PROTECTED]
To: Matthew Seaman [EMAIL PROTECTED]; adp [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Monday, May 31, 2004 3:16 AM
Subject: Re: Can I specify the resolver timeout?

> On 2004-05-31 08:20, Matthew Seaman [EMAIL PROTECTED] wrote:
>> On Mon, May 31, 2004 at 06:38:58AM +0300, Giorgos Keramidas wrote:
>>> Hmmm, I *am* running 5.X. Looking at the manpage source I see that
>>> this option's missing from the 4.X sources :(
>>
>> This came up on the list quite recently. The source for the FreeBSD
>> resolver.5 man page (/usr/src/share/man/man5/resolver.5) is maintained
>> separately from the equivalent BIND source contributed from the ISC
>> (/usr/src/contrib/bind/doc/man/resolver.5). The 'timeout:' and
>> 'attempts:' entries in the FreeBSD man page are there in HEAD and have
>> been for 5 months, but (despite the CVS comment on version 1.10 of the
>> page) haven't been MFC'd to RELENG_4 or RELENG_5_2 yet. Whether this
>> means that support is available in the underlying resolver libraries
>> is another question.
>
> I just looked at the sources of src/lib/libc/net/res_init.c and it
> seems that support for "timeout:" only exists in CURRENT. RELENG_4,
> RELENG_5_0, RELENG_5_1 and RELENG_5_2 lack the part of res_init.c that
> sets the timeout to the value configured in the `resolv.conf' file.
> I'm not sure if there are any plans to merge this back to 4.X or the
> release branches of 5.X though.
>
> - Giorgos
rpc.statd needs a lot of memory?
I am running FreeBSD 4.9. We have several NFS clients and one server. On all machines we are running rpc.statd. I noticed that its SIZE in top is around 257MB, although RES is usually only around 460KB, so this isn't a big problem. Why is the virtual size so large, though? ___ [EMAIL PROTECTED] mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to [EMAIL PROTECTED]
Handling lots of custom packages..
I use the software from ports extensively. I also use a lot of compile-time options (for example, WITH_IMAP=yes for lang/php4). Rather than recompile from source each time we bring up a server, we build from ports once and then build a package using 'make ..options.. package'. We then use this package. Now, if we just want to add this package to a system using pkg_add we have to first install the dependencies:

# pkg_add dep1.tgz
# pkg_add dep2.tgz
# pkg_add php4-..tgz

Instead, we keep a copy of ftp.freebsd.org/.../packages/All and we use:

# pkg_add -r ftp://localftp/.../packages/All/php4-...tgz

This has the benefit that we don't need to know or care what gets pulled in. (Well, we know, we just don't have to add it to an install script.) So far so good. But we may have different options for different packages of the same software depending on where we will use it. So on a general-purpose web server I may have a lot of options for php4, but for a much more targeted use (say, a webmail server that only connects using imap) I will have fewer options specified. (And there are other examples, this is just the easiest to consider.) This means I can't just dump our custom packages into packages/All, since files would get overwritten. So I wanted to do something like:

/repos/ftp.freebsd.org/.../packages/All
/repos/ftp.freebsd.org/.../packages/mail-custom
/repos/ftp.freebsd.org/.../packages/db-custom
/repos/ftp.freebsd.org/.../packages/web-custom

All/ holds the original packages from ftp.freebsd.org and everything else is custom-compiled packages. I then create symlinks for everything in All to mail-custom/, db-custom/, and so on. This doesn't work. When I pkg_add -r it always ends up looking in All/. I have to rename a dir to All, like so:

# mv All All.old
# mv mail-custom All

And then it works. What is the best way to do this? Again, my goal is to only need to specify the one package I want rather than all of the dependencies too.
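One possible approach, hedged: pkg_add honors a PACKAGESITE environment variable that overrides the derived ftp.freebsd.org All/ URL, so each server class can point at its own custom directory. The host, path, and package file name below are placeholders:

```shell
# When PACKAGESITE is set, pkg_add -r fetches ${PACKAGESITE}<pkg> from
# that directory instead of building the default .../All/ URL itself.
PACKAGESITE="ftp://localftp/pub/FreeBSD/packages/mail-custom/"
pkg="php4-4.3.6.tgz"    # hypothetical package file name
url="${PACKAGESITE}${pkg}"
echo "$url"
```

So something like `env PACKAGESITE=ftp://localftp/.../mail-custom/ pkg_add -r php4-4.3.6` on the webmail boxes should install the imap-only build. Dependencies are fetched from the same PACKAGESITE directory, so the custom directories would still need the stock dependency packages copied or symlinked in -- but only once per directory, not per install script.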
Jails and SSL..
I want to run Apache under a FreeBSD jail. For normal http this works fine. However, I'm a little worried that we won't be able to use jails because we use SSL for several sites. With SSL we have to define one IP per site. Jails only have one IP. Is there a way around this other than just having one jail per SSL site? (I'd rather not do that!)
iostat and watching disk performance for Promise RAID..
My disk device is /dev/ar0:

# mount
/dev/ar0s1a on / (ufs, local)
/dev/ar0s1h on /home (ufs, local, soft-updates)
/dev/ar0s1e on /tmp (ufs, local, soft-updates)
...

Yet iostat shows /dev/adX:

# iostat 5
      tty             ad0              ad4              ad6             cpu
 tin tout  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
   0    3  0.00   0  0.00   0.00   0  0.00   0.00   0  0.00   2  2 26  1 69
   0  448  0.00   0  0.00  15.43  24  0.37  14.80  26  0.37   4  0  9  4 84
   0  541  0.00   0  0.00   6.67   1  0.00   6.00   3  0.02  12  0 16  6 66
   0  730  0.00   0  0.00   4.84   6  0.03   5.67   9  0.05   4  0 12  8 77
   0  483  0.00   0  0.00   2.00   0  0.00   2.00   0  0.00   9  0 10  5 76
...

Is iostat simply breaking out my RAID-1 (/dev/ar0) into its two component drives? If I had three drives composing /dev/ar0 would I then see ad0, ad4, adX? /dev/ad6 here is just a drive we use for short-term backups, and is not RAID. I just want to confirm. Thanks!
Re: Jails and SSL..
use SSL for several sites. With SSL we have to define one IP per site. Jails only have one IP. Is there a way around this other than just having one jail per SSL site? (I'd rather not do that!)

Something I think I'm going to end up doing is running two jails: one for http, one for https. You can bind jails to local addresses (say, 127.0.0.3), and then use either natd or ipfw to forward different ports to the appropriate jail.

Is this possible though? I wonder if I can get Apache to listen and RESPOND FOR several SSL sites on one IP, even though externally I'm mapping several public IPs to that one IP used by the jail/Apache. I plan on trying this later this week. Has anyone already tried this though? If so, what was your experience? It's a great idea if it works!
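For the forwarding half, a sketch using natd, which rewrites the destination address and translates replies back automatically. All addresses and the interface name are placeholders: 192.0.2.10/192.0.2.11 stand in for the public aliases, 127.0.0.3 for the loopback alias the https jail is bound to, fxp0 for the outside interface:

```
# /etc/natd.conf -- send inbound https for each public alias to the
# jail's single local address
redirect_port tcp 127.0.0.3:443 192.0.2.10:443
redirect_port tcp 127.0.0.3:443 192.0.2.11:443
```

natd only sees traffic passed to it by a divert rule, e.g. `ipfw add divert natd ip from any to any via fxp0`. The Apache side remains the hard part, though: Apache can present only one certificate per listening IP:port, so collapsing several SSL sites onto one jail IP means every site would get the same certificate unless they can share one.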
Using Datavault Agent for Unix 4.50 on FreeBSD
We will be using our datacenter's backup service (Datavault) for our FreeBSD machines. I do have the Linux emulation installed, but before testing this out I wanted to see if anyone else has done this before. The agent we will be using is for Linux (no versions for FreeBSD per the datacenter). The agent docs show the following shared libraries as needed. If anyone sees a potential problem then please let me know!

libstdc++-libc6.1-1.so.2
libcrypt.so.1
libpthread.so.0
libdl.so.2
libm.so.6
libc.so.6
/lib/ld-linux.so.2
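One quick sanity check is to confirm each required library is already present under the linuxulator's root -- a sketch, assuming the stock /compat/linux tree installed by the linux_base port:

```shell
# Report any of the agent's required libraries missing from the
# Linux compat tree (path assumes the linux_base port's layout).
for lib in libstdc++-libc6.1-1.so.2 libcrypt.so.1 libpthread.so.0 \
           libdl.so.2 libm.so.6 libc.so.6 ld-linux.so.2; do
    find /compat/linux -name "$lib" 2>/dev/null | grep -q . \
        || echo "missing: $lib"
done
```

Anything reported missing would have to come from a newer linux_base or be dropped in by hand. If the agent binary isn't branded, `brandelf -t Linux <binary>` may also be needed before it will run under emulation.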
Tuning a system..
We have a pretty high-load mail server that does AV and spam filtering. I am looking to performance-tune this machine. It's FreeBSD 4.9-REL and Postfix. I am trying to correlate the info in systat to things I need to worry about. I am using systat with vmstat output since that seems to basically show everything you need to see. By the way, I did read 'man tuning'. First, I see that my memory is fine:

3 users    Load  9.75  4.46  3.24    Mar 15 17:02

Mem:KB    REAL        VIRTUAL           VN PAGER  SWAP PAGER
        Tot   Share     Tot    Share   Free   in  out    in  out
Act 1067128    8704  3036376   30772  35356  count
All  502468   37816  3878800  181488         pages

The major thing I'm looking at is the SWAP PAGER in/out columns. I see that I have processes in the run state and 14 waiting on the disk:

Proc:r  p  d   s  w   Csw   Trp    Sys  Int  Sof   Flt
     8    14  77     6305  2974  29015  950  1863  2293

I have 77 processes sleeping. I have 0 processes in the w state, which means that my CPU isn't having a problem. I am spending a lot of time in sys and the rest in user:

49.5% Sys   1.7% Intr   44.5% User   0.0% Nice   4.4% Idle

Here are my disks:

Disks   aacd0   acd0
KB/t    13.19   0.00
tps       111      0
MB/s     1.43   0.00
% busy      2      0

From %Sys I would say that disk is a problem for us. However, I'm having trouble really understanding the numbers for Disks. We are running the server on RAID-1 (2 disks) IDE. What should I be looking for here? Here is the right side:

  2626 cow        867 total
106472 wire           ata0 irq14
147376 act        464 bge0 irq11
222104 inact      110 aac0 irq7
 28304 cache          atkbd0 irq
  6992 free       100 clk irq0
       daefr      128 rtc irq8
  2841 prcfr
       react
       pdwake
       pdpgs
       intrn
 62032 buf
   508 dirtybuf
 40239 desiredvnodes
 38005 numvnodes
 13105 freevnodes
FreeBSD hdparm?
I know that under Linux I can modify how the OS uses IDE drives using hdparm. Is there an equivalent for FreeBSD? 'man tuning' seems to indicate that there isn't anything and that I only need to worry about whether I need to use soft updates. (It also mentions sysctl values such as vfs.vmiodirenable, which seems enabled in 4.9 by default.) I can see a need to tweak the parameters of how FreeBSD uses my disks unless it already knows when to enable certain features, for example 32-bit I/O.
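The closest analogue on 4.x is probably atacontrol(8) for transfer modes, plus the hw.ata sysctls -- a sketch, where the channel number and the udma100 mode are placeholders for whatever your hardware supports:

```shell
# Show the current transfer mode of the devices on ATA channel 0
atacontrol mode 0
# Force master and slave on channel 0 to UDMA100 (if supported)
atacontrol mode 0 udma100 udma100
# DMA use for ATA devices can also be set at boot via /boot/loader.conf:
#   hw.ata.ata_dma="1"
```

Unlike hdparm's long option list, most other behavior (DMA negotiation, multi-sector I/O) is handled automatically by the ata(4) driver, which is why 'man tuning' has little to say about it.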
FreeBSD 4.9 goes boom!
Problem: FreeBSD 4.9 load average quickly goes to high levels such as 300. The system becomes unusable and HOPEFULLY reboots. In general though we have to call a tech to reboot it by hitting the power switch. Here is the setup: I have a FreeBSD 4.9 server on a P4 with 256MB of RAM. We have an IDE drive. We were using HiTech RAID-1, but it was flaky so now I'm just using a single drive with regular IDE.

CPU: Intel(R) Pentium(R) 4 CPU 1500MHz (1494.47-MHz 686-class CPU)
Origin = GenuineIntel Id = 0xf07 Stepping = 7
Features=0x3febf9ff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM>
real memory = 268369920 (262080K bytes)
avail memory = 257400832 (251368K bytes)
Warning: Pentium 4 CPU: PSE disabled
Pentium Pro MTRR support enabled
atapci0: Intel ICH2 ATA100 controller port 0xf000-0xf00f at device 31.1 on pci0
ad0: 38166MB WDC WD400BB-00GFA0 [77545/16/63] at ata0-master UDMA33

On this server I have several jails:

jail 1: running apache and serving about 6 hits/s on average.
jails 2 - 7: running apache with just one child in general for SSL (several SSL sites, several jails -- I'm moving to a single SSL jail and using natd later)
jail 8: an ssh jail for people to manage the sites

During normal loads we are okay on memory. (I am adding more.) At all times we have about 1GB of paging disk free. Normally, my 5 and 10 min loads are around 0.5 (I can watch column r in vmstat and see we usually have 0 or 1 processes waiting.)
This is normal: last pid: 7924; load averages: 0.11, 0.25, 0.49 up 0+00:39:40 15:30:01 345 processes: 2 running, 342 sleeping, 1 zombie Mem: 137M Active, 27M Inact, 52M Wired, 2284K Cache, 35M Buf, 30M Free Swap: 2048M Total, 31M Used, 2017M Free, 1% Inuse PID USERNAME PRI NICE SIZERES STATETIME WCPUCPU COMMAND 7914 root 30 0 2264K 1320K RUN 0:00 31.00% 1.51% top 7883 root 2 0 6600K 6016K sbwait 0:00 13.84% 1.32% perl 6660 nobody 2 0 17940K 12676K sbwait 0:01 1.07% 1.07% httpd 7930 root 29 0 1852K 924K RUN 0:00 17.00% 0.83% top 763 nobody18 0 15004K 7144K lockf0:02 0.15% 0.15% httpd 7828 nobody 2 0 17732K 12424K accept 0:00 0.37% 0.15% httpd 4586 nobody 2 0 17944K 12604K sbwait 0:01 0.10% 0.10% httpd 7868 nobody 2 0 16376K 10944K accept 0:00 1.03% 0.10% httpd 7910 root -6 0 1968K 1356K piperd 0:00 2.00% 0.10% perl 1461 nobody18 0 14628K 6780K lockf0:02 0.05% 0.05% httpd 2812 nobody18 0 14368K 6620K lockf0:02 0.05% 0.05% httpd 4575 nobody 2 0 17768K 12480K accept 0:01 0.05% 0.05% httpd 4593 nobody 2 0 18080K 12780K sbwait 0:05 0.00% 0.00% httpd 4422 root 2 0 16100K 10264K select 0:03 0.00% 0.00% httpd 4595 nobody 2 0 17984K 12728K sbwait 0:03 0.00% 0.00% httpd 764 nobody18 0 14992K 7300K lockf0:02 0.00% 0.00% httpd 4560 nobody 2 0 17944K 12684K sbwait 0:02 0.00% 0.00% httpd 4561 nobody 2 0 17944K 12672K sbwait 0:02 0.00% 0.00% httpd But when the system crashes the system load just skyrockets: last pid: 88248; load averages: 238.98, 197.07, 127.85 up 2+17:12:36 14:45:38 709 processes: 257 running, 421 sleeping, 31 zombie Mem: 143M Active, 21M Inact, 75M Wired, 7908K Cache, 35M Buf, 1844K Free Swap: 2048M Total, 488M Used, 1560M Free, 23% Inuse PID USERNAME PRI NICE SIZERES STATETIME WCPUCPU COMMAND 88185 root 2 0 6504K 5736K connec 0:00 1.47% 0.93% perl 25298 nobody -18 0 13700K 1596K vmpfw0:13 0.59% 0.39% httpd 57349 nobody -18 0 14788K 1588K spread 0:10 0.57% 0.39% httpd 18115 nobody -18 0 14224K 1604K vmpfw0:21 0.39% 0.24% httpd 39876 root 2 0 2716K 0K RUN 10:12 
0.00% 0.00% top 84557 nobody 2 0 22600K 0K RUN 9:54 0.00% 0.00% httpd 84567 nobody 2 0 22360K 0K sbwait 9:47 0.00% 0.00% httpd 84568 nobody 2 0 22564K 0K RUN 9:47 0.00% 0.00% httpd 84564 nobody 2 0 22680K 0K sbwait 9:41 0.00% 0.00% httpd 84556 nobody -22 0 21092K 580K swread 9:39 0.00% 0.00% httpd 84554 nobody 2 0 22592K 0K RUN 9:32 0.00% 0.00% httpd 84555 nobody 2 0 22608K 0K RUN 9:31 0.00% 0.00% httpd 84558 nobody 2 0 22580K 0K RUN 9:22 0.00% 0.00% httpd 84563 nobody 2 0 22692K 0K RUN 9:07 0.00% 0.00% httpd 84560 nobody 2 0 22580K 0K RUN 8:56 0.00% 0.00% httpd 84398 root 2 0 21052K 1604K select 4:14 0.00% 0.00% httpd 94 root 2 0 360K 0K nfsd 3:03 0.00% 0.00% nfsd 3730 nobody18 0 14888K 0K lockf1:23 0.00% 0.00% httpd Since I have 75M wired I have SOME memory available to my system. I am using bsdsar. Our system crashed around 2:45 today: Time ad0 ad1 ad2 ad3 da0 da1 da2 da3 da4 da5 da6 13:400
FreeBSD and MySQL - mysqld eats CPU alive
I recently posted the following message to the MySQL discussion list. The response there, and the one I keep finding on Google, is that this is a long-standing issue between FreeBSD and MySQL. For me this has been happening since FreeBSD 4.4. I have one site where we are going to have to move to Linux. I would much prefer keeping us on FreeBSD, but we just can't afford the downtime anymore. Another site is looking at moving to PostgreSQL on FreeBSD. Any help on this? Googling shows a long history of people having these problems but no solutions. Please don't give me a URL to a Google search showing others having this problem--I've seen that and more. I want to know if there is a solution. Any help is appreciated!

...

I have several MySQL and FreeBSD installs across a few different sites, and I consistently have problems with mysqld. It will begin to eat up all of the CPU and eventually become unresponsive (or the machine will just burn). I can't seem to manually reproduce this, but given enough time a FreeBSD box with mysqld will go down. Our servers are generally heavily loaded. I would say that I'm doing something wrong (although what I could be doing wrong I'm not sure), but I recently began working with another company that has the EXACT SAME PROBLEM. They are even thinking of moving to PostgreSQL, but we are trying to fix mysqld instead for now. This behavior has been seen on:

FreeBSD 4.4, 4.7, 4.9, 4.10
MySQL 3.x and 4.x
Typical load: 50 qps
With and without replication enabled.

Some sites are SELECT heavy, some are INSERT heavy. For one site I think we will be moving from FreeBSD to Linux for the MySQL servers since MySQL seems to run like a champ on Linux. We will continue to use FreeBSD for everything else. Anyone experienced this problem? Is it mysqld or FreeBSD? I can't pinpoint the exact issue.
tmpfs for FreeBSD?
I'm looking for a ramdisk-style filesystem for FreeBSD that can be used for scratch space, e.g., tmpfs in Solaris. The filesystem should be able to grow and shrink in memory (and use real disk space as needed) depending on the amount of free RAM on the system. I don't want just a fixed-size block of memory reserved for /tmp. I will be using this for scratch files that are quickly created and then destroyed, and will average around 2MB each. We are expecting our tmp filesystem to need around 256MB to 512MB on average.
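The closest stock option on 4.x is MFS, which is swap-backed: the size is fixed at mount time, but pages that aren't in active use can be paged out, so it doesn't permanently pin RAM the way a pre-allocated ramdisk would. A sketch of an fstab entry -- the swap device name is a placeholder, and -s is in 512-byte sectors, so 1048576 sectors = 512MB:

```
# /etc/fstab -- swap-backed memory filesystem for /tmp
/dev/ad0s1b   /tmp   mfs   rw,-s=1048576   2   0
```

It still won't grow past the configured size or shrink its footprint on demand the way Solaris tmpfs does; the paging behavior is the only elastic part.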
ntop problems with FreeBSD?
Anyone get ntop totally working with FreeBSD 4.9? We can run it, but it's flaky. On one FreeBSD box it runs fine for a while and then just dies. No syslog messages. It's just gone. On another it won't display anything in the Web interface (it opens new windows when clicking on some items), and it also randomly crashes. We are using ipf on one system and ipfw on another. All are dual-homed FreeBSD firewalls.
Postfix thinks there isn't enough disk space in a jail
This problem seems to be affecting Postfix in a FreeBSD jail, and I haven't seen this problem outside of a jail, so I'm trying questions@ first. I am running postfix-2.0.18,1 (from ports) on a FreeBSD 4.10 system in a jail. Everything was fine until recently, when I moved NFS services over to this same server. (This may be a red herring.) Now, every few mails I get an email to Postmaster like this:

Transcript of session follows.
Out: 220 xx ESMTP
In: EHLO yy
Out: 250-xx
Out: 250-PIPELINING
Out: 250-SIZE 102400
Out: 250-VRFY
Out: 250-ETRN
Out: 250 8BITMIME
In: MAIL FROM:[EMAIL PROTECTED] SIZE=13414
Out: 452 Insufficient system storage
In: QUIT
Out: 221 Bye

Okay, so the disk is filling up.

box# df -hl
Filesystem    Size   Used  Avail Capacity  Mounted on
/dev/ar0s1a  1008M    45M   882M     5%    /
/dev/ar0s1d    27G    23G   1.9G    92%    /jails
/dev/ar0s1h  1008M    20M   908M     2%    /home
/dev/ar0s1g  1008M  10.0K   927M     0%    /tmp
/dev/ar0s1f   3.9G   1.2G   2.4G    34%    /usr
/dev/ar0s1e   2.0G   148M   1.7G     8%    /var
procfs        4.0K   4.0K     0B   100%    /proc
procfs        4.0K   4.0K     0B   100%    /jails/xxx/proc

Okay, we are at 92%. We should clean some things up, but we do still have 1.9GB of free space. (And we often linger around this anyway.) Postfix is set to accept mail as long as there is a minimum of free space:

main.cf: queue_minfree = 2500

Anyone seen this happen? Postfix should not be returning '452 Insufficient system storage' to clients at this point. It doesn't happen for all mails--just around 5% or so. It seems fairly random to me.
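Two hedged checks worth running from inside the jail. Note that queue_minfree is specified in bytes, so 2500 is only ~2.5KB, and Postfix's smtpd also refuses mail when the queue filesystem's free space falls below a multiple of message_size_limit, independent of queue_minfree:

```shell
# Print the parsed values as Postfix actually sees them
postconf queue_minfree message_size_limit queue_directory
# Compare what statfs reports inside the jail for the queue filesystem
df -k /var/spool/postfix
```

If df inside the jail disagrees with the host's df (which can happen with unusual jail filesystem setups), that would explain why the 452s appear intermittently despite 1.9GB free on the host.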
SMP on SMP-capable system with one processor
We have several Dell-based systems that are dual-processor capable but have only one processor. The FreeBSD 4.9 kernel for each system is compiled with SMP support, even though there is only one processor in each system right now. Would this actually reduce performance on a single-processor system? I know that SMP kernels have to worry about special locking, and may be doing unnecessary work.