strsvis breakage when upgrading from 9.1 to 9-STABLE
Hello, I've sent a similar query before, but didn't receive any answers. When upgrading from 9.1 to 9-STABLE, the buildworld fails with:

===> usr.bin/xinstall (all)
cc -O2 -pipe -I/usr/src/usr.bin/xinstall/../../contrib/mtree -I/usr/src/usr.bin/xinstall/../../lib/libnetbsd -I/usr/src/usr.bin/xinstall/../../lib/libmd -std=gnu99 -fstack-protector -Wsystem-headers -Werror -Wall -Wno-format-y2k -W -Wno-unused-parameter -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wreturn-type -Wcast-qual -Wwrite-strings -Wswitch -Wshadow -Wunused-parameter -Wcast-align -Wchar-subscripts -Winline -Wnested-externs -Wredundant-decls -Wold-style-definition -Wno-pointer-sign -c /usr/src/usr.bin/xinstall/xinstall.c
cc -O2 -pipe -I/usr/src/usr.bin/xinstall/../../contrib/mtree -I/usr/src/usr.bin/xinstall/../../lib/libnetbsd -I/usr/src/usr.bin/xinstall/../../lib/libmd -std=gnu99 -fstack-protector -Wsystem-headers -Werror -Wall -Wno-format-y2k -W -Wno-unused-parameter -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wreturn-type -Wcast-qual -Wwrite-strings -Wswitch -Wshadow -Wunused-parameter -Wcast-align -Wchar-subscripts -Winline -Wnested-externs -Wredundant-decls -Wold-style-definition -Wno-pointer-sign -c /usr/src/usr.bin/xinstall/../../contrib/mtree/getid.c
cc1: warnings being treated as errors
/usr/src/usr.bin/xinstall/xinstall.c: In function 'metadata_log':
/usr/src/usr.bin/xinstall/xinstall.c:1331: warning: implicit declaration of function 'strsvis'
/usr/src/usr.bin/xinstall/xinstall.c:1331: warning: nested extern declaration of 'strsvis'

Digging around, it looks like strsvis is in the 9-STABLE sources but not in the installed (9.1) system. I assume the build process is pulling in system headers and libraries, but that seems wrong. Isn't the build process supposed to pull in only sources from /usr/src and freshly built libraries instead of the system ones? What would be the workaround for the above problem?
signature.asc Description: OpenPGP digital signature
Re: Suggest changing dirhash defaults for FreeBSD 9.2.
On 29/08/2013 03:32, Dewayne Geraghty wrote: From the analysis performed in 2009, and referenced earlier by Robert, this https://wiki.freebsd.org/DirhashDynamicMemory and other material at this site indicate that the reclaim-age interval is workload dependent and that 5 to 8 seconds seems, on average, to be adequate. I'm having trouble understanding what the graphs are saying - you seem to have almost consistently worse results with increasing dirhash memory and/or reclaim age. Is the discussion, rather than (synthetic) workload performance, sufficient to warrant changing the default settings by a factor of 12? Yes, and also for an additional reason on top of what I've already said in @fs and @hackers: it will be at least 10 years before someone remembers to bump this tunable again, so I consider a little overkill to be ok.
Re: Suggest changing dirhash defaults for FreeBSD 9.2.
On 28/08/2013 05:58, Robert Burmeister wrote: On 8/27/2013 9:40 AM, Sergey Kandaurov wrote: On 27 August 2013 16:41, Robert Burmeister robert.burmeis...@utoledo.edu wrote: I have been experimenting with dirhash settings, and have scoured the internet for other people's experience with them. (I found the performance improvement in compiling has forestalled the need to add an SSD drive. ;-) I believe that increasing the following values by a factor of 10 would benefit most FreeBSD users without disadvantage: vfs.ufs.dirhash_maxmem: 2097152 to 20971520, vfs.ufs.dirhash_reclaimage: 5 to 50 or 60. vfs.ufs.dirhash_maxmem is further autotuned based on available physical memory. See r214359 for details. [Spock Eyebrow of Thought] I'm running FreeBSD i386 9.2, which allows a max of 4 GB of RAM. To what value does the algorithm tune in your case? On my 16 GB machine, it's ~25 MB: vfs.ufs.dirhash_maxmem: 26968064 I think the algorithm is still overly conservative for 32-bit systems, which are more likely to be using UFS. As 64-bit platforms tend to have more RAM and use ZFS, is the same tuning algorithm appropriate for both? The policy is to use fractions of the installed RAM (though in a roundabout way), so it should scale reasonably well to systems with both large and small memories. I'll bump vfs.ufs.dirhash_reclaimage to 60, it's worth it.
Re: 9.2-RC3 Now Available
Updated via svnup from releng/9.0 to releng/9.2 (r254910) and I got this in buildworld:

cc1: warnings being treated as errors
/usr/src/usr.bin/xinstall/xinstall.c: In function 'metadata_log':
/usr/src/usr.bin/xinstall/xinstall.c:1331: warning: implicit declaration of function 'strsvis'
/usr/src/usr.bin/xinstall/xinstall.c:1331: warning: nested extern declaration of 'strsvis'
*** Error code 1
1 error
*** Error code 2
1 error
*** Error code 2
2 errors
*** Error code 2
1 error
*** Error code 2
1 error

Shouldn't buildworld use includes from /usr/src and not from the installed system?
Re: Installing FreeBSD 9.1 amd64 on IBM x3550 M3
On 11/02/2013 12:23, Panagiotis Christias wrote: Hello, I'm trying to install FreeBSD 9.1 amd64 on an IBM x3550 M3 server. Installation went smoothly, RAID controller and network cards were successfully recognised. How stable is it? I may have a problem manifesting in random reboots with a similar machine.
Adding process title to SIGSEGV messages?
Hello, Is there a way to add a process title to the SIGSEGV messages which are usually collected in /var/log/messages?

Jan 18 15:08:06 www kernel: pid 95174 (process-name), uid 80: exited on signal 11

I'd like to see the process title alongside process-name.
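While waiting for a kernel-side answer, these signal-exit lines are at least easy to mine after the fact. A minimal sketch (the function name is mine) that parses the log format shown above into its fields, so crashes can be grouped by process name:

```python
import re

# Matches the kernel's "exited on signal" line as shown above
# (assumes the stock /var/log/messages format).
SIGEXIT = re.compile(
    r"pid (?P<pid>\d+) \((?P<name>[^)]+)\), uid (?P<uid>\d+): "
    r"exited on signal (?P<sig>\d+)"
)

def parse_sigexit(line):
    """Return (pid, name, uid, signal) or None if the line doesn't match."""
    m = SIGEXIT.search(line)
    if not m:
        return None
    return (int(m.group("pid")), m.group("name"),
            int(m.group("uid")), int(m.group("sig")))

line = ("Jan 18 15:08:06 www kernel: pid 95174 (process-name), "
        "uid 80: exited on signal 11")
print(parse_sigexit(line))  # (95174, 'process-name', 80, 11)
```

This only recovers what the kernel already logs (the command name, not the setproctitle-style title), but it makes correlating repeated crashes straightforward.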
Re: IPv4 vs. IPv6 Ethernet Performance
On 28/08/2012 17:38, Norbert Aschendorff wrote:

Configuration      v6   v4
==============================
Linux - Linux      925  935   # <= could be v6's 40B header vs. v4's 20B
Linux - FreeBSD    450  700
FreeBSD - Linux    455  920
==============================

The FreeBSD-Linux value shows that the ethernet chip on the FreeBSD machine (it's Intel stuff on both sides, using the em(4) driver on FreeBSD) is able to send at full 1G speed. But why is IPv6 so slow? There are some more numbers, FreeBSD-FreeBSD, here: http://people.freebsd.org/~bz/bench/ Apparently, the software stack is capable enough, so it may be a driver problem in your case.
Re: Problems with crashing IBM X3630 M3/ZFS
On 06/07/2012 20:56, Bob Healey wrote: Hello. I've got a quartet of IBM x3630 M3 with one that is frequently hard locking under heavy NFS load. I am running 9.0-RELEASE with all the patches from freebsd-update. My problem machine has 8 16-core clients, each doing IO-intensive tasks, connected to it via a ProCurve switch and the onboard igb0 interface. Mostly network reads, typically 10MB read per MB written. When the machine locks under load, none of the consoles respond, nor can I reach the machine via ethernet. I can break into DDB via the serial-over-LAN interface, and am running a debug/witness kernel at the moment (I was running GENERIC previously). During the boot sequence, witness tosses me into DDB ~10 times before I get a login prompt. Prior to this machine acting up, it had multiple 802.1q vlans, and ran 9K packets on its private network to the compute clients. A dmesg can be found at http://boyle.che.rpi.edu/~healer/boomer/dmesg /etc/rc.conf can be found at http://boyle.che.rpi.edu/~healer/boomer/rc.conf A listing of installed ports can be found at http://boyle.che.rpi.edu/~healer/boomer/pkg_info The output of ps auxwwo wchan against my two crash dumps can be found at http://boyle.che.rpi.edu/~healer/boomer/crash1-psaux-wchan and http://boyle.che.rpi.edu/~healer/boomer/crash2-psaux-wchan I'm not entirely convinced this is software, but I've run out of local experts to ask, and can't prove it's hardware. Hi, I recently tested an IBM machine similar to yours (I don't know if it was exactly the same model, but it was probably an M3), and observed a number of lockups which seemed to be related to the RAID card (IBM's ServeRAID, a re-branded LSI). I don't know if this has anything to do with your problems, but IIRC in my case there were some kernel messages on the console relating to the driver and/or PCI bus errors on the slot with the RAID controller prior to the lockups - maybe you can check for these.
I have other bad experiences with IBM's hardware and have given up on them for running FreeBSD.
apache hangs in wait4
Hello, I have a very embarrassing problem where apache22-worker, running mod_fcgid with php, perl and python fastcgi processes, hangs daily in wait4:

# procstat -k 54688
  PID    TID COMM   TDNAME  KSTACK
54688 101355 httpd  -       mi_switch sleepq_catch_signals sleepq_wait_sig _sleep kern_wait sys_wait4 amd64_syscall Xfast_syscall

The only suspicious thing in the logs is this:

[Sat Jul 07 20:00:01 2012] [notice] SIGUSR1 received. Doing graceful restart
[Sat Jul 07 20:00:10 2012] [error] FastCGI process 41228 still did not exit, terminating forcefully

The 41228 process is a Perl FastCGI web application using p5-FCGI (wwsympa), and it is in the accept wchan. Any ideas?
___ freebsd-stable@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-stable To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org
Re: Recommendation for Hypervisor to host FreeBSD
On 05/07/2012 14:21, Mark Saad wrote: I am using VMware ESXi v4.01 with no issues for 6, 7, 8 and 9. ESXi will happily host amd64 installs, and i386 provided the underlying hardware supports it. The older ESX 3.5 works as well on 32-bit hardware but I am no longer using it. Also I use VirtualBox 4 hosted on a Mac and on a 9-stable amd64 box with little issue. Hello, Which type of workload do you run on FreeBSD under VMware ESXi? e.g. web server, database, e-mail...?
Re: ServeRAID BR10il (LSISAS1064E)
On 16/05/2012 11:45, Alexey V. Panfilov wrote: Hi! I tried to boot from CDROM with FreeBSD on an IBM xServer x3250 M4 (P/N 2583-72G), but it always crashes with the message: NMI ISA b8, EISA ff RAM parity error, likely hardware failure. Attempts to install 8.3, 9.0, 7.3 / i386, amd64 - the result was always the same. A screenshot of the crash is here: http://tmp.lehis.ru/img/IMAG0339.jpg This looks very much like what I had with a Dell server with an mpt controller. Unfortunately, I didn't collect the information on the controller chip. CentOS also worked fine on the machine (so that was what I left it running with).
Re: Compatibility with the new XEON Processors
On 04/04/2012 18:03, Efraín Déctor wrote: Hello. Does anyone know if FreeBSD 8.2 and FreeBSD 9.1 are fully compatible with this processor: Intel Xeon E3-1270? Yes.
Re: 157k interrupts per second causing 60% CPU load on idle system
On 20/03/2012 06:26, Matt Thyer wrote: I've upgraded my FreeBSD-STABLE NAS from r225723 (22nd Sept 2011) to r232477 (4th Mar 2012) and am finding that a system process called intr is now constantly using about 60% of 1 CPU starting a short time after reboot (possibly triggered by use of the samba server). When this starts, systat -vm 1 says that the system is 85% idle and 14% interrupt handling. It says that there's around 157k interrupts per second. Ok, but *which* interrupt is getting triggered? Please send the output of vmstat -i.
Re: 157k interrupts per second causing 60% CPU load on idle system
On 20 March 2012 12:52, Matt Thyer matt.th...@gmail.com wrote: On 20 March 2012 21:12, Ivan Voras ivo...@freebsd.org wrote: On 20/03/2012 06:26, Matt Thyer wrote: I've upgraded my FreeBSD-STABLE NAS from r225723 (22nd Sept 2011) to r232477 (4th Mar 2012) and am finding that a system process called intr is now constantly using about 60% of 1 CPU starting a short time after reboot (possibly triggered by use of the samba server). When this starts, systat -vm 1 says that the system is 85% idle and 14% interrupt handling. It says that there's around 157k interrupts per second. Ok, but *which* interrupt is getting triggered? Please send the output of vmstat -i.

interrupt      total       rate
irq16: uhci0+  3392184862  126692

Ok, something's probably wrong with USB. Can you disable it in BIOS?

cpu0: timer    53549677    1999
irq256: mps0   2643187     98
irq257: re0    5508108     205
irq258: ahci0  160717      6
cpu1: timer    53525300    1999
cpu2: timer    53525300    1999
cpu3: timer    53525296    1999
Total          3614622447  134999
Re: Serverworks HT-1000 HPET event timer
On 09/03/2012 16:43, Alexander Motin wrote: Hi. Does anybody have a success story of using the HPET event timer (not time counter!) on the Serverworks HT-1000 chipset under FreeBSD 9/10? I've received a report about problems with it on an HP BL465c G6 blade system and am now wondering whether it is a global problem or specific to that system. For what it's worth, I have a G1 and everything works by default:

hpet0: High Precision Event Timer iomem 0xfed0-0xfed003ff on acpi0
Timecounter HPET frequency 14318180 Hz quality 950
Event timer HPET frequency 14318180 Hz quality 450
Event timer HPET1 frequency 14318180 Hz quality 440
Event timer HPET2 frequency 14318180 Hz quality 440

kern.eventtimer.choice: HPET(450) HPET1(440) HPET2(440) LAPIC(400) i8254(100) RTC(0)
kern.eventtimer.et.LAPIC.flags: 15
kern.eventtimer.et.LAPIC.frequency: 0
kern.eventtimer.et.LAPIC.quality: 400
kern.eventtimer.et.i8254.flags: 1
kern.eventtimer.et.i8254.frequency: 1193182
kern.eventtimer.et.i8254.quality: 100
kern.eventtimer.et.HPET.flags: 3
kern.eventtimer.et.HPET.frequency: 14318180
kern.eventtimer.et.HPET.quality: 450
kern.eventtimer.et.HPET1.flags: 3
kern.eventtimer.et.HPET1.frequency: 14318180
kern.eventtimer.et.HPET1.quality: 440
kern.eventtimer.et.HPET2.flags: 3
kern.eventtimer.et.HPET2.frequency: 14318180
kern.eventtimer.et.HPET2.quality: 440
kern.eventtimer.et.RTC.flags: 17
kern.eventtimer.et.RTC.frequency: 32768
kern.eventtimer.et.RTC.quality: 0
kern.eventtimer.periodic: 0
kern.eventtimer.timer: HPET
kern.eventtimer.idletick: 0
kern.eventtimer.singlemul: 2

# vmstat -i
interrupt      total       rate
irq1: atkbd0   18          0
irq20: hpet0   1625923139  658
irq22: uhci4   4323336     1
irq256: bce0   772213596   312
irq257: ciss0  40836282    16
irq258: isp0   49525760    20
irq259: isp1   84          0
Total          2492822215  1008
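For readers puzzled by the numbers in parentheses: kern.eventtimer.choice lists each timer with a quality rating, and the kernel picks the available timer with the highest one, which is why kern.eventtimer.timer ends up as HPET in the output above. A toy sketch of that selection:

```python
# Quality values as printed by kern.eventtimer.choice in the post above;
# the kernel prefers the highest-quality available event timer
# (a simplified sketch of the real selection logic).
choice = {"HPET": 450, "HPET1": 440, "HPET2": 440,
          "LAPIC": 400, "i8254": 100, "RTC": 0}

best = max(choice, key=choice.get)
print(best)  # HPET, matching kern.eventtimer.timer
```

Overriding the choice is just a matter of setting kern.eventtimer.timer to another name from that list.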
Re: nmbclusters: how do we want to fix this for 8.3 ?
On 23/02/2012 09:19, Fabien Thomas wrote: I think it is more reasonable to set up the interface with one queue. Unfortunately, the moment you do that, two things will happen: 1) users will start complaining again about how FreeBSD is slow 2) the setting will become a sacred cow and nobody will change this default for the next 10 years. If it really comes down to enabling only one queue, something needs to complain extremely loudly that this isn't an optimal setting. Only printing it out at boot may not be enough - what's needed is possibly a script in periodic/daily which checks system sanity every day and e-mails the operator.
Re: Tuning needed for slow RDP FreeBSD 9 - Win 2008 R2
On 13/02/2012 02:50, Peter Olsson wrote: Desktop: FreeBSD 9.0-RELEASE amd64, generic kernel, running Openbox. My WAN is about 1.2 Mbps, and I try to run RDP to windows servers beyond my WAN. RDP to a Windows Server 2003 SP2 is fast and works without problems. RDP to a Windows Server 2008 R2 is very slow, and sometimes just disconnects. I tried changing a couple of net.inet.tcp sysctls. Nothing has helped; do you have any ideas what I should tune? It is highly unlikely that any network tuning will help here - this is almost certainly an application-level problem. For what it's worth, I'm using rdesktop to a Win2k8 R2 server without any lag or other problems.
Re: FreeBSD 9 crash/deadlock when dump(8)ing file system with journaling enabled.
On 30/01/2012 13:06, Jeremy Chadwick wrote: For now I've turned off journaling (soft updates seem fine) and that works around the issue. Let me know if I can provide more details etc! I'm not sure, but this may be an after-effect of known problems right now with SU+J on 9.0. It would help if you could state if you're using dump -L or not. I've seen the deadlock behaviour you describe on older FreeBSD versions (dating back to at least 7.x) when using dump -L, which generates a fs snapshot. Obviously 7.x does not have SU+J, so I'm a little surprised disabling journalling fixes the problem for you. It's a known bug: SU+J currently deadlocks when used with UFS snapshots.
Unbalanced timer interrupts under VMWare?
I have a strange situation on a VMWare 5-hosted machine:

# vmstat -i
interrupt      total    rate
irq1: atkbd0   74       0
irq6: fdc0     11       0
irq15: ata1    17       0
irq18: em0     42122    1
cpu0:timer     2246291  54
irq256: mpt0   141402   3
cpu1:timer     280800   6
Total          2710717  65

The cpu0 timer interrupt rate is 54 Hz and the cpu1 rate is 6 Hz. The same is visible when monitoring the system in real time with systat -vm. This is a default FreeBSD 9 RC3 amd64 system, HZ is the default 100. Unless the tickless kernel project has advanced more than I think, this looks like a problem... so I looked elsewhere and it turns out I cannot get more than about 55 interrupts/s with the disk controller either. Any ideas? I have another host running VMware 5 but only an 8-stable machine in it, which works fine. Does anyone else run 9.x on VMware 5? The host is a Xeon X3360 CPU (4 cores, no HTT, 2.8 GHz).
Re: Unbalanced timer interrupts under VMWare?
On 21.12.2011. 14:48, Maxim Dounin wrote: Hello! On Wed, Dec 21, 2011 at 12:02:04PM +0100, Ivan Voras wrote: Unless the tickless kernel project has advanced more than I think, It is, actually. Many thanks to mav@ for his amazing work.

$ sysctl kern.eventtimer.periodic
kern.eventtimer.periodic: 0
$ vmstat -i | grep timer
cpu0:timer 72769640 47

Ah, great! I missed that commit :) Thanks mav! And down to 37 i/s as seen in systat -vm. An idle virtual machine now takes 2 times less CPU on my laptop as seen from the host. this looks like a problem... so I looked elsewhere and it turns out I cannot get more than about 55 interrupts/s with the disk controller either. Happily goes to 6k i/s here (though it's under VirtualBox and a bit old -current, not 9.0). Yes, it looks like it's not related to the tickless mode, the disk IO is slow even when kern.eventtimer.periodic=1. Something else is broken.
Re: TCP Reassembly Issues
On 24.11.2011. 8:02, Kris Bauer wrote: Hello, I am currently experiencing an issue with FreeBSD 9.0-RC2 r227852 where the net.inet.tcp.reass.cursegments value is constantly increasing (and not decreasing when there is nominal traffic with the box). It is causing tcp slowdowns as described in kern/155407: Exhausted net.inet.tcp.reass.maxsegments blocks recovering the tcp session (for this socket and any other socket waiting for retransmitted packets). After net.inet.tcp.reass.maxsegments is exhausted, allocating a new entry in tcp_reass fails (for this socket and any other socket waiting for retransmitted packets). I have increased the reass.maxsegments value to 16384 to temporarily avoid the problem, but the cursegments number keeps rising and it seems it will occur again. Is this an issue that anyone else has seen? I can provide more information if need be. Is your configuration different from the default in some way? Do you use a firewall? Multithreaded netisr? One of the new TCP congestion control modules?
Re: ATA/Cdrom(?) panic
On 16/11/2011 07:43, Bjoern A. Zeeb wrote: Hey, we have seen this or a very similar panic for about 1 year now once in a while and I think I reported it before; this is FreeBSD as guest on Yes, IIRC I've also reported it before; it crashes randomly, when the machine is not doing anything with the cdrom. As a workaround, I now remove the cdrom device from vmware instances. vmware. Seems it was a double panic this time. Could someone please see what's going on there?It was on 8.x-STABLE in the past and this is 8.2-RELEASE-p4. Thanks /bz acd0: WARNING - READ_TOC taskqueue timeout - completing request directly Fatal trap 12: page fault while in kernel mode cpuid = 4; apic id = 04 fault virtual address = 0x1f4 fault code = supervisor read, page not present instruction pointer = 0x20:0xc08a1e9f stack pointer = 0x28:0xe6ad5b9c Fatal trap 12: page fault while in kernel mode frame pointer = 0x28:0xe6ad5bb4 cpuid = 2; code segment= base 0x0, limit 0xf, type 0x1bapic id = 02 = DPL 0, pres 1, def32 1, gran 1 fault virtual address = 0x1f4 processor eflags= fault code = supervisor read, page not presentinterrupt enabled, instruction pointer = 0x20:0xc08a1e9fresume, stack pointer = 0x28:0xe8e9e808IOPL = 0 frame pointer = 0x28:0xe8e9e820 current process = code segment= base 0x0, limit 0xf, type 0x1b12 (swi6: task queue) = DPL 0, pres 1, def32 1, gran 1 trap number = 12 processor eflags= interrupt enabled, panic: page faultresume, cpuid = 4IOPL = 0 current process = KDB: stack backtrace:25162 (bsnmpd) trap number = 12#0 0xc08e0d07 at kdb_backtrace+0x47 #1 0xc08b1dc7 at panic+0x117 #2 0xc0be4b53 at trap_fatal+0x323 #3 0xc0be4dd0 at trap_pfault+0x270 #4 0xc0be5315 at trap+0x465 #5 0xc0bcbecc at calltrap+0x6 #6 0xc08b0d86 at _sema_post+0x46 #7 0xc056fa47 at ata_completed+0x727 #8 0xc08eb97a at taskqueue_run_locked+0xca #9 0xc08ebc8a at taskqueue_run+0xaa #10 0xc08ebd53 at taskqueue_swi_run+0x13 #11 0xc088903b at intr_event_execute_handlers+0x13b #12 0xc088a75b at ithread_loop+0x6b #13 
0xc0886d51 at fork_exit+0x91 #14 0xc0bcbf44 at fork_trampoline+0x8 Uptime: 5d20h1m56s

(gdb) l *ata_completed+0x727
489             (request->callback)(request);
490         else
491             sema_post(&request->done);
492
493     /* only call ata_start if channel is present */
494     if (ch)
495         ata_start(ch->dev);
496     }
497
498     void
Re: ATA/Cdrom(?) panic
On 16/11/2011 15:45, Joel Dahl wrote: Hmm. We're running many FreeBSD 8.2 machines as guests in VMware but have never encountered the panic described above. Should I be worried? :-) I've encountered them often enough that I started removing cdrom devices from the VMs.
Re: Questions about using gvirstor as a RAID0 solution
On 24/10/2011 11:21, carlopmart wrote: Hi all, I would like to use gvirstor as a thin provisioning solution for a mysql server, but I have some doubts about using it: Yes, it's kind of what it was created for... a) Do I need to put geom_virstor_load=YES in loader.conf or is this kernel module loaded automatically at boot if I create the gvirstor volume using the label option? You need to load the module yourself, the same as with other GEOM modules. b) Does gvirstor support UFS journaling? For example:

gjournal label /dev/virstor/mydata
newfs -O 2 -J /dev/virstor/mydata

You can do that. It will be very inefficient (i.e. you will only avoid fscks, there will probably be no performance gains at all) but nothing should break. Both virstor and gjournal add their overheads (specifically, they can be seek-intensive in different ways), so you wouldn't want to use either if sustained random IO performance is important. On the other hand, since you are using 9-stable, you can also use journaled soft-updates instead of gjournal, for much better efficiency. c) Can I use the growfs utility if I need to expand a virstor volume at the filesystem level? Not exactly; virstor will immediately create a volume with a large virtual size (whatever you specify at volume creation) regardless of how many physical devices you have. If you add more physical devices to the virstor later, you do not have to do anything with the file system itself, it will still see the original large virtual size. If you are talking about expanding the virtual volume size, that's not implemented yet (and in that case you would need to use growfs).
Re: FreeBSD on IBM X3550 M3
On 18/10/2011 09:03, Gót András wrote: The M5014 RAID is also UEFI aware and of course I only made the initial disk group and volume group config on it. :) Yes, the moment of truth will come this evening. I hope I'll be able to get FreeBSD working on the machine so I don't have to go on with Linux. For the record, I also found a report that someone couldn't even boot the Windows Server install CD on this machine and had to update the firmware. There's also a report of OpenBSD installing cleanly but then freezing randomly afterwards. FWIW, I had bad experiences with installing even Linux on IBM UEFI servers and now avoid them.
Setting coredumpsize on a running process?
I have PHP executing as fastcgi via the mod_fcgid module in Apache. I suspect there is a bug in PHP or one of its extensions which causes it to crash with sigsegv, but I cannot get any coredumps. I suspect something is setting coredumpsize to 0 - either Apache, mod_fcgid or PHP. So the question is: is there a way to set coredumpsize on a running process, with the intention of getting a core dump when it crashes? I already tried setting CoreDumpDirectory in Apache and also configuring apache22limits_args in /etc/rc.conf but without effect.
Re: Setting coredumpsize on a running process?
On 18 October 2011 16:43, Jeremy Chadwick free...@jdc.parodius.com wrote: On Tue, Oct 18, 2011 at 04:32:11PM +0200, Ivan Voras wrote: I have PHP executing as fastcgi via the mod_fcgid module in Apache. I suspect there is a bug in PHP or one of its extensions which causes it to crash with sigsegv, but I cannot get any coredumps. I suspect something is setting coredumpsize to 0 - either Apache, mod_fcgid or PHP. So the question is: is there a way to set coredumpsize on a running process, with the intention of getting a core dump when it crashes? I already tried setting CoreDumpDirectory in Apache and also configuring apache22limits_args in /etc/rc.conf but without effect. I ended up solving this on a machine where coredumps with Apache + PHP were highly common by setting sysctl kern.corefile to /var/cores/%P.%N.core, then made sure the /var/cores directory was root:wheel, perms 1777. Otherwise I could not get a coredump. apache22limits_enable did not help either, nor did CoreDumpDirectory. Having fun yet? Oh, I have years and years of fun debugging PHP, in one way or the other :) Your suggestion for setting the core dump directory explicitly helped; now it looks like I've hit an infinite recursion / stack eating bug somewhere in PCRE... #1703 0x000805d5c72e in match () from /usr/local/lib/libpcre.so.0 #1704 0x000805d5b4f0 in match () from /usr/local/lib/libpcre.so.0 #1705 0x000805d5c72e in match () from /usr/local/lib/libpcre.so.0 #1706 0x000805d5b4f0 in match () from /usr/local/lib/libpcre.so.0 However, I'm drawing the line at debugging PCRE, this will go into the don't do that category.
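The backtrace above shows PCRE's match() calling itself thousands of frames deep until the thread stack is exhausted, which is what turns a complicated pattern into a SIGSEGV. The same failure mode can be illustrated in Python, where the interpreter's recursion guard raises an exception instead of crashing (a toy stand-in, not PCRE's actual algorithm):

```python
import sys

def match(depth=0):
    # Toy stand-in for a recursive regex matcher: each nesting
    # level costs one stack frame, so deep enough input blows the stack.
    return match(depth + 1)

sys.setrecursionlimit(5000)
try:
    match()
    hit = False
except RecursionError:
    hit = True
print("recursion guard tripped:", hit)
```

C code like libpcre has no such guard by default, though PCRE does offer a match-recursion limit that can be set to fail the match gracefully instead.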
Re: FreeBSD 8.2r amd64 problem when compiling 32-bit applications
On 25/08/2011 15:28, noel beck wrote: The following is an example of the error when compiling in 32-bit on a 64-bit machine: [gsaid@Bruno ~]$ gcc -m32 -o hello hello.c I don't think -m32 is supported at all.
Re: WD Advanced Format: do I need to do something special?
On 18/08/2011 11:55, Yuri wrote: On 08/18/2011 02:17, Jeremy Chadwick wrote: The below advice still applies. Do not skim the page, read it. http://ivoras.net/blog/tree/2011-01-01.freebsd-on-4k-sector-drives.html You will therefore have to go through some manual rigmarole (preferably with gpart(8)) to ensure performance. If you plan on using the disks in ZFS, you get to go through some extra rigmarole. I didn't know about such extra actions being required and just created a ZFS pool. zdb -C mypool shows ashift as 9. I read it as meaning that the sector size is 512 bytes (wrong!). But I tested 25GB file writing/reading speed on the middle tracks and it seems reasonable: WR 55MB/s RD 107MB/s So can I get even better speeds if it were aware of the 4k sectors? Yes, read and write speeds on modern drives should be almost equal.
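For anyone else confused by the ashift value mentioned above: ashift is the base-2 logarithm of the sector size ZFS assumes for the vdev, so ashift=9 means the pool was created for 512-byte sectors and ashift=12 for 4096-byte ones. A one-line sketch of the relationship:

```python
def sector_size(ashift):
    """ZFS's ashift is log2 of the vdev sector size it assumes."""
    return 2 ** ashift

# ashift=9 -> 512-byte sectors (the pool above); ashift=12 -> 4K sectors
print(sector_size(9), sector_size(12))  # 512 4096
```

Since ashift is fixed at vdev creation time, getting it wrong means recreating the pool, which is exactly what the follow-up message below ends up doing.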
Re: WD Advanced Format: do I need to do something special?
On 19/08/2011 03:28, Yuri wrote: Following the instructions here (http://ivoras.net/blog/tree/2011-01-01.freebsd-on-4k-sector-drives.html) I destroyed my previous ZFS pool with 512 byte sectors and did this:

gnop create -S 4096 /dev/ad4
zpool create mypool /dev/ad4.nop
zfs create mypool/mydir
zpool export mypool
gnop destroy /dev/ad4.nop
zpool import mypool

Now the command 'zdb -C data | grep ashift' shows ashift=12 (4096 byte sectors). However, when I begin to copy a lot of files into /mypool/mydir, an online radio player gets severely affected. The sound gets interrupted all the time. Interruptions stop 1-2 secs after I stop copying. This didn't happen with a sector size of 512 bytes. What is wrong? Which version of FreeBSD are you doing this on? Do you have any non-default tuning?
Re: OS X Lion time machine = (afpd|iSCSI) = ZFS question
On 21/07/2011 23:56, Bakul Shah wrote: I got wondering if iSCSI on FreeBSD is stable enough for time machine use. How much duct tape and baling wire are needed to make it work?! iSCSI as in the target (server) function? net/istgt in ports seemed ok last time I tried it.
Re: disable 64-bit dma for one PCI slot only?
On 19/07/2011 07:56, Andrey V. Elsukov wrote: On 19.07.2011 1:22, Scott Long wrote: Btw, I *HATE* the chip and card identifiers used in pciconf. Can we change it to emit the standard (sub)vendor/(sub)device terminology? Oh, yeah. I hate that too. Would you want them as 4 separate entities or to just rename the labels to 'devid' and 'subdevid'? If we're going to change it, might as well break it down into 4 fields. Maybe we retain the old format under a legacy switch and/or env variable for users that have tools that parse the output (cough yahoo cough). Hi, Scott, I think for keeping POLA it is better to add a new option for the new output format. This is a too strict interpretation of POLA! If the change is done for better compliance with standards and it is done in a major version (i.e. 9.0 or 10.0), it's not a matter of POLA (otherwise, the change will never happen).
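For context on the four fields being discussed: pciconf's packed chip= value carries the 16-bit device ID in the high half and the 16-bit vendor ID in the low half, and card= packs subdevice/subvendor the same way. A sketch of the split (the example value is a hypothetical Intel em(4) NIC, not from the thread):

```python
def split_id(packed):
    """Split pciconf's packed 32-bit identifier into (device, vendor):
    device ID in the high 16 bits, vendor ID in the low 16 bits.
    The card= value packs subdevice/subvendor identically."""
    return packed >> 16, packed & 0xFFFF

# e.g. chip=0x10268086: device 0x1026, vendor 0x8086 (Intel)
device, vendor = split_id(0x10268086)
print(hex(device), hex(vendor))
```

Emitting vendor/device/subvendor/subdevice as separate fields would spare every parser from doing this bit-slicing itself.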
Re: Status of support for 4KB disk sectors
On 19.7.2011. 19:54, Chuck Swiger wrote: On Jul 18, 2011, at 11:04 PM, Kevin Oberman wrote: I just wish FreeBSD had some decent documentation on such a fundamental operation. Fortunately there are some pretty good articles folks have written, but they did leave me with several questions. Is there something in FreeBSD which is preventing you from using the drive's native DEV_BSIZE of 4096 bytes, or is it that the drive claims to have a physical block size of 512 bytes when it is really 4k? Nope, it's only the latter. The current state of the matter (i.e. for 9.0) is: * The new ATA driver has quirks for certain models of HDDs which falsely advertise 512-byte sectors; the quirk (which can be manually enabled with kern.cam.ada.X.quirks=1) makes the drive report a stripesize of 4k. * This information is used in gpart to align partitions; it will also be used by the new installer. * The default fragment size for UFS was raised to 4K, so file systems will be aligned by default.
Re: UFS SU+J
On 29/06/2011 23:03, Mark Saad wrote: The svn sources are here http://svn.freebsd.org/base/projects/suj/8/ . Why would suj not make it into 8-STABLE ? The patch is too large, and it changes a lot of important, known and working code (like softupdates). In other words, it's too risky.
csh Cannot open /etc/termcap after starting screen
Hello, This *looks* like it should be a trivial problem (or at least an often-encountered one) but short of debugging both screen and tcsh, I have no idea what to do next... On several machines (seemingly random, some are running 7-stable, others 8-stable), I get this message after starting screen, written on the newly created screen: csh: Cannot open /etc/termcap. csh: using dumb terminal settings. The problem is: this also happens when I'm doing it as the root user, and /etc/termcap is a symlink to /usr/share/misc/termcap, which definitely exists and is readable. To make it even stranger, it looks like the environment contains something which seems to be valid termcap data: lara:/home/ivoras# setenv STY=58859.pts-13.lara TERM=screen TERMCAP=SC|screen|VT 100/ANSI X3.64 virtual terminal:\ :DO=\E[%dB:LE=\E[%dD:RI=\E[%dC:UP=\E[%dA:bs:bt=\E[Z:\ :cd=\E[J:ce=\E[K:cl=\E[H\E[J:cm=\E[%i%d;%dH:ct=\E[3g:\ :do=^J:nd=\E[C:pt:rc=\E8:rs=\Ec:sc=\E7:st=\EH:up=\EM:\ :le=^H:bl=^G:cr=^M:it#8:ho=\E[H:nw=\EE:ta=^I:is=\E)0:\ :li#48:co#104:am:xn:xv:LP:sr=\EM:al=\E[L:AL=\E[%dL:\ :cs=\E[%i%d;%dr:dl=\E[M:DL=\E[%dM:dc=\E[P:DC=\E[%dP:\ :im=\E[4h:ei=\E[4l:mi:IC=\E[%d@:ks=\E[?1h\E=:\ :ke=\E[?1l\E:vi=\E[?25l:ve=\E[34h\E[?25h:vs=\E[34l:\ :ti=\E[?1049h:te=\E[?1049l:us=\E[4m:ue=\E[24m:so=\E[3m:\ :se=\E[23m:mb=\E[5m:md=\E[1m:mh=\E[2m:mr=\E[7m:\ :me=\E[m:ms:\ :Co#8:pa#64:AF=\E[3%dm:AB=\E[4%dm:op=\E[39;49m:AX:\ :vb=\Eg:as=\E(0:ae=\E(B:\ :ac=\140\140aaffggjjkkllmmnnooppqqrrssttuuvvwwxxyyzz{{||}}~~..--++,,hhII00:\ :k0=\E[10~:k1=\EOP:k2=\EOQ:k3=\EOR:k4=\EOS:k5=\E[15~:\ :k6=\E[17~:k7=\E[18~:k8=\E[19~:k9=\E[20~:k;=\E[21~:\ :F1=\E[23~:F2=\E[24~:F3=\E[25~:F4=\E[26~:F5=\E[28~:\ :F6=\E[29~:F7=\E[31~:F8=\E[32~:F9=\E[33~:FA=\E[34~:kb=^?:\ :K2=\E[G:kh=\E[1~:@1=\E[1~:kH=\E[4~:@7=\E[4~:kN=\E[6~:\ :kP=\E[5~:kI=\E[2~:kD=\E[3~:ku=\EOA:kd=\EOB:kr=\EOC:\ :kl=\EOD: WINDOW=0 SHELL=/bin/csh The shell and all started programs are misbehaving and/or treating the terminal as dumb.
For example, mc writes this: lara:/home/ivoras# mc Unknown terminal: screen Check the TERM environment variable. Also make sure that the terminal is defined in the terminfo database. Alternatively, set the TERMCAP environment variable to the desired termcap entry. There really isn't a termcap line in /etc/termcap beginning with ^screen, but there is one beginning with ^SC containing the entry which is also in the environment listing above (which fails with the same error if I set it). The system works if I set some other terminal type like xterm. Any ideas? Why is the screen terminal type so special?
Re: [poll] hyperthreading_allowed, hlt_logical_cpus, mp_watchdog
On 24/05/2011 15:21, Andriy Gapon wrote: I am planning on some changes in head and would like to see if people use the following features: - machdep.hyperthreading_allowed tunable and sysctl - machdep.hlt_logical_cpus tunable and sysctl - mp_watchdog kernel option If you are using any of the above, please let me know - better via a private reply: - which exactly of the mentioned above features you use - please make distinction between use of tunables and sysctls - tell for what you use the feature - provide overview of your hardware, which scheduler you use and intended purpose of the system Whatever you do, please leave at least some way (at least a tunable) to enable/disable HTT - some workloads are better with, and some without it, and some BIOSes are unreliable in enabling/disabling it :)
Re: Is machdep.cpu_idle_hlt deprecated?
On 02/05/2011 19:56, Jung-uk Kim wrote: On Monday 02 May 2011 10:48 am, Bruce Cran wrote: On Sat, 30 Apr 2011 21:20:28 -0700 Jeremy Chadwickfree...@jdc.parodius.com wrote: Anyone know if machdep.cpu_idle_hlt still exists? Taken from acpi(4) on RELENG_8: It looks like it might have been replaced by machdep.idle: machdep.idle: currently selected idle function machdep.idle_available: list of available idle functions machdep.idle: acpi machdep.idle_available: spin, hlt, acpi, It seems machdep.cpu_idle_hlt was deprecated long ago with this commit by jeff (CC'ed): http://svnweb.freebsd.org/base?view=revision&revision=178471 How likely is this to affect real-world performance on e.g. a busy web server? As the comments say, the fastest are spin and mwait and the slowest is acpi, which is also the default. I have no reference point for what fast and slow mean in this context :)
Re: correct way to setup gmirror on 7.4?
On 28/04/2011 17:02, Edho P Arief wrote: On Thu, Apr 28, 2011 at 9:40 PM, Freddie Cashfjwc...@gmail.com wrote: Granted, there may be reasons why it wasn't done like this in the beginning, but my non-GEOM programmer's eyes can't see any. I believe one of the reasons is that it would prevent conversion from a non-gmirror disk to a gmirror one, as explained here http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/geom-mirror.html Actually, storing any kind of metadata in the first sector can lead to weird and unexpected problems with some buggy disk controllers which parse the MBR for their own (wrong) reasons. I personally have a disk controller which hangs on boot if the MBR contains anything but primary partitions of DOS type (even changing the partition type makes it hang), and that is not the only disk controller I've seen with this type of bug. The second reason is that storing anything except the MBR in the first sector makes the drive non-bootable (even if the controller is ok), and it is kind of nice to be able to make a cheap soft-RAID1 from two ordinary (S)ATA drives.
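The "cheap soft-RAID1 from two ordinary (S)ATA drives" mentioned above can be sketched like this (ada0/ada1 are illustrative device names, not from the post; this destroys data on both drives):

```sh
# gmirror keeps its metadata in the LAST sector of each component, so the
# first sector stays free for a normal, bootable MBR.
gmirror label -v -b round-robin gm0 /dev/ada0 /dev/ada1
echo 'geom_mirror_load="YES"' >> /boot/loader.conf
# From here on, partition and install onto /dev/mirror/gm0 as a single disk.
```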
Re: GELI speed
On 30/03/2011 02:48, Clayton Milos wrote: Now on 8.2-RELEASE when I run geli onetime -s 4096 gzero it crashes the box with a kernel fault. You need to obtain information about the crash. Add a line to /etc/rc.conf: dumpdev="AUTO" (assuming you have decent swap space on an unencrypted drive), then reboot, make it crash and look at the information written to /var/crash. Post at least the panic backtrace.
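The suggested crash-dump setup, spelled out as a sketch (standard FreeBSD workflow; the vmcore file name below is illustrative and depends on how many dumps have already been saved):

```sh
# /etc/rc.conf - let rc(8) pick a dump device from the configured swap areas
dumpdev="AUTO"
# After the next panic and reboot, savecore(8) stores the dump here:
ls /var/crash
# Extract the panic backtrace from the saved core:
kgdb /boot/kernel/kernel /var/crash/vmcore.0
```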
Re: tmpfs is zero bytes (no free space), maybe a zfs bug?
On 7 February 2011 14:37, Gleb Kurtsou gleb.kurt...@gmail.com wrote: It's up to the user to mount tmpfs filesystems of reasonable size to prevent resource exhaustion. Anyway, an enormously large tmpfs killing all your processes is not the way to go. Of course not, but as I see it (from an admin perspective), tmpfs should behave as close to regular processes in consuming memory as possible (where possible; obviously it cannot be subject to the OOM killer :) ). The problem described in this thread is that there is enough memory in various lists and tmpfs still reports 0 bytes free. See my message: the machine had more than 8 GB of free memory (reported by top) and still 0 bytes free in tmpfs - and that's not counting inactive and other forms of used memory which could be freed or swapped out (and also not counting swap). By "as close to regular processes in consuming memory" I mean that I would expect tmpfs to allocate from the same total pool of memory as processes and be subject to the same VM mechanisms, including swap. If that is not possible, I would (again, as an admin) like to extend the tmpfs(5) man page and other documentation with information about which types of memory will and will not count towards what is available to tmpfs. Unless there are objections, I'm planning to do the following: 1. By default set tmpfs size to max(all swap/2, all memory/2) and print a warning that the filesystem size should be specified manually. Max(swap/2,mem/2) is used as a band-aid for the case when no swap is set up. You mean as a reservation, a maximum limit or something else? If a tmpfs with a size of e.g. 16 GB is configured, will the memory be preallocated? wired? I don't think there should be default hard size limits for tmpfs - it should be able to hold sudden bursts of large temp files (using swap if needed), but that could be achieved by configuring a tmpfs whose size is RAM+swap if the memory is not preallocated, so not a big problem. 3. Remove live filesystem size checks, i.e.
do not depend on free/inact memory. I'm for it, if it's possible in the light of #1. 2. Add support for resizing tmpfs on the fly: mount -u -o size=newsize /tmpfs ditto. Reserving swap for tmpfs might not be what the user expects: generally I use tmpfs as a work dir for building ports; it's unused most of the time. It looks like we think the opposite of each other :) I would like it to be swapped out if needed, making room for running processes etc. as the regular VM paging algorithms decide. Of course, if that could be controlled with a flag we'd both be happy :) btw, what do linux and opensolaris do when available mem/swap gets low due to tmpfs, and how is the filesystem size determined at run-time? There's some information here: http://en.wikipedia.org/wiki/Tmpfs
Re: Is /etc/rc.conf scriptable?
On 01/02/2011 13:35, Yue Wu wrote: Hi list, I'm trying to do something to make rc.conf act conditionally. Technically, yes, it can be done, but you shouldn't. It is in essence a shell script, as it is sourced by other shell scripts, but that's only because that approach is easiest to implement. Other tools may read rc.conf and they could break if they find something unexpected there.
Re: TRIM support in UFS - any chance of it in ZFS ?
On 31/01/2011 14:41, Pete French wrote: Just saw that the TRIM support for UFS has been MFC'd. Excellent stuff. I was wondering if there were any plans to do similar for ZFS at all ? AFAIK it isn't yet supported in upstream ZFS.
Re: Gpart and gmirror 8.2 from 18 januari
On 19/01/2011 12:30, Johan Hendriks wrote: Hello all, I used to have disks configured with gpart and gmirror. But with the latest 8.2, my server will not boot anymore if I label the disk with gmirror. gpart status: Name Status Components ad4p1 OK ad4 Then I do a gmirror label -v -b load gm0 /dev/ad4, edit /etc/fstab and change /dev/ad4px to /dev/mirror/gm0px. I reboot, and it hangs when trying to mount the root device. I get an error about a corrupt gpt label. Yes, GPT has the unfortunate property that it records its data both at the beginning of a drive and at the end, so you cannot use it this way (because gmirror wants the last sector for itself). I haven't tried it, but I think from the GPT specification that it records where the secondary table is, so maybe you could do it the other way around: first do the gmirror configuration, then create GPT partitions within the gmirror device (i.e. on /dev/mirror/gm0, not on /dev/ad4).
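The "other way around" suggested above would look roughly like this (untested sketch; ad4 follows the post, the partition layout is illustrative):

```sh
gmirror label -v -b load gm0 /dev/ad4      # mirror the raw disk first
gpart create -s gpt /dev/mirror/gm0        # GPT lives inside the mirror device,
gpart add -t freebsd-ufs /dev/mirror/gm0   # which is one sector smaller, so the
                                           # backup GPT header no longer collides
                                           # with gmirror's metadata in the
                                           # disk's last sector
# /etc/fstab then refers to /dev/mirror/gm0p1, never to /dev/ad4p1
```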
Re: Gpart and gmirror 8.2 from 18 januari
On 21/01/2011 13:56, Johan Hendriks wrote: Ok the funny thing is, i get the same error on 8.1 Release (the corrupt error), but it boots, and all seems to work. Maybe the boot process was made to be more standard-compliant :)
Re: Gpart and gmirror 8.2 from 18 januari
On 21/01/2011 14:22, Andrey V. Elsukov wrote: On 21.01.2011 16:03, Ivan Voras wrote: On 21/01/2011 13:56, Johan Hendriks wrote: Ok the funny thing is, i get the same error on 8.1 Release (the corrupt error), but it boots, and all seems to work. Maybe the boot process was made to be more standard-compliant :) The strangest thing is that UFS's label ufsid/4b9545d7d72d5019 is represented as the whole disk where the GPT is located. This is how glabel works - if anything within a provider recognizes it as its own (e.g. a file system), the whole provider is labeled for it. Or are you thinking about something else? If you first did gmirror, then gpt, then newfs, the UFS label should be created with the same data as the gpt partition, not the whole disk.
Re: tmpfs is zero bytes (no free space), maybe a zfs bug?
On 19/01/2011 11:09, Attila Nagy wrote: On 01/19/11 09:46, Jeremy Chadwick wrote: On Wed, Jan 19, 2011 at 09:37:35AM +0100, Attila Nagy wrote: I first noticed this problem on machines with more memory (32GB eg.), but now it happens on 4G machines too: tmpfs 0B 0B 0B 100% /tmp FreeBSD builder 8.2-PRERELEASE FreeBSD 8.2-PRERELEASE #0: Sat Jan 8 22:11:54 CET 2011 Maybe it's related, that I use zfs on these machines... Sometimes it grows and shrinks, but generally there is no space even for a small file, or a socket to create. http://lists.freebsd.org/pipermail/freebsd-stable/2011-January/060867.html Oh crap. :( I hope somebody can find the time to look into this, it's pretty annoying... http://people.freebsd.org/~ivoras/diffs/tmpfs.h.patch I don't think this is a complete solution but it's a start. If you can, try it and see if it helps.
Re: tmpfs is zero bytes (no free space), maybe a zfs bug?
On 19 January 2011 16:02, Kostik Belousov kostik...@gmail.com wrote: http://people.freebsd.org/~ivoras/diffs/tmpfs.h.patch I don't think this is a complete solution but it's a start. If you can, try it and see if it helps. This is not a start, and actually a step in the wrong direction. Tmpfs is wrong now, but the patch would make the wrongness even bigger. The issue is that the current tmpfs calculation should not depend on the length of the inactive queue or the amount of free pages. This data only measures the pressure on the pagedaemon, and has absolutely no relation to the amount of data that can be put into anonymous objects before the system runs out of swap. The vm_lowmem handler is invoked in two situations: - when KVA cannot satisfy a request for space allocation; - when the pagedaemon has to start a scan. Neither situation has any direct correlation with the question that tmpfs needs to answer, that is: "Is there enough swap to keep all my future anonymous memory requests?" It might be that swap reservation numbers can be useful for the tmpfs reporting. It also might be that tmpfs should reserve swap explicitly on start, instead of attempting to guess how much can be allocated at a random moment. Thank you for your explanation! I'm still not very familiar with VM and VFS. Could you also read my report at http://www.mail-archive.com/freebsd-current@freebsd.org/msg126491.html ? I'm curious about the fact that there is lots of 'free' memory here in the same situation. Do you think that there is something which can be done as a band-aid without a major modification to tmpfs?
Re: 8.2-PRERELEASE: live deadlock, almost all processes in pfault state
On 08/01/2011 23:06, Lev Serebryakov wrote: I need to look at how raid3 and vinum/raid5 live with that situation. One other standard solution is to spawn a thread and offload the job to that thread, instead of doing it within GEOM start(). This is what most current complex GEOM classes do.
Re: 8.2-PRERELEASE: live deadlock, almost all processes in pfault state
On 08/01/2011 20:42, Lev Serebryakov wrote: Hello, Kostik. You wrote on 8 January 2011 at 22:02:32: If I am guessing right, this creature has a classic deadlock when bio processing requires memory allocation. It seems that tid 100079 is sleeping not even due to the free page shortage, but due to address space exhaustion. As a result, read/write requests are stalled. I want to say that ZFS, for example, could allocate much more memory, and, yes, it had problems on i386 with this, but not on amd64, AFAIK... So, I (geom_raid5) am doing something wrong... geom_raid5 (I'm assuming you're talking about the module that was written some time ago by an external developer) does several things wrong - that's why it wasn't included in FreeBSD. IIRC, one of those things is that it aggressively caches writes below the file system layer, which is a no-no.
Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks
On 12/30/10 12:40, Damien Fleuriot wrote: I am concerned that in the event a drive fails, I won't be able to repair the disks in time before another actually fails. An old trick to avoid that is to buy drives from different series or manufacturers (the theory is that identical drives tend to fail at the same time), but this may not be applicable if you have 5 drives in a volume :) Still, you can try playing with RAIDZ levels and probabilities.
Re: tmpfs runs out of space on 8.2pre-release, zfs related?
On 01/02/11 09:41, miyamoto moesasji wrote: miyamoto moesasjimiyamoto.31bat gmail.com writes: In setting up tmpfs (so not tmpmfs) on a machine that is using zfs (pool v15, zfs v4) on 8.2-prerelease, I run out of space on the tmpfs when copying a ~4.6 GB file from the zfs filesystem to the memory disk. This machine has 8GB of memory backed by swap on the hard disk, so I expected the file to copy to memory without problems. This is in fact worse than I first thought. After leaving the machine running overnight the tmpfs is reduced to a size of 4K, which shows that tmpfs is in fact completely unusable for me. See the output of df: --- h...@pulsarx4:~/ df -hi /tmp Filesystem Size Used Avail Capacity iused ifree %iused Mounted on tmpfs 4.0K 4.0K 0B 100% 18 0 100% /tmp This is a known problem. So far, no solution has been offered, which means that effectively, tmpfs cannot be used with ZFS on the same system.
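Until the sizing logic is fixed, one common workaround is to give tmpfs an explicit size so that it never consults the (miscounted) free-memory estimate. An illustrative /etc/fstab line (the 4 GB limit is an example value, not from the thread; the size option is in bytes):

```
# device  mountpoint  fstype  options                       dump  pass
tmpfs     /tmp        tmpfs   rw,mode=1777,size=4294967296  0     0
```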
Re: 8.1-STABLE Unexpected XML: what does it mean?
On 14/12/2010 15:27, Eugene Mitrofanov wrote: Hi, I observe a very strange message while running a lot of commands: r...@beaver:eugene# glabel status Unexpected XML: name=stripesize data=18432 Unexpected XML: name=stripeoffset data=0 Maybe you have a label or some other custom device name with non-ascii characters, or with characters which break xml parsing?
Re: 8.1-STABLE Unexpected XML: what does it mean?
On 14/12/2010 16:19, Eugene Mitrofanov wrote: How can I locate this device? Also, I don't understand why mdconfig complains. Try examining the output of # sysctl -b kern.geom.confxml
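If eyeballing the raw XML is awkward, the interesting fields can be pulled out with a small pipeline. A sketch (the `sample_confxml` helper below fakes the `sysctl -b kern.geom.confxml` output with an assumed sample; on a real system you would pipe the sysctl output in directly):

```shell
# Extract <stripesize> values from GEOM config XML to spot providers with
# odd values. sample_confxml stands in for `sysctl -b kern.geom.confxml`;
# the snippet's shape is assumed, modeled on typical confxml output.
sample_confxml() {
  cat <<'EOF'
<provider id="0xff00029f6100">
  <name>fd0</name>
  <mediasize>1474560</mediasize>
  <stripesize>18432</stripesize>
  <stripeoffset>0</stripeoffset>
</provider>
EOF
}
# prints: 18432
sample_confxml | sed -n 's/.*<stripesize>\([0-9]*\)<\/stripesize>.*/\1/p'
```

On FreeBSD the same pipeline would be `sysctl -b kern.geom.confxml | sed -n ...`.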
Re: 8.1-STABLE Unexpected XML: what does it mean?
On 14/12/2010 16:29, Eugene Mitrofanov wrote: On Tuesday 14 December 2010, Ivan Voras wrote: On 14/12/2010 16:19, Eugene Mitrofanov wrote: How can I locate this device? Also, I don't understand why mdconfig complains. Try examining the output of # sysctl -b kern.geom.confxml Here they are: <mesh> <class id="0x807a5a20"> <name>FD</name> <geom id="0xff00029f6200"> <class ref="0x807a5a20"/> <name>fd0</name> <rank>1</rank> <provider id="0xff00029f6100"> <geom ref="0xff00029f6200"/> <mode>r0w0e0</mode> <name>fd0</name> <mediasize>1474560</mediasize> <sectorsize>512</sectorsize> <stripesize>18432</stripesize> <stripeoffset>0</stripeoffset> </provider> </geom> </class> Looks ok so far (except the weird stripesize). Sorry, I have no idea what is broken here. Try updating and rebuilding world in case you have some rare corruption.
Re: High cpu usage when using ZFS cache device
On 11/16/10 08:16, Christer Solskogen wrote: On Tue, Nov 16, 2010 at 1:30 AM, Brian Reichertreich...@numachi.com wrote: On Mon, Nov 15, 2010 at 09:50:50PM +0100, Christer Solskogen wrote: My load on my i7 920 is certainly higher when I add an 8GB usb stick as a ZFS cache device. USB 1.0? 2.0? Dunno even if that would make a difference... This is USB 2.0. I didn't know USB had so much effect on the cpu. You can easily test it - use the stick as a simple disk device with UFS and see how much CPU it takes simply to talk to the device.
Re: High cpu usage when using ZFS cache device
On 16 November 2010 13:15, Christer Solskogen christer.solsko...@gmail.com wrote: On Tue, Nov 16, 2010 at 12:47 PM, Ivan Voras ivo...@freebsd.org wrote: You can easily test it - use the stick as a simple disk device with UFS and see how much CPU it takes simply to talk to the device. See, that is why I think it is a ZFS issue, because I did just that. I created a UFS filesystem on the same usb stick, mounted it and did a dd if=/dev/zero of=/mnt/file. The system load goes up +0.6 instead of +10.3. See: CPU: 0.0% user, 0.0% nice, 0.6% system, 0.0% interrupt, 99.3% idle Mem: 832M Active, 960M Inact, 7017M Wired, 2600K Cache, 1237M Buf, 3063M Free Swap: 8192M Total, 8192M Free PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND 38261 root 1 46 0 5776K 1112K wdrain 7 0:07 4.98% dd But when using it as a cache device for zfs: CPU: 0.0% user, 0.0% nice, 11.9% system, 0.0% interrupt, 88.1% idle Mem: 832M Active, 193M Inact, 5782M Wired, 2592K Cache, 1237M Buf, 5066M Free Swap: 8192M Total, 8192M Free The funny thing is that when I add the device (and some cache is added to it) the load is normal, but the load goes up when nothing is being written to it (or read from it). You mean you have system load on an otherwise idle system? Try this: 1) start top with the parameters -H -S and see if anything is using CPU time; 2) start gstat and see if anything is doing IO, and whether it's particularly slow or keeping the device too busy.
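The two checks suggested above, as commands (plain defaults; run them while the load is high):

```sh
top -H -S    # include kernel threads and system processes; look for a CPU hog
gstat        # per-provider GEOM IO stats; watch %busy and latency on the stick
```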
rpcbind, rpc.statd memory footprint
I'm not sure what to expect from these (i.e. what is normal in this case?) but the VM sizes for the NFS-related rpc.statd and rpcbind here look a bit too big compared to their resident sizes: 778 root 1 44 0 26420K 3256K select 1 0:01 0.00% rpcbind 891 root 1 44 0 263M 1296K select 1 0:01 0.00% rpc.statd This is 8-stable amd64. Could there be a memory leak somewhere, especially in rpc.statd?
Re: kpanic on install 32GB of RAM [SEC=UNCLASSIFIED]
On 10/21/10 21:06, Kostik Belousov wrote: On Thu, Oct 21, 2010 at 09:50:03AM -0700, Sean Bruno wrote: On Thu, 2010-10-21 at 05:48 -0700, Andriy Gapon wrote: on 20/10/2010 21:28 Sean Bruno said the following: I guess, I could replace the kernel on the CD and have them reburn it? That should work. BTW, here I described yet another way of building custom recovery/installation CDs that I use: http://wiki.freebsd.org/AvgLiveCD Before I get started on this, it looks like something else is going on. Here is a panic + trace on the latest 9-current snapshot. hammer time indeed. Suggestions are welcome! http://people.freebsd.org/~sbruno/9-current-panic.png http://people.freebsd.org/~sbruno/9-current-trace-panic.png It feels like the msgbufp variable has an absurd value. Can you arrange to get the output of a verbose boot, esp. the SMAP lines? This is probably completely wrong for this problem but on the off chance it isn't, maybe it will give someone an idea: I remember in the old times (tm) that there was a trick by which the msgbuf is supposed to be preserved across soft reboots. I don't know the details, and it might just be valid for i386, but part of that deal could be that some code tries to parse that memory area for a valid msgbuf and, due to some garbage, fails with such a panic.
Re: repeating crashes with 8.1
On 10/21/10 21:08, Randy Bush wrote: FreeBSD 8.1-STABLE #2: Thu Oct 21 15:30:45 UTC 2010 r...@rip.psg.com:/usr/obj/usr/src/sys/RIP amd64 console recording em0: discard frame w/o packet header (repeated many times) panic: sbflush_internal: cc 4294965301 || mb 0 || mbcnt 0 cpuid = 0 panic: bufwrite: buffer is not busy??? What does the machine do? Does it perhaps have 6to4 (stf) enabled?
Re: repeating crashes with 8.1
On 10/22/10 16:25, Mike Tancsa wrote: At 10:18 AM 10/22/2010, Randy Bush wrote: Do you know how this panic is triggered? Are you able to create it on demand? no i do not. bring server up and it'll happen in half an hour. and the server was happy for two months. so i am thinking hardware. Perhaps. The reason I ask is that I had a box go down last night with the same set of errors. The box has a number of ipv6 routes, but its next hop was down and the problems started soon after. So I wonder if it has something to do with that. Do you have ipv6 on this box, and are all the next hop addresses correct / reachable? Oct 22 02:06:02 i4 kernel: em1: discard frame w/o packet header Oct 22 02:06:10 i4 kernel: em2: discard frame w/o packet header Oct 22 02:06:21 i4 kernel: em1: discard frame w/o packet header FWIW, I had a series of crashes with those characteristics, which I suspected were IPv6- or 6to4-related, on 8.0-RELEASE and 8-STABLE; they eventually made me give up on IPv6 - it was crashing too often (a couple of times a week). I have at least a couple of threads on this on freebsd-net@ (without resolution).
Re: Reproducible Kernel Panic on 8.1-STABLE [SEC=UNCLASSIFIED]
On 10/15/10 03:43, Wilkinson, Alex wrote: On Thu, Oct 14, 2010 at 04:51:10PM +0800, Wilkinson, Alex wrote: On Thu, Oct 14, 2010 at 02:13:27PM +0600, Sergey Nikolenko wrote: On 14.10.2010 09:26, Wilkinson, Alex wrote: I have come across a bug that triggers a kernel panic on 8.1-STABLE (r213395) through the use of /usr/ports/sysutils/fusefs-sshfs. Typically I do an sshfs mount as such: #sshfs usern...@hostname:/home/username local_mountpoint/ This mounts the remote filesystem fine. However, when I edit and save a file in say vi on the remote sshfs, I get the following panic every time: Try this out http://www.freebsd.org/cgi/query-pr.cgi?pr=149674 Yes! GREAT! This patch fixes the kernel panic! Can we get this committed ASAP? Committed! How stable is fuse sshfs lately? It looks like every time in the past I tried it, I soon ended up panicking the system.
Re: zfs hang in zio->io_cv with dd read
On 10/07/10 14:15, John Hay wrote: Hi, I got hold of a SunFire X4500 with 48 x 500G disks and thought to try FreeBSD 8-stable with zfs on it. I have set up the two boot disks in a zfs mirror and the rest in a pool of 6 x raidz2 of 7 disks each. I have created a 10G file with dd in the second pool, but if I try to read it with dd, dd will hang in zio->io_cv according to ^T. This happens every time. The first time I saw messages about an interrupt storm, so I have put hw.intr_storm_threshold=1 in /etc/sysctl.conf. According to systat -vm 1 there is atapci interrupt activity for 2-3 seconds and then it is quiet. There are two things you could try: 1) use the AHCI driver (ahci_load="YES" in /boot/loader.conf) and 2) disable superpages; they don't get along with a few models of Opterons (vm.pmap.pg_ps_enabled=0 in /boot/loader.conf).
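The two suggestions correspond to these /boot/loader.conf lines (tunable names as given in the post):

```
ahci_load="YES"             # attach SATA controllers via the newer AHCI driver
vm.pmap.pg_ps_enabled=0     # disable superpages
```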
Re: zfs hang in zio->io_cv with dd read
On 10/07/10 20:25, Andriy Gapon wrote: on 07/10/2010 15:35 Ivan Voras said the following: /boot/loader.conf) and 2) disable superpages, they don't get along on a few models of Opterons (vm.pmap.pg_ps_enabled=0 in /boot/loader.conf). Those who follow know that the issue is supposed to be resolved long ago. Just in case. Yes, it was. OTOH CPU errata lists are so long today I think it's justified to verify the assumptions.
Re: MySQL performance concern
On 10/02/10 22:18, Rumen Telbizov wrote:

pool: tank
config:
        NAME          STATE  READ WRITE CKSUM
        tank          ONLINE    0     0     0
          mirror      ONLINE    0     0     0
            gpt/tank0 ONLINE    0     0     0
            gpt/tank1 ONLINE    0     0     0
          mirror      ONLINE    0     0     0
            gpt/tank2 ONLINE    0     0     0
            gpt/tank3 ONLINE    0     0     0
        logs          ONLINE    0     0     0
          mirror      ONLINE    0     0     0
            gpt/zil0  ONLINE    0     0     0
            gpt/zil1  ONLINE    0     0     0
        cache
          gpt/l2arc0  ONLINE    0     0     0
          gpt/l2arc1  ONLINE    0     0     0

pool: zroot
config:
        NAME           STATE  READ WRITE CKSUM
        zroot          ONLINE    0     0     0
          mirror       ONLINE    0     0     0
            gpt/zroot0 ONLINE    0     0     0
            gpt/zroot1 ONLINE    0     0     0

zroot is a couple of small partitions from two of the same SAS disks. zil and l2arc are 8 and 22G partitions from 32G SSDs. This looks a bit overly complex (your recovery procedure if one of the drives goes bad will include re-creating the partition layout), but it probably shouldn't affect performance. Just to check - mapped to physical drives this looks like this (gpt/ prefix omitted for brevity): * tank0..tank3: on SAS drives * zroot0, zroot1: on some of the same SAS drives as above * zil0, zil1: on SSD drives * l2arc0, l2arc1: on the same SSD drives as above. ARC and ZIL have very different IO characteristics; I don't know if they would interfere with each other. Can you spend some time looking at the output of gstat while the database task is running and see if there's something odd? Like the %busy column going near 100% for some of them? What IO bandwidth and ops/s are you getting? I pretty much have no zfs tuning done since from what I've found there shouldn't be any needed since I'm running 8.1 on a 64bit machine. Let me know if you'd like me to experiment with any ...
Some additional information: # sysctl vm.kmem_size vm.kmem_size: 5539958784 # sysctl vm.kmem_size_max vm.kmem_size_max: 329853485875 # sysctl vfs.zfs.arc_max vfs.zfs.arc_max: 4466216960 I have done some digging myself and it seems that two settings have a noticeable impact on MySQL load: * zfs block size - you need to re-create all mysql files to change this; set to 8 KiB (or whatever MyISAM uses for block size) * reducing vfs.zfs.txg.timeout to about 5 seconds Are you using ZFS compression? See http://jp.planet.mysql.com/entry/?id=19489 for more ideas. Other than that, your CPUs are: New: 2 x Dual Core Xeon E5502 1.87GHz Old: 2 x Xeon Quad E5410 @ 2.33GHz You can see here how different they are: http://en.wikipedia.org/wiki/List_of_Intel_Xeon_microprocessors Specifically, as you are using a single-threaded client, you *need* the additional GHz of the old server. You are quoting 30% CPU usage on the new server - I assume this is the total CPU as reported by utilities like top, iostat, vmstat, etc. - meaning that if the system has four CPU cores, one of them is 100% busy (meaning 25% of the total) and another is about 20% used.
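The per-core arithmetic at the end of that message can be checked with plain sh(1) integer arithmetic (the numbers are from the message itself; nothing else is assumed):

```shell
# 30% total CPU reported on a 4-core box is 30*4 = 120% of one core,
# i.e. one core fully busy plus 20% of a second one.
total=30; ncores=4
echo "$(( total * ncores / 100 )) core(s) fully busy"      # 1
echo "$(( total * ncores % 100 ))% of another core used"   # 20
```

This is why a single-threaded benchmark client can saturate one core while top still reports the machine as mostly idle.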
Re: MFC of ZFSv15
On 09/16/10 12:42, Guido Falsi wrote: Related to this, I have a question. Is it convenient to put databases on a compressed filesystem? Apart from the space advantage, does it give any speed advantage/penalty? It depends on what you do. It will not save you memory either, since data needs to be decompressed when read. If the database is lightly loaded I don't think there will ever be problems. The same goes if the database is mostly read-only. If it's used in a heavily loaded read+write environment or if it is CPU-bound, it is probably a bad idea to put it on a compressed file system. Does anyone have benchmarks or objective data about this? I know about this one: http://don.blogs.smugmug.com/2008/10/13/zfs-mysqlinnodb-compression-update/ But it only really measures copy (cp) speeds and compression, not database performance. Also, are we talking about MyISAM or InnoDB tables? Or a mix of those? MyISAM would probably be faster to compress and manage :) http://www.scribd.com/doc/14603831/Optimizing-MySQL-Performance-with-ZFS
Re: very stupid mistake: a part of /usr is deleted
On 09/15/10 15:36, Zara Kanaeva wrote: Hi all, about 2 hours ago I made a very stupid mistake: I deleted (as root, naturally) a part of the /usr directory. I have definitely deleted .snap and presumably 100-150 files in /usr/bin. uname -a - FreeBSD (XX).uni-tuebingen.de 8.0-RELEASE FreeBSD 8.0-RELEASE #0: Sat Nov 21 15:02:08 UTC 2009 r...@mason.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64 That is actually an easy situation to recover from; you can do it in at least these ways: 1) if you build/upgrade from source, you can either reinstall if you have a working /usr/obj, or try and rebuild if you have a working /usr/src 2) if you have another machine with the same FreeBSD version and architecture, simply copy the missing files (with tar, scp, ftp, fetch/wget, etc.) 3) if you have networking and at least a working fetch / ftp / wget, cat and tar, you can fetch the files at ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/8.0-RELEASE/base/ and use install.sh to reinstall the base binaries. Remember that those files are not magical; you can restore them any way you are able. You can even boot the live CD (from ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/ISO-IMAGES/8.0/), mount the appropriate file system and copy the files from the CD.
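Option 3 might look like the following for the 8.0-RELEASE/amd64 layout quoted above. This is a sketch only - the exact piece names under base/ are assumptions, so verify the directory listing before running anything as root:

```shell
# Fetch the split base distribution and its install script.
mkdir /tmp/base && cd /tmp/base
fetch ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/8.0-RELEASE/base/install.sh
fetch ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/8.0-RELEASE/base/base.aa
# ...fetch every remaining base.?? piece the same way.

# install.sh concatenates the pieces and extracts them over DESTDIR
# (the live / by default) - this overwrites files in /usr/bin etc.,
# so only run it against the exact same release you are repairing.
sh install.sh
```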
Re: AoE driver for FBSD8 or later?
On 09/14/10 15:35, Max Khon wrote: On Tue, Sep 14, 2010 at 5:01 PM, George Mamalakis mama...@eng.auth.gr wrote: thank you very much for your help. The driver works fine; I am able to see all 13T. In case something goes wrong I will inform you. For the time being, everything is OK. I committed the port to the FreeBSD ports tree: ports/net/aoe. ATA over Ethernet seems interesting enough as a concept - any ideas why this code isn't integrated into base?
Re: AoE driver for FBSD8 or later?
On 09/14/10 12:01, George Mamalakis wrote: thank you very much for your help. The driver works fine; I am able to see all 13T. In case something goes wrong I will inform you. For the time being, everything is OK. As it is a relatively uncommon protocol, can you run some tests and describe your experiences with AoE? Both good and bad experiences :) (if you need advice on which tests, try bonnie++, blogbench and randomio, all are under ports/benchmarks)
Re: ipfw: Too many dynamic rules
On 09/09/10 17:39, Gareth de Vaux wrote: Hi again, I use some keep-state rules in ipfw, but get the following kernel message: kernel: ipfw: install_state: Too many dynamic rules when presumably my state table reaches its limit (and I effectively get DoS'd). netstat shows tons of connections in FIN_WAIT_2 state, mostly to my webserver. Consequently net.inet.ip.fw.dyn_count is large too. I can increase my net.inet.ip.fw.dyn_max but the new limit will simply be reached later on. For what it's worth, here's what I've been running: net.inet.ip.fw.dyn_buckets=1024 net.inet.ip.fw.dyn_max=8192 net.inet.ip.fw.dyn_ack_lifetime=60 If in a tight spot, I might reduce dyn_ack_lifetime to 10. There is no way this machine would service 8192 legitimate simultaneous connections so this works for me. If you have the memory I think you can increase dyn_max practically arbitrarily. If under a DDoS attack, you might run out of some other resource, like ephemeral TCP ports for the server side of connections, before running out of ipfw entries.
Re: 8-stable crashes in vmware (possible em driver issue?)
On 02/25/10 01:23, Ivan Voras wrote: Ivan Voras wrote: I have a fairly recent 8-stable machine running under VMWare ESXi 3.5 (amd64 guest), which apparently crashes every few days from the same causes: em0: discard frame w/o packet header em0: discard frame w/o packet header em0: discard frame w/o packet header Panic string: sbsndptr: sockbuf 0xff007cca8c20 and mbuf 0xff00490a6400 clashing In case someone is interested or has an idea - on this machine I have multiple crashed cores with similarly strange problems all connected with networking and/or the em driver: ... It looks like the most probable culprit is stf (6to4) support and/or something dealing with stf routing (the machine was a stf gateway for a small subnet). When disabled, the crashes stop.
Re: Tuning the scheduler? Desktop with a CPU-intensive task becomes rapidly unusable.
On 09/01/10 15:08, jan.gr...@bristol.ac.uk wrote: I'm running -STABLE with a kde-derived desktop. This setup (which is pretty standard) is providing abysmal interactive performance on an eight-core machine whenever I try to do anything CPU-intensive (such as building a port). Basically, trying to build anything from ports rapidly renders everything else so non-interactive in the eyes of the scheduler that, for instance, switching between virtual desktops (I have six of them in reasonably frequent use) takes about a minute of painful waiting on redraws to complete. Are you sure this is about the scheduler and not, say, bad X11 drivers? Once I pay attention to any particular window, the scheduler rapidly (like, in 15 agonising seconds or so) decides that the processes associated with that particular window are interactive and performance there picks up again. But it only takes 10 seconds (not timed; ballpark figures) or so of inattention for a window's processes to lapse back into a low-priority state, with the attendant performance problems. Windows in X11 have nothing to do with the scheduler (contrary to MS Windows, where the OS actually re-nices processes whose windows have focus) - here you are just interacting with a process. I don't think my desktop usage is particularly abnormal; I doubt my level of frustration is, either :-) I think the issue here is that a modern I'm writing this on a quad-core Core2 machine with 4 GB RAM, amd64 arch, Radeon 2500 HD, with KDE4 with most of the 3D visual effects turned on. I have not yet experienced problems like you describe. On the other hand, I have noticed that a 2xQuad-core machine I have access to has more X11 interactivity problems than this single quad-core machine, though again not as serious as yours. I don't know why this is. From the hardware side it might be the shared FSB, or from the software side it might be the scheduler.
If you want to try something, I think it's easier for you to disable one CPU in the BIOS, or pin X.org and its descendant processes to the CPUs of a single socket, than to diagnose scheduler problems. but compared to the performance under sched_4bsd, what I'm seeing is an atrocious user experience. It would be best if you could quantify this in some way. I have no idea how.
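The pinning suggestion can be done with cpuset(1). The CPU list and the pgrep pattern below are assumptions for a 2 x quad-core box; adjust them to the actual topology:

```shell
# Confine a running Xorg (and anything it subsequently forks) to
# cores 0-3, i.e. one physical socket; adjust -l to your topology.
cpuset -l 0-3 -p "$(pgrep -n Xorg)"

# Already-running descendants keep their old sets; to confine the
# whole session from the start, launch it under cpuset instead:
# cpuset -l 0-3 startx
```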
NFS uid/gid mapping
hi, I can't seem to find how to manually remap uid gid information while using NFS, e.g. something similar to this: http://www.kernelcrash.com/blog/nfs-uidgid-mapping/2007/09/10/ Is such mapping really unimplemented?
Re: ZFS performance question
On 08/20/10 12:30, Heinrich Rebehn wrote: I am somewhat concerned about the numbers for per-char output and per-char input. In fact, I have never before seen such low numbers in a bonnie test. Using a single disk with UFS yields about 6 times as much. BTW: Running OpenSolaris on the same hardware yields 110306 for per-char write and 94698 for per-char read. per-char stats differ between operating systems because of how they are implemented. Apparently, bonnie++ forces full disk writes (fsyncs) for each byte written on BSDs, but Linux (and apparently Solaris) somehow manage to write-cache this (or at least cache it much more). It only matters if you have software which depends on this caching and performs slowly otherwise.
Re: Inconsistent IO performance
On 13.8.2010 18:01, Kevin Oberman wrote: For some time I have seen very odd issues with IO performance on 8-STABLE. Going back to November of last year when 8.0 was released, I see variations of up to 22% in identical operations. This is not a degradation, as the performance moves up and down. In the 8.0-8.1 span of time there was some work on the ata driver to make it use MAXPHYS (128 KiB) transfer sizes instead of 64 KiB. Modifying this will involve changing and recompiling the kernel, but if you want to try something and the hardware is SATA you might try the new AHCI driver (ada). http://ivoras.net/blog/tree/2009-11-17.trying-ahci-in-8.0.html
Re: 8-STABLE Slow Write Speeds on ESXI 4.0
On 9 August 2010 16:55, Joshua Boyd boy...@jbip.net wrote: On Sat, Aug 7, 2010 at 1:58 PM, Ivan Voras ivo...@freebsd.org wrote: On 7 August 2010 19:03, Joshua Boyd boy...@jbip.net wrote: On Sat, Aug 7, 2010 at 7:57 AM, Ivan Voras ivo...@freebsd.org wrote: It's unlikely they will help, but try: vfs.read_max=32 for read speeds (but test using the UFS file system, not as a raw device like above), and: vfs.hirunningspace=8388608 vfs.lorunningspace=4194304 for writes. Again, it's unlikely but I'm interested in the results you achieve. This is interesting. Write speeds went up to 40MBish. Still slow, but 4x faster than before. [r...@git ~]# dd if=/dev/zero of=/var/testfile bs=1M count=250 250+0 records in 250+0 records out 262144000 bytes transferred in 6.185955 secs (42377288 bytes/sec) [r...@git ~]# dd if=/var/testfile of=/dev/null 512000+0 records in 512000+0 records out 262144000 bytes transferred in 0.811397 secs (323077424 bytes/sec) So read speeds are up to what they should be, but write speeds are still significantly below what they should be. Well, you *could* double the size of the runningspace tunables and try that :) Basically, in tuning these two settings we are cheating: increasing read-ahead (read_max) and write in-flight buffering (runningspace) in order to hand off as much IO to the controller (in this case vmware) as soon as possible, to mask the horrible IO-caused context switches vmware has. It will help sequential performance, but nothing can help random IOs. Hmm. So what you're saying is that FreeBSD doesn't properly support the ESXi controller? Nope, I'm saying you will never get raw disk-like performance with any full virtualization product, regardless of specifics. If you want performance, go OS-level (like jails) or some example of paravirtualization. I'm going to try 7.3-RELEASE today, just to make sure that this isn't a regression of some kind. It seems from reading other posts that this used to work properly and satisfactorily.
Nope, I've been messing around with VMWare for a long time and the performance penalty was always there.
Re: 8-STABLE Slow Write Speeds on ESXI 4.0
On 9.8.2010 17:12, Ivan Voras wrote: On 9 August 2010 16:55, Joshua Boyd boy...@jbip.net wrote: On Sat, Aug 7, 2010 at 1:58 PM, Ivan Voras ivo...@freebsd.org wrote: I'm going to try 7.3-RELEASE today, just to make sure that this isn't a regression of some kind. It seems from reading other posts that this used to work properly and satisfactorily. Nope, I've been messing around with VMWare for a long time and the performance penalty was always there. Hmmm, I've been thinking a little, and after retesting one of my guests on a 12-drive RAID-6 HP FC enclosure, your 40 MB/s writes actually do seem slow. I'm getting around 110 MB/s sequential reads and writes (untuned) and the guest controller is recognized as: mpt0: LSILogic 1030 Ultra4 Adapter port 0x1400-0x14ff mem 0xd882-0xd883,0xd880-0xd881 irq 17 at device 16.0 on pci0 mpt0: [ITHREAD] mpt0: MPI Version=1.2.0.0 on 7.3-RELEASE, i386, ESXi 4.0. Are you sure you are testing when no other guests generate IO?
Re: 8-STABLE Slow Write Speeds on ESXI 4.0
On 9 August 2010 18:11, Jeremy Chadwick free...@jdc.parodius.com wrote: I thought Intel VT-d was supposed to help address things like this? Probably - http://www.intel.com/technology/itj/2006/v10i3/2-io/7-conclusion.htm says it should help unmodified guests, but I don't know for sure. I do know that Nehalems run faster on VMWare, probably because nested paging or whatever it's called helps context switches on syscalls. I can confirm on VMware Workstation 7.1, not ESXi, that disk I/O performance isn't that great. I only test with a host OS of Windows XP SP3, and for the guest OS's hard disk driver use the LSI SATA/SAS option. I can't imagine IDE/ATA being faster, since (at least Workstation) emulates an Intel ICH2. Yes, disk IO was always slow with VMWare. VirtualBox cheats by emulating ATA controllers (ICH6) instead of SCSI and turning on disk cache - it's noticeably faster than VMWare. I was under the impression that ESXi provided native access to the hardware in the system (vs. Workstation which emulates everything)? I think it can be configured this way, but then you'd need a separate LUN for the VM drive, bypassing vmware's usual storage (vmfs) and all the goodies that come with it. OTOH, there are paravirtualized drivers for Linux and Windows in 4.0 which should help, but I haven't tried them yet. The controller seen by FreeBSD in the OP's system is: mpt0: LSILogic SAS/SATA Adapter port 0x4000-0x40ff mem 0xd9c04000-0xd9c07fff,0xd9c1-0xd9c1 irq 18 at device 0.0 on pci3 mpt0: [ITHREAD] mpt0: MPI Version=1.5.0.0 Which looks an awful lot like what I see on Workstation 7.1. FWIW, Workstation 7.1 is fairly adamant about stating that if you want faster disk I/O, you should pre-allocate the disk space rather than let disk use grow dynamically. I've never tested this however. Yes, this statement has always been true. How does Linux's I/O perform with the same setup?
I've tested Linux, Windows and FreeBSD on VMWare 3.5 last year and the results (IOPS) were:

ESXi-FreeBSD     174
ESXi-Linux       221
ESXi-Windows      98
Xen-FreeBSD       72
Xen-Linux        148
Xen-Linux-PV     244
HyperV-FreeBSD    61
HyperV-Linux      69
HyperV-Windows    58

(I couldn't get Windows to run on Xen; Linux-PV is Linux as a paravirtualized Xen guest).
Re: zpool - low speed write
On 5.8.2010 6:47, Alex V. Petrov wrote: camcontrol identify ada2 pass2: WDC WD10EADS-00M2B0 01.00A01 ATA-8 SATA 2.x device Aren't those 4k-sector drives? To verify this hypothesis, though, you will have to destroy the zpool, use gnop to create a virtual 4k-sector drive for each physical drive and try testing everything again, using these new virtual drives. Unfortunately, if this is the case, it will be troublesome to find a production solution just yet. I have an idea but no time to try it.
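The gnop experiment could look like the following. A sketch only: the device and pool names are placeholders, and the pool really must be destroyed and rebuilt for ZFS to pick up the new sector size:

```shell
# Create gnop(8) overlays that report 4096-byte sectors
# (-S sets the emulated sector size):
gnop create -S 4096 ada2
gnop create -S 4096 ada3
gnop create -S 4096 ada4

# Build the test pool on the .nop devices so ZFS sees 4k sectors:
zpool create testpool raidz ada2.nop ada3.nop ada4.nop
```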
Re: 8-STABLE Slow Write Speeds on ESXI 4.0
On 7.8.2010 3:21, Joshua Boyd wrote: Hello, I'm experiencing slow write speeds on 8-STABLE running on an ESXi 4.0 server, despite whatever tunables I've thrown at it. Read speeds are slower than they should be, but acceptable. Note, this is a thick provisioned disk, not thin. Speeds on Windows hosts are as expected for an MD3000 DAS, 250MB/s or so. [r...@git ~]# dd if=/dev/da0 of=/dev/null bs=1M count=500 500+0 records in 500+0 records out 524288000 bytes transferred in 3.304514 secs (158658118 bytes/sec) [r...@git ~]# dd if=/dev/zero of=/var/testfile bs=1M count=500 500+0 records in 500+0 records out 524288000 bytes transferred in 52.083421 secs (10066313 bytes/sec) I assume you are using UFS and SU? What tunables have you tried? It's unlikely they will help, but try: vfs.read_max=32 for read speeds (but test using the UFS file system, not as a raw device like above), and: vfs.hirunningspace=8388608 vfs.lorunningspace=4194304 for writes. Again, it's unlikely but I'm interested in the results you achieve.
Re: the console bug still exists
On 4.8.2010 16:49, jhell wrote: On 08/04/2010 03:22, David Xu wrote: Sigh, pressing the Scroll Lock key several times can lock up the kernel while it is still booting, before /sbin/init runs. David Xu Sorry David, no matter what I have tried I have not been able to reproduce this across 5 separate machines. For what it's worth, I come across this buglet occasionally, about once or twice a year.
Re: 8-STABLE Slow Write Speeds on ESXI 4.0
On 7 August 2010 19:03, Joshua Boyd boy...@jbip.net wrote: On Sat, Aug 7, 2010 at 7:57 AM, Ivan Voras ivo...@freebsd.org wrote: It's unlikely they will help, but try: vfs.read_max=32 for read speeds (but test using the UFS file system, not as a raw device like above), and: vfs.hirunningspace=8388608 vfs.lorunningspace=4194304 for writes. Again, it's unlikely but I'm interested in the results you achieve. This is interesting. Write speeds went up to 40MBish. Still slow, but 4x faster than before. [r...@git ~]# dd if=/dev/zero of=/var/testfile bs=1M count=250 250+0 records in 250+0 records out 262144000 bytes transferred in 6.185955 secs (42377288 bytes/sec) [r...@git ~]# dd if=/var/testfile of=/dev/null 512000+0 records in 512000+0 records out 262144000 bytes transferred in 0.811397 secs (323077424 bytes/sec) So read speeds are up to what they should be, but write speeds are still significantly below what they should be. Well, you *could* double the size of the runningspace tunables and try that :) Basically, in tuning these two settings we are cheating: increasing read-ahead (read_max) and write in-flight buffering (runningspace) in order to hand off as much IO to the controller (in this case vmware) as soon as possible, to mask the horrible IO-caused context switches vmware has. It will help sequential performance, but nothing can help random IOs.
Re: gpart -b 34 versus gpart -b 1024
On 25.7.2010 5:58, Dan Langille wrote:

    ---Sequential Output--- ---Sequential Input-- --Random--
    -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
GB  M/sec %CPU  M/sec %CPU  M/sec %CPU  M/sec %CPU  M/sec %CPU  /sec %CPU
 5  110.6 80.5  115.3 15.1   60.9  8.5   68.8 46.2  326.7 15.3   469  1.4
 5  130.9 94.2  118.3 15.6   61.1  8.5   70.1 46.8  241.2 12.7   473  1.4
50  113.1 82.4  114.6 15.2   63.4  8.9   72.7 48.2  142.2  9.5   126  0.7
50  110.5 81.0  112.8 15.0   62.8  9.0   72.9 48.5  139.7  9.5   144  0.9

Here, the results aren't much better either... am I not aligning this partition correctly? Missing something else? Or... are they both 4K block aligned? As others have said - your drives probably don't have the alignment requirement, but your posts are an excellent example of why benchmarking file systems is complicated and how easy it is to measure noise instead of the real thing. To measure real performance in your case, you would either need to benchmark at a layer beneath the file system, or with a simple file system which always does predictable IO patterns. It's hard to do with zfs with raidz - AFAIK even accessing the raw zvols translates into complex IOs (they are COW).
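Whether a start LBA is 4K-aligned (the -b 34 versus -b 1024 in the subject) is simple modular arithmetic, assuming 512-byte logical sectors:

```shell
# Offset of each partition start within a 4096-byte block; 0 means aligned.
echo $(( (34   * 512) % 4096 ))   # 1024 -> gpart -b 34 is NOT 4k-aligned
echo $(( (1024 * 512) % 4096 ))   # 0    -> gpart -b 1024 is 4k-aligned
```

So on a drive that does care about 4K alignment, only the second layout avoids read-modify-write on every sector-straddling write.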
Re: 8.1 AMD64 Beta1 cd panics on Proliant ML110 G6
On 06/11/10 10:48, Johan Hendriks wrote: Hello all, I try to install the Beta of 8.1 but it panics on my HP Proliant ML110 server with the following. Just as a data point: I've tested 8.1 on an HP DL380 G6 (i.e. same generation, different series) and there were no problems.
Re: panic: vm_fault_copy_wired: page missing
On 04/15/10 13:11, Jeremy Chadwick wrote: On Thu, Apr 15, 2010 at 02:05:26PM +0300, Daniel Braniss wrote: On Thu, Apr 15, 2010 at 01:24:14PM +0300, Daniel Braniss wrote: On Thu, Apr 15, 2010 at 9:22 AM, Daniel Braniss da...@cs.huji.ac.il wrote: Hi, I'm getting this with FreeBSD-8-stable; it usually happens when starting apache: alc@ made some VM MFCs yesterday, could you try a 13th of April kernel and see if it works out for you? with or without the MFC it's still panicking, and the memory size does not affect the outcome :-( Shot in the dark: either at the interactive loader prompt or by editing /boot/loader.conf, try disabling superpage support: vm.pmap.pg_ps_enabled=0 that's the first thing I tried :-( just to complicate things a bit, if I start apache later, via forcestart, things 'seem' better. but keep them coming, I need this fixed. Take NFS out of the picture if you can... I'm late into the discussion but just to verify - you are talking about not running executables over NFS, right? Not serving data?
Re: random FreeBSD panics
On 28 March 2010 16:42, Masoom Shaikh masoom.sha...@gmail.com wrote: let's assume this is a h/w problem; how then can other OSes overcome it? is there a way to make FreeBSD ignore this as well, even at a reasonable performance penalty? Very probably, if only we could detect where the problem is. Try adding options PRINTF_BUFR_SIZE=128 to the kernel configuration file if you can, to see if you can get a less mangled log output.
Re: Multi node storage, ZFS
On 03/25/10 00:45, Michal wrote: backend storage for databases. It's all well and good having 1 ZFS server, but it's fragile in the sense of no redundancy; then we have 1 ZFS server and a 2nd with DRBD, but that's a waste of money... think 12 TB, and you need to pay for another 12 TB box for redundancy, and you are still looking at 1 server. I am thinking of a cheap solution, but one that has IO throughput, redundancy and is easy to manage and expand across multiple nodes. Well, what I described is kind of like that, centered around trying to best balance redundancy and cost. For example, you don't need two 12 TB boxes in a mirror. Depending on what you need you can get only one 12 TB box at the start, then with ZFS trivially extend that storage with another 12 TB box when you need it, and repeat to infinity (each box will internally have RAID6 or something like that). Of course then you have a problem if a single box fails, which you can get around by using triplets of 12 TB boxes in RAIDZ, etc.
Re: Multi node storage, ZFS
Freddie Cash wrote: On Wed, Mar 24, 2010 at 8:47 AM, Michal mic...@ionic.co.uk wrote: I wrote a really long e-mail but realised I could ask this question far more easily; if it doesn't make sense, the original e-mail is below. Can I use ZFS to create a multi-node storage area: multiple HDDs in multiple servers combined into one target of, for example, //officestorage, allowing me to expand the storage space when needed, with clients being able to retrieve data (like RAID0, but over devices, not HDDs)? Here is an example I found which is where I'm getting some ideas from: http://www.howtoforge.com/how-to-build-a-low-cost-san-p3 Horribly, horribly, horribly complex. But, then, that's the Linux world. :) Server 1: bunch of disks exported via iSCSI Server 2: bunch of disks exported via iSCSI Server 3: bunch of disks exported via iSCSI SAN box: uses all those iSCSI exports to create a ZFS pool For what it's worth - I think this is a good idea! iSCSI and ZFS make it extraordinarily flexible to do this. You can have a RAIS - redundant array of inexpensive servers :) For example: each server box hosts 8-12 drives - use a hardware controller with RAID6 and a BBU to create a single volume (if FreeBSD booting issues allow, but that can be worked around). Export this volume via iSCSI. Repeat for the rest of the servers. Then, on the client, create a RAIDZ, or if you trust your setup that much, a straight striped ZFS volume. If you do it the RAIDZ way, one of your storage servers can fail completely. As you need more space, add more servers in batches of three (if you did RAIDZ, else the number doesn't matter) and add them to the client as usual. The client in this case can be a file server, and you can achieve failover between several of those by using e.g. carp, heartbeat, etc. - if the master node fails, some other one can reconstitute the ZFS pool and make it available.
But, you need very fast links between the nodes, and I wouldn't use something like this without extensively testing the failure modes.
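On the head node the RAIS idea boils down to something like the following. The pool name and device names are placeholders; it assumes the three servers' iSCSI LUNs are already attached as da1-da3:

```shell
# One RAIDZ vdev across three iSCSI-backed disks: any single storage
# server can die and the pool stays up.
zpool create officestorage raidz da1 da2 da3

# Growing later: attach three more LUNs and add a second RAIDZ vdev.
# zpool add officestorage raidz da4 da5 da6
```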
Re: Many processes stuck in zfs
On 03/11/10 15:09, Alexander Leidinger wrote: Quoting Ivan Voras ivo...@freebsd.org (from Thu, 11 Mar 2010 11:59:01 +0100): On 03/11/10 09:54, Borja Marcos wrote: I don't know about the rest but this: CPU: Intel(R) Xeon(R) CPU L5420 @ 2.50GHz (2496.25-MHz K8-class CPU) does not agree with this: FreeBSD/SMP: 1 package(s) x 8 core(s) The Xeon 54xx series does not come in 8-core packages. Either it is 2 x quad-core or a Xeon 55xx. Could it also be a problem in the layout detection logic? Not likely, because the 54xx family is very widespread and nothing special with regards to its topology. It also has the same topology as the 53xx. These are systems limited to two physical sockets, each of which can have a single, dual or quad core CPU, and the 5xxx motherboards accept all CPUs from series 50xx, 51xx, 52xx, 53xx, 54xx. In short - these are very, very common systems. My guess would be that someone, somewhere is lying - I don't know if CPUID can be (wrongly) updated by microcode, for example, or if the mptable can be corrupted in a way that represents two physical (socketed) CPUs as one.
Re: 8-stable crashes in vmware (possible em driver issue?)
On 25 February 2010 02:31, Jack Vogel jfvo...@gmail.com wrote: Hmmm, not sure what changes are in this; what if you use the 8.0-REL driver, does it still happen? Yes, this is why it is now running 8-STABLE. I have more FreeBSD guests on the same VMWare hosts which work fine, but this is the only 64-bit one. I don't know if this information helps. On Wed, Feb 24, 2010 at 4:23 PM, Ivan Voras ivo...@freebsd.org wrote: Ivan Voras wrote: I have a fairly recent 8-stable machine running under VMWare ESXi 3.5 (amd64 guest), which apparently crashes every few days from the same causes: em0: discard frame w/o packet header em0: discard frame w/o packet header em0: discard frame w/o packet header Panic string: sbsndptr: sockbuf 0xff007cca8c20 and mbuf 0xff00490a6400 clashing In case someone is interested or has an idea - on this machine I have multiple crash cores with similarly strange problems, all connected with networking and/or the em driver: 1) em0: watchdog timeout -- resetting Fatal trap 12: page fault while in kernel mode current process = 0 (em0 taskq) 2) em0: watchdog timeout -- resetting Fatal trap 9: general protection fault while in kernel mode current process = 1219 (slapd) 3) em0: discard frame w/o packet header panic: sbdrop I'm scratching my head about #2 above - I don't think trap 9 is usual.
Re: 8-stable crashes in vmware (possible em driver issue?)
Ivan Voras wrote: I have a fairly recent 8-stable machine running under VMWare ESXi 3.5 (amd64 guest), which apparently crashes every few days from the same causes: em0: discard frame w/o packet header em0: discard frame w/o packet header em0: discard frame w/o packet header Panic string: sbsndptr: sockbuf 0xff007cca8c20 and mbuf 0xff00490a6400 clashing In case someone is interested or has an idea - on this machine I have multiple crash cores with similarly strange problems, all connected with networking and/or the em driver: 1) em0: watchdog timeout -- resetting Fatal trap 12: page fault while in kernel mode current process = 0 (em0 taskq) 2) em0: watchdog timeout -- resetting Fatal trap 9: general protection fault while in kernel mode current process = 1219 (slapd) 3) em0: discard frame w/o packet header panic: sbdrop I'm scratching my head about #2 above - I don't think trap 9 is usual.
Re: Incorrect super block
On 02/18/10 16:26, Harald Weis wrote: Has anybody encountered the following problem? Mac OS X does recognize FreeBSD partitions on USB disks, but doesn't want to mount them because of ``Incorrect super block''. This is extremely annoying for my ``client'' because he relies on daily backups on USB keys. Is there a solution? Are you using UFS1 or UFS2? If one, try the other :)
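One way to check which UFS version a partition was created with (a quick sketch; the device name is an example only - use your actual partition):

```shell
# dumpfs prints the superblock; the first line of output notes the
# filesystem version, ending in (UFS1) or (UFS2).
dumpfs /dev/da0s1a | head -1
```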
Re: Sudden mbuf demand increase and shortage under the load
On 02/15/10 13:25, Maxim Sobolev wrote: Hi, Our company has a FreeBSD-based product that consists of numerous interconnected processes and does some high-PPS UDP processing (30-50K PPS is not uncommon). We are seeing some strange periodic I have nothing very useful to help you with, but maybe you can detect whether it's an em/igb issue by buying a cheap Realtek gigabit (re) card and trying it out. Those can be bought for a few dollars now (e.g. from D-Link and many others), and I can confirm that at least the one I tried can carry around 50K pps, but not much more (I can tell you the exact chip later today if you are interested). failures under load in several such systems, which usually evidence themselves in IPC (even through unix domain sockets) suddenly either breaking down or pausing, and restoring only some time later (like 5-10 minutes). The only sign of failure I managed to find was the increase of requests for mbufs denied in netstat -m and the number of total mbuf clusters (nmbclusters) rising up to the limit. I have tried raising some network-related limits (most notably maxusers and nmbclusters), but it has not helped with the issue - it still happens to us from time to time. Below you can find output from netstat -m a few minutes after that shortage period - you can see that somehow the system has allocated a huge amount of memory for the network (700MB), with only a tiny amount of that actually in use. This is with kern.ipc.nmbclusters: 302400. Eventually the system reclaims all that memory and goes back to its normal use of 30-70MB. This problem is killing us, so any suggestions are greatly appreciated. My current hypothesis is that due to some issue either with the network driver or the network subsystem itself, the system goes insane and eats up all mbufs up to the nmbclusters limit. But since mbufs are shared between the network and local IPC, IPC goes down as well.
We observe this issue on systems using both the em(4) driver and the igb(4) driver. I believe both drivers share the same design; however, I am not sure if this is some kind of design flaw in the driver or part of a larger problem with the network subsystem. This happens on amd64 7.2-RELEASE and 7.3-PRERELEASE alike, with 8GB of memory. I have not tried upgrading to 8.0; this is a production system, so upgrading will not be easy. I don't believe there are differences that give us hope this problem will go away after an upgrade, but I can try it as a last resort. As I said, this is a very critical issue, so I can provide any additional debug information upon request. We are ready to go as far as paying somebody a reasonable amount of money for tracking down and resolving the issue. Regards,
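To correlate the IPC stalls with mbuf exhaustion, a crude monitoring loop like the following can help. This is a sketch of my own, not from the thread; the log path is arbitrary, and the interesting counters are the "denied" and cluster totals that netstat -m reports near the top of its output.

```shell
# Record mbuf/cluster usage once a minute so spikes toward the
# kern.ipc.nmbclusters limit can be matched against the stalls.
sysctl kern.ipc.nmbclusters       # print the configured cluster limit
while :; do
    date
    netstat -m | head -5          # in-use/denied mbuf and cluster counters
    sleep 60
done >> /var/log/mbuf-usage.log
```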
Strange problem with 8-stable, VMWare vSphere 4 AMD CPUs (unexpected shutdowns)
It looks like I've stumbled upon a bug in vSphere 4 (recent update) with FreeBSD/amd64 8.0/8-stable (but not 7.x) guests on Opterons. In this combination, everything works fine until a moderate load is started - a buildworld is enough. About five minutes after the load starts, the vSphere client starts getting timeouts while talking to the host, and soon after, the guest VM is forcibly shut down without any trace of a reason in the various logs. The same VM runs fine on hosts with Xeon CPUs. The shutdown happens regardless of whether a vSphere client is connected. This is very repeatable, on Sun Fire X4140 hosts. With 7.x/7-stable guests everything works fine. I'm posting this for future reference and to see if anyone has encountered something like this, or has an idea why it happens.
Re: Strange problem with 8-stable, VMWare vSphere 4 AMD CPUs (unexpected shutdowns)
On 02/10/10 17:05, Andriy Gapon wrote: on 10/02/2010 17:36 Ivan Voras said the following: It looks like I've stumbled upon a bug in vSphere 4 (recent update) with FreeBSD/amd64 8.0/8-stable (but not 7.x) guests on Opteron(s). In this combination, everything works fine until a moderate load is started - a buildworld is enough. About five minutes after the load starts, the vSphere client starts getting timeouts while talking with the host and soon after the guest VM is forcibly shut down without any trace of a reason in various logs. The same VM runs fine on hosts with Xeon CPUs. The shutdown happens regardless if there is a vSphere client connected. This is very repeatable, on Sun Fire X4140 hosts. With 7.x/7.stable guests everything works fine. I'm posting this for future reference and to see if anyone has encountered something like that, or has an idea why this happens. Wild guess - try disabling superpages in the guests. It looks like your guess is perfectly correct :) The guest has been doing buildworlds for an hour and it works fine. Thanks! It's strange how this doesn't affect the Xeons...
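For reference, superpage promotion can be turned off at boot via a loader tunable. To the best of my knowledge the knob is vm.pmap.pde.enable on amd64/i386, but verify it exists on your release (e.g. with `sysctl vm.pmap`) before relying on it:

```shell
# /boot/loader.conf -- disable automatic superpage promotion
# (takes effect on the next boot)
vm.pmap.pde.enable=0
```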
Re: Strange problem with 8-stable, VMWare vSphere 4 AMD CPUs (unexpected shutdowns)
On 10 February 2010 18:13, Andriy Gapon a...@icyb.net.ua wrote: on 10/02/2010 19:05 Ivan Voras said the following: On 02/10/10 17:05, Andriy Gapon wrote: Wild guess - try disabling superpages in the guests. It looks like your guess is perfectly correct :) The guest has been doing buildworlds for an hour and it works fine. Thanks! It's strange how this doesn't affect the Xeons... I really cannot tell you more, but there seems to be an issue between our implementation of superpages (very unique) and AMD processors from the 10h family. I'd recommend not using the superpages feature with those processors for the time being. When you say "very unique", is it in the "it is not Linux or Windows" sense, or do we do something nonstandard?
Re: Strange problem with 8-stable, VMWare vSphere 4 AMD CPUs (unexpected shutdowns)
On 10 February 2010 19:10, Andriy Gapon a...@icyb.net.ua wrote: on 10/02/2010 20:03 Ivan Voras said the following: When you say "very unique", is it in the "it is not Linux or Windows" sense, or do we do something nonstandard? The former - neither Linux, Windows nor OpenSolaris seems to have what we have. I can't find the exact documents, but I think both Windows MegaUltimateServer (the highest-priced version of Windows Server, whatever it's called today) and Linux (though disabled and marked experimental) have it, or have some kind of support for large pages that might not be as pervasive (maybe they use it for the kernel only?). I have no idea about (Open)Solaris.
Re: Strange problem with 8-stable, VMWare vSphere 4 AMD CPUs (unexpected shutdowns)
On 10 February 2010 19:26, Ivan Voras ivo...@freebsd.org wrote: On 10 February 2010 19:10, Andriy Gapon a...@icyb.net.ua wrote: on 10/02/2010 20:03 Ivan Voras said the following: When you say "very unique", is it in the "it is not Linux or Windows" sense, or do we do something nonstandard? The former - neither Linux, Windows nor OpenSolaris seems to have what we have. I can't find the exact documents, but I think both Windows MegaUltimateServer (the highest-priced version of Windows Server, whatever it's called today) and Linux (though disabled and marked experimental) have it, or have some kind of support for large pages that might not be as pervasive (maybe they use it for the kernel only?). I have no idea about (Open)Solaris. VMWare documentation about large pages: http://www.vmware.com/files/pdf/large_pg_performance.pdf I think I remember reading that on Windows, an application must use a special syscall to allocate an area with large pages, but I can't find the document.
Re: Strange problem with 8-stable, VMWare vSphere 4 AMD CPUs (unexpected shutdowns)
On 10 February 2010 19:35, Andriy Gapon a...@icyb.net.ua wrote: on 10/02/2010 20:26 Ivan Voras said the following: On 10 February 2010 19:10, Andriy Gapon a...@icyb.net.ua wrote: on 10/02/2010 20:03 Ivan Voras said the following: When you say "very unique", is it in the "it is not Linux or Windows" sense, or do we do something nonstandard? The former - neither Linux, Windows nor OpenSolaris seems to have what we have. I can't find the exact documents, but I think both Windows MegaUltimateServer (the highest-priced version of Windows Server, whatever it's called today) and Linux (though disabled and marked experimental) have it, or have some kind of support for large pages that might not be as pervasive (maybe they use it for the kernel only?). I have no idea about (Open)Solaris. I haven't said that those OSes do not use large pages. I've said what I've said :-) Ok :) Is there a difference between large pages as they are commonly known and superpages as in FreeBSD? In other words - are you referencing some specific mechanism, like automatic promotion/demotion of large pages, or maybe something else?
Re: ATA_CAM + ZFS gives short 1-2 seconds system freeze on disk load
On 02/08/10 15:33, Guido Falsi wrote: It looks like it freezes the system for the second or two it takes to flush buffers to disk when there are big writes. This happens when decompressing big distfiles, mainly. The openoffice port triggers this almost continuously, every few seconds, during compilation. I've also seen this when working with big files (for example, graphic images in uncompressed formats). It gets very annoying, and I don't remember this happening before activating the ATA_CAM flag. There was some slowdown with big disk accesses, but not a total freeze. I think ZFS does this all the time, i.e. regardless of the underlying device drivers. Can you test your theory by going back to an older kernel while keeping *everything* else the same?
terminfo missing?
Hi, This has bugged me on a couple of machines, but I've always attributed it to some misconfiguration of mine: running curses-like programs under screen (i.e. in virtual screens) fails with messages like "terminal entry not found". For example, less does this, and vim complains with this: E558: Terminal entry not found in terminfo 'screen' not known. Available builtin terminals are: builtin_ansi builtin_xterm builtin_iris-ansi builtin_dumb defaulting to 'ansi' Looking at terminfo(5), it looks like terminfo should be located at /usr/share/misc/terminfo/, but I have no such directory here. There is a /usr/share/misc/termcap file. This machine is relatively fresh; only a source-based update was performed from 8.0-R to 8.0-STABLE, so I don't think there is some package that does this. Can someone enlighten me about what is happening here?
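A couple of things worth checking (hedged suggestions, not a confirmed fix): FreeBSD's base system ships a termcap database rather than terminfo, and programs built against the base libraries normally consult it, so the error suggests this vim build wants a terminfo tree instead.

```shell
# The base termcap database should contain a "screen" entry:
grep -n '^screen' /usr/share/misc/termcap | head -3

# If a ports-built program insists on a terminfo tree, one option may
# be to compile one with tic(1) from the devel/ncurses port -- tic
# accepts termcap-format sources (assumption: paths and flags may
# differ on your system):
# tic -x /usr/share/misc/termcap

# A cruder workaround inside screen sessions is to fall back to a
# terminal type vim does know:
# export TERM=xterm
```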