Posix threads: CLOCK_REALTIME/CLOCK_MONOTONIC
Hello. I am working on a multi-threaded application which may call settimeofday() and therefore may have serious problems with timing calculations. In my applications I calculate time differences using clock_gettime(CLOCK_MONOTONIC) under FreeBSD-5. Under FreeBSD-4 it is a trivial kernel patch in kern_time.c to get CLOCK_MONOTONIC, as there is already a kernel function nanouptime():

int
clock_gettime(p, uap)
	struct proc *p;
	struct clock_gettime_args *uap;
{
	struct timespec ats;

	switch (SCARG(uap, clock_id)) {
	case CLOCK_REALTIME:
		nanotime(&ats);
		break;
	case CLOCK_MONOTONIC:
		nanouptime(&ats);
		break;
	default:
		return (EINVAL);
	}
	return (copyout(&ats, SCARG(uap, tp), sizeof(ats)));
}

Looking through the sources of the various threading libraries I found that either gettimeofday() or clock_gettime(CLOCK_REALTIME) is used for all calculations. I am not sure what POSIX currently says about this, but I found a chapter 'Condition variable wait clock' in [Butenhof] (p. 359). As I understand it, POSIX.1j expects an implementation - at least for pthread_cond_timedwait() - to use CLOCK_MONOTONIC by default. It introduces a new function pair, pthread_condattr_(get|set)clock(), to make pthread_cond_timedwait() use either CLOCK_MONOTONIC or CLOCK_REALTIME.

From my understanding of the threading libraries' internals, it should be trivial to modify them to use CLOCK_MONOTONIC only, but not quite as trivial to implement pthread_condattr_(get|set)clock(). For FreeBSD-4 I already have a modified libc_r, where I call clock_gettime(CLOCK_MONOTONIC) once in _thread_init() and set a global variable _sched_clkid to either CLOCK_MONOTONIC or CLOCK_REALTIME for further calls to clock_gettime().

Any comments/ideas/opinions?

Norbert

___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]
Re: Accessing filesystem from a KLD
There's also a desire to provide an easier-to-use interface for drivers to load, for example, firmware. Is there an existing development effort? I think I'll separate retrieving firmware from the filesystem into another KLD, so maybe I should contact the developers...
Re: Posix threads: CLOCK_REALTIME/CLOCK_MONOTONIC
On Wednesday 29 June 2005 03:57 am, Norbert Koch wrote:
> [original message about using CLOCK_MONOTONIC in the threading libraries, quoted in full; trimmed]

You probably want to bring this up on [EMAIL PROTECTED] as that is where all the guys who implement the thread libraries hang out. :)

-- 
John Baldwin [EMAIL PROTECTED] http://www.FreeBSD.org/~jhb/
Power Users Use the Power to Serve = http://www.FreeBSD.org
Re: problem handling POSIX thread on FreeBSD
On Tuesday 28 June 2005 07:58 pm, Pablo Mora wrote:
> Ok, I understand, but by being threads POSIX should be executed of the same one in any type of S.OR?

Not sure I understand the question. What do you mean by S.O? Are you saying that since the threads are POSIX, you would expect the program to act the same on all operating systems? That's not an entirely safe assumption to make, in that POSIX only guarantees that things like mutexes work (and it specifically states that you have to unlock a mutex in the same thread that locked it; what you were doing would result in undefined behavior). POSIX doesn't make any guarantees about how threads are scheduled with respect to one another.

-- 
John Baldwin [EMAIL PROTECTED] http://www.FreeBSD.org/~jhb/
Power Users Use the Power to Serve = http://www.FreeBSD.org
Re: Accessing filesystem from a KLD
> Why not use VOP_READ? See how it is called in dev/md.c:mdstart_vnode; check kern/vfs_vnops.c:vn_open_cred for information on how to look up a file name and open it.

That's what I do; however, I use the wrapper functions vn_open(), vn_rdwr() and so on. But I have a problem. When I call this code:

void *
request_firmware(const char *name, size_t *size)
{
	int flags;
	char filename[40];
	struct nameidata nd;
	struct thread *td = curthread;
	[...]
	NDINIT(&nd, LOOKUP, FOLLOW, UIO_SYSSPACE, &filename[0], td);
	flags = FREAD;
	vn_open(&nd, &flags, 0, -1);
	[...]
}

from the KLD handler function (for testing), it works. But when I call it from a thread created by kthread_create() in another KLD, I get a page fault. A few printfs show that the call to vn_open() is responsible for the fault. I have not forgotten to lock Giant in my kernel thread. Any ideas?

Thanks, Sebastien
Re: Accessing filesystem from a KLD
On Wed, Jun 29, 2005 at 11:55:50AM +0200, Seb wrote:
> [request_firmware() code quoted; trimmed] ... from the KLD handler function (for testing) it works. But when I call it from a thread created by kthread_create() in another KLD, I have a page fault.

You got the page fault from namei(), which is called from vn_open() to look up the path name. namei() tries to obtain a reference on the current directory of the current thread. This current directory (the fd_cdir field) is NULL in your kthread, and at that point a page fault in kernel address space is generated.

More detail: check in kthread_create() how the new kthread is created, and check the flag RFFDG in fork1(). Since the new kthread is created from thread0 and RFFDG is on, the new kthread will copy the descriptor table from proc0. proc0 has a descriptor table created by fdinit() in proc0_init(), and fdinit() sets fd_cdir (the current directory) to 0 (the same as NULL in /sys).

Whether you can change fd_cdir in the kthread to rootvnode I don't know; I haven't checked this yet. But you can open the file in a syscall and then use the obtained vp in your kthread for the VOP_READ call.

It would be better to see a backtrace of the above-mentioned page fault, but I guess that everything happened as I described. Hope this helps.
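[Editorial example] A sketch of the second suggestion above: open the vnode in a context that has a valid fd_cdir (e.g. the module event handler, which the thread says works) and let the kthread only read from it. This is untested kernel code written against the 5.x vn_open() signature shown in the thread; the names `fw_vp` and `fw_open` are illustrative, not an existing KPI:

```
static struct vnode *fw_vp;	/* handed from the handler to the kthread */

static int
fw_open(const char *path, struct thread *td)
{
	struct nameidata nd;
	int flags = FREAD;
	int error;

	/* Called from the MOD_LOAD handler, where the thread's
	 * fd_cdir is valid, so namei() will not fault. */
	NDINIT(&nd, LOOKUP, FOLLOW, UIO_SYSSPACE, path, td);
	error = vn_open(&nd, &flags, 0, -1);
	if (error)
		return (error);
	NDFREE(&nd, NDF_ONLY_PNBUF);
	VOP_UNLOCK(nd.ni_vp, 0, td);
	fw_vp = nd.ni_vp;	/* the kthread later reads via vn_rdwr() on fw_vp */
	return (0);
}
```

The kthread then never calls namei() itself, so the NULL fd_cdir in its copied descriptor table is never dereferenced.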
No SMP on Compaq ML350 with FBSD 5.3-RE
Hello, I'm sure this problem has been found before, but so far I can't see any trace of it in searches. I just picked up a Compaq ML350 for free from a liquidator; it runs fine, it seems, and has dual P-II 600MHz CPUs. The BIOS POST shows the two CPUs, as does the BIOS System Info function. However, a fresh install of FreeBSD 5.3-RE shows only a single CPU, and utilities like top reflect this as well. Attached is the dmesg output. Since this is such old hardware, I'm really surprised that it doesn't just simply work. Thoughts?

-- 
Justin Hopper [EMAIL PROTECTED]
UNIX Systems Engineer
BSDHosting.net
Hosting Division of Digital Oasys Inc.
http://www.bsdhosting.net

Copyright (c) 1992-2004 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
	The Regents of the University of California. All rights reserved.
FreeBSD 5.3-RELEASE #0: Tue Jun 28 23:38:16 PDT 2005
    [EMAIL PROTECTED]:/usr/obj/usr/src/sys/SERVER
Timecounter "i8254" frequency 1193182 Hz quality 0
CPU: Intel Pentium III (596.00-MHz 686-class CPU)
  Origin = "GenuineIntel"  Id = 0x681  Stepping = 1
  Features=0x383fbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE>
real memory  = 1073741824 (1024 MB)
avail memory = 1045381120 (996 MB)
ACPI APIC Table: <COMPAQ MAPICTBL>
ioapic0: Changing APIC ID to 8
ioapic1: Changing APIC ID to 3
ioapic1 <Version 1.1> irqs 16-31 on motherboard
ioapic0 <Version 1.1> irqs 0-15 on motherboard
npx0: [FAST]
npx0: <math processor> on motherboard
npx0: INT 16 interface
acpi0: <COMPAQ CPQD020> on motherboard
acpi0: Power Button (fixed)
unknown: I/O range not supported (this line repeated 23 times)
Timecounter "ACPI-safe" frequency 3579545 Hz quality 1000
acpi_timer0: <32-bit timer at 3.579545MHz> port 0xf808-0xf80b on acpi0
cpu0: <ACPI CPU (2 Cx states)> on acpi0
pcib0: <ACPI Host-PCI bridge> port 0xcf8-0xcff on acpi0
pci0: <ACPI PCI bus> on pcib0
pcib1: <PCI-PCI bridge> at device 1.0 on pci0
pci1: <PCI bus> on pcib1
Re: No SMP on Compaq ML350 with FBSD 5.3-RE
Justin Hopper wrote:
> I just picked up a Compaq ML350 for free from a liquidator [...] Since this is such old hardware, I'm really surprised that it doesn't just simply work. Thoughts?

I would certainly start by installing the latest and greatest BIOS that is available for the machine. You don't mention exactly which version of the ML350 you have -- there apparently were several. If the machine is old, chances are good that a newer BIOS exists, and it probably fixes all sorts of problems. I found the following site, which might be helpful:

http://h18023.www1.hp.com/support/files/server/us/romtabl.html

Good luck. Oh, and I'd try 5.4 instead of 5.3 -- can't hurt ;-)

-Kurt
Re: No SMP on Compaq ML350 with FBSD 5.3-RE
On Wednesday 29 June 2005 02:00 pm, Justin Hopper wrote:
> [dmesg from the dual P-II ML350 showing only one CPU; trimmed]

Did you add 'options SMP' to your kernel config? I think that GENERIC in 5.x has 'options SMP' disabled by default.

-- 
John Baldwin [EMAIL PROTECTED] http://www.FreeBSD.org/~jhb/
Power Users Use the Power to Serve = http://www.FreeBSD.org
hot path optimizations in uma_zalloc()/uma_zfree()
Hi folks. I just tried to make bucket management in the per-CPU cache work like in Solaris (see the paper by Jeff Bonwick - Magazines and Vmem) and got a performance gain of around 10% in my test program. Then I made another minor code optimization and got another 10%. The program just creates and destroys sockets in a loop.

I suppose the reason for the first gain lies in an increase in CPU cache hits. In the current FreeBSD code, allocations and freeings deal with separate buckets; buckets are swapped when one of them becomes full or empty first. In Solaris this work is pure LIFO: alloc() and free() work with one bucket - the current bucket (it is called a magazine there) - which is why the cache hit rate is higher.

The other optimization is very trivial, for example:

-		bucket->ub_cnt--;
-		item = bucket->ub_bucket[bucket->ub_cnt];
+		item = bucket->ub_bucket[--bucket->ub_cnt];

(see the patch)

The test program:

#include <unistd.h>
#include <stdlib.h>
#include <sys/socket.h>

main(int argc, char *argv[])
{
	int *fd, n, i, j, iters = 10;

	n = atoi(argv[1]);
	fd = (int *) malloc(sizeof(*fd) * n);
	iters /= n;
	for (i = 0; i < iters; i++) {
		for (j = 0; j < n; j++)
			fd[j] = socket(AF_UNIX, SOCK_STREAM, 0);
		for (j = 0; j < n; j++)
			close(fd[j]);
	}
}

The results with the current uma_core.c:

time ./sockloop 1	# first arg is the number of sockets created in one iteration
0.093u 2.650s 0:02.75 99.6% 5+180k 0+0io 0pf+0w
time ./sockloop 1
0.108u 2.298s 0:02.41 99.1% 5+181k 0+0io 0pf+0w
time ./sockloop 1
0.127u 2.278s 0:02.41 99.1% 5+177k 0+0io 0pf+0w
time ./sockloop 10	# number of iterations is changed according to arg (see code)
0.054u 2.239s 0:02.30 99.1% 5+181k 0+0io 0pf+0w
time ./sockloop 10
0.069u 2.199s 0:02.27 99.1% 6+184k 0+0io 0pf+0w
time ./sockloop 10
0.086u 2.185s 0:02.28 99.1% 5+178k 0+0io 0pf+0w
time ./sockloop 100
0.101u 2.393s 0:02.51 99.2% 5+179k 0+0io 0pf+0w
time ./sockloop 100
0.085u 2.505s 0:02.60 99.2% 5+180k 0+0io 0pf+0w
time ./sockloop 100
0.054u 2.441s 0:02.50 99.6% 5+178k 0+0io 0pf+0w
time ./sockloop 1000
0.093u 2.739s 0:02.84 99.2% 5+181k 0+0io 0pf+0w
time ./sockloop 1000
0.085u 2.797s 0:02.89 99.3% 5+180k 0+0io 0pf+0w
time ./sockloop 1000
0.117u 2.689s 0:02.82 98.9% 5+179k 0+0io 0pf+0w

The results of the first optimization (only bucket management):

time ./sockloop 1
0.125u 1.938s 0:02.06 99.5% 5+180k 0+0io 0pf+0w
time ./sockloop 1
0.070u 1.993s 0:02.06 100.0% 5+180k 0+0io 0pf+0w
time ./sockloop 1
0.110u 1.953s 0:02.06 100.0% 5+177k 0+0io 0pf+0w
time ./sockloop 10
0.093u 1.776s 0:01.87 99.4% 5+180k 0+0io 0pf+0w
time ./sockloop 10
0.116u 1.754s 0:01.87 99.4% 5+181k 0+0io 0pf+0w
time ./sockloop 10
0.093u 1.777s 0:01.87 99.4% 5+181k 0+0io 0pf+0w
time ./sockloop 100
0.100u 2.182s 0:02.29 99.5% 5+180k 0+0io 0pf+0w
time ./sockloop 100
0.093u 2.174s 0:02.27 99.5% 5+180k 0+0io 0pf+0w
time ./sockloop 100
0.078u 2.158s 0:02.24 99.1% 5+180k 0+0io 0pf+0w
time ./sockloop 1000
0.101u 2.403s 0:02.51 99.6% 5+180k 0+0io 0pf+0w
time ./sockloop 1000
0.124u 2.381s 0:02.52 99.2% 5+180k 0+0io 0pf+0w
time ./sockloop 1000
0.125u 2.373s 0:02.51 99.2% 5+178k 0+0io 0pf+0w

The results of both optimizations:

time ./sockloop 1
0.062u 1.785s 0:01.85 99.4% 5+180k 0+0io 0pf+0w
time ./sockloop 1
0.124u 1.722s 0:01.85 99.4% 5+180k 0+0io 0pf+0w
time ./sockloop 1
0.087u 1.759s 0:01.85 98.9% 5+177k 0+0io 0pf+0w
time ./sockloop 10
0.069u 1.684s 0:01.75 99.4% 5+181k 0+0io 0pf+0w
time ./sockloop 10
0.070u 1.673s 0:01.74 100.0% 5+180k 0+0io 0pf+0w
time ./sockloop 10
0.070u 1.672s 0:01.74 100.0% 5+177k 0+0io 0pf+0w
time ./sockloop 100
0.077u 2.102s 0:02.18 99.5% 5+180k 0+0io 0pf+0w
time ./sockloop 100
0.116u 2.062s 0:02.18 99.5% 5+180k 0+0io 0pf+0w
time ./sockloop 100
0.055u 2.126s 0:02.19 99.0% 5+178k 0+0io 0pf+0w
time ./sockloop 1000
0.077u 2.298s 0:02.39 98.7% 5+181k 0+0io 0pf+0w
time ./sockloop 1000
0.070u 2.340s 0:02.42 99.5% 5+178k 0+0io 0pf+0w
time ./sockloop 1000
0.054u 2.320s 0:02.39 99.1% 5+179k 0+0io 0pf+0w

The patch is against uma_core.c from RELENG_5, but I checked uma_core.c in CURRENT - it's the same with regard to these improvements. I don't have any commit rights, so the patch is just for review. Here it is:

--- sys/vm/uma_core.c.orig	Wed Jun 29 21:46:52 2005
+++ sys/vm/uma_core.c	Wed Jun 29 23:09:32 2005
@@ -1830,8 +1830,7 @@
 	if (bucket) {
 		if (bucket->ub_cnt > 0) {
-			bucket->ub_cnt--;
-			item = bucket->ub_bucket[bucket->ub_cnt];
+			item = bucket->ub_bucket[--bucket->ub_cnt];
 #ifdef INVARIANTS
 			bucket->ub_bucket[bucket->ub_cnt] = NULL;
 #endif
@@ -2252,7 +2251,7 @@
 	cache = &zone->uz_cpu[cpu];
 
 zfree_start:
-	bucket = cache->uc_freebucket;
+	bucket = cache->uc_allocbucket;
 	if (bucket) {
 		/*
@@ -2263,8 +2262,7 @@
 		if (bucket->ub_cnt < bucket->ub_entries) {
 			KASSERT(bucket->ub_bucket[bucket->ub_cnt] == NULL,
 			    ("uma_zfree:
Re: hot path optimizations in uma_zalloc() uma_zfree()
On Thu, 30 Jun 2005, ant wrote:
> I just tried to make bucket management in the per-CPU cache work like in Solaris (see the paper by Jeff Bonwick - Magazines and Vmem) and got a performance gain of around 10% in my test program. Then I made another minor code optimization and got another 10%. The program just creates and destroys sockets in a loop.

This sounds great -- I'm off to bed now (.uk time and all), but will run some benchmarks locally tomorrow. I've recently started investigating using the PMC support in 6.x to look at cache behavior in the network-related fast paths, but haven't gotten too far as yet.

Thanks, Robert N M Watson
ICH6R RAID
It appears that I'm not the only one struggling with RAID1 using Intel's ICH6R. (My mobo is a Tyan S5150 with two identical 80GB WD SATA drives.) Using either 5.4-RELEASE or 6.0-CURRENT-SNAP004, the OS seems to load from the release CD but fails to boot from the RAID array. (5.4 drops into the boot loader and 6.0 can't find the OS.) Both operating systems boot fine without a RAID1 array, and both disks come up.

I even tried building the mirror and then disabling one drive using the firmware setup. The firmware says then to have the OS rebuild the mirror. OK,... The OS then boots fine to ad4 and can see ad6. I can dd if=/dev/ad4 of=/dev/ad6 successfully, but I cannot get atacontrol to rebuild the mirror. Maybe this approach will work and I'm doing something wrong...

Now man ata(4) indicates that only up to ICH5 is supported in 5.4 while up to ICH6 is supported in 6.0, but the hardware archives indicate that sos's MK3 patches made it into 5.4-RELEASE and that 6.0 fully supports the ICH6R. However, I'm actually not sure of the precise story here.

Has anyone out there actually booted FreeBSD using mirrored SATA drives with an ICH6R? If so, how did you do it and what version did you use? I could really use some help!

Thanks, -gayn
buffer locking
I'm working in FreeBSD 5.3-RELEASE. In various places in the buffer management code (e.g. ibwrite()) the buffer lock reference count is checked (see below), presumably to make sure the buffer is safely locked before working with it. Is there a reason that it's not necessary to ensure that the current thread is the lockholder? This seems like it could lead to a race condition where, say, ibwrite is content because SOMEONE has the buffer locked, even if it's not the current thread.

Thanks, -Nate
Source code navigation tools with call graph?
I'm currently in the position of needing to cut a large program into two halves and insert a clean API between them. To do this I need to get a good understanding of how the control flow works, and I'm looking for tools that might help me. So far I've seen:

- etags will follow the control flow downwards with the find-tag command (M-. in Emacs). It's useful at times, but a little pedestrian for what I want to do.
- cscope will show me all callers of a function. This is closer, but it's still not overly easy to read.
- Source navigator (snavigator) gives a graphical representation of the downwards control tree for a function. It doesn't seem to be able to go in the other direction, i.e. show what functions call a specific function.
- doxygen does the same thing. Arguably the graphical representation is nicer.
- kscope is a KDE wrapper for cscope. It seems to come closest to what I'm looking for in that it will show the callers, but the form in which it does so is painful. In particular, there doesn't seem to be any way to view the source code around the call.

If that's the best there is, I can live with it. But is it the best? Does anybody have a better tool? And yes, I've looked through /usr/ports/devel, but with 1536 ports, it's easy to miss things.

Greg
-- 
The virus contained in this message was not detected. Finger [EMAIL PROTECTED] for PGP public key. See complete headers for address and phone numbers.
Re: problem handling POSIX thread on FreeBSD
> Not sure I understand the question. What do you mean by S.O?

Sorry for my bad English; the correct word is O.S. (operating system).

> Are you saying that since the threads are POSIX, that you would expect the program to act the same on all Operating Systems?

Exactly, that's what I thought before your answer. I thought the same code would execute in the same way on Solaris, on GNU/Linux and on FreeBSD - or at least show the same results. I really do not know why Linux and Solaris show me:

	hilo1: puntero contiene 0
hilo2: puntero contiene 0
	hilo1: puntero contiene 0
hilo2: puntero contiene 3
	hilo1: puntero contiene 2
hilo2: puntero contiene 6
	hilo1: puntero contiene 4
hilo2: puntero contiene 9
	hilo1: puntero contiene 6
hilo2: puntero contiene 12
	hilo1: puntero contiene 8
hilo2: puntero contiene 15
	hilo1: puntero contiene 10
hilo2: puntero contiene 18
	hilo1: puntero contiene 12
hilo2: puntero contiene 21
	hilo1: puntero contiene 14
hilo2: puntero contiene 24
	hilo1: puntero contiene 16
finaliza hilo1 con id 1082231728
hilo2: puntero contiene 27
finaliza hilo2 con id 1090624432
fin hilo 2

Sadly, in my university we work with Solaris :'(

I repeat part of the code:

/* file: test.c */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

char buffer[512];
pthread_mutex_t mutex, mutex2;
pthread_t hilo1, hilo2;

void f1(void *ptr)
{
	int i, n = 10;
	int valor = 0;
	char *p = (char *) ptr;

	for (i = 0; i < n; i++) {
		pthread_mutex_lock(&mutex);
		sscanf(p, "%d", &valor);
		printf("\thilo1: puntero contiene %d\n", valor);
		valor = i * 3;
		sprintf(p, "%d", valor);
		pthread_mutex_unlock(&mutex2);
	}
	valor = (int) pthread_self();
	printf("finaliza hilo1 con id %d\n", valor);
	pthread_exit(&valor);
}

void f2(void *ptr)
{
	int i, n = 10;
	int valor = 0;
	char *p = (char *) ptr;

	for (i = 0; i < n; i++) {
		pthread_mutex_lock(&mutex2);
		sscanf(p, "%d", &valor);
		printf("hilo2: puntero contiene %d\n", valor);
		valor = i * 2;
		sprintf(p, "%d", valor);
		pthread_mutex_unlock(&mutex);
	}
	valor = (int) pthread_self();
	printf("finaliza hilo2 con id %d\n", valor);
	pthread_exit(&valor);
}

int main()
{
	pthread_attr_t atributos;

	memset(buffer, '\0', sizeof(buffer));
	pthread_mutex_init(&mutex, NULL);	/* linux */
	pthread_mutex_init(&mutex2, NULL);	/* linux */
	pthread_mutex_lock(&mutex2);	/* ¿? */
	if (pthread_attr_init(&atributos) < 0) {
		perror("pthread_attr_init");
		exit(-1);
	}
	if (pthread_attr_setscope(&atributos, PTHREAD_SCOPE_PROCESS) < 0) {
		perror("pthread_attr_setscope");
		exit(-2);
	}
	if (pthread_create(&hilo1, &atributos, (void *) f1, (void *) buffer) < 0) {
		perror("pthread_create hilo1");
		exit(-2);
	}
	if (pthread_create(&hilo2, &atributos, (void *) f2, (void *) buffer) < 0) {
		perror("pthread_create hilo2");
		exit(-2);
	}
	.
}

Do you believe that a mutex does not necessarily have to be unlocked by the same thread that locks it? Sorry for my bad English, but I am making an effort so that you manage to understand me.

-- 
Concepción, Chile.