Re: Only noise from azalia
On Tue, Jan 17, 2012 at 02:24:19PM -0200, Jairo Souto wrote:
> I have tried every bsd.mp from ftp.openbsd.org:/pub/OpenBSD/snapshots/amd64/bsd.mp

Are you running 4.9 or -current?

> misc@ did not answer...

Usually, people on tech@ are on misc@ too. They probably won't answer because you're running a custom-built kernel. Please see: http://www.openbsd.org/faq/faq5.html#WhySrc
Re: etc/rc.d/sendmail diff
OK sthen@. It is easy to get into this state if untarring a new base*.tgz on a system where sendmail is running.

On 2012/01/17 07:40, Dan Harnett wrote:
> The sendmail daemon can be in a state where it is rejecting new messages
> and sets the proc title accordingly. The current rc.d script ignores
> sendmail if it is in this state.
>
>   $ pgrep -lf sendmail
>   459 sendmail: rejecting new messages: min free: 100
>
> I don't believe the wildcard following '(accepting|rejecting)' is really
> needed either, but left it in.
>
> Index: sendmail
> ===================================================================
> RCS file: /home/danh/.cvs/openbsd/src/etc/rc.d/sendmail,v
> retrieving revision 1.4
> diff -u -p -r1.4 sendmail
> --- sendmail	12 Jul 2011 05:40:55 -0000	1.4
> +++ sendmail	17 Jan 2012 11:59:54 -0000
> @@ -6,7 +6,7 @@ daemon=/usr/sbin/sendmail
>  
>  . /etc/rc.d/rc.subr
>  
> -pexp="sendmail: accepting.*|${daemon}* -(q[0-9]|bd)*"
> +pexp="sendmail: (accepting|rejecting).*|${daemon}* -(q[0-9]|bd)*"
>  rc_bg=YES
FIX: filedescriptor leak in authpf.c
Hello,

this diff fixes a file descriptor leak in authpf.c. The function allowed_luser() is only called once, directly from main(), but I think the code should be consistent and close the file on every return path. I have only compiled the sources; I could not test the code path.

bye,
Jan

Index: authpf.c
===================================================================
RCS file: /mount/cvsdev/cvs/openbsd/src/usr.sbin/authpf/authpf.c,v
retrieving revision 1.115
diff -u -w -p -r1.115 authpf.c
--- authpf.c	2 Sep 2010 14:01:04 -0000	1.115
+++ authpf.c	18 Jan 2012 09:13:49 -0000
@@ -523,6 +523,7 @@ allowed_luser(struct passwd *pw)
 				    "invalid group '%s' in %s (%s)",
 				    buf + 1, PATH_ALLOWFILE,
 				    strerror(errno));
+				fclose(f);
 				return (0);
 			}
 
@@ -549,9 +550,11 @@ allowed_luser(struct passwd *pw)
 			lbuf = NULL;
 		}
 
-		if (matched)
+		if (matched) {
+			fclose(f);
 			return (1); /* matched an allowed user/group */
+		}
 	}
 	syslog(LOG_INFO, "denied access to %s: not listed in %s",
 	    pw->pw_name, PATH_ALLOWFILE);
@@ -560,6 +563,7 @@ allowed_luser(struct passwd *pw)
 		fputs(buf, stdout);
 	}
 	fflush(stdout);
+	fclose(f);
 	return (0);
 }
Re: etc/rc.d/sendmail diff
On Wed, Jan 18, 2012 at 01:47:52PM +0000, Stuart Henderson wrote:
> OK sthen@. It is easy to get into this state if untarring a new
> base*.tgz on a system where sendmail is running.

Replacing the queue directory while sendmail was running is actually how I produced the example. However, it happens in real-world cases as well, like high load averages, running out of memory, and too many child processes. In some cases, it could be reasonable to change the sendmail configuration to adapt to the higher demand. Then it should be possible to use the rc.d script to restart it.

> On 2012/01/17 07:40, Dan Harnett wrote:
> > The sendmail daemon can be in a state where it is rejecting new messages
> > and sets the proc title accordingly. The current rc.d script ignores
> > sendmail if it is in this state.
> >
> >   $ pgrep -lf sendmail
> >   459 sendmail: rejecting new messages: min free: 100
> >
> > I don't believe the wildcard following '(accepting|rejecting)' is really
> > needed either, but left it in.
> >
> > Index: sendmail
> > ===================================================================
> > RCS file: /home/danh/.cvs/openbsd/src/etc/rc.d/sendmail,v
> > retrieving revision 1.4
> > diff -u -p -r1.4 sendmail
> > --- sendmail	12 Jul 2011 05:40:55 -0000	1.4
> > +++ sendmail	17 Jan 2012 11:59:54 -0000
> > @@ -6,7 +6,7 @@ daemon=/usr/sbin/sendmail
> >  
> >  . /etc/rc.d/rc.subr
> >  
> > -pexp="sendmail: accepting.*|${daemon}* -(q[0-9]|bd)*"
> > +pexp="sendmail: (accepting|rejecting).*|${daemon}* -(q[0-9]|bd)*"
> >  rc_bg=YES
Re: Scheduler improvements
Hi people,

after a long time of silence, here's a second iteration of the patch. I've addressed a few concerns voiced here:

* Process lookup and friends now have O(log n) runtime. I achieved that by abusing RB-trees as priority queues, since they have O(log n) runtime in all relevant operations.
* The algorithm for calculating a new deadline for a given process has been simplified and is now documented a bit better. It also derives the deadline offset from the value of hz (via rrticks_init), as suggested by Miod (?).
* CPU rlimits are now honored again. The relevant code did not change; the new patch just no longer removes rlimit enforcement.
* Timeslices are 20ms long instead of 10ms. This solves the issue of 0ms-long timeslices on machines with hz < 100.

With recent improvements in the mainline scheduler, and especially rthreads, the performance of the patched scheduler and mainline is now roughly similar, at least as far as throughput is concerned. I have the feeling that the system behaves snappier with my patch, but that might be some sort of placebo effect. I haven't yet come up with a reliable method to benchmark interactivity except for actually using the machine and doing stuff on it.

It's interesting to note, however, that the patched scheduler achieves performance similar to the default one without all the fancy methods for calculating how expensive it is to move a process from one CPU to another, or the related methods for preserving cache locality. I use the patched scheduler exclusively on my Core2Duo machine with an MP build.

The balance of lines removed versus lines added by this patch has shifted towards more added lines, but the patch still removes 173 more lines than it adds compared to the default.
Once again, comments, rants, insults, everything is welcome :)

--
Gregor Best

Index: sys/proc.h
===================================================================
RCS file: /cvs/src/sys/sys/proc.h,v
retrieving revision 1.149
diff -u -r1.149 proc.h
--- sys/proc.h	7 Jan 2012 05:38:12 -0000	1.149
+++ sys/proc.h	17 Jan 2012 16:10:09 -0000
@@ -43,6 +43,7 @@
 #include <machine/proc.h>	/* Machine-dependent proc substruct. */
 #include <sys/selinfo.h>	/* For struct selinfo */
 #include <sys/queue.h>
+#include <sys/tree.h>
 #include <sys/timeout.h>	/* For struct timeout */
 #include <sys/event.h>		/* For struct klist */
 #include <sys/mutex.h>		/* For struct mutex */
@@ -210,7 +211,9 @@
 #define	PS_SINGLEUNWIND	_P_SINGLEUNWIND
 
 struct proc {
-	TAILQ_ENTRY(proc) p_runq;
+	RB_ENTRY(proc) p_runq;
+	RB_ENTRY(proc) p_expq;
+	TAILQ_ENTRY(proc) p_slpq;
 	LIST_ENTRY(proc) p_list;	/* List of all processes. */
 
 	struct process *p_p;		/* The process of this thread. */
@@ -251,6 +254,9 @@
 #endif
 
 	/* scheduling */
+	struct timeval p_deadline;	/* virtual deadline used for scheduling */
+	struct timeval p_deadline_set;	/* time at which the deadline was set */
+	int p_rrticks;			/* number of ticks this process is allowed to stay on a processor */
 	u_int p_estcpu;			/* Time averaged value of p_cpticks. */
 	int p_cpticks;			/* Ticks of cpu time. */
 	fixpt_t p_pctcpu;		/* %cpu for this process during p_swtime */

Index: sys/sched.h
===================================================================
RCS file: /cvs/src/sys/sys/sched.h,v
retrieving revision 1.30
diff -u -r1.30 sched.h
--- sys/sched.h	16 Nov 2011 20:50:19 -0000	1.30
+++ sys/sched.h	17 Jan 2012 16:10:09 -0000
@@ -87,8 +87,6 @@
 #define CP_IDLE		4
 #define CPUSTATES	5
 
-#define	SCHED_NQS	32		/* 32 run queues. */
-
 /*
  * Per-CPU scheduler state.
  * XXX - expose to userland for now.
@@ -99,7 +97,6 @@
 	u_int spc_schedticks;		/* ticks for schedclock() */
 	u_int64_t spc_cp_time[CPUSTATES]; /* CPU state statistics */
 	u_char spc_curpriority;		/* usrpri of curproc */
-	int spc_rrticks;		/* ticks until roundrobin() */
 
 	int spc_pscnt;			/* prof/stat counter */
 	int spc_psdiv;			/* prof/stat divisor */
 	struct proc *spc_idleproc;	/* idle proc for this cpu */
@@ -107,9 +104,6 @@
 	u_int spc_nrun;			/* procs on the run queues */
 	fixpt_t spc_ldavg;		/* shortest load avg. for this cpu */
 
-	TAILQ_HEAD(prochead, proc) spc_qs[SCHED_NQS];
-	volatile uint32_t spc_whichqs;
-
 #ifdef notyet
 	struct proc *spc_reaper;	/* dead proc reaper */
 #endif
@@ -119,18 +113,16 @@
 #ifdef _KERNEL
 /* spc_flags */
-#define SPCF_SEENRR		0x0001	/* process has seen roundrobin() */
-#define SPCF_SHOULDYIELD	0x0002	/* process should yield the CPU */
-#define SPCF_SWITCHCLEAR	(SPCF_SEENRR|SPCF_SHOULDYIELD)
-#define SPCF_SHOULDHALT		0x0004	/* CPU
allow more glob stats
The glob limit of only 128 stat calls seems rather low. We allow 16384 readdir calls, by comparison. We also have a limit on the amount of memory used, which effectively caps stats too.

Why 2048? I have 1435 files in /usr/local/bin and I think even a limited glob should be able to list them all.

For that matter, failed stats aren't much cheaper than successful stats, so we should probably do the counting before the stat, not after.

Index: glob.3
===================================================================
RCS file: /home/tedu/cvs/src/lib/libc/gen/glob.3,v
retrieving revision 1.29
diff -u -p -r1.29 glob.3
--- glob.3	8 Oct 2010 21:48:42 -0000	1.29
+++ glob.3	18 Jan 2012 16:21:23 -0000
@@ -269,7 +269,7 @@ Limit the amount of memory used to store
 .Li 64K ,
 the number of
 .Xr stat 2
-calls to 128, and the number of
+calls to 2048, and the number of
 .Xr readdir 3
 calls to 16K.
 This option should be set for programs that can be coerced to a denial of

Index: glob.c
===================================================================
RCS file: /home/tedu/cvs/src/lib/libc/gen/glob.c,v
retrieving revision 1.38
diff -u -p -r1.38 glob.c
--- glob.c	22 Sep 2011 06:27:29 -0000	1.38
+++ glob.c	18 Jan 2012 16:24:52 -0000
@@ -123,7 +123,7 @@ typedef char Char;
 #define	ismeta(c)	(((c)&M_QUOTE) != 0)
 
 #define	GLOB_LIMIT_MALLOC	65536
-#define	GLOB_LIMIT_STAT		128
+#define	GLOB_LIMIT_STAT		2048
 #define	GLOB_LIMIT_READDIR	16384
 
 /* Limit of recursion during matching attempts. */
@@ -628,8 +628,6 @@ glob2(Char *pathbuf, Char *pathbuf_last,
 	for (anymeta = 0;;) {
 		if (*pattern == EOS) {		/* End of pattern? */
 			*pathend = EOS;
-			if (g_lstat(pathbuf, &sb, pglob))
-				return(0);
 
 			if ((pglob->gl_flags & GLOB_LIMIT) &&
 			    limitp->glim_stat++ >= GLOB_LIMIT_STAT) {
@@ -638,6 +636,8 @@ glob2(Char *pathbuf, Char *pathbuf_last,
 				*pathend = EOS;
 				return(GLOB_NOSPACE);
 			}
+			if (g_lstat(pathbuf, &sb, pglob))
+				return(0);
 
 			if (((pglob->gl_flags & GLOB_MARK) &&
 			    pathend[-1] != SEP) && (S_ISDIR(sb.st_mode) ||
Re: etc/rc.d/sendmail diff
On 17/01/2012 13:40, Dan Harnett wrote:
> The sendmail daemon can be in a state where it is rejecting new messages
> and sets the proc title accordingly. The current rc.d script ignores
> sendmail if it is in this state.
>
>   $ pgrep -lf sendmail
>   459 sendmail: rejecting new messages: min free: 100
>
> I don't believe the wildcard following '(accepting|rejecting)' is really
> needed either, but left it in.

ok giovanni@
Re: Scheduler improvements
And it didn't take long for me to find a small bug... attached is a fixed version of the patch. Such things happen if one decides to regenerate a patch just in case and forgets to revert to a working version before doing so :D

--
Gregor Best

Index: sys/proc.h
===================================================================
RCS file: /cvs/src/sys/sys/proc.h,v
retrieving revision 1.149
diff -u -r1.149 proc.h
--- sys/proc.h	7 Jan 2012 05:38:12 -0000	1.149
+++ sys/proc.h	17 Jan 2012 16:10:09 -0000
@@ -43,6 +43,7 @@
 #include <machine/proc.h>	/* Machine-dependent proc substruct. */
 #include <sys/selinfo.h>	/* For struct selinfo */
 #include <sys/queue.h>
+#include <sys/tree.h>
 #include <sys/timeout.h>	/* For struct timeout */
 #include <sys/event.h>		/* For struct klist */
 #include <sys/mutex.h>		/* For struct mutex */
@@ -210,7 +211,9 @@
 #define	PS_SINGLEUNWIND	_P_SINGLEUNWIND
 
 struct proc {
-	TAILQ_ENTRY(proc) p_runq;
+	RB_ENTRY(proc) p_runq;
+	RB_ENTRY(proc) p_expq;
+	TAILQ_ENTRY(proc) p_slpq;
 	LIST_ENTRY(proc) p_list;	/* List of all processes. */
 
 	struct process *p_p;		/* The process of this thread. */
@@ -251,6 +254,9 @@
 #endif
 
 	/* scheduling */
+	struct timeval p_deadline;	/* virtual deadline used for scheduling */
+	struct timeval p_deadline_set;	/* time at which the deadline was set */
+	int p_rrticks;			/* number of ticks this process is allowed to stay on a processor */
 	u_int p_estcpu;			/* Time averaged value of p_cpticks. */
 	int p_cpticks;			/* Ticks of cpu time. */
 	fixpt_t p_pctcpu;		/* %cpu for this process during p_swtime */

Index: sys/sched.h
===================================================================
RCS file: /cvs/src/sys/sys/sched.h,v
retrieving revision 1.30
diff -u -r1.30 sched.h
--- sys/sched.h	16 Nov 2011 20:50:19 -0000	1.30
+++ sys/sched.h	17 Jan 2012 16:10:09 -0000
@@ -87,8 +87,6 @@
 #define CP_IDLE		4
 #define CPUSTATES	5
 
-#define	SCHED_NQS	32		/* 32 run queues. */
-
 /*
  * Per-CPU scheduler state.
  * XXX - expose to userland for now.
@@ -99,7 +97,6 @@
 	u_int spc_schedticks;		/* ticks for schedclock() */
 	u_int64_t spc_cp_time[CPUSTATES]; /* CPU state statistics */
 	u_char spc_curpriority;		/* usrpri of curproc */
-	int spc_rrticks;		/* ticks until roundrobin() */
 
 	int spc_pscnt;			/* prof/stat counter */
 	int spc_psdiv;			/* prof/stat divisor */
 	struct proc *spc_idleproc;	/* idle proc for this cpu */
@@ -107,9 +104,6 @@
 	u_int spc_nrun;			/* procs on the run queues */
 	fixpt_t spc_ldavg;		/* shortest load avg. for this cpu */
 
-	TAILQ_HEAD(prochead, proc) spc_qs[SCHED_NQS];
-	volatile uint32_t spc_whichqs;
-
 #ifdef notyet
 	struct proc *spc_reaper;	/* dead proc reaper */
 #endif
@@ -119,18 +113,16 @@
 #ifdef _KERNEL
 /* spc_flags */
-#define SPCF_SEENRR		0x0001	/* process has seen roundrobin() */
-#define SPCF_SHOULDYIELD	0x0002	/* process should yield the CPU */
-#define SPCF_SWITCHCLEAR	(SPCF_SEENRR|SPCF_SHOULDYIELD)
-#define SPCF_SHOULDHALT		0x0004	/* CPU should be vacated */
-#define SPCF_HALTED		0x0008	/* CPU has been halted */
+#define SPCF_SHOULDYIELD	0x0001	/* process should yield the CPU */
+#define SPCF_SHOULDHALT		0x0002	/* CPU should be vacated */
+#define SPCF_HALTED		0x0004	/* CPU has been halted */
 
-#define	SCHED_PPQ	(128 / SCHED_NQS)	/* priorities per queue */
 #define NICE_WEIGHT 2			/* priorities per nice level */
-#define	ESTCPULIM(e) min((e), NICE_WEIGHT * PRIO_MAX - SCHED_PPQ)
+#define	ESTCPULIM(e) min((e), NICE_WEIGHT * PRIO_MAX)
 
 extern int schedhz;			/* ideally: 16 */
 extern int rrticks_init;		/* ticks per roundrobin() */
+extern struct cpuset sched_idle_cpus;
 
 struct proc;
 void schedclock(struct proc *);
@@ -147,18 +139,20 @@
 void cpu_switchto(struct proc *, struct proc *);
 struct proc *sched_chooseproc(void);
 struct cpu_info *sched_choosecpu(struct proc *);
-struct cpu_info *sched_choosecpu_fork(struct proc *parent, int);
+struct cpu_info *sched_choosecpu_fork(struct proc *parent);
 void cpu_idle_enter(void);
 void cpu_idle_cycle(void);
 void cpu_idle_leave(void);
 void sched_peg_curproc(struct cpu_info *ci);
+void generate_deadline(struct proc *, char);
+
 #ifdef MULTIPROCESSOR
 void sched_start_secondary_cpus(void);
 void sched_stop_secondary_cpus(void);
 #endif
 
-#define curcpu_is_idle()	(curcpu()->ci_schedstate.spc_whichqs == 0)
+#define curcpu_is_idle()	(cpuset_isset(&sched_idle_cpus, curcpu()))
 
 void