Ian Smith wrote:
I take Jonathan's point that it would be nice to have this functionality
in atacontrol, though perhaps the BUGS section in ataidle(8) precludes
merging that? cc'ing Bruce Cran in case he wants to add something ..
ataidle is at the moment quite dumb about sending commands: it
Xin LI wrote:
Ben Stuyts wrote:
| Hi,
|
| While doing an rsync from a zfs filesystem to an external usb hd (also
| zfs), the rsync processes hung in zfs state. I could not kill these
| processes, although the rest of the server seemingly continued
On 13 mei 2008, at 21:53, Xin LI wrote:
| While doing an rsync from a zfs filesystem to an external usb hd (also
| zfs), the rsync processes hung in zfs state. I could not kill these
| processes, although the rest of the server seemingly continued to run
| fine. The reboot command did not
On Wed, May 14, 2008 at 10:39:36AM +0200, Ben Stuyts wrote:
On 13 mei 2008, at 21:53, Xin LI wrote:
| While doing an rsync from a zfs filesystem to an external usb hd (also
| zfs), the rsync processes hung in zfs state. I could not kill these
| processes, although the rest of the server
Kostik Belousov wrote:
BTW, The wired memory goes all over the place in this machine: even
when it's mostly idle it varies between 300 MB and 3.5 GB. That's why
I added 4 more GB.
3.5 GB wired? Do you run amd64?
Does wired memory drop lower after you change the load?
Either that or
On 14 mei 2008, at 10:50, Kostik Belousov wrote:
BTW, The wired memory goes all over the place in this machine: even
when it's mostly idle it varies between 300 MB and 3.5 GB. That's why
I added 4 more GB.
3.5 GB wired? Do you run amd64?
Does wired memory drop lower after you change the
On 14 mei 2008, at 10:15, Kris Kennaway wrote:
| While doing an rsync from a zfs filesystem to an external usb hd (also
| zfs), the rsync processes hung in zfs state. I could not kill these
| processes, although the rest of the server seemingly continued to run
| fine. The reboot command
On 14 mei 2008, at 10:50, Kostik Belousov wrote:
3.5 GB wired? Do you run amd64?
Yes.
Does wired memory drop lower after you change the load?
Sorry, forgot to answer this in my previous msg.
It is very unpredictable, and I have not found a pattern. It is a
small business server,
bms == Bruce M Simpson [EMAIL PROTECTED] writes:
bms It would be great if we could ship FreeBSD out of the box, ready
bms to automount removable media. This would be useful to all users,
bms but particularly for novices and people who just wanna get on
bms and use the beast.
I think this
Hi there group,
I have nscd running on 6.3 with backported patches, but maybe this will
apply to 7.0 as well?
What's the problem:
I have NSS set up with the nss_pg module, authenticating through the
pam pg module.
I have nscd running so I can make fewer queries to the pg server when
the system
I am trying the small attached program on FreeBSD 6.3 (amd64,
SCHED_4BSD) and 7-STABLE (i386, SCHED_ULE), both with libthr as the
threads library, and on both it produces the BROKEN message.
I compile this program as follows:
cc sched_test.c -o sched_test -pthread
I believe that the behavior I observe
I had a box run out of dynamic state space yesterday. I found I can
increase the number of dynamic rules by increasing the sysctl
parameter net.inet.ip.fw.dyn_max. I can't find, however, how this
affects memory usage on the system. Is it dynamically allocated and
de-allocated, or is it
Hi,
after updating an Intel S5000PAL system from 6.2 to 6.3, ums(4) is no
longer attaching correctly.
Here's a dmesg diff between 6.2 and 6.3:
uhub3: Intel UHCI root hub, class 9/0, rev 1.00/1.00, addr 1
uhub3: 2 ports with 2 removable, self powered
ehci0: EHCI (generic) USB 2.0 controller
Hi,
there's a regression going from 6.2 to 6.3: the kernel panics in
vm_fault while booting. This problem has been discussed before, but
I'm seeing it reliably on a RELENG_6 checkout from the 5th of May.
It affects multiple (but identical) systems; here's a verbose boot
leading to
On Wed, 2008-05-14 at 17:32 +0200, Ulrich Spoerlein wrote:
Hi,
there's a regression going from 6.2 to 6.3: the kernel panics in
vm_fault while booting. This problem has been discussed before, but
I'm seeing it reliably on a RELENG_6 checkout from the 5th of May.
It affects
on 14/05/2008 18:17 Andriy Gapon said the following:
I am trying the small attached program on FreeBSD 6.3 (amd64,
SCHED_4BSD) and 7-STABLE (i386, SCHED_ULE), both with libthr as the
threads library, and on both it produces the BROKEN message.
I compile this program as follows:
cc sched_test.c -o
Hi,
I eagerly started using atacontrol's new spindown command the other day.
There's a gmirror volume running on top of the two disks that get spun down.
I find that often when the drives are spun back up to serve a disk request,
one of the ata devices times out and my system goes into a never
On Tue, 13 May 2008 00:26:49 -0400
Pierre-Luc Drouin [EMAIL PROTECTED] wrote:
Hi,
I would like to know if the memory allocation problem with zfs has
been fixed in -stable? Is zfs considered to be more stable now?
Thanks!
Pierre-Luc Drouin
We just set up a zfs based fileserver in our
Hello,
I have a FreeBSD 7.0-STABLE amd64 box which gives this message with apache 2.2
very often. Previously the contents of the box were on 6.3-STABLE x86 and I had
no such problems. This started right away when we moved to 7, 64bit.
FreeBSD web.X.com 7.0-STABLE FreeBSD 7.0-STABLE #0:
I am trying the small attached program on FreeBSD 6.3 (amd64,
SCHED_4BSD) and 7-STABLE (i386, SCHED_ULE), both with libthr as the
threads library, and on both it produces the BROKEN message.
I compile this program as follows:
cc sched_test.c -o sched_test -pthread
I believe that the behavior I
On Wed, May 14, 2008 at 10:20:42PM +0200, Aragon Gouveia wrote:
This is what is logged on the console after the disk spin up message:
ad8: WARNING - SETFEATURES SET TRANSFER MODE taskqueue timeout - completing
request directly
ad8: WARNING - SETFEATURES SET TRANSFER MODE taskqueue timeout -
On Wed, May 14, 2008 at 11:39:10PM +0300, Evren Yurtesen wrote:
Approaching the limit on PV entries, consider increasing either the
vm.pmap.shpgperproc or the vm.pmap.pv_entry_max sysctl.
Approaching the limit on PV entries, consider increasing either the
vm.pmap.shpgperproc or the
Andriy Gapon wrote:
I am trying the small attached program on FreeBSD 6.3 (amd64,
SCHED_4BSD) and 7-STABLE (i386, SCHED_ULE), both with libthr as the
threads library, and on both it produces the BROKEN message.
I compile this program as follows:
cc sched_test.c -o sched_test -pthread
I believe that the
On Wed, 14 May 2008, Andriy Gapon wrote:
I believe that the behavior I observe is broken because: if thread #1
releases a mutex and then tries to re-acquire it while thread #2 was
already blocked waiting on that mutex, then thread #1 should be queued
after thread #2 in the mutex's waiter list.