Hi!
On a FreeBSD 8.0-RELEASE i386 (GENERIC kernel):
# cd /usr/src/sys/boot/efi/libefi/
# ls -l
ls: efinet.c: Bad file descriptor
total 38
-rw-r--r-- 1 root wheel 461 Oct 25 2009 Makefile
-rw-r--r-- 1 root wheel 1777 Oct 25 2009 delay.c
-rw-r--r-- 1 root wheel 2682 Oct 25 2009
Hi!
On a FreeBSD 8.0-RELEASE i386 (GENERIC kernel):
[...]
/dev/ad4s1e 101554150 3143488157741142862 -3143488157647713044 3364544879817% /usr
The last reboot did not do any fsck, smartctl does not complain.
So, what can I do to fix this?
The last reboot *did* indeed run fsck; sorry for the
On Tue, Apr 05, 2011 at 09:22:27AM +0200, Kurt Jaeger wrote:
On a FreeBSD 8.0-RELEASE i386 (GENERIC kernel):
# cd /usr/src/sys/boot/efi/libefi/
# ls -l
ls: efinet.c: Bad file descriptor
Your file system is broken and needs fsck.
total 38
-rw-r--r-- 1 root wheel 461 Oct 25 2009
On Tue, Apr 05, 2011 at 09:29:59AM +0200, Kurt Jaeger wrote:
Hi!
On a FreeBSD 8.0-RELEASE i386 (GENERIC kernel):
[...]
/dev/ad4s1e 101554150 3143488157741142862 -3143488157647713044 3364544879817% /usr
The last reboot did not do any fsck, smartctl does not complain.
So, what
I've reached almost 118 MB/s but I don't have access to the
configuration atm. This was from a Windows 7 client. From VMware I've
gotten 107 MB/s during a Debian 6 server installation. I'll post the
settings when I get back to work.
That would be nice. I will also test a Windows 7 client,
On Tue, Apr 05, 2011 at 12:54:26AM -0700, Jeremy Chadwick wrote:
2) Tried booting into single-user to run fsck -f /dev/ad4 anyway?
Sorry, this should have been fsck -f /dev/ad4s1e. Derp. :-)
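Jeremy's suggestion amounts to a short single-user session; a minimal sketch, assuming /usr really does live on ad4s1e as the df output in this thread shows (adapt the device name to your own layout):

```shell
# From single-user mode, with the damaged filesystem unmounted:
umount /usr                 # ignore the error if it was never mounted
fsck -f /dev/ad4s1e         # -f forces a full check even if marked clean
mount /usr                  # remount once fsck reports the fs clean
```

Running fsck on a mounted read-write filesystem can do more harm than good, which is why single-user mode matters here.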
--
| Jeremy Chadwick j...@parodius.com |
| Parodius Networking
On Mon, 4 Apr 2011 11:08:16 -0700 Freddie Cash wrote:
FC On Sat, Apr 2, 2011 at 1:44 AM, Pawel Jakub Dawidek p...@freebsd.org
wrote:
I just committed a fix for a problem that might look like a deadlock.
With trociny@'s patch and my last fix (to GEOM GATE and hastd) do you
still have any
hi,
On Tuesday, 05.04.2011 at 10:01 +0200, Claus Guttesen wrote:
The only setting I changed was:
QueueDepth 64
[...]
I've now tested the default params with your QueueDepth setting, but nothing changed.
Started at the same time with clusterssh:
root@dhcp1 ~ # dd if=/dev/zero of=/dev/sda bs=1M
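Note that the dd invocation above writes to the raw disk and destroys its contents (and, with no count, runs until the device is full). A safer variant of the same sequential-write measurement points at a scratch file; the path and size here are arbitrary examples:

```shell
# Sequential-write throughput test against a scratch file (non-destructive).
# dd reports the transfer rate on completion.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256
rm /tmp/ddtest
```

(FreeBSD's dd also accepts a lowercase suffix, bs=1m; GNU dd wants bs=1M.)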
on 05/04/2011 04:01 Jeremy Chadwick said the following:
On Mon, Apr 04, 2011 at 08:56:10PM -0400, Boris Kochergin wrote:
No swap, blank /boot/loader.conf, default /etc/sysctl.conf. I'm
going to try this ARC tuning thing. I vaguely recall several claims
that tuning wasn't necessary anymore on
Adding some swap would help a lot more.
So, I run a lot of systems without swap - basically my
thinking at the time I set them up went like this:
I have 4 gig of memory, and 4 gig of swap. Surely running 8 gig of
memory and no swap will be just as good?
But is that actually true? Is real
On 04/05/11 10:04, Pete French wrote:
Adding some swap would help a lot more.
So, I run a lot of systems without swap - basically my
thinking at the time I set them up went like this.
I have 4 gig of memory, and 4 gig of swap. Surely running 8 gig of
memory and no swap will be just as good ?
Years ago, when RAM was expensive, swap was a necessary workaround. That doesn't
mean that swap is useless. It all depends on what a server is doing. If you
are using a database server then it is absolutely normal and expected to
cache. Also, unlike other OSes, FreeBSD tends to make use of the RAM. Swapping
on 05/04/2011 17:04 Pete French said the following:
Adding some swap would help a lot more.
So, I run a lot of systems without swap - basically my
thinking at the time I set them up went like this.
I have 4 gig of memory, and 4 gig of swap. Surely running 8 gig of
memory and no swap will
On Tue, Apr 05, 2011 at 03:04:22PM +0100, Pete French wrote:
Adding some swap would help a lot more.
So, I run a lot of systems without swap - basically my
thinking at the time I set them up went like this.
I have 4 gig of memory, and 4 gig of swap. Surely running 8 gig of
memory and no
Having swap provides some cushion. Swap kind of smooths any bursts. (And it can
also slow things down as a side effect.)
This is why I got rid of it - my application is a lot of CGI scripts. The
overload condition is that we run out of memory - and we run *way* out
of memory; it's never
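For anyone in this thread who wants the cushion back without repartitioning, FreeBSD can swap to an md(4)-backed file. A hypothetical sketch (the path, unit number, and 4 GB size are examples, not recommendations):

```shell
# Create a 4 GB swap file and attach it via md(4) (run as root on FreeBSD):
dd if=/dev/zero of=/usr/swap0 bs=1m count=4096
chmod 0600 /usr/swap0                       # swap files must not be readable by others
mdconfig -a -t vnode -f /usr/swap0 -u 99    # creates /dev/md99
swapon /dev/md99
swapinfo -h                                 # verify the new swap device is active
```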
This is more of a proof-of-concept question:
I am building a redundant cluster of blade servers, and am toying with the
idea of using HAST and ZFS for the storage.
Blades will work in pairs and each pair will provide various services,
from SQL databases, to hosting virtual machines (jails and
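A blade pair like that is roughly what hast.conf(5) describes; a minimal sketch of one replicated resource, with hostnames, addresses, and device names invented for illustration:

```
# /etc/hast.conf - identical on both blades of one pair
resource shared0 {
        on blade-a {
                local /dev/da1
                remote 10.0.0.2
        }
        on blade-b {
                local /dev/da1
                remote 10.0.0.1
        }
}
```

The primary node would then build its pool on the provider HAST exposes, e.g. something like `zpool create tank /dev/hast/shared0`, and failover means promoting the secondary and importing the pool there.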
On Apr 5, 2011, at 7:54 AM, Pete French wrote:
Having swap provides some cushion. Swap kind of smooths any bursts. (And it can
also slow things down as a side effect.)
This is why I got rid of it - my application is a lot of CGI scripts. The
overload condition is that we run out of memory
On 05.04.2011 15:51, Andriy Gapon wrote:
Boris,
ARC is an adaptive cache (as its name says), but the adaptation doesn't happen
instantly. So, when your applications do not use a lot of memory, but there is
steady filesystem usage, then the ZFS ARC is going to gradually grow to consume an
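The usual way to bound that growth, for those who choose to tune, is a loader.conf cap; a hypothetical example (the value is purely illustrative and must suit the machine's workload):

```
# /boot/loader.conf - cap the ZFS ARC at 2 GB (example value, in bytes)
vfs.zfs.arc_max="2147483648"
```

The tunable takes effect at boot; current ARC size can be compared against it via the kstat.zfs sysctls.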
On Tue, Apr 5, 2011 at 5:05 AM, Mikolaj Golub troc...@freebsd.org wrote:
On Mon, 4 Apr 2011 11:08:16 -0700 Freddie Cash wrote:
FC On Sat, Apr 2, 2011 at 1:44 AM, Pawel Jakub Dawidek p...@freebsd.org
wrote:
I just committed a fix for a problem that might look like a deadlock.
With
Hi,
Today I had a deadlock on several machines. Almost all processes were stuck
in [tx->tx_cpu[c].tc_lock]. The machines could only be recovered with `reboot -n'.
I've created a new gzip-compressed filesystem a few days ago. I didn't have any
problems with ZFS before.
System was built from
- Original Message -
From: Pete French petefre...@ingresso.co.uk
This is why I got rid of it - my application is a lot of CGI scripts. The
overload condition is that we run out of memory - and we run *way* out
of memory; it's never just a little overflow, it's either handleable or
Today I had a deadlock on several machines. Almost all processes were stuck
in [tx->tx_cpu[c].tc_lock]. The machines could only be recovered with `reboot -n'.
I've created a new gzip-compressed filesystem a few days ago. I didn't have any
problems with ZFS before.
System was built from