Hi,
I've run into a bit of a snag building a 2T file server.
Given all the good press here for 3ware and the talks with the guys at CeBIT
I decided to go for a 9550SX-LP8.
With that I bought an ASUS server board: K8N-LR with a 165 dual-core Opteron.
In itself this is a combo that I think
, or a PCI-E 16x?
--WjW
Willem Jan Withagen wrote:
Hi,
I've run into a bit of a snag building a 2T file server.
Given all the good press here for 3ware and the talks with the guys at
CeBIT I decided to go for a 9550SX-LP8.
With that I bought an ASUS server board: K8N-LR with a 165 dual-core
Greg Martin wrote:
I find it hard to believe nobody has mentioned 3ware; they are a bit
more expensive, but you pay for top-notch quality, stability...
Their newer cards support PCI-X and SATA II w/ hotswap.
Well, the message started by saying that I got caught by a 3ware card that did not
Greg Martin wrote:
Well, the message started by saying that I got caught by a 3ware card that did
not want to play nice with me.
So I guess nobody dared suggest another 3ware card.
;)
My apologies, I now understand it's a hardware issue. Before you toss
the 3ware completely, try the
Ruslan Ermilov wrote:
On Mon, Aug 14, 2006 at 07:02:09PM +0200, Oliver Fromme wrote:
Hi,
I just tried to do a make installworld with DESTDIR set
to a directory on an NFS server. It broke like this:
===> lib/libcom_err/doc (install)
lockf -k
Guido van Rooij wrote:
Anyway, I created a gm device and a partition. Now the read performance
is not what I'd expect.
I have the partition on two SATA devices on different controllers.
I get around 60MB/s for each disk. I can get that speed from both disks
simultaneously.
Now when I dd from the
Oliver Fromme wrote:
Guido van Rooij wrote:
Anyway, I created a gm device and a partition. Now the read performance
is not what I'd expect.
I have the partition on two SATA devices on different controllers.
I get around 60MB/s for each disk. I can get that speed from both disks
Hi,
I have a customer with a server with an Adaptec 2100 SCSI card and two 73 GB
Seagates, running 6.1.
Now one of the disks has been acting up and was disconnected from the RAID-1.
The customer got a call from the NOC that a server was beeping quite loudly.
So I rebuilt the RAID and stress-tested it a
Bruce Burden wrote:
On Wed, Dec 13, 2006 at 12:13:11PM -0500, Jim Pingle wrote:
Willem Jan Withagen wrote:
So the $1000 question: is there any chance of getting at least the state
of the RAID and its disks out into the open?
It works fine here. I use this on systems with 2100s, and with 2010s
Charles Sprickman wrote:
On Wed, 13 Dec 2006, Jim Pingle wrote:
Willem Jan Withagen wrote:
So the $1000 question: is there any chance of getting at least the state
of the RAID and its disks out into the open? It sure would give me a
much better feeling, knowing that at least serious trouble
Hi,
I got the following Filesystem:
Filesystem  Size  Used  Avail  Capacity   iused    ifree   %iused
/dev/da0a   1.3T  422G  823G     34%     565952  18283347    0%
Running off a 3ware 9550, on a dual-core Opteron 242 with 1 GB.
The system is used as SMB/NFS server for my other systems
Gary Palmer wrote:
On Tue, Jan 02, 2007 at 09:06:24PM +0100, Willem Jan Withagen wrote:
Hi,
I got the following Filesystem:
Filesystem  Size  Used  Avail  Capacity   iused    ifree   %iused
/dev/da0a   1.3T  422G  823G     34%     565952  18283347    0%
Running off a 3ware 9550, on a dual
LI Xin wrote:
Willem Jan Withagen wrote:
Hi,
I got the following Filesystem:
Filesystem  Size  Used  Avail  Capacity   iused    ifree   %iused
/dev/da0a   1.3T  422G  823G     34%     565952  18283347    0%
Running off a 3ware 9550, on a dual-core Opteron 242 with 1 GB.
The system is used
Kris Kennaway wrote:
..
The file system would come to a stop: processes stuck on bio, snapshots
not finishing, etc. This was caused by the system running out of usable
buffers. The change forces them to be flushed every so often. This is
independent of locking. 10 might be too aggressive.
Doug Ambrisko wrote:
| or things can get wedged. We have some other patches as well that might
| be required. As a hack on a local server we have been using snapshots
| to do a hot backup of a database each morning. This is based on
| 6.x.
|
| What do you mean by get wedged? Are you
Kris Kennaway wrote:
Or waiting until the snapshot operation finishes. You (still) haven't
determined that it's actually hanging as opposed to just waiting for
the snapshot operation to finish.
Just upgraded to 6.2-STABLE, and I must say that things are a LOT better:
- It did return a
Kris Kennaway wrote:
On Wed, Jan 17, 2007 at 02:48:18PM +0100, Willem Jan Withagen wrote:
Kris Kennaway wrote:
Or waiting until the snapshot operation finishes. You (still) haven't
determined that it's actually hanging as opposed to just waiting for
the snapshot operation to finish.
Just
Hi,
Running 5.3-STABLE from December, this server has a login delay which I
cannot really explain.
Both ssh and ftp react almost immediately; only after entering the password does
the connection sit idle for 15 secs.
So something is taking a while, and since it happens with both ssh and
ftp, I would
|Doug White wrote:
|
| On Mar 8, 2005, at 5:05 AM, Willem Jan Withagen wrote:
|
|
| With this mornings sources on RELENG_5:
|
|
| cd /usr/share/man; set - `grep ^[a-zA-Z] /home1/src/etc/man.alias`;
| while [ $# -gt 0 ] ; do rm -rf $1; ln -s $2 $1; shift;
| shift; done
| cd /usr
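The quoted command walks the man.alias file two words at a time (link name, then target) using a pair of `shift`s. A minimal standalone sketch of that shift-by-two pattern, using hypothetical pair data instead of the real alias file so it touches no man pages:

```shell
# Consume positional parameters two at a time, as the quoted loop does.
# The pairs here are hypothetical stand-ins for (link, target) entries.
set -- a.1 b.1 c.1 d.1
pairs=""
while [ $# -gt 0 ]; do
    pairs="${pairs}${1}->${2} "   # the original does: rm -rf $1; ln -s $2 $1
    shift; shift
done
echo "$pairs"                     # a.1->b.1 c.1->d.1
```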
Greg 'groggy' Lehey wrote:
I've recently acquired an AMD64 box (dual Opteron 242, SiS [EMAIL PROTECTED]
motherboard
(http://www.msi.com.tw/program/products/server/svr/pro_svr_detail.php?UID=484).
See below for more details). I find it very unstable running with 8
GB memory, though 4 GB are not a
Greg 'groggy' Lehey wrote:
I've recently acquired an AMD64 box (dual Opteron 242, SiS [EMAIL PROTECTED]
motherboard
(http://www.msi.com.tw/program/products/server/svr/pro_svr_detail.php?UID=484).
See below for more details). I find it very unstable running with 8
GB memory, though 4 GB are not a
David O'Brien wrote:
On Tue, Apr 05, 2005 at 09:33:05AM +0200, Willem Jan Withagen wrote:
I'm sorry to come into this discussion after 58 messages, but this board
has been extensively discussed about 1 year ago, because it gave me trouble
to no end (even with 2Gb). One of the early amd64
Kris Kennaway wrote:
By the way, I'm thinking that the more frequent hangs might be related to the
large read/write block sizes in mount_nfs -r/-w (I use 8192; the default is 1024).
That's certainly possible since non-default settings don't get as much
testing. It would be good to get a traceback.
Has it even
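For reference, the block sizes under discussion are requested at mount time; a hedged example (the server name and mount point are placeholders, not from the thread):

```shell
# Request 8 KB NFS read/write transfer sizes instead of the defaults
mount_nfs -r 8192 -w 8192 nfsserver:/export/data /mnt/data
```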
Willem Jan Withagen wrote:
Kris Kennaway wrote:
By the way, I'm thinking that the more frequent hangs might be related to the
large read/write block sizes in mount_nfs -r/-w (I use 8192; the default is
1024).
That's certainly possible since non-default settings don't get as much
testing. It would be good to get
Claus Guttesen wrote:
By the way, I'm thinking that the more frequent hangs might be related to the
large read/write block sizes in mount_nfs -r/-w (I use 8192; the default is 1024).
That's certainly possible since non-default settings don't get as much
testing. It would be good to get a traceback.
Has it even
Claus Guttesen wrote:
How did you come to this conclusion? What kind of workload?
To make a short story long ;-)
Last year just after christmas I got a new storage system and had an
opportunity to replace our Linux-nfs-server with FreeBSD. I searched
the archives for nfs-related
Ronald Klop wrote:
Hello,
The tool gstat can produce very nice stats.
Can I get these stats from the system periodically for use in my own
scripts/graphs?
Is there a sysctl like kern.ad0.reads? Or some other way of retrieving
this info from the kernel.
Looking at the gstat output, the numbers
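A couple of base-system options avoid gstat's interactive screen for scripting; a sketch (the device name and intervals are only examples):

```shell
# gstat in batch mode: periodic plain-text snapshots on stdout
gstat -b -I 5s

# iostat can also emit periodic per-device numbers: 3 samples, 5 s apart
iostat -d -x -w 5 -c 3 ada0
```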
Hi,
I've been asked to help this former customer.
This box is running 5.2.1 with Java/Tomcat as its most exotic application. And
it crashes too often. The first essential question for the customer is: is
this hardware? So I did the 'make -j 8 buildworld' test, which it survived
with flying
Vallo Kallaste wrote:
On Wed, Nov 24, 2004 at 07:33:55PM +0100, Eirik Øverby
[EMAIL PROTECTED] wrote:
OK I see, makes sense. So it's not really a raid3 issue, but an
implementation issue.
The only problem then is - gvinum being in a completely unusable state
(for raid5 anyway), what are my
Brian Szymanski wrote:
That is not completely fair to vinum.
I've been running vinum now for the better part of 3-4 years, and even with a
set of very flaky seagate IDE drives I never lost a byte.
Vinum has served me well, and I trust gvinum will get there as well.
I just left my fileserver at 5.1,
Hi,
Just got an Intel server with a Promise FastTrak TX2000 controller.
So I installed 5.3-STABLE on it, and pulled a disk.
Got a nice report that a disk had gone, but the box kept on running.
Which is nice.
So I reinserted the drive, got atacontrol to reinit the channel, and
voila, back was
Hi,
Got this Intel R1300 dual-processor server with 2 em interfaces and a lot
of USB ports, 2 GB and a Promise FastTrak TX2000.
Problem is that when em0 gets an interrupt, it also shows up on the USB
controller. top shows the exact same interrupt counts on both, every
time. Which has a
Scott Long wrote:
Willem Jan Withagen wrote:
Hi,
Got this Intel R1300 dual-processor server with 2 em interfaces and a
lot of USB ports, 2 GB and a Promise FastTrak TX2000.
Problem is that when em0 gets an interrupt, it also shows up on the
USB controller. top shows the exact same amounts
Ceri Davies wrote:
On Thu, Nov 25, 2004 at 08:58:36PM +, Ceri Davies wrote:
I have a 4.10-STABLE machine that I want to migrate to 5.3-STABLE. Most
of the bases are covered, but I'm not sure what to expect for my vinum
volumes. I don't have anything esoteric (see attached config), but can
I
Hi,
I'm trying to get my world to 7.2-STABLE (amd64), but run into:
install -o root -g wheel -m 444 kgzldr.o /usr/lib
===> sys/boot/i386/libi386 (install)
===> sys/boot/i386/libfirewire (install)
===> sys/boot/i386/loader (install)
make: don't know how to make
Dmitry Morozovsky wrote:
On Fri, 5 Jun 2009, Kirk Strauser wrote:
KS On Friday 05 June 2009 06:27:23 am Willem Jan Withagen wrote:
KS Hi,
KS
KS I'm trying to get my world to 7.2-STABLE (amd64), but run into:
KS install -o root -g wheel -m 444 kgzldr.o /usr/lib
KS === sys/boot/i386/libi386
Dmitry Morozovsky wrote:
On Sun, 7 Jun 2009, Willem Jan Withagen wrote:
WJW KS On Friday 05 June 2009 06:27:23 am Willem Jan Withagen wrote:
WJW KS Hi,
WJW KS
WJW KS I'm trying to get my world to 7.2-STABLE (amd64), but run into:
WJW KS install -o root -g wheel -m 444 kgzldr.o /usr
Compiling the kernel gives me:
cc -c -O -pipe -std=c99 -g -Wall -Wredundant-decls -Wnested-externs
-Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Winline
-Wcast-qual -Wundef -Wno-pointer-sign -fformat-extensions -nostdinc -I.
-I/usr/src/sys -I/usr/src/sys/contrib/altq -D_KERNEL
Andrey V. Elsukov wrote:
Willem Jan Withagen wrote:
With a 6.4 of last night
So I guess 6.4 -> 7.2 needs to go via 7.0??
It is already fixed.
http://svn.freebsd.org/changeset/base/195987
cvsupped last night at 1:00 +1
So the alpha-particle strikes again. ;)
I'll update, and rerun
Willem Jan Withagen wrote:
Andrey V. Elsukov wrote:
Willem Jan Withagen wrote:
With a 6.4 of last night
So I guess 6.4 - 7.2 needs to go via 7.0 ??
It is already fixed.
http://svn.freebsd.org/changeset/base/195987
cvsupped last night at 1:00 +1
So the alpha-particle strikes again
helge.old...@atosorigin.com wrote:
Willem,
Willem Jan Withagen wrote on Friday, July 31, 2009 2:07 PM:
Willem Jan Withagen wrote:
Andrey V. Elsukov wrote:
Willem Jan Withagen wrote:
With a 6.4 of last night
So I guess 6.4 - 7.2 needs to go via 7.0 ??
It is already fixed. http
Jeremy Chadwick wrote:
On Tue, Oct 14, 2008 at 12:49:35AM -0600, RYAN vAN GINNEKEN wrote:
- Jeremy Chadwick [EMAIL PROTECTED] wrote:
On Mon, Oct 13, 2008 at 09:20:34PM -0600, RYAN vAN GINNEKEN wrote:
Trying to install zoneminder port but getting this error
Using this version on Freebsd
RYAN vAN GINNEKEN wrote:
That is exactly what I was afraid of; I had trouble making it work with the
ZoneMinder live CD and Ubuntu. Saw a post about the FreeBSD (my
favorite OS) port and thought I would give it a go.
My idea
From the sounds of it, not worth it. Do you mind me asking how you went about
Ruben de Groot wrote:
On Mon, Nov 10, 2008 at 11:21:11PM +0700, Eugene Grosbein typed:
On Mon, Nov 10, 2008 at 11:01:00AM -0500, Stephen Clark wrote:
Why would du show 630k used by /tmp while df show 161M used
by /tmp?
I have run fstat /tmp and can't find any files that are using
the space
Ruben de Groot wrote:
You need lsof +aL1 /tmp to see an answer.
Please don't advise people to install third party apps (lsof) where
base system tools (fstat) can do the job.
Why not?
Because it gives the impression the base system is incomplete, which it is not,
at least not in this
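The classic cause of the du/df gap discussed above is a file that was unlinked while a process still holds it open: df counts the allocated blocks, while du can no longer see the name, and that is exactly what `fstat /tmp` (or `lsof +aL1 /tmp`) hunts for. A small self-contained demonstration, using a scratch directory rather than /tmp itself:

```shell
# Show that an unlinked-but-open file's blocks stay allocated until close.
tmpdir=$(mktemp -d)
dd if=/dev/zero of="$tmpdir/big" bs=1024 count=1024 2>/dev/null
exec 3<"$tmpdir/big"                       # hold the file open on fd 3
rm "$tmpdir/big"                           # name is gone: du now sees ~0
du_after=$(du -sk "$tmpdir" | awk '{print $1}')
exec 3<&-                                  # close fd 3; blocks are reclaimed
rmdir "$tmpdir"
echo "du after unlink: ${du_after}K"
```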
Hi,
This has come up several times on and off the lists,
and it is hindering me too.
I'm trying to revive an old dual-Opteron Tyan Tomcat S2875 board. Even
upgraded it to the most recent BIOS. But still no go.
Both with 8.0 and 7.2 RELEASE.
I've also disabled P1394 and all USB in the BIOS, that
Willem Jan Withagen wrote:
I'm trying to revive an old dual-Opteron Tyan Tomcat S2875 board. Even
upgraded it to the most recent BIOS. But still no go.
Both with 8.0 and 7.2 RELEASE.
I've also disabled P1394 and all USB in the BIOS, that did not work either.
Only thing that is extra in the box
Attilio Rao wrote:
2010/1/21 Willem Jan Withagen w...@digiware.nl:
Willem Jan Withagen wrote:
I'm trying to revive an old dual-Opteron Tyan Tomcat S2875 board. Even
upgraded it to the most recent BIOS. But still no go.
Both with 8.0 and 7.2 RELEASE.
I've also disabled P1394 and all USB
Hi,
I built a ZFS server based on a
Supermicro C2SBX with 8 GB and an Intel Q9550
http://www.supermicro.com/products/motherboard/Core2Duo/X38/C2SBX.cfm
with an Areca 1120 with 8 * 1.5T drives.
It has an Intel em0, as in:
Feb 25 14:46:29 zfs kernel: pci0: ACPI PCI bus on pcib0
Feb 25 14:46:29
On 25-2-2010 23:59, Jack Vogel wrote:
The failure to setup receive structures means it did not have sufficient
mbufs
to setup the RX ring and buffer structs. Not sure why this results in a
lockup,
but try and increase kern.ipc.nmbclusters.
Let me know what happens,
I've doubled the value
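Jack's suggestion maps onto the following commands; a sketch for FreeBSD (the value is only an example, and on older releases kern.ipc.nmbclusters is a boot-time tunable that can only be set from /boot/loader.conf, not at runtime):

```shell
# Inspect mbuf usage against the cluster limit
netstat -m | head -4
sysctl kern.ipc.nmbclusters

# Raise the limit, and persist it across reboots via /boot/loader.conf
sysctl kern.ipc.nmbclusters=262144
echo 'kern.ipc.nmbclusters="262144"' >> /boot/loader.conf
```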
On 26-2-2010 10:58, Gerrit Kühn wrote:
On Fri, 26 Feb 2010 10:34:41 +0100 Willem Jan Withagenw...@digiware.nl
wrote about Re: em0 freezes on ZFS server:
WJW Probably the reason why this happened yesterday is that I started
WJW doing major software builds (over ZFS/NFS/TCP/v3) against data
Hi,
When everything in life is just smoothly flowing by, and all is hunky-dory,
some things don't get the credits they deserve.
So here we go ;)
Standing at the coffee machine this morning I realized that FreeBSD has been
part of my professional life for already way, way too long.
On 26-2-2010 13:03, Jeremy Chadwick wrote:
On Fri, Feb 26, 2010 at 10:34:41AM +0100, Willem Jan Withagen wrote:
This is what netstat -m told me when it refused to revive em0:
Below are the netstat -m counters/lines of concern:
24980/2087/27067 mbufs in use (current/cache/total)
24530/1070
On 26-2-2010 13:16, Gerrit Kühn wrote:
On Thu, 25 Feb 2010 14:59:28 -0800 Jack Vogeljfvo...@gmail.com wrote
about Re: em0 freezes on ZFS server:
JV The failure to setup receive structures means it did not have
JV sufficient mbufs
JV to setup the RX ring and buffer structs.
I'm monitoring
On 26-2-2010 13:44, Gerrit Kühn wrote:
On Fri, 26 Feb 2010 13:31:38 +0100 Gerrit Kühn
ger...@pmp.uni-hannover.de wrote about Re: em0 freezes on ZFS server:
GK JC Note how close the current value is to that of total. I'm not
GK JC too surprised you're seeing what you are as a result of
On 26-2-2010 16:07, Daniel Braniss wrote:
On Fri, 26 Feb 2010 15:04:37 +0200 Daniel Branissda...@cs.huji.ac.il
wrote about Re: em0 freezes on ZFS server :
DB At least in my case I found out what is eating the buffers: nfsd
DB does! The buffers stop increasing as soon as I stop nfsd.
On 26-2-2010 22:43, Gerrit Kühn wrote:
DB I'll have to do some packet snooping to check if it's TCP or UDP nfs
DB traffic, since some of the clients are Linux ...
I have Linux clients, too. Some use tcp, some udp.
I have Linux and FreeBSD clients running. The build system runs on
Linux.
On 27-2-2010 8:02, Gerrit Kühn wrote:
On Fri, 26 Feb 2010 23:12:39 +0100 Willem Jan Withagenw...@digiware.nl
wrote about Re: mbuf leakage with nfs/zfs?:
WJW Mine are now:
WJW 41533/2402/43935 mbufs in use (current/cache/total)
WJW 41454/1572/43026/262144 mbuf clusters in use
On 27-2-2010 21:32, Eirik Øverby wrote:
I've had a discussion with some folks on this for a while. I can easily
reproduce this situation by mounting a FreeBSD ZFS filesystem via
NFS-UDP from an OpenBSD machine. Telling the OpenBSD machine to use TCP
instead of UDP makes the problem go away.
On 28-2-2010 18:55, Gerrit Kühn wrote:
On Sun, 28 Feb 2010 12:21:28 + Robert N. M. Watson
rwat...@freebsd.org wrote about Re: mbuf leakage with nfs/zfs? :
RNMW It's almost certainly one or a small number of very specific RPCs
RNMW that are triggering it -- maybe OpenBSD does an extra
On 17-3-2010 9:27, Matthias Gamsjager wrote:
sharenfs does work in FreeBSD but iscsi does not. I'm not sure about smb.
about nfs: you should take a look at /etc/zfs/exports
On Wed, Mar 17, 2010 at 9:15 AM, Harald Schmalzbauer
h.schmalzba...@omnilan.de wrote:
Hello,
I observed some very
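To make the /etc/zfs/exports remark above concrete: on FreeBSD the sharenfs property takes mountd-style export options, and the generated lines land in /etc/zfs/exports. A hedged sketch (the pool, dataset, and network are hypothetical):

```shell
# Export a dataset over NFS; the options are passed through to mountd
zfs set sharenfs="-maproot=root -network 192.168.1.0 -mask 255.255.255.0" tank/export
cat /etc/zfs/exports      # the resulting export line shows up here
```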
On 20-3-2010 0:50, Charles Sprickman wrote:
Just wondering, is this using the base nfsd/mountd, or is there some
in-kernel nfs code strictly for zfs? I haven't found much info on the
share* options in the manpage or wiki.
There's also the complete ZFS manual you should read:
On 2010-09-21 13:39, Andriy Gapon wrote:
The Project is ultimately about the users, right? There are early signs that
some old FreeBSD users get tired from those changes, those removals, lesser
POLA adherence, marketing-not-technical-stuff for time-not-feature-based
releases, not so stable
On 2010-09-21 15:16, Jeremy Chadwick wrote:
On Tue, Sep 21, 2010 at 02:59:46PM +0200, Willem Jan Withagen wrote:
On 2010-09-21 13:39, {some mysterious person :-)} wrote:
The Project is ultimately about the users, right? There are early signs that
some old FreeBSD users get tired from those
Hi,
This is with stable as of yesterday, but with an un-tuned ZFS box I was
still able to generate a kmem exhausted panic.
Hard panic, just 3 lines.
The box contains 12Gb memory, runs on a 6 core (with HT) xeon.
6 * 2T WD Caviar Black in raidz2 with a 2 * 512 MB mirrored log.
The box died while
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 28-9-2010 13:50, Jeremy Chadwick wrote:
On Tue, Sep 28, 2010 at 01:24:28PM +0200, Willem Jan Withagen wrote:
This is with stable as of yesterday, but with an un-tuned ZFS box I
was still able to generate a kmem exhausted panic.
Hard panic, just
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 28-9-2010 15:46, Andriy Gapon wrote:
on 28/09/2010 16:25 Willem Jan Withagen said the following:
Well, advice seems to vary, and the latest I understood was that
8.1-stable did not need any tuning. (The other system with a much
older kernel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 28-9-2010 16:07, Andriy Gapon wrote:
on 28/09/2010 17:02 Willem Jan Withagen said the following:
I do have (read) this document, but still that doesn't really give you
guidelines for tuning on FreeBSD. It is a fileserver without any serious
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 28-9-2010 16:25, Andriy Gapon wrote:
on 28/09/2010 17:09 Willem Jan Withagen said the following:
On 28-9-2010 16:07, Andriy Gapon wrote:
on 28/09/2010 17:02 Willem Jan Withagen said the following:
I do have (read) this document, but still
On 2010-11-01 8:30, Andriy Gapon wrote:
First and foremost, the double-caching issue for ZFS+sendfile on FreeBSD is
still there and no resolution for this issue is on horizon. So, you have to
account for the fact that twice as much memory is needed for this use-case.
Whether you plan your
System:
FreeBSD zfs.digiware.nl 8.2-STABLE FreeBSD 8.2-STABLE #1: Sat Feb 26
06:28:43 CET 2011
r...@zfs.digiware.nl:/usr/obj/usr/src/src8/src/sys/ZFS amd64
Don't have a serial console, so I wrote down the traceback.
But my guess is that that is not enough, however I needed the system so
I
On 2011-03-08 15:15, John Baldwin wrote:
On Tuesday, March 08, 2011 5:54:36 am Willem Jan Withagen wrote:
System:
FreeBSD zfs.digiware.nl 8.2-STABLE FreeBSD 8.2-STABLE #1: Sat Feb 26
06:28:43 CET 2011
r...@zfs.digiware.nl:/usr/obj/usr/src/src8/src/sys/ZFS amd64
Don't have a serial console
On 18-3-2011 16:38, Alan Cox wrote:
On 03/08/2011 08:15, John Baldwin wrote:
On Tuesday, March 08, 2011 5:54:36 am Willem Jan Withagen wrote:
System:
FreeBSD zfs.digiware.nl 8.2-STABLE FreeBSD 8.2-STABLE #1: Sat Feb 26
06:28:43 CET 2011
r...@zfs.digiware.nl:/usr/obj/usr/src/src8/src/sys/ZFS
Hi,
I'm running ZFS off of 8 disks on an Areca 1120.
Things were just running fine until I tried upgrading to the most recent
STABLE.
Turns out that my configuration causes a panic in
arcmsr.c:2093,
because the mutex is taken multiple times there.
Now my previous working version dates from 26/2,
On 26-6-2011 13:01, Jeremy Chadwick wrote:
On Sun, Jun 26, 2011 at 01:53:17AM +0200, Willem Jan Withagen wrote:
I'm running ZFS off of 8 disks on an Areca 1120
Things were just running fine until I tried upgrading to the most recent
STABLE.
Turns out that my configuration causes a panic
On 26-6-2011 13:16, Steven Hartland wrote:
- Original Message - From: Willem Jan Withagen w...@digiware.nl
...
So I tried upgrading my firmware to 1.49, but to no avail.
The system keeps panicing.
So I guess that there is still a coding error somewhere in the driver.
But I'm
On 26-6-2011 13:50, Steven Hartland wrote:
- Original Message - From: Willem Jan Withagen w...@digiware.nl
Well the main key to the problem is that on 2011/06/06 the new version
from Areca got imported. So if you have all your boxes with kernels
predating 06-06, you're not running
On 26-6-2011 15:02, Jeremy Chadwick wrote:
On Sun, Jun 26, 2011 at 02:19:30PM +0200, Willem Jan Withagen wrote:
On 26-6-2011 13:50, Steven Hartland wrote:
- Original Message - From: Willem Jan Withagen w...@digiware.nl
Well the main key to the problem is that on 2011/06/06 the new
On 26-6-2011 15:52, Mike Tancsa wrote:
On 6/25/2011 7:53 PM, Willem Jan Withagen wrote:
Hi,
I'm running ZFS off of 8 disks on an Areca 1120
Things were just running fine until I tried upgrading to the most recent
STABLE.
Turns out that my configuration causes a panic in
arcmsr.c:2093
On 2011-06-27 15:39, Mike Tancsa wrote:
Hi Ching,
Thanks very much for the quick turn around! The person who was having
the error is actually Willem, not me. All has been running fine for my
setup, but I will deploy it as well to test.
Yup,
and I've got my plate full with real-life
On 2011-06-27 4:49, 黃清隆 wrote:
Hi Mike,
Thanks for your bug report.
Please compile the new driver in attached zip file and try again.
Thanks,
Ching
Hi Ching,
So I did, and it did boot the server.
However, upon reboot it panicked again.
This time in arcmsr.c:1298 with:
mtx_lock_sleep
A server just all of a sudden dropped from the network.
uptime was 26days.
This got my ZFS server hanging:
Aug 1 23:39:58 zfs kernel: em0: Watchdog timeout -- resetting
Aug 1 23:39:58 zfs kernel: em0: Queue(0) tdh = 942, hw tdt = 977
Aug 1 23:39:58 zfs kernel: em0: TX(0) desc avail = 985,Next
On 2011-08-02 0:49, Claus Guttesen wrote:
Do you happen to run nfs on the server?
I had weird problems with igb timeouts when many nfs reads occurred
and a down and up on the interface would restore the network
connection for a while. I had vmware-servers on a nfs-share and
either when booting
On 2011-08-02 1:00, Jeremy Chadwick wrote:
On Tue, Aug 02, 2011 at 12:27:57AM +0200, Willem Jan Withagen wrote:
A server just all of a sudden dropped from the network.
uptime was 26days.
This got my ZFS server hanging:
Aug 1 23:39:58 zfs kernel: em0: Watchdog timeout -- resetting
Aug 1 23
Hi,
Today I felt like living dangerously and wanted to upgrade a backup server
from i386 to amd64, just to see if we could.
And otherwise I'd scrap it and install from a USB stick.
So I have my server running an amd64-built GENERIC.
export /, /var, /usr on the server to be upgraded.
But upgrading
On 2011-08-20 13:15, Willem Jan Withagen wrote:
Hi,
Today I felt like living dangerously and wanted to upgrade a backup server
from i386 to amd64, just to see if we could.
And otherwise I'd scrap it and install from a USB stick.
So I have my server running an amd64-built GENERIC.
export /, /var, /usr
On 20-8-2011 13:26, Willem Jan Withagen wrote:
On 2011-08-20 13:15, Willem Jan Withagen wrote:
Hi,
Today I felt like living dangerously and wanted to upgrade a backup server
from i386 to amd64, just to see if we could.
And otherwise I'd scrap it and install from a USB stick.
So I have my server
Hi,
I'm assembling a few systems with an ASUS P8 H161-MLE motherboard
which was supposed to have a 'Realtek® 8112L, 1 x Gigabit LAN
Controller(s)' onboard.
And honestly, I never expected that version not to be supported.
Just booted 8.2-RELEASE on it, and the Installer crashed when I
On 2011-08-22 1:01, YongHyeon PYUN wrote:
On Sun, Aug 21, 2011 at 04:01:10PM +0200, Willem Jan Withagen wrote:
Hi,
I'm assembling a few systems with an ASUS P8 H161-MLE motherboard
which was supposed to have a 'Realtek® 8112L, 1 x Gigabit LAN
Controller(s)' onboard.
And honestly, I never
On 22-8-2011 1:15, Willem Jan Withagen wrote:
On 2011-08-22 1:01, YongHyeon PYUN wrote:
On Sun, Aug 21, 2011 at 04:01:10PM +0200, Willem Jan Withagen wrote:
Hi,
I'm assembling a few systems with an ASUS P8 H161-MLE motherboard
which was supposed to have a 'Realtek® 8112L, 1 x Gigabit LAN
On 23-10-2011 17:09, per...@pluto.rain.com wrote:
but there was no forwarding table and all packets were forwarded
to all ports.
I always figured that's normal for a hub as opposed to a switch.
I also remember that SOME hubs of that era had serious problems if
the cable was too short.
Hi
Still running this file server on ZFS, and every now and then em0 goes
down and is not revivable. Nothing goes in or out of the box...
Any suggestions as how to (help) fix this?
Regards,
--WjW
---
Nov 10 09:07:41 zfs kernel: em0: Watchdog timeout -- resetting
Nov 10 09:07:41 zfs
On 10-11-2011 10:50, Jeremy Chadwick wrote:
On Thu, Nov 10, 2011 at 10:22:39AM +0100, Willem Jan Withagen wrote:
Still running this file server on ZFS, and every now and then em0
goes down and is not revivable. Nothing goes in or out of the
box...
Any suggestions as how to (help) fix
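As noted elsewhere in these threads, a down/up on the interface sometimes revives it long enough to investigate; from the local console (not over em0 itself), something like:

```shell
# Crude recovery attempt for a wedged em0; run from the console
ifconfig em0 down
ifconfig em0 up
dmesg | tail          # look for the watchdog/reset messages afterwards
```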
On 10-11-2011 23:25, Joshua Boyd wrote:
On Thu, Nov 10, 2011 at 6:51 AM, Willem Jan Withagen w...@digiware.nl
mailto:w...@digiware.nl wrote:
em0@pci0:0:25:0:class=0x02 card=0x10bd15d9
chip=0x10bd8086 rev=0x02 hdr=0x00
vendor = 'Intel Corporation'
device
On 2011-11-10 23:25, Joshua Boyd wrote:
On Thu, Nov 10, 2011 at 6:51 AM, Willem Jan Withagen w...@digiware.nl
mailto:w...@digiware.nl wrote:
em0@pci0:0:25:0:class=0x02 card=0x10bd15d9
chip=0x10bd8086 rev=0x02 hdr=0x00
vendor = 'Intel Corporation'
device
Hi,
I'm getting these:
Nov 16 16:40:49 zfs kernel: ata6: port is not ready (timeout 15000ms)
tfd = 0080
Nov 16 16:40:49 zfs kernel: ata6: hardware reset timeout
Nov 16 16:41:50 zfs kernel: ata6: port is not ready (timeout 15000ms)
tfd = 0080
Nov 16 16:41:50 zfs kernel: ata6:
On 2011-11-16 20:55, Alexander Motin wrote:
Hi.
On 16.11.2011 18:12, Willem Jan Withagen wrote:
I'm getting these:
Nov 16 16:40:49 zfs kernel: ata6: port is not ready (timeout 15000ms)
tfd = 0080
Nov 16 16:40:49 zfs kernel: ata6: hardware reset timeout
Nov 16 16:41:50 zfs kernel: ata6
an expander?
No SATA expanders...
What HBA / hard disk controller are you using?
A combination of CH9 and Areca in PCI-X; disks are all exported as single disks.
Thanks for the suggestions.
--WjW
Am 16.11.2011 17:12, schrieb Willem Jan Withagen:
Hi,
I'm getting these:
Nov 16 16:40:49 zfs kernel
On 2011-11-17 12:20, Jeremy Chadwick wrote:
On Thu, Nov 17, 2011 at 12:03:26PM +0100, Willem Jan Withagen wrote:
On 2011-11-16 18:22, Peter Maloney wrote:
Willem,
I can only guess, but...
Is AHCI enabled in the bios? If you are not using 'fake-raid' for any
disks, you should [depending
Hi,
I have this ZFS server up for about 27 days, and about 3 weeks ago (was
not really paying attention) it turns out it lost its SSD that I'm using
for log and cache. There is also a poor and lonely memory stick for log.
So the box did not really suffer file loss.
system is running:
FreeBSD