Hi.
I've got a strange problem. I've set up pound on my FreeBSD 6 box and
sometimes it (pound) stops. Normally it should exit, but it doesn't -
it hangs.
# ps auwx | grep pound
pound  46868  0.0  1.0  55428  21932  ??  SE    1:58PM  0:31.34
/usr/local/sbin/pound -f /usr/local/etc/pound.cfg
S: Marks a
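For anyone chasing a similar hang, the state flags can be checked directly with ps(1). A minimal sketch, querying our own shell's PID as a stand-in for pound's (substitute the real PID, e.g. 46868); on FreeBSD a STAT of `SE` means sleeping (`S`) while marked as trying to exit (`E`):

```shell
# Inspect a process's scheduler state with ps(1).
# $$ (this shell) stands in for pound's PID; substitute e.g. 46868.
ps -o pid,stat -p $$
```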
On Fri, Aug 17, 2007 at 08:47:01AM +0200, peceka wrote:
I've got a strange problem. I've set up pound on my FreeBSD 6 box and
sometimes it (pound) stops. Normally it should exit, but it doesn't -
it hangs.
# ps auwx | grep pound
pound  46868  0.0  1.0  55428  21932  ??  SE    1:58PM  0:31.34
Oliver Fromme wrote:
Roman Bogorodskiy wrote:
Artem Kuchin wrote:
[FreeBSD crashes when a mounted device disappears]
It's not a bug, it's a feature.
I certainly wouldn't call it a feature. As others have
noted, it is a known shortcoming in FreeBSD, and fixing
it is
2007/8/17, Eugene Grosbein [EMAIL PROTECTED]:
On Fri, Aug 17, 2007 at 08:47:01AM +0200, peceka wrote:
I've got a strange problem. I've set up pound on my FreeBSD 6 box and
sometimes it (pound) stops. Normally it should exit, but it doesn't -
it hangs.
# ps auwx | grep pound
pound 46868
Hello,
peceka wrote:
Hi.
I've got a strange problem. I've set up pound on my FreeBSD 6 box and
sometimes it (pound) stops. Normally it should exit, but it doesn't -
it hangs.
# ps auwx | grep pound
pound  46868  0.0  1.0  55428  21932  ??  SE    1:58PM  0:31.34
/usr/local/sbin/pound -f
On Fri, Aug 17, 2007 at 10:19:11AM +0200, peceka wrote:
It does if it becomes 'trying to exit'.
It may have a serial device open, and bugs in the driver may prevent the
process from closing the device in a timely manner. Try switching to the
uart(4) driver from sio(4) if this is your case.
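For reference, moving a port to uart(4) on a 6.x kernel means building with `device uart` in place of `device sio` and supplying resource hints. A sketch only, assuming the first serial port at the standard ISA resources; see uart(4) for the exact values on your hardware:

```
# /boot/device.hints -- sketch for a first serial port at the standard
# ISA resources; these are the common defaults, not universal.
hint.uart.0.at="isa"
hint.uart.0.port="0x3F8"
hint.uart.0.flags="0x10"
hint.uart.0.irq="4"
```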
But
Hello,
Eugene Grosbein wrote:
On Fri, Aug 17, 2007 at 10:19:11AM +0200, peceka wrote:
It does if it becomes 'trying to exit'.
It may have a serial device open, and bugs in the driver may prevent the
process from closing the device in a timely manner. Try switching to the
uart(4) driver from sio(4) if this
My D610 notebook panics constantly with this:
Fatal trap 12: page fault while in kernel mode
fault virtual address = 0x170
fault code = supervisor read, page not present
instruction pointer = 0x20:0xc0598199
stack pointer = 0x28:0xe438dc40
frame pointer =
On 8/17/07, Alexandre Biancalana [EMAIL PROTECTED] wrote:
My D610 notebook panics constantly with this:
Fatal trap 12: page fault while in kernel mode
fault virtual address = 0x170
fault code = supervisor read, page not present
instruction pointer = 0x20:0xc0598199
stack
Someone suggested I try to compile encfs again and since I updated my system
two days ago, I tried again and fusefs-encfs now compiles.
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe,
On Friday 17 August 2007 16:00:15 Pollywog wrote:
Someone suggested I try to compile encfs again and since I updated my
system two days ago, I tried again and fusefs-encfs now compiles.
Apologies, I clicked on the wrong link, this should have gone to the
freebsd-questions list.
On 8/17/07, Svein Halvor Halvorsen [EMAIL PROTECTED] wrote:
Oliver Fromme wrote:
Roman Bogorodskiy wrote:
Artem Kuchin wrote:
[FreeBSD crashes when a mounted device disappears]
It's not a bug, it's a feature.
I certainly wouldn't call it a feature. As others have
noted,
I have a shiny new big RAID array: 16x500GB SATA 300+NCQ drives
connected to the host via 4Gb fibre channel. This gives me 6.5TB of
raw disk.
I've come up with three possibilities for organizing this disk. My
needs are really for a single 1TB file system on which I will run
postgres.
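For the single-filesystem option, the carving on a 6.x box would look roughly like this. A sketch only: `da0` is an assumed device name for the fibre-channel LUN (check dmesg for yours), and the mount point is illustrative:

```
bsdlabel -w da0               # write a standard label to the LUN
newfs -U /dev/da0a            # UFS2 with soft updates
mount /dev/da0a /var/db/pgsql
```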
I was just attempting to add a RAID shelf to a system with an Adaptec
29320 card, and when I exported a 7TB disk to the card I kept getting
timeout-related issues. The dump is very similar to PR 76178, which
appears to report the same issue...
I have attached a dmesg with the failure... I
- Original Message -
From: Claus Guttesen [EMAIL PROTECTED]
To: Vivek Khera [EMAIL PROTECTED]
Cc: FreeBSD Stable freebsd-stable@freebsd.org
Sent: Friday, August 17, 2007 11:10 PM
Subject: Re: large RAID volume partition strategy
I have a shiny new big RAID array. 16x500GB SATA
On Fri, 17 Aug 2007 17:42:55 -0400 Vivek Khera wrote:
I have a shiny new big RAID array: 16x500GB SATA 300+NCQ drives
connected to the host via 4Gb fibre channel. This gives me 6.5TB of
raw disk.
I've come up with three possibilities for organizing this disk. My
needs are really for a
On Sat, 18 Aug 2007 02:26:04 +0400 Boris Samorodov wrote:
On Fri, 17 Aug 2007 17:42:55 -0400 Vivek Khera wrote:
I have a shiny new big RAID array: 16x500GB SATA 300+NCQ drives
connected to the host via 4Gb fibre channel. This gives me 6.5TB of
raw disk.
I've come up with three
Vivek Khera wrote:
I'm not keen on option 1 because of the potentially long fsck times
after a crash.
Depending on your allowable downtime after a crash, fscking even a 1 TB
UFS file system can take a long time. For large file systems there's
really
- Clayton Milos [EMAIL PROTECTED] wrote:
If your goal is speed and, obviously, as little chance of failure as
possible (RAID6+spare), then RAID6 is the wrong way to go...
RAID6's read speeds are great but its write speeds are not.
If you want awesome performance and reliability, the real way to go
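The RAID10 alternative being alluded to can also be built in software with the geom(4) classes mentioned elsewhere in the thread; a hedged sketch striping over two mirrors (disk names da0-da3 are assumed):

```
gmirror label m0 da0 da1                # first mirror
gmirror label m1 da2 da3                # second mirror
gstripe label st0 mirror/m0 mirror/m1   # stripe over the two mirrors
newfs -U /dev/stripe/st0
```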
On Aug 17, 2007, at 6:26 PM, Boris Samorodov wrote:
I have 6 SATA-II 300MB/s disks on a 3ware adapter. My (very!) simple
tests gave about 170MB/s for dd. BTW, I tested (OK, very quickly)
RAID5, RAID6, and gmirror+gstripe, and none got close to RAID10. (Well,
as expected, I suppose.)
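The kind of quick dd throughput check mentioned above can be reproduced like this; a sketch writing to a scratch file (on a real array you would point `of=` at the filesystem under test):

```shell
# Sequential-write smoke test: 256 MB of zeroes, then report the rate.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1048576 count=256
rm /tmp/ddtest.bin
```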
Whichever RAID
On Aug 17, 2007, at 6:10 PM, Claus Guttesen wrote:
If you want to avoid the long fsck times, your remaining options are a
journaling filesystem or zfs; either requires an upgrade from FreeBSD
6.2. I have used zfs and had a server stop due to a power outage in our
area. Our zfs-samba-server came up
On Aug 17, 2007, at 7:31 PM, Ivan Voras wrote:
Depending on your allowable downtime after a crash, fscking even a 1 TB
UFS file system can take a long time. For large file systems there's
really no alternative to using -CURRENT / 7.0, and either gjournal
or ZFS.
I'll investigate this
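For the gjournal route, the setup on 7.0 is short; a sketch per gjournal(8), again assuming the array shows up as da0 and an illustrative mount point:

```
gjournal load                     # load the geom_journal module
gjournal label da0                # create the journal provider
newfs -J /dev/da0.journal         # create a gjournal-aware filesystem
mount -o async /dev/da0.journal /var/db/pgsql
```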
Vivek Khera wrote:
My only fear of this is that once this system is in production, that's
pretty much it. Maintenance windows are about 1 year apart, usually
longer.
Others will have to comment about that. I have only one 7-CURRENT in
production (because of ZFS) and I had only one panic (in
Vivek Khera wrote:
But, if I don't go with zfs, which would be a better way to slice the
space up: RAID volumes exported as individual disks to freebsd, or one
RAID volume divided into multiple logical partitions with disklabel?
In general, it's almost always better to do the partitioning in
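As an illustration of the disklabel route, dividing one exported volume looks roughly like this (device name assumed; partition letters and sizes are defined by hand in the label):

```
bsdlabel -e da0        # edit the label: define partitions a, d, e, ...
newfs -U /dev/da0a     # then one filesystem per partition
newfs -U /dev/da0d
```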