Johan Ström wrote:
drop all traffic)? A check with pfctl -vsr reveals that the actual rule
inserted is "pass on lo0 inet from 123.123.123.123 to 123.123.123.123
flags S/SA keep state". Where did that "keep state" come from?
'flags S/SA keep state' is the default now for TCP filter rules -- that
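If you really do want the old stateless behaviour back, pf lets you say
so explicitly with the "no state" keyword. A minimal sketch, mirroring
the rule above (the address is just the example one):

    # pf.conf: opt out of the implicit "flags S/SA keep state" default
    pass on lo0 inet from 123.123.123.123 to 123.123.123.123 no state

    # then confirm what pf actually installed
    pfctl -vsr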
On Fri, 16 May 2008, Vivek Khera wrote:
How are the buckets used? Are they hashed per rule number or by some
other mechanism? Nearly all of my states are from the same rule (e.g.,
on a mail server, the SMTP port rule).
Hashed per flow -- see:
/sys/netinet/ip_fw.h
/sys/netinet/ip_fw2.c
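Since the hashing is per flow, many states from a single rule still
spread across buckets, and the bucket count itself is tunable. A sketch
using the standard ipfw sysctls (1024 is just an example value; it must
be a power of two, and the new size is picked up when the dynamic-rule
table is rebuilt):

    # inspect the configured and currently allocated bucket counts,
    # plus the number of dynamic states in use
    sysctl net.inet.ip.fw.dyn_buckets
    sysctl net.inet.ip.fw.curr_dyn_buckets
    sysctl net.inet.ip.fw.dyn_count

    # raise the configured bucket count
    sysctl net.inet.ip.fw.dyn_buckets=1024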
On May 18, 2008, at 9:19 AM, Matthew Seaman wrote:
Johan Ström wrote:
drop all traffic)? A check with pfctl -vsr reveals that the actual
rule inserted is pass on lo0 inet from 123.123.123.123 to
123.123.123.123 flags S/SA keep state. Where did that keep state
come from?
'flags S/SA
After trying to mount my ZFS pools in single-user mode I got the following
message for each:
May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be loaded as
it was last accessed by another system (host: gw.bb1.matik.com.br hostid:
0xbefb4a0f). See:
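The usual recovery, assuming the pool genuinely is not in use by another
live system (here the hostid has simply changed), is to force the import:

    # force-import a pool last accessed under a different hostid
    zpool import -f cache1

    # then check that it came back healthy
    zpool status cache1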
On Sun, May 18, 2008 at 09:56:17AM -0300, JoaoBR wrote:
after trying to mount my zfs pools in single user mode I got the following
message for each:
May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be loaded as
it was last accessed by another system (host:
On Sun, 18 May 2008 09:56:17 -0300
JoaoBR [EMAIL PROTECTED] wrote:
after trying to mount my zfs pools in single user mode I got the
following message for each:
May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be
loaded as it was last accessed by another system (host:
Hello,
I'm running some tests with FreeBSD 6.3 and FreeBSD 7-STABLE, using both
the amd64 and i386 architectures with both schedulers (ULE and 4BSD) on a
VMware ESX 3.5 server. Everything runs almost fine, except for disk
access. Performance is quite OK (around 60 MB/s),
but when accessing disks, System
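For anyone wanting to reproduce a raw sequential figure like that
~60 MB/s, a rough sketch (the file name and size are arbitrary, and the
read-back number will be inflated by caching):

    # sequential write test: 1 GB in 1 MB blocks
    dd if=/dev/zero of=/tmp/ddtest bs=1m count=1024

    # sequential read-back
    dd if=/tmp/ddtest of=/dev/null bs=1m
    rm /tmp/ddtest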
On Sun, May 18, 2008 at 12:20:33PM -0300, JoaoBR wrote:
On Sunday 18 May 2008 11:11:38 Greg Byshenk wrote:
On Sun, May 18, 2008 at 09:56:17AM -0300, JoaoBR wrote:
after trying to mount my zfs pools in single user mode I got the
following message for each:
May 18 09:09:36 gw kernel:
On Sun, May 18, 2008 at 3:33 AM, Johan Ström [EMAIL PROTECTED] wrote:
On May 18, 2008, at 9:19 AM, Matthew Seaman wrote:
Johan Ström wrote:
drop all traffic)? A check with pfctl -vsr reveals that the actual rule
inserted is pass on lo0 inet from 123.123.123.123 to 123.123.123.123 flags
S/SA
On Sunday 18 May 2008 12:39:11 Jeremy Chadwick wrote:
On Sun, May 18, 2008 at 12:20:33PM -0300, JoaoBR wrote:
On Sunday 18 May 2008 11:11:38 Greg Byshenk wrote:
On Sun, May 18, 2008 at 09:56:17AM -0300, JoaoBR wrote:
after trying to mount my zfs pools in single user mode I got the
On Sun, May 18, 2008 at 04:15:55PM +0200, Daniel Ponticello wrote:
Hello,
I'm running some tests with FreeBSD 6.3 and FreeBSD 7-STABLE, using both
the amd64 and i386 architectures with both schedulers (ULE and 4BSD) on a
VMware ESX 3.5 server. Everything runs almost fine, except for disk
access. Performance
On May 17, 2008, Norbert Papke wrote:
Environment: FreeBSD 7.0-STABLE (as of Apr 30), apache-2.0.63
I am experiencing Apache crashes on a fairly consistent and frequent basis.
The crash occurs in strncmp(). To help with the diagnosis, I have rebuilt
libc with debug symbols. Here is a
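For anyone wanting to do the same, one way to rebuild libc with debug
symbols from a stock source tree (a sketch, assuming sources under
/usr/src):

    cd /usr/src/lib/libc
    make clean
    make DEBUG_FLAGS=-g
    make DEBUG_FLAGS=-g install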
Clifton Royston wrote:
If you are accessing a software emulation of a SCSI disk, I would
offhand expect the CPU load to go up substantially when you are reading
or writing it at the maximum achievable bandwidth. You can't expect
normal relative load results under an emulator, and while
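To check whether the load really is system time tied to disk I/O, one
can watch the machine while the test runs; an illustrative pair of
commands:

    # extended per-device I/O statistics at one-second intervals
    iostat -x -w 1

    # or watch system/interrupt CPU time alongside processes
    top -S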
On Apr 29, 2008, at 2:08 PM, Jeremy Chadwick wrote:
I'd recommend staying away from Realtek NICs. Pick up an Intel Pro/1000
GT or PT. Realtek has a well-known history of issues.
Just wanted to tell you guys that so far an em(4) seems to have fixed
the problem.
--
Peter Ankerstål
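A generic way to confirm which NIC and driver are actually in play after
such a swap (not specific to the machine above):

    # list PCI devices with vendor strings; the NIC shows up attached
    # to its driver, e.g. em0 or re0
    pciconf -lv | grep -B4 -i ethernet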
On Wed, May 14, 2008 at 4:35 PM, Marc UBM Bocklet [EMAIL PROTECTED] wrote:
On Tue, 13 May 2008 00:26:49 -0400
Pierre-Luc Drouin [EMAIL PROTECTED] wrote:
I would like to know if the memory allocation problem with ZFS has
been fixed in -STABLE. Is ZFS considered to be more stable now?
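For context, the tuning commonly recommended on 7.x at the time was to
bound the ARC and enlarge the kernel address space via /boot/loader.conf.
A sketch; the values are illustrative for a machine with roughly 2 GB of
RAM, not prescriptive:

    # /boot/loader.conf
    vm.kmem_size="1024M"
    vm.kmem_size_max="1024M"
    vfs.zfs.arc_max="512M"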
From: bazzoola [EMAIL PROTECTED]
Subject: Auto bridge for qemu network [was: kqemu support: not compiled]
Date: Thu, 15 May 2008 03:06:25 -0400
Also, is it possible to update this page? It has some outdated info:
http://people.freebsd.org/~maho/qemu/qemu.html
(It is the first answer on Google.)
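On the auto-bridge question itself, a minimal if_bridge-based qemu-ifup
sketch for FreeBSD (the interface name em0 is an assumption; qemu passes
the tap device name as $1):

    #!/bin/sh
    # /etc/qemu-ifup sketch: attach the tap device to a bridge with the LAN NIC
    ifconfig bridge0 create > /dev/null 2>&1   # ignore error if it already exists
    ifconfig bridge0 addm em0 > /dev/null 2>&1 # ignore error if already a member
    ifconfig bridge0 addm "$1" up
    ifconfig "$1" up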
On Sun, 18 May 2008, Torfinn Ingolfsen wrote:
On Sun, 18 May 2008 09:56:17 -0300
JoaoBR [EMAIL PROTECTED] wrote:
after trying to mount my zfs pools in single user mode I got the
following message for each:
May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be
loaded as it was
da0 at mpt0 bus 0 target 0 lun 0
da0: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device
da0: 3.300MB/s transfers
da0: Command Queueing Enabled
da0: 34816MB (71303168 512 byte sectors: 255H 63S/T 4438C)
Can you re-negotiate the transfer rate using camcontrol?
camcontrol negotiate 0:0
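For reference, a fuller invocation of that sort (the 20 MHz sync rate is
an arbitrary example, not a recommendation):

    # show the current negotiated parameters for da0
    camcontrol negotiate da0

    # request a 20 MHz sync rate and apply it
    camcontrol negotiate da0 -R 20.000 -a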