Re: connect(): Operation not permitted

2008-05-18 Thread Matthew Seaman
Johan Ström wrote: drop all traffic)? A check with pfctl -vsr reveals that the actual rule inserted is "pass on lo0 inet from 123.123.123.123 to 123.123.123.123 flags S/SA keep state". Where did that keep state come from? 'flags S/SA keep state' is the default now for TCP filter rules -- that
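For reference, a minimal pf.conf sketch of the behaviour being described (the 123.123.123.123 addresses are the placeholders from the rule above, and "proto tcp" is added here since flags and state tracking apply to TCP); the first rule is what a bare TCP pass rule now expands to, and "no state" is the explicit way to opt out:

    # what pf adds by default for TCP filter rules
    pass on lo0 inet proto tcp from 123.123.123.123 to 123.123.123.123 flags S/SA keep state
    # request stateless filtering explicitly
    pass on lo0 inet proto tcp from 123.123.123.123 to 123.123.123.123 no state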

Re: how much memory does increasing max rules for IPFW take up?

2008-05-18 Thread Ian Smith
On Fri, 16 May 2008, Vivek Khera wrote: How are the buckets used? Are they hashed per rule number or some other mechanism? Nearly all of my states are from the same rule (e.g., the SMTP port rule on a mail server). /sys/netinet/ip_fw.h /sys/netinet/ip_fw2.c Hashed per flow,
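For those following along, the dynamic-state hash table can be inspected and resized through sysctl; a small sketch only (the 1024 value is just an example, and dyn_buckets must be a power of two):

    # current hash table size, live state count, and state ceiling
    sysctl net.inet.ip.fw.dyn_buckets net.inet.ip.fw.dyn_count net.inet.ip.fw.dyn_max
    # enlarge the hash table; the new size is used as states are re-created
    sysctl net.inet.ip.fw.dyn_buckets=1024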

Re: connect(): Operation not permitted

2008-05-18 Thread Johan Ström
On May 18, 2008, at 9:19 AM, Matthew Seaman wrote: Johan Ström wrote: drop all traffic)? A check with pfctl -vsr reveals that the actual rule inserted is pass on lo0 inet from 123.123.123.123 to 123.123.123.123 flags S/SA keep state. Where did that keep state come from? 'flags S/SA

possible zfs bug? lost all pools

2008-05-18 Thread JoaoBR
After trying to mount my ZFS pools in single-user mode I got the following message for each: May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be loaded as it was last accessed by another system (host: gw.bb1.matik.com.br hostid: 0xbefb4a0f). See:
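The usual recovery path for that warning, if the pool really does belong to this machine, is a forced import; a sketch only, with the pool name 'cache1' taken from the message above:

    zpool import            # list pools visible for import and their status
    zpool import -f cache1  # force the import despite the foreign hostid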

Re: possible zfs bug? lost all pools

2008-05-18 Thread Greg Byshenk
On Sun, May 18, 2008 at 09:56:17AM -0300, JoaoBR wrote: after trying to mount my zfs pools in single user mode I got the following message for each: May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be loaded as it was last accessed by another system (host:

Re: possible zfs bug? lost all pools

2008-05-18 Thread Torfinn Ingolfsen
On Sun, 18 May 2008 09:56:17 -0300 JoaoBR [EMAIL PROTECTED] wrote: after trying to mount my zfs pools in single user mode I got the following message for each: May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be loaded as it was last accessed by another system (host:

Disk access/MPT under ESX3.5

2008-05-18 Thread Daniel Ponticello
Hello, I'm running some tests with FreeBSD 6.3 and FreeBSD 7-STABLE, using both the amd64 and i386 architectures with both schedulers (ULE and 4BSD) on a VMware ESX 3.5 server. Everything runs almost fine, except for disk access. Performance is quite OK (around 60 MB/s), but when accessing disks, System
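A quick way to see whether that time is being charged to interrupts or to system CPU while the disk test runs; a sketch only, not from the original report:

    top -S            # include kernel/system processes in the display
    systat -vmstat 1  # per-second breakdown of interrupt and CPU activity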

Re: possible zfs bug? lost all pools

2008-05-18 Thread Jeremy Chadwick
On Sun, May 18, 2008 at 12:20:33PM -0300, JoaoBR wrote: On Sunday 18 May 2008 11:11:38 Greg Byshenk wrote: On Sun, May 18, 2008 at 09:56:17AM -0300, JoaoBR wrote: after trying to mount my zfs pools in single user mode I got the following message for each: May 18 09:09:36 gw kernel:

Re: connect(): Operation not permitted

2008-05-18 Thread Kian Mohageri
On Sun, May 18, 2008 at 3:33 AM, Johan Ström [EMAIL PROTECTED] wrote: On May 18, 2008, at 9:19 AM, Matthew Seaman wrote: Johan Ström wrote: drop all traffic)? A check with pfctl -vsr reveals that the actual rule inserted is pass on lo0 inet from 123.123.123.123 to 123.123.123.123 flags S/SA

Re: possible zfs bug? lost all pools

2008-05-18 Thread JoaoBR
On Sunday 18 May 2008 12:39:11 Jeremy Chadwick wrote: On Sun, May 18, 2008 at 12:20:33PM -0300, JoaoBR wrote: On Sunday 18 May 2008 11:11:38 Greg Byshenk wrote: On Sun, May 18, 2008 at 09:56:17AM -0300, JoaoBR wrote: after trying to mount my zfs pools in single user mode I got the

Re: Disk access/MPT under ESX3.5

2008-05-18 Thread Clifton Royston
On Sun, May 18, 2008 at 04:15:55PM +0200, Daniel Ponticello wrote: Hello, I'm running some tests with FreeBSD 6.3 and FreeBSD 7-STABLE, using both the amd64 and i386 architectures with both schedulers (ULE and 4BSD) on a VMware ESX 3.5 server. Everything runs almost fine, except for disk access. Performance

Re: Apache seg faults -- Possible problem with libc? [solved]

2008-05-18 Thread Norbert Papke
On May 17, 2008, Norbert Papke wrote: Environment: FreeBSD 7.0-STABLE (as of Apr 30), apache-2.0.63. I am experiencing Apache crashes on a fairly consistent and frequent basis. The crash occurs in strncmp(). To help with the diagnosis, I have rebuilt libc with debug symbols. Here is a
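For anyone wanting to reproduce a debug-enabled libc, one way to build it from the system sources; a sketch only, assuming full sources are installed under /usr/src:

    cd /usr/src/lib/libc
    make clean
    make DEBUG_FLAGS=-g
    make DEBUG_FLAGS=-g install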

Re: Disk access/MPT under ESX3.5

2008-05-18 Thread Daniel Ponticello
Clifton Royston wrote: If you are accessing a software emulation of a SCSI disk, I would offhand expect the CPU load to go up substantially when you are reading or writing it at the maximum achievable bandwidth. You can't expect normal relative load results under an emulator, and while

Re: Packet-corruption with re(4)

2008-05-18 Thread Peter Ankerstål
On Apr 29, 2008, at 2:08 PM, Jeremy Chadwick wrote: I'd recommend staying away from Realtek NICs. Pick up an Intel Pro/1000 GT or PT. Realtek has a well-known history of issues. Just wanted to tell you guys that so far an em(4) NIC seems to have fixed the problem. -- Peter Ankerstål

Re: Status of ZFS in -stable?

2008-05-18 Thread Zaphod Beeblebrox
On Wed, May 14, 2008 at 4:35 PM, Marc UBM Bocklet [EMAIL PROTECTED] wrote: On Tue, 13 May 2008 00:26:49 -0400 Pierre-Luc Drouin [EMAIL PROTECTED] wrote: I would like to know if the memory allocation problem with ZFS has been fixed in -stable? Is ZFS considered to be more stable now?
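The tuning commonly suggested for ZFS memory-allocation problems in this time frame went into /boot/loader.conf; a sketch with placeholder values only, not recommendations for any particular machine:

    # /boot/loader.conf
    vm.kmem_size="1024M"
    vm.kmem_size_max="1024M"
    vfs.zfs.arc_max="512M"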

Re: Auto bridge for qemu network

2008-05-18 Thread Maho NAKATA
From: bazzoola [EMAIL PROTECTED] Subject: Auto bridge for qemu network [was: kqemu support: not compiled] Date: Thu, 15 May 2008 03:06:25 -0400 Also, is it possible to update this page? It has some outdated info: http://people.freebsd.org/~maho/qemu/qemu.html It is the first answer from Google
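For reference, a minimal manual bridge/tap setup for qemu guest networking on FreeBSD; a sketch only, where the em0 and tap0 interface names are examples to be replaced with the real NIC and tap device:

    kldload if_bridge if_tap
    sysctl net.link.tap.up_on_open=1
    ifconfig bridge0 create
    ifconfig bridge0 addm em0 addm tap0 up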

Re: possible zfs bug? lost all pools

2008-05-18 Thread Charles Sprickman
On Sun, 18 May 2008, Torfinn Ingolfsen wrote: On Sun, 18 May 2008 09:56:17 -0300 JoaoBR [EMAIL PROTECTED] wrote: after trying to mount my zfs pools in single user mode I got the following message for each: May 18 09:09:36 gw kernel: ZFS: WARNING: pool 'cache1' could not be loaded as it was

Re: Disk access/MPT under ESX3.5

2008-05-18 Thread Shunsuke SHINOMIYA
da0 at mpt0 bus 0 target 0 lun 0
da0: VMware Virtual disk 1.0 Fixed Direct Access SCSI-2 device
da0: 3.300MB/s transfers
da0: Command Queueing Enabled
da0: 34816MB (71303168 512 byte sectors: 255H 63S/T 4438C)
Can you re-negotiate the transfer rate using camcontrol? `camcontrol negotiate 0:0
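As a sketch of what that might look like against the da0 device above (the 20.000 MHz sync rate is only an example value):

    camcontrol negotiate da0 -v            # show the currently negotiated settings
    camcontrol negotiate da0 -R 20.000 -a  # request a new sync rate and renegotiate now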