On Wed, 6 Jul 2005, Thomas Backlund wrote:
> > I could check the firmware versions if you want.
> Yes thanks, please do.
megaraid: fw version:[516A] bios version:[H418]
megaraid: fw version:[513O] bios version:[H418]
At least these versions seem to work just fine.
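For reference, those version strings come from the driver's boot messages, so they can be pulled again later from the kernel log on a running box; something like this should show them (the grep pattern is just the message prefix above):

  dmesg | grep -i megaraid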
--
-=[ Count Zero / TBH - Jussi Hämäläinen - email [EMAIL PROTECTED] ]=-
On Tue, 5 Jul 2005, Chris Wright wrote:
Any news on this matter?
I have a PE1850 waiting for kernel upgrade, but I'm afraid to do so now...
I can't break my box with tests since it's in active use...
For now I'm running a 2.6.8.1 based kernel on the box...
Last known good one (that Andy
On Thu, 17 May 2001, Simon Richter wrote:
> CPU is a Pentium 166 MMX on an Asus TX97 mainboard, ISA cards are a 3c509
> and a Soundblaster.
The Asus TX97 is known to be a CPU toaster. I've replaced dozens of
them because of overheating problems. I don't know why the problem
seems to come up with
I have two PCs running Slackware 7.1. I can't get lockd to work
properly with NFS:
Apr 10 21:03:59 sputnik kernel: nsm_mon_unmon: rpc failed, status=-93
Apr 10 21:03:59 sputnik kernel: lockd: cannot monitor xxx.xxx.xxx.xxx
Apr 10 21:03:59 sputnik kernel: lockd: failed to monitor xxx.xxx.xxx.xxx
On Tue, 10 Apr 2001, Jussi Hamalainen wrote:
>    program vers proto   port
>     100000    2   tcp    111  portmapper
>     100000    2   udp    111  portmapper
>     100021    1   udp   1024  nlockmgr
>     100021    3   udp   1024  nlockmgr
>     100005    1   udp    686  mountd
>     100005    2
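Worth noting: there is no "status" service registered in the listing above, and lockd needs the NSM status daemon (rpc.statd) for monitoring; the -93 in the log looks like -EPROTONOSUPPORT. A quick check, assuming rpc.statd from nfs-utils is what's missing:

  rpcinfo -p localhost | grep status   # no output => statd isn't registered
  rpc.statd                            # start it, then retry the lock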
Where can I get the LFS patch for 2.2.18? www.scyld.com doesn't
seem to be carrying it anymore.
--
-=[ Count Zero / TBH - Jussi Hämäläinen - email [EMAIL PROTECTED] ]=-
On Wed, 17 Jan 2001, Tony Gale wrote:
> It looks like this is due to the odd way in which ipchains handles
> fragments. Try:
>
> echo 1 > /proc/sys/net/ipv4/ip_always_defrag
Thanks, this seems to do the trick. Does this oddity still exist
in 2.4?
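Since that /proc knob doesn't survive a reboot, the usual trick is to set it from a boot script; the exact file varies by distro, so the path here is only an example:

  # e.g. appended to /etc/rc.d/rc.local:
  echo 1 > /proc/sys/net/ipv4/ip_always_defrag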
--
-=[ Count Zero / TBH - Jussi Hämäläinen - email [EMAIL PROTECTED] ]=-
There seems to be a bug in ipchains. Matching port 65535 seems to
always fail. If I set the chain policy to REJECT or DENY and then
add a rule that accepts TCP to/from ports 0:65535, packets going to
port 65535 will still be caught by the kernel. Is there a fix for
this? It's driving me nuts. The
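A minimal rule set that shows the symptom (the catch-all 0:65535 range ought to match everything, yet port 65535 still hits the policy; addresses are left as the any-match default):

  ipchains -P input DENY
  ipchains -A input -p tcp -s 0.0.0.0/0 -d 0.0.0.0/0 0:65535 -j ACCEPT
  # connections to destination port 65535 are still denied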
On Mon, 1 Jan 2001, Lincoln Dale wrote:
> i know that you've said previously that you've increased your MTU beyond
> 1500, but can you validate that it is actually working?
Yup. At least 1500 byte ICMP echo packets get through the tunnel
OK.
> alternatively, ensure that your application is
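One way to double-check that with plain ping (host name is a placeholder; -s sets the ICMP payload, and 1472 bytes of payload plus 8 bytes of ICMP header and 20 bytes of IP header makes a full 1500-byte packet):

  ping -s 1472 far-end-of-tunnel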
On Sun, 31 Dec 2000, Mikael Abrahamsson wrote:
> When the linux box does TCP to the outside it'll use the MTU of
> the tunnel (default route is the tunnel) and thus works perfectly
> (since TCP MSS will be set low enough to fit into the tunnel).
In my case I can't access a problematic host even
I have an old 486-box acting as a router. It has two NICs and
an ISDN adapter. The box is connected to my ISP by ISDN link
and has a GRE tunnel running over the ISDN link. The other end
of the tunnel is a Cisco router and the tunnel is the default
route. I'm experiencing problems identical to the
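For comparison with the Cisco end, the MTU the Linux side uses on the tunnel can be checked and, if needed, pinned down by hand (gre0 as the device name is an assumption):

  ifconfig gre0            # shows the current MTU among other things
  ifconfig gre0 mtu 1476   # 1476 = 1500 minus the 24-byte GRE/IP overhead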
hdc: set_multmode: status=0x51 { DriveReady SeekComplete Error }
hdc: set_multmode: error=0x04 { DriveStatusError }
[PTBL] [523/255/63] hdc1 hdc2
This has been happening at least since 2.2.10. It's probably just
something cosmetic, but shouldn't it still be fixed? Running
vanilla 2.2.16.
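If anyone wants to poke at this, the message is the drive rejecting the multiple-sector-mode setup at init time; a couple of things to try, assuming hdparm is installed (device name taken from the log):

  hdparm -i /dev/hdc    # MaxMultSect in the identify data shows what the drive claims
  hdparm -m0 /dev/hdc   # issue SET MULTIPLE MODE with count 0 to disable it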