hi there, i reinstalled my system today with FreeBSD 4.9-RELEASE.
i installed cvsup-without-gui from ports, ran cvsup stable.sup, waited for it to
finish, cd /usr/src and make buildworld.
and then BOOM! it dropped. what's the big deal? i've done this on MANY other
servers, at home, on ot
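For readers following along, the sequence described is the standard 4.x source-upgrade dance. A dry-run sketch (the `run` wrapper is illustrative; `stable.sup` is the supfile named in the post):

```shell
#!/bin/sh
# Dry-run of the upgrade procedure from the post.  The run() wrapper
# only prints each step; remove it to execute for real on a 4.x box.
run() { echo "would run: $*"; }

run cvsup stable.sup    # sync /usr/src from the supfile in the post
run cd /usr/src
run make buildworld     # the step that "drops" for the poster
```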
On Thu, Oct 07, 2004 at 09:49:45PM +0300, Ruslan Ermilov wrote:
> On Thu, Oct 07, 2004 at 12:38:42PM -0400, Vlad wrote:
> > FreeBSD 4.10-STABLE #3: Thu Sep 30
> >
> > $ id
> > uid=65534(nobody) gid=65534(nobody) groups=65534(nobody)
> >
> > $ mkdir test
> >
> > $ chmod 770 test
> >
> > $ cp -R
Jean-Francois Dockes writes:
| Just in case it may help someone (this information is not very easily
| accessible in the archives):
|
| - I have a Promise TX2 controller with a PCI ID of 0x3375105a . It works
|for me in 4.10 by adding the new PCI ID everywhere that you'll find the
|other/
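The "add the new PCI ID everywhere" edit can be illustrated with a mock table (the file below is a stand-in for the real 4.10 ata driver sources, not the actual file; only the ID 0x3375105a comes from the post, the other ID is an example value):

```shell
# Mock ID table standing in for the driver source; the real edit adds
# 0x3375105a wherever the existing Promise TX2 IDs already appear.
cat > /tmp/promise_ids.txt <<'EOF'
0x4d69105a  Promise TX2 (existing entry, example value)
EOF
echo '0x3375105a  Promise TX2 (new entry from the post)' >> /tmp/promise_ids.txt
grep -c '105a' /tmp/promise_ids.txt   # both entries now present
```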
Steve Shorter wrote:
> Howdy!
>
> FreeBSD 4.10
>
> I have some machines that run customers' CGI stuff.
> These machines have started to hang and become unresponsive.
> At first I thought it was a hardware issue, but I discovered in
> a cyclade
[ current@ cc'ed, as it's still the best place for 5.x ]
On Fri, 8 Oct 2004 14:17:13 +0200 (CEST)
"Patrick M. Hausen" <[EMAIL PROTECTED]> wrote:
> Hi!
>
> Next test with the current beta:
>
> We have a system with a VIA chipset based mainboard, the ATA
> controller is reported to be a VIA 8235.
> "Mike" == Mike Tancsa <[EMAIL PROTECTED]> writes:
Mike> At 12:18 PM 08/10/2004, David Gilbert wrote:
>> Idle_poll is default 1, I'm not positive we tested 0. I don't
>> think there is much idle time here.
Mike> Actually, on RELENG_5, I think the default is now zero.
checked, though. We did
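Since the default changed between branches, it is worth pinning the value explicitly when benchmarking. A sysctl.conf fragment (tunable names as used in this thread):

```
# /etc/sysctl.conf -- pin polling behaviour so the branch default
# (1 on the 4.x-era kernels, 0 on RELENG_5 per Mike) doesn't skew results
kern.polling.enable=1
kern.polling.idle_poll=0
```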
> "Julian" == Julian Elischer <[EMAIL PROTECTED]> writes:
Julian> David Gilbert wrote:
Julian> there are also changes in B4->B7 that are related to scheduling
Julian> the packet delivery mechanisms. They may not make much of a
Julian> difference but...
I will endeavour to do a cvsup and retest
At 12:18 PM 08/10/2004, David Gilbert wrote:
Idle_poll is default 1, I'm not positive we tested 0. I don't think
there is much idle time here.
Actually, on RELENG_5, I think the default is now zero.
With a releng_5 BETA7 box in between 2 other hosts, with idle_poll set to
the default on zero, usi
David Gilbert wrote:
"Scott" == Scott Long <[EMAIL PROTECTED]> writes:
Scott> Interesting results. One thing to note is that a severe bug in
Scott> the if_em driver was fixed for BETA7. The symptoms of this bug
Scott> include apparent livelock of the machine during heavy xmit
Scott>
Howdy!
FreeBSD 4.10
I have some machines that run customers' CGI stuff.
These machines have started to hang and become unresponsive.
At first I thought it was a hardware issue, but I discovered in
a Cyclades log the following stuff that got logged to the
console which explains the
> "Daniel" == Daniel Eriksson <[EMAIL PROTECTED]> writes:
Daniel> David Gilbert wrote:
>> Right out of the box, FreeBSD 5.3 (with polling) passed about 200
>> kpps.
Daniel> Was this with debug.mpsafenet enabled and all debugging
Daniel> (WITNESS and such) turned off?
mpsafenet on and all wit
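For anyone reproducing the numbers: the debugging knobs Daniel asks about live in the kernel config and loader.conf. A sketch (option names as in stock 5.x-era GENERIC; confirm against your own config):

```
# kernel config: benchmark kernels should be built without these,
# which are enabled by default in 5.3 BETA GENERIC:
#   options  WITNESS
#   options  WITNESS_SKIPSPIN
#   options  INVARIANTS
#   options  INVARIANT_SUPPORT

# /boot/loader.conf: run the network stack without Giant
debug.mpsafenet="1"
```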
> "Mike" == Mike Tancsa <[EMAIL PROTECTED]> writes:
Mike> At 10:08 AM 08/10/2004, David Gilbert wrote:
>> Right out of the box, FreeBSD 5.3 (with polling) passed about 200
>> kpps. net.isr.enable=1 increased that without polling to about 220
Mike> Did you have kern.polling.idle_poll at 0 or
> "Guy" == Guy Helmer <[EMAIL PROTECTED]> writes:
Guy> The fixed bug in the em driver for BETA7 may significantly help
Guy> (see Scott Long's response prior to mine).
As I replied, I hand-applied these patches. They reduced livelock
(or what my tech calls "chunkiness" --- almost livelock),
David Gilbert wrote:
> Right out of the box, FreeBSD 5.3 (with polling) passed about 200
> kpps.
Was this with debug.mpsafenet enabled and all debugging (WITNESS and such)
turned off?
/Daniel Eriksson
Paul Mather wrote:
> Vinum is known broken in 5.3. :-) You should be using geom_vinum
> instead. It will largely be a drop-in replacement for your above Vinum
> configuration. (I am using it on a similar root-on-vinum setup.) The
> main changes are these:
What I need to know is whether the rai
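The "main changes" Paul refers to come down to the loader hook and the device paths. A sketch of the two edits (volume name assumed to match the configuration above):

```
# /boot/loader.conf: load the GEOM vinum class at boot
geom_vinum_load="YES"

# /etc/fstab: volumes move from /dev/vinum/* to /dev/gvinum/*, e.g.
# /dev/gvinum/root   /   ufs   rw   1  1
```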
At 10:08 AM 08/10/2004, David Gilbert wrote:
Right out of the box, FreeBSD 5.3 (with polling) passed about 200
kpps. net.isr.enable=1 increased that without polling to about 220
Did you have kern.polling.idle_poll at 0 or 1 ? In my tests a few weeks ago
this seemed to make a difference, but the l
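The kpps figures quoted throughout this thread are just packet-counter deltas over time. A toy computation (counter values invented, in the ballpark of the thread's 220 kpps result; on FreeBSD, `netstat -w 1` reports the same thing live):

```shell
# Two input-packet counter samples taken one second apart
# (made-up numbers for illustration).
before=1000000
after=1220000
kpps=$(( (after - before) / 1000 ))
echo "${kpps} kpps"
```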
On Fri, 2004-10-08 at 07:52, Patrick M. Hausen wrote:
> We have a production system that runs on a vinum system drive
> configured like this:
[[Configuration omitted.]]
> It's currently running fine with FreeBSD 5.2.1-RELEASE-p10.
>
> After upgrading to 5.3-BETA7, buildworld, buildkernel, insta
David Gilbert wrote:
During the next week, I will continue testing with full simulated
routing tables, random packets and packets between 350 and 550 bytes
(average ISP out/in packet sizes). I will add to this report then.
If anyone has tuning advice for FreeBSD 5.3, I'd like to hear it.
Three thi
David Gilbert wrote:
The opportunity presented itself for me to test packet passing ability
on some fairly exotic hardware. The motherboard I really wanted to
test not only had separate memory busses for each cpu, but also had
two separate PCI-X busses (one slot each). To this, I added two
intel p
> "Scott" == Scott Long <[EMAIL PROTECTED]> writes:
Scott> Interesting results. One thing to note is that a severe bug in
Scott> the if_em driver was fixed for BETA7. The symptoms of this bug
Scott> include apparent livelock of the machine during heavy xmit
Scott> load. You might want to up
David Gilbert wrote:
The opportunity presented itself for me to test packet passing ability
on some fairly exotic hardware. The motherboard I really wanted to
test not only had separate memory busses for each cpu, but also had
two separate PCI-X busses (one slot each). To this, I added two
intel p
The opportunity presented itself for me to test packet passing ability
on some fairly exotic hardware. The motherboard I really wanted to
test not only had separate memory busses for each cpu, but also had
two separate PCI-X busses (one slot each). To this, I added two
intel pro/1000 gigabit ether
Hi!
Next test with the current beta:
We have a system with a VIA chipset based mainboard, the ATA
controller is reported to be a VIA 8235.
This system worked just fine with 5.1, then stopped working when
the atang changes were committed. It wasn't that important to us
(it's really cheap [tm] ha
Hi all!
We have a production system that runs on a vinum system drive
configured like this:
cab# vinum l
2 drives:
D b State: up /dev/ad1s1h A: 0/114494 MB (0%)
D a State: up /dev/ad0s1h A: 0/114494 MB (0%)
4 volumes:
V root