Linux-Hardware Digest #905, Volume #14 Fri, 15 Jun 01 14:13:12 EDT
Contents:
Re: EIDE RAID Card Ideas (Vincent Fox)
Fast NICs (Chema)
Re: Capacity of Dell Server running as a web server? ("Heikki Tuuri")
Re: Fast NICs ("Andy Scutt")
Re: Linux X goes away??? (David Hartnett)
Workaround to get HP Colorado IDE tape drives to work under RH7.1 (2.4 (Leonard
Evens)
Re: VIA686b bug and Linux (Kenneth Rørvik)
Redhat 7.1 and SMC 1211 network card on ix86 PC (Adrian McMenamin)
Re: PCMCIA Ethernet Card for Linux? (Michael Meissner)
Re: Fast NICs (Jean-David Beyer)
Re: Capacity of Dell Server running as a web server? ("Steve Wolfe")
Re: Fast NICs (Alexander Wasmuth)
Re: ServerWorks III LE Chipset under linux ("Steve Wolfe")
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (Vincent Fox)
Subject: Re: EIDE RAID Card Ideas
Date: 15 Jun 2001 14:10:17 GMT
In <3b2a0427$0$88186$[EMAIL PROTECTED]> "Eric Braeden" <[EMAIL PROTECTED]>
writes:
>What is there about the current crop of EIDE RAID cards
>that prevent operation under Linux? Other than the damn
>driver source isn't available...In principle, why can't the
>cards be made "invisible" so normal drivers for IDE can
>be used?
There already is a very good one. See http://www.3ware.com/
Their cards work very well; I use them throughout my operation.
Primarily I use the 6200 and have machines with mirrored boot drives.
A drive failure is thus not a cause for downtime. Simple.
--
"Who needs horror movies when we have Microsoft"?
-- Christine Comaford, PC Week, 27/9/95
------------------------------
Date: Fri, 15 Jun 2001 16:25:17 +0200
From: Chema <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Crossposted-To: comp.os.linux.misc,comp.os.linux.networking
Subject: Fast NICs
Hi everybody:
I am looking for fast and reliable 10/100 Mb/s NICs for a new cluster. I
cannot find published raw-speed benchmarks or press reviews on the subject.
The NICs should be well supported under GNU/Linux, and I do not mind the
price; what matters is top speed and good manufacturer support.
Please give me your opinions or point me to an adequate URL.
Thank you very much.
Chema Box
System Administrator
ETSI Navales UPM
Madrid Spain.
------------------------------
From: "Heikki Tuuri" <[EMAIL PROTECTED]>
Subject: Re: Capacity of Dell Server running as a web server?
Date: Fri, 15 Jun 2001 14:32:02 GMT
Hi!
> No, I didn't say that it was the bottleneck. To pretend that anything
>that is not a "bottleneck" can be dropped from the equation isn't correct.
>And trying to equate sheer capacity with web-serving capacity isn't
>correct. Even though the database machine may not be anywhere near
>capacity, it is still part of the equation.
How many DB queries does your application typically do per hit?
> As just a quick, fast, "off-the top of my head" example, using a
>count() function when you're trying to benchmark a join isn't exactly
>optimal - how much did efficiency differences in the count() code affect
>the results? In order to find that out, you then have to benchmark the
>count() routines, and do the math.
sum(...) or count(*) is needed to eliminate huge traffic between
the client and the server. Decision support queries typically
consist of such aggregate function queries. Calculating
the sum or count inside the DB server probably takes less
than 1/1000 of the time required to look the rows up through
the indexes.
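To make the point concrete, here is a hypothetical decision-support query of that shape (the table and column names are invented purely for illustration). It returns a single aggregate row instead of streaming every joined row to the client:

```sql
-- Hypothetical schema: orders(id, customer_id, amount), customers(id, region)
-- Only one row crosses the client/server link:
SELECT count(*), sum(o.amount)
FROM orders o JOIN customers c ON o.customer_id = c.id
WHERE c.region = 'EU';
-- versus selecting all matching joined rows and counting them client-side.
```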
>> I used the default parameters of PostgreSQL except that I
>> increased the shared cache size to 24 MB and the log buffer to 4 MB.
>> For MySQL/InnoDB you can use the parameters specified in the InnoDB
>> manual. As long as the magnitude of the buffer pool and the log buffer
>> are about what I said on my benchmark page, they will run in about the
>> same time.
>
> That may very well be true - but both systems were designed to run
>under different configurations. I would not pretend to think that a
>configuration that suited Postgres would suit MySQL/InnoDB, nor would I
>expect the default configuration of either to be anywhere near optimal.
>Furthermore, you didn't mention anything about whether fsync() was enabled
>or disabled on either, which can make *huge* differences.
Sorry, I have to correct myself: I also set some other parameters larger
than the default. I have pasted the Pg conf file at the end of this message.
I let PostgreSQL do fsync in file writes. InnoDB by default uses
fdatasync. There should not be much difference, because the
created table fits well in the buffer pool, and the necessary disk write
consists of a big block write of the log segment created in the
big insert operation. The fsync problem in Pg versions prior to 7.1
was due to the fact that Pg did not have WAL. To commit a transaction
one had to flush all modified index pages to disk, which meant a lot
of random disk writes. By removing the fsync one could give up the
durability of transactions and consequently skip the disk writes.
The join calculation does not do any disk I/O at all because the tables fit
in the buffer pool. I chose my simple benchmarks so that one needs
little or no tuning to get the optimal result from a typical database
server.
>> The Great Bridge benchmarks have exactly the flaws you are criticizing:
>> they did not post the configuration parameters to their website,
>> but only say that they used the 'default parameters' of two databases
>> whose names they do not disclose. They tuned PostgreSQL but
>> did not tune the other databases.
>
> Well, that's not entirely true. They did not disclose the two
>"closed-source" databases because of legal issues - after speaking with
>the person that did the tests, I can tell you that the two companies who
>make the products have histories of lawsuits over benchmarks.
Yes, I know they cannot publish the names without permission
from the DB vendors. Also, TPC forbids the use of the name 'TPC-C' without
conforming to the TPC rules. For some reason they still used the name
'TPC-C'.
> In the case of MySQL, you're right - they *initially* didn't let them
>configure it. However, they realized their errors, and changed that. I'm
>not going to pretend it was perfect, though. : )
I would have liked to see some analysis of the test results. The 'TPC-C'
tests they ran were, I believe, strictly disk-bound. That would explain why
all the databases got the same results: they were basically measuring
disk rotation speed.
>> It would be very nice if you could correct this testing methodology
>> by running the tests yourself. They are very simple and it takes only
>> 10 minutes from you to run them once you have MySQL/InnoDB-3.23.39
>> installed. I would also be very pleased to hear how your web application
>> scales on MySQL/InnoDB.
>
> I've actually been giving a lot of thought lately to a good database
>benchmarking methodology, and have a few ideas, but I'm far from perfect.
>The best, for *me*, is to keep a log of queries executed on *my* servers,
>and use those for benchmarking, but that won't mean much for systems that
>differ greatly from what I'm doing. Want to work together, and see what
>we can come up with?
If you can run the tests posted on my website or other tests in an
impartial way on PostgreSQL and MySQL/InnoDB, I am very interested
to hear the results. I can help in setting InnoDB parameters, though
probably the examples in the InnoDB manual at http://www.innodb.com
suffice to show how to set them in the optimal way. I have sought to find
solutions which make tuning easy.
>steve
Regards,
Heikki
#
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form
#
# name = value
#
# (The `=' is optional.) White space is collapsed, comments are
# introduced by `#' anywhere on a line. The complete list of option
# names and allowed values can be found in the PostgreSQL
# documentation. Examples are:
#log_connections = on
#fsync = off
#max_connections = 64
# Any option can also be given as a command line switch to the
# postmaster, e.g., 'postmaster -c log_connections=on'. Some options
# can be set at run-time with the 'SET' SQL command.
#========================================================================
#
# Connection Parameters
#
#tcpip_socket = false
#ssl = false
#max_connections = 32 # 1-1024
#port = 5432
#hostname_lookup = false
#show_source_port = false
#unix_socket_directory = ''
#unix_socket_group = ''
#unix_socket_permissions = 0777
#virtual_host = ''
#krb_server_keyfile = ''
#
# Performance
#
sort_mem = 1024
shared_buffers = 3000 # min 16
fsync = true
#
# Optimizer Parameters
#
enable_seqscan = true
enable_indexscan = true
enable_tidscan = true
enable_sort = true
enable_nestloop = true
enable_mergejoin = true
enable_hashjoin = true
#ksqo = false
#geqo = true
#effective_cache_size = 1000 # default in 8k pages
#random_page_cost = 4
#cpu_tuple_cost = 0.01
#cpu_index_tuple_cost = 0.001
#cpu_operator_cost = 0.0025
#geqo_selection_bias = 2.0 # range 1.5-2.0
#
# GEQO Optimizer Parameters
#
#geqo_threshold = 11
#geqo_pool_size = 0 #default based in tables, range 128-1024
#geqo_effort = 1
#geqo_generations = 0
#geqo_random_seed = -1 # auto-compute seed
#
# Inheritance
#
#sql_inheritance = true
#
# Deadlock
#
#deadlock_timeout = 1000
#
# Expression Depth Limitation
#
#max_expr_depth = 10000 # min 10
#
# Write-ahead log (WAL)
#
wal_buffers = 500 # min 4
wal_files = 10 # range 0-64
wal_sync_method = fsync # fsync or fdatasync or open_sync or open_datasync
# Note: default wal_sync_method varies across platforms
#wal_debug = 0 # range 0-16
#commit_delay = 0 # range 0-100000
#commit_siblings = 5 # range 1-1000
checkpoint_segments = 10 # in logfile segments (16MB each), min 1
checkpoint_timeout = 300 # in seconds, range 30-3600
#
# Debug display
#
#silent_mode = false
#log_connections = false
#log_timestamp = false
#log_pid = false
#debug_level = 0 # range 0-16
#debug_print_query = false
#debug_print_parse = false
#debug_print_rewritten = false
#debug_print_plan = false
#debug_pretty_print = false
#ifdef USE_ASSERT_CHECKING
#debug_assertions = true
#endif
#
# Syslog
#
#ifdef ENABLE_SYSLOG
#syslog = 0 # range 0-2
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#endif
#
# Statistics
#
#show_parser_stats = false
#show_planner_stats = false
#show_executor_stats = false
#show_query_stats = false
#ifdef BTREE_BUILD_STATS
#show_btree_build_stats = false
#endif
#
# Lock Tracing
#
#trace_notify = false
#ifdef LOCK_DEBUG
#trace_locks = false
#trace_userlocks = false
#trace_spinlocks = false
#debug_deadlocks = false
#trace_lock_oidmin = 16384
#trace_lock_table = 0
#endif
------------------------------
From: "Andy Scutt" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.misc,comp.os.linux.networking
Subject: Re: Fast NICs
Date: Fri, 15 Jun 2001 15:55:46 +0100
Hiya,
In the years I've been doing this, I've worked for ISPs and ASPs, and in
both cases I can honestly say the best NICs we've had under NT and Linux
are Intel EtherExpress 100+. They aren't cheap (about £40 in the UK), but
they are reliable and fast.
Scutty
------------------------------
From: [EMAIL PROTECTED] (David Hartnett)
Crossposted-To:
comp.os.linux.admin,comp.os.linux.misc,comp.os.linux.questions,comp.os.linux.x,linux.redhat.install,linux.redhat.misc
Subject: Re: Linux X goes away???
Date: 15 Jun 2001 08:59:17 -0700
J Sloan <[EMAIL PROTECTED]> wrote in message news:<[EMAIL PROTECTED]>...
> JT wrote:
>
> > Running RH7.0. I never can be assured if Linux is going to come up in X.
> > Sometimes it does and sometimes it's just a blank black screen. When it
> > doesn't, I have to re-install the whole OS over again. Running Matrox
> > Millennium G200, 8 MB RAM.
>
> bzzt, wrong answer - reinstalling may be the
> windows way of fixing problems, but it's not
> the answer in the Unix world.
>
> Look in the sys logs to find out what is
> making the X server unhappy -
>
> > What confuses me, why would it work sometimes and sometimes not?
>
> Flaky hardware?
>
> the log entries will shed more light.
>
> cu
>
> jjs
What's making the X server unhappy is the fact that this new version 4
does not yet include the necessary drivers for all cards. We will be
lucky if it is a complete server by the end of the year. The best
thing is probably to ignore version 4 and install version 3.3.6, which
performed almost flawlessly because it had a driver base for
everything. RedHat rawhide has 4.1 for the next release, but it's
unknown whether it is complete.
I'm dealing with RedHat 7.1 and it is very disappointing.
------------------------------
From: Leonard Evens <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.setup,redhat.general,comp.os.linux.misc
Subject: Workaround to get HP Colorado IDE tape drives to work under RH7.1 (2.4
Date: Fri, 15 Jun 2001 11:10:54 -0500
Robert Davies wrote:
>
> Leonard Evens wrote:
>
> > Denis Leroy wrote:
> >>
> >> Ga Mu <[EMAIL PROTECTED]> wrote in message
> >> news:<[EMAIL PROTECTED]>...
> >> > HELP!
> >> >
> >> The solution is to use the device in SCSI emulation mode, it'll work just
> >> fine.
> >>
> >> modprobe ide-scsi
> >> modprobe st
> >>
> >> drive is now available as /dev/st0 and /dev/nst0
> >>
> >> -denis
> >
> > I tried this, but attempts to access the device through mt or
> > tar respond that there is no such device. Of course, there is
> > such a device, but something can't find it. Any suggestions?
>
> Yes, you need to pass some options on boot in your lilo append line.
>
> To set a drive hdc to scsi emulation I have :
>
> append = "reboot=warm hdb=ide-scsi hdc=ide-scsi"
>
> This is needed to tell the IDE drivers that they don't 'own' this drive.
>
> I had to put the modprobe ide-scsi in a file called 'boot.local' which is
> run before the rc?.d stuff.
>
> Rob
I think that as long as I don't try to access the tape drive first
as /dev/ht0 (and thus load the ide-tape module), Denis's
fix works fine. I think I could just put this in rc.local,
or just load the modules when I need to use the tape drive.
I know several people have complained about not being able to use
the HP Colorado IDE tape drives under RH7.1. I've been trying
to track the problems down without success. I just came upon Denis's
solution by luck, and I think it should be better advertised.
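For the archives, here is roughly what the whole workaround looks like as a
configuration sketch. The device name hdd is only an example; substitute
whichever hdX your tape drive actually sits on.

```shell
# /etc/lilo.conf -- keep the native IDE driver away from the tape drive:
#   append = "hdd=ide-scsi"
# (rerun /sbin/lilo and reboot after editing)

# /etc/rc.d/rc.local (or run these by hand before using the drive):
/sbin/modprobe ide-scsi   # SCSI emulation layer over IDE
/sbin/modprobe st         # SCSI tape driver
# The drive should then appear as /dev/st0 (rewinding) and /dev/nst0
```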
--
Leonard Evens [EMAIL PROTECTED] 847-491-5537
Dept. of Mathematics, Northwestern Univ., Evanston, IL 60208
------------------------------
Subject: Re: VIA686b bug and Linux
From: [EMAIL PROTECTED] (Kenneth Rørvik)
Date: Fri, 15 Jun 2001 16:50:28 GMT
[EMAIL PROTECTED] (Michael Perry) wrote in
<[EMAIL PROTECTED]>:
>One critical issue I should raise is to be careful using hdparm on the
>2.4.5 kernel with the VIA drivers. I do not believe hdparm is needed to
>set any advanced parameters on the drives being supported by this
>driver.
With the correct kernel options, DMA is set automagically. However, turning
on IRQ unmasking should also help a bit (hdparm -u1). I believe this is not
done at boot time (it's not on my box).
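As a sketch, the boot-time version of that would be an rc.local fragment like
the one below. /dev/hda is only an example device, and -u1 is known to be
risky on some chipset/driver combinations (including buggy VIA boards), so
test carefully before leaving it enabled.

```shell
# Example /etc/rc.d/rc.local fragment -- device name is illustrative only
/sbin/hdparm -d1 /dev/hda    # enable DMA, in case the kernel did not
/sbin/hdparm -u1 /dev/hda    # unmask other interrupts during disk IRQs
/sbin/hdparm -tT /dev/hda    # quick throughput check to verify the effect
```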
--
Kenneth Rørvik 91841353/22950312
Nordbergv. 60A [EMAIL PROTECTED]
0875 OSLO home.no.net/stasis
------------------------------
From: Adrian McMenamin <[EMAIL PROTECTED]>
Subject: Redhat 7.1 and SMC 1211 network card on ix86 PC
Date: Fri, 15 Jun 2001 18:01:01 +0100
Sorry, folks, I've tried for a week to set this up but now have to admit
failure.
I have lots of PC experience, but it's all Windows-based, and now that I've
made the move I am having a very difficult time. But here goes...
I have the following hardware installed on a new HP pavilion machine...
SMC2-1211TX (rev 16) PCI network card. (This is reported by the system, so
I assume it is actually plugged in!)
My system (Redhat 7.1) does not seem to have automatically configured a
driver for it, so I have tried to do it myself, but cannot get the
rtl8139.c file to compile.
When I try I get all these messages...
[root@localhost Adrian]# gcc -DMODULE -Wall -Wstrict-prototypes -O6 -c
rtl8139.c
rtl8139.c: In function `rtl8129_open':
rtl8139.c:675: structure has no member named `tbusy'
rtl8139.c:676: structure has no member named `interrupt'
rtl8139.c:677: structure has no member named `start'
rtl8139.c: In function `rtl8129_timer':
rtl8139.c:777: structure has no member named `interrupt'
rtl8139.c:783: structure has no member named `tbusy'
rtl8139.c: In function `rtl8129_tx_timeout':
rtl8139.c:910: structure has no member named `tbusy'
rtl8139.c: In function `rtl8129_start_xmit':
rtl8139.c:940: structure has no member named `tbusy'
rtl8139.c:963: structure has no member named `tbusy'
rtl8139.c:967: structure has no member named `tbusy'
rtl8139.c: In function `rtl8129_interrupt':
rtl8139.c:992: structure has no member named `interrupt'
rtl8139.c:995: structure has no member named `interrupt'
rtl8139.c:1072: structure has no member named `tbusy'
rtl8139.c:1073: `NET_BH' undeclared (first use in this function)
rtl8139.c:1073: (Each undeclared identifier is reported only once
rtl8139.c:1073: for each function it appears in.)
rtl8139.c:1152: structure has no member named `interrupt'
rtl8139.c: In function `rtl8129_close':
rtl8139.c:1275: structure has no member named `start'
rtl8139.c:1276: structure has no member named `tbusy'
rtl8139.c: In function `rtl8129_get_stats':
rtl8139.c:1354: structure has no member named `start'
Now, on the scyld.com site there is a warning about this sort of thing
happening with Redhat 7.0 - has it not been fixed for 7.1? It also talks
about using kgcc instead of gcc. Where is kgcc?
Anyway, I gave up on that and tried to use the RPM of precompiled drivers
for ix86 machines, but it doesn't seem to work either - I get messages
about unresolved symbols when the loader says it is examining
module dependencies. But to tell you the truth, I am not sure I am even
loading the files into the right places!
I just don't know what to do - any help would be gratefully received.
------------------------------
Subject: Re: PCMCIA Ethernet Card for Linux?
From: Michael Meissner <[EMAIL PROTECTED]>
Date: 15 Jun 2001 13:42:28 -0400
cfeller <[EMAIL PROTECTED]> writes:
> Xircom's cards work good... have both a Xircom PCMCIA modem card, and a
> Xircom PCMCIA ethernet/lan card on my RH laptop.
Quoting from the pcmcia-3.1.26 SUPPORTED.CARDS page:
The following cards are NOT supported. This list is not meant to be
comprehensive: I list these cards because people frequently ask about
them. In general, there are no technical reasons why a card is not
supported: simply put, as far as I know, no one is working on these
cards, therefore, drivers will not be written. Most cards on this
list have been there for a very long time, so please do not send me
email just to ask if their status has changed.
Adaptec/Trantor APA-460 SlimSCSI
Eiger Labs SCSI w/FCC ID K36...
Melco LPC3-TX
New Media .WAVjammer and all other sound cards
New Media LiveWire+
Nikon CoolPix100
Panasonic KXL-D720
RATOC SMA01U SmartMedia Adapter
SMC 8016 EliteCard
Xircom CEM II Ethernet/Modem
Xircom CE-10BT Ethernet [ but try xircce_cs contrib driver ]
Xircom CBE-10/100 CardBus
I've had good luck with my two LinkSys cards that I got some time ago:
Linksys PCMPC200 EtherFast CardBus
Linksys EtherFast LANmodem 56K (PCMLM56)
The PCMLM56 card is a combo card (modem + ethernet) and has the two connections
in a housing that juts out of the slot (ie, no dongle that can break and must
be the top card). It is not a cardbus card, so I tend to use the PCMPC200 when
I'm just using the ethernet. The PCMPC200 card has a dongle, but there is now
a version like the PCMLM56 that doesn't have a dongle. The PCMPC200 is also
fairly cheap ($53 at NECX, for instance).
--
Michael Meissner, Red Hat, Inc. (GCC group)
PMB 198, 174 Littleton Road #3, Westford, Massachusetts 01886, USA
Work: [EMAIL PROTECTED] phone: +1 978-486-9304
Non-work: [EMAIL PROTECTED] fax: +1 978-692-4482
------------------------------
From: Jean-David Beyer <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.misc,comp.os.linux.networking
Subject: Re: Fast NICs
Date: Fri, 15 Jun 2001 13:44:32 -0400
Chema wrote:
>
> Hi everybody:
>
> I am looking for fast and reliable 10/100Mbs NICs for a new cluster. I
> can not find pure speed benchmarks published or press reviews about it.
> The NICs should be well supported under GNU/Linux and I do not mind the
> price, the matter is top speed and a good manufacturer support.
> Please give me your opinions or point me to an adequate URL.
>
I have an Intel EE-Pro 100+ in each of my two machines. This one is a
Linux-only machine, and the other runs either Linux or Windows95. I
have never had any trouble with these cards, and they run at 100 Mb/s
full-duplex according to the LEDs on the cards and the fancy-dancy
graphic configuration tool for Windows. When booting, the machine
does say:
kernel: eepro100.c:v1.09j-t 9/29/99 Donald Becker
http://cesdis.gsfc.nasa.gov/linux/drivers/eepro100.html
kernel: eepro100.c: $Revision: 1.20.2.10 $ 2000/05/31
Modified by Andrey V. Savochkin <[EMAIL PROTECTED]> and others
kernel: eth0: Intel PCI EtherExpress Pro100 82557, 00:90:27:43:12:75,
I/O at 0xef00, IRQ 17.
kernel: Receiver lock-up bug exists -- enabling work-around. <---<<<
kernel: Board assembly 721383-006, Physical connectors present: RJ45
kernel: Primary interface chip i82555 PHY #1.
kernel: General self-test: passed.
kernel: Serial sub-system self-test: passed.
kernel: Internal registers self-test: passed.
kernel: ROM checksum self-test: passed (0x04f4518b).
kernel: eepro100.c:v1.09j-t 9/29/99 Donald Becker
http://cesdis.gsfc.nasa.gov/linux/drivers/eepro100.html
kernel: eepro100.c: $Revision: 1.20.2.10 $ 2000/05/31
Modified by Andrey V. Savochkin <[EMAIL PROTECTED]> and others
But the line indicated by <---<<< does not seem to be a problem; i.e.,
the work-around must be working.
--
.~. Jean-David Beyer Registered Linux User 85642.
/V\ Registered Machine 73926.
/( )\ Shrewsbury, New Jersey http://counter.li.org
^^-^^ 1:35pm up 8 days, 2:30, 4 users, load average: 3.43, 3.26, 2.75
------------------------------
From: "Steve Wolfe" <[EMAIL PROTECTED]>
Subject: Re: Capacity of Dell Server running as a web server?
Date: Fri, 15 Jun 2001 11:39:36 -0600
> > No, I didn't say that it was the bottleneck. To pretend that
anything
> >that is not a "bottleneck" can be dropped from the equation isn't
correct.
> >And trying to equate sheer capacity with web-serving capacity isn't
> >correct. Even though the database machine may not be anywhere near
> >capacity, it is still part of the equation.
>
>
> How many DB queries does your application typically do per hit?
Depending on the page, it can be anywhere from one to tens. But what I
was referring to is that the overall latency of the request depends
on several things - of which the database is one, but not the only factor.
> If you can run the tests posted on my website or other tests in an
> impartial way on PostgreSQL and MySQL/InnoDB, I am very interested
> to hear the results. I can help in setting InnoDB parameters, though
> probably the examples in the InnoDB manual at http://www.innodb.com
> suffice to show how to set them in the optimal way. I have sought to
find
> solutions which make tuning easy.
I ran a couple of the tests, and found results that were somewhat worse
or a lot better - but the point I'm trying to make is that your tests
aren't really indicative of real-world applications, in my opinion, and
the systems haven't necessarily been set up in a "real-world"
configuration.
The other issue is, of course, scaling. Rather than simply giving the
number of transactions per second at a single number of simultaneous
connections, plotting a graph of TPS vs. connections would illustrate
the trends much better.
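A quick sketch of the kind of report I mean. The numbers below are invented
purely to show the shape of such a summary; they are not real benchmark
results.

```python
# Summarize throughput scaling from (connections, TPS) measurements.
def scaling_report(samples):
    """samples: list of (connections, tps) pairs, sorted by connections.
    Returns lines showing each point's efficiency versus perfect linear
    scaling extrapolated from the first measurement."""
    base_conns, base_tps = samples[0]
    lines = []
    for conns, tps in samples:
        # Ideal TPS if throughput scaled linearly from the first point.
        ideal = base_tps * conns / base_conns
        lines.append("%4d conns  %7.1f tps  %5.1f%% of linear"
                     % (conns, tps, 100.0 * tps / ideal))
    return lines

if __name__ == "__main__":
    # Invented sample data, for illustration only.
    fake = [(1, 100.0), (10, 850.0), (50, 3000.0), (100, 4200.0)]
    for line in scaling_report(fake):
        print(line)
```

A table (or gnuplot graph) like this makes the knee in the curve obvious,
which a single TPS figure hides.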
steve
------------------------------
From: [EMAIL PROTECTED] (Alexander Wasmuth)
Crossposted-To: comp.os.linux.misc,comp.os.linux.networking
Subject: Re: Fast NICs
Date: 15 Jun 2001 17:51:39 GMT
* Chema <[EMAIL PROTECTED]>:
> I am looking for fast and reliable 10/100Mbs NICs for a new cluster. I
> can not find pure speed benchmarks published or press reviews about it.
> The NICs should be well supported under GNU/Linux and I do not mind the
> price, the matter is top speed and a good manufacturer support.
> Please give me your opinions or point me to an adequate URL.
Perhaps this link is helpful:
http://www.fefe.de/linuxeth/
Alex
--
http://alexander.wasmuth.org/
------------------------------
From: "Steve Wolfe" <[EMAIL PROTECTED]>
Subject: Re: ServerWorks III LE Chipset under linux
Date: Fri, 15 Jun 2001 11:49:53 -0600
> I just wanna say that I think you're nuts if you think that a dual VIA
> board is even WORTH the $70.
>
> Performance tests have shown that a dual VIA gives same or less
> performance than a single VIA at the same clock rate. The dual VIA
> chipset is just dumb. VIA chipsets are dumb. They suck; they're
> slow; and they suck.
>
> Anybody with me?
I'm guessing that you've never used one. Am I correct? I'd also be
interested in seeing where you got those performance tests, as there are
plenty of benchmarks that show it scaling very well. For instance,
http://www6.tomshardware.com/mainboard/01q1/010201/dual-06.html
shows it scaling *exactly* linearly under applications that were written
with SMP in mind. Furthermore, in real-world applications, my servers
with CUV4X-D's outperform my dually boards with BX chipsets, and the BX
is no slouch. If the BX were overclocked to the same FSB, it might
perform better, but on mission-critical servers, I tend not to overclock.
steve
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list by posting to comp.os.linux.hardware.
Linux may be obtained via one of these FTP sites:
ftp.funet.fi pub/Linux
tsx-11.mit.edu pub/linux
sunsite.unc.edu pub/Linux
End of Linux-Hardware Digest
******************************