On Mon, Mar 13, 2023 at 8:24 AM Dale <[email protected]> wrote:
>
>  According to my google searches, PCIe x4 is faster
> than PCIe x1.  It's why some cards are PCIe x8 or x16.  I think video
> cards are usually x16.  My question is, given the PCIe x4 card costs
> more, is it that much faster than a PCIe x1?

It could be slower than PCIe x1, because you didn't specify the version.

PCIe uses lanes.  Each lane provides a certain amount of bandwidth
depending on the version in use.

For example, a v1 4x card has 1 GB/s of bandwidth.  A v4 1x card has
2 GB/s of bandwidth.
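As a rough sketch of that arithmetic (per-lane figures below are approximate usable bandwidth; exact numbers depend on encoding overhead):

```python
# Approximate usable bandwidth per PCIe lane, in GB/s, by version.
# (v1/v2 use 8b/10b encoding; v3+ use 128b/130b, hence the jump.)
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}

def pcie_bandwidth(version, lanes):
    """Total bandwidth in each direction, in GB/s, for a link."""
    return PER_LANE_GBPS[version] * lanes

print(pcie_bandwidth(1, 4))  # v1 4x: 1.0 GB/s
print(pcie_bandwidth(4, 1))  # v4 1x: ~2 GB/s
```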

Note that slot size is only loosely coupled with the number of lanes.
Lots of motherboards have a second 16x slot that only provides 4-8
lanes to save on the cost of a PCIe switch.  You can also use adapters
to connect a 16x card to a 1x slot, or you might find a motherboard
with an open-ended slot so that a 16x card fits into
the 1x slot.  It will of course only use a single lane that way.

So what you need to do is consider the following:

1. How much bandwidth do you actually need?  If you're using spinning
disks you aren't going to sustain more than 200 MB/s to a single drive,
and the odds of having 10 drives using all that bandwidth are pretty
low.  If you're using SSDs then you're more likely to max them out
since the seek cost is much lower.
2. What PCIe version does your motherboard support?  Sticking a v4
card on an old motherboard that only supports v2 is going to result in
it running at v2 speeds, so don't pay a premium for something you
won't use.  Likewise, if the card maker cut down on the number of
lanes on the assumption that a newer version would make up the
bandwidth, you might end up with less than you expected on an older
board.
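Putting both points together: a link trains to the lower version and the lower lane count of the two sides.  As a sketch, reusing approximate per-lane numbers:

```python
# A PCIe link negotiates down to the lowest common version and lane count.
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}

def negotiated_bandwidth(card_ver, card_lanes, slot_ver, slot_lanes):
    ver = min(card_ver, slot_ver)
    lanes = min(card_lanes, slot_lanes)
    return PER_LANE_GBPS[ver] * lanes

# A v4 4x card in a v2 4x slot runs at v2 4x: 2 GB/s, not ~8 GB/s.
print(negotiated_bandwidth(4, 4, 2, 4))  # 2.0
```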

Then look up the number of lanes and the PCIe version and see what you
can expect:
https://en.wikipedia.org/wiki/PCI_Express#History_and_revisions

I think odds are you aren't going to want to pay a premium if you're
just using spinning disks.  If you actually wanted solid state storage
then I'd also be avoiding SATA and trying to use NVMe, though doing
that at scale requires a lot of IO, and that will cost you quite a
bit.  There is a reason your motherboard has mostly 1x slots - PCIe
lanes are expensive to support.  On most consumer motherboards they're
only handled by the CPU, and consumer CPUs are very limited in how
many they offer.  Higher end motherboards may have a switch and offer
more lanes, but they'll still bottleneck if they're all maxed out
getting into the CPU.  If you buy a server CPU for several thousand
dollars one of the main features they offer is a LOT more PCIe lanes,
so you can load up on NVMes and have them running at v4-5.  (Typical
NVMe uses a 4x M.2 slot, and of course you can have 16x cards offering
multiples of those.)
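As a back-of-the-envelope sketch of why lane counts matter here (the lane budgets below are illustrative assumptions, not any specific CPU's spec - check your own CPU's documentation):

```python
# Hypothetical CPU lane budgets (illustrative numbers only).
consumer_cpu_lanes = 20   # typical consumer part, after chipset uplink
server_cpu_lanes = 128    # typical high-end server part

LANES_PER_NVME = 4  # a standard NVMe M.2 drive uses a 4x link

for name, lanes in [("consumer", consumer_cpu_lanes),
                    ("server", server_cpu_lanes)]:
    drives = lanes // LANES_PER_NVME
    print(f"{name}: up to {drives} NVMe drives at full speed")
```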

The whole setup is pretty analogous to networking.  If you have a
computer with 4 network ports you can bond them together and run them
to a switch that supports this with 4 cables, and get 4x the
bandwidth.  However, you can also get a single connection to run at
higher speeds (1Gb/s, 2.5Gb/s, 10Gb/s, etc.), and you can do both.  PCIe
lanes are just like bonded network cables - they are just pairs of
signal wires that use differential signaling, just like twisted pairs
in an ethernet cable.  Longer slots just add more of them.  Everything
is packet switched, so if there are more lanes it just spreads the
packets across them.  Higher versions mean higher speeds in each lane.

-- 
Rich
