Re: how to aggregate a single TCP connection, is possible?
On 2013/10/01 23:02, Abel Abraham Camarillo Ojeda wrote:
> On Thu, Aug 22, 2013 at 7:05 AM, Stuart Henderson s...@spacehopper.org wrote:
> > On 2013-08-22, Abel Abraham Camarillo Ojeda acam...@verlet.org wrote:
> > > Is there a way to duplicate the throughput of a single TCP connection using two servers having two gigabit NICs? I have tried using LACP but I cannot get more than 900MB of throughput...
> >
> > LACP uses a hash over IP addresses/vlan tags/flowlabel to avoid problems with out-of-order packet delivery. (Similar for equal-cost multipath.) Have you tried a roundrobin trunk yet?
>
> Stuart: Trying between two obsd hosts only (no switch) I was able to get more than 1000Mb testing with tcpbench, but only with large values for the -n option (-n 16)... Is there a way to aggregate (reliably) a single TCP connection using an LACP-capable switch between two OpenBSD hosts? I'm using this: http://www.amazon.com/Cisco-SG200-26P-Ethernet-Mini-GBIC-SLM2024PT/dp/B004GHMU5Q Thanks

I'm not aware of any LACP implementation on switches which does per-packet balancing. Even if you hack your kernel so that LACP trunks use round-robin to choose the output port (rather than hashes of headers), that only affects the link *to* the switch. Once the switch has received a packet, it will use its own algorithm to choose the output port: typically a hash of ethernet headers, i.e. src/dest MAC and vlan tags. Expensive switches allow more options, but usually even then no more than src/dest IP and port numbers.

Even if you can find some way around this, some packets will arrive out of order, which will cause individual TCP flows to slow down, so even then it's pretty unlikely to help actual performance.

It sounds like what you really need here is 10GE kit. Motherboard/NIC ports aren't too bad now, but if you want more than 2-4 10GE ports on a switch (to mention some of the cheaper options: xgs1910-24, gsm7228s, sg500x-24) then the switches start to get rather expensive.
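The hash-based balancing described above can be illustrated with a toy sketch. The hash function and header fields here are made up for illustration; real switches use their own vendor-specific hashes, but the consequence is the same: every packet of one flow lands on one link, so a single TCP connection can never exceed one link's capacity, while many parallel flows (cf. tcpbench -n 16) spread out.

```python
import hashlib

def pick_link(src_ip, dst_ip, src_port, dst_port, n_links):
    """Toy flow hash: every packet of a given flow maps to the same link."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % n_links

# A single TCP flow always hashes to the same output port of the trunk.
flow_link = pick_link("10.0.0.1", "10.0.0.2", 45123, 5001, 2)

# Sixteen parallel flows with different source ports can use both links.
links = {pick_link("10.0.0.1", "10.0.0.2", 45000 + i, 5001, 2) for i in range(16)}
print(flow_link, sorted(links))
```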
Re: how to aggregate a single TCP connection, is possible?
Multipath TCP is the only way I know of to truly aggregate a single connection across any and all links. iOS 7 supports Multipath TCP, Citrix supports it and Amazon EC2 uses it too :)

http://mptcp.info.ucl.ac.be/
http://perso.uclouvain.be/olivier.bonaventure/blog/html/2013/09/18/mptcp.html

In their tests the devs managed to get a single TCP connection to run at up to 53Gbit across six 10Gbit links. The patch is very simple to apply.

Andy.

On Wed 02 Oct 2013 09:58:02 BST, Stuart Henderson wrote:
> [...]
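Conceptually, Multipath TCP stripes one byte stream across several subflows and reassembles it by data-sequence number at the far end. A toy illustration of that idea (not the real protocol; the segment size and round-robin scheduling are simplified, and real MPTCP does this in the kernel with TCP options):

```python
from itertools import cycle

def stripe(data, n_subflows, seg=4):
    """Split a byte stream into (seq, chunk) segments and assign them
    round-robin across subflows; returns one send queue per subflow."""
    queues = [[] for _ in range(n_subflows)]
    rr = cycle(range(n_subflows))
    for seq in range(0, len(data), seg):
        queues[next(rr)].append((seq, data[seq:seq + seg]))
    return queues

def reassemble(queues):
    """Receiver merges segments from all subflows by sequence number."""
    segs = sorted(s for q in queues for s in q)
    return b"".join(chunk for _, chunk in segs)

data = b"the quick brown fox jumps over the lazy dog"
assert reassemble(stripe(data, 3)) == data
```

The sequence numbers are what let the receiver tolerate subflows of different speeds, at the cost of the reordering/buffering overhead mentioned elsewhere in the thread.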
Re: how to aggregate a single TCP connection, is possible?
On Wed, Oct 2, 2013 at 3:58 AM, Stuart Henderson s...@spacehopper.org wrote:
> [...]
> It sounds like what you really need here is 10GE kit.

Thanks, I don't really need this, it was just a kind of research...
Re: how to aggregate a single TCP connection, is possible?
Andy: This seems interesting, will check later. Thanks.

On Wed, Oct 2, 2013 at 4:07 AM, Andy a...@brandwatch.com wrote:
> [...]
Re: how to aggregate a single TCP connection, is possible?
Wow! Impressive to what lengths people will go just so that they do not need to switch to SCTP.

/wbr Ariel Burbaickij

On Wed, Oct 2, 2013 at 11:07 AM, Andy a...@brandwatch.com wrote:
> [...]
Re: how to aggregate a single TCP connection, is possible?
Slightly OT: LACP aside (it is only there to establish the trunk and has nothing to do with the actual packet forwarding on the trunk), Brocade has a technology that should be able to load-balance a single TCP session across all the trunk links:
http://community.brocade.com/community/blogs/data_center/blog/2011/04/06/brocade-isl-trunking-has-almost-perfect-load-balancing

On Wed, Oct 2, 2013 at 1:58 AM, Stuart Henderson s...@spacehopper.org wrote:
> [...]
Re: how to aggregate a single TCP connection, is possible?
On Thu, Aug 22, 2013 at 7:05 AM, Stuart Henderson s...@spacehopper.org wrote:
> LACP uses a hash over IP addresses/vlan tags/flowlabel to avoid problems with out-of-order packet delivery. (Similar for equal-cost multipath.) Have you tried a roundrobin trunk yet?

Stuart: Trying between two obsd hosts only (no switch) I was able to get more than 1000Mb testing with tcpbench, but only with large values for the -n option (-n 16)...

Is there a way to aggregate (reliably) a single TCP connection using an LACP-capable switch between two OpenBSD hosts? I'm using this: http://www.amazon.com/Cisco-SG200-26P-Ethernet-Mini-GBIC-SLM2024PT/dp/B004GHMU5Q

Thanks
Re: how to aggregate a single TCP connection, is possible?
This is a question with many solutions, each with its own benefits and disadvantages, and it is a subject with some history.

If you are connecting two servers directly together without a switch in between, then round-robin is for you. However, if you need switches in the mix there are many things to consider. The most limiting factor is that you have to connect both cables to the same single switch to use trunks, or purchase multiple very expensive switches which support sharing MAC address tables between them if you want to connect a cable to each switch for improved redundancy as well as aggregated performance. If this is to connect through a core/distribution tiered setup then again you are going to need some decent kit.

The cheap, and I personally think awesome, new solution which is a very hot topic at the moment is 'Multipath TCP'. This is a technology where no trunks are needed and 'dumb' cheap switches can be used; the paths don't even need to be on the same networks. Each interface is configured like a standard single interface with its own IP address. mptcp builds individual subflows over each of the interfaces and then aggregates the traffic in the kernel stack. The technology is designed to allow aggregation of 'any' type of IP interface, including 3G, WiFi and LAN for example. Optional extra TCP headers are used to achieve it all and keep processing/reordering overhead to a minimum.

mptcp is currently still in late beta but is under heavy development and already being used across Amazon's data centres. It is not far from being included in the Linux kernel as standard (you can add it manually very easily), and hopefully OpenBSD will also include the algorithms at some point. It was announced and demonstrated at RIPE's last conference, has been presented at many other prestigious forums, and is being contributed to by many big providers as well as Amazon. We are planning to roll mptcp out across all our data centres in 2014.

Anyway, hope this gives you some useful ideas.

Andrew Lemin

On Fri, 23 Aug 2013 18:39:29 -0500, Abel Abraham Camarillo Ojeda acam...@verlet.org wrote:
> [...]
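For reference, the out-of-tree patch discussed here was later superseded: mainline Linux (5.6 and later, well after this thread) gained in-kernel Multipath TCP, selected per-socket with the IPPROTO_MPTCP protocol constant. A minimal sketch, with a fallback to plain TCP for kernels (or non-Linux systems) that lack it:

```python
import socket

# IPPROTO_MPTCP (value 262) selects Multipath TCP on mainline Linux.
# Older Pythons don't expose the constant, so fall back to the raw value.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def mptcp_socket():
    """Return an MPTCP stream socket if the kernel supports it,
    otherwise a plain TCP socket."""
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    except OSError:
        # Kernel without MPTCP support (or non-Linux OS): plain TCP.
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

s = mptcp_socket()
s.close()
```

From there the socket is used exactly like a normal TCP socket; the kernel handles subflow creation and reassembly, which is precisely the "aggregation in the kernel stack" described above.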
Re: how to aggregate a single TCP connection, is possible?
Not yet, will test.

On Thu, Aug 22, 2013 at 7:05 AM, Stuart Henderson s...@spacehopper.org wrote:
> [...]
Re: how to aggregate a single TCP connection, is possible?
On 2013-08-22, Abel Abraham Camarillo Ojeda acam...@verlet.org wrote:
> Is there a way to duplicate the throughput of a single TCP connection using two servers having two gigabit NICs? I have tried using LACP but I cannot get more than 900MB of throughput...

LACP uses a hash over IP addresses/vlan tags/flowlabel to avoid problems with out-of-order packet delivery. (Similar for equal-cost multipath.) Have you tried a roundrobin trunk yet?
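A roundrobin trunk(4) on OpenBSD might look something like the following hostname.if(5) fragment. The interface names and address are illustrative (bnx0 matches the BCM5708 NIC in the dmesg below; a second port is assumed):

```
# /etc/hostname.trunk0 -- round-robin trunk over two gigabit ports
trunkproto roundrobin trunkport bnx0 trunkport bnx1
inet 10.0.0.1 255.255.255.0
up
```

The member interfaces also need their own /etc/hostname.bnx0 and /etc/hostname.bnx1 files containing just `up`. Note this only balances per-packet on the host's own transmit side, as explained above.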
how to aggregate a single TCP connection, is possible?
Is there a way to duplicate the throughput of a single TCP connection using two servers having two gigabit NICs? I have tried using LACP but I cannot get more than 900MB of throughput...

dmesg (both servers are equal):

OpenBSD 5.2 (GENERIC.MP) #368: Wed Aug 1 10:04:49 MDT 2012
    dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 2141519872 (2042MB)
avail mem = 2062200832 (1966MB)
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.4 @ 0x7fb9c000 (64 entries)
bios0: vendor Dell Inc. version 2.0.1 date 10/27/2007
bios0: Dell Inc. PowerEdge 2950
acpi0 at bios0: rev 2
acpi0: sleep states S0 S4 S5
acpi0: tables DSDT FACP APIC SPCR HPET MCFG WDAT SLIC ERST HEST BERT EINJ TCPA
acpi0: wakeup devices PCI0(S5)
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Xeon(R) CPU E5310 @ 1.60GHz, 1596.16 MHz
cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,SSE3,MWAIT,DS-CPL,VMX,TM2,SSSE3,CX16,xTPR,PDCM,DCA,NXE,LONG,LAHF
cpu0: 4MB 64b/line 16-way L2 cache
cpu0: apic clock running at 265MHz
cpu1 at mainbus0: apid 1 (application processor)
cpu1: Intel(R) Xeon(R) CPU E5310 @ 1.60GHz, 1595.93 MHz
cpu1: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,SSE3,MWAIT,DS-CPL,VMX,TM2,SSSE3,CX16,xTPR,PDCM,DCA,NXE,LONG,LAHF
cpu1: 4MB 64b/line 16-way L2 cache
cpu2 at mainbus0: apid 2 (application processor)
cpu2: Intel(R) Xeon(R) CPU E5310 @ 1.60GHz, 1595.93 MHz
cpu2: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,SSE3,MWAIT,DS-CPL,VMX,TM2,SSSE3,CX16,xTPR,PDCM,DCA,NXE,LONG,LAHF
cpu2: 4MB 64b/line 16-way L2 cache
cpu3 at mainbus0: apid 3 (application processor)
cpu3: Intel(R) Xeon(R) CPU E5310 @ 1.60GHz, 1595.93 MHz
cpu3: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,SSE3,MWAIT,DS-CPL,VMX,TM2,SSSE3,CX16,xTPR,PDCM,DCA,NXE,LONG,LAHF
cpu3: 4MB 64b/line 16-way L2 cache
ioapic0 at mainbus0: apid 4 pa 0xfec0, version 20, 24 pins
ioapic0: misconfigured as apic 0, remapped to apid 4
ioapic1 at mainbus0: apid 5 pa 0xfec81000, version 20, 24 pins
ioapic1: misconfigured as apic 0, remapped to apid 5
acpihpet0 at acpi0: 14318179 Hz
acpimcfg0 at acpi0 addr 0xe000, bus 0-255
acpiprt0 at acpi0: bus 0 (PCI0)
acpiprt1 at acpi0: bus 6 (PEX2)
acpiprt2 at acpi0: bus 7 (UPST)
acpiprt3 at acpi0: bus 8 (DWN1)
acpiprt4 at acpi0: bus 10 (DWN2)
acpiprt5 at acpi0: bus 1 (PEX3)
acpiprt6 at acpi0: bus 2 (PE2P)
acpiprt7 at acpi0: bus 12 (PEX4)
acpiprt8 at acpi0: bus 14 (PEX6)
acpiprt9 at acpi0: bus 4 (SBEX)
acpiprt10 at acpi0: bus 16 (COMP)
acpicpu0 at acpi0: C3
acpicpu1 at acpi0: C3
acpicpu2 at acpi0: C3
acpicpu3 at acpi0: C3
ipmi at mainbus0 not configured
pci0 at mainbus0 bus 0
pchb0 at pci0 dev 0 function 0 Intel 5000X Host rev 0x12
ppb0 at pci0 dev 2 function 0 Intel 5000 PCIE rev 0x12
pci1 at ppb0 bus 6
ppb1 at pci1 dev 0 function 0 Intel 6321ESB PCIE rev 0x01
pci2 at ppb1 bus 7
ppb2 at pci2 dev 0 function 0 Intel 6321ESB PCIE rev 0x01
pci3 at ppb2 bus 8
ppb3 at pci3 dev 0 function 0 ServerWorks PCIE-PCIX rev 0xc3
pci4 at ppb3 bus 9
bnx0 at pci4 dev 0 function 0 Broadcom BCM5708 rev 0x12: apic 4 int 16
ppb4 at pci2 dev 1 function 0 Intel 6321ESB PCIE rev 0x01: msi
pci5 at ppb4 bus 10
ppb5 at pci1 dev 0 function 3 Intel 6321ESB PCIE-PCIX rev 0x01
pci6 at ppb5 bus 11
ppb6 at pci0 dev 3 function 0 Intel 5000 PCIE rev 0x12
pci7 at ppb6 bus 1
ppb7 at pci7 dev 0 function 0 Intel IOP333 PCIE-PCIX rev 0x00
pci8 at ppb7 bus 2
mfi0 at pci8 dev 14 function 0 Dell PERC 5 rev 0x00: apic 5 int 14, 0x1f031028
mfi0: logical drives 1, version 5.2.1-0067, 256MB RAM
scsibus0 at mfi0: 1 targets
sd0 at scsibus0 targ 0 lun 0: DELL, PERC 5/i, 1.03 SCSI3 0/direct fixed naa.6001c230daeb98001352781c17f970ff
sd0: 278784MB, 512 bytes/sector, 570949632 sectors
ppb8 at pci7 dev 0 function 2 Intel IOP333 PCIE-PCIX rev 0x00
pci9 at ppb8 bus 3
ppb9 at pci0 dev 4 function 0 Intel 5000 PCIE x8 rev 0x12: msi
pci10 at ppb9 bus 12
ppb10 at pci0 dev 5 function 0 Intel 5000 PCIE rev 0x12
pci11 at ppb10 bus 13
ppb11 at pci0 dev 6 function 0 Intel 5000 PCIE x8 rev 0x12: msi
pci12 at ppb11 bus 14
ppb12 at pci0 dev 7 function 0 Intel 5000 PCIE rev 0x12
pci13 at ppb12 bus 15
Intel I/OAT rev 0x12 at pci0 dev 8 function 0 not configured
pchb1 at pci0 dev 16 function 0 Intel 5000 Error Reporting rev 0x12
pchb2 at pci0 dev 16 function 1 Intel 5000 Error Reporting rev 0x12
pchb3 at pci0 dev 16 function 2 Intel 5000 Error Reporting rev 0x12
pchb4 at pci0 dev 17 function 0 Intel 5000 Reserved rev 0x12
pchb5 at pci0 dev 19 function 0 Intel 5000 Reserved rev 0x12
pchb6 at pci0 dev 21 function 0 Intel 5000 FBD rev 0x12
pchb7 at pci0 dev 22 function 0 Intel 5000 FBD rev 0x12
ppb13 at pci0 dev 28