Hi Martin, I’d guess that the SRX240 has an external PCI-E to PCI bridge chip providing the PCI bus to the mini-PIM slots.
In theory you would need a pair of 32-bit 66MHz buses to have enough bandwidth to drive all four mini-PIM slots at 1Gbps without oversubscription (back-of-the-envelope arithmetic sketched below). I believe the CPU in the SRX240 is quad core, which makes it a CN5230, not that it makes much difference.

Using some boot output stolen from a random post on the juniper.net forums:

> pcib1: <Cavium on-chip PCIe HOST bridge> on obio0
> Disabling Octeon big bar support
> PCIe: Waiting for port 0 to finish reset
> PCIe: Port 0 link active, 2 lanes
> PCIe: Waiting for port 1 to finish reset
> PCIe: Port 1 link active, 1 lanes
> pcib1: Initialized controller
> pci0: <PCI bus> on pcib1
> pcib2: <PCI-PCI bridge> irq 0 at device 0.0 on pci0
> pci1: <PCI bus> on pcib2
> pci1: <serial bus, USB> at device 2.0 (no driver attached)
> pci1: <network> at device 7.0 (no driver attached)
> pcib0: <Cavium on-chip PCIe HOST bridge> on obio0
> pci2: <PCI bus> on pcib0
> pci2: <processor> at device 0.0 (no driver attached)

It looks like they’ve got the OCTEON’s PCI-E controller set up in 2x 2-lane mode, although one of the ports is only connected with a single lane. So pcib0 and pcib1 are the two PCI-E connections coming from the OCTEON processor, and pcib2 is a PCI-E to PCI bridge chip hooked up to pcib1. If these are running PCI-E 1.x, then this gives you 4Gbps on pcib0 and 2Gbps on pcib1. Double that for PCI-E 2.x.

I guess they’ve got 4x of the onboard ports hooked directly up to the network controller on the CPU and the other 12x on a network controller connected to pcib0. The other option is the CPU network controller running at 10Gbps and some/all of the ports hooked up to a switch chip.
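For what it’s worth, here’s the bus arithmetic behind the "pair of buses" claim as a quick Python sketch. The 2112Mbps figure is raw bus bandwidth; real-world PCI throughput will be somewhat lower once arbitration and protocol overhead are taken into account:

    # Raw bandwidth of one 32-bit, 66MHz PCI bus, in Mbps.
    pci_bus_mbps = 32 * 66            # 2112 Mbps, i.e. just over 2 Gbps

    # Four mini-PIM slots at 1 Gbps each.
    minipim_load_mbps = 4 * 1000

    print(pci_bus_mbps >= minipim_load_mbps)       # False: one bus oversubscribes
    print(2 * pci_bus_mbps >= minipim_load_mbps)   # True: a pair of buses fits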
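To make my reading of that boot output explicit, here’s the hierarchy written out as a Python dict. This is just my interpretation of the log, not anything Juniper has documented:

    # PCI hierarchy as deduced from the boot output above (guesswork, unverified).
    topology = {
        "OCTEON CN5230": {
            "pcib0 (on-chip PCI-E host bridge)": {
                "pci2": ["<processor> device (no driver attached)"],
            },
            "pcib1 (on-chip PCI-E host bridge)": {
                "pci0": {
                    "pcib2 (PCI-E to PCI bridge)": {
                        "pci1": ["USB controller", "network device"],
                    },
                },
            },
        },
    }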
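And the PCI-E side of the sums, assuming PCI-E 1.x signalling at 2.5GT/s with 8b/10b line encoding (again just a sketch, not measured figures):

    # PCI-E 1.x: 2.5 GT/s per lane, 8b/10b line coding.
    gbps_per_lane = 2.5 * (8 / 10)     # 2.0 Gbps usable per lane

    print(2 * gbps_per_lane)           # 2-lane port: 4.0 Gbps
    print(1 * gbps_per_lane)           # 1-lane port: 2.0 Gbps
    # PCI-E 2.x doubles the signalling rate to 5 GT/s, so double both figures.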
Edward Dore
Freethought Internet

On 6 May 2014, at 12:22, Martin T <[email protected]> wrote:

> Hi,
>
> on the other hand, for example the SRX240 seems to use either a Cavium
> CN5220 or CN5230 SoC ("OCTEON 52XX CPU" according to dmesg), which does
> not seem to support PCI (http://www.cavium.com/OCTEON-Plus_CN52XX.html),
> while according to the mini-PIM compatibility matrix, the SRX240 is able
> to use the same mini-PIMs as the SRX210. However, I'm afraid that the PCI
> interface is just missing from the CN52XX block diagram, as according to
> the SRX240 kernel message buffer, it seems to have a PCI controller:
>
> pcib1: Initialized controller
> pci0: <PCI bus> on pcib1
> pcib2: <PCI-PCI bridge> irq 0 at device 0.0 on pci0
> pci1: <PCI bus> on pcib2
> pci1: <serial bus, USB> at device 2.0 (no driver attached)
> pci1: <network> at device 7.0 (no driver attached)
> pcib0: <Cavium on-chip PCIe HOST bridge> on obio0
> pci2: <PCI bus> on pcib0
> pci2: <processor> at device 0.0 (no driver attached)
>
> Long story short, I would also think that the mini-PIMs use a PCI interface.
>
>
> regards,
> Martin
>
> On 5/6/14, Edward Dore <[email protected]> wrote:
>> The original PIMs on the J-series were PCI and the subsequent ePIMs and
>> uPIMs were PCI-E.
>>
>> I believe the XPIMs on the SRX series are also PCI-E.
>>
>> I’m not sure about the mini-PIM, but I would guess at PCI, as the Cavium
>> OCTEON processor used in the SRX210 is a CN5020 and the only real option
>> it has for expansion is 32-bit 66MHz PCI. That would give you just over
>> 2Gbps of bandwidth on the PCI bus, shared between the single mini-PIM slot
>> and anything else Juniper have hooked up to the bus, with the biggest
>> mini-PIM that you can get being 1x 1Gbps.
>>
>> Edward Dore
>> Freethought Internet
>>
>> On 5 May 2014, at 23:28, Martin T <[email protected]> wrote:
>>
>>> Hi,
>>>
>>> has anyone investigated what interface is used in the case of mini-PIM
>>> modules? Physically it looks similar to a 68-pin SCSI-3 connector:
>>> http://i.imgur.com/UxhCS6g.jpg Do they use some proprietary protocol,
>>> or is it indeed SCSI? It would probably appear in the kernel message
>>> buffer (seen with dmesg/"show system boot-messages") when the SRX is
>>> booted up with a mini-PIM inserted, but unfortunately I don't have any
>>> modules around.
>>>
>>>
>>> regards,
>>> Martin

