Re: RapidIO - general questions
On Mon, 2009-06-29 at 10:44 -0500, ext david.hag...@gmail.com wrote:
> Do you know (and if you know, can you comment) whether IDT is planning to offer RIO (and, more importantly to me, sRIO) chipsets that can be used on architectures other than the various PPC chips with embedded sRIO controllers?

I am using only switches, and they are not tied to any architecture; you can use them independently of any MCU. I do not know whether they are planning to provide anything more.

Jan

___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev
Re: RapidIO - general questions
Hi,

As I already informed you, we'd like to contribute several features to the Linux RapidIO subsystem. Here is some general information about the design and implementation of those features.

Naming:
- domain: several boards connected together via RIO, with only one host
- host: an MCU with the host bit set; only one per domain
- domain master: a host which also has the domain master bit set (a boot parameter); only one in the overall system

* Domain configuration
- The domain master traverses the whole RIO network to find all the hosts, assigns domain IDs to them, and finally programs the domain routing tables into the switches. Since we are cooperating with IDT as a switch supplier, IDT will publish their private API for setting such domain routing tables under the GPL.
- There is an open issue with locking the switches during the enumeration of domains, and later during the enumeration of endpoints by hosts. In a running system there will, in certain situations, be two MCUs trying to configure the same switch, especially during hot-plug of a domain. So this is not clear yet.

* Static ID configuration based on port numbers
Three sysfs files provide the necessary information:
- host_id: the same as the current riohdid boot parameter
- switch_ids: provides source IDs for the switches so that they can report problems via port-write packets
- endpoint_ids: provides the list of all endpoints in one domain
I read somewhere that passing structures via sysfs is not acceptable, but how else can richer information be passed to the kernel? I expect to pass these structures via binary sysfs files, analyze the input rather than just casting it, and then use it. Is this OK?

* User-triggered enumeration/discovery
This is necessary because of the static IDs: they have to be known before enumeration can start, and this is the most general way of providing them. Once the static IDs have been provided through the sysfs files, enumeration is triggered via another sysfs file.
I am talking about enumeration only, because the endpoints that wait for enumeration will perform discovery as usual afterwards. This needs some changes: the whole enumeration process, and discovery as well, has to be moved into a kthread. The kernel can then boot up to user space, and enumeration can be triggered from there. Is there any standard way in the kernel to postpone the configuration of a bus and then trigger it from user space?

* User-space library for configuring switches
IDT is going to provide a GPL user-space library that covers RIO switch configuration based on the RapidIO 2.0 spec.

* Error handling (port-write packets: configuration and handling)
This should be as general as possible, and IDT is designing this part. A port-write driver (which still has to be written) will receive a packet, analyze it, and perform an action. Two scenarios can happen:
1. The port-write information is part of the RapidIO 2.0 spec, and the packet is processed by the kernel directly.
2. The port-write information is vendor specific, and it is passed to user space, where it is processed; the proper action is then taken via the current sysfs configuration files.

* Hot-plug (hot-insert/hot-remove) of devices
This is a special case of error handling. On any error covered by the RapidIO 2.0 spec (bad CRC, bad character, ...), the ports of the switch that generated the port-write are scanned for PORT_OK or PORT_UNINITIALIZED status, so we are able to catch the hot-plug or hot-extract of any device. Hot-plug should be functional during enumeration as well: the enumeration process traverses the system port by port, and if an endpoint is powered on after the enumeration process has tested its port, but before the end of the standard enumeration process, that device can be missed.

* Aux driver
A basic driver for sending messages over different mboxes; right now we have implemented this as a character device. If anyone is interested, let me know and I will send it to you.
In the end we'd like to support the already existing scenario of dynamic devID assignment, as well as static IDs with user-space-triggered enumeration. The question is: if there is a static table of IDs, do we still need a discovery process on every endpoint? Propagating every hot-plug/hot-extract event to every endpoint, to reflect it in the local sysfs structure, would be quite hard.

Any comments on these topics are highly appreciated, as is forwarding this to anyone who might be interested.

Jan

On Wed, 2009-05-20 at 09:00 +0200, ext Jan Neskudla wrote:
> On Fri, 2009-05-15 at 15:56 +0800, ext Li Yang wrote:
>> On Fri, May 15, 2009 at 3:33 PM, Jan Neskudla <jan.neskudla@nsn.com> wrote:
>>> On Wed, 2009-05-13 at 18:57 +0800, ext Li Yang wrote:
>>>> cc'ed LKML
>>>> On Tue, May 12, 2009 at 5:17 PM, Jan Neskudla <jan.neskudla@nsn.com> wrote:
>>>>> Hello, we'd like to use RapidIO as a general communication bus on our new product, and so I have some questions
Re: RapidIO - general questions
On Fri, 2009-05-15 at 15:56 +0800, ext Li Yang wrote:
> On Fri, May 15, 2009 at 3:33 PM, Jan Neskudla <jan.neskudla@nsn.com> wrote:
>> On Wed, 2009-05-13 at 18:57 +0800, ext Li Yang wrote:
>>> cc'ed LKML
>>> On Tue, May 12, 2009 at 5:17 PM, Jan Neskudla <jan.neskudla@nsn.com> wrote:
>>>> Hello, we'd like to use RapidIO as a general communication bus on our new product, and so I have some questions about the general design of the Linux RIO subsystem. I did not find any better mailing list for RapidIO discussion.
>>>> [1] - We'd like to implement the following features:
>>>> * Hot-plug (hot-insert/hot-remove) of devices
>>>> * Error handling (port-write packets: configuration and handling)
>>>> * Static ID configuration based on port numbers
>>>> * Aux driver: a basic driver for sending messages over different mboxes and handling ranges of doorbells
>>>> Is anyone working on any improvement, or does anyone know the development plans for the RapidIO subsystem?
>>> AFAIK, there is no one currently working on these features for Linux. It will be good if you can add these useful features.
>> Yes, it looks like that. We are currently analyzing the RapidIO subsystem and how we can add these features.
>>>> [2] - I have the following problem with the current implementation of loading drivers. The driver probe call is based on a comparison of VendorID (VID) and DeviceID (DID) only. Thus if I have 3 devices with the same DID and VID connected to the same network (bus), the driver is loaded 3 times instead of only once for the actual device master port.
>>> This should be the correct way, as you actually have 3 instances of the device.
>>>> The rionet driver solved this by having its initialization function called just once, and it expects that this is the master port.
>>> Rionet is kind of special. It does not work like a simple device driver, but more like a customized protocol stack supporting multiple Ethernet-over-RIO links.
>>>> Is this the correct behavior? It looks to me like RapidIO is handled like a local bus (like PCI).
>>> This is the correct behavior. All of them use the Linux device/driver infrastructure, but rionet is a special device.
>> But I do not have 3 devices on one piece of silicon. I am talking about 3 devices (3 x EP8548 boards + an IDT switch) connected over RapidIO through the switch, and in this case I'd like to have only one driver sitting on top of the Linux RapidIO subsystem. I don't see the advantage of loading
> You have one driver, but it probes 3 times, once for each device using the driver.
>> a driver locally for a remote device. Am I missing something?
> If you want to interact with the remote device, you need the driver to do the work locally.

We are going to use RapidIO as a bigger network of active devices; each will have its own driver (sitting on itself), and all the settings will be done over maintenance packets. Maybe it will be solved by the fact that we are going to use static IDs, so there will be no discovery as there is now; thus there will be only one device visible in the internal structures of the subsystem, and only one driver will be loaded.

>> And one more thing: I am getting a lot of bus-error oopses. Whenever there is a problem with communication over RIO, I get such a kernel oops. I had to add some delays into some functions to be able to finish the enumeration + discovery process. Do you have any experience with a bigger RIO network running under Linux?
> It looks like a known issue for switched RIO networks, but I don't have the right equipment to reproduce the problem here. Could you do some basic debugging and share your findings? Thanks.
> - Leo

I tried to acquire some information about the problem. I found that the oops always occurs when there is no response from the device, or the response is too slow. I always get the error during the call to rio_get_host_deviceid_lock when it tries to access a remote device or switch. This function is the first caller of rio_mport_read_config_32, so it is also the first attempt at remote access to any device. It is a timing issue, and after placing a printk into rio_get_host_deviceid_lock the oopsing almost disappeared.

Jan
Re: RapidIO - general questions
On Wed, 2009-05-13 at 18:57 +0800, ext Li Yang wrote:
> cc'ed LKML
> On Tue, May 12, 2009 at 5:17 PM, Jan Neskudla <jan.neskudla@nsn.com> wrote:
>> Hello, we'd like to use RapidIO as a general communication bus on our new product, and so I have some questions about the general design of the Linux RIO subsystem. I did not find any better mailing list for RapidIO discussion.
>> [1] - We'd like to implement the following features:
>> * Hot-plug (hot-insert/hot-remove) of devices
>> * Error handling (port-write packets: configuration and handling)
>> * Static ID configuration based on port numbers
>> * Aux driver: a basic driver for sending messages over different mboxes and handling ranges of doorbells
>> Is anyone working on any improvement, or does anyone know the development plans for the RapidIO subsystem?
> AFAIK, there is no one currently working on these features for Linux. It will be good if you can add these useful features.

Yes, it looks like that. We are currently analyzing the RapidIO subsystem and how we can add these features.

>> [2] - I have the following problem with the current implementation of loading drivers. The driver probe call is based on a comparison of VendorID (VID) and DeviceID (DID) only. Thus if I have 3 devices with the same DID and VID connected to the same network (bus), the driver is loaded 3 times instead of only once for the actual device master port.
> This should be the correct way, as you actually have 3 instances of the device.
>> The rionet driver solved this by having its initialization function called just once, and it expects that this is the master port.
> Rionet is kind of special. It does not work like a simple device driver, but more like a customized protocol stack supporting multiple Ethernet-over-RIO links.
>> Is this the correct behavior? It looks to me like RapidIO is handled like a local bus (like PCI).
> This is the correct behavior. All of them use the Linux device/driver infrastructure, but rionet is a special device.

But I do not have 3 devices on one piece of silicon. I am talking about 3 devices (3 x EP8548 boards + an IDT switch) connected over RapidIO through the switch, and in this case I'd like to have only one driver sitting on top of the Linux RapidIO subsystem. I don't see the advantage of loading a driver locally for a remote device. Am I missing something?

And one more thing: I am getting a lot of bus-error oopses. Whenever there is a problem with communication over RIO, I get such a kernel oops. I had to add some delays into some functions to be able to finish the enumeration + discovery process. Do you have any experience with a bigger RIO network running under Linux?

Jan
RapidIO - general questions
Hello, we'd like to use RapidIO as a general communication bus on our new product, and so I have some questions about the general design of the Linux RIO subsystem. I did not find any better mailing list for RapidIO discussion.

[1] - We'd like to implement the following features:
* Hot-plug (hot-insert/hot-remove) of devices
* Error handling (port-write packets: configuration and handling)
* Static ID configuration based on port numbers
* Aux driver: a basic driver for sending messages over different mboxes and handling ranges of doorbells

Is anyone working on any improvement, or does anyone know the development plans for the RapidIO subsystem?

[2] - I have the following problem with the current implementation of loading drivers. The driver probe call is based on a comparison of VendorID (VID) and DeviceID (DID) only. Thus if I have 3 devices with the same DID and VID connected to the same network (bus), the driver is loaded 3 times instead of only once for the actual device master port. The rionet driver solved this by having its initialization function called just once, and it expects that this is the master port.

Is this the correct behavior? It looks to me like RapidIO is handled like a local bus (like PCI).

Jan
Re: [PATCH 0/5] rapidio: adding memory mapping IO support and misc fixes
On Thu, 2009-05-07 at 10:21 -0500, ext Kumar Gala wrote:
> On May 7, 2009, at 9:10 AM, Jan Neskudla wrote:
>> And one more thing: when I enable the usage of DMA, rionet does not compile either, but in this case I do not have a fix. I tested this on kernel 2.6.29.1 with an EP8548 as the target board.
> What exactly do you mean by that? What CONFIG options cause the compile failure? Can you post the compiler error?
> - k

The problem is the missing structure dma_client in the kernel tree sources. It looks to me like the DMA model changed after 2.6.28. Here are the details. I used a pristine 2.6.29 kernel + the 2.6.29.1 patch, then Leo's patches in this order:

rio-warn_unused_result-warnings-fix.patch
rionet-add-memory-access-to-simulated-Ethernet-over-rapidio.patch
powerpc-add-memory-map-support-to-Freescale-RapioIO-block.patch
powerpc-fsl_rio-use-LAW-address-from-device-tree.patch
rapidio-add-common-mapping-APIs-for-RapidIO-memory-access.patch

The important CONFIG options are:

PPC_86xx=y
HPC8641_HPCN=y
RAPIDIO=y
DMADEVICES=y
FSL_DMA=y   !!
NETDEVICES=y
RIONET=y/m
RIONET_MEMMAP=y
RIONET_DMA=y   !!
And the error during compilation:

CC drivers/net/rionet.o
drivers/net/rionet.c:110: error: field `rio_dma_client' has incomplete type
drivers/net/rionet.c: In function `rio_send_mem':
drivers/net/rionet.c:239: error: parse error before rnet
drivers/net/rionet.c: At top level:
drivers/net/rionet.c:514: warning: enum dma_state declared inside parameter list
drivers/net/rionet.c:514: warning: its scope is only this definition or declaration, which is probably not what you want
drivers/net/rionet.c:515: error: parameter `state' has incomplete type
drivers/net/rionet.c:515: error: return type is an incomplete type
drivers/net/rionet.c: In function `rionet_dma_event':
drivers/net/rionet.c:516: warning: type defaults to `int' in declaration of `__mptr'
drivers/net/rionet.c:516: warning: initialization from incompatible pointer type
drivers/net/rionet.c:518: error: variable `ack' has initializer but incomplete type
drivers/net/rionet.c:518: error: `DMA_DUP' undeclared (first use in this function)
drivers/net/rionet.c:518: error: (Each undeclared identifier is reported only once
drivers/net/rionet.c:518: error: for each function it appears in.)
drivers/net/rionet.c:518: error: storage size of 'ack' isn't known
drivers/net/rionet.c:522: error: `DMA_RESOURCE_AVAILABLE' undeclared (first use in this function)
drivers/net/rionet.c:524: error: `DMA_ACK' undeclared (first use in this function)
drivers/net/rionet.c:531: error: `DMA_RESOURCE_REMOVED' undeclared (first use in this function)
drivers/net/rionet.c:544: warning: `return' with a value, in function returning void
drivers/net/rionet.c:518: warning: unused variable `ack'
drivers/net/rionet.c: In function `rionet_dma_register':
drivers/net/rionet.c:553: error: implicit declaration of function `dma_async_client_register'
drivers/net/rionet.c:554: error: implicit declaration of function `dma_async_client_chan_request'
drivers/net/rionet.c: In function `rionet_close':
drivers/net/rionet.c:731: error: implicit declaration of function `dma_async_client_unregister'
make[2]: *** [drivers/net/rionet.o] Error 1

Jan
Re: [PATCH 0/5] rapidio: adding memory mapping IO support and misc fixes
Hi Gerhard,

Yes, I am sure; I know I chose a different board than the one I use. I wanted to show that this compilation problem is not influenced by our e500 patches, so I did the test on a pristine 2.6.29.1 kernel without any external patches applied; the problem is exactly the same when rionet is compiled for e500 with our patches. In any case, the dma_client structure is defined in 2.6.28 but not in 2.6.29, so it looks to me like the rionet DMA support was written for an older kernel. The Linus tree and the async_tx tree were merged three months ago, and the relevant commit is:

dmaengine: kill struct dma_client and supporting infrastructure
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=d9e8a3a5b8298a3c814ed37ac5756e6f67b6be41

Jan

On Fri, 2009-05-08 at 12:28 +0200, ext Gerhard Jaeger wrote:
> Hi Jan,
> On Friday 08 May 2009 12:06:35 Jan Neskudla wrote:
> [SNIPSNAP]
>> The important CONFIG options are:
>> PPC_86xx=y
>> HPC8641_HPCN=y
> You're using an e500 board (EP8548A), but the options above will be used when building a kernel for an e600 machine (MPC8641). Are you sure that is okay?
> - Gerhard