Re: Remote I/O bus

2019-10-07 Thread Luca Ceresoli
Hi Greg,

On 06/10/19 11:18, Greg KH wrote:
> On Sun, Oct 06, 2019 at 12:29:18AM +0200, Luca Ceresoli wrote:
>> BTW I guess having an FPGA external to the SoC connected via SPI or I2C
>> is not uncommon. Am I wrong?
> 
> Not uncommon at all, look at the drivers/fpga/ subsystem for a standard
> way to access those types of chips with a standard api.

My question was probably ambiguously stated, sorry.

drivers/fpga/ has drivers to send a bitstream to an unconfigured FPGA,
using protocols cast in silicon by FPGA vendors.

My question was about the connection between a configured FPGA and the
main SoC. This is left to the imagination of the FPGA implementer, which
explains why there is no specific protocol implemented in mainline: it
is usually either a standard bus (typically memory-mapped or PCIe) or a
custom protocol over I2C or SPI.

I think only DFL is somewhat different, but it doesn't apply to my case
since it assumes MMIO, and if I had MMIO I wouldn't have started this
thread at all.

-- 
Luca

___
Kernelnewbies mailing list
Kernelnewbies@kernelnewbies.org
https://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies


Re: Remote I/O bus

2019-10-06 Thread Greg KH
On Sun, Oct 06, 2019 at 12:29:18AM +0200, Luca Ceresoli wrote:
> BTW I guess having an FPGA external to the SoC connected via SPI or I2C
> is not uncommon. Am I wrong?

Not uncommon at all, look at the drivers/fpga/ subsystem for a standard
way to access those types of chips with a standard api.

greg k-h



Re: Remote I/O bus

2019-10-05 Thread Valdis Klētnieks
On Sun, 06 Oct 2019 00:29:18 +0200, Luca Ceresoli said:

> BTW I guess having an FPGA external to the SoC connected via SPI or I2C
> is not uncommon. Am I wrong?

Look at it this way - as a practical matter, if you have an FPGA, it's probably
going to be hanging off an SPI, I2C, or PCI.  And if you're an SoC, especially
at the low end, PCI may be too much silicon to bother with.

Oddly enough, I've not seen any FPGA over USB.  That of course doesn't mean
that some maniac hasn't tried to do it :)





Re: Remote I/O bus

2019-10-05 Thread Luca Ceresoli
Hi Valdis,

On 04/10/19 23:51, Valdis Klētnieks wrote:
> On Fri, 04 Oct 2019 17:08:30 +0200, Luca Ceresoli said:
>> Yes, the read/write helpers are nicely isolated. However this sits in a
>> vendor kernel that tends to change a lot from one release to another, so
> 
> I admit having a hard time wrapping my head around "vendor kernel that
> changes a lot from one release to another", unless you mean something like
> the Red Hat Enterprise releases that update every 2 years, and at that
> point you get hit with a jump of 8 or 10 kernel releases.
> 
> And of course, the right answer is to fix up the driver and upstream it,
> so that in 2022 when your vendor does a new release, the updated driver
> will already be there waiting for you.
> 
> And don't worry about having to do patches to update the driver to a new
> kernel release because APIs change - that's only a problem for out-of-tree
> drivers.  If it's in-tree, the person making the API change is supposed to
> fix your driver for you.

Thanks for your words! I totally agree, and I do upstream my work
whenever I can. I also use the same arguments you used to convince other
people to do so.

Weirdly enough, the whole idea of an io-over-spi bridge came about exactly
because I want my work to be as close to upstream as possible -- even for
the non-upstreamable hacks I need in embedded products. All the drivers
I'm going to use in the FPGA are platform drivers, and there is no way
my-own-io-over-spi bus support in those drivers will ever be mainlined.

That's why I came up with the idea of keeping those drivers as platform
drivers (the devices after all expose a real I/O bus, not SPI) and having
the io-over-spi logic in a "bridge" driver (which is what the
microprocessor in the FPGA does). This would allow using *exactly* the
mainline driver, when one exists, without any change.

But it looks like mine was not the right idea.

BTW I guess having an FPGA external to the SoC connected via SPI or I2C
is not uncommon. Am I wrong?

-- 
Luca




Re: Remote I/O bus

2019-10-04 Thread Valdis Klētnieks
On Fri, 04 Oct 2019 17:08:30 +0200, Luca Ceresoli said:
> Yes, the read/write helpers are nicely isolated. However this sits in a
> vendor kernel that tends to change a lot from one release to another, so

I admit having a hard time wrapping my head around "vendor kernel that
changes a lot from one release to another", unless you mean something like
the Red Hat Enterprise releases that update every 2 years, and at that
point you get hit with a jump of 8 or 10 kernel releases.

And of course, the right answer is to fix up the driver and upstream it,
so that in 2022 when your vendor does a new release, the updated driver
will already be there waiting for you.

And don't worry about having to do patches to update the driver to a new
kernel release because APIs change - that's only a problem for out-of-tree
drivers.  If it's in-tree, the person making the API change is supposed to
fix your driver for you.





Re: Remote I/O bus

2019-10-04 Thread Luca Ceresoli
Hi Greg,

On 04/10/19 16:54, Greg KH wrote:
> On Fri, Oct 04, 2019 at 04:08:06PM +0200, Luca Ceresoli wrote:
>> Hi Greg,
>>
>> On 04/10/19 15:22, Greg KH wrote:
>>> On Fri, Oct 04, 2019 at 01:04:56PM +0200, Luca Ceresoli wrote:
 Hi,

 on an embedded system I currently have a standard platform device:

 .-----.  data  .--------.
 | CPU |--------| DEVICE |
 '-----'   bus  '--------'

 The driver is a standard platform driver that uses ioread32() and
 iowrite32() to access registers.

 So far, so good.

 Now in a new design I have the same device in an FPGA, external to the
 SoC. The external FPGA is not reachable via an I/O bus, but via SPI (or
 I2C). A microprocessor in the FPGA acts as a bridge: as an SPI client it
 receives register read/write requests from the CPU, forwards them to the
 devices on the in-FPGA data bus as a master, then sends back the replies
 over SPI.

SoC <- | -> FPGA

 .-----.  data  .---------.       .--------.  data  .--------.
 | CPU |--------| SPI CTL |-------| BRIDGE |--------| DEVICE |
 '-----' bus A  '---------'  SPI  '--------' bus B  '--------'


 What would be a proper way to model this in the Linux kernel?

 Of course I can hack the drivers to hijack them on SPI, but I'm trying
 to solve the problem in a better way. IMO "a proper way" implies that
 the platform driver does not need to be aware of the existence of the
 bridge.

 Note: in the real case there is more than one device to handle.

 At first sight I think this should be modeled with a "bridge" device that:

  * is a SPI device
  * implements a "platform bus" where regular platform devices can be
instantiated, similar to a "simple-bus"
>>>
>>> Yes, make your own "bus", and have the SPI device be your "host
>>> controller" in that it bridges the SPI bus to your "FPGA bus".
>>>
>>> The driver model is set up for this, it should not be that complex to do
>>> so.  If you have specific questions, just let me know.  A "clean" example
>>> of what to do is the greybus code, as that's probably one of the newest
>>> busses to be added to the kernel.
>>>
 In device tree terms:

  { /* data bus A in picture */

 spi0: spi@4200 {
 reg = <0x4200 0x1000>;
 #address-cells = <1>;

 io-over-spi-bridge@1 { /* data bus B */
 reg = <1>; /* slave select pin 1 */
 compatible = "linux,io-over-spi-bridge";
 #address-cells = <1>;
 #size-cells = <1>;

 mydevice@4000 {
 /* 1 kB I/O space at 0x4000 on bus B */
 reg = <0x4000 0x1000>;
 };
 };
 };
 };

 The io-over-spi driver is supposed to request allocation of a virtual
 memory area that:
  1. is as large as the address space on bus B
  2. is __iomem (non cached, etc)
  3. is not mapped on the physical CPU address space (bus A)
  4. page faults at every read/write access, triggering a callback
 that starts an SPI transaction, waits for the result and returns
>>>
>>> I don't think you can map memory to be "on an SPI bus", unless you have
>>> support for that in your hardware controller itself.  Trying to map
>>> memory in this way is odd; just treat the devices out on the bus as
>>> "devices that need messages sent to them", and you should be fine.  It's
>>> not memory-mapped I/O memory, so don't think of it that way.
>>
>> If I got you correctly, this means I cannot reuse the existing device
>> drivers unmodified as I was hoping to.
> 
> You are switching from "ioread/write" to "all data goes across an SPI
> link".  No, you can't reuse the existing drivers, but you can modify
> them to abstract out the "read/write data" functions to be transport
> agnostic.
> 
>> They won't be 'struct platform_device' instances anymore, they will
>> become 'struct mybus_device' instances. And as such they won't be
>> allowed to call ioread32() / iowrite32(), but will have to call
>> mybus_ioread32() and mybus_iowrite32(). Correct?
> 
> Yes.
> 
> But, if you do it right, the majority of your driver is the logic to
> control the hardware, and interact with whatever other subsystem those
> devices talk to.  Read/Write data and the bus the device talks to should
> just be a tiny shim that you can split out into a separate module/file.

Sure, the driver logic wouldn't be touched; only the read/write helpers
and the entry point would change (I think I'd need two different probe
functions, but again sharing most of their code).

> Do you have a pointer to your existing code anywhere?

One of the drivers I'm looking at is:

https://github.com/Xilinx/linux-xlnx/blob/xilinx-v2019.1/drivers/media/platform/xilinx/xilinx-csi2rxss.c

Yes, the read/write helpers are nicely isolated. However this sits in a
vendor kernel that tends to change a lot from one release to another, so

Re: Remote I/O bus

2019-10-04 Thread Greg KH
On Fri, Oct 04, 2019 at 04:08:06PM +0200, Luca Ceresoli wrote:
> Hi Greg,
> 
> On 04/10/19 15:22, Greg KH wrote:
> > On Fri, Oct 04, 2019 at 01:04:56PM +0200, Luca Ceresoli wrote:
> >> Hi,
> >>
> >> on an embedded system I currently have a standard platform device:
> >>
> >> .-----.  data  .--------.
> >> | CPU |--------| DEVICE |
> >> '-----'   bus  '--------'
> >>
> >> The driver is a standard platform driver that uses ioread32() and
> >> iowrite32() to access registers.
> >>
> >> So far, so good.
> >>
> >> Now in a new design I have the same device in an FPGA, external to the
> >> SoC. The external FPGA is not reachable via an I/O bus, but via SPI (or
> >> I2C). A microprocessor in the FPGA acts as a bridge: as an SPI client it
> >> receives register read/write requests from the CPU, forwards them to the
> >> devices on the in-FPGA data bus as a master, then sends back the replies
> >> over SPI.
> >>
> >>SoC <- | -> FPGA
> >>
> >> .-----.  data  .---------.       .--------.  data  .--------.
> >> | CPU |--------| SPI CTL |-------| BRIDGE |--------| DEVICE |
> >> '-----' bus A  '---------'  SPI  '--------' bus B  '--------'
> >>
> >>
> >> What would be a proper way to model this in the Linux kernel?
> >>
> >> Of course I can hack the drivers to hijack them on SPI, but I'm trying
> >> to solve the problem in a better way. IMO "a proper way" implies that
> >> the platform driver does not need to be aware of the existence of the
> >> bridge.
> >>
> >> Note: in the real case there is more than one device to handle.
> >>
> >> At first sight I think this should be modeled with a "bridge" device that:
> >>
> >>  * is a SPI device
> >>  * implements a "platform bus" where regular platform devices can be
> >>instantiated, similar to a "simple-bus"
> > 
> > Yes, make your own "bus", and have the SPI device be your "host
> > controller" in that it bridges the SPI bus to your "FPGA bus".
> > 
> > The driver model is set up for this, it should not be that complex to do
> > so.  If you have specific questions, just let me know.  A "clean" example
> > of what to do is the greybus code, as that's probably one of the newest
> > busses to be added to the kernel.
> > 
> >> In device tree terms:
> >>
> >>  { /* data bus A in picture */
> >>
> >> spi0: spi@4200 {
> >> reg = <0x4200 0x1000>;
> >> #address-cells = <1>;
> >>
> >> io-over-spi-bridge@1 { /* data bus B */
> >> reg = <1>; /* slave select pin 1 */
> >> compatible = "linux,io-over-spi-bridge";
> >> #address-cells = <1>;
> >> #size-cells = <1>;
> >>
> >> mydevice@4000 {
> >> /* 1 kB I/O space at 0x4000 on bus B */
> >> reg = <0x4000 0x1000>;
> >> };
> >> };
> >> };
> >> };
> >>
> >> The io-over-spi driver is supposed to request allocation of a virtual
> >> memory area that:
> >>  1. is as large as the address space on bus B
> >>  2. is __iomem (non cached, etc)
> >>  3. is not mapped on the physical CPU address space (bus A)
> >>  4. page faults at every read/write access, triggering a callback
> >> that starts an SPI transaction, waits for the result and returns
> > 
> > I don't think you can map memory to be "on an SPI bus", unless you have
> > support for that in your hardware controller itself.  Trying to map
> > memory in this way is odd; just treat the devices out on the bus as
> > "devices that need messages sent to them", and you should be fine.  It's
> > not memory-mapped I/O memory, so don't think of it that way.
> 
> If I got you correctly, this means I cannot reuse the existing device
> drivers unmodified as I was hoping to.

You are switching from "ioread/write" to "all data goes across an SPI
link".  No, you can't reuse the existing drivers, but you can modify
them to abstract out the "read/write data" functions to be transport
agnostic.

> They won't be 'struct platform_device' instances anymore, they will
> become 'struct mybus_device' instances. And as such they won't be
> allowed to call ioread32() / iowrite32(), but will have to call
> mybus_ioread32() and mybus_iowrite32(). Correct?

Yes.

But, if you do it right, the majority of your driver is the logic to
control the hardware, and interact with whatever other subsystem those
devices talk to.  Read/Write data and the bus the device talks to should
just be a tiny shim that you can split out into a separate module/file.
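A rough userspace sketch of the "tiny shim" idea above (all names here are
hypothetical, not a kernel API): the driver logic only calls through an
ops table, and just the backend knows whether registers sit behind MMIO
or an SPI bridge.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical transport-agnostic register accessors: the driver logic
 * calls through this ops table and never learns whether the backend is
 * MMIO or an SPI bridge. */
struct io_ops {
	uint32_t (*read32)(void *ctx, uint32_t addr);
	void     (*write32)(void *ctx, uint32_t addr, uint32_t val);
};

/* Backend 1: plain memory-mapped access, modeled here by an array. */
static uint32_t mmio_read32(void *ctx, uint32_t addr)
{
	return ((uint32_t *)ctx)[addr / 4];
}

static void mmio_write32(void *ctx, uint32_t addr, uint32_t val)
{
	((uint32_t *)ctx)[addr / 4] = val;
}

/* Backend 2 (not shown) would build an SPI request, run the transfer
 * via the bridge, and parse the reply; same ops signature. */

/* Driver logic written only against the ops table: read a control
 * register, set a hypothetical ENABLE bit, write it back. */
static void device_enable(const struct io_ops *ops, void *ctx)
{
	uint32_t ctrl = ops->read32(ctx, 0x0);

	ops->write32(ctx, 0x0, ctrl | 1);
}
```

With this split, only the ops table and the probe entry point differ
between the platform and the io-over-spi builds of the driver.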

Do you have a pointer to your existing code anywhere?

thanks,

greg k-h



Re: Remote I/O bus

2019-10-04 Thread Luca Ceresoli
Hi Greg,

On 04/10/19 15:22, Greg KH wrote:
> On Fri, Oct 04, 2019 at 01:04:56PM +0200, Luca Ceresoli wrote:
>> Hi,
>>
>> on an embedded system I currently have a standard platform device:
>>
>> .-----.  data  .--------.
>> | CPU |--------| DEVICE |
>> '-----'   bus  '--------'
>>
>> The driver is a standard platform driver that uses ioread32() and
>> iowrite32() to access registers.
>>
>> So far, so good.
>>
>> Now in a new design I have the same device in an FPGA, external to the
>> SoC. The external FPGA is not reachable via an I/O bus, but via SPI (or
>> I2C). A microprocessor in the FPGA acts as a bridge: as an SPI client it
>> receives register read/write requests from the CPU, forwards them to the
>> devices on the in-FPGA data bus as a master, then sends back the replies
>> over SPI.
>>
>>SoC <- | -> FPGA
>>
>> .-----.  data  .---------.       .--------.  data  .--------.
>> | CPU |--------| SPI CTL |-------| BRIDGE |--------| DEVICE |
>> '-----' bus A  '---------'  SPI  '--------' bus B  '--------'
>>
>>
>> What would be a proper way to model this in the Linux kernel?
>>
>> Of course I can hack the drivers to hijack them on SPI, but I'm trying
>> to solve the problem in a better way. IMO "a proper way" implies that
>> the platform driver does not need to be aware of the existence of the
>> bridge.
>>
>> Note: in the real case there is more than one device to handle.
>>
>> At first sight I think this should be modeled with a "bridge" device that:
>>
>>  * is a SPI device
>>  * implements a "platform bus" where regular platform devices can be
>>instantiated, similar to a "simple-bus"
> 
> Yes, make your own "bus", and have the SPI device be your "host
> controller" in that it bridges the SPI bus to your "FPGA bus".
> 
> The driver model is set up for this, it should not be that complex to do
> so.  If you have specific questions, just let me know.  A "clean" example
> of what to do is the greybus code, as that's probably one of the newest
> busses to be added to the kernel.
> 
>> In device tree terms:
>>
>>  { /* data bus A in picture */
>>
>> spi0: spi@4200 {
>> reg = <0x4200 0x1000>;
>> #address-cells = <1>;
>>
>> io-over-spi-bridge@1 { /* data bus B */
>> reg = <1>; /* slave select pin 1 */
>> compatible = "linux,io-over-spi-bridge";
>> #address-cells = <1>;
>> #size-cells = <1>;
>>
>> mydevice@4000 {
>> /* 1 kB I/O space at 0x4000 on bus B */
>> reg = <0x4000 0x1000>;
>> };
>> };
>> };
>> };
>>
>> The io-over-spi driver is supposed to request allocation of a virtual
>> memory area that:
>>  1. is as large as the address space on bus B
>>  2. is __iomem (non cached, etc)
>>  3. is not mapped on the physical CPU address space (bus A)
>>  4. page faults at every read/write access, triggering a callback
>> that starts an SPI transaction, waits for the result and returns
> 
> I don't think you can map memory to be "on an SPI bus", unless you have
> support for that in your hardware controller itself.  Trying to map
> memory in this way is odd; just treat the devices out on the bus as
> "devices that need messages sent to them", and you should be fine.  It's
> not memory-mapped I/O memory, so don't think of it that way.

If I got you correctly, this means I cannot reuse the existing device
drivers unmodified as I was hoping to. They won't be 'struct
platform_device' instances anymore, they will become 'struct
mybus_device' instances. And as such they won't be allowed to call
ioread32() / iowrite32(), but will have to call mybus_ioread32() and
mybus_iowrite32(). Correct?

Thanks,
-- 
Luca



Re: Remote I/O bus

2019-10-04 Thread Greg KH
On Fri, Oct 04, 2019 at 01:04:56PM +0200, Luca Ceresoli wrote:
> Hi,
> 
> on an embedded system I currently have a standard platform device:
> 
> .-----.  data  .--------.
> | CPU |--------| DEVICE |
> '-----'   bus  '--------'
> 
> The driver is a standard platform driver that uses ioread32() and
> iowrite32() to access registers.
> 
> So far, so good.
> 
> Now in a new design I have the same device in an FPGA, external to the
> SoC. The external FPGA is not reachable via an I/O bus, but via SPI (or
> I2C). A microprocessor in the FPGA acts as a bridge: as an SPI client it
> receives register read/write requests from the CPU, forwards them to the
> devices on the in-FPGA data bus as a master, then sends back the replies
> over SPI.
> 
>SoC <- | -> FPGA
> 
> .-----.  data  .---------.       .--------.  data  .--------.
> | CPU |--------| SPI CTL |-------| BRIDGE |--------| DEVICE |
> '-----' bus A  '---------'  SPI  '--------' bus B  '--------'
> 
> 
> What would be a proper way to model this in the Linux kernel?
> 
> Of course I can hack the drivers to hijack them on SPI, but I'm trying
> to solve the problem in a better way. IMO "a proper way" implies that
> the platform driver does not need to be aware of the existence of the
> bridge.
> 
> Note: in the real case there is more than one device to handle.
> 
> At first sight I think this should be modeled with a "bridge" device that:
> 
>  * is a SPI device
>  * implements a "platform bus" where regular platform devices can be
>instantiated, similar to a "simple-bus"

Yes, make your own "bus", and have the SPI device be your "host
controller" in that it bridges the SPI bus to your "FPGA bus".

The driver model is set up for this, it should not be that complex to do
so.  If you have specific questions, just let me know.  A "clean" example
of what to do is the greybus code, as that's probably one of the newest
busses to be added to the kernel.
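The "make your own bus" idea boils down to a match() callback that pairs
devices with drivers, plus registration lists. The miniature userspace
model below illustrates just that mechanism; the names are made up and
this is not the real struct bus_type API.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Miniature model of a custom bus: devices and drivers are paired by a
 * match() callback, which is essentially what the kernel driver model
 * does for a custom struct bus_type. All names are illustrative. */
struct mybus_device {
	const char *name;
};

struct mybus_driver {
	const char *name;
	int (*probe)(struct mybus_device *dev);
};

/* Match by name, like the platform bus does in its simplest case. */
static int mybus_match(const struct mybus_device *dev,
		       const struct mybus_driver *drv)
{
	return strcmp(dev->name, drv->name) == 0;
}

/* "Register" a device: walk the driver list and probe on a match. */
static int mybus_add_device(struct mybus_device *dev,
			    struct mybus_driver **drivers, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (mybus_match(dev, drivers[i]))
			return drivers[i]->probe(dev);
	return -1; /* no driver bound */
}
```

In the real kernel the SPI bridge device would register itself as the
host controller of such a bus and enumerate the in-FPGA devices on it.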

> In device tree terms:
> 
>  { /* data bus A in picture */
> 
> spi0: spi@4200 {
> reg = <0x4200 0x1000>;
> #address-cells = <1>;
> 
> io-over-spi-bridge@1 { /* data bus B */
> reg = <1>; /* slave select pin 1 */
> compatible = "linux,io-over-spi-bridge";
> #address-cells = <1>;
> #size-cells = <1>;
> 
> mydevice@4000 {
> /* 1 kB I/O space at 0x4000 on bus B */
> reg = <0x4000 0x1000>;
> };
> };
> };
> };
> 
> The io-over-spi driver is supposed to request allocation of a virtual
> memory area that:
>  1. is as large as the address space on bus B
>  2. is __iomem (non cached, etc)
>  3. is not mapped on the physical CPU address space (bus A)
>  4. page faults at every read/write access, triggering a callback
> that starts an SPI transaction, waits for the result and returns

I don't think you can map memory to be "on an SPI bus", unless you have
support for that in your hardware controller itself.  Trying to map
memory in this way is odd; just treat the devices out on the bus as
"devices that need messages sent to them", and you should be fine.  It's
not memory-mapped I/O memory, so don't think of it that way.

good luck!

greg k-h



Remote I/O bus

2019-10-04 Thread Luca Ceresoli
Hi,

on an embedded system I currently have a standard platform device:

.-----.  data  .--------.
| CPU |--------| DEVICE |
'-----'   bus  '--------'

The driver is a standard platform driver that uses ioread32() and
iowrite32() to access registers.

So far, so good.

Now in a new design I have the same device in an FPGA, external to the
SoC. The external FPGA is not reachable via an I/O bus, but via SPI (or
I2C). A microprocessor in the FPGA acts as a bridge: as an SPI client it
receives register read/write requests from the CPU, forwards them to the
devices on the in-FPGA data bus as a master, then sends back the replies
over SPI.

   SoC <- | -> FPGA

.-----.  data  .---------.       .--------.  data  .--------.
| CPU |--------| SPI CTL |-------| BRIDGE |--------| DEVICE |
'-----' bus A  '---------'  SPI  '--------' bus B  '--------'
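The register read/write forwarding described above needs some wire format
over SPI. Purely for illustration (the real encoding is up to the FPGA
implementer; everything below is a hypothetical sketch), one register
transaction could be framed as:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical 9-byte wire format for one register transaction:
 *   [0]    opcode: 0 = read, 1 = write
 *   [1..4] register address on bus B, big-endian
 *   [5..8] value (write request, or read reply), big-endian
 */
enum { OP_READ = 0, OP_WRITE = 1, MSG_LEN = 9 };

static void put_be32(uint8_t *p, uint32_t v)
{
	p[0] = v >> 24;
	p[1] = v >> 16;
	p[2] = v >> 8;
	p[3] = v;
}

static uint32_t get_be32(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8) | p[3];
}

/* Build a write request for the bridge microprocessor. */
static void encode_write(uint8_t msg[MSG_LEN], uint32_t addr, uint32_t val)
{
	msg[0] = OP_WRITE;
	put_be32(msg + 1, addr);
	put_be32(msg + 5, val);
}

/* Extract the value carried in the value field of a reply. */
static uint32_t decode_reply(const uint8_t msg[MSG_LEN])
{
	return get_be32(msg + 5);
}
```

The bridge firmware would decode such a request, perform the access on
bus B as a master, and send the reply frame back over SPI.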


What would be a proper way to model this in the Linux kernel?

Of course I can hack the drivers to hijack them on SPI, but I'm trying
to solve the problem in a better way. IMO "a proper way" implies that
the platform driver does not need to be aware of the existence of the
bridge.

Note: in the real case there is more than one device to handle.

At first sight I think this should be modeled with a "bridge" device that:

 * is a SPI device
 * implements a "platform bus" where regular platform devices can be
   instantiated, similar to a "simple-bus"

In device tree terms:

 { /* data bus A in picture */

spi0: spi@4200 {
reg = <0x4200 0x1000>;
#address-cells = <1>;

io-over-spi-bridge@1 { /* data bus B */
reg = <1>; /* slave select pin 1 */
compatible = "linux,io-over-spi-bridge";
#address-cells = <1>;
#size-cells = <1>;

mydevice@4000 {
/* 1 kB I/O space at 0x4000 on bus B */
reg = <0x4000 0x1000>;
};
};
};
};

The io-over-spi driver is supposed to request allocation of a virtual
memory area that:
 1. is as large as the address space on bus B
 2. is __iomem (non cached, etc)
 3. is not mapped on the physical CPU address space (bus A)
 4. page faults at every read/write access, triggering a callback
that starts an SPI transaction, waits for the result and returns

After some research I haven't found how this could be implemented,
mostly due to my newbieness about kernel memory management. Also, as
drivers might access the bus in IRQ handlers, I suspect there is no way
to do an SPI transaction in IRQ context, but this could be handled
differently (threaded IRQ...).

Does this look like a good approach to handle the problem?
If it does, how would you implement the iomem access and handle IRQ context?
Otherwise, which way would you suggest?

Many thanks in advance,
-- 
Luca
