Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2015-12-31 Thread Santosh Shukla
On Tue, Dec 29, 2015 at 11:01 PM, Alex Williamson wrote:
> On Tue, 2015-12-29 at 22:00 +0530, Santosh Shukla wrote:
>> On Tue, Dec 29, 2015 at 9:50 PM, Arnd Bergmann  wrote:
>> > On Tuesday 29 December 2015 21:25:15 Santosh Shukla wrote:
>> > > Mistakenly added the wrong email id for Alex; looping in his
>> > > correct one.
>> > > On 29 December 2015 at 21:23, Santosh Shukla wrote:
>> > > > On 29 December 2015 at 18:58, Arnd Bergmann 
>> > > > wrote:
>> > > > > On Wednesday 23 December 2015 17:04:40 Santosh Shukla wrote:
>> > > > > > On 23 December 2015 at 03:26, Arnd Bergmann 
>> > > > > > wrote:
>> > > > > > > On Tuesday 22 December 2015, Santosh Shukla wrote:
>> > > > > > > > }
>> > > > > > > >
>> > > > > > > > So I care about a /dev/ioports-type interface that can do
>> > > > > > > > more than byte-wide data copies to/from user-space. I
>> > > > > > > > tested this patch with a little modification and was able
>> > > > > > > > to run the PMD driver for the arm/arm64 case.
>> > > > > > > >
>> > > > > > > > I'd like to know how to address the pci_io region mapping
>> > > > > > > > problem for arm/arm64, in case the /dev/ioports approach
>> > > > > > > > is not acceptable; or else I can spend time restructuring
>> > > > > > > > the patch?
>> > > > > > > >
>> > > > > > >
>> > > > > > > For the use case you describe, can't you use the vfio
>> > > > > > > framework to
>> > > > > > > access the PCI BARs?
>> > > > > > >
>> > > > > >
>> > > > > > I looked at drivers/vfio/pci/vfio_pci.c, function
>> > > > > > vfio_pci_map(), and
>> > > > > > it looks to me that it only maps the IORESOURCE_MEM PCI
>> > > > > > region; pasting a
>> > > > > > code snippet:
>> > > > > >
>> > > > > > if (!(pci_resource_flags(pdev, index) & IORESOURCE_MEM))
>> > > > > >         return -EINVAL;
>> > > > > > 
>> > > > > >
>> > > > > > and I want to map the IORESOURCE_IO PCI region for the arm
>> > > > > > platform in my
>> > > > > > use-case. I am not sure vfio maps the PCI I/O BAR region?
>> > > > >
>> > > > > Mapping I/O BARs is not portable, notably it doesn't work on
>> > > > > x86.
>> > > > >
>> > > > > You should be able to access them using the read/write
>> > > > > interface on
>> > > > > the vfio device.
>> > > > >
>> > > > Right, x86 doesn't care, as iopl() can give a userspace
>> > > > application
>> > > > direct access to ioports.
>> > > >
>> > > > Also, Alex in another dpdk thread [1] suggested that someone
>> > > > propose I/O
>> > > > BAR mapping in vfio-pci, I guess particularly for non-x86 arches,
>> > > > so I
>> > > > started working on it.
>> > > >
>> > >
>> >
>> > So what's wrong with just using the existing read/write API on all
>> > architectures?
>> >
>>
>> Nothing wrong; in fact the read/write API will still be used to access
>> the mmapped I/O PCI BAR from userspace. But right now vfio_pci_map() doesn't
>
> vfio_pci_mmap(), the read/write accessors fully support i/o port.
>

(Sorry for delayed response!)
Right.
>> map the I/O PCI BAR in particular (i.e. IORESOURCE_IO), so I guess I
>> need to add that BAR mapping in vfio. Please correct me if I
>> misunderstood anything.
>
> Maybe I misunderstood what you were asking for, it seemed like you
> specifically wanted to be able to mmap i/o port space, which is
> possible, just not something we can do on x86.  Maybe I should have
> asked why.  The vfio API already supports read/write access to i/o port

Yes, I want to map PCI I/O port space in vfio, and the reason is this:
I want to access a virtio-net-pci device from userspace using vfio, and
for that I am using the latest vfio-noiommu patch from linux-next. But I
am not able to mmap the PCI I/O port space in vfio because of the
conditions below:

1)
--- user space code snippet ---
reg.index = i; /* i is {0..1}, i.e. {BAR0..BAR1}, where BAR0 is the I/O
                  port PCI space and BAR1 is the PCI config space */

ret = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_REGION_INFO, &reg);
if ((reg.flags & VFIO_REGION_INFO_FLAG_MMAP) == 0) {
    return err;
}
Now consider the i = 0 case, where pci_resource_flags is set to IORESOURCE_IO.

--- kernel / vfio_pci.c ---
vfio_pci_ioctl() won't set info.flags to VFIO_REGION_INFO_FLAG_MMAP,
and it refuses for two reasons:
1) the region must have pci_resource_flags & IORESOURCE_MEM set
2) the ioport region size is < PAGE_SIZE
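
The two refusal conditions can be modelled as a small predicate (a
simplification of the vfio-pci logic; the helper name and the 4K page
constant are mine, the flag values are as in include/linux/ioport.h):

```c
#include <assert.h>

/* Flag bits as defined in include/linux/ioport.h. */
#define IORESOURCE_IO   0x00000100
#define IORESOURCE_MEM  0x00000200

#define HOST_PAGE_SIZE  4096UL  /* typical; the real check uses PAGE_SIZE */

/* Simplified model of the gating described above: a BAR is reported as
 * mmap-capable only when it is a memory BAR and spans at least one host
 * page. An IORESOURCE_IO BAR always fails the first test, which is the
 * behaviour being discussed. */
static int region_is_mmappable(unsigned long res_flags, unsigned long size)
{
    if (!(res_flags & IORESOURCE_MEM))
        return 0;                      /* condition 1 */
    if (size < HOST_PAGE_SIZE)
        return 0;                      /* condition 2 */
    return 1;
}
```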

I addressed the second one, but the first is where I believe support
needs to be added in vfio.
The same applies to vfio_pci_mmap() too.

This is why I am thinking of adding IORESOURCE_IO space mapping support
in vfio, in particular for non-x86 arches. Please correct my
understanding if it is wrong.
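
For reference, the read/write path Alex mentions goes through the device
fd at the region's reported offset; a minimal positioned-read helper
might look like this (read_bar32 is my name; on a real vfio device,
region_off comes from struct vfio_region_info.offset):

```c
#include <assert.h>
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Sketch of the vfio read/write access path: each region is exposed
 * through the device fd at the offset reported by
 * VFIO_DEVICE_GET_REGION_INFO, so a 32-bit register read is just a
 * positioned read at region offset + register offset. */
static int read_bar32(int fd, off_t region_off, off_t reg, uint32_t *val)
{
    ssize_t n = pread(fd, val, sizeof(*val), region_off + reg);
    return n == (ssize_t)sizeof(*val) ? 0 : -1;
}
```

The same helper works unchanged for an I/O port BAR under vfio-pci,
which is why read/write access needs no mmap support.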

> space, so if you intend to mmap it only to use read/write on top of the
> mmap, I suppose you might see some performance improvement, but not
> really any new functionality.  You'd also need to deal with page size
> issues since i/o port ranges are generally quite a bit smaller than the
> host page size and they'd need to be mapped such that each device does
> not share a host page of i/o port space with other devices.  On x86 i/o

Yes, I have taken care of the size < PAGE_SIZE condition.
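
The page-sharing concern Alex raises above can be sketched as a simple
page-index test (illustrative only; HOST_PAGE and the helper name are
assumptions):

```c
#include <assert.h>

/* An I/O port range smaller than a host page still occupies a whole
 * page when mmapped, so two devices whose port ranges start in the
 * same host page cannot safely be handed to separate userspace
 * owners. */
#define HOST_PAGE 4096UL

static int start_in_same_host_page(unsigned long a, unsigned long b)
{
    /* true when both port ranges begin in the same host page */
    return (a / HOST_PAGE) == (b / HOST_PAGE);
}
```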

> port 

Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2015-12-31 Thread Santosh Shukla
On Tue, Dec 29, 2015 at 11:01 PM, Alex Williamson
 wrote:
> On Tue, 2015-12-29 at 22:00 +0530, Santosh Shukla wrote:
>> On Tue, Dec 29, 2015 at 9:50 PM, Arnd Bergmann  wrote:
>> > On Tuesday 29 December 2015 21:25:15 Santosh Shukla wrote:
>> > > mistakenly added wrong email-id of alex, looping his correct one.
>> > >
>> > > On 29 December 2015 at 21:23, Santosh Shukla > > > ro.org> wrote:
>> > > > On 29 December 2015 at 18:58, Arnd Bergmann 
>> > > > wrote:
>> > > > > On Wednesday 23 December 2015 17:04:40 Santosh Shukla wrote:
>> > > > > > On 23 December 2015 at 03:26, Arnd Bergmann 
>> > > > > > wrote:
>> > > > > > > On Tuesday 22 December 2015, Santosh Shukla wrote:
>> > > > > > > > }
>> > > > > > > >
>> > > > > > > > So I care for /dev/ioport types interface who could do
>> > > > > > > > more than byte
>> > > > > > > > data copy to/from user-space. I tested this patch with
>> > > > > > > > little
>> > > > > > > > modification and could able to run pmd driver for
>> > > > > > > > arm/arm64 case.
>> > > > > > > >
>> > > > > > > > Like to know how to address pci_io region mapping
>> > > > > > > > problem for
>> > > > > > > > arm/arm64, in-case /dev/ioports approach is not
>> > > > > > > > acceptable or else I
>> > > > > > > > can spent time on restructuring the patch?
>> > > > > > > >
>> > > > > > >
>> > > > > > > For the use case you describe, can't you use the vfio
>> > > > > > > framework to
>> > > > > > > access the PCI BARs?
>> > > > > > >
>> > > > > >
>> > > > > > I looked at file: drivers/vfio/pci/vfio_pci.c, func
>> > > > > > vfio_pci_map() and
>> > > > > > it look to me that it only maps ioresource_mem pci region,
>> > > > > > pasting
>> > > > > > code snap:
>> > > > > >
>> > > > > > if (!(pci_resource_flags(pdev, index) & IORESOURCE_MEM))
>> > > > > > return -EINVAL;
>> > > > > > 
>> > > > > >
>> > > > > > and I want to map ioresource_io pci region for arm platform
>> > > > > > in my
>> > > > > > use-case. Not sure vfio maps pci_iobar region?
>> > > > >
>> > > > > Mapping I/O BARs is not portable, notably it doesn't work on
>> > > > > x86.
>> > > > >
>> > > > > You should be able access them using the read/write interface
>> > > > > on
>> > > > > the vfio device.
>> > > > >
>> > > > Right, x86 doesn't care as iopl() could give userspace
>> > > > application
>> > > > direct access to ioports.
>> > > >
>> > > > Also, Alex in other dpdk thread [1] suggested someone to
>> > > > propose io
>> > > > bar mapping in vfio-pci, I guess in particular to non-x86 arch
>> > > > so I
>> > > > started working on it.
>> > > >
>> > >
>> >
>> > So what's wrong with just using the existing read/write API on all
>> > architectures?
>> >
>>
>> nothing wrong, infact read/write api will still be used so to access
>> mmaped io pci bar at userspace. But right now vfio_pci_map() doesn't
>
> vfio_pci_mmap(), the read/write accessors fully support i/o port.
>

(Sorry for delayed response!)
Right.
>> map io pci bar in particular (i.e.. ioresource_io) so I guess need to
>> add that bar mapping in vfio. pl. correct me if i misunderstood
>> anything.
>
> Maybe I misunderstood what you were asking for, it seemed like you
> specifically wanted to be able to mmap i/o port space, which is
> possible, just not something we can do on x86.  Maybe I should have
> asked why.  The vfio API already supports read/write access to i/o port

Yes, I want to map io port pci space in vfio and reason for that is :
I want to access virto-net-pci device at userspace using vfio and for
that I am using vfio-noiommu latest linux-next patch. but I am not
able to mmap io port pci space in vfio because of below condition -

1)
--- user space code snippet 
reg.index = i; // where i is {0..1} i.e.. {BAR0..BAR1} such that BAR0
= io port pci space and BAR1 = pci config space

ret = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_REGION_INFO, );
if ((reg.flags & VFIO_REGION_INFO_FLAG_MMAP) == 0) {
   return err;
}
now consider i = 0 case where pci_rersource_flag set to IORESOURCE_IO

--- kernel / vfip-pci.c -
so vfio_pci_ioctl() wont set info.flag to VFIO_REGION_INFO_FLAG_MMAP.
And it won't set for two
1) pci_resource_flag & IORESOURCE_MEM
2) ioport size < PAZE_SIZE

The second one I addressed but first one is what I believe that need
to add support in vfio.
and Same applicable for vfio_pci_mmap() too..

This is why I am thinking to add IORESOURCE_IO space mapping support
in vfio; in particular non-x86 archs.. pl. correct my understanding in
case wrong.

> space, so if you intend to mmap it only to use read/write on top of the
> mmap, I suppose you might see some performance improvement, but not
> really any new functionality.  You'd also need to deal with page size
> issues since i/o port ranges are generally quite a bit smaller than the
> host page size and they'd need to be mapped such that each devices does
> not share a host page of i/o port space 

Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2015-12-29 Thread Alex Williamson
On Tue, 2015-12-29 at 22:00 +0530, Santosh Shukla wrote:
> On Tue, Dec 29, 2015 at 9:50 PM, Arnd Bergmann  wrote:
> > On Tuesday 29 December 2015 21:25:15 Santosh Shukla wrote:
> > > Mistakenly added the wrong email id for Alex; looping in his correct one.
> > > 
> > > On 29 December 2015 at 21:23, Santosh Shukla wrote:
> > > > On 29 December 2015 at 18:58, Arnd Bergmann 
> > > > wrote:
> > > > > On Wednesday 23 December 2015 17:04:40 Santosh Shukla wrote:
> > > > > > On 23 December 2015 at 03:26, Arnd Bergmann 
> > > > > > wrote:
> > > > > > > On Tuesday 22 December 2015, Santosh Shukla wrote:
> > > > > > > > }
> > > > > > > > 
> > > > > > > > So I care about a /dev/ioports-type interface that can do
> > > > > > > > more than byte-wide data copies to/from user-space. I
> > > > > > > > tested this patch with a little modification and was able
> > > > > > > > to run the PMD driver for the arm/arm64 case.
> > > > > > > > 
> > > > > > > > I'd like to know how to address the pci_io region mapping
> > > > > > > > problem for arm/arm64, in case the /dev/ioports approach
> > > > > > > > is not acceptable; or else I can spend time restructuring
> > > > > > > > the patch?
> > > > > > > > 
> > > > > > > 
> > > > > > > For the use case you describe, can't you use the vfio
> > > > > > > framework to
> > > > > > > access the PCI BARs?
> > > > > > > 
> > > > > > 
> > > > > > I looked at drivers/vfio/pci/vfio_pci.c, function
> > > > > > vfio_pci_map(), and
> > > > > > it looks to me that it only maps the IORESOURCE_MEM PCI
> > > > > > region; pasting a
> > > > > > code snippet:
> > > > > > 
> > > > > > if (!(pci_resource_flags(pdev, index) & IORESOURCE_MEM))
> > > > > > return -EINVAL;
> > > > > > 
> > > > > > 
> > > > > > and I want to map ioresource_io pci region for arm platform
> > > > > > in my
> > > > > > use-case. Not sure vfio maps pci_iobar region?
> > > > > 
> > > > > Mapping I/O BARs is not portable, notably it doesn't work on
> > > > > x86.
> > > > > 
> > > > > You should be able access them using the read/write interface
> > > > > on
> > > > > the vfio device.
> > > > > 
> > > > Right, x86 doesn't care as iopl() could give userspace
> > > > application
> > > > direct access to ioports.
> > > > 
> > > > Also, Alex in other dpdk thread [1] suggested someone to
> > > > propose io
> > > > bar mapping in vfio-pci, I guess in particular to non-x86 arch
> > > > so I
> > > > started working on it.
> > > > 
> > > 
> > 
> > So what's wrong with just using the existing read/write API on all
> > architectures?
> > 
> 
> nothing wrong, infact read/write api will still be used so to access
> mmaped io pci bar at userspace. But right now vfio_pci_map() doesn't

vfio_pci_mmap(); and the read/write accessors fully support I/O port space.

> map io pci bar in particular (i.e.. ioresource_io) so I guess need to
> add that bar mapping in vfio. pl. correct me if i misunderstood
> anything.

Maybe I misunderstood what you were asking for, it seemed like you
specifically wanted to be able to mmap i/o port space, which is
possible, just not something we can do on x86.  Maybe I should have
asked why.  The vfio API already supports read/write access to i/o port
space, so if you intend to mmap it only to use read/write on top of the
mmap, I suppose you might see some performance improvement, but not
really any new functionality.  You'd also need to deal with page size
issues since i/o port ranges are generally quite a bit smaller than the
host page size, and they'd need to be mapped such that each device does
not share a host page of i/o port space with other devices.  On x86 i/o
port space is mostly considered legacy and not a performance critical
path for most modern devices; PCI SR-IOV specifically excludes i/o port
space.  So what performance gains do you expect to see in being able to
mmap i/o port space and what hardware are you dealing with that relies
on i/o port space rather than mmio for performance?  Thanks,

Alex
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2015-12-29 Thread Santosh Shukla
On Tue, Dec 29, 2015 at 9:50 PM, Arnd Bergmann  wrote:
> On Tuesday 29 December 2015 21:25:15 Santosh Shukla wrote:
>> mistakenly added wrong email-id of alex, looping his correct one.
>>
>> On 29 December 2015 at 21:23, Santosh Shukla  
>> wrote:
>> > On 29 December 2015 at 18:58, Arnd Bergmann  wrote:
>> >> On Wednesday 23 December 2015 17:04:40 Santosh Shukla wrote:
>> >>> On 23 December 2015 at 03:26, Arnd Bergmann  wrote:
>> >>> > On Tuesday 22 December 2015, Santosh Shukla wrote:
>> >>> >> }
>> >>> >>
>> >>> >> So I care for /dev/ioport types interface who could do more than byte
>> >>> >> data copy to/from user-space. I tested this patch with little
>> >>> >> modification and could able to run pmd driver for arm/arm64 case.
>> >>> >>
>> >>> >> Like to know how to address pci_io region mapping problem for
>> >>> >> arm/arm64, in-case /dev/ioports approach is not acceptable or else I
>> >>> >> can spent time on restructuring the patch?
>> >>> >>
>> >>> >
>> >>> > For the use case you describe, can't you use the vfio framework to
>> >>> > access the PCI BARs?
>> >>> >
>> >>>
>> >>> I looked at file: drivers/vfio/pci/vfio_pci.c, func vfio_pci_map() and
>> >>> it look to me that it only maps ioresource_mem pci region, pasting
>> >>> code snap:
>> >>>
>> >>> if (!(pci_resource_flags(pdev, index) & IORESOURCE_MEM))
>> >>> return -EINVAL;
>> >>> 
>> >>>
>> >>> and I want to map ioresource_io pci region for arm platform in my
>> >>> use-case. Not sure vfio maps pci_iobar region?
>> >>
>> >> Mapping I/O BARs is not portable, notably it doesn't work on x86.
>> >>
>> >> You should be able access them using the read/write interface on
>> >> the vfio device.
>> >>
>> > Right, x86 doesn't care as iopl() could give userspace application
>> > direct access to ioports.
>> >
>> > Also, Alex in other dpdk thread [1] suggested someone to propose io
>> > bar mapping in vfio-pci, I guess in particular to non-x86 arch so I
>> > started working on it.
>> >
>>
>
> So what's wrong with just using the existing read/write API on all
> architectures?
>

Nothing wrong; in fact the read/write API will still be used to access
the mmapped I/O PCI BAR from userspace. But right now vfio_pci_map()
doesn't map the I/O PCI BAR in particular (i.e. IORESOURCE_IO), so I
guess that BAR mapping needs to be added in vfio. Please correct me if
I misunderstood anything.

> Arnd


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2015-12-29 Thread Arnd Bergmann
On Tuesday 29 December 2015 21:25:15 Santosh Shukla wrote:
> mistakenly added wrong email-id of alex, looping his correct one.
> 
> On 29 December 2015 at 21:23, Santosh Shukla  
> wrote:
> > On 29 December 2015 at 18:58, Arnd Bergmann  wrote:
> >> On Wednesday 23 December 2015 17:04:40 Santosh Shukla wrote:
> >>> On 23 December 2015 at 03:26, Arnd Bergmann  wrote:
> >>> > On Tuesday 22 December 2015, Santosh Shukla wrote:
> >>> >> }
> >>> >>
> >>> >> So I care for /dev/ioport types interface who could do more than byte
> >>> >> data copy to/from user-space. I tested this patch with little
> >>> >> modification and could able to run pmd driver for arm/arm64 case.
> >>> >>
> >>> >> Like to know how to address pci_io region mapping problem for
> >>> >> arm/arm64, in-case /dev/ioports approach is not acceptable or else I
> >>> >> can spent time on restructuring the patch?
> >>> >>
> >>> >
> >>> > For the use case you describe, can't you use the vfio framework to
> >>> > access the PCI BARs?
> >>> >
> >>>
> >>> I looked at file: drivers/vfio/pci/vfio_pci.c, func vfio_pci_map() and
> >>> it look to me that it only maps ioresource_mem pci region, pasting
> >>> code snap:
> >>>
> >>> if (!(pci_resource_flags(pdev, index) & IORESOURCE_MEM))
> >>> return -EINVAL;
> >>> 
> >>>
> >>> and I want to map ioresource_io pci region for arm platform in my
> >>> use-case. Not sure vfio maps pci_iobar region?
> >>
> >> Mapping I/O BARs is not portable, notably it doesn't work on x86.
> >>
> >> You should be able access them using the read/write interface on
> >> the vfio device.
> >>
> > Right, x86 doesn't care as iopl() could give userspace application
> > direct access to ioports.
> >
> > Also, Alex in other dpdk thread [1] suggested someone to propose io
> > bar mapping in vfio-pci, I guess in particular to non-x86 arch so I
> > started working on it.
> >
> 

So what's wrong with just using the existing read/write API on all
architectures?

Arnd


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2015-12-29 Thread Santosh Shukla
Mistakenly added the wrong email id for Alex; looping in his correct one.

On 29 December 2015 at 21:23, Santosh Shukla  wrote:
> On 29 December 2015 at 18:58, Arnd Bergmann  wrote:
>> On Wednesday 23 December 2015 17:04:40 Santosh Shukla wrote:
>>> On 23 December 2015 at 03:26, Arnd Bergmann  wrote:
>>> > On Tuesday 22 December 2015, Santosh Shukla wrote:
>>> >> }
>>> >>
>>> >> So I care for /dev/ioport types interface who could do more than byte
>>> >> data copy to/from user-space. I tested this patch with little
>>> >> modification and could able to run pmd driver for arm/arm64 case.
>>> >>
>>> >> Like to know how to address pci_io region mapping problem for
>>> >> arm/arm64, in-case /dev/ioports approach is not acceptable or else I
>>> >> can spent time on restructuring the patch?
>>> >>
>>> >
>>> > For the use case you describe, can't you use the vfio framework to
>>> > access the PCI BARs?
>>> >
>>>
>>> I looked at file: drivers/vfio/pci/vfio_pci.c, func vfio_pci_map() and
>>> it look to me that it only maps ioresource_mem pci region, pasting
>>> code snap:
>>>
>>> if (!(pci_resource_flags(pdev, index) & IORESOURCE_MEM))
>>> return -EINVAL;
>>> 
>>>
>>> and I want to map ioresource_io pci region for arm platform in my
>>> use-case. Not sure vfio maps pci_iobar region?
>>
>> Mapping I/O BARs is not portable, notably it doesn't work on x86.
>>
>> You should be able access them using the read/write interface on
>> the vfio device.
>>
> Right, x86 doesn't care as iopl() could give userspace application
> direct access to ioports.
>
> Also, Alex in other dpdk thread [1] suggested someone to propose io
> bar mapping in vfio-pci, I guess in particular to non-x86 arch so I
> started working on it.
>
> Thanks.
>
> [1] http://dpdk.org/ml/archives/dev/2015-December/030852.html
>
>> Arnd


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2015-12-29 Thread Santosh Shukla
On 29 December 2015 at 18:58, Arnd Bergmann  wrote:
> On Wednesday 23 December 2015 17:04:40 Santosh Shukla wrote:
>> On 23 December 2015 at 03:26, Arnd Bergmann  wrote:
>> > On Tuesday 22 December 2015, Santosh Shukla wrote:
>> >> }
>> >>
>> >> So I care for /dev/ioport types interface who could do more than byte
>> >> data copy to/from user-space. I tested this patch with little
>> >> modification and could able to run pmd driver for arm/arm64 case.
>> >>
>> >> Like to know how to address pci_io region mapping problem for
>> >> arm/arm64, in-case /dev/ioports approach is not acceptable or else I
>> >> can spent time on restructuring the patch?
>> >>
>> >
>> > For the use case you describe, can't you use the vfio framework to
>> > access the PCI BARs?
>> >
>>
>> I looked at file: drivers/vfio/pci/vfio_pci.c, func vfio_pci_map() and
>> it look to me that it only maps ioresource_mem pci region, pasting
>> code snap:
>>
>> if (!(pci_resource_flags(pdev, index) & IORESOURCE_MEM))
>> return -EINVAL;
>> 
>>
>> and I want to map ioresource_io pci region for arm platform in my
>> use-case. Not sure vfio maps pci_iobar region?
>
> Mapping I/O BARs is not portable, notably it doesn't work on x86.
>
> You should be able access them using the read/write interface on
> the vfio device.
>
Right, x86 doesn't care, as iopl() can give a userspace application
direct access to ioports.

Also, in another dpdk thread [1], Alex suggested that someone propose
I/O BAR mapping in vfio-pci, I guess in particular for non-x86 arches,
so I started working on it.

Thanks.

[1] http://dpdk.org/ml/archives/dev/2015-December/030852.html

> Arnd


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2015-12-29 Thread Arnd Bergmann
On Wednesday 23 December 2015 17:04:40 Santosh Shukla wrote:
> On 23 December 2015 at 03:26, Arnd Bergmann  wrote:
> > On Tuesday 22 December 2015, Santosh Shukla wrote:
> >> }
> >>
> >> So I care for /dev/ioport types interface who could do more than byte
> >> data copy to/from user-space. I tested this patch with little
> >> modification and could able to run pmd driver for arm/arm64 case.
> >>
> >> Like to know how to address pci_io region mapping problem for
> >> arm/arm64, in-case /dev/ioports approach is not acceptable or else I
> >> can spent time on restructuring the patch?
> >>
> >
> > For the use case you describe, can't you use the vfio framework to
> > access the PCI BARs?
> >
> 
> I looked at file: drivers/vfio/pci/vfio_pci.c, func vfio_pci_map() and
> it look to me that it only maps ioresource_mem pci region, pasting
> code snap:
> 
> if (!(pci_resource_flags(pdev, index) & IORESOURCE_MEM))
> return -EINVAL;
> 
> 
> and I want to map ioresource_io pci region for arm platform in my
> use-case. Not sure vfio maps pci_iobar region?

Mapping I/O BARs is not portable, notably it doesn't work on x86.

You should be able access them using the read/write interface on
the vfio device.

Arnd


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2015-12-23 Thread Santosh Shukla
On 23 December 2015 at 03:26, Arnd Bergmann  wrote:
> On Tuesday 22 December 2015, Santosh Shukla wrote:
>> }
>>
>> So I care for /dev/ioport types interface who could do more than byte
>> data copy to/from user-space. I tested this patch with little
>> modification and could able to run pmd driver for arm/arm64 case.
>>
>> Like to know how to address pci_io region mapping problem for
>> arm/arm64, in-case /dev/ioports approach is not acceptable or else I
>> can spent time on restructuring the patch?
>>
>
> For the use case you describe, can't you use the vfio framework to
> access the PCI BARs?
>

I looked at drivers/vfio/pci/vfio_pci.c, function vfio_pci_map(), and
it looks to me like it only maps the IORESOURCE_MEM PCI region; pasting
a code snippet:

if (!(pci_resource_flags(pdev, index) & IORESOURCE_MEM))
        return -EINVAL;


and I want to map the IORESOURCE_IO PCI region for the arm platform in
my use case. I'm not sure whether vfio maps the PCI I/O BAR region?

> After all, you are talking about regular PCI devices, not access to
> random unknown I/O port numbers.
>
Yes, pci_iobar region.

> Arnd


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2015-12-23 Thread Santosh Shukla
On 23 December 2015 at 03:26, Arnd Bergmann  wrote:
> On Tuesday 22 December 2015, Santosh Shukla wrote:
>> }
>>
>> So I care for /dev/ioport types interface who could do more than byte
>> data copy to/from user-space. I tested this patch with little
>> modification and could able to run pmd driver for arm/arm64 case.
>>
>> Like to know how to address pci_io region mapping problem for
>> arm/arm64, in-case /dev/ioports approach is not acceptable or else I
>> can spent time on restructuring the patch?
>>
>
> For the use case you describe, can't you use the vfio framework to
> access the PCI BARs?
>

I looked at drivers/vfio/pci/vfio_pci.c, function vfio_pci_map(), and
it looks to me that it only maps the IORESOURCE_MEM PCI regions;
pasting a code snippet:

	if (!(pci_resource_flags(pdev, index) & IORESOURCE_MEM))
		return -EINVAL;


and I want to map the IORESOURCE_IO PCI region for the arm platform in my
use case. I am not sure whether vfio maps the PCI I/O BAR region?

> After all, you are talking about regular PCI devices, not access to
> random unknown I/O port numbers.
>
Yes, the PCI I/O BAR region.

> Arnd


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2015-12-22 Thread Arnd Bergmann
On Tuesday 22 December 2015, H. Peter Anvin wrote:
> On that subject, shouldn't we have common infrastructure to deal with memory
> mapped I/O ports in the kernel?  Or do we have that now?  I obviously don't
> pay too much attention...

We don't have it at the moment, though some of the code that we introduced
for arm64 is defined in common code, just not shared with anything else.

Changing other architectures over to use this is painful and gains the
architectures very little, so I doubt it is going to happen.

Arnd


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2015-12-22 Thread H. Peter Anvin
On December 22, 2015 1:56:20 PM PST, Arnd Bergmann  wrote:
>On Tuesday 22 December 2015, Santosh Shukla wrote:
>> }
>> 
>> So I care for /dev/ioport types interface who could do more than byte
>> data copy to/from user-space. I tested this patch with little
>> modification and could able to run pmd driver for arm/arm64 case.
>> 
>> Like to know how to address pci_io region mapping problem for
>> arm/arm64, in-case /dev/ioports approach is not acceptable or else I
>> can spent time on restructuring the patch?
>> 
>
>For the use case you describe, can't you use the vfio framework to
>access the PCI BARs?
>
>After all, you are talking about regular PCI devices, not access to
>random unknown I/O port numbers.
>
>   Arnd

On that subject, shouldn't we have common infrastructure to deal with memory 
mapped I/O ports in the kernel?  Or do we have that now?  I obviously don't pay 
too much attention...
-- 
Sent from my Android device with K-9 Mail. Please excuse brevity and formatting.


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2015-12-22 Thread Arnd Bergmann
On Tuesday 22 December 2015, Santosh Shukla wrote:
> }
> 
> So I care for /dev/ioport types interface who could do more than byte
> data copy to/from user-space. I tested this patch with little
> modification and could able to run pmd driver for arm/arm64 case.
> 
> Like to know how to address pci_io region mapping problem for
> arm/arm64, in-case /dev/ioports approach is not acceptable or else I
> can spent time on restructuring the patch?
> 

For the use case you describe, can't you use the vfio framework to
access the PCI BARs?

After all, you are talking about regular PCI devices, not access to
random unknown I/O port numbers.

Arnd


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2015-12-22 Thread Santosh Shukla
On 30 May 2014 at 17:02, Arnd Bergmann  wrote:
> On Thursday 29 May 2014 06:38:35 H. Peter Anvin wrote:
>> On 05/29/2014 02:26 AM, Arnd Bergmann wrote:
>> > On Wednesday 28 May 2014 14:41:52 H. Peter Anvin wrote:
>> >> On 05/19/2014 05:36 AM, Arnd Bergmann wrote:
>> >>>
>> >>> My feeling is that all devices we can think of fall into at least one
>> >>> of these categories:
>> >>>
>> >>> * legacy PC stuff that needs only byte access
>> >>> * PCI devices that can be accessed through sysfs
>> >>> * devices on x86 that can be accessed using iopl
>> >>>
>> >>
>> >> I don't believe PCI I/O space devices can be accessed through sysfs, but
>> >> perhaps I'm wrong?  (mmapping I/O space is not portable.)
>> >
>> > The interface is there, both a read/write and mmap on the resource
>> > bin_attribute. But it seems you're right, neither of them is implemented
>> > on all architectures.
>> >
>> > Only powerpc, microblaze, alpha, sparc and xtensa allow users to mmap
>> > I/O space, even though a lot of others could. The read-write interface
>> > is only defined for alpha, ia64, microblaze and powerpc.
>> >
>>
>> And how is that read/write interface defined?  Does it have the same
>> silly handling of data sizes?
>
> In architecture specific code, e.g. for powerpc:
>
> int pci_legacy_read(struct pci_bus *bus, loff_t port, u32 *val, size_t size)
> {
> 	unsigned long offset;
> 	struct pci_controller *hose = pci_bus_to_host(bus);
> 	struct resource *rp = &hose->io_resource;
> 	void __iomem *addr;
>
> 	/* Check if port can be supported by that bus. We only check
> 	 * the ranges of the PHB though, not the bus itself as the rules
> 	 * for forwarding legacy cycles down bridges are not our problem
> 	 * here. So if the host bridge supports it, we do it.
> 	 */
> 	offset = (unsigned long)hose->io_base_virt - _IO_BASE;
> 	offset += port;
>
> 	if (!(rp->flags & IORESOURCE_IO))
> 		return -ENXIO;
> 	if (offset < rp->start || (offset + size) > rp->end)
> 		return -ENXIO;
> 	addr = hose->io_base_virt + port;
>
> 	switch (size) {
> 	case 1:
> 		*((u8 *)val) = in_8(addr);
> 		return 1;
> 	case 2:
> 		if (port & 1)
> 			return -EINVAL;
> 		*((u16 *)val) = in_le16(addr);
> 		return 2;
> 	case 4:
> 		if (port & 3)
> 			return -EINVAL;
> 		*((u32 *)val) = in_le32(addr);
> 		return 4;
> 	}
> 	return -EINVAL;
> }
>
> The common code already enforces size to be 1, 2 or 4.
>

I have a use case for both arm and arm64 where a user-space application
accesses the pci_io address space. The use case: dpdk's virtio-pmd
user-space driver running inside the VM/guest. The virtio-pmd driver maps
the pci_io region into guest user space and does pmd driver
initialization. In the x86 case, the pmd driver uses iopl() so as to
access I/O ports via the port APIs in{b,w,l}/out{b,w,l}. The problem is
for platforms like arm, where the kernel does not map pci_io space:

file: arch/arm/kernel/bios32.c

int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
			enum pci_mmap_state mmap_state, int write_combine)
{
	if (mmap_state == pci_mmap_io)
		return -EINVAL;
	...
}

So I care for a /dev/ioports-type interface that can do more than
single-byte data copies to/from user space. I tested this patch with a
little modification and was able to run the pmd driver for the arm/arm64
case.

I would like to know how to address the pci_io region mapping problem for
arm/arm64, in case the /dev/ioports approach is not acceptable; or else I
can spend time on restructuring the patch.

Use-case details [1].

Thanks in advance.

[1] http://dpdk.org/ml/archives/dev/2015-December/030530.html
> Arnd


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-06-06 Thread Maciej W. Rozycki
On Wed, 4 Jun 2014, H. Peter Anvin wrote:

> > Also IIRC PCI-PCI bridges only forward port I/O space accesses 
> > within the low 64kB.
> 
> Not true.

 It must have been an implementation-specific constraint then (e.g. the 
DECchip 21050); it's been a while since I looked into it.  Thanks for 
straightening me out.

  Maciej


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-06-04 Thread H. Peter Anvin
On 06/01/2014 03:35 AM, Maciej W. Rozycki wrote:
> Also IIRC PCI-PCI bridges only forward port I/O space accesses 
> within the low 64kB.

Not true.

-hpa



Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-06-01 Thread Maciej W. Rozycki
On Sun, 11 May 2014, Josh Triplett wrote:

> > > > What if I attempt a four-byte read at 65535? That would access four
> > > > out-of-bounds bytes, right?
> > > 
> > > No, it would do an ind instruction on port 65535.
> > 
> > Yes, on x86. What about other architectures?
> 
> That's a good point; on architectures that map I/O to memory, this
> device should check port+count rather than port.  Is there a reliable
> #define that identifies architectures with that property, other than
> CONFIG_X86?

 FWIW, on x86 an IND from 65535 will be split into two separate bus read 
cycles, e.g. on a 32-bit data bus one at 0xfffc with only the MSB enabled 
and another one with the three LSB enabled (64-bit buses will address the 
first cycle at 0xfff8, etc.).  Data obtained from these cycles is then 
merged appropriately before writing to the destination.  The address put 
on the bus for the second cycle is implementation-specific as are all 
accesses beyond 0x so it might be 0x1 or it might be 0.

 If implementing unaligned port I/O on non-x86 targets I think it would be 
most reasonable to wrap the out-of-range part around back to 0 before 
merging data obtained this way.  The range supported can of course be 
different to what x86 supports and may be specific e.g. to the PCI host 
bridge (its port I/O space forwarding window size).  Systems with peer 
bridges may have multiple port I/O spaces too, this is one reason to have 
the size of the port I/O space extended beyond 64kB; address bits from #16 
up can then be used to select the intended host bridge to forward the 
access to.  Also IIRC PCI-PCI bridges only forward port I/O space accesses 
within the low 64kB.

 The easiest option would be not to support unaligned port I/O at all in 
our kernel device, even on x86.  I think that would actually be reasonable 
to do; I have yet to hear of a piece of hardware that has any use for 
unaligned port I/O.

  Maciej


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-30 Thread Arnd Bergmann
On Thursday 29 May 2014 06:38:35 H. Peter Anvin wrote:
> On 05/29/2014 02:26 AM, Arnd Bergmann wrote:
> > On Wednesday 28 May 2014 14:41:52 H. Peter Anvin wrote:
> >> On 05/19/2014 05:36 AM, Arnd Bergmann wrote:
> >>>
> >>> My feeling is that all devices we can think of fall into at least one
> >>> of these categories:
> >>>
> >>> * legacy PC stuff that needs only byte access
> >>> * PCI devices that can be accessed through sysfs
> >>> * devices on x86 that can be accessed using iopl
> >>>
> >>
> >> I don't believe PCI I/O space devices can be accessed through sysfs, but
> >> perhaps I'm wrong?  (mmapping I/O space is not portable.)
> > 
> > The interface is there, both a read/write and mmap on the resource
> > bin_attribute. But it seems you're right, neither of them is implemented
> > on all architectures.
> > 
> > Only powerpc, microblaze, alpha, sparc and xtensa allow users to mmap
> > I/O space, even though a lot of others could. The read-write interface
> > is only defined for alpha, ia64, microblaze and powerpc.
> > 
> 
> And how is that read/write interface defined?  Does it have the same
> silly handling of data sizes?

In architecture specific code, e.g. for powerpc:

int pci_legacy_read(struct pci_bus *bus, loff_t port, u32 *val, size_t size)
{
	unsigned long offset;
	struct pci_controller *hose = pci_bus_to_host(bus);
	struct resource *rp = &hose->io_resource;
	void __iomem *addr;

	/* Check if port can be supported by that bus. We only check
	 * the ranges of the PHB though, not the bus itself as the rules
	 * for forwarding legacy cycles down bridges are not our problem
	 * here. So if the host bridge supports it, we do it.
	 */
	offset = (unsigned long)hose->io_base_virt - _IO_BASE;
	offset += port;

	if (!(rp->flags & IORESOURCE_IO))
		return -ENXIO;
	if (offset < rp->start || (offset + size) > rp->end)
		return -ENXIO;
	addr = hose->io_base_virt + port;

	switch (size) {
	case 1:
		*((u8 *)val) = in_8(addr);
		return 1;
	case 2:
		if (port & 1)
			return -EINVAL;
		*((u16 *)val) = in_le16(addr);
		return 2;
	case 4:
		if (port & 3)
			return -EINVAL;
		*((u32 *)val) = in_le32(addr);
		return 4;
	}
	return -EINVAL;
}

The common code already enforces size to be 1, 2 or 4.

Arnd


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-29 Thread H. Peter Anvin
On 05/29/2014 02:26 AM, Arnd Bergmann wrote:
> On Wednesday 28 May 2014 14:41:52 H. Peter Anvin wrote:
>> On 05/19/2014 05:36 AM, Arnd Bergmann wrote:
>>>
>>> My feeling is that all devices we can think of fall into at least one
>>> of these categories:
>>>
>>> * legacy PC stuff that needs only byte access
>>> * PCI devices that can be accessed through sysfs
>>> * devices on x86 that can be accessed using iopl
>>>
>>
>> I don't believe PCI I/O space devices can be accessed through sysfs, but
>> perhaps I'm wrong?  (mmapping I/O space is not portable.)
> 
> The interface is there, both a read/write and mmap on the resource
> bin_attribute. But it seems you're right, neither of them is implemented
> on all architectures.
> 
> Only powerpc, microblaze, alpha, sparc and xtensa allow users to mmap
> I/O space, even though a lot of others could. The read-write interface
> is only defined for alpha, ia64, microblaze and powerpc.
> 

And how is that read/write interface defined?  Does it have the same
silly handling of data sizes?

-hpa




Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-29 Thread Arnd Bergmann
On Wednesday 28 May 2014 14:41:52 H. Peter Anvin wrote:
> On 05/19/2014 05:36 AM, Arnd Bergmann wrote:
> > 
> > My feeling is that all devices we can think of fall into at least one
> > of these categories:
> > 
> > * legacy PC stuff that needs only byte access
> > * PCI devices that can be accessed through sysfs
> > * devices on x86 that can be accessed using iopl
> > 
> 
> I don't believe PCI I/O space devices can be accessed through sysfs, but
> perhaps I'm wrong?  (mmapping I/O space is not portable.)

The interface is there, both a read/write and mmap on the resource
bin_attribute. But it seems you're right, neither of them is implemented
on all architectures.

Only powerpc, microblaze, alpha, sparc and xtensa allow users to mmap
I/O space, even though a lot of others could. The read-write interface
is only defined for alpha, ia64, microblaze and powerpc.

Arnd


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-29 Thread Arnd Bergmann
On Wednesday 28 May 2014 14:41:52 H. Peter Anvin wrote:
 On 05/19/2014 05:36 AM, Arnd Bergmann wrote:
  
  My feeling is that all devices we can think of fall into at least one
  of these categories:
  
  * legacy PC stuff that needs only byte access
  * PCI devices that can be accessed through sysfs
  * devices on x86 that can be accessed using iopl
  
 
 I don't believe PCI I/O space devices can be accessed through sysfs, but
 perhaps I'm wrong?  (mmapping I/O space is not portable.)

The interface is there, both a read/write and mmap on the resource
bin_attribute. But it seems you're right, neither of them is implemented
on all architectures.

Only powerpc, microblaze, alpha, sparc and xtensa allow users to mmap
I/O space, even though a lot of others could. The read-write interface
is only defined for alpha, ia64, microblaze and powerpc.

Arnd
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-29 Thread H. Peter Anvin
On 05/29/2014 02:26 AM, Arnd Bergmann wrote:
> On Wednesday 28 May 2014 14:41:52 H. Peter Anvin wrote:
> > On 05/19/2014 05:36 AM, Arnd Bergmann wrote:
> > >
> > > My feeling is that all devices we can think of fall into at least one
> > > of these categories:
> > >
> > > * legacy PC stuff that needs only byte access
> > > * PCI devices that can be accessed through sysfs
> > > * devices on x86 that can be accessed using iopl
> > >
> >
> > I don't believe PCI I/O space devices can be accessed through sysfs, but
> > perhaps I'm wrong?  (mmapping I/O space is not portable.)
>
> The interface is there, both a read/write and mmap on the resource
> bin_attribute. But it seems you're right, neither of them is implemented
> on all architectures.
>
> Only powerpc, microblaze, alpha, sparc and xtensa allow users to mmap
> I/O space, even though a lot of others could. The read-write interface
> is only defined for alpha, ia64, microblaze and powerpc.
>

And how is that read/write interface defined?  Does it have the same
silly handling of data sizes?

-hpa




Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-28 Thread H. Peter Anvin
On 05/19/2014 05:36 AM, Arnd Bergmann wrote:
> 
> My feeling is that all devices we can think of fall into at least one
> of these categories:
> 
> * legacy PC stuff that needs only byte access
> * PCI devices that can be accessed through sysfs
> * devices on x86 that can be accessed using iopl
> 

I don't believe PCI I/O space devices can be accessed through sysfs, but
perhaps I'm wrong?  (mmapping I/O space is not portable.)


-hpa




Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-19 Thread Arnd Bergmann
On Thursday 15 May 2014 14:56:46 j...@joshtriplett.org wrote:
> On Tue, May 13, 2014 at 03:10:59PM -0700, H. Peter Anvin wrote:
> > On 05/09/2014 03:38 PM, Josh Triplett wrote:
> > > On Fri, May 09, 2014 at 02:20:45PM -0700, H. Peter Anvin wrote:
> > >> On 05/09/2014 02:12 PM, Arnd Bergmann wrote:
> > >>>
> >  However, if we're going to have these devices I'm wondering if having
> >  /dev/portw and /dev/portl (or something like that) might not make 
> >  sense,
> >  rather than requiring a system call per transaction.
> > >>>
> > >>> Actually the behavior of /dev/port for >1 byte writes seems questionable
> > >>> already: There are very few devices on which writing to consecutive
> > >>> port numbers makes sense. Normally you just want to write a series
> > >>> of bytes (or 16/32 bit words) into the same port number instead,
> > >>> as the outsb()/outsw()/outsl() functions do.
> > >>>
> > >>
> > >> Indeed.  I missed the detail that it increments the port index; it is
> > >> virtually guaranteed to be bogus.
> > > 
> > > Exactly.  It might make sense to have ioport8/ioport16/ioport32 devices
> > > that accept arbitrary-length reads and writes (divisible by the size)
> > > and do the equivalent of the string I/O instructions outs/ins, but for
> > > the moment I'd like to add the single device that people always seem to
> > > want and can't get from /dev/port.  If someone's doing enough writes
> > > that doing a syscall per in/out instruction seems like too much
> > > overhead, they can write a real device driver or use ioperm/iopl.
> > 
> > I really have a problem with the logic "our current interface is wrong,
> > so let's introduce another wrong interface which solves a narrow use
> > case".  In some ways it would actually be *better* to use an ioctl
> > interface on /dev/port in that case...
> 
> ioport{8,16,32} seems preferable to an ioctl on /dev/port, but in any
> case, I'd be happy to adapt this patch to whatever interface seems
> preferable.  I just don't want to let the perfect be the enemy of the
> good here; 16-bit and 32-bit port operations are currently completely
> impossible via /dev/port, and I'm primarily interested in fixing that,
> not necessarily in creating a completely generalized interface for doing
> high-performance repeated I/O operations that ought to be in the kernel
> anyway.

I'd prefer to do the absolute minimum to enable this: port I/O is not
something that people really should be doing in this decade, at least
not on non-x86.

A simple pair of ioctls would be fine in my mind, if we think there
is actually a strong use case. Josh, where did you find that testing
driver? Do you have reason to believe it's actually useful on non-x86?
If the reason for the existence of that code is just that someone
didn't find the iopl() man page, I'd rather not have it at all.

My feeling is that all devices we can think of fall into at least one
of these categories:

* legacy PC stuff that needs only byte access
* PCI devices that can be accessed through sysfs
* devices on x86 that can be accessed using iopl

Arnd


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-15 Thread josh
On Tue, May 13, 2014 at 03:10:59PM -0700, H. Peter Anvin wrote:
> On 05/09/2014 03:38 PM, Josh Triplett wrote:
> > On Fri, May 09, 2014 at 02:20:45PM -0700, H. Peter Anvin wrote:
> >> On 05/09/2014 02:12 PM, Arnd Bergmann wrote:
> >>>
>  However, if we're going to have these devices I'm wondering if having
>  /dev/portw and /dev/portl (or something like that) might not make sense,
>  rather than requiring a system call per transaction.
> >>>
> >>> Actually the behavior of /dev/port for >1 byte writes seems questionable
> >>> already: There are very few devices on which writing to consecutive
> >>> port numbers makes sense. Normally you just want to write a series
> >>> of bytes (or 16/32 bit words) into the same port number instead,
> >>> as the outsb()/outsw()/outsl() functions do.
> >>>
> >>
> >> Indeed.  I missed the detail that it increments the port index; it is
> >> virtually guaranteed to be bogus.
> > 
> > Exactly.  It might make sense to have ioport8/ioport16/ioport32 devices
> > that accept arbitrary-length reads and writes (divisible by the size)
> > and do the equivalent of the string I/O instructions outs/ins, but for
> > the moment I'd like to add the single device that people always seem to
> > want and can't get from /dev/port.  If someone's doing enough writes
> > that doing a syscall per in/out instruction seems like too much
> > overhead, they can write a real device driver or use ioperm/iopl.
> 
> I really have a problem with the logic "our current interface is wrong,
> so let's introduce another wrong interface which solves a narrow use
> case".  In some ways it would actually be *better* to use an ioctl
> interface on /dev/port in that case...

ioport{8,16,32} seems preferable to an ioctl on /dev/port, but in any
case, I'd be happy to adapt this patch to whatever interface seems
preferable.  I just don't want to let the perfect be the enemy of the
good here; 16-bit and 32-bit port operations are currently completely
impossible via /dev/port, and I'm primarily interested in fixing that,
not necessarily in creating a completely generalized interface for doing
high-performance repeated I/O operations that ought to be in the kernel
anyway.

- Josh Triplett


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-13 Thread H. Peter Anvin
On 05/09/2014 03:38 PM, Josh Triplett wrote:
> On Fri, May 09, 2014 at 02:20:45PM -0700, H. Peter Anvin wrote:
>> On 05/09/2014 02:12 PM, Arnd Bergmann wrote:
>>>
 However, if we're going to have these devices I'm wondering if having
 /dev/portw and /dev/portl (or something like that) might not make sense,
 rather than requiring a system call per transaction.
>>>
>>> Actually the behavior of /dev/port for >1 byte writes seems questionable
>>> already: There are very few devices on which writing to consecutive
>>> port numbers makes sense. Normally you just want to write a series
>>> of bytes (or 16/32 bit words) into the same port number instead,
>>> as the outsb()/outsw()/outsl() functions do.
>>>
>>
>> Indeed.  I missed the detail that it increments the port index; it is
>> virtually guaranteed to be bogus.
> 
> Exactly.  It might make sense to have ioport8/ioport16/ioport32 devices
> that accept arbitrary-length reads and writes (divisible by the size)
> and do the equivalent of the string I/O instructions outs/ins, but for
> the moment I'd like to add the single device that people always seem to
> want and can't get from /dev/port.  If someone's doing enough writes
> that doing a syscall per in/out instruction seems like too much
> overhead, they can write a real device driver or use ioperm/iopl.
> 

I really have a problem with the logic "our current interface is wrong,
so let's introduce another wrong interface which solves a narrow use
case".  In some ways it would actually be *better* to use an ioctl
interface on /dev/port in that case...

-hpa





Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-11 Thread Josh Triplett
On Sun, May 11, 2014 at 02:50:06PM +0200, Jann Horn wrote:
> On Sat, May 10, 2014 at 12:32:46PM -0700, Josh Triplett wrote:
> > On Sat, May 10, 2014 at 09:07:42AM +0200, Jann Horn wrote:
> > > On Fri, May 09, 2014 at 12:19:16PM -0700, Josh Triplett wrote:
> > > > +   if (port > 65535)
> > > > +   return 0;
> > > > +   switch (count) {
> > > [...]
> > > > +   case 4:
> > > > +   if (__put_user(inl(port), buf) < 0)
> > > > +   return -EFAULT;
> > > 
> > > What if I attempt a four-byte read at 65535? That would access four
> > > out-of-bounds bytes, right?
> > 
> > No, it would do an ind instruction on port 65535.
> 
> Yes, on x86. What about other architectures?

That's a good point; on architectures that map I/O to memory, this
device should check port+count rather than port.  Is there a reliable
#define that identifies architectures with that property, other than
CONFIG_X86?

- Josh Triplett


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-11 Thread Jann Horn
On Sat, May 10, 2014 at 12:32:46PM -0700, Josh Triplett wrote:
> On Sat, May 10, 2014 at 09:07:42AM +0200, Jann Horn wrote:
> > On Fri, May 09, 2014 at 12:19:16PM -0700, Josh Triplett wrote:
> > > + if (port > 65535)
> > > + return 0;
> > > + switch (count) {
> > [...]
> > > + case 4:
> > > + if (__put_user(inl(port), buf) < 0)
> > > + return -EFAULT;
> > 
> > What if I attempt a four-byte read at 65535? That would access four
> > out-of-bounds bytes, right?
> 
> No, it would do an ind instruction on port 65535.

Yes, on x86. What about other architectures?

http://lxr.free-electrons.com/source/arch/m68k/include/asm/io_mm.h#L110
110 #define inl mcf_pci_inl

http://lxr.free-electrons.com/source/arch/m68k/platform/coldfire/pci.c#L163
163 u32 mcf_pci_inl(u32 addr)
164 {
165 return le32_to_cpu(__raw_readl(iospace + (addr & PCI_IO_MASK)));
166 }

http://lxr.free-electrons.com/source/arch/m68k/platform/coldfire/pci.c#L37
 37 #define PCI_IO_SIZE 0x00010000  /* 64k */
 38 #define PCI_IO_MASK (PCI_IO_SIZE - 1)

http://lxr.free-electrons.com/source/arch/m68k/include/asm/raw_io.h#L54
 54 #define __raw_readl in_be32

http://lxr.free-electrons.com/source/arch/m68k/include/asm/raw_io.h#L36
 36 #define in_be32(addr) \
 37 ({ u32 __v = (*(__force volatile u32 *) (addr)); __v; })

As far as I can see, you'd get a slightly out-of-bounds read here. Or
is that feature only intended for x86?




Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-10 Thread Josh Triplett
On Sat, May 10, 2014 at 07:18:45PM +0200, Greg Kroah-Hartman wrote:
> On Fri, May 09, 2014 at 12:19:16PM -0700, Josh Triplett wrote:
> > @@ -827,6 +898,9 @@ static const struct memdev {
> >  #ifdef CONFIG_PRINTK
> > [11] = { "kmsg", 0644, _fops, NULL },
> >  #endif
> > +#ifdef CONFIG_DEVPORT
> > +[12] = { "ioports", 0, _fops, NULL },
> 
> Odd extra space?

Copy/paste from the "port" device, which apparently has that space to
make "[4]" and "11" line up at the ones digit.  Will fix.

- Josh Triplett


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-10 Thread Josh Triplett
On Sat, May 10, 2014 at 09:07:42AM +0200, Jann Horn wrote:
> On Fri, May 09, 2014 at 12:19:16PM -0700, Josh Triplett wrote:
> > +   if (port > 65535)
> > +   return 0;
> > +   switch (count) {
> [...]
> > +   case 4:
> > +   if (__put_user(inl(port), buf) < 0)
> > +   return -EFAULT;
> 
> What if I attempt a four-byte read at 65535? That would access four
> out-of-bounds bytes, right?

No, it would do an ind instruction on port 65535.

- Josh Triplett


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-10 Thread Greg Kroah-Hartman
On Fri, May 09, 2014 at 12:19:16PM -0700, Josh Triplett wrote:
> @@ -827,6 +898,9 @@ static const struct memdev {
>  #ifdef CONFIG_PRINTK
>   [11] = { "kmsg", 0644, _fops, NULL },
>  #endif
> +#ifdef CONFIG_DEVPORT
> +  [12] = { "ioports", 0, _fops, NULL },

Odd extra space?



Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-10 Thread Jann Horn
On Fri, May 09, 2014 at 12:19:16PM -0700, Josh Triplett wrote:
> + if (port > 65535)
> + return 0;
> + switch (count) {
[...]
> + case 4:
> + if (__put_user(inl(port), buf) < 0)
> + return -EFAULT;

What if I attempt a four-byte read at 65535? That would access four
out-of-bounds bytes, right?




Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-09 Thread Josh Triplett
On Fri, May 09, 2014 at 02:20:45PM -0700, H. Peter Anvin wrote:
> On 05/09/2014 02:12 PM, Arnd Bergmann wrote:
> > 
> >> However, if we're going to have these devices I'm wondering if having
> >> /dev/portw and /dev/portl (or something like that) might not make sense,
> >> rather than requiring a system call per transaction.
> > 
> > Actually the behavior of /dev/port for >1 byte writes seems questionable
> > already: There are very few devices on which writing to consecutive
> > port numbers makes sense. Normally you just want to write a series
> > of bytes (or 16/32 bit words) into the same port number instead,
> > as the outsb()/outsw()/outsl() functions do.
> > 
> 
> Indeed.  I missed the detail that it increments the port index; it is
> virtually guaranteed to be bogus.

Exactly.  It might make sense to have ioport8/ioport16/ioport32 devices
that accept arbitrary-length reads and writes (divisible by the size)
and do the equivalent of the string I/O instructions outs/ins, but for
the moment I'd like to add the single device that people always seem to
want and can't get from /dev/port.  If someone's doing enough writes
that doing a syscall per in/out instruction seems like too much
overhead, they can write a real device driver or use ioperm/iopl.

- Josh Triplett


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-09 Thread H. Peter Anvin
On 05/09/2014 02:12 PM, Arnd Bergmann wrote:
> 
>> However, if we're going to have these devices I'm wondering if having
>> /dev/portw and /dev/portl (or something like that) might not make sense,
>> rather than requiring a system call per transaction.
> 
> Actually the behavior of /dev/port for >1 byte writes seems questionable
> already: There are very few devices on which writing to consecutive
> port numbers makes sense. Normally you just want to write a series
> of bytes (or 16/32 bit words) into the same port number instead,
> as the outsb()/outsw()/outsl() functions do.
> 

Indeed.  I missed the detail that it increments the port index; it is
virtually guaranteed to be bogus.

-hpa




Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-09 Thread Arnd Bergmann
On Friday 09 May 2014 13:54:05 H. Peter Anvin wrote:
> On 05/09/2014 12:58 PM, Arnd Bergmann wrote:
> > On Friday 09 May 2014 12:19:16 Josh Triplett wrote:
> > 
> >> +if (!access_ok(VERIFY_WRITE, buf, count))
> >> +return -EFAULT;
> >> +if (port > 65535)
> >> +return 0;
> > 
> > This should probably test against IO_SPACE_LIMIT, which may be zero,
> > something larger than 65536 or even ULONG_MAX, depending on the
> > architecture.
> > 
> > In cases where this IO_SPACE_LIMIT is zero or ULONG_MAX, we should
> > probably disallow access completely. The former case is for architectures
> > that don't have any I/O ports, the other is either a mistake, or is
> > used when inb is defined as readb, and the port numbers are just virtual
> > addresses.
> > 
> 
> PCI supports a 32-bit I/O address space, so if the architecture permits
> it, having a 32-bit I/O space is perfectly legitimate.

Right, but on all 32-bit architectures other than x86, the I/O ports
are mapped into physical memory addresses, which means you can't
map all of the I/O space into the CPU address space. On 64-bit
architectures you can, but then it's UINT_MAX, not ULONG_MAX.

There is also the theoretical case of machines mapping a window
of I/O addresses with portnumber==phys_addr as we normally do
for PCI memory space, but I haven't seen anyone actually do that.
Practically every PCI implementation (if they have I/O space at all)
maps a small number of ports (65536 or 1048576 mostly) starting at
port number zero to a fixed physical CPU address.

> It is worth noting that /dev/port has the same problem.

Right. We should fix that, too.

> However, if we're going to have these devices I'm wondering if having
> /dev/portw and /dev/portl (or something like that) might not make sense,
> rather than requiring a system call per transaction.

Actually the behavior of /dev/port for >1 byte writes seems questionable
already: There are very few devices on which writing to consecutive
port numbers makes sense. Normally you just want to write a series
of bytes (or 16/32 bit words) into the same port number instead,
as the outsb()/outsw()/outsl() functions do.

Arnd


Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-09 Thread H. Peter Anvin
On 05/09/2014 12:58 PM, Arnd Bergmann wrote:
> On Friday 09 May 2014 12:19:16 Josh Triplett wrote:
> 
>> +if (!access_ok(VERIFY_WRITE, buf, count))
>> +return -EFAULT;
>> +if (port > 65535)
>> +return 0;
> 
> This should probably test against IO_SPACE_LIMIT, which may be zero,
> something larger than 65536 or even ULONG_MAX, depending on the
> architecture.
> 
> In cases where this IO_SPACE_LIMIT is zero or ULONG_MAX, we should
> probably disallow access completely. The former case is for architectures
> that don't have any I/O ports, the other is either a mistake, or is
> used when inb is defined as readb, and the port numbers are just virtual
> addresses.
> 

PCI supports a 32-bit I/O address space, so if the architecture permits
it, having a 32-bit I/O space is perfectly legitimate.

It is worth noting that /dev/port has the same problem.

However, if we're going to have these devices I'm wondering if having
/dev/portw and /dev/portl (or something like that) might not make sense,
rather than requiring a system call per transaction.

Also, x86 supports unaligned I/O port references, but not all
architectures do.  On the other hand, x86 also supports ioperm().

-hpa





Re: [PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-09 Thread Arnd Bergmann
On Friday 09 May 2014 12:19:16 Josh Triplett wrote:

> + if (!access_ok(VERIFY_WRITE, buf, count))
> + return -EFAULT;
> + if (port > 65535)
> + return 0;

This should probably test against IO_SPACE_LIMIT, which may be zero,
something larger than 65536 or even ULONG_MAX, depending on the
architecture.

In cases where this IO_SPACE_LIMIT is zero or ULONG_MAX, we should
probably disallow access completely. The former case is for architectures
that don't have any I/O ports, the other is either a mistake, or is
used when inb is defined as readb, and the port numbers are just virtual
addresses.

Arnd


[PATCH] drivers/char/mem.c: Add /dev/ioports, supporting 16-bit and 32-bit ports

2014-05-09 Thread Josh Triplett
/dev/port only supports reading and writing 8-bit ports; multi-byte
operations on /dev/port will just operate on multiple successive 8-bit
ports.

Add a new device, /dev/ioports, which supports reading and writing
16-bit and 32-bit ports.  This makes it possible to perform arbitrary
I/O port operations cleanly from privileged userspace processes, without
using iopl or ioperm.

Signed-off-by: Josh Triplett 
---

Written after encountering yet another out-of-tree omnibus "do arbitrary
I/O for test purposes" driver; this one's main reason for existence was
the inability to operate on 16-bit and 32-bit I/O ports.  Let's get a
proper interface into the kernel, to make such drivers obsolete.

I've also written a corresponding manpage patch, which I'll submit as a
reply to this one.

 drivers/char/mem.c | 79 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 77 insertions(+), 2 deletions(-)

diff --git a/drivers/char/mem.c b/drivers/char/mem.c
index 917403f..84e0526 100644
--- a/drivers/char/mem.c
+++ b/drivers/char/mem.c
@@ -35,6 +35,7 @@
 #endif
 
 #define DEVPORT_MINOR  4
+#define DEVIOPORTS_MINOR   12
 
 static inline unsigned long size_inside_page(unsigned long start,
 unsigned long size)
@@ -584,6 +585,69 @@ static ssize_t write_port(struct file *file, const char 
__user *buf,
*ppos = i;
return tmp-buf;
 }
+
+static ssize_t read_ioports(struct file *file, char __user *buf,
+   size_t count, loff_t *ppos)
+{
+   unsigned long port = *ppos;
+
+   if (!access_ok(VERIFY_WRITE, buf, count))
+   return -EFAULT;
+   if (port > 65535)
+   return 0;
+   switch (count) {
+   case 1:
+   if (__put_user(inb(port), buf) < 0)
+   return -EFAULT;
+   return 1;
+   case 2:
+   if (__put_user(inw(port), buf) < 0)
+   return -EFAULT;
+   return 2;
+   case 4:
+   if (__put_user(inl(port), buf) < 0)
+   return -EFAULT;
+   return 4;
+   default:
+   return -EINVAL;
+   }
+}
+
+static ssize_t write_ioports(struct file *file, const char __user *buf,
+size_t count, loff_t *ppos)
+{
+   unsigned long port = *ppos;
+
+   if (!access_ok(VERIFY_READ, buf, count))
+   return -EFAULT;
+   if (port > 65535)
+   return 0;
+   switch (count) {
+   case 1: {
+   u8 b;
+   if (__get_user(b, buf))
+   return -EFAULT;
+   outb(b, port);
+   return 1;
+   }
+   case 2: {
+   u16 w;
+   if (__get_user(w, buf))
+   return -EFAULT;
+   outw(w, port);
+   return 2;
+   }
+   case 4: {
+   u32 l;
+   if (__get_user(l, buf))
+   return -EFAULT;
+   outl(l, port);
+   return 4;
+   }
+   default:
+   return -EINVAL;
+   }
+}
 #endif
 
 static ssize_t read_null(struct file *file, char __user *buf,
@@ -779,6 +843,13 @@ static const struct file_operations port_fops = {
.write  = write_port,
.open   = open_port,
 };
+
+static const struct file_operations ioports_fops = {
+   .llseek = memory_lseek,
+   .read   = read_ioports,
+   .write  = write_ioports,
+   .open   = open_port,
+};
 #endif
 
 static const struct file_operations zero_fops = {
@@ -827,6 +898,9 @@ static const struct memdev {
 #ifdef CONFIG_PRINTK
[11] = { "kmsg", 0644, &kmsg_fops, NULL },
 #endif
+#ifdef CONFIG_DEVPORT
+[12] = { "ioports", 0, &ioports_fops, NULL },
+#endif
 };
 
 static int memory_open(struct inode *inode, struct file *filp)
@@ -892,9 +966,10 @@ static int __init chr_dev_init(void)
continue;
 
/*
-* Create /dev/port?
+* Create /dev/port and /dev/ioports?
 */
-   if ((minor == DEVPORT_MINOR) && !arch_has_dev_port())
+   if ((minor == DEVPORT_MINOR || minor == DEVIOPORTS_MINOR)
+   && !arch_has_dev_port())
continue;
 
device_create(mem_class, NULL, MKDEV(MEM_MAJOR, minor),
-- 
2.0.0.rc2


