Re: Running ivshmem-demo in Jetson TK1.

2018-03-12 Thread jonas
On Friday, 9 March 2018 at 09:10:12 UTC+1, Claudio Scordino wrote:
> Hi Jonas,
>
> 2017-12-10 17:34 GMT+01:00 jonas:
> > Hi,
> >
> > I'll be making an effort to contribute my work to the master branch of
> > Jailhouse within the next couple of weeks.
>
> If I'm not wrong, those patches were not eventually upstreamed.
> Do you still plan to upstream them?
>
> Many thanks,
>
>               Claudio

Hi,

I upstreamed a patch-set, see:
https://groups.google.com/forum/#!topic/jailhouse-dev/IqwQsQ9JEno

Henning said he would take care of introducing this into Jailhouse, but I don't 
know the current status or plans for that work.

/Jonas

-- 
You received this message because you are subscribed to the Google Groups 
"Jailhouse" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jailhouse-dev+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Running ivshmem-demo in Jetson TK1.

2018-03-09 Thread Claudio Scordino
Hi Jonas,

2017-12-10 17:34 GMT+01:00 jonas :

> Hi,
>
> I'll be making an effort to contribute my work to the master branch of
> Jailhouse within the next couple of weeks.
>

If I'm not wrong, those patches were not eventually upstreamed.
Do you still plan to upstream them?

Many thanks,

  Claudio



Re: Running ivshmem-demo in Jetson TK1.

2017-12-22 Thread Luca Cuomo

Re: Running ivshmem-demo in Jetson TK1.

2017-12-22 Thread jonas

Re: Running ivshmem-demo in Jetson TK1.

2017-12-22 Thread Luca Cuomo

Re: Running ivshmem-demo in Jetson TK1.

2017-12-22 Thread jonas

Re: Running ivshmem-demo in Jetson TK1.

2017-12-21 Thread jonas

Re: Running ivshmem-demo in Jetson TK1.

2017-12-21 Thread Jan Kiszka

Re: Running ivshmem-demo in Jetson TK1.

2017-12-21 Thread Luca Cuomo
On Thursday, 21 December 2017 at 14:19:48 UTC+1, Jan Kiszka wrote:
> On 2017-12-21 10:05, Luca Cuomo wrote:
> > On Sunday, 10 December 2017 at 17:34:24 UTC+1, jonas wrote:
> >> Hi,
> >>
> >> I'll be making an effort to contribute my work to the master branch of
> >> Jailhouse within the next couple of weeks.
> >>
> >> /Jonas
> >>
> >> On Friday, 8 December 2017 at 06:47:33 UTC+1, Constantin Petra wrote:
> >>> Hi,
> >>>
> >>> I'm resending the patch(es) that were shared by Jonas a while ago.
> >>>
> >>> Best Regards,
> >>> Constantin
> >>>
> >>> On Thu, Dec 7, 2017 at 10:08 PM, Henning Schild wrote:
> >>> Hi Claudio,
> >>>
> >>> Am Thu, 7 Dec 2017 17:29:45 +0100 schrieb Claudio Scordino:
> >>>
> >>>> Hi guys,
> >>>>
> >>>> 2017-08-09 15:23 GMT+02:00 Henning Schild:
> >>>>
> >>>>> Hey,
> >>>>>
> >>>>> Unfortunately Jonas never published his overall changes; maybe now
> >>>>> he understands why I kindly asked him to do so.
> >>>>> I think Jonas ran into every single problem one could encounter
> >>>>> along the way, so if you read the thread you will probably be able
> >>>>> to come up with a similar patch at some point. That would be a
> >>>>> duplication of effort; getting a first working patch into a
> >>>>> mergeable form is another story.
> >>>>>
> >>>>> If there are legal reasons not to publish code on the list, I
> >>>>> suggest you exchange patches with each other. But of course I would
> >>>>> like to see contributions eventually ;).
> >>>>
> >>>> We need to run IVSHMEM on the TX1.
> >>>> Any chance of upstreaming those patches, to avoid wasting time
> >>>> re-inventing the wheel?
> >>>> If that's not possible, please send me a copy privately.
> >>>
> >>> Unfortunately I do not have those patches either. I am afraid someone
> >>> will have to do that over again.
> >>>
> >>> But the whole thread is basically about enabling the demo, which is
> >>> interesting for people just getting started with ivshmem, and for
> >>> people who want to implement their own protocol on top of it.
> >>> If you are just looking at running ivshmem-net, you are good to go;
> >>> that code is in a working state.
> >>>
> >>> Henning
> >>>
> >>>> Many thanks and best regards,
> >>>>
> >>>>               Claudio
> >
> > Hi all,
> >
> > I've applied the provided patch and I'm trying to connect the Linux root
> > cell with the bare-metal cell running inmates/demos/arm/ivshmem-demo.c.
> > I've attached the configurations used (jetson-tx1-ivshmem for the root
> > cell, the other one for the bare-metal cell).
> > When I create the bare-metal cell, the connection between the PCI
> > devices comes up correctly:
> > -
> > Initializing Jailhouse hypervisor  on CPU 2
> > Code location: 0xc0200050
> > Page pool usage after early setup: mem 63/16358, remap 64/131072
> > Initializing processors:
> >  CPU 2... OK
> >  CPU 1... OK
> >  CPU 3... OK
> >  CPU 0... OK
> > Adding virtual PCI device 00:0f.0 to cell "Jetson-TX1-ivshmem"
> > Page pool usage after late setup: mem 74/16358, remap 69/131072
> > Activating hypervisor
> > Adding virtual PCI device 00:0f.0 to cell "jetson-tx1-demo-shmem"
> > Shared memory connection established: "jetson-tx1-demo-shmem" <-->
> > "Jetson-TX1-ivshmem"
> > ---
> >
> > 1st problem: no PCI device appears in Linux (lspci does not return
> > anything)
> 
> Are you using a Linux kernel with the Jailhouse-related patches? Did you
> enable CONFIG_PCI_HOST_GENERIC and CONFIG_PCI_DOMAINS? On most ARM
> systems, Jailhouse exposes the ivshmem devices via a virtual host bridge.

Yes, I'm using a Jailhouse-enabled kernel on the Jetson TX1, with the CONFIG
options above enabled. If I put ".pci_is_virtual = 1," in the root-cell
configuration, dmesg shows:
[  102.100405] jailhouse: CONFIG_OF_OVERLAY disabled
[  102.100417] jailhouse: failed to add virtual host controller
[  102.100422] The Jailhouse is opening.

If I set it to 0, there is no error message, but no PCI device shows up, as
before.

> 
> > 
> > Then i launch the ivshmem-demo.bin. I've made some modification:
> > * in inmate/lib/arm-common/pci.c the #define PCI_CFG_BASE
> > (0x48000000)
> >   as defined in jetson-tx-ivshmem.c: 
> >config.header.platform_info.pci_mmconfig_base.
> >   In the same file i've enabled the print of pci_read/write_config
> > * in inmate/demos/arm/ivshmem-demo.c i've removed the filter on 
> >   class/revision in order to get a suitable pci device with proper 
> >   

Re: Running ivshmem-demo in Jetson TK1.

2017-12-21 Thread Jan Kiszka
On 2017-12-21 10:05, Luca Cuomo wrote:
> Il giorno domenica 10 dicembre 2017 17:34:24 UTC+1, jonas ha scritto:
>> Hi,
>>
>> I'll be making an effort to contribute my work to the master branch of 
>> Jailhouse within the next couple of weeks.
>>
>> /Jonas
>>
>> Den fredag 8 december 2017 kl. 06:47:33 UTC+1 skrev Constantin Petra:
>>> Hi,
>>>
>>>
>>> I'm resending the patch(es) that were shared by Jonas a while ago.
>>>
>>>
>>> Best Regards,
>>> Constantin
>>>
>>>
>>> On Thu, Dec 7, 2017 at 10:08 PM, Henning Schild  
>>> wrote:
>>> Hi Claudio,
>>>
>>>
>>>
>>> Am Thu, 7 Dec 2017 17:29:45 +0100
>>>
>>> schrieb Claudio Scordino :
>>>
>>>
>>>
 Hi guys,
>>>

>>>
 2017-08-09 15:23 GMT+02:00 Henning Schild
>>>
 :
>>>

>>>
> Hey,
>>>
>
>>>
> unfortunately Jonas never published his overall changes, maybe now
>>>
> he understands why i kindly asked him to do so.
>>>
> I think Jonas maybe ran into every single problem one could
>>>
> encounter on the way, so if you read the thread you will probably
>>>
> be able to come up with a similar patch at some point. That would
>>>
> be the duplication of efforts, getting a first working patch into a
>>>
> mergeable form is another story.
>>>
>
>>>
> If there are legal reasons to not publish code on the list i suggest
>>>
> you exchange patches between each other. But of cause i would like
>>>
> to see contributions eventually ;).
>>>
>
>>>
>
>>>
 We need to run IVSHMEM on the TX1.
>>>
 Any chance of upstreaming those patches to not waste time
>>>
 re-inventing the wheel ?
>>>
 If that's not possible, please send me a copy privately.
>>>
>>>
>>>
>>> Unfortunately i do not have those patches either. I am afraid someone
>>>
>>> will have to do that over again.
>>>
>>>
>>>
>>> But the whole thread is basically about enabling the demo, which is
>>>
>>> interesting for people just getting started with ivshmem. And for
>>>
>>> people that want to implement their own protocol on top of it.
>>>
>>> If you are just looking at running ivshmem-net you are good to go, that
>>>
>>> code is in a working state.
>>>
>>>
>>>
>>> Henning
>>>
>>>
>>>
>>>
>>>
 Many thanks and best regards,
>>>

>>>
               Claudio
> 
> Hi all,
> 
> i've applied the provided patch and i'm trying to connect the linux root cell 
> with the bare metal cell running the inmate/demo/arm/ivshmem-demo.c. I've 
> attached the used configurations (jetson-tx1-ivshmem for the root cell and 
> the other one for the bare metal). 
> When i create the bare metal cell the connection between pci devices is 
> correctly up.
> - 
> Initializing Jailhouse hypervisor  on CPU 2
> Code location: 0xc0200050
> Page pool usage after early setup: mem 63/16358, remap 64/131072
> Initializing processors:
>  CPU 2... OK
>  CPU 1... OK
>  CPU 3... OK
>  CPU 0... OK
> Adding virtual PCI device 00:0f.0 to cell "Jetson-TX1-ivshmem"
> Page pool usage after late setup: mem 74/16358, remap 69/131072
> Activating hypervisor
> Adding virtual PCI device 00:0f.0 to cell "jetson-tx1-demo-shmem"
> Shared memory connection established: "jetson-tx1-demo-shmem" <--> 
> "Jetson-TX1-ivshmem"
> ---
> 
> 1st problem: no PCI device appears in linux (lspci does not return anything)

Are you using a Linux kernel with the Jailhouse-related patches? Did you
enable CONFIG_PCI_HOST_GENERIC and CONFIG_PCI_DOMAINS? On most ARM
systems, Jailhouse exposes the ivshmem devices via a virtual host bridge.
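
In concrete terms, the options Jan refers to should appear like this in the root-cell kernel's .config (fragment only; shown here as an assumption about a typical Jailhouse-enabled ARM kernel, not taken from the thread):

```
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
CONFIG_PCI_HOST_GENERIC=y
```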

> 
> Then i launch the ivshmem-demo.bin. I've made some modification:
> * in inmate/lib/arm-common/pci.c the #define PCI_CFG_BASE (0x48000000)
>   as defined in jetson-tx-ivshmem.c: 
>config.header.platform_info.pci_mmconfig_base.
>   In the same file i've enabled the print of pci_read/write_config
> * in inmate/demos/arm/ivshmem-demo.c i've removed the filter on 
>   class/revision in order to get a suitable pci device with proper 
>   deviceId:vendorId
> 
> When the bare metal starts, it iterate on the memory with a lot of read 
> 
> pci_read_config(bdf:0x0, addr:0x0000, size:0x2), 
> reg_addr0x48000000
> pci_read_config(bdf:0x1, addr:0x0000, size:0x2), 
> reg_addr0x48000100
> pci_read_config(bdf:0x2, addr:0x0000, size:0x2), 
> reg_addr0x48000200
> pci_read_config(bdf:0x3, addr:0x0000, size:0x2), 
> reg_addr0x48000300
> 
>  after a while something happens (follow <---)
> 
> IVSHMEM: Found 1af4:1110 at 07:10.0 <---
> pci_read_config(bdf:0x780, addr:0x0008, size:0x4), 
> reg_addr0x48078008
> IVSHMEM: class/revision ff01, not supported skipping device <--- //IGNORED
> pci_read_config(bdf:0x780, addr:0x0006, size:0x2), 
> 

Re: Running ivshmem-demo in Jetson TK1.

2017-12-21 Thread Luca Cuomo
Il giorno domenica 10 dicembre 2017 17:34:24 UTC+1, jonas ha scritto:
> Hi,
> 
> I'll be making an effort to contribute my work to the master branch of 
> Jailhouse within the next couple of weeks.
> 
> /Jonas
> 
> Den fredag 8 december 2017 kl. 06:47:33 UTC+1 skrev Constantin Petra:
> > Hi,
> > 
> > 
> > I'm resending the patch(es) that were shared by Jonas a while ago.
> > 
> > 
> > Best Regards,
> > Constantin
> > 
> > 
> > On Thu, Dec 7, 2017 at 10:08 PM, Henning Schild  
> > wrote:
> > Hi Claudio,
> > 
> > 
> > 
> > Am Thu, 7 Dec 2017 17:29:45 +0100
> > 
> > schrieb Claudio Scordino :
> > 
> > 
> > 
> > > Hi guys,
> > 
> > >
> > 
> > > 2017-08-09 15:23 GMT+02:00 Henning Schild
> > 
> > > :
> > 
> > >
> > 
> > > > Hey,
> > 
> > > >
> > 
> > > > unfortunately Jonas never published his overall changes, maybe now
> > 
> > > > he understands why i kindly asked him to do so.
> > 
> > > > I think Jonas maybe ran into every single problem one could
> > 
> > > > encounter on the way, so if you read the thread you will probably
> > 
> > > > be able to come up with a similar patch at some point. That would
> > 
> > > > be the duplication of efforts, getting a first working patch into a
> > 
> > > > mergeable form is another story.
> > 
> > > >
> > 
> > > > If there are legal reasons to not publish code on the list i suggest
> > 
> > > > you exchange patches between each other. But of cause i would like
> > 
> > > > to see contributions eventually ;).
> > 
> > > >
> > 
> > > >
> > 
> > > We need to run IVSHMEM on the TX1.
> > 
> > > Any chance of upstreaming those patches to not waste time
> > 
> > > re-inventing the wheel ?
> > 
> > > If that's not possible, please send me a copy privately.
> > 
> > 
> > 
> > Unfortunately i do not have those patches either. I am afraid someone
> > 
> > will have to do that over again.
> > 
> > 
> > 
> > But the whole thread is basically about enabling the demo, which is
> > 
> > interesting for people just getting started with ivshmem. And for
> > 
> > people that want to implement their own protocol on top of it.
> > 
> > If you are just looking at running ivshmem-net you are good to go, that
> > 
> > code is in a working state.
> > 
> > 
> > 
> > Henning
> > 
> > 
> > 
> > 
> > 
> > > Many thanks and best regards,
> > 
> > >
> > 
> > >              Claudio

Hi all,

I've applied the provided patch and I'm trying to connect the Linux root cell 
with the bare-metal cell running inmates/demos/arm/ivshmem-demo.c. I've 
attached the configurations used (jetson-tx1-ivshmem for the root cell and the 
other one for the bare-metal cell). 
When I create the bare-metal cell, the connection between the PCI devices 
comes up correctly.
- 
Initializing Jailhouse hypervisor  on CPU 2
Code location: 0xc0200050
Page pool usage after early setup: mem 63/16358, remap 64/131072
Initializing processors:
 CPU 2... OK
 CPU 1... OK
 CPU 3... OK
 CPU 0... OK
Adding virtual PCI device 00:0f.0 to cell "Jetson-TX1-ivshmem"
Page pool usage after late setup: mem 74/16358, remap 69/131072
Activating hypervisor
Adding virtual PCI device 00:0f.0 to cell "jetson-tx1-demo-shmem"
Shared memory connection established: "jetson-tx1-demo-shmem" <--> 
"Jetson-TX1-ivshmem"
---

First problem: no PCI device appears in Linux (lspci does not return anything).

Then I launch ivshmem-demo.bin. I've made some modifications:
* in inmates/lib/arm-common/pci.c I changed #define PCI_CFG_BASE (0x48000000)
  to match jetson-tx1-ivshmem.c: 
   config.header.platform_info.pci_mmconfig_base.
  In the same file I've enabled the printing of pci_read/write_config
* in inmates/demos/arm/ivshmem-demo.c I've removed the filter on 
  class/revision in order to get a suitable PCI device with the proper 
  deviceId:vendorId

When the bare metal starts, it iterates over the bus with a lot of reads:

pci_read_config(bdf:0x0, addr:0x0000, size:0x2), reg_addr0x48000000
pci_read_config(bdf:0x1, addr:0x0000, size:0x2), reg_addr0x48000100
pci_read_config(bdf:0x2, addr:0x0000, size:0x2), reg_addr0x48000200
pci_read_config(bdf:0x3, addr:0x0000, size:0x2), reg_addr0x48000300

After a while something happens (follow the <--- markers):

IVSHMEM: Found 1af4:1110 at 07:10.0 <---
pci_read_config(bdf:0x780, addr:0x0008, size:0x4), 
reg_addr0x48078008
IVSHMEM: class/revision ff01, not supported skipping device <--- //IGNORED
pci_read_config(bdf:0x780, addr:0x0006, size:0x2), 
reg_addr0x48078004
pci_read_config(bdf:0x780, addr:0x0034, size:0x1), 
reg_addr0x48078034
IVSHMEM ERROR: device is not MSI-X capable <---
pci_read_config(bdf:0x780, addr:0x004c, size:0x4), 
reg_addr0x4807804c
pci_read_config(bdf:0x780, addr:0x0048, size:0x4), 

Re: Running ivshmem-demo in Jetson TK1.

2017-12-07 Thread Constantin Petra
Hi,

I'm resending the patch(es) that were shared by Jonas a while ago.

Best Regards,
Constantin

On Thu, Dec 7, 2017 at 10:08 PM, Henning Schild 
wrote:

> Hi Claudio,
>
> Am Thu, 7 Dec 2017 17:29:45 +0100
> schrieb Claudio Scordino :
>
> > Hi guys,
> >
> > 2017-08-09 15:23 GMT+02:00 Henning Schild
> > :
> >
> > > Hey,
> > >
> > > unfortunately Jonas never published his overall changes, maybe now
> > > he understands why i kindly asked him to do so.
> > > I think Jonas maybe ran into every single problem one could
> > > encounter on the way, so if you read the thread you will probably
> > > be able to come up with a similar patch at some point. That would
> > > be the duplication of efforts, getting a first working patch into a
> > > mergeable form is another story.
> > >
> > > If there are legal reasons to not publish code on the list i suggest
> > > you exchange patches between each other. But of cause i would like
> > > to see contributions eventually ;).
> > >
> > >
> > We need to run IVSHMEM on the TX1.
> > Any chance of upstreaming those patches to not waste time
> > re-inventing the wheel ?
> > If that's not possible, please send me a copy privately.
>
> Unfortunately i do not have those patches either. I am afraid someone
> will have to do that over again.
>
> But the whole thread is basically about enabling the demo, which is
> interesting for people just getting started with ivshmem. And for
> people that want to implement their own protocol on top of it.
> If you are just looking at running ivshmem-net you are good to go, that
> code is in a working state.
>
> Henning
>
> > Many thanks and best regards,
> >
> >  Claudio
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Jailhouse" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jailhouse-dev+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


jailhouse-bpi-ivshmem-demo.patch
Description: Binary data


Re: Running ivshmem-demo in Jetson TK1.

2017-12-07 Thread Henning Schild
Hi Claudio,

Am Thu, 7 Dec 2017 17:29:45 +0100
schrieb Claudio Scordino :

> Hi guys,
> 
> 2017-08-09 15:23 GMT+02:00 Henning Schild
> :
> 
> > Hey,
> >
> > unfortunately Jonas never published his overall changes, maybe now
> > he understands why i kindly asked him to do so.
> > I think Jonas maybe ran into every single problem one could
> > encounter on the way, so if you read the thread you will probably
> > be able to come up with a similar patch at some point. That would
> > be the duplication of efforts, getting a first working patch into a
> > mergeable form is another story.
> >
> > If there are legal reasons to not publish code on the list i suggest
> > you exchange patches between each other. But of cause i would like
> > to see contributions eventually ;).
> >
> >  
> We need to run IVSHMEM on the TX1.
> Any chance of upstreaming those patches to not waste time
> re-inventing the wheel ?
> If that's not possible, please send me a copy privately.

Unfortunately I do not have those patches either. I am afraid someone
will have to do that over again.

But the whole thread is basically about enabling the demo, which is
interesting for people just getting started with ivshmem, and for
people that want to implement their own protocol on top of it.
If you are just looking at running ivshmem-net you are good to go; that
code is in a working state.

Henning

> Many thanks and best regards,
> 
>  Claudio



Re: Running ivshmem-demo in Jetson TK1.

2017-12-07 Thread Claudio Scordino
Hi guys,

2017-08-09 15:23 GMT+02:00 Henning Schild :

> Hey,
>
> unfortunately Jonas never published his overall changes, maybe now he
> understands why i kindly asked him to do so.
> I think Jonas maybe ran into every single problem one could encounter
> on the way, so if you read the thread you will probably be able to come
> up with a similar patch at some point. That would be the duplication of
> efforts, getting a first working patch into a mergeable form is another
> story.
>
> If there are legal reasons to not publish code on the list i suggest
> you exchange patches between each other. But of cause i would like to
> see contributions eventually ;).
>
>
We need to run IVSHMEM on the TX1.
Any chance of upstreaming those patches so we don't waste time re-inventing the
wheel?
If that's not possible, please send me a copy privately.

Many thanks and best regards,

 Claudio



Re: Running ivshmem-demo in Jetson TK1.

2017-08-10 Thread Jan Kiszka
On 2017-08-10 08:33, Constantin Petra wrote:
> Hi,
> 
> Thanks for the information, I have been looking more into this thread.
> For my understanding:
> Is the pci_(read/write)_config() access using MMIO from guest ARM side
> supposed to access .pci_mmconfig_base address (0xfc000000 for zcu102,
> which I see it being "reserved memory" in the Ultrascale+ TRM) and thus
> trigger arch_handle_dabt()->mmio_handle_access() on hypervisor side, or
> am I off track?

Nope, that's how things are supposed to work on that board. The MMIO
config space is fully virtualized, so we picked an unused address range
for it.

Jan

-- 
Siemens AG, Corporate Technology, CT RDA ITP SES-DE
Corporate Competence Center Embedded Linux



Re: Running ivshmem-demo in Jetson TK1.

2017-08-10 Thread Constantin Petra
Hi,

Thanks for the information; I have been looking more into this thread.
For my understanding:
Is the pci_(read/write)_config() access using MMIO from the guest ARM side
supposed to access the .pci_mmconfig_base address (0xfc000000 for zcu102, which
I see being "reserved memory" in the UltraScale+ TRM) and thus
trigger arch_handle_dabt()->mmio_handle_access() on the hypervisor side, or am
I off track?

Thanks,
Constantin

On Wed, Aug 9, 2017 at 4:23 PM, Henning Schild 
wrote:

> Hey,
>
> unfortunately Jonas never published his overall changes, maybe now he
> understands why i kindly asked him to do so.
> I think Jonas maybe ran into every single problem one could encounter
> on the way, so if you read the thread you will probably be able to come
> up with a similar patch at some point. That would be the duplication of
> efforts, getting a first working patch into a mergeable form is another
> story.
>
> If there are legal reasons to not publish code on the list i suggest
> you exchange patches between each other. But of cause i would like to
> see contributions eventually ;).
>
> regards,
> Henning
>
> Am Tue, 8 Aug 2017 00:27:31 -0700
> schrieb Constantin Petra :
>
> > Hi,
> >
> > Sorry to pick this up after this time, but I would be interested in
> > the pci.c modifications related to ARM for inmates (using MMIO
> > instead of PIO). Was there a follow-up to the discussions
> > above?(checked out the discussion archives but I can't find any). I
> > would like to avoid re-inventing the wheel if there's one already
> > rolling.
> >
> > Thanks,
> > Constantin
>
>



Re: Running ivshmem-demo in Jetson TK1.

2017-08-09 Thread Henning Schild
Hey,

unfortunately Jonas never published his overall changes; maybe now he
understands why I kindly asked him to do so.
I think Jonas ran into just about every problem one could encounter
on the way, so if you read the thread you will probably be able to come
up with a similar patch at some point. That would be a duplication of
effort; getting a first working patch into a mergeable form is another
story.

If there are legal reasons not to publish code on the list, I suggest
you exchange patches with each other. But of course I would like to
see contributions eventually ;).

regards,
Henning

Am Tue, 8 Aug 2017 00:27:31 -0700
schrieb Constantin Petra :

> Hi,
> 
> Sorry to pick this up after this time, but I would be interested in
> the pci.c modifications related to ARM for inmates (using MMIO
> instead of PIO). Was there a follow-up to the discussions
> above?(checked out the discussion archives but I can't find any). I
> would like to avoid re-inventing the wheel if there's one already
> rolling.
> 
> Thanks,
> Constantin



Re: Running ivshmem-demo in Jetson TK1.

2017-08-08 Thread Constantin Petra
Hi,

Sorry to pick this up after all this time, but I would be interested in the pci.c 
modifications related to ARM for inmates (using MMIO instead of PIO).
Was there a follow-up to the discussions above? (I checked the discussion 
archives but can't find any.) I would like to avoid re-inventing the wheel if 
there's one already rolling.

Thanks,
Constantin



Re: Running ivshmem-demo in Jetson TK1.

2017-05-19 Thread Henning Schild
Am Fri, 19 May 2017 07:15:17 -0700
schrieb jonas :

> Den fredag 19 maj 2017 kl. 13:13:15 UTC+2 skrev Henning Schild:
> > Am Fri, 19 May 2017 03:22:05 -0700
> > schrieb jonas :
> >   
> > > Den fredag 19 maj 2017 kl. 11:22:06 UTC+2 skrev Henning Schild:  
> > > > Am Thu, 18 May 2017 14:42:20 -0700
> > > > schrieb jonas :
> > > > 
> > > > > >   
> > > > > > > Hi again,
> > > > > > > 
> > > > > > > Let's assume that I want to modify
> > > > > > > jailhouse/inmates/demos/arm/gic-demo.c to also handle
> > > > > > > ivshmem interrupts generated by the hypervisor to the
> > > > > > > bare-metal cell when writing the virtual PCI driver
> > > > > > > config area using uio_ivshmem/uio_send in the root-cell.
> > > > > > > 
> > > > > > > The first thing I would have to do is enable the
> > > > > > > IVSHMEM_IRQ in
> > > > > > > jailhouse/inmates/demos/arm/gic-demo.c:inmate_main() by
> > > > > > > calling gic_enable_irq(IVSHMEM_IRQ); in the same manner
> > > > > > > as gic_enable_irq(TIMER_IRQ);.  
> > > > > > 
> > > > > > You would have to register a handler first, gic_setup() but
> > > > > > yes. 
> > > > > > > I would also have to check what irqn is passed in
> > > > > > > jailhouse/inmates/demos/arm/gic-demo.c:handle_IRQ(unsigned
> > > > > > > int irqn), in order to distinguish between TIMER_IRQ and
> > > > > > > IVSHMEM_IRQ, right?  
> > > > > > 
> > > > > > I am not sure but it looks like gic_setup() might actually
> > > > > > redirect all interrupts to that one handler. Because you do
> > > > > > not need to specify the number. That check is there to not
> > > > > > react to other interrupts, there are probably no others.
> > > > > >   
> > > > > > > TIMER_IRQ is defined (to 27) in
> > > > > > > jailhouse/inmates/lib/arm/include/mach.h. Where does this
> > > > > > > value come from?  
> > > > > > 
> > > > > > Probably from some ARM manual describing the interrupt
> > > > > > controller GIC, or maybe from the device tree, i do not know
> > > > > > too much about ARM. But it is basically a constant for your
> > > > > > target.   
> > > > > > > How do I know what value to set IVSHMEM_IRQ to?  
> > > > > > 
> > > > > > Have a look at the linux inmate config for the bananapi. You
> > > > > > will have to get some pieces for your inmate config.
> > > > > > 
> > > > > > Get
> > > > > > .vpci_irq_base = 123,
> > > > > > and the irqchips section. Make sure you adjust the array
> > > > > > size for irqchips.
> > > > > > From the pin_bitmap you just need the second value
> > > > > > 0, 0, 0, 1 << (155-128),
> > > > > > And now your IVSHMEM_IRQ is 155. That should work, but i
> > > > > > also cant fully explain where the numbers come from.
> > > > > > 
> > > > > > Henning
> > > > > >   
> > > > > > > Still a bit confused - Jonas
> > > > > > >  
> > > > > 
> > > > > Thanks Henning,
> > > > > 
> > > > > I tried the suggested additions to the bare-metal cell
> > > > > configuration and inmate, but no success yet. I use
> > > > > 'uio_send /dev/uio0 10 0 0' to fire 10 interrupts from the
> > > > > root-cell.
> > > > > 
> > > > > Any suggestions on how to proceed?
> > > > 
> > > > You could instrument ivshmem_remote_interrupt and
> > > > arch_ivshmem_trigger_interrupt and other functions on the way
> > > > with printfs
> > > > I guess the interrupt is leaving the hypervisor but your inmate
> > > > does not receive it. You could integrate the timer-code from
> > > > gic-demo into your inmate to verify that the cell is able to
> > > > receive interrupts at all.
> > > > And maybe the 155 is wrong after all but you could see that
> > > > with the instrumentation of the hypervisor.
> > > > 
> > > > Henning
> > > > 
> > > > > Verify that accesses to the virtual PCI device configuration
> > > > > area actually are made, intercepted by the hypervisor and
> > > > > interrupts generated to the bare-metal cell when running
> > > > > 'uio_send'?
> > > > > 
> > > > > /Jonas
> > > > >
> > > 
> > > Good suggestion! Actually, that is exactly what I did this
> > > morning.
> > > 
> > > In
> > > hypervisor/arch/arm-common/ivshmem.c:arch_ivshmem_trigger_interrupt(),
> > > I added: ``` printk("%s(ive:%p), irq_id:%d\n", __func__, ive,
> > > irq_id); ```
> > > 
> > > When running `uio_send /dev/uio0 10 0 0` I get:
> > > ```
> > > [UIO] ping #0
> > > 
> > > [UIO] ping #1arch_ivshmem_trigger_interrupt(ive:0xf0047090),
> > > irq_id:0
> > > 
> > > [UIO] ping #2arch_ivshmem_trigger_interrupt(ive:0xf0047090),
> > > irq_id:0
> > > 
> > > [UIO] ping #3arch_ivshmem_trigger_interrupt(ive:0xf0047090),
> > > irq_id:0
> > > 
> > > [UIO] ping #4arch_ivshmem_trigger_interrupt(ive:0xf0047090),
> > > irq_id:0
> > > 
> > > [UIO] ping #5arch_ivshmem_trigger_interrupt(ive:0xf0047090),
> > > irq_id:0
> > > 
> > > [UIO] ping #6arch_ivshmem_trigger_interrupt(ive:0xf0047090),
> > > irq_id:0
> > > 
> > > [UIO] ping 
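
For reference, the config pieces Henning describes in the quoted exchange above would look roughly like this in an inmate cell config. This is a hedged sketch: the field names follow Jailhouse cell configs, the GICD address is a placeholder to be taken from the board's root cell config, and the arithmetic shows where 155 comes from (SPI = 32 + .vpci_irq_base = 32 + 123 = 155).

```c
/* Sketch of the inmate cell config pieces (placeholder address). */
.vpci_irq_base = 123, /* virtual PCI INTx mapped to SPI 123+32 = 155 */

.irqchips = {
	/* GIC */ {
		.address = 0x01c81000, /* placeholder: your board's GICD base */
		.pin_base = 32,
		.pin_bitmap = {
			/* pins 32-63, 64-95, 96-127, 128-159 */
			0, 0, 0, 1 << (155 - 128),
		},
	},
},
```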

Re: Running ivshmem-demo in Jetson TK1.

2017-05-19 Thread Henning Schild
Hey,

I think you should talk to Jonas because he is almost there. Maybe you
guys can exchange code. If you are not subscribed, check out the archive
to read what happened:
http://jailhouse-dev.narkive.com/

Henning

Am Fri, 19 May 2017 06:40:32 -0700
schrieb Hari Krishnan :

> Hi there,
> 
> Sorry for asking a question that is regarding the porting of
> ivshmem-demo.c. As mentioned, I was trying to replicate pci.c for the
> arm and I am facing some issues.
> 
> I have converted the pci_read_config() and pci_write_config() using
> mmio. 
> 
> I have used Code that is similar to that can be found in the 
> hypervisor. hypervisor/pci.c include/jailhouse/mmio.h 
> 
> The mmcfg_address = PCI_get_device_mmcfg_base(bdf) + address in which
> I replaced the function call with value from the root cell
> configuration "0x48000000". But then the PCI_read/write_config()
> doesn't require bdf to be passed as a parameter?
> 
> Please support regarding the porting of ivshmem-demo.c to arm.
> 



Re: Running ivshmem-demo in Jetson TK1.

2017-05-19 Thread Hari Krishnan
Hi there,

Sorry for asking a question regarding the porting of ivshmem-demo.c.
As mentioned, I was trying to replicate pci.c for ARM and I am facing some 
issues.

I have converted pci_read_config() and pci_write_config() to use MMIO. 

I have used code similar to what can be found in the 
hypervisor: hypervisor/pci.c and include/jailhouse/mmio.h. 

The mmcfg_address is pci_get_device_mmcfg_base(bdf) + address; I replaced 
the function call with the value from the root cell configuration 
("0x48000000"). But then doesn't pci_read/write_config() still require bdf 
to be passed as a parameter?

Please support me with the porting of ivshmem-demo.c to ARM.



Re: Running ivshmem-demo in Jetson TK1.

2017-05-19 Thread Henning Schild
Am Fri, 19 May 2017 03:22:05 -0700
schrieb jonas :

> Den fredag 19 maj 2017 kl. 11:22:06 UTC+2 skrev Henning Schild:
> > Am Thu, 18 May 2017 14:42:20 -0700
> > schrieb jonas :
> >   
> > > > 
> > > > > Hi again,
> > > > > 
> > > > > Let's assume that I want to modify
> > > > > jailhouse/inmates/demos/arm/gic-demo.c to also handle ivshmem
> > > > > interrupts generated by the hypervisor to the bare-metal cell
> > > > > when writing the virtual PCI driver config area using
> > > > > uio_ivshmem/uio_send in the root-cell.
> > > > > 
> > > > > The first thing I would have to do is enable the IVSHMEM_IRQ
> > > > > in jailhouse/inmates/demos/arm/gic-demo.c:inmate_main() by
> > > > > calling gic_enable_irq(IVSHMEM_IRQ); in the same manner as
> > > > > gic_enable_irq(TIMER_IRQ);.
> > > > 
> > > > You would have to register a handler first, gic_setup() but yes.
> > > > 
> > > > > I would also have to check what irqn is passed in
> > > > > jailhouse/inmates/demos/arm/gic-demo.c:handle_IRQ(unsigned int
> > > > > irqn), in order to distinguish between TIMER_IRQ and
> > > > > IVSHMEM_IRQ, right?
> > > > 
> > > > I am not sure but it looks like gic_setup() might actually
> > > > redirect all interrupts to that one handler. Because you do not
> > > > need to specify the number. That check is there to not react to
> > > > other interrupts, there are probably no others.
> > > > 
> > > > > TIMER_IRQ is defined (to 27) in
> > > > > jailhouse/inmates/lib/arm/include/mach.h. Where does this
> > > > > value come from?
> > > > 
> > > > Probably from some ARM manual describing the interrupt
> > > > controller GIC, or maybe from the device tree, i do not know
> > > > too much about ARM. But it is basically a constant for your
> > > > target. 
> > > > > How do I know what value to set IVSHMEM_IRQ to?
> > > > 
> > > > Have a look at the linux inmate config for the bananapi. You
> > > > will have to get some pieces for your inmate config.
> > > > 
> > > > Get
> > > > .vpci_irq_base = 123,
> > > > and the irqchips section. Make sure you adjust the array size
> > > > for irqchips.
> > > > From the pin_bitmap you just need the second value
> > > > 0, 0, 0, 1 << (155-128),
> > > > And now your IVSHMEM_IRQ is 155. That should work, but i also
> > > > cant fully explain where the numbers come from.
> > > > 
> > > > Henning
> > > > 
> > > > > Still a bit confused - Jonas
> > > > >
> > > 
> > > Thanks Henning,
> > > 
> > > I tried the suggested additions to the bare-metal cell
> > > configuration and inmate, but no success yet. I use
> > > 'uio_send /dev/uio0 10 0 0' to fire 10 interrupts from the
> > > root-cell.
> > > 
> > > Any suggestions on how to proceed?  
> > 
> > You could instrument ivshmem_remote_interrupt and
> > arch_ivshmem_trigger_interrupt and other functions on the way with
> > printfs
> > I guess the interrupt is leaving the hypervisor but your inmate does
> > not receive it. You could integrate the timer-code from gic-demo
> > into your inmate to verify that the cell is able to receive
> > interrupts at all.
> > And maybe the 155 is wrong after all but you could see that with the
> > instrumentation of the hypervisor.
> > 
> > Henning
> >   
> > > Verify that accesses to the virtual PCI device configuration area
> > > actually are made, intercepted by the hypervisor and interrupts
> > > generated to the bare-metal cell when running 'uio_send'?
> > > 
> > > /Jonas
> > >  
> 
> Good suggestion! Actually, that is exactly what I did this morning.
> 
> In
> hypervisor/arch/arm-common/ivshmem.c:arch_ivshmem_trigger_interrupt(),
> I added: ``` printk("%s(ive:%p), irq_id:%d\n", __func__, ive, irq_id);
> ```
> 
> When running `uio_send /dev/uio0 10 0 0` I get:
> ```
> [UIO] ping #0
> 
> [UIO] ping #1arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0
> 
> [UIO] ping #2arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0
> 
> [UIO] ping #3arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0
> 
> [UIO] ping #4arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0
> 
> [UIO] ping #5arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0
> 
> [UIO] ping #6arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0
> 
> [UIO] ping #7arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0
> 
> [UIO] ping #8arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0
> 
> [UIO] ping #9arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0
> 
> [UIO] Exiting...
> ```
> 
> Hence, since irq_id is 0, no interrupt is set pending in the
> irqchip...

If you look at where the irq_id comes from, you will find
(ive->intx_ctrl_reg & IVSHMEM_INTX_ENABLE).

Have a look at what uio_ivshmem.c is doing in line 207; that step is missing in
your inmate.

> I also added a printout in `hypervisor/arch/arm-common/irqchip.c`:
> ```
> if ((irq_id != 30) && (irq_id != 33)) {
> 	printk("%s(cpu_data:%p, irq_id:%d)\n", __func__, cpu_data, irq_id);
> }
> ```
> which 

Re: Running ivshmem-demo in Jetson TK1.

2017-05-19 Thread jonas
Den fredag 19 maj 2017 kl. 11:22:06 UTC+2 skrev Henning Schild:
> Am Thu, 18 May 2017 14:42:20 -0700
> schrieb jonas :
> 
> > >   
> > > > Hi again,
> > > > 
> > > > Let's assume that I want to modify
> > > > jailhouse/inmates/demos/arm/gic-demo.c to also handle ivshmem
> > > > interrupts generated by the hypervisor to the bare-metal cell when
> > > > writing the virtual PCI driver config area using
> > > > uio_ivshmem/uio_send in the root-cell.
> > > > 
> > > > The first thing I would have to do is enable the IVSHMEM_IRQ in
> > > > jailhouse/inmates/demos/arm/gic-demo.c:inmate_main() by calling
> > > > gic_enable_irq(IVSHMEM_IRQ); in the same manner as
> > > > gic_enable_irq(TIMER_IRQ);.  
> > > 
> > > You would have to register a handler first, gic_setup() but yes.
> > >   
> > > > I would also have to check what irqn is passed in
> > > > jailhouse/inmates/demos/arm/gic-demo.c:handle_IRQ(unsigned int
> > > > irqn), in order to distinguish between TIMER_IRQ and IVSHMEM_IRQ,
> > > > right?  
> > > 
> > > I am not sure but it looks like gic_setup() might actually redirect
> > > all interrupts to that one handler. Because you do not need to
> > > specify the number. That check is there to not react to other
> > > interrupts, there are probably no others.
> > >   
> > > > TIMER_IRQ is defined (to 27) in
> > > > jailhouse/inmates/lib/arm/include/mach.h. Where does this value
> > > > come from?  
> > > 
> > > Probably from some ARM manual describing the interrupt controller
> > > GIC, or maybe from the device tree, i do not know too much about
> > > ARM. But it is basically a constant for your target.
> > >
> > > > How do I know what value to set IVSHMEM_IRQ to?  
> > > 
> > > Have a look at the linux inmate config for the bananapi. You will
> > > have to get some pieces for your inmate config.
> > > 
> > > Get
> > > .vpci_irq_base = 123,
> > > and the irqchips section. Make sure you adjust the array size for
> > > irqchips.
> > > From the pin_bitmap you just need the second value
> > > 0, 0, 0, 1 << (155-128),
> > > And now your IVSHMEM_IRQ is 155. That should work, but I also can't
> > > fully explain where the numbers come from.
> > > 
> > > Henning
> > >   
> > > > Still a bit confused - Jonas
> > > >  
> > 
> > Thanks Henning,
> > 
> > I tried the suggested additions to the bare-metal cell configuration
> > and inmate, but no success yet. I use 'uio_send /dev/uio0 10 0 0' to
> > fire 10 interrupts from the root-cell.
> > 
> > Any suggestions on how to proceed?
> 
> You could instrument ivshmem_remote_interrupt and
> arch_ivshmem_trigger_interrupt and other functions on the way with
> printfs.
> I guess the interrupt is leaving the hypervisor but your inmate does
> not receive it. You could integrate the timer-code from gic-demo into
> your inmate to verify that the cell is able to receive interrupts at
> all.
> And maybe the 155 is wrong after all but you could see that with the
> instrumentation of the hypervisor.
> 
> Henning
> 
> > Verify that accesses to the virtual PCI device configuration area
> > actually are made, intercepted by the hypervisor and interrupts
> > generated to the bare-metal cell when running 'uio_send'?
> > 
> > /Jonas
> >

Good suggestion! Actually, that is exactly what I did this morning.

In hypervisor/arch/arm-common/ivshmem.c:arch_ivshmem_trigger_interrupt(), I 
added:
```
printk("%s(ive:%p), irq_id:%d\n", __func__, ive, irq_id);
```

When running `uio_send /dev/uio0 10 0 0` I get:
```
[UIO] ping #0

[UIO] ping #1arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0

[UIO] ping #2arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0

[UIO] ping #3arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0

[UIO] ping #4arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0

[UIO] ping #5arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0

[UIO] ping #6arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0

[UIO] ping #7arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0

[UIO] ping #8arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0

[UIO] ping #9arch_ivshmem_trigger_interrupt(ive:0xf0047090), irq_id:0

[UIO] Exiting...
```

Hence, since irq_id is 0, no interrupt is set pending in the irqchip...

I also added a printout in `hypervisor/arch/arm-common/irqchip.c`:
```
if ((irq_id != 30) && (irq_id != 33)) {
	printk("%s(cpu_data:%p, irq_id:%d)\n", __func__, cpu_data, irq_id);
}
```
which shows a lot of interrupts being handled (I had to filter out 30 (PPI 14) 
and 33 (UART 0) in order not to drown in printouts at startup).

I can then see printouts for IRQ 2 (SGI 2), 4 (SGI 4), 64 (SD/MMC 0), 140
(IVSHMEM_IRQ for the root-cell? 108+32, since my root-cell config contains
`config.cell.vpci_irq_base = 108`), and eventually, when reaching steady
state, only 117 (GMAC).

I'm guessing that I should have seen corresponding printouts for IRQ 155 
(123+32, as my bare-metal cell 

Re: Running ivshmem-demo in Jetson TK1.

2017-05-19 Thread Henning Schild
Am Thu, 18 May 2017 14:42:20 -0700
schrieb jonas :

> >   
> > > Hi again,
> > > 
> > > Let's assume that I want to modify
> > > jailhouse/inmates/demos/arm/gic-demo.c to also handle ivshmem
> > > interrupts generated by the hypervisor to the bare-metal cell when
> > > writing the virtual PCI driver config area using
> > > uio_ivshmem/uio_send in the root-cell.
> > > 
> > > The first thing I would have to do is enable the IVSHMEM_IRQ in
> > > jailhouse/inmates/demos/arm/gic-demo.c:inmate_main() by calling
> > > gic_enable_irq(IVSHMEM_IRQ); in the same manner as
> > > gic_enable_irq(TIMER_IRQ);.  
> > 
> > You would have to register a handler first, gic_setup() but yes.
> >   
> > > I would also have to check what irqn is passed in
> > > jailhouse/inmates/demos/arm/gic-demo.c:handle_IRQ(unsigned int
> > > irqn), in order to distinguish between TIMER_IRQ and IVSHMEM_IRQ,
> > > right?  
> > 
> > I am not sure but it looks like gic_setup() might actually redirect
> > all interrupts to that one handler. Because you do not need to
> > specify the number. That check is there to not react to other
> > interrupts, there are probably no others.
> >   
> > > TIMER_IRQ is defined (to 27) in
> > > jailhouse/inmates/lib/arm/include/mach.h. Where does this value
> > > come from?  
> > 
> > Probably from some ARM manual describing the interrupt controller
> > GIC, or maybe from the device tree, i do not know too much about
> > ARM. But it is basically a constant for your target.
> >
> > > How do I know what value to set IVSHMEM_IRQ to?  
> > 
> > Have a look at the linux inmate config for the bananapi. You will
> > have to get some pieces for your inmate config.
> > 
> > Get
> > .vpci_irq_base = 123,
> > and the irqchips section. Make sure you adjust the array size for
> > irqchips.
> > From the pin_bitmap you just need the second value
> > 0, 0, 0, 1 << (155-128),
> > And now your IVSHMEM_IRQ is 155. That should work, but I also can't
> > fully explain where the numbers come from.
> > 
> > Henning
> >   
> > > Still a bit confused - Jonas
> > >  
> 
> Thanks Henning,
> 
> I tried the suggested additions to the bare-metal cell configuration
> and inmate, but no success yet. I use 'uio_send /dev/uio0 10 0 0' to
> fire 10 interrupts from the root-cell.
> 
> Any suggestions on how to proceed?

You could instrument ivshmem_remote_interrupt and
arch_ivshmem_trigger_interrupt and other functions on the way with
printfs.
I guess the interrupt is leaving the hypervisor but your inmate does
not receive it. You could integrate the timer-code from gic-demo into
your inmate to verify that the cell is able to receive interrupts at
all.
And maybe the 155 is wrong after all but you could see that with the
instrumentation of the hypervisor.

Henning

> Verify that accesses to the virtual PCI device configuration area
> actually are made, intercepted by the hypervisor and interrupts
> generated to the bare-metal cell when running 'uio_send'?
> 
> /Jonas
> 



Re: Running ivshmem-demo in Jetson TK1.

2017-05-18 Thread jonas
> 
> > Hi again,
> > 
> > Let's assume that I want to modify
> > jailhouse/inmates/demos/arm/gic-demo.c to also handle ivshmem
> > interrupts generated by the hypervisor to the bare-metal cell when
> > writing the virtual PCI driver config area using uio_ivshmem/uio_send
> > in the root-cell.
> > 
> > The first thing I would have to do is enable the IVSHMEM_IRQ in
> > jailhouse/inmates/demos/arm/gic-demo.c:inmate_main() by calling
> > gic_enable_irq(IVSHMEM_IRQ); in the same manner as
> > gic_enable_irq(TIMER_IRQ);.
> 
> You would have to register a handler first, gic_setup() but yes.
> 
> > I would also have to check what irqn is passed in
> > jailhouse/inmates/demos/arm/gic-demo.c:handle_IRQ(unsigned int irqn),
> > in order to distinguish between TIMER_IRQ and IVSHMEM_IRQ, right?
> 
> I am not sure but it looks like gic_setup() might actually redirect all
> interrupts to that one handler. Because you do not need to specify the
> number. That check is there to not react to other interrupts, there are
> probably no others.
> 
> > TIMER_IRQ is defined (to 27) in
> > jailhouse/inmates/lib/arm/include/mach.h. Where does this value come
> > from?
> 
> Probably from some ARM manual describing the interrupt controller GIC,
> or maybe from the device tree, i do not know too much about ARM.
> But it is basically a constant for your target.
>  
> > How do I know what value to set IVSHMEM_IRQ to?
> 
> Have a look at the linux inmate config for the bananapi. You will have
> to get some pieces for your inmate config.
> 
> Get
> .vpci_irq_base = 123,
> and the irqchips section. Make sure you adjust the array size for
> irqchips.
> From the pin_bitmap you just need the second value
> 0, 0, 0, 1 << (155-128),
> And now your IVSHMEM_IRQ is 155. That should work, but I also can't
> fully explain where the numbers come from.
> 
> Henning
> 
> > Still a bit confused - Jonas
> >

Thanks Henning,

I tried the suggested additions to the bare-metal cell configuration and 
inmate, but no success yet. I use 'uio_send /dev/uio0 10 0 0' to fire 10 
interrupts from the root-cell.

Any suggestions on how to proceed?

Verify that accesses to the virtual PCI device configuration area actually are 
made, intercepted by the hypervisor and interrupts generated to the bare-metal 
cell when running 'uio_send'?

/Jonas



Re: Running ivshmem-demo in Jetson TK1.

2017-05-17 Thread jonas
Hi again,

Let's assume that I want to modify jailhouse/inmates/demos/arm/gic-demo.c to 
also handle ivshmem interrupts generated by the hypervisor to the bare-metal 
cell when writing the virtual PCI driver config area using uio_ivshmem/uio_send 
in the root-cell.

The first thing I would have to do is enable the IVSHMEM_IRQ in 
jailhouse/inmates/demos/arm/gic-demo.c:inmate_main() by calling 
gic_enable_irq(IVSHMEM_IRQ); in the same manner as gic_enable_irq(TIMER_IRQ);.

I would also have to check what irqn is passed in 
jailhouse/inmates/demos/arm/gic-demo.c:handle_IRQ(unsigned int irqn), in order 
to distinguish between TIMER_IRQ and IVSHMEM_IRQ, right?

TIMER_IRQ is defined (to 27) in jailhouse/inmates/lib/arm/include/mach.h. Where 
does this value come from?

How do I know what value to set IVSHMEM_IRQ to?

Still a bit confused - Jonas



Re: Running ivshmem-demo in Jetson TK1.

2017-05-17 Thread Henning Schild
Am Wed, 17 May 2017 04:54:07 -0700
schrieb jonas :

> > > > > You do not need to know the number, the uio-driver knows it.
> > > > > And the bare metal inmate does not need to know it since it
> > > > > is just writing to a register to trigger it.
> > > > > It looks like it is working. After loading the driver you
> > > > > should see a new entry in /proc/interrupts. And when the
> > > > > inmate runs you should see the counter going up.  
> > > > 
> > > > Unfortunately not (just yet...). I've commented out the part
> > > > where the bare-metal ivshmem-demo inmate writes to the
> > > > IO-mapped ivshmem register of the virtual PCI device. The last
> > > > thing I see in the inmate terminal window (after adding the
> > > > printout prior to writing to the ivshmem register area) is:
> > > > IVSHMEM: 00:00.0 sending IRQ (by writing to 0x7c0c)
> > > > 
> > > > In the terminal window of the Linux root-cell I see:
> > > > FATAL: Invalid ivshmem register read, number 04
> > > > FATAL: forbidden access (exception class 0x24)
> > > > pc=0xbf00b018 cpsr=0x600c0193 hsr=0x9386
> > > > r0=0x007c r1=0xdd4f3600 r2=0x00010001 r3=0xdf948000
> > > > r4=0xc08d r5=0xdd144290 r6=0xc0959325 r7=0x007c
> > > > r8=0x007c r9=0xc08a3a40 r10=0x r11=0xc08d1e0c
> > > > r12=0xc08d1e10 r13=0xc08d1e00
> > > > r14=0xc03d4dfc Parking CPU 0 (Cell: "Banana-Pi")
> > > 
> > > Seems like the Intx path was never really tested with the uio
> > > driver. I think the problem is caused by the interrupt handler
> > > ivshmem_handler in uio_ivshmem.c
> > > It is trying to read the IntrStatus register which jailhouse does
> > > not implement. Just make the function a pure "return
> > > IRQ_HANDLED;" and you should get further. Actually you error
> > > indicates that the interrupt was received because Linux ran the
> > > basic uio handler.  
> >   
> 
> Yes, that does do the trick!
> Before starting the ivshmem-demo bare-metal inmate the interrupt
> count for ivshmem as reported by /proc/interrupts is zero. After
> having started the inmate it is one (I just write once to the LSTATE
> register from the inmate).
> 
> > I do not remember why the Status register is not implemented by
> > jailhouse, maybe Jan does. Or i would have to read up in the archive
> > and see whether it was ever part of the patchsets that introduced
> > ivshmem.
> >   
> 
> Hehe - That was my next question...
> 
> > I just pushed a patch to the jailhouse-next branch; it compiles, but I
> > did not test it. You could give it a try.
> >   
> 
> OK, I'm currently on v0.6. Do I want to switch to jailhouse-next in a
> hurry, or am I good for now on v0.6? Eventually I will move on to
> newer branches/tags, of course, and upstream my findings.

I was talking about ivshmem-guest-code, not jailhouse.

Henning

> > Henning
> >   
> > > > If i comment out the line in the bare-metal inmate where the
> > > > register is written (in ivshmem_demo.c:send_irq(),
> > > > mmio_write32(d->registers + 3, 1);), all seems to be well and I
> > > > am able to verify that the shared memory has been updated by the
> > > > bare-metal inmate from within the root cell. I've also been
> > > > able to verify that the contents of the shared memory area is
> > > > picked up by the bare-metal inmate. No interrupts from the
> > > > inmate to the root cell though (of course).
> > > > 
> > > > Since I'm able to access the virtual PCI device register area
> > > > using mmio_read32() from the inmate, it looks like the area has
> > > > not been mapped for write access (by Jailhouse)? Am I missing
> > > > some PCI device configuration entry?
> > > > 
> > > > I tried to find where the FATAL:-printouts come from and found
> > > > traces to
> > > > jailhouse/hypervisor/ivshmem.c:ivshmem_register_mmio() and
> > > > jailhouse/hypervisor/arch/arm/traps.c:arch_handle_trap(). I
> > > > don't know what to do with this information at the moment. Is
> > > > it possible to dump some call-stack from the hypervisor when
> > > > fatal errors occur?
> > > 
> > > The function ivshmem_register_mmio was the right place to look.
> > > Now if you look at the error you see that linux tried to read
> > > register 4. And that register is not handled by jailhouse, have a
> > > look at IVSHMEM_REG_* in ivshmem.c.
> > >   
> > > > > Getting an IRQ sent to the inmate will be more tricky, you
> > > > > will need to program the GIC where the x86 code does
> > > > > "int_set_handler". The gic-demo should give a clue.  
> > > > 
> > > > Yep, I've started looking at this example. Thanks for verifying
> > > > that this is the way forward.
> > > > 
> > > > >   
> > > > > > Does the uio_ivshmem driver take care of generating
> > > > > > interrupts from the root-cell to the bare metal cell, or do
> > > > > > I need to modify this as well?  
> > > > > 
> > > > > The uio-driver does not actually do anything. It just makes
> > > > > the resources of the "hardware" visible to 

Re: Running ivshmem-demo in Jetson TK1.

2017-05-17 Thread jonas
Den onsdag 17 maj 2017 kl. 13:54:07 UTC+2 skrev jonas:
> > > > > You do not need to know the number, the uio-driver knows it. And
> > > > > the bare metal inmate does not need to know it since it is just
> > > > > writing to a register to trigger it.
> > > > > It looks like it is working. After loading the driver you should
> > > > > see a new entry in /proc/interrupts. And when the inmate runs you
> > > > > should see the counter going up.
> > > > 
> > > > Unfortunately not (just yet...). I've commented out the part where
> > > > the bare-metal ivshmem-demo inmate writes to the IO-mapped ivshmem
> > > > register of the virtual PCI device. The last thing I see in the
> > > > inmate terminal window (after adding the printout prior to writing
> > > > to the ivshmem register area) is: IVSHMEM: 00:00.0 sending IRQ (by
> > > > writing to 0x7c0c)
> > > > 
> > > > In the terminal window of the Linux root-cell I see:
> > > > FATAL: Invalid ivshmem register read, number 04
> > > > FATAL: forbidden access (exception class 0x24)
> > > > pc=0xbf00b018 cpsr=0x600c0193 hsr=0x9386
> > > > r0=0x007c r1=0xdd4f3600 r2=0x00010001 r3=0xdf948000
> > > > r4=0xc08d r5=0xdd144290 r6=0xc0959325 r7=0x007c
> > > > r8=0x007c r9=0xc08a3a40 r10=0x r11=0xc08d1e0c
> > > > r12=0xc08d1e10 r13=0xc08d1e00
> > > > r14=0xc03d4dfc Parking CPU 0 (Cell: "Banana-Pi")  
> > > 
> > > Seems like the Intx path was never really tested with the uio driver.
> > I think the problem is caused by the interrupt handler
> > > ivshmem_handler in uio_ivshmem.c
> > > It is trying to read the IntrStatus register which jailhouse does not
> > > implement. Just make the function a pure "return IRQ_HANDLED;" and you
> > should get further. Actually, your error indicates that the interrupt
> > > was received because Linux ran the basic uio handler.
> > 
> 
> Yes, that does do the trick!
> Before starting the ivshmem-demo bare-metal inmate the interrupt count for 
> ivshmem as reported by /proc/interrupts is zero. After having started the 
> inmate it is one (I just write once to the LSTATE register from the inmate).
IVSHMEM_REG_DBELL, not LSTATE, sorry...
> 
> > I do not remember why the Status register is not implemented by
> > jailhouse, maybe Jan does. Or i would have to read up in the archive
> > and see whether it was ever part of the patchsets that introduced
> > ivshmem.
> > 
> 
> Hehe - That was my next question...
> 
> > I just pushed a patch to the jailhouse-next branch; it compiles, but I
> > did not test it. You could give it a try.
> > 
> 
> OK, I'm currently on v0.6. Do I want to switch to jailhouse-next in a hurry, 
> or am I good for now on v0.6? Eventually I will move on to newer 
> branches/tags, of course, and upstream my findings.
Ah, I see. The ivshmem-guest-code repo, not jailhouse repo. 
> 
> > Henning
> > 
> > > > If i comment out the line in the bare-metal inmate where the
> > > > register is written (in ivshmem_demo.c:send_irq(),
> > > > mmio_write32(d->registers + 3, 1);), all seems to be well and I am
> > > > able to verify that the shared memory has been updated by the
> > > > bare-metal inmate from within the root cell. I've also been able to
> > > > verify that the contents of the shared memory area is picked up by
> > > > the bare-metal inmate. No interrupts from the inmate to the root
> > > > cell though (of course).
> > > > 
> > > > Since I'm able to access the virtual PCI device register area using
> > > > mmio_read32() from the inmate, it looks like the area has not been
> > > > mapped for write access (by Jailhouse)? Am I missing some PCI device
> > > > configuration entry?
> > > > 
> > > > I tried to find where the FATAL:-printouts come from and found
> > > > traces to  jailhouse/hypervisor/ivshmem.c:ivshmem_register_mmio()
> > > > and jailhouse/hypervisor/arch/arm/traps.c:arch_handle_trap(). I
> > > > don't know what to do with this information at the moment. Is it
> > > > possible to dump some call-stack from the hypervisor when fatal
> > > > errors occur?  
> > > 
> > > The function ivshmem_register_mmio was the right place to look. Now if
> > > you look at the error you see that linux tried to read register 4. And
> > > that register is not handled by jailhouse, have a look at
> > > IVSHMEM_REG_* in ivshmem.c.
> > > 
> > > > > Getting an IRQ sent to the inmate will be more tricky, you will
> > > > > need to program the GIC where the x86 code does "int_set_handler".
> > > > > The gic-demo should give a clue.
> > > > 
> > > > Yep, I've started looking at this example. Thanks for verifying that
> > > > this is the way forward.
> > > >   
> > > > > 
> > > > > > Does the uio_ivshmem driver take care of generating interrupts
> > > > > > from the root-cell to the bare metal cell, or do I need to
> > > > > > modify this as well?
> > > > > 
> > > > > The uio-driver does not actually do anything. It just makes the
> > > > > resources of the "hardware" visible to userland. I suggest 

Re: Running ivshmem-demo in Jetson TK1.

2017-05-17 Thread Jan Kiszka
On 2017-05-17 13:54, jonas wrote:
> You do not need to know the number, the uio-driver knows it. And
> the bare metal inmate does not need to know it since it is just
> writing to a register to trigger it.
> It looks like it is working. After loading the driver you should
> see a new entry in /proc/interrupts. And when the inmate runs you
> should see the counter going up.

 Unfortunately not (just yet...). I've commented out the part where
 the bare-metal ivshmem-demo inmate writes to the IO-mapped ivshmem
 register of the virtual PCI device. The last thing I see in the
 inmate terminal window (after adding the printout prior to writing
 to the ivshmem register area) is: IVSHMEM: 00:00.0 sending IRQ (by
 writing to 0x7c0c)

 In the terminal window of the Linux root-cell I see:
 FATAL: Invalid ivshmem register read, number 04
 FATAL: forbidden access (exception class 0x24)
 pc=0xbf00b018 cpsr=0x600c0193 hsr=0x9386
 r0=0x007c r1=0xdd4f3600 r2=0x00010001 r3=0xdf948000
 r4=0xc08d r5=0xdd144290 r6=0xc0959325 r7=0x007c
 r8=0x007c r9=0xc08a3a40 r10=0x r11=0xc08d1e0c
 r12=0xc08d1e10 r13=0xc08d1e00
 r14=0xc03d4dfc Parking CPU 0 (Cell: "Banana-Pi")  
>>>
>>> Seems like the Intx path was never really tested with the uio driver.
>>> I think the problem is caused by the interrupt handler
>>> ivshmem_handler in uio_ivshmem.c
>>> It is trying to read the IntrStatus register which jailhouse does not
>>> implement. Just make the function a pure "return IRQ_HANDLED;" and you
>>> should get further. Actually, your error indicates that the interrupt
>>> was received because Linux ran the basic uio handler.
>>
> 
> Yes, that does do the trick!
> Before starting the ivshmem-demo bare-metal inmate the interrupt count for 
> ivshmem as reported by /proc/interrupts is zero. After having started the 
> inmate it is one (I just write once to the LSTATE register from the inmate).
> 
>> I do not remember why the Status register is not implemented by
>> jailhouse, maybe Jan does. Or i would have to read up in the archive
>> and see whether it was ever part of the patchsets that introduced
>> ivshmem.
>>
> 
> Hehe - That was my next question...

The interrupt sources are of edge nature + the event reasons are usually
stored in the data structures inside the shared memory. So there is no
point in implementing a sticky and costly (performance- and
implementation-wise) status bit.

Jan

-- 
Siemens AG, Corporate Technology, CT RDA ITP SES-DE
Corporate Competence Center Embedded Linux



Re: Running ivshmem-demo in Jetson TK1.

2017-05-17 Thread jonas
> > > > You do not need to know the number, the uio-driver knows it. And
> > > > the bare metal inmate does not need to know it since it is just
> > > > writing to a register to trigger it.
> > > > It looks like it is working. After loading the driver you should
> > > > see a new entry in /proc/interrupts. And when the inmate runs you
> > > > should see the counter going up.
> > > 
> > > Unfortunately not (just yet...). I've commented out the part where
> > > the bare-metal ivshmem-demo inmate writes to the IO-mapped ivshmem
> > > register of the virtual PCI device. The last thing I see in the
> > > inmate terminal window (after adding the printout prior to writing
> > > to the ivshmem register area) is: IVSHMEM: 00:00.0 sending IRQ (by
> > > writing to 0x7c0c)
> > > 
> > > In the terminal window of the Linux root-cell I see:
> > > FATAL: Invalid ivshmem register read, number 04
> > > FATAL: forbidden access (exception class 0x24)
> > > pc=0xbf00b018 cpsr=0x600c0193 hsr=0x9386
> > > r0=0x007c r1=0xdd4f3600 r2=0x00010001 r3=0xdf948000
> > > r4=0xc08d r5=0xdd144290 r6=0xc0959325 r7=0x007c
> > > r8=0x007c r9=0xc08a3a40 r10=0x r11=0xc08d1e0c
> > > r12=0xc08d1e10 r13=0xc08d1e00
> > > r14=0xc03d4dfc Parking CPU 0 (Cell: "Banana-Pi")  
> > 
> > Seems like the Intx path was never really tested with the uio driver.
> > I think the problem is caused by the interrupt handler
> > ivshmem_handler in uio_ivshmem.c
> > It is trying to read the IntrStatus register which jailhouse does not
> > implement. Just make the function a pure "return IRQ_HANDLED;" and you
> > should get further. Actually, your error indicates that the interrupt
> > was received because Linux ran the basic uio handler.
> 

Yes, that does do the trick!
Before starting the ivshmem-demo bare-metal inmate the interrupt count for 
ivshmem as reported by /proc/interrupts is zero. After having started the 
inmate it is one (I just write once to the LSTATE register from the inmate).

> I do not remember why the Status register is not implemented by
> jailhouse, maybe Jan does. Or i would have to read up in the archive
> and see whether it was ever part of the patchsets that introduced
> ivshmem.
> 

Hehe - That was my next question...

> I just pushed a patch to the jailhouse-next branch; it compiles, but I
> did not test it. You could give it a try.
> 

OK, I'm currently on v0.6. Do I want to switch to jailhouse-next in a hurry, or 
am I good for now on v0.6? Eventually I will move on to newer branches/tags, of 
course, and upstream my findings.

> Henning
> 
> > > If i comment out the line in the bare-metal inmate where the
> > > register is written (in ivshmem_demo.c:send_irq(),
> > > mmio_write32(d->registers + 3, 1);), all seems to be well and I am
> > > able to verify that the shared memory has been updated by the
> > > bare-metal inmate from within the root cell. I've also been able to
> > > verify that the contents of the shared memory area is picked up by
> > > the bare-metal inmate. No interrupts from the inmate to the root
> > > cell though (of course).
> > > 
> > > Since I'm able to access the virtual PCI device register area using
> > > mmio_read32() from the inmate, it looks like the area has not been
> > > mapped for write access (by Jailhouse)? Am I missing some PCI device
> > > configuration entry?
> > > 
> > > I tried to find where the FATAL:-printouts come from and found
> > > traces to  jailhouse/hypervisor/ivshmem.c:ivshmem_register_mmio()
> > > and jailhouse/hypervisor/arch/arm/traps.c:arch_handle_trap(). I
> > > don't know what to do with this information at the moment. Is it
> > > possible to dump some call-stack from the hypervisor when fatal
> > > errors occur?  
> > 
> > The function ivshmem_register_mmio was the right place to look. Now if
> > you look at the error you see that linux tried to read register 4. And
> > that register is not handled by jailhouse, have a look at
> > IVSHMEM_REG_* in ivshmem.c.
> > 
> > > > Getting an IRQ sent to the inmate will be more tricky, you will
> > > > need to program the GIC where the x86 code does "int_set_handler".
> > > > The gic-demo should give a clue.
> > > 
> > > Yep, I've started looking at this example. Thanks for verifying that
> > > this is the way forward.
> > >   
> > > > 
> > > > > Does the uio_ivshmem driver take care of generating interrupts
> > > > > from the root-cell to the bare metal cell, or do I need to
> > > > > modify this as well?
> > > > 
> > > > The uio-driver does not actually do anything. It just makes the
> > > > resources of the "hardware" visible to userland. I suggest you
> > > > have a look at the jailhouse specific README.
> > > > https://github.com/henning-schild/ivshmem-guest-code/blob/jailhouse/README.jailhouse
> > > > If you did not come across this file yet you might be on the wrong
> > > > branch of ivshmem-guest-code.
> > > 
> > > I've seen it. I'm on the jailhouse branch of ivshmem-guest-code.
> 

Re: Running ivshmem-demo in Jetson TK1.

2017-05-17 Thread Henning Schild
Am Wed, 17 May 2017 12:45:19 +0200
schrieb "[ext] Henning Schild" :

> Am Wed, 17 May 2017 02:13:24 -0700
> schrieb jonas :
> 
> > Den tisdag 16 maj 2017 kl. 16:54:35 UTC+2 skrev Henning Schild:  
> > > You do not need to know the number, the uio-driver knows it. And
> > > the bare metal inmate does not need to know it since it is just
> > > writing to a register to trigger it.
> > > It looks like it is working. After loading the driver you should
> > > see a new entry in /proc/interrupts. And when the inmate runs you
> > > should see the counter going up.
> > 
> > Unfortunately not (just yet...). I've commented out the part where
> > the bare-metal ivshmem-demo inmate writes to the IO-mapped ivshmem
> > register of the virtual PCI device. The last thing I see in the
> > inmate terminal window (after adding the printout prior to writing
> > to the ivshmem register area) is: IVSHMEM: 00:00.0 sending IRQ (by
> > writing to 0x7c0c)
> > 
> > In the terminal window of the Linux root-cell I see:
> > FATAL: Invalid ivshmem register read, number 04
> > FATAL: forbidden access (exception class 0x24)
> > pc=0xbf00b018 cpsr=0x600c0193 hsr=0x9386
> > r0=0x007c r1=0xdd4f3600 r2=0x00010001 r3=0xdf948000
> > r4=0xc08d r5=0xdd144290 r6=0xc0959325 r7=0x007c
> > r8=0x007c r9=0xc08a3a40 r10=0x r11=0xc08d1e0c
> > r12=0xc08d1e10 r13=0xc08d1e00
> > r14=0xc03d4dfc Parking CPU 0 (Cell: "Banana-Pi")  
> 
> Seems like the INTx path was never really tested with the uio driver.
> I think the problem is caused by the interrupt handler
> ivshmem_handler in uio_ivshmem.c.
> It is trying to read the IntrStatus register, which jailhouse does not
> implement. Just make the function a pure "return IRQ_HANDLED;" and you
> should get further. Actually, your error indicates that the interrupt
> was received because Linux ran the basic uio handler.

I do not remember why the Status register is not implemented by
jailhouse; maybe Jan does. Or I would have to read up in the archive
and see whether it was ever part of the patchsets that introduced
ivshmem.

I just pushed a patch to the jailhouse-next branch. It compiles, but I
did not test it. You could give it a try.

Henning

> > If i comment out the line in the bare-metal inmate where the
> > register is written (in ivshmem_demo.c:send_irq(),
> > mmio_write32(d->registers + 3, 1);), all seems to be well and I am
> > able to verify that the shared memory has been updated by the
> > bare-metal inmate from within the root cell. I've also been able to
> > verify that the contents of the shared memory area is picked up by
> > the bare-metal inmate. No interrupts from the inmate to the root
> > cell though (of course).
> > 
> > Since I'm able to access the virtual PCI device register area using
> > mmio_read32() from the inmate, it looks like the area has not been
> > mapped for write access (by Jailhouse)? Am I missing some PCI device
> > configuration entry?
> > 
> > I tried to find where the FATAL:-printouts come from and found
> > traces to  jailhouse/hypervisor/ivshmem.c:ivshmem_register_mmio()
> > and jailhouse/hypervisor/arch/arm/traps.c:arch_handle_trap(). I
> > don't know what to do with this information at the moment. Is it
> > possible to dump some call-stack from the hypervisor when fatal
> > errors occur?  
> 
> The function ivshmem_register_mmio was the right place to look. Now if
> you look at the error you see that linux tried to read register 4. And
> that register is not handled by jailhouse, have a look at
> IVSHMEM_REG_* in ivshmem.c.
> 
> > > Getting an IRQ sent to the inmate will be more tricky, you will
> > > need to program the GIC where the x86 code does "int_set_handler".
> > > The gic-demo should give a clue.
> > 
> > Yep, I've started looking at this example. Thanks for verifying that
> > this is the way forward.
> >   
> > > 
> > > > Does the uio_ivshmem driver take care of generating interrupts
> > > > from the root-cell to the bare metal cell, or do I need to
> > > > modify this as well?
> > > 
> > > The uio-driver does not actually do anything. It just makes the
> > > resources of the "hardware" visible to userland. I suggest you
> > > have a look at the jailhouse specific README.
> > > https://github.com/henning-schild/ivshmem-guest-code/blob/jailhouse/README.jailhouse
> > > If you did not come across this file yet you might be on the wrong
> > > branch of ivshmem-guest-code.
> > 
> > I've seen it. I'm on the jailhouse branch of ivshmem-guest-code.
> > 
> > Thanks - Jonas  
> 

-- 
You received this message because you are subscribed to the Google Groups 
"Jailhouse" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jailhouse-dev+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Running ivshmem-demo in Jetson TK1.

2017-05-17 Thread Henning Schild
Am Wed, 17 May 2017 02:13:24 -0700
schrieb jonas :

> Den tisdag 16 maj 2017 kl. 16:54:35 UTC+2 skrev Henning Schild:
> > You do not need to know the number, the uio-driver knows it. And the
> > bare metal inmate does not need to know it since it is just writing
> > to a register to trigger it.
> > It looks like it is working. After loading the driver you should
> > see a new entry in /proc/interrupts. And when the inmate runs you
> > should see the counter going up.  
> 
> Unfortunately not (just yet...). I've commented out the part where
> the bare-metal ivshmem-demo inmate writes to the IO-mapped ivshmem
> register of the virtual PCI device. The last thing I see in the
> inmate terminal window (after adding the printout prior to writing to
> the ivshmem register area) is: IVSHMEM: 00:00.0 sending IRQ (by
> writing to 0x7c0c)
> 
> In the terminal window of the Linux root-cell I see:
> FATAL: Invalid ivshmem register read, number 04
> FATAL: forbidden access (exception class 0x24)
> pc=0xbf00b018 cpsr=0x600c0193 hsr=0x9386
> r0=0x007c r1=0xdd4f3600 r2=0x00010001 r3=0xdf948000
> r4=0xc08d r5=0xdd144290 r6=0xc0959325 r7=0x007c
> r8=0x007c r9=0xc08a3a40 r10=0x r11=0xc08d1e0c
> r12=0xc08d1e10 r13=0xc08d1e00
> r14=0xc03d4dfc Parking CPU 0 (Cell: "Banana-Pi")

Seems like the INTx path was never really tested with the uio driver. I
think the problem is caused by the interrupt handler
ivshmem_handler in uio_ivshmem.c.
It is trying to read the IntrStatus register, which jailhouse does not
implement. Just make the function a pure "return IRQ_HANDLED;" and you
should get further. Actually, your error indicates that the interrupt was
received because Linux ran the basic uio handler.

> If i comment out the line in the bare-metal inmate where the register
> is written (in ivshmem_demo.c:send_irq(), mmio_write32(d->registers +
> 3, 1);), all seems to be well and I am able to verify that the shared
> memory has been updated by the bare-metal inmate from within the root
> cell. I've also been able to verify that the contents of the shared
> memory area is picked up by the bare-metal inmate. No interrupts from
> the inmate to the root cell though (of course).
> 
> Since I'm able to access the virtual PCI device register area using
> mmio_read32() from the inmate, it looks like the area has not been
> mapped for write access (by Jailhouse)? Am I missing some PCI device
> configuration entry?
> 
> I tried to find where the FATAL:-printouts come from and found traces
> to  jailhouse/hypervisor/ivshmem.c:ivshmem_register_mmio() and
> jailhouse/hypervisor/arch/arm/traps.c:arch_handle_trap(). I don't
> know what to do with this information at the moment. Is it possible
> to dump some call-stack from the hypervisor when fatal errors occur?

The function ivshmem_register_mmio was the right place to look. Now if
you look at the error, you see that Linux tried to read register 4. And
that register is not handled by jailhouse; have a look at IVSHMEM_REG_*
in ivshmem.c.

> > Getting an IRQ sent to the inmate will be more tricky, you will
> > need to program the GIC where the x86 code does "int_set_handler".
> > The gic-demo should give a clue.  
> 
> Yep, I've started looking at this example. Thanks for verifying that
> this is the way forward.
> 
> >   
> > > Does the uio_ivshmem driver take care of generating interrupts
> > > from the root-cell to the bare metal cell, or do I need to modify
> > > this as well?  
> > 
> > The uio-driver does not actually do anything. It just makes the
> > resources of the "hardware" visible to userland. I suggest you
> > have a look at the jailhouse specific README.
> > https://github.com/henning-schild/ivshmem-guest-code/blob/jailhouse/README.jailhouse
> > If you did not come across this file yet you might be on the wrong
> > branch of ivshmem-guest-code.  
> 
> I've seen it. I'm on the jailhouse branch of ivshmem-guest-code.
> 
> Thanks - Jonas



Re: Running ivshmem-demo in Jetson TK1.

2017-05-17 Thread jonas
Den tisdag 16 maj 2017 kl. 16:54:35 UTC+2 skrev Henning Schild:
> You do not need to know the number, the uio-driver knows it. And the
> bare metal inmate does not need to know it since it is just writing to
> a register to trigger it.
> It looks like it is working. After loading the driver you should see a
> new entry in /proc/interrupts. And when the inmate runs you should see
> the counter going up.

Unfortunately not (just yet...). I've commented out the part where the 
bare-metal ivshmem-demo inmate writes to the IO-mapped ivshmem register of the 
virtual PCI device. The last thing I see in the inmate terminal window (after 
adding the printout prior to writing to the ivshmem register area) is:
IVSHMEM: 00:00.0 sending IRQ (by writing to 0x7c0c)

In the terminal window of the Linux root-cell I see:
FATAL: Invalid ivshmem register read, number 04
FATAL: forbidden access (exception class 0x24)
pc=0xbf00b018 cpsr=0x600c0193 hsr=0x9386
r0=0x007c r1=0xdd4f3600 r2=0x00010001 r3=0xdf948000
r4=0xc08d r5=0xdd144290 r6=0xc0959325 r7=0x007c
r8=0x007c r9=0xc08a3a40 r10=0x r11=0xc08d1e0c
r12=0xc08d1e10 r13=0xc08d1e00 r14=0xc03d4dfc
Parking CPU 0 (Cell: "Banana-Pi")

If i comment out the line in the bare-metal inmate where the register is 
written (in ivshmem_demo.c:send_irq(), mmio_write32(d->registers + 3, 1);), all 
seems to be well and I am able to verify that the shared memory has been 
updated by the bare-metal inmate from within the root cell. I've also been able 
to verify that the contents of the shared memory area is picked up by the 
bare-metal inmate. No interrupts from the inmate to the root cell though (of 
course).

Since I'm able to access the virtual PCI device register area using 
mmio_read32() from the inmate, it looks like the area has not been mapped for 
write access (by Jailhouse)? Am I missing some PCI device configuration entry?

I tried to find where the FATAL:-printouts come from and found traces to  
jailhouse/hypervisor/ivshmem.c:ivshmem_register_mmio() and 
jailhouse/hypervisor/arch/arm/traps.c:arch_handle_trap(). I don't know what to 
do with this information at the moment. Is it possible to dump some call-stack 
from the hypervisor when fatal errors occur?

> Getting an IRQ sent to the inmate will be more tricky, you will need to
> program the GIC where the x86 code does "int_set_handler". The gic-demo
> should give a clue.

Yep, I've started looking at this example. Thanks for verifying that this is 
the way forward.

> 
> > Does the uio_ivshmem driver take care of generating interrupts from
> > the root-cell to the bare metal cell, or do I need to modify this as
> > well?
> 
> The uio-driver does not actually do anything. It just makes the
> resources of the "hardware" visible to userland. I suggest you have a
> look at the jailhouse specific README.
> https://github.com/henning-schild/ivshmem-guest-code/blob/jailhouse/README.jailhouse
> If you did not come across this file yet you might be on the wrong
> branch of ivshmem-guest-code.

I've seen it. I'm on the jailhouse branch of ivshmem-guest-code.

Thanks - Jonas



Re: Running ivshmem-demo in Jetson TK1.

2017-05-16 Thread Henning Schild
Am Mon, 8 May 2017 14:46:14 -0700
schrieb jonas :

> > Needs to be 0 for INTx operation.  
> 
> OK, when I remove '.num_msix_vectors = 1' from the root cell
> configuration, I can see the following in '/var/log/messages':
> [   69.760313] PCI host bridge //vpci@0 ranges:
> [   69.764807]   MEM 0x0210..0x02101fff -> 0x0210
> [   69.774477] pci-host-generic 200.vpci: PCI host bridge to bus :00
> [   69.781428] pci_bus :00: root bus resource [bus 00]
> [   69.786705] pci_bus :00: root bus resource [mem 0x0210-0x02101fff]
> [   69.793830] pci_bus :00: scanning bus
> [   69.794718] pci :00:00.0: [1af4:1110] type 00 class 0xff
> [   69.794815] pci :00:00.0: reg 0x10: [mem 0x-0x00ff 64bit]
> [   69.794981] pci :00:00.0: calling pci_fixup_ide_bases+0x0/0x50
> [   69.797231] pci_bus :00: fixups for bus
> [   69.797283] PCI: bus0: Fast back to back transfers disabled
> [   69.803007] pci_bus :00: bus scan returning with max=00
> [   69.803343] pci :00:00.0: fixup irq: got 124
> [   69.803363] pci :00:00.0: assigning IRQ 124
> [   69.803433] pci :00:00.0: BAR 0: assigned [mem 0x0210-0x021000ff 64bit]
> [   69.813181] uio_ivshmem :00:00.0: enabling device ( -> 0002)
> [   69.819791] uio_ivshmem :00:00.0: using jailhouse mode
> [   69.825511] uio_ivshmem :00:00.0: regular IRQs
> [   69.836988] The Jailhouse is opening.
> 
> How does this IRQ number correlate to the INTx I should be using when
> generating interrupts from the bare-metal inmate to the root-cell?

You do not need to know the number, the uio-driver knows it. And the
bare metal inmate does not need to know it since it is just writing to
a register to trigger it.
It looks like it is working. After loading the driver you should see a
new entry in /proc/interrupts. And when the inmate runs you should see
the counter going up.
Getting an IRQ sent to the inmate will be more tricky, you will need to
program the GIC where the x86 code does "int_set_handler". The gic-demo
should give a clue.
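For the inmate side, the ARM inmate library's gic-demo wires an IRQ handler roughly as below. This is a sketch only: the function names gic_setup()/gic_enable_irq() are taken from inmates/demos/arm/gic-demo.c as of this thread's era, and the SPI number is a hypothetical placeholder that must match whatever interrupt the cell config routes to the ivshmem device; verify both against your Jailhouse tree.

```c
/* Sketch based on the ARM gic-demo inmate (assumed API; check your
 * tree). The IRQ number below is hypothetical - it must match the SPI
 * assigned to the ivshmem device in the cell configuration. */
#define IVSHMEM_IRQ	155	/* hypothetical SPI number */

static void handle_irq(unsigned int irqn)
{
	if (irqn == IVSHMEM_IRQ)
		printk("IVSHMEM: got interrupt\n");
}

void inmate_main(void)
{
	gic_setup(handle_irq);		/* install the IRQ handler */
	gic_enable_irq(IVSHMEM_IRQ);	/* unmask the ivshmem SPI */

	while (1)
		asm volatile("wfi" : : : "memory");
}
```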

> Does the uio_ivshmem driver take care of generating interrupts from
> the root-cell to the bare metal cell, or do I need to modify this as
> well?

The uio-driver does not actually do anything. It just makes the
resources of the "hardware" visible to userland. I suggest you have a
look at the jailhouse specific README.
https://github.com/henning-schild/ivshmem-guest-code/blob/jailhouse/README.jailhouse
If you did not come across this file yet you might be on the wrong
branch of ivshmem-guest-code.

Henning

> Slightly confused - Jonas



Re: Running ivshmem-demo in Jetson TK1.

2017-05-05 Thread Jan Kiszka
On 2017-05-05 15:02, jonas wrote:
> Hi,
>
> I'm also experimenting with ivshmem between the root-cell and a
> bare metal cell. In my case, however, on BananaPi M1.
>
> Could you elaborate on modifying the functions
> pci_(read|write)_config to use mmio instead of pio?
>
> I guess it's a matter of accessing the appropriate memory mapped
> PCI configuration space of the (virtual) PCI devices available to
> the guest/inmate instead of accessing PCI_REG_ADDR_PORT and
> PCI_REG_DATA_PORT using functions(out|in)[bwl]?  

 Exactly mmio = memory mapped IO, pio = port IO (in|out). The outs
 and ins will not work, instead the whole config space will be in
 physical memory. The location can be found in the root-cell
 configuration .pci_mmconfig_base.
 Some more information can be found here.
 http://wiki.osdev.org/PCI

 The method currently implemented is called method #1 on that wiki.
 Make sure to keep your access aligned with the size that is
 requested.

 Code that is similar to what you will need can be found in the
 hypervisor. hypervisor/pci.c include/jailhouse/mmio.h

 Henning

   
> Best regards - Jonas Weståker
>  
>>>
>>> Thanks for the fast response.
>>> I've got a bit further in porting ivshmem-demo.c from x86 to arm, but
>>> a few new questions arise: When scanning the configuration area of
> >>> the (virtual) PCI device the following is reported: "IVSHMEM ERROR:
>>> device is not MSI-X capable" - is this a problem?
>>
> >> If you see that, the example will not do anything. Your pci access code
>> might still not work. You can remove that sanity check to provoke more
>> accesses.
>>
> 
> Yes, I commented out the 'return;' after the printk.
> 
>> Does the rest of the output look like the pci-code is reading sane
>> values?
> 
> IVSHMEM: Found 1af4:1110 at 00:00.0
> IVSHMEM ERROR: device is not MSI-X capable
> IVSHMEM: shmem is at 0x7bf0
> IVSHMEM: bar0 is at 0x7c00
> IVSHMEM: bar2 is at 0x7c004000
> IVSHMEM: mapped shmem and bars, got position 0x0001
> IVSHMEM: Enabled IRQ:0x20
> IVSHMEM: Vector set for PCI MSI-X.
> IVSHMEM: 00:00.0 sending IRQ
> IVSHMEM: waiting for interrupt.
> 
>> What did you set num_msix_vectors to?
>>
> 
> '.num_msix_vectors = 1,'

Needs to be 0 for INTx operation.

> 
>>> jailhouse/inmates/lib/x86/mem.c:map_range() is used to map the
>>> IVSHMEM region and registers. Got any pointers to code doing the
>>> equivalent for ARM?
>>
>> I think on ARM the inmates run without paging, so the implementation
>> would be empty.
>>
> 
> OK. That simplifies/explains things... I commented out the call to 
> 'map_pages()' as well.
> 
>>> What is the expected behaviour when accessing unmapped memory in an
>>> inmate?
>>
>> As I said, I think you are running on physical memory, so everything is visible.
>>
>>> (E.g., I can see the inmate/cell gets shut down when touching memory
>>> outside .pci_mmconfig_base + 0x10): # Unhandled data read at
>>> 0x210(2) FATAL: unhandled trap (exception class 0x24)
>>> pc=0x0ff4 cpsr=0x6153 hsr=0x9346
>>> r0=0x1834 r1=0x000d r2=0x r3=0x6ed1 
>>> r4=0x0210 r5=0x r6=0x0002 r7=0x 
>>> r8=0x1000 r9=0x r10=0x r11=0x 
>>> r12=0x r13=0x6f80 r14=0x0fc4 
>>> Parking CPU 1 (Cell: "ivshmem-demo")
>>
>> This is an access outside of memory that the hypervisor gave to the
>> cell.
>>  
>>> What memory areas are made available by Jailhouse for a cell/inmate
>>> to access?
>>
>> They are described in the cell config; however, the virtual PCI bus is
>> special: only the base is in the config and the size is calculated.
>> From hypervisor/pci.c pci_init you can see the 0x100000, it is
>> 1*256*4096
>>
> 
> Actually, I think I spotted a bug here. In
> inmates/lib/pci.c:find_pci_device() there is a loop 'for (bdf = start_bdf;
> bdf < 0x10000; bdf++)', which will touch memory outside PCI_CFG_BASE_ADDR +
> 0x100000, hence the unhandled trap. Changing the loop to 'for (bdf =
> start_bdf; bdf < 0x1000; bdf++)' fixes the problem (0x1000 == 4096).
> 
> Why does this work on x86? Are bigger pages used by the hypervisor to map the 
> PCI configuration area?

On x86, the full mmconfig space is always accessible. On ARM, you
need to check what platform_info.pci_mmconfig_end_bus is set to. When we
emulate PCI, we keep it at 0, i.e. a single bus. The inmate lib is not
yet aware of such restrictions.

Jan

-- 
Siemens AG, Corporate Technology, CT RDA ITP SES-DE
Corporate Competence Center Embedded Linux



Re: Running ivshmem-demo in Jetson TK1.

2017-05-05 Thread jonas
Den tisdag 2 maj 2017 kl. 18:12:04 UTC+2 skrev J. Kiszka:
> On 2017-05-02 17:35, Jonas Westaker wrote:
> >>> Hi,
> >>>
> >>> I'm also experimenting with ivshmem between the root-cell and a bare
> >>> metal cell. In my case, however, on BananaPi M1.
> >>>
> >>> Could you elaborate on modifying the functions
> >>> pci_(read|write)_config to use mmio instead of pio?
> >>>
> >>> I guess it's a matter of accessing the appropriate memory mapped PCI
> >>> configuration space of the (virtual) PCI devices available to the
> >>> guest/inmate instead of accessing PCI_REG_ADDR_PORT and
> >>> PCI_REG_DATA_PORT using functions(out|in)[bwl]?
> >>
> >> Exactly mmio = memory mapped IO, pio = port IO (in|out). The outs and
> >> ins will not work, instead the whole config space will be in physical
> >> memory. The location can be found in the root-cell configuration
> >> .pci_mmconfig_base.
> >> Some more information can be found here.
> >> http://wiki.osdev.org/PCI
> >>
> >> The method currently implemented is called method #1 on that wiki. Make
> >> sure to keep your access aligned with the size that is requested.
> >>
> >> Code that is similar to what you will need can be found in the
> >> hypervisor. hypervisor/pci.c include/jailhouse/mmio.h
> >>
> >> Henning
> >>
> >>
> >>> Best regards - Jonas Weståker
> >>>
> > 
> > Thanks for the fast response.
> > I've got a bit further in porting ivshmem-demo.c from x86 to arm, but a few 
> > new questions arise:
> > When scanning the configuration area of the (virtual) PCI device the 
> > following is reported: "IVSHMEM ERROR: device is not MSI-X capable" - is
> > this a problem?
> 
> The demo was written with the assumption there is always MSI-X for
> ivshmem interrupts. However, we only have this on ARM when there is also
> a gic-v2m MSI controller physically available. That is not the case on
> the Jetson.
> 

I'm on BPi-M1, but as far as I've understood, it has a gic-v2 (Allwinner A20, 
2* Cortex-A7).

> We then fall back to line-based interrupts (INTx). The demo needs to be
> extended in this regard. You will probably have to hard-code the GIC
> interrupt number as well because the demos have no device tree support.
> 

I guess using a command line argument would be the way to go, as well as the 
base address of the PCI configuration area, as you suggested earlier.

> > 
> > jailhouse/inmates/lib/x86/mem.c:map_range() is used to map the IVSHMEM 
> > region and registers. Got any pointers to code doing the equivalent for ARM?
> > 
> > What is the expected behaviour when accessing unmapped memory in an inmate?
> > 
> > (E.g., I can see the inmate/cell gets shut down when touching memory 
> > outside .pci_mmconfig_base + 0x10):
> > # Unhandled data read at 0x210(2)
> > FATAL: unhandled trap (exception class 0x24)
> > pc=0x0ff4 cpsr=0x6153 hsr=0x9346
> > r0=0x1834 r1=0x000d r2=0x r3=0x6ed1 
> > r4=0x0210 r5=0x r6=0x0002 r7=0x 
> > r8=0x1000 r9=0x r10=0x r11=0x 
> > r12=0x r13=0x6f80 r14=0x0fc4 
> > Parking CPU 1 (Cell: "ivshmem-demo")
> 
> That is the expected behaviour: stop the CPU that performed the invalid
> access.
> 
> > 
> > What memory areas are made available by Jailhouse for a cell/inmate to 
> > access?
> 
> On ARM, the GICV (as GICC) and everything you list in the config.
> 
> Jan
> -- 
> Siemens AG, Corporate Technology, CT RDA ITP SES-DE
> Corporate Competence Center Embedded Linux

BR - Jonas



Re: Running ivshmem-demo in Jetson TK1.

2017-05-05 Thread jonas
> > > > Hi,
> > > > 
> > > > I'm also experimenting with ivshmem between the root-cell and a
> > > > bare metal cell. In my case, however, on BananaPi M1.
> > > > 
> > > > Could you elaborate on modifying the functions
> > > > pci_(read|write)_config to use mmio instead of pio?
> > > > 
> > > > I guess it's a matter of accessing the appropriate memory mapped
> > > > PCI configuration space of the (virtual) PCI devices available to
> > > > the guest/inmate instead of accessing PCI_REG_ADDR_PORT and
> > > > PCI_REG_DATA_PORT using functions(out|in)[bwl]?  
> > > 
> > > Exactly mmio = memory mapped IO, pio = port IO (in|out). The outs
> > > and ins will not work, instead the whole config space will be in
> > > physical memory. The location can be found in the root-cell
> > > configuration .pci_mmconfig_base.
> > > Some more information can be found here.
> > > http://wiki.osdev.org/PCI
> > > 
> > > The method currently implemented is called method #1 on that wiki.
> > > Make sure to keep your access aligned with the size that is
> > > requested.
> > > 
> > > Code that is similar to what you will need can be found in the
> > > hypervisor. hypervisor/pci.c include/jailhouse/mmio.h
> > > 
> > > Henning
> > > 
> > >   
> > > > Best regards - Jonas Weståker
> > > >  
> > 
> > Thanks for the fast response.
> > I've got a bit further in porting ivshmem-demo.c from x86 to arm, but
> > a few new questions arise: When scanning the configuration area of
> > the (virtual) PCI device the following is reported: "IVSHMEM ERROR:
> > device is not MSI-X capable" - is this a problem?
> 
> If you see that, the example will not do anything. Your pci access code
> might still not work. You can remove that sanity check to provoke more
> accesses.
> 

Yes, I commented out the 'return;' after the printk.

> Does the rest of the output look like the pci-code is reading sane
> values?

IVSHMEM: Found 1af4:1110 at 00:00.0
IVSHMEM ERROR: device is not MSI-X capable
IVSHMEM: shmem is at 0x7bf0
IVSHMEM: bar0 is at 0x7c00
IVSHMEM: bar2 is at 0x7c004000
IVSHMEM: mapped shmem and bars, got position 0x0001
IVSHMEM: Enabled IRQ:0x20
IVSHMEM: Vector set for PCI MSI-X.
IVSHMEM: 00:00.0 sending IRQ
IVSHMEM: waiting for interrupt.

> What did you set num_msix_vectors to?
> 

'.num_msix_vectors = 1,'

> > jailhouse/inmates/lib/x86/mem.c:map_range() is used to map the
> > IVSHMEM region and registers. Got any pointers to code doing the
> > equivalent for ARM?
> 
> I think on ARM the inmates run without paging, so the implementation
> would be empty.
> 

OK. That simplifies/explains things... I commented out the call to 
'map_pages()' as well.

> > What is the expected behaviour when accessing unmapped memory in an
> > inmate?
> 
> As I said, I think you are running on physical memory, so everything is visible.
> 
> > (E.g., I can see the inmate/cell gets shut down when touching memory
> > outside .pci_mmconfig_base + 0x10): # Unhandled data read at
> > 0x210(2) FATAL: unhandled trap (exception class 0x24)
> > pc=0x0ff4 cpsr=0x6153 hsr=0x9346
> > r0=0x1834 r1=0x000d r2=0x r3=0x6ed1 
> > r4=0x0210 r5=0x r6=0x0002 r7=0x 
> > r8=0x1000 r9=0x r10=0x r11=0x 
> > r12=0x r13=0x6f80 r14=0x0fc4 
> > Parking CPU 1 (Cell: "ivshmem-demo")
> 
> This is an access outside of memory that the hypervisor gave to the
> cell.
>  
> > What memory areas are made available by Jailhouse for a cell/inmate
> > to access?
> 
> They are described in the cell config; however, the virtual PCI bus is
> special: only the base is in the config and the size is calculated.
> From hypervisor/pci.c pci_init you can see the 0x100000, it is
> 1*256*4096
> 

Actually, I think I spotted a bug here. In inmates/lib/pci.c:find_pci_device()
there is a loop 'for (bdf = start_bdf; bdf < 0x10000; bdf++)', which will touch
memory outside PCI_CFG_BASE_ADDR + 0x100000, hence the unhandled trap. Changing
the loop to 'for (bdf = start_bdf; bdf < 0x1000; bdf++)' fixes the problem
(0x1000 == 4096).

Why does this work on x86? Are bigger pages used by the hypervisor to map the 
PCI configuration area?

BR - Jonas



Re: Running ivshmem-demo in Jetson TK1.

2017-05-03 Thread Henning Schild
Am Tue, 2 May 2017 08:35:25 -0700
schrieb Jonas Westaker :

> > > Hi,
> > > 
> > > I'm also experimenting with ivshmem between the root-cell and a
> > > bare metal cell. In my case, however, on BananaPi M1.
> > > 
> > > Could you elaborate on modifying the functions
> > > pci_(read|write)_config to use mmio instead of pio?
> > > 
> > > I guess it's a matter of accessing the appropriate memory mapped
> > > PCI configuration space of the (virtual) PCI devices available to
> > > the guest/inmate instead of accessing PCI_REG_ADDR_PORT and
> > > PCI_REG_DATA_PORT using functions(out|in)[bwl]?  
> > 
> > Exactly mmio = memory mapped IO, pio = port IO (in|out). The outs
> > and ins will not work, instead the whole config space will be in
> > physical memory. The location can be found in the root-cell
> > configuration .pci_mmconfig_base.
> > Some more information can be found here.
> > http://wiki.osdev.org/PCI
> > 
> > The method currently implemented is called method #1 on that wiki.
> > Make sure to keep your access aligned with the size that is
> > requested.
> > 
> > Code that is similar to what you will need can be found in the
> > hypervisor. hypervisor/pci.c include/jailhouse/mmio.h
> > 
> > Henning
> > 
> >   
> > > Best regards - Jonas Weståker
> > >  
> 
> Thanks for the fast response.
> I've got a bit further in porting ivshmem-demo.c from x86 to arm, but
> a few new questions arise: When scanning the configuration area of
> the (virtual) PCI device the following is reported: "IVSHMEM ERROR:
> device is not MSI-X capable" - is this a problem?

If you see that, the example will not do anything. Your pci access code
might still not work. You can remove that sanity check to provoke more
accesses.

Does the rest of the output look like the pci-code is reading sane
values?
What did you set num_msix_vectors to?

> jailhouse/inmates/lib/x86/mem.c:map_range() is used to map the
> IVSHMEM region and registers. Got any pointers to code doing the
> equivalent for ARM?

I think on ARM the inmates run without paging, so the implementation
would be empty.

> What is the expected behaviour when accessing unmapped memory in an
> inmate?

As I said, I think you are running on physical memory, so everything is visible.

> (E.g., I can see the inmate/cell gets shut down when touching memory
> outside .pci_mmconfig_base + 0x10): # Unhandled data read at
> 0x210(2) FATAL: unhandled trap (exception class 0x24)
> pc=0x0ff4 cpsr=0x6153 hsr=0x9346
> r0=0x1834 r1=0x000d r2=0x r3=0x6ed1 
> r4=0x0210 r5=0x r6=0x0002 r7=0x 
> r8=0x1000 r9=0x r10=0x r11=0x 
> r12=0x r13=0x6f80 r14=0x0fc4 
> Parking CPU 1 (Cell: "ivshmem-demo")

This is an access outside of memory that the hypervisor gave to the
cell.
 
> What memory areas are made available by Jailhouse for a cell/inmate
> to access?

They are described in the cell config; however, the virtual PCI bus is
special: only the base is in the config and the size is calculated.
From hypervisor/pci.c pci_init you can see the 0x100000, it is
1*256*4096

> BR - Jonas



Re: Running ivshmem-demo in Jetson TK1.

2017-05-02 Thread Jan Kiszka
On 2017-05-02 17:35, Jonas Westaker wrote:
>>> Hi,
>>>
>>> I'm also experimenting with ivshmem between the root-cell and a bare
>>> metal cell. In my case, however, on BananaPi M1.
>>>
>>> Could you elaborate on modifying the functions
>>> pci_(read|write)_config to use mmio instead of pio?
>>>
>>> I guess it's a matter of accessing the appropriate memory mapped PCI
>>> configuration space of the (virtual) PCI devices available to the
>>> guest/inmate instead of accessing PCI_REG_ADDR_PORT and
>>> PCI_REG_DATA_PORT using functions(out|in)[bwl]?
>>
>> Exactly mmio = memory mapped IO, pio = port IO (in|out). The outs and
>> ins will not work, instead the whole config space will be in physical
>> memory. The location can be found in the root-cell configuration
>> .pci_mmconfig_base.
>> Some more information can be found here.
>> http://wiki.osdev.org/PCI
>>
>> The method currently implemented is called method #1 on that wiki. Make
>> sure to keep your access aligned with the size that is requested.
>>
>> Code that is similar to what you will need can be found in the
>> hypervisor. hypervisor/pci.c include/jailhouse/mmio.h
>>
>> Henning
>>
>>
>>> Best regards - Jonas Weståker
>>>
> 
> Thanks for the fast response.
> I've got a bit further in porting ivshmem-demo.c from x86 to arm, but a few 
> new questions arise:
> When scanning the configuration area of the (virtual) PCI device the 
> following is reported: "IVSHMEM ERROR: device is not MSI-X capable" - is
> this a problem?

The demo was written with the assumption there is always MSI-X for
ivshmem interrupts. However, we only have this on ARM when there is also
a gic-v2m MSI controller physically available. That is not the case on
the Jetson.

We then fall back to line-based interrupts (INTx). The demo needs to be
extended in this regard. You will probably have to hard-code the GIC
interrupt number as well because the demos have no device tree support.

> 
> jailhouse/inmates/lib/x86/mem.c:map_range() is used to map the IVSHMEM region 
> and registers. Got any pointers to code doing the equivalent for ARM?
> 
> What is the expected behaviour when accessing unmapped memory in an inmate?
> 
> (E.g., I can see the inmate/cell gets shut down when touching memory outside 
> .pci_mmconfig_base + 0x100000):
> # Unhandled data read at 0x210(2)
> FATAL: unhandled trap (exception class 0x24)
> pc=0x0ff4 cpsr=0x6153 hsr=0x9346
> r0=0x1834 r1=0x000d r2=0x r3=0x6ed1 
> r4=0x0210 r5=0x r6=0x0002 r7=0x 
> r8=0x1000 r9=0x r10=0x r11=0x 
> r12=0x r13=0x6f80 r14=0x0fc4 
> Parking CPU 1 (Cell: "ivshmem-demo")

That is the expected behaviour: stop the CPU that performed the invalid
access.

> 
> What memory areas are made available by Jailhouse for a cell/inmate to access?

On ARM, the GICV (as GICC) and everything you list in the config.

Jan
-- 
Siemens AG, Corporate Technology, CT RDA ITP SES-DE
Corporate Competence Center Embedded Linux



Re: Running ivshmem-demo in Jetson TK1.

2017-04-28 Thread Jan Kiszka
On 2017-04-27 17:31, Henning Schild wrote:
> On Thu, 27 Apr 2017 07:44:56 -0700, jonas wrote:
> 
 Hi,
 Thanks for the reply.
 So as you said,
 1)I've augmented the jetson-tk1-demo.c config with an ivshmem
>  device and a shared mem region, using
>  configs/jetson-tk1-linux-demo.c as a reference.

 2)I replicated the ivshmem-demo from x86 to inmates/demos/arm and
 hooked it up in the Makefile I tried to cross compile the same
 and I have encountered a few errors. From what I've observed, the
 errors are mainly regarding the pci related functions. How can I
 proceed with this? PFA the error log.  
>>>
>>> As Jan said, you will have to move the pci library into the ARM
>>> inmate as well. You will basically need a version of
>>> inmates/lib/x86/pci.c that uses mmio instead of pio. So you will
>>> have to change the two functions pci_(read|write)_config to use
>>> mmio.
>>>
>>> Henning
>>>   
>>
>> Hi,
>>
>> I'm also experimenting with ivshmem between the root-cell and a bare
>> metal cell. In my case, however, on BananaPi M1.
>>
>> Could you elaborate on modifying the functions
>> pci_(read|write)_config to use mmio instead of pio?
>>
>> I guess it's a matter of accessing the appropriate memory mapped PCI
>> configuration space of the (virtual) PCI devices available to the
>> guest/inmate instead of accessing PCI_REG_ADDR_PORT and
>> PCI_REG_DATA_PORT using the functions (out|in)[bwl]?
> 
> Exactly mmio = memory mapped IO, pio = port IO (in|out). The outs and
> ins will not work, instead the whole config space will be in physical
> memory. The location can be found in the root-cell configuration
> .pci_mmconfig_base.

And as this base address is different for each board, and we do not have
a device tree parser in our inmate library yet, I would suggest making
this value an inmate command line parameter for now.
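Such a parameter could be pulled out of the inmate command line roughly like this; the parameter name "pci-mmcfg-base" and the helper are made up for illustration and are not part of the inmate library:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch: find "pci-mmcfg-base=<value>" in the inmate
 * command line and return the parsed address, or a fallback if the
 * parameter is absent. */
static uintptr_t parse_mmcfg_base(const char *cmdline, uintptr_t fallback)
{
	static const char key[] = "pci-mmcfg-base=";
	const char *p = cmdline ? strstr(cmdline, key) : NULL;

	if (!p)
		return fallback;
	/* base 0 lets strtoul accept 0x-prefixed hex as well as decimal */
	return (uintptr_t)strtoul(p + sizeof(key) - 1, NULL, 0);
}
```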

Jan

> Some more information can be found here.
> http://wiki.osdev.org/PCI
> 
> The method currently implemented is called method #1 on that wiki. Make
> sure to keep your access aligned with the size that is requested.
> 
> Code that is similar to what you will need can be found in the
> hypervisor. hypervisor/pci.c include/jailhouse/mmio.h
> 
> Henning
> 
> 
>> Best regards - Jonas Weståker
>>
> 

-- 
Siemens AG, Corporate Technology, CT RDA ITP SES-DE
Corporate Competence Center Embedded Linux



Re: Running ivshmem-demo in Jetson TK1.

2017-04-27 Thread jonas
> > Hi,
> > Thanks for the reply.
> > So as you said,
> > 1)I've augmented the jetson-tk1-demo.c config with an ivshmem device
> > and a shared mem region, using configs/jetson-tk1-linux-demo.c as a
> > reference.
> > 
> > 2)I replicated the ivshmem-demo from x86 to inmates/demos/arm and
> > hooked it up in the Makefile I tried to cross compile the same and I
> > have encountered a few errors. From what I've observed, the errors
> > are mainly regarding the pci related functions. How can I proceed
> > with this? PFA the error log.
> 
> As Jan said, you will have to move the pci library into the ARM inmate
> as well. You will basically need a version of inmates/lib/x86/pci.c
> that uses mmio instead of pio. So you will have to change the two
> functions pci_(read|write)_config to use mmio.
> 
> Henning
> 

Hi,

I'm also experimenting with ivshmem between the root-cell and a bare metal 
cell. In my case, however, on BananaPi M1.

Could you elaborate on modifying the functions pci_(read|write)_config to use 
mmio instead of pio?

I guess it's a matter of accessing the appropriate memory mapped PCI 
configuration space of the (virtual) PCI devices available to the guest/inmate 
instead of accessing PCI_REG_ADDR_PORT and PCI_REG_DATA_PORT using 
the functions (out|in)[bwl]?

Best regards - Jonas Weståker



Re: Running ivshmem-demo in Jetson TK1.

2017-04-26 Thread Hari Krishnan
Hi,
Thanks for the reply.
So as you said,
1) I've augmented the jetson-tk1-demo.c config with an ivshmem device and a
shared mem region, using configs/jetson-tk1-linux-demo.c as a reference.

2) I replicated the ivshmem-demo from x86 to inmates/demos/arm and hooked it up
in the Makefile.
I tried to cross-compile it and have encountered a few errors.
From what I've observed, the errors are mainly regarding the PCI-related
functions.
How can I proceed with this?
PFA the error log.
Thanks and regards,
Harikrishnan

/home/guest/user1/tegra_K1/jailhouse_kernel_4.11/jailhouse-0.6$ sudo $MAKE 
ARCH=arm KDIR=/home/guest/user1/tegra_K1/linux-9fd0a81 
CROSS_COMPILE=/home/guest/user1/tegra_K1/gcc-linaro-6.3.1-2017.02-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-
 
  CHK 
/home/guest/user1/tegra_K1/jailhouse_kernel_4.11/jailhouse-0.6/hypervisor/include/generated/version.h
  CHK 
/home/guest/user1/tegra_K1/jailhouse_kernel_4.11/jailhouse-0.6/hypervisor/include/generated/config.mk
  CC  
/home/guest/user1/tegra_K1/jailhouse_kernel_4.11/jailhouse-0.6/inmates/demos/arm/ivshmem-demo.o
/home/guest/user1/tegra_K1/jailhouse_kernel_4.11/jailhouse-0.6/inmates/demos/arm/ivshmem-demo.c:
 In function ‘pci_cfg_read64’:
/home/guest/user1/tegra_K1/jailhouse_kernel_4.11/jailhouse-0.6/inmates/demos/arm/ivshmem-demo.c:46:14:
 error: implicit declaration of function ‘pci_read_config’ 
[-Werror=implicit-function-declaration]
  bar = ((u64)pci_read_config(bdf, addr + 4, 4) << 32) |
  ^~~
/home/guest/user1/tegra_K1/jailhouse_kernel_4.11/jailhouse-0.6/inmates/demos/arm/ivshmem-demo.c:
 In function ‘pci_cfg_write64’:
/home/guest/user1/tegra_K1/jailhouse_kernel_4.11/jailhouse-0.6/inmates/demos/arm/ivshmem-demo.c:53:2:
 error: implicit declaration of function ‘pci_write_config’ 
[-Werror=implicit-function-declaration]
  pci_write_config(bdf, addr + 4, (u32)(val >> 32), 4);
  ^~~~
/home/guest/user1/tegra_K1/jailhouse_kernel_4.11/jailhouse-0.6/inmates/demos/arm/ivshmem-demo.c:
 In function ‘get_bar_sz’:
/home/guest/user1/tegra_K1/jailhouse_kernel_4.11/jailhouse-0.6/inmates/demos/arm/ivshmem-demo.c:62:28:
 error: ‘PCI_CFG_BAR’ undeclared (first use in this function)
  bar = pci_cfg_read64(bdf, PCI_CFG_BAR + (8 * barn));
^~~
/home/guest/user1/tegra_K1/jailhouse_kernel_4.11/jailhouse-0.6/inmates/demos/arm/ivshmem-demo.c:62:28:
 note: each undeclared identifier is reported only once for each function it 
appears in
/home/guest/user1/tegra_K1/jailhouse_kernel_4.11/jailhouse-0.6/inmates/demos/arm/ivshmem-demo.c:
 In function ‘map_shmem_and_bars’:
/home/guest/user1/tegra_K1/jailhouse_kernel_4.11/jailhouse-0.6/inmates/demos/arm/ivshmem-demo.c:73:12:
 error: implicit declaration of function ‘pci_find_cap’ 
[-Werror=implicit-function-declaration]
  int cap = pci_find_cap(d->bdf, PCI_CAP_MSIX);
^~~~
/home/guest/user1/tegra_K1/jailhouse_kernel_4.11/jailhouse-0.6/inmates/demos/arm/ivshmem-demo.c:73:33:
 error: ‘PCI_CAP_MSIX’ undeclared (first use in this function)
  int cap = pci_find_cap(d->bdf, PCI_CAP_MSIX);
 ^~~~
/home/guest/user1/tegra_K1/jailhouse_kernel_4.11/jailhouse-0.6/inmates/demos/arm/ivshmem-demo.c:81:13:
 error: cast to pointer from integer of different size 
[-Werror=int-to-pointer-cast]
  d->shmem = (void *)pci_cfg_read64(d->bdf, IVSHMEM_CFG_SHMEM_PTR);
 ^
/home/guest/user1/tegra_K1/jailhouse_kernel_4.11/jailhouse-0.6/inmates/demos/arm/ivshmem-demo.c:84:55:
 error: ‘PAGE_SIZE’ undeclared (first use in this function)
  d->registers = (u32 *)((u64)(d->shmem + d->shmemsz + PAGE_SIZE - 1)
   ^
/home/guest/user1/tegra_K1/jailhouse_kernel_4.11/jailhouse-0.6/inmates/demos/arm/ivshmem-demo.c:85:5:
 error: ‘PAGE_MASK’ undeclared (first use in this function)
   & PAGE_MASK);
 ^
/home/guest/user1/tegra_K1/jailhouse_kernel_4.11/jailhouse-0.6/inmates/demos/arm/ivshmem-demo.c:86:26:
 error: ‘PCI_CFG_BAR’ undeclared (first use in this function)
  pci_cfg_write64(d->bdf, PCI_CFG_BAR, (u64)d->registers);
  ^~~
/home/guest/user1/tegra_K1/jailhouse_kernel_4.11/jailhouse-0.6/inmates/demos/arm/ivshmem-demo.c:86:39:
 error: cast from pointer to integer of different size 
[-Werror=pointer-to-int-cast]
  pci_cfg_write64(d->bdf, PCI_CFG_BAR, (u64)d->registers);
   ^
/home/guest/user1/tegra_K1/jailhouse_kernel_4.11/jailhouse-0.6/inmates/demos/arm/ivshmem-demo.c:89:26:
 error: cast from pointer to integer of different size 
[-Werror=pointer-to-int-cast]
  d->msix_table = (u32 

Re: Running ivshmem-demo in Jetson TK1.

2017-04-20 Thread Jan Kiszka
On 2017-04-20 18:19, Hari Krishnan wrote:
>> Before going into details here, let me ask you what your goals are: Is
>> the purpose to understand the details or more to achieve a certain
>> functionality? Do you want to establish a low-level ivshmem link between
>> the root cell and some bare-metal or a non-Linux OS in a non-root cell?
>> Or are you looking for ivshmem-net, a network link over ivshmem?
>  
> Hi Jan,
> 
> Thanks again for the reply.
> Although I would look for implementing specific functionality later, I am 
> currently trying to achieve a low-level ivshmem link between the root cell 
> and another non-root cell. I want to see an interrupt sent from one root cell 
> received and acknowledged by the other cell and sent another interrupt back 
> to the root cell and provide acknowledgement for the same. I believe such a 
> setup is written for  in the ivshmem-demo. I have been able to run the 
> uart-demo in Jetson TK1 but I am finding difficulty with ivshmem-demo. Could 
> you help me establish a low level ivshmem link between two cells in Jetson 
> TK1 and run the ivshmem-demo? Does the ivshmem-demo work for arm 
> architecture? What moifications should I make to make it run in Jetson TK1?

OK, understood, learning by doing - makes quite some sense.

There will be probably some details to sort out left and right, but the
basic steps should be like this:

- replicate the ivshmem-demo from x86 to inmates/demos/arm and hook it
  up in the Makefile

- resolve build issues, maybe provide missing implementations for
  inmates/lib/arm

- augment the jetson-tk1-demo.c config with an ivshmem device and a
  shared mem region, using configs/jetson-tk1-linux-demo.c as a
  reference

- check out https://github.com/henning-schild/ivshmem-guest-code,
  validate on x86 that it still works as described in ivshmem-guest-
  code/README.jailhouse (if not, report and/or fix)

- make ivshmem-guest-code build for arm, specifically the pieces
  described in README.jailhouse

And just ask, if you run into troubles.

Jan

-- 
Siemens AG, Corporate Technology, CT RDA ITP SES-DE
Corporate Competence Center Embedded Linux



Re: Running ivshmem-demo in Jetson TK1.

2017-04-20 Thread Hari Krishnan
> Before going into details here, let me ask you what your goals are: Is
> the purpose to understand the details or more to achieve a certain
> functionality? Do you want to establish a low-level ivshmem link between
> the root cell and some bare-metal or a non-Linux OS in a non-root cell?
> Or are you looking for ivshmem-net, a network link over ivshmem?
 
Hi Jan,

Thanks again for the reply.
Although I will look into implementing specific functionality later, I am 
currently trying to achieve a low-level ivshmem link between the root cell and 
another non-root cell. I want to see an interrupt sent from the root cell be 
received and acknowledged by the other cell, which then sends an interrupt back 
to the root cell and gets an acknowledgement in turn. I believe such a setup 
is demonstrated in the ivshmem-demo. I have been able to run the uart-demo on 
Jetson TK1 but I am finding difficulty with ivshmem-demo. Could you help me 
establish a low-level ivshmem link between two cells on Jetson TK1 and run the 
ivshmem-demo? Does the ivshmem-demo work on the ARM architecture? What 
modifications should I make to make it run on Jetson TK1?

Regards,
Harikrishnan



Re: Running ivshmem-demo in Jetson TK1.

2017-04-18 Thread Jan Kiszka
On 2017-04-18 14:41, Hari Krishnan wrote:
> Hi,
> Thanks for the reply. 
> 
> I am a newbie and am still unclear of how to exactly run ivshmem-demo which 
> is available in inmates->demos->x86.
> 
> According to the documentation for inter-cell communication, "You can go 
> ahead and connect two non-root cells and run the ivshmem-demo. They will send 
> each other interrupts".
> 1) Do we need two non root cells or can this work with a root cell and a non 
> root cell?

The root cell is also a cell, so, yes.

> 
> 2) If it is possible to communicate between a root cell and a non root cell, 
> how can we " connect" these two cells? 

Look at the existing configs/, e.g. for the qemu-vm.c and the
linux-x86-demo.c. They both contain the config fragments (PCI device,
memory region) to establish a cell-to-cell link. A more complex scenario
(multiple links) can be found under configs/zynqmp-zcu102.c.
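From memory, the two fragments in those files have roughly the following shape; the BDF, addresses, and region index below are placeholders, and the field names should be checked against configs/qemu-vm.c rather than taken from here:

```c
/* virtual ivshmem PCI device (placeholder values) */
.pci_devices = {
	{
		.type = JAILHOUSE_PCI_TYPE_IVSHMEM,
		.bdf = 0x0f << 3,
		.shmem_region = 0, /* index into .mem_regions below */
	},
},

/* shared memory region, listed identically in both cells' configs */
.mem_regions = {
	{
		.phys_start = 0x3f100000,
		.virt_start = 0x3f100000,
		.size = 0x100000,
		.flags = JAILHOUSE_MEM_READ | JAILHOUSE_MEM_WRITE |
			 JAILHOUSE_MEM_ROOTSHARED,
	},
},
```

The key point is that both cells name the same physical range and the device's .shmem_region points at it.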

> 
> 3) How should I run ivshmem-demo?
> As in, for the uart-demo for ARM, I had a uart-demo.bin file which I had 
> loaded in a non-root cell, jetson-demo.cell.
> How can I proceed to run ivshmem-demo on Jetson TK1?  Could you help me in a 
> more comprehensive manner?

Before going into details here, let me ask you what your goals are: Is
the purpose to understand the details or more to achieve a certain
functionality? Do you want to establish a low-level ivshmem link between
the root cell and some bare-metal or a non-Linux OS in a non-root cell?
Or are you looking for ivshmem-net, a network link over ivshmem?

Documentation of ivshmem is in flux because the whole interface is in
flux. There is, e.g., a branch wip/ivshmem2 which contains some more
modifications to the virtual PCI device but also a specification of the
same [1]. Not yet set in stone, though.

Jan

[1]
https://github.com/siemens/jailhouse/blob/wip/ivshmem2/Documentation/ivshmem-v2-specification.md

-- 
Siemens AG, Corporate Technology, CT RDA ITP SES-DE
Corporate Competence Center Embedded Linux
