Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-24 Thread Philipp Zabel
Hi Steve,

On Mon, 2017-01-09 at 20:43 +0100, Philipp Zabel wrote:
> On the other hand, there is currently no way to communicate to userspace
> that the IC can't downscale bilinearly, but only to 1/2 or 1/4 of the
> input resolution, and then scale up bilinearly from there.

This is completely wrong.

For some reason that I can't reconstruct anymore, I didn't realize
that the PRPENC/VF_RS_R_V/H coefficient fields for the resizing section
are 14 bits wide, so the bilinear scaling factor can in fact be reduced
down to 8192/16383 ~= 0.50003 before the /2 downsizing section needs to
be engaged.
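
For reference, this is roughly how the coefficient split works (a
sketch modeled loosely on the ipu-ic.c calculation; illustrative only,
not the exact kernel implementation):

#include <errno.h>

/*
 * Illustrative only: split a scaling request into a power-of-two
 * pre-decimation (/1, /2, /4) and a 14-bit bilinear coefficient.
 * 8192 (2^13) is a 1:1 bilinear ratio; the 14-bit field allows
 * values up to 16383, i.e. bilinear downscaling down to
 * 8192/16383 ~= 0.50003 before the /2 downsizing section has to
 * kick in.
 */
static int calc_resize(int in, int out, unsigned int *downsize_pow2,
                       unsigned int *bilinear_coeff)
{
        unsigned int down = 0;
        int tmp = in;

        if (in < 2 || out < 2)
                return -EINVAL;

        /* decimate by 2 (at most twice) until bilinear can finish */
        while (8192 * (tmp - 1) / (out - 1) > 16383 && down < 2) {
                tmp /= 2;
                down++;
        }

        *bilinear_coeff = 8192 * (tmp - 1) / (out - 1);
        if (*bilinear_coeff > 16383)
                return -EINVAL; /* beyond /4 plus ~0.5x bilinear */

        *downsize_pow2 = down;
        return 0;
}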

>  So instead of
> pretending to be able to downscale to any resolution, it would be better
> to split each IC task into two subdevs, one for the
> 1/2-or-1/4-downscaler, and one for the bilinear upscaler.

So this point is moot. I still like the unified PRP subdev because of
the single input pad, but splitting the scaling over two subdevs is not
useful after all.

regards
Philipp


Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-16 Thread Philipp Zabel
Hi Steve,

On Sat, 2017-01-14 at 12:26 -0800, Steve Longerbeam wrote:
> 
> On 01/13/2017 03:05 AM, Philipp Zabel wrote:
> > Hi Steve,
> >
On Thu, 2017-01-12 at 15:22 -0800, Steve Longerbeam wrote:
[...]
> >>   I would
> >> imagine it will need two additional inputs and another output to support
> >> the Combiner (two pads for each plane to be combined, and a combiner
> >> output pad).
> > If I accept for a moment that IDMAC/FSU channel links are described as
> > media entity links, that would be right, I guess.
> 
> Hi Philipp,
> 
> Let me start by asking why you are averse to the idea that a media
> driver passes video frames from source to sink using memory
> buffers? There is no hard-and-fast rule in the media framework that
> states this should not be done, AFAIK.

To help you understand my perspective, I mostly use v4l2 devices in
GStreamer pipelines. That means chaining separate mem2mem devices into
a pipeline that passes dma-bufs around is the easy default. I consider
linking mem2mem devices in the kernel (so buffers don't have to be
dequeued/queued into userspace all the time) or even in hardware (the
FSU basically implements hardware fences on a free-running rigid
2-buffer queue between two DMA channels) to be two performance
optimization steps from there.

Step 1 of linking two mem2mem devices using a software buffer queue
could be achieved at the videobuf2 level. That would need a new API to
share DMA buffers between vb2 queues and then switch them into a free
running mode that allows the kernel to pass buffers back and forth
automatically. But that mechanism would not be specific to hardware at
all. It could reuse / extend upon the existing vb2 queue implementation,
and it would be possible to use it with any driver instead of only IPU
internal components. In case of i.MX6 we could link the CODA h.264
encoder input to the PRPENC output, for example.
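
To illustrate the idea, a purely hypothetical sketch of the shape such
an API could take (none of these vb2_* helpers exist in videobuf2
today; all names below are made up):

#include <media/videobuf2-core.h>

/* Hypothetical, for illustration only - nothing below exists in vb2. */
struct vb2_queue_link;

/*
 * Share the producer queue's DMA buffers with the consumer queue,
 * e.g. PRPENC capture -> CODA h.264 encoder output.
 */
struct vb2_queue_link *vb2_queue_link_create(struct vb2_queue *producer,
                                             struct vb2_queue *consumer);

/*
 * Switch the pair into free-running mode: every buffer the producer
 * finishes is queued to the consumer automatically and returned to the
 * producer once the consumer is done with it, with no dequeue/queue
 * round trip through userspace.
 */
int vb2_queue_link_start(struct vb2_queue_link *link);
void vb2_queue_link_destroy(struct vb2_queue_link *link);
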
Also I'm opposed to adding a custom mem2mem framework in the IPU driver
because I don't believe the IPU is the only hardware unit that has
processing paths that need to go through temporary memory copies. Adding
the same functionality for every driver that can do this in a slightly
different way doesn't scale.

Step 2 of providing a fixed double-buffer queue and then using the IPU
FSU to trigger the DMA read channels in hardware instead of from the
write channel EOF interrupt handler is quite a bit more hardware
specific, but even there, it could be that the FSU links are not limited
to the IPU. I'm not sure if this actually works, but according to the
reference manual the (CODA) VPU can be triggered by the write channels
from SMFC and PRPVF and the VDOA can trigger the VDI or PP read channels
on i.MX6.

I do feel a bit bad about arguing against an existing working solution
when I only have a rough idea of how I'd like steps 1 and 2 to look,
but I really think implementing this inside a single driver via media
entity links is not the right way, and I fear once established, we'd
never get rid of it.

> I agree this overlaps with the mem2mem device idea somewhat, but
> IMHO a media device should be allowed to use internal memory
> buffers to pass video frames between pads, if that's what it needs to
> do to implement some functionality.
> 
> Can anyone else listening on this thread chime in on this topic?

Yes, that would be very welcome.

> >>> Is there even a reason for the user to switch between direct and via
> >>> memory paths manually, or could this be inferred from other state
> >>> (formats, active links)?
> >> a CSI -> VDIC link doesn't convey whether that is a direct link using
> >> the FSU, or whether it is via SMFC and memory buffers.
> >>
> >> If you'll recall, the VDIC has three motion modes: low, medium, and
> >> high.
> >>
> >> When VDIC receives directly from CSI, it can only operate in
> >> high motion mode (it processes a single field F(n-1) sent directly
> >> from the CSI using the FSU). The reference manual refers to this
> >> as "real time mode".
> > In my opinion this is the only mode that should be supported in the
> > capture driver.
> 
> I have to disagree on that.

There isn't even hardware-assisted triggering of the VDIC inputs for
deinterlacing in those modes, so there's really no performance benefit
over vb2 queue linking, and that would be a lot more useful.

> >   But that may be wishful thinking to a certain degree -
> > see below.
> >
> >> The low and medium motion modes require processing all three
> >> fields F(n-1), F(n), and F(n+1). These fields must come from IDMAC
> >> channels 8, 9, and 10 respectively.
> >>
> >> So in order to support low and medium motion modes, there needs to
> >> be a pipeline where the VDIC receives F(n-1), F(n), and F(n+1) from
> >> memory buffers.
> > In the cases where the VDIC reads all three fields from memory, I'd
> > prefer that to be implemented as a separate mem2mem device.
>
> I prefer that there be a single VDIC media entity, that makes use of its
> dma read 

Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-14 Thread Steve Longerbeam

On 01/13/2017 03:05 AM, Philipp Zabel wrote:

Hi Steve,

On Thu, 2017-01-12 at 15:22 -0800, Steve Longerbeam wrote:

Hi Philipp, JM,

First, let me say that you both have convinced me that we need a VDIC
entity. I've implemented that in the branch imx-media-staging-md-vdic.
At this point it only implements the M/C de-interlacing function, not the
plane Combiner. So it has only one input and one output pad.

Excellent.


  I would
imagine it will need two additional inputs and another output to support
the Combiner (two pads for each plane to be combined, and a combiner
output pad).

If I accept for a moment that IDMAC/FSU channel links are described as
media entity links, that would be right, I guess.


Hi Philipp,

Let me start by asking why you are averse to the idea that a media
driver passes video frames from source to sink using memory
buffers? There is no hard-and-fast rule in the media framework that
states this should not be done, AFAIK.

I agree this overlaps with the mem2mem device idea somewhat, but
IMHO a media device should be allowed to use internal memory
buffers to pass video frames between pads, if that's what it needs to
do to implement some functionality.

Can anyone else listening on this thread chime in on this topic?

Is there even a reason for the user to switch between direct and via
memory paths manually, or could this be inferred from other state
(formats, active links)?

a CSI -> VDIC link doesn't convey whether that is a direct link using
the FSU, or whether it is via SMFC and memory buffers.

If you'll recall, the VDIC has three motion modes: low, medium, and
high.

When VDIC receives directly from CSI, it can only operate in
high motion mode (it processes a single field F(n-1) sent directly
from the CSI using the FSU). The reference manual refers to this
as "real time mode".

In my opinion this is the only mode that should be supported in the
capture driver.


I have to disagree on that.


  But that may be wishful thinking to a certain degree -
see below.


The low and medium motion modes require processing all three
fields F(n-1), F(n), and F(n+1). These fields must come from IDMAC
channels 8, 9, and 10 respectively.

So in order to support low and medium motion modes, there needs to
be a pipeline where the VDIC receives F(n-1), F(n), and F(n+1) from
memory buffers.

In the cases where the VDIC reads all three fields from memory, I'd
prefer that to be implemented as a separate mem2mem device.

I prefer that there be a single VDIC media entity, that makes use of its
dma read channels in order to support all of its motion compensation
modes.

  While useful
on its own, there could be an API to link together the capture and
output of different video4linux devices, and that could get a backend to
implement IDMAC/FSU channel linking where supported.


An interesting idea, but it also sounds a lot like what can already be
provided by a pipeline in the media framework, by linking an entity
that is a video source to an entity that is a video sink.

How about this: we can do away with SMFC entities by having two
output pads from the CSI: a "direct" output pad that can link to PRP and
VDIC, and an "IDMAC via SMFC" output pad that links to the entities that
require memory buffers (VDIC in low/medium motion mode, camif, and PP).
Only one of those output pads can be active at a time. I'm not sure if
that is allowed by the media framework: do two source pads imply that
the entity can activate both of those pads simultaneously, or is it
allowed that only one source pad of two or more can be activated at a
time? It's not clear to me.

Let me know if you agree with this proposal.

In my opinion that is better than having the SMFC as a separate entity;
even better would be not to have to describe the memory paths as media
links.


Ok, I'll go ahead and implement this idea then.

[...]

Here also, I'd prefer to keep distinct PRPENC and PRPVF entities. You
are correct that PRPENC and PRPVF do share an input channel (the CSIs).
But the PRPVF has an additional input channel from the VDIC,

Wait, that is a VDIC -> PRP connection, not a VDIC -> PRPVF connection,
or am I mistaken?

The FSU only sends VDIC output to PRPVF, not PRPENC. It's not
well documented, but see "IPU main flows" section in the manual.
All listed pipelines that route VDIC to IC go to IC (PRP VF).

Sorry to be a bit pedantic, the FSU does not send output. It just
triggers a DMA read channel (IDMAC or DMAIC) whenever signalled by
another write channel's EOF.


Right, poor choice of wording on my part, thanks for the
clarification.



Since the read channel of PRPVF and PRPENC is the same (IC direct, cb7),
I don't yet understand how the VDIC output can be sent to one but not
the other. As you said, the documentation is a bit confusing in this
regard.


agreed.


Which suggests that when IC receives from VDIC, PRPENC can
receive no data and is effectively unusable.


The VDIC direct input is enabled with 

Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-13 Thread Philipp Zabel
On Thu, 2017-01-12 at 18:22 -0800, Steve Longerbeam wrote:
> 
> On 01/12/2017 03:43 PM, Steve Longerbeam wrote:
> >
> >
> > On 01/12/2017 03:22 PM, Steve Longerbeam wrote:
> >>
> >>
>  and since my PRPVF entity rolls up the VDIC internally, it is
>  actually receiving from the VDIC channel. So unless you think we
>  should have a distinct VDIC entity, I would like to keep this the
>  way it is.
> >>> Yes, I think VDIC should be separated out of PRPVF. What do you think
> >>> about splitting the IC PRP into three parts?
> >>>
> >>> PRP could have one input pad connected to either CSI0, CSI1, or VDIC,
> >>> and two output pads connected to PRPVF and PRPENC, respectively. This
> >>> would even allow to have the PRP describe the downscale and PRPVF and
> >>> PRPENC describe the bilinear upscale part of the IC.
> >
> > Actually, how about the following:
> >
> > PRP would have one input pad coming from CSI0, CSI1, or VDIC. But
> > instead of another level of indirection with two more PRPENC and PRPVF
> > entities, PRP would instead have two output pads, one for PRPVF output
> > and one for PRPENC output.
> >
> > Both output pads could be activated if the input is connected to CSI0 
> > or CSI1.
> > And only the PRPVF output can be activated if the input is from VDIC.

Regarding the single input issue, I'd be fine with that too. But I
really like the idea of splitting the downsize section from the main
processing section because seeing the decimated resolution at the
downsize output pad makes it really obvious why the resulting frames are
blurry.

> Actually that proved too difficult. I went with your original idea.
> Branch that implements this is imx-media-staging-md-prp. The media dot
> graph looks good but I have not tested yet. I'll start testing it tomorrow.

Thanks, I'll have a look.

regards
Philipp




Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-13 Thread Philipp Zabel
On Thu, 2017-01-12 at 15:22 -0800, Steve Longerbeam wrote:
> Hi Philipp, JM,
>
> First, let me say that you both have convinced me that we need a VDIC
> entity. I've implemented that in the branch imx-media-staging-md-vdic.
> At this point it only implements the M/C de-interlacing function, not the
> plane Combiner. So it has only one input and one output pad.

Excellent.

>  I would
> imagine it will need two additional inputs and another output to support
> the Combiner (two pads for each plane to be combined, and a combiner
> output pad).

If I accept for a moment that IDMAC/FSU channel links are described as
media entity links, that would be right, I guess. The input pads would
represent the VDI1/3_SRC_SEL FSU muxes and the IDMAC read channels 25
and 26.

> More below...
[...]
> > I don't suggest folding it into the CSI.
> > The CSI should have one output pad that can be connected either to
> > the IC PRP input (CSIx_DATA_DEST=1) or to the IDMAC via SMFC
> > (CSIx_DATA_DEST=2).
> 
> Right, and CSI can connect to VDIC. I don't know if it is documented,
> but to route to VDIC, set CSIx_DATA_DEST=1, as if to IC PRP. Confusing,
> but it's as if the VDIC is somehow part of the IC.

Agreed. I get the feeling that the VDIC FIFOs (especially FIFO1) and the
"FIFO[s] located in the Input Buffer Memory" that are mentioned in the
IC chapter might be partially referring to the same thing, or at least
are somehow tightly interconnected. At least VDIC FIFO1 described in
Figure 37-47 "VDIC Block Diagram" has the direct CSI input into FIFO1,
and it is mentioned that the IDMAC can read back from FIFO1 directly.

[...]
> > Is there even a reason for the user to switch between direct and via
> > memory paths manually, or could this be inferred from other state
> > (formats, active links)?
> 
> a CSI -> VDIC link doesn't convey whether that is a direct link using
> the FSU, or whether it is via SMFC and memory buffers.
> 
> If you'll recall, the VDIC has three motion modes: low, medium, and
> high.
> 
> When VDIC receives directly from CSI, it can only operate in
> high motion mode (it processes a single field F(n-1) sent directly
> from the CSI using the FSU). The reference manual refers to this
> as "real time mode".

In my opinion this is the only mode that should be supported in the
capture driver. But that may be wishful thinking to a certain degree -
see below.

> The low and medium motion modes require processing all three
> fields F(n-1), F(n), and F(n+1). These fields must come from IDMAC
> channels 8, 9, and 10 respectively.
> 
> So in order to support low and medium motion modes, there needs to
> be a pipeline where the VDIC receives F(n-1), F(n), and F(n+1) from
> memory buffers.

In the cases where the VDIC reads all three fields from memory, I'd
prefer that to be implemented as a separate mem2mem device. While useful
on its own, there could be an API to link together the capture and
output of different video4linux devices, and that could get a backend to
implement IDMAC/FSU channel linking where supported.

> How about this: we can do away with SMFC entities by having two
> output pads from the CSI: a "direct" output pad that can link to PRP and
> VDIC, and an "IDMAC via SMFC" output pad that links to the entities
> that require memory buffers (VDIC in low/medium motion mode, camif, and
> PP). Only one of those output pads can be active at a time. I'm not
> sure if that is allowed by the media framework: do two source pads
> imply that the entity can activate both of those pads simultaneously,
> or is it allowed that only one source pad of two or more can be
> activated at a time? It's not clear to me.
> 
> Let me know if you agree with this proposal.

In my opinion that is better than having the SMFC as a separate entity;
even better would be not to have to describe the memory paths as media
links.

[...]
> >> Here also, I'd prefer to keep distinct PRPENC and PRPVF entities. You
> >> are correct that PRPENC and PRPVF do share an input channel (the CSIs).
> >> But the PRPVF has an additional input channel from the VDIC,
> > Wait, that is a VDIC -> PRP connection, not a VDIC -> PRPVF connection,
> > or am I mistaken?
> 
> The FSU only sends VDIC output to PRPVF, not PRPENC. It's not
> well documented, but see "IPU main flows" section in the manual.
> All listed pipelines that route VDIC to IC go to IC (PRP VF).

Sorry to be a bit pedantic, the FSU does not send output. It just
triggers a DMA read channel (IDMAC or DMAIC) whenever signalled by
another write channel's EOF.

Since the read channel of PRPVF and PRPENC is the same (IC direct, cb7),
I don't yet understand how the VDIC output can be sent to one but not
the other. As you said, the documentation is a bit confusing in this
regard.

> Which suggests that when IC receives from VDIC, PRPENC can
> receive no data and is effectively unusable.
> 
> > The VDIC direct input is enabled with ipu_set_ic_src_mux(vdi=true)
> > (IC_INPUT=1), and that is the 

Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-12 Thread Steve Longerbeam

On 01/12/2017 03:43 PM, Steve Longerbeam wrote:

On 01/12/2017 03:22 PM, Steve Longerbeam wrote:

and since my PRPVF entity rolls up the VDIC internally, it is actually
receiving from the VDIC channel. So unless you think we should have a
distinct VDIC entity, I would like to keep this the way it is.

Yes, I think VDIC should be separated out of PRPVF. What do you think
about splitting the IC PRP into three parts?

PRP could have one input pad connected to either CSI0, CSI1, or VDIC,
and two output pads connected to PRPVF and PRPENC, respectively. This
would even allow to have the PRP describe the downscale and PRPVF and
PRPENC describe the bilinear upscale part of the IC.


Actually, how about the following:

PRP would have one input pad coming from CSI0, CSI1, or VDIC. But
instead of another level of indirection with two more PRPENC and PRPVF
entities, PRP would instead have two output pads, one for PRPVF output
and one for PRPENC output.

Both output pads could be activated if the input is connected to CSI0 
or CSI1.

And only the PRPVF output can be activated if the input is from VDIC.


Actually that proved too difficult. I went with your original idea.
Branch that implements this is imx-media-staging-md-prp. The media dot
graph looks good but I have not tested yet. I'll start testing it
tomorrow.

Steve



Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-12 Thread Steve Longerbeam

On 01/12/2017 03:22 PM, Steve Longerbeam wrote:

and since my PRPVF entity rolls up the VDIC internally, it is actually
receiving from the VDIC channel. So unless you think we should have a
distinct VDIC entity, I would like to keep this the way it is.

Yes, I think VDIC should be separated out of PRPVF. What do you think
about splitting the IC PRP into three parts?

PRP could have one input pad connected to either CSI0, CSI1, or VDIC,
and two output pads connected to PRPVF and PRPENC, respectively. This
would even allow to have the PRP describe the downscale and PRPVF and
PRPENC describe the bilinear upscale part of the IC.


Actually, how about the following:

PRP would have one input pad coming from CSI0, CSI1, or VDIC. But
instead of another level of indirection with two more PRPENC and PRPVF
entities, PRP would instead have two output pads, one for PRPVF output
and one for PRPENC output.

Both output pads could be activated if the input is connected to CSI0 or 
CSI1.

And only the PRPVF output can be activated if the input is from VDIC.

Steve



Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-12 Thread Steve Longerbeam

Hi Philipp, JM,


First, let me say that you both have convinced me that we need a VDIC
entity. I've implemented that in the branch imx-media-staging-md-vdic.
At this point it only implements the M/C de-interlacing function, not the
plane Combiner. So it has only one input and one output pad. I would
imagine it will need two additional inputs and another output to support
the Combiner (two pads for each plane to be combined, and a combiner
output pad).

More below...

On 01/11/2017 04:10 AM, Philipp Zabel wrote:

Hi Steve,

On Tue, 2017-01-10 at 15:52 -0800, Steve Longerbeam wrote:

On 01/09/2017 04:15 PM, Steve Longerbeam wrote:

Hi Philipp,


On 01/09/2017 11:43 AM, Philipp Zabel wrote:

One is the amount and organization of subdevices/media entities visible
to userspace. The SMFCs should not be user controllable subdevices, but
can be implicitly enabled when a CSI is directly linked to a camif.

I agree the SMFC could be folded into the CSI, but I see at least one
issue.

I don't suggest folding it into the CSI.
The CSI should have one output pad that can be connected either to
the IC PRP input (CSIx_DATA_DEST=1) or to the IDMAC via SMFC
(CSIx_DATA_DEST=2).


Right, and CSI can connect to VDIC. I don't know if it is documented,
but to route to VDIC, set CSIx_DATA_DEST=1, as if to IC PRP. Confusing,
but it's as if the VDIC is somehow part of the IC.


  The SMFC should be considered part of the link
between CSI and IDMAC.


sure, I can agree with that.


The IC PRP input pad should be connected to either the CSI0 output pad
(CSI_SEL=0,IC_INPUT=0), the CSI1 output pad (CSI_SEL=1,IC_INPUT=0), or
to the VDIC (IC_INPUT=1).


correct.

 From the dot graph you'll see that the PRPVF entity can receive directly
from the CSI, or indirectly via the SMFC.

And that's one reason why I think representing the mem2mem paths as
links in the media controller interface is questionable. The path "via
SMFC" is not really a hardware connection between CSI -> SMFC -> IC PRP,
but two completely separate paths:
CSI -> SMFC -> IDMAC -> mem and mem -> IDMAC -> IC PRP with different
IDMAC read/write channels. The only real connection is that the IC DMA
transfers are triggered automatically by the frame synchronisation unit
on every CSI frame.
There is no way to convey to the user which links are real connections
and which are just linked DMA write and read channels somewhere else.

Is there even a reason for the user to switch between direct and via
memory paths manually, or could this be inferred from other state
(formats, active links)?


a CSI -> VDIC link doesn't convey whether that is a direct link using
the FSU, or whether it is via SMFC and memory buffers.

If you'll recall, the VDIC has three motion modes: low, medium, and
high.

When VDIC receives directly from CSI, it can only operate in
high motion mode (it processes a single field F(n-1) sent directly
from the CSI using the FSU). The reference manual refers to this
as "real time mode".

The low and medium motion modes require processing all three
fields F(n-1), F(n), and F(n+1). These fields must come from IDMAC
channels 8, 9, and 10 respectively.

So in order to support low and medium motion modes, there needs to
be a pipeline where the VDIC receives F(n-1), F(n), and F(n+1) from
memory buffers.
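
For illustration, a rough sketch of programming the VDIC with the
ipu-v3 VDI helpers (a sketch only; the IDMAC setup for the three field
buffers in the memory modes is omitted):

#include <linux/err.h>
#include <linux/media-bus-format.h>
#include <linux/videodev2.h>
#include <video/imx-ipu-v3.h>

static int vdic_start(struct ipu_soc *ipu, int width, int height,
                      bool direct_from_csi)
{
        struct ipu_vdi *vdi = ipu_vdi_get(ipu);

        if (IS_ERR(vdi))
                return PTR_ERR(vdi);

        ipu_vdi_setup(vdi, MEDIA_BUS_FMT_UYVY8_2X8, width, height);
        ipu_vdi_set_field_order(vdi, V4L2_STD_UNKNOWN, V4L2_FIELD_TOP);
        /*
         * Direct CSI input implies high motion ("real time mode"),
         * where only F(n-1) is available. Low/medium motion also
         * need F(n) and F(n+1) read back from memory via IDMAC
         * channels 9 and 10.
         */
        ipu_vdi_set_motion(vdi, direct_from_csi ? HIGH_MOTION : MED_MOTION);

        return ipu_vdi_enable(vdi);
}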

How about this: we can do away with SMFC entities by having two
output pads from the CSI: a "direct" output pad that can link to PRP and
VDIC, and an "IDMAC via SMFC" output pad that links to the entities that
require memory buffers (VDIC in low/medium motion mode, camif, and PP).
Only one of those output pads can be active at a time. I'm not sure if
that is allowed by the media framework: do two source pads imply that
the entity can activate both of those pads simultaneously, or is it
allowed that only one source pad of two or more can be activated at a
time? It's not clear to me.

Let me know if you agree with this proposal.
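
A rough sketch of how the CSI entity could enforce "only one source pad
active at a time" in its link_setup op, if the framework leaves that to
drivers (pad indices and the private struct are made up):

#include <linux/errno.h>
#include <media/media-entity.h>
#include <media/v4l2-subdev.h>

enum { CSI_PAD_SINK, CSI_PAD_SRC_DIRECT, CSI_PAD_SRC_IDMAC };

struct csi_priv {
        struct v4l2_subdev sd;
        bool src_active[3];     /* per-pad "link enabled" bookkeeping */
};

static int csi_link_setup(struct media_entity *entity,
                          const struct media_pad *local,
                          const struct media_pad *remote, u32 flags)
{
        struct v4l2_subdev *sd = media_entity_to_v4l2_subdev(entity);
        struct csi_priv *priv = container_of(sd, struct csi_priv, sd);
        unsigned int other = (local->index == CSI_PAD_SRC_DIRECT) ?
                             CSI_PAD_SRC_IDMAC : CSI_PAD_SRC_DIRECT;

        /* the remote pad doesn't matter for this check */
        if (local->index == CSI_PAD_SINK)
                return 0;

        /* refuse to enable one source pad while the other is in use */
        if ((flags & MEDIA_LNK_FL_ENABLED) && priv->src_active[other])
                return -EBUSY;

        priv->src_active[local->index] = !!(flags & MEDIA_LNK_FL_ENABLED);
        return 0;
}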

Also I'm not convinced the 1:1 mapping of IC task to subdevices is the
best choice. It is true that the three tasks can be enabled separately,
but to my understanding, the PRP ENC and PRP VF tasks share a single
input channel. Shouldn't this be a single PRP subdevice with one input
and two (VF, ENC) outputs?

Since the VDIC sends its motion compensated frames to the PRP VF task,
I've created the PRPVF entity solely for motion compensated de-interlace
support. I don't really see any other use for the PRPVF task except for
motion compensated de-interlace.

I suppose simultaneous scaling to two different resolutions without
going through memory could be one interesting use case:

             ,--> VF  --> IDMAC -> mem -> to display
CSI -> IC PRP
             `--> ENC --> IDMAC -> mem -> to VPU


Yes, that is one possibility.


So really, the PRPVF entity is a combination of the VDIC and PRPVF
subunits.

I'd prefer to keep them separate; we could then use a deactivated VDIC
-> IC PRP link to mark the VDIC as available to be used for

Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-11 Thread Philipp Zabel
On Mon, 2017-01-09 at 16:15 -0800, Steve Longerbeam wrote:
[...]
> Since the VDIC sends its motion compensated frames to the PRP VF task,
> I've created the PRPVF entity solely for motion compensated de-interlace
> support. I don't really see any other use for the PRPVF task except for
> motion compensated de-interlace.
> 
> So really, the PRPVF entity is a combination of the VDIC and PRPVF subunits.
> 
> So looking at link_setup() in imx-csi.c, you'll see that when the CSI is 
> linked
> to PRPVF entity, it is actually sending to IPU_CSI_DEST_VDIC.
> 
> But if we were to create a VDIC entity, I can see how we could then
> have a single PRP entity. Then if the PRP entity is receiving from the VDIC,
> the PRP VF task would be activated.
> 
> Another advantage of creating a distinct VDIC entity is that frames could
> potentially be routed directly from the VDIC to camif, for 
> motion-compensated
> frame capture only with no scaling/CSC. I think that would be IDMAC channel
> 5, we've tried to get that pipeline to work in the past without success. 
> That's
> mainly why I decided not to attempt it and instead fold VDIC into PRPVF 
> entity.
> 
> 
> > On the other hand, there is currently no way to communicate to userspace
> > that the IC can't downscale bilinearly, but only to 1/2 or 1/4 of the
> > input resolution, and then scale up bilinearly from there. So instead of
> > pretending to be able to downscale to any resolution, it would be better
> > to split each IC task into two subdevs, one for the
> > 1/2-or-1/4-downscaler, and one for the bilinear upscaler.
> 
> Hmm, good point, but couldn't we just document that fact?
> 
> 
> 
> > Next there is the issue of the custom mem2mem infrastructure inside the
> > IPU driver. I think this should be ultimately implemented at a higher
> > level,
> 
> if we were to move it up, into what subsystem would it go? I guess
> v4l2, but then we lose the possibility of other subsystems making use
> of it, such as drm.
> 
> >   but I see no way this will ever move out of the IPU driver once
> > the userspace interface gets established.
> 
> it would be more difficult at that point, true.
> 
> > Then there are a few issues that are not userspace visible, so less
> > pressing. For example modelling the internal subdevs as separate
> > platform devices with async subdevices seems unnecessarily indirect. Why
> > not drop the platform devices and create the subdevs directly instead of
> > asynchronously?
> 
> This came out of a previous implementation where the IPU internal
> subunits and their links were represented as an OF graph (the patches
> I floated by you that you didn't like :) I've been meaning to ask you why
> you don't like to expose the IPU internals via OF graph, I have my theories
> why, but I digress). In that implementation the internal subdevs had to be
> registered asynchronously with device node match.
> 
> I thought about going back to registering the IPU internal subdevs
> directly, but I actually like doing it this way, it provides a satisfying
> symmetry that all the subdevs are registered in the same way
> asynchronously, the only difference being the external subdevs are
> registered with device node match, and the internal subdevs with
> device name match.
> 
> 
> > I'll try to give the driver a proper review in the next days.
> 
> Ok thanks.
> 
> >> Philipp's driver only supports unconverted image capture from the SMFC.
> >> In addition
> >> to that, mine allows for all the hardware links supported by the IPU,
> >> including routing
> >> frames from the CSI directly to the Image converter for scaling up to
> >> 4096x4096,
> > Note that tiled scaling (anything > 1024x1024) currently doesn't produce
> > correct results due to the fractional reset at the seam. This is not a
> > property of this driver, but still needs to be fixed in the ipu-v3 core.
> 
> right, understood.
> 
> >> colorspace conversion, rotation, and motion compensated de-interlace.
> >> Yes all these
> >> conversion can be carried out post-capture via a mem2mem device, but
> >> conversion
> >> directly from CSI capture has advantages, including minimized CPU
> >> utilization and
> >> lower AXI bus traffic.
> > These benefits are limited by the hardware to non-rotated frames <
> > 1024x1024 pixels.
> 
> right. But I can see many use-cases out there, where people need
> scaling and/or colorspace conversion in video capture, but don't
> need output > 1024x1024.
> 
> Steve
> 
> 




Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-11 Thread Philipp Zabel
Hi Steve,

On Tue, 2017-01-10 at 15:52 -0800, Steve Longerbeam wrote:
> 
> On 01/09/2017 04:15 PM, Steve Longerbeam wrote:
> > Hi Philipp,
> >
> >
> > On 01/09/2017 11:43 AM, Philipp Zabel wrote:
> >
> >
> > 
> >> One is the amount and organization of subdevices/media entities visible
> >> to userspace. The SMFCs should not be user controllable subdevices, but
> >> can be implicitly enabled when a CSI is directly linked to a camif.
> >
> > I agree the SMFC could be folded into the CSI, but I see at least one
> > issue.

I don't suggest folding it into the CSI.
The CSI should have one output pad that can be connected either to
the IC PRP input (CSIx_DATA_DEST=1) or to the IDMAC via SMFC
(CSIx_DATA_DEST=2). The SMFC should be considered part of the link
between CSI and IDMAC.
The IC PRP input pad should be connected to either the CSI0 output pad
(CSI_SEL=0,IC_INPUT=0), the CSI1 output pad (CSI_SEL=1,IC_INPUT=0), or
to the VDIC (IC_INPUT=1).
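
In code, that routing boils down to something like this (a sketch using
the ipu-v3 helpers discussed in this thread; error handling omitted):

#include <video/imx-ipu-v3.h>

static void csi_route(struct ipu_soc *ipu, struct ipu_csi *csi,
                      int csi_id, bool to_memory, bool through_vdic)
{
        if (to_memory) {
                /* CSIx_DATA_DEST=2: CSI -> SMFC -> IDMAC -> mem */
                ipu_csi_set_dest(csi, IPU_CSI_DEST_IDMAC);
        } else if (through_vdic) {
                /* CSIx_DATA_DEST=1, IC_INPUT=1: CSI -> VDIC -> IC PRP */
                ipu_csi_set_dest(csi, IPU_CSI_DEST_VDIC);
                ipu_set_ic_src_mux(ipu, csi_id, true);
        } else {
                /* CSIx_DATA_DEST=1, CSI_SEL=csi_id, IC_INPUT=0 */
                ipu_csi_set_dest(csi, IPU_CSI_DEST_IC);
                ipu_set_ic_src_mux(ipu, csi_id, false);
        }
}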

> > From the dot graph you'll see that the PRPVF entity can receive directly
> > from the CSI, or indirectly via the SMFC.

And that's one reason why I think representing the mem2mem paths as
links in the media controller interface is questionable. The path "via
SMFC" is not really a hardware connection between CSI -> SMFC -> IC PRP,
but two completely separate paths:
CSI -> SMFC -> IDMAC -> mem and mem -> IDMAC -> IC PRP with different
IDMAC read/write channels. The only real connection is that the IC DMA
transfers are triggered automatically by the frame synchronisation unit
on every CSI frame.
There is no way to convey to the user which links are real connections
and which are just linked DMA write and read channels somewhere else.

Is there even a reason for the user to switch between direct and via
memory paths manually, or could this be inferred from other state
(formats, active links)?

>  If the SMFC entity were folded
> > into the CSI entity, there would have to be a "direct to PRPVF" output
> > pad and an "indirect via SMFC" output pad and I'm not sure how that
> > info would be conveyed to the user. With a SMFC entity those pipelines
> > are explicit.
>
> In summary here, unless you have strong objection I'd prefer to keep a
> distinct SMFC entity.

I do; I still think you could both describe the hardware better and
reduce unnecessary interface complexity by removing the SMFC entities
and their imagined links to the IC.

> It makes the pipelines more clear to the user, and it
> better models the IPU internals.

I disagree. The IPU has a single SMFC that acts as FIFO to both CSIs in
the CSI -> SMFC -> IDMAC path, not two. The "channel linking" (automatic
DMA triggers between channels in the IDMAC via FSU) has nothing to do
with the SMFC.

> >> Also I'm not convinced the 1:1 mapping of IC task to subdevices is the
> >> best choice. It is true that the three tasks can be enabled separately,
> >> but to my understanding, the PRP ENC and PRP VF tasks share a single
> >> input channel. Shouldn't this be a single PRP subdevice with one input
> >> and two (VF, ENC) outputs?
> >
> > Since the VDIC sends its motion compensated frames to the PRP VF task,
> > I've created the PRPVF entity solely for motion compensated de-interlace
> > support. I don't really see any other use for the PRPVF task except for
> > motion compensated de-interlace.

I suppose simultaneous scaling to two different resolutions without
going through memory could be one interesting use case:

             ,--> VF  --> IDMAC -> mem -> to display
CSI -> IC PRP
             `--> ENC --> IDMAC -> mem -> to VPU

> > So really, the PRPVF entity is a combination of the VDIC and PRPVF 
> > subunits.

I'd prefer to keep them separate; we could then use a deactivated VDIC
-> IC PRP link to mark the VDIC as available to be used for its
combining functionality by another driver.

Also logically, the VDIC subdev would be the right user interface to
switch from interlaced to non-interlaced pad modes, whereas the IC
subdev(s) should just allow changing color space and size between its
inputs and outputs.
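
Concretely, switching the VDIC from interlaced input to progressive
output could then look like this from userspace (a sketch; the pad
numbering is assumed):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/media-bus-format.h>
#include <linux/v4l2-subdev.h>
#include <linux/videodev2.h>

/* sink pad 0 takes fields in, source pad 1 hands progressive frames on */
static int vdic_set_formats(int subdev_fd, unsigned int w, unsigned int h)
{
        struct v4l2_subdev_format fmt;

        memset(&fmt, 0, sizeof(fmt));
        fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
        fmt.pad = 0;
        fmt.format.width = w;
        fmt.format.height = h;
        fmt.format.code = MEDIA_BUS_FMT_UYVY8_2X8;
        fmt.format.field = V4L2_FIELD_INTERLACED;
        if (ioctl(subdev_fd, VIDIOC_SUBDEV_S_FMT, &fmt) < 0)
                return -1;

        fmt.pad = 1;
        fmt.format.field = V4L2_FIELD_NONE;     /* deinterlaced output */
        return ioctl(subdev_fd, VIDIOC_SUBDEV_S_FMT, &fmt);
}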

> > So looking at link_setup() in imx-csi.c, you'll see that when the CSI 
> > is linked
> > to PRPVF entity, it is actually sending to IPU_CSI_DEST_VDIC.
> >
> > But if we were to create a VDIC entity, I can see how we could then
> > have a single PRP entity. Then if the PRP entity is receiving from the 
> > VDIC,
> > the PRP VF task would be activated.
> >
> > Another advantage of creating a distinct VDIC entity is that frames could
> > potentially be routed directly from the VDIC to camif, for 
> > motion-compensated
> > frame capture only with no scaling/CSC. I think that would be IDMAC 
> > channel
> > 5, we've tried to get that pipeline to work in the past without 
> > success. That's
> > mainly why I decided not to attempt it and instead fold VDIC into 
> > PRPVF entity.

There's also channel 13, but that's described as "Recent field from
CSI", so I think that might be for CSI-only mode.


Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-11 Thread Jean-Michel Hautbois
Hi Steve and Philipp,

2017-01-11 0:52 GMT+01:00 Steve Longerbeam:
>
>
> On 01/09/2017 04:15 PM, Steve Longerbeam wrote:
>>
>> Hi Philipp,
>>
>>
>> On 01/09/2017 11:43 AM, Philipp Zabel wrote:
>>
>>
>> 
>>>
>>> One is the amount and organization of subdevices/media entities visible
>>> to userspace. The SMFCs should not be user controllable subdevices, but
>>> can be implicitly enabled when a CSI is directly linked to a camif.
>>
>>
>> I agree the SMFC could be folded into the CSI, but I see at least one
>> issue.
>>
>> From the dot graph you'll see that the PRPVF entity can receive directly
>> from the CSI, or indirectly via the SMFC. If the SMFC entity were folded
>> into the CSI entity, there would have to be a "direct to PRPVF" output pad
>> and an "indirect via SMFC" output pad and I'm not sure how that info would
>> be conveyed to the user. With a SMFC entity those pipelines are explicit.
>
>
> In summary here, unless you have strong objection I'd prefer to keep a
> distinct SMFC entity. It makes the pipelines more clear to the user, and it
> better models the IPU internals.
>
>>
>>> Also I'm not convinced the 1:1 mapping of IC task to subdevices is the
>>> best choice. It is true that the three tasks can be enabled separately,
>>> but to my understanding, the PRP ENC and PRP VF tasks share a single
>>> input channel. Shouldn't this be a single PRP subdevice with one input
>>> and two (VF, ENC) outputs?
>>
>>
>> Since the VDIC sends its motion compensated frames to the PRP VF task,
>> I've created the PRPVF entity solely for motion compensated de-interlace
>> support. I don't really see any other use for the PRPVF task except for
>> motion compensated de-interlace.
>>
>> So really, the PRPVF entity is a combination of the VDIC and PRPVF
>> subunits.
>>
>> So looking at link_setup() in imx-csi.c, you'll see that when the CSI is
>> linked
>> to PRPVF entity, it is actually sending to IPU_CSI_DEST_VDIC.
>>
>> But if we were to create a VDIC entity, I can see how we could then
>> have a single PRP entity. Then if the PRP entity is receiving from the
>> VDIC,
>> the PRP VF task would be activated.
>>
>> Another advantage of creating a distinct VDIC entity is that frames could
>> potentially be routed directly from the VDIC to camif, for
>> motion-compensated
>> frame capture only with no scaling/CSC. I think that would be IDMAC
>> channel
>> 5, we've tried to get that pipeline to work in the past without success.
>> That's
>> mainly why I decided not to attempt it and instead fold VDIC into PRPVF
>> entity.
>>
>>
>
> Here also, I'd prefer to keep distinct PRPENC and PRPVF entities. You are
> correct
> that PRPENC and PRPVF do share an input channel (the CSIs). But the PRPVF
> has an additional input channel from the VDIC, and since my PRPVF
> entity rolls up the VDIC internally, it is actually receiving from the
> VDIC channel.
>
> So unless you think we should have a distinct VDIC entity, I would like to
> keep this
> the way it is.
>
> Steve
>

That is exactly my thought. I was wondering if VDIC could be an
independent entity, as it could also be used as a combiner if one adds
the channels...

What do you think about that ?

JM
PS: My phone sent the mail in HTML, again, sorry for the double mail, again...


Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-10 Thread Steve Longerbeam

On 01/09/2017 04:15 PM, Steve Longerbeam wrote:

Hi Philipp,


On 01/09/2017 11:43 AM, Philipp Zabel wrote:

One is the amount and organization of subdevices/media entities visible
to userspace. The SMFCs should not be user controllable subdevices, but
can be implicitly enabled when a CSI is directly linked to a camif.


I agree the SMFC could be folded into the CSI, but I see at least one
issue.

From the dot graph you'll see that the PRPVF entity can receive directly
from the CSI, or indirectly via the SMFC. If the SMFC entity were folded
into the CSI entity, there would have to be a "direct to PRPVF" output
pad and an "indirect via SMFC" output pad and I'm not sure how that info
would be conveyed to the user. With a SMFC entity those pipelines are
explicit.


In summary here, unless you have strong objection I'd prefer to keep a
distinct SMFC entity. It makes the pipelines more clear to the user, and it
better models the IPU internals.

Also I'm not convinced the 1:1 mapping of IC task to subdevices is the
best choice. It is true that the three tasks can be enabled separately,
but to my understanding, the PRP ENC and PRP VF tasks share a single
input channel. Shouldn't this be a single PRP subdevice with one input
and two (VF, ENC) outputs?


Since the VDIC sends its motion compensated frames to the PRP VF task,
I've created the PRPVF entity solely for motion compensated de-interlace
support. I don't really see any other use for the PRPVF task except for
motion compensated de-interlace.

So really, the PRPVF entity is a combination of the VDIC and PRPVF 
subunits.


So looking at link_setup() in imx-csi.c, you'll see that when the CSI
is linked to PRPVF entity, it is actually sending to IPU_CSI_DEST_VDIC.

But if we were to create a VDIC entity, I can see how we could then
have a single PRP entity. Then if the PRP entity is receiving from the
VDIC, the PRP VF task would be activated.

Another advantage of creating a distinct VDIC entity is that frames
could potentially be routed directly from the VDIC to camif, for
motion-compensated frame capture only with no scaling/CSC. I think that
would be IDMAC channel 5; we've tried to get that pipeline to work in
the past without success. That's mainly why I decided not to attempt it
and instead fold VDIC into PRPVF entity.

Here also, I'd prefer to keep distinct PRPENC and PRPVF entities. You
are correct that PRPENC and PRPVF do share an input channel (the CSIs).
But the PRPVF has an additional input channel from the VDIC, and since
my PRPVF entity rolls up the VDIC internally, it is actually receiving
from the VDIC channel.

So unless you think we should have a distinct VDIC entity, I would like
to keep this the way it is.

Steve



Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-09 Thread Steve Longerbeam

Hi Philipp,


On 01/09/2017 11:43 AM, Philipp Zabel wrote:

Hi Steve,

On Fri, 2016-12-30 at 12:26 -0800, Steve Longerbeam wrote:

On 12/30/2016 11:06 AM, Marek Vasut wrote:

On 12/29/2016 09:51 PM, Robert Schwebel wrote:

Hi Jean-Michel,

Hi,


On Thu, Dec 29, 2016 at 04:08:33PM +0100, Jean-Michel Hautbois wrote:

What is the status of this work?

Philipp's patches have been reworked with the review feedback from the
last round and a new version will be posted when he is back from
holidays.

IMO Philipp's patches are better integrated and well structured, so I'd
rather like to see his work in at some point.

Granted I am biased, but I will state my case. "Better integrated" - my
patches are also well integrated with the media core infrastructure.
Philipp's patches in fact require modification to media core, whereas
mine require none. Some changes are needed of course (more subdev type
definitions for one).

As for "well structured", I don't really understand what is meant by that,
but my driver is also well structured.

I agree that this driver is well structured and well documented. Many of
my earlier concerns regarding the device tree bindings and media
controller interface have been addressed.


Thanks! Your input is crucial, so I really appreciate it.


  But there are still a few
design choices that I don't agree with, and some are userspace visible,
which makes me worry about not being able to change them later.


Ok.


One is the amount and organization of subdevices/media entities visible
to userspace. The SMFCs should not be user controllable subdevices, but
can be implicitly enabled when a CSI is directly linked to a camif.


I agree the SMFC could be folded into the CSI, but I see at least one
issue.

From the dot graph you'll see that the PRPVF entity can receive directly
from the CSI, or indirectly via the SMFC. If the SMFC entity were folded
into the CSI entity, there would have to be a "direct to PRPVF" output pad
and an "indirect via SMFC" output pad and I'm not sure how that info would
be conveyed to the user. With a SMFC entity those pipelines are explicit.


Also I'm not convinced the 1:1 mapping of IC task to subdevices is the
best choice. It is true that the three tasks can be enabled separately,
but to my understanding, the PRP ENC and PRP VF tasks share a single
input channel. Shouldn't this be a single PRP subdevice with one input
and two (VF, ENC) outputs?


Since the VDIC sends its motion compensated frames to the PRP VF task,
I've created the PRPVF entity solely for motion compensated de-interlace
support. I don't really see any other use for the PRPVF task except for
motion compensated de-interlace.

So really, the PRPVF entity is a combination of the VDIC and PRPVF subunits.

So looking at link_setup() in imx-csi.c, you'll see that when the CSI
is linked to PRPVF entity, it is actually sending to IPU_CSI_DEST_VDIC.

But if we were to create a VDIC entity, I can see how we could then
have a single PRP entity. Then if the PRP entity is receiving from the VDIC,
the PRP VF task would be activated.

Another advantage of creating a distinct VDIC entity is that frames
could potentially be routed directly from the VDIC to camif, for
motion-compensated frame capture only with no scaling/CSC. I think that
would be IDMAC channel 5; we've tried to get that pipeline to work in
the past without success. That's mainly why I decided not to attempt it
and instead fold VDIC into PRPVF entity.

On the other hand, there is currently no way to communicate to userspace
that the IC can't downscale bilinearly, but only to 1/2 or 1/4 of the
input resolution, and then scale up bilinearly from there. So instead of
pretending to be able to downscale to any resolution, it would be better
to split each IC task into two subdevs, one for the
1/2-or-1/4-downscaler, and one for the bilinear upscaler.


Hmm, good point, but couldn't we just document that fact?

Next there is the issue of the custom mem2mem infrastructure inside the
IPU driver. I think this should be ultimately implemented at a higher
level,


if we were to move it up, into what subsystem would it go? I guess
v4l2, but then we lose the possibility of other subsystems making use
of it, such as drm.


  but I see no way this will ever move out of the IPU driver once
the userspace interface gets established.


it would be more difficult at that point, true.


Then there are a few issues that are not userspace visible, so less
pressing. For example modelling the internal subdevs as separate
platform devices with async subdevices seems unnecessarily indirect. Why
not drop the platform devices and create the subdevs directly instead of
asynchronously?


This came out of a previous implementation where the IPU internal
subunits and their links were represented as an OF graph (the patches
I floated by you that you didn't like :) I've been meaning to ask you why
you don't like to expose the IPU internals via OF graph, I have my 

Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-09 Thread Philipp Zabel
Hi Steve,

On Fri, 2016-12-30 at 12:26 -0800, Steve Longerbeam wrote:
> 
> On 12/30/2016 11:06 AM, Marek Vasut wrote:
> > On 12/29/2016 09:51 PM, Robert Schwebel wrote:
> >> Hi Jean-Michel,
> > Hi,
> >
> >> On Thu, Dec 29, 2016 at 04:08:33PM +0100, Jean-Michel Hautbois wrote:
> >>> What is the status of this work?
> >> Philipp's patches have been reworked with the review feedback from the
> >> last round and a new version will be posted when he is back from
> >> holidays.
> > IMO Philipp's patches are better integrated and well structured, so I'd
> > rather like to see his work in at some point.
> 
> Granted I am biased, but I will state my case. "Better integrated" - my 
> patches
> are also well integrated with the media core infrastructure. Philipp's 
> patches
> in fact require modification to media core, whereas mine require none.
> Some changes are needed of course (more subdev type definitions for
> one).
> 
> As for "well structured", I don't really understand what is meant by that,
> but my driver is also well structured.

I agree that this driver is well structured and well documented. Many of
my earlier concerns regarding the device tree bindings and media
controller interface have been addressed. But there are still a few
design choices that I don't agree with, and some are userspace visible,
which makes me worry about not being able to change them later.

One is the amount and organization of subdevices/media entities visible
to userspace. The SMFCs should not be user controllable subdevices, but
can be implicitly enabled when a CSI is directly linked to a camif.
Also I'm not convinced the 1:1 mapping of IC task to subdevices is the
best choice. It is true that the three tasks can be enabled separately,
but to my understanding, the PRP ENC and PRP VF tasks share a single
input channel. Shouldn't this be a single PRP subdevice with one input
and two (VF, ENC) outputs?
On the other hand, there is currently no way to communicate to userspace
that the IC can't downscale bilinearly, but only to 1/2 or 1/4 of the
input resolution, and then scale up bilinearly from there. So instead of
pretending to be able to downscale to any resolution, it would be better
to split each IC task into two subdevs, one for the
1/2-or-1/4-downscaler, and one for the bilinear upscaler.
Next there is the issue of the custom mem2mem infrastructure inside the
IPU driver. I think this should be ultimately implemented at a higher
level, but I see no way this will ever move out of the IPU driver once
the userspace interface gets established.

Then there are a few issues that are not userspace visible, so less
pressing. For example modelling the internal subdevs as separate
platform devices with async subdevices seems unnecessarily indirect. Why
not drop the platform devices and create the subdevs directly instead of
asynchronously?
I'll try to give the driver a proper review in the next days.
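
For comparison, a sketch of the direct alternative (names and ops are
placeholders):

#include <media/v4l2-device.h>

/*
 * Sketch: initialize and register an internal subdev in place instead
 * of instantiating a platform device and matching it back through
 * v4l2-async.
 */
static int register_internal_subdev(struct v4l2_device *v4l2_dev,
                                    struct v4l2_subdev *sd,
                                    const struct v4l2_subdev_ops *ops,
                                    const char *name)
{
        v4l2_subdev_init(sd, ops);
        snprintf(sd->name, sizeof(sd->name), "%s", name);
        return v4l2_device_register_subdev(v4l2_dev, sd);
}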

> Philipp's driver only supports unconverted image capture from the SMFC. 
> In addition
> to that, mine allows for all the hardware links supported by the IPU, 
> including routing
> frames from the CSI directly to the Image converter for scaling up to 
> 4096x4096,

Note that tiled scaling (anything > 1024x1024) currently doesn't produce
correct results due to the fractional reset at the seam. This is not a
property of this driver, but still needs to be fixed in the ipu-v3 core.

> colorspace conversion, rotation, and motion compensated de-interlace. 
> Yes all these
> conversion can be carried out post-capture via a mem2mem device, but 
> conversion
> directly from CSI capture has advantages, including minimized CPU 
> utilization and
> lower AXI bus traffic.

These benefits are limited by the hardware to non-rotated frames <
1024x1024 pixels.

regards
Philipp



Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-02 Thread Steve Longerbeam

Hi JM,


On 01/02/2017 06:59 AM, Jean-Michel Hautbois wrote:
> Steve: which branch is the correct one on your github ?

branch imx-media-staging-md-v4 on
g...@github.com:slongerbeam/mediatree.git

Steve



Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-02 Thread Fabio Estevam
Hi Jean-Michel,

On Mon, Jan 2, 2017 at 12:59 PM, Jean-Michel Hautbois wrote:

> Steve: which branch is the correct one on your github ?

I have tested imx-media-staging-md-v2 (based on 4.9-rc) and also
imx-media-staging-md-v4 branch, which is based on 4.10-rc1.


Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-02 Thread Steve Longerbeam

Hi Hans,


On 01/02/2017 06:45 AM, Hans Verkuil wrote:
> On 01/02/17 14:51, Jean-Michel Hautbois wrote:
>> Hi,
>>
>> 2016-12-30 21:26 GMT+01:00 Steve Longerbeam :
>>>
>>> On 12/30/2016 11:06 AM, Marek Vasut wrote:
>>>> On 12/29/2016 09:51 PM, Robert Schwebel wrote:
>>>>> Hi Jean-Michel,
>>>>
>>>> Hi,
>>>>
>>>>> On Thu, Dec 29, 2016 at 04:08:33PM +0100, Jean-Michel Hautbois wrote:
>>>>>> What is the status of this work?
>>>>>
>>>>> Philipp's patches have been reworked with the review feedback from the
>>>>> last round and a new version will be posted when he is back from
>>>>> holidays.
>>>>
>>>> IMO Philipp's patches are better integrated and well structured, so I'd
>>>> rather like to see his work in at some point.
>>>
>>> Granted I am biased, but I will state my case. "Better integrated" - my
>>> patches are also well integrated with the media core infrastructure.
>>> Philipp's patches in fact require modification to media core, whereas
>>> mine require none. Some changes are needed of course (more subdev type
>>> definitions for one).
>>>
>>> As for "well structured", I don't really understand what is meant by
>>> that, but my driver is also well structured.
>>>
>>> Philipp's driver only supports unconverted image capture from the SMFC.
>>> In addition to that, mine allows for all the hardware links supported
>>> by the IPU, including routing frames from the CSI directly to the Image
>>> converter for scaling up to 4096x4096, colorspace conversion, rotation,
>>> and motion compensated de-interlace. Yes all these conversion can be
>>> carried out post-capture via a mem2mem device, but conversion directly
>>> from CSI capture has advantages, including minimized CPU utilization
>>> and lower AXI bus traffic. In any case, Freescale added these hardware
>>> paths, and my driver supports them.
>>
>> I had a deeper look to both drivers, and I must say the features of
>> Steve's one are really interesting.
>> I don't think any of those has been tested with digital inputs (I have
>> ADV76xx chips on my board, which I hope will be available one day for
>> those interested) and so, I can test and help adding some of the
>> missing parts.
>> And for at least a week or two, I have all of my time for it, so it
>> would be of great help to know which one has the bigger chance to be
>> integrated... :)
>
> Steve's series is definitely preferred from my point of view. The feature
> set is clearly superior to Philipp's driver.
>
> I plan on reviewing Steve's series soonish but a quick scan didn't see
> anything suspicious. The code looks clean, and I am leaning towards
> getting this in sooner rather than later, so if you have the time to
> work on this, then go for it!
>
> Steve, I have a SabreLite and a ov5642 sensor, so I should be able to
> test that driver.

Ok, just remember to disable the ov5640 node in imx6qdl-sabrelite.dtsi,
otherwise the driver will expect to see an async registration callback
from the ov5640.

> There is also an ov5642 sensor driver in
> drivers/media/i2c/soc_camera/ov5642.c.
> But nobody AFAIK is using it, so I would be inclined to take your code
> and remove the soc_camera driver.

Ok, and at some point ov5642.c and ov5640_mipi.c need to be merged into
a single ov564x.c, cleaned up (get rid of, or significantly prune, those
huge register tables - they were created by FSL by dumping the i2c
register set after a reset, so they consist mostly of the POR register
values), and finally moved to drivers/media/i2c.

Steve




Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-02 Thread Fabio Estevam
On Mon, Jan 2, 2017 at 12:45 PM, Hans Verkuil  wrote:

> Steve's series is definitely preferred from my point of view. The feature
> set is clearly superior to Philipp's driver.
>
> I plan on reviewing Steve's series soonish but a quick scan didn't see
> anything
> suspicious. The code looks clean, and I am leaning towards getting this in
> sooner
> rather than later, so if you have the time to work on this, then go for it!

This is good news!

I had a chance to test Steve's series on a mx6qsabresd and it worked fine.

Thanks


Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-02 Thread Jean-Michel Hautbois
2017-01-02 15:45 GMT+01:00 Hans Verkuil :
> On 01/02/17 14:51, Jean-Michel Hautbois wrote:
>>
>> Hi,
>>
>> 2016-12-30 21:26 GMT+01:00 Steve Longerbeam :
>>>
>>>
>>>
>>> On 12/30/2016 11:06 AM, Marek Vasut wrote:
>>>>
>>>> On 12/29/2016 09:51 PM, Robert Schwebel wrote:
>>>>>
>>>>> Hi Jean-Michel,
>>>>
>>>> Hi,
>>>>
>>>>> On Thu, Dec 29, 2016 at 04:08:33PM +0100, Jean-Michel Hautbois wrote:
>>>>>>
>>>>>> What is the status of this work?
>>>>>
>>>>> Philipp's patches have been reworked with the review feedback from the
>>>>> last round and a new version will be posted when he is back from
>>>>> holidays.
>>>>
>>>> IMO Philipp's patches are better integrated and well structured, so I'd
>>>> rather like to see his work in at some point.
>>>
>>>
>>>
>>> Granted I am biased, but I will state my case. "Better integrated" - my
>>> patches
>>> are also well integrated with the media core infrastructure. Philipp's
>>> patches
>>> in fact require modification to media core, whereas mine require none.
>>> Some changes are needed of course (more subdev type definitions for
>>> one).
>>>
>>> As for "well structured", I don't really understand what is meant by
>>> that,
>>> but my driver is also well structured.
>>>
>>> Philipp's driver only supports unconverted image capture from the SMFC.
>>> In
>>> addition
>>> to that, mine allows for all the hardware links supported by the IPU,
>>> including routing
>>> frames from the CSI directly to the Image converter for scaling up to
>>> 4096x4096,
>>> colorspace conversion, rotation, and motion compensated de-interlace. Yes
>>> all these
>>> conversion can be carried out post-capture via a mem2mem device, but
>>> conversion
>>> directly from CSI capture has advantages, including minimized CPU
>>> utilization and
>>> lower AXI bus traffic. In any case, Freescale added these hardware paths,
>>> and my
>>> driver supports them.
>>
>>
>> I had a deeper look to both drivers, and I must say the features of
>> Steve's one are really interesting.
>> I don't think any of those has been tested with digital inputs (I have
>> ADV76xx chips on my board, which I hope will be available one day for
>> those interested) and so, I can test and help adding some of the
>> missing parts.
>> And for at least a week or two, I have all of my time for it, so it
>> would be of great help to know which one has the bigger chance to be
>> integrated... :)
>
>
> Steve's series is definitely preferred from my point of view. The feature
> set is clearly superior to Philipp's driver.
>
> I plan on reviewing Steve's series soonish but a quick scan didn't see
> anything
> suspicious. The code looks clean, and I am leaning towards getting this in
> sooner
> rather than later, so if you have the time to work on this, then go for it!

Glad to hear it!

> Steve, I have a SabreLite and a ov5642 sensor, so I should be able to test
> that driver.
>
> There is also an ov5642 sensor driver in
> drivers/media/i2c/soc_camera/ov5642.c.
> But nobody AFAIK is using it, so I would be inclined to take your code and
> remove the soc_camera driver.

Steve: which branch is the correct one on your github ?
Thanks,
JM


Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-02 Thread Hans Verkuil

On 01/02/17 14:51, Jean-Michel Hautbois wrote:
> Hi,
>
> 2016-12-30 21:26 GMT+01:00 Steve Longerbeam :
>>
>> On 12/30/2016 11:06 AM, Marek Vasut wrote:
>>> On 12/29/2016 09:51 PM, Robert Schwebel wrote:
>>>> Hi Jean-Michel,
>>>
>>> Hi,
>>>
>>>> On Thu, Dec 29, 2016 at 04:08:33PM +0100, Jean-Michel Hautbois wrote:
>>>>> What is the status of this work?
>>>>
>>>> Philipp's patches have been reworked with the review feedback from the
>>>> last round and a new version will be posted when he is back from
>>>> holidays.
>>>
>>> IMO Philipp's patches are better integrated and well structured, so I'd
>>> rather like to see his work in at some point.
>>
>> Granted I am biased, but I will state my case. "Better integrated" - my
>> patches are also well integrated with the media core infrastructure.
>> Philipp's patches in fact require modification to media core, whereas
>> mine require none. Some changes are needed of course (more subdev type
>> definitions for one).
>>
>> As for "well structured", I don't really understand what is meant by
>> that, but my driver is also well structured.
>>
>> Philipp's driver only supports unconverted image capture from the SMFC.
>> In addition to that, mine allows for all the hardware links supported
>> by the IPU, including routing frames from the CSI directly to the Image
>> converter for scaling up to 4096x4096, colorspace conversion, rotation,
>> and motion compensated de-interlace. Yes all these conversion can be
>> carried out post-capture via a mem2mem device, but conversion directly
>> from CSI capture has advantages, including minimized CPU utilization
>> and lower AXI bus traffic. In any case, Freescale added these hardware
>> paths, and my driver supports them.
>
> I had a deeper look to both drivers, and I must say the features of
> Steve's one are really interesting.
> I don't think any of those has been tested with digital inputs (I have
> ADV76xx chips on my board, which I hope will be available one day for
> those interested) and so, I can test and help adding some of the
> missing parts.
> And for at least a week or two, I have all of my time for it, so it
> would be of great help to know which one has the bigger chance to be
> integrated... :)

Steve's series is definitely preferred from my point of view. The feature
set is clearly superior to Philipp's driver.

I plan on reviewing Steve's series soonish but a quick scan didn't see
anything suspicious. The code looks clean, and I am leaning towards
getting this in sooner rather than later, so if you have the time to
work on this, then go for it!

Steve, I have a SabreLite and a ov5642 sensor, so I should be able to
test that driver.

There is also an ov5642 sensor driver in
drivers/media/i2c/soc_camera/ov5642.c.
But nobody AFAIK is using it, so I would be inclined to take your code
and remove the soc_camera driver.

Regards,

Hans


Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2017-01-02 Thread Jean-Michel Hautbois
Hi,

2016-12-30 21:26 GMT+01:00 Steve Longerbeam :
>
>
> On 12/30/2016 11:06 AM, Marek Vasut wrote:
>>
>> On 12/29/2016 09:51 PM, Robert Schwebel wrote:
>>>
>>> Hi Jean-Michel,
>>
>> Hi,
>>
>>> On Thu, Dec 29, 2016 at 04:08:33PM +0100, Jean-Michel Hautbois wrote:
>>>>
>>>> What is the status of this work?
>>>
>>> Philipp's patches have been reworked with the review feedback from the
>>> last round and a new version will be posted when he is back from
>>> holidays.
>>
>> IMO Philipp's patches are better integrated and well structured, so I'd
>> rather like to see his work in at some point.
>
>
> Granted I am biased, but I will state my case. "Better integrated" - my
> patches
> are also well integrated with the media core infrastructure. Philipp's
> patches
> in fact require modification to media core, whereas mine require none.
> Some changes are needed of course (more subdev type definitions for
> one).
>
> As for "well structured", I don't really understand what is meant by that,
> but my driver is also well structured.
>
> Philipp's driver only supports unconverted image capture from the SMFC. In
> addition
> to that, mine allows for all the hardware links supported by the IPU,
> including routing
> frames from the CSI directly to the Image converter for scaling up to
> 4096x4096,
> colorspace conversion, rotation, and motion compensated de-interlace. Yes
> all these
> conversion can be carried out post-capture via a mem2mem device, but
> conversion
> directly from CSI capture has advantages, including minimized CPU
> utilization and
> lower AXI bus traffic. In any case, Freescale added these hardware paths,
> and my
> driver supports them.

I had a deeper look at both drivers, and I must say the features of
Steve's one are really interesting.
I don't think any of those has been tested with digital inputs (I have
ADV76xx chips on my board, which I hope will be available one day for
those interested) and so, I can test and help adding some of the
missing parts.
And for at least a week or two, I have all of my time for it, so it
would be of great help to know which one has the bigger chance to be
integrated... :)

Thanks again,
JM


Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2016-12-30 Thread Steve Longerbeam



On 12/30/2016 11:06 AM, Marek Vasut wrote:
> On 12/29/2016 09:51 PM, Robert Schwebel wrote:
>> Hi Jean-Michel,
>
> Hi,
>
>> On Thu, Dec 29, 2016 at 04:08:33PM +0100, Jean-Michel Hautbois wrote:
>>> What is the status of this work?
>>
>> Philipp's patches have been reworked with the review feedback from the
>> last round and a new version will be posted when he is back from
>> holidays.
>
> IMO Philipp's patches are better integrated and well structured, so I'd
> rather like to see his work in at some point.

Granted I am biased, but I will state my case. "Better integrated" - my
patches are also well integrated with the media core infrastructure.
Philipp's patches in fact require modification to media core, whereas
mine require none. Some changes are needed of course (more subdev type
definitions for one).

As for "well structured", I don't really understand what is meant by
that, but my driver is also well structured.

Philipp's driver only supports unconverted image capture from the SMFC.
In addition to that, mine allows for all the hardware links supported by
the IPU, including routing frames from the CSI directly to the Image
converter for scaling up to 4096x4096, colorspace conversion, rotation,
and motion compensated de-interlace. Yes, all these conversions can be
carried out post-capture via a mem2mem device, but conversion directly
from CSI capture has advantages, including minimized CPU utilization and
lower AXI bus traffic. In any case, Freescale added these hardware paths,
and my driver supports them.

I leave it up to the maintainers.

Steve




Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2016-12-30 Thread Marek Vasut
On 12/29/2016 09:51 PM, Robert Schwebel wrote:
> Hi Jean-Michel,

Hi,

> On Thu, Dec 29, 2016 at 04:08:33PM +0100, Jean-Michel Hautbois wrote:
>> What is the status of this work?
> 
> Philipp's patches have been reworked with the review feedback from the
> last round and a new version will be posted when he is back from
> holidays.

IMO Philipp's patches are better integrated and well structured, so I'd
rather like to see his work in at some point.

-- 
Best regards,
Marek Vasut


Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2016-12-29 Thread Robert Schwebel
Hi Jean-Michel,

On Thu, Dec 29, 2016 at 04:08:33PM +0100, Jean-Michel Hautbois wrote:
> What is the status of this work?

Philipp's patches have been reworked with the review feedback from the
last round and a new version will be posted when he is back from
holidays.

rsc
-- 
Pengutronix e.K.   | |
Industrial Linux Solutions | http://www.pengutronix.de/  |
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0|
Amtsgericht Hildesheim, HRA 2686   | Fax:   +49-5121-206917- |


Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2016-12-29 Thread Jean-Michel Hautbois
I am so sorry... This mail wasn't sent to the mailing list because gmail
switched back to HTML...

2016-12-29 16:08 GMT+01:00 Jean-Michel Hautbois:
> Hi Philipp et al.,
>
>
> 2016-10-19 23:30 GMT+02:00 Sakari Ailus :
>>
>> On Fri, Oct 14, 2016 at 07:34:20PM +0200, Philipp Zabel wrote:
>> > Hi,
>> >
>> > the second round removes the prepare_stream callback and instead lets
>> > the
>> > intermediate subdevices propagate s_stream calls to their sources rather
>> > than individually calling s_stream on each subdevice from the bridge
>> > driver.
>> > This is similar to how drm bridges recursively call into their next
>> > neighbor.
>> > It makes it easier to do bringup ordering on a per-link level, as long
>> > as the
>> > source preparation can be done at s_power, and the sink can just
>> > prepare, call
>> > s_stream on its source, and then enable itself inside s_stream.
>> > Obviously this
>> > would only work in a generic fashion if all asynchronous subdevices with
>> > both
>> > inputs and outputs would propagate s_stream to their source subdevices.
>
>
> What is the status of this work ? I saw Steve's patches before yours, so
> both are implementing pretty much the same functionality but differently.
> Which one will be finally merged ?
> I wanted to upgrade my kernel, in order to give it a try on a board with
> adv7604 and adv7611 devices.
> Is there a git tree somewhere integrating it too ?
>
> Thanks,
> JM
>
>


Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2016-10-19 Thread Sakari Ailus
On Fri, Oct 14, 2016 at 07:34:20PM +0200, Philipp Zabel wrote:
> Hi,
> 
> the second round removes the prepare_stream callback and instead lets the
> intermediate subdevices propagate s_stream calls to their sources rather
> than individually calling s_stream on each subdevice from the bridge driver.
> This is similar to how drm bridges recursively call into their next neighbor.
> It makes it easier to do bringup ordering on a per-link level, as long as the
> source preparation can be done at s_power, and the sink can just prepare, call
> s_stream on its source, and then enable itself inside s_stream. Obviously this
> would only work in a generic fashion if all asynchronous subdevices with both
> inputs and outputs would propagate s_stream to their source subdevices.

Hi Philipp,

I'll review the async patches tomorrow / the day after. I have not forgotten
them. :-)
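
If I understand the scheme right, every intermediate subdev's s_stream
ends up looking something like this (a sketch with made-up names, not
code from the series):

  static int sd_s_stream(struct v4l2_subdev *sd, int enable)
  {
          struct my_sd *priv = container_of(sd, struct my_sd, sd);
          struct media_pad *remote;
          struct v4l2_subdev *src_sd;
          int ret;

          /* find the source subdev connected to our sink pad */
          remote = media_entity_remote_pad(&priv->pads[MY_SINK_PAD]);
          if (!remote || !is_media_entity_v4l2_subdev(remote->entity))
                  return -EPIPE;
          src_sd = media_entity_to_v4l2_subdev(remote->entity);

          if (enable) {
                  /* prepare ourselves, start the source, then enable */
                  ret = my_sd_prepare(priv);
                  if (ret)
                          return ret;
                  ret = v4l2_subdev_call(src_sd, video, s_stream, 1);
                  if (ret && ret != -ENOIOCTLCMD)
                          return ret;
                  my_sd_enable(priv);
          } else {
                  /* disable ourselves first, then stop the source */
                  my_sd_disable(priv);
                  v4l2_subdev_call(src_sd, video, s_stream, 0);
          }

          return 0;
  }
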

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ai...@iki.fi XMPP: sai...@retiisi.org.uk


Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2016-10-17 Thread Philipp Zabel
Hi Gary,

On Monday, 2016-10-17 at 12:18 +0200, Gary Bisson wrote:
[...]
> For the whole series:
> Tested-by: Gary Bisson 
> 
> Tested on Nitrogen6x + BD_HDMI_MIPI daughter board on linux-next
> 20161016.
>
> This required using your v4l2-ctl patch to set the EDID if the source
> output can't be forced:
> https://patchwork.kernel.org/patch/6097201/
> BTW, do you have any update on this? Because it looks like the
> VIDIOC_SUBDEV_QUERYCAP hasn't been implemented since your patch (March
> 2015).
> 
> Then I followed the procedure you gave here:
> https://patchwork.kernel.org/patch/9366503/
> 
> For those interested in trying it out, note that kmssink requires to use
> Gstreamer 1.9.x.
> 
> I have a few remarks:
> - I believe it would help having a patch that sets imx_v6_v7_defconfig
>   with the proper options in this series

I can add that in the next round.

> - Not related to this series, I couldn't boot the board unless I disable
>   the PCIe driver, have you experienced the same issue?

I had not enabled the PCIe driver, but a quick boot test with
CONFIG_PCIE_DW enabled hangs after these messages:

[1.314298] OF: PCI: host bridge /soc/pcie@0x0100 ranges:
[1.317199] OF: PCI:   No bus range found for /soc/pcie@0x0100, using [bus 00-ff]
[1.325171] OF: PCI:IO 0x01f8..0x01f8 -> 0x
[1.331029] OF: PCI:   MEM 0x0100..0x01ef -> 0x0100

I've asked Lucas to have a look.

> - Is there a way not to set all the links manually using media-ctl? I
>   expected all the formats to be negotiated automatically once a stream
>   is properly detected.

This should be done in userspace, probably libv4l2.
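
Until something like that exists, it has to be done by hand, e.g.
(entity and format names invented for the example, they depend on the
actual media graph):

  # link the HDMI receiver to the CSI, and the CSI to its capture node
  media-ctl -l '"tc358743 1-000f":0 -> "ipu1_csi0":0[1]'
  media-ctl -l '"ipu1_csi0":1 -> "ipu1_csi0 capture":0[1]'

  # propagate the detected format along the pipeline
  media-ctl -V '"tc358743 1-000f":0 [fmt:UYVY8_2X8/1920x1080]'
  media-ctl -V '"ipu1_csi0":1 [fmt:UYVY8_2X8/1920x1080]'
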

> - As discussed last week, the Nitrogen6x dtsi file shouldn't be
>   included, instead an overlay would be more appropriate. Maybe the log
>   should contain a comment about this.

Ok.

> Let me know if I need to add that Tested-by to every single patch so it
> appears on Patchwork.

It's fine as is, thank you.

regards
Philipp



Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support

2016-10-17 Thread Gary Bisson
Hi Philipp,

On Fri, Oct 14, 2016 at 07:34:20PM +0200, Philipp Zabel wrote:
> Hi,
> 
> the second round removes the prepare_stream callback and instead lets the
> intermediate subdevices propagate s_stream calls to their sources rather
> than individually calling s_stream on each subdevice from the bridge driver.
> This is similar to how drm bridges recursively call into their next neighbor.
> It makes it easier to do bringup ordering on a per-link level, as long as the
> source preparation can be done at s_power, and the sink can just prepare, call
> s_stream on its source, and then enable itself inside s_stream. Obviously this
> would only work in a generic fashion if all asynchronous subdevices with both
> inputs and outputs would propagate s_stream to their source subdevices.
> 
> Changes since v1:
>  - Propagate field and colorspace in ipucsi_subdev_set_format.
>  - Remove v4l2_media_subdev_prepare_stream and v4l2_media_subdev_s_stream,
>let subdevices propagate s_stream calls to their upstream subdevices
>themselves.
>  - Various fixes (see individual patches for details)

For the whole series:
Tested-by: Gary Bisson 

Tested on Nitrogen6x + BD_HDMI_MIPI daughter board on linux-next
20161016.

This required using your v4l2-ctl patch to set the EDID if the source
output can't be forced:
https://patchwork.kernel.org/patch/6097201/
BTW, do you have any update on this? Because it looks like the
VIDIOC_SUBDEV_QUERYCAP hasn't been implemented since your patch (March
2015).

Then I followed the procedure you gave here:
https://patchwork.kernel.org/patch/9366503/

For those interested in trying it out, note that kmssink requires to use
Gstreamer 1.9.x.

I have a few remarks:
- I believe it would help having a patch that sets imx_v6_v7_defconfig
  with the proper options in this series
- Not related to this series, I couldn't boot the board unless I disable
  the PCIe driver, have you experienced the same issue?
- Is there a way not to set all the links manually using media-ctl? I
  expected all the formats to be negotiated automatically once a stream
  is properly detected.
- As discussed last week, the Nitrogen6x dtsi file shouldn't be
  included, instead an overlay would be more appropriate. Maybe the log
  should contain a comment about this.

Let me know if I need to add that Tested-by to every single patch so it
appears on Patchwork.

Regards,
Gary