cron job: media_tree daily build: OK

2018-10-19 Thread Hans Verkuil
This message is generated daily by a cron job that builds media_tree for
the kernels and architectures in the list below.

Results of the daily build of media_tree:

date:   Sat Oct 20 05:00:11 CEST 2018
media-tree git hash:    3b796aa60af087f5fec75aee9b17f2130f2b9adc
media_build git hash:   0c8bb27f3aaa682b9548b656f77505c3d1f11e71
v4l-utils git hash: c36dbbdfa8b30b2badd4f893b59d0bd4f0bd12aa
edid-decode git hash:   5eeb151a748788666534d6ea3da07f90400d24c2
gcc version:    i686-linux-gcc (GCC) 8.2.0
sparse version: 0.5.2
smatch version: 0.5.1
host hardware:  x86_64
host os:    4.18.11-marune

linux-git-arm-at91: OK
linux-git-arm-davinci: OK
linux-git-arm-multi: OK
linux-git-arm-pxa: OK
linux-git-arm-stm32: OK
linux-git-arm64: OK
linux-git-i686: OK
linux-git-mips: OK
linux-git-powerpc64: OK
linux-git-sh: OK
linux-git-x86_64: OK
Check COMPILE_TEST: OK
linux-3.10.108-i686: OK
linux-3.10.108-x86_64: OK
linux-3.11.10-i686: OK
linux-3.11.10-x86_64: OK
linux-3.12.74-i686: OK
linux-3.12.74-x86_64: OK
linux-3.13.11-i686: OK
linux-3.13.11-x86_64: OK
linux-3.14.79-i686: OK
linux-3.14.79-x86_64: OK
linux-3.15.10-i686: OK
linux-3.15.10-x86_64: OK
linux-3.16.57-i686: OK
linux-3.16.57-x86_64: OK
linux-3.17.8-i686: OK
linux-3.17.8-x86_64: OK
linux-3.18.123-i686: OK
linux-3.18.123-x86_64: OK
linux-3.19.8-i686: OK
linux-3.19.8-x86_64: OK
linux-4.0.9-i686: OK
linux-4.0.9-x86_64: OK
linux-4.1.52-i686: OK
linux-4.1.52-x86_64: OK
linux-4.2.8-i686: OK
linux-4.2.8-x86_64: OK
linux-4.3.6-i686: OK
linux-4.3.6-x86_64: OK
linux-4.4.159-i686: OK
linux-4.4.159-x86_64: OK
linux-4.5.7-i686: OK
linux-4.5.7-x86_64: OK
linux-4.6.7-i686: OK
linux-4.6.7-x86_64: OK
linux-4.7.10-i686: OK
linux-4.7.10-x86_64: OK
linux-4.8.17-i686: OK
linux-4.8.17-x86_64: OK
linux-4.9.131-i686: OK
linux-4.9.131-x86_64: OK
linux-4.10.17-i686: OK
linux-4.10.17-x86_64: OK
linux-4.11.12-i686: OK
linux-4.11.12-x86_64: OK
linux-4.12.14-i686: OK
linux-4.12.14-x86_64: OK
linux-4.13.16-i686: OK
linux-4.13.16-x86_64: OK
linux-4.14.74-i686: OK
linux-4.14.74-x86_64: OK
linux-4.15.18-i686: OK
linux-4.15.18-x86_64: OK
linux-4.16.18-i686: OK
linux-4.16.18-x86_64: OK
linux-4.17.19-i686: OK
linux-4.17.19-x86_64: OK
linux-4.18.12-i686: OK
linux-4.18.12-x86_64: OK
linux-4.19-rc6-i686: OK
linux-4.19-rc6-x86_64: OK
apps: OK
spec-git: OK
sparse: WARNINGS

Detailed results are available here:

http://www.xs4all.nl/~hverkuil/logs/Saturday.log

Full logs are available here:

http://www.xs4all.nl/~hverkuil/logs/Saturday.tar.bz2

The Media Infrastructure API from this daily build is here:

http://www.xs4all.nl/~hverkuil/spec/index.html


Re: [PATCH v3 01/16] media: imx: add mem2mem device

2018-10-19 Thread Tim Harvey
On Fri, Oct 19, 2018 at 1:19 PM Steve Longerbeam  wrote:
>
>
> On 10/19/18 2:53 AM, Philipp Zabel wrote:
> > Hi Tim,
> >
> > On Thu, 2018-10-18 at 15:53 -0700, Tim Harvey wrote:
> > [...]
> >> Philipp,
> >>
> >> Thanks for submitting this!
> >>
> >> I'm hoping this lets us use non-IMX capture devices along with the IMX
> >> media controller entities so we can use hardware
> >> CSC, scaling, pixel-format conversions and ultimately coda-based encoding.
> >>
> >> I've built this on top of linux-media and see that it registers as
> >> /dev/video8, but I'm not clear on how to use it. I don't see it within the
> >> media controller graph.
> > It's a V4L2 mem2mem device that can be handled by the GstV4l2Transform
> > element, for example. GStreamer should create a v4l2video8convert
> > element of that type.
> >
> > The mem2mem device is not part of the media controller graph on purpose.
> > There is no interaction with any of the entities in the media controller
> > graph apart from the fact that the IC PP task we are using for mem2mem
> > scaling is sharing hardware resources with the IC PRP tasks used for the
> > media controller scaler entities.
>
>
> It would be nice in the future to link mem2mem output-side to the ipu_vdic:1
> pad, to make use of h/w VDIC de-interlace as part of mem2mem operations.
> The progressive output from a new ipu_vdic:3 pad can then be sent to the
> image_convert APIs by the mem2mem driver for further tiled scaling, CSC,
> and rotation by the IC PP task. The ipu_vdic:1 pad makes use of pure
> DMA-based
> de-interlace, that is, all input frames (N-1, N, N+1) to the VDIC are sent
> from DMA buffers, and this VDIC mode of operation is well understood
> and produces clean de-interlace output. The risk is that this would require
> iDMAC channel 5 for ipu_vdic:3, which AFAIK is not verified to work yet.
>
>
> The other problem with that currently is that mem2mem would have to be
> split into
> separate device nodes: a /dev/videoN for output-side (linked to
> ipu_vdic:1), and
> a /dev/videoM for capture-side (linked from ipu_vdic:3). And then it no
> longer
> presents to userspace as a mem2mem device with a single device node for both
> output and capture sides.
>
>
> Or is there another way? I recall work to integrate mem2mem with the media
> controller. There is v4l2_m2m_register_media_controller(), but that creates
> three entities: source, processing, and sink. The VDIC entity would be part
> of mem2mem processing, but this entity already exists for the current graph.
> This function could, however, be used as a guide to incorporate the VDIC
> entity into the m2m device.
>

I agree - without being able to utilize de-interlace, CSC, scaling and
rotation it seems fairly limited today (but a great start!).

Also, if it were in the media graph wouldn't we be able to use the
compose selection subdev API?
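
For reference, that compose selection API is the sub-device selection ioctl;
a minimal userspace sketch (the subdev node path, pad number and rectangle
below are illustrative assumptions, not taken from any real pipeline):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/v4l2-subdev.h>

/* Set a compose rectangle on one source pad of a (hypothetical) subdev. */
static int set_compose_rect(void)
{
        struct v4l2_subdev_selection sel;
        int fd, ret;

        fd = open("/dev/v4l-subdevN", O_RDWR);  /* placeholder subdev node */
        if (fd < 0)
                return -1;

        memset(&sel, 0, sizeof(sel));
        sel.which  = V4L2_SUBDEV_FORMAT_ACTIVE;
        sel.pad    = 1;                         /* assumed source pad */
        sel.target = V4L2_SEL_TGT_COMPOSE;
        sel.r.left   = 0;
        sel.r.top    = 0;
        sel.r.width  = 360;                     /* e.g. one quadrant of 720x480 */
        sel.r.height = 240;

        ret = ioctl(fd, VIDIOC_SUBDEV_S_SELECTION, &sel);
        close(fd);
        return ret;
}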

I've got an AVC8000 minPCIe card here with a TW6869 with 8x analog
capture inputs that I'm hoping to someday soon be able to capture,
compose into a single frame, and encode.

Regards,

Tim


Re: [PATCH 4/4] media: mt9m111: allow to setup pixclk polarity

2018-10-19 Thread Sakari Ailus
Hi Marco,

Thanks for the patchset.

On Fri, Oct 19, 2018 at 05:50:27PM +0200, Marco Felsch wrote:
> From: Enrico Scholz 
> 
> The chip can be configured to output data transitions on the
> rising or falling edge of PIXCLK (Datasheet R58:1[9]), default is on the
> falling edge.
> 
> Parsing the fw-node is done in a subfunction to bundle all (future)
> dt-parsing / fw-parsing stuff.

Could you rebase this on current mediatree master, please?

> 
> Signed-off-by: Enrico Scholz 
> (m.grzesc...@pengutronix.de: Fix inverting clock. INV_PIX_CLOCK bit is set
> per default. Set bit to 0 (enable mask bit without value) to enable
> falling edge sampling.)
> Signed-off-by: Michael Grzeschik 
> (m.fel...@pengutronix.de: use fwnode helpers)
> (m.fel...@pengutronix.de: mv of parsing into own function)
> (m.fel...@pengutronix.de: adapt commit msg)
> Signed-off-by: Marco Felsch 
> Reviewed-by: Philipp Zabel 
> ---
>  drivers/media/i2c/mt9m111.c | 52 -
>  1 file changed, 51 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/media/i2c/mt9m111.c b/drivers/media/i2c/mt9m111.c
> index 13080c6c1ba3..5d45bc9ea0cb 100644
> --- a/drivers/media/i2c/mt9m111.c
> +++ b/drivers/media/i2c/mt9m111.c
> @@ -15,12 +15,14 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  #include 
>  #include 
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  /*
>   * MT9M111, MT9M112 and MT9M131:
> @@ -236,6 +238,8 @@ struct mt9m111 {
>   const struct mt9m111_datafmt *fmt;
>   int lastpage;   /* PageMap cache value */
>   bool is_streaming;
> + /* user point of view - 0: falling 1: rising edge */
> + unsigned int pclk_sample:1;
>  #ifdef CONFIG_MEDIA_CONTROLLER
>   struct media_pad pad;
>  #endif
> @@ -586,6 +590,10 @@ static int mt9m111_set_pixfmt(struct mt9m111 *mt9m111,
>   return -EINVAL;
>   }
>  
> + /* receiver samples on falling edge, chip-hw default is rising */

Could you add DT binding documentation that would cover this? The existing
documentation is, well, rather vague. Which properties are relevant for the
hardware, are they mandatory or optional and if they're optional, then are
there relevant default values?

> + if (mt9m111->pclk_sample == 0)
> + mask_outfmt2 |= MT9M111_OUTFMT_INV_PIX_CLOCK;
> +
>   ret = mt9m111_reg_mask(client, context_a.output_fmt_ctrl2,
>  data_outfmt2, mask_outfmt2);
>   if (!ret)
> @@ -1045,9 +1053,15 @@ static int mt9m111_s_stream(struct v4l2_subdev *sd, 
> int enable)
>  static int mt9m111_g_mbus_config(struct v4l2_subdev *sd,
>   struct v4l2_mbus_config *cfg)
>  {
> - cfg->flags = V4L2_MBUS_MASTER | V4L2_MBUS_PCLK_SAMPLE_RISING |
> + struct mt9m111 *mt9m111 = container_of(sd, struct mt9m111, subdev);
> +
> + cfg->flags = V4L2_MBUS_MASTER |
>   V4L2_MBUS_HSYNC_ACTIVE_HIGH | V4L2_MBUS_VSYNC_ACTIVE_HIGH |
>   V4L2_MBUS_DATA_ACTIVE_HIGH;
> +
> + cfg->flags |= mt9m111->pclk_sample ? V4L2_MBUS_PCLK_SAMPLE_FALLING :
> + V4L2_MBUS_PCLK_SAMPLE_RISING;
> +
>   cfg->type = V4L2_MBUS_PARALLEL;
>  
>   return 0;
> @@ -1117,6 +1131,33 @@ static int mt9m111_video_probe(struct i2c_client 
> *client)
>   return ret;
>  }
>  
> +#ifdef CONFIG_OF
> +static int mt9m111_probe_of(struct i2c_client *client, struct mt9m111 
> *mt9m111)
> +{
> + struct v4l2_fwnode_endpoint *bus_cfg;
> + struct device_node *np;
> + int ret = 0;
> +
> + np = of_graph_get_next_endpoint(client->dev.of_node, NULL);
> + if (!np)
> + return -EINVAL;
> +
> + bus_cfg = v4l2_fwnode_endpoint_alloc_parse(of_fwnode_handle(np));
> + if (IS_ERR(bus_cfg)) {
> + ret = PTR_ERR(bus_cfg);
> + goto out_of_put;
> + }
> +
> + mt9m111->pclk_sample = !!(bus_cfg->bus.parallel.flags &
> +   V4L2_MBUS_PCLK_SAMPLE_RISING);
> +
> + v4l2_fwnode_endpoint_free(bus_cfg);
> +out_of_put:
> + of_node_put(np);
> + return ret;
> +}
> +#endif
> +
>  static int mt9m111_probe(struct i2c_client *client,
>const struct i2c_device_id *did)
>  {
> @@ -1141,6 +1182,15 @@ static int mt9m111_probe(struct i2c_client *client,
>   /* Default HIGHPOWER context */
>   mt9m111->ctx = _b;
>  
> + if (IS_ENABLED(CONFIG_OF)) {
> + ret = mt9m111_probe_of(client, mt9m111);
> + if (ret)
> + return ret;
> + } else {
> + /* use default chip hardware values */
> + mt9m111->pclk_sample = 1;
> + }
> +
>   v4l2_i2c_subdev_init(>subdev, client, _subdev_ops);
>   mt9m111->subdev.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
>  

-- 
Regards,

Sakari Ailus
sakari.ai...@linux.intel.com


Re: [PATCH] media: rename soc_camera I2C drivers

2018-10-19 Thread Sakari Ailus
Hi Jacopo,

On Fri, Oct 19, 2018 at 03:13:28PM +0200, jacopo mondi wrote:
> Hi Mauro, Hans, Sakari,
> 
> On Fri, Oct 19, 2018 at 03:58:51PM +0300, Sakari Ailus wrote:
> > Hi Hans, Mauro,
> >
> > On Fri, Oct 19, 2018 at 02:39:27PM +0200, Hans Verkuil wrote:
> > > On 10/19/18 14:31, Mauro Carvalho Chehab wrote:
> > > > Em Fri, 19 Oct 2018 13:45:32 +0200
> > > > Hans Verkuil  escreveu:
> > > >
> > > >> On 10/19/18 13:43, Mauro Carvalho Chehab wrote:
> > > >>> Those drivers are part of the legacy SoC camera framework.
> > > >>> They're being converted to not use it, but sometimes we're
> > > >>> keeping both the legacy and the new driver.
> > > >>>
> > > >>> This time, for example, we have two drivers on media with
> > > >>> the same name: ov772x. That's bad.
> > > >>>
> > > >>> So, in order to prevent that from happening, let's prepend the SoC
> > > >>> legacy drivers with soc_.
> > > >>>
> > > >>> No functional changes.
> > > >>>
> > > >>> Signed-off-by: Mauro Carvalho Chehab 
> > > >>
> > > >> Acked-by: Hans Verkuil 
> > > >
> > > > For now, let's just avoid the conflict if one builds both modules and
> > > > do a modprobe ov772x.
> > > >
> > > >> Let's kill all of these in the next kernel. I see no reason for keeping
> > > >> them around.
> > > >
> > > > While people are doing those SoC conversions, I would keep it. We
> > >
> > > Which people are doing SoC conversions? Nobody is using soc-camera 
> > > anymore.
> > > It is a dead driver. The only reason it hasn't been removed yet is lack of
> > > time since it is not just removing the driver, but also patching old board
> > > files that use soc_camera headers. Really left-overs since the 
> > > corresponding
> > > soc-camera drivers have long since been removed.
> > >
> > > > could move it to staging, to make it clear that those drivers require
> > > > conversion, and give people some time to work on it.
> > >
> > > There is nobody working on it. These are old sensors, and few will have
> > > the hardware to test it. If someone needs such a sensor driver, then they
> > > can always look at an older kernel version. It's still in git after all.
> > >
> > > Just kill it rather than polluting the media tree.
> >
> > I remember at least Jacopo has been doing some. There was someone else as
> > well, but I don't remember right now who it was. That said, I'm not sure if
> > there's anything happening to the rest.
> 
> Yes, I did port a few drivers and there are patches for others coming.
> 
> [PATCH v2 0/4] media: soc_camera: ov9640: switch driver to v4l2_async
> from Peter Cvek (now in Cc)
> >
> > Is there something that prevents removing these right away? As you said
> > it's not functional and people can always check old versions if they want
> > to port the driver to V4L2 sub-device framework.
> 
> All dependencies should have been solved so far, but given that
> someone might want to do the porting at some point, I don't see how
> bad it would be to have them in staging, even if people could look
> into the git history...

The atomisp driver was removed on a similar basis --- it was not functional
and no-one was actively working on it. And there was lots of work to keep
the codebase compiling (not as much the case with these drivers though).
I think that decision regarding atomisp was a correct one, and I don't see
much difference between that and these drivers.

-- 
Regards,

Sakari Ailus
sakari.ai...@linux.intel.com


Re: [PATCH v3 01/16] media: imx: add mem2mem device

2018-10-19 Thread Steve Longerbeam



On 10/19/18 2:53 AM, Philipp Zabel wrote:

Hi Tim,

On Thu, 2018-10-18 at 15:53 -0700, Tim Harvey wrote:
[...]

Philipp,

Thanks for submitting this!

I'm hoping this lets us use non-IMX capture devices along with the IMX
media controller entities so we can use hardware
CSC, scaling, pixel-format conversions and ultimately coda-based encoding.

I've built this on top of linux-media and see that it registers as
/dev/video8, but I'm not clear on how to use it. I don't see it within the
media controller graph.

It's a V4L2 mem2mem device that can be handled by the GstV4l2Transform
element, for example. GStreamer should create a v4l2video8convert
element of that type.

The mem2mem device is not part of the media controller graph on purpose.
There is no interaction with any of the entities in the media controller
graph apart from the fact that the IC PP task we are using for mem2mem
scaling is sharing hardware resources with the IC PRP tasks used for the
media controller scaler entities.



It would be nice in the future to link mem2mem output-side to the ipu_vdic:1
pad, to make use of h/w VDIC de-interlace as part of mem2mem operations.
The progressive output from a new ipu_vdic:3 pad can then be sent to the
image_convert APIs by the mem2mem driver for further tiled scaling, CSC,
and rotation by the IC PP task. The ipu_vdic:1 pad makes use of pure 
DMA-based

de-interlace, that is, all input frames (N-1, N, N+1) to the VDIC are sent
from DMA buffers, and this VDIC mode of operation is well understood
and produces clean de-interlace output. The risk is that this would require
iDMAC channel 5 for ipu_vdic:3, which IFAIK is not verified to work yet.


The other problem with that currently is that mem2mem would have to be
split into separate device nodes: a /dev/videoN for output-side (linked to
ipu_vdic:1), and a /dev/videoM for capture-side (linked from ipu_vdic:3).
And then it no longer presents to userspace as a mem2mem device with a
single device node for both output and capture sides.


Or is there another way? I recall work to integrate mem2mem with the media
controller. There is v4l2_m2m_register_media_controller(), but that creates
three entities: source, processing, and sink. The VDIC entity would be part
of mem2mem processing, but this entity already exists for the current graph.
This function could, however, be used as a guide to incorporate the VDIC
entity into the m2m device.


Steve




Re: i.MX6 IPU CSI analog video input on Ventana

2018-10-19 Thread Steve Longerbeam



On 10/18/18 10:56 AM, Tim Harvey wrote:

On Wed, Oct 17, 2018 at 4:37 PM Steve Longerbeam  wrote:


On 10/17/18 4:05 PM, Tim Harvey wrote:

On Wed, Oct 17, 2018 at 2:33 PM Steve Longerbeam  wrote:

Hi Tim,

On 10/17/18 1:38 PM, Tim Harvey wrote:

On Mon, Jun 4, 2018 at 1:58 AM Krzysztof Hałasa  wrote:

I've just tested the PAL setup: in the current situation (v4.17 + Steve's
fix-csi-interlaced.2 + "media: adv7180: fix field type" + a small cheap
PAL camera) the following produces bottom-first interlaced frames:

media-ctl -r -l '"adv7180 2-0020":0->"ipu2_csi1_mux":1[1],
   "ipu2_csi1_mux":2->"ipu2_csi1":0[1],
   "ipu2_csi1":2->"ipu2_csi1 capture":0[1]'

media-ctl -V "'adv7180 2-0020':0 [fmt:UYVY2X8/720x576 field:alternate]"
media-ctl -V "'ipu2_csi1_mux':2 [fmt:UYVY2X8/720x576]"
media-ctl -V "'ipu2_csi1':2 [fmt:AYUV32/720x576 field:interlaced]"

"adv7180 2-0020":0 [fmt:UYVY2X8/720x576 field:alternate]
"ipu2_csi1_mux":1  [fmt:UYVY2X8/720x576 field:alternate]
"ipu2_csi1_mux":2  [fmt:UYVY2X8/720x576 field:alternate]
"ipu2_csi1":0  [fmt:UYVY2X8/720x576 field:alternate]
"ipu2_csi1":2  [fmt:AYUV32/720x576 field:interlaced]

I think it would be great if these changes make their way upstream.
The details could be refined then.

Krzysztof / Steve / Philipp,

I jumped back onto IMX6 video capture from the adv7180 the other day
trying to help out a customer that's using mainline and found things
are still not working right. Where is all of this at these days?

If I use v4.19 with Steve's 'imx-media: Fixes for interlaced capture'
v3 series (https://patchwork.kernel.org/cover/10626499/) I get
rolling/split (un-synchronized) video using:

# Setup links
media-ctl -r
media-ctl -l '"adv7180 2-0020":0 -> "ipu2_csi1_mux":1[1]'
media-ctl -l '"ipu2_csi1_mux":2 -> "ipu2_csi1":0[1]'
media-ctl -l '"ipu2_csi1":1 -> "ipu2_ic_prp":0[1]'
media-ctl -l '"ipu2_ic_prp":2 -> "ipu2_ic_prpvf":0[1]'
media-ctl -l '"ipu2_ic_prpvf":1 -> "ipu2_ic_prpvf capture":0[1]'
# Configure pads
media-ctl -V "'adv7180 2-0020':0 [fmt:UYVY2X8/720x480]"
media-ctl -V "'ipu2_csi1_mux':2 [fmt:UYVY2X8/720x480 field:interlaced]"
media-ctl -V "'ipu2_csi1':1 [fmt:UYVY2X8/720x480 field:interlaced]"
media-ctl -V "'ipu2_ic_prp':2 [fmt:UYVY2X8/720x480 field:interlaced]"
media-ctl -V "'ipu2_ic_prpvf':1 [fmt:UYVY2X8/720x480 field:none]"
# stream JPEG/RTP/UDP
gst-launch-1.0 v4l2src device=/dev/video3 ! video/x-raw,format=UYVY !
jpegenc ! rtpjpegpay ! udpsink host=$SERVER port=$PORT
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Device
'/dev/video3' does not support progressive interlacing

I'm doing the above on a Gateworks GW5404 IMXQ which has a tda1997x
HDMI receiver sensor and an adv7180 Analog CVBS sensor - media graph
is here: http://dev.gateworks.com/docs/linux/media/imx6q-gw54xx-media.png

Are there other patches I need or different field formats above with
4.19? Do any of the other kernels work without patchsets that you know
of between 4.16 and 4.19?


First, the v3 series is out of date. Please apply the latest v5 posting
of that series. See the imx.rst doc regarding field type negotiation,
all pads starting at ipu2_csi1:1 should be 'seq-bt' or 'seq-tb' until the
capture device, which should be set to 'interlaced' to enable IDMAC
interweave. The ADV7180 now correctly sets its field type to alternate,
which imx-media-csi.c translates to seq-tb or seq-bt at its output pad.

See the SabreAuto examples in the doc.

For the rolling/split image problem, try the attached somewhat hackish patch.
There used to be code in imx-media-csi.c that searched for the backend sensor
and queried via .g_skip_frames whether the sensor produces bad frames at first
stream-on. But there was push-back on that, so the attached is another
approach that doesn't require searching for a backend sensor.
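
(For context, the sensor op referred to above has this shape; "mysensor" and
the two-frame count are purely illustrative, not taken from any specific
driver:)

#include <media/v4l2-subdev.h>

/* Report how many initial frames after stream-on should be dropped. */
static int mysensor_g_skip_frames(struct v4l2_subdev *sd, u32 *frames)
{
        *frames = 2;    /* assumed number of bad start-up frames */
        return 0;
}

static const struct v4l2_subdev_sensor_ops mysensor_sensor_ops = {
        .g_skip_frames = mysensor_g_skip_frames,
};

A bridge driver hooks this up via the .sensor field of v4l2_subdev_ops and
queries it with v4l2_subdev_call(sd, sensor, g_skip_frames, &frames).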

Steve,

Thanks - I hadn't noticed the updated series. I've built it on top of
linux-media/master and tested with:

- Testing linux-media/master + your v5 now:

# Use simple interweaving
media-ctl -r
# Setup links
media-ctl -l '"adv7180 2-0020":0 -> "ipu2_csi1_mux":1[1]'
media-ctl -l '"ipu2_csi1_mux":2 -> "ipu2_csi1":0[1]'
media-ctl -l '"ipu2_csi1":2 -> "ipu2_csi1 capture":0[1]'
# Configure pads
media-ctl -V "'adv7180 2-0020':0 [fmt:UYVY2X8/720x480 field:seq-bt]"
media-ctl -V "'ipu2_csi1_mux':2 [fmt:UYVY2X8/720x480]"
media-ctl -V "'ipu2_csi1':1 [fmt:AYUV32/720x480]"
# Configure ipu_csi capture interface (/dev/video7)
v4l2-ctl -d7 --set-fmt-video=field=interlaced_bt
# Stream JPEG/RTP/UDP
gst-launch-1.0 v4l2src device=/dev/video7 ! video/x-raw,format=UYVY !
jpegenc ! rtpjpegpay ! udpsink host=$SERVER port=5000
^^ gives me ERROR: from element
/GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Device '/dev/video7' does
not support progressive interlacing

I'm assuming this is because the format is still 'interlaced' - not
sure how to stream this from GStreamer?


I don't know what v4l2src plugin is trying to say by "progressive
interlacing" -
that's 

Re: [PATCH 4/4] media: mt9m111: allow to setup pixclk polarity

2018-10-19 Thread kbuild test robot
Hi Enrico,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linuxtv-media/master]
[also build test ERROR on v4.19-rc8 next-20181019]
[if your patch is applied to the wrong git tree, please drop us a note to help 
improve the system]

url:
https://github.com/0day-ci/linux/commits/Marco-Felsch/media-mt9m111-features/20181020-022716
base:   git://linuxtv.org/media_tree.git master
config: x86_64-randconfig-x000-201841 (attached as .config)
compiler: gcc-7 (Debian 7.3.0-1) 7.3.0
reproduce:
# save the attached .config to linux build tree
make ARCH=x86_64 

All errors (new ones prefixed by >>):

   drivers/media//i2c/mt9m111.c: In function 'mt9m111_probe':
>> drivers/media//i2c/mt9m111.c:1185:9: error: implicit declaration of function 
>> 'mt9m111_probe_of'; did you mean 'mt9m111_probe'? 
>> [-Werror=implicit-function-declaration]
  ret = mt9m111_probe_of(client, mt9m111);
^~~~
mt9m111_probe
   cc1: some warnings being treated as errors

vim +1185 drivers/media//i2c/mt9m111.c

  1159  
  1160  static int mt9m111_probe(struct i2c_client *client,
  1161                           const struct i2c_device_id *did)
  1162  {
  1163          struct mt9m111 *mt9m111;
  1164          struct i2c_adapter *adapter = to_i2c_adapter(client->dev.parent);
  1165          int ret;
  1166  
  1167          if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_WORD_DATA)) {
  1168                  dev_warn(&client->dev,
  1169                           "I2C-Adapter doesn't support I2C_FUNC_SMBUS_WORD\n");
  1170                  return -EIO;
  1171          }
  1172  
  1173          mt9m111 = devm_kzalloc(&client->dev, sizeof(struct mt9m111), GFP_KERNEL);
  1174          if (!mt9m111)
  1175                  return -ENOMEM;
  1176  
  1177          mt9m111->clk = v4l2_clk_get(&client->dev, "mclk");
  1178          if (IS_ERR(mt9m111->clk))
  1179                  return PTR_ERR(mt9m111->clk);
  1180  
  1181          /* Default HIGHPOWER context */
  1182          mt9m111->ctx = &context_b;
  1183  
  1184          if (IS_ENABLED(CONFIG_OF)) {
> 1185                  ret = mt9m111_probe_of(client, mt9m111);
  1186                  if (ret)
  1187                          return ret;
  1188          } else {
  1189                  /* use default chip hardware values */
  1190                  mt9m111->pclk_sample = 1;
  1191          }
  1192  
  1193          v4l2_i2c_subdev_init(&mt9m111->subdev, client, &mt9m111_subdev_ops);
  1194          mt9m111->subdev.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
  1195  
  1196          v4l2_ctrl_handler_init(&mt9m111->hdl, 5);
  1197          v4l2_ctrl_new_std(&mt9m111->hdl, &mt9m111_ctrl_ops,
  1198                          V4L2_CID_VFLIP, 0, 1, 1, 0);
  1199          v4l2_ctrl_new_std(&mt9m111->hdl, &mt9m111_ctrl_ops,
  1200                          V4L2_CID_HFLIP, 0, 1, 1, 0);
  1201          v4l2_ctrl_new_std(&mt9m111->hdl, &mt9m111_ctrl_ops,
  1202                          V4L2_CID_AUTO_WHITE_BALANCE, 0, 1, 1, 1);
  1203          mt9m111->gain = v4l2_ctrl_new_std(&mt9m111->hdl, &mt9m111_ctrl_ops,
  1204                          V4L2_CID_GAIN, 0, 63 * 2 * 2, 1, 32);
  1205          v4l2_ctrl_new_std_menu(&mt9m111->hdl,
  1206                          &mt9m111_ctrl_ops, V4L2_CID_EXPOSURE_AUTO, 1, 0,
  1207                          V4L2_EXPOSURE_AUTO);
  1208          v4l2_ctrl_new_std_menu_items(&mt9m111->hdl,
  1209                          &mt9m111_ctrl_ops, V4L2_CID_TEST_PATTERN,
  1210                          ARRAY_SIZE(mt9m111_test_pattern_menu) - 1, 0, 0,
  1211                          mt9m111_test_pattern_menu);
  1212          mt9m111->subdev.ctrl_handler = &mt9m111->hdl;
  1213          if (mt9m111->hdl.error) {
  1214                  ret = mt9m111->hdl.error;
  1215                  goto out_clkput;
  1216          }
  1217  

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all       Intel Corporation


.config.gz
Description: application/gzip


Re: [PATCH v4 00/22] i.MX media mem2mem scaler

2018-10-19 Thread Steve Longerbeam

Awesome, thanks Philipp.

For the whole series:

Acked-by: Steve Longerbeam 
Tested-by: Steve Longerbeam 
on i.MX6q SabreSD.


On 10/19/18 5:15 AM, Philipp Zabel wrote:

Hi,

this is the fourth version of the i.MX mem2mem scaler series.

An alignment issue with 24-bit RGB formats has been corrected in the
seam position selection patch and a few new fixes by Steve have been
added. If there are no more issues, I'll pick up the ipu-v3 patches
via imx-drm/next. The first patch could be merged via the media tree
independently.

Changes since v3:
  - Fix tile_left_align for 24-bit RGB formats and reduce alignment
restrictions for U/V packed planar YUV formats
  - Catch unaligned tile offsets in image-convert
  - Add chroma plane offset overrides to ipu_cpmem_set_image() to
prevent a false positive warning in some cases
  - Fix a race between run and unprepare and make abort reentrant.


Changes since v2:
  - Rely on ipu_image_convert_adjust() in mem2mem_try_fmt() for format
adjustments. This makes the mem2mem driver mostly a V4L2 mem2mem API
wrapper around the IPU image converter, and independent of the
internal image converter implementation.
  - Remove the source and destination buffers on error in device_run().
Otherwise the conversion is re-attempted apparently over and over
again (with WARN() backtraces).
  - Allow subscribing to control changes.
  - Fix seam position selection for more corner cases:
 - Switch width/height properly and align tile top left positions to 8x8
   IRT block size when rotating.
 - Align input width to input burst length in case the scaling step
   flips horizontally.
 - Fix bottom edge calculation.

Changes since v1:
  - Fix inverted allow_overshoot logic
  - Correctly switch horizontal / vertical tile alignment when
determining seam positions with the 90° rotator active.
  - Fix SPDX-License-Identifier and remove superfluous license
text.
  - Fix uninitialized walign in try_fmt

Previous cover letter:

we have had image conversion code for scaling and colorspace conversion in
the IPUv3 base driver for a while. Since the IC hardware can only write
up to 1024x1024 pixel buffers, it scales to larger output buffers by
splitting the input and output frame into similarly sized tiles.

This causes the issue that the bilinear interpolation resets at the tile
boundary: instead of smoothly interpolating across the seam, there is a
jump in the input sample position that is very apparent for high
upscaling factors. This can be avoided by slightly changing the scaling
coefficients to let the left/top tiles overshoot their input sampling
into the first pixel / line of their right / bottom neighbors. The error
can be further reduced by letting tiles be differently sized and by
selecting seam positions that minimize the input sampling position error
at tile boundaries.
This is complicated by different DMA start address, burst size, and
rotator block size alignment requirements, depending on the input and
output pixel formats, and the fact that flipping happens in different
places depending on the rotation.
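
Schematically (a simplified illustration, not the IC's exact fixed-point
arithmetic): with a scaling ratio r = w_in / w_out, the ideal input sampling
position for output pixel o is s(o) = r * o. If tile k, starting at output
column O_k, restarts interpolation at an input offset I_k that has to satisfy
the DMA/burst alignment constraints, its local sampling position is

    s_k(o) = I_k + r * (o - O_k),

which jumps at the seam o = O_k whenever I_k != r * O_k; the overshoot and
the per-tile resize coefficients described above exist to hide exactly that
discontinuity.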

This series implements optimal seam position selection and seam hiding
with per-tile resizing coefficients and adds a scaling mem2mem device
to the imx-media driver.

regards
Philipp

Philipp Zabel (15):
   media: imx: add mem2mem device
   gpu: ipu-v3: ipu-ic: allow to manually set resize coefficients
   gpu: ipu-v3: image-convert: prepare for per-tile configuration
   gpu: ipu-v3: image-convert: calculate per-tile resize coefficients
   gpu: ipu-v3: image-convert: reconfigure IC per tile
   gpu: ipu-v3: image-convert: store tile top/left position
   gpu: ipu-v3: image-convert: calculate tile dimensions and offsets
 outside fill_image
   gpu: ipu-v3: image-convert: move tile alignment helpers
   gpu: ipu-v3: image-convert: select optimal seam positions
   gpu: ipu-v3: image-convert: fix debug output for varying tile sizes
   gpu: ipu-v3: image-convert: relax alignment restrictions
   gpu: ipu-v3: image-convert: fix bytesperline adjustment
   gpu: ipu-v3: image-convert: add some ASCII art to the exposition
   gpu: ipu-v3: image-convert: disable double buffering if necessary
   gpu: ipu-v3: image-convert: allow three rows or columns

Steve Longerbeam (7):
   gpu: ipu-cpmem: add WARN_ON_ONCE() for unaligned dma buffers
   gpu: ipu-v3: Add chroma plane offset overrides to
 ipu_cpmem_set_image()
   gpu: ipu-v3: image-convert: Prevent race between run and unprepare
   gpu: ipu-v3: image-convert: Only wait for abort completion if active
 run
   gpu: ipu-v3: image-convert: Allow reentrancy into abort
   gpu: ipu-v3: image-convert: Remove need_abort flag
   gpu: ipu-v3: image-convert: Catch unaligned tile offsets

  drivers/gpu/ipu-v3/ipu-cpmem.c|   52 +-
  drivers/gpu/ipu-v3/ipu-ic.c   |   52 +-
  drivers/gpu/ipu-v3/ipu-image-convert.c| 1019 ++---
  drivers/staging/media/imx/Kconfig  

[PATCH 2/4] media: mt9m111: add streaming check to set_fmt

2018-10-19 Thread Marco Felsch
From: Michael Grzeschik 

Currently set_fmt doesn't care about the streaming status, so the format
can be changed during streaming. This can lead to wrong behaviour.

Check if the device is already streaming and return -EBUSY to avoid
wrong behaviour.

Signed-off-by: Michael Grzeschik 
Signed-off-by: Marco Felsch 
Reviewed-by: Philipp Zabel 
---
 drivers/media/i2c/mt9m111.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/media/i2c/mt9m111.c b/drivers/media/i2c/mt9m111.c
index 78d08a81e0e2..d060075a670b 100644
--- a/drivers/media/i2c/mt9m111.c
+++ b/drivers/media/i2c/mt9m111.c
@@ -561,6 +561,9 @@ static int mt9m111_set_fmt(struct v4l2_subdev *sd,
bool bayer;
int ret;
 
+   if (mt9m111->is_streaming)
+   return -EBUSY;
+
if (format->pad)
return -EINVAL;
 
-- 
2.19.0



[PATCH 0/4] media: mt9m111 features

2018-10-19 Thread Marco Felsch
Hello,

the purpose of this series is to support the pixclk polarity and the
framerate selection. I picked the patches from Michael and Enrico,
ported them to 4.19 and did some adjustments.

I tested the framerate and pixclk selection on a custom ARM-based board.

Enrico Scholz (1):
  media: mt9m111: allow to setup pixclk polarity

Marco Felsch (1):
  media: mt9m111: add s_stream callback

Michael Grzeschik (2):
  media: mt9m111: add streaming check to set_fmt
  media: mt9m111: add support to select formats and fps for {Q,SXGA}

 drivers/media/i2c/mt9m111.c | 221 +++-
 1 file changed, 220 insertions(+), 1 deletion(-)

-- 
2.19.0



[PATCH 4/4] media: mt9m111: allow to setup pixclk polarity

2018-10-19 Thread Marco Felsch
From: Enrico Scholz 

The chip can be configured to output data transitions on the
rising or falling edge of PIXCLK (Datasheet R58:1[9]), default is on the
falling edge.

Parsing the fw-node is done in a subfunction to bundle all (future)
dt-parsing / fw-parsing stuff.

Signed-off-by: Enrico Scholz 
(m.grzesc...@pengutronix.de: Fix inverting clock. INV_PIX_CLOCK bit is set
per default. Set bit to 0 (enable mask bit without value) to enable
falling edge sampling.)
Signed-off-by: Michael Grzeschik 
(m.fel...@pengutronix.de: use fwnode helpers)
(m.fel...@pengutronix.de: mv of parsing into own function)
(m.fel...@pengutronix.de: adapt commit msg)
Signed-off-by: Marco Felsch 
Reviewed-by: Philipp Zabel 
---
 drivers/media/i2c/mt9m111.c | 52 -
 1 file changed, 51 insertions(+), 1 deletion(-)

diff --git a/drivers/media/i2c/mt9m111.c b/drivers/media/i2c/mt9m111.c
index 13080c6c1ba3..5d45bc9ea0cb 100644
--- a/drivers/media/i2c/mt9m111.c
+++ b/drivers/media/i2c/mt9m111.c
@@ -15,12 +15,14 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
 #include 
 #include 
 #include 
+#include 
 
 /*
  * MT9M111, MT9M112 and MT9M131:
@@ -236,6 +238,8 @@ struct mt9m111 {
const struct mt9m111_datafmt *fmt;
int lastpage;   /* PageMap cache value */
bool is_streaming;
+   /* user point of view - 0: falling 1: rising edge */
+   unsigned int pclk_sample:1;
 #ifdef CONFIG_MEDIA_CONTROLLER
struct media_pad pad;
 #endif
@@ -586,6 +590,10 @@ static int mt9m111_set_pixfmt(struct mt9m111 *mt9m111,
return -EINVAL;
}
 
+   /* receiver samples on falling edge, chip-hw default is rising */
+   if (mt9m111->pclk_sample == 0)
+   mask_outfmt2 |= MT9M111_OUTFMT_INV_PIX_CLOCK;
+
ret = mt9m111_reg_mask(client, context_a.output_fmt_ctrl2,
   data_outfmt2, mask_outfmt2);
if (!ret)
@@ -1045,9 +1053,15 @@ static int mt9m111_s_stream(struct v4l2_subdev *sd, int 
enable)
 static int mt9m111_g_mbus_config(struct v4l2_subdev *sd,
struct v4l2_mbus_config *cfg)
 {
-   cfg->flags = V4L2_MBUS_MASTER | V4L2_MBUS_PCLK_SAMPLE_RISING |
+   struct mt9m111 *mt9m111 = container_of(sd, struct mt9m111, subdev);
+
+   cfg->flags = V4L2_MBUS_MASTER |
V4L2_MBUS_HSYNC_ACTIVE_HIGH | V4L2_MBUS_VSYNC_ACTIVE_HIGH |
V4L2_MBUS_DATA_ACTIVE_HIGH;
+
+   cfg->flags |= mt9m111->pclk_sample ? V4L2_MBUS_PCLK_SAMPLE_FALLING :
+   V4L2_MBUS_PCLK_SAMPLE_RISING;
+
cfg->type = V4L2_MBUS_PARALLEL;
 
return 0;
@@ -1117,6 +1131,33 @@ static int mt9m111_video_probe(struct i2c_client *client)
return ret;
 }
 
+#ifdef CONFIG_OF
+static int mt9m111_probe_of(struct i2c_client *client, struct mt9m111 *mt9m111)
+{
+   struct v4l2_fwnode_endpoint *bus_cfg;
+   struct device_node *np;
+   int ret = 0;
+
+   np = of_graph_get_next_endpoint(client->dev.of_node, NULL);
+   if (!np)
+   return -EINVAL;
+
+   bus_cfg = v4l2_fwnode_endpoint_alloc_parse(of_fwnode_handle(np));
+   if (IS_ERR(bus_cfg)) {
+   ret = PTR_ERR(bus_cfg);
+   goto out_of_put;
+   }
+
+   mt9m111->pclk_sample = !!(bus_cfg->bus.parallel.flags &
+ V4L2_MBUS_PCLK_SAMPLE_RISING);
+
+   v4l2_fwnode_endpoint_free(bus_cfg);
+out_of_put:
+   of_node_put(np);
+   return ret;
+}
+#endif
+
 static int mt9m111_probe(struct i2c_client *client,
 const struct i2c_device_id *did)
 {
@@ -1141,6 +1182,15 @@ static int mt9m111_probe(struct i2c_client *client,
/* Default HIGHPOWER context */
mt9m111->ctx = _b;
 
+   if (IS_ENABLED(CONFIG_OF)) {
+   ret = mt9m111_probe_of(client, mt9m111);
+   if (ret)
+   return ret;
+   } else {
+   /* use default chip hardware values */
+   mt9m111->pclk_sample = 1;
+   }
+
v4l2_i2c_subdev_init(>subdev, client, _subdev_ops);
mt9m111->subdev.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
 
-- 
2.19.0



[PATCH 1/4] media: mt9m111: add s_stream callback

2018-10-19 Thread Marco Felsch
Add a callback to check if we are already streaming. Now other callbacks
can check the state and return -EBUSY if we are already streaming.

Signed-off-by: Marco Felsch 
---
 drivers/media/i2c/mt9m111.c | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/drivers/media/i2c/mt9m111.c b/drivers/media/i2c/mt9m111.c
index efda1aa95ca0..78d08a81e0e2 100644
--- a/drivers/media/i2c/mt9m111.c
+++ b/drivers/media/i2c/mt9m111.c
@@ -217,6 +217,7 @@ struct mt9m111 {
int power_count;
const struct mt9m111_datafmt *fmt;
int lastpage;   /* PageMap cache value */
+   bool is_streaming;
 #ifdef CONFIG_MEDIA_CONTROLLER
struct media_pad pad;
 #endif
@@ -880,6 +881,14 @@ static int mt9m111_enum_mbus_code(struct v4l2_subdev *sd,
return 0;
 }
 
+static int mt9m111_s_stream(struct v4l2_subdev *sd, int enable)
+{
+   struct mt9m111 *mt9m111 = container_of(sd, struct mt9m111, subdev);
+
+   mt9m111->is_streaming = !!enable;
+   return 0;
+}
+
 static int mt9m111_g_mbus_config(struct v4l2_subdev *sd,
struct v4l2_mbus_config *cfg)
 {
@@ -893,6 +902,7 @@ static int mt9m111_g_mbus_config(struct v4l2_subdev *sd,
 
 static const struct v4l2_subdev_video_ops mt9m111_subdev_video_ops = {
.g_mbus_config  = mt9m111_g_mbus_config,
+   .s_stream   = mt9m111_s_stream,
 };
 
 static const struct v4l2_subdev_pad_ops mt9m111_subdev_pad_ops = {
-- 
2.19.0



[PATCH 3/4] media: mt9m111: add support to select formats and fps for {Q,SXGA}

2018-10-19 Thread Marco Felsch
From: Michael Grzeschik 

This patch implements the framerate selection using the skipping and
readout power-mode features. The power modes cut the framerate in half,
and each context has an independent selection bit. The same applies to
the 2x skipping feature.

Signed-off-by: Michael Grzeschik 
Signed-off-by: Marco Felsch 
---
 drivers/media/i2c/mt9m111.c | 156 
 1 file changed, 156 insertions(+)

diff --git a/drivers/media/i2c/mt9m111.c b/drivers/media/i2c/mt9m111.c
index d060075a670b..13080c6c1ba3 100644
--- a/drivers/media/i2c/mt9m111.c
+++ b/drivers/media/i2c/mt9m111.c
@@ -204,6 +204,22 @@ static const struct mt9m111_datafmt mt9m111_colour_fmts[] 
= {
{MEDIA_BUS_FMT_SBGGR10_2X8_PADHI_LE, V4L2_COLORSPACE_SRGB},
 };
 
+enum mt9m111_mode_id {
+   MT9M111_MODE_SXGA_8FPS,
+   MT9M111_MODE_SXGA_15FPS,
+   MT9M111_MODE_QSXGA_30FPS,
+   MT9M111_NUM_MODES,
+};
+
+struct mt9m111_mode_info {
+   unsigned int sensor_w;
+   unsigned int sensor_h;
+   unsigned int max_image_w;
+   unsigned int max_image_h;
+   unsigned int max_fps;
+   unsigned int reg_val;
+};
+
 struct mt9m111 {
struct v4l2_subdev subdev;
struct v4l2_ctrl_handler hdl;
@@ -213,6 +229,8 @@ struct mt9m111 {
struct v4l2_clk *clk;
unsigned int width; /* output */
unsigned int height;/* sizes */
+   struct v4l2_fract frame_interval;
+   const struct mt9m111_mode_info *current_mode;
struct mutex power_lock; /* lock to protect power_count */
int power_count;
const struct mt9m111_datafmt *fmt;
@@ -223,6 +241,34 @@ struct mt9m111 {
 #endif
 };
 
+static const struct mt9m111_mode_info mt9m111_mode_data[MT9M111_NUM_MODES] = {
+   [MT9M111_MODE_SXGA_8FPS] = {
+   .sensor_w = 1280,
+   .sensor_h = 1024,
+   .max_image_w = 1280,
+   .max_image_h = 1024,
+   .max_fps = 8,
+   .reg_val = MT9M111_RM_LOW_POWER_RD,
+   },
+   [MT9M111_MODE_SXGA_15FPS] = {
+   .sensor_w = 1280,
+   .sensor_h = 1024,
+   .max_image_w = 1280,
+   .max_image_h = 1024,
+   .max_fps = 15,
+   .reg_val = MT9M111_RM_FULL_POWER_RD,
+   },
+   [MT9M111_MODE_QSXGA_30FPS] = {
+   .sensor_w = 1280,
+   .sensor_h = 1024,
+   .max_image_w = 640,
+   .max_image_h = 512,
+   .max_fps = 30,
+   .reg_val = MT9M111_RM_LOW_POWER_RD | MT9M111_RM_COL_SKIP_2X |
+  MT9M111_RM_ROW_SKIP_2X,
+   },
+};
+
 /* Find a data format by a pixel code */
 static const struct mt9m111_datafmt *mt9m111_find_datafmt(struct mt9m111 
*mt9m111,
u32 code)
@@ -616,6 +662,62 @@ static int mt9m111_set_fmt(struct v4l2_subdev *sd,
return ret;
 }
 
+static const struct mt9m111_mode_info *
+mt9m111_find_mode(struct mt9m111 *mt9m111, unsigned int req_fps,
+ unsigned int width, unsigned int height)
+{
+   const struct mt9m111_mode_info *mode;
+   struct v4l2_rect *sensor_rect = >rect;
+   unsigned int gap, gap_best = (unsigned int) -1;
+   int i, best_gap_idx = 1;
+
+   /* find best matched fps */
+   for (i = 0; i < MT9M111_NUM_MODES; i++) {
+   unsigned int fps = mt9m111_mode_data[i].max_fps;
+
+   gap = abs(fps - req_fps);
+   if (gap < gap_best) {
+   best_gap_idx = i;
+   gap_best = gap;
+   }
+   }
+
+   /*
+* Use context a/b default timing values instead of calculate blanking
+* timing values.
+*/
+   mode = _mode_data[best_gap_idx];
+   mt9m111->ctx = (best_gap_idx == MT9M111_MODE_QSXGA_30FPS) ? _a :
+   _b;
+
+   /*
+* Check if current settings support the fps because fps selection is
+* based on the row/col skipping mechanism which has some restriction.
+*/
+   if (sensor_rect->width != mode->sensor_w ||
+   sensor_rect->height != mode->sensor_h ||
+   width > mode->max_image_w ||
+   height > mode->max_image_h) {
+   /* reset sensor window size */
+   mt9m111->rect.left = MT9M111_MIN_DARK_COLS;
+   mt9m111->rect.top = MT9M111_MIN_DARK_ROWS;
+   mt9m111->rect.width = mode->sensor_w;
+   mt9m111->rect.height = mode->sensor_h;
+
+   /* reset image size */
+   mt9m111->width = mode->max_image_w;
+   mt9m111->height = mode->max_image_h;
+
+   dev_warn(mt9m111->subdev.dev,
+"Warning: update image size %dx%d[%dx%d] -> 
%dx%d[%dx%d]\n",
+sensor_rect->width, sensor_rect->height, width, height,
+

Re: RFC: kernelCI media subsystem pilot (Test results for gtucker/kernelci-media - gtucker-kernelci-media-001-6-g1b2c6e5844d8)

2018-10-19 Thread Ezequiel Garcia
On Fri, 2018-10-19 at 09:06 +0200, Hans Verkuil wrote:
> On 10/18/2018 09:23 PM, Ezequiel Garcia wrote:
> > Hi everyone,
> > 
> > In Collabora, and as part of our kernelci work, we are doing
> > research on kernel functional testing with kernelci.
> > 
> > For those new to kernelci, see 
> > https://github.com/kernelci/kernelci-doc/wiki/KernelCI 
> > and https://kernelci.org/.
> > 
> > The goal is to lay down the infrastructure required to make
> > automated test coverage an integral part of our feature
> > and bugfix development process.
> > 
> > So, as a first attempt, we've decided to extend kernelci test
> > v4l2 plan support, leading the way to extending
> > other subsystems' test plans.
> > 
> > Currently, kernelci checks a list of branches every hour to
> > see if anything has changed. For any branch that has changed
> > since the last time it ran, it triggers builds, boots, tests and
> > reports.
> > 
> > For this pilot, we've decided to target just a few devices:
> > qemu with vivid, rk3399-gru-kevin and rk3288-veyron-jaq
> > with uvc.
> 
> It's running v4l2-compliance, right?
> 

Exactly.

> Looking at the test cases, they appear in the reverse order from the one
> in which v4l2-compliance performs them; that's a bit odd.
> 

That's something to check in the parser. I'm sure it's fixable if you
find it annoying.

> And if we include uvc in the testing, then I need to prioritize the work I
> started for uvc to remove the last FAILs.
> 

Well, we can run anything. We decided to go for uvc and vivid, just to pick
two popular examples, but there are really no restrictions.

Thanks,
Ezequiel


Re: [PATCH 2/2] vicodec: Implement spec-compliant stop command

2018-10-19 Thread Ezequiel Garcia
Hi Nicolas,

On Fri, 2018-10-19 at 07:41 -0400, Nicolas Dufresne wrote:
> Le vendredi 19 octobre 2018 à 07:35 -0400, Nicolas Dufresne a écrit :
> > Le vendredi 19 octobre 2018 à 09:28 +0200, Hans Verkuil a écrit :
> > > On 10/18/2018 06:08 PM, Ezequiel Garcia wrote:
> > > > Set up a statically-allocated dummy buffer to
> > > > be used as a flush buffer, which signals
> > > > an encoding (or decoding) stop.
> > > > 
> > > > When the flush buffer is queued to the OUTPUT queue,
> > > > the driver will send a V4L2_EVENT_EOS event, and
> > > > mark the CAPTURE buffer with V4L2_BUF_FLAG_LAST.
> > > 
> > > I'm confused. What is the current driver doing wrong? It is already
> > > setting the LAST flag AFAIK. I don't see why a dummy buffer is
> > > needed.
> > 
> > I'm not sure about this patch either. It seems to trigger the legacy
> > "empty payload" buffer case. The driver should mark the last buffer, and
> > then the following poll should return EPIPE. Maybe it's the latter that
> > isn't respected?
> 
> Sorry, I've sent this too fast. The following poll should not block,
> and DQBUF should return EPIPE.
> 
> In GStreamer we currently ignore the LAST flag and wait for EPIPE. The
> reason is that not all drivers can set the LAST flag. Exynos firmware
> tells you it's done later and we don't want to introduce any latency in
> the driver. The LAST flag isn't that useful in fact, but it can be used
> with RTP to set the marker bit.
> 

Yeah, I know that gstreamer ignores the LAST flag.

> In a previous discussion, using a buffer with payload 0 was not liked.
> There might be codecs where an empty buffer is valid, who knows?
> 

The goal of this patch is for the driver to mark the dst buf
as V4L2_BUF_FLAG_DONE and V4L2_BUF_FLAG_LAST, so videobuf2
core returns EPIPE on a DQBUF.
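
A minimal sketch of that sequence (illustrative only, not the actual vicodec
change; the function name is made up):

#include <linux/videodev2.h>
#include <media/v4l2-event.h>
#include <media/v4l2-mem2mem.h>

/* Mark the last CAPTURE buffer and signal EOS; once userspace has dequeued
 * a buffer carrying V4L2_BUF_FLAG_LAST, further DQBUF calls get -EPIPE. */
static void mark_last_and_signal_eos(struct v4l2_fh *fh,
                                     struct vb2_v4l2_buffer *dst_buf)
{
        static const struct v4l2_event eos_event = {
                .type = V4L2_EVENT_EOS,
        };

        dst_buf->flags |= V4L2_BUF_FLAG_LAST;
        v4l2_m2m_buf_done(dst_buf, VB2_BUF_STATE_DONE);
        v4l2_event_queue_fh(fh, &eos_event);
}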

Sorry for not being clear in the commit log.

Thanks,
Ezequiel


Re: [PATCH] media: rename soc_camera I2C drivers

2018-10-19 Thread jacopo mondi
Hi Mauro, Hans, Sakari,

On Fri, Oct 19, 2018 at 03:58:51PM +0300, Sakari Ailus wrote:
> Hi Hans, Mauro,
>
> On Fri, Oct 19, 2018 at 02:39:27PM +0200, Hans Verkuil wrote:
> > On 10/19/18 14:31, Mauro Carvalho Chehab wrote:
> > > Em Fri, 19 Oct 2018 13:45:32 +0200
> > > Hans Verkuil  escreveu:
> > >
> > >> On 10/19/18 13:43, Mauro Carvalho Chehab wrote:
> > >>> Those drivers are part of the legacy SoC camera framework.
> > >>> They're being converted to not use it, but sometimes we're
> > >>> keeping both the legacy and the new driver.
> > >>>
> > >>> This time, for example, we have two drivers on media with
> > >>> the same name: ov772x. That's bad.
> > >>>
> > >>> So, in order to prevent that from happening, let's prepend the SoC
> > >>> legacy drivers with soc_.
> > >>>
> > >>> No functional changes.
> > >>>
> > >>> Signed-off-by: Mauro Carvalho Chehab 
> > >>
> > >> Acked-by: Hans Verkuil 
> > >
> > > For now, let's just avoid the conflict if one builds both modules and
> > > do a modprobe ov772x.
> > >
> > >> Let's kill all of these in the next kernel. I see no reason for keeping
> > >> them around.
> > >
> > > While people are doing those SoC conversions, I would keep it. We
> >
> > Which people are doing SoC conversions? Nobody is using soc-camera anymore.
> > It is a dead driver. The only reason it hasn't been removed yet is lack of
> > time since it is not just removing the driver, but also patching old board
> > files that use soc_camera headers. Really left-overs since the corresponding
> > soc-camera drivers have long since been removed.
> >
> > > could move it to staging, to make it clear that those drivers require
> > > conversion, and give people some time to work on it.
> >
> > There is nobody working on it. These are old sensors, and few will have
> > the hardware to test it. If someone needs such a sensor driver, then they
> > can always look at an older kernel version. It's still in git after all.
> >
> > Just kill it rather than polluting the media tree.
>
> I remember at least Jacopo has been doing some. There was someone else as
> well, but I don't remember right now who it was. That said, I'm not sure if
> there's anything happening to the rest.

Yes, I did port a few drivers and there are patches for others coming.

[PATCH v2 0/4] media: soc_camera: ov9640: switch driver to v4l2_async
from Peter Cvek (now in Cc)
>
> Is there something that prevents removing these right away? As you said
> it's not functional and people can always check old versions if they want
> to port the driver to V4L2 sub-device framework.

All dependencies should have been solved so far, but given that
someone might want to do the porting at some point, I don't see how
bad it would be to have them in staging, even if people could look
into the git history...

>
> --
> Regards,
>
> Sakari Ailus
> sakari.ai...@linux.intel.com


signature.asc
Description: PGP signature


Re: [PATCH 2/2] vicodec: Implement spec-compliant stop command

2018-10-19 Thread Ezequiel Garcia
On Fri, 2018-10-19 at 09:28 +0200, Hans Verkuil wrote:
> On 10/18/2018 06:08 PM, Ezequiel Garcia wrote:
> > Set up a statically-allocated dummy buffer to
> > be used as a flush buffer, which signals
> > an encoding (or decoding) stop.
> > 
> > When the flush buffer is queued to the OUTPUT queue,
> > the driver will send a V4L2_EVENT_EOS event, and
> > mark the CAPTURE buffer with V4L2_BUF_FLAG_LAST.
> 
> I'm confused. What is the current driver doing wrong? It is already
> setting the LAST flag AFAIK.

The driver seems to be setting V4L2_BUF_FLAG_LAST on the dst
buf if there's no src buf.

IIRC, that alone has no effect, because .device_run never
gets to run (after all, there is no src buf), and the dst
buf is never marked as done and dequeued...
 
> I don't see why a dummy buffer is needed.
> 

... and that is why I took the dummy buffer idea (from some other driver),
which seemed an easy way to artificially run device_run
to dequeue the dst buf.

What do you think?

Thanks,
Ezequiel


Re: [PATCH 1/2] vicodec: Have decoder propagate changes to the CAPTURE queue

2018-10-19 Thread Ezequiel Garcia
On Fri, 2018-10-19 at 09:14 +0200, Hans Verkuil wrote:
> On 10/18/2018 06:08 PM, Ezequiel Garcia wrote:
> > The decoder interface (not yet merged) specifies that
> > width and height values set on the OUTPUT queue must
> > be propagated to the CAPTURE queue.
> > 
> > This is not enough to comply with the specification,
> > which would require properly supporting stream resolution
> > change detection and notification.
> > 
> > However, it's a relatively small change, which fixes behavior
> > required by some applications such as gstreamer.
> > 
> > With this change, it's possible to run a simple T(T⁻¹) pipeline:
> > 
> > gst-launch-1.0 videotestsrc ! v4l2fwhtenc ! v4l2fwhtdec ! fakevideosink
> > 
> > Signed-off-by: Ezequiel Garcia 
> > ---
> >  drivers/media/platform/vicodec/vicodec-core.c | 15 +++
> >  1 file changed, 15 insertions(+)
> > 
> > diff --git a/drivers/media/platform/vicodec/vicodec-core.c 
> > b/drivers/media/platform/vicodec/vicodec-core.c
> > index 1eb9132bfc85..a2c487b4b80d 100644
> > --- a/drivers/media/platform/vicodec/vicodec-core.c
> > +++ b/drivers/media/platform/vicodec/vicodec-core.c
> > @@ -673,6 +673,13 @@ static int vidioc_s_fmt(struct vicodec_ctx *ctx, 
> > struct v4l2_format *f)
> > q_data->width = pix->width;
> > q_data->height = pix->height;
> > q_data->sizeimage = pix->sizeimage;
> > +
> > +   /* Propagate changes to CAPTURE queue */
> > +   if (!ctx->is_enc && V4L2_TYPE_IS_OUTPUT(f->type)) {
> 
> Do we need !ctx->is_enc? Isn't this the same for both decoder and encoder?
> 

Well, I wasn't really sure about this. The decoder document clearly
says that changes have to be propagated to the capture queue, but that
statement is not in the encoder spec.

Since gstreamer didn't need this, I decided not to add it.

Perhaps it's something to correct in the encoder spec?

> > +   ctx->q_data[V4L2_M2M_DST].width = pix->width;
> > +   ctx->q_data[V4L2_M2M_DST].height = pix->height;
> > +   ctx->q_data[V4L2_M2M_DST].sizeimage = pix->sizeimage;
> 
> This is wrong: you are copying the sizeimage for the compressed format as the
> sizeimage for the raw format, which is quite different.
> 

Doh, you are right.

> I think you need to make a little helper function that can update the 
> width/height
> of a particular queue and that can calculate the sizeimage correctly.
> 

Sounds good.
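
Something along these lines, as a rough sketch (the bytes-per-pixel argument
and how it is obtained are assumptions, not the actual vicodec code; q_data
is the driver's existing per-queue state):

/* Update one queue's dimensions and recompute sizeimage for a raw format. */
static void vicodec_update_q_size(struct vicodec_q_data *q_data,
                                  unsigned int width, unsigned int height,
                                  unsigned int bytes_per_pixel)
{
        q_data->width = width;
        q_data->height = height;
        q_data->sizeimage = width * height * bytes_per_pixel;
}

so the propagation in vidioc_s_fmt() becomes a call on
ctx->q_data[V4L2_M2M_DST] instead of copying the OUTPUT queue's sizeimage.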

Thanks for the review,
Ezequiel


Re: [PATCH] media: rename soc_camera I2C drivers

2018-10-19 Thread Sakari Ailus
Hi Hans, Mauro,

On Fri, Oct 19, 2018 at 02:39:27PM +0200, Hans Verkuil wrote:
> On 10/19/18 14:31, Mauro Carvalho Chehab wrote:
> > Em Fri, 19 Oct 2018 13:45:32 +0200
> > Hans Verkuil  escreveu:
> > 
> >> On 10/19/18 13:43, Mauro Carvalho Chehab wrote:
> >>> Those drivers are part of the legacy SoC camera framework.
> >>> They're being converted to not use it, but sometimes we're
> >>> keeping both the legacy and the new driver.
> >>>
> >>> This time, for example, we have two drivers on media with
> >>> the same name: ov772x. That's bad.
> >>>
> >>> So, in order to prevent that from happening, let's prepend the SoC
> >>> legacy drivers with soc_.
> >>>
> >>> No functional changes.
> >>>
> >>> Signed-off-by: Mauro Carvalho Chehab   
> >>
> >> Acked-by: Hans Verkuil 
> > 
> > For now, let's just avoid the conflict if one builds both modules and
> > do a modprobe ov772x.
> > 
> >> Let's kill all of these in the next kernel. I see no reason for keeping
> >> them around.
> > 
> > While people are doing those SoC conversions, I would keep it. We
> 
> Which people are doing SoC conversions? Nobody is using soc-camera anymore.
> It is a dead driver. The only reason it hasn't been removed yet is lack of
> time since it is not just removing the driver, but also patching old board
> files that use soc_camera headers. Really left-overs since the corresponding
> soc-camera drivers have long since been removed.
> 
> > could move it to staging, to make it clear that those drivers require
> > conversion, and give people some time to work on it.
> 
> There is nobody working on it. These are old sensors, and few will have
> the hardware to test it. If someone needs such a sensor driver, then they
> can always look at an older kernel version. It's still in git after all.
> 
> Just kill it rather than polluting the media tree.

I remember at least Jacopo has been doing some. There was someone else as
well, but I don't remember right now who it was. That said, I'm not sure if
there's anything happening to the rest.

Is there something that prevents removing these right away? As you said
it's not functional and people can always check old versions if they want
to port the driver to V4L2 sub-device framework.

-- 
Regards,

Sakari Ailus
sakari.ai...@linux.intel.com


Re: [PATCH] media: rename soc_camera I2C drivers

2018-10-19 Thread Hans Verkuil
On 10/19/18 14:31, Mauro Carvalho Chehab wrote:
> Em Fri, 19 Oct 2018 13:45:32 +0200
> Hans Verkuil  escreveu:
> 
>> On 10/19/18 13:43, Mauro Carvalho Chehab wrote:
>>> Those drivers are part of the legacy SoC camera framework.
>>> They're being converted to not use it, but sometimes we're
>>> keeping both the legacy and the new driver.
>>>
>>> This time, for example, we have two drivers on media with
>>> the same name: ov772x. That's bad.
>>>
>>> So, in order to prevent that from happening, let's prepend the SoC
>>> legacy drivers with soc_.
>>>
>>> No functional changes.
>>>
>>> Signed-off-by: Mauro Carvalho Chehab   
>>
>> Acked-by: Hans Verkuil 
> 
> For now, let's just avoid the conflict if one builds both modules and
> do a modprobe ov772x.
> 
>> Let's kill all of these in the next kernel. I see no reason for keeping
>> them around.
> 
> While people are doing those SoC conversions, I would keep it. We

Which people are doing SoC conversions? Nobody is using soc-camera anymore.
It is a dead driver. The only reason it hasn't been removed yet is lack of
time since it is not just removing the driver, but also patching old board
files that use soc_camera headers. Really left-overs since the corresponding
soc-camera drivers have long since been removed.

> could move it to staging, to make it clear that those drivers require
> conversion, and give people some time to work on it.

There is nobody working on it. These are old sensors, and few will have
the hardware to test it. If someone needs such a sensor driver, then they
can always look at an older kernel version. It's still in git after all.

Just kill it rather than polluting the media tree.

Regards,

Hans

> 
> In the specific case of ov772x, as we already have a driver that doesn't
> require soc_camera, we can strip it for -next.
> 
>>
>> Regards,
>>
>>  Hans
>>
>>> ---
>>>  drivers/media/i2c/soc_camera/Makefile  | 18 +-
>>>  .../soc_camera/{mt9m001.c => soc_mt9m001.c}|  0
>>>  .../soc_camera/{mt9t112.c => soc_mt9t112.c}|  0
>>>  .../soc_camera/{mt9v022.c => soc_mt9v022.c}|  0
>>>  .../i2c/soc_camera/{ov5642.c => soc_ov5642.c}  |  0
>>>  .../i2c/soc_camera/{ov772x.c => soc_ov772x.c}  |  0
>>>  .../i2c/soc_camera/{ov9640.c => soc_ov9640.c}  |  0
>>>  .../i2c/soc_camera/{ov9740.c => soc_ov9740.c}  |  0
>>>  .../{rj54n1cb0c.c => soc_rj54n1cb0c.c} |  0
>>>  .../i2c/soc_camera/{tw9910.c => soc_tw9910.c}  |  0
>>>  10 files changed, 9 insertions(+), 9 deletions(-)
>>>  rename drivers/media/i2c/soc_camera/{mt9m001.c => soc_mt9m001.c} (100%)
>>>  rename drivers/media/i2c/soc_camera/{mt9t112.c => soc_mt9t112.c} (100%)
>>>  rename drivers/media/i2c/soc_camera/{mt9v022.c => soc_mt9v022.c} (100%)
>>>  rename drivers/media/i2c/soc_camera/{ov5642.c => soc_ov5642.c} (100%)
>>>  rename drivers/media/i2c/soc_camera/{ov772x.c => soc_ov772x.c} (100%)
>>>  rename drivers/media/i2c/soc_camera/{ov9640.c => soc_ov9640.c} (100%)
>>>  rename drivers/media/i2c/soc_camera/{ov9740.c => soc_ov9740.c} (100%)
>>>  rename drivers/media/i2c/soc_camera/{rj54n1cb0c.c => soc_rj54n1cb0c.c} 
>>> (100%)
>>>  rename drivers/media/i2c/soc_camera/{tw9910.c => soc_tw9910.c} (100%)
>>>
>>> diff --git a/drivers/media/i2c/soc_camera/Makefile 
>>> b/drivers/media/i2c/soc_camera/Makefile
>>> index 8c7770f62997..09ae483b96ef 100644
>>> --- a/drivers/media/i2c/soc_camera/Makefile
>>> +++ b/drivers/media/i2c/soc_camera/Makefile
>>> @@ -1,10 +1,10 @@
>>>  # SPDX-License-Identifier: GPL-2.0
>>> -obj-$(CONFIG_SOC_CAMERA_MT9M001)   += mt9m001.o
>>> -obj-$(CONFIG_SOC_CAMERA_MT9T112)   += mt9t112.o
>>> -obj-$(CONFIG_SOC_CAMERA_MT9V022)   += mt9v022.o
>>> -obj-$(CONFIG_SOC_CAMERA_OV5642)+= ov5642.o
>>> -obj-$(CONFIG_SOC_CAMERA_OV772X)+= ov772x.o
>>> -obj-$(CONFIG_SOC_CAMERA_OV9640)+= ov9640.o
>>> -obj-$(CONFIG_SOC_CAMERA_OV9740)+= ov9740.o
>>> -obj-$(CONFIG_SOC_CAMERA_RJ54N1)+= rj54n1cb0c.o
>>> -obj-$(CONFIG_SOC_CAMERA_TW9910)+= tw9910.o
>>> +obj-$(CONFIG_SOC_CAMERA_MT9M001)   += soc_mt9m001.o
>>> +obj-$(CONFIG_SOC_CAMERA_MT9T112)   += soc_mt9t112.o
>>> +obj-$(CONFIG_SOC_CAMERA_MT9V022)   += soc_mt9v022.o
>>> +obj-$(CONFIG_SOC_CAMERA_OV5642)+= soc_ov5642.o
>>> +obj-$(CONFIG_SOC_CAMERA_OV772X)+= soc_ov772x.o
>>> +obj-$(CONFIG_SOC_CAMERA_OV9640)+= soc_ov9640.o
>>> +obj-$(CONFIG_SOC_CAMERA_OV9740)+= soc_ov9740.o
>>> +obj-$(CONFIG_SOC_CAMERA_RJ54N1)+= soc_rj54n1cb0c.o
>>> +obj-$(CONFIG_SOC_CAMERA_TW9910)+= soc_tw9910.o
>>> diff --git a/drivers/media/i2c/soc_camera/mt9m001.c 
>>> b/drivers/media/i2c/soc_camera/soc_mt9m001.c
>>> similarity index 100%
>>> rename from drivers/media/i2c/soc_camera/mt9m001.c
>>> rename to drivers/media/i2c/soc_camera/soc_mt9m001.c
>>> diff --git a/drivers/media/i2c/soc_camera/mt9t112.c 
>>> b/drivers/media/i2c/soc_camera/soc_mt9t112.c
>>> similarity index 100%
>>> rename from 

Re: [PATCH] media: rename soc_camera I2C drivers

2018-10-19 Thread Mauro Carvalho Chehab
Em Fri, 19 Oct 2018 13:45:32 +0200
Hans Verkuil  escreveu:

> On 10/19/18 13:43, Mauro Carvalho Chehab wrote:
> > Those drivers are part of the legacy SoC camera framework.
> > They're being converted to not use it, but sometimes we're
> > keeping both the legacy and the new driver.
> > 
> > This time, for example, we have two drivers on media with
> > the same name: ov772x. That's bad.
> > 
> > So, in order to prevent that from happening, let's prepend the SoC
> > legacy drivers with soc_.
> > 
> > No functional changes.
> > 
> > Signed-off-by: Mauro Carvalho Chehab   
> 
> Acked-by: Hans Verkuil 

For now, let's just avoid the conflict if one builds both modules and
does a modprobe ov772x.

> Let's kill all of these in the next kernel. I see no reason for keeping
> them around.

While people are doing those SoC conversions, I would keep it. We
could move it to staging, to make it clear that those drivers require
conversion, and give people some time to work on it.

In the specific case of ov772x, as we already have a driver that doesn't
require soc_camera, we can strip it for -next.

> 
> Regards,
> 
>   Hans
> 
> > ---
> >  drivers/media/i2c/soc_camera/Makefile  | 18 +-
> >  .../soc_camera/{mt9m001.c => soc_mt9m001.c}|  0
> >  .../soc_camera/{mt9t112.c => soc_mt9t112.c}|  0
> >  .../soc_camera/{mt9v022.c => soc_mt9v022.c}|  0
> >  .../i2c/soc_camera/{ov5642.c => soc_ov5642.c}  |  0
> >  .../i2c/soc_camera/{ov772x.c => soc_ov772x.c}  |  0
> >  .../i2c/soc_camera/{ov9640.c => soc_ov9640.c}  |  0
> >  .../i2c/soc_camera/{ov9740.c => soc_ov9740.c}  |  0
> >  .../{rj54n1cb0c.c => soc_rj54n1cb0c.c} |  0
> >  .../i2c/soc_camera/{tw9910.c => soc_tw9910.c}  |  0
> >  10 files changed, 9 insertions(+), 9 deletions(-)
> >  rename drivers/media/i2c/soc_camera/{mt9m001.c => soc_mt9m001.c} (100%)
> >  rename drivers/media/i2c/soc_camera/{mt9t112.c => soc_mt9t112.c} (100%)
> >  rename drivers/media/i2c/soc_camera/{mt9v022.c => soc_mt9v022.c} (100%)
> >  rename drivers/media/i2c/soc_camera/{ov5642.c => soc_ov5642.c} (100%)
> >  rename drivers/media/i2c/soc_camera/{ov772x.c => soc_ov772x.c} (100%)
> >  rename drivers/media/i2c/soc_camera/{ov9640.c => soc_ov9640.c} (100%)
> >  rename drivers/media/i2c/soc_camera/{ov9740.c => soc_ov9740.c} (100%)
> >  rename drivers/media/i2c/soc_camera/{rj54n1cb0c.c => soc_rj54n1cb0c.c} 
> > (100%)
> >  rename drivers/media/i2c/soc_camera/{tw9910.c => soc_tw9910.c} (100%)
> > 
> > diff --git a/drivers/media/i2c/soc_camera/Makefile 
> > b/drivers/media/i2c/soc_camera/Makefile
> > index 8c7770f62997..09ae483b96ef 100644
> > --- a/drivers/media/i2c/soc_camera/Makefile
> > +++ b/drivers/media/i2c/soc_camera/Makefile
> > @@ -1,10 +1,10 @@
> >  # SPDX-License-Identifier: GPL-2.0
> > -obj-$(CONFIG_SOC_CAMERA_MT9M001)   += mt9m001.o
> > -obj-$(CONFIG_SOC_CAMERA_MT9T112)   += mt9t112.o
> > -obj-$(CONFIG_SOC_CAMERA_MT9V022)   += mt9v022.o
> > -obj-$(CONFIG_SOC_CAMERA_OV5642)+= ov5642.o
> > -obj-$(CONFIG_SOC_CAMERA_OV772X)+= ov772x.o
> > -obj-$(CONFIG_SOC_CAMERA_OV9640)+= ov9640.o
> > -obj-$(CONFIG_SOC_CAMERA_OV9740)+= ov9740.o
> > -obj-$(CONFIG_SOC_CAMERA_RJ54N1)+= rj54n1cb0c.o
> > -obj-$(CONFIG_SOC_CAMERA_TW9910)+= tw9910.o
> > +obj-$(CONFIG_SOC_CAMERA_MT9M001)   += soc_mt9m001.o
> > +obj-$(CONFIG_SOC_CAMERA_MT9T112)   += soc_mt9t112.o
> > +obj-$(CONFIG_SOC_CAMERA_MT9V022)   += soc_mt9v022.o
> > +obj-$(CONFIG_SOC_CAMERA_OV5642)+= soc_ov5642.o
> > +obj-$(CONFIG_SOC_CAMERA_OV772X)+= soc_ov772x.o
> > +obj-$(CONFIG_SOC_CAMERA_OV9640)+= soc_ov9640.o
> > +obj-$(CONFIG_SOC_CAMERA_OV9740)+= soc_ov9740.o
> > +obj-$(CONFIG_SOC_CAMERA_RJ54N1)+= soc_rj54n1cb0c.o
> > +obj-$(CONFIG_SOC_CAMERA_TW9910)+= soc_tw9910.o
> > diff --git a/drivers/media/i2c/soc_camera/mt9m001.c 
> > b/drivers/media/i2c/soc_camera/soc_mt9m001.c
> > similarity index 100%
> > rename from drivers/media/i2c/soc_camera/mt9m001.c
> > rename to drivers/media/i2c/soc_camera/soc_mt9m001.c
> > diff --git a/drivers/media/i2c/soc_camera/mt9t112.c 
> > b/drivers/media/i2c/soc_camera/soc_mt9t112.c
> > similarity index 100%
> > rename from drivers/media/i2c/soc_camera/mt9t112.c
> > rename to drivers/media/i2c/soc_camera/soc_mt9t112.c
> > diff --git a/drivers/media/i2c/soc_camera/mt9v022.c 
> > b/drivers/media/i2c/soc_camera/soc_mt9v022.c
> > similarity index 100%
> > rename from drivers/media/i2c/soc_camera/mt9v022.c
> > rename to drivers/media/i2c/soc_camera/soc_mt9v022.c
> > diff --git a/drivers/media/i2c/soc_camera/ov5642.c 
> > b/drivers/media/i2c/soc_camera/soc_ov5642.c
> > similarity index 100%
> > rename from drivers/media/i2c/soc_camera/ov5642.c
> > rename to drivers/media/i2c/soc_camera/soc_ov5642.c
> > diff --git a/drivers/media/i2c/soc_camera/ov772x.c 
> > b/drivers/media/i2c/soc_camera/soc_ov772x.c
> > similarity index 100%
> > rename from 

Re: [PATCH v3 00/16] i.MX media mem2mem scaler

2018-10-19 Thread Philipp Zabel
Hi Steve,

On Wed, 2018-10-17 at 16:46 -0700, Steve Longerbeam wrote:
> Hi Philipp,
> 
> On 10/12/18 5:29 PM, Steve Longerbeam wrote:
> > 
> > 
> > But one last thing. Conversions to and from YV12 are producing images
> > with wrong colors; it looks like the .uv_swapped boolean needs to be
> > checked additionally somewhere. Any ideas?
> 
> 
> Sorry, this was my fault. I fixed this in
> 
> "gpu: ipu-v3: Add chroma plane offset overrides to ipu_cpmem_set_image()"
> 
> in my fork g...@github.com:slongerbeam/mediatree.git, branch imx-mem2mem.3.
> 
> Steve

Thanks a lot for testing, fixes, and integration. Basically I've just
resubmitted that branch as v4.
After this round I'll pick up all non-controversial ipu-v3 / image-
convert patches.

regards
Philipp


[PATCH v4 02/22] gpu: ipu-cpmem: add WARN_ON_ONCE() for unaligned dma buffers

2018-10-19 Thread Philipp Zabel
From: Steve Longerbeam 

Add a WARN_ON_ONCE() if either the Y/packed buffer, or the U/V offsets,
are not aligned on 8-byte boundaries. This will catch alignment
bugs in DRM, V4L2.
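
The CPMEM stores buffer addresses and U/V offsets divided by 8 (see the
buf >> 3 and u_off / 8 writes below), so a value that is not 8-byte aligned
silently loses its low three bits. A small stand-alone user-space sketch of
the effect, not part of the patch:

#include <stdint.h>
#include <stdio.h>

static int is_idmac_aligned(uint32_t addr)
{
	/* same predicate as WARN_ON_ONCE(buf & 0x7) */
	return (addr & 0x7) == 0;
}

int main(void)
{
	uint32_t buf = 0x1234567c;		/* deliberately misaligned */
	uint32_t used = (buf >> 3) << 3;	/* what the hardware effectively uses */

	printf("buf 0x%08x aligned %d effectively used 0x%08x\n",
	       buf, is_idmac_aligned(buf), used);
	return 0;
}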

Signed-off-by: Steve Longerbeam 
---
No changes since v3.
---
 drivers/gpu/ipu-v3/ipu-cpmem.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/drivers/gpu/ipu-v3/ipu-cpmem.c b/drivers/gpu/ipu-v3/ipu-cpmem.c
index a9d2501500a1..7e65954f13c2 100644
--- a/drivers/gpu/ipu-v3/ipu-cpmem.c
+++ b/drivers/gpu/ipu-v3/ipu-cpmem.c
@@ -259,6 +259,8 @@ EXPORT_SYMBOL_GPL(ipu_cpmem_set_high_priority);
 
 void ipu_cpmem_set_buffer(struct ipuv3_channel *ch, int bufnum, dma_addr_t buf)
 {
+   WARN_ON_ONCE(buf & 0x7);
+
if (bufnum)
ipu_ch_param_write_field(ch, IPU_FIELD_EBA1, buf >> 3);
else
@@ -268,6 +270,8 @@ EXPORT_SYMBOL_GPL(ipu_cpmem_set_buffer);
 
 void ipu_cpmem_set_uv_offset(struct ipuv3_channel *ch, u32 u_off, u32 v_off)
 {
+   WARN_ON_ONCE((u_off & 0x7) || (v_off & 0x7));
+
ipu_ch_param_write_field(ch, IPU_FIELD_UBO, u_off / 8);
ipu_ch_param_write_field(ch, IPU_FIELD_VBO, v_off / 8);
 }
@@ -435,6 +439,8 @@ void ipu_cpmem_set_yuv_planar_full(struct ipuv3_channel *ch,
   unsigned int uv_stride,
   unsigned int u_offset, unsigned int v_offset)
 {
+   WARN_ON_ONCE((u_offset & 0x7) || (v_offset & 0x7));
+
ipu_ch_param_write_field(ch, IPU_FIELD_SLUV, uv_stride - 1);
ipu_ch_param_write_field(ch, IPU_FIELD_UBO, u_offset / 8);
ipu_ch_param_write_field(ch, IPU_FIELD_VBO, v_offset / 8);
-- 
2.19.0



[PATCH v4 16/22] gpu: ipu-v3: image-convert: select optimal seam positions

2018-10-19 Thread Philipp Zabel
Select seam positions that minimize distortions during seam hiding while
satisfying input and output IDMAC, rotator, and image format constraints.

This code looks for aligned output seam positions that minimize the
difference between the fractional corresponding ideal input positions
and the input positions rounded to alignment requirements.

Since tiles can now be sized differently, the alignment restrictions of the
complete image can be relaxed in the next step.
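
As a stand-alone illustration of the search (simplified: the burst-size
checks are skipped and all numbers are made up), positions are compared in
the driver's 19.13 fixed-point representation where 8192 equals one input
pixel:

#include <stdio.h>

#define FP_ONE 8192U

static unsigned int round_closest(unsigned int x, unsigned int y)
{
	return ((x + y / 2) / y) * y;
}

int main(void)
{
	unsigned int resize_coeff = 4096;	/* 8192/4096 = 2x upscale */
	unsigned int downsize_coeff = 0;	/* no pre-decimation */
	unsigned int in_align = 2, out_align = 8;
	unsigned int out_start = 500, out_end = 540;
	unsigned int best_out = 0, best_in = 0, min_diff = ~0U;
	unsigned int out_pos;

	/* only consider output-aligned seam candidates */
	for (out_pos = (out_start + out_align - 1) / out_align * out_align;
	     out_pos < out_end; out_pos += out_align) {
		/* ideal input position for this output pixel, 19.13 fixed point */
		unsigned int in_pos = (out_pos * resize_coeff) << downsize_coeff;
		/* closest input position the next tile could actually start at */
		unsigned int in_aligned = round_closest(in_pos, FP_ONE * in_align);
		unsigned int diff = in_pos > in_aligned ?
				    in_pos - in_aligned : in_aligned - in_pos;

		if (diff < min_diff) {
			min_diff = diff;
			best_out = out_pos;
			best_in = in_aligned;
		}
	}
	printf("out seam %u -> in seam %u px (error %u/8192 px)\n",
	       best_out, best_in / FP_ONE, min_diff);
	return 0;
}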

Signed-off-by: Philipp Zabel 
---
Changes since v3:
 - Fix tile_left_align for 24-bit RGB formats and reduce alignment
   restrictions for U/V packed planar YUV formats.
---
 drivers/gpu/ipu-v3/ipu-image-convert.c | 343 -
 1 file changed, 337 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c 
b/drivers/gpu/ipu-v3/ipu-image-convert.c
index a407ca3b367b..a674241dd0b8 100644
--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
+++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
@@ -432,6 +432,126 @@ static int calc_image_resize_coefficients(struct 
ipu_image_convert_ctx *ctx,
return 0;
 }
 
+#define round_closest(x, y) round_down((x) + (y)/2, (y))
+
+/*
+ * Find the best aligned seam position in the interval [out_start, out_end].
+ * Rotation and image offsets are out of scope.
+ *
+ * @out_start: start of interval, must be within 1024 pixels / lines
+ * of out_end
+ * @out_end: end of interval, smaller than or equal to out_edge
+ * @in_edge: input right / bottom edge
+ * @out_edge: output right / bottom edge
+ * @in_align: input alignment, either horizontal 8-byte line start address
+ *alignment, or pixel alignment due to image format
+ * @out_align: output alignment, either horizontal 8-byte line start address
+ * alignment, or pixel alignment due to image format or rotator
+ * block size
+ * @in_burst: horizontal input burst size in case of horizontal flip
+ * @out_burst: horizontal output burst size or rotator block size
+ * @downsize_coeff: downsizing section coefficient
+ * @resize_coeff: main processing section resizing coefficient
+ * @_in_seam: aligned input seam position return value
+ * @_out_seam: aligned output seam position return value
+ */
+static void find_best_seam(struct ipu_image_convert_ctx *ctx,
+  unsigned int out_start,
+  unsigned int out_end,
+  unsigned int in_edge,
+  unsigned int out_edge,
+  unsigned int in_align,
+  unsigned int out_align,
+  unsigned int in_burst,
+  unsigned int out_burst,
+  unsigned int downsize_coeff,
+  unsigned int resize_coeff,
+  u32 *_in_seam,
+  u32 *_out_seam)
+{
+   struct device *dev = ctx->chan->priv->ipu->dev;
+   unsigned int out_pos;
+   /* Input / output seam position candidates */
+   unsigned int out_seam = 0;
+   unsigned int in_seam = 0;
+   unsigned int min_diff = UINT_MAX;
+
+   /*
+* Output tiles must start at a multiple of 8 bytes horizontally and
+* possibly at an even line horizontally depending on the pixel format.
+* Only consider output aligned positions for the seam.
+*/
+   out_start = round_up(out_start, out_align);
+   for (out_pos = out_start; out_pos < out_end; out_pos += out_align) {
+   unsigned int in_pos;
+   unsigned int in_pos_aligned;
+   unsigned int abs_diff;
+
+   /*
+* Tiles in the right row / bottom column may not be allowed to
+* overshoot horizontally / vertically. out_burst may be the
+* actual DMA burst size, or the rotator block size.
+*/
+   if ((out_burst > 1) && (out_edge - out_pos) % out_burst)
+   continue;
+
+   /*
+* Input sample position, corresponding to out_pos, 19.13 fixed
+* point.
+*/
+   in_pos = (out_pos * resize_coeff) << downsize_coeff;
+   /*
+* The closest input sample position that we could actually
+* start the input tile at, 19.13 fixed point.
+*/
+   in_pos_aligned = round_closest(in_pos, 8192U * in_align);
+
+   if ((in_burst > 1) &&
+   (in_edge - in_pos_aligned / 8192U) % in_burst)
+   continue;
+
+   if (in_pos < in_pos_aligned)
+   abs_diff = in_pos_aligned - in_pos;
+   else
+   abs_diff = in_pos - in_pos_aligned;
+
+   if (abs_diff < min_diff) {
+   in_seam = in_pos_aligned;
+   out_seam = out_pos;
+   min_diff = 

[PATCH v4 14/22] gpu: ipu-v3: image-convert: calculate tile dimensions and offsets outside fill_image

2018-10-19 Thread Philipp Zabel
This will allow calculating seam positions after initializing the
ipu_image base structure but before calculating tile dimensions.

Signed-off-by: Philipp Zabel 
---
No changes since v3.
---
 drivers/gpu/ipu-v3/ipu-image-convert.c | 18 +-
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c 
b/drivers/gpu/ipu-v3/ipu-image-convert.c
index d14ee7b303a1..542c091cfef1 100644
--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
+++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
@@ -1467,9 +1467,7 @@ static int fill_image(struct ipu_image_convert_ctx *ctx,
else
ic_image->stride  = ic_image->base.pix.bytesperline;
 
-   calc_tile_dimensions(ctx, ic_image);
-
-   return calc_tile_offsets(ctx, ic_image);
+   return 0;
 }
 
 /* borrowed from drivers/media/v4l2-core/v4l2-common.c */
@@ -1673,14 +1671,24 @@ ipu_image_convert_prepare(struct ipu_soc *ipu, enum 
ipu_ic_task ic_task,
ctx->num_tiles = d_image->num_cols * d_image->num_rows;
ctx->rot_mode = rot_mode;
 
+   ret = fill_image(ctx, s_image, in, IMAGE_CONVERT_IN);
+   if (ret)
+   goto out_free;
+   ret = fill_image(ctx, d_image, out, IMAGE_CONVERT_OUT);
+   if (ret)
+   goto out_free;
+
ret = calc_image_resize_coefficients(ctx, in, out);
if (ret)
goto out_free;
 
-   ret = fill_image(ctx, s_image, in, IMAGE_CONVERT_IN);
+   calc_tile_dimensions(ctx, s_image);
+   ret = calc_tile_offsets(ctx, s_image);
if (ret)
goto out_free;
-   ret = fill_image(ctx, d_image, out, IMAGE_CONVERT_OUT);
+
+   calc_tile_dimensions(ctx, d_image);
+   ret = calc_tile_offsets(ctx, d_image);
if (ret)
goto out_free;
 
-- 
2.19.0



[PATCH v4 03/22] gpu: ipu-v3: Add chroma plane offset overrides to ipu_cpmem_set_image()

2018-10-19 Thread Philipp Zabel
From: Steve Longerbeam 

Allow the caller of ipu_cpmem_set_image() to override the latters
calculation of the chroma plane offsets, by adding override U/V
plane offsets to 'struct ipu_image'.
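
For reference, the default derivation that these overrides replace boils
down to the following YUV420 arithmetic (a user-space sketch with made-up
frame and tile numbers, not code from the patch); a caller that computes
different, 8-byte aligned chroma offsets can now pass them in the new
u_offset/v_offset fields instead:

#include <stdio.h>

int main(void)
{
	unsigned int width = 1920, height = 1088;	/* full frame, YUV420 */
	unsigned int left = 960, top = 544;		/* tile origin inside the frame */

	/* luma offset of the tile, and derived chroma offsets relative to it */
	unsigned int y_off = top * width + left;
	unsigned int u_off = width * height + (top / 2) * (width / 2) + left / 2
			     - y_off;
	unsigned int v_off = u_off + (width / 2) * (height / 2);

	printf("y_off %u, u_off %u, v_off %u, all 8-byte aligned: %d\n",
	       y_off, u_off, v_off, !((y_off | u_off | v_off) & 0x7));
	return 0;
}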

Signed-off-by: Steve Longerbeam 
---
New since v3.
---
 drivers/gpu/ipu-v3/ipu-cpmem.c | 46 +++---
 drivers/gpu/ipu-v3/ipu-image-convert.c | 10 +++---
 include/video/imx-ipu-v3.h |  3 ++
 3 files changed, 35 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-cpmem.c b/drivers/gpu/ipu-v3/ipu-cpmem.c
index 7e65954f13c2..163fadb8a33a 100644
--- a/drivers/gpu/ipu-v3/ipu-cpmem.c
+++ b/drivers/gpu/ipu-v3/ipu-cpmem.c
@@ -745,48 +745,56 @@ int ipu_cpmem_set_image(struct ipuv3_channel *ch, struct 
ipu_image *image)
switch (pix->pixelformat) {
case V4L2_PIX_FMT_YUV420:
offset = Y_OFFSET(pix, image->rect.left, image->rect.top);
-   u_offset = U_OFFSET(pix, image->rect.left,
-   image->rect.top) - offset;
-   v_offset = V_OFFSET(pix, image->rect.left,
-   image->rect.top) - offset;
+   u_offset = image->u_offset ?
+   image->u_offset : U_OFFSET(pix, image->rect.left,
+  image->rect.top) - offset;
+   v_offset = image->v_offset ?
+   image->v_offset : V_OFFSET(pix, image->rect.left,
+  image->rect.top) - offset;
 
ipu_cpmem_set_yuv_planar_full(ch, pix->bytesperline / 2,
  u_offset, v_offset);
break;
case V4L2_PIX_FMT_YVU420:
offset = Y_OFFSET(pix, image->rect.left, image->rect.top);
-   u_offset = U_OFFSET(pix, image->rect.left,
-   image->rect.top) - offset;
-   v_offset = V_OFFSET(pix, image->rect.left,
-   image->rect.top) - offset;
+   u_offset = image->u_offset ?
+   image->u_offset : V_OFFSET(pix, image->rect.left,
+  image->rect.top) - offset;
+   v_offset = image->v_offset ?
+   image->v_offset : U_OFFSET(pix, image->rect.left,
+  image->rect.top) - offset;
 
ipu_cpmem_set_yuv_planar_full(ch, pix->bytesperline / 2,
- v_offset, u_offset);
+ u_offset, v_offset);
break;
case V4L2_PIX_FMT_YUV422P:
offset = Y_OFFSET(pix, image->rect.left, image->rect.top);
-   u_offset = U2_OFFSET(pix, image->rect.left,
-image->rect.top) - offset;
-   v_offset = V2_OFFSET(pix, image->rect.left,
-image->rect.top) - offset;
+   u_offset = image->u_offset ?
+   image->u_offset : U2_OFFSET(pix, image->rect.left,
+   image->rect.top) - offset;
+   v_offset = image->v_offset ?
+   image->v_offset : V2_OFFSET(pix, image->rect.left,
+   image->rect.top) - offset;
 
ipu_cpmem_set_yuv_planar_full(ch, pix->bytesperline / 2,
  u_offset, v_offset);
break;
case V4L2_PIX_FMT_NV12:
offset = Y_OFFSET(pix, image->rect.left, image->rect.top);
-   u_offset = UV_OFFSET(pix, image->rect.left,
-image->rect.top) - offset;
-   v_offset = 0;
+   u_offset = image->u_offset ?
+   image->u_offset : UV_OFFSET(pix, image->rect.left,
+   image->rect.top) - offset;
+   v_offset = image->v_offset ? image->v_offset : 0;
 
ipu_cpmem_set_yuv_planar_full(ch, pix->bytesperline,
  u_offset, v_offset);
break;
case V4L2_PIX_FMT_NV16:
offset = Y_OFFSET(pix, image->rect.left, image->rect.top);
-   u_offset = UV2_OFFSET(pix, image->rect.left,
- image->rect.top) - offset;
-   v_offset = 0;
+   u_offset = image->u_offset ?
+   image->u_offset : UV2_OFFSET(pix, image->rect.left,
+image->rect.top) - offset;
+   v_offset = image->v_offset ? image->v_offset : 0;
 
ipu_cpmem_set_yuv_planar_full(ch, pix->bytesperline,
  u_offset, v_offset);
diff --git 

[PATCH v4 13/22] gpu: ipu-v3: image-convert: store tile top/left position

2018-10-19 Thread Philipp Zabel
Store tile top/left position in pixels in the tile structure.
This will allow overlapping tiles with different sizes later.
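
The stored positions follow directly from the row-major tile numbering; a
minimal sketch with a hypothetical, equally sized 4x2 tiling:

#include <stdio.h>

int main(void)
{
	/* hypothetical 4x2 tiling of a 2048x1024 frame, all tiles equally sized */
	unsigned int num_cols = 4, num_rows = 2;
	unsigned int tile_w = 2048 / num_cols, tile_h = 1024 / num_rows;
	unsigned int i;

	for (i = 0; i < num_cols * num_rows; i++) {
		unsigned int row = i / num_cols;	/* tiles are numbered row major */
		unsigned int col = i % num_cols;

		printf("tile %u: left %u, top %u\n", i, col * tile_w, row * tile_h);
	}
	return 0;
}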

Signed-off-by: Philipp Zabel 
---
No functional changes since v3.
---
 drivers/gpu/ipu-v3/ipu-image-convert.c | 27 ++
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c 
b/drivers/gpu/ipu-v3/ipu-image-convert.c
index cb47981741b4..d14ee7b303a1 100644
--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
+++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
@@ -84,6 +84,8 @@ struct ipu_image_convert_dma_chan {
 struct ipu_image_tile {
u32 width;
u32 height;
+   u32 left;
+   u32 top;
/* size and strides are in bytes */
u32 size;
u32 stride;
@@ -433,13 +435,17 @@ static int calc_image_resize_coefficients(struct 
ipu_image_convert_ctx *ctx,
 static void calc_tile_dimensions(struct ipu_image_convert_ctx *ctx,
 struct ipu_image_convert_image *image)
 {
-   int i;
+   unsigned int i;
 
for (i = 0; i < ctx->num_tiles; i++) {
		struct ipu_image_tile *tile = &image->tile[i];
+   const unsigned int row = i / image->num_cols;
+   const unsigned int col = i % image->num_cols;
 
tile->height = image->base.pix.height / image->num_rows;
tile->width = image->base.pix.width / image->num_cols;
+   tile->left = col * tile->width;
+   tile->top = row * tile->height;
tile->size = ((tile->height * image->fmt->bpp) >> 3) *
tile->width;
 
@@ -535,7 +541,7 @@ static int calc_tile_offsets_planar(struct 
ipu_image_convert_ctx *ctx,
struct ipu_image_convert_priv *priv = chan->priv;
const struct ipu_image_pixfmt *fmt = image->fmt;
unsigned int row, col, tile = 0;
-   u32 H, w, h, y_stride, uv_stride;
+   u32 H, top, y_stride, uv_stride;
u32 uv_row_off, uv_col_off, uv_off, u_off, v_off, tmp;
u32 y_row_off, y_col_off, y_off;
u32 y_size, uv_size;
@@ -552,13 +558,12 @@ static int calc_tile_offsets_planar(struct 
ipu_image_convert_ctx *ctx,
uv_size = y_size / (fmt->uv_width_dec * fmt->uv_height_dec);
 
for (row = 0; row < image->num_rows; row++) {
-   w = image->tile[tile].width;
-   h = image->tile[tile].height;
-   y_row_off = row * h * y_stride;
-   uv_row_off = (row * h * uv_stride) / fmt->uv_height_dec;
+   top = image->tile[tile].top;
+   y_row_off = top * y_stride;
+   uv_row_off = (top * uv_stride) / fmt->uv_height_dec;
 
for (col = 0; col < image->num_cols; col++) {
-   y_col_off = col * w;
+   y_col_off = image->tile[tile].left;
uv_col_off = y_col_off / fmt->uv_width_dec;
if (fmt->uv_packed)
uv_col_off *= 2;
@@ -601,7 +606,7 @@ static int calc_tile_offsets_packed(struct 
ipu_image_convert_ctx *ctx,
struct ipu_image_convert_priv *priv = chan->priv;
const struct ipu_image_pixfmt *fmt = image->fmt;
unsigned int row, col, tile = 0;
-   u32 w, h, bpp, stride, offset;
+   u32 bpp, stride, offset;
u32 row_off, col_off;
 
/* setup some convenience vars */
@@ -609,12 +614,10 @@ static int calc_tile_offsets_packed(struct 
ipu_image_convert_ctx *ctx,
bpp = fmt->bpp;
 
for (row = 0; row < image->num_rows; row++) {
-   w = image->tile[tile].width;
-   h = image->tile[tile].height;
-   row_off = row * h * stride;
+   row_off = image->tile[tile].top * stride;
 
for (col = 0; col < image->num_cols; col++) {
-   col_off = (col * w * bpp) >> 3;
+   col_off = (image->tile[tile].left * bpp) >> 3;
 
offset = row_off + col_off;
 
-- 
2.19.0



[PATCH v4 19/22] gpu: ipu-v3: image-convert: fix bytesperline adjustment

2018-10-19 Thread Philipp Zabel
For planar formats, bytesperline does not depend on BPP. It must always
be larger than width and aligned to tile width alignment restrictions.

The input bytesperline to ipu_image_convert_adjust() may be
uninitialized, so don't rely on input bytesperline as the
minimum value for clamp_align(). Use 2 << w_align as the minimum
instead.
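
A worked example of the resulting stride and size, assuming the width is
already aligned so that clamp_align() is a no-op (hypothetical 1280x720
frame, packed RGB565 vs. planar YUV420):

#include <stdio.h>

int main(void)
{
	unsigned int w = 1280, h = 720;

	/* packed RGB565: 16 bpp, the stride scales with bpp */
	unsigned int rgb_bpl = (w * 16) >> 3;
	unsigned int rgb_size = h * rgb_bpl;

	/* planar YUV420: 12 bpp total, the stride is the luma width only */
	unsigned int yuv_bpl = w;
	unsigned int yuv_size = (h * yuv_bpl * 12) >> 3;

	printf("RGB565: bytesperline %u, sizeimage %u\n", rgb_bpl, rgb_size);
	printf("YUV420: bytesperline %u, sizeimage %u\n", yuv_bpl, yuv_size);
	return 0;
}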

Signed-off-by: Philipp Zabel 
[slongerb...@gmail.com: clamp input bytesperline]
Signed-off-by: Steve Longerbeam 
---
No changes since v3.
---
 drivers/gpu/ipu-v3/ipu-image-convert.c | 16 
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c 
b/drivers/gpu/ipu-v3/ipu-image-convert.c
index 0829723a7599..b735065fe288 100644
--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
+++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
@@ -1915,10 +1915,18 @@ void ipu_image_convert_adjust(struct ipu_image *in, 
struct ipu_image *out,
out->pix.height = clamp_align(out->pix.height, MIN_H, MAX_H, h_align);
 
/* set input/output strides and image sizes */
-   in->pix.bytesperline = (in->pix.width * infmt->bpp) >> 3;
-   in->pix.sizeimage = in->pix.height * in->pix.bytesperline;
-   out->pix.bytesperline = (out->pix.width * outfmt->bpp) >> 3;
-   out->pix.sizeimage = out->pix.height * out->pix.bytesperline;
+   in->pix.bytesperline = infmt->planar ?
+   clamp_align(in->pix.width, 2 << w_align, MAX_W, w_align) :
+   clamp_align((in->pix.width * infmt->bpp) >> 3,
+   2 << w_align, MAX_W, w_align);
+   in->pix.sizeimage = infmt->planar ?
+   (in->pix.height * in->pix.bytesperline * infmt->bpp) >> 3 :
+   in->pix.height * in->pix.bytesperline;
+   out->pix.bytesperline = outfmt->planar ? out->pix.width :
+   (out->pix.width * outfmt->bpp) >> 3;
+   out->pix.sizeimage = outfmt->planar ?
+   (out->pix.height * out->pix.bytesperline * outfmt->bpp) >> 3 :
+   out->pix.height * out->pix.bytesperline;
 }
 EXPORT_SYMBOL_GPL(ipu_image_convert_adjust);
 
-- 
2.19.0



[PATCH v4 15/22] gpu: ipu-v3: image-convert: move tile alignment helpers

2018-10-19 Thread Philipp Zabel
Move tile_width_align and tile_height_align up so they
can be used by the tile edge position calculation code.

Signed-off-by: Philipp Zabel 
---
No changes since v3.
---
 drivers/gpu/ipu-v3/ipu-image-convert.c | 54 +-
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c 
b/drivers/gpu/ipu-v3/ipu-image-convert.c
index 542c091cfef1..a407ca3b367b 100644
--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
+++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
@@ -432,6 +432,33 @@ static int calc_image_resize_coefficients(struct 
ipu_image_convert_ctx *ctx,
return 0;
 }
 
+/*
+ * We have to adjust the tile width such that the tile physaddrs and
+ * U and V plane offsets are multiples of 8 bytes as required by
+ * the IPU DMA Controller. For the planar formats, this corresponds
+ * to a pixel alignment of 16 (but use a more formal equation since
+ * the variables are available). For all the packed formats, 8 is
+ * good enough.
+ */
+static inline u32 tile_width_align(const struct ipu_image_pixfmt *fmt)
+{
+   return fmt->planar ? 8 * fmt->uv_width_dec : 8;
+}
+
+/*
+ * For tile height alignment, we have to ensure that the output tile
+ * heights are multiples of 8 lines if the IRT is required by the
+ * given rotation mode (the IRT performs rotations on 8x8 blocks
+ * at a time). If the IRT is not used, or for input image tiles,
+ * 2 lines are good enough.
+ */
+static inline u32 tile_height_align(enum ipu_image_convert_type type,
+   enum ipu_rotate_mode rot_mode)
+{
+   return (type == IMAGE_CONVERT_OUT &&
+   ipu_rot_mode_is_irt(rot_mode)) ? 8 : 2;
+}
+
 static void calc_tile_dimensions(struct ipu_image_convert_ctx *ctx,
 struct ipu_image_convert_image *image)
 {
@@ -1487,33 +1514,6 @@ static unsigned int clamp_align(unsigned int x, unsigned 
int min,
return x;
 }
 
-/*
- * We have to adjust the tile width such that the tile physaddrs and
- * U and V plane offsets are multiples of 8 bytes as required by
- * the IPU DMA Controller. For the planar formats, this corresponds
- * to a pixel alignment of 16 (but use a more formal equation since
- * the variables are available). For all the packed formats, 8 is
- * good enough.
- */
-static inline u32 tile_width_align(const struct ipu_image_pixfmt *fmt)
-{
-   return fmt->planar ? 8 * fmt->uv_width_dec : 8;
-}
-
-/*
- * For tile height alignment, we have to ensure that the output tile
- * heights are multiples of 8 lines if the IRT is required by the
- * given rotation mode (the IRT performs rotations on 8x8 blocks
- * at a time). If the IRT is not used, or for input image tiles,
- * 2 lines are good enough.
- */
-static inline u32 tile_height_align(enum ipu_image_convert_type type,
-   enum ipu_rotate_mode rot_mode)
-{
-   return (type == IMAGE_CONVERT_OUT &&
-   ipu_rot_mode_is_irt(rot_mode)) ? 8 : 2;
-}
-
 /* Adjusts input/output images to IPU restrictions */
 void ipu_image_convert_adjust(struct ipu_image *in, struct ipu_image *out,
  enum ipu_rotate_mode rot_mode)
-- 
2.19.0



[PATCH v4 00/22] i.MX media mem2mem scaler

2018-10-19 Thread Philipp Zabel
Hi,

this is the fourth version of the i.MX mem2mem scaler series.

An alignment issue with 24-bit RGB formats has been corrected in the
seam position selection patch and a few new fixes by Steve have been
added. If there are no more issues, I'll pick up the ipu-v3 patches
via imx-drm/next. The first patch could be merged via the media tree
independently.

Changes since v3:
 - Fix tile_left_align for 24-bit RGB formats and reduce alignment
   restrictions for U/V packed planar YUV formats
 - Catch unaligned tile offsets in image-convert
 - Add chroma plane offset overrides to ipu_cpmem_set_image() to
   prevent a false positive warning in some cases
 - Fix a race between run and unprepare and make abort reentrant.


Changes since v2:
 - Rely on ipu_image_convert_adjust() in mem2mem_try_fmt() for format
   adjustments. This makes the mem2mem driver mostly a V4L2 mem2mem API
   wrapper around the IPU image converter, and independent of the
   internal image converter implementation.
 - Remove the source and destination buffers on error in device_run().
   Otherwise the conversion is re-attempted apparently over and over
   again (with WARN() backtraces).
 - Allow subscribing to control changes.
 - Fix seam position selection for more corner cases:
- Switch width/height properly and align tile top left positions to 8x8
  IRT block size when rotating.
- Align input width to input burst length in case the scaling step
  flips horizontally.
- Fix bottom edge calculation.

Changes since v1:
 - Fix inverted allow_overshoot logic
 - Correctly switch horizontal / vertical tile alignment when
   determining seam positions with the 90° rotator active.
 - Fix SPDX-License-Identifier and remove superfluous license
   text.
 - Fix uninitialized walign in try_fmt

Previous cover letter:

we have had image conversion code for scaling and colorspace conversion in
the IPUv3 base driver for a while. Since the IC hardware can only write
up to 1024x1024 pixel buffers, it scales to larger output buffers by
splitting the input and output frame into similarly sized tiles.

This causes the issue that the bilinear interpolation resets at the tile
boundary: instead of smoothly interpolating across the seam, there is a
jump in the input sample position that is very apparent for high
upscaling factors. This can be avoided by slightly changing the scaling
coefficients to let the left/top tiles overshoot their input sampling
into the first pixel / line of their right / bottom neighbors. The error
can be further reduced by letting tiles be differently sized and by
selecting seam positions that minimize the input sampling position error
at tile boundaries.
This is complicated by different DMA start address, burst size, and
rotator block size alignment requirements, depending on the input and
output pixel formats, and the fact that flipping happens in different
places depending on the rotation.
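
To make the sampling error described above concrete, here is a toy
calculation (hypothetical 2x upscale, tiny 5-pixel output tiles, 8192
fixed-point units per input pixel): the half-pixel offset that appears from
the second tile onwards is exactly the seam that the per-tile coefficients
in this series hide.

#include <stdio.h>

int main(void)
{
	unsigned int resize_coeff = 4096;	/* 8192/4096 = 2x upscale */
	unsigned int tile_out_w = 5;		/* tiny output tiles for readability */
	/* the second input tile can only start at an integer pixel: 2, not 2.5 */
	unsigned int in_tile_start = (tile_out_w * resize_coeff) / 8192;
	unsigned int out_x;

	for (out_x = tile_out_w - 2; out_x < tile_out_w + 2; out_x++) {
		unsigned int ideal = out_x * resize_coeff;
		unsigned int tile = out_x / tile_out_w;
		unsigned int reset = tile * in_tile_start * 8192 +
				     (out_x % tile_out_w) * resize_coeff;

		printf("out %u: ideal in %u.%u  with per-tile reset %u.%u\n",
		       out_x, ideal / 8192, (ideal % 8192) * 10 / 8192,
		       reset / 8192, (reset % 8192) * 10 / 8192);
	}
	return 0;
}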

This series implements optimal seam position selection and seam hiding
with per-tile resizing coefficients and adds a scaling mem2mem device
to the imx-media driver.

regards
Philipp

Philipp Zabel (15):
  media: imx: add mem2mem device
  gpu: ipu-v3: ipu-ic: allow to manually set resize coefficients
  gpu: ipu-v3: image-convert: prepare for per-tile configuration
  gpu: ipu-v3: image-convert: calculate per-tile resize coefficients
  gpu: ipu-v3: image-convert: reconfigure IC per tile
  gpu: ipu-v3: image-convert: store tile top/left position
  gpu: ipu-v3: image-convert: calculate tile dimensions and offsets
outside fill_image
  gpu: ipu-v3: image-convert: move tile alignment helpers
  gpu: ipu-v3: image-convert: select optimal seam positions
  gpu: ipu-v3: image-convert: fix debug output for varying tile sizes
  gpu: ipu-v3: image-convert: relax alignment restrictions
  gpu: ipu-v3: image-convert: fix bytesperline adjustment
  gpu: ipu-v3: image-convert: add some ASCII art to the exposition
  gpu: ipu-v3: image-convert: disable double buffering if necessary
  gpu: ipu-v3: image-convert: allow three rows or columns

Steve Longerbeam (7):
  gpu: ipu-cpmem: add WARN_ON_ONCE() for unaligned dma buffers
  gpu: ipu-v3: Add chroma plane offset overrides to
ipu_cpmem_set_image()
  gpu: ipu-v3: image-convert: Prevent race between run and unprepare
  gpu: ipu-v3: image-convert: Only wait for abort completion if active
run
  gpu: ipu-v3: image-convert: Allow reentrancy into abort
  gpu: ipu-v3: image-convert: Remove need_abort flag
  gpu: ipu-v3: image-convert: Catch unaligned tile offsets

 drivers/gpu/ipu-v3/ipu-cpmem.c|   52 +-
 drivers/gpu/ipu-v3/ipu-ic.c   |   52 +-
 drivers/gpu/ipu-v3/ipu-image-convert.c| 1019 ++---
 drivers/staging/media/imx/Kconfig |1 +
 drivers/staging/media/imx/Makefile|1 +
 drivers/staging/media/imx/imx-media-dev.c |   11 +
 drivers/staging/media/imx/imx-media-mem2mem.c |  873 ++
 

[PATCH v4 12/22] gpu: ipu-v3: image-convert: reconfigure IC per tile

2018-10-19 Thread Philipp Zabel
For differently sized tiles or if the resizing coefficients change,
we have to stop, reconfigure, and restart the IC between tiles.

Signed-off-by: Philipp Zabel 
---
No changes since v3.
---
 drivers/gpu/ipu-v3/ipu-image-convert.c | 65 +-
 1 file changed, 44 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c 
b/drivers/gpu/ipu-v3/ipu-image-convert.c
index 31e7186bcc00..cb47981741b4 100644
--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
+++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
@@ -1151,6 +1151,24 @@ static irqreturn_t do_bh(int irq, void *dev_id)
return IRQ_HANDLED;
 }
 
+static bool ic_settings_changed(struct ipu_image_convert_ctx *ctx)
+{
+   unsigned int cur_tile = ctx->next_tile - 1;
+   unsigned int next_tile = ctx->next_tile;
+
+   if (ctx->resize_coeffs_h[cur_tile % ctx->in.num_cols] !=
+   ctx->resize_coeffs_h[next_tile % ctx->in.num_cols] ||
+   ctx->resize_coeffs_v[cur_tile / ctx->in.num_cols] !=
+   ctx->resize_coeffs_v[next_tile / ctx->in.num_cols] ||
+   ctx->in.tile[cur_tile].width != ctx->in.tile[next_tile].width ||
+   ctx->in.tile[cur_tile].height != ctx->in.tile[next_tile].height ||
+   ctx->out.tile[cur_tile].width != ctx->out.tile[next_tile].width ||
+   ctx->out.tile[cur_tile].height != ctx->out.tile[next_tile].height)
+   return true;
+
+   return false;
+}
+
 /* hold irqlock when calling */
 static irqreturn_t do_irq(struct ipu_image_convert_run *run)
 {
@@ -1194,27 +1212,32 @@ static irqreturn_t do_irq(struct ipu_image_convert_run 
*run)
 * not done, place the next tile buffers.
 */
if (!ctx->double_buffering) {
-
-   src_tile = &s_image->tile[ctx->next_tile];
-   dst_idx = ctx->out_tile_map[ctx->next_tile];
-   dst_tile = &d_image->tile[dst_idx];
-
-   ipu_cpmem_set_buffer(chan->in_chan, 0,
-s_image->base.phys0 + src_tile->offset);
-   ipu_cpmem_set_buffer(outch, 0,
-d_image->base.phys0 + dst_tile->offset);
-   if (s_image->fmt->planar)
-   ipu_cpmem_set_uv_offset(chan->in_chan,
-   src_tile->u_off,
-   src_tile->v_off);
-   if (d_image->fmt->planar)
-   ipu_cpmem_set_uv_offset(outch,
-   dst_tile->u_off,
-   dst_tile->v_off);
-
-   ipu_idmac_select_buffer(chan->in_chan, 0);
-   ipu_idmac_select_buffer(outch, 0);
-
+   if (ic_settings_changed(ctx)) {
+   convert_stop(run);
+   convert_start(run, ctx->next_tile);
+   } else {
+   src_tile = &s_image->tile[ctx->next_tile];
+   dst_idx = ctx->out_tile_map[ctx->next_tile];
+   dst_tile = &d_image->tile[dst_idx];
+
+   ipu_cpmem_set_buffer(chan->in_chan, 0,
+s_image->base.phys0 +
+src_tile->offset);
+   ipu_cpmem_set_buffer(outch, 0,
+d_image->base.phys0 +
+dst_tile->offset);
+   if (s_image->fmt->planar)
+   ipu_cpmem_set_uv_offset(chan->in_chan,
+   src_tile->u_off,
+   src_tile->v_off);
+   if (d_image->fmt->planar)
+   ipu_cpmem_set_uv_offset(outch,
+   dst_tile->u_off,
+   dst_tile->v_off);
+
+   ipu_idmac_select_buffer(chan->in_chan, 0);
+   ipu_idmac_select_buffer(outch, 0);
+   }
} else if (ctx->next_tile < ctx->num_tiles - 1) {
 
		src_tile = &s_image->tile[ctx->next_tile + 1];
-- 
2.19.0



[PATCH v4 18/22] gpu: ipu-v3: image-convert: relax alignment restrictions

2018-10-19 Thread Philipp Zabel
For the planar but U/V-packed formats NV12 and NV16, 8 pixel width
alignment is good enough to fulfill the 8 byte stride requirement.
If we allow the input 8-pixel DMA bursts to overshoot the end of the
line, the only input alignment restrictions are dictated by the pixel
format and 8-byte aligned line start address.
Since different tile sizes are allowed, the output tile width / height
alignment doesn't need to be multiplied by the number of columns / rows.
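
Spelled out for a few common cases with the IRT active, the relaxed output
width alignment works out as follows (a small sketch that merely restates
the rule from the hunk below; the per-format parameters mirror the driver's
format table):

#include <stdio.h>

struct fmt {
	const char *name;
	int planar;
	int uv_packed;
	int uv_width_dec;
};

int main(void)
{
	struct fmt fmts[] = {
		{ "YUV420", 1, 0, 2 },	/* fully planar, 4:2:0 */
		{ "NV12",   1, 1, 2 },	/* planar but U/V packed */
		{ "RGB565", 0, 0, 1 },	/* packed */
	};
	unsigned int i;

	for (i = 0; i < sizeof(fmts) / sizeof(fmts[0]); i++) {
		unsigned int align = (fmts[i].planar && !fmts[i].uv_packed) ?
				     8 * fmts[i].uv_width_dec : 8;

		printf("%s: output tile width align %u pixels\n",
		       fmts[i].name, align);
	}
	return 0;
}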

Signed-off-by: Philipp Zabel 
[slongerb...@gmail.com: Bring in the fixes to format width and
 height alignment restrictions from imx-media-mem2mem.c.]
Signed-off-by: Steve Longerbeam 
---
No changes since v3.
---
 drivers/gpu/ipu-v3/ipu-image-convert.c | 81 +-
 1 file changed, 41 insertions(+), 40 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c 
b/drivers/gpu/ipu-v3/ipu-image-convert.c
index 0451d699f515..0829723a7599 100644
--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
+++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
@@ -551,31 +551,46 @@ static inline u32 tile_top_align(const struct 
ipu_image_pixfmt *fmt)
return fmt->uv_height_dec > 1 ? 2 : 1;
 }
 
-/*
- * We have to adjust the tile width such that the tile physaddrs and
- * U and V plane offsets are multiples of 8 bytes as required by
- * the IPU DMA Controller. For the planar formats, this corresponds
- * to a pixel alignment of 16 (but use a more formal equation since
- * the variables are available). For all the packed formats, 8 is
- * good enough.
- */
-static inline u32 tile_width_align(const struct ipu_image_pixfmt *fmt)
+static inline u32 tile_width_align(enum ipu_image_convert_type type,
+  const struct ipu_image_pixfmt *fmt,
+  enum ipu_rotate_mode rot_mode)
 {
-   return fmt->planar ? 8 * fmt->uv_width_dec : 8;
+   if (type == IMAGE_CONVERT_IN) {
+   /*
+* The IC burst reads 8 pixels at a time. Reading beyond the
+* end of the line is usually acceptable. Those pixels are
+* ignored, unless the IC has to write the scaled line in
+* reverse.
+*/
+   return (!ipu_rot_mode_is_irt(rot_mode) &&
+   (rot_mode & IPU_ROT_BIT_HFLIP)) ? 8 : 2;
+   }
+
+   /*
+* Align to 16x16 pixel blocks for planar 4:2:0 chroma subsampled
+* formats to guarantee 8-byte aligned line start addresses in the
+* chroma planes when IRT is used. Align to 8x8 pixel IRT block size
+* for all other formats.
+*/
+   return (ipu_rot_mode_is_irt(rot_mode) &&
+   fmt->planar && !fmt->uv_packed) ?
+   8 * fmt->uv_width_dec : 8;
 }
 
-/*
- * For tile height alignment, we have to ensure that the output tile
- * heights are multiples of 8 lines if the IRT is required by the
- * given rotation mode (the IRT performs rotations on 8x8 blocks
- * at a time). If the IRT is not used, or for input image tiles,
- * 2 lines are good enough.
- */
 static inline u32 tile_height_align(enum ipu_image_convert_type type,
+   const struct ipu_image_pixfmt *fmt,
enum ipu_rotate_mode rot_mode)
 {
-   return (type == IMAGE_CONVERT_OUT &&
-   ipu_rot_mode_is_irt(rot_mode)) ? 8 : 2;
+   if (type == IMAGE_CONVERT_IN || !ipu_rot_mode_is_irt(rot_mode))
+   return 2;
+
+   /*
+* Align to 16x16 pixel blocks for planar 4:2:0 chroma subsampled
+* formats to guarantee 8-byte aligned line start addresses in the
+* chroma planes when IRT is used. Align to 8x8 pixel IRT block size
+* for all other formats.
+*/
+   return (fmt->planar && !fmt->uv_packed) ? 8 * fmt->uv_width_dec : 8;
 }
 
 /*
@@ -661,8 +676,9 @@ static void find_seams(struct ipu_image_convert_ctx *ctx,
unsigned int in_top_align = tile_top_align(in->fmt);
unsigned int out_left_align = tile_left_align(out->fmt);
unsigned int out_top_align = tile_top_align(out->fmt);
-   unsigned int out_width_align = tile_width_align(out->fmt);
-   unsigned int out_height_align = tile_height_align(out->type,
+   unsigned int out_width_align = tile_width_align(out->type, out->fmt,
+   ctx->rot_mode);
+   unsigned int out_height_align = tile_height_align(out->type, out->fmt,
  ctx->rot_mode);
unsigned int in_right = in->base.rect.width;
unsigned int in_bottom = in->base.rect.height;
@@ -1855,8 +1871,6 @@ void ipu_image_convert_adjust(struct ipu_image *in, 
struct ipu_image *out,
  enum ipu_rotate_mode rot_mode)
 {
const struct ipu_image_pixfmt *infmt, *outfmt;
-   unsigned int num_in_rows, num_in_cols;
-   unsigned int num_out_rows, num_out_cols;
u32 

[PATCH v4 07/22] gpu: ipu-v3: image-convert: Allow reentrancy into abort

2018-10-19 Thread Philipp Zabel
From: Steve Longerbeam 

Allow reentrancy into ipu_image_convert_abort(), by moving the re-init
of the ctx->aborted completion under the spin lock, doing it only if
there is an active run, and completing all waiters in do_bh(). Note that
ipu_image_convert_unprepare() is still _not_ reentrant, and can't
be made reentrant.

Signed-off-by: Steve Longerbeam 
---
New since v3.
---
 drivers/gpu/ipu-v3/ipu-image-convert.c | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c 
b/drivers/gpu/ipu-v3/ipu-image-convert.c
index e3e032252604..abd8afb22b48 100644
--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
+++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
@@ -896,7 +896,7 @@ static irqreturn_t do_bh(int irq, void *dev_id)
dev_dbg(priv->ipu->dev,
"%s: task %u: signaling abort for ctx %p\n",
__func__, chan->ic_task, ctx);
-   complete(&ctx->aborted);
+   complete_all(&ctx->aborted);
}
}
 
@@ -1533,8 +1533,6 @@ static void __ipu_image_convert_abort(struct 
ipu_image_convert_ctx *ctx)
int run_count, ret;
bool need_abort;
 
-   reinit_completion(&ctx->aborted);
-
	spin_lock_irqsave(&chan->irqlock, flags);
 
/* move all remaining pending runs in this context to done_q */
@@ -1549,6 +1547,9 @@ static void __ipu_image_convert_abort(struct 
ipu_image_convert_ctx *ctx)
active_run = (chan->current_run && chan->current_run->ctx == ctx) ?
chan->current_run : NULL;
 
+   if (active_run)
+   reinit_completion(&ctx->aborted);
+
need_abort = (run_count || active_run);
 
ctx->aborting = true;
-- 
2.19.0



[PATCH v4 04/22] gpu: ipu-v3: ipu-ic: allow to manually set resize coefficients

2018-10-19 Thread Philipp Zabel
For tiled scaling, we want to compute the scaling coefficients
externally in such a way that the interpolation overshoots tile
boundaries and samples up to the first pixel of the next tile.
Prepare to override the resizing coefficients from the image
conversion code.
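
A user-space sketch of how a caller would pack a manual rsc word for the
new ipu_ic_task_init_rsc(), following the same field layout used below
(vertical downsize/resize coefficients in bits 31:30 and 29:16, horizontal
in bits 15:14 and 13:0); the coefficient values are made up:

#include <stdint.h>
#include <stdio.h>

static uint32_t pack_rsc(uint32_t downsize_v, uint32_t resize_v,
			 uint32_t downsize_h, uint32_t resize_h)
{
	return (downsize_v << 30) | (resize_v << 16) |
	       (downsize_h << 14) | resize_h;
}

int main(void)
{
	/* e.g. no downsizing, 2x upscale in both directions: 8192/4096 = 2 */
	uint32_t rsc = pack_rsc(0, 4096, 0, 4096);

	printf("rsc = 0x%08x\n", rsc);
	return 0;
}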

Signed-off-by: Philipp Zabel 
---
No changes since v3.
---
 drivers/gpu/ipu-v3/ipu-ic.c | 52 +++--
 include/video/imx-ipu-v3.h  |  6 +
 2 files changed, 39 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-ic.c b/drivers/gpu/ipu-v3/ipu-ic.c
index 67cc820253a9..594c3cbc8291 100644
--- a/drivers/gpu/ipu-v3/ipu-ic.c
+++ b/drivers/gpu/ipu-v3/ipu-ic.c
@@ -442,36 +442,40 @@ int ipu_ic_task_graphics_init(struct ipu_ic *ic,
 }
 EXPORT_SYMBOL_GPL(ipu_ic_task_graphics_init);
 
-int ipu_ic_task_init(struct ipu_ic *ic,
-int in_width, int in_height,
-int out_width, int out_height,
-enum ipu_color_space in_cs,
-enum ipu_color_space out_cs)
+int ipu_ic_task_init_rsc(struct ipu_ic *ic,
+int in_width, int in_height,
+int out_width, int out_height,
+enum ipu_color_space in_cs,
+enum ipu_color_space out_cs,
+u32 rsc)
 {
struct ipu_ic_priv *priv = ic->priv;
-   u32 reg, downsize_coeff, resize_coeff;
+   u32 downsize_coeff, resize_coeff;
unsigned long flags;
int ret = 0;
 
-   /* Setup vertical resizing */
-   ret = calc_resize_coeffs(ic, in_height, out_height,
-				 &downsize_coeff, &resize_coeff);
-   if (ret)
-   return ret;
+   if (!rsc) {
+   /* Setup vertical resizing */
 
-   reg = (downsize_coeff << 30) | (resize_coeff << 16);
+   ret = calc_resize_coeffs(ic, in_height, out_height,
+					 &downsize_coeff, &resize_coeff);
+   if (ret)
+   return ret;
+
+   rsc = (downsize_coeff << 30) | (resize_coeff << 16);
 
-   /* Setup horizontal resizing */
-   ret = calc_resize_coeffs(ic, in_width, out_width,
-				 &downsize_coeff, &resize_coeff);
-   if (ret)
-   return ret;
+   /* Setup horizontal resizing */
+   ret = calc_resize_coeffs(ic, in_width, out_width,
+_coeff, _coeff);
+   if (ret)
+   return ret;
 
-   reg |= (downsize_coeff << 14) | resize_coeff;
+   rsc |= (downsize_coeff << 14) | resize_coeff;
+   }
 
	spin_lock_irqsave(&priv->lock, flags);
 
-   ipu_ic_write(ic, reg, ic->reg->rsc);
+   ipu_ic_write(ic, rsc, ic->reg->rsc);
 
/* Setup color space conversion */
ic->in_cs = in_cs;
@@ -487,6 +491,16 @@ int ipu_ic_task_init(struct ipu_ic *ic,
	spin_unlock_irqrestore(&priv->lock, flags);
return ret;
 }
+
+int ipu_ic_task_init(struct ipu_ic *ic,
+int in_width, int in_height,
+int out_width, int out_height,
+enum ipu_color_space in_cs,
+enum ipu_color_space out_cs)
+{
+   return ipu_ic_task_init_rsc(ic, in_width, in_height, out_width,
+   out_height, in_cs, out_cs, 0);
+}
 EXPORT_SYMBOL_GPL(ipu_ic_task_init);
 
 int ipu_ic_task_idma_init(struct ipu_ic *ic, struct ipuv3_channel *channel,
diff --git a/include/video/imx-ipu-v3.h b/include/video/imx-ipu-v3.h
index 8bb163cd9314..e582e8e7527a 100644
--- a/include/video/imx-ipu-v3.h
+++ b/include/video/imx-ipu-v3.h
@@ -390,6 +390,12 @@ int ipu_ic_task_init(struct ipu_ic *ic,
 int out_width, int out_height,
 enum ipu_color_space in_cs,
 enum ipu_color_space out_cs);
+int ipu_ic_task_init_rsc(struct ipu_ic *ic,
+int in_width, int in_height,
+int out_width, int out_height,
+enum ipu_color_space in_cs,
+enum ipu_color_space out_cs,
+u32 rsc);
 int ipu_ic_task_graphics_init(struct ipu_ic *ic,
  enum ipu_color_space in_g_cs,
  bool galpha_en, u32 galpha,
-- 
2.19.0



[PATCH v4 11/22] gpu: ipu-v3: image-convert: calculate per-tile resize coefficients

2018-10-19 Thread Philipp Zabel
Slightly modifying the resize coefficients per tile makes it possible to
completely hide the seams between tiles and to sample the correct input
pixels at the bottom and right edges of the image.

Tiling requires a bilinear interpolator reset at each tile start, which
causes the image to be slightly shifted if the starting pixel should not
have been sampled from an integer pixel position in the source image
according to the full image resizing ratio. To work around this
hardware limitation, calculate per-tile resizing coefficients that make
sure that the correct input pixels are sampled at the tile end.
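
For the whole-image coefficient, the formula used below keeps the last
output pixel from overshooting the last input pixel; a quick stand-alone
check with hypothetical sizes:

#include <stdio.h>

int main(void)
{
	/* hypothetical conversion: 640 input pixels upscaled to 1024 output pixels */
	unsigned int in_w = 640, out_w = 1024;
	unsigned int resize_coeff = 8192 * (in_w - 1) / (out_w - 1);
	unsigned int last_sample = (out_w - 1) * resize_coeff;	/* 19.13 fixed point */

	printf("resize_coeff %u (ratio %u.%03u)\n", resize_coeff,
	       8192 / resize_coeff, (8192 % resize_coeff) * 1000 / resize_coeff);
	printf("last output pixel samples input %u.%03u (input edge is %u)\n",
	       last_sample / 8192, (last_sample % 8192) * 1000 / 8192, in_w - 1);
	return 0;
}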

Signed-off-by: Philipp Zabel 
---
No changes since v3.
---
 drivers/gpu/ipu-v3/ipu-image-convert.c | 236 -
 1 file changed, 234 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c 
b/drivers/gpu/ipu-v3/ipu-image-convert.c
index c2f82d681c48..31e7186bcc00 100644
--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
+++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
@@ -135,6 +135,12 @@ struct ipu_image_convert_ctx {
struct ipu_image_convert_image in;
struct ipu_image_convert_image out;
enum ipu_rotate_mode rot_mode;
+   u32 downsize_coeff_h;
+   u32 downsize_coeff_v;
+   u32 image_resize_coeff_h;
+   u32 image_resize_coeff_v;
+   u32 resize_coeffs_h[MAX_STRIPES_W];
+   u32 resize_coeffs_v[MAX_STRIPES_H];
 
/* intermediate buffer for rotation */
struct ipu_image_convert_dma_buf rot_intermediate[2];
@@ -361,6 +367,69 @@ static inline int num_stripes(int dim)
return 4;
 }
 
+/*
+ * Calculate downsizing coefficients, which are the same for all tiles,
+ * and bilinear resizing coefficients, which are used to find the best
+ * seam positions.
+ */
+static int calc_image_resize_coefficients(struct ipu_image_convert_ctx *ctx,
+ struct ipu_image *in,
+ struct ipu_image *out)
+{
+   u32 downsized_width = in->rect.width;
+   u32 downsized_height = in->rect.height;
+   u32 downsize_coeff_v = 0;
+   u32 downsize_coeff_h = 0;
+   u32 resized_width = out->rect.width;
+   u32 resized_height = out->rect.height;
+   u32 resize_coeff_h;
+   u32 resize_coeff_v;
+
+   if (ipu_rot_mode_is_irt(ctx->rot_mode)) {
+   resized_width = out->rect.height;
+   resized_height = out->rect.width;
+   }
+
+   /* Do not let invalid input lead to an endless loop below */
+   if (WARN_ON(resized_width == 0 || resized_height == 0))
+   return -EINVAL;
+
+   while (downsized_width >= resized_width * 2) {
+   downsized_width >>= 1;
+   downsize_coeff_h++;
+   }
+
+   while (downsized_height >= resized_height * 2) {
+   downsized_height >>= 1;
+   downsize_coeff_v++;
+   }
+
+   /*
+* Calculate the bilinear resizing coefficients that could be used if
+* we were converting with a single tile. The bottom right output pixel
+* should sample as close as possible to the bottom right input pixel
+* out of the decimator, but not overshoot it:
+*/
+   resize_coeff_h = 8192 * (downsized_width - 1) / (resized_width - 1);
+   resize_coeff_v = 8192 * (downsized_height - 1) / (resized_height - 1);
+
+   dev_dbg(ctx->chan->priv->ipu->dev,
+   "%s: hscale: >>%u, *8192/%u vscale: >>%u, *8192/%u, %ux%u 
tiles\n",
+   __func__, downsize_coeff_h, resize_coeff_h, downsize_coeff_v,
+   resize_coeff_v, ctx->in.num_cols, ctx->in.num_rows);
+
+   if (downsize_coeff_h > 2 || downsize_coeff_v  > 2 ||
+   resize_coeff_h > 0x3fff || resize_coeff_v > 0x3fff)
+   return -EINVAL;
+
+   ctx->downsize_coeff_h = downsize_coeff_h;
+   ctx->downsize_coeff_v = downsize_coeff_v;
+   ctx->image_resize_coeff_h = resize_coeff_h;
+   ctx->image_resize_coeff_v = resize_coeff_v;
+
+   return 0;
+}
+
 static void calc_tile_dimensions(struct ipu_image_convert_ctx *ctx,
 struct ipu_image_convert_image *image)
 {
@@ -578,6 +647,149 @@ static int calc_tile_offsets(struct ipu_image_convert_ctx 
*ctx,
return calc_tile_offsets_packed(ctx, image);
 }
 
+/*
+ * Calculate the resizing ratio for the IC main processing section given input
+ * size, fixed downsizing coefficient, and output size.
+ * Either round to closest for the next tile's first pixel to minimize seams
+ * and distortion (for all but right column / bottom row), or round down to
+ * avoid sampling beyond the edges of the input image for this tile's last
+ * pixel.
+ * Returns the resizing coefficient, resizing ratio is 8192.0 / resize_coeff.
+ */
+static u32 calc_resize_coeff(u32 input_size, u32 downsize_coeff,
+u32 output_size, bool allow_overshoot)
+{
+   u32 downsized = input_size >> downsize_coeff;
+

[PATCH v4 09/22] gpu: ipu-v3: image-convert: Catch unaligned tile offsets

2018-10-19 Thread Philipp Zabel
From: Steve Longerbeam 

Catch calculated tile offsets that are not 8-byte aligned, as required by the
IDMAC engine, and return an error from calc_tile_offsets().

Signed-off-by: Steve Longerbeam 
---
New since v3.
---
 drivers/gpu/ipu-v3/ipu-image-convert.c | 61 --
 1 file changed, 37 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c 
b/drivers/gpu/ipu-v3/ipu-image-convert.c
index b8a400182a00..5fccba176e39 100644
--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
+++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
@@ -459,8 +459,8 @@ static void calc_out_tile_map(struct ipu_image_convert_ctx 
*ctx)
}
 }
 
-static void calc_tile_offsets_planar(struct ipu_image_convert_ctx *ctx,
-struct ipu_image_convert_image *image)
+static int calc_tile_offsets_planar(struct ipu_image_convert_ctx *ctx,
+   struct ipu_image_convert_image *image)
 {
struct ipu_image_convert_chan *chan = ctx->chan;
struct ipu_image_convert_priv *priv = chan->priv;
@@ -509,24 +509,30 @@ static void calc_tile_offsets_planar(struct 
ipu_image_convert_ctx *ctx,
image->tile[tile].u_off = u_off;
image->tile[tile++].v_off = v_off;
 
-   dev_dbg(priv->ipu->dev,
-   "task %u: ctx %p: %s@[%d,%d]: y_off %08x, u_off 
%08x, v_off %08x\n",
-   chan->ic_task, ctx,
-   image->type == IMAGE_CONVERT_IN ?
-   "Input" : "Output", row, col,
-   y_off, u_off, v_off);
+   if ((y_off & 0x7) || (u_off & 0x7) || (v_off & 0x7)) {
+   dev_err(priv->ipu->dev,
+   "task %u: ctx %p: %s@[%d,%d]: "
+   "y_off %08x, u_off %08x, v_off %08x\n",
+   chan->ic_task, ctx,
+   image->type == IMAGE_CONVERT_IN ?
+   "Input" : "Output", row, col,
+   y_off, u_off, v_off);
+   return -EINVAL;
+   }
}
}
+
+   return 0;
 }
 
-static void calc_tile_offsets_packed(struct ipu_image_convert_ctx *ctx,
-struct ipu_image_convert_image *image)
+static int calc_tile_offsets_packed(struct ipu_image_convert_ctx *ctx,
+   struct ipu_image_convert_image *image)
 {
struct ipu_image_convert_chan *chan = ctx->chan;
struct ipu_image_convert_priv *priv = chan->priv;
const struct ipu_image_pixfmt *fmt = image->fmt;
unsigned int row, col, tile = 0;
-   u32 w, h, bpp, stride;
+   u32 w, h, bpp, stride, offset;
u32 row_off, col_off;
 
/* setup some convenience vars */
@@ -541,27 +547,35 @@ static void calc_tile_offsets_packed(struct 
ipu_image_convert_ctx *ctx,
for (col = 0; col < image->num_cols; col++) {
col_off = (col * w * bpp) >> 3;
 
-   image->tile[tile].offset = row_off + col_off;
+   offset = row_off + col_off;
+
+   image->tile[tile].offset = offset;
image->tile[tile].u_off = 0;
image->tile[tile++].v_off = 0;
 
-   dev_dbg(priv->ipu->dev,
-   "task %u: ctx %p: %s@[%d,%d]: phys %08x\n",
-   chan->ic_task, ctx,
-   image->type == IMAGE_CONVERT_IN ?
-   "Input" : "Output", row, col,
-   row_off + col_off);
+   if (offset & 0x7) {
+   dev_err(priv->ipu->dev,
+   "task %u: ctx %p: %s@[%d,%d]: "
+   "phys %08x\n",
+   chan->ic_task, ctx,
+   image->type == IMAGE_CONVERT_IN ?
+   "Input" : "Output", row, col,
+   row_off + col_off);
+   return -EINVAL;
+   }
}
}
+
+   return 0;
 }
 
-static void calc_tile_offsets(struct ipu_image_convert_ctx *ctx,
+static int calc_tile_offsets(struct ipu_image_convert_ctx *ctx,
  struct ipu_image_convert_image *image)
 {
if (image->fmt->planar)
-   calc_tile_offsets_planar(ctx, image);
-   else
-   calc_tile_offsets_packed(ctx, image);
+   return calc_tile_offsets_planar(ctx, image);
+
+   return calc_tile_offsets_packed(ctx, image);
 }
 
 /*
@@ -1199,9 +1213,8 @@ 

[PATCH v4 20/22] gpu: ipu-v3: image-convert: add some ASCII art to the exposition

2018-10-19 Thread Philipp Zabel
Visualize the scaling and rotation pipeline with some ASCII art
diagrams. Remove the FIXME comment about missing seam prevention.

Signed-off-by: Philipp Zabel 
---
No changes since v3.
---
 drivers/gpu/ipu-v3/ipu-image-convert.c | 39 +++---
 1 file changed, 29 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c 
b/drivers/gpu/ipu-v3/ipu-image-convert.c
index b735065fe288..91fe8f1672b4 100644
--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
+++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
@@ -37,17 +37,36 @@
  * when double_buffering boolean is set).
  *
  * Note that the input frame must be split up into the same number
- * of tiles as the output frame.
+ * of tiles as the output frame:
  *
- * FIXME: at this point there is no attempt to deal with visible seams
- * at the tile boundaries when upscaling. The seams are caused by a reset
- * of the bilinear upscale interpolation when starting a new tile. The
- * seams are barely visible for small upscale factors, but become
- * increasingly visible as the upscale factor gets larger, since more
- * interpolated pixels get thrown out at the tile boundaries. A possilble
- * fix might be to overlap tiles of different sizes, but this must be done
- * while also maintaining the IDMAC dma buffer address alignment and 8x8 IRT
- * alignment restrictions of each tile.
+ *                     +---------+-----+
+ *   +-----+---+       |  A      | B   |
+ *   | A   | B |       |         |     |
+ *   +-----+---+  -->  +---------+-----+
+ *   | C   | D |       |  C      | D   |
+ *   +-----+---+       |         |     |
+ *                     +---------+-----+
+ *
+ * Clockwise 90° rotations are handled by first rescaling into a
+ * reusable temporary tile buffer and then rotating with the 8x8
+ * block rotator, writing to the correct destination:
+ *
+ *                                      +-----+-----+
+ *                                      |     |     |
+ *   +-----+---+      +---------+       | C   | A   |
+ *   | A   | B |      | A,B,    |       |     |     |
+ *   +-----+---+  --> | C,D     |  -->  |     |     |
+ *   | C   | D |      +---------+       +-----+-----+
+ *   +-----+---+                        | D   | B   |
+ *                                      |     |     |
+ *                                      +-----+-----+
+ *
+ * If the 8x8 block rotator is used, horizontal or vertical flipping
+ * is done during the rotation step, otherwise flipping is done
+ * during the scaling step.
+ * With rotation or flipping, tile order changes between input and
+ * output image. Tiles are numbered row major from top left to bottom
+ * right for both input and output image.
  */
 
 #define MAX_STRIPES_W4
-- 
2.19.0



[PATCH v4 05/22] gpu: ipu-v3: image-convert: Prevent race between run and unprepare

2018-10-19 Thread Philipp Zabel
From: Steve Longerbeam 

Prevent a possible race between parallel threads calling ipu_image_convert_run()
and ipu_image_convert_unprepare(). This involves setting ctx->aborting
to true unconditionally so that no new job runs can be queued during
unprepare, and holding the ctx->aborting flag until the context is freed.

Note that the "normal" ipu_image_convert_abort() case (e.g. not during
context unprepare) should clear the ctx->aborting flag after aborting
any active run and clearing the context's pending queue. This is because
it should be possible to continue to use the conversion context and queue
more runs after an abort.

Signed-off-by: Steve Longerbeam 
---
New since v3.
---
 drivers/gpu/ipu-v3/ipu-image-convert.c | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c 
b/drivers/gpu/ipu-v3/ipu-image-convert.c
index 41fb62b88c54..6c15bf8efaa2 100644
--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
+++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
@@ -1524,7 +1524,7 @@ int ipu_image_convert_queue(struct ipu_image_convert_run 
*run)
 EXPORT_SYMBOL_GPL(ipu_image_convert_queue);
 
 /* Abort any active or pending conversions for this context */
-void ipu_image_convert_abort(struct ipu_image_convert_ctx *ctx)
+static void __ipu_image_convert_abort(struct ipu_image_convert_ctx *ctx)
 {
struct ipu_image_convert_chan *chan = ctx->chan;
struct ipu_image_convert_priv *priv = chan->priv;
@@ -1551,7 +1551,7 @@ void ipu_image_convert_abort(struct ipu_image_convert_ctx 
*ctx)
 
need_abort = (run_count || active_run);
 
-   ctx->aborting = need_abort;
+   ctx->aborting = true;
 
	spin_unlock_irqrestore(&chan->irqlock, flags);
 
@@ -1572,7 +1572,11 @@ void ipu_image_convert_abort(struct 
ipu_image_convert_ctx *ctx)
dev_warn(priv->ipu->dev, "%s: timeout\n", __func__);
force_abort(ctx);
}
+}
 
+void ipu_image_convert_abort(struct ipu_image_convert_ctx *ctx)
+{
+   __ipu_image_convert_abort(ctx);
ctx->aborting = false;
 }
 EXPORT_SYMBOL_GPL(ipu_image_convert_abort);
@@ -1586,7 +1590,7 @@ void ipu_image_convert_unprepare(struct 
ipu_image_convert_ctx *ctx)
bool put_res;
 
/* make sure no runs are hanging around */
-   ipu_image_convert_abort(ctx);
+   __ipu_image_convert_abort(ctx);
 
dev_dbg(priv->ipu->dev, "%s: task %u: removing ctx %p\n", __func__,
chan->ic_task, ctx);
-- 
2.19.0



[PATCH v4 17/22] gpu: ipu-v3: image-convert: fix debug output for varying tile sizes

2018-10-19 Thread Philipp Zabel
Since tile dimensions now vary between tiles, add debug output for each
tile's position and dimensions.

Signed-off-by: Philipp Zabel 
---
No changes since v3.
---
 drivers/gpu/ipu-v3/ipu-image-convert.c | 12 ++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c 
b/drivers/gpu/ipu-v3/ipu-image-convert.c
index a674241dd0b8..0451d699f515 100644
--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
+++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
@@ -308,12 +308,11 @@ static void dump_format(struct ipu_image_convert_ctx *ctx,
struct ipu_image_convert_priv *priv = chan->priv;
 
dev_dbg(priv->ipu->dev,
-   "task %u: ctx %p: %s format: %dx%d (%dx%d tiles of size %dx%d), 
%c%c%c%c\n",
+   "task %u: ctx %p: %s format: %dx%d (%dx%d tiles), %c%c%c%c\n",
chan->ic_task, ctx,
ic_image->type == IMAGE_CONVERT_OUT ? "Output" : "Input",
ic_image->base.pix.width, ic_image->base.pix.height,
ic_image->num_cols, ic_image->num_rows,
-   ic_image->tile[0].width, ic_image->tile[0].height,
ic_image->fmt->fourcc & 0xff,
(ic_image->fmt->fourcc >> 8) & 0xff,
(ic_image->fmt->fourcc >> 16) & 0xff,
@@ -789,6 +788,8 @@ static void find_seams(struct ipu_image_convert_ctx *ctx,
 static void calc_tile_dimensions(struct ipu_image_convert_ctx *ctx,
 struct ipu_image_convert_image *image)
 {
+   struct ipu_image_convert_chan *chan = ctx->chan;
+   struct ipu_image_convert_priv *priv = chan->priv;
unsigned int i;
 
for (i = 0; i < ctx->num_tiles; i++) {
@@ -813,6 +814,13 @@ static void calc_tile_dimensions(struct 
ipu_image_convert_ctx *ctx,
tile->rot_stride =
(image->fmt->bpp * tile->height) >> 3;
}
+
+   dev_dbg(priv->ipu->dev,
+   "task %u: ctx %p: %s@[%u,%u]: %ux%u@%u,%u\n",
+   chan->ic_task, ctx,
+   image->type == IMAGE_CONVERT_IN ? "Input" : "Output",
+   row, col,
+   tile->width, tile->height, tile->left, tile->top);
}
 }
 
-- 
2.19.0



[PATCH v4 22/22] gpu: ipu-v3: image-convert: allow three rows or columns

2018-10-19 Thread Philipp Zabel
If width or height are in the [2049, 3072] range, allow using just
three tiles in this dimension, instead of four.

Signed-off-by: Philipp Zabel 
---
No changes since v3.
---
 drivers/gpu/ipu-v3/ipu-image-convert.c | 7 +--
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c 
b/drivers/gpu/ipu-v3/ipu-image-convert.c
index 3e73494d5930..13103ab86050 100644
--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
+++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
@@ -379,12 +379,7 @@ static int alloc_dma_buf(struct ipu_image_convert_priv 
*priv,
 
 static inline int num_stripes(int dim)
 {
-   if (dim <= 1024)
-   return 1;
-   else if (dim <= 2048)
-   return 2;
-   else
-   return 4;
+   return (dim - 1) / 1024 + 1;
 }
 
 /*
-- 
2.19.0
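
The new expression is a plain ceiling division by 1024, so the
[2049, 3072] range now maps to three stripes instead of being forced to
four. A quick stand-alone check of the rounding (illustrative only, not
part of the patch):

#include <assert.h>

static inline int num_stripes(int dim)
{
	return (dim - 1) / 1024 + 1;
}

int main(void)
{
	assert(num_stripes(1024) == 1);
	assert(num_stripes(2048) == 2);
	assert(num_stripes(2500) == 3);	/* previously forced to 4 */
	assert(num_stripes(3072) == 3);
	assert(num_stripes(3073) == 4);
	return 0;
}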



[PATCH v4 01/22] media: imx: add mem2mem device

2018-10-19 Thread Philipp Zabel
Add a single imx-media mem2mem video device that uses the IPU IC PP
(image converter post processing) task for scaling and colorspace
conversion.
On i.MX6Q/DL SoCs with two IPUs currently only the first IPU is used.

The hardware only supports writing to destination buffers up to
1024x1024 pixels in a single pass; arbitrary sizes can be achieved
by rendering multiple tiles per frame.

Signed-off-by: Philipp Zabel 
[slongerb...@gmail.com: use ipu_image_convert_adjust(), fix
 device_run() error handling]
Signed-off-by: Steve Longerbeam 
---
No changes since v3.
---
 drivers/staging/media/imx/Kconfig |   1 +
 drivers/staging/media/imx/Makefile|   1 +
 drivers/staging/media/imx/imx-media-dev.c |  11 +
 drivers/staging/media/imx/imx-media-mem2mem.c | 873 ++
 drivers/staging/media/imx/imx-media.h |  10 +
 5 files changed, 896 insertions(+)
 create mode 100644 drivers/staging/media/imx/imx-media-mem2mem.c

diff --git a/drivers/staging/media/imx/Kconfig 
b/drivers/staging/media/imx/Kconfig
index bfc17de56b17..07013cb3cb66 100644
--- a/drivers/staging/media/imx/Kconfig
+++ b/drivers/staging/media/imx/Kconfig
@@ -6,6 +6,7 @@ config VIDEO_IMX_MEDIA
depends on HAS_DMA
select VIDEOBUF2_DMA_CONTIG
select V4L2_FWNODE
+   select V4L2_MEM2MEM_DEV
---help---
  Say yes here to enable support for video4linux media controller
  driver for the i.MX5/6 SOC.
diff --git a/drivers/staging/media/imx/Makefile 
b/drivers/staging/media/imx/Makefile
index 698a4210316e..f2e722d0fa19 100644
--- a/drivers/staging/media/imx/Makefile
+++ b/drivers/staging/media/imx/Makefile
@@ -6,6 +6,7 @@ imx-media-ic-objs := imx-ic-common.o imx-ic-prp.o 
imx-ic-prpencvf.o
 obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media.o
 obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-common.o
 obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-capture.o
+obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-mem2mem.o
 obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-vdic.o
 obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-ic.o
 
diff --git a/drivers/staging/media/imx/imx-media-dev.c 
b/drivers/staging/media/imx/imx-media-dev.c
index 481840195071..c4324df54c2e 100644
--- a/drivers/staging/media/imx/imx-media-dev.c
+++ b/drivers/staging/media/imx/imx-media-dev.c
@@ -309,6 +309,17 @@ static int imx_media_probe_complete(struct 
v4l2_async_notifier *notifier)
goto unlock;
 
	ret = v4l2_device_register_subdev_nodes(&imxmd->v4l2_dev);
+   if (ret)
+   goto unlock;
+
+   /* TODO: check whether we have IC subdevices first */
+   imxmd->m2m_vdev = imx_media_mem2mem_device_init(imxmd);
+   if (IS_ERR(imxmd->m2m_vdev)) {
+   ret = PTR_ERR(imxmd->m2m_vdev);
+   goto unlock;
+   }
+
+   ret = imx_media_mem2mem_device_register(imxmd->m2m_vdev);
 unlock:
	mutex_unlock(&imxmd->mutex);
if (ret)
diff --git a/drivers/staging/media/imx/imx-media-mem2mem.c 
b/drivers/staging/media/imx/imx-media-mem2mem.c
new file mode 100644
index ..a2a4dca017ce
--- /dev/null
+++ b/drivers/staging/media/imx/imx-media-mem2mem.c
@@ -0,0 +1,873 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * i.MX IPUv3 mem2mem Scaler/CSC driver
+ *
+ * Copyright (C) 2011 Pengutronix, Sascha Hauer
+ * Copyright (C) 2018 Pengutronix, Philipp Zabel
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "imx-media.h"
+
+#define fh_to_ctx(__fh)	container_of(__fh, struct mem2mem_ctx, fh)
+
+enum {
+   V4L2_M2M_SRC = 0,
+   V4L2_M2M_DST = 1,
+};
+
+struct mem2mem_priv {
+   struct imx_media_video_dev vdev;
+
+   struct v4l2_m2m_dev   *m2m_dev;
+   struct device *dev;
+
+   struct imx_media_dev  *md;
+
+   struct mutex  mutex;   /* mem2mem device mutex */
+
+   atomic_t  num_inst;
+};
+
+#define to_mem2mem_priv(v) container_of(v, struct mem2mem_priv, vdev)
+
+/* Per-queue, driver-specific private data */
+struct mem2mem_q_data {
+   struct v4l2_pix_format  cur_fmt;
+	struct v4l2_rect	rect;
+};
+
+struct mem2mem_ctx {
+   struct mem2mem_priv *priv;
+
+   struct v4l2_fh  fh;
+   struct mem2mem_q_data   q_data[2];
+   int error;
+   struct ipu_image_convert_ctx *icc;
+
+   struct v4l2_ctrl_handler ctrl_hdlr;
+   int rotate;
+   bool hflip;
+   bool vflip;
+	enum ipu_rotate_mode	rot_mode;
+};
+
+static struct mem2mem_q_data *get_q_data(struct mem2mem_ctx *ctx,
+enum v4l2_buf_type type)
+{
+   if (V4L2_TYPE_IS_OUTPUT(type))
+		return &ctx->q_data[V4L2_M2M_SRC];
+	else
+		return &ctx->q_data[V4L2_M2M_DST];
+}
+
+/*
+ * mem2mem callbacks
+ */
+
+static void job_abort(void *_ctx)
+{
+   struct mem2mem_ctx *ctx = _ctx;
+
+   if 

[PATCH v4 10/22] gpu: ipu-v3: image-convert: prepare for per-tile configuration

2018-10-19 Thread Philipp Zabel
Let convert_start start from a given tile index, and allocate the
intermediate tile buffer with the maximum tile size.

Signed-off-by: Philipp Zabel 
---
No changes since v3.
---
 drivers/gpu/ipu-v3/ipu-image-convert.c | 60 +++---
 1 file changed, 35 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c 
b/drivers/gpu/ipu-v3/ipu-image-convert.c
index 5fccba176e39..c2f82d681c48 100644
--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
+++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
@@ -625,7 +625,8 @@ static void init_idmac_channel(struct ipu_image_convert_ctx 
*ctx,
   struct ipuv3_channel *channel,
   struct ipu_image_convert_image *image,
   enum ipu_rotate_mode rot_mode,
-  bool rot_swap_width_height)
+  bool rot_swap_width_height,
+  unsigned int tile)
 {
struct ipu_image_convert_chan *chan = ctx->chan;
unsigned int burst_size;
@@ -635,23 +636,23 @@ static void init_idmac_channel(struct 
ipu_image_convert_ctx *ctx,
unsigned int tile_idx[2];
 
if (image->type == IMAGE_CONVERT_OUT) {
-   tile_idx[0] = ctx->out_tile_map[0];
+   tile_idx[0] = ctx->out_tile_map[tile];
tile_idx[1] = ctx->out_tile_map[1];
} else {
-   tile_idx[0] = 0;
+   tile_idx[0] = tile;
tile_idx[1] = 1;
}
 
if (rot_swap_width_height) {
-   width = image->tile[0].height;
-   height = image->tile[0].width;
-   stride = image->tile[0].rot_stride;
+   width = image->tile[tile_idx[0]].height;
+   height = image->tile[tile_idx[0]].width;
+   stride = image->tile[tile_idx[0]].rot_stride;
addr0 = ctx->rot_intermediate[0].phys;
if (ctx->double_buffering)
addr1 = ctx->rot_intermediate[1].phys;
} else {
-   width = image->tile[0].width;
-   height = image->tile[0].height;
+   width = image->tile[tile_idx[0]].width;
+   height = image->tile[tile_idx[0]].height;
stride = image->stride;
addr0 = image->base.phys0 +
image->tile[tile_idx[0]].offset;
@@ -701,7 +702,7 @@ static void init_idmac_channel(struct ipu_image_convert_ctx 
*ctx,
ipu_idmac_set_double_buffer(channel, ctx->double_buffering);
 }
 
-static int convert_start(struct ipu_image_convert_run *run)
+static int convert_start(struct ipu_image_convert_run *run, unsigned int tile)
 {
struct ipu_image_convert_ctx *ctx = run->ctx;
struct ipu_image_convert_chan *chan = ctx->chan;
@@ -709,28 +710,29 @@ static int convert_start(struct ipu_image_convert_run 
*run)
	struct ipu_image_convert_image *s_image = &ctx->in;
	struct ipu_image_convert_image *d_image = &ctx->out;
enum ipu_color_space src_cs, dest_cs;
+   unsigned int dst_tile = ctx->out_tile_map[tile];
unsigned int dest_width, dest_height;
int ret;
 
-   dev_dbg(priv->ipu->dev, "%s: task %u: starting ctx %p run %p\n",
-   __func__, chan->ic_task, ctx, run);
+   dev_dbg(priv->ipu->dev, "%s: task %u: starting ctx %p run %p tile %u -> 
%u\n",
+   __func__, chan->ic_task, ctx, run, tile, dst_tile);
 
src_cs = ipu_pixelformat_to_colorspace(s_image->fmt->fourcc);
dest_cs = ipu_pixelformat_to_colorspace(d_image->fmt->fourcc);
 
if (ipu_rot_mode_is_irt(ctx->rot_mode)) {
/* swap width/height for resizer */
-   dest_width = d_image->tile[0].height;
-   dest_height = d_image->tile[0].width;
+   dest_width = d_image->tile[dst_tile].height;
+   dest_height = d_image->tile[dst_tile].width;
} else {
-   dest_width = d_image->tile[0].width;
-   dest_height = d_image->tile[0].height;
+   dest_width = d_image->tile[dst_tile].width;
+   dest_height = d_image->tile[dst_tile].height;
}
 
/* setup the IC resizer and CSC */
ret = ipu_ic_task_init(chan->ic,
-  s_image->tile[0].width,
-  s_image->tile[0].height,
+  s_image->tile[tile].width,
+  s_image->tile[tile].height,
   dest_width,
   dest_height,
   src_cs, dest_cs);
@@ -741,27 +743,27 @@ static int convert_start(struct ipu_image_convert_run 
*run)
 
/* init the source MEM-->IC PP IDMAC channel */
init_idmac_channel(ctx, chan->in_chan, s_image,
-  IPU_ROTATE_NONE, false);
+  IPU_ROTATE_NONE, false, tile);
 
if (ipu_rot_mode_is_irt(ctx->rot_mode)) 

[PATCH v4 21/22] gpu: ipu-v3: image-convert: disable double buffering if necessary

2018-10-19 Thread Philipp Zabel
Double-buffering only works if tile sizes are the same and the resizing
coefficient does not change between tiles, even for non-planar formats.

Signed-off-by: Philipp Zabel 
---
No changes since v3.
---
 drivers/gpu/ipu-v3/ipu-image-convert.c | 27 --
 1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c 
b/drivers/gpu/ipu-v3/ipu-image-convert.c
index 91fe8f1672b4..3e73494d5930 100644
--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
+++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
@@ -1990,6 +1990,7 @@ ipu_image_convert_prepare(struct ipu_soc *ipu, enum 
ipu_ic_task ic_task,
struct ipu_image_convert_chan *chan;
struct ipu_image_convert_ctx *ctx;
unsigned long flags;
+   unsigned int i;
bool get_res;
int ret;
 
@@ -2077,15 +2078,37 @@ ipu_image_convert_prepare(struct ipu_soc *ipu, enum 
ipu_ic_task ic_task,
 * for every tile, and therefore would have to be updated for
 * each buffer which is not possible. So double-buffering is
 * impossible when either the source or destination images are
-* a planar format (YUV420, YUV422P, etc.).
+* a planar format (YUV420, YUV422P, etc.). Further, differently
+* sized tiles or different resizing coefficients per tile
+* prevent double-buffering as well.
 */
ctx->double_buffering = (ctx->num_tiles > 1 &&
 !s_image->fmt->planar &&
 !d_image->fmt->planar);
+   for (i = 1; i < ctx->num_tiles; i++) {
+   if (ctx->in.tile[i].width != ctx->in.tile[0].width ||
+   ctx->in.tile[i].height != ctx->in.tile[0].height ||
+   ctx->out.tile[i].width != ctx->out.tile[0].width ||
+   ctx->out.tile[i].height != ctx->out.tile[0].height) {
+   ctx->double_buffering = false;
+   break;
+   }
+   }
+   for (i = 1; i < ctx->in.num_cols; i++) {
+   if (ctx->resize_coeffs_h[i] != ctx->resize_coeffs_h[0]) {
+   ctx->double_buffering = false;
+   break;
+   }
+   }
+   for (i = 1; i < ctx->in.num_rows; i++) {
+   if (ctx->resize_coeffs_v[i] != ctx->resize_coeffs_v[0]) {
+   ctx->double_buffering = false;
+   break;
+   }
+   }
 
if (ipu_rot_mode_is_irt(ctx->rot_mode)) {
unsigned long intermediate_size = d_image->tile[0].size;
-   unsigned int i;
 
for (i = 1; i < ctx->num_tiles; i++) {
if (d_image->tile[i].size > intermediate_size)
-- 
2.19.0
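
The checks added above boil down to a single predicate; a condensed
restatement for readability (illustration only: the field names follow
the hunks above, but this helper does not exist in the driver):

static bool tiles_allow_double_buffering(struct ipu_image_convert_ctx *ctx)
{
	unsigned int i;

	if (ctx->num_tiles <= 1 ||
	    ctx->in.fmt->planar || ctx->out.fmt->planar)
		return false;

	for (i = 1; i < ctx->num_tiles; i++)
		if (ctx->in.tile[i].width != ctx->in.tile[0].width ||
		    ctx->in.tile[i].height != ctx->in.tile[0].height ||
		    ctx->out.tile[i].width != ctx->out.tile[0].width ||
		    ctx->out.tile[i].height != ctx->out.tile[0].height)
			return false;

	for (i = 1; i < ctx->in.num_cols; i++)
		if (ctx->resize_coeffs_h[i] != ctx->resize_coeffs_h[0])
			return false;

	for (i = 1; i < ctx->in.num_rows; i++)
		if (ctx->resize_coeffs_v[i] != ctx->resize_coeffs_v[0])
			return false;

	return true;
}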



[PATCH v4 06/22] gpu: ipu-v3: image-convert: Only wait for abort completion if active run

2018-10-19 Thread Philipp Zabel
From: Steve Longerbeam 

Only wait for the ctx->aborted completion if there is an active run
in progress; otherwise the wait will just time out after 10 seconds.
If there is no active run in progress, the done queue just needs to
be emptied.

Signed-off-by: Steve Longerbeam 
---
New since v3.
---
 drivers/gpu/ipu-v3/ipu-image-convert.c | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c 
b/drivers/gpu/ipu-v3/ipu-image-convert.c
index 6c15bf8efaa2..e3e032252604 100644
--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
+++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
@@ -1562,9 +1562,14 @@ static void __ipu_image_convert_abort(struct 
ipu_image_convert_ctx *ctx)
return;
}
 
+   if (!active_run) {
+   empty_done_q(chan);
+   return;
+   }
+
dev_dbg(priv->ipu->dev,
-   "%s: task %u: wait for completion: %d runs, active run %p\n",
-   __func__, chan->ic_task, run_count, active_run);
+   "%s: task %u: wait for completion: %d runs\n",
+   __func__, chan->ic_task, run_count);
 
	ret = wait_for_completion_timeout(&ctx->aborted,
					  msecs_to_jiffies(10000));
-- 
2.19.0



[PATCH v4 08/22] gpu: ipu-v3: image-convert: Remove need_abort flag

2018-10-19 Thread Philipp Zabel
From: Steve Longerbeam 

The need_abort flag is not really needed anymore in
__ipu_image_convert_abort(), so remove it.
No functional changes.

Signed-off-by: Steve Longerbeam 
---
New since v3.
---
 drivers/gpu/ipu-v3/ipu-image-convert.c | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c 
b/drivers/gpu/ipu-v3/ipu-image-convert.c
index abd8afb22b48..b8a400182a00 100644
--- a/drivers/gpu/ipu-v3/ipu-image-convert.c
+++ b/drivers/gpu/ipu-v3/ipu-image-convert.c
@@ -1531,7 +1531,6 @@ static void __ipu_image_convert_abort(struct 
ipu_image_convert_ctx *ctx)
struct ipu_image_convert_run *run, *active_run, *tmp;
unsigned long flags;
int run_count, ret;
-   bool need_abort;
 
	spin_lock_irqsave(&chan->irqlock, flags);
 
@@ -1550,13 +1549,11 @@ static void __ipu_image_convert_abort(struct 
ipu_image_convert_ctx *ctx)
if (active_run)
		reinit_completion(&ctx->aborted);
 
-   need_abort = (run_count || active_run);
-
ctx->aborting = true;
 
	spin_unlock_irqrestore(&chan->irqlock, flags);
 
-   if (!need_abort) {
+   if (!run_count && !active_run) {
dev_dbg(priv->ipu->dev,
"%s: task %u: no abort needed for ctx %p\n",
__func__, chan->ic_task, ctx);
-- 
2.19.0



Re: [PATCH] media: rename soc_camera I2C drivers

2018-10-19 Thread Hans Verkuil
On 10/19/18 13:43, Mauro Carvalho Chehab wrote:
> Those drivers are part of the legacy SoC camera framework.
> They're being converted to not use it, but sometimes we're
> keeping both the legacy and the new driver.
> 
> This time, for example, we have two drivers on media with
> the same name: ov772x. That's bad.
> 
> So, in order to prevent that from happening, let's prepend the SoC
> legacy drivers with soc_.
> 
> No functional changes.
> 
> Signed-off-by: Mauro Carvalho Chehab 

Acked-by: Hans Verkuil 

Let's kill all of these in the next kernel. I see no reason for keeping
them around.

Regards,

Hans

> ---
>  drivers/media/i2c/soc_camera/Makefile  | 18 +-
>  .../soc_camera/{mt9m001.c => soc_mt9m001.c}|  0
>  .../soc_camera/{mt9t112.c => soc_mt9t112.c}|  0
>  .../soc_camera/{mt9v022.c => soc_mt9v022.c}|  0
>  .../i2c/soc_camera/{ov5642.c => soc_ov5642.c}  |  0
>  .../i2c/soc_camera/{ov772x.c => soc_ov772x.c}  |  0
>  .../i2c/soc_camera/{ov9640.c => soc_ov9640.c}  |  0
>  .../i2c/soc_camera/{ov9740.c => soc_ov9740.c}  |  0
>  .../{rj54n1cb0c.c => soc_rj54n1cb0c.c} |  0
>  .../i2c/soc_camera/{tw9910.c => soc_tw9910.c}  |  0
>  10 files changed, 9 insertions(+), 9 deletions(-)
>  rename drivers/media/i2c/soc_camera/{mt9m001.c => soc_mt9m001.c} (100%)
>  rename drivers/media/i2c/soc_camera/{mt9t112.c => soc_mt9t112.c} (100%)
>  rename drivers/media/i2c/soc_camera/{mt9v022.c => soc_mt9v022.c} (100%)
>  rename drivers/media/i2c/soc_camera/{ov5642.c => soc_ov5642.c} (100%)
>  rename drivers/media/i2c/soc_camera/{ov772x.c => soc_ov772x.c} (100%)
>  rename drivers/media/i2c/soc_camera/{ov9640.c => soc_ov9640.c} (100%)
>  rename drivers/media/i2c/soc_camera/{ov9740.c => soc_ov9740.c} (100%)
>  rename drivers/media/i2c/soc_camera/{rj54n1cb0c.c => soc_rj54n1cb0c.c} (100%)
>  rename drivers/media/i2c/soc_camera/{tw9910.c => soc_tw9910.c} (100%)
> 
> diff --git a/drivers/media/i2c/soc_camera/Makefile 
> b/drivers/media/i2c/soc_camera/Makefile
> index 8c7770f62997..09ae483b96ef 100644
> --- a/drivers/media/i2c/soc_camera/Makefile
> +++ b/drivers/media/i2c/soc_camera/Makefile
> @@ -1,10 +1,10 @@
>  # SPDX-License-Identifier: GPL-2.0
> -obj-$(CONFIG_SOC_CAMERA_MT9M001) += mt9m001.o
> -obj-$(CONFIG_SOC_CAMERA_MT9T112) += mt9t112.o
> -obj-$(CONFIG_SOC_CAMERA_MT9V022) += mt9v022.o
> -obj-$(CONFIG_SOC_CAMERA_OV5642)  += ov5642.o
> -obj-$(CONFIG_SOC_CAMERA_OV772X)  += ov772x.o
> -obj-$(CONFIG_SOC_CAMERA_OV9640)  += ov9640.o
> -obj-$(CONFIG_SOC_CAMERA_OV9740)  += ov9740.o
> -obj-$(CONFIG_SOC_CAMERA_RJ54N1)  += rj54n1cb0c.o
> -obj-$(CONFIG_SOC_CAMERA_TW9910)  += tw9910.o
> +obj-$(CONFIG_SOC_CAMERA_MT9M001) += soc_mt9m001.o
> +obj-$(CONFIG_SOC_CAMERA_MT9T112) += soc_mt9t112.o
> +obj-$(CONFIG_SOC_CAMERA_MT9V022) += soc_mt9v022.o
> +obj-$(CONFIG_SOC_CAMERA_OV5642)  += soc_ov5642.o
> +obj-$(CONFIG_SOC_CAMERA_OV772X)  += soc_ov772x.o
> +obj-$(CONFIG_SOC_CAMERA_OV9640)  += soc_ov9640.o
> +obj-$(CONFIG_SOC_CAMERA_OV9740)  += soc_ov9740.o
> +obj-$(CONFIG_SOC_CAMERA_RJ54N1)  += soc_rj54n1cb0c.o
> +obj-$(CONFIG_SOC_CAMERA_TW9910)  += soc_tw9910.o
> diff --git a/drivers/media/i2c/soc_camera/mt9m001.c 
> b/drivers/media/i2c/soc_camera/soc_mt9m001.c
> similarity index 100%
> rename from drivers/media/i2c/soc_camera/mt9m001.c
> rename to drivers/media/i2c/soc_camera/soc_mt9m001.c
> diff --git a/drivers/media/i2c/soc_camera/mt9t112.c 
> b/drivers/media/i2c/soc_camera/soc_mt9t112.c
> similarity index 100%
> rename from drivers/media/i2c/soc_camera/mt9t112.c
> rename to drivers/media/i2c/soc_camera/soc_mt9t112.c
> diff --git a/drivers/media/i2c/soc_camera/mt9v022.c 
> b/drivers/media/i2c/soc_camera/soc_mt9v022.c
> similarity index 100%
> rename from drivers/media/i2c/soc_camera/mt9v022.c
> rename to drivers/media/i2c/soc_camera/soc_mt9v022.c
> diff --git a/drivers/media/i2c/soc_camera/ov5642.c 
> b/drivers/media/i2c/soc_camera/soc_ov5642.c
> similarity index 100%
> rename from drivers/media/i2c/soc_camera/ov5642.c
> rename to drivers/media/i2c/soc_camera/soc_ov5642.c
> diff --git a/drivers/media/i2c/soc_camera/ov772x.c 
> b/drivers/media/i2c/soc_camera/soc_ov772x.c
> similarity index 100%
> rename from drivers/media/i2c/soc_camera/ov772x.c
> rename to drivers/media/i2c/soc_camera/soc_ov772x.c
> diff --git a/drivers/media/i2c/soc_camera/ov9640.c 
> b/drivers/media/i2c/soc_camera/soc_ov9640.c
> similarity index 100%
> rename from drivers/media/i2c/soc_camera/ov9640.c
> rename to drivers/media/i2c/soc_camera/soc_ov9640.c
> diff --git a/drivers/media/i2c/soc_camera/ov9740.c 
> b/drivers/media/i2c/soc_camera/soc_ov9740.c
> similarity index 100%
> rename from drivers/media/i2c/soc_camera/ov9740.c
> rename to drivers/media/i2c/soc_camera/soc_ov9740.c
> diff --git a/drivers/media/i2c/soc_camera/rj54n1cb0c.c 
> 

[PATCH] media: rename soc_camera I2C drivers

2018-10-19 Thread Mauro Carvalho Chehab
Those drivers are part of the legacy SoC camera framework.
They're being converted to not use it, but sometimes we're
keeping both the legacy and the new driver.

This time, for example, we have two drivers on media with
the same name: ov772x. That's bad.

So, in order to prevent that from happening, let's prepend the SoC
legacy drivers with soc_.

No functional changes.

Signed-off-by: Mauro Carvalho Chehab 
---
 drivers/media/i2c/soc_camera/Makefile  | 18 +-
 .../soc_camera/{mt9m001.c => soc_mt9m001.c}|  0
 .../soc_camera/{mt9t112.c => soc_mt9t112.c}|  0
 .../soc_camera/{mt9v022.c => soc_mt9v022.c}|  0
 .../i2c/soc_camera/{ov5642.c => soc_ov5642.c}  |  0
 .../i2c/soc_camera/{ov772x.c => soc_ov772x.c}  |  0
 .../i2c/soc_camera/{ov9640.c => soc_ov9640.c}  |  0
 .../i2c/soc_camera/{ov9740.c => soc_ov9740.c}  |  0
 .../{rj54n1cb0c.c => soc_rj54n1cb0c.c} |  0
 .../i2c/soc_camera/{tw9910.c => soc_tw9910.c}  |  0
 10 files changed, 9 insertions(+), 9 deletions(-)
 rename drivers/media/i2c/soc_camera/{mt9m001.c => soc_mt9m001.c} (100%)
 rename drivers/media/i2c/soc_camera/{mt9t112.c => soc_mt9t112.c} (100%)
 rename drivers/media/i2c/soc_camera/{mt9v022.c => soc_mt9v022.c} (100%)
 rename drivers/media/i2c/soc_camera/{ov5642.c => soc_ov5642.c} (100%)
 rename drivers/media/i2c/soc_camera/{ov772x.c => soc_ov772x.c} (100%)
 rename drivers/media/i2c/soc_camera/{ov9640.c => soc_ov9640.c} (100%)
 rename drivers/media/i2c/soc_camera/{ov9740.c => soc_ov9740.c} (100%)
 rename drivers/media/i2c/soc_camera/{rj54n1cb0c.c => soc_rj54n1cb0c.c} (100%)
 rename drivers/media/i2c/soc_camera/{tw9910.c => soc_tw9910.c} (100%)

diff --git a/drivers/media/i2c/soc_camera/Makefile 
b/drivers/media/i2c/soc_camera/Makefile
index 8c7770f62997..09ae483b96ef 100644
--- a/drivers/media/i2c/soc_camera/Makefile
+++ b/drivers/media/i2c/soc_camera/Makefile
@@ -1,10 +1,10 @@
 # SPDX-License-Identifier: GPL-2.0
-obj-$(CONFIG_SOC_CAMERA_MT9M001)   += mt9m001.o
-obj-$(CONFIG_SOC_CAMERA_MT9T112)   += mt9t112.o
-obj-$(CONFIG_SOC_CAMERA_MT9V022)   += mt9v022.o
-obj-$(CONFIG_SOC_CAMERA_OV5642)+= ov5642.o
-obj-$(CONFIG_SOC_CAMERA_OV772X)+= ov772x.o
-obj-$(CONFIG_SOC_CAMERA_OV9640)+= ov9640.o
-obj-$(CONFIG_SOC_CAMERA_OV9740)+= ov9740.o
-obj-$(CONFIG_SOC_CAMERA_RJ54N1)+= rj54n1cb0c.o
-obj-$(CONFIG_SOC_CAMERA_TW9910)+= tw9910.o
+obj-$(CONFIG_SOC_CAMERA_MT9M001)   += soc_mt9m001.o
+obj-$(CONFIG_SOC_CAMERA_MT9T112)   += soc_mt9t112.o
+obj-$(CONFIG_SOC_CAMERA_MT9V022)   += soc_mt9v022.o
+obj-$(CONFIG_SOC_CAMERA_OV5642)+= soc_ov5642.o
+obj-$(CONFIG_SOC_CAMERA_OV772X)+= soc_ov772x.o
+obj-$(CONFIG_SOC_CAMERA_OV9640)+= soc_ov9640.o
+obj-$(CONFIG_SOC_CAMERA_OV9740)+= soc_ov9740.o
+obj-$(CONFIG_SOC_CAMERA_RJ54N1)+= soc_rj54n1cb0c.o
+obj-$(CONFIG_SOC_CAMERA_TW9910)+= soc_tw9910.o
diff --git a/drivers/media/i2c/soc_camera/mt9m001.c 
b/drivers/media/i2c/soc_camera/soc_mt9m001.c
similarity index 100%
rename from drivers/media/i2c/soc_camera/mt9m001.c
rename to drivers/media/i2c/soc_camera/soc_mt9m001.c
diff --git a/drivers/media/i2c/soc_camera/mt9t112.c 
b/drivers/media/i2c/soc_camera/soc_mt9t112.c
similarity index 100%
rename from drivers/media/i2c/soc_camera/mt9t112.c
rename to drivers/media/i2c/soc_camera/soc_mt9t112.c
diff --git a/drivers/media/i2c/soc_camera/mt9v022.c 
b/drivers/media/i2c/soc_camera/soc_mt9v022.c
similarity index 100%
rename from drivers/media/i2c/soc_camera/mt9v022.c
rename to drivers/media/i2c/soc_camera/soc_mt9v022.c
diff --git a/drivers/media/i2c/soc_camera/ov5642.c 
b/drivers/media/i2c/soc_camera/soc_ov5642.c
similarity index 100%
rename from drivers/media/i2c/soc_camera/ov5642.c
rename to drivers/media/i2c/soc_camera/soc_ov5642.c
diff --git a/drivers/media/i2c/soc_camera/ov772x.c 
b/drivers/media/i2c/soc_camera/soc_ov772x.c
similarity index 100%
rename from drivers/media/i2c/soc_camera/ov772x.c
rename to drivers/media/i2c/soc_camera/soc_ov772x.c
diff --git a/drivers/media/i2c/soc_camera/ov9640.c 
b/drivers/media/i2c/soc_camera/soc_ov9640.c
similarity index 100%
rename from drivers/media/i2c/soc_camera/ov9640.c
rename to drivers/media/i2c/soc_camera/soc_ov9640.c
diff --git a/drivers/media/i2c/soc_camera/ov9740.c 
b/drivers/media/i2c/soc_camera/soc_ov9740.c
similarity index 100%
rename from drivers/media/i2c/soc_camera/ov9740.c
rename to drivers/media/i2c/soc_camera/soc_ov9740.c
diff --git a/drivers/media/i2c/soc_camera/rj54n1cb0c.c 
b/drivers/media/i2c/soc_camera/soc_rj54n1cb0c.c
similarity index 100%
rename from drivers/media/i2c/soc_camera/rj54n1cb0c.c
rename to drivers/media/i2c/soc_camera/soc_rj54n1cb0c.c
diff --git a/drivers/media/i2c/soc_camera/tw9910.c 
b/drivers/media/i2c/soc_camera/soc_tw9910.c
similarity index 100%
rename from drivers/media/i2c/soc_camera/tw9910.c
rename to 

Re: [PATCH 2/2] vicodec: Implement spec-compliant stop command

2018-10-19 Thread Nicolas Dufresne
Le vendredi 19 octobre 2018 à 07:35 -0400, Nicolas Dufresne a écrit :
> Le vendredi 19 octobre 2018 à 09:28 +0200, Hans Verkuil a écrit :
> > On 10/18/2018 06:08 PM, Ezequiel Garcia wrote:
> > > Set up a statically-allocated, dummy buffer to
> > > be used as flush buffer, which signals
> > > a encoding (or decoding) stop.
> > > 
> > > When the flush buffer is queued to the OUTPUT queue,
> > > the driver will send an V4L2_EVENT_EOS event, and
> > > mark the CAPTURE buffer with V4L2_BUF_FLAG_LAST.
> > 
> > I'm confused. What is the current driver doing wrong? It is already
> > setting the LAST flag AFAIK. I don't see why a dummy buffer is
> > needed.
> 
> I'm not sure of this patch either. It seems to trigger the legacy
> "empty payload" buffer case. Driver should mark the last buffer, and
> then the following poll should return EPIPE. Maybe it's the latter that
> isn't respected ?

Sorry, I've sent this too fast. The following poll should not block,
and DQBUF should return EPIPE.

In GStreamer we currently ignore the LAST flag and wait for EPIPE. The
reason is that not all drivers can set the LAST flag. Exynos firmware
tells you it's done later and we don't want to introduce any latency in
the driver. The last flag isn't that useful in fact, but it can be used
with RTP to set the marker bit.

In previous discussions, using a buffer with payload 0 was not liked.
There might be codecs where an empty buffer is valid, who knows?

> 
> > 
> > Regards,
> > 
> > Hans
> > 
> > > 
> > > With this change, it's possible to run a pipeline to completion:
> > > 
> > > gst-launch-1.0 videotestsrc num-buffers=10 ! v4l2fwhtenc !
> > > v4l2fwhtdec ! fakevideosink
> > > 
> > > Signed-off-by: Ezequiel Garcia 
> > > ---
> > >  drivers/media/platform/vicodec/vicodec-core.c | 80 ++---
> > > --
> > > 
> > >  1 file changed, 44 insertions(+), 36 deletions(-)
> > > 
> > > diff --git a/drivers/media/platform/vicodec/vicodec-core.c
> > > b/drivers/media/platform/vicodec/vicodec-core.c
> > > index a2c487b4b80d..4ed4dae10e30 100644
> > > --- a/drivers/media/platform/vicodec/vicodec-core.c
> > > +++ b/drivers/media/platform/vicodec/vicodec-core.c
> > > @@ -113,7 +113,7 @@ struct vicodec_ctx {
> > >   struct v4l2_ctrl_handler hdl;
> > >  
> > >   struct vb2_v4l2_buffer *last_src_buf;
> > > - struct vb2_v4l2_buffer *last_dst_buf;
> > > + struct vb2_v4l2_buffer  flush_buf;
> > >  
> > >   /* Source and destination queue data */
> > >   struct vicodec_q_data   q_data[2];
> > > @@ -220,6 +220,7 @@ static void device_run(void *priv)
> > >   struct vicodec_dev *dev = ctx->dev;
> > >   struct vb2_v4l2_buffer *src_buf, *dst_buf;
> > >   struct vicodec_q_data *q_out;
> > > + bool flushing;
> > >   u32 state;
> > >  
> > >   src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
> > > @@ -227,26 +228,36 @@ static void device_run(void *priv)
> > >   q_out = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
> > >  
> > >   state = VB2_BUF_STATE_DONE;
> > > - if (device_process(ctx, src_buf, dst_buf))
> > > +
> > > + flushing = (src_buf == >flush_buf);
> > > + if (!flushing && device_process(ctx, src_buf, dst_buf))
> > >   state = VB2_BUF_STATE_ERROR;
> > > - ctx->last_dst_buf = dst_buf;
> > >  
> > >   spin_lock(ctx->lock);
> > > - if (!ctx->comp_has_next_frame && src_buf == ctx->last_src_buf)
> > > {
> > > + if (!flushing) {
> > > + if (!ctx->comp_has_next_frame && src_buf == ctx-
> > > > last_src_buf) {
> > > 
> > > + dst_buf->flags |= V4L2_BUF_FLAG_LAST;
> > > + v4l2_event_queue_fh(>fh, _event);
> > > + }
> > > +
> > > + if (ctx->is_enc) {
> > > + src_buf->sequence = q_out->sequence++;
> > > + src_buf = v4l2_m2m_src_buf_remove(ctx-
> > > > fh.m2m_ctx);
> > > 
> > > + v4l2_m2m_buf_done(src_buf, state);
> > > + } else if (vb2_get_plane_payload(_buf->vb2_buf, 0)
> > > + == ctx->cur_buf_offset) {
> > > + src_buf->sequence = q_out->sequence++;
> > > + src_buf = v4l2_m2m_src_buf_remove(ctx-
> > > > fh.m2m_ctx);
> > > 
> > > + v4l2_m2m_buf_done(src_buf, state);
> > > + ctx->cur_buf_offset = 0;
> > > + ctx->comp_has_next_frame = false;
> > > + }
> > > + } else {
> > > + src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> > > + vb2_set_plane_payload(_buf->vb2_buf, 0, 0);
> > >   dst_buf->flags |= V4L2_BUF_FLAG_LAST;
> > >   v4l2_event_queue_fh(>fh, _event);
> > >   }
> > > - if (ctx->is_enc) {
> > > - src_buf->sequence = q_out->sequence++;
> > > - src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> > > - v4l2_m2m_buf_done(src_buf, state);
> > > - } else if (vb2_get_plane_payload(_buf->vb2_buf, 0) == ctx-
> > > > cur_buf_offset) {
> > > 
> > > - src_buf->sequence = q_out->sequence++;
> > > - src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> > > -   

Re: [PATCH 2/2] vicodec: Implement spec-compliant stop command

2018-10-19 Thread Hans Verkuil
On 10/19/18 13:35, Nicolas Dufresne wrote:
> Le vendredi 19 octobre 2018 à 09:28 +0200, Hans Verkuil a écrit :
>> On 10/18/2018 06:08 PM, Ezequiel Garcia wrote:
>>> Set up a statically-allocated, dummy buffer to
>>> be used as flush buffer, which signals
>>> a encoding (or decoding) stop.
>>>
>>> When the flush buffer is queued to the OUTPUT queue,
>>> the driver will send an V4L2_EVENT_EOS event, and
>>> mark the CAPTURE buffer with V4L2_BUF_FLAG_LAST.
>>
>> I'm confused. What is the current driver doing wrong? It is already
>> setting the LAST flag AFAIK. I don't see why a dummy buffer is
>> needed.
> 
> I'm not sure of this patch either. It seems to trigger the legacy
> "empty payload" buffer case. Driver should mark the last buffer, and
> then the following poll should return EPIPE. Maybe it's the latter that
> isn't respected ?

That would make more sense: vicodec doesn't set last_buffer_dequeued
to true in the vb2_queue, so you won't get EPIPE.

But that should be much easier to add.

Regards,

Hans


Re: [PATCH 2/2] vicodec: Implement spec-compliant stop command

2018-10-19 Thread Nicolas Dufresne
Le vendredi 19 octobre 2018 à 09:28 +0200, Hans Verkuil a écrit :
> On 10/18/2018 06:08 PM, Ezequiel Garcia wrote:
> > Set up a statically-allocated, dummy buffer to
> > be used as flush buffer, which signals
> > a encoding (or decoding) stop.
> > 
> > When the flush buffer is queued to the OUTPUT queue,
> > the driver will send an V4L2_EVENT_EOS event, and
> > mark the CAPTURE buffer with V4L2_BUF_FLAG_LAST.
> 
> I'm confused. What is the current driver doing wrong? It is already
> setting the LAST flag AFAIK. I don't see why a dummy buffer is
> needed.

I'm not sure of this patch either. It seems to trigger the legacy
"empty payload" buffer case. Driver should mark the last buffer, and
then the following poll should return EPIPE. Maybe it's the latter that
isn't respected ?

> 
> Regards,
> 
>   Hans
> 
> > 
> > With this change, it's possible to run a pipeline to completion:
> > 
> > gst-launch-1.0 videotestsrc num-buffers=10 ! v4l2fwhtenc !
> > v4l2fwhtdec ! fakevideosink
> > 
> > Signed-off-by: Ezequiel Garcia 
> > ---
> >  drivers/media/platform/vicodec/vicodec-core.c | 80 ++-
> > 
> >  1 file changed, 44 insertions(+), 36 deletions(-)
> > 
> > diff --git a/drivers/media/platform/vicodec/vicodec-core.c
> > b/drivers/media/platform/vicodec/vicodec-core.c
> > index a2c487b4b80d..4ed4dae10e30 100644
> > --- a/drivers/media/platform/vicodec/vicodec-core.c
> > +++ b/drivers/media/platform/vicodec/vicodec-core.c
> > @@ -113,7 +113,7 @@ struct vicodec_ctx {
> > struct v4l2_ctrl_handler hdl;
> >  
> > struct vb2_v4l2_buffer *last_src_buf;
> > -   struct vb2_v4l2_buffer *last_dst_buf;
> > +   struct vb2_v4l2_buffer  flush_buf;
> >  
> > /* Source and destination queue data */
> > struct vicodec_q_data   q_data[2];
> > @@ -220,6 +220,7 @@ static void device_run(void *priv)
> > struct vicodec_dev *dev = ctx->dev;
> > struct vb2_v4l2_buffer *src_buf, *dst_buf;
> > struct vicodec_q_data *q_out;
> > +   bool flushing;
> > u32 state;
> >  
> > src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
> > @@ -227,26 +228,36 @@ static void device_run(void *priv)
> > q_out = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
> >  
> > state = VB2_BUF_STATE_DONE;
> > -   if (device_process(ctx, src_buf, dst_buf))
> > +
> > +   flushing = (src_buf == >flush_buf);
> > +   if (!flushing && device_process(ctx, src_buf, dst_buf))
> > state = VB2_BUF_STATE_ERROR;
> > -   ctx->last_dst_buf = dst_buf;
> >  
> > spin_lock(ctx->lock);
> > -   if (!ctx->comp_has_next_frame && src_buf == ctx->last_src_buf)
> > {
> > +   if (!flushing) {
> > +   if (!ctx->comp_has_next_frame && src_buf == ctx-
> > >last_src_buf) {
> > +   dst_buf->flags |= V4L2_BUF_FLAG_LAST;
> > +   v4l2_event_queue_fh(>fh, _event);
> > +   }
> > +
> > +   if (ctx->is_enc) {
> > +   src_buf->sequence = q_out->sequence++;
> > +   src_buf = v4l2_m2m_src_buf_remove(ctx-
> > >fh.m2m_ctx);
> > +   v4l2_m2m_buf_done(src_buf, state);
> > +   } else if (vb2_get_plane_payload(_buf->vb2_buf, 0)
> > +   == ctx->cur_buf_offset) {
> > +   src_buf->sequence = q_out->sequence++;
> > +   src_buf = v4l2_m2m_src_buf_remove(ctx-
> > >fh.m2m_ctx);
> > +   v4l2_m2m_buf_done(src_buf, state);
> > +   ctx->cur_buf_offset = 0;
> > +   ctx->comp_has_next_frame = false;
> > +   }
> > +   } else {
> > +   src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> > +   vb2_set_plane_payload(_buf->vb2_buf, 0, 0);
> > dst_buf->flags |= V4L2_BUF_FLAG_LAST;
> > v4l2_event_queue_fh(>fh, _event);
> > }
> > -   if (ctx->is_enc) {
> > -   src_buf->sequence = q_out->sequence++;
> > -   src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> > -   v4l2_m2m_buf_done(src_buf, state);
> > -   } else if (vb2_get_plane_payload(_buf->vb2_buf, 0) == ctx-
> > >cur_buf_offset) {
> > -   src_buf->sequence = q_out->sequence++;
> > -   src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> > -   v4l2_m2m_buf_done(src_buf, state);
> > -   ctx->cur_buf_offset = 0;
> > -   ctx->comp_has_next_frame = false;
> > -   }
> > v4l2_m2m_buf_done(dst_buf, state);
> > ctx->comp_size = 0;
> > ctx->comp_magic_cnt = 0;
> > @@ -293,6 +304,8 @@ static int job_ready(void *priv)
> > src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
> > if (!src_buf)
> > return 0;
> > +   if (src_buf == >flush_buf)
> > +   return 1;
> > p_out = vb2_plane_vaddr(_buf->vb2_buf, 0);
> > sz = vb2_get_plane_payload(_buf->vb2_buf, 0);
> > p = p_out + ctx->cur_buf_offset;
> > @@ -770,21 +783,6 @@ static int vidioc_s_fmt_vid_out(struct file
> > *file, void *priv,
> > return ret;
> >  }
> >  
> > -static void 

Re: [PATCH 1/2] media: ov5640: Add check for PLL1 output max frequency

2018-10-19 Thread Maxime Ripard
Hi,

On Thu, Oct 18, 2018 at 03:35:25PM +0200, jacopo mondi wrote:
> Hi Maxime,
> 
> On Thu, Oct 18, 2018 at 11:15:50AM +0200, Maxime Ripard wrote:
> > On Wed, Oct 17, 2018 at 09:37:17PM +0200, Jacopo Mondi wrote:
> > > Check that the PLL1 output frequency does not exceed the maximum allowed 
> > > 1GHz
> > > frequency.
> > >
> > > Signed-off-by: Jacopo Mondi 
> > > ---
> > >  drivers/media/i2c/ov5640.c | 23 +++
> > >  1 file changed, 19 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
> > > index e098435..1f2e72d 100644
> > > --- a/drivers/media/i2c/ov5640.c
> > > +++ b/drivers/media/i2c/ov5640.c
> > > @@ -770,7 +770,7 @@ static int ov5640_mod_reg(struct ov5640_dev *sensor, 
> > > u16 reg,
> > >   * always set to either 1 or 2 in the vendor kernels.
> > >   */
> > >  #define OV5640_SYSDIV_MIN1
> > > -#define OV5640_SYSDIV_MAX2
> > > +#define OV5640_SYSDIV_MAX16
> > >
> > >  /*
> > >   * This is supposed to be ranging from 1 to 16, but the value is always
> > > @@ -806,15 +806,20 @@ static int ov5640_mod_reg(struct ov5640_dev 
> > > *sensor, u16 reg,
> > >   * This is supposed to be ranging from 1 to 8, but the value is always
> > >   * set to 1 in the vendor kernels.
> > >   */
> > > -#define OV5640_PCLK_ROOT_DIV 1
> > > +#define OV5640_PCLK_ROOT_DIV 1
> > > +#define OV5640_PLL_SYS_ROOT_DIVIDER_BYPASS   0x00
> > >
> > >  static unsigned long ov5640_compute_sys_clk(struct ov5640_dev *sensor,
> > >   u8 pll_prediv, u8 pll_mult,
> > >   u8 sysdiv)
> > >  {
> > > - unsigned long rate = clk_get_rate(sensor->xclk);
> > > + unsigned long sysclk = sensor->xclk_freq / pll_prediv * pll_mult;
> > >
> > > - return rate / pll_prediv * pll_mult / sysdiv;
> > > + /* PLL1 output cannot exceed 1GHz. */
> > > +	if (sysclk / 1000000 > 1000)
> > > + return 0;
> > > +
> > > + return sysclk / sysdiv;
> > >  }
> > >
> > >  static unsigned long ov5640_calc_sys_clk(struct ov5640_dev *sensor,
> > > @@ -844,6 +849,16 @@ static unsigned long ov5640_calc_sys_clk(struct 
> > > ov5640_dev *sensor,
> > >   _rate = ov5640_compute_sys_clk(sensor,
> > >  OV5640_PLL_PREDIV,
> > >  _pll_mult, _sysdiv);
> > > +
> > > + /*
> > > +  * We have reached the maximum allowed PLL1 output,
> > > +  * increase sysdiv.
> > > +  */
> > > + if (rate == 0) {
> > > + _pll_mult = OV5640_PLL_MULT_MAX + 1;
> > > + continue;
> > > + }
> > > +
> >
> > Both your patches look sane to me. However, I guess here you're
> > setting _pll_mult at this value so that you won't reach the for
> > condition on the next iteration?
> >
> > Wouldn't it be cleaner to just use a break statement here?
> 
> Yes, it's much cleaner indeed. Not sure why I thought this was a good
> idea tbh.
> 
> Would you like me to send a v2, or can you take care of this when
> re-sending v5?

I'll squash it in the v5, thanks!

Maxime

-- 
Maxime Ripard, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com
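
As a rough numeric illustration of the limit being checked (the numbers
are assumptions for the sake of the example: a 24 MHz xvclk and a fixed
pre-divider of 3):

/*
 * Illustration only, not driver code: PLL1 output before the system
 * divider, in MHz.
 */
static unsigned long pll1_out_mhz(unsigned long xvclk_mhz,
				  unsigned int pll_prediv,
				  unsigned int pll_mult)
{
	return xvclk_mhz / pll_prediv * pll_mult;
}

/*
 * pll1_out_mhz(24, 3, 125) == 1000 -> right at the 1 GHz boundary
 * pll1_out_mhz(24, 3, 126) == 1008 -> over the limit, so
 * ov5640_compute_sys_clk() now returns 0 and the search moves on to a
 * larger sysdiv instead of using this multiplier.
 */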


Re: [PATCH v4 01/12] media: ov5640: Adjust the clock based on the expected rate

2018-10-19 Thread Maxime Ripard
Hi!

On Thu, Oct 18, 2018 at 03:46:05PM +0200, jacopo mondi wrote:
> Hello Maxime,
> 
> On Thu, Oct 18, 2018 at 11:29:12AM +0200, Maxime Ripard wrote:
> > Hi Jacopo,
> >
> > Thanks for reviewing this patch
> >
> > On Tue, Oct 16, 2018 at 06:54:50PM +0200, jacopo mondi wrote:
> > > > +static unsigned long ov5640_compute_sys_clk(struct ov5640_dev *sensor,
> > > > +   u8 pll_prediv, u8 pll_mult,
> > > > +   u8 sysdiv)
> > > > +{
> > > > +   unsigned long rate = clk_get_rate(sensor->xclk);
> > >
> > > The clock rate is stored in sensor->xclk at probe time, no need to
> > > query it every iteration.
> >
> > From a clk API point of view though, there's nothing that guarantees
> > that the clock rate hasn't changed between the probe and the time
> > where this function is called.
> 
> Correct, bell, it can be queried in the caller and re-used here :)
> >
> > I appreciate that we're probably connected to an oscillator, but even
> > then, on the Allwinner SoCs we've had the issue recently that one
> > oscillator feeding the BT chip was actually had a muxer, with each
> > option having a slightly different rate, which was bad enough for the
> > BT chip to be non-functional.
> >
> > I can definitely imagine the same case happening here for some
> > SoCs. Plus, the clock framework will cache the rate as well when
> > possible, so we're not losing anything here.
> 
> I see, so please ignore this comment :)
> 
> >
> > > > +
> > > > +   return rate / pll_prediv * pll_mult / sysdiv;
> > > > +}
> > > > +
> > > > +static unsigned long ov5640_calc_sys_clk(struct ov5640_dev *sensor,
> > > > +unsigned long rate,
> > > > +u8 *pll_prediv, u8 *pll_mult,
> > > > +u8 *sysdiv)
> > > > +{
> > > > +   unsigned long best = ~0;
> > > > +   u8 best_sysdiv = 1, best_mult = 1;
> > > > +   u8 _sysdiv, _pll_mult;
> > > > +
> > > > +   for (_sysdiv = OV5640_SYSDIV_MIN;
> > > > +_sysdiv <= OV5640_SYSDIV_MAX;
> > > > +_sysdiv++) {
> > > > +   for (_pll_mult = OV5640_PLL_MULT_MIN;
> > > > +_pll_mult <= OV5640_PLL_MULT_MAX;
> > > > +_pll_mult++) {
> > > > +   unsigned long _rate;
> > > > +
> > > > +   /*
> > > > +* The PLL multiplier cannot be odd if above
> > > > +* 127.
> > > > +*/
> > > > +   if (_pll_mult > 127 && (_pll_mult % 2))
> > > > +   continue;
> > > > +
> > > > +   _rate = ov5640_compute_sys_clk(sensor,
> > > > +  
> > > > OV5640_PLL_PREDIV,
> > > > +  _pll_mult, 
> > > > _sysdiv);
> > >
> > > I'm under the impression that a system clock slower than the requested
> > > one, even if more accurate, is not good.
> > >
> > > I'm still working on understanding how all CSI-2 related timing
> > > parameters play together, but since the system clock is calculated
> > > from the pixel clock (which comes from the frame dimensions, bpp, and
> > > rate), and it is then used to calculate the MIPI BIT clock frequency,
> > > I think it would be better to be a little faster than a bit slower,
> > > otherwise the serial lane clock wouldn't be fast enough to output
> > > frames generated by the sensor core (or maybe it would just decrease
> > > the frame rate and that's it, but I don't think it is just this).
> > >
> > > What do you think of adding the following here:
> > >
> > > if (_rate < rate)
> > > continue
> >
> > I really don't know MIPI-CSI2 enough to be able to comment on your
> > concerns, but when reaching the end of the operating limit of the
> > clock, it would prevent us from having any rate at all, which seems
> > bad too.
> 
Are you referring to the 1GHz limit of the (xvclk / pre_div * mult)
> output here? If that's your concern we should adjust the requested
> SYSCLK rate then (and I added a check for that in my patches on top of
> yours, but it could be improved to be honest, as it just refuses the
> current rate, while it should increment the pre_divider instead, now
> that I think better about that).

I meant to the limits of the PCLK / MIPI bit clock, so the rate we are
expected to reach. But I really don't have any opinion on this, so
I'll just merge your suggestion for the next version.

> >
> > > > +   if (abs(rate - _rate) < abs(rate - best)) {
> > > > +   best = _rate;
> > > > +   best_sysdiv = _sysdiv;
> > > > +   best_mult = _pll_mult;
> > > > +   }
> > > > +
> > > > +   

Re: [PATCH v3 01/16] media: imx: add mem2mem device

2018-10-19 Thread Philipp Zabel
Hi Tim,

On Thu, 2018-10-18 at 15:53 -0700, Tim Harvey wrote:
[...]
> Philipp,
> 
> Thanks for submitting this!
> 
> I'm hoping this lets us use non-IMX capture devices along with the IMX
> media controller entities to so we can use hardware
> CSC,scaling,pixel-format-conversions and ultimately coda based encode.
> 
> I've built this on top of linux-media and see that it registers as
> /dev/video8 but I'm not clear how to use it? I don't see it within the
> media controller graph.

It's a V4L2 mem2mem device that can be handled by the GstV4l2Transform
element, for example. GStreamer should create a v4l2video8convert
element of that type.

The mem2mem device is not part of the media controller graph on purpose.
There is no interaction with any of the entities in the media controller
graph apart from the fact that the IC PP task we are using for mem2mem
scaling is sharing hardware resources with the IC PRP tasks used for the
media controller scaler entitites.

regards
Philipp


Re: i.MX6 IPU CSI analog video input on Ventana

2018-10-19 Thread Philipp Zabel
On Wed, 2018-10-17 at 14:33 -0700, Steve Longerbeam wrote:
[...]
> > I'm also interested in looking at Philipps' 'i.MX media mem2mem
> > scaler' series (https://patchwork.kernel.org/cover/10603881/) and am
> > wondering if anyone has some example pipelines showing that in use.
> > I'm hoping that is what is needed to be able to use hardware
> > scaling/CSC and coda based encoding on streams from v4l2 PCI capture
> > devices.
> 
> Yes exactly, I'll let Philipp answer. I'm also interested in the gstreamer
> element needed to make use of h/w scaling/CSC from the mem2mem
> device.

GStreamer should create a GstV4l2Transform element "v4l2videoXconvert"
for the /dev/videoX mem2mem scaler device.

> For coda encode, my understanding is that the v4l2h264enc element will
> make use of coda h/w encode, something like this example which encodes to
> a h.264 file (I haven't verified this works, still need to build a later 
> version of gst-plugins-good that has the vl2h264enc support):
> 
> gst-launch-1.0 v4l2src io-mode=dmabuf device=/dev/video$dev !\ 
> "video/x-raw,format=$fmt,width=$w,height=$h"  ! \
> v4l2h264enc output-io-mode=dmabuf-import  ! queue ! matroskamux ! \
> filesink location=$filename

With GStreamer 1.14 the capture side io-mode parameter is not necessary
anymore to export dmabufs.
The output-io-mode parameter is currently still needed though, as the
V4L2 elements don't support negotiating dmabuf using caps via
video/x-raw(memory:DMABuf) yet.

Also there's a h264parse missing to convert the video/x-h264,stream-
format=byte-stream from v4l2h264enc to video/x-h264,stream-format=avc as
required by matroskamux:

gst-launch-1.0 \
v4l2src ! \
v4l2video10convert output-io-mode=dmabuf-import ! \
v4l2h264enc output-io-mode=dmabuf-import ! \
h264parse ! \
matroskamux ! \
filesink

> > Lastly, is there any hope to use IMX6 hardware compositing to say
> > stitch together multiple streams from a v4l2 PCI capture device into a
> > single stream for coda based hw encoding?
> 
> The IPUv3 Image Converter has a combining unit that can combine pixels from
> two images, but there is no support for that in mainline AFAIK.

I don't think there is any V4L2 API for compositing yet.

regards
Philipp


[PATCH] cec: keep track of outstanding transmits

2018-10-19 Thread Hans Verkuil
I noticed that repeatedly running 'cec-ctl --playback' would occasionally
select 'Playback Device 2' instead of 'Playback Device 1', even though there
were no other Playback devices in the HDMI topology. This happened both with
'real' hardware and with the vivid CEC emulation, suggesting that this was an
issue in the core code that claims a logical address.

What 'cec-ctl --playback' does is to first clear all existing logical addresses,
and immediately after that configure the new desired device type.

The core code will poll the logical addresses trying to find a free address.
When found it will issue a few standard messages as per the CEC spec and return.
Those messages are queued up and will be transmitted asynchronously.

What happens is that if you run two 'cec-ctl --playback' commands in quick
succession, there is still a message from the first cec-ctl command being
transmitted when you reconfigure the adapter again in the second cec-ctl command.

When the logical addresses are cleared, then all information about outstanding
transmits inside the CEC core is also cleared, and the core is no longer aware
that there is still a transmit in flight.

When the hardware finishes the transmit it calls transmit_done and the CEC core
thinks it is actually in response of a POLL messages that is trying to find a
free logical address. The result of all this is that the core thinks that the
logical address for Playback Device 1 is in use, when it is really an earlier
transmit that ended.

The main transmit thread looks at adap->transmitting to check if a transmit
is in progress, but that is set to NULL when the adapter is unconfigured.
adap->transmitting represents the view of userspace, not that of the hardware.
So when unconfiguring the adapter the message is marked aborted from the point
of view of userspace, but seen from the PoV of the hardware it is still ongoing.

So introduce a new bool transmit_in_progress that represents the hardware state
and use that instead of adap->transmitting. Now the CEC core waits until the
hardware finishes the transmit before starting a new transmit.

Signed-off-by: Hans Verkuil 
---
This too should go to the stable 4.18 branch.
---
diff --git a/drivers/media/cec/cec-adap.c b/drivers/media/cec/cec-adap.c
index 31d1f4ab915e..fadc1a54ae5a 100644
--- a/drivers/media/cec/cec-adap.c
+++ b/drivers/media/cec/cec-adap.c
@@ -455,7 +455,7 @@ int cec_thread_func(void *_adap)
(adap->needs_hpd &&
 (!adap->is_configured && 
!adap->is_configuring)) ||
kthread_should_stop() ||
-   (!adap->transmitting &&
+   (!adap->transmit_in_progress &&
 !list_empty(&adap->transmit_queue)),
msecs_to_jiffies(CEC_XFER_TIMEOUT_MS));
timeout = err == 0;
@@ -463,7 +463,7 @@ int cec_thread_func(void *_adap)
/* Otherwise we just wait for something to happen. */
wait_event_interruptible(adap->kthread_waitq,
kthread_should_stop() ||
-   (!adap->transmitting &&
+   (!adap->transmit_in_progress &&
 !list_empty(&adap->transmit_queue)));
}

@@ -488,6 +488,7 @@ int cec_thread_func(void *_adap)
pr_warn("cec-%s: message %*ph timed out\n", adap->name,
adap->transmitting->msg.len,
adap->transmitting->msg.msg);
+   adap->transmit_in_progress = false;
adap->tx_timeouts++;
/* Just give up on this. */
cec_data_cancel(adap->transmitting,
@@ -499,7 +500,7 @@ int cec_thread_func(void *_adap)
 * If we are still transmitting, or there is nothing new to
 * transmit, then just continue waiting.
 */
-	if (adap->transmitting || list_empty(&adap->transmit_queue))
+	if (adap->transmit_in_progress || list_empty(&adap->transmit_queue))
goto unlock;

/* Get a new message to transmit */
@@ -545,6 +546,8 @@ int cec_thread_func(void *_adap)
if (adap->ops->adap_transmit(adap, data->attempts,
				     signal_free_time, &data->msg))
cec_data_cancel(data, CEC_TX_STATUS_ABORTED);
+   else
+   adap->transmit_in_progress = true;

 unlock:
	mutex_unlock(&adap->lock);
@@ -575,14 +578,17 @@ void cec_transmit_done_ts(struct cec_adapter *adap, u8 
status,
data = adap->transmitting;
if (!data) {
/*
-* This can happen if a transmit was issued and the cable is
+* This might happen if a transmit was issued 

Re: [PATCH 2/2] vicodec: Implement spec-compliant stop command

2018-10-19 Thread Hans Verkuil
On 10/18/2018 06:08 PM, Ezequiel Garcia wrote:
> Set up a statically-allocated, dummy buffer to
> be used as flush buffer, which signals
> a encoding (or decoding) stop.
> 
> When the flush buffer is queued to the OUTPUT queue,
> the driver will send an V4L2_EVENT_EOS event, and
> mark the CAPTURE buffer with V4L2_BUF_FLAG_LAST.

I'm confused. What is the current driver doing wrong? It is already
setting the LAST flag AFAIK. I don't see why a dummy buffer is needed.

Regards,

Hans

> 
> With this change, it's possible to run a pipeline to completion:
> 
> gst-launch-1.0 videotestsrc num-buffers=10 ! v4l2fwhtenc ! v4l2fwhtdec ! 
> fakevideosink
> 
> Signed-off-by: Ezequiel Garcia 
> ---
>  drivers/media/platform/vicodec/vicodec-core.c | 80 ++-
>  1 file changed, 44 insertions(+), 36 deletions(-)
> 
> diff --git a/drivers/media/platform/vicodec/vicodec-core.c 
> b/drivers/media/platform/vicodec/vicodec-core.c
> index a2c487b4b80d..4ed4dae10e30 100644
> --- a/drivers/media/platform/vicodec/vicodec-core.c
> +++ b/drivers/media/platform/vicodec/vicodec-core.c
> @@ -113,7 +113,7 @@ struct vicodec_ctx {
>   struct v4l2_ctrl_handler hdl;
>  
>   struct vb2_v4l2_buffer *last_src_buf;
> - struct vb2_v4l2_buffer *last_dst_buf;
> + struct vb2_v4l2_buffer  flush_buf;
>  
>   /* Source and destination queue data */
>   struct vicodec_q_data   q_data[2];
> @@ -220,6 +220,7 @@ static void device_run(void *priv)
>   struct vicodec_dev *dev = ctx->dev;
>   struct vb2_v4l2_buffer *src_buf, *dst_buf;
>   struct vicodec_q_data *q_out;
> + bool flushing;
>   u32 state;
>  
>   src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
> @@ -227,26 +228,36 @@ static void device_run(void *priv)
>   q_out = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
>  
>   state = VB2_BUF_STATE_DONE;
> - if (device_process(ctx, src_buf, dst_buf))
> +
> +	flushing = (src_buf == &ctx->flush_buf);
> + if (!flushing && device_process(ctx, src_buf, dst_buf))
>   state = VB2_BUF_STATE_ERROR;
> - ctx->last_dst_buf = dst_buf;
>  
>   spin_lock(ctx->lock);
> - if (!ctx->comp_has_next_frame && src_buf == ctx->last_src_buf) {
> + if (!flushing) {
> + if (!ctx->comp_has_next_frame && src_buf == ctx->last_src_buf) {
> + dst_buf->flags |= V4L2_BUF_FLAG_LAST;
> +			v4l2_event_queue_fh(&ctx->fh, &eos_event);
> + }
> +
> + if (ctx->is_enc) {
> + src_buf->sequence = q_out->sequence++;
> + src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> + v4l2_m2m_buf_done(src_buf, state);
> +		} else if (vb2_get_plane_payload(&src_buf->vb2_buf, 0)
> + == ctx->cur_buf_offset) {
> + src_buf->sequence = q_out->sequence++;
> + src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> + v4l2_m2m_buf_done(src_buf, state);
> + ctx->cur_buf_offset = 0;
> + ctx->comp_has_next_frame = false;
> + }
> + } else {
> + src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> +		vb2_set_plane_payload(&dst_buf->vb2_buf, 0, 0);
> 		dst_buf->flags |= V4L2_BUF_FLAG_LAST;
> 		v4l2_event_queue_fh(&ctx->fh, &eos_event);
>   }
> - if (ctx->is_enc) {
> - src_buf->sequence = q_out->sequence++;
> - src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> - v4l2_m2m_buf_done(src_buf, state);
> -	} else if (vb2_get_plane_payload(&src_buf->vb2_buf, 0) ==
> ctx->cur_buf_offset) {
> - src_buf->sequence = q_out->sequence++;
> - src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> - v4l2_m2m_buf_done(src_buf, state);
> - ctx->cur_buf_offset = 0;
> - ctx->comp_has_next_frame = false;
> - }
>   v4l2_m2m_buf_done(dst_buf, state);
>   ctx->comp_size = 0;
>   ctx->comp_magic_cnt = 0;
> @@ -293,6 +304,8 @@ static int job_ready(void *priv)
>   src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
>   if (!src_buf)
>   return 0;
> +	if (src_buf == &ctx->flush_buf)
> + return 1;
> 	p_out = vb2_plane_vaddr(&src_buf->vb2_buf, 0);
> 	sz = vb2_get_plane_payload(&src_buf->vb2_buf, 0);
>   p = p_out + ctx->cur_buf_offset;
> @@ -770,21 +783,6 @@ static int vidioc_s_fmt_vid_out(struct file *file, void 
> *priv,
>   return ret;
>  }
>  
> -static void vicodec_mark_last_buf(struct vicodec_ctx *ctx)
> -{
> - static const struct v4l2_event eos_event = {
> - .type = V4L2_EVENT_EOS
> - };
> -
> - spin_lock(ctx->lock);
> - ctx->last_src_buf = v4l2_m2m_last_src_buf(ctx->fh.m2m_ctx);
> - if (!ctx->last_src_buf && ctx->last_dst_buf) {
> - ctx->last_dst_buf->flags |= V4L2_BUF_FLAG_LAST;
> - v4l2_event_queue_fh(>fh, 

Re: [PATCH 1/2] vicodec: Have decoder propagate changes to the CAPTURE queue

2018-10-19 Thread Hans Verkuil
On 10/18/2018 06:08 PM, Ezequiel Garcia wrote:
> The decoder interface (not yet merged) specifies that
> width and height values set on the OUTPUT queue must
> be propagated to the CAPTURE queue.
> 
> This is not enough to comply with the specification,
> which would require to properly support stream resolution
> changes detection and notification.
> 
> However, it's a relatively small change, which fixes behavior
> required by some applications such as gstreamer.
> 
> With this change, it's possible to run a simple T(T⁻¹) pipeline:
> 
> gst-launch-1.0 videotestsrc ! v4l2fwhtenc ! v4l2fwhtdec ! fakevideosink
> 
> Signed-off-by: Ezequiel Garcia 
> ---
>  drivers/media/platform/vicodec/vicodec-core.c | 15 +++
>  1 file changed, 15 insertions(+)
> 
> diff --git a/drivers/media/platform/vicodec/vicodec-core.c 
> b/drivers/media/platform/vicodec/vicodec-core.c
> index 1eb9132bfc85..a2c487b4b80d 100644
> --- a/drivers/media/platform/vicodec/vicodec-core.c
> +++ b/drivers/media/platform/vicodec/vicodec-core.c
> @@ -673,6 +673,13 @@ static int vidioc_s_fmt(struct vicodec_ctx *ctx, struct 
> v4l2_format *f)
>   q_data->width = pix->width;
>   q_data->height = pix->height;
>   q_data->sizeimage = pix->sizeimage;
> +
> + /* Propagate changes to CAPTURE queue */
> + if (!ctx->is_enc && V4L2_TYPE_IS_OUTPUT(f->type)) {

Do we need !ctx->is_enc? Isn't this the same for both decoder and encoder?

> + ctx->q_data[V4L2_M2M_DST].width = pix->width;
> + ctx->q_data[V4L2_M2M_DST].height = pix->height;
> + ctx->q_data[V4L2_M2M_DST].sizeimage = pix->sizeimage;

This is wrong: you are copying the sizeimage for the compressed format as the
sizeimage for the raw format, which is quite different.

I think you need to make a little helper function that can update the 
width/height
of a particular queue and that can calculate the sizeimage correctly.
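
Something along these lines, perhaps (a rough, untested sketch; the helper
name is made up and it assumes q_data keeps the fwht pixfmt info pointer
with sizeimage_mult/sizeimage_div fields; the compressed queue would still
need its own sizeimage calculation):

static void vicodec_update_size(struct vicodec_ctx *ctx,
				enum v4l2_buf_type type,
				unsigned int width, unsigned int height)
{
	struct vicodec_q_data *q_data = get_q_data(ctx, type);
	const struct v4l2_fwht_pixfmt_info *info = q_data->info;

	q_data->width = width;
	q_data->height = height;
	/* recompute sizeimage from the raw format's multipliers */
	q_data->sizeimage = width * height *
		info->sizeimage_mult / info->sizeimage_div;
}

The decoder's OUTPUT s_fmt could then call this for V4L2_BUF_TYPE_VIDEO_CAPTURE
as well, instead of copying the compressed sizeimage across.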

Regards,

Hans

> + }
>   break;
>   case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
>   case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
> @@ -693,6 +700,14 @@ static int vidioc_s_fmt(struct vicodec_ctx *ctx, struct 
> v4l2_format *f)
>   q_data->width = pix_mp->width;
>   q_data->height = pix_mp->height;
>   q_data->sizeimage = pix_mp->plane_fmt[0].sizeimage;
> +
> + /* Propagate changes to CAPTURE queue */
> + if (!ctx->is_enc && V4L2_TYPE_IS_OUTPUT(f->type)) {
> + ctx->q_data[V4L2_M2M_DST].width = pix_mp->width;
> + ctx->q_data[V4L2_M2M_DST].height = pix_mp->height;
> + ctx->q_data[V4L2_M2M_DST].sizeimage =
> + pix_mp->plane_fmt[0].sizeimage;
> + }
>   break;
>   default:
>   return -EINVAL;
> 



Re: RFC: kernelCI media subsystem pilot (Test results for gtucker/kernelci-media - gtucker-kernelci-media-001-6-g1b2c6e5844d8)

2018-10-19 Thread Hans Verkuil
On 10/18/2018 09:23 PM, Ezequiel Garcia wrote:
> Hi everyone,
> 
> At Collabora, as part of our kernelci work, we are doing
> research on kernel functional testing with kernelci.
> 
> For those new to kernelci, see 
> https://github.com/kernelci/kernelci-doc/wiki/KernelCI 
> and https://kernelci.org/.
> 
> The goal is to lay down the infrastructure required to make
> automated test coverage an integral part of our feature
> and bugfix development process.
> 
> So, as a first attempt, we've decided to extend kernelci's v4l2 test
> plan support, paving the way for extending other subsystems' test
> plans.
> 
> Currently, kernelci checks a list of branches every hour to see if
> anything changed. For each branch that has changed since the last run,
> it triggers builds, boots, tests and reports.
> 
> For this pilot, we've decided to target just a few devices:
> qemu with vivid, rk3399-gru-kevin and rk3288-veyron-jaq
> with uvc.

It's running v4l2-compliance, right?

Looking at the test cases, they appear in the reverse order from that in which
v4l2-compliance performs them, which is a bit odd.
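
For comparison, a plain local run on the target should show the order
v4l2-compliance itself uses, e.g. (options from memory, so double-check
them):

  v4l2-compliance -d /dev/video0 -s

Keeping that order in the report would make the two easier to compare.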

And if we include uvc in the testing, then I need to prioritize the work I
started for uvc to remove the last FAILs.

Regards,

Hans

> 
> We'd like to get some early feedback on this, so we are sending
> an example of what a kernelci report would look like, to trigger
> some discussion around the direction this should take.
> 
> Thanks!
> 
> ===
> Test results for:
>   Tree:    gtucker
>   Branch:  kernelci-media
>   Kernel:  gtucker-kernelci-media-002-2-gaa27eb0392c7
>   URL: https://gitlab.collabora.com/gtucker/linux.git
>   Commit:  aa27eb0392c70adec713e911a9b5267a1d853624
>   Test plans: v4l2
> 
> Summary
> ---
> 3 test groups results
> 
> 1  | v4l2   | rk3399-gru-kevin   | arm64 |  49 total:  17 PASS   4 FAIL  28 SKIP
> 2  | v4l2   | rk3288-veyron-jaq  | arm   |  49 total:  17 PASS   4 FAIL  28 SKIP
> 3  | v4l2   | qemu   | arm64 | 168 total: 102 PASS   0 FAIL  66 SKIP
> 
> 
> Tests
> -
> 
> 1  | v4l2   | rk3399-gru-kevin   | arm64 |  49 total:  17 PASS   4 FAIL  28 SKIP
> 
>   Config:  defconfig
>   Lab Name: lab-collabora-dev
>   Date: 2018-10-18 18:48:52.426000
>   TXT log: 
> http://staging-storage.kernelci.org/gtucker/kernelci-media/gtucker-kernelci-media-002-2-gaa27eb0392c7/arm64/defconfig/lab-collabora-dev
> /v4l2-rk3399-gru-kevin.txt
>   HTML log:
> http://staging-storage.kernelci.org/gtucker/kernelci-media/gtucker-kernelci-media-002-2-gaa27eb0392c7/arm64/defconfig/lab-collabora-dev
> /v4l2-rk3399-gru-kevin.html
>   Rootfs:  
> http://staging-storage.kernelci.org/images/rootfs/debian/stretchv4l2/20181017.1//arm64/rootfs.cpio.gz
>   Test Git: git://linuxtv.org/v4l-utils.git
>   Test Commit: e889b0d56757b9a74eb8c4c8cc2ebd5b18e228b2
> 
> 
>   Test cases:
> 
> * MMAP: FAIL
> * blocking-wait: PASS
> * read/write: SKIP
> * read/write: SKIP
> * read/write: SKIP
> * VIDIOC_EXPBUF: PASS
> * VIDIOC_REQBUFS/CREATE_BUFS/QUERYBUF: PASS
> * VIDIOC_TRY_DECODER_CMD: SKIP
> * VIDIOC_G_ENC_INDEX: SKIP
> * VIDIOC_TRY_ENCODER_CMD: SKIP
> * Scaling: SKIP
> * Composing: SKIP
> * Cropping: SKIP
> * VIDIOC_G_SLICED_VBI_CAP: SKIP
> * VIDIOC_S_FMT: PASS
> * VIDIOC_TRY_FMT: PASS
> * VIDIOC_G_FMT: PASS
> * VIDIOC_G_FBUF: SKIP
> * VIDIOC_G/S_PARM: FAIL
> * VIDIOC_ENUM_FMT/FRAMESIZES/FRAMEINTERVALS: PASS
> * VIDIOC_G/S_JPEGCOMP: SKIP
> * VIDIOC_UNSUBSCRIBE_EVENT/DQEVENT: PASS
> * VIDIOC_G/S/TRY_EXT_CTRLS: FAIL
> * VIDIOC_G/S_CTRL: PASS
> * VIDIOC_QUERYCTRL: PASS
> * VIDIOC_QUERY_EXT_CTRL/QUERYMENU: FAIL
> * VIDIOC_G/S_EDID: SKIP
> * VIDIOC_DV_TIMINGS_CAP: SKIP
> * VIDIOC_ENUM/G/S/QUERY_DV_TIMINGS: SKIP
> * VIDIOC_ENUM/G/S/QUERY_STD: SKIP
> * VIDIOC_G/S_AUDOUT: SKIP
> * VIDIOC_G/S/ENUMOUTPUT: SKIP
> * VIDIOC_ENUMAUDOUT: SKIP
> * VIDIOC_G/S_FREQUENCY: SKIP
> * VIDIOC_G/S_MODULATOR: SKIP
> * VIDIOC_G/S_AUDIO: SKIP
> * VIDIOC_G/S/ENUMINPUT: PASS
> * VIDIOC_ENUMAUDIO: SKIP
> * VIDIOC_S_HW_FREQ_SEEK: SKIP
> * VIDIOC_G/S_FREQUENCY: SKIP
> * VIDIOC_G/S_TUNER/ENUM_FREQ_BANDS: SKIP
> * VIDIOC_LOG_STATUS: SKIP
> * VIDIOC_DBG_G/S_REGISTER: SKIP
> * for-unlimited-opens: PASS
> * VIDIOC_G/S_PRIORITY: PASS
> * VIDIOC_QUERYCAP: PASS
> * second-/dev/video0-open: PASS
> * VIDIOC_QUERYCAP: PASS
> * MC-information-see-Media-Driver-Info-above: PASS
> 
> 
> 
> 2  | v4l2   | rk3288-veyron-jaq  | arm   |  49 total:  17 PASS   4 FAIL  28 SKIP
> 
>   Config:  multi_v7_defconfig
>   Lab Name: lab-collabora-dev
>   Date: 2018-10-18 17:00:41.724000
>   TXT log: 
> http://staging-storage.kernelci.org/gtucker/kernelci-media/gtucker-kernelci-media-002-2-gaa27eb0392c7/arm/multi_v7_defconfig/lab-collab
> 

[GIT PULL FOR v4.20] request_api: Rename vb2_m2m_request_queue

2018-10-19 Thread Hans Verkuil
Just one patch that renames vb2_m2m_request_queue to v4l2_m2m_request_queue.

Regards,

Hans

The following changes since commit e4183d3256e3cd668e899d06af66da5aac3a51af:

  media: dt-bindings: Document the Rockchip VPU bindings (2018-10-05 07:00:43 
-0400)

are available in the Git repository at:

  git://linuxtv.org/hverkuil/media_tree.git tags/br-v4.20j

for you to fetch changes up to f25cd39d7b98d953e6868ce969f3862f7c2e8425:

  media: Rename vb2_m2m_request_queue -> v4l2_m2m_request_queue (2018-10-19 
08:55:20 +0200)


Tag branch


Ezequiel Garcia (1):
  media: Rename vb2_m2m_request_queue -> v4l2_m2m_request_queue

 drivers/media/platform/vim2m.c  | 2 +-
 drivers/media/v4l2-core/v4l2-mem2mem.c  | 4 ++--
 drivers/staging/media/sunxi/cedrus/cedrus.c | 2 +-
 include/media/v4l2-mem2mem.h| 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)