Hi Murali,

On Wednesday 22 April 2009 19:17:28 Karicheri, Muralidharan wrote:
> Hi Laurent,
>
> Thanks for your excellent review which, I believe, will help in improving
> the driver as we submit them to open source kernel.

You're welcome. Sorry for the delay, I was abroad last week and I've just 
barely recovered from jetlag.

> Based on this, I think I need to send some more iterations of patches on
> this mailing list before submitting them to the v4l2 mailing list. Please
> participate in the review so that I can close out the outstanding issues and
> submit it to the v4l2 mailing list for review.

I'll be glad to help.

> I will also migrate the driver to sub device model before submitting it to
> v4l2 mailing list. Since we have a pretty big list of comments, I have
> included only those where I have a disagreement or more discussion is
> needed (if you want to see all comments, I have a saved copy which can be
> sent to you).

No problem. I've done the same. Mails get way too long otherwise.

> > here's my (humble) review of your VPFE capture driver. Before I start, I'd
> > like to thank you for your work on this v4l driver and contribution to the
> > Linux kernel.
> >
> > As I'm not very familiar with the v4l2-int-device model and the VPFE
> > hardware itself, some of my questions/remarks might be out of place.
> > Please feel free to (kindly) point me to my mistakes.
>
> I would like to give you some idea about our hardware to make things
> clearer for discussion.
>
> There are many variants of DaVinci processors, of which we need to
> add support for DM6446, DM355 & DM365 through this series of patches. I hope
> someone from the community augments this by adding support for other SoCs
> (example DM6443) which are currently not supported in this patch. The Video
> Processing Front End (VPFE) of these SoCs has the following major IPs to help
> in image processing.
>
> INPUT
> I/f
>
>    |---------|
>    | AF, AEW |---> SDRAM
>    |---------|
>         |
>     |------|       |--------|    |-------|    |--------|
> --->| CCDC |---+-->|IPIPEIF |--->|IPIPE  |--->|Resizer |--|
>     |------|   |   |--------|    |-------|    |--------|  |
>                |       ^             |                    |
>                V       |             V                    V
>              SDRAM   SDRAM     |-----------|            SDRAM
>                                | Histogram |
>                                |-----------|
>                                      |
>                                      V
>                                    SDRAM
> VPFE inputs
>
> Raw Bayer RGB (8-16 bits)/
> Raw YUV (BT.656, BT.1120),
> Raw YUV (8bit muxed/ 16 bit with separate HD/VD/FIELD)
>
> CCDC (CCD controller)
> Has functional IPs to process raw Bayer RGB and raw YUV data.
> Some of the functional IPs are used for pre-processing the image data
> before sending it to other IPs. Examples: defect correction (to remove/correct
> dead pixel data from input data), data formatting (read out different
> raw Bayer RGB patterns and convert them to standard Bayer RGB data as required
> by VPFE IPs), color space conversion (CMYG to Bayer RGB), to name a few.
>
> IPIPEIF
> This does some more preprocessing before sending data to IPIPE. It also
> generates timing signals when the input data comes from SDRAM. This is not
> available in earlier SoCs like DM6446, but is available on newer SoCs such as
> DM355, DM365 etc.
>
> IPIPE
> -----
> This has many IPs to do image conversion to YUV and image tuning. Image
> tuning refers to white balance, noise filter, RGB to RGB conversion,
> luminance processing, edge enhancements etc... This is not available on
> DM6446, but available on newer SoCs such as DM355, DM365 etc. On DM6446 we
> have the Preview Engine to do similar functions.
>
> Resizer
> -------
> Upscales or downscales the image to the desired output resolution. It has
> different filters to apply when doing scaling.

Thanks for the summary. Does the H.264 compression IP block in the DM365 fit 
somewhere in there, or is it completely separate ?

> > > The driver uses ccdc_hw_device interface to configure CCDC based on the
> > > interface requirement of the slave decoder device.
> >
> > If I'm not mistaken, the vpfe_capture.ko module depends on either
> > ccdc_dm355.ko or ccdc_davinci.ko (which should really be named
> > ccdc_dm6446.ko, but you've already been told about that) but not both, as
> > those modules conflict. Is there a reason not to combine both the vpfe and
> > the ccdc drivers into a single module ?
>
> [MK] The reason they are developed as independent modules is to allow
> re-use across other OSes if needed. I am not sure why you want to combine
> them. Could you elaborate?

See below.

> > You should also probably call ccdc functions directly instead of going
> > through the ccdc_hw_dev structure, and pass them a pointer to a ccdc data
> > structure instead of storing everything as global variables in the
> > ccdc_dm355.c and ccdc_davinci.c source files.
>
> [MK] As I see it, vpfe_capture bridge driver needs to use different CCDC
> modules on each of the different SoCs that we support. Using function ptrs
> helps us to assign NULL to function ptrs on some SoCs where the specific
> function is not required to be implemented. But we could consider functions
> that are called from interrupt context available as global inline function
> to make it more efficient. Alternatively, using global functions, how
> do I call a function from bridge driver if that is applicable only to one
> SoC and not applicable for others?
>
> I thought following is not acceptable
>
> #ifdef CONFIG_ARCH_DAVINCI_DM644x (config variable for DM6446 SoC)
>          call specific function
> #endif

You're right. I overlooked the fact that some functions were not mandatory. A 
simple solution, that would allow the VPFE code not to test for NULL function 
pointers, would be to implement dummy functions (basically just returning a 
fixed value) for operations not supported by a given CCDC module. I'll let you 
decide if this is acceptable and desirable.
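
To make the dummy-operation idea concrete, here is a small userspace sketch. The struct and function names are made up for illustration, not the actual ccdc_hw_device interface:

```c
#include <stddef.h>

/* Hypothetical subset of the ops table; field names are illustrative. */
struct ccdc_hw_ops {
	int (*open)(void);
	int (*set_params)(int param);
};

/* Dummy op for modules that don't implement set_params: just report
 * success, so callers never need a NULL test. */
static int ccdc_dummy_set_params(int param)
{
	(void)param;
	return 0;
}

static int dm6446_open(void)
{
	return 0;
}

/* A CCDC module that lacks set_params plugs in the dummy instead of
 * leaving the pointer NULL. */
static const struct ccdc_hw_ops dm6446_ops = {
	.open		= dm6446_open,
	.set_params	= ccdc_dummy_set_params,
};

/* The bridge code can then call every op unconditionally. */
int vpfe_apply_params(const struct ccdc_hw_ops *ops, int param)
{
	if (ops->open())
		return -1;
	return ops->set_params(param);
}
```

The cost is a trivial indirect call for unsupported operations; the gain is that every call site in the bridge driver stays branch-free.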

I can think of two ways to support different CCDC modules in the VPFE driver. 
The first one is what you've implemented so far. Only a single CCDC module can 
be loaded as they all export identically named symbols. This makes it 
impossible to build VPFE support directly into a kernel (as opposed to 
dynamically loaded modules) that needs to run on both a DM355 and a DM6446.

The second way is to have the CCDC modules register their ccdc_hw_device (btw, 
the function pointers might be split into a second structure called 
ccdc_hw_ops to be consistent with names in the rest of the kernel) with the 
VPFE driver. This would allow different CCDC modules to be compiled in the 
same kernel. Platform device data passed by board-specific code would be used 
to select the right CCDC module at runtime.

I think the direction currently followed by most architectures is to enable 
building of kernels supporting different boards/devices. This would make the 
second solution preferable. I have no strong opinion on this though.
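
A rough userspace sketch of the registration approach, with all names hypothetical:

```c
#include <string.h>
#include <stddef.h>

/* Each CCDC module would register a descriptor like this, and the
 * bridge driver would look one up by the name passed in platform
 * data from board-specific code. */
struct ccdc_hw_device {
	const char *name;
	/* the ops (or a ccdc_hw_ops pointer) would live here */
};

#define MAX_CCDC_DEVICES 4
static const struct ccdc_hw_device *ccdc_devices[MAX_CCDC_DEVICES];
static int num_ccdc_devices;

/* Called from each CCDC module's init code. */
int vpfe_register_ccdc(const struct ccdc_hw_device *dev)
{
	if (num_ccdc_devices >= MAX_CCDC_DEVICES)
		return -1;
	ccdc_devices[num_ccdc_devices++] = dev;
	return 0;
}

/* Called by the bridge driver at probe time with the name supplied
 * in platform data. */
const struct ccdc_hw_device *vpfe_find_ccdc(const char *name)
{
	int i;

	for (i = 0; i < num_ccdc_devices; i++)
		if (!strcmp(ccdc_devices[i]->name, name))
			return ccdc_devices[i];
	return NULL;
}

/* Two example modules registering themselves: */
static const struct ccdc_hw_device dm355_ccdc = { .name = "dm355_ccdc" };
static const struct ccdc_hw_device dm6446_ccdc = { .name = "dm6446_ccdc" };
```

Since each module exports only its registration call, several CCDC modules can then coexist in one kernel image without symbol clashes.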

> > > +#include <linux/version.h>
> > > +#include <media/v4l2-common.h>
> > > +#include <linux/io.h>
> > > +#include <mach/cpu.h>
> > > +#include <media/davinci/vpfe_capture.h>
> > > +#include <media/tvp514x.h>
> >
> > media/tvp514x doesn't seem to be required.
>
> [MK]Ok. I will investigate this.
>
> > > +static int debug;
> > > +static char *ch0_decoder = "TVP514X";
> > > +static u32 ch0_numbuffers = 3;
> > > +static u32 ch0_bufsize = (720 * 576 * 2);
> >
> > The buffer size and the number of buffers should be controlled by the
> > userspace application. You can restrict the value to a driver-specific
> > range, but don't use module parameters to select them.
>
> I think this is used across all our drivers (including the dm6467
> driver that is being reviewed in the list now)
>
> The reason for these variables as module parameter is as follows:-
>
> Our drivers support capture resolutions that vary from D1 to
> HD (1920x1080). On image sensor capture, we may support capture of
> even higher resolution. We support both MMAP and PTR based IO.
> If MMAP is used, we would like to pre-allocate buffers for application
> since allocating them on the fly results in fragmentation. So setting the
> number of buffers as a module parameters make sense. Also if a
> particular implementation doesn't want to use these pre-allocated
> buffers (use ptr IO instead), the parameter can be set to zero to
> force the driver not to allocate any buffers and thereby reduce system
> memory usage. I have currently removed PTR based IO in this patch
> since our implementation was a hack based on an older version of the
> kernel. If more buffers than pre-allocated are requested, then the
> driver tries to allocate more and frees them when the device
> is closed (this is how we do it in our internal driver, which will
> get ported to open source later. Priority now is to get the driver
> into open source with a minimum set of features and add more features
> later). Similar reasoning applies to buffer size. If the user application
> is using at most D1 resolution capture, there is no need to pre-allocate
> big buffers, and this can be controlled at boot/insmod time.

So the buffer count and size parameters are only used to preallocate the 
buffers ? In that case maybe you could use a single parameter to set the 
preallocated memory size.
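
For instance (sketch only, the parameter name is invented), the buffer count then simply falls out of the pool size and the per-buffer size:

```c
/* Hypothetical single module parameter: total bytes to preallocate
 * for the MMAP buffer pool (0 disables preallocation entirely). */
static unsigned long ch0_prealloc_bytes = 3UL * 720 * 576 * 2;

/* Derive how many buffers of a given size fit in the pool. */
unsigned int vpfe_prealloc_count(unsigned long buf_bytes)
{
	if (buf_bytes == 0)
		return 0;
	return (unsigned int)(ch0_prealloc_bytes / buf_bytes);
}
```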

> > > +module_param(ch0_decoder, charp, S_IRUGO);
> > > +module_param(ch0_numbuffers, uint, S_IRUGO);
> > > +module_param(ch0_bufsize, uint, S_IRUGO);
> >
> > Do you plan to support chips with multiple channels ? Having the number of
> > channels configurable at compile time makes the code harder to read and
> > review. Unless there is a strong reason not to, I would suggest to assume
> > there is a single channel per VPFE (and a single VPFE per chip), and add
> > multiple channels support later when a real chip will require that. The
> > code currently hardcodes the number of channels to 1, so you will need an
> > additional patch to the VPFE driver anyway.
>
> [MK] Ok. I will investigate and do cleanup.

Thanks. That's an important cleanup in my opinion, it will make the code 
easier to review and integrate. Support for multiple channels, if needed, can 
be implemented later.

[snip]

> > > +static void vpfe_slave_device_unregister(struct v4l2_int_device *s)
> > > +{
> > > +    int index;
> > > +    struct channel_obj *channel = s->u.slave->master->priv;
> > > +
> > > +    for (index = 0; index < VPFE_CAPTURE_NUM_DECODERS; index++) {
> >
> > Newbie question here, why do we need support for multiple decoders ?
>
> [MK] From my diagram, it might be clear to you now. VPFE can interface with
> multiple decoders and application will choose to switch between them as
> desired to capture from different sources.

Ok. This code will be rewritten when migrated to the sub device model anyway, 
so there's no point in reviewing it thoroughly right now.

[snip]

> > > +/*  vpfe_device_register: Used for registering a slave decoder
> > > + *  device with master
> > > + */
> > > +static int vpfe_slave_device_register(struct v4l2_int_device *s)
> > > +{
> > > +    struct channel_obj *channel = s->u.slave->master->priv;
> > > +    struct common_obj *common = &channel->common[VPFE_VIDEO_INDEX];
> >
> > Similar to my remark about multiple channels, do you plan to support chips
> > with multiple "objects" ? The driver currently defines the number of
> > objects as 1, and the generic implementation makes the code harder to read
> > and review. I would suggest assuming a single object (for video data). VBI
> > and HBI support can be added later when a chip will require them. Don't
> > try to overdesign, keep the driver as simple as possible and extend it
> > later.
>
> [MK] Ok. Will clean up the code.

Thanks.

[snip]

> > > +            return -EINVAL;
> > > +
> > > +    if (!channel->numdecoders) {
> > > +            if (!vidioc_int_dev_init(s)) {
> > > +                    channel->current_decoder = 0;
> > > +                    channel->decoder[channel->current_decoder] = s;
> > > +                    v4l2_info(vpfe_dev->driver, "Current decoder is set to"
> > > +                             " %s\n", (s->name));
> > > +            }
> >
> > Can't this case be folded into the generic case inside the 'else' ?
>
> [MK] The code will change when sub-device model is implemented. Could we
> ignore this comment?

Sure. You can ignore all comments related to code that will go away during the 
porting to sub-device (except of course general remarks that apply to the 
driver as a whole such as code formatting).

> > > +    } else {
> > > +            /* search through the array for an empty entry */
> > > +            for (index = 0; index < VPFE_CAPTURE_NUM_DECODERS; index++) {
> > > +                    if (ISNULL(channel->decoder[index])) {
> > > +                            channel->decoder[index] = s;
> > > +                            break;
> > > +                    }
> > > +            }
> > > +            if (index == VPFE_CAPTURE_NUM_DECODERS) {
> > > +                    v4l2_err(vpfe_dev->driver,
> > > +                            "decoder count reached"
> > > +                            " maximum allowed\n");
> > > +                    return -ENOMEM;
> > > +            }
> > > +            if (!strncmp(ch0_decoder, s->name, strlen(ch0_decoder))) {
> > > +                    if (!common->started) {
> > > +                            if (!vidioc_int_dev_init(s)) {
> > > +                                    channel->current_decoder = index;
> > > +                                    v4l2_info(vpfe_dev->driver,
> > > +                                            "Current decoder is"
> > > +                                            " set to %s\n", (s->name));
> > > +                            }
> > > +                    }
> >
> > Shouldn't an error be returned if common->started is true ? The test could
> > be moved at the top of this function.
>
> [MK] Error is to be returned. But the check should be here since the driver
> is currently streaming and the current decoder cannot be changed. However if
> the decoder is not the default one, there is no problem with registration,
> so we cannot move it to the top. Besides, this code is going to change. So
> I will just take care of the error case.

Ok, I'll review it after the porting to sub-device.

[snip]

> > > +    f->type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
> > > +    ccdc_hw_dev.get_image_window(&image_win);
> > > +    f->fmt.pix.width = image_win.width;
> > > +    f->fmt.pix.height = image_win.height;
> >
> > Why don't you pass &f->fmt.pix directly to ccdc_hw_dev.get_image_window ?
>
> [MK] The image window is composed of top, left, width & height. v4l2_rect is
> a good fit for this since pix has other fields that are not retrieved from
> the CCDC. set_image_window() uses a similar arg and it requires top and
> left to be set as well.

Ok.

[snip]

> > > +}
> > > +
> > > +static int vpfe_attach_irq(struct channel_obj *channel)
> > > +{
> > > +    enum ccdc_frmfmt frame_format;
> > > +    int err = 0;
> > > +
> > > +    channel->irq_type = VPFE_USE_CCDC_IRQ;
> > > +
> > > +    switch (channel->irq_type) {
> > > +    case VPFE_USE_CCDC_IRQ:
> >
> > You've just assigned irq_type to VPFE_USE_CCDC_IRQ. This is a bit
> > pointless...
>
> [MK] Actually I removed some code for the initial version. I need to attach
> to the Resizer IRQ when I have the Resizer in the path. So can I keep it
> so that I don't have to re-work it later ?

Then you might want to add a small comment to explain this.

> > > +            {
> > > +                    ccdc_hw_dev.get_frame_format(&frame_format);
> > > +                    if (frame_format == CCDC_FRMFMT_PROGRESSIVE) {
> > > +                            err =
> > > +                                    request_irq(channel->ccdc_irq1,
> > > +                                                vdint1_isr,
> > > +                                                 IRQF_DISABLED,
> > > +                                                 "vpfe_capture1",
> > > +                                                 (void *)&vpfe_obj);
> >
> > Casting to (void *) doesn't need to be explicit in C.
>
> [MK] Ok.
>
> > You don't need to put every argument on its own line.
>
> [MK] Apply this generally across the code. I thought
> someone would appreciate putting them on different lines :)

:-) As long as the code fits in the 80 columns limit, you don't need to split 
it up in too many lines.

[snip]

> > > +            ret = vpfe_lookup_v4l2_pix_format(hw_pix);
> > > +            if (ret >= 0) {
> >
> > You should also try not to mix conventions regarding return values.
> > ccdc_hw_dev.enum_pix seems to return 0 on success and != 0 on error, while
> > vpfe_lookup_v4l2_pix_format returns >= 0 on success and < 0 on error.
> > Adhering to one of those conventions through the code (except of course
> > where it makes sense not to) would improve consistency
>
> [MK] In this case it doesn't make sense since we need an index into the pix
> table to get the pix format.

What I mean is that you could go for the >= 0 on success and < 0 on error 
convention for most cases.

[snip]

> > > +static int vpfe_s_fmt_vid_cap(struct file *file, void *priv,
> > > +                            struct v4l2_format *fmt)
> > > +{
> > > +    struct vpfe_fh *fh = file->private_data;
> > > +    struct channel_obj *channel = fh->channel;
> > > +    struct common_obj *common = NULL;
> > > +    struct v4l2_rect win;
> > > +    struct vpfe_pixel_format *pix_fmts;
> > > +    int ret = 0;
> > > +    common = &(channel->common[VPFE_VIDEO_INDEX]);
> > > +
> > > +    v4l2_dbg(1, debug, vpfe_dev->driver, "VIDIOC_S_FMT\n");
> > > +    /* If streaming is started, return error */
> > > +    if (common->started) {
> > > +            v4l2_err(vpfe_dev->driver, "Streaming is started\n");
> > > +            ret = -EBUSY;
> > > +            goto out;
> >
> > There is no cleanup code under the out label, you can return directly and
> > remove the label.
>
> [MK] In that case, next 3 goto out statements can be removed as well.
> right?

Right. The 'goto out' statements can be replaced by returns. The 'goto 
lock_out' statements need to stay as you must release the mutex before 
returning (and 'lock_out' can be renamed to 'out').
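
To illustrate the rule with a made-up example (a pthread mutex stands in for the kernel mutex, and the function is a sketch, not your actual code):

```c
#include <pthread.h>

/* Error-path pattern: return directly when there is nothing to undo,
 * keep a label only where the mutex must be released on the way out. */
static pthread_mutex_t fmt_lock = PTHREAD_MUTEX_INITIALIZER;
static int streaming_started;

int vpfe_s_fmt_sketch(int bad_format)
{
	int ret = 0;

	if (bad_format)
		return -22;	/* -EINVAL: no cleanup needed, no goto */

	pthread_mutex_lock(&fmt_lock);
	if (streaming_started) {
		ret = -16;	/* -EBUSY */
		goto out;	/* must unlock, so the label stays */
	}
	/* ...apply the new format here... */
out:
	pthread_mutex_unlock(&fmt_lock);
	return ret;
}
```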

[snip]

> > > +static int vpfe_g_input(struct file *file, void *priv,
> > > +                        unsigned int *index)
> > > +{
> > > +    struct vpfe_fh *fh = file->private_data;
> > > +    struct channel_obj *channel = fh->channel;
> > > +    struct common_obj *common =
> > > +                        &(channel->common[VPFE_VIDEO_INDEX]);
> > > +    struct vpfe_capture_input *vpfe_inputs = channel->video.input;
> > > +    int ret = 0;
> > > +
> > > +    v4l2_dbg(1, debug, vpfe_dev->driver, "VIDIOC_G_INPUT\n");
> > > +    ret = mutex_lock_interruptible(&common->lock);
> > > +    if (!ret)
> > > +            *index = vpfe_inputs->current_input;
> >
> > Do you really need to protect the read with a mutex ?
>
> [MK] Ideally two different file handles can call g_input and s_input. In
> that case is it not good to keep it so that g_input() gets the latest
> input set when there are concurrent application calls ?

VIDIOC_G_INPUT and VIDIOC_S_INPUT can definitely be called on different file 
handles. However, the V4L2 API doesn't guarantee how those calls should be 
ordered. Even if you protect VIDIOC_G_INPUT with the mutex you can't guarantee 
which of VIDIOC_G_INPUT and VIDIOC_S_INPUT will be executed first.

[snip]

> > > +static int vpfe_s_input(struct file *file, void *priv,
> > > +                        unsigned int *index)
> > > +{
> > > +    int i, ret = -EINVAL;
> > > +    v4l2_std_id std_id;
> > > +    struct vpfe_fh *fh = file->private_data;
> > > +    struct channel_obj *channel = fh->channel;
> > > +    struct v4l2_int_device *new_dec, *curr_dec =
> > > +            channel->decoder[channel->current_decoder];
> >
> > Try to avoid multiple variable declarations on the same line.
>
> [MK]. I never thought that this is prohibited in the coding
> style. I see such instances throughout the kernel driver code :)

I'm not sure if it's a strict rule. I know that some kernel code validation 
scripts check (or used to check) for that. I personally think it's ok to 
declare short variables on the same line (like for instance 'unsigned int i, 
j;' for loop counters), but in this case the code gets a bit more difficult to 
read, especially due to the assignment operation.

[snip]

> > > +    }
> > > +
> > > +    ret = mutex_lock_interruptible(&common->lock);
> > > +    if (ret)
> > > +            goto out;
> > > +    /* If streaming is started return device busy
> > > +     * error
> > > +     */
> > > +    if (common->started) {
> > > +            v4l2_err(vpfe_dev->driver, "Streaming is on\n");
> > > +            ret = -EBUSY;
> > > +            goto lock_out;
> > > +    }
> > > +    new_dec_name = vpfe_inputs->inputs[index].dec_name;
> > > +    /* switch in new decoder to be active */
> > > +    if (strcmp(new_dec_name, curr_dec->name)) {
> > > +            for (i = 0; i < VPFE_CAPTURE_NUM_DECODERS; i++) {
> >
> > If we really need to support multiple decoders hooked up on the VPFE
> > input, we must be prepared to see multiple decoders with identical names.
> > You shouldn't use the name to check if the current and next inputs use the
> > same decoder.
>
> [MK] The driver assumes that input names are unique across all decoders.
> Otherwise how do we select a particular decoder for use? Probably
> I need to enforce this at registration to avoid duplicates. Is that
> fine with you ?

What happens when you have two identical decoders (two TVP514x for instance) 
connected to the CCDC then ?

[snip]

> > > +static int vpfe_s_std(struct file *file, void *priv,
> > > +                      v4l2_std_id *std_id)
> > > +{
> > > +    int ret = 0;
> >
> > General remark, local variable declarations in the Linux kernel are
> > usually roughly sorted by line length (related variables should of course
> > be declared close to each other). It might just be a matter of taste.
>
> [MK] Not sure what you want me to change here. Do you mean
> I need to declare variables such that smaller line comes first
> followed by longer line or vice-versa ?

The opposite. Kernel code usually has

        struct big_struct_name *long_var_name = long_init_val;
        int ret;

instead of

        int ret;
        struct big_struct_name *long_var_name = long_init_val;

You obviously don't have to measure each line with a pixel-perfect ruler :-) 
Grouping related variables and then roughly sorting by length is enough.

[snip]

> > > +static int vpfe_videobuf_prepare(struct videobuf_queue *vq,
> > > +                            struct videobuf_buffer *vb,
> > > +                            enum v4l2_field field)
> > > +{
> > > +    int ret = 0;
> > > +    /* Get the file handle object and channel object */
> > > +    struct vpfe_fh *fh = vq->priv_data;
> > > +    struct channel_obj *channel = fh->channel;
> > > +    struct common_obj *common;
> > > +    unsigned long addr;
> > > +
> > > +    v4l2_dbg(1, debug, vpfe_dev->driver, "<vpfe_buffer_prepare>\n");
> > > +
> > > +    common = &(channel->common[VPFE_VIDEO_INDEX]);
> > > +
> > > +    if (V4L2_MEMORY_USERPTR == common->memory) {
> > > +            /* we don't support user ptr IO */
> >
> > It seems whether to use the MMAP or USERPTR streaming method is ultimately
> > controlled by the ch0_numbuffers module parameter. Don't do that, use the
> > buffer type supplied to VIDIOC_REQBUFS.
>
> [MK] Not exactly. Only the default IO is selected based on the ch0_numbuffers
> variable. The user is free to use either and the driver will honor it. Please
> tell me which code makes you believe so.

common->memory is assigned in vpfe_config_default_format based on 
config_params.numbuffers[ch->channel_id]. config_params.numbuffers[0] is only 
assigned in vpfe_init from ch0_numbuffers.

> > > +            v4l2_dbg(1, debug, vpfe_dev->driver,
> > > +                    "<vpfe_buffer_prepare: USERPTR IO"
> > > +                    " not supported>\n");
> > > +            ret = -EINVAL;
> > > +            goto out;
> > > +    }
> > > +
> > > +    /* If buffer is not initialized, initialize it */
> > > +    if (VIDEOBUF_NEEDS_INIT == vb->state) {
> > > +            vb->width = common->width;
> > > +            vb->height = common->height;
> > > +            vb->size = vb->width * vb->height * 2;
> > > +            vb->field = field;
> > > +    }
> > > +    addr = videobuf_to_dma_contig(vb);
> > > +    if (vq->streaming) {
> > > +            if (!ISALIGNED(addr)) {
> >
> > Use IS_ALIGNED from linux/kernel.h instead.
>
> [MK]Ok. I guess I need to use IS_ALIGNED(addr, 32) for
> 32 byte alignment right?

That's right (note that IS_ALIGNED only works for power of two alignments).
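
For reference, IS_ALIGNED is essentially a mask test (simplified below; the kernel version uses typeof(x) instead of a cast), which is exactly why it is limited to power-of-two alignments:

```c
/* Simplified version of the linux/kernel.h macro: an address is
 * a-aligned iff its low log2(a) bits are all zero. Only valid when
 * 'a' is a power of two. */
#define IS_ALIGNED(x, a)  (((x) & ((unsigned long)(a) - 1)) == 0)

/* The 32-byte DMA alignment check suggested for the driver: */
int vpfe_addr_ok(unsigned long addr)
{
	return IS_ALIGNED(addr, 32);
}
```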

[snip]

> > > +static void vpfe_videobuf_queue(struct videobuf_queue *vq,
> > > +                            struct videobuf_buffer *vb)
> > > +{
> > > +    /* Get the file handle object and channel object */
> > > +    struct vpfe_fh *fh = vq->priv_data;
> > > +    struct channel_obj *channel = fh->channel;
> > > +    struct common_obj *common = NULL;
> > > +
> > > +    v4l2_dbg(1, debug, vpfe_dev->driver, "<vpfe_buffer_queue>\n");
> > > +    common = &(channel->common[VPFE_VIDEO_INDEX]);
> > > +
> > > +    /* add the buffer to the DMA queue */
> > > +    list_add_tail(&vb->queue, &common->dma_queue);
> >
> > This needs to be protected by a mutex
>
> [MK] I think in videobuf-core.c this is already protected
> by a mutex lock using vb_lock of the videobuf_queue structure.

Actually it needs to be protected by a spinlock. common->dma_queue is accessed 
in IRQ handlers.
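
Here is a userspace sketch of the pattern I mean (a pthread mutex stands in for spin_lock_irqsave()/spin_unlock_irqrestore(), and the list is simplified; names are made up):

```c
#include <pthread.h>
#include <stddef.h>

/* common->dma_queue is touched from both ioctl context
 * (vpfe_videobuf_queue) and the CCDC IRQ handlers, so in the kernel
 * every access must sit between spin_lock_irqsave() and
 * spin_unlock_irqrestore(). The mutex below just marks where those
 * calls would go. */
struct buf_node {
	struct buf_node *next;
};

static struct buf_node *dma_queue;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

/* What vpfe_videobuf_queue would do under the lock. */
void vpfe_queue_buffer(struct buf_node *vb)
{
	pthread_mutex_lock(&queue_lock);   /* spin_lock_irqsave() */
	vb->next = dma_queue;
	dma_queue = vb;
	pthread_mutex_unlock(&queue_lock); /* spin_unlock_irqrestore() */
}

/* What the IRQ handler would do under the same lock. */
struct buf_node *vpfe_next_buffer(void)
{
	struct buf_node *vb;

	pthread_mutex_lock(&queue_lock);
	vb = dma_queue;
	if (vb)
		dma_queue = vb->next;
	pthread_mutex_unlock(&queue_lock);
	return vb;
}
```

vb_lock in videobuf only serializes ioctl-side callers; it cannot be taken from interrupt context, which is why a separate spinlock is needed for the DMA queue.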

[snip]

> > > +static void vpfe_videobuf_release(struct videobuf_queue *vq,
> > > +                            struct videobuf_buffer *vb)
> > > +{
> > > +    v4l2_dbg(1, debug, vpfe_dev->driver, "vpfe_videobuf_release\n");
> > > +    videobuf_dma_contig_free(vq, vb);
> > > +    vb->state = VIDEOBUF_NEEDS_INIT;
> >
> > I'm not sure about this, but aren't you supposed to remove the buffer from
> > the dma_queue here ?
>
> [MK] I believe when REQUEST_BUF is called the second time, INIT_LIST_HEAD
> will re-initialize the queue.

The buf_release callback is not necessarily followed by a VIDIOC_REQBUFS. For 
instance, if the user calls VIDIOC_STREAMOFF, vpfe_videobuf_release will be 
called. It would be perfectly valid to call VIDIOC_STREAMON after that without 
any VIDIOC_REQBUFS in between.

[snip]

> > > +static int vpfe_reqbufs(struct file *file, void *priv,
> > > +                          struct v4l2_requestbuffers *p)
> > > +{
> > > +    int ret = 0;
> > > +    struct vpfe_fh *fh = file->private_data;
> > > +    struct channel_obj *channel = fh->channel;
> > > +    struct common_obj *common = NULL;
> > > +    enum v4l2_field field;
> > > +
> > > +    v4l2_dbg(1, debug, vpfe_dev->driver, "<vpfe_buffer_queue>\n");
> > > +    common = &(channel->common[VPFE_VIDEO_INDEX]);
> > > +
> > > +    v4l2_dbg(1, debug, vpfe_dev->driver, "\nEnd of VIDIOC_REQBUFS ioctl");
> > > +
> > > +    if (V4L2_BUF_TYPE_VIDEO_CAPTURE != p->type) {
> > > +            ret = -EINVAL;
> > > +            goto out;
> > > +    }
> > > +    if (common->io_usrs != 0) {
> > > +            ret = -EBUSY;
> > > +            goto out;
> > > +    }
> >
> > This will make subsequent VIDIOC_REQBUFS requests fail. It's perfectly
> > legal to call VIDIOC_REQBUFS multiple times provided the device isn't
> > streaming and buffers are not mapped. The stream test is (at least
> > partially) handled by videobuf_reqbufs.
> >
> > When calling VIDIOC_REQBUFS with a number of buffers equal to zero you
> > should free all buffers (using videobuf_mmap_free).
>
> [MK] Currently we support REQBUF only once after open. I know this is a
> limitation we need to fix. Can we do this at a later stage? At this
> time, my priority is to get the driver in with its existing limitations and
> later on work to remove them as we progress. Given the amount of driver
> work we have at hand, I will do it in stages.

Then please add a comment to explicitly state the limitation. You should still 
probably fix the VIDIOC_REQBUFS with a number of buffers equal to zero, as 
applications commonly use that to free the buffers.
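
Something along these lines (illustrative only, not your actual code; videobuf_mmap_free is where the real freeing would happen):

```c
/* Sketch of the zero-count handling being requested: a count of zero
 * releases all buffers instead of failing. */
static int allocated_buffers;

int vpfe_reqbufs_sketch(unsigned int count, int streaming)
{
	if (streaming)
		return -16;		/* -EBUSY */
	if (count == 0) {
		/* videobuf_mmap_free(&common->buffer_queue) in-kernel */
		allocated_buffers = 0;
		return 0;
	}
	allocated_buffers = (int)count;
	return 0;
}
```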

[snip]

> > > +    }
> > > +    common = &(channel->common[buf_type_index]);
> > > +    if (p->memory != V4L2_MEMORY_MMAP) {
> >
> > p->memory isn't supposed to be set by userspace applications when calling
> > VIDIOC_QUERYBUF. Don't check it.
>
> [MK] Ok. Just curious. What happens when a user application requests
> USERPTR IO and then calls QUERYBUF?

VIDIOC_QUERYBUF is part of the memory mapping I/O method. It doesn't make 
sense with the user pointer I/O method. The V4L2 specification doesn't 
explicitly state what should happen; I guess returning an error would be the 
right thing to do.

[snip]

> > > +    /* If buffer queue is empty, return error */
> > > +    if (list_empty(&common->dma_queue)) {
> > > +            v4l2_err(vpfe_dev->driver, "buffer queue is empty\n");
> > > +            ret = -EIO;
> > > +            goto lock_out;
> > > +    }
> > > +    /* Get the next frame from the buffer queue */
> > > +    common->nextFrm = common->curFrm =
> > > +        list_entry(common->dma_queue.next,
> > > +                   struct videobuf_buffer, queue);
> > > +    /* Remove buffer from the buffer queue */
> > > +    list_del(&common->curFrm->queue);
> > > +    /* Mark state of the current frame to active */
> > > +    common->curFrm->state = VIDEOBUF_ACTIVE;
> > > +    /* Initialize field_id and started member */
> > > +    channel->field_id = 0;
> > > +    common->started = 1;
> > > +
> > > +    addr = videobuf_to_dma_contig(common->curFrm);
> > > +
> > > +    /* Calculate field offset */
> > > +    vpfe_calculate_offsets(channel);
> > > +
> > > +    if (vpfe_attach_irq(channel) < 0) {
> >
> > Is there a reason not to request IRQs when the device is probed by the
> > driver and leave them requested until the driver is unloaded or the device
> > removed ?
>
> [MK] As I have mentioned in an earlier response, when we port the latest
> version of the driver, it will attach the isr on the fly, either for
> handling the irq from the ccdc or from the resizer, based on whether
> IPIPE & Resizer are in the path or not. So we will have to re-do this
> if we change this now. I prefer to leave it like this. Let me know if
> you feel differently.

Can't you register both IRQ handlers ? Only handlers for enabled IRQs will get 
called anyway.

[snip]

> > > +static int vpfe_streamoff(struct file *file, void *priv,
> > > +                            enum v4l2_buf_type i)
> > > +{
> > > +    struct vpfe_fh *fh = file->private_data;
> > > +    struct channel_obj *channel = fh->channel;
> > > +    struct common_obj *common = NULL;
> > > +    int buf_type_index = VPFE_VIDEO_INDEX, ret = 0;
> > > +    if (V4L2_BUF_TYPE_VIDEO_CAPTURE != i) {
> >
> > Already checked in videobuf_streamoff.
>
> [MK] Do you mean v4l2-ioctl.c ? It doesn't. Neither does videobuf_streamoff.
> So I need to keep it here.

You're right, my bad, sorry.

> > > +static int vpfe_g_ctrl(struct file *file, void *priv,
> > > +                         struct v4l2_control *ctrl)
> > > +{
> > > +    struct vpfe_fh *fh = file->private_data;
> > > +    struct channel_obj *channel = fh->channel;
> > > +    struct v4l2_int_device *dec =
> > > +            channel->decoder[channel->current_decoder];
> > > +    struct common_obj *common =
> > > +        &(channel->common[VPFE_VIDEO_INDEX]);
> > > +    int ret = 0;
> > > +    v4l2_dbg(1, debug, vpfe_dev->driver, "VIDIOC_G_CTRL\n");
> > > +    /* Call getcontrol function of decoder device */
> > > +    ret = mutex_lock_interruptible(&common->lock);
> > > +    if (ret)
> > > +            return ret;
> > > +    ret = vidioc_int_g_ctrl(dec, ctrl);
> > > +    mutex_unlock(&common->lock);
> >
> > Why does this require locking here ?
>
> [MK] There can be two file handles doing S_CTRL and G_CTRL. So
> for data integrity it might be needed. If it is guaranteed that
> only one file handle is involved, this can be removed.

There can definitely be several handles calling VIDIOC_G_CTRL and 
VIDIOC_S_CTRL simultaneously.

Read operations can usually be done without locking. If walking a list is 
required, you will need a mutex, but that decision should be made by the sub-
device, not the VPFE driver which has no knowledge of how the sub-device 
implements the operation.
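To illustrate where the locking would move to, here is a sketch of a decoder 
sub-device serializing its own control access. All names (`dec_state`, 
`dec_read_ctrl`) are hypothetical; the point is only that the mutex, if one 
is needed at all, belongs in the sub-device.

```c
/* Sketch: locking pushed down into the sub-device, which is the only
 * place that knows whether its control read actually needs
 * serialization (e.g. a multi-register I2C read). */
static int dec_g_ctrl(struct v4l2_int_device *s, struct v4l2_control *ctrl)
{
	struct dec_state *dec = s->priv;
	int ret;

	mutex_lock(&dec->lock);
	ret = dec_read_ctrl(dec, ctrl);
	mutex_unlock(&dec->lock);

	return ret;
}
```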

[snip]

> > > +static int vpfe_s_ctrl(struct file *file, void *priv,
> > > +                         struct v4l2_control *ctrl)
> > > +{
> > > +    int ret = 0;
> > > +    struct vpfe_fh *fh = file->private_data;
> > > +    struct channel_obj *channel = fh->channel;
> > > +    struct common_obj *common =
> > > +        &(channel->common[VPFE_VIDEO_INDEX]);
> > > +    struct v4l2_int_device *dec =
> > > +            channel->decoder[channel->current_decoder];
> > > +    v4l2_dbg(1, debug, vpfe_dev->driver, "VIDIOC_S_CTRL\n");
> > > +    /* Call setcontrol function of decoder device */
> > > +    ret = mutex_lock_interruptible(&common->lock);
> > > +    if (ret)
> > > +            return ret;
> > > +    ret = vidioc_int_s_ctrl(dec, ctrl);
> > > +    mutex_unlock(&common->lock);
> >
> > Why does this require locking here ?
>
> [MK] Same as above. If there are two file handles that call
> this ioctl, this is needed. I can think of only one, but
> would like to hear about your experience here.

See my answer above.

[snip]

> > > +static long vpfe_param_handler(struct file *file, void *priv,
> > > +            int cmd, void *param)
> > > +{
> > > +    struct vpfe_fh *fh = file->private_data;
> > > +    struct channel_obj *channel = fh->channel;
> > > +    struct common_obj *common = NULL;
> > > +    int ret = 0;
> > > +    common = &(channel->common[VPFE_VIDEO_INDEX]);
> > > +
> > > +    if (common->started) {
> > > +            /* only allowed if streaming is not started */
> > > +            v4l2_err(vpfe_dev->driver, "channel already started\n");
> > > +            ret = -EBUSY;
> > > +            goto out;
> > > +    }
> > > +    ret = mutex_lock_interruptible(&common->lock);
> > > +    if (ret)
> > > +            goto out;
> > > +    switch (cmd) {
> > > +    case VPFE_CMD_S_SOC_PARAMS:
> > > +    {
> >
> > I don't like that. CCDC parameters should either come from a
> > board-specific file or be computed from regular V4L2 controls. Userspace
> > applications must not be required to pass a block of CCDC-specific
> > parameters.
>
> [MK] If you look at the VPFE, there are a lot of parameters our customers
> want to tweak. I agree that some of the duplicate parameters can go away
> (like width, height, pixel format etc.), but we need to keep parameters
> like datasft (shift raw data before writing it to SDRAM) and the color
> pattern to be used, so that the customer application can change them on
> the fly. For example, the color pattern depends on the row and column
> start used in the sensor, so it must be controlled based on a particular
> sensor's (e.g. MT9T031) settings for the capture resolution used. Also we
> have many IPs, as mentioned at the beginning of this mail, which need to
> be configured by the application. For example, for raw image capture the
> application may want to set defect pixel locations for pixel correction,
> may want to do color space conversion, etc. Similarly, if IPIPE is in the
> capture path, there are parameters associated with each IP in IPIPE that
> need to be configured. In fact I want to split this into two: one part
> for common parameter configuration, handled via the VPFE_CMD_S_SOC_PARAMS
> ioctl, and the rest through module-specific structures and ioctls. If my
> understanding is correct, V4L2 controls can be used for parameters that
> can be controlled while streaming is ongoing. But how do I configure
> modules before starting streaming, like defect pixel correction, the data
> formatter, the color space converter, etc. ? Similarly, only a few module
> parameters in IPIPE can be controlled while streaming is ongoing, so this
> requires SoC-specific module configuration ioctls. White balance is a
> candidate for a control; defect pixel correction is a candidate for a
> module configuration ioctl. Pixel correction varies from sensor to sensor
> and cannot be in board-specific files.

We have a whole bunch of parameters in there, and you're right that 
implementing a nice and clean way to set them all isn't easy.

Some parameters can probably be converted to V4L2 controls, but we can always 
work on that later (and yes V4L2 controls can be used before streaming as 
well).

Let's take a look at the YCbCr parameters for the DM355.

struct ccdc_params_ycbcr {
       /* pixel format */
       enum ccdc_pixfmt pix_fmt;
       /* progressive or interlaced frame */
       enum ccdc_frmfmt frm_fmt;
       /* video window */
       struct ccdc_imgwin win;
       /* field id polarity */
       enum ccdc_pinpol fid_pol;
       /* vertical sync polarity */
       enum ccdc_pinpol vd_pol;
       /* horizontal sync polarity */
       enum ccdc_pinpol hd_pol;
       /* enable BT.656 embedded sync mode */
       int bt656_enable;
       /* cb:y:cr:y or y:cb:y:cr in memory */
       enum ccdc_pixorder pix_order;
       /* interleaved or separated fields  */
       enum ccdc_buftype buf_type;
};

If my understanding is correct, pix_fmt and win are duplicates and I think we 
agree that they should be removed. frm_fmt and buf_type might fall into the 
same category. pix_order can also probably be derived from the V4L2 image 
format.

fid_pol, vd_pol, hd_pol seem to be board-specific, can they be moved to board-
specific data ? Does bt656_enable fall into the same category ?

If my assumptions are correct, this leaves us with no parameters to set 
through VPFE_CMD_S_SOC_PARAMS :-)

There are a few more parameters for Bayer mode. Some of them, such as lens 
shading correction, can probably be converted to V4L2 controls. We can do 
that at a later time though.

My point is that parameters that can be computed from V4L2 data (image format 
and size, ...) shouldn't be passed explicitly from userspace, and parameters 
that are definitely board-specific should come from platform data set by 
board-specific code. Other parameters can still be set using 
VPFE_CMD_S_SOC_PARAMS.

I like your plan to configure the CCDC, IPIPE and other blocks using different 
ioctls. VPFE_CMD_S_SOC_PARAMS could be renamed to VPFE_CMD_S_CCDC_PARAMS.

[snip]

> > > --- /dev/null
> > > +++ b/include/media/davinci/vpfe_capture.h

[snip]

> > > +
> > > +struct vpfe_pixel_format {
> > > +    unsigned int pix_fmt;
> > > +    char *desc;
> > > +    enum vpfe_hw_pix_format hw_fmt;
> > > +};
> >
> > Is there a reason not to use the V4L2 pixel format constants internally
> > and to transcode to/from CCDC pixel formats ?
>
> [MK] There are multiple drivers that refer to the same pixel formats, some
> of which are not V4L2 (the char driver for resize & preview). So internally
> we use vpfe_hw_pix_format to refer to them.

Ok.

[snip]

Thanks again for your work on getting the driver into mainline. I will try and 
be quicker to respond on the next iteration (provided I'm not abroad once 
again :-))

Regards,

Laurent Pinchart


_______________________________________________
Davinci-linux-open-source mailing list
[email protected]
http://linux.davincidsp.com/mailman/listinfo/davinci-linux-open-source
