Laurent, Thanks once again for your support. Please see my responses below, prefixed with [MK]. If any outstanding issues remain to be discussed, would it be possible to talk to you over the phone?
Murali Karicheri Software Design Engineer Texas Instruments Inc. Germantown, MD 20874 Phone : 301-515-3736 email: [email protected] >> Resizer >> ------- >> Upscale or Downscale the image to desired output resolution. This has >> different filters to apply when doing scaling. > >Thanks for the summary. Does the H.264 compression IP block in the DM365 >fit >somewhere in there, or is it completely separate ? [MK] Resizer handles the data in UYVY format. H.264 is done on a different IP, not this one. > >> > > The driver uses ccdc_hw_device interface to configure CCDC based on >the >> > > interface requirement of the slave decoder device. >> > >> > If I'm not mistaken, the vpfe_capture.ko module depends on either >> > ccdc_dm355.ko or ccdc_davinci.ko (which should really be named >> > ccdc_dm6446.ko, but you've already been told about that) but not both, >as >> > those modules conflict. Is there a reason not to combine both the vpfe >and >> > the ccdc drivers into a single module ? >> >> [MK] The reason they are developed as independent module is to help in >> re-use it across other OS if needed. I am not sure why you want to >combine >> them. Could you elaborate? > >See below. > >> > You should also probably call ccdc functions directly instead of going >> > through the ccdc_hw_dev structure, and pass them a pointer to a ccdc >data >> > structure instead of storing everything as global variables in the >> > ccdc_dm355.c and ccdc_davinci.c source files. >> >> [MK] As I see it, vpfe_capture bridge driver needs to use different CCDC >> modules on each of the different SoCs that we support. Using function >ptrs >> help us to assign NULL to function ptrs on some SoCs where the specific >> function is not required to be implemented. But we could consider >functions >> that are called from interrupt context available as global inline >function >> to make it more efficient. Alternately using the functions as global, how >> do I call a function from bridge driver if that is applicable only to one >> SoC and not applicable for others? >> >> I thought following is not acceptable >> >> #ifdef CONFIG_ARCH_DAVINCI_DM644x (config variable for DM6446 SoC) >> call specific function >> #endif > >You're right. I overlooked the fact that some functions were not mandatory. >A >simple solution, that would allow the VPFE code not to test for NULL >function >pointers, would be to implement dummy functions (basically just returning a >fixed value) for operations not supported by a given CCDC module. I'll let >you >decide if this is acceptable and desirable. > [MK]. I am more inclined to check for null in the code rather than implementing dummy functions. I like how similar thing is done in videobuf-core.c where the code checks for mandatory functions by calling BUG_ON(!q->ops->buf_queue) Do you think this is a good choice? >I can think of two ways to support different CCDC modules in the VPFE >driver. >The first one is what you've implemented so far. Only a single CCDC module >can >be loaded as they all export identically named symbols. This make it >impossible to build VPFE support directly in a kernel (as opposed to >dynamically loaded modules) that needs to run on both a DM355 and a DM6446. > >The second way is to have the CCDC modules register their ccdc_hw_device >(btw, >the function pointers might be split into a second structure called >ccdc_hw_ops to be consistent with names in the rest of the kernel) with the [MK] Ok. >VPFE driver. This would allow different CCDC modules to be compiled in the >same kernel. 
Platform device data passed by board-specific code would be >used >to select the right CCDC module at runtime. > [MK] If I understand correctly, what you are suggesting is to have the ccdc modules register with the vpfe driver. So the vpfe driver implements a register function which all available ccdc modules call to register themselves. The vpfe driver's platform data indicates the board type, and based on that the vpfe driver selects the correct ccdc and rejects the rest. Is that what you mean here? (A rough sketch of what I have in mind follows further below.) >I think the direction currently followed by most architectures is to enable >building of kernels supporting different boards/devices. This would make >the >second solution preferable. I have no strong opinion on this though. > [MK] I feel the same way. >> >> > > +static int debug; >> > > +static char *ch0_decoder = "TVP514X"; >> > > +static u32 ch0_numbuffers = 3; >> > > +static u32 ch0_bufsize = (720 * 576 * 2); >> > >> > The buffer size and the number of buffers should be controlled by the >> > userspace application. You can restrict the value to a driver-specific >> > range, but don't use module parameters to select them. >> >> I think this is used across all our drivers (including the dm6467 >> driver that is being reviewed in the list now) >> >> The reason for these variables as module parameter is as follows:- >> >> Our drivers supports capture resolutions that varies from D1 to >> HD (1920x1080). On image sensor capture, we may support capture of >> even higher resolution. We support both MMAP and PTR based IO. >> If MMAP is used, we would like to pre-allocate buffers for application >> since allocating it on the fly result in fragmentation. So setting >> number of buffers as a module parameters make sense. Also if a >> particular implementation doesn't want to use these pre-allocated >> buffers (use ptr IO instead), the parameter can be set to zero to >> force the driver not allocate any buffers and thereby reduce system >> memory usage. I have currently removed PTR based IO in this patch >> since our implementation was a hack based on an older version of the >> kernel. If more buffers than pre-allocated are requested, then >> driver tries to allocate more and free it when device >> is closed (This is how we do it in our internal driver which will >> get ported to open source later. Proriority now is to get the driver >> in the open source with minimum set of features and add more features >> later). Similar reasoning applies to buffer size. If user application >> is using max D1 resolution capture, there is no need to pre-allocate >> big size buffers and can be controlled at boot/insmode time. > >So the buffer count and size parameters are only used to preallocate the >buffers ? In that case maybe you could use a single parameter to set the >preallocated memory size. > [MK] Are you saying we should allocate a single big buffer? How would we then handle the individual buffers required by the v4l2 driver? Also, a single large allocation is more likely to fail than several smaller ones. In our implementation, we allocate 3 or more separate buffers. I don't quite get what you are saying here. > >[snip] > >> > > + return -EINVAL; >> > > + >> > > + if (!channel->numdecoders) { >> > > + if (!vidioc_int_dev_init(s)) { >> > > + channel->current_decoder = 0; >> > > + channel->decoder[channel->current_decoder] = s; >> > > + v4l2_info(vpfe_dev->driver, "Current decoder is >set >> > > to" + " %s\n", (s->name)); >> > > + } >> > >> > Can't this case be folded into the generic case inside the 'else' ?
>> >> [MK] The code will change when sub-device model is implemented. Could we >> ignore this comment? > >Sure. You can ignore all comments related to code that will go away during >the >porting to sub-device (except of course general remarks that apply to the >driver as a whole such as code formatting). > [MK] Ok. >> > > + } else { >> > > + /* search through the array for an empty entry */ >> > > + for (index = 0; index < VPFE_CAPTURE_NUM_DECODERS; >index++) >> > > { + if (ISNULL(channel->decoder[index])) { >> > > + channel->decoder[index] = s; >> > > + break; >> > > + } >> > > + } >> > > + if (index == VPFE_CAPTURE_NUM_DECODERS) { >> > > + v4l2_err(vpfe_dev->driver, >> > > + "decoder count reached" >> > > + " maximum allowed\n"); >> > > + return -ENOMEM; >> > > + } >> > > + if (!strncmp(ch0_decoder, s->name, strlen(ch0_decoder))) >{ >> > > + if (!common->started) { >> > > + if (!vidioc_int_dev_init(s)) { >> > > + channel->current_decoder = >index; >> > > + v4l2_info(vpfe_dev->driver, >> > > + "Current decoder is" >> > > + " set to %s\n", (s- >>name)); >> > > + } >> > > + } >> > >> > Shouldn't an error be returned if common->started is true ? The test >could >> > be moved at the top of this function. >> >> [MK] Error is to be returned. But check should be here since the driver >> is currently streaming and current decoder cannot be changed. However if >> the decoder is not the default one, there is not problem with >registration. >> so we cannot move it to top. Besides this code is going to change. So >> I will just take care of the error case. > >Ok, I'll review it after the porting to sub-device. > [MK] ok. >[snip] > >> > > +} >> > > + >> > > +static int vpfe_attach_irq(struct channel_obj *channel) >> > > +{ >> > > + enum ccdc_frmfmt frame_format; >> > > + int err = 0; >> > > + >> > > + channel->irq_type = VPFE_USE_CCDC_IRQ; >> > > + >> > > + switch (channel->irq_type) { >> > > + case VPFE_USE_CCDC_IRQ: >> > >> > You've just assigned irq_type to VPFE_USE_CCDC_IRQ. This is a bit >> > pointless... >> >> [MK] Actually I removed some code for the initial version. I need to >attach >> to Resizer IRQ when I have Resizer in the path. So can I keep it >> so that I don't have to re-work later ? > >Then you might want to add a small comment to explain this. [MK] Ok. Will do. > > >[snip] > >> > > + ret = vpfe_lookup_v4l2_pix_format(hw_pix); >> > > + if (ret >= 0) { >> > >> > You should also try not to mix conventions regarding return values. >> > ccdc_hw_dev.enum_pix seems to return 0 on success and != 0 on error, >while >> > vpfe_lookup_v4l2_pix_format returns >= 0 on success and < 0 on error. >> > Adhering to one of those conventions through the code (except of course >> > where it makes sense not to) would improve consistency >> >> [MK] In this case it doesn't make sense since we need index into the pix >> table to get the pix format. > >What I mean is that you could go for the >= 0 on success and < 0 on error >convention for most cases. > [MK]. So we agree this will stay. I will follow this convention where ever your comment is applicable. > >[snip] > >> > > +static int vpfe_s_input(struct file *file, void *priv, unsigned int >> > >> >index) >> > >> > > +{ >> > > + int i, ret = -EINVAL; >> > > + v4l2_std_id std_id; >> > > + struct vpfe_fh *fh = file->private_data; >> > > + struct channel_obj *channel = fh->channel; >> > > + struct v4l2_int_device *new_dec, *curr_dec = >> > > + channel->decoder[channel->current_decoder]; >> > >> > Try to avoid multiple variable declarations on the same line. 
>> >> [MK]. I never thought that this is prohibited in the coding >> style. I see such instances throughout the kernel driver code :) > >I'm not sure if it's a strict rule. I know that some kernel code validation >scripts check (or used to check) for that. I personally think it's ok to >declare short variables on the same line (like for instance 'unsigned int i, >j;' for loop counters), but in this case the code get a bit more difficult >to >read, especially due to the assignment operation. > [MK] Ok. I will move curr_dec declaration to the next line. >[snip] > >> > > + } >> > > + >> > > + ret = mutex_lock_interruptible(&common->lock); >> > > + if (ret) >> > > + goto out; >> > > + /* If streaming is started return device busy >> > > + * error >> > > + */ >> > > + if (common->started) { >> > > + v4l2_err(vpfe_dev->driver, "Streaming is on\n"); >> > > + ret = -EBUSY; >> > > + goto lock_out; >> > > + } >> > > + new_dec_name = vpfe_inputs->inputs[index].dec_name; >> > > + /* switch in new decoder to be active */ >> > > + if (strcmp(new_dec_name, curr_dec->name)) { >> > > + for (i = 0; i < VPFE_CAPTURE_NUM_DECODERS; i++) { >> > >> > If we really need to support multiple decoders hooked up on the VPFE >> > input, we must be prepared to see multiple decoders with identical >names. >> > You shouldn't use the name to check if the current and next inputs use >the >> > same decoder. >> >> [MK] Driver assumes that input names are unique across all decoder. >> Otherwise how do we select a particular decoder for use? Probably >> I need to enforce this at registration to avoid duplicates. Is that >> fine with you ? > >What happens when you have two identical decoders (two TVP514x for >instance) >connected to the CCDC then ? > >[snip] > [MK]Ok. I need to implement a translation of input index to index on the decoder if this is the case. Can be done. >> > > +static int vpfe_s_std(struct file *file, void *priv, v4l2_std_id >> > >> >*std_id) >> > >> > > +{ >> > > + int ret = 0; >> > >> > General remark, local variable declarations in the Linux kernel are >> > usually roughly sorted by line length (related variables should of >course >> > be declared close to each other). It might just be a matter of taste. >> >> [MK] Not sure what you want me to change here. Do you mean >> I need to declare variables such that smaller line comes first >> followed by longer line or vice-versa ? > >The opposite. Kernel code usually has > [MK] Ok. > struct big_struct_name *long_var_name = long_init_val; > int ret; > >instead of > > int ret; > struct big_struct_name *long_var_name = long_init_val; > >You obviously don't have to measure each line with a pixel-perfect rule :-) >Grouping related variables and then roughly sorting my length is enough. 
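[MK] Coming back to the CCDC module registration approach we agreed on above, here is a rough sketch of what I have in mind, just so we can confirm we mean the same thing. All names below (vpfe_register_ccdc_device, the ccdc_hw_ops split, the ccdc name coming from platform data) are placeholders on my side, not a final interface:

/* sketch only -- shared ccdc interface header */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/string.h>

struct ccdc_hw_ops {
	int (*open)(void);
	void (*close)(void);
	/* optional op: may stay NULL and is checked at the call site */
	int (*set_params)(void *params);
	/* ... */
};

struct ccdc_hw_device {
	const char *name;
	struct module *owner;
	struct ccdc_hw_ops hw_ops;
};

/* vpfe_capture.c */
static struct ccdc_hw_device *ccdc_dev;	/* CCDC selected at runtime */
static const char *ccdc_name;		/* set from platform data in probe() */

int vpfe_register_ccdc_device(struct ccdc_hw_device *dev)
{
	/* only accept the CCDC named in the board's platform data */
	if (strcmp(dev->name, ccdc_name))
		return -EINVAL;

	/* mandatory ops, in the spirit of videobuf's BUG_ON() checks */
	BUG_ON(!dev->hw_ops.open);
	BUG_ON(!dev->hw_ops.close);

	ccdc_dev = dev;
	return 0;
}
EXPORT_SYMBOL(vpfe_register_ccdc_device);

/* ccdc_dm355.c (and similarly ccdc_dm644x.c) */
static struct ccdc_hw_device dm355_ccdc_hw_dev = {
	.name	= "DM355 CCDC",
	.owner	= THIS_MODULE,
	/* .hw_ops filled with the dm355 functions */
};

static int __init dm355_ccdc_init(void)
{
	return vpfe_register_ccdc_device(&dm355_ccdc_hw_dev);
}
module_init(dm355_ccdc_init);

With something like this both ccdc_dm355.ko and ccdc_dm644x.ko (or their built-in equivalents) can live in the same kernel, and only the one matching the platform data will attach. Module load ordering and the probe details still need to be worked out.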
> >[snip] > >> > > +static int vpfe_videobuf_prepare(struct videobuf_queue *vq, >> > > + struct videobuf_buffer *vb, >> > > + enum v4l2_field field) >> > > +{ >> > > + int ret = 0; >> > > + /* Get the file handle object and channel object */ >> > > + struct vpfe_fh *fh = vq->priv_data; >> > > + struct channel_obj *channel = fh->channel; >> > > + struct common_obj *common; >> > > + unsigned long addr; >> > > + >> > > + v4l2_dbg(1, debug, vpfe_dev->driver, "<vpfe_buffer_prepare>\n"); >> > > + >> > > + common = &(channel->common[VPFE_VIDEO_INDEX]); >> > > + >> > > + if (V4L2_MEMORY_USERPTR == common->memory) { >> > > + /* we don't support user ptr IO */ >> > >> > It seems whether to use the MMAP or USERPTR streaming method is >ultimately >> > controlled by the ch0_numbuffers module parameter. Don't do that, use >the >> > buffer type supplied to VIDIOC_REQBUFS. >> >> [MK] Not exactly. Only default IO is selected based on the ch0_numbuffers >> variable. User is free to use either and driver will honor it. Please >> tell me which code makes you believe so. > >common->memory is assigned in vpfe_config_default_format based on >config_params.numbuffers[ch->channel_id]. config_params.numbuffers[0] is >only >assigned in vpfe_init from ch0_numbuffers. [MK]. The code that should set common->memory when REQBUFS is received from the application was missing. I will add it. Then I can remove the vpfe_config_default_format assignment. > >[snip] > >> > > +static void vpfe_videobuf_queue(struct videobuf_queue *vq, >> > > + struct videobuf_buffer *vb) >> > > +{ >> > > + /* Get the file handle object and channel object */ >> > > + struct vpfe_fh *fh = vq->priv_data; >> > > + struct channel_obj *channel = fh->channel; >> > > + struct common_obj *common = NULL; >> > > + >> > > + v4l2_dbg(1, debug, vpfe_dev->driver, "<vpfe_buffer_queue>\n"); >> > > + common = &(channel->common[VPFE_VIDEO_INDEX]); >> > > + >> > > + /* add the buffer to the DMA queue */ >> > > + list_add_tail(&vb->queue, &common->dma_queue); >> > >> > This needs to be protected by a mutex >> >> [MK] I think in the videobuf-core.c this is already protected >> by mutex lock using vb_lock of videobuf_queue structure. > >Actually it needs to be protected by a spinlock. common->dma_queue is >accessed >in IRQ handlers. > [MK] Ok. >[snip] > >> > > +static void vpfe_videobuf_release(struct videobuf_queue *vq, >> > > + struct videobuf_buffer *vb) >> > > +{ >> > > + v4l2_dbg(1, debug, vpfe_dev->driver, "vpfe_videobuf_release\n"); >> > > + videobuf_dma_contig_free(vq, vb); >> > > + vb->state = VIDEOBUF_NEEDS_INIT; >> > >> > I'm not sure about this, but aren't you supposed to remove the buffer >from >> > the dma_queue here ? >> >> [MK] I believe when REQUEST_BUF is called the second time, INIT_LIST_HEAD >> will re-initialize the queue. > >The buf_release callback is not necessarily followed by a VIDIOC_REQBUFS. >For >instance, if the user calls VIDIOC_STREAMOFF, vpfe_videobuf_release will be >called. It would be perfectly valid to call VIDIOC_STREAMON after that without >any VIDIOC_REQBUFS in between. > [MK]. What you are saying is that the scenario VIDIOC_STREAMON -> VIDIOC_STREAMOFF -> VIDIOC_STREAMON will create a problem: since I don't remove the buffer from the dma queue, the driver ends up using an invalid buffer (already released by videobuf_dma_contig_free) when STREAMON is called again. So I guess I can call INIT_LIST_HEAD() to empty the queue, so that if the application calls STREAMON immediately after STREAMOFF it will fail gracefully. Can I assume that, after STREAMOFF, the application needs to call either (a) REQBUFS -> (optional QUERYBUF -> optional mmap) -> QBUF -> STREAMON, or (b) just QBUF -> STREAMON, to successfully start streaming again?
>[snip] > >> > > +static int vpfe_reqbufs(struct file *file, void *priv, >> > > + struct v4l2_requestbuffers *p) >> > > +{ >> > > + int ret = 0; >> > > + struct vpfe_fh *fh = file->private_data; >> > > + struct channel_obj *channel = fh->channel; >> > > + struct common_obj *common = NULL; >> > > + enum v4l2_field field; >> > > + >> > > + v4l2_dbg(1, debug, vpfe_dev->driver, "<vpfe_buffer_queue>\n"); >> > > + common = &(channel->common[VPFE_VIDEO_INDEX]); >> > > + >> > > + v4l2_dbg(1, debug, vpfe_dev->driver, "\nEnd of VIDIOC_REQBUFS >> > >> >ioctl"); >> > >> > > + >> > > + if (V4L2_BUF_TYPE_VIDEO_CAPTURE != p->type) { >> > > + ret = -EINVAL; >> > > + goto out; >> > > + } >> > > + if (common->io_usrs != 0) { >> > > + ret = -EBUSY; >> > > + goto out; >> > > + } >> > >> > This will make subsequent VIDIOC_REQBUFS requests fail. It's perfectly >> > legal to call VIDIOC_REQBUFS multiple times provided the device isn't >> > streaming and buffers are not mapped. The stream test is (at least >> > partially) handled by videobuf_reqbufs. >> > >> > When calling VIDIOC_REQBUFS with a number of buffers equal to zero you >> > should free all buffers (using videobuf_mmap_free). >> >> [MK] Currently we support REQBUF only once after open. I know this is a >> limitation we need to fix. Can we do this at a later stage? At this >> time, my priority is get driver with existing limitations and later on >> work to remove the limitations as we progress. Given the amount of driver >> work we have at hand, I will do it in stages. > >Then please add a comment to explicitly state the limitation. You should >still >probably fix the VIDIOC_REQBUFS with a number of buffers equal to zero, as >applications commonly use that to free the buffers. > [MK] Ok. Will do. >[snip] > >> > > + } >> > > + common = &(channel->common[buf_type_index]); >> > > + if (p->memory != V4L2_MEMORY_MMAP) { >> > >> > p->memory isn't supposed to be set by userspace applications when >calling >> > VIDIOC_QUERYBUF. Don't check it. >> >> [MK] Ok. Just curious. What happens when user application request for >> USERPTR IO and then calls QUERYBUF? > >VIDIOC_QUERYBUF is part of the memory mapping I/O method. It doesn't make >sense with the user pointer I/O method. The V4L2 specification doesn't >explicitly states what should happen, I guess returning an error would be >the >right thing to do. > [MK]. I guess I can check the memory type saved in common for that validation instead. Right?
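[MK] Just so we are on the same page about the videobuf changes above, this is roughly what I plan for REQBUFS and the DMA queue locking. Only a sketch against the current structures: the buffer_queue and irqlock members are my assumptions for now, and the mutex/busy checks are left out for brevity:

static int vpfe_reqbufs(struct file *file, void *priv,
			struct v4l2_requestbuffers *p)
{
	struct vpfe_fh *fh = file->private_data;
	struct common_obj *common =
			&fh->channel->common[VPFE_VIDEO_INDEX];

	if (p->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
		return -EINVAL;

	/* count == 0 is commonly used by applications to free buffers */
	if (p->count == 0)
		return videobuf_mmap_free(&common->buffer_queue);

	/* remember the I/O method actually requested by the application */
	common->memory = p->memory;

	return videobuf_reqbufs(&common->buffer_queue, p);
}

static void vpfe_videobuf_queue(struct videobuf_queue *vq,
				struct videobuf_buffer *vb)
{
	struct vpfe_fh *fh = vq->priv_data;
	struct common_obj *common =
			&fh->channel->common[VPFE_VIDEO_INDEX];
	unsigned long flags;

	/* dma_queue is also touched from the ISR, so take the spinlock */
	spin_lock_irqsave(&common->irqlock, flags);
	list_add_tail(&vb->queue, &common->dma_queue);
	spin_unlock_irqrestore(&common->irqlock, flags);
}

On STREAMOFF I would take the same irqlock and re-initialize dma_queue, so that a STREAMON issued without buffers queued again fails cleanly instead of picking up a freed buffer.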
>[snip] > >> > > + /* If buffer queue is empty, return error */ >> > > + if (list_empty(&common->dma_queue)) { >> > > + v4l2_err(vpfe_dev->driver, "buffer queue is empty\n"); >> > > + ret = -EIO; >> > > + goto lock_out; >> > > + } >> > > + /* Get the next frame from the buffer queue */ >> > > + common->nextFrm = common->curFrm = >> > > + list_entry(common->dma_queue.next, >> > > + struct videobuf_buffer, queue); >> > > + /* Remove buffer from the buffer queue */ >> > > + list_del(&common->curFrm->queue); >> > > + /* Mark state of the current frame to active */ >> > > + common->curFrm->state = VIDEOBUF_ACTIVE; >> > > + /* Initialize field_id and started member */ >> > > + channel->field_id = 0; >> > > + common->started = 1; >> > > + >> > > + addr = videobuf_to_dma_contig(common->curFrm); >> > > + >> > > + /* Calculate field offset */ >> > > + vpfe_calculate_offsets(channel); >> > > + >> > > + if (vpfe_attach_irq(channel) < 0) { >> > >> > Is there a reason not to request IRQs when the device is probed by the >> > driver and leave them requested until the driver is unloaded or the >device >> > removed ? >> >> [MK] As I have mentioned in a earlier response, when we port the latest >> version of the driver, it will attach isr on the fly either for >> handling irq from ccdc or from resizer based on IPIPE & Resizers >> are in the path or not. So we will have to re-do this if we change >> this now. I prefer to leave it like this. Let me know if you feel >> different. > >Can't you register both IRQ handlers ? Only handlers for enabled IRQ will >get >called anyway. > [MK]. In our SoCs, there are limited number of IRQ lines (7 or 8 depending on the SoC) from the video processing peripheral to ARM. These vary from SoC to SoC, ranging from 15 to 31 IRQs available to be muxed over these lines. So doing it statically at probe is not possible. In our internal linux kernel we have a vpss module in the arch/arm/mach-davinci folder that mux these vpss irq to arm irq. So we need to do this anyway in our system. Right now it might be possible to do it in probe. How do you want to handle this? > >[snip] > >> > > +static long vpfe_param_handler(struct file *file, void *priv, >> > > + int cmd, void *param) >> > > +{ >> > > + struct vpfe_fh *fh = file->private_data; >> > > + struct channel_obj *channel = fh->channel; >> > > + struct common_obj *common = NULL; >> > > + int ret = 0; >> > > + common = &(channel->common[VPFE_VIDEO_INDEX]); >> > > + >> > > + if (common->started) { >> > > + /* only allowed if streaming is not started */ >> > > + v4l2_err(vpfe_dev->driver, "channel already started\n"); >> > > + ret = -EBUSY; >> > > + goto out; >> > > + } >> > > + ret = mutex_lock_interruptible(&common->lock); >> > > + if (ret) >> > > + goto out; >> > > + switch (cmd) { >> > > + case VPFE_CMD_S_SOC_PARAMS: >> > > + { >> > >> > I don't like that. CCDC parameters should either come from a >> > board-specific file or be computed from regular V4L2 controls. >Userspace >> > applications must not be required to pass a block of CCDC-specific >> > parameters. >> >> [MK] If you see the VPFE, there are a lot of parameters our customers >wants >> to tweak. I agree that some of the duplicate parameters can go away (like >> width, height, pixel format etc), but we need to keep parameters like >> datasft (shift raw data before writing it to SDRAM) and color pattern to >be >> used etc so that customer application can change it on the fly. For >example >> color pattern depends on raw and column start used in the sensor. 
So it >> must be controlled based on a particular sensor (example MT9T031) >settings >> for a capture resolution used. Also we have many IPs, as mentioned at the >> beginning of this, which needs to be configured by application. Example >> for raw image capture, application may want to set defect pixel locations >> for pixel correction, may want to do color space conversion etc. >Similarly >> if IPIPE is in the capture path , there are parameters associated with >each >> IP in IPIPE that needs to be configured. Infact I want to split this in >to >> two. One for common parameter configuration which will be handled via >> VPFE_CMD_S_SOC_PARAMS ioctl. Others through module specific structures >and >> IOCTLS. If my understanding is correct, V4L2 controls can be used for >> parameters that can be controlled while streaming is ongoing. But how do >I >> configure modules before starting streaming like, defect pixel correction, >> data formatter, color space converter etc ? Simialry only few module >> parameters in IPIPE can be controlled while streaming is ongoing. So it >> require SoC specific module configuration IOCTLs. White balance is a >> candidate for control, defect pixel correction is a candidate for module >> configuration IOCTL. Pixel correctin varies from sensor to sensor and can >> not be in board specific files. > >We have a whole bunch of parameters in there, and you're right that >implementing a nice and clean way to set them all isn't easy. > >Some parameters can probably be converted to V4L2 controls, but we can >always >work on that later (and yes V4L2 controls can be used before streaming as >well). > >Let's take a look at the YCbCr parameters for the DM355. > >struct ccdc_params_ycbcr { > /* pixel format */ > enum ccdc_pixfmt pix_fmt; > /* progressive or interlaced frame */ > enum ccdc_frmfmt frm_fmt; > /* video window */ > struct ccdc_imgwin win; > /* field id polarity */ > enum ccdc_pinpol fid_pol; > /* vertical sync polarity */ > enum ccdc_pinpol vd_pol; > /* horizontal sync polarity */ > enum ccdc_pinpol hd_pol; > /* enable BT.656 embedded sync mode */ > int bt656_enable; > /* cb:y:cr:y or y:cb:y:cr in memory */ > enum ccdc_pixorder pix_order; > /* interleaved or separated fields */ > enum ccdc_buftype buf_type; >}; > >If my understanding is correct, pix_fmt and win are duplicate and I think >we >agree that they should be removed. frm_fmt and buf_type might fall into the >same category. pix_order can also probably be derived from the V4L2 image >format. > >fid_pol, vd_pol, hd_pol seem to be board-specific, can they be moved to >board- >specific data ? Does bt656_enable fall into the same category ? > >If my assumptions are correct, this leaves us with no parameters to set >through VPFE_CMD_S_SOC_PARAMS :-) > [MK]. Agree, this structure is not needed. Only for Raw Bayer, it is needed for few parameters. >There are a few more parameters for Bayer mode. Some of them, such as lens >shading correction, can probably be converted to V4L2 parameters. We can do >that at a later time though. > >My point is that parameters that can be computed from V4L2 data (image >format >and size, ...) shouldn't be passed explicitly from userspace, and >parameters >that are definitely board-specific should come from platform data set by >board-specific code. Other parameters can still be set using >VPFE_CMD_S_SOC_PARAMS. > >I like your plan to configure the CCDC, IPIPE and other blocks using >different >ioctls. VPFE_CMD_S_SOC_PARAMS could be renamed to VPFE_CMD_S_CCDC_PARAMS. 
> [MK] Better would be to call it VPFE_CMD_S_CCDC_RAW_PARAMS. I plan to rename VPFE_CMD_S/G_SOC_PARAMS to VPFE_CMD_S/G_CCDC_RAW_PARAMS and later move the module-specific structures for DFC, CSC, etc. to individual control ioctls (using V4L2_CID_PRIVATE_BASE).
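[MK] For the record, the split would look roughly like this. The ioctl numbering and the control names below are only illustrative, and the raw params structure name is not final either:

/* vpfe capture user header -- illustrative only */
#include <linux/videodev2.h>

/* raw bayer CCDC parameters that cannot be derived from standard V4L2
 * data and are not board-specific */
#define VPFE_CMD_S_CCDC_RAW_PARAMS \
	_IOW('V', BASE_VIDIOC_PRIVATE + 1, struct ccdc_config_params_raw)
#define VPFE_CMD_G_CCDC_RAW_PARAMS \
	_IOR('V', BASE_VIDIOC_PRIVATE + 2, struct ccdc_config_params_raw)

/* per-module settings that fit the control model become private controls */
#define VPFE_CID_DFC_ENABLE	(V4L2_CID_PRIVATE_BASE + 0)
#define VPFE_CID_CSC_ENABLE	(V4L2_CID_PRIVATE_BASE + 1)

Exact values and names can be settled once the sub-device porting is done.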
