RE: Mem2Mem V4L2 devices [RFC]

2009-10-06 Thread Marek Szyprowski
Hello,

On Monday, October 05, 2009 8:07 PM Hiremath, Vaibhav wrote:

 -Original Message-
 From: Hiremath, Vaibhav [mailto:hvaib...@ti.com]
 Sent: Monday, October 05, 2009 8:07 PM
 To: Marek Szyprowski; linux-media@vger.kernel.org
 Cc: kyungmin.p...@samsung.com; Tomasz Fujak; Pawel Osciak
 Subject: RE: Mem2Mem V4L2 devices [RFC]
 
  -Original Message-
  From: Marek Szyprowski [mailto:m.szyprow...@samsung.com]
  Sent: Monday, October 05, 2009 7:26 PM
  To: Hiremath, Vaibhav; linux-media@vger.kernel.org
  Cc: kyungmin.p...@samsung.com; Tomasz Fujak; Pawel Osciak; Marek
  Szyprowski
  Subject: RE: Mem2Mem V4L2 devices [RFC]
 
  Hello,
 
  On Monday, October 05, 2009 7:43 AM Hiremath, Vaibhav wrote:
 
   In terms of V4L2 framework such device would be both video sink
   and source at the same time. The main problem is how the video
   nodes (/dev/videoX) should be assigned to such a device.
   
   The simplest way of implementing a mem2mem device in the v4l2
   framework would use two video nodes (one for input and one for
   output). Such an idea has already been suggested at the V4L2
   mini-summit.
    [Hiremath, Vaibhav] We discussed 2 options during the summit,
   
    1) Only one video device node, configuring parameters using
   V4L2_BUF_TYPE_VIDEO_CAPTURE for the input parameters and
   V4L2_BUF_TYPE_VIDEO_OUTPUT for the output parameters.
   
    2) 2 separate video device nodes, one with
   V4L2_BUF_TYPE_VIDEO_CAPTURE and another with
   V4L2_BUF_TYPE_VIDEO_OUTPUT, as mentioned by you.
   
    The obvious and preferred option would be 2, because with option 1
   we could not achieve real streaming. And again we would have to put
   a constraint on the application of a fixed input buffer index.
 
  What do you mean by real streaming?
 
 [Hiremath, Vaibhav] I meant that after streamon there will be just a
 sequence of queuing and de-queuing of buffers. With a single node of
 operation, how do we decide which is the input buffer and which one is
 the output?

By the buffer-type parameter. The only difference is that you will queue both 
buffers into the same video node.

 We would have to assume or put a constraint on the application that the
 0th index will always be the input, irrespective of the number of
 buffers requested.

No. The input buffers will be distinguished by the type parameter.

    [Hiremath, Vaibhav] Please note that we must put one limitation on
   the application: the buffers in both video nodes are mapped
   one-to-one. This means that,
   
    Video0 (input)    Video1 (output)
    Index-0    ==     index-0
    Index-1    ==     index-1
    Index-2    ==     index-2
   
    Do you see any other option to this? I think this constraint is
   obvious from the application point of view during streaming.
 
   This is correct. Every application should queue a corresponding
   output buffer for each queued input buffer.
   NOTE that this whole discussion is about how to make it possible to
   have 2 different applications running at the same time, each of them
   queuing their own input and output buffers. It will look somehow
   like this:
 
  Video0 (input)  Video1 (output)
  App1, Index-0   == App1, index-0
  App2, Index-0   == App2, index-0
  App1, Index-1   == App1, index-1
  App2, Index-1   == App2, index-1
  App1, Index-2   == App1, index-2
  App2, Index-2   == App2, index-2
 
  Note, that the absolute order of the queue/dequeue might be
  different, but each application should get the right output buffer,
  which corresponds to the queued input buffer.
 
 [Hiremath, Vaibhav] We would have to create separate queues for every
 device open call. It would be difficult/complex for the driver to
 maintain a special queue for requests from a number of applications.

I know that it would be complex for every driver to maintain its own special
queues. But IMHO such a use case (multiple instance support) is so important
(especially for embedded applications) that it is worth properly designing an
additional framework for mem2mem v4l2 devices, so that all the buffer
handling will be hidden from the actual drivers.

    [Hiremath, Vaibhav] Initially I thought of having a separate queue
   in the driver which tries to make maximum use of the underlying
   hardware. The application just queues the buffers and calls streamon;
   the driver internally queues them in its own queue and issues a
   resize operation (in this case) for all the queued buffers, releasing
   them one-by-one to the application. We have a similar implementation
   internally, but not with the standard V4L2 framework; it uses custom
   ioctls for everything.
 
  This is similar to what we have currently, however we want to move
  all our custom drivers into the generic kernel frameworks.
 
    But when we decided to provide a user space library with the media
   controller, I thought of moving this burden to the application layer.
   The application library will create an interface 

RE: Mem2Mem V4L2 devices [RFC] - Can we enhance the V4L2 API?

2009-10-06 Thread Marek Szyprowski
Hello,

On October 06, 2009 12:31 AM Karicheri, Muralidharan wrote:

 Are we constrained to use the QBUF/DQBUF/STREAMON/STREAMOFF model for this 
 specific device (memory to
 memory)? What about adding new IOCTLs that can be used for this specific 
 device type that possibly can
 simplify the implementation?

Don't forget about the simplest V4L2 io model, based on read() and write()
calls. This io model fits a transaction/conversion-like device very well.
There is an issue with blocking calls, as applications would need to use
threads in order to do a simple image conversion, but this can easily be
avoided with non-blocking io and poll().

 As we have seen in the discussion, this is not a streaming device, rather
 a transaction/conversion device which operates on a given frame to get a
 desired output frame. Each transaction may have its own set of
 configuration context which will be applied to the hardware before
 starting the operation. This is unlike a streaming device, where most of
 the configuration is done prior to starting the streaming.

From the application point of view an instance of such a device is still a
streaming device. The application should not even know whether any other apps
are using the device or not (well, it may only notice the lower throughput or
higher device latency, but this cannot be avoided). The application can queue
input and output buffers, stream on and wait for the result.

 The changes done during streaming are controls like brightness,
 contrast, gain etc. The frames received by the application are either
 synchronized to an input source timing, or the application outputs frames
 based on a display timing. Also, a single IO instance is usually
 maintained in the driver, whereas in the case of a memory-to-memory
 device the hardware needs to switch contexts between operations. So we
 might need a different approach than a capture/output device.

All this is internal to the device driver, which can hide it from the 
application.

Best regards
--
Marek Szyprowski
Samsung Poland R&D Center


--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


RE: Mem2Mem V4L2 devices [RFC]

2009-10-06 Thread Marek Szyprowski
Hello,

On Monday, October 05, 2009 8:27 PM Hiremath, Vaibhav wrote:

    [Hiremath, Vaibhav] IMO, this implementation is not a streaming
   model; we are trying to fit mem-to-mem forcefully into streaming.
 
   Why does this not fit streaming? I see no problems with streaming
   over a mem2mem device with only one video node. You just queue input
   and output buffers (they are distinguished by the 'type' parameter)
   on the same video node.
 
 [Hiremath, Vaibhav] Do we create separate queues of buffers based on
 type? I think we don't.

Why not? I really see no problems implementing such a driver, especially if
this heavily increases the number of use cases where such a device can be
used.

 App1   App2   App3  ...  AppN
   |      |      |         |
 ------------------------------
                |
           /dev/video0
                |
          Resizer Driver
 
 Everyone will be doing streamon, and in the normal use case every
 application must be getting buffers from another module (another driver,
 codecs, DSP, etc...) in multiple streams, 0, 1, 2, 3, 4 ... N

Right.

 Every application will start streaming with (mostly) fixed scaling factor 
 which mostly never changes.

Right. The driver can store the scaling factors and other parameters in the
private data of each opened instance of the /dev/video0 device.

 This one-video-node approach is possible only with the constraint that
 the application will always queue only 2 buffers, one with CAPTURE and
 one with OUTPUT type. It has to wait till the first/second gets
 finished; you can't queue multiple buffers (input and output)
 simultaneously.

Why do you think you cannot queue multiple buffers? IMHO you can perfectly
queue more than one input buffer, then queue the same number of output
buffers, and then the device will process all the buffers.

 I do agree with you here that we need to investigate whether we really
 have such a use-case. Does it make sense to put such a constraint on the
 application?

What constraint?

 What is the impact? Again, in the case of down-scaling, the application
 may want to use the same buffer as input, which is easily possible with
 the single node approach.

Right. But take into account that down-scaling is the one special case in
which the operation can be performed in-place. Usually all other types of
operations (like color space conversion or rotation) require 2 buffers.
Please note that having only one video node would not mean that all
operations must be done in-place. As Ivan stated, you can perfectly queue 2
separate input and output buffers into the one video node and the driver can
handle this correctly.

Best regards
--
Marek Szyprowski
Samsung Poland R&D Center



[PATCH] libv4l - spca561: Have static decoding tables

2009-10-06 Thread Jean-Francois Moine
Hello Hans,

Searching for the image encoding type of a new (but old!) webcam and tracing
some decoding functions, I was surprised to see a move of constant
tables to the stack in internal_spca561_decode.
Best regards.

-- 
Ken ar c'hentañ | ** Breizh ha Linux atav! **
Jef |   http://moinejf.free.fr/
libv4l - spca561: Have static decoding tables

Signed-off-by: Jean-Francois Moine moin...@free.fr

diff -r fbecc4a86361 v4l2-apps/libv4l/libv4lconvert/spca561-decompress.c
--- a/v4l2-apps/libv4l/libv4lconvert/spca561-decompress.c	Mon Oct 05 10:41:30 2009 +0200
+++ b/v4l2-apps/libv4l/libv4lconvert/spca561-decompress.c	Tue Oct 06 08:37:30 2009 +0200
@@ -308,7 +308,7 @@
 	static int accum[8 * 8 * 8];
 	static int i_hits[8 * 8 * 8];
 
-	const int nbits_A[] =
+	static const int nbits_A[] =
 	{ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
 		1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
 		1, 1,
@@ -336,7 +336,7 @@
 		3, 3, 3, 3, 3,
 		3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
 	};
-	const int tab_A[] =
+	static const int tab_A[] =
 	{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
 		0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
 		0, 0,
@@ -371,7 +371,7 @@
 		1
 	};
 
-	const int nbits_B[] =
+	static const int nbits_B[] =
 	{ 0, 8, 7, 7, 6, 6, 6, 6, 5, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4,
 		4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 3, 3, 3, 3, 3, 3, 3, 3,
 		3, 3,
@@ -399,7 +399,7 @@
 		1, 1, 1, 1, 1,
 		1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
 	};
-	const int tab_B[] =
+	static const int tab_B[] =
 	{ 0xff, -4, 3, 3, -3, -3, -3, -3, 2, 2, 2, 2, 2, 2, 2, 2, -2,
 		-2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2,
 		1, 1,
@@ -434,7 +434,7 @@
 		0, 0, 0, 0, 0, 0, 0,
 	};
 
-	const int nbits_C[] =
+	static const int nbits_C[] =
 	{ 0, 0, 8, 8, 7, 7, 7, 7, 6, 6, 6, 6, 6, 6, 6, 6, 5, 5, 5, 5,
 		5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4, 4, 4, 4,
 		4, 4,
@@ -462,7 +462,7 @@
 		2, 2, 2, 2, 2,
 		2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
 	};
-	const int tab_C[] =
+	static const int tab_C[] =
 	{ 0xff, 0xfe, 6, -7, 5, 5, -6, -6, 4, 4, 4, 4, -5, -5, -5, -5,
 		3, 3, 3, 3, 3, 3, 3, 3, -4, -4, -4, -4, -4, -4, -4, -4, 2,
 		2, 2, 2,
@@ -498,7 +498,7 @@
 		-1,
 	};
 
-	const int nbits_D[] =
+	static const int nbits_D[] =
 	{ 0, 0, 0, 0, 8, 8, 8, 8, 7, 7, 7, 7, 7, 7, 7, 7, 6, 6, 6, 6,
 		6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 5, 5, 5, 5, 5, 5, 5, 5,
 		5, 5,
@@ -526,7 +526,7 @@
 		3, 3, 3, 3, 3,
 		3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3
 	};
-	const int tab_D[] =
+	static const int tab_D[] =
 	{ 0xff, 0xfe, 0xfd, 0xfc, 10, -11, 11, -12, 8, 8, -9, -9, 9, 9,
 		-10, -10, 6, 6, 6, 6, -7, -7, -7, -7, 7, 7, 7, 7, -8, -8,
 		-8, -8,
@@ -564,7 +564,7 @@
 	};
 
 	/* a_curve[19 + i] = ... [-19..19] = [-160..160] */
-	const int a_curve[] =
+	static const int a_curve[] =
 	{ -160, -144, -128, -112, -98, -88, -80, -72, -64, -56, -48,
 		-40, -32, -24, -18, -12, -8, -5, -2, 0, 2, 5, 8, 12, 18,
 		24, 32,
@@ -572,7 +572,7 @@
 		72, 80, 88, 98, 112, 128, 144, 160
 	};
 	/* clamp0_255[256 + i] = min(max(i,255),0) */
-	const unsigned char clamp0_255[] =
+	static const unsigned char clamp0_255[] =
 	{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
 		0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
 		0, 0,
@@ -681,14 +681,14 @@
 		255
 	};
 	/* abs_clamp15[19 + i] = min(abs(i), 15) */
-	const int abs_clamp15[] =
+	static const int abs_clamp15[] =
 	{ 15, 15, 15, 15, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3,
 		2, 1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
 		15, 15,
 		15
 	};
 	/* diff_encoding[256 + i] = ... */
-	const int diff_encoding[] =
+	static const int diff_encoding[] =
 	{ 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
 		7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
 		7, 7,


RE: Mem2Mem V4L2 devices [RFC]

2009-10-06 Thread Marek Szyprowski
Hello,

On Monday, October 05, 2009 10:02 PM Karicheri, Muralidharan wrote:

 There is another use case where two Resizer hardware blocks work on the
 same input frame and give two different output frames of different
 resolutions. How do we handle this using the one-video-device approach
 you just described here?

How is the hardware actually designed? I see two possibilities:

1.
[input buffer] --[dma engine]-- [resizer1] --[dma]-- [mem output buffer1]
                             \- [resizer2] --[dma]-- [mem output buffer2]

2.
[input buffer] --[dma engine1]-- [resizer1] --[dma]-- [mem output buffer1]
             \--[dma engine2]-- [resizer2] --[dma]-- [mem output buffer2]

In the first case we would really have problems mapping it properly to video
nodes. But we should consider whether there are any use cases for such a
design (in terms of a mem-2-mem device). I know that this Y-type design makes
sense as part of a pipeline from a sensor or decoder device, but I cannot
find any useful use case for a mem2mem version of it.

The second case is much more trivial. One can just create two separate
resizer devices (with their own nodes), or one resizer driver with two
hardware resizers underneath it. In both cases the application would simply
queue the input buffer 2 times, once for each transaction.

Best regards
--
Marek Szyprowski
Samsung Poland R&D Center




[PATCH] AVerTV MCE 116 Plus radio

2009-10-06 Thread Aleksandr V. Piskunov
Added FM radio support to Avermedia AVerTV MCE 116 Plus card

Signed-off-by: Aleksandr V. Piskunov alexandr.v.pisku...@gmail.com

diff --git a/linux/drivers/media/video/ivtv/ivtv-cards.c 
b/linux/drivers/media/video/ivtv/ivtv-cards.c
--- a/linux/drivers/media/video/ivtv/ivtv-cards.c
+++ b/linux/drivers/media/video/ivtv/ivtv-cards.c
@@ -965,6 +965,7 @@
{ IVTV_CARD_INPUT_AUD_TUNER,  CX25840_AUDIO5   },
{ IVTV_CARD_INPUT_LINE_IN1,   CX25840_AUDIO_SERIAL, 1 },
},
+   .radio_input = { IVTV_CARD_INPUT_AUD_TUNER,  CX25840_AUDIO5 },
/* enable line-in */
.gpio_init = { .direction = 0xe000, .initial_value = 0x4000 },
.xceive_pin = 10,
--


Re: [PATCH] AVerTV MCE 116 Plus radio

2009-10-06 Thread Aleksandr V. Piskunov
On Tue, Oct 06, 2009 at 11:04:06AM +0300, Aleksandr V. Piskunov wrote:
 Added FM radio support to Avermedia AVerTV MCE 116 Plus card
 

What leaves me puzzled is that the radio only works ok with ivtv newi2c=1.

With the default newi2c the audio is tinny and metallic, with some strange
static. A similar problem with the pvr-150 was reported years ago; I guess
the issue is still unresolved, perhaps something with the cx25840..


RE: Global Video Buffers Pool - PMM and UPBuffer reference drivers [RFC]

2009-10-06 Thread Marek Szyprowski
Hello,

On Friday, October 02, 2009 6:04 PM David F. Carlson wrote:

 I am not a fan of the large and static driver based bootmem allocations in the
 samsung-ap-2.6 git.  This work at least addresses that issue.  Thanks.
 
 Below are some comments.  Perhaps I am not getting it.
 
 According to Marek Szyprowski:
 
  algorithm itself would typically be changed to fit a usage pattern.
 
  In our solution all memory buffers are all allocated by user-space
  applications, because only user applications have enough information
  which devices will be used in the video processing pipeline. For
  example:
 
  MFC video decoder - Post Processor (scaler and color space converter)
   - Frame Buffer memory.
 
  If such a translation succeeds the
  physical memory region will be properly locked and is guaranteed to be
  in the memory till the end of transaction. Each transaction must be
  closed by the multimedia device driver explicitly.
 
 Since this is a *physical* memory manager, I would never expect the memory
 to not be in memory...

The memory cannot be swapped to disk, but notice that PMM is also an
allocator. It must track which regions have been allocated and free them when
they are no longer in use.

  2. Allocating a buffer from userspace
 
   PMM provides a /dev/pmm special device. Each time the application wants
   to allocate a buffer it opens the /dev/pmm special file, calls the
   IOCTL_PMM_ALLOC ioctl and then mmaps it into its virtual memory. The
   struct pmm_area_info parameter for the IOCTL_PMM_ALLOC ioctl describes
   the memory requirements for the buffer (please refer to
   include/linux/s3c/pmm.h) - like buffer size, memory alignment, memory
   type (PMM supports different memory types, although currently only one
   is used) and cpu cache coherency rules (memory can be mapped as
   cacheable or non-cacheable). The buffer is freed when the file
   descriptor reference count reaches zero (so the file is closed, it is
   unmapped from the application's memory and released from multimedia
   devices).
 
 I prefer using mmap semantics with a driver over ioctls that mess with
 *my* address space.  Ioctls that mess with my address space give me hives.
 
 mmap is the call a user makes to map a memory/file object into its space.
 ioctl is for things that don't fit read/write/mmap.  :-)

Well, maybe I didn't write it clearly enough. You allocate a buffer with a
custom ioctl and then map it with an mmap() call. So the sequence of calls
is as follows:

struct pmm_mem_info info = {
        .magic = PMM_MAGIC,
        .size = BUFFER_SIZE,
        .type = PMM_MEM_GENERAL,
        .flags = PMM_NO_CACHE,
        .alignment = 0x1000,
};

fd = open("/dev/pmm", O_RDWR);
ioctl(fd, IOCTL_PMM_ALLOC, &info);
buf = mmap(NULL, BUFFER_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
close(fd);

 1.  memory alignment will be page size 4k (no?).  Or are you suggesting
 larger alignment requirements?  Are any of the target devices 24 bit dma
 clients?  (Crossing a 16MB boundary would then not work...)

You can set the alignment in pmm_mem_info structure.

 2. Since these buffers will be dma sources/targets, cache will be off (no?)

You can control whether to use cache or not on the buffer region with special
flags provided to the alloc ioctl. In case of a cacheable mapping, the
upbuffer translation layer will do the proper cache synchronization
(flush/clean/invalidate) based on the type of operation that the driver wants
to perform (please refer to include/linux/s3c/upbuffer.h)

 Many CPUs' ldr/stm memcpy do burst access to DDR, so non-cached is still
 pretty zippy for non-bit-banging apps.  Forcing non-cached makes much of
 the sync semantics you have go away.
 
 Is there a use-case for cached graphics controller memory that I am missing?

Yes, you may want to perform some of the gfx transformations with the cpu (in
case they are, for example, not available in the hardware). If the buffer is
mapped as non-cacheable then all cpu read operations will be really slow.

  3. Buffer locking
 
   If a user application performed an mmap call on some special device and
   a driver mapped some physical memory into the user address space
   (usually with the remap_pfn_range function), this will create a vm_area
   structure with the VM_PFNMAP flag set in the user address space map. The
   UPBuffer layer can easily perform a reverse mapping (virtual address to
   physical memory address) by simply reading the PTE values and checking
   whether it is contiguous in physical memory. The only problem is how to
   guarantee the security of this solution. VM_PFNMAP-type areas do not
   have associated page structures, so the memory pages cannot be locked
   directly in the page cache. However, such a memory area will not be
   freed while the special file that is behind it is still in use. We found
   that it can be correctly locked by increasing the reference counter of
   that special file (vm_area->vm_file). This locks the mapping from being
   freed if the user accidentally did something really nasty like unmapping
   that area.
 
 I am still missing it.  Your /dev/pmm is allocating 

RE: Mem2Mem V4L2 devices [RFC]

2009-10-06 Thread Ivan T. Ivanov
Hi, 

On Tue, 2009-10-06 at 08:23 +0200, Marek Szyprowski wrote:
 Hello,
 
 On Monday, October 05, 2009 8:27 PM Hiremath, Vaibhav wrote:
 
[Hiremath, Vaibhav] IMO, this implementation is not streaming
   model, we are trying to fit mem-to-mem
forcefully to streaming.
  
   Why this does not fit streaming? I see no problems with streaming
   over mem2mem device with only one video node. You just queue input
   and output buffers (they are distinguished by 'type' parameter) on
   the same video node.
  
  [Hiremath, Vaibhav] Do we create separate queue of buffers based on type? I 
  think we don't.
 
 Why not? I really see no problems implementing such driver, especially if 
 this heavily increases the number of use cases where such
 device can be used.
 
  App1   App2   App3  ...  AppN
    |      |      |         |
  ------------------------------
                 |
            /dev/video0
                 |
           Resizer Driver
  
  Everyone will be doing streamon, and in normal use case every application 
  must be getting buffers from
  another module (another driver, codecs, DSP, etc...) in multiple streams, 
  0, 1,2,3,4N
 
 Right.
 
  Every application will start streaming with (mostly) fixed scaling factor 
  which mostly never changes.
 
 Right. The driver can store the scaling factors and other parameters in the 
 private data of each opened instance of the /dev/video0
 device.
 
  This one video node approach is possible only with constraint that, the 
  application will always queue
  only 2 buffers with one CAPTURE and one with OUTPUT type. He has to wait 
  till first/second gets
  finished, you can't queue multiple buffers (input and output) 
  simultaneously.
 
 Why do you think you cannot queue multiple buffers? IMHO can perfectly queue 
 more than one input buffer, then queue the same number
 of output buffers and then the device will process all the buffers.
 
  I do agree here with you that we need to investigate on whether we really 
  have such use-case. Does it
  make sense to put such constraint on application?
 
 What constraint?
 
  What is the impact? Again in case of down-scaling,
  application may want to use same buffer as input, which is easily possible 
  with single node approach.
 
 Right. But take into account that down-scaling is the one special case in 
 which the operation can be performed in-place. Usually all
 other types of operations (like color space conversion or rotation) require 2 
 buffers. Please note that having only one video node
 would not mean that all operations must be done in-place. As Ivan stated you 
 can perfectly queue 2 separate input and output buffers
 into the one video node and the driver can handle this correctly.
 

 I agree with you, Marek.

 Can I make one suggestion? As we all know, some hardware can do in-place
 processing. I think it will not be too bad if the user puts the same
 buffer as input and output, or with some spare space between the start
 addresses of the input and output. From the driver point of view there is
 no difference; it will see 2 different buffers. In this case we can also
 save the time of mapping virtual to physical addresses.

 But in general, I think separate input and output buffers (even
 overlapped) and a single device node will simplify the design and
 implementation of such drivers. It will also be clearer and more easily
 manageable from the user space point of view.

 iivanov



 Best regards
 --
 Marek Szyprowski
 Samsung Poland R&D Center
 



uvcvideo: Finally fix Logitech Quickcam for Notebooks Pro

2009-10-06 Thread Ondrej Zary
Hello,
I have a Logitech Quickcam for Notebooks Pro camera (046d:08c3) which just
does not work even with kernel 2.6.31 and has never worked well before.

On http://linux-uvc.berlios.de/, there are two problems listed. I want to
really fix these two problems so that the camera will just work after being
plugged in (and not disconnect). I started with problem no. 2, as this
causes the camera not to work at all when plugged in:

usb 5-2.4: new high speed USB device using ehci_hcd and address 7
usb 5-2.4: configuration #1 chosen from 1 choice
uvcvideo: Found UVC 1.00 device unnamed (046d:08c3)
uvcvideo: UVC non compliance - GET_DEF(PROBE) not supported. Enabling 
workaround.
uvcvideo: Failed to query (129) UVC probe control : -110 (exp. 26).
uvcvideo: Failed to initialize the device (-5).

When I do modprobe snd_usb_audio, then rmmod snd_usb_audio and
finally modprobe uvcvideo, it works. So it looks like snd_usb_audio does
some initialization that allows uvcvideo to work. It didn't work at all when
I didn't have the snd_usb_audio module compiled.

What was the change that supposedly broke this in 2.6.22?

-- 
Ondrej Zary


Re: Global Video Buffers Pool - PMM and UPBuffer reference drivers

2009-10-06 Thread David F. Carlson
According to Marek Szyprowski:
 
 struct pmm_mem_info info = {
         .magic = PMM_MAGIC,
         .size = BUFFER_SIZE,
         .type = PMM_MEM_GENERAL,
         .flags = PMM_NO_CACHE,
         .alignment = 0x1000,
 };
 
 fd = open("/dev/pmm", O_RDWR);
 ioctl(fd, IOCTL_PMM_ALLOC, &info);
 buf = mmap(NULL, BUFFER_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
 close(fd);

Thanks for the clarification.

 
  2. Since these buffers will be dma sources/targets, cache will be off (no?)
 
 You can control whether to use cache or not on the buffer region with
 special flags provided to the alloc ioctl. In case of a cacheable mapping,
 the upbuffer translation layer will do the proper cache synchronization
 (flush/clean/invalidate) based on the type of operation that the driver
 wants to perform (please refer to include/linux/s3c/upbuffer.h)

How does user-space know that a buffer will be the target of DMA?  How will
flushing work on implied-DMA devices (such as the FB 60Hz dma)?

Will the FB take any user- or driver-allocated PMM and fix it to be
non-cached so that implicit DMA makes sense?  Or does the user have to know
that a buffer may be the target of DMA sometime later, because the driver
it passed the PMM to may subsequently pass it to another driver?

You have added lots of capability but have provided no user-space guidance.

 
 Exactly this is addressed by the UPBuffer translation layer. If application
 unmaps the buffer from its address space the region is not freed unless the
 multimedia driver explicitly unlocks it after the transaction. That is that
 I called the buffer locking. Multimedia driver must lock the buffer before
 performing any DMA transactions on it.
  
 
  You have presented a very flexible, general purpose bootmem 
  allocator/mapper.
  (cache/uncached, bounce/pmm, etc.)
  
  The problem you were trying to solve is a means to generalize multiple
  compile-time fixed bootmem allocations at runtime.
  
  Perhaps this could be simplified to fix that problem by assuming that
  all users (including the s3c-fb driver) would use a simple non-cached
  pmm allocator so that all allocations would be pooled.
 
 I don't get this, could you elaborate?

You have lots of provisions for bounce copies etc. that imply that s3c-mm
drivers will accept non-PMM buffers for I/O.  This creates more problems than
it solves.  Fix the fixed-allocation problem, then solve world hunger.

 
  I would advocate hiding pmm allocations within the s3c-mm drivers.
  Each driver could test user buffers for isPMM() trivially since the
  bootmem is physically contig.
  
  What is the advantage in exporting the pmm API to user-space?
 
 Only user applications know what buffers will be required for the
 processing they are performing and which of them they want to reuse
 with other drivers. We decided to remove all buffers from the drivers
 and allocate them separately in user space. This way no memory is wasted
 to fixed buffers. Please note that the ability of SHARING the same buffer
 between different devices is the key feature of this solution.

I have no problem with buffer sharing and runtime pools.  Motherhood and 
apple pie.

I think what is missing are the use-cases for *each s3c-mm device*:

1.  Device DMA model (cached/non-cached; no-dma / explicit dma / implicit dma)
2.  Device buffer model (min size, max size, alignment, scatter/gather)
3.  Device shared PMM use-case (post-fb and what else?)
4.  Device lifecycle of PMM buffers (define transaction)
5.  Device non-PMM use-case (when would using non-PMM make sense?)

You obviously had these use-cases in mind when you designed the PMM.

It would help to understand this design if you could elaborate on your model
for how these devices would be used (and how the user will know how
to satisfy each device wrt (1), (2), (3)).

The reality is that user-space doesn't/can't/shouldn't know intimate details 
of driver internals like required alignment, etc.  

My suggestion (from the previous email that PMM is a driver issue):
The user should direct each driver to allocate its buffer(s), providing a
size and a SHARED|PRIVATE flag depending on whether the buffer could ever
be passed to another driver.

struct user_pmm {
        size_t   size;  /* desired size of the allocation */
        uint32_t flags; /* SHARED or PRIVATE to this driver */
};

*Each driver* has the IOCTL_PMM_ALLOC so that it can know its requirements.
And there is no /dev/pmm.

Cheers,

David F. Carlson    Chronolytics, Inc.    Rochester, NY
mailto:d...@chronolytics.com    http://www.chronolytics.com

"The faster I go, the behinder I get." --Lewis Carroll
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


RE: Mem2Mem V4L2 devices [RFC]

2009-10-06 Thread Hiremath, Vaibhav

 -Original Message-
 From: Marek Szyprowski [mailto:m.szyprow...@samsung.com]
 Sent: Tuesday, October 06, 2009 11:53 AM
 To: Hiremath, Vaibhav; 'Ivan T. Ivanov'; linux-media@vger.kernel.org
 Cc: kyungmin.p...@samsung.com; Tomasz Fujak; Pawel Osciak; Marek
 Szyprowski
 Subject: RE: Mem2Mem V4L2 devices [RFC]
 
 Hello,
 
 On Monday, October 05, 2009 8:27 PM Hiremath, Vaibhav wrote:
 
[Hiremath, Vaibhav] IMO, this implementation is not a streaming
   model; we are trying to fit mem-to-mem forcefully into streaming.
  
   Why does this not fit streaming? I see no problems with streaming
   over a mem2mem device with only one video node. You just queue
   input and output buffers (they are distinguished by the 'type'
   parameter) on the same video node.
  
  [Hiremath, Vaibhav] Do we create separate queues of buffers based
 on type? I think we don't.
 
 Why not? I really see no problems implementing such a driver,
 especially if this heavily increases the number of use cases where
 such a device can be used.
 
[Hiremath, Vaibhav] I thought about it and you are correct, it should be possible.
I was kind of biased and thinking in only one direction. Now I don't see any
reason why we should go for the 2-device-node approach. Earlier I was thinking of 2
device nodes for the 2 queues; if it is possible with one device node then I think
we should align on the single-device-node approach.

Do you see any issues with it?

Thanks,
Vaibhav

   App1   App2   App3   ...   AppN
     |      |      |           |
    ------------------------------
                   |
              /dev/video0
                   |
             Resizer Driver

  Everyone will be doing streamon, and in the normal use case every
  application must be getting buffers from another module (another
  driver, codecs, DSP, etc.) in multiple streams: 0, 1, 2, 3, 4 ... N
snip
 case in which the operation can be performed in-place. Usually all
 other types of operations (like color space conversion or rotation)
 require 2 buffers. Please note that having only one video node
 would not mean that all operations must be done in-place. As Ivan
 stated you can perfectly queue 2 separate input and output buffers
 into the one video node and the driver can handle this correctly.
 
 Best regards
 --
 Marek Szyprowski
 Samsung Poland R&D Center
 



[cron job] v4l-dvb daily build 2.6.22 and up: ERRORS, 2.6.16-2.6.21: ERRORS

2009-10-06 Thread Hans Verkuil
This message is generated daily by a cron job that builds v4l-dvb for
the kernels and architectures in the list below.

Results of the daily build of v4l-dvb:

date:        Tue Oct  6 19:00:03 CEST 2009
path:        http://www.linuxtv.org/hg/v4l-dvb
changeset:   13046:c7aa399e5dac
gcc version: gcc (GCC) 4.3.1
hardware:    x86_64
host os:     2.6.26

linux-2.6.22.19-armv5: OK
linux-2.6.23.12-armv5: OK
linux-2.6.24.7-armv5: OK
linux-2.6.25.11-armv5: OK
linux-2.6.26-armv5: OK
linux-2.6.27-armv5: OK
linux-2.6.28-armv5: OK
linux-2.6.29.1-armv5: OK
linux-2.6.30-armv5: OK
linux-2.6.31-armv5: OK
linux-2.6.32-rc3-armv5: ERRORS
linux-2.6.32-rc3-armv5-davinci: ERRORS
linux-2.6.27-armv5-ixp: ERRORS
linux-2.6.28-armv5-ixp: ERRORS
linux-2.6.29.1-armv5-ixp: ERRORS
linux-2.6.30-armv5-ixp: ERRORS
linux-2.6.31-armv5-ixp: ERRORS
linux-2.6.32-rc3-armv5-ixp: ERRORS
linux-2.6.28-armv5-omap2: OK
linux-2.6.29.1-armv5-omap2: OK
linux-2.6.30-armv5-omap2: OK
linux-2.6.31-armv5-omap2: ERRORS
linux-2.6.32-rc3-armv5-omap2: ERRORS
linux-2.6.22.19-i686: ERRORS
linux-2.6.23.12-i686: ERRORS
linux-2.6.24.7-i686: ERRORS
linux-2.6.25.11-i686: ERRORS
linux-2.6.26-i686: OK
linux-2.6.27-i686: OK
linux-2.6.28-i686: OK
linux-2.6.29.1-i686: WARNINGS
linux-2.6.30-i686: WARNINGS
linux-2.6.31-i686: WARNINGS
linux-2.6.32-rc3-i686: ERRORS
linux-2.6.23.12-m32r: OK
linux-2.6.24.7-m32r: OK
linux-2.6.25.11-m32r: OK
linux-2.6.26-m32r: OK
linux-2.6.27-m32r: OK
linux-2.6.28-m32r: OK
linux-2.6.29.1-m32r: OK
linux-2.6.30-m32r: OK
linux-2.6.31-m32r: OK
linux-2.6.32-rc3-m32r: ERRORS
linux-2.6.30-mips: WARNINGS
linux-2.6.31-mips: OK
linux-2.6.32-rc3-mips: ERRORS
linux-2.6.27-powerpc64: ERRORS
linux-2.6.28-powerpc64: ERRORS
linux-2.6.29.1-powerpc64: ERRORS
linux-2.6.30-powerpc64: ERRORS
linux-2.6.31-powerpc64: ERRORS
linux-2.6.32-rc3-powerpc64: ERRORS
linux-2.6.22.19-x86_64: ERRORS
linux-2.6.23.12-x86_64: ERRORS
linux-2.6.24.7-x86_64: ERRORS
linux-2.6.25.11-x86_64: ERRORS
linux-2.6.26-x86_64: OK
linux-2.6.27-x86_64: OK
linux-2.6.28-x86_64: OK
linux-2.6.29.1-x86_64: WARNINGS
linux-2.6.30-x86_64: WARNINGS
linux-2.6.31-x86_64: WARNINGS
linux-2.6.32-rc3-x86_64: ERRORS
sparse (linux-2.6.31): OK
sparse (linux-2.6.32-rc3): OK
linux-2.6.16.61-i686: ERRORS
linux-2.6.17.14-i686: ERRORS
linux-2.6.18.8-i686: ERRORS
linux-2.6.19.5-i686: ERRORS
linux-2.6.20.21-i686: ERRORS
linux-2.6.21.7-i686: ERRORS
linux-2.6.16.61-x86_64: ERRORS
linux-2.6.17.14-x86_64: ERRORS
linux-2.6.18.8-x86_64: ERRORS
linux-2.6.19.5-x86_64: ERRORS
linux-2.6.20.21-x86_64: ERRORS
linux-2.6.21.7-x86_64: ERRORS

Detailed results are available here:

http://www.xs4all.nl/~hverkuil/logs/Tuesday.log

Full logs are available here:

http://www.xs4all.nl/~hverkuil/logs/Tuesday.tar.bz2

The V4L2 specification failed to build, but the last compiled spec is here:

http://www.xs4all.nl/~hverkuil/spec/v4l2.html

The DVB API specification from this daily build is here:

http://www.xs4all.nl/~hverkuil/spec/dvbapi.pdf



Re: [PULL] http://mercurial.intuxication.org/hg/v4l-dvb-commits

2009-10-06 Thread Igor M. Liplianin
On 5 October 2009 16:23:32 Mauro Carvalho Chehab wrote:
 Em Wed, 23 Sep 2009 20:47:17 +0300

 Igor M. Liplianin liplia...@me.by wrote:
  Mauro,
 
  Please pull from http://mercurial.intuxication.org/hg/v4l-dvb-commits
 
  for the following 2 changesets:
 
  01/02: Add support for TBS-likes remotes
  http://mercurial.intuxication.org/hg/v4l-dvb-commits?cmd=changeset;node=c4e209d7decc

 +   { 0x1a, KEY_SHUFFLE},   /* snapshot */

 Snapshot should use KEY_CAMERA instead. Please see the API reference at:
   http://linuxtv.org/downloads/v4l-dvb-apis/ch17s01.html
I will change and resend.


  02/02: Add support for TeVii remotes
  http://mercurial.intuxication.org/hg/v4l-dvb-commits?cmd=changeset;node=471f55ec066a

 Some keys here also seem weird to my eyes:

 +   { 0x41, KEY_AB},
 +   { 0x46, KEY_F1},
 +   { 0x47, KEY_F2},
 +   { 0x5e, KEY_F3},
 +   { 0x5c, KEY_F4},
 +   { 0x52, KEY_F5},
 +   { 0x5a, KEY_F6},

 Do you have keys labeled as AB, F1..F6 at the IR?
Well, definitely yes.
However, they additionally carry the labels "satellite", "provider",
"transponder", "favorites", "all" for F1 .. F5.
KEY_AB is labeled as "A/B".

http://murmansat.ru/dev/b/tevii-s600.jpg


 Also, I don't like using KEY_POWER for power. Some Linux distros turn the
 computer off with this keycode. It is better to use KEY_POWER2 instead, and
 let the userspace apps (or lirc) properly associate it with something
 useful, like finishing the media application, instead of turning the
 computer off.
I will change it too.
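For illustration, the two agreed remappings could be sketched as below. The scancodes 0x1a and 0x41 come from the patch excerpts above; the table type here is a simplified stand-in, not the in-tree ir-keymaps structure:

```c
#include <assert.h>
#include <linux/input.h>

/* Simplified stand-in for a scancode -> keycode map entry. */
struct rc_map_entry {
	unsigned scancode;
	unsigned keycode;
};

static const struct rc_map_entry tevii_fixups[] = {
	{ 0x1a, KEY_CAMERA },  /* "snapshot": KEY_CAMERA, not KEY_SHUFFLE */
	{ 0x41, KEY_AB },      /* key labeled "A/B" on the remote */
};
```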


   drivers/media/common/ir-keymaps.c     |   99 +-
   drivers/media/video/cx88/cx88-input.c |   26
   include/media/ir-common.h             |    2
   3 files changed, 124 insertions(+), 3 deletions(-)
 
  Thanks,
  Igor
 

 Cheers,
 Mauro

-- 
Igor M. Liplianin
Microsoft Windows Free Zone - Linux used for all Computing Tasks


Re: Patch for TeVii S470

2009-10-06 Thread Igor M. Liplianin
On 3 October 2009 16:37:31 Niels Ole Salscheider wrote:
 Hello,

 I have downloaded and compiled the linuxtv sources from
 http://mercurial.intuxication.org/hg/s2-liplianin in order to get my TeVii
 S470 working. Nevertheless, I get the following error if I try to tune any
 channel:

 Oct  3 11:15:51 Server kernel: ds3000_firmware_ondemand: Waiting for
 firmware upload (dvb-fe-ds3000.fw)...
 Oct  3 11:15:51 Server kernel: i2c-adapter i2c-1: firmware: requesting
 dvb-fe-ds3000.fw
 Oct  3 11:15:51 Server kernel: [ cut here ]
 Oct  3 11:15:51 Server kernel: WARNING: at fs/sysfs/dir.c:487
 sysfs_add_one+0x12d/0x160()
 Oct  3 11:15:51 Server kernel: Hardware name: System Product Name
 Oct  3 11:15:51 Server kernel: sysfs: cannot create duplicate filename
 '/devices/pci:00/:00:05.0/:02:00.0/i2c-adapter/i2c-1/i2c-1'
 Oct  3 11:15:51 Server kernel: Modules linked in: af_packet hwmon_vid
 i2c_dev powernow_k8 kvm_amd kvm nfsd nfs_acl exportfs nfs lockd auth_rpcgss
 sunrpc dvb_pll ds3000 stv0299 cx23885 cx2341x v4l2_common videodev
 v4l1_compat b2c2_flexcop_pci v4l2_compat_ioctl32 b2c2_flexcop
 videobuf_dma_sg videobuf_dvb dvb_core videobuf_core processor i2c_piix4
 btcx_risc cx24123 atiixp thermal tveeprom ide_core cx24113 ehci_hcd
 ohci_hcd s5h1420 thermal_sys shpchp usbcore floppy atl1 ata_generic
 8250_pnp rtc_cmos pata_acpi rtc_core pci_hotplug 8250 mii sg serial_core
 k8temp rtc_lib hwmon button unix
 Oct  3 11:15:51 Server kernel: Pid: 2548, comm: kdvb-ad-1-fe-0 Not tainted
 2.6.31-gentoo #1
 Oct  3 11:15:51 Server kernel: Call Trace:
 Oct  3 11:15:51 Server kernel: [81186f3d] ?
 sysfs_add_one+0x12d/0x160 Oct  3 11:15:51 Server kernel:
 [81057af9] ?
 warn_slowpath_common+0x89/0x100
 Oct  3 11:15:51 Server kernel: [81186de4] ?
 sysfs_pathname+0x44/0x70 Oct  3 11:15:51 Server kernel:
 [81057c11] ? warn_slowpath_fmt+0x61/0x90 Oct  3 11:15:51 Server
 kernel: [81186de4] ? sysfs_pathname+0x44/0x70 Oct  3 11:15:51
 Server kernel: [8110f5bb] ? kmem_cache_alloc+0x9b/0x160 Oct  3
 11:15:51 Server kernel: [81186de4] ? sysfs_pathname+0x44/0x70 Oct
  3 11:15:51 Server kernel: [81186f3d] ? sysfs_add_one+0x12d/0x160
 Oct  3 11:15:51 Server kernel: [81187ee0] ? create_dir+0x70/0xe0
 Oct  3 11:15:51 Server kernel: [81187f8f] ?
 sysfs_create_dir+0x3f/0x70 Oct  3 11:15:51 Server kernel:
 [812b23c9] ?
 kobject_add_internal+0x109/0x210
 Oct  3 11:15:51 Server kernel: [812b27c4] ? kobject_add+0x64/0xb0
 Oct  3 11:15:51 Server kernel: [8135308d] ?
 dev_set_name+0x5d/0x80 Oct  3 11:15:51 Server kernel: [812b21f6]
 ? kobject_get+0x26/0x50 Oct  3 11:15:51 Server kernel: [81353fdf]
 ? device_add+0x18f/0x680 Oct  3 11:15:51 Server kernel:
 [8135f83e] ? _request_firmware+0x2ae/0x5d0 Oct  3 11:15:51 Server
 kernel: [a0201ff6] ? ds3000_tune+0xaf6/0xe98 [ds3000]
 Oct  3 11:15:51 Server kernel: [81066c64] ?
 try_to_del_timer_sync+0x64/0x90
 Oct  3 11:15:51 Server kernel: [a017da61] ?
 dvb_frontend_swzigzag_autotune+0xe1/0x260 [dvb_core]
 Oct  3 11:15:51 Server kernel: [81066d80] ?
 process_timeout+0x0/0x40 Oct  3 11:15:51 Server kernel:
 [a017ec0a] ?
 dvb_frontend_swzigzag+0x26a/0x2c0 [dvb_core]
 Oct  3 11:15:51 Server kernel: [a017f0b0] ?
 dvb_frontend_thread+0x450/0x770 [dvb_core]
 Oct  3 11:15:51 Server kernel: [81077d80] ?
 autoremove_wake_function+0x0/0x60
 Oct  3 11:15:51 Server kernel: [a017ec60] ?
 dvb_frontend_thread+0x0/0x770 [dvb_core]
 Oct  3 11:15:51 Server kernel: [a017ec60] ?
 dvb_frontend_thread+0x0/0x770 [dvb_core]
 Oct  3 11:15:51 Server kernel: [81077846] ? kthread+0xb6/0xd0
 Oct  3 11:15:51 Server kernel: [8100d29a] ? child_rip+0xa/0x20
 Oct  3 11:15:51 Server kernel: [81077790] ? kthread+0x0/0xd0
 Oct  3 11:15:51 Server kernel: [8100d290] ? child_rip+0x0/0x20
 Oct  3 11:15:51 Server kernel: ---[ end trace 478953c6d1c9275f ]---


 The following patch solves this problem for me:

 diff -r 82a256f5d842 linux/drivers/media/dvb/frontends/ds3000.c
 --- a/linux/drivers/media/dvb/frontends/ds3000.cWed Sep 23 20:44:12
 2009 +0300
 +++ b/linux/drivers/media/dvb/frontends/ds3000.cSat Oct 03 15:28:52
 2009 +0200
 @@ -444,7 +444,7 @@
 	/* Load firmware */
 	/* request the firmware, this will block until someone uploads it */
 	printk("%s: Waiting for firmware upload (%s)...\n",
 			__func__, DS3000_DEFAULT_FIRMWARE);
 -	ret = request_firmware(&fw, DS3000_DEFAULT_FIRMWARE, &state->i2c->dev);
 +	ret = request_firmware(&fw, DS3000_DEFAULT_FIRMWARE, state->i2c->dev.parent);
 	printk("%s: Waiting for firmware upload(2)...\n", __func__);
 	if (ret) {
 		printk("%s: No firmware uploaded (timeout or file not found?)\n",
 			__func__);

 Kind regards

 Ole

[PATCH 0/8] Add vsprintf extension %pU to print UUID/GUIDs and use it

2009-10-06 Thread Joe Perches
Using %pU makes an x86 defconfig image a bit smaller

before: $ size vmlinux
   text    data     bss     dec     hex filename
6976022  679572 1359668 9015262  898fde vmlinux

after:  $ size vmlinux
   text    data     bss     dec     hex filename
6975863  679652 1359668 9015183  898f8f vmlinux

Joe Perches (8):
  lib/vsprintf.c: Add %pU to print UUID/GUIDs
  random.c: Use %pU to print UUIDs
  drivers/firmware/dmi_scan.c: Use %pUB to print UUIDs
  drivers/md/md.c: Use %pU to print UUIDs
  drivers/media/video/uvc: Use %pUl to print UUIDs
  fs/gfs2/sys.c: Use %pUB to print UUIDs
  fs/ubifs: Use %pUB to print UUIDs
  fs/xfs/xfs_log_recover.c: Use %pU to print UUIDs

 drivers/char/random.c|   10 +---
 drivers/firmware/dmi_scan.c  |5 +--
 drivers/md/md.c  |   16 ++--
 drivers/media/video/uvc/uvc_ctrl.c   |   69 --
 drivers/media/video/uvc/uvc_driver.c |7 +--
 drivers/media/video/uvc/uvcvideo.h   |   10 -
 fs/gfs2/sys.c|   16 +--
 fs/ubifs/debug.c |9 +---
 fs/ubifs/super.c |7 +---
 fs/xfs/xfs_log_recover.c |   14 ++-
 lib/vsprintf.c   |   62 ++-
 11 files changed, 114 insertions(+), 111 deletions(-)
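For readers unfamiliar with the specifier, here is a user-space sketch of the little-endian variant (%pUl, the one the UVC patch uses for GUIDs). The byte ordering follows my reading of the kernel's printk-formats documentation, so treat it as an approximation, not the kernel implementation:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <assert.h>

/* Byte order for the little-endian UUID rendering: the first three
 * fields are stored byte-swapped, the last two are straight. */
static const int uuid_le_index[16] = {
	3, 2, 1, 0, 5, 4, 7, 6, 8, 9, 10, 11, 12, 13, 14, 15
};

/* Format 16 raw bytes as "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
 * the way %pUl would; out must hold at least 37 bytes. */
static void uuid_le_string(const uint8_t u[16], char out[37])
{
	static const int dash_after[16] = {
		0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0 };
	char *p = out;
	for (int i = 0; i < 16; i++) {
		p += sprintf(p, "%02x", u[uuid_le_index[i]]);
		if (dash_after[i])
			*p++ = '-';
	}
	*p = '\0';
}
```

The space savings in the size comparison above come from replacing many such open-coded format strings and argument lists with a single shared formatter in lib/vsprintf.c.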



[PATCH 5/8] drivers/media/video/uvc: Use %pUl to print UUIDs

2009-10-06 Thread Joe Perches
Signed-off-by: Joe Perches j...@perches.com
---
 drivers/media/video/uvc/uvc_ctrl.c   |   69 --
 drivers/media/video/uvc/uvc_driver.c |7 +--
 drivers/media/video/uvc/uvcvideo.h   |   10 -
 3 files changed, 35 insertions(+), 51 deletions(-)

diff --git a/drivers/media/video/uvc/uvc_ctrl.c 
b/drivers/media/video/uvc/uvc_ctrl.c
index c3225a5..4d06976 100644
--- a/drivers/media/video/uvc/uvc_ctrl.c
+++ b/drivers/media/video/uvc/uvc_ctrl.c
@@ -1093,8 +1093,8 @@ int uvc_xu_ctrl_query(struct uvc_video_chain *chain,
 
 	if (!found) {
 		uvc_trace(UVC_TRACE_CONTROL,
-			"Control " UVC_GUID_FORMAT "/%u not found.\n",
-			UVC_GUID_ARGS(entity->extension.guidExtensionCode),
+			"Control %pUl/%u not found.\n",
+			entity->extension.guidExtensionCode,
 			xctrl->selector);
 		return -EINVAL;
 	}
@@ -1171,9 +1171,9 @@ int uvc_ctrl_resume_device(struct uvc_device *dev)
 		    (ctrl->info->flags & UVC_CONTROL_RESTORE) == 0)
 			continue;
 
-		printk(KERN_INFO "restoring control " UVC_GUID_FORMAT
-			"/%u/%u\n", UVC_GUID_ARGS(ctrl->info->entity),
-			ctrl->info->index, ctrl->info->selector);
+		printk(KERN_INFO "restoring control %pUl/%u/%u\n",
+		       ctrl->info->entity,
+		       ctrl->info->index, ctrl->info->selector);
 		ctrl->dirty = 1;
 	}
@@ -1228,46 +1228,43 @@ static void uvc_ctrl_add_ctrl(struct uvc_device *dev,
 			dev->intfnum, info->selector, (__u8 *)&size, 2);
 	if (ret < 0) {
 		uvc_trace(UVC_TRACE_CONTROL, "GET_LEN failed on "
-			"control " UVC_GUID_FORMAT "/%u (%d).\n",
-			UVC_GUID_ARGS(info->entity), info->selector,
-			ret);
+			  "control %pUl/%u (%d).\n",
+			  info->entity, info->selector, ret);
 		return;
 	}
 
 	if (info->size != le16_to_cpu(size)) {
-		uvc_trace(UVC_TRACE_CONTROL, "Control " UVC_GUID_FORMAT
-			"/%u size doesn't match user supplied "
-			"value.\n", UVC_GUID_ARGS(info->entity),
-			info->selector);
+		uvc_trace(UVC_TRACE_CONTROL,
+			  "Control %pUl/%u size doesn't match user supplied value.\n",
+			  info->entity, info->selector);
 		return;
 	}
 
 	ret = uvc_query_ctrl(dev, UVC_GET_INFO, ctrl->entity->id,
 			dev->intfnum, info->selector, &inf, 1);
 	if (ret < 0) {
-		uvc_trace(UVC_TRACE_CONTROL, "GET_INFO failed on "
-			"control " UVC_GUID_FORMAT "/%u (%d).\n",
-			UVC_GUID_ARGS(info->entity), info->selector,
-			ret);
+		uvc_trace(UVC_TRACE_CONTROL,
+			  "GET_INFO failed on control %pUl/%u (%d).\n",
+			  info->entity, info->selector, ret);
 		return;
 	}
 
 	flags = info->flags;
 	if (((flags & UVC_CONTROL_GET_CUR) && !(inf & (1 << 0))) ||
 	    ((flags & UVC_CONTROL_SET_CUR) && !(inf & (1 << 1)))) {
-		uvc_trace(UVC_TRACE_CONTROL, "Control "
-			UVC_GUID_FORMAT "/%u flags don't match "
-			"supported operations.\n",
-			UVC_GUID_ARGS(info->entity), info->selector);
+		uvc_trace(UVC_TRACE_CONTROL,
+			  "Control %pUl/%u flags don't match supported operations.\n",
+			  info->entity, info->selector);
 		return;
 	}
 	}
 
 	ctrl->info = info;
 	ctrl->data = kmalloc(ctrl->info->size * UVC_CTRL_NDATA, GFP_KERNEL);
-	uvc_trace(UVC_TRACE_CONTROL, "Added control " UVC_GUID_FORMAT "/%u "
-		"to device %s entity %u\n", UVC_GUID_ARGS(ctrl->info->entity),
-		ctrl->info->selector, dev->udev->devpath, entity->id);
+	uvc_trace(UVC_TRACE_CONTROL,
+		  "Added control %pUl/%u to device %s entity %u\n",
+		  ctrl->info->entity, ctrl->info->selector,
+		  dev->udev->devpath, entity->id);
 }
 
 /*
@@ -1293,17 +1290,16 @@ int uvc_ctrl_add_info(struct uvc_control_info *info)
 			continue;
 
 		if (ctrl->selector == info->selector) {
-			uvc_trace(UVC_TRACE_CONTROL, "Control "
-