Jassi,

>>> See how mailbox_startup() tries to balance mbox->ops->startup() and
>>> mailbox_fini() the mbox->ops->shutdown(). That's very fragile and the
>>> cause of imbalance between rpm enable/disable, unless your clients are
>>> buggy.
>>
>> Yeah, it is kinda messed up in the existing code, the startup defined
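For context, the fragility being pointed at is a refcount-balancing problem: ops->startup() and ops->shutdown() must fire exactly once per first-get/last-put transition, or the controller's rpm enable/disable counts drift. A minimal userspace model of that discipline (types and names are illustrative, not the actual mailbox framework code):

```c
#include <assert.h>

/* Illustrative model only -- not the real mailbox framework types. */
struct link {
	int users;      /* clients currently holding the link */
	int hw_enabled; /* stands in for the rpm enable/disable count */
};

static void hw_startup(struct link *l)  { l->hw_enabled++; } /* models rpm_enable */
static void hw_shutdown(struct link *l) { l->hw_enabled--; } /* models rpm_disable */

/* Forward to the controller only on the 0 -> 1 transition. */
static void link_startup(struct link *l)
{
	if (l->users++ == 0)
		hw_startup(l);
}

/* Forward to the controller only on the 1 -> 0 transition; a buggy
 * client releasing twice trips the assert here instead of silently
 * unbalancing the rpm count. */
static void link_shutdown(struct link *l)
{
	assert(l->users > 0);
	if (--l->users == 0)
		hw_shutdown(l);
}
```

However many clients come and go, hw_enabled only ever toggles between 0 and 1, which is exactly the invariant the quoted complaint says the startup/fini pairing fails to guarantee.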
Hello Suman,
On 10 May 2013 05:48, Suman Anna wrote:
>> No, please. The controller driver should not implement any policy (of
>> allowing/disallowing requests). It should simply try to do as
>> directed. If the client screwed up even after getting info from
>> platform_data/DT, let it suffer.
>
Hi Jassi,

> On 9 May 2013 06:55, Suman Anna wrote:
>
>>> so it can't be driven by the controller. We could make it a Kconfig option.
>>> What do you suggest?
>>
>> I am saying controller/link because they are the ones that know how the
>> physical transport is, and it may vary from one to
Hi Suman,

On 9 May 2013 06:55, Suman Anna wrote:
>> so it can't be driven by the controller. We could make it a Kconfig option.
>> What do you suggest?
>
> I am saying controller/link because they are the ones that know how the
> physical transport is, and it may vary from one to another. I
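If the behaviour did become a build-time choice as floated above, the Kconfig option might look something like this. The symbol name, default, and wording are entirely hypothetical; nothing like this existed in the patchset:

```kconfig
config MAILBOX_TX_QUEUE_LEN
	int "TX requests the mailbox framework may buffer per link"
	default 8
	help
	  Clients may submit messages faster than the physical link can
	  transmit them. This sets how many pending requests the core
	  queues per link before submissions start failing, instead of
	  leaving the policy to each controller driver.
```

The counter-argument in the thread is visible right above: a global Kconfig knob cannot reflect per-controller differences in how the physical transport behaves.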
Hi Jassi,

> The client(s) can always generate TX requests at a rate greater than
> the API could transmit on the physical link. So as much as we dislike
> it, we have to buffer TX requests, otherwise N clients would.

The current code doesn't support N clients today anyway, and if they are
On 8 May 2013 03:18, Suman Anna wrote:
> Hi Jassi,
>
>> On 7 May 2013 05:15, Suman Anna wrote:
The client(s) can always generate TX requests at a rate greater than
the API could transmit on the physical link. So as much as we dislike
it, we have to buffer TX requests,
Hi Jassi,
> On 7 May 2013 05:15, Suman Anna wrote:
>>>
>>> The client(s) can always generate TX requests at a rate greater than
>>> the API could transmit on the physical link. So as much as we dislike
>>> it, we have to buffer TX requests, otherwise N clients would.
>>
>> The current code
Hi Suman,
On 7 May 2013 05:15, Suman Anna wrote:
>>
>> The client(s) can always generate TX requests at a rate greater than
>> the API could transmit on the physical link. So as much as we dislike
>> it, we have to buffer TX requests, otherwise N clients would.
>
> The current code doesn't
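The queueing being argued over can be modeled as a small per-link TX ring: clients enqueue at whatever rate they like, the framework hands one message at a time to the hardware, and the controller's tx-done notification drains the next. A self-contained userspace sketch (queue depth, names, and return conventions are invented for illustration, not taken from the patchset):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

#define TX_RING 8  /* hypothetical per-link queue depth (power of two) */

struct tx_link {
	const void *ring[TX_RING];
	unsigned int head, tail;   /* head: next to transmit, tail: next free slot */
	bool busy;                 /* physical link currently transmitting */
};

/* Stand-in for programming the message into the controller hardware. */
static void hw_send(struct tx_link *l, const void *msg)
{
	l->busy = true;
	(void)msg;
}

/* Client-facing submit: queue the message, kick the link if it is idle. */
static int link_send(struct tx_link *l, const void *msg)
{
	if (l->tail - l->head == TX_RING)
		return -ENOBUFS;   /* ring full: the caller must back off */
	l->ring[l->tail++ % TX_RING] = msg;
	if (!l->busy)
		hw_send(l, l->ring[l->head % TX_RING]);
	return 0;
}

/* The controller driver calls this from its TX-done interrupt. */
static void link_txdone(struct tx_link *l)
{
	l->head++;
	l->busy = false;
	if (l->head != l->tail)
		hw_send(l, l->ring[l->head % TX_RING]);
}
```

This is the trade-off in the quoted exchange: without such a ring every client must implement its own retry loop, and N clients end up colliding on one busy link.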
Hi Jassi,

On 05/04/2013 02:08 PM, Jassi Brar wrote:
> Hi Suman,
>
>> Anyway, here is a summary of the open points that we have:
>> 1. Atomic Callbacks:
>> The current code provides some sort of buffering on Tx, but imposes the
>> restriction that the clients do the buffering on Rx. This is the main
>> concern
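Point 1 above is about execution context: if the framework invokes the rx callback atomically (e.g. from the controller's IRQ handler), the client cannot sleep there, so it has to stash the message and process it later from its own thread or workqueue. A userspace sketch of that client-side Rx buffering (sizes and names are made up for illustration):

```c
#include <stdbool.h>
#include <stddef.h>

#define RX_RING 16  /* hypothetical client-side Rx queue depth (power of two) */

/* Client-owned Rx buffer: the framework's callback only enqueues;
 * the client's own thread drains at its leisure. */
struct rx_fifo {
	void *slot[RX_RING];
	unsigned int head, tail;
};

/* Called from the (possibly atomic) rx callback: must not sleep. */
static bool rx_push(struct rx_fifo *f, void *msg)
{
	if (f->tail - f->head == RX_RING)
		return false;      /* full: message lost unless NACKed */
	f->slot[f->tail++ % RX_RING] = msg;
	return true;
}

/* Called from client process context, where sleeping is allowed. */
static void *rx_pop(struct rx_fifo *f)
{
	if (f->head == f->tail)
		return NULL;
	return f->slot[f->head++ % RX_RING];
}
```

The restriction being criticised is that every client has to carry a structure like this, instead of the framework buffering Rx the way it buffers Tx.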
Hi Suman,

On 4 May 2013 07:50, Suman Anna <s-a...@ti.com> wrote:
> Hi Jassi,
>
> On 04/27/2013 01:14 PM, jassisinghb...@gmail.com wrote:
>> From: Jassi Brar <jaswinder.si...@linaro.org>
>>
>> Introduce common framework for client/protocol drivers and
>> controller drivers of Inter-Processor-Communication (IPC).
>>
>> Client driver
Hi Jassi,
On 04/27/2013 01:14 PM, jassisinghb...@gmail.com wrote:
> From: Jassi Brar
>
> Introduce common framework for client/protocol drivers and
> controller drivers of Inter-Processor-Communication (IPC).
>
> Client driver developers should have a look at
> include/linux/mailbox_client.h
From: Jassi Brar <jaswinder.si...@linaro.org>

Introduce common framework for client/protocol drivers and
controller drivers of Inter-Processor-Communication (IPC).

Client driver developers should have a look at
include/linux/mailbox_client.h to understand the part of
the API exposed to client drivers.
Similarly controller
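To make the shape of the client side concrete, a client built against the mailbox client API might look roughly like the following. The names (struct mbox_client, mbox_request_channel(), mbox_send_message(), mbox_free_channel()) follow the mailbox API as it later landed in mainline and may not match this particular revision of the patchset; treat it as a sketch, not a drop-in example:

```c
#include <linux/err.h>
#include <linux/mailbox_client.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

/* May run in IRQ context: do not sleep here, just stash mssg. */
static void demo_rx_callback(struct mbox_client *cl, void *mssg)
{
}

static int demo_probe(struct platform_device *pdev)
{
	struct mbox_client *cl;
	struct mbox_chan *chan;
	u32 msg = 0xdeadbeef;

	cl = devm_kzalloc(&pdev->dev, sizeof(*cl), GFP_KERNEL);
	if (!cl)
		return -ENOMEM;

	cl->dev = &pdev->dev;
	cl->rx_callback = demo_rx_callback;
	cl->tx_block = true;	/* sleep until the controller reports tx-done */
	cl->tx_tout = 500;	/* ms; give up on a wedged remote */

	chan = mbox_request_channel(cl, 0);
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	mbox_send_message(chan, &msg);
	mbox_free_channel(chan);
	return 0;
}
```

The tx_block/rx_callback split mirrors the thread's two arguments above: Tx can be buffered or blocking in the core, while Rx delivery context is whatever the controller can provide.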