Hi Rong,

I have started learning the binder driver (and protocol), and I came across your
post and tried to run your client/server test against the regular driver on a 3.4 kernel.

I have 3 issues to discuss.

The first question is connected to your RFC: I do not understand why binder
uses a global lock. I see that it protects nodes, procs, refs, and so on, but why?
Couldn't each of these structures be protected by its own mutex instead?
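
To make the question concrete, this is roughly what I have in mind; the names
below are all mine, not from the real binder.c, and it is only a sketch of the
idea, not a patch:

/*
 * Sketch only: the existing 3.4 driver takes one driver-wide mutex
 * (binder_lock) at the top of binder_ioctl(). What I am asking is
 * whether each structure could carry its own lock instead, e.g. one
 * per binder_proc, so unrelated processes would not serialize.
 * All names here are invented for the sake of the question.
 */
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/rbtree.h>

struct my_binder_proc {
        struct mutex lock;            /* protects only this process's state */
        struct list_head todo;        /* pending work for this process */
        struct rb_root refs_by_desc;  /* this process's references */
};

static long my_binder_ioctl(struct my_binder_proc *proc, unsigned int cmd)
{
        long ret = 0;

        mutex_lock(&proc->lock);      /* other procs' ioctls keep running */
        /* ... handle cmd (e.g. BINDER_WRITE_READ) against proc only ... */
        mutex_unlock(&proc->lock);
        return ret;
}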

I am also trying to understand binder structures like refs and threads, and I
cannot find explanations anywhere (although I have googled).
Can you or somebody else point me to a reference? (Many thanks!)
(I am happy to share this link:
http://0xlab.org/~jserv/android-binder-ipc.pdf)

And the last issue: running the test.
I was not successful in running the client; it prints these errors:
rcv NOOP
rcv FAILED_BINDER

I tried to see what is going on inside the kernel using the binder module
debug mask and discovered the following:

server process 500

[    9.554149] binder: 500:500 node 1 u  (null) c  (null) created
[    9.555003] binder: 500:500 write 4 at bfde49a0, read 0 at 00000000
[    9.555871] binder: 500:500 BC_ENTER_LOOPER
[    9.556691] binder: 500:500 wrote 4 of 4, read return 0 of 0
[    9.557544] binder: 500:500 write 0 at 00000000, read 4096 at bfde49a0

client process 505 
[  519.190472] binder: 505:505 transaction failed 29201, size 16-4  (is 16
the parcel length? And what does the 4 stand for - an offset from where to where?)
[  519.190474] binder: 505:505 wrote 44 of 44, read return 0 of 0
[  519.190478] binder: 505:505 write 0 at 00000000, read 4096 at bfe80244
[  519.190480] binder: 505:505 wrote 0 of 0, read return 8 of 4096
[  519.191138] binder: 505:505 write 0 at 00000000, read 4096 at bfe80244
[  136.864593] binder: 1161:1161 transaction failed 29201, size 16-4
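
For reference, this is how I understand the buffer the client writes for one
call, going by the 3.4-era binder.h; the comments are my interpretation, so
please correct me if I have it wrong. On a 32-bit build it comes to 4 + 40 = 44
bytes, which seems to match the "wrote 44 of 44" line above, and my tentative
guess is that the "16-4" pair is data_size and offsets_size from this structure.

/*
 * My reading of what the client submits through BINDER_WRITE_READ for
 * one call (field names from the 3.4-era binder.h; the comments and
 * example values are my interpretation).
 */
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

struct binder_transaction_data {
        union {
                size_t handle;          /* target handle for BC_TRANSACTION */
                void *ptr;
        } target;
        void *cookie;
        unsigned int code;              /* method code of the call */
        unsigned int flags;
        pid_t sender_pid;
        uid_t sender_euid;
        size_t data_size;               /* bytes of parcel data (the 16?) */
        size_t offsets_size;            /* bytes of object offsets (the 4?) */
        union {
                struct {
                        const void *buffer;   /* parcel payload */
                        const void *offsets;  /* offsets of flat_binder_objects */
                } ptr;
                uint8_t buf[8];
        } data;
};

/* The actual write buffer: a 4-byte command followed by the struct. */
struct {
        uint32_t cmd;                   /* BC_TRANSACTION */
        struct binder_transaction_data tr;
} __attribute__((packed)) writebuf;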


I also got the err_binder_alloc_buf_failed message, and I did some research
to establish the cause.
I added some printk() calls and found that the client fails because the
binder_alloc_buf() function checks for proc->vma == NULL, and it is indeed NULL,
because only binder_mmap() saves the vma into proc->vma.

But the client does not call mmap(), and as far as I can tell it is not supposed to.
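
For comparison, the stock user-space clients (the service manager, libbinder)
map the device right after opening it, which I assume is why proc->vma is
non-NULL for them. Roughly like this (the size is arbitrary and the error
handling simplified; this is not taken from your test code):

/*
 * Sketch of the open + mmap pattern the stock driver seems to expect
 * from every process that receives transactions; the mapping is what
 * fills in proc->vma, and reply buffers are later carved out of it by
 * binder_alloc_buf().
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define BINDER_MAP_SIZE (128 * 1024)    /* arbitrary size for this sketch */

int binder_open_mapped(void **mapped)
{
        int fd = open("/dev/binder", O_RDWR);
        if (fd < 0) {
                perror("open /dev/binder");
                return -1;
        }

        *mapped = mmap(NULL, BINDER_MAP_SIZE, PROT_READ, MAP_PRIVATE, fd, 0);
        if (*mapped == MAP_FAILED) {
                perror("mmap /dev/binder");
                close(fd);
                return -1;
        }
        return fd;
}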


Can you please clarify for me the proper way to run the test?

On Wednesday, January 25, 2012 3:41:20 PM UTC+2, Rong wrote:
>
> Hi Folks, 
>
> I've just finished a fresh implementation of the Android binder 
> driver, and would love to see some suggestions or comments on the code as 
> well as the whole binder IPC idea. The driver can be found on Github 
> at 
>
> https://github.com/rong1129/android-binder-ipc 
>
> in the module/new directory. The rest of the project is a minimal 
> set of framework libraries, the service manager, and some test 
> applications. 
>
> The reason I did this project is that, when I was exploring the 
> Android kernel and framework code, I found the existing binder driver 
> wasn't implemented efficiently, especially in the context of SMP. There's a 
> big mutex (binder_lock) that locks everyone else out while one ioctl is 
> in progress. I spent hours thinking of a way to remove it, but it turned out 
> to be impossible - there are basically pointers shared and passed around 
> between processes all over the place, which is why most of the driver 
> is protected by that mutex. It's easy to manage, but the downside is that 
> no two IPCs can be executed at the same time, regardless of how many 
> CPUs you have. Also, a mutex held around ioctls or any long operation can 
> significantly reduce a system's responsiveness. 
>
> So in the new implementation I took a new approach - a SysV-like 
> per-process message queue is implemented as the foundation of the driver. 
> Unlike the SysV queue, it's used only in the kernel - mainly for 
> drivers. I deliberately separated it out in the hope that it would also 
> be beneficial to other drivers. The queue is designed so that queue 
> identifiers (addresses) can be passed across processes, and queues can 
> be accessed by different processes as long as the proper get/set 
> methods are called. 
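
(Replying inline here to check my understanding: I picture your queue as
something roughly like the sketch below, with one lock per queue so only the
two endpoints of a transaction ever contend. The names and fields are mine,
not from your repository.)

/*
 * Not from the actual code: just my mental model of a per-process
 * kernel message queue that carries its own lock.
 */
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

struct msg_queue {
        spinlock_t lock;              /* protects only this queue */
        struct list_head msgs;        /* queued transactions / work items */
        wait_queue_head_t waiters;    /* readers blocked in the ioctl */
};

struct msg_item {
        struct list_head entry;
        void *payload;
};

/* Sender side: append an item to the target queue and wake a reader. */
static void msg_queue_push(struct msg_queue *q, struct msg_item *item)
{
        spin_lock(&q->lock);
        list_add_tail(&item->entry, &q->msgs);
        spin_unlock(&q->lock);
        wake_up(&q->waiters);
}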
>
> The binder driver is built on top of the queue mechanism, with the 
> other data structures designed carefully so that maximum concurrency can 
> be reached while requiring minimum locking. For example, the binder node 
> and refs in the existing driver are replaced with a single structure, 
> binder_obj. Objects (nodes and refs in the existing terms) are created 
> only in the current process context (shared by its threads) and are not 
> accessed by other processes. 
>
> In terms of performance, the current version is slightly better than 
> the existing driver, in particular in concurrent IPC scenarios. 
> As I have only just finished the coding and some simple tests, not much 
> has been done yet in terms of tuning or optimization, but I will surely 
> do that in the following days, together with completing whatever is left. 
>
> To summarize the status: I have managed to implement most of the 
> protocol, and as of now the standard binderAddInts test application 
> works properly. Most of the existing implementation details are 
> covered, except for the few things below. 
>
> * mmap and user buffer allocation - this is the only 
> incompatibility so far. 
> The existing mmap mechanism does reduce data copying from twice to 
> once, but going into another process's space to allocate and manage 
> buffers needs a big lock to avoid a lot of nasty things, and you 
> can't be guaranteed that other processes are not killed while you are 
> writing to their space. Also, the extra buffer-management overhead can 
> easily kill all the benefits it actually brings. 
>
> I implemented it in the traditional way, where there are two data copies 
> in a transaction: process A to kernel, and kernel to process B, which 
> is simple and what most drivers do (if DMA is not involved). This is 
> the only incompatible place in terms of the kernel/user API so far. There's 
> no difference when the kernel reads data from user space, but for 
> writes, the driver writes the transaction data into the buffer supplied 
> in the binder_write_read structure, instead of a pre-allocated mmap-ed 
> buffer. As a result, the application is expected to follow the same 
> logic to read the data back, and of course to provide a larger read 
> buffer when doing the ioctl. 
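
(Checking my understanding again: on the application side, I assume reading a
transaction back then looks roughly like this, with the read buffer sized to
hold the inline payload. Only struct binder_write_read and BINDER_WRITE_READ
are copied from the 3.4-era binder.h; the helper itself is invented.)

/*
 * Sketch of the application-side read as I understand the paragraph
 * above: the transaction payload comes back inline in read_buffer, so
 * the buffer has to be large enough for the BR_* commands plus data.
 */
#include <string.h>
#include <sys/ioctl.h>

struct binder_write_read {
        signed long write_size;
        signed long write_consumed;
        unsigned long write_buffer;
        signed long read_size;
        signed long read_consumed;
        unsigned long read_buffer;
};

#define BINDER_WRITE_READ _IOWR('b', 1, struct binder_write_read)

static long read_one_block(int fd, void *buf, unsigned long buf_size)
{
        struct binder_write_read bwr;

        memset(&bwr, 0, sizeof(bwr));
        bwr.read_buffer = (unsigned long)buf;
        bwr.read_size = buf_size;     /* must cover commands + inline payload */

        if (ioctl(fd, BINDER_WRITE_READ, &bwr) < 0)
                return -1;
        return bwr.read_consumed;     /* bytes the driver filled in */
}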
>
> * File descriptor sharing across processes 
> This isn't used by the test application, so it hasn't been considered yet. 
> Also, I'm not convinced it's that useful, as one can easily implement 
> something similar in user space by re-opening the file; in any case, 
> having it wouldn't affect concurrency much, as that is already taken care 
> of by the VFS. 
>
> * Priority inheritance 
> I'm not sure whether it exists to avoid priority inversion, but there 
> also seems to be priority adjustment in the framework - confusing. I'm not 
> entirely sure how they work together. I will probably look into it a 
> little later. 
>
> * Reference counting, etc. 
> The whole strong/weak refs machinery in the kernel just complicates the 
> driver, IMHO. It could well be enforced purely at the user level. 
> There's not much point in strong referencing at the driver level, 
> as a process can quit whether it wishes to or not, and whether it has 
> strong references to something or not. What matters is that the driver 
> provides a transparent channel and a proper closing-down 
> notification to the applications, so they can maintain those 
> references properly between themselves. 
>
> At the moment, there's a hack in the implementation that sends an acquire 
> command to the user when a binder object is written through the driver 
> - just to stop the application from crashing: if no one holds a 
> reference to the object, it gets destroyed right after the addService() 
> call. It took me hours to figure that out. 
>
> That's it - a summary of the last ten days or so spent working on the 
> driver, and a lot more time spent trying to understand how it works :(. 
> Anyway, it's GPLed, so feel free to try it and contribute. 
>
> Cheers, 
> Rong 
>
>
>
