[ 
https://issues.apache.org/jira/browse/HDFS-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-12134:
-----------------------------------
    Attachment: HDFS-12134.HDFS-8707.003.patch

Thanks for reviewing again [~mdeepak].  I've also got some burn-in time now 
with external tests that used to hit gssapi issues, and it looks like this 
patch took care of them.

Re-uploading the same patch since it looks like the CI build had issues.  If 
that goes well, I'll commit this to HDFS-8707.

> libhdfs++: Add a synchronization interface for the GSSAPI
> ---------------------------------------------------------
>
>                 Key: HDFS-12134
>                 URL: https://issues.apache.org/jira/browse/HDFS-12134
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs-client
>            Reporter: James Clampffer
>            Assignee: James Clampffer
>         Attachments: HDFS-12134.HDFS-8707.000.patch, 
> HDFS-12134.HDFS-8707.001.patch, HDFS-12134.HDFS-8707.002.patch, 
> HDFS-12134.HDFS-8707.003.patch
>
>
> Bits of the GSSAPI that Cyrus SASL uses aren't thread safe.  There needs to 
> be a way for a client application to share a lock with this library in order 
> to prevent race conditions.  This can be done using event callbacks through 
> the C API, but we can provide something more robust (RAII) in the C++ API.
> Proposed: a client-supplied lock, pretty much the C++17 Lockable concept, 
> with a default used if one isn't provided.  This would be scoped at the 
> process level since it's unlikely there will be multiple instances of 
> libgssapi unless someone puts in some effort with dlopen/dlsym.
> {code}
> class LockProvider
> {
>  public:
>   virtual ~LockProvider() {}
>   // allow the client application to deny access to the lock
>   virtual bool try_lock() = 0;
>   virtual void unlock() = 0;
> };
> {code}
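>
> As a rough sketch of how this could be used, here's a client-supplied lock 
> backed by std::mutex plus an RAII guard around it.  The names below 
> (MutexLockProvider, GssapiLockGuard) are hypothetical illustrations, not 
> part of the patch:
> {code}
> #include <mutex>
>
> // Client-supplied lock that satisfies the LockProvider interface.
> class MutexLockProvider : public LockProvider {
>  public:
>   bool try_lock() override { return mutex_.try_lock(); }
>   void unlock() override { mutex_.unlock(); }
>  private:
>   std::mutex mutex_;
> };
>
> // RAII guard: tries to acquire the lock on construction and releases it on
> // destruction, so a gssapi call path can't leak the lock on early return.
> class GssapiLockGuard {
>  public:
>   explicit GssapiLockGuard(LockProvider &lock)
>       : lock_(lock), locked_(lock.try_lock()) {}
>   ~GssapiLockGuard() { if (locked_) lock_.unlock(); }
>   // false if the client application denied access to the lock
>   bool owns_lock() const { return locked_; }
>  private:
>   LockProvider &lock_;
>   bool locked_;
> };
> {code}
> The idea is that the library wraps each non-thread-safe gssapi call in a 
> guard and only proceeds when owns_lock() returns true, falling back to a 
> default provider when the client doesn't supply one.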


