EdColeman opened a new issue, #2821:
URL: https://github.com/apache/accumulo/issues/2821
**Is your feature request related to a problem? Please describe.**

With recent changes that make version information available in property handling, we may be able to improve concurrency handling.

This was pulled from comments on #2799 by @keith-turner.
**Additional context**

Directly from @keith-turner's comment:

[Atomic property update...] this code may be able to gracefully handle multiple clients concurrently modifying props. Not completely sure, but I think something like the following may work.
Create a thrift type to return version+props.
```
class ThriftVersionedProperties {
  long version;
  Map<String, String> props;
}
```
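For reference, the corresponding Thrift IDL might look like the following. This is only a sketch; the struct name, field names, and field ids are assumptions, not actual Accumulo thrift definitions.

```
struct ThriftVersionedProperties {
  1: required i64 version;
  2: required map<string, string> props;
}
```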
The following code was written quickly and omits a lot of details; it tries to show how we could pass a version back and forth over thrift and use that version to detect concurrent changes and retry.
```
public void modifyProperties(String tableName,
    final Consumer<Map<String,String>> mapMutator)
    throws AccumuloException, AccumuloSecurityException,
    IllegalArgumentException, ConcurrentModificationException {
  while (true) {
    // make thrift call to get props+version
    ThriftVersionedProperties tvp = client.getTableProperties(...);
    // mutate the properties
    mapMutator.accept(tvp.props);
    try {
      // pass the modified props+version back to the thrift call. The thrift
      // call will throw a thrift exception if the version has changed since
      // we read the props above.
      client.modifyTableProperties(..., tableName, tvp);
      // successfully modified props, so break out of the retry loop...
      // probably a better way to do this than a while(true) loop
      break;
    } catch (PropertiesVersionChanged pvc) {
      log.debug("Properties changed since reading, retrying...");
      // TODO maybe sleep using retry object
    } catch (TableNotFoundException e) {
      throw new AccumuloException(e);
    }
  }
}
```
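To illustrate the server-side half of this protocol, here is a minimal in-memory sketch of the optimistic version check the retry loop depends on. All names (`VersionedPropStore`, `Snapshot`, `replace`) are illustrative assumptions, not Accumulo's actual server code; a failed `replace` is where the thrift layer would raise `PropertiesVersionChanged`.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: stored properties carry a version, and an update is
// applied only if the caller's version still matches the stored one.
class VersionedPropStore {

  // Immutable snapshot of version + properties, swapped atomically.
  static final class Snapshot {
    final long version;
    final Map<String, String> props;

    Snapshot(long version, Map<String, String> props) {
      this.version = version;
      this.props = props;
    }
  }

  private final AtomicReference<Snapshot> current =
      new AtomicReference<>(new Snapshot(0, new HashMap<>()));

  // What getTableProperties(...) would return: the current props+version.
  Snapshot get() {
    return current.get();
  }

  // What modifyTableProperties(...) would do: replace the stored properties
  // only if expectedVersion matches the stored version. Returns false when
  // another writer got there first; the thrift layer would surface that as
  // a PropertiesVersionChanged exception, triggering the client's retry.
  boolean replace(long expectedVersion, Map<String, String> newProps) {
    Snapshot cur = current.get();
    if (cur.version != expectedVersion) {
      return false;
    }
    return current.compareAndSet(cur, new Snapshot(cur.version + 1, newProps));
  }
}
```

With this shape, the client's while(true) loop is just a standard optimistic-concurrency retry: read a snapshot, mutate a copy, attempt the versioned replace, and start over if a concurrent writer bumped the version in between.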