[
https://issues.apache.org/jira/browse/IGNITE-8485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16610255#comment-16610255
]
Vladimir Ozerov commented on IGNITE-8485:
-----------------------------------------
Hi [~NIzhikov], my comments:
1) Styling: please look through your code and fix the following issues:
- Unused imports
- Replace abbreviations in method names with full words (abbreviations are
allowed only for variable names, not for method and class names)
- Replace "log.warn" with "U.warn"
2) I am still not satisfied with the API. The "AES" name is not appropriate
here, because it denotes an algorithm, while it should denote the underlying
storage. A correct name would be "Keystore". First, it will help us add more
algorithms in the future while still using a keystore (e.g. 3DES, which is
used by other vendors). Second, what if in the future we add another
implementation over some KMS which also uses AES? How would we name it? This
is why "AES" should go away from interface names.
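To illustrate the idea, a minimal sketch of storage-oriented naming (all names here are hypothetical, not taken from the patch):
{code:java}
// Hypothetical sketch: the implementation name describes WHERE keys live
// (a keystore), not WHICH algorithm is used, so AES, 3DES, or a future
// KMS-backed implementation can coexist behind the same contract.
public class KeystoreNamingSketch {
    /** Generic contract: no algorithm in the name. */
    interface EncryptionSpi {
        String masterKeyName();
    }

    /** Keystore-backed implementation; the cipher is a detail, not the name. */
    static class KeystoreEncryptionSpi implements EncryptionSpi {
        private final String keyName;
        private final String cipher; // E.g. "AES" or "DESede" (3DES).

        KeystoreEncryptionSpi(String keyName, String cipher) {
            this.keyName = keyName;
            this.cipher = cipher;
        }

        @Override public String masterKeyName() {
            return keyName;
        }

        String cipher() {
            return cipher;
        }
    }
}
{code}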
3) Key generation for clients - please move it to {{GridEncryptionManager}},
as this is exactly what managers and processors are created for: to manage
component lifecycle, listen for events, and encapsulate related logic in a
single place.
4) Currently a random node is picked to send the request to. Instead, a
random *server* node should be used.
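A minimal sketch of the intended selection, using a simplified stand-in {{Node}} type (in Ignite the check would be against {{ClusterNode#isClient()}}):
{code:java}
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import java.util.stream.Collectors;

// Sketch: filter out client nodes first, then pick randomly among servers.
public class ServerNodePicker {
    static class Node {
        final String id;
        final boolean client;

        Node(String id, boolean client) {
            this.id = id;
            this.client = client;
        }
    }

    /** Picks a random *server* node; client nodes are never chosen. */
    static Node randomServerNode(List<Node> nodes) {
        List<Node> servers = nodes.stream()
            .filter(n -> !n.client)
            .collect(Collectors.toList());

        if (servers.isEmpty())
            throw new IllegalStateException("No server nodes available.");

        return servers.get(ThreadLocalRandom.current().nextInt(servers.size()));
    }
}
{code}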
5) I would suggest removing group IDs from the request. First, our current
understanding is that cache groups are a hack feature which is likely to be
removed in the future in favor of tablespaces, so it is better to avoid
relying on them if possible. Second, there is no need to fetch existing keys
for caches at all: by the time you get the key for an existing cache, the
cache may have been re-created concurrently with another key, or key rotation
may have happened (in a future release). So you can never rely on which key is
returned; it should be compared with the existing group key in the exchange
thread during cache start. IMO all the request needs is the number of keys to
be generated.
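A sketch of such a request message (the class and field names are hypothetical, not from the patch):
{code:java}
import java.io.Serializable;

// Hypothetical sketch: the key-generation request carries only the number of
// keys to generate. No cache group IDs are included, since any key read for
// an existing group may be stale on arrival (concurrent re-create, or key
// rotation in a future release) and must be re-validated in the exchange
// thread anyway.
public class GenerateEncryptionKeysRequest implements Serializable {
    private final int keyCount;

    public GenerateEncryptionKeysRequest(int keyCount) {
        if (keyCount <= 0)
            throw new IllegalArgumentException("keyCount must be positive: " + keyCount);

        this.keyCount = keyCount;
    }

    /** Number of new encryption keys to generate. */
    public int keyCount() {
        return keyCount;
    }
}
{code}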
6) {{GridCacheProcessor#genEncKeysAndStartCacheAfter}} - the future is
registered after the message is sent, which means that the response may be
missed (e.g. in case of a long GC pause or an unfavorable context switch).
Also there is no proper synchronization with the disconnect event, meaning
that you may have hanging futures after a disconnect as well. Bulletproof
synchronization here looks like this:
{code:java}
boolean stopped;
boolean disconnected;

onStart(...) {
    // Set all listeners.
}

onKernalStop(...) {
    sync (mux) {
        // Set stop flag.
        // Complete all futures with error.
    }
}

onDiscoveryMessage(...) {
    sync (mux) {
        // Iterate over registered futures, resend if possible or finish with error.
    }
}

onIOMessage(...) {
    sync (mux) {
        // Generate key or complete and deregister future.
    }
}

onDisconnect(...) {
    sync (mux) {
        // Set disconnect flag.
        // Complete all futures with error.
    }
}

onReconnect(...) {
    sync (mux) {
        // Remove disconnect flag.
    }
}

generateKeys(...) {
    sync (mux) {
        // Check stop and disconnect flags.
        // Register future, send request.
    }
}
{code}
At this point it should be obvious why all this logic should be located in a
separate processor.
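The register-before-send rule from the sketch above can be shown as a runnable example (hypothetical names and plain {{CompletableFuture}} here; the real code would use Ignite futures and the mux-based synchronization described above):
{code:java}
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of why the future must be registered BEFORE the request is sent:
// if the response handler runs first (long GC pause, context switch), it
// still finds the future in the map instead of silently dropping the reply.
public class RegisterBeforeSend {
    private final Map<Long, CompletableFuture<byte[]>> futs = new ConcurrentHashMap<>();

    private final AtomicLong reqIdGen = new AtomicLong();

    /** Register first, then send - never the other way around. */
    public CompletableFuture<byte[]> generateKeys(Runnable send) {
        long reqId = reqIdGen.incrementAndGet();

        CompletableFuture<byte[]> fut = new CompletableFuture<>();

        futs.put(reqId, fut); // 1. Register.
        send.run();           // 2. Send. Even an "immediate" response now
                              //    finds the registered future.

        return fut;
    }

    /** IO listener: complete and deregister the matching future. */
    public void onResponse(long reqId, byte[] keys) {
        CompletableFuture<byte[]> fut = futs.remove(reqId);

        if (fut != null)
            fut.complete(keys);
    }
}
{code}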
7) There is no need to throw an exception in IO listener threads. All we can
do here is log the error with a proper message.
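A tiny sketch of that pattern (generic names, {{java.util.logging}} as a stand-in for Ignite's logging): an IO listener thread has no caller to propagate the exception to, so the only useful action is to log and return.
{code:java}
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch: in an IO listener thread, throwing would only kill or pollute the
// thread; logging a descriptive error is the only useful action.
public class IoListenerErrorHandling {
    private static final Logger log = Logger.getLogger(IoListenerErrorHandling.class.getName());

    /** Returns true if the message was handled, false if an error was logged. */
    public static boolean onMessage(Runnable handler) {
        try {
            handler.run();

            return true;
        }
        catch (Exception e) {
            // Log with enough context to diagnose; do not rethrow.
            log.log(Level.SEVERE, "Failed to process encryption key response.", e);

            return false;
        }
    }
}
{code}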
> TDE - Phase-1
> -------------
>
> Key: IGNITE-8485
> URL: https://issues.apache.org/jira/browse/IGNITE-8485
> Project: Ignite
> Issue Type: Sub-task
> Reporter: Nikolay Izhikov
> Assignee: Nikolay Izhikov
> Priority: Critical
> Fix For: 2.7
>
>
> Basic support for a Transparent Data Encryption should be implemented:
> 1. Usage of standard JKS, Java Security.
> 2. Persistent Data Encryption/Decryption.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)