[
https://issues.apache.org/jira/browse/HADOOP-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15403181#comment-15403181
]
Subramanyam Pattipaka commented on HADOOP-13403:
------------------------------------------------
[~cnauroth], Thanks for your comments.
I will update with more comments on executeParallel and generate another patch.
I refactored the code so that both delete and rename operations use a single
interface. Rename already builds an array, and that array is used at other
locations. If we used a ConcurrentLinkedQueue and removed entries from it,
then after the executeParallel call there would be no entries left in the
queue. If we later need the file list again, we would have to regenerate it.
With an array, we do the job without losing entries, which can be useful for
other cases in the future.
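As a minimal sketch of the point above (names like processAll and the String[] stand-in for the FileMetadata array are hypothetical, not the actual WASB code): each worker claims the next index atomically, so the backing array survives the parallel pass intact, whereas a queue would be drained as it is consumed.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelIndexSketch {
    // Walk the array with a shared atomic index; the array itself is never
    // mutated, so its contents remain available for reuse afterwards.
    static List<String> processAll(String[] files, int threadCount)
            throws InterruptedException {
        AtomicInteger fileIndex = new AtomicInteger(0);
        List<String> processed = Collections.synchronizedList(new ArrayList<>());
        Thread[] workers = new Thread[threadCount];
        for (int t = 0; t < threadCount; t++) {
            workers[t] = new Thread(() -> {
                int currentIndex;
                while ((currentIndex = fileIndex.getAndIncrement()) < files.length) {
                    processed.add(files[currentIndex]); // stand-in for rename/delete
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            w.join();
        }
        return processed;
    }

    public static void main(String[] args) throws InterruptedException {
        String[] files = {"a", "b", "c", "d"};
        List<String> done = processAll(files, 2);
        // Every entry was processed exactly once, and the array still holds them.
        System.out.println("processed " + done.size() + " of " + files.length);
    }
}
```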
Regarding futures, I hope you agree to keep the current pattern and not use
futures.
Regarding getThreadPool, we perform a new operation there. This can potentially
result in an OutOfMemoryError if a very large value is given as input. It could
even happen for a moderately large value if the current thread has already
approached the maximum heap size due to object allocations like the
FileMetadata array. Even if we restrict this to a maximum value like 1024, an
OutOfMemoryError could still occur, however remote the possibility. Currently I
can't think of another failure scenario, but I don't want to discover one later
that makes the operation fail. Instead, for any kind of exception raised while
constructing the ThreadPoolExecutor, we take the serial path. I have already
included checks for basic cases, such as verifying threadCount > 1 after going
through the user configuration. This is an extra safety check on top of that.
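The fallback described above could be sketched as follows (a hypothetical illustration, not the actual getThreadPool from the patch): any Throwable from constructing the pool, including OutOfMemoryError, sends the caller down the serial path instead of failing the operation.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThreadPoolFallbackSketch {
    // Returns a pool for the parallel path, or null to signal the caller
    // that the operation should run serially.
    static ThreadPoolExecutor getThreadPool(int threadCount) {
        if (threadCount <= 1) {
            return null; // basic-case check: nothing to parallelize
        }
        try {
            return new ThreadPoolExecutor(threadCount, threadCount,
                    0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
        } catch (Throwable t) {
            // Pool creation failed (e.g. OutOfMemoryError for an absurd
            // thread count); fall back to the serial path.
            return null;
        }
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = getThreadPool(4);
        System.out.println(pool == null ? "serial path" : "parallel path");
        if (pool != null) {
            pool.shutdown();
        }
        System.out.println(getThreadPool(1) == null ? "serial path" : "parallel path");
    }
}
```

Catching Throwable rather than Exception is the point here: OutOfMemoryError is an Error, so a plain `catch (Exception e)` would not cover the scenario discussed above.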
I ran the tests and all of them pass. Can you please provide details on the
errors you are seeing?
> AzureNativeFileSystem rename/delete performance improvements
> ------------------------------------------------------------
>
> Key: HADOOP-13403
> URL: https://issues.apache.org/jira/browse/HADOOP-13403
> Project: Hadoop Common
> Issue Type: Bug
> Components: azure
> Affects Versions: 2.7.2
> Reporter: Subramanyam Pattipaka
> Assignee: Subramanyam Pattipaka
> Fix For: 2.9.0
>
> Attachments: HADOOP-13403-001.patch, HADOOP-13403-002.patch,
> HADOOP-13403-003.patch
>
>
> WASB Performance Improvements
> Problem
> -----------
> Azure Native File system operations like rename/delete which has large number
> of directories and/or files in the source directory are experiencing
> performance issues. Here are possible reasons
> a) We first list all files under source directory hierarchically. This is
> a serial operation.
> b) After collecting the entire list of files under a folder, we delete or
> rename files one by one serially.
> c) There is no logging information available for these costly operations
> even in DEBUG mode leading to difficulty in understanding wasb performance
> issues.
> Proposal
> -------------
> Step 1: Rename and delete operations will generate a list of all files under
> the source folder. We use the Azure flat-listing option to get the list with
> a single request to the Azure store. We have introduced the config
> fs.azure.flatlist.enable to enable this option. The default value is 'false',
> which means flat listing is disabled.
> Step 2: Create the thread pool and threads dynamically based on user
> configuration. These thread pools are deleted after the operation is over.
> We are introducing two new configs:
> a) fs.azure.rename.threads: Config to set the number of rename
> threads. The default value is 0, which means no threading.
> b) fs.azure.delete.threads: Config to set the number of delete
> threads. The default value is 0, which means no threading.
> We provide debug log information on the number of threads not used
> for the operation, which can be useful for tuning.
> Failure Scenarios:
> If we fail to create the thread pool for ANY reason (for example, trying to
> create it with a very large thread count such as 1000000), we fall back to
> the serial operation.
> Step 3: Blob operations can then be done in parallel, with multiple threads
> executing the following snippet:
> while ((currentIndex = fileIndex.getAndIncrement()) < files.length) {
>     FileMetadata file = files[currentIndex];
>     renameOrDelete(file); // rename or delete, depending on the operation
> }
> The above strategy depends on the fact that all files are stored in a
> final array and each thread atomically determines the next index to work
> on. The advantage of this strategy is that even if the user configures a
> large number of unusable threads, we always ensure that work doesn't get
> serialized due to lagging threads.
> We log the following information, which can be useful for tuning the
> number of threads:
> a) Number of unusable threads
> b) Time taken by each thread
> c) Number of files processed by each thread
> d) Total time taken for the operation
> Failure Scenarios:
> Failure to queue a thread execution request shouldn't be an issue as long
> as we can ensure at least one thread has completed execution successfully.
> If we couldn't schedule even one thread, then we take the serial path.
> Exceptions raised while executing threads are still considered regular
> exceptions and returned to the client as an operation failure. Exceptions
> raised while stopping threads and deleting the thread pool can be ignored
> if the operation completed on all files without any issue.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]