From the output of ls-files, we remove all but the leftmost path
component and then we eliminate duplicates. We do this in a while loop,
which is a performance bottleneck when the number of iterations is large
(e.g. for 60000 files in linux.git).
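For illustration, here is a minimal sketch (with made-up file names; the
real input comes from __git_ls_files_helper) of the per-line loop versus
the single-process cut pipeline:

```shell
# Hypothetical ls-files output (repository-relative paths).
sample='arch/x86/kernel/cpu.c
arch/arm/boot/dts.c
block/bio.c
docs/readme'

# Old approach: one read/case/echo round trip per input line,
# which dominates runtime for tens of thousands of files.
printf '%s\n' "$sample" | while read -r file; do
	case "$file" in
	?*/*) echo "${file%%/*}" ;;
	*) echo "$file" ;;
	esac
done | sort | uniq

# New approach: cut processes all lines in a single invocation.
printf '%s\n' "$sample" | cut -f1 -d/ | sort | uniq
```

Both pipelines print the same deduplicated first components; only the
per-line shell overhead differs.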

$ COMP_WORDS=(git status -- ar) COMP_CWORD=3; time _git

real    0m11.876s
user    0m4.685s
sys     0m6.808s

Replacing the loop with the cut command improves performance:

$ COMP_WORDS=(git status -- ar) COMP_CWORD=3; time _git

real    0m1.372s
user    0m0.263s
sys     0m0.167s

The measurements were done with Msys2 bash, which is used by Git for
Windows.

When filtering the ls-files output we take care not to touch absolute
paths. This is redundant, because ls-files will never output absolute
paths. Remove the unnecessary operations.
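A quick illustration (with hypothetical paths) of why dropping that
handling is safe: the old ?*/* pattern deliberately left absolute paths
untouched, whereas cut reduces a leading-slash path to an empty first
field. Because ls-files only prints repository-relative paths, the two
behaviors never actually diverge:

```shell
# Relative path: the first component is extracted as expected.
printf '%s\n' sub/dir/file.c | cut -f1 -d/

# Absolute path (never produced by ls-files): the field before the
# leading slash is empty, so cut would print an empty line here.
printf '%s\n' /abs/file.c | cut -f1 -d/
```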

The issue was reported here:

Signed-off-by: Clemens Buchacher <dri...@gmx.net>

On Sun, Mar 18, 2018 at 02:26:18AM +0100, SZEDER Gábor wrote:
> You didn't run the test suite, did you? ;)

My bad. I put the sort back in. Test t9902 now passes. I did not run
the other tests; I believe the completion script is not used there.

I also considered Junio's and Johannes' comments.

> I have a short patch series collecting dust somewhere for a long
> while, [...]
> Will try to dig up those patches.

Cool. Bash completion can certainly use more performance improvements.

 contrib/completion/git-completion.bash | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/contrib/completion/git-completion.bash b/contrib/completion/git-completion.bash
index 6da95b8..69a2d41 100644
--- a/contrib/completion/git-completion.bash
+++ b/contrib/completion/git-completion.bash
@@ -384,12 +384,7 @@ __git_index_files ()
        local root="${2-.}" file
        __git_ls_files_helper "$root" "$1" |
-       while read -r file; do
-               case "$file" in
-               ?*/*) echo "${file%%/*}" ;;
-               *) echo "$file" ;;
-               esac
-       done | sort | uniq
+       cut -f1 -d/ | sort | uniq
 # Lists branches from the local repository.
