[ https://issues.apache.org/jira/browse/HDFS-13322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553641#comment-16553641 ]

Istvan Fajth edited comment on HDFS-13322 at 7/26/18 10:29 AM:
---------------------------------------------------------------

Hello [~fabbri],

I finally found some time to set up the environment for testing, and to
prepare and run the tests.

I am attaching the test code I used, along with the measurements from 10 runs
of each test on both the old and the new fuse code.

The tests were run on a single Linux VM against a CDH 5.14 cluster with 3
DataNodes. The random files I used were a 1MB and a 1KB file created with
/dev/urandom as the source, plus a 1-byte file containing the letter "a":
||test version||1MB file read * 10k avg.||1KB file read * 10k avg.||1B file read * 10k avg.||10k different 1KB files||
|Original version with catter.sh|174.064 sec|78.725 sec|79.195 sec|90.683 sec|
|Patched version with catter.sh|180.675 sec|81.028 sec|81.187 sec|92.859 sec|
|*Performance degradation*|*3.8%*|*2.9%*|*2.5%*|*2.4%*|
|Original version with TestFuse.java|137.159 sec|65.982 sec|65.713 sec|67.411 sec|
|Patched version with TestFuse.java|139.095 sec|68.457 sec|68.919 sec|69.101 sec|
|*Performance degradation*|*1.4%*|*3.8%*|*4.9%*|*2.5%*|
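
For reference, here is a minimal sketch of the kind of timing loop
TestFuse.java presumably implements; the mount point and file name below are
placeholders of my own, and the actual attached test may differ:

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch: read the same file through the fuse mount 10k times and time it.
// The path is a placeholder; the attached TestFuse.java may differ.
public class FuseReadTimer {
    public static void main(String[] args) throws IOException {
        Path file = Paths.get("/fuse_mount/tmp/randomfile"); // hypothetical path
        int iterations = 10_000;
        long totalBytes = 0;

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            totalBytes += Files.readAllBytes(file).length; // full read each pass
        }
        double seconds = (System.nanoTime() - start) / 1e9;

        System.out.printf("%d reads, %d bytes, %.3f sec%n",
                iterations, totalBytes, seconds);
    }
}
{code}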

After running these tests, I wanted to check whether the page cache has any
effect, so I also tried 10k reads of a 10KB file generated from /dev/urandom.
It seems that below a certain file size, the network traffic of fetching the
data is not really a factor; this made me suspicious, so as a last step I also
ran a test that reads 10k different files instead of the same one.
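
A sketch of that different-files variant (what catter2.sh and TestFuse2.java
presumably implement), again with placeholder paths and with java.util.Random
standing in for /dev/urandom:

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Random;

// Sketch: create 10k distinct 1KB files, then read each exactly once,
// so repeated page-cache hits cannot hide the fuse overhead.
public class FuseDistinctReadTimer {
    public static void main(String[] args) throws IOException {
        Path dir = Paths.get("/fuse_mount/tmp/perftest"); // hypothetical directory
        int count = 10_000;
        byte[] buf = new byte[1024];
        Random random = new Random();

        Files.createDirectories(dir);
        for (int i = 0; i < count; i++) {
            random.nextBytes(buf); // stand-in for /dev/urandom
            Files.write(dir.resolve("file_" + i), buf);
        }

        long start = System.nanoTime();
        long totalBytes = 0;
        for (int i = 0; i < count; i++) {
            totalBytes += Files.readAllBytes(dir.resolve("file_" + i)).length;
        }
        System.out.printf("read %d bytes in %.3f sec%n",
                totalBytes, (System.nanoTime() - start) / 1e9);
    }
}
{code}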

As the results show, the performance degradation due to the change in the
proposed patch is under 5% in all of the scenarios I have tested, and mostly
in the 2-4% range.
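
(For example, the 1MB catter.sh figure is presumably computed as
(180.675 - 174.064) / 174.064 ≈ 3.8%.)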

Let me know if you have any observations on the provided code and perf tests,
and also whether these values seem acceptable.


> fuse dfs - uid persists when switching between ticket caches
> ------------------------------------------------------------
>
>                 Key: HDFS-13322
>                 URL: https://issues.apache.org/jira/browse/HDFS-13322
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: fuse-dfs
>    Affects Versions: 2.6.0
>         Environment: Linux xxxxxx.xx.xx.xxx 3.10.0-514.el7.x86_64 #1 SMP Wed 
> Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
>  
>            Reporter: Alex Volskiy
>            Assignee: Istvan Fajth
>            Priority: Minor
>         Attachments: HDFS-13322.001.patch, HDFS-13322.002.patch, 
> HDFS-13322.003.patch, TestFuse.java, TestFuse2.java, catter.sh, catter2.sh, 
> perftest_new_behaviour_10k_different_1KB.txt, perftest_new_behaviour_1B.txt, 
> perftest_new_behaviour_1KB.txt, perftest_new_behaviour_1MB.txt, 
> perftest_old_behaviour_10k_different_1KB.txt, perftest_old_behaviour_1B.txt, 
> perftest_old_behaviour_1KB.txt, perftest_old_behaviour_1MB.txt, 
> testHDFS-13322.sh, test_after_patch.out, test_before_patch.out
>
>
> The symptoms of this issue are the same as described in HDFS-3608, except that
> the workaround that was applied (detecting changes in the uid's ticket cache)
> doesn't resolve the issue when multiple ticket caches are in use by the same
> user.
> Our use case requires that a job scheduler running as a specific uid obtain
> separate Kerberos sessions per job, and that each of these sessions use a
> separate cache. When switching sessions this way, no change is made to the
> original ticket cache, so the cached filesystem instance doesn't get
> regenerated.
>  
> {{$ export KRB5CCNAME=/tmp/krb5cc_session1}}
> {{$ kinit user_a@domain}}
> {{$ touch /fuse_mount/tmp/testfile1}}
> {{$ ls -l /fuse_mount/tmp/testfile1}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile1*}}
> {{$ export KRB5CCNAME=/tmp/krb5cc_session2}}
> {{$ kinit user_b@domain}}
> {{$ touch /fuse_mount/tmp/testfile2}}
> {{$ ls -l /fuse_mount/tmp/testfile2}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile2*}}
> {color:#d04437}*{{** expected owner to be user_b **}}*{color}


