[ https://issues.apache.org/jira/browse/HDFS-15829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17281721#comment-17281721 ]

Hadoop QA commented on HDFS-15829:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 1 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
26s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 36s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m  
1s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs config; 
considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
59s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  8s{color} | 
{color:orange}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/469/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt{color}
 | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 7 new + 
681 unchanged - 0 fixed = 688 total (was 681) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 32s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
48s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/469/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt{color}
 | {color:red} hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
24s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/469/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt{color}
 | {color:red} 
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01
 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 generated 3 new 
+ 1 unchanged - 0 fixed = 4 total (was 1) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
11s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/469/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html{color}
 | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} || ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}195m  9s{color} 
| 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/469/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt{color}
 | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
40s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/469/artifact/out/patch-asflicense-problems.txt{color}
 | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}266m 14s{color} | 
{color:black}{color} | {color:black}{color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Should org.apache.hadoop.hdfs.server.namenode.TtlService$FilterThread be 
a _static_ inner class?  At TtlService.java:inner class?  At 
TtlService.java:[lines 40-61] |
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/469/artifact/out/Dockerfile
 |
| JIRA Issue | HDFS-15829 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13020242/HDFS-15829.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux f072b632ca61 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 2df2dfb9ed3 |
| Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
| Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
|  Test Results | 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/469/testReport/ |
| Max. process+thread count | 3391 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/469/console |
| versions | git=2.25.1 maven=3.6.3 findbugs=4.0.6 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> Use xattr to support HDFS TTL on Observer namenode
> --------------------------------------------------
>
>                 Key: HDFS-15829
>                 URL: https://issues.apache.org/jira/browse/HDFS-15829
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: dfsclient, namenode
>            Reporter: Yang Yun
>            Assignee: Yang Yun
>            Priority: Minor
>         Attachments: HDFS-15829.patch
>
>
> h3. Overview
>  
>  HDFS TTL is implemented using the xattr mechanism provided by HDFS. When a 
> user sets a TTL on a file or directory, HDFS creates an xattr named "ttl" on 
> that file or directory and stores the user-supplied value in it. A service 
> called TtlService runs on the HDFS Standby or Observer NameNode (Observer is 
> recommended). It periodically scans the in-memory inode map, reads the value 
> of the "ttl" xattr from each INode, and checks whether the TTL has expired. 
> If so, it resolves the full file path from the INode and adds it to an 
> expired-file list. After the scan it creates a DFSClient and deletes the 
> expired files in batches; another option is to trigger a YARN job to delete 
> them in parallel.
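> For illustration, a rough client-side approximation of that scan-and-delete 
> loop is sketched below using only the public FileSystem API; the actual 
> TtlService in the patch works against the NameNode's in-memory inode map, and 
> the class name, the recursive listing, and the interpretation of the xattr 
> value as an absolute expiry time in minutes are assumptions here, not the 
> patch's code.
> {code:java}
> // Rough client-side approximation of the TtlService scan described above.
> // Assumptions: "user.ttl" stores an absolute expiry time in minutes since
> // the epoch as a UTF-8 string, and "user.ttlproperty" is ignored.
> import java.io.IOException;
> import java.nio.charset.StandardCharsets;
> import java.util.ArrayList;
> import java.util.List;
> 
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> 
> public class TtlScanSketch {
>   public static void main(String[] args) throws IOException {
>     FileSystem fs = FileSystem.get(new Configuration());
>     List<Path> expired = new ArrayList<>();
>     collectExpired(fs, new Path("/"), expired);
>     // The patch deletes these in batch via a DFSClient, or via a YARN job.
>     for (Path p : expired) {
>       fs.delete(p, false);
>     }
>   }
> 
>   // Collect regular files whose "user.ttl" expiry time has passed; the real
>   // TtlService also clears expired directories once they become empty.
>   private static void collectExpired(FileSystem fs, Path dir, List<Path> out)
>       throws IOException {
>     for (FileStatus st : fs.listStatus(dir)) {
>       if (st.isDirectory()) {
>         collectExpired(fs, st.getPath(), out);
>         continue;
>       }
>       byte[] ttl = fs.getXAttrs(st.getPath()).get("user.ttl");
>       if (ttl != null) {
>         long expiryMinutes =
>             Long.parseLong(new String(ttl, StandardCharsets.UTF_8));
>         if (expiryMinutes * 60_000L <= System.currentTimeMillis()) {
>           out.add(st.getPath());
>         }
>       }
>     }
>   }
> }
> {code}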
> h3. Protocol
> Add two xattrs:
>  "user.ttl": the TTL value in minutes, identifying the time at which the 
> file or folder expires.
>  "user.ttlproperty": the TTL type, a combination of the following flags (see 
> the sketch after this list):
>  * SINCELASTWRITE = 0x1   # calculate the TTL from the last write.
>  * KEEPEMPTYDIR = 0x2     # whether to keep the directory once it is empty.
>  * KEEPEMPTYSUBDIR = 0x4  # whether to keep empty subdirectories.
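> For illustration, a client could write these two xattrs directly with the 
> public FileSystem xattr API, as in the hypothetical sketch below; the 
> attached patch exposes this through TtlInfo.setTTl instead, and the UTF-8 
> string encoding of the values is an assumption.
> {code:java}
> // Hypothetical direct use of the xattr API; the patch's own entry point is
> // TtlInfo.setTTl, and the byte encoding of the values is an assumption.
> import java.io.IOException;
> import java.nio.charset.StandardCharsets;
> 
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> 
> public class TtlXattrExample {
>   public static void main(String[] args) throws IOException {
>     FileSystem fs = FileSystem.get(new Configuration());
>     Path dir = new Path("/A/B");
> 
>     int ttlMinutes = 60;       // expire 60 minutes after the last write
>     int property = 0x1 | 0x2;  // SINCELASTWRITE | KEEPEMPTYDIR
> 
>     fs.setXAttr(dir, "user.ttl",
>         String.valueOf(ttlMinutes).getBytes(StandardCharsets.UTF_8));
>     fs.setXAttr(dir, "user.ttlproperty",
>         String.valueOf(property).getBytes(StandardCharsets.UTF_8));
>   }
> }
> {code}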
>  
>  *Nested TTL*
>  A TTL can be set on every directory and file along a path; the setting on 
> the lower-level subdirectory or file takes effect. If a directory or file 
> has no TTL of its own, it inherits the setting of the nearest ancestor 
> directory. The following is an illustrative example. Suppose there is such a 
> directory tree:
>   
> {code:java}
> /A/B/E  
> /A/C  
> /A/D {code}
>  
>  That is, B, C, and D are under directory A, and file E is under directory 
> B. Suppose the user sets the TTL of A to 2 days, the TTL of B to 3 days, the 
> TTL of E to 1 day, and does not set a TTL on C or D. Then file E will be 
> cleared after 1 day. After 2 days, C and D will be cleared, using the 
> settings inherited from directory A. Note that at this point directory A 
> will not be cleared, because it is not empty. After 3 days, B will be 
> cleared because its own setting expires. Once B is cleared, A has become an 
> empty directory and its setting has already expired, so it will also be 
> cleared.
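> A minimal sketch of this inheritance rule against the public xattr API is 
> given below: walk from the path up to the root and use the first "user.ttl" 
> found. The helper name and the string encoding of the xattr value are 
> assumptions; the patch resolves this inside TtlService rather than on the 
> client. Given the tree above, E resolves to its own 1-day TTL, while C and D 
> resolve to A's 2-day TTL.
> {code:java}
> // Sketch of the "nearest ancestor" rule; helper name and encoding assumed.
> import java.io.IOException;
> import java.nio.charset.StandardCharsets;
> 
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> 
> public final class EffectiveTtl {
>   /** Returns the effective TTL for a path, or null if no ancestor sets one. */
>   public static Long effectiveTtlMinutes(FileSystem fs, Path path)
>       throws IOException {
>     for (Path p = path; p != null; p = p.getParent()) {
>       byte[] ttl = fs.getXAttrs(p).get("user.ttl");
>       if (ttl != null) {
>         return Long.parseLong(new String(ttl, StandardCharsets.UTF_8));
>       }
>     }
>     return null;  // no TTL set anywhere on the path
>   }
> }
> {code}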
> h3. Usage
> For the first version, an API is provided to set the TTL; a command line 
> will be added later.
>   
> {code:java}
> /**
>  * Set a TTL on a file.
>  * @param fs the file system.
>  * @param path the target file to set the TTL on.
>  * @param value the TTL value.
>  * @param property the type of TTL.
>  * @throws IOException
>  */
> public static void setTTl(FileSystem fs, Path path, int value, int property)
>     throws IOException
> {code}
> h3. Example
>  
> {code:java}
> // The file will expire in 60 minutes (absolute expiry time in minutes).
> TtlInfo.setTTl(fs, file, System.currentTimeMillis() / 1000 / 60 + 60, 0);
> // The file will expire 60 minutes after the last write.
> TtlInfo.setTTl(fs, file, 60, TtlInfo.SINCELASTWRITE);{code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
