[ https://issues.apache.org/jira/browse/HADOOP-18744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17724060#comment-17724060 ]

Viraj Jasani edited comment on HADOOP-18744 at 5/19/23 6:01 AM:
----------------------------------------------------------------

Came across a few more test failures while testing HADOOP-18740:
{code:java}
[ERROR] testCreateFlagCreateAppendNonExistingFile(org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations)  Time elapsed: 2.763 s  <<< ERROR!
java.io.IOException: File name too long
        at java.io.UnixFileSystem.createFileExclusively(Native Method)
        at java.io.File.createTempFile(File.java:2063)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:1377)
        at org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:829)
        at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:235)
        at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.<init>(S3ABlockOutputStream.java:217)
 {code}
{code:java}
[ERROR] testDiskBlockCreate(org.apache.hadoop.fs.s3a.ITestS3ABlockOutputDisk)  Time elapsed: 2.329 s  <<< ERROR!
java.io.IOException: File name too long
        at java.io.UnixFileSystem.createFileExclusively(Native Method)
        at java.io.File.createTempFile(File.java:2063)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:1377)
        at org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:829)
        at org.apache.hadoop.fs.s3a.ITestS3ABlockOutputArray.testDiskBlockCreate(ITestS3ABlockOutputArray.java:114)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
{code}
{code:java}
[ERROR] testDiskBlockCreate(org.apache.hadoop.fs.s3a.ITestS3ABlockOutputByteBuffer)  Time elapsed: 1.937 s  <<< ERROR!
java.io.IOException: File name too long
        at java.io.UnixFileSystem.createFileExclusively(Native Method)
        at java.io.File.createTempFile(File.java:2063)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:1377)
        at org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:829)
        at org.apache.hadoop.fs.s3a.ITestS3ABlockOutputArray.testDiskBlockCreate(ITestS3ABlockOutputArray.java:114)
 {code}
{code:java}
[ERROR] testDeleteNonExistingFileInDir(org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextURI)  Time elapsed: 1.809 s  <<< ERROR!
java.io.IOException: File name too long
        at java.io.UnixFileSystem.createFileExclusively(Native Method)
        at java.io.File.createTempFile(File.java:2063)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:1377)
        at org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:829)
        at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:235)
        at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.<init>(S3ABlockOutputStream.java:217)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.innerCreateFile(S3AFileSystem.java:1891)
{code}


> ITestS3ABlockOutputArray failure with IO File name too long
> -----------------------------------------------------------
>
>                 Key: HADOOP-18744
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18744
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Ahmar Suhail
>            Priority: Major
>
> On an EC2 instance, the following tests are failing:
>  
> {{ITestS3ABlockOutputArray.testDiskBlockCreate}}
> {{ITestS3ABlockOutputByteBuffer>ITestS3ABlockOutputArray.testDiskBlockCreate}}
> {{ITestS3ABlockOutputDisk>ITestS3ABlockOutputArray.testDiskBlockCreate}}
>  
> with the error {{java.io.IOException: File name too long}}.
>  
> The tests create a file with a 1024-character file name and rely on File.createTempFile() to truncate the file name to below the OS limit (see the reproduction sketch after this quoted description).
>  
> Stack trace:
> {{java.io.IOException: File name too long}}
> {{    at java.io.UnixFileSystem.createFileExclusively(Native Method)}}
> {{    at java.io.File.createTempFile(File.java:2063)}}
> {{    at org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:1377)}}
> {{    at org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:829)}}
> {{    at org.apache.hadoop.fs.s3a.ITestS3ABlockOutputArray.testDiskBlockCreate(ITestS3ABlockOutputArray.java:114)}}
> {{    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)}}
> {{    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)}}
> {{    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)}}
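
All of the failures above trace back to java.io.File.createTempFile() being handed a prefix longer than the filesystem's per-file-name limit (commonly 255 bytes on ext4/XFS). Below is a minimal, hypothetical sketch (not code from this Jira or the Hadoop test suite; the class name is made up for illustration) showing how such a call fails with "File name too long" when the prefix is not truncated first:
{code:java}
import java.io.File;
import java.io.IOException;

public class LongTempFileNameRepro {
    public static void main(String[] args) {
        // 1024-character prefix, mirroring the file name length used by the tests.
        StringBuilder prefix = new StringBuilder();
        for (int i = 0; i < 1024; i++) {
            prefix.append('a');
        }
        File tmpDir = new File(System.getProperty("java.io.tmpdir"));
        try {
            // On filesystems with a 255-byte name limit this typically throws
            // java.io.IOException: File name too long (as in the stack traces above),
            // unless the JDK/platform truncates the over-long prefix first.
            File tmp = File.createTempFile(prefix.toString(), ".tmp", tmpDir);
            System.out.println("Created: " + tmp.getAbsolutePath());
            tmp.delete();
        } catch (IOException e) {
            System.out.println("createTempFile failed: " + e.getMessage());
        }
    }
}
{code}
Whether createTempFile truncates an over-long prefix before touching the filesystem appears to vary with JDK and platform, which would explain why the same tests pass on some hosts but fail on the EC2 instance reported here.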


