[ 
https://issues.apache.org/jira/browse/HDDS-9762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793227#comment-17793227
 ] 

Mladjan Gadzic commented on HDDS-9762:
--------------------------------------

[~ashishk], these are the audit logs:

OM:
{code:java}
2023-12-05 09:47:26,317 | INFO  | OMAudit | user=hadoop | ip=172.20.0.2 | 
op=READ_VOLUME {volume=s3v} | ret=SUCCESS |  
2023-12-05 09:47:26,317 | INFO  | OMAudit | user=hadoop | ip=172.20.0.2 | 
op=READ_BUCKET {volume=s3v, bucket=fso} | ret=SUCCESS |  
2023-12-05 09:47:26,319 | INFO  | OMAudit | user=hadoop | ip=172.20.0.2 | 
op=LIST_STATUS {volume=s3v, bucket=fso, key=s3-1GB/6/, dataSize=0, 
replicationConfig=null} | ret=SUCCESS |  
2023-12-05 09:47:26,320 | ERROR | OMAudit | user=hadoop | ip=172.20.0.2 | 
op=GET_FILE_STATUS {volume=s3v, bucket=fso, key=s3-1GB/6/, dataSize=0, 
replicationConfig=null} | ret=FAILURE | FILE_NOT_FOUND 
org.apache.hadoop.ozone.om.exceptions.OMException: Unable to get file status: 
volume: s3v bucket: fso key: s3-1GB/6/
        at 
org.apache.hadoop.ozone.om.KeyManagerImpl.getOzoneFileStatusFSO(KeyManagerImpl.java:1356)
        at 
org.apache.hadoop.ozone.om.KeyManagerImpl.getFileStatus(KeyManagerImpl.java:1140)
  {code}
S3G:
{code:java}
2023-12-05 09:47:26,818 | ERROR | S3GAudit | user=random | ip=192.168.65.1 | 
op=GET_BUCKET {bucket=[fso], list-type=[2], max-keys=[2], fetch-owner=[false], 
delimiter=[/], prefix=[s3-1GB/3/]} | ret=FAILURE | 
java.lang.NullPointerException
        at 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$ListStatusRequest$Builder.setStartKey(OzoneManagerProtocolProtos.java)
        at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.listStatusLight(OzoneManagerProtocolClientSideTranslatorPB.java:2267)
  {code}
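The NPE at {{setStartKey}} in the trace above is consistent with how protobuf-generated builders behave: their string setters reject null values. A minimal sketch of the suspected failure mode (not Ozone code — the builder below is hypothetical and only mimics the generated {{Builder.setStartKey}} contract), assuming the client-side translator passed a null start key into the request builder:
{code:java}
// Minimal sketch, not Ozone code: protobuf-generated builders throw
// NullPointerException when a string setter is handed null. The hypothetical
// builder below only mimics that contract for setStartKey, to show why a
// null startKey on the client side would fail exactly as in the S3G trace.
public class NullStartKeyDemo {

  static final class RequestBuilder {
    private String startKey;

    // Generated protobuf setters reject null; this mirrors that behavior.
    RequestBuilder setStartKey(String value) {
      if (value == null) {
        throw new NullPointerException();
      }
      this.startKey = value;
      return this;
    }
  }

  public static void main(String[] args) {
    RequestBuilder builder = new RequestBuilder();
    builder.setStartKey("s3-1GB/3/"); // a non-null key is accepted
    try {
      builder.setStartKey(null);      // a null key fails like the S3G trace
    } catch (NullPointerException e) {
      System.out.println("NPE on null startKey");
    }
  }
}
{code}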
The other option ({{hdfs dfs -put}} over s3a) does not work either; it just hangs and nothing happens.
{code:java}
➜  hadoop-3.3.2 bin/hdfs dfs -Dfs.s3a.access.key=1 -Dfs.s3a.secret.key=1 
-Dfs.s3a.endpoint=http://localhost:9878 -Dfs.s3a.path.style.access=true -put 
a.txt s3a://fso/s3-1GB/key2
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/Users/mladjangadzic/Downloads/hadoop-3.3.2/share/hadoop/common/lib/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/Users/mladjangadzic/Documents/bakson/ozone/hadoop-ozone/dist/target/ozone-1.4.0-SNAPSHOT/share/ozone/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2023-12-05 11:48:35,050 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
2023-12-05 11:48:35,295 INFO impl.MetricsConfig: Loaded properties from 
hadoop-metrics2.properties
2023-12-05 11:48:35,545 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot 
period at 10 second(s).
2023-12-05 11:48:35,546 INFO impl.MetricsSystemImpl: s3a-file-system metrics 
system started
2023-12-05 11:48:36,086 INFO impl.DirectoryPolicyImpl: Directory markers will 
be kept {code}
Am I doing something wrong?

> [FSO] Hadoop dfs s3a protocol does not work with FSO buckets
> ------------------------------------------------------------
>
>                 Key: HDDS-9762
>                 URL: https://issues.apache.org/jira/browse/HDDS-9762
>             Project: Apache Ozone
>          Issue Type: Bug
>    Affects Versions: 1.4.0
>            Reporter: Mladjan Gadzic
>            Priority: Blocker
>         Attachments: 2023-12-02.png
>
>
> Trying to exercise freon dfsg over s3a results in an exception.
> Command:
>  
> {code:java}
> OZONE_CLASSPATH=/opt/hadoop/share/ozone/lib/aws-java-sdk-bundle-1.11.1026.jar:/opt/hadoop/share/ozone/lib/hadoop-aws-3.3.2.jar:$(ozone
>  classpath ozone-common) ozone freon 
> -Dfs.s3a.endpoint=http://host.docker.internal:9878 
> -Dfs.s3a.etag.checksum.enabled=false -Dfs.s3a.path.style.access=true 
> -Dfs.s3a.change.detection.source=versionid 
> -Dfs.s3a.change.detection.mode=client 
> -Dfs.s3a.change.detection.version.required=false dfsg -s102400 -n10000 -t10 
> --path=s3a://fso/ --prefix="s3-1GB" {code}
>  
> Exception (first run of the command):
> {code:java}
> 2023-11-22 18:34:19,180 [s3a-transfer-fso-unbounded-pool4-t1] DEBUG 
> impl.BulkDeleteRetryHandler: Retrying on error during bulk delete
> :org.apache.hadoop.fs.s3a.AWSS3IOException: delete: 
> com.amazonaws.services.s3.model.MultiObjectDeleteException: One or more 
> objects could not be deleted (Service: null; Status Code: 200; Error Code: 
> null; Request ID: 0bcdb9b8-40f8-402f-b8d1-b5bdb8159823; S3 Extended Request 
> ID: DwT29rWRhtYS; Proxy: null), S3 Extended Request ID: DwT29rWRhtYS:null: 
> InternalError: s3-1GB/: Directory is not empty. Key:s3-1GB
> : One or more objects could not be deleted (Service: null; Status Code: 200; 
> Error Code: null; Request ID: 0bcdb9b8-40f8-402f-b8d1-b5bdb8159823; S3 
> Extended Request ID: DwT29rWRhtYS; Proxy: null)
>         at 
> org.apache.hadoop.fs.s3a.impl.MultiObjectDeleteSupport.translateDeleteException(MultiObjectDeleteSupport.java:117)
>         at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:312)
>         at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:426)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:2775)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeysS3(S3AFileSystem.java:3022)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:3121)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:3078)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:4498)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$finishedWrite$31(S3AFileSystem.java:4403)
>         at 
> org.apache.hadoop.fs.s3a.impl.CallableSupplier.get(CallableSupplier.java:87)
>         at 
> java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700)
>         at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>         at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: com.amazonaws.services.s3.model.MultiObjectDeleteException: One or 
> more objects could not be deleted (Service: null; Status Code: 200; Error 
> Code: null; Request ID: 0bcdb9b8-40f8-402f-b8d1-b5bdb8159823; S3 Extended 
> Request ID: DwT29rWRhtYS; Proxy: null), S3 Extended Request ID: DwT29rWRhtYS
>         at 
> com.amazonaws.services.s3.AmazonS3Client.deleteObjects(AmazonS3Client.java:2345)
>         at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$deleteObjects$16(S3AFileSystem.java:2785)
>         at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
>         at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
>         at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:414)
>         ... 11 more{code}
> On a consecutive run (second run of the command), there is a different exception:
> {code:java}
> 2023-11-22 18:39:36,543 [pool-2-thread-9] ERROR freon.BaseFreonGenerator: 
> Error on executing task 7
> :org.apache.hadoop.fs.FileAlreadyExistsException: s3a://fso/s3-1GB/7 is a 
> directory
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerCreateFile(S3AFileSystem.java:1690)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$create$6(S3AFileSystem.java:1646)
>  at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
>  at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
>  at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2337)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2356)
>  at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:1645)
>  at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1233)
>  at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1210)
>  at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1091)
>  at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1078)
>  at 
> org.apache.hadoop.ozone.freon.HadoopFsGenerator.lambda$createFile$0(HadoopFsGenerator.java:112)
>  at com.codahale.metrics.Timer.time(Timer.java:101)
>  at 
> org.apache.hadoop.ozone.freon.HadoopFsGenerator.createFile(HadoopFsGenerator.java:111)
>  at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.tryNextTask(BaseFreonGenerator.java:220)
>  at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.taskLoop(BaseFreonGenerator.java:200)
>  at 
> org.apache.hadoop.ozone.freon.BaseFreonGenerator.lambda$startTaskRunners$0(BaseFreonGenerator.java:174)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  at java.base/java.lang.Thread.run(Thread.java:829) {code}
> Ozone SHA f34d347af1f7b9c1eb82cf27fbe8231c85493628.
> Libraries are from Hadoop 3.3.2.
> It is reproducible using an unsecured Ozone Docker cluster with 3 DNs.
> Steps to reproduce the issue:
>  # bring up an unsecured Ozone Docker cluster
>  # exec into the OM container
>  # add env variables 
> AWS_ACCESS_KEY_ID=random
> AWS_SECRET_KEY=random
> OZONE_ROOT_LOGGER=debug,console
>  # create a bucket named "fso" with the FSO layout
>  # run the command above (first time)
>  # check the output
>  # run the command above (second time)
>  # check the output



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
