[ 
https://issues.apache.org/jira/browse/HDDS-6825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17701666#comment-17701666
 ] 

Mladjan Gadzic commented on HDDS-6825:
--------------------------------------

[~NeilJoshi] thanks for finding the right version to reproduce the issue and 
for reproducing it.

While trying to reproduce the issue on the current Ozone trunk, we found that 
a zero-byte file gets created instead of a directory:
{code:bash}
ubuntu@ip-172-31-30-88:~/hadoop-3.3.4$ bin/hdfs dfs -Dfs.s3a.access.key=1 
-Dfs.s3a.secret.key=1 -Dfs.s3a.endpoint=http://localhost:9878 
-Dfs.s3a.connection.ssl.enabled=false -Dfs.s3a.path.style.access=true -mkdir 
s3a://fso/test
2023-03-17 10:41:58,394 INFO impl.MetricsConfig: Loaded properties from 
hadoop-metrics2.properties
2023-03-17 10:41:58,503 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot 
period at 10 second(s).
2023-03-17 10:41:58,503 INFO impl.MetricsSystemImpl: s3a-file-system metrics 
system started
2023-03-17 10:42:01,145 INFO impl.MetricsSystemImpl: Stopping s3a-file-system 
metrics system...
2023-03-17 10:42:01,145 INFO impl.MetricsSystemImpl: s3a-file-system metrics 
system stopped.
2023-03-17 10:42:01,145 INFO impl.MetricsSystemImpl: s3a-file-system metrics 
system shutdown complete. 

ubuntu@ip-172-31-30-88:~/hadoop-3.3.4$ bin/hdfs dfs -Dfs.s3a.access.key=1 
-Dfs.s3a.secret.key=1 -Dfs.s3a.endpoint=http://localhost:9878 
-Dfs.s3a.connection.ssl.enabled=false -Dfs.s3a.path.style.access=true -mkdir 
s3a://fso/test/dir1
2023-03-17 11:10:36,220 INFO impl.MetricsConfig: Loaded properties from 
hadoop-metrics2.properties
2023-03-17 11:10:36,372 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot 
period at 10 second(s).
2023-03-17 11:10:36,373 INFO impl.MetricsSystemImpl: s3a-file-system metrics 
system started
mkdir: PUT 0-byte object  on test/dir1: 
com.amazonaws.services.s3.model.AmazonS3Exception: An error occurred 
(InvalidRequest) when calling the PutObject/MPU PartUpload operation: 
ozone.om.enable.filesystem.paths is enabled Keys are considered as Unix Paths. 
Path has Violated FS Semantics which caused put operation to fail. (Service: 
Amazon S3; Status Code: 400; Error Code: InvalidRequest; Request ID: 
026d6434-ad44-4218-aca6-80c24d7e8622; S3 Extended Request ID: null; Proxy: 
null), S3 Extended Request ID: null:InvalidRequest: An error occurred 
(InvalidRequest) when calling the PutObject/MPU PartUpload operation: 
ozone.om.enable.filesystem.paths is enabled Keys are considered as Unix Paths. 
Path has Violated FS Semantics which caused put operation to fail. (Service: 
Amazon S3; Status Code: 400; Error Code: InvalidRequest; Request ID: 
026d6434-ad44-4218-aca6-80c24d7e8622; S3 Extended Request ID: null; Proxy: null)
2023-03-17 11:10:38,972 INFO impl.MetricsSystemImpl: Stopping s3a-file-system 
metrics system...
2023-03-17 11:10:38,973 INFO impl.MetricsSystemImpl: s3a-file-system metrics 
system stopped.
2023-03-17 11:10:38,973 INFO impl.MetricsSystemImpl: s3a-file-system metrics 
system shutdown complete.
{code}
{code:bash}
bash-4.2$ ozone sh key list /s3v/fso
[ {
  "volumeName" : "s3v",
  "bucketName" : "fso",
  "name" : "test",
  "dataSize" : 0,
  "creationTime" : "2023-03-17T10:42:00.857Z",
  "modificationTime" : "2023-03-17T10:42:00.963Z",
  "replicationConfig" : {
    "replicationFactor" : "ONE",
    "requiredNodes" : 1,
    "replicationType" : "RATIS"
  },
  "metadata" : { }
} ] {code}
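The {{ozone sh key list}} output above shows the symptom: {{test}} is stored as a plain zero-byte key rather than a directory entry. As a hypothetical sanity check (not an Ozone tool), the listing JSON can be scanned for such keys:

```python
import json

# Parse the `ozone sh key list` output shown above and flag zero-byte keys,
# which in this reproduction are directories that were persisted as files.
# Hypothetical check, not part of Ozone; listing trimmed to relevant fields.
listing = json.loads("""
[ {
  "volumeName" : "s3v",
  "bucketName" : "fso",
  "name" : "test",
  "dataSize" : 0,
  "metadata" : { }
} ]
""")

zero_byte_keys = [k["name"] for k in listing if k["dataSize"] == 0]
print(zero_byte_keys)  # ['test']
```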
A while ago we merged a patch enabling directory creation through S3G for 
buckets with FSO layout. We are currently working on a similar patch for 
directory creation over S3A with Trino, covering paths with a trailing slash. 
Unlike those cases, the _dfs -mkdir_ command creates a zero-byte file instead 
of a directory for buckets with FSO layout.
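The InvalidRequest in the log above is consistent with the path check that {{ozone.om.enable.filesystem.paths}} turns on: the first mkdir persisted {{test}} as a plain zero-byte key, so a later PUT of {{test/dir1}} would place a key under a file. A rough sketch of that ancestor check (hypothetical helper, not Ozone's actual code):

```python
# Hypothetical sketch of the Unix-path validation implied by the
# "Keys are considered as Unix Paths" error above; not Ozone's real code.

def violates_fs_semantics(existing_file_keys, new_key):
    """Return True if any ancestor component of new_key already
    exists as a plain file key, which would break FS semantics."""
    parts = new_key.strip("/").split("/")
    for i in range(1, len(parts)):
        ancestor = "/".join(parts[:i])
        if ancestor in existing_file_keys:
            return True
    return False

# "test" was stored as a zero-byte file key by the first mkdir,
# so a subsequent PUT of "test/dir1" is rejected with InvalidRequest.
print(violates_fs_semantics({"test"}, "test/dir1"))  # True
print(violates_fs_semantics(set(), "test"))          # False
```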

> FS Ops fail on FSO Bucket via s3a scheme
> ----------------------------------------
>
>                 Key: HDDS-6825
>                 URL: https://issues.apache.org/jira/browse/HDDS-6825
>             Project: Apache Ozone
>          Issue Type: Bug
>          Components: S3
>    Affects Versions: 1.3.0
>            Reporter: Soumitra Sulav
>            Priority: Critical
>              Labels: OzoneS3
>         Attachments: CDPD-40560-fso_s3a_mkdir_operation_debug.log, 
> hadoop_debug_fso.txt, hadoop_debug_fso_part.txt, hadoop_debug_obs.txt
>
>
> Steps to reproduce :
> 1. Create an s3v bucket with {{FILE_SYSTEM_OPTIMIZED}} as Bucket Layout
> {code:java}
> # /opt/cloudera/parcels/CDH/bin/ozone sh  bucket info
>  s3v/fso
> {
>   "metadata" : { },
>   "volumeName" : "s3v",
>   "name" : "fso",
>   "storageType" : "DISK",
>   "versioning" : false,
>   "usedBytes" : 174,
>   "usedNamespace" : 1,
>   "creationTime" : "2022-06-02T16:24:58.759Z",
>   "modificationTime" : "2022-06-02T16:24:58.759Z",
>   "quotaInBytes" : -1,
>   "quotaInNamespace" : -1,
>   "bucketLayout" : "FILE_SYSTEM_OPTIMIZED",
>   "owner" : "hrt_qa",
>   "link" : false
> }{code}
> 2. Get s3 credentials
> 3. Run FS operation via s3a scheme :
> {code:java}
> /opt/cloudera/parcels/CDH/bin/hadoop fs 
> [email protected] 
> -Dfs.s3a.secret.key=32b5a586d7c35e1a0aa93776ce2500d07edd34ae9c3df2f9658bd705bf0a33ad
>  -Dfs.s3a.endpoint=https://quasar-unoxvj-6.quasar-unoxvj.root.hwx.site:9879 
> -Dfs.s3a.connection.ssl.enabled=true -Dfs.s3a.change.detection.mode=none 
> -Dfs.s3a.change.detection.version.required=false 
> -Dfs.s3a.path.style.access=true -mkdir s3a://fso/test {code}
> S3G Logs
> {code:java}
> 2022-06-02 16:48:10,808 WARN org.eclipse.jetty.server.HttpChannel: 
> handleException /fso/ FILE_NOT_FOUND 
> org.apache.hadoop.ozone.om.exceptions.OMException: Unable to get file status: 
> volume: s3v bucket: fso key: test/
> 2022-06-02 16:48:10,809 WARN org.eclipse.jetty.server.HttpChannelState: 
> unhandled due to prior sendError
> javax.servlet.ServletException: javax.servlet.ServletException: 
> org.glassfish.jersey.server.ContainerException: FILE_NOT_FOUND 
> org.apache.hadoop.ozone.om.exceptions.OMException: Unable to get file status: 
> volume: s3v bucket: fso key: test/
>         at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:162)
>         at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
>         at org.eclipse.jetty.server.Server.handle(Server.java:516)
>         at 
> org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:388)
>         at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:633)
>         at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:380)
>         at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
>         at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
>         at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
>         at 
> org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:540)
>         at 
> org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:395)
>         at 
> org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:161)
>         at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
>         at 
> org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
>         at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
>         at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
>         at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
>         at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
>         at 
> org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:383)
>         at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:882)
>         at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1036)
>         at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: javax.servlet.ServletException: 
> org.glassfish.jersey.server.ContainerException: FILE_NOT_FOUND 
> org.apache.hadoop.ozone.om.exceptions.OMException: Unable to get file status: 
> volume: s3v bucket: fso key: test/
>         at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:410)
>         at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
>         at 
> org.eclipse.jetty.servlet.ServletHolder$NotAsync.service(ServletHolder.java:1452)
>         at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:791)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1626)
>         at 
> org.apache.hadoop.ozone.s3.RootPageDisplayFilter.doFilter(RootPageDisplayFilter.java:53)
>         at 
> org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
>  {code}
> PFA Debug console logs.
> ls works; mkdir hangs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
