fapifta commented on code in PR #4250: URL: https://github.com/apache/ozone/pull/4250#discussion_r1114337991
########## hadoop-hdds/docs/content/interface/HttpFS.md: ##########
@@ -0,0 +1,119 @@
+---
+title: HttpFS Gateway
+weight: 4
+menu:
+  main:
+    parent: "Client Interfaces"
+summary: Ozone HttpFS is a WebHDFS compatible interface implementation; as a separate role it provides easy integration with Ozone.
+---
+
+<!---
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements. See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License. You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+Ozone HttpFS can be used to integrate Ozone with other tools via a REST API.
+
+## Introduction
+
+Ozone HttpFS is forked from the HDFS HttpFS endpoint implementation ([HDDS-5448](https://issues.apache.org/jira/browse/HDDS-5448)). It is added as a separate role to Ozone, like S3G.
+
+HttpFS is a service that provides a REST HTTP gateway supporting File System operations (read and write). It is interoperable with the **webhdfs** REST HTTP API.
+
+HttpFS can be used to access data on an Ozone cluster behind a firewall (the HttpFS service acts as a gateway and is the only system that is allowed to cross the firewall into the cluster).
+
+HttpFS can be used to access data in Ozone using HTTP utilities (such as curl and wget) and HTTP libraries from languages other than Java (such as Perl).
+
+The **webhdfs** client FileSystem implementation can be used to access HttpFS using the Ozone file system command line tool (`ozone fs`) as well as from Java applications using the Hadoop FileSystem Java API.
+
+HttpFS has built-in security supporting Hadoop pseudo authentication, Kerberos SPNEGO, and other pluggable authentication mechanisms. It also provides Hadoop proxy user support.
+
+
+## Getting started
+
+The HttpFS service itself is a Jetty based web application that uses the Hadoop FileSystem API to talk to the cluster. It is a separate service which provides access to Ozone via a REST API, and it should be started in addition to the regular Ozone components.
+
+You can start a docker based cluster, including the HttpFS gateway, from the release package.
+
+Go to the `compose/ozone` directory and start the cluster:
+
+```bash
+docker-compose up -d --scale datanode=3
+```
+
+The HttpFS gateway container now runs in docker under the name `ozone_httpfs`.
+HttpFS web-service API calls are HTTP REST calls that map to an Ozone file system operation, and they can be issued with standard HTTP tools such as the `curl` Unix command.
+
+For example, in the docker cluster you can execute commands like these:
+
+* `curl -i -X PUT "http://httpfs:14000/webhdfs/v1/vol1?op=MKDIRS&user.name=hdfs"` creates a volume called `vol1`.
+
+* `curl 'http://httpfs-host:14000/webhdfs/v1/user/foo/README.txt?op=OPEN&user.name=foo'` returns the contents of the `/user/foo/README.txt` key.
+
+
+## Supported operations
+
+These are the WebHDFS REST API operations that are supported or unsupported in Ozone.
+ +### File and Directory Operations + +Operation | Support +--------------------------------|--------------------- +Create and Write to a File | supported +Append to a File | not implemented in Ozone FileSystem API Review Comment: ```suggestion Append to a File | not implemented in Ozone ``` ########## hadoop-hdds/docs/content/interface/HttpFS.md: ########## @@ -0,0 +1,119 @@ +--- +title: HttpFS Gateway +weight: 4 +menu: +main: +parent: "Client Interfaces" +summary: Ozone HttpFS is a WebHDFS compatible interface implementation, as a separate role it provides an easy integration with Ozone. +--- + +<!--- + Licensed to the Apache Software Foundation (ASF) under one or more + contributor license agreements. See the NOTICE file distributed with + this work for additional information regarding copyright ownership. + The ASF licenses this file to You under the Apache License, Version 2.0 + (the "License"); you may not use this file except in compliance with + the License. You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +--> + +Ozone HttpFS can be used to integrate Ozone with other tools via REST API. + +## Introduction + +Ozone HttpFS is forked from the HDFS HttpFS endpoint implementation ([HDDS-5448](https://issues.apache.org/jira/browse/HDDS-5448)). It is added as a separate role to Ozone, like S3G. + +HttpFS is a service that provides a REST HTTP gateway supporting File System operations (read and write). It is interoperable with the **webhdfs** REST HTTP API. 
+ +HttpFS can be used to access data on an Ozone cluster behind of a firewall (the HttpFS service acts as a gateway and is the only system that is allowed to cross the firewall into the cluster). + +HttpFS can be used to access data in Ozone using HTTP utilities (such as curl and wget) and HTTP libraries Perl from other languages than Java. + +The **webhdfs** client FileSystem implementation can be used to access HttpFS using the Ozone filesystem command line tool (`ozone fs`) as well as from Java applications using the Hadoop FileSystem Java API. + +HttpFS has built-in security supporting Hadoop pseudo authentication and Kerberos SPNEGO and other pluggable authentication mechanisms. It also provides Hadoop proxy user support. + + +## Getting started + +HttpFS service itself is a Jetty based web-application that uses the Hadoop FileSystem API to talk to the cluster, it is a separate service which provides access to Ozone via a REST API. It should be started additionally to the regular Ozone components. + +You can start a docker based cluster, including the HttpFS gateway from the release package. + +Go to the `compose/ozone` directory and start the server: + +```bash +docker-compose up -d --scale datanode=3 +``` + +You can/should find now the HttpFS gateway in docker with the name `ozone_httpfs`. +HttpFS HTTP web-service API calls are HTTP REST calls that map to an Ozone file system operation. For example, using the `curl` Unix command. + +E.g. in the docker cluster you can execute commands like these: + +* `curl -i -X PUT "http://httpfs:14000/webhdfs/v1/vol1?op=MKDIRS&user.name=hdfs"` creates a volume called `vol1`. + + +* `$ curl 'http://httpfs-host:14000/webhdfs/v1/user/foo/README.txt?op=OPEN&user.name=foo'` returns the contents of the `/user/foo/README.txt` key. + + +## Supported operations + +These are the WebHDFS REST API operations that are supported/unsupported in Ozone. 
+ +### File and Directory Operations + +Operation | Support +--------------------------------|--------------------- +Create and Write to a File | supported +Append to a File | not implemented in Ozone FileSystem API +Concat File(s) | not implemented in Ozone FileSystem API Review Comment: ```suggestion Concat File(s) | not implemented in Ozone ``` ########## hadoop-hdds/docs/content/interface/HttpFS.md: ########## @@ -0,0 +1,119 @@ +--- +title: HttpFS Gateway +weight: 4 +menu: +main: +parent: "Client Interfaces" +summary: Ozone HttpFS is a WebHDFS compatible interface implementation, as a separate role it provides an easy integration with Ozone. +--- + +<!--- + Licensed to the Apache Software Foundation (ASF) under one or more + contributor license agreements. See the NOTICE file distributed with + this work for additional information regarding copyright ownership. + The ASF licenses this file to You under the Apache License, Version 2.0 + (the "License"); you may not use this file except in compliance with + the License. You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +--> + +Ozone HttpFS can be used to integrate Ozone with other tools via REST API. + +## Introduction + +Ozone HttpFS is forked from the HDFS HttpFS endpoint implementation ([HDDS-5448](https://issues.apache.org/jira/browse/HDDS-5448)). It is added as a separate role to Ozone, like S3G. + +HttpFS is a service that provides a REST HTTP gateway supporting File System operations (read and write). It is interoperable with the **webhdfs** REST HTTP API. 
+ +HttpFS can be used to access data on an Ozone cluster behind of a firewall (the HttpFS service acts as a gateway and is the only system that is allowed to cross the firewall into the cluster). + +HttpFS can be used to access data in Ozone using HTTP utilities (such as curl and wget) and HTTP libraries Perl from other languages than Java. + +The **webhdfs** client FileSystem implementation can be used to access HttpFS using the Ozone filesystem command line tool (`ozone fs`) as well as from Java applications using the Hadoop FileSystem Java API. + +HttpFS has built-in security supporting Hadoop pseudo authentication and Kerberos SPNEGO and other pluggable authentication mechanisms. It also provides Hadoop proxy user support. + + +## Getting started + +HttpFS service itself is a Jetty based web-application that uses the Hadoop FileSystem API to talk to the cluster, it is a separate service which provides access to Ozone via a REST API. It should be started additionally to the regular Ozone components. + +You can start a docker based cluster, including the HttpFS gateway from the release package. + +Go to the `compose/ozone` directory and start the server: + +```bash +docker-compose up -d --scale datanode=3 +``` + +You can/should find now the HttpFS gateway in docker with the name `ozone_httpfs`. +HttpFS HTTP web-service API calls are HTTP REST calls that map to an Ozone file system operation. For example, using the `curl` Unix command. + +E.g. in the docker cluster you can execute commands like these: + +* `curl -i -X PUT "http://httpfs:14000/webhdfs/v1/vol1?op=MKDIRS&user.name=hdfs"` creates a volume called `vol1`. + + +* `$ curl 'http://httpfs-host:14000/webhdfs/v1/user/foo/README.txt?op=OPEN&user.name=foo'` returns the contents of the `/user/foo/README.txt` key. + + +## Supported operations + +These are the WebHDFS REST API operations that are supported/unsupported in Ozone. 
+ +### File and Directory Operations + +Operation | Support +--------------------------------|--------------------- +Create and Write to a File | supported +Append to a File | not implemented in Ozone FileSystem API +Concat File(s) | not implemented in Ozone FileSystem API +Open and Read a File | supported +Make a Directory | supported +Create a Symbolic Link | unsupported +Rename a File/Directory | unsupported Review Comment: ```suggestion Rename a File/Directory | supported (with limitations) ``` ########## hadoop-hdds/docs/content/interface/HttpFS.md: ########## @@ -0,0 +1,119 @@ +--- +title: HttpFS Gateway +weight: 4 +menu: +main: +parent: "Client Interfaces" +summary: Ozone HttpFS is a WebHDFS compatible interface implementation, as a separate role it provides an easy integration with Ozone. +--- + +<!--- + Licensed to the Apache Software Foundation (ASF) under one or more + contributor license agreements. See the NOTICE file distributed with + this work for additional information regarding copyright ownership. + The ASF licenses this file to You under the Apache License, Version 2.0 + (the "License"); you may not use this file except in compliance with + the License. You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +--> + +Ozone HttpFS can be used to integrate Ozone with other tools via REST API. + +## Introduction + +Ozone HttpFS is forked from the HDFS HttpFS endpoint implementation ([HDDS-5448](https://issues.apache.org/jira/browse/HDDS-5448)). It is added as a separate role to Ozone, like S3G. + +HttpFS is a service that provides a REST HTTP gateway supporting File System operations (read and write). 
It is interoperable with the **webhdfs** REST HTTP API. + +HttpFS can be used to access data on an Ozone cluster behind of a firewall (the HttpFS service acts as a gateway and is the only system that is allowed to cross the firewall into the cluster). + +HttpFS can be used to access data in Ozone using HTTP utilities (such as curl and wget) and HTTP libraries Perl from other languages than Java. + +The **webhdfs** client FileSystem implementation can be used to access HttpFS using the Ozone filesystem command line tool (`ozone fs`) as well as from Java applications using the Hadoop FileSystem Java API. + +HttpFS has built-in security supporting Hadoop pseudo authentication and Kerberos SPNEGO and other pluggable authentication mechanisms. It also provides Hadoop proxy user support. + + +## Getting started + +HttpFS service itself is a Jetty based web-application that uses the Hadoop FileSystem API to talk to the cluster, it is a separate service which provides access to Ozone via a REST API. It should be started additionally to the regular Ozone components. + +You can start a docker based cluster, including the HttpFS gateway from the release package. + +Go to the `compose/ozone` directory and start the server: + +```bash +docker-compose up -d --scale datanode=3 +``` + +You can/should find now the HttpFS gateway in docker with the name `ozone_httpfs`. +HttpFS HTTP web-service API calls are HTTP REST calls that map to an Ozone file system operation. For example, using the `curl` Unix command. + +E.g. in the docker cluster you can execute commands like these: + +* `curl -i -X PUT "http://httpfs:14000/webhdfs/v1/vol1?op=MKDIRS&user.name=hdfs"` creates a volume called `vol1`. + + +* `$ curl 'http://httpfs-host:14000/webhdfs/v1/user/foo/README.txt?op=OPEN&user.name=foo'` returns the contents of the `/user/foo/README.txt` key. + + +## Supported operations + +These are the WebHDFS REST API operations that are supported/unsupported in Ozone. 
+ +### File and Directory Operations + +Operation | Support +--------------------------------|--------------------- +Create and Write to a File | supported +Append to a File | not implemented in Ozone FileSystem API +Concat File(s) | not implemented in Ozone FileSystem API +Open and Read a File | supported +Make a Directory | supported +Create a Symbolic Link | unsupported +Rename a File/Directory | unsupported +Delete a File/Directory | supported +Truncate a File | not implemented in Ozone FileSystem API +Status of a File/Directory | supported +List a Directory | supported +List a File | supported +Iteratively List a Directory | supported + + +### Other File System Operations + +Operation | Support +--------------------------------------|--------------------- +Get Content Summary of a Directory | supported +Get Quota Usage of a Directory | supported +Set Quota | not implemented in Ozone Review Comment: ```suggestion Set Quota | not implemented in Ozone FileSystem API ``` ########## hadoop-hdds/docs/content/interface/HttpFS.md: ########## @@ -0,0 +1,119 @@ +--- +title: HttpFS Gateway +weight: 4 +menu: +main: +parent: "Client Interfaces" +summary: Ozone HttpFS is a WebHDFS compatible interface implementation, as a separate role it provides an easy integration with Ozone. +--- + +<!--- + Licensed to the Apache Software Foundation (ASF) under one or more + contributor license agreements. See the NOTICE file distributed with + this work for additional information regarding copyright ownership. + The ASF licenses this file to You under the Apache License, Version 2.0 + (the "License"); you may not use this file except in compliance with + the License. You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ See the License for the specific language governing permissions and + limitations under the License. +--> + +Ozone HttpFS can be used to integrate Ozone with other tools via REST API. + +## Introduction + +Ozone HttpFS is forked from the HDFS HttpFS endpoint implementation ([HDDS-5448](https://issues.apache.org/jira/browse/HDDS-5448)). It is added as a separate role to Ozone, like S3G. + +HttpFS is a service that provides a REST HTTP gateway supporting File System operations (read and write). It is interoperable with the **webhdfs** REST HTTP API. + +HttpFS can be used to access data on an Ozone cluster behind of a firewall (the HttpFS service acts as a gateway and is the only system that is allowed to cross the firewall into the cluster). + +HttpFS can be used to access data in Ozone using HTTP utilities (such as curl and wget) and HTTP libraries Perl from other languages than Java. + +The **webhdfs** client FileSystem implementation can be used to access HttpFS using the Ozone filesystem command line tool (`ozone fs`) as well as from Java applications using the Hadoop FileSystem Java API. + +HttpFS has built-in security supporting Hadoop pseudo authentication and Kerberos SPNEGO and other pluggable authentication mechanisms. It also provides Hadoop proxy user support. + + +## Getting started + +HttpFS service itself is a Jetty based web-application that uses the Hadoop FileSystem API to talk to the cluster, it is a separate service which provides access to Ozone via a REST API. It should be started additionally to the regular Ozone components. + +You can start a docker based cluster, including the HttpFS gateway from the release package. + +Go to the `compose/ozone` directory and start the server: + +```bash +docker-compose up -d --scale datanode=3 +``` + +You can/should find now the HttpFS gateway in docker with the name `ozone_httpfs`. +HttpFS HTTP web-service API calls are HTTP REST calls that map to an Ozone file system operation. 
For example, using the `curl` Unix command. + +E.g. in the docker cluster you can execute commands like these: + +* `curl -i -X PUT "http://httpfs:14000/webhdfs/v1/vol1?op=MKDIRS&user.name=hdfs"` creates a volume called `vol1`. + + +* `$ curl 'http://httpfs-host:14000/webhdfs/v1/user/foo/README.txt?op=OPEN&user.name=foo'` returns the contents of the `/user/foo/README.txt` key. + + +## Supported operations + +These are the WebHDFS REST API operations that are supported/unsupported in Ozone. + +### File and Directory Operations + +Operation | Support +--------------------------------|--------------------- +Create and Write to a File | supported +Append to a File | not implemented in Ozone FileSystem API +Concat File(s) | not implemented in Ozone FileSystem API +Open and Read a File | supported +Make a Directory | supported +Create a Symbolic Link | unsupported +Rename a File/Directory | unsupported +Delete a File/Directory | supported +Truncate a File | not implemented in Ozone FileSystem API Review Comment: ```suggestion Truncate a File | not implemented in Ozone ``` ########## hadoop-hdds/docs/content/interface/HttpFS.md: ########## @@ -0,0 +1,119 @@ +--- +title: HttpFS Gateway +weight: 4 +menu: +main: +parent: "Client Interfaces" +summary: Ozone HttpFS is a WebHDFS compatible interface implementation, as a separate role it provides an easy integration with Ozone. +--- + +<!--- + Licensed to the Apache Software Foundation (ASF) under one or more + contributor license agreements. See the NOTICE file distributed with + this work for additional information regarding copyright ownership. + The ASF licenses this file to You under the Apache License, Version 2.0 + (the "License"); you may not use this file except in compliance with + the License. 
You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +--> + +Ozone HttpFS can be used to integrate Ozone with other tools via REST API. + +## Introduction + +Ozone HttpFS is forked from the HDFS HttpFS endpoint implementation ([HDDS-5448](https://issues.apache.org/jira/browse/HDDS-5448)). It is added as a separate role to Ozone, like S3G. + +HttpFS is a service that provides a REST HTTP gateway supporting File System operations (read and write). It is interoperable with the **webhdfs** REST HTTP API. + +HttpFS can be used to access data on an Ozone cluster behind of a firewall (the HttpFS service acts as a gateway and is the only system that is allowed to cross the firewall into the cluster). + +HttpFS can be used to access data in Ozone using HTTP utilities (such as curl and wget) and HTTP libraries Perl from other languages than Java. + +The **webhdfs** client FileSystem implementation can be used to access HttpFS using the Ozone filesystem command line tool (`ozone fs`) as well as from Java applications using the Hadoop FileSystem Java API. + +HttpFS has built-in security supporting Hadoop pseudo authentication and Kerberos SPNEGO and other pluggable authentication mechanisms. It also provides Hadoop proxy user support. + + +## Getting started + +HttpFS service itself is a Jetty based web-application that uses the Hadoop FileSystem API to talk to the cluster, it is a separate service which provides access to Ozone via a REST API. It should be started additionally to the regular Ozone components. + +You can start a docker based cluster, including the HttpFS gateway from the release package. 
+ +Go to the `compose/ozone` directory and start the server: + +```bash +docker-compose up -d --scale datanode=3 +``` + +You can/should find now the HttpFS gateway in docker with the name `ozone_httpfs`. +HttpFS HTTP web-service API calls are HTTP REST calls that map to an Ozone file system operation. For example, using the `curl` Unix command. + +E.g. in the docker cluster you can execute commands like these: + +* `curl -i -X PUT "http://httpfs:14000/webhdfs/v1/vol1?op=MKDIRS&user.name=hdfs"` creates a volume called `vol1`. + + +* `$ curl 'http://httpfs-host:14000/webhdfs/v1/user/foo/README.txt?op=OPEN&user.name=foo'` returns the contents of the `/user/foo/README.txt` key. + + +## Supported operations + +These are the WebHDFS REST API operations that are supported/unsupported in Ozone. + +### File and Directory Operations + +Operation | Support +--------------------------------|--------------------- +Create and Write to a File | supported +Append to a File | not implemented in Ozone FileSystem API +Concat File(s) | not implemented in Ozone FileSystem API +Open and Read a File | supported +Make a Directory | supported +Create a Symbolic Link | unsupported +Rename a File/Directory | unsupported +Delete a File/Directory | supported +Truncate a File | not implemented in Ozone FileSystem API +Status of a File/Directory | supported +List a Directory | supported +List a File | supported +Iteratively List a Directory | supported + + +### Other File System Operations + +Operation | Support +--------------------------------------|--------------------- +Get Content Summary of a Directory | supported +Get Quota Usage of a Directory | supported +Set Quota | not implemented in Ozone +Set Quota By Storage Type | not implemented in Ozone +Get File Checksum | not implemented in Ozone +Get Home Directory | supported +Get Trash Root | supported +Set Permission | supported +Set Owner | supported +Set Replication Factor | supported Review Comment: ```suggestion Set Replication Factor 
| not implemented in Ozone FileSystem API ``` ########## hadoop-hdds/docs/content/interface/HttpFS.md: ########## @@ -0,0 +1,119 @@ +--- +title: HttpFS Gateway +weight: 4 +menu: +main: +parent: "Client Interfaces" +summary: Ozone HttpFS is a WebHDFS compatible interface implementation, as a separate role it provides an easy integration with Ozone. +--- + +<!--- + Licensed to the Apache Software Foundation (ASF) under one or more + contributor license agreements. See the NOTICE file distributed with + this work for additional information regarding copyright ownership. + The ASF licenses this file to You under the Apache License, Version 2.0 + (the "License"); you may not use this file except in compliance with + the License. You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +--> + +Ozone HttpFS can be used to integrate Ozone with other tools via REST API. + +## Introduction + +Ozone HttpFS is forked from the HDFS HttpFS endpoint implementation ([HDDS-5448](https://issues.apache.org/jira/browse/HDDS-5448)). It is added as a separate role to Ozone, like S3G. + +HttpFS is a service that provides a REST HTTP gateway supporting File System operations (read and write). It is interoperable with the **webhdfs** REST HTTP API. + +HttpFS can be used to access data on an Ozone cluster behind of a firewall (the HttpFS service acts as a gateway and is the only system that is allowed to cross the firewall into the cluster). + +HttpFS can be used to access data in Ozone using HTTP utilities (such as curl and wget) and HTTP libraries Perl from other languages than Java. 
+ +The **webhdfs** client FileSystem implementation can be used to access HttpFS using the Ozone filesystem command line tool (`ozone fs`) as well as from Java applications using the Hadoop FileSystem Java API. + +HttpFS has built-in security supporting Hadoop pseudo authentication and Kerberos SPNEGO and other pluggable authentication mechanisms. It also provides Hadoop proxy user support. + + +## Getting started + +HttpFS service itself is a Jetty based web-application that uses the Hadoop FileSystem API to talk to the cluster, it is a separate service which provides access to Ozone via a REST API. It should be started additionally to the regular Ozone components. + +You can start a docker based cluster, including the HttpFS gateway from the release package. + +Go to the `compose/ozone` directory and start the server: + +```bash +docker-compose up -d --scale datanode=3 +``` + +You can/should find now the HttpFS gateway in docker with the name `ozone_httpfs`. +HttpFS HTTP web-service API calls are HTTP REST calls that map to an Ozone file system operation. For example, using the `curl` Unix command. + +E.g. in the docker cluster you can execute commands like these: + +* `curl -i -X PUT "http://httpfs:14000/webhdfs/v1/vol1?op=MKDIRS&user.name=hdfs"` creates a volume called `vol1`. + + +* `$ curl 'http://httpfs-host:14000/webhdfs/v1/user/foo/README.txt?op=OPEN&user.name=foo'` returns the contents of the `/user/foo/README.txt` key. + + +## Supported operations + +These are the WebHDFS REST API operations that are supported/unsupported in Ozone. 
+
+### File and Directory Operations
+
+Operation | Support
+--------------------------------|---------------------
+Create and Write to a File | supported
+Append to a File | not implemented in Ozone FileSystem API
+Concat File(s) | not implemented in Ozone FileSystem API
+Open and Read a File | supported
+Make a Directory | supported
+Create a Symbolic Link | unsupported

Review Comment:
```suggestion
Create a Symbolic Link | not implemented in Ozone
```

########## hadoop-hdds/docs/content/interface/HttpFS.md: ##########

+Rename a File/Directory | unsupported
+Delete a File/Directory | supported
+Truncate a File | not implemented in Ozone FileSystem API
+Status of a File/Directory | supported
+List a Directory | supported
+List a File | supported
+Iteratively List a Directory | supported
+
+
+### Other File System Operations
+
+Operation | Support
+--------------------------------------|---------------------
+Get Content Summary of a Directory | supported
+Get Quota Usage of a Directory | supported
+Set Quota | not implemented in Ozone
+Set Quota By Storage Type | not implemented in Ozone
+Get File Checksum | not implemented in Ozone

Review Comment:
```suggestion
Get File Checksum | unsupported (to be fixed)
```
########## hadoop-hdds/docs/content/interface/HttpFS.md: ##########

+Get Home Directory | supported
+Get Trash Root | supported
+Set Permission | supported
+Set Owner | supported
+Set Replication Factor | supported
+Set Access or Modification Time | supported

Review Comment:
```suggestion
Set Access or Modification Time | not implemented in Ozone FileSystem API
```
########## hadoop-hdds/docs/content/interface/HttpFS.md: ##########

+Set Permission | supported

Review Comment:
```suggestion
Set Permission | not implemented in Ozone FileSystem API
```
########## hadoop-hdds/docs/content/interface/HttpFS.md: ##########

+Set Owner | supported

Review Comment:
```suggestion
Set Owner | not implemented in Ozone FileSystem API
```
########## hadoop-hdds/docs/content/interface/HttpFS.md: ##########

+Get Home Directory | supported

Review Comment:
```suggestion
Get Home Directory | unsupported (to be fixed)
```

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
