[
https://issues.apache.org/jira/browse/HDDS-1250?focusedWorklogId=215009&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-215009
]
ASF GitHub Bot logged work on HDDS-1250:
----------------------------------------
Author: ASF GitHub Bot
Created on: 18/Mar/19 20:47
Start Date: 18/Mar/19 20:47
Worklog Time Spent: 10m
Work Description: bharatviswa504 commented on issue #591: HDDS-1250: In
OM HA AllocateBlock call where connecting to SCM from OM should not happen on
Ratis.
URL: https://github.com/apache/hadoop/pull/591#issuecomment-474094516
Ran smoke tests locally.
The secure tests did not run, but all remaining tests passed.
Thank you @hanishakoneru for the review.
I will commit this shortly.
HW13865:smoketest bviswanadham$ ./test.sh
-------------------------------------------------
Executing test(s): [basic]
Cluster type: ozone
Compose file:
/Users/bviswanadham/workspace/hadoop-commit/hadoop/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/smoketest/../compose/ozone/docker-compose.yaml
Output dir:
/Users/bviswanadham/workspace/hadoop-commit/hadoop/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/smoketest/result
Command to rerun: ./test.sh --keep --env ozone basic
-------------------------------------------------
Removing network ozone_default
WARNING: Network ozone_default not found.
Creating network "ozone_default" with the default driver
Creating ozone_scm_1 ...
Creating ozone_datanode_1 ...
Creating ozone_datanode_2 ...
Creating ozone_datanode_3 ...
Creating ozone_om_1 ...
Creating ozone_om_1
Creating ozone_datanode_1
Creating ozone_datanode_1 ... done
Creating ozone_datanode_2 ... done
Creating ozone_om_1 ... done
0 datanode is up and healthy (until now)
0 datanode is up and healthy (until now)
3 datanodes are up and registered to the scm
==============================================================================
Basic
==============================================================================
Basic.Basic :: Smoketest ozone cluster startup
==============================================================================
Check webui static resources | PASS
|
------------------------------------------------------------------------------
Start freon testing | PASS
|
------------------------------------------------------------------------------
Basic.Basic :: Smoketest ozone cluster startup | PASS
|
2 critical tests, 2 passed, 0 failed
2 tests total, 2 passed, 0 failed
==============================================================================
Basic.Ozone-Shell :: Test ozone shell CLI usage
==============================================================================
RpcClient with port | PASS
|
------------------------------------------------------------------------------
RpcClient without host | PASS
|
------------------------------------------------------------------------------
RpcClient without scheme | PASS
|
------------------------------------------------------------------------------
Basic.Ozone-Shell :: Test ozone shell CLI usage | PASS
|
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==============================================================================
Basic | PASS
|
5 critical tests, 5 passed, 0 failed
5 tests total, 5 passed, 0 failed
==============================================================================
Output: /opt/hadoop/smoketest/result/robot-ozone-basic.xml
Stopping ozone_datanode_2 ... done
Stopping ozone_datanode_3 ... done
Stopping ozone_scm_1 ... done
Stopping ozone_datanode_1 ... done
Stopping ozone_om_1 ... done
Removing ozone_datanode_2 ... done
Removing ozone_datanode_3 ... done
Removing ozone_scm_1 ... done
Removing ozone_datanode_1 ... done
Removing ozone_om_1 ... done
Removing network ozone_default
-------------------------------------------------
Executing test(s): [auditparser]
Cluster type: ozone
Compose file:
/Users/bviswanadham/workspace/hadoop-commit/hadoop/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/smoketest/../compose/ozone/docker-compose.yaml
Output dir:
/Users/bviswanadham/workspace/hadoop-commit/hadoop/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/smoketest/result
Command to rerun: ./test.sh --keep --env ozone auditparser
-------------------------------------------------
Removing network ozone_default
WARNING: Network ozone_default not found.
Creating network "ozone_default" with the default driver
Creating ozone_scm_1 ...
Creating ozone_om_1 ...
Creating ozone_datanode_1 ...
Creating ozone_datanode_2 ...
Creating ozone_datanode_3 ...
Creating ozone_scm_1
Creating ozone_datanode_1
Creating ozone_datanode_1 ... done
Creating ozone_datanode_2 ... done
Creating ozone_datanode_3 ... done
0 datanode is up and healthy (until now)
0 datanode is up and healthy (until now)
0 datanode is up and healthy (until now)
3 datanodes are up and registered to the scm
==============================================================================
Auditparser
==============================================================================
Auditparser.Auditparser :: Smoketest ozone cluster startup
==============================================================================
Initiating freon to generate data | PASS
|
------------------------------------------------------------------------------
Testing audit parser | PASS
|
------------------------------------------------------------------------------
Auditparser.Auditparser :: Smoketest ozone cluster startup | PASS
|
2 critical tests, 2 passed, 0 failed
2 tests total, 2 passed, 0 failed
==============================================================================
Auditparser | PASS
|
2 critical tests, 2 passed, 0 failed
2 tests total, 2 passed, 0 failed
==============================================================================
Output: /opt/hadoop/smoketest/result/robot-ozone-auditparser.xml
Stopping ozone_datanode_1 ... done
Stopping ozone_om_1 ... done
Stopping ozone_datanode_2 ... done
Stopping ozone_datanode_3 ... done
Stopping ozone_scm_1 ... done
Removing ozone_datanode_1 ... done
Removing ozone_om_1 ... done
Removing ozone_datanode_2 ... done
Removing ozone_datanode_3 ... done
Removing ozone_scm_1 ... done
Removing network ozone_default
-------------------------------------------------
Executing test(s): [ozonefs]
Cluster type: ozonefs
Compose file:
/Users/bviswanadham/workspace/hadoop-commit/hadoop/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/smoketest/../compose/ozonefs/docker-compose.yaml
Output dir:
/Users/bviswanadham/workspace/hadoop-commit/hadoop/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/smoketest/result
Command to rerun: ./test.sh --keep --env ozonefs ozonefs
-------------------------------------------------
Removing network ozonefs_default
WARNING: Network ozonefs_default not found.
Creating network "ozonefs_default" with the default driver
Creating ozonefs_om_1 ...
Creating ozonefs_scm_1 ...
Creating ozonefs_hadoop2_1 ...
Creating ozonefs_hadoop3_1 ...
Creating ozonefs_datanode_1 ...
Creating ozonefs_datanode_2 ...
Creating ozonefs_datanode_3 ...
Creating ozonefs_scm_1
Creating ozonefs_om_1
Creating ozonefs_datanode_1
Creating ozonefs_hadoop3_1
Creating ozonefs_datanode_1 ... done
Creating ozonefs_datanode_2 ... done
Creating ozonefs_hadoop2_1 ... done
3 datanodes are up and registered to the scm
==============================================================================
Ozonefs
==============================================================================
Ozonefs.Ozonefs :: Ozonefs test
==============================================================================
Create volume and bucket | PASS
|
------------------------------------------------------------------------------
Check volume from ozonefs | PASS
|
------------------------------------------------------------------------------
Run ozoneFS tests | PASS
|
------------------------------------------------------------------------------
Ozonefs.Ozonefs :: Ozonefs test | PASS
|
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==============================================================================
Ozonefs | PASS
|
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==============================================================================
Output: /opt/hadoop/smoketest/result/robot-ozonefs-ozonefs.xml
Stopping ozonefs_datanode_1 ... done
Stopping ozonefs_hadoop2_1 ... done
Stopping ozonefs_datanode_2 ... done
Stopping ozonefs_datanode_3 ... done
Stopping ozonefs_hadoop3_1 ... done
Stopping ozonefs_om_1 ... done
Stopping ozonefs_scm_1 ... done
Removing ozonefs_datanode_1 ... done
Removing ozonefs_hadoop2_1 ... done
Removing ozonefs_datanode_2 ... done
Removing ozonefs_datanode_3 ... done
Removing ozonefs_hadoop3_1 ... done
Removing ozonefs_om_1 ... done
Removing ozonefs_scm_1 ... done
Removing network ozonefs_default
-------------------------------------------------
Executing test(s): [basic]
Cluster type: ozone-hdfs
Compose file:
/Users/bviswanadham/workspace/hadoop-commit/hadoop/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/smoketest/../compose/ozone-hdfs/docker-compose.yaml
Output dir:
/Users/bviswanadham/workspace/hadoop-commit/hadoop/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/smoketest/result
Command to rerun: ./test.sh --keep --env ozone-hdfs basic
-------------------------------------------------
Removing network ozonehdfs_default
WARNING: Network ozonehdfs_default not found.
Creating network "ozonehdfs_default" with the default driver
Creating ozonehdfs_scm_1 ...
Creating ozonehdfs_namenode_1 ...
Creating ozonehdfs_datanode_1 ...
Creating ozonehdfs_om_1 ...
Creating ozonehdfs_datanode_2 ...
Creating ozonehdfs_datanode_3 ...
Creating ozonehdfs_s3g_1 ...
Creating ozonehdfs_namenode_1
Creating ozonehdfs_scm_1
Creating ozonehdfs_s3g_1
Creating ozonehdfs_datanode_1
Creating ozonehdfs_datanode_1 ... done
Creating ozonehdfs_datanode_2 ... done
Creating ozonehdfs_s3g_1 ... done
0 datanode is up and healthy (until now)
0 datanode is up and healthy (until now)
0 datanode is up and healthy (until now)
0 datanode is up and healthy (until now)
0 datanode is up and healthy (until now)
WARNING! Datanodes are not started successfully. Please check the
docker-compose files
==============================================================================
Basic
==============================================================================
Basic.Basic :: Smoketest ozone cluster startup
==============================================================================
Check webui static resources | PASS
|
------------------------------------------------------------------------------
Start freon testing | PASS
|
------------------------------------------------------------------------------
Basic.Basic :: Smoketest ozone cluster startup | PASS
|
2 critical tests, 2 passed, 0 failed
2 tests total, 2 passed, 0 failed
==============================================================================
Basic.Ozone-Shell :: Test ozone shell CLI usage
==============================================================================
RpcClient with port | PASS
|
------------------------------------------------------------------------------
RpcClient without host | PASS
|
------------------------------------------------------------------------------
RpcClient without scheme | PASS
|
------------------------------------------------------------------------------
Basic.Ozone-Shell :: Test ozone shell CLI usage | PASS
|
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==============================================================================
Basic | PASS
|
5 critical tests, 5 passed, 0 failed
5 tests total, 5 passed, 0 failed
==============================================================================
Output: /opt/hadoop/smoketest/result/robot-ozone-hdfs-basic.xml
Stopping ozonehdfs_om_1 ... done
Stopping ozonehdfs_datanode_2 ... done
Stopping ozonehdfs_datanode_1 ... done
Stopping ozonehdfs_datanode_3 ... done
Stopping ozonehdfs_s3g_1 ... done
Stopping ozonehdfs_scm_1 ... done
Stopping ozonehdfs_namenode_1 ... done
Removing ozonehdfs_om_1 ... done
Removing ozonehdfs_datanode_2 ... done
Removing ozonehdfs_datanode_1 ... done
Removing ozonehdfs_datanode_3 ... done
Removing ozonehdfs_s3g_1 ... done
Removing ozonehdfs_scm_1 ... done
Removing ozonehdfs_namenode_1 ... done
Removing network ozonehdfs_default
-------------------------------------------------
Executing test(s): [s3]
Cluster type: ozones3
Compose file:
/Users/bviswanadham/workspace/hadoop-commit/hadoop/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/smoketest/../compose/ozones3/docker-compose.yaml
Output dir:
/Users/bviswanadham/workspace/hadoop-commit/hadoop/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/smoketest/result
Command to rerun: ./test.sh --keep --env ozones3 s3
-------------------------------------------------
Removing network ozones3_default
WARNING: Network ozones3_default not found.
Creating network "ozones3_default" with the default driver
Creating ozones3_datanode_1 ...
Creating ozones3_datanode_2 ...
Creating ozones3_datanode_3 ...
Creating ozones3_scm_1 ...
Creating ozones3_s3g_1 ...
Creating ozones3_om_1 ...
Creating ozones3_datanode_1
Creating ozones3_om_1
Creating ozones3_datanode_2
Creating ozones3_datanode_1 ... done
Creating ozones3_datanode_2 ... done
Creating ozones3_datanode_3 ... done
0 datanode is up and healthy (until now)
0 datanode is up and healthy (until now)
0 datanode is up and healthy (until now)
3 datanodes are up and registered to the scm
==============================================================================
S3
==============================================================================
S3.Awss3 :: S3 gateway test with aws cli
==============================================================================
File upload and directory list | PASS
|
------------------------------------------------------------------------------
S3.Awss3 :: S3 gateway test with aws cli | PASS
|
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
S3.Bucketcreate :: S3 gateway test with aws cli
==============================================================================
Create bucket which already exists | PASS
|
------------------------------------------------------------------------------
S3.Bucketcreate :: S3 gateway test with aws cli | PASS
|
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
S3.Buckethead :: S3 gateway test with aws cli
==============================================================================
Head Bucket not existent | PASS
|
------------------------------------------------------------------------------
S3.Buckethead :: S3 gateway test with aws cli | PASS
|
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
S3.Bucketlist :: S3 gateway test with aws cli
==============================================================================
List buckets | PASS
|
------------------------------------------------------------------------------
S3.Bucketlist :: S3 gateway test with aws cli | PASS
|
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
S3.MultipartUpload :: S3 gateway test with aws cli
==============================================================================
Test Multipart Upload | PASS
|
------------------------------------------------------------------------------
Test Multipart Upload Complete | PASS
|
------------------------------------------------------------------------------
Test Multipart Upload Complete Entity too small | PASS
|
------------------------------------------------------------------------------
Test Multipart Upload Complete Invalid part | PASS
|
------------------------------------------------------------------------------
Test abort Multipart upload | PASS
|
------------------------------------------------------------------------------
Test abort Multipart upload with invalid uploadId | PASS
|
------------------------------------------------------------------------------
Upload part with Incorrect uploadID | PASS
|
------------------------------------------------------------------------------
Test list parts | PASS
|
------------------------------------------------------------------------------
Test Multipart Upload with the simplified aws s3 cp API | PASS
|
------------------------------------------------------------------------------
S3.MultipartUpload :: S3 gateway test with aws cli | PASS
|
9 critical tests, 9 passed, 0 failed
9 tests total, 9 passed, 0 failed
==============================================================================
S3.Objectcopy :: S3 gateway test with aws cli
==============================================================================
Copy Object Happy Scenario | PASS
|
------------------------------------------------------------------------------
Copy Object Where Bucket is not available | PASS
|
------------------------------------------------------------------------------
Copy Object Where both source and dest are same with change to sto... | PASS
|
------------------------------------------------------------------------------
Copy Object Where Key not available | PASS
|
------------------------------------------------------------------------------
S3.Objectcopy :: S3 gateway test with aws cli | PASS
|
4 critical tests, 4 passed, 0 failed
4 tests total, 4 passed, 0 failed
==============================================================================
S3.Objectdelete :: S3 gateway test with aws cli
==============================================================================
Delete file with s3api | PASS
|
------------------------------------------------------------------------------
Delete file with s3api, file doesn't exist | PASS
|
------------------------------------------------------------------------------
Delete dir with s3api | PASS
|
------------------------------------------------------------------------------
Delete file with s3api, file doesn't exist, prefix of a real file | PASS
|
------------------------------------------------------------------------------
Delete file with s3api, bucket doesn't exist | PASS
|
------------------------------------------------------------------------------
S3.Objectdelete :: S3 gateway test with aws cli | PASS
|
5 critical tests, 5 passed, 0 failed
5 tests total, 5 passed, 0 failed
==============================================================================
S3.Objectmultidelete :: S3 gateway test with aws cli
==============================================================================
Delete file with multi delete | PASS
|
------------------------------------------------------------------------------
S3.Objectmultidelete :: S3 gateway test with aws cli | PASS
|
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
S3.Objectputget :: S3 gateway test with aws cli
==============================================================================
Put object to s3 | PASS
|
------------------------------------------------------------------------------
Get object from s3 | PASS
|
------------------------------------------------------------------------------
Get Partial object from s3 with both start and endoffset | PASS
|
------------------------------------------------------------------------------
Get Partial object from s3 with both start and endoffset(start off... | PASS
|
------------------------------------------------------------------------------
Get Partial object from s3 with both start and endoffset(end offse... | PASS
|
------------------------------------------------------------------------------
Get Partial object from s3 with only start offset | PASS
|
------------------------------------------------------------------------------
Get Partial object from s3 with both start and endoffset which are... | PASS
|
------------------------------------------------------------------------------
Get Partial object from s3 to get last n bytes | PASS
|
------------------------------------------------------------------------------
Incorrect values for end and start offset | PASS
|
------------------------------------------------------------------------------
Zero byte file | PASS
|
------------------------------------------------------------------------------
S3.Objectputget :: S3 gateway test with aws cli | PASS
|
10 critical tests, 10 passed, 0 failed
10 tests total, 10 passed, 0 failed
==============================================================================
S3.Webui :: S3 gateway web ui test
==============================================================================
File upload and directory list | PASS
|
------------------------------------------------------------------------------
S3.Webui :: S3 gateway web ui test | PASS
|
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
S3 | PASS
|
34 critical tests, 34 passed, 0 failed
34 tests total, 34 passed, 0 failed
==============================================================================
Output: /opt/hadoop/smoketest/result/robot-ozones3-s3.xml
Stopping ozones3_datanode_3 ... done
Stopping ozones3_scm_1 ... done
Stopping ozones3_s3g_1 ... done
Stopping ozones3_datanode_2 ... done
Stopping ozones3_om_1 ... done
Stopping ozones3_datanode_1 ... done
Removing ozones3_datanode_3 ... done
Removing ozones3_scm_1 ... done
Removing ozones3_s3g_1 ... done
Removing ozones3_datanode_2 ... done
Removing ozones3_om_1 ... done
Removing ozones3_datanode_1 ... done
Removing network ozones3_default
-------------------------------------------------
Executing test(s): [security]
Cluster type: ozonesecure
Compose file:
/Users/bviswanadham/workspace/hadoop-commit/hadoop/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/smoketest/../compose/ozonesecure/docker-compose.yaml
Output dir:
/Users/bviswanadham/workspace/hadoop-commit/hadoop/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/smoketest/result
Command to rerun: ./test.sh --keep --env ozonesecure security
-------------------------------------------------
Removing network ozonesecure_default
WARNING: Network ozonesecure_default not found.
Creating network "ozonesecure_default" with the default driver
Creating ozonesecure_scm_1 ...
Creating ozonesecure_s3g_1 ...
Creating ozonesecure_om_1 ...
Creating ozonesecure_kms_1 ...
Creating ozonesecure_datanode_1 ...
Creating ozonesecure_datanode_2 ...
Creating ozonesecure_datanode_3 ...
Creating ozonesecure_kdc_1 ...
Creating ozonesecure_s3g_1
Creating ozonesecure_kms_1
Creating ozonesecure_datanode_2
Creating ozonesecure_scm_1
Creating ozonesecure_kdc_1
Creating ozonesecure_datanode_1 ... done
Creating ozonesecure_datanode_2 ... done
Creating ozonesecure_kms_1 ... done
ERROR: No container found for scm_1
ERROR: No container found for scm_1
ERROR: No container found for scm_1
ERROR: No container found for scm_1
ERROR: No container found for scm_1
ERROR: No container found for scm_1
ERROR: No container found for scm_1
ERROR: No container found for scm_1
ERROR: No container found for scm_1
WARNING! Datanodes are not started successfully. Please check the
docker-compose files
ERROR: No container found for om_1
Stopping ozonesecure_kdc_1 ... done
Removing ozonesecure_kdc_1 ... done
Removing ozonesecure_kms_1 ... done
Removing ozonesecure_datanode_3 ... done
Removing ozonesecure_om_1 ... done
Removing ozonesecure_datanode_1 ... done
Removing ozonesecure_datanode_2 ... done
Removing ozonesecure_scm_1 ... done
Removing ozonesecure_s3g_1 ... done
Removing network ozonesecure_default
Setting up environment!
Log: /opt/hadoop/smoketest/result/log.html
Report: /opt/hadoop/smoketest/result/report.html
Issue Time Tracking
-------------------
Worklog Id: (was: 215009)
Time Spent: 4h 10m (was: 4h)
> In OM HA AllocateBlock call where connecting to SCM from OM should not happen
> on Ratis
> --------------------------------------------------------------------------------------
>
> Key: HDDS-1250
> URL: https://issues.apache.org/jira/browse/HDDS-1250
> Project: Hadoop Distributed Data Store
> Issue Type: Sub-task
> Reporter: Bharat Viswanadham
> Assignee: Bharat Viswanadham
> Priority: Major
> Labels: pull-request-available
> Time Spent: 4h 10m
> Remaining Estimate: 0h
>
> In OM HA, currently when allocateBlock is called, applyTransaction() on
> every OM node makes its own call to SCM and writes the allocated block
> information into the OM DB. The problem with this is that each OM allocates
> a different block and appends different BlockInfo into OMKeyInfo, which is
> also a correctness issue. (All OMs should have the same block information
> for a key, even though eventually this might be changed during key commit.)
>
> The proposed approach is:
> 1. The call to SCM to allocate the block will happen outside of Ratis; the
> resulting block information is passed through Ratis, and only the write to
> the OM DB happens via Ratis.
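The proposed flow can be sketched as follows. This is a minimal illustrative sketch, not the actual Ozone implementation: the class names (AllocateBlockSketch, Scm, OmNode) and their methods are hypothetical stand-ins for SCM block allocation, Ratis replication, and the OM key table.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

public class AllocateBlockSketch {

    /** Hypothetical stand-in for SCM: hands out monotonically increasing block ids. */
    static class Scm {
        private final AtomicLong nextBlockId = new AtomicLong(1);
        long allocateBlock() {
            return nextBlockId.getAndIncrement();
        }
    }

    /** Hypothetical stand-in for one OM node's key table. */
    static class OmNode {
        final List<Long> keyBlocks = new ArrayList<>();
        // applyTransaction only writes the already-allocated block to the DB;
        // it no longer contacts SCM itself.
        void applyTransaction(long blockId) {
            keyBlocks.add(blockId);
        }
    }

    public static void main(String[] args) {
        Scm scm = new Scm();
        List<OmNode> oms = List.of(new OmNode(), new OmNode(), new OmNode());

        // The SCM call happens once, outside of Ratis.
        long blockId = scm.allocateBlock();

        // Ratis replicates the request carrying the block info; every OM
        // applies the identical block, so their DBs cannot diverge.
        for (OmNode om : oms) {
            om.applyTransaction(blockId);
        }

        System.out.println(oms.get(0).keyBlocks.equals(oms.get(1).keyBlocks)
                && oms.get(1).keyBlocks.equals(oms.get(2).keyBlocks)); // prints "true"
    }
}
```

In the old flow each OM would call scm.allocateBlock() inside applyTransaction, getting a different id per node; hoisting the SCM call out of the replicated path is what restores determinism.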
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)