[
https://issues.apache.org/jira/browse/HDDS-2916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Bharat Viswanadham updated HDDS-2916:
-------------------------------------
Status: Patch Available (was: Open)
> OM HA cli getserviceroles not working
> -------------------------------------
>
> Key: HDDS-2916
> URL: https://issues.apache.org/jira/browse/HDDS-2916
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Components: Ozone Client, Ozone Manager
> Reporter: Nilotpal Nandi
> Assignee: Bharat Viswanadham
> Priority: Major
> Labels: pull-request-available
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Started a Docker-based cluster with "ozone.om.ratis.enable" = true.
> The OM started with the Ratis backend.
>
> {noformat}
> om_1 | 2020-01-20 09:02:48,116 [main] INFO om.OzoneManagerStarter: registered UNIX signal handlers for [TERM, HUP, INT]
> om_1 | 2020-01-20 09:02:48,116 [main] INFO om.OzoneManagerStarter: registered UNIX signal handlers for [TERM, HUP, INT]
> om_1 | 2020-01-20 09:02:49,213 [main] INFO ha.OMHANodeDetails: ozone.om.internal.service.id is not defined, falling back to ozone.om.service.ids to find serviceID for OzoneManager if it is HA enabled cluster
> om_1 | 2020-01-20 09:02:49,279 [main] INFO ha.OMHANodeDetails: Configuration either no ozone.om.address set. Falling back to the default OM address om/172.18.0.2:9862
> om_1 | 2020-01-20 09:02:49,280 [main] INFO ha.OMHANodeDetails: OM Service ID is not set. Setting it to the default ID: omServiceIdDefault
> om_1 | 2020-01-20 09:02:49,294 [main] WARN server.ServerUtils: ozone.om.db.dirs is not configured. We recommend adding this setting. Falling back to ozone.metadata.dirs instead.
> om_1 | 2020-01-20 09:02:49,315 [main] WARN server.ServerUtils: ozone.om.db.dirs is not configured. We recommend adding this setting. Falling back to ozone.metadata.dirs instead.
> om_1 | 2020-01-20 09:02:50,268 [main] WARN server.ServerUtils: ozone.om.db.dirs is not configured. We recommend adding this setting. Falling back to ozone.metadata.dirs instead.
> om_1 | 2020-01-20 09:02:50,309 [main] INFO util.log: Logging initialized @3941ms
> om_1 | 2020-01-20 09:02:50,406 [main] INFO db.DBStoreBuilder: using custom profile for table: userTable
> om_1 | 2020-01-20 09:02:50,406 [main] INFO db.DBStoreBuilder: Using default column profile:DBProfile.DISK for Table:userTable
> om_1 | 2020-01-20 09:02:50,406 [main] INFO db.DBStoreBuilder: using custom profile for table: volumeTable
> om_1 | 2020-01-20 09:02:50,406 [main] INFO db.DBStoreBuilder: Using default column profile:DBProfile.DISK for Table:volumeTable
> om_1 | 2020-01-20 09:02:50,407 [main] INFO db.DBStoreBuilder: using custom profile for table: bucketTable
> om_1 | 2020-01-20 09:02:50,407 [main] INFO db.DBStoreBuilder: Using default column profile:DBProfile.DISK for Table:bucketTable
> om_1 | 2020-01-20 09:02:50,407 [main] INFO db.DBStoreBuilder: using custom profile for table: keyTable
> om_1 | 2020-01-20 09:02:50,407 [main] INFO db.DBStoreBuilder: Using default column profile:DBProfile.DISK for Table:keyTable
> om_1 | 2020-01-20 09:02:50,407 [main] INFO db.DBStoreBuilder: using custom profile for table: deletedTable
> om_1 | 2020-01-20 09:02:50,407 [main] INFO db.DBStoreBuilder: Using default column profile:DBProfile.DISK for Table:deletedTable
> om_1 | 2020-01-20 09:02:50,408 [main] INFO db.DBStoreBuilder: using custom profile for table: openKeyTable
> om_1 | 2020-01-20 09:02:50,408 [main] INFO db.DBStoreBuilder: Using default column profile:DBProfile.DISK for Table:openKeyTable
> om_1 | 2020-01-20 09:02:50,408 [main] INFO db.DBStoreBuilder: using custom profile for table: s3Table
> om_1 | 2020-01-20 09:02:50,408 [main] INFO db.DBStoreBuilder: Using default column profile:DBProfile.DISK for Table:s3Table
> om_1 | 2020-01-20 09:02:50,408 [main] INFO db.DBStoreBuilder: using custom profile for table: multipartInfoTable
> om_1 | 2020-01-20 09:02:50,409 [main] INFO db.DBStoreBuilder: Using default column profile:DBProfile.DISK for Table:multipartInfoTable
> om_1 | 2020-01-20 09:02:50,409 [main] INFO db.DBStoreBuilder: using custom profile for table: dTokenTable
> om_1 | 2020-01-20 09:02:50,409 [main] INFO db.DBStoreBuilder: Using default column profile:DBProfile.DISK for Table:dTokenTable
> om_1 | 2020-01-20 09:02:50,409 [main] INFO db.DBStoreBuilder: using custom profile for table: s3SecretTable
> om_1 | 2020-01-20 09:02:50,409 [main] INFO db.DBStoreBuilder: Using default column profile:DBProfile.DISK for Table:s3SecretTable
> om_1 | 2020-01-20 09:02:50,409 [main] INFO db.DBStoreBuilder: using custom profile for table: prefixTable
> om_1 | 2020-01-20 09:02:50,409 [main] INFO db.DBStoreBuilder: Using default column profile:DBProfile.DISK for Table:prefixTable
> om_1 | 2020-01-20 09:02:50,433 [main] INFO db.DBStoreBuilder: using custom profile for table: default
> om_1 | 2020-01-20 09:02:50,433 [main] INFO db.DBStoreBuilder: Using default column profile:DBProfile.DISK for Table:default
> om_1 | 2020-01-20 09:02:50,435 [main] INFO db.DBStoreBuilder: Using default options. DBProfile.DISK
> om_1 | 2020-01-20 09:02:50,620 [main] WARN server.ServerUtils: Storage directory for Ratis is not configured. It is a good idea to map this to an SSD disk. Falling back to ozone.metadata.dirs
> om_1 | 2020-01-20 09:02:50,647 [main] INFO ratis.OzoneManagerRatisServer: Instantiating OM Ratis server with GroupID: omServiceIdDefault and Raft Peers: om:9872
> om_1 | 2020-01-20 09:02:50,682 [main] INFO impl.RaftServerProxy: raft.rpc.type = GRPC (default)
> om_1 | 2020-01-20 09:02:50,762 [main] INFO grpc.GrpcFactory: PERFORMANCE WARNING: useCacheForAllThreads is true that may cause Netty to create a lot garbage objects and, as a result, trigger GC.
> om_1 | It is recommended to disable useCacheForAllThreads by setting -Dorg.apache.ratis.thirdparty.io.netty.allocator.useCacheForAllThreads=false in command line.
> om_1 | 2020-01-20 09:02:50,767 [main] INFO grpc.GrpcConfigKeys$Server: raft.grpc.server.port = 9872 (custom)
> om_1 | 2020-01-20 09:02:50,768 [main] INFO server.GrpcService: raft.grpc.message.size.max = 33554432 (custom)
> om_1 | 2020-01-20 09:02:50,770 [main] INFO server.RaftServerConfigKeys: raft.server.log.appender.buffer.byte-limit = 33554432 (custom)
> om_1 | 2020-01-20 09:02:50,771 [main] INFO server.GrpcService: raft.grpc.flow.control.window = 1MB (=1048576) (default)
> om_1 | 2020-01-20 09:02:50,771 [main] INFO server.RaftServerConfigKeys: raft.server.rpc.request.timeout = 3000ms (default)
> om_1 | 2020-01-20 09:02:51,110 [main] INFO server.RaftServerConfigKeys: raft.server.storage.dir = [/data/metadata/ratis] (custom)
> om_1 | 2020-01-20 09:02:51,117 [main] INFO impl.RaftServerProxy: a7718018-f8c6-4b70-90b7-aadd8f920710: addNew group-C5BA1605619E:[a7718018-f8c6-4b70-90b7-aadd8f920710:om:9872] returns group-C5BA1605619E:java.util.concurrent.CompletableFuture@2b0b7e5a[Not completed]
> om_1 | 2020-01-20 09:02:51,123 [main] INFO om.OzoneManager: OzoneManager Ratis server initialized at port 9872
>
> {noformat}
>
> Ran the "getserviceroles" command from the CLI; the "getserviceroles" API does not work:
>
> {noformat}
> /opt/hadoop/bin/ozone admin om getserviceroles -id=omServiceIdDefault
> Couldn't create RpcClient protocol exception:java.lang.IllegalArgumentException: Could not find any configured addresses for OM. Please configure the system with ozone.om.address
>     at org.apache.hadoop.ozone.om.ha.OMFailoverProxyProvider.loadOMClientConfigs(OMFailoverProxyProvider.java:138)
>     at org.apache.hadoop.ozone.om.ha.OMFailoverProxyProvider.<init>(OMFailoverProxyProvider.java:83)
>     at org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.<init>(OzoneManagerProtocolClientSideTranslatorPB.java:208)
>     at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:155)
>     at org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:190)
>     at org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:122)
>     at org.apache.hadoop.ozone.admin.om.OMAdmin.createClient(OMAdmin.java:59)
>     at org.apache.hadoop.ozone.admin.om.GetServiceRolesSubcommand.call(GetServiceRolesSubcommand.java:49)
>     at org.apache.hadoop.ozone.admin.om.GetServiceRolesSubcommand.call(GetServiceRolesSubcommand.java:32)
>     at picocli.CommandLine.execute(CommandLine.java:1173)
>     at picocli.CommandLine.access$800(CommandLine.java:141)
>     at picocli.CommandLine$RunLast.handle(CommandLine.java:1367)
>     at picocli.CommandLine$RunLast.handle(CommandLine.java:1335)
>     at picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:1243)
>     at picocli.CommandLine.parseWithHandlers(CommandLine.java:1526)
>     at picocli.CommandLine.parseWithHandler(CommandLine.java:1465)
>     at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:65)
>     at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:56)
>     at org.apache.hadoop.ozone.admin.OzoneAdmin.main(OzoneAdmin.java:66)
> Couldn't create RpcClient protocol
> {noformat}
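>
> Context for triage: the stack trace shows OMFailoverProxyProvider.loadOMClientConfigs failing because it cannot resolve any OM address for the given service ID. A minimal sketch of the HA address keys the client side appears to expect in ozone-site.xml (the node ID "om1" below is illustrative, not taken from this cluster; the service ID and host/port come from the logs above):
> {noformat}
> <property>
>   <name>ozone.om.service.ids</name>
>   <value>omServiceIdDefault</value>
> </property>
> <property>
>   <name>ozone.om.nodes.omServiceIdDefault</name>
>   <value>om1</value>
> </property>
> <property>
>   <name>ozone.om.address.omServiceIdDefault.om1</name>
>   <value>om:9862</value>
> </property>
> {noformat}
> In this docker cluster none of these keys (and no plain ozone.om.address) are set, so the server falls back to defaults while the client throws IllegalArgumentException.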
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]