[ 
https://issues.apache.org/jira/browse/HDDS-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17758400#comment-17758400
 ] 

Tejaskriya Madhan commented on HDDS-6888:
-----------------------------------------

This might be an outdated issue. I followed the steps mentioned above and did 
not face the problem; the keys were created successfully, without an NPE being 
raised in the shutdown hook:
{code:java}
bash-4.2$ ozone freon randomkeys --numOfVolumes=1 --numOfBuckets=1 
--numOfKeys=2 --keySize=1024000
2023-08-24 07:09:27,951 [main] INFO impl.MetricsConfig: Loaded properties from 
hadoop-metrics2.properties
2023-08-24 07:09:28,009 [main] INFO impl.MetricsSystemImpl: Scheduled Metric 
snapshot period at 10 second(s).
2023-08-24 07:09:28,010 [main] INFO impl.MetricsSystemImpl: ozone-freon metrics 
system started
2023-08-24 07:09:28,385 [main] INFO freon.RandomKeyGenerator: Number of 
Threads: 10
2023-08-24 07:09:28,398 [main] INFO freon.RandomKeyGenerator: Number of 
Volumes: 1.
2023-08-24 07:09:28,398 [main] INFO freon.RandomKeyGenerator: Number of Buckets 
per Volume: 1.
2023-08-24 07:09:28,398 [main] INFO freon.RandomKeyGenerator: Number of Keys 
per Bucket: 2.
2023-08-24 07:09:28,398 [main] INFO freon.RandomKeyGenerator: Key size: 1024000 
bytes
2023-08-24 07:09:28,398 [main] INFO freon.RandomKeyGenerator: Buffer size: 4096 
bytes
2023-08-24 07:09:28,398 [main] INFO freon.RandomKeyGenerator: validateWrites : 
false
2023-08-24 07:09:28,398 [main] INFO freon.RandomKeyGenerator: Number of 
Validate Threads: 1
2023-08-24 07:09:28,398 [main] INFO freon.RandomKeyGenerator: cleanObjects : 
false
2023-08-24 07:09:28,408 [main] INFO freon.RandomKeyGenerator: Starting progress 
bar Thread. 0.00% |?                                                            
                                        |  0/2 Time: 0:00:00|  2023-08-24 
07:09:28,440 [pool-1-thread-1] INFO rpc.RpcClient: Creating Volume: 
vol-0-57483, with hadoop as owner and space quota set to -1 bytes, counts quota 
set to -1
2023-08-24 07:09:28,475 [pool-1-thread-3] INFO rpc.RpcClient: Creating Bucket: 
vol-0-57483/bucket-0-46067, with server-side default bucket layout, hadoop as 
owner, Versioning false, Storage Type set to DISK and Encryption set to false, 
Replication Type set to server-side default replication type, Namespace Quota 
set to -1, Space Quota set to -1 
2023-08-24 07:09:28,790 [pool-1-thread-2] WARN impl.MetricsSystemImpl: 
ozone-freon metrics system already initialized!
2023-08-24 07:09:28,956 [pool-1-thread-4] INFO metrics.MetricRegistries: Loaded 
MetricRegistries class org.apache.ratis.metrics.impl.MetricRegistriesImpl
 100.00% 
|█████████████████████████████████████████████████████████████████████████████████████████████████████|
  2/2 Time: 0:00:01|  
2023-08-24 07:09:36,427 [Thread-15] WARN grpc.GrpcUtil: Timed out gracefully 
shutting down connection: 
ManagedChannelOrphanWrapper{delegate=ManagedChannelImpl{logId=1, 
target=172.18.0.13:9858}}. ***************************************************
Status: Success
Git Base Revision: 1be78238728da9266a4f88195058f08fd012bf9c
Number of Volumes created: 1
Number of Buckets created: 1
Number of Keys added: 2
Average Time spent in volume creation: 00:00:00,004
Average Time spent in bucket creation: 00:00:00,001
Average Time spent in key creation: 00:00:00,012
Average Time spent in key write: 00:00:00,069
Total bytes written: 2048000
Total Execution time: 00:00:08,383
*************************************************** {code}
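For anyone revisiting this: the stack trace in the description below points at 
printStats() dereferencing state that is only created once a run actually 
starts, so the shutdown hook can hit a null field if the tool exits before that 
happens. Below is a minimal, self-contained sketch of that failure mode and of 
the kind of null guard that avoids it; the class and field names are made up 
for illustration and are not the actual RandomKeyGenerator code.
{code:java}
// Hypothetical sketch (not Ozone code): a shutdown hook that prints run
// statistics will NPE if it fires before those statistics were initialized.
public class ShutdownStatsSketch {

  // In RandomKeyGenerator these would be the timers/counters created while a
  // run is starting up; a single object stands in for them here.
  private Object keyWriteStats; // stays null until a run actually starts

  private void printStats() {
    // Defensive guard: if the hook fires before the run initialized its
    // stats, skip printing instead of dereferencing a null field.
    if (keyWriteStats == null) {
      System.out.println("Run never started; nothing to report.");
      return;
    }
    System.out.println("Stats: " + keyWriteStats);
  }

  private void addShutdownHook() {
    // Register the same kind of hook the stack trace shows: it calls
    // printStats() when the JVM shuts down.
    Runtime.getRuntime().addShutdownHook(new Thread(this::printStats));
  }

  public static void main(String[] args) {
    ShutdownStatsSketch generator = new ShutdownStatsSketch();
    generator.addShutdownHook();
    // Exiting here without ever assigning keyWriteStats reproduces the
    // reported scenario: without the null check above, the hook would throw
    // a NullPointerException during shutdown.
  }
}
{code}
With a guard like this, an early exit degrades to a no-op in the hook instead 
of a failed ShutdownHook warning, which is consistent with the clean shutdown 
seen in the run above.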
I think it can be resolved.

> NPE raised by freon RandomKeys on ShutdownHook()
> ------------------------------------------------
>
>                 Key: HDDS-6888
>                 URL: https://issues.apache.org/jira/browse/HDDS-6888
>             Project: Apache Ozone
>          Issue Type: Bug
>            Reporter: Neil Joshi
>            Assignee: Tejaskriya Madhan
>            Priority: Major
>
> Freon load testing with randomkeys throws a NullPointerException in the 
> shutdown hook when RandomKeyGenerator.printStats is called. 
> The issue can be reproduced on the docker development cluster, in this case 
> ozone-ha, as follows:
> hadoop-ozone/dist/target/ozone-.../compose/ozone-ha$ docker-compose up -d 
> --scale datanode=3
> $ docker-compose exec scm1 bash
> bash-4.2$ ozone freon randomkeys --numOfVolumes=1 --numOfBuckets=1 
> --numOfKeys=2 --keySize=1024000
> {code:java}
> Status: Success
> Git Base Revision: a3b9c37a397ad4188041dd80621bdeefc46885f2
> Number of Volumes created: 1
> Number of Buckets created: 1
> Number of Keys added: 2
> 2022-06-14 23:57:38,841 [Thread-3] WARN util.ShutdownHookManager: 
> ShutdownHook 'RandomKeyGenerator$$Lambda$124/0x0000000840276840' failed, 
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>     at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
>     at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
>     at 
> org.apache.hadoop.ozone.util.ShutdownHookManager.executeShutdown(ShutdownHookManager.java:132)
>     at 
> org.apache.hadoop.ozone.util.ShutdownHookManager$1.run(ShutdownHookManager.java:103)
> Caused by: java.lang.NullPointerException
>     at 
> org.apache.hadoop.ozone.freon.RandomKeyGenerator.printStats(RandomKeyGenerator.java:487)
>     at 
> org.apache.hadoop.ozone.freon.RandomKeyGenerator.lambda$addShutdownHook$0(RandomKeyGenerator.java:399)
>     at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>     at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>     at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>     at java.base/java.lang.Thread.run(Thread.java:829){code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
