pravin1406 opened a new issue, #8504: URL: https://github.com/apache/hudi/issues/8504
**Describe the problem you faced**

When I give wrong (non-existent) record keys or pre-combine keys to the Hudi Spark job, the job fails with the appropriate exception and the Spark context gets stopped. However, other Jetty server threads keep running in the background, which in turn keeps my OCP pod running; it never stops.

Points to note: I ran the job in overwrite mode on a path where another table already existed, together with a corresponding Hive table. Going through the logs I can see Hudi cleaning the older data and overwriting it successfully.

**To Reproduce**

Steps to reproduce the behavior (a rough sketch of the write is included after the environment description below):

1. Create a simple input file.
2. Create a Hudi Spark job that uses some random columns (non-existent in the input file) as the record key / pre-combine key.
3. Launch the Spark job on a Kubernetes cluster.

**Expected behavior**

When the write fails, the background threads (including the embedded Jetty server) should be shut down as well, so that the driver process exits and the pod terminates.

**Environment Description**

* Hudi version : 0.12.2
* Spark version : 3.2.0
* Hive version : 3.1.2_1
* Hadoop version : 3.2.1
* Storage (HDFS/S3/GCS..) : HDFS
* Running on Docker? (yes/no) : yes
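Below is a minimal sketch of the kind of job that hits this, not the exact job I run. The input path, table name, base path, and the non-existent column names (`non_existent_key`, `non_existent_ts`) are placeholders; only the Hudi write options are the standard ones.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

object HudiWrongKeyRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hudi-wrong-key-repro")
      .getOrCreate()

    // Simple input file; it only contains columns like id, name, ts.
    val df = spark.read.option("header", "true").csv("/tmp/input.csv")

    try {
      df.write.format("hudi")
        .option("hoodie.table.name", "repro_tbl")
        // These columns do not exist in df, so the write fails with an exception.
        .option("hoodie.datasource.write.recordkey.field", "non_existent_key")
        .option("hoodie.datasource.write.precombine.field", "non_existent_ts")
        .mode(SaveMode.Overwrite)
        .save("/tmp/hudi/repro_tbl")
    } finally {
      // Even after the failure and spark.stop(), the driver JVM (and hence the pod)
      // stays alive because Jetty server threads keep running in the background.
      spark.stop()
    }
  }
}
```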
