[https://issues.apache.org/jira/browse/FLINK-21672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17818182#comment-17818182]

david radley edited comment on FLINK-21672 at 2/19/24 9:51 AM:
---------------------------------------------------------------

Hi [~martijnvisser] 

Here is a list of sun.* classes I have found in core Flink.
 # The one that I reported in StickyAllocationAndLocalRecoveryTestJob. This 
test code uses the sun management class to get the current PID. We have some 
options:
 ## we change the constructor so it is driven reflectively, so there is no 
compile error. This change would allow the test to compile.
 ## To have this test run on Semeru we could use logic like Datadog's and try to 
reflectively load alternative classes. Something like 
[https://github.com/DataDog/dd-trace-java/blob/aee3ca59c6a05233f4295552f2ede80bc4fc[…]agent/src/main/java/datadog/trace/bootstrap/AgentBootstrap.java|https://github.com/DataDog/dd-trace-java/blob/aee3ca59c6a05233f4295552f2ede80bc4fc%5B%E2%80%A6%5Dagent/src/main/java/datadog/trace/bootstrap/AgentBootstrap.java]. 
I see we could reflectively drive 
com.ibm.lang.management.RuntimeMXBean.getProcessID() if the sun class is not 
present.
 ## We fix this properly at Flink v2, using either:
 ### methods introduced at Java 10: _java.lang.management.RuntimeMXBean runtime = java.lang.management.ManagementFactory.getRuntimeMXBean(); runtime.getPid();_
 ### or the Java 9 approach: _long pid = ProcessHandle.current().pid();_
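The two JDK-supported variants in option 1.3 can be sketched as a single helper. This is a sketch only; _PidUtil_ is a hypothetical class name for illustration, not existing Flink code:

```java
// Sketch of option 1.3: vendor-neutral PID lookup with no sun.* imports.
// PidUtil is a hypothetical name, not existing Flink code.
public class PidUtil {

    // Java 9+: works identically on HotSpot and OpenJ9/Semeru.
    static long pidViaProcessHandle() {
        return ProcessHandle.current().pid();
    }

    // Java 10+: the same value via the standard management bean.
    static long pidViaRuntimeMXBean() {
        java.lang.management.RuntimeMXBean runtime =
                java.lang.management.ManagementFactory.getRuntimeMXBean();
        return runtime.getPid();
    }

    public static void main(String[] args) {
        System.out.println("pid=" + pidViaProcessHandle());
    }
}
```

Either form removes the sun.management.VMManagement reflection entirely, so the test would compile and run on any Java 10+ JDK regardless of vendor.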

      2. Core Flink MemorySegment.java uses sun.misc.Unsafe. I see 
[https://cr.openjdk.org/~psandoz/dv14-uk-paul-sandoz-unsafe-the-situation.pdf]. 
I wonder if we can remove Unsafe at Flink v2; I am not sure how widely used 
_off-heap unsafe_ memory is (also Flink v2 is changing how memory is being 
handled). I am not seeing an alternative.

      3. I see _SignalHandler_ and _TestSignalHandler_ use sun.misc.Signal. This 
seems to have come from inherited Hadoop implementations.

      4. There are 2 imports of sun.security.krb5.KrbException, which can be 
thrown when calling sun.security.krb5.Config.refresh().
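Item 4 could also be driven reflectively, in the spirit of option 1.1, so that nothing imports sun.security.* at compile time. A hedged sketch; the helper name and the fall-back behaviour are my assumptions, not existing Flink code:

```java
// Sketch: calling sun.security.krb5.Config.refresh() reflectively so the file
// compiles on any JDK with no sun.security.* imports.
// Krb5RefreshSketch is a hypothetical name, not existing Flink code.
public class Krb5RefreshSketch {

    /** @return true if the refresh ran, false if the sun class is unavailable. */
    static boolean tryRefreshKrb5Config() {
        try {
            Class<?> config = Class.forName("sun.security.krb5.Config");
            config.getMethod("refresh").invoke(null);
            return true;
        } catch (ReflectiveOperationException e) {
            // Class missing, module not opened, or a wrapped KrbException from
            // refresh(): treat all of these as "no refresh possible".
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("krb5 refresh attempted: " + tryRefreshKrb5Config());
    }
}
```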

 

I would like to implement 1.2 or, if that is not acceptable, 1.1. This would 
really help us in the short term, as we could at least build with skipTests on 
Semeru.

 

Sun usages 2 and 3 would need some consensus in the community, as it seems we 
would be removing capability unless we can find an alternative. The 
sun.security references are used when testing Hadoop with Kerberos; I have not 
looked into them.

 

 


> End to end tests (streaming) aren't Java vendor neutral (sun.management bean 
> used)
> ----------------------------------------------------------------------------------
>
>                 Key: FLINK-21672
>                 URL: https://issues.apache.org/jira/browse/FLINK-21672
>             Project: Flink
>          Issue Type: Improvement
>          Components: Tests
>            Reporter: Adam Roberts
>            Assignee: david radley
>            Priority: Minor
>
> Hi everyone, I have been looking to run the tests for Flink using an 
> AdoptOpenJDK 11 distribution (so the latest for Linux, x86-64 specifically) 
> and I see
>  
>  [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.8.0:compile 
> (default-compile) on project flink-local-recovery-and-allocation-test: 
> Compilation failure: Compilation failure: 
>  [ERROR] 
> /var/home/core/flink/flink-end-to-end-tests/flink-local-recovery-and-allocation-test/src/main/java/org/apache/flink/streaming/tests/StickyAllocationAndLocalRecoveryTestJob.java:[416,23]
>  cannot find symbol
>  [ERROR] symbol: class VMManagement
>  [ERROR] location: package sun.management
>  [ERROR] 
> /var/home/core/flink/flink-end-to-end-tests/flink-local-recovery-and-allocation-test/src/main/java/org/apache/flink/streaming/tests/StickyAllocationAndLocalRecoveryTestJob.java:[416,59]
>  cannot find symbol
>  [ERROR] symbol: class VMManagement
>  [ERROR] location: package sun.management
>  [ERROR] -> [Help 1]
>   
> my guess is that AdoptOpenJDK's class-library simply doesn't have this 
> package and we should use a more neutral one if that's available - I went 
> with 
> [https://adoptopenjdk.net/releases.html?variant=openjdk11&jvmVariant=openj9] 
> personally (OpenJ9 being the name for IBM's open-sourced J9 JVM), but I 
> wonder if that has its own platform specific bean as well; I haven't worked 
> on IBM's distribution of Java for almost seven years* but hopefully someone 
> may have more insight so you don't need to be using OpenJDK backed by HotSpot 
> to run said tests. It would be helpful if we didn't need an (if vendor == IBM 
> do this || vendor == Oracle do this ... etc) statement, so you can run the 
> tests no matter where you're getting Java from. 
>  
> *full disclaimer: I work at IBM and helped create AdoptOpenJDK, and used to 
> work in its Java team... I've honestly forgotten if we have a vendor-neutral 
> bean available and now work on something totally different!
> Cheers!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
