Stephan, we are on 1.4.2.
Thanks,

-- Ashish 
 
On Mon, Mar 26, 2018 at 7:38 AM, Stephan Ewen <se...@apache.org> wrote:

If you are on Flink 1.4.0 or 1.4.1, please check if you accidentally have Hadoop
in your application jar. That can mess things up with child-first classloading.
1.4.2 should handle Hadoop properly in any case.
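A quick way to check is to list the jar's contents and look for bundled Hadoop
classes, e.g. (the jar name below is just a placeholder):

    # look for Hadoop classes packaged into the application jar
    jar tf my-flink-job.jar | grep 'org/apache/hadoop' | head

If that prints anything, the Hadoop dependencies should be excluded or marked as
provided when the jar is built.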
On Sun, Mar 25, 2018 at 3:26 PM, Ashish Pokharel <ashish...@yahoo.com> wrote:

Hi Ken,
Yes - we are on 1.4. Thanks for that link - it certainly now explains how 
things are working :) 
We currently don't have the HADOOP_CLASSPATH env var set, and the "hadoop classpath"
command basically points to HDP 2.6 locations (HDP = Hortonworks Data Platform).
My best guess right now is that HDP 2.6 back-ported some 2.9 changes into their
distro. This is on my list to get to the bottom of (hopefully no hiccups till
prod) - we double-checked our Salt Orchestration packages, which were used to
build the cluster, but couldn't find a reference to Hadoop 2.9. For now, we are
moving on with our testing to prepare for deployment with the Hadoop-free
version, which uses "hadoop classpath" as described in FLINK-7477.
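Roughly, the Hadoop-free setup on the edge node boils down to something like
this (exact jar locations depend on the HDP install):

    # let the Hadoop-free Flink client pick up the cluster's own Hadoop jars
    export HADOOP_CLASSPATH=`hadoop classpath`
    echo $HADOOP_CLASSPATH   # should list the HDP 2.6 locations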
Thanks, Ashish

On Mar 23, 2018, at 12:31 AM, Ken Krugler <kkrugler_li...@transpac.com> wrote:
Hi Ashish,
Are you using Flink 1.4? If so, what does the “hadoop classpath” command return 
from the command line where you’re trying to start the job?
Asking because I'd run into issues with
https://issues.apache.org/jira/browse/FLINK-7477, where I had an old version of
Hadoop being referenced by the "hadoop" command.
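A quick way to check which Hadoop the client actually picks up is, for example:

    # which hadoop binary is on the PATH, and what does it report?
    which hadoop
    hadoop version
    hadoop classpath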
— Ken


On Mar 22, 2018, at 7:05 PM, Ashish Pokharel <ashish...@yahoo.com> wrote:
Hi All,
Looks like we are out of the woods for now (so we think) - we went with the
Hadoop-free version and relied on the client libraries on the edge node.
However, I am still not very confident, as I started digging into that stack as
well and realized what Till pointed out (the trace leads to a class that is part
of 2.9). I did dig around env variables and nothing was set. This is a brand-new
cluster installed a week back, and our team is literally the first hands on
deck. I will fish around and see if Hortonworks back-ported something for HDP
(the dots are still not completely connected, but nonetheless, we have a test
session and app running in our brand-new Prod).
Thanks, Ashish

On Mar 22, 2018, at 4:47 AM, Till Rohrmann <trohrm...@apache.org> wrote:
Hi Ashish,
the class `RequestHedgingRMFailoverProxyProvider` was only introduced with
Hadoop 2.9.0. My suspicion is thus that you start the client with some Hadoop
2.9.0 dependencies on the class path. Could you please check in the client logs
what's on its class path? Maybe you could also share the logs with us.
Please also check whether HADOOP_CLASSPATH is set to something suspicious.
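For example, something along these lines on the client machine would show where
that class could be coming from (/usr/hdp is just the usual HDP location):

    # is HADOOP_CLASSPATH set to anything unexpected?
    echo $HADOOP_CLASSPATH
    # which jars under the HDP install contain the 2.9-only class?
    find /usr/hdp -name '*.jar' 2>/dev/null | while read j; do
        unzip -l "$j" 2>/dev/null | grep -q RequestHedgingRMFailoverProxyProvider && echo "$j"
    done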
Thanks a lot!
Cheers,
Till
On Wed, Mar 21, 2018 at 6:25 PM, ashish pok <ashish...@yahoo.com> wrote:

Hi Piotrek,
At this point we are simply trying to start a YARN session. 
BTW, we are on Hortonworks HDP 2.6, which is on Hadoop 2.7, if anyone has
experienced similar issues.
We actually pulled 2.6 binaries for the heck of it and ran into the same issues.
I guess we are left with getting the Hadoop-free binaries and setting
HADOOP_CLASSPATH then?
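That is, after pointing HADOOP_CLASSPATH at the cluster's Hadoop, start the
session with something like (1.4-style flags; container count and memory values
are just placeholders):

    ./bin/yarn-session.sh -n 2 -jm 1024 -tm 2048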

-- Ashish 
 
On Wed, Mar 21, 2018 at 12:03 PM, Piotr Nowojski <pi...@data-artisans.com> wrote:

Hi,
> Does a simple word count example work on the cluster after the upgrade?
If not, maybe your job is pulling some dependency that’s causing this version 
conflict?
Piotrek


On 21 Mar 2018, at 16:52, ashish pok <ashish...@yahoo.com> wrote:
Hi Piotrek,
Yes, this is a brand new Prod environment. 2.6 was in our lab.
Thanks,

-- Ashish 
 
On Wed, Mar 21, 2018 at 11:39 AM, Piotr Nowojski <pi...@data-artisans.com> wrote:

Hi,
Have you replaced all of your old Flink binaries with freshly downloaded Hadoop
2.7 versions? Are you sure that something hasn't gotten mixed up in the process?
Does a simple word count example work on the cluster after the upgrade?
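For example, something like this against YARN, using the bundled example (exact
jar path and flags depend on the Flink version and layout):

    ./bin/flink run -m yarn-cluster -yn 1 ./examples/batch/WordCount.jar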
Piotrek


On 21 Mar 2018, at 16:11, ashish pok <ashish...@yahoo.com> wrote:
Hi All,
We ran into a roadblock in our new Hadoop environment, migrating from 2.6 to 
2.7. It was supposed to be an easy lift to get a YARN session, but it doesn't
seem like it :) We definitely are using 2.7 binaries, but it looks like there is
a call here to a private method, which screams runtime incompatibility.
Has anyone seen this and have any pointers?
Thanks, Ashish

Exception in thread "main" java.lang.IllegalAccessError: tried to access method
org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider.getProxyInternal()Ljava/lang/Object;
from class org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider
        at org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider.init(RequestHedgingRMFailoverProxyProvider.java:75)
        at org.apache.hadoop.yarn.client.RMProxy.createRMFailoverProxyProvider(RMProxy.java:163)
        at org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:94)
        at org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:72)
        at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:187)
        at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
        at org.apache.flink.yarn.AbstractYarnClusterDescriptor.getYarnClient(AbstractYarnClusterDescriptor.java:314)
        at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deployInternal(AbstractYarnClusterDescriptor.java:417)
        at org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploySessionCluster(AbstractYarnClusterDescriptor.java:367)
        at org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:679)
        at org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:514)
        at org.apache.flink.yarn.cli.FlinkYarnSessionCli$1.call(FlinkYarnSessionCli.java:511)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
        at org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:511)



  


  






--------------------------------------------
http://about.me/kkrugler
+1 530-210-6378




  
