Re: Hadoop Error on ECS Fargate

2023-07-17 Thread Martijn Visser
Hi Mengxi Wang, Which Flink version are you using? Best regards, Martijn On Thu, Jul 13, 2023 at 3:21 PM Wang, Mengxi X via user <user@flink.apache.org> wrote: > Hi community, > > > > We got this kerberos error with Hadoop as file system on ECS Fargate > deployment. > > > > Caused by: org.ap

Re: Hadoop is not in the classpath/dependencies

2021-03-30 Thread Chesnay Schepler
This looks related to HDFS-12920; where Hadoop 2.X tries to read a duration from hdfs-default.xml expecting plain numbers, but in 3.x they also contain time units. On 3/30/2021 9:37 AM, Matthias Seiler wrote: Thank you all for the replies! I did as @Maminspapin suggested and indeed the prev

Re: Hadoop is not in the classpath/dependencies

2021-03-30 Thread Matthias Seiler
Thank you all for the replies! I did as @Maminspapin suggested and indeed the previous error disappeared, but now the exception is ``` java.io.IOException: Cannot instantiate file system for URI: hdfs://node-1:9000/flink //... Caused by: java.lang.NumberFormatException: For input string: "30s" //
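As Chesnay points out in this thread, the "30s" failure comes from a Hadoop 2.x client parsing a duration key from a Hadoop 3.x hdfs-default.xml, where default values carry time units. A commonly reported workaround (derived from HDFS-12920; the key name below is the one usually implicated and should be verified against your Hadoop version) is to pin the value as a plain number in hdfs-site.xml on the client side:

```xml
<!-- hdfs-site.xml: override the duration key that Hadoop 3.x ships with a
     unit suffix ("30s") so a Hadoop 2.x client can parse it.
     Key name per HDFS-12920; check your hdfs-default.xml for others. -->
<configuration>
  <property>
    <name>dfs.client.datanode-restart.timeout</name>
    <value>30</value> <!-- plain seconds, no "s" suffix -->
  </property>
</configuration>
```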

Re: Hadoop is not in the classpath/dependencies

2021-03-26 Thread Robert Metzger
Hey Matthias, Maybe the classpath contains hadoop libraries, but not the HDFS libraries? The "DistributedFileSystem" class needs to be accessible to the classloader. Can you check if that class is available? Best, Robert On Thu, Mar 25, 2021 at 11:10 AM Matthias Seiler < matthias.sei...@campus.t
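The check Robert suggests can be sketched as a small probe run inside the same JVM (e.g. from a test or a main submitted the same way as the job); this is a generic classloader check, not a Flink API:

```java
public class ClasspathCheck {
    // Returns true if the named class is visible to this JVM's classloader.
    static boolean isOnClasspath(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException | NoClassDefFoundError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The HDFS client class Robert mentions; if this prints "NOT found",
        // the hadoop-hdfs client jars are missing even if hadoop-common is there.
        String cls = "org.apache.hadoop.hdfs.DistributedFileSystem";
        System.out.println(cls + (isOnClasspath(cls) ? " found" : " NOT found"));
    }
}
```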

Re: Hadoop is not in the classpath/dependencies

2021-03-25 Thread Maminspapin
I downloaded the library (latest version) from here: https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-7.0/ and put it in the flink_home/lib directory. It helped. -- Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/

Re: Hadoop is not in the classpath/dependencies

2021-03-25 Thread Maminspapin
I have the same problem ...

Re: Hadoop Integration Link broken in downloads page

2021-03-10 Thread Till Rohrmann
Thanks a lot for reporting this problem Debraj. I've created a JIRA issue for it [1]. [1] https://issues.apache.org/jira/browse/FLINK-21723 Cheers, Till On Tue, Mar 9, 2021 at 5:28 AM Debraj Manna wrote: > Hi > > It appears the Hadoop Integration >

Re: Hadoop FS when running standalone

2020-07-16 Thread Lorenzo Nicora
Thanks Alessandro, I think I solved it. I cannot set any HADOOP_HOME as I have no Hadoop installed on the machine running my tests. But adding *org.apache.flink:flink-shaded-hadoop-2:2.8.3-10.0* as a compile dependency to the Maven profile building the standalone version fixed the issue. Lorenzo
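For reference, the dependency Lorenzo describes would look roughly like this in the Maven profile building the standalone version (coordinates as quoted in the thread; flink-shaded-hadoop was later dropped, so check Maven Central for the artifact matching your Flink version):

```xml
<!-- Shaded Hadoop bundled only for the standalone build; version from this
     thread, not necessarily right for newer Flink releases. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-shaded-hadoop-2</artifactId>
  <version>2.8.3-10.0</version>
</dependency>
```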

Re: Hadoop FS when running standalone

2020-07-16 Thread Alessandro Solimando
Hi Lorenzo, IIRC I had the same error message when trying to write snappified parquet on HDFS with a standalone fat jar. Flink could not "find" the hadoop native/binary libraries (specifically I think for me the issue was related to snappy), because my HADOOP_HOME was not (properly) set. I have n

Re: Hadoop user jar for flink 1.9 plus

2020-03-20 Thread Vishal Santoshi
Awesome, thanks! On Tue, Mar 17, 2020 at 11:14 AM Chesnay Schepler wrote: > You can download flink-shaded-hadoop from the downloads page: > https://flink.apache.org/downloads.html#additional-components > > On 17/03/2020 15:56, Vishal Santoshi wrote: > > We have been on flink 1.8.x on production

Re: Hadoop user jar for flink 1.9 plus

2020-03-17 Thread Chesnay Schepler
You can download flink-shaded-hadoop from the downloads page: https://flink.apache.org/downloads.html#additional-components On 17/03/2020 15:56, Vishal Santoshi wrote: We have been on flink 1.8.x on production and were planning to go to flink 1.9 or above. We have always used hadoop uber jar fr

Re: Error running an MR job on Hadoop

2019-03-04 Thread sam peng
Thanks for your reply, I fixed the problem by adding a new user. Root is not available. > On Mar 4, 2019, at 11:47 AM, sam peng <624645...@qq.com> wrote: > > > A question for everyone about running an MR job on Hadoop. > > We previously configured a single-node Hadoop, which ran fine. > > We have now moved Hadoop into the production environment, with the Hadoop directory mounted on a disk; Flume collects Kafka data normally. > > But running the MR job fails with this error: > > <11_28_35__0

Re: Hadoop compatibility and HBase bulk loading

2018-01-16 Thread Fabian Hueske
Looking at my previous mail, which mentions changes to the API, optimizer, and runtime code of the DataSet API, this would be a major and non-trivial effort, and would also require that a committer spend a good amount of time on it. 2018-01-16 10:07 GMT+01:00 Flavio Pompermaier : > Do you think is that c

Re: Hadoop compatibility and HBase bulk loading

2018-01-16 Thread Flavio Pompermaier
Do you think it is that complex to support? I think we can try to implement it if someone could give us some support (at least the big picture) On Tue, Jan 16, 2018 at 10:02 AM, Fabian Hueske wrote: > No, I'm not aware of anybody working on extending the Hadoop compatibility > support. > I'll a

Re: Hadoop compatibility and HBase bulk loading

2018-01-16 Thread Fabian Hueske
No, I'm not aware of anybody working on extending the Hadoop compatibility support. I'll also have no time to work on this any time soon :-( 2018-01-13 1:34 GMT+01:00 Flavio Pompermaier : > Any progress on this Fabian? HBase bulk loading is a common task for us > and it's very annoying and uncomf

Re: Hadoop compatibility and HBase bulk loading

2018-01-12 Thread Flavio Pompermaier
Any progress on this Fabian? HBase bulk loading is a common task for us and it's very annoying and uncomfortable to run a separate YARN job to accomplish it... On 10 Apr 2015 12:26, "Flavio Pompermaier" wrote: Great! That will be awesome. Thank you Fabian On Fri, Apr 10, 2015 at 12:14 PM, Fabia

Re: hadoop-free hdfs config

2018-01-11 Thread Till Rohrmann
Thanks for trying it out and letting us know. Cheers, Till On Thu, Jan 11, 2018 at 9:56 AM, Oleksandr Baliev wrote: > Hi Till, > > thanks for your reply and clarification! With RocksDBStateBackend btw the > same story, looks like a wrapper over FsStateBackend: > > 01/11/2018 09:27:22 Job execut

Re: hadoop-free hdfs config

2018-01-11 Thread Oleksandr Baliev
Hi Till, thanks for your reply and clarification! With RocksDBStateBackend btw the same story, looks like a wrapper over FsStateBackend: 01/11/2018 09:27:22 Job execution switched to status FAILING. org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implem

Re: hadoop-free hdfs config

2018-01-10 Thread Till Rohrmann
Hi Sasha, you're right that if you want to access HDFS from the user code only it should be possible to use the Hadoop free Flink version and bundle the Hadoop dependencies with your user code. However, if you want to use Flink's file system state backend as you did, then you have to start the Fli
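To illustrate Till's distinction: with a configuration like the sketch below (host, port, and paths are placeholders), the hdfs:// scheme is resolved by the Flink process itself, so the Hadoop classes must be on Flink's own classpath (e.g. via HADOOP_CLASSPATH), not merely bundled inside the user jar:

```yaml
# flink-conf.yaml: an HDFS-backed state backend is instantiated by the
# JobManager/TaskManager, not by user code, so Hadoop-free Flink cannot
# serve it even if the user jar ships Hadoop.
state.backend: filesystem
state.checkpoints.dir: hdfs://namenode:9000/flink/checkpoints
```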

Re: hadoop error with flink mesos on startup

2017-12-12 Thread Eron Wright
Thanks for investigating this, Jared. I would summarize it as Flink-on-Mesos cannot be used in Hadoop-free mode in Flink 1.4.0. I filed an improvement bug to support this scenario: FLINK-8247 On Tue, Dec 12, 2017 at 11:46 AM, Jared Stehler < jared.steh...@intellifylearning.com> wrote: > I had

Re: hadoop error with flink mesos on startup

2017-12-12 Thread Jared Stehler
I had been excluding all transitive dependencies from the lib dir; it seems to be working when I added the following deps: commons-configuration:commons-configuration:1.7 and commons-lang:commons-lang:2.6 -- Jared Stehler Chief Architect - In
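Reconstructed as a POM fragment, the two dependencies Jared added back (versions are the ones quoted in the thread) would be:

```xml
<!-- Transitive Hadoop dependencies that flink-shaded-hadoop2 expects
     at runtime; re-added after excluding all transitives. -->
<dependency>
  <groupId>commons-configuration</groupId>
  <artifactId>commons-configuration</artifactId>
  <version>1.7</version>
</dependency>
<dependency>
  <groupId>commons-lang</groupId>
  <artifactId>commons-lang</artifactId>
  <version>2.6</version>
</dependency>
```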

Re: hadoop error with flink mesos on startup

2017-12-12 Thread Jared Stehler
The class is there; this issue is a static initializer error, probably from other missing classes. I’ll try using the uber jar to see if that helps any, and will report back. I’ve included the shaded jar as a maven dependency: org.apache.flink flink-shaded-hadoop2 ${flink

Re: hadoop error with flink mesos on startup

2017-12-12 Thread Chesnay Schepler
Could you look into the flink-shaded-hadoop jar to check whether the missing class is actually contained? Where did the flink-shaded-hadoop jar come from? I'm asking because when building flink-dist from source the jar is called flink-shaded-hadoop2-uber-1.4.0.jar, which does indeed contain th

Re: hadoop

2017-08-16 Thread Ted Yu
Can you check the following config in yarn-site.xml ? yarn.resourcemanager.proxy-user-privileges.enabled (true) Cheers On Wed, Aug 16, 2017 at 4:48 PM, Raja.Aravapalli wrote: > > > Hi, > > > > I triggered a Flink yarn-session on a running Hadoop cluster… and > triggering streaming application
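The setting Ted suggests checking, written out as a yarn-site.xml fragment:

```xml
<!-- yarn-site.xml: lets the ResourceManager renew delegation tokens on
     behalf of proxy users, which matters for long-running sessions. -->
<property>
  <name>yarn.resourcemanager.proxy-user-privileges.enabled</name>
  <value>true</value>
</property>
```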

Re: hadoop

2017-08-16 Thread Will Du
Is the kerberos token expired without renewing? > On Aug 16, 2017, at 7:48 PM, Raja.Aravapalli > wrote: > > > Hi, > > I triggered a Flink yarn-session on a running Hadoop cluster… and triggering > streaming application on that. > > But, I see after few days of running without any issues
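If ticket expiry is indeed the cause, the usual remedy is keytab-based login so the long-running job can re-authenticate itself; a minimal sketch (keytab path and principal are placeholders):

```yaml
# flink-conf.yaml: authenticate from a keytab instead of the ticket cache,
# so credentials survive the Kerberos ticket lifetime.
security.kerberos.login.use-ticket-cache: false
security.kerberos.login.keytab: /path/to/flink.keytab
security.kerberos.login.principal: flink-user@EXAMPLE.COM
```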

Re: Hadoop 2.7.3

2017-02-10 Thread Dean Wampler
I don't have it any more, unfortunately. To be clear, I don't think it was Flink related, but a collision between a Hadoop security library calling into a Google Guava library, where a method was missing on CacheBuilder in the latter. Also, to add to the irritation, it only happened in my OSX envir

Re: Hadoop 2.7.3

2017-02-10 Thread Ted Yu
Dean: Can you pastebin the stack trace around the MethodMissing error ? If there was no stack trace, please tell us what the log said. Thanks On Fri, Feb 10, 2017 at 2:26 PM, Dean Wampler wrote: > This is completely unrelated, but I just debugged a MethodMissing error in > an application s

Re: Hadoop 2.7.3

2017-02-10 Thread Dean Wampler
This is completely unrelated, but I just debugged a MethodMissing error in an application stack, where it doesn't occur with Hadoop 2.7.2, but does occur with 2.7.3 (yeah!). I would dig into the appropriate logs to see if an underlying exception is being thrown and you're not seeing enough detail.