Re: flink k-means on hadoop cluster

2015-06-08 Thread Till Rohrmann
hdfs://ServerURI:8020/user/cloudera/inputs should do the trick. On Mon, Jun 8, 2015 at 12:41 PM Pa Rö wrote: > it works now, i have set the permissions for the yarn user, > but my flink app does not find the path. i tried the following path and got the same > exception: > file:///127.0.0.1:8020/user/clo
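
Till's point can be sketched concretely; the hostname `quickstart.cloudera` and port 8020 are assumptions (the usual NameNode address on the Cloudera quickstart VM), not values from the thread:

```shell
# Correct form: hdfs:// scheme, then the NameNode host:port, then the HDFS path.
INPUT="hdfs://quickstart.cloudera:8020/user/cloudera/inputs"
echo "$INPUT"

# Broken form from the thread: file:/// is the *local* filesystem scheme, so
# "127.0.0.1:8020" is read as a local directory name, not a NameNode address:
#   file:///127.0.0.1:8020/user/cloudera/inputs
```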

Re: flink k-means on hadoop cluster

2015-06-08 Thread Pa Rö
it works now, i have set the permissions for the yarn user, but my flink app does not find the path. i tried the following path and got the same exception: file:///127.0.0.1:8020/user/cloudera/inputs/ how must i set the path to hdfs? 2015-06-08 11:38 GMT+02:00 Till Rohrmann : > I assume that the path i

Re: flink k-means on hadoop cluster

2015-06-08 Thread Till Rohrmann
I assume that the paths inputs and outputs are not correct, since you get the error message *chown `output’: No such file or directory*. Try to provide the full path to the chown command, such as hdfs://ServerURI/path/to/your/directory. On Mon, Jun 8, 2015 at 11:23 AM Pa Rö wrote: > Hi Robert, > >
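
A hedged sketch of that chown call with a full path; running it as the `hdfs` superuser (present on Cloudera distributions) sidesteps the inode-ownership check, and the NameNode address is again an assumption:

```shell
# Hand the full HDFS URI to chown and apply it recursively; run as the
# HDFS superuser so ownership of the existing inodes does not matter.
sudo -u hdfs hadoop fs -chown -R yarn \
    hdfs://quickstart.cloudera:8020/user/cloudera/inputs \
    hdfs://quickstart.cloudera:8020/user/cloudera/outputs
```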

Re: flink k-means on hadoop cluster

2015-06-08 Thread Pa Rö
Hi Robert, i saw that you wrote to me on stackoverflow, thanks. now the path is right and i get the old exception: org.apache.flink.runtime.JobException: Creating the input splits caused an error: File file:/127.0.0.1:8020/home/user/cloudera/outputs/seed-1 does not exist or the user running Flin

Re: flink k-means on hadoop cluster

2015-06-04 Thread Robert Metzger
No, the permissions are still not correct; otherwise Flink would not complain. The error message of Flink is actually pretty precise: "Caused by: java.io.FileNotFoundException: File /user/cloudera/inputs does not exist or the user running Flink ('yarn') has insufficient permissions to access it."
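
One way to verify what the error message claims is to list the directory and inspect owner and mode; a sketch (needs a running HDFS, paths as in the thread):

```shell
# The first column is the mode, the third the owner. If 'yarn' is not the
# owner and the group/other bits don't grant read+execute, then Flink
# (running as 'yarn') cannot open /user/cloudera/inputs.
hadoop fs -ls /user/cloudera
hadoop fs -ls /user/cloudera/inputs
```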

Re: flink k-means on hadoop cluster

2015-06-04 Thread Pa Rö
here is my main class: public static void main(String[] args) { // load properties Properties pro = new Properties(); try { pro.load(FlinkMain.class.getResourceAsStream("/config.properties"));

Re: flink k-means on hadoop cluster

2015-06-04 Thread Pa Rö
i have changed the permissions for the cloudera user and tried the following command, and the files exist on hdfs ;) i set the files in my properties file like "flink.output=/user/cloudera/outputs/output_flink" i get the same exception again, maybe the problem has another reason? [cloudera@quickst

Re: flink k-means on hadoop cluster

2015-06-04 Thread Pa Rö
sorry, i see my yarn session ends before i can run my app; i must set write access for yarn, maybe this solves my problem. 2015-06-04 17:33 GMT+02:00 Pa Rö : > i start the yarn-session.sh with sudo > and then the flink run command with sudo, > i get the following exception: > > cloudera@quickstart bin]

Re: flink k-means on hadoop cluster

2015-06-04 Thread Pa Rö
i start the yarn-session.sh with sudo and then the flink run command with sudo, and i get the following exception: cloudera@quickstart bin]$ sudo ./flink run /home/cloudera/Desktop/ma-flink.jar log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory). lo

Re: flink k-means on hadoop cluster

2015-06-04 Thread Robert Metzger
I would recommend you to read the output of the commands you are entering more closely. bash-4.1$ hadoop fs -chmod 777 /user/cloudera/outputs chmod: changing permissions of '/user/cloudera/outputs': Permission denied. user=yarn is not the owner of inode=outputs Chmod is clearly stating that is wa

Re: flink k-means on hadoop cluster

2015-06-04 Thread Pa Rö
i try this: [cloudera@quickstart bin]$ sudo su yarn bash-4.1$ hadoop fs -chmod 777 /user/cloudera/outputs chmod: changing permissions of '/user/cloudera/outputs': Permission denied. user=yarn is not the owner of inode=outputs bash-4.1$ hadoop fs -chmod 777 /user/cloudera/inputs chmod: changing per

Re: flink k-means on hadoop cluster

2015-06-04 Thread Robert Metzger
As the output of the "hadoop" tool indicates, it expects two arguments; you only passed one (777). The second argument it is expecting is the path to the file you want to change. In your case, it is: hadoop fs -chmod 777 /user/cloudera/outputs The reason why hadoop fs -chmod 777 * does not work
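
Spelled out, with the recursive flag so files below the directories are covered too (a sketch; it must be run by a user allowed to change these directories, e.g. their owner or the HDFS superuser):

```shell
# chmod needs both a mode AND a path; -R recurses into the directories.
hadoop fs -chmod -R 777 /user/cloudera/inputs
hadoop fs -chmod -R 777 /user/cloudera/outputs
```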

Re: flink k-means on hadoop cluster

2015-06-04 Thread Pa Rö
i get the same exception: Using JobManager address from YARN properties quickstart.cloudera/ 127.0.0.1:53874 java.io.IOException: Mkdirs failed to create /user/cloudera/outputs 2015-06-04 17:09 GMT+02:00 Pa Rö : > bash-4.1$ hadoop fs -chmod 777 * > chmod: `config.sh': No such file or directory > c

Re: flink k-means on hadoop cluster

2015-06-04 Thread Pa Rö
bash-4.1$ hadoop fs -chmod 777 * chmod: `config.sh': No such file or directory chmod: `flink': No such file or directory chmod: `flink.bat': No such file or directory chmod: `jobmanager.sh': No such file or directory chmod: `pyflink2.sh': No such file or directory chmod: `pyflink3.sh': No such file

Re: flink k-means on hadoop cluster

2015-06-04 Thread Pa Rö
[cloudera@quickstart bin]$ sudo su yarn bash-4.1$ hadoop fs -chmod 777 -chmod: Not enough arguments: expected 2 but got 1 Usage: hadoop fs [generic options] -chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH... bash-4.1$ do you understand? 2015-06-04 17:04 GMT+02:00 Robert Metzger : > It looks like the user "yarn" which is running

Re: flink k-means on hadoop cluster

2015-06-04 Thread Robert Metzger
It looks like the user "yarn" which is running Flink doesn't have permission to access the files. Can you do "sudo su yarn" to become the "yarn" user? Then, you can do "hadoop fs -chmod 777" to make the files accessible for everyone. On Thu, Jun 4, 2015 at 4:59 PM, Pa Rö wrote: > okay, it's wo

Re: flink k-means on hadoop cluster

2015-06-04 Thread Pa Rö
okay, it works, but i get an exception: [cloudera@quickstart Desktop]$ cd flink-0.9-SNAPSHOT/bin/ [cloudera@quickstart bin]$ flink run /home/cloudera/Desktop/ma-flink.jar bash: flink: command not found [cloudera@quickstart bin]$ ./flink run /home/cloudera/Desktop/ma-flink.jar log4j:WARN No appenders c

Re: flink k-means on hadoop cluster

2015-06-04 Thread Robert Metzger
Once you've started the YARN session, you can submit a Flink job with "./bin/flink run <path/to/jar>". The jar file of your job doesn't need to be in HDFS. It has to be in the local file system and Flink will send it to all machines. On Thu, Jun 4, 2015 at 4:48 PM, Pa Rö wrote: > okay, now it run on my hado
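
The submission flow as a sketch; the session flag is an assumption based on the Flink 0.9 YARN docs linked earlier in the thread, and the jar path is the one Paul used:

```shell
# 1) Start a YARN session (-n = number of TaskManager containers to allocate).
./bin/yarn-session.sh -n 1

# 2) Submit the job. The jar lives on the LOCAL filesystem; Flink ships it
#    to the cluster machines itself, so nothing needs to be copied to HDFS.
./bin/flink run /home/cloudera/Desktop/ma-flink.jar
```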

Re: flink k-means on hadoop cluster

2015-06-04 Thread Pa Rö
okay, now it runs on my hadoop. how can i start my flink job? and where must the jar file be saved, in hdfs or as a local file? 2015-06-04 16:31 GMT+02:00 Robert Metzger : > Yes, you have to run these commands in the command line of the Cloudera VM. > > On Thu, Jun 4, 2015 at 4:28 PM, Pa Rö > wrote: >

Re: flink k-means on hadoop cluster

2015-06-04 Thread Robert Metzger
Yes, you have to run these commands in the command line of the Cloudera VM. On Thu, Jun 4, 2015 at 4:28 PM, Pa Rö wrote: > you mean run this command on terminal/shell and not define a hue job? > > 2015-06-04 16:25 GMT+02:00 Robert Metzger : > >> It should be certainly possible to run Flink on a

Re: flink k-means on hadoop cluster

2015-06-04 Thread Pa Rö
you mean i should run this command in a terminal/shell and not define a hue job? 2015-06-04 16:25 GMT+02:00 Robert Metzger : > It should certainly be possible to run Flink on a cloudera live VM > > I think these are the commands you need to execute: > > wget > http://stratosphere-bin.s3-website-us-east-1.am

Re: flink k-means on hadoop cluster

2015-06-04 Thread Robert Metzger
It should certainly be possible to run Flink on a Cloudera live VM. I think these are the commands you need to execute: wget http://stratosphere-bin.s3-website-us-east-1.amazonaws.com/flink-0.9-SNAPSHOT-bin-hadoop2.tgz tar xvzf flink-0.9-SNAPSHOT-bin-hadoop2.tgz cd flink-0.9-SNAPSHOT/ export HADO

Re: flink k-means on hadoop cluster

2015-06-04 Thread Pa Rö
hi robert, i think the problem is the hue api; i had the same problem with the spark submit script, but the new hue release has a spark submit api. i asked the group about the same problem with spark, no reply. i want to test my app on a local cluster before i run it on the big cluster, for that

Re: flink k-means on hadoop cluster

2015-06-04 Thread Robert Metzger
Hi Paul, why did running Flink from the regular scripts not work for you? I'm not an expert on Hue, I would recommend asking in the Hue user forum / mailing list: https://groups.google.com/a/cloudera.org/forum/#!forum/hue-user. On Thu, Jun 4, 2015 at 4:09 PM, Pa Rö wrote: > thanks, > now i wan

Re: flink k-means on hadoop cluster

2015-06-04 Thread Pa Rö
thanks, now i want to run my app on the cloudera live vm single node, how can i define my flink job with hue? i tried to run the flink script in the hdfs, it does not work. best regards, paul 2015-06-02 14:50 GMT+02:00 Robert Metzger : > I would recommend using HDFS. > For that, you need to specify the path

Re: flink k-means on hadoop cluster

2015-06-02 Thread Robert Metzger
I would recommend using HDFS. For that, you need to specify the paths like this: hdfs:///path/to/data. On Tue, Jun 2, 2015 at 2:48 PM, Pa Rö wrote: > nice, > > which file system i must use for the cluster? java.io or hadoop.fs or > flink? > > 2015-06-02 14:29 GMT+02:00 Robert Metzger : > >> Hi,

Re: flink k-means on hadoop cluster

2015-06-02 Thread Pa Rö
nice, which file system must i use for the cluster? java.io or hadoop.fs or flink? 2015-06-02 14:29 GMT+02:00 Robert Metzger : > Hi, > you can start Flink on YARN on the Cloudera distribution. > > See here for more: > http://ci.apache.org/projects/flink/flink-docs-master/setup/yarn_setup.html >

Re: flink k-means on hadoop cluster

2015-06-02 Thread Robert Metzger
Hi, you can start Flink on YARN on the Cloudera distribution. See here for more: http://ci.apache.org/projects/flink/flink-docs-master/setup/yarn_setup.html These are the commands you need to execute: wget http://stratosphere-bin.s3-website-us-east-1.amazonaws.com/flink-0.9-SNAPSHOT-bin-hadoop2.
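
End to end, the setup Robert describes might look like this; the `HADOOP_CONF_DIR` value is an assumption (the usual config location on a Cloudera VM), since the preview truncates before that line:

```shell
# Download and unpack the Hadoop-2 build of Flink 0.9-SNAPSHOT.
wget http://stratosphere-bin.s3-website-us-east-1.amazonaws.com/flink-0.9-SNAPSHOT-bin-hadoop2.tgz
tar xvzf flink-0.9-SNAPSHOT-bin-hadoop2.tgz
cd flink-0.9-SNAPSHOT/

# Tell Flink where the cluster's Hadoop configuration lives.
export HADOOP_CONF_DIR=/etc/hadoop/conf

# Start a YARN session with one TaskManager.
./bin/yarn-session.sh -n 1
```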

flink k-means on hadoop cluster

2015-06-02 Thread Pa Rö
hi community, i want to test my flink k-means on a hadoop cluster. i use the cloudera live distribution. how can i run flink on this cluster? maybe only the java dependencies are enough? best regards, paul