No, the permissions are still not correct; otherwise, Flink would not
complain.
The error message from Flink is actually pretty precise: Caused by:
java.io.FileNotFoundException: File /user/cloudera/inputs does not exist or
the user running Flink ('yarn') has insufficient permissions to access it.
I start yarn-session.sh with sudo
and then run the flink run command with sudo,
and I get the following exception:
[cloudera@quickstart bin]$ sudo ./flink run
/home/cloudera/Desktop/ma-flink.jar
log4j:WARN No appenders could be found for logger
(org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
Here is my main class:
public static void main(String[] args) {
    // load properties
    Properties pro = new Properties();
    try {
        pro.load(FlinkMain.class.getResourceAsStream("/config.properties"));
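As a side note, a minimal self-contained sketch of this classpath loading with a null check (the class name PropsDemo and the helper loadOrFail are made up for this example; getResourceAsStream returns null when the resource is missing, which otherwise surfaces as a confusing NullPointerException inside Properties.load):

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class PropsDemo {
    // Hypothetical helper: load a properties file from the classpath,
    // failing loudly if the resource is not found.
    static Properties loadOrFail(String resource) throws IOException {
        try (InputStream in = PropsDemo.class.getResourceAsStream(resource)) {
            if (in == null) {
                throw new IOException("resource not found on classpath: " + resource);
            }
            Properties p = new Properties();
            p.load(in);
            return p;
        }
    }

    public static void main(String[] args) {
        try {
            Properties p = loadOrFail("/config.properties");
            System.out.println("loaded " + p.size() + " entries");
        } catch (IOException e) {
            System.out.println("failed: " + e.getMessage());
        }
    }
}
```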
I have changed the permissions as the cloudera user and tried the following
command.
And the files exist on HDFS ;) I set the paths in my properties file like:
flink.output=/user/cloudera/outputs/output_flink
I get the same exception again; maybe the problem has another cause?
Okay, it works now, but I get an exception:
[cloudera@quickstart Desktop]$ cd flink-0.9-SNAPSHOT/bin/
[cloudera@quickstart bin]$ flink run /home/cloudera/Desktop/ma-flink.jar
bash: flink: command not found
[cloudera@quickstart bin]$ ./flink run /home/cloudera/Desktop/ma-flink.jar
log4j:WARN No appenders
It looks like the user yarn, which is running Flink, doesn't have
permission to access the files.
Can you do sudo su yarn to become the yarn user? Then, you can do
hadoop fs -chmod 777 to make the files accessible to everyone.
On Thu, Jun 4, 2015 at 4:59 PM, Pa Rö paul.roewer1...@googlemail.com
Once you've started the YARN session, you can submit a Flink job with
./bin/flink run pathToYourJar.
The jar file of your job doesn't need to be in HDFS; it has to be in the
local file system, and Flink will send it to all machines.
On Thu, Jun 4, 2015 at 4:48 PM, Pa Rö
[cloudera@quickstart bin]$ sudo su yarn
bash-4.1$ hadoop fs -chmod 777
-chmod: Not enough arguments: expected 2 but got 1
Usage: hadoop fs [generic options] -chmod [-R] MODE[,MODE]... | OCTALMODE
PATH...
bash-4.1$
Do you understand?
2015-06-04 17:04 GMT+02:00 Robert Metzger rmetz...@apache.org:
Hi Robert,
I think the problem is the Hue API.
I had the same problem with the Spark submit script,
but the new Hue release has a Spark submit API.
I asked the group about the same problem with Spark; no reply.
I want to test my app on the local cluster before I run it on the big cluster,
for
bash-4.1$ hadoop fs -chmod 777 *
chmod: `config.sh': No such file or directory
chmod: `flink': No such file or directory
chmod: `flink.bat': No such file or directory
chmod: `jobmanager.sh': No such file or directory
chmod: `pyflink2.sh': No such file or directory
chmod: `pyflink3.sh': No such
I tried this:
[cloudera@quickstart bin]$ sudo su yarn
bash-4.1$ hadoop fs -chmod 777 /user/cloudera/outputs
chmod: changing permissions of '/user/cloudera/outputs': Permission denied.
user=yarn is not the owner of inode=outputs
bash-4.1$ hadoop fs -chmod 777 /user/cloudera/inputs
chmod: changing
Hi Paul,
why did running Flink from the regular scripts not work for you?
I'm not an expert on Hue, I would recommend asking in the Hue user forum /
mailing list:
https://groups.google.com/a/cloudera.org/forum/#!forum/hue-user.
On Thu, Jun 4, 2015 at 4:09 PM, Pa Rö
As the output of the hadoop tool indicates, it expects two arguments; you
only passed one (777).
The second argument it expects is the path of the file you want to
change.
In your case, it is:
hadoop fs -chmod 777 /user/cloudera/outputs
The reason why
hadoop fs -chmod 777 *
does not work
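The truncated explanation above presumably comes down to shell glob expansion: the shell expands `*` against the local working directory (here, Flink's bin/ scripts) before hadoop ever runs, so hadoop receives local script names instead of an HDFS path. A quick local demonstration (the temp directory and file names are made up to mirror the error output):

```shell
# Create a scratch directory containing files like Flink's bin/ scripts.
demo=$(mktemp -d)
cd "$demo"
touch config.sh flink jobmanager.sh

# echo shows what the hadoop command actually receives after the shell
# expands the glob: local file names, not HDFS paths.
echo hadoop fs -chmod 777 *
# prints: hadoop fs -chmod 777 config.sh flink jobmanager.sh
```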
Yes, you have to run these commands in the command line of the Cloudera VM.
On Thu, Jun 4, 2015 at 4:28 PM, Pa Rö paul.roewer1...@googlemail.com
wrote:
you mean run this command on terminal/shell and not define a hue job?
2015-06-04 16:25 GMT+02:00 Robert Metzger rmetz...@apache.org:
It
Hi.
Flink is not a DBMS. There are no equivalent operations for insert, update, or remove.
But you can use the map [1] or filter [2] operation to create a modified dataset.
I recommend some slides [3][4] to understand Flink's concepts.
Regards,
Chiwan Park
[1]
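To illustrate the point above: Flink's map and filter derive a new dataset rather than mutating one in place. A conceptual sketch with plain Java streams (the Flink DataSet API is analogous but distributed; the class and method names here are made up for this example):

```java
import java.util.List;
import java.util.stream.Collectors;

public class DatasetSketch {
    // The "update" analogue: map each element to a new value,
    // producing a new dataset and leaving the input untouched.
    static List<Integer> timesTen(List<Integer> data) {
        return data.stream().map(x -> x * 10).collect(Collectors.toList());
    }

    // The "remove" analogue: filter elements out, again producing
    // a new dataset rather than deleting in place.
    static List<Integer> evens(List<Integer> data) {
        return data.stream().filter(x -> x % 2 == 0).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> data = List.of(1, 2, 3, 4);
        System.out.println(timesTen(data)); // [10, 20, 30, 40]
        System.out.println(evens(data));    // [2, 4]
    }
}
```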
@Stephan, I was trying to follow the concept of *Nest Join*. In other
words, I wanted to follow a certain implementation to achieve my goal.
@Fabian, well, solving the exception this way will lead to an incorrect
result, as the key will always exist on one side; the iterator of the
other side will
I am not sure if I got your question right.
You can easily prevent the NoSuchElementException by calling next() only
if hasNext() returns true.
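In guard form, the pattern looks like this (a minimal sketch with plain java.util iterators; in a Flink CoGroupFunction one side's iterator can be empty for a key that exists only on the other side, and the class/method names here are made up for this example):

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

public class IteratorGuard {
    // Return the first element, or null when the iterator is empty.
    // Guarding next() with hasNext() avoids NoSuchElementException.
    static Integer firstOrNull(Iterator<Integer> it) {
        return it.hasNext() ? it.next() : null;
    }

    public static void main(String[] args) {
        List<Integer> empty = Collections.emptyList();
        List<Integer> one = List.of(42);
        System.out.println(firstOrNull(empty.iterator())); // null
        System.out.println(firstOrNull(one.iterator()));   // 42
    }
}
```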
2015-06-04 11:18 GMT+02:00 Mustafa Elbehery elbeherymust...@gmail.com:
Yes, it's working now... But my assumption is that I want to join different