Hi,
which version of Flink is compatible with Hadoop 2.5.2?
The 0.9.0 release builds explicitly mention compatibility with 2.2.0,
2.6.0, and 2.7.0, but not 2.5.x.
Regards,
Santosh
Hi Santosh,
I would try the Hadoop 2.4.1 build of Flink.
On Wed, Jul 15, 2015 at 10:13 AM, santosh_rajaguru sani...@gmail.com
wrote:
Hey,
any input on this? Or a hint, or a pointer where to look to figure this out by myself?
Thanks!
-Vasia.
On 7 July 2015 at 15:20, Vasiliki Kalavri vasilikikala...@gmail.com wrote:
Hello to my squirrels,
I've started looking into FLINK-1943
https://issues.apache.org/jira/browse/FLINK-1943 and I
Hi,
Hadoop is not a necessity for running Flink, but rather an option. Try the
steps of the setup guide. [1]
If you really need HDFS to get the best I/O performance, though, I would
suggest having Hadoop on all of the machines running Flink.
[1]
Hey Vasia!
Sorry for the late response... Thanks for pinging again!
The optimizer is acting a little funky here - it seems to be an artifact of
the properties optimization.
- The initial join needs to be partitioned and sorted. Can you check
whether one partitioning and sorting happens before the
What IDE should I use? There are various options, and I already have Eclipse
Luna. The IDE page says that the Scala IDE works best. So should I go
with the Scala IDE? Will I be able to develop in Java later?
On Wed, Jul 15, 2015 at 4:44 PM, Kostas Tzoumas ktzou...@apache.org wrote:
Hi Rohit,
Hi,
thank you Stephan!
Here's the missing part of the plan: http://i.imgur.com/N861tg1.png
There is one hash partition / sort. Is this what you're talking about?
Regarding your second point, how can I test if the data is known to be
partitioned at the end?
-Vasia.
On 15 July 2015 at 13:13,
Hi Santosh,
yes, that is possible if you want to read a complete file without splitting
it into records. However, you need to implement a custom InputFormat for
that which extends Flink's FileInputFormat.
If you want to split it into records, you need a character sequence that
delimits two records.
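The full InputFormat is more than fits in a reply, but the core of the first suggestion - treating an entire file as one record - can be sketched in plain Java. The class and method names below are illustrative only; in Flink, this read-everything logic would live in nextRecord() of a FileInputFormat subclass whose splitting is disabled (unsplittable = true), so that each file produces exactly one record:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class WholeFileRecord {

    // Drain the entire input stream into a single String record.
    // In a Flink FileInputFormat subclass, 'in' would be the split's
    // FSDataInputStream opened in open(FileInputSplit).
    static String readAll(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        return buf.toString("UTF-8");
    }

    public static void main(String[] args) throws IOException {
        InputStream in =
            new ByteArrayInputStream("<doc>hello</doc>".getBytes("UTF-8"));
        // The whole "file" comes back as one record.
        System.out.println(readAll(in));
    }
}
```

Marking the format unsplittable matters: otherwise Flink may hand the same file to several parallel readers as byte-range splits, and each would emit a partial record.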
Perhaps there is also an existing HadoopInputFormat for XML that you might
be able to reuse for your purposes (Flink supports Hadoop input formats).
For example, there is an XMLInputFormat in the Apache Mahout codebase that
you could take a look at:
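For context, the essence of such an XMLInputFormat is to scan the input for a configured start tag and emit everything up to and including the matching end tag as one record. A minimal, self-contained sketch of that splitting logic (the class and method names are my own for illustration, not Mahout's actual API):

```java
import java.util.ArrayList;
import java.util.List;

public class XmlRecordSplitter {

    // Scan 'input' for occurrences of startTag and cut out each span
    // up to and including the next endTag as one record. A real
    // XmlInputFormat does this over a byte stream so it can also align
    // itself at arbitrary split boundaries.
    static List<String> split(String input, String startTag, String endTag) {
        List<String> records = new ArrayList<>();
        int pos = 0;
        while (true) {
            int start = input.indexOf(startTag, pos);
            if (start < 0) {
                break; // no further start tag
            }
            int end = input.indexOf(endTag, start);
            if (end < 0) {
                break; // unterminated record; stop rather than emit garbage
            }
            end += endTag.length();
            records.add(input.substring(start, end));
            pos = end;
        }
        return records;
    }

    public static void main(String[] args) {
        String xml = "<root><rec>a</rec><rec>b</rec></root>";
        // prints [<rec>a</rec>, <rec>b</rec>]
        System.out.println(split(xml, "<rec>", "</rec>"));
    }
}
```

Note this is plain string matching, not XML parsing: it assumes the records are not nested inside each other, which is the usual constraint these tag-splitting input formats impose.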
Enrique Bautista Barahona created FLINK-2365:
Summary: Review of How to contribute page
Key: FLINK-2365
URL: https://issues.apache.org/jira/browse/FLINK-2365
Project: Flink
Issue
Lady Kalamari,
The plan looks good.
To test whether the data is partitioned there: If you have the optimizer
plan, make sure the global properties have a partitioning property of
PARTITIONED_HASH.
Thanks,
Stephan
On Wed, Jul 15, 2015 at 2:07 PM, Vasiliki Kalavri vasilikikala...@gmail.com
Thanks Fabian and Kostas for the info. Using XMLInputFormat, I am able to
read an XML file from HDFS.
Cheers,
Santosh
--
View this message in context:
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Read-XML-from-HDFS-tp7023p7035.html
Sent from the Apache Flink Mailing List archive.