Re: Compilation Error in WindowStream.fold()

2017-02-24 Thread nsengupta
Hello Aljoscha, Many thanks for taking this up. This is the modified code: val uniqueVehicles = envDefault.fromCollection(readings).map(e => MITSIMUtils.preparePositionReport(e)) …

Compilation Error in WindowStream.fold()

2017-02-23 Thread nsengupta
For reasons I cannot grasp, I am unable to move ahead. Here's the code: import org.apache.flink.api.common.functions.FoldFunction; import …

Re: Clarification: use of AllWindowedStream.apply() function

2017-02-16 Thread nsengupta
… better fold(…, …, …) to map the result of folding your window to some other data type. If you will, a WindowFunction allows "mapping" the result of your windowing to a different type. Best, Aljoscha. On Wed, 15 Feb 2017 at 06:14, nsengupta <[hidden email]> …
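Not from the thread itself, but for context: a minimal sketch of the variant Aljoscha is pointing at, assuming the Flink 1.2 Scala API's fold(initialValue, foldFunction, windowFunction) overload on AllWindowedStream. The VehicleReading/WindowSummary types and the window size are invented, and whether this overload compiles cleanly in a given setup is, of course, the subject of the thread, so treat it as an illustration of the intent rather than verified code.

    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.streaming.api.windowing.time.Time
    import org.apache.flink.streaming.api.windowing.windows.TimeWindow
    import org.apache.flink.util.Collector

    object FoldWithWindowFunctionSketch {
      case class VehicleReading(vehicleId: Int, speed: Double)   // invented input type
      case class WindowSummary(count: Long, totalSpeed: Double)  // invented output type

      def main(args: Array[String]): Unit = {
        val env = StreamExecutionEnvironment.getExecutionEnvironment
        val readings = env.fromElements(VehicleReading(1, 55.0), VehicleReading(2, 60.0))

        // Fold incrementally into a (count, sum) accumulator, then let the window
        // function "map" that accumulator into a different output type.
        val summaries: DataStream[WindowSummary] = readings
          .timeWindowAll(Time.seconds(30))
          .fold(
            (0L, 0.0),
            (acc: (Long, Double), r: VehicleReading) => (acc._1 + 1, acc._2 + r.speed),
            (window: TimeWindow, folded: Iterable[(Long, Double)], out: Collector[WindowSummary]) => {
              val (count, total) = folded.head
              out.collect(WindowSummary(count, total))
            })

        summaries.print()
        env.execute("fold-with-window-function sketch")
      }
    }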

Re: Clarification: use of AllWindowedStream.apply() function

2017-02-14 Thread nsengupta
I have gone through this post, where Aljoscha explains that /mapping/ on a WindowedStream is /not/ allowed. So, I think I haven't asked the question properly. Here is …

Clarification: use of AllWindowedStream.apply() function

2017-02-14 Thread nsengupta
I am trying to understand if the AllWindowedStream.apply() function can be used for creating a DataStream of new types. Here is a portion of the code: case class …
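Not the code from the post, but a small sketch of the pattern being asked about, i.e. using AllWindowedStream.apply() to emit a type different from the input. The RawEvent/EventSummary names and the count window are invented for illustration.

    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.streaming.api.windowing.windows.GlobalWindow
    import org.apache.flink.util.Collector

    object ApplyOnAllWindowedStream {
      case class RawEvent(id: Int, value: Double)             // invented input type
      case class EventSummary(count: Int, maxValue: Double)   // invented output type

      def main(args: Array[String]): Unit = {
        val env = StreamExecutionEnvironment.getExecutionEnvironment
        val events = env.fromElements(RawEvent(1, 1.5), RawEvent(2, 7.25), RawEvent(3, 3.0))

        // apply() receives the whole window's contents and may collect any output type,
        // so the result is a DataStream[EventSummary] rather than a DataStream[RawEvent].
        val summaries: DataStream[EventSummary] = events
          .countWindowAll(3)
          .apply { (window: GlobalWindow, in: Iterable[RawEvent], out: Collector[EventSummary]) =>
            out.collect(EventSummary(in.size, in.map(_.value).max))
          }

        summaries.print()
        env.execute("AllWindowedStream.apply sketch")
      }
    }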

Re: Table API: java.sql.DateTime is not supported;

2017-02-06 Thread nsengupta
Hello Timo, Thanks for the clarification. This means that I *cannot use CsvTableSource* as I have in the example. Instead, I should: write a custom scalar function to convert STRINGs to other datatypes as required; read the file as CsvInput, with all fields as STRINGs; apply the …
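A rough sketch of the route described above, assuming the Flink 1.2 Table API: every CSV field is read as a STRING and a user-defined ScalarFunction converts the timestamp string. The ParseTs class, the field names and the sample row are invented; the surrounding calls follow the documented ScalarFunction and toTable usage of that release.

    import java.sql.Timestamp
    import java.text.SimpleDateFormat

    import org.apache.flink.api.scala._
    import org.apache.flink.table.api.TableEnvironment
    import org.apache.flink.table.api.scala._
    import org.apache.flink.table.functions.ScalarFunction
    import org.apache.flink.types.Row

    // Converts a raw STRING such as "4/1/2014 0:11:00" into a SQL Timestamp.
    class ParseTs extends ScalarFunction {
      def eval(s: String): Timestamp =
        new Timestamp(new SimpleDateFormat("M/d/yyyy H:mm:ss").parse(s).getTime)
    }

    object ScalarFunctionSketch {
      def main(args: Array[String]): Unit = {
        val env  = ExecutionEnvironment.getExecutionEnvironment
        val tEnv = TableEnvironment.getTableEnvironment(env)

        // Stand-in for the CSV read with all fields as STRINGs.
        val raw = env.fromElements(("4/1/2014 0:11:00", "40.769", "-73.9549", "B02512"))

        val parseTs = new ParseTs
        val trips   = raw.toTable(tEnv, 'ts, 'lat, 'lon, 'base)
        val typed   = trips.select(parseTs('ts) as 'eventTime, 'lat, 'lon, 'base)

        tEnv.toDataSet[Row](typed).print()
      }
    }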

Re: Compiler error while using 'CsvTableSource'

2017-02-06 Thread nsengupta
… more user-friendly in the future. https://issues.apache.org/jira/browse/FLINK-5714 Timo. On 05/02/17 at 06:08, nsengupta wrote: Thanks, Till, for taking time to share your understanding. -- N. On Sun, Feb 5, 2017 at 12:49 AM, Till Rohrma…

Re: Compiler error while using 'CsvTableSource'

2017-02-04 Thread nsengupta
…e of the parameters with a default argument. Cheers, Till. On Fri, Feb 3, 2017 at 12:49 PM, nsengupta <[hidden email]> wrote: Till, Many thanks. Just to confirm that it is work…
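For anyone else hitting this, a sketch along the lines of Till's suggestion, written as it would be typed into the Scala shell: pass the optional constructor parameters of CsvTableSource by name so the Scala compiler resolves the defaults. The path, field names and types are placeholders, and the parameter names assume the Flink 1.2-era constructor.

    import org.apache.flink.api.common.typeinfo.{BasicTypeInfo, TypeInformation}
    import org.apache.flink.table.sources.CsvTableSource

    // Optional arguments (fieldDelim, ignoreFirstLine, ...) are named explicitly.
    val csvSource = new CsvTableSource(
      "/path/to/trips.csv",                       // placeholder path
      Array("pickupTime", "lat", "lon", "base"),
      Array[TypeInformation[_]](
        BasicTypeInfo.STRING_TYPE_INFO,
        BasicTypeInfo.DOUBLE_TYPE_INFO,
        BasicTypeInfo.DOUBLE_TYPE_INFO,
        BasicTypeInfo.STRING_TYPE_INFO),
      fieldDelim = ",",
      ignoreFirstLine = true)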

Table API: java.sql.DateTime is not supported;

2017-02-04 Thread nsengupta
I am reading a bunch of records from a CSV file. A record looks like this: "4/1/2014 0:11:00",40.769,-73.9549,"B02512". I intend to treat these records as SQL Rows and then process them. Here's the code: package org.nirmalya.exercise; import …
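Not the original code, but a hedged sketch of one workaround: the Table API maps SQL TIMESTAMP to java.sql.Timestamp rather than any "DateTime" type, so the string can be parsed into a Timestamp on the DataSet side before the rows reach the Table API. The TripRecord case class and its field names are invented; only the sample line comes from the post.

    import java.sql.Timestamp
    import java.text.SimpleDateFormat

    import org.apache.flink.api.scala._
    import org.apache.flink.table.api.TableEnvironment
    import org.apache.flink.table.api.scala._
    import org.apache.flink.types.Row

    object TripsToTable {
      case class TripRecord(pickupTime: Timestamp, lat: Double, lon: Double, base: String)

      def main(args: Array[String]): Unit = {
        val env  = ExecutionEnvironment.getExecutionEnvironment
        val tEnv = TableEnvironment.getTableEnvironment(env)

        // Parse each raw CSV line into a typed record; Timestamp is a supported field type.
        val trips = env.fromElements("\"4/1/2014 0:11:00\",40.769,-73.9549,\"B02512\"")
          .map { line =>
            val fmt = new SimpleDateFormat("M/d/yyyy H:mm:ss")
            val f   = line.replace("\"", "").split(",")
            TripRecord(new Timestamp(fmt.parse(f(0)).getTime), f(1).toDouble, f(2).toDouble, f(3))
          }

        val table = trips.toTable(tEnv, 'pickupTime, 'lat, 'lon, 'base)
        tEnv.toDataSet[Row](table).print()
      }
    }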

Re: Compiler error while using 'CsvTableSource'

2017-02-03 Thread nsengupta
Till, Many thanks. Just to confirm that it is working fine at my end, here's a screenshot. This is Flink 1.1.4, but Flink 1.2/1.3 shouldn't be any problem. It never struck me that lack …

Re: Compiler error while using 'CsvTableSource'

2017-02-02 Thread nsengupta
Till, FWIW, I have run the entire test suite for the latest Flink snapshot. Almost all test cases passed, particularly this one: this case uses a built-in loaded CSV (in …

Re: Compiler error while using 'CsvTableSource'

2017-02-02 Thread nsengupta
Hello Till, Many thanks for the quick reply. I have tried to follow your suggestion, with no luck. Just to give it a shot, I have tried this too (following the Flink documentation): …

Compiler error while using 'CsvTableSource'

2017-02-02 Thread nsengupta
I am using *flink-shell*, available with flink-1.2-SNAPSHOT. While loading a CSV file into a CsvTableSource, following the example given in the documentation, I get this error. I am not sure …

Sharing State between Operators

2016-05-13 Thread nsengupta
Hello Flinksters, Alright. So, I had a fruitful exchange of messages with Balaji earlier today on this topic. I moved ahead with the understanding derived from that exchange (thanks, Balaji) at the time. But now I am back, because I think my approach is unclean, if not incorrect. There probably …

Availability of OrderedKeyedDataStream

2016-05-13 Thread nsengupta
Hello Flinksters, Have you decided to do away with the 'OrderedKeyedDataStream' type altogether? I didn't find it in the API documentation. It is mentioned and elaborated here …

Re: Count of Grouped DataSet

2016-05-01 Thread nsengupta
Hello all, This is how I have moved ahead with the implementation of finding the count of a GroupedDataSet: val k = envDefault.fromElements((1,1,2,"A"),(1,1,2,"B"),(2,1,3,"B"),(3,1,4,"C")).groupBy(1,2).reduceGroup(nextGroup => { val asList = nextGroup.toList …
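Since the snippet above is cut off, here is a self-contained sketch of the same idea: counting the members of each group with reduceGroup. The data mirrors the excerpt; emitting the grouping key as the fields at positions 1 and 2 of the first element is my assumption about the intent.

    import org.apache.flink.api.scala._

    object GroupCountSketch {
      def main(args: Array[String]): Unit = {
        val envDefault = ExecutionEnvironment.getExecutionEnvironment

        // Count how many records fall into each group defined by tuple positions 1 and 2.
        val k = envDefault
          .fromElements((1, 1, 2, "A"), (1, 1, 2, "B"), (2, 1, 3, "B"), (3, 1, 4, "C"))
          .groupBy(1, 2)
          .reduceGroup { nextGroup =>
            val asList = nextGroup.toList
            // the grouping key (fields at positions 1 and 2) plus the size of the group
            ((asList.head._2, asList.head._3), asList.size)
          }

        k.print()
      }
    }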

Re: Discarding header from CSV file

2016-04-29 Thread nsengupta
Hello Chiwan, Sorry for the late reply; I have been busy with other things for some time. Yes, you are right. I have wrongly been assuming that field to be an Integer. I will fix it and give it a go again. Many thanks again. -- Nirmalya

Problem in creating quickstart project using archetype (Scala)

2016-04-28 Thread nsengupta
Hello all, I don't know if anyone else has faced this; I haven't so far. When I try to create a new project template following the instructions here, it fails. This is what happens …

Re: Discarding header from CSV file

2016-04-27 Thread nsengupta
Hello Chiwan, Yes, that's an oversight on my part. In my hurry, I didn't even try to explore the source of that /Exception/. Thanks again. However, I still don't know why I am not able to read the CSV file. As the output shows, using standard IO routines, I can read the same file anyway.

Re: Discarding header from CSV file

2016-04-27 Thread nsengupta
Till, Thanks for looking into this. I have removed the toList() from the collect() function, to align the code with what I generally do in a Flink application. It throws an Exception, and I can't figure out why. Here's my code (shortened for brevity): case class …

Re: Discarding header from CSV file

2016-04-26 Thread nsengupta
Chiwan and other Flinksters, I am stuck with the following. Somehow, I am unable to spot the error, if any! Please help. I have this case class: case class BuildingInformation(buildingID: Int, buildingManager: Int, buildingAge: Int, productID: String, country: String). I intend to read from a CSV …
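Not the code from the post, but a sketch of how a case class like this is usually filled from a CSV with readCsvFile, which can also drop the header line itself; the file path is a placeholder.

    import org.apache.flink.api.scala._

    object ReadBuildings {
      case class BuildingInformation(buildingID: Int, buildingManager: Int, buildingAge: Int,
                                     productID: String, country: String)

      def main(args: Array[String]): Unit = {
        val env = ExecutionEnvironment.getExecutionEnvironment

        // Columns map positionally onto the case class fields; ignoreFirstLine skips the header.
        val buildings = env.readCsvFile[BuildingInformation](
          "/path/to/building.csv",          // placeholder path
          ignoreFirstLine = true)

        buildings.first(5).print()
      }
    }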

Re: Discarding header from CSV file

2016-04-26 Thread nsengupta
…e to use the readTextFile() method, I think you can ignore column headers by calling the zipWithIndex method and filtering based on the index. Regards, Chiwan Park. On Apr 27, 2016, at 10:32 AM, nsengupta <[hidden email]> …
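A sketch of the zipWithIndex route Chiwan describes, assuming the DataSetUtils helpers of the Scala API; the file path is a placeholder. Whether the header line reliably ends up with index 0 under a parallel read is exactly the worry raised in the original question, so readCsvFile with ignoreFirstLine = true (as sketched above) is often the simpler option when the file is a proper CSV.

    import org.apache.flink.api.scala._
    import org.apache.flink.api.scala.utils._   // adds zipWithIndex to DataSet

    object SkipCsvHeader {
      def main(args: Array[String]): Unit = {
        val env = ExecutionEnvironment.getExecutionEnvironment

        val lines = env.readTextFile("/path/to/file.csv")   // placeholder path

        // Attach a running index to every line, drop index 0 (the header), keep the rest.
        val body = lines.zipWithIndex
          .filter { case (index, _) => index > 0 }
          .map { case (_, line) => line }

        body.first(5).print()
      }
    }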

Discarding header from CSV file

2016-04-26 Thread nsengupta
What is the recommended way of discarding the column header(s) from a CSV file, if I am using the /environment.readTextFile()/ facility? Obviously, we don't know beforehand which of the nodes will read the header(s), so we cannot use the usual tricks like drop(1). I don't recall well: has this …