Re: readCsvFile

2016-10-07 Thread Alberto Ramón
> Hi Alberto,
>
> if you want to read a single column you have to wrap it in a Tuple1:
>
> val text4 = env.readCsvFile[Tuple1[String]]("file:data.csv", includedFields = Array(1))
>
> Best, Fabian
>
> 2016-10-06 20:59 GMT+02:00 Alberto Ramón <a.ramonporto..
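Fabian's one-liner expands to the following self-contained sketch (the file path and column index are assumptions for illustration). In the Flink Scala DataSet API, `readCsvFile` always produces tuples, case classes, or POJOs, so a single column has to be wrapped in `Tuple1`:

```scala
import org.apache.flink.api.scala._

object ReadSingleColumn {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // Select only the second column (includedFields is 0-based) and
    // wrap the single field in a Tuple1, since readCsvFile cannot
    // return a bare String.
    val text4 = env.readCsvFile[Tuple1[String]](
      "file:///tmp/data.csv",          // placeholder path
      includedFields = Array(1))

    text4.print()
  }
}
```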

jdbc.JDBCInputFormat

2016-10-07 Thread Alberto Ramón
I want to use CreateInput + buildJDBCInputFormat to access a database from Scala.

PB1:
import org.apache.flink.api.scala.io.jdbc.JDBCInputFormat
Error:(25, 37) object jdbc is not a member of package org.apache.flink.api.java.io
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat
Then, I can't

Re: jdbc.JDBCInputFormat

2016-10-09 Thread Alberto Ramón
> import org.apache.flink.api.java.io.jdbc.JDBCInputFormat
>
> There is no Scala implementation of this class but you can also use Java
> classes in Scala.
>
> 2016-10-07 21:38 GMT+02:00 Alberto Ramón <a.ramonporto...@gmail.com>:
>
>> I want use CreateInput + build
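A sketch of what using the Java `JDBCInputFormat` from Scala looks like in practice. All connection details are placeholders, and the builder methods shown follow the Flink 1.2+ `JDBCInputFormat` (the `Row`/`RowTypeInfo` packages moved between releases, so check the version you build against):

```scala
import org.apache.flink.api.scala._
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat  // Java class, usable from Scala
import org.apache.flink.api.java.typeutils.RowTypeInfo
import org.apache.flink.api.common.typeinfo.BasicTypeInfo

val env = ExecutionEnvironment.getExecutionEnvironment

// Describe the schema of the rows the query returns.
val rowType = new RowTypeInfo(
  BasicTypeInfo.STRING_TYPE_INFO,
  BasicTypeInfo.LONG_TYPE_INFO)

val inputFormat = JDBCInputFormat.buildJDBCInputFormat()
  .setDrivername("org.apache.kylin.jdbc.Driver")       // placeholder driver
  .setDBUrl("jdbc:kylin://localhost:7070/my_project")  // placeholder URL
  .setUsername("ADMIN")
  .setPassword("KYLIN")
  .setQuery("SELECT user_id, cnt FROM my_table")       // placeholder query
  .setRowTypeInfo(rowType)
  .finish()

// CreateInput: hand the Java input format to the Scala environment.
val rows = env.createInput(inputFormat)
```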

Re: readCsvFile

2016-10-09 Thread Alberto Ramón
Yes! I didn't see this :( Thanks

2016-10-09 23:23 GMT+02:00 Fabian Hueske <fhue...@gmail.com>:
> Shouldn't "val counts4 = text3" be "val counts4 = text4"?
>
> 2016-10-09 23:14 GMT+02:00 Alberto Ramón <a.ramonporto...@gmail.com>:
>>

Re: Stephan "Apache Flink to very Large State"

2016-11-07 Thread Alberto Ramón
:( thanks

2016-11-07 15:14 GMT+01:00 Till Rohrmann <trohrm...@apache.org>:
> Hi Alberto,
>
> there were some technical problems with the recording. Therefore, the
> publication is a little bit delayed. It should be added soon.
>
> Cheers,
> Till
>
> On Sat, Nov

Re: Memory on Aggr

2016-11-08 Thread Alberto Ramón
arge if the number of distinct xx values grows over time.
> That's why we will probably enforce a time predicate or meta data that the
> value domain of xx is of constant size.
>
> 2016-11-08 9:04 GMT+01:00 Alberto Ramón <a.ramonporto...@gmail.com>:
>
>> Yes, thanks
>

Stephan "Apache Flink to very Large State"

2016-11-05 Thread Alberto Ramón
Hello,

At Flink Forward 2016 there was a talk: http://flink-forward.org/kb_sessions/scaling-stream-processing-with-apache-flink-to-very-large-state/

But I can't find the video on the YouTube channel: https://www.youtube.com/channel/UCY8_lgiZLZErZPF47a2hXMA

Any idea where it is?

Memory on Aggr

2016-11-07 Thread Alberto Ramón
From "Relational Queries on Data Streams in Apache Flink" > Bounded Memory Requirements (https://docs.google.com/document/d/1qVVt_16kdaZQ8RTfA_f4konQPW4tnl8THw6rzGUdaqU/edit#)

SELECT user, page, COUNT(page) AS pCnt
FROM pageviews
GROUP BY user, page

-Versus-

SELECT user, page,

Queryable state using JDBC

2016-10-18 Thread Alberto Ramón
Hello, I'm investigating Flink + Calcite, StreamSQL, and queryable state. Is it possible to connect to Kylin using a SQL client via JDBC? (I always see API examples.) BR

Re: Queryable state using JDBC

2016-10-18 Thread Alberto Ramón
he.org>:
> Hi Alberto,
>
> have you checked out Flink's JDBCInputFormat? As far as I can tell, Kylin
> has support for JDBC and, thus, you should be able to read from it with
> this input format.
>
> Cheers,
> Till
>
> On Tue, Oct 18, 2016 at 11:28 AM, Alberto

Read Apache Kylin from Apache Flink

2016-10-18 Thread Alberto Ramón
Hello, I made a small contribution / manual: "How-to Read Apache Kylin data from Apache Flink with Scala". For any suggestions, feel free to contact me. Thanks, Alberto

Trigger evaluate

2016-10-24 Thread Alberto Ramón
Hello, one doubt: by default, when is the Trigger fired to evaluate the data of a window?
- When a new element arrives in the window?
- When a watermark arrives?
- When the window is moved?
Thanks, Alb

Re: Trigger evaluate

2016-10-24 Thread Alberto Ramón
elated?

2016-10-24 19:39 GMT+02:00 Aljoscha Krettek <aljos...@apache.org>:
> Hi,
> this depends on the Trigger you're using. For example, EventTimeTrigger
> will trigger when the watermark passes the end of a window.
>
> Cheers,
> Aljoscha
>
> On Mon, 24 Oct 2016

Re: Trigger evaluate

2016-10-24 Thread Alberto Ramón
0, 1:01:30, ... (every 30 seconds).
> Given the windows above, the window from 00:59:00 to 1:00:00 will be
> evaluated when a watermark of 1:00:00 or later is received. It might also
> happen that multiple windows are evaluated if watermarks are more than 30
> seconds apart.
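Aljoscha's description matches a sliding event-time window with the default EventTimeTrigger, which fires a window when the watermark passes its end timestamp. A minimal sketch (the stream and key are made up for illustration):

```scala
import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)

// Placeholder stream of (key, value) pairs with timestamps already assigned.
val stream: DataStream[(String, Int)] = ???

stream
  .keyBy(_._1)
  // 1-minute windows sliding every 30 seconds: the window [00:59:00, 01:00:00)
  // is evaluated once a watermark >= 01:00:00 arrives (default EventTimeTrigger).
  .timeWindow(Time.minutes(1), Time.seconds(30))
  .sum(1)
```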

Re: Why use Kafka after all?

2016-11-16 Thread Alberto Ramón
I have had the same thoughts as Dromit several times (a Kafka cluster is an expensive extra cost). Can someone check these ideas?

Case 1: You don't need replay or exactly-once:
- All values have a timestamp.
- The data source inserts the watermark at the source.
Some code example?

Case 2: Your data source is
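For Case 1, a sketch of assigning timestamps and watermarks directly on a non-Kafka source stream. The event type and the out-of-orderness bound are assumptions for illustration:

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor
import org.apache.flink.streaming.api.windowing.time.Time

// Hypothetical event type; the ts field carries the event timestamp.
case class Reading(sensor: String, value: Double, ts: Long)

val readings: DataStream[Reading] = ???   // any non-Kafka source

// Every value carries a timestamp, and watermarks are generated at the
// source side (here: tolerating 5 seconds of out-of-order events).
val withWatermarks = readings.assignTimestampsAndWatermarks(
  new BoundedOutOfOrdernessTimestampExtractor[Reading](Time.seconds(5)) {
    override def extractTimestamp(r: Reading): Long = r.ts
  })
```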

Compile for Java 1.8

2016-11-12 Thread Alberto Ramón
bash> git clone
POM> java.version = 1.8
bash> java -version ==> java version "1.8.0_111"

Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project flink-core: Compilation failure
[ERROR] . . . /flink1.2

Re: Compile for Java 1.8

2016-11-13 Thread Alberto Ramón
tests.
>
> Note that you can still use JDK8 to compile Flink without changing the
> java.version in pom.xml; in this scenario the compiler will not try to use
> 1.8 language features and the compilation will succeed. You can also run
> your Flink cluster using the Java 8 runtime.
>
> A

Re: jdbc.JDBCInputFormat

2016-10-11 Thread Alberto Ramón
:48, Timo Walther wrote:
> I could reproduce the error locally. I will prepare a fix for it.
>
> Timo
>
> On 10/10/16 at 11:54, Alberto Ramón wrote:
>
> It's from June and unassigned :(
> Is there a workaround?
>
> I will try to contact the reporter, Ma

Re: readCsvFile

2016-10-09 Thread Alberto Ramón
),1)
>
> because the single field is wrapped in a Tuple1.
> You have to unwrap it in the map function: .map { (_._1, 1) }
>
> 2016-10-07 18:08 GMT+02:00 Alberto Ramón <a.ramonporto...@gmail.com>:
>
>> Humm
>>
>> Your solution compiles without errors, but Include
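Putting the thread together, a word-count-style sketch over the single wrapped column (the path is a placeholder). Note the explicit lambda: inside a tuple, the Scala placeholder `(_._1, 1)` can expand to a tuple containing a function rather than the intended mapping, so spelling out the parameter is safer:

```scala
import org.apache.flink.api.scala._

val env = ExecutionEnvironment.getExecutionEnvironment

val text4 = env.readCsvFile[Tuple1[String]](
  "file:///tmp/data.csv",             // placeholder path
  includedFields = Array(1))

// Each record is a Tuple1[String]; unwrap it before counting.
val counts4 = text4
  .map { t => (t._1, 1) }
  .groupBy(0)
  .sum(1)

counts4.print()
```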

Re: jdbc.JDBCInputFormat

2016-10-10 Thread Alberto Ramón
d get higher priority.
>
> Timo
>
> On 09/10/16 at 13:27, Alberto Ramón wrote:
>
> After solving some issues, I connected to Kylin, but I can't read data.
>
> import org.apache.flink.api.scala._
> import org.apache.flink.api.java.io.jdbc.JDBCInputFormat
> impo

Source and Sink Flink

2017-03-14 Thread Alberto Ramón
Is it possible from Flink to read from IBM MQ and write to HDFS using append?

Flink and Rest API

2018-01-04 Thread Alberto Ramón
* Read some values from a REST API in streaming / micro-batch (example: read the last value of Bitcoin)
* Expose queryable state as a queryable REST API (example: expose intermediate results on demand)

Re: Flink and Rest API

2018-01-06 Thread Alberto Ramón
Thanks Till and Sendoh

On 5 January 2018 at 12:38, Till Rohrmann wrote:
> Hi Alberto,
>
> currently, the queryable state is not exposed via a REST interface. You
> have to use the QueryableStateClient for that [1]. If it's not possible to
> directly connect to the machines
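A sketch of Till's suggestion using the Flink 1.4-era `QueryableStateClient`. The job id, state name, key, and proxy host are all placeholders; the client talks to a TaskManager's queryable-state proxy (port 9069 by default), and the descriptor must match the one the job registered with `asQueryableState(...)`:

```scala
import org.apache.flink.api.common.JobID
import org.apache.flink.api.common.state.ValueStateDescriptor
import org.apache.flink.api.scala.createTypeInformation
import org.apache.flink.queryablestate.client.QueryableStateClient

val client = new QueryableStateClient("taskmanager-host", 9069)

// Must match the descriptor used when the state was made queryable.
val descriptor = new ValueStateDescriptor[Long]("pCnt", createTypeInformation[Long])

val future = client.getKvState(
  JobID.fromHexString("0123456789abcdef0123456789abcdef"),  // placeholder job id
  "pCnt-query",                                             // queryable state name
  "someUser",                                               // key to look up
  createTypeInformation[String],
  descriptor)

// The call is asynchronous; block or register a callback as needed.
future.thenAccept(state => println(state.value()))
```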