Row arity of from does not match serializers.

2019-12-05 Thread srikanth flink
ng.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:143) at org.apache.flink.streaming.runtime.tasks.StreamTask.performDefaultAction(StreamTask.java:276) at org.apache.flink.streaming.runtime.tasks.StreamTask.run(StreamTask.java:298) at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:403) at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705) at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530) at java.lang.Thread.run(Thread.java:748) Help me understand the error in detail. Thanks Srikanth
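This exception comes from Flink's RowSerializer: a Row handed to it carries a different number of fields than the row type the serializer was built from. A minimal sketch of how the mismatch typically arises (names are hypothetical, not from this thread):

    // Hypothetical reproduction: the declared row type has two fields,
    // but the emitted Row carries three, so the serializer's arity check fails.
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.types.Row;

    public class ArityMismatch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
            env.fromElements("a,1")
                .map(s -> {
                    String[] p = s.split(",");
                    return Row.of(p[0], Integer.parseInt(p[1]), "extra"); // 3 fields
                })
                .returns(Types.ROW(Types.STRING, Types.INT))              // 2 fields
                .print();
            env.execute();
        }
    }

The fix is to make the produced Row and the declared schema agree, field for field.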

Table/SQL API to read and parse JSON, Java.

2019-12-01 Thread srikanth flink
Hi there, I'm following the link to read JSON data from Kafka and convert it to a table, programmatically. I had already tried and succeeded with the declarative approach using the SQL client. My JSON data is nested, like: {a:1,b:2,c:{x:1,y:2}}. Code:
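For nested JSON like this, the nested object maps to a ROW type in the table schema. A sketch of the programmatic route, assuming a Flink 1.9-era setup (topic name is an assumption, connection properties are omitted, and the exact connector property keys changed across releases):

    // Assumes the usual flink-table, flink-json and Kafka connector
    // dependencies on the classpath.
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

    tableEnv.sqlUpdate(
        "CREATE TABLE jsonSource (" +
        "  a INT," +
        "  b INT," +
        "  c ROW<x INT, y INT>" +          // nested object -> ROW type
        ") WITH (" +
        "  'connector.type' = 'kafka'," +
        "  'connector.version' = 'universal'," +
        "  'connector.topic' = 'json-topic'," +
        "  'format.type' = 'json'" +
        ")");

    // Nested fields are then addressable with dotted paths:
    Table t = tableEnv.sqlQuery("SELECT a, b, c.x, c.y FROM jsonSource");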

Re: Job Distribution Strategy On Cluster.

2019-11-11 Thread srikanth flink
Great, thanks for the update. On Tue, Nov 12, 2019 at 8:51 AM Zhu Zhu wrote: > There is no plan for release 1.9.2 yet. > Flink 1.10.0 is planned to be released in early January. > > Thanks, > Zhu Zhu > > srikanth flink wrote on Mon, Nov 11, 2019 at 9:53 PM: > >> Zhu Zhu,

Re: Job Distribution Strategy On Cluster.

2019-11-11 Thread srikanth flink
Zhu Zhu, That's awesome and is what I'm looking for. Any update on the next release date? Thanks Srikanth On Mon, Nov 11, 2019 at 3:40 PM Zhu Zhu wrote: > Hi Srikanth, > > Is this issue what you encounter? FLINK-12122: a job would tend to fill > one TM before u

Re: Job Distribution Strategy On Cluster.

2019-11-11 Thread srikanth flink
Vina, I've set parallelism to 6 while max parallelism is 128. Thanks Srikanth On Mon, Nov 11, 2019 at 3:18 PM vino yang wrote: > Hi srikanth, > > What's your job's parallelism? > > In some scenarios, many operators are chained with each other. If its > parallelism is 1,

Job Distribution Strategy On Cluster.

2019-11-06 Thread srikanth flink
Hi there, I'm running Flink on a 3-node cluster. When I run my jobs (both SQL client and jar submission), they are assigned to a single machine instead of being distributed across the cluster. How can I distribute the jobs to make use of the cluster's full computing power? Thanks Srikanth
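The behavior described here is FLINK-12122, referenced in the replies above: the default scheduler fills one TaskManager before using the next. The fix that eventually shipped with that ticket (in 1.9.2/1.10) is exposed as a config switch:

    # flink-conf.yaml
    cluster.evenly-spread-out-slots: true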

What is the slot vs CPU ratio?

2019-11-06 Thread srikanth flink
Hi there, I have a 3-node cluster with 16 cores each. How many slots can I utilize at most, and how do I do the calculation? Thanks Srikanth
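The usual rule of thumb is one slot per CPU core, so this cluster could run 3 × 16 = 48 parallel slots. Slots are configured per TaskManager; a sketch assuming one TaskManager per node:

    # flink-conf.yaml on each of the 3 nodes
    taskmanager.numberOfTaskSlots: 16
    # cluster capacity = 3 TaskManagers x 16 slots = 48 slots

Note that slots isolate memory, not CPU, so one slot per core is a starting point to adjust against per-slot memory needs.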

Using RocksDB as lookup source in Flink

2019-11-04 Thread srikanth flink
. Thanks Srikanth

Add custom fields into Json

2019-10-29 Thread srikanth flink
. "ORDER" ... "MINUS" ... "UNION" ... ")" ... "," ... Could someone help me with this? Thanks Srikanth

Can a Flink query output nested JSON?

2019-10-24 Thread srikanth flink
t; ... "INTERSECT" ... "LIMIT" ... "OFFSET" ... "ORDER" ... "MINUS" ... "UNION" ... "," ... Could someone help me understand the problem with my schema/query? I'd also like to add new columns and nested columns. Thanks Srikanth

Querying nested JSON stream?

2019-10-17 Thread srikanth flink
source; Can someone help me with a query over nested JSON, so I can save the resources spent running a separate flattening job? Thanks Srikanth
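If the source table declares the nested object as a ROW type, Flink SQL can address nested fields directly with dotted paths, so no separate flattening job is needed. A sketch, assuming a registered table named source with a nested column c as in the JSON sketch earlier on this page:

    // (tableEnv is a StreamTableEnvironment; usual flink-table imports assumed)
    Table flat = tableEnv.sqlQuery(
        "SELECT a, b, c.x AS cx, c.y AS cy FROM source");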

Re: Reading Key-Value from Kafka | Eviction policy.

2019-09-27 Thread srikanth flink
Hi Miki, What are those several ways? Could you help me with references? Use case: we have a continuous credit card transaction stream flowing into a Kafka topic, along with a set of credit card defaulters in a .csv file (which gets updated every day). Thanks Srikanth On Fri, Sep 27

Reading Key-Value from Kafka | Eviction policy.

2019-09-26 Thread srikanth flink
and value, so that eviction works on the keys and older data is cleared. I found nothing in the docs so far. Could someone help with that? If there's no support for reading key and value, can someone help me assign a key to the table I'm building from the stream? Thanks Srikanth
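Reading the Kafka record key alongside the value is possible with a KafkaDeserializationSchema, which receives the whole ConsumerRecord. A sketch, assuming the universal Kafka connector and plain string keys/values:

    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema;
    import org.apache.kafka.clients.consumer.ConsumerRecord;

    public class KeyAndValueSchema
            implements KafkaDeserializationSchema<Tuple2<String, String>> {

        @Override
        public Tuple2<String, String> deserialize(ConsumerRecord<byte[], byte[]> rec) {
            String key = rec.key() == null ? null : new String(rec.key());
            String val = rec.value() == null ? null : new String(rec.value());
            return Tuple2.of(key, val);          // the key travels with the value
        }

        @Override
        public boolean isEndOfStream(Tuple2<String, String> next) {
            return false;
        }

        @Override
        public TypeInformation<Tuple2<String, String>> getProducedType() {
            return Types.TUPLE(Types.STRING, Types.STRING);
        }
    }

The resulting stream can then be keyed on f0, so per-key state TTL or timers can evict older entries.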

Re: Flink SQL update-mode set to retract in env file.

2019-09-26 Thread srikanth flink
Awesome, thanks! On Thu, Sep 26, 2019 at 5:50 PM Terry Wang wrote: > Hi, Srikanth~ > > In your code, > DataStream outStreamAgg = tableEnv.toRetractStream(resultTable, > Row.class).map(t -> {}); has converted the resultTable into a DataStream > that’s un
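A minimal sketch of the conversion Terry describes, assuming resultTable and tableEnv from the quoted code: each record becomes a (Boolean, Row) pair, where false flags the retraction of an earlier row.

    // (usual flink-streaming and flink-table imports assumed)
    DataStream<Tuple2<Boolean, Row>> retractStream =
        tableEnv.toRetractStream(resultTable, Row.class);

    retractStream
        .filter(t -> t.f0)       // keep additions, drop retractions
        .map(t -> t.f1)
        .returns(Row.class)      // help type extraction past the lambda
        .print();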

Re: Flink SQL update-mode set to retract in env file.

2019-09-26 Thread srikanth flink
ProducerProperties)); Is it that the above functionality works only with Table API and not with SQL? Please explain. Thanks Srikanth On Thu, Sep 26, 2019 at 1:57 PM Terry Wang wrote: > Hi srikanth~ > > The Flink SQL update-mode is inferred from the target table type. >

Flink SQL update-mode set to retract in env file.

2019-09-26 Thread srikanth flink
me with the syntax. Thanks Srikanth

Explain time based windows

2019-09-25 Thread srikanth flink
Srikanth

How do I create a temporal function using Flink Client SQL?

2019-09-24 Thread srikanth flink
Hi, I'm running time-based joins of a dynamic table over a temporal function. Is there a way I could create a temporal table using Flink SQL? I'm using v1.9. Thanks Srikanth
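In 1.9 a temporal table function cannot be created with a CREATE statement; the Table API route below is a sketch (table and column names are hypothetical). The SQL Client environment file also has a temporal-table table type that serves the declarative case.

    // (usual flink-table imports assumed; TemporalTableFunction is in
    // org.apache.flink.table.functions)
    Table ratesHistory = tableEnv.sqlQuery(
        "SELECT r_currency, r_rate, r_rowtime FROM RatesHistory");

    // Time attribute + primary key define the versioning:
    TemporalTableFunction rates =
        ratesHistory.createTemporalTableFunction("r_rowtime", "r_currency");
    tableEnv.registerFunction("Rates", rates);

    // Joined in SQL via LATERAL TABLE:
    //   SELECT o.amount * r.r_rate
    //   FROM Orders o, LATERAL TABLE (Rates(o.o_rowtime)) AS r
    //   WHERE o.o_currency = r.r_currency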

Re: Approach to match join streams to create unique streams.

2019-09-24 Thread srikanth flink
Fabian, Thanks, already implemented the left join. Srikanth On Tue, Sep 24, 2019 at 2:12 PM Fabian Hueske wrote: > Hi, > > AFAIK, Flink SQL Temporal table function joins are only supported as inner > equality joins. > An extension to left outer joins

Can I cross-talk between environments?

2019-09-23 Thread srikanth flink
Hi, I'm using Java code to source from Kafka, stream to a table, and register the table. I understand that I have started the StreamExecutionEnvironment and its execution. Is there a way I could access the registered table/temporal function from the SQL client? Thanks Srikanth

Approach to match join streams to create unique streams.

2019-09-23 Thread srikanth flink
o I could do a *contains*? OR any other approach? Help appreciated. Thanks Srikanth

How exactly does the idle-state retention policy work?

2019-09-17 Thread srikanth flink
o work, how does Flink justify the heap size growing to 80GB and crashing? 2. Is it that for every query with a time-windowed join, Flink SQL will automatically clear older records that have become irrelevant? Thanks Srikanth
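On the second question: time-windowed joins do clean up their state on their own once the watermark passes the window; it is the unbounded (non-windowed) joins and aggregations that grow without bound unless idle-state retention is set. A sketch against the 1.9-era API (later releases moved this onto TableConfig); the retention times here are arbitrary:

    // (usual flink-table imports assumed; Time is
    // org.apache.flink.api.common.time.Time)
    StreamQueryConfig qConfig = tableEnv.queryConfig();
    // Keyed state untouched for longer than the max retention is dropped:
    qConfig.withIdleStateRetentionTime(Time.hours(12), Time.hours(24));

    DataStream<Tuple2<Boolean, Row>> out =
        tableEnv.toRetractStream(resultTable, Row.class, qConfig);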

Re: writeAsCSV with partitionBy

2016-05-24 Thread Srikanth
Isn't this related to -- https://issues.apache.org/jira/browse/FLINK-2672 ?? This can be achieved with a RollingSink[1] & custom Bucketer probably. [1] https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/connectors/fs/RollingSink.html Srikanth On Tue,
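For the FLINK-2672-style partitioned layout, later Flink versions replaced RollingSink with StreamingFileSink, whose bucket assigner sees every element. A sketch (field names and paths are illustrative):

    // Writes records under ./output/field0=<value>/..., one directory per
    // key value. (Usual flink-streaming imports assumed.)
    StreamingFileSink<Tuple2<Integer, String>> sink = StreamingFileSink
        .forRowFormat(new Path("./output"),
                      new SimpleStringEncoder<Tuple2<Integer, String>>())
        .withBucketAssigner(new BucketAssigner<Tuple2<Integer, String>, String>() {
            @Override
            public String getBucketId(Tuple2<Integer, String> element, Context ctx) {
                return "field0=" + element.f0;
            }
            @Override
            public SimpleVersionedSerializer<String> getSerializer() {
                return SimpleVersionedStringSerializer.INSTANCE;
            }
        })
        .build();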

Re: Barriers at work

2016-05-13 Thread Srikanth
I have a follow-up. Is there a recommended list of knobs that can be tuned if an at-least-once guarantee while handling failure is good enough? For cases like alert generation, non-idempotent sinks, etc., where the system can live with duplicates or has another mechanism to handle them. Srikanth
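The primary knob for that trade-off: checkpointing in at-least-once mode, which skips barrier alignment and so reduces latency under backpressure, at the cost of possible duplicates on recovery. A sketch with an arbitrary interval:

    // (usual flink-streaming imports assumed; CheckpointingMode is in
    // org.apache.flink.streaming.api)
    StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();
    env.enableCheckpointing(60_000, CheckpointingMode.AT_LEAST_ONCE);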

Re: Barriers at work

2016-05-13 Thread Srikanth
Thanks Matthias & Stephan! Yes, if we choose to fail checkpoint on expiry, we can restore from previous checkpoint. Looking forward to read the new design proposal. Srikanth On Fri, May 13, 2016 at 8:09 AM, Stephan Ewen <se...@apache.org> wrote: > Hi Srikanth! > > That is

Re: Force triggering events on watermark

2016-05-10 Thread Srikanth
PURGE; case FIRE_AND_PURGE: return TriggerResult.FIRE_AND_PURGE; default: return TriggerResult.CONTINUE; } } } Srikanth On Tue, May 10, 2016 at 11:36 AM, Fabian Hueske <fhue...@gmail.com> wrote: > Maybe the last example of this blog post is helpful [1]. > > Best, Fabian > >

Force triggering events on watermark

2016-05-10 Thread Srikanth
w do I force-trigger the leftover records when the watermark is past the window? I.e., I want to use triggers to start early processing but finalize the window based on the watermark. The output shows that records for keys 23 & 9 weren't processed. (4,157428) (4,157428) (23,111283) (23,108042) (9,161374) (9,161374) (4,136505) (4,List(157428, 157428, 136505)) Thanks, Srikanth
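A sketch of that trigger shape against the current Trigger API (the thread's code predates it): fire early however you like on elements, but register an event-time timer at the window end so the watermark finalizes and purges whatever is left.

    import org.apache.flink.streaming.api.windowing.triggers.Trigger;
    import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
    import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

    public class EarlyAndOnTimeTrigger extends Trigger<Object, TimeWindow> {
        @Override
        public TriggerResult onElement(Object element, long timestamp,
                                       TimeWindow window, TriggerContext ctx) {
            // Finalization point: event-time timer at the window end.
            ctx.registerEventTimeTimer(window.maxTimestamp());
            return TriggerResult.CONTINUE;      // early-fire logic could go here
        }
        @Override
        public TriggerResult onEventTime(long time, TimeWindow window,
                                         TriggerContext ctx) {
            return time == window.maxTimestamp()
                ? TriggerResult.FIRE_AND_PURGE  // watermark passed: finalize
                : TriggerResult.CONTINUE;
        }
        @Override
        public TriggerResult onProcessingTime(long time, TimeWindow window,
                                              TriggerContext ctx) {
            return TriggerResult.CONTINUE;
        }
        @Override
        public void clear(TimeWindow window, TriggerContext ctx) {
            ctx.deleteEventTimeTimer(window.maxTimestamp());
        }
    }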

Re: how to convert datastream to collection

2016-05-03 Thread Srikanth
Why do you want to collect and iterate? Why not iterate on the DataStream itself? Maybe I didn't understand your use case completely. Srikanth On Tue, May 3, 2016 at 10:55 AM, Aljoscha Krettek <aljos...@apache.org> wrote: > Hi, > please keep in mind that we're dealing with streams.
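For reference, a minimal sketch of iterating on the stream itself rather than collecting it (the feedback predicate here is arbitrary):

    // (usual flink-streaming imports assumed; stream is a DataStream<Long>)
    IterativeStream<Long> loop = stream.iterate();
    DataStream<Long> minusOne = loop.map(v -> v - 1);
    loop.closeWith(minusOne.filter(v -> v > 0));           // > 0 feeds back
    DataStream<Long> done = minusOne.filter(v -> v <= 0);  // <= 0 exits the loop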

Re: Scala compilation error

2016-05-02 Thread Srikanth
TypeInformation]( functionName: String, operator: TwoInputStreamOperator[IN1, IN2, R]) Srikanth On Mon, May 2, 2016 at 7:18 PM, Srikanth <srikanth...@gmail.com> wrote: > Hello, > > I'm fac > > val stream = env.addSource(new FlinkKafkaConsumer09[String]("test-topic",

Scala compilation error

2016-05-02 Thread Srikanth
Hello, I'm fac val stream = env.addSource(new FlinkKafkaConsumer09[String]("test-topic", new SimpleStringSchema(), properties)) val bidderStream: KeyedStream[BidderRawLogs, Int] = stream.flatMap(b => BidderRawLogs(b)).keyBy(b => b.strategyId) val metaStrategy: KeyedStream[(Int, String), Int] =

Re: Join DataStream with dimension tables?

2016-04-27 Thread Srikanth
Aljoscha, Your thoughts on this? Srikanth On Mon, Apr 25, 2016 at 8:08 PM, Srikanth <srikanth...@gmail.com> wrote: > Aljoscha, > > Looks like a potential solution. Feels a bit hacky though. > > Didn't quite understand why a list backed store is used to for static >

Re: Join DataStream with dimension tables?

2016-04-25 Thread Srikanth
read phase? Lohith, Adding a component like Cassandra just for this feels like overkill. But if I can't find a suitable way to do this, I might use it (or probably Redis). Srikanth On Fri, Apr 22, 2016 at 12:20 PM, Lohith Samaga M <lohith.sam...@mphasis.com > wrote: > Hi, > C
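One Cassandra/Redis-free pattern for this, sketched with hypothetical Fact and Dim types: connect the two streams by key and cache the latest dimension row in keyed state.

    // (usual flink-streaming imports assumed)
    public class DimensionJoin
            extends RichCoFlatMapFunction<Fact, Dim, String> {

        private transient ValueState<Dim> dimState;

        @Override
        public void open(Configuration conf) {
            dimState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("dim", Dim.class));
        }

        @Override
        public void flatMap1(Fact fact, Collector<String> out) throws Exception {
            Dim dim = dimState.value();       // latest dimension row for this key
            if (dim != null) {
                out.collect(fact + " joined with " + dim);
            }
        }

        @Override
        public void flatMap2(Dim dim, Collector<String> out) throws Exception {
            dimState.update(dim);             // refresh the cached dimension row
        }
    }

Wired up as facts.connect(dims).keyBy(f -> f.key, d -> d.key).flatMap(new DimensionJoin()); the dimension side simply overwrites state as new versions arrive.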

Re: writeAsCSV with partitionBy

2016-02-16 Thread Srikanth
output/field0=2/file ./output/field0=3/file ./output/field0=4/file Assuming field0 is Int and has unique values 1, 2, 3 & 4. Srikanth On Mon, Feb 15, 2016 at 6:20 AM, Fabian Hueske <fhue...@gmail.com> wrote: > Hi Srikanth, > > DataSet.partitionBy() will partition th

writeAsCSV with partitionBy

2016-02-12 Thread Srikanth
API to do this. But what would be the best way to achieve this? Srikanth