Stateful functions roadmap

2023-06-01 Thread Tarandeep Singh
Hi, I came across Flink Stateful Functions and the project/idea excited me. But I noticed no code has been committed lately in this project and I couldn’t find its roadmap. Can someone shed some light on the state of this project? Thank you, Tarandeep

Flink streaming (1.3.2) KafkaConsumer08 - Unable to retrieve any partitions

2018-01-22 Thread Tarandeep Singh
Hi, Our flink streaming job that is reading from old version of Kafka keeps failing (every 9 minutes or so) with this error: java.lang.RuntimeException: Unable to retrieve any partitions for the requested topics [extracted-dimensions]. Please check previous log entries at

Flink 1.3 release date

2017-06-01 Thread Tarandeep Singh
Hi, Any updates on 1.3 release date? Thanks, Tarandeep

Re: Cassandra connector POJO - tombstone question

2017-04-12 Thread Tarandeep Singh
hould be possible for > you to cherry-pick it onto a 1.2 branch. > > I will add a ticket for this soon (currently getting timeouts in JIRA). > > Regards, > Chesnay > > > On 12.04.2017 02:27, Tarandeep Singh wrote: > >> Hi, >> >> I am using flink-1

Cassandra connector POJO - tombstone question

2017-04-11 Thread Tarandeep Singh
Hi, I am using flink-1.2 and the Cassandra connector to write to Cassandra tables. I am using POJOs with DataStax annotations as described here- https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/connectors/cassandra.html My question is- how are nulls handled by the Cassandra sink?

Re: flink-1.2 and unit testing / flinkspector

2017-03-23 Thread Tarandeep Singh
Hi Nancy, I also get 1 test failed when I build/run tests on flink-spector: - should stop if all triggers fire Run completed in 3 seconds, 944 milliseconds. Total number of tests run: 19 Suites: completed 5, aborted 0 Tests: succeeded 18, failed 1, canceled 0, ignored 0, pending 0 *** 1 TEST

Re: flink-1.2 and unit testing / flinkspector

2017-03-17 Thread Tarandeep Singh
> http://pastebin.com/Dxerr5KM > > Cheers > > On Fri, Mar 17, 2017 at 5:19 PM, Tarandeep Singh <tarand...@gmail.com> > wrote: > >> Hi Ted, >> >> See the attached patch. >> >> I am able to run test examples (e.g. >> org.flinkspector.datastream

Re: flink-1.2 and unit testing / flinkspector

2017-03-17 Thread Tarandeep Singh
yuzhih...@gmail.com> wrote: > Can you post the patch for flink-specter where the mini cluster is > replaced ? > > I assume you upgraded the version of Flink in the pom. > > Cheers > > On Mar 17, 2017, at 4:26 PM, Tarandeep Singh <tarand...@gmail.com> wrote: > &g

flink-1.2 and unit testing / flinkspector

2017-03-17 Thread Tarandeep Singh
Hi, Is someone using flinkspector unit testing framework with flink-1.2? I added the following dependencies in my pom.xml file: org.flinkspector flinkspector-datastream_2.10 0.5 org.flinkspector

Re: Data+control stream from kafka + window function - not working

2017-03-17 Thread Tarandeep Singh
atermarkDebugger” in your job. Have you > checked whether or not the watermarks printed there are identical (using > getInput v.s. getKafkaInput)? > > Cheers, > Gordon > > > On March 17, 2017 at 12:32:51 PM, Tarandeep Singh (tarand...@gmail.com) > wrote: > > Anyo

Re: Data+control stream from kafka + window function - not working

2017-03-16 Thread Tarandeep Singh
Anyone? Any suggestions what could be going wrong or what I am doing wrong? Thanks, Tarandeep On Thu, Mar 16, 2017 at 7:34 AM, Tarandeep Singh <tarand...@gmail.com> wrote: > Data is read from Kafka and yes I use different group id every time I run > the code. I have put break poin

Re: Data+control stream from kafka + window function - not working

2017-03-16 Thread Tarandeep Singh
ties)); >> } > > > Have you tried using a different “group.id” everytime you’re re-running the > job? > Note that the “auto.offset.reset” value is only respected when there aren’t > any offsets for the group committed in Kafka. > So you might not actually be

Data+control stream from kafka + window function - not working

2017-03-16 Thread Tarandeep Singh
Hi, I am using flink-1.2 and reading data stream from Kafka (using FlinkKafkaConsumer08). I want to connect this data stream with another stream (read control stream) so as to do some filtering on the fly. After filtering, I am applying window function (tumbling/sliding event window) along with
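The connect-a-control-stream-to-filter pattern described in this thread can be sketched in plain Java (the stream plumbing is simulated outside Flink, and all class and method names here are illustrative, not from the original post): control messages maintain an allow-list of keys, and data records are kept only if their key is currently in the list, which is what the two sides of a `CoFlatMapFunction` would do.

```java
import java.util.*;
import java.util.stream.*;

// Hypothetical sketch of the data+control filtering step, without Flink:
// onControl() plays the role of the control-stream side of a
// CoFlatMapFunction, onData() the data-stream side.
public class ControlFilterSketch {
    private final Set<String> allowedKeys = new HashSet<>();

    // Control stream: add or remove a key from the allow-list.
    public void onControl(String key, boolean allow) {
        if (allow) allowedKeys.add(key); else allowedKeys.remove(key);
    }

    // Data stream: pass through only records whose key is allowed.
    public List<String> onData(List<String> records) {
        return records.stream()
                .filter(allowedKeys::contains)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        ControlFilterSketch f = new ControlFilterSketch();
        f.onControl("cookieA", true);
        // Only cookieA records survive the filter.
        System.out.println(f.onData(Arrays.asList("cookieA", "cookieB", "cookieA")));
    }
}
```

In actual Flink code this would correspond to something like `dataStream.connect(controlStream.broadcast()).flatMap(...)` before the window operator, with the caveat that the allow-list state lives per parallel operator instance, which is why the control stream is usually broadcast.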

Flink streaming - call external API "after" sink

2017-03-09 Thread Tarandeep Singh
Hi, I am using flink-1.2 streaming API to process clickstream and compute some results per cookie. The computed results are stored in Cassandra using flink-cassandra connector. After a result is stored in cassandra, I want to notify an external system (using their API or via Kafka) that result is

Re: Flink 1.2 and Cassandra Connector

2017-03-06 Thread Tarandeep Singh
Hi Robert & Nico, I am facing the same problem (java.lang.NoClassDefFoundError: com/codahale/metrics/Metric). Can you help me identify the shading issue in my pom.xml file? My pom.xml content-

1.2 release date

2017-02-05 Thread Tarandeep Singh
Hi, Looking forward to 1.2 version of Flink (lots of exciting features have been added). Has the date finalized yet? Thanks, Tarandeep

Re: Exception while running Flink jobs (1.0.0)

2016-10-03 Thread Tarandeep Singh
) at com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136) ... 15 more -Tarandeep On Mon, Oct 3, 2016 at 12:49 PM, Tarandeep Singh <tarand...@gmail.com> wrote: > Hi, > > I am using flink-1.0.0 and running ETL (batch) jobs on it for quite some > time (few months) without any problem. Starting th

Exception while running Flink jobs (1.0.0)

2016-10-03 Thread Tarandeep Singh
Hi, I am using flink-1.0.0 and running ETL (batch) jobs on it for quite some time (few months) without any problem. Starting this morning, I have been getting errors like these- "Received an event in channel 3 while still having data from a record. This indicates broken serialization logic. If

Re: NotSerializableException

2016-06-09 Thread Tarandeep Singh
inct(); >> DataSet<Tuple2<Boolean, T> result = records >> .leftOuterJoin(distIds) >> .where(KEYSELECTOR) >> .equalTo("*") // use full string as key >> .with(JOINFUNC) // set Bool to false if right == null, true otherwise >> >>

NotSerializableException

2016-06-08 Thread Tarandeep Singh
Hi, I am getting a NotSerializableException in this class-  public class RecordsFilterer { public DataSet> addFilterFlag(DataSet dataset, DataSet filteredIds, String fieldName) { return dataset.coGroup(filteredIds) .where(new KeySelector()
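A common cause of NotSerializableException in Flink jobs is an anonymous or inner class (such as a KeySelector) holding an implicit reference to a non-serializable enclosing class. A minimal plain-Java reproduction of that mechanism (the class names are illustrative, not from the original post):

```java
import java.io.*;

public class SerializationDemo {
    static class Outer {  // NOT Serializable, like many job-driver classes
        // Non-static inner class: serializable itself, but carries a
        // hidden Outer.this reference -> NotSerializableException.
        class BadSelector implements Serializable {
            String getKey(String record) { return record.split(",")[0]; }
        }
        // Static nested class: no hidden outer reference, serializes fine.
        static class GoodSelector implements Serializable {
            String getKey(String record) { return record.split(",")[0]; }
        }
    }

    // Returns true if the object survives Java serialization.
    static boolean serializes(Object o) {
        try (ObjectOutputStream out =
                     new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (IOException e) {  // includes NotSerializableException
            return false;
        }
    }

    public static void main(String[] args) {
        Outer outer = new Outer();
        System.out.println(serializes(outer.new BadSelector()));   // false
        System.out.println(serializes(new Outer.GoodSelector()));  // true
    }
}
```

The usual fix in Flink code is exactly this: make the function a static nested (or top-level) class instead of an anonymous/inner one, so nothing non-serializable is captured.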

Re: Flink + Avro GenericRecord - first field value overwrites all other fields

2016-05-12 Thread Tarandeep Singh
On Wed, May 11, 2016 at 10:24 PM, Tarandeep Singh <tarand...@gmail.com> wrote: > Hi, > > I am using DataSet API and reading Avro files as DataSet. I > am seeing this weird behavior that record is read correctly from file > (verified by printing all values) but when

Flink + Avro GenericRecord - first field value overwrites all other fields

2016-05-11 Thread Tarandeep Singh
Hi, I am using DataSet API and reading Avro files as DataSet. I am seeing this weird behavior that record is read correctly from file (verified by printing all values) but when this record is passed to Flink chain/DAG (e.g. KeySelector), every field in this record has the same value as the

Insufficient number of network buffers

2016-05-02 Thread Tarandeep Singh
Hi, I have written ETL jobs in Flink (DataSet API). When I execute them in IDE, they run and finish fine. When I try to run them on my cluster, I get "Insufficient number of network buffers" error. I have 5 machines in my cluster with 4 cores each. TaskManager is given 3GB each. I increased the
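For Flink releases of that era (before the credit-based network stack), the relevant setting lives in flink-conf.yaml, and the rule of thumb from the Flink documentation is slots-per-TaskManager² × number-of-TaskManagers × 4 buffers. The values below are illustrative for the 5-machine, 4-core cluster described above, not a verified fix:

```yaml
# flink-conf.yaml (pre-1.5 network stack)
# Rule of thumb: slots-per-TM^2 * number-of-TMs * 4
#   here: 4^2 * 5 * 4 = 320, but ETL jobs with several shuffles running
#   concurrently often need a multiple of that, so round up generously.
taskmanager.network.numberOfBuffers: 4096
# Each buffer is one memory segment (32 KB by default), so 4096 buffers
# consume roughly 128 MB of the 3 GB given to each TaskManager.
```

Raising the buffer count trades a little TaskManager memory for the ability to run more concurrent data exchanges, which is typically what multi-stage batch ETL jobs run out of first.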

Re: Compression - AvroOutputFormat and over network ?

2016-04-20 Thread Tarandeep Singh
o contribute, I am happy >> to give some hints about which parts of the system would need to be >> modified. >> >> – Ufuk >> >> >> On Mon, Apr 18, 2016 at 12:56 PM, Tarandeep Singh <tarand...@gmail.com> >> wrote: >> > Hi, >> >

Compression - AvroOutputFormat and over network ?

2016-04-18 Thread Tarandeep Singh
Hi, How can I set compression for AvroOutputFormat when writing files on HDFS? Also, can we set compression for intermediate data that is sent over network (from map to reduce phase) ? Thanks, Tarandeep

Example - Reading Avro Generic records

2016-04-01 Thread Tarandeep Singh
Hi, Can someone please point me to an example of creating DataSet using Avro Generic Records? I tried this code - final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment(); final Path iPath = new Path(args[0]); DataSet dataSet = env.createInput(new

DataSetUtils zipWithIndex question

2016-03-30 Thread Tarandeep Singh
Hi, I am looking at implementation of zipWithIndex in DataSetUtils- https://github.com/apache/flink/blob/master/flink-java/src/main/java/org/apache/flink/api/java/utils/DataSetUtils.java It works in two phases/steps 1) Count number of elements in each partition (using mapPartition) 2) In second
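The two-phase scheme described here can be sketched in plain Java, with partitions simulated as lists (the names are illustrative, not Flink's actual internals): phase 1 counts elements per partition, then each partition's starting offset is the sum of the counts of all preceding partitions, so indices are globally unique without shuffling the data.

```java
import java.util.*;
import java.util.AbstractMap.SimpleEntry;

public class ZipWithIndexSketch {
    // Local simulation of DataSetUtils.zipWithIndex: assign a unique
    // consecutive index to every element across all partitions.
    public static List<Map.Entry<Long, String>> zipWithIndex(
            List<List<String>> partitions) {
        // Phase 1: count elements per partition (mapPartition in Flink).
        long[] counts = partitions.stream().mapToLong(List::size).toArray();

        // Each partition starts at the sum of all earlier counts
        // (these counts are broadcast to the second phase in Flink).
        long[] offsets = new long[counts.length];
        for (int i = 1; i < counts.length; i++) {
            offsets[i] = offsets[i - 1] + counts[i - 1];
        }

        // Phase 2: number elements locally, starting from the offset.
        List<Map.Entry<Long, String>> result = new ArrayList<>();
        for (int p = 0; p < partitions.size(); p++) {
            long idx = offsets[p];
            for (String value : partitions.get(p)) {
                result.add(new SimpleEntry<>(idx++, value));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<List<String>> parts = Arrays.asList(
                Arrays.asList("a", "b"),       // partition 0 -> indices 0,1
                Arrays.asList("c"),            // partition 1 -> index 2
                Arrays.asList("d", "e", "f")); // partition 2 -> indices 3,4,5
        zipWithIndex(parts).forEach(
                e -> System.out.println(e.getKey() + " -> " + e.getValue()));
    }
}
```

The appeal of the two-phase approach is that only the per-partition counts travel over the network; the elements themselves stay where they are.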

Re: Unable to submit flink job that uses Avro data

2016-03-23 Thread Tarandeep Singh
> On Wed, 23.03.2016 06:59, Chesnay Schepler wrote > Could you be missing the call to execute()? Yes, that was it. Can't believe I missed that ! Thank you Chesnay. Best, Tarandeep On 23.03.2016 01:25, Tarandeep Singh wrote: >> Hi, >> >> I wrote a simple Flink job tha