Hi,
I came across Flink Stateful Functions and the project/idea excited me. But I
noticed no code has been committed to this project lately and I couldn't find
its roadmap.
Can someone shed some light on the state of this project?
Thank you,
Tarandeep
Hi,
Our flink streaming job that is reading from old version of Kafka keeps
failing (every 9 minutes or so) with this error:
java.lang.RuntimeException: Unable to retrieve any partitions for the
requested topics [extracted-dimensions]. Please check previous log entries
at
Hi,
Any updates on 1.3 release date?
Thanks,
Tarandeep
hould be possible for
> you to cherry-pick it onto a 1.2 branch.
>
> I will add a ticket for this soon (currently getting timeouts in JIRA).
>
> Regards,
> Chesnay
>
>
> On 12.04.2017 02:27, Tarandeep Singh wrote:
>
>> Hi,
>>
>> I am using flink-1
Hi,
I am using flink-1.2 and Cassandra connector to write to cassandra tables.
I am using POJOs with DataStax annotations as described here-
https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/connectors/cassandra.html
My question is: how are nulls handled by the Cassandra sink?
Hi Nancy,
I also get 1 failed test when I build/run tests on flinkspector:
- should stop if all triggers fire
Run completed in 3 seconds, 944 milliseconds.
Total number of tests run: 19
Suites: completed 5, aborted 0
Tests: succeeded 18, failed 1, canceled 0, ignored 0, pending 0
*** 1 TEST
> http://pastebin.com/Dxerr5KM
>
> Cheers
>
> On Fri, Mar 17, 2017 at 5:19 PM, Tarandeep Singh <tarand...@gmail.com>
> wrote:
>
>> Hi Ted,
>>
>> See the attached patch.
>>
>> I am able to run test examples (e.g.
>> org.flinkspector.datastream
yuzhih...@gmail.com> wrote:
> Can you post the patch for flinkspector where the mini cluster is
> replaced?
>
> I assume you upgraded the version of Flink in the pom.
>
> Cheers
>
> On Mar 17, 2017, at 4:26 PM, Tarandeep Singh <tarand...@gmail.com> wrote:
>
Hi,
Is someone using flinkspector unit testing framework with flink-1.2?
I added the following dependencies in my pom.xml file:
<dependency>
  <groupId>org.flinkspector</groupId>
  <artifactId>flinkspector-datastream_2.10</artifactId>
  <version>0.5</version>
</dependency>
org.flinkspector
atermarkDebugger” in your job. Have you
> checked whether or not the watermarks printed there are identical (using
> getInput v.s. getKafkaInput)?
>
> Cheers,
> Gordon
>
>
> On March 17, 2017 at 12:32:51 PM, Tarandeep Singh (tarand...@gmail.com)
> wrote:
>
> Anyo
Anyone?
Any suggestions what could be going wrong or what I am doing wrong?
Thanks,
Tarandeep
On Thu, Mar 16, 2017 at 7:34 AM, Tarandeep Singh <tarand...@gmail.com>
wrote:
> Data is read from Kafka and yes, I use a different group id every time I run
> the code. I have put break poin
ties));
>> }
>
>
> Have you tried using a different “group.id” every time you’re re-running the
> job?
> Note that the “auto.offset.reset” value is only respected when there aren’t
> any offsets for the group committed in Kafka.
> So you might not actually be
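For anyone hitting the same thing: a minimal sketch of consumer properties with a throwaway group.id, so that no committed offsets exist for the group and "auto.offset.reset" actually applies. Broker/ZooKeeper addresses and the group.id prefix here are placeholders, not taken from the original job; the 0.8 consumer uses "smallest"/"largest" rather than the newer "earliest"/"latest" values.

```java
import java.util.Properties;
import java.util.UUID;

public class KafkaProps {
    // Build consumer properties with a unique group.id so that no committed
    // offsets exist for the group and "auto.offset.reset" takes effect.
    static Properties freshGroupProperties() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");  // placeholder
        props.setProperty("zookeeper.connect", "localhost:2181");  // needed by the 0.8 consumer
        props.setProperty("group.id", "debug-" + UUID.randomUUID());
        props.setProperty("auto.offset.reset", "smallest");        // 0.8 uses smallest/largest
        return props;
    }

    public static void main(String[] args) {
        System.out.println(freshGroupProperties().getProperty("group.id"));
    }
}
```

With a fresh group.id per run, the consumer has no committed offsets to resume from, so the reset policy decides where reading starts.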
Hi,
I am using flink-1.2 and reading data stream from Kafka (using
FlinkKafkaConsumer08). I want to connect this data stream with another
stream (read: a control stream) so as to do some filtering on the fly. After
filtering, I am applying a window function (tumbling/sliding event window)
along with
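The usual pattern for this is `dataStream.connect(controlStream)` with a `CoFlatMapFunction` holding the filter state. A Flink-free sketch of just that state logic (class and method names are illustrative, not Flink API):

```java
import java.util.HashSet;
import java.util.Set;

// Flink-free sketch of the state a CoFlatMapFunction would keep when a
// control stream drives on-the-fly filtering of a data stream.
public class ControlFilter {
    private final Set<String> blocked = new HashSet<>();

    // flatMap1-equivalent: control messages update the blocklist
    void onControl(String key, boolean block) {
        if (block) blocked.add(key); else blocked.remove(key);
    }

    // flatMap2-equivalent: data elements pass only if their key is not blocked
    boolean accept(String key) {
        return !blocked.contains(key);
    }

    public static void main(String[] args) {
        ControlFilter f = new ControlFilter();
        f.onControl("cookieX", true);
        System.out.println(f.accept("cookieX")); // prints false
    }
}
```

In an actual job the two `flatMap` methods would run on the same keyed operator, with the set kept in Flink state rather than a plain field.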
Hi,
I am using flink-1.2 streaming API to process clickstream and compute some
results per cookie. The computed results are stored in Cassandra using
flink-cassandra connector. After a result is stored in cassandra, I want to
notify an external system (using their API or via Kafka) that result is
Hi Robert & Nico,
I am facing the same problem (java.lang.NoClassDefFoundError:
com/codahale/metrics/Metric)
Can you help me identify the shading issue in my pom.xml file?
My pom.xml content-
http://maven.apache.org/POM/4.0.0
Hi,
Looking forward to 1.2 version of Flink (lots of exciting features have
been added).
Has the date been finalized yet?
Thanks,
Tarandeep
)
at
com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
... 15 more
-Tarandeep
On Mon, Oct 3, 2016 at 12:49 PM, Tarandeep Singh <tarand...@gmail.com>
wrote:
> Hi,
>
> I am using flink-1.0.0 and running ETL (batch) jobs on it for quite some
> time (few months) without any problem. Starting th
Hi,
I am using flink-1.0.0 and running ETL (batch) jobs on it for quite some
time (few months) without any problem. Starting this morning, I have been
getting errors like these-
"Received an event in channel 3 while still having data from a record. This
indicates broken serialization logic. If
inct();
>> DataSet<Tuple2<Boolean, T>> result = records
>> .leftOuterJoin(distIds)
>> .where(KEYSELECTOR)
>> .equalTo("*") // use full string as key
>> .with(JOINFUNC) // set Bool to false if right == null, true otherwise
>>
>>
Hi,
I am getting NoSerializableException in this class-
public class RecordsFilterer<T> {
public DataSet<Tuple2<Boolean, T>> addFilterFlag(DataSet<T>
dataset, DataSet<String> filteredIds, String fieldName) {
return dataset.coGroup(filteredIds)
.where(new KeySelector()
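A common cause of NotSerializableException here is an anonymous `KeySelector` capturing its non-serializable enclosing class; a `static` nested selector avoids dragging the outer instance along. A Flink-free sketch (the `Selector` interface below is a stand-in for Flink's `KeySelector`, which already extends `Serializable`), using plain Java serialization the same way Flink ships user functions to the cluster:

```java
import java.io.*;

public class SerializableSelectorDemo {
    // Stand-in for Flink's KeySelector, just to keep the sketch self-contained.
    interface Selector extends Serializable { String getKey(String record); }

    // A static nested selector carries no hidden reference to an enclosing
    // instance, so serializing it does not pull in the outer class.
    static class FieldSelector implements Selector {
        private final int index;
        FieldSelector(int index) { this.index = index; }
        public String getKey(String record) { return record.split(",")[index]; }
    }

    // Round-trip through Java serialization, as Flink does with user functions.
    static Selector roundTrip(Selector s) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new ObjectOutputStream(bos).writeObject(s);
        ObjectInputStream in =
            new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
        return (Selector) in.readObject();
    }

    public static void main(String[] args) throws Exception {
        Selector copy = roundTrip(new FieldSelector(1));
        System.out.println(copy.getKey("a,b,c")); // prints "b"
    }
}
```

The same effect can be had by making the enclosing class implement `Serializable`, but a static nested (or top-level) selector keeps the serialized closure small.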
On Wed, May 11, 2016 at 10:24 PM, Tarandeep Singh <tarand...@gmail.com>
wrote:
> Hi,
>
> I am using DataSet API and reading Avro files as DataSet. I
> am seeing this weird behavior that a record is read correctly from the file
> (verified by printing all values) but when
Hi,
I am using DataSet API and reading Avro files as DataSet. I
am seeing this weird behavior that a record is read correctly from the file
(verified by printing all values) but when this record is passed to the
Flink chain/DAG (e.g. KeySelector), every field in this record has the same
value as the
Hi,
I have written ETL jobs in Flink (DataSet API). When I execute them in IDE,
they run and finish fine. When I try to run them on my cluster, I get
"Insufficient number of network buffers" error.
I have 5 machines in my cluster with 4 cores each. Each TaskManager is given
3GB. I increased the
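For reference, the Flink 1.x configuration docs suggest sizing the buffer pool at roughly slots-per-TaskManager squared, times the number of TaskManagers, times 4. A small arithmetic sketch (treating 4 cores as 4 slots per TaskManager is an assumption about the setup above):

```java
public class NetworkBuffers {
    // Rule of thumb from the Flink 1.x configuration docs:
    // #buffers ≈ (slots per TaskManager)^2 * (number of TaskManagers) * 4
    static int recommendedBuffers(int slotsPerTm, int taskManagers) {
        return slotsPerTm * slotsPerTm * taskManagers * 4;
    }

    public static void main(String[] args) {
        // The cluster above: 5 machines, 4 slots per TaskManager.
        System.out.println(recommendedBuffers(4, 5)); // prints 320
    }
}
```

The value goes into `taskmanager.network.numberOfBuffers`; jobs with many shuffles between highly parallel operators may need more.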
o contribute, I am happy
>> to give some hints about which parts of the system would need to be
>> modified.
>>
>> – Ufuk
>>
>>
>> On Mon, Apr 18, 2016 at 12:56 PM, Tarandeep Singh <tarand...@gmail.com>
>> wrote:
>> > Hi,
>> >
Hi,
How can I set compression for AvroOutputFormat when writing files on HDFS?
Also, can we set compression for intermediate data that is sent over the
network (from the map to the reduce phase)?
Thanks,
Tarandeep
Hi,
Can someone please point me to an example of creating a DataSet using Avro
GenericRecords?
I tried this code -
final ExecutionEnvironment env =
ExecutionEnvironment.getExecutionEnvironment();
final Path iPath = new Path(args[0]);
DataSet dataSet = env.createInput(new
Hi,
I am looking at implementation of zipWithIndex in DataSetUtils-
https://github.com/apache/flink/blob/master/flink-java/src/main/java/org/apache/flink/api/java/utils/DataSetUtils.java
It works in two phases/steps:
1) Count the number of elements in each partition (using mapPartition)
2) In second
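The two phases can be sketched Flink-free, with plain lists standing in for partitions (all names here are illustrative): phase 1 counts each partition, phase 2 assigns each element the sum of the counts of earlier partitions plus its local position.

```java
import java.util.*;

public class ZipWithIndexSketch {
    // Flink-free sketch of DataSetUtils.zipWithIndex.
    static List<Map.Entry<Long, String>> zipWithIndex(List<List<String>> partitions) {
        // Phase 1: per-partition element counts (mapPartition in Flink)
        long[] counts = new long[partitions.size()];
        for (int p = 0; p < partitions.size(); p++) counts[p] = partitions.get(p).size();

        // Phase 2: a prefix sum over the counts gives each partition its
        // starting offset; elements get consecutive indices from there.
        List<Map.Entry<Long, String>> out = new ArrayList<>();
        long offset = 0;
        for (int p = 0; p < partitions.size(); p++) {
            long i = offset;
            for (String e : partitions.get(p)) out.add(Map.entry(i++, e));
            offset += counts[p];
        }
        return out;
    }

    public static void main(String[] args) {
        List<List<String>> parts = List.of(List.of("a", "b"), List.of("c"));
        System.out.println(zipWithIndex(parts)); // prints [0=a, 1=b, 2=c]
    }
}
```

The point of the two passes is that only the per-partition counts are broadcast, so no partition needs to see another partition's data.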
> On Wed, 23.03.2016 06:59, Chesnay Schepler wrote
> Could you be missing the call to execute()?
Yes, that was it. Can't believe I missed that!
Thank you Chesnay.
Best,
Tarandeep
On 23.03.2016 01:25, Tarandeep Singh wrote:
>> Hi,
>>
>> I wrote a simple Flink job tha