Re: [DISCUSS] Help from community to continue with Apache Bahir Development
+1 for continuing development. I have seen decent traction on the Flink side, and if we could support the latest Spark, that would likely get similar traction as well. On Fri, Jul 14, 2023 at 6:21 AM Joao Boto wrote: > Hi All, > > I would like to start a DISCUSS thread about continuing development and > releases for the Apache Bahir project. > > The current Project Management Committee (PMC) needs help to continue the > project. > To do this, we would like to see if there is still interest from the > community in continuing to develop, maintain, and create releases > for the project. > > If there are no more contributions or interest from the community, the > next step is to move the project to the Apache Attic [1] and retire the project. > > Looking forward to hearing from you all. > > Thanks > > [1] https://attic.apache.org > -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
Re: [VOTE] Apache Bahir Flink 1.1.0 (RC2)
+1 reviewed the release artifacts but not functionality. On Wed, Jul 27, 2022 at 4:45 AM Joao Boto wrote: > Dear community member, > > Please vote to approve the release of Apache Bahir Flink 1.1.0 (RC2) based > on > Apache Flink 1.14.5. > > Tag: v1.1.0-rc2 > (38092f025046fd4850c49a943e9197f67aabda54) > https://github.com/apache/bahir-flink/tree/v1.1.0-rc2 > > Release files: > https://repository.apache.org/content/repositories/orgapachebahir-1033 > > Source distribution: > https://dist.apache.org/repos/dist/dev/bahir/bahir-flink/1.1.0-rc2/ > > > The vote is open for at least 72 hours and passes if a majority of at least > 3 +1 PMC votes are cast. > > [ ] +1 Release this package as Apache Bahir Flink 1.1.0 > [ ] -1 Do not release this package because ... > > > Thanks for your vote! > -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
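For context, reviewing release artifacts typically means checking the published checksums and signatures against the files in the dist area. A minimal sketch of the checksum step (file names here are hypothetical; GPG signature verification against the project's KEYS file is a separate step not shown):

```python
import hashlib
import pathlib

def sha512_of(path: str) -> str:
    # Hash the artifact exactly as `sha512sum` would.
    return hashlib.sha512(pathlib.Path(path).read_bytes()).hexdigest()

def checksum_matches(artifact: str, checksum_file: str) -> bool:
    # A .sha512 file holds "<hexdigest>  <filename>"; compare the digest part.
    expected = pathlib.Path(checksum_file).read_text().split()[0]
    return sha512_of(artifact) == expected
```

A mismatch here would be grounds for a -1 on the release candidate.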
Re: [VOTE] Apache Bahir Flink 1.1.0 (RC2)
If we have to rebuild the RC, could we please add the Apache License header to the README.md files? Here is an example: https://raw.githubusercontent.com/elyra-ai/elyra/main/README.md On Wed, Jul 27, 2022 at 7:45 AM Joao Boto wrote: > Dear community member, > > Please vote to approve the release of Apache Bahir Flink 1.1.0 (RC2) based > on > Apache Flink 1.14.5. > > Tag: v1.1.0-rc2 > (38092f025046fd4850c49a943e9197f67aabda54) > https://github.com/apache/bahir-flink/tree/v1.1.0-rc2 > > Release files: > https://repository.apache.org/content/repositories/orgapachebahir-1033 > > Source distribution: > https://dist.apache.org/repos/dist/dev/bahir/bahir-flink/1.1.0-rc2/ > > > The vote is open for at least 72 hours and passes if a majority of at least > 3 +1 PMC votes are cast. > > [ ] +1 Release this package as Apache Bahir Flink 1.1.0 > [ ] -1 Do not release this package because ... > > > Thanks for your vote! > -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
Re: [DISCUSS] Release bahir-flink 1.1
Thanks for volunteering to help with that. As for the release number, I don't have any strong opinions. BTW, the release script should automate most of what you want for the release. On Tue, Aug 17, 2021 at 10:50 PM Robert Metzger wrote: > Hey all, > > The last bahir-flink release was in 2017, and users are asking for another > release [1]. I'd like to create a bahir-flink 1.1 release (or shall we call > it 2.0 ?) soon. > > If there are no objections, I'll soon create a RC. > > Best, > Robert > > [1] https://issues.apache.org/jira/browse/BAHIR-279 > -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[jira] [Resolved] (BAHIR-254) Redis or RedisDescriptorTest using the SQL or Table API throws an exception (Flink 1.11) because of a deprecated method and field
[ https://issues.apache.org/jira/browse/BAHIR-254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-254. --- Assignee: housezhang Resolution: Fixed > Redis or RedisDescriptorTest use sql or tab api will run > exception(flink1.11) :becase use Deprecated method and feild > > > Key: BAHIR-254 > URL: https://issues.apache.org/jira/browse/BAHIR-254 > Project: Bahir > Issue Type: Bug > Components: Spark Streaming Connectors >Affects Versions: Flink-1.0 >Reporter: housezhang >Assignee: housezhang >Priority: Major > Fix For: Flink-Next > > Original Estimate: 24h > Remaining Estimate: 24h > > {quote}when i run RedisDescriptorTest class the error: > java.lang.IllegalStateException: No operators defined in streaming topology. > Cannot execute. > at > org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.getStreamGraphGenerator(StreamExecutionEnvironment.java:1870) > at > org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.getStreamGraph(StreamExecutionEnvironment.java:1861) > at > org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.getStreamGraph(StreamExecutionEnvironment.java:1846) > at > org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1697) > at > org.apache.flink.streaming.connectors.redis.RedisDescriptorTest.testRedisDescriptor(RedisDescriptorTest.java:73) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) 
> at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at > org.junit.rules.RunRules.evaluate(RunRules.java:20) at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) at > org.junit.rules.RunRules.evaluate(RunRules.java:20) at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) at > org.junit.runner.JUnitCore.run(JUnitCore.java:137) at > com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69) > at > com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33) > at > com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:220) > at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:53) > {quote} > the follow two line code : > tableEnvironment.sqlUpdate("insert into redis select k, v from t1"); > env.execute("Test Redis Table"); > change to > tableEnvironment.executeSql("insert into redis select k, v from t1"); > // env.execute("Test Redis Table"); > > the 
other error is > > {quote}Java HotSpot(TM) 64-Bit Server VM warning: ignoring option > MaxPermSize=512m; support was removed in 8.0Java HotSpot(TM) 64-Bit Server VM > warning: ignoring option MaxPermSize=512m; support was removed in 8.0 > org.apache.flink.table.api.TableException: findAndCreateTableSink failed. > at > org.apa
[jira] [Resolved] (BAHIR-239) Fix Bugs in SQSClient
[ https://issues.apache.org/jira/browse/BAHIR-239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-239. --- Fix Version/s: Spark-2.4.0 Resolution: Fixed > Fix Bugs in SQSClient > - > > Key: BAHIR-239 > URL: https://issues.apache.org/jira/browse/BAHIR-239 > Project: Bahir > Issue Type: Bug > Components: Spark Structured Streaming Connectors >Affects Versions: Spark-2.4.0 >Reporter: Abhishek Dixit >Assignee: Abhishek Dixit >Priority: Major > Fix For: Spark-2.4.0 > > > This Jira is to fix 2 bugs in SQSClient > 1. Incomplete error message shows up when SQSClient fails to be created. > {code:java} > org.apache.spark.SparkException: Error occured while creating Amazon SQS > Client null > {code} > 2. AWS region is not honoured when authentication mode is keys. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (BAHIR-245) Add asf.yml
[ https://issues.apache.org/jira/browse/BAHIR-245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-245. --- Fix Version/s: Flink-Next Resolution: Fixed > Add asf.yml > --- > > Key: BAHIR-245 > URL: https://issues.apache.org/jira/browse/BAHIR-245 > Project: Bahir > Issue Type: Improvement > Components: Flink Streaming Connectors >Reporter: João Boto >Assignee: João Boto >Priority: Major > Fix For: Flink-Next > > Attachments: image-2020-08-27-22-05-54-382.png > > > this is to add asf.yml, more info > [here|https://cwiki.apache.org/confluence/display/INFRA/git+-+.asf.yaml+features] > > current info is [this|https://gitbox.apache.org/schemes.cgi?bahir-flink] > !image-2020-08-27-22-05-54-382.png! > > with this we could also remove the merge option on PRs > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (BAHIR-245) Add asf.yml
[ https://issues.apache.org/jira/browse/BAHIR-245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17186125#comment-17186125 ] Luciano Resende commented on BAHIR-245: --- What configurations are you planning to change? > Add asf.yml > --- > > Key: BAHIR-245 > URL: https://issues.apache.org/jira/browse/BAHIR-245 > Project: Bahir > Issue Type: Improvement > Components: Flink Streaming Connectors >Reporter: João Boto >Assignee: João Boto >Priority: Major > Attachments: image-2020-08-27-22-05-54-382.png > > > this is to add asf.yml, more info > [here|https://cwiki.apache.org/confluence/display/INFRA/git+-+.asf.yaml+features] > > current info is [this|https://gitbox.apache.org/schemes.cgi?bahir-flink] > !image-2020-08-27-22-05-54-382.png! > > with this we could also remove the merge option on PRs > -- This message was sent by Atlassian Jira (v8.3.4#803005)
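For reference, disabling the merge button on pull requests is one of the documented `.asf.yaml` features mentioned in the issue. A sketch of such a fragment, based on the INFRA documentation linked above (the description text and chosen buttons are illustrative, not the configuration actually committed):

```yaml
github:
  description: "Apache Bahir extensions for Apache Flink"
  enabled_merge_buttons:
    # allow only squash merges; hide the plain merge and rebase buttons
    squash: true
    merge: false
    rebase: false
```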
[jira] [Updated] (BAHIR-225) flink-connector-redis Jedis connect to Redis cluster throws NumberFormatException For input string 112@112
[ https://issues.apache.org/jira/browse/BAHIR-225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende updated BAHIR-225: -- Fix Version/s: (was: Not Applicable) Flink-Next > flink-connector-redis Jedis connect to Redis cluster throws > NumberFormatException For input string 112@112 > -- > > Key: BAHIR-225 > URL: https://issues.apache.org/jira/browse/BAHIR-225 > Project: Bahir > Issue Type: Bug > Components: Flink Streaming Connectors >Affects Versions: Flink-1.0 > Environment: mac and Linux, Java8, Flink 1.10 >Reporter: sam lin >Assignee: sam lin >Priority: Blocker > Labels: build, jedis, maven > Fix For: Flink-Next > > Original Estimate: 24h > Remaining Estimate: 24h > > There is a bug in Jedis connecting to Redis cluster when using > flink-redis-connector. The maven 1.1.0~1.1.5 > [jar]([https://mvnrepository.com/artifact/org.apache.flink/flink-connector-redis]) > all are using Jedis 2.8. After I upgrade Jedis to 2.9, this error is gone. > Please build and push a new version maven Jar with Jedis 2.9 will fix this > issue. > Context: [http://www.programmersought.com/article/589114613/] -- This message was sent by Atlassian Jira (v8.3.4#803005)
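The "112@112" in the exception hints at the root cause: Redis 4+ reports cluster node addresses as "port@cport", which older Jedis versions fed straight into an integer parser. A minimal sketch of the parsing fix (a hypothetical helper, shown in Python for brevity; the actual fix was upgrading to Jedis 2.9, whose Java parser handles this):

```python
def parse_cluster_port(field: str) -> int:
    # Redis 4+ cluster output uses "port@cport"; keep only the client
    # port before converting, instead of calling int("112@112") directly,
    # which raises the equivalent of Java's NumberFormatException.
    return int(field.split("@", 1)[0])
```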
[jira] [Assigned] (BAHIR-225) flink-connector-redis Jedis connect to Redis cluster throws NumberFormatException For input string 112@112
[ https://issues.apache.org/jira/browse/BAHIR-225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende reassigned BAHIR-225: - Assignee: sam lin > flink-connector-redis Jedis connect to Redis cluster throws > NumberFormatException For input string 112@112 > -- > > Key: BAHIR-225 > URL: https://issues.apache.org/jira/browse/BAHIR-225 > Project: Bahir > Issue Type: Bug > Components: Flink Streaming Connectors >Affects Versions: Flink-1.0 > Environment: mac and Linux, Java8, Flink 1.10 >Reporter: sam lin >Assignee: sam lin >Priority: Blocker > Labels: build, jedis, maven > Fix For: Not Applicable > > Original Estimate: 24h > Remaining Estimate: 24h > > There is a bug in Jedis connecting to Redis cluster when using > flink-redis-connector. The maven 1.1.0~1.1.5 > [jar]([https://mvnrepository.com/artifact/org.apache.flink/flink-connector-redis]) > all are using Jedis 2.8. After I upgrade Jedis to 2.9, this error is gone. > Please build and push a new version maven Jar with Jedis 2.9 will fix this > issue. > Context: [http://www.programmersought.com/article/589114613/] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (BAHIR-225) flink-connector-redis Jedis connect to Redis cluster throws NumberFormatException For input string 112@112
[ https://issues.apache.org/jira/browse/BAHIR-225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-225. --- Resolution: Fixed > flink-connector-redis Jedis connect to Redis cluster throws > NumberFormatException For input string 112@112 > -- > > Key: BAHIR-225 > URL: https://issues.apache.org/jira/browse/BAHIR-225 > Project: Bahir > Issue Type: Bug > Components: Flink Streaming Connectors >Affects Versions: Flink-1.0 > Environment: mac and Linux, Java8, Flink 1.10 >Reporter: sam lin >Assignee: sam lin >Priority: Blocker > Labels: build, jedis, maven > Fix For: Flink-Next > > Original Estimate: 24h > Remaining Estimate: 24h > > There is a bug in Jedis connecting to Redis cluster when using > flink-redis-connector. The maven 1.1.0~1.1.5 > [jar]([https://mvnrepository.com/artifact/org.apache.flink/flink-connector-redis]) > all are using Jedis 2.8. After I upgrade Jedis to 2.9, this error is gone. > Please build and push a new version maven Jar with Jedis 2.9 will fix this > issue. > Context: [http://www.programmersought.com/article/589114613/] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Reopened] (BAHIR-225) flink-connector-redis Jedis connect to Redis cluster throws NumberFormatException For input string 112@112
[ https://issues.apache.org/jira/browse/BAHIR-225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende reopened BAHIR-225: --- > flink-connector-redis Jedis connect to Redis cluster throws > NumberFormatException For input string 112@112 > -- > > Key: BAHIR-225 > URL: https://issues.apache.org/jira/browse/BAHIR-225 > Project: Bahir > Issue Type: Bug > Components: Flink Streaming Connectors >Affects Versions: Flink-1.0 > Environment: mac and Linux, Java8, Flink 1.10 >Reporter: sam lin >Priority: Blocker > Labels: build, jedis, maven > Fix For: Not Applicable > > Original Estimate: 24h > Remaining Estimate: 24h > > There is a bug in Jedis connecting to Redis cluster when using > flink-redis-connector. The maven 1.1.0~1.1.5 > [jar]([https://mvnrepository.com/artifact/org.apache.flink/flink-connector-redis]) > all are using Jedis 2.8. After I upgrade Jedis to 2.9, this error is gone. > Please build and push a new version maven Jar with Jedis 2.9 will fix this > issue. > Context: [http://www.programmersought.com/article/589114613/] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (BAHIR-133) Add MongoDB Source/Sink for Flink Streaming
[ https://issues.apache.org/jira/browse/BAHIR-133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17127798#comment-17127798 ] Luciano Resende commented on BAHIR-133: --- I haven't seen a PR around this. > Add MongoDB Source/Sink for Flink Streaming > > > Key: BAHIR-133 > URL: https://issues.apache.org/jira/browse/BAHIR-133 > Project: Bahir > Issue Type: Wish > Components: Flink Streaming Connectors >Reporter: Hai Zhou >Assignee: Hai Zhou >Priority: Major > Fix For: Flink-Next > > > MongoSource / MongoSink via implementation RichSourceFunction / > RichSinkFunction. -- This message was sent by Atlassian Jira (v8.3.4#803005)
Re: [DISCUSS] Bahir-Flink Kudu improvement proposal
On Wed, Apr 8, 2020 at 8:43 AM Márton Balassi wrote: > > Dear Bahir-Flink dev community, > > Wearing my Cloudera hat I am happy to let you know that we have recently > evaluated the Kudu connector and came to the conclusion that it could serve > as the basis for our Flink-Kudu connector that we are planning to add to > Cloudera's Flink distribution as supported functionality. > > We are seeking to discuss these proposed changes [1] with the community > hoping that we directly contribute those back and maintain them in the > Bahir codebase. Balazs and Gyula (ccd) are working on this from our team > and they have already implemented most of the proposed changes. > > [1] > https://docs.google.com/document/d/1Exd1qP5claqnDANeNAbO5EQQ19XGdqxtkCi1gUKXrrE/edit > > Looking forward to your feedback, > Marton Enhancements that move extensions forward are always welcome, and the link above provides very good information about the proposed changes. Does anyone have any comments and/or concerns about these? -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[jira] [Resolved] (BAHIR-220) Add redis descriptor to make redis connection as a table
[ https://issues.apache.org/jira/browse/BAHIR-220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-220. --- Fix Version/s: Flink-Next Assignee: yuemeng Resolution: Fixed > Add redis descriptor to make redis connection as a table > > > Key: BAHIR-220 > URL: https://issues.apache.org/jira/browse/BAHIR-220 > Project: Bahir > Issue Type: Improvement > Components: Flink Streaming Connectors >Affects Versions: Flink-1.0 >Reporter: yuemeng >Assignee: yuemeng >Priority: Major > Fix For: Flink-Next > > > currently, for Flink-1.9.0, we can use the catalog to store our stream table > source and sink > for Redis connector, it should exist a Redis table sink so we can register it > to catalog, and use Redis as a table in SQL environment > {code} > Redis redis = new Redis() > .mode(RedisVadidator.REDIS_CLUSTER) > .command(RedisCommand.INCRBY_EX.name()) > .ttl(10) > .property(RedisVadidator.REDIS_NODES, REDIS_HOST+ ":" + > REDIS_PORT); > tableEnvironment > .connect(redis).withSchema(new Schema() > .field("k", TypeInformation.of(String.class)) > .field("v", TypeInformation.of(Long.class))) > .registerTableSink("redis"); > tableEnvironment.sqlUpdate("insert into redis select k, v from t1"); > env.execute("Test Redis Table"); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (BAHIR-222) Update Readme with details of SQL Streaming SQS connector
[ https://issues.apache.org/jira/browse/BAHIR-222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-222. --- Fix Version/s: (was: Not Applicable) Spark-2.4.0 Resolution: Fixed > Update Readme with details of SQL Streaming SQS connector > - > > Key: BAHIR-222 > URL: https://issues.apache.org/jira/browse/BAHIR-222 > Project: Bahir > Issue Type: Task > Components: Spark Structured Streaming Connectors >Affects Versions: Not Applicable >Reporter: Abhishek Dixit >Assignee: Abhishek Dixit >Priority: Major > Fix For: Spark-2.4.0 > > > Adding link to SQL Streaming SQS connector in BAHIR Readme. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (BAHIR-213) Faster S3 file Source for Structured Streaming with SQS
[ https://issues.apache.org/jira/browse/BAHIR-213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-213. --- Fix Version/s: Spark-2.4.0 Assignee: Abhishek Dixit Resolution: Fixed > Faster S3 file Source for Structured Streaming with SQS > --- > > Key: BAHIR-213 > URL: https://issues.apache.org/jira/browse/BAHIR-213 > Project: Bahir > Issue Type: New Feature > Components: Spark Structured Streaming Connectors >Affects Versions: Spark-2.4.0 >Reporter: Abhishek Dixit >Assignee: Abhishek Dixit >Priority: Major > Fix For: Spark-2.4.0 > > > Using FileStreamSource to read files from a S3 bucket has problems both in > terms of costs and latency: > * *Latency:* Listing all the files in S3 buckets every microbatch can be > both slow and resource intensive. > * *Costs:* Making List API requests to S3 every microbatch can be costly. > The solution is to use Amazon Simple Queue Service (SQS) which lets you find > new files written to S3 bucket without the need to list all the files every > microbatch. > S3 buckets can be configured to send notification to an Amazon SQS Queue on > Object Create / Object Delete events. For details see AWS documentation here > [Configuring S3 Event > Notifications|https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html] > > Spark can leverage this to find new files written to S3 bucket by reading > notifications from SQS queue instead of listing files every microbatch. > I hope to contribute changes proposed in [this pull > request|https://github.com/apache/spark/pull/24934] to Apache Bahir as > suggested by [gaborgsomogyi|https://github.com/gaborgsomogyi] > [here|https://github.com/apache/spark/pull/24934#issuecomment-511389130] -- This message was sent by Atlassian Jira (v8.3.4#803005)
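The approach above relies on S3 event notifications delivered through SQS: each SQS message body is a JSON document listing the created objects, so Spark can discover new files without a List call. A minimal sketch of extracting new object keys from such a message (field names follow the standard AWS S3 notification format; the connector's actual Scala implementation differs):

```python
import json
from urllib.parse import unquote_plus

def new_object_keys(sqs_message_body: str) -> list:
    # S3 notifications carry a "Records" array; each record names the
    # object and the event type (e.g. "ObjectCreated:Put").
    notification = json.loads(sqs_message_body)
    keys = []
    for record in notification.get("Records", []):
        if record.get("eventName", "").startswith("ObjectCreated"):
            # Keys arrive URL-encoded (spaces become '+') in notifications.
            keys.append(unquote_plus(record["s3"]["object"]["key"]))
    return keys
```

Only ObjectCreated events contribute new input files; delete events are ignored by the microbatch planner.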
Re: [DISCUSS] Proposal for Ignite Extensions as a separate Apache Bahir module
It seems this discussion has died down without any objections. What should our next steps be? On Wed, Oct 23, 2019 at 1:12 PM Dmitry Pavlov wrote: > > Hi Bahir Community, > > I would also help with these extensions' migration, releases, and patch review. My > involvement is highly dependent on my current workload. > > I would like to wait a little bit to be sure everybody in Ignite agrees. > > Sincerely, > Dmitriy Pavlov > PMC and Committer at Apache Ignite > PPMC and Committer at Apache Training (incubating). > > On 2019/10/22 01:23:43, Luciano Resende wrote: > > On Mon, Oct 21, 2019 at 5:48 PM Saikat Maitra > > wrote: > > > > > > Hello, > > > > > > I am Saikat and I am a committer in the Apache Ignite project. I am interested > > > in > > > joining the Apache Bahir community and contributing to the following Apache > > > Ignite Extensions. > > > > > > https://apacheignite.readme.io/docs/integrations > > > > > > The reason we want to contribute our Apache Ignite integrations as > > > separate Extensions is that this will help us manage and maintain a separate > > > lifecycle for the Apache Ignite integrations. > > > > > > All the integrations will continue to be part of the ASF and we will keep > > > supporting and developing them in accordance with the ASF vision and practices. Our > > > inspiration for the move is very similar to what is described by Apache Flink. > > > > > > https://flink.apache.org/ecosystem.html#third-party-projects > > > > > > I would be very grateful if you could review and share your thoughts on > > > the proposal. > > > > > > Warm Regards, > > > Saikat > > > > What are your thoughts on involvement in maintenance tasks > > such as PR reviews and releases for these extensions? > > > > -- > > Luciano Resende > > http://twitter.com/lresende1975 > > http://lresende.blogspot.com/ > > -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
Re: [DISCUSS] Proposal for Ignite Extensions as a separate Apache Bahir module
On Mon, Oct 21, 2019 at 5:48 PM Saikat Maitra wrote: > > Hello, > > I am Saikat and I am a committer in the Apache Ignite project. I am interested in > joining the Apache Bahir community and contributing to the following Apache > Ignite Extensions. > > https://apacheignite.readme.io/docs/integrations > > The reason we want to contribute our Apache Ignite integrations as > separate Extensions is that this will help us manage and maintain a separate > lifecycle for the Apache Ignite integrations. > > All the integrations will continue to be part of the ASF and we will keep > supporting and developing them in accordance with the ASF vision and practices. Our > inspiration for the move is very similar to what is described by Apache Flink. > > https://flink.apache.org/ecosystem.html#third-party-projects > > I would be very grateful if you could review and share your thoughts on > the proposal. > > Warm Regards, > Saikat What are your thoughts on involvement in maintenance tasks such as PR reviews and releases for these extensions? -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[jira] [Resolved] (BAHIR-155) Add expire to redis sink
[ https://issues.apache.org/jira/browse/BAHIR-155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-155. --- Fix Version/s: (was: Flink-1.0) Flink-Next Resolution: Fixed > Add expire to redis sink > - > > Key: BAHIR-155 > URL: https://issues.apache.org/jira/browse/BAHIR-155 > Project: Bahir > Issue Type: Wish > Components: Flink Streaming Connectors >Affects Versions: Flink-1.0 >Reporter: miki haiat >Priority: Major > Labels: features > Fix For: Flink-Next > > > I have a scenario where I'm collecting some MD and aggregating the results by > time. > For example, each HSET of each window can create different values. > By adding expiry I can guarantee that the key holds only the current > window's values. > I'm thinking of changing the interface signature: > > {code:java} > void hset(String key, String hashField, String value); > void set(String key, String value); > //to this > void hset(String key, String hashField, String value, int expire); > void set(String key, String value, int expire); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
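To illustrate the requested semantics, here is a minimal in-memory sketch of a store whose `set` takes an optional expire, shown in Python for brevity (hypothetical code; the real change would land in the connector's Java interface quoted above, and Redis itself provides this natively via EXPIRE/SETEX):

```python
import time

class ExpiringStore:
    """In-memory sketch of set-with-expire semantics (illustrative only)."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock   # injectable clock, which makes TTLs testable
        self._data = {}       # key -> (value, deadline-or-None)

    def set(self, key, value, expire=None):
        # `expire` is a TTL in seconds, mirroring the proposed `int expire` arg.
        deadline = self._clock() + expire if expire is not None else None
        self._data[key] = (value, deadline)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, deadline = entry
        if deadline is not None and self._clock() >= deadline:
            del self._data[key]   # lazily drop expired keys on access
            return None
        return value
```

With per-key TTLs like this, each window's entries disappear on their own once the window has passed, which is the guarantee the reporter is after.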
Re: Apache Bahir release for the Flink runtime
I just started publishing a snapshot based on the latest master. It will be available shortly from: https://repository.apache.org/content/groups/snapshots/ On Fri, Sep 27, 2019 at 8:51 AM Gustavo Momenté wrote: > > Any update on this? Are snapshots published anywhere? I'm interested in > using Apache Bahir right now, but can't figure out where snapshots are > published. > > On 2019/05/20 07:41:24, Luciano Resende wrote: > > Ok, if everybody agrees, we can go with 1.8.0 and try to keep it > synchronized. > > > > As for the remaining items, are there any must-have items before we try a > release candidate? > > > > On Mon, May 20, 2019 at 12:30 AM Joao Boto wrote: > > > If we are going to try to stay synchronized with the Flink version (and this > could be important because of the Blink (Alibaba fork) integration into Flink), > > > then I think that the next release should be 1.8.0. > > > On Mon, May 20, 2019 at 0:21, Luciano Resende () wrote: > > >> What should we call the next release then? Just 1.1? 1.5? 2.0? > > >> On Mon, May 20, 2019 at 00:13 Joao Boto wrote: > > >>> There are new connectors and some updates to previous connectors, > so because of that I think so. > > >>> Synchronizing with the Flink version could be interesting, but we would > have to release more often. > > >>> Joao Boto > > >>> On Sun, May 19, 2019 at 23:05, Luciano Resende () wrote: > > >>>> It has been a while since the last Bahir release for the Apache Flink > runtime; should we create one? > > >>>> Also, the last release was 1.0, so what should we call it now (as Flink > is around 1.8)? Any synchronization required/desired? > > >>>> -- > > >>>> Luciano Resende > > >>>> http://twitter.com/lresende1975 > > >>>> http://lresende.blogspot.com/ -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
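To consume the snapshot mentioned above, a Maven build needs the ASF snapshots repository enabled. A sketch of the pom.xml fragment (the repository `id` is an arbitrary local name; the artifact coordinates a user would then add depend on the connector they want):

```xml
<repositories>
  <repository>
    <id>apache-snapshots</id>
    <url>https://repository.apache.org/content/groups/snapshots/</url>
    <releases><enabled>false</enabled></releases>
    <snapshots><enabled>true</enabled></snapshots>
  </repository>
</repositories>
```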
[RESULT][VOTE] Apache Bahir 2.4.0 (RC1)
On Fri, Sep 13, 2019 at 10:38 AM Luciano Resende wrote: > > Dear community member, > > Please vote to approve the release of Apache Bahir 2.4.0 (RC1) based on > Apache Spark 2.4.0. > > Tag: v2.4.0-rc1 (f908ec0dc1bbc4c6d11cde446f2bfd89ea39155f) > > https://github.com/apache/bahir/tree/v2.4.0-rc1 > > Release files: > > https://repository.apache.org/content/repositories/orgapachebahir-1031 > > Source distribution: > > https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.4.0-rc1/ > > The vote is open for at least 72 hours and passes if a majority of at least > 3 +1 PMC votes are cast. > > [ ] +1 Release this package as Apache Bahir 2.4.0 > [ ] -1 Do not release this package because ... > > > Thanks for your vote! > The vote has passed with 3 +1 from Luciano Resende Ted Yu Christian Kadner Thanks -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[RESULT][VOTE] Apache Bahir 2.3.4 (RC1)
On Wed, Sep 11, 2019 at 5:42 PM Luciano Resende wrote: > > Dear community member, > > Please vote to approve the release of Apache Bahir 2.3.4 (RC1) based on > Apache Spark 2.3.4. > > Tag: v2.3.4-rc1 (716107f420ac3e0afd76e61b74069e551d9a7e15) > > https://github.com/apache/bahir/tree/v2.3.4-rc1 > > Release files: > > https://repository.apache.org/content/repositories/orgapachebahir-1030 > > Source distribution: > > https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.3.4-rc1/ > > Also, here is a list of changes between v2.3.3 and v2.3.4-rc1 tags, > which summarize in updating Apache Spark to 2.3.4 and release build > preparation. > > https://github.com/apache/bahir/compare/v2.3.3...v2.3.4-rc1 > > The vote is open for at least 72 hours and passes if a majority of at least > 3 +1 PMC votes are cast. > > [ ] +1 Release this package as Apache Bahir 2.3.4 > [ ] -1 Do not release this package because ... > > > Thanks for your vote! > > -- Vote has passed with 4 +1 from: Luciano Resende Ted Yu Jean-Baptiste Onofré Christian Kadner Thanks -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
Re: [VOTE] Apache Bahir 2.3.4 (RC1)
Of course, my +1. I know a lot of us are just between travels (returning from ApacheCon), but are any other volunteers available to review the release? On Wed, Sep 11, 2019 at 5:42 PM Luciano Resende wrote: > > Dear community member, > > Please vote to approve the release of Apache Bahir 2.3.4 (RC1) based on > Apache Spark 2.3.4. > > Tag: v2.3.4-rc1 (716107f420ac3e0afd76e61b74069e551d9a7e15) > > https://github.com/apache/bahir/tree/v2.3.4-rc1 > > Release files: > > https://repository.apache.org/content/repositories/orgapachebahir-1030 > > Source distribution: > > https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.3.4-rc1/ > > Also, here is a list of changes between v2.3.3 and v2.3.4-rc1 tags, > which summarize in updating Apache Spark to 2.3.4 and release build > preparation. > > https://github.com/apache/bahir/compare/v2.3.3...v2.3.4-rc1 > > The vote is open for at least 72 hours and passes if a majority of at least > 3 +1 PMC votes are cast. > > [ ] +1 Release this package as Apache Bahir 2.3.4 > [ ] -1 Do not release this package because ... > > > Thanks for your vote! > > -- > Luciano Resende > http://twitter.com/lresende1975 > http://lresende.blogspot.com/ -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[VOTE] Apache Bahir 2.4.0 (RC1)
Dear community member, Please vote to approve the release of Apache Bahir 2.4.0 (RC1) based on Apache Spark 2.4.0. Tag: v2.4.0-rc1 (f908ec0dc1bbc4c6d11cde446f2bfd89ea39155f) https://github.com/apache/bahir/tree/v2.4.0-rc1 Release files: https://repository.apache.org/content/repositories/orgapachebahir-1031 Source distribution: https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.4.0-rc1/ The vote is open for at least 72 hours and passes if a majority of at least 3 +1 PMC votes are cast. [ ] +1 Release this package as Apache Bahir 2.4.0 [ ] -1 Do not release this package because ... Thanks for your vote! -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[VOTE] Apache Bahir 2.3.4 (RC1)
Dear community member, Please vote to approve the release of Apache Bahir 2.3.4 (RC1) based on Apache Spark 2.3.4. Tag: v2.3.4-rc1 (716107f420ac3e0afd76e61b74069e551d9a7e15) https://github.com/apache/bahir/tree/v2.3.4-rc1 Release files: https://repository.apache.org/content/repositories/orgapachebahir-1030 Source distribution: https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.3.4-rc1/ Also, here is a list of changes between v2.3.3 and v2.3.4-rc1 tags, which amount to updating Apache Spark to 2.3.4 and release build preparation. https://github.com/apache/bahir/compare/v2.3.3...v2.3.4-rc1 The vote is open for at least 72 hours and passes if a majority of at least 3 +1 PMC votes are cast. [ ] +1 Release this package as Apache Bahir 2.3.4 [ ] -1 Do not release this package because ... Thanks for your vote! -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[jira] [Resolved] (BAHIR-214) Improve KuduConnector speed
[ https://issues.apache.org/jira/browse/BAHIR-214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-214. --- Fix Version/s: Flink-Next Resolution: Fixed > Improve KuduConnector speed > --- > > Key: BAHIR-214 > URL: https://issues.apache.org/jira/browse/BAHIR-214 > Project: Bahir > Issue Type: Improvement > Components: Flink Streaming Connectors >Reporter: Joao Boto >Assignee: Joao Boto >Priority: Major > Fix For: Flink-Next > > > The Kudu connector has some issues in the Kudu sink, where some flush modes kill the > sink over time. > > This is a refactor to resolve those issues and improve speed under eventual > consistency. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Resolved] (BAHIR-172) Avoid FileInputStream/FileOutputStream
[ https://issues.apache.org/jira/browse/BAHIR-172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-172. --- Fix Version/s: Spark-2.4.0 Resolution: Fixed > Avoid FileInputStream/FileOutputStream > -- > > Key: BAHIR-172 > URL: https://issues.apache.org/jira/browse/BAHIR-172 > Project: Bahir > Issue Type: Bug >Reporter: Ted Yu >Priority: Minor > Fix For: Spark-2.4.0 > > > They rely on finalizers (before Java 11), which create unnecessary GC load. > The alternatives, {{Files.newInputStream}}, are as easy to use and don't have > this issue. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Assigned] (BAHIR-217) Install of Oracle JDK 8 Failing in Travis CI
[ https://issues.apache.org/jira/browse/BAHIR-217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende reassigned BAHIR-217: - Assignee: Abhishek Dixit > Install of Oracle JDK 8 Failing in Travis CI > > > Key: BAHIR-217 > URL: https://issues.apache.org/jira/browse/BAHIR-217 > Project: Bahir > Issue Type: Bug > Components: Build >Reporter: Abhishek Dixit >Assignee: Abhishek Dixit >Priority: Major > Labels: build, easyfix > Fix For: Spark-2.4.0 > > > Install of Oracle JDK 8 is failing in Travis CI. As a result, the build is failing > for new pull requests. > We need to make a small fix in the _.travis.yml_ file as mentioned in the > issue here: > https://travis-ci.community/t/install-of-oracle-jdk-8-failing/3038 > We just need to add > {code:java} > dist: trusty{code} > in the .travis.yml file as mentioned in the issue above. > I can raise a PR for this fix if required. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Resolved] (BAHIR-217) Install of Oracle JDK 8 Failing in Travis CI
[ https://issues.apache.org/jira/browse/BAHIR-217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-217. --- Fix Version/s: Spark-2.4.0 Resolution: Fixed > Install of Oracle JDK 8 Failing in Travis CI > > > Key: BAHIR-217 > URL: https://issues.apache.org/jira/browse/BAHIR-217 > Project: Bahir > Issue Type: Bug > Components: Build >Reporter: Abhishek Dixit >Priority: Major > Labels: build, easyfix > > Install of Oracle JDK 8 is failing in Travis CI. As a result, the build is failing > for new pull requests. > We need to make a small fix in the _.travis.yml_ file as mentioned in the > issue here: > https://travis-ci.community/t/install-of-oracle-jdk-8-failing/3038 > We just need to add > {code:java} > dist: trusty{code} > in the .travis.yml file as mentioned in the issue above. > I can raise a PR for this fix if required. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Commented] (BAHIR-217) Install of Oracle JDK 8 Failing in Travis CI
[ https://issues.apache.org/jira/browse/BAHIR-217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16919083#comment-16919083 ] Luciano Resende commented on BAHIR-217: --- Thanks [~abhishekd0907], please submit a PR. > Install of Oracle JDK 8 Failing in Travis CI > > > Key: BAHIR-217 > URL: https://issues.apache.org/jira/browse/BAHIR-217 > Project: Bahir > Issue Type: Bug > Components: Build >Reporter: Abhishek Dixit >Priority: Major > Labels: build, easyfix > > Install of Oracle JDK 8 is failing in Travis CI. As a result, the build is failing > for new pull requests. > We need to make a small fix in the _.travis.yml_ file as mentioned in the > issue here: > https://travis-ci.community/t/install-of-oracle-jdk-8-failing/3038 > We just need to add > {code:java} > dist: trusty{code} > in the .travis.yml file as mentioned in the issue above. > I can raise a PR for this fix if required. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Resolved] (BAHIR-210) bump flink version to 1.8.1
[ https://issues.apache.org/jira/browse/BAHIR-210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-210. --- Fix Version/s: Flink-Next Resolution: Fixed > bump flink version to 1.8.1 > --- > > Key: BAHIR-210 > URL: https://issues.apache.org/jira/browse/BAHIR-210 > Project: Bahir > Issue Type: Improvement > Components: Flink Streaming Connectors >Reporter: Joao Boto >Assignee: Joao Boto >Priority: Major > Fix For: Flink-Next > > -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Commented] (BAHIR-212) MQTT support to flink connector
[ https://issues.apache.org/jira/browse/BAHIR-212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16888529#comment-16888529 ] Luciano Resende commented on BAHIR-212: --- If you basically want to update the connector from the ActiveMQ project, you should discuss this with them. If you want to use their implementation as inspiration for a new MQTT connector supporting Flink in Bahir, then the Apache License gives you permission to do that. > MQTT support to flink connector > --- > > Key: BAHIR-212 > URL: https://issues.apache.org/jira/browse/BAHIR-212 > Project: Bahir > Issue Type: Improvement > Components: Flink Streaming Connectors >Affects Versions: Flink-1.0 > Environment: Software >Reporter: Antonio Miranda >Priority: Minor > Labels: features > > Hello, > I'm working with Apache Flink, and I would like to have a Streaming > connector for the MQTT protocol. The problem is that the activemq-client library that > you use does not have support for MQTT. I found on the > Internet a library whose structure is similar to activemq-client, but with > support for MQTT > ([https://github.com/apache/activemq/blob/master/activemq-mqtt]). > Would it be possible to update the connector to support MQTT? > Thank you > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Resolved] (BAHIR-192) Add jdbc sink support for Structured Streaming
[ https://issues.apache.org/jira/browse/BAHIR-192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-192. --- Resolution: Fixed Fix Version/s: Spark-2.4.0 > Add jdbc sink support for Structured Streaming > -- > > Key: BAHIR-192 > URL: https://issues.apache.org/jira/browse/BAHIR-192 > Project: Bahir > Issue Type: New Feature > Components: Spark Structured Streaming Connectors >Reporter: Wang Yanlin >Assignee: Wang Yanlin >Priority: Major > Fix For: Spark-2.4.0 > > > Currently, Spark SQL supports reading from and writing to JDBC in batch mode, but does not > support Structured Streaming. Even though we can write to > JDBC using a foreach sink, providing an easier way to write to JDBC > would be helpful. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
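The foreach-sink route mentioned above follows the per-partition writer lifecycle that Structured Streaming exposes: open a connection, process each row, then close. The sketch below is a plain-Java illustration of that contract only — the class and method names are illustrative, not Bahir's or Spark's actual API, and an in-memory list stands in for the JDBC table.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Plain-Java sketch of the open/process/close lifecycle behind a foreach
// sink. All names are illustrative, not Bahir's actual API.
class ForeachSinkSketch {
    interface RowWriter {
        boolean open(long partitionId, long epochId); // e.g. open a JDBC connection
        void process(String row);                     // e.g. execute an INSERT
        void close();                                 // e.g. close the connection
    }

    // An in-memory list stands in for the JDBC table.
    static class InMemoryJdbcWriter implements RowWriter {
        final List<String> table = new ArrayList<>();
        public boolean open(long partitionId, long epochId) { return true; }
        public void process(String row) { table.add(row); }
        public void close() { /* nothing to release in this sketch */ }
    }

    // Drive one partition's rows through the writer lifecycle.
    static void runPartition(RowWriter writer, List<String> rows,
                             long partitionId, long epochId) {
        if (!writer.open(partitionId, epochId)) {
            return; // writer declined this partition/epoch
        }
        try {
            rows.forEach(writer::process);
        } finally {
            writer.close(); // always release the connection
        }
    }
}
```

A dedicated JDBC sink would wrap exactly this boilerplate for the user, which is the "easier way" the issue asks for.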
Bahir release for Apache Spark 2.4
Is there anything else we might want to get into the 2.4 release? Otherwise, I was going to take a look at producing the release. Thoughts? -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[ANNOUNCE] Apache Bahir 2.3.3 Released
Apache Bahir provides extensions to multiple distributed analytic platforms, extending their reach with a diversity of streaming connectors and SQL data sources. The Apache Bahir community is pleased to announce the release of Apache Bahir 2.3.3 which provides the following extensions for Apache Spark 2.3.3: - Apache CouchDB/Cloudant SQL data source - Apache CouchDB/Cloudant Streaming connector - Akka Streaming connector - Akka Structured Streaming data source - Google Cloud Pub/Sub Streaming connector - Cloud PubNub Streaming connector (new) - MQTT Streaming connector - MQTT Structured Streaming data source (new sink) - Twitter Streaming connector - ZeroMQ Streaming connector (new enhanced implementation) For more information about Apache Bahir and to download the latest release go to: https://bahir.apache.org For more details on how to use Apache Bahir extensions in your application please visit our documentation page https://bahir.apache.org/docs/spark/overview/ The Apache Bahir PMC -- Luciano Resende http://people.apache.org/~lresende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[ANNOUNCE] Apache Bahir 2.2.3 Released
Apache Bahir provides extensions to multiple distributed analytic platforms, extending their reach with a diversity of streaming connectors and SQL data sources. The Apache Bahir community is pleased to announce the release of Apache Bahir 2.2.3 which provides the following extensions for Apache Spark 2.2.3: - Apache CouchDB/Cloudant SQL data source - Apache CouchDB/Cloudant Streaming connector - Akka Streaming connector - Akka Structured Streaming data source - Google Cloud Pub/Sub Streaming connector - Cloud PubNub Streaming connector (new) - MQTT Streaming connector - MQTT Structured Streaming data source (new sink) - Twitter Streaming connector - ZeroMQ Streaming connector (new enhanced implementation) For more information about Apache Bahir and to download the latest release go to: https://bahir.apache.org For more details on how to use Apache Bahir extensions in your application please visit our documentation page https://bahir.apache.org/docs/spark/overview/ The Apache Bahir PMC -- Luciano Resende http://people.apache.org/~lresende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[jira] [Resolved] (BAHIR-205) add password support for flink sink of redis cluster
[ https://issues.apache.org/jira/browse/BAHIR-205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-205. --- Resolution: Fixed Fix Version/s: (was: Not Applicable) Spark-2.4.0 > add password support for flink sink of redis cluster > - > > Key: BAHIR-205 > URL: https://issues.apache.org/jira/browse/BAHIR-205 > Project: Bahir > Issue Type: Improvement > Components: Flink Streaming Connectors >Affects Versions: Not Applicable >Reporter: yanfeng >Assignee: yanfeng >Priority: Major > Labels: features > Fix For: Spark-2.4.0 > > Original Estimate: 24h > Remaining Estimate: 24h > > A Redis cluster with password protection is not supported in > flink-connector-redis_2.11 version 1.1-SNAPSHOT, because the bundled Jedis > 2.8.0 doesn't support the feature, so upgrading Jedis to 2.9.0 plus some code > changes will provide it -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Reopened] (BAHIR-205) add password support for flink sink of redis cluster
[ https://issues.apache.org/jira/browse/BAHIR-205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende reopened BAHIR-205: --- Assignee: yanfeng > add password support for flink sink of redis cluster > - > > Key: BAHIR-205 > URL: https://issues.apache.org/jira/browse/BAHIR-205 > Project: Bahir > Issue Type: Improvement > Components: Flink Streaming Connectors >Affects Versions: Not Applicable >Reporter: yanfeng >Assignee: yanfeng >Priority: Major > Labels: features > Fix For: Not Applicable > > Original Estimate: 24h > Remaining Estimate: 24h > > A Redis cluster with password protection is not supported in > flink-connector-redis_2.11 version 1.1-SNAPSHOT, because the bundled Jedis > 2.8.0 doesn't support the feature, so upgrading Jedis to 2.9.0 plus some code > changes will provide it -- This message was sent by Atlassian JIRA (v7.6.3#76005)
Re: [VOTE] Apache Bahir 2.2.3 (RC1)
On Sun, May 19, 2019 at 7:53 AM Luciano Resende wrote: > > Dear community member, > > Please vote to approve the release of Apache Bahir 2.2.3 (RC1) based on > Apache Spark 2.2.3. > > Tag: v2.2.3-rc1 (963c644ff96615bf53ed6570abcf6930d1532776) > > https://github.com/apache/bahir/tree/v2.2.3-rc1 > > Release files: > > https://repository.apache.org/content/repositories/orgapachebahir-1028 > > Source distribution: > > https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.2.3-rc1/ > > > The vote is open for at least 72 hours and passes if a majority of at least > 3 +1 PMC votes are cast. > > [ ] +1 Release this package as Apache Bahir 2.2.3 > [ ] -1 Do not release this package because ... > > > Thanks for your vote! > Vote passed with binding +1 from: Luciano Resende Christian Kadner Ted Yu Thanks for your time reviewing the release. -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
Re: [VOTE] Apache Bahir 2.2.3 (RC1)
Thanks, Christian, We are still looking for one more vote. Note that we had two concurrent votes going on and this is a different vote thread (e.g. 2.2.3 versus 2.3.3) On Tue, May 28, 2019 at 12:20 AM Christian Kadner wrote: > > +1 > > Sorry for the delay, I was traveling last week. > > Thanks Luciano! > > ~ Christian > > On 2019/05/19 14:53:32, Luciano Resende wrote: > > Dear community member, > > > > Please vote to approve the release of Apache Bahir 2.2.3 (RC1) based on > > Apache Spark 2.2.3. > > > > Tag: v2.2.3-rc1 (963c644ff96615bf53ed6570abcf6930d1532776) > > > > https://github.com/apache/bahir/tree/v2.2.3-rc1 > > > > Release files: > > > > https://repository.apache.org/content/repositories/orgapachebahir-1028 > > > > Source distribution: > > > > https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.2.3-rc1/ > > > > > > The vote is open for at least 72 hours and passes if a majority of at least > > 3 +1 PMC votes are cast. > > > > [ ] +1 Release this package as Apache Bahir 2.2.3 > > [ ] -1 Do not release this package because ... > > > > > > Thanks for your vote! > > > > -- > > Luciano Resende > > http://twitter.com/lresende1975 > > http://lresende.blogspot.com/ > > -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
Re: [VOTE] Apache Bahir 2.3.3 (RC1)
On Sun, May 19, 2019 at 9:15 AM Luciano Resende wrote: > > Dear community member, > > Please vote to approve the release of Apache Bahir 2.3.3 (RC1) based on > Apache Spark 2.3.3. > > Tag: v2.3.3-rc1 (e29034cad9bec11da1b81324b1f67118772861d2) > > https://github.com/apache/bahir/tree/v2.3.3-rc1 > > Release files: > > https://repository.apache.org/content/repositories/orgapachebahir-1029 > > Source distribution: > > https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.3.3-rc1/ > > > The vote is open for at least 72 hours and passes if a majority of at least > 3 +1 PMC votes are cast. > > [ ] +1 Release this package as Apache Bahir 2.3.3 > [ ] -1 Do not release this package because ... > > > Thanks for your vote! > Vote passed with 3 binding +1 from : Jean-Baptiste Onofré Luciano Resende Ted Yu And 1 non-binding +1 from: Lukasz Antoniak Thanks for the time reviewing the release and voting -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
Re: [VOTE] Apache Bahir 2.2.3 (RC1)
Of course, my +1 to the release. For the ones that already voted on the 2.3.3 release, please help with this vote as well, as the changes are very small, as described in https://lists.apache.org/thread.html/4034eb65228d54e4d0b5b3103000122932cc37f8adf920e42a486aec@%3Cdev.bahir.apache.org%3E On Sun, May 19, 2019 at 7:53 AM Luciano Resende wrote: > > Dear community member, > > Please vote to approve the release of Apache Bahir 2.2.3 (RC1) based on > Apache Spark 2.2.3. > > Tag: v2.2.3-rc1 (963c644ff96615bf53ed6570abcf6930d1532776) > > https://github.com/apache/bahir/tree/v2.2.3-rc1 > > Release files: > > https://repository.apache.org/content/repositories/orgapachebahir-1028 > > Source distribution: > > https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.2.3-rc1/ > > > The vote is open for at least 72 hours and passes if a majority of at least > 3 +1 PMC votes are cast. > > [ ] +1 Release this package as Apache Bahir 2.2.3 > [ ] -1 Do not release this package because ... > > > Thanks for your vote! > > -- > Luciano Resende > http://twitter.com/lresende1975 > http://lresende.blogspot.com/ -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
Re: [VOTE] Apache Bahir 2.3.3 (RC1)
Of course, my +1 for the release. We are still looking for one extra vote to approve the release; please take a little time and help approve the release. On Sun, May 19, 2019 at 9:15 AM Luciano Resende wrote: > > Dear community member, > > Please vote to approve the release of Apache Bahir 2.3.3 (RC1) based on > Apache Spark 2.3.3. > > Tag: v2.3.3-rc1 (e29034cad9bec11da1b81324b1f67118772861d2) > > https://github.com/apache/bahir/tree/v2.3.3-rc1 > > Release files: > > https://repository.apache.org/content/repositories/orgapachebahir-1029 > > Source distribution: > > https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.3.3-rc1/ > > > The vote is open for at least 72 hours and passes if a majority of at least > 3 +1 PMC votes are cast. > > [ ] +1 Release this package as Apache Bahir 2.3.3 > [ ] -1 Do not release this package because ... > > > Thanks for your vote! > > -- > Luciano Resende > http://twitter.com/lresende1975 > http://lresende.blogspot.com/ -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[jira] [Resolved] (BAHIR-200) change from docker tests to kudu-test-utils
[ https://issues.apache.org/jira/browse/BAHIR-200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-200. --- Resolution: Fixed Fix Version/s: Flink-Next > change from docker tests to kudu-test-utils > --- > > Key: BAHIR-200 > URL: https://issues.apache.org/jira/browse/BAHIR-200 > Project: Bahir > Issue Type: Improvement > Components: Flink Streaming Connectors >Reporter: Joao Boto >Assignee: Joao Boto >Priority: Major > Fix For: Flink-Next > > > As of version 1.9.0, Kudu ships with an experimental feature called the > binary test JAR. This feature gives people who want to test against Kudu the > capability to start a Kudu "mini cluster" from Java or another JVM-based > language without having to first build Kudu locally > > https://kudu.apache.org/docs/developing.html#_jvm_based_integration_testing -- This message was sent by Atlassian JIRA (v7.6.3#76005)
Re: Apache Bahir release for the Flink runtime
What should we call the next release then? Just 1.1? 1.5? 2.0? On Mon, May 20, 2019 at 00:13 Joao Boto wrote: > There are new connectors and some updates to previous connectors. > Because of that, I think so. > > As for synchronizing with the Flink version, it could be interesting, but > we would have to release more often > > > Joao Boto > > On Sun, May 19, 2019 at 23:05, Luciano Resende () > wrote: > >> It has been a while since the last Bahir release for the Apache Flink >> runtime, should we create one? >> >> Also, the last release was 1.0, what should we call it now (as Flink >> is around 1.8)? Any synchronization required/desired? >> >> -- >> Luciano Resende >> http://twitter.com/lresende1975 >> http://lresende.blogspot.com/ >> > -- Sent from my Mobile device
[jira] [Resolved] (BAHIR-206) bump flink to 1.8.0
[ https://issues.apache.org/jira/browse/BAHIR-206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-206. --- Resolution: Fixed Fix Version/s: Flink-Next > bump flink to 1.8.0 > --- > > Key: BAHIR-206 > URL: https://issues.apache.org/jira/browse/BAHIR-206 > Project: Bahir > Issue Type: Improvement > Components: Flink Streaming Connectors >Reporter: Joao Boto >Assignee: Joao Boto >Priority: Major > Fix For: Flink-Next > > > bump flink version to 1.8.0 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[VOTE] Apache Bahir 2.3.3 (RC1)
Dear community member, Please vote to approve the release of Apache Bahir 2.3.3 (RC1) based on Apache Spark 2.3.3. Tag: v2.3.3-rc1 (e29034cad9bec11da1b81324b1f67118772861d2) https://github.com/apache/bahir/tree/v2.3.3-rc1 Release files: https://repository.apache.org/content/repositories/orgapachebahir-1029 Source distribution: https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.3.3-rc1/ The vote is open for at least 72 hours and passes if a majority of at least 3 +1 PMC votes are cast. [ ] +1 Release this package as Apache Bahir 2.3.3 [ ] -1 Do not release this package because ... Thanks for your vote! -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
Voting on Bahir 2.2.3 and 2.3.3
These are small releases that update transitive dependencies to the respective Spark releases and were based on the previous Bahir release tags. To help you vote on the releases, please see the difference between the two tags: Release 2.2.3 https://github.com/apache/bahir/compare/v2.2.2...v2.2.3-rc1 Release 2.3.3 https://github.com/apache/bahir/compare/v2.3.2...v2.3.3-rc1 -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[VOTE] Apache Bahir 2.2.3 (RC1)
Dear community member, Please vote to approve the release of Apache Bahir 2.2.3 (RC1) based on Apache Spark 2.2.3. Tag: v2.2.3-rc1 (963c644ff96615bf53ed6570abcf6930d1532776) https://github.com/apache/bahir/tree/v2.2.3-rc1 Release files: https://repository.apache.org/content/repositories/orgapachebahir-1028 Source distribution: https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.2.3-rc1/ The vote is open for at least 72 hours and passes if a majority of at least 3 +1 PMC votes are cast. [ ] +1 Release this package as Apache Bahir 2.2.3 [ ] -1 Do not release this package because ... Thanks for your vote! -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[jira] [Assigned] (BAHIR-177) Siddhi Library state recovery causes an Exception
[ https://issues.apache.org/jira/browse/BAHIR-177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende reassigned BAHIR-177: - Assignee: Dominik Wosiński > Siddhi Library state recovery causes an Exception > - > > Key: BAHIR-177 > URL: https://issues.apache.org/jira/browse/BAHIR-177 > Project: Bahir > Issue Type: Bug >Reporter: Dominik Wosiński >Assignee: Dominik Wosiński >Priority: Blocker > Fix For: Flink-Next > > > Currently, Flink offers a way to store state, and this is utilized by the Siddhi > library. The problem is that Siddhi internally relies on operator IDs that > are generated automatically when the _SiddhiAppRuntime_ is initialized. This > means that if the job is restarted, new operator IDs are assigned for Siddhi, > yet Flink stores the states under the old IDs. > Siddhi uses an operator ID to get state from the map: > _snapshotable.restoreState(snapshots.get(snapshotable.getElementId()));_ > Siddhi does not make a null check on the retrieved values, thus > _restoreState_ throws an NPE, which is caught, and > _CannotRestoreSiddhiAppStateException_ is thrown instead. Any Flink job will > go into an infinite restart loop after facing this issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (BAHIR-202) Improve KuduSink throughput by using async FlushMode
[ https://issues.apache.org/jira/browse/BAHIR-202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende reassigned BAHIR-202: - Assignee: Suxing Lee > Improve KuduSink throughput by using async FlushMode > > > Key: BAHIR-202 > URL: https://issues.apache.org/jira/browse/BAHIR-202 > Project: Bahir > Issue Type: Improvement > Components: Flink Streaming Connectors >Affects Versions: Flink-1.0 >Reporter: Suxing Lee >Assignee: Suxing Lee >Priority: Major > Fix For: Flink-Next > > > Improve KuduSink throughput by using async FlushMode. > And using checkpoint to ensure at-least-once in async flush mode. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
Bahir Releases for Spark 2.2.3 and 2.3.3
Just a heads-up that I am planning to work over the weekend to catch up on Spark releases 2.2.3 and 2.3.3. For these releases, I am starting by creating a branch from the previous release tag (e.g. 2.2.2 and 2.3.2), as master has been updated to Spark 2.4, which moved to Scala 2.12. Please let me know if you have any questions or concerns. -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
New Tags for Bahir-Spark
If you noticed a few new tags being added to the Bahir Spark repo, it is because I am doing some cleanup and creating the "official" tags based on the approved release candidates. -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[jira] [Resolved] (BAHIR-203) Pubsub manual acknowledgement
[ https://issues.apache.org/jira/browse/BAHIR-203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-203. --- Resolution: Fixed Assignee: Lukasz Antoniak Fix Version/s: (was: Spark-2.3.0) > Pubsub manual acknowledgement > -- > > Key: BAHIR-203 > URL: https://issues.apache.org/jira/browse/BAHIR-203 > Project: Bahir > Issue Type: Improvement > Components: Spark Streaming Connectors >Reporter: Danny Tachev >Assignee: Lukasz Antoniak >Priority: Minor > Fix For: Spark-2.4.0 > > > Hi, > We have a use case where acknowledgement has to be sent at a later stage when > streaming data from google pubsub. Any chance for the acknowledgement in > PubsubReceiver to be made optional and ackId to be included in the > SparkPubsubMessage model? > Example: > {code:java} > store(receivedMessages > .map(x => { > val sm = new SparkPubsubMessage > sm.message = x.getMessage > sm.ackId = x.getAckId > sm > }) > .iterator) > if ( ... ) { > val ackRequest = new AcknowledgeRequest() > ackRequest.setAckIds(receivedMessages.map(x => x.getAckId).asJava) > client.projects().subscriptions().acknowledge(subscriptionFullName, > ackRequest).execute() > }{code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (BAHIR-202) Improve KuduSink throughput by using async FlushMode
[ https://issues.apache.org/jira/browse/BAHIR-202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-202. --- Resolution: Fixed > Improve KuduSink throughput by using async FlushMode > > > Key: BAHIR-202 > URL: https://issues.apache.org/jira/browse/BAHIR-202 > Project: Bahir > Issue Type: Improvement > Components: Flink Streaming Connectors >Affects Versions: Flink-1.0 >Reporter: Suxing Lee >Priority: Major > Fix For: Flink-Next > > > Improve KuduSink throughput by using async FlushMode. > And using checkpoint to ensure at-least-once in async flush mode. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
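The at-least-once guarantee described in BAHIR-202 comes from pairing an async flush mode with a flush at checkpoint time: records are only buffered on the fast path, and a checkpoint completes only after everything buffered before it has been flushed. A minimal plain-Java sketch of that pattern follows — the names are hypothetical and an in-memory list stands in for Kudu, so this is not the connector's actual code.

```java
import java.util.ArrayList;
import java.util.List;

// Buffer rows on write (the async fast path) and flush them only when a
// checkpoint is taken, so checkpoint completion implies the rows are
// durably written -- the basis of at-least-once with an async flush mode.
// Names are hypothetical; a list stands in for the Kudu session.
class BufferedSinkSketch {
    private final List<String> buffer = new ArrayList<>();   // not yet durable
    private final List<String> durable = new ArrayList<>();  // stands in for Kudu

    void invoke(String row) {      // per-record write: no flush, just buffer
        buffer.add(row);
    }

    void snapshotState() {         // called when a checkpoint is taken
        durable.addAll(buffer);    // stands in for flushing the session
        buffer.clear();
    }

    int pending() { return buffer.size(); }
    List<String> durable() { return durable; }
}
```

On failure, anything still in the buffer is replayed from the last checkpoint, so rows may be written twice but never lost — hence at-least-once, not exactly-once.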
[jira] [Resolved] (BAHIR-177) Siddhi Library state recovery causes an Exception
[ https://issues.apache.org/jira/browse/BAHIR-177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-177. --- Resolution: Fixed Assignee: (was: Dominik Wosiński) Fix Version/s: Flink-Next > Siddhi Library state recovery causes an Exception > - > > Key: BAHIR-177 > URL: https://issues.apache.org/jira/browse/BAHIR-177 > Project: Bahir > Issue Type: Bug >Reporter: Dominik Wosiński >Priority: Blocker > Fix For: Flink-Next > > > Currently, Flink offers a way to store state, and this is utilized by the Siddhi > library. The problem is that Siddhi internally relies on operator IDs that > are generated automatically when the _SiddhiAppRuntime_ is initialized. This > means that if the job is restarted, new operator IDs are assigned for Siddhi, > yet Flink stores the states under the old IDs. > Siddhi uses an operator ID to get state from the map: > _snapshotable.restoreState(snapshots.get(snapshotable.getElementId()));_ > Siddhi does not make a null check on the retrieved values, thus > _restoreState_ throws an NPE, which is caught, and > _CannotRestoreSiddhiAppStateException_ is thrown instead. Any Flink job will > go into an infinite restart loop after facing this issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
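The failure mode in BAHIR-177 can be illustrated without Siddhi itself: a snapshot map is keyed by auto-generated operator IDs, a lookup with a post-restart ID returns null, and a missing null check turns that into an NPE. The plain-Java sketch below uses hypothetical names (these are not Bahir's or Siddhi's actual classes) to show the unsafe lookup next to a defensive variant.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for snapshot restoration keyed by operator ID.
class SnapshotRestoreSketch {
    // Snapshots persisted under the operator IDs from before the restart.
    static final Map<String, byte[]> snapshots = new HashMap<>();

    // Unsafe restore: passes a possibly-null snapshot straight through,
    // mirroring the NPE described in BAHIR-177.
    static int unsafeRestore(String operatorId) {
        byte[] state = snapshots.get(operatorId);
        return state.length; // NPE when the ID changed after a restart
    }

    // Defensive restore: skip restoration when no snapshot exists for the
    // (newly generated) operator ID, instead of failing the whole job.
    static int safeRestore(String operatorId) {
        byte[] state = snapshots.get(operatorId);
        if (state == null) {
            return 0; // nothing to restore; start from empty state
        }
        return state.length;
    }
}
```

With the unsafe path, every restore attempt fails, the job restarts, the IDs change again, and the restart loop never terminates — which is why the null check matters.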
[jira] [Resolved] (BAHIR-190) ActiveMQ connector stops on empty queue
[ https://issues.apache.org/jira/browse/BAHIR-190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-190. --- Resolution: Fixed Fix Version/s: Flink-Next > ActiveMQ connector stops on empty queue > --- > > Key: BAHIR-190 > URL: https://issues.apache.org/jira/browse/BAHIR-190 > Project: Bahir > Issue Type: Bug > Components: Flink Streaming Connectors >Affects Versions: Flink-1.0 >Reporter: Stephan Brosinski >Priority: Critical > Fix For: Flink-Next > > > I tried the ActiveMQ Flink connector. Reading from an ActiveMQ queue, it > seems the connector exits once there are no more messages in the queue. This > ends the Flink job processing the stream. > To me it seems that the while loop inside the run method (AMQSource.java, > line 222) should not do a return, but a continue, if the message is not an > instance of ByteMessage, e.g. null. > If I'm right, I can create a pull request showing the change. > To reproduce: > > {code:java} > ActiveMQConnectionFactory connectionFactory = new > ActiveMQConnectionFactory("xxx", "xxx", > "tcp://localhost:61616?jms.redeliveryPolicy.maximumRedeliveries=1"); > AMQSourceConfig amqConfig = new > AMQSourceConfig.AMQSourceConfigBuilder() > .setConnectionFactory(connectionFactory) > .setDestinationName("test") > .setDestinationType(DestinationType.QUEUE) > .setDeserializationSchema(new SimpleStringSchema()) > .build(); > AMQSource amqSource = new AMQSource<>(amqConfig); > env.addSource(amqSource).print(); > env.setParallelism(1).execute("ActiveMQ Consumer");{code} > Then point the Flink job at an empty ActiveMQ queue. > > Not sure if this is a bug, but it's not what I expected when I used the > connector. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
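The return-versus-continue distinction the reporter describes can be shown with a plain-Java loop, independent of ActiveMQ. In this sketch (hypothetical names, not AMQSource's actual code), a null entry stands in for a receive that yields no ByteMessage: `return` ends the loop — and with it the source — while `continue` keeps it alive for later messages.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Model a receive loop where a null entry stands in for a receive that is
// not a ByteMessage (or an empty queue). With skipNulls=false the loop
// returns early, ending the source as reported in BAHIR-190; with
// skipNulls=true it continues and picks up later messages.
class ReceiveLoopSketch {
    static List<String> consume(List<String> received, boolean skipNulls) {
        List<String> emitted = new ArrayList<>();
        for (String msg : received) {
            if (msg == null) {
                if (skipNulls) {
                    continue; // keep the source running
                }
                return emitted; // source exits; the Flink job ends
            }
            emitted.add(msg);
        }
        return emitted;
    }
}
```

Running `consume` over `["m1", null, "m2"]` emits only `m1` with the early return, but both `m1` and `m2` when nulls are skipped.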
Re: Apache Bahir Added to Apache Kibble Demo
Good luck, and let us know if you need any extra info. Please also share any insights you find about the project. On Wed, May 1, 2019 at 08:03 Sharan Foga wrote: > Hi All > > A quick note to let you know that I am adding Bahir to the Kibble demo. > It's part of my research on the transmission of Apache culture through > incubation. I'd like to use the Bahir data as a comparison against projects > that did undergo incubation. Not sure what the result will be at this stage > - so it will be interesting to find out :-) > > Thanks > Sharan > -- Sent from my Mobile device
[jira] [Resolved] (BAHIR-187) Reduce size of sql-cloudant test database
[ https://issues.apache.org/jira/browse/BAHIR-187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-187. --- Resolution: Fixed Fix Version/s: Spark-2.4.0 > Reduce size of sql-cloudant test database > - > > Key: BAHIR-187 > URL: https://issues.apache.org/jira/browse/BAHIR-187 > Project: Bahir > Issue Type: Improvement >Reporter: Esteban Laver >Assignee: Esteban Laver >Priority: Minor > Fix For: Spark-2.4.0 > > > Reduce the number of documents from 1967 to 100 in the n_flight.json test > file. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (BAHIR-141) Support for Array[Byte] in SparkGCPCredentials.jsonServiceAccount
[ https://issues.apache.org/jira/browse/BAHIR-141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-141. --- Resolution: Fixed Assignee: Lukasz Antoniak Fix Version/s: (was: Spark-2.2.0) (was: Spark-2.0.2) Spark-2.4.0 > Support for Array[Byte] in SparkGCPCredentials.jsonServiceAccount > - > > Key: BAHIR-141 > URL: https://issues.apache.org/jira/browse/BAHIR-141 > Project: Bahir > Issue Type: Improvement > Components: Spark Streaming Connectors >Affects Versions: Spark-2.0.2, Spark-2.1.1, Spark-2.2.0 >Reporter: Ankit Agrahari >Assignee: Lukasz Antoniak >Priority: Minor > Fix For: Spark-2.4.0 > > > The existing implementation of SparkGCPCredentials.jsonServiceAccount only > supports reading the credential file from a path given in the config (i.e. a > local path). If the developer does not have access to the worker nodes, it is > easier to pass the byte array of the JSON service file rather than a path > (which will not be available on the worker nodes and will lead to a > FileNotFoundException) from the machine the driver submits the job from. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
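The shape of this improvement (accept the service-account key as in-memory bytes so it need not exist as a file on worker nodes) can be sketched standalone. The names here (`CredsSketch`, `JsonKey`, `fromBytes`, `fromPath`) are illustrative, not Bahir's actual API:

```scala
import java.nio.file.{Files, Paths}

// Hypothetical sketch: carry credentials as raw bytes so they can be
// serialized to executors, alongside the original path-based entry point.
object CredsSketch {
  final case class JsonKey(bytes: Array[Byte])

  // Byte-array variant: works even when only the driver machine has the file.
  def fromBytes(bytes: Array[Byte]): JsonKey = JsonKey(bytes.clone())

  // Path-based variant: reads eagerly on the caller's machine; a path that
  // exists only on the driver would fail on workers with FileNotFoundException.
  def fromPath(path: String): JsonKey =
    JsonKey(Files.readAllBytes(Paths.get(path)))
}
```

Reading the bytes once on the driver and shipping them in the config object avoids the worker-side file lookup entirely.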
[jira] [Created] (BAHIR-197) Add FlumeSink to Flume connector
Luciano Resende created BAHIR-197: - Summary: Add FlumeSink to Flume connector Key: BAHIR-197 URL: https://issues.apache.org/jira/browse/BAHIR-197 Project: Bahir Issue Type: Bug Components: Flink Streaming Connectors Affects Versions: Flink-Next Reporter: Luciano Resende -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (BAHIR-107) Build and test Bahir against Scala 2.12
[ https://issues.apache.org/jira/browse/BAHIR-107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782469#comment-16782469 ] Luciano Resende commented on BAHIR-107: --- For non-released code (SNAPSHOT), you need to check out the code and build it yourself. > Build and test Bahir against Scala 2.12 > --- > > Key: BAHIR-107 > URL: https://issues.apache.org/jira/browse/BAHIR-107 > Project: Bahir > Issue Type: Improvement >Reporter: Ted Yu >Assignee: Lukasz Antoniak >Priority: Major > Fix For: Spark-2.4.0 > > > Spark has started effort for accommodating Scala 2.12 > See SPARK-14220 . > This JIRA is to track requirements for building Bahir on Scala 2.12.7 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (BAHIR-184) Please delete old releases from mirroring system
[ https://issues.apache.org/jira/browse/BAHIR-184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-184. --- Resolution: Fixed Assignee: Luciano Resende Fix Version/s: Not Applicable Removed old releases, and left the latest one from each branch. > Please delete old releases from mirroring system > > > Key: BAHIR-184 > URL: https://issues.apache.org/jira/browse/BAHIR-184 > Project: Bahir > Issue Type: Bug >Reporter: Sebb > Assignee: Luciano Resende >Priority: Major > Fix For: Not Applicable > > > To reduce the load on the volunteer 3rd party mirrors, projects must remove > non-current releases from the mirroring system. > The following releases appear to be obsolete, as they do not appear on the > Bahir download page: > 2.1.1 > 2.1.2 > 2.1.3 > 2.2.0 > 2.2.1 > 2.2.2 > 2.3.0 > 2.3.1 > Furthermore, many of the underlying Spark releases are no longer current > according to the Spark release notes, e.g. > http://spark.apache.org/releases/spark-release-2-3-2.html > says that it replaces earlier 2.3.x releases, so 2.3.0 and 2.3.1 are not > current Spark releases. > It's unfair to expect the mirrors to carry old releases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (BAHIR-191) update flume version to 1.9.0
[ https://issues.apache.org/jira/browse/BAHIR-191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-191. --- Resolution: Fixed Fix Version/s: Flink-Next > update flume version to 1.9.0 > - > > Key: BAHIR-191 > URL: https://issues.apache.org/jira/browse/BAHIR-191 > Project: Bahir > Issue Type: Improvement >Reporter: Joao Boto >Assignee: Joao Boto >Priority: Major > Fix For: Flink-Next > > > Flume has a new version; > update to the latest one. > > https://flume.apache.org/releases/1.9.0.html -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (BAHIR-195) bump flink version to 1.7.1
[ https://issues.apache.org/jira/browse/BAHIR-195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-195. --- Resolution: Fixed Fix Version/s: Flink-Next > bump flink version to 1.7.1 > --- > > Key: BAHIR-195 > URL: https://issues.apache.org/jira/browse/BAHIR-195 > Project: Bahir > Issue Type: Improvement >Reporter: Joao Boto >Assignee: Joao Boto >Priority: Major > Fix For: Flink-Next > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (BAHIR-194) update kudu version to 1.8.0
[ https://issues.apache.org/jira/browse/BAHIR-194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-194. --- Resolution: Fixed Fix Version/s: Flink-Next > update kudu version to 1.8.0 > > > Key: BAHIR-194 > URL: https://issues.apache.org/jira/browse/BAHIR-194 > Project: Bahir > Issue Type: Improvement >Reporter: Joao Boto >Assignee: Joao Boto >Priority: Major > Fix For: Flink-Next > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (BAHIR-107) Build and test Bahir against Scala 2.12
[ https://issues.apache.org/jira/browse/BAHIR-107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende reassigned BAHIR-107: - Assignee: Lukasz Antoniak > Build and test Bahir against Scala 2.12 > --- > > Key: BAHIR-107 > URL: https://issues.apache.org/jira/browse/BAHIR-107 > Project: Bahir > Issue Type: Improvement >Reporter: Ted Yu >Assignee: Lukasz Antoniak >Priority: Major > > Spark has started effort for accommodating Scala 2.12 > See SPARK-14220 . > This JIRA is to track requirements for building Bahir on Scala 2.12.7 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
Re: Upgrading to Scala 2.12
On Tue, Jan 22, 2019 at 2:54 AM Łukasz Antoniak wrote: > > Team, > > Since version 2.4.0 the Spark team decided to introduce support for Scala 2.12 > and retain 2.11 for backward compatibility. As part of BAHIR-107, I have > been working to upgrade the code base to Scala 2.12 and Spark 2.4.0 ( > https://github.com/apache/bahir/pull/76/files). Do we plan to continue > support for Scala 2.11? In my opinion, Bahir should follow Spark in terms > of Scala binary compatibility. I have already checked that the transitive > dependencies that I had to upgrade (org.json4s:json4s-jackson and > com.twitter:algebird-core) provide artifacts for both Scala 2.11 and 2.12. > > Lukasz +1 for maintaining compatibility as much as possible... As for the publishing code, please look into dev/release-build.sh to see how we publish releases and snapshots to accommodate the 2.11 and 2.12 Scala versions. -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[jira] [Resolved] (BAHIR-175) Recovering from Failures with Checkpointing Exception(Mqtt)
[ https://issues.apache.org/jira/browse/BAHIR-175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-175. --- Resolution: Fixed Assignee: Lukasz Antoniak Fix Version/s: Spark-2.4.0 > Recovering from Failures with Checkpointing Exception(Mqtt) > --- > > Key: BAHIR-175 > URL: https://issues.apache.org/jira/browse/BAHIR-175 > Project: Bahir > Issue Type: Bug > Components: Spark Structured Streaming Connectors >Reporter: lynn >Assignee: Lukasz Antoniak >Priority: Major > Fix For: Spark-2.4.0 > > > Spark Version:2.2.0 spark-streaming-sql-mqtt version: 2.2.0 > > # Reading checkpoints offsets error > org.apache.spark.sql.execution.streaming.SerializedOffset cannot be cast to > org.apache.spark.sql.execution.streaming.LongOffset > > solution: > The MqttStreamSource.scala source file: > Line 149, getBatch Method: > val startIndex = start.getOrElse(LongOffset(0)) match > { case offset: SerializedOffset => offset.json.toInt case offset: LongOffset > => offset.offset.toInt } > val endIndex = end match > { case offset: SerializedOffset => offset.json.toInt case offset: LongOffset > => offset.offset.toInt } > 2. The MqttStreamSource.scala source file > getBatch Method: > val data: ArrayBuffer[(String, Timestamp)] = ArrayBuffer.empty > // Move consumed messages to persistent store. 
> (startIndex + 1 to endIndex).foreach > { id => val element = messages.getOrElse(id, store.retrieve(id)) data += > element store.store(id, element) messages.remove(id, element) } > The following line: > val element = messages.getOrElse(id, store.retrieve(id)) throws error: > java.lang.ClassCastException: scala.Tuple2 cannot be cast to > scala.runtime.Nothing$ > at > org.apache.bahir.sql.streaming.mqtt.MQTTTextStreamSource$$anonfun$getBatch$1$$anonfun$3.apply(MQTTStreamSource.scala:160) > at > org.apache.bahir.sql.streaming.mqtt.MQTTTextStreamSource$$anonfun$getBatch$1$$anonfun$3.apply(MQTTStreamSource.scala:160) > at scala.collection.MapLike$class.getOrElse(MapLike.scala:128) > at scala.collection.concurrent.TrieMap.getOrElse(TrieMap.scala:633) > at > org.apache.bahir.sql.streaming.mqtt.MQTTTextStreamSource$$anonfun$getBatch$1.apply$mcZI$sp(MQTTStreamSource.scala:160) > at > org.apache.bahir.sql.streaming.mqtt.MQTTTextStreamSource$$anonfun$getBatch$1.apply(MQTTStreamSource.scala:159) > at > org.apache.bahir.sql.streaming.mqtt.MQTTTextStreamSource$$anonfun$getBatch$1.apply(MQTTStreamSource.scala:159) > at scala.collection.immutable.Range.foreach(Range.scala:160) > at > org.apache.bahir.sql.streaming.mqtt.MQTTTextStreamSource.getBatch(MQTTStreamSource.scala:159) > at > org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$populateStartOffsets$3.apply(StreamExecution.scala:470) > at > org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$populateStartOffsets$3.apply(StreamExecution.scala:466) > at scala.collection.Iterator$class.foreach(Iterator.scala:891) > at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) > at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) > at > org.apache.spark.sql.execution.streaming.StreamProgress.foreach(StreamProgress.scala:25) > at > 
org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$populateStartOffsets(StreamExecution.scala:466) > at > org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(StreamExecution.scala:297) > at > org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$apply$mcZ$sp$1.apply(StreamExecution.scala:294) > at > org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$apply$mcZ$sp$1.apply(StreamExecution.scala:294) > at > org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:279) > at > org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58) > at > org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1.apply$mcZ$sp(StreamE
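The offset-normalization fix described in the report above can be distilled into a standalone sketch. The simplified `Offset` types here stand in for Spark's streaming offset classes, and `batchRange` is a hypothetical helper, not Bahir's actual code:

```scala
// On restart, offsets recovered from a checkpoint arrive as SerializedOffset
// (raw JSON), not LongOffset, so a direct cast fails with ClassCastException.
// The fix: normalize both shapes to a plain Long before building the range.
sealed trait Offset
final case class LongOffset(offset: Long) extends Offset
final case class SerializedOffset(json: String) extends Offset

def toIndex(o: Offset): Long = o match {
  case SerializedOffset(json) => json.toLong // checkpoint-recovered offset
  case LongOffset(v)          => v
}

def batchRange(start: Option[Offset], end: Offset): Range.Inclusive = {
  val startIndex = toIndex(start.getOrElse(LongOffset(0)))
  val endIndex   = toIndex(end)
  (startIndex + 1).toInt to endIndex.toInt
}
```

A batch restarted from a checkpointed offset and one computed from live LongOffsets now produce the same message-id range.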
[jira] [Resolved] (BAHIR-45) Add cleanup support to SQL-STREAMING-MQTT Source once SPARK-16963 is fixed.
[ https://issues.apache.org/jira/browse/BAHIR-45?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-45. -- Resolution: Fixed Fix Version/s: Spark-2.4.0 Resolving per comment above, please reopen if there is anything left over that needs to be addressed. > Add cleanup support to SQL-STREAMING-MQTT Source once SPARK-16963 is fixed. > --- > > Key: BAHIR-45 > URL: https://issues.apache.org/jira/browse/BAHIR-45 > Project: Bahir > Issue Type: Improvement >Reporter: Prashant Sharma >Assignee: Prashant Sharma >Priority: Major > Fix For: Spark-2.4.0 > > > Currently, the source tries to persist all incoming messages to allow for a > minimal level of fault tolerance. Once SPARK-16963 is fixed, we would have a > way to perform cleanup of messages no longer required. This would allow us to > make the connector more fault tolerant. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (BAHIR-45) Add cleanup support to SQL-STREAMING-MQTT Source once SPARK-16963 is fixed.
[ https://issues.apache.org/jira/browse/BAHIR-45?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende reassigned BAHIR-45: Assignee: Prashant Sharma > Add cleanup support to SQL-STREAMING-MQTT Source once SPARK-16963 is fixed. > --- > > Key: BAHIR-45 > URL: https://issues.apache.org/jira/browse/BAHIR-45 > Project: Bahir > Issue Type: Improvement >Reporter: Prashant Sharma >Assignee: Prashant Sharma >Priority: Major > > Currently, the source tries to persist all incoming messages to allow for a > minimal level of fault tolerance. Once SPARK-16963 is fixed, we would have a > way to perform cleanup of messages no longer required. This would allow us to > make the connector more fault tolerant. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (BAHIR-183) Using HDFS for saving message for mqtt source
[ https://issues.apache.org/jira/browse/BAHIR-183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-183. --- Resolution: Fixed Assignee: Wang Yanlin Fix Version/s: (was: Spark-2.2.1) Spark-2.4.0 > Using HDFS for saving message for mqtt source > - > > Key: BAHIR-183 > URL: https://issues.apache.org/jira/browse/BAHIR-183 > Project: Bahir > Issue Type: Improvement > Components: Spark Structured Streaming Connectors >Affects Versions: Spark-2.2.0 >Reporter: Wang Yanlin >Assignee: Wang Yanlin >Priority: Major > Fix For: Spark-2.4.0 > > > Currently in spark-sql-streaming-mqtt, the received MQTT message is saved in > a local file by the driver; this risks losing data in cluster mode when an > application master failover occurs. Saving incoming MQTT messages to a > directory under the checkpoint location will solve this problem. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (BAHIR-186) Support SSL connection in MQTT SQL Streaming
[ https://issues.apache.org/jira/browse/BAHIR-186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-186. --- Resolution: Fixed Fix Version/s: Spark-2.4.0 > Support SSL connection in MQTT SQL Streaming > > > Key: BAHIR-186 > URL: https://issues.apache.org/jira/browse/BAHIR-186 > Project: Bahir > Issue Type: New Feature > Components: Spark Streaming Connectors >Reporter: Lukasz Antoniak >Assignee: Lukasz Antoniak >Priority: Major > Fix For: Spark-2.4.0 > > > Mailing list discussion: > https://www.mail-archive.com/user@bahir.apache.org/msg00022.html. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (BAHIR-186) Support SSL connection in MQTT SQL Streaming
[ https://issues.apache.org/jira/browse/BAHIR-186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende reassigned BAHIR-186: - Assignee: Lukasz Antoniak > Support SSL connection in MQTT SQL Streaming > > > Key: BAHIR-186 > URL: https://issues.apache.org/jira/browse/BAHIR-186 > Project: Bahir > Issue Type: New Feature > Components: Spark Streaming Connectors >Reporter: Lukasz Antoniak >Assignee: Lukasz Antoniak >Priority: Major > > Mailing list discussion: > https://www.mail-archive.com/user@bahir.apache.org/msg00022.html. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (BAHIR-103) Refactoring of files BahirUtils.scala & Logging.scala into bahir-common
[ https://issues.apache.org/jira/browse/BAHIR-103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-103. --- Resolution: Fixed Fix Version/s: Spark-2.4.0 > Refactoring of files BahirUtils.scala & Logging.scala into bahir-common > --- > > Key: BAHIR-103 > URL: https://issues.apache.org/jira/browse/BAHIR-103 > Project: Bahir > Issue Type: Improvement > Components: Spark Structured Streaming Connectors >Reporter: Subhobrata Dey >Assignee: Lukasz Antoniak >Priority: Major > Fix For: Spark-2.4.0 > > > The files BahirUtils.scala & Logging.scala present under the package > `org.apache.bahir.utils` > in the streaming-sql connectors should be refactored into a > `bahir-common` project, which will be shared across extensions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (BAHIR-103) Refactoring of files BahirUtils.scala & Logging.scala into bahir-common
[ https://issues.apache.org/jira/browse/BAHIR-103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende reassigned BAHIR-103: - Assignee: Lukasz Antoniak > Refactoring of files BahirUtils.scala & Logging.scala into bahir-common > --- > > Key: BAHIR-103 > URL: https://issues.apache.org/jira/browse/BAHIR-103 > Project: Bahir > Issue Type: Improvement > Components: Spark Structured Streaming Connectors >Reporter: Subhobrata Dey >Assignee: Lukasz Antoniak >Priority: Major > > The files BahirUtils.scala & Logging.scala present under the package > `org.apache.bahir.utils` > in the streaming-sql connectors should be refactored into a > `bahir-common` project, which will be shared across extensions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (BAHIR-147) Update Flink extensions documentation with latest contents
[ https://issues.apache.org/jira/browse/BAHIR-147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-147. --- Resolution: Fixed Fix Version/s: Flink-Next > Update Flink extensions documentation with latest contents > -- > > Key: BAHIR-147 > URL: https://issues.apache.org/jira/browse/BAHIR-147 > Project: Bahir > Issue Type: Bug > Components: Flink Streaming Connectors, Website > Reporter: Luciano Resende >Assignee: Joao Boto >Priority: Critical > Fix For: Flink-Next > > > [~rmetzger] Looks like the website documentation for the current extensions > has fallen behind and is causing some user issues (e.g. BAHIR-142). We should > update the website with the latest contents and references on how to add the > snapshot to the test applications. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (BAHIR-188) update flink to 1.7.0
[ https://issues.apache.org/jira/browse/BAHIR-188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-188. --- Resolution: Fixed Fix Version/s: Flink-Next > update flink to 1.7.0 > - > > Key: BAHIR-188 > URL: https://issues.apache.org/jira/browse/BAHIR-188 > Project: Bahir > Issue Type: Task >Reporter: Joao Boto >Assignee: Joao Boto >Priority: Major > Fix For: Flink-Next > > > Update Flink to the latest version (1.7.0) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (BAHIR-147) Update Flink extensions documentation with latest contents
[ https://issues.apache.org/jira/browse/BAHIR-147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende reassigned BAHIR-147: - Assignee: Joao Boto > Update Flink extensions documentation with latest contents > -- > > Key: BAHIR-147 > URL: https://issues.apache.org/jira/browse/BAHIR-147 > Project: Bahir > Issue Type: Bug > Components: Flink Streaming Connectors, Website > Reporter: Luciano Resende >Assignee: Joao Boto >Priority: Critical > > [~rmetzger] Looks like the website documentation for the current extensions > has fallen behind and is causing some user issues (e.g. BAHIR-142). We should > update the website with the latest contents and references on how to add the > snapshot to the test applications. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
Re: [PROPOSAL] Move to gitbox.apache.org
I think we can drive this with consensus (unless infra requires a vote) +1 On Fri, Dec 7, 2018 at 9:05 AM Jean-Baptiste Onofré wrote: > > Hi all, > > our repositories are currently located on git-wip-us.apache.org. > > This service will be decommissioned in the coming months. > > I propose to move to gitbox.apache.org. > > I can start a formal vote and if it's OK, I'm volunteer to deal with infra. > > Thoughts ? > > Regards > JB > -- > Jean-Baptiste Onofré > jbono...@apache.org > http://blog.nanthrax.net > Talend - http://www.talend.com -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[ANNOUNCE] Apache Bahir 2.3.2 Released
Apache Bahir provides extensions to multiple distributed analytic platforms, extending their reach with a diversity of streaming connectors and SQL data sources. The Apache Bahir community is pleased to announce the release of Apache Bahir 2.3.2 which provides the following extensions for Apache Spark 2.3.2: - Apache CouchDB/Cloudant SQL data source - Apache CouchDB/Cloudant Streaming connector - Akka Streaming connector - Akka Structured Streaming data source - Google Cloud Pub/Sub Streaming connector - Cloud PubNub Streaming connector (new) - MQTT Streaming connector - MQTT Structured Streaming data source (new sink) - Twitter Streaming connector - ZeroMQ Streaming connector (new enhanced implementation) For more information about Apache Bahir and to download the latest release go to: http://bahir.apache.org For more details on how to use Apache Bahir extensions in your application please visit our documentation page http://bahir.apache.org/docs/spark/overview/ The Apache Bahir PMC -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[ANNOUNCE] Apache Bahir 2.3.1 Released
Apache Bahir provides extensions to multiple distributed analytic platforms, extending their reach with a diversity of streaming connectors and SQL data sources. The Apache Bahir community is pleased to announce the release of Apache Bahir 2.3.1 which provides the following extensions for Apache Spark 2.3.1: - Apache CouchDB/Cloudant SQL data source - Apache CouchDB/Cloudant Streaming connector - Akka Streaming connector - Akka Structured Streaming data source - Google Cloud Pub/Sub Streaming connector - Cloud PubNub Streaming connector (new) - MQTT Streaming connector - MQTT Structured Streaming data source (new sink) - Twitter Streaming connector - ZeroMQ Streaming connector (new enhanced implementation) For more information about Apache Bahir and to download the latest release go to: http://bahir.apache.org For more details on how to use Apache Bahir extensions in your application please visit our documentation page http://bahir.apache.org/docs/spark/overview/ The Apache Bahir PMC -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[ANNOUNCE] Apache Bahir 2.3.0 Released
Apache Bahir provides extensions to multiple distributed analytic platforms, extending their reach with a diversity of streaming connectors and SQL data sources. The Apache Bahir community is pleased to announce the release of Apache Bahir 2.3.0 which provides the following extensions for Apache Spark 2.3.0: - Apache CouchDB/Cloudant SQL data source - Apache CouchDB/Cloudant Streaming connector - Akka Streaming connector - Akka Structured Streaming data source - Google Cloud Pub/Sub Streaming connector - Cloud PubNub Streaming connector (new) - MQTT Streaming connector - MQTT Structured Streaming data source (new sink) - Twitter Streaming connector - ZeroMQ Streaming connector (new enhanced implementation) For more information about Apache Bahir and to download the latest release go to: http://bahir.apache.org For more details on how to use Apache Bahir extensions in your application please visit our documentation page http://bahir.apache.org/docs/spark/overview/ The Apache Bahir PMC -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
Re: [VOTE] Apache Bahir 2.3.1 (RC1)
Of course, my +1 On Fri, Nov 30, 2018 at 8:11 AM Luciano Resende wrote: > > Dear community member, > > Please vote to approve the release of Apache Bahir 2.3.1 (RC1) based on > Apache Spark 2.3.1. > > Tag: v2.3.1-rc1 (6c4d67e0a99bcbbad199b7d1d26f3624491070b4) > > https://github.com/apache/bahir/tree/v2.3.1-rc1 > > Release files: > > https://repository.apache.org/content/repositories/orgapachebahir-1026 > > Source distribution: > > https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.3.1-rc1/ > > > The vote is open for at least 72 hours and passes if a majority of at least > 3 +1 PMC votes are cast. > > [ ] +1 Release this package as Apache Bahir 2.3.1 > [ ] -1 Do not release this package because ... > > > Thanks for your vote! > > -- > Luciano Resende > http://twitter.com/lresende1975 > http://lresende.blogspot.com/ -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
Re: [VOTE] Apache Bahir 2.3.0 (RC1)
Of course, my +1 On Fri, Nov 30, 2018 at 6:43 AM Luciano Resende wrote: > > Dear community member, > > Please vote to approve the release of Apache Bahir 2.3.0 (RC1) based on > Apache Spark 2.3.0. > > Tag: v2.3.0-rc1 (e2d138bd2a927d0ab42e45739d8938eefafca352) > > https://github.com/apache/bahir/tree/v2.3.0-rc1 > > Release files: > > https://repository.apache.org/content/repositories/orgapachebahir-1025 > > Source distribution: > > https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.3.0-rc1/ > > > The vote is open for at least 72 hours and passes if a majority of at least > 3 +1 PMC votes are cast. > > [ ] +1 Release this package as Apache Bahir 2.3.0 > [ ] -1 Do not release this package because ... > > > Thanks for your vote! > > -- > Luciano Resende > http://twitter.com/lresende1975 > http://lresende.blogspot.com/ -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[RESULT] [VOTE] Apache Bahir 2.3.2 (RC1)
On Fri, Nov 30, 2018 at 12:57 PM Luciano Resende wrote: > > Dear community member, > > Please vote to approve the release of Apache Bahir 2.3.2 (RC1) based on > Apache Spark 2.3.2. > > Tag: v2.3.2-rc1 (6d24b270fd713352faf1c93f08f835e39e496489) > > https://github.com/apache/bahir/tree/v2.3.2-rc1 > > Release files: > > https://repository.apache.org/content/repositories/orgapachebahir-1027 > > Source distribution: > > https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.3.2-rc1/ > > > The vote is open for at least 72 hours and passes if a majority of at least > 3 +1 PMC votes are cast. > > [ ] +1 Release this package as Apache Bahir 2.3.2 > [ ] -1 Do not release this package because ... > > > Thanks for your vote! > Vote has passed with +1 from: Ted Yu, Jean-Baptiste Onofré, Christian Kadner, Luciano Resende -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[RESULT][VOTE] Apache Bahir 2.3.1 (RC1)
On Fri, Nov 30, 2018 at 8:11 AM Luciano Resende wrote: > > Dear community member, > > Please vote to approve the release of Apache Bahir 2.3.1 (RC1) based on > Apache Spark 2.3.1. > > Tag: v2.3.1-rc1 (6c4d67e0a99bcbbad199b7d1d26f3624491070b4) > > https://github.com/apache/bahir/tree/v2.3.1-rc1 > > Release files: > > https://repository.apache.org/content/repositories/orgapachebahir-1026 > > Source distribution: > > https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.3.1-rc1/ > > > The vote is open for at least 72 hours and passes if a majority of at least > 3 +1 PMC votes are cast. > > [ ] +1 Release this package as Apache Bahir 2.3.1 > [ ] -1 Do not release this package because ... > > > Thanks for your vote! > Vote has passed with +1 from: Ted Yu, Jean-Baptiste Onofré, Christian Kadner, Luciano Resende -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[RESULT] [VOTE] Apache Bahir 2.3.0 (RC1)
On Fri, Nov 30, 2018 at 6:43 AM Luciano Resende wrote: > > Dear community member, > > Please vote to approve the release of Apache Bahir 2.3.0 (RC1) based on > Apache Spark 2.3.0. > > Tag: v2.3.0-rc1 (e2d138bd2a927d0ab42e45739d8938eefafca352) > > https://github.com/apache/bahir/tree/v2.3.0-rc1 > > Release files: > > https://repository.apache.org/content/repositories/orgapachebahir-1025 > > Source distribution: > > https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.3.0-rc1/ > > > The vote is open for at least 72 hours and passes if a majority of at least > 3 +1 PMC votes are cast. > > [ ] +1 Release this package as Apache Bahir 2.3.0 > [ ] -1 Do not release this package because ... > > > Thanks for your vote! > Vote has passed with +1 from: Ted Yu, Jean-Baptiste Onofré, Christian Kadner, Luciano Resende -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
Re: [VOTE] Apache Bahir 2.3.2 (RC1)
Of course, my +1 On Fri, Nov 30, 2018 at 12:57 PM Luciano Resende wrote: > > Dear community member, > > Please vote to approve the release of Apache Bahir 2.3.2 (RC1) based on > Apache Spark 2.3.2. > > Tag: v2.3.2-rc1 (6d24b270fd713352faf1c93f08f835e39e496489) > > https://github.com/apache/bahir/tree/v2.3.2-rc1 > > Release files: > > https://repository.apache.org/content/repositories/orgapachebahir-1027 > > Source distribution: > > https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.3.2-rc1/ > > > The vote is open for at least 72 hours and passes if a majority of at least > 3 +1 PMC votes are cast. > > [ ] +1 Release this package as Apache Bahir 2.3.2 > [ ] -1 Do not release this package because ... > > > Thanks for your vote! > > -- > Luciano Resende > http://twitter.com/lresende1975 > http://lresende.blogspot.com/ -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
Re: Release 2.3.x
I have created the following RCs - 2.3.0 - 2.3.1 - 2.3.2 They are mostly the same, differing mainly in their Spark dependencies. Please help review/vote on these. On Thu, Nov 29, 2018 at 10:42 AM Jean-Baptiste Onofré wrote: > > Nothing from my side. +1 for the release. > > Regards > JB > > On 29/11/2018 10:40, Luciano Resende wrote: > > I am planning to start working on getting the 2.3.x releases soon, any > > jiras or prs that I should wait for ? > > > > Thanks > > > > -- > Jean-Baptiste Onofré > jbono...@apache.org > http://blog.nanthrax.net > Talend - http://www.talend.com -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[VOTE] Apache Bahir 2.3.2 (RC1)
Dear community member, Please vote to approve the release of Apache Bahir 2.3.2 (RC1) based on Apache Spark 2.3.2. Tag: v2.3.2-rc1 (6d24b270fd713352faf1c93f08f835e39e496489) https://github.com/apache/bahir/tree/v2.3.2-rc1 Release files: https://repository.apache.org/content/repositories/orgapachebahir-1027 Source distribution: https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.3.2-rc1/ The vote is open for at least 72 hours and passes if a majority of at least 3 +1 PMC votes are cast. [ ] +1 Release this package as Apache Bahir 2.3.2 [ ] -1 Do not release this package because ... Thanks for your vote! -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[VOTE] Apache Bahir 2.3.1 (RC1)
Dear community member, Please vote to approve the release of Apache Bahir 2.3.1 (RC1) based on Apache Spark 2.3.1. Tag: v2.3.1-rc1 (6c4d67e0a99bcbbad199b7d1d26f3624491070b4) https://github.com/apache/bahir/tree/v2.3.1-rc1 Release files: https://repository.apache.org/content/repositories/orgapachebahir-1026 Source distribution: https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.3.1-rc1/ The vote is open for at least 72 hours and passes if a majority of at least 3 +1 PMC votes are cast. [ ] +1 Release this package as Apache Bahir 2.3.1 [ ] -1 Do not release this package because ... Thanks for your vote! -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[VOTE] Apache Bahir 2.3.0 (RC1)
Dear community member, Please vote to approve the release of Apache Bahir 2.3.0 (RC1) based on Apache Spark 2.3.0. Tag: v2.3.0-rc1 (e2d138bd2a927d0ab42e45739d8938eefafca352) https://github.com/apache/bahir/tree/v2.3.0-rc1 Release files: https://repository.apache.org/content/repositories/orgapachebahir-1025 Source distribution: https://dist.apache.org/repos/dist/dev/bahir/bahir-spark/2.3.0-rc1/ The vote is open for at least 72 hours and passes if a majority of at least 3 +1 PMC votes are cast. [ ] +1 Release this package as Apache Bahir 2.3.0 [ ] -1 Do not release this package because ... Thanks for your vote! -- Luciano Resende http://twitter.com/lresende1975 http://lresende.blogspot.com/
[jira] [Assigned] (BAHIR-182) Create PubNub extension for Apache Spark Streaming
[ https://issues.apache.org/jira/browse/BAHIR-182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende reassigned BAHIR-182: - Assignee: Lukasz Antoniak > Create PubNub extension for Apache Spark Streaming > -- > > Key: BAHIR-182 > URL: https://issues.apache.org/jira/browse/BAHIR-182 > Project: Bahir > Issue Type: New Feature > Components: Spark Streaming Connectors >Reporter: Lukasz Antoniak >Assignee: Lukasz Antoniak >Priority: Major > Fix For: Spark-2.3.0 > > > Implement new connector for PubNub ([https://www.pubnub.com/]), a cloud > messaging infrastructure that is increasing in popularity. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (BAHIR-182) Create PubNub extension for Apache Spark Streaming
[ https://issues.apache.org/jira/browse/BAHIR-182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-182. --- Resolution: Fixed Fix Version/s: Spark-2.3.0 > Create PubNub extension for Apache Spark Streaming > -- > > Key: BAHIR-182 > URL: https://issues.apache.org/jira/browse/BAHIR-182 > Project: Bahir > Issue Type: New Feature > Components: Spark Streaming Connectors >Reporter: Lukasz Antoniak >Assignee: Lukasz Antoniak >Priority: Major > Fix For: Spark-2.3.0 > > > Implement new connector for PubNub ([https://www.pubnub.com/]), a cloud > messaging infrastructure that is increasing in popularity. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (BAHIR-66) Add test that ZeroMQ streaming connector can receive data
[ https://issues.apache.org/jira/browse/BAHIR-66?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-66. -- Resolution: Fixed Fix Version/s: Spark-2.3.0 > Add test that ZeroMQ streaming connector can receive data > - > > Key: BAHIR-66 > URL: https://issues.apache.org/jira/browse/BAHIR-66 > Project: Bahir > Issue Type: Sub-task > Components: Spark Streaming Connectors >Reporter: Christian Kadner >Assignee: Lukasz Antoniak >Priority: Major > Labels: test > Fix For: Spark-2.3.0 > > > Add test cases that verify that the *ZeroMQ streaming connector* can receive > streaming data. > See [BAHIR-63|https://issues.apache.org/jira/browse/BAHIR-63] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (BAHIR-66) Add test that ZeroMQ streaming connector can receive data
[ https://issues.apache.org/jira/browse/BAHIR-66?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende reassigned BAHIR-66: Assignee: Lukasz Antoniak > Add test that ZeroMQ streaming connector can receive data > - > > Key: BAHIR-66 > URL: https://issues.apache.org/jira/browse/BAHIR-66 > Project: Bahir > Issue Type: Sub-task > Components: Spark Streaming Connectors >Reporter: Christian Kadner >Assignee: Lukasz Antoniak >Priority: Major > Labels: test > Fix For: Spark-2.3.0 > > > Add test cases that verify that the *ZeroMQ streaming connector* can receive > streaming data. > See [BAHIR-63|https://issues.apache.org/jira/browse/BAHIR-63] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
Release 2.3.x
I am planning to start working on the 2.3.x releases soon. Are there any JIRAs or PRs that I should wait for? Thanks -- Sent from my Mobile device
[jira] [Resolved] (BAHIR-49) Add MQTTSink to SQL Streaming MQTT.
[ https://issues.apache.org/jira/browse/BAHIR-49?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luciano Resende resolved BAHIR-49. -- Resolution: Fixed Fix Version/s: Spark-2.3.0 > Add MQTTSink to SQL Streaming MQTT. > --- > > Key: BAHIR-49 > URL: https://issues.apache.org/jira/browse/BAHIR-49 > Project: Bahir > Issue Type: New Feature >Reporter: Prashant Sharma >Assignee: Lukasz Antoniak >Priority: Major > Fix For: Spark-2.3.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)