[GitHub] storm issue #1684: [STORM-2093] Fix permissions in multi-tenant, secure mode
Github user HeartSaVioR commented on the issue: https://github.com/apache/storm/pull/1684 Please make the commit message explain the commit and/or add the JIRA issue as a prefix. The commit message is not clear on what this commit fixes and how it helps. --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[GitHub] storm issue #1642: STORM-2018: Supervisor V2.
Github user revans2 commented on the issue: https://github.com/apache/storm/pull/1642 The Travis failures are from hive dependencies not being downloaded.
[GitHub] storm issue #1642: STORM-2018: Supervisor V2.
Github user HeartSaVioR commented on the issue: https://github.com/apache/storm/pull/1642 @revans2 Thanks for the great work. Really appreciated. +1
[GitHub] storm issue #1642: STORM-2018: Supervisor V2.
Github user revans2 commented on the issue: https://github.com/apache/storm/pull/1642 @HeartSaVioR I found the original comment and put it in the change. I also addressed some other review comments from @abellina and fixed an annoying failure with drpc_auth_test.clj not being able to bind to a port.
[GitHub] storm pull request #1684: [STORM-2093] Fix permissions in multi-tenant, secu...
GitHub user ppoulosk opened a pull request:

    https://github.com/apache/storm/pull/1684

    [STORM-2093] Fix permissions in multi-tenant, secure mode

Heap dumps created on OOM, when served through the logviewer in secure multi-tenant configurations, have permissions set such that the logviewer cannot read them. This change checks permissions before serving and, when necessary, runs the worker-launcher to enable the logviewer to serve them.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ppoulosk/storm STORM-2093

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/storm/pull/1684.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1684
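The fix described above hinges on a simple idea: check whether the serving user can actually read the dump before streaming it, and fall back to the worker-launcher if not. A minimal, hypothetical sketch of the readability check (the real patch lives in the logviewer and shells out to the worker-launcher; the class and file names here are illustrative, not from the patch):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class DumpAccessCheck {
    public static void main(String[] args) throws Exception {
        // Stand-in for a worker heap dump; a file we just created is readable by us.
        Path dump = Files.createTempFile("heap", ".hprof");

        // Serve only if the logviewer user can read the file; otherwise the
        // real patch runs the worker-launcher to adjust permissions first.
        if (Files.isReadable(dump)) {
            System.out.println("serve directly");
        } else {
            System.out.println("run worker-launcher first");
        }
        Files.deleteIfExists(dump);
    }
}
```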
Re: JIRA Notifications Moved to iss...@storm.apache.org
I mean the mails whose sender is g...@git.apache.org and recipient is dev@storm.apache.org. They're still on dev@. You can see them in the archive of the storm-dev mailing list.
http://mail-archives.apache.org/mod_mbox/storm-dev/201609.mbox/%3c20160913193135.93042df...@git1-us-west.apache.org%3E

On Wed, Sep 14, 2016 at 2:28 AM, P. Taylor Goetz wrote:

> No, github notifications will only get logged in the “Work Log” section of
> an associated JIRA. Github notifications will not go to any mailing list.
>
> If you want to receive github notifications you will need to watch the
> Storm github repo and select a notification option.
>
> -Taylor
>
>> On Sep 12, 2016, at 5:20 PM, Jungtaek Lim wrote:
>>
>> Nice work.
>>
>> Will Github notifications (posted to dev@) also be moved to issues@? Or
>> will they keep being posted to dev@?
>>
>> On Tue, Sep 13, 2016 at 6:00 AM, P. Taylor Goetz wrote:
>>
>>> Infra has now moved JIRA notifications from dev@storm.apache.org to
>>> issues@storm.apache.org.
>>>
>>> If you would like to continue to receive JIRA notifications, you should
>>> subscribe to that list by sending an email to:
>>>
>>> issues-subscr...@storm.apache.org
>>>
>>> -Taylor
[GitHub] storm issue #1642: STORM-2018: Supervisor V2.
Github user HeartSaVioR commented on the issue: https://github.com/apache/storm/pull/1642 @harshach There's already another pull request for fixing a race condition on the old supervisor. Given that we can't set a due date for 2.0 (and a maintenance date for 1.x), it would be better to have this as a kind of bugfix. @revans2 The code block in my comment fixes the issue. You seemed to have fixed it, but the fix is gone now.
[GitHub] storm issue #1642: STORM-2018: Supervisor V2.
Github user revans2 commented on the issue: https://github.com/apache/storm/pull/1642 @HeartSaVioR @harshach Once this goes in I would be happy to make a version for 1.x. In fact we are in the process of pulling it back for the version we use internally, and I was able to keep the old supervisor in place as a backup option. As for the regression, I will try and find the original code and see where I made a mistake.
Re: JIRA Notifications Moved to iss...@storm.apache.org
No, github notifications will only get logged in the “Work Log” section of an associated JIRA. Github notifications will not go to any mailing list.

If you want to receive github notifications you will need to watch the Storm github repo and select a notification option.

-Taylor

> On Sep 12, 2016, at 5:20 PM, Jungtaek Lim wrote:
>
> Nice work.
>
> Will Github notifications (posted to dev@) also be moved to issues@? Or
> will they keep being posted to dev@?
>
> On Tue, Sep 13, 2016 at 6:00 AM, P. Taylor Goetz wrote:
>
>> Infra has now moved JIRA notifications from dev@storm.apache.org to
>> issues@storm.apache.org.
>>
>> If you would like to continue to receive JIRA notifications, you should
>> subscribe to that list by sending an email to:
>>
>> issues-subscr...@storm.apache.org
>>
>> -Taylor
[GitHub] storm issue #1642: STORM-2018: Supervisor V2.
Github user harshach commented on the issue: https://github.com/apache/storm/pull/1642 @HeartSaVioR do we want this for the 1.x branch? Given that it's a Java migration and pretty much a new feature, I would rather keep 1.x-branch as it is. @revans2 will go through it today.
[GitHub] storm issue #1642: STORM-2018: Supervisor V2.
Github user HeartSaVioR commented on the issue: https://github.com/apache/storm/pull/1642 And I saw some places using lambdas which can't be applied to 1.x. @revans2 Could you craft a pull request for the 1.x branch (maybe also 1.0.x) too, once we're sure it's good to merge?
[GitHub] storm issue #1642: STORM-2018: Supervisor V2.
Github user HeartSaVioR commented on the issue: https://github.com/apache/storm/pull/1642 I've left a comment on a regression issue, but the other manual tests passed, including previously failing ones. I'd be +1 once the regression issue is resolved. Great work.
[GitHub] storm pull request #1642: STORM-2018: Supervisor V2.
Github user HeartSaVioR commented on a diff in the pull request: https://github.com/apache/storm/pull/1642#discussion_r78551282

--- Diff: storm-core/src/jvm/org/apache/storm/localizer/AsyncLocalizer.java ---
@@ -0,0 +1,420 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.localizer;
+
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.net.JarURLConnection;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.storm.Config;
+import org.apache.storm.blobstore.BlobStore;
+import org.apache.storm.blobstore.ClientBlobStore;
+import org.apache.storm.daemon.Shutdownable;
+import org.apache.storm.daemon.supervisor.AdvancedFSOps;
+import org.apache.storm.daemon.supervisor.SupervisorUtils;
+import org.apache.storm.generated.StormTopology;
+import org.apache.storm.utils.ConfigUtils;
+import org.apache.storm.utils.Utils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+
+/**
+ * This is a wrapper around the Localizer class that provides the desired
+ * async interface to Slot.
+ * TODO once we have replaced the original supervisor merge this with
+ * Localizer and optimize them
+ */
+public class AsyncLocalizer implements ILocalizer, Shutdownable {
+    /**
+     * A future that has already completed.
+     */
+    private static class AllDoneFuture implements Future {
+
+        @Override
+        public boolean cancel(boolean mayInterruptIfRunning) {
+            return false;
+        }
+
+        @Override
+        public boolean isCancelled() {
+            return false;
+        }
+
+        @Override
+        public boolean isDone() {
+            return true;
+        }
+
+        @Override
+        public Void get() {
+            return null;
+        }
+
+        @Override
+        public Void get(long timeout, TimeUnit unit) {
+            return null;
+        }
+
+    }
+
+    private static final Logger LOG = LoggerFactory.getLogger(AsyncLocalizer.class);
+
+    private final Localizer _localizer;
+    private final ExecutorService _execService;
+    private final boolean _isLocalMode;
+    private final Map _conf;
+    private final Map _basicPending;
+    private final Map _blobPending;
+    private final AdvancedFSOps _fsOps;
+
+    private class DownloadBaseBlobsDistributed implements Callable {
+        private final String _topologyId;
+
+        public DownloadBaseBlobsDistributed(String topologyId) {
+            this._topologyId = topologyId;
+        }
+
+        @Override
+        public Void call() throws Exception {
+            String stormroot = ConfigUtils.supervisorStormDistRoot(_conf, _topologyId);
+            File sr = new File(stormroot);
+            if (sr.exists()) {
+                if (!_fsOps.supportsAtomicDirectoryMove()) {
+                    LOG.warn("{} may have partially downloaded blobs, recovering", _topologyId);
+                    Utils.forceDelete(stormroot);
+                } else {
+                    LOG.warn("{} already downloaded blobs, skipping", _topologyId);
+                    return null;
+                }
+            }
+            boolean deleteAll = true;
+            String tmproot = ConfigUtils.supervisorTmpDir(_conf) +
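An aside on the AllDoneFuture helper in the diff above: since this codebase targets Java 8, the same already-completed Future can come from the standard library via `CompletableFuture.completedFuture`, which may be worth considering instead of a hand-rolled class. A small sketch:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;

public class CompletedFutureDemo {
    public static void main(String[] args) throws Exception {
        // Standard-library equivalent of a hand-rolled "already done" future.
        Future<Void> done = CompletableFuture.completedFuture(null);

        System.out.println(done.isDone());      // true
        System.out.println(done.isCancelled()); // false
        System.out.println(done.get());         // null
    }
}
```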
[GitHub] storm pull request #1683: STORM-2092: optimize TridentKafkaState batch sendi...
Github user vesense commented on a diff in the pull request: https://github.com/apache/storm/pull/1683#discussion_r78545097

--- Diff: external/storm-kafka/src/jvm/org/apache/storm/kafka/trident/TridentKafkaState.java ---
@@ -73,30 +74,35 @@ public void prepare(Properties options) {
     public void updateState(List tuples, TridentCollector collector) {
         String topic = null;
-        for (TridentTuple tuple : tuples) {
-            try {
+        try {
+            List futures = new ArrayList<>(tuples.size());
+            for (TridentTuple tuple : tuples) {
                 topic = topicSelector.getTopic(tuple);
                 if(topic != null) {
                     Future result = producer.send(new ProducerRecord(topic,
                             mapper.getKeyFromTuple(tuple), mapper.getMessageFromTuple(tuple)));
-                    try {
-                        result.get();
-                    } catch (ExecutionException e) {
-                        String errorMsg = "Could not retrieve result for message with key = "
-                                + mapper.getKeyFromTuple(tuple) + " from topic = " + topic;
-                        LOG.error(errorMsg, e);
-                        throw new FailedException(errorMsg, e);
-                    }
+                    futures.add(result);
                 } else {
                     LOG.warn("skipping key = " + mapper.getKeyFromTuple(tuple) + ", topic selector returned null.");
                 }
-            } catch (Exception ex) {
-                String errorMsg = "Could not send message with key = " + mapper.getKeyFromTuple(tuple)
-                        + " to topic = " + topic;
-                LOG.warn(errorMsg, ex);
-                throw new FailedException(errorMsg, ex);
             }
+
+            for (int i = 0 ; i < futures.size(); i++) {
+                Future future = futures.get(i);
+                try {
+                    future.get();
+                } catch (ExecutionException e) {
+                    String errorMsg = "Could not retrieve result for message with key = "
+                            + mapper.getKeyFromTuple(tuples.get(i)) + " from topic = " + topic;
+                    LOG.error(errorMsg, e);
+                    throw new FailedException(errorMsg, e);
--- End diff --

Will fix
[GitHub] storm pull request #1683: STORM-2092: optimize TridentKafkaState batch sendi...
Github user HeartSaVioR commented on a diff in the pull request: https://github.com/apache/storm/pull/1683#discussion_r78545083

--- Diff: external/storm-kafka/src/jvm/org/apache/storm/kafka/trident/TridentKafkaState.java ---
@@ -73,30 +74,35 @@ public void prepare(Properties options) {
     public void updateState(List tuples, TridentCollector collector) {
         String topic = null;
-        for (TridentTuple tuple : tuples) {
-            try {
+        try {
+            List futures = new ArrayList<>(tuples.size());
+            for (TridentTuple tuple : tuples) {
                 topic = topicSelector.getTopic(tuple);
                 if(topic != null) {
                     Future result = producer.send(new ProducerRecord(topic,
                             mapper.getKeyFromTuple(tuple), mapper.getMessageFromTuple(tuple)));
-                    try {
-                        result.get();
-                    } catch (ExecutionException e) {
-                        String errorMsg = "Could not retrieve result for message with key = "
-                                + mapper.getKeyFromTuple(tuple) + " from topic = " + topic;
-                        LOG.error(errorMsg, e);
-                        throw new FailedException(errorMsg, e);
-                    }
+                    futures.add(result);
                 } else {
                     LOG.warn("skipping key = " + mapper.getKeyFromTuple(tuple) + ", topic selector returned null.");
                 }
-            } catch (Exception ex) {
-                String errorMsg = "Could not send message with key = " + mapper.getKeyFromTuple(tuple)
-                        + " to topic = " + topic;
-                LOG.warn(errorMsg, ex);
-                throw new FailedException(errorMsg, ex);
             }
+
+            for (int i = 0 ; i < futures.size(); i++) {
--- End diff --

Yeah you're right. I missed it.
[GitHub] storm pull request #1683: STORM-2092: optimize TridentKafkaState batch sendi...
Github user vesense commented on a diff in the pull request: https://github.com/apache/storm/pull/1683#discussion_r78544642

--- Diff: external/storm-kafka/src/jvm/org/apache/storm/kafka/trident/TridentKafkaState.java ---
@@ -73,30 +74,35 @@ public void prepare(Properties options) {
     public void updateState(List tuples, TridentCollector collector) {
         String topic = null;
-        for (TridentTuple tuple : tuples) {
-            try {
+        try {
+            List futures = new ArrayList<>(tuples.size());
+            for (TridentTuple tuple : tuples) {
                 topic = topicSelector.getTopic(tuple);
                 if(topic != null) {
                     Future result = producer.send(new ProducerRecord(topic,
                             mapper.getKeyFromTuple(tuple), mapper.getMessageFromTuple(tuple)));
-                    try {
-                        result.get();
-                    } catch (ExecutionException e) {
-                        String errorMsg = "Could not retrieve result for message with key = "
-                                + mapper.getKeyFromTuple(tuple) + " from topic = " + topic;
-                        LOG.error(errorMsg, e);
-                        throw new FailedException(errorMsg, e);
-                    }
+                    futures.add(result);
                 } else {
                     LOG.warn("skipping key = " + mapper.getKeyFromTuple(tuple) + ", topic selector returned null.");
                 }
-            } catch (Exception ex) {
-                String errorMsg = "Could not send message with key = " + mapper.getKeyFromTuple(tuple)
-                        + " to topic = " + topic;
-                LOG.warn(errorMsg, ex);
-                throw new FailedException(errorMsg, ex);
             }
+
+            for (int i = 0 ; i < futures.size(); i++) {
--- End diff --

see errorMsg tuples.get(i)
[GitHub] storm pull request #1681: STORM-1444 Support EXPLAIN statement in StormSQL
Github user HeartSaVioR commented on a diff in the pull request: https://github.com/apache/storm/pull/1681#discussion_r78543991

--- Diff: external/sql/storm-sql-core/src/jvm/org/apache/storm/sql/StormSqlImpl.java ---
@@ -140,6 +142,40 @@ public void submit(
     }
   }

+  @Override
+  public void explain(Iterable statements) throws Exception {
+    Map dataSources = new HashMap<>();
+    for (String sql : statements) {
+      StormParser parser = new StormParser(sql);
+      SqlNode node = parser.impl().parseSqlStmtEof();
+
+      System.out.println("===");
+      System.out.println("query>");
+      System.out.println(sql);
+      System.out.println("---");
+
+      if (node instanceof SqlCreateTable) {
+        handleCreateTableForTrident((SqlCreateTable) node, dataSources);
+        System.out.println("No plan presented on DDL");
+      } else if (node instanceof SqlCreateFunction) {
+        handleCreateFunction((SqlCreateFunction) node);
--- End diff --

Yeah, seems like it's a missed spot. Thanks for finding it.
[GitHub] storm pull request #1683: STORM-2092: optimize TridentKafkaState batch sendi...
Github user HeartSaVioR commented on a diff in the pull request: https://github.com/apache/storm/pull/1683#discussion_r78543845

--- Diff: external/storm-kafka/src/jvm/org/apache/storm/kafka/trident/TridentKafkaState.java ---
@@ -73,30 +74,35 @@ public void prepare(Properties options) {
     public void updateState(List tuples, TridentCollector collector) {
         String topic = null;
-        for (TridentTuple tuple : tuples) {
-            try {
+        try {
+            List futures = new ArrayList<>(tuples.size());
+            for (TridentTuple tuple : tuples) {
                 topic = topicSelector.getTopic(tuple);
                 if(topic != null) {
                     Future result = producer.send(new ProducerRecord(topic,
                             mapper.getKeyFromTuple(tuple), mapper.getMessageFromTuple(tuple)));
-                    try {
-                        result.get();
-                    } catch (ExecutionException e) {
-                        String errorMsg = "Could not retrieve result for message with key = "
-                                + mapper.getKeyFromTuple(tuple) + " from topic = " + topic;
-                        LOG.error(errorMsg, e);
-                        throw new FailedException(errorMsg, e);
-                    }
+                    futures.add(result);
                 } else {
                     LOG.warn("skipping key = " + mapper.getKeyFromTuple(tuple) + ", topic selector returned null.");
                 }
-            } catch (Exception ex) {
-                String errorMsg = "Could not send message with key = " + mapper.getKeyFromTuple(tuple)
-                        + " to topic = " + topic;
-                LOG.warn(errorMsg, ex);
-                throw new FailedException(errorMsg, ex);
             }
+
+            for (int i = 0 ; i < futures.size(); i++) {
+                Future future = futures.get(i);
+                try {
+                    future.get();
+                } catch (ExecutionException e) {
+                    String errorMsg = "Could not retrieve result for message with key = "
+                            + mapper.getKeyFromTuple(tuples.get(i)) + " from topic = " + topic;
+                    LOG.error(errorMsg, e);
+                    throw new FailedException(errorMsg, e);
--- End diff --

Since it already sends multiple requests, we need to log other errors as well and throw FailedException with a summarized message (containing all errors) if any.
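The review suggestion above — drain every future, collect all failures, and fail once with a summary instead of throwing on the first bad result — can be sketched with plain java.util.concurrent (no Kafka dependency; the simulated tasks and messages below are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BatchAwaitDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);

        // Simulate a batch of async sends; the second one fails.
        List<Future<String>> futures = new ArrayList<>();
        futures.add(pool.submit(() -> "ok-1"));
        futures.add(pool.submit((Callable<String>) () -> {
            throw new IllegalStateException("broker down");
        }));
        futures.add(pool.submit(() -> "ok-3"));

        // Drain ALL futures, collecting every failure rather than failing fast.
        List<String> errors = new ArrayList<>();
        for (Future<String> f : futures) {
            try {
                f.get();
            } catch (ExecutionException e) {
                errors.add(e.getCause().getMessage());
            }
        }
        pool.shutdown();

        // One summarized failure at the end, as suggested in the review
        // (TridentKafkaState would throw a FailedException here instead).
        if (!errors.isEmpty()) {
            System.out.println("batch had " + errors.size() + " failure(s): " + errors);
        }
    }
}
```

This trades fail-fast behavior for a complete picture of the batch, which matches the trade-off discussed later in this thread.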
[GitHub] storm pull request #1683: STORM-2092: optimize TridentKafkaState batch sendi...
Github user HeartSaVioR commented on a diff in the pull request: https://github.com/apache/storm/pull/1683#discussion_r78543513

--- Diff: external/storm-kafka/src/jvm/org/apache/storm/kafka/trident/TridentKafkaState.java ---
@@ -73,30 +74,35 @@ public void prepare(Properties options) {
     public void updateState(List tuples, TridentCollector collector) {
         String topic = null;
-        for (TridentTuple tuple : tuples) {
-            try {
+        try {
+            List futures = new ArrayList<>(tuples.size());
+            for (TridentTuple tuple : tuples) {
                 topic = topicSelector.getTopic(tuple);
                 if(topic != null) {
                     Future result = producer.send(new ProducerRecord(topic,
                             mapper.getKeyFromTuple(tuple), mapper.getMessageFromTuple(tuple)));
-                    try {
-                        result.get();
-                    } catch (ExecutionException e) {
-                        String errorMsg = "Could not retrieve result for message with key = "
-                                + mapper.getKeyFromTuple(tuple) + " from topic = " + topic;
-                        LOG.error(errorMsg, e);
-                        throw new FailedException(errorMsg, e);
-                    }
+                    futures.add(result);
                 } else {
                     LOG.warn("skipping key = " + mapper.getKeyFromTuple(tuple) + ", topic selector returned null.");
                 }
-            } catch (Exception ex) {
-                String errorMsg = "Could not send message with key = " + mapper.getKeyFromTuple(tuple)
-                        + " to topic = " + topic;
-                LOG.warn(errorMsg, ex);
-                throw new FailedException(errorMsg, ex);
             }
+
+            for (int i = 0 ; i < futures.size(); i++) {
--- End diff --

for (Future future : futures) {
[GitHub] storm issue #1681: STORM-1444 Support EXPLAIN statement in StormSQL
Github user vesense commented on the issue: https://github.com/apache/storm/pull/1681 Btw, I'm ok with the current explain output. Spark's is even better; maybe there is a chance to improve Calcite.
[GitHub] storm issue #1681: STORM-1444 Support EXPLAIN statement in StormSQL
Github user vesense commented on the issue: https://github.com/apache/storm/pull/1681 Nice job. One minor comment. LGTM.
[GitHub] storm pull request #1681: STORM-1444 Support EXPLAIN statement in StormSQL
Github user vesense commented on a diff in the pull request: https://github.com/apache/storm/pull/1681#discussion_r78530932

--- Diff: external/sql/storm-sql-core/src/jvm/org/apache/storm/sql/StormSqlImpl.java ---
@@ -140,6 +142,40 @@ public void submit(
     }
   }

+  @Override
+  public void explain(Iterable statements) throws Exception {
+    Map dataSources = new HashMap<>();
+    for (String sql : statements) {
+      StormParser parser = new StormParser(sql);
+      SqlNode node = parser.impl().parseSqlStmtEof();
+
+      System.out.println("===");
+      System.out.println("query>");
+      System.out.println(sql);
+      System.out.println("---");
+
+      if (node instanceof SqlCreateTable) {
+        handleCreateTableForTrident((SqlCreateTable) node, dataSources);
+        System.out.println("No plan presented on DDL");
+      } else if (node instanceof SqlCreateFunction) {
+        handleCreateFunction((SqlCreateFunction) node);
--- End diff --

Do we need a tip here? `System.out.println("No plan presented on DDL");`
[GitHub] storm issue #1683: STORM-2092: optimize TridentKafkaState batch sending
Github user vesense commented on the issue: https://github.com/apache/storm/pull/1683 The CI failure is unrelated.
[GitHub] storm pull request #1683: STORM-2092: optimize TridentKafkaState batch sendi...
GitHub user vesense opened a pull request:

    https://github.com/apache/storm/pull/1683

    STORM-2092: optimize TridentKafkaState batch sending

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/vesense/storm STORM-2092

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/storm/pull/1683.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1683

commit 9888bf6b994a29a2f18ac8126c249dd314ed764b
Author: vesense
Date:   2016-09-13T09:22:48Z

    STORM-2092: optimize TridentKafkaState batch sending
[GitHub] storm issue #1681: STORM-1444 Support EXPLAIN statement in StormSQL
Github user HeartSaVioR commented on the issue: https://github.com/apache/storm/pull/1681 FYI: Flink seems not to support explain yet. https://github.com/apache/flink/pull/2485 is proposed for adding this feature. As I described in the JIRA issue, Spark supports explain and its output seems to be more user-friendly. (Note that Spark doesn't use Calcite.)

```
scala> spark.sql("SELECT GRPID, COUNT(*) AS CNT, MAX(AGE) AS MAX_AGE, MIN(AGE) AS MIN_AGE, AVG(AGE) AS AVG_AGE, MAX(AGE) - MIN(AGE) AS DIFF FROM FOO WHERE ID > 2 GROUP BY GRPID").explain
== Physical Plan ==
*HashAggregate(keys=[GRPID#64], functions=[count(1), max(AGE#67), min(AGE#67), avg(cast(AGE#67 as bigint)), max(AGE#67), min(AGE#67)])
+- Exchange hashpartitioning(GRPID#64, 200)
   +- *HashAggregate(keys=[GRPID#64], functions=[partial_count(1), partial_max(AGE#67), partial_min(AGE#67), partial_avg(cast(AGE#67 as bigint)), partial_max(AGE#67), partial_min(AGE#67)])
      +- *Project [grpid#64, age#67]
         +- *Filter (isnotnull(ID#63) && (ID#63 > 2))
            +- LocalTableScan [id#63, grpid#64, name#65, addr#66, age#67]
```
[GitHub] storm issue #1682: STORM-2089 Replace Consumer of ISqlTridentDataSource with...
Github user vesense commented on the issue: https://github.com/apache/storm/pull/1682 +1. And I have filed a JIRA for improving the TridentKafkaState batch sending.
[GitHub] storm issue #1682: STORM-2089 Replace Consumer of ISqlTridentDataSource with...
Github user HeartSaVioR commented on the issue: https://github.com/apache/storm/pull/1682 @vesense Addressed javadoc.
[GitHub] storm issue #1682: STORM-2089 Replace Consumer of ISqlTridentDataSource with...
Github user HeartSaVioR commented on the issue: https://github.com/apache/storm/pull/1682 @vesense Yeah, we need to call get() to react to the result. We could still send all requests first, and then sync all of them to reduce waiting time. It's a kind of trade-off since it doesn't fail fast.
[GitHub] storm issue #1682: STORM-2089 Replace Consumer of ISqlTridentDataSource with...
Github user vesense commented on the issue: https://github.com/apache/storm/pull/1682 @HeartSaVioR One minor comment; the rest looks good to me. About the sync send `.get()`, please take a look at previous comments: https://github.com/apache/storm/pull/743#commitcomment-14195354
[GitHub] storm pull request #1682: STORM-2089 Replace Consumer of ISqlTridentDataSour...
Github user HeartSaVioR commented on a diff in the pull request: https://github.com/apache/storm/pull/1682#discussion_r78509897

--- Diff: external/sql/storm-sql-runtime/src/jvm/org/apache/storm/sql/runtime/ISqlTridentDataSource.java ---
@@ -17,14 +17,32 @@
  */
 package org.apache.storm.sql.runtime;

-import org.apache.storm.trident.operation.Function;
-import org.apache.storm.trident.spout.IBatchSpout;
 import org.apache.storm.trident.spout.ITridentDataSource;
+import org.apache.storm.trident.state.StateFactory;
+import org.apache.storm.trident.state.StateUpdater;

 /**
  * A ISqlTridentDataSource specifies how an external data source produces and consumes data.
  */
 public interface ISqlTridentDataSource {
+  class SqlTridentConsumer {
+    private StateFactory stateFactory;
+    private StateUpdater stateUpdater;
+
+    public SqlTridentConsumer(StateFactory stateFactory, StateUpdater stateUpdater) {
+      this.stateFactory = stateFactory;
+      this.stateUpdater = stateUpdater;
+    }
+
+    public StateFactory getStateFactory() {
+      return stateFactory;
+    }
+
+    public StateUpdater getStateUpdater() {
+      return stateUpdater;
+    }
+  }
+
   ITridentDataSource getProducer();
--- End diff --

Makes sense. I'll add javadoc for the methods and also for SqlTridentConsumer.
[GitHub] storm pull request #1682: STORM-2089 Replace Consumer of ISqlTridentDataSour...
Github user vesense commented on a diff in the pull request: https://github.com/apache/storm/pull/1682#discussion_r78509264

--- Diff: external/sql/storm-sql-runtime/src/jvm/org/apache/storm/sql/runtime/ISqlTridentDataSource.java ---
@@ -17,14 +17,32 @@
  */
 package org.apache.storm.sql.runtime;

-import org.apache.storm.trident.operation.Function;
-import org.apache.storm.trident.spout.IBatchSpout;
 import org.apache.storm.trident.spout.ITridentDataSource;
+import org.apache.storm.trident.state.StateFactory;
+import org.apache.storm.trident.state.StateUpdater;

 /**
  * A ISqlTridentDataSource specifies how an external data source produces and consumes data.
  */
 public interface ISqlTridentDataSource {
+  class SqlTridentConsumer {
+    private StateFactory stateFactory;
+    private StateUpdater stateUpdater;
+
+    public SqlTridentConsumer(StateFactory stateFactory, StateUpdater stateUpdater) {
+      this.stateFactory = stateFactory;
+      this.stateUpdater = stateUpdater;
+    }
+
+    public StateFactory getStateFactory() {
+      return stateFactory;
+    }
+
+    public StateUpdater getStateUpdater() {
+      return stateUpdater;
+    }
+  }
+
   ITridentDataSource getProducer();
--- End diff --

How about adding some javadoc for the interface methods in `ISqlTridentDataSource` and `DataSource`?
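The pattern the diff introduces (a small immutable holder pairing a `StateFactory` with its matching `StateUpdater`, so a consumer is handed out as one unit) can be exercised standalone. The sketch below stubs the two Trident interfaces with single-method stand-ins so it compiles without Storm on the classpath; everything except the `SqlTridentConsumer` shape is hypothetical:

```java
public class ConsumerHolderDemo {
    // Stand-ins for org.apache.storm.trident.state.{StateFactory, StateUpdater},
    // stubbed as functional interfaces so this example runs without Storm.
    interface StateFactory { String describe(); }
    interface StateUpdater { String describe(); }

    // Mirrors the SqlTridentConsumer holder from the diff: an immutable pair
    // bundling the factory that creates the state with the updater that writes
    // to it, so callers can't accidentally mix pieces from different sinks.
    static final class SqlTridentConsumer {
        private final StateFactory stateFactory;
        private final StateUpdater stateUpdater;

        SqlTridentConsumer(StateFactory stateFactory, StateUpdater stateUpdater) {
            this.stateFactory = stateFactory;
            this.stateUpdater = stateUpdater;
        }

        StateFactory getStateFactory() { return stateFactory; }
        StateUpdater getStateUpdater() { return stateUpdater; }
    }

    public static void main(String[] args) {
        SqlTridentConsumer consumer = new SqlTridentConsumer(
                () -> "kafka-state-factory",
                () -> "kafka-state-updater");
        // A topology builder would pass both pieces together, e.g. to
        // Trident's stream.partitionPersist(factory, fields, updater).
        System.out.println(consumer.getStateFactory().describe());
        System.out.println(consumer.getStateUpdater().describe());
    }
}
```

The payoff of the holder over two separate getters on `ISqlTridentDataSource` is that the factory and updater travel together through the planner as one value.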