[jira] [Commented] (IGNITE-13078) C++: Add CMake build support

2020-06-09 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129878#comment-17129878
 ] 

Igor Sapego commented on IGNITE-13078:
--

[~ivandasch] By the way, can we also add a TC job that will check the CMake build 
on Windows? I believe we need it anyway if we want to make sure that new 
commits won't break compilation on Windows. Here are the environment variables that 
should be set on Windows for CMake to be able to locate all the dependencies:

BOOST_INCLUDEDIR

BOOST_LIBRARYDIR

OPENSSL_ROOT_DIR
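
For example (a hypothetical build-step sketch; the paths are placeholders and depend on 
where Boost and OpenSSL are installed on the agent):

{noformat}
rem Hypothetical Windows build step; adjust the paths to the agent's layout.
set BOOST_INCLUDEDIR=C:\libs\boost_1_72_0
set BOOST_LIBRARYDIR=C:\libs\boost_1_72_0\lib64-msvc-14.2
set OPENSSL_ROOT_DIR=C:\libs\OpenSSL-Win64

cmake ..
cmake --build . --config Release
{noformat}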

 

> C++: Add CMake build support
> 
>
> Key: IGNITE-13078
> URL: https://issues.apache.org/jira/browse/IGNITE-13078
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Reporter: Ivan Daschinskiy
>Assignee: Ivan Daschinskiy
>Priority: Major
> Fix For: 2.9
>
> Attachments: ignite-13078-dynamic-odbc.patch, 
> ignite-13078-static-odbc.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, it is hard to build Ignite.C++: there are different build processes for 
> Windows and Linux, no build support on Mac OS X (quite a popular OS 
> among developers), and almost no IDE support: only Visual Studio on Windows 
> is supported.
> I’d suggest migrating to the CMake build system. It is very popular among 
> open source projects, including within The Apache Software Foundation. Notable 
> users: Apache Mesos, Apache ZooKeeper (the C client offers CMake as an 
> alternative to autoconf and as the only option on Windows), Apache Kafka 
> (librdkafka, the C/C++ client), and Apache Thrift. The popular column-oriented database 
> ClickHouse also uses CMake.
> CMake is widely supported by many IDEs on various platforms, notably Visual 
> Studio, CLion, Xcode, QtCreator, and KDevelop.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13078) C++: Add CMake build support

2020-06-09 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129876#comment-17129876
 ] 

Igor Sapego commented on IGNITE-13078:
--

[~ivandasch] yes, on Windows static linking of Boost libraries is usually used, 
as it is just much easier. As for {{Boost_USE_MULTITHREADED}} – it is used 
just to make sure that the right variant of the libraries is picked (non-mt libraries 
are often not present in Boost binaries by default). If it's ON by default, I 
believe we can remove those lines.

Support for WiX is a good thing; maybe we should implement it in the future.

> C++: Add CMake build support
> 
>
> Key: IGNITE-13078
> URL: https://issues.apache.org/jira/browse/IGNITE-13078
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Reporter: Ivan Daschinskiy
>Assignee: Ivan Daschinskiy
>Priority: Major
> Fix For: 2.9
>
> Attachments: ignite-13078-dynamic-odbc.patch, 
> ignite-13078-static-odbc.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, it is hard to build Ignite.C++: there are different build processes for 
> Windows and Linux, no build support on Mac OS X (quite a popular OS 
> among developers), and almost no IDE support: only Visual Studio on Windows 
> is supported.
> I’d suggest migrating to the CMake build system. It is very popular among 
> open source projects, including within The Apache Software Foundation. Notable 
> users: Apache Mesos, Apache ZooKeeper (the C client offers CMake as an 
> alternative to autoconf and as the only option on Windows), Apache Kafka 
> (librdkafka, the C/C++ client), and Apache Thrift. The popular column-oriented database 
> ClickHouse also uses CMake.
> CMake is widely supported by many IDEs on various platforms, notably Visual 
> Studio, CLion, Xcode, QtCreator, and KDevelop.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (IGNITE-13140) Incorrect example in Pull Request checklist: comma after ticket name in commit message

2020-06-09 Thread Andrey N. Gura (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey N. Gura resolved IGNITE-13140.
-
Resolution: Fixed

> Incorrect example in Pull Request checklist: comma after ticket name in 
> commit message
> --
>
> Key: IGNITE-13140
> URL: https://issues.apache.org/jira/browse/IGNITE-13140
> Project: Ignite
>  Issue Type: Bug
>Reporter: Andrey N. Gura
>Assignee: Andrey N. Gura
>Priority: Major
>
> Historically, the commit message pattern has always been IGNITE- Description 
> (without ':'). This can be observed in the git commit history. Also, the description of 
> the contribution process has never contained such a requirement.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13140) Incorrect example in Pull Request checklist: comma after ticket name in commit message

2020-06-09 Thread Andrey N. Gura (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129837#comment-17129837
 ] 

Andrey N. Gura commented on IGNITE-13140:
-

A visa isn't needed. The change doesn't affect the code base.

Merged to master branch.

> Incorrect example in Pull Request checklist: comma after ticket name in 
> commit message
> --
>
> Key: IGNITE-13140
> URL: https://issues.apache.org/jira/browse/IGNITE-13140
> Project: Ignite
>  Issue Type: Bug
>Reporter: Andrey N. Gura
>Assignee: Andrey N. Gura
>Priority: Major
>
> Historically, the commit message pattern has always been IGNITE- Description 
> (without ':'). This can be observed in the git commit history. Also, the description of 
> the contribution process has never contained such a requirement.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13140) Incorrect example in Pull Request checklist: comma after ticket name in commit message

2020-06-09 Thread Andrey N. Gura (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey N. Gura updated IGNITE-13140:

Fix Version/s: 2.9

> Incorrect example in Pull Request checklist: comma after ticket name in 
> commit message
> --
>
> Key: IGNITE-13140
> URL: https://issues.apache.org/jira/browse/IGNITE-13140
> Project: Ignite
>  Issue Type: Bug
>Reporter: Andrey N. Gura
>Assignee: Andrey N. Gura
>Priority: Major
> Fix For: 2.9
>
>
> Historically, the commit message pattern has always been IGNITE- Description 
> (without ':'). This can be observed in the git commit history. Also, the description of 
> the contribution process has never contained such a requirement.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13140) Incorrect example in Pull Request checklist: comma after ticket name in commit message

2020-06-09 Thread Andrey N. Gura (Jira)
Andrey N. Gura created IGNITE-13140:
---

 Summary: Incorrect example in Pull Request checklist: comma after 
ticket name in commit message
 Key: IGNITE-13140
 URL: https://issues.apache.org/jira/browse/IGNITE-13140
 Project: Ignite
  Issue Type: Bug
Reporter: Andrey N. Gura
Assignee: Andrey N. Gura


Historically, the commit message pattern has always been IGNITE- Description (without 
':'). This can be observed in the git commit history. Also, the description of 
the contribution process has never contained such a requirement.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13139) exception when closing Ignite at program end

2020-06-09 Thread Tomasz Grygo (Jira)
Tomasz Grygo created IGNITE-13139:
-

 Summary: exception when closing Ignite at program end
 Key: IGNITE-13139
 URL: https://issues.apache.org/jira/browse/IGNITE-13139
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.8.1
 Environment: Java 1.8.0_231
Windows 10


Reporter: Tomasz Grygo
 Attachments: ignite.config.xml

Exception when closing Ignite at program end using
ignite.close();

2020-06-09 15:07:44,102 [tcp-disco-srvr-[:47500]-#3] [ERROR] - Failed to accept 
TCP connection.
java.net.SocketException: socket closed
at java.net.DualStackPlainSocketImpl.accept0(Native Method)
at 
java.net.DualStackPlainSocketImpl.socketAccept(DualStackPlainSocketImpl.java:131)
at 
java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:199)
at java.net.ServerSocket.implAccept(ServerSocket.java:545)
at java.net.ServerSocket.accept(ServerSocket.java:513)
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$TcpServer.body(ServerImpl.java:6353)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$TcpServerThread.body(ServerImpl.java:6276)
at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:61)
[15:08:30] Ignite node stopped OK [uptime=00:03:50.796]
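
A minimal sketch of the reported scenario (the configuration path is a placeholder for the 
attached ignite.config.xml; this is not a confirmed reproducer):

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class CloseAtProgramEnd {
    public static void main(String[] args) {
        // Start a node from the Spring XML configuration (path is a placeholder).
        Ignite ignite = Ignition.start("ignite.config.xml");

        // ... application work ...

        // Stop the node at program end; the "Failed to accept TCP connection"
        // error above is logged by the discovery accept thread while its server
        // socket is being closed during this shutdown.
        ignite.close();
    }
}
{code}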




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13052) Calculate result of reserveHistoryForExchange in advance

2020-06-09 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129551#comment-17129551
 ] 

Vladislav Pyatkov commented on IGNITE-13052:


Here is the corresponding PR: [https://github.com/apache/ignite/pull/7911]

> Calculate result of reserveHistoryForExchange in advance
> 
>
> Key: IGNITE-13052
> URL: https://issues.apache.org/jira/browse/IGNITE-13052
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Rakov
>Assignee: Vladislav Pyatkov
>Priority: Major
>   Original Estimate: 80h
>  Remaining Estimate: 80h
>
> Method reserveHistoryForExchange() is called on every partition map exchange. 
> It's an expensive call: it requires iterating over the whole checkpoint 
> history, possibly retrieving GroupState from WAL (it's stored on heap 
> with a SoftReference). On some deployments this operation can take several 
> minutes.
> The idea of the optimization is to calculate its result only on the first PME 
> (ideally, even before the first PME, on the recovery stage), keep the resulting map 
> (grpId, partId -> earliestCheckpoint) on heap and update it when necessary. 
> At first glance, the map should be updated:
> 1) On checkpoint. If a new partition appears on the local node, it should be 
> registered in the map with the current checkpoint. If a partition is evicted from 
> the local node, or changes its state to non-OWNING, it should be removed from the 
> map. If a checkpoint is marked as inapplicable for a certain group, the whole 
> group should be removed from the map.
> 2) On checkpoint history cleanup. For every (grpId, partId), the previous 
> earliest checkpoint should be changed with setIfGreater to the new earliest 
> checkpoint.
> We should also extract WAL pointer reservation and filtering of small partitions 
> from reserveHistoryForExchange(), but this shouldn't be a problem.
> Another point for optimization: searchPartitionCounter() and 
> searchCheckpointEntry() are executed for each (grpId, partId). That means 
> we'll perform O(number of partitions) linear lookups in the history. This should 
> be optimized as well: we can perform one lookup for all (grpId, partId) 
> pairs. This is especially critical for reserveHistoryForPreloading() 
> complexity: it's executed from the exchange thread.
> The memory overhead of storing the described map on heap is insignificant. Its size 
> isn't greater than the size of the map returned from reserveHistoryForExchange().
> The described fix should be much simpler than IGNITE-12429.
> P.S. Possibly, instead of storing the map, we can keep earliestCheckpoint right 
> in GridDhtLocalPartition. It may simplify the implementation.
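
A rough sketch of the bookkeeping described above (all class and method names here are 
hypothetical, not actual Ignite internals):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical on-heap map (grpId, partId) -> earliest checkpoint index, maintained incrementally. */
class EarliestCheckpointMap {
    private final Map<Long, Long> earliest = new ConcurrentHashMap<>();

    private static long key(int grpId, int partId) {
        return ((long)grpId << 32) | (partId & 0xFFFFFFFFL);
    }

    /** On checkpoint: a partition appeared on the local node in the OWNING state. */
    void onPartitionOwned(int grpId, int partId, long cpIdx) {
        earliest.putIfAbsent(key(grpId, partId), cpIdx);
    }

    /** On checkpoint: a partition was evicted or left the OWNING state. */
    void onPartitionGone(int grpId, int partId) {
        earliest.remove(key(grpId, partId));
    }

    /** On checkpoint history cleanup: raise the earliest checkpoint, never lower it ("setIfGreater"). */
    void onHistoryCleanup(int grpId, int partId, long newEarliestCpIdx) {
        earliest.merge(key(grpId, partId), newEarliestCpIdx, Math::max);
    }

    /** Would replace the per-PME scan of the checkpoint history in reserveHistoryForExchange(). */
    Long earliestCheckpoint(int grpId, int partId) {
        return earliest.get(key(grpId, partId));
    }
}
{code}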



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-10859) Ignite Spark giving exception when join two cached tables

2020-06-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-10859:
-
Fix Version/s: (was: 2.9)

> Ignite Spark giving exception when join two cached tables
> -
>
> Key: IGNITE-10859
> URL: https://issues.apache.org/jira/browse/IGNITE-10859
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.9
>Reporter: Ayush
>Assignee: Alexey Zinoviev
>Priority: Major
>
> When we load two dataframes from Ignite in Spark and join those two 
> dataframes, it throws an exception. We have checked the generated logical 
> plan and it seems to be wrong.
> I am adding the stack trace and code 
>  
> scala> val df1 = spark.read.format(FORMAT_IGNITE).option(OPTION_CONFIG_FILE, 
> CONFIG).option(OPTION_TABLE, 
> "HIVE_customer_address_2_1546577865912").load().toDF(schema1.columns 
> map(_.toLowerCase): _*)
> df1: org.apache.spark.sql.DataFrame = [ca_address_sk: int, ca_address_id: 
> string ... 11 more fields]
> scala> df1.show(1)
>  
> |ca_address_sk|ca_address_id|ca_street_number|ca_street_name|ca_street_type|ca_suite_number|ca_city|ca_county|ca_state|ca_zip|ca_country|ca_gmt_offset|ca_location_type|
> |1|BAAA|18|Jackson|Parkway|Suite 280|Fairfield|Maricopa County|AZ|86192|United States|-7.00|condo|
> only showing top 1 row
> scala> val df2 = spark.read.format(FORMAT_IGNITE).option(OPTION_CONFIG_FILE, 
> CONFIG).option(OPTION_TABLE, 
> "POSTGRES_customer_1_1546598025406").load().toDF(schema2.columns 
> map(_.toLowerCase): _*)
>  df2: org.apache.spark.sql.DataFrame = [c_customer_sk: int, c_customer_id: 
> string ... 16 more fields]
> scala> df2.show(1)
>  
> |c_customer_sk|c_customer_id|c_current_cdemo_sk|c_current_hdemo_sk|c_current_addr_sk|c_first_shipto_date_sk|c_first_sales_date_sk|c_salutation|c_first_name|c_last_name|c_preferred_cust_flag|c_birth_day|c_birth_month|c_birth_year|c_birth_country|c_login|c_email_address|c_last_review_date|
> |7288|IHMB|1461725|4938|18198|2450838|2450808|Sir|Steven|Storey ...|Y|1|2|1967|QATAR|null|Steven.Storey@QdG...|2452528|
> scala> df1.join(df2, df1.col("ca_address_sk") === df2.col("c_customer_sk"), 
> "inner")
>  res64: org.apache.spark.sql.DataFrame = [ca_address_sk: int, ca_address_id: 
> string ... 29 more fields]
> scala> res64.show
>  19/01/04 16:50:07 ERROR Executor: Exception in task 0.0 in stage 15.0 (TID 
> 15)
>  javax.cache.CacheException: Failed to parse query. Column 
> "POSTGRES_CUSTOMER_1_1546598025406.CA_ADDRESS_SK" not found; SQL statement:
>  SELECT CAST(HIVE_customer_address_2_1546577865912.CA_ADDRESS_SK AS VARCHAR) 
> AS ca_address_sk, HIVE_customer_address_2_1546577865912.CA_ADDRESS_ID, 
> HIVE_customer_address_2_1546577865912.CA_STREET_NUMBER, 
> HIVE_customer_address_2_1546577865912.CA_STREET_NAME, 
> HIVE_customer_address_2_1546577865912.CA_STREET_TYPE, 
> HIVE_customer_address_2_1546577865912.CA_SUITE_NUMBER, 
> HIVE_customer_address_2_1546577865912.CA_CITY, 
> HIVE_customer_address_2_1546577865912.CA_COUNTY, 
> HIVE_customer_address_2_1546577865912.CA_STATE, 
> HIVE_customer_address_2_1546577865912.CA_ZIP, 
> HIVE_customer_address_2_1546577865912.CA_COUNTRY, 
> CAST(HIVE_customer_address_2_1546577865912.CA_GMT_OFFSET AS 

[jira] [Updated] (IGNITE-12243) [Spark] Add support of HAVING without GROUP BY

2020-06-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-12243:
-
Fix Version/s: (was: 2.9)
   3.0

> [Spark] Add support of HAVING without GROUP BY
> --
>
> Key: IGNITE-12243
> URL: https://issues.apache.org/jira/browse/IGNITE-12243
> Project: Ignite
>  Issue Type: Sub-task
>  Components: spark
>Affects Versions: 2.9
>Reporter: Alexey Zinoviev
>Assignee: Alexey Zinoviev
>Priority: Critical
>  Labels: await
> Fix For: 3.0
>
>
> Also, the semantics of HAVING support were changed in Spark:
> [https://github.com/apache/spark/pull/22696/files]
> https://issues.apache.org/jira/browse/SPARK-25708
> Now HAVING can be a legal operation without GROUP BY.
>  
> Rewrite the test "SELECT id FROM city HAVING id > 1"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-9317) Table Names With Special Characters Don't Work in Spark SQL Optimisations

2020-06-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-9317:

Fix Version/s: (was: 2.9)

> Table Names With Special Characters Don't Work in Spark SQL Optimisations
> -
>
> Key: IGNITE-9317
> URL: https://issues.apache.org/jira/browse/IGNITE-9317
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.6
>Reporter: Stuart Macdonald
>Assignee: Alexey Zinoviev
>Priority: Major
>
> Table names aren't escaped when executing Ignite SQL through the Spark SQL 
> interface, meaning table names with special characters (such as . or -) cause 
> SQL grammar exceptions upon execution.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-11871) [ML] IP resolver in TensorFlow cluster manager doesn't work properly

2020-06-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-11871:
-
Fix Version/s: (was: 2.9)
   3.0

> [ML] IP resolver in TensorFlow cluster manager doesn't work properly
> 
>
> Key: IGNITE-11871
> URL: https://issues.apache.org/jira/browse/IGNITE-11871
> Project: Ignite
>  Issue Type: Bug
>  Components: ml
>Affects Versions: 2.7, 2.8
>Reporter: Alexey Zinoviev
>Assignee: Alexey Zinoviev
>Priority: Critical
> Fix For: 3.0
>
>
> The TensorFlow cluster manager requires a NodeId to be resolved into an IP address or 
> hostname to pass the address/name to a TensorFlow worker. Currently, it uses a 
> "return first" strategy and returns the first available address/name. As a 
> result, when a server has more than one network interface, the cluster 
> resolver might work incorrectly and return different addresses/names 
> for the same server.
> To fix this problem we need to update 
> [TensorFlowServerAddressSpec|https://github.com/apache/ignite/blob/master/modules/tensorflow/src/main/java/org/apache/ignite/tensorflow/cluster/spec/TensorFlowServerAddressSpec.java]
>  so that it returns the same address/name for the same server all the time. 
> If a server has multiple network interfaces, we need to find a "GCD": a 
> network visible to all Ignite nodes.
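
For illustration only, a sketch of what a deterministic choice could look like (a 
hypothetical helper, not the actual TensorFlowServerAddressSpec code):

{code:java}
import java.util.Collection;
import java.util.TreeSet;

class DeterministicAddressPicker {
    /**
     * Instead of "return first available", sort the node's addresses and always
     * take the same one, so repeated resolutions of the same server agree.
     * Assumes the collection is non-empty.
     */
    static String pick(Collection<String> addresses) {
        return new TreeSet<>(addresses).first();
    }
}
{code}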



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12141) Ignite Spark Integration Support Schema on Table Write

2020-06-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-12141:
-
Fix Version/s: (was: 2.9)

> Ignite Spark Integration Support Schema on Table Write
> --
>
> Key: IGNITE-12141
> URL: https://issues.apache.org/jira/browse/IGNITE-12141
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Reporter: Manoj G T
>Priority: Critical
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> Ignite 2.6 doesn't allow creating a table in any schema other than the PUBLIC 
> schema, and this is the reason for not supporting "OPTION_SCHEMA" during 
> Overwrite mode. Now that Ignite supports creating a table in any given 
> schema, it would be great to incorporate the changes to support 
> "OPTION_SCHEMA" during Overwrite mode and make it available as part of the next 
> Ignite release.
>  
> +Related Issue:+
> [https://stackoverflow.com/questions/57782033/apache-ignite-spark-integration-not-working-with-schema-name]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-7523) Exception on data expiration after sharedRDD.saveValues call

2020-06-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-7523:

Fix Version/s: (was: 2.9)

> Exception on data expiration after sharedRDD.saveValues call
> 
>
> Key: IGNITE-7523
> URL: https://issues.apache.org/jira/browse/IGNITE-7523
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.9
>Reporter: Mikhail Cherkasov
>Assignee: Alexey Zinoviev
>Priority: Critical
>
> Reproducer:
> {code:java}
> package rdd_expiration;
> import java.util.ArrayList;
> import java.util.Arrays;
> import java.util.List;
> import java.util.UUID;
> import java.util.concurrent.atomic.AtomicLong;
> import javax.cache.Cache;
> import javax.cache.expiry.CreatedExpiryPolicy;
> import javax.cache.expiry.Duration;
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.configuration.CacheConfiguration;
> import org.apache.ignite.configuration.DataRegionConfiguration;
> import org.apache.ignite.configuration.DataStorageConfiguration;
> import org.apache.ignite.configuration.IgniteConfiguration;
> import org.apache.ignite.lang.IgniteOutClosure;
> import org.apache.ignite.spark.JavaIgniteContext;
> import org.apache.ignite.spark.JavaIgniteRDD;
> import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
> import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
> import org.apache.log4j.Level;
> import org.apache.log4j.Logger;
> import org.apache.spark.SparkConf;
> import org.apache.spark.api.java.JavaRDD;
> import org.apache.spark.api.java.JavaSparkContext;
> import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
> import static org.apache.ignite.cache.CacheMode.PARTITIONED;
> import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
> /**
> * This example demonstrates how to create a JavaIgniteRDD and share it with 
> multiple Spark workers. The goal of this
> * particular example is to provide the simplest code example of this logic.
> * 
> * This example will start Ignite in the embedded mode and will start a 
> JavaIgniteContext on each Spark worker node.
> * 
> * The example can work in the standalone mode as well that can be enabled by 
> setting JavaIgniteContext's
> * \{@code standalone} property to \{@code true} and running an Ignite node 
> separately with
> * `examples/config/spark/example-shared-rdd.xml` config.
> */
> public class RddExpiration {
> /**
> * Executes the example.
> * @param args Command line arguments, none required.
> */
> public static void main(String args[]) throws InterruptedException {
> Ignite server = null;
> for (int i = 0; i < 4; i++) {
> IgniteConfiguration serverCfg = createIgniteCfg();
> serverCfg.setClientMode(false);
> serverCfg.setIgniteInstanceName("Server" + i);
> server = Ignition.start(serverCfg);
> }
> server.active(true);
> // Spark Configuration.
> SparkConf sparkConf = new SparkConf()
> .setAppName("JavaIgniteRDDExample")
> .setMaster("local")
> .set("spark.executor.instances", "2");
> // Spark context.
> JavaSparkContext sparkContext = new JavaSparkContext(sparkConf);
> // Adjust the logger to exclude the logs of no interest.
> Logger.getRootLogger().setLevel(Level.ERROR);
> Logger.getLogger("org.apache.ignite").setLevel(Level.INFO);
> // Creates Ignite context with specific configuration and runs Ignite in the 
> embedded mode.
> JavaIgniteContext<Integer, Integer> igniteContext = new JavaIgniteContext<Integer, Integer>(
> sparkContext,
> new IgniteOutClosure<IgniteConfiguration>() {
> @Override public IgniteConfiguration apply() {
> return createIgniteCfg();
> }
> },
> true);
> // Create a Java Ignite RDD of Type (Int,Int) Integer Pair.
> JavaIgniteRDD<Integer, Integer> sharedRDD = igniteContext.<Integer, Integer>fromCache("sharedRDD");
> long start = System.currentTimeMillis();
> long totalLoaded = 0;
> while(System.currentTimeMillis() - start < 55_000) {
> // Define data to be stored in the Ignite RDD (cache).
> List<Integer> data = new ArrayList<>(20_000);
> for (int i = 0; i < 20_000; i++)
> data.add(i);
> // Preparing a Java RDD.
> JavaRDD<Integer> javaRDD = sparkContext.parallelize(data);
> sharedRDD.saveValues(javaRDD);
> totalLoaded += 20_000;
> }
> System.out.println("Loaded " + totalLoaded);
> for (;;) {
> System.out.println(">>> Iterating over Ignite Shared RDD...");
> IgniteCache<Integer, Integer> cache = server.getOrCreateCache("sharedRDD");
> AtomicLong recordsLeft = new AtomicLong(0);
> for (Cache.Entry<Integer, Integer> entry : cache) {
> recordsLeft.incrementAndGet();
> }
> System.out.println("Left: " + recordsLeft.get());
> }
> // Close IgniteContext on all the workers.
> // igniteContext.close(true);
> }
> private static IgniteConfiguration createIgniteCfg() {
> IgniteConfiguration cfg = new IgniteConfiguration();
> 

[jira] [Assigned] (IGNITE-12141) Ignite Spark Integration Support Schema on Table Write

2020-06-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev reassigned IGNITE-12141:


Assignee: (was: Alexey Zinoviev)

> Ignite Spark Integration Support Schema on Table Write
> --
>
> Key: IGNITE-12141
> URL: https://issues.apache.org/jira/browse/IGNITE-12141
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Reporter: Manoj G T
>Priority: Critical
> Fix For: 2.9
>
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> Ignite 2.6 doesn't allow creating a table in any schema other than the PUBLIC 
> schema, and this is the reason for not supporting "OPTION_SCHEMA" during 
> Overwrite mode. Now that Ignite supports creating a table in any given 
> schema, it would be great to incorporate the changes to support 
> "OPTION_SCHEMA" during Overwrite mode and make it available as part of the next 
> Ignite release.
>  
> +Related Issue:+
> [https://stackoverflow.com/questions/57782033/apache-ignite-spark-integration-not-working-with-schema-name]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12054) [Umbrella][Spark] Upgrade Spark module to 2.4

2020-06-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-12054:
-
Fix Version/s: (was: 2.9)
   3.0

> [Umbrella][Spark] Upgrade Spark module to 2.4
> -
>
> Key: IGNITE-12054
> URL: https://issues.apache.org/jira/browse/IGNITE-12054
> Project: Ignite
>  Issue Type: New Feature
>  Components: spark
>Reporter: Denis A. Magda
>Assignee: Alexey Zinoviev
>Priority: Blocker
>  Labels: important
> Fix For: 3.0
>
> Attachments: ignite-spark-patch-new.diff
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Users can't use APIs that are already available in Spark 2.4:
> https://stackoverflow.com/questions/57392143/persisting-spark-dataframe-to-ignite
> Let's upgrade Spark from 2.3 to 2.4 until we extract the Spark Integration as 
> a separate module that can support multiple Spark versions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-10292) ML: Replace IGFS by model storage for TensorFlow

2020-06-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-10292:
-
Fix Version/s: (was: 2.9)
   3.0

> ML: Replace IGFS by model storage for TensorFlow
> 
>
> Key: IGNITE-10292
> URL: https://issues.apache.org/jira/browse/IGNITE-10292
> Project: Ignite
>  Issue Type: Improvement
>  Components: ml
>Reporter: Alexey Zinoviev
>Assignee: Alexey Zinoviev
>Priority: Critical
> Fix For: 3.0
>
>
> Currently we have a TensorFlow IGFS plugin that provides file system 
> functionality (see 
> https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/ignite).
>  At the same time, IGFS is deprecated and it would be great to replace it with a 
> simple model storage based on a cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (IGNITE-9357) Spark Structured Streaming with Ignite as data source and sink

2020-06-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev reassigned IGNITE-9357:
---

Assignee: (was: Alexey Zinoviev)

> Spark Structured Streaming with Ignite as data source and sink
> --
>
> Key: IGNITE-9357
> URL: https://issues.apache.org/jira/browse/IGNITE-9357
> Project: Ignite
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 3.0
>Reporter: Alexey Kukushkin
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We are working on a PoC where we want to use Ignite as a data storage and 
> Spark as a computation engine. We found that Ignite is supported neither as a 
> source nor as a sink when using Spark Structured Streaming, which is a must 
> for us.
> We are enhancing Ignite to support Spark Structured Streaming with Ignite. We will send 
> docs and code for review so the Ignite community can consider whether it 
> wants to accept this feature.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-10746) [ML] Participate in TensorFlow 2.0 preparation

2020-06-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-10746:
-
Fix Version/s: (was: 2.9)
   3.0

> [ML] Participate in TensorFlow 2.0 preparation
> --
>
> Key: IGNITE-10746
> URL: https://issues.apache.org/jira/browse/IGNITE-10746
> Project: Ignite
>  Issue Type: Task
>  Components: ml, tensorflow
>Affects Versions: 2.7
>Reporter: Alexey Zinoviev
>Assignee: Alexey Zinoviev
>Priority: Major
> Fix For: 3.0
>
>
> The next TensorFlow releases starting from 2.0 introduce significant 
> structural changes: all code from the contribution module will be moved into 
> separate sub-projects. Our "TensorFlow on Apache Ignite" integration code in 
> the contribution module is also moving into the so-called "tensorflow/io" sub-project 
> (see [https://github.com/tensorflow/io]).
> Almost all things related to this move are already done by community 
> members. We need to check that "TensorFlow on Apache Ignite" still works 
> after the move, and clarify details about the "tensorflow/io" 
> review/build/publish procedures, including the Windows build, which is not 
> supported so far.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13138) Add REST tests for the new cluster state change API

2020-06-09 Thread Sergey Antonov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Antonov updated IGNITE-13138:

Description: I didn't find tests for the new REST commands introduced in 
IGNITE-12225. It must be fixed.   (was: I didn't find tests for the new REST 
commands introduced in IGNITE-12225. We must fix them.)

> Add REST tests for the new cluster state change API
> ---
>
> Key: IGNITE-13138
> URL: https://issues.apache.org/jira/browse/IGNITE-13138
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Sergey Antonov
>Assignee: Sergey Antonov
>Priority: Major
> Fix For: 2.9
>
>
> I didn't find tests for the new REST commands introduced in IGNITE-12225. It 
> must be fixed. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13138) Add REST tests for the new cluster state change API

2020-06-09 Thread Sergey Antonov (Jira)
Sergey Antonov created IGNITE-13138:
---

 Summary: Add REST tests for the new cluster state change API
 Key: IGNITE-13138
 URL: https://issues.apache.org/jira/browse/IGNITE-13138
 Project: Ignite
  Issue Type: Improvement
Reporter: Sergey Antonov
Assignee: Sergey Antonov
 Fix For: 2.9


I didn't find tests for the new REST commands introduced in IGNITE-12225. We 
must fix them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13137) WAL reservation may fail even though the required segment is available

2020-06-09 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-13137:
-
Description: 
It seems there is a race in {{FileWriteAheadLogManager}} that may lead to the 
inability to reserve a WAL segment. 
 Let's consider the following scenario:
  - log WAL record that requires a rollover of the current segment.
  - archiver is moving the WAL file to an archive folder
  - trying to reserve this segment
{code:java}
@Override public boolean reserve(WALPointer start) {
...
segmentAware.reserve(((FileWALPointer)start).index());

if (!hasIndex(((FileWALPointer)start).index())) {  <-- hasIndex 
returns false
segmentAware.release(((FileWALPointer)start).index());

return false;
}
...
}

private boolean hasIndex(long absIdx) {
...
boolean inArchive = new File(walArchiveDir, segmentName).exists() ||
new File(walArchiveDir, zipSegmentName).exists();

if (inArchive)<-- At this point, the required WAL segment is not moved 
yet, so inArchive == false
return true;

if (absIdx <= lastArchivedIndex()) <-- lastArchivedIndex() scans archive 
directory and finds a new WAL segment,
return false;  <-- and absIdx == lastArchivedIndex!

FileWriteHandle cur = currHnd;

return cur != null && cur.getSegmentId() >= absIdx;
}

{code}
Besides this race, it seems to me, the behavior of WAL reservation should be 
improved in a case when the required segment is already reserved/locked. In 
that particular case, we don't need to check WAL archive directory at all.

 

  was:
It seems there is a race in {{FileWriteAheadLogManager}} that may lead to the 
inability to reserve a WAL segment. 
Let's consider the following scenario:
 - log WAL record that requires a rollover of the current segment.
 - archiver is moving the WAL file to an archive folder
 - trying to reserve this segment


{code:java}
@Override public boolean reserve(WALPointer start) {
...
segmentAware.reserve(((FileWALPointer)start).index());

if (!hasIndex(((FileWALPointer)start).index())) {  <-- hasIndex 
returns false
segmentAware.release(((FileWALPointer)start).index());

return false;
}
...
}

private boolean hasIndex(long absIdx) {
...
boolean inArchive = new File(walArchiveDir, segmentName).exists() ||
new File(walArchiveDir, zipSegmentName).exists();

if (inArchive)<-- At this point, the required WAL segment is not moved 
yet, so inArchive == false
return true;

if (absIdx <= lastArchivedIndex()) <-- lastArchivedIndex() scans archive 
directory and finds a new WAL segment, and absIdx == lastArchivedIndex!
return false;

FileWriteHandle cur = currHnd;

return cur != null && cur.getSegmentId() >= absIdx;
}

{code}

Besides this race, it seems to me, the behavior of WAL reservation should be 
improved in a case when the required segment is already reserved/locked. In 
that particular case, we don't need to check WAL archive directory at all.

 


> WAL reservation may fail even though the required segment is available
> 
>
> Key: IGNITE-13137
> URL: https://issues.apache.org/jira/browse/IGNITE-13137
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.8
>Reporter: Vyacheslav Koptilin
>Priority: Major
>
> It seems there is a race in {{FileWriteAheadLogManager}} that may lead to the 
> inability to reserve a WAL segment. 
>  Let's consider the following scenario:
>   - log WAL record that requires a rollover of the current segment.
>   - archiver is moving the WAL file to an archive folder
>   - trying to reserve this segment
> {code:java}
> @Override public boolean reserve(WALPointer start) {
> ...
> segmentAware.reserve(((FileWALPointer)start).index());
> if (!hasIndex(((FileWALPointer)start).index())) {  <-- hasIndex 
> returns false
> segmentAware.release(((FileWALPointer)start).index());
> return false;
> }
> ...
> }
> private boolean hasIndex(long absIdx) {
> ...
> boolean inArchive = new File(walArchiveDir, segmentName).exists() ||
> new File(walArchiveDir, zipSegmentName).exists();
> if (inArchive)<-- At this point, the required WAL segment is not 
> moved yet, so inArchive == false
> return true;
> if (absIdx <= lastArchivedIndex()) <-- lastArchivedIndex() scans archive 
> directory and finds a new WAL segment,
> return false;  <-- and absIdx == lastArchivedIndex!
> FileWriteHandle cur = currHnd;
> return cur != null && cur.getSegmentId() >= absIdx;
> }
> {code}
> Besides this race, it seems to me, the behavior of WAL 

[jira] [Updated] (IGNITE-13137) WAL reservation may fail even though the required segment is available

2020-06-09 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-13137:
-
Ignite Flags: Release Notes Required  (was: Docs Required,Release Notes 
Required)

> WAL reservation may fail even though the required segment is available
> 
>
> Key: IGNITE-13137
> URL: https://issues.apache.org/jira/browse/IGNITE-13137
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 2.8
>Reporter: Vyacheslav Koptilin
>Priority: Major
>
> It seems there is a race in {{FileWriteAheadLogManager}} that may lead to the 
> inability to reserve a WAL segment. 
>  Let's consider the following scenario:
>   - log WAL record that requires a rollover of the current segment.
>   - archiver is moving the WAL file to an archive folder
>   - trying to reserve this segment
> {code:java}
> @Override public boolean reserve(WALPointer start) {
> ...
> segmentAware.reserve(((FileWALPointer)start).index());
> if (!hasIndex(((FileWALPointer)start).index())) {  <-- hasIndex 
> returns false
> segmentAware.release(((FileWALPointer)start).index());
> return false;
> }
> ...
> }
> private boolean hasIndex(long absIdx) {
> ...
> boolean inArchive = new File(walArchiveDir, segmentName).exists() ||
> new File(walArchiveDir, zipSegmentName).exists();
> if (inArchive)<-- At this point, the required WAL segment is not 
> moved yet, so inArchive == false
> return true;
> if (absIdx <= lastArchivedIndex()) <-- lastArchivedIndex() scans archive 
> directory and finds a new WAL segment,
> return false;  <-- and absIdx == lastArchivedIndex!
> FileWriteHandle cur = currHnd;
> return cur != null && cur.getSegmentId() >= absIdx;
> }
> {code}
> Besides this race, it seems to me, the behavior of WAL reservation should be 
> improved in a case when the required segment is already reserved/locked. In 
> that particular case, we don't need to check WAL archive directory at all.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13136) Calcite integration. Improve join predicate testing.

2020-06-09 Thread Roman Kondakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Kondakov updated IGNITE-13136:

Description: 
Currently we have to merge joining rows in order to test a join predicate:
{code:java}
Row row = handler.concat(left, rightMaterialized.get(rightIdx++));

if (!cond.test(row))
continue;
{code}
it results in unconditionally building a joined row even if it will not be 
emitted downstream. To avoid extra GC pressure we need to test the 
join predicate before joining rows:
{code:java}
if (!cond.test(left, right))
continue;

Row row = handler.concat(left, right);
{code}

  was:
Currently we have to merge joining rows in order to test a join predicate:
{code:java}
Row row = handler.concat(left, rightMaterialized.get(rightIdx++));

if (!cond.test(row))
continue;
{code}
it results in unconditionally building a joining row even if it will not be 
emitted downstream. To avoid extra GC pressure we need to test the 
join predicate before joining rows:
{code:java}
if (!cond.test(left, right))
continue;

Row row = handler.concat(left, right);
{code}


> Calcite integration. Improve join predicate testing.
> 
>
> Key: IGNITE-13136
> URL: https://issues.apache.org/jira/browse/IGNITE-13136
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Roman Kondakov
>Priority: Minor
>
> Currently we have to merge joining rows in order to test a join predicate:
> {code:java}
> Row row = handler.concat(left, rightMaterialized.get(rightIdx++));
> if (!cond.test(row))
> continue;
> {code}
> it results in unconditionally building a joined row even if it will not be 
> emitted downstream. To avoid extra GC pressure we need to test the 
> join predicate before joining rows:
> {code:java}
> if (!cond.test(left, right))
> continue;
> Row row = handler.concat(left, right);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13137) WAL reservation may fail even though the required segment is available

2020-06-09 Thread Vyacheslav Koptilin (Jira)
Vyacheslav Koptilin created IGNITE-13137:


 Summary: WAL reservation may fail even though the required 
segment is available
 Key: IGNITE-13137
 URL: https://issues.apache.org/jira/browse/IGNITE-13137
 Project: Ignite
  Issue Type: Bug
  Components: persistence
Affects Versions: 2.8
Reporter: Vyacheslav Koptilin


It seems there is a race in {{FileWriteAheadLogManager}} that may lead to the 
inability to reserve a WAL segment. 
Let's consider the following scenario:
 - log WAL record that requires a rollover of the current segment.
 - archiver is moving the WAL file to an archive folder
 - trying to reserve this segment


{code:java}
@Override public boolean reserve(WALPointer start) {
...
segmentAware.reserve(((FileWALPointer)start).index());

if (!hasIndex(((FileWALPointer)start).index())) {  <-- hasIndex 
returns false
segmentAware.release(((FileWALPointer)start).index());

return false;
}
...
}

private boolean hasIndex(long absIdx) {
...
boolean inArchive = new File(walArchiveDir, segmentName).exists() ||
new File(walArchiveDir, zipSegmentName).exists();

if (inArchive)<-- At this point, the required WAL segment is not moved 
yet, so inArchive == false
return true;

if (absIdx <= lastArchivedIndex()) <-- lastArchivedIndex() scans archive 
directory and finds a new WAL segment, and absIdx == lastArchivedIndex!
return false;

FileWriteHandle cur = currHnd;

return cur != null && cur.getSegmentId() >= absIdx;
}

{code}

Besides this race, it seems to me, the behavior of WAL reservation should be 
improved in a case when the required segment is already reserved/locked. In 
that particular case, we don't need to check WAL archive directory at all.
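
For illustration only (hypothetical helper, not a proposed patch): one way to tolerate the 
race above is to look at the archive directory once more after the lastArchivedIndex() 
comparison, since the archiver may have moved the segment between the two checks:

{code:java}
private boolean hasIndex(long absIdx) {
    // First look: the archiver may not have moved the segment yet.
    if (segmentInArchive(absIdx))
        return true;

    // lastArchivedIndex() may already see the freshly archived segment, so before
    // giving up, re-check the archive directory instead of returning false right away.
    if (absIdx <= lastArchivedIndex())
        return segmentInArchive(absIdx);

    FileWriteHandle cur = currHnd;

    return cur != null && cur.getSegmentId() >= absIdx;
}

/**
 * Hypothetical helper (fileName() stands in for whatever produces the segment file name):
 * does the plain or zipped segment file exist in walArchiveDir?
 */
private boolean segmentInArchive(long absIdx) {
    return new File(walArchiveDir, fileName(absIdx)).exists() ||
        new File(walArchiveDir, fileName(absIdx) + ".zip").exists();
}
{code}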

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13136) Calcite integration. Improve join predicate testing.

2020-06-09 Thread Roman Kondakov (Jira)
Roman Kondakov created IGNITE-13136:
---

 Summary: Calcite integration. Improve join predicate testing.
 Key: IGNITE-13136
 URL: https://issues.apache.org/jira/browse/IGNITE-13136
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Roman Kondakov


Currently we have to merge joining rows in order to test a join predicate:
{code:java}
Row row = handler.concat(left, rightMaterialized.get(rightIdx++));

if (!cond.test(row))
continue;
{code}
it results in unconditionally building a joining row even if it will not be 
emitted downstream. To avoid extra GC pressure we need to test the 
join predicate before joining rows:
{code:java}
if (!cond.test(left, right))
continue;

Row row = handler.concat(left, right);
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (IGNITE-13010) A local listener for cache events with type EVT_CACHE_STOPPED does not get a cache event from a remote node.

2020-06-09 Thread Denis Garus (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Garus reassigned IGNITE-13010:


Assignee: Denis Garus

> A local listener for cache events with type EVT_CACHE_STOPPED does not get a 
> cache event from a remote node.
> 
>
> Key: IGNITE-13010
> URL: https://issues.apache.org/jira/browse/IGNITE-13010
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: Denis Garus
>Assignee: Denis Garus
>Priority: Major
>
> A local listener for cache events with type EVT_CACHE_STOPPED does not get a 
> cache event from a remote node. 
> That occurs due to NPE on a remote node:
> {code:java}
> [2020-05-14 12:07:25,623][ERROR][sys-#206%security.NpeGridEventConsumeHandlerReproducer2%][GridEventConsumeHandler] Failed to send event notification to node: 55671ec1-dad9-452b-8ab2-4b7916c0
> java.lang.NullPointerException
>     at org.apache.ignite.internal.GridEventConsumeHandler$2$1.run(GridEventConsumeHandler.java:238)
>     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>     at java.base/java.lang.Thread.run(Thread.java:834)
> {code}
> The reproducer:
> {code:java}
> public class NpeGridEventConsumeHandlerReproducer extends 
> GridCommonAbstractTest {
> private static AtomicInteger rmtCounter = new AtomicInteger();
> private static AtomicInteger locCounter = new AtomicInteger();
> @Override protected IgniteConfiguration getConfiguration(String 
> igniteInstanceName) throws Exception {
> return 
> super.getConfiguration(igniteInstanceName).setIncludeEventTypes(EVT_CACHE_STOPPED);
> }
> @Test
> public void test() throws Exception {
> startGrids(3);
> 
> grid(1).createCache(new CacheConfiguration<>("test_cache"));
> grid(0).events().remoteListen((uuid, evt) ->{
>  locCounter.incrementAndGet();
>  return true;
> }, evt->{
> rmtCounter.incrementAndGet();
> return true;
> }, EVT_CACHE_STOPPED);
> grid(1).destroyCache("test_cache");
> TimeUnit.SECONDS.sleep(10);
> assertEquals(rmtCounter.get(), locCounter.get());
> }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13135) CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest failed

2020-06-09 Thread Aleksey Plekhanov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129224#comment-17129224
 ] 

Aleksey Plekhanov commented on IGNITE-13135:


[~akalashnikov] according to your test, the schema should be proposed to the 
cluster, but not requested (asserts that there are 2 
MetadataUpdateProposedMessage and 0 MetadataRequestMessage). I think it's 
correct behavior.

> CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest
>  failed
> ---
>
> Key: IGNITE-13135
> URL: https://issues.apache.org/jira/browse/IGNITE-13135
> Project: Ignite
>  Issue Type: Bug
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>
> Test failed with error:
> {noformat}
> java.lang.AssertionError: [] 
> Expected :2
> Actual   :0
> at 
> org.apache.ignite.testframework.junits.JUnitAssertAware.assertEquals(JUnitAssertAware.java:119)
> at 
> org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.assertCustomMessages(CacheRegisterMetadataLocallyTest.java:230)
> at 
> org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest(CacheRegisterMetadataLocallyTest.java:153){noformat}
> After fix IGNITE-13096
> Also test fails sometimes due to ConcurrentModificationException in 
> CacheRegisterMetadataLocallyTest.assertCommunicationMessages:
> {noformat}
> class org.apache.ignite.IgniteException: null
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl0(GridToStringBuilder.java:1162)
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl(GridToStringBuilder.java:1045)
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:755)
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:714)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemandMessage.toString(GridDhtPartitionDemandMessage.java:387)
> at 
> org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.lambda$assertCommunicationMessages$1(CacheRegisterMetadataLocallyTest.java:241)
> at 
> java.base/java.util.concurrent.ConcurrentLinkedQueue.forEachFrom(ConcurrentLinkedQueue.java:1037)
> at 
> java.base/java.util.concurrent.ConcurrentLinkedQueue.forEach(ConcurrentLinkedQueue.java:1054)
> at 
> org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.assertCommunicationMessages(CacheRegisterMetadataLocallyTest.java:240)
> at 
> org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest(CacheRegisterMetadataLocallyTest.java:154)
> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest$7.run(GridAbstractTest.java:2234)
> at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: class org.apache.ignite.IgniteException: null
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl0(GridToStringBuilder.java:1162)
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl(GridToStringBuilder.java:1045)
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:831)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.IgniteDhtDemandedPartitionsMap.toString(IgniteDhtDemandedPartitionsMap.java:167)
> at java.base/java.lang.String.valueOf(String.java:2951)
> at 
> org.apache.ignite.internal.util.GridStringBuilder.a(GridStringBuilder.java:102)
> at 
> org.apache.ignite.internal.util.tostring.SBLimitedLength.a(SBLimitedLength.java:100)
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:900)
> at 
> 

[jira] [Commented] (IGNITE-13135) CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest failed

2020-06-09 Thread Anton Kalashnikov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129201#comment-17129201
 ] 

Anton Kalashnikov commented on IGNITE-13135:


[~sergey-chugunov] do you have an idea why we always request the schema for binary 
metadata even when we register binary metadata from local files? As I 
remember (but maybe I'm wrong), we discussed that when I was implementing 
registration of binary metadata on node start, and according to my test, the 
schema should be requested anyway for some reason.

[~alex_pl], the changes look good to me if we indeed should request the schema despite 
the local version; but if we shouldn't, you can just fix the test (change the 
assert from 2 to 0).

> CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest
>  failed
> ---
>
> Key: IGNITE-13135
> URL: https://issues.apache.org/jira/browse/IGNITE-13135
> Project: Ignite
>  Issue Type: Bug
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>
> Test failed with error:
> {noformat}
> java.lang.AssertionError: [] 
> Expected :2
> Actual   :0
> at 
> org.apache.ignite.testframework.junits.JUnitAssertAware.assertEquals(JUnitAssertAware.java:119)
> at 
> org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.assertCustomMessages(CacheRegisterMetadataLocallyTest.java:230)
> at 
> org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest(CacheRegisterMetadataLocallyTest.java:153){noformat}
> After fix IGNITE-13096
> Also test fails sometimes due to ConcurrentModificationException in 
> CacheRegisterMetadataLocallyTest.assertCommunicationMessages:
> {noformat}
> class org.apache.ignite.IgniteException: null
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl0(GridToStringBuilder.java:1162)
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl(GridToStringBuilder.java:1045)
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:755)
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:714)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemandMessage.toString(GridDhtPartitionDemandMessage.java:387)
> at 
> org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.lambda$assertCommunicationMessages$1(CacheRegisterMetadataLocallyTest.java:241)
> at 
> java.base/java.util.concurrent.ConcurrentLinkedQueue.forEachFrom(ConcurrentLinkedQueue.java:1037)
> at 
> java.base/java.util.concurrent.ConcurrentLinkedQueue.forEach(ConcurrentLinkedQueue.java:1054)
> at 
> org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.assertCommunicationMessages(CacheRegisterMetadataLocallyTest.java:240)
> at 
> org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest(CacheRegisterMetadataLocallyTest.java:154)
> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest$7.run(GridAbstractTest.java:2234)
> at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: class org.apache.ignite.IgniteException: null
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl0(GridToStringBuilder.java:1162)
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl(GridToStringBuilder.java:1045)
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:831)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.IgniteDhtDemandedPartitionsMap.toString(IgniteDhtDemandedPartitionsMap.java:167)
> at java.base/java.lang.String.valueOf(String.java:2951)
> 

[jira] [Commented] (IGNITE-13017) Remove hardcoded delay from re-marking failed node as alive.

2020-06-09 Thread Anton Vinogradov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129200#comment-17129200
 ] 

Anton Vinogradov commented on IGNITE-13017:
---

Merged to master.
Thanks for your contribution!

> Remove hardcoded delay from re-marking failed node as alive.
> 
>
> Key: IGNITE-13017
> URL: https://issues.apache.org/jira/browse/IGNITE-13017
> Project: Ignite
>  Issue Type: Sub-task
>Affects Versions: 2.8.1
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Major
>  Labels: iep-45
> Attachments: WostCaseStepByStep.txt
>
>
> We should remove hardcoded timeout from:
> {code:java}
> boolean 
> ServerImpl.CrossRingMessageSendState.markLastFailedNodeAlive() {
> if (state == RingMessageSendState.FORWARD_PASS || state == 
> RingMessageSendState.BACKWARD_PASS) {
>...
> if (--failedNodes <= 0) {
> ...
> state = RingMessageSendState.STARTING_POINT;
> try {
> Thread.sleep(200);
> }
> catch (InterruptedException e) {
> Thread.currentThread().interrupt();
> }
> }
> return true;
> }
> return false;
> }
> {code}
> This can add an extra 200 ms to the failed node detection time. 
> See '_WorstCaseStepByStep.txt_', step 6, and IGNITE-13016 for more details. 
> This ticket is part of it.
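A minimal sketch of one possible direction (not the actual patch): drop the fixed {{Thread.sleep(200)}} from the state transition and let the retry pause be derived from the configured failure detection timeout instead. It mirrors the quoted snippet; the {{retryPauseMs}} field and the FDT/4 choice are assumptions for illustration only.

{code:java}
// Illustrative sketch only; field and method names are hypothetical.
boolean markLastFailedNodeAlive() {
    if (state == RingMessageSendState.FORWARD_PASS || state == RingMessageSendState.BACKWARD_PASS) {
        // ...

        if (--failedNodes <= 0) {
            // ...

            state = RingMessageSendState.STARTING_POINT;

            // Instead of an unconditional Thread.sleep(200), record a pause that the
            // caller applies before the next send attempt, scaled from the configured
            // failure detection timeout (e.g. FDT / 4) rather than a magic constant.
            retryPauseMs = Math.max(1, failureDetectionTimeout / 4);
        }

        return true;
    }

    return false;
}
{code}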



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13017) Remove hardcoded delay from re-marking failed node as alive.

2020-06-09 Thread Vladimir Steshin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Steshin updated IGNITE-13017:
--
Affects Version/s: 2.8.1

> Remove hardcoded delay from re-marking failed node as alive.
> 
>
> Key: IGNITE-13017
> URL: https://issues.apache.org/jira/browse/IGNITE-13017
> Project: Ignite
>  Issue Type: Sub-task
>Affects Versions: 2.8.1
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Major
>  Labels: iep-45
> Attachments: WostCaseStepByStep.txt
>
>
> We should remove hardcoded timeout from:
> {code:java}
> boolean 
> ServerImpl.CrossRingMessageSendState.markLastFailedNodeAlive() {
> if (state == RingMessageSendState.FORWARD_PASS || state == 
> RingMessageSendState.BACKWARD_PASS) {
>...
> if (--failedNodes <= 0) {
> ...
> state = RingMessageSendState.STARTING_POINT;
> try {
> Thread.sleep(200);
> }
> catch (InterruptedException e) {
> Thread.currentThread().interrupt();
> }
> }
> return true;
> }
> return false;
> }
> {code}
> This can add an extra 200 ms to the failed node detection time. 
> See '_WorstCaseStepByStep.txt_', step 6, and IGNITE-13016 for more details. 
> This ticket is part of it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13012) Fix failure detection timeout. Simplify node ping routine.

2020-06-09 Thread Vladimir Steshin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Steshin updated IGNITE-13012:
--
Affects Version/s: 2.8.1

> Fix failure detection timeout. Simplify node ping routine.
> --
>
> Key: IGNITE-13012
> URL: https://issues.apache.org/jira/browse/IGNITE-13012
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.8.1
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Major
>  Labels: iep-45
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Connection failure may not be detected within 
> IgniteConfiguration.failureDetectionTimeout. Actual worst delay is: 
> ServerImpl.CON_CHECK_INTERVAL + IgniteConfiguration.failureDetectionTimeout. 
> Node ping routine is duplicated.
> We should fix:
> 1. Failure detection timeout should take in account last sent message. 
> Current ping is bound to own time:
> {code:java}ServerImpl. RingMessageWorker.lastTimeConnCheckMsgSent{code}
> This is weird because any discovery message check connection. 
> 2. Make connection check interval depend on failure detection timeout (FTD). 
> Current value is a constant:
> {code:java}static int ServerImpls.CON_CHECK_INTERVAL = 500{code}
> 3. Remove additional, quickened connection checking.  Once we do fix 1, this 
> will become even more useless.
> Despite TCP discovery has a period of connection checking, it may send ping 
> before this period exhausts. This premature node ping relies on the time of 
> any sent or even any received message. 
> 4. Do not worry user with “Node seems disconnected” when everything is OK. 
> Once we do fix 1 and 3, this will become even more useless. 
> Node may log on INFO: “Local node seems to be disconnected from topology …” 
> whereas it is not actually disconnected at all.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13015) Use nano time in node failure detection.

2020-06-09 Thread Vladimir Steshin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Steshin updated IGNITE-13015:
--
Affects Version/s: 2.8.1

> Use nano time in node failure detection.
> 
>
> Key: IGNITE-13015
> URL: https://issues.apache.org/jira/browse/IGNITE-13015
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.8.1
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Minor
>  Labels: iep-45
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Make sure node failure detection does not use:
> {code:java}
> System.currentTimeMillis()
> and
> IgniteUtils.currentTimeMillis()
> {code}
> We should use nano time instead. Disadvantages of the current implementation:
> 1) System time has no guarantee of strict forward movement. System time 
> can be adjusted, for example synchronized by NTP. This can lead to incorrect 
> or negative delays.
> 2) IgniteUtils.currentTimeMillis() has a granularity of 10 ms.
> *To fix*:
> {code:java}ServerImpl.lastRingMsgReceivedTime{code} should be nano-based.
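To illustrate the point, here is a generic, self-contained sketch (not Ignite code): measuring an interval with {{System.nanoTime()}} gives a monotonic elapsed time that cannot go backwards when the wall clock is adjusted by NTP, unlike {{System.currentTimeMillis()}}.

{code:java}
import java.util.concurrent.TimeUnit;

// Minimal sketch of monotonic interval measurement.
public class ElapsedTimeSketch {
    public static void main(String[] args) throws InterruptedException {
        long startNanos = System.nanoTime();

        Thread.sleep(150); // stands in for waiting on a discovery message

        long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNanos);

        // Unlike a wall-clock delta, this value cannot be negative even if the
        // system time was adjusted while we were waiting.
        System.out.println("Elapsed: " + elapsedMs + " ms");
    }
}
{code}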



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13134) Fix connection recovery timeout.

2020-06-09 Thread Vladimir Steshin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Steshin updated IGNITE-13134:
--
Ignite Flags: Release Notes Required  (was: Docs Required,Release Notes 
Required)

> Fix connection recovery timeout.
> ---
>
> Key: IGNITE-13134
> URL: https://issues.apache.org/jira/browse/IGNITE-13134
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.8.1
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Major
>  Labels: iep-45
>
> If a node experiences connection issues, it must establish a new connection or 
> fail within failureDetectionTimeout + connectionRecoveryTimeout.
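A tiny illustration of the intended contract; the timeout values below are assumptions for the example, not the actual defaults.

{code:java}
// Illustrative only: worst-case time before a node with a broken connection
// either re-establishes it or gets failed, per the contract above.
public class RecoveryBudgetSketch {
    public static void main(String[] args) {
        long failureDetectionTimeout = 10_000;   // ms, assumed for illustration
        long connectionRecoveryTimeout = 10_000; // ms, assumed for illustration

        System.out.println("Reconnect or fail within " +
            (failureDetectionTimeout + connectionRecoveryTimeout) + " ms");
    }
}
{code}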



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12085) ThreadPool metrics register after all components start

2020-06-09 Thread Mikhail Petrov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129170#comment-17129170
 ] 

Mikhail Petrov commented on IGNITE-12085:
-

[~nizhikov], [~NSAmelchev] Thanks for the review!

> ThreadPool metrics register after all components start
> --
>
> Key: IGNITE-12085
> URL: https://issues.apache.org/jira/browse/IGNITE-12085
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Assignee: Mikhail Petrov
>Priority: Major
>  Labels: IEP-35, await
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For now, thread pool metrics are registered after all {{GridComponent}}s start.
> But there are specific scenarios where some component blocks {{onKernalStart}} 
> execution for a long time. {{GridCacheProcessor}} can be taken as an example.
> This leads to situations where some metric data is lost.
> It seems we can register thread pool metrics right after only the **required** 
> components have started, and not wait for all components.
> {code:java}
> // Callbacks.
> for (GridComponent comp : ctx) {
> comp.onKernalStart(active);
> }
> // Start plugins.
> for (PluginProvider provider : ctx.plugins().allProviders())
> provider.onIgniteStart();
> ctx.metric().registerThreadPools(utilityCachePool, execSvc, 
> svcExecSvc, sysExecSvc, stripedExecSvc,
> p2pExecSvc, mgmtExecSvc, igfsExecSvc, dataStreamExecSvc, 
> restExecSvc, affExecSvc, idxExecSvc,
> callbackExecSvc, qryExecSvc, schemaExecSvc, rebalanceExecSvc, 
> rebalanceStripedExecSvc, customExecSvcs);
> // Register MBeans.
> mBeansMgr.registerAllMBeans(utilityCachePool, execSvc, 
> svcExecSvc, sysExecSvc, stripedExecSvc, p2pExecSvc,
> mgmtExecSvc, igfsExecSvc, dataStreamExecSvc, restExecSvc, 
> affExecSvc, idxExecSvc, callbackExecSvc,
> qryExecSvc, schemaExecSvc, rebalanceExecSvc, 
> rebalanceStripedExecSvc, customExecSvcs, ctx.workersRegistry());
> {code}
> {code:java}
> public class GridCacheProcessor {
> @Override public void onKernalStart(boolean active) throws 
> IgniteCheckedException {
> //.
> final List syncFuts = new 
> ArrayList<>(caches.size());
> sharedCtx.forAllCaches(new CIX1() {
> @Override public void applyx(GridCacheContext cctx) {
> CacheConfiguration cfg = cctx.config();
> if (cctx.affinityNode() &&
> cfg.getRebalanceMode() == SYNC &&
> startTopVer.equals(cctx.startTopologyVersion())) {
> CacheMode cacheMode = cfg.getCacheMode();
> if (cacheMode == REPLICATED || (cacheMode == PARTITIONED 
> && cfg.getRebalanceDelay() >= 0))
> // Need to wait outside to avoid a deadlock
> syncFuts.add(cctx.preloader().syncFuture());
> }
> }
> });
> for (int i = 0, size = syncFuts.size(); i < size; i++)
> syncFuts.get(i).get();
> {code}
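A sketch of the proposed ordering, reusing the names from the snippets above (illustrative only, not a complete or compiling excerpt of the kernal start-up code): register the thread pool metrics before the potentially long {{onKernalStart}} callbacks, so that no samples are lost while, for example, {{GridCacheProcessor}} waits for rebalance futures.

{code:java}
// Register thread pool metrics first, while the callbacks below may still block.
ctx.metric().registerThreadPools(utilityCachePool, execSvc, svcExecSvc, sysExecSvc,
    stripedExecSvc, p2pExecSvc, mgmtExecSvc, igfsExecSvc, dataStreamExecSvc,
    restExecSvc, affExecSvc, idxExecSvc, callbackExecSvc, qryExecSvc, schemaExecSvc,
    rebalanceExecSvc, rebalanceStripedExecSvc, customExecSvcs);

// Callbacks (may block for a long time in some components).
for (GridComponent comp : ctx)
    comp.onKernalStart(active);

// Start plugins.
for (PluginProvider provider : ctx.plugins().allProviders())
    provider.onIgniteStart();
{code}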



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13090) Add parameter of connection check period to TcpDiscoverySpi

2020-06-09 Thread Vladimir Steshin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Steshin updated IGNITE-13090:
--
Priority: Minor  (was: Major)

> Add parameter of connection check period to TcpDiscoverySpi
> ---
>
> Key: IGNITE-13090
> URL: https://issues.apache.org/jira/browse/IGNITE-13090
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Minor
>
> We should add a connection check period parameter to TcpDiscoverySpi. If it 
> isn't set automatically by IgniteConfiguration.setFailureDetectionTimeout(), 
> the user should be able to tune it. 
> Similar params:
> {code:java}
> TcpDiscoverySpi.setReconnectCount()
> TcpDiscoverySpi.setAckTimeout()
> TcpDiscoverySpi.setSocketTimeout()
> {code}
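A sketch of how such a parameter might look next to the existing knobs. The {{setConnectionCheckInterval()}} call below is hypothetical: it is exactly what this ticket proposes and does not exist in the current API.

{code:java}
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

public class DiscoveryTuningSketch {
    public static void main(String[] args) {
        TcpDiscoverySpi spi = new TcpDiscoverySpi();

        // Existing tuning parameters mentioned above.
        spi.setReconnectCount(10);
        spi.setAckTimeout(5_000);
        spi.setSocketTimeout(5_000);

        // Proposed (hypothetical) parameter:
        // spi.setConnectionCheckInterval(2_500);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(spi);
    }
}
{code}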



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (IGNITE-13005) Spring Data 2 - JPA Improvements and working with multiple Ignite instances on same JVM

2020-06-09 Thread Ilya Kasnacheev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev reassigned IGNITE-13005:


Assignee: (was: Ilya Kasnacheev)

> Spring Data 2 - JPA Improvements and working with multiple Ignite instances 
> on same JVM
> ---
>
> Key: IGNITE-13005
> URL: https://issues.apache.org/jira/browse/IGNITE-13005
> Project: Ignite
>  Issue Type: Improvement
>  Components: spring
>Affects Versions: 2.7.6
>Reporter: Manuel Núñez
>Priority: Major
>
> I have a working Spring Data 2 (2.7.6) module with some interesting 
> improvements, but right now I don't have enough time to give it the attention it 
> requires (full unit/integration tests, and so on), sorry about that. Maybe one of you has 
> the time? Thanks, community!
> Code is 100% compatible with previous versions. 
> [https://github.com/hawkore/ignite-hk/tree/master/modules/spring-data-2.0]
>  * Supports multiple ignite instances on same JVM (@RepositoryConfig).
>  * Supports query tuning parameters in {{@Query}} annotation
>  * Supports projections
>  * Supports {{Page}} and {{Stream}} responses
>  * Supports Sql Fields Query resultset transformation into the domain entity
>  * Supports named parameters ({{:myParam}}) in SQL queries, declared using 
> {{@Param("myParam")}}
>  * Supports advanced parameter binding and SpEL expressions in SQL queries:
>  ** *Template variables*:
>  *** {{#entityName}} - the simple class name of the domain entity
>  ** *Method parameter expressions*: Parameters are exposed for indexed access 
> ({{[0]}} is the first query method's param) or via the name declared using 
> {{@Param}}. The actual SpEL expression binding is triggered by {{?#}}. 
> Example: {{?#\{[0]\}}} or {{?#\{#myParamName\}}}
>  ** *Advanced SpEL expressions*: While advanced parameter binding is a very 
> useful feature, the real power of SpEL stems from the fact that the 
> expressions can refer to framework abstractions or other application 
> components through the SpEL EvaluationContext extension model.
>  * Supports SpEL expressions in Text queries ({{TextQuery}}). 
> Some examples:
> {code:java}
> // Spring Data Repositories using different ignite instances on same JVM
> @RepositoryConfig(igniteInstance = "FLIGHTS_BBDD", cacheName = "ROUTES")
> public interface FlightRouteRepository extends IgniteRepository String> {
> ...
> }
> @RepositoryConfig(igniteInstance = "GEO_BBDD", cacheName = "POIS")
> public interface PoiRepository extends IgniteRepository {
> ...
> }
> {code}
> {code:java}
> // named parameter
> @Query(value = "SELECT * from #{#entityName} where email = :email")
> User searchUserByEmail(@Param("email") String email);
> {code}
> {code:java}
> // indexed parameters
> @Query(value = "SELECT * from #{#entityName} where country = ?#{[0] and city 
> = ?#{[1]}")
> List searchUsersByCity(@Param("country") String country, @Param("city") 
> String city, Pageable pageable);
> {code}
> {code:java}
> // ordered method parameters
> @Query(value = "SELECT * from #{#entityName} where email = ?")
> User searchUserByEmail(String email);
> {code}
> {code:java}
> // Advanced SpEL expressions
> @Query(value = "SELECT * from #{#entityName} where uuidCity = 
> ?#{mySpELFunctionsBean.cityNameToUUID(#city)}")
> List searchUsersByCity(@Param("city") String city, Pageable pageable);
> {code}
> {code:java}
> // textQuery - evaluated SpEL named parameter
> @Query(textQuery = true, value = "email: #{#email}")
> User searchUserByEmail(@Param("email") String email);
> {code}
> {code:java}
> // textQuery - evaluated SpEL named parameter
> @Query(textQuery = true, value = "#{#textToSearch}")
> List searchUsersByText(@Param("textToSearch") String text, Pageable 
> pageable);
> {code}
> {code:java}
> // textQuery - evaluated SpEL indexed parameter
> @Query(textQuery = true, value = "#{[0]}")
> List searchUsersByText(String textToSearch, Pageable pageable);
> {code}
> {code:java}
> // Projection
> @Query(value =
>"SELECT DISTINCT m.id, m.name, m.logos FROM #{#entityName} e 
> USE INDEX (ORIGIN_IDX) INNER JOIN \"flightMerchants\".Merchant m ON m"
>+ "._key=e"
>+ ".merchant WHERE e.origin = :origin and e.disabled = 
> :disabled GROUP BY m.id, m.name, m.logos ORDER BY m.name")
>  List searchMerchantsByOrigin(Class projection, @Param("origin") 
> String origin, @Param("disabled") boolean disabled);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13012) Fix failure detection timeout. Simplify node ping routine.

2020-06-09 Thread Vladimir Steshin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Steshin updated IGNITE-13012:
--
Description: 
Connection failure may not be detected within 
IgniteConfiguration.failureDetectionTimeout. Actual worst delay is: 
ServerImpl.CON_CHECK_INTERVAL + IgniteConfiguration.failureDetectionTimeout. 
Node ping routine is duplicated.

We should fix:

1. Failure detection timeout should take in account last sent message. Current 
ping is bound to own time:
{code:java}ServerImpl. RingMessageWorker.lastTimeConnCheckMsgSent{code}
This is weird because any discovery message check connection. 

2. Make connection check interval depend on failure detection timeout (FTD). 
Current value is a constant:
{code:java}static int ServerImpls.CON_CHECK_INTERVAL = 500{code}

3. Remove additional, quickened connection checking.  Once we do fix 1, this 
will become even more useless.
Despite TCP discovery has a period of connection checking, it may send ping 
before this period exhausts. This premature node ping relies on the time of any 
sent or even any received message. 

4. Do not worry user with “Node seems disconnected” when everything is OK. Once 
we do fix 1 and 3, this will become even more useless. 
Node may log on INFO: “Local node seems to be disconnected from topology …” 
whereas it is not actually disconnected at all.

  was:
Connection failure may not be detected within 
IgniteConfiguration.failureDetectionTimeout. Actual worst delay is: 
ServerImpl.CON_CHECK_INTERVAL + IgniteConfiguration.failureDetectionTimeout. 
Node ping routine is duplicated.

We should fix:

1. Failure detection timeout should take in account last sent message. Current 
ping is bound to own time:
{code:java}ServerImpl. RingMessageWorker.lastTimeConnCheckMsgSent{code}
This is weird because any discovery message check connection. 

2. Make connection check interval depend on failure detection timeout (FTD). 
Current value is a constant:
{code:java}static int ServerImpls.CON_CHECK_INTERVAL = 500{code}

3. Remove additional, quickened connection checking.  Once we do fix 1, this 
will become even more useless.
Despite TCP discovery has a period of connection checking, it may send ping 
before this period exhausts. This premature node ping relies on the time of any 
sent or even any received message. 

4. Do not worry user with “Node disconnected” when everything is OK. Once we do 
fix 1 and 3, this will become even more useless. 
Node may log on INFO: “Local node seems to be disconnected from topology …” 
whereas it is not actually disconnected at all.


> Fix failure detection timeout. Simplify node ping routine.
> --
>
> Key: IGNITE-13012
> URL: https://issues.apache.org/jira/browse/IGNITE-13012
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Major
>  Labels: iep-45
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Connection failure may not be detected within 
> IgniteConfiguration.failureDetectionTimeout. Actual worst delay is: 
> ServerImpl.CON_CHECK_INTERVAL + IgniteConfiguration.failureDetectionTimeout. 
> Node ping routine is duplicated.
> We should fix:
> 1. Failure detection timeout should take in account last sent message. 
> Current ping is bound to own time:
> {code:java}ServerImpl. RingMessageWorker.lastTimeConnCheckMsgSent{code}
> This is weird because any discovery message check connection. 
> 2. Make connection check interval depend on failure detection timeout (FTD). 
> Current value is a constant:
> {code:java}static int ServerImpls.CON_CHECK_INTERVAL = 500{code}
> 3. Remove additional, quickened connection checking.  Once we do fix 1, this 
> will become even more useless.
> Despite TCP discovery has a period of connection checking, it may send ping 
> before this period exhausts. This premature node ping relies on the time of 
> any sent or even any received message. 
> 4. Do not worry user with “Node seems disconnected” when everything is OK. 
> Once we do fix 1 and 3, this will become even more useless. 
> Node may log on INFO: “Local node seems to be disconnected from topology …” 
> whereas it is not actually disconnected at all.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13135) CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest failed

2020-06-09 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129140#comment-17129140
 ] 

Ignite TC Bot commented on IGNITE-13135:


{panel:title=Branch: [pull/7914/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=5375131buildTypeId=IgniteTests24Java8_RunAll]

> CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest
>  failed
> ---
>
> Key: IGNITE-13135
> URL: https://issues.apache.org/jira/browse/IGNITE-13135
> Project: Ignite
>  Issue Type: Bug
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>
> Test failed with error:
> {noformat}
> java.lang.AssertionError: [] 
> Expected :2
> Actual   :0
> at 
> org.apache.ignite.testframework.junits.JUnitAssertAware.assertEquals(JUnitAssertAware.java:119)
> at 
> org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.assertCustomMessages(CacheRegisterMetadataLocallyTest.java:230)
> at 
> org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest(CacheRegisterMetadataLocallyTest.java:153){noformat}
> After fix IGNITE-13096
> Also test fails sometimes due to ConcurrentModificationException in 
> CacheRegisterMetadataLocallyTest.assertCommunicationMessages:
> {noformat}
> class org.apache.ignite.IgniteException: null
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl0(GridToStringBuilder.java:1162)
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl(GridToStringBuilder.java:1045)
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:755)
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:714)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemandMessage.toString(GridDhtPartitionDemandMessage.java:387)
> at 
> org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.lambda$assertCommunicationMessages$1(CacheRegisterMetadataLocallyTest.java:241)
> at 
> java.base/java.util.concurrent.ConcurrentLinkedQueue.forEachFrom(ConcurrentLinkedQueue.java:1037)
> at 
> java.base/java.util.concurrent.ConcurrentLinkedQueue.forEach(ConcurrentLinkedQueue.java:1054)
> at 
> org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.assertCommunicationMessages(CacheRegisterMetadataLocallyTest.java:240)
> at 
> org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest(CacheRegisterMetadataLocallyTest.java:154)
> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest$7.run(GridAbstractTest.java:2234)
> at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: class org.apache.ignite.IgniteException: null
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl0(GridToStringBuilder.java:1162)
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl(GridToStringBuilder.java:1045)
> at 
> org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:831)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.IgniteDhtDemandedPartitionsMap.toString(IgniteDhtDemandedPartitionsMap.java:167)
> at java.base/java.lang.String.valueOf(String.java:2951)
> at 
> org.apache.ignite.internal.util.GridStringBuilder.a(GridStringBuilder.java:102)
> at 
> org.apache.ignite.internal.util.tostring.SBLimitedLength.a(SBLimitedLength.java:100)
> at 
> 

[jira] [Commented] (IGNITE-13052) Calculate result of reserveHistoryForExchange in advance

2020-06-09 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129127#comment-17129127
 ] 

Vladislav Pyatkov commented on IGNITE-13052:


This test failed before as well.

We cannot see this because master does not run on TC daily.

[~irakov] Could you please review the change?

> Calculate result of reserveHistoryForExchange in advance
> 
>
> Key: IGNITE-13052
> URL: https://issues.apache.org/jira/browse/IGNITE-13052
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Rakov
>Assignee: Vladislav Pyatkov
>Priority: Major
>   Original Estimate: 80h
>  Remaining Estimate: 80h
>
> Method reserveHistoryForExchange() is called on every partition map exchange. 
> It's an expensive call: it requires iteration over the whole checkpoint 
> history with a possible retrieval of GroupState from WAL (it's stored on heap 
> with a SoftReference). On some deployments this operation can take several 
> minutes.
> The idea of the optimization is to calculate its result only on the first PME 
> (ideally, even before the first PME, on the recovery stage), keep the resulting map 
> (grpId, partId -> earliestCheckpoint) on heap and update it if necessary. 
> At first glance, the map should be updated:
> 1) On checkpoint. If a new partition appears on local node, it should be 
> registered in the map with current checkpoint. If a partition is evicted from 
> local node, or changes its state to non-OWNING, it should be removed from the 
> map. If checkpoint is marked as inapplicable for a certain group, the whole 
> group should be removed from the map.
> 2) On checkpoint history cleanup. For every (grpId, partId), previous 
> earliest checkpoint should be changed with setIfGreater to new earliest 
> checkpoint.
> We should also extract WAL pointer reservation and filtering small partitions 
> from reserveHistoryForExchange(), but this shouldn't be a problem.
> Another point for optimization: searchPartitionCounter() and 
> searchCheckpointEntry() are executed for each (grpId, partId). That means 
> we'll perform O(number of partitions) linear lookups in history. This should 
> be optimized as well: we can perform one lookup for all (grpId, partId) 
> pairs. This is especially critical for the complexity of the reserveHistoryForPreloading() 
> method: it's executed from the exchange thread.
> Memory overhead of storing described map on heap is insignificant. Its size 
> isn't greater than size of map returned from reserveHistoryForExchange().
> Described fix should be much simpler than IGNITE-12429.
> P.S. Possibly, instead of storing map, we can keep earliestCheckpoint right 
> in GridDhtLocalPartition. It may simplify implementation.
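A self-contained sketch of the bookkeeping described above (all names are illustrative, not the actual Ignite classes): keep the earliest checkpoint per (grpId, partId) on heap and adjust it incrementally, instead of re-scanning the checkpoint history on every exchange.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: incremental (grpId, partId) -> earliest checkpoint map.
public class EarliestCheckpointMapSketch {
    /** (grpId, partId) packed into a long -> id of the earliest checkpoint holding history. */
    private final Map<Long, Long> earliest = new ConcurrentHashMap<>();

    private static long key(int grpId, int partId) {
        return ((long)grpId << 32) | (partId & 0xFFFFFFFFL);
    }

    /** On checkpoint: partition appeared on the local node in OWNING state. */
    public void onPartitionOwned(int grpId, int partId, long cpId) {
        earliest.putIfAbsent(key(grpId, partId), cpId);
    }

    /** On checkpoint: partition was evicted or left the OWNING state. */
    public void onPartitionGone(int grpId, int partId) {
        earliest.remove(key(grpId, partId));
    }

    /** On checkpoint history cleanup: move the earliest checkpoint forward ("setIfGreater"). */
    public void onHistoryTruncated(int grpId, int partId, long newEarliestCpId) {
        earliest.computeIfPresent(key(grpId, partId), (k, cur) -> Math.max(cur, newEarliestCpId));
    }

    /** Lookup used instead of scanning the whole history on exchange. */
    public Long earliestCheckpoint(int grpId, int partId) {
        return earliest.get(key(grpId, partId));
    }
}
{code}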



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13021) Calcite integration. Avoid full scans for disjunctive queries.

2020-06-09 Thread Roman Kondakov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129084#comment-17129084
 ] 

Roman Kondakov commented on IGNITE-13021:
-

[~amashenkov] The patch looks good to me. I left a single comment: I think we 
need to ensure proper behavior in the presence of {{null}} values.

> Calcite integration. Avoid full scans for disjunctive queries.
> --
>
> Key: IGNITE-13021
> URL: https://issues.apache.org/jira/browse/IGNITE-13021
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Roman Kondakov
>Assignee: Andrey Mashenkov
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently a full table scan will be executed in the case of a disjunctive 
> predicate even if the predicate fields are indexed. For example:
> {code:java}
> SELECT * FROM emps WHERE name='A' OR surname='B'
> {code}
> This is caused by the nature of indexes: they can only return a cursor bounded by 
> lower and upper bounds. We can cope with this by implementing a logical rule 
> for rewriting an {{OR}} query into a {{UNION ALL}} query:
> {code:java}
> SELECT * FROM emps WHERE name='A' 
> UNION ALL
> SELECT * FROM emps WHERE surname='B'  AND LNNVL(name='A')
> {code}
> where the {{LNNVL()}} function has the semantics 
> {code:java}
> LNNVL(name='A') == name!='A' OR name=NULL.
> {code}
> It is used to avoid expensive deduplication. The name is taken from Oracle; 
> we can think of a more meaningful name, or find an analog in Calcite or H2.
> See, for example, this blog post: 
> [https://blogs.oracle.com/optimizer/optimizer-transformations:-or-expansion] 
> for details.
> We also need to check that this works for an {{IN}} clause with a small number of 
> literals (AFAIK Calcite converts large {{IN}} clauses to a join with a 
> {{Values}} table when N > 20).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13084) Update BouncyCastle dependency for ignite-aws

2020-06-09 Thread Ilya Kasnacheev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev updated IGNITE-13084:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Update BouncyCastle dependency for ignite-aws
> -
>
> Key: IGNITE-13084
> URL: https://issues.apache.org/jira/browse/IGNITE-13084
> Project: Ignite
>  Issue Type: Task
>Affects Versions: 2.7.6
>Reporter: Semyon Danilov
>Assignee: Semyon Danilov
>Priority: Major
>  Labels: security
> Fix For: 2.9
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> h3. bcprov-ext-jdk15on-1.54 has a lot of known CVEs, so it must be updated 
> to fix potential security issues



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (IGNITE-13084) Update BouncyCastle dependency for ignite-aws

2020-06-09 Thread Ilya Kasnacheev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev resolved IGNITE-13084.
--
Fix Version/s: 2.9
 Release Note: Updated Bouncy Castle dependency for aws module.
   Resolution: Fixed

Thank you for this fix, I have merged it to master.

> Update BouncyCastle dependency for ignite-aws
> -
>
> Key: IGNITE-13084
> URL: https://issues.apache.org/jira/browse/IGNITE-13084
> Project: Ignite
>  Issue Type: Task
>Affects Versions: 2.7.6
>Reporter: Semyon Danilov
>Assignee: Semyon Danilov
>Priority: Major
>  Labels: security
> Fix For: 2.9
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> h3. bcprov-ext-jdk15on-1.54 has a lot of known CVEs, so it must be updated 
> to fix potential security issues



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13084) Update BouncyCastle dependency for ignite-aws

2020-06-09 Thread Ilya Kasnacheev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev updated IGNITE-13084:
-
Component/s: aws

> Update BouncyCastle dependency for ignite-aws
> -
>
> Key: IGNITE-13084
> URL: https://issues.apache.org/jira/browse/IGNITE-13084
> Project: Ignite
>  Issue Type: Task
>  Components: aws
>Affects Versions: 2.7.6
>Reporter: Semyon Danilov
>Assignee: Semyon Danilov
>Priority: Major
>  Labels: security
> Fix For: 2.9
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> h3. bcprov-ext-jdk15on-1.54 has a lot of known CVEs, so it must be updated 
> to fix potential security issues



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13012) Fix failure detection timeout. Simplify node ping routine.

2020-06-09 Thread Vladimir Steshin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Steshin updated IGNITE-13012:
--
Description: 
Connection failure may not be detected within 
IgniteConfiguration.failureDetectionTimeout. Actual worst delay is: 
ServerImpl.CON_CHECK_INTERVAL + IgniteConfiguration.failureDetectionTimeout. 
Node ping routine is duplicated.

We should fixes:

1. Failure detection timeout should take in account last sent message. Current 
ping is bound to own time:
{code:java}ServerImpl. RingMessageWorker.lastTimeConnCheckMsgSent{code}
This is weird because any discovery message check connection. 

2. Make connection check interval depend on failure detection timeout (FTD). 
Current value is a constant:
{code:java}static int ServerImpls.CON_CHECK_INTERVAL = 500{code}

3. Remove additional, quickened connection checking.  Once we do fix 1, this 
will become even more useless.
Despite TCP discovery has a period of connection checking, it may send ping 
before this period exhausts. This premature node ping relies on the time of any 
sent or even any received message. 

4. Do not worry user with “Node disconnected” when everything is OK. Once we do 
fix 1 and 3, this will become even more useless. 
Node may log on INFO: “Local node seems to be disconnected from topology …” 
whereas it is not actually disconnected at all.

  was:
Node-to-next-node connection checking has several drawbacks which go together. 
These drawback hindered understanding and catching problems in IGNITE-13016.  
We should fix the following :

1. Failure detection timeout should take in account last sent message. 
Connection check interval should also rely on this time. If we set timeout on 
current message only, we have no guarantee that connection failure is detected 
with failure detection timeout.  
Current ping is bound to own time:
{code:java}ServerImpl. RingMessageWorker.lastTimeConnCheckMsgSent{code}
This is weird because any discovery message check connection. And 
TpcDiscoveryConnectionCheckMessage is just an addition when message queue is 
empty for a long time. 

2. Make connection check interval depend on failure detection timeout (FTD). 
Current value is a constant:
{code:java}static int ServerImpls.CON_CHECK_INTERVAL = 500{code}
Let's set it FDT/4 to get enough timeout time since last sent message.

3. Remove additional, quickened connection checking.  Once we do fix 1, this 
will become even more useless.
Despite TCP discovery has a period of connection checking, it may send ping 
before this period exhausts. This premature node ping relies on the time of any 
sent or even any received message. Imagine: if node 2 receives no message from 
node 1 within some time, it decides to do extra ping node 3 not waiting for 
regular ping. Such behavior makes confusion and gives no considerable benefits. 
See {code:java}ServerImpl.RingMessageWorker.failureThresholdReached{code}

4. Do not worry user with “Node disconnected” when everything is OK. Once we do 
fix 1 and 3, this will become even more useless. 
Node may log on INFO: “Local node seems to be disconnected from topology …” 
whereas it is not actually disconnected at all.


> Fix failure detection timeout. Simplify node ping routine.
> --
>
> Key: IGNITE-13012
> URL: https://issues.apache.org/jira/browse/IGNITE-13012
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Major
>  Labels: iep-45
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Connection failure may not be detected within 
> IgniteConfiguration.failureDetectionTimeout. Actual worst delay is: 
> ServerImpl.CON_CHECK_INTERVAL + IgniteConfiguration.failureDetectionTimeout. 
> Node ping routine is duplicated.
> We should fixes:
> 1. Failure detection timeout should take in account last sent message. 
> Current ping is bound to own time:
> {code:java}ServerImpl. RingMessageWorker.lastTimeConnCheckMsgSent{code}
> This is weird because any discovery message check connection. 
> 2. Make connection check interval depend on failure detection timeout (FTD). 
> Current value is a constant:
> {code:java}static int ServerImpls.CON_CHECK_INTERVAL = 500{code}
> 3. Remove additional, quickened connection checking.  Once we do fix 1, this 
> will become even more useless.
> Despite TCP discovery has a period of connection checking, it may send ping 
> before this period exhausts. This premature node ping relies on the time of 
> any sent or even any received message. 
> 4. Do not worry user with “Node disconnected” when everything is OK. Once we 
> do fix 1 and 3, this will become even more useless. 
> Node may log on INFO: “Local node seems to be disconnected from topology …” 
> whereas it is not 

[jira] [Updated] (IGNITE-13012) Fix failure detection timeout. Simplify node ping routine.

2020-06-09 Thread Vladimir Steshin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Steshin updated IGNITE-13012:
--
Description: 
Connection failure may not be detected within 
IgniteConfiguration.failureDetectionTimeout. Actual worst delay is: 
ServerImpl.CON_CHECK_INTERVAL + IgniteConfiguration.failureDetectionTimeout. 
Node ping routine is duplicated.

We should fix:

1. Failure detection timeout should take in account last sent message. Current 
ping is bound to own time:
{code:java}ServerImpl. RingMessageWorker.lastTimeConnCheckMsgSent{code}
This is weird because any discovery message check connection. 

2. Make connection check interval depend on failure detection timeout (FTD). 
Current value is a constant:
{code:java}static int ServerImpls.CON_CHECK_INTERVAL = 500{code}

3. Remove additional, quickened connection checking.  Once we do fix 1, this 
will become even more useless.
Despite TCP discovery has a period of connection checking, it may send ping 
before this period exhausts. This premature node ping relies on the time of any 
sent or even any received message. 

4. Do not worry user with “Node disconnected” when everything is OK. Once we do 
fix 1 and 3, this will become even more useless. 
Node may log on INFO: “Local node seems to be disconnected from topology …” 
whereas it is not actually disconnected at all.

  was:
Connection failure may not be detected within 
IgniteConfiguration.failureDetectionTimeout. Actual worst delay is: 
ServerImpl.CON_CHECK_INTERVAL + IgniteConfiguration.failureDetectionTimeout. 
Node ping routine is duplicated.

We should fixes:

1. Failure detection timeout should take in account last sent message. Current 
ping is bound to own time:
{code:java}ServerImpl. RingMessageWorker.lastTimeConnCheckMsgSent{code}
This is weird because any discovery message check connection. 

2. Make connection check interval depend on failure detection timeout (FTD). 
Current value is a constant:
{code:java}static int ServerImpls.CON_CHECK_INTERVAL = 500{code}

3. Remove additional, quickened connection checking.  Once we do fix 1, this 
will become even more useless.
Despite TCP discovery has a period of connection checking, it may send ping 
before this period exhausts. This premature node ping relies on the time of any 
sent or even any received message. 

4. Do not worry user with “Node disconnected” when everything is OK. Once we do 
fix 1 and 3, this will become even more useless. 
Node may log on INFO: “Local node seems to be disconnected from topology …” 
whereas it is not actually disconnected at all.


> Fix failure detection timeout. Simplify node ping routine.
> --
>
> Key: IGNITE-13012
> URL: https://issues.apache.org/jira/browse/IGNITE-13012
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Major
>  Labels: iep-45
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Connection failure may not be detected within 
> IgniteConfiguration.failureDetectionTimeout. Actual worst delay is: 
> ServerImpl.CON_CHECK_INTERVAL + IgniteConfiguration.failureDetectionTimeout. 
> Node ping routine is duplicated.
> We should fix:
> 1. Failure detection timeout should take in account last sent message. 
> Current ping is bound to own time:
> {code:java}ServerImpl. RingMessageWorker.lastTimeConnCheckMsgSent{code}
> This is weird because any discovery message check connection. 
> 2. Make connection check interval depend on failure detection timeout (FTD). 
> Current value is a constant:
> {code:java}static int ServerImpls.CON_CHECK_INTERVAL = 500{code}
> 3. Remove additional, quickened connection checking.  Once we do fix 1, this 
> will become even more useless.
> Despite TCP discovery has a period of connection checking, it may send ping 
> before this period exhausts. This premature node ping relies on the time of 
> any sent or even any received message. 
> 4. Do not worry user with “Node disconnected” when everything is OK. Once we 
> do fix 1 and 3, this will become even more useless. 
> Node may log on INFO: “Local node seems to be disconnected from topology …” 
> whereas it is not actually disconnected at all.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13012) Fix failure detection timeout. Simplify node ping routine.

2020-06-09 Thread Vladimir Steshin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Steshin updated IGNITE-13012:
--
Summary: Fix failure detection timeout. Simplify node ping routine.  (was: 
Make node connection checking rely on the configuration. Simplify node ping 
routine.)

> Fix failure detection timeout. Simplify node ping routine.
> --
>
> Key: IGNITE-13012
> URL: https://issues.apache.org/jira/browse/IGNITE-13012
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Major
>  Labels: iep-45
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Node-to-next-node connection checking has several drawbacks which go 
> together. These drawback hindered understanding and catching problems in 
> IGNITE-13016.  We should fix the following :
> 1. Failure detection timeout should take in account last sent message. 
> Connection check interval should also rely on this time. If we set timeout on 
> current message only, we have no guarantee that connection failure is 
> detected with failure detection timeout.  
> Current ping is bound to own time:
> {code:java}ServerImpl. RingMessageWorker.lastTimeConnCheckMsgSent{code}
> This is weird because any discovery message check connection. And 
> TpcDiscoveryConnectionCheckMessage is just an addition when message queue is 
> empty for a long time. 
> 2. Make connection check interval depend on failure detection timeout (FTD). 
> Current value is a constant:
> {code:java}static int ServerImpls.CON_CHECK_INTERVAL = 500{code}
> Let's set it FDT/4 to get enough timeout time since last sent message.
> 3. Remove additional, quickened connection checking.  Once we do fix 1, this 
> will become even more useless.
> Despite TCP discovery has a period of connection checking, it may send ping 
> before this period exhausts. This premature node ping relies on the time of 
> any sent or even any received message. Imagine: if node 2 receives no message 
> from node 1 within some time, it decides to do extra ping node 3 not waiting 
> for regular ping. Such behavior makes confusion and gives no considerable 
> benefits. 
> See {code:java}ServerImpl.RingMessageWorker.failureThresholdReached{code}
> 4. Do not worry user with “Node disconnected” when everything is OK. Once we 
> do fix 1 and 3, this will become even more useless. 
> Node may log on INFO: “Local node seems to be disconnected from topology …” 
> whereas it is not actually disconnected at all.
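For illustration, the proposed relation between the two values could look like this. It is a sketch under the FDT/4 assumption from the description above, not the actual implementation, and the timeout value is assumed for the example.

{code:java}
// Illustrative sketch: derive the connection check interval from the configured
// failure detection timeout instead of the hardcoded 500 ms constant.
public class ConnCheckIntervalSketch {
    public static void main(String[] args) {
        long failureDetectionTimeoutMs = 10_000; // assumed for the example

        long connCheckIntervalMs = Math.max(1, failureDetectionTimeoutMs / 4);

        System.out.println("Check connection every " + connCheckIntervalMs +
            " ms for a failure detection timeout of " + failureDetectionTimeoutMs + " ms");
    }
}
{code}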



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-7105) .NET: IIgnite.ReentrantLock

2020-06-09 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-7105:
---
Ignite Flags: Release Notes Required
Release Note: .NET: Add IgniteLock

> .NET: IIgnite.ReentrantLock
> ---
>
> Key: IGNITE-7105
> URL: https://issues.apache.org/jira/browse/IGNITE-7105
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Minor
>  Labels: .NET, newbie
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Propagate {{Ignite.reentrantLock}} to .NET.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-7105) .NET: IIgnite.ReentrantLock

2020-06-09 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-7105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129052#comment-17129052
 ] 

Pavel Tupitsyn commented on IGNITE-7105:


Merged to master: 2026b25d87c1d01dce8e33b2dc3d6ae5b3af9df5

> .NET: IIgnite.ReentrantLock
> ---
>
> Key: IGNITE-7105
> URL: https://issues.apache.org/jira/browse/IGNITE-7105
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Minor
>  Labels: .NET, newbie
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Propagate {{Ignite.reentrantLock}} to .NET.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13021) Calcite integration. Avoid full scans for disjunctive queries.

2020-06-09 Thread Andrey Mashenkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129016#comment-17129016
 ] 

Andrey Mashenkov commented on IGNITE-13021:
---

[~rkondakov], please review.
Calcite tests look good.

> Calcite integration. Avoid full scans for disjunctive queries.
> --
>
> Key: IGNITE-13021
> URL: https://issues.apache.org/jira/browse/IGNITE-13021
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Roman Kondakov
>Assignee: Andrey Mashenkov
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently a full table scan will be executed in the case of disjunctive 
> predicate even if predicate fields are indexed. For example:
> {code:java}
> SELECT * FROM emps WHERE name='A' OR surname='B'
> {code}
> This is caused by the nature of indexes: they can return cursor bounded by 
> lower and upper bounds. We can cope with it by implementing a logical rule 
> for rewriting {{OR}} query to a {{UNION ALL}} query:
> {code:java}
> SELECT * FROM emps WHERE name='A' 
> UNION ALL
> SELECT * FROM emps WHERE surname='B'  AND LNNVL(name='A')
> {code}
> where {{LNNVL()}} function has semantics 
> {code:java}
> LNNVL(name='A') == name!='A' OR name=NULL.
> {code}
> It is used to avoid expensive deduplication. This name is taken from Oracle, 
> we can think of more meaningful name, or find the analog in Calcite or H2.
> See, for example, this blog post: 
> [https://blogs.oracle.com/optimizer/optimizer-transformations:-or-expansion] 
> for details.
> Also it is needed to check this works for {{IN}} clause with small number of 
> literals (AFAIK Calcite converts large {{IN}} clauses to a join with 
> {{Values}} table where N > 20).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13070) SQL regressions detection framework

2020-06-09 Thread Andrey Mashenkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129010#comment-17129010
 ] 

Andrey Mashenkov commented on IGNITE-13070:
---

[~rkondakov], I've left a comment on the PR; the rest looks good.

> SQL regressions detection framework
> ---
>
> Key: IGNITE-13070
> URL: https://issues.apache.org/jira/browse/IGNITE-13070
> Project: Ignite
>  Issue Type: Test
>  Components: sql
>Reporter: Roman Kondakov
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We need to detect SQL regressions early. We can do it by comparing the SQL 
> query performance for different Ignite versions. This test framework should 
> work in the following way:
>  # It starts two Ignite clusters with different versions (current version and 
> the previous release version).
>  # The framework then runs randomly generated queries in both clusters and checks 
> the execution time for each cluster. We need to port the SQLSmith library from 
> C++ to Java for this step. But initially we can start with some set of 
> hardcoded queries and postpone the SQLSmith port. Randomized queries can be 
> added later.
>  # All problematic queries are then reported as performance issues. In this 
> way we can manually examine the problems.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13012) Make node connection checking rely on the configuration. Simplify node ping routine.

2020-06-09 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17129006#comment-17129006
 ] 

Ignite TC Bot commented on IGNITE-13012:


{panel:title=Branch: [pull/7835/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=5360952buildTypeId=IgniteTests24Java8_RunAll]

> Make node connection checking rely on the configuration. Simplify node ping 
> routine.
> 
>
> Key: IGNITE-13012
> URL: https://issues.apache.org/jira/browse/IGNITE-13012
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Major
>  Labels: iep-45
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Node-to-next-node connection checking has several drawbacks which go 
> together. These drawback hindered understanding and catching problems in 
> IGNITE-13016.  We should fix the following :
> 1. Failure detection timeout should take in account last sent message. 
> Connection check interval should also rely on this time. If we set timeout on 
> current message only, we have no guarantee that connection failure is 
> detected with failure detection timeout.  
> Current ping is bound to own time:
> {code:java}ServerImpl. RingMessageWorker.lastTimeConnCheckMsgSent{code}
> This is weird because any discovery message check connection. And 
> TpcDiscoveryConnectionCheckMessage is just an addition when message queue is 
> empty for a long time. 
> 2. Make connection check interval depend on failure detection timeout (FTD). 
> Current value is a constant:
> {code:java}static int ServerImpls.CON_CHECK_INTERVAL = 500{code}
> Let's set it FDT/4 to get enough timeout time since last sent message.
> 3. Remove additional, quickened connection checking.  Once we do fix 1, this 
> will become even more useless.
> Despite TCP discovery has a period of connection checking, it may send ping 
> before this period exhausts. This premature node ping relies on the time of 
> any sent or even any received message. Imagine: if node 2 receives no message 
> from node 1 within some time, it decides to do extra ping node 3 not waiting 
> for regular ping. Such behavior makes confusion and gives no considerable 
> benefits. 
> See {code:java}ServerImpl.RingMessageWorker.failureThresholdReached{code}
> 4. Do not worry user with “Node disconnected” when everything is OK. Once we 
> do fix 1 and 3, this will become even more useless. 
> Node may log on INFO: “Local node seems to be disconnected from topology …” 
> whereas it is not actually disconnected at all.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13135) CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest failed

2020-06-09 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-13135:
---
Description: 
Test failed with error:
{noformat}
java.lang.AssertionError: [] 
Expected :2
Actual   :0
at 
org.apache.ignite.testframework.junits.JUnitAssertAware.assertEquals(JUnitAssertAware.java:119)
at 
org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.assertCustomMessages(CacheRegisterMetadataLocallyTest.java:230)
at 
org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest(CacheRegisterMetadataLocallyTest.java:153){noformat}
After fix IGNITE-13096

Also test fails sometimes due to ConcurrentModificationException in 
CacheRegisterMetadataLocallyTest.assertCommunicationMessages:
{noformat}
class org.apache.ignite.IgniteException: null
at 
org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl0(GridToStringBuilder.java:1162)
at 
org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl(GridToStringBuilder.java:1045)
at 
org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:755)
at 
org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:714)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemandMessage.toString(GridDhtPartitionDemandMessage.java:387)
at 
org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.lambda$assertCommunicationMessages$1(CacheRegisterMetadataLocallyTest.java:241)
at 
java.base/java.util.concurrent.ConcurrentLinkedQueue.forEachFrom(ConcurrentLinkedQueue.java:1037)
at 
java.base/java.util.concurrent.ConcurrentLinkedQueue.forEach(ConcurrentLinkedQueue.java:1054)
at 
org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.assertCommunicationMessages(CacheRegisterMetadataLocallyTest.java:240)
at 
org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest(CacheRegisterMetadataLocallyTest.java:154)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$7.run(GridAbstractTest.java:2234)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: class org.apache.ignite.IgniteException: null
at 
org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl0(GridToStringBuilder.java:1162)
at 
org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl(GridToStringBuilder.java:1045)
at 
org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:831)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.IgniteDhtDemandedPartitionsMap.toString(IgniteDhtDemandedPartitionsMap.java:167)
at java.base/java.lang.String.valueOf(String.java:2951)
at 
org.apache.ignite.internal.util.GridStringBuilder.a(GridStringBuilder.java:102)
at 
org.apache.ignite.internal.util.tostring.SBLimitedLength.a(SBLimitedLength.java:100)
at 
org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:900)
at 
org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl0(GridToStringBuilder.java:)
... 19 more
Caused by: java.util.ConcurrentModificationException
at java.base/java.util.HashMap$HashIterator.nextNode(HashMap.java:1493)
at java.base/java.util.HashMap$KeyIterator.next(HashMap.java:1516)
at 
org.apache.ignite.internal.util.tostring.GridToStringBuilder.addCollection(GridToStringBuilder.java:950)
at 
org.apache.ignite.internal.util.tostring.GridToStringBuilder.toString(GridToStringBuilder.java:896)
at 
org.apache.ignite.internal.util.tostring.GridToStringBuilder.toStringImpl0(GridToStringBuilder.java:)
... 27 more{noformat}

  was:
Test failed with error:
{noformat}
java.lang.AssertionError: [] 
Expected :2
Actual   :0
at 
org.apache.ignite.testframework.junits.JUnitAssertAware.assertEquals(JUnitAssertAware.java:119)
at 

[jira] [Commented] (IGNITE-13052) Calculate result of reserveHistoryForExchange in advance

2020-06-09 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17128968#comment-17128968
 ] 

Ignite TC Bot commented on IGNITE-13052:


{panel:title=Branch: [pull/7911/head] Base: [master] : Possible Blockers 
(1)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}Cache 5{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=5374895]]
* IgniteCacheWithIndexingTestSuite: 
CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest
 - Test has low fail rate in base branch 0,0% and is not flaky

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=5373489buildTypeId=IgniteTests24Java8_RunAll]

> Calculate result of reserveHistoryForExchange in advance
> 
>
> Key: IGNITE-13052
> URL: https://issues.apache.org/jira/browse/IGNITE-13052
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Rakov
>Assignee: Vladislav Pyatkov
>Priority: Major
>   Original Estimate: 80h
>  Remaining Estimate: 80h
>
> Method reserveHistoryForExchange() is called on every partition map exchange. 
> It's an expensive call: it requires iteration over the whole checkpoint 
> history with a possible retrieval of GroupState from WAL (it's stored on heap 
> with SoftReference). On some deployments this operation can take several 
> minutes.
> The idea of the optimization is to calculate its result only on the first PME 
> (ideally, even before the first PME, at the recovery stage), keep the 
> resulting map (grpId, partId -> earliestCheckpoint) on heap and update it 
> when necessary. At first glance, the map should be updated:
> 1) On checkpoint. If a new partition appears on the local node, it should be 
> registered in the map with the current checkpoint. If a partition is evicted 
> from the local node, or changes its state to non-OWNING, it should be removed 
> from the map. If a checkpoint is marked as inapplicable for a certain group, 
> the whole group should be removed from the map.
> 2) On checkpoint history cleanup. For every (grpId, partId), the previous 
> earliest checkpoint should be updated with setIfGreater to the new earliest 
> checkpoint.
> We should also extract WAL pointer reservation and filtering of small 
> partitions from reserveHistoryForExchange(), but this shouldn't be a problem.
> Another point for optimization: searchPartitionCounter() and 
> searchCheckpointEntry() are executed for each (grpId, partId). That means 
> we'll perform O(number of partitions) linear lookups in the history. This 
> should be optimized as well: we can perform one lookup for all (grpId, 
> partId) pairs. This is especially critical for the complexity of the 
> reserveHistoryForPreloading() method: it's executed from the exchange thread.
> The memory overhead of storing the described map on heap is insignificant. 
> Its size isn't greater than the size of the map returned from 
> reserveHistoryForExchange().
> The described fix should be much simpler than IGNITE-12429.
> P.S. Possibly, instead of storing the map, we can keep earliestCheckpoint 
> right in GridDhtLocalPartition. It may simplify the implementation.
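
A minimal sketch of how such an on-heap map could be maintained (class and 
method names here are hypothetical, not the actual Ignite implementation; a 
checkpoint is identified by its timestamp for simplicity):
{noformat}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical sketch of the (grpId, partId) -> earliest checkpoint map. */
public class EarliestCheckpointMap {
    /** (grpId, partId) packed into one long -> timestamp of the earliest checkpoint covering the partition. */
    private final Map<Long, Long> earliestCp = new ConcurrentHashMap<>();

    private static long key(int grpId, int partId) {
        return ((long)grpId << 32) | (partId & 0xFFFFFFFFL);
    }

    /** On checkpoint: a partition became OWNING on the local node. */
    public void onPartitionOwned(int grpId, int partId, long cpTs) {
        earliestCp.putIfAbsent(key(grpId, partId), cpTs);
    }

    /** On checkpoint: a partition was evicted or changed its state to non-OWNING. */
    public void onPartitionGone(int grpId, int partId) {
        earliestCp.remove(key(grpId, partId));
    }

    /** On checkpoint history cleanup: advance the lower bound ("setIfGreater" semantics). */
    public void onHistoryTruncated(int grpId, int partId, long newEarliestCpTs) {
        earliestCp.merge(key(grpId, partId), newEarliestCpTs, Math::max);
    }

    /** Constant-time lookup used on PME instead of scanning the whole checkpoint history. */
    public Long earliestCheckpoint(int grpId, int partId) {
        return earliestCp.get(key(grpId, partId));
    }
}
{noformat}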



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (IGNITE-13113) CacheEvent#subjectId for cache events with types EventType#EVTS_CACHE

2020-06-09 Thread Veena Mithare (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17127252#comment-17127252
 ] 

Veena Mithare edited comment on IGNITE-13113 at 6/9/20, 7:08 AM:
-

Hi Team,

The Jira IGNITE-12781 was created by me. To tackle the issue until it is fixed 
I have used the approach below. Kindly confirm if you see any concerns with it:
 # If the cache event holds the subject id of the remote client, then fetch it 
using the getSpiContext().authenticatedSubject(uuid) method. (This in turn will 
check AuthenticationContext.context() and match the subjectId of the event with 
the one in AuthenticationContext.context().)
 # If it holds the subjectId of the node instead of the remote client (in this 
case, the subject returned by point 1 will be null):
 ## Create a cache (transactionIdToSubjectCache) that holds xid vs. security 
subject information, where xid is the id of the transaction started event. The 
subject id on this event always holds the remote client id for cache put events 
generated from DBeaver.
 ## When a cache put event is sent to the storage SPI, match the xid as follows:
 ### Get the subject from transactionIdToSubjectCache using the xid.
 ### If the above is null, get the originating xid of the event xid and get the 
subject using the originating xid.

I am able to get the subject using this approach - could you kindly verify if I 
am missing anything?

Here is pseudo code:

public class AuditSpi extends IgniteSpiAdapter implements EventStorageSpi {
    private Ignite ignite;

    private IgniteCache<IgniteUuid, SecuritySubject> transactionIdSubjectMapCache;

    @Override
    public void record(Event evt) throws IgniteSpiException {
        assert evt != null;

        ignite = Ignition.ignite(igniteInstanceName);
        transactionIdSubjectMapCache = ignite.cache("transactionIdSubjectMapCache");

        if (evt instanceof TransactionStateChangedEvent && evt.type() == EventType.EVT_TX_STARTED) {
            // Populate the transactionIdSubjectMapCache for events generated from DBeaver.
            // The authorization context always holds the remote client subject id here.
            if (AuthorizationContext.context() != null) {
                transactionIdSubjectMapCache.put(
                    ((TransactionStateChangedEvent)evt).tx().xid(),
                    ((ProjectAuthorizationContext)AuthorizationContext.context()).subject());
            }

            return;
        }

        if (evt instanceof CacheEvent) {
            SecuritySubject subj = getSpiContext().authenticatedSubject(((CacheEvent)evt).subjectId());

            IgniteUuid transactionId = null;

            if (subj == null) {
                // The event carries the node's subject id: fall back to the transaction map.
                SecuritySubject sub = getSecuritySubjectFromTransactionMap((CacheEvent)evt, transactionId);

                // More logic to store it in the audit cache here.
            }
        }
    }

    private SecuritySubject getSecuritySubjectFromTransactionMap(CacheEvent evt, IgniteUuid transactionId) {
        SecuritySubject subj = transactionIdSubjectMapCache.get(evt.xid());

        if (subj == null) {
            IgniteTxManager tm = ((IgniteEx)ignite).context().cache().context().tm();

            // Find the active transaction for this event and resolve the subject via its near xid.
            for (IgniteInternalTx transaction : tm.activeTransactions()) {
                if (transaction.xid().equals(evt.xid()) && transaction.nearXidVersion() != null)
                    subj = transactionIdSubjectMapCache.get(transaction.nearXidVersion().asGridUuid());
            }
        }

        return subj;
    }
}
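
A minimal sketch of how such an SPI could be plugged into the node 
configuration (assuming the AuditSpi class above; the instance and cache names 
are illustrative):
{noformat}
IgniteConfiguration cfg = new IgniteConfiguration()
    .setIgniteInstanceName("audited-node")
    // Register the custom event storage SPI sketched above.
    .setEventStorageSpi(new AuditSpi())
    // Enable only the event types that the SPI actually records.
    .setIncludeEventTypes(EventType.EVT_TX_STARTED, EventType.EVT_CACHE_OBJECT_PUT);

// The cache that maps transaction xid -> security subject must exist before events arrive.
cfg.setCacheConfiguration(new CacheConfiguration<IgniteUuid, SecuritySubject>("transactionIdSubjectMapCache"));

Ignite ignite = Ignition.start(cfg);
{noformat}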

 

regards,

Veena.


was (Author: veenamithare):
HI Team,

The Jira IGNITE-12781 was created by me. To tackle the issue till this node is 
fixed I have used the approach as below . Kindly confirm if you see any 
concerns with this :  
 # If the cacheevent holds the subject id of the remoteclient, then fetch it 
using getSpiContext().authenticatedSubject(uuid ) method. ( This in turn will 
check the AuthenticationContext.context() and match the subjectId in of the 
event with the one in the AuthenticationContext.context() )
 # If it holds the subjectId of the node instead of the remoteclient( In this 
case, the subject returned by point 1 will be null ) -
 ## Create a cache( transactionIdToSubjectCache) that holds xid vs security 
subject information where xid is the id of the transaction started event. The 
subject Id on this event always holds the remote client id for cache put events 
generated on dbeaver.
 ## When a cacheput event is sent to the storage spi - match the xid as follows
 ### Get the subject from transactionIdToSubjectCache using the xid.
 ### If the above 

[jira] [Comment Edited] (IGNITE-12781) Cache_Put event generated from a remote_client user action has subject uuid of Node that executes the request sometimes.

2020-06-09 Thread Veena Mithare (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17127251#comment-17127251
 ] 

Veena Mithare edited comment on IGNITE-12781 at 6/9/20, 7:08 AM:
-

Hi Team,

To get the audit information for cache put events generated from DBeaver, I use 
this approach in the storage SPI. Kindly confirm if you see any concerns with 
it:

 # If the cache event holds the subject id of the remote client, then fetch it 
using the getSpiContext().authenticatedSubject(uuid) method. (This in turn will 
check AuthenticationContext.context() and match the subjectId of the event with 
the one in AuthenticationContext.context().)
 # If it holds the subjectId of the node instead of the remote client (in this 
case, the subject returned by point 1 will be null):
 ## Create a cache (transactionIdToSubjectCache) that holds xid vs. security 
subject information, where xid is the id of the transaction started event. The 
subject id on this event always holds the remote client id for cache put events 
generated from DBeaver.
 ## When a cache put event is sent to the storage SPI, match the xid as follows:
 ### Get the subject from transactionIdToSubjectCache using the xid.
 ### If the above is null, get the originating xid of the event xid and get the 
subject using the originating xid.

I am able to get the subject using this approach - could you kindly verify if I 
am missing anything?

Here is pseudo code:

public class AuditSpi extends IgniteSpiAdapter implements EventStorageSpi {
    private Ignite ignite;

    private IgniteCache<IgniteUuid, SecuritySubject> transactionIdSubjectMapCache;

    @Override
    public void record(Event evt) throws IgniteSpiException {
        assert evt != null;

        ignite = Ignition.ignite(igniteInstanceName);
        transactionIdSubjectMapCache = ignite.cache("transactionIdSubjectMapCache");

        if (evt instanceof TransactionStateChangedEvent && evt.type() == EventType.EVT_TX_STARTED) {
            // Populate the transactionIdSubjectMapCache for events generated from DBeaver.
            // The authorization context always holds the remote client subject id here.
            if (AuthorizationContext.context() != null) {
                transactionIdSubjectMapCache.put(
                    ((TransactionStateChangedEvent)evt).tx().xid(),
                    ((ProjectAuthorizationContext)AuthorizationContext.context()).subject());
            }

            return;
        }

        if (evt instanceof CacheEvent) {
            SecuritySubject subj = getSpiContext().authenticatedSubject(((CacheEvent)evt).subjectId());

            IgniteUuid transactionId = null;

            if (subj == null) {
                // The event carries the node's subject id: fall back to the transaction map.
                SecuritySubject sub = getSecuritySubjectFromTransactionMap((CacheEvent)evt, transactionId);

                // More logic to store it in the audit cache here.
            }
        }
    }

    private SecuritySubject getSecuritySubjectFromTransactionMap(CacheEvent evt, IgniteUuid transactionId) {
        SecuritySubject subj = transactionIdSubjectMapCache.get(evt.xid());

        if (subj == null) {
            IgniteTxManager tm = ((IgniteEx)ignite).context().cache().context().tm();

            // Find the active transaction for this event and resolve the subject via its near xid.
            for (IgniteInternalTx transaction : tm.activeTransactions()) {
                if (transaction.xid().equals(evt.xid()) && transaction.nearXidVersion() != null)
                    subj = transactionIdSubjectMapCache.get(transaction.nearXidVersion().asGridUuid());
            }
        }

        return subj;
    }
}

 

regards,

Veena.

 


was (Author: veenamithare):
Hi Team,

To get the audit information for cache put events generated on dbeaver, I use 
this approach in the storagespi . Kindly confirm if you see any concerns with 
this :

 
 # If the cacheevent holds the subject id of the remoteclient, then fetch it 
using getSpiContext().authenticatedSubject(uuid ) method. ( This in turn will 
check the AuthenticationContext.context() and match the subjectId in of the 
event with the one in the AuthenticationContext.context() )
 # If it holds the subjectId of the node instead of the remoteclient( In this 
case, the subject returned by point 1 will be null ) -
 ## Create a cache( transactionIdToSubjectCache) that holds xid vs security 
subject information where xid is the id of the transaction started event. The 
subject Id on this event always holds the remote client id for cache put events 
generated on dbeaver.
 ## When a cacheput event is sent to the storage spi - match the xid as follows
 ### Get the subject from transactionIdToSubjectCache using the xid.
 ### If the above is null, get 

[jira] [Comment Edited] (IGNITE-13078) С++: Add CMake build support

2020-06-09 Thread Ivan Daschinskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17128654#comment-17128654
 ] 

Ivan Daschinskiy edited comment on IGNITE-13078 at 6/9/20, 6:53 AM:


[~isapego]
The current autotools build system builds the ODBC driver with dynamic linking 
against the other Ignite libs. I use the same approach. I only change the 
examples build to link against the installed libraries; this is more logical 
because it resembles the actual user experience.

I think it's OK to link the ODBC driver statically on win32, so I suggest 
leaving the unix linking as is and changing the linking for win32.

By the way, why do you suggest static linking of Boost for win32? Is that 
common on Windows?
And why set {{Boost_USE_MULTITHREADED}} to {{ON}} explicitly? It is {{ON}} by 
default, AFAIK.

By the way, CPack (part of CMake) has built-in support for WiX, so we can add 
generation of an ODBC driver package to the build process.



was (Author: ivandasch):
[~isapego]
Current autotools build system builds odbc driver linking dynamically to other 
ignite libs. I use same approach. I only change examples build, linking against 
installed libraries. This is more logical, because this resembles actual user 
experience. 

I think that for win32 it's ok link statically for odbc driver. So I suggest 
leave unix linking as is and change linking for win32. 

By the way, why for win32 you suggest static linking for boost? It is common 
for windows?
And why set {{Boost_USE_MULTITHREADED}} to {{ON}} explicitly? It is {{ON}} by 
default, AFAIK.



> С++: Add CMake build support
> 
>
> Key: IGNITE-13078
> URL: https://issues.apache.org/jira/browse/IGNITE-13078
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Reporter: Ivan Daschinskiy
>Assignee: Ivan Daschinskiy
>Priority: Major
> Fix For: 2.9
>
> Attachments: ignite-13078-dynamic-odbc.patch, 
> ignite-13078-static-odbc.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, it is hard to build Ignite.C++. Different build processes for 
> windows and Linux, lack of building support on Mac OS X (a quite popular OS 
> among developers), absolutely not IDE support, except windows and only Visual 
> Studio is supported.
> I’d suggest migrating to the CMake build system. It is very popular among 
> open source projects, and in The Apache Software Foundation too. Notable 
> users: Apache Mesos, Apache Zookeeper (C client offers CMake as an 
> alternative to autoconf and the only option on Windows), Apache Kafka 
> (librdkafka - C/C++ client), Apache Thrift. Popular column-oriented database 
> ClickHouse also uses CMake.
> CMake is widely supported in many IDE’s on various platforms, notably Visual 
> Studio, CLion, Xcode, QtCreator, KDevelop.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (IGNITE-13078) С++: Add CMake build support

2020-06-09 Thread Ivan Daschinskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17128654#comment-17128654
 ] 

Ivan Daschinskiy edited comment on IGNITE-13078 at 6/9/20, 6:52 AM:


[~isapego]
Current autotools build system builds odbc driver linking dynamically to other 
ignite libs. I use same approach. I only change examples build, linking against 
installed libraries. This is more logical, because this resembles actual user 
experience. 

I think that for win32 it's ok link statically for odbc driver. So I suggest 
leave unix linking as is and change linking for win32. 

By the way, why for win32 you suggest static linking for boost? It is common 
for windows?
And why set {{Boost_USE_MULTITHREADED}} to {{ON}} explicitly? It is {{ON}} by 
default, AFAIK.




was (Author: ivandasch):
[~isapego]
Current autotools build system builds odbc driver linking dynamically to other 
ignite libs. I use same approach. I only change examples build, linking against 
installed libraries. This is more logical, because this resembles actual user 
experience. 

I think that for win32 it's ok link statically for odbc driver. So I suggest 
leave unix linking as is and change linking for win32.

By the way, why for win32 you suggest static linking for boost? It is common 
for windows?



> С++: Add CMake build support
> 
>
> Key: IGNITE-13078
> URL: https://issues.apache.org/jira/browse/IGNITE-13078
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Reporter: Ivan Daschinskiy
>Assignee: Ivan Daschinskiy
>Priority: Major
> Fix For: 2.9
>
> Attachments: ignite-13078-dynamic-odbc.patch, 
> ignite-13078-static-odbc.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, it is hard to build Ignite.C++. Different build processes for 
> windows and Linux, lack of building support on Mac OS X (a quite popular OS 
> among developers), absolutely not IDE support, except windows and only Visual 
> Studio is supported.
> I’d suggest migrating to the CMake build system. It is very popular among 
> open source projects, and in The Apache Software Foundation too. Notable 
> users: Apache Mesos, Apache Zookeeper (C client offers CMake as an 
> alternative to autoconf and the only option on Windows), Apache Kafka 
> (librdkafka - C/C++ client), Apache Thrift. Popular column-oriented database 
> ClickHouse also uses CMake.
> CMake is widely supported in many IDE’s on various platforms, notably Visual 
> Studio, CLion, Xcode, QtCreator, KDevelop.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (IGNITE-13078) С++: Add CMake build support

2020-06-09 Thread Ivan Daschinskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17128654#comment-17128654
 ] 

Ivan Daschinskiy edited comment on IGNITE-13078 at 6/9/20, 6:50 AM:


[~isapego]
Current autotools build system builds odbc driver linking dynamically to other 
ignite libs. I use same approach. I only change examples build, linking against 
installed libraries. This is more logical, because this resembles actual user 
experience. 

I think that for win32 it's ok link statically for odbc driver. So I suggests 
leave unix linking as is and change linking for win32.

By the way, why for win32 you suggests static linking for boost? It is common 
for windows?




was (Author: ivandasch):
Current autotools build system builds odbc driver linking dynamically to other 
ignite libs. I use same approach. I only change examples build, linking against 
installed libraries. This is more logical, because this resembles actual user 
experience. I think that for win32 it's ok link statically for odbc driver. 

> С++: Add CMake build support
> 
>
> Key: IGNITE-13078
> URL: https://issues.apache.org/jira/browse/IGNITE-13078
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Reporter: Ivan Daschinskiy
>Assignee: Ivan Daschinskiy
>Priority: Major
> Fix For: 2.9
>
> Attachments: ignite-13078-dynamic-odbc.patch, 
> ignite-13078-static-odbc.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, it is hard to build Ignite.C++. Different build processes for 
> windows and Linux, lack of building support on Mac OS X (a quite popular OS 
> among developers), absolutely not IDE support, except windows and only Visual 
> Studio is supported.
> I’d suggest migrating to the CMake build system. It is very popular among 
> open source projects, and in The Apache Software Foundation too. Notable 
> users: Apache Mesos, Apache Zookeeper (C client offers CMake as an 
> alternative to autoconf and the only option on Windows), Apache Kafka 
> (librdkafka - C/C++ client), Apache Thrift. Popular column-oriented database 
> ClickHouse also uses CMake.
> CMake is widely supported in many IDE’s on various platforms, notably Visual 
> Studio, CLion, Xcode, QtCreator, KDevelop.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (IGNITE-13078) С++: Add CMake build support

2020-06-09 Thread Ivan Daschinskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17128654#comment-17128654
 ] 

Ivan Daschinskiy edited comment on IGNITE-13078 at 6/9/20, 6:50 AM:


[~isapego]
Current autotools build system builds odbc driver linking dynamically to other 
ignite libs. I use same approach. I only change examples build, linking against 
installed libraries. This is more logical, because this resembles actual user 
experience. 

I think that for win32 it's ok link statically for odbc driver. So I suggest 
leave unix linking as is and change linking for win32.

By the way, why for win32 you suggest static linking for boost? It is common 
for windows?




was (Author: ivandasch):
[~isapego]
Current autotools build system builds odbc driver linking dynamically to other 
ignite libs. I use same approach. I only change examples build, linking against 
installed libraries. This is more logical, because this resembles actual user 
experience. 

I think that for win32 it's ok link statically for odbc driver. So I suggests 
leave unix linking as is and change linking for win32.

By the way, why for win32 you suggests static linking for boost? It is common 
for windows?



> С++: Add CMake build support
> 
>
> Key: IGNITE-13078
> URL: https://issues.apache.org/jira/browse/IGNITE-13078
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Reporter: Ivan Daschinskiy
>Assignee: Ivan Daschinskiy
>Priority: Major
> Fix For: 2.9
>
> Attachments: ignite-13078-dynamic-odbc.patch, 
> ignite-13078-static-odbc.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, it is hard to build Ignite.C++. Different build processes for 
> windows and Linux, lack of building support on Mac OS X (a quite popular OS 
> among developers), absolutely not IDE support, except windows and only Visual 
> Studio is supported.
> I’d suggest migrating to the CMake build system. It is very popular among 
> open source projects, and in The Apache Software Foundation too. Notable 
> users: Apache Mesos, Apache Zookeeper (C client offers CMake as an 
> alternative to autoconf and the only option on Windows), Apache Kafka 
> (librdkafka - C/C++ client), Apache Thrift. Popular column-oriented database 
> ClickHouse also uses CMake.
> CMake is widely supported in many IDE’s on various platforms, notably Visual 
> Studio, CLion, Xcode, QtCreator, KDevelop.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13135) CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest failed

2020-06-09 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-13135:
---
Ignite Flags:   (was: Docs Required,Release Notes Required)

> CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest
>  failed
> ---
>
> Key: IGNITE-13135
> URL: https://issues.apache.org/jira/browse/IGNITE-13135
> Project: Ignite
>  Issue Type: Bug
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>
> Test failed with error:
> {noformat}
> java.lang.AssertionError: [] 
> Expected :2
> Actual   :0
> at org.apache.ignite.testframework.junits.JUnitAssertAware.assertEquals(JUnitAssertAware.java:119)
> at org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.assertCustomMessages(CacheRegisterMetadataLocallyTest.java:230)
> at org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest(CacheRegisterMetadataLocallyTest.java:153){noformat}
> Failing after the fix for IGNITE-13096.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13135) CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest failed

2020-06-09 Thread Aleksey Plekhanov (Jira)
Aleksey Plekhanov created IGNITE-13135:
--

 Summary: 
CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest
 failed
 Key: IGNITE-13135
 URL: https://issues.apache.org/jira/browse/IGNITE-13135
 Project: Ignite
  Issue Type: Bug
Reporter: Aleksey Plekhanov
Assignee: Aleksey Plekhanov


Test failed with error:
{noformat}
java.lang.AssertionError: [] 
Expected :2
Actual   :0
at org.apache.ignite.testframework.junits.JUnitAssertAware.assertEquals(JUnitAssertAware.java:119)
at org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.assertCustomMessages(CacheRegisterMetadataLocallyTest.java:230)
at org.apache.ignite.internal.processors.cache.CacheRegisterMetadataLocallyTest.testClientFindsValueByAffinityKeyStaticCacheWithoutExtraRequest(CacheRegisterMetadataLocallyTest.java:153){noformat}
Failing after the fix for IGNITE-13096.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (IGNITE-13121) Incorrect "Completed partition exchange" message triggered by client node (dis)connect.

2020-06-09 Thread Stanilovsky Evgeny (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanilovsky Evgeny resolved IGNITE-13121.
-
Resolution: Not A Problem

> Incorrect "Completed partition exchange" message triggered by client node 
> (dis)connect.
> ---
>
> Key: IGNITE-13121
> URL: https://issues.apache.org/jira/browse/IGNITE-13121
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 2.8.1
>Reporter: Stanilovsky Evgeny
>Priority: Major
>
> Currently, a topology change involving a client cluster node triggers the 
> message below; this is erroneous because no partition exchange takes place 
> here. All we need to log here is a topology change message.
> {noformat}
> Completed partition exchange [localNode=e0062158-e1d1-4139-aca3-332ddc919b00, 
> exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion 
> [topVer=97, minorTopVer=0], evt=NODE_LEFT, evtNode=TcpDiscoveryNode 
> [id=e91e44dd-fb7e-415a-a980-3d6c597f46e8, 
> consistentId=e91e44dd-fb7e-415a-a980-3d6c597f46e8, addrs=ArrayList 
> [1.2.6.44], sockAddrs=HashSet [grid2094/1.2.3.4:0], discPort=0, order=94, 
> intOrder=59, lastExchangeTime=1590669922545, loc=false, 
> ver=2.5.8#20190912-sha1:67c23274, isClient=true],
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13121) Incorrect "Completed partition exchange" message triggered by client node (dis)connect.

2020-06-09 Thread Stanilovsky Evgeny (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17128895#comment-17128895
 ] 

Stanilovsky Evgeny commented on IGNITE-13121:
-

Although this part of the message, "partition exchange", is still incorrect, it 
looks like we can't easily change it, because a lot of monitoring systems 
already treat this message as the "partition map exchange" finish point.

> Incorrect "Completed partition exchange" message triggered by client node 
> (dis)connect.
> ---
>
> Key: IGNITE-13121
> URL: https://issues.apache.org/jira/browse/IGNITE-13121
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 2.8.1
>Reporter: Stanilovsky Evgeny
>Priority: Major
>
> Currently, a topology change involving a client cluster node triggers the 
> message below; this is erroneous because no partition exchange takes place 
> here. All we need to log here is a topology change message.
> {noformat}
> Completed partition exchange [localNode=e0062158-e1d1-4139-aca3-332ddc919b00, 
> exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion 
> [topVer=97, minorTopVer=0], evt=NODE_LEFT, evtNode=TcpDiscoveryNode 
> [id=e91e44dd-fb7e-415a-a980-3d6c597f46e8, 
> consistentId=e91e44dd-fb7e-415a-a980-3d6c597f46e8, addrs=ArrayList 
> [1.2.6.44], sockAddrs=HashSet [grid2094/1.2.3.4:0], discPort=0, order=94, 
> intOrder=59, lastExchangeTime=1590669922545, loc=false, 
> ver=2.5.8#20190912-sha1:67c23274, isClient=true],
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)