[ignite] branch master updated: IGNITE-11868 GridClient#data() should be deprecated/removed. - Fixes #6894.

2019-10-03 Thread irakov
This is an automated email from the ASF dual-hosted git repository.

irakov pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/ignite.git


The following commit(s) were added to refs/heads/master by this push:
 new a0e0bef  IGNITE-11868 GridClient#data() should be deprecated/removed. - Fixes #6894.
a0e0bef is described below

commit a0e0befbc97b24c4dd092d148f4540da5f2548ef
Author: kcheng.mvp 
AuthorDate: Thu Oct 3 20:07:51 2019 +0300

IGNITE-11868 GridClient#data() should be deprecated/removed. - Fixes #6894.

Signed-off-by: Ivan Rakov 
---
 .../java/org/apache/ignite/internal/client/GridClient.java| 11 +--
 .../apache/ignite/internal/client/impl/GridClientImpl.java|  5 -
 .../internal/client/router/impl/GridRouterClientImpl.java |  5 -
 3 files changed, 1 insertion(+), 20 deletions(-)

diff --git a/modules/core/src/main/java/org/apache/ignite/internal/client/GridClient.java b/modules/core/src/main/java/org/apache/ignite/internal/client/GridClient.java
index 0405a90..f84c3f1 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/client/GridClient.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/client/GridClient.java
@@ -31,9 +31,8 @@ import java.util.UUID;
  * can have multiple instances of {@code GridClient} running in the same VM. For
  * information on how to start or stop Grid please refer to {@link GridClientFactory} class.
  * 
- * Use following methods to get access to remote cache functionality:
+ * Use the following method to get access to remote cache functionality:
  * 
- * {@link #data()}
  * {@link #data(String)}
  * 
  * Use following methods to get access to remote compute functionality:
@@ -63,14 +62,6 @@ public interface GridClient extends AutoCloseable {
 public UUID id();
 
 /**
- * Gets a data projection for a default grid cache with {@code null} name.
- *
- * @return Data projection for grid cache with {@code null} name.
- * @throws GridClientException If client was closed.
- */
-public GridClientData data() throws GridClientException;
-
-/**
  * Gets a data projection for grid cache with name cacheName. If
  * no data configuration with given name was provided at client startup, an
  * exception will be thrown.
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/client/impl/GridClientImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/client/impl/GridClientImpl.java
index 21e09bf..546f97a 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/client/impl/GridClientImpl.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/client/impl/GridClientImpl.java
@@ -261,11 +261,6 @@ public class GridClientImpl implements GridClient {
 }
 
 /** {@inheritDoc} */
-@Override public GridClientData data() throws GridClientException {
-return data(null);
-}
-
-/** {@inheritDoc} */
 @Override public GridClientData data(@Nullable final String cacheName) throws GridClientException {
 checkClosed();
 
diff --git a/modules/core/src/main/java/org/apache/ignite/internal/client/router/impl/GridRouterClientImpl.java b/modules/core/src/main/java/org/apache/ignite/internal/client/router/impl/GridRouterClientImpl.java
index f3c9d39..68b56b5 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/client/router/impl/GridRouterClientImpl.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/client/router/impl/GridRouterClientImpl.java
@@ -176,11 +176,6 @@ public class GridRouterClientImpl implements GridClient {
 }
 
 /** {@inheritDoc} */
-@Override public GridClientData data() throws GridClientException {
-return clientImpl.data();
-}
-
-/** {@inheritDoc} */
 @Override public GridClientData data(String cacheName) throws GridClientException {
 return clientImpl.data(cacheName);
 }
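
For code that still calls the removed no-arg method, a minimal migration sketch is given below (a sketch only: the server address, the cache name "myCache" and the sample put call are illustrative and assume a node with a matching cache and connector running):

    import java.util.Collections;
    import org.apache.ignite.internal.client.GridClient;
    import org.apache.ignite.internal.client.GridClientConfiguration;
    import org.apache.ignite.internal.client.GridClientData;
    import org.apache.ignite.internal.client.GridClientDataConfiguration;
    import org.apache.ignite.internal.client.GridClientException;
    import org.apache.ignite.internal.client.GridClientFactory;

    public class DataProjectionMigration {
        public static void main(String[] args) throws GridClientException {
            GridClientConfiguration cfg = new GridClientConfiguration();
            cfg.setServers(Collections.singletonList("127.0.0.1:11211"));

            // The data configuration must carry the cache name that is later passed to data(String).
            GridClientDataConfiguration dataCfg = new GridClientDataConfiguration();
            dataCfg.setName("myCache");
            cfg.setDataConfigurations(Collections.singletonList(dataCfg));

            GridClient client = GridClientFactory.start(cfg);

            try {
                // Before this commit: GridClientData data = client.data(); // default (null-named) cache.
                // After it, the cache is always addressed by name:
                GridClientData data = client.data("myCache");

                data.put("key", "value");
            }
            finally {
                GridClientFactory.stopAll();
            }
        }
    }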



[ignite] branch master updated (08a1d1d -> 0db596a)

2019-10-03 Thread mmuzaf
This is an automated email from the ASF dual-hosted git repository.

mmuzaf pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/ignite.git.


from 08a1d1d  IGNITE-11008 Remove redundant spaces from JDBC metadata columns IS_GENERATEDCOLUMN and BUFFER_LENGTH - Fixes #6885.
 add 0db596a  IGNITE-12181: Fixed assertion for non-persisted group in PDS enabled cluster. (#6929)

No new revisions were added by this update.

Summary of changes:
 .../cache/IgniteCacheOffheapManagerImpl.java   | 15 +++--
 .../cache/persistence/GridCacheOffheapManager.java |  8 +--
 .../IgnitePdsCacheRebalancingAbstractTest.java | 74 +-
 3 files changed, 82 insertions(+), 15 deletions(-)



[ignite] branch master updated: IGNITE-11008 Remove redundant spaces from JDBC metadata columns IS_GENERATEDCOLUMN and BUFFER_LENGTH - Fixes #6885.

2019-10-03 Thread ipavlukhin
This is an automated email from the ASF dual-hosted git repository.

ipavlukhin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/ignite.git


The following commit(s) were added to refs/heads/master by this push:
 new 08a1d1d  IGNITE-11008 Remove redundant spaces from JDBC metadata columns IS_GENERATEDCOLUMN and BUFFER_LENGTH - Fixes #6885.
08a1d1d is described below

commit 08a1d1d93bf224d4ea4c0ecdbb85df893743c52d
Author: kcmvp 
AuthorDate: Thu Oct 3 12:23:46 2019 +0300

    IGNITE-11008 Remove redundant spaces from JDBC metadata columns IS_GENERATEDCOLUMN and BUFFER_LENGTH - Fixes #6885.

Signed-off-by: ipavlukhin 
---
 .../apache/ignite/internal/jdbc/thin/JdbcThinDatabaseMetadata.java| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/JdbcThinDatabaseMetadata.java b/modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/JdbcThinDatabaseMetadata.java
index 35b2c14..a59c983 100644
--- a/modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/JdbcThinDatabaseMetadata.java
+++ b/modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/JdbcThinDatabaseMetadata.java
@@ -792,7 +792,7 @@ public class JdbcThinDatabaseMetadata implements DatabaseMetaData {
 new JdbcColumnMeta(null, null, "DATA_TYPE", Short.class),   // 5
 new JdbcColumnMeta(null, null, "TYPE_NAME", String.class),  // 6
 new JdbcColumnMeta(null, null, "COLUMN_SIZE", Integer.class),   // 7
-new JdbcColumnMeta(null, null, "BUFFER_LENGTH ", Integer.class), // 8
+new JdbcColumnMeta(null, null, "BUFFER_LENGTH", Integer.class), // 8
 new JdbcColumnMeta(null, null, "DECIMAL_DIGITS", Integer.class), // 9
 new JdbcColumnMeta(null, null, "NUM_PREC_RADIX", Short.class),  // 10
 new JdbcColumnMeta(null, null, "NULLABLE", Short.class),    // 11
@@ -808,7 +808,7 @@ public class JdbcThinDatabaseMetadata implements DatabaseMetaData {
 new JdbcColumnMeta(null, null, "SCOPE_TABLE", String.class),    // 21
 new JdbcColumnMeta(null, null, "SOURCE_DATA_TYPE", Short.class), // 22
 new JdbcColumnMeta(null, null, "IS_AUTOINCREMENT", String.class), // 23
-new JdbcColumnMeta(null, null, "IS_GENERATEDCOLUMN ", String.class) // 24
+new JdbcColumnMeta(null, null, "IS_GENERATEDCOLUMN", String.class) // 24
 );
 
 if (!isValidCatalog(catalog))
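
The client-side effect of the fix is easy to demonstrate. The sketch below (connection URL, schema and table name are illustrative and assume a node reachable by the thin JDBC driver) reads column metadata back by the labels mandated by java.sql.DatabaseMetaData; before this change the returned labels carried a trailing space ("BUFFER_LENGTH ", "IS_GENERATEDCOLUMN "), so such label-based lookups did not match.

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class ColumnsMetadataCheck {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1")) {
                DatabaseMetaData md = conn.getMetaData();

                // Columns of the (illustrative) PUBLIC.PERSON table.
                try (ResultSet cols = md.getColumns(null, "PUBLIC", "PERSON", "%")) {
                    while (cols.next()) {
                        // Label-based lookups rely on the exact names from the JDBC spec;
                        // trailing spaces in the labels broke them before this commit.
                        String name = cols.getString("COLUMN_NAME");
                        String generated = cols.getString("IS_GENERATEDCOLUMN");

                        System.out.println(name + " -> IS_GENERATEDCOLUMN=" + generated);
                    }
                }
            }
        }
    }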



[ignite] branch master updated: IGNITE-11723:[Spark] IgniteSpark integration should support skipStore option for internal dataStreamer (IgniteRdd and Ignite DataFrame) (#6907)

2019-10-03 Thread zaleslaw
This is an automated email from the ASF dual-hosted git repository.

zaleslaw pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/ignite.git


The following commit(s) were added to refs/heads/master by this push:
 new 2e7553a  IGNITE-11723:[Spark] IgniteSpark integration should support skipStore option for internal dataStreamer (IgniteRdd and Ignite DataFrame) (#6907)
2e7553a is described below

commit 2e7553aa469a679c8a297e49f50b5464d9d76488
Author: Alexey Zinoviev 
AuthorDate: Thu Oct 3 11:38:33 2019 +0300

    IGNITE-11723:[Spark] IgniteSpark integration should support skipStore option for internal dataStreamer (IgniteRdd and Ignite DataFrame) (#6907)
---
 .../ignite/spark/IgniteDataFrameSettings.scala |  19 ++
 .../scala/org/apache/ignite/spark/IgniteRDD.scala  |  11 +-
 .../org/apache/ignite/spark/JavaIgniteRDD.scala|  13 +-
 .../ignite/spark/impl/IgniteRelationProvider.scala |   3 +
 .../org/apache/ignite/spark/impl/QueryHelper.scala |   9 +-
 .../spark/JavaEmbeddedIgniteRDDSelfTest.java   |   6 +-
 ...avaEmbeddedIgniteRDDWithLocalStoreSelfTest.java | 227 +
 .../ignite/testsuites/IgniteRDDTestSuite.java  |   4 +-
 8 files changed, 276 insertions(+), 16 deletions(-)
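
A minimal usage sketch for the new DataFrame option follows (assumptions: a Dataset<Row> built elsewhere, an Ignite Spring config at the illustrative path "ignite-config.xml", and an illustrative table "person"; the option keys are written as the literal string values behind the IgniteDataFrameSettings constants):

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SaveMode;

    public class SkipStoreWriteExample {
        /** Writes the given frame to Ignite with write-through disabled on the underlying data streamer. */
        public static void writeSkippingStore(Dataset<Row> df) {
            df.write()
                .format("ignite")                       // FORMAT_IGNITE
                .option("config", "ignite-config.xml")  // OPTION_CONFIG_FILE, path is illustrative
                .option("table", "person")              // OPTION_TABLE, table name is illustrative
                .option("streamerSkipStore", true)      // OPTION_STREAMER_SKIP_STORE added by this commit
                .mode(SaveMode.Append)
                .save();
        }
    }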

diff --git a/modules/spark/src/main/scala/org/apache/ignite/spark/IgniteDataFrameSettings.scala b/modules/spark/src/main/scala/org/apache/ignite/spark/IgniteDataFrameSettings.scala
index e176721..4e0abf4 100644
--- a/modules/spark/src/main/scala/org/apache/ignite/spark/IgniteDataFrameSettings.scala
+++ b/modules/spark/src/main/scala/org/apache/ignite/spark/IgniteDataFrameSettings.scala
@@ -120,6 +120,25 @@ object IgniteDataFrameSettings {
 /**
   * Config option for saving data frame.
   * Internally all SQL inserts are done through `IgniteDataStreamer`.
+  * This option sets the `skipStore` property of the streamer.
+  * If `true` then write-through behavior will be disabled for data streaming.
+  * If `false` then write-through behavior will be enabled for data streaming.
+  * Default value is `false`.
+  *
+  * @example {{{
+  *   val igniteDF = spark.write.format(IGNITE)
+  *   // other options ...
+  *   .option(OPTION_STREAMER_SKIP_STORE, true)
+  *   .save()
+  *  }}}
+  * @see [[org.apache.ignite.IgniteDataStreamer]]
+  * @see [[org.apache.ignite.IgniteDataStreamer#skipStore(boolean)]]
+  */
+val OPTION_STREAMER_SKIP_STORE = "streamerSkipStore"
+
+/**
+  * Config option for saving data frame.
+  * Internally all SQL inserts are done through `IgniteDataStreamer`.
   * This options sets `autoFlushFrequency` property of streamer.
   *
   * @example {{{
diff --git a/modules/spark/src/main/scala/org/apache/ignite/spark/IgniteRDD.scala b/modules/spark/src/main/scala/org/apache/ignite/spark/IgniteRDD.scala
index 5fb81b6..25784d1 100644
--- a/modules/spark/src/main/scala/org/apache/ignite/spark/IgniteRDD.scala
+++ b/modules/spark/src/main/scala/org/apache/ignite/spark/IgniteRDD.scala
@@ -17,7 +17,6 @@
 package org.apache.ignite.spark
 
 import javax.cache.Cache
-
 import org.apache.ignite.cache.query._
 import org.apache.ignite.cluster.ClusterNode
 import org.apache.ignite.configuration.CacheConfiguration
@@ -228,8 +227,9 @@ class IgniteRDD[K, V] (
  * @param rdd RDD instance to save values from.
  * @param overwrite Boolean flag indicating whether the call on this method should overwrite existing
  *  values in Ignite cache.
+ * @param skipStore Sets flag indicating that write-through behavior should be disabled for data streaming.
  */
-def savePairs(rdd: RDD[(K, V)], overwrite: Boolean = false) = {
+def savePairs(rdd: RDD[(K, V)], overwrite: Boolean = false, skipStore: Boolean = false) = {
 rdd.foreachPartition(it ⇒ {
 val ig = ic.ignite()
 
@@ -240,6 +240,7 @@ class IgniteRDD[K, V] (
 
 try {
 streamer.allowOverwrite(overwrite)
+streamer.skipStore(skipStore)
 
 it.foreach(tup ⇒ {
 streamer.addData(tup._1, tup._2)
@@ -258,8 +259,9 @@ class IgniteRDD[K, V] (
  * @param f Transformation function.
  * @param overwrite Boolean flag indicating whether the call on this method should overwrite existing
  *  values in Ignite cache.
+ * @param skipStore Sets flag indicating that write-through behavior should be disabled for data streaming.
  */
-def savePairs[T](rdd: RDD[T], f: (T, IgniteContext) ⇒ (K, V), overwrite: Boolean) = {
+def savePairs[T](rdd: RDD[T], f: (T, IgniteContext) ⇒ (K, V), overwrite: Boolean, skipStore: Boolean) = {
 rdd.foreachPartition(it ⇒ {
 val ig = ic.ignite()
 
@@ -270,6 +272,7 @@ class IgniteRDD[K, V] (
 
 try {
 streamer.allowOverwrite(overwrite)
+