[jira] [Created] (IGNITE-6042) Update KafkaStreamer dependencies to Kafka 0.11.x

2017-08-10 Thread Roman Shtykh (JIRA)
Roman Shtykh created IGNITE-6042:


 Summary: Update KafkaStreamer dependencies to Kafka 0.11.x
 Key: IGNITE-6042
 URL: https://issues.apache.org/jira/browse/IGNITE-6042
 Project: Ignite
  Issue Type: Task
  Components: streaming
Reporter: Roman Shtykh
Assignee: Roman Shtykh
 Fix For: 2.2






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


new website tickets

2017-08-10 Thread Dmitriy Setrakyan
Igniters,

I have filed several website tickets to reflect the addition of persistence
in the latest 2.1 release:

https://issues.apache.org/jira/browse/IGNITE-6036
https://issues.apache.org/jira/browse/IGNITE-6037
https://issues.apache.org/jira/browse/IGNITE-6038
https://issues.apache.org/jira/browse/IGNITE-6039
https://issues.apache.org/jira/browse/IGNITE-6040
https://issues.apache.org/jira/browse/IGNITE-6041

The idea behind these tickets is to improve the user experience, documentation,
and feature descriptions. If anyone can think of more changes or additions,
please reply in this thread.

Denis, given your natural affinity and love for website changes
(pun intended), I have assigned all of them to you, but feel free to
unassign yourself.

D.


[jira] [Created] (IGNITE-6041) Update Getting Started documentation page

2017-08-10 Thread Dmitriy Setrakyan (JIRA)
Dmitriy Setrakyan created IGNITE-6041:
-

 Summary: Update Getting Started documentation page
 Key: IGNITE-6041
 URL: https://issues.apache.org/jira/browse/IGNITE-6041
 Project: Ignite
  Issue Type: Task
  Components: website
Reporter: Dmitriy Setrakyan
Assignee: Denis Magda


Update Getting Started guide in documentation:
# show enable/disable persistence flag
# add SQL connectivity example (create, insert, select)
# put data grid example right after SQL
# add collocated computation example to the data grid example
# add service grid example





[jira] [Created] (IGNITE-6040) Update Distributed SQL Database page on the website

2017-08-10 Thread Dmitriy Setrakyan (JIRA)
Dmitriy Setrakyan created IGNITE-6040:
-

 Summary: Update Distributed SQL Database page on the website
 Key: IGNITE-6040
 URL: https://issues.apache.org/jira/browse/IGNITE-6040
 Project: Ignite
  Issue Type: Task
  Components: website
Reporter: Dmitriy Setrakyan
Assignee: Denis Magda


# add Write-Ahead-Log (WAL) sub-section
# add Main Storage sub-section
# add Checkpointing sub-section
# add Redundancy sub-section (describe backups)
# add Durability sub-section (describe restarts)
# add Configuration example





[jira] [Created] (IGNITE-6039) Update Persistence page on the website

2017-08-10 Thread Dmitriy Setrakyan (JIRA)
Dmitriy Setrakyan created IGNITE-6039:
-

 Summary: Update Persistence page on the website
 Key: IGNITE-6039
 URL: https://issues.apache.org/jira/browse/IGNITE-6039
 Project: Ignite
  Issue Type: Task
  Components: website
Reporter: Dmitriy Setrakyan
Assignee: Denis Magda


# Change the diagram to reflect native persistence
# add native persistence sub-section
# add native persistence feature





[jira] [Created] (IGNITE-6038) Update Data Grid page on the website to reflect Persistence

2017-08-10 Thread Dmitriy Setrakyan (JIRA)
Dmitriy Setrakyan created IGNITE-6038:
-

 Summary: Update Data Grid page on the website to reflect 
Persistence
 Key: IGNITE-6038
 URL: https://issues.apache.org/jira/browse/IGNITE-6038
 Project: Ignite
  Issue Type: Task
  Components: website
Reporter: Dmitriy Setrakyan
Assignee: Denis Magda


# Change the diagram to reflect native persistence
# add native persistence sub-section
# add native persistence feature





[jira] [Created] (IGNITE-6037) Ignite 2.1 features overview changes on the website

2017-08-10 Thread Dmitriy Setrakyan (JIRA)
Dmitriy Setrakyan created IGNITE-6037:
-

 Summary: Ignite 2.1 features overview changes on the website
 Key: IGNITE-6037
 URL: https://issues.apache.org/jira/browse/IGNITE-6037
 Project: Ignite
  Issue Type: Bug
  Components: website
Reporter: Dmitriy Setrakyan
Assignee: Denis Magda


Make the following changes to Features->Overview:
# add What is Memory Centric
# add What is Durable Memory





[jira] [Created] (IGNITE-6036) Ignite 2.1 use case changes on the website

2017-08-10 Thread Dmitriy Setrakyan (JIRA)
Dmitriy Setrakyan created IGNITE-6036:
-

 Summary: Ignite 2.1 use case changes on the website
 Key: IGNITE-6036
 URL: https://issues.apache.org/jira/browse/IGNITE-6036
 Project: Ignite
  Issue Type: Bug
  Components: website
Reporter: Dmitriy Setrakyan
Assignee: Denis Magda


Need to make the following changes to the use case section on the website:
# Add new section in the dropdown: Distributed Database
## add Distributed Transactional Database page
## add In-Memory Database page
## add Ignite vs NoSQL Databases
## add Ignite vs RDBMS Databases
# Under In-Memory Caching Section
## add Ignite and traditional In-Memory DataGrids page (explain persistence)
## fix Key-Value Store page to reflect native persistence
## fix JCache Provider page to reflect native persistence






Re: Spark Data Frame support in Ignite

2017-08-10 Thread Valentin Kulichenko
Denis,

This only allows limiting the dataset fetched from the DB to Spark. This is
useful, but it does not replace the custom Strategy integration: after you
create the DataFrame, you will use its API to do additional filtering, mapping,
aggregation, etc., and this will happen within Spark. With a custom strategy
the whole processing will be done on the Ignite side.
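[Editorial aside] The distinction Val draws can be sketched with plain collections: a data-source-only integration transfers rows to the client (Spark) first and filters there, while a pushdown evaluates the predicate at the source so only matching rows move. A toy illustration only; none of this is Spark or Ignite API:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class FilterPlacementDemo {
    // Toy "source": in reality this would be the Ignite cluster.
    static final List<Integer> SOURCE = List.of(1, 2, 3, 4, 5, 6);

    // Data-source style: fetch the full dataset, then filter on the client side.
    static List<Integer> fetchThenFilter(Predicate<Integer> p) {
        List<Integer> fetched = List.copyOf(SOURCE); // full transfer happens here
        return fetched.stream().filter(p).collect(Collectors.toList());
    }

    // Pushdown style: the predicate runs at the source, so only
    // matching rows would ever be transferred.
    static List<Integer> filterAtSource(Predicate<Integer> p) {
        return SOURCE.stream().filter(p).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Same result either way; the difference is where the work
        // happens and how much data moves.
        System.out.println(fetchThenFilter(x -> x % 2 == 0));
        System.out.println(filterAtSource(x -> x % 2 == 0));
    }
}
```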

-Val

On Thu, Aug 10, 2017 at 3:07 PM, Denis Magda  wrote:

> >> This JDBC integration is just a Spark data source, which means that
> Spark
> >> will fetch data in its local memory first, and only then apply filters,
> >> aggregations, etc.
>
> It seems that there is a backdoor exposed via the standard SQL syntax. You
> can execute so-called “pushdown” queries [1] that are sent by Spark to a
> JDBC database right away, and the result is wrapped into a DataFrame.
>
> I could do this trick using Ignite as a JDBC-compliant data source,
> executing the query below over the data stored in the cluster:
>
> SELECT p.name as person, c.name as city
> FROM person p, city c WHERE p.city_id = c.id
>
> There are some limitations, though, because the actual query issued by Spark
> will be:
>
> SELECT * FROM (SELECT p.name as person, c.name as city
> FROM person p, city c WHERE p.city_id = c.id) as res
>
> Here [2] is a complete example.
>
>
> [1] https://docs.databricks.com/spark/latest/data-sources/sql-
> databases.html#pushdown-query-to-database-engine <
> https://docs.databricks.com/spark/latest/data-sources/sql-
> databases.html#pushdown-query-to-database-engine>
> [2] https://github.com/dmagda/ignite-dataframes <
> https://github.com/dmagda/ignite-dataframes>
>
> —
> Denis
>
> > On Aug 4, 2017, at 3:41 PM, Dmitriy Setrakyan  wrote:
> >
> > On Thu, Aug 3, 2017 at 9:04 PM, Valentin Kulichenko <
> > valentin.kuliche...@gmail.com> wrote:
> >
> >> This JDBC integration is just a Spark data source, which means that
> Spark
> >> will fetch data in its local memory first, and only then apply filters,
> >> aggregations, etc. This is obviously slow and doesn't use all advantages
> >> Ignite provides.
> >>
> >> To create useful and valuable integration, we should create a custom
> >> Strategy that will convert Spark's logical plan into a SQL query and
> >> execute it directly on Ignite.
> >>
> >
> > I get it, but we have been talking about Data Frame support for longer
> than
> > a year. I think we should advise our users to switch to JDBC until the
> > community gets someone to implement it.
> >
> >
> >>
> >> -Val
> >>
> >> On Thu, Aug 3, 2017 at 12:12 AM, Dmitriy Setrakyan <
> dsetrak...@apache.org>
> >> wrote:
> >>
> >>> On Thu, Aug 3, 2017 at 9:04 AM, Jörn Franke 
> >> wrote:
> >>>
>  I think the development effort would still be higher. Everything would
>  have to be put via JDBC into Ignite, then checkpointing would have to
> >> be
>  done via JDBC (again additional development effort), a lot of
> >> conversion
>  from spark internal format to JDBC and back to ignite internal format.
>  Pagination I do not see as a useful feature for managing large data
> >>> volumes
>  from databases - on the contrary it is very inefficient (and one would
> >> to
>  have to implement logic to fetch all pages). Pagination was also never
>  thought of for fetching large data volumes, but for web pages showing
> a
>  small result set over several pages, where the user can click manually
> >>> for
>  the next page (which they mostly do not do anyway).
> 
>  While it might be a quick solution , I think a deeper integration than
>  JDBC would be more beneficial.
> 
> >>>
> >>> Jorn, I completely agree. However, we have not been able to find a
> >>> contributor for this feature. You sound like you have sufficient domain
> >>> expertise in Spark and Ignite. Would you be willing to help out?
> >>>
> >>>
> > On 3. Aug 2017, at 08:57, Dmitriy Setrakyan 
>  wrote:
> >
> >> On Thu, Aug 3, 2017 at 8:45 AM, Jörn Franke 
>  wrote:
> >>
> >> I think the JDBC one is more inefficient, slower, and requires too much
> >> development effort. You can also check the integration of Alluxio
> >> with
> >> Spark.
> >>
> >
> > As far as I know, Alluxio is a file system, so it cannot use JDBC.
>  Ignite,
> > on the other hand, is an SQL system and works well with JDBC. As far
> >> as
>  the
> > development effort, we are dealing with SQL, so I am not sure why
> >> JDBC
> > would be harder.
> >
> > Generally speaking, until Ignite provides native data frame
> >>> integration,
> > having JDBC-based integration out of the box is minimally acceptable.
> >
> >
> >> Then, in general I think JDBC was never designed for large data
> >>> volumes.
> >> It is for executing queries and getting a small or aggregated result
> >>> set
> >> back. 

Re: Spark Data Frame support in Ignite

2017-08-10 Thread Denis Magda
>> This JDBC integration is just a Spark data source, which means that Spark
>> will fetch data in its local memory first, and only then apply filters,
>> aggregations, etc. 

It seems that there is a backdoor exposed via the standard SQL syntax. You can 
execute so-called “pushdown” queries [1] that are sent by Spark to a JDBC 
database right away, and the result is wrapped into a DataFrame.

I could do this trick using Ignite as a JDBC-compliant data source, executing the 
query below over the data stored in the cluster:

SELECT p.name as person, c.name as city
FROM person p, city c WHERE p.city_id = c.id

There are some limitations, though, because the actual query issued by Spark will 
be:

SELECT * FROM (SELECT p.name as person, c.name as city
FROM person p, city c WHERE p.city_id = c.id) as res

Here [2] is a complete example.
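[Editorial aside] To make the mechanics concrete: Spark's JDBC data source embeds whatever string is passed as the table option into a SELECT * FROM (...) wrapper, which is why a parenthesized pushdown query needs an alias to remain valid SQL. A rough, self-contained sketch of that wrapping; wrapDbTable is a hypothetical stand-in, not Spark's actual code:

```java
public class PushdownWrapDemo {
    // Hypothetical stand-in for what the Spark JDBC data source does with
    // the table expression: it embeds the value into a wrapping query.
    static String wrapDbTable(String dbtable) {
        return "SELECT * FROM " + dbtable;
    }

    public static void main(String[] args) {
        // A pushdown query must be parenthesized and aliased so that the
        // wrapped statement stays valid SQL.
        String pushdown = "(SELECT p.name as person, c.name as city "
            + "FROM person p, city c WHERE p.city_id = c.id) as res";
        System.out.println(wrapDbTable(pushdown));
    }
}
```

This is why the "actual query issued by Spark" shown above has the extra outer SELECT.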


[1] 
https://docs.databricks.com/spark/latest/data-sources/sql-databases.html#pushdown-query-to-database-engine
 

[2] https://github.com/dmagda/ignite-dataframes 


—
Denis

> On Aug 4, 2017, at 3:41 PM, Dmitriy Setrakyan  wrote:
> 
> On Thu, Aug 3, 2017 at 9:04 PM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
> 
>> This JDBC integration is just a Spark data source, which means that Spark
>> will fetch data in its local memory first, and only then apply filters,
>> aggregations, etc. This is obviously slow and doesn't use all advantages
>> Ignite provides.
>> 
>> To create useful and valuable integration, we should create a custom
>> Strategy that will convert Spark's logical plan into a SQL query and
>> execute it directly on Ignite.
>> 
> 
> I get it, but we have been talking about Data Frame support for longer than
> a year. I think we should advise our users to switch to JDBC until the
> community gets someone to implement it.
> 
> 
>> 
>> -Val
>> 
>> On Thu, Aug 3, 2017 at 12:12 AM, Dmitriy Setrakyan 
>> wrote:
>> 
>>> On Thu, Aug 3, 2017 at 9:04 AM, Jörn Franke 
>> wrote:
>>> 
 I think the development effort would still be higher. Everything would
 have to be put via JDBC into Ignite, then checkpointing would have to
>> be
 done via JDBC (again additional development effort), a lot of
>> conversion
 from spark internal format to JDBC and back to ignite internal format.
 Pagination I do not see as a useful feature for managing large data
>>> volumes
 from databases - on the contrary it is very inefficient (and one would
>> to
 have to implement logic to fetch all pages). Pagination was also never
 thought of for fetching large data volumes, but for web pages showing a
 small result set over several pages, where the user can click manually
>>> for
 the next page (which they mostly do not do anyway).
 
 While it might be a quick solution , I think a deeper integration than
 JDBC would be more beneficial.
 
>>> 
>>> Jorn, I completely agree. However, we have not been able to find a
>>> contributor for this feature. You sound like you have sufficient domain
>>> expertise in Spark and Ignite. Would you be willing to help out?
>>> 
>>> 
> On 3. Aug 2017, at 08:57, Dmitriy Setrakyan 
 wrote:
> 
>> On Thu, Aug 3, 2017 at 8:45 AM, Jörn Franke 
 wrote:
>> 
>> I think the JDBC one is more inefficient, slower, and requires too much
>> development effort. You can also check the integration of Alluxio
>> with
>> Spark.
>> 
> 
> As far as I know, Alluxio is a file system, so it cannot use JDBC.
 Ignite,
> on the other hand, is an SQL system and works well with JDBC. As far
>> as
 the
> development effort, we are dealing with SQL, so I am not sure why
>> JDBC
> would be harder.
> 
> Generally speaking, until Ignite provides native data frame
>>> integration,
> having JDBC-based integration out of the box is minimally acceptable.
> 
> 
>> Then, in general I think JDBC was never designed for large data
>>> volumes.
>> It is for executing queries and getting a small or aggregated result
>>> set
>> back. Alternatively for inserting / updating single rows.
>> 
> 
> Agree in general. However, Ignite JDBC is designed to work with
>> larger
 data
> volumes and supports data pagination automatically.
> 
> 
>>> On 3. Aug 2017, at 08:17, Dmitriy Setrakyan >> 
>> wrote:
>>> 
>>> Jorn, thanks for your feedback!
>>> 
>>> Can you explain how the direct support would be different from the
>>> JDBC
>>> support?
>>> 
>>> Thanks,
>>> D.
>>> 
 On Thu, Aug 3, 2017 at 7:40 AM, Jörn Franke >> 
>> wrote:
 
 These 

Nightly build

2017-08-10 Thread Valentin Kulichenko
Folks,

I noticed that the latest successful nightly build happened on May 31:
https://builds.apache.org/view/H-L/view/Ignite/job/Ignite-nightly/lastSuccessfulBuild/

It looks like it has been failing since then. Does anyone know the reason?

-Val


[GitHub] ignite pull request #2426: IGNITE-6027

2017-08-10 Thread devozerov
Github user devozerov closed the pull request at:

https://github.com/apache/ignite/pull/2426


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] ignite pull request #2432: IGNITE-6034

2017-08-10 Thread devozerov
GitHub user devozerov opened a pull request:

https://github.com/apache/ignite/pull/2432

IGNITE-6034



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6034

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2432.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2432


commit 50cfab7806275f8719a38276c549f38695b0e9f0
Author: devozerov 
Date:   2017-08-10T18:09:11Z

Writer part.

commit 6d820243389f64cc2324b048e75240271c93d291
Author: devozerov 
Date:   2017-08-10T18:18:21Z

Reader part.

commit 37b228b09983607a24c5db6f93cdefc38ed9081f
Author: devozerov 
Date:   2017-08-10T18:22:12Z

Minors.

commit 74890db63bfe20e472433391b3f1bd8f2e5699a5
Author: devozerov 
Date:   2017-08-10T18:23:38Z

Minors.

commit 6f1b888fa698dc948951a7a22e9c01650712af6a
Author: devozerov 
Date:   2017-08-10T18:28:08Z

Single interface.

commit 977ee1ac7224145f950a775a19b778c0721e6354
Author: devozerov 
Date:   2017-08-10T18:54:02Z

Wired things up. Now need to remove plain types.

commit df031907a1d5520d942258ce3ba86e6c31975114
Author: devozerov 
Date:   2017-08-10T19:09:00Z

WIP.

commit fe3bdc5db94c7daf6e7d6d4ef3c71df474a5880c
Author: devozerov 
Date:   2017-08-10T19:23:04Z

Removed GridKernalContext from "value" signature.

commit 91a0de707e49988c18ee04c924d3f6b29fcb4b22
Author: devozerov 
Date:   2017-08-10T19:25:40Z

Minors.

commit 5da2e2ae23d422693b8cdbfb272d1a6d9d644fe8
Author: devozerov 
Date:   2017-08-10T19:58:10Z

Done.

commit fe8241292ece4ba083faf5fbc1dca83562e6da41
Author: devozerov 
Date:   2017-08-10T20:02:45Z

Minors.






[jira] [Created] (IGNITE-6035) Indexes aren't cleaned on cache clear/destroy

2017-08-10 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-6035:


 Summary: Indexes aren't cleaned on cache clear/destroy
 Key: IGNITE-6035
 URL: https://issues.apache.org/jira/browse/IGNITE-6035
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.1
 Environment: Currently, on cache clear or cache destroy in a group 
where other caches exist (this operation executes cache clear internally), the 
query indexes (if present) aren't cleaned/destroyed. 

{{IgniteCacheOffheapManagerImpl.CacheDataStoreImpl#clear}} has to call 
finishUpdate(...)


This leads to issues on subsequent insertions or updates (on an attempt to 
replace an already deleted item)

{{org/apache/ignite/internal/processors/cache/persistence/tree/BPlusTree.java:370}}:

{noformat}Caused by: java.lang.AssertionError: itemId=0, directCnt=0, 
page=000100010005 [][][free=970]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.getDataOffset(DataPageIO.java:437)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.DataPageIO.readPayload(DataPageIO.java:488)
at 
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:149)
at 
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:101)
at 
org.apache.ignite.internal.processors.query.h2.database.H2RowFactory.getRow(H2RowFactory.java:62)
at 
org.apache.ignite.internal.processors.query.h2.database.io.H2ExtrasLeafIO.getLookupRow(H2ExtrasLeafIO.java:124)
at 
org.apache.ignite.internal.processors.query.h2.database.io.H2ExtrasLeafIO.getLookupRow(H2ExtrasLeafIO.java:36)
at 
org.apache.ignite.internal.processors.query.h2.database.H2Tree.getRow(H2Tree.java:123)
at 
org.apache.ignite.internal.processors.query.h2.database.H2Tree.getRow(H2Tree.java:40)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.getRow(BPlusTree.java:4372)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Replace.run0(BPlusTree.java:370)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Replace.run0(BPlusTree.java:330)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4697)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4682)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writePage(PageHandler.java:342)
at 
org.apache.ignite.internal.processors.cache.persistence.DataStructure.write(DataStructure.java:261)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.access$11200(BPlusTree.java:81)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Put.tryReplace(BPlusTree.java:2875)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2279)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doPut(BPlusTree.java:2006)
... 28 more{noformat}
Reporter: Igor Seliverstov








[GitHub] ignite pull request #2431: IGNITE-6032 ODBC: Added SQL_SCROLL_OPTIONS suppor...

2017-08-10 Thread isapego
GitHub user isapego opened a pull request:

https://github.com/apache/ignite/pull/2431

IGNITE-6032 ODBC: Added SQL_SCROLL_OPTIONS support for SQLGetInfo



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6032

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2431.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2431


commit 9da17772026ff1fcbda98cad0c4fb808cb8c901d
Author: Igor Sapego 
Date:   2017-08-10T17:41:14Z

IGNITE-6032: Test added

commit 3f8dbf6ab4e1eafbb888531362d47788dfdf5a95
Author: Igor Sapego 
Date:   2017-08-10T17:42:11Z

IGNITE-6032: Fix






[GitHub] ignite pull request #2430: IGNITE-6019

2017-08-10 Thread alexpaschenko
GitHub user alexpaschenko opened a pull request:

https://github.com/apache/ignite/pull/2430

IGNITE-6019



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6019

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2430.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2430


commit 12a479fe7d1973bdba5dcdb488d3ff109667d107
Author: Alexander Paschenko 
Date:   2017-08-10T12:48:38Z

IGNITE-6019 Merge indexes iterator

commit 51a62134345731c5464c854c62e8be59fb79a9fd
Author: Alexander Paschenko 
Date:   2017-08-10T14:06:08Z

Minors.

commit 652713cba2498cfa85ad89bf9f0aaf96784cae76
Author: Alexander Paschenko 
Date:   2017-08-10T14:19:08Z

Minors.

commit 76c9e2ad6d2e7d0ab4e73b85fa6c2cda9fb913b3
Author: Alexander Paschenko 
Date:   2017-08-10T17:31:59Z

Added a test.






[jira] [Created] (IGNITE-6034) SQL: Optimize GridQueryNextPageResponse message consumption

2017-08-10 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-6034:
---

 Summary: SQL: Optimize GridQueryNextPageResponse message 
consumption
 Key: IGNITE-6034
 URL: https://issues.apache.org/jira/browse/IGNITE-6034
 Project: Ignite
  Issue Type: Task
  Components: sql
Affects Versions: 2.1
Reporter: Vladimir Ozerov
Assignee: Vladimir Ozerov
 Fix For: 2.2


Currently we store data in {{GridQueryNextPageResponse}} in message wrappers. 
This can be avoided easily if we add a custom converter interface that could be 
passed to our direct marshaller facility.





[GitHub] ignite pull request #2429: IGNITE-6029 WAL Record serialization refactoring

2017-08-10 Thread Jokser
GitHub user Jokser opened a pull request:

https://github.com/apache/ignite/pull/2429

IGNITE-6029 WAL Record serialization refactoring

And initial stuff for Record V2 serializer.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6029

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2429.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2429


commit 605198854104e04f343630e2fcec8fe78103be3e
Author: Pavel Kovalenko 
Date:   2017-08-10T17:18:59Z

IGNITE-6029 Record serializer refactoring and initial stuff for Record V2 
serialization.






Re: IgniteSemaphore methods semantics

2017-08-10 Thread Valentin Kulichenko
If this is true, I think it should be fixed. availablePermits() returning the
number of acquired permits sounds very confusing.

-Val

On Thu, Aug 10, 2017 at 7:38 AM, Andrey Kuznetsov  wrote:

> Hi, igniters!
>
>
>
> As IgniteSemaphore's javadoc states,
>
>
>
> "Distributed semaphore provides functionality similar to {@code
> java.util.concurrent.Semaphore}."
>
>
>
> At the same time, the method semantics of the current implementation are
> inverted: acquire() decrements the internal semaphore count and release()
> increments it. Then a newlyCreatedSemaphore.acquire() call blocks until some
> other thread calls release(), which looks confusing. Also, availablePermits()
> returns the permits acquired so far, that is, the semaphore count.
>
>
>
> Another difference is the unbounded nature of the IgniteSemaphore
> implementation, while java.util.concurrent.Semaphore is bounded.
>
>
>
> I think we are to do one of the following:
>
>
>
> - Document uncommon IgniteSemaphore semantics properly
>
>
>
> or
>
>
>
> - Change its semantics to conform to the java.util.concurrent counterpart.
>
>
>
> --
>
> Best regards,
>
>   Andrey Kuznetsov.
>
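[Editorial aside] For comparison, in the java.util.concurrent counterpart the count starts at the number of permits passed to the constructor, acquire() decrements availablePermits() without blocking while permits remain, and release() increments it again; a minimal sketch:

```java
import java.util.concurrent.Semaphore;

public class JucSemaphoreDemo {
    public static void main(String[] args) throws InterruptedException {
        Semaphore sem = new Semaphore(2);           // 2 permits available up front
        System.out.println(sem.availablePermits()); // 2
        sem.acquire();                              // takes a permit, does not block
        System.out.println(sem.availablePermits()); // 1
        sem.release();                              // returns the permit
        System.out.println(sem.availablePermits()); // 2
    }
}
```

Under the inverted semantics described above, the first acquire() on a freshly created semaphore would block instead of returning immediately.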


[GitHub] ignite pull request #2428: IGNITE-4991 Do not print out system properties wh...

2017-08-10 Thread ezhuravl
GitHub user ezhuravl opened a pull request:

https://github.com/apache/ignite/pull/2428

IGNITE-4991 Do not print out system properties when IGNITE_TO_STRING_…

…INCLUDE_SENSITIVE is set to false

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-4991

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2428.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2428


commit 4f90b6fd77bd23fa818620f0757b792ba388ef93
Author: Evgenii Zhuravlev 
Date:   2017-08-10T15:54:57Z

IGNITE-4991 Do not print out system properties when 
IGNITE_TO_STRING_INCLUDE_SENSITIVE is set to false






[jira] [Created] (IGNITE-6033) Add sorted and multithreaded modes in checkpoint algorithm

2017-08-10 Thread Ivan Rakov (JIRA)
Ivan Rakov created IGNITE-6033:
--

 Summary: Add sorted and multithreaded modes in checkpoint algorithm
 Key: IGNITE-6033
 URL: https://issues.apache.org/jira/browse/IGNITE-6033
 Project: Ignite
  Issue Type: Improvement
  Components: persistence
Affects Versions: 2.1
Reporter: Ivan Rakov
Assignee: Ivan Rakov
 Fix For: 2.2


Sequential writes to an SSD are faster than random ones. When we write a 
checkpoint, we iterate through a hash table, which effectively yields random 
order. We should add an option to write pages sorted by page index. It should be 
configurable in PersistentStoreConfiguration.
Also, we already have the PersistentStoreConfiguration#checkpointingThreads 
option, but we don't use it: we create a thread pool but submit only one task to 
it. This should be fixed as well.
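[Editorial aside] The sorted-writes idea boils down to ordering the dirty page indexes before flushing, so the SSD sees sequential rather than random writes. A minimal sketch, assuming page indexes are plain longs; the names here are illustrative, not Ignite's actual API:

```java
import java.util.Arrays;

public class SortedCheckpointSketch {
    // Illustrative only: order dirty page indexes (collected from a hash
    // table, i.e. in effectively random order) so the checkpoint write
    // path touches the disk sequentially.
    static long[] orderForCheckpoint(long[] dirtyPageIdxs) {
        long[] sorted = dirtyPageIdxs.clone();
        Arrays.sort(sorted); // ascending page index => sequential disk writes
        return sorted;
    }

    public static void main(String[] args) {
        long[] fromHashTable = {42L, 7L, 1003L, 1L};
        // prints [1, 7, 42, 1003]
        System.out.println(Arrays.toString(orderForCheckpoint(fromHashTable)));
    }
}
```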





[jira] [Created] (IGNITE-6032) ODBC: SQLGetData should return SQL_SUCCESS if called on NULL column

2017-08-10 Thread Igor Sapego (JIRA)
Igor Sapego created IGNITE-6032:
---

 Summary: ODBC: SQLGetData should return SQL_SUCCESS if called on 
NULL column
 Key: IGNITE-6032
 URL: https://issues.apache.org/jira/browse/IGNITE-6032
 Project: Ignite
  Issue Type: Bug
  Components: odbc
Affects Versions: 2.1
Reporter: Igor Sapego
Assignee: Igor Sapego
 Fix For: 2.2


Currently {{SQLGetData}} returns {{SQL_NO_DATA}} if called on NULL data. 
According to the specification, it should return {{SQL_SUCCESS}} in this case.





[GitHub] ignite pull request #2427: Ignite 4756

2017-08-10 Thread vadopolski
GitHub user vadopolski opened a pull request:

https://github.com/apache/ignite/pull/2427

Ignite 4756

Added a printDistribution calculation method and 
AffinityDistributionPrintLogTest.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vadopolski/ignite ignite-4756-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2427.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2427


commit 30f3ae561459d3c04a06d647202f98ef9c4ef153
Author: vopolski <2w3e4r5t>
Date:   2017-07-04T13:30:26Z

IGNITE-4756

commit be797dbb474552080f1a33a56be799c2851528af
Author: vopolski <2w3e4r5t>
Date:   2017-07-04T13:38:50Z

IGNITE-4756

commit cc345de82c3c83b96500be537aa80499c09af7f0
Author: vopolski <2w3e4r5t>
Date:   2017-07-04T13:48:49Z

IGNITE-4756

commit f4212b5435fe8c1f27ca76223a8d4a9cc413831a
Author: vopolski <2w3e4r5t>
Date:   2017-07-04T16:10:13Z

IGNITE-4756

commit dc2b935bef44ab20a9c8b14d00d8ddf2bebddfa1
Author: vopolski <2w3e4r5t>
Date:   2017-07-05T09:33:24Z

IGNITE-4756

commit 56a65eab1c5e2a3d7671a6766bd28d477c4ffe7f
Author: vopolski <2w3e4r5t>
Date:   2017-07-05T12:00:51Z

IGNITE-4756

commit 1903cafc0b044771a2c111e24eb542f50d337c69
Author: vopolski <2w3e4r5t>
Date:   2017-07-06T14:37:44Z

IGNITE-4756

commit 7551f13a13ff720cc1d53cfeaf7c98a476d49118
Author: vopolski <2w3e4r5t>
Date:   2017-07-07T12:57:38Z

IGNITE-4756

commit b303fb5085ded69a4911c77a20f6ca32b64f0439
Author: vopolski <2w3e4r5t>
Date:   2017-07-10T09:33:15Z

IGNITE-4756

commit c9d6a33e5d14c90d8cf830f4764172f4dff41039
Author: vopolski <2w3e4r5t>
Date:   2017-07-10T14:32:48Z

IGNITE-4756

commit 0d6ab3cec30a970051ee06af3ffa6a15f3c455e5
Author: vopolski <2w3e4r5t>
Date:   2017-07-11T09:15:25Z

IGNITE-4756

commit 08fbdcbda4554cdc2e4641ce4076cbef29ebb1b1
Author: vopolski <2w3e4r5t>
Date:   2017-07-11T14:07:53Z

IGNITE-4756

commit 4d31cfb0c8235ed663cb913b166ab75818a739a2
Author: vopolski <2w3e4r5t>
Date:   2017-07-19T13:45:33Z

IGNITE-4756

commit 2a0d0955d35ab7f8647389066ebac259e4c51783
Author: vopolski <2w3e4r5t>
Date:   2017-07-20T14:03:30Z

IGNITE-4756
Improved Java Doc and Code Style.

commit 96cba61b3c266c67aa78829fc1b75ca0334bcff8
Author: vopolski <2w3e4r5t>
Date:   2017-07-20T14:35:55Z

IGNITE-4756
Improved Java Doc, Code Style and Abbreviation.

commit 8985596bacd745d266b296e11a9789839a11ee76
Author: vopolski <2w3e4r5t>
Date:   2017-07-21T11:46:17Z

IGNITE-4756
Improved Java Doc, Code Style and Abbreviation.

commit 5402fa02bb0befde402e290e38d5b711bdb7fc87
Author: vopolski <2w3e4r5t>
Date:   2017-07-24T11:48:11Z

deleted setter getter constructor

commit 4fe1ba1c8c1db507865f79a4ab4d37248d2608b6
Author: Max Kozlov 
Date:   2017-07-24T11:59:17Z

Merge branch 'ignite-4756' of https://github.com/vadopolski/ignite into 
vadim/ignite-4756

commit a208694bf76bf56e368cd68d045f4f13aad0e69a
Author: vopolski <2w3e4r5t>
Date:   2017-07-24T14:37:39Z

IGNITE-4756

commit 6cf30c9389c16d4a5ee53a0b0c7f9e2777555a8d
Author: vopolski <2w3e4r5t>
Date:   2017-07-24T14:56:30Z

IGNITE-4756

commit c15f7a8ed6f0c9696087396eca5cd934e62c8b07
Author: vopolski <2w3e4r5t>
Date:   2017-07-24T16:25:08Z

IGNITE-4756

commit 45830807563458da6e13b404e54c050294359ae3
Author: vopolski <2w3e4r5t>
Date:   2017-07-24T17:04:56Z

IGNITE-4756

commit 995075864f70d3aa8644482242edbbf472b71004
Author: vopolski <2w3e4r5t>
Date:   2017-07-24T17:19:52Z

IGNITE-4756

commit b05d959e899b8fe2237dada5d7aacdfaa1182166
Author: vopolski <2w3e4r5t>
Date:   2017-07-25T16:19:55Z

IGNITE-4756

commit 413889d67da6fa6e351d87ed930d4d64998e4c31
Author: vopolski <2w3e4r5t>
Date:   2017-07-25T16:25:32Z

IGNITE-4756

commit ebcfda10f50b77e6df2130521726e2bcceb8348e
Author: vopolski <2w3e4r5t>
Date:   2017-07-25T17:01:32Z

IGNITE-4756

commit 4d0b3621a6a089a482717b9d8351750cea817711
Author: vopolski <2w3e4r5t>
Date:   2017-07-27T12:07:33Z

IGNITE-4756

commit e0f0b12b7b64fc425513a8a74ad09fb1ff0b0360
Author: vopolski <2w3e4r5t>
Date:   2017-07-27T12:15:01Z

IGNITE-4756

commit 7fcd1acd586366a9c54b9145a365bb58ca186b80
Author: vopolski <2w3e4r5t>
Date:   2017-07-28T09:50:02Z

IGNITE-4756

commit d66512cee1eb6b2eb5fe91dd7bb3d558029109f0
Author: vopolski <2w3e4r5t>
Date:   2017-07-28T09:59:56Z

IGNITE-4756




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (IGNITE-6031) Web Console: Improve grouping of fields on Cluster Configuration Tab (By importance / alphabetically)

2017-08-10 Thread Vica Abramova (JIRA)
Vica Abramova created IGNITE-6031:
-

 Summary: Web Console: Improve grouping of fields on Cluster 
Configuration Tab (By importance / alphabetically)
 Key: IGNITE-6031
 URL: https://issues.apache.org/jira/browse/IGNITE-6031
 Project: Ignite
  Issue Type: Improvement
  Components: UI, wizards
Reporter: Vica Abramova






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Cluster auto activation design proposal

2017-08-10 Thread Sergey Chugunov
Going down "node set" road:

- fixed node set
- established node set
- base node set

On Thu, Aug 10, 2017 at 5:23 PM, Dmitriy Setrakyan 
wrote:

> Can we brainstorm on the names again, I am not sure we have a consensus on
> the name "baseline topology". This will be included in Ignite
> configuration, so the name has to be clear.
>
> Some of the proposals were:
>
> - baseline topology
> - minimal node set
> - node restart set
> - minimal topology
>
> Any other suggestions?
>
> D.
>
> On Thu, Aug 10, 2017 at 2:13 AM, Alexey Goncharuk <
> alexey.goncha...@gmail.com> wrote:
>
> > Denis,
> >
> > This should be handled by the BT triggers. If I have 3 backups
> configured,
> > I actually won't care if my cluster will live 6 hours without an
> additional
> > backup. If for a partition there is only one backup left - a new BT
> should
> > be triggered automatically.
> >
> > 2017-08-10 0:33 GMT+03:00 Denis Magda :
> >
> > > Sergey,
> > >
> > > That’s the only concern I have:
> > >
> > > * 5. User takes out nodes from cluster (e.g. for maintenance purposes):
> > no
> > >   rebalance happens until user recreates BLT on new cluster topology.*
> > >
> > > What if a node is crashed (or some other kind of outage) in the middle
> of
> > > the night and the user has to be sure that survived nodes will
> rearrange
> > > and rebalancing partitions?
> > >
> > > —
> > > Denis
> > >
> > >
> > > > On Aug 4, 2017, at 9:21 AM, Sergey Chugunov <
> sergey.chugu...@gmail.com
> > >
> > > wrote:
> > > >
> > > > Folks,
> > > >
> > > > I've summarized all results from our discussion so far on wiki page:
> > > > https://cwiki.apache.org/confluence/display/IGNITE/
> > > Automatic+activation+design+-+draft
> > > >
> > > > I hope I reflected the most important details and going to add API
> > > > suggestions for all use cases soon.
> > > >
> > > > Feel free to give feedback here or in comments under the page.
> > > >
> > > > Thanks,
> > > > Sergey.
> > > >
> > > > On Thu, Aug 3, 2017 at 5:40 PM, Alexey Kuznetsov <
> > akuznet...@apache.org>
> > > > wrote:
> > > >
> > > >> Hi,
> > > >>
> > >  1. User creates new BLT using WebConsole or other tool and
> "applies"
> > > it
> > > >> to brand-new cluster.
> > > >>
> > > >> Good idea, but we also should implement *command-line utility* for
> the
> > > same
> > > >> use case.
> > > >>
> > > >> --
> > > >> Alexey Kuznetsov
> > > >>
> > >
> > >
> >
>


IgniteSemaphore methods semantics

2017-08-10 Thread Andrey Kuznetsov
Hi, igniters!



As IgniteSemaphore's javadoc states,



"Distributed semaphore provides functionality similar to {@code
java.util.concurrent.Semaphore}."



At the same time, the method semantics of the current implementation are
inverted: acquire() decrements the internal semaphore count and release()
increments it. A newlyCreatedSemaphore.acquire() call therefore blocks until
some other thread calls release(), which looks confusing. Also,
availablePermits() returns the permits acquired so far, that is, the
semaphore count.



Another difference is the unbounded nature of the IgniteSemaphore
implementation, while java.util.concurrent.Semaphore is bounded.



I think we should do one of the following:



- Document uncommon IgniteSemaphore semantics properly



or



- Change its semantics to conform to the java.util.concurrent counterpart.
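For comparison, the plain JDK semantics the second option would conform to can be demonstrated with java.util.concurrent.Semaphore itself (a JDK-only sketch, no Ignite code): acquire decrements the available permits, and a zero-permit semaphore cannot be acquired until another thread releases.

```java
import java.util.concurrent.Semaphore;

public class SemaphoreSemantics {
    /** Available permits after one successful acquire on a semaphore created with 2 permits. */
    static int permitsAfterAcquire() {
        Semaphore sem = new Semaphore(2); // starts with 2 permits
        sem.tryAcquire();                 // succeeds and decrements the count: 2 -> 1
        return sem.availablePermits();    // now 1
    }

    /** A freshly created zero-permit semaphore cannot be acquired immediately:
     *  acquire() would block until some other thread calls release(). */
    static boolean zeroPermitsAcquirable() {
        return new Semaphore(0).tryAcquire(); // false
    }

    public static void main(String[] args) {
        System.out.println(permitsAfterAcquire());   // prints 1
        System.out.println(zeroPermitsAcquirable()); // prints false
    }
}
```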



--

Best regards,

  Andrey Kuznetsov.


Re: Cluster auto activation design proposal

2017-08-10 Thread Dmitriy Setrakyan
Can we brainstorm on the names again? I am not sure we have a consensus on
the name "baseline topology". This will be included in the Ignite
configuration, so the name has to be clear.

Some of the proposals were:

- baseline topology
- minimal node set
- node restart set
- minimal topology

Any other suggestions?

D.

On Thu, Aug 10, 2017 at 2:13 AM, Alexey Goncharuk <
alexey.goncha...@gmail.com> wrote:

> Denis,
>
> This should be handled by the BT triggers. If I have 3 backups configured,
> I actually won't care if my cluster will live 6 hours without an additional
> backup. If for a partition there is only one backup left - a new BT should
> be triggered automatically.
>
> 2017-08-10 0:33 GMT+03:00 Denis Magda :
>
> > Sergey,
> >
> > That’s the only concern I have:
> >
> > * 5. User takes out nodes from cluster (e.g. for maintenance purposes):
> no
> >   rebalance happens until user recreates BLT on new cluster topology.*
> >
> > What if a node is crashed (or some other kind of outage) in the middle of
> > the night and the user has to be sure that survived nodes will rearrange
> > and rebalancing partitions?
> >
> > —
> > Denis
> >
> >
> > > On Aug 4, 2017, at 9:21 AM, Sergey Chugunov  >
> > wrote:
> > >
> > > Folks,
> > >
> > > I've summarized all results from our discussion so far on wiki page:
> > > https://cwiki.apache.org/confluence/display/IGNITE/
> > Automatic+activation+design+-+draft
> > >
> > > I hope I reflected the most important details and going to add API
> > > suggestions for all use cases soon.
> > >
> > > Feel free to give feedback here or in comments under the page.
> > >
> > > Thanks,
> > > Sergey.
> > >
> > > On Thu, Aug 3, 2017 at 5:40 PM, Alexey Kuznetsov <
> akuznet...@apache.org>
> > > wrote:
> > >
> > >> Hi,
> > >>
> >  1. User creates new BLT using WebConsole or other tool and "applies"
> > it
> > >> to brand-new cluster.
> > >>
> > >> Good idea, but we also should implement *command-line utility* for the
> > same
> > >> use case.
> > >>
> > >> --
> > >> Alexey Kuznetsov
> > >>
> >
> >
>


[GitHub] ignite pull request #2426: IGNITE-6027

2017-08-10 Thread devozerov
GitHub user devozerov opened a pull request:

https://github.com/apache/ignite/pull/2426

IGNITE-6027



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6027

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2426.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2426


commit f7afc0dc7ae3954aaf55d5ef1fa6e28afc974011
Author: devozerov 
Date:   2017-08-10T13:23:08Z

Added last page to the response.

commit d465d11ce0efbe35a58e3e78e2cb5895b9b80444
Author: devozerov 
Date:   2017-08-10T14:04:08Z

Done.

commit d1fdb746c9be80072a50c3de5534215e7ad53983
Author: devozerov 
Date:   2017-08-10T14:13:11Z

Done.

commit 00ccadb0c8c157e1c03a0d57c4751b4d85ff322c
Author: devozerov 
Date:   2017-08-10T14:14:25Z

Minors.






[jira] [Created] (IGNITE-6030) Allow enabling persistence per-cache

2017-08-10 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-6030:


 Summary: Allow enabling persistence per-cache
 Key: IGNITE-6030
 URL: https://issues.apache.org/jira/browse/IGNITE-6030
 Project: Ignite
  Issue Type: New Feature
  Components: persistence
Affects Versions: 2.1
Reporter: Alexey Goncharuk
 Fix For: 2.2








[jira] [Created] (IGNITE-6029) Refactor WAL Record serialization and introduce RecordV2Serializer

2017-08-10 Thread Pavel Kovalenko (JIRA)
Pavel Kovalenko created IGNITE-6029:
---

 Summary: Refactor WAL Record serialization and introduce 
RecordV2Serializer
 Key: IGNITE-6029
 URL: https://issues.apache.org/jira/browse/IGNITE-6029
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 2.1
Reporter: Pavel Kovalenko
Assignee: Pavel Kovalenko
 Fix For: 2.2


Currently the RecordSerializer interface and the default RecordV1Serializer
implementation are not easily extendable. We should refactor the
RecordSerializer interface and introduce a new RecordV2Serializer with very
basic functionality of its own that delegates everything to
RecordV1Serializer.






[jira] [Created] (IGNITE-6028) JDBC thin Driver: support metadata getFunctions methods

2017-08-10 Thread Taras Ledkov (JIRA)
Taras Ledkov created IGNITE-6028:


 Summary: JDBC thin Driver: support metadata getFunctions methods
 Key: IGNITE-6028
 URL: https://issues.apache.org/jira/browse/IGNITE-6028
 Project: Ignite
  Issue Type: Task
  Components: jdbc
Affects Versions: 2.1
Reporter: Taras Ledkov
Priority: Minor
 Fix For: 2.2


The methods of the {{JdbcThinDatabaseMetadata}} must be supported:
{{getFunctions}}
{{getFunctionColumns}}
{{getNumericFunctions}}
{{getStringFunctions}}
{{getSystemFunctions}}
{{getTimeDateFunctions}}





[jira] [Created] (IGNITE-6027) SQL: add "last" flag to GridQueryNextPageResponse

2017-08-10 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-6027:
---

 Summary: SQL: add "last" flag to GridQueryNextPageResponse
 Key: IGNITE-6027
 URL: https://issues.apache.org/jira/browse/IGNITE-6027
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Affects Versions: 2.1
Reporter: Vladimir Ozerov
Assignee: Vladimir Ozerov
 Fix For: 2.2


Sometimes it is impossible to get the result set size in advance. Let's add
a {{last}} flag to support this case.





Re: [jira] [Created] (IGNITE-6026) init cluster for Ignite Ignite Persistence by xml

2017-08-10 Thread Alexey Kukushkin
Hi Alex,
The main idea from the API perspective is that Ignite Persistence is 
transparent to the API. After you enable persistence, all your APIs, 
including get/put, SQL, etc., work as before. 
Enable Persistence with 1 line:
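For example, inside the IgniteConfiguration bean (a minimal sketch for the Ignite 2.1 configuration API):

```xml
<property name="persistentStoreConfiguration">
    <bean class="org.apache.ignite.configuration.PersistentStoreConfiguration"/>
</property>
```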




And then activate the cluster explicitly - with the script:
${IGNITE_HOME}/bin/control.sh --activate

OR in the code:
ignite.active(true);


Best regards, Alexey


On Thursday, August 10, 2017, 3:50:35 PM GMT+3, Alex Negashev (JIRA) 
 wrote:

Alex Negashev created IGNITE-6026:
-

            Summary: init cluster for Ignite  Ignite Persistence by xml 
                Key: IGNITE-6026
                URL: https://issues.apache.org/jira/browse/IGNITE-6026
            Project: Ignite
          Issue Type: Wish
          Components: cache, examples, persistence
    Affects Versions: 2.1
        Environment: ignite in docker with zk
            Reporter: Alex Negashev


Hello! We use Ignite 2.1 and would like to use Ignite Persistence. How can I 
do this without Java code, XML only?
Example attached.





[GitHub] ignite pull request #2422: IGNITE-5963: Add additional check to Thread.sleep...

2017-08-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/2422




[jira] [Created] (IGNITE-6026) init cluster for Ignite Ignite Persistence by xml

2017-08-10 Thread Alex Negashev (JIRA)
Alex Negashev created IGNITE-6026:
-

 Summary: init cluster for Ignite  Ignite Persistence by xml 
 Key: IGNITE-6026
 URL: https://issues.apache.org/jira/browse/IGNITE-6026
 Project: Ignite
  Issue Type: Wish
  Components: cache, examples, persistence
Affects Versions: 2.1
 Environment: ignite in docker with zk
Reporter: Alex Negashev


Hello! We use Ignite 2.1 and would like to use Ignite Persistence. How can I 
do this without Java code, XML only?
Example attached.





[GitHub] ignite pull request #2421: IGNITE-5995: ODBC: Fix for SQLGetData

2017-08-10 Thread isapego
Github user isapego closed the pull request at:

https://github.com/apache/ignite/pull/2421




[jira] [Created] (IGNITE-6025) SQL: improve CREATE INDEX performance

2017-08-10 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-6025:
---

 Summary: SQL: improve CREATE INDEX performance
 Key: IGNITE-6025
 URL: https://issues.apache.org/jira/browse/IGNITE-6025
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Affects Versions: 2.1
Reporter: Vladimir Ozerov
 Fix For: 2.2


When a bulk data load is performed, it is considered good practice to bypass 
certain facilities of the underlying engine to achieve greater throughput. 
E.g., the TX or MVCC managers can be bypassed, a global table lock can be 
held instead of fine-grained page/row/field locks, etc.

Another widely used technique is to drop table indexes and rebuild them from 
scratch when the load finishes. This is now possible with the help of the 
{{CREATE INDEX}} command, which can be executed at runtime. However, 
experiments with large data sets have shown that {{DROP INDEX}} -> load -> 
{{CREATE INDEX}} is *much slower* than a simple load into an indexed table. 
The reasons for this are both the inefficient implementation of the 
{{CREATE INDEX}} command and some storage architectural decisions.

1) The index is created by a single thread; multiple threads could probably 
speed it up at the cost of higher CPU usage. But how do we split the 
iteration between several threads?
2) Cache iteration happens through the primary index. So we read an index 
page, but to read the entries we have to navigate to a data page. If a 
single data page is referenced from N places in the index tree, we will read 
it N times. This leads to bad cache locality in the memory-only case, and to 
excessive disk IO in the persistent case. It could be avoided if we iterate 
over data pages and index all data from a single page at once.
3) Another widely used technique is building the B-tree bottom-up. That is, 
we sort all data rows first, then build the leaves, then go one level up, 
etc. This approach could give us the best build performance possible, 
especially if p.2 is implemented. However, it is the most difficult 
optimization, as it will require spilling to disk if the result set is too 
large.
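To make p.3 concrete, here is a minimal in-memory sketch of the bottom-up idea (a simplified tree rather than Ignite's page-based B+ tree, and without the disk spilling mentioned above): sort once, pack the leaves left to right, then derive each upper level from the first keys of the level below.

```java
import java.util.*;

public class BottomUpBuild {
    /** Node of a simplified B-tree-like structure: leaves hold sorted keys,
     *  inner nodes hold the first key of each child as a separator. */
    static class Node {
        final List<Integer> keys = new ArrayList<>();
        final List<Node> children = new ArrayList<>();
    }

    /** Builds the tree bottom-up: one global sort, fill leaves left to right,
     *  then build each upper level from the level below until a single root remains. */
    static Node build(List<Integer> rows, int fanout) {
        List<Integer> sorted = new ArrayList<>(rows);
        Collections.sort(sorted);                         // single global sort
        List<Node> level = new ArrayList<>();
        for (int i = 0; i < sorted.size(); i += fanout) { // pack leaves
            Node leaf = new Node();
            leaf.keys.addAll(sorted.subList(i, Math.min(i + fanout, sorted.size())));
            level.add(leaf);
        }
        while (level.size() > 1) {                        // build parent levels
            List<Node> parents = new ArrayList<>();
            for (int i = 0; i < level.size(); i += fanout) {
                Node parent = new Node();
                for (Node child : level.subList(i, Math.min(i + fanout, level.size()))) {
                    parent.children.add(child);
                    parent.keys.add(child.keys.get(0));   // separator = first key of child
                }
                parents.add(parent);
            }
            level = parents;
        }
        return level.get(0);
    }

    static int height(Node n) {
        return n.children.isEmpty() ? 1 : 1 + height(n.children.get(0));
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 100; i > 0; i--) rows.add(i);        // unsorted input
        Node root = build(rows, 4);
        System.out.println(height(root));                 // prints 4
    }
}
```

With 100 keys and fanout 4 this yields 25 leaves, then levels of 7, 2, and 1 nodes.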






Re: Cluster auto activation design proposal

2017-08-10 Thread Alexey Goncharuk
Denis,

This should be handled by the BT triggers. If I have 3 backups configured,
I actually won't care if my cluster will live 6 hours without an additional
backup. If for a partition there is only one backup left - a new BT should
be triggered automatically.

2017-08-10 0:33 GMT+03:00 Denis Magda :

> Sergey,
>
> That’s the only concern I have:
>
> * 5. User takes out nodes from cluster (e.g. for maintenance purposes): no
>   rebalance happens until user recreates BLT on new cluster topology.*
>
> What if a node is crashed (or some other kind of outage) in the middle of
> the night and the user has to be sure that survived nodes will rearrange
> and rebalancing partitions?
>
> —
> Denis
>
>
> > On Aug 4, 2017, at 9:21 AM, Sergey Chugunov 
> wrote:
> >
> > Folks,
> >
> > I've summarized all results from our discussion so far on wiki page:
> > https://cwiki.apache.org/confluence/display/IGNITE/
> Automatic+activation+design+-+draft
> >
> > I hope I reflected the most important details and going to add API
> > suggestions for all use cases soon.
> >
> > Feel free to give feedback here or in comments under the page.
> >
> > Thanks,
> > Sergey.
> >
> > On Thu, Aug 3, 2017 at 5:40 PM, Alexey Kuznetsov 
> > wrote:
> >
> >> Hi,
> >>
>  1. User creates new BLT using WebConsole or other tool and "applies"
> it
> >> to brand-new cluster.
> >>
> >> Good idea, but we also should implement *command-line utility* for the
> same
> >> use case.
> >>
> >> --
> >> Alexey Kuznetsov
> >>
>
>


[jira] [Created] (IGNITE-6024) SQL: execute DML statements on the server when possible

2017-08-10 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-6024:
---

 Summary: SQL: execute DML statements on the server when possible
 Key: IGNITE-6024
 URL: https://issues.apache.org/jira/browse/IGNITE-6024
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Affects Versions: 2.1
Reporter: Vladimir Ozerov
 Fix For: 2.2


Currently we execute DML statements as follows:
1) Get the query result set to the client
2) Construct entry processors and send them to servers in batches

This approach is inefficient, as it causes a lot of unnecessary network 
communication. Instead, we should execute DML statements directly on server 
nodes when possible.

Implementation considerations:
1) Determine the set of queries which could be processed in this way. E.g., 
{{LIMIT/OFFSET}}, {{GROUP BY}}, {{ORDER BY}}, {{DISTINCT}}, etc. are out of 
the question - they must go through the client anyway. The 
{{skipMergeTable}} flag is probably a good starting point (good, not 
precise!)
2) Send a request to every server and execute the DML locally right there
3) No failover support at the moment - throw a "partial update" exception if 
the topology is unstable
4) Handle partition reservation carefully
5) Transactions: we still have a single coordinator - the client. When MVCC 
and TX SQL are ready, the client will assign proper counters to server 
requests.





[jira] [Created] (IGNITE-6023) Index inline should be enabled by default for fixed-length data types

2017-08-10 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-6023:
---

 Summary: Index inline should be enabled by default for 
fixed-length data types
 Key: IGNITE-6023
 URL: https://issues.apache.org/jira/browse/IGNITE-6023
 Project: Ignite
  Issue Type: Improvement
  Components: persistence, sql
Affects Versions: 2.1
Reporter: Vladimir Ozerov
 Fix For: 2.2


Currently index inlining is not enabled by default. This is OK for 
variable-length types such as strings, because we cannot enforce their 
length at the moment (it would require changes to DDL). But for fixed-length 
data types it is perfectly fine to enable inlining by default:
- int
- long
- date





[jira] [Created] (IGNITE-6022) SQL: add native batch execution support for DML statements

2017-08-10 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-6022:
---

 Summary: SQL: add native batch execution support for DML statements
 Key: IGNITE-6022
 URL: https://issues.apache.org/jira/browse/IGNITE-6022
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.1
Reporter: Vladimir Ozerov
 Fix For: 2.2


We have batch execution support for the JDBC and ODBC drivers. This 
decreases the number of network hops. However, we do not have any batch 
execution support on the server side. It means that for a batch of N similar 
statements, every statement will go through the whole execution chain - 
parsing, splitting, communication with servers. And while parsing and 
splitting might be avoided with the help of a statement cache, the heaviest 
part - network communication - is still there.

We need to investigate how to optimize the flow for batch updates. Possible 
improvements:
1) Execute statements with a certain degree of parallelism;
2) Send several query execution requests to the server at once;
3) Ensure that caches are used properly for batching - we should not parse the 
same request multiple times.
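Point 3 can be sketched with a memoizing statement cache (a toy stand-in: the parse step below is hypothetical, not Ignite's actual parser): each distinct statement text is parsed exactly once, however many times it appears in a batch.

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class StatementCache {
    static final AtomicInteger parses = new AtomicInteger(); // counts "expensive" parses

    /** Stand-in for parsing + splitting: expensive, so do it once per distinct SQL text. */
    static List<String> parse(String sql) {
        parses.incrementAndGet();
        return Arrays.asList(sql.trim().split("\\s+"));
    }

    private final ConcurrentMap<String, List<String>> cache = new ConcurrentHashMap<>();

    List<String> parsed(String sql) {
        // computeIfAbsent invokes parse() at most once per distinct key.
        return cache.computeIfAbsent(sql, StatementCache::parse);
    }

    public static void main(String[] args) {
        StatementCache c = new StatementCache();
        for (int i = 0; i < 100; i++)             // a batch of 100 similar statements
            c.parsed("INSERT INTO t VALUES (?, ?)");
        System.out.println(parses.get());         // prints 1
    }
}
```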





[jira] [Created] (IGNITE-6021) SQL: support asynchronous page prefetch on client side

2017-08-10 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-6021:
---

 Summary: SQL: support asynchronous page prefetch on client side
 Key: IGNITE-6021
 URL: https://issues.apache.org/jira/browse/IGNITE-6021
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.1
Reporter: Vladimir Ozerov
 Fix For: 2.2


This ticket should be done after IGNITE-5019. We should allow users to 
control how many pages to prefetch when executing queries. The new API 
should be constructed carefully, taking into account the following 
considerations:
1) Sometimes the user wants to get the first page ASAP, e.g. to display it 
in the UI. In this case the prefetch size should be 0. This is the best 
candidate for the default value.
2) Sometimes the user wants to get all results ASAP, e.g. for batch 
processing. In this case we should change our communication logic - instead 
of the "request - response" model, we should employ a "request - all 
responses" model, where we start query execution and the server pushes 
everything to the client without waiting for a "next page request". This 
should be some special value, e.g. "-1".
3) And sometimes the user wants some real prefetch, e.g. because individual 
row processing on the client side is expensive, and the user may benefit 
from concurrent fetching. In this case the user should be able to set a 
positive integer defining how many pages to request in advance.
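These modes can be illustrated with a bounded queue between a fetcher thread and the consumer (a JDK-only sketch over a mock page list, not the Ignite protocol): the queue capacity plays the role of the prefetch setting - a small bound gives near-on-demand behavior, a very large one approximates the "request - all responses" mode.

```java
import java.util.*;
import java.util.concurrent.*;

public class PrefetchingReader {
    private static final List<Integer> END = new ArrayList<>(); // unique end-of-stream marker

    /** Streams pages through a bounded queue: a background "fetcher" stays up to
     *  'prefetch' pages ahead of the consumer (capacity is at least 1, since a
     *  BlockingQueue cannot have capacity 0). */
    static List<Integer> readAll(List<List<Integer>> pages, int prefetch) throws InterruptedException {
        BlockingQueue<List<Integer>> window = new ArrayBlockingQueue<>(Math.max(1, prefetch));
        Thread fetcher = new Thread(() -> {
            try {
                for (List<Integer> page : pages)
                    window.put(page); // blocks while the prefetch window is full
                window.put(END);
            }
            catch (InterruptedException ignored) {
                // Demo only: a real implementation would propagate cancellation.
            }
        });
        fetcher.start();

        List<Integer> out = new ArrayList<>();
        for (List<Integer> page = window.take(); page != END; page = window.take())
            out.addAll(page); // "process" the rows of the current page

        fetcher.join();
        return out;
    }

    public static void main(String[] args) throws InterruptedException {
        List<List<Integer>> pages = Arrays.asList(
            Arrays.asList(1, 2), Arrays.asList(3, 4), Arrays.asList(5));
        System.out.println(readAll(pages, 2)); // prints [1, 2, 3, 4, 5]
    }
}
```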





[jira] [Created] (IGNITE-6020) SQL: client should request first pages on query execution instead of first cursor read

2017-08-10 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-6020:
---

 Summary: SQL: client should request first pages on query execution 
instead of first cursor read
 Key: IGNITE-6020
 URL: https://issues.apache.org/jira/browse/IGNITE-6020
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.1
Reporter: Vladimir Ozerov
 Fix For: 2.2


Currently we request the first data blocks from server nodes on the first 
cursor access. However, user code might want to execute a query and access 
the cursor after some delay, in the hope that asynchronous execution will do 
the trick. 

For this reason, we should start requesting pages eagerly on "execute" command 
rather than on cursor access.

{code}
try (QueryCursor<List<?>> cursor = cache.query(...)) { // <-- Should be here
    ...
    for (List<?> row : cursor) {                       // <-- But currently here
        ...
    }
}
{code}





[jira] [Created] (IGNITE-6019) SQL: client node should not hold the whole data set in-memory when possible

2017-08-10 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-6019:
---

 Summary: SQL: client node should not hold the whole data set 
in-memory when possible
 Key: IGNITE-6019
 URL: https://issues.apache.org/jira/browse/IGNITE-6019
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.1
Reporter: Vladimir Ozerov
Assignee: Alexander Paschenko
 Fix For: 2.2


Our SQL engine requests data from server nodes in pieces called "pages". 
This allows us to control memory consumption on the client side. However, 
currently our client code is designed in such a way that all pages are 
requested from all servers before a single cursor row is returned to the 
user. This defeats the whole idea of a "cursor" and a "page", and could 
easily crash the client node with an OOME. 

We need to fix that and request further pages in a kind of sliding window, 
keeping no more than "N" pages in memory simultaneously. Note that sometimes 
this is not possible, e.g. in the case of {{DISTINCT}} or a non-collocated 
{{GROUP BY}}: there we would have to build the whole result set first 
anyway. So let's focus on the scenario where the whole result set is not 
needed.

As currently everything is requested synchronously page-by-page, in the first 
version it would be enough to distribute synchronous page requests between 
cursor reads, without any prefetch. 

Implementation details:
1) The optimization should be applied only to {{skipMergeTbl=true}} cases, 
where the complete result set of the map queries is not needed.
2) Starting point is {{GridReduceQueryExecutor#query}}, see 
{{skipMergeTbl=true}} branch - this is where we get all pages eagerly.
3) Get no more than one page from the server at a time. We request the page, 
iterate over it, then request another page.
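A JDK-only sketch of p.3, with a mock page source standing in for the server (illustrative only, not the Ignite code): the cursor keeps at most one page in memory and requests the next page only when the current one is fully consumed.

```java
import java.util.*;
import java.util.function.IntFunction;

public class PagedCursor implements Iterator<Integer> {
    private final IntFunction<List<Integer>> pageSource; // fetches a page by index; empty list = no more pages
    private int nextPage;                                // index of the next page to request
    private Iterator<Integer> current = Collections.emptyIterator();
    private boolean exhausted;

    PagedCursor(IntFunction<List<Integer>> pageSource) {
        this.pageSource = pageSource;
    }

    @Override public boolean hasNext() {
        // Request the next page only when the current one is fully consumed:
        // at most one page is held in memory at any time.
        while (!current.hasNext() && !exhausted) {
            List<Integer> page = pageSource.apply(nextPage++);
            if (page.isEmpty())
                exhausted = true;
            else
                current = page.iterator();
        }
        return current.hasNext();
    }

    @Override public Integer next() {
        if (!hasNext())
            throw new NoSuchElementException();
        return current.next();
    }

    public static void main(String[] args) {
        // Mock "server": 7 rows split into pages of 3.
        List<Integer> rows = Arrays.asList(1, 2, 3, 4, 5, 6, 7);
        int pageSize = 3;
        PagedCursor cur = new PagedCursor(p -> {
            int from = p * pageSize;
            return from >= rows.size()
                ? Collections.emptyList()
                : rows.subList(from, Math.min(from + pageSize, rows.size()));
        });
        List<Integer> out = new ArrayList<>();
        cur.forEachRemaining(out::add);
        System.out.println(out); // prints [1, 2, 3, 4, 5, 6, 7]
    }
}
```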


