[jira] [Created] (IGNITE-12276) Thin client uses Optimized marshaller for TreeSet and TreeMap

2019-10-09 Thread Mikhail Cherkasov (Jira)
Mikhail Cherkasov created IGNITE-12276:
--

 Summary: Thin client uses Optimized marshaller for TreeSet and 
TreeMap
 Key: IGNITE-12276
 URL: https://issues.apache.org/jira/browse/IGNITE-12276
 Project: Ignite
  Issue Type: Bug
  Components: thin client
Reporter: Mikhail Cherkasov


Thin client uses the Optimized marshaller for TreeSet and TreeMap, while the 
thick client replaces them with BinaryTreeMap/BinaryTreeSet.

As a result, this blocks schema changes for stored objects.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-7062) Ignite page with video resources and recording

2019-10-09 Thread Denis A. Magda (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948063#comment-16948063
 ] 

Denis A. Magda commented on IGNITE-7062:


[~mmuzaf], unassigned the release version and took over this ticket. Thanks!

> Ignite page with video resources and recording
> --
>
> Key: IGNITE-7062
> URL: https://issues.apache.org/jira/browse/IGNITE-7062
> Project: Ignite
>  Issue Type: Task
>  Components: site
>Reporter: Denis A. Magda
>Assignee: Denis A. Magda
>Priority: Major
>
> There are plenty of recordings of Ignite meetups, webinars and conference 
> talks available on the Internet. Some of them introduce basic components and 
> capabilities, some share best practices and pitfalls, while others share 
> use cases.
> Generally, it's beneficial for both the Ignite community and users to gather 
> and expose the most useful ones under a special video recordings section. For 
> instance, we might consider adding these talks right away:
> * Ignite use case: https://youtu.be/1D8hyLWMtfM
> * Ignite essentials: https://www.youtube.com/watch?v=G22L2KW9gEQ
> * Kubernetes: https://www.youtube.com/watch?v=igDB0wyodr8
> Instead of creating a new page for this purpose, I would rework the 
> screencasts page, combining all the media content there: 
> https://ignite.apache.org/screencasts.html





[jira] [Updated] (IGNITE-7062) Ignite page with video resources and recording

2019-10-09 Thread Denis A. Magda (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis A. Magda updated IGNITE-7062:
---
Fix Version/s: (was: 2.8)

> Ignite page with video resources and recording
> --
>
> Key: IGNITE-7062
> URL: https://issues.apache.org/jira/browse/IGNITE-7062
> Project: Ignite
>  Issue Type: Task
>  Components: site
>Reporter: Denis A. Magda
>Assignee: Prachi Garg
>Priority: Major
>
> There are plenty of recordings of Ignite meetups, webinars and conference 
> talks available on the Internet. Some of them introduce basic components and 
> capabilities, some share best practices and pitfalls, while others share 
> use cases.
> Generally, it's beneficial for both the Ignite community and users to gather 
> and expose the most useful ones under a special video recordings section. For 
> instance, we might consider adding these talks right away:
> * Ignite use case: https://youtu.be/1D8hyLWMtfM
> * Ignite essentials: https://www.youtube.com/watch?v=G22L2KW9gEQ
> * Kubernetes: https://www.youtube.com/watch?v=igDB0wyodr8
> Instead of creating a new page for this purpose, I would rework the 
> screencasts page, combining all the media content there: 
> https://ignite.apache.org/screencasts.html





[jira] [Assigned] (IGNITE-7062) Ignite page with video resources and recording

2019-10-09 Thread Denis A. Magda (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis A. Magda reassigned IGNITE-7062:
--

Assignee: Denis A. Magda  (was: Prachi Garg)

> Ignite page with video resources and recording
> --
>
> Key: IGNITE-7062
> URL: https://issues.apache.org/jira/browse/IGNITE-7062
> Project: Ignite
>  Issue Type: Task
>  Components: site
>Reporter: Denis A. Magda
>Assignee: Denis A. Magda
>Priority: Major
>
> There are plenty of recordings of Ignite meetups, webinars and conference 
> talks available on the Internet. Some of them introduce basic components and 
> capabilities, some share best practices and pitfalls, while others share 
> use cases.
> Generally, it's beneficial for both the Ignite community and users to gather 
> and expose the most useful ones under a special video recordings section. For 
> instance, we might consider adding these talks right away:
> * Ignite use case: https://youtu.be/1D8hyLWMtfM
> * Ignite essentials: https://www.youtube.com/watch?v=G22L2KW9gEQ
> * Kubernetes: https://www.youtube.com/watch?v=igDB0wyodr8
> Instead of creating a new page for this purpose, I would rework the 
> screencasts page, combining all the media content there: 
> https://ignite.apache.org/screencasts.html





[jira] [Commented] (IGNITE-10683) Prepare process of packaging and delivering thin clients

2019-10-09 Thread Denis A. Magda (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-10683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948061#comment-16948061
 ] 

Denis A. Magda commented on IGNITE-10683:
-

I'll let [~isapego] outline our release strategy.

> Prepare process of packaging and delivering thin clients
> 
>
> Key: IGNITE-10683
> URL: https://issues.apache.org/jira/browse/IGNITE-10683
> Project: Ignite
>  Issue Type: Task
>Reporter: Peter Ivanov
>Assignee: Peter Ivanov
>Priority: Major
> Fix For: 2.8
>
>
> # **Node.js client**
> #* +Instruction+: 
> https://github.com/nobitlost/ignite/blob/ignite--docs/modules/platforms/nodejs/README.md#publish-ignite-nodejs-client-on-npmjscom-instruction
> #* +Uploaded+: https://www.npmjs.com/package/apache-ignite-client
> # **PHP client**
> #* +Instruction+: 
> https://github.com/nobitlost/ignite/blob/ignite-7783-docs/modules/platforms/php/README.md#release-the-client-in-the-php-package-repository-instruction
> {panel}
> Cannot be uploaded to Packagist, as the client must live in a dedicated 
> repository for that - 
> https://issues.apache.org/jira/browse/IGNITE-7783?focusedCommentId=16595476&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16595476
> Installation from sources works.
> {panel}
> # **Python client**
> I have already registered the package `pyignite` on PyPI[1]. The person who 
> is going to take responsibility for maintaining it should create an 
> account on PyPI and mail me in private, so that I can grant them the 
> necessary rights. They must also install twine[3].
> The process of packaging is well described in the packaging tutorial[2]. In 
> a nutshell, the maintainer must do the following:
> ## Clone/pull the sources from the git repository,
> ## Enter the directory in which `setup.py` resides (“the setup 
> directory”); in our case it is `modules/platforms/python`.
> ## Create the packages with the command `python3 setup.py sdist bdist_wheel`. 
> The packages will be created in the `modules/platforms/python/dist` folder.
> ## Upload packages with twine: `twine upload dist/*`.
> It is very useful to have a dedicated Python virtual environment prepared to 
> perform steps 3-4. Just do an editable install of `pyignite` into that 
> environment from the setup directory: `pip3 install -e .` You can also 
> install twine (`pip install twine`) in it.
> Consider also making a `.pypirc` file to save time on logging in to PyPI. 
> The newest version of `twine` is said to support keyrings on Linux and Mac, 
> but I have not tried this yet.
> [1] https://pypi.org/project/pyignite/
> [2] https://packaging.python.org/tutorials/packaging-projects/
> [3] https://twine.readthedocs.io/en/latest/
> Some other notes on PyPI and versioning:
> - The package version lives in `setup.py`, as the `version` 
> argument of the `setuptools.setup()` function. Editing `setup.py` is the 
> only way to set the package version.
> - You absolutely cannot replace a package on PyPI (hijacking prevention). If 
> you have published a package by mistake, all you can do is delete the 
> unwanted package, increment the version counter in `setup.py`, and try again.
> - If you upload the package through the web interface of PyPI (without 
> twine), the package description will be garbled: the web interface does not 
> support Markdown.
> Anyway, I would like to join in the congratulations on the successful 
> release. Kudos to the team.
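The versioning rule above (a mistakenly published package cannot be replaced, only re-published under a higher version) can be sketched with a small Python helper. This is a hypothetical illustration, not part of pyignite:

```python
def bump_patch(version: str) -> str:
    """Increment the last numeric component of a dotted version string.

    Since PyPI never allows replacing an already-uploaded package, the
    only way to fix a mistaken release is to bump the version in
    setup.py and upload again.
    """
    parts = version.split(".")
    parts[-1] = str(int(parts[-1]) + 1)
    return ".".join(parts)

print(bump_patch("0.3.4"))  # → 0.3.5
```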





[jira] [Commented] (IGNITE-8617) Node Discovery Using AWS Application ELB

2019-10-09 Thread Denis A. Magda (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-8617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16948060#comment-16948060
 ] 

Denis A. Magda commented on IGNITE-8617:


[~slukyanov], could you please do a final review round?

> Node Discovery Using AWS Application ELB
> 
>
> Key: IGNITE-8617
> URL: https://issues.apache.org/jira/browse/IGNITE-8617
> Project: Ignite
>  Issue Type: New Feature
>  Components: aws, documentation
>Reporter: Uday Kale
>Assignee: Uday Kale
>Priority: Major
> Fix For: 2.8
>
>
> Support for Node discovery using AWS Application ELB. 





[jira] [Closed] (IGNITE-11539) Document services hot redeployment via DeploymentSpi

2019-10-09 Thread Denis A. Magda (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis A. Magda closed IGNITE-11539.
---

> Document services hot redeployment via DeploymentSpi
> 
>
> Key: IGNITE-11539
> URL: https://issues.apache.org/jira/browse/IGNITE-11539
> Project: Ignite
>  Issue Type: Task
>  Components: documentation, managed services
>Reporter: Vyacheslav Daradur
>Priority: Major
>  Labels: iep-17
> Fix For: 2.8
>
>
> It's necessary to document "how to use" service hot redeployment.





[jira] [Commented] (IGNITE-11977) Data streamer pool MXBean is registered as ThreadPoolMXBean instead of StripedExecutorMXBean

2019-10-09 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-11977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947925#comment-16947925
 ] 

Ignite TC Bot commented on IGNITE-11977:


{panel:title=Branch: [pull/6695/head] Base: [master] : Possible Blockers 
(16)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}PDS (Indexing){color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=4676430]]
* IgnitePdsWithIndexingCoreTestSuite: 
IgniteWalRecoveryTest.testWalBigObjectNodeCancel - Test has low fail rate in 
base branch 0,0% and is not flaky

{color:#d04437}Cache 2{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=4676416]]
* IgniteCacheTestSuite2: 
GridCacheAtomicNearReadersSelfTest.testTwoNodesTwoKeysNoBackups - Test has low 
fail rate in base branch 0,0% and is not flaky

{color:#d04437}MVCC Queries{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=4676400]]
* IgniteCacheMvccSqlTestSuite: 
CacheMvccPartitionedSqlTxQueriesTest.testAccountsTxDmlSql_WithRemoves_SingleNode_Persistence
 - Test has low fail rate in base branch 0,0% and is not flaky

{color:#d04437}Queries 1{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=4676442]]
* IgniteBinaryCacheQueryTestSuite: 
IndexingCachePartitionLossPolicySelfTest.testReadWriteSafeWithBackupsAfterKillCrd[TRANSACTIONAL]
 - Test has low fail rate in base branch 0,0% and is not flaky

{color:#d04437}MVCC PDS 2{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=4676460]]
* IgnitePdsMvccTestSuite2: IgniteNodeStoppedDuringDisableWALTest.test - History 
for base branch is absent.

{color:#d04437}MVCC PDS 4{color} [[tests 
2|https://ci.ignite.apache.org/viewLog.html?buildId=4676462]]
* IgnitePdsMvccTestSuite4: ResetLostPartitionTest.testResetLostPartitions - 
Test has low fail rate in base branch 0,0% and is not flaky
* IgnitePdsMvccTestSuite4: 
ResetLostPartitionTest.testReactivateGridBeforeResetLostPartitions - Test has 
low fail rate in base branch 0,0% and is not flaky

{color:#d04437}Cache 8{color} [[tests 
2|https://ci.ignite.apache.org/viewLog.html?buildId=4676422]]
* IgniteCacheTestSuite8: GridCacheRebalancingSyncSelfTest.testLoadRebalancing - 
Test has low fail rate in base branch 0,0% and is not flaky
* IgniteCacheTestSuite8: 
GridCacheRebalancingAsyncSelfTest.testComplexRebalancing - Test has low fail 
rate in base branch 0,0% and is not flaky

{color:#d04437}PDS 2{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=4676433]]
* IgnitePdsTestSuite2: IgniteNodeStoppedDuringDisableWALTest.test - New test 
duration 170s is more than 1 minute

{color:#d04437}MVCC Cache{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=4676398]]
* IgniteCacheMvccTestSuite: CacheMvccTransactionsTest.testNodesRestartNoHang - 
Test has low fail rate in base branch 0,0% and is not flaky

{color:#d04437}Queries 2{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=4676377]]
* IgniteBinaryCacheQueryTestSuite2: IgniteCacheQueriesLoadTest1.testQueries - 
New test duration 169s is more than 1 minute

{color:#d04437}MVCC Cache 7{color} [[tests 
2|https://ci.ignite.apache.org/viewLog.html?buildId=4676456]]
* IgniteCacheMvccTestSuite7: 
MvccCacheGroupMetricsMBeanTest.testCacheGroupMetrics - History for base branch 
is absent.
* IgniteCacheMvccTestSuite7: 
GridCacheRebalancingWithAsyncClearingMvccTest.testPartitionClearingNotBlockExchange
 - Test has low fail rate in base branch 0,0% and is not flaky

{color:#d04437}Cache (Restarts) 2{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=4676413]]
* IgniteCacheRestartTestSuite2: 
IgniteCacheGetRestartTest.testGetRestartPartitioned2 - Test has low fail rate 
in base branch 0,0% and is not flaky

{color:#d04437}[Licenses Headers]{color} [[tests 0 Exit Code 
|https://ci.ignite.apache.org/viewLog.html?buildId=4676403]]

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4676466&buildTypeId=IgniteTests24Java8_RunAll]

> Data streamer pool MXBean is registered as ThreadPoolMXBean instead of 
> StripedExecutorMXBean
> 
>
> Key: IGNITE-11977
> URL: https://issues.apache.org/jira/browse/IGNITE-11977
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: Stanislav Lukyanov
>Assignee: Ruslan Kamashev
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Data streamer pool is registered with a ThreadPoolMXBean while it is actually 
> a StripedExecutor and can use a StripedExecutorMXBean.
> Need to change the registration in the IgniteKernal code. It should be 
> registered the same way as the striped executor pool.




[jira] [Commented] (IGNITE-12268) Adds possibility to set up custom REST processor.

2019-10-09 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947808#comment-16947808
 ] 

Ignite TC Bot commented on IGNITE-12268:


{panel:title=Branch: [pull/6948/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4677911&buildTypeId=IgniteTests24Java8_RunAll]

> Adds possibility to set up custom REST processor.
> -
>
> Key: IGNITE-12268
> URL: https://issues.apache.org/jira/browse/IGNITE-12268
> Project: Ignite
>  Issue Type: Improvement
>Reporter: PetrovMikhail
>Assignee: PetrovMikhail
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We need to add the ability to configure a custom REST processor via the 
> Ignite plugin functionality.





[jira] [Commented] (IGNITE-11586) Update platforms/cpp/DEVNOTES.txt: OpenSSL

2019-10-09 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947802#comment-16947802
 ] 

Igor Sapego commented on IGNITE-11586:
--

I've lost track of this ticket :) I'm going to merge it now.

> Update platforms/cpp/DEVNOTES.txt: OpenSSL 
> ---
>
> Key: IGNITE-11586
> URL: https://issues.apache.org/jira/browse/IGNITE-11586
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Affects Versions: 2.7
> Environment: Windows 10 Pro, Visual Studio 2010 Pro, Oracle JDK 8
>Reporter: Sergey Kozlov
>Assignee: Stephen Darlington
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ODBC compilation requires OpenSSL headers, and the project compilation fails 
> because it is unable to open {{include/openssl/ssl.h}}. I suggest adding the 
> requirement to install OpenSSL and setting the corresponding environment variable:
> {noformat}
> Building on Windows with Visual Studio (tm)
> --
> Common Requirements:
> ...
> * OPENSSL_HOME environment variable must be set pointing to OpenSSL 
> installation directory.
> {noformat}





[jira] [Commented] (IGNITE-12271) Persistence can't read pages from disk on Big Endian architectures

2019-10-09 Thread Ilya Kasnacheev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947778#comment-16947778
 ] 

Ilya Kasnacheev commented on IGNITE-12271:
--

[~agoncharuk] I have created a ticket to make it configurable, let's split it 
into phase 2.

> Persistence can't read pages from disk on Big Endian architectures
> --
>
> Key: IGNITE-12271
> URL: https://issues.apache.org/jira/browse/IGNITE-12271
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Reporter: Ilya Kasnacheev
>Assignee: Ilya Kasnacheev
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> So we are trying to start master on Big Endian, and we get the following 
> exceptions:
> {code}
> Runtime failure on row: Row@5bf1ee15[ snip, ver: GridCacheVersion 
> [topVer=180723326, order=1569259166164, nodeOrder=1] ][ 1307496, 32211, 3, 0 
> ]" [5-197]
> at 
> org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
> ... 41 more
> Caused by: class 
> org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
>  Runtime failure on row: Row@5bf1ee15[ snip], ver: GridCacheVersion 
> [topVer=180723326, order=1569259166164, nodeOrder=1] ][ 1307496, 32211, 3, 0 ]
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doPut(BPlusTree.java:2320)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putx(BPlusTree.java:2267)
> at 
> org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.putx(H2TreeIndex.java:323)
> ... 38 more
> Caused by: java.lang.IllegalStateException: Failed to get page IO instance 
> (page content is corrupted)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forVersion(IOVersions.java:84)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forPage(IOVersions.java:96)
> at 
> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:153)
> at 
> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:107)
> at 
> org.apache.ignite.internal.processors.query.h2.database.H2RowFactory.getRow(H2RowFactory.java:61)
> at 
> org.apache.ignite.internal.processors.query.h2.database.H2Tree.createRowFromLink(H2Tree.java:221)
> at 
> org.apache.ignite.internal.processors.query.h2.database.io.AbstractH2ExtrasLeafIO.getLookupRow(AbstractH2ExtrasLeafIO.java:153)
> at 
> org.apache.ignite.internal.processors.query.h2.database.io.AbstractH2ExtrasLeafIO.getLookupRow(AbstractH2ExtrasLeafIO.java:35)
> {code}





[jira] [Created] (IGNITE-12275) Make byte order in Persistence pages configurable

2019-10-09 Thread Ilya Kasnacheev (Jira)
Ilya Kasnacheev created IGNITE-12275:


 Summary: Make byte order in Persistence pages configurable
 Key: IGNITE-12275
 URL: https://issues.apache.org/jira/browse/IGNITE-12275
 Project: Ignite
  Issue Type: Improvement
  Components: persistence
Reporter: Ilya Kasnacheev


Right now we force it to be LITTLE_ENDIAN, but we don't read it in a fixed 
order, which leads to failures.

After IGNITE-12271 it will be changed to nativeOrder().

We should make it configurable so that persistence files can be moved between 
nodes of different architectures.
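The failure mode is easy to illustrate outside of Ignite internals: bytes written in one order and read back in another yield a different value, which is exactly how a page header becomes unreadable. A minimal Python sketch (a general endianness illustration, not Ignite code):

```python
import struct

value = 0x01020304

# Pack the same 4-byte integer under each byte order.
little = struct.pack("<I", value)  # little-endian layout
big = struct.pack(">I", value)     # big-endian layout

# The on-disk bytes differ, so reading little-endian bytes with the
# wrong (big-endian) order produces a byte-swapped, garbage value.
assert little == b"\x04\x03\x02\x01"
assert big == b"\x01\x02\x03\x04"
assert struct.unpack(">I", little)[0] == 0x04030201
```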





[jira] [Commented] (IGNITE-12267) ClassCastException after change column type (drop, add)

2019-10-09 Thread Kirill Tkalenko (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947775#comment-16947775
 ] 

Kirill Tkalenko commented on IGNITE-12267:
--

[~taras.ledkov], please review the code.

> ClassCastException after change column type (drop, add)
> ---
>
> Key: IGNITE-12267
> URL: https://issues.apache.org/jira/browse/IGNITE-12267
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Assignee: Kirill Tkalenko
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Changing an SQL column's type is not supported, but it is possible to drop a 
> column and re-create it with a new type.
> The migration script is applied without errors.
> The error then occurs whenever the column is accessed.





[jira] [Updated] (IGNITE-12267) ClassCastException after change column type (drop, add)

2019-10-09 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-12267:
-
Reviewer: Taras Ledkov  (was: Denis Chudov)

> ClassCastException after change column type (drop, add)
> ---
>
> Key: IGNITE-12267
> URL: https://issues.apache.org/jira/browse/IGNITE-12267
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Assignee: Kirill Tkalenko
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Changing an SQL column's type is not supported, but it is possible to drop a 
> column and re-create it with a new type.
> The migration script is applied without errors.
> The error then occurs whenever the column is accessed.





[jira] [Commented] (IGNITE-12267) ClassCastException after change column type (drop, add)

2019-10-09 Thread Denis Chudov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947762#comment-16947762
 ] 

Denis Chudov commented on IGNITE-12267:
---

[~ktkale...@gridgain.com] code looks good to me.

> ClassCastException after change column type (drop, add)
> ---
>
> Key: IGNITE-12267
> URL: https://issues.apache.org/jira/browse/IGNITE-12267
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Assignee: Kirill Tkalenko
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Changing an SQL column's type is not supported, but it is possible to drop a 
> column and re-create it with a new type.
> The migration script is applied without errors.
> The error then occurs whenever the column is accessed.





[jira] [Commented] (IGNITE-12189) Implement correct limit for TextQuery

2019-10-09 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947760#comment-16947760
 ] 

Ignite TC Bot commented on IGNITE-12189:


{panel:title=Branch: [pull/6917/head] Base: [master] : Possible Blockers 
(1)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}PDS (Indexing){color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=4678107]]
* IgnitePdsWithIndexingCoreTestSuite: 
IgniteLogicalRecoveryTest.testRecoveryOnJoinToInactiveCluster - Test has low 
fail rate in base branch 0,0% and is not flaky

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4677672&buildTypeId=IgniteTests24Java8_RunAll]

> Implement correct limit for TextQuery
> -
>
> Key: IGNITE-12189
> URL: https://issues.apache.org/jira/browse/IGNITE-12189
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Reporter: Yuriy Shuliha 
>Assignee: Yuriy Shuliha 
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> PROBLEM
> For now, each server node returns all response records to the client node, 
> which may amount to thousands or hundreds of thousands of records, even if we 
> only need the first 10-100. Moreover, all the results are added to the queue 
> in _*GridCacheQueryFutureAdapter*_ in arbitrary order, by pages.
>  There is no way to deliver a deterministic result.
> SOLUTION
>  Implement _*limit*_ as a parameter of _*TextQuery*_ and 
> _*GridCacheQueryRequest*_. 
>  It should be passed as the limit parameter to Lucene's 
> _*IndexSearcher.search()*_ in _*GridLuceneIndex*_.
> For distributed queries, _*limit*_ will also trim the response queue when 
> merging results.
> Type: long
>  Special value: 0 -> no limit (Integer.MAX_VALUE)
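The merge-and-trim step for distributed queries can be sketched in Python. This is a hypothetical reducer, not Ignite code, and it assumes each node returns a locally sorted result list:

```python
import heapq
from itertools import islice

def merge_with_limit(per_node_results, limit):
    """Merge locally sorted per-node result lists and keep only the
    first `limit` entries; 0 means no limit, mirroring the special
    value described above."""
    merged = heapq.merge(*per_node_results)  # lazy k-way merge
    if limit == 0:
        return list(merged)
    # Trimming here is what keeps the reducer from buffering every
    # record that each server node sent back.
    return list(islice(merged, limit))

node_a = [1, 4, 7]
node_b = [2, 3, 9]
print(merge_with_limit([node_a, node_b], 4))  # → [1, 2, 3, 4]
```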





[jira] [Commented] (IGNITE-12271) Persistence can't read pages from disk on Big Endian architectures

2019-10-09 Thread Ilya Kasnacheev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947734#comment-16947734
 ] 

Ilya Kasnacheev commented on IGNITE-12271:
--

[~agoncharuk] Unfortunately, I don't have a concrete list of the places where 
we should make this change, and we don't have a robust testing environment for 
it either :(

> Persistence can't read pages from disk on Big Endian architectures
> --
>
> Key: IGNITE-12271
> URL: https://issues.apache.org/jira/browse/IGNITE-12271
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Reporter: Ilya Kasnacheev
>Assignee: Ilya Kasnacheev
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> So we are trying to start master on Big Endian, and we get the following 
> exceptions:
> {code}
> Runtime failure on row: Row@5bf1ee15[ snip, ver: GridCacheVersion 
> [topVer=180723326, order=1569259166164, nodeOrder=1] ][ 1307496, 32211, 3, 0 
> ]" [5-197]
> at 
> org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
> ... 41 more
> Caused by: class 
> org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
>  Runtime failure on row: Row@5bf1ee15[ snip], ver: GridCacheVersion 
> [topVer=180723326, order=1569259166164, nodeOrder=1] ][ 1307496, 32211, 3, 0 ]
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doPut(BPlusTree.java:2320)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putx(BPlusTree.java:2267)
> at 
> org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.putx(H2TreeIndex.java:323)
> ... 38 more
> Caused by: java.lang.IllegalStateException: Failed to get page IO instance 
> (page content is corrupted)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forVersion(IOVersions.java:84)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forPage(IOVersions.java:96)
> at 
> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:153)
> at 
> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:107)
> at 
> org.apache.ignite.internal.processors.query.h2.database.H2RowFactory.getRow(H2RowFactory.java:61)
> at 
> org.apache.ignite.internal.processors.query.h2.database.H2Tree.createRowFromLink(H2Tree.java:221)
> at 
> org.apache.ignite.internal.processors.query.h2.database.io.AbstractH2ExtrasLeafIO.getLookupRow(AbstractH2ExtrasLeafIO.java:153)
> at 
> org.apache.ignite.internal.processors.query.h2.database.io.AbstractH2ExtrasLeafIO.getLookupRow(AbstractH2ExtrasLeafIO.java:35)
> {code}





[jira] [Commented] (IGNITE-9181) Continuous query with remote filter factory doesn't let nodes join

2019-10-09 Thread Denis Mekhanikov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16947697#comment-16947697
 ] 

Denis Mekhanikov commented on IGNITE-9181:
--

This issue is resolved under IGNITE-3653

> Continuous query with remote filter factory doesn't let nodes join 
> ---
>
> Key: IGNITE-9181
> URL: https://issues.apache.org/jira/browse/IGNITE-9181
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.6
>Reporter: Denis Mekhanikov
>Assignee: Denis Mekhanikov
>Priority: Major
> Attachments: ContinuousQueryNodeJoinTest.java
>
>
> When a continuous query is registered from a client node with a remote 
> filter factory configured, and P2P class loading is enabled, all new 
> nodes fail with an exception that doesn't let them join the cluster.
> Exception:
> {noformat}
> [ERROR][tcp-disco-msg-worker-#15%continuous.ContinuousQueryNodeJoinTest1%][TestTcpDiscoverySpi]
>  Runtime error caught during grid runnable execution: GridWorker 
> [name=tcp-disco-msg-worker, 
> igniteInstanceName=continuous.ContinuousQueryNodeJoinTest1, finished=false, 
> hashCode=726450632, interrupted=false, 
> runner=tcp-disco-msg-worker-#15%continuous.ContinuousQueryNodeJoinTest1%], 
> nextNode=[null]
> java.lang.NullPointerException
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandlerV2.getEventFilter(CacheContinuousQueryHandlerV2.java:108)
>   at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler.register(CacheContinuousQueryHandler.java:330)
>   at 
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.registerHandler(GridContinuousProcessor.java:1738)
>   at 
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.onDiscoDataReceived(GridContinuousProcessor.java:646)
>   at 
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.onGridDataReceived(GridContinuousProcessor.java:538)
>   at 
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$5.onExchange(GridDiscoveryManager.java:889)
>   at 
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.onExchange(TcpDiscoverySpi.java:1993)
>   at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processNodeAddedMessage(ServerImpl.java:4502)
>   at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2804)
>   at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2604)
>   at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorker.body(ServerImpl.java:7115)
>   at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2688)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>   at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:7059)
>   at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
> {noformat}
> Reproducer is in the attachment.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-9181) Continuous query with remote filter factory doesn't let nodes join

2019-10-09 Thread Denis Mekhanikov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Mekhanikov updated IGNITE-9181:
-
Release Note:   (was: This issue is resolved under IGNITE-3653)

> Continuous query with remote filter factory doesn't let nodes join 
> ---
>
> Key: IGNITE-9181
> URL: https://issues.apache.org/jira/browse/IGNITE-9181
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.6
>Reporter: Denis Mekhanikov
>Assignee: Denis Mekhanikov
>Priority: Major
> Attachments: ContinuousQueryNodeJoinTest.java
>
>
> When continuous query is registered from a client node and it has a remote 
> filter factory configured, and P2P class loading is enabled, then all new 
> nodes fail with an exception, which doesn't let them join the cluster.
> Reproducer is in the attachment.





[jira] [Resolved] (IGNITE-9181) Continuous query with remote filter factory doesn't let nodes join

2019-10-09 Thread Denis Mekhanikov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Mekhanikov resolved IGNITE-9181.
--
Release Note: This issue is resolved under IGNITE-3653
  Resolution: Duplicate

> Continuous query with remote filter factory doesn't let nodes join 
> ---
>
> Key: IGNITE-9181
> URL: https://issues.apache.org/jira/browse/IGNITE-9181
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.6
>Reporter: Denis Mekhanikov
>Assignee: Denis Mekhanikov
>Priority: Major
> Attachments: ContinuousQueryNodeJoinTest.java
>
>
> When continuous query is registered from a client node and it has a remote 
> filter factory configured, and P2P class loading is enabled, then all new 
> nodes fail with an exception, which doesn't let them join the cluster.
> Reproducer is in the attachment.





[jira] [Commented] (IGNITE-9913) Prevent data updates blocking in case of backup BLT server node leave

2019-10-09 Thread Anton Vinogradov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947637#comment-16947637
 ] 

Anton Vinogradov commented on IGNITE-9913:
--

Alexey Goncharuk,

1. It seems I've got the situation now; please check my understanding.

Prepared txs can be located on backup nodes whose partitions are in any of the 
states {{[state == MOVING || state == OWNING || state == RENTING]}},
 so we have to cover all these cases.
 For example, we should also repair all partitions with an unfinished rebalance 
(MOVING), correct?

So, {{Set failedPrimaries = 
aff.primaryPartitions(fut.exchangeId().eventNode().id(), aff.lastVersion());}} 
is a correct calculation, but 
 {{Set locBackups = 
aff.backupPartitions(fut.sharedContext().localNodeId(), aff.lastVersion());}} 
should be replaced with {{dht.localPartitions()}} usage?

> Prevent data updates blocking in case of backup BLT server node leave
> -
>
> Key: IGNITE-9913
> URL: https://issues.apache.org/jira/browse/IGNITE-9913
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Reporter: Ivan Rakov
>Assignee: Anton Vinogradov
>Priority: Major
> Fix For: 2.8
>
> Attachments: 9913_yardstick.png, master_yardstick.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Ignite cluster performs distributed partition map exchange when any server 
> node leaves or joins the topology.
> Distributed PME blocks all updates and may take a long time. If all 
> partitions are assigned according to the baseline topology and server node 
> leaves, there's no actual need to perform distributed PME: every cluster node 
> is able to recalculate new affinity assigments and partition states locally. 
> If we'll implement such lightweight PME and handle mapping and lock requests 
> on new topology version correctly, updates won't be stopped (except updates 
> of partitions that lost their primary copy).
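
The key claim above is that every node can recalculate the new affinity assignment locally, which holds whenever the assignment is a pure, deterministic function of the node set. A minimal Java sketch (rendezvous-style hashing with hypothetical names, not Ignite's actual affinity function) showing that two independent computations over the same node set agree, so no distributed exchange is needed to agree on owners:

```java
import java.util.*;

public class LocalAffinityDemo {
    /** Deterministic partition->owner map: the highest-scoring node wins each partition. */
    static Map<Integer, String> assign(Collection<String> nodes, int partitions) {
        Map<Integer, String> owners = new TreeMap<>();
        for (int p = 0; p < partitions; p++) {
            String best = null;
            long bestScore = Long.MIN_VALUE;
            for (String n : nodes) {
                // Mix node id and partition into a stable pseudo-random score.
                long score = (n.hashCode() * 31L + p) * 0x9E3779B97F4A7C15L;
                if (best == null || score > bestScore) {
                    bestScore = score;
                    best = n;
                }
            }
            owners.put(p, best);
        }
        return owners;
    }

    public static void main(String[] args) {
        List<String> cluster = Arrays.asList("nodeA", "nodeB", "nodeC");
        // Each node computes the assignment independently; the results are identical,
        // so no coordination is needed when a baseline node leaves.
        Map<Integer, String> onNodeA = assign(cluster, 8);
        Map<Integer, String> onNodeB = assign(new ArrayList<>(cluster), 8);
        System.out.println(onNodeA.equals(onNodeB)); // true
    }
}
```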





[jira] [Updated] (IGNITE-9379) Ignite node hangs after OOM in a thread from thin client thread pool

2019-10-09 Thread Maxim Muzafarov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Muzafarov updated IGNITE-9379:

Priority: Critical  (was: Major)

> Ignite node hangs after OOM in a thread from thin client thread pool
> 
>
> Key: IGNITE-9379
> URL: https://issues.apache.org/jira/browse/IGNITE-9379
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.6
>Reporter: Taras Ledkov
>Priority: Critical
> Fix For: 2.8
>
>
> OOM exception handler isn't set up for thin client thread pool.
> The issue is described in details at the [dev 
> list|http://apache-ignite-evelopers.2346864.n4.nabble.com/Binary-Client-Protocol-client-hangs-in-case-of-OOM-on-server-td34224.html].
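
In plain Java, wiring such a handler into a pool amounts to giving it a ThreadFactory whose threads carry an UncaughtExceptionHandler. A minimal, self-contained sketch (generic Java, not Ignite's actual failure-handling code; the OutOfMemoryError here is thrown artificially so the demo is safe to run):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicReference;

public class OomHandlerDemo {
    /** Pool whose worker threads report fatal errors to the handler instead of dying silently. */
    static ExecutorService pool(Thread.UncaughtExceptionHandler handler) {
        ThreadFactory factory = task -> {
            Thread t = new Thread(task, "client-worker");
            t.setUncaughtExceptionHandler(handler);
            return t;
        };
        return Executors.newFixedThreadPool(2, factory);
    }

    /** Runs one failing task and returns the error the handler observed. */
    static Throwable runDemo() throws InterruptedException {
        AtomicReference<Throwable> seen = new AtomicReference<>();
        CountDownLatch done = new CountDownLatch(1);
        ExecutorService exec = pool((thread, err) -> {
            seen.set(err);
            done.countDown();
        });
        // execute() (unlike submit()) lets the Error propagate to the thread's handler.
        exec.execute(() -> { throw new OutOfMemoryError("simulated"); });
        done.await();
        exec.shutdown();
        return seen.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("handled: " + runDemo().getMessage());
    }
}
```

Without such a handler, the worker thread dies and the client request it was serving hangs forever, which is the behavior described in the dev-list thread.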





[jira] [Updated] (IGNITE-9090) When client node make cache.QueryCursorImpl.getAll they have OOM and continue working

2019-10-09 Thread Maxim Muzafarov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Muzafarov updated IGNITE-9090:

Ignite Flags:   (was: Docs Required)

> When client node make cache.QueryCursorImpl.getAll they have OOM and continue 
> working
> -
>
> Key: IGNITE-9090
> URL: https://issues.apache.org/jira/browse/IGNITE-9090
> Project: Ignite
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.4
> Environment: 2 server node, 1 client, 1 cache with 15 kk size
>Reporter: ARomantsov
>Priority: Critical
> Fix For: 2.8
>
>
> {code:java}
> [12:21:22,390][SEVERE][query-#69][GridCacheIoManager] Failed to process 
> message [senderId=30cab4ec-1da7-4e9f-a262-bdfa4d466865, messageType=class 
> o.a.i.i.processors.cache.query.GridCacheQueryResponse]
> java.lang.OutOfMemoryError: GC overhead limit exceeded
> at java.lang.Long.valueOf(Long.java:840)
> at 
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObject0(OptimizedObjectInputStream.java:250)
> at 
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObjectOverride(OptimizedObjectInputStream.java:198)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:421)
> at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryResponseEntry.readExternal(GridCacheQueryResponseEntry.java:90)
> at 
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readExternalizable(OptimizedObjectInputStream.java:555)
> at 
> org.apache.ignite.internal.marshaller.optimized.OptimizedClassDescriptor.read(OptimizedClassDescriptor.java:917)
> at 
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObject0(OptimizedObjectInputStream.java:346)
> at 
> org.apache.ignite.internal.marshaller.optimized.OptimizedObjectInputStream.readObjectOverride(OptimizedObjectInputStream.java:198)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:421)
> at 
> org.apache.ignite.internal.marshaller.optimized.OptimizedMarshaller.unmarshal0(OptimizedMarshaller.java:227)
> at 
> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:94)
> at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(BinaryUtils.java:1777)
> at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1964)
> at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
> at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:310)
> at 
> org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:99)
> at 
> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:82)
> at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryResponse.unmarshalCollection0(GridCacheQueryResponse.java:189)
> at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryResponse.finishUnmarshal(GridCacheQueryResponse.java:162)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.unmarshall(GridCacheIoManager.java:1530)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:576)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$700(GridCacheIoManager.java:101)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager$OrderedMessageListener.onMessage(GridCacheIoManager.java:1613)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4100(GridIoManager.java:125)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager$GridCommunicationMessageSet.unwind(GridIoManager.java:2752)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.unwindMessageSet(GridIoManager.java:1516)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4400(GridIoManager.java:125)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager$10.run(GridIoManager.java:1485)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> [12:21:28,573][INFO][ignite-update-notifier-timer][GridUpdateNotifier] 

[jira] [Commented] (IGNITE-9090) When client node make cache.QueryCursorImpl.getAll they have OOM and continue working

2019-10-09 Thread Maxim Muzafarov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-9090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947609#comment-16947609
 ] 

Maxim Muzafarov commented on IGNITE-9090:
-

[~ARomantsov]

Hello, is this issue still actual?
Should we move it to the next release, since it is caused by h2_limitations?

> When client node make cache.QueryCursorImpl.getAll they have OOM and continue 
> working
> -
>
> Key: IGNITE-9090
> URL: https://issues.apache.org/jira/browse/IGNITE-9090
> Project: Ignite
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.4
> Environment: 2 server node, 1 client, 1 cache with 15 kk size
>Reporter: ARomantsov
>Priority: Critical
> Fix For: 2.8
>
>

[jira] [Commented] (IGNITE-9913) Prevent data updates blocking in case of backup BLT server node leave

2019-10-09 Thread Anton Vinogradov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947604#comment-16947604
 ] 

Anton Vinogradov commented on IGNITE-9913:
--

[~agoncharuk]
 Thanks for joining!

1. I'm not sure I see the issue.
 As far as I can see, {{aff.[primary,backup]Partitions}} uses {{aff.assignment}} 
(not {{idealAssignment}}) to calculate the list of nodes.
 Given that the baseline is enabled and was not changed, we should just check the 
latest assignment, which was calculated using part2node during the latest 
finished regular PME.
 Have I missed something? Could you please re-explain the situation?

2. Non-affected nodes finish PME immediately.
 So we will block new operations only on the affected nodes and only during the 
recovery.
 Benchmarks are in progress; I will provide the results once they are ready.
 But the main improvement here should be the ability to skip waiting for the 
completion of already started operations.

> Prevent data updates blocking in case of backup BLT server node leave
> -
>
> Key: IGNITE-9913
> URL: https://issues.apache.org/jira/browse/IGNITE-9913
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Reporter: Ivan Rakov
>Assignee: Anton Vinogradov
>Priority: Major
> Fix For: 2.8
>
> Attachments: 9913_yardstick.png, master_yardstick.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>





[jira] [Updated] (IGNITE-12253) [ML] Refactor current hierarchy of ML exceptions

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-12253:
-
Summary: [ML] Refactor current hierarchy of ML exceptions  (was: [ML] 
Refactor current hierarchy of ML exeptions)

> [ML] Refactor current hierarchy of ML exceptions
> 
>
> Key: IGNITE-12253
> URL: https://issues.apache.org/jira/browse/IGNITE-12253
> Project: Ignite
>  Issue Type: Improvement
>  Components: ml
>Affects Versions: 2.8
>Reporter: Alexey Zinoviev
>Assignee: Alexey Zinoviev
>Priority: Major
>  Labels: await
> Fix For: 2.8
>
>
> The current hierarchy of ML exceptions is too fragmented and has no common 
> root or rules for writing and logging.
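
As an illustration of what a common root buys, here is a minimal Java sketch (hypothetical class names, not the actual refactoring): every specific ML failure extends one root type, so callers get a single catch point and a single place to enforce message and logging conventions.

```java
public class MlExceptionDemo {
    /** Hypothetical common root: all ML errors become catchable as one type. */
    static class IgniteMlException extends RuntimeException {
        IgniteMlException(String msg) { super(msg); }
    }

    /** Specific failures extend the root instead of RuntimeException directly. */
    static class NonSquareMatrixException extends IgniteMlException {
        NonSquareMatrixException(int rows, int cols) {
            super("Expected square matrix, got " + rows + "x" + cols);
        }
    }

    public static void main(String[] args) {
        try {
            throw new NonSquareMatrixException(3, 4);
        } catch (IgniteMlException e) { // one catch point for every ML error
            System.out.println(e.getMessage());
        }
    }
}
```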





[jira] [Updated] (IGNITE-12253) [ML] Refactor current hierarchy of ML exeptions

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-12253:
-
Ignite Flags: Release Notes Required  (was: Docs Required,Release Notes 
Required)

> [ML] Refactor current hierarchy of ML exeptions
> ---
>
> Key: IGNITE-12253
> URL: https://issues.apache.org/jira/browse/IGNITE-12253
> Project: Ignite
>  Issue Type: Improvement
>  Components: ml
>Affects Versions: 2.8
>Reporter: Alexey Zinoviev
>Assignee: Alexey Zinoviev
>Priority: Major
>  Labels: await
> Fix For: 2.8
>
>
> The current hierarchy of ML exceptions is too fragmented and has no common 
> root or rules for writing and logging.





[jira] [Updated] (IGNITE-12236) RepositoryFactorySupport#getQueryLookupStrategy no longer overriden in IgniteRepositoryFactory

2019-10-09 Thread Thibaut (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thibaut updated IGNITE-12236:
-
Description: 
Hello,

org.apache.ignite.springdata20.repository.support.IgniteRepositoryFactory#getQueryLookupStrategy

does not override 

org.springframework.data.repository.core.support.RepositoryFactorySupport#getQueryLookupStrategy

since this commit

[https://github.com/spring-projects/spring-data-commons/commit/a6215fbe0f5c9a254cddacb12763737f2c286ad5]
 

this results in a thrown exception in 

org.springframework.data.repository.core.support.RepositoryFactorySupport.QueryExecutorMethodInterceptor#QueryExecutorMethodInterceptor

 

 This prevents using Ignite with any up-to-date version of Spring. Fixing this 
would require updating  that's the reason I'm putting 
this as an Improvement.

[dev-list 
discussion|http://apache-ignite-developers.2346864.n4.nabble.com/IGNITE-12236-RepositoryFactorySupport-getQueryLookupStrategy-no-longer-overriden-in-IgniteRepositoryy-tc43932.html]

  was:
Hello,

org.apache.ignite.springdata20.repository.support.IgniteRepositoryFactory#getQueryLookupStrategy

does not override 

org.springframework.data.repository.core.support.RepositoryFactorySupport#getQueryLookupStrategy

since this commit

[https://github.com/spring-projects/spring-data-commons/commit/a6215fbe0f5c9a254cddacb12763737f2c286ad5]
 

this results in a thrown exception in 

org.springframework.data.repository.core.support.RepositoryFactorySupport.QueryExecutorMethodInterceptor#QueryExecutorMethodInterceptor

 

 This prevents using Ignite with any up-to-date version of Spring. Fixing this 
would require updating  that's the reason I'm putting 
this as an Improvement.


> RepositoryFactorySupport#getQueryLookupStrategy no longer overriden in 
> IgniteRepositoryFactory
> --
>
> Key: IGNITE-12236
> URL: https://issues.apache.org/jira/browse/IGNITE-12236
> Project: Ignite
>  Issue Type: Improvement
>  Components: spring
>Affects Versions: 2.8, 2.7.6
>Reporter: Thibaut
>Priority: Major
>  Labels: newbie, patch
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hello,
> org.apache.ignite.springdata20.repository.support.IgniteRepositoryFactory#getQueryLookupStrategy
> does not override 
> org.springframework.data.repository.core.support.RepositoryFactorySupport#getQueryLookupStrategy
> since this commit
> [https://github.com/spring-projects/spring-data-commons/commit/a6215fbe0f5c9a254cddacb12763737f2c286ad5]
>  
> this results in a thrown exception in 
> org.springframework.data.repository.core.support.RepositoryFactorySupport.QueryExecutorMethodInterceptor#QueryExecutorMethodInterceptor
>  
>  This prevents using Ignite with any up-to-date version of Spring. Fixing 
> this would require updating  that's the reason I'm 
> putting this as an Improvement.
>  
> [dev-list 
> discussion|http://apache-ignite-developers.2346864.n4.nabble.com/IGNITE-12236-RepositoryFactorySupport-getQueryLookupStrategy-no-longer-overriden-in-IgniteRepositoryy-tc43932.html]
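
The failure mode described above is a classic one: when a superclass changes a method's signature (here, Spring Data changed the return type to Optional and the provider parameter type), a subclass method that used to override it silently becomes an unrelated overload, and polymorphic calls fall back to the superclass default. A self-contained Java sketch of the mechanism (illustrative names, not the actual Spring Data or Ignite classes):

```java
import java.util.Optional;

public class OverrideDriftDemo {
    /** Stand-in for the framework base class *after* its signature changed. */
    static class FactorySupport {
        Optional<String> getLookupStrategy() {
            return Optional.empty(); // default: "no strategy", which makes the framework fail later
        }
    }

    /** Stand-in for the integration class still written against the old signature. */
    static class IgniteFactory extends FactorySupport {
        // This no longer matches the base method's signature, so it is a dead
        // overload rather than an override (and @Override would not compile).
        String getLookupStrategy(boolean legacy) {
            return "ignite strategy";
        }
    }

    static Optional<String> resolve(FactorySupport f) {
        return f.getLookupStrategy(); // resolves to the base default, not the subclass logic
    }

    public static void main(String[] args) {
        System.out.println(resolve(new IgniteFactory()).isPresent()); // false
    }
}
```

This is why the framework never sees the Ignite-specific lookup strategy and throws from its default path instead.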





[jira] [Updated] (IGNITE-12236) RepositoryFactorySupport#getQueryLookupStrategy no longer overriden in IgniteRepositoryFactory

2019-10-09 Thread Thibaut (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thibaut updated IGNITE-12236:
-
Description: 
Hello,

org.apache.ignite.springdata20.repository.support.IgniteRepositoryFactory#getQueryLookupStrategy

does not override 

org.springframework.data.repository.core.support.RepositoryFactorySupport#getQueryLookupStrategy

since this commit

[https://github.com/spring-projects/spring-data-commons/commit/a6215fbe0f5c9a254cddacb12763737f2c286ad5]
 

this results in a thrown exception in 

org.springframework.data.repository.core.support.RepositoryFactorySupport.QueryExecutorMethodInterceptor#QueryExecutorMethodInterceptor

 

 This prevents using Ignite with any up-to-date version of Spring. Fixing this 
would require updating  that's the reason I'm putting 
this as an Improvement.

[dev-list 
discussion|http://apache-ignite-developers.2346864.n4.nabble.com/IGNITE-12236-RepositoryFactorySupport-getQueryLookupStrategy-no-longer-overriden-in-IgniteRepositoryy-tc43932.html]

[link title|http://example.com]

  was:
Hello,

org.apache.ignite.springdata20.repository.support.IgniteRepositoryFactory#getQueryLookupStrategy

does not override 

org.springframework.data.repository.core.support.RepositoryFactorySupport#getQueryLookupStrategy

since this commit

[https://github.com/spring-projects/spring-data-commons/commit/a6215fbe0f5c9a254cddacb12763737f2c286ad5]
 

this results in a thrown exception in 

org.springframework.data.repository.core.support.RepositoryFactorySupport.QueryExecutorMethodInterceptor#QueryExecutorMethodInterceptor

 

 This prevents using Ignite with any up-to-date version of Spring. Fixing this 
would require updating  that's the reason I'm putting 
this as an Improvement.

[dev-list 
discussion|http://apache-ignite-developers.2346864.n4.nabble.com/IGNITE-12236-RepositoryFactorySupport-getQueryLookupStrategy-no-longer-overriden-in-IgniteRepositoryy-tc43932.html]


> RepositoryFactorySupport#getQueryLookupStrategy no longer overriden in 
> IgniteRepositoryFactory
> --
>
> Key: IGNITE-12236
> URL: https://issues.apache.org/jira/browse/IGNITE-12236
> Project: Ignite
>  Issue Type: Improvement
>  Components: spring
>Affects Versions: 2.8, 2.7.6
>Reporter: Thibaut
>Priority: Major
>  Labels: newbie, patch
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hello,
> org.apache.ignite.springdata20.repository.support.IgniteRepositoryFactory#getQueryLookupStrategy
> does not override 
> org.springframework.data.repository.core.support.RepositoryFactorySupport#getQueryLookupStrategy
> since this commit
> [https://github.com/spring-projects/spring-data-commons/commit/a6215fbe0f5c9a254cddacb12763737f2c286ad5]
>  
> this results in a thrown exception in 
> org.springframework.data.repository.core.support.RepositoryFactorySupport.QueryExecutorMethodInterceptor#QueryExecutorMethodInterceptor
>  
>  This prevents using Ignite with any up-to-date version of Spring. Fixing 
> this would require updating  that's the reason I'm 
> putting this as an Improvement.
>  
> [dev-list 
> discussion|http://apache-ignite-developers.2346864.n4.nabble.com/IGNITE-12236-RepositoryFactorySupport-getQueryLookupStrategy-no-longer-overriden-in-IgniteRepositoryy-tc43932.html]
> [link title|http://example.com]





[jira] [Updated] (IGNITE-12236) RepositoryFactorySupport#getQueryLookupStrategy no longer overriden in IgniteRepositoryFactory

2019-10-09 Thread Thibaut (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thibaut updated IGNITE-12236:
-
Description: 
Hello,

org.apache.ignite.springdata20.repository.support.IgniteRepositoryFactory#getQueryLookupStrategy

does not override 

org.springframework.data.repository.core.support.RepositoryFactorySupport#getQueryLookupStrategy

since this commit

[https://github.com/spring-projects/spring-data-commons/commit/a6215fbe0f5c9a254cddacb12763737f2c286ad5]
 

this results in a thrown exception in 

org.springframework.data.repository.core.support.RepositoryFactorySupport.QueryExecutorMethodInterceptor#QueryExecutorMethodInterceptor

 

 This prevents using Ignite with any up-to-date version of Spring. Fixing this 
would require updating  that's the reason I'm putting 
this as an Improvement.

[dev-list 
discussion|http://apache-ignite-developers.2346864.n4.nabble.com/IGNITE-12236-RepositoryFactorySupport-getQueryLookupStrategy-no-longer-overriden-in-IgniteRepositoryy-tc43932.html]

  was:
Hello,

org.apache.ignite.springdata20.repository.support.IgniteRepositoryFactory#getQueryLookupStrategy

does not override 

org.springframework.data.repository.core.support.RepositoryFactorySupport#getQueryLookupStrategy

since this commit

[https://github.com/spring-projects/spring-data-commons/commit/a6215fbe0f5c9a254cddacb12763737f2c286ad5]
 

this results in a thrown exception in 

org.springframework.data.repository.core.support.RepositoryFactorySupport.QueryExecutorMethodInterceptor#QueryExecutorMethodInterceptor

 

 This prevents using Ignite with any up-to-date version of Spring. Fixing this 
would require updating  that's the reason I'm putting 
this as an Improvement.

[dev-list 
discussion|http://apache-ignite-developers.2346864.n4.nabble.com/IGNITE-12236-RepositoryFactorySupport-getQueryLookupStrategy-no-longer-overriden-in-IgniteRepositoryy-tc43932.html]



> RepositoryFactorySupport#getQueryLookupStrategy no longer overriden in 
> IgniteRepositoryFactory
> --
>
> Key: IGNITE-12236
> URL: https://issues.apache.org/jira/browse/IGNITE-12236
> Project: Ignite
>  Issue Type: Improvement
>  Components: spring
>Affects Versions: 2.8, 2.7.6
>Reporter: Thibaut
>Priority: Major
>  Labels: newbie, patch
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hello,
> org.apache.ignite.springdata20.repository.support.IgniteRepositoryFactory#getQueryLookupStrategy
> does not override 
> org.springframework.data.repository.core.support.RepositoryFactorySupport#getQueryLookupStrategy
> since this commit
> [https://github.com/spring-projects/spring-data-commons/commit/a6215fbe0f5c9a254cddacb12763737f2c286ad5]
>  
> this results in a thrown exception in 
> org.springframework.data.repository.core.support.RepositoryFactorySupport.QueryExecutorMethodInterceptor#QueryExecutorMethodInterceptor
>  
>  This prevents using Ignite with any up-to-date version of Spring. Fixing 
> this would require updating  that's the reason I'm 
> putting this as Improvement.
>  
> [dev-list 
> discussion|http://apache-ignite-developers.2346864.n4.nabble.com/IGNITE-12236-RepositoryFactorySupport-getQueryLookupStrategy-no-longer-overriden-in-IgniteRepositoryy-tc43932.html]
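The failure mode in this ticket — a subclass method that silently stops overriding after an upstream signature change — can be illustrated with a minimal, self-contained Java sketch. The types below are hypothetical stand-ins, not the actual Spring Data classes:

```java
// Sketch: when a superclass changes a parameter type, a subclass method that
// keeps the old signature becomes an overload instead of an override, so the
// framework never calls it. All class names here are hypothetical stand-ins.
class OverloadVsOverride {
    static class OldProvider {} // parameter type before the upstream change
    static class NewProvider {} // parameter type after the upstream change

    static class Factory {
        String getQueryLookupStrategy(NewProvider p) { return "default"; }
    }

    static class IgniteFactory extends Factory {
        // Compiles fine, but does NOT override the method above; an @Override
        // annotation here would have produced a compile-time error.
        String getQueryLookupStrategy(OldProvider p) { return "subclass"; }
    }

    static String resolve() {
        Factory f = new IgniteFactory();
        // Resolves to Factory's method despite the intended "override":
        return f.getQueryLookupStrategy(new NewProvider());
    }
}
```

An {{@Override}} annotation on the subclass method turns this silent behavior change into a compile-time error, which is why adding it is generally recommended.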



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-10747) [ML] Add NaN (missing) value support into Decision Tree training

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-10747:
-
Fix Version/s: (was: 2.8)
   2.9

> [ML] Add NaN (missing) value support into Decision Tree training
> 
>
> Key: IGNITE-10747
> URL: https://issues.apache.org/jira/browse/IGNITE-10747
> Project: Ignite
>  Issue Type: Improvement
>  Components: ml
>Affects Versions: 2.9
>Reporter: Anton Dmitriev
>Assignee: Alexey Platonov
>Priority: Major
>  Labels: await
> Fix For: 2.9
>
>
> As a result of the integration with XGBoost, our DecisionTree model now supports 
> NaN (missing) values. It would be great to support these values in our 
> DecisionTree classification/regression trainers as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-10747) [ML] Add NaN (missing) value support into Decision Tree training

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-10747:
-
Affects Version/s: (was: 2.8)
   2.9

> [ML] Add NaN (missing) value support into Decision Tree training
> 
>
> Key: IGNITE-10747
> URL: https://issues.apache.org/jira/browse/IGNITE-10747
> Project: Ignite
>  Issue Type: Improvement
>  Components: ml
>Affects Versions: 2.9
>Reporter: Anton Dmitriev
>Assignee: Alexey Platonov
>Priority: Major
>  Labels: await
> Fix For: 2.8
>
>
> As a result of the integration with XGBoost, our DecisionTree model now supports 
> NaN (missing) values. It would be great to support these values in our 
> DecisionTree classification/regression trainers as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-9913) Prevent data updates blocking in case of backup BLT server node leave

2019-10-09 Thread Alexey Goncharuk (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947584#comment-16947584
 ] 

Alexey Goncharuk commented on IGNITE-9913:
--

[~NSAmelchev], [~avinogradov], a few comments for the PR:
 * The {{localRecoveryNeeded}} check does not seem right - you check the list of 
partitions from the affinity assignment cache. There may be a case when a node 
still owns a partition but is not an assigned backup for it (this will happen 
right after a late affinity assignment change, when the affinity cache is 
switched to the ideal assignment but the node has not yet RENTed the 
partition). In this case, the partition will not be reported in the list of 
partitions and recovery will be skipped.
 * Do I understand correctly that *all* new transactions will still wait for 
this optimized PME to complete? If yes, what is the actual time boost that this 
change gives? Do you have any benchmark numbers? If not, how do you order 
transactions on a new primary node with the backup transactions on the same 
node that have not finished recovery yet?

> Prevent data updates blocking in case of backup BLT server node leave
> -
>
> Key: IGNITE-9913
> URL: https://issues.apache.org/jira/browse/IGNITE-9913
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Reporter: Ivan Rakov
>Assignee: Anton Vinogradov
>Priority: Major
> Fix For: 2.8
>
> Attachments: 9913_yardstick.png, master_yardstick.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Ignite cluster performs distributed partition map exchange when any server 
> node leaves or joins the topology.
> Distributed PME blocks all updates and may take a long time. If all 
> partitions are assigned according to the baseline topology and a server node 
> leaves, there's no actual need to perform distributed PME: every cluster node 
> is able to recalculate new affinity assignments and partition states locally. 
> If we implement such a lightweight PME and handle mapping and lock requests 
> on the new topology version correctly, updates won't be stopped (except updates 
> of partitions that lost their primary copy).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (IGNITE-11936) Avoid changing AffinityTopologyVersion on a server node join/left event from not baseline topology.

2019-10-09 Thread Alexey Goncharuk (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-11936:
--
Comment: was deleted

(was: [~NSAmelchev], [~avinogradov], a few comments for the PR:
 * The {{localRecoveryNeeded}} check does not seem right - you check the list of 
partitions from the affinity assignment cache. There may be a case when a node 
still owns a partition but is not an assigned backup for it (this will happen 
right after a late affinity assignment change, when the affinity cache is 
switched to the ideal assignment but the node has not yet RENTed the 
partition). In this case, the partition will not be reported in the list of 
partitions and recovery will be skipped.
 * Do I understand correctly that *all* new transactions will still wait for 
this optimized PME to complete? If yes, what is the actual time boost that this 
change gives? Do you have any benchmark numbers? If not, how do you order 
transactions on a new primary node with the backup transactions on the same 
node that have not finished recovery yet?)

> Avoid changing AffinityTopologyVersion on a server node join/left event from 
> not baseline topology.
> ---
>
> Key: IGNITE-11936
> URL: https://issues.apache.org/jira/browse/IGNITE-11936
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Amelchev Nikita
>Assignee: Amelchev Nikita
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, a client node join/leave event does not change AffinityTopologyVersion 
> (see IGNITE-9558). It shouldn't be changed on join/leave events of server nodes 
> that are not in the baseline topology either.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-11936) Avoid changing AffinityTopologyVersion on a server node join/left event from not baseline topology.

2019-10-09 Thread Alexey Goncharuk (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-11936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947570#comment-16947570
 ] 

Alexey Goncharuk commented on IGNITE-11936:
---

[~NSAmelchev], [~avinogradov], a few comments for the PR:
 * The {{localRecoveryNeeded}} check does not seem right - you check the list of 
partitions from the affinity assignment cache. There may be a case when a node 
still owns a partition but is not an assigned backup for it (this will happen 
right after a late affinity assignment change, when the affinity cache is 
switched to the ideal assignment but the node has not yet RENTed the 
partition). In this case, the partition will not be reported in the list of 
partitions and recovery will be skipped.
 * Do I understand correctly that *all* new transactions will still wait for 
this optimized PME to complete? If yes, what is the actual time boost that this 
change gives? Do you have any benchmark numbers? If not, how do you order 
transactions on a new primary node with the backup transactions on the same 
node that have not finished recovery yet?

> Avoid changing AffinityTopologyVersion on a server node join/left event from 
> not baseline topology.
> ---
>
> Key: IGNITE-11936
> URL: https://issues.apache.org/jira/browse/IGNITE-11936
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Amelchev Nikita
>Assignee: Amelchev Nikita
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, a client node join/leave event does not change AffinityTopologyVersion 
> (see IGNITE-9558). It shouldn't be changed on join/leave events of server nodes 
> that are not in the baseline topology either.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12274) [ML] DecisionTree works incorrectly if maxDeep > amount of features

2019-10-09 Thread Alexey Zinoviev (Jira)
Alexey Zinoviev created IGNITE-12274:


 Summary: [ML] DecisionTree works incorrectly if maxDeep > amount 
of features
 Key: IGNITE-12274
 URL: https://issues.apache.org/jira/browse/IGNITE-12274
 Project: Ignite
  Issue Type: Bug
  Components: ml
Affects Versions: 2.9
Reporter: Alexey Zinoviev
Assignee: Alexey Zinoviev
 Fix For: 2.9


We have a problem in two places:

Null nodes can be created in the *MeanDecisionTreeLeafBuilder.createLeafNode* 
method, on the line *return aa != null ? new DecisionTreeLeafNode(aa[0]) : null;*

This situation probably arises when the number of features is smaller than 
maxDeep.
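A minimal sketch of this failure mode and one possible guard, using hypothetical stand-in types rather than the actual Ignite ML API:

```java
import java.util.Arrays;

// Sketch of the null-leaf problem described above: a leaf builder that returns
// null when no samples reached the node, and a guarded variant that falls back
// to the parent node's statistic instead. Types and names are illustrative only.
class LeafSketch {
    /** Mimics the problematic pattern: returns null for an empty node. */
    static Double createLeaf(double[] labelsInNode) {
        return labelsInNode.length > 0 ? mean(labelsInNode) : null; // null leaf -> NPE later
    }

    /** Guarded variant: an empty node falls back to the parent's mean. */
    static double createLeafSafe(double[] labelsInNode, double parentMean) {
        return Arrays.stream(labelsInNode).average().orElse(parentMean);
    }

    static double mean(double[] xs) {
        double s = 0;
        for (double x : xs) s += x;
        return s / xs.length;
    }
}
```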



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12271) Persistence can't read pages from disk on Big Endian architectures

2019-10-09 Thread Alexey Goncharuk (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947541#comment-16947541
 ] 

Alexey Goncharuk commented on IGNITE-12271:
---

[~ilyak], I do not think the suggested change is correct. Instead of using the 
native byte order all over the code, we need a single place where the byte 
order is specified (via configuration or a system property) and use this value 
in the places you changed.

The reason is that it should be possible to create persistence files on one 
architecture, copy them to another architecture, and still start a node 
successfully. This works only if the same byte order is used for both runs.
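The suggestion can be sketched as follows; the system property name below is illustrative only, not an actual Ignite property:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch of "a single place where byte order is specified": a fixed on-disk
// byte order chosen once (here from a hypothetical system property), instead
// of ByteOrder.nativeOrder() scattered through the code. With a fixed order,
// files written on a little-endian host read correctly on a big-endian one.
class PageByteOrder {
    // Single point of truth for the persistence byte order:
    static final ByteOrder PAGE_ORDER =
        "BIG_ENDIAN".equals(System.getProperty("example.page.byteOrder")) // hypothetical property
            ? ByteOrder.BIG_ENDIAN : ByteOrder.LITTLE_ENDIAN;

    static byte[] writePageId(long pageId) {
        return ByteBuffer.allocate(Long.BYTES).order(PAGE_ORDER).putLong(pageId).array();
    }

    static long readPageId(byte[] page) {
        return ByteBuffer.wrap(page).order(PAGE_ORDER).getLong();
    }
}
```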

> Persistence can't read pages from disk on Big Endian architectures
> --
>
> Key: IGNITE-12271
> URL: https://issues.apache.org/jira/browse/IGNITE-12271
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Reporter: Ilya Kasnacheev
>Assignee: Ilya Kasnacheev
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> So we are trying to start master on Big Endian, and we get the following 
> exceptions:
> {code}
> Runtime failure on row: Row@5bf1ee15[ snip, ver: GridCacheVersion 
> [topVer=180723326, order=1569259166164, nodeOrder=1] ][ 1307496, 32211, 3, 0 
> ]" [5-197]
> at 
> org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
> ... 41 more
> Caused by: class 
> org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
>  Runtime failure on row: Row@5bf1ee15[ snip], ver: GridCacheVersion 
> [topVer=180723326, order=1569259166164, nodeOrder=1] ][ 1307496, 32211, 3, 0 ]
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doPut(BPlusTree.java:2320)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putx(BPlusTree.java:2267)
> at 
> org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.putx(H2TreeIndex.java:323)
> ... 38 more
> Caused by: java.lang.IllegalStateException: Failed to get page IO instance 
> (page content is corrupted)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forVersion(IOVersions.java:84)
> at 
> org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forPage(IOVersions.java:96)
> at 
> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:153)
> at 
> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:107)
> at 
> org.apache.ignite.internal.processors.query.h2.database.H2RowFactory.getRow(H2RowFactory.java:61)
> at 
> org.apache.ignite.internal.processors.query.h2.database.H2Tree.createRowFromLink(H2Tree.java:221)
> at 
> org.apache.ignite.internal.processors.query.h2.database.io.AbstractH2ExtrasLeafIO.getLookupRow(AbstractH2ExtrasLeafIO.java:153)
> at 
> org.apache.ignite.internal.processors.query.h2.database.io.AbstractH2ExtrasLeafIO.getLookupRow(AbstractH2ExtrasLeafIO.java:35)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12236) RepositoryFactorySupport#getQueryLookupStrategy no longer overriden in IgniteRepositoryFactory

2019-10-09 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947539#comment-16947539
 ] 

Ignite TC Bot commented on IGNITE-12236:


{panel:title=Branch: [pull/6956/head] Base: [master] : Possible Blockers 
(4)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}ZooKeeper (Discovery) 1{color} [[tests 
2|https://ci.ignite.apache.org/viewLog.html?buildId=4677010]]
* ZookeeperDiscoverySpiTestSuite1: 
ZookeeperDiscoveryClientDisconnectTest.testStartNoServers_FailOnTimeout - Test 
has low fail rate in base branch 0,0% and is not flaky
* ZookeeperDiscoverySpiTestSuite1: 
ZookeeperDiscoveryClientDisconnectTest.testServersLeft_FailOnTimeout - Test has 
low fail rate in base branch 0,0% and is not flaky

{color:#d04437}Platform .NET (Core Linux){color} [[tests 0 Exit Code , 
TC_SERVICE_MESSAGE |https://ci.ignite.apache.org/viewLog.html?buildId=4677058]]

{color:#d04437}Cache 5{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=4677040]]
* IgniteCacheTestSuite5: 
IgniteCachePartitionLossPolicySelfTest.testReadWriteSafeAfterKillTwoNodesWithDelayWithPersistence[TRANSACTIONAL]
 - Test has low fail rate in base branch 0,0% and is not flaky

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4677087&buildTypeId=IgniteTests24Java8_RunAll]

> RepositoryFactorySupport#getQueryLookupStrategy no longer overriden in 
> IgniteRepositoryFactory
> --
>
> Key: IGNITE-12236
> URL: https://issues.apache.org/jira/browse/IGNITE-12236
> Project: Ignite
>  Issue Type: Improvement
>  Components: spring
>Affects Versions: 2.8, 2.7.6
>Reporter: Thibaut
>Priority: Major
>  Labels: newbie, patch
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hello,
> org.apache.ignite.springdata20.repository.support.IgniteRepositoryFactory#getQueryLookupStrategy
> does not override 
> org.springframework.data.repository.core.support.RepositoryFactorySupport#getQueryLookupStrategy
> since this commit
> [https://github.com/spring-projects/spring-data-commons/commit/a6215fbe0f5c9a254cddacb12763737f2c286ad5]
>  
> this results in a thrown exception in 
> org.springframework.data.repository.core.support.RepositoryFactorySupport.QueryExecutorMethodInterceptor#QueryExecutorMethodInterceptor
>  
>  This prevents using Ignite with any up-to-date version of Spring. Fixing 
> this would require updating  that's the reason I'm 
> putting this as Improvement.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-7820) Investigate and fix perfromance drop of WAL for FSYNC mode

2019-10-09 Thread Maxim Muzafarov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Muzafarov updated IGNITE-7820:

Fix Version/s: (was: 2.8)
   2.9

> Investigate and fix perfromance drop of WAL for FSYNC mode
> --
>
> Key: IGNITE-7820
> URL: https://issues.apache.org/jira/browse/IGNITE-7820
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.4
>Reporter: Andrey N. Gura
>Assignee: Andrey N. Gura
>Priority: Critical
> Fix For: 2.9
>
>
> The WAL performance drop was introduced by the 
> https://issues.apache.org/jira/browse/IGNITE-6339 fix. In order to provide 
> better performance for the {{FSYNC}} WAL mode, the 
> {{FsyncModeFileWriteAheadLogManager}} implementation was added as a result of 
> fixing issue https://issues.apache.org/jira/browse/IGNITE-7594.
> *What we know about this performance drop:*
> * It affects {{IgnitePutAllBenchmark}} and {{IgnitePutAllTxBenchmark}} and 
> measurements show 10-15% drop and ~50% drop accordingly.
> * It is not reproducible on all hardware configurations: on some 
> configurations we see a performance improvement instead of a drop.
> * It is reproducible for [Many clients --> One server] topology.
> * If {{IGNITE_WAL_MMAP == false}} then we have better performance.
> * If {{fsyncDelay == 0}} then we have better performance.
> *What were tried during initial investigation:*
> * Replacing {{LockSupport.park/unpark}} with spinning gives an improvement of 
> about 2%.
> * Using {{FileWriteHandle.fsync(null)}} (unconditional flush) instead of 
> {{FileWriteHandle.fsync(position)}} (conditional flush) doesn't affect 
> benchmarks.
> *What should we do:*
> Investigate the problem and provide fix or recommendation for system tuning.
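The park/unpark-vs-spin experiment mentioned in the investigation notes can be sketched as a spin-then-park wait; the spin limit and names below are illustrative, not the actual Ignite implementation:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

// Sketch: spin briefly before falling back to parking. Spinning trades CPU
// for lower wake-up latency, which is one plausible explanation for the
// observed 2% difference between park/unpark and pure spinning.
class SpinThenPark {
    static final int SPIN_LIMIT = 1_000; // illustrative threshold

    /** Waits until the flag becomes true, spinning first, then parking. */
    static void await(AtomicBoolean flag) {
        int spins = 0;
        while (!flag.get()) {
            if (spins++ < SPIN_LIMIT)
                Thread.onSpinWait();          // cheap busy-wait hint (JDK 9+)
            else
                LockSupport.parkNanos(1_000); // fall back to parking
        }
    }
}
```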



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (IGNITE-12236) RepositoryFactorySupport#getQueryLookupStrategy no longer overriden in IgniteRepositoryFactory

2019-10-09 Thread Thibaut (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thibaut updated IGNITE-12236:
-
Comment: was deleted

(was: IGNITE-12129 updates the library)

> RepositoryFactorySupport#getQueryLookupStrategy no longer overriden in 
> IgniteRepositoryFactory
> --
>
> Key: IGNITE-12236
> URL: https://issues.apache.org/jira/browse/IGNITE-12236
> Project: Ignite
>  Issue Type: Improvement
>  Components: spring
>Affects Versions: 2.8, 2.7.6
>Reporter: Thibaut
>Priority: Major
>  Labels: newbie, patch
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hello,
> org.apache.ignite.springdata20.repository.support.IgniteRepositoryFactory#getQueryLookupStrategy
> does not override 
> org.springframework.data.repository.core.support.RepositoryFactorySupport#getQueryLookupStrategy
> since this commit
> [https://github.com/spring-projects/spring-data-commons/commit/a6215fbe0f5c9a254cddacb12763737f2c286ad5]
>  
> this results in a thrown exception in 
> org.springframework.data.repository.core.support.RepositoryFactorySupport.QueryExecutorMethodInterceptor#QueryExecutorMethodInterceptor
>  
>  This prevents using Ignite with any up-to-date version of Spring. Fixing 
> this would require updating  that's the reason I'm 
> putting this as Improvement.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12236) RepositoryFactorySupport#getQueryLookupStrategy no longer overriden in IgniteRepositoryFactory

2019-10-09 Thread Thibaut (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947513#comment-16947513
 ] 

Thibaut commented on IGNITE-12236:
--

IGNITE-12129 updates the library

> RepositoryFactorySupport#getQueryLookupStrategy no longer overriden in 
> IgniteRepositoryFactory
> --
>
> Key: IGNITE-12236
> URL: https://issues.apache.org/jira/browse/IGNITE-12236
> Project: Ignite
>  Issue Type: Improvement
>  Components: spring
>Affects Versions: 2.8, 2.7.6
>Reporter: Thibaut
>Priority: Major
>  Labels: newbie, patch
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hello,
> org.apache.ignite.springdata20.repository.support.IgniteRepositoryFactory#getQueryLookupStrategy
> does not override 
> org.springframework.data.repository.core.support.RepositoryFactorySupport#getQueryLookupStrategy
> since this commit
> [https://github.com/spring-projects/spring-data-commons/commit/a6215fbe0f5c9a254cddacb12763737f2c286ad5]
>  
> this results in a thrown exception in 
> org.springframework.data.repository.core.support.RepositoryFactorySupport.QueryExecutorMethodInterceptor#QueryExecutorMethodInterceptor
>  
>  This prevents using Ignite with any up-to-date version of Spring. Fixing 
> this would require updating  that's the reason I'm 
> putting this as Improvement.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12079) [ML][Umbrella] Add advanced preprocessing techniques

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-12079:
-
Fix Version/s: (was: 2.8)
   2.9

> [ML][Umbrella] Add advanced preprocessing techniques
> 
>
> Key: IGNITE-12079
> URL: https://issues.apache.org/jira/browse/IGNITE-12079
> Project: Ignite
>  Issue Type: New Feature
>  Components: ml
>Affects Versions: 2.9
>Reporter: Alexey Zinoviev
>Assignee: Alexey Zinoviev
>Priority: Major
> Fix For: 2.9
>
>
> *Main goal:*
> To reduce the gap between Apache Spark and Apache Ignite in preprocessing 
> operations. Reducing this gap would help with loading Spark ML 
> pipelines into Ignite ML.
>  
> Next steps:
>  # Add Frequency Encoder
>  # Add two Imputing Strategies (MIN, MAX, COUNT, MOST_FREQUENT, 
> LEAST_FREQUENT)
>  # Add RobustScaler (will be added in Spark 3.0)
>  # Add CountVectorizer
>  # Add FeatureHasher
>  # Add QuantileDiscretizer
>  # Add Locality Sensitive Hashing (LSH)
>  # Add LabelEncoder
>  # Add RevertStringIndexing
>  # Add multi-column preprocessor
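As an illustration of item 1 (Frequency Encoder), here is a minimal stand-alone Java sketch; it is not the Ignite ML preprocessor API, just the underlying idea:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of frequency encoding: replace each categorical value with its
// relative frequency in the fitted column. Unseen categories map to 0.0.
class FrequencyEncoder {
    private final Map<String, Double> freq = new HashMap<>();

    /** Learns value frequencies from one categorical column. */
    FrequencyEncoder fit(List<String> column) {
        Map<String, Integer> counts = new HashMap<>();
        for (String v : column)
            counts.merge(v, 1, Integer::sum);
        counts.forEach((v, c) -> freq.put(v, (double) c / column.size()));
        return this;
    }

    /** Encodes a single value as its learned relative frequency. */
    double transform(String value) {
        return freq.getOrDefault(value, 0.0);
    }
}
```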



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12079) [ML][Umbrella] Add advanced preprocessing techniques

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-12079:
-
Affects Version/s: (was: 2.8)
   2.9

> [ML][Umbrella] Add advanced preprocessing techniques
> 
>
> Key: IGNITE-12079
> URL: https://issues.apache.org/jira/browse/IGNITE-12079
> Project: Ignite
>  Issue Type: New Feature
>  Components: ml
>Affects Versions: 2.9
>Reporter: Alexey Zinoviev
>Assignee: Alexey Zinoviev
>Priority: Major
> Fix For: 2.8
>
>
> *Main goal:*
> To reduce the gap between Apache Spark and Apache Ignite in preprocessing 
> operations. Reducing this gap would help with loading Spark ML 
> pipelines into Ignite ML.
>  
> Next steps:
>  # Add Frequency Encoder
>  # Add two Imputing Strategies (MIN, MAX, COUNT, MOST_FREQUENT, 
> LEAST_FREQUENT)
>  # Add RobustScaler (will be added in Spark 3.0)
>  # Add CountVectorizer
>  # Add FeatureHasher
>  # Add QuantileDiscretizer
>  # Add Locality Sensitive Hashing (LSH)
>  # Add LabelEncoder
>  # Add RevertStringIndexing
>  # Add multi-column preprocessor



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12079) [ML][Umbrella] Add advanced preprocessing techniques

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-12079:
-
Labels:   (was: await)

> [ML][Umbrella] Add advanced preprocessing techniques
> 
>
> Key: IGNITE-12079
> URL: https://issues.apache.org/jira/browse/IGNITE-12079
> Project: Ignite
>  Issue Type: New Feature
>  Components: ml
>Affects Versions: 2.8
>Reporter: Alexey Zinoviev
>Assignee: Alexey Zinoviev
>Priority: Major
> Fix For: 2.8
>
>
> *Main goal:*
> To reduce the gap between Apache Spark and Apache Ignite in preprocessing 
> operations. Reducing this gap would help with loading Spark ML 
> pipelines into Ignite ML.
>  
> Next steps:
>  # Add Frequency Encoder
>  # Add two Imputing Strategies (MIN, MAX, COUNT, MOST_FREQUENT, 
> LEAST_FREQUENT)
>  # Add RobustScaler (will be added in Spark 3.0)
>  # Add CountVectorizer
>  # Add FeatureHasher
>  # Add QuantileDiscretizer
>  # Add Locality Sensitive Hashing (LSH)
>  # Add LabelEncoder
>  # Add RevertStringIndexing
>  # Add multi-column preprocessor



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (IGNITE-4980) .NET: NuGet packages do not work with PackageReference in VS2017

2019-10-09 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-4980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn resolved IGNITE-4980.

Fix Version/s: 2.8
   Resolution: Fixed

Fixed as part of IGNITE-10554

> .NET: NuGet packages do not work with PackageReference in VS2017
> 
>
> Key: IGNITE-4980
> URL: https://issues.apache.org/jira/browse/IGNITE-4980
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.6
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET
> Fix For: 2.8
>
>
> VS2017 & NuGet 4.0 come with a new way of referencing packages: instead of 
> {{packages.config}}, there is a {{PackageReference}} section in {{csproj}} 
> file: 
> http://blog.nuget.org/20170316/NuGet-now-fully-integrated-into-MSBuild.html
> This feature does not support the {{install.ps1}} script, which we use to insert 
> a build event for copying JARs to the output directory.
> This is not a blocker, since the build event can be set up manually.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-10554) .NET: Jars are not copied to target dir under .NET Core

2019-10-09 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-10554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947505#comment-16947505
 ] 

Pavel Tupitsyn commented on IGNITE-10554:
-

Merged to master: 20b3fb8450196215b0ccda38ac8dee7963c14fa3

> .NET: Jars are not copied to target dir under .NET Core
> ---
>
> Key: IGNITE-10554
> URL: https://issues.apache.org/jira/browse/IGNITE-10554
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Affects Versions: 2.4
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Critical
>  Labels: .NET
> Fix For: 2.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We use a PowerShell script to update the post-build event in the target project 
> and copy JAR files to the target directory during build.
> However, this no longer works with .NET Core.
> The nuspec file should be updated to the new format; see the example from 
> https://github.com/NuGet/Samples/blob/master/ContentFilesExample/authoring/ContentFilesExample.nuspec:
> {code}
> <?xml version="1.0"?>
> <package>
>   <metadata>
>     <id>ContentFilesExample</id>
>     <version>1.0.0</version>
>     <authors>nuget</authors>
>     <owners>nuget</owners>
>     <requireLicenseAcceptance>false</requireLicenseAcceptance>
>     <description>A content v2 example package.</description>
>     <tags>contentv2 contentFiles</tags>
>     <contentFiles>
>       <files include="..." copyToOutput="true" />
>     </contentFiles>
>   </metadata>
> </package>
> {code}
> *UPDATE: this breaks NuGet package usage completely under .NET Core 3.0*
> [NuGet behavior has 
> changed|https://docs.microsoft.com/en-us/dotnet/core/whats-new/dotnet-core-3-0#build-copies-dependencies]
>  in .NET Core 3.0:
> ??The dotnet build command now copies NuGet dependencies for your application 
> from the NuGet cache to the build output folder??
> In .NET Core 2.x, dependencies are used directly from the NuGet cache, so JAR 
> files are resolved.
> In 3.0 this no longer works, so we should find a way to copy JAR files to 
> the output folder.
> Test cases:
> * .NET 4.x
> * .NET Core 2.x, 3.x Windows & Linux
> * LINQPad
> * Binary zip distribution (examples, .NET Core examples)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12033) .NET: Callbacks from striped pool due to async/await may hang cluster

2019-10-09 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947503#comment-16947503
 ] 

Pavel Tupitsyn commented on IGNITE-12033:
-

[~ilyak] yep, see the discussion above, and on the dev list.

> .NET: Callbacks from striped pool due to async/await may hang cluster
> -
>
> Key: IGNITE-12033
> URL: https://issues.apache.org/jira/browse/IGNITE-12033
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, platforms
>Affects Versions: 2.7.5
>Reporter: Ilya Kasnacheev
>Assignee: Pavel Tupitsyn
>Priority: Critical
>  Labels: .net
> Fix For: 2.8
>
>
> http://apache-ignite-users.70518.x6.nabble.com/Replace-or-Put-after-PutAsync-causes-Ignite-to-hang-td27871.html#a28051
> There's a reproducer project. Long story short, .NET can invoke cache 
> operations with future callbacks, which will be invoked from the striped pool. 
> If such callbacks use cache operations, those may be scheduled 
> to the same stripe and cause a deadlock.
> The code is very simple:
> {code}
> Console.WriteLine("PutAsync");
> await cache.PutAsync(1, "Test");
> Console.WriteLine("Replace");
> cache.Replace(1, "Testing"); // Hangs here
> Console.WriteLine("Wait");
> await Task.Delay(Timeout.Infinite); 
> {code}
> async/await should absolutely not allow any client code to be run from 
> stripes.
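A Java analogy of the described deadlock (the ticket itself concerns the .NET client): a callback running on a single-thread "stripe" blocks on another operation scheduled to the same stripe, which can never start. The demo below uses a timeout so it terminates instead of hanging:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch: a single-thread executor stands in for one stripe of Ignite's
// striped pool. The outer task occupies the stripe and then blocks on an
// inner task queued to the same stripe - the inner task can never run.
class StripeDeadlockDemo {
    static boolean wouldDeadlock() {
        ExecutorService stripe = Executors.newSingleThreadExecutor();
        try {
            Future<Boolean> outer = stripe.submit(() -> {
                // Analogous to calling Replace() from a PutAsync callback:
                Future<String> inner = stripe.submit(() -> "done");
                try {
                    inner.get(200, TimeUnit.MILLISECONDS); // would block forever
                    return false;
                } catch (TimeoutException e) {
                    return true; // the stripe is occupied by us -> deadlock
                }
            });
            return outer.get();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            stripe.shutdownNow();
        }
    }
}
```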



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-10554) .NET: Jars are not copied to target dir under .NET Core

2019-10-09 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-10554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947488#comment-16947488
 ] 

Igor Sapego commented on IGNITE-10554:
--

Looks good to me.

> .NET: Jars are not copied to target dir under .NET Core
> ---
>
> Key: IGNITE-10554
> URL: https://issues.apache.org/jira/browse/IGNITE-10554
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Affects Versions: 2.4
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Critical
>  Labels: .NET
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We use a PowerShell script to update the post-build event in the target project 
> and copy JAR files to the target directory during build.
> However, this no longer works with .NET Core.
> The nuspec file should be updated to the new format; see the example from 
> https://github.com/NuGet/Samples/blob/master/ContentFilesExample/authoring/ContentFilesExample.nuspec:
> {code}
> <?xml version="1.0"?>
> <package>
>   <metadata>
>     <id>ContentFilesExample</id>
>     <version>1.0.0</version>
>     <authors>nuget</authors>
>     <owners>nuget</owners>
>     <requireLicenseAcceptance>false</requireLicenseAcceptance>
>     <description>A content v2 example package.</description>
>     <tags>contentv2 contentFiles</tags>
>     <contentFiles>
>       <files include="..." copyToOutput="true" />
>     </contentFiles>
>   </metadata>
> </package>
> {code}
> *UPDATE: this breaks NuGet package usage completely under .NET Core 3.0*
> [NuGet behavior has 
> changed|https://docs.microsoft.com/en-us/dotnet/core/whats-new/dotnet-core-3-0#build-copies-dependencies]
>  in .NET Core 3.0:
> ??The dotnet build command now copies NuGet dependencies for your application 
> from the NuGet cache to the build output folder??
> In .NET Core 2.x, dependencies are used directly from the NuGet cache, so JAR 
> files are resolved.
> In 3.0 this no longer works, so we should find a way to copy JAR files to 
> the output folder.
> Test cases:
> * .NET 4.x
> * .NET Core 2.x, 3.x Windows & Linux
> * LINQPad
> * Binary zip distribution (examples, .NET Core examples)



--


[jira] [Commented] (IGNITE-8856) Incorrect behavior of BinaryTypeConfiguration in case of using a wildcard for type names

2019-10-09 Thread Vyacheslav Koptilin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-8856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947482#comment-16947482
 ] 

Vyacheslav Koptilin commented on IGNITE-8856:
-

Hi [~mmuzaf],

I think that this improvement is still relevant. If you want to participate and review 
it, you're welcome. Anyway, we can just move this ticket to the next release.

> Incorrect behavior of BinaryTypeConfiguration in case of using a wildcard for 
> type names
> 
>
> Key: IGNITE-8856
> URL: https://issues.apache.org/jira/browse/IGNITE-8856
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.5
>Reporter: Vyacheslav Koptilin
>Assignee: Vyacheslav Koptilin
>Priority: Major
> Fix For: 2.8
>
>
> Let's consider the following BinaryConfiguration:
> {code:xml}
> <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>     ...
>     <property name="binaryConfiguration">
>         <bean class="org.apache.ignite.configuration.BinaryConfiguration">
>             <property name="typeConfigurations">
>                 <list>
>                     <bean class="org.apache.ignite.binary.BinaryTypeConfiguration">
>                         <property name="typeName" value="org.apache.ignite.examples.*"/>
>                         <property name="nameMapper">
>                             <bean class="org.apache.ignite.binary.BinaryBasicNameMapper">
>                                 <property name="simpleName" value="false"/>
>                             </bean>
>                         </property>
>                     </bean>
>                 </list>
>             </property>
>         </bean>
>     </property>
> </bean>
> {code}
> My intention is to use a custom BinaryBasicNameMapper for all classes in the 
> specified package and its sub-packages,
> but the BinaryContext implementation matches only classes that reside in the 
> "org.apache.ignite.examples" package.
> Classes from sub-packages are not matched, and therefore do not use the 
> specified BinaryBasicNameMapper.
> *UPD:*
> Well, it seems that the current behavior is not an error and was implemented 
> in this way intentionally.
> On the other hand, I think it would be good to have the ability to specify 
> sub-packages as well.
> For example, it can be achieved through the '**' pattern as follows:
> ||example||definition||priority||
> |org.apache.ignite.examples.TestClass|matches exactly the one class|the highest priority|
> |org.apache.ignite.examples.*|matches all classes in the given package|medium|
> |org.apache.ignite.examples.**|matches all classes in the given package and its sub-packages|low|
> The proposed enhancement does not break the current behavior and allows 
> specifying a filter for sub-packages.
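A minimal, hypothetical sketch of the proposed matching priorities (this is not the actual BinaryContext code; the class and method names are invented for illustration):

```java
// Hypothetical illustration of the proposed wildcard priorities:
// exact name > "pkg.*" (one package) > "pkg.**" (package and sub-packages).
public class WildcardMatchSketch {
    /** Returns match priority: 3 exact, 2 for "pkg.*", 1 for "pkg.**", 0 no match. */
    public static int priority(String pattern, String clsName) {
        if (pattern.endsWith(".**")) {
            // Keep the trailing '.' so "org.apache.ignite.examplesFoo" does not match.
            String pkg = pattern.substring(0, pattern.length() - 2);
            return clsName.startsWith(pkg) ? 1 : 0;
        }
        if (pattern.endsWith(".*")) {
            String pkg = pattern.substring(0, pattern.length() - 1);
            // Match only if the remainder has no further '.', i.e. the same package.
            if (clsName.startsWith(pkg) && clsName.indexOf('.', pkg.length()) < 0)
                return 2;
            return 0;
        }
        return pattern.equals(clsName) ? 3 : 0;
    }

    public static void main(String[] args) {
        String cls = "org.apache.ignite.examples.sub.TestClass";
        System.out.println(priority("org.apache.ignite.examples.*", cls));  // sub-package: no match
        System.out.println(priority("org.apache.ignite.examples.**", cls)); // matched by '**'
    }
}
```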



--


[jira] [Commented] (IGNITE-12124) Stopping the cache does not wait for expiration process, which may be started and may lead to errors

2019-10-09 Thread Vyacheslav Koptilin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947477#comment-16947477
 ] 

Vyacheslav Koptilin commented on IGNITE-12124:
--

Hello [~mmuzaf],

Yes, this issue is waiting for a reviewer. Do you want to help with that? I 
would really appreciate it :)

> Stopping the cache does not wait for expiration process, which may be started 
> and may lead to errors
> 
>
> Key: IGNITE-12124
> URL: https://issues.apache.org/jira/browse/IGNITE-12124
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: Vyacheslav Koptilin
>Assignee: Vyacheslav Koptilin
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Stopping a cache with configured TTL may lead to errors. For instance,
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheContext.onDeferredDelete(GridCacheContext.java:1702)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.onTtlExpired(GridCacheMapEntry.java:4040)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheTtlManager$1.applyx(GridCacheTtlManager.java:75)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheTtlManager$1.applyx(GridCacheTtlManager.java:66)
>   at 
> org.apache.ignite.internal.util.lang.IgniteInClosure2X.apply(IgniteInClosure2X.java:37)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.purgeExpiredInternal(GridCacheOffheapManager.java:2501)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.purgeExpired(GridCacheOffheapManager.java:2427)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.expire(GridCacheOffheapManager.java:989)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:233)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheSharedTtlCleanupManager$CleanupWorker.body(GridCacheSharedTtlCleanupManager.java:150)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:119)
>   at java.lang.Thread.run(Thread.java:748){noformat}
> The obvious reason for this {{NullPointerException}} is that unregistering of 
> {{GridCacheTtlManager}} (see {{GridCacheSharedTtlCleanupManager#unregister}}) 
> does not wait for expiration to finish (in that particular case, 
> {{GridCacheContext}} is already cleaned up).
>  
> So, unregistering of {{GridCacheTtlManager}}, caused by cache stopping, must 
> wait for expiration if it is running for the cache being stopped. On the other 
> hand, it does not seem correct to wait for expiration under the 
> {{checkpointReadLock}}; see 
> {{GridCacheProcessor#processCacheStopRequestOnExchangeDone}}:
> {code:java}
> private void processCacheStopRequestOnExchangeDone(ExchangeActions 
> exchActions) {
> ...
> doInParallel(
> parallelismLvl,
> sharedCtx.kernalContext().getSystemExecutorService(),
> cachesToStop.entrySet(),
> cachesToStopByGrp -> {
> ...
> for (ExchangeActions.CacheActionData action : 
> cachesToStopByGrp.getValue()) {
> ...
> sharedCtx.database().checkpointReadLock();
> try {
> prepareCacheStop(...); <---unregistering of 
> GridCacheTtlManager is performed here
> }
> finally {
> sharedCtx.database().checkpointReadUnlock();
> }
> }
> ...
> }
> }
> {code}
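The fix direction described above, waiting for any in-flight expiration pass before unregistering, and doing that wait outside of checkpointReadLock, can be sketched roughly like this (hypothetical names, not Ignite code):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hedged sketch of the idea discussed above: make unregister() wait for any
// in-flight expiration pass *before* the checkpoint read lock is taken, so the
// wait never happens under checkpointReadLock. All names are hypothetical.
public class TtlUnregisterSketch {
    private final ReentrantReadWriteLock busyLock = new ReentrantReadWriteLock();
    private volatile boolean stopped;

    /** Called by the cleanup worker for each expiration pass. */
    public boolean expirePass(Runnable expireClosure) {
        if (!busyLock.readLock().tryLock())
            return false;                 // cache is stopping, skip this pass
        try {
            if (stopped)
                return false;
            expireClosure.run();          // safe: unregister() blocks until we release
            return true;
        }
        finally {
            busyLock.readLock().unlock();
        }
    }

    /** Called on cache stop, before acquiring checkpointReadLock. */
    public void unregister() {
        busyLock.writeLock().lock();      // waits for an in-flight expirePass()
        try {
            stopped = true;
        }
        finally {
            busyLock.writeLock().unlock();
        }
    }
}
```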



--


[jira] [Commented] (IGNITE-12033) .NET: Callbacks from striped pool due to async/await may hang cluster

2019-10-09 Thread Ilya Kasnacheev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947478#comment-16947478
 ] 

Ilya Kasnacheev commented on IGNITE-12033:
--

[~ptupitsyn] any actions needed from Java devs?

> .NET: Callbacks from striped pool due to async/await may hang cluster
> -
>
> Key: IGNITE-12033
> URL: https://issues.apache.org/jira/browse/IGNITE-12033
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, platforms
>Affects Versions: 2.7.5
>Reporter: Ilya Kasnacheev
>Assignee: Pavel Tupitsyn
>Priority: Critical
>  Labels: .net
> Fix For: 2.8
>
>
> http://apache-ignite-users.70518.x6.nabble.com/Replace-or-Put-after-PutAsync-causes-Ignite-to-hang-td27871.html#a28051
> There's a reproducer project. Long story short, .NET can invoke cache 
> operations with future callbacks, which will be invoked from the striped pool. If 
> such callbacks themselves use cache operations, those may be scheduled 
> to the same stripe and cause a deadlock.
> The code is very simple:
> {code}
> Console.WriteLine("PutAsync");
> await cache.PutAsync(1, "Test");
> Console.WriteLine("Replace");
> cache.Replace(1, "Testing"); // Hangs here
> Console.WriteLine("Wait");
> await Task.Delay(Timeout.Infinite); 
> {code}
> async/await should absolutely not allow any client code to be run from 
> stripes.
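A hedged Java analogue of the fix direction (Ignite's .NET internals differ; this only illustrates the rule "never run user continuations on the internal thread that completes the operation", and all names here are invented for illustration):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Models one "stripe" thread and a separate pool for user continuations.
public class StripeSafeCallback {
    public static String runOp() {
        ExecutorService stripe = Executors.newSingleThreadExecutor();   // internal stripe thread
        ExecutorService userPool = Executors.newSingleThreadExecutor(); // pool for user code
        try {
            CompletableFuture<String> op = new CompletableFuture<>();
            // UNSAFE would be op.thenApply(...): the continuation may run on the
            // stripe thread that calls complete(). SAFE: force it onto userPool,
            // so the continuation can issue new "cache operations" freely.
            CompletableFuture<String> cont = op.thenApplyAsync(v -> v + "-done", userPool);
            stripe.submit(() -> op.complete("put"));                    // stripe finishes the op
            return cont.join();
        }
        finally {
            stripe.shutdown();
            userPool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runOp()); // prints "put-done"
    }
}
```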



--


[jira] [Updated] (IGNITE-9357) Spark Structured Streaming with Ignite as data source and sink

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-9357:

Affects Version/s: (was: 2.9)
   3.0

> Spark Structured Streaming with Ignite as data source and sink
> --
>
> Key: IGNITE-9357
> URL: https://issues.apache.org/jira/browse/IGNITE-9357
> Project: Ignite
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 3.0
>Reporter: Alexey Kukushkin
>Assignee: Alexey Zinoviev
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We are working on a PoC where we want to use Ignite as a data storage and 
> Spark as a computation engine. We found that Ignite is supported neither as a 
> source nor as a Sink when using Spark Structured Streaming, which is a must 
> for us.
> We are enhancing Ignite to support Spark streaming with Ignite. We will send 
> docs and code for review for the Ignite Community to consider if the 
> community wants to accept this feature. 



--


[jira] [Assigned] (IGNITE-9357) Spark Structured Streaming with Ignite as data source and sink

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev reassigned IGNITE-9357:
---

Assignee: Alexey Zinoviev  (was: Alexey Kukushkin)

> Spark Structured Streaming with Ignite as data source and sink
> --
>
> Key: IGNITE-9357
> URL: https://issues.apache.org/jira/browse/IGNITE-9357
> Project: Ignite
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 2.9
>Reporter: Alexey Kukushkin
>Assignee: Alexey Zinoviev
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We are working on a PoC where we want to use Ignite as a data storage and 
> Spark as a computation engine. We found that Ignite is supported neither as a 
> source nor as a Sink when using Spark Structured Streaming, which is a must 
> for us.
> We are enhancing Ignite to support Spark streaming with Ignite. We will send 
> docs and code for review for the Ignite Community to consider if the 
> community wants to accept this feature. 



--


[jira] [Commented] (IGNITE-9357) Spark Structured Streaming with Ignite as data source and sink

2019-10-09 Thread Alexey Kukushkin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947450#comment-16947450
 ] 

Alexey Kukushkin commented on IGNITE-9357:
--

[~zaleslaw], that was only a proof-of-concept (PoC). The solution is still far 
from being production-ready. The items that must be implemented before we can 
merge it are:
  # The PoC supports only SQL-enabled caches with an incremental field: Spark 
maintains "start" and "end" offsets depending on the streaming mode ("append" 
or "aggregate") and passes the offsets to Ignite. Right now, the patch only 
works with incremental and timestamp fields. I think we need to somehow support 
any cache before we merge it.
 # Poor performance: Spark works using “micro-batches.” Currently the patch 
uses a naive approach of just firing the SQL ("SELECT ... FROM ... WHERE offset 
>= start AND offset < end") at Ignite. This doesn’t scale. It would be better 
to use a continuous query.
 # Work that would have to be done to get to a point where it could be merged 
into Ignite: cleaning up the code, documenting it, etc.
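The offset-windowed micro-batch read from item 1 can be sketched in plain Java (a hypothetical, in-memory stand-in for the "SELECT ... WHERE offset >= start AND offset < end" query; names are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Rough sketch of the micro-batch offset approach: each batch reads rows whose
// incremental "offset" field falls in [start, end), mimicking the SQL window.
public class OffsetBatchSketch {
    /** One micro-batch over an incremental "offset" field. */
    public static List<Long> readBatch(List<Long> offsets, long start, long end) {
        List<Long> batch = new ArrayList<>();
        for (long off : offsets)
            if (off >= start && off < end)
                batch.add(off);
        return batch;
    }
}
```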

 
Unfortunately, I do not have time to complete it in the near future. Please feel 
free to take the task and complete it.
 

> Spark Structured Streaming with Ignite as data source and sink
> --
>
> Key: IGNITE-9357
> URL: https://issues.apache.org/jira/browse/IGNITE-9357
> Project: Ignite
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 2.9
>Reporter: Alexey Kukushkin
>Assignee: Alexey Kukushkin
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We are working on a PoC where we want to use Ignite as a data storage and 
> Spark as a computation engine. We found that Ignite is supported neither as a 
> source nor as a Sink when using Spark Structured Streaming, which is a must 
> for us.
> We are enhancing Ignite to support Spark streaming with Ignite. We will send 
> docs and code for review for the Ignite Community to consider if the 
> community wants to accept this feature. 



--


[jira] [Created] (IGNITE-12273) Slow TX recovery

2019-10-09 Thread Anton Vinogradov (Jira)
Anton Vinogradov created IGNITE-12273:
-

 Summary: Slow TX recovery
 Key: IGNITE-12273
 URL: https://issues.apache.org/jira/browse/IGNITE-12273
 Project: Ignite
  Issue Type: Task
Reporter: Anton Vinogradov
Assignee: Anton Vinogradov


TX recovery causes B*N*2 GridCacheTxRecoveryRequest messages to be sent (B - 
backups, N - number of prepared txs).
It seems we are able to perform recovery more efficiently.
For example, we may send only B*B*2 messages by accumulating txs together.
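The message-count arithmetic can be written down explicitly (formulas as stated in the description; the factor of 2 is assumed to stand for request plus response):

```java
// Illustration of the message-count formulas from this ticket.
public class TxRecoveryMath {
    /** Current scheme: one request/response pair per backup per prepared tx. */
    public static long perTxMessages(long backups, long preparedTxs) {
        return backups * preparedTxs * 2;
    }

    /** Proposed scheme: txs accumulated together into per-node batches. */
    public static long batchedMessages(long backups) {
        return backups * backups * 2;
    }

    public static void main(String[] args) {
        // With 3 backups and 100 prepared txs the proposed scheme sends far fewer messages.
        System.out.println(perTxMessages(3, 100)); // 600
        System.out.println(batchedMessages(3));    // 18
    }
}
```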



--


[jira] [Updated] (IGNITE-12236) RepositoryFactorySupport#getQueryLookupStrategy no longer overriden in IgniteRepositoryFactory

2019-10-09 Thread Thibaut (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thibaut updated IGNITE-12236:
-
Attachment: (was: IGNITE-12336.patch)

> RepositoryFactorySupport#getQueryLookupStrategy no longer overriden in 
> IgniteRepositoryFactory
> --
>
> Key: IGNITE-12236
> URL: https://issues.apache.org/jira/browse/IGNITE-12236
> Project: Ignite
>  Issue Type: Improvement
>  Components: spring
>Affects Versions: 2.8, 2.7.6
>Reporter: Thibaut
>Priority: Major
>  Labels: newbie, patch
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hello,
> org.apache.ignite.springdata20.repository.support.IgniteRepositoryFactory#getQueryLookupStrategy
> does not override 
> org.springframework.data.repository.core.support.RepositoryFactorySupport#getQueryLookupStrategy
> since this commit
> [https://github.com/spring-projects/spring-data-commons/commit/a6215fbe0f5c9a254cddacb12763737f2c286ad5]
>  
> this results in an exception thrown in 
> org.springframework.data.repository.core.support.RepositoryFactorySupport.QueryExecutorMethodInterceptor#QueryExecutorMethodInterceptor
>  
>  This prevents using Ignite with any up-to-date version of Spring. Fixing 
> this would require updating ; that's the reason I'm 
> putting this as an Improvement.



--


[jira] [Created] (IGNITE-12272) Delayed TX recovery

2019-10-09 Thread Anton Vinogradov (Jira)
Anton Vinogradov created IGNITE-12272:
-

 Summary: Delayed TX recovery
 Key: IGNITE-12272
 URL: https://issues.apache.org/jira/browse/IGNITE-12272
 Project: Ignite
  Issue Type: Task
Reporter: Anton Vinogradov
Assignee: Anton Vinogradov


TX recovery currently starts in a delayed way:
IGNITE_TX_SALVAGE_TIMEOUT = 100, which causes a 100+ ms delay on recovery.
It seems we are able to get rid of this delay to make recovery faster.
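For reference, the delay comes from the IGNITE_TX_SALVAGE_TIMEOUT system property. A hedged sketch of lowering it before node start (the effect should be verified against your Ignite version; this is not part of the proposed fix):

```java
// Sketch: IGNITE_TX_SALVAGE_TIMEOUT must be set before Ignition.start(),
// since Ignite reads system properties during node startup.
public class SalvageTimeoutSketch {
    public static void main(String[] args) {
        System.setProperty("IGNITE_TX_SALVAGE_TIMEOUT", "0");
        System.out.println(System.getProperty("IGNITE_TX_SALVAGE_TIMEOUT"));
        // ... Ignition.start(cfg) would go here in real code.
    }
}
```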



--


[jira] [Updated] (IGNITE-7523) Exception on data expiration after sharedRDD.saveValues call

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-7523:

Affects Version/s: (was: 2.3)
   2.9

> Exception on data expiration after sharedRDD.saveValues call
> 
>
> Key: IGNITE-7523
> URL: https://issues.apache.org/jira/browse/IGNITE-7523
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.9
>Reporter: Mikhail Cherkasov
>Assignee: Alexey Zinoviev
>Priority: Critical
> Fix For: 2.9
>
>
> Reproducer:
> {code:java}
> package rdd_expiration;
> import java.util.ArrayList;
> import java.util.Arrays;
> import java.util.List;
> import java.util.UUID;
> import java.util.concurrent.atomic.AtomicLong;
> import javax.cache.Cache;
> import javax.cache.expiry.CreatedExpiryPolicy;
> import javax.cache.expiry.Duration;
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.configuration.CacheConfiguration;
> import org.apache.ignite.configuration.DataRegionConfiguration;
> import org.apache.ignite.configuration.DataStorageConfiguration;
> import org.apache.ignite.configuration.IgniteConfiguration;
> import org.apache.ignite.lang.IgniteOutClosure;
> import org.apache.ignite.spark.JavaIgniteContext;
> import org.apache.ignite.spark.JavaIgniteRDD;
> import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
> import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
> import org.apache.log4j.Level;
> import org.apache.log4j.Logger;
> import org.apache.spark.SparkConf;
> import org.apache.spark.api.java.JavaRDD;
> import org.apache.spark.api.java.JavaSparkContext;
> import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
> import static org.apache.ignite.cache.CacheMode.PARTITIONED;
> import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
> /**
> * This example demonstrates how to create a JavaIgniteRDD and share it with 
> multiple Spark workers. The goal of this
> * particular example is to provide the simplest code example of this logic.
> * 
> * This example will start Ignite in the embedded mode and will start an 
> JavaIgniteContext on each Spark worker node.
> * 
> * The example can work in the standalone mode as well that can be enabled by 
> setting JavaIgniteContext's
> * \{@code standalone} property to \{@code true} and running an Ignite node 
> separately with
> * `examples/config/spark/example-shared-rdd.xml` config.
> */
> public class RddExpiration {
> /**
> * Executes the example.
> * @param args Command line arguments, none required.
> */
> public static void main(String args[]) throws InterruptedException {
> Ignite server = null;
> for (int i = 0; i < 4; i++) {
> IgniteConfiguration serverCfg = createIgniteCfg();
> serverCfg.setClientMode(false);
> serverCfg.setIgniteInstanceName("Server" + i);
> server = Ignition.start(serverCfg);
> }
> server.active(true);
> // Spark Configuration.
> SparkConf sparkConf = new SparkConf()
> .setAppName("JavaIgniteRDDExample")
> .setMaster("local")
> .set("spark.executor.instances", "2");
> // Spark context.
> JavaSparkContext sparkContext = new JavaSparkContext(sparkConf);
> // Adjust the logger to exclude the logs of no interest.
> Logger.getRootLogger().setLevel(Level.ERROR);
> Logger.getLogger("org.apache.ignite").setLevel(Level.INFO);
> // Creates Ignite context with specific configuration and runs Ignite in the 
> embedded mode.
> JavaIgniteContext<Integer, Integer> igniteContext = new JavaIgniteContext<Integer, Integer>(
> sparkContext,
> new IgniteOutClosure<IgniteConfiguration>() {
> @Override public IgniteConfiguration apply() {
> return createIgniteCfg();
> }
> },
> true);
> // Create a Java Ignite RDD of Type (Int,Int) Integer Pair.
> JavaIgniteRDD<Integer, Integer> sharedRDD = igniteContext.<Integer, Integer>fromCache("sharedRDD");
> long start = System.currentTimeMillis();
> long totalLoaded = 0;
> while(System.currentTimeMillis() - start < 55_000) {
> // Define data to be stored in the Ignite RDD (cache).
> List<Integer> data = new ArrayList<>(20_000);
> for (int i = 0; i < 20_000; i++)
> data.add(i);
> // Preparing a Java RDD.
> JavaRDD<Integer> javaRDD = sparkContext.parallelize(data);
> sharedRDD.saveValues(javaRDD);
> totalLoaded += 20_000;
> }
> System.out.println("Loaded " + totalLoaded);
> for (;;) {
> System.out.println(">>> Iterating over Ignite Shared RDD...");
> IgniteCache<Integer, Integer> cache = server.getOrCreateCache("sharedRDD");
> AtomicLong recordsLeft = new AtomicLong(0);
> for (Cache.Entry<Integer, Integer> entry : cache) {
> recordsLeft.incrementAndGet();
> }
> System.out.println("Left: " + recordsLeft.get());
> }
> // Close IgniteContext on all the workers.
> // igniteContext.close(true);
> }
> private static IgniteConfiguration createIgniteCfg() {
> 

[jira] [Updated] (IGNITE-12204) In binary distribution, essential dependencies for ignite-spark missing

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-12204:
-
Fix Version/s: 2.9

> In binary distribution, essential dependencies for ignite-spark missing
> ---
>
> Key: IGNITE-12204
> URL: https://issues.apache.org/jira/browse/IGNITE-12204
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.9
>Reporter: Ilya Kasnacheev
>Assignee: Alexey Zinoviev
>Priority: Major
> Fix For: 2.9
>
>
> It seems that we only put direct dependencies of other JARs in our binary 
> distribution, and not transitive ones.
> For example, libs/optional/ignite-spark lacks the essential commons-lang3 
> jar, which will lead to the following error immediately:
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org.apache.commons.lang3.SystemUtils
> at org.apache.spark.util.Utils$.(Utils.scala:1915)
> at org.apache.spark.util.Utils$.(Utils.scala)
> at 
> org.apache.spark.SparkConf.loadFromSystemProperties(SparkConf.scala:75)
> {code}
> It's almost impossible to fix without resorting to Maven source build.
> I understand that adding Spark module to Ignite server is something not 
> widely used, but if we ship this module at all, we should make sure that it 
> is usable in some form.



--


[jira] [Assigned] (IGNITE-12204) In binary distribution, essential dependencies for ignite-spark missing

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev reassigned IGNITE-12204:


Assignee: Alexey Zinoviev

> In binary distribution, essential dependencies for ignite-spark missing
> ---
>
> Key: IGNITE-12204
> URL: https://issues.apache.org/jira/browse/IGNITE-12204
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.7.6
>Reporter: Ilya Kasnacheev
>Assignee: Alexey Zinoviev
>Priority: Major
>
> It seems that we only put direct dependencies of other JARs in our binary 
> distribution, and not transitive ones.
> For example, libs/optional/ignite-spark lacks the essential commons-lang3 
> jar, which will lead to the following error immediately:
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org.apache.commons.lang3.SystemUtils
> at org.apache.spark.util.Utils$.(Utils.scala:1915)
> at org.apache.spark.util.Utils$.(Utils.scala)
> at 
> org.apache.spark.SparkConf.loadFromSystemProperties(SparkConf.scala:75)
> {code}
> It's almost impossible to fix without resorting to Maven source build.
> I understand that adding Spark module to Ignite server is something not 
> widely used, but if we ship this module at all, we should make sure that it 
> is usable in some form.



--


[jira] [Updated] (IGNITE-12204) In binary distribution, essential dependencies for ignite-spark missing

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-12204:
-
Affects Version/s: (was: 2.7.6)
   2.9

> In binary distribution, essential dependencies for ignite-spark missing
> ---
>
> Key: IGNITE-12204
> URL: https://issues.apache.org/jira/browse/IGNITE-12204
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.9
>Reporter: Ilya Kasnacheev
>Assignee: Alexey Zinoviev
>Priority: Major
>
> It seems that we only put direct dependencies of other JARs in our binary 
> distribution, and not transitive ones.
> For example, libs/optional/ignite-spark lacks the essential commons-lang3 
> jar, which will lead to the following error immediately:
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org.apache.commons.lang3.SystemUtils
> at org.apache.spark.util.Utils$.(Utils.scala:1915)
> at org.apache.spark.util.Utils$.(Utils.scala)
> at 
> org.apache.spark.SparkConf.loadFromSystemProperties(SparkConf.scala:75)
> {code}
> It's almost impossible to fix without resorting to Maven source build.
> I understand that adding Spark module to Ignite server is something not 
> widely used, but if we ship this module at all, we should make sure that it 
> is usable in some form.



--


[jira] [Updated] (IGNITE-10325) Spark Data Frame - Thin Client

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-10325:
-
Issue Type: New Feature  (was: Improvement)

> Spark Data Frame - Thin Client
> --
>
> Key: IGNITE-10325
> URL: https://issues.apache.org/jira/browse/IGNITE-10325
> Project: Ignite
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 2.9
>Reporter: Nikolay Izhikov
>Assignee: Nikolay Izhikov
>Priority: Major
>
> For now, a client node is required to connect to an Ignite cluster from Spark.
> We need to add the ability to use the Thin Client protocol for the Spark integration.



--


[jira] [Assigned] (IGNITE-10859) Ignite Spark giving exception when join two cached tables

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev reassigned IGNITE-10859:


Assignee: Alexey Zinoviev

> Ignite Spark giving exception when join two cached tables
> -
>
> Key: IGNITE-10859
> URL: https://issues.apache.org/jira/browse/IGNITE-10859
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.9
>Reporter: Ayush
>Assignee: Alexey Zinoviev
>Priority: Major
> Fix For: 2.9
>
>
> When we load two dataframes from Ignite in Spark and join those 
> dataframes, an exception is thrown. We have checked the generated logical 
> plan, and it seems to be wrong.
> I am adding the stack trace and code 
>  
> scala> val df1 = spark.read.format(FORMAT_IGNITE).option(OPTION_CONFIG_FILE, 
> CONFIG).option(OPTION_TABLE, 
> "HIVE_customer_address_2_1546577865912").load().toDF(schema1.columns 
> map(_.toLowerCase): _*)
> df1: org.apache.spark.sql.DataFrame = [ca_address_sk: int, ca_address_id: 
> string ... 11 more fields]
> scala> df1.show(1)
>  
> +-++--++-++++--+--
> |ca_address_sk|ca_address_id|ca_street_number|ca_street_name|ca_street_type|ca_suite_number|ca_city|ca_county|ca_state|ca_zip|ca_country|ca_gmt_offset|ca_location_type|
> +-++--++-++++--+--
> |1|BAAA|18|Jackson|Parkway|Suite 280|Fairfield|Maricopa 
> County|AZ|86192|United States|-7.00|condo|
> +-++--++-++++--+--
>  only showing top 1 row
> scala> val df2 = spark.read.format(FORMAT_IGNITE).option(OPTION_CONFIG_FILE, 
> CONFIG).option(OPTION_TABLE, 
> "POSTGRES_customer_1_1546598025406").load().toDF(schema2.columns 
> map(_.toLowerCase): _*)
>  df2: org.apache.spark.sql.DataFrame = [c_customer_sk: int, c_customer_id: 
> string ... 16 more fields]
> scala> df2.show(1)
>  
> +-++++---++-++++++-++
> |c_customer_sk|c_customer_id|c_current_cdemo_sk|c_current_hdemo_sk|c_current_addr_sk|c_first_shipto_date_sk|c_first_sales_date_sk|c_salutation|c_first_name|c_last_name|c_preferred_cust_flag|c_birth_day|c_birth_month|c_birth_year|c_birth_country|c_login|c_email_address|c_last_review_date|
> +-++++---++-++++++-++
> |7288|IHMB|1461725|4938|18198|2450838|2450808|Sir|Steven|Storey 
> ...|Y|1|2|1967|QATAR|null|Steven.Storey@QdG...|2452528|
> +-++++---++-++++++-++
> scala> df1.join(df2, df1.col("ca_address_sk") === df2.col("c_customer_sk"), 
> "inner")
>  res64: org.apache.spark.sql.DataFrame = [ca_address_sk: int, ca_address_id: 
> string ... 29 more fields]
> scala> res64.show
>  19/01/04 16:50:07 ERROR Executor: Exception in task 0.0 in stage 15.0 (TID 
> 15)
>  javax.cache.CacheException: Failed to parse query. Column 
> "POSTGRES_CUSTOMER_1_1546598025406.CA_ADDRESS_SK" not found; SQL statement:
>  SELECT CAST(HIVE_customer_address_2_1546577865912.CA_ADDRESS_SK AS VARCHAR) 
> AS ca_address_sk, HIVE_customer_address_2_1546577865912.CA_ADDRESS_ID, 
> HIVE_customer_address_2_1546577865912.CA_STREET_NUMBER, 
> HIVE_customer_address_2_1546577865912.CA_STREET_NAME, 
> HIVE_customer_address_2_1546577865912.CA_STREET_TYPE, 
> HIVE_customer_address_2_1546577865912.CA_SUITE_NUMBER, 
> HIVE_customer_address_2_1546577865912.CA_CITY, 
> HIVE_customer_address_2_1546577865912.CA_COUNTY, 
> HIVE_customer_address_2_1546577865912.CA_STATE, 
> HIVE_customer_address_2_1546577865912.CA_ZIP, 
> HIVE_customer_address_2_1546577865912.CA_COUNTRY, 
> 

[jira] [Updated] (IGNITE-10859) Ignite Spark giving exception when join two cached tables

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-10859:
-
Fix Version/s: 2.9

> Ignite Spark giving exception when join two cached tables
> -
>
> Key: IGNITE-10859
> URL: https://issues.apache.org/jira/browse/IGNITE-10859
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.9
>Reporter: Ayush
>Priority: Major
> Fix For: 2.9
>
>
> When we load two dataframes from Ignite in Spark and join those 
> dataframes, an exception is thrown. We have checked the generated logical 
> plan, and it seems to be wrong.
> I am adding the stack trace and code 
>  
> scala> val df1 = spark.read.format(FORMAT_IGNITE).option(OPTION_CONFIG_FILE, 
> CONFIG).option(OPTION_TABLE, 
> "HIVE_customer_address_2_1546577865912").load().toDF(schema1.columns 
> map(_.toLowerCase): _*)
> df1: org.apache.spark.sql.DataFrame = [ca_address_sk: int, ca_address_id: 
> string ... 11 more fields]
> scala> df1.show(1)
>  
> +-++--++-++++--+--
> |ca_address_sk|ca_address_id|ca_street_number|ca_street_name|ca_street_type|ca_suite_number|ca_city|ca_county|ca_state|ca_zip|ca_country|ca_gmt_offset|ca_location_type|
> +-++--++-++++--+--
> |1|BAAA|18|Jackson|Parkway|Suite 280|Fairfield|Maricopa 
> County|AZ|86192|United States|-7.00|condo|
> +-++--++-++++--+--
>  only showing top 1 row
> scala> val df2 = spark.read.format(FORMAT_IGNITE).option(OPTION_CONFIG_FILE, 
> CONFIG).option(OPTION_TABLE, 
> "POSTGRES_customer_1_1546598025406").load().toDF(schema2.columns 
> map(_.toLowerCase): _*)
>  df2: org.apache.spark.sql.DataFrame = [c_customer_sk: int, c_customer_id: 
> string ... 16 more fields]
> scala> df2.show(1)
>  
> +-++++---++-++++++-++
> |c_customer_sk|c_customer_id|c_current_cdemo_sk|c_current_hdemo_sk|c_current_addr_sk|c_first_shipto_date_sk|c_first_sales_date_sk|c_salutation|c_first_name|c_last_name|c_preferred_cust_flag|c_birth_day|c_birth_month|c_birth_year|c_birth_country|c_login|c_email_address|c_last_review_date|
> +-++++---++-++++++-++
> |7288|IHMB|1461725|4938|18198|2450838|2450808|Sir|Steven|Storey 
> ...|Y|1|2|1967|QATAR|null|Steven.Storey@QdG...|2452528|
> +-++++---++-++++++-++
> scala> df1.join(df2, df1.col("ca_address_sk") === df2.col("c_customer_sk"), 
> "inner")
>  res64: org.apache.spark.sql.DataFrame = [ca_address_sk: int, ca_address_id: 
> string ... 29 more fields]
> scala> res64.show
>  19/01/04 16:50:07 ERROR Executor: Exception in task 0.0 in stage 15.0 (TID 
> 15)
>  javax.cache.CacheException: Failed to parse query. Column 
> "POSTGRES_CUSTOMER_1_1546598025406.CA_ADDRESS_SK" not found; SQL statement:
>  SELECT CAST(HIVE_customer_address_2_1546577865912.CA_ADDRESS_SK AS VARCHAR) 
> AS ca_address_sk, HIVE_customer_address_2_1546577865912.CA_ADDRESS_ID, 
> HIVE_customer_address_2_1546577865912.CA_STREET_NUMBER, 
> HIVE_customer_address_2_1546577865912.CA_STREET_NAME, 
> HIVE_customer_address_2_1546577865912.CA_STREET_TYPE, 
> HIVE_customer_address_2_1546577865912.CA_SUITE_NUMBER, 
> HIVE_customer_address_2_1546577865912.CA_CITY, 
> HIVE_customer_address_2_1546577865912.CA_COUNTY, 
> HIVE_customer_address_2_1546577865912.CA_STATE, 
> HIVE_customer_address_2_1546577865912.CA_ZIP, 
> HIVE_customer_address_2_1546577865912.CA_COUNTRY, 
> CAST(HIVE_customer_address_2_1546577865912.CA_GMT_OFFSET AS VARCHAR) AS 
> 

[jira] [Updated] (IGNITE-10325) Spark Data Frame - Thin Client

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-10325:
-
Affects Version/s: (was: 2.6)
   2.9

> Spark Data Frame - Thin Client
> --
>
> Key: IGNITE-10325
> URL: https://issues.apache.org/jira/browse/IGNITE-10325
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Affects Versions: 2.9
>Reporter: Nikolay Izhikov
>Assignee: Nikolay Izhikov
>Priority: Major
>
> For now, a client node is required to connect to an Ignite cluster from Spark.
> We need to add the ability to use the Thin Client protocol for the Spark integration.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-10859) Ignite Spark giving exception when join two cached tables

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-10859:
-
Affects Version/s: (was: 2.7)
   (was: 2.6)
   2.9

> Ignite Spark giving exception when join two cached tables
> -
>
> Key: IGNITE-10859
> URL: https://issues.apache.org/jira/browse/IGNITE-10859
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.9
>Reporter: Ayush
>Priority: Major
>
> When we load two data frames from Ignite in Spark and join those two data 
> frames, an exception is thrown. We have checked the generated logical plan 
> and it seems to be wrong.
> I am adding the stack trace and code 
>  
> scala> val df1 = spark.read.format(FORMAT_IGNITE).option(OPTION_CONFIG_FILE, 
> CONFIG).option(OPTION_TABLE, 
> "HIVE_customer_address_2_1546577865912").load().toDF(schema1.columns 
> map(_.toLowerCase): _*)
> df1: org.apache.spark.sql.DataFrame = [ca_address_sk: int, ca_address_id: 
> string ... 11 more fields]
> scala> df1.show(1)
>  
> |ca_address_sk|ca_address_id|ca_street_number|ca_street_name|ca_street_type|ca_suite_number|ca_city|ca_county|ca_state|ca_zip|ca_country|ca_gmt_offset|ca_location_type|
> |1|BAAA|18|Jackson|Parkway|Suite 280|Fairfield|Maricopa County|AZ|86192|United States|-7.00|condo|
>  only showing top 1 row
> scala> val df2 = spark.read.format(FORMAT_IGNITE).option(OPTION_CONFIG_FILE, 
> CONFIG).option(OPTION_TABLE, 
> "POSTGRES_customer_1_1546598025406").load().toDF(schema2.columns 
> map(_.toLowerCase): _*)
>  df2: org.apache.spark.sql.DataFrame = [c_customer_sk: int, c_customer_id: 
> string ... 16 more fields]
> scala> df2.show(1)
>  
> |c_customer_sk|c_customer_id|c_current_cdemo_sk|c_current_hdemo_sk|c_current_addr_sk|c_first_shipto_date_sk|c_first_sales_date_sk|c_salutation|c_first_name|c_last_name|c_preferred_cust_flag|c_birth_day|c_birth_month|c_birth_year|c_birth_country|c_login|c_email_address|c_last_review_date|
> |7288|IHMB|1461725|4938|18198|2450838|2450808|Sir|Steven|Storey ...|Y|1|2|1967|QATAR|null|Steven.Storey@QdG...|2452528|
> scala> df1.join(df2, df1.col("ca_address_sk") === df2.col("c_customer_sk"), 
> "inner")
>  res64: org.apache.spark.sql.DataFrame = [ca_address_sk: int, ca_address_id: 
> string ... 29 more fields]
> scala> res64.show
>  19/01/04 16:50:07 ERROR Executor: Exception in task 0.0 in stage 15.0 (TID 
> 15)
>  javax.cache.CacheException: Failed to parse query. Column 
> "POSTGRES_CUSTOMER_1_1546598025406.CA_ADDRESS_SK" not found; SQL statement:
>  SELECT CAST(HIVE_customer_address_2_1546577865912.CA_ADDRESS_SK AS VARCHAR) 
> AS ca_address_sk, HIVE_customer_address_2_1546577865912.CA_ADDRESS_ID, 
> HIVE_customer_address_2_1546577865912.CA_STREET_NUMBER, 
> HIVE_customer_address_2_1546577865912.CA_STREET_NAME, 
> HIVE_customer_address_2_1546577865912.CA_STREET_TYPE, 
> HIVE_customer_address_2_1546577865912.CA_SUITE_NUMBER, 
> HIVE_customer_address_2_1546577865912.CA_CITY, 
> HIVE_customer_address_2_1546577865912.CA_COUNTY, 
> HIVE_customer_address_2_1546577865912.CA_STATE, 
> HIVE_customer_address_2_1546577865912.CA_ZIP, 
> HIVE_customer_address_2_1546577865912.CA_COUNTRY, 
> 
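The failure above comes from the pushed-down join qualifying a column with the wrong table name. One mitigation is to disable Ignite's Spark SQL optimization so that Spark plans the join itself; the `ignite.disableSparkSQLOptimization` flag appears later in this digest (IGNITE-12159). A hedged sketch, assuming a `client.xml` Ignite client configuration and a running cluster (so it cannot run standalone), not a verified fix:

```java
import org.apache.ignite.spark.IgniteDataFrameSettings;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.ignite.IgniteSparkSession;

public class JoinWorkaround {
    public static void main(String[] args) {
        // "client.xml" is an assumed path to an Ignite client configuration.
        IgniteSparkSession spark = IgniteSparkSession.builder()
                .appName("ignite-join-workaround")
                .master("local")
                // Let Spark plan the join instead of pushing it down to Ignite SQL.
                .config("ignite.disableSparkSQLOptimization", true)
                .igniteConfig("client.xml")
                .getOrCreate();

        Dataset<Row> df1 = spark.read().format(IgniteDataFrameSettings.FORMAT_IGNITE())
                .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE(), "client.xml")
                .option(IgniteDataFrameSettings.OPTION_TABLE(), "HIVE_customer_address_2_1546577865912")
                .load();
        Dataset<Row> df2 = spark.read().format(IgniteDataFrameSettings.FORMAT_IGNITE())
                .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE(), "client.xml")
                .option(IgniteDataFrameSettings.OPTION_TABLE(), "POSTGRES_customer_1_1546598025406")
                .load();

        // With optimization disabled, both sides are fetched and joined by Spark.
        df1.join(df2, df1.col("CA_ADDRESS_SK").equalTo(df2.col("C_CUSTOMER_SK")), "inner").show();
    }
}
```

This trades query pushdown (and its performance benefit) for a correct plan, so it is a workaround rather than a fix for the optimizer bug.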

[jira] [Assigned] (IGNITE-9108) Spark DataFrames With Cache Key and Value Objects

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev reassigned IGNITE-9108:
---

Assignee: Alexey Zinoviev

> Spark DataFrames With Cache Key and Value Objects
> -
>
> Key: IGNITE-9108
> URL: https://issues.apache.org/jira/browse/IGNITE-9108
> Project: Ignite
>  Issue Type: New Feature
>  Components: spark
>Reporter: Stuart Macdonald
>Assignee: Alexey Zinoviev
>Priority: Major
>
> Add support for _key and _val columns within Ignite-provided Spark 
> DataFrames, which represent the cache key and value objects similar to the 
> current _key/_val column semantics in Ignite SQL.
>  
> If the cache key or value objects are standard SQL types (eg. String, Int, 
> etc) they will be represented as such in the DataFrame schema, otherwise they 
> are represented as Binary types encoded as either: 1. Ignite BinaryObjects, 
> in which case we'd need to supply a Spark Encoder implementation for 
> BinaryObjects, eg:
>  
> {code:java}
> IgniteSparkSession session = ...
> Dataset<Row> dataFrame = ...
> Dataset<MyValClass> valDataSet = 
> dataFrame.select("_val").as(session.binaryObjectEncoder(MyValClass.class))
> {code}
> Or 2. Kryo-serialised versions of the objects, eg:
>  
> {code:java}
> Dataset<Row> dataFrame = ...
> Dataset<MyValClass> dataSet = 
> dataFrame.select("_val").as(Encoders.kryo(MyValClass.class))
> {code}
> Option 1 would probably be more efficient but option 2 would be more 
> idiomatic Spark.
>  
> The rationale behind this is the same as the Ignite SQL _key and _val 
> columns: to allow access to the full cache objects from a SQL context.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-9108) Spark DataFrames With Cache Key and Value Objects

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-9108:

Fix Version/s: 2.9

> Spark DataFrames With Cache Key and Value Objects
> -
>
> Key: IGNITE-9108
> URL: https://issues.apache.org/jira/browse/IGNITE-9108
> Project: Ignite
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 2.9
>Reporter: Stuart Macdonald
>Assignee: Alexey Zinoviev
>Priority: Major
> Fix For: 2.9
>
>
> Add support for _key and _val columns within Ignite-provided Spark 
> DataFrames, which represent the cache key and value objects similar to the 
> current _key/_val column semantics in Ignite SQL.
>  
> If the cache key or value objects are standard SQL types (eg. String, Int, 
> etc) they will be represented as such in the DataFrame schema, otherwise they 
> are represented as Binary types encoded as either: 1. Ignite BinaryObjects, 
> in which case we'd need to supply a Spark Encoder implementation for 
> BinaryObjects, eg:
>  
> {code:java}
> IgniteSparkSession session = ...
> Dataset<Row> dataFrame = ...
> Dataset<MyValClass> valDataSet = 
> dataFrame.select("_val").as(session.binaryObjectEncoder(MyValClass.class))
> {code}
> Or 2. Kryo-serialised versions of the objects, eg:
>  
> {code:java}
> Dataset<Row> dataFrame = ...
> Dataset<MyValClass> dataSet = 
> dataFrame.select("_val").as(Encoders.kryo(MyValClass.class))
> {code}
> Option 1 would probably be more efficient but option 2 would be more 
> idiomatic Spark.
>  
> The rationale behind this is the same as the Ignite SQL _key and _val 
> columns: to allow access to the full cache objects from a SQL context.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-9108) Spark DataFrames With Cache Key and Value Objects

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-9108:

Affects Version/s: 2.9

> Spark DataFrames With Cache Key and Value Objects
> -
>
> Key: IGNITE-9108
> URL: https://issues.apache.org/jira/browse/IGNITE-9108
> Project: Ignite
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 2.9
>Reporter: Stuart Macdonald
>Assignee: Alexey Zinoviev
>Priority: Major
>
> Add support for _key and _val columns within Ignite-provided Spark 
> DataFrames, which represent the cache key and value objects similar to the 
> current _key/_val column semantics in Ignite SQL.
>  
> If the cache key or value objects are standard SQL types (eg. String, Int, 
> etc) they will be represented as such in the DataFrame schema, otherwise they 
> are represented as Binary types encoded as either: 1. Ignite BinaryObjects, 
> in which case we'd need to supply a Spark Encoder implementation for 
> BinaryObjects, eg:
>  
> {code:java}
> IgniteSparkSession session = ...
> Dataset<Row> dataFrame = ...
> Dataset<MyValClass> valDataSet = 
> dataFrame.select("_val").as(session.binaryObjectEncoder(MyValClass.class))
> {code}
> Or 2. Kryo-serialised versions of the objects, eg:
>  
> {code:java}
> Dataset<Row> dataFrame = ...
> Dataset<MyValClass> dataSet = 
> dataFrame.select("_val").as(Encoders.kryo(MyValClass.class))
> {code}
> Option 1 would probably be more efficient but option 2 would be more 
> idiomatic Spark.
>  
> The rationale behind this is the same as the Ignite SQL _key and _val 
> columns: to allow access to the full cache objects from a SQL context.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-9357) Spark Structured Streaming with Ignite as data source and sink

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-9357:

Affects Version/s: (was: 2.7)
   2.9

> Spark Structured Streaming with Ignite as data source and sink
> --
>
> Key: IGNITE-9357
> URL: https://issues.apache.org/jira/browse/IGNITE-9357
> Project: Ignite
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 2.9
>Reporter: Alexey Kukushkin
>Assignee: Alexey Kukushkin
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We are working on a PoC where we want to use Ignite as a data storage and 
> Spark as a computation engine. We found that Ignite is supported neither as a 
> source nor as a Sink when using Spark Structured Streaming, which is a must 
> for us.
> We are enhancing Ignite to support Spark streaming with Ignite. We will send 
> docs and code for review for the Ignite Community to consider if the 
> community wants to accept this feature. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-9357) Spark Structured Streaming with Ignite as data source and sink

2019-10-09 Thread Alexey Zinoviev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947439#comment-16947439
 ] 

Alexey Zinoviev commented on IGNITE-9357:
-

[@kukushal|https://github.com/kukushal] Are you waiting for the merging of this 
solution? Maybe you could update your PR and we will try to merge it?

> Spark Structured Streaming with Ignite as data source and sink
> --
>
> Key: IGNITE-9357
> URL: https://issues.apache.org/jira/browse/IGNITE-9357
> Project: Ignite
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 2.7
>Reporter: Alexey Kukushkin
>Assignee: Alexey Kukushkin
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We are working on a PoC where we want to use Ignite as a data storage and 
> Spark as a computation engine. We found that Ignite is supported neither as a 
> source nor as a Sink when using Spark Structured Streaming, which is a must 
> for us.
> We are enhancing Ignite to support Spark streaming with Ignite. We will send 
> docs and code for review for the Ignite Community to consider if the 
> community wants to accept this feature. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (IGNITE-9357) Spark Structured Streaming with Ignite as data source and sink

2019-10-09 Thread Alexey Zinoviev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947439#comment-16947439
 ] 

Alexey Zinoviev edited comment on IGNITE-9357 at 10/9/19 8:19 AM:
--

[~kukushal] Are you waiting for the merging of this solution? Maybe you could 
update your PR and we will try to merge it?


was (Author: zaleslaw):
[@kukushal|https://github.com/kukushal] Are you waiting for the merging of this 
solution? Maybe you could update your PR and we will try to merge it?

> Spark Structured Streaming with Ignite as data source and sink
> --
>
> Key: IGNITE-9357
> URL: https://issues.apache.org/jira/browse/IGNITE-9357
> Project: Ignite
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 2.7
>Reporter: Alexey Kukushkin
>Assignee: Alexey Kukushkin
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We are working on a PoC where we want to use Ignite as a data storage and 
> Spark as a computation engine. We found that Ignite is supported neither as a 
> source nor as a Sink when using Spark Structured Streaming, which is a must 
> for us.
> We are enhancing Ignite to support Spark streaming with Ignite. We will send 
> docs and code for review for the Ignite Community to consider if the 
> community wants to accept this feature. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (IGNITE-9317) Table Names With Special Characters Don't Work in Spark SQL Optimisations

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev reassigned IGNITE-9317:
---

Assignee: Alexey Zinoviev  (was: Stuart Macdonald)

> Table Names With Special Characters Don't Work in Spark SQL Optimisations
> -
>
> Key: IGNITE-9317
> URL: https://issues.apache.org/jira/browse/IGNITE-9317
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.6
>Reporter: Stuart Macdonald
>Assignee: Alexey Zinoviev
>Priority: Major
> Fix For: 2.9
>
>
> Table names aren't escaped in the execution of Ignite SQL through the Spark 
> SQL interface, meaning table names with special characters (such as . or -) 
> cause SQL grammar exceptions upon execution.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
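The escaping gap described in IGNITE-9317 comes down to identifier quoting. A minimal sketch of the quoting the fix would need (class and method names here are hypothetical; Ignite's actual escaping lives in its Spark integration and is not shown):

```java
final class SqlIdentifiers {
    // Quote an identifier per ANSI SQL, which H2 (Ignite's SQL engine) follows:
    // wrap it in double quotes and double any embedded double quotes.
    static String quote(String identifier) {
        return "\"" + identifier.replace("\"", "\"\"") + "\"";
    }

    public static void main(String[] args) {
        // Unquoted, a name like my-table.v2 parses as an expression with
        // a qualifier; quoted, it is a single identifier token.
        System.out.println(quote("my-table.v2"));   // → "my-table.v2"
        System.out.println(quote("PLAIN_TABLE"));   // → "PLAIN_TABLE"
    }
}
```

Applying such quoting wherever table names are interpolated into generated SQL would prevent the grammar exceptions.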


[jira] [Commented] (IGNITE-9317) Table Names With Special Characters Don't Work in Spark SQL Optimisations

2019-10-09 Thread Alexey Zinoviev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947435#comment-16947435
 ] 

Alexey Zinoviev commented on IGNITE-9317:
-

Dear [~stuartmacd], what is the desired behavior for table names with special 
characters?

Should we throw an exception? We can't improve Apache Spark itself, but we 
could notify Ignite users about this limitation.

> Table Names With Special Characters Don't Work in Spark SQL Optimisations
> -
>
> Key: IGNITE-9317
> URL: https://issues.apache.org/jira/browse/IGNITE-9317
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.6
>Reporter: Stuart Macdonald
>Assignee: Stuart Macdonald
>Priority: Major
> Fix For: 2.9
>
>
> Table names aren't escaped in the execution of Ignite SQL through the Spark 
> SQL interface, meaning table names with special characters (such as . or -) 
> cause SQL grammar exceptions upon execution.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12141) Ignite Spark Integration Support Schema on Table Write

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-12141:
-
Labels: await  (was: )

> Ignite Spark Integration Support Schema on Table Write
> --
>
> Key: IGNITE-12141
> URL: https://issues.apache.org/jira/browse/IGNITE-12141
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Affects Versions: 2.8
>Reporter: Manoj G T
>Assignee: Alexey Zinoviev
>Priority: Critical
>  Labels: await
> Fix For: 2.8
>
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> Ignite 2.6 doesn't allow creating a table in any schema other than the PUBLIC 
> schema, which is why "OPTION_SCHEMA" is not supported in Overwrite mode. Now 
> that Ignite supports creating a table in any given schema, it would be great 
> to incorporate changes supporting "OPTION_SCHEMA" in Overwrite mode and make 
> them available as part of the next Ignite release.
>  
> +Related Issue:+
> [https://stackoverflow.com/questions/57782033/apache-ignite-spark-integration-not-working-with-schema-name]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12141) Ignite Spark Integration Support Schema on Table Write

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-12141:
-
Fix Version/s: (was: 2.9)
   2.8

> Ignite Spark Integration Support Schema on Table Write
> --
>
> Key: IGNITE-12141
> URL: https://issues.apache.org/jira/browse/IGNITE-12141
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Affects Versions: 2.8
>Reporter: Manoj G T
>Assignee: Alexey Zinoviev
>Priority: Critical
> Fix For: 2.8
>
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> Ignite 2.6 doesn't allow creating a table in any schema other than the PUBLIC 
> schema, which is why "OPTION_SCHEMA" is not supported in Overwrite mode. Now 
> that Ignite supports creating a table in any given schema, it would be great 
> to incorporate changes supporting "OPTION_SCHEMA" in Overwrite mode and make 
> them available as part of the next Ignite release.
>  
> +Related Issue:+
> [https://stackoverflow.com/questions/57782033/apache-ignite-spark-integration-not-working-with-schema-name]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12141) Ignite Spark Integration Support Schema on Table Write

2019-10-09 Thread Alexey Zinoviev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947433#comment-16947433
 ] 

Alexey Zinoviev commented on IGNITE-12141:
--

[~gtmanoj235] Thank you for the explanation of the difference; I'll try to 
resolve it for the 2.8 release.

> Ignite Spark Integration Support Schema on Table Write
> --
>
> Key: IGNITE-12141
> URL: https://issues.apache.org/jira/browse/IGNITE-12141
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Affects Versions: 2.9
>Reporter: Manoj G T
>Assignee: Alexey Zinoviev
>Priority: Critical
> Fix For: 2.9
>
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> Ignite 2.6 doesn't allow creating a table in any schema other than the PUBLIC 
> schema, which is why "OPTION_SCHEMA" is not supported in Overwrite mode. Now 
> that Ignite supports creating a table in any given schema, it would be great 
> to incorporate changes supporting "OPTION_SCHEMA" in Overwrite mode and make 
> them available as part of the next Ignite release.
>  
> +Related Issue:+
> [https://stackoverflow.com/questions/57782033/apache-ignite-spark-integration-not-working-with-schema-name]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12141) Ignite Spark Integration Support Schema on Table Write

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-12141:
-
Affects Version/s: (was: 2.9)
   2.8

> Ignite Spark Integration Support Schema on Table Write
> --
>
> Key: IGNITE-12141
> URL: https://issues.apache.org/jira/browse/IGNITE-12141
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Affects Versions: 2.8
>Reporter: Manoj G T
>Assignee: Alexey Zinoviev
>Priority: Critical
> Fix For: 2.9
>
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> Ignite 2.6 doesn't allow creating a table in any schema other than the PUBLIC 
> schema, which is why "OPTION_SCHEMA" is not supported in Overwrite mode. Now 
> that Ignite supports creating a table in any given schema, it would be great 
> to incorporate changes supporting "OPTION_SCHEMA" in Overwrite mode and make 
> them available as part of the next Ignite release.
>  
> +Related Issue:+
> [https://stackoverflow.com/questions/57782033/apache-ignite-spark-integration-not-working-with-schema-name]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12141) Ignite Spark Integration Support Schema on Table Write

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-12141:
-
Affects Version/s: (was: 2.7.5)
   2.9

> Ignite Spark Integration Support Schema on Table Write
> --
>
> Key: IGNITE-12141
> URL: https://issues.apache.org/jira/browse/IGNITE-12141
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Affects Versions: 2.9
>Reporter: Manoj G T
>Assignee: Alexey Zinoviev
>Priority: Critical
> Fix For: 2.8
>
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> Ignite 2.6 doesn't allow creating a table in any schema other than the PUBLIC 
> schema, which is why "OPTION_SCHEMA" is not supported in Overwrite mode. Now 
> that Ignite supports creating a table in any given schema, it would be great 
> to incorporate changes supporting "OPTION_SCHEMA" in Overwrite mode and make 
> them available as part of the next Ignite release.
>  
> +Related Issue:+
> [https://stackoverflow.com/questions/57782033/apache-ignite-spark-integration-not-working-with-schema-name]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12141) Ignite Spark Integration Support Schema on Table Write

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-12141:
-
Fix Version/s: (was: 2.8)

> Ignite Spark Integration Support Schema on Table Write
> --
>
> Key: IGNITE-12141
> URL: https://issues.apache.org/jira/browse/IGNITE-12141
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Affects Versions: 2.9
>Reporter: Manoj G T
>Assignee: Alexey Zinoviev
>Priority: Critical
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> Ignite 2.6 doesn't allow creating a table in any schema other than the PUBLIC 
> schema, which is why "OPTION_SCHEMA" is not supported in Overwrite mode. Now 
> that Ignite supports creating a table in any given schema, it would be great 
> to incorporate changes supporting "OPTION_SCHEMA" in Overwrite mode and make 
> them available as part of the next Ignite release.
>  
> +Related Issue:+
> [https://stackoverflow.com/questions/57782033/apache-ignite-spark-integration-not-working-with-schema-name]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (IGNITE-7523) Exception on data expiration after sharedRDD.saveValues call

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev reassigned IGNITE-7523:
---

Assignee: Alexey Zinoviev  (was: Mikhail Cherkasov)

> Exception on data expiration after sharedRDD.saveValues call
> 
>
> Key: IGNITE-7523
> URL: https://issues.apache.org/jira/browse/IGNITE-7523
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.3
>Reporter: Mikhail Cherkasov
>Assignee: Alexey Zinoviev
>Priority: Critical
> Fix For: 2.9
>
>
> Reproducer:
> {code:java}
> package rdd_expiration;
> import java.util.ArrayList;
> import java.util.Arrays;
> import java.util.List;
> import java.util.UUID;
> import java.util.concurrent.atomic.AtomicLong;
> import javax.cache.Cache;
> import javax.cache.expiry.CreatedExpiryPolicy;
> import javax.cache.expiry.Duration;
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.configuration.CacheConfiguration;
> import org.apache.ignite.configuration.DataRegionConfiguration;
> import org.apache.ignite.configuration.DataStorageConfiguration;
> import org.apache.ignite.configuration.IgniteConfiguration;
> import org.apache.ignite.lang.IgniteOutClosure;
> import org.apache.ignite.spark.JavaIgniteContext;
> import org.apache.ignite.spark.JavaIgniteRDD;
> import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
> import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
> import org.apache.log4j.Level;
> import org.apache.log4j.Logger;
> import org.apache.spark.SparkConf;
> import org.apache.spark.api.java.JavaRDD;
> import org.apache.spark.api.java.JavaSparkContext;
> import static org.apache.ignite.cache.CacheAtomicityMode.ATOMIC;
> import static org.apache.ignite.cache.CacheMode.PARTITIONED;
> import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
> /**
> * This example demonstrates how to create a JavaIgniteRDD and share it with 
> multiple spark workers. The goal of this
> * particular example is to provide the simplest code example of this logic.
> * 
> * This example will start Ignite in the embedded mode and will start a 
> JavaIgniteContext on each Spark worker node.
> * 
> * The example can work in the standalone mode as well that can be enabled by 
> setting JavaIgniteContext's
> * \{@code standalone} property to \{@code true} and running an Ignite node 
> separately with
> * `examples/config/spark/example-shared-rdd.xml` config.
> */
> public class RddExpiration {
> /**
> * Executes the example.
> * @param args Command line arguments, none required.
> */
> public static void main(String args[]) throws InterruptedException {
> Ignite server = null;
> for (int i = 0; i < 4; i++) {
> IgniteConfiguration serverCfg = createIgniteCfg();
> serverCfg.setClientMode(false);
> serverCfg.setIgniteInstanceName("Server" + i);
> server = Ignition.start(serverCfg);
> }
> server.active(true);
> // Spark Configuration.
> SparkConf sparkConf = new SparkConf()
> .setAppName("JavaIgniteRDDExample")
> .setMaster("local")
> .set("spark.executor.instances", "2");
> // Spark context.
> JavaSparkContext sparkContext = new JavaSparkContext(sparkConf);
> // Adjust the logger to exclude the logs of no interest.
> Logger.getRootLogger().setLevel(Level.ERROR);
> Logger.getLogger("org.apache.ignite").setLevel(Level.INFO);
> // Creates Ignite context with specific configuration and runs Ignite in the 
> embedded mode.
> JavaIgniteContext<Integer, Integer> igniteContext = new JavaIgniteContext<Integer, Integer>(
> sparkContext,
> new IgniteOutClosure<IgniteConfiguration>() {
> @Override public IgniteConfiguration apply() {
> return createIgniteCfg();
> }
> },
> true);
> // Create a Java Ignite RDD of Type (Int,Int) Integer Pair.
> JavaIgniteRDD<Integer, Integer> sharedRDD = igniteContext.<Integer, Integer>fromCache("sharedRDD");
> long start = System.currentTimeMillis();
> long totalLoaded = 0;
> while(System.currentTimeMillis() - start < 55_000) {
> // Define data to be stored in the Ignite RDD (cache).
> List<Integer> data = new ArrayList<>(20_000);
> for (int i = 0; i < 20_000; i++)
> data.add(i);
> // Preparing a Java RDD.
> JavaRDD<Integer> javaRDD = sparkContext.parallelize(data);
> sharedRDD.saveValues(javaRDD);
> totalLoaded += 20_000;
> }
> System.out.println("Loaded " + totalLoaded);
> for (;;) {
> System.out.println(">>> Iterating over Ignite Shared RDD...");
> IgniteCache<Integer, Integer> cache = server.getOrCreateCache("sharedRDD");
> AtomicLong recordsLeft = new AtomicLong(0);
> for (Cache.Entry<Integer, Integer> entry : cache) {
> recordsLeft.incrementAndGet();
> }
> System.out.println("Left: " + recordsLeft.get());
> }
> // Close IgniteContext on all the workers.
> // igniteContext.close(true);
> }
> private static IgniteConfiguration createIgniteCfg() {
> 
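The reproducer above is cut off before the body of `createIgniteCfg()`, which stays truncated. For orientation only, the imports in the reproducer (CreatedExpiryPolicy, Duration, TcpDiscoveryVmIpFinder, cache-mode constants) suggest a configuration along the following lines; this is a hedged reconstruction, not the reporter's actual code, and the TTL value and addresses are assumptions (java.util.concurrent.TimeUnit is additionally required):

```java
private static IgniteConfiguration createIgniteCfg() {
    // Static-IP discovery on the local host, as the imports suggest.
    TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
    ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47509"));

    CacheConfiguration<Integer, Integer> cacheCfg =
        new CacheConfiguration<Integer, Integer>("sharedRDD")
            .setCacheMode(PARTITIONED)
            .setAtomicityMode(ATOMIC)
            .setWriteSynchronizationMode(FULL_SYNC)
            // Entries expire shortly after creation; this expiration is
            // what triggers the reported exception after saveValues().
            .setExpiryPolicyFactory(
                CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 10)));

    return new IgniteConfiguration()
        .setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder))
        .setCacheConfiguration(cacheCfg);
}
```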

[jira] [Updated] (IGNITE-12141) Ignite Spark Integration Support Schema on Table Write

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-12141:
-
Fix Version/s: 2.9

> Ignite Spark Integration Support Schema on Table Write
> --
>
> Key: IGNITE-12141
> URL: https://issues.apache.org/jira/browse/IGNITE-12141
> Project: Ignite
>  Issue Type: Improvement
>  Components: spark
>Affects Versions: 2.9
>Reporter: Manoj G T
>Assignee: Alexey Zinoviev
>Priority: Critical
> Fix For: 2.9
>
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> Ignite 2.6 doesn't allow to create table on any schema other than Public 
> Schema and this is the reason for not supporting "OPTION_SCHEMA" during 
> Overwrite mode. Now that Ignite supports to create the table in any given 
> schema it will be great if we can incorporate the changes to support 
> "OPTION_SCHEMA" during Overwrite mode and make it available as part of next 
> Ignite release.
>  
> +Related Issue:+
> [https://stackoverflow.com/questions/57782033/apache-ignite-spark-integration-not-working-with-schema-name]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (IGNITE-9229) Integration with Spark Data Frame. Add support for a ARRAY data type

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev reassigned IGNITE-9229:
---

Assignee: Alexey Zinoviev

> Integration with Spark Data Frame. Add support for a ARRAY data type
> 
>
> Key: IGNITE-9229
> URL: https://issues.apache.org/jira/browse/IGNITE-9229
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.6
>Reporter: Nikolay Izhikov
>Assignee: Alexey Zinoviev
>Priority: Critical
>
> Currently, the integration with Spark Data Frames doesn't support the ARRAY data type.
> We need to add support for it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-9229) Integration with Spark Data Frame. Add support for a ARRAY data type

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-9229:

Affects Version/s: (was: 2.6)
   2.9

> Integration with Spark Data Frame. Add support for a ARRAY data type
> 
>
> Key: IGNITE-9229
> URL: https://issues.apache.org/jira/browse/IGNITE-9229
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.9
>Reporter: Nikolay Izhikov
>Assignee: Alexey Zinoviev
>Priority: Critical
>
> Currently, the integration with Spark Data Frames doesn't support the ARRAY data type.
> We need to add support for it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12159) Ignite spark doesn't support Alter Column syntax

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-12159:
-
Affects Version/s: (was: 2.7.5)
   2.8

> Ignite spark doesn't support Alter Column syntax
> 
>
> Key: IGNITE-12159
> URL: https://issues.apache.org/jira/browse/IGNITE-12159
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.8
>Reporter: Andrey Aleksandrov
>Assignee: Alexey Zinoviev
>Priority: Critical
>  Labels: await
> Fix For: 2.8
>
>
> Steps:
> 1) Start the server.
>  2) Run the following SQL commands:
> CREATE TABLE person (id LONG, name VARCHAR(64), age LONG, city_id DOUBLE, 
> zip_code LONG, PRIMARY KEY (name)) WITH "backups=1"
>  ALTER TABLE person ADD COLUMN (first_name VARCHAR(64), last_name VARCHAR(64))
> 3) After that, run the following Spark code:
>     String configPath = "client.xml";
>
>     SparkConf sparkConf = new SparkConf()
>         .setMaster("local")
>         .setAppName("Example");
>
>     IgniteSparkSession igniteSession = IgniteSparkSession.builder()
>         .appName("Spark Ignite catalog example")
>         .master("local")
>         .config("ignite.disableSparkSQLOptimization", true)
>         .igniteConfig(configPath)
>         .getOrCreate();
>
>     Dataset<Row> df2 = igniteSession.sql("select * from person");
>     df2.show();
> The result will contain only the 5 columns from the CREATE TABLE call.
> [http://apache-ignite-users.70518.x6.nabble.com/Altered-sql-table-adding-new-columns-does-not-reflect-in-Spark-shell-td29265.html]





[jira] [Updated] (IGNITE-12159) Ignite spark doesn't support Alter Column syntax

2019-10-09 Thread Alexey Zinoviev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Zinoviev updated IGNITE-12159:
-
Labels: await  (was: )

> Ignite spark doesn't support Alter Column syntax
> 
>
> Key: IGNITE-12159
> URL: https://issues.apache.org/jira/browse/IGNITE-12159
> Project: Ignite
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.7.5
>Reporter: Andrey Aleksandrov
>Assignee: Alexey Zinoviev
>Priority: Critical
>  Labels: await
> Fix For: 2.8
>
>
> Steps:
> 1) Start the server.
>  2) Run the following SQL commands:
> CREATE TABLE person (id LONG, name VARCHAR(64), age LONG, city_id DOUBLE, 
> zip_code LONG, PRIMARY KEY (name)) WITH "backups=1"
>  ALTER TABLE person ADD COLUMN (first_name VARCHAR(64), last_name VARCHAR(64))
> 3) After that, run the following Spark code:
>     String configPath = "client.xml";
>
>     SparkConf sparkConf = new SparkConf()
>         .setMaster("local")
>         .setAppName("Example");
>
>     IgniteSparkSession igniteSession = IgniteSparkSession.builder()
>         .appName("Spark Ignite catalog example")
>         .master("local")
>         .config("ignite.disableSparkSQLOptimization", true)
>         .igniteConfig(configPath)
>         .getOrCreate();
>
>     Dataset<Row> df2 = igniteSession.sql("select * from person");
>     df2.show();
> The result will contain only the 5 columns from the CREATE TABLE call.
> [http://apache-ignite-users.70518.x6.nabble.com/Altered-sql-table-adding-new-columns-does-not-reflect-in-Spark-shell-td29265.html]





[jira] [Commented] (IGNITE-12189) Implement correct limit for TextQuery

2019-10-09 Thread Andrey Mashenkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947429#comment-16947429
 ] 

Andrey Mashenkov commented on IGNITE-12189:
---

Yuri,
The Check Code Style test failed due to:
[10:00:03][ERROR] 
/opt/buildagent/work/7bc1c54bc719b67c/modules/indexing/src/test/java/org/apache/ignite/internal/processors/cache/GridCacheFullTextQuerySelfTest.java:71:
 'VARIABLE_DEF' should be separated from previous statement. 
[EmptyLineSeparator]

I've noticed that you marked all my PR notes as resolved, but not all of the 
issues are actually fixed.
Please check whether you forgot to push some changes.

The "Queries 1" test results look odd: the failures don't look flaky, and I 
can't figure out how your changes could affect them.
Possibly you created your branch from a rather old master in which these tests 
were broken.
Please merge your branch (PR) with the latest master and re-run the tests.

> Implement correct limit for TextQuery
> -
>
> Key: IGNITE-12189
> URL: https://issues.apache.org/jira/browse/IGNITE-12189
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Reporter: Yuriy Shuliha 
>Assignee: Yuriy Shuliha 
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> PROBLEM
> Currently, each server node returns all matching records to the client node, 
> which may amount to thousands or even hundreds of thousands of records, even 
> if we need only the first 10-100. All the results are added to the queue in 
> _*GridCacheQueryFutureAdapter*_ in arbitrary page order.
>  There is no way to deliver a deterministic result.
> SOLUTION
>  Implement _*limit*_ as a parameter of _*TextQuery*_ and 
> _*GridCacheQueryRequest*_.
>  It should be passed as the limit parameter to Lucene's 
> _*IndexSearcher.search()*_ in _*GridLuceneIndex*_.
> For distributed queries, _*limit*_ will also trim the response queue when 
> merging results.
> Type: long
>  Special value: 0 -> no limit (Integer.MAX_VALUE);
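The reducer-side trimming described above can be sketched as a small standalone example: pages of results arrive from each server node, are concatenated, and the merged stream is cut off at the limit. This is only an illustration of the idea, not Ignite's actual implementation; the names `LimitMergeSketch`, `mergePages`, and `LIMIT_NONE` are hypothetical.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LimitMergeSketch {
    // Special value from the ticket: 0 means "no limit".
    static final int LIMIT_NONE = 0;

    // Concatenate per-node result pages in arrival order,
    // then trim the merged stream to the requested limit.
    static <T> List<T> mergePages(List<List<T>> pages, int limit) {
        long cap = (limit == LIMIT_NONE) ? Integer.MAX_VALUE : limit;
        return pages.stream()
            .flatMap(List::stream)
            .limit(cap)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<List<Integer>> pages = Arrays.asList(
            Arrays.asList(1, 2, 3),  // page from node A
            Arrays.asList(4, 5, 6)); // page from node B

        System.out.println(mergePages(pages, 4));          // [1, 2, 3, 4]
        System.out.println(mergePages(pages, LIMIT_NONE)); // [1, 2, 3, 4, 5, 6]
    }
}
```

With a limit of 4, only the first four merged records survive; with the special value 0, everything is kept, matching the "no limit" semantics in the description.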


