Re: Recap of Ignite ML presentation at ApacheCon

2019-09-18 Thread Denis Magda
Alexey, thanks, that's the source code I referred to as Payments Fraud Detection:
https://github.com/dmagda/ignite-ml-examples/blob/master/logistic-regression-example/src/main/java/org/apache/ignite/example/ClientNode.java

-
Denis


On Wed, Sep 18, 2019 at 2:27 AM Alexey Zinoviev 
wrote:

> Cool presentation, thank you for evangelizing the ML module.
> Could you share the link to the code from the demo: Payments Fraud Detection?
>
> Tue, Sep 17, 2019 at 02:25, Denis Magda:
>
> > Hey Ignite dev and user communities,
> >
> > Those of you who are passionate about ML, and Ignite ML in particular: we
> > continue spreading the word about this very capable module.
> >
> > I had an opportunity to join ApacheCon this year and here is a
> > professionally written recap of my talk:
> > https://www.infoq.com/news/2019/09/continuous-deep-learning-ignite
> >
> > Slides for those of you who are curious or keep promoting Ignite ML in
> > other parts of the world (ping me if you need PPT for your
> presentations):
> >
> >
> https://www.slideshare.net/DenisMagda/continuous-machine-and-deep-learning-with-apache-ignite
> >
> > -
> > Denis
> >
>


Re: [SparkDataFrame] Query Optimization. Prototype

2019-09-18 Thread liyuj

Hi Nikolay,

Because in the discussion below there is a list stating that ROWNUM
is not applicable to Ignite, and I want to know why.
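For context, ROWNUM assumes a single, sequential row stream produced by one scan. In a distributed engine such as Ignite, result rows arrive from many nodes in nondeterministic order, so a row number is only well-defined after the reducer merges the per-node results into one deterministic order. A minimal, self-contained sketch of that idea (an illustration only, not Ignite code; the class name is invented for the example):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustration only (not Ignite code): a global row number can only be
// assigned on the reducer, after per-node result streams are merged into
// one deterministic order.
public class RownumSketch {
    public static List<String> numberRows(List<List<String>> perNodeResults) {
        List<String> merged = new ArrayList<>();
        for (List<String> nodeRows : perNodeResults)
            merged.addAll(nodeRows);

        // Without an explicit ORDER BY the merge order is nondeterministic;
        // sort here to make the numbering reproducible for the example.
        merged.sort(String::compareTo);

        List<String> numbered = new ArrayList<>();
        for (int i = 0; i < merged.size(); i++)
            numbered.add((i + 1) + ": " + merged.get(i));
        return numbered;
    }

    public static void main(String[] args) {
        List<String> res = numberRows(Arrays.asList(
            Arrays.asList("b", "d"),   // rows from node 1
            Arrays.asList("a", "c"))); // rows from node 2
        System.out.println(res); // prints [1: a, 2: b, 3: c, 4: d]
    }
}
```

This is also why the per-node functions in Serge's list below are hard to support: their values depend on node-local execution state rather than on the merged result.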


On 2019/9/18 11:14 PM, Nikolay Izhikov wrote:

Hello, liyuj.

Please, clarify.

Do you want to contribute this to Ignite?
What explanation do you expect?

On Wed, 18/09/2019 at 22:46 +0800, liyuj wrote:

Hi,

Can anyone explain how difficult it is to implement ROWNUM?

This is a very common requirement.

On 2018/1/23 4:05 PM, Serge Puchnin wrote:

Yes, the CAST function is supported by both Ignite and H2.

I've updated the documentation for the following system functions:
CASEWHEN Function, CAST, CONVERT, TABLE

https://apacheignite-sql.readme.io/docs/system-functions

And to my mind, the following functions aren't applicable to Ignite:
ARRAY_GET, ARRAY_LENGTH, ARRAY_CONTAINS, CSVREAD, CSVWRITE, DATABASE,
DATABASE_PATH, DISK_SPACE_USED, FILE_READ, FILE_WRITE, LINK_SCHEMA,
MEMORY_FREE, MEMORY_USED, LOCK_MODE, LOCK_TIMEOUT, READONLY, CURRVAL,
AUTOCOMMIT, CANCEL_SESSION, IDENTITY, NEXTVAL, ROWNUM, SCHEMA,
SCOPE_IDENTITY, SESSION_ID, SET, TRANSACTION_ID, TRUNCATE_VALUE, USER,
H2VERSION

Also an issue was created to review the current documentation:
https://issues.apache.org/jira/browse/IGNITE-7496

--
BR,
Serge



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/






Re: Proposed Release: 2.7.7 = 2.7.6 + ML + extra fixed bugs

2019-09-18 Thread Denis Magda
Igniters,

2.8 release cycle can last for many months due to a number of issues and
regressions we need to address. It can easily slip to the next year,
especially considering the holiday season. An alternative solution could be a
series of Ignite 2.7.x releases that become more hardened with each
version and ship with a small, manageable changeset. For instance, ML and
Thin Clients optimizations with fixes for newly discovered critical issues
can be released in 2.7.7 and 2.7.8.

As for 2.8, what exactly are we planning to release there?

-
Denis


On Wed, Sep 18, 2019 at 3:43 AM Nikolay Izhikov  wrote:

> Start of December for 2.8 sounds good.
>
> On Wed, 18/09/2019 at 13:38 +0300, Alexey Zinoviev wrote:
> > Sounds good to me, I will be ready to release the ML module by 1 December
> > (end of September is not appropriate for the ML module due to known issues
> > and bugs).
> > If the whole of Ignite can be released by this date I will be happy;
> > otherwise I suggest a minor release by the formula mentioned above.
> >
> > Let's ask the other module maintainers here?
> >
> > Wed, Sep 18, 2019 at 13:00, Maxim Muzafarov:
> >
> > > Folks,
> > >
> > > Agree, we should start a discussion about releasing the Apache Ignite 2.8
> > > version right after 2.7.6 is announced.
> > > Am I missing something?
> > >
> > > On Wed, 18 Sep 2019 at 12:33, Nikolay Izhikov 
> wrote:
> > > >
> > > > > of 2.7 release
> > > >
> > > > It's 2.8, of course. Sorry for the typo.
> > > >
> > > >
> > > > On Wed, 18/09/2019 at 12:36 +0300, Nikolay Izhikov wrote:
> > > > > Hello, Alexey
> > > > >
> > > > > > we have no chance to
> > > > > > release the whole master branch
> > > > >
> > > > > Why do you think so?
> > > > > From my point of view, we should start discussion of the 2.7 release
> > >
> > > shortly (end of September or similar)
> > > > >
> > > > > On Wed, 18/09/2019 at 12:04 +0300, Alexey Zinoviev wrote:
> > > > > > Dear Igniters, for the whole year I have been waiting for the next
> release of
> > > > > > Apache Ignite, numbered 2.8 or 3.0. As I understand, we have no
> > >
> > > chance to
> > > > > > release the whole master branch, but we have cool changes in ML
> > >
> > > component
> > > > > > that have not been released since 2.7 (November 2018).
> > > > > >
> > > > > > I'm ready to become a release manager of the special release with
> > >
> > > the taste
> > > > > > of new ML by the formula: 2.7.6 plus new ML plus all known fixed
> > >
> > > issues for
> > > > > > 2.7.6 release.
> > > > > >
> > > > > > Reason: ML has a limited dependency on core/sql component and
> could
> > >
> > > be
> > > > > > delivered without major changes in 2.7.6
> > > > > >
> > > > > > I hope to find support from the community side and maybe we
> > >
> > > deliver
> > > > > > other components like Thin Clients, Spark integration and
> another
> > >
> > > ready
> > > > > > component without strong ties with core component in the future
> > >
> > > following
> > > > > > this approach (in release 2.7.9, for example)
> > > > > >
> > > > > > What do you think, Igniters?
>


Re: TDE Master key rotation (Phase-2)

2019-09-18 Thread Nikita Amelchev
Nikolay, thanks for participating.

I have supplemented the design and clarified these points. [1]

[1] https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95652381

Wed, Sep 18, 2019 at 16:48, Nikolay Izhikov:
>
> Hello, Nikita.
>
> Thanks for starting this discussion.
>
> 1. We should add prerequisites for the "master key rotation process" in the design.
> It seems it should be: "New master key available to EncryptionSPI for each
> server node".
>
> 2. Please use code formatting in the wiki. It makes reading easier.
>
> 3. Please clarify the Java API proposal: what will be changed and how.
> AFAIK we need to change EncryptionSPI; this should be covered in the design.
>
> 4. Please clarify the new CLI commands.
> AFAIK we should have 2 commands:
>
> 1. Start the regular master key rotation process.
> 2. Start the local master key rotation process during node recovery (for
> the case when the key was changed while the node was down).
>
> On Wed, 18/09/2019 at 16:09 +0300, Nikita Amelchev wrote:
> > Hi, Igniters.
> >
> > I'm going to implement the ability to rotate the master encryption key
> > (TDE Phase 2). [1]
> > Master key rotation is required in case the key is compromised or at the end
> > of the crypto period (key validity period). I have prepared the design. [2]
> >
> > In brief, master keys will be identified by a String masterKeyId. The
> > concept of the masterKeyId will be added to the cache key encryption
> > process in EncryptionSpi.
> >
> > Users can configure the master key id in IgniteConfiguration and will be
> > able to manage the key rotation process from the Java API, JMX, and CLI:
> >  - ignite.encryption().changeMasterKey(String masterKeyId) - starts the
> > master key rotation process.
> >  - String ignite.encryption().getMasterKeyId() - gets the current master key id.
> >
> > Any thoughts?
> >
> > [1] https://issues.apache.org/jira/browse/IGNITE-12186
> > [2] 
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95652381
> >



-- 
Best wishes,
Amelchev Nikita


[jira] [Created] (IGNITE-12198) GridDiscoveryManager uses hardcoded failure handler

2019-09-18 Thread Rohit Joshi (Jira)
Rohit Joshi created IGNITE-12198:


 Summary: GridDiscoveryManager uses hardcoded failure handler
 Key: IGNITE-12198
 URL: https://issues.apache.org/jira/browse/IGNITE-12198
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.7.5
Reporter: Rohit Joshi


GridDiscoveryManager.onSegmentation() explicitly passes StopNodeFailureHandler 
to FailureProcessor overriding the failureHandler provided in 
IgniteConfiguration.
{code:java}
case RESTART_JVM:
    ctx.failure().process(new FailureContext(FailureType.SEGMENTATION, null), restartProcHnd);

    break;

case STOP:
    ctx.failure().process(new FailureContext(FailureType.SEGMENTATION, null), stopNodeHnd);

    break;
{code}
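A hedged sketch of the expected behavior (simplified types, not Ignite's actual classes): the processor should use the handler from the configuration when one is provided, and fall back to a hardcoded default only when it is absent.

```java
// Simplified sketch (not Ignite's actual classes) of the expected fix:
// the configured failure handler wins; a default handler is only a fallback.
public class FailureHandlerSelection {
    interface FailureHandler { String name(); }

    // Stand-in for the hardcoded StopNodeFailureHandler mentioned in the report.
    static final FailureHandler STOP_NODE = () -> "StopNodeFailureHandler";

    /** Returns the configured handler if present, otherwise the default. */
    static FailureHandler resolve(FailureHandler configured) {
        return configured != null ? configured : STOP_NODE;
    }
}
```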
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: How to free up space on disc after removing entries from IgniteCache with enabled PDS?

2019-09-18 Thread Alexey Goncharuk
Denis,

It's not fundamental, but quite complex. In Postgres, for example, this is
not maintained automatically and store compaction is performed using the
VACUUM FULL command, which acquires an exclusive table lock, so no concurrent
activities on the table are possible.

The solution which Anton suggested does not look easy because it will most
likely significantly hurt performance: it is hard to maintain a data
structure to choose "page from free-list with enough space closest to the
beginning of the file". Overall, we can think of something similar to
postgres, when a space can be freed in some maintenance mode.

Online space cleanup sounds tricky for me, or at least I cannot think about
a plausible solution right away.

--AG
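Anton's "closest to the beginning of the file" selection (quoted below) can be sketched, purely illustratively, with an offset-ordered free-list; the extra ordered bookkeeping on every write is exactly where the performance cost would come from. The class is invented for the example and is not Ignite's data structure.

```java
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch only: a free-list that, instead of returning an
// arbitrary page with enough space, returns the one closest to the start of
// the file, so the tail of the file can eventually be truncated. Keeping
// pages ordered by offset (TreeMap) makes every add/take more expensive than
// a plain free-list; a real implementation would also need an index by free
// space to avoid the linear scan below.
public class OffsetOrderedFreeList {
    // page offset in file -> free bytes on that page
    private final TreeMap<Long, Integer> pages = new TreeMap<>();

    public void addPage(long offset, int freeBytes) {
        pages.put(offset, freeBytes);
    }

    /** Lowest-offset page with at least the requested free space, or -1. */
    public long takeClosestToFileStart(int requiredBytes) {
        for (Map.Entry<Long, Integer> e : pages.entrySet()) {
            if (e.getValue() >= requiredBytes) {
                pages.remove(e.getKey());
                return e.getKey();
            }
        }
        return -1; // no suitable page before the tail -> file cannot shrink
    }
}
```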

Fri, Sep 13, 2019 at 19:43, Denis Magda:

> The issue starts hitting others who deploy Ignite persistence in
> production:
> https://issues.apache.org/jira/browse/IGNITE-12152
>
> Alex, I'm curious is this a fundamental problem. Asked the same question in
> JIRA but, probably, this discussion is a better place to get to the bottom
> first:
> https://issues.apache.org/jira/browse/IGNITE-10862
>
> -
> Denis
>
>
> On Thu, Jan 10, 2019 at 6:01 AM Anton Vinogradov  wrote:
>
> > Dmitriy,
> >
> > This does not look like a production-ready case :)
> >
> > How about
> > 1) Once you need to write an entry - you have to choose not a random "page
> > from free-list with enough space"
> > but the "page from free-list with enough space closest to the beginning of
> > the file".
> >
> > 2) Once you remove an entry you have to merge the rest of the entries on
> > this page into the
> > "page from free-list with enough space closest to the beginning of the
> > file"
> > if possible. (optional)
> >
> > 3) A partition file tail with empty pages can be removed at any time.
> >
> > 4) In case you have cold data inside the tail, just lock the page and
> > perform migration to
> > "page from free-list with enough space closest to the beginning of the
> > file".
> > This operation can be scheduled.
> >
> > On Wed, Jan 9, 2019 at 4:43 PM Dmitriy Pavlov 
> wrote:
> >
> > > In the TC Bot, I used to create a second cache named CacheV2 and
> > > migrate the needed data from cache V1 to V2.
> > >
> > > After CacheV1 destroy(), files are removed and disk space is freed.
> > >
> > > Wed, Jan 9, 2019 at 12:04, Ivan Pavlukhin:
> > >
> > > > Vyacheslav,
> > > >
> > > > Have you investigated how other vendors (Oracle, Postgres) tackle
> this
> > > > problem?
> > > >
> > > > I have one wild idea. Could the problem be solved by stopping a node
> > > > which needs to be defragmented, clearing its persistence files and
> > > > restarting the node? After rebalance the node will receive all data
> > > > back without fragmentation. I see a big downside -- sending data
> > > > across the network. But perhaps we can play with affinity and start a
> > > > new node on the same host which will receive the same data; after that
> > > > the old node can be stopped. It looks more like a workaround but
> > > > perhaps it can be turned into a workable solution.
> > > >
> > > > Wed, Jan 9, 2019 at 10:49, Vyacheslav Daradur:
> > > > >
> > > > > Yes, it's about Page Memory defragmentation.
> > > > >
> > > > > Pages in partition files are stored sequentially; possibly it makes
> > > > > sense to defragment pages first to avoid inter-page gaps, since we
> > > > > use page offsets to manage them.
> > > > >
> > > > > I filed an issue [1]; I hope we will be able to find resources to
> > > > > solve the issue before the 2.8 release.
> > > > >
> > > > > [1] https://issues.apache.org/jira/browse/IGNITE-10862
> > > > >
> > > > > On Sat, Dec 29, 2018 at 10:47 AM Ivan Pavlukhin <
> vololo...@gmail.com>
> > > > wrote:
> > > > > >
> > > > > > I suppose it is about Ignite Page Memory pages defragmentation.
> > > > > >
> > > > > > We can get 100 allocated pages each of which becomes only e.g.
> 50%
> > > > > > filled after removing some entries. But they will occupy a space
> for
> > > > > > 100 pages on a hard drive.
> > > > > >
> > > > > > Fri, Dec 28, 2018 at 20:45, Denis Magda:
> > > > > > >
> > > > > > > Shouldn't the OS take care of defragmentation? What we need to do is
> > to
> > > > give a
> > > > > > > way to remove stale data and "release" the allocated space
> > somehow
> > > > through
> > > > > > > the tools, MBeans or API methods.
> > > > > > >
> > > > > > > --
> > > > > > > Denis
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Dec 28, 2018 at 6:24 AM Vladimir Ozerov <
> > > > voze...@gridgain.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi Vyacheslav,
> > > > > > > >
> > > > > > > > AFAIK this is not implemented. Shrinking/defragmentation is an
> > > > important
> > > > > > > > optimization. Not only because it releases free space, but
> also
> > > > because it
> > > > > > > > decreases the total number of pages. But it is not very easy to
> > > > implement, as
> > > > > > > > you have to both reshuffle data entries and index entries,
> > > 

[jira] [Created] (IGNITE-12197) Incorrect way for getting value of persistent enabled in CacheGroupMetricsImpl

2019-09-18 Thread Andrey Gura (Jira)
Andrey Gura created IGNITE-12197:


 Summary: Incorrect way for getting value of persistent enabled in 
CacheGroupMetricsImpl
 Key: IGNITE-12197
 URL: https://issues.apache.org/jira/browse/IGNITE-12197
 Project: Ignite
  Issue Type: Improvement
Reporter: Andrey Gura
Assignee: Andrey Gura
 Fix For: 2.8


IGNITE-12027 introduced a possible bug due to an incorrect way of getting the 
value of the persistence-enabled property in {{CacheGroupMetricsImpl}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[DISCUSSION] The rebalancing process metrics update

2019-09-18 Thread Maxim Muzafarov
Igniters,


Currently, from my perspective, Apache Ignite has very raw
rebalance process metrics. Moreover, the most interesting metrics are
related to the Cache, not the CacheGroup, and require enabling cache
statistics, which can affect node performance.

Some of the metrics are not working as expected. For instance, the
`EstimatedRebalancingKeys` metric from time to time returns `-1` due
to internal issues which require investigation (check [1] for
details). Another metric, `rebalanceKeysReceived`, is treated as a
CacheMetric but is in fact calculated for the whole cache group, see the [2]
comment (e.g. historical rebalance, see IGNITE-11330 and the code block
comment below). It confuses Ignite users.


I think the rebalance process metrics must be reworked and some issues
fixed, and I invite you to participate in this discussion.


WHAT TO DO


I've posted my thoughts in the description of the issue [3]. Here are
some details.

All such metrics (or their analogues) must be available in
CacheGroupMetrics, and I'd like to suggest the following steps:

Phase-1

rebalancingPartitionsLeft long metric
rebalancingReceivedKeys long metric
rebalancingReceivedBytes long metric
rebalancingStartTime long metric
rebalancingFinishTime long metric

It is not possible to get the actual values of rebalanced partitions
from `LocalNodeMovingPartitionsCount`, since when an empty node
joins the cluster we own partitions and enable WAL simultaneously for all
the partitions at once. Partitions are actually transferred, but not
yet owned. That's why the `rebalancingPartitionsLeft` metric is needed, from
my point of view.

Phase-2

rebalancingExpectedKeys long metric
rebalancingExpectedBytes long metric
rebalancingEvictedPartitionsLeft long metric

Investigation is needed into the issues with the calculation of the
estimated rebalancing keys count for the full and historical rebalance
processes and their actual partition sizes. These metrics must be
calculated before a new rebalance starts for each cache group on the
rebalancing node, so the user can see the real values of how many keys
will be rebalanced and can estimate the rebalance process
finish time using the monitoring system that he uses.

Phase-3 (statistics must be enabled)

rebalancingKeysRate HitRate metric
rebalancingBytesRate HitRate metric

Currently, I've observed a lot of CPU consumption (up to 100%) for the
calculation of this type of metric. I think it should be investigated
too, and these metrics must be disabled by default.
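For reference, a HitRate-style metric is usually a time-bucketed counter along the lines of the sketch below (an illustration only, not Ignite's implementation); the per-update bucket bookkeeping on a hot path like rebalancing is one plausible source of the CPU cost.

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Illustration only: a fixed-window rate metric. Every hit has to locate and
// possibly reset its time bucket, which is the kind of per-update work that
// becomes visible when invoked for every rebalanced key or byte.
public class HitRateSketch {
    private final long bucketMillis;
    private final AtomicLongArray buckets;      // hit counts per bucket
    private final AtomicLongArray bucketStarts; // start timestamp per bucket

    public HitRateSketch(long windowMillis, int bucketCnt) {
        this.bucketMillis = windowMillis / bucketCnt;
        this.buckets = new AtomicLongArray(bucketCnt);
        this.bucketStarts = new AtomicLongArray(bucketCnt);
    }

    public void onHit(long now) {
        int idx = (int)((now / bucketMillis) % buckets.length());
        long bucketStart = now - now % bucketMillis;
        // Reset the bucket if it belongs to an older lap of the window.
        if (bucketStarts.get(idx) != bucketStart) {
            bucketStarts.set(idx, bucketStart);
            buckets.set(idx, 0);
        }
        buckets.incrementAndGet(idx);
    }

    /** Sum of hits over buckets that still fall inside the window. */
    public long value(long now) {
        long sum = 0;
        for (int i = 0; i < buckets.length(); i++)
            if (now - bucketStarts.get(i) < bucketMillis * buckets.length())
                sum += buckets.get(i);
        return sum;
    }
}
```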

Phase-4

After the rebalance process cache-group-level metrics are
implemented, we need to mark the rebalancing CacheMetrics deprecated and
remove them from the newly introduced metrics framework [4].
Such cache metrics should be implemented in the old-fashioned way (like
they were before the metrics framework was added) to keep backwards
compatibility, and must be removed in Apache Ignite 3.0.


Any thoughts?



[1] 
https://issues.apache.org/jira/browse/IGNITE-11330?focusedCommentId=16867537=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16867537
[2] 
https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionDemander.java#L1134
[3] https://issues.apache.org/jira/browse/IGNITE-12183
[4] https://issues.apache.org/jira/browse/IGNITE-11848


[jira] [Created] (IGNITE-12196) [Phase-4] Deprecate old rebalancing cache metrics

2019-09-18 Thread Maxim Muzafarov (Jira)
Maxim Muzafarov created IGNITE-12196:


 Summary: [Phase-4] Deprecate old rebalancing cache metrics
 Key: IGNITE-12196
 URL: https://issues.apache.org/jira/browse/IGNITE-12196
 Project: Ignite
  Issue Type: Sub-task
Reporter: Maxim Muzafarov


We need to mark the rebalancing CacheMetrics deprecated and remove them from 
the newly introduced metrics framework (IGNITE-11961). Such cache metrics 
should be implemented in the old-fashioned way (like they were before the metrics 
framework was added) to keep backwards compatibility.
Remove them in Apache Ignite 3.0.




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Apache Ignite 2.7.6 (Time, Scope, and Release manager)

2019-09-18 Thread Alexey Goncharuk
Igniters,

I am following the release process procedure [1] and am currently on the docs
update step. I can log in to the admin panel, but I can only see the main
Ignite docs project. Who can add me to the remaining projects (C#/.NET,
C++, SQL, Integrations*, Ignite for Spark, Tools) or perhaps clone the docs
for 2.7.6?

[1] https://cwiki.apache.org/confluence/display/IGNITE/Release+Process

Wed, Sep 11, 2019 at 20:44, Alexey Goncharuk:

> Ivan, thank you!
>
> Will update release notes and start the build shortly.
>
> Wed, Sep 11, 2019 at 20:41, Ivan Rakov:
>
>> Alexey,
>>
>> I've merged https://issues.apache.org/jira/browse/IGNITE-12163 to master
>> and 2.7.6.
>>
>> Best Regards,
>> Ivan Rakov
>>
>> On 11.09.2019 18:13, Alexey Goncharuk wrote:
>> > Good,
>> >
>> > Please let me know when this is done, I will re-upload the release
>> > artifacts.
>> >
>> > Wed, Sep 11, 2019 at 18:11, Alexandr Shapkin:
>> >
>> >> Alexey,
>> >>
>> >> The changes have already been tested, so no TC problems are expected.
>> >> If this is true, then we need just a few hours to merge them.
>> >>
>> >> From: Alexey Goncharuk
>> >> Sent: Wednesday, September 11, 2019 6:03 PM
>> >> To: dev
>> >> Cc: Dmitriy Govorukhin; Anton Kalashnikov
>> >> Subject: Re: Re[2]: Apache Ignite 2.7.6 (Time, Scope, and Release
>> manager)
>> >>
>> >> Alexandr,
>> >>
>> >> I almost sent the vote email :) When do you expect the fix to be in
>> master
>> >> and 2.7.6?
>> >>
>> >> Wed, Sep 11, 2019 at 17:38, Alexandr Shapkin:
>> >>
>> >>> Folks,
>> >>>
>> >>> A critical bug was detected in .NET [1].
>> >>>
>> >>> I understand that it’s a little bit late, but I propose to include
>> this
>> >>> issue into the release scope.
>> >>>
>> >>> PR is ready, currently waiting for a TC visa.
>> >>>
>> >>> Thoughts?
>> >>>
>> >>> [1] - https://issues.apache.org/jira/browse/IGNITE-12163
>> >>>
>> >>>
>> >>> From: Alexey Goncharuk
>> >>> Sent: Monday, September 9, 2019 5:11 PM
>> >>> To: dev
>> >>> Cc: Dmitriy Govorukhin; Anton Kalashnikov
>> >>> Subject: Re: Re[2]: Apache Ignite 2.7.6 (Time, Scope, and Release
>> >> manager)
>> >>> Igniters,
>> >>>
>> >>> I just pushed the last ticket to ignite-2.7.6 branch; looks like we
>> are
>> >>> ready for the next iteration.
>> >>>
>> >>> Given that Dmitriy Pavlov will be unavailable till the end of this
>> week,
>> >> I
>> >>> will take over the release. TC re-run is started.
>> >>>
>> >>> Thu, Sep 5, 2019 at 16:14, Dmitriy Govorukhin <
>> >>> dmitriy.govoruk...@gmail.com>:
>> >>>
>>  Hi Igniters,
>> 
>>  I finished work on
>> https://issues.apache.org/jira/browse/IGNITE-12127,
>> >>> fix
>>  already in master and ignite-2.7.6
>> 
>>  On Wed, Sep 4, 2019 at 2:22 PM Dmitriy Govorukhin <
>>  dmitriy.govoruk...@gmail.com> wrote:
>> 
>> > Hi Alexey,
>> >
>> > I think that I will finish work on the fix tomorrow. Fix already
>>  completed
>> > but I need to get VISA from TC bot.
>> >
>> > On Mon, Sep 2, 2019 at 8:27 PM Alexey Goncharuk <
>> > alexey.goncha...@gmail.com> wrote:
>> >
>> >> Folks, it looks like I was overly optimistic with the estimates for
>> >>> the
>> >> mentioned two tickets.
>> >>
>> >> Dmitriy, Anton,
>> >> Can you share your vision when the issues will be fixed? Perhaps,
>> it
>>  makes
>> >> sense to release 2.7.6 with the already fixed issues and schedule
>> >>> 2.7.7?
>> >> Neither of them is a regression, so it's ok to release 2.7.6 as it
>> >> is
>>  now.
>> >> Thoughts?
>> >>
>> >> Sat, Aug 31, 2019 at 11:37, Alexey Goncharuk <
>>  alexey.goncha...@gmail.com
>> >>> :
>> >>> Yes, my bad, forgot to include the link. That's the one.
>> >>>
>> >>> Fri, Aug 30, 2019 at 15:01, Maxim Muzafarov:
>>  Alexey,
>> 
>>  Is the issue [1] related to this [2] discussion on the
>>  user-list?
>>  If yes, I think it is very important to include these fixes to
>> >>> 2.7.6.
>>  [1] https://issues.apache.org/jira/browse/IGNITE-12127
>>  [2]
>> 
>> >>
>> http://apache-ignite-users.70518.x6.nabble.com/Node-failure-with-quot-Failed-to-write-buffer-quot-error-td29100.html
>>  On Fri, 30 Aug 2019 at 14:26, Alexei Scherbakov
>>   wrote:
>> > Alexey,
>> >
>> > Looks like important fixes, better to include them.
>> >
>> > Fri, Aug 30, 2019 at 12:51, Alexey Goncharuk <
>>  alexey.goncha...@gmail.com>:
>> >> Igniters,
>> >>
>> >> Given that the RC1 vote did not succeed and we are still
>> >>> waiting
>> >> for
>>  a few
>> >> minor fixes, may I suggest including these two tickets in the
>>  2.7.6
>>  scope?
>> >> https://issues.apache.org/jira/browse/IGNITE-12127
>> >> https://issues.apache.org/jira/browse/IGNITE-12128
>> >>
>> >> The first one has been already reported on 

[jira] [Created] (IGNITE-12195) [Phase-3] Rebalance HitRate metrics

2019-09-18 Thread Maxim Muzafarov (Jira)
Maxim Muzafarov created IGNITE-12195:


 Summary: [Phase-3] Rebalance HitRate metrics
 Key: IGNITE-12195
 URL: https://issues.apache.org/jira/browse/IGNITE-12195
 Project: Ignite
  Issue Type: Sub-task
Reporter: Maxim Muzafarov


Currently, the HitRate metric can consume a lot of CPU resources. We need to 
investigate such performance issues and disable these metrics by default.

* rebalancingKeysRate HitRate metric
* rebalancingBytesRate HitRate metric



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12194) [Phase-2] Calculate expected rebalancing cache group keys

2019-09-18 Thread Maxim Muzafarov (Jira)
Maxim Muzafarov created IGNITE-12194:


 Summary: [Phase-2] Calculate expected rebalancing cache group keys
 Key: IGNITE-12194
 URL: https://issues.apache.org/jira/browse/IGNITE-12194
 Project: Ignite
  Issue Type: Sub-task
Reporter: Maxim Muzafarov


We need to implement metrics for the expected number of cache group keys and 
total bytes to be rebalanced. Currently, the 'estimatedKeysCount' cache metric 
returns '-1' in some cases (see the comments on IGNITE-11330).

* rebalancingExpectedKeys long metric
* rebalancingExpectedBytes long metric
* rebalancingEvictedPartitionsLeft long metric



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12193) [Phase-1] Rebalancing cache group keys metric counters

2019-09-18 Thread Maxim Muzafarov (Jira)
Maxim Muzafarov created IGNITE-12193:


 Summary: [Phase-1] Rebalancing cache group keys metric counters
 Key: IGNITE-12193
 URL: https://issues.apache.org/jira/browse/IGNITE-12193
 Project: Ignite
  Issue Type: Sub-task
Reporter: Maxim Muzafarov


Implement metric counters related to the cache group:

* rebalancingPartitionsLeft long metric
* rebalancingReceivedKeys long metric
* rebalancingReceivedBytes long metric
* rebalancingStartTime long metric
* rebalancingFinishTime long metric



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [SparkDataFrame] Query Optimization. Prototype

2019-09-18 Thread Nikolay Izhikov
Hello, liyuj.

Please, clarify.

Do you want to contribute this to Ignite?
What explanation do you expect?

On Wed, 18/09/2019 at 22:46 +0800, liyuj wrote:
> Hi,
> 
> Can anyone explain how difficult it is to implement ROWNUM?
> 
> This is a very common requirement.
> 
> > On 2018/1/23 4:05 PM, Serge Puchnin wrote:
> > Yes, the CAST function is supported by both Ignite and H2.
> > 
> > I've updated the documentation for the following system functions:
> > CASEWHEN Function, CAST, CONVERT, TABLE
> > 
> > https://apacheignite-sql.readme.io/docs/system-functions
> > 
> > And to my mind, the following functions aren't applicable to Ignite:
> > ARRAY_GET, ARRAY_LENGTH, ARRAY_CONTAINS, CSVREAD, CSVWRITE, DATABASE,
> > DATABASE_PATH, DISK_SPACE_USED, FILE_READ, FILE_WRITE, LINK_SCHEMA,
> > MEMORY_FREE, MEMORY_USED, LOCK_MODE, LOCK_TIMEOUT, READONLY, CURRVAL,
> > AUTOCOMMIT, CANCEL_SESSION, IDENTITY, NEXTVAL, ROWNUM, SCHEMA,
> > SCOPE_IDENTITY, SESSION_ID, SET, TRANSACTION_ID, TRUNCATE_VALUE, USER,
> > H2VERSION
> > 
> > Also an issue was created to review the current documentation:
> > https://issues.apache.org/jira/browse/IGNITE-7496
> > 
> > --
> > BR,
> > Serge
> > 
> > 
> > 
> > --
> > Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
> 
> 




Re: [SparkDataFrame] Query Optimization. Prototype

2019-09-18 Thread liyuj

Hi,

Can anyone explain how difficult it is to implement ROWNUM?

This is a very common requirement.

On 2018/1/23 4:05 PM, Serge Puchnin wrote:

Yes, the CAST function is supported by both Ignite and H2.

I've updated the documentation for the following system functions:
CASEWHEN Function, CAST, CONVERT, TABLE

https://apacheignite-sql.readme.io/docs/system-functions

And to my mind, the following functions aren't applicable to Ignite:
ARRAY_GET, ARRAY_LENGTH, ARRAY_CONTAINS, CSVREAD, CSVWRITE, DATABASE,
DATABASE_PATH, DISK_SPACE_USED, FILE_READ, FILE_WRITE, LINK_SCHEMA,
MEMORY_FREE, MEMORY_USED, LOCK_MODE, LOCK_TIMEOUT, READONLY, CURRVAL,
AUTOCOMMIT, CANCEL_SESSION, IDENTITY, NEXTVAL, ROWNUM, SCHEMA,
SCOPE_IDENTITY, SESSION_ID, SET, TRANSACTION_ID, TRUNCATE_VALUE, USER,
H2VERSION

Also an issue was created to review the current documentation:
https://issues.apache.org/jira/browse/IGNITE-7496

--
BR,
Serge



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/




Re: TDE Master key rotation (Phase-2)

2019-09-18 Thread Nikolay Izhikov
Hello, Nikita.

Thanks for starting this discussion.

1. We should add prerequisites for the "master key rotation process" in the design.
It seems it should be: "New master key available to EncryptionSPI for each server
node".

2. Please use code formatting in the wiki. It makes reading easier.

3. Please clarify the Java API proposal: what will be changed and how.
AFAIK we need to change EncryptionSPI; this should be covered in the design.

4. Please clarify the new CLI commands.
AFAIK we should have 2 commands:

1. Start the regular master key rotation process.
2. Start the local master key rotation process during node recovery (for the
case when the key was changed while the node was down).

On Wed, 18/09/2019 at 16:09 +0300, Nikita Amelchev wrote:
> Hi, Igniters.
> 
> I'm going to implement the ability to rotate the master encryption key
> (TDE Phase 2). [1]
> Master key rotation is required in case the key is compromised or at the end
> of the crypto period (key validity period). I have prepared the design. [2]
> 
> In brief, master keys will be identified by a String masterKeyId. The
> concept of the masterKeyId will be added to the cache key encryption
> process in EncryptionSpi.
> 
> Users can configure the master key id in IgniteConfiguration and will be
> able to manage the key rotation process from the Java API, JMX, and CLI:
>  - ignite.encryption().changeMasterKey(String masterKeyId) - starts the
> master key rotation process.
>  - String ignite.encryption().getMasterKeyId() - gets the current master key id.
> 
> Any thoughts?
> 
> [1] https://issues.apache.org/jira/browse/IGNITE-12186
> [2] https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95652381
> 




[jira] [Created] (IGNITE-12192) visorcmd hangs if you ^D to quit

2019-09-18 Thread Stephen Darlington (Jira)
Stephen Darlington created IGNITE-12192:
---

 Summary: visorcmd hangs if you ^D to quit
 Key: IGNITE-12192
 URL: https://issues.apache.org/jira/browse/IGNITE-12192
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.7.5
Reporter: Stephen Darlington
Assignee: Stephen Darlington


You can exit most Unix utilities by pressing ^D. However, if you do that in 
ignitevisorcmd, the process hangs. You have to press ^C to quit.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [IEP-35] Monitoring & Profiling. Phase 2

2019-09-18 Thread Alex Plehanov
One more point to discuss: wouldn't it be better to have system views
enabled by default?
To enable views the admin must restart the node, which is sometimes an issue.
Views cost almost nothing in terms of performance until they are explicitly
requested, so is there a reason to disable views by default?

Tue, Sep 17, 2019 at 12:48, Alexey Goncharuk:

> Folks,
>
> I honestly tried to follow the discussion, but I think that I lost the
> point of the debate. Should we try to exploit the newly introduced slack to
> discuss the change and then send a follow-up here?
>
> --AG
>


TDE Master key rotation (Phase-2)

2019-09-18 Thread Nikita Amelchev
Hi, Igniters.

I'm going to implement the ability to rotate the master encryption key
(TDE Phase 2). [1]
Master key rotation is required in case the key is compromised or at the end
of the crypto period (key validity period). I have prepared the design. [2]

In brief, master keys will be identified by a String masterKeyId. The
concept of the masterKeyId will be added to the cache key encryption
process in EncryptionSpi.

Users can configure the master key id in IgniteConfiguration and will be
able to manage the key rotation process from the Java API, JMX, and CLI:
 - ignite.encryption().changeMasterKey(String masterKeyId) - starts the
master key rotation process.
 - String ignite.encryption().getMasterKeyId() - gets the current master key id.
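The proposed flow can be illustrated with a minimal, self-contained sketch (the class and its toy "wrapping" are invented for the example; the real work happens in EncryptionSpi and is covered by the design [2]): the cache group keys stay the same, they are only re-encrypted under the new master key identified by masterKeyId.

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch, not Ignite code: cache group keys are wrapped
// (encrypted) by a master key identified by masterKeyId. Rotating the master
// key re-wraps every cache key under the new master key; the cache keys
// themselves, and therefore the encrypted data, are untouched.
public class MasterKeyRotationSketch {
    private String masterKeyId = "master-v1";
    // cacheId -> cache key "wrapped" by the current master key (toy encoding)
    private final Map<Integer, String> wrappedCacheKeys = new HashMap<>();

    private String wrap(String cacheKey, String masterId) {
        return masterId + ":" + cacheKey; // stand-in for real encryption
    }

    public void addCacheKey(int cacheId, String cacheKey) {
        wrappedCacheKeys.put(cacheId, wrap(cacheKey, masterKeyId));
    }

    /** Analogue of ignite.encryption().changeMasterKey(newId). */
    public void changeMasterKey(String newMasterKeyId) {
        for (Map.Entry<Integer, String> e : wrappedCacheKeys.entrySet()) {
            String cacheKey = e.getValue().substring(masterKeyId.length() + 1); // unwrap
            e.setValue(wrap(cacheKey, newMasterKeyId)); // re-wrap
        }
        masterKeyId = newMasterKeyId;
    }

    /** Analogue of ignite.encryption().getMasterKeyId(). */
    public String getMasterKeyId() {
        return masterKeyId;
    }
}
```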

Any thoughts?

[1] https://issues.apache.org/jira/browse/IGNITE-12186
[2] https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95652381

-- 
Best wishes,
Amelchev Nikita


[jira] [Created] (IGNITE-12191) Add support of system view and metric to Visor

2019-09-18 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created IGNITE-12191:


 Summary: Add support of system view and metric to Visor
 Key: IGNITE-12191
 URL: https://issues.apache.org/jira/browse/IGNITE-12191
 Project: Ignite
  Issue Type: Improvement
Reporter: Nikolay Izhikov


After Ignite obtains the new {{SystemView}} and {{Metric}} APIs, we should add
support for them to Visor.

Users should be able to query and view any system view content or metric
registry values.





[jira] [Created] (IGNITE-12190) Throw an exception for enabled text queries on persistent caches

2019-09-18 Thread Yuriy Shuliha (Jira)
Yuriy Shuliha  created IGNITE-12190:
---

 Summary: Throw an exception for enabled text queries on persistent 
caches
 Key: IGNITE-12190
 URL: https://issues.apache.org/jira/browse/IGNITE-12190
 Project: Ignite
  Issue Type: Improvement
  Components: general
Reporter: Yuriy Shuliha 
Assignee: Yuriy Shuliha 
 Fix For: 2.7.6








[jira] [Created] (IGNITE-12189) Implement correct limit for TextQuery

2019-09-18 Thread Yuriy Shuliha (Jira)
Yuriy Shuliha  created IGNITE-12189:
---

 Summary: Implement correct limit for TextQuery
 Key: IGNITE-12189
 URL: https://issues.apache.org/jira/browse/IGNITE-12189
 Project: Ignite
  Issue Type: Improvement
  Components: general
Reporter: Yuriy Shuliha 
Assignee: Yuriy Shuliha 
 Fix For: 2.7.6


PROBLEM

Currently, each server node returns all response records to the client node,
which may amount to thousands or hundreds of thousands of records,
even if we need only the first 10-100. Moreover, all the results are added to
a queue in _*GridCacheQueryFutureAdapter*_ in arbitrary order, page by page.
There is no way to deliver a deterministic result.

SOLUTION
Implement _*limit*_ as a parameter for _*TextQuery*_ and
_*GridCacheQueryRequest*_.
It should be passed as the limit parameter to Lucene's _*IndexSearcher.search()*_
in _*GridLuceneIndex*_.

For distributed queries, _*limit*_ will also trim the response queue when merging
results.

Type: long
Special value: 0 -> no limit (Long.MAX_VALUE)
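The limit semantics above (0 meaning no limit, and trimming while merging per-node response pages) can be sketched standalone. effectiveLimit and mergeWithLimit are illustrative helpers, not the actual GridCacheQueryFutureAdapter code:

```java
import java.util.ArrayList;
import java.util.List;

public class LimitMergeSketch {
    /** Normalizes the special value: 0 means no limit (Long.MAX_VALUE). */
    static long effectiveLimit(long limit) {
        return limit == 0 ? Long.MAX_VALUE : limit;
    }

    /** Merges response pages from several nodes, trimming to the limit. */
    static <T> List<T> mergeWithLimit(List<List<T>> pages, long limit) {
        long lim = effectiveLimit(limit);
        List<T> res = new ArrayList<>();
        for (List<T> page : pages) {
            for (T row : page) {
                if (res.size() >= lim)
                    return res; // trim: stop as soon as the limit is reached
                res.add(row);
            }
        }
        return res;
    }

    public static void main(String[] args) {
        List<List<Integer>> pages = List.of(List.of(1, 2, 3), List.of(4, 5));
        System.out.println(mergeWithLimit(pages, 4)); // prints [1, 2, 3, 4]
        System.out.println(mergeWithLimit(pages, 0)); // no limit: [1, 2, 3, 4, 5]
    }
}
```

In the real implementation the same trim would be applied while draining pages from the response queue, so server nodes never deliver more rows to the client than the limit allows.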





[jira] [Created] (IGNITE-12188) Metric CacheGroupMetrics.IndexBuildCountPartitionsLeft doesn't work correctly if there is more then one cache in cache group

2019-09-18 Thread Aleksey Plekhanov (Jira)
Aleksey Plekhanov created IGNITE-12188:
--

 Summary: Metric CacheGroupMetrics.IndexBuildCountPartitionsLeft 
doesn't work correctly if there is more then one cache in cache group
 Key: IGNITE-12188
 URL: https://issues.apache.org/jira/browse/IGNITE-12188
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.8
Reporter: Aleksey Plekhanov


The initial partitions counter is set to the total number of partitions for the
cache group, but it is decremented each time a partition is processed for each cache.

Reproducer:
{code:java}
@Test
public void testIndexRebuildMetrics() throws Exception {
    IgniteConfiguration cfg = new IgniteConfiguration()
        .setConsistentId("ignite")
        .setDataStorageConfiguration(new DataStorageConfiguration()
            .setDefaultDataRegionConfiguration(new DataRegionConfiguration().setPersistenceEnabled(true)))
        .setCacheConfiguration(
            new CacheConfiguration("c1").setGroupName("grp")
                .setAffinity(new RendezvousAffinityFunction().setPartitions(10))
                .setIndexedTypes(Integer.class, Integer.class),
            new CacheConfiguration("c2").setGroupName("grp")
                .setAffinity(new RendezvousAffinityFunction().setPartitions(10))
                .setIndexedTypes(Integer.class, Integer.class)
        );

    IgniteEx ignite = startGrid(cfg);

    ignite.cluster().active(true);

    for (int i = 0; i < 10_000; i++) {
        ignite.cache("c1").put(i, i);
        ignite.cache("c2").put(i, i);
    }

    ignite.cluster().active(false);

    File dir = U.resolveWorkDirectory(U.defaultWorkDirectory(), DFLT_STORE_DIR, false);

    U.delete(new File(dir, "ignite/cacheGroup-grp/index.bin"));

    ignite.cluster().active(true);

    doSleep(1_000L);

    assertTrue(ignite.context().cache().cache("c1").context().group().metrics().getIndexBuildCountPartitionsLeft() >= 0);
}
{code}
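The arithmetic behind the bug can be shown without Ignite: the counter starts at the group's partition count but is decremented once per (cache, partition) pair, so with two caches in the group it goes negative. This is a plain illustration, not Ignite code:

```java
public class IndexBuildCounterBug {
    /**
     * Simulates the buggy metric: the counter is initialized to the
     * group's partition count but is decremented per cache per partition.
     */
    static int buggyCounter(int partitions, int caches) {
        int left = partitions; // initialized once for the whole group
        for (int c = 0; c < caches; c++)
            for (int p = 0; p < partitions; p++)
                left--; // decremented for every cache, not once per partition
        return left;
    }

    public static void main(String[] args) {
        // 10 partitions, 2 caches in the group: ends at -10 instead of 0.
        System.out.println(buggyCounter(10, 2)); // prints -10
    }
}
```

A fix would either initialize the counter to partitions * caches or decrement it only once per partition, whichever matches the metric's documented meaning.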





Re: Proposed Release: 2.7.7 = 2.7.6 + ML + extra fixed bugs

2019-09-18 Thread Nikolay Izhikov
Start of December for 2.8 sounds good.

On Wed, 18/09/2019 at 13:38 +0300, Alexey Zinoviev wrote:
> Sounds good to me; I will be ready to release the ML module by 1 December
> (end of September is not appropriate for the ML module due to known issues and
> bugs).
> If the whole of Ignite could be released by this date I will be happy;
> otherwise I suggest the minor release by the formula mentioned above.
> 
> Let's ask the other module maintainers here?
> 
Wed, 18 Sep 2019 at 13:00, Maxim Muzafarov :
> 
> > Folks,
> > 
> > Agree, we should start a discussion about releasing Apache Ignite 2.8
> > version right after 2.7.6 is announced.
> > Am I missing something?
> > 
> > On Wed, 18 Sep 2019 at 12:33, Nikolay Izhikov  wrote:
> > > 
> > > > of 2.7 release
> > > 
> > > It's 2.8, of course. Sorry for the typo.
> > > 
> > > 
> > > On Wed, 18/09/2019 at 12:36 +0300, Nikolay Izhikov wrote:
> > > > Hello, Alexey
> > > > 
> > > > > we have no chance to
> > > > > release the whole master branch
> > > > 
> > > > Why do you think so?
> > > > From my point of view, we should start the discussion of the 2.7 release
> > > > shortly (end of September or similar)
> > > > 
> > > On Wed, 18/09/2019 at 12:04 +0300, Alexey Zinoviev wrote:
> > > > > Dear Igniters, for the whole year I'm waiting for the next release of
> > > > > Apache Ignite with number 2.8 or 3.0. As I understand we have no
> > 
> > chance to
> > > > > release the whole master branch, but we have cool changes in ML
> > 
> > component
> > > > > that were not released since 2.7 (November 2018).
> > > > > 
> > > > > I'm ready to become a release manager of the special release with
> > 
> > the taste
> > > > > of new ML by the formula: 2.7.6 plus new ML plus all known fixed
> > 
> > issues for
> > > > > 2.7.6 release.
> > > > > 
> > > > > Reason: ML has a limited dependency on core/sql component and could
> > 
> > be
> > > > > delivered without major changes in 2.7.6
> > > > > 
> > > > > I hope to find support from the community side and maybe we
> > 
> > deliver
> > > > > another components like Thin Clients, Spark integration and another
> > 
> > ready
> > > > > component without strong ties with core component in the future
> > 
> > according
> > > > > this way (in release 2.7.9 for example)
> > > > > 
> > > > > What do you think, Igniters?




Re: Proposed Release: 2.7.7 = 2.7.6 + ML + extra fixed bugs

2019-09-18 Thread Alexey Zinoviev
Sounds good for me, I will be ready to release ML module till 1 December
(end of September is not appropriate for ML module due to known issues and
bugs)
If the whole Ignite could be released to this date I will be happy in
another case I suggest the minor release by the formula mentioned above.

Let's ask another module maintainers here?

Wed, 18 Sep 2019 at 13:00, Maxim Muzafarov :

> Folks,
>
> Agree, we should start a discussion about releasing Apache Ignite 2.8
> version right after 2.7.6 is announced.
> Am I missing something?
>
> On Wed, 18 Sep 2019 at 12:33, Nikolay Izhikov  wrote:
> >
> > > of 2.7 release
> >
> > It's 2.8, of course. Sorry for the typo.
> >
> >
> > On Wed, 18/09/2019 at 12:36 +0300, Nikolay Izhikov wrote:
> > > Hello, Alexey
> > >
> > > > we have no chance to
> > > > release the whole master branch
> > >
> > > Why do you think so?
> > > From my point of view, we should start the discussion of the 2.7 release
> > > shortly (end of September or similar)
> > >
> > > On Wed, 18/09/2019 at 12:04 +0300, Alexey Zinoviev wrote:
> > > > Dear Igniters, for the whole year I'm waiting for the next release of
> > > > Apache Ignite with number 2.8 or 3.0. As I understand we have no
> chance to
> > > > release the whole master branch, but we have cool changes in ML
> component
> > > > that were not released since 2.7 (November 2018).
> > > >
> > > > I'm ready to become a release manager of the special release with
> the taste
> > > > of new ML by the formula: 2.7.6 plus new ML plus all known fixed
> issues for
> > > > 2.7.6 release.
> > > >
> > > > Reason: ML has a limited dependency on core/sql component and could
> be
> > > > delivered without major changes in 2.7.6
> > > >
> > > > I hope to find support from the community side and maybe we
> deliver
> > > > another components like Thin Clients, Spark integration and another
> ready
> > > > component without strong ties with core component in the future
> according
> > > > this way (in release 2.7.9 for example)
> > > >
> > > > What do you think, Igniters?
>


Re: Non-blocking PME Phase One (Node fail)

2019-09-18 Thread Anton Vinogradov
Alexey,

>> Can you describe the complete protocol changes first
The current goal is to find a better way.
I had at least 5 scenarios discarded because of corner cases that were found
(thanks to Alexey Scherbakov, Aleksei Plekhanov and Nikita Amelchev).
That's why I explained what we are able to improve and why I think it works.

>> we need to remove this property, not add new logic that relies on it.
Agree.

>> How are you going to synchronize this?
Thanks for providing this case; it seems it discards the #1 + #2.2 case, and #2.1
is still possible with some optimizations.

The "zombie eating transactions" case can be theoretically solved, I think.
As I said before, we may perform a "Local switch" in case the affinity was not
changed (except for the failed node's miss); other cases require a regular PME.
In this case, the new primary is an ex-backup, and we expect that the old
primary will try to update the new primary as a backup.
The new primary will handle operations as a backup until it is notified that it
is the primary now.
Operations from the ex-primary will be discarded or remapped once the new
primary is notified it became the primary.

Discarding is a big improvement,
remapping is a huge improvement,
but there is no 100% guarantee that the ex-primary will try to update the new
primary as a backup.

There are a lot of corner cases here.
So, it seems a minimized sync is the better solution.

Finally, according to your and Alexey Scherbakov's comments, the better
option is just to improve PME to wait for less, at least for now.
It seems we have to wait for (or cancel; I vote for cancelling, any
objections?) operations related to the failed primaries, and wait for the
recovery to finish (which is fast).
In case the affinity was changed and a backup-primary switch (not related to
the failed primaries) is required between the owners, or even a rebalance is
required, we should just ignore this and perform only a "Recovery PME".
A regular PME should happen later (if necessary); it can even be delayed (if
necessary).

Sounds good?

On Wed, Sep 18, 2019 at 11:46 AM Alexey Goncharuk <
alexey.goncha...@gmail.com> wrote:

> Anton,
>
> I looked through the presentation and the ticket but did not find any new
> protocol description that you are going to implement. Can you describe the
> complete protocol changes first (to save time for both you and during the
> later review)?
>
> Some questions that I have in mind:
>  * It looks like for the "Local Switch" optimization you assume that node
> failure happens immediately for the whole cluster. This is not true - some
> nodes may "see" a node A failure, while others still consider it alive.
> Moreover, node A may not know yet that it is about to be failed and process
> requests correctly. This may happen, for example, due to a long GC pause on
> node A. In this case, node A resumes its execution and proceeds to work as
> a primary (it has not received a segmentation event yet), node B also did
> not receive the A FAILED event yet. Node C received the event, ran the
> "Local switch" and assumed a primary role, node D also received the A
> FAILED event and switched to the new primary. Transactions from nodes B and
> D will be processed simultaneously on different primaries. How are you
> going to synchronize this?
>  * IGNITE_MAX_COMPLETED_TX_COUNT is fragile and we need to remove this
> property, not add new logic that relies on it. There is no way a user can
> calculate this property or adjust it at runtime. If a user decreases this
> property below a safe value, we will get inconsistent update counters and
> cluster desync. Besides, IGNITE_MAX_COMPLETED_TX_COUNT is quite a large
> value and will push HWM forward very quickly, much faster than during
> regular updates (you will have to increment it for each partition)
>
Wed, 18 Sep 2019 at 10:53, Anton Vinogradov :
>
> > Igniters,
> >
> > Recently we had a discussion devoted to the non-blocking PME.
> > We agreed that the most important case is blocking on node failure, and it
> > can be split into:
> >
> > 1) Affected partition’s operations latency will be increased by node
> > failure detection duration.
> > So, some operations may be frozen for 10+ seconds at real clusters just
> > waiting for a failed primary response.
> > In other words, some operations will be blocked even before blocking PME
> > started.
> >
> > The good news here is that a bigger cluster decreases the blocked operations
> > percent.
> >
> > The bad news is that these operations may block non-affected operations at
> > - customers code (single_thread/striped pool usage)
> > - multikey operations (tx1 locked A and waits for failed B,
> > non-affected tx2 waits for A)
> > - striped pools inside AI (when some task waits for tx.op() in sync way
> and
> > the striped thread is busy)
> > - etc ...
> >
> > It seems we already, thanks to StopNodeFailureHandler (if configured), always
> > send a node-left event before node stop to minimize the waiting period.
> > So, only the cases that cause a hang without a stop are the problems now.
> >
> > Anyway, some additional research is required here, and it would be nice if
> > someone were willing to help.

[jira] [Created] (IGNITE-12187) [IEP-35] Move GridMetricManager and dependent classes to org.apache.ignite.internal.managers

2019-09-18 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created IGNITE-12187:


 Summary: [IEP-35] Move GridMetricManager and dependent classes to 
org.apache.ignite.internal.managers
 Key: IGNITE-12187
 URL: https://issues.apache.org/jira/browse/IGNITE-12187
 Project: Ignite
  Issue Type: Improvement
Reporter: Nikolay Izhikov
 Fix For: 2.8


Right now metric internal classes are in 
{{org.apache.ignite.internal.processors.metric}} package.

Seems, we should move it to {{org.apache.ignite.internal.managers.metric}} 





Re: Proposed Release: 2.7.7 = 2.7.6 + ML + extra fixed bugs

2019-09-18 Thread Maxim Muzafarov
Folks,

Agree, we should start a discussion about releasing Apache Ignite 2.8
version right after 2.7.6 is announced.
Am I missing something?

On Wed, 18 Sep 2019 at 12:33, Nikolay Izhikov  wrote:
>
> > of 2.7 release
>
> > It's 2.8, of course. Sorry for the typo.
>
>
> > On Wed, 18/09/2019 at 12:36 +0300, Nikolay Izhikov wrote:
> > Hello, Alexey
> >
> > > we have no chance to
> > > release the whole master branch
> >
> > Why do you think so?
> > From my point of view, we should start the discussion of the 2.7 release
> > shortly (end of September or similar)
> >
> > On Wed, 18/09/2019 at 12:04 +0300, Alexey Zinoviev wrote:
> > > Dear Igniters, for the whole year I'm waiting for the next release of
> > > Apache Ignite with number 2.8 or 3.0. As I understand we have no chance to
> > > release the whole master branch, but we have cool changes in ML component
> > > that were not released since 2.7 (November 2018).
> > >
> > > I'm ready to become a release manager of the special release with the 
> > > taste
> > > of new ML by the formula: 2.7.6 plus new ML plus all known fixed issues 
> > > for
> > > 2.7.6 release.
> > >
> > > Reason: ML has a limited dependency on core/sql component and could be
> > > delivered without major changes in 2.7.6
> > >
> > > I hope to find support from the community side, and maybe we can deliver
> > > other components like Thin Clients, Spark integration, and other ready
> > > components without strong ties to the core component in the future,
> > > following this approach (in release 2.7.9, for example)
> > >
> > > What do you think, Igniters?


[jira] [Created] (IGNITE-12186) TDE - Phase-2. Master key rotation.

2019-09-18 Thread Amelchev Nikita (Jira)
Amelchev Nikita created IGNITE-12186:


 Summary: TDE - Phase-2. Master key rotation.
 Key: IGNITE-12186
 URL: https://issues.apache.org/jira/browse/IGNITE-12186
 Project: Ignite
  Issue Type: Sub-task
Reporter: Amelchev Nikita
Assignee: Amelchev Nikita
 Fix For: 2.8


Need to implement master key rotation process. Master key(MK) rotation required 
in case of it compromising or at the end of crypto period(key validity period). 

[Design 
(cwiki).|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95652381]





Re: Proposed Release: 2.7.7 = 2.7.6 + ML + extra fixed bugs

2019-09-18 Thread Nikolay Izhikov
> of 2.7 release

It's 2.8, of course. Sorry for the typo.


On Wed, 18/09/2019 at 12:36 +0300, Nikolay Izhikov wrote:
> Hello, Alexey
> 
> > we have no chance to
> > release the whole master branch
> 
> Why do you think so?
> From my point of view, we should start the discussion of the 2.7 release shortly (end
> of September or similar)
> 
On Wed, 18/09/2019 at 12:04 +0300, Alexey Zinoviev wrote:
> > Dear Igniters, for the whole year I'm waiting for the next release of
> > Apache Ignite with number 2.8 or 3.0. As I understand we have no chance to
> > release the whole master branch, but we have cool changes in ML component
> > that were not released since 2.7 (November 2018).
> > 
> > I'm ready to become a release manager of the special release with the taste
> > of new ML by the formula: 2.7.6 plus new ML plus all known fixed issues for
> > 2.7.6 release.
> > 
> > Reason: ML has a limited dependency on core/sql component and could be
> > delivered without major changes in 2.7.6
> > 
> > I hope to find support from the community side, and maybe we can deliver
> > other components like Thin Clients, Spark integration, and other ready
> > components without strong ties to the core component in the future,
> > following this approach (in release 2.7.9, for example)
> > 
> > What do you think, Igniters?




Re: Proposed Release: 2.7.7 = 2.7.6 + ML + extra fixed bugs

2019-09-18 Thread Nikolay Izhikov
Hello, Alexey

> we have no chance to
> release the whole master branch

Why do you think so?
From my point of view, we should start the discussion of the 2.7 release shortly (end
of September or similar)

On Wed, 18/09/2019 at 12:04 +0300, Alexey Zinoviev wrote:
> Dear Igniters, for the whole year I'm waiting for the next release of
> Apache Ignite with number 2.8 or 3.0. As I understand we have no chance to
> release the whole master branch, but we have cool changes in ML component
> that were not released since 2.7 (November 2018).
> 
> I'm ready to become a release manager of the special release with the taste
> of new ML by the formula: 2.7.6 plus new ML plus all known fixed issues for
> 2.7.6 release.
> 
> Reason: ML has a limited dependency on core/sql component and could be
> delivered without major changes in 2.7.6
> 
> I hope to find support from the community side, and maybe we can deliver
> other components like Thin Clients, Spark integration, and other ready
> components without strong ties to the core component in the future,
> following this approach (in release 2.7.9, for example)
> 
> What do you think, Igniters?




Re: ML stable and performance

2019-09-18 Thread Alexey Zinoviev
I've started the discussion here
http://apache-ignite-developers.2346864.n4.nabble.com/Proposed-Release-2-7-7-2-7-6-ML-extra-fixed-bugs-td43537.html


Fri, 13 Sep 2019 at 22:12, Alexey Zinoviev :

> The reason was that over the last year there have been no significant releases of
> Ignite between 2.7 and 2.8, only minor releases with a long story of renaming.
> I and the other ML guys are ready to prepare the ML module in 1-2 months for 2.8
> or for the minor release 2.7.7 = 2.7.6 + updated ML + new fixed bugs.
>
> Let's discuss it in a separate thread next week
>
>
>
Fri, 13 Sep 2019 at 21:55, Denis Magda :
>
>> Alexey, I'm wondering,
>>
>> Are there any dependencies on Ignite Core that make us put off releasing the
>> ML changes until 2.8? I know that you do not support the idea of ML as
>> a separate Ignite module, but this concept would allow us to release ML as
>> frequently as we want, without being blocked by Ignite core releases.
>>
>>
>> -
>> Denis
>>
>>
>> On Fri, Sep 13, 2019 at 11:45 AM Alexey Zinoviev 
>> wrote:
>>
>> > I can answer as one of the developers of the ML module.
>> > The ML module is currently available in version 2.7.5; it supports a lot of
>> > algorithms and can be used in production, but the API is not stable and
>> > will be changed in 2.8.
>> >
>> > The ML module will be stable as of the next release, 2.8; also, we have no
>> > performance report to compare with, for example, Spark ML.
>> > Based on my exploration, the performance in terms of Big O notation is
>> > the same as in Spark ML (real numbers show that Ignite ML is faster
>> > than Spark ML due to Ignite's in-memory nature and so on).
>> >
>> > Since 2.8 it will have good integration with TensorFlow, Spark ML, and
>> > XGBoost via model inference.
>> >
>> > As a user, you cannot run, for example, scikit-learn or R
>> > packages in distributed mode over Ignite, but you can run TensorFlow
>> > using Ignite as a distributed back-end instead of Horovod.
>> >
>> > If you have any questions, please let me know
>> >
>> >
>> >
>> > Fri, 13 Sep 2019 at 21:28, Denis Magda :
>> >
>> >> David,
>> >>
>> >> Let me loop in the Ignite dev list, which has Ignite ML experts subscribed.
>> >> Please, could you share more details regarding your performance testing
>> >> and objectives for Ignite ML overall?
>> >>
>> >> The module is ready for production and we're ready to help solve any
>> >> stumbling blocks.
>> >>
>> >> -
>> >> Denis
>> >>
>> >>
>> >> On Fri, Sep 6, 2019 at 4:50 AM David Williams 
>> >> wrote:
>> >>
>> >> > Python is 25 times slower than Java for ML at runtime, which I found
>> >> > out online. But I don't know whether that statement is true or not. I
>> >> > need insiders' opinions. Which other ML packages are the best options
>> >> > for Ignite?
>> >> >
>> >> > On Fri, Sep 6, 2019 at 7:28 PM Mikael 
>> >> wrote:
>> >> >
>> >> >> Hi!
>> >> >>
>> >> >> I have never used it myself, but it's been there for a long time and I
>> >> >> would expect it to be stable; and yes, it will run distributed. I can't
>> >> >> say anything about performance, as I have never used it.
>> >> >>
>> >> >> You will find a lot of more information at:
>> >> >>
>> >> >> https://apacheignite.readme.io/docs/machine-learning
>> >> >>
>> >> >> Mikael
>> >> >>
>> >> >>
>> >> >> On 2019-09-06 at 11:50, David Williams wrote:
>> >> >> >
>> >> >> >
>> >> >> > I am evaluating ML frameworks for the Java platform. I know Ignite
>> >> >> > has an ML package, but I would like to know its stability and
>> >> >> > performance for production. Can Ignite ML code run in a distributed
>> >> >> > way?
>> >> >> >
>> >> >> > Apart from its own ML package, which ML packages are the best
>> >> >> > options for Ignite?
>> >> >>
>> >> >
>> >>
>> >
>>
>


Re: Recap of Ignite ML presentation at ApacheCon

2019-09-18 Thread Alexey Zinoviev
Cool presentation, thank you for the evangelism for the ML module.
Could you share the link to the code from the demo, Payments Fraud Detection?

Tue, 17 Sep 2019 at 02:25, Denis Magda :

> Hey Ignite dev and user communities,
>
> For those of you who are passionate about ML, and Ignite ML in particular: we
> continue spreading the word about this highly capable module.
>
> I had an opportunity to join ApacheCon this year, and here is a
> professionally written recap of my talk:
> https://www.infoq.com/news/2019/09/continuous-deep-learning-ignite
>
> Slides for those of you who are curious or keep promoting Ignite ML in
> other parts of the world (ping me if you need PPT for your presentations):
>
> https://www.slideshare.net/DenisMagda/continuous-machine-and-deep-learning-with-apache-ignite
>
> -
> Denis
>


Proposed Release: 2.7.7 = 2.7.6 + ML + extra fixed bugs

2019-09-18 Thread Alexey Zinoviev
Dear Igniters, for the whole year I have been waiting for the next release of
Apache Ignite, numbered 2.8 or 3.0. As I understand it, we have no chance to
release the whole master branch, but we have cool changes in the ML component
that have not been released since 2.7 (November 2018).

I'm ready to become the release manager of a special release with the taste of
the new ML, by the formula: 2.7.6 plus new ML plus all known fixed issues for
the 2.7.6 release.

Reason: ML has a limited dependency on the core/sql components and could be
delivered without major changes on top of 2.7.6.

I hope to find support from the community side, and maybe we can deliver
other components like Thin Clients, Spark integration, and other ready
components without strong ties to the core component in the future, following
this approach (in release 2.7.9, for example).

What do you think, Igniters?


[jira] [Created] (IGNITE-12185) [IEP-35] Add metrics - how many indexes are left to rebuild

2019-09-18 Thread Aleksandr Brazhnikov (Jira)
Aleksandr Brazhnikov created IGNITE-12185:
-

 Summary: [IEP-35] Add metrics - how many indexes are left to 
rebuild
 Key: IGNITE-12185
 URL: https://issues.apache.org/jira/browse/IGNITE-12185
 Project: Ignite
  Issue Type: Improvement
Reporter: Aleksandr Brazhnikov


We want to know how many indexes are left to rebuild at the moment. The result
must be an integer. The information must be available via JMX, even with
statistics disabled on the caches.





[jira] [Created] (IGNITE-12184) [IEP-35] Add metrics - status of rebuilding indexes

2019-09-18 Thread Aleksandr Brazhnikov (Jira)
Aleksandr Brazhnikov created IGNITE-12184:
-

 Summary: [IEP-35] Add metrics - status of rebuilding indexes
 Key: IGNITE-12184
 URL: https://issues.apache.org/jira/browse/IGNITE-12184
 Project: Ignite
  Issue Type: Improvement
Reporter: Aleksandr Brazhnikov


We want to see when the node is rebuilding indexes (index.bin); the result must
be represented as binary logic (1/0). The information must be available via JMX,
even with statistics disabled on the caches.





[jira] [Created] (IGNITE-12183) [IEP-35] Add metrics rebalancing

2019-09-18 Thread Aleksandr Brazhnikov (Jira)
Aleksandr Brazhnikov created IGNITE-12183:
-

 Summary: [IEP-35] Add metrics rebalancing
 Key: IGNITE-12183
 URL: https://issues.apache.org/jira/browse/IGNITE-12183
 Project: Ignite
  Issue Type: Improvement
Reporter: Aleksandr Brazhnikov


We want to see when rebalancing is performed on the node, including the eviction
process; the result should be represented as binary logic (1/0). The information
must be available via JMX, even with statistics disabled on the caches.





Re: Non-blocking PME Phase One (Node fail)

2019-09-18 Thread Alexey Goncharuk
Anton,

I looked through the presentation and the ticket but did not find any new
protocol description that you are going to implement. Can you describe the
complete protocol changes first (to save time for both you and during the
later review)?

Some questions that I have in mind:
 * It looks like for the "Local Switch" optimization you assume that node
failure happens immediately for the whole cluster. This is not true - some
nodes may "see" a node A failure, while others still consider it alive.
Moreover, node A may not know yet that it is about to be failed and process
requests correctly. This may happen, for example, due to a long GC pause on
node A. In this case, node A resumes its execution and proceeds to work as
a primary (it has not received a segmentation event yet), node B also did
not receive the A FAILED event yet. Node C received the event, ran the
"Local switch" and assumed a primary role, node D also received the A
FAILED event and switched to the new primary. Transactions from nodes B and
D will be processed simultaneously on different primaries. How are you
going to synchronize this?
 * IGNITE_MAX_COMPLETED_TX_COUNT is fragile and we need to remove this
property, not add new logic that relies on it. There is no way a user can
calculate this property or adjust it at runtime. If a user decreases this
property below a safe value, we will get inconsistent update counters and
cluster desync. Besides, IGNITE_MAX_COMPLETED_TX_COUNT is quite a large
value and will push HWM forward very quickly, much faster than during
regular updates (you will have to increment it for each partition)

Wed, 18 Sep 2019 at 10:53, Anton Vinogradov :

> Igniters,
>
> Recently we had a discussion devoted to the non-blocking PME.
> We agreed that the most important case is blocking on node failure, and it
> can be split into:
>
> 1) Affected partition’s operations latency will be increased by node
> failure detection duration.
> So, some operations may be frozen for 10+ seconds at real clusters just
> waiting for a failed primary response.
> In other words, some operations will be blocked even before blocking PME
> started.
>
> The good news here is that a bigger cluster decreases the blocked operations
> percent.
>
> The bad news is that these operations may block non-affected operations at
> - customers code (single_thread/striped pool usage)
> - multikey operations (tx1 locked A and waits for failed B,
> non-affected tx2 waits for A)
> - striped pools inside AI (when some task waits for tx.op() in sync way and
> the striped thread is busy)
> - etc ...
>
> It seems we already, thanks to StopNodeFailureHandler (if configured), always
> send a node-left event before node stop to minimize the waiting period.
> So, only the cases that cause a hang without a stop are the problems now.
>
> Anyway, some additional research is required here, and it would be nice if
> someone were willing to help.
>
> 2) Some optimizations may speed-up node left case (eliminate upcoming
> operations blocking).
> A full list can be found at presentation [1].
> List contains 8 optimizations, but I propose to implement some at phase one
> and the rest at phase two.
> Assuming that real production deployment has Baseline enabled we able to
> gain speed-up by implementing the following:
>
> #1 Switch on node_fail/node_left event locally instead of starting real PME
> (Local switch).
> Since BLT is enabled, we are always able to switch to the new-affinity primaries
> (no need to preload partitions).
> In case we're not able to switch to new-affinity primaries (all missing or
> BLT disabled), we'll just start a regular PME.
> The new-primary calculation can be performed locally or by the coordinator
> (eg. attached to the node_fail message).
>
> #2 We should not wait for the completion of any already started operations
> (since they are not related to failed primary partitions).
> The only problem is recovery, which may cause update-counter duplication
> in case of an unsynced HWM.
>
> #2.1 We may wait only for recovery completion (Micro-blocking switch).
> Just block (all at this phase) upcoming operations during the recovery by
> incrementing the topology version.
> So in other words, it will be some kind of PME with waiting, but it will
> wait for recovery (fast) instead of finishing current operations (long).
>
> #2.2 Recovery, theoretically, can be async.
> We have to solve unsynced HWM issue (to avoid concurrent usage of the same
> counters) to make it happen.
> We may just increment HWM with IGNITE_MAX_COMPLETED_TX_COUNT at new-primary
> and continue recovery in an async way.
> Currently, IGNITE_MAX_COMPLETED_TX_COUNT specifies the number of committed
> transactions we expect between "the first backup committed tx1" and "the
> last backup committed the same tx1".
> I propose to use it to specify the number of prepared transactions we
> expect between "the first backup prepared tx1" and "the last backup
> prepared the same tx1".
> Both cases look pretty similar.
> In this case, we able 

[jira] [Created] (IGNITE-12182) ExecutorService is inconsistent with Compute and Services, runs on clients

2019-09-18 Thread Stephen Darlington (Jira)
Stephen Darlington created IGNITE-12182:
---

 Summary: ExecutorService is inconsistent with Compute and 
Services, runs on clients
 Key: IGNITE-12182
 URL: https://issues.apache.org/jira/browse/IGNITE-12182
 Project: Ignite
  Issue Type: Bug
  Components: compute
Affects Versions: 2.7.5
Reporter: Stephen Darlington
Assignee: Stephen Darlington


In IGNITE-860 the default behaviour was changed so that compute and service 
tasks would be executed only on server nodes. This is a sensible default and 
it's confusing that it's not also true for jobs run using the ExecutorService.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Non-blocking PME Phase One (Node fail)

2019-09-18 Thread Anton Vinogradov
Igniters,

Recently we had a discussion devoted to the non-blocking PME.
We agreed that the most important case is blocking on node failure, and it
can be split into:

1) The latency of operations on affected partitions will be increased by the
node failure detection duration.
So, some operations may be frozen for 10+ seconds on real clusters just
waiting for a response from a failed primary.
In other words, some operations will be blocked even before the blocking PME
starts.

The good news here is that a bigger cluster decreases the percentage of
blocked operations.

The bad news is that these operations may block non-affected operations in:
- customer code (single-thread/striped pool usage)
- multi-key operations (tx1 locked key A and waits for the failed B, while
the non-affected tx2 waits for A)
- striped pools inside AI (when some task waits for tx.op() in a sync way
and the striped thread is busy)
- etc.

It seems that, thanks to StopNodeFailureHandler (if configured), we already
send a node-left event before the node stops, to minimize the waiting period.
So, only the cases that cause a hang without a stop are a problem now.

Anyway, some additional research is required here, and it would be nice if
someone were willing to help.

2) Some optimizations may speed up the node-left case (eliminate blocking of
upcoming operations).
A full list can be found in the presentation [1].
The list contains 8 optimizations, but I propose to implement some in phase
one and the rest in phase two.
Assuming that a real production deployment has Baseline enabled, we are able
to gain a speed-up by implementing the following:

#1 Switch on the node_fail/node_left event locally instead of starting a real
PME (Local switch).
Since BLT is enabled, we are always able to switch to the new-affinity
primaries (no need to preload partitions).
In case we're not able to switch to the new-affinity primaries (all missing
or BLT disabled), we'll just start a regular PME.
The new-primary calculation can be performed locally or by the coordinator
(e.g. attached to the node_fail message).
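To make #1 concrete, here is a minimal, self-contained sketch of the decision rule (all names are hypothetical illustrations, not actual Ignite internals):

```java
import java.util.List;

public class LocalSwitchDecision {
    enum Action { LOCAL_SWITCH, FULL_PME }

    /**
     * Hypothetical decision rule for #1: on a node_fail/node_left event,
     * switch locally to the new-affinity primaries when Baseline Topology
     * is enabled and at least one new-affinity owner is alive; otherwise
     * fall back to a regular (blocking) PME.
     */
    static Action onNodeLeft(boolean baselineEnabled, List<String> aliveNewPrimaries) {
        return baselineEnabled && !aliveNewPrimaries.isEmpty()
            ? Action.LOCAL_SWITCH
            : Action.FULL_PME;
    }

    public static void main(String[] args) {
        System.out.println(onNodeLeft(true, List.of("nodeB")));  // LOCAL_SWITCH
        System.out.println(onNodeLeft(true, List.of()));         // FULL_PME
        System.out.println(onNodeLeft(false, List.of("nodeB"))); // FULL_PME
    }
}
```

The real check would of course consult the affinity assignment and partition states rather than a flat list of node ids.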

#2 We should not wait for the completion of any already started operations
(since they are not related to the failed primary's partitions).
The only problem is recovery, which may cause update-counter duplication in
case of an unsynced HWM.

#2.1 We may wait only for recovery completion (Micro-blocking switch).
Just block (all of them, at this phase) upcoming operations during the
recovery by incrementing the topology version.
So, in other words, it will be some kind of PME with waiting, but it will
wait for recovery (fast) instead of for current operations to finish (long).

#2.2 Recovery, theoretically, can be async.
We have to solve the unsynced HWM issue (to avoid concurrent usage of the
same counters) to make it happen.
We may just increment the HWM by IGNITE_MAX_COMPLETED_TX_COUNT at the new
primary and continue recovery in an async way.
Currently, IGNITE_MAX_COMPLETED_TX_COUNT specifies the number of committed
transactions we expect between "the first backup committed tx1" and "the
last backup committed the same tx1".
I propose to use it to specify the number of prepared transactions we expect
between "the first backup prepared tx1" and "the last backup prepared the
same tx1".
Both cases look pretty similar.
In this case, we are able to make the switch fully non-blocking, with async
recovery.
Thoughts?
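A minimal model of the HWM-reservation idea from #2.2 (the class, method names, and constant value are illustrative stand-ins, not the real Ignite update counters):

```java
import java.util.concurrent.atomic.AtomicLong;

public class CounterReservation {
    /**
     * Stand-in for IGNITE_MAX_COMPLETED_TX_COUNT: an assumed upper bound on
     * the number of transactions still in the prepared state on backups at
     * the moment the old primary failed.
     */
    static final long MAX_PREPARED_TX_COUNT = 262_144;

    private final AtomicLong hwm = new AtomicLong();

    /**
     * On becoming the new primary, jump the HWM forward so the skipped range
     * stays reserved for transactions recovered asynchronously.
     * Returns the base of the reserved range.
     */
    long reserveRecoveryRange() {
        return hwm.getAndAdd(MAX_PREPARED_TX_COUNT);
    }

    /** Update counters for new operations started on the new primary. */
    long nextCounter() {
        return hwm.incrementAndGet();
    }

    public static void main(String[] args) {
        CounterReservation c = new CounterReservation();
        long base = c.reserveRecoveryRange(); // recovery may use (base, base + MAX]
        long fresh = c.nextCounter();         // new ops land above the reserved range
        System.out.println(fresh > base + MAX_PREPARED_TX_COUNT); // true
    }
}
```

The point of the sketch: once the range is reserved, new operations can be assigned counters immediately, so recovery no longer has to block them.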

So, I'm going to implement both improvements under the "Lightweight version
of partitions map exchange" issue [2] if no one minds.

[1]
https://docs.google.com/presentation/d/1Ay7OZk_iiJwBCcA8KFOlw6CRmKPXkkyxCXy_JNg4b0Q/edit?usp=sharing
[2] https://issues.apache.org/jira/browse/IGNITE-9913


Re: Text queries/indexes (GridLuceneIndex, @QueryTextField)

2019-09-18 Thread Alexei Scherbakov
Denis,

I like the idea of throwing an exception for enabled text queries on
persistent caches.

Also, I'm fine with the proposed limit for unsorted searches.

Yury, please proceed with ticket creation.
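For reference, the limit-plus-merge reduction described in the quoted proposal below boils down to a k-way merge on the query-originating node that stops at the limit. A self-contained illustration follows (this is not the actual GridCacheQueryFutureAdapter code, just the algorithm):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class LimitedMerge {
    /**
     * K-way merge of per-node result lists (each assumed already sorted by
     * cmp) that stops as soon as `limit` entries are produced, so the client
     * never materializes the full result set from every server node.
     */
    static <T> List<T> mergeWithLimit(List<List<T>> perNode, Comparator<T> cmp, int limit) {
        // Heap entries are {nodeIndex, positionInThatNodeList},
        // ordered by the element they currently point at.
        PriorityQueue<int[]> heads = new PriorityQueue<>(
            Comparator.comparing((int[] h) -> perNode.get(h[0]).get(h[1]), cmp));

        for (int n = 0; n < perNode.size(); n++)
            if (!perNode.get(n).isEmpty())
                heads.add(new int[] {n, 0});

        List<T> out = new ArrayList<>(limit);

        while (!heads.isEmpty() && out.size() < limit) {
            int[] h = heads.poll();
            List<T> src = perNode.get(h[0]);
            out.add(src.get(h[1]));

            // Advance this node's cursor if it has more results.
            if (h[1] + 1 < src.size())
                heads.add(new int[] {h[0], h[1] + 1});
        }
        return out;
    }

    public static void main(String[] args) {
        // Three "server nodes", each returning its hits already sorted.
        List<List<String>> pages = List.of(
            List.of("ene1", "ene4"),
            List.of("ene2", "ene5", "ene6"),
            List.of("ene3"));

        System.out.println(mergeWithLimit(pages, Comparator.<String>naturalOrder(), 4));
        // [ene1, ene2, ene3, ene4]
    }
}
```

With per-node pre-sorting plus this merge, only about `limit` entries per node ever need to travel to the client, which is the load reduction the proposal is after.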

Tue, Sep 17, 2019, 22:06 Denis Magda :

> Igniters,
>
> I see nothing wrong with Yury's proposal in regards full-text search API
> evolution as long as Yury is ready to push it forward.
>
> As for the in-memory-only mode, it makes total sense for in-memory data
> grid deployments where Ignite caches data of an underlying DB like Postgres.
> As part of the changes, I would simply throw an exception (by default) if
> one attempts to use text indices with the native persistence enabled.
> If the person is ready to live with that limitation, an explicit
> configuration change can work around the exception.
>
> Thoughts?
>
>
> -
> Denis
>
>
> On Tue, Sep 17, 2019 at 7:44 AM Yuriy Shuliga  wrote:
>
> > Hello to all again,
> >
> > Thank you for the important comments and notes given below!
> >
> > Let me answer and continue the discussion.
> >
> > (I) Overall needs in Lucene indexing
> >
> > Alexei has referred to
> > https://issues.apache.org/jira/browse/IGNITE-5371 where the
> > absence of index persistence was declared an obstacle to further
> > development.
> >
> > a) This ticket is already closed as not valid.
> > b) There are definite needs (in our project as well) for just in-memory
> > indexing of selected data.
> > We intend to use search capabilities for fetching a limited number of
> > records to be used in type-ahead search / suggestions.
> > Not all of the data will be indexed, and there is no need for the Lucene
> > index to be persistent. Hopefully this is a common pattern of text-search
> > usage.
> >
> > (II) Necessary fixes in current implementation.
> >
> > a) Implementation of a correct *limit* (*offset* seems not to be required
> > in text-search tasks for now).
> > I have investigated the data flow for distributed text queries. It was a
> > simple test prefix query, like *name='ene*'*.
> > For now, each server node returns all response records to the client node,
> > and that may be ~thousands, even ~hundreds of thousands of records,
> > even if we need only the first 10-100. Again, all the results are added to
> > a queue in GridCacheQueryFutureAdapter in arbitrary order, by pages.
> > I did not find any means here to deliver a deterministic result.
> > So implementing limit as part of the query (and GridCacheQueryRequest)
> > will not change the nature of the response but will limit the load on
> > nodes and networking.
> >
> > Can we consider opening a ticket for this?
> >
> > (III) Further extension of the Lucene API exposure in Ignite
> >
> > a) Sorting
> > The solution for this could be:
> > - Make entities comparable
> > - Add a custom comparator to the entity
> > - Add annotations to mark sorted fields for Lucene indexing
> > - Use comparators when merging responses or reducing to the desired limit
> > on the client node.
> > This will require the full result set to be loaded into memory, though it
> > can be used for relatively small limits.
> > BR,
> > Yuriy Shuliha
> >
Fri, Aug 30, 2019 at 10:37 Alexei Scherbakov <
> alexey.scherbak...@gmail.com>
> > wrote:
> >
> > > Yuriy,
> > >
> > > Note that one of the major blockers for text queries is [1], which
> > > makes lucene indexes unusable with persistence and is the main reason
> > > for discontinuation.
> > > It should probably be addressed first to make text queries a valid
> > > product feature.
> > >
> > > Distributed sorting and advanced querying is indeed not a trivial task.
> > > Some kind of merging must be implemented on the query-originating node.
> > >
> > > [1] https://issues.apache.org/jira/browse/IGNITE-5371
> > >
> > > Thu, Aug 29, 2019 at 23:38, Denis Magda :
> > >
> > > > Yuriy,
> > > >
> > > > If you are ready to take over the full-text search indexes then
> > > > please go ahead. The primary reason why the community wants to
> > > > discontinue them first (and, probably, resurrect them later) is the
> > > > limitations listed by Andrey and the minimal support from the
> > > > community's end.
> > > >
> > > > -
> > > > Denis
> > > >
> > > >
> > > > On Thu, Aug 29, 2019 at 1:29 PM Andrey Mashenkov <
> > > > andrey.mashen...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi Yuriy,
> > > > >
> > > > > Unfortunately, there is a plan to discontinue TextQueries in
> > > > > Ignite [1].
> > > > > The motivation here is that text indexes are not persistent, not
> > > > > transactional, and can't be used together with SQL or inside SQL,
> > > > > and there is a lack of interest from the community side.
> > > > > You are welcome to take on these issues and make TextQueries great.
> > > > >
> > > > > 1. PageSize can't be used to limit the result set.
> > > > > Query results are returned from the data node to the client-side
> > > > > cursor in a page-by-page manner, and
> > > > > this parameter is designed to control the page size. It is supposed
> > > > > that the query executes lazily on the server side and
> > > > > it is not