Re: The Spark 2.4 support

2019-09-30 Thread Ivan Pavlukhin
Alexey, Nikolay,

Thank you for sharing details!

Tue, Oct 1, 2019 at 07:42, Alexey Zinoviev:
>
> Great talk and paper, I've learnt it last year
>
> Mon, Sep 30, 2019, 21:42 Nikolay Izhikov:
>
> > Yes, I can :)
> >
> > On Mon, 30/09/2019 at 11:40 -0700, Denis Magda wrote:
> > > Nikolay,
> > >
> > > Would you be able to review the changes? I'm not sure there is a better
> > candidate for now.
> > >
> > > -
> > > Denis
> > >
> > >
> > > On Mon, Sep 30, 2019 at 11:01 AM Nikolay Izhikov 
> > wrote:
> > > > Hello, Ivan.
> > > >
> > > > I gave a talk about the internals of the Spark integration in Ignite.
> > > > It answers the question of why we use Spark internals.
> > > >
> > > > You can take a look at my meetup talk (in Russian) [1] or read an
> > > > article if you prefer text [2].
> > > >
> > > > [1] https://www.youtube.com/watch?v=CzbAweNKEVY
> > > > [2] https://habr.com/ru/company/sberbank/blog/427297/
> > > >
> > > > On Mon, 30/09/2019 at 20:29 +0300, Alexey Zinoviev wrote:
> > > > > Yes, as I understand it, it has used Spark internals from the first commit)))
> > > > > The reason: we take the Spark SQL query execution plan and try to
> > > > > execute it on the Ignite cluster.
> > > > > Also we inherit a lot of Developer API related classes that could be
> > > > > unstable. Spark has no good extension point, and this is the reason
> > > > > why we have to go deeper.
> > > > >
> > > > > Mon, Sep 30, 2019 at 20:17, Ivan Pavlukhin:
> > > > >
> > > > > > Hi Alexey,
> > > > > >
> > > > > > As an external watcher, very far from the Ignite Spark integration, I
> > > > > > would like to ask a humble question for my understanding. Why does this
> > > > > > integration use Spark internals? Is it a common approach for
> > > > > > integrating with Spark?
> > > > > >
> > > > > > Mon, Sep 30, 2019 at 16:17, Alexey Zinoviev <zaleslaw@gmail.com>:
> > > > > > >
> > > > > > > Hi, Igniters,
> > > > > > > I've started work on Spark 2.4 support.
> > > > > > >
> > > > > > > We started the discussion in
> > > > > > > https://issues.apache.org/jira/browse/IGNITE-12054
> > > > > > >
> > > > > > > The Spark internals were totally refactored between the 2.3 and 2.4
> > > > > > > versions; the main changes touch:
> > > > > > >
> > > > > > >- External catalog and listeners refactoring
> > > > > > >- Changes to HAVING operator semantics support
> > > > > > >- Push-down NULL filter generation in JOIN plans
> > > > > > >- Minor changes in plan generation that should be adopted in our
> > > > > > >integration module
> > > > > > >
> > > > > > > I propose an initial solution via the creation of a new module,
> > > > > > > spark-2.4, in https://issues.apache.org/jira/browse/IGNITE-12247 and
> > > > > > > the addition of a new profile, spark-2.4 (to avoid possible clashes
> > > > > > > with other Spark versions).
> > > > > > >
> > > > > > > Also, I've transformed the ticket into an umbrella ticket and
> > > > > > > created a few tickets for muted tests (around 7 of 211 tests are
> > > > > > > muted now).
> > > > > > >
> > > > > > > Please, if somebody is interested in it, make an initial review of
> > > > > > > the modular Ignite structure and changes (without deep diving into
> > > > > > > the Spark code).
> > > > > > >
> > > > > > > And yes, the proposed code is a copy-paste of the spark-ignite
> > > > > > > module with a few fixes.
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Best regards,
> > > > > > Ivan Pavlukhin
> > > > > >
> >



-- 
Best regards,
Ivan Pavlukhin
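[Editor's note] The strategy described above — walking an optimized SQL plan tree and, when a subtree is fully supported, translating it into a single SQL statement that the data source can execute — can be sketched with a deliberately simplified toy model. This is plain Python for illustration only; the `Scan`/`Filter`/`Project` node names and `to_sql` function are invented here and are not the real Spark Catalyst or ignite-spark API:

```python
# Toy illustration of plan push-down: walk a (hypothetical) query plan
# tree and, when every node is supported, emit one SQL statement that
# could be executed on the remote side. Unsupported nodes abort the
# push-down, mirroring the fallback to node-by-node execution.
from dataclasses import dataclass

@dataclass
class Scan:
    table: str

@dataclass
class Filter:
    condition: str   # e.g. "age > 30"
    child: object

@dataclass
class Project:
    columns: list    # e.g. ["name", "age"]
    child: object

def to_sql(plan):
    """Translate a supported plan subtree into one SQL string.

    Returns None when the subtree contains an unsupported node; the
    caller would then fall back to executing the plan in the engine."""
    if isinstance(plan, Project):
        inner = to_sql(plan.child)
        if inner is None:
            return None
        return inner.replace("SELECT *", "SELECT " + ", ".join(plan.columns), 1)
    if isinstance(plan, Filter):
        inner = to_sql(plan.child)
        return None if inner is None else inner + " WHERE " + plan.condition
    if isinstance(plan, Scan):
        return "SELECT * FROM " + plan.table
    return None  # unsupported node => no push-down

plan = Project(["name", "age"], Filter("age > 30", Scan("person")))
print(to_sql(plan))  # SELECT name, age FROM person WHERE age > 30
```

The fragility discussed in the thread comes from the fact that the real plan node classes live in Spark's internal/Developer API and change between minor versions (as they did between 2.3 and 2.4), so a translator like this has to track them release by release.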


Re: The Spark 2.4 support

2019-09-30 Thread Alexey Zinoviev
Great talk and paper, I've learnt it last year



[DISCUSSION][IEP-35] Replace RunningQueryManager with GridSystemViewManager

2019-09-30 Thread Nikolay Izhikov
Hello, Igniters.

Since the last release, `RunningQueryManager` [1] has been added.
It is used to track running queries.

In IEP-35 [2] the SystemView API was added.
The SystemView API is supposed to be used to track all kinds of internal Ignite
objects.

I think `RunningQueryManager` should be replaced [3] with the more unified
SystemView API.

Any objections?

[1] https://issues.apache.org/jira/browse/IGNITE-10754
[2] https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=112820392
[3] https://issues.apache.org/jira/browse/IGNITE-12223
[4] https://issues.apache.org/jira/browse/IGNITE-12224
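[Editor's note] The direction proposed above — exposing running queries through one generic view API instead of a bespoke manager — can be illustrated with a minimal sketch. This is plain Python with invented names (`SystemViewRegistry` is not the IEP-35 API), shown only to make the design idea concrete:

```python
# Hypothetical sketch of the "unified view" idea: instead of a dedicated
# RunningQueryManager, every kind of internal object is exposed through
# one generic registry of named views. All names here are invented.
class SystemViewRegistry:
    def __init__(self):
        self._views = {}

    def register(self, name, row_supplier):
        """row_supplier is a zero-arg callable returning the current rows."""
        self._views[name] = row_supplier

    def walk(self, name):
        """Snapshot the current rows of one view."""
        return list(self._views[name]())

registry = SystemViewRegistry()

# Running SQL queries become just another view, alongside caches, nodes, etc.
running_queries = [
    {"id": 1, "sql": "SELECT * FROM person", "start_time": 1569830000},
]
registry.register("SQL_QUERIES", lambda: running_queries)
registry.register("CACHES", lambda: [{"name": "person"}])

for row in registry.walk("SQL_QUERIES"):
    print(row["id"], row["sql"])
```

The benefit is uniformity: monitoring tools need one walk/export mechanism rather than a separate manager and API per object type.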




Re: New SQL execution engine

2019-09-30 Thread Nikolay Izhikov
Hello, Igniters.

I extended the IEP [1] with tickets caused by H2 limitations.

Please, let's write down the requirements for the engine in the IEP.

https://cwiki.apache.org/confluence/display/IGNITE/IEP-33%3A+New+SQL+executor+engine+infrastructure

On Mon, 30/09/2019 at 17:20 -0700, Denis Magda wrote:
> Ivan, we need more of these discussions, totally agree with you ;)
> 
> I've updated the Motivation paragraph outlining some high-level use cases we
> see by working with our users. Hope it helps. Let's carry on and let me
> send a note to the Apache Calcite community.
> 
> -
> Denis
> 
> 
> On Mon, Sep 30, 2019 at 1:56 AM Ivan Pavlukhin  wrote:
> 
> > Folks,
> > 
> > Thanks everyone for a hot discussion! Not every open source community
> > has such open and boiling discussions. It means that people here
> > really do care. And I am proud of it!
> > 
> > As I understood, nobody is strictly against the proposed initiative.
> > And I am glad that we can move forward (with some steps back along the
> > way).
> > 
> > Fri, Sep 27, 2019 at 19:29, Nikolay Izhikov:
> > > 
> > > Hello, Denis.
> > > 
> > > Thanks for the clarifications.
> > > 
> > > Sounds good for me.
> > > All I try to say in this thread:
> > > Guys, please, let's take a step back and write down requirements (what we
> > > want to get from the SQL engine).
> > > Which features and use cases are primary for us.
> > > 
> > > I'm sure you have done it already during your research.
> > > 
> > > Please, share it with the community.
> > > 
> > > I'm pretty sure we would come back to this document again and again
> > > during the migration.
> > > So a well-written design is worth it.
> > > 
> > > On Fri, 27/09/2019 at 09:10 -0700, Denis Magda wrote:
> > > > Ignite mates, let me try to move the discussion in a constructive way.
> > > > It looks like we set a wrong context from the very beginning.
> > > >
> > > > Before proposing this idea to the community, some of us were
> > > > discussing/researching the topic in different groups (one needs to think
> > > > it through first before even suggesting to consider changes of this
> > > > magnitude). The day has come to share this idea with the whole community
> > > > and outline the next actions. But (!) nobody is 100% sure that that's
> > > > the right decision. Thus, this will be an *experiment*: some of our
> > > > community members will be developing a *prototype*, and only based on
> > > > the prototype outcomes shall we make a final decision. Igor, Roman,
> > > > Ivan, Andrey, hope that nothing has changed and we're on the same page
> > > > here.
> > > >
> > > > Many technical and architectural reasons that justify this project have
> > > > been shared, but let me throw in my perspective. There is nothing wrong
> > > > with H2; that was the right choice for that time. Thanks to H2 and
> > > > Ignite SQL APIs, our project is used across hundreds of deployments that
> > > > are accelerating relational databases or use Ignite as a system of
> > > > record. However, these days many more companies are migrating to
> > > > *distributed* databases that speak SQL. For instance, if a couple of
> > > > years ago 1 out of 10 use cases needed support for multi-join queries,
> > > > queries with subselects, or efficient memory usage, then today there are
> > > > 5 out of 10 use cases of this kind; in the foreseeable future, it will
> > > > be 10 out of 10. So, the evolution is in progress -- the relational
> > > > world goes distributed, and it became exhausting for both Ignite SQL
> > > > maintainers and the experts who help to tune it for production usage to
> > > > keep pace with the evolution, mostly due to the H2 dependency. Thus,
> > > > Ignite SQL has to evolve and has to be ready to face the future reality.
> > > >
> > > > Luckily, we don't need to rush and don't have the right to rush, because
> > > > hundreds of existing users have already trusted their production
> > > > environments to Ignite SQL, and we need to roll out changes with such a
> > > > big impact carefully. So, I'm excited that Roman, Igor, Ivan, and Andrey
> > > > stepped in and agreed to be the first contributors who will be
> > > > *experimenting* with the new SQL engine. Let's support them; let's
> > > > connect them with the Apache Calcite community and see how this story
> > > > evolves. Folks, please keep the community aware of the progress, let us
> > > > know when help is needed; some of us will be ready to support with
> > > > development once you create a solid foundation for the prototype.
> > > > 
> > > > -
> > > > Denis
> > > > 
> > > > 
> > > > On Fri, Sep 27, 2019 at 1:45 AM Igor Seliverstov <gvvinbl...@apache.org>
> > > > wrote:
> > > > 
> > > > > Hi Igniters!
> > > > > 
> > > > > As you might know currently we have many open issues relating to
> > 
> > 

Ignite community is building Calcite-based prototype

2019-09-30 Thread Denis Magda
Hey ASF-mates,

Just wanted to send a note for the Ignite dev community, which has started
prototyping a new Ignite SQL engine; Calcite was selected as the most
favorable option.

We will truly appreciate it if you help us with questions that might hit your
dev list. Ignite folks have already studied Calcite well enough and carried
on with the integration, but there might be tricky parts that would require
your expertise.

Btw, if anybody is interested in Ignite (memory-centric database and
compute platform) or would like to learn more details about the prototype
or join its development, please check these links or send us a note:

   - https://ignite.apache.org
   - https://cwiki.apache.org/confluence/display/IGNITE/IEP-33%3A+New+SQL+executor+engine+infrastructure


-
Denis,
Ignite PMC Chair


Re: New SQL execution engine

2019-09-30 Thread Denis Magda
Ivan, we need more of these discussions, totally agree with you ;)

I've updated the Motivation paragraph outlining some high-level use cases we
see by working with our users. Hope it helps. Let's carry on and let me
send a note to the Apache Calcite community.

-
Denis


> > > On Fri, Sep 27, 2019 at 1:45 AM Igor Seliverstov <gvvinbl...@apache.org>
> > > wrote:
> > >
> > > > Hi Igniters!
> > > >
> > > > As you might know, we currently have many open issues relating to the
> > > > current H2-based engine and its execution flow.
> > > >
> > > > Some of them are critical (like the impossibility of executing
> > > > particular queries), some of them are major (like the impossibility of
> > > > executing particular queries without pre-preparing your data to have
> > > > collocation), and many are minor.
> > > >
> > > > Most of the issues cannot be solved without a whole engine redesign.
> > > >
> > > > So, here is the proposal:
> > > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=130028084
> > > >
> > > > I'll appreciate it if you share your thoughts on top of that.
> > > >
> > > > Regards,
> > > > Igor

Re: Text queries/indexes (GridLuceneIndex, @QueryTextFiled)

2019-09-30 Thread Denis Magda
Yuriy,

I've seen you opening a pull-request with the first changes:
https://issues.apache.org/jira/browse/IGNITE-12189

Alex Scherbakov and Ivan, are you the right guys to do the review?

-
Denis


On Fri, Sep 27, 2019 at 8:48 AM Павлухин Иван  wrote:

> Yuriy,
>
> Thank you for providing details! Quite interesting.
>
> Yes, we already have support for distributed limit and merging sorted
> subresults for SQL queries. E.g., ReduceIndexSorted and
> MergeStreamIterator are used for merging sorted streams.
>
> Could you please also clarify about score/relevance? Is it provided by
> the Lucene engine for each query result? I am thinking about how to do a
> sorted merge properly in this case.
>
> Wed, Sep 25, 2019 at 18:56, Yuriy Shuliga:
> >
> > Ivan,
> >
> > Thank you for interesting question!
> >
> > Text searches (or full-text searches) are mostly human-oriented, and the
> > point of the user's interest is the topmost part of the response.
> > Then the user can read it, evaluate it, and use the given records for
> > further purposes.
> >
> > Particularly in our case, we use Ignite for operations with financial
> > data, and there is lots of text stuff like asset names, fin. instruments,
> > companies, etc.
> > In order to operate on this quickly and reliably, users are used to
> > working with text search, type-ahead completions, and suggestions.
> >
> > For these purposes we are indexing particular string data in separate
> > caches.
> >
> > Sorting capabilities and response-size limitations are very important
> > there, as our API has to provide the most relevant information within a
> > limited size.
> >
> > Now let me comment from the Ignite/Lucene perspective.
> > Actually, when Ignite queries Lucene, Lucene returns *TopDocs.scoreDocs*
> > already sorted by *score* (relevance), so the most relevant documents are
> > on the top.
> > Currently, distributed query responses from different nodes are merged
> > into the final query cursor queue in an arbitrary way.
> > So in fact we already have the score order ruined here. Also, Ignite
> > requests all possible documents from Lucene, which is redundant and not
> > good for performance.
> >
> > I'm implementing a *limit* parameter as part of *TextQuery* and have to
> > note that we still have to add sorting for text query processing in
> > order to have applicable results.
> >
> > The *limit* parameter itself should address some of the issues above, but
> > definitely sorting by document score, at least, should be implemented
> > along with limit.
> >
> > This is a pretty short commentary; if you still have any questions, please
> > ask, do not hesitate)
> >
> > BR,
> > Yuriy Shuliha
> >
> > Thu, Sep 19, 2019 at 11:38, Павлухин Иван wrote:
> >
> > > Yuriy,
> > >
> > > Greatly appreciate your interest.
> > >
> > > Could you please elaborate a little bit about sorting? What tasks does
> > > it help to solve and how? It would be great to provide an example.
> > >
> > > Wed, Sep 18, 2019 at 09:39, Alexei Scherbakov <alexey.scherbak...@gmail.com>:
> > > >
> > > > Denis,
> > > >
> > > > I like the idea of throwing an exception for text queries enabled on
> > > > persistent caches.
> > > >
> > > > Also I'm fine with proposed limit for unsorted searches.
> > > >
> > > > Yury, please proceed with ticket creation.
> > > >
> > > > Tue, Sep 17, 2019, 22:06 Denis Magda:
> > > >
> > > > > Igniters,
> > > > >
> > > > > I see nothing wrong with Yury's proposal in regard to full-text
> > > > > search API evolution, as long as Yury is ready to push it forward.
> > > > >
> > > > > As for the in-memory mode only: it makes total sense for in-memory
> > > > > data grid deployments where Ignite caches data of an underlying DB
> > > > > like Postgres.
> > > > > As part of the changes, I would simply throw an exception (by
> > > > > default) if one attempts to use text indices with native persistence
> > > > > enabled. If the person is ready to live with that limitation, an
> > > > > explicit configuration change is needed to work around the exception.
> > > > >
> > > > > Thoughts?
> > > > >
> > > > >
> > > > > -
> > > > > Denis
> > > > >
> > > > >
> > > > > > On Tue, Sep 17, 2019 at 7:44 AM Yuriy Shuliga wrote:
> > > > >
> > > > > > Hello to all again,
> > > > > >
> > > > > > Thank you for important comments and notes given below!
> > > > > >
> > > > > > Let me answer and continue the discussion.
> > > > > >
> > > > > > (I) Overall needs in Lucene indexing
> > > > > >
> > > > > > Alexei has referenced
> > > > > > https://issues.apache.org/jira/browse/IGNITE-5371, where the
> > > > > > absence of index persistence was declared an obstacle to further
> > > > > > development.
> > > > > >
> > > > > > a) This ticket is already closed as not valid.
> > > > > > b) There are definite needs (in our project as well) for just
> > > > > > in-memory indexing of selected data.
> > > > > > We intend to use search capabilities for fetching a limited amount
> > > > > > of records that 
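[Editor's note] The sorted merge with a limit discussed in this thread — combining per-node results that are each already sorted by descending Lucene score, without requesting every document and without ruining the score order — can be sketched as follows. This is a plain-Python illustration of the technique, not Ignite code; all names are invented:

```python
# Illustrative k-way merge of per-node text-search results with a limit.
# Each node returns hits already sorted by descending Lucene score; we
# merge lazily and stop after `limit` hits, preserving the global score
# order that an arbitrary-order merge would lose.
import heapq

def merge_top_hits(per_node_hits, limit):
    """per_node_hits: list of lists of (score, doc_id), each sorted by
    descending score. Returns the top `limit` hits overall."""
    # heapq.merge expects ascending order, so merge on the negated score.
    merged = heapq.merge(*per_node_hits, key=lambda hit: -hit[0])
    return [hit for _, hit in zip(range(limit), merged)]

node_a = [(0.9, "doc3"), (0.4, "doc1")]
node_b = [(0.7, "doc7"), (0.6, "doc2"), (0.1, "doc9")]
print(merge_top_hits([node_a, node_b], 3))
# [(0.9, 'doc3'), (0.7, 'doc7'), (0.6, 'doc2')]
```

Because the merge is lazy, each node only needs to be asked for at most `limit` hits, which addresses the "requests all possible documents" concern raised above.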

Re: Unassigned issue IGNITE-12174

2019-09-30 Thread Alexei Scherbakov
Hi Jan.

Lucene and spatial indexes do not support persistence.
This is a known limitation which will probably be addressed in future
releases.

Mon, Sep 30, 2019 at 13:23:

> Hi,
>
> I have opened a bug issue some time ago about a NullPointerException
> thrown if geospatial indexing is used together with durable memory:
>
> https://issues.apache.org/jira/browse/IGNITE-12174
>
> Even though I provided a minimal reproducing code example, the issue
> was not assigned or reviewed to confirm whether we have a bug here or
> whether I did something wrong putting together my Ignite config.
>
> I tried to find the root cause by myself but got lost in the Ignite
> code base that is still mostly unknown to me. ;-)
>
> Can you please tell me what I can do to get the bug
> reviewed/confirmed/fixed?
>
> The bug could also be relevant for the upcoming 2.8 release.
>
> Thank you very much,
>
> Jan
>
>

-- 

Best regards,
Alexei Scherbakov


Re: The Spark 2.4 support

2019-09-30 Thread Nikolay Izhikov
Yes, I can :)



Re: The Spark 2.4 support

2019-09-30 Thread Denis Magda
Nikolay,

Would you be able to review the changes? I'm not sure there is a better
candidate for now.

-
Denis




Re: The Spark 2.4 support

2019-09-30 Thread Nikolay Izhikov
Hello, Ivan.

I gave a talk about the internals of the Spark integration in Ignite.
It answers the question of why we use Spark internals.

You can take a look at my meetup talk (in Russian) [1] or read an article if
you prefer text [2].

[1] https://www.youtube.com/watch?v=CzbAweNKEVY
[2] https://habr.com/ru/company/sberbank/blog/427297/

В Пн, 30/09/2019 в 20:29 +0300, Alexey Zinoviev пишет:
> Yes, as I understand it uses Spark internals from the first commit)))
> The reason - we take Spark SQL query execution plan and try to execute it
> on Ignite cluster
> Also we inherit a lot of Developer API related classes that could be
> unstable. Spark has no good point for extension and this is a reason why we
> should go deeper
> 
> пн, 30 сент. 2019 г. в 20:17, Ivan Pavlukhin :
> 
> > Hi Alexey,
> > 
> > As an external watcher very far from Ignite Spark integration I would
> > like to ask a humble question for my understanding. Why this
> > integration uses Spark internals? Is it a common approach for
> > integrating with Spark?
> > 
> > пн, 30 сент. 2019 г. в 16:17, Alexey Zinoviev :
> > > 
> > > Hi, Igniters,
> > > I've started the work on the Spark 2.4 support.
> > >
> > > We started the discussion in
> > > https://issues.apache.org/jira/browse/IGNITE-12054
> > >
> > > The Spark internals were heavily refactored between versions 2.3 and 2.4;
> > > the main changes touch:
> > >
> > >    - External catalog and listeners refactoring
> > >    - Changes in HAVING operator semantics support
> > >    - Push-down NULL filter generation in JOIN plans
> > >    - minor changes in plan generation that must be accommodated in our
> > >    integration module
> > >
> > > I propose an initial solution in
> > > https://issues.apache.org/jira/browse/IGNITE-12247: a new module,
> > > spark-2.4, and a new profile, spark-2.4 (to avoid possible clashes with
> > > other Spark versions).
> > >
> > > I've also transformed the ticket into an umbrella ticket and created a few
> > > tickets for muted tests (around 7 of 211 tests are muted now).
> > >
> > > If somebody is interested, please make an initial review of the modular
> > > Ignite structure and the changes (without diving deep into the Spark code).
> > >
> > > And yes, the proposed code is a copy-paste of the spark-ignite module with
> > > a few fixes.
> > 
> > 
> > 
> > --
> > Best regards,
> > Ivan Pavlukhin
> > 




Re: The Spark 2.4 support

2019-09-30 Thread Alexey Zinoviev
Yes, as I understand it, the integration has used Spark internals since the
first commit)))
The reason is that we take the Spark SQL query execution plan and try to
execute it on the Ignite cluster.
We also inherit a lot of Developer-API-related classes that could be
unstable. Spark has no good extension point, and this is why we have to go
deeper.

пн, 30 сент. 2019 г. в 20:17, Ivan Pavlukhin :

> Hi Alexey,
>
> As an external watcher, very far from the Ignite Spark integration, I
> would like to ask a humble question for my understanding: why does this
> integration use Spark internals? Is it a common approach for integrating
> with Spark?
>
> пн, 30 сент. 2019 г. в 16:17, Alexey Zinoviev :
> >
> > Hi, Igniters,
> > I've started the work on the Spark 2.4 support.
> >
> > We started the discussion in
> > https://issues.apache.org/jira/browse/IGNITE-12054
> >
> > The Spark internals were heavily refactored between versions 2.3 and 2.4;
> > the main changes touch:
> >
> >    - External catalog and listeners refactoring
> >    - Changes in HAVING operator semantics support
> >    - Push-down NULL filter generation in JOIN plans
> >    - minor changes in plan generation that must be accommodated in our
> >    integration module
> >
> > I propose an initial solution in
> > https://issues.apache.org/jira/browse/IGNITE-12247: a new module,
> > spark-2.4, and a new profile, spark-2.4 (to avoid possible clashes with
> > other Spark versions).
> >
> > I've also transformed the ticket into an umbrella ticket and created a few
> > tickets for muted tests (around 7 of 211 tests are muted now).
> >
> > If somebody is interested, please make an initial review of the modular
> > Ignite structure and the changes (without diving deep into the Spark code).
> >
> > And yes, the proposed code is a copy-paste of the spark-ignite module with
> > a few fixes.
>
>
>
> --
> Best regards,
> Ivan Pavlukhin
>


Re: The Spark 2.4 support

2019-09-30 Thread Ivan Pavlukhin
Hi Alexey,

As an external watcher, very far from the Ignite Spark integration, I
would like to ask a humble question for my understanding: why does this
integration use Spark internals? Is it a common approach for integrating
with Spark?

пн, 30 сент. 2019 г. в 16:17, Alexey Zinoviev :
>
> Hi, Igniters,
> I've started the work on the Spark 2.4 support.
>
> We started the discussion in
> https://issues.apache.org/jira/browse/IGNITE-12054
>
> The Spark internals were heavily refactored between versions 2.3 and 2.4;
> the main changes touch:
>
>    - External catalog and listeners refactoring
>    - Changes in HAVING operator semantics support
>    - Push-down NULL filter generation in JOIN plans
>    - minor changes in plan generation that must be accommodated in our
>    integration module
>
> I propose an initial solution in
> https://issues.apache.org/jira/browse/IGNITE-12247: a new module,
> spark-2.4, and a new profile, spark-2.4 (to avoid possible clashes with
> other Spark versions).
>
> I've also transformed the ticket into an umbrella ticket and created a few
> tickets for muted tests (around 7 of 211 tests are muted now).
>
> If somebody is interested, please make an initial review of the modular
> Ignite structure and the changes (without diving deep into the Spark code).
>
> And yes, the proposed code is a copy-paste of the spark-ignite module with
> a few fixes.



-- 
Best regards,
Ivan Pavlukhin


[jira] [Created] (IGNITE-12249) Concurrent guarantees for TransactionView

2019-09-30 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created IGNITE-12249:


 Summary: Concurrent guarantees for TransactionView
 Key: IGNITE-12249
 URL: https://issues.apache.org/jira/browse/IGNITE-12249
 Project: Ignite
  Issue Type: Bug
Reporter: Nikolay Izhikov


Currently, {{TransactionView#keysCount}} and {{TransactionView#cacheIds}} work
with collections that do not provide concurrency guarantees.

We should investigate the possibility of providing a consistent transaction
view for these data structures.

Performance of the transaction engine can be a limitation here.
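One way to get a consistent view over a concurrently mutated collection is to snapshot it once and derive every view metric from that snapshot. The sketch below is only an illustration of the idea (class and variable names are hypothetical, not Ignite code), and it ignores the performance cost of copying that the ticket warns about:

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class TxViewSnapshot {
    public static void main(String[] args) {
        // Hypothetical stand-in for a transaction's key set, which other
        // threads may mutate while the view is being read.
        Set<Integer> liveKeys = ConcurrentHashMap.newKeySet();
        liveKeys.add(1);
        liveKeys.add(2);

        // Copying once gives the view a stable snapshot: keysCount() and
        // cacheIds() would both be derived from this immutable copy, so
        // they can never disagree with each other.
        List<Integer> snapshot = List.copyOf(liveKeys);

        liveKeys.add(3); // concurrent mutation after the snapshot is taken

        System.out.println(snapshot.size()); // prints 2, not 3
    }
}
```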



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12248) Apache Calcite based query execution engine

2019-09-30 Thread Igor Seliverstov (Jira)
Igor Seliverstov created IGNITE-12248:
-

 Summary: Apache Calcite based query execution engine
 Key: IGNITE-12248
 URL: https://issues.apache.org/jira/browse/IGNITE-12248
 Project: Ignite
  Issue Type: Task
  Components: sql
Reporter: Igor Seliverstov
Assignee: Igor Seliverstov


The currently used H2-based query execution engine has a number of critical
limitations which do not allow executing arbitrary queries.

This ticket aims to show the potential of a new, Calcite-based execution
engine which may perform no worse than the current one on co-located queries,
provide a boost for queries using distributed joins, and provide the ability
to execute arbitrary queries requiring more than one map-reduce step in the
execution flow.

[IEP 
link|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=130028084]
[Dev list 
thread|http://apache-ignite-developers.2346864.n4.nabble.com/New-SQL-execution-engine-td43724.html]





Fwd: Gracefully shutting down the data grid

2019-09-30 Thread Shiva Kumar
Hi all,

I am trying to deactivate a cluster to which a few clients are connected
over JDBC.
These client connections insert records into many tables and run some
long-running queries.
At this point I try to deactivate the cluster [basically I want to take a
data backup, so before that I need to deactivate the cluster], but
deactivation hangs and control.sh never returns.
When I check the current cluster state with REST API calls, it sometimes
reports that the cluster is inactive.
When I later try to activate the cluster, it returns this error:

[root@ignite-test]# curl "
http://ignite-service-shiv.ignite.svc.cluster.local:8080/ignite?cmd=activate=ignite=ignite;
 | jq
{
  "successStatus": 0,
  "sessionToken": "654F094484E24232AA74F35AC5E83481",
  "error": "*Failed to activate, because another state change operation is
currently in progress: deactivate\nsuppressed: \n*",
  "response": null
}


This means that my earlier deactivation did not complete properly.
Is there any other way to deactivate the cluster, terminate the existing
client connections, or terminate the running queries?
I tried "kill -k -ar" from the visor shell, but it restarted a few nodes,
and they ended up with an exception related to page corruption.
Note: My Ignite deployment is on Kubernetes

Any help is appreciated.

regards,
shiva


Re: Improvements for new security approach.

2019-09-30 Thread Maksim Stepachev
I suppose that code works only with requests made from GridRestProcessor
(it isn't a real client; I call it a fake client). As a result, you can't
load security data on demand. If you want to do that, you have to transmit
the HTTP session and the backward address of the node that received the
REST request.
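The on-demand fetch-and-cache scheme discussed in this thread could be sketched roughly as follows. Everything here is hypothetical (the class name, the plain String standing in for SecurityContext, and the `remoteFetch` callback modeling the SecurityContextRequest round trip to node A); a real implementation would go over the network and handle eviction:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/** Sketch of an on-demand SecurityContext lookup with a node-local cache. */
public class OnDemandSecurityContext {
    // Node-local cache: subject id -> security context (a plain String here).
    private final Map<UUID, String> ctxCache = new ConcurrentHashMap<>();

    // Stand-in for the network round trip to the node that owns the subject.
    private final Function<UUID, String> remoteFetch;

    OnDemandSecurityContext(Function<UUID, String> remoteFetch) {
        this.remoteFetch = remoteFetch;
    }

    /** The first miss triggers a remote request; later calls hit the cache. */
    String securityContext(UUID subjId) {
        return ctxCache.computeIfAbsent(subjId, remoteFetch);
    }

    public static void main(String[] args) {
        int[] remoteCalls = {0};

        OnDemandSecurityContext proc = new OnDemandSecurityContext(id -> {
            remoteCalls[0]++;        // models a SecurityContextRequest to node A
            return "ctx-" + id;
        });

        UUID subj = UUID.randomUUID();
        proc.securityContext(subj);  // miss -> exactly one remote request
        proc.securityContext(subj);  // hit  -> served from the local cache

        System.out.println(remoteCalls[0]); // prints 1
    }
}
```

The point of the sketch is the traffic pattern: only the first job started with a given subject id pays the network cost, matching the "next request ... doesn't require SecurityContext transmission" behavior described below.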

пн, 30 сент. 2019 г. в 16:16, Denis Garus :

>  Subject's size is unlimited, that can lead to a dramatic increase in
> traffic between nodes.
> >> I added a network optimization for this case. I add the subject in the
> case when ctx.discovery().node(secSubjId) == null.
>
> Yes, this optimization is good, but we have to send SecurityContext
> whenever a client starts a job.
> Why?
>
> A better solution would be to send SecurityContext on demand.
>
> Imagine the following scenario.
> A client connects to node A and starts a job that runs on remote node B.
> IgniteSecurityProcessor on node B tries to find SecurityContext by
> subjectId.
> If IgniteSecurityProcessor fails, then it sends SecurityContextRequest to
> node A and gets the required SecurityContext,
> which afterward is put into the IgniteSecurityProcessor's cache on node B.
> The next request to run a job on node B with this subjectId doesn't
> require SecurityContext transmission.
>
> SecurityContextResponse can contain some additional information, for
> example,
> time-to-live of SecurityContext before eviction SecurityContext from the
> IgniteSecurityProcessor's cache.
>
> пт, 27 сент. 2019 г. в 15:20, Maksim Stepachev  >:
>
>> I finished with fixes: https://issues.apache.org/jira/browse/IGNITE-11992
>>
>> >> Subject's size is unlimited, that can lead to a dramatic increase in
>> traffic between nodes.
>> I added network optimization for this case. I add a subject in the case
>> when ctx.discovery().node(secSubjId) == null.
>>
>> >> Also, we need to get rid of GridTaskThreadContextKey#TC_SUBJ_ID as
>> duplication of IgniteSecurity responsibility.
>> [2] Yes, we should get rid of this, but in the next task, because I
>> haven't been able to merge it since 18 Jul 19 :)
>>
>> [1] I agree with you.
>>
>>
>> пт, 27 сент. 2019 г. в 11:42, Denis Garus :
>>
>>> Hello, Maksim!
>>>
>>> Thank you for your effort and interest in the security of Ignite.
>>>
>>> I would like you to pay attention to the discussion [1] and issue [2].
>>> It looks like not only task should execute in the current security
>>> context but all operations too, that is essential to determine a security
>>> id for events.
>>> Also, we need to get rid of GridTaskThreadContextKey#TC_SUBJ_ID as
>>> duplication of IgniteSecurity responsibility.
>>> I think your task is the right place to do that.
>>> What is your opinion?
>>>
>>> >>It's the reason why subject id isn't enough and we should transmit
>>> subject inside message for this case.
>>> There is a problem with this approach.
>>> Subject's size is unlimited, that can lead to a dramatic increase in
>>> traffic between nodes.
>>>
>>> 1.
>>> http://apache-ignite-developers.2346864.n4.nabble.com/JavaDoc-for-Event-s-subjectId-methods-td43663.html
>>> 2. https://issues.apache.org/jira/browse/IGNITE-9914
>>>
>>> пт, 27 сент. 2019 г. в 08:38, Anton Vinogradov :
>>>
 Maksim

 >> I want to fix 2-3-4 points under one ticket.
 Please let me know once it's become ready to be reviewed.

 On Thu, Sep 26, 2019 at 5:18 PM Maksim Stepachev <
 maksim.stepac...@gmail.com>
 wrote:

 > Hi.
 >
 > Anton Vinogradov,
 >
 > I want to fix 2-3-4 points under one ticket.
 >
 > The first was fixed in the ticket:
 > https://issues.apache.org/jira/browse/IGNITE-11094
 > Also, I agree with you that points 5-6 aren't required for Ignite.
 >
 > Denis Garus,
 > I made a reproducer for point 3. Look at the test from my pull request:
 > JettyRestPropagationSecurityContextTest
 >
 > https://github.com/apache/ignite/pull/6918
 >
 > For point 2 you should apply GridRestProcessor from the PR and set debug
 > into VisorQueryUtils#scheduleQueryStart between
 > ignite.context().closure().runLocalSafe and call:
 > ignite.context().security().securityContext()
 >
 >
 > For point 3, do the action above and call:
 >
 > ignite.context().discovery().node(ignite.context().security().securityContext().subject().id())
 >
 > It returns null because this subject was created from REST. That's the
 > reason why the subject id isn't enough and we should transmit the subject
 > inside the message for this case.
 >
 > чт, 18 июл. 2019 г. в 12:45, Anton Vinogradov :
 >
 >> Maksim,
 >>
 >> Could you please split IGNITE-11992 to subtasks with proper
 descriptions?
 >> This will allow us to relocate discussion to the issues to solve each
 >> problem properly.
 >>
 >> On Thu, Jul 18, 2019 at 11:57 AM Denis Garus 
 wrote:
 >>
 >> > Hello, Maksim!
 >> > Thanks for your analysis!
 >> >
 >> > I have a few questions about your proposals.
 >> >

[jira] [Created] (IGNITE-12247) [Spark] Add initial support of Spark 2.4

2019-09-30 Thread Alexey Zinoviev (Jira)
Alexey Zinoviev created IGNITE-12247:


 Summary: [Spark] Add initial support of Spark 2.4
 Key: IGNITE-12247
 URL: https://issues.apache.org/jira/browse/IGNITE-12247
 Project: Ignite
  Issue Type: Sub-task
  Components: spark
Affects Versions: 2.8
Reporter: Alexey Zinoviev
Assignee: Alexey Zinoviev
 Fix For: 2.8


This solution provides initial support of Spark 2.4, with a codebase copied
from Spark and initial support of the ExternalCatalog refactoring.





[jira] [Created] (IGNITE-12246) [Spark] Add support of changes in External Catalog

2019-09-30 Thread Alexey Zinoviev (Jira)
Alexey Zinoviev created IGNITE-12246:


 Summary: [Spark] Add support of changes in External Catalog
 Key: IGNITE-12246
 URL: https://issues.apache.org/jira/browse/IGNITE-12246
 Project: Ignite
  Issue Type: Sub-task
  Components: spark
Affects Versions: 2.8
Reporter: Alexey Zinoviev
Assignee: Alexey Zinoviev
 Fix For: 2.8


The External Catalog refactoring leads to problems with schemas.

For example, the following tests are muted now in Spark 2.4:

"Additional features for IgniteSparkSession"

"Should allow Spark SQL to create a table"

"Should disallow creation of tables in non-PUBLIC schemas"





[jira] [Created] (IGNITE-12245) [Spark] Support of Null Handling in JOIN condition

2019-09-30 Thread Alexey Zinoviev (Jira)
Alexey Zinoviev created IGNITE-12245:


 Summary: [Spark] Support of Null Handling in JOIN condition
 Key: IGNITE-12245
 URL: https://issues.apache.org/jira/browse/IGNITE-12245
 Project: Ignite
  Issue Type: Sub-task
  Components: spark
Affects Versions: 2.8
Reporter: Alexey Zinoviev
Assignee: Alexey Zinoviev
 Fix For: 2.8


In Spark, a bug with incorrect null handling of columns in a JOIN condition
was fixed:

https://issues.apache.org/jira/browse/SPARK-21479

This leads to IgniteOptimizationJoinSpec fixes (the same thing happened in
the previous migration from 2.2 to 2.3).





[jira] [Created] (IGNITE-12244) [Spark] Fix test with Multiple Joins

2019-09-30 Thread Alexey Zinoviev (Jira)
Alexey Zinoviev created IGNITE-12244:


 Summary: [Spark] Fix test with Multiple Joins
 Key: IGNITE-12244
 URL: https://issues.apache.org/jira/browse/IGNITE-12244
 Project: Ignite
  Issue Type: Sub-task
  Components: spark
Affects Versions: 2.8
Reporter: Alexey Zinoviev
Assignee: Alexey Zinoviev
 Fix For: 2.8


Fix the "JOIN 3 TABLE" test after investigating the strange join plan
generation in Spark 2.4.





[jira] [Created] (IGNITE-12243) [Spark] Add support of HAVING without GROUP BY

2019-09-30 Thread Alexey Zinoviev (Jira)
Alexey Zinoviev created IGNITE-12243:


 Summary: [Spark] Add support of HAVING without GROUP BY
 Key: IGNITE-12243
 URL: https://issues.apache.org/jira/browse/IGNITE-12243
 Project: Ignite
  Issue Type: Sub-task
  Components: spark
Affects Versions: 2.8
Reporter: Alexey Zinoviev
Assignee: Alexey Zinoviev
 Fix For: 2.8








[jira] [Created] (IGNITE-12242) .NET: Add tests for Thin Client async continuation behavior

2019-09-30 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-12242:
---

 Summary: .NET: Add tests for Thin Client async continuation 
behavior
 Key: IGNITE-12242
 URL: https://issues.apache.org/jira/browse/IGNITE-12242
 Project: Ignite
  Issue Type: Improvement
  Components: platforms
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 2.8


Add a test to verify that Thin Client async operation continuations run on
thread pool threads, and not on the socket reader thread.

The behavior is correct right now thanks to the {{ContWith}} usage in
{{DoOutInOpAsync}}, but this is an easy mistake to make, so tests are
important.
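For illustration only, the same pitfall can be demonstrated on the JVM with CompletableFuture (this is neither Ignite nor .NET code; the "socket-reader" thread name is made up). An explicitly asynchronous continuation is dispatched to a pool executor instead of running inline on the thread that completes the future:

```java
import java.util.concurrent.CompletableFuture;

public class ContinuationThreadDemo {
    public static void main(String[] args) throws Exception {
        CompletableFuture<String> resp = new CompletableFuture<>();

        // The continuation records which thread runs it. Using
        // thenApplyAsync (rather than thenApply) forces it onto an async
        // executor, not the completing "socket reader" thread.
        CompletableFuture<String> contThread =
            resp.thenApplyAsync(v -> Thread.currentThread().getName());

        // Simulate the socket reader thread delivering the response.
        Thread reader = new Thread(() -> resp.complete("payload"), "socket-reader");
        reader.start();
        reader.join();

        // The continuation did NOT run on the reader thread.
        System.out.println(contThread.get().equals("socket-reader")); // prints false
    }
}
```

With the non-async `thenApply`, the continuation may run directly on the completing thread, which is exactly the class of bug the ticket's tests are meant to guard against.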





Re: JavaDoc for Event's subjectId methods

2019-09-30 Thread Denis Garus
Thank you, Maksim!

I think that the description should look like:
"Gets the security subject ID that initiated this task event if
IgniteSecurity is enabled; otherwise returns null.
This property is not available for the GridEventType#EVT_TASK_SESSION_ATTR_SET
task event."

Please pay close attention that the described behavior is confirmed by tests.
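The proposed contract can be sketched with a toy class (this is not the real TaskEvent; the fields and constructor are hypothetical, purely to make the intended null-when-security-disabled behavior concrete):

```java
import java.util.UUID;

/** Toy model of the proposed contract: a subject id only when security is on. */
public class TaskEventSketch {
    private final boolean securityEnabled;
    private final UUID subjId;

    TaskEventSketch(boolean securityEnabled, UUID subjId) {
        this.securityEnabled = securityEnabled;
        this.subjId = subjId;
    }

    /**
     * Gets the security subject ID that initiated this task event if
     * IgniteSecurity is enabled; otherwise returns null.
     */
    public UUID subjectId() {
        return securityEnabled ? subjId : null;
    }

    public static void main(String[] args) {
        UUID id = UUID.randomUUID();

        // Security enabled: the subject id from the security context.
        System.out.println(new TaskEventSketch(true, id).subjectId().equals(id));

        // Security disabled: null, never a node or client id.
        System.out.println(new TaskEventSketch(false, id).subjectId() == null);
    }
}
```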

пн, 30 сент. 2019 г. в 13:11, Maksim Stepachev :

> Denis,
>
> I added it in my ticket and pull request. Should I change only the first
> sentence or the full comment?
>
> пн, 30 сент. 2019 г. в 11:27, Denis Garus :
>
>> Hello!
>>
>> I suggested that Maksim Stepachev include these changes in the scope of
>> the ticket [1],
>> and it looks like he agreed [2].
>>
>> Maksim Stepachev, could you please reflect JavaDoc and behavior changes
>> of events in your ticket?
>>
>> 1. https://issues.apache.org/jira/browse/IGNITE-11992
>> 2.
>> http://apache-ignite-developers.2346864.n4.nabble.com/Improvements-for-new-security-approach-td42698.html
>>
>> пн, 30 сент. 2019 г. в 11:07, Ivan Pavlukhin :
>>
>>> Hi,
>>>
>>> Do we allow commits to master without a ticket? I can imagine only
>>> reverts as an exception.
>>>
>>> Otherwise a ticket is a primary process item. Work description,
>>> review, CI checks (we have a job checking javadocs).
>>>
>>> ср, 25 сент. 2019 г. в 01:15, Denis Magda :
>>> >
>>> > Denis, please feel free to go and edit the JavaDocs in place without a
>>> > ticket. The changes suggested by you are reasonable.
>>> >
>>> > -
>>> > Denis
>>> >
>>> >
>>> > On Tue, Sep 24, 2019 at 3:55 AM Denis Garus 
>>> wrote:
>>> >
>>> > > Hello, Igniters!
>>> > >
>>> > > Some events contain the subjectId method, for example,
>>> TaskEvent#subjectId.
>>> > > The JavaDoc for this method is:
>>> > > "Gets security subject ID initiated this task event, if available.
>>> > > This property is not available for
>>> GridEventType#EVT_TASK_SESSION_ATTR_SET
>>> > > task event.
>>> > > Subject ID will be set either to node ID or client ID initiated task
>>> > > execution."
>>> > >
>>> > > I think It's wrong. The main point is a subject id doesn't have any
>>> sense
>>> > > if IgniteSecurity is disabled.
>>> > > However, if IgniteSecurity is enabled, the method must return the
>>> subject
>>> > > id from the current security context.
>>> > > Thus, the description (and behavior) of the method should be the
>>> following:
>>> > > Gets security subject ID initiated this task event if IgniteSecurity
>>> is
>>> > > enabled, otherwise returns null.
>>> > >
>>> > > The same is actual for CacheEvent, CacheQueryExecutedEvent and
>>> > > CacheQueryReadEvent.
>>> > >
>>> > > If there are no objections, I am going to create a relevant issue in
>>> Jira.
>>> > >
>>>
>>>
>>>
>>> --
>>> Best regards,
>>> Ivan Pavlukhin
>>>
>>


Unassigned issue IGNITE-12174

2019-09-30 Thread j

Hi,

I opened a bug issue some time ago about a NullPointerException that is
thrown if geospatial indexing is used together with durable memory:

https://issues.apache.org/jira/browse/IGNITE-12174

Even though I provided a minimal reproducing code example, the issue was
not assigned or reviewed to confirm whether we have a bug here or whether
I did something wrong putting together my Ignite config.

I tried to find the root cause myself but got lost in the Ignite code
base, which is still mostly unknown to me. ;-)

Can you please tell me what I can do to get the bug reviewed/confirmed/fixed?

The bug could also be relevant for the upcoming 2.8 release.

Thank you very much,

Jan



Re: JavaDoc for Event's subjectId methods

2019-09-30 Thread Maksim Stepachev
Denis,

I added it in my ticket and pull request. Should I change only the first
sentence or the full comment?

пн, 30 сент. 2019 г. в 11:27, Denis Garus :

> Hello!
>
> I suggested that Maksim Stepachev include these changes in the scope of the
> ticket [1],
> and it looks like he agreed [2].
>
> Maksim Stepachev, could you please reflect JavaDoc and behavior changes of
> events in your ticket?
>
> 1. https://issues.apache.org/jira/browse/IGNITE-11992
> 2.
> http://apache-ignite-developers.2346864.n4.nabble.com/Improvements-for-new-security-approach-td42698.html
>
> пн, 30 сент. 2019 г. в 11:07, Ivan Pavlukhin :
>
>> Hi,
>>
>> Do we allow commits to master without a ticket? I can imagine only
>> reverts as an exception.
>>
>> Otherwise a ticket is a primary process item. Work description,
>> review, CI checks (we have a job checking javadocs).
>>
>> ср, 25 сент. 2019 г. в 01:15, Denis Magda :
>> >
>> > Denis, please feel free to go and edit the JavaDocs in place without a
>> > ticket. The changes suggested by you are reasonable.
>> >
>> > -
>> > Denis
>> >
>> >
>> > On Tue, Sep 24, 2019 at 3:55 AM Denis Garus 
>> wrote:
>> >
>> > > Hello, Igniters!
>> > >
>> > > Some events contain the subjectId method, for example,
>> TaskEvent#subjectId.
>> > > The JavaDoc for this method is:
>> > > "Gets security subject ID initiated this task event, if available.
>> > > This property is not available for
>> GridEventType#EVT_TASK_SESSION_ATTR_SET
>> > > task event.
>> > > Subject ID will be set either to node ID or client ID initiated task
>> > > execution."
>> > >
>> > > I think It's wrong. The main point is a subject id doesn't have any
>> sense
>> > > if IgniteSecurity is disabled.
>> > > However, if IgniteSecurity is enabled, the method must return the
>> subject
>> > > id from the current security context.
>> > > Thus, the description (and behavior) of the method should be the
>> following:
>> > > Gets security subject ID initiated this task event if IgniteSecurity
>> is
>> > > enabled, otherwise returns null.
>> > >
>> > > The same is actual for CacheEvent, CacheQueryExecutedEvent and
>> > > CacheQueryReadEvent.
>> > >
>> > > If there are no objections, I am going to create a relevant issue in
>> Jira.
>> > >
>>
>>
>>
>> --
>> Best regards,
>> Ivan Pavlukhin
>>
>


Re: Apache Ignite 2.8 RELEASE [Time, Scope, Manager]

2019-09-30 Thread Ivan Pavlukhin
Maxim, Folks,

Could you please share the results of the Slack discussion from Sep 25?

ср, 25 сент. 2019 г. в 15:50, Dmitriy Pavlov :
>
> Hi Maxim,
>
> Thank you for preparing the release page!
>
> Could you please add Require release notes filter? You can find an example
> in  https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.7.6
>
> Sincerely
> Dmitriy Pavlov
>
> ср, 25 сент. 2019 г. в 11:58, Maxim Muzafarov :
>
> > Igniters,
> >
> >
> > It's true that we are still discussing the release dates. But
> > nevertheless, all the release blockers are important since some of
> > them may require more than one month to be fixed. Let's discuss today
> > how we will handle these issues and track Monitoring and ML major
> > features to get them into the next release.
> >
> > The meeting already scheduled. We will use the ASF Slack on September
> > 25-th, 17-00 (MSK).
> > I've created the channel [2] #ignite-release-2_8 please, join.
> > (The discussion will be in Russian.)
> >
> >
> >
> > Please, also note that I've created the 2.8 release confluence page
> > [1] with additional information. I will review all the issues we have
> > and will move some of them to 2.9.
> > But currently, we've had pinned to 2.8:
> >
> > - 604 open issues
> > - 57 in progress issues
> > - 34 patch available issues
> >
> > - 17 issues marked as the release blockers
> > - 4 of release blocker issues are unassigned
> >
> >
> > [1] https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.8
> > [2] https://app.slack.com/client/T4S1WH2J3/CNQ51M4FQ
> >
> > On Wed, 25 Sep 2019 at 10:21, Dmitriy Pavlov  wrote:
> > >
> > > Hi Igniters,
> > >
> > > I suppose discussion is still at phase 0-Initializing
> > > https://cwiki.apache.org/confluence/display/IGNITE/Release+Process
> > >
> > > So it is probably no reason to discuss particular blockers. It would make
> > > sense when the process of removal irrelevant tickets starts (phase 1.2)
> > and
> > > till phase 4-Release candidate building.
> > >
> > > Sincerely,
> > > Dmitriy Pavlov
> > >
> > > вт, 24 сент. 2019 г. в 18:53, Anton Kalashnikov :
> > >
> > > > Hello, Igniters.
> > > >
> > > > I want to notice one more blocker for release [1]. This bug can lead to
> > > > some incorrect baseline default enabled flag calculation(more details
> > in
> > > > the ticket).
> > > >
> > > > [1] https://issues.apache.org/jira/browse/IGNITE-12227
> > > >
> > > > --
> > > > Best regards,
> > > > Anton Kalashnikov
> > > >
> > > >
> > > > 24.09.2019, 17:01, "Andrey Gura" :
> > > > > Sergey,
> > > > >
> > > > > As I know, scope freeze is not announced yet.
> > > > >
> > > > > On Tue, Sep 24, 2019 at 4:41 PM Sergey Antonov
> > > > >  wrote:
> > > > >>  Hi, I would add to release scope my ticket [1].
> > > > >>
> > > > >>  Any objections?
> > > > >>
> > > > >>  [1] https://issues.apache.org/jira/browse/IGNITE-12225
> > > > >>
> > > > >>  вт, 24 сент. 2019 г. в 09:21, Nikolay Izhikov  > >:
> > > > >>
> > > > >>  > > merge to master only fully finished features
> > > > >>  >
> > > > >>  > It's already true for Ignite master branch.
> > > > >>  >
> > > > >>  >
> > > > >>  > В Вт, 24/09/2019 в 09:03 +0300, Alexey Zinoviev пишет:
> > > > >>  > > In my opinion, release dates planned 2-3 months in advance are a
> > > > >>  > > good defence against partially merged features.
> > > > >>  > >
> > > > >>  > > Or we should have master and a dev branch separately, and merge to
> > > > >>  > > master only fully finished features.
> > > > >>  > >
> > > > >>  > > пн, 23 сент. 2019 г., 20:27 Maxim Muzafarov  > >:
> > > > >>  > >
> > > > >>  > > > Andrey,
> > > > >>  > > >
> > > > >>  > > > Agree with you. It can affect the user impression.
> > > > >>  > > >
> > > > >>  > > > Can you advise how, once we complete the current partially
> > > > >>  > > > merged features, we can guarantee that someone will not
> > > > >>  > > > partially merge a new one? Should we monitor the master branch
> > > > >>  > > > commits for that purpose?
> > > > >>  > > >
> > > > >>  > > > On Mon, 23 Sep 2019 at 20:18, Andrey Gura 
> > > > wrote:
> > > > >>  > > > >
> > > > >>  > > > > Maxim,
> > > > >>  > > > >
> > > > >>  > > > > > > From my point of view, if some components are not ready
> > > > >>  > > > > > > by the previously discussed `scope freeze` date, it is
> > > > >>  > > > > > > absolutely OK to perform the next (e.g. 2.8.1, 2.8.2)
> > > > >>  > > > > > > releases.
> > > > >>  > > > >
> > > > >>  > > > > It is a good approach if partially implemented features aren't
> > > > >>  > > > > merged to the master branch. Unfortunately, this is not our case.
> > > > >>  > > > >
> > > > >>  > > > > I don't see any reason to force a new Apache Ignite release.
> > > > >>  > > > > Time is not a driver for a release. If we want to release
> > > > >>  > > > > Ignite periodically, we must significantly review the process.
> > > > >>  > > > > And the most valuable change in this
> > > > >>  > > > 

Re: New SQL execution engine

2019-09-30 Thread Ivan Pavlukhin
Folks,

Thanks everyone for a hot discussion! Not every open source community
has such open and boiling discussions. It means that people here
really do care. And I am proud of it!

As I understood, nobody is strictly against the proposed initiative.
And I am glad that we can move forward (with some steps back along the
way).

пт, 27 сент. 2019 г. в 19:29, Nikolay Izhikov :
>
> Hello, Denis.
>
> Thanks for the clarifications.
>
> Sounds good for me.
> All I am trying to say in this thread is:
> Guys, please, let's take a step back and write down the requirements (what we
> want to get from the SQL engine) and which features and use cases are primary
> for us.
>
> I'm sure you have done it, already during your research.
>
> Please, share it with the community.
>
> I'm pretty sure we will come back to this document again and again during the
> migration, so a well-written design is worth it.
>
> В Пт, 27/09/2019 в 09:10 -0700, Denis Magda пишет:
> > Ignite mates, let me try to move the discussion in a constructive way. It
> > looks like we set a wrong context from the very beginning.
> >
> > Before proposing this idea to the community, some of us were
> > discussing/researching the topic in different groups (the one need to think
> > it through first before even suggesting to consider changes of this
> > magnitude). The day has come to share this idea with the whole community
> > and outline the next actions. But (!) nobody is 100% sure that that's the
> > right decision. Thus, this will be an *experiment*, some of our community
> > members will be developing a *prototype* and only based on the prototype
> > outcomes we shall make a final decision. Igor, Roman, Ivan, Andrey, hope
> > that nothing has changed and we're on the same page here.
> >
> > Many technical and architectural reasons that justify this project have
> > been shared but let me throw in my perspective. There is nothing wrong with
> > H2, that was the right choice for that time.  Thanks to H2 and Ignite SQL
> > APIs, our project is used across hundreds of deployments who are
> > accelerating relational databases or use Ignite as a system of records.
> > However, these days many more companies are migrating to *distributed*
> > databases that speak SQL. For instance, if a couple of years ago 1 out of
> > 10 use cases needed support for multi-joins queries or queries with
> > subselects or efficient memory usage then today there are 5 out of 10 use
> > cases of this kind; in the foreseeable future, it will be a 10 out of 10.
> > So, the evolution is in progress -- the relational world goes distributed,
> > it became exhaustive for both Ignite SQL maintainers and experts who help
> > to tune it for production usage to keep pace with the evolution mostly due
> > to the H2-dependency. Thus, Ignite SQL has to evolve and has to be ready to
> > face the future reality.
> >
> > Luckily, we don't need to rush and don't have the right to rush because
> > hundreds existing users have already trusted their production environments
> > to Ignite SQL and we need to roll out changes with such a big impact
> > carefully. So, I'm excited that Roman, Igor, Ivan, Andrey stepped in and
> > agreed to be the first contributors who will be *experimenting* with the
> > new SQL engine. Let's support them; let's connect them with Apache Calcite
> > community and see how this story evolves.  Folks, please keep the community
> > aware of the progress, let us know when help is needed, some of us will be
> > ready to support with development once you create a solid foundation for
> > the prototype.
> >
> > -
> > Denis
> >
> >
> > On Fri, Sep 27, 2019 at 1:45 AM Igor Seliverstov 
> > wrote:
> >
> > > Hi Igniters!
> > >
> > > As you might know, we currently have many open issues relating to the
> > > current H2-based engine and its execution flow.
> > >
> > > Some of them are critical (like the impossibility of executing particular
> > > queries), some are major (like the impossibility of executing particular
> > > queries without pre-arranging your data to achieve collocation), and many
> > > are minor.
> > >
> > > Most of the issues cannot be solved without a whole engine redesign.
> > >
> > > So, here the proposal:
> > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=130028084
> > >
> > > I'd appreciate it if you share your thoughts on this.
> > >
> > > Regards,
> > > Igor
> > >



-- 
Best regards,
Ivan Pavlukhin


Re: nodes are restarting when i try to drop a table created with persistence enabled

2019-09-30 Thread Ivan Pavlukhin
Hi,

The stack trace and exception message have some valuable details:
org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Failed to
find a page for eviction [segmentCapacity=126515, loaded=49628,
maxDirtyPages=37221, dirtyPages=49627, cpPages=0, pinnedInSegment=1,
failedToPrepare=49628]

I see the following:
1. Not all the data fits in data region memory.
2. The exception occurs when the underlying cache is destroyed
(the IgniteCacheOffheapManagerImpl.stopCache/removeCacheData call in the
stack trace).
3. A page for replacement to disk was not found (loaded=49628,
failedToPrepare=49628). Almost all pages are dirty (dirtyPages=49627).

Answering several questions can help:
1. Does the same occur if IgniteCache.destroy() is called instead of DROP TABLE?
2. Does the same occur if SQL is not enabled for a cache?
3. It would be nice to see the IgniteConfiguration and CacheConfiguration
that cause the problem.
4. We need to figure out why almost all pages are dirty; it might be a clue.
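
If the region is simply undersized for the dirty-page load, the first thing
to check is the data-region settings. A minimal configuration sketch for a
persistent region follows; the sizes are placeholders for illustration, not
a recommendation for this particular cluster:

```java
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class RegionConfigSketch {
    public static IgniteConfiguration configure() {
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("default")
            // Persistence on, so cold pages can be replaced to disk.
            .setPersistenceEnabled(true)
            // Placeholder sizes: 512 MB initial, 4 GB max.
            .setInitialSize(512L * 1024 * 1024)
            .setMaxSize(4L * 1024 * 1024 * 1024)
            // Extra room for dirty pages while a checkpoint is in progress.
            .setCheckpointPageBufferSize(1024L * 1024 * 1024);

        DataStorageConfiguration storage = new DataStorageConfiguration()
            .setDefaultDataRegionConfiguration(region);

        return new IgniteConfiguration().setDataStorageConfiguration(storage);
    }
}
```

Comparing the configuration that fails against something like the above
(especially maxSize versus the actual data volume) would narrow the
question down.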


Re: How to free up space on disc after removing entries from IgniteCache with enabled PDS?

2019-09-30 Thread Anton Vinogradov
Alexei,
>> stopping fragmented node and removing partition data, then starting it
again

That's exactly what we're doing to solve the fragmentation issue.
The problem is that we have to perform N/B restart-rebalance operations
(N - cluster size, B - backup count), which takes a lot of time and carries
the risk of losing data.

On Fri, Sep 27, 2019 at 5:49 PM Alexei Scherbakov <
alexey.scherbak...@gmail.com> wrote:

> Probably this should be allowed via the public API; it is effectively the
> same as manual rebalancing.
>
> пт, 27 сент. 2019 г. в 17:40, Alexei Scherbakov <
> alexey.scherbak...@gmail.com>:
>
> > The poor man's solution to the problem would be to stop the fragmented
> > node, remove its partition data, and start it again, allowing a full
> > state transfer without the deleted entries.
> > Rinse and repeat for all owners.
> >
> > Anton Vinogradov, would this work for you as a workaround?
> >
> > чт, 19 сент. 2019 г. в 13:03, Anton Vinogradov :
> >
> >> Alexey,
> >>
> >> Let's combine your and Ivan's proposals.
> >>
> >> >> vacuum command, which acquires exclusive table lock, so no concurrent
> >> activities on the table are possible.
> >> and
> >> >> Could the problem be solved by stopping a node which needs to be
> >> defragmented, clearing persistence files and restarting the node?
> >> >> After rebalancing the node will receive all data back without
> >> fragmentation.
> >>
> >> How about having a special partition state, SHRINKING?
> >> This state should mean that the partition is unavailable for reads and
> >> updates, but keeps its update counters and is not marked as lost,
> >> renting, or evicted.
> >> In this state we can iterate over the partition and apply its entries
> >> to another file in a compact way.
> >> Indices should be updated during the copy-on-shrink procedure or at
> >> shrink completion.
> >> Once the shrunk file is ready, we should replace the original partition
> >> file with it and mark it as MOVING, which will start the historical
> >> rebalance.
> >> Shrinking should be performed during low-activity periods, but even if
> >> we find that activity was high and a historical rebalance is not
> >> suitable, we may just remove the file and use a regular rebalance to
> >> restore the partition (which will also shrink it).
> >>
> >> BTW, it seems we can implement partition shrinking in a cheap way.
> >> We may just use the rebalancing code to apply the fat partition's
> >> entries to the new file.
> >> So, 3 stages here: local rebalance, index update, and global historical
> >> rebalance.
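
The proposed state and its transitions could be modeled as below. This is an
illustrative sketch only: Ignite's real class is GridDhtPartitionState, but
the enum, the transition table, and SHRINKING itself are hypothetical here,
following the description in the thread (SHRINKING keeps update counters,
rejects reads/updates, and moves to MOVING once the shrunk file is swapped
in):

```java
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

/** Illustrative model of the proposal, not Ignite's actual state machine. */
enum PartState {
    OWNING, MOVING, RENTING, EVICTED, LOST,
    SHRINKING; // proposed: partition is being compacted into a new file

    /** In SHRINKING the partition rejects reads and updates. */
    boolean available() {
        return this == OWNING;
    }

    // Sketch of allowed transitions; SHRINKING -> MOVING triggers the
    // historical rebalance once the shrunk file replaces the original.
    private static final Map<PartState, Set<PartState>> NEXT = Map.of(
        OWNING,    EnumSet.of(SHRINKING, RENTING, LOST),
        SHRINKING, EnumSet.of(MOVING),
        MOVING,    EnumSet.of(OWNING),
        RENTING,   EnumSet.of(EVICTED),
        EVICTED,   EnumSet.noneOf(PartState.class),
        LOST,      EnumSet.of(MOVING));

    boolean canTransitionTo(PartState to) {
        return NEXT.getOrDefault(this, Set.of()).contains(to);
    }
}
```

The key property the thread relies on is that SHRINKING is neither renting
nor evicted, so update counters survive and a historical (delta) rebalance
remains possible afterwards.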
> >>
> >> On Thu, Sep 19, 2019 at 11:43 AM Alexey Goncharuk <
> >> alexey.goncha...@gmail.com> wrote:
> >>
> >> > Anton,
> >> >
> >> >
> >> > > >> The solution which Anton suggested does not look easy because it
> >> > > >> will most likely significantly hurt performance
> >> > > Mostly agree here, but what drop do we expect? What price are we
> >> > > ready to pay?
> >> > > Not sure, but it seems some vendors are ready to pay, for example, a
> >> > > 5% drop for this.
> >> >
> >> > 5% may be a big drop for some use cases, so I think we should look at
> >> > how to improve performance, not how to make it worse.
> >> >
> >> >
> >> > >
> >> > > >> it is hard to maintain a data structure to choose "page from
> >> > > >> free-list with enough space closest to the beginning of the file".
> >> > > We can just split each free-list bucket into a pair and use the
> >> > > first for pages in the first half of the file and the second for the
> >> > > rest.
> >> > > Only two buckets are required here since, during the file shrink,
> >> > > the first bucket's window will shrink too.
> >> > > It seems this gives us the same price on put: just use the first
> >> > > bucket if it's not empty.
> >> > > The remove price (with merge) will increase, of course.
> >> > >
> >> > > The compromise solution is to do a priority put (to the first part
> >> > > of the file), keep removal as is, and schedule per-page migration of
> >> > > the rest of the data during low-activity periods.
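
The two-bucket idea above can be sketched as follows. This is a deliberately
simplified model under stated assumptions: page "ids" are plain file
offsets, there is a single size bucket, and the real Ignite free lists are
far more involved; the point is only that a put stays O(1) while preferring
the front half of the file:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Simplified sketch: one free-list bucket split at the file midpoint.
 *  Puts prefer the front half, so the tail of the file empties out and
 *  can eventually be truncated during a shrink. */
class TwoBucketFreeList {
    private final long fileMidpoint;
    private final Deque<Long> frontHalf = new ArrayDeque<>(); // offsets < midpoint
    private final Deque<Long> backHalf  = new ArrayDeque<>(); // offsets >= midpoint

    TwoBucketFreeList(long fileMidpoint) {
        this.fileMidpoint = fileMidpoint;
    }

    /** A page gained enough free space; track it in the matching bucket. */
    void onPageFreed(long pageOffset) {
        (pageOffset < fileMidpoint ? frontHalf : backHalf).push(pageOffset);
    }

    /** Same cost as a single bucket on put: try the front bucket first.
     *  Returns null when the caller should allocate a new page instead. */
    Long takePageForPut() {
        if (!frontHalf.isEmpty()) return frontHalf.pop();
        if (!backHalf.isEmpty())  return backHalf.pop();
        return null;
    }
}
```

As the thread notes, the extra cost lands on the remove/merge path, which
has to keep both buckets consistent; this sketch ignores that side.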
> >> > >
> >> > Free lists are large and slow by themselves, and it is expensive to
> >> > checkpoint and read them on start, so as a long-term solution I would
> >> > look into removing them. Moreover, I am not sure that adding yet
> >> > another background process will improve codebase reliability and
> >> > simplicity.
> >> >
> >> > If we want to go the hard path, I would look at a free-page tracking
> >> > bitmap - a special bitmask page where each page in an adjacent block
> >> > is marked as 0 (free) if it has more free space than a certain
> >> > configurable threshold (say, 80%), and as 1 (full) otherwise. Some
> >> > vendors have successfully implemented this approach; it looks much
> >> > more promising, but is harder to implement.
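
The tracking-bitmap idea can be sketched in a few lines. This is a toy model
under stated assumptions: the block size, the threshold, and the class name
are all illustrative, and a real implementation would keep the bitmap on a
dedicated page and update it under the page lock:

```java
import java.util.BitSet;

/** Sketch of a free-page tracking bitmap: one bit per page in an adjacent
 *  block, 0 = page still has at least the threshold of free space,
 *  1 = page is considered full. */
class FreePageBitmap {
    private final BitSet full;          // set bit => page considered full
    private final int pagesPerBlock;
    private final double freeThreshold; // e.g. 0.2 => "free" if >= 20% left

    FreePageBitmap(int pagesPerBlock, double freeThreshold) {
        this.pagesPerBlock = pagesPerBlock;
        this.freeThreshold = freeThreshold;
        this.full = new BitSet(pagesPerBlock);
    }

    /** Update the bit after a put/remove changed the page's fill factor. */
    void onPageChanged(int pageIdx, double freeSpaceFraction) {
        full.set(pageIdx, freeSpaceFraction < freeThreshold);
    }

    /** First page in the block with enough free space, or -1 if none. */
    int firstFreePage() {
        int idx = full.nextClearBit(0);
        return idx < pagesPerBlock ? idx : -1;
    }
}
```

Compared to free lists, the appeal is size: one bit per page instead of list
nodes, and a lookup is a scan for the first clear bit.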
> >> >
> >> > --AG
> >> >
> >>
> >
> >
> > --
> >
> > Best regards,
> > Alexei Scherbakov
> >
>
>
> --
>
> Best regards,
> Alexei Scherbakov
>


[jira] [Created] (IGNITE-12241) Found long running cache future and cluster is not responding

2019-09-30 Thread Gangaiah (Jira)
Gangaiah  created IGNITE-12241:
--

 Summary: Found long running cache future and cluster is not 
responding
 Key: IGNITE-12241
 URL: https://issues.apache.org/jira/browse/IGNITE-12241
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.6
Reporter: Gangaiah 


The cluster is not responding. Clients are able to connect, but cache
operations hang. We have found only 'Found long running cache future'
warnings in the logs; the logs are below.

[2019-09-27T11:42:53,314][WARN 
][grid-timeout-worker-#263%EDIFCustomer_DR%][diagnostic] Found long running 
cache future [startTime=11:30:23.356, curTime=11:42:53.288, 
fut=GridDhtAtomicSingleUpdateFuture [allUpdated=true, 
super=GridDhtAtomicAbstractUpdateFuture [futId=594757419, resCnt=0, 
addedReader=false, dhtRes=\{b2fbba87-149d-4a75-b479-042c3bd62f0a=[res=false, 
size=1, nearSize=0]}]]]
[2019-09-27T11:42:53,315][WARN 
][grid-timeout-worker-#263%EDIFCustomer_DR%][diagnostic] Found long running 
cache future [startTime=11:30:38.851, curTime=11:42:53.288, 
fut=GridDhtAtomicSingleUpdateFuture [allUpdated=true, 
super=GridDhtAtomicAbstractUpdateFuture [futId=594757421, resCnt=0, 
addedReader=false, dhtRes=\{b2fbba87-149d-4a75-b479-042c3bd62f0a=[res=false, 
size=1, nearSize=0]}]]]
[2019-09-27T11:42:53,315][WARN 
][grid-timeout-worker-#263%EDIFCustomer_DR%][diagnostic] Found long running 
cache future [startTime=11:30:33.772, curTime=11:42:53.288, 
fut=GridDhtAtomicSingleUpdateFuture [allUpdated=true, 
super=GridDhtAtomicAbstractUpdateFuture [futId=594757420, resCnt=0, 
addedReader=false, dhtRes=\{b2fbba87-149d-4a75-b479-042c3bd62f0a=[res=false, 
size=1, nearSize=0]}]]]
[2019-09-27T11:42:53,316][WARN 
][grid-timeout-worker-#263%EDIFCustomer_DR%][diagnostic] Found long running 
cache future [startTime=11:31:23.319, curTime=11:42:53.288, 
fut=GridDhtAtomicSingleUpdateFuture [allUpdated=true, 
super=GridDhtAtomicAbstractUpdateFuture [futId=594524109, resCnt=0, 
addedReader=false, dhtRes=\{b2fbba87-149d-4a75-b479-042c3bd62f0a=[res=false, 
size=1, nearSize=0]}]]]
[2019-09-27T11:42:53,316][WARN 
][grid-timeout-worker-#263%EDIFCustomer_DR%][diagnostic] Found long running 
cache future [startTime=11:30:32.730, curTime=11:42:53.288, 
fut=GridDhtAtomicSingleUpdateFuture [allUpdated=true, 
super=GridDhtAtomicAbstractUpdateFuture [futId=594524108, resCnt=0, 
addedReader=false, dhtRes=\{b2fbba87-149d-4a75-b479-042c3bd62f0a=[res=false, 
size=1, nearSize=0]}]]]
[2019-09-27T11:42:53,316][WARN 
][grid-timeout-worker-#263%EDIFCustomer_DR%][diagnostic] Found long running 
cache future [startTime=11:30:15.783, curTime=11:42:53.288, 
fut=GridDhtAtomicSingleUpdateFuture [allUpdated=true, 
super=GridDhtAtomicAbstractUpdateFuture [futId=594524107, resCnt=0, 
addedReader=false, dhtRes=\{b2fbba87-149d-4a75-b479-042c3bd62f0a=[res=false, 
size=1, nearSize=0]}]]]
[2019-09-27T11:42:53,316][WARN 
][grid-timeout-worker-#263%EDIFCustomer_DR%][diagnostic] Found long running 
cache future [startTime=11:30:27.037, curTime=11:42:53.288, 
fut=GridDhtAtomicSingleUpdateFuture [allUpdated=true, 
super=GridDhtAtomicAbstractUpdateFuture [futId=594406845, resCnt=0, 
addedReader=false, dhtRes=\{b2fbba87-149d-4a75-b479-042c3bd62f0a=[res=false, 
size=1, nearSize=0]}]]]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)