Re: Apache Ignite 2.8.0 spring services configuration null fields

2020-03-23 Thread Evgenii Zhuravlev
Hi,

I tried to reproduce the behaviour you described, but everything works fine
for me. Please check if I missed something:
https://github.com/ezhuravl/ignite-code-examples/tree/master/src/main/java/examples/service/parameters
https://github.com/ezhuravl/ignite-code-examples/blob/master/src/main/resources/config/service-parameters-example.xml

Evgenii

On Mon, Mar 23, 2020 at 13:54, myset wrote:

> Hi,
> In the 2.8.0 Maven version of ignite-core, I encountered an issue with the
> Spring service configuration.
> All service property fields (e.g. testField1, testField2) are null at
> runtime in TestServiceImpl#init() or execute().
> In version 2.7.6 everything works well.
>
> Ex.
> ...
>
> <bean class="org.apache.ignite.services.ServiceConfiguration">
>     ...
>     <property name="service">
>         <bean class="TestServiceImpl">
>             <property name="testField1" value="Field 1 value"/>
>             <property name="testField2" value="Field 2 value"/>
>         </bean>
>     </property>
>     ...
> </bean>
> ...
>
>
> Thank you.
>
>
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>


Apache Ignite 2.8.0 spring services configuration null fields

2020-03-23 Thread myset
Hi,
In the 2.8.0 Maven version of ignite-core, I encountered an issue with the
Spring service configuration.
All service property fields (e.g. testField1, testField2) are null at
runtime in TestServiceImpl#init() or execute().
In version 2.7.6 everything works well.

Ex.
...
<bean class="org.apache.ignite.services.ServiceConfiguration">
    ...
    <property name="service">
        <bean class="TestServiceImpl">
            <property name="testField1" value="Field 1 value"/>
            <property name="testField2" value="Field 2 value"/>
        </bean>
    </property>
    ...
</bean>
...
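
For reference, a minimal service implementation matching the configuration above
could look like the following (only the field names are taken from this report;
the class body is an illustrative sketch, not the actual code):

import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class TestServiceImpl implements Service {
    /** Injected from the Spring XML above; reported as null at runtime in 2.8.0. */
    private String testField1;
    private String testField2;

    public void setTestField1(String testField1) { this.testField1 = testField1; }

    public void setTestField2(String testField2) { this.testField2 = testField2; }

    @Override public void init(ServiceContext ctx) {
        // Expected: "Field 1 value" / "Field 2 value"; observed in 2.8.0: null / null.
        System.out.println("init: testField1=" + testField1 + ", testField2=" + testField2);
    }

    @Override public void execute(ServiceContext ctx) {
        System.out.println("execute: testField1=" + testField1 + ", testField2=" + testField2);
    }

    @Override public void cancel(ServiceContext ctx) {
        // No-op.
    }
}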


Thank you.





--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


Re: Data vanished from cluster after INACTIVE/ACTIVE switch

2020-03-23 Thread Nikolay Izhikov
Hello, Ivan.

> Seems like we don't have a final agreement on whether we should add force
> flag to deactivation API.

I think the only question is whether we need the --force flag in the Java API or not.


> As a final solution, I'd want to see behavior when all in-memory data is 
> available after deactivation and further activation. 

Agree.

> I believe it’s possible to don't deallocate memory 
> (like mentioned before, we already can achieve that with 
> IGNITE_REUSE_MEMORY_ON_DEACTIVATE=true) and carefully reuse all loaded data 
> pages on next activation and caches start.

As far as I know, IGNITE_REUSE_MEMORY_ON_DEACTIVATE serves a *different* purpose.
Can you provide a simple reproducer in which in-memory data is not cleared on
deactivation?

> Considering this, do we really need to introduce force flag as a temporary 
> precaution?

My answer is yes, we need it.
Right now, we can't prevent data loss on deactivation for in-memory caches.

For me, the ultimate value of Ignite in a real production environment is user
data.
If there are cases in which data is lost, we should do everything we can to avoid them.

So, for now, this flag is required.
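
To illustrate the data loss in question, a minimal sketch (single pure in-memory
node, 2.8 activation API; the class and cache names are made up):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class DeactivationDataLossSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start(new IgniteConfiguration());

        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("in-memory-cache");
        cache.put(1, "value");

        ignite.cluster().active(false); // Deactivation currently clears in-memory data regions.
        ignite.cluster().active(true);

        // On a pure in-memory cluster the cache (and its data) is gone after reactivation.
        System.out.println(ignite.cache("in-memory-cache"));
    }
}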

> I suggest to rollback [2] from AI master, stop working on [1] and focus on 
> how to implement keeping in-memory data after the deactivation.

I think we can remove the --force flag only after the implementation of keeping
in-memory data on deactivation is finished.
If that happens before 2.9, then we can have a cleaner API.

> On Mar 23, 2020, at 18:47, Ivan Rakov wrote:
> 
> Folks,
> 
> Let's revive this discussion until it's too late and all API changes are
> merged to master [1].
> Seems like we don't have a final agreement on whether we should add force
> flag to deactivation API.
> 
> First of all, I think we are all on the same page that in-memory caches
> data vanishing on deactivation is counter-intuitive and dangerous. As a
> final solution, I'd want to see behavior when all in-memory data is
> available after deactivation and further activation. I believe it's
> possible to don't deallocate memory (like mentioned before, we already can
> achieve that with IGNITE_REUSE_MEMORY_ON_DEACTIVATE=true) and carefully
> reuse all loaded data pages on next activation and caches start.
> 
> Also, this is a wider question, but: do we understand what cluster
> deactivation is actually intended for? I can only think of two cases:
> - graceful cluster shutdown: an ability to cut checkpoints and to end
> transactional load consistently prior to further stop of all nodes
> - blocking all API (both reads and writes) due to some maintenance
> Neither of them requires forcefully clearing all in-memory data on
> deactivation. If everyone agrees, from now on we should assume data
> clearing as system design flaw that should be fixed, not as possible
> scenario which we should support on the API level.
> 
> Considering this, do we really need to introduce force flag as a temporary
> precaution? I have at least two reasons against it:
> 1) Once API was changed and released, we have to support it until the next
> major release. If we all understand that data vanishing issue is fixable, I
> believe we shouldn't engrave in the API flags that will become pointless.
> 2) More personal one, but I'm against any force flags in the API. This
> makes API harder to understand; more than that, the presence of such flags
> just highlights that API is poorly designed.
> 
> I suggest to rollback [2] from AI master, stop working on [1] and focus on
> how to implement keeping in-memory data after the deactivation.
> I think we can still require user consent for deactivation via control.sh
> (it already requires --yes) and JMX.
> 
> Thoughts?
> 
> [1]: https://issues.apache.org/jira/browse/IGNITE-12614
> [2]: https://issues.apache.org/jira/browse/IGNITE-12701
> 
> --
> Ivan
> 
> 
> On Tue, Mar 17, 2020 at 2:26 PM Vladimir Steshin  wrote:
> 
>> Nikolay, I think we should reconsider clearing at least system caches
>> when deactivating.
>> 
>> On 17.03.2020 14:18, Nikolay Izhikov wrote:
>>> Hello, Vladimir.
>>> 
>>> I don’t get it.
>>> 
>>> What is your proposal?
>>> What we should do?
>>> 
 On Mar 17, 2020, at 14:11, Vladimir Steshin
>> wrote:
 
 Nikolay, hi.
 
>>> And should be covered with the  —force parameter we added.
 As fix for user cases - yes. My idea is to emphasize overall ability to
>> lose various objects, not only data. Probably might be reconsidered in
>> future.
 
 
 On 17.03.2020 13:49, Nikolay Izhikov wrote:
> Hello, Vladimir.
> 
> If there is at least one persistent data region then the system data
>> region also becomes persistent.
> Your example applies only to pure in-memory clusters.
> 
> And should be covered with the —force parameter we added.
> 
> What do you think?
> 
>> On Mar 17, 2020, at 13:45, Vladimir Steshin
>> wrote:
>> 
>> Hi, all.
>> 
>> Fixes for control.sh and the REST have been merged. 

Re: IGNITE-12702 - Discussion

2020-03-23 Thread Denis Magda
Hi Kartik,

Actually, it means quite the opposite. You should fill in the release notes
field before the ticket is resolved/closed so that the release manager of the
next Ignite version can add a proper line to the final release notes with all
the changes.

As for the docs, it's highly likely you don't need to update the Ignite
technical documentation (apacheignite.readme.io), so that flag can be
unset. However, check whether the APIs (Javadocs) require an update. Hope this
helps. By the way, this page elaborates on our current docs process, which is
being revisited and improved soon:
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Document

-
Denis


On Sun, Mar 22, 2020 at 6:33 AM kartik somani <
kartiksomani.offic...@gmail.com> wrote:

> Hello Igniters,
>
> I want to start working on ticket IGNITE-12702
> .  The flags on this
> issue are "Docs Required, Release Notes Required". Does this mean I have to
> first write/get docs and release notes before starting work on this?
>
> Regards,
> Kartik
>


Re: Hello Igniters

2020-03-23 Thread Denis Magda
Hi Kartik, welcome back to the community! Let's work together this time to
help you drive your contributions to a logical end ;)

Maksim, thanks for stepping in and offering extra support.

-
Denis


On Sun, Mar 22, 2020 at 6:21 AM Maksim Stepachev 
wrote:

> Hi, you are welcome!
>
> If you have any questions, send them to this list or you can reach me on
> Skype.
>
> On Sun, Mar 22, 2020 at 15:48, kartik somani <
> kartiksomani.offic...@gmail.com
> >:
>
> > Hello,
> >
> > I am Kartik. I have probably sent an introduction years ago, but never
> > could continue. This time I want to start and continue contributing to
> > Ignite with full zeal.
> > I'm currently working in a startup in India, prior to this I was working
> > with Amazon. I have a total of 7 years of work experience as software
> > engineer, out of which 3+ years on Java.
> >
> > I found the issue IGNITE-12702
> > <
> >
> https://issues.apache.org/jira/browse/IGNITE-12702?jql=project%20%3D%20IGNITE%20AND%20labels%20in%20(newbie)%20and%20status%20%3D%20OPEN
> > >
> > worth trying for starting out. Any help, comment, guidance would be
> > appreciated. Looking forward to working with all igniters.
> >
> > regards,
> > Kartik
> >
>


Re: contribution permissions

2020-03-23 Thread Denis Magda
Oleg, welcome to the community! I've added you to the contributors' list in
JIRA.

Look forward to seeing your first contribution completed shortly. Let us
know if you have any questions.

-
Denis


On Mon, Mar 23, 2020 at 6:22 AM Oleg Ostanin 
wrote:

> Hello Ignite Community!
>
> My name is Oleg. I want to contribute to Apache Ignite and want to
> start with this issue - IGNITE-12832, my JIRA username oleg-a-ostanin.
> Any help on this will be appreciated.
>
> Thanks!
>


Re: Security Subject of thin client on remote nodes

2020-03-23 Thread Denis Garus
Hello, Alexei, Ivan!

>> Seems like security API is indeed a bit over-engineered

Nobody doubts that we should rework GridSecurityProcessor.
But this point is outside the scope of the thin client problem that we are
solving.
I think we can create a new IEP that will accumulate all the ideas for
improving Ignite's security.

>> Presence of the separate #securityContext(UUID) highlights that user
>> indeed should care
>> about propagation of thin clients' contexts between the cluster nodes.

I agree with Ivan. I've implemented both variants,
and I like the one with #securityContext(UUID) more.

Could you please take a look at PR [1] for the issue [2]?

1. https://github.com/apache/ignite/pull/7523
2. https://issues.apache.org/jira/browse/IGNITE-12759
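
For readers following along, a rough sketch of the two alternatives being
compared (the method signatures follow this thread; the interface names and
their placement are illustrative, not the actual PR):

import java.util.UUID;
import org.apache.ignite.internal.processors.security.SecurityContext;
import org.apache.ignite.plugin.security.AuthenticationContext;

/** Variant from PR [1]: resolve an already authenticated thin client's context by subject id. */
interface SecurityContextByIdSketch {
    SecurityContext securityContext(UUID subjId);
}

/** Alternative variant: re-authenticate on every node, requiring #authenticate to be cluster-wide idempotent. */
interface ReAuthenticateSketch {
    SecurityContext authenticate(AuthenticationContext authCtx);
}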

On Mon, Mar 23, 2020 at 11:45, Ivan Rakov wrote:

> Alex, Denis,
>
> Seems like security API is indeed a bit over-engineered.
>
> Let's get rid of SecurityContext and use SecuritySubject instead.
> > SecurityContext is just a POJO wrapper over
> > SecuritySubject's
> > org.apache.ignite.plugin.security.SecuritySubject#permissions.
> > It's functionality can be easily moved to SecuritySubject.
>
> I totally agree. Both subject and context are implemented by plugin
> provider, and I don't see any reason to keep both abstractions, especially
> if we are going to get rid of transferring subject in node attributes
> (argument that subject is more lightweight won't work anymore).
>
> Also, there's kind of mess in node authentication logic. There are at least
> two components responsible for it: DiscoverySpiNodeAuthenticator (which is
> forcibly set by GridDiscoveryManager, but in fact public) and
> GridSecurityProcessor (which performs actual node auth logic, but private).
> I also don't understand why we need both
> #authenticate(AuthenticationContext) and #authenticateNode(ClusterNode,
> SecurityCredentials) methods while it's possible to set explicit
> SecuritySubjectType.REMOTE_NODE in AuthenticationContext (this is arguable;
> perhaps there are strong reasons).
>
> Finally, areas of responsibility between IgniteSecurity and
> GridSecurityProcessor are kind of mixed. As far as I understand, the first
> is responsible for Ignite-internal management of security logic (keeping
> thread-local context, caching security contexts, etc; we don't expect
> IgniteSecurity to be replaced by plugin provider) and the latter is
> responsible for user-custom authentication / authorization logic. To be
> honest, it took plenty of time to figure this out for me.
>
> From my point of view, we should make GridSecurityProcessor interface
> public, rename it (it requires plenty of time to find the difference from
> IgniteSecurity), make its API as simple and non-duplicating as possible and
> clarify its area of responsibility (e.g. should it be responsible for
> propagation of successfully authenticated subject among all nodes or not?)
> to make it easy to embed custom security logic in Ignite.
>
> Regarding thin clients fix: implementation made by Denis suits better to
> the very implicit contract that it's better to change API contracts of an
> internal IgniteSecurity than of internal GridSecurityProcessor (which
> actually mustn't be internal).
>
> > My approach doesn't require any IEPs, just minor change in code and to
> >
> >
> org.apache.ignite.internal.processors.security.IgniteSecurity#authenticate(AuthenticationContext)
> > contract.
>
> Looks like a misuse of #authenticate method to me. It should perform
> initial authentication based on credentials (this may include queries to
> external authentication subsystem, e.g. LDAP). User may want to don't
> authenticate thin client on every node (this will increase the number of
> requests to auth subsystem unless user implicitly implements propagation of
> thin clients' contexts between nodes and make #authenticate cluster-wide
> idempotent: first call should perform actual authentication, next calls
> should retrieve context of already authenticated client). Presence of the
> separate #securityContext(UUID) highlights that user indeed should care
> about propagation of thin clients' contexts between the cluster nodes.
>
> --
> Ivan
>
> On Fri, Mar 20, 2020 at 12:22 PM Veena Mithare 
> wrote:
>
> > Hi Alexei, Denis,
> >
> > One of the main usecases of thin client authentication is to be able to
> > audit the changes done using the thin client user.
> > To enable that :
> > We really need to resolve this concern as well :
> > https://issues.apache.org/jira/browse/IGNITE-12781
> >
> > ( Incorrect security subject id is  associated with a cache_put event
> > when the originator of the event is a thin client. )
> >
> > Regards,
> > Veena
> >
> >
> > -Original Message-
> > From: Alexei Scherbakov 
> > Sent: 18 March 2020 08:11
> > To: dev 
> > Subject: Re: Security Subject of thin client on remote nodes
> >
> > Denis Garus,
> >
> > Both variants are capable of solving the thin client security context
> > problem.
> >
> > My approach doesn't 

Re: Introduce separate SQL configuration

2020-03-23 Thread Ilya Kasnacheev
Hello!

I think that we should defer changes to IgniteConfiguration to AI 3.0.

Maybe it will motivate us to start work on it.

Just my opinion, without any reservations.

Regards.
-- 
Ilya Kasnacheev


On Mon, Mar 23, 2020 at 13:04, Taras Ledkov wrote:

> Hi, Igniters.
>
> I propose to introduce separate configuration class for SQL configuration.
> e.g. SqlInitailConfiguration.
>
> Now we have several SQL parameters:
> - sqlSchemas;
> - defaultQueryTimeout;
> - longQueryWarningTimeout;
> - sqlQueryHistorySize;
>
> are placed at the IgniteConfiguration.
> I propose to deprecate the SQL properties in the IgniteConfiguration and
> redirect the calls to child configuration object of
> SqlInitailConfiguration.
>
> We are going to add several additional parameters in the nearest future.
> I guess separate configuration is better than many plain SQL parameters
> at the root IgniteConfiguration.
>
> I insist on the configuration be called 'Initial' because many
> parameters may be changed in runtime
> (e.g. longQueryWarningTimeout by JMX, set of schemas by creating caches
> etc).
>
> I've created PR [1] that contains proposal changes. The patch is not
> final clean.
> I think PR may be useful to demonstrate the proposal.
>
> Please let me know your opinion.
>
> [1]. https://github.com/apache/ignite/pull/7559
>
>
> --
> Taras Ledkov
> Mail-To: tled...@gridgain.com
>
>


Re: Data vanished from cluster after INACTIVE/ACTIVE switch

2020-03-23 Thread Ivan Rakov
Folks,

Let's revive this discussion until it's too late and all API changes are
merged to master [1].
Seems like we don't have a final agreement on whether we should add force
flag to deactivation API.

First of all, I think we are all on the same page that in-memory caches
data vanishing on deactivation is counter-intuitive and dangerous. As a
final solution, I'd want to see behavior when all in-memory data is
available after deactivation and further activation. I believe it's
possible not to deallocate memory (as mentioned before, we can already
achieve that with IGNITE_REUSE_MEMORY_ON_DEACTIVATE=true) and carefully
reuse all loaded data pages on next activation and caches start.
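
(A minimal sketch of how that property is applied today; the property name is
as quoted above, everything else is illustrative:)

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ReuseMemorySketch {
    public static void main(String[] args) {
        // Keep data regions allocated across deactivation (can also be passed as a -D JVM option).
        // Whether the loaded pages can then be reused on the next activation is the open question here.
        System.setProperty("IGNITE_REUSE_MEMORY_ON_DEACTIVATE", "true");

        Ignite ignite = Ignition.start(new IgniteConfiguration());

        ignite.cluster().active(false);
        ignite.cluster().active(true);
    }
}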

Also, this is a wider question, but: do we understand what cluster
deactivation is actually intended for? I can only think of two cases:
- graceful cluster shutdown: an ability to cut checkpoints and to end
transactional load consistently prior to further stop of all nodes
- blocking all API (both reads and writes) due to some maintenance
Neither of them requires forcefully clearing all in-memory data on
deactivation. If everyone agrees, from now on we should assume data
clearing as system design flaw that should be fixed, not as possible
scenario which we should support on the API level.

Considering this, do we really need to introduce a force flag as a temporary
precaution? I have at least two reasons against it:
1) Once the API is changed and released, we have to support it until the next
major release. If we all understand that the data vanishing issue is fixable, I
believe we shouldn't enshrine flags in the API that will become pointless.
2) More personal one, but I'm against any force flags in the API. This
makes API harder to understand; more than that, the presence of such flags
just highlights that API is poorly designed.

I suggest rolling back [2] from AI master, stopping work on [1], and focusing on
how to implement keeping in-memory data after the deactivation.
I think we can still require user consent for deactivation via control.sh
(it already requires --yes) and JMX.

Thoughts?

[1]: https://issues.apache.org/jira/browse/IGNITE-12614
[2]: https://issues.apache.org/jira/browse/IGNITE-12701

--
Ivan


On Tue, Mar 17, 2020 at 2:26 PM Vladimir Steshin  wrote:

> Nikolay, I think we should reconsider clearing at least system caches
> when deactivating.
>
> On 17.03.2020 14:18, Nikolay Izhikov wrote:
> > Hello, Vladimir.
> >
> > I don’t get it.
> >
> > What is your proposal?
> > What we should do?
> >
> >> On Mar 17, 2020, at 14:11, Vladimir Steshin
> wrote:
> >>
> >> Nikolay, hi.
> >>
> > And should be covered with the  —force parameter we added.
> >> As fix for user cases - yes. My idea is to emphasize overall ability to
> lose various objects, not only data. Probably might be reconsidered in
> future.
> >>
> >>
> >> On 17.03.2020 13:49, Nikolay Izhikov wrote:
> >>> Hello, Vladimir.
> >>>
> >>> If there is at least one persistent data region then the system data
> region also becomes persistent.
> >>> Your example applies only to pure in-memory clusters.
> >>>
> >>> And should be covered with the —force parameter we added.
> >>>
> >>> What do you think?
> >>>
>  On Mar 17, 2020, at 13:45, Vladimir Steshin
> wrote:
> 
>   Hi, all.
> 
>  Fixes for control.sh and the REST have been merged. Could anyone take
> a look at the previous email with the issue? Isn't this conduct very weird?
> 
>


Re: Introduce separate SQL configuration

2020-03-23 Thread Taras Ledkov

Hi,

> It would be nice to provide a list of properties that can be changed 
> at runtime


OK, I get your and Pavel's point.
Maybe add a javadoc note for each runtime-changeable parameter instead of a list?

e.g. for longQueryWarningTimeout:
See {@link org.apache.ignite.internal.mxbean.SqlQueryMXBean#setLongQueryWarningTimeout}
to change this parameter at runtime.

But we cannot reference internal packages in the public API.
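
A small sketch of how such a per-parameter note could read (the class and field
names here are placeholders, not the proposed API):

public class SqlConfigurationSketch {
    /** Initial long query warning timeout, in milliseconds. */
    private long longQryWarnTimeout = 3000;

    /**
     * Sets the timeout after which long-running queries are logged with a warning.
     * This is only the initial value: it can be changed at runtime via the SQL query
     * MX bean's setLongQueryWarningTimeout, which cannot be referenced with a javadoc
     * link here because that bean lives in an internal package.
     */
    public SqlConfigurationSketch setLongQueryWarningTimeout(long timeout) {
        longQryWarnTimeout = timeout;

        return this;
    }
}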

On 23.03.2020 17:51, Вячеслав Коптилин wrote:

Hello Taras,

I think this is the right way.
I agree with Pavel's comment. It would be nice to provide a list of
properties that can be changed at runtime and remove the word “Initial”
from the class name.

Thanks,
S.

On Mon, Mar 23, 2020 at 16:54, Pavel Tupitsyn wrote:


Hi Taras,

I support the idea, it brings some order to the IgniteConfiguration.

However, I have strong objections to `Initial` prefix:

many parameters may be changed in runtime

This applies to many other config properties, but we don't name them
`Initial`

Let's just make it SqlConfiguration.

On Mon, Mar 23, 2020 at 1:04 PM Taras Ledkov  wrote:


Hi, Igniters.

I propose to introduce separate configuration class for SQL

configuration.

e.g. SqlInitailConfiguration.

Now we have several SQL parameters:
- sqlSchemas;
- defaultQueryTimeout;
- longQueryWarningTimeout;
- sqlQueryHistorySize;

are placed at the IgniteConfiguration.
I propose to deprecate the SQL properties in the IgniteConfiguration and
redirect the calls to child configuration object of
SqlInitailConfiguration.

We are going to add several additional parameters in the nearest future.
I guess separate configuration is better than many plain SQL parameters
at the root IgniteConfiguration.

I insist on the configuration be called 'Initial' because many
parameters may be changed in runtime
(e.g. longQueryWarningTimeout by JMX, set of schemas by creating caches
etc).

I've created PR [1] that contains proposal changes. The patch is not
final clean.
I think PR may be useful to demonstrate the proposal.

Please let me know your opinion.

[1]. https://github.com/apache/ignite/pull/7559


--
Taras Ledkov
Mail-To: tled...@gridgain.com



--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Introduce separate SQL configuration

2020-03-23 Thread Вячеслав Коптилин
Hello Taras,

I think this is the right way.
I agree with Pavel's comment. It would be nice to provide a list of
properties that can be changed at runtime and remove the word “Initial”
from the class name.

Thanks,
S.

On Mon, Mar 23, 2020 at 16:54, Pavel Tupitsyn wrote:

> Hi Taras,
>
> I support the idea, it brings some order to the IgniteConfiguration.
>
> However, I have strong objections to `Initial` prefix:
> > many parameters may be changed in runtime
> This applies to many other config properties, but we don't name them
> `Initial`
>
> Let's just make it SqlConfiguration.
>
> On Mon, Mar 23, 2020 at 1:04 PM Taras Ledkov  wrote:
>
> > Hi, Igniters.
> >
> > I propose to introduce separate configuration class for SQL
> configuration.
> > e.g. SqlInitailConfiguration.
> >
> > Now we have several SQL parameters:
> > - sqlSchemas;
> > - defaultQueryTimeout;
> > - longQueryWarningTimeout;
> > - sqlQueryHistorySize;
> >
> > are placed at the IgniteConfiguration.
> > I propose to deprecate the SQL properties in the IgniteConfiguration and
> > redirect the calls to child configuration object of
> > SqlInitailConfiguration.
> >
> > We are going to add several additional parameters in the nearest future.
> > I guess separate configuration is better than many plain SQL parameters
> > at the root IgniteConfiguration.
> >
> > I insist on the configuration be called 'Initial' because many
> > parameters may be changed in runtime
> > (e.g. longQueryWarningTimeout by JMX, set of schemas by creating caches
> > etc).
> >
> > I've created PR [1] that contains proposal changes. The patch is not
> > final clean.
> > I think PR may be useful to demonstrate the proposal.
> >
> > Please let me know your opinion.
> >
> > [1]. https://github.com/apache/ignite/pull/7559
> >
> >
> > --
> > Taras Ledkov
> > Mail-To: tled...@gridgain.com
> >
> >
>


Re: Introduce separate SQL configuration

2020-03-23 Thread Pavel Tupitsyn
Hi Taras,

I support the idea, it brings some order to the IgniteConfiguration.

However, I have strong objections to the `Initial` prefix:
> many parameters may be changed in runtime
This applies to many other config properties, but we don't name them
`Initial`.

Let's just make it SqlConfiguration.
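
As a sketch of how that would look from user code (the class and setters follow
the property list in the proposal below and are assumptions, not an existing API):

// Proposed (not yet existing) configuration class grouping the SQL parameters listed below.
SqlConfiguration sqlCfg = new SqlConfiguration()
    .setSqlSchemas("PUBLIC", "MY_SCHEMA")
    .setDefaultQueryTimeout(10_000)
    .setLongQueryWarningTimeout(3_000)
    .setSqlQueryHistorySize(1_000);

IgniteConfiguration cfg = new IgniteConfiguration()
    .setSqlConfiguration(sqlCfg); // Would replace the deprecated per-property setters.

Ignition.start(cfg);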

On Mon, Mar 23, 2020 at 1:04 PM Taras Ledkov  wrote:

> Hi, Igniters.
>
> I propose to introduce separate configuration class for SQL configuration.
> e.g. SqlInitailConfiguration.
>
> Now we have several SQL parameters:
> - sqlSchemas;
> - defaultQueryTimeout;
> - longQueryWarningTimeout;
> - sqlQueryHistorySize;
>
> are placed at the IgniteConfiguration.
> I propose to deprecate the SQL properties in the IgniteConfiguration and
> redirect the calls to child configuration object of
> SqlInitailConfiguration.
>
> We are going to add several additional parameters in the nearest future.
> I guess separate configuration is better than many plain SQL parameters
> at the root IgniteConfiguration.
>
> I insist on the configuration be called 'Initial' because many
> parameters may be changed in runtime
> (e.g. longQueryWarningTimeout by JMX, set of schemas by creating caches
> etc).
>
> I've created PR [1] that contains proposal changes. The patch is not
> final clean.
> I think PR may be useful to demonstrate the proposal.
>
> Please let me know your opinion.
>
> [1]. https://github.com/apache/ignite/pull/7559
>
>
> --
> Taras Ledkov
> Mail-To: tled...@gridgain.com
>
>


contribution permissions

2020-03-23 Thread Oleg Ostanin
Hello Ignite Community!

My name is Oleg. I want to contribute to Apache Ignite, starting with
this issue: IGNITE-12832. My JIRA username is oleg-a-ostanin.
Any help on this will be appreciated.

Thanks!


[jira] [Created] (IGNITE-12832) Add user attributes support to control.sh

2020-03-23 Thread Andrey Kuznetsov (Jira)
Andrey Kuznetsov created IGNITE-12832:
-

 Summary: Add user attributes support to control.sh
 Key: IGNITE-12832
 URL: https://issues.apache.org/jira/browse/IGNITE-12832
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 2.8
Reporter: Andrey Kuznetsov


Change [1] introduced user attributes for various thin clients. The
{{control.sh|bat}} script uses {{GridClient}} to connect to the cluster, but it is
currently impossible to set user attributes in the corresponding
{{GridClientConfiguration}}. I suggest adding such an ability via an
{{--attr-some-attr-name=attrValue}} command line option.

To prevent command line pollution, I also suggest implementing {{.properties}}
file support, so that command line arguments (including {{--attr*}} arguments)
could be hidden in a file specified by {{--config filename.properties}}. In
case of duplication, explicit command line arguments should take precedence over
arguments set in the {{.properties}} file.

[1] https://issues.apache.org/jira/browse/IGNITE-12049
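
For comparison, a sketch of what [1] enables for the Java thin client; the
user-attributes setter on ClientConfiguration is assumed from that change, while
GridClientConfiguration has no analogue yet:

import java.util.Collections;
import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientAttrsSketch {
    public static void main(String[] args) {
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("127.0.0.1:10800")
            // Assumed setter introduced by [1]; control.sh/GridClient currently has nothing similar.
            .setUserAttributes(Collections.singletonMap("client-name", "control-utility"));

        try (IgniteClient client = Ignition.startClient(cfg)) {
            // Server-side security plugins can read these attributes during the handshake.
            client.cacheNames();
        }
    }
}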



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-12831) Invoking destroy of local cache on one node destroys local caches with the same name on all other nodes

2020-03-23 Thread Ilya Shishkov (Jira)
Ilya Shishkov created IGNITE-12831:
--

 Summary: Invoking destroy of local cache on one node destroys 
local caches with the same name on all other nodes
 Key: IGNITE-12831
 URL: https://issues.apache.org/jira/browse/IGNITE-12831
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.7.6, 2.8
Reporter: Ilya Shishkov
 Attachments: MyLocalCacheDestroyReproducer.java

If you create caches with cache mode CacheMode.LOCAL and the same name, but on
different nodes, then all of those caches will be destroyed after invoking destroy
of the cache on one of the cluster nodes.

Expected behaviour: the local cache should be destroyed only on the node invoking
destroy.

The reproducer is in the attachment.
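
A minimal sketch of the scenario (instance and cache names are illustrative; the
actual reproducer is attached to the ticket):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class LocalCacheDestroySketch {
    public static void main(String[] args) {
        Ignite node1 = Ignition.start(new IgniteConfiguration().setIgniteInstanceName("node1"));
        Ignite node2 = Ignition.start(new IgniteConfiguration().setIgniteInstanceName("node2"));

        CacheConfiguration<Integer, Integer> ccfg =
            new CacheConfiguration<Integer, Integer>("local-cache").setCacheMode(CacheMode.LOCAL);

        IgniteCache<Integer, Integer> cache1 = node1.getOrCreateCache(ccfg);
        IgniteCache<Integer, Integer> cache2 = node2.getOrCreateCache(ccfg);

        cache2.put(1, 1);

        cache1.destroy();

        // Expected: only node1's local cache is destroyed.
        // Observed (as described above): the cache disappears on node2 as well.
        System.out.println("node2 cache after destroy on node1: " + node2.cache("local-cache"));
    }
}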



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Re[2]: Discuss idle_verify with moving partitions changes.

2020-03-23 Thread Sergey Antonov
Zhenya, Vlad's message reads incorrectly. How about something like: "Not
all partitions were checked. The check of MOVING
partitions was skipped because rebalancing is in progress. Please rerun
idle_verify after rebalancing finishes."

On Mon, Mar 23, 2020 at 12:47, Zhenya Stanilovsky wrote:

>
> Guys thank for quick response, Ivan what do you think about Vlad`s
> proposal to add additional info like :
> "Possible results are not consistent due to rebalance still in progress" ?
> Thanks !
>
> >Monday, March 23, 2020, 12:30 +03:00 from Ivan Rakov <
> ivan.glu...@gmail.com>:
> >
> >Zhenya,
> >
> >As for me, the current behavior of idle_verify looks correct.
> >There's no sense in checking MOVING partitions (on which we explicitly
> >inform user), however checking consistency between the rest of owners
> still
> >makes sense: they still can diverge and we can be aware of the presence of
> >the conflicts sooner.
> >In case cluster is not idle (in terms of user activities, not in terms of
> >internal cluster processes like rebalancing), utility will fail as
> expected.
> >
> >On Mon, Mar 23, 2020 at 11:23 AM Vladislav Pyatkov <
> vpyat...@gridgain.com >
> >wrote:
> >
> >> Hi Zhenya,
> >>
> >> I see your point. Need to show some message, because cluster is not idle
> >> (rebalance is going).
> >> When cluster not idle we cannot validate partitions honestly. After
> several
> >> minutes we can to get absolutely different result, without any client's
> >> operation of cache happened.
> >>
> >> May be enough showing some message more clear for end user. For example:
> >> "Result has not valid, rebalance is going."
> >>
> >> Another thing you meaning - issue in indexes, when rebalance is
> following.
> >> I think idex_validate should fail in this case, because indexes always
> in
> >> load during rebalance.
> >>
> >>
> >> On Mon, Mar 23, 2020 at 10:20 AM Zhenya Stanilovsky
> >> < arzamas...@mail.ru.invalid > wrote:
> >>
> >> >
> >> > Igniters, i found that near idle check commands only shows partitions
> in
> >> > MOVING states as info in log and not take into account this fact as
> >> > erroneous idle cluster state.
> >> > control.sh --cache idle_verify, control.sh --cache validate_indexes
> >> > --check-crc
> >> >
> >> > for example command would show something like :
> >> >
> >> > Arguments: --cache idle_verify --yes
> >> >
> >> >
> >>
> 
> >> > idle_verify task was executed with the following args: caches=[],
> >> > excluded=[], cacheFilter=[DEFAULT]
> >> > idle_verify check has finished, no conflicts have been found.
> >> > Verification was skipped for 21 MOVING partitions:
> >> > Skipped partition: PartitionKeyV2 [grpId=1544803905, grpName=default,
> >> > partId=7]
> >> > Partition instances: [PartitionHashRecordV2 [isPrimary=false,
> >> > consistentId=gridCommandHandlerTest2, updateCntr=3,
> >> partitionState=MOVING,
> >> > state=MOVING]] .. and so on
> >> >
> >> > I found this erroneous and can lead to further cluster index
> corruption,
> >> > for example in case when only command OK result checked.
> >> >
> >> > If no objections would be here, i plan to inform about moving states
> as
> >> > not OK exit code too.
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Vladislav Pyatkov
> >> Architect-Consultant "GridGain Rus" Llc.
> >>  +7-929-537-79-60
> >>
>
>
>
>



-- 
BR, Sergey Antonov


Re: Re[2]: Discuss idle_verify with moving partitions changes.

2020-03-23 Thread Ivan Rakov
Partial results are consistent though.
I'd add something like "Possible results are not full" instead.

On Mon, Mar 23, 2020 at 12:47 PM Zhenya Stanilovsky
 wrote:

>
> Guys thank for quick response, Ivan what do you think about Vlad`s
> proposal to add additional info like :
> "Possible results are not consistent due to rebalance still in progress" ?
> Thanks !
>
> >Monday, March 23, 2020, 12:30 +03:00 from Ivan Rakov <
> ivan.glu...@gmail.com>:
> >
> >Zhenya,
> >
> >As for me, the current behavior of idle_verify looks correct.
> >There's no sense in checking MOVING partitions (on which we explicitly
> >inform user), however checking consistency between the rest of owners
> still
> >makes sense: they still can diverge and we can be aware of the presence of
> >the conflicts sooner.
> >In case cluster is not idle (in terms of user activities, not in terms of
> >internal cluster processes like rebalancing), utility will fail as
> expected.
> >
> >On Mon, Mar 23, 2020 at 11:23 AM Vladislav Pyatkov <
> vpyat...@gridgain.com >
> >wrote:
> >
> >> Hi Zhenya,
> >>
> >> I see your point. Need to show some message, because cluster is not idle
> >> (rebalance is going).
> >> When cluster not idle we cannot validate partitions honestly. After
> several
> >> minutes we can to get absolutely different result, without any client's
> >> operation of cache happened.
> >>
> >> May be enough showing some message more clear for end user. For example:
> >> "Result has not valid, rebalance is going."
> >>
> >> Another thing you meaning - issue in indexes, when rebalance is
> following.
> >> I think idex_validate should fail in this case, because indexes always
> in
> >> load during rebalance.
> >>
> >>
> >> On Mon, Mar 23, 2020 at 10:20 AM Zhenya Stanilovsky
> >> < arzamas...@mail.ru.invalid > wrote:
> >>
> >> >
> >> > Igniters, i found that near idle check commands only shows partitions
> in
> >> > MOVING states as info in log and not take into account this fact as
> >> > erroneous idle cluster state.
> >> > control.sh --cache idle_verify, control.sh --cache validate_indexes
> >> > --check-crc
> >> >
> >> > for example command would show something like :
> >> >
> >> > Arguments: --cache idle_verify --yes
> >> >
> >> >
> >>
> 
> >> > idle_verify task was executed with the following args: caches=[],
> >> > excluded=[], cacheFilter=[DEFAULT]
> >> > idle_verify check has finished, no conflicts have been found.
> >> > Verification was skipped for 21 MOVING partitions:
> >> > Skipped partition: PartitionKeyV2 [grpId=1544803905, grpName=default,
> >> > partId=7]
> >> > Partition instances: [PartitionHashRecordV2 [isPrimary=false,
> >> > consistentId=gridCommandHandlerTest2, updateCntr=3,
> >> partitionState=MOVING,
> >> > state=MOVING]] .. and so on
> >> >
> >> > I found this erroneous and can lead to further cluster index
> corruption,
> >> > for example in case when only command OK result checked.
> >> >
> >> > If no objections would be here, i plan to inform about moving states
> as
> >> > not OK exit code too.
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Vladislav Pyatkov
> >> Architect-Consultant "GridGain Rus" Llc.
> >>  +7-929-537-79-60
> >>
>
>
>
>


Introduce separate SQL configuration

2020-03-23 Thread Taras Ledkov

Hi, Igniters.

I propose to introduce a separate configuration class for SQL configuration,
e.g. SqlInitialConfiguration.

Now we have several SQL parameters:
- sqlSchemas;
- defaultQueryTimeout;
- longQueryWarningTimeout;
- sqlQueryHistorySize;

which are placed in the IgniteConfiguration.
I propose to deprecate the SQL properties in the IgniteConfiguration and
redirect the calls to a child configuration object, SqlInitialConfiguration.


We are going to add several additional parameters in the nearest future.
I guess a separate configuration is better than many plain SQL parameters
at the root IgniteConfiguration.


I insist on the configuration being called 'Initial' because many
parameters may be changed at runtime
(e.g. longQueryWarningTimeout by JMX, set of schemas by creating caches 
etc).


I've created PR [1] that contains the proposed changes. The patch is not
in its final, cleaned-up state.

I think PR may be useful to demonstrate the proposal.

Please let me know your opinion.

[1]. https://github.com/apache/ignite/pull/7559


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re[2]: Discuss idle_verify with moving partitions changes.

2020-03-23 Thread Zhenya Stanilovsky

Guys, thanks for the quick response. Ivan, what do you think about Vlad's proposal to
add additional info like:
"Possible results are not consistent due to rebalance still in progress"?
Thanks !
  
>Monday, March 23, 2020, 12:30 +03:00 from Ivan Rakov:
> 
>Zhenya,
>
>As for me, the current behavior of idle_verify looks correct.
>There's no sense in checking MOVING partitions (on which we explicitly
>inform user), however checking consistency between the rest of owners still
>makes sense: they still can diverge and we can be aware of the presence of
>the conflicts sooner.
>In case cluster is not idle (in terms of user activities, not in terms of
>internal cluster processes like rebalancing), utility will fail as expected.
>
>On Mon, Mar 23, 2020 at 11:23 AM Vladislav Pyatkov < vpyat...@gridgain.com >
>wrote:
> 
>> Hi Zhenya,
>>
>> I see your point. Need to show some message, because cluster is not idle
>> (rebalance is going).
>> When cluster not idle we cannot validate partitions honestly. After several
>> minutes we can to get absolutely different result, without any client's
>> operation of cache happened.
>>
>> May be enough showing some message more clear for end user. For example:
>> "Result has not valid, rebalance is going."
>>
>> Another thing you meaning - issue in indexes, when rebalance is following.
>> I think idex_validate should fail in this case, because indexes always in
>> load during rebalance.
>>
>>
>> On Mon, Mar 23, 2020 at 10:20 AM Zhenya Stanilovsky
>> < arzamas...@mail.ru.invalid > wrote:
>>
>> >
>> > Igniters, i found that near idle check commands only shows partitions in
>> > MOVING states as info in log and not take into account this fact as
>> > erroneous idle cluster state.
>> > control.sh --cache idle_verify, control.sh --cache validate_indexes
>> > --check-crc
>> >
>> > for example command would show something like :
>> >
>> > Arguments: --cache idle_verify --yes
>> >
>> >
>> 
>> > idle_verify task was executed with the following args: caches=[],
>> > excluded=[], cacheFilter=[DEFAULT]
>> > idle_verify check has finished, no conflicts have been found.
>> > Verification was skipped for 21 MOVING partitions:
>> > Skipped partition: PartitionKeyV2 [grpId=1544803905, grpName=default,
>> > partId=7]
>> > Partition instances: [PartitionHashRecordV2 [isPrimary=false,
>> > consistentId=gridCommandHandlerTest2, updateCntr=3,
>> partitionState=MOVING,
>> > state=MOVING]] .. and so on
>> >
>> > I found this erroneous and can lead to further cluster index corruption,
>> > for example in case when only command OK result checked.
>> >
>> > If no objections would be here, i plan to inform about moving states as
>> > not OK exit code too.
>> >
>> >
>>
>>
>>
>> --
>> Vladislav Pyatkov
>> Architect-Consultant "GridGain Rus" Llc.
>>  +7-929-537-79-60
>> 
 
 
 
 

Re: Discuss idle_verify with moving partitions changes.

2020-03-23 Thread Ivan Rakov
Zhenya,

As for me, the current behavior of idle_verify looks correct.
There's no sense in checking MOVING partitions (about which we explicitly
inform the user); however, checking consistency between the rest of the owners still
makes sense: they can still diverge, and we can become aware of the presence of
conflicts sooner.
In case the cluster is not idle (in terms of user activities, not in terms of
internal cluster processes like rebalancing), the utility will fail as expected.

On Mon, Mar 23, 2020 at 11:23 AM Vladislav Pyatkov 
wrote:

> Hi Zhenya,
>
> I see your point. Need to show some message, because cluster is not idle
> (rebalance is going).
> When cluster not idle we cannot validate partitions honestly. After several
> minutes we can to get absolutely different result, without any client's
> operation of cache happened.
>
> May be enough showing some message more clear for end user. For example:
> "Result has not valid, rebalance is going."
>
> Another thing you meaning - issue in indexes, when rebalance is following.
> I think idex_validate should fail in this case, because indexes always in
> load during rebalance.
>
>
> On Mon, Mar 23, 2020 at 10:20 AM Zhenya Stanilovsky
>  wrote:
>
> >
> > Igniters, i found that near idle check commands only shows partitions in
> > MOVING states as info in log and not take into account this fact as
> > erroneous idle cluster state.
> > control.sh --cache idle_verify, control.sh --cache validate_indexes
> > --check-crc
> >
> > for example command would show something like :
> >
> > Arguments: --cache idle_verify --yes
> >
> >
> 
> > idle_verify task was executed with the following args: caches=[],
> > excluded=[], cacheFilter=[DEFAULT]
> > idle_verify check has finished, no conflicts have been found.
> > Verification was skipped for 21 MOVING partitions:
> > Skipped partition: PartitionKeyV2 [grpId=1544803905, grpName=default,
> > partId=7]
> > Partition instances: [PartitionHashRecordV2 [isPrimary=false,
> > consistentId=gridCommandHandlerTest2, updateCntr=3,
> partitionState=MOVING,
> > state=MOVING]] .. and so on
> >
> > I found this erroneous and can lead to further cluster index corruption,
> > for example in case when only command OK result checked.
> >
> > If no objections would be here, i plan to inform about moving states as
> > not OK exit code too.
> >
> >
>
>
>
> --
> Vladislav Pyatkov
> Architect-Consultant "GridGain Rus" Llc.
> +7-929-537-79-60
>


Re: [REVIEW] IGNITE-12344 Remote listener of IgniteMessaging has to run with appropriate SecurityContext

2020-03-23 Thread Denis Garus
Hello, Alexei!

Thank you for the review!
I've fixed your comments; please take a look.



On Mon, Feb 17, 2020 at 12:53, Nikita Amelchev wrote:

> Denis,
>
> I have pre-reviewed your changes, LGTM.
>
> Could some security maintainer take a look?
>
> On Tue, Feb 11, 2020 at 20:37, Denis Garus wrote:
> >
> > Hi, Igniters!
> >
> >
> > I've prepared the PR[1] for the ticket[2], could somebody do a review?
> >
> >
> > Thanks.
> >
> >
> >
> >1. https://github.com/apache/ignite/pull/7338
> >2. https://issues.apache.org/jira/browse/IGNITE-12344
>
>
>
> --
> Best wishes,
> Amelchev Nikita
>


Re: Security Subject of thin client on remote nodes

2020-03-23 Thread Ivan Rakov
Alex, Denis,

Seems like security API is indeed a bit over-engineered.

Let's get rid of SecurityContext and use SecuritySubject instead.
> SecurityContext is just a POJO wrapper over
> SecuritySubject's
> org.apache.ignite.plugin.security.SecuritySubject#permissions.
> It's functionality can be easily moved to SecuritySubject.

I totally agree. Both subject and context are implemented by plugin
provider, and I don't see any reason to keep both abstractions, especially
if we are going to get rid of transferring subject in node attributes
(argument that subject is more lightweight won't work anymore).

Also, there's a bit of a mess in the node authentication logic. There are at least
two components responsible for it: DiscoverySpiNodeAuthenticator (which is
forcibly set by GridDiscoveryManager, but in fact public) and
GridSecurityProcessor (which performs actual node auth logic, but private).
I also don't understand why we need both
#authenticate(AuthenticationContext) and #authenticateNode(ClusterNode,
SecurityCredentials) methods while it's possible to set explicit
SecuritySubjectType.REMOTE_NODE in AuthenticationContext (this is arguable;
perhaps there are strong reasons).

Finally, areas of responsibility between IgniteSecurity and
GridSecurityProcessor are kind of mixed. As far as I understand, the first
is responsible for Ignite-internal management of security logic (keeping
thread-local context, caching security contexts, etc; we don't expect
IgniteSecurity to be replaced by plugin provider) and the latter is
responsible for user-custom authentication / authorization logic. To be
honest, it took plenty of time to figure this out for me.

From my point of view, we should make the GridSecurityProcessor interface
public, rename it (it requires plenty of time to find the difference from
IgniteSecurity), make its API as simple and non-duplicating as possible and
clarify its area of responsibility (e.g. should it be responsible for
propagation of successfully authenticated subject among all nodes or not?)
to make it easy to embed custom security logic in Ignite.

Regarding thin clients fix: implementation made by Denis suits better to
the very implicit contract that it's better to change API contracts of an
internal IgniteSecurity than of internal GridSecurityProcessor (which
actually mustn't be internal).

> My approach doesn't require any IEPs, just minor change in code and to
>
> org.apache.ignite.internal.processors.security.IgniteSecurity#authenticate(AuthenticationContext)
> contract.

Looks like a misuse of the #authenticate method to me. It should perform
initial authentication based on credentials (this may include queries to an
external authentication subsystem, e.g. LDAP). A user may not want to
authenticate a thin client on every node (this would increase the number of
requests to the auth subsystem unless the user implicitly implements propagation of
thin clients' contexts between nodes and makes #authenticate cluster-wide
idempotent: the first call should perform actual authentication, and subsequent calls
should retrieve the context of the already authenticated client). Presence of the
separate #securityContext(UUID) highlights that user indeed should care
about propagation of thin clients' contexts between the cluster nodes.

--
Ivan

On Fri, Mar 20, 2020 at 12:22 PM Veena Mithare 
wrote:

> Hi Alexei, Denis,
>
> One of the main usecases of thin client authentication is to be able to
> audit the changes done using the thin client user.
> To enable that :
> We really need to resolve this concern as well :
> https://issues.apache.org/jira/browse/IGNITE-12781
>
> ( Incorrect security subject id is  associated with a cache_put event
> when the originator of the event is a thin client. )
>
> Regards,
> Veena
>
>
> -Original Message-
> From: Alexei Scherbakov 
> Sent: 18 March 2020 08:11
> To: dev 
> Subject: Re: Security Subject of thin client on remote nodes
>
> Denis Garus,
>
> Both variants are capable of solving the thin client security context
> problem.
>
> My approach doesn't require any IEPs, just minor change in code and to
>
> org.apache.ignite.internal.processors.security.IgniteSecurity#authenticate(AuthenticationContext)
> contract.
> We can add appropriate documentation to emphasize this.
> The argument "fragile" is not very convincing for me.
>
> I think we should collect more opinions before proceeding with IEP.
>
> Considering a fact we actually *may not care* about compatibility (I've
> already explained why), I'm thinking of another approach.
> Let's get rid of SecurityContext and use SecuritySubject instead.
> SecurityContext is just a POJO wrapper over SecuritySubject's
> org.apache.ignite.plugin.security.SecuritySubject#permissions.
> It's functionality can be easily moved to SecuritySubject.
>
> What do you think?
>
>
>
> On Mon, Mar 16, 2020 at 15:47, Denis Garus wrote:
>
> >  Hello, Alexei!
> >
> > I agree with you if we may not care about compatibility at all, then
> > we can solve the problem much more straightforward 

Re: Discuss idle_verify with moving partitions changes.

2020-03-23 Thread Vladislav Pyatkov
Hi Zhenya,

I see your point. We need to show some message, because the cluster is not idle
(rebalancing is going on).
When the cluster is not idle, we cannot validate partitions honestly. After several
minutes we can get an absolutely different result, without any client cache
operations having happened.

Maybe it's enough to show a message that is clearer for the end user. For example:
"The result is not valid, rebalancing is in progress."

Another thing you mention is the issue with indexes when rebalancing follows.
I think validate_indexes should fail in this case, because indexes are always under
load during rebalancing.


On Mon, Mar 23, 2020 at 10:20 AM Zhenya Stanilovsky
 wrote:

>
> Igniters, i found that near idle check commands only shows partitions in
> MOVING states as info in log and not take into account this fact as
> erroneous idle cluster state.
> control.sh --cache idle_verify, control.sh --cache validate_indexes
> --check-crc
>
> for example command would show something like :
>
> Arguments: --cache idle_verify --yes
>
> 
> idle_verify task was executed with the following args: caches=[],
> excluded=[], cacheFilter=[DEFAULT]
> idle_verify check has finished, no conflicts have been found.
> Verification was skipped for 21 MOVING partitions:
> Skipped partition: PartitionKeyV2 [grpId=1544803905, grpName=default,
> partId=7]
> Partition instances: [PartitionHashRecordV2 [isPrimary=false,
> consistentId=gridCommandHandlerTest2, updateCntr=3, partitionState=MOVING,
> state=MOVING]] .. and so on
>
> I found this erroneous and can lead to further cluster index corruption,
> for example in case when only command OK result checked.
>
> If no objections would be here, i plan to inform about moving states as
> not OK exit code too.
>
>



-- 
Vladislav Pyatkov
Architect-Consultant "GridGain Rus" Llc.
+7-929-537-79-60


Discuss idle_verify with moving partitions changes.

2020-03-23 Thread Zhenya Stanilovsky

Igniters, I found that the idle check commands only show partitions in MOVING
state as info in the log and do not treat this fact as an erroneous idle
cluster state.
control.sh --cache idle_verify, control.sh --cache validate_indexes --check-crc
 
for example command would show something like :
 
Arguments: --cache idle_verify --yes 

idle_verify task was executed with the following args: caches=[], excluded=[], 
cacheFilter=[DEFAULT]
idle_verify check has finished, no conflicts have been found.
Verification was skipped for 21 MOVING partitions:
Skipped partition: PartitionKeyV2 [grpId=1544803905, grpName=default, partId=7]
Partition instances: [PartitionHashRecordV2 [isPrimary=false, 
consistentId=gridCommandHandlerTest2, updateCntr=3, partitionState=MOVING, 
state=MOVING]] .. and so on
 
I find this erroneous: it can lead to further cluster index corruption, for
example when only the command's OK result is checked.

If there are no objections, I plan to report MOVING states with a non-OK
exit code as well.
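
To illustrate why the exit code matters, a sketch of how scripted checks
typically consume the command (the script path is an assumption):

import java.io.IOException;

public class IdleVerifyExitCodeSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Automation usually looks only at the exit code, which is why reporting
        // skipped MOVING partitions as a non-OK result (as proposed above) matters.
        Process proc = new ProcessBuilder("bin/control.sh", "--cache", "idle_verify", "--yes")
            .inheritIO()
            .start();

        int exitCode = proc.waitFor();

        if (exitCode != 0)
            throw new IllegalStateException("idle_verify did not pass, exit code: " + exitCode);
    }
}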