Re: Persistence per memory policy configuration

2017-09-20 Thread Dmitriy Setrakyan
First of all, why not call it DataPolicy instead of MemoryPolicy? Secondly,
why not set data policies directly on IgniteConfiguration? And lastly, how
about we combine memory and disk properties in one bean with a clear naming
convention?

Here is an example. Note that all such properties must start with
"Memory" or "Disk".

IgniteConfiguration cfg = new IgniteConfiguration();

cfg.setDataPolicies(new DataPolicyConfiguration()
    .setName("bla")
    .setMemoryMaxSize(1024) // must be greater than 0, since memory always needs to be enabled.
    .setDiskMaxSize(0));    // if greater than 0, then persistence is enabled.
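
For illustration only, the combined bean implied by this naming convention could look roughly like the
sketch below. The class and property names are taken from the snippet above; this is not an existing API.

```
// Hypothetical sketch of the combined data policy bean; names are illustrative only.
public class DataPolicyConfiguration {
    private String name;
    private long memoryMaxSize; // "Memory*" properties describe the in-memory part; must be > 0.
    private long diskMaxSize;   // "Disk*" properties describe persistence; 0 means disabled.

    public DataPolicyConfiguration setName(String name) { this.name = name; return this; }
    public DataPolicyConfiguration setMemoryMaxSize(long size) { this.memoryMaxSize = size; return this; }
    public DataPolicyConfiguration setDiskMaxSize(long size) { this.diskMaxSize = size; return this; }
}
```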



I think this approach is much more concise and straightforward. What do
you think?

D.

On Wed, Sep 20, 2017 at 4:55 AM, Vladimir Ozerov 
wrote:

> I prefer the second. Composition over inheritance - this is how all our
> configuration is crafted.
>
> E.g. we do not have "CacheConfiguration" and "
> StoreEnabledCacheConfiguration".
> Instead, we have "CacheConfiguration.setCacheStoreFactory".
>
> On Wed, Sep 20, 2017 at 2:46 PM, Alexey Goncharuk <
> alexey.goncha...@gmail.com> wrote:
>
> > Reiterating this based on some feedback from PDS users.
> >
> > It might be confusing to configure persistence with "MemoryPolicy", so
> > another approach is to deprecate the old names and introduce a new name
> > "DataRegion" because it reflects the actual state when data is stored on
> > disk and partially in memory. I have two options in mind, each of them
> > looks acceptable to me, so I would like to have some feedback from the
> > community. Old configuration names will be deprecated (but still be taken
> > if used for backward compatibility). Note, that old names deprecation
> > handles default configuration compatibility very nicely - current PDS
> users
> > will not need to change anything to keep everything working. The two
> > options I mentioned are below:
> >
> >  * we have two separate classes for in-memory and persisted data regions,
> > so the configuration would look like so:
> >
> > IgniteConfiguration cfg = new IgniteConfiguration();
> >
> > cfg.setDataRegionsConfiguration(new DataRegionsConfiguration()
> > .setDataRegions(
> > new MemoryDataRegion()
> > .setName("volatileCaches")
> > .setMaxMemorySize(...),
> > new PersistentDataRegion()
> > .setName("persistentCaches")
> > .setMaxMemorySize(...)
> > .setMaxDiskSize()));
> >
> > cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());
> >
> >
> > * we have one class for data region configuration, but it will have a
> > sub-bean for persistence configuration:
> >
> > IgniteConfiguration cfg = new IgniteConfiguration();
> >
> > cfg.setDataRegionsConfiguration(new DataRegionsConfiguration()
> > .setDataRegions(
> > new DataRegion()
> > .setName("volatileCaches")
> > .setMaxMemorySize(...),
> > new DataRegion()
> > .setName("persistentCaches")
> > .setMaxMemorySize(...),
> > .setPersistenceConfiguration(
> > new DataRegionPersistenceConfiguration()
> > .setMaxDiskSize(...))));
> >
> > cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());
> >
>


[jira] [Created] (IGNITE-6462) @SpringResource block Service Node bootstrap when the node starts at the first time

2017-09-20 Thread redtank (JIRA)
redtank created IGNITE-6462:
---

 Summary: @SpringResource block Service Node bootstrap when the 
node starts at the first time
 Key: IGNITE-6462
 URL: https://issues.apache.org/jira/browse/IGNITE-6462
 Project: Ignite
  Issue Type: Bug
  Components: managed services, spring
Affects Versions: 2.1, 2.2
 Environment: OS: OSX 10.12
Java: 1.8.0_112-b16
Kotlin: 1.1.4-3
Reporter: redtank


@SpringResource blocks Service Node bootstrap and service deployment when the
node starts for the first time. After killing the service node and restarting,
the service is deployed successfully.

My steps are:

1. Start the data node

```
fun main(args: Array<String>) {
    SpringApplication(DataNodeApplication::class.java).apply {
        addInitializers(
            ApplicationContextInitializer {
                DataNodeApplication.beans().initialize(it)
            }
        )
    }.run(*args)
}

@SpringBootConfiguration
@EnableAutoConfiguration
class DataNodeApplication {

    companion object {
        fun beans() = beans {
            bean("igniteInstance") {
                // Ignite configuration with all defaults
                // and enabled p2p deployment and enabled events.
                val igniteConfig = IgniteConfiguration().apply {
                    isPeerClassLoadingEnabled = true

                    /*
                     Labeling Data Nodes with special attribute.
                     This attribute is checked by common.filters.DataNodeFilters
                     which decides where caches have to be deployed.
                     */
                    userAttributes = mutableMapOf("data.node" to true)

                    // Configuring caches that will be deployed on Data Nodes
                    setCacheConfiguration(
                        // Cache for QuoteRequest
                        CacheConfiguration().apply {
                            name = "QuoteRequest"
                            /*
                             Enabling a special nodes filter for the cache. The filter
                             will make sure that the cache will be deployed only on Data
                             Nodes, the nodes that have 'data.node' attribute in the local
                             node map.
                             */
                            nodeFilter = DataNodeFilter()
                        },
                        // Cache for Maintenance records
                        CacheConfiguration().apply {
                            name = "maintenance"
                            /*
                             Enabling a special nodes filter for the cache. The filter
                             will make sure that the cache will be deployed only on Data
                             Nodes, the nodes that have 'data.node' attribute in the local
                             node map.
                             */
                            nodeFilter = DataNodeFilter()

                            // Enabling our sample cache store for the Maintenance cache
                            setCacheStoreFactory(FactoryBuilder.factoryOf("common.cachestore.SimpleCacheStore"))

                            // Avoid Maintenance objects deserialization on data nodes side
                            // when they are passed to SampleCacheStore.
                            isStoreKeepBinary = true

                            // Enabling the write-through feature for the store.
                            isWriteThrough = true

                            // Enabling the read-through feature for the store.
                            isReadThrough = true

                            // Configuring SQL schema.
                            queryEntities = listOf(
                                QueryEntity().apply {
                                    // Setting indexed type's key class
                                    keyType = "java.lang.Integer"

                                    // Setting indexed type's value class
                                    valueType = "entity.Maintenance"

                                    // Defining fields that will be either indexed or queryable.
                                    // Indexed fields are added to 'indexes' list below.
                                    fields = linkedMapOf("vehicleId" to "java.lang.Integer")

                                    // Defining indexed fields.
                                    // Single field (aka. column) index
                                    indexes =
```

[jira] [Created] (IGNITE-6461) Web Console: sanitize user on save

2017-09-20 Thread Alexey Kuznetsov (JIRA)
Alexey Kuznetsov created IGNITE-6461:


 Summary: Web Console: sanitize user on save
 Key: IGNITE-6461
 URL: https://issues.apache.org/jira/browse/IGNITE-6461
 Project: Ignite
  Issue Type: Bug
  Components: wizards
Reporter: Alexey Kuznetsov
Assignee: Alexey Kuznetsov
 Fix For: 2.3








Re: Ignite Enhancement Proposal process

2017-09-20 Thread Nikita Ivanov
+1 on idea (long overdue) and +1 on using epics in JIRA for grouping IEPs.

--
Nikita Ivanov


On Wed, Sep 20, 2017 at 10:28 PM, Denis Magda  wrote:

> Vladimir,
>
> I support your initiative because it sorts things out and brings more
> transparency to Ignite community.
>
> The only moment that bothers me is how the tickets, falling into a
> specific IEP, are *grouped in JIRA*. Instead of labels I would advise us to
> use umbrella tickets or epics. I prefer epics more because they are the
> same umbrella tickets but with special visibility and tracking support from
> JIRA side.
>
> So if we consider epics as the way we group the relevant tickets in JIRA
> and keep the rest content in the IEP form on Wiki than it should help us
> benefit from both approaches.
>
> Thoughts?
>
> —
> Denis
>
> > On Sep 19, 2017, at 2:50 AM, Vladimir Ozerov 
> wrote:
> >
> > Igniters,
> >
> > I'd like to discuss an idea of adding "Enhancement Proposal" concept to
> our
> > process. Many other OSS vendors use it with great success ([1], [2], [3],
> > [4]), so I think we can also benefit from it.
> >
> > **Motivation**
> > Ignite project lacks transparency. We have a lot of thoughts and plans in
> > our heads. Some of them are materialized to tickets and discussions, some
> > don't. And yet there is no single place where one can understand major
> > features and challenges of the product for the nearest perspective. We do
> > not understand our own roadmap.
> >
> > Another problem is that our WIKI is full of trash - lots and lots of
> > outdated design documents and discussions.
> >
> > With Ignite Enhancement Proposal (IEP) process we can move all major
> > changes to a single place, thus increasing our understanding of the
> product
> > and community involvement.
> >
> > **Proposal**
> > 1) Create separate page on WIKI [5] where process flow will be defined
> > 2) Create sections for active and inactive/rejected proposals
> > 3) Every proposal should have separate page with the following fields:
> > - ID
> > - Summary
> > - Author
> > - Sponsor/shepherd/etc - committer or PMC member who will help author
> drive
> > the process
> > - Status (DRAFT, ACTIVE, COMPLETED, REJECTED)
> > - "Motivation" section
> > - "Description" section where actual design will reside
> > - "Risks and Assumptions" section
> > - Links to external resources, dev-list discussions and JIRA tickets
> > 4) Sponsor is responsible for keeping the page up to date
> > 5) Discussions should happen outside of WIKI - on the dev-list or inside
> > JIRA tickets
> > 6) Relevant JIRA tickets will be tracked with special labels, e.g.
> "iep-N"
> > [6]
> >
> > I created sample page for binary format improvements (still raw enough)
> [7].
> >
> > Please share your thoughts.
> >
> > Vladimir.
> >
> > [1] https://www.python.org/dev/peps/
> > [2]
> > https://hazelcast.atlassian.net/wiki/spaces/COM/pages/
> 27558010/Hazelcast+Enhancement+Proposals
> > [3] https://github.com/Kotlin/KEEP
> > [4] https://spark.apache.org/improvement-proposals.html
> > [5]
> > https://cwiki.apache.org/confluence/pages/viewpage.
> action?pageId=73638545
> > [6] https://issues.apache.org/jira/browse/IGNITE-6057
> > [7]
> > https://cwiki.apache.org/confluence/display/IGNITE/IEP-
> 1%3A+Bulk+data+loading+performance+improvements
>
>


Re: Binary compatibility of persistent storage

2017-09-20 Thread Dmitry Pavlov
Denis, the argument sounds convincing to me. When you use a 3rd party DB, you
rarely have to care about how to migrate DB data.

Igniters, I suggest keeping compatibility as long as we can, even across the
transition to a new major release. If incompatibility becomes unavoidable,
we will consider migration procedures.

Thu, 21 Sep 2017 at 0:21, Denis Magda :

> Either 3 or 4 looks as an appropriate approach for me. Ignite is no longer
> just an in-memory storage and we can not afford to force our users to
> migrate the data or configuration just because of the new cool feature in a
> new version. We should provide the same level of compatibility as RDBMS
> vendors do.
>
> —
> Denis
>
> > On Sep 19, 2017, at 4:16 AM, Vladimir Ozerov 
> wrote:
> >
> > igniters,
> >
> > Ignite doesn't have compatibility for binary protocols between different
> > versions, as this would make development harder and slower. On the other
> > hand we maintain API compatibility what helps us move users to new
> versions
> > faster.
> >
> > As native persistence is implemented, new challenge appeared - whether to
> > maintain binary compatibility of stored data. Many approaches exist:
> >
> > 1) No compatibility at all - easy for us, nightmare for users (IMO)
> > 2) No compatibility, but provide migration instruments
> > 3) Maintain compatibility between N latest minor versions
> > 4) Maintain compatibility between all versions within major release
> >
> > The more guarantees we offer, the harder they are to maintain, but the better the UX.
> >
> > Let's think on what compatibility mode we can offer to our users if any.
> > Any ideas?
> >
> > Vladimir.
>
>


Re: Binary compatibility of persistent storage

2017-09-20 Thread Denis Magda
Either 3 or 4 looks like an appropriate approach to me. Ignite is no longer just
an in-memory storage, and we cannot afford to force our users to migrate the
data or configuration just because of a new cool feature in a new version. We
should provide the same level of compatibility as RDBMS vendors do.

—
Denis

> On Sep 19, 2017, at 4:16 AM, Vladimir Ozerov  wrote:
> 
> igniters,
> 
> Ignite doesn't have compatibility for binary protocols between different
> versions, as this would make development harder and slower. On the other
> hand we maintain API compatibility what helps us move users to new versions
> faster.
> 
> As native persistence is implemented, new challenge appeared - whether to
> maintain binary compatibility of stored data. Many approaches exist:
> 
> 1) No compatibility at all - easy for us, nightmare for users (IMO)
> 2) No compatibility, but provide migration instruments
> 3) Maintain compatibility between N latest minor versions
> 4) Maintain compatibility between all versions within major release
> 
> The more guarantees we offer, the harder they are to maintain, but the better the UX.
> 
> Let's think on what compatibility mode we can offer to our users if any.
> Any ideas?
> 
> Vladimir.



[GitHub] ignite pull request #2708: IGNITE-3935 Test on deserialization from differen...

2017-09-20 Thread knovik
Github user knovik closed the pull request at:

https://github.com/apache/ignite/pull/2708


---


[GitHub] ignite pull request #2708: IGNITE-3935 Test on deserialization from differen...

2017-09-20 Thread knovik
GitHub user knovik opened a pull request:

https://github.com/apache/ignite/pull/2708

IGNITE-3935 Test on deserialization from different nodes

ClassloaderSwitchSelfTest.java implements a test case where class loaders
should be switched to the peer-class-loading loader, referring to the IGNITE-3935
issue.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/knovik/ignite IGNITE-3935

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2708.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2708


commit 107fff32693f18a73e9543bae9f496efd74cc6eb
Author: koctbik 
Date:   2017-09-19T15:00:38Z

IGNITE-3935 Testcase on class loader switching included in test suit 
CacheExamples

commit f7cd9116aab05c0e430ef5f5cd892b4a4b0f2a9e
Author: knovik 
Date:   2017-09-20T20:29:25Z

This closes pull req




---


[GitHub] ignite pull request #2707: IGNITE-6460 Wrong consistentId for lightweight Cl...

2017-09-20 Thread EdShangGG
GitHub user EdShangGG opened a pull request:

https://github.com/apache/ignite/pull/2707

IGNITE-6460 Wrong consistentId for lightweight ClusterNode instances



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6460

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2707.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2707


commit 7ece513e55e089ebc00de2d5675066f7eaf8af51
Author: Eduard Shangareev 
Date:   2017-09-20T20:36:55Z

IGNITE-6460 Wrong consistentId for lightweight ClusterNode instances




---


[jira] [Created] (IGNITE-6460) Wrong consistentId for lightweight ClusterNode instances

2017-09-20 Thread Eduard Shangareev (JIRA)
Eduard Shangareev created IGNITE-6460:
-

 Summary: Wrong consistentId for lightweight ClusterNode instances
 Key: IGNITE-6460
 URL: https://issues.apache.org/jira/browse/IGNITE-6460
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.1
Reporter: Eduard Shangareev
Assignee: Eduard Shangareev
 Fix For: 2.3


I have introduced a new constructor for TcpDiscoveryNode to create lightweight
instances to store them on disk, etc.
But to preserve consistentId we need to not only keep it in a field, but also add it to
the node attributes.





Re: Ignite Enhancement Proposal process

2017-09-20 Thread Denis Magda
Vladimir,

My only concern is the tickets aggregation in JIRA. The labels approach is neither
generic nor flexible: it requires me to use specific filters and some numeric
form for labels. However, if all the tickets listed in an IEP were grouped
under an umbrella ticket or epic, this would make everything crystal clear.

—
Denis
 

> On Sep 20, 2017, at 12:41 PM, Vladimir Ozerov  wrote:
> 
> Denis,
> 
> I do not very like that idea. If you look at other projects, you'll notice
> that in general they use either WIKIs (e.g. HZ), or JIRA (e.g. Spark), but
> not both. Only one reason - to avoid complication and keep the process as
> light as possible, which is important for community.
> 
> Choosing between WIKI and JIRA I prefer WIKI because it has rich editing
> inteface. JIRA is better in state transitions and reporting. But I do not
> think we will have so many initiatives. Normally there should be ~20-30
> active initiatives, I think, so WIKI benefits outweight JIRA for me.
> 
> Makes sense?
> 
> On Wed, Sep 20, 2017 at 10:28 PM, Denis Magda  wrote:
> 
>> Vladimir,
>> 
>> I support your initiative because it sorts things out and brings more
>> transparency to Ignite community.
>> 
>> The only moment that bothers me is how the tickets, falling into a
>> specific IEP, are *grouped in JIRA*. Instead of labels I would advise us to
>> use umbrella tickets or epics. I prefer epics more because they are the
>> same umbrella tickets but with special visibility and tracking support from
>> JIRA side.
>> 
>> So if we consider epics as the way we group the relevant tickets in JIRA
>> and keep the rest content in the IEP form on Wiki than it should help us
>> benefit from both approaches.
>> 
>> Thoughts?
>> 
>> —
>> Denis
>> 
>>> On Sep 19, 2017, at 2:50 AM, Vladimir Ozerov 
>> wrote:
>>> 
>>> Igniters,
>>> 
>>> I'd like to discuss an idea of adding "Enhancement Proposal" concept to
>> our
>>> process. Many other OSS vendors use it with great success ([1], [2], [3],
>>> [4]), so I think we can also benefit from it.
>>> 
>>> **Motivation**
>>> Ignite project lacks transparency. We have a lot of thoughts and plans in
>>> our heads. Some of them are materialized to tickets and discussions, some
>>> don't. And yet there is no single place where one can understand major
>>> features and challenges of the product for the nearest perspective. We do
>>> not understand our own roadmap.
>>> 
>>> Another problem is that our WIKI is full of trash - lots and lots of
>>> outdated design documents and discussions.
>>> 
>>> With Ignite Enhancement Proposal (IEP) process we can move all major
>>> changes to a single place, thus increasing our understanding of the
>> product
>>> and community involvement.
>>> 
>>> **Proposal**
>>> 1) Create separate page on WIKI [5] where process flow will be defined
>>> 2) Create sections for active and inactive/rejected proposals
>>> 3) Every proposal should have separate page with the following fields:
>>> - ID
>>> - Summary
>>> - Author
>>> - Sponsor/shepherd/etc - committer or PMC member who will help author
>> drive
>>> the process
>>> - Status (DRAFT, ACTIVE, COMPLETED, REJECTED)
>>> - "Motivation" section
>>> - "Description" section where actual design will reside
>>> - "Risks and Assumptions" section
>>> - Links to external resources, dev-list discussions and JIRA tickets
>>> 4) Sponsor is responsible for keeping the page up to date
>>> 5) Discussions should happen outside of WIKI - on the dev-list or inside
>>> JIRA tickets
>>> 6) Relevant JIRA tickets will be tracked with special labels, e.g.
>> "iep-N"
>>> [6]
>>> 
>>> I created sample page for binary format improvements (still raw enough)
>> [7].
>>> 
>>> Please share your thoughts.
>>> 
>>> Vladimir.
>>> 
>>> [1] https://www.python.org/dev/peps/
>>> [2]
>>> https://hazelcast.atlassian.net/wiki/spaces/COM/pages/
>> 27558010/Hazelcast+Enhancement+Proposals
>>> [3] https://github.com/Kotlin/KEEP
>>> [4] https://spark.apache.org/improvement-proposals.html
>>> [5]
>>> https://cwiki.apache.org/confluence/pages/viewpage.
>> action?pageId=73638545
>>> [6] https://issues.apache.org/jira/browse/IGNITE-6057
>>> [7]
>>> https://cwiki.apache.org/confluence/display/IGNITE/IEP-
>> 1%3A+Bulk+data+loading+performance+improvements
>> 
>> 



Re: Ignite Enhancement Proposal process

2017-09-20 Thread Vladimir Ozerov
Denis,

I do not really like that idea. If you look at other projects, you'll notice
that in general they use either WIKIs (e.g. HZ) or JIRA (e.g. Spark), but
not both. There is only one reason - to avoid complication and keep the process as
light as possible, which is important for the community.

Choosing between WIKI and JIRA, I prefer WIKI because it has a rich editing
interface. JIRA is better at state transitions and reporting. But I do not
think we will have so many initiatives. Normally there should be ~20-30
active initiatives, I think, so the WIKI benefits outweigh JIRA for me.

Makes sense?

On Wed, Sep 20, 2017 at 10:28 PM, Denis Magda  wrote:

> Vladimir,
>
> I support your initiative because it sorts things out and brings more
> transparency to Ignite community.
>
> The only moment that bothers me is how the tickets, falling into a
> specific IEP, are *grouped in JIRA*. Instead of labels I would advise us to
> use umbrella tickets or epics. I prefer epics more because they are the
> same umbrella tickets but with special visibility and tracking support from
> JIRA side.
>
> So if we consider epics as the way we group the relevant tickets in JIRA
> and keep the rest content in the IEP form on Wiki than it should help us
> benefit from both approaches.
>
> Thoughts?
>
> —
> Denis
>
> > On Sep 19, 2017, at 2:50 AM, Vladimir Ozerov 
> wrote:
> >
> > Igniters,
> >
> > I'd like to discuss an idea of adding "Enhancement Proposal" concept to
> our
> > process. Many other OSS vendors use it with great success ([1], [2], [3],
> > [4]), so I think we can also benefit from it.
> >
> > **Motivation**
> > Ignite project lacks transparency. We have a lot of thoughts and plans in
> > our heads. Some of them are materialized to tickets and discussions, some
> > don't. And yet there is no single place where one can understand major
> > features and challenges of the product for the nearest perspective. We do
> > not understand our own roadmap.
> >
> > Another problem is that our WIKI is full of trash - lots and lots of
> > outdated design documents and discussions.
> >
> > With Ignite Enhancement Proposal (IEP) process we can move all major
> > changes to a single place, thus increasing our understanding of the
> product
> > and community involvement.
> >
> > **Proposal**
> > 1) Create separate page on WIKI [5] where process flow will be defined
> > 2) Create sections for active and inactive/rejected proposals
> > 3) Every proposal should have separate page with the following fields:
> > - ID
> > - Summary
> > - Author
> > - Sponsor/shepherd/etc - committer or PMC member who will help author
> drive
> > the process
> > - Status (DRAFT, ACTIVE, COMPLETED, REJECTED)
> > - "Motivation" section
> > - "Description" section where actual design will reside
> > - "Risks and Assumptions" section
> > - Links to external resources, dev-list discussions and JIRA tickets
> > 4) Sponsor is responsible for keeping the page up to date
> > 5) Discussions should happen outside of WIKI - on the dev-list or inside
> > JIRA tickets
> > 6) Relevant JIRA tickets will be tracked with special labels, e.g.
> "iep-N"
> > [6]
> >
> > I created sample page for binary format improvements (still raw enough)
> [7].
> >
> > Please share your thoughts.
> >
> > Vladimir.
> >
> > [1] https://www.python.org/dev/peps/
> > [2]
> > https://hazelcast.atlassian.net/wiki/spaces/COM/pages/
> 27558010/Hazelcast+Enhancement+Proposals
> > [3] https://github.com/Kotlin/KEEP
> > [4] https://spark.apache.org/improvement-proposals.html
> > [5]
> > https://cwiki.apache.org/confluence/pages/viewpage.
> action?pageId=73638545
> > [6] https://issues.apache.org/jira/browse/IGNITE-6057
> > [7]
> > https://cwiki.apache.org/confluence/display/IGNITE/IEP-
> 1%3A+Bulk+data+loading+performance+improvements
>
>


Re: Disabling ODBC/thin JDBC/thin .NET connectivity

2017-09-20 Thread Dmitriy Setrakyan
On Wed, Sep 20, 2017 at 6:37 AM, Igor Sapego  wrote:

> For example, some users may want to disable clients they are not
> using due to security considerations.
>

Well, there should be some authentication command in the protocol, which
will ask a client to log in. Ignite should also provide a connection
callback of some sort, which can return false to reject the connection.
This way users will be able to implement their own authentication mechanism
in the callback and stop unwanted clients from connecting.
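
For illustration, such a callback could look roughly like the sketch below. The interface name,
method signature and registration point are purely hypothetical - nothing like this exists in
Ignite yet.

```
// Hypothetical connection callback; all names here are illustrative only.
public interface ClientConnectionCallback {
    /**
     * Invoked when a thin JDBC/ODBC/.NET client attempts to connect.
     * Return false to reject the connection.
     */
    boolean onClientConnect(String user, String password, java.net.InetSocketAddress remoteAddr);
}

// Possible registration point (also hypothetical):
// clientConnectorCfg.setConnectionCallback((user, pwd, addr) -> "app".equals(user));
```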


Re: Ignite Enhancement Proposal process

2017-09-20 Thread Denis Magda
Vladimir,

I support your initiative because it sorts things out and brings more 
transparency to Ignite community.

The only thing that bothers me is how the tickets falling into a specific
IEP are *grouped in JIRA*. Instead of labels I would advise us to use umbrella
tickets or epics. I prefer epics because they are the same umbrella
tickets but with special visibility and tracking support on the JIRA side.

So if we consider epics as the way we group the relevant tickets in JIRA and
keep the rest of the content in the IEP form on Wiki, then it should help us benefit
from both approaches.

Thoughts?

—
Denis 

> On Sep 19, 2017, at 2:50 AM, Vladimir Ozerov  wrote:
> 
> Igniters,
> 
> I'd like to discuss an idea of adding "Enhancement Proposal" concept to our
> process. Many other OSS vendors use it with great success ([1], [2], [3],
> [4]), so I think we can also benefit from it.
> 
> **Motivation**
> Ignite project lacks transparency. We have a lot of thoughts and plans in
> our heads. Some of them are materialized to tickets and discussions, some
> don't. And yet there is no single place where one can understand major
> features and challenges of the product for the nearest perspective. We do
> not understand our own roadmap.
> 
> Another problem is that our WIKI is full of trash - lots and lots of
> outdated design documents and discussions.
> 
> With Ignite Enhancement Proposal (IEP) process we can move all major
> changes to a single place, thus increasing our understanding of the product
> and community involvement.
> 
> **Proposal**
> 1) Create separate page on WIKI [5] where process flow will be defined
> 2) Create sections for active and inactive/rejected proposals
> 3) Every proposal should have separate page with the following fields:
> - ID
> - Summary
> - Author
> - Sponsor/shepherd/etc - committer or PMC member who will help author drive
> the process
> - Status (DRAFT, ACTIVE, COMPLETED, REJECTED)
> - "Motivation" section
> - "Description" section where actual design will reside
> - "Risks and Assumptions" section
> - Links to external resources, dev-list discussions and JIRA tickets
> 4) Sponsor is responsible for keeping the page up to date
> 5) Discussions should happen outside of WIKI - on the dev-list or inside
> JIRA tickets
> 6) Relevant JIRA tickets will be tracked with special labels, e.g. "iep-N"
> [6]
> 
> I created sample page for binary format improvements (still raw enough) [7].
> 
> Please share your thoughts.
> 
> Vladimir.
> 
> [1] https://www.python.org/dev/peps/
> [2]
> https://hazelcast.atlassian.net/wiki/spaces/COM/pages/27558010/Hazelcast+Enhancement+Proposals
> [3] https://github.com/Kotlin/KEEP
> [4] https://spark.apache.org/improvement-proposals.html
> [5]
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=73638545
> [6] https://issues.apache.org/jira/browse/IGNITE-6057
> [7]
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-1%3A+Bulk+data+loading+performance+improvements



Re: JCache time-based metrics

2017-09-20 Thread Andrey Gura
Alexey,

implicit transactions can still cause problems in metrics interpretation:

- lock acquisition is still needed;
- metrics that are taken into account for implicit transactions, combined with
missing metrics for explicit transactions on the same cache, can still
confuse users.

On Wed, Sep 20, 2017 at 6:02 PM, Alexey Goncharuk
 wrote:
> Agree with Andrey here. Probably, we can keep existing JCache metrics for
> implicit transactions, but we should design a whole new interface to handle
> transaction metrics, including various transaction stages, such as lock
> acquisition, prepare and commit phases. In my opinion, this is the only way
> to have unambiguous metrics in the product.
>
> 2017-09-20 17:58 GMT+03:00 Andrey Gura :
>
>> Slava and Igniters,
>>
>> Do you have any suggestions about described problem?
>>
>> From my point of view JCache API metrics specification has a number of
>> statements that can't be uniquely treated by Ignite user because of
>> implementations details. Most obvious problems here are related with
>> transactions that are distributed and have some properties that affect
>> their behaviour. Pessimistic transactions during entry modification
>> will enlist entry into transaction (read operation), acquire lock (it
>> can take some time if another transaction is already lock owner) and
>> eventually write modified entry into the cache. Where is modification
>> operation start? What if after cache.put() and before commit user
>> performs some time consuming operation? For optimistic transactions
>> still there is lock waiting time that affects metrics.
>>
>> So JCache API metrics are useless for transactions and could confuse
>> user. It seems that we shouldn't support JCache metrics at all in case
>> of transactional operations. May be we can offer another metrics that
>> could be more useful.
>>
>> Thoughts?
>>
>> On Tue, Sep 19, 2017 at 8:19 PM, Вячеслав Коптилин
>>  wrote:
>> > I'd want to make a new attempt to discuss JCache metrics, especially
>> > time-based metrics.
>> > As discussed earlier (
>> > http://apache-ignite-developers.2346864.n4.nabble.
>> com/Cache-Metrics-tt19699.html
>> > ),
>> > there are two approaches (and at first glance, the second one is
>> > preferable):
>> > #1 Node that starts some operation is responsible for updating the
>> > cache metrics.
>> > #2 Primary node (node that actually executes a request)
>> >
>> > I have one question/concern about time-based metrics for transactional
>> > caches.
>> > The JCache specification does not have definition like a transaction,
>> > partitioned cache etc,
>> > and, therefore, cannot provide convenience and clear understanding about
>> > the average time for that case.
>> >
>> > Let's assume, for example, we have the following code:
>> >
>> > try (Transaction tx =
>> > transactions.txStart(TransactionConcurrency.OPTIMISTIC,
>> > TransactionIsolation.READ_COMMITTED)) {
>> > value = cache.get(key);
>> > // calculate new value for the given key
>> > ...
>> > cache.put(key, new_value); // (1)
>> >
>> > // some additional operations
>> >
>> > // another update
>> > value = cache.get(key);// (2)
>> > ...
>> > cache.put(key, new_value);
>> >
>> > tx.commit();   // (3)
>> > }
>> >
>> > What is the average time for write operation? Is it a time needed for
>> > enlisting an entry in the transaction scope, acquiring locks, committing
>> > changes?
>> > For that particular case, current implementation accumulates both timings
>> > (1)&(2) on the local node during performing put operation, but 'write'
>> > counter is updated only once on data node during commit phase.
>> > I think this behavior is not obvious at least.
>> >
>> > Thanks,
>> > Slava.
>>


Re: Binary compatibility of persistent storage

2017-09-20 Thread Vyacheslav Daradur
Igniters,

The compatibility testing framework [1] will be completed soon.

We will be able to cover all the cases suggested by Vladimir, for example:

>> 2) No compatibility, but provide migration instruments
It will be possible to test migration tools and scenarios provided to the
end users.
Some usual steps:
* start(2.2), putAll(data), stop(2.2)  - working with previous versions
of Ignite cluster;
* startMigration(...) - migration using provided tools;
* start(curVer), checkAll(data) - data validation;

>> 3) Maintain compatibility between N latest minor versions
Every change triggers a compatibility test update, so the latest version
will be compatible with the previous format. It will be possible to perform
a cascade update (e.g. 2.2 -> 2.6 -> 2.9):
* changes in 2.3 -> add unit tests to check compatibility with 2.2;
* changes in 2.7 -> rewrite previous unit tests to check compatibility
with 2.6;
* changes in 2.9 -> rewrite previous unit tests to check compatibility
with 2.8;
* etc.;
* major release -> delete all compatibility tests, for example, for the
version 3.0;

>> 4) Maintain compatibility between all versions within major release
Every change triggers compatibility test creation, so the latest version
will be compatible with every previous one within a major release:
* changes in 2.3 -> add compatibility unit tests with 2.2;
* changes in 2.7 -> add compatibility unit tests with 2.6;
* changes in 2.9 -> add compatibility unit tests with 2.8;
* etc.;
* major release -> delete all compatibility tests, for example, for the
version 3.0;

[1] https://issues.apache.org/jira/browse/IGNITE-5732 - Provide API to test
compatibility with old releases
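
To make the workflow above concrete, a persistence compatibility test under such a framework
might look roughly like the sketch below. The base class and the versioned startGrid signature
are assumptions based on the IGNITE-5732 description, not a confirmed API.

```
// Hypothetical sketch of a persistence compatibility test; API names are assumptions.
public class PdsCompatibilityTest extends IgniteCompatibilityAbstractTest {
    /** Writes data with an old version, then reads it back with the current one. */
    public void testDataSurvivesUpgrade() throws Exception {
        // 1. Start a 2.2 node with persistence enabled and load the data set.
        startGrid("old-node", "2.2",
            cfg -> cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration()));
        // ... activate the cluster, cache.putAll(data), stop the old node ...

        // 2. Start the current version over the same persistence files.
        Ignite current = startGrid(0);
        current.active(true);

        // 3. Validate that every previously written entry is still readable.
        // ... assertEquals(expected, cache.get(key)) for all keys ...
    }
}
```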


On Tue, Sep 19, 2017 at 8:06 PM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:

> In my view, there are two different scenarios.
>
> First - user just upgrades the version (to get some bug fix, for example),
> but does not intend to change anything in their project and/or use any new
> features. In this case it should work transparently and cluster must be
> able to work with older format. Ideally, we should detect this
> automatically, but if it's not possible we can introduce some kind of
> 'compatibility mode' enabled by a system property.
>
> Second - user upgrades to get new features that require data format change.
> In this case, I think it's OK to suggest using a conversion tool. Or
> probably we can apply it automatically on node startup?
>
> -Val
>
> On Tue, Sep 19, 2017 at 6:38 AM, Yakov Zhdanov 
> wrote:
>
> > >Any major change in data/index page format. E.g. this could happen once
> > transactional SQL is ready.
> >
> > I would suggest we automatically disable this feature for databases
> created
> > with older versions.
> >
> > --Yakov
> >
>



-- 
Best Regards, Vyacheslav D.


Re: JCache time-based metrics

2017-09-20 Thread Alexey Goncharuk
Agree with Andrey here. Probably, we can keep existing JCache metrics for
implicit transactions, but we should design a whole new interface to handle
transaction metrics, including various transaction stages, such as lock
acquisition, prepare and commit phases. In my opinion, this is the only way
to have unambiguous metrics in the product.
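
For illustration, such a per-stage interface might look roughly like the sketch below; the name
and methods are assumptions put up for discussion, not an existing Ignite API.

```
// Hypothetical per-stage transaction metrics; names are illustrative only.
public interface TxStageMetrics {
    long transactionsCommitted();
    long transactionsRolledBack();
    float averageLockAcquireTime(); // avg time spent waiting for entry locks, ms
    float averagePrepareTime();     // avg duration of the prepare phase, ms
    float averageCommitTime();      // avg duration of the commit phase, ms
}
```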

2017-09-20 17:58 GMT+03:00 Andrey Gura :

> Slava and Igniters,
>
> Do you have any suggestions about described problem?
>
> From my point of view JCache API metrics specification has a number of
> statements that can't be uniquely treated by Ignite user because of
> implementations details. Most obvious problems here are related with
> transactions that are distributed and have some properties that affect
> their behaviour. Pessimistic transactions during entry modification
> will enlist entry into transaction (read operation), acquire lock (it
> can take some time if another transaction is already lock owner) and
> eventually write modified entry into the cache. Where is modification
> operation start? What if after cache.put() and before commit user
> performs some time consuming operation? For optimistic transactions
> still there is lock waiting time that affects metrics.
>
> So JCache API metrics are useless for transactions and could confuse
> user. It seems that we shouldn't support JCache metrics at all in case
> of transactional operations. May be we can offer another metrics that
> could be more useful.
>
> Thoughts?
>
> On Tue, Sep 19, 2017 at 8:19 PM, Вячеслав Коптилин
>  wrote:
> > I'd want to make a new attempt to discuss JCache metrics, especially
> > time-based metrics.
> > As discussed earlier (
> > http://apache-ignite-developers.2346864.n4.nabble.
> com/Cache-Metrics-tt19699.html
> > ),
> > there are two approaches (and at first glance, the second one is
> > preferable):
> > #1 Node that starts some operation is responsible for updating the
> > cache metrics.
> > #2 Primary node (node that actually executes a request)
> >
> > I have one question/concern about time-based metrics for transactional
> > caches.
> > The JCache specification does not have definition like a transaction,
> > partitioned cache etc,
> > and, therefore, cannot provide convenience and clear understanding about
> > the average time for that case.
> >
> > Let's assume, for example, we have the following code:
> >
> > try (Transaction tx =
> > transactions.txStart(TransactionConcurrency.OPTIMISTIC,
> > TransactionIsolation.READ_COMMITTED)) {
> > value = cache.get(key);
> > // calculate new value for the given key
> > ...
> > cache.put(key, new_value); // (1)
> >
> > // some additional operations
> >
> > // another update
> > value = cache.get(key);// (2)
> > ...
> > cache.put(key, new_value);
> >
> > tx.commit();   // (3)
> > }
> >
> > What is the average time for write operation? Is it a time needed for
> > enlisting an entry in the transaction scope, acquiring locks, committing
> > changes?
> > For that particular case, current implementation accumulates both timings
> > (1)&(2) on the local node during performing put operation, but 'write'
> > counter is updated only once on data node during commit phase.
> > I think this behavior is not obvious at least.
> >
> > Thanks,
> > Slava.
>


Re: JCache time-based metrics

2017-09-20 Thread Andrey Gura
Slava and Igniters,

Do you have any suggestions about described problem?

From my point of view, the JCache API metrics specification has a number of
statements that can't be interpreted unambiguously by an Ignite user because of
implementation details. The most obvious problems here are related to
transactions, which are distributed and have some properties that affect
their behaviour. Pessimistic transactions during entry modification
will enlist the entry into the transaction (a read operation), acquire a lock (which
can take some time if another transaction already owns the lock) and
eventually write the modified entry into the cache. Where does the modification
operation start? What if, after cache.put() and before commit, the user
performs some time-consuming operation? For optimistic transactions
there is still lock waiting time that affects metrics.

So JCache API metrics are useless for transactions and could confuse
users. It seems that we shouldn't support JCache metrics at all in the case
of transactional operations. Maybe we can offer other metrics that
could be more useful.

Thoughts?

On Tue, Sep 19, 2017 at 8:19 PM, Вячеслав Коптилин
 wrote:
> I'd want to make a new attempt to discuss JCache metrics, especially
> time-based metrics.
> As discussed earlier (
> http://apache-ignite-developers.2346864.n4.nabble.com/Cache-Metrics-tt19699.html
> ),
> there are two approaches (and at first glance, the second one is
> preferable):
> #1 Node that starts some operation is responsible for updating the
> cache metrics.
> #2 Primary node (node that actually executes a request)
>
> I have one question/concern about time-based metrics for transactional
> caches.
> The JCache specification does not have definition like a transaction,
> partitioned cache etc,
> and, therefore, cannot provide convenience and clear understanding about
> the average time for that case.
>
> Let's assume, for example, we have the following code:
>
> try (Transaction tx =
> transactions.txStart(TransactionConcurrency.OPTIMISTIC,
> TransactionIsolation.READ_COMMITTED)) {
> value = cache.get(key);
> // calculate new value for the given key
> ...
> cache.put(key, new_value); // (1)
>
> // some additional operations
>
> // another update
> value = cache.get(key);// (2)
> ...
> cache.put(key, new_value);
>
> tx.commit();   // (3)
> }
>
> What is the average time for write operation? Is it a time needed for
> enlisting an entry in the transaction scope, acquiring locks, committing
> changes?
> For that particular case, current implementation accumulates both timings
> (1)&(2) on the local node during performing put operation, but 'write'
> counter is updated only once on data node during commit phase.
> I think this behavior is not obvious at least.
>
> Thanks,
> Slava.


[GitHub] ignite pull request #2706: Ignite 2.1.4 p3

2017-09-20 Thread mcherkasov
GitHub user mcherkasov opened a pull request:

https://github.com/apache/ignite/pull/2706

Ignite 2.1.4 p3



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-2.1.4-p3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2706.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2706


commit 84c7427a53a8e1712b1d0b763d7539c9cb844cb6
Author: Sergey Stronchinskiy 
Date:   2017-07-04T11:51:25Z

IGNITE-5532 .NET: Split CacheLinqTest into partial classes

This closes #2226

commit 64c156e9252395504af00f09d934f36b6bc21913
Author: Igor Sapego 
Date:   2017-07-04T16:42:33Z

IGNITE-5663: ODBC: Closing cursor do not reset prepared statement anymore

commit 80c95ff79f344daf1fca3f094733a24bac2a218d
Author: Igor Sapego 
Date:   2017-07-05T15:41:28Z

IGNITE-5576: Added Compute::Run() for C++

commit 836906c89dfb880ac602046c37b3a2dba3ebdc46
Author: samaitra 
Date:   2017-07-06T04:28:15Z

IGNITE-5695 FlinkIgniteSinkSelfTest is failing due to conflicting default 
test timeout and default flush frequency - Fixes #2241.

Signed-off-by: samaitra 

commit 651ffc544bbc32cded55626adcd3ed4cc74f11ce
Author: shroman 
Date:   2017-07-06T05:00:08Z

Removed unnecessary line from the comments.

commit d1d6802378d874b039f775fe787f78c507661bb2
Author: devozerov 
Date:   2017-07-07T09:36:13Z

Merge branch 'ignite-2.1'

commit 45cd87fe73db117f5148ed2006f8de8d2517bbfe
Author: mcherkasov 
Date:   2017-06-30T17:23:55Z

IGNITE-5554 ServiceProcessor may process failed reassignments in timeout 
thread

commit fa974286e8f066a8d6aa57519edf5ec7761be095
Author: Igor Sapego 
Date:   2017-07-07T13:49:15Z

IGNITE-5582: Implemented Compute::Broadcast for C++

commit 01f504ff83cc77f80d37981b5c5a15b653861bbd
Author: NSAmelchev 
Date:   2017-07-10T12:03:01Z

IGNITE-5087 Enum comparison fails after marshal-unmarshal with 
BinaryMarshaller.

commit ecfbc2c97464ad7da3f24afaaf1868a2d2fdb87e
Author: devozerov 
Date:   2017-07-11T09:17:41Z

Merge branch 'ignite-2.1'

# Conflicts:
#   
modules/platforms/dotnet/Apache.Ignite.Core.Tests/Apache.Ignite.Core.Tests.csproj

commit 1be9b40c37efcbf332ebeeefc865c2fe864339e7
Author: sboikov 
Date:   2017-07-11T09:42:54Z

Exchange future cleanup, added special tasks for reassign/force rebalance.

commit 8d4a0c2ca2abc17a1d54fa0d33b161531fa59b12
Author: Pavel Tupitsyn 
Date:   2017-07-11T09:49:16Z

Merge branch 'ignite-2.1'

commit bf25b5c52be044f07076c0800447230c75174db3
Author: Slava Koptilin 
Date:   2017-07-07T12:35:33Z

ignite-5562: assert statements were changed to the 'if' blocks

commit e93b28488693381fcd232de93087ab8ec1d0f5bb
Author: sboikov 
Date:   2017-07-11T11:18:52Z

ignite-5446 Only lateAffinity logic in CacheAffinitySharedManager.

commit 5c363184c80f2fd8b79f1075d1eacbf9af5369a1
Author: Denis Magda 
Date:   2017-07-11T19:20:16Z

Simplified Memory Policies Example

commit b95f76f8a0a3a7e920f78f20b3d814112fc6d522
Author: sboikov 
Date:   2017-07-12T05:47:04Z

ignite-5727 Call TcpCommunicationSpi's discovery listener first

commit 120384fca2b5f6f141207697f776d7859afa857f
Author: devozerov 
Date:   2017-07-12T06:48:51Z

Merge branch 'ignite-2.1'

commit 5394bbdeff4e9fb97d3b413bf30001ede580bdd7
Author: sboikov 
Date:   2017-07-13T10:30:59Z

Unnecessary synchronous rollback in GridDhtTxLocal.prepareAsync

commit 00c6b6c4ba00fa6577f74fc95b378737fb5a789c
Author: Alexander Menshikov 
Date:   2017-07-13T12:24:59Z

IGNITE-5567 Make benchmark Ignite.reentrantLock vs IgniteCache.lock

commit 18bdfe96a1e579371108c661e3374183c58a296d
Author: Alexey Goncharuk 
Date:   2017-07-13T12:42:30Z

Fixed NPE in tests

commit 7338445ac9c1a2343fd41cdd20785de07b727796
Author: dkarachentsev 
Date:   2017-07-13T13:00:08Z

IGNITE-5597 - Fix javadoc in Affinity and AffinityFunction for REPLICATED 
cache. This closes #2268.

commit d9ed07c67e4a4ff3a9de543cbe039ac2a48f03a0
Author: Sergey Chugunov 
Date:   2017-07-13T14:32:06Z

Functionality of muted test is debated now

commit 871d9260f3b32bed5273852dbdb74c758f73d383
Author: Sergey Chugunov 
Date:   2017-07-13T15:34:01Z

Functionality of GridVersionSelfTest is debated 

[jira] [Created] (IGNITE-6459) Implement metrics to monitor mvcc coordinator performance

2017-09-20 Thread Semen Boikov (JIRA)
Semen Boikov created IGNITE-6459:


 Summary: Implement metrics to monitor mvcc coordinator performance
 Key: IGNITE-6459
 URL: https://issues.apache.org/jira/browse/IGNITE-6459
 Project: Ignite
  Issue Type: Sub-task
Reporter: Semen Boikov


Need to provide some public metrics which allow monitoring of mvcc coordinator
performance.





[jira] [Created] (IGNITE-6458) Implement possibility to manually change mvcc coordinator

2017-09-20 Thread Semen Boikov (JIRA)
Semen Boikov created IGNITE-6458:


 Summary: Implement possibility to manually change mvcc coordinator
 Key: IGNITE-6458
 URL: https://issues.apache.org/jira/browse/IGNITE-6458
 Project: Ignite
  Issue Type: Sub-task
  Components: cache
Reporter: Semen Boikov


Need to provide some ability to manually switch the mvcc coordinator, probably via
an MBean.





[GitHub] ignite pull request #2698: IGNITE-6250 .NET: Thin client: Basic exception ha...

2017-09-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/2698


---


[GitHub] ignite pull request #2705: IGNITE-584: proper datastructures setDataMap fill...

2017-09-20 Thread zstan
GitHub user zstan opened a pull request:

https://github.com/apache/ignite/pull/2705

IGNITE-584: proper datastructures setDataMap fill while new node append



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-584

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2705.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2705


commit eec3df4cfd80a95197006b40a728d9b83e668ce8
Author: Evgeny Stanilovskiy 
Date:   2017-09-20T13:48:20Z

IGNITE-584: proper datastructures setDataMap fill while new node append




---


[jira] [Created] (IGNITE-6457) Incorrect exception when used schema name in lower case

2017-09-20 Thread Ilya Suntsov (JIRA)
Ilya Suntsov created IGNITE-6457:


 Summary: Incorrect exception when used schema name in lower case 
 Key: IGNITE-6457
 URL: https://issues.apache.org/jira/browse/IGNITE-6457
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.1
 Environment: Cache configuration:
{noformat}











{noformat}
Reporter: Ilya Suntsov
 Fix For: 2.3


Scenario:
1. Start 1 node
2. connect to node via sqlline (https://github.com/julianhyde/sqlline)
{noformat} ./sqlline -d org.apache.ignite.IgniteJdbcThinDriver --color=true 
--verbose=true --showWarnings=true --showNestedErrs=true -u 
jdbc:ignite:thin://127.0.0.1:10800/test{noformat}
3. Create table:
{noformat}CREATE TABLE city1 (id LONG PRIMARY KEY, name VARCHAR);{noformat}
Result:
{noformat}
[16:35:27,506][SEVERE][client-connector-#40%null%][JdbcRequestHandler] Failed to execute SQL query [reqId=0, req=JdbcQueryExecuteRequest [schemaName=test, pageSize=1024, maxRows=0, sqlQry=CREATE TABLE city1 (id LONG PRIMARY KEY, name VARCHAR), args=[], stmtType=ANY_STATEMENT_TYPE]]
class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to set schema for DB connection for thread [schema=test]
    at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.connectionForThread(IgniteH2Indexing.java:439)
    at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.connectionForSchema(IgniteH2Indexing.java:356)
    at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFields(IgniteH2Indexing.java:1287)
    at org.apache.ignite.internal.processors.query.GridQueryProcessor$6.applyx(GridQueryProcessor.java:1918)
    at org.apache.ignite.internal.processors.query.GridQueryProcessor$6.applyx(GridQueryProcessor.java:1914)
    at org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
    at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2396)
    at org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFieldsNoCache(GridQueryProcessor.java:1922)
    at org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.executeQuery(JdbcRequestHandler.java:286)
    at org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.handle(JdbcRequestHandler.java:149)
    at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:141)
    at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:40)
    at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
    at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
    at org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
    at org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.h2.jdbc.JdbcSQLException: Schema "test" not found; SQL statement:
SET SCHEMA "test" [90079-195]
    at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
    at org.h2.message.DbException.get(DbException.java:179)
    at org.h2.message.DbException.get(DbException.java:155)
    at org.h2.engine.Database.getSchema(Database.java:1755)
    at org.h2.command.dml.Set.update(Set.java:408)
    at org.h2.command.CommandContainer.update(CommandContainer.java:101)
    at org.h2.command.Command.executeUpdate(Command.java:260)
    at org.h2.jdbc.JdbcStatement.executeUpdateInternal(JdbcStatement.java:137)
    at org.h2.jdbc.JdbcStatement.executeUpdate(JdbcStatement.java:122)
    at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.connectionForThread(IgniteH2Indexing.java:431)
    ... 19 more
{noformat}

So we get an incorrect exception.

The correct one appears if the following JDBC URL is used:
{{jdbc:ignite:thin://127.0.0.1:10800/TEST}}









Re: Disabling ODBC/thin JDBC/thin .NET connectivity

2017-09-20 Thread Igor Sapego
For example, some users may want to disable clients they are not
using due to security considerations.

Best Regards,
Igor

On Wed, Sep 20, 2017 at 4:13 PM, Dmitriy Setrakyan 
wrote:

> Why do we need the ability to disable individual clients?
>
> On Wed, Sep 20, 2017 at 5:26 AM, Igor Sapego  wrote:
>
> > I've filed a ticket for that: [1]
> >
> > [1] - https://issues.apache.org/jira/browse/IGNITE-6456
> >
> > Best Regards,
> > Igor
> >
> > On Wed, Sep 20, 2017 at 2:33 PM, Vladimir Ozerov 
> > wrote:
> >
> > > Agree. Do we have a ticket for this?
> > >
> > > On Wed, Sep 20, 2017 at 1:27 PM, Pavel Tupitsyn 
> > > wrote:
> > >
> > > > Yes, I think it would make sense to add enableJdbc, enableOdbc,
> > > > enableThinClients
> > > > properties to ClientConnectorConfiguration (which replaces
> > > > SqlConnectorConfiguration).
> > > >
> > > > This way users will also have better understanding of the
> > > > ClientConnectorConfiguration purpose.
> > > >
> > > > Pavel
> > > >
> > > > On Wed, Sep 20, 2017 at 1:12 PM, Igor Sapego 
> > wrote:
> > > >
> > > > > Hi, Igniters,
> > > > >
> > > > > In current approach, ODBC, thin JDBC and thin .NET client all
> connect
> > > > > to the grid using ClientListenerProcessor, which listen on a single
> > > port.
> > > > >
> > > > > The problem is that there is currently no way to disable only one
> > > client.
> > > > > For example, currently you can't disallow thin JDBC driver
> > connectivity
> > > > > alone, you can only disable the whole ClientListenerProcessor,
> which
> > is
> > > > > going to disable ODBC and thin .NET clients as well.
> > > > >
> > > > > I believe, we should add options to disable/enable every single
> > client,
> > > > > supported by the ClientListenerProcessor separately. Maybe we
> should
> > > > > add such options to the SqlConnectorConfiguration.
> > > > >
> > > > > What do you guys think?
> > > > >
> > > > > Best Regards,
> > > > > Igor
> > > > >
> > > >
> > >
> >
>


Re: Disabling ODBC/thin JDBC/thin .NET connectivity

2017-09-20 Thread Dmitriy Setrakyan
Why do we need the ability to disable individual clients?

On Wed, Sep 20, 2017 at 5:26 AM, Igor Sapego  wrote:

> I've filed a ticket for that: [1]
>
> [1] - https://issues.apache.org/jira/browse/IGNITE-6456
>
> Best Regards,
> Igor
>
> On Wed, Sep 20, 2017 at 2:33 PM, Vladimir Ozerov 
> wrote:
>
> > Agree. Do we have a ticket for this?
> >
> > On Wed, Sep 20, 2017 at 1:27 PM, Pavel Tupitsyn 
> > wrote:
> >
> > > Yes, I think it would make sense to add enableJdbc, enableOdbc,
> > > enableThinClients
> > > properties to ClientConnectorConfiguration (which replaces
> > > SqlConnectorConfiguration).
> > >
> > > This way users will also have better understanding of the
> > > ClientConnectorConfiguration purpose.
> > >
> > > Pavel
> > >
> > > On Wed, Sep 20, 2017 at 1:12 PM, Igor Sapego 
> wrote:
> > >
> > > > Hi, Igniters,
> > > >
> > > > In current approach, ODBC, thin JDBC and thin .NET client all connect
> > > > to the grid using ClientListenerProcessor, which listen on a single
> > port.
> > > >
> > > > The problem is that there is currently no way to disable only one
> > client.
> > > > For example, currently you can't disallow thin JDBC driver
> connectivity
> > > > alone, you can only disable the whole ClientListenerProcessor, which
> is
> > > > going to disable ODBC and thin .NET clients as well.
> > > >
> > > > I believe, we should add options to disable/enable every single
> client,
> > > > supported by the ClientListenerProcessor separately. Maybe we should
> > > > add such options to the SqlConnectorConfiguration.
> > > >
> > > > What do you guys think?
> > > >
> > > > Best Regards,
> > > > Igor
> > > >
> > >
> >
>


[GitHub] ignite pull request #2704: IGNITE-6454: suite timeout replace to failure

2017-09-20 Thread dspavlov
GitHub user dspavlov opened a pull request:

https://github.com/apache/ignite/pull/2704

IGNITE-6454: suite timeout replace to failure

Interrupted flag check was added

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite 
ignite-6454-interrupt-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2704.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2704


commit cfe3240a7933894dc3e048d954de524a9061128e
Author: dpavlov 
Date:   2017-09-20T13:07:11Z

IGNITE-6454: suite timeout replace to failure: interrupted flag check was 
added




---


[GitHub] ignite pull request #2703: PR to add branch to TC

2017-09-20 Thread alexzaitzev
GitHub user alexzaitzev opened a pull request:

https://github.com/apache/ignite/pull/2703

PR to add branch to TC

Change was made to create PR for TC to run tests on version Apache Ignite 
2.1

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alexzaitzev/ignite ignite-2.1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2703.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2703


commit 352c6c103d50c3938a7de7401d257cc404bd44b1
Author: alexzaitzev 
Date:   2017-09-20T13:05:09Z

Change was made to create PR for TC to run tests on version Apache Ignite 
2.1




---


[jira] [Created] (IGNITE-6456) Add flags to ClientConnectorConfiguration which enable/disable different clients

2017-09-20 Thread Igor Sapego (JIRA)
Igor Sapego created IGNITE-6456:
---

 Summary: Add flags to ClientConnectorConfiguration which 
enable/disable different clients
 Key: IGNITE-6456
 URL: https://issues.apache.org/jira/browse/IGNITE-6456
 Project: Ignite
  Issue Type: Improvement
  Components: jdbc, odbc, thin client
Affects Versions: 2.1
Reporter: Igor Sapego
 Fix For: 2.3


There is currently no way to disable only one client. For example, you can't 
disallow thin JDBC driver connectivity alone; you can only disable the whole 
{{ClientListenerProcessor}}, which disables ODBC and thin .NET clients as well.

We should add options to disable/enable every single client supported by the 
{{ClientListenerProcessor}} separately. For example, we can add the following 
flags to the {{SqlConnectorConfiguration}}:
{{enableJdbc}}
{{enableOdbc}}
{{enableThinClients}}
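
A rough sketch of how the proposed flags might be used (illustrative only: the chained setters below are assumptions derived from the flag names above, not existing API):

{noformat}
IgniteConfiguration cfg = new IgniteConfiguration();

cfg.setSqlConnectorConfiguration(new SqlConnectorConfiguration()
    .setEnableJdbc(false)         // hypothetical: disallow thin JDBC connections
    .setEnableOdbc(true)          // hypothetical: keep ODBC enabled
    .setEnableThinClients(true)); // hypothetical: keep the thin .NET client enabled
{noformat}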




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Disabling ODBC/thin JDBC/thin .NET connectivity

2017-09-20 Thread Igor Sapego
I've filed a ticket for that: [1]

[1] - https://issues.apache.org/jira/browse/IGNITE-6456

Best Regards,
Igor

On Wed, Sep 20, 2017 at 2:33 PM, Vladimir Ozerov 
wrote:

> Agree. Do we have a ticket for this?
>
> On Wed, Sep 20, 2017 at 1:27 PM, Pavel Tupitsyn 
> wrote:
>
> > Yes, I think it would make sense to add enableJdbc, enableOdbc,
> > enableThinClients
> > properties to ClientConnectorConfiguration (which replaces
> > SqlConnectorConfiguration).
> >
> > This way users will also have better understanding of the
> > ClientConnectorConfiguration purpose.
> >
> > Pavel
> >
> > On Wed, Sep 20, 2017 at 1:12 PM, Igor Sapego  wrote:
> >
> > > Hi, Igniters,
> > >
> > > In current approach, ODBC, thin JDBC and thin .NET client all connect
> > > to the grid using ClientListenerProcessor, which listen on a single
> port.
> > >
> > > The problem is that there is currently no way to disable only one
> client.
> > > For example, currently you can't disallow thin JDBC driver connectivity
> > > alone, you can only disable the whole ClientListenerProcessor, which is
> > > going to disable ODBC and thin .NET clients as well.
> > >
> > > I believe, we should add options to disable/enable every single client,
> > > supported by the ClientListenerProcessor separately. Maybe we should
> > > add such options to the SqlConnectorConfiguration.
> > >
> > > What do you guys think?
> > >
> > > Best Regards,
> > > Igor
> > >
> >
>


[jira] [Created] (IGNITE-6455) Add flags to ClientConnectorConfiguration which enable/disable different clients

2017-09-20 Thread Igor Sapego (JIRA)
Igor Sapego created IGNITE-6455:
---

 Summary: Add flags to ClientConnectorConfiguration which 
enable/disable different clients
 Key: IGNITE-6455
 URL: https://issues.apache.org/jira/browse/IGNITE-6455
 Project: Ignite
  Issue Type: Improvement
  Components: jdbc, odbc, thin client
Affects Versions: 2.1
Reporter: Igor Sapego
 Fix For: 2.3


There is currently no way to disable only one client. For example, you can't 
disallow thin JDBC driver connectivity alone; you can only disable the whole 
{{ClientListenerProcessor}}, which disables ODBC and thin .NET clients as well.

We should add options to disable/enable every single client supported by the 
{{ClientListenerProcessor}} separately. For example, we can add the following 
flags to the {{SqlConnectorConfiguration}}:
{{enableJdbc}}
{{enableOdbc}}
{{enableThinClients}}




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] ignite pull request #2702: IGNITE-6448: clear query cache on ALTER TABLE ADD...

2017-09-20 Thread tledkov-gridgain
GitHub user tledkov-gridgain opened a pull request:

https://github.com/apache/ignite/pull/2702

IGNITE-6448: clear query cache on ALTER TABLE ADD 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6448

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2702.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2702


commit 7720f091237acba3f43e456c25f4c73d52bb626e
Author: tledkov-gridgain 
Date:   2017-09-20T10:07:46Z

IGNITE-6448: add test

commit 652fa795095ca195a500b5aaef341eb9509c081f
Author: Alexander Paschenko 
Date:   2017-09-20T11:49:47Z

Metadata update fix

commit 26a1ecb98201263ce97f4c88535150938683129f
Author: tledkov-gridgain 
Date:   2017-09-20T12:17:21Z

IGNITE-6448: clear cached queries on add columns to table

commit 47de93df1b7b4e9b7e1953a9ce3f1aa77abf909e
Author: tledkov-gridgain 
Date:   2017-09-20T12:18:22Z

IGNITE-6448: add test to suite




---


Re: Persistence per memory policy configuration

2017-09-20 Thread Vladimir Ozerov
I prefer the second option. Composition over inheritance - this is how all of our
configuration is crafted.

E.g., we do not have "CacheConfiguration" and "StoreEnabledCacheConfiguration".
Instead, we have "CacheConfiguration.setCacheStoreFactory".
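
For illustration, a minimal sketch of that composition style with the existing cache API (MyCacheStore is a hypothetical CacheStore implementation):

CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");

// Store behavior is composed into the cache configuration rather than expressed by a subclass.
ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyCacheStore.class));
ccfg.setReadThrough(true);
ccfg.setWriteThrough(true);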

On Wed, Sep 20, 2017 at 2:46 PM, Alexey Goncharuk <
alexey.goncha...@gmail.com> wrote:

> Reiterating this based on some feedback from PDS users.
>
> It might be confusing to configure persistence with "MemoryPolicy", so
> another approach is to deprecate the old names and introduce a new name
> "DataRegion" because it reflects the actual state when data is stored on
> disk and partially in memory. I have two options in mind, each of them
> looks acceptable to me, so I would like to have some feedback from the
> community. Old configuration names will be deprecated (but still be taken
> if used for backward compatibility). Note, that old names deprecation
> handles default configuration compatibility very nicely - current PDS users
> will not need to change anything to keep everything working. The two
> options I mentioned are below:
>
>  * we have two separate classes for in-memory and persisted data regions,
> so the configuration would look like so:
>
> IgniteConfiguration cfg = new IgniteConfiguration();
>
> cfg.setDataRegionsConfiguration(new DataRegionsConfiguration()
> .setDataRegions(
> new MemoryDataRegion()
> .setName("volatileCaches")
> .setMaxMemorySize(...),
> new PersistentDataRegion()
> .setName("persistentCaches")
> .setMaxMemorySize(...)
> .setMaxDiskSize()));
>
> cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());
>
>
> * we have one class for data region configuration, but it will have a
> sub-bean for persistence configuration:
>
> IgniteConfiguration cfg = new IgniteConfiguration();
>
> cfg.setDataRegionsConfiguration(new DataRegionsConfiguration()
> .setDataRegions(
> new DataRegion()
> .setName("volatileCaches")
> .setMaxMemorySize(...),
> new DataRegion()
> .setName("persistentCaches")
> .setMaxMemorySize(...),
> .setPersistenceConfiguration(
> new DataRegionPersistenceConfiguration()
> .setMaxDiskSize(...;
>
> cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());
>


Re: Persistence per memory policy configuration

2017-09-20 Thread Alexey Goncharuk
Reiterating this based on some feedback from PDS users.

It might be confusing to configure persistence with "MemoryPolicy", so
another approach is to deprecate the old names and introduce a new name,
"DataRegion", because it better reflects the actual state: data is stored on
disk and only partially in memory. I have two options in mind, and each of
them looks acceptable to me, so I would like to have some feedback from the
community. The old configuration names will be deprecated (but still honored
if used, for backward compatibility). Note that deprecating the old names
handles default configuration compatibility very nicely - current PDS users
will not need to change anything to keep everything working. The two
options I mentioned are below:

 * we have two separate classes for in-memory and persisted data regions,
so the configuration would look like this:

IgniteConfiguration cfg = new IgniteConfiguration();

cfg.setDataRegionsConfiguration(new DataRegionsConfiguration()
    .setDataRegions(
        new MemoryDataRegion()
            .setName("volatileCaches")
            .setMaxMemorySize(...),
        new PersistentDataRegion()
            .setName("persistentCaches")
            .setMaxMemorySize(...)
            .setMaxDiskSize(...)));

cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());


* we have one class for data region configuration, but it will have a
sub-bean for persistence configuration:

IgniteConfiguration cfg = new IgniteConfiguration();

cfg.setDataRegionsConfiguration(new DataRegionsConfiguration()
    .setDataRegions(
        new DataRegion()
            .setName("volatileCaches")
            .setMaxMemorySize(...),
        new DataRegion()
            .setName("persistentCaches")
            .setMaxMemorySize(...)
            .setPersistenceConfiguration(
                new DataRegionPersistenceConfiguration()
                    .setMaxDiskSize(...))));

cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());


Re: Disabling ODBC/thin JDBC/thin .NET connectivity

2017-09-20 Thread Vladimir Ozerov
Agree. Do we have a ticket for this?

On Wed, Sep 20, 2017 at 1:27 PM, Pavel Tupitsyn 
wrote:

> Yes, I think it would make sense to add enableJdbc, enableOdbc,
> enableThinClients
> properties to ClientConnectorConfiguration (which replaces
> SqlConnectorConfiguration).
>
> This way users will also have better understanding of the
> ClientConnectorConfiguration purpose.
>
> Pavel
>
> On Wed, Sep 20, 2017 at 1:12 PM, Igor Sapego  wrote:
>
> > Hi, Igniters,
> >
> > In current approach, ODBC, thin JDBC and thin .NET client all connect
> > to the grid using ClientListenerProcessor, which listen on a single port.
> >
> > The problem is that there is currently no way to disable only one client.
> > For example, currently you can't disallow thin JDBC driver connectivity
> > alone, you can only disable the whole ClientListenerProcessor, which is
> > going to disable ODBC and thin .NET clients as well.
> >
> > I believe, we should add options to disable/enable every single client,
> > supported by the ClientListenerProcessor separately. Maybe we should
> > add such options to the SqlConnectorConfiguration.
> >
> > What do you guys think?
> >
> > Best Regards,
> > Igor
> >
>


[jira] [Created] (IGNITE-6454) Data structure suite timeout

2017-09-20 Thread Dmitriy Pavlov (JIRA)
Dmitriy Pavlov created IGNITE-6454:
--

 Summary: Data structure suite timeout
 Key: IGNITE-6454
 URL: https://issues.apache.org/jira/browse/IGNITE-6454
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.1
Reporter: Dmitriy Pavlov
Priority: Critical
 Fix For: 2.3


https://ci.ignite.apache.org/viewType.html?buildTypeId=Ignite20Tests_IgniteDataStrucutures=%3Cdefault%3E=buildTypeStatusDiv

Most often the timeout is caused by the following tests:

{noformat}
] :  [Step 4/5] Thread 
[name="test-runner-#35143%replicated.GridCacheReplicatedDataStructuresFailoverSelfTest%",
 id=38586, state=RUNNABLE, blockCnt=0, waitCnt=60]
[20:34:26] : [Step 4/5] at 
java.lang.Throwable.fillInStackTrace(Native Method)
[20:34:26] : [Step 4/5] at 
java.lang.Throwable.fillInStackTrace(Throwable.java:783)
[20:34:26] : [Step 4/5] - locked o.a.i.IgniteException@754033e
[20:34:26] : [Step 4/5] at 
java.lang.Throwable.(Throwable.java:265)
[20:34:26] : [Step 4/5] at 
java.lang.Exception.(Exception.java:66)
[20:34:26] : [Step 4/5] at 
java.lang.RuntimeException.(RuntimeException.java:62)
[20:34:26] : [Step 4/5] at 
o.a.i.IgniteException.(IgniteException.java:44)
[20:34:26] : [Step 4/5] at 
o.a.i.i.processors.datastructures.GridCacheLockImpl$Sync.validate(GridCacheLockImpl.java:275)
[20:34:26] : [Step 4/5] at 
o.a.i.i.processors.datastructures.GridCacheLockImpl$Sync.access$1000(GridCacheLockImpl.java:122)
[20:34:26] : [Step 4/5] at 
o.a.i.i.processors.datastructures.GridCacheLockImpl.lock(GridCacheLockImpl.java:1200)
[20:34:26] : [Step 4/5] at 
o.a.i.i.processors.cache.datastructures.GridCacheAbstractDataStructuresFailoverSelfTest.doTestReentrantLock(GridCacheAbstractDataStructuresFailoverSelfTest.java:785)
[20:34:26] : [Step 4/5] at 
o.a.i.i.processors.cache.datastructures.GridCacheAbstractDataStructuresFailoverSelfTest.testFairReentrantLockConstantMultipleTopologyChangeNonFailoverSafe(GridCacheAbstractDataStructuresFailoverSelfTest.java:739)
[20:34:26] : [Step 4/5] at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[20:34:26] : [Step 4/5] at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
[20:34:26] : [Step 4/5] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[20:34:26] : [Step 4/5] at 
java.lang.reflect.Method.invoke(Method.java:606)
[20:34:26] : [Step 4/5] at 
junit.framework.TestCase.runTest(TestCase.java:176)
[20:34:26] : [Step 4/5] at 
o.a.i.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2000)
[20:34:26] : [Step 4/5] at 
o.a.i.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:132)
[20:34:26] : [Step 4/5] at 
o.a.i.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1915)
[20:34:26] : [Step 4/5] at java.lang.Thread.run(Thread.java:745)
[20:34:26] : [Step 4/5] 
{noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Disabling ODBC/thin JDBC/thin .NET connectivity

2017-09-20 Thread Pavel Tupitsyn
Yes, I think it would make sense to add enableJdbc, enableOdbc,
enableThinClients
properties to ClientConnectorConfiguration (which replaces
SqlConnectorConfiguration).

This way users will also have a better understanding of the purpose of
ClientConnectorConfiguration.

Pavel

On Wed, Sep 20, 2017 at 1:12 PM, Igor Sapego  wrote:

> Hi, Igniters,
>
> In current approach, ODBC, thin JDBC and thin .NET client all connect
> to the grid using ClientListenerProcessor, which listen on a single port.
>
> The problem is that there is currently no way to disable only one client.
> For example, currently you can't disallow thin JDBC driver connectivity
> alone, you can only disable the whole ClientListenerProcessor, which is
> going to disable ODBC and thin .NET clients as well.
>
> I believe, we should add options to disable/enable every single client,
> supported by the ClientListenerProcessor separately. Maybe we should
> add such options to the SqlConnectorConfiguration.
>
> What do you guys think?
>
> Best Regards,
> Igor
>


Disabling ODBC/thin JDBC/thin .NET connectivity

2017-09-20 Thread Igor Sapego
Hi, Igniters,

In the current approach, the ODBC, thin JDBC and thin .NET clients all connect
to the grid through the ClientListenerProcessor, which listens on a single port.

The problem is that there is currently no way to disable only one client.
For example, you can't disallow thin JDBC driver connectivity
alone; you can only disable the whole ClientListenerProcessor, which
disables ODBC and thin .NET clients as well.

I believe we should add options to disable/enable every single client
supported by the ClientListenerProcessor separately. Maybe we should
add such options to the SqlConnectorConfiguration.
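
For context, a minimal sketch of the all-or-nothing choice we have today (as far as I can tell, with the current API the connector is either configured for all thin protocols or switched off entirely):

IgniteConfiguration cfg = new IgniteConfiguration();

// Either the listener is up and serves ODBC, thin JDBC and thin .NET together...
cfg.setSqlConnectorConfiguration(new SqlConnectorConfiguration().setPort(10800));

// ...or it is disabled as a whole, taking all three down at once.
// cfg.setSqlConnectorConfiguration(null);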

What do you guys think?

Best Regards,
Igor


[jira] [Created] (IGNITE-6453) .NET: Thin client: cache operations

2017-09-20 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-6453:
--

 Summary: .NET: Thin client: cache operations
 Key: IGNITE-6453
 URL: https://issues.apache.org/jira/browse/IGNITE-6453
 Project: Ignite
  Issue Type: Improvement
  Components: platforms, thin client
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 2.3


Add simple cache operations, like {{ContainsKey}} and {{GetSize}}. Skip 
everything complex for now, like {{Invoke}}, which requires a user-defined 
processor.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6452) Invocation of getAll() through cache proxy during cache restart can throw unexpected CacheException

2017-09-20 Thread Ivan Rakov (JIRA)
Ivan Rakov created IGNITE-6452:
--

 Summary: Invocation of getAll() through cache proxy during cache 
restart can throw unexpected CacheException
 Key: IGNITE-6452
 URL: https://issues.apache.org/jira/browse/IGNITE-6452
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.1
Reporter: Ivan Rakov
Assignee: Ivan Rakov
 Fix For: 2.3


Instead of the expected IgniteCacheRestartingException, the load test sometimes 
shows the following exception:
{noformat}
javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: 
Failed to find message handler for message: GridNearGetRequest 
[futId=6fc73459e51-84b93e3c-47e1-433c-8a91-0700f131c617, 
miniId=27d73459e51-84b93e3c-47e1-433c-8a91-0700f131c617, ver=null, keyMap=null, 
flags=1, topVer=AffinityTopologyVersion [topVer=4, minorTopVer=32], 
subjId=080177d4-b78e-4f6f-a386-77be8830, taskNameHash=0, createTtl=-1, 
accessTtl=-1]

at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1285)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1648)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.getAll(IgniteCacheProxyImpl.java:873)
at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.getAll(GatewayProtectedCacheProxy.java:718)
at 
org.gridgain.grid.internal.processors.cache.database.IgniteDbSnapshotSelfTest$15.apply(IgniteDbSnapshotSelfTest.java:1911)
at 
org.gridgain.grid.internal.processors.cache.database.IgniteDbSnapshotSelfTest$15.apply(IgniteDbSnapshotSelfTest.java:1904)
at 
org.gridgain.grid.internal.processors.cache.database.IgniteDbSnapshotSelfTest.testReuseCacheProxyAfterRestore(IgniteDbSnapshotSelfTest.java:1796)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at junit.framework.TestCase.runTest(TestCase.java:176)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2000)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:132)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1915)
at java.lang.Thread.run(Thread.java:745)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6450) Kotlin: Inherited platform declarations clash

2017-09-20 Thread redtank (JIRA)
redtank created IGNITE-6450:
---

 Summary: Kotlin: Inherited platform declarations clash
 Key: IGNITE-6450
 URL: https://issues.apache.org/jira/browse/IGNITE-6450
 Project: Ignite
  Issue Type: Bug
  Components: cache, spring
Affects Versions: 2.1, 2.2
 Environment: OS: macOS 10.12
Java: 1.8.0_112-b16
Kotlin: 1.1.4-3

org.apache.ignite:ignite-spring-data:2.2.0
org.springframework.data:spring-data-commons:1.13.1.RELEASE -> 2.0.0.RC3
Reporter: redtank


I am trying Spring Data with Ignite. My repository interface is below:

```
@RepositoryConfig(cacheName = "QuoteRequest")
interface QuoteRequestRepository : IgniteRepository
```

The code works with spring-data-commons:1.13.1.RELEASE, but it doesn't work with 
2.0.0.RC3. The error message is below:
 
```
Error:(9, 11) Kotlin: Inherited platform declarations clash: The following 
declarations have the same JVM signature (deleteAll(Ljava/lang/Iterable;)V):
fun deleteAll(p0: (Mutable)Iterable!): Unit defined in 
repository.QuoteRequestRepository
fun deleteAll(p0: (Mutable)Iterable!): Unit defined in 
repository.QuoteRequestRepository
```



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6451) AssertionError: null in GridCacheIoManager.onSend on stop

2017-09-20 Thread Alexander Belyak (JIRA)
Alexander Belyak created IGNITE-6451:


 Summary: AssertionError: null in GridCacheIoManager.onSend on stop
 Key: IGNITE-6451
 URL: https://issues.apache.org/jira/browse/IGNITE-6451
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 1.8
Reporter: Alexander Belyak
Priority: Minor


If we stop a node while sending a message (after GridCacheIoManager.onSend has 
checked whether the grid is stopping), we get an AssertionError, for example:
{noformat}
java.lang.AssertionError: null
at 
org.apache.ignite.internal.processors.cache.GridCacheMessage.marshalCollection(GridCacheMessage.java:481)
 ~[ignite-core-1.10.3.ea15-SNAPSHOT.jar:2.0.0-SNAPSHOT]
at 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryResponse.prepareMarshal(GridCacheQueryResponse.java:134)
 ~[ignite-core-1.10.3.ea15-SNAPSHOT.jar:2.0.0-SNAPSHOT]
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onSend(GridCacheIoManager.java:917)
 [ignite-core-1.10.3.ea15-SNAPSHOT.jar:2.0.0-SNAPSHOT]
{noformat}
I think we need a more reliable approach to stopping the grid: ideally, we must stop all 
activity as the first step of the stop procedure and go to the next step only after that. Or 
we can just add many checks in the code, e.g. after each 
cctx = ctx.getCacheContext(cacheId) 
do 
if (cctx == null && ...kernalContext().isStopping())
 return false; // <= handle a concurrent stop here and cancel the operation correctly
I think it's important, because no one can trust a database with assertion errors in its logs!
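
A minimal sketch of the suggested guard (names follow the snippet above and are not verified against the actual code):

{noformat}
GridCacheContext cctx = ctx.getCacheContext(cacheId);

if (cctx == null && ctx.kernalContext().isStopping())
    return false; // The node is stopping concurrently - cancel the operation instead of hitting an assertion.
{noformat}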



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6449) Visor CMD: Show missed cache properties

2017-09-20 Thread Vasiliy Sisko (JIRA)
Vasiliy Sisko created IGNITE-6449:
-

 Summary: Visor CMD: Show missed cache properties
 Key: IGNITE-6449
 URL: https://issues.apache.org/jira/browse/IGNITE-6449
 Project: Ignite
  Issue Type: Bug
  Components: wizards
Affects Versions: 2.1
Reporter: Vasiliy Sisko
Assignee: Vasiliy Sisko
 Fix For: 2.3






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6448) Select * doesn't return new field name after concurrent ALTER TABLE

2017-09-20 Thread Ilya Suntsov (JIRA)
Ilya Suntsov created IGNITE-6448:


 Summary: Select * doesn't return new field name after concurrent 
ALTER TABLE 
 Key: IGNITE-6448
 URL: https://issues.apache.org/jira/browse/IGNITE-6448
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.1
Reporter: Ilya Suntsov
Priority: Blocker
 Fix For: 2.3


Steps to reproduce:
1. Start 3 nodes
2. Execute 
{noformat}CREATE TABLE person (id LONG, name VARCHAR, city_id LONG, PRIMARY KEY 
(id, city_id)) {noformat}
to create table Person
3. Connect to grid via sqlline (https://github.com/julianhyde/sqlline)
{noformat}./sqlline -d org.apache.ignite.IgniteJdbcThinDriver --color=true 
--verbose=true --showWarnings=true --showNestedErrs=true -u 
jdbc:ignite:thin://127.0.0.1/{noformat}
4. Create one more connection {noformat}!connect jdbc:ignite:thin://127.0.0.1/ 
{noformat}
5. Execute ALTER TABLE for both connections {noformat} !all alter table person 
add field1 varchar;{noformat}
Result:
1. Got an exception on the coordinator:
{noformat}[10:59:15,805][SEVERE][client-connector-#55%null%][JdbcRequestHandler]
 Failed to execute SQL query [reqId=0, req=JdbcQueryExecuteRequest 
[schemaName=PUBLIC, pageSize=1024, maxRows=0, sqlQry=alter table person add 
field1 varchar, args=[], stmtType=ANY_STATEMENT_TYPE]]
class org.apache.ignite.internal.processors.query.IgniteSQLException: Column 
already exists: FIELD1
at 
org.apache.ignite.internal.processors.query.h2.ddl.DdlStatementsProcessor.convert(DdlStatementsProcessor.java:329)
at 
org.apache.ignite.internal.processors.query.h2.ddl.DdlStatementsProcessor.runDdlStatement(DdlStatementsProcessor.java:273)
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFields(IgniteH2Indexing.java:1383)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor$6.applyx(GridQueryProcessor.java:1918)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor$6.applyx(GridQueryProcessor.java:1914)
at 
org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2396)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFieldsNoCache(GridQueryProcessor.java:1922)
at 
org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.executeQuery(JdbcRequestHandler.java:286)
at 
org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.handle(JdbcRequestHandler.java:149)
at 
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:141)
at 
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:40)
at 
org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
at 
org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at 
org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at 
org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}
2. When I try to get all data from Person:
{noformat}select * from person;{noformat}
I get the table without the new field, but if I select only this field from the 
table, it works.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: MVCC configuration

2017-09-20 Thread Semyon Boikov
Yes, we'll add this validation.

On Wed, Sep 20, 2017 at 10:09 AM, Dmitriy Setrakyan 
wrote:

> On Tue, Sep 19, 2017 at 11:31 PM, Semyon Boikov 
> wrote:
>
> > > Can caches within the same group have different MVCC configuration?
> >
> > Do you think we really need have in the same group caches with different
> > mvcc configuration? for simplicity I would do not allow this.
> >
>
> I agree, let's not allow it. In that case, are you going to have extra
> validation on startup that caches in the same group must have identical
> MVCC config?
>


[GitHub] ignite pull request #2701: IGNITE-6316

2017-09-20 Thread alexpaschenko
GitHub user alexpaschenko opened a pull request:

https://github.com/apache/ignite/pull/2701

IGNITE-6316



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6316

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2701.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2701


commit fa0e0c89c2a40d702607e4421682ea4f954e572c
Author: Alexander Paschenko 
Date:   2017-09-18T16:22:24Z

IGNITE-6316 Added test

commit 02f03c7c7540a7ff5efecc5461e2bf3b014cfa69
Author: Alexander Paschenko 
Date:   2017-09-19T12:16:23Z

IgnitePersistentStoreSchemaLoadTest fix

commit 9da99e696085d337d2ddfb9e4e60b3d4ed73bd60
Author: Alexander Paschenko 
Date:   2017-09-19T20:03:04Z

IGNITE-6316 StoredCacheData persistence fix




---


Re: MVCC configuration

2017-09-20 Thread Dmitriy Setrakyan
On Tue, Sep 19, 2017 at 11:31 PM, Semyon Boikov 
wrote:

> > Can caches within the same group have different MVCC configuration?
>
> Do you think we really need have in the same group caches with different
> mvcc configuration? for simplicity I would do not allow this.
>

I agree, let's not allow it. In that case, are you going to have extra
validation on startup that caches in the same group must have identical
MVCC config?


Re: MVCC configuration

2017-09-20 Thread Semyon Boikov
> Can caches within the same group have different MVCC configuration?

I think it is possible to implement, but there are some issues:
- for MVCC we need to store the MVCC version in the hash index item (for now it
is 16 bytes); since index items have a fixed size, storing data for caches with
MVCC disabled in this index would carry an unnecessary 16-byte overhead
- for MVCC caches we need to create the correct hash index in advance, so if a
group was created with MVCC disabled, it is not possible to add an MVCC-enabled
cache to this group later

Do you think we really need caches with different MVCC configuration in the
same group? For simplicity, I would not allow this.

Thanks

On Wed, Sep 20, 2017 at 7:30 AM, Dmitriy Setrakyan 
wrote:

> Can caches within the same group have different MVCC configuration?
>
> On Tue, Sep 19, 2017 at 2:34 AM, Vladimir Ozerov 
> wrote:
>
> > What I mean is that it might be not applicable for DML by design. E.g.
> may
> > be we will have to fallback to per-memory-policy approach, or to
> > per-cache-group. As we do not know it at the moment and there is no clear
> > demand from users, I would simply put it aside to avoid in mess in public
> > API in future.
> >
> > Moreover, per-cache flag raises additional questions which can be put out
> > of scope otherwise. E.g. is it legal to mix MVCC and non-MVCC caches in a
> > single transaction? If yes, what is the contract? Without fine-grained
> > per-cache approach in the first iteration we can avoid answering it.
> >
> > On Tue, Sep 19, 2017 at 12:25 PM, Semyon Boikov 
> > wrote:
> >
> > > If it is not valid for DML then we can easily detect this situation and
> > > throw exception, but if I do not use DML why non make it configurable
> > > per-cache?
> > >
> > > On Tue, Sep 19, 2017 at 12:22 PM, Vladimir Ozerov <
> voze...@gridgain.com>
> > > wrote:
> > >
> > > > I would say that per-cache configuration should be out of scope as
> well
> > > for
> > > > the first iteration. Because we do not know whether it will be valid
> > for
> > > > DML.
> > > >
> > > > On Tue, Sep 19, 2017 at 12:15 PM, Semyon Boikov <
> sboi...@gridgain.com>
> > > > wrote:
> > > >
> > > > > Folks, thank you for feedback, I want to summarize some decisions:
> > > > >
> > > > > 1. Mvcc is disabled by default. We'll add two flags to enable mvcc:
> > > > > per-cache flag - CacheConfiguration.isMvccEnabled, default value
> for
> > > all
> > > > > caches - IgniteConfiguration.isMvccEnabled.
> > > > > 2. For initial implementation mvcc for ATOMIC cache is out of
> scope,
> > it
> > > > can
> > > > > be enabled only for TRANSACTIONAL caches.
> > > > > 3. Mvcc coordinator can be any server node (oldest server node is
> > > > selected
> > > > > automatically). Also I believe we need possibility to have
> > *dedicated*
> > > > mvcc
> > > > > coordinator nodes which will process only internal mvcc messages.
> > Node
> > > > can
> > > > > be marked as dedicated coordinator via new flag
> > > > > IgniteConfiguration.isMvccCoordinator or we can add separate
> > > > > MvccConfiguration bean. But let's skip this decision for now before
> > we
> > > > have
> > > > > benchmarks numbers.
> > > > > 4. Need add some metrics to monitor mvcc coordinator performance.
> > > > >
> > > > >
> > > > > Thanks
> > > > >
> > > > > On Tue, Sep 19, 2017 at 10:47 AM, Vladimir Ozerov <
> > > voze...@gridgain.com>
> > > > > wrote:
> > > > >
> > > > > > This could be something like "preferredMvccCoordinator".
> > > > > >
> > > > > > On Tue, Sep 19, 2017 at 10:40 AM, Alexey Goncharuk <
> > > > > > alexey.goncha...@gmail.com> wrote:
> > > > > >
> > > > > > > >
> > > > > > > > I agree that we need coordinator nodes, but I do not
> understand
> > > why
> > > > > > can't
> > > > > > > > we reuse some cache nodes for it? Why do we need to ask user
> to
> > > > start
> > > > > > up
> > > > > > > > yet another type of node?
> > > > > > > >
> > > > > > >
> > > > > > > Dmitriy,
> > > > > > >
> > > > > > > My understanding is that Semyon does not deny a cache node to
> be
> > > used
> > > > > as
> > > > > > a
> > > > > > > coordinator. This property will allow to optionally have a
> > > > *dedicated*
> > > > > > node
> > > > > > > serving as a coordinator to improve cluster throughput under
> > heavy
> > > > > load.
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
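
For reference, a rough sketch of how the per-cache flag summarized in the quoted thread might be used (the two Mvcc setters are part of the proposal and do not exist yet; the rest is standard configuration):

IgniteConfiguration cfg = new IgniteConfiguration();

cfg.setMvccEnabled(false); // hypothetical: proposed default for all caches

CacheConfiguration<Integer, String> txCacheCfg = new CacheConfiguration<>("txCache");

txCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); // MVCC is proposed for TRANSACTIONAL caches only
txCacheCfg.setMvccEnabled(true); // hypothetical: proposed per-cache override

cfg.setCacheConfiguration(txCacheCfg);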