Re: Spark Data Frame support in Ignite

2017-08-04 Thread Dmitriy Setrakyan
On Thu, Aug 3, 2017 at 9:04 PM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:

> This JDBC integration is just a Spark data source, which means that Spark
> will fetch data in its local memory first, and only then apply filters,
> aggregations, etc. This is obviously slow and doesn't use all the
> advantages Ignite provides.
>
> To create a useful and valuable integration, we should create a custom
> Strategy that will convert Spark's logical plan into a SQL query and
> execute it directly on Ignite.
>

I get it, but we have been talking about Data Frame support for over a
year. I think we should advise our users to switch to JDBC until the
community finds someone to implement it.
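
For reference, a rough sketch of what the JDBC-based route looks like from
Spark. This is a sketch only: it assumes Ignite's thin JDBC driver and an
existing PERSON table; adjust the URL and driver class for the driver you use.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class IgniteJdbcDataFrame {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("ignite-jdbc-df").master("local[*]").getOrCreate();

        // Spark's generic JDBC data source; PERSON is an assumed table.
        Dataset<Row> df = spark.read().format("jdbc")
            .option("url", "jdbc:ignite:thin://127.0.0.1/")
            .option("driver", "org.apache.ignite.IgniteJdbcThinDriver")
            .option("dbtable", "PERSON")
            .load();

        // As Val notes above, filters here run in Spark after the fetch,
        // not inside Ignite.
        df.filter("AGE > 30").show();

        spark.stop();
    }
}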


>
> -Val
>
> On Thu, Aug 3, 2017 at 12:12 AM, Dmitriy Setrakyan 
> wrote:
>
> > On Thu, Aug 3, 2017 at 9:04 AM, Jörn Franke 
> wrote:
> >
> > > I think the development effort would still be higher. Everything would
> > > have to be put via JDBC into Ignite, then checkpointing would have to be
> > > done via JDBC (again, additional development effort), plus a lot of
> > > conversion from Spark internal format to JDBC and back to Ignite internal
> > > format. Pagination I do not see as a useful feature for managing large
> > > data volumes from databases - on the contrary, it is very inefficient
> > > (and one would have to implement logic to fetch all pages). Pagination
> > > was also never thought of for fetching large data volumes, but for web
> > > pages showing a small result set over several pages, where the user can
> > > click manually for the next page (which they usually don't do anyway).
> > >
> > > While it might be a quick solution, I think a deeper integration than
> > > JDBC would be more beneficial.
> > >
> >
> > Jorn, I completely agree. However, we have not been able to find a
> > contributor for this feature. You sound like you have sufficient domain
> > expertise in Spark and Ignite. Would you be willing to help out?
> >
> >
> > > > On 3. Aug 2017, at 08:57, Dmitriy Setrakyan 
> > > wrote:
> > > >
> > > >> On Thu, Aug 3, 2017 at 8:45 AM, Jörn Franke 
> > > wrote:
> > > >>
> > > >> I think the JDBC one is more inefficient and slower, and requires too
> > > >> much development effort. You can also check the integration of
> > > >> Alluxio with Spark.
> > > >>
> > > >
> > > > As far as I know, Alluxio is a file system, so it cannot use JDBC.
> > > > Ignite, on the other hand, is an SQL system and works well with JDBC.
> > > > As far as the development effort goes, we are dealing with SQL, so I am
> > > > not sure why JDBC would be harder.
> > > >
> > > > Generally speaking, until Ignite provides native data frame
> > > > integration, having JDBC-based integration out of the box is minimally
> > > > acceptable.
> > > >
> > > >
> > > >> Then, in general I think JDBC was never designed for large data
> > > >> volumes. It is for executing queries and getting a small or aggregated
> > > >> result set back, or alternatively for inserting / updating single
> > > >> rows.
> > > >>
> > > >
> > > > Agree in general. However, Ignite JDBC is designed to work with larger
> > > > data volumes and supports data pagination automatically.
> > > >
> > > >
> > > >>> On 3. Aug 2017, at 08:17, Dmitriy Setrakyan  >
> > > >> wrote:
> > > >>>
> > > >>> Jorn, thanks for your feedback!
> > > >>>
> > > >>> Can you explain how the direct support would be different from the
> > > >>> JDBC support?
> > > >>>
> > > >>> Thanks,
> > > >>> D.
> > > >>>
> > >  On Thu, Aug 3, 2017 at 7:40 AM, Jörn Franke  >
> > > >> wrote:
> > > 
> > >  These are two different things. Spark applications themselves do not
> > >  use JDBC - it is more for non-Spark applications to access Spark
> > >  DataFrames.
> > > 
> > >  A direct support by Ignite would make more sense. You have IGFS in
> > >  theory, although that applies if the user is using HDFS, which might
> > >  not be the case. It is now also very common to use object stores, such
> > >  as S3. Direct support could be leveraged for interactive analysis or
> > >  for different Spark applications sharing data.
> > > 
> > > > On 3. Aug 2017, at 05:12, Dmitriy Setrakyan <
> dsetrak...@apache.org
> > >
> > >  wrote:
> > > >
> > > > Igniters,
> > > >
> > > > We have had the integration with Spark Data Frames on our roadmap for
> > > > a while:
> > > > https://issues.apache.org/jira/browse/IGNITE-3084
> > > >
> > > > However, while browsing the Spark documentation, I came across the
> > > > generic JDBC data frame support in Spark:
> > > > https://spark.apache.org/docs/latest/sql-programming-guide.html#jdbc-to-other-databases
> > > >
> > > > Given that Ignite has a JDBC driver, does it mean that 

Re: SSL certificate for the CI server

2017-08-04 Thread Dmitriy Setrakyan
I was asleep and woke up in a cold sweat realizing that I don't have a
login to TC. How do I get one?

On Fri, Aug 4, 2017 at 10:53 PM, Aleksey Chetaev 
wrote:

> If anyone can’t sleep at night
> If anyone sleep very bad
> If you afraid that your password
> Can evil hacker steal right now
>
> For they we worked day and night
> Don’t slept and worked fully days
> And finish with https
> Teamcity for, Igniters for.
>
> https://ci.ignite.apache.org
>
>
>
> --
> View this message in context: http://apache-ignite-
> developers.2346864.n4.nabble.com/SSL-certificate-for-the-
> CI-server-tp19830p20532.html
> Sent from the Apache Ignite Developers mailing list archive at Nabble.com.
>


Re: SSL certificate for the CI server

2017-08-04 Thread Aleksey Chetaev
If anyone can’t sleep at night
If anyone sleep very bad
If you afraid that your password
Can evil hacker steal right now

For they we worked day and night
Don’t slept and worked fully days
And finish with https
Teamcity for, Igniters for. 

https://ci.ignite.apache.org



--
View this message in context: 
http://apache-ignite-developers.2346864.n4.nabble.com/SSL-certificate-for-the-CI-server-tp19830p20532.html
Sent from the Apache Ignite Developers mailing list archive at Nabble.com.


Re: ODBC API conformance page updated

2017-08-04 Thread Dmitriy Setrakyan
Nice!

I am not sure I like the name "conformance", however. How about renaming it
to "specification"?

Also, it would be nice to get a sense of why certain features are
unsupported and what the plans are to get closer to 100% compliance.

D.

On Fri, Aug 4, 2017 at 1:08 PM, Vladimir Ozerov 
wrote:

> Igor,
>
> Very cool!
>
> On Fri, Aug 4, 2017 at 1:15 PM, Igor Sapego  wrote:
>
> > Hi Igniters,
> >
> > I've updated an ODBC API conformance page - [1],
> > so take a look if you are interested.
> >
> > Also, make sure you edit this page if you are adding
> > new features, or modifying existing features of the Ignite
> > ODBC driver.
> >
> > [1] - https://apacheignite.readme.io/v2.1/docs/conformance
> >
> > Best Regards,
> > Igor
> >
>


[jira] [Created] (IGNITE-5947) ClassCastException when two-dimensional array is fetched from cache

2017-08-04 Thread Valentin Kulichenko (JIRA)
Valentin Kulichenko created IGNITE-5947:
---

 Summary: ClassCastException when two-dimensional array is fetched 
from cache
 Key: IGNITE-5947
 URL: https://issues.apache.org/jira/browse/IGNITE-5947
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.1
Reporter: Valentin Kulichenko
Priority: Critical
 Fix For: 2.2


When an instance of {{Object[][]}} is put into cache, and then read from there, 
the following exception is thrown:
{noformat}
Exception in thread "main" java.lang.ClassCastException: [Ljava.lang.Object; 
cannot be cast to [[Ljava.lang.Object;
{noformat}

Reproducer attached.
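
A minimal reproducer sketch (single local node; the cache name and key are
illustrative):

{code}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class TwoDimArrayReproducer {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, Object[][]> cache = ignite.getOrCreateCache("arrays");

            cache.put(1, new Object[][] {{"a", "b"}, {"c", "d"}});

            // Per this issue, the value comes back as Object[] and the
            // implicit cast below throws the ClassCastException.
            Object[][] val = cache.get(1);

            System.out.println(val.length);
        }
    }
}
{code}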



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5946) Ignite WebSessions: Flaky failure for WebSessionSelfTest.testClientReconnectRequest() and subclasses

2017-08-04 Thread Dmitriy Pavlov (JIRA)
Dmitriy Pavlov created IGNITE-5946:
--

 Summary: Ignite WebSessions: Flaky failure for 
WebSessionSelfTest.testClientReconnectRequest() and subclasses
 Key: IGNITE-5946
 URL: https://issues.apache.org/jira/browse/IGNITE-5946
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.1
Reporter: Dmitriy Pavlov
 Fix For: 2.2


Success rate on TeamCity is ~63%:
org.apache.ignite.internal.websession.WebSessionSelfTest#testClientReconnectRequest()

http://ci.ignite.apache.org/viewLog.html?buildId=756773&tab=buildResultsDiv&buildTypeId=Ignite20Tests_IgniteWebSessions#testNameId4440256403233545493

{noformat}
java.lang.AssertionError: Error occurred on grid stop (see log for more 
details).
at 
org.apache.ignite.testframework.junits.GridAbstractTest.runTest(GridAbstractTest.java:1961)
{noformat}

{noformat}
[2017-08-04 
17:29:33,271][ERROR][test-runner-#1%websession.WebSessionSelfTest%][root] 
Failed to stop grid [igniteInstanceName=null, cancel=true]
class org.apache.ignite.IgniteClientDisconnectedException: Client node 
disconnected: client
at 
org.apache.ignite.internal.GridKernalGatewayImpl.readLock(GridKernalGatewayImpl.java:92)
at org.apache.ignite.internal.IgniteKernal.guard(IgniteKernal.java:3707)
at org.apache.ignite.internal.IgniteKernal.active(IgniteKernal.java:3423)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.awaitTopologyChange(GridAbstractTest.java:2105)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.stopGrid(GridAbstractTest.java:1030)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.stopGrid(GridAbstractTest.java:1006)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.stopGrid(GridAbstractTest.java:997)
at 
org.apache.ignite.internal.websession.WebSessionSelfTest.testClientReconnectRequest(WebSessionSelfTest.java:163)
at 
org.apache.ignite.internal.websession.WebSessionSelfTest.testClientReconnectRequest(WebSessionSelfTest.java:115)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at junit.framework.TestCase.runTest(TestCase.java:176)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2000)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:132)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1915)
at java.lang.Thread.run(Thread.java:745)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] ignite pull request #2401: ignite-2.1.3.b1

2017-08-04 Thread gvvinblade
GitHub user gvvinblade opened a pull request:

https://github.com/apache/ignite/pull/2401

ignite-2.1.3.b1



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-2.1.3.b1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2401.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2401


commit eb336074c91da2e35aab4ca9f69a5d7191b3701b
Author: tledkov-gridgain 
Date:   2017-08-04T08:16:52Z

IGNITE-5920: Fixed "CacheClientBinaryQueryExample": set 
CacheKeyConfiguration explicitly to enable affinity co-location. This closes 
#2389.

commit 8356ae7ac06eca77ed2a8c70b2cf86bc5d700fb6
Author: Igor Seliverstov 
Date:   2017-08-04T12:14:03Z

IGNITE-5658 Optimizations for data streamer
IGNITE-5918 Adding and searching objects in index tree produces a lot of 
garbage
IGNITE-5921 Reduce contention for free list access

commit 67c3833e835ff9d4a1880e984dd6446d431fd4a1
Author: Igor Seliverstov 
Date:   2017-08-04T16:51:43Z

IGNITE-5658 Optimizations for data streamer




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] ignite pull request #2400: Fail tests with issue link

2017-08-04 Thread dspavlov
GitHub user dspavlov opened a pull request:

https://github.com/apache/ignite/pull/2400

Fail tests with issue link

Experimental branch for disabling constantly failing flaky tests with 
issue links.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite 
make-teamсity-green-again

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2400.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2400


commit b017820c23288068141858058ca4814c3ded7f61
Author: dpavlov 
Date:   2017-08-04T16:19:26Z

IGNITE-5841: fail test with issue link




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (IGNITE-5945) Flaky failure in IgniteCache 5: IgniteCacheAtomicProtocolTest.testPutReaderUpdate2

2017-08-04 Thread Dmitriy Pavlov (JIRA)
Dmitriy Pavlov created IGNITE-5945:
--

 Summary: Flaky failure in IgniteCache 5: 
IgniteCacheAtomicProtocolTest.testPutReaderUpdate2
 Key: IGNITE-5945
 URL: https://issues.apache.org/jira/browse/IGNITE-5945
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.1
Reporter: Dmitriy Pavlov
 Fix For: 2.2


org.apache.ignite.internal.processors.cache.distributed.dht.atomic.IgniteCacheAtomicProtocolTest#testPutReaderUpdate2


{noformat}
junit.framework.AssertionFailedError
at junit.framework.Assert.fail(Assert.java:55)
at junit.framework.Assert.assertTrue(Assert.java:22)
at junit.framework.Assert.assertFalse(Assert.java:39)
at junit.framework.Assert.assertFalse(Assert.java:47)
at junit.framework.TestCase.assertFalse(TestCase.java:219)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.IgniteCacheAtomicProtocolTest.readerUpdateDhtFails(IgniteCacheAtomicProtocolTest.java:865)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.IgniteCacheAtomicProtocolTest.testPutReaderUpdate2(IgniteCacheAtomicProtocolTest.java:765)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at junit.framework.TestCase.runTest(TestCase.java:176)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2000)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:132)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1915)
at java.lang.Thread.run(Thread.java:748)
{noformat}


The failure is reproducible locally 2 times per 20 runs.
On TeamCity the test success rate is 88.2%.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Geo-spatial and full-text indexes support in Ignite 2.x

2017-08-04 Thread Andrey Mashenkov
Vladimir,

Is my understanding right that you suggest creating a custom Lucene codec
which will be able to store the index in Ignite page memory?

The Lucene javadoc has a complete description of the codec API [1], and it
looks like it is possible to implement a custom one.

[1]
https://lucene.apache.org/core/5_5_2/core/org/apache/lucene/codecs/lucene54/package-summary.html#package_description
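
To make the idea concrete, a minimal shell of such a codec could look like
the sketch below. This is only a sketch against Lucene 5.5: a real
implementation would override the format getters to write through page memory
instead of the file system, and register the codec name via SPI.

import org.apache.lucene.codecs.Codec;
import org.apache.lucene.codecs.FilterCodec;

public class IgnitePageMemoryCodec extends FilterCodec {
    public IgnitePageMemoryCodec() {
        // Delegate everything to the default codec for now; overrides for
        // postings/stored fields/etc. would redirect I/O to page memory.
        super("IgnitePageMemory", Codec.forName("Lucene54"));
    }
}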

On Wed, Aug 2, 2017 at 12:02 AM, Denis Magda  wrote:

> Andrey,
>
> What I’m trying to say is that presently both text and geo-spatial indexes
> are unsupported NOT only for the persistence layer but for the durable
> (page) memory in general - as you properly pointed out the text indexes are
> stored in the old off-heap memory but have to be moved to be handled in page
> memory’s off-heap region directly.
>
> Makes sense?
>
> —
> Denis
>
> > On Aug 1, 2017, at 1:05 PM, Andrey Mashenkov 
> wrote:
> >
> > Denis,
> >
> > I see we still use GridUnsafeMemory in LuceneIndex [1] in master.
> > Do you mean a wrapper to the new memory manager is used here?
> >
> > [1]
> > https://github.com/apache/ignite/blob/master/modules/
> indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/opt/
> GridLuceneIndex.java
> >
> > On Tue, Aug 1, 2017 at 10:28 PM, Denis Magda  wrote:
> >
> >> Andrey,
> >>
> >> I’ve already wiped the old off-heap memory out of my mind. Sure I
> assumed
> >> the off-heap space managed by the new memory architecture.
> >>
> >> —
> >> Denis
> >>
> >>> On Aug 1, 2017, at 11:14 AM, Andrey Mashenkov <
> >> andrey.mashen...@gmail.com> wrote:
> >>>
> >>> Denis,
> >>>
> >>> The Lucene full-text index is stored off-heap in the old way, not in
> >>> page memory. Therefore it is not persistent.
> >>>
> >>> There is a second issue related to the full-text index: it is a
> >>> different kind of index, it has its own query type, and it can't be
> >>> used in SQL queries. Looks like it makes sense to integrate full-text
> >>> indices into our SQL layer as well.
> >>>
> >>> AFAIK, H2 has some support for full-text indices based on Lucene, so we
> >>> can get a hint on how we can integrate it.
> >>>
> >>>
> >>>
> >>> On Tue, Aug 1, 2017 at 8:09 PM, Denis Magda  wrote:
> >>>
>  Vladimir,
> 
>  We need to consider that these two types of indexes are not stored
>  off-heap either.
> 
>  It expands the task a bit — the indexes have to be fully integrated
> with
>  the new durable memory architecture supporting both off-heap and
>  persistence layers.
> 
>  —
>  Denis
> 
> > On Aug 1, 2017, at 3:26 AM, Vladimir Ozerov 
>  wrote:
> >
> > Guys,
> >
> > AFAIK these two index types are not supported with enabled
> persistence
> >> at
> > the moment, neither they stored on the disk anyhow. Can someone help
> >> with
> > estimates on how difficult would it be to implement these indexes
> over
> > page-memory architecture?
> >
> > Looks like we will have to write our own implementation of these
> >> indexes,
> > instead of relying on Lucene and H2. Am I right?
> >
> > Vladimir.
> 
> 
> >>>
> >>>
> >>> --
> >>> Best regards,
> >>> Andrey V. Mashenkov
> >>
> >>
> >
> >
> > --
> > Best regards,
> > Andrey V. Mashenkov
>
>


-- 
Best regards,
Andrey V. Mashenkov


Re: Cluster auto activation design proposal

2017-08-04 Thread Sergey Chugunov
Folks,

I've summarized all the results from our discussion so far on the wiki page:
https://cwiki.apache.org/confluence/display/IGNITE/Automatic+activation+design+-+draft

I hope I reflected the most important details, and I am going to add API
suggestions for all use cases soon.

Feel free to give feedback here or in comments under the page.

Thanks,
Sergey.

On Thu, Aug 3, 2017 at 5:40 PM, Alexey Kuznetsov 
wrote:

> Hi,
>
> >>1. User creates new BLT using WebConsole or other tool and "applies" it
>  to brand-new cluster.
>
> Good idea, but we should also implement a *command-line utility* for the
> same use case.
>
> --
> Alexey Kuznetsov
>


[jira] [Created] (IGNITE-5944) Ignite 1.9 can't be started with configured IGFS and Hadoop secondary system

2017-08-04 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-5944:
-

 Summary: Ignite 1.9 can't be started with configured IGFS and 
Hadoop secondary system
 Key: IGNITE-5944
 URL: https://issues.apache.org/jira/browse/IGNITE-5944
 Project: Ignite
  Issue Type: Bug
Affects Versions: 1.9
Reporter: Mikhail Cherkasov






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Collecting query metrics and statistics

2017-08-04 Thread Vladimir Ozerov
NB: I mixed up "mapper" and "reducer" in a few places in the second part of
the message.

пт, 4 авг. 2017 г. в 17:48, Vladimir Ozerov :

> Igniters,
>
> At the moment Ignite lacks administrative and management capabilities in
> various places. One such in-demand area is SQL queries. I want to start a
> discussion around adding better query metrics to Ignite.
>
> I propose to split the task in two parts: infrastructure and UX.
>
> 1) We start with adding unique UUID to all query requests, to be able to
> glue all pieces together. Then we log the following things (approximately):
> - Query ID
> - Original query
> - Map query
> - Mapper execution time
> - Mapper rows
> - Mapper output bytes (to reducer)
> - Mapper input bytes (for distributed joins)
> - Reduce query
> - Reduce execution time
> - Reduce rows
> - Reduce input bytes (sum of all mapper output bytes)
> - Some info on distributed joins may be
> - Advanced things in future (memory page accesses, disk accesses, etc.)
>
> All these stats are saved to local structures. These structures are
> accessible through some API and typically not exchanged between nodes
> until requested.
>
> 2) UX
> Most importantly - we will make these stats *accessible through SQL*!
>
> SELECT * FROM map_query_stats
> |node|query_id|query        |time_ms|input_bytes|
> -------------------------------------------------
> |CLI1|*UUID1* |SELECT AVG(f)|50     |100        |
>
> SELECT * FROM reduce_query_stats
> |node|query_id|query                  |time_ms|output_bytes|disk_reads|
> -----------------------------------------------------------------------
> |SRV1|*UUID1* |SELECT SUM(f), COUNT(f)|00     |20          |35        |
>
> Then we do some UNIONS/JOINS from client/console/driver/whatever, and:
> SELECT ... FROM map_query_stats WHERE query_id = *UUID1*
> UNION
> SELECT ... FROM reduce_query_stats WHERE query_id = *UUID1*
>
> |total_time|total_disk_reads|reduce_node|reduce_time|reduce_disk_reads|
> -----------------------------------------------------------------------
> |100       |180             |           |           |                 |
> |          |                |SRV1       |10         |35               |
> |          |                |SRV2       |90         |130              |
> |          |                |SRV3       |20         |25               |
>
> Makes sense?
>
> Vladimir.
>


[GitHub] ignite pull request #2399: IGNITE-5939: ODBC: SQLColAttributes now works wit...

2017-08-04 Thread isapego
Github user isapego closed the pull request at:

https://github.com/apache/ignite/pull/2399


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] ignite pull request #2315: IGNITE-4800: Lucene query may fails with NPE.

2017-08-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/2315


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Collecting query metrics and statistics

2017-08-04 Thread Andrey Mashenkov
Vladimir,

Looks like a "must have" feature.
Most of the databases I've seen have such info available via system tables
in a schema.

We have to choose and reserve a schema name for this, e.g.
"information_schema".



On Fri, Aug 4, 2017 at 5:48 PM, Vladimir Ozerov 
wrote:

> Igniters,
>
> At the moment Ignite lacks administrative and management capabilities in
> various places. One such in-demand area is SQL queries. I want to start a
> discussion around adding better query metrics to Ignite.
>
> I propose to split the task in two parts: infrastructure and UX.
>
> 1) We start with adding unique UUID to all query requests, to be able to
> glue all pieces together. Then we log the following things (approximately):
> - Query ID
> - Original query
> - Map query
> - Mapper execution time
> - Mapper rows
> - Mapper output bytes (to reducer)
> - Mapper input bytes (for distributed joins)
> - Reduce query
> - Reduce execution time
> - Reduce rows
> - Reduce input bytes (sum of all mapper output bytes)
> - Some info on distributed joins may be
> - Advanced things in future (memory page accesses, disk accesses, etc.)
>
> All these stats are saved to local structures. These structures are
> accessible through some API and typically not exchanged between nodes
> until requested.
>
> 2) UX
> Most importantly - we will make these stats *accessible through SQL*!
>
> SELECT * FROM map_query_stats
> |node|query_id|query        |time_ms|input_bytes|
> -------------------------------------------------
> |CLI1|*UUID1* |SELECT AVG(f)|50     |100        |
>
> SELECT * FROM reduce_query_stats
> |node|query_id|query                  |time_ms|output_bytes|disk_reads|
> -----------------------------------------------------------------------
> |SRV1|*UUID1* |SELECT SUM(f), COUNT(f)|00     |20          |35        |
>
> Then we do some UNIONS/JOINS from client/console/driver/whatever, and:
> SELECT ... FROM map_query_stats WHERE query_id = *UUID1*
> UNION
> SELECT ... FROM reduce_query_stats WHERE query_id = *UUID1*
>
> |total_time|total_disk_reads|reduce_node|reduce_time|reduce_disk_reads|
> -----------------------------------------------------------------------
> |100       |180             |           |           |                 |
> |          |                |SRV1       |10         |35               |
> |          |                |SRV2       |90         |130              |
> |          |                |SRV3       |20         |25               |
>
> Makes sense?
>
> Vladimir.
>



-- 
Best regards,
Andrey V. Mashenkov


[jira] [Created] (IGNITE-5943) Communication. We could reject client connection while it has received EVT_NODE_JOINED

2017-08-04 Thread Eduard Shangareev (JIRA)
Eduard Shangareev created IGNITE-5943:
-

 Summary: Communication. We could reject client connection while it 
has received EVT_NODE_JOINED
 Key: IGNITE-5943
 URL: https://issues.apache.org/jira/browse/IGNITE-5943
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 2.0
Reporter: Eduard Shangareev
Assignee: Eduard Shangareev
Priority: Critical
 Fix For: 2.2


There is a race between the server nodes receiving acknowledgment about a 
joining client node and the client node starting to consider itself fully 
functional.
It can cause communication between the client and servers to be rejected 
(for example, on requesting data from caches).
The issue happens on really big topologies (> 300 nodes) or when many clients 
join simultaneously.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5942) Python3 pylibmc does not work with Ignite memcache mode

2017-08-04 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-5942:
-

 Summary: Python3 pylibmc does not work with Ignite memcache mode
 Key: IGNITE-5942
 URL: https://issues.apache.org/jira/browse/IGNITE-5942
 Project: Ignite
  Issue Type: Bug
Reporter: Mikhail Cherkasov


The example from:
https://apacheignite.readme.io/v2.0/docs/memcached-support#python
doesn't work for Python 3.6.
An exception is thrown on the following call:

client.set("key", "val")

It was tested with another Python library, which works, so it looks like the 
problem is in the pylibmc/libmemcached integration with Ignite.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Collecting query metrics and statistics

2017-08-04 Thread Vladimir Ozerov
Igniters,

At the moment Ignite lacks administrative and management capabilities in
various places. One such in-demand area is SQL queries. I want to start a
discussion around adding better query metrics to Ignite.

I propose to split the task in two parts: infrastructure and UX.

1) We start with adding unique UUID to all query requests, to be able to
glue all pieces together. Then we log the following things (approximately):
- Query ID
- Original query
- Map query
- Mapper execution time
- Mapper rows
- Mapper output bytes (to reducer)
- Mapper input bytes (for distributed joins)
- Reduce query
- Reduce execution time
- Reduce rows
- Reduce input bytes (sum of all mapper output bytes)
- Some info on distributed joins may be
- Advanced things in future (memory page accesses, disk accesses, etc.)

All these stats are saved to local structures. These structures are
accessible through some API and typically not exchanged between nodes until
requested.

2) UX
Most importantly - we will make these stats *accessible through SQL*!

SELECT * FROM map_query_stats
|node|query_id|query        |time_ms|input_bytes|
-------------------------------------------------
|CLI1|*UUID1* |SELECT AVG(f)|50     |100        |

SELECT * FROM reduce_query_stats
|node|query_id|query                  |time_ms|output_bytes|disk_reads|
-----------------------------------------------------------------------
|SRV1|*UUID1* |SELECT SUM(f), COUNT(f)|00     |20          |35        |

Then we do some UNIONS/JOINS from client/console/driver/whatever, and:
SELECT ... FROM map_query_stats WHERE query_id = *UUID1*
UNION
SELECT ... FROM reduce_query_stats WHERE query_id = *UUID1*

|total_time|total_disk_reads|reduce_node|reduce_time|reduce_disk_reads|
-----------------------------------------------------------------------
|100       |180             |           |           |                 |
|          |                |SRV1       |10         |35               |
|          |                |SRV2       |90         |130              |
|          |                |SRV3       |20         |25               |
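
If we go this way, client code could consume the stats through the regular
SQL API. A hypothetical sketch follows: the map_query_stats table and its
columns are only the proposal above, not an existing API.

import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class QueryStatsSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            SqlFieldsQuery qry = new SqlFieldsQuery(
                "SELECT node, time_ms, input_bytes FROM map_query_stats WHERE query_id = ?")
                .setArgs("UUID1"); // illustrative query ID

            List<List<?>> rows =
                ignite.getOrCreateCache("default").query(qry).getAll();

            for (List<?> row : rows)
                System.out.println(row);
        }
    }
}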

Makes sense?

Vladimir.


[jira] [Created] (IGNITE-5941) Index with long name stores incorrect

2017-08-04 Thread Vladislav Pyatkov (JIRA)
Vladislav Pyatkov created IGNITE-5941:
-

 Summary: Index with long name stores incorrect
 Key: IGNITE-5941
 URL: https://issues.apache.org/jira/browse/IGNITE-5941
 Project: Ignite
  Issue Type: Bug
  Components: persistence
Reporter: Vladislav Pyatkov


An SQL query by an index with a long name returns inconsistent results after 
cluster restart and recovery from storage. At the same time, a query by 
another index (with a shorter name) works correctly before and after recovery.

For example long index name:
{code}
QueryIndex index = new QueryIndex("name", true, 
"COM.SBT.AZIMUTH_PSI.PUBLISHER.ENTITIES.PUB.PARTICLES.CARPORT#MODELCOM.SBT.AZIMUTH_PSI.PUBLISHER.ENTITIES.PUB.PARTICLES.CARPORT");
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] ignite pull request #2399: IGNITE-5939: ODBC: SQLColAttributes now works wit...

2017-08-04 Thread isapego
GitHub user isapego opened a pull request:

https://github.com/apache/ignite/pull/2399

IGNITE-5939: ODBC: SQLColAttributes now works with legacy attribute codes.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-5939

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2399.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2399


commit 44496f38d6d55ab616e8a8cf4f7a8c986543ef66
Author: Igor Sapego 
Date:   2017-08-04T13:55:05Z

IGNITE-5939: Added tests.

commit 8a443ced2071a9fff471ebd3167032708a3a4f16
Author: Igor Sapego 
Date:   2017-08-04T13:57:05Z

IGNITE-5939: Fix




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (IGNITE-5940) DataStreamer throws exception as it's closed if OOM occurs on server node.

2017-08-04 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-5940:
-

 Summary: DataStreamer throws exception as it's closed if OOM 
occurs on server node.
 Key: IGNITE-5940
 URL: https://issues.apache.org/jira/browse/IGNITE-5940
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.1
Reporter: Mikhail Cherkasov
Assignee: Mikhail Cherkasov






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5938) Implement WAL logs compaction and compression after checkpoint

2017-08-04 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-5938:


 Summary: Implement WAL logs compaction and compression after 
checkpoint
 Key: IGNITE-5938
 URL: https://issues.apache.org/jira/browse/IGNITE-5938
 Project: Ignite
  Issue Type: Improvement
  Components: persistence
Affects Versions: 2.1
Reporter: Alexey Goncharuk
 Fix For: 2.2


Currently, we simply move WAL segments to the archive when a WAL segment is 
written, and delete them when the checkpoint history becomes too old. 
An archived WAL segment contains physical delta records that are no longer 
needed for rebalancing, so these records may be thrown away. In order to 
optimize disk space and delta WAL rebalancing, we can do the following:
1) Clean the WAL segments from the physical records
2) Compress the cleaned segments (I expect this to be very effective since we 
write full objects), as sketched below
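
A rough sketch of step 2, assuming an archived segment is a plain file (the
actual integration point in the archiver is not shown):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class WalSegmentCompressor {
    /** Replaces the given archived segment with a zipped copy. */
    public static void compress(Path segment) throws IOException {
        Path zip = segment.resolveSibling(segment.getFileName() + ".zip");

        try (ZipOutputStream out = new ZipOutputStream(Files.newOutputStream(zip))) {
            out.putNextEntry(new ZipEntry(segment.getFileName().toString()));
            Files.copy(segment, out);
            out.closeEntry();
        }

        Files.delete(segment); // keep only the compressed copy
    }
}
{code}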




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5936) Cleanup of not needed versions

2017-08-04 Thread Semen Boikov (JIRA)
Semen Boikov created IGNITE-5936:


 Summary: Cleanup of not needed versions
 Key: IGNITE-5936
 URL: https://issues.apache.org/jira/browse/IGNITE-5936
 Project: Ignite
  Issue Type: Sub-task
Reporter: Semen Boikov


Need to implement a procedure to remove versions which are no longer needed 
from the MVCC storage. A version which is safe to remove (there are no readers 
using this version) should somehow be passed from the coordinator to the 
servers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Set default cache synchronization mode to FULL_SYNC

2017-08-04 Thread Дмитрий Рябов
I mean "set readFromBackup = false" (copy-paste was a bad idea).

2017-08-04 14:21 GMT+03:00 Дмитрий Рябов :

> +1 to change PRIMARY_SYNC to FULL_SYNC.
>
> I think it is not reasonable to set readFromBackup=true by default,
> especially for replicated caches, but FULL_SYNC will keep the cache in a
> consistent state.
>
> 2017-08-04 13:23 GMT+03:00 Anton Vinogradov :
>
>> +1 to change PRIMARY_SYNC to FULL_SYNC and keep readFromBackup=true
>>
>> Dmitriy,
>> Why should we wait for 3.0?
>> This change looks safe for me.
>>
>> On Wed, Aug 2, 2017 at 9:51 PM, Dmitriy Setrakyan 
>> wrote:
>>
>> > We have to wait with any default changes to 3.0, unfortunately.
>> >
>> > On Wed, Aug 2, 2017 at 8:30 PM, Vladimir Ozerov 
>> > wrote:
>> >
>> > > Not sure about readFromBackup, but changing PRIMARY_SYNC to FULL_SYNC
>> > looks
>> > > safe to me. Any other thoughts?
>> > >
>> > > ср, 2 авг. 2017 г. в 21:10, Denis Magda :
>> > >
>> > > > +1 for both suggestions but I’m not sure we can do the change till
>> 3.0.
>> > > >
>> > > > —
>> > > > Denis
>> > > >
>> > > > > On Aug 2, 2017, at 1:27 AM, Vladimir Ozerov > >
>> > > > wrote:
>> > > > >
>> > > > > +1 for readFromBackup=false as well :-) Another example of default
>> > > value
>> > > > > with subtle effects.
>> > > > >
>> > > > > On Wed, Aug 2, 2017 at 11:11 AM, Alexey Goncharuk <
>> > > > > alexey.goncha...@gmail.com> wrote:
>> > > > >
>> > > > >> Vladimir,
>> > > > >>
>> > > > >> Personally, I agree that we should put correctness over
>> performance,
>> > > > >> however (1) is not a correct statement for TRANSACTIONAL caches.
>> A
>> > > > >> transactional client always validates the result of an operation
>> and
>> > > > throw
>> > > > >> a correct exception if operation failed. (1) is true for ATOMIC
>> > > caches,
>> > > > >> though.
>> > > > >>
>> > > > >> A user can get in trouble in this default for both TX and ATOMIC
>> > > caches
>> > > > if
>> > > > >> a put is performed from a backup node and readFromBackup is set
>> to
>> > > > false.
>> > > > >> In this case, the simple read-after-write scenario may fail. I
>> would
>> > > > rather
>> > > > >> set readFromBackup to false by default, however, this fixes
>> neither
>> > > the
>> > > > SQL
>> > > > >> nor ATOMIC caches issues.
>> > > > >>
>> > > > >> +1 for the change, and extend the warning for partitioned caches
>> > with
>> > > > >> readFromBackup=true and PRIMARY_SYNC.
>> > > > >>
>> > > > >>
>> > > > >>
>> > > > >> 2017-08-02 10:58 GMT+03:00 Vladimir Ozerov > >:
>> > > > >>
>> > > > >>> Igniters,
>> > > > >>>
>> > > > >>> I want to re-iterate idea of changing default synchronization
>> mode
>> > > from
>> > > > >>> PRIMARY_SYNC to FULL_SYNC.
>> > > > >>>
>> > > > >>> Motivation:
>> > > > >>> 1) If user set [cacheMode=PARTITIONED, backups=1] he still could
>> > > lose
>> > > > >> data
>> > > > >>> silently. Because primary node could report success to the
>> client
>> > and
>> > > > >> then
>> > > > >>> crash before data is propagated to backups.
>> > > > >>> 2) If user set [cacheMode=REPLICATED] and use SQL, he might
>> > get
>> > > > >>> invalid results if cache is being updated concurrently - well
>> known
>> > > > >> issue.
>> > > > >>>
>> > > > >>> The only advantage of PRIMARY_SYNC is slightly better
>> performance,
>> > > but
>> > > > we
>> > > > >>> should prefer correctness over performance.
>> > > > >>>
>> > > > >>> Proposed changes:
>> > > > >>> 1) Make FULL_SYNC default;
>> > > > >>> 2) Print a warning about possibly incorrect SQL results if
>> > REPLICATED
>> > > > >> cache
>> > > > >>> is started in PRIMARY_SYNC mode.
>> > > > >>>
>> > > > >>> Thoughts?
>> > > > >>>
>> > > > >>> Vladimir.
>> > > > >>>
>> > > > >>
>> > > >
>> > > >
>> > >
>> >
>>
>
>


Re: Set default cache synchronization mode to FULL_SYNC

2017-08-04 Thread Anton Vinogradov
Dmitriy,

> I think it is not reasonable to set readFromBackup=true by default,
> especially for replicated caches, but FULL_SYNC will keep the cache in a
> consistent state.

readFromBackup=true allows you to read from a backup instead of making a
request to the primary node, so it should be useful, especially for
replicated caches.
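
Just to make the discussed settings concrete, here is a sketch of setting
them explicitly today (the cache name is illustrative):

import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");

// What this thread proposes as the new default:
ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);

// Current default, kept as suggested above:
ccfg.setReadFromBackup(true);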

On Fri, Aug 4, 2017 at 2:21 PM, Дмитрий Рябов  wrote:

> +1 to change PRIMARY_SYNC to FULL_SYNC.
>
> I think it is not reasonable to set readFromBackup=true by default,
> especially for replicated caches, but FULL_SYNC will keep the cache in a
> consistent state.
>
> 2017-08-04 13:23 GMT+03:00 Anton Vinogradov :
>
> > +1 to change PRIMARY_SYNC to FULL_SYNC and keep readFromBackup=true
> >
> > Dmitriy,
> > Why should we wait for 3.0?
> > This change looks safe for me.
> >
> > On Wed, Aug 2, 2017 at 9:51 PM, Dmitriy Setrakyan  >
> > wrote:
> >
> > > We have to wait with any default changes to 3.0, unfortunately.
> > >
> > > On Wed, Aug 2, 2017 at 8:30 PM, Vladimir Ozerov 
> > > wrote:
> > >
> > > > Not sure about readFromBackup, but changing PRIMARY_SYNC to FULL_SYNC
> > > looks
> > > > safe to me. Any other thoughts?
> > > >
> > > > ср, 2 авг. 2017 г. в 21:10, Denis Magda :
> > > >
> > > > > +1 for both suggestions but I’m not sure we can do the change till
> > 3.0.
> > > > >
> > > > > —
> > > > > Denis
> > > > >
> > > > > > On Aug 2, 2017, at 1:27 AM, Vladimir Ozerov <
> voze...@gridgain.com>
> > > > > wrote:
> > > > > >
> > > > > > +1 for readFromBackup=false as well :-) Another example of
> default
> > > > value
> > > > > > with subtle effects.
> > > > > >
> > > > > > On Wed, Aug 2, 2017 at 11:11 AM, Alexey Goncharuk <
> > > > > > alexey.goncha...@gmail.com> wrote:
> > > > > >
> > > > > >> Vladimir,
> > > > > >>
> > > > > >> Personally, I agree that we should put correctness over
> > performance,
> > > > > >> however (1) is not a correct statement for TRANSACTIONAL
> caches. A
> > > > > >> transactional client always validates the result of an operation
> > and
> > > > > throw
> > > > > >> a correct exception if operation failed. (1) is true for ATOMIC
> > > > caches,
> > > > > >> though.
> > > > > >>
> > > > > >> A user can get in trouble in this default for both TX and ATOMIC
> > > > caches
> > > > > if
> > > > > >> a put is performed from a backup node and readFromBackup is set
> to
> > > > > false.
> > > > > >> In this case, the simple read-after-write scenario may fail. I
> > would
> > > > > rather
> > > > > >> set readFromBackup to false by default, however, this fixes
> > neither
> > > > the
> > > > > SQL
> > > > > >> nor ATOMIC caches issues.
> > > > > >>
> > > > > >> +1 for the change, and extend the warning for partitioned caches
> > > with
> > > > > >> readFromBackup=true and PRIMARY_SYNC.
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> 2017-08-02 10:58 GMT+03:00 Vladimir Ozerov <
> voze...@gridgain.com
> > >:
> > > > > >>
> > > > > >>> Igniters,
> > > > > >>>
> > > > > >>> I want to re-iterate idea of changing default synchronization
> > mode
> > > > from
> > > > > >>> PRIMARY_SYNC to FULL_SYNC.
> > > > > >>>
> > > > > >>> Motivation:
> > > > > >>> 1) If user set [cacheMode=PARTITIONED, backups=1] he still
> could
> > > > lose
> > > > > >> data
> > > > > >>> silently. Because primary node could report success to the
> client
> > > and
> > > > > >> then
> > > > > >>> crash before data is propagated to backups.
> > > > > >>> 2) If user set [cacheMode=REPLICATED] and use SQL, he
> might
> > > get
> > > > > >>> invalid results if cache is being updated concurrently - well
> > known
> > > > > >> issue.
> > > > > >>>
> > > > > >>> The only advantage of PRIMARY_SYNC is slightly better
> > performance,
> > > > but
> > > > > we
> > > > > >>> should prefer correctness over performance.
> > > > > >>>
> > > > > >>> Proposed changes:
> > > > > >>> 1) Make FULL_SYNC default;
> > > > > >>> 2) Print a warning about possibly incorrect SQL results if
> > > REPLICATED
> > > > > >> cache
> > > > > >>> is started in PRIMARY_SYNC mode.
> > > > > >>>
> > > > > >>> Thoughts?
> > > > > >>>
> > > > > >>> Vladimir.
> > > > > >>>
> > > > > >>
> > > > >
> > > > >
> > > >
> > >
> >
>


[GitHub] ignite pull request #2396: Ignite 1.7.14

2017-08-04 Thread ntikhonov
GitHub user ntikhonov opened a pull request:

https://github.com/apache/ignite/pull/2396

Ignite 1.7.14



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-1.7.14

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2396.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2396


commit a2b4751f5eefd70a5a1aa26652c9671240125f78
Author: dkarachentsev 
Date:   2017-03-17T11:57:48Z

IGNITE-4473 - Client should re-try connection attempt in case of concurrent 
network failure.

(cherry picked from commit d124004)

commit c4de164392ddc114c88d5a6eba0ff0b13d32542f
Author: AMRepo 
Date:   2017-03-20T13:31:15Z

IGNITE-518: Unmuted tests that was fixed in ignite-4036. This closes #1636.

commit e0c012d977b6db13dfdf2fb8347677998287c1e4
Author: Igor Sapego 
Date:   2017-03-21T14:50:06Z

IGNITE-4200: Added copying of the C++ binaries.

commit 8d9ade2cc2d71f12a427e3effa3846a928b0681e
Author: dkarachentsev 
Date:   2017-03-22T11:57:24Z

Merge branch 'ignite-1.7.9-p1' into ignite-1.7.10

commit b7ab27301b59bf93fc73b52fdf8e0bcf124fec1d
Author: Andrey V. Mashenkov 
Date:   2017-04-06T11:43:50Z

IGNITE-4832: Prevent service deployment on client by default when 
configuration is provided on startup. This closes #1748.

commit 443ac9a7aa82af1359a03bcfc8f9212b108300e4
Author: Andrey V. Mashenkov 
Date:   2017-04-05T12:01:02Z

IGNITE-4917: Fixed failure when accessing BinaryObjectBuilder field value 
serialized with OptimizedMarshaller . This closes #1736.

commit 4a1415ad01ff9fde30d5c7c02e6d938f1515178d
Author: Andrey V. Mashenkov 
Date:   2017-04-12T10:01:25Z

IGNITE-4907: Fixed excessive service instances can be started with dynamic 
deployment. This closes #1766.

(cherry picked from commit 0f7ef74)

commit bf1049741f7a64728bd433f78262ba273f969848
Author: Andrey V. Mashenkov 
Date:   2017-04-17T16:00:30Z

IGNITE-4954 - Configurable expiration timeout for Cassandra session. This 
closes #1785.

commit f9ecacc625b458539775e6550bd9b7613ed38f21
Author: dkarachentsev 
Date:   2017-04-28T08:46:23Z

IGNITE-5077 - Support service security permissions

backport from master
(cherry picked from commit 6236b5f)

commit 91c899b909383c78b78b9bf0c8f233b8c75ef29e
Author: Valentin Kulichenko 
Date:   2017-04-28T12:48:57Z

IGNITE-5081 - Removed redundant duplication of permissions in 
SecurityPermissionSetBuilder

commit b48a26b9b1e97fb8eb52c2a2f36005770922ac3d
Author: Valentin Kulichenko 
Date:   2017-04-28T12:53:33Z

IGNITE-5080 - Fixes in SecurityBasicPermissionSet

commit f66c23cbb9a6f2c923ebf75c58f00afaf1c0b5f3
Author: Evgenii Zhuravlev 
Date:   2017-05-03T14:47:45Z

IGNITE-4939 Receive event before cache initialized fix

commit 45b4d6316145d0b4b46713409f5e8fbe55ff4c41
Author: Evgenii Zhuravlev 
Date:   2017-05-04T09:11:37Z

IGNITE-4939 Receive event before cache initialized fix

commit 075bcfca0ea22633be13cd02647e359ad6fdca16
Author: Andrey V. Mashenkov 
Date:   2017-05-04T09:21:04Z

Fix flacky service deployment tests.

commit 25c06b50d46937cb39534cdf4147b862217289a2
Author: rfqu 
Date:   2017-05-02T16:46:44Z

ignite-4220 Support statements for JDBC and Cassandra store

commit 987c182686962673e70398395cb27e94f894713b
Author: nikolay_tikhonov 
Date:   2017-05-15T08:54:16Z

Fixed "IGNITE-5214 ConcurrentModificationException with enable DEBUG log 
level"

Signed-off-by: nikolay_tikhonov 

commit ebc4a1648a80fbbd485e4c351fce9bee163318f9
Author: sboikov 
Date:   2017-05-16T08:30:29Z

DirectByteBufferStreamImpl: converted asserts into exceptions.

(cherry picked from commit 560ef60)

commit 9cd7e0f8d132f9b7c496fe64f75f271ef60da5eb
Author: Alexey Kuznetsov 
Date:   2017-02-09T09:44:41Z

IGNITE-4676 Fixed hang if closure executed nested internal task with 
continuation. Added test.
(cherry picked from commit e7a5307)

commit 43bcc15127bd3fd7ac4e277da6da9e5fb6a855c0
Author: Vasiliy Sisko 
Date:   2017-03-30T04:08:10Z

IGNITE-4838 Fixed internal task detection logic. Added tests.
(cherry picked from commit ba68c6c)

commit 2a818d36395dd1af23acf444adf396b2e2edbede
Author: Konstantin Dudkov 
Date:   2017-05-22T13:28:07Z

Fixed "IGNITE-4205 

[jira] [Created] (IGNITE-5934) Integrate mvcc support in sql query protocol

2017-08-04 Thread Semen Boikov (JIRA)
Semen Boikov created IGNITE-5934:


 Summary: Integrate mvcc support in sql query protocol
 Key: IGNITE-5934
 URL: https://issues.apache.org/jira/browse/IGNITE-5934
 Project: Ignite
  Issue Type: Sub-task
Reporter: Semen Boikov


Need to integrate MVCC support into the SQL query protocol:
- request the current ID and the list of active txs from the coordinator
- pass this info in SQL requests and in SQL queries
- notify the coordinator after the query completes



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Ignite.close(), G.stop(name, true). Change flag cancel to false

2017-08-04 Thread Ivan Rakov
My vote is still for making the message softer (crashed -> stopped) and 
keeping the logic as is.


The example with File.close() is good, but I think it's not the case here. 
The state on disk after node stop *will not* reflect all user actions 
made before the Ignite.close() call, regardless of whether the node was 
stopped during a checkpoint.
Ignite will recover to the actual state anyway; the only difference is the 
WAL replay algorithm (stopping during a checkpoint will force Ignite to 
replay delta records).


However, waiting for the checkpoint on node stop brings two advantages:
1) The next start will be faster - fewer WAL records to replay.
2) Partition files will be locally consistent after node stop. The user will 
be able to save a partition file for any kind of analysis.


Are they strong enough to force the user to wait on stop?
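
For reference, given a started Ignite instance "ignite", the two stop flavors
being discussed (assuming, as in the subject of this thread, that
Ignite.close() maps to G.stop(name, true)):

import org.apache.ignite.Ignition;

// cancel=false: waits for ongoing work; per this thread, the checkpoint
// would also be allowed to finish.
Ignition.stop(ignite.name(), false);

// cancel=true: equivalent to ignite.close(); may interrupt a checkpoint
// and force WAL replay on the next start.
Ignition.stop(ignite.name(), true);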

Best Regards,
Ivan Rakov

On 04.08.2017 13:42, Vyacheslav Daradur wrote:

Hi guys, I'll just add my opinion if you don't mind.


> Maybe we should implement Vladimir's suggestion to flush the pages without
> respect to the cancel flag? Are there any thoughts on this?

I think it's a good suggestion.
But in the case of unit testing a developer usually calls #stopAllGrids() at
the end of all tests.
The method GridAbstractTest#stopAllGrids() is built on top of the method
G.stop(name, true).
IMO in that case flushing checkpoints isn't necessary.


2017-08-04 13:25 GMT+03:00 Dmitry Pavlov :


Thank you all for the replies.

I like the idea of replacing 'crashed' with 'stopped'. The word 'crashed' is
really confusing.

But still, if I call close() on a file, all data is flushed to disk. But for
ignite.close() the checkpoint may not be finished.

Maybe we should implement Vladimir's suggestion to flush the pages without
respect to the cancel flag? Are there any thoughts on this?

пт, 4 авг. 2017 г. в 11:12, Vladimir Ozerov :


Ivan,

Hanging on Ignite.close() will confuse the user no more than a restore on
start after a graceful shutdown. IMO the correct approach here would be to:
1) wait for checkpoint completion irrespective of the "cancel" flag, because
this flag relates to compute jobs only as per the documentation
2) print an INFO message to the log that we are saving a checkpoint due to
node stop.

On Fri, Aug 4, 2017 at 10:54 AM, Ivan Rakov 

wrote:

Dmitriy,

From my point of view, invoking stop(true) is correct behaviour.

Stopping a node in the middle of a checkpoint is an absolutely valid case.
That's how persistence works - the node will restore the memory state if
stopped at any moment.
On the other hand, a checkpoint may last for a long time. A thread hanging
on Ignite.close() may confuse the user much more than a "crashed in the
middle of checkpoint" message.

Best Regards,
Ivan Rakov


On 03.08.2017 22:34, Dmitry Pavlov wrote:


Hi Igniters,

I've created the simplest example using Ignite 2.1 and persistence (see the
code below). I've included the Ignite instance into try-with-resources (I
think it is the default approach for AutoCloseable inheritors).

But next time when I started this server I got the message: "Ignite node
crashed in the middle of checkpoint. Will restore memory state and enforce
checkpoint on node start."

This happens because in the close() method we don't wait for the checkpoint
to end. I am afraid this behaviour may confuse users on the first use of the
product.

What do you think if we change Ignite.close() functioning from stop(true)
to stop(false)? This would allow waiting for checkpoints to finish by
default.

Alternatively, we may improve the example to show how to shut down a server
node correctly. The current PersistentStoreExample does not cover server
node shutdown.

Any concerns on the close() method change?

Sincerely,
Dmitriy Pavlov


IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());

try (Ignite ignite = Ignition.start(cfg)) {
    ignite.active(true);

    IgniteCache<String, String> cache = ignite.getOrCreateCache("test");

    for (int i = 0; i < 1000; i++)
        cache.put("Key" + i, "Value" + i);
}









[jira] [Created] (IGNITE-5933) Integrate mvcc support in cache.getAll protocol

2017-08-04 Thread Semen Boikov (JIRA)
Semen Boikov created IGNITE-5933:


 Summary: Integrate mvcc support in cache.getAll protocol
 Key: IGNITE-5933
 URL: https://issues.apache.org/jira/browse/IGNITE-5933
 Project: Ignite
  Issue Type: Sub-task
Reporter: Semen Boikov


Need to integrate MVCC support into the cache.getAll protocol:
- request the current ID and the list of active txs from the coordinator
- pass this info in get requests and in the local 'get'
- notify the coordinator after getAll completes



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Set default cache synchronization mode to FULL_SYNC

2017-08-04 Thread Дмитрий Рябов
+1 to change PRIMARY_SYNC to FULL_SYNC.

I think it is not reasonable to set readFromBackup=true by default,
especially for replicated caches, but FULL_SYNC will keep the cache in a
consistent state.

2017-08-04 13:23 GMT+03:00 Anton Vinogradov :

> +1 to change PRIMARY_SYNC to FULL_SYNC and keep readFromBackup=true
>
> Dmitriy,
> Why should we wait for 3.0?
> This change looks safe for me.
>
> On Wed, Aug 2, 2017 at 9:51 PM, Dmitriy Setrakyan 
> wrote:
>
> > We have to wait with any default changes to 3.0, unfortunately.
> >
> > On Wed, Aug 2, 2017 at 8:30 PM, Vladimir Ozerov 
> > wrote:
> >
> > > Not sure about readFromBackup, but changing PRIMARY_SYNC to FULL_SYNC
> > looks
> > > safe to me. Any other thoughts?
> > >
> > > ср, 2 авг. 2017 г. в 21:10, Denis Magda :
> > >
> > > > +1 for both suggestions but I’m not sure we can do the change till
> 3.0.
> > > >
> > > > —
> > > > Denis
> > > >
> > > > > On Aug 2, 2017, at 1:27 AM, Vladimir Ozerov 
> > > > wrote:
> > > > >
> > > > > +1 for readFromBackup=false as well :-) Another example of default
> > > value
> > > > > with subtle effects.
> > > > >
> > > > > On Wed, Aug 2, 2017 at 11:11 AM, Alexey Goncharuk <
> > > > > alexey.goncha...@gmail.com> wrote:
> > > > >
> > > > >> Vladimir,
> > > > >>
> > > > >> Personally, I agree that we should put correctness over
> performance,
> > > > >> however (1) is not a correct statement for TRANSACTIONAL caches. A
> > > > >> transactional client always validates the result of an operation
> and
> > > > throw
> > > > >> a correct exception if operation failed. (1) is true for ATOMIC
> > > caches,
> > > > >> though.
> > > > >>
> > > > >> A user can get in trouble in this default for both TX and ATOMIC
> > > caches
> > > > if
> > > > >> a put is performed from a backup node and readFromBackup is set to
> > > > false.
> > > > >> In this case, the simple read-after-write scenario may fail. I
> would
> > > > rather
> > > > >> set readFromBackup to false by default, however, this fixes
> neither
> > > the
> > > > SQL
> > > > >> nor ATOMIC caches issues.
> > > > >>
> > > > >> +1 for the change, and extend the warning for partitioned caches
> > with
> > > > >> readFromBackup=true and PRIMARY_SYNC.
> > > > >>
> > > > >>
> > > > >>
> > > > >> 2017-08-02 10:58 GMT+03:00 Vladimir Ozerov  >:
> > > > >>
> > > > >>> Igniters,
> > > > >>>
> > > > >>> I want to re-iterate idea of changing default synchronization
> mode
> > > from
> > > > >>> PRIMARY_SYNC to FULL_SYNC.
> > > > >>>
> > > > >>> Motivation:
> > > > >>> 1) If user set [cacheMode=PARTITIONED, backups=1] he still could
> > > lose
> > > > >> data
> > > > >>> silently. Because primary node could report success to the client
> > and
> > > > >> then
> > > > >>> crash before data is propagated to backups.
> > > > >>> 2) If user set [cacheMode=REPLICATED] and use SQL, he might
> > get
> > > > >>> invalid results if cache is being updated concurrently - well
> known
> > > > >> issue.
> > > > >>>
> > > > >>> The only advantage of PRIMARY_SYNC is slightly better
> performance,
> > > but
> > > > we
> > > > >>> should prefer correctness over performance.
> > > > >>>
> > > > >>> Proposed changes:
> > > > >>> 1) Make FULL_SYNC default;
> > > > >>> 2) Print a warning about possibly incorrect SQL results if
> > REPLICATED
> > > > >> cache
> > > > >>> is started in PRIMARY_SYNC mode.
> > > > >>>
> > > > >>> Thoughts?
> > > > >>>
> > > > >>> Vladimir.
> > > > >>>
> > > > >>
> > > >
> > > >
> > >
> >
>


[jira] [Created] (IGNITE-5932) Integrate communication with coordinator in tx protocol

2017-08-04 Thread Semen Boikov (JIRA)
Semen Boikov created IGNITE-5932:


 Summary: Integrate communication with coordinator in tx protocol
 Key: IGNITE-5932
 URL: https://issues.apache.org/jira/browse/IGNITE-5932
 Project: Ignite
  Issue Type: Sub-task
  Components: cache
Reporter: Semen Boikov
Assignee: Semen Boikov


Need to integrate communication with the coordinator into the transactions
protocol:
- after locks are acquired, request an ID from the coordinator
- this ID should be passed to primary/backups and passed to the update
- after the tx is committed, notify the coordinator

Notes:
- there are differences in the prepare logic for 
optimistic/pessimistic/serializable transactions, so most probably work with 
the coordinator should be implemented separately for these tx types
- we need to support the case when the coordinator fails during prepare (need 
to think whether to rollback and retry the tx or switch to the next assigned 
coordinator)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5930) Long running transactions while load test with 1 memory policy, 14 caches and >=24 servers

2017-08-04 Thread Ksenia Rybakova (JIRA)
Ksenia Rybakova created IGNITE-5930:
---

 Summary: Long running transactions while load test with 1 memory 
policy, 14 caches and >=24 servers
 Key: IGNITE-5930
 URL: https://issues.apache.org/jira/browse/IGNITE-5930
 Project: Ignite
  Issue Type: Bug
Affects Versions: 1.9
Reporter: Ksenia Rybakova


Load config:
- CacheRandomOperation benchmark
- Caches:
atomic partitioned cache
2 tx partitioned caches
2 atomic partitioned onheap caches
2 atomic replicated onheap caches
2 tx partitioned onheap caches
2 tx replicated onheap caches
3 partitioned atomic indexed caches
- Preload amount: 500K-2.5M entries per cache
- Key range: 1M
- Persistent store is disabled
- 24 server nodes or more (3 nodes per physical host), 16 client nodes

Long running transactions occur after preloading, and throughput drops to 0. 
Note that with 16 server nodes transactions are OK.

Complete configs are attached.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5931) .NET: Incorrect conflicting type error

2017-08-04 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-5931:
--

 Summary: .NET: Incorrect conflicting type error
 Key: IGNITE-5931
 URL: https://issues.apache.org/jira/browse/IGNITE-5931
 Project: Ignite
  Issue Type: Bug
  Components: platforms
Affects Versions: 2.1
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 2.2


An incorrect conflicting type error can occur when registering the same type from 
multiple threads simultaneously:

{code}
Conflicting type IDs [type1='Row', type2='Row', typeId=113114]
{code}

{{Marshaller.AddType}} should check whether the existing type is the same as the 
new one (as we do in {{AddUserType}}; see other usages of 
{{ThrowConflictingTypeError}}).
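A Java-flavored sketch of the intended check (illustrative names only, not the 
actual .NET marshaller code):

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

final class TypeRegistry {
    private final ConcurrentMap<Integer, String> types = new ConcurrentHashMap<>();

    void addType(int typeId, String typeName) {
        String existing = types.putIfAbsent(typeId, typeName);

        // Two threads registering the same type concurrently is not a conflict;
        // fail only when genuinely different names map to one ID.
        if (existing != null && !existing.equals(typeName))
            throw new IllegalStateException("Conflicting type IDs [type1=" + existing +
                ", type2=" + typeName + ", typeId=" + typeId + ']');
    }
}
{code}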





[jira] [Created] (IGNITE-5929) Add application name to query

2017-08-04 Thread Alexey Kuznetsov (JIRA)
Alexey Kuznetsov created IGNITE-5929:


 Summary: Add application name to query
 Key: IGNITE-5929
 URL: https://issues.apache.org/jira/browse/IGNITE-5929
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Affects Versions: 2.1
Reporter: Alexey Kuznetsov
 Fix For: 2.2


It would be a useful feature to attach the application name to a query.
This would make it possible to tell from logs and UI tools which application 
executed a given query.





Re: ODBC API conformance page updated

2017-08-04 Thread Vladimir Ozerov
Igor,

Very cool!

On Fri, Aug 4, 2017 at 1:15 PM, Igor Sapego  wrote:

> Hi Igniters,
>
> I've updated the ODBC API conformance page [1],
> so take a look if you are interested.
>
> Also, make sure you edit this page if you are adding
> new features, or modifying existing features of the Ignite
> ODBC driver.
>
> [1] - https://apacheignite.readme.io/v2.1/docs/conformance
>
> Best Regards,
> Igor
>


Re: Ignite.close(), G.stop(name, true). Change flag cancel to false

2017-08-04 Thread Vyacheslav Daradur
Hi guys, I'll just add my opinion if you don't mind.

> Maybe we should implement Vladimir's suggestion to flush the pages
> without respect to the cancel flag? Are there any thoughts on this?

I think it's a good suggestion.
But in the case of unit testing, a developer usually calls #stopAllGrids() at
the end of all tests.
The method GridAbstractTest#stopAllGrids() is built on top of
G.stop(name, true).
IMO, in that case flushing checkpoints isn't necessary.


2017-08-04 13:25 GMT+03:00 Dmitry Pavlov :

> Thank you all for the replies.
>
> I like the idea of replacing 'crashed' with 'stop'; the word 'crashed' is
> really confusing.
>
> Still, if I call close() on a file, all data is flushed to disk, but for
> ignite.close() the checkpoint may not be finished.
>
> Maybe we should implement Vladimir's suggestion to flush the pages without
> respect to the cancel flag? Are there any thoughts on this?
>
> Fri, Aug 4, 2017 at 11:12, Vladimir Ozerov :
>
> > Ivan,
> >
> > Hanging on Ignite.close() will confuse user no more than restore on start
> > after graceful shutdown. IMO correct approach here would be to:
> > 1) wait for checkpoint completion irrespective of "cancel" flag, because
> > this flag relates to compute jobs only as per documentation
> > 2) print an INFO message to the log that we are saving a checkpoint due
> to
> > node stop.
> >
> > On Fri, Aug 4, 2017 at 10:54 AM, Ivan Rakov 
> wrote:
> >
> > > Dmitriy,
> > >
> > > From my point of view, invoking stop(true) is correct behaviour.
> > >
> > > Stopping node in the middle of checkpoint is absolutely valid case.
> > That's
> > > how persistence works - node will restore memory state if stopped at
> any
> > > moment.
> > > On the other hand, checkpoint may last for a long time. Thread hanging
> on
> > > Ignite.close() may confuse user much more than "crashed in the middle
> of
> > > checkpoint" message.
> > >
> > > Best Regards,
> > > Ivan Rakov
> > >
> > >
> > > On 03.08.2017 22:34, Dmitry Pavlov wrote:
> > >
> > >> Hi Igniters,
> > >>
> > >> I’ve created the simplest example using Ignite 2.1 and persistence
> (see
> > >> the
> > >> code below). I've included Ignite instance into try-with-resources (I
> > >> think
> > >> it is default approach for AutoCloseable inheritors).
> > >>
> > >> But next time when I started this server I got message: “Ignite node
> > >> crashed in the middle of checkpoint. Will restore memory state and
> > enforce
> > >> checkpoint on node start.”
> > >>
> > >> This happens because in close() method we don’t wait checkpoint to
> end.
> > I
> > >> am afraid this behaviour may confuse users on the first use of the
> > >> product.
> > >>
> > >> What do you think if we change Ignite.close() functioning from
> > stop(true)
> > >> to stop(false)? This will allow to wait checkpoints to finish by
> > default.
> > >>
> > >> Alternatively, we may improve example to show how to shutdown server
> > node
> > >> correctly. Current PersistentStoreExample does not cover server node
> > >> shutdown.
> > >>
> > >> Any concerns on close() method change?
> > >>
> > >> Sincerely,
> > >> Dmitriy Pavlov
> > >>
> > >>
> > >> IgniteConfiguration cfg = new IgniteConfiguration();
> > > >> cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());
> > >>
> > >> try (Ignite ignite = Ignition.start(cfg)){
> > >> ignite.active(true);
> > > >> IgniteCache cache = ignite.getOrCreateCache("test");
> > >>
> > >> for (int i = 0; i < 1000; i++)
> > >>   cache.put("Key" + i, "Value" + i);
> > >> }
> > >>
> > >>
> > >
> >
>



-- 
Best Regards, Vyacheslav D.


Re: Ignite.close(), G.stop(name, true). Change flag cancel to false

2017-08-04 Thread Dmitry Pavlov
Thank you all for the replies.

I like the idea of replacing 'crashed' with 'stop'; the word 'crashed' is
really confusing.

Still, if I call close() on a file, all data is flushed to disk, but for
ignite.close() the checkpoint may not be finished.

Maybe we should implement Vladimir's suggestion to flush the pages without
respect to the cancel flag? Are there any thoughts on this?
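By the way, until anything changes, a user who needs the checkpoint flushed can
bypass close() and stop with cancel=false explicitly. A minimal sketch using the
existing Ignition.stop API (configuration as in the example quoted below):

Ignite ignite = Ignition.start(cfg);
ignite.active(true);

// ... cache operations ...

// cancel=false: wait for ongoing operations, including a running
// checkpoint, to complete before the node stops.
Ignition.stop(ignite.name(), false);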

Fri, Aug 4, 2017 at 11:12, Vladimir Ozerov :

> Ivan,
>
> Hanging on Ignite.close() will confuse user no more than restore on start
> after graceful shutdown. IMO correct approach here would be to:
> 1) wait for checkpoint completion irrespective of "cancel" flag, because
> this flag relates to compute jobs only as per documentation
> 2) print an INFO message to the log that we are saving a checkpoint due to
> node stop.
>
> On Fri, Aug 4, 2017 at 10:54 AM, Ivan Rakov  wrote:
>
> > Dmitriy,
> >
> > From my point of view, invoking stop(true) is correct behaviour.
> >
> > Stopping node in the middle of checkpoint is absolutely valid case.
> That's
> > how persistence works - node will restore memory state if stopped at any
> > moment.
> > On the other hand, checkpoint may last for a long time. Thread hanging on
> > Ignite.close() may confuse user much more than "crashed in the middle of
> > checkpoint" message.
> >
> > Best Regards,
> > Ivan Rakov
> >
> >
> > On 03.08.2017 22:34, Dmitry Pavlov wrote:
> >
> >> Hi Igniters,
> >>
> >> I’ve created the simplest example using Ignite 2.1 and persistence (see
> >> the
> >> code below). I've included Ignite instance into try-with-resources (I
> >> think
> >> it is default approach for AutoCloseable inheritors).
> >>
> >> But next time when I started this server I got message: “Ignite node
> >> crashed in the middle of checkpoint. Will restore memory state and
> enforce
> >> checkpoint on node start.”
> >>
> >> This happens because in close() method we don’t wait checkpoint to end.
> I
> >> am afraid this behaviour may confuse users on the first use of the
> >> product.
> >>
> >> What do you think if we change Ignite.close() functioning from
> stop(true)
> >> to stop(false)? This will allow to wait checkpoints to finish by
> default.
> >>
> >> Alternatively, we may improve example to show how to shutdown server
> node
> >> correctly. Current PersistentStoreExample does not cover server node
> >> shutdown.
> >>
> >> Any concerns on close() method change?
> >>
> >> Sincerely,
> >> Dmitriy Pavlov
> >>
> >>
> >> IgniteConfiguration cfg = new IgniteConfiguration();
> >> cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());
> >>
> >> try (Ignite ignite = Ignition.start(cfg)){
> >> ignite.active(true);
> >> IgniteCache cache = ignite.getOrCreateCache("test");
> >>
> >> for (int i = 0; i < 1000; i++)
> >>   cache.put("Key" + i, "Value" + i);
> >> }
> >>
> >>
> >
>


Re: Set default cache synchronization mode to FULL_SYNC

2017-08-04 Thread Anton Vinogradov
+1 to change PRIMARY_SYNC to FULL_SYNC and keep readFromBackup=true

Dmitriy,
Why should we wait for 3.0?
This change looks safe to me.
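For reference, this is what a user has to configure explicitly today to get the
proposed default; a minimal sketch against the plain cache configuration API:

CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");

ccfg.setCacheMode(CacheMode.PARTITIONED);
ccfg.setBackups(1);

// Proposed default: an update completes only after backups are written,
// so a primary crash right after the client gets a response cannot lose
// data silently.
ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);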

On Wed, Aug 2, 2017 at 9:51 PM, Dmitriy Setrakyan 
wrote:

> We have to wait with any default changes to 3.0, unfortunately.
>
> On Wed, Aug 2, 2017 at 8:30 PM, Vladimir Ozerov 
> wrote:
>
> > Not sure about readFromBackup, but changing PRIMARY_SYNC to FULL_SYNC
> looks
> > safe to me. Any other thoughts?
> >
> > Wed, Aug 2, 2017 at 21:10, Denis Magda :
> >
> > > +1 for both suggestions but I’m not sure we can do the change till 3.0.
> > >
> > > —
> > > Denis
> > >
> > > > On Aug 2, 2017, at 1:27 AM, Vladimir Ozerov 
> > > wrote:
> > > >
> > > > +1 for readFromBackup=false as well :-) Another example of default
> > value
> > > > with subtle effects.
> > > >
> > > > On Wed, Aug 2, 2017 at 11:11 AM, Alexey Goncharuk <
> > > > alexey.goncha...@gmail.com> wrote:
> > > >
> > > >> Vladimir,
> > > >>
> > > >> Personally, I agree that we should put correctness over performance,
> > > >> however (1) is not a correct statement for TRANSACTIONAL caches. A
> > > >> transactional client always validates the result of an operation and
> > > throw
> > > >> a correct exception if operation failed. (1) is true for ATOMIC
> > caches,
> > > >> though.
> > > >>
> > > >> A user can get in trouble in this default for both TX and ATOMIC
> > caches
> > > if
> > > >> a put is performed from a backup node and readFromBackup is set to
> > > false.
> > > >> In this case, the simple read-after-write scenario may fail. I would
> > > rather
> > > >> set readFromBackup to false by default, however, this fixes neither
> > the
> > > SQL
> > > >> nor ATOMIC caches issues.
> > > >>
> > > >> +1 for the change, and extend the warning for partitioned caches
> with
> > > >> readFromBackup=true and PRIMARY_SYNC.
> > > >>
> > > >>
> > > >>
> > > >> 2017-08-02 10:58 GMT+03:00 Vladimir Ozerov :
> > > >>
> > > >>> Igniters,
> > > >>>
> > > >>> I want to re-iterate idea of changing default synchronization mode
> > from
> > > >>> PRIMARY_SYNC to FULL_SYNC.
> > > >>>
> > > >>> Motivation:
> > > >>> 1) If user set [cacheMode=PARTITIONED, backups=1] he still could
> > lose
> > > >> data
> > > >>> silently. Because primary node could report success to the client
> and
> > > >> then
> > > >>> crash before data is propagated to backups.
> > > >>> 2) If user set [cacheMode=REPLICATED] and use SQL, he might
> get
> > > >>> invalid results if cache is being updated concurrently - well known
> > > >> issue.
> > > >>>
> > > >>> The only advantage of PRIMARY_SYNC is slightly better performance,
> > but
> > > we
> > > >>> should prefer correctness over performance.
> > > >>>
> > > >>> Proposed changes:
> > > >>> 1) Make FULL_SYNC default;
> > > >>> 2) Print a warning about possibly incorrect SQL results if
> REPLICATED
> > > >> cache
> > > >>> is started in PRIMARY_SYNC mode.
> > > >>>
> > > >>> Thoughts?
> > > >>>
> > > >>> Vladimir.
> > > >>>
> > > >>
> > >
> > >
> >
>


[GitHub] ignite pull request #2392: IGNITE-5923: ODBC: SQLGetTypeInfo now works with ...

2017-08-04 Thread isapego
Github user isapego closed the pull request at:

https://github.com/apache/ignite/pull/2392




ODBC API conformance page updated

2017-08-04 Thread Igor Sapego
Hi Igniters,

I've updated the ODBC API conformance page [1],
so take a look if you are interested.

Also, make sure you edit this page if you are adding
new features, or modifying existing features of the Ignite
ODBC driver.

[1] - https://apacheignite.readme.io/v2.1/docs/conformance

Best Regards,
Igor


[jira] [Created] (IGNITE-5928) .NET: Get rid of BinaryStreamBase

2017-08-04 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-5928:
--

 Summary: .NET: Get rid of BinaryStreamBase
 Key: IGNITE-5928
 URL: https://issues.apache.org/jira/browse/IGNITE-5928
 Project: Ignite
  Issue Type: Improvement
  Components: platforms
Reporter: Pavel Tupitsyn
Priority: Minor


{{BinaryStreamBase}} is an abstract class with a single inheritor 
{{BinaryHeapStream}}.
Get rid of it for the sake of simplicity (and probably performance).
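The refactoring shape, sketched in Java for brevity (the real code is C#; the 
names below are illustrative):

{code}
// Before: an abstract base with exactly one inheritor.
abstract class BinaryStreamBase { /* shared state and helpers */ }

class BinaryHeapStream extends BinaryStreamBase { /* the only subclass */ }

// After: a single final class; simpler, and stream calls no longer
// need virtual dispatch.
final class MergedBinaryHeapStream { /* state, helpers and stream logic merged */ }
{code}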





Re: [IGNITE-5717] improvements of MemoryPolicy default size

2017-08-04 Thread dsetrakyan
But why? We allocate the memory, so we should know when it runs out. What am I 
missing?

⁣D.​

On Aug 4, 2017, 11:55 AM, at 11:55 AM, Sergey Chugunov 
 wrote:
>I used GC and Java only as an example; they are not applicable to
>the Ignite
>case where we manage offheap memory.
>
>My point is that there is no easy way to implement this feature in
>Ignite,
>and more time is needed to properly design it and account for all
>risks.
>
>Thanks,
>Sergey.
>
>On Fri, Aug 4, 2017 at 12:44 PM,  wrote:
>
>> Hang on. I thought we were talking about offheap size, GC should not
>be
>> relevant. Am I wrong?
>>
>> ⁣D.​
>>
>> On Aug 4, 2017, 11:38 AM, at 11:38 AM, Sergey Chugunov <
>> sergey.chugu...@gmail.com> wrote:
>> >Do you see an obvious way of implementing it?
>> >
>> >In java there is a heap and GC working on it. And for instance, it
>is
>> >possible to make a decision to throw an OOM based on some gc
>metrics.
>> >
>> >I may be wrong but I don't see a mechanism in Ignite to use it right
>> >away
>> >for such purposes.
>> >And implementing something without thorough planning brings huge
>risk
>> >of
>> >false positives with nodes stopping when they don't have to.
>> >
>> >That's why I think it must be implemented and intensively tested as
>> >part of
>> >a separate ticket.
>> >
>> >Thanks,
>> >Sergey.
>> >
>> >On Fri, Aug 4, 2017 at 12:18 PM,  wrote:
>> >
>> >> Without #3, the #1 and #2 make little sense.
>> >>
>> >> Why is #3 so difficult?
>> >>
>> >> ⁣D.​
>> >>
>> >> On Aug 4, 2017, 10:46 AM, at 10:46 AM, Sergey Chugunov <
>> >> sergey.chugu...@gmail.com> wrote:
>> >> >Dmitriy,
>> >> >
>> >> >Last item makes perfect sense to me, one may think of it as an
>> >> >"OutOfMemoryException" in java.
>> >> >However, it looks like such feature requires considerable efforts
>to
>> >> >properly design and implement it, so I would propose to create a
>> >> >separate
>> >> >ticket and agree upon target version for it.
>> >> >
>> >> >Items #1 and #2 will be implemented under IGNITE-5717. Makes
>sense?
>> >> >
>> >> >Thanks,
>> >> >Sergey.
>> >> >
>> >> >On Thu, Aug 3, 2017 at 4:34 AM, Dmitriy Setrakyan
>> >> >
>> >> >wrote:
>> >> >
>> >> >> Here is what we should do:
>> >> >>
>> >> >>1. Pick an acceptable number. Does not matter if it is 10%
>or
>> >50%.
>> >> >>2. Print the allocated memory in *BOLD* letters into the
>log.
>> >> >>3. Make sure that Ignite server never hangs due to the low
>> >memory
>> >> >issue.
>> >> >>We should sense it and kick the node out automatically,
>again
>> >with
>> >> >a
>> >> >> *BOLD*
>> >> >>message in the log.
>> >> >>
>> >> >>  Is this possible?
>> >> >>
>> >> >> D.
>> >> >>
>> >> >> On Wed, Aug 2, 2017 at 6:09 PM, Vladimir Ozerov
>> >> >
>> >> >> wrote:
>> >> >>
>> >> >> > My proposal is 10% instead of 80%.
>> >> >> >
> >> >> > Wed, Aug 2, 2017 at 18:54, Denis Magda :
>> >> >> >
>> >> >> > > Vladimir, Dmitriy P.,
>> >> >> > >
>> >> >> > > Please see inline
>> >> >> > >
>> >> >> > > > On Aug 2, 2017, at 7:20 AM, Vladimir Ozerov
>> >> >
>> >> >> > > wrote:
>> >> >> > > >
>> >> >> > > > Denis,
>> >> >> > > >
>> >> >> > > > The reason is that product should not hang user's
>computer.
>> >How
>> >> >else
>> >> >> > this
>> >> >> > > > could be explained? I am developer. I start Ignite, 1
>node,
>> >2
>> >> >nodes,
>> >> >> X
>> >> >> > > > nodes, observe how they join topology. Add one key, 10
>keys,
>> >1M
>> >> >keys.
>> >> >> > > Then
>> >> >> > > > I do a bug in example and load 100M keys accidentally -
>> >restart
>> >> >the
>> >> >> > > > computer. Correct behavior is to have small "maxMemory"
>by
>> >> >default to
>> >> >> > > avoid
>> >> >> > > > that. User should get exception instead of hang. E.g.
>Java's
>> >> >"-Xmx"
>> >> >> is
>> >> >> > > > typically 25% of RAM - more adequate value, comparing to
>> >> >Ignite.
>> >> >> > > >
>> >> >> > >
>> >> >> > > Right, the developer was educated about the Java heap
>> >parameters
>> >> >and
>> >> >> > > limited the overall space preferring OOM to the laptop
>> >> >suspension. Who
>> >> >> > > knows how he got to the point that 25% RAM should be used.
>> >That
>> >> >might
>> >> >> > have
>> >> >> > > been deep knowledge about JVM or he faced several hangs
>while
>> >> >testing
>> >> >> the
>> >> >> > > application.
>> >> >> > >
>> >> >> > > Anyway, JVM creators didn’t decide to predefine the Java
>heap
>> >to
>> >> >a
>> >> >> static
>> >> >> > > value to avoid the situations like above. So should not we
>as
>> >a
>> >> >> platform.
>> >> >> > > Educate people about the Ignite memory behavior like Sun
>did
>> >for
>> >> >the
>> >> >> Java
>> >> >> > > heap but do not try to solve the lack of knowledge with the
>> >> >default
>> >> >> > static
>> >> >> > > memory size.
>> >> >> > >
>> >> >> > >
>> >> >> > > > It doesn't matter whether you use persistence or not.
>> >> 

Re: [IGNITE-5717] improvements of MemoryPolicy default size

2017-08-04 Thread dsetrakyan
Hang on. I thought we were talking about offheap size; GC should not be 
relevant. Am I wrong?

⁣D.​

On Aug 4, 2017, 11:38 AM, at 11:38 AM, Sergey Chugunov 
 wrote:
>Do you see an obvious way of implementing it?
>
>In java there is a heap and GC working on it. And for instance, it is
>possible to make a decision to throw an OOM based on some gc metrics.
>
>I may be wrong but I don't see a mechanism in Ignite to use it right
>away
>for such purposes.
>And implementing something without thorough planning brings huge risk
>of
>false positives with nodes stopping when they don't have to.
>
>That's why I think it must be implemented and intensively tested as
>part of
>a separate ticket.
>
>Thanks,
>Sergey.
>
>On Fri, Aug 4, 2017 at 12:18 PM,  wrote:
>
>> Without #3, the #1 and #2 make little sense.
>>
>> Why is #3 so difficult?
>>
>> ⁣D.​
>>
>> On Aug 4, 2017, 10:46 AM, at 10:46 AM, Sergey Chugunov <
>> sergey.chugu...@gmail.com> wrote:
>> >Dmitriy,
>> >
>> >Last item makes perfect sense to me, one may think of it as an
>> >"OutOfMemoryException" in java.
>> >However, it looks like such feature requires considerable efforts to
>> >properly design and implement it, so I would propose to create a
>> >separate
>> >ticket and agree upon target version for it.
>> >
>> >Items #1 and #2 will be implemented under IGNITE-5717. Makes sense?
>> >
>> >Thanks,
>> >Sergey.
>> >
>> >On Thu, Aug 3, 2017 at 4:34 AM, Dmitriy Setrakyan
>> >
>> >wrote:
>> >
>> >> Here is what we should do:
>> >>
>> >>1. Pick an acceptable number. Does not matter if it is 10% or
>50%.
>> >>2. Print the allocated memory in *BOLD* letters into the log.
>> >>3. Make sure that Ignite server never hangs due to the low
>memory
>> >issue.
>> >>We should sense it and kick the node out automatically, again
>with
>> >a
>> >> *BOLD*
>> >>message in the log.
>> >>
>> >>  Is this possible?
>> >>
>> >> D.
>> >>
>> >> On Wed, Aug 2, 2017 at 6:09 PM, Vladimir Ozerov
>> >
>> >> wrote:
>> >>
>> >> > My proposal is 10% instead of 80%.
>> >> >
> >> > > Wed, Aug 2, 2017 at 18:54, Denis Magda :
>> >> >
>> >> > > Vladimir, Dmitriy P.,
>> >> > >
>> >> > > Please see inline
>> >> > >
>> >> > > > On Aug 2, 2017, at 7:20 AM, Vladimir Ozerov
>> >
>> >> > > wrote:
>> >> > > >
>> >> > > > Denis,
>> >> > > >
>> >> > > > The reason is that product should not hang user's computer.
>How
>> >else
>> >> > this
>> >> > > > could be explained? I am developer. I start Ignite, 1 node,
>2
>> >nodes,
>> >> X
>> >> > > > nodes, observe how they join topology. Add one key, 10 keys,
>1M
>> >keys.
>> >> > > Then
>> >> > > > I do a bug in example and load 100M keys accidentally -
>restart
>> >the
>> >> > > > computer. Correct behavior is to have small "maxMemory" by
>> >default to
>> >> > > avoid
>> >> > > > that. User should get exception instead of hang. E.g. Java's
>> >"-Xmx"
>> >> is
>> >> > > > typically 25% of RAM - more adequate value, comparing to
>> >Ignite.
>> >> > > >
>> >> > >
>> >> > > Right, the developer was educated about the Java heap
>parameters
>> >and
>> >> > > limited the overall space preferring OOM to the laptop
>> >suspension. Who
>> >> > > knows how he got to the point that 25% RAM should be used.
>That
>> >might
>> >> > have
>> >> > > been deep knowledge about JVM or he faced several hangs while
>> >testing
>> >> the
>> >> > > application.
>> >> > >
>> >> > > Anyway, JVM creators didn’t decide to predefine the Java heap
>to
>> >a
>> >> static
>> >> > > value to avoid the situations like above. So should not we as
>a
>> >> platform.
>> >> > > Educate people about the Ignite memory behavior like Sun did
>for
>> >the
>> >> Java
>> >> > > heap but do not try to solve the lack of knowledge with the
>> >default
>> >> > static
>> >> > > memory size.
>> >> > >
>> >> > >
>> >> > > > It doesn't matter whether you use persistence or not.
>> >Persistent case
>> >> > > just
>> >> > > > makes this flaw more obvious - you have virtually unlimited
>> >disk, and
>> >> > yet
>> >> > > > you end up with swapping and hang when using Ignite with
>> >default
>> >> > > > configuration. As already explained, the problem is not
>about
>> >> > allocating
>> >> > > > "maxMemory" right away, but about the value of "maxMemory" -
>it
>> >is
>> >> too
>> >> > > big.
>> >> > > >
>> >> > >
>> >> > > How do you know what should be the default then? Why 1 GB? For
>> >> instance,
>> >> > > if I end up having only 1 GB of free memory left and try to
>start
>> >2
>> >> > server
>> >> > > nodes and an application I will face the laptop suspension
>again.
>> >> > >
>> >> > > —
>> >> > > Denis
>> >> > >
>> >> > > > "We had this behavior before" is never an argument. Previous
>> >offheap
>> >> > > > implementation had a lot of flaws, so let's just forget
>about
>> >it.
>> >> > > >
>> >> > > > On Wed, Aug 2, 2017 at 5:08 PM, Denis Magda

Re: [IGNITE-5717] improvements of MemoryPolicy default size

2017-08-04 Thread Sergey Chugunov
Do you see an obvious way of implementing it?

In Java there is a heap and a GC working on it, and it is possible, for
instance, to decide to throw an OOM based on some GC metrics.

I may be wrong, but I don't see a mechanism in Ignite that can be used right
away for such purposes.
And implementing something without thorough planning brings a huge risk of
false positives, with nodes stopping when they don't have to.

That's why I think it must be implemented and intensively tested as part of
a separate ticket.
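Just to make the risk concrete, a naive watchdog would look roughly like the
sketch below (it assumes metrics are enabled via
MemoryPolicyConfiguration.setMetricsEnabled(true); the PAGE_SIZE and
MAX_POLICY_SIZE constants and the 90% threshold are invented for illustration,
and the threshold is exactly where false positives come from):

for (MemoryMetrics m : ignite.memoryMetrics()) {
    // Rough estimate of the policy's current footprint.
    long allocatedBytes = m.getTotalAllocatedPages() * PAGE_SIZE;

    if (allocatedBytes > 0.9 * MAX_POLICY_SIZE)
        log.warning("Memory policy '" + m.getName() + "' is almost full.");
}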

Thanks,
Sergey.

On Fri, Aug 4, 2017 at 12:18 PM,  wrote:

> Without #3, the #1 and #2 make little sense.
>
> Why is #3 so difficult?
>
> ⁣D.​
>
> On Aug 4, 2017, 10:46 AM, at 10:46 AM, Sergey Chugunov <
> sergey.chugu...@gmail.com> wrote:
> >Dmitriy,
> >
> >Last item makes perfect sense to me, one may think of it as an
> >"OutOfMemoryException" in java.
> >However, it looks like such feature requires considerable efforts to
> >properly design and implement it, so I would propose to create a
> >separate
> >ticket and agree upon target version for it.
> >
> >Items #1 and #2 will be implemented under IGNITE-5717. Makes sense?
> >
> >Thanks,
> >Sergey.
> >
> >On Thu, Aug 3, 2017 at 4:34 AM, Dmitriy Setrakyan
> >
> >wrote:
> >
> >> Here is what we should do:
> >>
> >>1. Pick an acceptable number. Does not matter if it is 10% or 50%.
> >>2. Print the allocated memory in *BOLD* letters into the log.
> >>3. Make sure that Ignite server never hangs due to the low memory
> >issue.
> >>We should sense it and kick the node out automatically, again with
> >a
> >> *BOLD*
> >>message in the log.
> >>
> >>  Is this possible?
> >>
> >> D.
> >>
> >> On Wed, Aug 2, 2017 at 6:09 PM, Vladimir Ozerov
> >
> >> wrote:
> >>
> >> > My proposal is 10% instead of 80%.
> >> >
> >> > Wed, Aug 2, 2017 at 18:54, Denis Magda :
> >> >
> >> > > Vladimir, Dmitriy P.,
> >> > >
> >> > > Please see inline
> >> > >
> >> > > > On Aug 2, 2017, at 7:20 AM, Vladimir Ozerov
> >
> >> > > wrote:
> >> > > >
> >> > > > Denis,
> >> > > >
> >> > > > The reason is that product should not hang user's computer. How
> >else
> >> > this
> >> > > > could be explained? I am developer. I start Ignite, 1 node, 2
> >nodes,
> >> X
> >> > > > nodes, observe how they join topology. Add one key, 10 keys, 1M
> >keys.
> >> > > Then
> >> > > > I do a bug in example and load 100M keys accidentally - restart
> >the
> >> > > > computer. Correct behavior is to have small "maxMemory" by
> >default to
> >> > > avoid
> >> > > > that. User should get exception instead of hang. E.g. Java's
> >"-Xmx"
> >> is
> >> > > > typically 25% of RAM - more adequate value, comparing to
> >Ignite.
> >> > > >
> >> > >
> >> > > Right, the developer was educated about the Java heap parameters
> >and
> >> > > limited the overall space preferring OOM to the laptop
> >suspension. Who
> >> > > knows how he got to the point that 25% RAM should be used. That
> >might
> >> > have
> >> > > been deep knowledge about JVM or he faced several hangs while
> >testing
> >> the
> >> > > application.
> >> > >
> >> > > Anyway, JVM creators didn’t decide to predefine the Java heap to
> >a
> >> static
> >> > > value to avoid the situations like above. So should not we as a
> >> platform.
> >> > > Educate people about the Ignite memory behavior like Sun did for
> >the
> >> Java
> >> > > heap but do not try to solve the lack of knowledge with the
> >default
> >> > static
> >> > > memory size.
> >> > >
> >> > >
> >> > > > It doesn't matter whether you use persistence or not.
> >Persistent case
> >> > > just
> >> > > > makes this flaw more obvious - you have virtually unlimited
> >disk, and
> >> > yet
> >> > > > you end up with swapping and hang when using Ignite with
> >default
> >> > > > configuration. As already explained, the problem is not about
> >> > allocating
> >> > > > "maxMemory" right away, but about the value of "maxMemory" - it
> >is
> >> too
> >> > > big.
> >> > > >
> >> > >
> >> > > How do you know what should be the default then? Why 1 GB? For
> >> instance,
> >> > > if I end up having only 1 GB of free memory left and try to start
> >2
> >> > server
> >> > > nodes and an application I will face the laptop suspension again.
> >> > >
> >> > > —
> >> > > Denis
> >> > >
> >> > > > "We had this behavior before" is never an argument. Previous
> >offheap
> >> > > > implementation had a lot of flaws, so let's just forget about
> >it.
> >> > > >
> >> > > > On Wed, Aug 2, 2017 at 5:08 PM, Denis Magda 
> >> wrote:
> >> > > >
> >> > > >> Sergey,
> >> > > >>
> >> > > >> That’s expectable because as we revealed from this discussion
> >the
> >> > > >> allocation works different depending on whether the
> >persistence is
> >> > used
> >> > > or
> >> > > >> not:
> >> > > >>
> >> > > >> 1) In-memory mode (the persistence is disabled) - the space
> >will be
> >> > > >> 

Re: [IGNITE-5717] improvements of MemoryPolicy default size

2017-08-04 Thread dsetrakyan
Without #3, items #1 and #2 make little sense.

Why is #3 so difficult?

⁣D.​

On Aug 4, 2017, 10:46 AM, at 10:46 AM, Sergey Chugunov 
 wrote:
>Dmitriy,
>
>Last item makes perfect sense to me, one may think of it as an
>"OutOfMemoryException" in java.
>However, it looks like such feature requires considerable efforts to
>properly design and implement it, so I would propose to create a
>separate
>ticket and agree upon target version for it.
>
>Items #1 and #2 will be implemented under IGNITE-5717. Makes sense?
>
>Thanks,
>Sergey.
>
>On Thu, Aug 3, 2017 at 4:34 AM, Dmitriy Setrakyan
>
>wrote:
>
>> Here is what we should do:
>>
>>1. Pick an acceptable number. Does not matter if it is 10% or 50%.
>>2. Print the allocated memory in *BOLD* letters into the log.
>>3. Make sure that Ignite server never hangs due to the low memory
>issue.
>>We should sense it and kick the node out automatically, again with
>a
>> *BOLD*
>>message in the log.
>>
>>  Is this possible?
>>
>> D.
>>
>> On Wed, Aug 2, 2017 at 6:09 PM, Vladimir Ozerov
>
>> wrote:
>>
>> > My proposal is 10% instead of 80%.
>> >
> > Wed, Aug 2, 2017 at 18:54, Denis Magda :
>> >
>> > > Vladimir, Dmitriy P.,
>> > >
>> > > Please see inline
>> > >
>> > > > On Aug 2, 2017, at 7:20 AM, Vladimir Ozerov
>
>> > > wrote:
>> > > >
>> > > > Denis,
>> > > >
>> > > > The reason is that product should not hang user's computer. How
>else
>> > this
>> > > > could be explained? I am developer. I start Ignite, 1 node, 2
>nodes,
>> X
>> > > > nodes, observe how they join topology. Add one key, 10 keys, 1M
>keys.
>> > > Then
>> > > > I do a bug in example and load 100M keys accidentally - restart
>the
>> > > > computer. Correct behavior is to have small "maxMemory" by
>default to
>> > > avoid
>> > > > that. User should get exception instead of hang. E.g. Java's
>"-Xmx"
>> is
>> > > > typically 25% of RAM - more adequate value, comparing to
>Ignite.
>> > > >
>> > >
>> > > Right, the developer was educated about the Java heap parameters
>and
>> > > limited the overall space preferring OOM to the laptop
>suspension. Who
>> > > knows how he got to the point that 25% RAM should be used. That
>might
>> > have
>> > > been deep knowledge about JVM or he faced several hangs while
>testing
>> the
>> > > application.
>> > >
>> > > Anyway, JVM creators didn’t decide to predefine the Java heap to
>a
>> static
>> > > value to avoid the situations like above. So should not we as a
>> platform.
>> > > Educate people about the Ignite memory behavior like Sun did for
>the
>> Java
>> > > heap but do not try to solve the lack of knowledge with the
>default
>> > static
>> > > memory size.
>> > >
>> > >
>> > > > It doesn't matter whether you use persistence or not.
>Persistent case
>> > > just
>> > > > makes this flaw more obvious - you have virtually unlimited
>disk, and
>> > yet
>> > > > you end up with swapping and hang when using Ignite with
>default
>> > > > configuration. As already explained, the problem is not about
>> > allocating
>> > > > "maxMemory" right away, but about the value of "maxMemory" - it
>is
>> too
>> > > big.
>> > > >
>> > >
>> > > How do you know what should be the default then? Why 1 GB? For
>> instance,
>> > > if I end up having only 1 GB of free memory left and try to start
>2
>> > server
>> > > nodes and an application I will face the laptop suspension again.
>> > >
>> > > —
>> > > Denis
>> > >
>> > > > "We had this behavior before" is never an argument. Previous
>offheap
>> > > > implementation had a lot of flaws, so let's just forget about
>it.
>> > > >
>> > > > On Wed, Aug 2, 2017 at 5:08 PM, Denis Magda 
>> wrote:
>> > > >
>> > > >> Sergey,
>> > > >>
>> > > >> That’s expectable because as we revealed from this discussion
>the
>> > > >> allocation works different depending on whether the
>persistence is
>> > used
>> > > or
>> > > >> not:
>> > > >>
>> > > >> 1) In-memory mode (the persistence is disabled) - the space
>will be
>> > > >> allocated incrementally until the max threshold is reached.
>Good!
>> > > >>
>> > > >> 2) The persistence mode - the whole space (limited by the max
>> > threshold)
>> > > >> is allocated right away. It’s not surprising that your laptop
>starts
>> > > >> choking.
>> > > >>
>> > > >> So, in my previous response I tried to explain that I can’t
>find any
>> > > >> reason why we should adjust 1). Any reasons except for the
>massive
>> > > >> preloading?
>> > > >>
>> > > >> As for 2), that was a big surprise to reveal this after 2.1
>release.
>> > > >> Definitely we have to fix this somehow.
>> > > >>
>> > > >> —
>> > > >> Denis
>> > > >>
>> > > >>> On Aug 2, 2017, at 6:59 AM, Sergey Chugunov <
>> > sergey.chugu...@gmail.com
>> > > >
>> > > >> wrote:
>> > > >>>
>> > > >>> Denis,
>> > > >>>
>> > > >>> Just a simple example from our own codebase: I tried to
>execute
>> > > >>> 

[GitHub] ignite pull request #2162: IGNITE-5126 JDBC thin driver: support batches

2017-08-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/2162




Re: [IGNITE-5717] improvements of MemoryPolicy default size

2017-08-04 Thread Sergey Chugunov
Dmitriy,

The last item makes perfect sense to me; one may think of it as an
"OutOfMemoryException" in Java.
However, it looks like such a feature requires considerable effort to
properly design and implement, so I would propose creating a separate
ticket and agreeing upon a target version for it.

Items #1 and #2 will be implemented under IGNITE-5717. Makes sense?
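Until then, a minimal sketch of how a user can cap the default policy explicitly
(assuming the 2.1 configuration API):

IgniteConfiguration cfg = new IgniteConfiguration();

MemoryConfiguration memCfg = new MemoryConfiguration();

// Explicit cap for the default memory policy (512 MB here) instead of
// relying on the built-in default this thread is debating.
memCfg.setDefaultMemoryPolicySize(512L * 1024L * 1024L);

cfg.setMemoryConfiguration(memCfg);

Ignite ignite = Ignition.start(cfg);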

Thanks,
Sergey.

On Thu, Aug 3, 2017 at 4:34 AM, Dmitriy Setrakyan 
wrote:

> Here is what we should do:
>
>1. Pick an acceptable number. Does not matter if it is 10% or 50%.
>2. Print the allocated memory in *BOLD* letters into the log.
>3. Make sure that Ignite server never hangs due to the low memory issue.
>We should sense it and kick the node out automatically, again with a
> *BOLD*
>message in the log.
>
>  Is this possible?
>
> D.
>
> On Wed, Aug 2, 2017 at 6:09 PM, Vladimir Ozerov 
> wrote:
>
> > My proposal is 10% instead of 80%.
> >
> > > Wed, Aug 2, 2017 at 18:54, Denis Magda :
> >
> > > Vladimir, Dmitriy P.,
> > >
> > > Please see inline
> > >
> > > > On Aug 2, 2017, at 7:20 AM, Vladimir Ozerov 
> > > wrote:
> > > >
> > > > Denis,
> > > >
> > > > The reason is that product should not hang user's computer. How else
> > this
> > > > could be explained? I am developer. I start Ignite, 1 node, 2 nodes,
> X
> > > > nodes, observe how they join topology. Add one key, 10 keys, 1M keys.
> > > Then
> > > > I do a bug in example and load 100M keys accidentally - restart the
> > > > computer. Correct behavior is to have small "maxMemory" by default to
> > > avoid
> > > > that. User should get exception instead of hang. E.g. Java's "-Xmx"
> is
> > > > typically 25% of RAM - more adequate value, comparing to Ignite.
> > > >
> > >
> > > Right, the developer was educated about the Java heap parameters and
> > > limited the overall space preferring OOM to the laptop suspension. Who
> > > knows how he got to the point that 25% RAM should be used. That might
> > have
> > > been deep knowledge about JVM or he faced several hangs while testing
> the
> > > application.
> > >
> > > Anyway, JVM creators didn’t decide to predefine the Java heap to a
> static
> > > value to avoid the situations like above. So should not we as a
> platform.
> > > Educate people about the Ignite memory behavior like Sun did for the
> Java
> > > heap but do not try to solve the lack of knowledge with the default
> > static
> > > memory size.
> > >
> > >
> > > > It doesn't matter whether you use persistence or not. Persistent case
> > > just
> > > > makes this flaw more obvious - you have virtually unlimited disk, and
> > yet
> > > > you end up with swapping and hang when using Ignite with default
> > > > configuration. As already explained, the problem is not about
> > allocating
> > > > "maxMemory" right away, but about the value of "maxMemory" - it is
> too
> > > big.
> > > >
> > >
> > > How do you know what should be the default then? Why 1 GB? For
> instance,
> > > if I end up having only 1 GB of free memory left and try to start 2
> > server
> > > nodes and an application I will face the laptop suspension again.
> > >
> > > —
> > > Denis
> > >
> > > > "We had this behavior before" is never an argument. Previous offheap
> > > > implementation had a lot of flaws, so let's just forget about it.
> > > >
> > > > On Wed, Aug 2, 2017 at 5:08 PM, Denis Magda 
> wrote:
> > > >
> > > >> Sergey,
> > > >>
> > > >> That’s expectable because as we revealed from this discussion the
> > > >> allocation works different depending on whether the persistence is
> > used
> > > or
> > > >> not:
> > > >>
> > > >> 1) In-memory mode (the persistence is disabled) - the space will be
> > > >> allocated incrementally until the max threshold is reached. Good!
> > > >>
> > > >> 2) The persistence mode - the whole space (limited by the max
> > threshold)
> > > >> is allocated right away. It’s not surprising that your laptop starts
> > > >> choking.
> > > >>
> > > >> So, in my previous response I tried to explain that I can’t find any
> > > >> reason why we should adjust 1). Any reasons except for the massive
> > > >> preloading?
> > > >>
> > > >> As for 2), that was a big surprise to reveal this after 2.1 release.
> > > >> Definitely we have to fix this somehow.
> > > >>
> > > >> —
> > > >> Denis
> > > >>
> > > >>> On Aug 2, 2017, at 6:59 AM, Sergey Chugunov <
> > sergey.chugu...@gmail.com
> > > >
> > > >> wrote:
> > > >>>
> > > >>> Denis,
> > > >>>
> > > >>> Just a simple example from our own codebase: I tried to execute
> > > >>> PersistentStoreExample with default settings and two server nodes
> and
> > > >>> client node got frozen even on initial load of data into the grid.
> > > >>> Although with one server node the example finishes pretty quickly.
> > > >>>
> > > >>> And my laptop isn't the weakest one and has 16 gigs of memory, but
> it
> > > >>> cannot deal with it.
> > > >>>
> 

Nodes which started in separate JVM couldn't stop properly (in tests)

2017-08-04 Thread Vyacheslav Daradur
Hi Igniters,

While working on my task, I found a bug when calling the #stopGrid(name)
method: it produced a ClassCastException. I created a ticket [1].

After it was fixed [2], I saw that nodes which were started in a separate JVM
could remain as operating-system processes.
That was fixed too, but I'm not sure whether the fix is done in the proper way.

Could someone review it?

[1] https://issues.apache.org/jira/browse/IGNITE-5910
[2] https://github.com/apache/ignite/pull/2382

-- 
Best Regards, Vyacheslav D.


[GitHub] ignite pull request #2389: IGNITE-5920: Fix the example. Set CacheKeyConfigu...

2017-08-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/2389




Re: Ignite.close(), G.stop(name, true). Change flag cancel to false

2017-08-04 Thread Vladimir Ozerov
Ivan,

Hanging in Ignite.close() will confuse the user no more than a restore on
start after a graceful shutdown. IMO the correct approach here would be to:
1) wait for checkpoint completion irrespective of the "cancel" flag, because
this flag relates to compute jobs only, as per the documentation;
2) print an INFO message to the log saying that we are saving a checkpoint due
to node stop.

On Fri, Aug 4, 2017 at 10:54 AM, Ivan Rakov  wrote:

> Dmitriy,
>
> From my point of view, invoking stop(true) is correct behaviour.
>
> Stopping node in the middle of checkpoint is absolutely valid case. That's
> how persistence works - node will restore memory state if stopped at any
> moment.
> On the other hand, checkpoint may last for a long time. Thread hanging on
> Ignite.close() may confuse user much more than "crashed in the middle of
> checkpoint" message.
>
> Best Regards,
> Ivan Rakov
>
>
> On 03.08.2017 22:34, Dmitry Pavlov wrote:
>
>> Hi Igniters,
>>
>> I’ve created the simplest example using Ignite 2.1 and persistence (see
>> the
>> code below). I've included Ignite instance into try-with-resources (I
>> think
>> it is default approach for AutoCloseable inheritors).
>>
>> But next time when I started this server I got message: “Ignite node
>> crashed in the middle of checkpoint. Will restore memory state and enforce
>> checkpoint on node start.”
>>
>> This happens because in close() method we don’t wait checkpoint to end. I
>> am afraid this behaviour may confuse users on the first use of the
>> product.
>>
>> What do you think if we change Ignite.close() functioning from stop(true)
>> to stop(false)? This will allow to wait checkpoints to finish by default.
>>
>> Alternatively, we may improve example to show how to shutdown server node
>> correctly. Current PersistentStoreExample does not cover server node
>> shutdown.
>>
>> Any concerns on close() method change?
>>
>> Sincerely,
>> Dmitriy Pavlov
>>
>>
>> IgniteConfiguration cfg = new IgniteConfiguration();
>> cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());
>>
>> try (Ignite ignite = Ignition.start(cfg)){
>> ignite.active(true);
>> IgniteCache cache = ignite.getOrCreateCache("test");
>>
>> for (int i = 0; i < 1000; i++)
>>   cache.put("Key" + i, "Value" + i);
>> }
>>
>>
>


Re: Ignite.close(), G.stop(name, true). Change flag cancel to false

2017-08-04 Thread Alexey Goncharuk
Maybe the "crashed" word is a bit strong here, we can change it to "stop"
and add a message that this is valid if Ignite is stopped by "close()"
method.

2017-08-04 10:54 GMT+03:00 Ivan Rakov :

> Dmitriy,
>
> From my point of view, invoking stop(true) is correct behaviour.
>
> Stopping node in the middle of checkpoint is absolutely valid case. That's
> how persistence works - node will restore memory state if stopped at any
> moment.
> On the other hand, checkpoint may last for a long time. Thread hanging on
> Ignite.close() may confuse user much more than "crashed in the middle of
> checkpoint" message.
>
> Best Regards,
> Ivan Rakov
>
>
> On 03.08.2017 22:34, Dmitry Pavlov wrote:
>
>> Hi Igniters,
>>
>> I’ve created the simplest example using Ignite 2.1 and persistence (see
>> the
>> code below). I've included Ignite instance into try-with-resources (I
>> think
>> it is default approach for AutoCloseable inheritors).
>>
>> But next time when I started this server I got message: “Ignite node
>> crashed in the middle of checkpoint. Will restore memory state and enforce
>> checkpoint on node start.”
>>
>> This happens because in close() method we don’t wait checkpoint to end. I
>> am afraid this behaviour may confuse users on the first use of the
>> product.
>>
>> What do you think if we change Ignite.close() functioning from stop(true)
>> to stop(false)? This will allow to wait checkpoints to finish by default.
>>
>> Alternatively, we may improve example to show how to shutdown server node
>> correctly. Current PersistentStoreExample does not cover server node
>> shutdown.
>>
>> Any concerns on close() method change?
>>
>> Sincerely,
>> Dmitriy Pavlov
>>
>>
>> IgniteConfiguration cfg = new IgniteConfiguration();
>> cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());
>>
>> try (Ignite ignite = Ignition.start(cfg)){
>> ignite.active(true);
>> IgniteCache cache = ignite.getOrCreateCache("test");
>>
>> for (int i = 0; i < 1000; i++)
>>   cache.put("Key" + i, "Value" + i);
>> }
>>
>>
>


Re: Ignite.close(), G.stop(name, true). Change flag cancel to false

2017-08-04 Thread Ivan Rakov

Dmitriy,

From my point of view, invoking stop(true) is correct behaviour.

Stopping a node in the middle of a checkpoint is an absolutely valid case.
That's how persistence works - the node will restore its memory state if
stopped at any moment.
On the other hand, a checkpoint may last for a long time. A thread hanging
in Ignite.close() may confuse the user much more than the "crashed in the
middle of checkpoint" message.


Best Regards,
Ivan Rakov

On 03.08.2017 22:34, Dmitry Pavlov wrote:

Hi Igniters,

I’ve created the simplest example using Ignite 2.1 and persistence (see the
code below). I've included the Ignite instance in try-with-resources (I think
that is the default approach for AutoCloseable implementors).

But the next time I started this server I got the message: “Ignite node
crashed in the middle of checkpoint. Will restore memory state and enforce
checkpoint on node start.”

This happens because in the close() method we don’t wait for the checkpoint
to end. I am afraid this behaviour may confuse users on first use of the
product.

What do you think about changing Ignite.close() from stop(true) to
stop(false)? This would allow checkpoints to finish by default.

Alternatively, we may improve the example to show how to shut down a server
node correctly. The current PersistentStoreExample does not cover server node
shutdown.

Any concerns about the close() method change?

Sincerely,
Dmitriy Pavlov


IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());

try (Ignite ignite = Ignition.start(cfg)) {
    // Activate the cluster: required when persistence is enabled.
    ignite.active(true);

    IgniteCache<String, String> cache = ignite.getOrCreateCache("test");

    for (int i = 0; i < 1000; i++)
        cache.put("Key" + i, "Value" + i);
} // close() is invoked here and currently maps to stop(true).