Ignite.close(), G.stop(name, true). Change flag cancel to false

2017-08-03 Thread Dmitry Pavlov
Hi Igniters,

I've created a simple example using Ignite 2.1 with persistence (see the
code below). I've wrapped the Ignite instance in try-with-resources (I think
this is the default approach for AutoCloseable implementations).

But the next time I started this server I got the message: “Ignite node
crashed in the middle of checkpoint. Will restore memory state and enforce
checkpoint on node start.”

This happens because in the close() method we don't wait for the checkpoint
to finish. I'm afraid this behaviour may confuse users on their first use of
the product.

What do you think about changing Ignite.close() from stop(true) to
stop(false)? This would wait for checkpoints to finish by default.

Alternatively, we could improve the example to show how to shut down a
server node correctly. The current PersistentStoreExample does not cover
server node shutdown.

Any concerns about changing the close() method?

Sincerely,
Dmitriy Pavlov


IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());

try (Ignite ignite = Ignition.start(cfg)) {
    ignite.active(true);

    IgniteCache<String, String> cache = ignite.getOrCreateCache("test");

    for (int i = 0; i < 1000; i++)
        cache.put("Key" + i, "Value" + i);
}
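The difference under discussion can be illustrated without Ignite at all. In the sketch below (ToyNode, its 200 ms "checkpoint" task, and all names are illustrative, not Ignite API), a close() that cancels in-flight work, like stop(true), leaves the "checkpoint" unfinished, while a close() that waits, like stop(false), lets it complete:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Toy "node" whose background task stands in for an in-flight checkpoint.
class ToyNode implements AutoCloseable {
    private final ExecutorService checkpointer = Executors.newSingleThreadExecutor();
    private final AtomicBoolean checkpointDone = new AtomicBoolean(false);
    private final boolean cancel;

    ToyNode(boolean cancel) {
        this.cancel = cancel;
        checkpointer.submit(() -> {
            try {
                Thread.sleep(200);          // simulated checkpoint work
                checkpointDone.set(true);
            } catch (InterruptedException ignored) {
                // cancelled in the middle of the "checkpoint"
            }
        });
    }

    boolean checkpointDone() {
        return checkpointDone.get();
    }

    @Override public void close() throws InterruptedException {
        if (cancel)
            checkpointer.shutdownNow();     // like stop(true): interrupt in-flight work
        else
            checkpointer.shutdown();        // like stop(false): let it finish
        checkpointer.awaitTermination(5, TimeUnit.SECONDS);
    }
}

public class CloseSemantics {
    public static void main(String[] args) throws Exception {
        ToyNode cancelled = new ToyNode(true);
        cancelled.close();
        System.out.println("cancel=true,  checkpoint done: " + cancelled.checkpointDone());

        ToyNode graceful = new ToyNode(false);
        graceful.close();
        System.out.println("cancel=false, checkpoint done: " + graceful.checkpointDone());
    }
}
```

Running main prints done=false for the cancelling close and done=true for the waiting one, which mirrors the "crashed in the middle of checkpoint" warning from the report above.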


Re: Release notes tools for Ignite releases

2017-08-03 Thread Aleksey Chetaev
If the decision is positive, can anyone review my pull request?

2017-08-03 4:28 GMT+03:00 dsetrakyan [via Apache Ignite Developers] <
ml+s2346864n20403...@n4.nabble.com>:

> Agree with Denis. I think that we should treat it as a starting point for
> the release notes, and further update it before releases.
>
> On Wed, Aug 2, 2017 at 10:58 PM, Denis Magda <[hidden email]
> > wrote:
>
> > Guys,
> >
> > We're on the same page that the page has to look more colorful and
> > informative. This is exactly why Spark's page was given as an example.
> >
> > However, I like the template generated by Alex's script, in the sense
> > that you already have something to start working with. Go ahead and
> > remove redundant tickets once the template is ready, apply CSS and other
> > nice formatting, etc. So I would still accept the contribution.
> >
> > Denis
> >
> > On Wednesday, August 2, 2017, Vladimir Ozerov <[hidden email]
> >
> > wrote:
> >
> > > Alex,
> > >
> > > This is not only about the issue types. Release notes are the face of
> > > our product. Generating them from JIRA has several problems which are
> > > unresolvable IMO:
> > > 1) Tickets are created by many dozens of people, so there could be
> > > typos, linguistic errors, etc.
> > > 2) Ticket descriptions are often inconclusive and give no useful
> > > information to users.
> > >
> > > The only way to have sensible release notes is to create them
> > > manually. Instead of forcing many people to follow some rules just to
> > > make this report look sexy, we'd better have one responsible engineer
> > > who will spend an hour once every 1-2 months to make the release notes
> > > correct and meaningful. We can have a template with , , ,  tags for
> > > nice markup.
> > >
> > >
> > > On Wed, Aug 2, 2017 at 10:55 PM, Aleksey Chetaev <[hidden email]
> 
> > > >
> > > wrote:
> > >
> > > > Vladimir,
> > > >
> > > > I agree that the page is not perfect now. I think the problem is in
> > > > Jira issue types. A lot of new features or improvements are created
> > > > as tasks, and often we don't use sub-tasks for dependent issues. I
> > > > may be wrong, but the committer who prepares a release can't create
> > > > this page manually; we need to have some tools.
> > > >
> > > > 2017-08-02 21:35 GMT+03:00 Vladimir Ozerov [via Apache Ignite
> > > Developers] <
> > > > [hidden email] 
> >:
> > > >
> > > > > JIRA = report from JIRA
> > > > >
> > > > > Wed, 2 Aug 2017 at 21:35, Vladimir Ozerov <[hidden email]>:
> > > > >
> > > > > > Denis,
> > > > > >
> > > > > > This page works exactly how I suggested in the beginning of that
> > > > thread:
> > > > > > manually crafted notes on most important features + link to JIRA
> > > report
> > > > > to
> > > > > > see all closed tickets.
> > > > > >
> > > > > > We already have manually crafted and properly grouped release
> > > > > > notes. All we need is to make them a bit more verbose, add some
> > > > > > CSS and publish them on the site. No need to publish the JIRA
> > > > > > report; this is useless noise.
> > > > > >
> > > > > > Wed, 2 Aug 2017 at 21:20, Denis Magda <[hidden email]>:
> > > > > >
> > > > > >> Vladimir,
> > > > > >>
> > > > > >> The goal is to have a page like that:
> > > > > >> https://spark.apache.org/releases/spark-release-2-1-0.html
> > > > > >>
> > > > > >> where a user can go and see all the changes incorporated in the
> > > > > >> release. The header of the file can be custom: you can list
> > > > > >> major achievements with extra explanation. Going forward we can
> > > > > >> improve the page layout, design and content, but we definitely
> > > > > >> need a page like that so that users can see the changes without
> > > > > >> downloading a release and looking up RELEASE_NOTES.txt (which is
> > > > > >> not that descriptive either).
> > > > > >>
> > > > > >> —
> > > > > >> Denis
> > > > > >>
> > > > > >> > On Aug 2, 2017, at 8:36 AM, Vladimir Ozerov <[hidden email]
> > > > > >
> > > > > >> wrote:
> > > > > >> >
> > > > > >> > Alex,
> > > > > >> >
> > > > > >> > In AI 2.1 we fixed several hundred issues. Why do you think
> > > > > >> > there is a single person interested in reviewing all of them?
> > > > > >> > E.g. we added a new JDBC driver. I do not see it in the list
> > > > > >> > of major features, nor do I need to know that this task was
> > > > > >> > split into 20 smaller sub-tasks, each of which is listed in
> > > > > >> > the report.

Hang when near cache is used

2017-08-03 Thread Valentin Kulichenko
Folks,

One of the users reported an issue with near cache in 2.0:
https://issues.apache.org/jira/browse/IGNITE-5926

There is a reproducer attached; I don't see anything obviously wrong, but I
can reproduce the issue. Can someone take a deeper look?

-Val


Re: Spark Data Frame support in Ignite

2017-08-03 Thread Valentin Kulichenko
This JDBC integration is just a Spark data source, which means that Spark
will fetch the data into its local memory first, and only then apply
filters, aggregations, etc. This is obviously slow and doesn't use all the
advantages Ignite provides.

To create useful and valuable integration, we should create a custom
Strategy that will convert Spark's logical plan into a SQL query and
execute it directly on Ignite.

-Val

On Thu, Aug 3, 2017 at 12:12 AM, Dmitriy Setrakyan 
wrote:

> On Thu, Aug 3, 2017 at 9:04 AM, Jörn Franke  wrote:
>
> > I think the development effort would still be higher. Everything would
> > have to be put via JDBC into Ignite, then checkpointing would have to be
> > done via JDBC (again, additional development effort), plus a lot of
> > conversion from Spark's internal format to JDBC and back to Ignite's
> > internal format. I do not see pagination as a useful feature for
> > managing large data volumes from databases - on the contrary, it is
> > very inefficient (and one would have to implement logic to fetch all
> > pages). Pagination was also never intended for fetching large data
> > volumes, but for web pages showing a small result set over several
> > pages, where the user can click manually for the next page (which they
> > don't do most of the time anyway).
> >
> > While it might be a quick solution, I think a deeper integration than
> > JDBC would be more beneficial.
> >
>
> Jorn, I completely agree. However, we have not been able to find a
> contributor for this feature. You sound like you have sufficient domain
> expertise in Spark and Ignite. Would you be willing to help out?
>
>
> > > On 3. Aug 2017, at 08:57, Dmitriy Setrakyan 
> > wrote:
> > >
> > >> On Thu, Aug 3, 2017 at 8:45 AM, Jörn Franke 
> > wrote:
> > >>
> > >> I think the JDBC one is less efficient and slower, and requires too
> > >> much development effort. You can also check the integration of
> > >> Alluxio with Spark.
> > >>
> > >
> > > As far as I know, Alluxio is a file system, so it cannot use JDBC.
> > > Ignite, on the other hand, is an SQL system and works well with JDBC.
> > > As for the development effort, we are dealing with SQL, so I am not
> > > sure why JDBC would be harder.
> > >
> > > Generally speaking, until Ignite provides native data frame
> integration,
> > > having JDBC-based integration out of the box is minimally acceptable.
> > >
> > >
> > >> Then, in general, I think JDBC was never designed for large data
> > >> volumes. It is for executing queries and getting a small or
> > >> aggregated result set back, or alternatively for inserting/updating
> > >> single rows.
> > >>
> > >
> > > Agree in general. However, Ignite JDBC is designed to work with larger
> > data
> > > volumes and supports data pagination automatically.
> > >
> > >
> > >>> On 3. Aug 2017, at 08:17, Dmitriy Setrakyan 
> > >> wrote:
> > >>>
> > >>> Jorn, thanks for your feedback!
> > >>>
> > >>> Can you explain how the direct support would be different from the
> JDBC
> > >>> support?
> > >>>
> > >>> Thanks,
> > >>> D.
> > >>>
> >  On Thu, Aug 3, 2017 at 7:40 AM, Jörn Franke 
> > >> wrote:
> > 
> >  These are two different things. Spark applications themselves do not
> >  use JDBC - it is more for non-Spark applications to access Spark
> >  DataFrames.
> > 
> >  Direct support by Ignite would make more sense. Although you have
> >  IGFS in theory, if the user is using HDFS, which might not be the
> >  case. It is now also very common to use object stores, such as S3.
> >  Direct support could be leveraged for interactive analysis or for
> >  different Spark applications sharing data.
> > 
> > > On 3. Aug 2017, at 05:12, Dmitriy Setrakyan  >
> >  wrote:
> > >
> > > Igniters,
> > >
> > > We have had the integration with Spark Data Frames on our roadmap
> > for a
> > > while:
> > > https://issues.apache.org/jira/browse/IGNITE-3084
> > >
> > > However, while browsing the Spark documentation, I came across the
> > > generic JDBC data frame support in Spark:
> > > https://spark.apache.org/docs/latest/sql-programming-guide.html#jdbc-to-other-databases
> > >
> > > Given that Ignite has a JDBC driver, does it mean that it
> > transitively
> >  also
> > > supports Spark data frames? If yes, we should document it.
> > >
> > > D.
> > 
> > >>
> >
>


[GitHub] ignite pull request #2395: IGNITE-5927 .NET: Fix DataTable serialization

2017-08-03 Thread ptupitsyn
GitHub user ptupitsyn opened a pull request:

https://github.com/apache/ignite/pull/2395

IGNITE-5927 .NET: Fix DataTable serialization



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ptupitsyn/ignite IGNITE-5927

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2395.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2395






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (IGNITE-5927) .NET: DataTable can't be serialized

2017-08-03 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-5927:
--

 Summary: .NET: DataTable can't be serialized
 Key: IGNITE-5927
 URL: https://issues.apache.org/jira/browse/IGNITE-5927
 Project: Ignite
  Issue Type: Bug
  Components: platforms
Affects Versions: 2.0
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 2.2


{{System.Data.DataTable}} can't be serialized:

{code}
cache.Put(1, new DataTable());
{code}

results in exception:

{code}
System.InvalidCastException: Unable to cast object of type 
'Apache.Ignite.Core.Impl.Binary.BinaryWriter' to type 'System.IConvertible'.
   at System.Convert.ToBoolean(Object value, IFormatProvider provider)
   at System.Data.DataTable.GetObjectData(SerializationInfo info, 
StreamingContext context)
   at Apache.Ignite.Core.Impl.Binary.SerializableSerializer.WriteBinary[T](T 
obj, BinaryWriter writer) in 
S:\W\incubator-ignite\modules\platforms\dotnet\Apache.Ignite.Core\Impl\Binary\SerializableSerializer.cs:line
 64
   at Apache.Ignite.Core.Impl.Binary.BinaryWriter.Write[T](T obj) in 
S:\W\incubator-ignite\modules\platforms\dotnet\Apache.Ignite.Core\Impl\Binary\BinaryWriter.cs:line
 1224
   at Apache.Ignite.Core.Impl.Binary.Marshaller.Marshal[T](T val, IBinaryStream 
stream) in 
S:\W\incubator-ignite\modules\platforms\dotnet\Apache.Ignite.Core\Impl\Binary\Marshaller.cs:line
 159
   at Apache.Ignite.Core.Impl.Binary.Marshaller.Marshal[T](T val) in 
S:\W\incubator-ignite\modules\platforms\dotnet\Apache.Ignite.Core\Impl\Binary\Marshaller.cs:line
 144
{code}

StackOverflow question: 
https://stackoverflow.com/questions/45490249/how-to-store-datatable-in-apache-ignite



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5926) Puts on near cache hangs

2017-08-03 Thread vinay (JIRA)
vinay created IGNITE-5926:
-

 Summary: Puts on near cache hangs
 Key: IGNITE-5926
 URL: https://issues.apache.org/jira/browse/IGNITE-5926
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.1, 2.0
Reporter: vinay


Cache puts into a near cache on a client node hang after putting the same
number of keys each run (if the keys and values are the same during the
test). Most probably the problem occurs when the cache on the server reaches
its max memory and starts page eviction. Attaching a Java class with a main
method to reproduce the problem. If the near cache is not used, the same
test works fine.



Steps to reproduce
# Start a server node with one memory region with max size 100 MB and page
eviction strategy RANDOM_LRU. Set this memory region as the default. Create
a REPLICATED cache as part of the server's IgniteConfiguration.
# Start a client node and create a near cache for the cache created on the
server. Keep the near cache initial size and max size at 1000 with the LRU
eviction policy.
# Start an infinite while loop to put objects into the near cache.
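For reference, the steps above roughly correspond to the configuration sketch below. It is written from memory against the Ignite 2.1 public API and is untested; the instance names, the cache name "test", and the key/value shapes are illustrative assumptions, not taken from the class attached to the ticket. By design, the program is expected to hang in the put loop once server-side page eviction starts, which is the reported bug:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataPageEvictionMode;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.MemoryConfiguration;
import org.apache.ignite.configuration.MemoryPolicyConfiguration;
import org.apache.ignite.configuration.NearCacheConfiguration;

public class NearCacheHangRepro {
    public static void main(String[] args) {
        // Step 1: server with a 100 MB default memory region using
        // RANDOM_LRU page eviction, and a REPLICATED cache.
        MemoryPolicyConfiguration plc = new MemoryPolicyConfiguration()
            .setName("limited")
            .setMaxSize(100L * 1024 * 1024)
            .setPageEvictionMode(DataPageEvictionMode.RANDOM_LRU);

        IgniteConfiguration srvCfg = new IgniteConfiguration()
            .setIgniteInstanceName("server")
            .setMemoryConfiguration(new MemoryConfiguration()
                .setMemoryPolicies(plc)
                .setDefaultMemoryPolicyName("limited"))
            .setCacheConfiguration(new CacheConfiguration<Integer, byte[]>("test")
                .setCacheMode(CacheMode.REPLICATED));

        Ignition.start(srvCfg);

        // Step 2: client node with a near cache capped at 1000 entries (LRU).
        Ignite client = Ignition.start(new IgniteConfiguration()
            .setIgniteInstanceName("client")
            .setClientMode(true));

        NearCacheConfiguration<Integer, byte[]> nearCfg =
            new NearCacheConfiguration<Integer, byte[]>()
                .setNearStartSize(1000)
                .setNearEvictionPolicy(new LruEvictionPolicy<>(1000));

        IgniteCache<Integer, byte[]> cache =
            client.getOrCreateNearCache("test", nearCfg);

        // Step 3: infinite put loop; per the report, this eventually hangs
        // once the server region fills and page eviction kicks in.
        for (int i = 0; ; i++)
            cache.put(i % 100_000, new byte[1024]);
    }
}
```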





[GitHub] ignite pull request #2394: IGNITE-5880: BLAS integration phase 2

2017-08-03 Thread ybabak
GitHub user ybabak opened a pull request:

https://github.com/apache/ignite/pull/2394

IGNITE-5880: BLAS integration phase 2



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-5880

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2394.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2394


commit e7d7684b0551b36f47a356feab577c89685df35a
Author: Yury Babak 
Date:   2017-08-02T23:44:03Z

IGNITE-5880: BLAS integration phase 2
wip

commit b8438aa3d36f513820142f5e00058c7572e8e22f
Author: Yury Babak 
Date:   2017-08-02T23:44:03Z

IGNITE-5880: BLAS integration phase 2
- fixed some failed tests.

commit ee8c3c4cc91f2c6c6087ab7bedd13fd07700a514
Author: Yury Babak 
Date:   2017-08-03T17:29:55Z

Merge remote-tracking branch 'professional/ignite-5880' into ignite-5880

# Conflicts:
#   
modules/ml/src/main/java/org/apache/ignite/ml/math/impls/storage/vector/MatrixVectorStorage.java






[jira] [Created] (IGNITE-5925) Get row/col for matrices

2017-08-03 Thread Yury Babak (JIRA)
Yury Babak created IGNITE-5925:
--

 Summary: Get row/col for matrices
 Key: IGNITE-5925
 URL: https://issues.apache.org/jira/browse/IGNITE-5925
 Project: Ignite
  Issue Type: Improvement
  Components: ml
Reporter: Yury Babak
 Fix For: 2.2


It would be useful to have this API for any matrix, especially in BLAS and
decompositions.





[GitHub] ignite pull request #2393: IGNITE-5738: SQL: Added support for batching for ...

2017-08-03 Thread skalashnikov
GitHub user skalashnikov opened a pull request:

https://github.com/apache/ignite/pull/2393

IGNITE-5738: SQL: Added support for batching for jdbc2 driver



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-5738

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2393.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2393


commit 986026be8ca55af05dd4a24da7a510a33dd476c0
Author: skalashnikov 
Date:   2017-07-28T14:33:51Z

IGNITE-5738: WIP (ported changes made by AlexP)

commit ba6a4b08267468c156892c0b31814c352347fbd7
Author: skalashnikov 
Date:   2017-08-01T08:59:27Z

Merge branch 'master' of https://github.com/apache/ignite into ignite-5738

commit 89df4085552fdf5fc85a74b1255609408a13fce6
Author: skalashnikov 
Date:   2017-08-03T15:41:05Z

IGNITE-5738: Added support for statement batching

commit 1ae9c481b239eb72ba325c1b4e66f9edcfefad34
Author: skalashnikov 
Date:   2017-08-03T15:42:29Z

Merge branch 'master' of https://github.com/apache/ignite into ignite-5738

commit e22d4bcc9f266ebce97ec787b8d8d67ecc44d583
Author: skalashnikov 
Date:   2017-08-03T15:50:44Z

IGNITE-5738: cleanup






[GitHub] ignite pull request #2392: ODBC: SQLGetTypeInfo now works with SQL_ALL_TYPES

2017-08-03 Thread isapego
GitHub user isapego opened a pull request:

https://github.com/apache/ignite/pull/2392

ODBC: SQLGetTypeInfo now works with SQL_ALL_TYPES



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-5923

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2392.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2392


commit 8ab0a8813fc6a536a7cf584462bf78f7faa9945f
Author: Igor Sapego 
Date:   2017-08-03T16:25:30Z

IGNITE-5923: Added test

commit c2eb27ac0752057d05db0f4274711fcd0aea7efa
Author: Igor Sapego 
Date:   2017-08-03T16:27:49Z

IGNITE-5923: Fix






[jira] [Created] (IGNITE-5924) .NET: Decouple Marshaller from Ignite

2017-08-03 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-5924:
--

 Summary: .NET: Decouple Marshaller from Ignite
 Key: IGNITE-5924
 URL: https://issues.apache.org/jira/browse/IGNITE-5924
 Project: Ignite
  Issue Type: Improvement
  Components: platforms
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 2.2


{{Marshaller}} class has {{Ignite}} property, which is used everywhere as a 
convenient accessor.
With thin client we don't have an {{Ignite}} instance ({{IgniteClient}} is 
there instead). 
Also, {{Marshaller}} itself only needs {{Ignite.BinaryProcessor}}, which is 
also tied to JNI.

So the plan is:
* Add {{IBinaryProcessor}} interface
* Replace {{Marshaller.Ignite}} with {{Marshaller.BinaryProcessor}}
* Fix external {{Marshaller.Ignite}} usages in some way





[GitHub] ignite pull request #2391: Ignite 5655

2017-08-03 Thread andrey-kuznetsov
GitHub user andrey-kuznetsov opened a pull request:

https://github.com/apache/ignite/pull/2391

Ignite 5655

First implementation; uses a global-level configuration option (in
BinaryConfiguration).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/andrey-kuznetsov/ignite ignite-5655

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2391.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2391


commit c682ed622c6896343dd6c46025ee6667c788cbcd
Author: andrey-kuznetsov 
Date:   2017-07-28T15:07:36Z

IGNITE-5655: Draft ENCODED_STRING type support

commit ae939fb26df34c9e9ac074c9b9414e05354053df
Author: Andrey Kuznetsov 
Date:   2017-08-01T18:21:58Z

IGNITE-5655: Added ENCODED_STRING to BinaryWriteMode as well

commit 9abd99153e154c77faa2e25d809446def83f1378
Author: Andrey Kuznetsov 
Date:   2017-08-02T16:18:17Z

IGNITE-5655: Repaired BinaryMarshallerSelfTest for lossless encodings.

commit 2d25a76ca6751da027c9e955c2d68fedbcf68d7c
Author: Andrey Kuznetsov 
Date:   2017-08-03T15:17:28Z

IGNITE-5655: Removed 'default' encoding to preserve compatibility.

commit f540a84b1f03c90ff128a702b1ae73f70facfc85
Author: Andrey Kuznetsov 
Date:   2017-08-03T15:20:55Z

IGNITE-5655: String binary marshalling tests for non-utf-8 encodings

commit 305a25052a2fc4b2cfe0848107cd226ef2c0daec
Author: Andrey Kuznetsov 
Date:   2017-08-03T15:41:51Z

Merge branch 'master' into ignite-5655

commit ac83e9695a4240d2d2c504713f913167319b275d
Author: Andrey Kuznetsov 
Date:   2017-08-03T15:52:25Z

IGNITE-5655: Satisfying @NotNull contract

commit 4cafde333816e4bb9858801283878d4e8c577c46
Author: Andrey Kuznetsov 
Date:   2017-08-03T16:12:44Z

IGNITE-5655: Removed redundant TODOs






[GitHub] ignite pull request #2390: ignored_tests suite fixes

2017-08-03 Thread sergey-chugunov-1985
GitHub user sergey-chugunov-1985 opened a pull request:

https://github.com/apache/ignite/pull/2390

ignored_tests suite fixes



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite z_Ignores_check

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2390.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2390


commit 7dc3ee8e68c67af308982e903b4f4de67916afb7
Author: Sergey Chugunov 
Date:   2017-08-03T15:37:26Z

Ignored tests suite was fixed for test run on TC






[GitHub] ignite pull request #2389: IGNITE-5920: Fix the example. Set CacheKeyConfigu...

2017-08-03 Thread tledkov-gridgain
GitHub user tledkov-gridgain opened a pull request:

https://github.com/apache/ignite/pull/2389

IGNITE-5920: Fix the example. Set CacheKeyConfiguration explicitly to…

… enable affinity co-location.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-5920

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2389.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2389


commit 46f493aeb7beb9923d18b6ef1da3ba3009164d8b
Author: tledkov-gridgain 
Date:   2017-08-03T15:23:07Z

IGNITE-5920: Fix the example. Set CacheKeyConfiguration explicitly to 
enable affinity co-location.






[jira] [Created] (IGNITE-5923) ODBC: SQLGetTypeInfo does not work with SQL_ALL_TYPES

2017-08-03 Thread Igor Sapego (JIRA)
Igor Sapego created IGNITE-5923:
---

 Summary: ODBC: SQLGetTypeInfo does not work with SQL_ALL_TYPES
 Key: IGNITE-5923
 URL: https://issues.apache.org/jira/browse/IGNITE-5923
 Project: Ignite
  Issue Type: Bug
  Components: odbc
Affects Versions: 2.1
Reporter: Igor Sapego
Assignee: Igor Sapego
 Fix For: 2.2


ODBC function {{SQLGetTypeInfo}} does not work if given an {{SQL_ALL_TYPES}} as 
the {{DataType}} argument value.





[GitHub] ignite pull request #2388: IGNITE-5211 Classes based constructor for QueryEn...

2017-08-03 Thread tledkov-gridgain
GitHub user tledkov-gridgain opened a pull request:

https://github.com/apache/ignite/pull/2388

IGNITE-5211 Classes based constructor for QueryEntities



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-5211

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2388.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2388


commit f027c03167183254768c24c0aca4a8ddc33b985a
Author: tledkov-gridgain 
Date:   2017-08-03T14:36:23Z

IGNITE-5211 Classes based constructor for QueryEntities






[GitHub] ignite pull request #2371: IGNITE-5211 Classes based constructor for QueryEn...

2017-08-03 Thread tledkov-gridgain
Github user tledkov-gridgain closed the pull request at:

https://github.com/apache/ignite/pull/2371




[GitHub] ignite pull request #2387: IGNITE-5736 Add test of backward-compatibility

2017-08-03 Thread daradurvs
GitHub user daradurvs opened a pull request:

https://github.com/apache/ignite/pull/2387

IGNITE-5736 Add test of backward-compatibility



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/daradurvs/ignite ignite-5736

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2387.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2387


commit 1e1e8432dc936df7c62557575a57c4a159b4bca6
Author: Vyacheslav Daradur 
Date:   2017-05-31T08:41:56Z

ignite-5097: writing arrays length in varint encoding was implemented

commit d162d3e3d9036cddb275b4a3d86b8f5de9795185
Author: daradurvs 
Date:   2017-06-01T18:35:13Z

ignite-5097: doUnsafeWriteUnsignedVarint method was added

commit bfe381b3a7498eb5bebeb25026a43d36656c6041
Author: daradurvs 
Date:   2017-06-04T21:25:48Z

ignite-5097: dotNET - writing arrays length in varint encoding was 
implemented

commit 516fcf41e4e973abf41cdd19acd2c9ea1bfb9445
Author: daradurvs 
Date:   2017-06-04T21:26:00Z

ignite-5097: dotNET - hardcoded hashcode values in the tests were changed 
according to new conditions

commit fb43cbd77e9c83ef1aeb9dced923d9ca094a8be3
Author: Vyacheslav Daradur 
Date:   2017-07-05T20:25:27Z

Merge branch 'master' into ignite-5097_2

commit 398cb205c26c65f369dc3bdc4198f6032a206e87
Author: daradurvs 
Date:   2017-07-06T18:12:09Z

ignite-5097: compatibility property to allow to keep data in old format was 
added in Java part

commit 4105cf073e0e23f44c0c271407ce5415f867a352
Author: daradurvs 
Date:   2017-07-06T18:12:55Z

ignite-5097: dotNET - compatibility property to allow to keep data in old 
format was added

commit 86082a8052ce2e5c818183a18d16eddf54d5e346
Author: Vyacheslav Daradur 
Date:   2017-07-07T14:15:14Z

ignite-5097: compatibility mode test was added

commit 6aadaa985d021d38accedeaa3ada6790eb1981a9
Author: daradurvs 
Date:   2017-07-07T17:46:56Z

ignite-5097: dotNET - compatibility mode tests were added; fix constant 
condition

commit bd24ccf6e8c2b4deb85cd3ad48635be9addaecd3
Author: Vyacheslav Daradur 
Date:   2017-07-10T15:22:01Z

ignite-5097: dotNET - fix compatibility property condition

commit 5f3e1543c8de140a533d0fbdbfca74a2ffd89a36
Author: Vyacheslav Daradur 
Date:   2017-07-10T15:32:44Z

ignite-5097: rename constant of compatibility mode

commit ac59755342093609c2c9505ccde3308ebf1f1ed4
Author: daradurvs 
Date:   2017-07-10T17:42:00Z

ignite-5097: dotNET - fix compatibility mode test

commit bc2c07a63cd28b2e99d195b1f307c900e0ca
Author: Vyacheslav Daradur 
Date:   2017-07-14T15:02:27Z

ignite-5732: added test plugin to allow join topology nodes with different 
version

commit 45dc0c662f8392ead950c722f63c91e3467683fa
Author: daradurvs 
Date:   2017-08-01T14:53:07Z

ignite-5732: functional of starting node from Maven artifact was added (wip 
commits squashed)

commit 3321b2b48481e7c672cb250bd6c92e22c659d559
Author: daradurvs 
Date:   2017-08-01T15:00:22Z

ignite-5732: added MavenUtils

commit 3a1c2df63d8c30cdf4520083afd6f73a326a906e
Author: daradurvs 
Date:   2017-08-01T16:13:15Z

ignite-5732: minor refactoring

commit c4baa2847cd34a8733cad0929302eb8ecd89beae
Author: daradurvs 
Date:   2017-08-01T17:31:59Z

ignite-5732: minor refactoring 2

commit dbb5df2e8136b2a7b159fd11565d05fa2bd9f076
Author: daradurvs 
Date:   2017-08-02T10:33:14Z

ignite-5732: plugins were moved to another package

commit 8fbf34e3f84d77d6af8b40847eaa2f9faa0075fd
Author: daradurvs 
Date:   2017-08-02T11:10:46Z

ignite-5732: fix spi-attributes error in log

commit 9c205a303dbea0154f683e9514c926f4d53b8d12
Author: daradurvs 
Date:   2017-08-02T13:42:59Z

ignite-5732: refactoring; fixes of review notes

commit 35b56df32c8a9e0a06b9f9803999d3e32035959b
Author: daradurvs 
Date:   2017-08-02T15:24:42Z

ignite-5732: MavenUtils refactoring

commit 4eb47095eb37102f276f240a642a99e6eaedc23b
Author: daradurvs 
Date:   2017-08-02T16:07:55Z

ignite-5732: refactoring according to review note (DRY principle); minor 
style fixes

commit 03b4d916b3a8f7132d06a7f561c3c64d1ee6254f
Author: daradurvs 
Date:   2017-08-02T16:14:01Z

ignite-5732: add test in testsuite to execute on TeamCity

commit c70c55f4d0424862959d36f6f46fa98626c9d8fe
Author: daradurvs 
Date:   2017-08-02T18:43:15Z


[jira] [Created] (IGNITE-5922) Improve collisionSpi doc - ParallelJobsNumber should be less than PublicThreadPoolSize

2017-08-03 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-5922:
-

 Summary: Improve collisionSpi doc - ParallelJobsNumber should be 
less than PublicThreadPoolSize
 Key: IGNITE-5922
 URL: https://issues.apache.org/jira/browse/IGNITE-5922
 Project: Ignite
  Issue Type: Improvement
  Components: documentation
Reporter: Evgenii Zhuravlev
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] ignite pull request #2386: IGNITE-5890 Estimated time to rebalance completio...

2017-08-03 Thread DmitriyGovorukhin
GitHub user DmitriyGovorukhin opened a pull request:

https://github.com/apache/ignite/pull/2386

IGNITE-5890 Estimated time to rebalance completion



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-5890

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2386.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2386


commit 41664ea92205d9517d692edfc2620ad096cb1d44
Author: Dmitriy Govorukhin 
Date:   2017-08-03T08:05:34Z

IGNITE-5890 WIP. Estimated time to rebalance completion

commit 039e2ea9dfb9e5dc6c1cbfdc2fb8cf8fdee8a444
Author: Dmitriy Govorukhin 
Date:   2017-08-03T14:18:59Z

IGNITE-5890 added test estimate rebalance finish time

commit b44a72dabab928c0ab849d413b38e35934640001
Author: Dmitriy Govorukhin 
Date:   2017-08-03T14:25:57Z

IGNITE-5890 fix java doc




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Zeppelin, MyBatis and Vert.x update to AI 2.1

2017-08-03 Thread Andrey Gura
Hi,

I'll do it soon. But there is a problem with Zeppelin because CI should
be configured on a local machine. So I need more time than usual to
update the Ignite version in Zeppelin (e.g. Ignite 2.0 is still not
released in Zeppelin).

On Tue, Aug 1, 2017 at 9:13 AM, Roman Shtykh  wrote:
> Denis,
> I updated MyBatis integration, and will release it this week.
> -- Roman
>
>
> On Tuesday, August 1, 2017 5:23 AM, Denis Magda  wrote:
>
>
>  Andrey, Roman,
> As maintainers of the projects in the subject, could you please go ahead and 
> update Ignite version to 2.1 
> there?https://cwiki.apache.org/confluence/display/IGNITE/External+Integrations
> —Denis
>
>


Re: Cache Metrics

2017-08-03 Thread Andrey Gura
Den,

I see at least two problems here:

1. Metrics meaning for the end user. How should the user interpret metrics
in this case? Moreover, an average is a bad gauge for monitoring because it
hides actual latencies. The user should have the possibility to get accurate
metrics in order to build monitoring that can create percentile-based
charts, for example, and accuracy is a very important property for
such cases.

2. It just makes the code more complex, and we will have metrics-related
logic in two places instead of one.



On Wed, Jul 26, 2017 at 4:45 AM, Denis Magda  wrote:
> Andrey,
>
> I would simply take an average if a mixed clients-servers cluster group is 
> used.
>
> In general, the goal of the ticket was to fix the time-based metrics on the 
> server side. As far as I understand they are already calculated properly on 
> the client side, considering network contribution, right? So, all that’s left to 
> do is to count the same on the servers so that those metrics no longer return 
> 0.
>
> —
> Denis
>
>> On Jul 25, 2017, at 6:53 AM, Andrey Gura  wrote:
>>
>> Den,
>>
>> doesn't make sense from my point of view. And we create a new problem:
>> how should we aggregate these metrics when the user requests metrics for
>> cluster group.
>>
>> On Mon, Jul 24, 2017 at 8:48 PM, Denis Magda  wrote:
>>> Guys,
>>>
>>> What if we calculate it on both sides? The client will keep the total time 
>>> needed to complete an operation including network hoops while a server 
>>> (primary or backup) will count only local time.
>>>
>>> —
>>> Denis
>>>
 On Jul 17, 2017, at 7:07 AM, Andrey Gura  wrote:

 Hi,

 I believe that the first solution is better than second because it
 takes into account network communication time. Average time of
 communication between nodes doesn't make sense from my point of view.

 So I vote for #1.

 On Thu, Jul 13, 2017 at 11:52 PM, Вячеслав Коптилин
  wrote:
> Hi Experts,
>
> I am working on https://issues.apache.org/jira/browse/IGNITE-3495
>
> A few words about this issue:
> It is about the fact that the process of gathering/updating cache metrics is
> inconsistent in some cases.
> Let's consider the following simple topology which contains only two 
> nodes:
> first node is a client node and the second is a server.
> And client node starts requests to the server node, for instance
> cache.put(), cache.putAll(), cache.get() etc.
> In that case, metrics which are related to counters (cache hits, cache
> misses, removals and puts) are calculated on the server side,
> while time metrics are updated on the client node.
>
> I think that both metrics (counters and time) should be calculated on the
> same node. So, there are two obvious solution:
>
> #1 Node that starts some operation is responsible for updating the cache
> metrics.
> Pro:
> - it will allow to get more accurate results of metrics.
> Contra:
> - this approach does not work in particular cases. for example, 
> partitioned
> cache with FULL_ASYNC write synchronization mode.
> - needs to extend response messages (GridNearAtomicUpdateResponse,
> GridNearGetResponse etc)
> in order to provide additional information from remote node: cache hits,
> number of removal etc.
> So, it will lead to additional pressure on communication channel.
> Perhaps, this impact will be small - 4 bytes per message or something like
> that.
> - backward incompatibility (this is a consequence of the previous point)
>
> #2 Primary node (node that actually executes a request)
> Pro:
> - easy to implement
> - backward compatible
> Contra:
> - time metrics will not include the time of communication between nodes, 
> so
> the results will be less accurate.
> - perhaps we need to provide additional metric which will allow to get avg
> time of communication between nodes.
>
> Please let me know about your thoughts.
> Perhaps, both alternatives are not so good...
>
> Regards,
> Slava.
>>>
>


Re: Cluster auto activation design proposal

2017-08-03 Thread Sergey Chugunov
I also would like to provide more use cases of how BLT is supposed to work
(let me call it this way until we come up with a better one):

   1. User creates a new BLT using WebConsole or another tool and "applies" it
   to a brand-new cluster.

   2. User starts up a brand-new cluster with the desired number of nodes and
   activates it. At the moment of activation the BLT is created from all
   non-daemon server nodes present in the cluster.

   3. User starts up a cluster with a previously prepared BLT -> when the set
   of nodes in the cluster matches the BLT, the cluster gets automatically
   activated.

   4. User has an up-and-running active cluster and starts a few more
   nodes. They join the cluster, but no partitions are assigned to them.
   User recreates the BLT on the new cluster topology -> partitions are
   assigned to the new nodes.

   5. User takes nodes out of the cluster (e.g. for maintenance purposes): no
   rebalancing happens until the user recreates the BLT on the new cluster
   topology.

   6. If some parameters reach critical levels (e.g. the number of backups for
   a partition is too low), the coordinator automatically recreates the BLT
   and thus triggers rebalancing.


I hope these use cases will help to clarify purposes of the proposed
feature.
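The auto-activation in use case #3 boils down to a set comparison between the stored baseline and the current server topology. A hypothetical sketch in plain Java (all names are illustrative, not actual Ignite internals):

```java
import java.util.Set;

// Hypothetical sketch of the auto-activation check: the cluster activates
// only when the current set of server-node IDs exactly matches the
// previously stored baseline set. Illustrative only, not Ignite API.
public class BaselineCheck {
    // True when every baseline node is back online and no extra data
    // nodes have joined (joined-but-extra nodes stay without partitions).
    static boolean shouldAutoActivate(Set<String> baseline, Set<String> online) {
        return !baseline.isEmpty() && baseline.equals(online);
    }

    public static void main(String[] args) {
        Set<String> baseline = Set.of("node-1", "node-2", "node-3");

        // Not all baseline nodes are back yet -> stay inactive.
        System.out.println(shouldAutoActivate(baseline, Set.of("node-1", "node-2")));

        // Topology matches the baseline -> activate.
        System.out.println(shouldAutoActivate(baseline, Set.of("node-1", "node-2", "node-3")));
    }
}
```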

On Thu, Aug 3, 2017 at 4:08 PM, Alexey Goncharuk  wrote:

> My understanding of Baseline Topology is the set of nodes which are
> *expected* to be in the cluster.
> Let me go a little bit further because BT (or whatever name we choose) may
> and will solve more issues than just auto-activation:
>
> 1) More graceful control over rebalancing than just rebalance delay. If a
> server is shut down for maintenance and there are enough backup nodes in
> the cluster, there is no need to rebalance.
> 2) Guarantee that there will be no conflicting key-value mappings due to
> incorrect cluster activation. For example, consider a scenario when there
> was a cluster of 10 nodes, then the cluster was shut down, started first 5
> nodes, activated, made some updates, shut down 5 nodes, start up other 5
> nodes, activate, make some updates, start up first 5 nodes. Currently,
> there is no way to determine that there was an incompatible topology change
> which leads to data inconsistency.
> 3) When a cluster is shutting down node-by-node, we must track a node which
> has 'seen' a partition last time and not activate the cluster until all
> nodes are present. Otherwise, again, we may activate too early and see
> outdated values.
>
> I do not want to add any 'faster' hacks here because they will only make
> the issue above appear more likely. Besides, BT should be available in 2.2
> anyway, so no need to rush with hacks.
>
> --AG
>
> 2017-08-03 15:09 GMT+03:00 Yakov Zhdanov :
>
> > >Obvious connotation of "minimal set" is a set that cannot be decreased.
> >
> > >But lets consider the following case: user has a cluster of 50 nodes and
> > >decides to switch off 3 nodes for maintenance for a while. Ok, user just
> > >does it and then recreates this "minimal node set" to only 47 nodes.
> >
> > >So initial minimal node set was decreased - something counter-intuitive
> to
> > >me and may cause confusion as well.
> >
> > That was my point. If I have 50 nodes and 3 backups I can restart on 48,
> 49
> > and 50 without data loss. In case of 48 and 49 after cluster gets
> activated
> > missing backups are assigned and rebalancing starts.
> >
> > --Yakov
> >
>


[jira] [Created] (IGNITE-5921) Reduce contention for free list access

2017-08-03 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-5921:
-

 Summary: Reduce contention for free list access
 Key: IGNITE-5921
 URL: https://issues.apache.org/jira/browse/IGNITE-5921
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 2.1
 Environment: Reduce contention for free list access.
Reporter: Mikhail Cherkasov
Assignee: Igor Seliverstov


Reduce contention for free list access.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] ignite pull request #2385: 8.0.3.ea14

2017-08-03 Thread agoncharuk
GitHub user agoncharuk opened a pull request:

https://github.com/apache/ignite/pull/2385

8.0.3.ea14



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-gg-8.0.3.ea14

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2385.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2385


commit 52556f4bf6d544a44bfd49b02d84aa32f741813f
Author: Ilya Lantukh 
Date:   2017-04-28T16:40:31Z

Multiple optimizations from ignite-gg-8.0.3.ea5-atomicbench.

commit 58b6e05e82978c68ba9e2e53a3b9b866e2c474ca
Author: Ilya Lantukh 
Date:   2017-04-28T16:57:32Z

Optimizations from ignite-5068.

commit 925370c3c6c587870e19f31f8660139884847e06
Author: sboikov 
Date:   2017-05-06T09:15:21Z

client join race

(cherry picked from commit 682ca8e)

commit 643e6d49d0e0fb1d66b18dcfe74b570c15924036
Author: Alexey Goncharuk 
Date:   2017-05-10T09:14:42Z

GG-12170 - Added assertions for tag value, skip failed-to-write-lock-pages 
on checkpoint begin

commit f00785611e1d785fe8c2247770bf07e07ddb1ebe
Author: Ivan Rakov 
Date:   2017-05-12T16:39:52Z

GG-12194: Backport GG-12184 into 8.0.3.ea8

commit b39f7c8d4f486f25e48a812e4a9d5aa4bbb5932f
Author: Alexey Kuznetsov 
Date:   2017-05-13T15:37:25Z

GG-12190 Backported to 803ea8.

commit 8f1398f43038700cdf66a4538d24c4dfd227cb0e
Author: Alexey Goncharuk 
Date:   2017-05-15T09:47:38Z

Additional debug

commit c35dbf4ec3ba6b01932c76f14ad4c45fed402391
Author: Yakov Zhdanov 
Date:   2017-05-15T15:03:07Z

Added IO latency test + made it available from MBean

(cherry picked from commit 8195ae0)

commit b1116069549be224d59983b93d2ee22cab8402b8
Author: sboikov 
Date:   2017-05-16T08:24:11Z

Improved exchange timeout debug logging.

commit 560ef60bf90643dfc4a329a760714d652c73e9a8
Author: sboikov 
Date:   2017-05-16T08:30:29Z

DirectByteBufferStreamImpl: converted asserts into exceptions.

commit 250b4e04606d95821ee9e87c40225c45f3432d3c
Author: Yakov Zhdanov 
Date:   2017-05-17T13:36:27Z

Results printout for IO latency test

(cherry picked from commit a0fc6ee)

commit 46cba2a46966759e6d658f3ba991f15722a6634f
Author: Alexey Goncharuk 
Date:   2017-05-17T16:04:10Z

Do not re-create node2part map on every singleMap message

commit d4c999795917b7695cc13f2fea3e69cd1a3d5078
Author: Alexey Goncharuk 
Date:   2017-05-17T17:58:21Z

MetaPageInitRecord fix

commit 7b545fa9029ba9f3d90828cd38611f6a2988cb25
Author: EdShangGG 
Date:   2017-05-16T16:07:03Z

GG-12140 We will lose data if we cancel snapshot restore
(cherry picked from commit 8e3ad6d)

commit e590a8110b1ce7f8140f06ae4a504f60777847e6
Author: Eduard Shangareev 
Date:   2017-05-17T23:12:44Z

GG-12140 We will lose data if we cancel snapshot restore
(cherry picked from commit 7721838)

commit db84a921920ff6a4ebe55af4cf8047e37e0addb1
Author: EdShangGG 
Date:   2017-05-18T14:37:30Z

fixing compilation after cherry-picking

commit afbade50e151d2aa793e9762637853ec6d6f2f93
Author: EdShangGG 
Date:   2017-05-18T16:30:00Z

GG-12192 Concurrent node join and snapshot restore cause to coordinator fail
-fixing tests and compilation

commit b29b918aece6a99a12c8bf3f21c5419a9d97de25
Author: Alexey Kuznetsov 
Date:   2017-05-19T07:18:23Z

Added Affinity topology version and Pending exchanges to Visor data 
collector task.
(cherry picked from commit 7402ea1)

commit 096404d36c1cc3a1d9da1db3a2b0a2b7fcdd702d
Author: Yakov Zhdanov 
Date:   2017-05-19T17:33:36Z

Results printout for IO latency test

commit 33f1c3376a05e728a63df5cf8802d5df6f9e02f5
Author: Alexey Goncharuk 
Date:   2017-05-22T16:46:39Z

Fixed assertion in checkpointer on cache stop

commit e7edb38fdd42008fd358e320dfe003a47b22cbe6
Author: EdShangGG 
Date:   2017-05-23T12:31:55Z

GG-12192 Concurrent node join and snapshot restore cause to coordinator fail
-adding test
-fixing problem with coordinator left

commit 34a0d5f5f7c97c4f401c14b4211af35eaa34e850
Author: Vladislav Pyatkov 
Date:   2017-05-23T12:33:39Z

ignite-5212 Allow custom affinity function for data structures cache
(cherry picked from commit f353faf)

commit f169a1898aacdd08584210fe3377767d12be563d
Author: Yakov Zhdanov 

Re: Cluster auto activation design proposal

2017-08-03 Thread Alexey Goncharuk
My understanding of Baseline Topology is the set of nodes which are
*expected* to be in the cluster.
Let me go a little bit further because BT (or whatever name we choose) may
and will solve more issues than just auto-activation:

1) More graceful control over rebalancing than just rebalance delay. If a
server is shut down for maintenance and there are enough backup nodes in
the cluster, there is no need to rebalance.
2) Guarantee that there will be no conflicting key-value mappings due to
incorrect cluster activation. For example, consider a scenario when there
was a cluster of 10 nodes, then the cluster was shut down, started first 5
nodes, activated, made some updates, shut down 5 nodes, start up other 5
nodes, activate, make some updates, start up first 5 nodes. Currently,
there is no way to determine that there was an incompatible topology change
which leads to data inconsistency.
3) When a cluster is shutting down node-by-node, we must track a node which
has 'seen' a partition last time and not activate the cluster until all
nodes are present. Otherwise, again, we may activate too early and see
outdated values.

I do not want to add any 'faster' hacks here because they will only make
the issue above appear more likely. Besides, BT should be available in 2.2
anyway, so no need to rush with hacks.

--AG

2017-08-03 15:09 GMT+03:00 Yakov Zhdanov :

> >Obvious connotation of "minimal set" is a set that cannot be decreased.
>
> >But lets consider the following case: user has a cluster of 50 nodes and
> >decides to switch off 3 nodes for maintenance for a while. Ok, user just
> >does it and then recreates this "minimal node set" to only 47 nodes.
>
> >So initial minimal node set was decreased - something counter-intuitive to
> >me and may cause confusion as well.
>
> That was my point. If I have 50 nodes and 3 backups I can restart on 48, 49
> and 50 without data loss. In case of 48 and 49 after cluster gets activated
> missing backups are assigned and rebalancing starts.
>
> --Yakov
>


Re: Add isPrimary() and isBackup() methods on CacheQueryEntryEvent

2017-08-03 Thread Kozlov Maxim
Guys, 

I'm sorry for the misunderstanding, I was tired at the end of the day :-)
In the process of working on the task, I had to add 2 methods to the public
interface.
The first method is #isBackup(); it returns 'true' if the cache entry is being
updated on the backup node.
The second method is #isPrimary(); it returns 'true' if the cache entry is being
updated on the primary node.
Their main purpose is to show where a continuous query filter has been invoked.

Any thoughts about such solution?
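For illustration, here is a self-contained mock (the EntryEvent interface below is a stand-in, not the real CacheQueryEntryEvent API) of how a continuous query remote filter could use the proposed methods to react only on the primary copy and skip the duplicate invocation on backups:

```java
// Self-contained mock, NOT the real Ignite API: sketches a filter that
// passes an event only where the updated entry is primary, so a listener
// fires once per update instead of once per copy.
public class PrimaryFilterSketch {
    // Minimal stand-in for the proposed additions to CacheQueryEntryEvent.
    interface EntryEvent {
        boolean isPrimary();
        boolean isBackup();
    }

    // Filter body: accept the event only on the primary node.
    static boolean evaluate(EntryEvent evt) {
        return evt.isPrimary();
    }

    public static void main(String[] args) {
        EntryEvent onPrimary = new EntryEvent() {
            public boolean isPrimary() { return true; }
            public boolean isBackup() { return false; }
        };
        EntryEvent onBackup = new EntryEvent() {
            public boolean isPrimary() { return false; }
            public boolean isBackup() { return true; }
        };

        System.out.println(evaluate(onPrimary)); // accepted
        System.out.println(evaluate(onBackup));  // filtered out
    }
}
```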

> On Aug 3, 2017, at 14:45, Anton Vinogradov 
> wrote:
> 
> Folks,
> 
> As far as I see, Issue still in PatchAvailable state, what did you mean by
> "solved"?
> 
> On Wed, Aug 2, 2017 at 8:01 PM, Kozlov Maxim  wrote:
> 
>> Sure.
>> 
>> CacheQueryEntryEvent:
>> 
>> public abstract boolean isBackup();
>> public abstract boolean isPrimary();
>> 
>> 
>>> On Aug 2, 2017, at 19:56, Nikolai Tikhonov 
>> wrote:
>>> 
>>> Max,
>>> 
>>> Thank you for your contribution! Could you share here what exactly was
>>> added to interface?
>>> 
>>> On Wed, Aug 2, 2017 at 7:53 PM, Kozlov Maxim 
>> wrote:
>>> 
 Igniters,
 
 When you solved the 3878[1] ticket, two methods were added[2]:
>> isPrimary()
 and isBackup() on the CacheQueryEntryEvent in a public API. Do you agree
 with this decision?
 
 [1] https://issues.apache.org/jira/browse/IGNITE-3878 <
 https://issues.apache.org/jira/browse/IGNITE-3878>
 [2] https://github.com/apache/ignite/pull/1393 <
>> https://github.com/apache/
 ignite/pull/1393>
 
 --
 Best Regards,
 Max K.
 
 
 
 
 
>> 
>> --
>> Best Regards,
>> Max K.
>> 
>> 
>> 
>> 
>> 

--
Best Regards,
Max K.






[jira] [Created] (IGNITE-5920) CacheClientBinaryQueryExample return different results if we add non local node

2017-08-03 Thread Aleksey Chetaev (JIRA)
Aleksey Chetaev created IGNITE-5920:
---

 Summary: CacheClientBinaryQueryExample return different results if 
we add non local node
 Key: IGNITE-5920
 URL: https://issues.apache.org/jira/browse/IGNITE-5920
 Project: Ignite
  Issue Type: Bug
Reporter: Aleksey Chetaev


1. Start CacheClientBinaryQueryExample without external nodes. The section ">>>
Employees working for GridGain" isn't empty.
2. Start 3 nodes and after that start the example. The section ">>> Employees
working for GridGain" is empty.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Cluster auto activation design proposal

2017-08-03 Thread Yakov Zhdanov
>Obvious connotation of "minimal set" is a set that cannot be decreased.

>But lets consider the following case: user has a cluster of 50 nodes and
>decides to switch off 3 nodes for maintenance for a while. Ok, user just
>does it and then recreates this "minimal node set" to only 47 nodes.

>So initial minimal node set was decreased - something counter-intuitive to
>me and may cause confusion as well.

That was my point. If I have 50 nodes and 3 backups I can restart on 48, 49
and 50 without data loss. In case of 48 and 49 after cluster gets activated
missing backups are assigned and rebalancing starts.

--Yakov


[jira] [Created] (IGNITE-5919) .NET: EntryProcessorExample closes immediately after execution

2017-08-03 Thread Irina Zaporozhtseva (JIRA)
Irina Zaporozhtseva created IGNITE-5919:
---

 Summary: .NET: EntryProcessorExample closes immediately after 
execution
 Key: IGNITE-5919
 URL: https://issues.apache.org/jira/browse/IGNITE-5919
 Project: Ignite
  Issue Type: Improvement
  Components: platforms
Affects Versions: 1.9
Reporter: Irina Zaporozhtseva
Priority: Minor


EntryProcessorExample closes immediately after execution. Please add:

Console.WriteLine();
Console.WriteLine(">>> Example finished, press any key to exit ...");
Console.ReadKey();



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Cluster auto activation design proposal

2017-08-03 Thread Yakov Zhdanov
>I think it is not just restarts, this set of nodes is minimally required
for the cluster to function, no?

I don't think so. Cluster can function if there is no data loss.

--Yakov


Re: Cluster auto activation design proposal

2017-08-03 Thread Dmitry Pavlov
Igniters, what about Target Node Set? Complete Node Set?

Once we reach this topology, we can activate the cluster.

On Thu, Aug 3, 2017 at 12:58, Sergey Chugunov  wrote:

> Dmitriy,
>
> Obvious connotation of "minimal set" is a set that cannot be decreased.
>
> But lets consider the following case: user has a cluster of 50 nodes and
> decides to switch off 3 nodes for maintenance for a while. Ok, user just
> does it and then recreates this "minimal node set" to only 47 nodes.
>
> So initial minimal node set was decreased - something counter-intuitive to
> me and may cause confusion as well.
>
>
> On Thu, Aug 3, 2017 at 12:37 PM,  wrote:
>
> > Yakov,
> >
> > I think it is not just restarts, this set of nodes is minimally required
> > for the cluster to function, no?
> >
> > D.
> >
> > On Aug 3, 2017, 11:23 AM, at 11:23 AM, Yakov Zhdanov <
> yzhda...@apache.org>
> > wrote:
> > >> How about naming it "minimal node set" or "required node set"?
> > >
> > >Required for what? I would add restart if there are no confusion.
> > >
> > >--Yakov
> >
>


Re: Add isPrimary() and isBackup() methods on CacheQueryEntryEvent

2017-08-03 Thread Anton Vinogradov
Folks,

As far as I see, Issue still in PatchAvailable state, what did you mean by
"solved"?

On Wed, Aug 2, 2017 at 8:01 PM, Kozlov Maxim  wrote:

> Sure.
>
> CacheQueryEntryEvent:
>
> public abstract boolean isBackup();
> public abstract boolean isPrimary();
>
>
> > On Aug 2, 2017, at 19:56, Nikolai Tikhonov 
> wrote:
> >
> > Max,
> >
> > Thank you for your contribution! Could you share here what exactly was
> > added to interface?
> >
> > On Wed, Aug 2, 2017 at 7:53 PM, Kozlov Maxim 
> wrote:
> >
> >> Igniters,
> >>
> >> When you solved the 3878[1] ticket, two methods were added[2]:
> isPrimary()
> >> and isBackup() on the CacheQueryEntryEvent in a public API. Do you agree
> >> with this decision?
> >>
> >> [1] https://issues.apache.org/jira/browse/IGNITE-3878 <
> >> https://issues.apache.org/jira/browse/IGNITE-3878>
> >> [2] https://github.com/apache/ignite/pull/1393 <
> https://github.com/apache/
> >> ignite/pull/1393>
> >>
> >> --
> >> Best Regards,
> >> Max K.
> >>
> >>
> >>
> >>
> >>
>
> --
> Best Regards,
> Max K.
>
>
>
>
>


[jira] [Created] (IGNITE-5918) Adding and searching objects in index tree produce a lot of garbage

2017-08-03 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-5918:
-

 Summary: Adding and searching objects in index tree produce a lot 
of garbage
 Key: IGNITE-5918
 URL: https://issues.apache.org/jira/browse/IGNITE-5918
 Project: Ignite
  Issue Type: Bug
Reporter: Mikhail Cherkasov
Assignee: Mikhail Cherkasov






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5917) Fields have no error-triangle in case of incorrect value

2017-08-03 Thread Pavel Konstantinov (JIRA)
Pavel Konstantinov created IGNITE-5917:
--

 Summary: Fields have no error-triangle in case of incorrect value
 Key: IGNITE-5917
 URL: https://issues.apache.org/jira/browse/IGNITE-5917
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.1
Reporter: Pavel Konstantinov
 Fix For: 2.2






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5916) Web console: incorrect default value for Cache-Queries & Indexing-SQL index max inline size = -1

2017-08-03 Thread Pavel Konstantinov (JIRA)
Pavel Konstantinov created IGNITE-5916:
--

 Summary: Web console: incorrect default value for Cache-Queries & 
Indexing-SQL index max inline size = -1
 Key: IGNITE-5916
 URL: https://issues.apache.org/jira/browse/IGNITE-5916
 Project: Ignite
  Issue Type: Bug
Reporter: Pavel Konstantinov


If I set -1 as a real value then the field becomes incorrect (red border)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Cluster auto activation design proposal

2017-08-03 Thread Sergey Chugunov
Dmitriy,

Obvious connotation of "minimal set" is a set that cannot be decreased.

But lets consider the following case: user has a cluster of 50 nodes and
decides to switch off 3 nodes for maintenance for a while. Ok, user just
does it and then recreates this "minimal node set" to only 47 nodes.

So initial minimal node set was decreased - something counter-intuitive to
me and may cause confusion as well.


On Thu, Aug 3, 2017 at 12:37 PM,  wrote:

> Yakov,
>
> I think it is not just restarts, this set of nodes is minimally required
> for the cluster to function, no?
>
> D.
>
> On Aug 3, 2017, 11:23 AM, at 11:23 AM, Yakov Zhdanov 
> wrote:
> >> How about naming it "minimal node set" or "required node set"?
> >
> >Required for what? I would add restart if there are no confusion.
> >
> >--Yakov
>


Re: Cluster auto activation design proposal

2017-08-03 Thread dsetrakyan
Yakov,

I think it is not just restarts, this set of nodes is minimally required for 
the cluster to function, no?

D.

On Aug 3, 2017, 11:23 AM, at 11:23 AM, Yakov Zhdanov  
wrote:
>> How about naming it "minimal node set" or "required node set"?
>
>Required for what? I would add restart if there are no confusion.
>
>--Yakov


Re: Cluster auto activation design proposal

2017-08-03 Thread Sergey Chugunov
From my standpoint, the name for the concept should emphasize that the nodes
from the set constitute a target topology - the place where the user wants to be.

If we go in a "node set" way, what about FixedNodeSet or BaseNodeSet?

"restart node set" is also a bit confusing because this concept is used not
only for restarts but also to manage adding and removing nodes to/from the cluster.

E.g. a cluster admin decides to add ten more nodes to an existing cluster:
he/she starts them one by one, the nodes join the cluster but don't receive any
data as they are not in the FixedNodeSet yet.
Then the admin issues a "change fixed node set" command or adds them to the set
in some other way, and the nodes become operational.
As one can see, no restarts are involved in the process.

Thanks,
Sergey.

On Thu, Aug 3, 2017 at 12:23 PM, Yakov Zhdanov  wrote:

> > How about naming it "minimal node set" or "required node set"?
>
> Required for what? I would add restart if there are no confusion.
>
> --Yakov
>


Re: Cluster auto activation design proposal

2017-08-03 Thread Yakov Zhdanov
> How about naming it "minimal node set" or "required node set"?

Required for what? I would add restart if there are no confusion.

--Yakov


[jira] [Created] (IGNITE-5915) Add more clear WAL mode documentation and print a warning when NONE mode is used

2017-08-03 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-5915:


 Summary: Add more clear WAL mode documentation and print a warning 
when NONE mode is used
 Key: IGNITE-5915
 URL: https://issues.apache.org/jira/browse/IGNITE-5915
 Project: Ignite
  Issue Type: Improvement
  Components: persistence
Affects Versions: 2.1
Reporter: Alexey Goncharuk
Assignee: Alexey Goncharuk
 Fix For: 2.2


Describe which guarantees each WAL mode gives and print a warning when NONE is 
used.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Cluster auto activation design proposal

2017-08-03 Thread dsetrakyan
How about naming it "minimal node set" or "required node set"?

D.

On Aug 3, 2017, 11:15 AM, at 11:15 AM, Yakov Zhdanov  
wrote:
>> * Based on some sort of policies when the actual cluster topology
>differs
>too much from the baseline or when some critical condition happens
>(e.g.,
>when there are no more backups for a partition)
>
>Good point, Alex! I would even go further. If cluster is active and
>under
>load and nodes continue joining and leaving then we can have several
>BT's
>that are possible to restart on - the main condition is to have all the
>up
>to date data partitions. I.e. if you have 4 servers and 3 backups most
>probably you can have all the data with 2, 3 and, of course, 4 nodes.
>Makes
>sense?
>
>I would also think of different name. Topology (for me) also implies
>the
>version, but here only nodes carrying data are important. How about
>"restart nodes set"?
>
>--Yakov


Re: Cluster auto activation design proposal

2017-08-03 Thread Yakov Zhdanov
> * Based on some sort of policies when the actual cluster topology differs
too much from the baseline or when some critical condition happens (e.g.,
when there are no more backups for a partition)

Good point, Alex! I would even go further. If the cluster is active and under
load and nodes continue joining and leaving, then we can have several BTs that
are possible to restart on - the main condition is to have all the up-to-date
data partitions. I.e. if you have 4 servers and 3 backups, most probably you
can have all the data with 2, 3 and, of course, 4 nodes. Makes sense?

I would also think of a different name. Topology (for me) also implies the
version, but here only the nodes carrying data are important. How about
"restart nodes set"?

--Yakov


Re: Thin client protocol message format

2017-08-03 Thread dsetrakyan
Got it, thanks!

On Aug 3, 2017, 11:04 AM, at 11:04 AM, Vladimir Ozerov  
wrote:
>Dima,
>
>Our goal is to have a format, which will work for both synchronous,
>asynchronous, single-threaded and multi-threaded clients. All we need
>to
>achieve this is "request ID" propagated from request to response. This
>way
>3-rd party developers will be free to decide how to implement the
>client.
>
>On Thu, Aug 3, 2017 at 4:52 AM, Dmitriy Setrakyan
>
>wrote:
>
>> Let us not forget that the main purpose of such protocol is to enable
>other
>> users contribute their own client implementations for various
>languages.
>> Also, most JDBC and ODBC use cases work in thread-per-connection
>mode.
>>
>> I think that if we introduce a multi-threaded client here, then it
>will be
>> a lot harder to understand, configure, use, or contribute, so I agree
>with
>> Vladimir.
>>
>> Let's keep it simple for now.
>>
>> D.
>>
>> On Wed, Aug 2, 2017 at 10:37 AM, Yakov Zhdanov 
>> wrote:
>>
>> > Agree with Alex. I think our implementations should share single
>> connection
>> > over threads in the process.
>> >
>> > --Yakov
>> >
>>


Re: Thin client protocol message format

2017-08-03 Thread Vladimir Ozerov
Dima,

Our goal is to have a format, which will work for both synchronous,
asynchronous, single-threaded and multi-threaded clients. All we need to
achieve this is "request ID" propagated from request to response. This way
3-rd party developers will be free to decide how to implement the client.
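The request-ID idea can be sketched in plain Java (illustrative only, not the actual protocol code): the client keeps a map from request ID to a pending future, so responses may arrive on one shared connection in any order and still reach the right caller, whether that client is sync or async, single- or multi-threaded:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: with a request ID propagated from request to
// response, one connection can carry many in-flight requests.
public class RequestCorrelator {
    private final AtomicLong idGen = new AtomicLong();
    private final Map<Long, CompletableFuture<byte[]>> pending = new ConcurrentHashMap<>();

    // Caller side: register a future under a fresh request ID before sending.
    public long newRequest(CompletableFuture<byte[]> fut) {
        long id = idGen.incrementAndGet();
        pending.put(id, fut);
        return id;
    }

    // Reader side: the ID routes each response, in whatever order it
    // arrives, back to the thread (or callback) that issued the request.
    public void onResponse(long id, byte[] payload) {
        CompletableFuture<byte[]> fut = pending.remove(id);
        if (fut != null)
            fut.complete(payload);
    }

    public static void main(String[] args) {
        RequestCorrelator c = new RequestCorrelator();
        CompletableFuture<byte[]> f1 = new CompletableFuture<>();
        CompletableFuture<byte[]> f2 = new CompletableFuture<>();
        long id1 = c.newRequest(f1);
        long id2 = c.newRequest(f2);

        // Out-of-order responses still reach the right callers.
        c.onResponse(id2, new byte[] {2});
        c.onResponse(id1, new byte[] {1});
        System.out.println(f1.join()[0] + " " + f2.join()[0]); // prints "1 2"
    }
}
```

A strictly synchronous thread-per-connection client (the JDBC/ODBC case) is the degenerate form of the same scheme: one in-flight request at a time, with the ID simply echoed back.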

On Thu, Aug 3, 2017 at 4:52 AM, Dmitriy Setrakyan 
wrote:

> Let us not forget that the main purpose of such protocol is to enable other
> users contribute their own client implementations for various languages.
> Also, most JDBC and ODBC use cases work in thread-per-connection mode.
>
> I think that if we introduce a multi-threaded client here, then it will be
> a lot harder to understand, configure, use, or contribute, so I agree with
> Vladimir.
>
> Let's keep it simple for now.
>
> D.
>
> On Wed, Aug 2, 2017 at 10:37 AM, Yakov Zhdanov 
> wrote:
>
> > Agree with Alex. I think our implementations should share single
> connection
> > over threads in the process.
> >
> > --Yakov
> >
>


[GitHub] ignite pull request #2384: IGNITE-5897: Fix session init/end logic. This fix...

2017-08-03 Thread nizhikov
GitHub user nizhikov opened a pull request:

https://github.com/apache/ignite/pull/2384

IGNITE-5897: Fix session init/end logic. This fixes tests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/nizhikov/ignite IGNITE-5897

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2384.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2384


commit af07ec59d9d9f16adc34c6ce092ab836d0f3a92e
Author: Nikolay 
Date:   2017-08-03T08:07:01Z

IGNITE-5897: Fix session init/end logic. This fixes tests




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] ignite pull request #2383: IGNITE-5912: Redis EXPIRE/PEXPIRE commands.

2017-08-03 Thread shroman
GitHub user shroman opened a pull request:

https://github.com/apache/ignite/pull/2383

IGNITE-5912: Redis EXPIRE/PEXPIRE commands.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shroman/ignite IGNITE-5912

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2383.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2383


commit 696c1613d41c741a66241da6214559f2f5dd1494
Author: shroman 
Date:   2017-08-03T07:59:50Z

IGNITE-5912: Redis EXPIRE/PEXPIRE commands.

commit 0c86dc18d55749df6e1d2282a2edd222c0330833
Author: shroman 
Date:   2017-08-03T08:08:18Z

IGNITE-5912: Redis EXPIRE/PEXPIRE commands. Replaced lambdas with anonymous 
classes for Java 7.






[jira] [Created] (IGNITE-5913) Web console: code (spring\java) is not generated for field 'Long query timeout:' on cluster level in Miscellaneous group

2017-08-03 Thread Pavel Konstantinov (JIRA)
Pavel Konstantinov created IGNITE-5913:
--

 Summary: Web console: code (spring\java) is not generated for 
field 'Long query timeout:' on cluster level in Miscellaneous group
 Key: IGNITE-5913
 URL: https://issues.apache.org/jira/browse/IGNITE-5913
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.1
Reporter: Pavel Konstantinov
Priority: Minor
 Fix For: 2.2


It exists in the Summary but is not generated in the Miscellaneous group.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5912) [Redis] EXPIRE/PEXPIRE on keys

2017-08-03 Thread Roman Shtykh (JIRA)
Roman Shtykh created IGNITE-5912:


 Summary: [Redis] EXPIRE/PEXPIRE on keys
 Key: IGNITE-5912
 URL: https://issues.apache.org/jira/browse/IGNITE-5912
 Project: Ignite
  Issue Type: New Feature
Reporter: Roman Shtykh
Assignee: Roman Shtykh


https://redis.io/commands/expire
https://redis.io/commands/pexpire





[GitHub] ignite pull request #2382: IGNITE-5910 Method stopGrid(name) doesn't work in...

2017-08-03 Thread daradurvs
GitHub user daradurvs opened a pull request:

https://github.com/apache/ignite/pull/2382

IGNITE-5910 Method stopGrid(name) doesn't work in multiJvm mode



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/daradurvs/ignite ignite-5910

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2382.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2382


commit fe0534f41ef994b644ec628435583bc79f0181fd
Author: daradurvs 
Date:   2017-08-03T07:46:14Z

ignite-5910: fixed ClassCastException






[jira] [Created] (IGNITE-5911) .NET: EntityFramework cache eager update

2017-08-03 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-5911:
--

 Summary: .NET: EntityFramework cache eager update
 Key: IGNITE-5911
 URL: https://issues.apache.org/jira/browse/IGNITE-5911
 Project: Ignite
  Issue Type: Improvement
  Components: platforms
Reporter: Pavel Tupitsyn


Ignite EntityFramework 2nd level cache invalidates cache entries when one of 
the related entity sets has been updated.

We can add an option to re-run affected queries and populate invalidated 
entries eagerly so that user queries hit the cache right away.

Command text is already stored as part of the cache key, so 
{{DbCache.InvalidateSets}} could retrieve affected keys and run queries.

User list thread:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-NET-and-Entityframework-refresh-cached-queries-instead-of-invalidating-it-td15916.html





[jira] [Created] (IGNITE-5910) Method stopGrid(name) doesn't work in multiJvm mode

2017-08-03 Thread Vyacheslav Daradur (JIRA)
Vyacheslav Daradur created IGNITE-5910:
--

 Summary: Method stopGrid(name) doesn't work in multiJvm mode
 Key: IGNITE-5910
 URL: https://issues.apache.org/jira/browse/IGNITE-5910
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.1
Reporter: Vyacheslav Daradur
Assignee: Vyacheslav Daradur
 Fix For: 2.2


{code:title=Exception at call}
java.lang.ClassCastException: 
org.apache.ignite.testframework.junits.multijvm.IgniteProcessProxy cannot be 
cast to org.apache.ignite.internal.IgniteKernal
{code}

{code:title=Reproducer snippet}
/** {@inheritDoc} */
@Override protected boolean isMultiJvm() {
return true;
}

/**
 * @throws Exception If failed.
 */
public void testGrid() throws Exception {
try {
startGrid(0);

startGrid(1);
}
finally {
stopGrid(1);

stopGrid(0);
}
}
{code}





Re: Create own SQL parser

2017-08-03 Thread Alexander Paschenko
Dmitry,

Ignite mode in H2 is about parsing only; it does not address the other
issues pointed out by Vlad. Implementing the _parser_ itself surely won't
take years, whereas things like smart query optimization probably don't
have and can't have a proper "finished" state - they are something you can
improve again and again, as there is never too much performance.

My opinion: we should use Ignite mode in H2 to extend the classic commands
with keywords of our own, while laying the foundation for our own SQL
engine by starting work on a parser core that handles Ignite-specific
commands. I'll try to come up with a prototype in the near future.

- Alex
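As an illustration of the "try Ignite's own parser first, fall back to H2" routing idea, here is a minimal sketch. The class, method, and command list are made up for the example; they are not actual Ignite internals, and a real implementation would return a parsed model rather than a label.

```java
/** Illustrative only: route Ignite-specific statements to a native parser, fall back to H2. */
class SqlRouter {
    /** Hypothetical subset of statements the native parser would recognize. */
    private static final String[] IGNITE_PREFIXES = {"CREATE TABLE", "DROP TABLE", "CREATE INDEX"};

    /** Returns "IGNITE" if the native parser should handle the statement, "H2" otherwise. */
    static String route(String sql) {
        String norm = sql.trim().toUpperCase();

        for (String prefix : IGNITE_PREFIXES) {
            if (norm.startsWith(prefix))
                return "IGNITE"; // tryParseWithIgnite(sql) would succeed here.
        }

        return "H2"; // Delegate SELECT/DML parsing to H2.
    }
}
```

The point of the sketch is that the fallback keeps the two parsers decoupled: new Ignite-only commands are added to the native side without touching H2 at all.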

2017-08-03 4:43 GMT+03:00 Dmitriy Setrakyan :
> Vladimir, this sounds like a task that would take years to design,
> implement, and polish. Can we just aim to improve the Ignite mode in H2,
>> which is much more feasible in my view?
>
> On Wed, Aug 2, 2017 at 2:59 PM, Vladimir Ozerov 
> wrote:
>
>> Alex P.,
>>
>> The very problem with non-SELECT and non-DML commands is that we do not
>> support most of what is supported by H2, and vice versa - H2 doesn't
>> support most things that we need, like cache properties, templates, inline
>> indexes, etc.. Another important thing is that at some point we will add a
>> kind of SQL-based command-line or scripting utility(es) for Ignite [1].
>> Mature products has rich set of commands, which are outside of SQL
>> standard. E.g., we would like to manage transaction settings (concurrency,
>> isolation) on per-session basis, grant and rewoke Ignite-specific roles,
>> gather some metrics from the cluster, etc.. It doesn't make sense to
>> develop it in H2.
>>
>> Actual H2 parsing logic takes about a dozen KLOCs. But parsing core is much
>> smaller and most of overall parser's code relates to SELECT and DML
>> statements, which are mostly not needed for DDL and administrative
>> commands. That said, I think it is perfectly fine to move Ignite-specific
>> commands to Ignite's own parser.
>>
>> Alex K.,
>>
>> Having the whole own SQL engine is very cool thing, as it gives us
>> unlimited capabilities in terms of performance and UX. But this is a very
>> huge thing. H2's core which is used in Ignite is about as big as all
>> existing Ignite's SQL logic in terms of lines of codes. So I would put this
>> question out of scope for now. We should focus on new features, usability
>> and documentation for now, and try getting as much as possible from the
>> given architecture.
>>
>> [1] https://issues.apache.org/jira/browse/IGNITE-5608
>>
>>
>> On Wed, Aug 2, 2017 at 3:38 PM, Alexey Kuznetsov 
>> wrote:
>>
>> > From my opinion we could start investing in our own parser and SQL engine
>> > step by step, little by little
>> >  and one day drop H2 at all.
>> >
>> >  Having own parser and engine will give us a freedom to any optimizations
>> > and any syntax we like.
>> >
>> > Also that will be for one dependency less and we could have SQL out of
>> the
>> > box with no third-party dependencies.
>> >
>> >
>> > On Wed, Aug 2, 2017 at 7:25 PM, Alexander Paschenko <
>> > alexander.a.pasche...@gmail.com> wrote:
>> >
>> > > I'd like to point out that we already do have Ignite mode in H2 parser
>> > > (thanks Sergi) and thus have AFFINITY keyword support. Is is suggested
>> > > that we should abandon H2 way at all? Or should we suggest adding to
>> > > H2 only rather minor stuff (like some keywords for existing commands)
>> > > whilst introducing completely new commands for our own parser?
>> > >
>> > > - Alex
>> > >
>> > > 2017-08-02 9:01 GMT+03:00 Vladimir Ozerov :
>> > > > No, it will work as follows:
>> > > >
>> > > > Model parse(String sql) {
>> > > > Model res = tryParseWithIgnite(sql); // Parse what we can
>> > > >
>> > > > if (res == null)
>> > > > res = parseWithH2(sql);
>> > > >
>> > > > return res;
>> > > > }
>> > > >
>> > > > We will need a number of custom commands which are not present in H2.
>> > > >
>> > > > On Wed, Aug 2, 2017 at 3:44 AM, Dmitriy Setrakyan <
>> > dsetrak...@apache.org
>> > > >
>> > > > wrote:
>> > > >
>> > > >> On Tue, Aug 1, 2017 at 11:08 PM, Vladimir Ozerov <
>> > voze...@gridgain.com>
>> > > >> wrote:
>> > > >>
>> > > >> > Own parser capable of processing non-SELECT and non-DML
>> statements.
>> > > >> >
>> > > >>
>> > > >> And how will it integrate with H2 parser? Or are you suggesting that
>> > we
>> > > get
>> > > >> rid of H2 SQL parser?
>> > > >>
>> > > >>
>> > > >> >
>> > > >> > On Tue, Aug 1, 2017 at 9:44 PM,  wrote:
>> > > >> >
>> > > >> > > Vova, I am not sure what you are proposing... extending H2
>> parser
>> > > with
>> > > >> > new
>> > > >> > > syntax or a brand new parser?
>> > > >> > >
>> > > >> > > ⁣D.
>> > > >> > >
>> > > >> > > On Aug 1, 2017, 4:26 PM, at 4:26 PM, Vladimir Ozerov <
>> > > >> > voze...@gridgain.com>
>> > > >> > > wrote:
>> > > >> > > 

[jira] [Created] (IGNITE-5909) Web console: Implement editable list

2017-08-03 Thread Dmitriy Shabalin (JIRA)
Dmitriy Shabalin created IGNITE-5909:


 Summary: Web console: Implement editable list
 Key: IGNITE-5909
 URL: https://issues.apache.org/jira/browse/IGNITE-5909
 Project: Ignite
  Issue Type: Improvement
  Components: wizards
Reporter: Dmitriy Shabalin
Assignee: Dmitriy Shabalin








[jira] [Created] (IGNITE-5908) Web console: may fail to open non-root page if user is not authorized

2017-08-03 Thread Pavel Konstantinov (JIRA)
Pavel Konstantinov created IGNITE-5908:
--

 Summary: Web console: may fail to open non-root page if user is 
not authorized
 Key: IGNITE-5908
 URL: https://issues.apache.org/jira/browse/IGNITE-5908
 Project: Ignite
  Issue Type: Bug
Reporter: Pavel Konstantinov


For example try to open http://localhost/configuration/basic
Expected: should redirect to home page





Re: Spark Data Frame support in Ignite

2017-08-03 Thread Dmitriy Setrakyan
On Thu, Aug 3, 2017 at 9:04 AM, Jörn Franke  wrote:

> I think the development effort would still be higher. Everything would
> have to be put via JDBC into Ignite, then checkpointing would have to be
> done via JDBC (again additional development effort), a lot of conversion
> from spark internal format to JDBC and back to ignite internal format.
> Pagination I do not see as a useful feature for managing large data volumes
> from databases - on the contrary it is very inefficient (and one would to
> have to implement logic to fetch al pages). Pagination was also never
> thought of for fetching large data volumes, but for web pages showing a
> small result set over several pages, where the user can click manually for
> the next page (what they anyway not do most of the time).
>
> While it might be a quick solution , I think a deeper integration than
> JDBC would be more beneficial.
>

Jorn, I completely agree. However, we have not been able to find a
contributor for this feature. You sound like you have sufficient domain
expertise in Spark and Ignite. Would you be willing to help out?


> > On 3. Aug 2017, at 08:57, Dmitriy Setrakyan 
> wrote:
> >
> >> On Thu, Aug 3, 2017 at 8:45 AM, Jörn Franke 
> wrote:
> >>
> >> I think the JDBC one is more inefficient, slower requires too much
> >> development effort. You can also check the integration of Alluxio with
> >> Spark.
> >>
> >
> > As far as I know, Alluxio is a file system, so it cannot use JDBC.
> Ignite,
> > on the other hand, is an SQL system and works well with JDBC. As far as
> the
> > development effort, we are dealing with SQL, so I am not sure why JDBC
> > would be harder.
> >
> > Generally speaking, until Ignite provides native data frame integration,
> > having JDBC-based integration out of the box is minimally acceptable.
> >
> >
> >> Then, in general I think JDBC has never designed for large data volumes.
> >> It is for executing queries and getting a small or aggregated result set
> >> back. Alternatively for inserting / updating single rows.
> >>
> >
> > Agree in general. However, Ignite JDBC is designed to work with larger
> data
> > volumes and supports data pagination automatically.
> >
> >
> >>> On 3. Aug 2017, at 08:17, Dmitriy Setrakyan 
> >> wrote:
> >>>
> >>> Jorn, thanks for your feedback!
> >>>
> >>> Can you explain how the direct support would be different from the JDBC
> >>> support?
> >>>
> >>> Thanks,
> >>> D.
> >>>
>  On Thu, Aug 3, 2017 at 7:40 AM, Jörn Franke 
> >> wrote:
> 
>  These are two different things. Spark applications themselves do not
> use
>  JDBC - it is more for non-spark applications to access Spark
> DataFrames.
> 
>  A direct support by Ignite would make more sense. Although you have in
>  theory IGFS, if the user is using HDFS, which might not be the case.
> It
> >> is
>  now also very common to use Object stores, such as S3.
>  Direct support could be leverage for interactive analysis or different
>  Spark applications sharing data.
> 
> > On 3. Aug 2017, at 05:12, Dmitriy Setrakyan 
>  wrote:
> >
> > Igniters,
> >
> > We have had the integration with Spark Data Frames on our roadmap
> for a
> > while:
> > https://issues.apache.org/jira/browse/IGNITE-3084
> >
> > However, while browsing Spark documentation, I cam across the generic
>  JDBC
> > data frame support in Spark:
> > https://spark.apache.org/docs/latest/sql-programming-guide.
>  html#jdbc-to-other-databases
> >
> > Given that Ignite has a JDBC driver, does it mean that it
> transitively
>  also
> > supports Spark data frames? If yes, we should document it.
> >
> > D.
> 
> >>
>


Re: Spark Data Frame support in Ignite

2017-08-03 Thread Jörn Franke
I think the development effort would still be higher. Everything would have to 
be put via JDBC into Ignite, then checkpointing would have to be done via JDBC 
(again, additional development effort), plus a lot of conversion from Spark's 
internal format to JDBC and back to Ignite's internal format. Pagination I do 
not see as a useful feature for managing large data volumes from databases - on 
the contrary, it is very inefficient (and one would have to implement logic to 
fetch all pages). Pagination was also never intended for fetching large data 
volumes, but for web pages showing a small result set over several pages, where 
the user can click manually for the next page (which they mostly don't do anyway).

While it might be a quick solution , I think a deeper integration than JDBC 
would be more beneficial. 

> On 3. Aug 2017, at 08:57, Dmitriy Setrakyan  wrote:
> 
>> On Thu, Aug 3, 2017 at 8:45 AM, Jörn Franke  wrote:
>> 
>> I think the JDBC one is more inefficient, slower requires too much
>> development effort. You can also check the integration of Alluxio with
>> Spark.
>> 
> 
> As far as I know, Alluxio is a file system, so it cannot use JDBC. Ignite,
> on the other hand, is an SQL system and works well with JDBC. As far as the
> development effort, we are dealing with SQL, so I am not sure why JDBC
> would be harder.
> 
> Generally speaking, until Ignite provides native data frame integration,
> having JDBC-based integration out of the box is minimally acceptable.
> 
> 
>> Then, in general I think JDBC has never designed for large data volumes.
>> It is for executing queries and getting a small or aggregated result set
>> back. Alternatively for inserting / updating single rows.
>> 
> 
> Agree in general. However, Ignite JDBC is designed to work with larger data
> volumes and supports data pagination automatically.
> 
> 
>>> On 3. Aug 2017, at 08:17, Dmitriy Setrakyan 
>> wrote:
>>> 
>>> Jorn, thanks for your feedback!
>>> 
>>> Can you explain how the direct support would be different from the JDBC
>>> support?
>>> 
>>> Thanks,
>>> D.
>>> 
 On Thu, Aug 3, 2017 at 7:40 AM, Jörn Franke 
>> wrote:
 
 These are two different things. Spark applications themselves do not use
 JDBC - it is more for non-spark applications to access Spark DataFrames.
 
 A direct support by Ignite would make more sense. Although you have in
 theory IGFS, if the user is using HDFS, which might not be the case. It
>> is
 now also very common to use Object stores, such as S3.
 Direct support could be leverage for interactive analysis or different
 Spark applications sharing data.
 
> On 3. Aug 2017, at 05:12, Dmitriy Setrakyan 
 wrote:
> 
> Igniters,
> 
> We have had the integration with Spark Data Frames on our roadmap for a
> while:
> https://issues.apache.org/jira/browse/IGNITE-3084
> 
> However, while browsing Spark documentation, I cam across the generic
 JDBC
> data frame support in Spark:
> https://spark.apache.org/docs/latest/sql-programming-guide.
 html#jdbc-to-other-databases
> 
> Given that Ignite has a JDBC driver, does it mean that it transitively
 also
> supports Spark data frames? If yes, we should document it.
> 
> D.
 
>> 


Re: Spark Data Frame support in Ignite

2017-08-03 Thread Dmitriy Setrakyan
On Thu, Aug 3, 2017 at 8:45 AM, Jörn Franke  wrote:

> I think the JDBC one is more inefficient, slower requires too much
> development effort. You can also check the integration of Alluxio with
> Spark.
>

As far as I know, Alluxio is a file system, so it cannot use JDBC. Ignite,
on the other hand, is an SQL system and works well with JDBC. As far as the
development effort, we are dealing with SQL, so I am not sure why JDBC
would be harder.

Generally speaking, until Ignite provides native data frame integration,
having JDBC-based integration out of the box is minimally acceptable.


> Then, in general I think JDBC has never designed for large data volumes.
> It is for executing queries and getting a small or aggregated result set
> back. Alternatively for inserting / updating single rows.
>

Agree in general. However, Ignite JDBC is designed to work with larger data
volumes and supports data pagination automatically.
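As a concrete illustration of wiring the two together through Spark's generic JDBC data source: the helper class below only builds the connection properties, while the commented usage line shows how they would be passed to Spark. The URL, table name, and helper class are placeholder assumptions for the sketch, not a tested integration; running it for real requires a Spark session and a running Ignite node with the thin JDBC driver on the classpath.

```java
import java.util.Properties;

/** Sketch: JDBC connection properties for reading an Ignite table as a Spark DataFrame. */
class IgniteJdbcOptions {
    /** Builds the properties Spark's DataFrameReader.jdbc(...) expects. */
    static Properties build() {
        Properties props = new Properties();

        // Ignite's thin JDBC driver class (available since Ignite 2.1).
        props.setProperty("driver", "org.apache.ignite.IgniteJdbcThinDriver");

        return props;
    }
}

// Usage (needs Spark and a running Ignite node; "PERSON" is a placeholder table):
// Dataset<Row> df = spark.read()
//     .jdbc("jdbc:ignite:thin://127.0.0.1", "PERSON", IgniteJdbcOptions.build());
```

If this works as expected, documenting it would give users an out-of-the-box data frame path until a native integration lands under IGNITE-3084.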


> > On 3. Aug 2017, at 08:17, Dmitriy Setrakyan 
> wrote:
> >
> > Jorn, thanks for your feedback!
> >
> > Can you explain how the direct support would be different from the JDBC
> > support?
> >
> > Thanks,
> > D.
> >
> >> On Thu, Aug 3, 2017 at 7:40 AM, Jörn Franke 
> wrote:
> >>
> >> These are two different things. Spark applications themselves do not use
> >> JDBC - it is more for non-spark applications to access Spark DataFrames.
> >>
> >> A direct support by Ignite would make more sense. Although you have in
> >> theory IGFS, if the user is using HDFS, which might not be the case. It
> is
> >> now also very common to use Object stores, such as S3.
> >> Direct support could be leverage for interactive analysis or different
> >> Spark applications sharing data.
> >>
> >>> On 3. Aug 2017, at 05:12, Dmitriy Setrakyan 
> >> wrote:
> >>>
> >>> Igniters,
> >>>
> >>> We have had the integration with Spark Data Frames on our roadmap for a
> >>> while:
> >>> https://issues.apache.org/jira/browse/IGNITE-3084
> >>>
> >>> However, while browsing Spark documentation, I cam across the generic
> >> JDBC
> >>> data frame support in Spark:
> >>> https://spark.apache.org/docs/latest/sql-programming-guide.
> >> html#jdbc-to-other-databases
> >>>
> >>> Given that Ignite has a JDBC driver, does it mean that it transitively
> >> also
> >>> supports Spark data frames? If yes, we should document it.
> >>>
> >>> D.
> >>
>


Re: Spark Data Frame support in Ignite

2017-08-03 Thread Jörn Franke
I think the JDBC one is more inefficient and slower, and requires too much 
development effort. You can also check the integration of Alluxio with Spark. 
Then, in general, I think JDBC was never designed for large data volumes. It is 
for executing queries and getting a small or aggregated result set back, or 
alternatively for inserting/updating single rows. 

> On 3. Aug 2017, at 08:17, Dmitriy Setrakyan  wrote:
> 
> Jorn, thanks for your feedback!
> 
> Can you explain how the direct support would be different from the JDBC
> support?
> 
> Thanks,
> D.
> 
>> On Thu, Aug 3, 2017 at 7:40 AM, Jörn Franke  wrote:
>> 
>> These are two different things. Spark applications themselves do not use
>> JDBC - it is more for non-spark applications to access Spark DataFrames.
>> 
>> A direct support by Ignite would make more sense. Although you have in
>> theory IGFS, if the user is using HDFS, which might not be the case. It is
>> now also very common to use Object stores, such as S3.
>> Direct support could be leverage for interactive analysis or different
>> Spark applications sharing data.
>> 
>>> On 3. Aug 2017, at 05:12, Dmitriy Setrakyan 
>> wrote:
>>> 
>>> Igniters,
>>> 
>>> We have had the integration with Spark Data Frames on our roadmap for a
>>> while:
>>> https://issues.apache.org/jira/browse/IGNITE-3084
>>> 
>>> However, while browsing Spark documentation, I cam across the generic
>> JDBC
>>> data frame support in Spark:
>>> https://spark.apache.org/docs/latest/sql-programming-guide.
>> html#jdbc-to-other-databases
>>> 
>>> Given that Ignite has a JDBC driver, does it mean that it transitively
>> also
>>> supports Spark data frames? If yes, we should document it.
>>> 
>>> D.
>> 


Re: Spark Data Frame support in Ignite

2017-08-03 Thread Dmitriy Setrakyan
Jorn, thanks for your feedback!

Can you explain how the direct support would be different from the JDBC
support?

Thanks,
D.

On Thu, Aug 3, 2017 at 7:40 AM, Jörn Franke  wrote:

> These are two different things. Spark applications themselves do not use
> JDBC - it is more for non-spark applications to access Spark DataFrames.
>
> A direct support by Ignite would make more sense. Although you have in
> theory IGFS, if the user is using HDFS, which might not be the case. It is
> now also very common to use Object stores, such as S3.
> Direct support could be leverage for interactive analysis or different
> Spark applications sharing data.
>
> > On 3. Aug 2017, at 05:12, Dmitriy Setrakyan 
> wrote:
> >
> > Igniters,
> >
> > We have had the integration with Spark Data Frames on our roadmap for a
> > while:
> > https://issues.apache.org/jira/browse/IGNITE-3084
> >
> > However, while browsing Spark documentation, I cam across the generic
> JDBC
> > data frame support in Spark:
> > https://spark.apache.org/docs/latest/sql-programming-guide.
> html#jdbc-to-other-databases
> >
> > Given that Ignite has a JDBC driver, does it mean that it transitively
> also
> > supports Spark data frames? If yes, we should document it.
> >
> > D.
>


[jira] [Created] (IGNITE-5907) Add validation to Basic screen for Off-heap size

2017-08-03 Thread Pavel Konstantinov (JIRA)
Pavel Konstantinov created IGNITE-5907:
--

 Summary: Add validation to Basic screen for Off-heap size
 Key: IGNITE-5907
 URL: https://issues.apache.org/jira/browse/IGNITE-5907
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.1
Reporter: Pavel Konstantinov
 Fix For: 2.2


Off-heap size must be greater than the initial off-heap size.


