Asking to become a contributor: tyabuki

2018-08-23 Thread Toru YABUKI
Hello, Apache Ignite community.

Can you please enable contributor rights on IGNITE for my JIRA username tyabuki?

Best regards
Toru Yabuki


[GitHub] ignite pull request #4599: IGNITE-9353: Removed "Known issue, possible deadl...

2018-08-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4599


---


Re: Apache Ignite 2.7 release

2018-08-23 Thread Pavel Petroshenko
Hi Nikolay,

Python [1], PHP [2], and Node.js [3] thin clients will get into the release.

Thanks,
p.

[1] https://jira.apache.org/jira/browse/IGNITE-7782
[2] https://jira.apache.org/jira/browse/IGNITE-7783
[3] https://jira.apache.org/jira/browse/IGNITE-


On Tue, Aug 21, 2018 at 12:20 PM, Dmitriy Setrakyan 
wrote:

> Thanks, Nikolay!
>
> I think it is important to include the links to all important Jira tickets
> in this thread, so that the community can track them.
>
> D.
>
> On Tue, Aug 21, 2018 at 12:06 AM, Nikolay Izhikov 
> wrote:
>
> > Hello, Dmitriy.
> >
> > I think Transparent Data Encryption will be available in 2.7
> >
> > On Mon, 20/08/2018 at 13:20 -0700, Dmitriy Setrakyan wrote:
> > > Hi Nikolay,
> > >
> > > Thanks for being the release manager!
> > >
> > > I am getting a bit lost in all these tickets. Can we specify some
> > > high-level tickets, not plain bug fixes, that will be interesting
> > > for the community to notice?
> > >
> > > For example, here are some significant tasks that the community is
> either
> > > working on or has been working on:
> > >
> > > - Node.JS client
> > > - Python client
> > > - Transactional SQL (MVCC)
> > > - service grid stabilization
> > > - SQL memory utilization improvements
> > > - more?
> > >
> > > Can you please solicit status from the community for these tasks?
> > >
> > > D.
> > >
> > > On Mon, Aug 20, 2018 at 11:22 AM, Nikolay Izhikov  >
> > > wrote:
> > >
> > > > Hello, Igniters.
> > > >
> > > > I'm the release manager of Apache Ignite 2.7.
> > > >
> > > > It's time to start the release discussion. [1]
> > > >
> > > > The current code freeze date is September 30.
> > > > If you have any objections, please respond to this thread.
> > > >
> > > > [1] https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.7
> >
>


Re: Compression prototype

2018-08-23 Thread Denis Magda
Hi Ilya,

Sounds terrific! Is this part of the following Ignite enhancement proposal?
https://cwiki.apache.org/confluence/display/IGNITE/IEP-20%3A+Data+Compression+in+Ignite

--
Denis

On Thu, Aug 23, 2018 at 5:17 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> My plan was to add a compression section to the cache configuration, where you
> can enable compression, enable key compression (which has heavier
> performance implications), adjust dictionary gathering settings, and in the
> future possibly choose between algorithms. In fact I'm not sure about that
> last part, since my assumption is that you can always just use the latest, but
> maybe we can have e.g. a very fast but weaker one vs. a slower but stronger one.
>
> I'm not sure yet whether we should share a dictionary between all caches or
> have a separate dictionary for every cache.
>
> With regards to data format, of course there will be room for further
> extension.
>
> Regards,
>
> --
> Ilya Kasnacheev
>
> 2018-08-23 15:13 GMT+03:00 Sergey Kozlov :
>
> > Hi Ilya
> >
> > Is there a plan to introduce it as an option of the Ignite configuration? If
> > so, instead of a boolean type I suggest using an enum, to reserve the
> > ability to extend compression algorithms in the future.
> >
> > On Thu, Aug 23, 2018 at 1:09 PM, Ilya Kasnacheev <
> > ilya.kasnach...@gmail.com>
> > wrote:
> >
> > > Hello!
> > >
> > > I want to share with the developer community my compression prototype.
> > >
> > > Long story short, it compresses BinaryObjects' byte[] as they are written
> > > to a Durable Memory page, operating on a pre-built dictionary. The typical
> > > compression ratio is 0.4 (meaning 2.5x compression) using a custom
> > > LZW+Huffman codec. Metadata, indexes and primitive values are entirely
> > > unaffected.
> > >
> > > This is akin to DB2's table-level compression[1] but independently
> > > invented.
> > >
> > > In Yardstick tests the performance hit is -6% with PDS and up to -25% (in
> > > throughput) with in-memory loads. It also means you can fit roughly twice
> > > as much data into the same in-memory cluster, or have a higher RAM/disk
> > > ratio with a PDS cluster, saving on hardware or decreasing latency.
> > >
> > > The code is available as PR 4295[2] (set IGNITE_ENABLE_COMPRESSION=true
> > > to activate). Note that it will not presently survive a PDS node restart.
> > > The impact of the patch is very small, so it should be applicable to most
> > > 2.x releases.
> > >
> > > Sure, there's a long way to go before this prototype has any hope of being
> > > included, but first I would like to hear input from fellow Igniters.
> > >
> > > See also IEP-20[3].
> > >
> > > 1. https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.5.0/com.ibm.db2.luw.admin.dbobj.doc/doc/c0052331.html
> > > 2. https://github.com/apache/ignite/pull/4295
> > > 3. https://cwiki.apache.org/confluence/display/IGNITE/IEP-20%3A+Data+Compression+in+Ignite
> > >
> > > Regards,
> > >
> > > --
> > > Ilya Kasnacheev
> > >
> >
> >
> >
> > --
> > Sergey Kozlov
> > GridGain Systems
> > www.gridgain.com
> >
>
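The preset-dictionary idea discussed above can be illustrated with stock java.util.zip, which supports compression dictionaries out of the box. This is only an analogy, not the prototype's code: the prototype uses a custom LZW+Huffman codec, and the class name and sample dictionary below are made up for the illustration.

```java
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Illustration of dictionary-based compression: a shared dictionary built
// from typical records improves compression of short, similar byte[] payloads,
// which is the same principle the Ignite prototype relies on.
public class DictionaryCompressionDemo {
    // Pretend this was gathered from previously seen cache entries.
    static final byte[] DICTIONARY =
        "{\"name\":\"\",\"city\":\"\",\"country\":\"\"}".getBytes();

    static byte[] compress(byte[] data) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setDictionary(DICTIONARY); // must be set before compressing
        deflater.setInput(data);
        deflater.finish();
        byte[] buf = new byte[data.length * 2 + 64];
        int len = deflater.deflate(buf);
        deflater.end();
        byte[] out = new byte[len];
        System.arraycopy(buf, 0, out, 0, len);
        return out;
    }

    static byte[] decompress(byte[] data, int origLen) throws DataFormatException {
        Inflater inflater = new Inflater();
        inflater.setInput(data);
        byte[] out = new byte[origLen];
        int off = 0;
        while (off < origLen) {
            int n = inflater.inflate(out, off, origLen - off);
            // The inflater signals when the stream was built with a dictionary.
            if (n == 0 && inflater.needsDictionary())
                inflater.setDictionary(DICTIONARY);
            else
                off += n;
        }
        inflater.end();
        return out;
    }

    public static void main(String[] args) throws DataFormatException {
        byte[] rec = "{\"name\":\"Ilya\",\"city\":\"Moscow\",\"country\":\"RU\"}".getBytes();
        byte[] packed = compress(rec);
        byte[] unpacked = decompress(packed, rec.length);
        System.out.println(new String(unpacked));
    }
}
```

Whether sharing one dictionary across caches pays off depends on how similar the cached records are, which is exactly the open question raised in the thread.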


Re: affinityBackupFilter for AWS Availability Zones

2018-08-23 Thread David Harvey
Added IGNITE-9365

On Thu, Aug 23, 2018 at 3:56 PM Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:

> Hi David,
>
> With the Docker image you can actually use additional libraries by
> providing URLs to JARs via EXTERNAL_LIBS property. Please refer to this
> page: https://apacheignite.readme.io/docs/docker-deployment
>
> But anyway, I believe that such a contribution might be very valuable for
> Ignite. Feel free to create a ticket.
>
> -Val
>
> On Thu, Aug 23, 2018 at 11:58 AM David Harvey 
> wrote:
>
> > I need an affinityBackupFilter that will prevent backups from running in
> > the same AWS availability zone. (A single availability zone has the
> > characteristic that some or all of the EC2 instances in that zone can
> > fail together due to a single fault. You have no control over the hosts
> > on which the EC2 instance VMs run in AWS, except by controlling the
> > availability zone.)
> >
> > I could write a few lines of custom code, but then I have to get it
> > deployed on all nodes in the cluster, and peer class loading will not
> > work. So I cannot use an off-the-shelf Docker image, for example. That
> > code should just be part of Ignite.
> >
> > I was thinking of adding a new class along these lines, where the apply
> > function will return true only if none of the node's attributes match
> > those of any of the nodes in the list. This would become part of the
> > code base, but would only be used if configured as the affinityBackupFilter:
> >
> > ClusterNodeNoAttributesMatchBiPredicate implements
> > IgniteBiPredicate<ClusterNode, List<ClusterNode>> {
> >
> > ClusterNodeNoAttributesMatchBiPredicate(String[] attributeNames)
> > {}
> >
> > For AvailabilityZones, there would be only one attribute examined, but we
> > have some potential use cases for distributing backups across two
> > sub-groups of an AZ.
> >
> > Alternately, we could enhance the RendezvousAffinityFunction to allow one
> > or more arbitrary attributes to be compared  to determine neighbors,
> > rather  than only org.apache.ignite.macs, and to add a setting that
> > controls whether backups should be placed on neighbors if they can't be
> > placed anywhere else.
> >
> > If I have 2 backups and three availability zones (AZ), I want one copy of
> > the data in each AZ. If all nodes in one AZ fail, I want to be able to
> > decide either to try to get to three copies anyway, increasing the per-node
> > footprint by 50%, or to only run with one backup. This would also give
> > me a convoluted way to change the number of backups of a cache
> > dynamically: start the cache with a large number of backups, but don't
> > provide a location where the backup would be allowed to run initially.
> >
>


[jira] [Created] (IGNITE-9365) Force backups to different AWS availability zones using only Spring XML

2018-08-23 Thread David Harvey (JIRA)
David Harvey created IGNITE-9365:


 Summary: Force backups to different AWS availability zones using 
only Spring XML
 Key: IGNITE-9365
 URL: https://issues.apache.org/jira/browse/IGNITE-9365
 Project: Ignite
  Issue Type: Improvement
  Components: cache
 Environment:  
Reporter: David Harvey
Assignee: David Harvey
 Fix For: 2.7


As a developer, I want to be able to force cache backups each to a different
"Availability Zone" when I'm running out-of-the-box Ignite, without additional
JARs installed. "Availability zone" is an AWS feature; other cloud providers
offer the same function under different names. A single availability zone has
the characteristic that some or all of the EC2 instances in that zone can fail
together due to a single fault. You have no control over the hosts on which
the EC2 instance VMs run in AWS, except by controlling the availability zone.

I could write a few lines of a custom affinityBackupFilter and configure it in
a RendezvousAffinityFunction, but then I have to get it deployed on all nodes
in the cluster, and peer class loading will not work for this. The code to do
this should just be part of Ignite.
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Binary Client Protocol client hangs in case of OOM on server

2018-08-23 Thread dmitrievanthony
Is it a parameter of the query? I see it in the list of OP_QUERY_SQL parameters,
but not in the list of OP_QUERY_SCAN parameters, which I use
(https://apacheignite.readme.io/v2.6/docs/binary-client-protocol-sql-operations).



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


Re: affinityBackupFilter for AWS Availability Zones

2018-08-23 Thread Valentin Kulichenko
Hi David,

With the Docker image you can actually use additional libraries by
providing URLs to JARs via EXTERNAL_LIBS property. Please refer to this
page: https://apacheignite.readme.io/docs/docker-deployment

But anyway, I believe that such a contribution might be very valuable for
Ignite. Feel free to create a ticket.

-Val

On Thu, Aug 23, 2018 at 11:58 AM David Harvey  wrote:

> I need an affinityBackupFilter that will prevent backups from running in
> the same AWS availability zone. (A single availability zone has the
> characteristic that some or all of the EC2 instances in that zone can
> fail together due to a single fault. You have no control over the hosts
> on which the EC2 instance VMs run in AWS, except by controlling the
> availability zone.)
>
> I could write a few lines of custom code, but then I have to get it
> deployed on all nodes in the cluster, and peer class loading will not
> work. So I cannot use an off-the-shelf Docker image, for example. That
> code should just be part of Ignite.
>
> I was thinking of adding a new class along these lines, where the apply
> function will return true only if none of the node's attributes match those
> of any of the nodes in the list. This would become part of the code base,
> but would only be used if configured as the affinityBackupFilter:
>
> ClusterNodeNoAttributesMatchBiPredicate implements
> IgniteBiPredicate<ClusterNode, List<ClusterNode>> {
>
> ClusterNodeNoAttributesMatchBiPredicate(String[] attributeNames)
> {}
>
> For AvailabilityZones, there would be only one attribute examined, but we
> have some potential use cases for distributing backups across two
> sub-groups of an AZ.
>
> Alternately, we could enhance the RendezvousAffinityFunction to allow one
> or more arbitrary attributes to be compared  to determine neighbors,
> rather  than only org.apache.ignite.macs, and to add a setting that
> controls whether backups should be placed on neighbors if they can't be
> placed anywhere else.
>
> If I have 2 backups and three availability zones (AZ), I want one copy of
> the data in each AZ. If all nodes in one AZ fail, I want to be able to
> decide either to try to get to three copies anyway, increasing the per-node
> footprint by 50%, or to only run with one backup. This would also give
> me a convoluted way to change the number of backups of a cache
> dynamically: start the cache with a large number of backups, but don't
> provide a location where the backup would be allowed to run initially.
>


affinityBackupFilter for AWS Availability Zones

2018-08-23 Thread David Harvey
I need an affinityBackupFilter that will prevent backups from running in
the same AWS availability zone. (A single availability zone has the
characteristic that some or all of the EC2 instances in that zone can fail
together due to a single fault. You have no control over the hosts on
which the EC2 instance VMs run in AWS, except by controlling the
availability zone.)

I could write a few lines of custom code, but then I have to get it
deployed on all nodes in the cluster, and peer class loading will not
work. So I cannot use an off-the-shelf Docker image, for example. That
code should just be part of Ignite.

I was thinking of adding a new class along these lines, where the apply
function will return true only if none of the node's attributes match those
of any of the nodes in the list. This would become part of the code base,
but would only be used if configured as the affinityBackupFilter:

ClusterNodeNoAttributesMatchBiPredicate implements
IgniteBiPredicate<ClusterNode, List<ClusterNode>> {

ClusterNodeNoAttributesMatchBiPredicate(String[] attributeNames)
{}

For AvailabilityZones, there would be only one attribute examined, but we
have some potential use cases for distributing backups across two
sub-groups of an AZ.

Alternately, we could enhance the RendezvousAffinityFunction to allow one
or more arbitrary attributes to be compared  to determine neighbors,
rather  than only org.apache.ignite.macs, and to add a setting that
controls whether backups should be placed on neighbors if they can't be
placed anywhere else.

If I have 2 backups and three availability zones (AZ), I want one copy of
the data in each AZ. If all nodes in one AZ fail, I want to be able to
decide either to try to get to three copies anyway, increasing the per-node
footprint by 50%, or to only run with one backup. This would also give
me a convoluted way to change the number of backups of a cache
dynamically: start the cache with a large number of backups, but don't
provide a location where the backup would be allowed to run initially.
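A minimal sketch of the predicate logic proposed above, with a plain Map<String, String> standing in for a ClusterNode's attribute map (the real class would implement IgniteBiPredicate<ClusterNode, List<ClusterNode>> and read the node's attributes instead); the attribute name and AZ values below are made up for the illustration:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

// Sketch: returns true only if, for every tracked attribute, the candidate
// node's value differs from that of every node already assigned. A real
// implementation would operate on ClusterNode, not Map.
public class NoAttributesMatchPredicate {
    private final String[] attributeNames;

    public NoAttributesMatchPredicate(String[] attributeNames) {
        this.attributeNames = attributeNames.clone();
    }

    public boolean apply(Map<String, String> candidate,
                         List<Map<String, String>> assigned) {
        for (String attr : attributeNames) {
            for (Map<String, String> node : assigned) {
                String a = candidate.get(attr);
                String b = node.get(attr);
                // Same AZ (or other tracked attribute) already holds a copy: reject.
                if (a != null && a.equals(b))
                    return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        NoAttributesMatchPredicate p =
            new NoAttributesMatchPredicate(new String[] {"AVAILABILITY_ZONE"});
        Map<String, String> az1a = Map.of("AVAILABILITY_ZONE", "us-east-1a");
        Map<String, String> az1b = Map.of("AVAILABILITY_ZONE", "us-east-1b");
        Map<String, String> az1c = Map.of("AVAILABILITY_ZONE", "us-east-1c");

        // Backup lands in a fresh AZ: accepted.
        System.out.println(p.apply(az1c, Arrays.asList(az1a, az1b)));
        // Backup's AZ already holds a copy: rejected.
        System.out.println(p.apply(az1b, Arrays.asList(az1a, az1b)));
    }
}
```

Distributing backups across sub-groups of an AZ, as mentioned above, would just mean passing more than one attribute name to the constructor.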


[GitHub] ignite pull request #4609: IGNITE-9363: Fix Jetty tests.

2018-08-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4609


---


Re: Binary Client Protocol client hangs in case of OOM on server

2018-08-23 Thread Dmitriy Setrakyan
Hi, do you have a query timeout configured?

D.

On Thu, Aug 23, 2018 at 9:09 AM, dmitrievanthony 
wrote:

> When I'm sending a Scan Query request via the Binary Client Protocol with a
> very big page size, I get an OOM on the server node:
>
> java.lang.OutOfMemoryError: Java heap space
> at org.apache.ignite.internal.binary.streams.BinaryMemoryAllocatorChunk.reallocate(BinaryMemoryAllocatorChunk.java:69)
> at org.apache.ignite.internal.binary.streams.BinaryHeapOutputStream.ensureCapacity(BinaryHeapOutputStream.java:65)
> at org.apache.ignite.internal.binary.streams.BinaryAbstractOutputStream.writeByteArray(BinaryAbstractOutputStream.java:41)
> at org.apache.ignite.internal.binary.BinaryWriterExImpl.doWriteByteArray(BinaryWriterExImpl.java:530)
> at org.apache.ignite.internal.binary.BinaryClassDescriptor.write(BinaryClassDescriptor.java:634)
> at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal0(BinaryWriterExImpl.java:223)
> at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:164)
> at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:151)
> at org.apache.ignite.internal.binary.BinaryWriterExImpl.writeObjectDetached(BinaryWriterExImpl.java:1506)
> at org.apache.ignite.internal.processors.platform.client.cache.ClientCacheEntryQueryCursor.writeEntry(ClientCacheEntryQueryCursor.java:44)
> at org.apache.ignite.internal.processors.platform.client.cache.ClientCacheEntryQueryCursor.writeEntry(ClientCacheEntryQueryCursor.java:29)
> at org.apache.ignite.internal.processors.platform.client.cache.ClientCacheQueryCursor.writePage(ClientCacheQueryCursor.java:80)
> at org.apache.ignite.internal.processors.platform.client.cache.ClientCacheQueryResponse.encode(ClientCacheQueryResponse.java:50)
> at org.apache.ignite.internal.processors.platform.client.ClientMessageParser.encode(ClientMessageParser.java:379)
> at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:172)
> at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:45)
> at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
> at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
> at org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>
> Exception in thread "client-connector-#61" java.lang.OutOfMemoryError: Java heap space
> at org.apache.ignite.internal.binary.streams.BinaryMemoryAllocatorChunk.reallocate(BinaryMemoryAllocatorChunk.java:69)
> at org.apache.ignite.internal.binary.streams.BinaryHeapOutputStream.ensureCapacity(BinaryHeapOutputStream.java:65)
> at org.apache.ignite.internal.binary.streams.BinaryAbstractOutputStream.writeByteArray(BinaryAbstractOutputStream.java:41)
> at org.apache.ignite.internal.binary.BinaryWriterExImpl.doWriteByteArray(BinaryWriterExImpl.java:530)
> at org.apache.ignite.internal.binary.BinaryClassDescriptor.write(BinaryClassDescriptor.java:634)
> at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal0(BinaryWriterExImpl.java:223)
> at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:164)
> at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:151)
> at org.apache.ignite.internal.binary.BinaryWriterExImpl.writeObjectDetached(BinaryWriterExImpl.java:1506)
> at org.apache.ignite.internal.processors.platform.client.cache.ClientCacheEntryQueryCursor.writeEntry(ClientCacheEntryQueryCursor.java:44)
> at org.apache.ignite.internal.processors.platform.client.cache.ClientCacheEntryQueryCursor.writeEntry(ClientCacheEntryQueryCursor.java:29)
> at org.apache.ignite.internal.processors.platform.client.cache.ClientCacheQueryCursor.writePage(ClientCacheQueryCursor.java:80)
> at org.apache.ignite.internal.processors.platform.client.cache.ClientCacheQueryResponse.encode(ClientCacheQueryResponse.java:50)
> at org.apache.ignite.internal.processors.platform.client.ClientMessageParser.encode(ClientMessageParser.java:379)
> at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:172)
> at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.
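One defensive option on the server side, independent of any client-side fix, would be to clamp the requested page size before allocating response buffers. A hypothetical sketch; the constant, method name and limit below are not Ignite code:

```java
// Sketch of a guard against unreasonable client-requested page sizes.
// The limit of 16384 entries per page is an assumed, made-up value.
public class PageSizeGuard {
    static final int MAX_PAGE_SIZE = 16_384;

    // Reject non-positive sizes outright; silently clamp oversized ones.
    static int sanitize(int requested) {
        if (requested <= 0)
            throw new IllegalArgumentException("page size must be positive: " + requested);
        return Math.min(requested, MAX_PAGE_SIZE);
    }

    public static void main(String[] args) {
        System.out.println(sanitize(1024));              // within limit: unchanged
        System.out.println(sanitize(Integer.MAX_VALUE)); // clamped to the cap
    }
}
```

Clamping keeps a misbehaving client from taking the whole connector thread pool down with a single request, at the cost of returning fewer rows per page than asked for.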

[jira] [Created] (IGNITE-9364) SetTxTimeoutOnPartitionMapExchangeTest.java hangs on TC

2018-08-23 Thread Alexei Scherbakov (JIRA)
Alexei Scherbakov created IGNITE-9364:
-

 Summary: SetTxTimeoutOnPartitionMapExchangeTest.java hangs on TC
 Key: IGNITE-9364
 URL: https://issues.apache.org/jira/browse/IGNITE-9364
 Project: Ignite
  Issue Type: Bug
Reporter: Alexei Scherbakov
Assignee: Ivan Daschinskiy
 Fix For: 2.7
 Attachments: Ignite_Tests_2.4_Java_8_Basic_1_3255.log.zip

Failed run:

https://ci.ignite.apache.org/viewLog.html?buildId=1707476&buildTypeId=IgniteTests24Java8_Basic1&tab=buildLog



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #4609: IGNITE-9363: Fix Jetty tests.

2018-08-23 Thread AMashenkov
GitHub user AMashenkov opened a pull request:

https://github.com/apache/ignite/pull/4609

IGNITE-9363: Fix Jetty tests.

Minor refactoring.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-9363

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4609.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4609


commit bfba8a163890e163d8623ece25dbedfc7b5cd754
Author: Andrey V. Mashenkov 
Date:   2018-08-23T16:15:19Z

GG-14137: Fix Jetty tests.

Minor refactoring.

Signed-off-by: Andrey V. Mashenkov 




---


[jira] [Created] (IGNITE-9363) Jetty tests forget to stop nodes on finished.

2018-08-23 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-9363:


 Summary: Jetty tests forget to stop nodes on finished.
 Key: IGNITE-9363
 URL: https://issues.apache.org/jira/browse/IGNITE-9363
 Project: Ignite
  Issue Type: Improvement
Reporter: Andrew Mashenkov






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #4608: Remove SnapTreeMap. Remove IgniteInternalCache.is...

2018-08-23 Thread alamar
GitHub user alamar opened a pull request:

https://github.com/apache/ignite/pull/4608

Remove SnapTreeMap. Remove IgniteInternalCache.isMongo*Cache().



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-gg-14136

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4608.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4608


commit d1b019cb73b78250b97e1a4db70dd667ef7c68ea
Author: Ilya Kasnacheev 
Date:   2018-08-23T16:08:48Z

IGNITE-9361 Remove IgniteInternalCache.isMongo*Cache() and other remnants.

commit 6af580b640627e8a4990cd51bb8f26b4ab93675d
Author: Ilya Kasnacheev 
Date:   2018-08-23T16:27:16Z

IGNITE-9360 Remove SnapTreeMap and tests on it.




---


[GitHub] ignite pull request #4607: Ignite 14071 2.4.9

2018-08-23 Thread antkr
GitHub user antkr opened a pull request:

https://github.com/apache/ignite/pull/4607

Ignite 14071 2.4.9



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-14071-2.4.9

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4607.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4607


commit 82da0b5e9dc2ee2339c3fb1023e35d415bf1b647
Author: Pavel Kuznetsov 
Date:   2018-02-07T12:37:52Z

IGNITE-6217: Added benchmark to compare JDBC vs native SQL

This closes #2558

commit 701c6f141f6812ad7bc050a86266e696cf5863ed
Author: tledkov-gridgain 
Date:   2018-02-08T12:29:42Z

IGNITE-6625: JDBC thin: support SSL connection to Ignite node

This closes #3233

commit 2d729bf5c6f6fca9d07be2d57850642ba4b55004
Author: tledkov-gridgain 
Date:   2018-02-09T11:08:15Z

IGNITE-6625: SSL support for thin JDBC: additional fix test; change error 
message

commit 8994f847d7f5f15db73706d9210cdccb1cf3fb26
Author: devozerov 
Date:   2018-02-12T13:34:24Z

IGNITE-7003: Fixed faulty test 
WalModeChangeAdvancedSelfTest.testJoinCoordinator.

commit b142712264007d7397d1594541681a4a7e3d4b93
Author: Igor Sapego 
Date:   2018-02-26T12:02:07Z

IGNITE-7362: Fixed PDO ODBC integration issue

commit a2b2aee52cc65d01f2ecaf9462adc4bd368438ea
Author: Igor Sapego 
Date:   2018-02-28T10:23:12Z

IGNITE-7763: Fixed 32bit tests configurations to prevent OOM

This closes #3557

commit 652f3c4cdbaad40f5de25b06f0c13710aa7f2fd9
Author: Pavel Kuznetsov 
Date:   2018-03-13T12:46:36Z

IGNITE-7531: Data load benchmarks. This closes #3571.

commit 9337a53d9fcd62af87f6760080d350b43e275105
Author: tledkov-gridgain 
Date:   2018-03-16T11:38:38Z

IGNITE-7879: Fixed splitter logic for DISTINCT with subqueries. This closes 
#3634.

commit 7bec8b13cb373002d2a6b1b268d410338259bac2
Author: Igor Sapego 
Date:   2018-03-19T11:17:33Z

IGNITE-7811: Implemented connection failover for ODBC

This closes #3643

commit e512e5e0a2602df0ecfb69b2b5efabce836b04db
Author: Igor Sapego 
Date:   2018-03-20T10:37:02Z

IGNITE-7888: Implemented login timeout for ODBC.

This closes #3657

commit 211fca3a55e84b78ff0c1af04d91e25d6fc846c4
Author: devozerov 
Date:   2018-03-20T11:13:46Z

IGNITE-7984: SQL: improved deadlock handling in DML. This closes #3655.

commit bcd2888d27afe65f1a060e35b99a05ea420979c7
Author: Roman Guseinov 
Date:   2018-02-16T09:57:26Z

IGNITE-7192: Implemented JDBC support FQDN to multiple IPs

This closes #3439

commit d2659d0ec9f6e1a0b905fc7bf23b65fd5522c80a
Author: Alexander Paschenko 
Date:   2018-03-14T09:23:37Z

IGNITE-7253: JDBC thin driver: implemented streaming. This closes #3499. 
This closes #3591.

commit bc9018ef8b116f81b8e06d2ff7651ba2b6c7beae
Author: tledkov-gridgain 
Date:   2018-03-19T08:01:26Z

IGNITE-7029: JDBC thin driver now supports multiple IP addresses with 
transparent failover. This closes #3585.

commit 587022862fd5bdbb076ab6207ae6fd9bc7583c13
Author: Sergey Chugunov 
Date:   2018-03-16T16:24:17Z

IGNITE-7964 rmvId is stored to MetaStorage metapage during operations - 
Fixes #3645.

commit 006ef4d15e4faedb6dfce6ce9637602055b97293
Author: tledkov-gridgain 
Date:   2018-03-22T11:47:06Z

IGNITE-7436: Simple username/password authentication. This closes #3483.

commit 1c7f59c90514670e802d9d07544b00b7562fe6d2
Author: Pavel Tupitsyn 
Date:   2018-01-30T09:48:16Z

.NET: Fix build status icon in README

commit 162df61b305fccfc55e186d07351727f35b55179
Author: Pavel Tupitsyn 
Date:   2018-02-01T11:53:16Z

IGNITE-7561 .NET: Add IServices.GetDynamicServiceProxy

This closes #3457

commit 9a0328ebbc9d52f8e96898a384fa45743d2efa5b
Author: Pavel Tupitsyn 
Date:   2018-02-02T12:01:27Z

.NET: Update README regarding C++ interop and thin client

commit b804cfea51c87724b45b40de4fd58d300c049be1
Author: Pavel Tupitsyn 
Date:   2018-01-31T09:39:19Z

.NET: Suppress API parity check for SSL in ClientConnectorConfiguration

commit 6f8014de7250c4c0d87cbc8764afae4a225f654b
Author: apopov 
Date:   2018-02-13T10:13:15Z

IGNITE-3111 .NET can be now configured SSL without Spring

This closes #3498

commit 5131bcd71ce787cf2c61bf98446f5ec0a616ab1c
Author: Pavel Tupitsin 
Date:   2018-02-16T20:36:01Z

IGNITE-3111 .NET: Configure SSL without Spring - cleanup

* Remove unused members from ISslContextFactory
* Fix namespaces
* Remove unused files
* Cleanup tests

commit 4ac4645dcf6e85883ce0de46ba1253ba8135804e
Author: Pavel Tupitsyn 
Date:   2018-02-18T20:22:27Z

.NET: Fix LoadDllTest, IgniteStartStopTest

commit 8709785814a432f981c30274a55e2ef730667421
Author: Pavel Tupitsyn 
Date:   2018-02-18T20:27:29Z

.NET: Fix SslConfigurationTest

commit 

[jira] [Created] (IGNITE-9362) SQL: Remove NODES.IS_LOCAL attribute

2018-08-23 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-9362:
---

 Summary: SQL: Remove NODES.IS_LOCAL attribute
 Key: IGNITE-9362
 URL: https://issues.apache.org/jira/browse/IGNITE-9362
 Project: Ignite
  Issue Type: Task
  Components: sql
Reporter: Vladimir Ozerov
 Fix For: 2.7


We need to remove the {{IS_LOCAL}} attribute from the {{NODES}} system view. This 
attribute doesn't make sense: it depends on where the SQL query is executed. When 
executed from a JDBC/ODBC driver, the user will receive a strange result, where a 
remote node is displayed as local.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9361) Remove IgniteInternalCache.isMongo*Cache() and other such stuff

2018-08-23 Thread Ilya Kasnacheev (JIRA)
Ilya Kasnacheev created IGNITE-9361:
---

 Summary: Remove IgniteInternalCache.isMongo*Cache() and other such 
stuff
 Key: IGNITE-9361
 URL: https://issues.apache.org/jira/browse/IGNITE-9361
 Project: Ignite
  Issue Type: Improvement
Reporter: Ilya Kasnacheev


Nobody has needed it for a long time. It's all internal API, so we can drop it 
outright.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Binary Client Protocol client hangs in case of OOM on server

2018-08-23 Thread dmitrievanthony
When I'm sending a Scan Query request via the Binary Client Protocol with a very
big page size, I get an OOM on the server node:
java.lang.OutOfMemoryError: Java heap space
at org.apache.ignite.internal.binary.streams.BinaryMemoryAllocatorChunk.reallocate(BinaryMemoryAllocatorChunk.java:69)
at org.apache.ignite.internal.binary.streams.BinaryHeapOutputStream.ensureCapacity(BinaryHeapOutputStream.java:65)
at org.apache.ignite.internal.binary.streams.BinaryAbstractOutputStream.writeByteArray(BinaryAbstractOutputStream.java:41)
at org.apache.ignite.internal.binary.BinaryWriterExImpl.doWriteByteArray(BinaryWriterExImpl.java:530)
at org.apache.ignite.internal.binary.BinaryClassDescriptor.write(BinaryClassDescriptor.java:634)
at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal0(BinaryWriterExImpl.java:223)
at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:164)
at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:151)
at org.apache.ignite.internal.binary.BinaryWriterExImpl.writeObjectDetached(BinaryWriterExImpl.java:1506)
at org.apache.ignite.internal.processors.platform.client.cache.ClientCacheEntryQueryCursor.writeEntry(ClientCacheEntryQueryCursor.java:44)
at org.apache.ignite.internal.processors.platform.client.cache.ClientCacheEntryQueryCursor.writeEntry(ClientCacheEntryQueryCursor.java:29)
at org.apache.ignite.internal.processors.platform.client.cache.ClientCacheQueryCursor.writePage(ClientCacheQueryCursor.java:80)
at org.apache.ignite.internal.processors.platform.client.cache.ClientCacheQueryResponse.encode(ClientCacheQueryResponse.java:50)
at org.apache.ignite.internal.processors.platform.client.ClientMessageParser.encode(ClientMessageParser.java:379)
at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:172)
at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:45)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Exception in thread "client-connector-#61" java.lang.OutOfMemoryError: Java heap space
at org.apache.ignite.internal.binary.streams.BinaryMemoryAllocatorChunk.reallocate(BinaryMemoryAllocatorChunk.java:69)
at org.apache.ignite.internal.binary.streams.BinaryHeapOutputStream.ensureCapacity(BinaryHeapOutputStream.java:65)
at org.apache.ignite.internal.binary.streams.BinaryAbstractOutputStream.writeByteArray(BinaryAbstractOutputStream.java:41)
at org.apache.ignite.internal.binary.BinaryWriterExImpl.doWriteByteArray(BinaryWriterExImpl.java:530)
at org.apache.ignite.internal.binary.BinaryClassDescriptor.write(BinaryClassDescriptor.java:634)
at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal0(BinaryWriterExImpl.java:223)
at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:164)
at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:151)
at org.apache.ignite.internal.binary.BinaryWriterExImpl.writeObjectDetached(BinaryWriterExImpl.java:1506)
at org.apache.ignite.internal.processors.platform.client.cache.ClientCacheEntryQueryCursor.writeEntry(ClientCacheEntryQueryCursor.java:44)
at org.apache.ignite.internal.processors.platform.client.cache.ClientCacheEntryQueryCursor.writeEntry(ClientCacheEntryQueryCursor.java:29)
at org.apache.ignite.internal.processors.platform.client.cache.ClientCacheQueryCursor.writePage(ClientCacheQueryCursor.java:80)
at org.apache.ignite.internal.processors.platform.client.cache.ClientCacheQueryResponse.encode(ClientCacheQueryResponse.java:50)
at org.apache.ignite.internal.processors.platform.client.ClientMessageParser.encode(ClientMessageParser.java:379)
at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:172)
at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:45)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
at

[jira] [Created] (IGNITE-9360) Destroy SnapTreeMap and related classes

2018-08-23 Thread Ilya Kasnacheev (JIRA)
Ilya Kasnacheev created IGNITE-9360:
---

 Summary: Destroy SnapTreeMap and related classes
 Key: IGNITE-9360
 URL: https://issues.apache.org/jira/browse/IGNITE-9360
 Project: Ignite
  Issue Type: Improvement
Reporter: Ilya Kasnacheev
Assignee: Ilya Kasnacheev


It's not used anywhere and no one wants it, and it's a solid block of code.

On a slightly unrelated note, GridCacheProxyImpl.isMongoDataCache() and friends 
probably have to go as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #4546: GG-13761

2018-08-23 Thread AMashenkov
Github user AMashenkov closed the pull request at:

https://github.com/apache/ignite/pull/4546


---


[GitHub] ignite pull request #2812: Modify permissions on $IGNITE_HOME path for OpenS...

2018-08-23 Thread dwimsey
Github user dwimsey closed the pull request at:

https://github.com/apache/ignite/pull/2812


---


Re: Service Grid new design overview

2018-08-23 Thread Anton Vinogradov
Vyacheslav.

It looks like we are able to restart all services on grid startup from the old
definitions (inside the cache) in case persistence is turned on.
So I see no problem in providing such an automated migration path.
Also, we can test it using the compatibility framework.

BTW, does the proposed solution guarantee that services will be
redeployed after each cluster restart, given that we're no longer using the cache?

Thu, Aug 23, 2018 at 15:21, Nikolay Izhikov :

> Hello, Vyacheslav.
>
> Thanks, for sharing your design.
>
> > I have a question about services migration from AI 2.6 to a new solution
>
> Can you describe the consequences of not having a migration solution?
> What will happen on the user side?
>
>
> On Thu, 23/08/2018 at 14:44 +0300, Vyacheslav Daradur wrote:
> > Hi, Igniters!
> >
> > I’m working on Service Grid redesign tasks and design seems to be
> finished.
> >
> > The main goal of the Service Grid redesign is to provide missing guarantees:
> > - Synchronous services deployment/undeployment;
> > - Failover on coordinator change;
> > - Propagation of deployment errors across the cluster;
> > - Introduction of a deployment failures policy;
> > - Prevention of deployment initiators hanging during deployment;
> > - etc.
> >
> > I’d like to ask the community their thoughts about the proposed design
> > to be sure that all important things have been considered.
> >
> > Also, I have a question about services migration from AI 2.6 to the new
> > solution. It's very hard to provide tools for user migration, because
> > of significant changes. We don't use the utility cache anymore. Should we
> > spend time on this?
> >
> > I’ve prepared a definition of new Service Grid design, it’s described
> below:
> >
> > *OVERVIEW*
> >
> > All nodes (servers and clients) are able to host services, but the
> > client nodes are excluded from service deployment by default. The only
> > way to deploy services on client nodes is to specify a node filter in
> > ServiceConfiguration.
> >
> > All deployed services are identified internally by “serviceId”
> > (IgniteUuid). This allows us to build a base for such features as hot
> > redeployment and service’s versioning. It’s important to have the
> > ability to identify and manage services with the same name, but
> > different version.
> >
> > All actions on service state changes are processed according to a unified
> > flow:
> > 1) The initiator sends over disco-spi a request to change service state
> > [deploy, undeploy] (DynamicServicesChangeRequestBatchMessage), which is
> > stored by all server nodes in their own queues so that, if the coordinator
> > fails, it can be processed by the new coordinator;
> > 2) The coordinator calculates assignments, defines actions in a new
> > message (ServicesAssignmentsRequestMessage) and sends it over disco-spi
> > to be processed by all nodes;
> > 3) Each node applies the actions, builds a single map message
> > (ServicesSingleMapMessage) that contains the service ids and number of
> > instances deployed on that node, and sends the message over
> > comm-spi to the coordinator (p2p);
> > 4) Once the coordinator receives all single map messages, it builds a
> > ServicesFullMapMessage that contains service deployments across the
> > cluster and sends it over disco-spi to be processed by all nodes;
> >
> > *MESSAGES*
> >
> > class DynamicServicesChangeRequestBatchMessage {
> > Collection<DynamicServiceChangeRequest> reqs;
> > }
> >
> > class DynamicServiceChangeRequest {
> > IgniteUuid srvcId; // Unique service id (generated to deploy, existing used to undeploy)
> > ServiceConfiguration cfg; // Empty in case of undeploy
> > byte flags; // Change's types flags [deploy, undeploy, etc.]
> > }
> >
> > class ServicesAssignmentsRequestMessage {
> > ServicesDeploymentExchangeId exchId;
> > Map<IgniteUuid, Map<UUID, Integer>> srvcsToDeploy; // Deploy and reassign
> > Collection<IgniteUuid> srvcsToUndeploy;
> > }
> >
> > class ServicesSingleMapMessage {
> > ServicesDeploymentExchangeId exchId;
> > Map<IgniteUuid, ServiceSingleDeploymentsResults> results;
> > }
> >
> > class ServiceSingleDeploymentsResults {
> > int cnt; // Deployed instances count, 0 in case of undeploy
> > Collection<byte[]> errors; // Serialized exceptions to avoid issues at spi-level
> > }
> >
> > class ServicesFullMapMessage {
> > ServicesDeploymentExchangeId exchId;
> > Collection<ServiceFullDeploymentsResults> results;
> > }
> >
> > class ServiceFullDeploymentsResults {
> > IgniteUuid srvcId;
> > Map<UUID, ServiceSingleDeploymentsResults> results; // Per node
> > }
> >
> > class ServicesDeploymentExchangeId {
> > UUID nodeId; // Initiated, joined or failed node id
> > int evtType; // EVT_NODE_[JOIN/LEFT/FAILED], EVT_DISCOVERY_CUSTOM_EVT
> > AffinityTopologyVersion topVer;
> > IgniteUuid reqId; // Unique id of custom discovery message
> > }
> >
> > *COORDINATOR CHANGE*
> >
> > All server nodes handle service state change requests and put them into
> > the deployment queue, but only the coordinator processes them. If the
> > coordinator leaves or fails, they will be processed by the new coordinator.
> >
> > *TOPOLOGY CHANGE*
> >
> > Each topology change 

[GitHub] ignite pull request #4606: Ignite 2.4.8 p2

2018-08-23 Thread alamar
GitHub user alamar opened a pull request:

https://github.com/apache/ignite/pull/4606

Ignite 2.4.8 p2



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-2.4.8-p2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4606.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4606


commit 97ceaa3b8c902604ae28c7fb3ce90f2ce2c675e3
Author: Aleksey Plekhanov 
Date:   2018-03-20T09:28:19Z

IGNITE-7976: Fixed SQL schema normalization. This closes #3650.

commit 8f3c4dfd869520725b714f054f409a7561f09587
Author: Aleksey Plekhanov 
Date:   2018-03-20T09:28:19Z

IGNITE-7976: Fixed SQL schema normalization. This closes #3650.

commit 5e277c12c76550cdd949f056bb22cd18ded10ace
Author: Pavel Kovalenko 
Date:   2018-03-21T14:22:56Z

IGNITE-6113 Fixed partition eviction preventing exchange completion - Fixes 
#3445.

Signed-off-by: Alexey Goncharuk 
(cherry picked from commit 6dc5804af6dda90ca210a39d622964d78e3890f1)

commit ce8f6bcffe85d16c93e4ca8f9b8be894cd9fe9f9
Author: Pavel Kovalenko 
Date:   2018-03-21T14:32:51Z

IGNITE-6113 Fixed version handling.

commit aae4e97b786e1bc9b2353ddee61ff7077b8d759c
Author: Pavel Kovalenko 
Date:   2018-03-06T18:22:40Z

IGNITE-6113 Return back old implementation instead of 'removeIf' - Fixes 
#3609.

Signed-off-by: Alexey Goncharuk 
(cherry picked from commit 8c3731e83296e1750c7c098b283c3fd8584b1be4)

commit d60d4fd8f6abe652a5618e5644fb978bcb1495b0
Author: Denis Mekhanikov 
Date:   2018-03-20T02:19:35Z

IGNITE-7794 - Share persisted marshaller mappings when connecting - Fixes 
#3620.

Signed-off-by: Valentin Kulichenko 
(cherry picked from commit c4ccf55dd6eb22102c56e211812eb4ccc206f4a7)

commit 70efde3305a1559231583f453aeb228ed2d40896
Author: Valentin Kulichenko 
Date:   2018-03-20T02:20:20Z

Added new test to suite

Signed-off-by: Valentin Kulichenko 
(cherry picked from commit baf08bc9501676e418dab2431211a659ae421d25)

commit 002e4d650c072c0927a93075e79d2ab1652b475d
Author: EdShangGG 
Date:   2018-03-22T14:58:39Z

IGNITE-8007 We should treat as empty any partition as empty if it doesn't 
have any data - Fixes #3677.

(cherry picked from commit 14f7bce)

commit 699561265ae616b5eb75897343be84d8c83be804
Author: Anton Kalashnikov 
Date:   2018-03-21T15:53:15Z

GG-13631 fix GridDhtPartitionDemandLegacyMessage

(cherry picked from commit ffbc56e)

commit 37c8033c72ac4fd1ec1e419ba68142c4fe11ad8b
Author: tledkov-gridgain 
Date:   2018-03-13T09:37:29Z

IGNITE-7860: JDBC thin: changed default socket buffer size to 64Kb. This 
closes #3600.

commit 130adcf29ddb61f8e9baa784b81454d3ed7c3b75
Author: Pavel Tupitsyn 
Date:   2018-01-26T08:48:14Z

IGNITE-7530 .NET: Fix GetAll memory usage and performance in binary mode

This closes #3436

commit 824004909b23a9a7d599500967af34103acb8c47
Author: Igor Sapego 
Date:   2018-01-30T12:56:17Z

IGNITE-6810: Implemented SSL support for ODBC.

This closes #3361

commit 82da0b5e9dc2ee2339c3fb1023e35d415bf1b647
Author: Pavel Kuznetsov 
Date:   2018-02-07T12:37:52Z

IGNITE-6217: Added benchmark to compare JDBC vs native SQL

This closes #2558

commit 701c6f141f6812ad7bc050a86266e696cf5863ed
Author: tledkov-gridgain 
Date:   2018-02-08T12:29:42Z

IGNITE-6625: JDBC thin: support SSL connection to Ignite node

This closes #3233

commit 2d729bf5c6f6fca9d07be2d57850642ba4b55004
Author: tledkov-gridgain 
Date:   2018-02-09T11:08:15Z

IGNITE-6625: SSL support for thin JDBC: additional fix test; change error 
message

commit 8994f847d7f5f15db73706d9210cdccb1cf3fb26
Author: devozerov 
Date:   2018-02-12T13:34:24Z

IGNITE-7003: Fixed faulty test 
WalModeChangeAdvancedSelfTest.testJoinCoordinator.

commit b142712264007d7397d1594541681a4a7e3d4b93
Author: Igor Sapego 
Date:   2018-02-26T12:02:07Z

IGNITE-7362: Fixed PDO ODBC integration issue

commit a2b2aee52cc65d01f2ecaf9462adc4bd368438ea
Author: Igor Sapego 
Date:   2018-02-28T10:23:12Z

IGNITE-7763: Fixed 32bit tests configurations to prevent OOM

This closes #3557

commit 652f3c4cdbaad40f5de25b06f0c13710aa7f2fd9
Author: Pavel Kuznetsov 
Date:   2018-03-13T12:46:36Z

IGNITE-7531: Data load benchmarks. This closes #3571.

commit 9337a53d9fcd62af87f6760080d350b43e275105
Author: tledkov-gridgain 
Date:   2018-03-16T11:38:38Z

IGNITE-7879: Fixed splitter logic for DISTINCT with subqueries. This closes 
#3634.

commit 7bec8b13cb373002d2a6b1b268d410338259bac2
Author: Igor Sapego 
Date:   2018-03-19T11:17:33Z

IGNITE-7811: Implemented connection failover for ODBC

This closes #3643

commit e512e5e0a2602df0ecfb69b2b5efabce836b04db
Author: Igor Sapego 
Date:   2018-03-20T10:37:02Z


[jira] [Created] (IGNITE-9359) OptimizeMakeChangeGAExample hangs forever with additional nodes in topology

2018-08-23 Thread Alex Volkov (JIRA)
Alex Volkov created IGNITE-9359:
---

 Summary: OptimizeMakeChangeGAExample hangs forever with additional 
nodes in topology
 Key: IGNITE-9359
 URL: https://issues.apache.org/jira/browse/IGNITE-9359
 Project: Ignite
  Issue Type: Bug
  Components: ml
Affects Versions: 2.6
Reporter: Alex Volkov


To reproduce this issue please follow these steps:

1. Run two nodes using ignite.sh script.

For example:
{code:java}
bin/ignite.sh examples/config/example-ignite.xml -J-Xmx1g -J-Xms1g 
-J-DCONSISTENT_ID=node1 -J-DIGNITE_QUIET=false
{code}
2. Run HelloWorldGAExample from the IDEA IDE.

*Expecting result:*

Example successfully run and completed.

*Actual result:*

There are a lot of NPE exceptions in example log:
{code:java}
[2018-08-23 17:38:59,246][ERROR][pub-#20][GridJobWorker] Failed to execute job 
due to unexpected runtime exception 
[jobId=2a309376561-70889d5c-33f2-4c96-bf1e-f280c0ac4a1c, ses=GridJobSessionImpl 
[ses=GridTaskSessionImpl [taskName=o.a.i.ml.genetic.FitnessTask, 
dep=GridDeployment [ts=1535035116486, depMode=SHARED, 
clsLdr=sun.misc.Launcher$AppClassLoader@18b4aac2, 
clsLdrId=4baf8376561-70889d5c-33f2-4c96-bf1e-f280c0ac4a1c, userVer=0, loc=true, 
sampleClsName=o.a.i.i.processors.cache.distributed.dht.preloader.GridDhtPartitionFullMap,
 pendingUndeploy=false, undeployed=false, usage=2], 
taskClsName=o.a.i.ml.genetic.FitnessTask, 
sesId=b4209376561-70889d5c-33f2-4c96-bf1e-f280c0ac4a1c, 
startTime=1535035123014, endTime=9223372036854775807, 
taskNodeId=70889d5c-33f2-4c96-bf1e-f280c0ac4a1c, 
clsLdr=sun.misc.Launcher$AppClassLoader@18b4aac2, closed=false, cpSpi=null, 
failSpi=null, loadSpi=null, usage=1, fullSup=false, internal=false, 
topPred=o.a.i.i.cluster.ClusterGroupAdapter$AttributeFilter@5668ad01, 
subjId=70889d5c-33f2-4c96-bf1e-f280c0ac4a1c, mapFut=GridFutureAdapter 
[ignoreInterrupts=false, state=INIT, res=null, hash=574227802]IgniteFuture 
[orig=], execName=null], 
jobId=2a309376561-70889d5c-33f2-4c96-bf1e-f280c0ac4a1c], err=null]
java.lang.NullPointerException
at org.apache.ignite.ml.genetic.FitnessJob.execute(FitnessJob.java:76)
at org.apache.ignite.ml.genetic.FitnessJob.execute(FitnessJob.java:35)
at 
org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:568)
at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6749)
at 
org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:562)
at 
org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:491)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}
and it hangs on this one:
{code:java}
[2018-08-23 17:38:59,582][WARN ][sys-#54][AlwaysFailoverSpi] Received topology 
with only nodes that job had failed on (forced to fail) 
[failedNodes=[3db84480-08b8-4d54-9d3a-e23b53761f29, 
70889d5c-33f2-4c96-bf1e-f280c0ac4a1c, 4f815cff-f77c-4a41-9ae1-ebb00b1dd44c]]
class org.apache.ignite.cluster.ClusterTopologyException: Failed to failover a 
job to another node (failover SPI returned null) 
[job=org.apache.ignite.ml.genetic.FitnessJob@1045c79e, node=TcpDiscoveryNode 
[id=4f815cff-f77c-4a41-9ae1-ebb00b1dd44c, addrs=ArrayList [0:0:0:0:0:0:0:1, 
127.0.0.1, 172.25.4.42, 172.25.4.92], sockAddrs=HashSet [/172.25.4.92:47501, 
/172.25.4.42:47501, /0:0:0:0:0:0:0:1:47501, /127.0.0.1:47501], discPort=47501, 
order=2, intOrder=2, lastExchangeTime=1535035115978, loc=false, 
ver=2.7.0#19700101-sha1:, isClient=false]]
at org.apache.ignite.internal.util.IgniteUtils$7.apply(IgniteUtils.java:853)
at org.apache.ignite.internal.util.IgniteUtils$7.apply(IgniteUtils.java:851)
at 
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:985)
at 
org.apache.ignite.internal.IgniteComputeImpl.execute(IgniteComputeImpl.java:541)
at org.apache.ignite.ml.genetic.GAGrid.calculateFitness(GAGrid.java:102)
at org.apache.ignite.ml.genetic.GAGrid.evolve(GAGrid.java:171)
at 
org.apache.ignite.examples.ml.genetic.change.OptimizeMakeChangeGAExample.main(OptimizeMakeChangeGAExample.java:148)
Caused by: class 
org.apache.ignite.internal.cluster.ClusterTopologyCheckedException: Failed to 
failover a job to another node (failover SPI returned null) 
[job=org.apache.ignite.ml.genetic.FitnessJob@1045c79e, node=TcpDiscoveryNode 
[id=4f815cff-f77c-4a41-9ae1-ebb00b1dd44c, addrs=ArrayList [0:0:0:0:0:0:0:1, 
127.0.0.1, 172.25.4.42, 172.25.4.92], sockAddrs=HashSet [/172.25.4.92:47501, 
/172.25.4.42:47501, /0:0:0:0:0:0:0:1:47501, /127.0.0.1:47501], discPort=47501, 
order=2, intOrder=2, lastExchangeTime=1535035115978, loc=false, 
ver=2.7.0#19700101-sha1:, isClient=false]]
at 

Re: How to reduce Scan Query execution time?

2018-08-23 Thread dmitrievanthony
I checked and it looks like the result is the same (or even worse: I get
1150ms with page size 1000, but the reason might be other changes, since my
previous measurements were done using 2.6).



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


[GitHub] ignite pull request #4602: Ignite 9340

2018-08-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4602


---


Re: How to reduce Scan Query execution time?

2018-08-23 Thread Ilya Kasnacheev
Hello!

Can you check if there's any difference with
https://github.com/apache/ignite/pull/4592 ?

Regards,

-- 
Ilya Kasnacheev

2018-08-23 16:05 GMT+03:00 dmitrievanthony :

> Hi,
>
> I have a cache with 5000 objects, 400KB each, and I need to download all
> these objects using the Binary Client Protocol. To do that I use a Scan Query
> request (and Load Next Page to load pages 2, 3, etc.) without any filter.
>
> I measure the time between two moments: when the request has been sent and
> when the result is ready (not when the page has been downloaded, only when
> it is ready to be downloaded).
>
> If the page size is 1000 (~400MB per page) the measured time is 1000ms.
> If the page size is 100 (~40MB per page) the measured time is 100ms.
>
> As a result, I conclude that Ignite spends ~2.5ms per megabyte on preparing
> the response, and I correspondingly spend this time waiting. This means I
> can't get throughput of more than 200MB/s over a 10Gbit/s network. It's very
> confusing.
>
> So, the question: how can Scan Query execution time be reduced in such a
> configuration?
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>


How to reduce Scan Query execution time?

2018-08-23 Thread dmitrievanthony
Hi,

I have a cache with 5000 objects, 400KB each, and I need to download all
these objects using the Binary Client Protocol. To do that I use a Scan Query
request (and Load Next Page to load pages 2, 3, etc.) without any filter.

I measure the time between two moments: when the request has been sent and
when the result is ready (not when the page has been downloaded, only when
it is ready to be downloaded).

If the page size is 1000 (~400MB per page) the measured time is 1000ms.
If the page size is 100 (~40MB per page) the measured time is 100ms.

As a result, I conclude that Ignite spends ~2.5ms per megabyte on preparing
the response, and I correspondingly spend this time waiting. This means I
can't get throughput of more than 200MB/s over a 10Gbit/s network. It's very
confusing.

So, the question: how can Scan Query execution time be reduced in such a
configuration?
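For reference, the arithmetic behind the figures above can be checked in a few lines. The object size (400KB) and page sizes are the ones quoted in this post; the class and method names below are just for illustration:

```java
/**
 * Back-of-the-envelope check of the Scan Query numbers quoted above.
 * Only the sizes come from the post; everything else is illustrative.
 */
public class ScanQueryMath {
    /** Page size in megabytes: objects per page times object size in KB. */
    static double pageMb(int pageSize, int objKb) {
        return pageSize * objKb / 1024.0;
    }

    /** Milliseconds spent preparing one megabyte of the response. */
    static double prepMsPerMb(int pageSize, int objKb, double pageTimeMs) {
        return pageTimeMs / pageMb(pageSize, objKb);
    }

    public static void main(String[] args) {
        // Page of 1000 objects, 400KB each, prepared in ~1000ms:
        System.out.println(pageMb(1000, 400) + " MB/page");          // ~390.6 MB
        System.out.println(prepMsPerMb(1000, 400, 1000) + " ms/MB"); // ~2.56 ms/MB
        // While a page is being prepared nothing is transferred, so the
        // effective throughput stays well below the ~1250MB/s wire speed
        // of a 10Gbit/s network.
    }
}
```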



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


Re: Service Grid new design overview

2018-08-23 Thread Nikolay Izhikov
Hello, Vyacheslav.

Thanks, for sharing your design.

> I have a question about services migration from AI 2.6 to a new solution

Can you describe the consequences of not having a migration solution?
What will happen on the user side?


On Thu, 23/08/2018 at 14:44 +0300, Vyacheslav Daradur wrote:
> Hi, Igniters!
> 
> I’m working on Service Grid redesign tasks and design seems to be finished.
> 
> The main goal of the Service Grid redesign is to provide missing guarantees:
> - Synchronous services deployment/undeployment;
> - Failover on coordinator change;
> - Propagation of deployment errors across the cluster;
> - Introduction of a deployment failures policy;
> - Prevention of deployment initiators hanging during deployment;
> - etc.
> 
> I’d like to ask the community their thoughts about the proposed design
> to be sure that all important things have been considered.
> 
> Also, I have a question about services migration from AI 2.6 to the new
> solution. It's very hard to provide tools for user migration, because
> of significant changes. We don't use the utility cache anymore. Should we
> spend time on this?
> 
> I’ve prepared a definition of new Service Grid design, it’s described below:
> 
> *OVERVIEW*
> 
> All nodes (servers and clients) are able to host services, but the
> client nodes are excluded from service deployment by default. The only
> way to deploy services on client nodes is to specify a node filter in
> ServiceConfiguration.
> 
> All deployed services are identified internally by “serviceId”
> (IgniteUuid). This allows us to build a base for such features as hot
> redeployment and service’s versioning. It’s important to have the
> ability to identify and manage services with the same name, but
> different version.
> 
> All actions on service state changes are processed according to a unified flow:
> 1) The initiator sends over disco-spi a request to change service state
> [deploy, undeploy] (DynamicServicesChangeRequestBatchMessage), which is
> stored by all server nodes in their own queues so that, if the coordinator
> fails, it can be processed by the new coordinator;
> 2) The coordinator calculates assignments, defines actions in a new
> message (ServicesAssignmentsRequestMessage) and sends it over disco-spi
> to be processed by all nodes;
> 3) Each node applies the actions, builds a single map message
> (ServicesSingleMapMessage) that contains the service ids and number of
> instances deployed on that node, and sends the message over
> comm-spi to the coordinator (p2p);
> 4) Once the coordinator receives all single map messages, it builds a
> ServicesFullMapMessage that contains service deployments across the
> cluster and sends it over disco-spi to be processed by all nodes;
> 
> *MESSAGES*
> 
> class DynamicServicesChangeRequestBatchMessage {
> Collection<DynamicServiceChangeRequest> reqs;
> }
> 
> class DynamicServiceChangeRequest {
> IgniteUuid srvcId; // Unique service id (generated to deploy, existing used to undeploy)
> ServiceConfiguration cfg; // Empty in case of undeploy
> byte flags; // Change's types flags [deploy, undeploy, etc.]
> }
> 
> class ServicesAssignmentsRequestMessage {
> ServicesDeploymentExchangeId exchId;
> Map<IgniteUuid, Map<UUID, Integer>> srvcsToDeploy; // Deploy and reassign
> Collection<IgniteUuid> srvcsToUndeploy;
> }
> 
> class ServicesSingleMapMessage {
> ServicesDeploymentExchangeId exchId;
> Map<IgniteUuid, ServiceSingleDeploymentsResults> results;
> }
> 
> class ServiceSingleDeploymentsResults {
> int cnt; // Deployed instances count, 0 in case of undeploy
> Collection<byte[]> errors; // Serialized exceptions to avoid issues at spi-level
> }
> 
> class ServicesFullMapMessage {
> ServicesDeploymentExchangeId exchId;
> Collection<ServiceFullDeploymentsResults> results;
> }
> 
> class ServiceFullDeploymentsResults {
> IgniteUuid srvcId;
> Map<UUID, ServiceSingleDeploymentsResults> results; // Per node
> }
> 
> class ServicesDeploymentExchangeId {
> UUID nodeId; // Initiated, joined or failed node id
> int evtType; // EVT_NODE_[JOIN/LEFT/FAILED], EVT_DISCOVERY_CUSTOM_EVT
> AffinityTopologyVersion topVer;
> IgniteUuid reqId; // Unique id of custom discovery message
> }
> 
> *COORDINATOR CHANGE*
> 
> All server nodes handle service state change requests and put them into
> the deployment queue, but only the coordinator processes them. If the
> coordinator leaves or fails, they will be processed by the new coordinator.
> 
> *TOPOLOGY CHANGE*
> 
> Each topology change (NODE_JOIN/LEFT/FAILED event) triggers a services
> deployment task. Assignments will be recalculated and applied for each
> deployed service.
> 
> *CLUSTER ACTIVATION/DEACTIVATION*
> 
> - On deactivation:
> * local services are undeployed;
> * requests are not handled (including deployment / undeployment);
> - On activation:
> * local services are redeployed;
> * requests are handled as usual;
> 
> *RELATED LINKS*
> 
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-17%3A+Oil+Change+in+Service+Grid
> http://apache-ignite-developers.2346864.n4.nabble.com/Service-grid-redesign-td28521.html
> 
> 
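The four-step exchange described in the design above can be illustrated with a toy simulation. All class and method names below are hypothetical simplifications for illustration only, not the proposed Ignite code:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy simulation of the four-step deployment exchange flow described above.
 * All names are invented; real messages additionally carry exchange ids,
 * errors, and undeploy actions.
 */
public class DeployExchangeSketch {
    /** Step 3: a node applies its assignments and reports what it deployed. */
    static Map<String, Integer> applyOnNode(String nodeId, Map<String, Integer> assignments) {
        // In the real flow a node could deploy fewer instances and attach
        // errors; here we pretend every assignment succeeds.
        return new HashMap<>(assignments);
    }

    /** Step 4: the coordinator aggregates single maps into a full map (srvcId -> nodeId -> count). */
    static Map<String, Map<String, Integer>> buildFullMap(Map<String, Map<String, Integer>> singleMaps) {
        Map<String, Map<String, Integer>> full = new HashMap<>();
        singleMaps.forEach((nodeId, single) ->
            single.forEach((srvcId, cnt) ->
                full.computeIfAbsent(srvcId, k -> new HashMap<>()).put(nodeId, cnt)));
        return full;
    }

    public static void main(String[] args) {
        // Step 1: a deploy request for "svc1" (3 instances) arrives over disco-spi.
        // Step 2: the coordinator computes per-node assignments.
        Map<String, Map<String, Integer>> assignments = new HashMap<>();
        assignments.put("nodeA", Map.of("svc1", 2));
        assignments.put("nodeB", Map.of("svc1", 1));

        // Step 3: every node applies its part and answers with a single map over comm-spi.
        Map<String, Map<String, Integer>> singleMaps = new HashMap<>();
        assignments.forEach((node, asg) -> singleMaps.put(node, applyOnNode(node, asg)));

        // Step 4: the coordinator builds the full map and broadcasts it over disco-spi.
        Map<String, Map<String, Integer>> full = buildFullMap(singleMaps);
        int total = full.get("svc1").values().stream().mapToInt(Integer::intValue).sum();
        System.out.println("svc1 instances cluster-wide: " + total); // prints 3
    }
}
```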


Re: Compression prototype

2018-08-23 Thread Ilya Kasnacheev
Hello!

My plan was to add a compression section to the cache configuration, where you
can enable compression, enable key compression (which has heavier
performance implications), adjust dictionary-gathering settings, and in the
future possibly choose between algorithms. In fact I'm not sure about the last
point, since my assumption is that you can always just use the latest, but
maybe we can have, e.g., a very fast but weaker one vs. a slower but stronger one.

I'm not sure yet whether we should share the dictionary between all caches vs.
having a separate dictionary for every cache.

With regards to the data format, of course there will be room for further
extension.
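For illustration, such a cache-level compression section with an enum instead of a boolean (as suggested in this thread) could look roughly like the sketch below. Every name here is invented for the example; nothing of the sort exists in the Ignite API yet:

```java
/**
 * Hypothetical sketch of a per-cache compression section; these class,
 * field, and enum names are invented and are not an existing Ignite API.
 */
public class CompressionConfigSketch {
    /** An enum leaves room to add algorithms later without breaking the config. */
    enum CompressionAlgorithm { NONE, LZW_HUFFMAN /* e.g. LZ4, ZSTD later */ }

    static class CacheCompressionConfiguration {
        CompressionAlgorithm algorithm = CompressionAlgorithm.NONE;
        boolean compressKeys = false;   // heavier performance implications
        int dictionarySamples = 10_000; // invented knob for dictionary gathering

        CacheCompressionConfiguration setAlgorithm(CompressionAlgorithm alg) {
            this.algorithm = alg;
            return this;
        }
    }

    public static void main(String[] args) {
        CacheCompressionConfiguration cfg = new CacheCompressionConfiguration()
            .setAlgorithm(CompressionAlgorithm.LZW_HUFFMAN);
        System.out.println(cfg.algorithm); // prints LZW_HUFFMAN
    }
}
```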

Regards,

-- 
Ilya Kasnacheev

2018-08-23 15:13 GMT+03:00 Sergey Kozlov :

> Hi Ilya
>
> Is there a plan to introduce it as an option in the Ignite configuration? If
> so, instead of a boolean type I suggest using an enum, reserving the
> ability to extend compression algorithms in the future
>
> On Thu, Aug 23, 2018 at 1:09 PM, Ilya Kasnacheev <
> ilya.kasnach...@gmail.com>
> wrote:
>
> > Hello!
> >
> > I want to share with the developer community my compression prototype.
> >
> > Long story short, it compresses BinaryObject's byte[] as they are written
> > to Durable Memory page, operating on a pre-built dictionary. Typical
> > compression ratio is 0.4 (meaning 2.5x compression) using custom
> > LZW+Huffman. Metadata, indexes and primitive values are unaffected
> > entirely.
> >
> > This is akin to DB2's table-level compression[1] but independently
> > invented.
> >
> > On Yardstick tests performance hit is -6% with PDS and up to -25% (in
> > throughput) with In-Memory loads. It also means you can fit ~twice as
> much
> > data into the same IM cluster, or have higher ram/disk ratio with PDS
> > cluster, saving on hardware or decreasing latency.
> >
> > The code is available as PR 4295[2] (set IGNITE_ENABLE_COMPRESSION=true
> to
> > activate). Note that it will not presently survive a PDS node restart.
> > The impact is very small, the patch should be applicable to most 2.x
> > releases.
> >
> > Sure there's a long way before this prototype can have hope of being
> > included, but first I would like to hear input from fellow igniters.
> >
> > See also IEP-20[3].
> >
> > 1.
> > https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.
> > 5.0/com.ibm.db2.luw.admin.dbobj.doc/doc/c0052331.html
> > 2. https://github.com/apache/ignite/pull/4295
> > 3.
> > https://cwiki.apache.org/confluence/display/IGNITE/IEP-
> > 20%3A+Data+Compression+in+Ignite
> >
> > Regards,
> >
> > --
> > Ilya Kasnacheev
> >
>
>
>
> --
> Sergey Kozlov
> GridGain Systems
> www.gridgain.com
>


Re: Compression prototype

2018-08-23 Thread Dmitriy Pavlov
Ok, thanks. IMO we need to store the dictionary in Durable memory before
merging into master.

Thu, Aug 23, 2018 at 15:12, Ilya Kasnacheev :

> Hello!
>
> Currently, the dictionary for decompression is only stored on heap. After
> restart there's compressed data in the PDS, but there's no dictionary :)
>
> Regards,
>
> --
> Ilya Kasnacheev
>
> 2018-08-23 14:58 GMT+03:00 Dmitriy Pavlov :
>
> > Hi Ilya,
> >
> > Thank you for sharing this here. I believe this contribution will be
> > accepted by the Community. Moreover, it shows so remarkable performance
> > boost.
> >
> > I'm pretty sure this patch will be reviewed by Ignite Native Persistence
> > experts soon.
> >
> > What do you mean by can't survive PDS node restart?
> >
> > Sincerely,
> > Dmitriy Pavlov
> >
Thu, Aug 23, 2018 at 13:09, Ilya Kasnacheev :
> >
> > > Hello!
> > >
> > > I want to share with the developer community my compression prototype.
> > >
> > > Long story short, it compresses BinaryObject's byte[] as they are
> written
> > > to Durable Memory page, operating on a pre-built dictionary. Typical
> > > compression ratio is 0.4 (meaning 2.5x compression) using custom
> > > LZW+Huffman. Metadata, indexes and primitive values are unaffected
> > > entirely.
> > >
> > > This is akin to DB2's table-level compression[1] but independently
> > > invented.
> > >
> > > On Yardstick tests performance hit is -6% with PDS and up to -25% (in
> > > throughput) with In-Memory loads. It also means you can fit ~twice as
> > much
> > > data into the same IM cluster, or have higher ram/disk ratio with PDS
> > > cluster, saving on hardware or decreasing latency.
> > >
> > > The code is available as PR 4295[2] (set IGNITE_ENABLE_COMPRESSION=true
> > to
> > > activate). Note that it will not presently survive a PDS node restart.
> > > The impact is very small, the patch should be applicable to most 2.x
> > > releases.
> > >
> > > Sure there's a long way before this prototype can have hope of being
> > > included, but first I would like to hear input from fellow igniters.
> > >
> > > See also IEP-20[3].
> > >
> > > 1.
> > >
> > > https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.
> > 5.0/com.ibm.db2.luw.admin.dbobj.doc/doc/c0052331.html
> > > 2. https://github.com/apache/ignite/pull/4295
> > > 3.
> > >
> > > https://cwiki.apache.org/confluence/display/IGNITE/IEP-
> > 20%3A+Data+Compression+in+Ignite
> > >
> > > Regards,
> > >
> > > --
> > > Ilya Kasnacheev
> > >
> >
>


Re: Storage Class Memory and Persistent Collections

2018-08-23 Thread Pavel Kovalenko
Hello Steve,

I've looked at the Persistent Collections library, but I don't see any big
advantages to using it inside Ignite right now.
Why do you think the marshalling/unmarshalling process will be sped up
with this library? As far as I can see, there are no explicit changes to the
serialization process; it just adds persistence to regular Java
collections.

2018-08-23 9:53 GMT+03:00 Steve Hostettler :

> That's great but would you mind explaining to me how we can get rid of the
> marshalling/unmarshalling? Because that would significantly speed up the
> processes that run on the grid.
>
> On Wed, Aug 22, 2018 at 3:36 AM Denis Magda  wrote:
>
> > Hello Steve,
> >
> > Intel folks are already contributing Intel Optane Persistent Memory
> support
> > to Ignite:
> >
> > https://cwiki.apache.org/confluence/display/IGNITE/IEP-26%3A+Adding+Experimental+Support+for+Intel+Optane+DC+Persistent+Memory
> >
> > http://apache-ignite-developers.2346864.n4.nabble.com/Adding-experimental-support-for-Intel-Optane-DC-Persistent-Memory-td33041.html
> >
> > Hopefully, the contribution will be reviewed and accepted soon.
> >
> > --
> > Denis
> >
> > On Tue, Aug 21, 2018 at 12:42 PM Steve Hostettler <
> > steve.hostett...@gmail.com> wrote:
> >
> > > Hello,
> > >
> > >
> > >
> > > Clearly Storage Class Memory represents a breakthrough for "in memory"
> > > grids
> > > and some people already tried it on Ignite :
> > >
> > >
> > http://dmagda.blogspot.com/2017/10/3d-xpoint-outperforms-
> ssds-verified-on.ht
> > > ml
> > > <
> > http://dmagda.blogspot.com/2017/10/3d-xpoint-outperforms-
> ssds-verified-on.html
> > >
> > >
> > > I would like to know what is the position of the community towards the
> > > Persistent Collections libraries (https://github.com/pmem/pcj and
> > > https://pcollections.org/)?
> > > According to https://www.youtube.com/watch?v=ZuLAF3ppCzs , it could
> > > massively reduce the marshalling/unmarshalling time.
> > >
> > > 1)   Do you guys plan to test it and to support SCM such as 3DXPoint
> > > natively?
> > >
> > > 2)   Would it mean that ignite would not serialize the objects off heap
> > > anymore?
> > >
> > >
> > >
> > > Hope it makes sense.
> > >
> > >
> > >
> > > Best Regards
> > >
> > >
> >
>
>
> --
> ===
> Steve Hostettler
>
> Université de Genève
> CUI - Battelle bat. A
> Route de Drize, 7
> CH-1227 Carouge
> Tel. +33 67 075 2843
> Fax +41 22 379 0250
>


Re: Compression prototype

2018-08-23 Thread Sergey Kozlov
Hi Ilya

Is there a plan to introduce it as an option of the Ignite configuration? In
that case, instead of a boolean type, I suggest using an enum and reserving
the ability to extend the compression algorithms in the future.
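For illustration, such an option could take the following shape; the names here are hypothetical, not a real or proposed Ignite API, but they show how an enum keeps the door open for further algorithms:

```java
// Hypothetical configuration option (illustrative names, not the real API):
// an enum instead of a boolean flag, extensible to new algorithms later.
public enum PageCompression {
    DISABLED,
    LZW_HUFFMAN;  // e.g. ZSTD or SNAPPY could be added later without API changes

    // Parse a value such as a system property, defaulting to DISABLED.
    public static PageCompression fromProperty(String v) {
        return (v == null || v.isEmpty()) ? DISABLED : valueOf(v.toUpperCase());
    }

    public static void main(String[] args) {
        System.out.println(fromProperty(null));           // DISABLED
        System.out.println(fromProperty("LZW_HUFFMAN"));  // LZW_HUFFMAN
    }
}
```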

On Thu, Aug 23, 2018 at 1:09 PM, Ilya Kasnacheev 
wrote:

> Hello!
>
> I want to share with the developer community my compression prototype.
>
> Long story short, it compresses BinaryObject's byte[] as they are written
> to Durable Memory page, operating on a pre-built dictionary. Typical
> compression ratio is 0.4 (meaning 2.5x compression) using custom
> LZW+Huffman. Metadata, indexes and primitive values are unaffected
> entirely.
>
> This is akin to DB2's table-level compression[1] but independently
> invented.
>
> On Yardstick tests performance hit is -6% with PDS and up to -25% (in
> throughput) with In-Memory loads. It also means you can fit ~twice as much
> data into the same IM cluster, or have higher ram/disk ratio with PDS
> cluster, saving on hardware or decreasing latency.
>
> The code is available as PR 4295[2] (set IGNITE_ENABLE_COMPRESSION=true to
> activate). Note that it will not presently survive a PDS node restart.
> The impact is very small, the patch should be applicable to most 2.x
> releases.
>
> Sure there's a long way before this prototype can have hope of being
> included, but first I would like to hear input from fellow igniters.
>
> See also IEP-20[3].
>
> 1.
> https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.
> 5.0/com.ibm.db2.luw.admin.dbobj.doc/doc/c0052331.html
> 2. https://github.com/apache/ignite/pull/4295
> 3.
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-
> 20%3A+Data+Compression+in+Ignite
>
> Regards,
>
> --
> Ilya Kasnacheev
>



-- 
Sergey Kozlov
GridGain Systems
www.gridgain.com


[GitHub] ignite pull request #4605: IGNITE-8911

2018-08-23 Thread EdShangGG
GitHub user EdShangGG opened a pull request:

https://github.com/apache/ignite/pull/4605

IGNITE-8911



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8911-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4605.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4605


commit b22bbf79dbcfb5127b41cde9d955542f1e1bb033
Author: EdShangGG 
Date:   2018-07-31T14:03:55Z

IGNITE-8911 While cache is restarting it's possible to start new cache with 
this name

commit 55bb3ac8cf334ddf84877e8c907ee2900258b12e
Author: EdShangGG 
Date:   2018-07-31T15:50:28Z

IGNITE-8911 While cache is restarting it's possible to start new cache with 
this name
-added test

commit 30333bf4a7a58d0b18e2f425ea6a645611c48f3a
Author: EdShangGG 
Date:   2018-08-23T12:04:03Z

Merge branch 'master1' into ignite-8911-1




---


[GitHub] ignite pull request #4291: Ignite-8911

2018-08-23 Thread EdShangGG
Github user EdShangGG closed the pull request at:

https://github.com/apache/ignite/pull/4291


---


Re: Compression prototype

2018-08-23 Thread Dmitriy Pavlov
Hi Ilya,

Thank you for sharing this here. I believe this contribution will be
accepted by the Community. Moreover, it shows a remarkable performance
boost.

I'm pretty sure this patch will be reviewed by Ignite Native Persistence
experts soon.

What do you mean by can't survive PDS node restart?

Sincerely,
Dmitriy Pavlov

чт, 23 авг. 2018 г. в 13:09, Ilya Kasnacheev :

> Hello!
>
> I want to share with the developer community my compression prototype.
>
> Long story short, it compresses BinaryObject's byte[] as they are written
> to Durable Memory page, operating on a pre-built dictionary. Typical
> compression ratio is 0.4 (meaning 2.5x compression) using custom
> LZW+Huffman. Metadata, indexes and primitive values are unaffected
> entirely.
>
> This is akin to DB2's table-level compression[1] but independently
> invented.
>
> On Yardstick tests performance hit is -6% with PDS and up to -25% (in
> throughput) with In-Memory loads. It also means you can fit ~twice as much
> data into the same IM cluster, or have higher ram/disk ratio with PDS
> cluster, saving on hardware or decreasing latency.
>
> The code is available as PR 4295[2] (set IGNITE_ENABLE_COMPRESSION=true to
> activate). Note that it will not presently survive a PDS node restart.
> The impact is very small, the patch should be applicable to most 2.x
> releases.
>
> Sure there's a long way before this prototype can have hope of being
> included, but first I would like to hear input from fellow igniters.
>
> See also IEP-20[3].
>
> 1.
>
> https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.5.0/com.ibm.db2.luw.admin.dbobj.doc/doc/c0052331.html
> 2. https://github.com/apache/ignite/pull/4295
> 3.
>
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-20%3A+Data+Compression+in+Ignite
>
> Regards,
>
> --
> Ilya Kasnacheev
>


[jira] [Created] (IGNITE-9358) DynamicIndexAbstractConcurrentSelfTest#testConcurrentRebalance is flaky

2018-08-23 Thread Ilya Kasnacheev (JIRA)
Ilya Kasnacheev created IGNITE-9358:
---

 Summary: 
DynamicIndexAbstractConcurrentSelfTest#testConcurrentRebalance is flaky
 Key: IGNITE-9358
 URL: https://issues.apache.org/jira/browse/IGNITE-9358
 Project: Ignite
  Issue Type: Bug
Reporter: Ilya Kasnacheev


Fails approximately in 1/3 of runs

[~DmitriyGovorukhin] can you please take a look?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Service Grid new design overview

2018-08-23 Thread Vyacheslav Daradur
Hi, Igniters!

I’m working on the Service Grid redesign tasks and the design seems to be finished.

The main goal of the Service Grid redesign is to provide missing guarantees:
- Synchronous service deployment/undeployment;
- Failover on coordinator change;
- Propagation of deployment errors across the cluster;
- Introduction of a deployment failure policy;
- Prevention of deployment initiators hanging during deployment;
- etc.

I’d like to ask the community for their thoughts about the proposed design,
to be sure that all important things have been considered.

Also, I have a question about service migration from AI 2.6 to the new
solution. It’s very hard to provide migration tools for users because of
the significant changes. We don’t use the utility cache anymore. Should we
spend time on this?

I’ve prepared a definition of new Service Grid design, it’s described below:

*OVERVIEW*

All nodes (servers and clients) are able to host services, but client
nodes are excluded from service deployment by default. The only way to
deploy a service on client nodes is to specify a node filter in the
ServiceConfiguration.

All deployed services are identified internally by a “serviceId”
(IgniteUuid). This lays the groundwork for features such as hot
redeployment and service versioning. It’s important to be able to
identify and manage services that have the same name but different
versions.

All service state change actions are processed according to a unified flow:
1) The initiator sends a service state change request [deploy, undeploy]
(DynamicServicesChangeRequestBatchMessage) over disco-spi; all server nodes
store it in their own queues so it can be processed on a new coordinator if
the current coordinator fails;
2) The coordinator calculates assignments, describes the actions in a new
ServicesAssignmentsRequestMessage, and sends it over disco-spi to be
processed by all nodes;
3) Each node applies the actions, builds a single map message
(ServicesSingleMapMessage) containing service ids and the number of
instances deployed on that node, and sends it over comm-spi to the
coordinator (p2p);
4) Once the coordinator has received all single map messages, it builds a
ServicesFullMapMessage containing the service deployments across the
cluster and sends it over disco-spi to be processed by all nodes;
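
The aggregation in steps 3 and 4 can be modelled with a toy example; the types below are simplified stand-ins for the message classes sketched in this proposal, not the actual implementation:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class FullMapAggregation {
    // Input: nodeId -> (serviceId -> deployed instance count), one entry per
    // received "single map". The coordinator inverts this into a "full map"
    // of serviceId -> (nodeId -> count) describing cluster-wide deployments.
    public static Map<UUID, Map<UUID, Integer>> buildFullMap(
            Map<UUID, Map<UUID, Integer>> singleMaps) {
        Map<UUID, Map<UUID, Integer>> fullMap = new HashMap<>();
        singleMaps.forEach((nodeId, perService) ->
            perService.forEach((srvcId, cnt) ->
                fullMap.computeIfAbsent(srvcId, k -> new HashMap<>())
                       .put(nodeId, cnt)));
        return fullMap;
    }

    public static void main(String[] args) {
        UUID node1 = UUID.randomUUID(), node2 = UUID.randomUUID();
        UUID srvc = UUID.randomUUID();
        Map<UUID, Map<UUID, Integer>> singles = Map.of(
            node1, Map.of(srvc, 2),   // node1 reports 2 deployed instances
            node2, Map.of(srvc, 1));  // node2 reports 1 deployed instance
        Map<UUID, Map<UUID, Integer>> full = buildFullMap(singles);
        System.out.println(full.get(srvc).get(node1)); // 2
        System.out.println(full.get(srvc).get(node2)); // 1
    }
}
```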

*MESSAGES*

class DynamicServicesChangeRequestBatchMessage {
    Collection<DynamicServiceChangeRequest> reqs;
}

class DynamicServiceChangeRequest {
    IgniteUuid srvcId; // Unique service id (generated to deploy, existing id used to undeploy)
    ServiceConfiguration cfg; // Empty in case of undeploy
    byte flags; // Change type flags [deploy, undeploy, etc.]
}

class ServicesAssignmentsRequestMessage {
    ServicesDeploymentExchangeId exchId;
    Map<IgniteUuid, Map<UUID, Integer>> srvcsToDeploy; // Deploy and reassign
    Collection<IgniteUuid> srvcsToUndeploy;
}

class ServicesSingleMapMessage {
    ServicesDeploymentExchangeId exchId;
    Map<IgniteUuid, ServiceSingleDeploymentsResults> results;
}

class ServiceSingleDeploymentsResults {
    int cnt; // Deployed instances count, 0 in case of undeploy
    Collection<byte[]> errors; // Serialized exceptions to avoid issues at spi-level
}

class ServicesFullMapMessage {
    ServicesDeploymentExchangeId exchId;
    Collection<ServiceFullDeploymentsResults> results;
}

class ServiceFullDeploymentsResults {
    IgniteUuid srvcId;
    Map<UUID, ServiceSingleDeploymentsResults> results; // Per node
}

class ServicesDeploymentExchangeId {
    UUID nodeId; // Id of the initiating, joined or failed node
    int evtType; // EVT_NODE_[JOIN/LEFT/FAILED], EVT_DISCOVERY_CUSTOM_EVT
    AffinityTopologyVersion topVer;
    IgniteUuid reqId; // Unique id of the custom discovery message
}

*COORDINATOR CHANGE*

All server nodes receive service state change requests and put them into
the deployment queue, but only the coordinator processes them. If the
coordinator leaves or fails, they will be processed on the new coordinator.

*TOPOLOGY CHANGE*

Each topology change (NODE_JOIN/LEFT/FAILED event) triggers a services
deployment task. Assignments are recalculated and applied for each
deployed service.

*CLUSTER ACTIVATION/DEACTIVATION*

- On deactivation:
* local services are undeployed;
* requests are not handled (including deployment / undeployment);
- On activation:
* local services are redeployed;
* requests are handled as usual;

*RELATED LINKS*

https://cwiki.apache.org/confluence/display/IGNITE/IEP-17%3A+Oil+Change+in+Service+Grid
http://apache-ignite-developers.2346864.n4.nabble.com/Service-grid-redesign-td28521.html


-- 
Best Regards, Vyacheslav D.


[GitHub] ignite pull request #4601: IGNITE-9338 Add connection data int env variables...

2018-08-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4601


---


Compression prototype

2018-08-23 Thread Ilya Kasnacheev
Hello!

I want to share with the developer community my compression prototype.

Long story short, it compresses BinaryObject's byte[] as they are written
to Durable Memory page, operating on a pre-built dictionary. Typical
compression ratio is 0.4 (meaning 2.5x compression) using custom
LZW+Huffman. Metadata, indexes and primitive values are unaffected entirely.
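
For readers unfamiliar with the technique, a minimal dictionary-based LZW codec can be sketched as follows. This illustrates the general idea only and is not the PR's code: the Huffman stage and the pre-built shared dictionary used by the prototype are omitted.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy LZW codec: learns multi-character phrases on the fly and replaces
// repeats of them with single integer codes.
public class LzwSketch {
    public static List<Integer> compress(String input) {
        Map<String, Integer> dict = new HashMap<>();
        for (int i = 0; i < 256; i++)
            dict.put("" + (char) i, i);        // seed with single symbols
        List<Integer> out = new ArrayList<>();
        String w = "";
        int next = 256;
        for (char c : input.toCharArray()) {
            String wc = w + c;
            if (dict.containsKey(wc))
                w = wc;                        // grow the current phrase
            else {
                out.add(dict.get(w));          // emit code for longest match
                dict.put(wc, next++);          // learn the new phrase
                w = "" + c;
            }
        }
        if (!w.isEmpty())
            out.add(dict.get(w));
        return out;
    }

    public static String decompress(List<Integer> codes) {
        Map<Integer, String> dict = new HashMap<>();
        for (int i = 0; i < 256; i++)
            dict.put(i, "" + (char) i);
        int next = 256;
        String w = dict.get(codes.get(0));
        StringBuilder sb = new StringBuilder(w);
        for (int i = 1; i < codes.size(); i++) {
            int k = codes.get(i);
            // k == next is the classic "phrase not yet in dictionary" case.
            String entry = dict.containsKey(k) ? dict.get(k) : w + w.charAt(0);
            sb.append(entry);
            dict.put(next++, w + entry.charAt(0)); // mirror encoder's dictionary
            w = entry;
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String s = "TOBEORNOTTOBEORTOBEORNOT";
        List<Integer> codes = compress(s);
        System.out.println(codes.size() < s.length());   // true: fewer codes than chars
        System.out.println(decompress(codes).equals(s)); // true: lossless round-trip
    }
}
```

A pre-built dictionary, as used in the prototype, would be seeded from sample data once instead of being rebuilt per value, which is what makes compressing many small byte[] values practical.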

This is akin to DB2's table-level compression[1] but independently invented.

On Yardstick tests performance hit is -6% with PDS and up to -25% (in
throughput) with In-Memory loads. It also means you can fit ~twice as much
data into the same IM cluster, or have higher ram/disk ratio with PDS
cluster, saving on hardware or decreasing latency.

The code is available as PR 4295[2] (set IGNITE_ENABLE_COMPRESSION=true to
activate). Note that it will not presently survive a PDS node restart.
The impact is very small; the patch should be applicable to most 2.x
releases.

Sure, there's a long way to go before this prototype can hope to be
included, but first I would like to hear input from fellow Igniters.

See also IEP-20[3].

1.
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_10.5.0/com.ibm.db2.luw.admin.dbobj.doc/doc/c0052331.html
2. https://github.com/apache/ignite/pull/4295
3.
https://cwiki.apache.org/confluence/display/IGNITE/IEP-20%3A+Data+Compression+in+Ignite

Regards,

-- 
Ilya Kasnacheev


[GitHub] ignite pull request #4604: IGNITE-8971: make GridRestProcessor propagate err...

2018-08-23 Thread macrergate
GitHub user macrergate opened a pull request:

https://github.com/apache/ignite/pull/4604

IGNITE-8971: make GridRestProcessor propagate error message



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8971

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4604.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4604


commit 6eca747e8e52765f7518b1ccd05219d7873a7c12
Author: AMedvedev 
Date:   2018-07-13T16:02:04Z

IGNITE-8971: make GridRestProcessor propagate error message




---


[GitHub] ignite pull request #4603: IGNITE-9357 Spark Structured Streaming with Ignit...

2018-08-23 Thread kukushal
GitHub user kukushal opened a pull request:

https://github.com/apache/ignite/pull/4603

IGNITE-9357 Spark Structured Streaming with Ignite as data source and sink



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-9357

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4603.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4603


commit b022dfcf87579431c3257ae4cc4ed29e7240b27e
Author: kukushal 
Date:   2018-08-23T09:58:55Z

IGNITE-9357 Spark Structured Streaming with Ignite as data source and sink




---


[GitHub] ignite pull request #4602: Ignite 9340

2018-08-23 Thread dspavlov
GitHub user dspavlov opened a pull request:

https://github.com/apache/ignite/pull/4602

Ignite 9340



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/ignite ignite-9340

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4602.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4602


commit b8e9a1b99e25533d844065b29e952905fe6c7a54
Author: Dmitriy Pavlov 
Date:   2018-08-21T15:54:26Z

IGNITE-9340 Update jetty version in Apache Ignite (ignite-rest-http and 
other modules)

commit fefee7b635cd54d07d4a639a9f28786e1248ddfd
Author: Dmitriy Pavlov 
Date:   2018-08-21T16:17:10Z

IGNITE-9340 Update jetty version in Apache Ignite (ignite-rest-http and 
other modules)
Removed last modification of last modified header, it is done in

org.eclipse.jetty.server.Response#putHeaders(javax.servlet.http.HttpServletResponse,
 org.eclipse.jetty.http.HttpContent, long, boolean)

commit 32ef9b16936b66df5fcf25a5e3ced6b3deef49ed
Author: Dmitriy Pavlov 
Date:   2018-08-21T17:59:58Z

IGNITE-9340 Update jetty version in Apache Ignite (ignite-rest-http and 
other modules)

commit c847a895b1239398c3b6f0270f385b4f79aa85b0
Author: Dmitriy Pavlov 
Date:   2018-08-22T13:29:56Z

IGNITE-9340 Update jetty version in Apache Ignite: new test added and old 
tests were fixed

commit 2d410c55ad485571754ccb4aa6a9d6f3746b7019
Author: Dmitriy Pavlov 
Date:   2018-08-22T13:54:31Z

Merge branch 'master' into ignite-9340




---


Re: Synchronous tx entries unlocking in discovery\exchange threads.

2018-08-23 Thread Andrey Mashenkov
Would someone please review the PR for IGNITE-9290 [1]?

I've added some fixes to PR #4556 [2] and TC looks fine [3].
Now:
* Explicit lock unlocking is moved to a background system pool thread.
* The exchange future doesn't care about unlocking at all.
* The MvccManager discovery listener is responsible for triggering unlocking
on certain discovery events.
* Also, the ExchangeFuture.OnLeft() method was called twice in some cases.
I've fixed this as well.
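
As a toy illustration of the first point, off-loading unlock work from the discovery thread to a background executor might look like the sketch below; the names are illustrative and this is not the actual MvccManager code:

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncUnlockSketch {
    private final ExecutorService sysPool = Executors.newSingleThreadExecutor();
    // key -> id of the node that owns the explicit lock
    private final ConcurrentMap<String, UUID> explicitLocks = new ConcurrentHashMap<>();

    void lock(String key, UUID nodeId) { explicitLocks.put(key, nodeId); }

    // "Discovery listener": returns immediately; the actual removal of the
    // left node's locks happens on the background pool, not in this thread.
    Future<?> onNodeLeft(UUID leftNodeId) {
        return sysPool.submit(() ->
            explicitLocks.values().removeIf(owner -> owner.equals(leftNodeId)));
    }

    int lockCount() { return explicitLocks.size(); }

    void stop() { sysPool.shutdown(); }

    public static void main(String[] args) throws Exception {
        AsyncUnlockSketch mgr = new AsyncUnlockSketch();
        UUID left = UUID.randomUUID(), alive = UUID.randomUUID();
        mgr.lock("k1", left);
        mgr.lock("k2", alive);
        mgr.onNodeLeft(left).get();          // wait only for the demo's sake
        System.out.println(mgr.lockCount()); // 1: only the alive node's lock remains
        mgr.stop();
    }
}
```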


[1] https://issues.apache.org/jira/browse/IGNITE-9290
[2] https://github.com/apache/ignite/pull/4556
[3]
https://ci.ignite.apache.org/viewLog.html?buildId=1669225=buildResultsDiv=IgniteTests24Java8_RunAll

On Thu, Aug 16, 2018 at 10:31 AM Alexey Goncharuk <
alexey.goncha...@gmail.com> wrote:

> Andrey, I agree that most likely this can be done in an async way. There
> are some nuances, though, because if a node leaves during an ongoing
> exchange, we should remove the locks in the context of the ongoing exchange
> and not wait for the next exchange event.
>
> I will take a look at your PR shortly.
>
> ср, 15 авг. 2018 г. в 15:57, Andrey Mashenkov  >:
>
> > Hi Igniters,
> >
> > I've found Ignite node tries to unlock tx entries when a node left the
> > grid.
> > Ignite do this synchronously in
> > GridCacheMvccManager.removeExplicitNodeLocks() in discovery and exchange
> > threads.
> >
> > Looks like this can be done in ascync way.
> > I've made a PR #4565 and seems there is no new test failures [1].
> >
> > I'm not familiar enough with exchange manager code, but looks like we can
> > scan locked entries more than once per node left event.
> > Also, it looks possible we can scan locked entries once for several
> merged
> > exchange events.
> >
> > Thoughts? Any ideas how this can be refactoried?
> >
> > [1]
> >
> >
> https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=projectOverview_IgniteTests24Java8=pull%2F4546%2Fhead
> >
> > --
> > Best regards,
> > Andrey V. Mashenkov
> >
>


-- 
Best regards,
Andrey V. Mashenkov


[jira] [Created] (IGNITE-9357) Spark Structured Streaming with Ignite as data source and sink

2018-08-23 Thread Alexey Kukushkin (JIRA)
Alexey Kukushkin created IGNITE-9357:


 Summary: Spark Structured Streaming with Ignite as data source and 
sink
 Key: IGNITE-9357
 URL: https://issues.apache.org/jira/browse/IGNITE-9357
 Project: Ignite
  Issue Type: New Feature
  Components: spark
Affects Versions: 2.7
Reporter: Alexey Kukushkin
Assignee: Alexey Kukushkin


We are working on a PoC where we want to use Ignite as data storage and Spark 
as a computation engine. We found that Ignite is supported neither as a 
source nor as a sink when using Spark Structured Streaming, which is a must for 
us.

We are enhancing Ignite to support Spark Structured Streaming with Ignite. We 
will send docs and code for review for the Ignite Community to consider whether 
the Community wants to accept this feature. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #4601: IGNITE-9338 Add connection data int env variables...

2018-08-23 Thread dmitrievanthony
GitHub user dmitrievanthony opened a pull request:

https://github.com/apache/ignite/pull/4601

IGNITE-9338 Add connection data int env variables of TensorFlow worker 
processes



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-9338

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4601.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4601


commit 597ca635203f1cbf77504a5e18519f45abea73e3
Author: Anton Dmitriev 
Date:   2018-08-22T13:12:01Z

IGNITE-9338 Pass Ignite dataset host and port into Python processes.

commit fe4bb1f04e95bf8c09c17eeed51bbf2cde696510
Author: Anton Dmitriev 
Date:   2018-08-22T14:02:33Z

IGNITE-9338 Pass Ignite dataset host and port into Python processes.

commit 18a936a1acd28eb0ae95ed0127a3874e8165ba7c
Author: Anton Dmitriev 
Date:   2018-08-22T14:04:12Z

IGNITE-9338 Pass Ignite dataset host and port into Python processes.




---


[jira] [Created] (IGNITE-9355) Document 3 new system views (nodes, node attributes, baseline nodes)

2018-08-23 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-9355:
---

 Summary: Document 3 new system views (nodes, node attributes, 
baseline nodes)
 Key: IGNITE-9355
 URL: https://issues.apache.org/jira/browse/IGNITE-9355
 Project: Ignite
  Issue Type: Task
  Components: documentation, sql
Reporter: Vladimir Ozerov
 Fix For: 2.7


We need to document three new SQL system views.
 # Explain to users that a new system SQL schema named "IGNITE" has appeared, 
where all views are stored
 # System view NODES - list of current nodes in topology. Columns: ID, 
CONSISTENT_ID, VERSION, IS_LOCAL, IS_CLIENT, IS_DAEMON, NODE_ORDER, ADDRESSES, 
HOSTNAMES
 # System view NODE_ATTRIBUTES - attributes for all nodes. Columns: NODE_ID, 
NAME, VALUE
 # System view BASELINE_NODES - list of baseline topology nodes. Columns: 
CONSISTENT_ID, ONLINE (whether node is up and running at the moment)
 # Explain limitations: views cannot be joined with user tables; it is not 
allowed to create other objects (tables, indexes) in "IGNITE" schema.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #4575: IGNITE-9318 SQL system view for list of baseline ...

2018-08-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4575


---


[GitHub] ignite pull request #4600: Ignite 9274

2018-08-23 Thread ygerzhedovich
GitHub user ygerzhedovich opened a pull request:

https://github.com/apache/ignite/pull/4600

Ignite 9274



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-9274

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4600.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4600


commit 402b62520500f00d29f890c59f33157cfbaa21c2
Author: Yury Gerzhedovich 
Date:   2018-08-17T08:07:52Z

IGNITE-9274: Pass transaction label to cache events

commit 13496243fb1bbf20ec17a7e045ef5ddaf293b513
Author: Yury Gerzhedovich 
Date:   2018-08-17T09:51:04Z

IGNITE-9274: minor fixes after code review

commit ab4c281b4a068edfbd0b3840a7bf9e6319060f26
Author: Yury Gerzhedovich 
Date:   2018-08-20T10:02:33Z

IGNITE-9274: first tests

commit 9630f8bcfd6aa526766ddfa8403de1c671b94f9b
Author: Yury Gerzhedovich 
Date:   2018-08-22T10:37:16Z

IGNITE-9274: support tx label for read cash events. Tests

commit a9ab8896851aae3aa6f9e0d028b79db9cdd44584
Author: Yury Gerzhedovich 
Date:   2018-08-22T10:37:43Z

Merge remote-tracking branch 'origin/master' into ignite-9274

commit 952937299a2601ff248209d2ff8587ca69647623
Author: Yury Gerzhedovich 
Date:   2018-08-22T11:05:16Z

IGNITE-9274: minor fixes after review

commit 80a02d6725a682d681505278ecc27efc164afd4e
Author: Yury Gerzhedovich 
Date:   2018-08-22T13:18:59Z

IGNITE-9274: boost test run

commit acb8d7f8ad3ab4d8bdaeb530560a44ab5dd35eba
Author: Yury Gerzhedovich 
Date:   2018-08-22T16:40:05Z

IGNITE-9274: rewrite tests

commit 6b02e371a315e1373fd4ebbd7ce8b72a66e1caf6
Author: Yury Gerzhedovich 
Date:   2018-08-23T08:06:38Z

IGNITE-9274: test fix




---


[GitHub] ignite pull request #4498: IGNITE-9235: Transitivity violation in GridMergeI...

2018-08-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/4498


---


Re: [MTCGA]: new failures in builds [1711030] needs to be handled

2018-08-23 Thread Dmitriy Pavlov
The test is to be fixed later in
https://issues.apache.org/jira/browse/IGNITE-9082. Already muted. Please
ignore.

чт, 23 авг. 2018 г. в 2:37, :

> Hi Ignite Developer,
>
> I am MTCGA.Bot, and I've detected some issue on TeamCity to be addressed.
> I hope you can help.
>
>  *Recently contributed test failed in master
> TransactionIntegrityWithPrimaryIndexCorruptionTest.testPrimaryIndexCorruptionDuringCommitOnPrimaryNode3
>
> https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=3475401922666481512=%3Cdefault%3E=testDetails
>  No changes in build
>
> - If your changes can led to this failure(s), please create issue
> with label MakeTeamCityGreenAgain and assign it to you.
> -- If you have fix, please set ticket to PA state and write to dev
> list fix is ready
> -- For case fix will require some time please mute test and set
> label Muted_Test to issue
> - If you know which change caused failure please contact change
> author directly
> - If you don't know which change caused failure please send
> message to dev list to find out
> Should you have any questions please contact dpav...@apache.org or write
> to dev.list
> Best Regards,
> MTCGA.Bot
> Notification generated at Thu Aug 23 02:37:34 MSK 2018
>


Re: [MTCGA]: new failures in builds [1711030] needs to be handled

2018-08-23 Thread Alexey Goncharuk
This new test is failing intentionally (to be fixed in a separate ticket);
it is muted on TC.

чт, 23 авг. 2018 г. в 2:37, :

> Hi Ignite Developer,
>
> I am MTCGA.Bot, and I've detected some issue on TeamCity to be addressed.
> I hope you can help.
>
>  *Recently contributed test failed in master
> TransactionIntegrityWithPrimaryIndexCorruptionTest.testPrimaryIndexCorruptionDuringCommitOnPrimaryNode3
>
> https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=3475401922666481512=%3Cdefault%3E=testDetails
>  No changes in build
>
> - If your changes can led to this failure(s), please create issue
> with label MakeTeamCityGreenAgain and assign it to you.
> -- If you have fix, please set ticket to PA state and write to dev
> list fix is ready
> -- For case fix will require some time please mute test and set
> label Muted_Test to issue
> - If you know which change caused failure please contact change
> author directly
> - If you don't know which change caused failure please send
> message to dev list to find out
> Should you have any questions please contact dpav...@apache.org or write
> to dev.list
> Best Regards,
> MTCGA.Bot
> Notification generated at Thu Aug 23 02:37:34 MSK 2018
>


Re: [MTCGA]: new failures in builds [1712385] needs to be handled

2018-08-23 Thread Dmitriy Pavlov
Hi Igniters,

This test seems to be flaky. I hope to update the bot today with improved
unstable-test detection.

The bot will require a test to fail 4 times before sending a notification,
and 100 previous runs will be analyzed instead of 50.
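
The described rule can be sketched as follows; the thresholds are taken from this message, while the bot's real detection logic is more involved:

```java
import java.util.List;

public class FlakyRule {
    // Notify only if the test failed at least 4 times in its last 100 runs.
    public static boolean shouldNotify(List<Boolean> runsNewestFirst) {
        long failures = runsNewestFirst.stream()
            .limit(100)                     // consider the last 100 runs only
            .filter(Boolean::booleanValue)  // true = the run failed
            .count();
        return failures >= 4;
    }

    public static void main(String[] args) {
        System.out.println(shouldNotify(List.of(true, true, true, false))); // false
        System.out.println(shouldNotify(List.of(true, true, true, true)));  // true
    }
}
```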

Sincerely,
Dmitriy Pavlov

чт, 23 авг. 2018 г. в 8:37, :

> Hi Ignite Developer,
>
> I am MTCGA.Bot, and I've detected some issue on TeamCity to be addressed.
> I hope you can help.
>
>  *New test failure in master
> CacheAbstractTest.TestCacheConfigurationExpiryPolicy
> https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=5800368842121996296=%3Cdefault%3E=testDetails
>  No changes in build
>
> - If your changes can led to this failure(s), please create issue
> with label MakeTeamCityGreenAgain and assign it to you.
> -- If you have fix, please set ticket to PA state and write to dev
> list fix is ready
> -- For case fix will require some time please mute test and set
> label Muted_Test to issue
> - If you know which change caused failure please contact change
> author directly
> - If you don't know which change caused failure please send
> message to dev list to find out
> Should you have any questions please contact dpav...@apache.org or write
> to dev.list
> Best Regards,
> MTCGA.Bot
> Notification generated at Thu Aug 23 08:37:38 MSK 2018
>


[jira] [Created] (IGNITE-9354) HelloWorldGAExample hangs forever with additional nodes in topology

2018-08-23 Thread Alex Volkov (JIRA)
Alex Volkov created IGNITE-9354:
---

 Summary: HelloWorldGAExample hangs forever with additional nodes in 
topology
 Key: IGNITE-9354
 URL: https://issues.apache.org/jira/browse/IGNITE-9354
 Project: Ignite
  Issue Type: Bug
  Components: ml
Affects Versions: 2.6
Reporter: Alex Volkov
 Attachments: log.zip

To reproduce this issue please follow these steps:

1. Run two nodes using ignite.sh script.

For example:
{code:java}
bin/ignite.sh examples/config/example-ignite.xml -J-Xmx1g -J-Xms1g 
-J-DCONSISTENT_ID=node1 -J-DIGNITE_QUIET=false
{code}
2. Run HelloWorldGAExample from the IDEA IDE.

*Expecting result:*

Example successfully run and completed.

*Actual result:*

There are a lot of NPE exceptions in example log:
{code:java}
[2018-08-23 09:49:25,029][ERROR][pub-#19][GridJobWorker] Failed to execute job 
due to unexpected runtime exception 
[jobId=c296b856561-e5eca24b-6f5a-4d3e-9e9e-94ad404b44d1, ses=GridJobSessionImpl 
[ses=GridTaskSessionImpl [taskName=o.a.i.ml.genetic.FitnessTask, 
dep=GridDeployment [ts=1535006960878, depMode=SHARED, 
clsLdr=sun.misc.Launcher$AppClassLoader@18b4aac2, 
clsLdrId=8d16b856561-e5eca24b-6f5a-4d3e-9e9e-94ad404b44d1, userVer=0, loc=true, 
sampleClsName=o.a.i.i.processors.cache.distributed.dht.preloader.GridDhtPartitionFullMap,
 pendingUndeploy=false, undeployed=false, usage=2], 
taskClsName=o.a.i.ml.genetic.FitnessTask, 
sesId=b196b856561-e5eca24b-6f5a-4d3e-9e9e-94ad404b44d1, 
startTime=1535006964236, endTime=9223372036854775807, 
taskNodeId=e5eca24b-6f5a-4d3e-9e9e-94ad404b44d1, 
clsLdr=sun.misc.Launcher$AppClassLoader@18b4aac2, closed=false, cpSpi=null, 
failSpi=null, loadSpi=null, usage=1, fullSup=false, internal=false, 
topPred=o.a.i.i.cluster.ClusterGroupAdapter$AttributeFilter@2d746ce4, 
subjId=e5eca24b-6f5a-4d3e-9e9e-94ad404b44d1, mapFut=GridFutureAdapter 
[ignoreInterrupts=false, state=INIT, res=null, hash=679592043]IgniteFuture 
[orig=], execName=null], 
jobId=c296b856561-e5eca24b-6f5a-4d3e-9e9e-94ad404b44d1], err=null]
java.lang.NullPointerException
at org.apache.ignite.ml.genetic.FitnessJob.execute(FitnessJob.java:76)
at org.apache.ignite.ml.genetic.FitnessJob.execute(FitnessJob.java:35)
at 
org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:568)
at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6749)
at 
org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:562)
at 
org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:491)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}
and it hangs on this one:
{code:java}
[2018-08-23 09:49:35,229][WARN ][pub-#17][AlwaysFailoverSpi] Received topology 
with only nodes that job had failed on (forced to fail) 
[failedNodes=[eac48ea7-da79-453a-a94c-291039c5cc15, 
0907d876-e0ce-4fda-966d-ad91a03f9722, e5eca24b-6f5a-4d3e-9e9e-94ad404b44d1]]
class org.apache.ignite.cluster.ClusterTopologyException: Failed to failover a 
job to another node (failover SPI returned null) 
[job=org.apache.ignite.ml.genetic.FitnessJob@35f8a9d3, node=TcpDiscoveryNode 
[id=e5eca24b-6f5a-4d3e-9e9e-94ad404b44d1, addrs=ArrayList [0:0:0:0:0:0:0:1, 
127.0.0.1, 172.25.4.42, 172.25.4.92], sockAddrs=HashSet [/172.25.4.42:47502, 
/172.25.4.92:47502, /0:0:0:0:0:0:0:1:47502, /127.0.0.1:47502], discPort=47502, 
order=3, intOrder=3, lastExchangeTime=1535006974981, loc=true, 
ver=2.7.0#19700101-sha1:, isClient=false]]
at org.apache.ignite.internal.util.IgniteUtils$7.apply(IgniteUtils.java:853)
at org.apache.ignite.internal.util.IgniteUtils$7.apply(IgniteUtils.java:851)
at 
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:985)
at 
org.apache.ignite.internal.IgniteComputeImpl.execute(IgniteComputeImpl.java:541)
at org.apache.ignite.ml.genetic.GAGrid.calculateFitness(GAGrid.java:102)
at org.apache.ignite.ml.genetic.GAGrid.evolve(GAGrid.java:171)
at 
org.apache.ignite.examples.ml.genetic.helloworld.HelloWorldGAExample.main(HelloWorldGAExample.java:90)
Caused by: class 
org.apache.ignite.internal.cluster.ClusterTopologyCheckedException: Failed to 
failover a job to another node (failover SPI returned null) 
[job=org.apache.ignite.ml.genetic.FitnessJob@35f8a9d3, node=TcpDiscoveryNode 
[id=e5eca24b-6f5a-4d3e-9e9e-94ad404b44d1, addrs=ArrayList [0:0:0:0:0:0:0:1, 
127.0.0.1, 172.25.4.42, 172.25.4.92], sockAddrs=HashSet [/172.25.4.42:47502, 
/172.25.4.92:47502, /0:0:0:0:0:0:0:1:47502, /127.0.0.1:47502], discPort=47502, 
order=3, intOrder=3, lastExchangeTime=1535006974981, loc=true, 
ver=2.7.0#19700101-sha1:, isClient=false]]
at 
{code}
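The ClusterTopologyException above means AlwaysFailoverSpi ran out of retry attempts because every node in the topology had already failed the job (the default maximum is 5 attempts). A minimal, hypothetical Spring XML fragment raising that limit — assuming a stock Ignite 2.x configuration — would look like:

```xml
<!-- Hypothetical ignite-config.xml fragment: raises the failover retry
     budget so a job is retried more times before AlwaysFailoverSpi
     gives up and returns null. -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="failoverSpi">
        <bean class="org.apache.ignite.spi.failover.always.AlwaysFailoverSpi">
            <property name="maximumFailoverAttempts" value="10"/>
        </bean>
    </property>
</bean>
```

Raising the limit only masks the underlying NullPointerException in FitnessJob, of course; the job will still fail on every node until that is fixed.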
Re: Storage Class Memory and Persistent Collections

2018-08-23 Thread Steve Hostettler
That's great, but would you mind explaining how we can get rid of the
marshalling/unmarshalling? That would significantly speed up the
processes that run on the grid.
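To make the marshalling cost concrete, here is a small self-contained sketch (plain JDK serialization, used only as an illustrative stand-in for Ignite's binary marshaller) of the serialize/deserialize round trip that persistent collections aim to avoid:

```java
import java.io.*;

public class MarshalCost {
    // A trivial serializable payload standing in for a cache value.
    static class Payload implements Serializable {
        final long id;
        final String name;
        Payload(long id, String name) { this.id = id; this.name = name; }
    }

    // Serialize to bytes (marshalling) ...
    static byte[] marshal(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    // ... and back (unmarshalling). Persistent collections keep the object
    // layout directly in persistent memory, skipping both steps.
    static Object unmarshal(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Payload p = new Payload(42, "chromosome");
        byte[] bytes = marshal(p);
        Payload copy = (Payload) unmarshal(bytes);
        System.out.println(copy.id + " " + copy.name); // prints "42 chromosome"
    }
}
```

Every cache put/get pays this round trip today; that is the cost the persistent-collections approach proposes to eliminate.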

On Wed, Aug 22, 2018 at 3:36 AM Denis Magda  wrote:

> Hello Steve,
>
> Intel folks are already contributing Intel Optane Persistent Memory support
> to Ignite:
>
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-26%3A+Adding+Experimental+Support+for+Intel+Optane+DC+Persistent+Memory
>
> http://apache-ignite-developers.2346864.n4.nabble.com/Adding-experimental-support-for-Intel-Optane-DC-Persistent-Memory-td33041.html
>
> Hopefully, the contribution will be reviewed and accepted soon.
>
> --
> Denis
>
> On Tue, Aug 21, 2018 at 12:42 PM Steve Hostettler <
> steve.hostett...@gmail.com> wrote:
>
> > Hello,
> >
> >
> >
> > Clearly Storage Class Memory represents a breakthrough for "in memory"
> > grids, and some people already tried it on Ignite:
> > http://dmagda.blogspot.com/2017/10/3d-xpoint-outperforms-ssds-verified-on.html
> >
> > I would like to know what is the position of the community towards the
> > Persistent Collections (
> > https://github.com/pmem/pcj?utm_source=ISTV_medium=Video_campaign=ISTV_2017,
> > https://pcollections.org/)?
> > According to https://www.youtube.com/watch?v=ZuLAF3ppCzs, it could
> > massively reduce the marshalling/unmarshalling time.
> >
> > 1)   Do you guys plan to test it and to support SCM such as 3DXPoint
> > natively?
> >
> > 2)   Would it mean that Ignite would not serialize the objects off heap
> > anymore?
> >
> >
> >
> > Hope it makes sense.
> >
> >
> >
> > Best Regards
> >
> >
>


-- 
===
Steve Hostettler

Université de Genève
CUI - Battelle bat. A
Route de Drize, 7
CH-1227 Carouge
Tel. +33 67 075 2843
Fax +41 22 379 0250