[jira] [Created] (IGNITE-7760) Handle FS hangs

2018-02-19 Thread Alexander Belyak (JIRA)
Alexander Belyak created IGNITE-7760:


 Summary: Handle FS hangs
 Key: IGNITE-7760
 URL: https://issues.apache.org/jira/browse/IGNITE-7760
 Project: Ignite
  Issue Type: Improvement
  Components: general
Affects Versions: 2.2, 2.1, 2.0, 1.9, 1.8, 1.7, 1.6
Reporter: Alexander Belyak


Need to handle hangs of FS operations, for example copying WAL segments into the 
WAL archive (especially when the WAL archive is mounted as a network file system 
volume).
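One common way to handle such hangs is to run the FS operation on a separate thread and bound the wait. The sketch below is a minimal JDK-only illustration of that pattern, not Ignite code; the class and method names (`FsOps`, `copyWithTimeout`) are illustrative.

```java
import java.nio.file.*;
import java.util.concurrent.*;

public class FsOps {
    // Daemon threads so a permanently hung FS call cannot keep the JVM alive.
    private static final ExecutorService FS_POOL = Executors.newCachedThreadPool(r -> {
        Thread t = new Thread(r, "fs-op");
        t.setDaemon(true);
        return t;
    });

    /** Copies src to dst, throwing TimeoutException if the copy hangs. */
    public static void copyWithTimeout(Path src, Path dst, long timeoutMs)
        throws Exception {
        Future<?> fut = FS_POOL.submit(() -> {
            Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
            return null;
        });

        try {
            fut.get(timeoutMs, TimeUnit.MILLISECONDS);
        }
        catch (TimeoutException e) {
            // Best effort: a thread blocked inside a hung NFS syscall may not
            // actually stop, but the caller is unblocked and can react.
            fut.cancel(true);

            throw e;
        }
    }
}
```

The caller can then treat a timeout as a failure (e.g. stop archiving, raise a failure event) instead of blocking a critical thread indefinitely.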



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-7759) Logger does not print sockTimeout and ackTimeout default values for TcpDiscoverySpi

2018-02-19 Thread Roman Guseinov (JIRA)
Roman Guseinov created IGNITE-7759:
--

 Summary: Logger does not print sockTimeout and ackTimeout default 
values for TcpDiscoverySpi
 Key: IGNITE-7759
 URL: https://issues.apache.org/jira/browse/IGNITE-7759
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.3, 2.1, 1.9
Reporter: Roman Guseinov


Logger does not print sockTimeout and ackTimeout default values for 
TcpDiscoverySpi

Before TcpDiscoverySpi is started, the logger prints the following message (debug 
mode is enabled):
{code:java}
[main][GridDiscoveryManager] Starting SPI: TcpDiscoverySpi [addrRslvr=null, 
sockTimeout=0, ackTimeout=0, marsh=JdkMarshaller 
[clsFilter=org.apache.ignite.internal.IgniteKernal$5@402e37bc], reconCnt=10, 
reconDelay=2000, maxAckTimeout=60, forceSrvMode=false, 
clientReconnectDisabled=false]
{code}
Note that sockTimeout=0 and ackTimeout=0. Default value initialization happens 
only after TcpDiscoverySpi.spiStart is called:
{code:java}
public class TcpDiscoverySpi extends IgniteSpiAdapter implements DiscoverySpi {
/** Node attribute that is mapped to node's external addresses (value is 
disc.tcp.ext-addrs). */

/** {@inheritDoc} */
@Override public void spiStart(@Nullable String igniteInstanceName) throws 
IgniteSpiException {
initializeImpl();
}

/**
 *
 */
private void initializeImpl() {
if (impl != null)
return;

initFailureDetectionTimeout();

if (!forceSrvMode && 
(Boolean.TRUE.equals(ignite.configuration().isClientMode()))) {
if (ackTimeout == 0)
ackTimeout = DFLT_ACK_TIMEOUT_CLIENT;

if (sockTimeout == 0)
sockTimeout = DFLT_SOCK_TIMEOUT_CLIENT;

impl = new ClientImpl(this);

ctxInitLatch.countDown();
}
else {
if (ackTimeout == 0)
ackTimeout = DFLT_ACK_TIMEOUT;

if (sockTimeout == 0)
sockTimeout = DFLT_SOCK_TIMEOUT;

impl = new ServerImpl(this);
}
 
}

}

{code}
To avoid confusion, I suggest printing the default sockTimeout and ackTimeout if 
they weren't changed, like:
{code:java}
[main][GridDiscoveryManager] Starting SPI: TcpDiscoverySpi [addrRslvr=null, 
sockTimeout=5000, ackTimeout=5000, marsh=JdkMarshaller 
[clsFilter=org.apache.ignite.internal.IgniteKernal$5@402e37bc], reconCnt=10, 
reconDelay=2000, maxAckTimeout=60, forceSrvMode=false, 
clientReconnectDisabled=false]
{code}
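The confusing output comes from logging fields before their lazy initialization. A minimal sketch of the pattern and the suggested fix (illustrative class, not the actual SPI code): resolve the effective value in toString() without mutating state.

```java
class LazySpi {
    static final long DFLT_SOCK_TIMEOUT = 5000;

    long sockTimeout; // 0 means "not set by user"; real default applied in start()

    void start() {
        // Lazy default resolution, as in TcpDiscoverySpi.spiStart().
        if (sockTimeout == 0)
            sockTimeout = DFLT_SOCK_TIMEOUT;
    }

    @Override public String toString() {
        // Print the default if the user didn't set a value, so a log line
        // emitted before start() doesn't misleadingly show 0.
        long shown = sockTimeout == 0 ? DFLT_SOCK_TIMEOUT : sockTimeout;

        return "LazySpi [sockTimeout=" + shown + "]";
    }
}
```

With this, logging the instance before start() already shows the value that will actually be in effect.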



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: IgniteDataStreamer silently fails on a server node

2018-02-19 Thread Dmitriy Setrakyan
Nikolay,

Does this exception happen because IgniteUuid class is not correctly
handled? If that's the case, we should fix it. Would be great if you could
do it.

As far as propagating exceptions, addData(...) is an asynchronous operation
and returns IgniteFuture. The exception should be propagated to that
future. Do you not see it there?
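The propagation model Dmitriy describes can be illustrated with the JDK's CompletableFuture (IgniteFuture is Ignite-specific, but the idea is the same): the call returns immediately, and a remote failure only surfaces when the returned future is inspected. This is a JDK-only sketch; `addDataLike` is a hypothetical stand-in for addData(...).

```java
import java.util.concurrent.CompletableFuture;

public class AsyncPropagation {
    /** Mimics an addData(...)-style call: returns at once; any
     *  "server-side" failure surfaces through the returned future. */
    static CompletableFuture<Void> addDataLike(boolean serverFails) {
        return CompletableFuture.runAsync(() -> {
            if (serverFails)
                throw new IllegalStateException("update failed on server");
        });
    }
}
```

So a client that never calls get()/join() (or never attaches a listener) on the returned future will indeed not observe the failure, which matches the behavior Nikolay reports.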

D.

On Mon, Feb 19, 2018 at 1:00 PM, Nikolay Izhikov 
wrote:

> Hello, Igniters.
>
> While working on IGNITE-7727 I found strange behavior of
> IgniteDataStreamer:
>
> If we have IgniteUuid as an indexed type, the update silently breaks on a
> server node.
> The client doesn't get any notification about the failure.
> All calls of `addData`, `close`, etc. succeed on the client side but fail
> on the server side.
>
> I see 2 issues here:
>
> 1. The failure itself - it is certainly a bug; I think I can fix it.
>
> 2. Lack of client notification. Is it OK that the client doesn't know
> about failures of streamer updates?
> Is this documented? I briefly looked at the streamer docs but couldn't
> find a description of such behavior.
>
>
> Reproducer [1]
>
> ```
> public void testStreamer() throws Exception {
> Ignite client = grid("client");
>
> CacheConfiguration ccfg = new CacheConfiguration("UUID_CACHE");
>
> ccfg.setIndexedTypes(IgniteUuid.class, String.class);
>
> client.createCache(ccfg);
>
> try(IgniteDataStreamer cache =
> client.dataStreamer("UUID_CACHE")) {
>
> for(Integer i=0; i<2; i++)
> cache.addData(IgniteUuid.randomUuid().toString(),
> i.toString());
> }
> }
> ```
>
> Server node stack trace [2]:
>
> ```
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to
> update index, incorrect key class [expCls=org.apache.ignite.
> lang.IgniteUuid,
> actualCls=org.apache.ignite.internal.binary.BinaryObjectImpl]
> at org.apache.ignite.internal.processors.query.GridQueryProcessor.
> typeByValue(GridQueryProcessor.java:1954)
> at org.apache.ignite.internal.processors.query.
> GridQueryProcessor.store(GridQueryProcessor.java:1877)
> at org.apache.ignite.internal.processors.cache.query.
> GridCacheQueryManager.store(GridCacheQueryManager.java:403)
> at org.apache.ignite.internal.processors.cache.
> IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishUpdate(
> IgniteCacheOffheapManagerImpl.java:1343)
> at org.apache.ignite.internal.processors.cache.
> IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(
> IgniteCacheOffheapManagerImpl.java:1207)
> at org.apache.ignite.internal.processors.cache.
> IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.
> java:345)
> at org.apache.ignite.internal.processors.cache.
> GridCacheMapEntry.storeValue(GridCacheMapEntry.java:3527)
> at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.
> initialValue(GridCacheMapEntry.java:2735)
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamerImpl$IsolatedUpdater.receive(DataStreamerImpl.java:2113)
> ... 11 more
> ```
>
> [1] https://gist.github.com/nizhikov/2e70a73c7b74a50fc89d270e9af1e1ca
>
> [2] https://gist.github.com/nizhikov/c491c8f2b45aa59458b37b42b4b8dab4


[jira] [Created] (IGNITE-7758) Update REST documentation

2018-02-19 Thread Alexey Kuznetsov (JIRA)
Alexey Kuznetsov created IGNITE-7758:


 Summary: Update REST documentation
 Key: IGNITE-7758
 URL: https://issues.apache.org/jira/browse/IGNITE-7758
 Project: Ignite
  Issue Type: Task
  Components: documentation
Affects Versions: 2.5
Reporter: Alexey Kuznetsov
Assignee: Denis Magda
 Fix For: 2.5


REST documentation needs to be updated:
 # GET_OR_CREATE_CACHE command enhanced with optional "templateName", 
"backups", "cacheGroup", "dataRegion" and "writeSynchronizationMode" options.
 #  Added support for Java built-in types for put/get operations via "keyType" 
and "valueType" optional parameters. List of supported types: 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-7757) Unable to create a new cache via REST

2018-02-19 Thread Pavel Konstantinov (JIRA)
Pavel Konstantinov created IGNITE-7757:
--

 Summary: Unable to create a new cache via REST
 Key: IGNITE-7757
 URL: https://issues.apache.org/jira/browse/IGNITE-7757
 Project: Ignite
  Issue Type: Bug
Reporter: Pavel Konstantinov


# try to start a new cache with non-existing cache group name
{code}
localhost:8080/ignite?cmd=getorcreate=cache1=
{code}
# then edit your request and make it correct and try again
{code}
localhost:8080/ignite?cmd=getorcreate=cache1
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



Re: Ignite Thin clients for Node.js, Python, PHP

2018-02-19 Thread Alexey Kukushkin
Also, we could have a Hangouts call or something to share thin client
development experience. You can reach me at kukushkinale...@gmail.com.


Re: Ignite Thin clients for Node.js, Python, PHP

2018-02-19 Thread Alexey Kukushkin
Alexey, Ekaterina, Pavel,

These resources will be useful for you:

   - Thin client protocol spec
   - .NET thin client code
   - Java thin client code (this is a work-in-progress in a development branch)

Please post your questions and ideas - the community will help.


Re: Ignite Thin clients for Node.js, Python, PHP

2018-02-19 Thread Denis Magda
Hi Alexey and welcome to all of you! Look forward to seeing the clients in
Ignite codebase ;)

Granted contributors access to all of you in JIRA. Feel free to assign
existing tickets to yourself or create new ones.

However, before you get down to the implementation, please share the
design and architectural ideas around the clients. Plus, it would be
reasonable to start with one client first and then add the rest once you
find out how the community works and how Ignite is architected.

Pavel T., Alex K., as creators of .NET and Java thin clients, please share
pointers that might be useful for Alexey.

--
Denis


On Mon, Feb 19, 2018 at 1:23 PM, Alexey Kosenchuk <
alexey.kosenc...@nobitlost.com> wrote:

> Hi All!
>
> Let us join the party, please ;)
>
> As we see, there is Binary Client Protocol to communicate with Ignite
> cluster and a concept of Thin (lightweight) client.
>
> If there are no objections or duplicated plans, we would like to develop
> Thin Client libs for:
> - Node.js
> - Python
> - PHP
>
> Please add us as contributors and provide access to the Ignite Jira
> components.
>
> Usernames in the Apache Jira:
> alexey.kosenchuk
> ekaterina.vergizova
> pavel.petroshenko
>
> Thanks!
> -Alexey
>


[Webinar] Caching technologies evolution

2018-02-19 Thread Denis Magda
Igniters,

Curious to see how caching technologies have evolved over time? Where do
Ignite, Redis, or Memcached fit? Join my webinar tomorrow at 11:00 PT or
bookmark it and get the recording later:

Redis Replaced: Why Companies Now Choose Apache® Ignite™ to Improve
Application Speed and Scale


--
Denis


Ignite Thin clients for Node.js, Python, PHP

2018-02-19 Thread Alexey Kosenchuk

Hi All!

Let us join the party, please ;)

As we see, there is Binary Client Protocol to communicate with Ignite 
cluster and a concept of Thin (lightweight) client.


If there are no objections or duplicated plans, we would like to develop 
Thin Client libs for:

- Node.js
- Python
- PHP

Please add us as contributors and provide access to the Ignite Jira 
components.


Usernames in the Apache Jira:
alexey.kosenchuk
ekaterina.vergizova
pavel.petroshenko

Thanks!
-Alexey


[jira] [Created] (IGNITE-7756) Streamer fails if IgniteUuid is indexed

2018-02-19 Thread Nikolay Izhikov (JIRA)
Nikolay Izhikov created IGNITE-7756:
---

 Summary: Streamer fails if IgniteUuid is indexed
 Key: IGNITE-7756
 URL: https://issues.apache.org/jira/browse/IGNITE-7756
 Project: Ignite
  Issue Type: Bug
  Components: streaming
Affects Versions: 2.3
Reporter: Nikolay Izhikov
Assignee: Nikolay Izhikov
 Fix For: 2.5


IgniteDataStreamer fails to put data into the cache if IgniteUuid is an 
IndexedType.

Spark tests in IGNITE-7227 fail because of this issue.

Reproducer:

{code:java}
public void testStreamer() throws Exception {
Ignite client = grid("client");

CacheConfiguration ccfg = new CacheConfiguration("UUID_CACHE");

ccfg.setIndexedTypes(IgniteUuid.class, String.class);

client.createCache(ccfg);

try(IgniteDataStreamer cache =
client.dataStreamer("UUID_CACHE")) {

for(Integer i=0; i<2; i++)
cache.addData(IgniteUuid.randomUuid(), i.toString());
}
}
{code}

Exception stack trace:

{noformat}
[23:43:35] (err) Failed to execute compound future reducer: GridCompoundFuture 
[rdc=null, initFlag=1, lsnrCalls=0, done=false, cancelled=false, err=null, 
futs=[true, true]][23:43:35] (err) Failed to execute compound future reducer: 
GridCompoundFuture [rdc=null, initFlag=1, lsnrCalls=0, done=false, 
cancelled=false, err=null, futs=[true, true]]class 
org.apache.ignite.IgniteCheckedException: DataStreamer request failed 
[node=57961924-82ec-4d56-81eb-1a4109a0]
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer.onResponse(DataStreamerImpl.java:1900)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$3.onMessage(DataStreamerImpl.java:344)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1554)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1182)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:125)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1089)
at 
org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:499)
at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.IgniteException: Failed to set initial value 
for cache entry
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$IsolatedUpdater.receive(DataStreamerImpl.java:2135)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.localUpdate(DataStreamProcessor.java:397)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:302)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:59)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:89)
... 6 more
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to update 
index, incorrect key class [expCls=org.apache.ignite.lang.IgniteUuid, 
actualCls=org.apache.ignite.internal.binary.BinaryObjectImpl]
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.typeByValue(GridQueryProcessor.java:1954)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.store(GridQueryProcessor.java:1877)
at 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.store(GridCacheQueryManager.java:403)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishUpdate(IgniteCacheOffheapManagerImpl.java:1343)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1207)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:345)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:3527)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.initialValue(GridCacheMapEntry.java:2735)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$IsolatedUpdater.receive(DataStreamerImpl.java:2113)
... 11 more
class org.apache.ignite.IgniteCheckedException: DataStreamer request failed 
[node=57961924-82ec-4d56-81eb-1a4109a0]
at 

[GitHub] ignite pull request #3542: IGNITE-6842

2018-02-19 Thread Mmuzaf
GitHub user Mmuzaf opened a pull request:

https://github.com/apache/ignite/pull/3542

 IGNITE-6842

Test PR for learning the contribution process.
- Remove duplicated code in stopGrid methods;
- Add a stopAllGridsSilently method that throws an exception if stopping grids 
fails;
- Make stopAllGrids the default behavior for afterTestsStopped;

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Mmuzaf/ignite ignite-6842

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3542.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3542


commit 40f9c158e226942a02a22673637e593a28096496
Author: Maxim Muzafarov 
Date:   2018-02-18T14:29:55Z

IGNITE-6842: make stopAllGrids as default behavior for afterTestsStop

commit 18248091903a0d44e292eb2502f0c40a18abd3cd
Author: Maxim Muzafarov 
Date:   2018-02-19T09:25:52Z

IGNITE-6842:add stopAllGridsSilently as default stop grid

commit b18d9aadc6613c9f5c48e7b2cec2b53ed385c994
Author: Maxim Muzafarov 
Date:   2018-02-19T09:26:21Z

Merge branch 'master' of https://github.com/apache/ignite into ignite-6842




---


IgniteDataStreamer silently fails on a server node

2018-02-19 Thread Nikolay Izhikov
Hello, Igniters.

While working on IGNITE-7727 I found strange behavior of IgniteDataStreamer:

If we have IgniteUuid as an indexed type, the update silently breaks on a server 
node.
The client doesn't get any notification about the failure.
All calls of `addData`, `close`, etc. succeed on the client side but fail on 
the server side.

I see 2 issues here:

1. The failure itself - it is certainly a bug; I think I can fix it.

2. Lack of client notification. Is it OK that the client doesn't know about 
failures of streamer updates?
Is this documented? I briefly looked at the streamer docs but couldn't find a 
description of such behavior.


Reproducer [1]

```
public void testStreamer() throws Exception {
Ignite client = grid("client");

CacheConfiguration ccfg = new CacheConfiguration("UUID_CACHE");

ccfg.setIndexedTypes(IgniteUuid.class, String.class);

client.createCache(ccfg);

try(IgniteDataStreamer cache =
client.dataStreamer("UUID_CACHE")) {

for(Integer i=0; i<2; i++)
cache.addData(IgniteUuid.randomUuid().toString(), i.toString());
}
}
```

Server node stack trace [2]:

```
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to update 
index, incorrect key class [expCls=org.apache.ignite.lang.IgniteUuid,
actualCls=org.apache.ignite.internal.binary.BinaryObjectImpl]
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.typeByValue(GridQueryProcessor.java:1954)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.store(GridQueryProcessor.java:1877)
at 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.store(GridCacheQueryManager.java:403)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishUpdate(IgniteCacheOffheapManagerImpl.java:1343)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1207)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:345)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:3527)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.initialValue(GridCacheMapEntry.java:2735)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$IsolatedUpdater.receive(DataStreamerImpl.java:2113)
... 11 more
```

[1] https://gist.github.com/nizhikov/2e70a73c7b74a50fc89d270e9af1e1ca

[2] https://gist.github.com/nizhikov/c491c8f2b45aa59458b37b42b4b8dab4

signature.asc
Description: This is a digitally signed message part


Re: Apache Ignite 2.4 release

2018-02-19 Thread Dmitriy Setrakyan
On Mon, Feb 19, 2018 at 1:37 AM, Vladimir Ozerov 
wrote:

> Alex,
>
> You get me right. DEFAULT -> LOG_ONLY doesn't introduce any dramatic
> changes when comparing 2.3 to 2.4 - Ignite was unsafe out of the box in
> 2.3, and it is unsafe in 2.4 as well.
>
> The very problem is that we claim ourselves to be ACID, while in reality we
> are only "AI" out of the box, because durability is not guaranteed due to
> zero backups and LOG_ONLY and consistency is not guaranteed due to
> PRIMARY_SYNC. Neither Cassandra, nor Mongo or any others claim themselves
> to be ACID, so it is not valid to refer to their defaults.
>

Vladimir,
Ignite can be fully ACID, but at the same time have non-ACID defaults, as
long as we clearly document how to get ACID behavior. I do not see an issue
with it.

D.


Re: Apache Ignite 2.4 release

2018-02-19 Thread Denis Magda
>
> Neither Cassandra, nor Mongo or any others claim themselves
> to be ACID, so it is not valid to refer to their defaults.


This is why I referred to VoltDB as to an example. It supports ACID
transactions and persists changes to disk in a mode similar to LOG_ONLY.

--
Denis

On Sun, Feb 18, 2018 at 11:37 PM, Vladimir Ozerov 
wrote:

> Alex,
>
> You get me right. DEFAULT -> LOG_ONLY doesn't introduce any dramatic
> changes when comparing 2.3 to 2.4 - Ignite was unsafe out of the box in
> 2.3, and it is unsafe in 2.4 as well.
>
> The very problem is that we claim ourselves to be ACID, while in reality we
> are only "AI" out of the box, because durability is not guaranteed due to
> zero backups and LOG_ONLY and consistency is not guaranteed due to
> PRIMARY_SYNC. Neither Cassandra, nor Mongo or any others claim themselves
> to be ACID, so it is not valid to refer to their defaults.
>
> On Mon, Feb 19, 2018 at 10:06 AM, Alexey Goncharuk <
> alexey.goncha...@gmail.com> wrote:
>
> > In terms of 'safety', Ignite default settings are far beyond optimal. For
> > in-memory mode, we have 0 backups by default, which means partition loss
> in
> > a case of node failure, we have readFromBackup=true and PRIMARY_SYNC by
> > default which effectively cancels linearizability property for cache
> > updates, so setting the default WAL mode to LOG_ONLY does not seem to be
> a
> > bigger evil than it currently is. If we are to move to safer defaults, we
> > should change all of the affected sides.
> >
> > I also want to clarify the difference between guarantees in
> > non-fsync modes. We should distinguish the loss of durability (the loss
> of
> > the last update) because the update did not make it to disk and data loss
> > because the disk content was shuffled due to an incomplete page write. In
> > my understanding, the current situation is:
> > FSYNC: loss of durability: not possible, data loss: not possible
> > LOG_ONLY: loss of durability: possible only if OS/power fails, data loss:
> > possible only if OS/power fails
> > BACKGROUND: loss of durability: possible if Ignite process fails, data
> > loss: possible only if OS/power fails
> >
> > The data loss situation can be mitigated in the cluster using a large
> > enough replication factor (this is what Dmitriy was describing in the
> case
> > of LOG_ONLY and 3 backups configuration).
> >
> > Denis,
> > I do not think it is fair to compare Ignite defaults to Cassandra's
> > defaults because Cassandra is _not_ transactional _eventually consistent_
> > datastore, they claim much weaker guarantees than Ignite.
> >
> > All in all, I'm ok to change the WAL default right now, but I would
> revisit
> > all those settings in 3.0 and made Ignite safe-first.
> >
> > 2018-02-17 3:24 GMT+03:00 Denis Magda :
> >
> > > Classic relational databases have no choice rather than to use FSYNC by
> > > default. RDBMS is all about consistency.
> > >
> > > Distributed databases try to balance consistency and performance. For
> > > instance, why to fsync every update if there is usually 1 backup copy?
> > > This is probably why VoltDB [1] and Cassandra use the modes comparable
> to
> > > Ignite's LOG_ONLY.
> > >
> > > Ignite as a distributed database should care of both consistency and
> > > performance.
> > >
> > > My vote goes to FSYNC, LOG_ONLY (default), BACKGROUND, NONE.
> > >
> > >
> > > [1] https://docs.voltdb.com/UsingVoltDB/CmdLogConfig.php
> > >
> > > --
> > > Denis
> > >
> > >
> > > On Fri, Feb 16, 2018 at 2:14 PM, Dmitriy Setrakyan <
> > dsetrak...@apache.org>
> > > wrote:
> > >
> > > > Vova,
> > > >
> > > > I hear your concerns, but at the same time I know that one of the
> > largest
> > > > banks in eastern Europe is using Ignite in LOG_ONLY mode with 3
> backups
> > > to
> > > > move money. The rational is that the probability of failure of 4
> > servers
> > > at
> > > > hardware level at the same time is very low. However, if the JVM
> > process
> > > > fails on any server, then it can be safely restarted without loosing
> > > data.
> > > > In my view, this is why LOG_ONLY mode makes sense as a default.
> > > >
> > > > I still vote to change the default to LOG_ONLY, deprecate the DEFAULT
> > > name
> > > > altogether and add FSYNC mode instead.
> > > >
> > > > D.
> > > >
> > > > On Fri, Feb 16, 2018 at 4:05 PM, Vladimir Ozerov <
> voze...@gridgain.com
> > >
> > > > wrote:
> > > >
> > > > > Sergey,
> > > > >
> > > > > We do not have backups by default either, so essentially we are
> > loosing
> > > > > data by default. Moreover, backups are less reliable option than
> > fsync
> > > > > because a lot of users cannot afford putting servers into separate
> > > power
> > > > > circuits, so a single power failure may easily lead to poweroff of
> > the
> > > > > whole cluster at once, so data is lost still. This is normal
> practice
> > > > even
> > > > > for enterprise deployments (e.g. asynchronous replication).
> > > > >
> > > > > To make 

Re: Apache Ignite 2.4 release

2018-02-19 Thread Denis Magda
Alex, please clarify some of the points

data loss because the disk content was shuffled due to an incomplete page
> write


are you talking about some phase of the checkpointing process? Do you mean
that the process just passed over changes to the file but didn't call
fsync, thus, offloading this to OS to decide?

BACKGROUND: loss of durability: possible if Ignite process fails, data
> loss: possible only if OS/power fails


This should be true if mmap files are not used for BACKGROUND, otherwise,
it provides the same guarantees as LOG_ONLY, right?

--
Denis

On Sun, Feb 18, 2018 at 11:06 PM, Alexey Goncharuk <
alexey.goncha...@gmail.com> wrote:

> In terms of 'safety', Ignite default settings are far beyond optimal. For
> in-memory mode, we have 0 backups by default, which means partition loss in
> a case of node failure, we have readFromBackup=true and PRIMARY_SYNC by
> default which effectively cancels linearizability property for cache
> updates, so setting the default WAL mode to LOG_ONLY does not seem to be a
> bigger evil than it currently is. If we are to move to safer defaults, we
> should change all of the affected sides.
>
> I also want to clarify the difference between guarantees in
> non-fsync modes. We should distinguish the loss of durability (the loss of
> the last update) because the update did not make it to disk and data loss
> because the disk content was shuffled due to an incomplete page write. In
> my understanding, the current situation is:
> FSYNC: loss of durability: not possible, data loss: not possible
> LOG_ONLY: loss of durability: possible only if OS/power fails, data loss:
> possible only if OS/power fails
> BACKGROUND: loss of durability: possible if Ignite process fails, data
> loss: possible only if OS/power fails
>
> The data loss situation can be mitigated in the cluster using a large
> enough replication factor (this is what Dmitriy was describing in the case
> of LOG_ONLY and 3 backups configuration).
>
> Denis,
> I do not think it is fair to compare Ignite defaults to Cassandra's
> defaults because Cassandra is _not_ transactional _eventually consistent_
> datastore, they claim much weaker guarantees than Ignite.
>
> All in all, I'm ok to change the WAL default right now, but I would revisit
> all those settings in 3.0 and made Ignite safe-first.
>
> 2018-02-17 3:24 GMT+03:00 Denis Magda :
>
> > Classic relational databases have no choice rather than to use FSYNC by
> > default. RDBMS is all about consistency.
> >
> > Distributed databases try to balance consistency and performance. For
> > instance, why to fsync every update if there is usually 1 backup copy?
> > This is probably why VoltDB [1] and Cassandra use the modes comparable to
> > Ignite's LOG_ONLY.
> >
> > Ignite as a distributed database should care of both consistency and
> > performance.
> >
> > My vote goes to FSYNC, LOG_ONLY (default), BACKGROUND, NONE.
> >
> >
> > [1] https://docs.voltdb.com/UsingVoltDB/CmdLogConfig.php
> >
> > --
> > Denis
> >
> >
> > On Fri, Feb 16, 2018 at 2:14 PM, Dmitriy Setrakyan <
> dsetrak...@apache.org>
> > wrote:
> >
> > > Vova,
> > >
> > > I hear your concerns, but at the same time I know that one of the
> largest
> > > banks in eastern Europe is using Ignite in LOG_ONLY mode with 3 backups
> > to
> > > move money. The rational is that the probability of failure of 4
> servers
> > at
> > > hardware level at the same time is very low. However, if the JVM
> process
> > > fails on any server, then it can be safely restarted without loosing
> > data.
> > > In my view, this is why LOG_ONLY mode makes sense as a default.
> > >
> > > I still vote to change the default to LOG_ONLY, deprecate the DEFAULT
> > name
> > > altogether and add FSYNC mode instead.
> > >
> > > D.
> > >
> > > On Fri, Feb 16, 2018 at 4:05 PM, Vladimir Ozerov  >
> > > wrote:
> > >
> > > > Sergey,
> > > >
> > > > We do not have backups by default either, so essentially we are
> loosing
> > > > data by default. Moreover, backups are less reliable option than
> fsync
> > > > because a lot of users cannot afford putting servers into separate
> > power
> > > > circuits, so a single power failure may easily lead to poweroff of
> the
> > > > whole cluster at once, so data is lost still. This is normal practice
> > > even
> > > > for enterprise deployments (e.g. asynchronous replication).
> > > >
> > > > To make things even worse, we employ PRIMARY_SYNC mode by default! So
> > > even
> > > > if you configured backups, you still may loose data due to a single
> > node
> > > > failure - just shutdown the PRIMARY after commit is confirmed to the
> > > client
> > > > and your recent update will disappers.
> > > >
> > > > So this is what user should do to make himself safe:
> > > > 1) Learn about WAL modes
> > > > 2) Learn about backups
> > > > 3) Learn about synchronization modes
> > > > 4) Cross his fingers that he understood everything correctly and that
> > > there
> > > > 
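For reference, the safety knobs discussed in this thread combine roughly like this in a Spring XML configuration. This is a sketch assuming the 2.4-era DataStorageConfiguration API with the FSYNC mode proposed above; the cache name and backup count are illustrative, not recommendations:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <!-- Strictest durability: fsync the WAL on every commit. -->
            <property name="walMode" value="FSYNC"/>
        </bean>
    </property>
    <property name="cacheConfiguration">
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <property name="name" value="accounts"/>
            <!-- Survive node loss: keep redundant copies on other nodes... -->
            <property name="backups" value="2"/>
            <!-- ...and wait for backups on every update, unlike the
                 PRIMARY_SYNC default criticized in this thread. -->
            <property name="writeSynchronizationMode" value="FULL_SYNC"/>
        </bean>
    </property>
</bean>
```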

[jira] [Created] (IGNITE-7755) Potential crash during write of cp-***-start.bin can lead to the impossibility of recovering

2018-02-19 Thread Dmitriy Govorukhin (JIRA)
Dmitriy Govorukhin created IGNITE-7755:
--

 Summary: Potential crash during write of cp-***-start.bin can lead 
to the impossibility of recovering
 Key: IGNITE-7755
 URL: https://issues.apache.org/jira/browse/IGNITE-7755
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.3, 2.4
Reporter: Dmitriy Govorukhin
 Fix For: 2.5


The node can crash after cp-***-start.bin is created but before its content 
(the WAL pointer) is recorded. On recovery, reading the WAL pointer then 
throws an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-7754) WAL in LOG_ONLY mode doesn't execute fsync on checkpoint begin

2018-02-19 Thread Ilya Lantukh (JIRA)
Ilya Lantukh created IGNITE-7754:


 Summary: WAL in LOG_ONLY mode doesn't execute fsync on checkpoint 
begin
 Key: IGNITE-7754
 URL: https://issues.apache.org/jira/browse/IGNITE-7754
 Project: Ignite
  Issue Type: Bug
Reporter: Ilya Lantukh
Assignee: Ilya Lantukh


On checkpoint begin, IgniteWriteAheadLogManager.fsync(WALPointer ptr) is 
called, but it doesn't actually perform fsync because the WAL mode isn't FSYNC. 
This might lead to LFS corruption if the OS or hardware fails before the 
checkpoint has finished.
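The failure mode — an fsync entry point that silently no-ops outside FSYNC mode — can be illustrated with a self-contained sketch. WalMode and SketchWalManager here are hypothetical stand-ins, not Ignite's actual classes:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class WalFsyncSketch {
    enum WalMode { FSYNC, LOG_ONLY, BACKGROUND, NONE }

    static class SketchWalManager {
        final WalMode mode;
        final AtomicInteger fsyncCount = new AtomicInteger();

        SketchWalManager(WalMode mode) { this.mode = mode; }

        // Mirrors the reported bug: the call returns early unless the mode is
        // FSYNC, so a checkpoint relying on it gets no durability guarantee.
        void fsync() {
            if (mode != WalMode.FSYNC)
                return; // silent no-op
            fsyncCount.incrementAndGet(); // stands in for a real force() to disk
        }
    }

    public static void main(String[] args) {
        SketchWalManager logOnly = new SketchWalManager(WalMode.LOG_ONLY);
        logOnly.fsync(); // checkpoint begin "syncs" the WAL, but nothing happens
        System.out.println("LOG_ONLY fsyncs: " + logOnly.fsyncCount.get());

        SketchWalManager fsync = new SketchWalManager(WalMode.FSYNC);
        fsync.fsync();
        System.out.println("FSYNC fsyncs: " + fsync.fsyncCount.get());
    }
}
```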



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #3530: IGNITE-7042: Trying to configure scala-test plugi...

2018-02-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3530


---


[GitHub] ignite pull request #3521: ignite-7594

2018-02-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3521


---


[jira] [Created] (IGNITE-7753) Processors are incorrectly initialized if a node joins during cluster activation

2018-02-19 Thread Stanislav Lukyanov (JIRA)
Stanislav Lukyanov created IGNITE-7753:
--

 Summary: Processors are incorrectly initialized if a node joins 
during cluster activation
 Key: IGNITE-7753
 URL: https://issues.apache.org/jira/browse/IGNITE-7753
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.3, 2.4, 2.5
Reporter: Stanislav Lukyanov
Assignee: Stanislav Lukyanov


If a node joins during the cluster activation process (while the related 
exchange operation is in progress), then some of the GridProcessor instances of 
that node will be incorrectly initialized. While GridClusterStateProcessor will 
correctly report the active cluster state, other processors that are sensitive 
to the cluster state, e.g. GridServiceProcessor, will not be initialized.

A reproducer is below. 
===
Ignite server = 
IgnitionEx.start("examples/config/persistentstore/example-persistent-store.xml",
 "server");

CyclicBarrier barrier = new CyclicBarrier(2);
Thread activationThread = new Thread(() -> {
try {
barrier.await();
server.active(true);
}
catch (Exception e) {
e.printStackTrace(); // TODO implement.
}
});
activationThread.start();
barrier.await();

IgnitionEx.setClientMode(true);
Ignite client = 
IgnitionEx.start("examples/config/persistentstore/example-persistent-store.xml",
 "client");

activationThread.join();

client.services().deployClusterSingleton("myClusterSingleton", new 
SimpleMapServiceImpl<>());
===

Here a single server node is started, then a client node is started while the 
cluster is simultaneously being activated, and then the client attempts to 
deploy a service. As a result, the thread calling the deploy method hangs 
forever with a stack trace like this:
===
"main@1" prio=5 tid=0x1 nid=NA waiting
  java.lang.Thread.State: WAITING
  at sun.misc.Unsafe.park(Unsafe.java:-1)
  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
  at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
  at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
  at 
org.apache.ignite.internal.util.IgniteUtils.awaitQuiet(IgniteUtils.java:7505)
  at 
org.apache.ignite.internal.processors.service.GridServiceProcessor.serviceCache(GridServiceProcessor.java:290)
  at 
org.apache.ignite.internal.processors.service.GridServiceProcessor.writeServiceToCache(GridServiceProcessor.java:728)
  at 
org.apache.ignite.internal.processors.service.GridServiceProcessor.deployAll(GridServiceProcessor.java:634)
  at 
org.apache.ignite.internal.processors.service.GridServiceProcessor.deployAll(GridServiceProcessor.java:600)
  at 
org.apache.ignite.internal.processors.service.GridServiceProcessor.deployMultiple(GridServiceProcessor.java:488)
  at 
org.apache.ignite.internal.processors.service.GridServiceProcessor.deployClusterSingleton(GridServiceProcessor.java:469)
  at 
org.apache.ignite.internal.IgniteServicesImpl.deployClusterSingleton(IgniteServicesImpl.java:120)
===

The behavior depends on timing - the client has to join in the middle of the 
activation's exchange process. Putting Thread.sleep(4000) into 
GridDhtPartitionsExchangeFuture.onClusterStateChangeRequest makes the issue 
reproduce reliably on a development laptop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Ignite Teamcity - actual failing tests (master)

2018-02-19 Thread Dmitry Pavlov
Hi,

Yes, it seems the dev list restricts attachments; attached to
https://cwiki.apache.org/confluence/display/IGNITE/Make+Teamcity+Green+Again

Direct link:
https://cwiki.apache.org/confluence/download/attachments/73631266/Ignite%20Teamcity%20-%20current%20failures.pdf?api=v2

Sincerely,
Dmitriy Pavlov

Mon, Feb 19, 2018 at 17:20, Nikolay Izhikov :

> Hello, Dmitriy
>
> Seems like apache mail doesn't allow attachments.
>
> Can you enlist failing tests in github gist or some similar service?
>
> On Mon, 19/02/2018 at 14:18 +, Dmitry Pavlov wrote:
> > Hi Igniters,
> >
> > I've done an experiment over the past two weekends to find out which
> tests are currently failing, using mass 'RunAll' execution.
> >
> > I created a utility for analyzing the TeamCity DB, which allowed me to
> connect results from multiple runs of the same tests.
> >
> > Please find attached a summary of the last ~45 'Run All's.
> > As a result, ~450 tests are failing in master, plus ~350 C++ tests.
> >
> > Suites are sorted so that the most failing suites come first. Tests
> within a suite are also sorted most failing first.
> >
> > Please look for tests written by you and mute them if the current
> failure is not significant.
> >
> > Sincerely,
> > Dmitriy Pavlov


Ignite Teamcity - actual failing tests (master)

2018-02-19 Thread Dmitry Pavlov
Hi Igniters,

I've done an experiment over the past two weekends to find out which tests
are currently failing, using mass 'RunAll' execution.

I created a utility for analyzing the TeamCity DB, which allowed me to
connect results from multiple runs of the same tests.

Please find attached a summary of the last ~45 'Run All's.
As a result, ~450 tests are failing in master, plus ~350 C++ tests.

Suites are sorted so that the most failing suites come first. Tests within
a suite are also sorted most failing first.

Please look for tests written by you and mute them if the current failure
is not significant.

Sincerely,
Dmitriy Pavlov


[jira] [Created] (IGNITE-7752) Update Ignite KafkaStreamer to use new KafkaConsumer configuration.

2018-02-19 Thread Andrew Mashenkov (JIRA)
Andrew Mashenkov created IGNITE-7752:


 Summary: Update Ignite KafkaStreamer to use new KafkaConsumer 
configuration.
 Key: IGNITE-7752
 URL: https://issues.apache.org/jira/browse/IGNITE-7752
 Project: Ignite
  Issue Type: Task
  Components: streaming
Reporter: Andrew Mashenkov
 Fix For: 2.5


It seems that, for now, it is impossible to use the new-style KafkaConsumer 
configuration in KafkaStreamer.

The issue is that Ignite uses the 
kafka.consumer.Consumer.createJavaConsumerConnector() method, which creates the 
old consumer (ZookeeperConsumerConnector).

We should create a new KafkaConsumer instead, which appears to support both 
old- and new-style configs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-7751) Pages Write Throttle mode doesn't protect from checkpoint buffer overflow

2018-02-19 Thread Ivan Rakov (JIRA)
Ivan Rakov created IGNITE-7751:
--

 Summary: Pages Write Throttle mode doesn't protect from checkpoint 
buffer overflow
 Key: IGNITE-7751
 URL: https://issues.apache.org/jira/browse/IGNITE-7751
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 2.3
Reporter: Ivan Rakov
Assignee: Ivan Rakov
 Fix For: 2.5


Even with write throttling enabled, the checkpoint buffer can still overflow. 
Example stacktrace:
{noformat}
2018-02-17 21:00:14.777 
[ERROR][sys-stripe-12-#13%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.d.dht.GridDhtTxRemote]
 Commit failed.
org.apache.ignite.internal.transactions.IgniteTxHeuristicCheckedException: 
Commit produced a runtime exception (all transaction entries will be 
invalidated): 
GridDhtTxRemote[id=06db48da161--07c5-23f5--0005, 
concurrency=OPTIMISTIC, isolation=SERIALIZABLE, state=COMMITTING, 
invalidate=false, rollbackOnly=false, 
nodeId=da415868-d9b3-48a5-9b56-0706ae60dd3b, duration=60]
at 
org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxRemoteAdapter.commitIfLocked(GridDistributedTxRemoteAdapter.java:739)
at 
org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxRemoteAdapter.commitRemoteTx(GridDistributedTxRemoteAdapter.java:813)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.finish(IgniteTxHandler.java:1319)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processDhtTxFinishRequest(IgniteTxHandler.java:1231)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.access$600(IgniteTxHandler.java:97)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$7.apply(IgniteTxHandler.java:213)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$7.apply(IgniteTxHandler.java:211)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1060)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090)
at 
org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:499)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.ignite.IgniteException: Runtime failure on row: 
Row@9f0a081[ key: 4694439661580364888, val: 
com.sbt.bm.ucp.common.dpl.model.party.DUserInfo_DPL_PROXY [idHash=1290746929, 
hash=400782371, colocationKey=16678, lastChangeDate=1518890414661, 
userFullName=null, partition_DPL_id=6, bankInfo_DPL_id=4694439661580364888, 
bankInfo_DPL_colocationKey=16678, ownerId=null, 
infoFlowChannel_DPL_colocationKey=0, userLogin=reloading, 
uid=1102030258731339432, isDeleted=false, infoFlowChannel_DPL_id=0, 
sourceSystem_DPL_id=65, id=4694439661580364888, 
colocationId=1102030258828706483], ver: GridCacheVersion [topVer=130360309, 
order=1519034613156, nodeOrder=5] ][ 1102030258731339432, reloading, 
4694439661580364888, 0, null, 65, 4694439661580364888, FALSE, 6 ]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doPut(BPlusTree.java:2102)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putx(BPlusTree.java:2049)
at 
org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.putx(H2TreeIndex.java:247)
at 
org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.addToIndex(GridH2Table.java:536)
at 
org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:468)
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.store(IgniteH2Indexing.java:595)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.store(GridQueryProcessor.java:1865)
at 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.store(GridCacheQueryManager.java:407)
at 

[GitHub] ignite pull request #3499: IGNITE-7253

2018-02-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3499


---


[GitHub] ignite pull request #3541: IGNITE-7750 Fixed testMultiThreadStatisticsEnable...

2018-02-19 Thread Jokser
GitHub user Jokser opened a pull request:

https://github.com/apache/ignite/pull/3541

IGNITE-7750 Fixed testMultiThreadStatisticsEnable test.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-7750

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3541.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3541


commit 1418eb9659f10831621426beefe2a4f8978b63a5
Author: Pavel Kovalenko 
Date:   2018-02-19T11:47:37Z

IGNITE-7750 Fixed testMultiThreadStatisticsEnable test.




---


[jira] [Created] (IGNITE-7750) testMultiThreadStatisticsEnable is flaky on TC

2018-02-19 Thread Pavel Kovalenko (JIRA)
Pavel Kovalenko created IGNITE-7750:
---

 Summary: testMultiThreadStatisticsEnable is flaky on TC
 Key: IGNITE-7750
 URL: https://issues.apache.org/jira/browse/IGNITE-7750
 Project: Ignite
  Issue Type: Bug
  Components: cache
Reporter: Pavel Kovalenko
Assignee: Pavel Kovalenko


{code:java}
class org.apache.ignite.IgniteException: Cache not found [cacheName=cache2]
at 
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:985)
at 
org.apache.ignite.internal.cluster.IgniteClusterImpl.enableStatistics(IgniteClusterImpl.java:497)
at 
org.apache.ignite.internal.processors.cache.CacheMetricsEnableRuntimeTest$3.run(CacheMetricsEnableRuntimeTest.java:181)
at 
org.apache.ignite.testframework.GridTestUtils$9.call(GridTestUtils.java:1275)
at 
org.apache.ignite.testframework.GridTestThread.run(GridTestThread.java:86)
Caused by: class org.apache.ignite.IgniteCheckedException: Cache not found 
[cacheName=cache2]
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.enableStatistics(GridCacheProcessor.java:4227)
at 
org.apache.ignite.internal.cluster.IgniteClusterImpl.enableStatistics(IgniteClusterImpl.java:494)
... 3 more
{code}


The problem with the test:

1) We don't wait for exchange future completion after "cache2" is started, 
which may lead to a NullPointerException when we try to obtain a reference to 
"cache2" on a node that hasn't completed the exchange future and initialized 
the cache proxy.
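The fix implied by the description — block until the asynchronous start completes before touching the resource — follows the standard await-before-use pattern. A library-free sketch (the names here are illustrative, not Ignite's test-framework API):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

public class AwaitStartSketch {
    // The cache proxy becomes visible only once the "exchange" finishes.
    static final AtomicReference<String> cacheProxy = new AtomicReference<>();
    static final CountDownLatch exchangeDone = new CountDownLatch(1);

    public static void main(String[] args) throws InterruptedException {
        // Simulates the exchange running on another thread.
        Thread exchange = new Thread(() -> {
            cacheProxy.set("cache2"); // cache proxy initialized
            exchangeDone.countDown(); // exchange future completed
        });
        exchange.start();

        // Without this await, reading cacheProxy races the exchange thread
        // and may observe null -- the NPE scenario from the test.
        exchangeDone.await();

        System.out.println("proxy = " + cacheProxy.get());
        exchange.join();
    }
}
```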



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #3540: IGNITE-7749 Fixed testDiscoCacheReuseOnNodeJoin t...

2018-02-19 Thread Jokser
GitHub user Jokser opened a pull request:

https://github.com/apache/ignite/pull/3540

IGNITE-7749 Fixed testDiscoCacheReuseOnNodeJoin test.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-7749

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3540.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3540


commit 5233269ea684e4cff8d4a7d46d3ce1dc343dd5f1
Author: Pavel Kovalenko 
Date:   2018-02-19T10:46:26Z

IGNITE-7749 Fixed testDiscoCacheReuseOnNodeJoin test.




---


[jira] [Created] (IGNITE-7749) testDiscoCacheReuseOnNodeJoin fails on TC

2018-02-19 Thread Pavel Kovalenko (JIRA)
Pavel Kovalenko created IGNITE-7749:
---

 Summary: testDiscoCacheReuseOnNodeJoin fails on TC
 Key: IGNITE-7749
 URL: https://issues.apache.org/jira/browse/IGNITE-7749
 Project: Ignite
  Issue Type: Bug
  Components: cache
Reporter: Pavel Kovalenko
Assignee: Pavel Kovalenko


{code:java}
java.lang.ClassCastException: 
org.apache.ignite.internal.util.GridConcurrentHashSet cannot be cast to 
java.lang.String
at 
org.apache.ignite.spi.discovery.IgniteDiscoveryCacheReuseSelfTest.assertDiscoCacheReuse(IgniteDiscoveryCacheReuseSelfTest.java:93)
at 
org.apache.ignite.spi.discovery.IgniteDiscoveryCacheReuseSelfTest.testDiscoCacheReuseOnNodeJoin(IgniteDiscoveryCacheReuseSelfTest.java:64)
{code}


There are 2 problems in the test.

1) We don't wait until the final topology version is set on all nodes, and 
start checking discovery caches immediately after the grids start. This can 
lead to a NullPointerException while accessing the discovery cache history.
2) We don't use the explicit assertEquals(String, Object, Object) overload 
when comparing Objects, so Java can choose the assertEquals(String, String) 
method to compare discovery cache fields that we obtain at runtime via 
reflection.
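Problem 2 is ordinary compile-time overload resolution: Java picks the overload from the arguments' static types, not their runtime types. A library-free sketch, where pick() is a hypothetical stand-in for the assertEquals overloads:

```java
public class OverloadSketch {
    // Stand-ins for assertEquals(String, String) and assertEquals(Object, Object).
    static String pick(String a, String b) { return "String overload"; }
    static String pick(Object a, Object b) { return "Object overload"; }

    public static void main(String[] args) {
        // A value whose static type is Object, as Field.get(...) would return.
        Object viaReflection = "some field value";

        // Static type Object -> the Object overload is chosen, as intended.
        System.out.println(pick(viaReflection, viaReflection));

        // Declared (or cast) as String, the String overload is chosen at
        // compile time instead -- the trap described in the test.
        String s = "some field value";
        System.out.println(pick(s, s));
    }
}
```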



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #3539: IGNITE-7747 An exception should not be throw if s...

2018-02-19 Thread DmitriyGovorukhin
GitHub user DmitriyGovorukhin opened a pull request:

https://github.com/apache/ignite/pull/3539

IGNITE-7747 An exception should not be thrown if segments are not found 

An exception should not be thrown if segments are not found for 
getAndReserveWalFiles. Iteration stops if the next segment is not found.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-7747

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3539.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3539


commit bc381187292e0c032f30069e9174bc78d1376789
Author: Dmitriy Govorukhin 
Date:   2018-02-19T10:34:17Z

IGNITE-7747 Exception should not be throw if segments not found for 
getAndReserveWalFiles. Stopped iteration if next segment not found




---


[jira] [Created] (IGNITE-7748) MVCC TX Implement TxLog related structures

2018-02-19 Thread Igor Seliverstov (JIRA)
Igor Seliverstov created IGNITE-7748:


 Summary: MVCC TX Implement TxLog related structures
 Key: IGNITE-7748
 URL: https://issues.apache.org/jira/browse/IGNITE-7748
 Project: Ignite
  Issue Type: Task
  Components: cache
Reporter: Igor Seliverstov
Assignee: Igor Seliverstov


Create TxLog on the basis of BPlusTree.

The key is a pair of two longs (which correspond to the coordinator version 
and the MVCC counter of MvccVersion).

The value is TxState - an enum.

TxState has the following possible values: PREPARED, COMMITTED, ABORTED, NA.

NA is a special value, which is returned when there is no info about the 
requested TX.

TxLog is managed by MvccProcessor and initialized along with MvccProcessor.

At the first step there are no special WAL records corresponding to TxLog 
operations (they will be implemented in the future).
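The layout described above can be sketched with a sorted map standing in for the BPlusTree. TxLogSketch and TxKey are illustrative names, not Ignite's actual classes:

```java
import java.util.Comparator;
import java.util.TreeMap;

public class TxLogSketch {
    // Possible TX states; NA is the "no info about the requested TX" marker.
    enum TxState { PREPARED, COMMITTED, ABORTED, NA }

    // Key: a pair of longs (coordinator version, MVCC counter of MvccVersion).
    static final class TxKey {
        final long crdVer;
        final long mvccCntr;

        TxKey(long crdVer, long mvccCntr) {
            this.crdVer = crdVer;
            this.mvccCntr = mvccCntr;
        }
    }

    // Lexicographic order on the pair, as a BPlusTree key would be ordered.
    private static final Comparator<TxKey> ORDER =
        Comparator.<TxKey>comparingLong(k -> k.crdVer)
            .thenComparingLong(k -> k.mvccCntr);

    // A sorted map stands in for the BPlusTree.
    private final TreeMap<TxKey, TxState> log = new TreeMap<>(ORDER);

    void put(long crdVer, long mvccCntr, TxState state) {
        log.put(new TxKey(crdVer, mvccCntr), state);
    }

    // Falls back to NA when there is no record of the requested TX.
    TxState state(long crdVer, long mvccCntr) {
        return log.getOrDefault(new TxKey(crdVer, mvccCntr), TxState.NA);
    }

    public static void main(String[] args) {
        TxLogSketch txLog = new TxLogSketch();
        txLog.put(1, 100, TxState.PREPARED);
        txLog.put(1, 100, TxState.COMMITTED); // a later state overwrites
        System.out.println(txLog.state(1, 100));
        System.out.println(txLog.state(2, 5));
    }
}
```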

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-7747) WAL manager getAndReserveWalFiles should not throw an exception if segments are not found

2018-02-19 Thread Dmitriy Govorukhin (JIRA)
Dmitriy Govorukhin created IGNITE-7747:
--

 Summary: WAL manager getAndReserveWalFiles should not throw an 
exception if segments are not found
 Key: IGNITE-7747
 URL: https://issues.apache.org/jira/browse/IGNITE-7747
 Project: Ignite
  Issue Type: Improvement
Reporter: Dmitriy Govorukhin
Assignee: Dmitriy Govorukhin






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] ignite pull request #3374: Ignite-7409 exception handling reworked for resum...

2018-02-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3374


---


Re: Semaphore Stuck when no acquirers to assign permit

2018-02-19 Thread Vladisav Jelisavcic
Hi Alexey,

I already reviewed it, but wanted to see the resolution of IGNITE-6454
first before I merge.

Thanks,
Vladisav


On Fri, Feb 16, 2018 at 12:39 PM, Alexey Goncharuk <
alexey.goncha...@gmail.com> wrote:

> Hi Vladisav,
>
> Can you please finalize the review? I would like to include the fix in one
> of the nightly Ignite builds for my own project.
>
> Thanks.
> --AG
>
> 2018-02-01 4:47 GMT+03:00 Dmitriy Setrakyan :
>
>> Hi Tim, Vlad,
>>
>> Any update on this ticket? I looked inside, but cannot understand the
>> status.
>>
>> Thanks,
>> D.
>>
>> -- Forwarded message --
>> From: Denis Magda 
>> Date: Fri, Jan 19, 2018 at 9:14 PM
>> Subject: Re: Semaphore Stuck when no acquirers to assign permit
>> To: u...@ignite.apache.org, dev@ignite.apache.org
>>
>>
>> Igniters,
>>
>> Who can check out Tim’s fix for the semaphore?
>>
>> pull request: https://github.com/apache/ignite/pull/3138 <
>> https://github.com/apache/ignite/pull/3138>
>> jira: https://issues.apache.org/jira/browse/IGNITE-7090 <
>> https://issues.apache.org/jira/browse/IGNITE-7090>
>>
>> —
>> Denis
>>
>> > On Jan 15, 2018, at 12:02 PM, Timay  wrote:
>> >
>> > I saw a release date set for 2.4 but have not had any feedback on the
>> jira so
>> > i wanted to check in on this. Can this make it into the 2.4 release?
>> >
>> > Thanks
>> > Tim
>> >
>> >
>> >
>> > --
>> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>


[jira] [Created] (IGNITE-7746) Failed to put data into cache if IndexedTypes configured.

2018-02-19 Thread Alexey Kuznetsov (JIRA)
Alexey Kuznetsov created IGNITE-7746:


 Summary: Failed to put data into cache if IndexedTypes configured.
 Key: IGNITE-7746
 URL: https://issues.apache.org/jira/browse/IGNITE-7746
 Project: Ignite
  Issue Type: Improvement
  Components: cache, sql
Reporter: Alexey Kuznetsov
Assignee: Vladimir Ozerov
 Fix For: 2.5


Reproducer:

{code}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class Reproducer {
    /**
     * @param args Command line arguments.
     */
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Only (String, String) pairs are declared as indexed types.
        CacheConfiguration ccfg = new CacheConfiguration("test");
        ccfg.setIndexedTypes(String.class, String.class);

        cfg.setCacheConfiguration(ccfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            IgniteCache cStr = ignite.cache("test");
            cStr.put("key", "value"); // matches the declared indexed types

            IgniteCache cInt = ignite.cache("test");
            cInt.put(1, 2); // key/value types differ from the declared ones

            IgniteCache cIntStr = ignite.cache("test");
            cIntStr.put(7, "test"); // fails with CachePartialUpdateException
        }
    }
}
{code}

{noformat}

Exception in thread "main" org.apache.ignite.cache.CachePartialUpdateException: 
Failed to update keys (retry update if possible).: [7]
 at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1278)
 at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1731)
 at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1087)
 at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:886)
 at org.apache.ignite.Reproducer.main(Reproducer.java:26)
Caused by: class 
org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: 
Failed to update keys (retry update if possible).: [7]
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.onPrimaryError(GridNearAtomicAbstractUpdateFuture.java:397)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.onPrimaryResponse(GridNearAtomicSingleUpdateFuture.java:253)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture$1.apply(GridNearAtomicAbstractUpdateFuture.java:303)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture$1.apply(GridNearAtomicAbstractUpdateFuture.java:300)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicAbstractUpdateFuture.map(GridDhtAtomicAbstractUpdateFuture.java:390)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1805)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1628)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:483)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:443)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1117)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:606)
 at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2369)
 at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2346)
 at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1084)
 ... 2 more
 Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to update 
keys.
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.UpdateErrors.addFailedKey(UpdateErrors.java:108)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateResponse.addFailedKey(GridNearAtomicUpdateResponse.java:329)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2559)
 at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:1883)
 at