[MTCGA]: new failures in builds [5616712] need to be handled

2020-09-22 Thread dpavlov . tasks
Hi Igniters,

 I've detected some new issues on TeamCity that need to be handled. You are 
more than welcome to help.

 If your changes may have led to these failures: we're grateful that you 
volunteered to contribute to this project, but things change and you may no 
longer be able to finalize your contribution.
 Could you respond to this email and indicate whether you wish to continue and 
fix the test failures, or step down so that a committer can revert your commit. 

 *New test failure in master-nightly 
IgniteCacheJoinQueryWithAffinityKeyTest.testJoinQueryWithAffinityKeyNotQueryField
 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=2628365073744976156=%3Cdefault%3E=testDetails

 *New test failure in master-nightly 
IgniteCacheJoinQueryWithAffinityKeyTest.testJoinQuery 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=7316031532046904549=%3Cdefault%3E=testDetails
 Changes that may have led to the failure were made by 
 - pavel vinokurov  
https://ci.ignite.apache.org/viewModification.html?modId=907530
 - mikhail petrov <32207922+ololo3...@users.noreply.github.com> 
https://ci.ignite.apache.org/viewModification.html?modId=907525

 - Here's a reminder of what contributors agreed to do 
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute 
 - Should you have any questions please contact dev@ignite.apache.org 

Best Regards,
Apache Ignite TeamCity Bot 
https://github.com/apache/ignite-teamcity-bot
Notification generated at 08:11:24 23-09-2020 


[MTCGA]: new failures in builds [5616739, 5616708] need to be handled

2020-09-22 Thread dpavlov . tasks
Hi Igniters,

 I've detected some new issues on TeamCity that need to be handled. You are 
more than welcome to help.

 If your changes may have led to these failures: we're grateful that you 
volunteered to contribute to this project, but things change and you may no 
longer be able to finalize your contribution.
 Could you respond to this email and indicate whether you wish to continue and 
fix the test failures, or step down so that a committer can revert your commit. 

 *New test failure in master 
RetryCauseMessageSelfTest.testSynthCacheWasNotFoundMessage 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-6875593003283015682=%3Cdefault%3E=testDetails

 *New test failure in master 
RetryCauseMessageSelfTest.testPartitionedCacheReserveFailureMessage 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-7187355201515083446=%3Cdefault%3E=testDetails

 *New test failure in master 
IgniteCacheCrossCacheJoinRandomTest.testJoin3Caches 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-6614395028483399540=%3Cdefault%3E=testDetails

 *New test failure in master 
IgniteCacheCrossCacheJoinRandomTest.testJoin4Caches 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=5980910237539399356=%3Cdefault%3E=testDetails
 Changes that may have led to the failure were made by 
 - pavel vinokurov  
https://ci.ignite.apache.org/viewModification.html?modId=907530
 - mikhail petrov <32207922+ololo3...@users.noreply.github.com> 
https://ci.ignite.apache.org/viewModification.html?modId=907525

 *New test failure in master-nightly 
IgniteDbMemoryLeakSqlQueryTest.testMemoryLeak 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-7115007825971208104=%3Cdefault%3E=testDetails
 Changes that may have led to the failure were made by 
 - pavel vinokurov  
https://ci.ignite.apache.org/viewModification.html?modId=907530
 - mikhail petrov <32207922+ololo3...@users.noreply.github.com> 
https://ci.ignite.apache.org/viewModification.html?modId=907525

 - Here's a reminder of what contributors agreed to do 
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute 
 - Should you have any questions please contact dev@ignite.apache.org 

Best Regards,
Apache Ignite TeamCity Bot 
https://github.com/apache/ignite-teamcity-bot
Notification generated at 07:41:23 23-09-2020 


[MTCGA]: new failures in builds [5616790] need to be handled

2020-09-22 Thread dpavlov . tasks
Hi Igniters,

 I've detected some new issues on TeamCity that need to be handled. You are 
more than welcome to help.

 If your changes may have led to these failures: we're grateful that you 
volunteered to contribute to this project, but things change and you may no 
longer be able to finalize your contribution.
 Could you respond to this email and indicate whether you wish to continue and 
fix the test failures, or step down so that a committer can revert your commit. 

 *New test failure in master IgnitePdsPageEvictionTest.testPageEvictionSql 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=-8180188490460437010=%3Cdefault%3E=testDetails
 Changes that may have led to the failure were made by 
 - pavel vinokurov  
https://ci.ignite.apache.org/viewModification.html?modId=907530
 - mikhail petrov <32207922+ololo3...@users.noreply.github.com> 
https://ci.ignite.apache.org/viewModification.html?modId=907525

 - Here's a reminder of what contributors agreed to do 
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute 
 - Should you have any questions please contact dev@ignite.apache.org 

Best Regards,
Apache Ignite TeamCity Bot 
https://github.com/apache/ignite-teamcity-bot
Notification generated at 07:11:23 23-09-2020 


[MTCGA]: new failures in builds [5616712] need to be handled

2020-09-22 Thread dpavlov . tasks
Hi Igniters,

 I've detected some new issues on TeamCity that need to be handled. You are 
more than welcome to help.

 If your changes may have led to these failures: we're grateful that you 
volunteered to contribute to this project, but things change and you may no 
longer be able to finalize your contribution.
 Could you respond to this email and indicate whether you wish to continue and 
fix the test failures, or step down so that a committer can revert your commit. 

 *New Critical Failure in master-nightly Queries (Binary Objects Simple 
Mapper) 
https://ci.ignite.apache.org/buildConfiguration/IgniteTests24Java8_BinaryObjectsSimpleMapperQueries?branch=%3Cdefault%3E
 Changes that may have led to the failure were made by 
 - pavel vinokurov  
https://ci.ignite.apache.org/viewModification.html?modId=907530
 - mikhail petrov <32207922+ololo3...@users.noreply.github.com> 
https://ci.ignite.apache.org/viewModification.html?modId=907525

 - Here's a reminder of what contributors agreed to do 
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute 
 - Should you have any questions please contact dev@ignite.apache.org 

Best Regards,
Apache Ignite TeamCity Bot 
https://github.com/apache/ignite-teamcity-bot
Notification generated at 03:56:23 23-09-2020 


[MTCGA]: new failures in builds [5616783] need to be handled

2020-09-22 Thread dpavlov . tasks
Hi Igniters,

 I've detected some new issues on TeamCity that need to be handled. You are 
more than welcome to help.

 If your changes may have led to these failures: we're grateful that you 
volunteered to contribute to this project, but things change and you may no 
longer be able to finalize your contribution.
 Could you respond to this email and indicate whether you wish to continue and 
fix the test failures, or step down so that a committer can revert your commit. 

 *New Critical Failure in master Cache 9 
https://ci.ignite.apache.org/buildConfiguration/IgniteTests24Java8_Cache9?branch=%3Cdefault%3E
 Changes that may have led to the failure were made by 
 - pavel vinokurov  
https://ci.ignite.apache.org/viewModification.html?modId=907530
 - mikhail petrov <32207922+ololo3...@users.noreply.github.com> 
https://ci.ignite.apache.org/viewModification.html?modId=907525

 - Here's a reminder of what contributors agreed to do 
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute 
 - Should you have any questions please contact dev@ignite.apache.org 

Best Regards,
Apache Ignite TeamCity Bot 
https://github.com/apache/ignite-teamcity-bot
Notification generated at 02:26:22 23-09-2020 


ApacheCon Bug Bash

2020-09-22 Thread Tom DuBuisson
Ignite Developers,



As part of our sponsorship of ApacheCon, our company MuseDev is doing a Bug
Bash for select Apache projects. We'll bring members of the ApacheCon
community together to find and fix a range of security and performance bugs
during the conference, and gamify the experience with teams, a
leaderboard, and prizes. The bash is open to everyone whether attending the
conference or not, and our whole dev team will also be participating to
help fix as many bugs as we can.



We're seeding the bug list with results from Muse, our code analysis
platform, which runs as a Github App and comments on possible bugs as part
of the pull request workflow.  Here's an example of what it looks like:

https://github.com/curl/curl/pull/5971#discussion_r490252196




We explored a number of Apache projects and are reaching out because our
analysis through Muse found some interesting bugs that could be fixed
during the Bash.



We're writing to see if you'd be interested in having your project included
in the Bash. Everything is set up on our end, and if you're interested, we
would need you to say yes on this listserv, and we’ll work with the Apache
Infrastructure team to grant Muse access to your Github mirror. We'll then
make sure it's all set up and ready for the Bash. And of course, everyone
on the project is most welcome to join the Bash and help us smash some bugs.


-Tom


Re: Applicability of term 'cache' to Apache Ignite

2020-09-22 Thread Denis Magda
I cast my vote for the "table". This term is generic, well-understood and
naturally fits SQL-intensive use cases. Basically, we don't need to
reinvent the wheel and the "table" aligns with our internal storage
structure proposed for 3.0.

Vladimir Ozerov's thoughts further down this discussion thread are worth
paying attention to. Vladimir suggests settling on "table".

-
Denis


On Fri, Sep 18, 2020 at 3:12 PM Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:

> Igniters,
>
> I would like to resurrect this discussion, as we have a chance to make the
> change in Ignite 3.0.
>
> The 'cache' term is clearly outdated and does not describe the
> functionality of Ignite. It looks like the term 'table' got the most
> support so far, and I think it quite accurately describes the generic tuple
> storage described in the "Schema-first Approach" IEP [1].
>
> Pure key-value API can be implemented as an IgniteMap facade, and
> effectively become one of the data structures, along with IgniteQueue and
> others.
>
> Please share your thoughts.
>
> [1]
>
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-54%3A+Schema-first+Approach
>
> -Val
>
> On Thu, Oct 18, 2018 at 9:56 AM Dmitriy Setrakyan 
> wrote:
>
> > I am beginning to like IgniteTable as well. How would something like this
> > be introduced to Ignite? Would we have IgniteTable extend IgniteCache?
> What
> > would happen to cache groups?
> >
> > D.
> >
> > On Thu, Oct 18, 2018 at 7:58 AM Павлухин Иван 
> wrote:
> >
> > > HI all,
> > >
> > > +1 for "table" from me. For me "table" has several benefits:
> > > 1. It's common and consequently easy to explain and understand.
> > > 2. It's quite universal. One can worry that "table" does not describe
> > > key-value storage well.
> > > I don't see any problem here, because the Hash Table data structure
> > > contains the word "table" in its name.
> > > Also DHT comes to mind. Internally we have the GridDhtCache class. So
> > > it's already a "table".
> > >
> > > Regarding multiple QueryEntities in a single cache. Correct me if I am
> > > wrong, but currently we do not recommend using them.
> > >
> > > Thu, 18 Oct 2018 at 15:18, David Harvey :
> > >
> > > > We had a terminology agreement early on where we agreed to call them
> > > > caches, but we still call them tables anyway.
> > > >
> > > > When I finally understood how you could have multiple tables in a
> > single
> > > > cache,  I tried to find example use cases, but couldn't.  Is there
> > even a
> > > > test with multiple queryEntities?
> > > >
> > > > On Thu, Oct 18, 2018, 8:10 AM Alexey Zinoviev <
> zaleslaw@gmail.com>
> > > > wrote:
> > > >
> > > > > From my perspective (ML module), it will be very easy to talk about
> > > > Ignite
> > > > > in SQL terms like table (with additional information about ability
> to
> > > > make
> > > > > key-value CRUD operations, not only SELECT * FROM Table)
> > > > > Also we could look at PostgreSQL with different plugins for SQL
> > > > > extension like PostGIS, or the support of JSON-B and the ability to
> > > > > store not only planar data with strict schema (I agree here with Vladimir).
> > > > >
> > > > > Thu, 18 Oct 2018 at 14:33, Ilya Lantukh :
> > > > >
> > > > > > I thought that current "caches" and "tables" have a 1-to-N relation.
> > > > > > If that's not a problem, then I also think that "table" is the best
> > > > > > term.
> > > > > >
> > > > > > On Thu, Oct 18, 2018 at 9:29 AM Vladimir Ozerov <
> > > voze...@gridgain.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Well, I never thought about term “table” as a replacement for
> > > > “cache”,
> > > > > > but
> > > > > > > it appears to be good candidate.
> > > > > > >
> > > > > > > This is used by some major vendors whose underlying
> storage
> > is
> > > > > > indeed
> > > > > > > a kind of key-value data structure. Most well-known example is
> > > MySQL
> > > > > with
> > > > > > > its MyISAM engine. Table can be used for both fixed and
> flexible
> > > > (e.g.
> > > > > > > JSON) schemas, as well as key-value access (hash map -> hash
> > table,
> > > > > both
> > > > > > > are good).
> > > > > > >
> > > > > > > Another important thing - we already use the term “table”, and it
> > > > > > > is always hard to explain to our users how it relates to “cache”.
> > > > > > > If “cache” is dropped, then a single term “table” will be used
> > > > > > > everywhere.
> > > > > > >
> > > > > > > Last, but not least - “table” works well for both in-memory and
> > > > > > persistent
> > > > > > > modes.
> > > > > > >
> > > > > > > So if we really aim to rename “cache”, then “table” is the
> > best
> > > > > > > candidate I’ve heard so far.
> > > > > > >
> > > > > > > Thu, 18 Oct 2018 at 8:40, Alexey Zinoviev <
> > > zaleslaw@gmail.com
> > > > >:
> > > > > > >
> > > > > > > > Or we could extend our SQL commands by "GET BY KEY = X" and
> > "PUT
> > > > (x1,
> > > > > > x2,
> > > > > > > > x3) BY KEY = X" and the 
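
For reference on the multiple-QueryEntities question raised above, here is a minimal 
sketch of a single cache declaring two SQL tables. It uses the standard public 
CacheConfiguration/QueryEntity API as I understand it; the cache, table and field 
names are purely illustrative.

{code:java}
import java.util.Arrays;

import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.configuration.CacheConfiguration;

public class TwoTablesOneCache {
    public static void main(String[] args) {
        // One cache...
        CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("personOrgCache");

        // ...exposing two SQL tables via two QueryEntity descriptors.
        QueryEntity person = new QueryEntity("java.lang.Long", "Person")
            .setTableName("PERSON")
            .addQueryField("name", "java.lang.String", null)
            .addQueryField("orgId", "java.lang.Long", null);

        QueryEntity organization = new QueryEntity("java.lang.Long", "Organization")
            .setTableName("ORGANIZATION")
            .addQueryField("name", "java.lang.String", null);

        ccfg.setQueryEntities(Arrays.asList(person, organization));

        // Two tables backed by a single cache - the 1-to-N relation mentioned above.
        System.out.println("Tables in cache: " + ccfg.getQueryEntities().size());
    }
}
{code}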

Re: [DISCUSSION] Renaming Ignite's product category

2020-09-22 Thread Konstantin Boudnik

+1

With regards,
  Cos

On 2020-09-21 20:35, Nikita Ivanov wrote:

My vote is to just call Ignite "IgniteDB". That's it. No additional
explanation is required, as no amount of additional verbiage will help.
Every DB is different: from MongoDB, to RedisDB, to CockroachDB, to Oracle
- they all look & act completely different, and they don't go around trying
to explain in one line what they do and how they are different.

"IgniteDB" is clear, concise and gives us the broadest initial acceptance
from the new user perspective.

Thanks,
--
Nikita Ivanov



On Sat, Sep 19, 2020 at 1:10 PM Saikat Maitra 
wrote:


Hi,

My thoughts are similar to what Denis and Val mentioned: Apache Ignite -
"A Memory-Centric Database".

It aligns with the current features of Apache Ignite as mentioned in the
post below.


https://thenewstack.io/memory-centric-architectures-whats-next-for-in-memory-computing

Regards,
Saikat

On Fri, Sep 18, 2020 at 9:02 AM Carbone, Adam 
wrote:


So when I came across Ignite, it was described as an In-Memory Data Grid.

So one way to look at this is: whom do you see Ignite as competing
against?

Are you competing against Redis and Aerospike - in-memory databases?

Or are you more competing with

Gigaspaces - True In memory Compute platform

And then you have the likes of

Hazelcast, which started as a distributed hash and has gained some
features...

One thing that I think is a differentiator that isn't being highlighted
but is a unique feature of Ignite, and the primary reason we ended up here:
the integration with Spark and its distributed/shared Datasets/DataFrames.

I don't know; for me, the In-Memory Data Grid label fits what Ignite
is...

Regards

~Adam

Adam Carbone | Director of Innovation – Intelligent Platform Team |
Bottomline Technologies
Office: 603-501-6446 | Mobile: 603-570-8418
www.bottomline.com



On 9/17/20, 11:45 AM, "Glenn Wiebe"  wrote:

 I agree with Stephen about "database" devaluing what Ignite can do
(though
 it probably hits the majority of existing use cases). I tend to go
with
 "massively distributed storage and compute platform"

 I know, I didn't take sides, I just have both.

 Cheers,
   Glenn

 On Thu., Sep. 17, 2020, 7:04 a.m. Stephen Darlington, <
 stephen.darling...@gridgain.com> wrote:

 > I think this is a great question. Explaining what Ignite does is
always a
 > challenge, so having a useful “tag line” would be very valuable.
 >
 > I’m not sure what the answer is but I think calling it a “database”
 > devalues all the compute facilities. "Computing platform” may be
too vague
 > but it at least says that we do more than “just” store data.
 >
 > On 17 Sep 2020, at 06:29, Valentin Kulichenko <
 > valentin.kuliche...@gmail.com> wrote:
 >
 > My vote is for the "distributed memory-first database". It clearly
states
 > that Ignite is a database (which is true at this point), while still
 > emphasizing the in-memory computing power endorsed by the platform.
 >
 > The "in-memory computing platform" is an ambiguous term and doesn't
really
 > reflect what Ignite is, especially in its current state.
 >
 > -Val
 >
 > On Wed, Sep 16, 2020 at 3:53 PM Denis Magda 
wrote:
 >
 >> Igniters,
 >>
 >> Throughout the history of our project, we could see how the
addition of
 >> certain features required us to reassess the project's name and
category.
 >>
 >> Before Ignite joined the ASF, it supported only compute APIs
resembling
 >> the
 >> MapReduce engine of Hadoop. Those days, it was fair to define
Ignite as "a
 >> distributed in-memory computing engine". Next, at the time of the
project
 >> donation, it already included key-value/SQL/transactional APIs,
was used
 >> as
 >> a distributed cache, and significantly outgrew the "in-memory
computing
 >> engine" use case. That's how the project transitioned to the
product
 >> category of in-memory caches and we started to name it as an
"in-memory
 >> data grid" or "in-memory computing platform" to differentiate from
 >> classical caching products such as Memcached and Redis.
 >>
 >> Nowadays, the project outgrew its caching use case, and the
classification
 >> of Ignite as an "in-memory data grid" or "in-memory computing
platform"
 >> doesn't sound accurate. We rebuilt our storage engine by replacing
a
 >> typical key-value engine with a B-tree engine that spans across
memory and
 >> disk tiers. And it's not surprising to see more deployments of
Ignite as a
 >> database on its own. So, it feels like we need to reconsider Ignite
 >> positioning again so that a) application developers can discover
it easily
 >> via search engines and b) the project can stand out from in-memory
 >> projects
 >> with intersecting capabilities.
 >>
 >> To the point, I'm suggesting to reposition Ignite in one of the
following
 >> ways:

Re: Changes in run.sh script

2020-09-22 Thread Stanislav Lukyanov
I believe that remote JMX access should NOT be enabled by default in any Ignite 
distributions - neither docker nor regular binary package.

Enabling remote JMX requires caution. It is a powerful interface, and the fact 
that ignite.sh enables it by default with no security (!) bothers me a lot.
One could play the "THIS IS A SECURITY BUG!!!" card here, and this wouldn't be 
too far from the truth.

I have also seen cases where a user who tried to enable remote JMX manually, 
following all best practices and enabling security, ran into issues due to 
ignite.sh overriding the user-provided JMX options.
Overall, I'd say that a user who doesn't know if they need remote JMX should 
start without it. A user who knows they need remote JMX should configure it 
with the necessary security options,
and the best way to do that in Ignite is currently to run with -nojmx 
(disabling ignite.sh's default options) and providing your own options. In 
other words, JMX options coming from ignite.sh do no good.

Note that this is all about remote JMX only. Local JMX monitoring (e.g. using 
JConsole or Prometheus Java agent exporter) is always available, even without 
any Java options.

I suggest we go ahead and disable remote JMX by default in Docker in 2.9, and 
plan to remove the remote JMX property settings from ignite.sh in 2.10.

Thanks,
Stan

> On 22 Sep 2020, at 18:41, Aleksandr Shapkin  wrote:
> 
> Hi all,
>  
> There is the run.sh script that’s required for docker images deployment and 
> internally it just invokes the default ignite.sh script prior to starting a 
> node.
>  
> So far, so good, but we discovered that it doesn’t propagate system signals 
> to the JVM due to its internal logic and also lacks some configuration 
> parameters, like JVM options, therefore we have rewritten it in 
> https://issues.apache.org/jira/browse/IGNITE-13453 
>  and hope to include it 
> in 2.9 release.
>  
> As it turned out there was another reason for modifying the script – there 
> was no way of disabling JMX using it, i.e. to set -nojmx=1, the flag 
> supported by ignite.sh. With the new version, JMX is disabled by default and 
> should be turned on explicitly by setting the JVM options, like 
> -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=49112 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false
>  
> As far as I can remember, there were some drawbacks with the default JMX 
> setting and ignite.sh – It reduces security and could conflict with the 
> control.sh, since it’s trying to open the same port. But it’s the subject for 
> a separate discussion.
>  
> So the question is – do we need to set those properties by default (enable 
> JMX), most likely copying them from ignite.sh, or is it fine to have JMX 
> disabled by default for k8s/docker deployments? 
> 
> -- 
> Alex.



Changes in run.sh script

2020-09-22 Thread Aleksandr Shapkin
Hi all,



There is the run.sh script that’s required for docker images deployment and
internally it just invokes the default ignite.sh script prior to starting a
node.



So far, so good, but we discovered that it doesn’t propagate system signals
to the JVM due to its internal logic and also lacks some configuration
parameters, like JVM options, therefore we have rewritten it in
https://issues.apache.org/jira/browse/IGNITE-13453 and hope to include it
in 2.9 release.



As it turned out there was another reason for modifying the script – there
was no way of disabling JMX using it, i.e. to set -nojmx=1, the flag
supported by ignite.sh. With the new version, JMX is disabled by default
and should be turned on explicitly by setting the JVM options, like
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=49112
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false



As far as I can remember, there were some drawbacks with the default JMX
setting and ignite.sh – It reduces security and could conflict with the
control.sh, since it’s trying to open the same port. But it’s the subject
for a separate discussion.



So the question is – do we need to set those properties by default (enable
JMX), most likely copying them from ignite.sh, or is it fine to have JMX
disabled by default for k8s/docker deployments?

-- 
Alex.


[jira] [Created] (IGNITE-13476) Calcite improvements. Implement ProjectionNode.

2020-09-22 Thread Stanilovsky Evgeny (Jira)
Stanilovsky Evgeny created IGNITE-13476:
---

 Summary: Calcite improvements. Implement ProjectionNode.
 Key: IGNITE-13476
 URL: https://issues.apache.org/jira/browse/IGNITE-13476
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Affects Versions: 2.8.1
Reporter: Stanilovsky Evgeny
Assignee: Stanilovsky Evgeny


Currently we have no functionality for filtering out all but the useful columns in 
inner representations. To further reduce heap allocations and, as a consequence, 
GC pressure, we need to implement such an approach.
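
To illustrate the idea (this is a conceptual sketch only - the class below is 
hypothetical and does not reflect the actual Calcite/Ignite execution-node API), a 
projection node replaces each wide source row with a narrow row containing only the 
columns required downstream, so the wide rows do not need to be retained on the heap:

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

/** Hypothetical sketch of a projection node: forwards only the requested columns. */
final class ProjectionNode {
    private final int[] cols;                    // indexes of columns needed downstream
    private final Consumer<Object[]> downstream; // stands in for the real downstream node

    ProjectionNode(int[] cols, Consumer<Object[]> downstream) {
        this.cols = cols;
        this.downstream = downstream;
    }

    /** Pushes a narrow copy of the row downstream instead of the full source row. */
    void push(Object[] row) {
        Object[] projected = new Object[cols.length];

        for (int i = 0; i < cols.length; i++)
            projected[i] = row[cols[i]];

        downstream.accept(projected);
    }

    public static void main(String[] args) {
        List<Object[]> out = new ArrayList<>();

        ProjectionNode node = new ProjectionNode(new int[] {0, 2}, out::add);

        node.push(new Object[] {1L, "unused payload", "Riga", 42}); // only columns 0 and 2 survive

        System.out.println(Arrays.toString(out.get(0))); // [1, Riga]
    }
}
{code}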



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13475) NPE on IgniteTxHandler.finishDhtLocal

2020-09-22 Thread Ilya Kasnacheev (Jira)
Ilya Kasnacheev created IGNITE-13475:


 Summary: NPE on IgniteTxHandler.finishDhtLocal
 Key: IGNITE-13475
 URL: https://issues.apache.org/jira/browse/IGNITE-13475
 Project: Ignite
  Issue Type: Bug
  Components: cache
Reporter: Ilya Kasnacheev
 Fix For: 2.8.1


{code}
[05:57:16,193][SEVERE][sys-stripe-15-#16][] Critical system error detected. 
Will be handled accordingly to configured handler 
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, 
super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet 
[SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], 
failureCtx=FailureContext [type=CRITICAL_ERROR, 
err=java.lang.NullPointerException]]
java.lang.NullPointerException
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.finishDhtLocal(IgniteTxHandler.java:1064)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.finish(IgniteTxHandler.java:953)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxFinishRequest(IgniteTxHandler.java:909)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.access$200(IgniteTxHandler.java:123)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$3.apply(IgniteTxHandler.java:217)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$3.apply(IgniteTxHandler.java:215)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1142)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:591)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:392)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:318)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:109)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:308)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1847)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1472)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$5200(GridIoManager.java:229)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1367)
at 
org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:565)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Thread.java:745)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: MTCGA bot is down

2020-09-22 Thread Petr Ivanov
The bot was down due to a "No space left on device" error.

It seems that one of our caches (cache-teamcityFatBuild) eats most of the 
disk space (193 of 200 GB across all caches).
Is there a way to shrink it on disk (cache entries are being cleaned, but that 
does not seem to affect disk usage)?



> On 22 Sep 2020, at 11:58, Pavel Tupitsyn  wrote:
> 
> Nikolay, can you try again? Seems to work fine for me.
> 
> On Tue, Sep 22, 2020 at 9:57 AM Nikolay Izhikov  wrote:
> 
>> Hello, Igniters.
>> 
>> Currently, mtcga bot is down - 502 bad gateway error.
>> Can someone help with it?
>> 
>> https://mtcga.gridgain.com/prs.html
>> 



[jira] [Created] (IGNITE-13474) Client node consistentId uniqueness is not checked

2020-09-22 Thread Ilya Kasnacheev (Jira)
Ilya Kasnacheev created IGNITE-13474:


 Summary: Client node consistentId uniqueness is not checked
 Key: IGNITE-13474
 URL: https://issues.apache.org/jira/browse/IGNITE-13474
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 2.8.1
Reporter: Ilya Kasnacheev


Please see the attached server and client nodes, as well as the SO discussion.

consistentId uniqueness is not checked on client node join, leading to multiple 
client nodes with the same consistentId as on a server node.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13473) Snapshot tests for ducktape

2020-09-22 Thread Sergei Ryzhov (Jira)
Sergei Ryzhov created IGNITE-13473:
--

 Summary: Snapshot tests for ducktape
 Key: IGNITE-13473
 URL: https://issues.apache.org/jira/browse/IGNITE-13473
 Project: Ignite
  Issue Type: Task
Reporter: Sergei Ryzhov
Assignee: Sergei Ryzhov


Snapshot tests for ducktape



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13472) JDBC bulkload operations are processed with wrong security context

2020-09-22 Thread Ryabov Dmitrii (Jira)
Ryabov Dmitrii created IGNITE-13472:
---

 Summary: JDBC bulkload operations are processed with wrong 
security context
 Key: IGNITE-13472
 URL: https://issues.apache.org/jira/browse/IGNITE-13472
 Project: Ignite
  Issue Type: Bug
Reporter: Ryabov Dmitrii
Assignee: Ryabov Dmitrii


{{JdbcAuthorizationTest.testCopyFrom()}} has many exceptions like

{code:java}
[12:02:56] (err) Failed to execute compound future reducer: GridCompoundFuture 
[rdc=null, initFlag=1, lsnrCalls=0, done=false, cancelled=false, err=null, 
futs=TransformCollectionView [true]]class 
org.apache.ignite.IgniteCheckedException: Authorization failed [perm=CACHE_PUT, 
name=test-bulkload-cache, 
subject=TestSecuritySubject{id=ca0e2ed9-f877-4c49-a80e-11a1c7a0, 
type=REMOTE_NODE, login=jdbc.JdbcAuthorizationTest0}]
at 
org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7589)
at 
org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:979)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at 
org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:569)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.plugin.security.SecurityException: 
Authorization failed [perm=CACHE_PUT, name=test-bulkload-cache, 
subject=TestSecuritySubject{id=ca0e2ed9-f877-4c49-a80e-11a1c7a0, 
type=REMOTE_NODE, login=jdbc.JdbcAuthorizationTest0}]
at 
org.apache.ignite.internal.processors.security.impl.TestSecurityProcessor.authorize(TestSecurityProcessor.java:153)
at 
org.apache.ignite.internal.processors.security.IgniteSecurityProcessor.authorize(IgniteSecurityProcessor.java:207)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.checkSecurityPermission(DataStreamerUpdateJob.java:170)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:123)
at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:7087)
at 
org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:971)
... 4 more
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: MTCGA bot is down

2020-09-22 Thread Pavel Tupitsyn
Nikolay, can you try again? Seems to work fine for me.

On Tue, Sep 22, 2020 at 9:57 AM Nikolay Izhikov  wrote:

> Hello, Igniters.
>
> Currently, mtcga bot is down - 502 bad gateway error.
> Can someone help with it?
>
> https://mtcga.gridgain.com/prs.html
>


[jira] [Created] (IGNITE-13471) Execute user-defined compute jobs asynchronously when CompletionStage is returned

2020-09-22 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-13471:
---

 Summary: Execute user-defined compute jobs asynchronously when 
CompletionStage is returned
 Key: IGNITE-13471
 URL: https://issues.apache.org/jira/browse/IGNITE-13471
 Project: Ignite
  Issue Type: Improvement
  Components: compute
Reporter: Pavel Tupitsyn


When user-defined Compute jobs (callables, runnables) return a CompletionStage:

1. Wait for completion (in a non-blocking way if possible)
2. Extract the result and return it to the initiator node
3. Construct a completed CompletionStage with the result on the initiator node 
and return it to the user code
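
A rough sketch of how this might look from user code (illustrative only: the 
IgniteCompute/IgniteCallable calls below are the existing public API, while the 
unwrapping behavior described in the comments is the proposed change, not what 
current releases do):

{code:java}
import java.util.concurrent.CompletableFuture;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteCallable;

public class AsyncComputeJobExample {
    /** Job that completes asynchronously on the remote node, e.g. after an external call. */
    static class PriceLookupJob implements IgniteCallable<CompletableFuture<Double>> {
        @Override public CompletableFuture<Double> call() {
            // Proposed: the remote executor waits for this stage without blocking a
            // thread and extracts the Double to send back (steps 1-2 above).
            return CompletableFuture.supplyAsync(() -> 42.0);
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CompletableFuture<Double> fut = ignite.compute().call(new PriceLookupJob());

            // Proposed: 'fut' is constructed on the initiator node and is already
            // completed with the extracted result (step 3), rather than being a
            // serialized copy of the remote node's future.
            System.out.println("Price: " + fut.join());
        }
    }
}
{code}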



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: IEP-51: Java Thin Client Async API

2020-09-22 Thread Pavel Tupitsyn
Yes, this makes a lot of sense (and can be applied to Services, too).

I've filed the ticket: https://issues.apache.org/jira/browse/IGNITE-13471
This requires a separate IEP, of course.

On Mon, Sep 21, 2020 at 6:33 PM mnk  wrote:

> Pavel Tupitsyn wrote
> >> result of a remote execution is a CompletionStage
> >
> > Can you give an example? What is a remote execution? Is this about
> Compute
> > and/or Services?
>
> This is about Compute.
>
> Let's say I'm doing affinityCallAsync for an IgniteCallable<R> where R
> implements CompletionStage<T> or R is a CompletableFuture<T>.  Then I
> wouldn't want to have the CompletionStage or CompletableFuture come back
> to me actually, rather something that will complete with the T when it's
> ready (and it would be nice if I could also cancel it for the
> CompletableFuture case). It could be done as well for affinityCall
> (non-async) too, although in my mind the former case is the more important
> one. Of course, affinityCall is just an example, it should apply to the
> other compute methods as appropriate.
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>


Re: [DISCUSSION] Maintenance Mode feature

2020-09-22 Thread Ivan Pavlukhin
Sergey,

Thank you for your answer. While I am not happy with the proposed
approach, things were never easy. Unfortunately, I cannot suggest a
100% better approach so far. So, I should trust your vision.

2020-09-22 10:29 GMT+03:00, Sergey Chugunov :
> Ivan,
>
> Checkpointer in Maintenance Mode is started and allows normal operations as
> it may be needed for defragmentation and possibly other cases.
>
> Discovery is started with a special implementation of SPI that doesn't make
> attempts to seek and/or connect to the rest of the cluster. From that
> perspective node in MM is totally isolated.
>
> Communication is started as usual, but I believe it doesn't matter as,
> according to discovery, no other nodes are observed in the topology and a
> connection attempt should not happen. But it may make sense to implement an
> isolated version of
> communication SPI as well to have 100% guarantee that no communication with
> other nodes will happen.
>
> It is important to note that GridRestProcessor is started normally as we
> need it to connect to the node via control utility.
>
> On Mon, Sep 21, 2020 at 7:04 PM Ivan Pavlukhin  wrote:
>
>> Sergey,
>>
>> > From  the code complexity perspective I'm trying to design the feature
>> in such a way that all maintenance code is as encapsulated as possible
>> and
>> avoids massive interventions into main workflows of components.
>>
>> Could you please briefly tell what means you use to achieve
>> encapsulation? Are Discovery, Communication, Checkpointer and other
>> components started in a maintenance mode in current design?
>>
>> 2020-09-21 15:19 GMT+03:00, Nikolay Izhikov :
>> > Hello, Sergey.
>> >
>> >> At the moment I'm aware about two use cases for this feature:
>> >> corrupted
>> >> PDS cleanup and defragmentation.
>> >
>> > AFAIKU There is third use-case for this mode.
>> >
>> > Change encryption master key in case node was down during cluster
>> > master
>> key
>> > change.
>> > In this case, the node can’t join the cluster, because its master key
>> > differs from the cluster’s.
>> > To recover the node, Ignite should locally change the master key before joining.
>> >
>> > Please, take a look into source code [1]
>> >
>> > [1]
>> >
>> https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/managers/encryption/GridEncryptionManager.java#L710
>> >
>> >> On 21 Sep 2020, at 14:37, Sergey Chugunov 
>> >> wrote:
>> >>
>> >> Ivan,
>> >>
>> >> Sorry for some confusion, MM indeed is not a normal mode. What I was
>> >> trying
>> >> to say is that when in MM node still starts and allows the user to
>> >> perform
>> >> actions with it like sending commands via control utility/JMX APIs or
>> >> reading metrics.
>> >>
>> >> This is the key point: although the node is not in the cluster, it is
>> >> still alive, can be monitored, and supports management to do
>> >> maintenance.
>> >>
>> >> From  the code complexity perspective I'm trying to design the feature
>> in
>> >> such a way that all maintenance code is as encapsulated as possible
>> >> and
>> >> avoids massive interventions into main workflows of components.
>> >> At the moment I'm aware about two use cases for this feature:
>> >> corrupted
>> >> PDS
>> >> cleanup and defragmentation. As far as I know it won't bring too much
>> >> complexity in both cases.
>> >>
>> >> I cannot say for other components but I believe it will be possible to
>> >> integrate MM feature into their workflow as well with reasonable
>> >> amount
>> >> of
>> >> refactoring.
>> >>
>> >> Does it make sense to you?
>> >>
>> >> On Sun, Sep 6, 2020 at 8:08 AM Ivan Pavlukhin 
>> >> wrote:
>> >>
>> >>> Sergey,
>> >>>
>> >>> Thank you for your answer!
>> >>>
>> >>> Might be I am looking at the subject from a different angle.
>> >>>
>>  I think of a node in MM as an almost normal one
>> >>> I cannot think of such a mode as a normal one, because it apparently
>> >>> does not perform usual cluster node functions. It is not a part of a
>> >>> cluster, caches data is not available, Discovery and Communication
>> >>> are
>> >>> not needed.
>> >>>
>> >>> I fear that with "node started in a special mode" approach we will
>> >>> get
>> >>> an additional flag in the code making the code more complex and
>> >>> fragile. Should not I worry about it?
>> >>>
>> >>> 2020-09-02 10:45 GMT+03:00, Sergey Chugunov
>> >>> > >:
>>  Vladislav, Ivan,
>> 
>>  Thank you for your questions and suggestions. Let me answer them.
>> 
>>  Vladislav,
>> 
>>  If I understood you correctly, you're talking about a node
>>  performing
>> >>> some
>>  automatic actions to fix the problem and then join the cluster as
>>  usual.
>> 
>>  However the original ticket [1] where we faced the need for
>> Maintenance
>>  Mode is about exactly the opposite: avoid doing automatic actions
>>  and
>> >>> give
>>  a user the ability to decide what to do.
>> 
>>  Also the idea of Maintenance Mode is that the node is 

Re: [DISCUSSION] Maintenance Mode feature

2020-09-22 Thread Sergey Chugunov
Ivan,

Checkpointer in Maintenance Mode is started and allows normal operations as
it may be needed for defragmentation and possibly other cases.

Discovery is started with a special implementation of SPI that doesn't make
attempts to seek and/or connect to the rest of the cluster. From that
perspective node in MM is totally isolated.

Communication is started as usual, but I believe it doesn't matter as,
according to discovery, no other nodes are observed in the topology and a
connection attempt should not happen. But it may make sense to implement an
isolated version of
communication SPI as well to have 100% guarantee that no communication with
other nodes will happen.

It is important to note that GridRestProcessor is started normally as we
need it to connect to the node via control utility.

On Mon, Sep 21, 2020 at 7:04 PM Ivan Pavlukhin  wrote:

> Sergey,
>
> > From  the code complexity perspective I'm trying to design the feature
> in such a way that all maintenance code is as encapsulated as possible and
> avoids massive interventions into main workflows of components.
>
> Could you please briefly tell what means you use to achieve
> encapsulation? Are Discovery, Communication, Checkpointer and other
> components started in a maintenance mode in current design?
>
> 2020-09-21 15:19 GMT+03:00, Nikolay Izhikov :
> > Hello, Sergey.
> >
> >> At the moment I'm aware about two use cases for this feature: corrupted
> >> PDS cleanup and defragmentation.
> >
> > AFAIKU There is third use-case for this mode.
> >
> > Change encryption master key in case node was down during cluster master
> key
> > change.
> > In this case, the node can’t join the cluster, because its master key
> > differs from the cluster’s.
> > To recover the node, Ignite should locally change the master key before joining.
> >
> > Please, take a look into source code [1]
> >
> > [1]
> >
> https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/managers/encryption/GridEncryptionManager.java#L710
> >
> >> On 21 Sep 2020, at 14:37, Sergey Chugunov 
> >> wrote:
> >>
> >> Ivan,
> >>
> >> Sorry for some confusion, MM indeed is not a normal mode. What I was
> >> trying
> >> to say is that when in MM node still starts and allows the user to
> >> perform
> >> actions with it like sending commands via control utility/JMX APIs or
> >> reading metrics.
> >>
> >> This is the key point: although the node is not in the cluster, it is
> >> still alive, can be monitored, and supports management to do maintenance.
> >>
> >> From  the code complexity perspective I'm trying to design the feature
> in
> >> such a way that all maintenance code is as encapsulated as possible and
> >> avoids massive interventions into main workflows of components.
> >> At the moment I'm aware about two use cases for this feature: corrupted
> >> PDS
> >> cleanup and defragmentation. As far as I know it won't bring too much
> >> complexity in both cases.
> >>
> >> I cannot say for other components but I believe it will be possible to
> >> integrate MM feature into their workflow as well with reasonable amount
> >> of
> >> refactoring.
> >>
> >> Does it make sense to you?
> >>
> >> On Sun, Sep 6, 2020 at 8:08 AM Ivan Pavlukhin 
> >> wrote:
> >>
> >>> Sergey,
> >>>
> >>> Thank you for your answer!
> >>>
> >>> Might be I am looking at the subject from a different angle.
> >>>
>  I think of a node in MM as an almost normal one
> >>> I cannot think of such a mode as a normal one, because it apparently
> >>> does not perform usual cluster node functions. It is not a part of a
> >>> cluster, caches data is not available, Discovery and Communication are
> >>> not needed.
> >>>
> >>> I fear that with "node started in a special mode" approach we will get
> >>> an additional flag in the code making the code more complex and
> >>> fragile. Should not I worry about it?
> >>>
> >>> 2020-09-02 10:45 GMT+03:00, Sergey Chugunov  >:
>  Vladislav, Ivan,
> 
>  Thank you for your questions and suggestions. Let me answer them.
> 
>  Vladislav,
> 
>  If I understood you correctly, you're talking about a node performing
> >>> some
>  automatic actions to fix the problem and then join the cluster as
>  usual.
> 
>  However the original ticket [1] where we faced the need for
> Maintenance
>  Mode is about exactly the opposite: avoid doing automatic actions and
> >>> give
>  a user the ability to decide what to do.
> 
>  Also the idea of Maintenance Mode is that the node is able to accept
>  commands, expose metrics and so on, thus we need all components to be
>  initialized (some of them may be partially initialized due to their
> own
>  maintenance).
>  To achieve that we need to go through a full cycle of node
>  initialization
>  including discovery initialization. When discovery is initialized (in
>  special isolated mode) I don't think it is easy to switch back to
>  normal
>  operations 

MTCGA bot is down

2020-09-22 Thread Nikolay Izhikov
Hello, Igniters.

Currently, mtcga bot is down - 502 bad gateway error.
Can someone help with it?

https://mtcga.gridgain.com/prs.html


Re: Apache Ignite 2.9.0 RELEASE [Time, Scope, Manager]

2020-09-22 Thread Alex Plehanov
Guys,

I've filed the ticket with a reproducer [1] for the discovery bug. This bug
is caused by ticket [2]. We discussed it with Vladimir Steshin privately and
decided to revert that ticket. I will do it today (after a TC bot visa) if
there are no objections.

[1]: https://issues.apache.org/jira/browse/IGNITE-13465
[2]: https://issues.apache.org/jira/browse/IGNITE-13134

Mon, 21 Sep 2020 at 11:08, Alex Plehanov :

> Guys,
>
> During internal testing, we've found a critical bug with
> discovery (cluster falls apart if two nodes are segmented sequentially). This
> problem is not reproduced in 2.8.1. I think we should fix it
> before release. Under investigation now. I'll let you know when we get
> something.
>
>> Thu, 17 Sep 2020 at 00:51, Andrey Gura :
>
>> > So what do you think? Should we proceed with a 'hacked' version of the
>> message factory in 2.9 and go for the runtime message generation in later
>> release, or keep the code clean and fix the regression in the next releases?
>> > Andrey, can you take a look at my change? I think it is fairly
>> straightforward and does not change the semantics, just skips the factory
>> closures for certain messages.
>>
>> IMHO 2.5% isn't too much, especially because it isn't relevant for all
>> workloads (I didn't get any significant drops during benchmarking). So
>> I prefer the runtime generation in later releases.
>>
>> On Mon, Sep 14, 2020 at 12:41 PM Alexey Goncharuk
>>  wrote:
>> >
>> > Alexey, Andrey, Igniters,
>> >
>> > So what do you think? Should we proceed with a 'hacked' version of the
>> message factory in 2.9 and go for the runtime message generation in later
>> release, or keep the code clean and fix the regression in the next releases?
>> > Andrey, can you take a look at my change? I think it is fairly
>> straightforward and does not change the semantics, just skips the factory
>> closures for certain messages.
>> >
>> > Personally, I would prefer fixing the regression given that we also
>> introduced tracing in this release.
>> >
>> >
>> >
>> > Fri, 11 Sep 2020 at 12:09, Alex Plehanov :
>> >>
>> >> Alexey,
>> >>
>> >> We've benchmarked commits 130376741bf vs ed52559eb95 with Yardstick, and
>> the performance of ed52559eb95 is better by about 2.5% on average in our
>> environment (about the same results we got benchmarking 65c30ec6 vs
>> 0606f03d). We've made 24 runs for each commit of
>> IgnitePutTxImplicitBenchmark (we got maximum drop for 2.9 on this
>> benchmark), 200 seconds warmup, 300 seconds benchmark, 6 servers, 5 clients
>> 50 threads each. Yardstick results for this configuration:
>> >> Commit 130376741bf: avg TPS=164096, avg latency=9173464 ns
>> >> Commit ed52559eb95: avg TPS=168283, avg latency=8945908 ns
>> >>
>> >> Fri, 11 Sep 2020 at 09:51, Artem Budnikov <
>> a.budnikov.ign...@gmail.com>:
>> >>>
>> >>> Hi Everyone,
>> >>>
>> >>> I posted an instruction on how to publish the docs on
>> ignite.apache.org/docs [1]. When you finish with Ignite 2.9, you can
>> update the docs by following the instruction. Unfortunately, I won't be
>> able to spend any time on this project any longer. You can send your pull
>> requests and questions about the documentation to Denis Magda.
>> >>>
>> >>> -Artem
>> >>>
>> >>> [1] :
>> https://cwiki.apache.org/confluence/display/IGNITE/How+to+Document
>> >>>
>> >>> On Thu, Sep 10, 2020 at 2:45 PM Alexey Goncharuk <
>> alexey.goncha...@gmail.com> wrote:
>> 
>>  Alexey,
>> 
>>  I've tried to play with message factories locally, but
>> unfortunately, I
>>  cannot spot the difference between old and new implementation in
>>  distributed benchmarks. I pushed an implementation of
>> MessageFactoryImpl
>>  with the old switch statement to the ignite-2.9-revert-12568 branch
>>  (discussed this with Andrey Gura, the change should be compatible
>> with the
>>  new metrics as we still use the register() mechanics).
>> 
>>  Can you check if this change makes any difference performance-wise
>> in your
>>  environment? If yes, we can go with runtime code generation in the
>> long
>>  term: register classes and generate a dynamic message factory with a
>> switch
>>  statement once all messages are registered (not in 2.9 though,
>> obviously).
>> 
>>  Wed, 9 Sep 2020 at 14:53, Alex Plehanov > >:
>> 
>>  > Hello guys,
>>  >
>>  > I've tried to optimize tracing implementation (ticket [1]), it
>> reduced the
>>  > drop, but not completely removed it.
>>  > Ivan Rakov, Alexander Lapin, can you please review the patch?
>>  > Ivan Artiukhov, can you please benchmark the patch [2] against
>> 2.8.1
>>  > release on your environment?
>>  > With this patch on our environment, it's about a 3% drop left,
>> it's close
>>  > to measurement error and I think such a drop is not a showstopper.
>> Guys,
>>  > WDYT?
>>  >
>>  > Also, I found that compatibility is broken for JDBC thin driver
>> between 2.8
>>  > and 2.9
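
To make the message-factory trade-off discussed above more concrete, here is a 
rough sketch of the two styles: closure-based registration (one Supplier 
indirection per message instantiation) versus a switch-based factory that could be 
hand-written or generated at runtime once all message types are registered. The 
interfaces are simplified stand-ins, not Ignite's actual MessageFactory classes.

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class MessageFactorySketch {
    interface Message { /* placeholder for a network message */ }

    static class TxFinishRequest implements Message { }
    static class CachePutRequest implements Message { }

    /** Style 1: registration via closures; every create() call goes through a Supplier. */
    static class ClosureBasedFactory {
        private final Map<Short, Supplier<Message>> suppliers = new HashMap<>();

        void register(short type, Supplier<Message> supplier) {
            suppliers.put(type, supplier);
        }

        Message create(short type) {
            return suppliers.get(type).get();
        }
    }

    /** Style 2: a plain switch, written by hand or generated after registration. */
    static class SwitchBasedFactory {
        Message create(short type) {
            switch (type) {
                case 1: return new TxFinishRequest();
                case 2: return new CachePutRequest();
                default: throw new IllegalArgumentException("Unknown message type: " + type);
            }
        }
    }

    public static void main(String[] args) {
        ClosureBasedFactory closures = new ClosureBasedFactory();
        closures.register((short) 1, TxFinishRequest::new);
        closures.register((short) 2, CachePutRequest::new);

        System.out.println(closures.create((short) 1).getClass().getSimpleName());
        System.out.println(new SwitchBasedFactory().create((short) 2).getClass().getSimpleName());
    }
}
{code}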