Re: Rework storage format to index-organized approach

2017-11-28 Thread Vladimir Ozerov
Dima,

1) Primary key lookups could become a bit faster, but no breakthrough is
expected - there will be no need to jump from a B+Tree leaf to a data page,
but the tree itself will be bigger because data records take more space
than index records. I expect parity here.

2) We should observe a dramatic improvement for scans (either ScanQuery or
SqlQuery) because data will be stored sequentially within blocks. Consider
the following case - a table with 10 records which fit into 1 data page. In
the current approach (heap) these records could be located in anywhere from
1 to 10 different data blocks - it all depends on update timings and free
lists. So you end up with 10 page lock/unlock cycles and up to 10 page
reads, which will drive our LRU policy mad. With the index-organized
approach data will be stored in 1 block in the best case (sequential PK, no
fragmentation), or 2-3 blocks in case of page splits or segmentation.
Clearly, this would be a huge win in terms of locks, page reads and IO for
scan workloads.
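
The arithmetic in this example can be sketched with a toy model (illustrative only, not Ignite code; the fixed page capacity and worst-case heap placement are simplifying assumptions):

```java
// Toy model: pages a scan must touch for N records, heap vs. index-organized.
public class ScanPageModel {
    /** Index-organized best case: records packed sequentially into pages. */
    static int clusteredPages(int records, int recordsPerPage) {
        return (records + recordsPerPage - 1) / recordsPerPage; // ceiling division
    }

    /** Heap worst case: every record may end up on its own page. */
    static int heapWorstCasePages(int records) {
        return records;
    }

    public static void main(String[] args) {
        // 10 records that fit into a single page:
        System.out.println(clusteredPages(10, 10));   // 1 page lock/read
        System.out.println(heapWorstCasePages(10));   // up to 10 page locks/reads
    }
}
```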

3) DML will be faster in case of sequential primary keys, e.g. a (nearly)
monotonic LONG such as a transaction identifier. In this case data will be
laid out in a perfectly sequential manner within individual blocks, and in
most cases an INSERT will lead to 1 data page update and 1 WAL record.
Compare that to 6 WAL record updates with the current approach. On the
other hand, random INSERTs (e.g. a UUID key) could become slower due to
page splits and fragmentation. Heap-organized storage is preferable in this
case.
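
The insert-locality argument can likewise be illustrated with a back-of-envelope model (again not Ignite code; the fixed page capacity and uniformly random page placement are assumptions, not Ignite's actual B+Tree behavior):

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

// Toy model: distinct pages dirtied by n inserts, sequential vs. random keys.
public class InsertLocality {
    /** Sequential keys: inserts append, so only the trailing pages are dirtied. */
    static int sequentialPagesDirtied(int n, int recordsPerPage) {
        return (n + recordsPerPage - 1) / recordsPerPage;
    }

    /** Random keys (e.g. UUID): each insert lands on an arbitrary existing page. */
    static int randomPagesDirtied(int n, int totalPages, long seed) {
        Random rnd = new Random(seed);
        Set<Integer> dirtied = new HashSet<>();
        for (int i = 0; i < n; i++)
            dirtied.add(rnd.nextInt(totalPages));
        return dirtied.size();
    }

    public static void main(String[] args) {
        System.out.println(sequentialPagesDirtied(1000, 100)); // 10 pages
        // The same 1000 inserts with random keys dirty hundreds of pages:
        System.out.println(randomPagesDirtied(1000, 1000, 42L));
    }
}
```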

4) Ideally we should not have an index per partition, because in that case
PK range scans, which are typical for OLAP workloads and JOINs, will be
slow. With a single index, however, it would not be that easy to wipe out
an evicted partition. This is another trade-off - fast operations on a
stable system at the cost of slower intermediate processes.

On Tue, Nov 28, 2017 at 6:27 AM, Dmitriy Setrakyan 
wrote:

> Vladimir,
>
> I definitely like the overall direction. My comments are below...
>
>
> On Mon, Nov 27, 2017 at 12:46 PM, Vladimir Ozerov 
> wrote:
>
> >
> > I propose to adopt this approach in two phases:
> > 1) Optionally add data to leaf pages. This should improve our ScanQuery
> > dramatically
> >
>
>  Definitely a good idea. Shouldn't it make the primary lookups faster as
> well?
>
> 2) Optionally have a single primary index instead of per-partition indexes. This
> > should improve our updates and SQL scans at the cost of harder rebalance
> > and recovery.
> >
>
> Can you explain why it would improve SQL updates and Scan queries?
>
> Also, why would this approach make rebalancing slower? If we keep the index
> sorted by partition, then the rebalancing process should be able to grab
> any partition at any time. Do you agree?
>
> D.
>


Re: Rework storage format to index-organized approach

2017-11-28 Thread Vladimir Ozerov
Denis,

No, most likely free lists (or any other space management component) will
stay. But in case of index-organized storage we will use them in fewer
scenarios.

On Tue, Nov 28, 2017 at 6:27 AM, Dmitriy Setrakyan 
wrote:

> Vladimir,
>
> I definitely like the overall direction. My comments are below...
>
>
> On Mon, Nov 27, 2017 at 12:46 PM, Vladimir Ozerov 
> wrote:
>
> >
> > I propose to adopt this approach in two phases:
> > 1) Optionally add data to leaf pages. This should improve our ScanQuery
> > dramatically
> >
>
>  Definitely a good idea. Shouldn't it make the primary lookups faster as
> well?
>
> 2) Optionally have a single primary index instead of per-partition indexes. This
> > should improve our updates and SQL scans at the cost of harder rebalance
> > and recovery.
> >
>
> Can you explain why it would improve SQL updates and Scan queries?
>
> Also, why would this approach make rebalancing slower? If we keep the index
> sorted by partition, then the rebalancing process should be able to grab
> any partition at any time. Do you agree?
>
> D.
>


Re: Ignite Enhancement Proposal #7 (Internal problems detection)

2017-11-28 Thread Vladimir Ozerov
Denis,

Yes, but can we look at proposed API before we dig into implementation?
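
For illustration, one possible shape of such an API, based purely on the policies quoted below in this thread (all names here are hypothetical, not an existing Ignite API):

```java
// Hypothetical sketch only: neither FailureProcessingPolicy nor react()
// is an actual Ignite API; the values mirror the proposals quoted below.
public class FailurePolicySketch {
    enum FailureProcessingPolicy {
        NOOP,    // report the failure, trigger metrics, leave the process alone
        HALT,    // NOOP actions + Ignite process termination
        RESTART, // NOOP actions + process restart
        EXEC     // NOOP actions + run a user-provided script
    }

    static String react(FailureProcessingPolicy plc) {
        switch (plc) {
            case NOOP:    return "report";
            case HALT:    return "report+stop";
            case RESTART: return "report+restart";
            case EXEC:    return "report+script";
            default:      throw new IllegalArgumentException(plc.toString());
        }
    }

    public static void main(String[] args) {
        // Could be configured per known failure kind, e.g. OOM -> RESTART:
        System.out.println(react(FailureProcessingPolicy.RESTART)); // report+restart
    }
}
```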

On Tue, Nov 28, 2017 at 9:43 PM, Denis Magda  wrote:

> I think the failure processing policy should be configured via
> IgniteConfiguration in a way similar to the segmentation policies.
>
> —
> Denis
>
> > On Nov 27, 2017, at 11:28 PM, Vladimir Ozerov 
> wrote:
> >
> > Dmitry,
> >
> > How these policies will be configured? Do you have any API in mind?
> >
> > On Thu, Nov 23, 2017 at 6:26 PM, Denis Magda  wrote:
> >
> >> No objections here. Additional policies like EXEC might be added later
> >> depending on user needs.
> >>
> >> —
> >> Denis
> >>
> >>> On Nov 23, 2017, at 2:26 AM, Дмитрий Сорокин <
> sbt.sorokin@gmail.com>
> >> wrote:
> >>>
> >>> Denis,
> >>> I propose starting with the first three policies (they are already
> >>> implemented and just await some code combing, commit & review).
> >>> As for the fourth policy (EXEC), I think it is rather an additional
> >>> property (some script path) than a policy.
> >>>
> >>> 2017-11-23 0:43 GMT+03:00 Denis Magda :
> >>>
>  Just provide a FailureProcessingPolicy with possible reactions:
>  - NOOP - exceptions will be reported, metrics will be triggered, but the
>  affected Ignite process won't be touched.
>  - HALT (or STOP or KILL) - all the actions of NOOP + Ignite
>  process termination.
>  - RESTART - NOOP actions + process restart.
>  - EXEC - execute a custom script provided by the user.
> 
>  If needed, the policy can be set per known failure such as OOM or
>  persistence errors, so that the user can act accordingly based on the
>  context.
> 
>  —
>  Denis
> 
> > On Nov 21, 2017, at 11:43 PM, Vladimir Ozerov 
>  wrote:
> >
> > In the first iteration I would focus only on reporting facilities, to
> >> let
> > administrator spot dangerous situation. And in the second phase, when
> >> all
> > reporting and metrics are ready, we can think on some automatic
> >> actions.
> >
> > On Wed, Nov 22, 2017 at 10:39 AM, Mikhail Cherkasov <
>  mcherka...@gridgain.com
> >> wrote:
> >
> >> Hi Anton,
> >>
> >> I don't think that we should shut down the node in case of
> >> IgniteOOMException: if one node has no space, then the others probably
> >> don't have it either, so re-balancing will cause IgniteOOM on all other
> >> nodes and kill the whole cluster. I think for some configurations the
> >> cluster should survive and allow the user to clean the cache or/and add
> >> more nodes.
> >>
> >> Thanks,
> >> Mikhail.
> >>
> >> On Nov 20, 2017 at 6:53 PM, "Anton Vinogradov" <
> >> avinogra...@gridgain.com> wrote:
> >>
> >>> Igniters,
> >>>
> >>> Internal problems may, and unfortunately do, cause unexpected cluster
> >>> behavior.
> >>> We should determine the behavior in case any internal problem
> >>> happens.
> >>>
> >>> Well-known internal problems can be split into:
> >>> 1) OOM or any other reason cause node crash
> >>>
> >>> 2) Situations requiring graceful node shutdown with a custom
> >>> notification
> >>> - IgniteOutOfMemoryException
> >>> - Persistence errors
> >>> - ExchangeWorker exits with error
> >>>
> >>> 3) Performance issues, which should be covered by metrics
> >>> - GC STW duration
> >>> - Timed out tasks and jobs
> >>> - TX deadlock
> >>> - Hanged Tx (waits for some service)
> >>> - Java Deadlocks
> >>>
> >>> I created special issue [1] to make sure all these metrics will be
> >>> presented at WebConsole or VisorConsole (what's preferred?)
> >>>
> >>> 4) Situations requiring an external monitoring implementation
> >>> - GC STW duration exceeds the maximum allowed length (the node should
> >>> be stopped before the STW pause finishes)
> >>>
> >>> All these problems were reported by different persons at different
> >>> times, so we should reanalyze each of them and possibly find better
> >>> ways to solve them than those described in the issues.
> >>>
> >>> P.s. IEP-7 [2] already contains 9 issues, feel free to mention
>  something
> >>> else :)
> >>>
> >>> [1] https://issues.apache.org/jira/browse/IGNITE-6961
> >>> [2]
> >>> https://cwiki.apache.org/confluence/display/IGNITE/IEP-
> >>> 7%3A+Ignite+internal+problems+detection
> >>>
> >>
> 
> 
> >>
> >>
>
>


Re: Optimization of SQL queries from Spark Data Frame to Ignite

2017-11-28 Thread Vladimir Ozerov
Nikolay,

Regarding p3. - partition pruning is already implemented in Ignite, so
there is no need to do this on your own.
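
For readers unfamiliar with partition pruning: an equality predicate on the affinity key lets the engine compute the single partition that can hold matching rows and skip the rest. A simplified sketch (assumption: affinity reduced to a hash modulo; Ignite's actual RendezvousAffinityFunction is more elaborate):

```java
// Simplified illustration of partition pruning; not Ignite's real affinity.
public class PartitionPruning {
    /** Maps a key to its partition; only this partition can hold the key. */
    static int partitionFor(Object key, int partitions) {
        return Math.floorMod(key.hashCode(), partitions); // always non-negative
    }

    public static void main(String[] args) {
        int parts = 1024;
        // "WHERE key = 42" needs to visit 1 partition instead of all 1024:
        System.out.println(partitionFor(42, parts));
        // "key IN (X, Y, Z)" prunes to at most 3 partitions, one per key.
    }
}
```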

On Wed, Nov 29, 2017 at 3:23 AM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:

> Nikolay,
>
> A custom strategy allows fully processing the AST generated by Spark and
> converting it to Ignite SQL, so there will be no execution on the Spark side at
> all. This is what we are trying to achieve here. Basically, one will be
> able to use DataFrame API to execute queries directly on Ignite. Does it
> make sense to you?
>
> I would recommend taking a look at the MemSQL implementation, which does
> similar stuff: https://github.com/memsql/memsql-spark-connector
>
> Note that this approach will work only if all relations included in AST are
> Ignite tables. Otherwise, strategy should return null so that Spark falls
> back to its regular mode. Ignite will be used as regular data source in
> this case, and probably it's possible to implement some optimizations here
> as well. However, I never investigated this and it seems like another
> separate discussion.
>
> -Val
>
> On Tue, Nov 28, 2017 at 9:54 AM, Николай Ижиков 
> wrote:
>
> > Hello, guys.
> >
> > I have implemented basic support of Spark Data Frame API [1], [2] for
> > Ignite.
> > Spark provides API for a custom strategy to optimize queries from spark
> to
> > underlying data source(Ignite).
> >
> > The goal of optimization(obvious, just to be on the same page):
> > Minimize data transfer between Spark and Ignite.
> > Speedup query execution.
> >
> > I see 3 ways to optimize queries:
> >
> > 1. *Join Reduce* If one makes a query that joins two or more
> > Ignite tables, we have to pass all the join info to Ignite and transfer to
> > Spark only the result of the join.
> > To implement it we have to extend the current implementation with a new
> > RelationProvider that can generate all kinds of joins for two or more
> > tables. We should also add some tests.
> > The question is - how should the join result be partitioned?
> >
> >
> > 2. *Order by* If one makes a query to an Ignite table with an
> > ORDER BY clause, we can execute the sorting on the Ignite side.
> > But it seems that currently Spark doesn't have any way to be told
> > that partitions are already sorted.
> >
> >
> > 3. *Key filter* If one makes a query with `WHERE key = XXX` or
> > `WHERE key IN (X, Y, Z)`, we can reduce the number of partitions
> > and query only the partitions that store the given key values.
> > Is this kind of optimization already built into Ignite, or should I
> > implement it myself?
> >
> > Maybe there are other ways to make queries run faster?
> >
> > [1] https://spark.apache.org/docs/latest/sql-programming-guide.html
> > [2] https://github.com/apache/ignite/pull/2742
> >
>


[jira] [Created] (IGNITE-7064) Web console: implement mechanism to

2017-11-28 Thread Ilya Borisov (JIRA)
Ilya Borisov created IGNITE-7064:


 Summary: Web console: implement mechanism to
 Key: IGNITE-7064
 URL: https://issues.apache.org/jira/browse/IGNITE-7064
 Project: Ignite
  Issue Type: Improvement
  Components: wizards
Reporter: Ilya Borisov
Assignee: Alexander Kalinin
Priority: Minor


Most E2E tests require complex DB/web-agent/Ignite cluster state management.
Let's implement tools to help with that, maybe something easy at first, like DB
state. The Web Console backend already has a similar mechanism we can reuse.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] ignite pull request #3103: IGNITE-7043 Fix method name suggested when page e...

2017-11-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3103


---


Re: Ignite page with video resources and recordings

2017-11-28 Thread Dmitriy Setrakyan
Sounds like a good idea.

On Tue, Nov 28, 2017 at 5:16 PM, Denis Magda  wrote:

> Igniters,
>
> There are plenty of recordings of Ignite meetups, webinars and conference
> talks available on the Internet. Some of them introduce basic components
> and capabilities, some share best practices and pitfalls, while others
> share use cases.
>
> Generally, it's beneficial for both the Ignite community and users to gather
> and expose the most useful ones under a special video recording section.
> For instance, we might consider these talks to be added right away:
> • Ignite use case: https://youtu.be/1D8hyLWMtfM
> • Ignite essentials: https://www.youtube.com/watch?v=G22L2KW9gEQ
> • Kubernetes: https://www.youtube.com/watch?v=igDB0wyodr8
>
> Instead of creating a new page for this purpose I would rework the
> screencasts' page combining all the media content there:
> https://ignite.apache.org/screencasts.html
>
> Here is a JIRA ticket: https://issues.apache.org/jira/browse/IGNITE-7062
>
> Feedback and suggestions are welcome.
>
> Denis


Re: TC issues. IGNITE-3084. Spark Data Frame API

2017-11-28 Thread Николай Ижиков

Valentin,

For now the `Ignite RDD` build runs on jdk1.7.
We need to update it to jdk1.8.

I wrote out all the version numbers to be clear:

1. Current master - Spark version is 2.1.0.
So both `Ignite RDD` and `Ignite RDD 2.10` run OK on jdk1.7.

2. My branch -
`Ignite RDD 2.10` - Spark version is 2.1.2 - runs OK on jdk1.7.
`Ignite RDD` - Spark version is 2.2.0 - fails on jdk1.7, *has to be
changed to run on jdk1.8*


On 29.11.2017 03:27, Valentin Kulichenko writes:

Nikolay,

If Spark requires Java 8, then I guess we have no choice. How is TC configured
at the moment? My understanding is that Spark-related suites are successfully
executed there, so is there an issue?

-Val

On Tue, Nov 28, 2017 at 2:42 AM, Николай Ижиков wrote:

Hello, Valentin.

Added '-Dscala-2.10' to the build config. Let me know if it helps.


Yes, it helps. Thank you!
Now, 'Ignite RDD spark 2_10' succeed for my branch.


Do you mean that IgniteRDD does not compile on JDK7? If yes, do we know 
the reason? I don't think switching it to JDK8 is a solution as it should work 
with both.


I mean that the latest version of Spark doesn't support jdk7.

http://spark.apache.org/docs/latest/ 

"Spark runs on Java 8+..."
"For the Scala API, Spark 2.2.0 uses Scala 2.11..."
"Note that support for Java 7... were removed as of Spark 2.2.0"
"Note that support for Scala 2.10 is deprecated..."

Moreover, we can't have an IgniteCatalog for Spark 2.1.
Please, see my explanation in jira ticket -


https://issues.apache.org/jira/browse/IGNITE-3084?focusedCommentId=16268523=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16268523



Do you see any options to support jdk7 for spark module?

> I think all tests should be executed on TC. Can you check if they work 
and add them to corresponding suites

OK, I'll file a ticket and try to fix it shortly.

https://issues.apache.org/jira/browse/IGNITE-7042 


On 28.11.2017 03:33, Valentin Kulichenko writes:

Hi Nikolay,

Please see my responses inline.

-Val

On Fri, Nov 24, 2017 at 2:55 AM, Николай Ижиков wrote:

     Hello, guys.

     I have some issues on TC with my PR [1] for IGNITE-3084(Spark Data 
Frame API).
     Can you, please, help me:


     1. `Ignite RDD spark 2_10` -

     Currently this build runs with the following profiles:
`-Plgpl,examples,scala-2.10,-clean-libs,-release` [2]
     That means the `scala` profile is also activated for `Ignite RDD spark
2_10`
     Because `scala` activation is done like [3]:

     ```
     <activation>
         <property>
             <name>!scala-2.10</name>
         </property>
     </activation>
     ```

     I think it is a misconfiguration because scala (2.11) shouldn't be
activated for the 2.10 build.
     Am I missing something?

     Can someone edit build property?
              * Add `-scala` to profiles list
              * Or add `-Dscala-2.10` to jvm properties to turn off 
`scala` profile in this build.


Added '-Dscala-2.10' to the build config. Let me know if it helps.


     2. `Ignite RDD` -

     Currently this build run on jvm7 [4].
     As I wrote in my previous mail [5] current version of spark(2.2) 
runs only on jvm8.

     Can someone edit build property to run it on jvm8?


Do you mean that IgniteRDD does not compile on JDK7? If yes, do we know 
the reason? I don't think switching it to JDK8 is a solution as it should work 
with both.


     3. For now `Ignite RDD` and `Ignite RDD spark 2_10` only runs java 
tests [6] existing in `spark` module.
     There are several existing tests written in scala(i.e. scala-test) 
ignored in TC. IgniteRDDSpec [7] for example.
     Is it turned off on purpose, or am I missing something?
     Should we run scala-test for spark and spark_2.10 modules?

I think all tests should be executed on TC. Can you check if they work 
and add them to corresponding suites?


     [1] https://github.com/apache/ignite/pull/2742 
     [2] 
https://ci.ignite.apache.org/viewLog.html?buildId=960220=Ignite20Tests_IgniteRddSpark210=buildLog&_focus=379#_state=371


[jira] [Created] (IGNITE-7063) Web console: improve error handling in case of custom SMTP server

2017-11-28 Thread Pavel Konstantinov (JIRA)
Pavel Konstantinov created IGNITE-7063:
--

 Summary: Web console: improve error handling in case of custom 
SMTP server
 Key: IGNITE-7063
 URL: https://issues.apache.org/jira/browse/IGNITE-7063
 Project: Ignite
  Issue Type: Bug
  Components: wizards
Reporter: Pavel Konstantinov


Add an error message in case settings.json can't be read (contains an error).
Add an error message in case the 'forgot password' message fails to be sent.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Apache Ignite Talk at Nike HQ: DEC 14

2017-11-28 Thread Denis Magda
+ user list

Dani, excellent news and good luck!

We’ve added an announcement to Ignite events page to draw more attention:
https://ignite.apache.org/events.html#niketechtalksdec2017

—
Denis

> On Nov 28, 2017, at 1:45 PM, Dani Traphagen  wrote:
> 
> Hi Apache Ignite Devs,
> 
> For those of you in the Pacific Northwest, I will be giving a talk at
> Nike’s Headquarters on Apache Ignite:
> https://niketechtalksdec2017.splashthat.com
> 
> You are welcome to join or send this on to others in the area!
> 
> Thanks kindly,
> Dani



Re: Integration of Spark and Ignite. Prototype.

2017-11-28 Thread Denis Magda
Guys,

Looking at the parallel discussion about the strategy support, I would change
my initial stance and support the idea of releasing the integration in its
current state. Is the code ready to be merged into the master? Let’s
concentrate on this first and handle the strategy support as a separate JIRA
task. Agree?

—
Denis

> On Nov 27, 2017, at 3:47 PM, Valentin Kulichenko 
>  wrote:
> 
> Nikolay,
> 
> Let's estimate the strategy implementation work, and then decide whether to
> merge the code in current state or not. If anything is unclear, please
> start a separate discussion.
> 
> -Val
> 
> On Fri, Nov 24, 2017 at 5:42 AM, Николай Ижиков 
> wrote:
> 
>> Hello, Val, Denis.
>> 
>>> Personally, I think that we should release the integration only after
>> the strategy is fully supported.
>> 
>> I see two major reasons to propose merging the DataFrame API implementation
>> without a custom strategy:
>> 
>> 1. My PR is already relatively huge. From my experience of interacting
>> with the Ignite community - the bigger a PR becomes, the more committer
>> time is required to review it.
>> So, I propose to move in smaller, but complete, steps here.
>> 
>> 2. It is not clear to me what exactly "custom strategy and
>> optimization" includes.
>> It seems that additional discussion is required.
>> I think I can put my thoughts on paper and start a discussion right
>> after the basic implementation is done.
>> 
>>> Custom strategy implementation is actually very important for this
>> integration.
>> 
>> Understand and fully agreed.
>> I'm ready to continue work in that area.
>> 
>> On 23.11.2017 02:15, Denis Magda writes:
>> 
>> Val, Nikolay,
>>> 
>>> Personally, I think that we should release the integration only after the
>>> strategy is fully supported. Without the strategy we don’t really leverage
>>> Ignite’s SQL engine, and we introduce redundant data movement between
>>> Ignite and Spark nodes.
>>> 
>>> How big is the effort to support the strategy in terms of the amount of
>>> work left? 40%, 60%, 80%?
>>> 
>>> —
>>> Denis
>>> 
>>> On Nov 22, 2017, at 2:57 PM, Valentin Kulichenko <
 valentin.kuliche...@gmail.com> wrote:
 
 Nikolay,
 
 Custom strategy implementation is actually very important for this
 integration. Basically, it will allow creating a SQL query for Ignite
 and
 executing it directly on the cluster. Your current implementation only
 adds a
 new DataSource which means that Spark will fetch data in its own memory
 first, and then do most of the work (like joins for example). Does it
 make
 sense to you? Can you please take a look at this and provide your
 thoughts
 on how much development is implied there?
 
 Current code looks good to me though and I'm OK if the strategy is
 implemented as a next step in a scope of separate ticket. I will do final
 review early next week and will merge it if everything is OK.
 
 -Val
 
 On Thu, Oct 19, 2017 at 7:29 AM, Николай Ижиков 
 wrote:
 
 Hello.
> 
> 3. IgniteCatalog vs. IgniteExternalCatalog. Why do we have two Catalog
>> 
> implementations and what is the difference?
> 
> IgniteCatalog removed.
> 
> 5. I don't like that IgniteStrategy and IgniteOptimization have to be
>> 
> set manually on SQLContext each time it's createdIs there any way to
> automate this and improve usability?
> 
> IgniteStrategy and IgniteOptimization are removed as it empty now.
> 
> Actually, I think it makes sense to create a builder similar to
>> 
> SparkSession.builder()...
> 
> IgniteBuilder added.
> Syntax looks like:
> 
> ```
> val igniteSession = IgniteSparkSession.builder()
>.appName("Spark Ignite catalog example")
>.master("local")
>.config("spark.executor.instances", "2")
>.igniteConfig(CONFIG)
>.getOrCreate()
> 
> igniteSession.catalog.listTables().show()
> ```
> 
> Please, see updated PR - https://github.com/apache/ignite/pull/2742
> 
> 2017-10-18 20:02 GMT+03:00 Николай Ижиков :
> 
> Hello, Valentin.
>> 
>> My answers is below.
>> Dmitry, do we need to move discussion to Jira?
>> 
>> 1. Why do we have org.apache.spark.sql.ignite package in our codebase?
>>> 
>> 
>> As I mentioned earlier, to implement and override the Spark Catalog one
>> has
>> to use the internal (private) Spark API.
>> So I have to use the package `org.apache.spark.sql.***` to have access to
>> private classes and variables.
>> 
>> For example, SharedState class that stores link to ExternalCatalog
>> declared as `private[sql] class SharedState` - i.e. package private.
>> 
>> Can these classes reside under org.apache.ignite.spark instead?
>>> 
>> 
>> No, as long as we want to have our 

Ignite page with video resources and recordings

2017-11-28 Thread Denis Magda
Igniters,

There are plenty of recordings of Ignite meetups, webinars and conference
talks available on the Internet. Some of them introduce basic components and
capabilities, some share best practices and pitfalls, while others share use
cases.

Generally, it's beneficial for both the Ignite community and users to gather and 
expose the most useful ones under a special video recording section. For 
instance, we might consider these talks to be added right away:
• Ignite use case: https://youtu.be/1D8hyLWMtfM
• Ignite essentials: https://www.youtube.com/watch?v=G22L2KW9gEQ
• Kubernetes: https://www.youtube.com/watch?v=igDB0wyodr8

Instead of creating a new page for this purpose I would rework the screencasts' 
page combining all the media content there: 
https://ignite.apache.org/screencasts.html

Here is a JIRA ticket: https://issues.apache.org/jira/browse/IGNITE-7062

Feedback and suggestions are welcome.

Denis

[jira] [Created] (IGNITE-7062) Ignite page with video resources and recording

2017-11-28 Thread Denis Magda (JIRA)
Denis Magda created IGNITE-7062:
---

 Summary: Ignite page with video resources and recording
 Key: IGNITE-7062
 URL: https://issues.apache.org/jira/browse/IGNITE-7062
 Project: Ignite
  Issue Type: Task
  Components: site
Reporter: Denis Magda
Assignee: Prachi Garg
 Fix For: 2.4


There are plenty of recordings of Ignite meetups, webinars and conference
talks available on the Internet. Some of them introduce basic components and
capabilities, some share best practices and pitfalls, while others share use
cases.

Generally, it's beneficial for both the Ignite community and users to gather and 
expose the most useful ones under a special video recording section. For 
instance, we might consider these talks to be added right away:
* Ignite use case: https://youtu.be/1D8hyLWMtfM
* Ignite essentials: https://www.youtube.com/watch?v=G22L2KW9gEQ
* Kubernetes: https://www.youtube.com/watch?v=igDB0wyodr8

Instead of creating a new page for this purpose I would rework the screencasts' 
page combining all the media content there: 
https://ignite.apache.org/screencasts.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Reworking Ignite site's "Features" menu

2017-11-28 Thread Denis Magda
Dmitriy,

Thanks for the feedback.

Split all the work into a set of JIRA tasks aggregated under this one:
https://issues.apache.org/jira/browse/IGNITE-7061 


Hope to complete it by the end of the year.

—
Denis

> On Nov 22, 2017, at 5:03 PM, Dmitriy Setrakyan  wrote:
> 
> Sounds like a positive step forward. I have several comments:
> 
> 1. "More Features" should be all the way at the bottom
> 2. "What is Ignite" should go under Features
> 3. I would remove the words "Distributed" from the navigation menu and
> leave "Key-Value" and "SQL". Otherwise, you would be adding the word
> "distributed" to every menu item.
> 
> D.
> 
> 
> On Wed, Nov 22, 2017 at 2:53 PM, Denis Magda  wrote:
> 
>> The list formatting was broken by ASF mail engine. Fixed below.
>> 
>>> On Nov 22, 2017, at 2:51 PM, Denis Magda  wrote:
>>> 
>>> - What’s Ignite?
>>> - Features
>>>  — Distributed Key-Value
>>>  — Distributed SQL
>>>  — ACID Transactions
>>>  — Machine Learning
>>>  — Multi-Language Support
>>>  — More Features…
>>> - Architecture
>>>  — Overview
>>>  — Clustering and Deployment
>>>  — Distributed Database
>>>  — Durable Memory
>>>  — Collocated Processing
>>> - Tooling
>>>  — Ignite Web Console
>>>  — Data Visualization and Analysis
>> 
>> 



[jira] [Created] (IGNITE-7061) Rework Features menu and page

2017-11-28 Thread Denis Magda (JIRA)
Denis Magda created IGNITE-7061:
---

 Summary: Rework Features menu and page
 Key: IGNITE-7061
 URL: https://issues.apache.org/jira/browse/IGNITE-7061
 Project: Ignite
  Issue Type: Task
  Components: site
Reporter: Denis Magda
 Fix For: 2.4


The Features menu and the page [1] are overloaded and confusing. As a technical 
guy, I feel lost trying to grasp what’s important and what’s secondary. That 
deters me from digging into the project. 

Rework the menu and page in accordance with this discussion:
http://apache-ignite-developers.2346864.n4.nabble.com/Reworking-Ignite-site-s-quot-Features-quot-menu-td24569.html

[1] https://ignite.apache.org/features.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7060) Prepare Architecture section for the site

2017-11-28 Thread Denis Magda (JIRA)
Denis Magda created IGNITE-7060:
---

 Summary: Prepare Architecture section for the site
 Key: IGNITE-7060
 URL: https://issues.apache.org/jira/browse/IGNITE-7060
 Project: Ignite
  Issue Type: Task
  Components: site
Reporter: Denis Magda
 Fix For: 2.4


In addition to the features, it's useful to introduce Ignite architecture right 
in the Features menu covering the following:
* Overview
* Clustering and Deployment
* Distributed Database
* Durable Memory



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7058) Make out a site page for ACID Transactions

2017-11-28 Thread Denis Magda (JIRA)
Denis Magda created IGNITE-7058:
---

 Summary: Make out a site page for ACID Transactions
 Key: IGNITE-7058
 URL: https://issues.apache.org/jira/browse/IGNITE-7058
 Project: Ignite
  Issue Type: Task
  Components: site
Reporter: Denis Magda
 Fix For: 2.4


ACID transactions are a major feature of Ignite and have to be exposed under 
the Features menu on the site.

Make out the page covering the following:
* 2Phase Commit Protocol
* Pessimistic and Optimistic Modes
* Deadlock detection



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7057) Create Key-Value Page for the site

2017-11-28 Thread Denis Magda (JIRA)
Denis Magda created IGNITE-7057:
---

 Summary: Create Key-Value Page for the site
 Key: IGNITE-7057
 URL: https://issues.apache.org/jira/browse/IGNITE-7057
 Project: Ignite
  Issue Type: Task
  Components: site
Reporter: Denis Magda
 Fix For: 2.4


Prepare a page to describe Ignite key-value APIs. The page will go to the 
Features menu and should cover the following:
* Scope of K/V APIs.
* JCache 
* Benefits of JCache
* How to combine K/V and SQL APIs
* Examples

Rework existing data grid and key-value store pages:
https://ignite.apache.org/features/datagrid.html
https://ignite.apache.org/use-cases/database/key-value-store.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7056) Prepare Multi-Language support page for the site

2017-11-28 Thread Denis Magda (JIRA)
Denis Magda created IGNITE-7056:
---

 Summary: Prepare Multi-Language support page for the site
 Key: IGNITE-7056
 URL: https://issues.apache.org/jira/browse/IGNITE-7056
 Project: Ignite
  Issue Type: Task
  Components: site
Reporter: Denis Magda
 Fix For: 2.4


Prepare Ignite's multi-language page that will go under the features menu. The 
page should encompass the following:
* Java, .NET and C++ native APIs.
* Supported drivers and protocols that can be used in various languages.
* A couple of examples.

Update the image by combining Java, .NET and C++ logos.

Remove the pages below, setting up a redirect to the multi-language page:
https://ignite.apache.org/features/java.html
https://ignite.apache.org/features/dotnet.html
https://ignite.apache.org/features/cpp.html
https://ignite.apache.org/features/clientprotos.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7055) Text query for a particular field not working

2017-11-28 Thread Valentin Kulichenko (JIRA)
Valentin Kulichenko created IGNITE-7055:
---

 Summary: Text query for a particular field not working
 Key: IGNITE-7055
 URL: https://issues.apache.org/jira/browse/IGNITE-7055
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.3
Reporter: Valentin Kulichenko


Lucene queries allow specifying a particular field name to search in [1];
however, this doesn't seem to work in the latest versions of Ignite.

To reproduce, modify {{CacheQueryExample#textQuery}} to use Lucene field 
expression:
{code}
QueryCursor> masters =
cache.query(new TextQuery(Person.class, "resume:Master"));
{code}
This query returns an empty result.

[1] 
http://lucene.apache.org/core/5_5_2/queryparser/org/apache/lucene/queryparser/classic/package-summary.html#Fields
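
For context, a minimal sketch of the value class involved (an assumption based on Ignite's shipped examples: the example's Person class marks the field with @QueryTextField, Ignite's annotation for Lucene-indexed fields — this fragment requires the Ignite jars to compile):

```java
import org.apache.ignite.cache.query.annotations.QueryTextField;

public class Person {
    /** Lucene-indexed field; "resume:Master" should match it but returns nothing. */
    @QueryTextField
    private String resume;
}
```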



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Apache Ignite Talk at Nike HQ: DEC 14

2017-11-28 Thread Dani Traphagen
Hi Apache Ignite Devs,

For those of you in the Pacific Northwest, I will be giving a talk at
Nike’s Headquarters on Apache Ignite:
https://niketechtalksdec2017.splashthat.com

You are welcome to join or send this on to others in the area!

Thanks kindly,
Dani


Activate/deactivate cluster through http-rest api

2017-11-28 Thread Prachi Garg
Engineers,

Any progress regarding this issue [1]?

[1]
https://issues.apache.org/jira/browse/IGNITE-5733?focusedCommentId=16225941=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16225941

Thanks,
-Prachi


[jira] [Created] (IGNITE-7052) S3 IP finder: add an ability to provide endpoint address

2017-11-28 Thread Valentin Kulichenko (JIRA)
Valentin Kulichenko created IGNITE-7052:
---

 Summary: S3 IP finder: add an ability to provide endpoint address
 Key: IGNITE-7052
 URL: https://issues.apache.org/jira/browse/IGNITE-7052
 Project: Ignite
  Issue Type: Improvement
  Components: s3
Affects Versions: 2.3
Reporter: Valentin Kulichenko
 Fix For: 2.4


By default, the S3 client detects the region automatically by sending a special 
request to {{us-west-1}}. If the environment is restricted to some other region, 
this leads to a connection timeout exception.

The issue can be solved by providing a specific region endpoint via the 
{{AmazonS3Client#setEndpoint}} method. To support this, we need to add an 
{{endpoint}} configuration property to the IP finder.

List of S3 region endpoints: 
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
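A possible shape for the resulting configuration (a sketch only; the `endpoint` 
property is what this ticket proposes and does not exist yet, so its name is an 
assumption):

```xml
<!-- Hypothetical Spring XML once the proposed `endpoint` property is added. -->
<property name="discoverySpi">
  <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
      <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder">
        <property name="bucketName" value="ignite-discovery"/>
        <!-- Proposed property: an explicit region endpoint, which would skip
             the automatic region lookup against us-west-1. -->
        <property name="endpoint" value="s3.eu-central-1.amazonaws.com"/>
      </bean>
    </property>
  </bean>
</property>
```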





Re: Ignite Enhancement Proposal #7 (Internal problems detection)

2017-11-28 Thread Denis Magda
I think the failure processing policy should be configured via 
IgniteConfiguration in a way similar to the segmentation policies.

—
Denis
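A minimal sketch of what such a configuration hook could look like, following the 
segmentation-policy style mentioned above. All names here (FailureProcessingPolicy, 
setFailureProcessingPolicy) are assumptions from this thread, not real Ignite API:

```java
/**
 * Hypothetical sketch of the failure-policy configuration discussed in this
 * thread. FailureProcessingPolicy and the setter below are assumptions, not
 * real Ignite classes; the shape mirrors IgniteConfiguration-style setters.
 */
public class FailurePolicySketch {
    /** Reactions proposed in the thread. */
    enum FailureProcessingPolicy {
        NOOP,    // report exceptions and trigger metrics only
        HALT,    // NOOP actions + Ignite process termination
        RESTART, // NOOP actions + process restart
        EXEC     // run a user-provided script
    }

    /** Default mirrors the least intrusive reaction. */
    private FailureProcessingPolicy failurePlc = FailureProcessingPolicy.NOOP;

    public FailurePolicySketch setFailureProcessingPolicy(FailureProcessingPolicy plc) {
        failurePlc = plc;
        return this;
    }

    public FailureProcessingPolicy getFailureProcessingPolicy() {
        return failurePlc;
    }

    public static void main(String[] args) {
        FailurePolicySketch cfg = new FailurePolicySketch()
            .setFailureProcessingPolicy(FailureProcessingPolicy.RESTART);

        System.out.println(cfg.getFailureProcessingPolicy()); // prints RESTART
    }
}
```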

> On Nov 27, 2017, at 11:28 PM, Vladimir Ozerov  wrote:
> 
> Dmitry,
> 
> How these policies will be configured? Do you have any API in mind?
> 
> On Thu, Nov 23, 2017 at 6:26 PM, Denis Magda  wrote:
> 
>> No objections here. Additional policies like EXEC might be added later
>> depending on user needs.
>> 
>> —
>> Denis
>> 
>>> On Nov 23, 2017, at 2:26 AM, Дмитрий Сорокин 
>> wrote:
>>> 
>>> Denis,
>>> I propose starting with the first three policies (they are already
>>> implemented and just await some code combing, commit & review).
>>> As for the fourth policy (EXEC), I think it is rather an additional
>> property
>>> (some script path) than a policy.
>>> 
>>> 2017-11-23 0:43 GMT+03:00 Denis Magda :
>>> 
 Just provide FailureProcessingPolicy with possible reactions:
 - NOOP - exceptions will be reported and metrics will be triggered, but the
 affected Ignite process won’t be touched.
 - HALT (or STOP or KILL) - all the actions of NOOP + Ignite
 process termination.
 - RESTART - NOOP actions + process restart.
 - EXEC - execute a custom script provided by the user.
 
 If needed, the policy can be set per known failure, such as OOM or
>> persistence
 errors, so that the user can act accordingly based on the context.
 
 —
 Denis
 
> On Nov 21, 2017, at 11:43 PM, Vladimir Ozerov 
 wrote:
> 
> In the first iteration I would focus only on reporting facilities, to
>> let
> administrator spot dangerous situation. And in the second phase, when
>> all
> reporting and metrics are ready, we can think on some automatic
>> actions.
> 
> On Wed, Nov 22, 2017 at 10:39 AM, Mikhail Cherkasov <
 mcherka...@gridgain.com
>> wrote:
> 
>> Hi Anton,
>> 
>> I don't think we should shut down a node in case of IgniteOOMException:
>> if one node has no space, then the others probably don't have it either, so
>> rebalancing will cause IgniteOOM on all other nodes and kill the whole
>> cluster. I think for some configurations the cluster should survive and
>> allow the user to clean the cache and/or add more nodes.
>> 
>> Thanks,
>> Mikhail.
>> 
>> 20 нояб. 2017 г. 6:53 ПП пользователь "Anton Vinogradov" <
>> avinogra...@gridgain.com> написал:
>> 
>>> Igniters,
>>> 
>>> Internal problems may, and unfortunately do, cause unexpected cluster
>>> behavior.
>>> We should determine the behavior in case any internal problem happens.
>>> 
>>> Well known internal problems can be split to:
>>> 1) OOM or any other reason cause node crash
>>> 
>>> 2) Situations requiring graceful node shutdown with custom
>> notification
>>> - IgniteOutOfMemoryException
>>> - Persistence errors
>>> - ExchangeWorker exits with error
>>> 
>>> 3) Performance issues should be covered by metrics
>>> - GC STW duration
>>> - Timed out tasks and jobs
>>> - TX deadlock
>>> - Hanged Tx (waits for some service)
>>> - Java Deadlocks
>>> 
>>> I created a special issue [1] to make sure all these metrics will be
>>> presented in WebConsole or VisorConsole (which is preferred?)
>>> 
>>> 4) Situations requiring external monitoring implementation
>>> - GC STW duration exceed maximum possible length (node should be
 stopped
>>> before STW finished)
>>> 
>>> All these problems were reported by different people at different times,
>>> so we should reanalyze each of them and possibly find better ways to
>>> solve them than those described in the issues.
>>> 
>>> P.s. IEP-7 [2] already contains 9 issues, feel free to mention
 something
>>> else :)
>>> 
>>> [1] https://issues.apache.org/jira/browse/IGNITE-6961
>>> [2]
>>> https://cwiki.apache.org/confluence/display/IGNITE/IEP-
>>> 7%3A+Ignite+internal+problems+detection
>>> 
>> 
 
 
>> 
>> 



Optimization of SQL queries from Spark Data Frame to Ignite

2017-11-28 Thread Николай Ижиков

Hello, guys.

I have implemented basic support of Spark Data Frame API [1], [2] for Ignite.
Spark provides API for a custom strategy to optimize queries from spark to 
underlying data source(Ignite).

The goals of optimization (obvious, just to be on the same page):
Minimize data transfer between Spark and Ignite.
Speed up query execution.

I see 3 ways to optimize queries:

1. *Join Reduce* If one makes a query that joins two or more Ignite 
tables, we have to pass all the join info to Ignite and transfer only the 
join result to Spark.
To implement this we have to extend the current implementation with a new 
RelationProvider that can generate all kinds of joins for two or more tables.
We should also add some tests.
The question is: how should the join result be partitioned?


2. *Order by* If one makes a query to an Ignite table with an ORDER BY 
clause, we can execute the sorting on the Ignite side.
But it seems that currently Spark doesn’t provide any way to tell it that 
partitions are already sorted.


3. *Key filter* If one makes a query with `WHERE key = XXX` or `WHERE key 
IN (X, Y, Z)`, we can reduce the number of partitions
and query only the partitions that store the given key values.
Is this kind of optimization already built into Ignite, or should I 
implement it myself?
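The key-filter idea can be illustrated with a simplified affinity function. This 
is only a sketch of the concept (Ignite's real mapping is done by 
RendezvousAffinityFunction, which is more involved): each key maps to exactly one 
partition, so a `WHERE key IN (...)` predicate bounds the set of partitions that 
need scanning.

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Set;
import java.util.TreeSet;

public class PartitionPruningSketch {
    /** Simplified key -> partition mapping (stand-in for an affinity function). */
    static int partition(Object key, int parts) {
        // Mask the sign bit so the result is always a valid partition index.
        return (key.hashCode() & Integer.MAX_VALUE) % parts;
    }

    /** For WHERE key IN (...), only these partitions can contain matches. */
    static Set<Integer> partitionsFor(Collection<?> keys, int parts) {
        Set<Integer> res = new TreeSet<>();

        for (Object k : keys)
            res.add(partition(k, parts));

        return res;
    }

    public static void main(String[] args) {
        // Three keys touch at most three of 1024 partitions,
        // so the other 1021+ partitions never need to be scanned.
        System.out.println(partitionsFor(Arrays.asList(10, 20, 30), 1024));
    }
}
```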

Maybe there are other ways to make queries run faster?

[1] https://spark.apache.org/docs/latest/sql-programming-guide.html
[2] https://github.com/apache/ignite/pull/2742


[jira] [Created] (IGNITE-7051) SQL Rename table support

2017-11-28 Thread Blackfield (JIRA)
Blackfield created IGNITE-7051:
--

 Summary: SQL Rename table support
 Key: IGNITE-7051
 URL: https://issues.apache.org/jira/browse/IGNITE-7051
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Affects Versions: 2.3
Reporter: Blackfield


Use case was discussed at length here: 
http://apache-ignite-users.70518.x6.nabble.com/Continuous-update-Data-Grid-Cache-td2075.html#a17641

Currently, we have to load the data into a second table; the client then has to 
change their query to query this new table, and finally drop the old table. 

The latest suggestion in the above thread, to "load new data set in the same 
cache and remove old entries once preloading is finished", is not feasible: 
for a large table and a table scan query (which requires the whole dataset to 
be loaded), it will take a while to load everything, thus increasing the 
downtime. 

Table rename support will reduce the downtime greatly. 

Ref: Postgresql and H2 syntax
ALTER TABLE TmpTable RENAME TO Table1; 


Then one would wrap it within a transaction (it appears that this is slated 
for 2.4/2.5?) to drop the old table and rename the temp table to the original 
name. 





[jira] [Created] (IGNITE-7050) Add support for spring3

2017-11-28 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-7050:
-

 Summary: Add support for spring3
 Key: IGNITE-7050
 URL: https://issues.apache.org/jira/browse/IGNITE-7050
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 2.3
 Environment: there are still users who use Spring 3 and hence can't use 
Ignite, which depends on Spring 4. I think we can create separate modules which 
Reporter: Mikhail Cherkasov
Assignee: Mikhail Cherkasov
 Fix For: 2.4








[jira] [Created] (IGNITE-7049) Optimistic transaction is not properly rolled back if timed out before sending prepare response.

2017-11-28 Thread Alexei Scherbakov (JIRA)
Alexei Scherbakov created IGNITE-7049:
-

 Summary: Optimistic transaction is not properly rolled back if 
timed out before sending prepare response.
 Key: IGNITE-7049
 URL: https://issues.apache.org/jira/browse/IGNITE-7049
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.3
Reporter: Alexei Scherbakov
 Fix For: 2.4


Reproducer:

{noformat}
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *  http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.ignite.internal.processors.cache.transactions;

import org.apache.ignite.Ignite;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.internal.TestRecordingCommunicationSpi;
import 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareResponse;
import org.apache.ignite.internal.util.typedef.G;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;
import org.apache.ignite.transactions.Transaction;

import static org.apache.ignite.cache.CacheAtomicityMode.TRANSACTIONAL;
import static org.apache.ignite.cache.CacheWriteSynchronizationMode.FULL_SYNC;
import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

/**
 * Tests an ability to eagerly rollback timed out optimistic transactions.
 */
public class TxRollbackOnTimeoutOptimisticTest extends GridCommonAbstractTest {
/** */
private static final String CACHE_NAME = "test";

/** IP finder. */
private static final TcpDiscoveryVmIpFinder IP_FINDER = new 
TcpDiscoveryVmIpFinder(true);

/** */
private static final int GRID_CNT = 3;

/** {@inheritDoc} */
@Override protected IgniteConfiguration getConfiguration(String 
igniteInstanceName) throws Exception {
IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);

((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(IP_FINDER);

TestRecordingCommunicationSpi commSpi = new 
TestRecordingCommunicationSpi();

cfg.setCommunicationSpi(commSpi);

boolean client = "client".equals(igniteInstanceName);

cfg.setClientMode(client);

if (!client) {
CacheConfiguration ccfg = new CacheConfiguration(CACHE_NAME);

ccfg.setAtomicityMode(TRANSACTIONAL);
ccfg.setBackups(2);
ccfg.setWriteSynchronizationMode(FULL_SYNC);

cfg.setCacheConfiguration(ccfg);
}

return cfg;
}

/**
 * @return Near cache flag.
 */
protected boolean nearCacheEnabled() {
return false;
}

/** {@inheritDoc} */
@Override protected void beforeTest() throws Exception {
super.beforeTest();

startGridsMultiThreaded(GRID_CNT);
}

/** {@inheritDoc} */
@Override protected void afterTest() throws Exception {
super.afterTest();

stopAllGrids();
}

/** */
public void testOptimisticTimeout() throws Exception {
final Ignite client = startGrid("client");

assertNotNull(client.cache(CACHE_NAME));

final ClusterNode n0 = client.affinity(CACHE_NAME).mapKeyToNode(0);

final Ignite prim = G.ignite(n0.id());

for (Ignite ignite : G.allGrids()) {
if (ignite == prim)
continue;

final TestRecordingCommunicationSpi spi =

(TestRecordingCommunicationSpi)ignite.configuration().getCommunicationSpi();

spi.blockMessages(GridDhtTxPrepareResponse.class, prim.name());
}

final int val = 0;

try {
multithreaded(new Runnable() {
@Override public void run() {
try (Transaction txOpt = 
client.transactions().txStart(OPTIMISTIC, SERIALIZABLE, 300, 1)) {

client.cache(CACHE_NAME).put(val, val);

txOpt.commit();
   

[GitHub] ignite pull request #3106: Ignite PDS compatibilty framework fixes

2017-11-28 Thread dspavlov
GitHub user dspavlov opened a pull request:

https://github.com/apache/ignite/pull/3106

Ignite PDS compatibilty framework fixes

dependencies declaration, debug stuff added

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-gg-13075

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3106.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3106


commit 732d1a5be8805f01823c874de5faa5f4f175078c
Author: dpavlov 
Date:   2017-11-24T17:10:43Z

GG-13075: Add PDS compatibility tests to ggprivate compatibility suite 
(Ignite Professional edition storage compatiblity)

commit e419f52f4d16a91ec2d7d4a0de9685b56da9c373
Author: dpavlov 
Date:   2017-11-28T14:06:38Z

GG-13075: fix maven for PDS compatibility tests of GridGain (Ignite 
Professional edition storage compatibility)

commit 04f644c6cba872b7929a7c43e2f8498c4499326c
Author: dpavlov 
Date:   2017-11-28T14:38:56Z

GG-13075: providing Ignite home for PDS compatibility tests of GridGain 
(Ignite Professional edition storage compatibility)

commit 05a386ef0817737b0ef36781ebda46559b9aa35a
Author: dpavlov 
Date:   2017-11-28T15:20:30Z

GG-13075: configurable dependencies: compatibility tests of GridGain 
(Ignite Professional edition storage compatibility)

commit b90df74fb2793de3aac398f61d95c4cf2fb1a424
Author: dpavlov 
Date:   2017-11-28T15:35:20Z

GG-13075: dumping classpath: compatibility tests of GridGain (Ignite 
Professional edition storage compatibility)

commit 370a236db988c618f780d862c685af4b55fd240d
Author: dpavlov 
Date:   2017-11-28T17:05:12Z

GG-13075: delayed dump of classpath & stacktrace using timeout: 
compatibility tests of snapshots




---


[GitHub] ignite pull request #3105: ignite-gg-13099

2017-11-28 Thread sk0x50
GitHub user sk0x50 opened a pull request:

https://github.com/apache/ignite/pull/3105

ignite-gg-13099



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sk0x50/ignite ignite-gg-13099

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3105.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3105


commit 6e36a7950db84913ddfd0d98f5a0b50923d2a29c
Author: tledkov-gridgain 
Date:   2016-11-15T09:42:29Z

IGNITE-3191: Fields are now sorted for binary objects which don't implement 
Binarylizable interface. This closes #1197.

commit e39888a08da313bec4d30f96488eccb36b4abacc
Author: Vasiliy Sisko 
Date:   2016-11-17T04:41:05Z

IGNITE-4163 Fixed load range queries.

commit 3eacc0b59c27be6b4b3aaa09f84b867ba42b449f
Author: Alexey Kuznetsov 
Date:   2016-11-21T10:28:56Z

Merged ignite-1.7.3 into ignite-1.7.4.

commit 0234f67390c88dceefd6e62de98adb922b4ba9ac
Author: Alexey Kuznetsov 
Date:   2016-11-21T10:40:50Z

IGNITE-3443 Implemented metrics for queries monitoring.

commit a24a394bb66ba0237a9e9ef940707d422b2980f0
Author: Konstantin Dudkov 
Date:   2016-11-21T10:53:58Z

IGNITE-2523 "single put" NEAR update request

commit 88f38ac6305578946f2881b12d2d557bd561f67d
Author: Konstantin Dudkov 
Date:   2016-11-21T12:11:09Z

IGNITE-3074 Optimize DHT atomic update future

commit 51ca24f2db32dff9c0034603ea3abfe5ef5cd846
Author: Konstantin Dudkov 
Date:   2016-11-21T13:48:44Z

IGNITE-3075 Implement single key-value pair DHT request/response for ATOMIC 
cache.

commit 6e4a279e34584881469a7d841432e6c38db2f06f
Author: tledkov-gridgain 
Date:   2016-11-21T14:15:17Z

IGNITE-2355: fix test - clear client connections before and after a test.

commit 551f90dbeebcad35a0e3aac07229fb67578f2ab7
Author: tledkov-gridgain 
Date:   2016-11-21T14:16:49Z

Merge remote-tracking branch 'community/ignite-1.7.4' into ignite-1.7.4

commit f2dc1d71705b86428a04a69c4f2d4ee3a82ed1bd
Author: sboikov 
Date:   2016-11-21T15:12:27Z

Merged ignite-1.6.11 into ignite-1.7.4.

commit d32fa21b673814b060d2362f06ff44838e9c2cdc
Author: sboikov 
Date:   2016-11-22T08:33:55Z

IGNITE-3075 Fixed condition for 'single' request creation

commit d15eba4becf7515b512c1032b193ce75e1589177
Author: Anton Vinogradov 
Date:   2016-11-22T08:56:20Z

IGNITE-4225 DataStreamer can hang on changing topology

commit f80bfbd19e7870554bf3abd13bde89b0f39aaee1
Author: Anton Vinogradov 
Date:   2016-11-22T09:02:57Z

IGNITE-3748 Data rebalancing of large cache can hang out.

commit bc695f8e3306c6d74d4fe53d9a98adedd43ad8f0
Author: Igor Sapego 
Date:   2016-11-22T09:05:15Z

IGNITE-4227: ODBC: Implemented SQLError. This closes #1237.

commit fc9ee6a74fe0bf413ab0643d2776a1a43e6dd5d2
Author: devozerov 
Date:   2016-11-22T09:05:32Z

Merge remote-tracking branch 'upstream/ignite-1.7.4' into ignite-1.7.4

commit 861fab9d0598ca2f06c4a6f293bf2866af31967c
Author: tledkov-gridgain 
Date:   2016-11-22T09:52:03Z

IGNITE-4239: add GridInternal annotaion for tasks instead of jobs. This 
closes #1250.

commit ba99df1554fbd1de2b2367b6ce011a024cd199bd
Author: tledkov-gridgain 
Date:   2016-11-22T10:07:20Z

IGNITE-4239: test cleanup

commit c34d27423a0c45c61341c1fcb3f56727fb91498f
Author: Igor Sapego 
Date:   2016-11-22T11:13:28Z

IGNITE-4100: Fix for DEVNOTES paths.

commit 9d82f2ca06fa6069c1976cc75814874256b24f8c
Author: devozerov 
Date:   2016-11-22T12:05:29Z

IGNITE-4259: Fixed a problem with geospatial indexes and BinaryMarshaller.

commit b038730ee56a662f73e02bbec83eb1712180fa82
Author: isapego 
Date:   2016-11-23T09:05:54Z

IGNITE-4249: ODBC: Fixed performance issue caused by ineddicient IO 
handling on CPP side. This closes #1254.

commit 7a47a0185d308cd3a58c7bfcb4d1cd548bff5b87
Author: devozerov 
Date:   2016-11-24T08:14:08Z

IGNITE-4270: Allow GridUnsafe.UNALIGNED flag override.

commit bf330251734018467fa3291fccf0414c9da7dd1b
Author: Andrey Novikov 
Date:   2016-11-24T10:08:08Z

Web console beta-6.

commit 7d88c5bfe7d6f130974fab1ed4266fff859afd3d
Author: Andrey Novikov 
Date:   2016-11-24T10:59:33Z

Web console beta-6. Minor fix.

commit 9c6824b4f33fbdead64299d9e0c34365d5d4a570
Author: nikolay_tikhonov 
Date:   2016-11-24T13:27:05Z

IGNITE-3958 Fixed "Client node should not start rest 

[jira] [Created] (IGNITE-7048) Cache get fails on node not in BaselineTopology.

2017-11-28 Thread Sergey Chugunov (JIRA)
Sergey Chugunov created IGNITE-7048:
---

 Summary: Cache get fails on node not in BaselineTopology.
 Key: IGNITE-7048
 URL: https://issues.apache.org/jira/browse/IGNITE-7048
 Project: Ignite
  Issue Type: Bug
Reporter: Sergey Chugunov
 Fix For: 2.4


As an example take a look at 
IgnitePdsBinaryMetadataOnClusterRestartTest::testMixedMetadataIsRestoredOnRestart.

When reading data for the check from a node not in the BaselineTopology, it fails 
with the following assertion:
{noformat}java.lang.AssertionError: result = true, persistenceEnabled = true, 
partitionState = EVICTED

at 
org.apache.ignite.internal.processors.cache.GridCacheContext.allowFastLocalRead(GridCacheContext.java:2044)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.mapKeyToNode(GridPartitionedSingleGetFuture.java:321)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.map(GridPartitionedSingleGetFuture.java:211)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.init(GridPartitionedSingleGetFuture.java:203)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.getAsync0(GridDhtAtomicCache.java:1392)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$1600(GridDhtAtomicCache.java:131)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$16.apply(GridDhtAtomicCache.java:470)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$16.apply(GridDhtAtomicCache.java:468)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.asyncOp(GridDhtAtomicCache.java:757)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.getAsync(GridDhtAtomicCache.java:468)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.get0(GridCacheAdapter.java:4545)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4526)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1343)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.get(IgniteCacheProxyImpl.java:828)
at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.get(GatewayProtectedCacheProxy.java:662)
at 
org.apache.ignite.internal.processors.cache.persistence.IgnitePdsBinaryMetadataOnClusterRestartTest.examineStaticMetadata(IgnitePdsBinaryMetadataOnClusterRestartTest.java:145)
at 
org.apache.ignite.internal.processors.cache.persistence.IgnitePdsBinaryMetadataOnClusterRestartTest.testMixedMetadataIsRestoredOnRestart(IgnitePdsBinaryMetadataOnClusterRestartTest.java:334)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at junit.framework.TestCase.runTest(TestCase.java:176)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2000)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:132)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1915)
at java.lang.Thread.run(Thread.java:745)
{noformat}

The problem with the test is that in the method 
*GridCacheProcessor::prepareCacheStart* the *affNode* flag is calculated 
ignoring information about the BaselineTopology distribution.





[GitHub] ignite pull request #3104: Ignite-6339-hdr-buf

2017-11-28 Thread agura
GitHub user agura opened a pull request:

https://github.com/apache/ignite/pull/3104

Ignite-6339-hdr-buf



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/agura/incubator-ignite ignite-6339-hdr-buf

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3104.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3104


commit a84127a024a68a4f91fafe17b3464871c701ef58
Author: Andrey Gura 
Date:   2017-09-13T12:36:26Z

ignite-6339 Segmented ring buffer implemented instead of WAL records chain

commit 1e1593ae64ddd9e51b07d218c734fbc9769218f0
Author: Andrey Gura 
Date:   2017-11-28T16:05:25Z

WIP




---


[GitHub] ignite pull request #3091: IGNITE-7013 .NET: Fix startup on macOS (dlopen ca...

2017-11-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3091


---


[jira] [Created] (IGNITE-7047) NPE at org.jsr166.ConcurrentLinkedHashMap.replace

2017-11-28 Thread Alexey Popov (JIRA)
Alexey Popov created IGNITE-7047:


 Summary: NPE at org.jsr166.ConcurrentLinkedHashMap.replace
 Key: IGNITE-7047
 URL: https://issues.apache.org/jira/browse/IGNITE-7047
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 2.1
Reporter: Alexey Popov
Assignee: Alexey Popov


An NPE sometimes happens under heavy load after receiving 
GridDhtTxOnePhaseCommitAckRequest; no more details are available.

ERROR 11/25/17 17:39:28 [::sys-stripe-2-#3%null%] cache.GridCacheIoManager> 
Failed processing message [senderId=0393e394-09a9-4c02-b33e-fb4d99c3539f, 
msg=GridDhtTxOnePhaseCommitAckRequest [vers=[GridCacheVersi
on [topVer=123129570, order=1511649564004, nodeOrder=2]], 
super=GridCacheMessage [msgId=95, depInfo=null, err=null, skipPrepare=false]]]
java.lang.NullPointerException
at 
org.jsr166.ConcurrentLinkedHashMap.replace(ConcurrentLinkedHashMap.java:1517)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager.removeTxReturn(IgniteTxManager.java:1043)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processDhtTxOnePhaseCommitAckRequest(IgniteTxHandler.java:1070)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.access$700(IgniteTxHandler.java:95)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$8.apply(IgniteTxHandler.java:183)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$8.apply(IgniteTxHandler.java:181)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1042)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:561)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1097)
at 
org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:483)
at java.lang.Thread.run(Thread.java:745)
ERROR 11/25/17 17:39:28 [::sys-stripe-14-#15%null%] cache.GridCacheIoManager> 
Failed processing message [senderId=52c4ced0-49f3-4075-9b2f-7d619adf6d33, 
msg=GridDhtTxOnePhaseCommitAckRequest [vers=[GridCacheVersion 
[topVer=123129570, order=1511649564004, nodeOrder=4]], super=GridCacheMessage 
[msgId=97, depInfo=null, err=null, skipPrepare=false]]]
java.lang.NullPointerException
at 
org.jsr166.ConcurrentLinkedHashMap.replace(ConcurrentLinkedHashMap.java:1517)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager.removeTxReturn(IgniteTxManager.java:1043)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processDhtTxOnePhaseCommitAckRequest(IgniteTxHandler.java:1070)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.access$700(IgniteTxHandler.java:95)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$8.apply(IgniteTxHandler.java:183)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$8.apply(IgniteTxHandler.java:181)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1042)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:561)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)

Re: Ignite node crashes after one query fetches many entries from cache

2017-11-28 Thread Alexey Kukushkin
Ignite Developers,

I know the community is developing an "Internal Problems Detection" feature.
Do you know if it addresses the problem Ray described below? Maybe we
already have a setting to prevent this from happening?

On Tue, Nov 28, 2017 at 5:13 PM, Ray  wrote:

> I try to fetch all the results of a table with billions of entries using
> sql
> like this "select * from table_name".
> As far as I understand, Ignite will prepare all the data on the node
> running this query and then return the results to the client.
> The problem is that after a while, the node crashes(probably because of
> long
> GC pause or running out of memory).
> Is node crashing the expected behavior?
> I mean it's unreasonable that Ignite node crashes after this kind of query.
>
> From my experience with other databases,  running this kind of full table
> scan will not crash the node.
>
> The optimal way of handling this kind of situation is for the Ignite node
> to stay alive and stop the query when the node finds out it will run out
> of memory soon.
> Then an error response would be returned to the client.
>
> Please advise me if this mechanism already exists and there is a hidden
> switch to turn it on.
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Alexey


[jira] [Created] (IGNITE-7046) Client queries should throw a correct exception

2017-11-28 Thread Kirill Shirokov (JIRA)
Kirill Shirokov created IGNITE-7046:
---

 Summary: Client queries should throw a correct exception
 Key: IGNITE-7046
 URL: https://issues.apache.org/jira/browse/IGNITE-7046
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.3
Reporter: Kirill Shirokov


The following test being added to 
org.apache.ignite.internal.processors.cache.distributed.replicated.IgniteCacheReplicatedQuerySelfTest:

/**
 * Method verifies that in the case of client query index is not used and a 
correct exception is thrown.
 *
 * @throws Exception If failed.
 */
public void testClientOnlyNodeIndexException() throws Exception {
try {
Ignite g = startGrid("client");

IgniteCache<Integer, Integer> c = jcache(g, Integer.class, 
Integer.class);

try {
List<List<?>> cres = c.query(new SqlFieldsQuery("select 
count(*) from Integer")
.setLocal(true)).getAll();
} 
catch (IgniteException e) {
throw e; // FIXME: put an exception-checking code here instead 
of throw
}
}
finally {
stopGrid("client");
}
}

...will result in NPE instead of an Ignite exception explaining the appropriate 
cause.





[jira] [Created] (IGNITE-7045) Client queries should throw a correct exception

2017-11-28 Thread Kirill Shirokov (JIRA)
Kirill Shirokov created IGNITE-7045:
---

 Summary: Client queries should throw a correct exception
 Key: IGNITE-7045
 URL: https://issues.apache.org/jira/browse/IGNITE-7045
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.3
Reporter: Kirill Shirokov


The following test, added to
org.apache.ignite.internal.processors.cache.distributed.replicated.IgniteCacheReplicatedQuerySelfTest:

/**
 * Verifies that a local query on a client-only node throws a correct
 * exception.
 *
 * @throws Exception If failed.
 */
public void testClientOnlyNodeIndexException() throws Exception {
    try {
        Ignite g = startGrid("client");

        IgniteCache c = jcache(g, Integer.class, Integer.class);

        try {
            List cres = c.query(new SqlFieldsQuery("select count(*) from Integer")
                .setLocal(true)).getAll();
        }
        catch (IgniteException e) {
            throw e; // FIXME: put exception-checking code here instead of throw
        }
    }
    finally {
        stopGrid("client");
    }
}

...will result in an NPE instead of an Ignite exception explaining the
appropriate cause.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: FOR UPDATE support in SELECT clause

2017-11-28 Thread Vladimir Ozerov
In this case you lock rows, but there is no subsequent operation that will
use the lock. FOR UPDATE only makes sense as part of a transaction.
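
For context on the row-locking semantics discussed here, the key-value API
already offers explicit entry locks on transactional caches. A hedged
sketch follows (cache name and key are illustrative; `IgniteCache.lock`
requires `CacheAtomicityMode.TRANSACTIONAL`):

```java
import java.util.concurrent.locks.Lock;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class ExplicitLockExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Explicit locks are only supported on TRANSACTIONAL caches.
            CacheConfiguration<Integer, String> ccfg =
                new CacheConfiguration<Integer, String>("rows")
                    .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);

            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(ccfg);

            // Closest key-value analogue of SELECT ... FOR UPDATE:
            Lock lock = cache.lock(1);

            lock.lock();
            try {
                cache.put(1, "updated under lock");
            }
            finally {
                lock.unlock();
            }
        }
    }
}
```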

On Tue, Nov 28, 2017 at 4:56 PM, Dmitriy Setrakyan 
wrote:

> On Tue, Nov 28, 2017 at 12:34 AM, Vladimir Ozerov 
> wrote:
>
> > I do not see any use case for this. Why would you want to do this?
> >
>
> The atomic cache supports locking, to my knowledge. The use case would be
> identical in SQL - to lock a row. Why not?
>


Re: FOR UPDATE support in SELECT clause

2017-11-28 Thread Dmitriy Setrakyan
On Tue, Nov 28, 2017 at 12:34 AM, Vladimir Ozerov 
wrote:

> I do not see any use case for this. Why would you want to do this?
>

The atomic cache supports locking, to my knowledge. The use case would be
identical in SQL - to lock a row. Why not?


[jira] [Created] (IGNITE-7044) SQL: Documentation for the PARALLEL statement in the CREATE INDEX command

2017-11-28 Thread Roman Kondakov (JIRA)
Roman Kondakov created IGNITE-7044:
--

 Summary: SQL: Documentation for the PARALLEL statement in the 
CREATE INDEX command
 Key: IGNITE-7044
 URL: https://issues.apache.org/jira/browse/IGNITE-7044
 Project: Ignite
  Issue Type: Task
  Components: documentation
Affects Versions: 2.1
Reporter: Roman Kondakov
Assignee: Roman Kondakov
 Fix For: 2.4


Add documentation for the PARALLEL option in the CREATE INDEX command.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [!!Mass Mail]Re: Ignite Enhancement Proposal #7 (Internal problems detection)

2017-11-28 Thread Сорокин Дмитрий Владимирович
Vladimir,

These policies (policy, in fact) can be configured in IgniteConfiguration by
calling the setFailureProcessingPolicy(FailureProcessingPolicy flrPlc) method.
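
A minimal sketch of what the proposed enum could look like. Note that
`FailureProcessingPolicy` and `setFailureProcessingPolicy` are the API
proposed in this thread (IEP-7), not a released Ignite API; the value set
follows the NOOP/HALT/RESTART reactions discussed in this thread:

```java
// Sketch of the proposed (not yet released) IEP-7 failure-handling API.
public enum FailureProcessingPolicy {
    /** Report the failure and trigger metrics, but leave the process alone. */
    NOOP,

    /** NOOP actions + terminate the Ignite process. */
    HALT,

    /** NOOP actions + restart the process. */
    RESTART
}
```

The intended wiring, per the thread, would then be
`cfg.setFailureProcessingPolicy(FailureProcessingPolicy.RESTART)` on
`IgniteConfiguration`.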

--
Дмитрий Сорокин
Tel.: 8-789-13512
Mob.: +7 (916) 560-39-63


On 28.11.17 at 10:28, "Vladimir Ozerov" wrote:

Dmitry,

How these policies will be configured? Do you have any API in mind?

On Thu, Nov 23, 2017 at 6:26 PM, Denis Magda  wrote:

> No objections here. Additional policies like EXEC might be added later
> depending on user needs.
>
> —
> Denis
>
> > On Nov 23, 2017, at 2:26 AM, Дмитрий Сорокин 
> wrote:
> >
> > Denis,
> > I propose starting with the first three policies (they are already
> > implemented and just await some code combing, commit & review).
> > As for the fourth policy (EXEC), I think it is rather an additional
> > property (some script path) than a policy.
> >
> > 2017-11-23 0:43 GMT+03:00 Denis Magda :
> >
> >> Just provide FailureProcessingPolicy with possible reactions:
> >> - NOOP - exceptions will be reported, metrics will be triggered, but the
> >> affected Ignite process won't be touched.
> >> - HALT (or STOP or KILL) - all the actions of NOOP + Ignite process
> >> termination.
> >> - RESTART - NOOP actions + process restart.
> >> - EXEC - execute a custom script provided by the user.
> >>
> >> If needed, the policy can be set per known failure, such as OOM or
> >> persistence errors, so that the user can act accordingly based on the
> >> context.
> >>
> >> —
> >> Denis
> >>
> >>> On Nov 21, 2017, at 11:43 PM, Vladimir Ozerov 
> >> wrote:
> >>>
> >>> In the first iteration I would focus only on reporting facilities, to
> let
> >>> administrator spot dangerous situation. And in the second phase, when
> all
> >>> reporting and metrics are ready, we can think on some automatic
> actions.
> >>>
> >>> On Wed, Nov 22, 2017 at 10:39 AM, Mikhail Cherkasov <
> >> mcherka...@gridgain.com
>  wrote:
> >>>
>  Hi Anton,
> 
>  I don't think that we should shut down a node in case of
>  IgniteOOMException: if one node has no space, then the others probably
>  don't have it either, so rebalancing will cause IgniteOOM on all other
>  nodes and kill the whole cluster. I think for some configurations the
>  cluster should survive and allow the user to clean the cache and/or add
>  more nodes.
> 
>  Thanks,
>  Mikhail.
> 
>  20 нояб. 2017 г. 6:53 ПП пользователь "Anton Vinogradov" <
>  avinogra...@gridgain.com> написал:
> 
> > Igniters,
> >
> > Internal problems may, and unfortunately do, cause unexpected cluster
> > behavior.
> > We should determine the behavior in case any internal problem happens.
> >
> > Well-known internal problems can be split into:
> > 1) OOM or any other reason causing a node crash
> >
> > 2) Situations requiring graceful node shutdown with custom notification
> > - IgniteOutOfMemoryException
> > - Persistence errors
> > - ExchangeWorker exits with error
> >
> > 3) Performance issues, which should be covered by metrics:
> > - GC STW duration
> > - Timed-out tasks and jobs
> > - TX deadlock
> > - Hung Tx (waiting for some service)
> > - Java deadlocks
> >
> > I created a special issue [1] to make sure all these metrics will be
> > presented in WebConsole or VisorConsole (which is preferred?)
> >
> > 4) Situations requiring an external monitoring implementation:
> > - GC STW duration exceeds the maximum possible length (the node should be
> > stopped before the STW finishes)
> >
> > All these problems were reported by different people at different times,
> > so we should reanalyze each of them and possibly find better ways to
> > solve them than described in the issues.
> >
> > P.S. IEP-7 [2] already contains 9 issues, feel free to mention something
> > else :)
> >
> > [1] https://issues.apache.org/jira/browse/IGNITE-6961
> > [2]
> > https://cwiki.apache.org/confluence/display/IGNITE/IEP-
> > 7%3A+Ignite+internal+problems+detection
> >
> 
> >>
> >>
>
>



[GitHub] ignite pull request #3014: IGNITE-6406: SQL: CREATE INDEX should fill index ...

2017-11-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3014


---


[GitHub] ignite pull request #3103: IGNITE-7043 Fix method name suggested when page e...

2017-11-28 Thread alamar
GitHub user alamar opened a pull request:

https://github.com/apache/ignite/pull/3103

IGNITE-7043 Fix method name suggested when page eviction starts



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alamar/ignite patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3103.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3103


commit 40700bbba66528e8c42da66f28bcc07f634dbb20
Author: Ilya Kasnacheev 
Date:   2017-11-28T12:49:23Z

IGNITE-7043 Fix method name suggested when page eviction starts




---


[jira] [Created] (IGNITE-7043) Incorrect method name suggested when page eviction starts

2017-11-28 Thread Ilya Kasnacheev (JIRA)
Ilya Kasnacheev created IGNITE-7043:
---

 Summary: Incorrect method name suggested when page eviction starts
 Key: IGNITE-7043
 URL: https://issues.apache.org/jira/browse/IGNITE-7043
 Project: Ignite
  Issue Type: Bug
  Components: persistence
Affects Versions: 2.3
Reporter: Ilya Kasnacheev
Assignee: Ilya Kasnacheev
Priority: Trivial


Reported via gitter:

WARNING: Page evictions started, this will affect storage performance (consider
increasing DataStorageConfiguration#setPageCacheSize).

However, there is no such setting (field/property) as setPageCacheSize (ver.
2.3.0#20171028-sha1:8add7fd5).

The actual method is DataRegionConfiguration.setMaxSize.
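
A hedged sketch of the configuration the warning should point to, assuming
the Ignite 2.3 storage API (the 512 MB value and class name are
illustrative, not recommendations):

```java
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PageEvictionConfigExample {
    /** Builds a configuration with a larger default data region. */
    public static IgniteConfiguration configure() {
        IgniteConfiguration cfg = new IgniteConfiguration();

        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // The warning should point here: DataRegionConfiguration#setMaxSize
        // bounds the off-heap size of a data region, so growing it delays
        // the onset of page eviction.
        storageCfg.getDefaultDataRegionConfiguration()
            .setMaxSize(512L * 1024 * 1024); // 512 MB, illustrative value

        cfg.setDataStorageConfiguration(storageCfg);

        return cfg;
    }
}
```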



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] ignite pull request #3102: Ignite-6339-buf-overflow

2017-11-28 Thread agura
GitHub user agura opened a pull request:

https://github.com/apache/ignite/pull/3102

Ignite-6339-buf-overflow



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/agura/incubator-ignite 
ignite-6339-buf-overflow

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3102.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3102


commit a84127a024a68a4f91fafe17b3464871c701ef58
Author: Andrey Gura 
Date:   2017-09-13T12:36:26Z

ignite-6339 Segmented ring buffer implemented instead of WAL records chain

commit a54de8e6ab38239785bb6f2df023b4c409fbf0fa
Author: Andrey Gura 
Date:   2017-11-28T12:43:08Z

debug




---


[GitHub] ignite pull request #3101: Compress with ssl

2017-11-28 Thread NSAmelchev
GitHub user NSAmelchev opened a pull request:

https://github.com/apache/ignite/pull/3101

Compress with ssl



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/NSAmelchev/ignite comress-with-ssl

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3101.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3101


commit aded27854c24792bfe7f37cbe350e6e97368160e
Author: NSAmelchev 
Date:   2017-11-10T07:42:24Z

draft

commit 2ab1d5c478977c848fbcd79d1d86509b77454867
Author: NSAmelchev 
Date:   2017-11-20T13:15:40Z

compressV2.

commit 96ccc72eefdb2d8e474c71e8ed4f3484ab6abbe7
Author: NSAmelchev 
Date:   2017-11-21T13:24:46Z

change size of buffers

commit d39bc91b365db4a8b8a4e7ff5b8bcba018bf152d
Author: NSAmelchev 
Date:   2017-11-28T11:07:38Z

draft




---


Re: TC issues. IGNITE-3084. Spark Data Frame API

2017-11-28 Thread Николай Ижиков

Hello, Valentin.


Added '-Dscala-2.10' to the build config. Let me know if it helps.


Yes, it helps. Thank you!
Now, 'Ignite RDD spark 2_10' succeeds for my branch.



Do you mean that IgniteRDD does not compile on JDK7? If yes, do we know the 
reason? I don't think switching it to JDK8 is a solution as it should work with 
both.


I mean that the latest version of Spark doesn't support JDK 7.

http://spark.apache.org/docs/latest/

"Spark runs on Java 8+..."
"For the Scala API, Spark 2.2.0 uses Scala 2.11..."
"Note that support for Java 7... were removed as of Spark 2.2.0"
"Note that support for Scala 2.10 is deprecated..."

Moreover, we can't have IgniteCatalog for Spark 2.1.
Please see my explanation in the JIRA ticket:

https://issues.apache.org/jira/browse/IGNITE-3084?focusedCommentId=16268523=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16268523

Do you see any options to support JDK 7 for the Spark module?

> I think all tests should be executed on TC. Can you check if they work and
> add them to the corresponding suites?

OK, I'll file a ticket and try to fix it shortly.

https://issues.apache.org/jira/browse/IGNITE-7042

On 28.11.2017 03:33, Valentin Kulichenko wrote:

Hi Nikolay,

Please see my responses inline.

-Val

On Fri, Nov 24, 2017 at 2:55 AM, Николай Ижиков wrote:

Hello, guys.

I have some issues on TC with my PR [1] for IGNITE-3084(Spark Data Frame 
API).
Can you, please, help me:


1. `Ignite RDD spark 2_10` -

Currently this build runs with the following profiles:
`-Plgpl,examples,scala-2.10,-clean-libs,-release` [2].
That means the `scala` profile is activated too for `Ignite RDD spark 2_10`,
because `scala` activation is done like this [3]:

```
<activation>
    <property>
        <name>!scala-2.10</name>
    </property>
</activation>
```

I think it is a misconfiguration, because scala (2.11) shouldn't be activated
for the 2.10 build.
Am I missing something?

Can someone edit the build properties?
         * Add `-scala` to the profiles list,
         * or add `-Dscala-2.10` to the JVM properties to turn off the
`scala` profile in this build.


Added '-Dscala-2.10' to the build config. Let me know if it helps.


2. `Ignite RDD` -

Currently this build runs on JVM 7 [4].
As I wrote in my previous mail [5], the current version of Spark (2.2) runs
only on JVM 8.

Can someone edit the build property to run it on JVM 8?


Do you mean that IgniteRDD does not compile on JDK7? If yes, do we know the 
reason? I don't think switching it to JDK8 is a solution as it should work with 
both.


3. For now, `Ignite RDD` and `Ignite RDD spark 2_10` run only the Java tests
[6] existing in the `spark` module.
There are several existing tests written in Scala (i.e. scala-test) that are
ignored on TC, IgniteRDDSpec [7] for example.
Are they turned off on purpose, or am I missing something?
Should we run the Scala tests for the spark and spark_2.10 modules?



I think all tests should be executed on TC. Can you check if they work and add 
them to corresponding suites?


[1] https://github.com/apache/ignite/pull/2742 

[2] 
https://ci.ignite.apache.org/viewLog.html?buildId=960220=Ignite20Tests_IgniteRddSpark210=buildLog&_focus=379#_state=371


[3] https://github.com/apache/ignite/blob/master/pom.xml#L533 

[4] 
https://ci.ignite.apache.org/viewLog.html?buildId=960221=Ignite20Tests_IgniteRdd=buildParameters


[5] 
http://apache-ignite-developers.2346864.n4.nabble.com/Integration-of-Spark-and-Ignite-Prototype-tp22649p23099.html


[6] 
https://ci.ignite.apache.org/viewLog.html?buildId=960220=Ignite20Tests_IgniteRddSpark210=testsInfo


[7] 
https://github.com/apache/ignite/blob/master/modules/spark/src/test/scala/org/apache/ignite/spark/IgniteRDDSpec.scala






[jira] [Created] (IGNITE-7042) Tests written in Scala are not executed on TC

2017-11-28 Thread Nikolay Izhikov (JIRA)
Nikolay Izhikov created IGNITE-7042:
---

 Summary: Tests written in Scala are not executed on TC
 Key: IGNITE-7042
 URL: https://issues.apache.org/jira/browse/IGNITE-7042
 Project: Ignite
  Issue Type: Bug
  Components: spark
Affects Versions: 2.3
Reporter: Nikolay Izhikov
Assignee: Nikolay Izhikov
Priority: Minor
 Fix For: 2.4


Tests written in Scala in the `spark` and `spark_2.10` modules are not
executed on TC.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7041) Web Console: Incorrect code generation in case if cache has eviction policy and near cache with eviction policy

2017-11-28 Thread Pavel Konstantinov (JIRA)
Pavel Konstantinov created IGNITE-7041:
--

 Summary: Web Console: Incorrect code generation in case if cache 
has eviction policy and near cache with eviction policy
 Key: IGNITE-7041
 URL: https://issues.apache.org/jira/browse/IGNITE-7041
 Project: Ignite
  Issue Type: Bug
  Components: wizards
Reporter: Pavel Konstantinov


If a cache has an eviction policy and also a near cache configuration with an
eviction policy, then the generated code contains an error - the same variable
is used to define two different eviction policy types:
{code}
public static CacheConfiguration cacheDepartmentCache() throws Exception {
CacheConfiguration ccfg = new CacheConfiguration();

.

LruEvictionPolicy evictionPlc = new LruEvictionPolicy();

evictionPlc.setBatchSize(5);
evictionPlc.setMaxSize(100);

ccfg.setEvictionPolicy(evictionPlc);
   
.

NearCacheConfiguration nearConfiguration = new NearCacheConfiguration();

nearConfiguration.setNearStartSize(4545);

evictionPlc = new SortedEvictionPolicy(); // <- THIS LINE IS INCORRECT
{code}
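
A hedged sketch of what the corrected generated code could look like, using
a distinct variable for the near-cache eviction policy (the class name,
cache name, and sizes are illustrative):

```java
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.cache.eviction.sorted.SortedEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.NearCacheConfiguration;

public class DepartmentCacheConfig {
    public static CacheConfiguration cacheDepartmentCache() {
        CacheConfiguration ccfg = new CacheConfiguration("DepartmentCache");

        // The cache-level eviction policy keeps its own variable...
        LruEvictionPolicy evictionPlc = new LruEvictionPolicy();

        evictionPlc.setBatchSize(5);
        evictionPlc.setMaxSize(100);

        ccfg.setEvictionPolicy(evictionPlc);

        // ...and the near-cache policy gets a separate one, so the generator
        // never reassigns a variable to an incompatible policy type.
        NearCacheConfiguration nearConfiguration = new NearCacheConfiguration();

        nearConfiguration.setNearStartSize(4545);

        SortedEvictionPolicy nearEvictionPlc = new SortedEvictionPolicy();

        nearConfiguration.setNearEvictionPolicy(nearEvictionPlc);

        ccfg.setNearConfiguration(nearConfiguration);

        return ccfg;
    }
}
```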



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: FOR UPDATE support in SELECT clause

2017-11-28 Thread Vladimir Ozerov
I do not see any use case for this. Why would you want to do this?

On Tue, Nov 28, 2017 at 11:18 AM, Dmitriy Setrakyan 
wrote:

> On Mon, Nov 27, 2017 at 11:33 PM, Vladimir Ozerov 
> wrote:
>
> > Hi Denis,
> >
> > "FOR UPDATE" is not supported at the moment. We will add support for it in
> > the transactional case [1]. In the non-transactional case it would behave
> > in the same way as a normal SELECT.
> >
>
> Why only for transactional cases? Why can't we lock for non-transactional
> cases as well?
>


Re: FOR UPDATE support in SELECT clause

2017-11-28 Thread Dmitriy Setrakyan
On Mon, Nov 27, 2017 at 11:33 PM, Vladimir Ozerov 
wrote:

> Hi Denis,
>
> "FOR UPDATE" is not supported at the moment. We will add support for it in
> the transactional case [1]. In the non-transactional case it would behave
> in the same way as a normal SELECT.
>

Why only for transactional cases? Why can't we lock for non-transactional
cases as well?