[jira] [Created] (IGNITE-7040) Web console: Invalid user table height

2017-11-27 Thread Vasiliy Sisko (JIRA)
Vasiliy Sisko created IGNITE-7040:
-

 Summary: Web console: Invalid user table height
 Key: IGNITE-7040
 URL: https://issues.apache.org/jira/browse/IGNITE-7040
 Project: Ignite
  Issue Type: Bug
Reporter: Vasiliy Sisko
Assignee: Dmitriy Shabalin


# Filter the user list (e.g. down to 2 rows)
# Change the period of the shown activity metrics.

The table height changes to the maximum available, but only the filtered rows are shown.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Transport compression (not store compression)

2017-11-27 Thread Nikita Amelchev
Hi,
I've filed a ticket [1]. I'll try to share design details in a couple of
days.

1. https://issues.apache.org/jira/browse/IGNITE-7024
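
To make the idea a bit more concrete before the design write-up, here is a rough sketch of what a pluggable compressor hook at the TcpCommunicationSpi level might look like. The interface name and wiring are purely hypothetical and not an existing Ignite API:

```java
// Hypothetical SPI-level hook; names and wiring are illustrative only.
public interface MessageCompressor {
    /** Decide whether a serialized message is worth compressing (e.g. skip small system messages). */
    boolean shouldCompress(byte[] msg, int len);

    /** Compress msg[0..len) into the reusable output buffer, returning the compressed length. */
    int compress(byte[] msg, int len, byte[] outBuf);

    /** Decompress buf[0..len) into the reusable output buffer, returning the decompressed length. */
    int decompress(byte[] buf, int len, byte[] outBuf);
}
```

The communication SPI would consult shouldCompress() right before writing to the channel and reuse a per-connection byte array buffer, which is what avoids the extra allocations mentioned in the quoted message below.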

2017-11-23 18:31 GMT+03:00 Denis Magda :

> Nikita,
>
> Sounds like a good plan. Please share the design details prior getting
> down to the implementation.
>
> —
> Denis
>
> > On Nov 23, 2017, at 4:38 AM, Nikita Amelchev 
> wrote:
> >
> > Hi Igniters!
> >
> > I’m working on a similar feature for my own project.
> > I would like to suggest using in-line compression and writing the encoded
> > bytes to the network channel through a reusable byte array buffer. This
> > allows us to avoid expensive memory allocation.
> > The described design may be implemented at the TcpCommunicationSpi level. We
> > can introduce a pluggable compressor at the TCP level where we will be able to
> > describe our compression strategy, for example, exclude some small messages
> > and many others.
> >
> > If the community doesn't mind I will file the ticket and will start
> > implementing it.
> > Any thoughts?
> >
> > 2017-11-23 12:06 GMT+03:00 Vladimir Ozerov :
> >
> >> Denis,
> >>
> >> Regarding zipped marshaller - this would be inefficient, because
> >> compression rate will be lower.
> >>
> >> On Thu, Nov 23, 2017 at 1:01 AM, Denis Magda  wrote:
> >>
> >>> Nikita,
> >>>
> >>> Your solution sounds reasonable at first glance. However, the
> >>> communication layer processes a dozen small system messages that
> >> should
> >>> be excluded from the compression. I guess that we will spend more time
> on
> >>> compressing/decompressing them, thus diminishing the positive effect of
> >> the
> >>> compression.
> >>>
> >>> Alexey K., Vladimir O.,
> >>>
> >>> What if we create Zip version of the binary marshaller the same way we
> >>> implemented GridClientZipOptimizedMarshaller?
> >>>
> >>> —
> >>> Denis
> >>>
>  On Nov 22, 2017, at 5:36 AM, Alexey Kuznetsov 
> >>> wrote:
> 
>  I think it is a very useful feature.
>  I also have experience where server nodes are connected via a fast network,
>  but client nodes via a very slow network.
> 
>  I implemented GridClientZipOptimizedMarshaller and that solved my issue.
>  But this marshaller works only with old
>  and org.apache.ignite.internal.client.GridClient and has a lot of
>  limitations.
>  But compression was about 6-20x.
> 
>  We need a solution for Ignite 2.x and client nodes.
> 
> 
>  On Wed, Nov 22, 2017 at 7:48 PM, Nikita Amelchev <
> nsamelc...@gmail.com
> >>>
>  wrote:
> 
> > Hello, Igniters!
> >
> > I think it is a useful feature. I suggest implementing it in the
> >>> communication
> > SPI, like SSL encryption is implemented. I have experience with this
> >> feature
> > and I can try to develop it.
> >
> > 2017-11-22 12:01 GMT+03:00 Alexey Kukushkin <
> >> kukushkinale...@gmail.com
>  :
> >
> >> Forwarding to DEV list: Ignite developers, could you please share
> >> your
> >> thoughts on how hard it is to extend Ignite to compress data on the
> >> network.
> >>
> >> On Wed, Nov 22, 2017 at 10:04 AM, Gordon Reid (Nine Mile) <
> >> gordon.r...@ninemilefinancial.com> wrote:
> >>
> >>> Hi Igniters,
> >>>
> >>>
> >>>
> >>> I see there is a lot of discussion in certain threads about
> > compression.
> >>> This seems to have diverged into conversations about object versus
> > field
> >>> compression, and even throwing encryption into the mix. For my use
> > case,
> >> I
> >>> am not interested in compressing the cache stored in memory, I have
> >> plenty
> >>> of memory for my application. What I don’t have is a good network.
> I
> >> have a
> >>> high latency, low bandwidth network between my C# ignite client and
> >> my
> >> Java
> >>> ignite server. I only want to compress data when it is sent over
> the
> >>> network to remote nodes. It should be stored in the local memory
> >>> uncompressed. How can we achieve this? Can the TcpCommunicationSpi
> > support
> >>> compression?
> >>>
> >>>
> >>>
> >>> Thanks,
> >>>
> >>> Gordon.
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>
> >>
> >>
> 

Re: FOR UPDATE support in SELECT clause

2017-11-27 Thread Vladimir Ozerov
Hi Denis,

"FOR UPDATE" is not supported at the moment. We will add it's support for
transactional case [1]. In non-transactional case it would behave in the
same way as normal SELECT.

[1] https://issues.apache.org/jira/browse/IGNITE-6937

On Thu, Oct 19, 2017 at 3:21 AM, Denis Magda  wrote:

> Vladimir, Alex P.,
>
> In addition to that please review INSERT, UPDATE, DELETE, MERGE commands
> syntax. Are all the parameters (DIRECT, SORTED, etc.) supported by Ignite
> and if yes, then how? I doubt that Ignite fully supports H2 syntax:
> https://apacheignite-sql.readme.io/v2.1/docs/dml
>
> —
> Denis
>
> > On Oct 18, 2017, at 2:02 PM, Denis Magda  wrote:
> >
> > Vladimir, Igniters,
> >
> > I’m editing the new version of our SELECT page [1] that initially
> consisted of the content fully copied from H2.
> >
> > For instance, there we had the following statement that’s not true for
> Ignite: "If FOR UPDATE is specified, the tables are locked for writing.
> When using MVCC, only the selected rows are locked as in an UPDATE
> statement. In this case, aggregate, GROUP BY, DISTINCT queries or joins are
> not allowed in this case."
> >
> > How do we process the FOR UPDATE parameter in Ignite right now? Please
> proof-read the whole page, confirming that the rest of the content applies to Ignite.
> >
> > [1] https://apacheignite-sql.readme.io/v2.1/docs/select
> >
> > —
> > Denis
>
>


Re: IgniteJdbcDriver's usage of JavaLogger

2017-11-27 Thread Vladimir Ozerov
Andrey,

Agree, it is better to use JUL logger directly without Ignite wrapper.
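
For reference, a minimal sketch of what using JUL directly could look like on the driver side; this is illustrative only, not the actual patch:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

class JdbcDriverLoggingSketch {
    // Illustrative only: obtain a plain JUL logger instead of instantiating Ignite's JavaLogger,
    // so no Ignite logging configuration needs to be resolved on the client side.
    private static final Logger LOG = Logger.getLogger("org.apache.ignite.IgniteJdbcDriver");

    static void warn(String msg) {
        LOG.log(Level.WARNING, msg);
    }
}
```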

On Mon, Oct 23, 2017 at 10:05 PM, Andrey Kornev 
wrote:

> Hey Vladimir,
>
> Maybe it'd be better to just use JUL logger directly?
>
> Or, better yet, just get rid of that nagging patronizing warning on line
> 434 (the only reason the logger is created in the first place) altogether
> and instead optionally throw an IAE?
>
> Or, include a dummy config/java.util.logging.properties with
> ignite-indexing distribution (under META-INF, perhaps) just to keep
> JavaLogger happy?
>
> Cheers
> Andrey
> --
> *From:* Vladimir Ozerov 
> *Sent:* Monday, October 23, 2017 9:03 AM
> *To:* dev@ignite.apache.org
> *Subject:* Re: IgniteJdbcDriver's usage of JavaLogger
>
> Hi Andrey,
>
> What kind of fix do you suggest?
>
> On Mon, Oct 23, 2017 at 6:58 PM, Andrey Kornev 
> wrote:
>
> > Hello,
> >
> > Just curious if anyone knows why IgniteJdbcDriver class instantiates a
> > JavaLogger() on line 410 rather than using the globally configured logger
> > instance?
> >
> > I have an slf4j logger configured and with ignite-indexing module in the
> > classpath, I get a scary-looking (albeit benign) message in my logs during
> > startup:
> >
> > Oct 23, 2017 9:02:23 AM java.util.logging.LogManager$RootLogger log
> > SEVERE: Failed to resolve default logging config file:
> > config/java.util.logging.properties
> >
> > Shouldn't IgniteJdbcDriver be fixed?
> >
> > Thanks
> > Andrey
> >
>


Re: Ignite Enhancement Proposal #7 (Internal problems detection)

2017-11-27 Thread Vladimir Ozerov
Dmitry,

How will these policies be configured? Do you have any API in mind?
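
For discussion, one possible (purely hypothetical) shape of such an API, matching the policies listed in the quoted message below; none of these types exist in Ignite today:

```java
// Hypothetical API sketch for discussion only.
public enum FailureProcessingPolicy {
    NOOP,    // report exceptions and trigger metrics, do not touch the process
    HALT,    // NOOP actions plus Ignite process termination
    RESTART, // NOOP actions plus process restart
    EXEC     // run a user-provided script
}

// Possible wiring (also hypothetical):
// IgniteConfiguration cfg = new IgniteConfiguration();
// cfg.setFailureProcessingPolicy(FailureProcessingPolicy.HALT);
```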

On Thu, Nov 23, 2017 at 6:26 PM, Denis Magda  wrote:

> No objections here. Additional policies like EXEC might be added later
> depending on user needs.
>
> —
> Denis
>
> > On Nov 23, 2017, at 2:26 AM, Дмитрий Сорокин 
> wrote:
> >
> > Denis,
> > I propose to start with the first three policies (they are already
> > implemented, just awaiting some code combing, commit & review).
> > As for the fourth policy (EXEC), I think it is rather an additional
> property
> > (some script path) than a policy.
> >
> > 2017-11-23 0:43 GMT+03:00 Denis Magda :
> >
> >> Just provide FailureProcessingPolicy with possible reactions:
> >> - NOOP - exceptions will be reported, metrics will be triggered but an
> >> affected Ignite process won’t be touched.
> >> - HALT (or STOP or KILL) - all the actions of NOOP + Ignite
> >> process termination.
> >> - RESTART - NOOP actions + process restart.
> >> - EXEC - execute a custom script provided by the user.
> >>
> >> If needed, the policy can be set per known failure, such as OOM or
> Persistence
> >> errors, so that the user can act accordingly based on the context.
> >>
> >> —
> >> Denis
> >>
> >>> On Nov 21, 2017, at 11:43 PM, Vladimir Ozerov 
> >> wrote:
> >>>
> >>> In the first iteration I would focus only on reporting facilities, to
> let
> >>> administrator spot dangerous situation. And in the second phase, when
> all
> >>> reporting and metrics are ready, we can think on some automatic
> actions.
> >>>
> >>> On Wed, Nov 22, 2017 at 10:39 AM, Mikhail Cherkasov <
> >> mcherka...@gridgain.com
>  wrote:
> >>>
>  Hi Anton,
> 
>  I don't think that we should shut down the node in case of
> >> IgniteOOMException:
>  if one node has no space, then the others probably don't have it either, so
>  rebalancing will cause IgniteOOM on all other nodes and will kill the
> >> whole
>  cluster. I think for some configurations the cluster should survive and
> >> allow
>  the user to clean the cache and/or add more nodes.
> 
>  Thanks,
>  Mikhail.
> 
>  20 нояб. 2017 г. 6:53 ПП пользователь "Anton Vinogradov" <
>  avinogra...@gridgain.com> написал:
> 
> > Igniters,
> >
> > Internal problems may, and unfortunately do, cause unexpected cluster
> > behavior.
> > We should determine the behavior in case any internal problem
> happens.
> >
> > Well-known internal problems can be split into:
> > 1) OOM or any other reason causing a node crash
> >
> > 2) Situations requiring graceful node shutdown with custom
> notification
> > - IgniteOutOfMemoryException
> > - Persistence errors
> > - ExchangeWorker exits with error
> >
> > 3) Performance issues that should be covered by metrics
> > - GC STW duration
> > - Timed out tasks and jobs
> > - TX deadlock
> > - Hanged Tx (waits for some service)
> > - Java deadlocks
> >
> > I created a special issue [1] to make sure all these metrics will be
> > presented in WebConsole or VisorConsole (which is preferred?)
> >
> > 4) Situations requiring an external monitoring implementation
> > - GC STW duration exceeds the maximum possible length (the node should be
> >> stopped
> > before the STW finishes)
> >
> > All these problems were reported by different people at different times,
> >> so
> > we should reanalyze each of them and, possibly, find better ways
> to
> > solve them than described in the issues.
> >
> > P.s. IEP-7 [2] already contains 9 issues, feel free to mention
> >> something
> > else :)
> >
> > [1] https://issues.apache.org/jira/browse/IGNITE-6961
> > [2]
> > https://cwiki.apache.org/confluence/display/IGNITE/IEP-
> > 7%3A+Ignite+internal+problems+detection
> >
> 
> >>
> >>
>
>


Re: Facility to detect long STW pauses and other system response degradations

2017-11-27 Thread Vladimir Ozerov
Makes sense.
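
For context, the incremental metrics discussed below ("number of STW pauses longer than X" and their total duration) are typically produced by a watchdog thread that measures sleep overshoot, since GarbageCollectorMXBean alone does not expose individual pause lengths. A minimal sketch, not Ignite code:

```java
// Minimal sketch of a pause watchdog: any sleep overshoot above the threshold
// is counted as a (likely) stop-the-world pause. Single writer thread, so the
// plain volatile counters are sufficient for monitoring reads.
public class PauseWatchdog implements Runnable {
    private final long checkIntervalMs = 50;
    private final long thresholdMs = 500;

    private volatile long longPauseCnt;     // total number of pauses longer than threshold
    private volatile long longPauseTimeMs;  // total duration of such pauses

    @Override public void run() {
        long prev = System.nanoTime();

        while (!Thread.currentThread().isInterrupted()) {
            try {
                Thread.sleep(checkIntervalMs);
            }
            catch (InterruptedException ignored) {
                return;
            }

            long now = System.nanoTime();
            long overshootMs = (now - prev) / 1_000_000 - checkIntervalMs;

            if (overshootMs >= thresholdMs) {
                longPauseCnt++;
                longPauseTimeMs += overshootMs;
            }

            prev = now;
        }
    }
}
```

A monitoring system then polls the two counters every N seconds and plots the deltas, exactly as described below.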

On Wed, Nov 22, 2017 at 4:54 PM, Anton Vinogradov 
wrote:

> Vova,
>
> Monitoring systems don't work the way you expect; they LOVE incremental
> metrics :)
>
> > 1) When did these events appear?
>
> Monitoring gets metrics each N seconds.
>
> 01.00 got 0, 0
> 02.00 got 1, 20 -> means between 01.00 and 02.00 there was 1 STW with duration
> 20
> 03.00 got 3, 100  -> means between 02.00 and 03.00 there were 2 STW pauses with total
> duration 80
>
> A good monitoring system will record these values and show you a graph when you decide
> to check this metric.
> So, you'll see "0,1,2" and "0,20,80" at 03.01
>
> > 2) How is the duration distributed? Were there 10 pauses of 10 seconds each, or 9 short
> > pauses of 1 sec and one critical pause of 90s?
>
> So, the previous probe was 0, 0 and we got it a minute ago.
> Now we have 10, 100. It means we had 10 STW pauses with total duration 100 in the last
> minute.
>
> But in case we set the interval to 10 seconds we'll get
> 1, 20
> 4, 20
> 0, 0
> 4, 10
> 1, 50
> 0, 0
>
> So, precision depends on the check interval. And it's up to the administration team
> what interval to choose.
>
> > May be a kind of sliding window plus min/max values will do better job.
>
> That's the monitoring system job (eg. zabbix)
>
>
> On Wed, Nov 22, 2017 at 4:10 PM, Vladimir Ozerov 
> wrote:
>
> > The question is how an administrator should interpret these numbers. OK, I
> > opened the JMX console and see that there were 10 long GC events, which took
> 100
> > seconds.
> > 1) When did these events appear? Over the last day, which is more or less
> > OK, or over the last 10 minutes, so my server is nearly dead?
> > 2) How is the duration distributed? Were there 10 pauses of 10 seconds each, or 9 short
> > pauses of 1 sec and one critical pause of 90s?
> >
> > May be a kind of sliding window plus min/max values will do better job.
> >
> > On Wed, Nov 22, 2017 at 1:07 PM, Anton Vinogradov <
> > avinogra...@gridgain.com>
> > wrote:
> >
> > > Vova,
> > >
> > > 1) We can get collection info from GarbageCollectorMXBean.
> > > But it provides only
> > > - collectionCount
> > > - collectionTime.
> > >
> > > These are very interesting metrics, but they tell us nothing about long
> > STW.
> > > A long STW means we have huge latency during the STW and we should find the
> > > reason and solve it.
> > >
> > > 2) So, we're working on new incremental metrics:
> > > - total number of STW pauses longer than XXX
> > > - total duration of STW pauses longer than XXX
> > >
> > > which show us the JVM/GC health situation.
> > >
> > > Does that answer your question?
> > >
> > > On Tue, Nov 21, 2017 at 9:05 PM, Vladimir Ozerov  >
> > > wrote:
> > >
> > > > Anton,
> > > >
> > > > The question is why a user may need such a precise measurement. I share
> > > Andrey’s
> > > > opinion - I cannot understand the value.
> > > >
> > > > вт, 21 нояб. 2017 г. в 19:33, Anton Vinogradov <
> > avinogra...@gridgain.com
> > > >:
> > > >
> > > > > Andrey,
> > > > >
> > > > > >  JVM provides sufficient means of detecting a struggling process
> > out
> > > of
> > > > > the box.
> > > > >
> > > > > Could you point to some articles describing how to detect STW
> > exceeding
> > > > > some duration using only JVM API?
> > > > >
> > > > > On Tue, Nov 21, 2017 at 7:17 PM, Andrey Kornev <
> > > andrewkor...@hotmail.com
> > > > >
> > > > > wrote:
> > > > >
> > > > > > My 2 cents. Don’t do it. JVM provides sufficient means of
> > detecting a
> > > > > > struggling process out of the box. SRE/Operations teams usually
> > know
> > > > how
> > > > > to
> > > > > > monitor JVMs and can handle killing of such processes themselves.
> > > > > >
> > > > > > The feature adds no value, just complexity (and more
> configuration
> > > > > > parameters (!) — as if Ignite didn’t have enough of them
> already).
> > > > > >
> > > > > > Regards,
> > > > > > Andrey
> > > > > > _
> > > > > > From: Denis Magda 
> > > > > > Sent: Monday, November 20, 2017 3:10 PM
> > > > > > Subject: Re: Facility to detect long STW pauses and other system
> > > > response
> > > > > > degradations
> > > > > > To: 
> > > > > >
> > > > > >
> > > > > > My 2 cents.
> > > > > >
> > > > > > 1. Totally for a separate native process that will handle the
> > > > monitoring
> > > > > > of an Ignite process. The watchdog process can simply start a JVM
> > > tool
> > > > > like
> > > > > jstat and parse its GC logs:
> > > > > https://dzone.com/articles/how-monitor-java-garbage
> > > > > >
> > > > > > 2. As for the STW handling, I would make a possible reaction more
> > > > > generic.
> > > > > > Let’s define a policy (enumeration) that will define how to deal
> > with
> > > > an
> > > > > > unstable node. The events might be as follows - kill a node,
> > restart
> > > a
> > > > > > node, trigger a custom script using Runtime.exec or other
> methods.
> > > > > >
> > > > > > What’d you think? 

Re: Allow multiple caches use one SQL schema

2017-11-27 Thread Vladimir Ozerov
Hi Denis,

I do not see any section in docs mentioning that it is impossible to have
several caches in the same schema. If this is so, there is no need to fix
anything in docs.
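
For completeness, putting several caches into one schema is just a matter of setting the same sqlSchema on each CacheConfiguration; a minimal sketch (assuming the usual org.apache.ignite imports and a started node):

```java
// Two caches sharing the SQL schema "TEST_SCHEMA". Before IGNITE-6572 the second
// createCache() call failed with "Schema already registered"; with the fix it is allowed.
CacheConfiguration<Integer, String> cfg1 = new CacheConfiguration<>("cache1");
cfg1.setSqlSchema("TEST_SCHEMA");
cfg1.setIndexedTypes(Integer.class, String.class);

CacheConfiguration<Integer, String> cfg2 = new CacheConfiguration<>("cache2");
cfg2.setSqlSchema("TEST_SCHEMA");
cfg2.setIndexedTypes(Integer.class, String.class);

ignite.createCache(cfg1);
ignite.createCache(cfg2);
```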

On Wed, Oct 25, 2017 at 6:35 PM, Denis Magda  wrote:

> Vladimir,
>
> Guess this has to be documented under Java Dev Guide section?
> https://apacheignite-sql.readme.io/docs/schema-and-indexes
>
> Do we need to do the same for .NET and C++?
>
> —
> Denis
>
> > Begin forwarded message:
> >
> > From: "Vladimir Ozerov (JIRA)" 
> > Subject: [jira] [Resolved] (IGNITE-6572) Allow multiple caches use one
> SQL schema
> > Date: October 25, 2017 at 4:47:00 AM PDT
> > To: dma...@gridgain.com
> >
> >
> > [ https://issues.apache.org/jira/browse/IGNITE-6572?page=
> com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
> >
> > Vladimir Ozerov resolved IGNITE-6572.
> > -
> >Resolution: Fixed
> >
> >> Allow multiple caches use one SQL schema
> >> 
> >>
> >>Key: IGNITE-6572
> >>URL: https://issues.apache.org/jira/browse/IGNITE-6572
> >>Project: Ignite
> >> Issue Type: Improvement
> >> Components: sql
> >>   Reporter: Denis Mekhanikov
> >>   Assignee: Denis Mekhanikov
> >> Labels: usability
> >>Fix For: 2.4
> >>
> >>
> >> When trying to create more than one cache with the same SQL schema
> name, the following exception is thrown:
> >> {noformat}
> >> Exception in thread "main" class org.apache.ignite.IgniteException:
> Schema already registered: TEST_SCHEMA
> >>  at org.apache.ignite.internal.util.IgniteUtils.
> convertException(IgniteUtils.java:957)
> >>  at org.apache.ignite.Ignition.start(Ignition.java:350)
> >>  at org.apache.ignite.examples.repro.schema.SchemaExampleNode.main(
> SchemaExampleNode.java:7)
> >> Caused by: class org.apache.ignite.IgniteCheckedException: Schema
> already registered: TEST_SCHEMA
> >>  at org.apache.ignite.internal.processors.query.h2.
> IgniteH2Indexing.registerCache(IgniteH2Indexing.java:2110)
> >>  at org.apache.ignite.internal.processors.query.GridQueryProcessor.
> registerCache0(GridQueryProcessor.java:1393)
> >>  at org.apache.ignite.internal.processors.query.GridQueryProcessor.
> onCacheStart0(GridQueryProcessor.java:784)
> >>  at org.apache.ignite.internal.processors.query.GridQueryProcessor.
> onCacheStart(GridQueryProcessor.java:845)
> >>  at org.apache.ignite.internal.processors.cache.
> GridCacheProcessor.startCache(GridCacheProcessor.java:1185)
> >>  at org.apache.ignite.internal.processors.cache.GridCacheProcessor.
> prepareCacheStart(GridCacheProcessor.java:1884)
> >>  at org.apache.ignite.internal.processors.cache.GridCacheProcessor.
> startCachesOnLocalJoin(GridCacheProcessor.java:1755)
> >>  at org.apache.ignite.internal.processors.cache.distributed.
> dht.preloader.GridDhtPartitionsExchangeFuture.init(
> GridDhtPartitionsExchangeFuture.java:619)
> >>  at org.apache.ignite.internal.processors.cache.
> GridCachePartitionExchangeManager$ExchangeWorker.body(
> GridCachePartitionExchangeManager.java:1901)
> >>  at org.apache.ignite.internal.util.worker.GridWorker.run(
> GridWorker.java:110)
> >>  at java.lang.Thread.run(Thread.java:748)
> >> {noformat}
> >> It should be allowed to share schema between caches. Currently it works
> for PUBLIC schema only.
> >
> >
> >
> > --
> > This message was sent by Atlassian JIRA
> > (v6.4.14#64029)
>
>


[jira] [Created] (IGNITE-7039) SQL: local query should pin affected partitions

2017-11-27 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-7039:
---

 Summary: SQL: local query should pin affected partitions
 Key: IGNITE-7039
 URL: https://issues.apache.org/jira/browse/IGNITE-7039
 Project: Ignite
  Issue Type: Bug
  Components: sql
Reporter: Vladimir Ozerov


When a distributed query is executed, we pin cache partitions for a particular 
topology version on map nodes [1]. However, it seems that we do not do that for 
local queries. This is a bug, because a partition with required data could be 
evicted from the local node at any time, leading to incorrect results.

[1] 
https://github.com/apache/ignite/blob/ignite-2.3/modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/twostep/GridMapQueryExecutor.java#L288



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: SQL warning for partitioned caches with setLocal

2017-11-27 Thread Vladimir Ozerov
Hi Luqman,

Required caches are already derived from SQL query through Ignite SQL
internals. We should just re-use this code for local queries. I filed a
ticket to fix this [1].

[1] https://issues.apache.org/jira/browse/IGNITE-7039
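
For reference, this is the pattern where partition pinning already works today; a minimal sketch of a local query executed from inside affinityRun, assuming the standard java.util and org.apache.ignite imports (the cache name, table and key below are placeholders):

```java
// The affinityRun closure reserves the partition that owns 'key' on the node it
// runs on, so the local query below cannot observe that partition being evicted.
ignite.compute().affinityRun(
    Collections.singletonList("Person"),  // caches whose partitions must stay put
    key,                                  // affinity key mapping to the pinned partition
    () -> {
        IgniteCache<Object, Object> cache = Ignition.localIgnite().cache("Person");

        SqlFieldsQuery qry = new SqlFieldsQuery("select name from Person where deptId = ?").setArgs(key);
        qry.setLocal(true);               // executed only on this node

        cache.query(qry).getAll();
    }
);
```

IGNITE-7039 is about doing the equivalent reservation automatically for plain local queries, so that such an explicit wrapper is not required for correctness.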

On Wed, Nov 22, 2017 at 12:05 AM, Luqman Ahmad 
wrote:

> Hi Vladmir,
>
> Agree - they shouldn't be coupled together, but what if we can set something
> in the affinity API which can be read in the SQL API.
>
> Please correct me if I am wrong, but in affinityCall/Run we have to
> provide all the cache names, and rebalancing will be skipped if there is already
> an operation in process. If we go with your approach, I am not sure whether we
> can calculate all the related partitioned caches to be locked dynamically.
>
> Of course you would be in a better position to comment on it, but can't we
> introduce something in the affinity API which can be set/read through each
> affinityCall/Run, and the affinity API can be used inside the SQL API - just
> like the same way of calculating the partition id for a specific key or finding
> an atomic reference.
>
> Thanks,
> Luqman
>
>
>
> On 21 Nov 2017 20:17, "Vladimir Ozerov"  wrote:
>
> Hi Luqman,
>
> I do not think SQL and compute should be coupled in the product. Instead,
> we should fix local query execution and pin partitions in the same way it
> is done for affinityCall/Run and distributed SQL.
>
> On Tue, Nov 21, 2017 at 6:25 PM, luqmanahmad 
> wrote:
>
> > Thanks dsetrakyan,
> >
> > I would like to add a few more things over here which should be
> applicable
> > to partitioned caches.
> >
> > This context variable which is set through affinityCall or affinityRun
> > should be available through either a helper class or cache configuration.
> > There could be other advantages as well for example:
> >
> > 1. We can check the context variable in all the partitioned cache
> > operations. In department and employee example if an employee is accessed
> > without an affinityRun or affinityCall computation it should also log a
> > WARNING message or throw an exception based on the cache configuration.
> >
> > 2. The user would be able to implement their own custom checks using it.
> > For
> > example, if we want to have some abstract level checks to restrict
> > developers to use specific functionality related to partitioned caches.
> >
> > Luqman
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
> >
>


Re: affinity key syntax

2017-11-27 Thread Vladimir Ozerov
Dima,

I filed a ticket [1]. This is a very promising approach which will allow us
to replace regular joins with nested tables in many cases, thus boosting
performance of JOINs and making Ignite's affinity configuration easier from a
UX perspective. But this is a big thing, as it would require a fully-fledged
SQL parser (we cannot process SELECT statements at the moment) and quite a
few changes to our B+Tree.

[1] https://issues.apache.org/jira/browse/IGNITE-7038

On Tue, Oct 31, 2017 at 6:28 AM, Dmitriy Setrakyan 
wrote:

> Thanks, Vladimir, got it. However, even though we may have different
> indexes, the data should be on the same node, if it is properly collocated.
> I do agree that we should try to remove extra index lookups, if possible.
> Do we have a ticket for it? Is it a lot of work?
>
> D.
>
> On Wed, Oct 25, 2017 at 3:42 AM, Vladimir Ozerov 
> wrote:
>
> > For example, currently every table in Ignite has at least two PK indexes
> -
> > one for cache operations, and another one for H2. If you have two tables
> > (parent - child), you have either 4 indexes (if they are in different
> > groups), or 3 indexes (same logical group). But even if certain tree is
> > shared between caches, dependent data entries are located in completely
> > different parts.
> >
> > With Spanner we will need only 1 index for both tables.
> >
> > On Wed, Oct 25, 2017 at 1:39 PM, Vladimir Ozerov 
> > wrote:
> >
> > > In Spanner once parent key is found you don't need to search for child
> > > keys from scratch - they are located just after the parent key in the
> > tree.
> > > In Ignite child and parent keys are located in different trees, hence
> > more
> > > lookups are needed.
> > >
> > > On Wed, Oct 25, 2017 at 1:36 PM, Dmitriy Setrakyan <
> > dsetrak...@apache.org>
> > > wrote:
> > >
> > >> On Wed, Oct 25, 2017 at 3:32 AM, Vladimir Ozerov <
> voze...@gridgain.com>
> > >> wrote:
> > >>
> > >> > Dima,
> > >> >
> > >> > Yes, I saw it also. But this is not about syntax only. Spanner uses
> > this
> > >> > information to store data efficiently - child entries are located near
> > >> > their parents. We can think of it as if all related tables were
> > logical
> > >> > caches inside one physical cache, sorted by the key. With this
> storage
> > >> > format it will be possible to implement very efficient co-located
> > joins.
> > >> >
> > >>
> > >> Hm... I don't think Ignite's approach for collocated joins is less
> > >> efficient. However, back to Spanner, the first value in the child
> table
> > >> key
> > >> is the parent table key. This tells me that Spanner collocates based
> on
> > an
> > >> approach very similar to Ignite's affinity key. Am I wrong?
> > >>
> > >
> > >
> >
>


[jira] [Created] (IGNITE-7038) SQL: nested tables support

2017-11-27 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-7038:
---

 Summary: SQL: nested tables support
 Key: IGNITE-7038
 URL: https://issues.apache.org/jira/browse/IGNITE-7038
 Project: Ignite
  Issue Type: Bug
  Components: sql
Reporter: Vladimir Ozerov


Many commercial databases support a kind of nested tables which is essentially 
a parent-child relation with special storage format. With this approach child 
data can be located efficiently without joins. 

Syntax example:
{code}
CREATE TYPE address_t AS OBJECT (
   city    VARCHAR2(20),
   street  VARCHAR2(30)
);

CREATE TYPE address_tab IS TABLE OF address_t;

CREATE TABLE customers (
   custid  NUMBER,
   address address_tab )
NESTED TABLE address STORE AS customer_addresses;

INSERT INTO customers VALUES (
1,
address_tab(
address_t('Redwood Shores', '101 First'),
address_t('Mill Valley', '123 Maple')
)
);
{code}

Several storage formats should be considered. First, data can be embedded into the 
parent data row directly or through a forward reference to a chain of dedicated 
blocks (similar to LOB data types). This is how conventional RDBMS systems 
work. 

Second, child rows could be stored in the same PK index as the parent row. This 
is how Spanner works. In this case parent and child rows are different rows, 
but stored in the same data structures. This allows for sequential access to 
both parent and child data in case of joins, which could be extremely valuable 
in OLAP cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Query duration metrics

2017-11-27 Thread Vladimir Ozerov
Hi Andrey,

You are right, current implementation of metrics is questionable. Please
feel free to file a ticket if you have ideas on how to improve it.
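
To make the problem concrete, a trivial sketch of the distinction described in the quoted message below: timing only the cursor creation (roughly what is recorded today) versus timing the full iteration. The table name is a placeholder and the usual imports are assumed:

```java
// Only building the cursor (parse/prepare and two-step conversion) is timed today.
long start = System.nanoTime();

FieldsQueryCursor<List<?>> cursor = cache.query(new SqlFieldsQuery("select * from Person"));

long prepareNs = System.nanoTime() - start;  // roughly what is reported as "query duration"

// The bulk of the work happens lazily while iterating, which the metric misses.
for (List<?> row : cursor) {
    // consume rows
}

long totalNs = System.nanoTime() - start;
```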

On Sun, Nov 5, 2017 at 11:17 PM, Andrey Kornev 
wrote:

> Hi,
>
> It appears as if the query duration metric is a bit misleading.
>
> GridQueryProcessor.executeQuery() doesn't actually "run" the query. It
> simply delegates query execution to a closure passed in as a parameter. The
> closure may or may not actually execute the query. In some (all?) cases,
> the closure simply creates and returns an Iterable (as
> IgniteH2Indexing.runQueryTwoStep does, for example). Actual query execution
> happens at some point later when the user instantiates an Iterator from the
> Iterable. What's in fact recorded (by GridQueryProcessor on line 2477) as
> the "query duration" is just the query parsing stage as well as some
> query AST manipulation logic that tries to convert the regular
> query to a map/reduce style query. So, it's basically a "prepare statement
> duration" that is currently reported as "query duration".
>
> Am I missing something?
>
> Thanks
> Andrey
>


[GitHub] ignite pull request #3100: IGNITE-5490 Ignite Continuous Query might not sen...

2017-11-27 Thread sunnychanwork
GitHub user sunnychanwork opened a pull request:

https://github.com/apache/ignite/pull/3100

IGNITE-5490 Ignite Continuous Query might not send update request

During continuous query setup, there can be updates in flight, which can lead to 
a situation like this:

T1 updates E1, lsnrs!=null, obtain update counter 1
T2 updates E2, lsnrs==null, obtain update counter 2
T3 updates E3, lsnrs!=null, obtain update counter 3

Notice that, as E1, E2 and E3 are different, there are no locks and they can 
proceed in parallel. As a result, the sequence of updates sent to 
CQManager will be 1,3 with 2 missing, and it will wait forever for update 2, 
which will never come.

To fix this I propose to use a ReadWrite lock to ensure that the Map Entry 
update completes before setting up a new continuous query.
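
Conceptually, the proposed locking can be illustrated with the following simplified sketch (not the actual patch; assuming java.util.concurrent.locks imports):

```java
// Entry updates take the read lock, continuous-query registration takes the write
// lock, so registration waits for all in-flight updates and cannot miss a counter.
private final ReadWriteLock lock = new ReentrantReadWriteLock();

void onEntryUpdated(Runnable update) {
    lock.readLock().lock();

    try {
        update.run(); // obtain the update counter and notify listeners
    }
    finally {
        lock.readLock().unlock();
    }
}

void registerContinuousQuery(Runnable registration) {
    lock.writeLock().lock(); // blocks until in-flight updates complete

    try {
        registration.run();
    }
    finally {
        lock.writeLock().unlock();
    }
}
```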

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sunnychanwork/ignite IGNITE-5960

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3100.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3100


commit 3cff5cf0ebc282c9300613e4d6726df5dd56ab60
Author: Sunny Chan, CLSA 
Date:   2017-11-28T05:52:14Z

IGNITE-5490 use a ReadWrite lock to ensure that the Map Entry update
will complete before setting up new continuous query




---


[jira] [Created] (IGNITE-7037) Web console: Wrong activities metrics

2017-11-27 Thread Vasiliy Sisko (JIRA)
Vasiliy Sisko created IGNITE-7037:
-

 Summary: Web console: Wrong activities metrics
 Key: IGNITE-7037
 URL: https://issues.apache.org/jira/browse/IGNITE-7037
 Project: Ignite
  Issue Type: Bug
Reporter: Vasiliy Sisko


The count of activity metrics in grouped columns is not calculated from the detail 
columns.
The configuration's activities columns always show 0.
The data shown in the table is not equal to the data in the activity dialog.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7036) Web console: Wrong export of grouped users

2017-11-27 Thread Vasiliy Sisko (JIRA)
Vasiliy Sisko created IGNITE-7036:
-

 Summary: Web console: Wrong export of grouped users
 Key: IGNITE-7036
 URL: https://issues.apache.org/jira/browse/IGNITE-7036
 Project: Ignite
  Issue Type: Bug
Reporter: Vasiliy Sisko
Assignee: Dmitriy Shabalin


On export of grouped users, when a group is collapsed in the table, only the header 
row is exported.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7035) Web console: New user redirected on nonexistent page

2017-11-27 Thread Vasiliy Sisko (JIRA)
Vasiliy Sisko created IGNITE-7035:
-

 Summary: Web console: New user redirected on nonexistent page
 Key: IGNITE-7035
 URL: https://issues.apache.org/jira/browse/IGNITE-7035
 Project: Ignite
  Issue Type: Bug
Reporter: Vasiliy Sisko


A new user is redirected on login to the /configuration page and then redirected to 
/configuration/basic.
The user should be redirected directly to /configuration/basic.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7034) Web console: Wrong notebooks on become this user

2017-11-27 Thread Vasiliy Sisko (JIRA)
Vasiliy Sisko created IGNITE-7034:
-

 Summary: Web console: Wrong notebooks on become this user
 Key: IGNITE-7034
 URL: https://issues.apache.org/jira/browse/IGNITE-7034
 Project: Ignite
  Issue Type: Bug
Reporter: Vasiliy Sisko


On "become this user" the first time, the list of the admin's notebooks is shown. Those 
notebooks are not available.
On refresh, the notebooks of the shown user are displayed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7033) Web console: Increase width of columns on admin page

2017-11-27 Thread Vasiliy Sisko (JIRA)
Vasiliy Sisko created IGNITE-7033:
-

 Summary: Web console: Increase width of columns on admin page
 Key: IGNITE-7033
 URL: https://issues.apache.org/jira/browse/IGNITE-7033
 Project: Ignite
  Issue Type: Bug
 Environment: "Last activity" and "email" columns are too narrow
Reporter: Vasiliy Sisko






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7032) Web console: Links in activity details.

2017-11-27 Thread Vasiliy Sisko (JIRA)
Vasiliy Sisko created IGNITE-7032:
-

 Summary: Web console: Links in activity details.
 Key: IGNITE-7032
 URL: https://issues.apache.org/jira/browse/IGNITE-7032
 Project: Ignite
  Issue Type: Bug
Reporter: Vasiliy Sisko


In the activity details dialog, the page title should be shown instead of its link.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Apache Ignite Wikipedia

2017-11-27 Thread Dmitriy Setrakyan
This is awesome! I know that Wikipedia submissions are not easily approved,
so great job on pushing this through.

D.

On Mon, Nov 27, 2017 at 5:47 PM, Denis Magda  wrote:

> Igniters,
>
> Finally, we got Ignite’s wiki page ready and approved:
> https://en.wikipedia.org/wiki/Apache_Ignite
>
> The page is rather technical and I kept it short from the beginning to get
> through a meticulous review process. Now, feel free to improve it by
> sharing more details or covering use cases.
>
> Thanks to Prachi for making the article clearer for reading and
> understanding.
>
> —
> Denis
>
>
>
>


Re: Rework storage format to index-organized approach

2017-11-27 Thread Dmitriy Setrakyan
Vladimir,

I definitely like the overall direction. My comments are below...


On Mon, Nov 27, 2017 at 12:46 PM, Vladimir Ozerov 
wrote:

>
> I propose to adopt this approach in two phases:
> 1) Optionally add data to leaf pages. This should improve our ScanQuery
> dramatically
>

 Definitely a good idea. Shouldn't it make the primary lookups faster as
well?

2) Optionally have a single primary index instead of a per-partition index. This
> should improve our updates and SQL scans at the cost of harder rebalance
> and recovery.
>

Can you explain why it would improve SQL updates and Scan queries?

Also, why would this approach make rebalancing slower? If we keep the index
sorted by partition, then the rebalancing process should be able to grab
any partition at any time. Do you agree?

D.


[jira] [Created] (IGNITE-7031) Web console: Error on cancellation of confirm dialog

2017-11-27 Thread Vasiliy Sisko (JIRA)
Vasiliy Sisko created IGNITE-7031:
-

 Summary: Web console: Error on cancellation of confirm dialog
 Key: IGNITE-7031
 URL: https://issues.apache.org/jira/browse/IGNITE-7031
 Project: Ignite
  Issue Type: Bug
Reporter: Vasiliy Sisko


On cancellation of a confirm dialog, an error message is shown in the browser log.
E.g. the Clone dialog or the Remove All dialog.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7030) Web console: (add) link not create new linked item

2017-11-27 Thread Vasiliy Sisko (JIRA)
Vasiliy Sisko created IGNITE-7030:
-

 Summary: Web console: (add) link not create new linked item
 Key: IGNITE-7030
 URL: https://issues.apache.org/jira/browse/IGNITE-7030
 Project: Ignite
  Issue Type: Bug
Reporter: Vasiliy Sisko


On click on the (add) link on configuration pages, a page with a list of objects is 
opened instead of creating a new object linked to the current one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Contribute ApacheIgnite

2017-11-27 Thread Denis Magda
Hello Eugeniu,

Thanks for showing an interest in Ignite and welcome aboard! 

I granted you required privileges in JIRA, so go ahead and take over the ticket.

Yury, who is one of the ML maintainers, can answer all the IGNITE-6878 related 
questions in a separate discussion. Please start it off.

Finally…

Get familiar with Ignite development process described here:
https://cwiki.apache.org/confluence/display/IGNITE/Development+Process

Instructions on how to contribute can be found here:
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute

Project setup in Intellij IDEA:
https://cwiki.apache.org/confluence/display/IGNITE/Project+Setup

Once you got familiar and were able to run a few examples, you should pick
a Jira ticket you would like to start on. Send an email to the dev list sharing 
your JIRA id, so we can add you as a contributor in Jira.

These are the easy tickets to start with:
https://issues.apache.org/jira/browse/IGNITE-4549?jql=project%20%3D%20IGNITE%20AND%20labels%20in%20(newbie)%20and%20status%20%3D%20OPEN

While those are more advanced but appealing:
https://ignite.apache.org/community/contribute.html#pick-ticket

Looking forward to your contributions!
Denis 

> On Nov 27, 2017, at 1:38 PM, Eugeniu Semenciuc  
> wrote:
> 
> Hello,
> 
>  My name is Semenciuc Eugeniu, I'm a Romanian software developer with 4-5
> years of experience.
> I recently started working with Apache Ignite, and I'm very excited about
> its capabilities.
> I would like to contribute to the development of this platform.
> I think my good mathematical background can be useful in the development of
> the issue: https://issues.apache.org/jira/browse/IGNITE-6878.
> I would be grateful if you could add me to the contributors list in order
> to be able to assign the ticket to myself.
> My JIRA details:
> Full Name: Semenciuc Eugeniu
> Email: semenciuc.euge...@gmail.com
> 
> Thanks, Semenciuc Eugeniu.



Re: Rework storage format to index-organized approach

2017-11-27 Thread Denis Magda
Vladimir,

How will the free lists be affected by the index-organized architecture? From 
what I see, they’re becoming optional.

—
Denis
 
> On Nov 27, 2017, at 12:46 PM, Vladimir Ozerov  wrote:
> 
> Igniters,
> 
> I'd like to start a discussion about new storage format for Ignite. Our
> current approach is so-called *heap-organized* storage with secondary index
> per partition. It has a number of drawbacks:
> 1) Slow scans (joins, OLAP workload) - data is written in arbitrary manner,
> so iteration over base index leads to multiple page reads and page locks
> 2) Slow writes in case of OLTP workload - every update touches multiple
> index and free-list pages (a kind of write amplification)
> 3) Duplicated PK index when SQL is enabled - our base index cannot be used
> for lookups or range scans. This makes write amplification effects even
> worse.
> 
> All mature RDBMS systems employ an alternative format as default -
> *index-organized* storage. In this case primary index leaf pages are data
> pages. Rows are sorted inside data pages. This gives:
> - Blazingly fast scans (no dereference, less page reads, less evictions,
> less locks)
> - Fast writes in OLTP workloads when PK index column (e.g. ID) grows
> monotonically (you need to *update only one page* if there are no splits)
> - Slower random writes due to index fragmentation compared to heap
> 
> I propose to adopt this approach in two phases:
> 1) Optionally add data to leaf pages [1]. This should improve our ScanQuery
> dramatically
> 2) Optionally have a single primary index instead of a per-partition index [2].
> This should improve our updates and SQL scans at the cost of harder
> rebalance and recovery.
> 
> Thoughts?
> 
> [1] https://issues.apache.org/jira/browse/IGNITE-7026
> [2] https://issues.apache.org/jira/browse/IGNITE-7027



Apache Ignite Wikipedia

2017-11-27 Thread Denis Magda
Igniters,

Finally, we got Ignite’s wiki page ready and approved:
https://en.wikipedia.org/wiki/Apache_Ignite 


The page is rather technical and I kept it short from the beginning to get 
through a meticulous review process. Now, feel free to improve it by sharing 
more details or covering use cases.

Thanks to Prachi for making the article clearer for reading and understanding.

—
Denis 





[jira] [Created] (IGNITE-7029) Add an ability to provide multiple connection addresses for thin JDBC driver

2017-11-27 Thread Valentin Kulichenko (JIRA)
Valentin Kulichenko created IGNITE-7029:
---

 Summary: Add an ability to provide multiple connection addresses 
for thin JDBC driver
 Key: IGNITE-7029
 URL: https://issues.apache.org/jira/browse/IGNITE-7029
 Project: Ignite
  Issue Type: Improvement
  Components: jdbc
Affects Versions: 2.3
Reporter: Valentin Kulichenko


Currently we only allow providing one address when connecting via the thin JDBC 
driver. This has two issues:
* If the node the driver is connected to goes down, the driver stops working.
* The driver always has to go through the same node - this is a bottleneck.

As a simple solution we can allow providing multiple addresses, like MySQL 
does, for example: 
https://dev.mysql.com/doc/connector-j/5.1/en/connector-j-usagenotes-j2ee-concepts-managing-load-balanced-connections.html
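
A hypothetical usage sketch of what such a multi-address connection string could look like (the exact syntax is to be defined by this ticket; the hosts below are placeholders):

{code}
// Hypothetical: the driver would fail over between / balance across the listed nodes.
Connection conn = DriverManager.getConnection(
    "jdbc:ignite:thin://host1:10800,host2:10800,host3:10800");
{code}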



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: TC issues. IGNITE-3084. Spark Data Frame API

2017-11-27 Thread Valentin Kulichenko
Hi Nikolay,

Please see my responses inline.

-Val

On Fri, Nov 24, 2017 at 2:55 AM, Николай Ижиков 
wrote:

> Hello, guys.
>
> I have some issues on TC with my PR [1] for IGNITE-3084(Spark Data Frame
> API).
> Can you, please, help me:
>
>
> 1. `Ignite RDD spark 2_10` -
>
> Currently this build runs with following profiles:
> `-Plgpl,examples,scala-2.10,-clean-libs,-release` [2]
> That means `scala` profile is activated too for `Ignite RDD spark 2_10`
> Because `scala` activation is done like [3]:
>
> ```
> <activation>
>     <property>
>         <name>!scala-2.10</name>
>     </property>
> </activation>
> ```
>
> I think it is a misconfiguration because scala (2.11) shouldn't be activated
> for the 2.10 build.
> Am I missing something?
>
> Can someone edit build property?
> * Add `-scala` to profiles list
> * Or add `-Dscala-2.10` to jvm properties to turn off `scala`
> profile in this build.
>
>
Added '-Dscala-2.10' to the build config. Let me know if it helps.


>
> 2. `Ignite RDD` -
>
> Currently this build runs on JVM 7 [4].
> As I wrote in my previous mail [5], the current version of Spark (2.2) runs only
> on JVM 8.
>
> Can someone edit build property to run it on jvm8?
>
>
Do you mean that IgniteRDD does not compile on JDK7? If yes, do we know the
reason? I don't think switching it to JDK8 is a solution as it should work
with both.


>
> 3. For now `Ignite RDD` and `Ignite RDD spark 2_10` only run Java tests
> [6] existing in the `spark` module.
> There are several existing tests written in Scala (i.e. scala-test) that are ignored
> on TC, IgniteRDDSpec [7] for example.
> Are they turned off on purpose or am I missing something?
> Should we run scala-test for spark and spark_2.10 modules?


I think all tests should be executed on TC. Can you check if they work and
add them to corresponding suites?


>
> [1] https://github.com/apache/ignite/pull/2742
> [2] https://ci.ignite.apache.org/viewLog.html?buildId=960220
> ldTypeId=Ignite20Tests_IgniteRddSpark210=buildLog&_
> focus=379#_state=371
> [3] https://github.com/apache/ignite/blob/master/pom.xml#L533
> [4] https://ci.ignite.apache.org/viewLog.html?buildId=960221
> ldTypeId=Ignite20Tests_IgniteRdd=buildParameters
> [5] http://apache-ignite-developers.2346864.n4.nabble.com/
> Integration-of-Spark-and-Ignite-Prototype-tp22649p23099.html
> [6] https://ci.ignite.apache.org/viewLog.html?buildId=960220
> ldTypeId=Ignite20Tests_IgniteRddSpark210=testsInfo
> [7] https://github.com/apache/ignite/blob/master/modules/spark/
> src/test/scala/org/apache/ignite/spark/IgniteRDDSpec.scala
>


Re: Request for Participation: The Right Metrics for the Right Project

2017-11-27 Thread Denis Magda
Hi Daniel,

Is there an easy way to hook Kibble with Ignite? We’re definitely interested in 
such capabilities.

—
Denis

> On Nov 27, 2017, at 10:26 AM, Daniel Gruno  wrote:
> 
> Hi there, fellow Apache projects!
> 
> The Apache Kibble project serves as a practical implementation of
> metrics deemed to be helpful for open source projects trying to
> understand where their project is, was, and is headed.
> 
> As such, we need help in determining which metrics projects either
> already use and consider useful for measuring project health or which
> metrics they would love to have and use.
> 
> We are looking for projects interested in participating in the Kibble
> demo instance ( https://demo.kibble.apache.org/ ) and sending feedback
> to the Kibble project on which parts they find useful, which elements
> they find useless and which ideas they would love to see implemented to
> better gauge the health and activity of their project.
> 
> Initially we are looking for Apache projects to help out, but we will
> later on expand this to other open source organizations and projects.
> 
> Projects that participate will be added to the demo instance and scanned
> on a regular basis so the data can be used for reports and analysis.
> The Kibble PMC will ensure that the correct sources are added, but you
> are of course welcome to help identify which parts need analyzing.
> 
> How to participate:
> 
> - Join the d...@kibble.apache.org mailing list and let us know if your
> project is interested in joining the demo (a few projects were added in
> advance so you can actually test it). You can also join us on HipChat or
> in #kibble on Freenode IRC (IRC and HipChat are bridged).
> 
> - Try out the demo, and send us feedback to the mailing list on what you
> like, dislike and would love to see added.
> 
> - In particular: Which metrics do you look for when reviewing the code,
> development and community health/trends of your project - which do you
> have, which would you love to see added?
> 
> With regards,
> Daniel on behalf of the Apache Kibble project.
> 
> PS: Please note, we have limited capacity for these tests. We cannot
> have every single ASF project in the demo, and we reserve the rights to
> pick the projects that can participate, should we get a lot of requests.



Re: Integration of Spark and Ignite. Prototype.

2017-11-27 Thread Valentin Kulichenko
Nikolay,

Let's estimate the strategy implementation work, and then decide whether to
merge the code in its current state or not. If anything is unclear, please
start a separate discussion.

-Val

On Fri, Nov 24, 2017 at 5:42 AM, Николай Ижиков 
wrote:

> Hello, Val, Denis.
>
> > Personally, I think that we should release the integration only after
> the strategy is fully supported.
>
> I see two major reasons to propose merging the DataFrame API implementation
> without the custom strategy:
>
> 1. My PR is relatively huge already. From my experience of interaction
> with the Ignite community - the bigger a PR becomes, the more committer time is
> required to review it.
> So, I propose to move in smaller, but complete, steps here.
>
> 2. It is not clear to me what exactly "custom strategy and
> optimization" includes.
> It seems that additional discussion is required.
> I think I can put my thoughts on paper and start that discussion right
> after the basic implementation is done.
>
> > Custom strategy implementation is actually very important for this
> integration.
>
> Understand and fully agreed.
> I'm ready to continue work in that area.
>
> 23.11.2017 02:15, Denis Magda пишет:
>
> Val, Nikolay,
>>
>> Personally, I think that we should release the integration only after the
>> strategy is fully supported. Without the strategy we don’t really leverage
>> from Ignite’s SQL engine and introduce redundant data movement between
>> Ignite and Spark nodes.
>>
>> How big is the effort to support the strategy in terms of the amount of
>> work left? 40%, 60%, 80%?
>>
>> —
>> Denis
>>
>> On Nov 22, 2017, at 2:57 PM, Valentin Kulichenko <
>>> valentin.kuliche...@gmail.com> wrote:
>>>
>>> Nikolay,
>>>
>>> Custom strategy implementation is actually very important for this
>>> integration. Basically, it will allow to create a SQL query for Ignite
>>> and
>>> execute it directly on the cluster. Your current implementation only
>>> adds a
>>> new DataSource which means that Spark will fetch data in its own memory
>>> first, and then do most of the work (like joins for example). Does it
>>> make
>>> sense to you? Can you please take a look at this and provide your
>>> thoughts
>>> on how much development is implied there?
>>>
>>> Current code looks good to me though and I'm OK if the strategy is
>>> implemented as a next step in a scope of separate ticket. I will do final
>>> review early next week and will merge it if everything is OK.
>>>
>>> -Val
>>>
>>> On Thu, Oct 19, 2017 at 7:29 AM, Николай Ижиков 
>>> wrote:
>>>
>>> Hello.

 3. IgniteCatalog vs. IgniteExternalCatalog. Why do we have two Catalog
>
 implementations and what is the difference?

 IgniteCatalog removed.

 5. I don't like that IgniteStrategy and IgniteOptimization have to be
>
 set manually on SQLContext each time it's created. Is there any way to
 automate this and improve usability?

 IgniteStrategy and IgniteOptimization are removed as they are empty now.

 Actually, I think it makes sense to create a builder similar to
>
 SparkSession.builder()...

 IgniteBuilder added.
 Syntax looks like:

 ```
 val igniteSession = IgniteSparkSession.builder()
 .appName("Spark Ignite catalog example")
 .master("local")
 .config("spark.executor.instances", "2")
 .igniteConfig(CONFIG)
 .getOrCreate()

 igniteSession.catalog.listTables().show()
 ```

 Please, see updated PR - https://github.com/apache/ignite/pull/2742

 2017-10-18 20:02 GMT+03:00 Николай Ижиков :

 Hello, Valentin.
>
> My answers is below.
> Dmitry, do we need to move discussion to Jira?
>
> 1. Why do we have org.apache.spark.sql.ignite package in our codebase?
>>
>
> As I mentioned earlier, to implement and override Spark Catalog one
> have
> to use internal(private) Spark API.
> So I have to use package `org.spark.sql.***` to have access to private
> class and variables.
>
> For example, SharedState class that stores link to ExternalCatalog
> declared as `private[sql] class SharedState` - i.e. package private.
>
> Can these classes reside under org.apache.ignite.spark instead?
>>
>
> No, as long as we want to have our own implementation of
> ExternalCatalog.
>
> 2. IgniteRelationProvider contains multiple constants which I guess are
>>
> some king of config options. Can you describe the purpose of each of
> them?
>
> I extend comments for this options.
> Please, see my commit [1] or PR HEAD:
>
> 3. IgniteCatalog vs. IgniteExternalCatalog. Why do we have two Catalog
>>
> implementations and what is the difference?
>
> Good catch, thank you!
> After additional research I found that only IgniteExternalCatalog is
> required.
> I will update PR with 

[GitHub] ignite pull request #3099: Ignite 6871

2017-11-27 Thread alex-plekhanov
GitHub user alex-plekhanov opened a pull request:

https://github.com/apache/ignite/pull/3099

Ignite 6871



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alex-plekhanov/ignite ignite-6871

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3099.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3099


commit 9000565a759b3c1aa8f28d97afcc9e5e514396e8
Author: Aleksey Plekhanov 
Date:   2017-11-27T21:33:35Z

IGNITE-6871 Implement new JMX metrics for partitions map monitoring

commit 8dd11dae1e09be6ccb7ea6234a8ad763620d68de
Author: Aleksey Plekhanov 
Date:   2017-11-27T21:43:22Z

IGNITE-6871 Implement new JMX metrics for partitions map monitoring




---


Contribute ApacheIgnite

2017-11-27 Thread Eugeniu Semenciuc
Hello,

  My name is Semenciuc Eugeniu, I'm a Romanian software developer with 4-5
years of experience.
I recently started working with Apache Ignite, and I'm very excited about
its capabilities.
I would like to contribute to the development of this platform.
I think my good mathematical background can be useful in the development of
the issue: https://issues.apache.org/jira/browse/IGNITE-6878.
I would be grateful if you could add me to the contributors list in order
to be able to assign the ticket to myself.
My JIRA details:
Full Name: Semenciuc Eugeniu
Email: semenciuc.euge...@gmail.com

Thanks, Semenciuc Eugeniu.


[jira] [Created] (IGNITE-7028) Memcached does not set type flags for response

2017-11-27 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-7028:
-

 Summary: Memcached does not set type flags for response
 Key: IGNITE-7028
 URL: https://issues.apache.org/jira/browse/IGNITE-7028
 Project: Ignite
  Issue Type: Bug
  Components: rest
Affects Versions: 2.3
Reporter: Mikhail Cherkasov
 Fix For: 2.4


Memcached does not set type flags for response:
http://apache-ignite-users.70518.x6.nabble.com/Memcached-doesn-t-store-flags-td18403.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Rework storage format to index-organized approach

2017-11-27 Thread Vladimir Ozerov
Igniters,

I'd like to start a discussion about new storage format for Ignite. Our
current approach is so-called *heap-organized* storage with secondary index
per partition. It has a number of drawbacks:
1) Slow scans (joins, OLAP workload) - data is written in arbitrary manner,
so iteration over base index leads to multiple page reads and page locks
2) Slow writes in case of OLTP workload - every update touches multiple
index and free-list pages (a kind of write amplification)
3) Duplicated PK index when SQL is enabled - our base index cannot be used
for lookups or range scans. This makes write amplification effects even
worse.

All mature RDBMS systems employ an alternative format as default -
*index-organized* storage. In this case primary index leaf pages are data
pages. Rows are sorted inside data pages. This gives:
- Blazingly fast scans (no dereference, less page reads, less evictions,
less locks)
- Fast writes in OLTP workloads when PK index column (e.g. ID) grows
monotonically (you need to *update only one page* if there are no splits)
- Slower random writes due to index fragmentation compared to heap

I propose to adopt this approach in two phases:
1) Optionally add data to leaf pages [1]. This should improve our ScanQuery
dramatically
2) Optionally have a single primary index instead of a per-partition index [2].
This should improve our updates and SQL scans at the cost of harder
rebalance and recovery.

Thoughts?

[1] https://issues.apache.org/jira/browse/IGNITE-7026
[2] https://issues.apache.org/jira/browse/IGNITE-7027


[jira] [Created] (IGNITE-7027) Single primary index instead of multiple per-partition indexes

2017-11-27 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-7027:
---

 Summary: Single primary index instead of multiple per-partition 
indexes
 Key: IGNITE-7027
 URL: https://issues.apache.org/jira/browse/IGNITE-7027
 Project: Ignite
  Issue Type: Task
  Components: cache
Reporter: Vladimir Ozerov


Currently we have a per-partition primary index. This gives us easy and effective 
rebalance/recovery capabilities and efficient lookups in key-value mode. 

However, this doesn't work well for the SQL case. We cannot use this index for 
range scans. Neither can we use it for PK lookups (it is possible to implement, 
but it would be less than optimal due to the necessity to build the whole key object).

The following change is suggested as an optional storage mode:
1) A single index data structure for all partitions
2) Only a single key type is allowed (i.e. no mess in the cache and no cache 
groups)
3) An additional SQL PK index will not be needed in this case



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7026) Index-organized data storage format

2017-11-27 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-7026:
---

 Summary: Index-organized data storage format
 Key: IGNITE-7026
 URL: https://issues.apache.org/jira/browse/IGNITE-7026
 Project: Ignite
  Issue Type: Task
  Components: cache, sql
Reporter: Vladimir Ozerov


In SQL, an *index-organized* table is a table format where rows are stored 
as leaves of the primary key index (sometimes called a "clustered index"). In this 
format data within a single page is sorted in accordance with the PK index. All 
leaves are always sorted as well. 

Another table format is *heap*. Data is put into an arbitrary page with enough 
space. Free space is tracked using either free-lists or allocation maps. 
The primary key index is organized in the same way as a secondary index - leaf pages 
contain a kind of row pointer. This is how Ignite currently works. 

This ticket aims to implement an index-organized storage format, which will 
give us the following advantages:
1) Fast scans over the PK index due to a decreased number of page reads and page 
locks, which is especially important for JOINs and OLAP cases;
2) Faster inserts in OLTP workloads due to a smaller number of page updates.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] ignite pull request #3090: IGNITE-6929

2017-11-27 Thread alexpaschenko
Github user alexpaschenko closed the pull request at:

https://github.com/apache/ignite/pull/3090


---


[GitHub] ignite pull request #3098: Ignite: Batch cache destroy requests added

2017-11-27 Thread dspavlov
GitHub user dspavlov opened a pull request:

https://github.com/apache/ignite/pull/3098

Ignite: Batch cache destroy requests added



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-gg-12972

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3098.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3098


commit 268986101933ea9ab355ea990fcd85b505d06180
Author: dpavlov 
Date:   2017-11-07T18:27:04Z

GG-12790: implemented force flag transferring in operation add parameter

commit 7e39b3800b9321bacd505b7f77a5931ce5e52b1d
Author: dpavlov 
Date:   2017-11-08T12:32:59Z

GG-12790: cancel snapshot in case of not forced

commit 4c7446dd2566f29579229f8150161bfe21eab911
Author: dpavlov 
Date:   2017-11-27T16:42:49Z

GG-12972: changes after review part 2: batch cache destroy: force option 
added as flag to allow Snapshot Utility to restore all caches from cache group 
(ignoring -caches parameter)




---


[GitHub] ignite pull request #3097: IGNITE-6992 FIX Ignite MR problem with accessing ...

2017-11-27 Thread ezhuravl
GitHub user ezhuravl opened a pull request:

https://github.com/apache/ignite/pull/3097

IGNITE-6992 FIX Ignite MR problem with accessing hdfs with enabled Ke…

…rberos

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6992

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3097.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3097


commit 683233bfbb72c35f0ecdcc5703d7d8654ffcc72f
Author: ezhuravl 
Date:   2017-11-27T16:34:32Z

IGNITE-6992 FIX Ignite MR problem with accessing hdfs with enabled Kerberos




---


[jira] [Created] (IGNITE-7025) Implement different strategies to fill missed data in LabeledDataset during loading from file

2017-11-27 Thread Aleksey Zinoviev (JIRA)
Aleksey Zinoviev created IGNITE-7025:


 Summary: Implement different strategies to fill missed data in 
LabeledDataset during loading from file
 Key: IGNITE-7025
 URL: https://issues.apache.org/jira/browse/IGNITE-7025
 Project: Ignite
  Issue Type: Task
  Components: ml
Reporter: Aleksey Zinoviev
Assignee: Aleksey Zinoviev
Priority: Minor


For example, there could be four strategies:

public enum FillMissingValueWith {
    /**
     * Fill missing values with zero, an empty string, or the default value for
     * categorical features
     */
    ZERO,
    /**
     * Fill missing values with the column mean
     * Requires additional time to calculate
     */
    MEAN,
    /**
     * Fill missing values with the column mode
     * Requires additional time to calculate
     */
    MODE,
    /**
     * Delete observations with missing values
     * Transforms the dataset and changes indexing
     */
    DELETE
}
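
A minimal sketch of how such a strategy could be applied to a single feature 
column (the helper name and the NaN encoding of missing values are assumptions, 
not the actual LabeledDataset API):

// Hypothetical helper, not part of the Ignite ML API.
// Missing values are assumed to be encoded as Double.NaN.
static double[] fillColumn(double[] col, FillMissingValueWith strategy) {
    double replacement = 0.0;

    if (strategy == FillMissingValueWith.MEAN) {
        double sum = 0.0;
        int cnt = 0;

        for (double v : col) {
            if (!Double.isNaN(v)) {
                sum += v;
                cnt++;
            }
        }

        replacement = cnt > 0 ? sum / cnt : 0.0;
    }

    // MODE would need a frequency pass and DELETE filters whole rows; both are omitted here.
    double[] res = col.clone();

    for (int i = 0; i < res.length; i++) {
        if (Double.isNaN(res[i]))
            res[i] = replacement;
    }

    return res;
}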




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7024) Introduce some kind of network compression

2017-11-27 Thread Amelchev Nikita (JIRA)
Amelchev Nikita created IGNITE-7024:
---

 Summary: Introduce some kind of network compression
 Key: IGNITE-7024
 URL: https://issues.apache.org/jira/browse/IGNITE-7024
 Project: Ignite
  Issue Type: New Feature
Reporter: Amelchev Nikita
Assignee: Amelchev Nikita


Introduce some kind of pluggable compression at the network level.

The main idea is to use in-line compression and write the encoded bytes to the
network channel via a byte array buffer. This allows us to avoid expensive
memory allocations.

A solution may be implemented at the TcpCommunicationSpi level.
For example, introduce a Compressor interface that lets us describe the 
compression strategy: exclude some small messages, choose the compression 
algorithm, and so on.
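
A rough sketch of what such an interface could look like (the name and method 
signatures are assumptions for discussion, not an existing Ignite API):

// Hypothetical sketch, not an existing Ignite interface.
public interface Compressor {
    /** Decides whether a message of the given size is worth compressing
     *  (e.g. skip small system messages). */
    boolean shouldCompress(int msgSize);

    /** Compresses {@code len} bytes of {@code src} into {@code dst};
     *  returns the number of bytes written. */
    int compress(byte[] src, int len, byte[] dst);

    /** Decompresses {@code len} bytes of {@code src} into {@code dst};
     *  returns the number of bytes written. */
    int decompress(byte[] src, int len, byte[] dst);
}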



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Thin Client Protocol documentation

2017-11-27 Thread Sergey Kozlov
Pavel

Thanks for explanations!

On Mon, Nov 27, 2017 at 2:46 PM, Pavel Tupitsyn 
wrote:

> Sergey,
>
> 1. Code table size does not affect anything, as I understand, so there is
> no reason to introduce an extra byte.
> 2. We have object arrays (code 23), I forgot to mention them, fixed.
> 3. Also forgot, see code 25 in the updated document.
>
> Also note that operation codes have been updated (grouped by purpose) as
> part of https://issues.apache.org/jira/browse/IGNITE-6989.
>
> Thanks,
> Pavel
>
> On Sun, Nov 26, 2017 at 9:54 PM, Sergey Kozlov 
> wrote:
>
> > Pavel
> >
> > Thanks for the document and your efforts on the new protocol. It was really
> > helpful for playing around with the Python thin client design.
> >
> > Could you explain some things that are still not clear in the binary object
> > format:
> >
> > 1. What is the reason to introduce separate type codes for arrays? Why
> > can't we just use the following?
> > *<1 byte universal array code>*
> > *<1 byte primitive code>*
> > *<4 bytes length>*
> > **
> >
> > We get 1 byte of overhead but save 10 codes in the code table. For arrays
> > the overhead is really insignificant: a 10-long array takes 1+4+4*10=45 bytes
> > now vs 1+1+4+4*10=46 bytes for the proposed approach.
> > Moreover, with that approach a new primitive code will be available for use
> > for arrays immediately.
> >
> > 2. Why do the arrays force a single selected type? For Python there are no
> > limitations on using different types within one array (list). It would be good
> > to introduce a new type that allows that. It could look like the
> > following
> > *<1 byte universal array code>*
> > *<1 byte no common type code*> <-- this says that every item must provide
> > its data type code, just like regular primitive data does
> > *<4 bytes length>*
> > *<1 byte item 0 type code>* <-- item provides its code
> > **  <-- item provides its data
> > *<1 byte item 1 type code>*
> > **
> > etc
> >
> > Also, that allows nested arrays without changes in the type code table!
> > For instance, if we want to store 9 longs and 1 boolean it will take
> > 1+1+4+(1+9)*4+(1+1)=48 bytes now (vs 45 bytes to store 10 longs as usual).
> >
> > 3. There's only one way to store a dictionary (key-value) structure as a
> > value in the cache - via Complex Object. But it looks overcomplicated. I
> > propose to introduce a code for that
> > *<1 byte key-value dictionary code>*
> >
> > *<4 bytes length>*
> > *<1 byte key 1 **name **type code>*
> > **
> > *<1 byte value 1 type code>*
> > **
> > *<1 byte key 2 **name **type code>*
> > **
> > *<1 byte value 2 type code>*
> > **
> > etc
> >
> > Also, that allows nested dictionaries without changes in the type code
> > table!
> > Of course, with the approach above we get significant overhead for key
> > storage. But I think it is acceptable for some cases and definitely OK for
> > Python
> >
> >
> >
> >
> > On Wed, Nov 22, 2017 at 9:14 PM, Prachi Garg  wrote:
> >
> > > Thanks Pavel! The document has good information. I'll create one on
> > > readme.io; will also add some examples there.
> > >
> > > On Wed, Nov 22, 2017 at 5:03 AM, Pavel Tupitsyn 
> > > wrote:
> > >
> > > > Igniters,
> > > >
> > > > I've put together a detailed description of our Thin Client protocol
> > > > in form of IEP on wiki:
> > > > https://cwiki.apache.org/confluence/display/IGNITE/IEP-
> > > > 9+Thin+Client+Protocol
> > > >
> > > >
> > > > To clarify:
> > > > - Protocol implementation is in master (see ClientMessageParser
> class)
> > > > - Protocol has not been released yet, so we are free to change
> anything
> > > > - Protocol is only used by .NET Thin Client for now, but is supposed
> to
> > > be
> > > > used from other languages by third party contributors
> > > > - More operations will be added in future, this is a first set of
> them,
> > > > cache-related
> > > >
> > > >
> > > > Please review the document and let me know your thoughts.
> > > > Is there anything missing or wrong?
> > > >
> > > > We should make sure that the foundation is solid and extensible.
> > > >
> > > >
> > > > Thanks,
> > > > Pavel
> > > >
> > >
> >
> >
> >
> > --
> > Sergey Kozlov
> > GridGain Systems
> > www.gridgain.com
> >
>



-- 
Sergey Kozlov
GridGain Systems
www.gridgain.com


[jira] [Created] (IGNITE-7023) In visorcmd I cannot list certain cache contents located on certain node.

2017-11-27 Thread Galinger Vladimir (JIRA)
Galinger Vladimir created IGNITE-7023:
-

 Summary: In visorcmd I cannot list certain cache contents located 
on certain node.
 Key: IGNITE-7023
 URL: https://issues.apache.org/jira/browse/IGNITE-7023
 Project: Ignite
  Issue Type: Bug
  Components: visor
Affects Versions: 2.3
Reporter: Galinger Vladimir


When testing the affinity collocation example, I cannot see which keys of a certain 
*partitioned* cache are located on a certain node.

When I run something like this:

{noformat}
cache -scan -c=CacheQueryExampleOrganizations -id8=5F52C8E4
{noformat}

it still outputs all entries from the cache, regardless of the partitioning.

The issue is simple to reproduce: I slightly modified 
org.apache.ignite.examples.datagrid.CacheQueryExample, adding 3 more 
organizations and starting 4 additional nodes.





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7022) Use QuadTree for kNN performance

2017-11-27 Thread Aleksey Zinoviev (JIRA)
Aleksey Zinoviev created IGNITE-7022:


 Summary: Use QuadTree for kNN performance
 Key: IGNITE-7022
 URL: https://issues.apache.org/jira/browse/IGNITE-7022
 Project: Ignite
  Issue Type: Improvement
  Components: ml
Reporter: Aleksey Zinoviev
Assignee: Aleksey Zinoviev
Priority: Minor


Currently the kNN implementation is not very fast. Its performance could be 
improved with a quadtree: [https://en.wikipedia.org/wiki/Quadtree]

Benchmarks should be provided as well.
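
A minimal point-quadtree sketch (illustrative only, not the Ignite ML API). A kNN 
query would descend it best-first, pruning quadrants whose bounding box is farther 
away than the current k-th best distance; that query part is omitted here:

// Illustrative sketch of a point quadtree; not Ignite ML code.
import java.util.ArrayList;
import java.util.List;

class QuadTree {
    private static final int CAPACITY = 4;      // max points kept in a node before it splits

    private final double cx, cy, halfW, halfH;  // node boundary: center plus half extents
    private final List<double[]> points = new ArrayList<>();
    private QuadTree nw, ne, sw, se;            // child quadrants, null while this is a leaf

    QuadTree(double cx, double cy, double halfW, double halfH) {
        this.cx = cx; this.cy = cy; this.halfW = halfW; this.halfH = halfH;
    }

    boolean contains(double x, double y) {
        return Math.abs(x - cx) <= halfW && Math.abs(y - cy) <= halfH;
    }

    boolean insert(double x, double y) {
        if (!contains(x, y))
            return false;

        if (nw == null && points.size() < CAPACITY) {
            points.add(new double[] {x, y});
            return true;
        }

        if (nw == null)
            subdivide();

        return nw.insert(x, y) || ne.insert(x, y) || sw.insert(x, y) || se.insert(x, y);
    }

    private void subdivide() {
        double hw = halfW / 2, hh = halfH / 2;
        nw = new QuadTree(cx - hw, cy + hh, hw, hh);
        ne = new QuadTree(cx + hw, cy + hh, hw, hh);
        sw = new QuadTree(cx - hw, cy - hh, hw, hh);
        se = new QuadTree(cx + hw, cy - hh, hw, hh);
    }
}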



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Thin Client Protocol documentation

2017-11-27 Thread Pavel Tupitsyn
Sergey,

1. Code table size does not affect anything, as I understand, so there is
no reason to introduce an extra byte.
2. We have object arrays (code 23), I forgot to mention them, fixed.
3. Also forgot, see code 25 in the updated document.

Also note that operation codes have been updated (grouped by purpose) as
part of https://issues.apache.org/jira/browse/IGNITE-6989.

Thanks,
Pavel

On Sun, Nov 26, 2017 at 9:54 PM, Sergey Kozlov  wrote:

> Pavel
>
> Thanks for the document and your efforts on the new protocol. It was really
> helpful for playing around with the Python thin client design.
>
> Could you explain some things that are still not clear in the binary object
> format:
>
> 1. What is the reason to introduce separate type codes for arrays? Why can't
> we just use the following?
> *<1 byte universal array code>*
> *<1 byte primitive code>*
> *<4 bytes length>*
> **
>
> We get 1 byte of overhead but save 10 codes in the code table. For arrays the
> overhead is really insignificant: a 10-long array takes 1+4+4*10=45 bytes now
> vs 1+1+4+4*10=46 bytes for the proposed approach.
> Moreover, with that approach a new primitive code will be available for use
> for arrays immediately.
>
> 2. Why do the arrays force a single selected type? For Python there are no
> limitations on using different types within one array (list). It would be good
> to introduce a new type that allows that. It could look like the
> following
> *<1 byte universal array code>*
> *<1 byte no common type code*> <-- this says that every item must provide
> its data type code, just like regular primitive data does
> *<4 bytes length>*
> *<1 byte item 0 type code>* <-- item provides its code
> **  <-- item provides its data
> *<1 byte item 1 type code>*
> **
> etc
>
> Also, that allows nested arrays without changes in the type code table!
> For instance, if we want to store 9 longs and 1 boolean it will take
> 1+1+4+(1+9)*4+(1+1)=48 bytes now (vs 45 bytes to store 10 longs as usual).
>
> 3. There's only one way to store a dictionary (key-value) structure as a value
> in the cache - via Complex Object. But it looks overcomplicated. I
> propose to introduce a code for that
> *<1 byte key-value dictionary code>*
>
> *<4 bytes length>*
> *<1 byte key 1 **name **type code>*
> **
> *<1 byte value 1 type code>*
> **
> *<1 byte key 2 **name **type code>*
> **
> *<1 byte value 2 type code>*
> **
> etc
>
> Also, that allows nested dictionaries without changes in the type code
> table!
> Of course, with the approach above we get significant overhead for key
> storage. But I think it is acceptable for some cases and definitely OK for
> Python
>
>
>
>
> On Wed, Nov 22, 2017 at 9:14 PM, Prachi Garg  wrote:
>
> > Thanks Pavel! The document has good information. I'll create one on
> > readme.io; will also add some examples there.
> >
> > On Wed, Nov 22, 2017 at 5:03 AM, Pavel Tupitsyn 
> > wrote:
> >
> > > Igniters,
> > >
> > > I've put together a detailed description of our Thin Client protocol
> > > in form of IEP on wiki:
> > > https://cwiki.apache.org/confluence/display/IGNITE/IEP-
> > > 9+Thin+Client+Protocol
> > >
> > >
> > > To clarify:
> > > - Protocol implementation is in master (see ClientMessageParser class)
> > > - Protocol has not been released yet, so we are free to change anything
> > > - Protocol is only used by .NET Thin Client for now, but is supposed to
> > be
> > > used from other languages by third party contributors
> > > - More operations will be added in future, this is a first set of them,
> > > cache-related
> > >
> > >
> > > Please review the document and let me know your thoughts.
> > > Is there anything missing or wrong?
> > >
> > > We should make sure that the foundation is solid and extensible.
> > >
> > >
> > > Thanks,
> > > Pavel
> > >
> >
>
>
>
> --
> Sergey Kozlov
> GridGain Systems
> www.gridgain.com
>


[GitHub] ignite pull request #3096: IGNITE-7001

2017-11-27 Thread alexpaschenko
GitHub user alexpaschenko opened a pull request:

https://github.com/apache/ignite/pull/3096

IGNITE-7001



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-7001

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3096.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3096


commit f40600c691462190c59bf1ac161673863e1ea952
Author: Alexander Paschenko 
Date:   2017-11-24T15:27:48Z

IGNITE-7001 Dynamic index tests refactoring.

commit 68010a36c1365d72a82e8cc3e75d51c3ed5089ee
Author: Alexander Paschenko 
Date:   2017-11-27T11:14:51Z

IGNITE-7001 Continued.

commit 8a5eb38945669a3c0d6e02c0d602ee87f23cc617
Author: Alexander Paschenko 
Date:   2017-11-27T11:22:07Z

Merge remote-tracking branch 'apache/master' into ignite-7001




---


[jira] [Created] (IGNITE-7021) IgniteOOM is not propagated to client in case of implicit transaction

2017-11-27 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-7021:
-

 Summary: IgniteOOM is not propagated to client in case of 
implicit transaction
 Key: IGNITE-7021
 URL: https://issues.apache.org/jira/browse/IGNITE-7021
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.3
Reporter: Mikhail Cherkasov
Priority: Critical
 Fix For: 2.4


This is related to https://issues.apache.org/jira/browse/IGNITE-7019.
When a transaction fails due to IgniteOOM, Ignite tries to roll back the transaction, and 
the rollback fails too because free pages cannot be added to the free list due to a new IgniteOOM:

[2017-11-27 
12:47:37,539][ERROR][sys-stripe-2-#4%cache.IgniteOutOfMemoryPropagationTest0%][GridNearTxLocal]
 Heuristic transaction failure.
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.userCommit(IgniteTxLocalAdapter.java:835)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter.localFinish(GridDhtTxLocalAdapter.java:774)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.localFinish(GridDhtTxLocal.java:555)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.finishTx(GridDhtTxLocal.java:441)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.commitDhtLocalAsync(GridDhtTxLocal.java:489)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.commitAsync(GridDhtTxLocal.java:498)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.onDone(GridDhtTxPrepareFuture.java:727)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.onDone(GridDhtTxPrepareFuture.java:104)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:451)
at 
org.apache.ignite.internal.util.future.GridCompoundFuture.checkComplete(GridCompoundFuture.java:285)
at 
org.apache.ignite.internal.util.future.GridCompoundFuture.markInitialized(GridCompoundFuture.java:276)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare0(GridDhtTxPrepareFuture.java:1246)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.mapIfLocked(GridDhtTxPrepareFuture.java:666)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare(GridDhtTxPrepareFuture.java:1040)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.prepareAsync(GridDhtTxLocal.java:398)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareNearTx(IgniteTxHandler.java:519)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxPrepareRequest0(IgniteTxHandler.java:150)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxPrepareRequest(IgniteTxHandler.java:135)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.access$000(IgniteTxHandler.java:97)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$1.apply(IgniteTxHandler.java:177)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$1.apply(IgniteTxHandler.java:175)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1060)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090)
at 
org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:499)
at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.IgniteException: Runtime failure on search 
row: org.apache.ignite.internal.processors.cache.tree.SearchRow@2b17e5c8

[GitHub] ignite pull request #2816: IGNITE-2766 Opportunistically reopen cache after ...

2017-11-27 Thread alamar
Github user alamar closed the pull request at:

https://github.com/apache/ignite/pull/2816


---


[GitHub] ignite pull request #3077: IGNITE-2766 Ensure that cache is available after ...

2017-11-27 Thread alamar
Github user alamar closed the pull request at:

https://github.com/apache/ignite/pull/3077


---


[jira] [Created] (IGNITE-7020) Web Console: add column resizer to pinned columns

2017-11-27 Thread Dmitriy Shabalin (JIRA)
Dmitriy Shabalin created IGNITE-7020:


 Summary: Web Console: add column resizer to pinned columns
 Key: IGNITE-7020
 URL: https://issues.apache.org/jira/browse/IGNITE-7020
 Project: Ignite
  Issue Type: Improvement
  Components: wizards
Reporter: Dmitriy Shabalin
Assignee: Dmitriy Shabalin
 Fix For: 3.0






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7019) Cluster can not survive after IgniteOOM

2017-11-27 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-7019:
-

 Summary: Cluster can not survive after IgniteOOM
 Key: IGNITE-7019
 URL: https://issues.apache.org/jira/browse/IGNITE-7019
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.3
Reporter: Mikhail Cherkasov
Priority: Critical
 Fix For: 2.4


Even with full sync mode and a transactional cache, we can't add new nodes 
if there was an IgniteOOM: after adding new nodes and rebalancing, old nodes 
can't evict partitions:

[2017-11-17 20:02:24,588][ERROR][sys-#65%DR1%][GridDhtPreloader] Partition 
eviction failed, this can cause grid hang.
class org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Not enough 
memory allocated [policyName=100MB_Region_Eviction, size=104.9 MB]
Consider increasing memory policy size, enabling evictions, adding more nodes 
to the cluster, reducing number of backups or reducing model size.
at 
org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.allocatePage(PageMemoryNoStoreImpl.java:294)
at 
org.apache.ignite.internal.processors.cache.persistence.DataStructure.allocatePageNoReuse(DataStructure.java:117)
at 
org.apache.ignite.internal.processors.cache.persistence.DataStructure.allocatePage(DataStructure.java:105)
at 
org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.addStripe(PagesList.java:413)
at 
org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.getPageForPut(PagesList.java:528)
at 
org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.put(PagesList.java:617)
at 
org.apache.ignite.internal.processors.cache.persistence.freelist.FreeListImpl.addForRecycle(FreeListImpl.java:582)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.reuseFreePages(BPlusTree.java:3847)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.releaseAll(BPlusTree.java:4106)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.access$6900(BPlusTree.java:3166)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:1782)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.remove(BPlusTree.java:1567)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.remove(IgniteCacheOffheapManagerImpl.java:1387)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.remove(IgniteCacheOffheapManagerImpl.java:374)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.removeValue(GridCacheMapEntry.java:3233)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry.clearInternal(GridDhtCacheEntry.java:588)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.clearAll(GridDhtLocalPartition.java:892)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.tryEvict(GridDhtLocalPartition.java:750)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:593)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader$3.call(GridDhtPreloader.java:580)
at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6639)
at 
org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:967)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] ignite pull request #3085: ignite-6944 Fixed lookup of writeReplace and read...

2017-11-27 Thread agura
Github user agura closed the pull request at:

https://github.com/apache/ignite/pull/3085


---


[GitHub] ignite pull request #3095: IGNITE-6971 Ignite Logger type & logging file con...

2017-11-27 Thread apopovgg
GitHub user apopovgg opened a pull request:

https://github.com/apache/ignite/pull/3095

IGNITE-6971 Ignite Logger type & logging file config indication



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-6971

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3095.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3095


commit b244a39935d4b0d0e2c215dce32780c33e6afafa
Author: apopov 
Date:   2017-11-27T08:18:04Z

IGNITE-6971 Ignite Logger type & logging file config indication




---