I meant "they should *explicitly* provide data region configuration", of
course.
-Val
On Thu, Feb 1, 2018 at 10:58 AM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:
> Agree with Mike. I don't think it's a good idea to implicitly create data
> regions on cli
Cross-posting to dev.
Igniters,
This actually makes sense to me. Why don't we add IgniteCache#ttl(K key)
method that would return current TTL for the key? Looks like this is
already provided by GridCacheMapEntry#ttl() method, so we only need to
properly expose it to public API. Am I right?
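Such a method would just be an accessor for an entry's remaining time-to-live. A minimal Python sketch of the intended semantics (class and method names here are hypothetical; the real API is Java and would delegate to GridCacheMapEntry#ttl()):

```python
import time

class TtlCache:
    """Toy cache sketching the proposed IgniteCache#ttl(key) accessor."""

    def __init__(self):
        self._data = {}  # key -> (value, absolute expiry time or None)

    def put(self, key, value, ttl_seconds=None):
        expire_at = time.monotonic() + ttl_seconds if ttl_seconds else None
        self._data[key] = (value, expire_at)

    def ttl(self, key):
        """Remaining TTL in seconds, or None for an entry that never expires."""
        _value, expire_at = self._data[key]
        if expire_at is None:
            return None
        return max(0.0, expire_at - time.monotonic())

cache = TtlCache()
cache.put("k", "v", ttl_seconds=60)
assert 0 < cache.ttl("k") <= 60
```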
If
Folks,
On "Eviction Policies" documentation page [1] we have the following callout:
> Configured eviction policy has no effect if Ignite persistence is enabled
> Note that if Ignite Persistence is enabled, then the page-based evictions
have no effect because the oldest pages will be purged from
Sergey,
These mappings are supposed to be the same on all nodes, so if the file
already exists, we can safely ignore this or use a lock to avoid
concurrent access. Actually, I think we already fixed this in the past;
it's weird that the issue came up again.
But in any case, switching marshaller
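Since all nodes are assumed to produce identical mapping files, the write can be made idempotent. A Python sketch of the "skip if present, otherwise write atomically" approach (file names and layout here are assumptions for illustration, not Ignite's actual marshaller code):

```python
import os
import tempfile

def write_mapping_file(path, content):
    """Write a marshaller mapping file; skip the write if the file already
    exists, since every node would write the same content anyway."""
    if os.path.exists(path):
        return False  # already written by another node or thread; safe to ignore
    # Write to a temp file and atomically rename, so concurrent readers
    # never observe a partially written mapping.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(content)
        os.replace(tmp, path)  # atomic rename
    finally:
        if os.path.exists(tmp):
            os.remove(tmp)
    return True
```

With this scheme an "already exists" outcome is a no-op rather than an error, which matches the "safely ignore" option discussed above.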
Hi John,
There are multiple ways to get several schemas for the same type. As Pavel
mentioned, one example is when a Binarylizable implementation writes
different sets of fields under different circumstances.
However, a more common use case is for two client nodes to have different
versions of the same
Anton,
I tend to agree with Ilya that identifying and fixing all the possible
broken tests in one go is not feasible. What is the proper way in your
view? What are you suggesting?
-Val
On Mon, Feb 5, 2018 at 2:18 AM, Anton Vinogradov
wrote:
> Ilya,
>
> 1) Still see
it
> >> is
> >>> page based eviction, but not entry-based. Actually data is not removed,
> >> but
> >>> only written to disk. We can address this page later by ID.
> >>> PDS eviction is primarily the replacement of pages fro
Nikolay,
To merge it to 2.4, you need to merge the change to ignite-2.4 release.
Let's do this if we come to an agreement in the neighbor thread.
-Val
On Thu, Feb 8, 2018 at 8:21 PM, Nikolay Izhikov wrote:
> Hello, Dmitriy.
>
> IGNITE-7337 is merged to master [1]
>
> Do
, will be released soon :)
> >
> > On Fri, Feb 9, 2018 at 7:19 AM, Nikolay Izhikov <nizhi...@apache.org>
> > wrote:
> >
> > > Hello, Igniters.
> > >
> > > Good news.
> > >
> > > IGNITE-7337 [1](Spark Data Frames: support s
t;>>>>>> On 20 Dec 2017, at 23:15, Denis Magda <dma...@apache.org> wrote:
> > >>>>>>>
> > >>>>>>> Petr, thanks, such a swift turnaround!
> > >>>>>>>
> > >>>>>>> Have you
'metastore' for marshaller cache?
>
> Sincerely,
> Dmitriy Pavlov
>
> Fri, Feb 9, 2018 at 1:05, Valentin Kulichenko <
> valentin.kuliche...@gmail.com>:
>
> > Sergey,
> >
> > These mappings are supposed to be the same on all nodes, so if the fi
y to implement workaround for recovery from
> PDS.
> > For the collocated mode we can, for example, enforce REPLICATED cache
> mode.
> >
> > Why don't you like the idea with separate cache?
> >
> > [1] https://issues.apache.org/jira/browse/IGNITE-7565
> > [2] htt
Nikolay,
When you're talking about join optimization, what exactly are you referring
to?
Since other parts of the data frames integration are already merged, I think
it's a good time to resurrect this thread. Does it make sense to review it
right now, or do you want to make some more changes?
-Val
On
IGNITE-7337 (Data frame save functionality) is merged into 2.4.
-Val
On Fri, Feb 9, 2018 at 11:47 AM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:
> Nikolay,
>
> To merge it to 2.4, you need to merge the change to ignite-2.4 release.
> Let's do this if we co
> Extends scaladoc, etc.
>
> I will write you when PR is fully ready.
>
> [1] https://github.com/apache/ignite/pull/3397
>
>
>
> On Mon, 12/02/2018 at 13:45 -0800, Valentin Kulichenko wrote:
> > Nikolay,
> >
> > When you're talking about join optimization, what
> > I think we will have this information in a matter of 1-2 months.
>
> [1] https://github.com/apache/ignite/pull/3397
> [2] https://github.com/apache/ignite/pull/3397/files#diff-
> 5a861613530bbce650efa50d553a0e92R227
> [3] https://gist.github.com/nizhikov/a4389fd78636869dd38c1392
[1] https://issues.apache.org/jira/browse/IGNITE-7588
> >
> > On Tue, Jan 30, 2018 at 2:27 PM, Anton Vinogradov
> > <avinogra...@gridgain.com> wrote:
> >> +1
> >>
> >> On Tue, Jan 30, 2018 at 9:02 AM, Yakov Zhdanov <yzhda...@apache.org>
heProxyImpl.query(IgniteCacheProxyImpl.java:664)
> at org.apache.ignite.internal.processors.cache.
> IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:615)
> at org.apache.ignite.internal.processors.cache.
> GatewayProtectedCacheProxy.query(GatewayProtectedCach
until:
>
> 1. We create documentation for all join limitations.
> 2. Create a way to check whether a certain join satisfies the current
> limitations.
>
> [1] http://apache-ignite-developers.2346864.n4.nabble.com/
> SparkDataFrame-Query-Optimization-Prototype-tp26249p26361.html
>
> > >>>> On 19 Dec 2017, at 22:55, Denis Magda <dma...@apache.org> wrote:
> > >>>>
> > >>>> All the bids were accepted and the verdict is executed:
> > >>>> https://issues.apache.org/jira/browse/IGNITE-7251 <
> > https://issues.apache.o
Roman,
DataStreamerCacheUpdaters class is actually not a part of the public API, so
I don't see a reason to change it unless there is a need for this internally
in Ignite.
-Val
On Thu, Feb 15, 2018 at 5:52 AM, Roman Guseinov wrote:
> Hello Igniters,
>
> In some cases, batched
Guys,
While we're on this topic, what is the difference between BACKGROUND and
NONE in terms of semantics and provided guarantees? To me it looks like
both guarantee recovery of the state as of the last checkpoint, and anything
else can potentially be lost, so from the user's perspective they are the same.
That's a great idea! Nikolay, let me know if you need any help with the
presentation, I will be happy to help.
-Val
On Fri, Feb 16, 2018 at 12:19 AM, Nikolay Izhikov
wrote:
> Ok, Igniters.
>
> I will do it in a few weeks.
> I need time to prepare to the talk.
>
> On Fri,
As far as I remember, it used to be public and then was moved to internal.
The main issue with these updaters was that batching is dangerous: you can
get deadlocks if keys are not sorted (which is the case for BATCHED). There
is also BATCHED_SORTED, but it requires keys to be
Comparable and
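The deadlock mentioned above is the classic lock-ordering problem, and sorting keys is the classic fix: if every batch acquires per-key locks in one global order, two concurrent batches can never wait on each other in a cycle. A toy Python sketch of why the sorted variant is safe (not Ignite's internals):

```python
import threading

# One lock per key; in a real cache these would be per-entry locks.
LOCKS = {k: threading.Lock() for k in ("a", "b", "c")}

def update_batch(keys, apply_update):
    """Acquire per-key locks in sorted order (the BATCHED_SORTED idea),
    apply the updates, then release. Sorting requires comparable keys."""
    ordered = sorted(set(keys))
    for k in ordered:
        LOCKS[k].acquire()
    try:
        for k in ordered:
            apply_update(k)
    finally:
        for k in reversed(ordered):
            LOCKS[k].release()

# Two batches touching the same keys in opposite order: with sorting they
# cannot deadlock; without it, ["a", "b"] vs ["b", "a"] could.
results = []
t1 = threading.Thread(target=update_batch, args=(["a", "b"], results.append))
t2 = threading.Thread(target=update_batch, args=(["b", "a"], results.append))
t1.start(); t2.start()
t1.join(timeout=5); t2.join(timeout=5)
assert sorted(results) == ["a", "a", "b", "b"]
```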
is a
> category, then it can be turned on and off using standard logger
> configuration.
>
> D.
>
> On Mon, Dec 11, 2017 at 3:28 PM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
> > Igniters,
> >
> > We have a bunch of warnings in the product wh
Ticket created: https://issues.apache.org/jira/browse/IGNITE-7284
-Val
On Wed, Dec 20, 2017 at 6:12 PM, Dmitriy Setrakyan <dsetrak...@apache.org>
wrote:
> Sounds good, markers should work.
>
> On Wed, Dec 20, 2017 at 1:00 PM, Valentin Kulichenko <
> valentin.kulich
Ticket is still open. Vladimir, looks like it's assigned to you. Do you
have any plans to work on it?
https://issues.apache.org/jira/browse/IGNITE-5038
-Val
On Wed, Jan 3, 2018 at 1:26 PM, Abeneazer Chafamo wrote:
> Is there any update on the suggested
Revin,
I doubt IgniteRDD#getPreferredLocations has any effect on data frames, but
this is an interesting point. Nikolay, as the developer of this
functionality, can you please comment on this?
-Val
On Wed, Jan 3, 2018 at 1:22 PM, Revin Chalil wrote:
> Thanks Val for the
Guys,
The latest build on the nightly builds page [1] is from May 31. Any idea
why?
[1]
https://builds.apache.org/view/H-L/view/Ignite/job/Ignite-nightly/lastSuccessfulBuild/
-Val
+1. Looks like a bug.
-Val
On Thu, Jun 21, 2018 at 12:26 PM Denis Magda wrote:
> Hello Slava,
>
> BinaryContext implementation matches only classes that reside in
> > the "org.apache.ignite.examples" package
>
>
> This looks like an oversight on our side. Think we need to fix it.
>
> --
>
D - Data is read without a
> lock
> > and is never cached in the transaction itself."). Which should be wrong.
> > Read locks are acquired but they are released as soon as the read is
> > complete (and they are not held until the transaction commits or rolls
> > back).
> >
>
ways
> > > > zero [1]. That's why the WARN message shown here [2] would not be
> > > > quite right if we have a lot of client nodes in the cluster.
> > > >
> > > >
> > > > [1]
> > > >
> > > >
java#L88
> > [3]
> >
> https://github.com/apache/ignite/blob/master/modules/platforms/dotnet/examples/Apache.Ignite.Examples/Sql/SqlDmlExample.cs#L91
> > [4]
> >
> https://github.com/apache/ignite/blob/master/examples/src/main/scala/org/apache/ignite/scalar/examples/Sca
Hi Nikolay,
Can you please take a look at this thread on SO?
https://stackoverflow.com/questions/51621280/saving-a-spark-dataset-to-apache-ignite-with-array-column-and-savemode-overwrite
Looks like org.apache.ignite.spark.impl.QueryUtils#dataType method should
also support ArrayType as one of
Denis,
I think this is correct behavior. If you deploy a local query on a single
node with a REPLICATED cache, you expect to be notified with all the
updates. This is not the case for PARTITIONED caches.
-Val
On Tue, Jul 31, 2018 at 3:19 AM Denis Mekhanikov
wrote:
> Igniters,
>
> As you may
h the key/val and the
> relational fields in a dataframe schema.
>
> Stuart.
>
> > On 1 Aug 2018, at 04:23, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
> >
> > I don't think there are exact plans to remove _key and _value fields as
> > it'
for predicate pushdown to Ignite.
>
> I’m likewise keen to hear Nikolay’s point of view as he is obviously the
> expert.
>
> Thanks for your help so far.
>
> Stuart.
>
> On 1 Aug 2018, at 18:17, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
>
rlying cache objects, which is not possible currently.
>
> Can you elaborate on the reason _key and _val columns in Ignite SQL
> will be removed?
>
> Stuart.
>
> > On 27 Jul 2018, at 19:39, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
> >
> >
> >
> > > > > > > On Wed, Jul 25, 2018 at 12:10 PM Dmitrii Ryabov <
> > > > somefire...@gmail.com
> > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > +1 to make LO
Stuart, Nikolay,
I really don't like the idea of exposing '_key' and '_val' fields. This is
legacy stuff that hopefully will be removed altogether one day. Let's not
use it in new features.
Actually, I don't think it's even needed. Spark docs [1] suggest two
ways of creating a typed
the end):
> https://apacheignite-fs.readme.io/docs/ignitecontext-igniterdd
>
> Stuart.
>
> On 27 Jul 2018, at 20:05, Valentin Kulichenko <
> valentin.kuliche...@gmail.com>
> wrote:
>
> Well, the second approach would use the optimizations, no?
>
> -Val
>
>
Stuart,
Two tables can have the same name only if they are located in different
schemas. That said, adding schema name support makes sense to me for sure.
We can implement this using either a separate SCHEMA_NAME parameter, or
something similar to what you suggested in option 3 but with schema name instead of
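A small Python sketch of the schema-qualified lookup: the same table name can live in two schemas, and a SCHEMA_NAME-style option disambiguates the reference (catalog contents and names below are purely hypothetical):

```python
# Hypothetical catalog: (schema, table) -> backing cache.
TABLES = {
    ("PUBLIC", "PERSON"): "cache_person_public",
    ("HR", "PERSON"): "cache_person_hr",  # same table name, different schema
}

def resolve(table, schema=None):
    """Resolve a table reference; a schema (e.g. a SCHEMA_NAME option) is
    required whenever the bare table name is ambiguous."""
    matches = [k for k in TABLES
               if k[1] == table and (schema is None or k[0] == schema)]
    if len(matches) != 1:
        raise ValueError("ambiguous or unknown table: %s" % table)
    return TABLES[matches[0]]

assert resolve("PERSON", schema="HR") == "cache_person_hr"
```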
Hi John,
Please refer to DEVNOTES.txt, it describes the process step by step.
-Val
On Tue, Aug 7, 2018 at 12:14 PM John Wilson wrote:
> Hi,
>
> How do I generate tar.gz for Ignite from source?
>
> Thanks,
>
Stuart, Nikolay,
I see that the 'Table' class (returned by listTables method) has a
'database' field. Can we use this one to report schema name?
In any case, I think we should look into how this is done in data source
implementations for other databases. Any relational database has a notion
of
Vladimir,
1. Continuous queries are asynchronous in general case, so I don't think
it's even possible to provide transactional ordering, especially for the
case of distributed transactions. I would leave current guarantees as-is.
2. This one might be pretty useful. If it's not very hard to do,
Hi David,
With the Docker image you can actually use additional libraries by
providing URLs to JARs via EXTERNAL_LIBS property. Please refer to this
page: https://apacheignite.readme.io/docs/docker-deployment
But anyway, I believe that such contribution might be very valuable for
Ignite. Feel
e case when we have several Ignite
> configuration in one XML file.
> Now I see; maybe this is too rare a use case to support.
>
> Stuart, Valentin, What is your proposal?
>
> On Wed, 22/08/2018 at 08:56 -0700, Valentin Kulichenko wrote:
> > Nikolay,
> >
> > Whate
wrote:
>
> > Hello, Stuart.
> >
> > Can you do some research and find out how schema is handled in Data
> Frames
> > for a regular RDBMS such as Oracle, MySQL, etc?
> >
> > On Mon, 20/08/2018 at 15:37 -0700, Valentin Kulichenko wrote:
> > > Stuart, Nikolay,
Guys,
I believe we should preserve the behavior that we have now. What happens to
services if we restart a persistent cluster running 2.6? Are services
recreated or not? If YES, we should make sure the same happens after the
redesign. It would be even better if we preserve compatibility, i.e. allow
s to represent the schema name as the
> > database name for the purposes of the Spark catalog.
> >
> > If anyone knows of an existing way to list all available schemata within
> an
> > Ignite instance please let me know, otherwise the first task will be
> > creating tha
appy to make the change to have the
> > database reference the schema if Nikolay agrees. (I'll first need to do a
> > bit of research into how to obtain the list of all available schemata...)
> >
> > Thanks,
> > Stuart.
> >
> > On Tue, Aug 21, 2018 at
>
> Wed, Aug 29, 2018 at 7:14, Valentin Kulichenko <
> valentin.kuliche...@gmail.com>:
>
> > Folks,
> >
> > Is there a way to limit or disable retries of failed updates in the
> > write-behind store? I can't find one, it looks like if an update fails,
Folks,
By default baseline topology is enforced only if persistence is enabled, and
it is not defined in an in-memory-only scenario. But does this mean that I
still CAN use it without persistence?
In some cases users want to avoid rebalancing in case of a node failure
because it's assumed that this
On Mon, Jul 9, 2018 at 1:33 PM Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:
> If clientFailureDetectionTimeout is not set on server node, will it use
> failureDetectionTimeout
> instead?
>
> Either way, this configuration seems to be a bit confusing, but I don't
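The fallback in question is easy to state explicitly. A Python sketch of the assumed resolution order (hypothetical logic written from the discussion above; the actual Ignite behavior should be verified in the code):

```python
def effective_failure_timeout(node_is_client,
                              client_failure_detection_timeout,
                              failure_detection_timeout):
    """Timeout used when judging a remote node's liveness: the client-specific
    value for client nodes when it is set, otherwise the general
    failureDetectionTimeout."""
    if node_is_client and client_failure_detection_timeout is not None:
        return client_failure_detection_timeout
    return failure_detection_timeout

# Client-specific timeout set: it wins, but only for clients.
assert effective_failure_timeout(True, 30_000, 10_000) == 30_000
assert effective_failure_timeout(False, 30_000, 10_000) == 10_000
# Not set: fall back to the general timeout.
assert effective_failure_timeout(True, None, 10_000) == 10_000
```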
Folks,
Currently we do not create any regions or allocate any offheap memory on
client nodes unless it's explicitly configured. This is good behavior;
however, there is a usability issue caused by the fact that many users have
the same config file for both servers and clients. This can lead to
Hi John,
Looks like pictures can't be attached directly. Try uploading it somewhere
and providing a link.
However, since you're talking about ranges, my guess would be that you're
using SQL, which is currently NOT transactional. This support is currently
in development though, probably other members
together.
>
> D.
>
> On Fri, Jul 20, 2018 at 3:59 PM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
> > Folks,
> >
> > Currently do not create any regions or allocate any offheap memory on
> > client nodes unless it's explicitly configured
It sounds like the main drawback of the LOCAL cache is that it's implemented
separately and therefore has to be maintained separately. If that's the
only issue, why not keep the LOCAL cache mode in the public API, but implement
it as a PARTITIONED cache with a node filter forcefully set? That's similar to
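A Python sketch of the node-filter idea: if the filter admits exactly one node, a PARTITIONED cache keeps all its data on that node, which is effectively what LOCAL mode provides (shapes and names here are illustrative only):

```python
def local_node_filter(local_node_id):
    """Hypothetical node filter that admits only the given node, so all
    partitions of a PARTITIONED cache land there -- LOCAL-like behavior."""
    return lambda node_id: node_id == local_node_id

def eligible_nodes(all_nodes, node_filter):
    """Nodes on which the cache is allowed to keep data."""
    return [n for n in all_nodes if node_filter(n)]

cluster = ["node-1", "node-2", "node-3"]
assert eligible_nodes(cluster, local_node_filter("node-2")) == ["node-2"]
```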
Hi John,
Read committed isolation typically implies that a read lock is not held
throughout the transaction lifecycle, i.e. it is released right after the read
is completed (in Ignite this just means that no lock is acquired at all). This
semantic allows a second read to get an updated value that was
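A Python toy illustrating the semantics described above: a READ_COMMITTED transaction neither caches reads nor holds a read lock, so a repeated read can observe a value committed in between (this models the isolation level only, not Ignite code):

```python
class ReadCommittedTx:
    """Toy transaction: every read goes straight to the committed store."""

    def __init__(self, committed_store):
        self.store = committed_store

    def read(self, key):
        # No read lock, no per-transaction read cache.
        return self.store[key]

store = {"k": 1}
tx = ReadCommittedTx(store)
first = tx.read("k")
store["k"] = 2            # another transaction commits in between
second = tx.read("k")
assert (first, second) == (1, 2)  # the second read sees the newer commit
```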
Dave,
If it's executed even when the primary node is outside of the cluster group,
then I think it's a bug; I would throw an exception in this case. However,
is there any particular reason you're doing this? Is there a use case? I
don't see much sense in combining affinityRun with a cluster
There is a non-serializable anonymous class that you have declared within
the plugin. You should check what is actually serialized there and whether
it's supposed to be serialized or not.
-Val
On Fri, Aug 31, 2018 at 10:27 AM wt wrote:
> How can i make my custom plugin serializable?
>
>
>
al:
>
> 5. Always use clientFailureDetectionTimeout on clients instead of
> failureDetectionTimeout
> *What*: change code to use clientFailureDetectionTimeout on clients
> *When*: update code and readme.io docs in 2.7
>
> Thanks,
> Stan
>
> From: Valentin Kulichenko
> Sent:
onTimeout=20.
> When these two nodes communicate, server will use timeouts of 20 seconds
> and client will use timeout of 10 seconds.
>
> Stan
>
> From: Valentin Kulichenko
> Sent: July 6, 2018, 23:17
> To: dev@ignite.apache.org
> Subject: Re: IgniteConfiguration, TcpDiscoveryS
Hi Amir,
I reviewed the change and commented in the ticket.
-Val
On Fri, Jul 6, 2018 at 4:18 PM Amir Akhmedov
wrote:
> Igniters,
>
> Please review my changes. That's a simple change, a setter method was
> added.
>
> Thanks,
> Amir
>
imeout is that it may allow
> clients to be slower/on a slower network than servers.
>
> Do you think it isn’t worth to have a separate setting just for that?
>
> Thanks,
> Stan
>
> From: Valentin Kulichenko
> Sent: July 5, 2018, 18:16
> To: dev@ignite.apache.or
Dmitry,
Good point. I think it makes sense to even remove (deprecate) the
excludeNeighbors property and always distribute primary and backups to
different physical hosts in this scenario. Because why would anyone ever
set this to false if we switch default to true? This also automatically
fixes
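The placement rule being proposed can be sketched in a few lines of Python: backups are only taken from nodes whose physical host differs from the primary's, which is what excludeNeighbors=true enforces (data shapes here are hypothetical):

```python
def assign_backups(primary_host, candidates, backups_needed):
    """Pick backup nodes from `candidates` (a list of (node_id, host) pairs),
    skipping any node that shares a physical host with the primary."""
    chosen = []
    for node_id, host in candidates:
        if host != primary_host and len(chosen) < backups_needed:
            chosen.append(node_id)
    return chosen

# Primary is on hostA; its hostA neighbor is skipped, so losing hostA
# entirely still leaves a live backup.
candidates = [("n2", "hostA"), ("n3", "hostB"), ("n4", "hostC")]
assert assign_backups("hostA", candidates, 1) == ["n3"]
assert assign_backups("hostA", candidates, 2) == ["n3", "n4"]
```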
cess it? Imagine I use data on nodes A and B
> > > performing reads and writes and node C crashes in the middle of tx.
> > Should
> > > my tx be rolled back? I think no.
> > >
> > > As far as difference it seems that IGNORE resets lost status for
> affected
>
Generally, I think we should not trim any exceptions, because this way we
can unexpectedly remove useful information. Do we know what was the
original reasoning behind this logic?
-Val
On Mon, Mar 12, 2018 at 3:57 AM, Stanislav Lukyanov
wrote:
> Hi,
>
>
Alex,
What is the behavior going to be after IGNITE-5874 is fixed? Will an expired
entry be removed from both memory and persistence?
-Val
On Sat, Mar 10, 2018 at 12:06 AM, Alexey Goncharuk <
alexey.goncha...@gmail.com> wrote:
> The ticket [1] is in Patch Available state and looks good; the only thing
Nikolay,
Spark integration is not related to Scalar, the only thing they have in
common is Scala. I think we should have a separate configuration for
ignite-spark module. If anything Spark related is currently in Scalar
suite, it should be moved from there.
-Val
On Tue, Feb 27, 2018 at 3:03 AM,
FS are also tested there.
>
> Sincerely,
> Dmitriy Pavlov
>
> Tue, Feb 27, 2018 at 22:49, Valentin Kulichenko <
> valentin.kuliche...@gmail.com>:
>
> > Nikolay,
> >
> > Spark integration is not related to Scalar, the only thing they have in
> > common is Scala.
Guys,
What is the result of this discussion? Do we still not support eviction and
expiration on persistence level? If so, any plans to change this?
-Val
On Tue, Nov 21, 2017 at 9:45 AM, Denis Magda wrote:
> We might break the compatibility for the next major release or even
s is a colossal usability problem (I'm pretty sure I've seen
> numerous messages about it on the user list) and I'll file an issue if
> nobody is objecting.
>
> Ilya.
>
> --
> Ilya Kasnacheev
>
> 2018-03-05 22:50 GMT+03:00 Valentin Kulichenko <
> valentin.kuliche...@gmail.c
This indeed looks like a bigger issue. Basically, there is no clear way (or
no way at all) to synchronize code that listens to partition loss event,
and the code that calls resetLostPartitions() method. Example scenario:
1. Cache is configured with 3rd party persistence.
2. One or more nodes fail
Ivan,
If the grid hangs, a graceful shutdown would most likely hang as well. You
can almost never recover from a bad state using graceful procedures.
I agree that we should not create two defaults, especially in this case.
It's not even strictly defined what an embedded node is in Ignite. For
example, if
I don't think peer class loading is even possible for services. I believe
we should reuse DeploymentSpi [1] for versioning.
[1] https://apacheignite.readme.io/docs/deployment-spi
-Val
On Wed, Apr 4, 2018 at 12:52 PM, Denis Magda wrote:
> Sorry, that was me who renamed the
Yes, the class deployment itself has to be explicit. I.e., there has to be
a manual step where user updates the class, and the exact step required
would depend on DeploymentSpi implementation. But then Ignite takes care of
everything else - service redeployment and restart is automatic.
Dmitriy
Guys,
I am also not sure I understand the purpose of methods like [1] that accept
an instance of AtomicConfiguration to create a new atomic structure. To my
knowledge, all atomics are stored in a single cache which is configured by
the AtomicConfiguration provided on startup as part of IgniteConfiguration.
Is there a ticket? Let's create one if not.
-Val
On Tue, Apr 10, 2018 at 6:17 AM, Vladimir Ozerov <voze...@gridgain.com>
wrote:
> Val,
>
> They are simply not implemented yet. I am not aware of concrete plans to
> support them.
>
> On Mon, Apr 9, 2018 at 11:33
Denis,
In my understanding, in this case you should remove the node from the BLT,
and that will trigger the rebalancing, no?
-Val
On Wed, Apr 11, 2018 at 12:23 PM, Denis Magda wrote:
> Igniters,
>
> As we know the rebalancing doesn't happen if one of the nodes goes down,
> thus,
Guys,
Is there a way to run a collocated compute job in C++? I can't find the
affinityRun and affinityCall methods in the C++ compute API; am I missing
something? If we really don't have them, is there any particular reason for
this and/or plans to add them?
-Val
should be added.
> > >
> > > I don't think, that a lot of people will implement their own
> > > *DeploymentSpi*-s, so we should make work with *UriDeploymentSpi* as
> > > comfortable as possible.
> > >
> > > Denis
> > >
> > >
has to be brought up after a full
> cluster restart w/o user intervention. To achieve this we need to persist
> the service's configuration somewhere.
>
> --
> Denis
>
> On Mon, Apr 9, 2018 at 1:42 PM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
>
roper DeploymentSpi.
> > Please correct me, if I'm wrong.
> > It would be good, though, to add some examples on service redeployment,
> > when implementation class changes.
> >
> > Denis
> >
> > Thu, Apr 5, 2018 at 2:33, Valentin Kulichenko <
> > val
This is on my plate, will try to take a look this week.
-Val
On Mon, Apr 9, 2018 at 10:28 AM, Denis Magda wrote:
> Val,
>
> As an initial reviewer and reporter, could you have a look and sign the
> contribution off?
>
> --
> Denis
>
> On Mon, Apr 9, 2018 at 12:56 AM, Aleksey
o listen to needed ports, then a
> corresponding exception will be propagated to the user code.
> On the other hand, if exception is thrown from the *execute() *method, then
> service won't be undeployed.
>
> Denis
>
> Fri, Apr 20, 2018 at 2:35, Valentin Kulichenko <
> va
> > > > > >>
> > > > > >> Also, I heard that presently we store a service configuration in
> > the
> > > > > >> system
> > > > > >> cache that doesn't give us a way to deploy a new version of a
> > >
gt; when it is rebalanced to another node.
>
> As Denis said, if we are not going to prevent nodes from starting on
> service failures, then we should at least generate corresponding events.
> Otherwise there won't be any way to react to service initialization
> failures during n
Folks,
Any other thoughts on this? Should we create tickets for compute support if
there are no objections?
-Val
On Thu, Mar 22, 2018 at 4:27 PM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:
> I agree that compute and services functionality is important for thin
nce would be not to have compute functionality on thin
> clients, as it would introduce extra security risk.
>
> Any particular reason why you are asking for this feature?
>
> D.
>
> On Apr 2, 2018, 8:47 PM, at 8:47 PM, Valentin Kulichenko <
> valentin.kuliche...@gmail.co
e. Make sure to deactivate the
>> cluster before shutdown.");
>>
>
> Best Regards,
> Ivan Rakov
>
>
> On 24.03.2018 1:40, Valentin Kulichenko wrote:
>
>> Dmitry,
>>
>> Thanks for clarification. So it sounds like if we fix all other modes as
>> we
Nikolay,
Sounds like a good idea. I will do my best to speed up the process and
review asap.
-Val
On Thu, Mar 22, 2018 at 11:37 AM, Nikolay Izhikov
wrote:
> Hello, guys
>
> I agree with earlier release.
>
> I propose to include my task IGNITE-7077 to 2.5 release.
>
>
I agree that compute and services functionality is important for thin
client. It doesn't seem to be very hard to implement, but would provide
much better flexibility, as users would be able to do remote invocation of
arbitrary code, use collocated processing, etc. Having an ability to do
this from
e ready
> to
> > >>>>>>>>>
> > >>>>>>>> release
> > >>>>>>>> AI 2.5 with 7% drop.
> > >>>>>>>>> 2) Introduce LOG_ONLY_SAFE, make it default, add release note
> > >>>
triy Pavlov
>
> Sat, Mar 24, 2018 at 1:07, Valentin Kulichenko <
> valentin.kuliche...@gmail.com>:
>
> > I agree. In my view, any possibility to get a corrupted storage is a bug
> > which needs to be fixed.
> >
> > BTW, can someone explain semant
Guys,
What do we mean by "data corruption" here? If a storage is in a corrupted
state, does it mean that it needs to be completely removed and the cluster
needs to be restarted without data? If so, I'm not sure any mode that allows
corruption makes much sense to me. How am I supposed to use a
Ilya,
IgniteDataStreamer#addData method returns a future which should be completed
with an error if one is thrown on the server side. Does this happen or not?
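The expected contract can be modeled with a standard future. A Python sketch (using concurrent.futures rather than Ignite's IgniteFuture) of an addData-like call whose returned future completes exceptionally when the server-side update throws:

```python
from concurrent.futures import ThreadPoolExecutor

def add_data(executor, server_side_update, entry):
    """addData-like call: returns a future that carries either the update's
    result or the exception the 'server side' raised."""
    return executor.submit(server_side_update, entry)

def failing_update(entry):
    raise RuntimeError("server-side store failed for %r" % (entry,))

with ThreadPoolExecutor(max_workers=1) as ex:
    fut = add_data(ex, failing_update, ("k", "v"))
    try:
        fut.result(timeout=5)
        surfaced = False
    except RuntimeError:
        surfaced = True  # the server-side error is surfaced through the future
assert surfaced
```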
-Val
On Mon, Mar 5, 2018 at 4:10 AM, Nikolay Izhikov wrote:
> Hello, Ilya.
>
> > I think it's time to end this, if that
I stumbled across a couple of use cases where swap space was more suitable
than persistence. However, enabling both for the same region definitely
doesn't make sense to me; I would throw an exception in this case.
-Val
On Fri, Mar 2, 2018 at 9:46 AM, Denis Magda wrote:
> Hi
+1. Low level timeouts that we still have in discovery and communication
are very hard to explain and I doubt there is anyone who fully understands
how they currently work. They bring a lot of complexity and almost zero
value. Let's deprecate them and leave only failureDetectionTimeout plus
other
Hi Uday,
I added you to contributors list, you should be able to assign the ticket
to yourself now. Welcome to the community!
-Val
On Fri, Jun 29, 2018 at 3:40 AM uday kale wrote:
> Hi,
>
> I am unable to change the jira status of IGNITE-1260 or assign it to
> myself. Can someone grant me the