Re: slow query performance against Berkeley DB
Hi Rajesh, Ignite is a distributed system, so testing with a single node is not representative. You should run multiple nodes, and partition and collocate your data first. Thanks, C > On 5 Feb 2018, at 16:36, Rajesh Kishore wrote: > > Hi, > > We are in the process of evaluating Ignite native persistence against Berkeley > DB. For some reason the Ignite query does not perform the way the application > code does against Berkeley DB. > > Background: > Berkeley DB - As of now, we have Berkeley DB for our application and the data > is stored as name-value pairs as a byte stream in Berkeley DB's native file > system. > > Ignite DB - We are using Ignite's native persistence file system. Created > appropriate indexes and we retrieve data using SQL involving multiple joins. > > Ignite configuration: native persistence enabled, only one node. > > Data: As of now in the main table we have only 0.1 million records and in supporting > tables we have around 2 million records. > > Ignite SQL query used: > > SELECT f.entryID, f.attrName, f.attrValue, f.attrsType FROM > ( select st.entryID, st.attrName, st.attrValue, st.attrsType from > (SELECT at1.entryID FROM "objectclass".Ignite_ObjectClass at1 > WHERE at1.attrValue= ? ) t > INNER JOIN > "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE st ON > st.entryID = t.entryID WHERE st.attrKind IN ('u','o') > ) f >INNER JOIN (SELECT entryID from "dn".Ignite_DN where parentDN like ? 
) > dnt ON f.entryID = dnt.entryID > > The corresponding EXPLAIN PLAN: > > [[SELECT > F__Z3.ENTRYID AS __C0_0, > F__Z3.ATTRNAME AS __C0_1, > F__Z3.ATTRVALUE AS __C0_2, > F__Z3.ATTRSTYPE AS __C0_3 > FROM ( > SELECT > ST__Z2.ENTRYID, > ST__Z2.ATTRNAME, > ST__Z2.ATTRVALUE, > ST__Z2.ATTRSTYPE > FROM ( > SELECT > AT1__Z0.ENTRYID > FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z0 > WHERE AT1__Z0.ATTRVALUE = ?1 > ) T__Z1 > INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST__Z2 > ON 1=1 > WHERE (ST__Z2.ATTRKIND IN('u', 'o')) > AND (ST__Z2.ENTRYID = T__Z1.ENTRYID) > ) F__Z3 > /* SELECT > ST__Z2.ENTRYID, > ST__Z2.ATTRNAME, > ST__Z2.ATTRVALUE, > ST__Z2.ATTRSTYPE > FROM ( > SELECT > AT1__Z0.ENTRYID > FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z0 > WHERE AT1__Z0.ATTRVALUE = ?1 > ) T__Z1 > /++ SELECT > AT1__Z0.ENTRYID > FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z0 > /++ "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = ?1 > ++/ > WHERE AT1__Z0.ATTRVALUE = ?1 > ++/ > INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST__Z2 > /++ "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE_ENTRYID_IDX: > ENTRYID = T__Z1.ENTRYID ++/ > ON 1=1 > WHERE (ST__Z2.ATTRKIND IN('u', 'o')) > AND (ST__Z2.ENTRYID = T__Z1.ENTRYID) > */ > INNER JOIN ( > SELECT > __Z4.ENTRYID > FROM "dn".IGNITE_DN __Z4 > WHERE __Z4.PARENTDN LIKE ?2 > ) DNT__Z5 > /* SELECT > __Z4.ENTRYID > FROM "dn".IGNITE_DN __Z4 > /++ "dn".EP_DN_IDX: ENTRYID IS ?3 ++/ > WHERE (__Z4.ENTRYID IS ?3) > AND (__Z4.PARENTDN LIKE ?2): ENTRYID = F__Z3.ENTRYID > */ > ON 1=1 > WHERE F__Z3.ENTRYID = DNT__Z5.ENTRYID > ORDER BY 1], [SELECT > __C0_0 AS ENTRYID, > __C0_1 AS ATTRNAME, > __C0_2 AS ATTRVALUE, > __C0_3 AS ATTRSTYPE > FROM PUBLIC.__T0 > /* "Ignite_DSAttributeStore"."merge_sorted" */ > ORDER BY 1 > /* index sorted */]] > > Any pointers on how I should proceed? Following is the JFR report for the > code used: > cursor = cache.query(new SqlFieldsQuery(query).setEnforceJoinOrder(true)); > cursor.getAll(); > 
> Thanks, > Rajesh
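[Since this thread mentions enabling native persistence but shows no configuration, here is a minimal sketch of how it is switched on in Ignite 2.3+ via DataStorageConfiguration; the bean layout follows Ignite's documented Spring XML style, and none of the values come from Rajesh's actual setup:]

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <!-- Persist the default data region to disk (Ignite native persistence). -->
                    <property name="persistenceEnabled" value="true"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```

Note that with persistence enabled a cluster starts in an inactive state, so you need to activate it (e.g. ignite.cluster().active(true)) before running queries.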
Re: Computation best practices
Hi Luqman, Is there a specific reason why you want to keep the data nodes separate from the compute nodes? As you say, this defeats the point of collocation. You should use the data nodes for compute and ensure you have a way to monitor and kill spurious tasks that may be executed on the grid. C. > On 13 Nov 2017, at 11:50, luqmanahmad wrote: > > Hi there, > > Just trying to clarify a few bits in my head around data nodes and compute > nodes. > > Let's say we have 10 data nodes which are solely storing the data using > affinity collocation, and we have 10 compute nodes as well for computing > different tasks on the cluster. > > Now we know that if we want to perform some operation on the data and we > know where it resides, we can use the affinity API to perform the operation > on it, which is indeed much better as there would be no data movement across > the nodes. But then on the other side we have got compute nodes as well, > which are just sitting idle. Although we have the luxury of using > distributed closures, wouldn't it be an overhead to carry all the data to a > compute node and then send the results back? > > Just trying to find a use case where separate cluster groups could be > useful, for example data-node, compute-node etc. If anyone can clear this up > it would be much appreciated. > > Thanks, > Luqman > > > > > -- > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
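[To make the "use the data nodes for compute" advice concrete, here is a hedged sketch of sending a computation to the node that owns a given key, so no data crosses the network. The cache name "orders" and the key are illustrative assumptions; this requires a running Ignite cluster and the ignite-core dependency.]

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class AffinityComputeSketch {
    public static void main(String[] args) {
        // Sketch only: assumes a cluster with a cache named "orders" already exists.
        try (Ignite ignite = Ignition.start()) {
            // The closure executes on the primary node for key 42 in "orders",
            // so it can read that entry locally instead of pulling it over the wire.
            ignite.compute().affinityRun("orders", 42, () -> {
                Object value = Ignition.localIgnite().cache("orders").localPeek(42);
                System.out.println("Local value for key 42: " + value);
            });
        }
    }
}
```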
Re: Platform updates
Luqman, do you want to update the application or the actual Ignite version? If it's your application, then as long as you can manage multiple versions of your app for a phased upgrade, sure. But if it's Ignite, then it is not possible to have two different versions running in the same cluster. You can do that with the GridGain enterprise version, though. Cheers, C > On 4 Aug 2017, at 15:31, luqmanahmad wrote: > > Hi there, > > Let's say we have a distributed caching system which needs to be up 99.9% of > the time unless we upgrade the version of Ignite. Now let's say we have found > some bugs that need to be fixed on the production cluster without any > downtime. > > How do we approach this scenario in Ignite? If we have a cluster of, let's > say, 2 nodes, can we stop one node, > update the jars, and repeat the same process on the other node without any > downtime? I might not be thinking straight over here, but any help would be > appreciated. > > Thanks, > Luqman > > > > -- > View this message in context: > http://apache-ignite-users.70518.x6.nabble.com/Platform-updates-tp15998.html > Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Apache Flink meets Apache Ignite next week in London
Hello Igniters of London, I know this has been shared already, but the fellas at Apache Flink will be hosting us this coming Wednesday, the 10th of May, where we will dive into the Ignite/Flink integration. Speaker Bio: Akmal B. Chaudhri is a Technical Evangelist for GridGain, specializing in Big Data, NoSQL and NewSQL database technologies. His current interests also include Apache Spark, Machine Learning, Data Science and how to become a Data Scientist. He has over 25 years' experience in IT and has previously held roles as a developer, consultant, product strategist and technical trainer. He has worked for several blue-chip companies, such as Reuters and IBM, and also the Big Data startups Hortonworks (Hadoop) and DataStax (Cassandra NoSQL database). He has regularly presented at many international conferences and served on the program committees for a number of major conferences and workshops. He has published and presented widely and edited or co-edited 10 books. He holds a BSc (1st Class Hons.) in Computing and Information Systems, an MSc in Business Systems Analysis and Design, and a PhD in Computer Science. He is a Member of the British Computer Society (MBCS) and a Chartered IT Professional (CITP). Abstract: Apache Ignite is a high-performance, integrated and distributed in-memory platform for computing and transacting on large-scale data sets in real time, orders of magnitude faster than is possible with traditional disk-based or flash technologies. Ignite is a collection of independent, well-integrated, in-memory components geared to improving the performance and scalability of your application. Some of these components include: Advanced Clustering, Data Grid, SQL Grid, Streaming & CEP, Compute Grid, Service Grid, and the Ignite File System. Ignite also has integrations for accelerating data-processing frameworks such as Hadoop and Spark. 
In addition to its own stream processing capability, integrations also exist for JMS, MQTT, Apache Flume, Apache Storm, Apache Kafka, Apache Camel, and Apache Flink, which will be covered in the session. RSVP: https://www.meetup.com/Apache-Flink-London-Meetup/events/239663941/ We hope to see you there! Cheers, Christos
Apache Ignite vs. Apache Flink - a worthy comparison?
Igniters, I'm looking at a use case where I've been challenged to position Ignite vs. Flink. I know that at a high level these technologies target different use cases and can actually complement each other, but there is still some overlapping functionality. Would anyone care to share their views on where Ignite competes with Flink? Thanks! -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Apache-Ignite-vs-Apache-Flink-a-worthy-comparison-tp11818.html Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Re: Distributed Closures VS Executor Service
Hi Kyriakos, Thinking about your original approach of using compute tasks assigned to nodes with all CATEGORIES required by the task to be local, I believe you could use the Affinity interface to figure this out. You'd need to partition your CATEGORIES cache using the CATEGORY ID as the affinity key, then use the Affinity interface to determine which node a CATEGORY is mapped to and allocate same-node categories to tasks. Then you can direct each task to the correct node and force a local query. IgniteCache cache = ignite.cache(cacheName); Affinity aff = ignite.affinity(cacheName); // Get the partition ID for a given key int partId = aff.partition(categoryId); // Get the primary node for the key. This is probably the one you need ClusterNode node = aff.mapKeyToNode(categoryId); Javadoc: https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/Affinity.html#mapKeysToNodes(java.util.Collection) I still think the original approach I suggested is easier and makes more sense... -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Distributed-Closures-VS-Executor-Service-tp11192p11817.html Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Re: Distributed Closures VS Executor Service
If I understand correctly, the SQL query is executed within every task on each of the nodes, and it is not set to be a local query. Correct? If so, then what you are really doing is executing the SQL queries from all the nodes against all the nodes. This is bound to be inefficient. In essence, what you want to do is make your tasks work at a local level only. Why not just switch to simple distributed closures and broadcast the same task to all the nodes, but configure the SQL query within the task to execute as a local one: setLocal(true). You are already using ComputeTaskSplitAdapter, which abstracts the only difference from closures: the capability to automatically assign jobs to nodes. Broadcasting the same task with a local query means the same task would execute on all the nodes in parallel and perform only a local query on the node it is running on. Then your tasks can proceed to do the required calculations, write the results, etc. -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Distributed-Closures-VS-Executor-Service-tp11192p11758.html Sent from the Apache Ignite Users mailing list archive at Nabble.com.
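[A hedged sketch of the broadcast-plus-local-query pattern described above. The cache name "products", the Product value type, and the in-scope `ignite` instance are assumptions for illustration, not taken from the thread:]

```java
// Broadcast the same closure to every node in the cluster.
ignite.compute().broadcast(() -> {
    IgniteCache<Integer, Product> cache = Ignition.localIgnite().cache("products");

    // setLocal(true) restricts the query to the partitions held on this node,
    // so each node works on its own slice of the data in parallel.
    SqlFieldsQuery qry = new SqlFieldsQuery("SELECT id, price FROM Product")
            .setLocal(true);

    try (QueryCursor<List<?>> cursor = cache.query(qry)) {
        for (List<?> row : cursor) {
            // ... perform the per-node calculation and write results here ...
        }
    }
});
```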
Re: Non-collocated distributed SQL Joins across caches over separate cluster groups
Thanks Sergi, I understand that non-collocated distributed joins is a last resort very well. I still don't understand why cluster groups would make this any worse since in distributed non-collocated joins the data is NOT expected to be on the same node. Sounds to me that the cross-node calls would be almost the same... Christos -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Non-collocated-distributed-SQL-Joins-across-caches-over-separate-cluster-groups-tp11734p11748.html Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Re: Distributed Closures VS Executor Service
Hi Kyriakos, I agree with Nikolai on both points. Regarding point 2, you want to use data/data collocation and data/compute collocation. You basically need to ensure that when you insert data into the cache you use the same affinity key for all entries that need to be collocated, so that they end up on the same server. Then you can use an affinity run, or a broadcast with a local query enabled, to execute the query locally only. Now, in your case you said the data is in an external database, so I'm not sure how you would ensure the data for each query is local. Christos > On 5 Apr 2017, at 11:00, kmandalas <kyriakos.manda...@iriworldwide.com> wrote: > > Hello Nikolai, all > > About 1: yes, this was my perception as well, thanks for the confirmation. > > About 2: Even if all the nodes provide result sets of local execution to the > query initiator, if we are talking about results being lists containing a > couple of thousands of POJOs, then wouldn't it be a big overhead for these > objects to be transferred over the network? At least from some tests I have > performed, the response time is worse than querying the DB directly. Is there > a way I can make sure each node has all the data that it will query locally? > In combination with ComputeTaskAdapter always. > > > > -- > View this message in context: > http://apache-ignite-users.70518.x6.nabble.com/Distributed-Closures-VS-Executor-Service-tp11192p11739.html > Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Non-collocated distributed SQL Joins across caches over separate cluster groups
Igniters, Is it correct to assume the following: We have an Ignite cluster comprised of 2 cluster groups, A and B, that have different caches deployed. We use an Ignite client to obtain API access to the whole cluster and execute a join query that joins data across the 2 caches. My understanding is that this is not possible, correct? Reading this article [1 <https://dzone.com/articles/how-apache-ignite-helped-a-large-bank-process-geog-1>] it seems that such cross-cluster-group behaviour is supported with the transactions API and is also advised. Any thoughts on why the SQL API would not allow this and requires caches to be located on all nodes when the JOIN query is executed? Cheers, Christos
Re: Success story sharing
Hi walagi, that's great to know. Can you share more about the use case so I can understand it better? -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Success-story-sharing-tp11557p11614.html Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Re: Distributed Closures VS Executor Service
> - The application server-side queries the DB, finds the products belonging > to the selected categories, performs calculations (applies the metrics on > them etc.) and persists the results in the database for future reference So if I understand this correctly, you are not loading any data from the underlying DB into Ignite. You are querying the database directly through a compute task in the grid. I'm guessing you are not loading any data into the cache because it's simply too large. Is the database you are using a distributed deployment or a central point? If it's distributed and partitioned, then you might want to consider incorporating that into your design such that the compute tasks always work with local datasets in the underlying database. Do you use SQL to retrieve data from the database, or would key-based operations work? If you could figure out a way to retrieve products by ID, then you could use Ignite as a cache with read-through to the database. At least some products would then be cached, which would speed up your reads. For persisting the results, you might want to consider configuring your database as a cache store for Ignite. This way you can write the results directly to an Ignite cache and configure a write-behind policy, where Ignite takes care of replicating the results down to the database asynchronously. You would then not need Spring services or transient DAOs, since it would all be handled by Ignite. -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Distributed-Closures-VS-Executor-Service-tp11192p11609.html Sent from the Apache Ignite Users mailing list archive at Nabble.com.
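[The write-behind cache store setup suggested above might look roughly like this. MyResultStore, the Result type, and the tuning values are hypothetical placeholders; the configuration methods are standard Ignite CacheConfiguration API:]

```java
CacheConfiguration<Long, Result> ccfg = new CacheConfiguration<>("results");

// Plug the RDBMS in as a cache store (MyResultStore would implement CacheStore).
ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyResultStore.class));
ccfg.setReadThrough(true);   // cache misses fall through to the database
ccfg.setWriteThrough(true);  // required for the store to receive writes at all

// Write-behind: Ignite batches updates and flushes them to the DB asynchronously.
ccfg.setWriteBehindEnabled(true);
ccfg.setWriteBehindFlushFrequency(5_000); // flush every 5 seconds (illustrative)
ccfg.setWriteBehindBatchSize(512);        // max entries per store call (illustrative)
```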
Re: Distributed Closures VS Executor Service
Hi Kyriakos, - If I want to pass data to callables, for example lists of objects, small to medium size (collections from 2000 to 6000 objects maximum, with average object size some ~200KB), what is the best way to do this: to pass them as arguments in callables, or put them in the distributed cache for the callables to pick them up from there? Regarding this question about including the data with a job vs. using the cache: ideally what you would like to do is use compute and data affinity collocation, such that your computations are executed on the nodes with the relevant data. That is, you should insert data into the cache with a certain affinity and then use an affinity-run compute task to send the job to the correct node. Have a read here: https://apacheignite.readme.io/docs/affinity-collocation If you share more about your use case I should be able to help you! Regards, Christos On Wed, 15 Mar 2017 at 12:13, kmandalas <kyriakos.manda...@iriworldwide.com> wrote: Hello Ignite team, As we are evaluating potential usage of Ignite for our Analytics projects, I would like to ask the following: - *Compute Grid*: what is the practical difference between Distributed Closures and Executor Service? For example, if I have computations that I want to distribute (multiple callables) and I want to take advantage of all cores of the cluster (based on the existing load of course) so I get *fast results*, is there a difference between using exec.submit() vs. ignite.compute().call()? Moreover, if some distributed calculation is in progress and occupies all the cores of the cluster, and in the meantime a new distributed calculation is requested, then what will happen? Is there some queue mechanism, and how is it configured? Which is the best way to implement this? Is there a need for a messaging queue, or can we rely on thread pool size configuration etc.? 
- If I want to pass data to callables, for example Lists of objects, small to -medium size (collections from 2000 to 6000 objects maximum, with average object size some ~200KBs) what is the best way to do this: to pass them as argument in callables or put them in the distributed cache for the callables to pick them up from there? Thank you. -- View this message in context: Distributed Closures VS Executor Service <http://apache-ignite-users.70518.x6.nabble.com/Distributed-Closures-VS-Executor-Service-tp11192.html> Sent from the Apache Ignite Users mailing list archive <http://apache-ignite-users.70518.x6.nabble.com/> at Nabble.com.
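[For the specific question of shipping 2000-6000 objects of ~200KB with each callable, the cache-plus-affinity alternative can be sketched as follows. The cache name "batches", the key, the Item type, and the in-scope `ignite` instance are assumptions for illustration:]

```java
// Put the batch into a cache first; the affinity of the key determines
// which node stores (and therefore owns) the data.
IgniteCache<Long, List<Item>> batches = ignite.cache("batches");
batches.put(batchId, items);

// Then run the callable on the node that owns batchId: the data is already
// there, so nothing needs to be shipped with the job or pulled back over
// the network -- only the small result travels.
Integer count = ignite.compute().affinityCall("batches", batchId, () -> {
    List<Item> local = Ignition.localIgnite()
            .<Long, List<Item>>cache("batches").localPeek(batchId);
    return local == null ? 0 : local.size();
});
```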
4th Apache Ignite Meetup London hosted by IHS Markit on the 23rd of February
Fellow Igniters, We are very excited to invite you to join us for our 4th Apache Ignite Meetup, hosted by IHS Markit on the 23rd of February. IHS Markit will present first, on how they have been using Apache Ignite on several major projects. The 2nd part of the meetup will be led by Mandhir Gidda, GridGain's new EMEA Solution Architect, who has been working with in-memory technologies for nearly 10 years. A user of Apache Ignite for over 6 years, IHS Markit has recently completed two major scale-up projects architected around Ignite / GridGain. Some of the developers involved in these projects would like to share their experience with the Ignite community, the good and the bad; in particular, they will go through the limitations and pitfalls they experienced, and how to work around them to design stable and scalable applications using Ignite. More details here: https://www.meetup.com/Apache-Ignite-London/events/237189063/ We look forward to seeing you there! Best, Christos
Re: Official Apache Ignite meetup - Sept. 22nd - London
Hello Igniters, The recording from our most recent Ignite London meetup is ready to share. This time we had the pleasure of having Sam Lawrence from FSB Technologies talk about how they have used Apache Ignite to transform their online sports betting platform. Check it out: https://www.youtube.com/watch?v=rJZf1wIU1TE Please share your feedback regarding the content, recordings and style to help us shape these sessions as we move forward. Also, don’t forget to subscribe to our Apache Ignite YouTube channel to keep up to date with any new content we post. Cheers! Christos > On 16 Sep 2016, at 09:28, Christos Erotocritou <chris...@gridgain.com> wrote: > > We will try to stream the event live. But we will certainly share the video > and slides following the meetup. > > On Friday, 16 September 2016, Alexey Kuznetsov <akuznet...@gridgain.com> wrote: > Is it possible to see the slides or even a meetup video? > > On 16 Sep 2016 at 14:09, "Christos Erotocritou" < > chris...@gridgain.com> wrote: > > > Hello Igniters! > > > > Following our first and successful official Ignite meetup > > <http://www.meetup.com/Apache-Ignite-London/> in London, we are now > > excited to announce our next gathering. > > > > Our second session <http://meetu.ps/e/BVTs1/1RGQ6/f> will be on the 22nd > > of September and we've invited FSB Technologies <http://www.fsbtech.com/> to > > share their Ignite story. > > > > Event Link: http://meetu.ps/e/BVTs1/1RGQ6/f > > > > If you are in London and have time please join us for a great afternoon of > > interesting talks, beers & pizzas! > > > > Please help us spread the word and grow our community by sharing this > > invite far and wide. > > > > Thanks, > > > > Christos > > > >
Re: Official Apache Ignite meetup - Sept. 22nd - London
We will try to stream the event live. But we will certainly share the video and slides following the meetup. On Friday, 16 September 2016, Alexey Kuznetsov <akuznet...@gridgain.com> wrote: > Is it possible to see the slides or even a meetup video? > > On 16 Sep 2016 at 14:09, "Christos Erotocritou" < > chris...@gridgain.com> wrote: > > > Hello Igniters! > > > > Following our first and successful official Ignite meetup > > <http://www.meetup.com/Apache-Ignite-London/> in London, we are now > > excited to announce our next gathering. > > > > Our second session <http://meetu.ps/e/BVTs1/1RGQ6/f> will be on the 22nd > > of September and we've invited FSB Technologies <http://www.fsbtech.com/> > to > > share their Ignite story. > > > > Event Link: http://meetu.ps/e/BVTs1/1RGQ6/f > > > > If you are in London and have time please join us for a great afternoon > of > > interesting talks, beers & pizzas! > > > > Please help us spread the word and grow our community by sharing this > > invite far and wide. > > > > Thanks, > > > > Christos > > > > >
Ignite Download links broken
Hey guys, The links on the website seem to be broken, can someone check this? https://ignite.apache.org/download.html#binaries Thanks, Christos
Ignite & Kubernetes
Hi all, Is anyone working with Ignite & Kubernetes? Moreover, I’d like to understand how auto-discovery of new Ignite nodes could work there. Thanks, Christos
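[For anyone landing on this thread later: Ignite ships an optional ignite-kubernetes module whose IP finder discovers peer pods through the Kubernetes API, which answers the auto-discovery question. A minimal sketch in Ignite's Spring XML config style; the module must be on the classpath, and service/namespace defaults are assumptions:]

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="ipFinder">
                <!-- Resolves the addresses of other Ignite pods via a Kubernetes
                     service, so newly started pods join the cluster automatically. -->
                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder"/>
            </property>
        </bean>
    </property>
</bean>
```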
Re: Semaphore blocking on tryAcquire() while holding a cache-lock
Ah, I realise now that the FAQ you are talking about is probably more of a dev one, whereas the one I’ve created is more product-focused. Christos > On 11 Mar 2016, at 18:20, Christos Erotocritou <chris...@gridgain.com> wrote: > > We already have a basic FAQ page which I am populating: > http://apacheignite.gridgain.org/docs/faq > > Please feel free to add to it. > > Not sure if we want to migrate this to the wiki? > > Christos > >> On 11 Mar 2016, at 17:35, Dmitriy Setrakyan <dsetrak...@apache.org> wrote: >> >> +1 on FAQ >> >> Can we just create a page, and start populating it? >> >> D. >> >> On Fri, Mar 11, 2016 at 3:25 AM, Anton Vinogradov <avinogra...@gridgain.com> >> wrote: >> >>> Yakov, >>> >>> I've answered. >>> It seems we should have a special FAQ section on the Ignite wiki to publish such >>> things. >>> >>> On Sun, Mar 6, 2016 at 12:21 PM, Yakov Zhdanov <yzhda...@apache.org> >>> wrote: >>> >>>> Vlad and all (esp. Val and Anton V.), >>>> >>>> I reviewed the PR. My comments are in the ticket. >>>> >>>> Anton V., there is a question regarding optimized-classnames.properties. >>>> Can you please respond in the ticket? >>>> >>>> >>>> --Yakov >>>> >>>> On 2016-02-29 at 16:00 GMT+06:00, Yakov Zhdanov <yzhda...@apache.org> wrote: >>>> >>>>> Vlad, that's great! I will take a look this week. Reassigning the ticket to >>>>> myself. >>>>> >>>>> --Yakov >>>>> >>>>> On 2016-02-26 at 18:37 GMT+03:00, Vladisav Jelisavcic <vladis...@gmail.com> wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> I recently implemented the distributed ReentrantLock - IGNITE-642. >>>>>> I made a pull request, so hopefully this can be added to the next >>>>>> release. 
>>>>>> >>>>>> Best regards, >>>>>> Vladisav >>>>>> >>>>>> On Thu, Feb 18, 2016 at 10:49 AM, Alexey Goncharuk < >>>>>> alexey.goncha...@gmail.com <mailto:alexey.goncha...@gmail.com>> wrote: >>>>>> >>>>>>> Folks, >>>>>>> >>>>>>> The current implementation of IgniteCache.lock(key).lock() has the >>>>>> same >>>>>>> semantics as the transactional locks - cache topology cannot be >>>>>> changed >>>>>>> while there exists an ongoing transaction or an explicit lock is >>>>>> held. The >>>>>>> restriction for transactions is quite fundamental, the lock() issue >>>>>> can be >>>>>>> fixed if we re-implement locking the same way IgniteSemaphore >>>>>> currently >>>>>>> works. >>>>>>> >>>>>>> As for the "Failed to find semaphore with the given name" message, my >>>>>> first >>>>>>> guess is that DataStructures were configured with 1 backups which led >>>>>> to >>>>>>> the data loss when two nodes were stopped. Mario, can you please >>>>>> re-test >>>>>>> your semaphore scenario with 2 backups configured for data structures? >>>>>>> From my side, I can also take a look at the semaphore issue when I'm >>>>>> done >>>>>>> with IGNITE-2610. >>>>>>> >>>>>> >>>>> >>>>> >>>> >>> >
Re: How does Apache Ignite ensure data consistency between multiple servers?
If you use full sync mode for backups, then the client node will wait for the write or commit to complete on all participating remote nodes (primary and backups). This is the most restrictive configuration, but it guarantees data consistency. In addition, if you use transactions for any grid operations, data changes will be committed only if the whole transaction is successful, i.e. if an object with an acquired optimistic lock changes, the transaction will fail and any changes will be rolled back. > On 1 Mar 2016, at 10:19, 上帝已死 <527901...@qq.com> wrote: > > How does Apache Ignite ensure that the data between multiple servers is > consistent at any time? > > > > -- > View this message in context: > http://apache-ignite-users.70518.x6.nabble.com/Apache-Ignite-is-how-to-ensure-the-data-s-consistency-between-multiple-server-tp3288p3291.html > Sent from the Apache Ignite Users mailing list archive at Nabble.com.
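[The combination described above, full sync backups plus optimistic transactions, can be sketched like this. The cache name, the Account type, the backup count, and the in-scope `ignite` instance are assumptions for illustration:]

```java
CacheConfiguration<Integer, Account> ccfg = new CacheConfiguration<>("accounts");
ccfg.setBackups(1); // keep one backup copy of every partition

// The client waits until primary AND backup nodes have applied each change.
ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);

// Transactional mode so multi-key updates commit or roll back together.
ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
IgniteCache<Integer, Account> cache = ignite.getOrCreateCache(ccfg);

// Optimistic/serializable: the commit fails if an entry read inside the
// transaction was changed concurrently, and all changes roll back together.
try (Transaction tx = ignite.transactions().txStart(
        TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE)) {
    Account a = cache.get(1);
    a.balance += 100;
    cache.put(1, a);
    tx.commit();
}
```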