Re: Using OFF_HEAP_TIERED and Replicated Heap continuously grows, eventually heap crash
Here is how I am constructing that query:

SqlQuery SQL_PAGE_QUERY = new SqlQuery(TiffPage.class, "documentId = ? AND pageNumber = ?");

-- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Using-OFF-HEAP-TIERED-and-Replicated-Heap-continously-grows-eventually-heap-crash-tp8604p8646.html Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Re: Using OFF_HEAP_TIERED and Replicated Heap continuously grows, eventually heap crash
I have isolated my issue to the cache configuration of indexed types. If I remove the indexed types property, the heap gets cleaned up. If I leave this property in, only about 10 to 15% of the heap gets cleared on any garbage collection. Do I have something configured wrong regarding this property?

Indexed types property values:
java.util.UUID
com.mgic.documentviewer.imaging.cache.beans.TiffPage

Cache Object
---
public class TiffPage extends ImagePage {
}

public abstract class ImagePage implements Serializable {
    private static final long serialVersionUID = 1L;

    private UUID id;

    /** Will be indexed on its own and also participate in the group index for page access. */
    @QuerySqlField(index = true, orderedGroups = {@QuerySqlField.Group(name = "doc_page_idx", order = 0, descending = true)})
    private String documentId;

    /** Will participate in the group index sorted in ascending order. */
    @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = "doc_page_idx", order = 1)})
    private Integer pageNumber;

    private String pageFormat;

    private byte[] image;

    public ImagePage(String documentId, byte[] image, String pageFormat, Integer pageNumber) {
        this.id = UUID.randomUUID();
        this.documentId = documentId;
        this.image = image;
        this.pageFormat = pageFormat;
        this.pageNumber = pageNumber;
    }
}

Example Insertion into map
---
TiffPage tiffPage = new TiffPage(Integer.toString(documentId), image, "png", 1);
imageCache.put(tiffPage.getId(), tiffPage);

Example Accessing from Map
---
If I remove this property, these queries return no results. If I leave the property in, these queries return the expected results.

SQL_PAGE_QUERY.setArgs(documentId, 1);
ImagePage page = imageCache.query(SQL_PAGE_QUERY).getAll().get(0).getValue();

-- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Using-OFF-HEAP-TIERED-and-Replicated-Heap-continously-grows-eventually-heap-crash-tp8604p8645.html
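For reference, the two indexed-type values above can also be set programmatically. The following is a sketch only (the class and cache names are taken from the post; everything else is an assumption, not the poster's exact configuration):

```java
import java.util.UUID;
import org.apache.ignite.cache.CacheMemoryMode;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<UUID, TiffPage> cfg = new CacheConfiguration<>("ImageCache");
// Off-heap tiered mode, as described in the thread.
cfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
// This is the property being toggled: registering UUID -> TiffPage builds the
// SQL indexes that make the SqlQuery above return results.
cfg.setIndexedTypes(UUID.class, TiffPage.class);
```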
Re: When writethrough processing, Persistent storage failed
I used the one provided out of the box. -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/When-writethrough-processing-Persistent-storage-failed-tp8622p8644.html
Re: Cache hit rate for Ignite not adding up
And another: After restarting a node, the local stats show that it has 162K cache puts, but only a size of 81K. It's not quite double, but it's really close. Does that mean there's 81K duplicate puts that were ignored? Or does it mean something else? -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Cache-hit-rate-for-Ignite-not-adding-up-tp8602p8642.html
Re: Using OFF_HEAP_TIERED and Replicated Heap continuously grows, eventually heap crash
More information: here is a complete dump of the cache configuration for this particular cache.

ImageCache cache configuration settings:
"CacheConfiguration [name=ImageCache, storeConcurrentLoadAllThreshold=5, rebalancePoolSize=2, rebalanceTimeout=1, evictPlc=null, evictSync=false, evictKeyBufSize=1024, evictSyncConcurrencyLvl=4, evictSyncTimeout=1, evictFilter=null, evictMaxOverflowRatio=10.0, eagerTtl=true, dfltLockTimeout=0, startSize=150, nearCfg=null, writeSync=PRIMARY_SYNC, storeFactory=null, storeKeepBinary=false, loadPrevVal=false, aff=org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction@5bfbbe4, cacheMode=REPLICATED, atomicityMode=ATOMIC, atomicWriteOrderMode=PRIMARY, backups=2147483647, invalidate=false, tmLookupClsName=null, rebalanceMode=ASYNC, rebalanceOrder=0, rebalanceBatchSize=524288, rebalanceBatchesPrefetchCount=2, offHeapMaxMem=1073741824, swapEnabled=false, maxConcurrentAsyncOps=500, writeBehindEnabled=false, writeBehindFlushSize=10240, writeBehindFlushFreq=5000, writeBehindFlushThreadCnt=1, writeBehindBatchSize=512, maxQryIterCnt=1024, memMode=OFFHEAP_TIERED, affMapper=org.apache.ignite.internal.processors.cache.CacheDefaultBinaryAffinityKeyMapper@6670e954, rebalanceDelay=0, rebalanceThrottle=0, interceptor=null, longQryWarnTimeout=3000, readFromBackup=true, nodeFilter=org.apache.ignite.configuration.CacheConfiguration$IgniteAllNodesPredicate@88961cc, sqlSchema=null, sqlEscapeAll=false, sqlOnheapRowCacheSize=10, snapshotableIdx=false, cpOnRead=true, topValidator=null]"

-- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Using-OFF-HEAP-TIERED-and-Replicated-Heap-continously-grows-eventually-heap-crash-tp8604p8641.html
Re: What happens when backup is set as 1 and cache mode as LOCAL
Sounds weird to me. Can you provide a test that I can run? -Val -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/What-happens-when-backup-is-set-as-1-and-cache-mode-as-LOCAL-tp8599p8640.html
Re: Cache hit rate for Ignite not adding up
Another example of this not really adding up: I run a test hitting a bunch of data. We get a low cache hit rate, as expected, but presumably we're filling the cache with everything we missed. Nothing is getting removed from the cache, according to the metrics through jconsole. So then I run the exact same requests. We get a HIGHER hit rate (~25%) but still very far away from 100%, which is what I'd think we would get given that we're querying exclusively for stuff we already queried for. -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Cache-hit-rate-for-Ignite-not-adding-up-tp8602p8639.html
Re: Cache hit rate for Ignite not adding up
Shouldn't Ignite be putting an object into cache every time it's missed, though? As long as it isn't running out of room, shouldn't it have one entry for every time it missed?

Another question I have is: when it says there are X misses, does that mean:
1) X times, this host looked for an object in the distributed cache that wasn't there.
2) X times, one of the distributed hosts made a call to find an object in this host's cache partition that wasn't there.
3) something else?

-- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Cache-hit-rate-for-Ignite-not-adding-up-tp8602p8638.html
Re: org.h2.api.JavaObjectSerializer not found
Hi,

It seems like the H2 dependency has not been loaded. Did this exception occur after the manifest file was changed [1]? I think the exception is caused by an incorrectly assembled OSGi bundle.

[1]: http://apache-ignite-users.70518.x6.nabble.com/KARAF-4-6-4-8-Snapshot-IgniteAbstractOsgiContextActivator-tc8552.html

On Thu, Oct 27, 2016 at 3:26 PM, flexvalley wrote:
> Sorry, I'm new to this forum... I posted without subscribing first.
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/org-h2-api-JavaObjectSerializer-not-found-tp8538p8553.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.

-- Vladislav Pyatkov
Re: Killing a node under load stalls the grid with ignite 1.7
Hi,

I mean, if you need to create the Order entry before the Trade, you can do it in the CacheStore implementation, but do not use IgniteCache for this. Just write the inserts for both tables. Why does this approach not work for you?

On Mon, Oct 31, 2016 at 4:55 PM, bintisepaha wrote:
> Hi Vladislav,
>
> What you are describing above is not clear to me at all.
> Could you please elaborate?
>
> Thanks,
> Binti
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Killing-a-node-under-load-stalls-the-grid-with-ignite-1-7-tp8130p8630.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Re: When writethrough processing, Persistent storage failed
Hi, What cache store implementation are you using? The one provided out of the box, or your own? Are there any exceptions during the database write? -Val -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/When-writethrough-processing-Persistent-storage-failed-tp8622p8632.html
Re: Reply: dynamic data structure with binaryobject
Hi Shawn, invoke() allows you to avoid sending the value across the network. So unless you need the whole value on the client, it is always the preferable operation, especially if values are large. -Val -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/dynamic-data-structure-with-binaryobject-tp8581p8631.html
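The trade-off Val describes can be illustrated in a single JVM. The sketch below is an analogy only (ConcurrentHashMap.compute() stands in for IgniteCache.invoke(); it is not Ignite's API): the get-then-put pattern moves the whole value out and back, while the invoke-style pattern ships only the small update logic to where the entry lives.

```java
import java.util.concurrent.ConcurrentHashMap;

public class InvokeVsGet {
    static int demo() {
        ConcurrentHashMap<String, int[]> cache = new ConcurrentHashMap<>();
        cache.put("counter", new int[] {41});

        // get-then-put: the whole value crosses the "network" twice.
        int[] copy = cache.get("counter");
        copy[0]++;
        cache.put("counter", copy);

        // invoke-style: only the update function travels; the (possibly
        // large) value stays where it is stored.
        cache.compute("counter", (k, v) -> { v[0]++; return v; });

        return cache.get("counter")[0];
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 43
    }
}
```

In Ignite the difference matters most for large values such as byte arrays, where invoke() avoids two full serializations per update.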
Re: Problem with getting started on Windows 7
Hi, Try to unset IGNITE_HOME. It will be detected automatically by the ignite.bat script. -Val -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Problem-with-getting-started-on-Windows-7-tp4830p8629.html
Re: Text Query
Hi, No, you can't. The JDBC driver is for SQL only. -Val -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Text-Query-tp8610p8628.html
Re: Affinity Collocation
Hi, The answer is YES. Everything with the same affinity key is stored on the same node. -Val -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Affinity-Collocation-tp4576p8626.html
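A simplified sketch of why this holds (this shows the principle only, not Ignite's actual RendezvousAffinityFunction): the partition, and therefore the owning node, is a pure function of the affinity key alone, so any two entries sharing an affinity key must land together.

```java
public class AffinitySketch {
    static final int PARTITIONS = 1024;

    // Partition is derived only from the affinity key, so every entry that
    // shares the key maps to the same partition, and each partition is
    // assigned to exactly one primary node.
    static int partition(Object affinityKey) {
        return (affinityKey.hashCode() & Integer.MAX_VALUE) % PARTITIONS;
    }

    public static void main(String[] args) {
        // Two different cache entries keyed by the same order id
        // always resolve to the same partition:
        System.out.println(partition("order-42") == partition("order-42")); // true
    }
}
```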
Re: Cache hit rate for Ignite not adding up
Hi, These numbers look valid to me. The number of hits/misses is incremented each time you access a key. I.e., if you call get(key) for the same key three times, you will get 3 hits if the key exists, or 3 misses if it doesn't. That said, the number of gets must equal hits + misses, which is true in your case. Cache size is a completely different metric; it just shows how many entries you have in the cache. -Val -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Cache-hit-rate-for-Ignite-not-adding-up-tp8602p8625.html
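The accounting above can be checked with a few lines. This is just the arithmetic on the numbers from the thread (in a real deployment the inputs would come from CacheMetrics, e.g. via JMX):

```java
public class HitRateMath {
    // gets = hits + misses, so the hit rate is hits / (hits + misses).
    static double hitRate(long hits, long misses) {
        long gets = hits + misses;
        return gets == 0 ? 0.0 : (double) hits / gets;
    }

    public static void main(String[] args) {
        // Three get(key) calls for an existing key: 3 hits, 0 misses.
        System.out.println(hitRate(3, 0)); // 1.0
        // The ~25% rate reported earlier corresponds to 1 hit per 3 misses.
        System.out.println(hitRate(1, 3)); // 0.25
    }
}
```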
Re: Killing a node under load stalls the grid with ignite 1.7
Hi,

You need to write to the database only in the write-behind handler (you can fill several tables if needed, for example "Order" and then "Trade"). A cache that has read-through on the "Trade" table will always read the value from the database when the cache entry does not exist.

On Thu, Oct 27, 2016 at 6:03 PM, bintisepaha wrote:
> Yes, I think you are right. Is there any setting that we can use in write-behind that will not lock the entries?
> The use case we have is like this:
>
> Parent table - Order (Order Cache)
> Child table - Trade (Trade Cache)
>
> We only have write-behind on the Order Cache, and when writing it we write to both the order and trade tables. So we query the Trade cache from the Order cache store's writeAll(), which is causing the above issue. We need to do this because we cannot write a trade to the database without writing its order, due to foreign key constraints and data integrity.
>
> Do you have any recommendations to solve this problem? We cannot use write-through. How do we make sure 2 tables are written in order if they are in separate caches?
>
> Thanks,
> Binti
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Killing-a-node-under-load-stalls-the-grid-with-ignite-1-7-tp8130p8557.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.

-- Vladislav Pyatkov
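The advice above (insert the parent Order row before its Trade rows inside the store, instead of querying another cache) can be sketched as follows. Table and column names here are hypothetical, and in real code these would be PreparedStatements executed in a single JDBC transaction:

```java
import java.util.ArrayList;
import java.util.List;

public class OrderedStoreWrites {
    // The statements a write-behind store could issue for one Order and its
    // Trades: the parent row always comes first, so the Trade foreign key
    // constraint is satisfied without ever reading the Trade cache.
    static List<String> statementsFor(String orderId, List<String> tradeIds) {
        List<String> sql = new ArrayList<>();
        sql.add("INSERT INTO ORDERS (ID) VALUES ('" + orderId + "')");
        for (String tradeId : tradeIds) {
            sql.add("INSERT INTO TRADES (ID, ORDER_ID) VALUES ('"
                    + tradeId + "', '" + orderId + "')");
        }
        return sql;
    }

    public static void main(String[] args) {
        statementsFor("o-1", List.of("t-1", "t-2")).forEach(System.out::println);
    }
}
```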
When writethrough processing, Persistent storage failed
When I was putting data into a cache (configured with write-through), the persistent storage (Oracle) failed. I found that Ignite still continuously tried to write data to Oracle, until all the public thread pools were filled. When Oracle was restored, Ignite still could not automatically reconnect to it. I do not know whether this is normal behavior, or whether there is anything else I need to configure. -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/When-writethrough-processing-Persistent-storage-failed-tp8622.html
Re: IN Query
Hi Anil,

This might be a PreparedStatement restriction. In that case you need to generate the query by hand. Look at Stack Overflow [1].

[1]: http://stackoverflow.com/questions/3107044/preparedstatement-with-list-of-parameters-in-a-in-clause

On Sat, Oct 29, 2016 at 4:44 PM, Anil wrote:
> second try.
>
> On 28 October 2016 at 15:24, Anil wrote:
>
>> Any inputs please?
>>
>> On 28 October 2016 at 09:27, Anil wrote:
>>
>>> Hi Val,
>>>
>>> The below one is multiple IN queries with AND, but not OR. Correct?
>>>
>>> SqlQuery with a join table worked for the IN query, but the following prepared statement is not working:
>>>
>>> List<String> inParameter = new ArrayList<>();
>>> inParameter.add("8446ddce-5b40-11e6-85f9-005056a90879");
>>> inParameter.add("f5822409-5b40-11e6-ae7c-005056a91276");
>>> inParameter.add("9f445a19-5b40-11e6-ab1a-005056a95c7a");
>>> inParameter.add("fd12c96f-5b40-11e6-83f6-005056a947e8");
>>> PreparedStatement statement = conn.prepareStatement("SELECT p.name FROM Person p join table(joinId VARCHAR(25) = ?) k on p.id = k.joinId");
>>> statement.setObject(1, inParameter.toArray());
>>> ResultSet rs = statement.executeQuery();
>>>
>>> Thanks for your help.
>>
>

-- Vladislav Pyatkov
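Generating the query by hand means emitting one '?' placeholder per list element, since JDBC cannot bind a whole collection to a single placeholder. A minimal sketch (the query text is illustrative, not Anil's exact schema):

```java
import java.util.List;
import java.util.StringJoiner;

public class InClause {
    // Builds e.g. "SELECT p.name FROM Person p WHERE p.id IN (?, ?, ?)" with
    // one placeholder per parameter.
    static String buildInQuery(String prefix, int paramCount) {
        StringJoiner in = new StringJoiner(", ", prefix + " IN (", ")");
        for (int i = 0; i < paramCount; i++) {
            in.add("?");
        }
        return in.toString();
    }

    public static void main(String[] args) {
        List<String> ids = List.of("8446ddce", "f5822409", "9f445a19");
        String sql = buildInQuery("SELECT p.name FROM Person p WHERE p.id", ids.size());
        System.out.println(sql);
        // Then bind each value: statement.setString(i + 1, ids.get(i)).
    }
}
```

The downside is that each distinct list size produces a different SQL string, so prepared-statement caching is less effective than with a fixed query.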
Re: KARAF 4.6/4.8.Snapshot IgniteAbstractOsgiContextActivator
Hi, The discussion has been moved to another thread: http://apache-ignite-users.70518.x6.nabble.com/KARAF-4-6-4-8-Snapshot-IgniteAbstractOsgiContextActivator-tc8552.html -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/KARAF-4-6-4-8-Snapshot-IgniteAbstractOsgiContextActivator-tp8515p8620.html
Re: java.lang.IllegalStateException: Failed to create data streamer (grid is stopping).
Hi Bob,

This message means that the Ignite instance was stopped before cache.loadAll was invoked. You can check it using Ignition.state() in your code.

On Mon, Oct 31, 2016 at 5:58 AM, 胡永亮/Bob wrote:
> Hi,
>
> I am using Ignite 1.6.
> I get the exception from the mail title when I call cache.loadAll(keys, true, null);
> This exception is not logged; I found it through debugging.
>
> Actually, the Ignite cluster is running.
>
> Can anyone tell me what the possible reason is?
> Thank you.
>
> --
> Bob

-- Vladislav Pyatkov