At this point queues cannot be persisted. The ability to store any data
structure or cache to disk is currently being designed. I believe it will
become part of an Ignite release by the end of March.
As far as the transaction logic goes, I would like other community members to
chime in. How
Hi,
Query results are already paginated by default. The page size is 1024 elements
by default and can be changed via the Query.setPageSize() method. How many
entries do you have in the cache? Iterating through a large data set is not
going to be fast in any case.
SQL queries should be used for indexed searches, and they
Most likely memory is consumed by indexes. How many of them do you have?
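For illustration, here is a sketch of iterating a cache with a larger page size; the cache name, entry types, and page size value are assumptions, not from your message, and this requires a running Ignite node:

```java
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class PagedScan {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Hypothetical cache name and types.
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

            ScanQuery<Integer, String> qry = new ScanQuery<>();
            qry.setPageSize(4096); // default is 1024

            // Entries are streamed page by page as the cursor is iterated.
            try (QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry)) {
                for (Cache.Entry<Integer, String> e : cur)
                    System.out.println(e.getKey() + " -> " + e.getValue());
            }
        }
    }
}
```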
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Huge-heap-size-for-ignite-cache-when-storing-20-million-objects-of-small-size-tp3049p3072.html
Sent from the Apache Ignite Users
Can we use pagination while fetching the complete cache data?
I am using the method below for fetching the complete cache data, but it is
taking too much time:

ScanQuery<K, V> scanQuery = new ScanQuery<>();
QueryCursor<Cache.Entry<K, V>> cursor = cache.query(scanQuery);
Hi Val
At the time of this issue I checked the topology through Visor and all 3
client and 2 server nodes were there. There were no items in any cache. I
could see the caches created on all 5 nodes (near caches on the 3 clients and
replicated caches on the 2 servers). I also tried clearing a cache through Visor
I don’t think transactions on queues are supported yet; however, I am not
sure why not. It seems to be for legacy reasons. I think it should be a
fairly easy feature to add.
Can someone from the community comment please?
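For reference, basic (non-transactional) queue usage looks like the following sketch; the queue name and configuration are assumptions, and it requires a running Ignite node:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteQueue;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CollectionConfiguration;

public class QueueExample {
    public static void main(String[] args) throws Exception {
        try (Ignite ignite = Ignition.start()) {
            // Unbounded distributed queue; "tasks" is a hypothetical name.
            IgniteQueue<String> queue =
                ignite.queue("tasks", 0 /* unbounded */, new CollectionConfiguration());

            // Producer side: offer an item.
            queue.offer("item-1");

            // Consumer side: take() blocks until an item is available.
            // Note: if the consumer crashes after take(), the item is lost;
            // there is no transactional take-and-rollback for queues today.
            String item = queue.take();
            System.out.println("Processed: " + item);
        }
    }
}
```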
On Wed, Feb 17, 2016 at 4:04 PM, Stolidedog wrote:
> From what I'm able to gather from the documentation, transactions only apply
> to caches? Is it possible to use transactions with queues?
> Basically I want to have one application offer items onto a queue, and
> another application take them off the queue and process them; but if it
> crashes, I want another
I see that this is a known issue:
https://issues.apache.org/jira/browse/IGNITE-1631
and is planned to be resolved in 1.6
What is the workaround? Should I choose not to use DUAL_SYNC?
If I create the file using IgniteFileSystem.create() would that sidestep
this problem?
Thanks,
Rk
x77309
Hi Vinay,
CachePartialUpdateException is thrown by an update operation (put, putAll,
remove, removeAll, ...) if the updates for one or more keys involved in the
operation failed. This exception has a failedKeys() method that tells you
which keys failed, so you can retry only them; there is no need to retry
the whole operation.
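A sketch of that retry pattern; the cache types and batch contents are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CachePartialUpdateException;

public class RetryFailedKeys {
    // Retry only the keys that failed in a bulk update (sketch).
    static void putAllWithRetry(IgniteCache<Integer, String> cache,
                                Map<Integer, String> batch) {
        try {
            cache.putAll(batch);
        }
        catch (CachePartialUpdateException e) {
            Map<Integer, String> retry = new HashMap<>();

            // failedKeys() reports the keys whose updates did not go through.
            for (Object key : e.failedKeys())
                retry.put((Integer)key, batch.get(key));

            cache.putAll(retry); // retry only the failed subset
        }
    }
}
```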
I am using Ignite 1.5.0.final on RHEL6 over JRE8.
I have an initialized IGFS file system with a secondary backup store on the
local file system, and I have created a path (directory) *a/b/c/d*.
I am trying to write content to a (non-existent) file as follows:
*IgniteFileSystem fs = ...;
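For context, a sketch of the write I am attempting; the IGFS name and file name here are placeholders, and fs.create() with overwrite enabled creates the file if it does not exist:

```java
import java.nio.charset.StandardCharsets;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteFileSystem;
import org.apache.ignite.Ignition;
import org.apache.ignite.igfs.IgfsOutputStream;
import org.apache.ignite.igfs.IgfsPath;

public class IgfsWrite {
    public static void main(String[] args) throws Exception {
        try (Ignite ignite = Ignition.start()) {
            // Hypothetical IGFS instance name.
            IgniteFileSystem fs = ignite.fileSystem("igfs");

            // Hypothetical file name under the existing directory.
            IgfsPath path = new IgfsPath("/a/b/c/d/file.txt");

            // create(path, overwrite): creates the file, overwriting if present.
            try (IgfsOutputStream out = fs.create(path, true)) {
                out.write("some content".getBytes(StandardCharsets.UTF_8));
            }
        }
    }
}
```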
Hi,
I faced an issue today and couldn't figure out what's wrong, hence thought of
asking on this forum.
I added an expiration policy to 2 cache configurations, stopped all cache
server nodes and then started them one by one. My client nodes had near
caches for these caches, and I am not sure if this
Hi Christian,
You can broadcast a closure that uses the IgniteCache.localEntries() method
to iterate through the local data and remove all the required entries. This
way you will minimize network utilization.
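A sketch of this approach; the cache name and the removal condition are assumptions:

```java
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CachePeekMode;

public class LocalRemove {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Run the closure on every node; each node scans only its local data.
            ignite.compute().broadcast(() -> {
                IgniteCache<Integer, String> cache =
                    Ignition.localIgnite().cache("myCache"); // hypothetical name

                for (Cache.Entry<Integer, String> e :
                        cache.localEntries(CachePeekMode.PRIMARY)) {
                    if (e.getValue().startsWith("stale")) // hypothetical condition
                        cache.remove(e.getKey());
                }
            });
        }
    }
}
```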
Will this work for you?
-Val
Hi Vladimir,
We are using 2 server nodes of 5 GB each, and keeping the backup count at 1.
We used Binarylizable and raw mode, which shrank the size, but SQL queries do
not run in raw mode. Is there any way we can run SQL queries with raw mode,
or could we use an external serializer with which SQL
Hi vidhu,
If your object has 101 int fields, each instance serialized with
BinaryMarshaller (the default one) will be an array of 731 bytes. This means
1.4 GB of heap for 2M objects. How many nodes are there in your application?
Do you have backups? If possible, please share your configuration and
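The arithmetic above can be checked directly (731 bytes per object is the serialized size stated above; the class name is hypothetical):

```java
// Quick sanity check of the heap estimate: 2M objects at 731 bytes each.
public class HeapEstimate {
    public static void main(String[] args) {
        long bytesPerObject = 731L;     // serialized size per object (from above)
        long objectCount = 2_000_000L;

        long totalBytes = bytesPerObject * objectCount; // 1,462,000,000 bytes

        double gib = totalBytes / (1024.0 * 1024 * 1024);
        System.out.printf("%.2f GiB%n", gib); // prints "1.36 GiB" (~1.4 GB)
    }
}
```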
Denis,
Please see below for answers.
Cheers,
-Mateusz
On Tue, Feb 16, 2016 at 10:18 PM, Denis Magda wrote:
> Hi Mateusz,
>
> I've revisited the whole discussion from the beginning and should say that
> the solution based on the distributed queue won't work for you even if
Yes, we are not facing any issues related to GC, and we can run SQL queries
as well. I tried triggering GC through VisualVM but the size didn't reduce.
We need to load a lot of data into our Ignite cluster (approx. 112 million
objects in the cluster). If we go by the current scenario we will need a lot
of RAM and