Re: 回复: Ignite used memory 7 times greater than imported data

2016-07-13 Thread Denis Magda
Here


*Next, remove this setting to let Ignite pre-allocate the size it will need
in practice*

I meant removing the "startSize" parameter.
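For readers who don't have the original configuration at hand, the property being discussed looks roughly like this in a Spring XML cache configuration (illustrative only; the cache name and value here are made up, not taken from the original post):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- Removing this line lets Ignite grow the underlying map
         to the size it actually needs at runtime. -->
    <property name="startSize" value="1000000"/>
</bean>
```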

--
Denis

On Thu, Jul 14, 2016 at 7:20 AM, Denis Magda  wrote:

> Hi,
>
> There is a capacity planning guide [1] that lists how the memory is
> consumed by Ignite and how you can control it.
>
> The first thing is that if you have a single backup then the amount of
> memory consumed is doubled.
>
> Next, remove this setting to let Ignite pre-allocate the size it will need
> in practice
>
>
>
> This property doesn't have any effect at all for off heap caches
>
> 
> 
> 
>
> Finally, how many indexes do you have per object? Indexes are not free, as
> [1] states. Also make sure that there are no "unable to marshall with
> BinaryMarshaller"-like messages in the logs; otherwise it would mean that
> values are not serialized and stored in a size-optimal way.
>
> --
> Denis
>
> [1] https://apacheignite.readme.io/docs/capacity-planning
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-used-memory-7-times-greater-than-imported-data-tp6225p6293.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Does ignite support UPDATE/DELETE sql

2016-07-13 Thread Dmitriy Setrakyan
We are currently working on adding insert/update/delete commands to Ignite.
Here is the ticket you can follow:

https://issues.apache.org/jira/browse/IGNITE-2294

Thanks,
D.

On Thu, Jul 14, 2016 at 6:58 AM, Denis Magda  wrote:

> Hi,
>
> Please properly subscribe to the Ignite's user list. Refer to this page for
> details - https://ignite.apache.org/community/resources.html#ask
>
> See my answers inline.
>
>
> zhaojun08 wrote
> > HI ALL,
> >
> > I am new to ignite, and I have a few questions to confirm.
> >
> > 1. I want to use Ignite to store RDBMS tables in memory. A table in an
> > RDBMS has many rows; does Ignite store every row as a Java object, like
> > the "Person" object in the docs? And is that the only way to store a row
> > in Ignite?
> /
> > Yes, Apache Ignite is an in-memory key-value store meaning that for every
> > key there should be a corresponding value. A key-value tuple will
> > correspond to a row from your RDBMS store.
> /
> >
> > 2. I notice that the "Person" class implements Serializable; does it mean
> > every row record is stored in Ignite in serialized form? If so, will
> > serialization degrade select performance, and what is the reason for
> > serialization?
> /
> > Objects are stored in a serialized form in memory. However, it doesn't
> > mean that JDK serialization techniques are used to prepare an object for
> > storage. In fact, Ignite uses its own BinaryMarshaller (serializer) that
> > has good performance characteristics -
> > https://apacheignite.readme.io/docs/binary-marshaller
> /
> >
> > 3. I have stored RDBMS data in Ignite; can I update a specific row using
> > an UPDATE/DELETE SQL statement to alter the table?
> /
> > This kind of query is not supported right now. You have to update
> > caches with methods like cache.put, cache.putAll, cache.invoke, etc.
> /
> >
> /
> > If you need to pre-load data from an RDBMS then you can rely on one of
> > the pre-loading strategies -
> > https://apacheignite.readme.io/docs/data-loading.
> > This topic should be useful for you as well -
> > https://apacheignite.readme.io/docs/persistent-store
> /
> >
> > --
> > Denis
> >
> > Many Thanks!
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Does-ignite-support-UPDATE-DELETE-sql-tp6290p6292.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: How does AffinityKey mapped?

2016-07-13 Thread November
Hi val

I came across a problem while using an inner join between two partitioned
caches. The inner join result is incomplete.

Following is my code. I use DTermKey as the key of both caches. The inner
join key is DTerm, so I put [AffinityKeyMapped] on DTermKey.DTerm. Is there
anything wrong?

var offerTermCache = ignite.GetOrCreateCache<DTermKey, OfferTerm>("offerTermCache");
var productTermCache = ignite.GetOrCreateCache<DTermKey, ProductTermCategory>("productTermCache");

class DTermKey
{
private long Id;

[AffinityKeyMapped]
private string DTerm;

public DTermKey(long id, string dTerm)
{
this.Id = id;
this.DTerm = dTerm;
}

public override bool Equals(object obj)
{
if (obj == null || !(obj is DTermKey))
{
return false;
}
else
{
DTermKey other = (DTermKey)obj;
return Id == other.Id && DTerm.Equals(other.DTerm);
}
}

public override int GetHashCode()
{
int hash = 13;
hash = (hash * 7) + Id.GetHashCode();
hash = (hash * 7) + DTerm.GetHashCode();

return hash;
}

public override string ToString()
{
return "DTermKey [id=" + Id + ", DTerm=" + DTerm + "]";
}
}

class OfferTerm
{
[QuerySqlField]
public string OfferId { get; set; }

[QuerySqlField]
public string Title { get; set; }

[QuerySqlField(IsIndexed = true)]
public string DTerm { get; set; }

public OfferTerm(string offerId, string title, string dTerm)
{
this.OfferId = offerId;
this.Title = title;
this.DTerm = dTerm;
}
}

class ProductTermCategory
{
[QuerySqlField(IsIndexed = true)]
public string DTerm { get; set; }

[QuerySqlField]
public double Entropy { get; set; }

public ProductTermCategory(string dTerm, double entropy)
{
this.DTerm = dTerm;
this.Entropy = entropy;
}
}


using (var productTermStreamer = ignite.GetDataStreamer<DTermKey, ProductTermCategory>(productTermCache.Name))
{
using (StreamReader sr = new StreamReader(productTermPath))
{
long id = 0;
string line;
while ((line = sr.ReadLine()) != null)
{
string[] strs = line.Split('\t');
double entropy = double.Parse(strs[1]);
if (entropy <= EntropyThreshold &&
keywordsCache.ContainsKey(strs[0]))
{
productTermStreamer.AddData(new DTermKey(id++, strs[0]),
    new ProductTermCategory(strs[0], entropy));
}
}
productTermStreamer.Flush();
}
}

long id = 0;
var sql = new SqlFieldsQuery(
    "select distinct OfferId, Title, NormalizedStemmed from OfferFeed");
var queryResult = offerFeedCache.QueryFields(sql);
foreach (var fields in queryResult)
{
offerTermCache.Put(new DTermKey(id++, (string)fields[2]),
    new OfferTerm((string)fields[0], (string)fields[1], (string)fields[2]));
}

ignite.DestroyCache(offerFeedCache.Name);

var joinSql = new SqlFieldsQuery(
    "select OfferTerm.OfferId, OfferTerm.Title, ProductTermCategory.DTerm, " +
    "ProductTermCategory.Entropy " +
    "from OfferTerm, \"productTermCache\".ProductTermCategory " +
    "where OfferTerm.DTerm = ProductTermCategory.DTerm");
using (StreamWriter sw = new StreamWriter(output))
{
foreach (var fields in offerTermCache.QueryFields(joinSql))
{
sw.WriteLine("{0}\t{1}\t{2}\t{3}", fields[0], fields[1], fields[2],
fields[3]);
}
}



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-does-AffinityKey-mapped-tp6260p6299.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cassandra Cache Store Example

2016-07-13 Thread Kamal C
Ok Denis!

I've some doubts in the Cassandra Persistent store.

1. In an Ignite partitioned cache, the number of partitions can be configured
   using an AffinityFunction. How can the same be configured for the
   Cassandra table?

2. Will the partitioner used by Cassandra (Murmur3) and by Ignite
return the same partition number for a cache key?

3. Say a cache is configured to hold 2 lakh (200,000) elements in memory
and the rest of them in Cassandra. If the cache contains more than
2 lakh elements, then:

a. Calling the cache.size() method returns the count as 2 lakh.
b. Using cache.iterator(), only the elements available in memory
can be iterated.
c. With cache.removeAll(), only the entries in Cassandra that are also
available in memory can be deleted.

How can I get the total size of a cache (memory + Cassandra)?

--Kamal


On Mon, Jul 11, 2016 at 6:01 PM, Denis Magda  wrote:

> Hi Kamal,
>
> Please create a ticket in JIRA for that and share a link to it over there.
> Hope that someone from the community will pick it up and implement. If
> you’re interested in this kind of contribution then it would be perfect.
>
> —
> Denis
>
> On Jul 11, 2016, at 3:05 PM, Kamal C  wrote:
>
> Hi,
>
> Can anyone add Cassandra CacheStore example in the examples[1] like
> JDBC CacheStore example?
>
> It will be useful to configure and test the feature quickly.
>
> [1]:
> https://github.com/apache/ignite/tree/master/examples/src/main/java/org/apache/ignite/examples/datagrid/store
>
> --Kamal
>
>
>
>


Re: Understanding data store and partitioning

2016-07-13 Thread Denis Magda
Hi Kamal,

All the data that has already been pre-loaded by the time the new node
joined, and that has to be located on the new node according to the new
topology version, will be rebalanced there [1].

However, if the pre-loading is still in progress then you need to call
localLoadCache() on the new node, because the rest of the nodes, which were
in the cluster before, will skip entries for which they are neither primary
nor backup, and such entries won't be rebalanced to the new node.

[1] https://apacheignite.readme.io/docs/rebalancing

--
Denis



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Understanding-data-store-and-partitioning-tp6264p6297.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Understanding data store and partitioning

2016-07-13 Thread Kamal C
Val,

When a new node joins the cluster, should I have to call loadCache() or
localLoadCache() method ?

From the docs:

1. loadCache() - executes localLoadCache() on all the nodes.
2. localLoadCache() - triggers data loading only on the local node.

--Kamal

On Thu, Jul 14, 2016 at 4:12 AM, vkulichenko 
wrote:

> Hi,
>
> You don't need to load all person IDs when loading the data. The
> loadCache() implementation can use the Affinity API to get the array of
> local partition IDs and query the DB based on these IDs. With this approach
> each node will load only those rows that have to be stored locally. See the
> second code example in [1]. Also note that you can load different
> partitions in parallel.
>
> [1]
>
> https://apacheignite.readme.io/docs/data-loading#section-partition-aware-data-loading
>
> -Val
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Understanding-data-store-and-partitioning-tp6264p6282.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Writebehind retry logic

2016-07-13 Thread Denis Magda
Hi Sparkle,

Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. Here is the instruction:
http://apache-ignite-users.70518.x6.nabble.com/mailing_list/MailingListOptions.jtp?forum=1


sparkle_j wrote
> Hi Val,
> 
> If write behind failed due to bad data, framework is constantly trying to
> attempt (insert or update) forever. Hope you can make this re-try
> configurable. We had to implement some more code to limit the number of
> re-tries to 3. Wish this is configurable.
> 
> Thanks,
> Sparkle.

Do you have several caches with write-behind enabled? If so, and there is a
primary, foreign or unique key constraint, then a store can end up in a retry
loop until the constraint is satisfied (a write-behind store of cacheA will
wait and retry its updates until a write-behind store of cacheB inserts the
data that satisfies the constraint).

In any case please share the full stack trace of your issue.
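Until such a setting exists, a retry cap can be implemented in user code by wrapping the store call. The sketch below is a plain-Java illustration under that assumption; `Write` and `writeWithRetries` are hypothetical names, not Ignite APIs:

```java
public class BoundedRetryWriter {
    interface Write { void run() throws Exception; }

    // Attempt the write at most maxRetries times, then give up
    // instead of retrying forever.
    static boolean writeWithRetries(Write write, int maxRetries) {
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            try {
                write.run();
                return true;                 // succeeded
            } catch (Exception e) {
                // log the failure and fall through to the next attempt
            }
        }
        return false;                        // retries exhausted
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // A write that always fails, standing in for "bad data".
        boolean ok = writeWithRetries(() -> {
            calls[0]++;
            throw new RuntimeException("bad data");
        }, 3);
        System.out.println(ok + " after " + calls[0] + " attempts"); // false after 3 attempts
    }
}
```

The same bounded loop can live inside a custom CacheStore's write method, so a permanently failing entry is dropped (or dead-lettered) after a fixed number of attempts.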

--
Denis




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Writebehind-retry-logic-tp6189p6295.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Performance in case of 1 server node

2016-07-13 Thread Denis Magda
Hi,

See my answers inline 


daniels wrote
> Hi Denis,
> I will remove startSize.
> Is there any need to add or remove anything?

Nothing else at the moment.


daniels wrote
> Before,I used JCache(JSR 107).
> Now instead of it I use Ignite(one node).

Ignite is a compliant JCache implementation. However, keep in mind that
Ignite is a distributed key-value store (cache) and should be compared with
other distributed caches, in a configuration with several nodes. It doesn't
make much sense to compare it with JCache implementations of non-distributed
caches or with structures like HashMap.


daniels wrote
> And want to get better performance.
> Also,
> I use   read-through cacheLoaderFactory(in MutableConfiguration) for cache
> configuration,and not Ignite CacheStoreFactory.
> But  in  cases of putAll,I use IgniteDataStreamer.

It's OK to use IgniteDataStreamer for pre-loading, but after the initial
pre-loading is finished it is better to switch to putAll and other methods
that can be used inside transactions.

--
Denis






--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Performance-in-case-of-1-server-node-tp6207p6294.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: 回复: Ignite used memory 7 times greater than imported data

2016-07-13 Thread Denis Magda
Hi,

There is a capacity planning guide [1] that lists how the memory is consumed
by Ignite and how you can control it.

The first thing is that if you have a single backup then the amount of
memory consumed is doubled.
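As a back-of-the-envelope illustration (the numbers below are assumed for the example, not taken from the original post), one backup doubles the cluster-wide footprint of the raw data:

```java
public class CapacitySketch {
    // Rough memory estimate: every entry is stored once on its primary
    // node plus once per configured backup.
    static long estimateBytes(long entries, long perEntryBytes, int backups) {
        return entries * perEntryBytes * (1L + backups);
    }

    public static void main(String[] args) {
        long noBackup  = estimateBytes(1_000_000, 200, 0); // 200 MB of raw data
        long oneBackup = estimateBytes(1_000_000, 200, 1); // doubled: 400 MB
        System.out.println(oneBackup / noBackup); // 2
    }
}
```

Indexes and per-entry overhead come on top of this, which is why the capacity planning guide's estimates are higher still.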

Next, remove this setting to let Ignite pre-allocate the size it will need
in practice

 

This property doesn't have any effect at all for off heap caches





Finally, how many indexes do you have per object? Indexes are not free, as
[1] states. Also make sure that there are no "unable to marshall with
BinaryMarshaller"-like messages in the logs; otherwise it would mean that
values are not serialized and stored in a size-optimal way.

--
Denis

[1] https://apacheignite.readme.io/docs/capacity-planning



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-used-memory-7-times-greater-than-imported-data-tp6225p6293.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Does ignite support UPDATE/DELETE sql

2016-07-13 Thread Denis Magda
Hi,

Please properly subscribe to the Ignite's user list. Refer to this page for
details - https://ignite.apache.org/community/resources.html#ask

See my answers inline.


zhaojun08 wrote
> HI ALL,
> 
> I am new to ignite, and I have a few questions to confirm.
> 
> 1. I want to use Ignite to store RDBMS tables in memory. A table in an
> RDBMS has many rows; does Ignite store every row as a Java object, like the
> "Person" object in the docs? And is that the only way to store a row in
> Ignite?
/
> Yes, Apache Ignite is an in-memory key-value store meaning that for every
> key there should be a corresponding value. A key-value tuple will
> correspond to a row from your RDBMS store.
/
> 
> 2. I notice that the "Person" class implements Serializable; does it mean
> every row record is stored in Ignite in serialized form? If so, will
> serialization degrade select performance, and what is the reason for
> serialization?
/
> Objects are stored in a serialized form in memory. However it doesn't mean
> that JDK serialization techniques are used to prepare an object for
> storage. In fact Ignite uses its own BinaryMarshaller (serializer) that
> has good performance characteristics -
> https://apacheignite.readme.io/docs/binary-marshaller
/
> 
> 3. I have stored RDBMS data in Ignite; can I update a specific row using
> an UPDATE/DELETE SQL statement to alter the table?
/
> This kind of query is not supported right now. You have to update
> caches with methods like cache.put, cache.putAll, cache.invoke, etc.
/
> 
/
> If you need to pre-load data from an RDBMS then you can rely on one of the
> pre-loading strategies - https://apacheignite.readme.io/docs/data-loading.
> This topic should be useful for you as well -
> https://apacheignite.readme.io/docs/persistent-store
/
> 
> --
> Denis
> 
> Many Thanks!





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Does-ignite-support-UPDATE-DELETE-sql-tp6290p6292.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How does AffinityKey mapped?

2016-07-13 Thread vkulichenko
Correct!

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-does-AffinityKey-mapped-tp6260p6291.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Does ignite support UPDATE/DELETE sql

2016-07-13 Thread zhaojun08
HI ALL,

I am new to ignite, and I have a few questions to confirm.

1. I want to use Ignite to store RDBMS tables in memory. A table in an RDBMS
has many rows; does Ignite store every row as a Java object, like the
"Person" object in the docs? And is that the only way to store a row in
Ignite?

2. I notice that the "Person" class implements Serializable; does it mean
every row record is stored in Ignite in serialized form? If so, will
serialization degrade select performance, and what is the reason for
serialization?

3. I have stored RDBMS data in Ignite; can I update a specific row using an
UPDATE/DELETE SQL statement to alter the table?

Many Thanks!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Does-ignite-support-UPDATE-DELETE-sql-tp6290.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How does AffinityKey mapped?

2016-07-13 Thread November
Thanks for reply.

Just to confirm I understand correctly.

IgniteCache orgCache
IgniteCache comCache
IgniteCache empCache

In these three caches, the partition function will use
EmployeeKey.organizationId to decide on which node to store an Employee.

If the keys of orgCache and comCache have the same value as
EmployeeKey.organizationId, then all three values will be stored on the same
node, because the same partition function is applied regardless of the data
type?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-does-AffinityKey-mapped-tp6260p6288.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: "ArrayIndexOutOfBoundsException" happened when doing qurey in version 1.6.0

2016-07-13 Thread ght230
logs.rar   

Please refer to the attachment.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ArrayIndexOutOfBoundsException-happened-when-doing-qurey-in-version-1-6-0-tp6245p6287.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Does Apache Ignite support streaming serialization while returning results from the compute grid?

2016-07-13 Thread vkulichenko
Hi Mohamed,

There is no such support, but sending 4GB as a job result doesn't sound like
a good idea to me. This will take a lot of time and also you will lose the
result if the sender node fails in the middle of the process. Are you sure
this is required for your use case?

Can you elaborate on what your computation looks like? Is it a
map-reduce task? Why is the result so big?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Does-Apache-Ignite-support-streaming-serialization-while-returning-results-from-the-compute-grid-tp6251p6286.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cache EntryProcessor.process is getting invoked twice when cache.invoke() method is called within Transaction, in atomic mode its invoked once.

2016-07-13 Thread vkulichenko
Hi,

There is a ticket for this issue:
https://issues.apache.org/jira/browse/IGNITE-3471. It should be fixed soon.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cache-EntryProcessor-process-is-getting-invoked-twice-when-cache-invoke-method-is-called-within-Tran-tp921p6283.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Understanding data store and partitioning

2016-07-13 Thread vkulichenko
Hi,

You don't need to load all person IDs when loading the data. The loadCache()
implementation can use the Affinity API to get the array of local partition
IDs and query the DB based on these IDs. With this approach each node will
load only those rows that have to be stored locally. See the second code
example in [1]. Also note that you can load different partitions in parallel.

[1]
https://apacheignite.readme.io/docs/data-loading#section-partition-aware-data-loading
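The approach can be sketched without a running cluster. The demo below uses a simplified hash-based partition function and plain lists in place of Ignite's Affinity API and a real database query; the names (`loadLocal`, `PARTS`) are made up for illustration:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class PartitionAwareLoad {
    static final int PARTS = 8;

    // Simplified stand-in for an affinity function mapping a key to a partition.
    static int partition(long key) {
        return (int) Math.abs(Long.hashCode(key) % PARTS);
    }

    // Load only rows whose key falls into one of the "local" partitions,
    // the way each node's loadCache() would filter with a SQL WHERE clause.
    static List<Long> loadLocal(List<Long> allKeys, Set<Integer> localParts) {
        return allKeys.stream()
                .filter(k -> localParts.contains(partition(k)))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Long> keys = LongStream.range(0, 100).boxed().collect(Collectors.toList());
        // Pretend this node owns partitions 0-3 and another node owns 4-7.
        List<Long> nodeA = loadLocal(keys, Set.of(0, 1, 2, 3));
        List<Long> nodeB = loadLocal(keys, Set.of(4, 5, 6, 7));
        // Every key is loaded by exactly one node, with no duplication.
        System.out.println(nodeA.size() + nodeB.size()); // 100
    }
}
```

In real code, each node would get its owned partitions from `Affinity.primaryPartitions(localNode)` and push a partition-id predicate into the database query instead of filtering in memory.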

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Understanding-data-store-and-partitioning-tp6264p6282.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Iterating through a BinaryObject cache fails

2016-07-13 Thread vkulichenko
This exception means that there was an attempt to deserialize a BinaryObject,
but there is no class definition for it. More specifically, Ignite tries to
find the mapping between the type ID and the class name, which is stored in
the system cache and on the local FS when the object is serialized. If the
object was created using the builder, this mapping doesn't exist either.
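The lookup described above can be pictured as a registry from type ID to class name (a toy model for illustration only, not Ignite's actual MarshallerContext; the class names here are made up):

```java
import java.util.HashMap;
import java.util.Map;

public class TypeRegistryDemo {
    static final Map<Integer, String> registry = new HashMap<>();

    // Writing an object of a class records its typeId -> className mapping.
    static void register(String className) {
        registry.put(className.hashCode(), className);
    }

    // Deserialization needs the class name for a type id; if the mapping was
    // never recorded (e.g. the object was built without a class), the lookup
    // fails much like the exception quoted in this thread.
    static String className(int typeId) {
        String name = registry.get(typeId);
        if (name == null)
            throw new IllegalStateException(
                "Class definition was not found [id=" + typeId + "]");
        return name;
    }

    public static void main(String[] args) {
        register("com.example.Person");
        System.out.println(className("com.example.Person".hashCode())); // com.example.Person
    }
}
```

This is why keeping the cache in binary form with `withKeepBinary()` avoids the problem: no class-name lookup is needed if nothing is ever deserialized.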

So yes, this is a generic exception and you should figure out why the
deserialization happens. Most likely, you're not properly using
withKeepBinary() method somewhere. Can you provide the full trace and the
code sample that fails?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Iterating-through-a-BinaryObject-cache-fails-tp6038p6281.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Failed to wait for initial partition map exchange

2016-07-13 Thread vkulichenko
Hi Jason,

There are a lot of possible reasons for that. Most likely something bad is
happening (assertion, out of memory error, etc.) which freezes the cluster
for some reason. I would recommend to collect full log files and thread
dumps from all the nodes (servers and clients) and investigate them. Are
there any exceptions in logs? Are there any threads suspiciously hanging on
some operations?

If you attach the info here, I will be able to take a look.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Failed-to-wait-for-initial-partition-map-exchange-tp6252p6280.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: "ArrayIndexOutOfBoundsException" happened when doing qurey in version 1.6.0

2016-07-13 Thread vkulichenko
I need the trace of the ArrayIndexOutOfBoundsException, which is the root
cause. The log you provided shows the 'caused by' line, but then it should
also have the actual trace for it. It's fine if you simply attach the full
log file.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ArrayIndexOutOfBoundsException-happened-when-doing-qurey-in-version-1-6-0-tp6245p6279.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How does AffinityKey mapped?

2016-07-13 Thread November
hi

I'm still a bit confused. If there are three caches like the following,
which cache will empCache be collocated with? orgCache or comCache?
I don't see any configuration in the Java example
org.apache.ignite.examples.binary.datagrid.CacheClientBinaryQueryExample
other than @AffinityKeyMapped. Why does EmployeeKey.organizationId map to
Organization.id and not to any other class?

IgniteCache orgCache
IgniteCache comCache
IgniteCache empCache

public class EmployeeKey {
/** ID. */
private int id;

/** Organization ID. */
@AffinityKeyMapped
private int organizationId;
}

public class Organization {
/** */
private static final AtomicLong ID_GEN = new AtomicLong();

/** Organization ID (indexed). */
@QuerySqlField(index = true)
private Long id;

/** Organization name (indexed). */
@QuerySqlField(index = true)
private String name;

/** Address. */
private Address addr;

/** Type. */
private OrganizationType type;

/** Last update time. */
private Timestamp lastUpdated;
}

public class Company {
/** */
private static final AtomicLong ID_GEN = new AtomicLong();

/** Company ID (indexed). */
@QuerySqlField(index = true)
private Long id;

/** Company name (indexed). */
@QuerySqlField(index = true)
private String name;

}



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-does-AffinityKey-mapped-tp6260p6278.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How does AffinityKey mapped?

2016-07-13 Thread Vladislav Pyatkov
Hello,

You can use an affinity part in both keys.

For example:

public class cache1Key {
private int uniqueId1;

@AffinityKeyMapped
private int DEV_LV;
}

public class cache2Key {
private int uniqueId2;

@AffinityKeyMapped
private int DEV_LV;
}
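The effect of sharing the affinity field can be illustrated outside of Ignite with a plain hash-based partition function (a simplified sketch, not Ignite's actual RendezvousAffinityFunction): keys from different caches that carry the same DEV_LV value always map to the same partition, and therefore to the same node, which is what makes a collocated join possible.

```java
public class AffinityDemo {
    // Simplified stand-in for an affinity function: only the affinity
    // field (DEV_LV) is hashed to one of N partitions; the unique ids
    // play no role in partition assignment.
    static int partition(int affinityField, int partitions) {
        return Math.abs(Integer.hashCode(affinityField)) % partitions;
    }

    public static void main(String[] args) {
        int parts = 1024;
        // Two keys from different caches with different unique ids
        // but the same affinity field value (DEV_LV = 7).
        int p1 = partition(7, parts); // cache1Key { uniqueId1 = 1,  DEV_LV = 7 }
        int p2 = partition(7, parts); // cache2Key { uniqueId2 = 99, DEV_LV = 7 }
        System.out.println(p1 == p2); // true: same partition, same node
    }
}
```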

On Wed, Jul 13, 2016 at 4:48 PM, November  wrote:

> Thanks for reply
>
> I have another question. There are two tables.
>
> CREATE TABLE T1
>   ( LAT_ID SMALLINT,
>     DEV_ID VARCHAR(20),
>     DEV_LV SMALLINT
>   );
>
> CREATE TABLE T2
>   ( THE_DATE DATE,
>     DEV_ID VARCHAR(20),
>     DEV_LV SMALLINT
>   );
>
>  SQL query:
> SELECT t1.* from t1, t2 where t2.DEV_LV = t1.DEV_LV
>
> I use cache1 and cache2 (using DEV_LV as the key) to store them.
>
> public class cache1Key {
> private int DEV_LV;
>
> /** Affinity key. */
> @AffinityKeyMapped
> private int cache2_DEV_LV;
> }
>
> What if there are repeated DEV_LV values in both tables (but the cache's
> key needs to be unique)? Is there any other way to achieve collocation
> using @AffinityKeyMapped?
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/How-does-AffinityKey-mapped-tp6260p6275.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Iterating through a BinaryObject cache fails

2016-07-13 Thread pragmaticbigdata
Just bringing this discussion back up again, since I have been experiencing
the exception mentioned in this thread again.

Caused by: class org.apache.ignite.IgniteCheckedException: Class definition
was not found at marshaller cache and local file. [id=-1556878003,
file=E:\ApacheIgnite\apache-ignite-fabric-1.6.0-bin\work\marshaller\-1556878003.classname]
at
org.apache.ignite.internal.MarshallerContextImpl.className(MarshallerContextImpl.java:176)
at
org.apache.ignite.internal.MarshallerContextAdapter.getClass(MarshallerContextAdapter.java:174)
at
org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:599)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1474)
at
org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:572)
at
org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:131)

I encountered this exception this time while executing some entry processor
code. It seems this is a generic exception. What are the cases when this
exception is thrown? What class definitions is Ignite trying to load from
the local file? Does the exception appear only when working with
BinaryObjects?

Thanks.




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Iterating-through-a-BinaryObject-cache-fails-tp6038p6276.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How does AffinityKey mapped?

2016-07-13 Thread November
Thanks for reply

I have another question. There are two tables.

CREATE TABLE T1
  ( LAT_ID SMALLINT,
    DEV_ID VARCHAR(20),
    DEV_LV SMALLINT
  );

CREATE TABLE T2
  ( THE_DATE DATE,
    DEV_ID VARCHAR(20),
    DEV_LV SMALLINT
  );

 SQL query:
SELECT t1.* from t1, t2 where t2.DEV_LV = t1.DEV_LV

I use cache1 and cache2 (using DEV_LV as the key) to store them.

public class cache1Key {
private int DEV_LV;

/** Affinity key. */
@AffinityKeyMapped
private int cache2_DEV_LV;
}

What if there are repeated DEV_LV values in both tables (but the cache's key
needs to be unique)? Is there any other way to achieve collocation using
@AffinityKeyMapped?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-does-AffinityKey-mapped-tp6260p6275.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Understanding data store and partitioning

2016-07-13 Thread Vladislav Pyatkov
Hi,

On Wed, Jul 13, 2016 at 3:48 PM, pragmaticbigdata 
wrote:

> > You can find partition number using: affinity.partition(key)
>
> My question was - to get the partition id we need the cache key. When doing
> the initial load into ignite we don't have the cache key. Does that mean we
> cannot have an optimized data loading (i.e. partition aware data loading)?
>

A cache is first of all a key-value store. Yes, you can use the ID of a
record (from the database) as the cache key.

>
> > Yes, it does. Each node has its own instance of CacheStore.
>
> Is the loadCache() method from the CacheStore invoked on a new server node
> that joins the cluster? If yes, what is the reason behind it? Won't it be
> copying the existing partitions from other server nodes?
>

If a node joined after the method (loadCache) was executed, then partitions
will be relocated to the new node (with all the data in them).

>
> Thanks
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Understanding-data-store-and-partitioning-tp6264p6272.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Cache EntryProcessor.process is getting invoked twice when cache.invoke() method is called within Transaction, in atomic mode its invoked once.

2016-07-13 Thread pragmaticbigdata
I am trying to understand this behavior of the entry processor. I could see
that the entry processor is called on all server nodes including the backup
nodes and the client node. The entry processor is executed on the client
node when it is invoked from a transaction irrespective of the isolation
level. It is called even when the isolation level is READ_COMMITTED. Can
someone explain why it is called on the client node when in a transaction?

Thanks



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cache-EntryProcessor-process-is-getting-invoked-twice-when-cache-invoke-method-is-called-within-Tran-tp921p6271.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Understanding data store and partitioning

2016-07-13 Thread Vladislav Pyatkov
Hello,



On Wed, Jul 13, 2016 at 2:26 PM, pragmaticbigdata 
wrote:

> Following the documentation on data loading, I have some questions
> with regard to Ignite version 1.6.
>
> 1. How does ignite derive the partition id from the cache key? What is the
> relation between the partition id and the affinity key?
>

The affinity function maps your key to a partition. You can get the function
using the method ignite().affinity(cacheName).

>
> 2. Partition aware data loading suggests to persist the partition id along
> with the data in the database. For this we would need to know the cache key
> upfront (as the example indicates - personId) right? How could getting the
> key be possible when doing the initial load? Did I misunderstand anything?
>

You can find partition number using: affinity.partition(key)

>
> 3. Is the data store implementation called on cluster rebalancing
> especially
> when a new server node joins the cluster?
>

Yes, it does. Each node has its own instance of CacheStore.

>
> Thanks!
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Understanding-data-store-and-partitioning-tp6264.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: How to troubleshoot a slow client node get()

2016-07-13 Thread Yakov Zhdanov
Tracel, can you please also take thread dumps of both client and server JVMs
when you see that get() takes too long?

--Yakov

2016-07-13 12:33 GMT+03:00 tracel :

> Thanks yakov,
> I am using a Linux box.
> The delay can also be observed after leaving the client idle for some 3 to 10
> minutes.
>
> I will try disabling the shared memory communication, Thanks!
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/How-to-troubleshoot-a-slow-client-node-get-tp6250p6262.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Issue with concurrent users on Ignite 1.6.0 ODBC

2016-07-13 Thread Igor Sapego
Agneeswaran,

What I can see is that the SEGFAULT is caused by a misuse of SQLGetDiagRec:
our current implementation of the function assumes that TextLengthPtr cannot
be NULL. Fix that and you will get a meaningful error message that we can
analyze further. From the core dump it looks like a "Failed to run map
query remotely." error message, but it's better to be sure.

Best Regards,
Igor

On Wed, Jul 13, 2016 at 11:14 AM, Agneeswaran <
agneeswaran.ponnuraman...@nielsen.com> wrote:

> Hi Igor,
>
> Please find the attached lib files,
>
> Thanks,
> Agneeswaran
>
> lib1.zip
> 
>
> lib2.zip
> 
>
> lib3.zip
> 
>
> lib4.zip
> 
>
> lib5.zip
> 
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Issue-with-concurrent-users-on-Ignite-1-6-0-ODBC-tp6217p6257.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: How to troubleshoot a slow client node get()

2016-07-13 Thread Dmitriy Setrakyan
On Wed, Jul 13, 2016 at 11:23 AM, tracel  wrote:

> thanks dsetrakyan,
>
> Why use System.out.println()?
>

I want to make sure that there is no overhead associated with log.info().
Can you check?


> I have added the System.out.println(), and keep the log.info() just for
> comparison:
>
> log.info("### Before get()");
> System.out.println("##~ Before get()");
> Vendor vendor = cache.get(vendorCode);
> System.out.println("##~ After  get()");
> log.info("### After  get()");
>
>
>
> The System.out output was captured by log4j like this, so it is also marked
> as [INFO]:
> log4j.appender.stdout.Threshold=INFO
> log4j.appender.stdout.Target=System.out
>
>
>
> Here's the log output:
>
> 16:17:06,861 [ INFO] CacheService:150 - ### Before get()
> 16:17:06,862 [ INFO] CacheService:62 - ##~ Before get()
> 16:17:11,921 [ INFO] CacheService:62 - ##~ After  get()
> 16:17:11,921 [ INFO] CacheService:156 - ### After  get()
> 16:17:11,922 [ INFO] CacheService:150 - ### Before get()
> 16:17:11,922 [ INFO] CacheService:62 - ##~ Before get()
> 16:17:11,933 [ INFO] CacheService:62 - ##~ After  get()
> 16:17:11,934 [ INFO] CacheService:156 - ### After  get()
> 16:17:11,934 [ INFO] CacheService:150 - ### Before get()
> 16:17:11,935 [ INFO] CacheService:62 - ##~ Before get()
> 16:17:11,940 [ INFO] CacheService:62 - ##~ After  get()
> 16:17:11,941 [ INFO] CacheService:156 - ### After  get()
> 16:17:11,941 [ INFO] CacheService:150 - ### Before get()
> 16:17:11,941 [ INFO] CacheService:62 - ##~ Before get()
> 16:17:11,956 [ INFO] CacheService:62 - ##~ After  get()
> 16:17:11,956 [ INFO] CacheService:156 - ### After  get()
> 16:17:11,957 [ INFO] CacheService:150 - ### Before get()
> 16:17:11,957 [ INFO] CacheService:62 - ##~ Before get()
> 16:17:11,965 [ INFO] CacheService:62 - ##~ After  get()
> 16:17:11,965 [ INFO] CacheService:156 - ### After  get()
> 16:17:11,965 [ INFO] CacheService:150 - ### Before get()
> 16:17:11,966 [ INFO] CacheService:62 - ##~ Before get()
> 16:17:11,969 [ INFO] CacheService:62 - ##~ After  get()
> 16:17:11,969 [ INFO] CacheService:156 - ### After  get()
> 16:17:11,970 [ INFO] CacheService:150 - ### Before get()
> 16:17:11,970 [ INFO] CacheService:62 - ##~ Before get()
> 16:17:11,975 [ INFO] CacheService:62 - ##~ After  get()
> 16:17:11,976 [ INFO] CacheService:156 - ### After  get()
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/How-to-troubleshoot-a-slow-client-node-get-tp6250p6258.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: How does AffinityKey mapped?

2016-07-13 Thread Andrey Gura
Hi,

It is responsibility of AffinityKeyMapper interface implementation. By
default Ignite uses GridCacheDefaultAffinityKeyMapper class for this
purposes. But you can provide own mapper implementation using
CacheConfiguration.setAffinityMapper() method.
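Conceptually, the default mapper just extracts the annotated field and uses it instead of the full key when computing the partition. A plain-Java sketch of that contract (no Ignite types; only the field names from the example quoted below are reused):

```java
public class AffinityKeySketch {
    // Mirrors the EmployeeKey from the example in this thread.
    static class EmployeeKey {
        final int id;
        final int organizationId; // the @AffinityKeyMapped field
        EmployeeKey(int id, int organizationId) {
            this.id = id;
            this.organizationId = organizationId;
        }
    }

    // What the default mapper effectively does for such a key: return the
    // annotated field, so the employee hashes to the same partition as the
    // Organization whose id equals organizationId.
    static Object affinityKey(EmployeeKey key) {
        return key.organizationId;
    }

    public static void main(String[] args) {
        EmployeeKey e1 = new EmployeeKey(1, 100);
        EmployeeKey e2 = new EmployeeKey(2, 100);
        // Both employees share affinity key 100, so they are collocated
        // with each other and with Organization id 100.
        System.out.println(affinityKey(e1).equals(affinityKey(e2)));
    }
}
```

In other words, the annotation is the mapping: nothing in the configuration ties organizationId to Organization.id except that both produce the same affinity-key value.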

On Wed, Jul 13, 2016 at 11:58 AM, November  wrote:

> Hi
>
> I have read http://apacheignite.gridgain.org/docs/affinity-collocation and
> java example.
>
> I don't understand how an AffinityKey is mapped, e.g. in the following code
> from the Java example. How does Ignite know that EmployeeKey's organizationId
> should be collocated with Organization's id (and not another field)? I can't
> find anything mapping them in the configuration.
>
> Another question: how do I use AffinityKey in .NET? Will simply replacing
> @AffinityKeyMapped with [AffinityKeyMapped] work, or is something else
> different?
>
> public class EmployeeKey {
> /** ID. */
> private int id;
>
> /** Organization ID. */
> @AffinityKeyMapped
> private int organizationId;
> }
>
> public class Organization {
> /** */
> private static final AtomicLong ID_GEN = new AtomicLong();
>
> /** Organization ID (indexed). */
> @QuerySqlField(index = true)
> private Long id;
>
> /** Organization name (indexed). */
> @QuerySqlField(index = true)
> private String name;
>
> /** Address. */
> private Address addr;
>
> /** Type. */
> private OrganizationType type;
>
> /** Last update time. */
> private Timestamp lastUpdated;
>
> }
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/How-does-AffinityKey-mapped-tp6260.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Andrey Gura
GridGain Systems, Inc.
www.gridgain.com


Does the HTTP REST API support binary values?

2016-07-13 Thread Jason
It seems that the REST API only supports simple key/value pairs, like strings.

If the value is a class with multiple fields, and some of them are used in
SQL queries, how can I send this kind of data using the REST API?

Thanks,
-Jason



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Does-the-HTTP-REST-API-support-the-binary-value-tp6263.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to troubleshoot a slow client node get()

2016-07-13 Thread tracel
Thanks yakov,
I am using a Linux box.
The delay can also be observed after leaving the client idle for some 3 to 10
minutes.

I will try disabling the shared memory communication, Thanks!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-troubleshoot-a-slow-client-node-get-tp6250p6262.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to troubleshoot a slow client node get()

2016-07-13 Thread Yakov Zhdanov
What OS do you use? Is this delay observed only initially, or can it also be
observed after leaving the client idle after doing some work?

Can you please disable shared memory communication in case you use Linux or
Mac - set
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi#setSharedMemoryPort
to -1.
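In Spring XML that setting might look like this (a sketch; only the sharedMemoryPort property is taken from the advice above, the surrounding bean is boilerplate):

```xml
<property name="communicationSpi">
    <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
        <!-- -1 disables the shared memory endpoint. -->
        <property name="sharedMemoryPort" value="-1"/>
    </bean>
</property>
```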

Thanks!

--Yakov

2016-07-13 11:57 GMT+03:00 tracel :

> After doing some more testing, I have these findings so far:
>
> - The slow get() symptom is only found when the client node and the server
> node are started on the SAME machine; I cannot reproduce it when the client
> node and server node are started on their own machines.
>
> - The symptom seems to occur after the client has been idle for a while
> (3 to 10 minutes)
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/How-to-troubleshoot-a-slow-client-node-get-tp6250p6259.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


How does AffinityKey mapped?

2016-07-13 Thread November
Hi

I have read http://apacheignite.gridgain.org/docs/affinity-collocation and
java example.

I don't understand how an AffinityKey is mapped, e.g. in the following code
from the Java example. How does Ignite know that EmployeeKey's organizationId
should be collocated with Organization's id (and not another field)? I can't
find anything mapping them in the configuration.

Another question: how do I use AffinityKey in .NET? Will simply replacing
@AffinityKeyMapped with [AffinityKeyMapped] work, or is something else
different?

public class EmployeeKey {
    /** ID. */
    private int id;

    /** Organization ID. */
    @AffinityKeyMapped
    private int organizationId;
}

public class Organization {
    /** */
    private static final AtomicLong ID_GEN = new AtomicLong();

    /** Organization ID (indexed). */
    @QuerySqlField(index = true)
    private Long id;

    /** Organization name (indexed). */
    @QuerySqlField(index = true)
    private String name;

    /** Address. */
    private Address addr;

    /** Type. */
    private OrganizationType type;

    /** Last update time. */
    private Timestamp lastUpdated;
}



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-does-AffinityKey-mapped-tp6260.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to troubleshoot a slow client node get()

2016-07-13 Thread tracel
After doing some more testing, I have these findings so far:

- The slow get() symptom is only found when the client node and the server node
are started on the SAME machine; I cannot reproduce it when the client node and
server node are started on their own machines.

- The symptom seems to occur after the client has been idle for a while (3 to 10
minutes)



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-troubleshoot-a-slow-client-node-get-tp6250p6259.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite used memory 7 times greater than imported data

2016-07-13 Thread 胡永亮/Bob
Now, because I create my cache in Java code, changing the config.xml
doesn't work.

I have already changed the Java code that creates my cache, setting the
number of backups to zero instead of one:

CacheConfiguration cfg =
CacheConfig.cache(cacheName, pojoStoreFactory);
//cfg.setBackups(1);
cfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cfg.setCacheMode(CacheMode.PARTITIONED);

After that, the same data consumed 130 GB of memory; the details are the following:

[root@ignite15 apache-ignite-fabric-1.6.0-bin]# free -g
                   total   used   free   shared   buffers   cached
Mem:                 125     66     59        0         0        0
-/+ buffers/cache:            65     60
Swap:                  3      0      3

[root@localhost apache-ignite-fabric-1.6.0-bin]# free -g
                   total   used   free   shared   buffers   cached
Mem:                 125     65     60        0         0        0
-/+ buffers/cache:            64     61
Swap:                  3      0      3
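As a rough sanity check of where the memory goes, the numbers from this thread can be plugged into the per-entry overhead model from Ignite's capacity-planning guide. The ~200-byte per-entry overhead and the doubling for one backup are approximate figures from that guide; everything else is arithmetic over the figures reported in this thread.

```java
public class FootprintEstimate {
    public static void main(String[] args) {
        long entries = 47_535_542L;                // rows loaded from Oracle
        long rawBytes = 31L * 1024 * 1024 * 1024;  // ~31 GB of raw CSV data
        long avgValueBytes = rawBytes / entries;   // ~700 bytes per row
        long overheadPerEntry = 200;               // approx. figure from the guide

        long oneCopy = entries * (avgValueBytes + overheadPerEntry);
        System.out.println("No backups: ~" + oneCopy / (1 << 30) + " GB");
        System.out.println("One backup: ~" + 2 * oneCopy / (1 << 30) + " GB");
    }
}
```

The gap between such an estimate and the observed usage typically comes from indexes, serialization format, and on-heap copies, which is why the guide also recommends accounting for index overhead separately.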

Can anyone suggest how I can reduce the amount of memory used?

Ignite consumes too much memory!

Thanks.



Bob
 
From: 胡永亮/Bob
Sent: 2016-07-12 17:22
To: user@ignite.apache.org
Subject: Ignite used memory 7 times greater than imported data
Hi, everyone

I have a problem: Ignite uses 220+ GB of memory, but the imported data is only
31 GB. Why?

Basic info: I use Ignite 1.6, deployed as a cluster across 2 machines with
128 GB of memory each. I run 3 Ignite instances on each machine. I am using JDK 8.

I have 31 GB of data in a CSV file. First, I imported the data into one Oracle
table (47,535,542 records in total).
Then I used the ignite-schema-import.sh tool to produce POJO files and put
them into my Java project.
Then I used IgniteCache.loadCache to load the data from Oracle into Ignite;
it took 01:43:17.

After importing the data, I used the Linux commands free and ps aux and got the
following info:

free -g:
machine1:
                   total   used   free   shared   buffers   cached
Mem:                 125    114     11        0         0        0
-/+ buffers/cache:           114     11
Swap:                  3      0      3

machine2:
                   total   used   free   shared   buffers   cached
Mem:                 125    122      3        0         0        0
-/+ buffers/cache:           121      4
Swap:                  3      0      3

ps aux:
machine1:
USER       PID %CPU %MEM      VSZ      RSS TTY   STAT START   TIME COMMAND
root  23234 48.2 28.8 56278844 38088436 pts/1 Sl 14:29  70:08 
/usr/java/jdk1.8.0_91/bin/java -server -Xms1g -Xmx1g -XX:NewSize=512m 
-XX:SurvivorRatio=6 -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:MaxGCPauseMillis=2000 
-XX:GCTimeRatio=4 -XX:InitiatingHeapOccupancyPercent=30 -XX:G1HeapRegionSize=8M 
-XX:ConcGCThreads=16 -XX:G1HeapWastePercent=10 -XX:+UseTLAB 
-XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -DIGNITE_QUIET=true 
-DIGNITE_SUCCESS_FILE=/root/apache-ignite-fabric-1.6.0-bin/work/ignite_success_6f17738f-6b73-4681-a06e-a9d270c0fe9c
 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=49164 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false 
-DIGNITE_HOME=/root/apache-ignite-fabric-1.6.0-bin 
-DIGNITE_PROG_NAME=bin/ignite.sh -cp 
/root/apache-ignite-fabric-1.6.0-bin/libs/*:/root/apache-ignite-fabric-1.6.0-bin/libs/ignite-indexing/*:/root/apache-ignite-fabric-1.6.0-bin/libs/ignite-rest-http/*:/root/apache-ignite-fabric-1.6.0-bin/libs/ignite-spring/*:/root/apache-ignite-fabric-1.6.0-bin/libs/licenses/*
 org.apache.ignite.startup.cmdline.CommandLineStartup config_poc.xml
root  22259 46.5 32.7 61434264 43266272 pts/1 Sl 14:28  68:14 
/usr/java/jdk1.8.0_91/bin/java ...
root  22729 44.7 28.3 55533284 37374932 pts/1 Sl 14:28  65:29 
/usr/java/jdk1.8.0_91/bin/java ...

machine2:
root 182359 76.1 32.0 60570656 42322148 pts/2 Sl 14:34  87:02 
/usr/java/jdk1.8.0_91/bin/java ...
root 181882 75.8 31.7 60219724 41962436 pts/2 Sl 14:34  87:04 
/usr/java/jdk1.8.0_91/bin/java ...
root 182867 56.8 31.8 60243104 42005888 pts/2 Sl 14:34  64:54 
/usr/java/jdk1.8.0_91/bin/java ...

The ignite config:





<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">
    ...
</beans>

Re: Update performance

2016-07-13 Thread ionut_s
Hi Val,

Lately I have been using OPTIMISTIC/SERIALIZABLE, which performs better in my
test cases.

I see two problems with EntryProcessor:
1. When the "where" condition is generic I don't have the keys, so an
initial step is required to move all the keys to the client.
2. In my tests putAll performs better than EntryProcessor. Apparently
EntryProcessor#process is called on the client too; I don't know what
triggers loading the value and processing it on the client.

From what I saw, there is no way for changes made inside an IgniteCallable to
be enlisted in the transaction started on the client. Can you confirm that?

Thanks,
Ionut




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Update-performance-tp6214p6255.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to troubleshoot a slow client node get()

2016-07-13 Thread Dmitriy Setrakyan
Can you replace log.info() with System.out.println() in your test?

On Wed, Jul 13, 2016 at 10:21 AM, tracel  wrote:

> I have an Ignite (1.5.0.final) cache client node started in a Tomcat
> 8.0.32,
> the client node connects to a server node started on the same machine.
>
> Sometimes a get() needs some 5 seconds, while most other get() calls take
> almost no time.
> I wonder where the 5 seconds are spent and how I can troubleshoot it.
>
> I am trying but still cannot reproduce the symptom with another
> application,
> I will keep trying but hopefully I can get someone to shed some light here.
>
> Here is the log output, only the get() at 11:40:13 and 11:56:10 were taking
> longer time:
>
> 11:40:13,333 [ INFO] CacheService:150 - ### Before get()
> 11:40:18,503 [ INFO] CacheService:152 - ### After  get()
> 11:40:18,505 [ INFO] CacheService:150 - ### Before get()
> 11:40:18,528 [ INFO] CacheService:152 - ### After  get()
> 11:40:18,529 [ INFO] CacheService:150 - ### Before get()
> 11:40:18,538 [ INFO] CacheService:152 - ### After  get()
> 11:40:18,538 [ INFO] CacheService:150 - ### Before get()
> 11:40:18,558 [ INFO] CacheService:152 - ### After  get()
> 11:40:18,558 [ INFO] CacheService:150 - ### Before get()
> 11:40:18,567 [ INFO] CacheService:152 - ### After  get()
> 11:40:18,567 [ INFO] CacheService:150 - ### Before get()
> 11:40:18,575 [ INFO] CacheService:152 - ### After  get()
> 11:40:18,576 [ INFO] CacheService:150 - ### Before get()
> 11:40:18,595 [ INFO] CacheService:152 - ### After  get()
> 11:40:18,595 [ INFO] CacheService:150 - ### Before get()
> 11:40:18,603 [ INFO] CacheService:152 - ### After  get()
> 11:40:18,786 [ INFO] CacheService:150 - ### Before get()
> 11:40:18,795 [ INFO] CacheService:152 - ### After  get()
> 11:56:10,142 [ INFO] CacheService:150 - ### Before get()
> 11:56:15,208 [ INFO] CacheService:152 - ### After  get()
> 11:56:15,208 [ INFO] CacheService:150 - ### Before get()
> 11:56:15,214 [ INFO] CacheService:152 - ### After  get()
> 11:56:15,214 [ INFO] CacheService:150 - ### Before get()
> 11:56:15,228 [ INFO] CacheService:152 - ### After  get()
> 11:56:15,229 [ INFO] CacheService:150 - ### Before get()
> 11:56:15,243 [ INFO] CacheService:152 - ### After  get()
> 11:56:15,244 [ INFO] CacheService:150 - ### Before get()
> 11:56:15,247 [ INFO] CacheService:152 - ### After  get()
> 11:56:15,247 [ INFO] CacheService:150 - ### Before get()
> 11:56:15,250 [ INFO] CacheService:152 - ### After  get()
> 11:56:15,250 [ INFO] CacheService:150 - ### Before get()
> 11:56:15,255 [ INFO] CacheService:152 - ### After  get()
> 11:56:15,256 [ INFO] CacheService:150 - ### Before get()
> 11:56:15,258 [ INFO] CacheService:152 - ### After  get()
> 11:56:15,280 [ INFO] CacheService:150 - ### Before get()
>
>
> The original code is quite complicated so I put a simplified version here:
>
> private IgniteCache<String, Vendor> cache;
>
> public Vendor getVendor(String vendorCode) {
> log.info("### Before get()");
> Vendor vendor = cache.get(vendorCode);
> log.info("### After  get()");
>
> if (vendor == null) {
> vendor = findVendorFromDB(vendorCode);
> }
>
> return vendor;
> }
>
>
> Thanks in advance!
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/How-to-troubleshoot-a-slow-client-node-get-tp6250.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: "ArrayIndexOutOfBoundsException" happened when doing qurey in version 1.6.0

2016-07-13 Thread ght230
What trace do you need? 

The log files in work/log?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ArrayIndexOutOfBoundsException-happened-when-doing-qurey-in-version-1-6-0-tp6245p6253.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Does Apache Ignite support streaming serialization while returning results from the compute grid?

2016-07-13 Thread Sahmoud, Mohamed
Hi,

Currently we are using Apache Ignite for caching a huge amount of data (> 500 GB)
and performing some computations over it. One of our computations requires
returning a big chunk of data (> 4 GB after serialization) from the compute node
to the requester, which caused an integer overflow during serialization.

Exception:
INFO | jvm 3 | 2016/07/07 11:29:55 | Caused by:
java.lang.NegativeArraySizeException: null
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.util.io.GridUnsafeDataOutput.requestFreeSize(GridUnsafeDataOutput.java:153)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.util.io.GridUnsafeDataOutput.writeInt(GridUnsafeDataOutput.java:352)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedObjectOutputStream.writeInt(GridOptimizedObjectOutputStream.java:602)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedObjectOutputStream.writeArrayList(GridOptimizedObjectOutputStream.java:320)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedClassDescriptor.write(GridOptimizedClassDescriptor.java:779)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedObjectOutputStream.writeObject0(GridOptimizedObjectOutputStream.java:201)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedObjectOutputStream.writeFields(GridOptimizedObjectOutputStream.java:485)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedObjectOutputStream.writeSerializable(GridOptimizedObjectOutputStream.java:306)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedClassDescriptor.write(GridOptimizedClassDescriptor.java:829)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedObjectOutputStream.writeObject0(GridOptimizedObjectOutputStream.java:201)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedObjectOutputStream.writeArrayList(GridOptimizedObjectOutputStream.java:323)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedClassDescriptor.write(GridOptimizedClassDescriptor.java:779)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedObjectOutputStream.writeObject0(GridOptimizedObjectOutputStream.java:201)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedObjectOutputStream.writeFields(GridOptimizedObjectOutputStream.java:485)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedObjectOutputStream.writeSerializable(GridOptimizedObjectOutputStream.java:306)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedClassDescriptor.write(GridOptimizedClassDescriptor.java:829)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedObjectOutputStream.writeObject0(GridOptimizedObjectOutputStream.java:201)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedObjectOutputStream.writeArrayList(GridOptimizedObjectOutputStream.java:323)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedClassDescriptor.write(GridOptimizedClassDescriptor.java:779)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedObjectOutputStream.writeObject0(GridOptimizedObjectOutputStream.java:201)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedObjectOutputStream.writeFields(GridOptimizedObjectOutputStream.java:485)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedObjectOutputStream.defaultWriteObject(GridOptimizedObjectOutputStream.java:655)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
java.util.Collections$SynchronizedCollection.writeObject(Collections.java:2081)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
java.lang.reflect.Method.invoke(Method.java:498)
INFO | jvm 3 | 2016/07/07 11:29:55 | at 
org.gridgain.grid.marshaller.optimized.GridOptimizedObjectOutputStream.writeSerializable(GridOptimizedObjectOutputStream.java:296)
INFO | jvm 3 | 2016/07/07 11:29:55 | ... 22 common frames omitted

We were using GridGain 6.2 and have currently moved to Apache Ignite 1.5. I
checked the code and found that the part responsible for serialization is the
same, so the problem is not solved in Apache Ignite either.
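One workaround (our suggestion here, not an official fix) is to never return one giant serializable object: split the result into chunks whose serialized size stays far below the 2^31-1 byte array limit and return them piecewise. A minimal, self-contained chunking sketch:

```java
import java.util.ArrayList;
import java.util.List;

public class Chunker {
    // Split a large result list into slices of at most chunkSize elements so
    // that no single slice approaches the 2 GB serialized-array limit.
    static <T> List<List<T>> chunk(List<T> all, int chunkSize) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < all.size(); i += chunkSize)
            out.add(new ArrayList<>(all.subList(i, Math.min(i + chunkSize, all.size()))));
        return out;
    }

    public static void main(String[] args) {
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i < 10; i++)
            data.add(i);
        // 10 elements with chunkSize 4 -> chunks of 4, 4 and 2 elements.
        System.out.println(chunk(data, 4).size());
    }
}
```

In the compute-grid case, each chunk would be sent back as a separate message (or written to a shared cache for the requester to read) instead of one monolithic job result.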


How to troubleshoot a slow client node get()

2016-07-13 Thread tracel
I have an Ignite (1.5.0.final) cache client node started in a Tomcat 8.0.32;
the client node connects to a server node started on the same machine.

Sometimes a get() needs some 5 seconds, while most other get() calls take
almost no time.
I wonder where the 5 seconds are spent and how I can troubleshoot it.

I am still unable to reproduce the symptom with another application; I will
keep trying, but hopefully someone can shed some light here.

Here is the log output, only the get() at 11:40:13 and 11:56:10 were taking
longer time:

11:40:13,333 [ INFO] CacheService:150 - ### Before get()
11:40:18,503 [ INFO] CacheService:152 - ### After  get()
11:40:18,505 [ INFO] CacheService:150 - ### Before get()
11:40:18,528 [ INFO] CacheService:152 - ### After  get()
11:40:18,529 [ INFO] CacheService:150 - ### Before get()
11:40:18,538 [ INFO] CacheService:152 - ### After  get()
11:40:18,538 [ INFO] CacheService:150 - ### Before get()
11:40:18,558 [ INFO] CacheService:152 - ### After  get()
11:40:18,558 [ INFO] CacheService:150 - ### Before get()
11:40:18,567 [ INFO] CacheService:152 - ### After  get()
11:40:18,567 [ INFO] CacheService:150 - ### Before get()
11:40:18,575 [ INFO] CacheService:152 - ### After  get()
11:40:18,576 [ INFO] CacheService:150 - ### Before get()
11:40:18,595 [ INFO] CacheService:152 - ### After  get()
11:40:18,595 [ INFO] CacheService:150 - ### Before get()
11:40:18,603 [ INFO] CacheService:152 - ### After  get()
11:40:18,786 [ INFO] CacheService:150 - ### Before get()
11:40:18,795 [ INFO] CacheService:152 - ### After  get()
11:56:10,142 [ INFO] CacheService:150 - ### Before get()
11:56:15,208 [ INFO] CacheService:152 - ### After  get()
11:56:15,208 [ INFO] CacheService:150 - ### Before get()
11:56:15,214 [ INFO] CacheService:152 - ### After  get()
11:56:15,214 [ INFO] CacheService:150 - ### Before get()
11:56:15,228 [ INFO] CacheService:152 - ### After  get()
11:56:15,229 [ INFO] CacheService:150 - ### Before get()
11:56:15,243 [ INFO] CacheService:152 - ### After  get()
11:56:15,244 [ INFO] CacheService:150 - ### Before get()
11:56:15,247 [ INFO] CacheService:152 - ### After  get()
11:56:15,247 [ INFO] CacheService:150 - ### Before get()
11:56:15,250 [ INFO] CacheService:152 - ### After  get()
11:56:15,250 [ INFO] CacheService:150 - ### Before get()
11:56:15,255 [ INFO] CacheService:152 - ### After  get()
11:56:15,256 [ INFO] CacheService:150 - ### Before get()
11:56:15,258 [ INFO] CacheService:152 - ### After  get()
11:56:15,280 [ INFO] CacheService:150 - ### Before get()


The original code is quite complicated so I put a simplified version here:

private IgniteCache<String, Vendor> cache;

public Vendor getVendor(String vendorCode) {
log.info("### Before get()");
Vendor vendor = cache.get(vendorCode);
log.info("### After  get()");

if (vendor == null) {
vendor = findVendorFromDB(vendorCode);
}   

return vendor;
}
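To see where the time goes more precisely than interleaved log lines allow, the call can be wrapped with System.nanoTime(). This is a generic sketch: the cache call is replaced by a placeholder supplier, since only the timing wrapper is the point.

```java
import java.util.function.Supplier;

public class CallTimer {
    // Wrap a call and report its latency; unlike log timestamps this does not
    // depend on logger flushing and has nanosecond resolution.
    static <T> T timed(String label, Supplier<T> call) {
        long t0 = System.nanoTime();
        T result = call.get();
        long ms = (System.nanoTime() - t0) / 1_000_000;
        System.out.println(label + " took " + ms + " ms");
        return result;
    }

    public static void main(String[] args) {
        // Stand-in for cache.get(vendorCode).
        String vendor = timed("get()", () -> "vendor-42");
        System.out.println(vendor);
    }
}
```

In the real code this would be `Vendor vendor = timed("get()", () -> cache.get(vendorCode));`, which pins the delay to the get() itself rather than to logging.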


Thanks in advance!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-troubleshoot-a-slow-client-node-get-tp6250.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.