Re: Scaling with SQL query

2018-06-26 Thread Pavel Vinokurov
Hi Tom,

In the case of a replicated cache, Ignite plans the execution of the SQL
query across the whole cluster by splitting it into multiple map queries and a
single reduce query.
Thus there may be communication overhead caused by the "reduce"
node collecting data from multiple nodes.
Please show the metrics for this query for your configuration.
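As a starting point, here is a minimal sketch (Java API; the cache name "Logs" is an
assumption matching the table in your query) of how to see the generated map/reduce
split with EXPLAIN and how to read the cache's query metrics:

// classes are from org.apache.ignite and org.apache.ignite.cache.query
IgniteCache<?, ?> cache = ignite.cache("Logs"); // assumed cache name

// Prints the map query plan(s) followed by the reduce query plan.
List<List<?>> plan = cache.query(new SqlFieldsQuery(
    "EXPLAIN SELECT * FROM Logs ORDER BY time DESC LIMIT 100")).getAll();
plan.forEach(row -> System.out.println(row.get(0)));

// Aggregated metrics for the cache: executions, min/max/avg time, failures.
System.out.println(cache.queryMetrics());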

Thanks,
Pavel


2018-06-26 6:24 GMT+03:00 Tom M :

> Hi,
>
> I have a cluster of 10 nodes, and a cache with replication factor 3 and no
> persistence enabled.
> The SQL query is pretty simple -- "SELECT * FROM Logs ORDER BY time DESC
> LIMIT 100".
> I have checked that the index on the "time" attribute is applied.
>
> When I increase the number of nodes, throughput drops and latency
> increases.
> Can you please explain why and how Ignite processes this SQL request?
>



-- 

Regards

Pavel Vinokurov


Best practice for class versioning: marshaller error

2018-06-26 Thread Calvin KL Wong, CLSA
Hi,

I am running Ignite 2.3 using Cassandra as my persistence store.

I got an unmarshalling error when a server node tried to unmarshal an old 
version of an object from Cassandra.

This is the scenario:

1.   An object of ClassA (older version) is serialized and persisted into 
Cassandra.

2.   ClassA is updated with new fields.

3.   The server node's classpath has the new ClassA class file.

4.   The server node is deployed to the Ignite grid.  It tries to deserialize 
the serialized object of ClassA (older version) and return the result to a 
query, and then gets this exception:


Caused by: org.apache.ignite.IgniteCheckedException: Failed to deserialize 
object with given class loader: 
[clsLdr=org.apache.ignite.internal.processors.cache.GridCacheDeploymentManager$CacheClassLoader@3bc180c9,
 err=Unexpected error occurred during unmarshalling of an instance of the 
class: java.time.Ser. Check that all nodes are running the same version of 
Ignite and that all nodes have GridOptimizedMarshaller configured with 
identical optimized classes lists, if any (see setClassNames and 
setClassNamesPath methods). If your serialized classes implement 
java.io.Externalizable interface, verify that serialization logic is correct.]

at 
org.apache.ignite.internal.marshaller.optimized.OptimizedMarshaller.unmarshal0(OptimizedMarshaller.java:236)
 ~[ignite-core-2.3.0-clsa.20180130.59.jar:2.3.0-clsa.20180130.59]

at

I believe this use case is pretty normal.
Do you have any recommendation/best practice on how I can avoid this error?  Is 
BinaryObject able to prevent this issue from happening?

Thanks,
Calvin

Calvin KL Wong
Sr. Lead Engineer, Execution Services


Cache size in offheap mem in bytes

2018-06-26 Thread Prasad Bhalerao
Hi,
An object takes 200 bytes on heap, and the cache has 50 million such objects
stored in it.

Is it OK to calculate the cache size as follows?

Cache size in bytes = object size * number of objects
Cache size in bytes = 200 * 50 million


Thanks,
Prasad


Re: Change field type in BinaryObjectBuilder

2018-06-26 Thread akurbanov
Got it. Once you have serialized with a given builder, the type is cached
globally in the internal metadata cache. So the behaviour you observe is expected
and correct: you can change the schema dynamically, but you have to preserve
compatibility. This means that you can add and remove fields, but not change
their type, because a binary type is not bound to a given cache. The serialized
object can be stored and used in other ways, e.g. stored in other caches, or a
client node using an outdated schema may try to deserialize the object.

So you have to create a new binary type or add a field with a different name (see
the sketch below), or do a complete restart with cleanup.
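For the second option, a minimal sketch (type and field names below are placeholders,
not from your code): keep the old field untouched and introduce a new field carrying
the desired type.

BinaryObjectBuilder builder = ignite.binary().builder("MyType"); // placeholder type name
builder.setField("amountStr", "42");             // existing String field stays as-is
builder.setField("amountLong", 42L, Long.class); // new field with the new type
BinaryObject obj = builder.build();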



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: And again... Failed to get page IO instance (page content is corrupted)

2018-06-26 Thread Olexandr K
Hi Andrey,

I see fix version 2.7 in Jira:
https://issues.apache.org/jira/browse/IGNITE-8659
This is a critical bug: bouncing a server node at the wrong time causes
a catastrophe.
In effect this means no availability - I had to clean the data folders to start my
cluster after that.

BR, Oleksandr


On Fri, Jun 22, 2018 at 4:06 PM, Andrey Mashenkov <
andrey.mashen...@gmail.com> wrote:

> Hi,
>
> We've found and fixed a few issues related to ExpiryPolicy usage.
> Most likely your issue is [1], and the fix is planned for the Ignite 2.6 release.
>
> [1] https://issues.apache.org/jira/browse/IGNITE-8659
>
>
> On Fri, Jun 22, 2018 at 8:43 AM Olexandr K 
> wrote:
>
>> Hi Team,
>>
>> Issue is still there in 2.5.0
>>
>> Steps to reproduce:
>> 1) start 2 servers + 2 clients topology
>> 2) start load testing on client nodes
>> 3) stop server 1
>> 4) start server 1
>> 5) stop server 1 again when rebalancing is in progress
>> => and we got data corrupted here, see error below
>> => we were not able to restart Ignite cluster after that and need to
>> perform data folders cleanup...
>>
>> 2018-06-21 11:28:01.684 [ttl-cleanup-worker-#43] ERROR  - Critical system
>> error detected. Will be handled accordingly to configured handler
>> [hnd=class o.a.i.failure.StopNodeOrHaltFailureHandler,
>> failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, err=class
>> o.a.i.IgniteException: Runtime failure on bounds: [lower=null,
>> upper=PendingRow [
>> org.apache.ignite.IgniteException: Runtime failure on bounds:
>> [lower=null, upper=PendingRow []]
>> at org.apache.ignite.internal.processors.cache.persistence.
>> tree.BPlusTree.find(BPlusTree.java:971) ~[ignite-core-2.5.0.jar:2.5.0]
>> at org.apache.ignite.internal.processors.cache.persistence.
>> tree.BPlusTree.find(BPlusTree.java:950) ~[ignite-core-2.5.0.jar:2.5.0]
>> at org.apache.ignite.internal.processors.cache.
>> IgniteCacheOffheapManagerImpl.expire(IgniteCacheOffheapManagerImpl.java:1024)
>> ~[ignite-core-2.5.0.jar:2.5.0]
>> at org.apache.ignite.internal.processors.cache.
>> GridCacheTtlManager.expire(GridCacheTtlManager.java:197)
>> ~[ignite-core-2.5.0.jar:2.5.0]
>> at org.apache.ignite.internal.processors.cache.
>> GridCacheSharedTtlCleanupManager$CleanupWorker.body(
>> GridCacheSharedTtlCleanupManager.java:137) [ignite-core-2.5.0.jar:2.5.0]
>> at 
>> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>> [ignite-core-2.5.0.jar:2.5.0]
>> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_162]
>> Caused by: java.lang.IllegalStateException: Item not found: 2
>> at org.apache.ignite.internal.processors.cache.persistence.
>> tree.io.AbstractDataPageIO.findIndirectItemIndex(AbstractDataPageIO.java:341)
>> ~[ignite-core-2.5.0.jar:2.5.0]
>> at org.apache.ignite.internal.processors.cache.persistence.
>> tree.io.AbstractDataPageIO.getDataOffset(AbstractDataPageIO.java:450)
>> ~[ignite-core-2.5.0.jar:2.5.0]
>> at org.apache.ignite.internal.processors.cache.persistence.
>> tree.io.AbstractDataPageIO.readPayload(AbstractDataPageIO.java:492)
>> ~[ignite-core-2.5.0.jar:2.5.0]
>> at org.apache.ignite.internal.processors.cache.persistence.
>> CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:150)
>> ~[ignite-core-2.5.0.jar:2.5.0]
>> at org.apache.ignite.internal.processors.cache.persistence.
>> CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:102)
>> ~[ignite-core-2.5.0.j
>>
>> BR, Oleksandr
>>
>> On Thu, Jun 14, 2018 at 2:51 PM, Olexandr K <
>> olexandr.kundire...@gmail.com> wrote:
>>
>>> Upgraded to 2.5.0 and didn't get such error so far..
>>> Thanks!
>>>
>>> On Wed, Jun 13, 2018 at 4:58 PM, dkarachentsev <
>>> dkarachent...@gridgain.com> wrote:
>>>
 It would be better to upgrade to 2.5, where it is fixed.
 But if you want to overcome this issue in your version, you need to add the
 ignite-indexing dependency to your classpath and configure SQL indexes.
 For example [1]; just modify it to work with Spring XML, along these lines
 (placeholder class names, assuming the standard indexedTypes property):

 <property name="indexedTypes">
     <list>
         <value>org.your.KeyObject</value>
         <value>org.your.ValueObject</value>
     </list>
 </property>

 [1]
 https://apacheignite-sql.readme.io/docs/schema-and-indexes#section-registering-indexed-types

 Thanks!
 -Dmitry



 --
 Sent from: http://apache-ignite-users.70518.x6.nabble.com/

>>>
>>>
>>
>
> --
> Best regards,
> Andrey V. Mashenkov
>


RE: Ignite Node failure - Node out of topology (SEGMENTED)

2018-06-26 Thread naresh.goty
Thanks for the recommendation, but we already identified and addressed the
issues with GC pauses in the JVM, and now we cannot find any long GC activity
at the time of the node failure due to network segmentation (please find
the attached screenshot of GC activity from the Dynatrace agent).

From the screenshot, there are only young-generation GC collections, and those
take < 100 ms.

We can still enable GC logs, but I strongly suspect the issue is beyond JVM
pauses.

Thanks,
Naresh

 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Igfs - questions and optimal configuration settings

2018-06-26 Thread Denis Mekhanikov
Your plan sounds legit.

Basically, Ignite splits your files into chunks of data and puts them into
an ordinary distributed cache.

You can find general recommendations on configuring the Ignite cluster in
the documentation: https://apacheignite.readme.io/docs/

Regarding IGFS: you are going to need to configure the data region size for
the IGFS cache.
You can either configure the size of the default data region, or specify a
cache name in the *FileSystemConfiguration#dataCacheName* property and
configure a data region for that cache separately.
More on data region configuration:
https://apacheignite.readme.io/docs/memory-configuration

You can also try to tune the IGFS-specific parameters that are available in
the *FileSystemConfiguration* class.
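As a rough illustration of the first option, here is a minimal sketch (sizes are example
values, not recommendations) of sizing the default data region, which backs the IGFS data
cache unless a separate cache/region is configured for it:

// classes are from org.apache.ignite.configuration
DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.getDefaultDataRegionConfiguration()
    .setInitialSize(1L * 1024 * 1024 * 1024)  // 1 GB
    .setMaxSize(8L * 1024 * 1024 * 1024);     // 8 GB

IgniteConfiguration cfg = new IgniteConfiguration()
    .setDataStorageConfiguration(storageCfg)
    .setFileSystemConfiguration(new FileSystemConfiguration().setName("igfs"));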

Denis

Tue, Jun 26, 2018, 17:41 matt :

> Hi,
>
> I've recently started using the Ignite FileSystem (igfs) in our API, to
> fully buffer an incoming stream of byte[] values (chunked InputStream). I'm
> doing this because that stream then needs to be sent along to another
> remote
> service, and I'd like the ability to retry without telling the sender to
> send again. The thinking is that if this all gets "buffered" into Ignite,
> then pulling the "file" out again and sending/retrying should be possible
> and present no burden on the original sender. After the file has been
> successfully sent, it is then deleted from Ignite -- this all seems to
> work,
> however, is there a better way?
>
> If this approach is a good one, I have questions on how to configure. I had
> to look around quite a bit to get a working configuration (version 2.3) and
> even now, I'm not clear as to what is needed in order to get a good
> configuration setup, based on environment/memory/hardware etc.. Is it OK to
> just use the default settings?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Different versions in ignite remote and local nodes.

2018-06-26 Thread dkarachentsev
Hi,

Where did you get those images? In the logs of all your instances, do you see
version 2.5.0?

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Different versions in ignite remote and local nodes.

2018-06-26 Thread Denis Mekhanikov
Your Kubernetes image and your jar contain different versions of Ignite.

Check the logs to see which one has version 2.4.0. It's printed at
startup.

Denis

Tue, Jun 26, 2018, 15:51 wadhwasahil :

> I am trying to connect my spark client with ignite cluster version 2.5.0.
> When I run my spark submit job.
>
> **sudo /opt/spark-2.3.0-bin-hadoop2.7/bin/spark-submit --master
> k8s://https://35.192.214.68 --deploy-mode cluster --name sparkIgnite
> --class
> org.blk.igniteSparkResearch.ScalarSharedRDDExample --conf
> spark.executor.instances=3 --conf spark.app.name=sharedSparkIgnite --conf
> spark.kubernetes.authenticate.driver.serviceAccountName=ignite --conf
> spark.kubernetes.container.image=
> us.gcr.io/nlp-research-198620/ignite-spark:v2
>
> local:///opt/spark/jars/igniteSpark-1.0-SNAPSHOT-jar-with-dependencies.jar**
>
> I get the following error.
>
> **class org.apache.ignite.IgniteException: Failed to start manager:
> GridManagerAdapter [enabled=true,
> name=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager*
>
> *Caused by: class org.apache.ignite.spi.IgniteSpiException: Local node and
> remote node have different version numbers (node will not join, Ignite does
> not support rolling updates, so versions must be exactly the same)
> [locBuildVer=2.5.0, rmtBuildVer=2.4.0,
> locNodeAddrs=[ignite-cluster-68787659f9-626k6/0:0:0:0:0:0:0:1%lo,
> /10.8.2.82, /127.0.0.1],
>
> rmtNodeAddrs=[sparkignite-8bad224a0187324ba6f98da08e152c5e-driver/0:0:0:0:0:0:0:1%lo,
> /10.8.2.87, /127.0.0.1], locNodeId=d0c82763-8931-4d66-89ad-2689d8b3d01a,
> rmtNodeId=b6383f55-ecc6-4d5f-acb2-4bd9970d3fc2]*
> *
> The error says that my Ignite cluster (local) is using 2.5.0 and my remote
> pod is using 2.4.0. But the image I have contains Ignite 2.5.0, so
> where is the 2.4.0 version coming from?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Igfs - questions and optimal configuration settings

2018-06-26 Thread matt
Hi,

I've recently started using the Ignite FileSystem (igfs) in our API, to
fully buffer an incoming stream of byte[] values (chunked InputStream). I'm
doing this because that stream then needs to be sent along to another remote
service, and I'd like the ability to retry without telling the sender to
send again. The thinking is that if this all gets "buffered" into Ignite,
then pulling the "file" out again and sending/retrying should be possible
and present no burden on the original sender. After the file has been
successfully sent, it is then deleted from Ignite -- this all seems to work,
however, is there a better way?

If this approach is a good one, I have questions on how to configure. I had
to look around quite a bit to get a working configuration (version 2.3) and
even now, I'm not clear as to what is needed in order to get a good
configuration setup, based on environment/memory/hardware etc.. Is it OK to
just use the default settings?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Cluster getting stuck when new node Join or release

2018-06-26 Thread dkarachentsev
Hi,

Thread dumps look healthy. Please share the full logs from the time when you took
those thread dumps, or take new ones (thread dumps + logs).

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Change field type in BinaryObjectBuilder

2018-06-26 Thread akurbanov
Hi, you can go two ways:

The first: since metadata is also cached, you can clear it by removing the
ignite/work/marshaller/ directory (and clearing all persistent data, if any
exists) and restarting the cluster.

The second is to try to remove the field you want to change first and then
add it again, like this:

// Rebuild the object: drop the field with the old type, then re-add it with the new one.
BinaryObjectBuilder builder = ignite.binary().builder("type");
builder.removeField("field").build();
builder.setField("field", null, Type.class).build();




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Change field type in BinaryObjectBuilder

2018-06-26 Thread daivanov
Hello. I have a problem with BinaryObjectBuilder.

I want to change a field's type in BinaryObjectBuilder, but even after destroying
the cache and creating a new builder with the new field type, I still get the old
type from the ignite.binary().type().fieldType() method. I also get an exception
about a wrong field type when I try to insert new data.

Could you tell me the proper way to change a field type in
BinaryObjectBuilder?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Different versions in ignite remote and local nodes.

2018-06-26 Thread wadhwasahil
I am trying to connect my spark client with ignite cluster version 2.5.0.
When I run my spark submit job.

**sudo /opt/spark-2.3.0-bin-hadoop2.7/bin/spark-submit --master
k8s://https://35.192.214.68 --deploy-mode cluster --name sparkIgnite --class
org.blk.igniteSparkResearch.ScalarSharedRDDExample --conf
spark.executor.instances=3 --conf spark.app.name=sharedSparkIgnite --conf
spark.kubernetes.authenticate.driver.serviceAccountName=ignite --conf
spark.kubernetes.container.image=us.gcr.io/nlp-research-198620/ignite-spark:v2
local:///opt/spark/jars/igniteSpark-1.0-SNAPSHOT-jar-with-dependencies.jar**

I get the following error.

**class org.apache.ignite.IgniteException: Failed to start manager:
GridManagerAdapter [enabled=true,
name=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager*

*Caused by: class org.apache.ignite.spi.IgniteSpiException: Local node and
remote node have different version numbers (node will not join, Ignite does
not support rolling updates, so versions must be exactly the same)
[locBuildVer=2.5.0, rmtBuildVer=2.4.0,
locNodeAddrs=[ignite-cluster-68787659f9-626k6/0:0:0:0:0:0:0:1%lo,
/10.8.2.82, /127.0.0.1],
rmtNodeAddrs=[sparkignite-8bad224a0187324ba6f98da08e152c5e-driver/0:0:0:0:0:0:0:1%lo,
/10.8.2.87, /127.0.0.1], locNodeId=d0c82763-8931-4d66-89ad-2689d8b3d01a,
rmtNodeId=b6383f55-ecc6-4d5f-acb2-4bd9970d3fc2]*
*
The error says that my Ignite cluster (local) is using 2.5.0 and my remote
pod is using 2.4.0. But the image I have contains Ignite 2.5.0, so
where is the 2.4.0 version coming from?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Strange cache size

2018-06-26 Thread Michaelikus
This is example of data stored in cache taked from visor.

java.lang.Long | 2147604480 | o.a.i.i.binary.BinaryObjectImpl |
Cache.UserObjectCacheItem [hash=684724513, UserName=omguser,
LastUpdated=System.DateTime [idHash=658840403, hash=1247822310,
ticks=636656011684408212, dateData=5248342030111796116]]


The total number of records in the cache is 86+ million.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Strange cache size

2018-06-26 Thread Michaelikus
To Denis:

About measuring:
I use the cluster only for this type of cache and nothing more, so it is worrying
that it takes so much memory.

I have already read the docs about memory metrics, and I'm concerned about this
warning:

> Metrics collection is not a free operation and might affect the
> performance of an application. For this reason, the metrics is turned off
> by default.






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Strange cache size

2018-06-26 Thread Mikael

Hi!

The overhead for an entry in Ignite is much bigger than for an SQL Server 
record: you have something like 200 bytes of overhead for each entry, and the 
binary format used by SQL Server is more compact than the binary storage 
format used in Ignite, so if your records are small, the size you see might 
be correct.


Are you keeping all of the data in RAM at the same time? Wouldn't it be 
better to keep it in SQL Server and cache a smaller set of records 
in Ignite?


Are you copying the data into Ignite as a one-time process, or are you 
using a CacheStoreAdapter?


The section about durable memory / eviction policies in the documentation 
explains how to control the amount of RAM that is used.
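For reference, a minimal sketch (region name and sizes are example values) of capping a
data region's RAM and letting Ignite evict pages once the region fills up:

// classes are from org.apache.ignite.configuration
DataRegionConfiguration region = new DataRegionConfiguration()
    .setName("boundedRegion")                                // example region name
    .setMaxSize(22L * 1024 * 1024 * 1024)                    // e.g. 22 GB per node
    .setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU); // evict data pages when full

IgniteConfiguration cfg = new IgniteConfiguration()
    .setDataStorageConfiguration(new DataStorageConfiguration()
        .setDataRegionConfigurations(region));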


Mikael


On 2018-06-26 at 10:11, Michaelikus wrote:

Hi dear dr. Allcome ;)

I have a table in MSSQL whose size is 4 GB.
I've transferred it to a cache, and it takes more than 80 GB (5 nodes with 22 GB
off-heap each).
Can you advise me how I can monitor the size of the caches?

Regards,
Michael.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/






Re: Strange cache size

2018-06-26 Thread Denis Mekhanikov
Michael,

Take a look at the following page:
https://apacheignite.readme.io/docs/memory-metrics
To monitor off-heap memory usage, you can use the
*DataRegionMetrics#physicalMemorySize* metric.
If you multiply the result by the *DataRegionMetrics#pagesFillFactor* metric,
you will get an approximation of how much data you actually have in memory.
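A minimal sketch of reading those metrics (metrics must be enabled for the region, e.g.
via DataRegionConfiguration#setMetricsEnabled(true)):

Ignite ignite = Ignition.ignite();

for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
    long physical = m.getPhysicalMemorySize();  // bytes of RAM allocated for the region
    float fillFactor = m.getPagesFillFactor();  // fraction of allocated pages filled with data

    System.out.printf("Region %s: ~%d bytes of actual data%n",
        m.getName(), (long) (physical * fillFactor));
}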

The increase in size that you described doesn't sound OK. How did you
measure the occupied space?
What is the replication factor of your cache?

Denis

Tue, Jun 26, 2018, 11:18 Michaelikus :

> Hi dear dr. Allcome ;)
>
> I have a table in MSSQL whose size is 4 GB.
> I've transferred it to a cache, and it takes more than 80 GB (5 nodes with
> 22 GB
> off-heap each).
> Can you advise me how I can monitor the size of the caches?
>
> Regards,
> Michael.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


RE: Ignite Node failure - Node out of topology (SEGMENTED)

2018-06-26 Thread dkarachentsev
Hi Naresh,

Actually, any JVM process hang can lead to segmentation. If a node is
not responsive for longer than failureDetectionTimeout, it will be kicked
out of the cluster to prevent grid-wide performance degradation.

It works according to the following scenario. Let's say we have 3 nodes in a
ring: n1 -> n2 -> n3. Discovery messages travel over the ring, along with metrics
and connection checks, at a predefined interval. Node 2 starts experiencing
issues such as a GC pause or an OS failure that forces the process to stop.
During that time node 1 is unable to send a message to n2 (it doesn't receive an
ack). n1 waits for failureDetectionTimeout and establishes a connection to n3:
n1 -> n3, with n2 no longer connected.

The cluster treats n2 as failed. When n2 comes back, it tries to connect to n3
and send a message across the ring, but it receives a message that it is out of
the grid. For n2 that means it was segmented, and the best it can do is stop.

To check whether there were long JVM or system pauses, you can enable GC logs.
If the pauses are longer than failureDetectionTimeout, the node will be segmented.

The best way would be to solve the pauses, but as a workaround you can increase
the timeout (see the sketch below).
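A minimal sketch of that workaround (the value is an example; the default is 10,000 ms):

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setFailureDetectionTimeout(30_000); // 30 seconds instead of the 10-second default
Ignition.start(cfg);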

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Strange cache size

2018-06-26 Thread Michaelikus
Hi dear dr. Allcome ;)

I have a table in MSSQL whose size is 4 GB.
I've transferred it to a cache, and it takes more than 80 GB (5 nodes with 22 GB
off-heap each).
Can you advise me how I can monitor the size of the caches?

Regards,
Michael.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Kerberos installation

2018-06-26 Thread Raghav
Hello Ravi,

Were you able to solve this issue?

If so, could you kindly post the correct configuration for a Kerberos-enabled
Ignite installation?

Thanks in advance!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Distributed Closures Apply method/c#

2018-06-26 Thread Pavel Tupitsyn
>  ThreadAbortException: Thread was being aborted.

This is not related to Ignite. Thread.Abort is called on the .NET side, and
Ignite never does that.

Please check whether ASP.NET or some other framework is involved.

On Mon, Jun 25, 2018 at 10:57 PM aealexsandrov 
wrote:

> Very strange. By default, there is no timeout. I will take a closer look.
>
> Also, is it possible that you cancel the closure somehow?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ClassCastException in Hibernate QueryCache

2018-06-26 Thread Kestas
Attaching the Ignite config. The failing code is a simple execution of a Hibernate
query. Here is a larger stack trace:
Caused by: java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast
to [Ljava.io.Serializable;
at
org.hibernate.cache.internal.StandardQueryCache.get(StandardQueryCache.java:189)
at org.hibernate.loader.Loader.getResultFromQueryCache(Loader.java:2587)
at org.hibernate.loader.Loader.listUsingQueryCache(Loader.java:2495)
at org.hibernate.loader.Loader.list(Loader.java:2467)
at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:502)
at
org.hibernate.hql.internal.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:384)
at
org.hibernate.engine.query.spi.HQLQueryPlan.performList(HQLQueryPlan.java:216)
at org.hibernate.internal.SessionImpl.list(SessionImpl.java:1490)
at
org.hibernate.query.internal.AbstractProducedQuery.doList(AbstractProducedQuery.java:1445)
at
org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1414)
at org.hibernate.query.Query.getResultList(Query.java:146)
at
com.aaa.bbb.base.dataload.dao.GenericJpaQueryDAO.executeGenericQuery(GenericJpaQueryDAO.java:55)


For now I created a 'dirty' workaround in
org.apache.ignite.cache.hibernate.HibernateQueryResultsRegion#get to continue with my testing:

@Override
public Object get(SharedSessionContractImplementor sharedSessionContractImplementor,
    Object key) throws CacheException {
    Object result = super.get(sharedSessionContractImplementor, key);
    if (result instanceof List) {
        List list = (List) result;
        if (list.size() > 1) {
            // First element in the list is the id, skip it.
            for (int i = 1; i < list.size(); i++) {
                Object row = list.get(i);
                // Convert Object[] to Serializable[].
                if (row != null && row.getClass().isArray()) {
                    Object[] rowArr = (Object[]) row;
                    list.set(i, Arrays.copyOf(rowArr, rowArr.length, Serializable[].class));
                }
            }
        }
    }
    return result;
}




On Mon, Jun 25, 2018 at 9:41 PM aealexsandrov 
wrote:

> Hi,
>
> Could you please provide some more details: cache configuration, an example
> of code that was failed and logs?
>
> BR,
> Andrei
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>




[Attachment: Spring beans XML Ignite configuration. The XML markup was stripped by the
mail archive; only fragments remain, including the Spring beans schema declaration and a
discovery address of 127.0.0.1.]