Ignite start-up connection timeout?

2016-06-28 Thread deleerhai
Hi,
Starting Ignite fails with the errors below. How can I solve this? And how can
I run queries across more than 3 caches? Please help me. Thank you very much!


[29 09:40:49,277 DEBUG] [localhost-startStop-1]
multicast.TcpDiscoveryMulticastIpFinder - Address receive timeout.
[29 09:40:50,203 DEBUG] [tcp-disco-msg-worker-#2%null%] tcp.TcpDiscoverySpi
- Message has been added to queue: TcpDiscoveryStatusCheckMessage
[creatorNode=TcpDiscoveryNode [id=ca4d625c-71cb-4de9-b810-61469afe0b35,
addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.31.156],
sockAddrs=[PC-20160314ERQT/192.168.31.156:47500, /0:0:0:0:0:0:0:1:47500,
/127.0.0.1:47500, /192.168.31.156:47500], discPort=47500, order=0,
intOrder=0, lastExchangeTime=1467164448141, loc=true,
ver=1.5.0#20151229-sha1:f1f8cda2, isClient=false], failedNodeId=null,
status=0, super=TcpDiscoveryAbstractMessage [sndNodeId=null,
id=7f2d0d99551-ca4d625c-71cb-4de9-b810-61469afe0b35, verifierNodeId=null,
topVer=0, pendingIdx=0, failedNodes=null, isClient=false]]
[29 09:40:50,203 DEBUG] [tcp-disco-msg-worker-#2%null%] tcp.TcpDiscoverySpi
- Processing message [cls=TcpDiscoveryStatusCheckMessage,
id=7f2d0d99551-ca4d625c-71cb-4de9-b810-61469afe0b35]
[29 09:40:50,204 DEBUG] [tcp-disco-msg-worker-#2%null%] tcp.TcpDiscoverySpi
- Ignore message, local node order is not initialized
[msg=TcpDiscoveryStatusCheckMessage [creatorNode=TcpDiscoveryNode
[id=ca4d625c-71cb-4de9-b810-61469afe0b35, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1,
192.168.31.156], sockAddrs=[PC-20160314ERQT/192.168.31.156:47500,
/0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500, /192.168.31.156:47500],
discPort=47500, order=0, intOrder=0, lastExchangeTime=1467164448141,
loc=true, ver=1.5.0#20151229-sha1:f1f8cda2, isClient=false],
failedNodeId=null, status=0, super=TcpDiscoveryAbstractMessage
[sndNodeId=null, id=7f2d0d99551-ca4d625c-71cb-4de9-b810-61469afe0b35,
verifierNodeId=null, topVer=0, pendingIdx=0, failedNodes=null,
isClient=false]], locNode=TcpDiscoveryNode [id=ca4d625c-71cb-4de9-b81
0-61469afe0b35, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.31.156],
sockAddrs=[PC-20160314ERQT/192.168.31.156:47500, /0:0:0:0:0:0:0:1:47500,
/127.0.0.1:47500, /192.168.31.156:47500], discPort=47500, order=0,
intOrder=0, lastExchangeTime=1467164448141, loc=true,
ver=1.5.0#20151229-sha1:f1f8cda2, isClient=false]]
[29 09:40:50,258 DEBUG] [localhost-startStop-1]
multicast.TcpDiscoveryMulticastIpFinder - Address receive timeout.
[29 09:40:50,260 DEBUG] [localhost-startStop-1]
multicast.TcpDiscoveryMulticastIpFinder - Received nodes addresses:
[PC-20160314ERQT/192.168.31.156:47500, /0:0:0:0:0:0:0:1:47500,
/127.0.0.1:47500]
[29 09:40:51,006 DEBUG] [grid-timeout-worker-#25%null%]
timeout.GridTimeoutProcessor - Timeout has occurred: CancelableTask
[id=5f2d0d99551-ea964ce2-f891-491e-8267-7f5240be0c80, endTime=1467164450992,
period=3000, cancel=false, task=MetricsUpdater [prevGcTime=-1,
prevCpuTime=-1,
super=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$MetricsUpdater@598ecc79]]
[29 09:40:51,278 ERROR] [localhost-startStop-1] tcp.TcpDiscoverySpi -
Exception on direct send: Connection refused: connect
java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method)
at
java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85)
at
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.openSocket(TcpDiscoverySpi.java:1266)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.openSocket(TcpDiscoverySpi.java:1241)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.sendMessageDirectly(ServerImpl.java:1086)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.sendJoinRequestMessage(ServerImpl.java:939)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:804)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:329)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:1835)
at
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:255)
at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:660)
at
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1505)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:917)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1688)
at

Re: Non-cluster mode

2016-06-28 Thread vkulichenko
Peter,

This doesn't make much sense to me. With OFFHEAP_TIERED, the eviction policy
should not change anything at all, so it sounds like a misconfiguration. Can
you provide the whole test so that I can run it and investigate?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Non-cluster-mode-tp5959p5982.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Continuous Query

2016-06-28 Thread Alex Osmakoff
Hi There,

I am using the Continuous Query mechanism and most of the time it works just fine.
However, in some cases (it seems intermittently), CQ does not pick up an update
in the cache when it should.

Could you please clarify the behaviour of Continuous Query in the following 
scenario:

My business logic might create multiple identical CQs in separate processing
tasks. As I have no control over where a particular task gets executed within
the grid, it is possible that two identical queries get created on the same
node. Now, when the cache gets updated and the remote filter picks up the update
to pass it to the local listener of the query, would the local listeners of both
queries be notified, or only one? I think the same applies to CACHE_PUT_EVENT
propagation in general: if there are two (or more) listeners and only one event,
would all the listeners be notified regardless of their location?

Many thanks,

Regards,

Alex

This email and any attachment is confidential. If you are not the intended 
recipient, please delete this message. Macquarie does not guarantee the 
integrity of any emails or attachments. For important disclosures and 
information about the incorporation and regulated status of Macquarie Group 
entities please see: www.macquarie.com/disclosures

Re: Re: argument type mismatch of oracle TIMESTAMP field when call loadCache

2016-06-28 Thread 胡永亮/Bob
Sorry, the information was not detailed enough.

My Ignite version is the latest release: 1.6.



胡永亮
 
Bob
 
From: Vasiliy Sisko
Date: 2016-06-28 19:18
To: user@ignite.apache.org
Subject: Re: Re: argument type mismatch of oracle TIMESTAMP field when call 
loadCache
Hello Bob. 

What version of Ignite are you using?

On Tue, Jun 28, 2016 at 12:01 PM, hu...@neusoft.com  wrote:
hi, Alexey

First, thanks for your reply.

My Oracle version is 12.1.0.1.0, and the DDL script is the following:

CREATE TABLE "UIGNITE"."KC21" 
   ( "AKB020" VARCHAR2(20) NOT NULL ENABLE, 
"AKC190" VARCHAR2(24) NOT NULL ENABLE, 
"AAC001" NUMBER(20,0) NOT NULL ENABLE, 
"AAZ500" VARCHAR2(50), 
"AKA130" VARCHAR2(3) NOT NULL ENABLE, 
"AAB001" NUMBER(20,0), 
"AKA101" VARCHAR2(3), 
"BKC192" TIMESTAMP (6), 
"AKC193" VARCHAR2(50), 
"BKC231" VARCHAR2(300), 
"BKC194" TIMESTAMP (6), 
"AKC195" VARCHAR2(3), 
"AKC196" VARCHAR2(50), 
"BKC232" VARCHAR2(300), 
"BKC317" VARCHAR2(3), 
"AKA151" FLOAT(126), 
"AKC021" VARCHAR2(3), 
"BKC021" VARCHAR2(3), 
"BKC197" VARCHAR2(3), 
"BKC120" VARCHAR2(3), 
"BKC128" VARCHAR2(3), 
"BKC126" VARCHAR2(3), 
"BAE073" VARCHAR2(16), 
"AKE013" VARCHAR2(3), 
"BKC023" VARCHAR2(3), 
"AAA027" VARCHAR2(6), 
"BKC378" VARCHAR2(50), 
"BKC379" VARCHAR2(50), 
"BKC380" VARCHAR2(50), 
"BKC381" VARCHAR2(50), 
"AAB034" VARCHAR2(16), 
"BAC888" VARCHAR2(16), 
"AAB301" VARCHAR2(16), 
"BKC390" VARCHAR2(20), 
"BKC500" VARCHAR2(30), 
"BKA130" VARCHAR2(3), 
"AKB021" VARCHAR2(200), 
"BKF050" VARCHAR2(20), 
"AKC273" VARCHAR2(50), 
"AKF002" VARCHAR2(100), 
"AMC020" TIMESTAMP (6), 
"AKE021" VARCHAR2(20), 
"AKE020" VARCHAR2(20), 
"BKC191" VARCHAR2(20), 
"BKC190" VARCHAR2(20), 
"AAE017" VARCHAR2(50), 
"AAE032" TIMESTAMP (6), 
"AAC999" VARCHAR2(16), 
"AAE135" VARCHAR2(20), 
"AAC003" VARCHAR2(50), 
"AAC004" VARCHAR2(3), 
"BAE450" NUMBER(20,0), 
"AAB999" VARCHAR2(16), 
"AAB004" VARCHAR2(500), 
"AAB019" VARCHAR2(3), 
"AAB020" VARCHAR2(3), 
"BKC233" VARCHAR2(2048), 
"BAE013" VARCHAR2(100), 
"BKC319" VARCHAR2(3), 
"AAE013" VARCHAR2(200), 
"BKC050" VARCHAR2(3), 
"BKC051" VARCHAR2(3), 
"BKC052" VARCHAR2(200), 
"BKC053" VARCHAR2(200), 
"BKC054" VARCHAR2(200), 
"BAZ001" NUMBER(20,0), 
"BAZ002" NUMBER(20,0), 
"BZE011" VARCHAR2(50), 
"BZE036" TIMESTAMP (6), 
"AAE011" VARCHAR2(50), 
"AAE036" TIMESTAMP (6), 
"AAE100" VARCHAR2(3), 
"BKC055" VARCHAR2(3), 
"BKC056" VARCHAR2(3), 
"BKC057" VARCHAR2(3), 
"BKC058" VARCHAR2(3), 
"BKB010" VARCHAR2(3), 
"BKB011" VARCHAR2(3), 
"BKC059" VARCHAR2(50), 
"BKC060" FLOAT(126), 
"BKC280" VARCHAR2(50), 
"BKC281" VARCHAR2(50), 
"BKC282" VARCHAR2(50), 
"BKC283" VARCHAR2(50), 
"BKC284" VARCHAR2(50), 
"BKE160" VARCHAR2(3), 
"BKE161" VARCHAR2(50), 
"BKE162" TIMESTAMP (6), 
"BKC401" VARCHAR2(3), 
"BKC285" VARCHAR2(300), 
"BKC286" VARCHAR2(300), 
"BKC287" VARCHAR2(100), 
"BKC288" VARCHAR2(200), 
"BKC289" VARCHAR2(50), 
"BKC290" VARCHAR2(50), 
"BKC291" FLOAT(126), 
"BKC292" FLOAT(126), 
"BAB024" VARCHAR2(200), 
"AAE009" VARCHAR2(200), 
"AAE010" VARCHAR2(200), 
 CONSTRAINT "pk_kc21" PRIMARY KEY ("AKB020", "AKC190")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS 
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "IGNITE"  ENABLE
   ) SEGMENT CREATION IMMEDIATE 
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "IGNITE" 



胡永亮
 
bob
 
From: Alexey Kuznetsov
Date: 2016-06-28 10:50
To: user@ignite.apache.org
Subject: Re: argument type mismatch of oracle TIMESTAMP field when call 
loadCache
Hi, Bob!

Could you post here a sample table script that would help us reproduce the
problem? Something like "create table test ()".

Also, just in case, please specify your Oracle version.

Thanks!

-- 
Alexey Kuznetsov



-- 
Vasiliy Sisko
GridGain Systems
www.gridgain.com


Re: Non-cluster mode

2016-06-28 Thread Peter Schmitt
Hi Val,

without the eviction policy, the setup breaks due to
  java.lang.OutOfMemoryError: GC overhead limit exceeded

I'm not sure why the heap grows (and it looks like GC can't free it) with the
mentioned off-heap config. In the end, almost everything should be stored
off-heap. However, maybe it's some kind of index that needs heap space. In that
case it would be great to know how much heap space is safe if the off-heap size
should be ~50 GB.

Kind regards
Peter



2016-06-28 21:14 GMT+02:00 vkulichenko :

> Peter,
>
> The only configuration that defines whether nodes join topology or not is
> discovery SPI (the one you provided in the first message)
>
> All looks fine, except that the eviction policy will be ignored in your case.
> It's used for entries that are stored in heap memory, while your cache is
> OFFHEAP_TIERED and therefore stores everything off-heap.
>
> -Val
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Non-cluster-mode-tp5959p5978.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Non-cluster mode

2016-06-28 Thread vkulichenko
Peter,

The only configuration that defines whether nodes join topology or not is
discovery SPI (the one you provided in the first message)

All looks fine, except that the eviction policy will be ignored in your case.
It's used for entries that are stored in heap memory, while your cache is
OFFHEAP_TIERED and therefore stores everything off-heap.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Non-cluster-mode-tp5959p5978.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite & Kubernetes

2016-06-28 Thread Dmitriy Setrakyan
Paulo, would you like to contribute this to Ignite?

On Tue, Jun 28, 2016 at 11:43 AM, Paulo Pires  wrote:

> I came up with https://github.com/pires/apache-ignite-discovery-kubernetes
> .
>
> I have been using this in production Kubernetes. Obviously, one needs DNS
> integration which happens to be a best-practice anyway.
>
> Pires
>
> On Thu, Apr 28, 2016 at 4:45 PM Dmitriy Setrakyan 
> wrote:
>
>> Ignite does not have any specific Kubernetes integration, however, if it
>> supports TCP/IP, which I am pretty sure it does, then we can easily do
>> auto-discovery there using our static IP discovery:
>>
>>
>> https://apacheignite.readme.io/docs/cluster-config#static-ip-based-discovery
>>
>> Also, it is likely that it will work smoothly with Ignite docker
>> container:
>> https://ignite.apache.org/download.cgi#docker
>>
>> D.
>>
>> On Thu, Apr 28, 2016 at 6:50 AM, Christos Erotocritou <
>> chris...@gridgain.com> wrote:
>>
>>> Hi all,
>>>
>>> Is anyone working with Ignite & kubernetes?
>>>
>>> Moreover I’d like to understand how it would be possible to do auto
>>> discovery of new Ignite nodes.
>>>
>>> Thanks,
>>>
>>> Christos
>>
>>
>>


Re: Ignite & Kubernetes

2016-06-28 Thread Paulo Pires
I came up with https://github.com/pires/apache-ignite-discovery-kubernetes.

I have been using this in production Kubernetes. Obviously, one needs DNS
integration which happens to be a best-practice anyway.

Pires

On Thu, Apr 28, 2016 at 4:45 PM Dmitriy Setrakyan 
wrote:

> Ignite does not have any specific Kubernetes integration, however, if it
> supports TCP/IP, which I am pretty sure it does, then we can easily do
> auto-discovery there using our static IP discovery:
>
>
> https://apacheignite.readme.io/docs/cluster-config#static-ip-based-discovery
>
> Also, it is likely that it will work smoothly with Ignite docker container:
> https://ignite.apache.org/download.cgi#docker
>
> D.
>
> On Thu, Apr 28, 2016 at 6:50 AM, Christos Erotocritou <
> chris...@gridgain.com> wrote:
>
>> Hi all,
>>
>> Is anyone working with Ignite & kubernetes?
>>
>> Moreover I’d like to understand how it would be possible to do auto
>> discovery of new Ignite nodes.
>>
>> Thanks,
>>
>> Christos
>
>
>
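
For concreteness, the static-IP discovery mentioned above can be sketched in
Spring XML. The Kubernetes headless-service DNS name used here is a made-up
example; as Pires notes, DNS integration is required for it to resolve:

```xml
<!-- Sketch only: point static IP discovery at a Kubernetes headless service.
     "ignite.default.svc.cluster.local" is a hypothetical service name. -->
<property name="discoverySpi">
  <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
      <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
        <property name="addresses">
          <list>
            <value>ignite.default.svc.cluster.local:47500..47509</value>
          </list>
        </property>
      </bean>
    </property>
  </bean>
</property>
```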


Re: Problem with installing ODBC Driver

2016-06-28 Thread victor.khodyakov
It works, thank you!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Problem-with-installing-ODBC-Driver-tp5914p5975.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Non-cluster mode

2016-06-28 Thread Peter Schmitt
Hi Val,

thank you for checking it!
I've switched to JDK8 and the issue disappeared.
I've to test it with the JDK version we need to use in production.
It would be great to hear whether there is a different approach to reach
the same or if it could be a side-effect due to the used config:

//IgniteConfiguration + start as listed in the first mail

CacheConfiguration offheapCacheConfig = new CacheConfiguration()
        .setName(cacheName)
        .setCacheMode(CacheMode.LOCAL)
        .setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED)
        .setSwapEnabled(false)
        .setBackups(0);

offheapCacheConfig.setOffHeapMaxMemory(...); // max. value loaded from a config

LruEvictionPolicy evictionPolicy = new LruEvictionPolicy();
evictionPolicy.setMaxMemorySize(
        new BigDecimal(offheapCacheConfig.getOffHeapMaxMemory() * 0.9).longValue());

IgniteCache cache = ignite.createCache(offheapCacheConfig);
cache = cache.withExpiryPolicy(
        new AccessedExpiryPolicy(new Duration(/*configured value*/)));


And to fill the cache the following code is used:

try (IgniteDataStreamer stream = ignite.dataStreamer(cache.getName())) {
    stream.allowOverwrite(true);
    // use stream#addData
}

I hope that config makes sense and can't lead to the effect I described
initially.

Kind regards
Peter



2016-06-28 19:23 GMT+02:00 vkulichenko :

> Hi Peter,
>
> Your code works fine for me. Can you please attach the log file?
>
> -Val
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Non-cluster-mode-tp5959p5971.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Non-cluster mode

2016-06-28 Thread vkulichenko
Hi Peter,

Your code works fine for me. Can you please attach the log file?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Non-cluster-mode-tp5959p5971.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Problem with installing ODBC Driver

2016-06-28 Thread victor.khodyakov
I tried, but it doesn't work: neither with a local Ignite node started
manually, nor when it is stopped.
The Ignite log shows nothing; I believe ODBC didn't even start Ignite.

We use Tableau 9.3.3 64-bit
Here is the Tableau error message:
Failed to establish connection with the host.
Unable to connect to the server "Apache Ignite". Check that the server is
running and that you have access privileges to the requested database.

The ODBC example bundled with Ignite 1.6 works well. Here are its settings from
the registry:
[HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBCINST.INI\Apache Ignite]
"DriverODBCVer"="03.80"
"UsageCount"=dword:0001
"Driver"="F:\\Ignite\\apache-ignite-fabric-1.6.0-bin\\platforms\\cpp\\project\\vs\\x64\\Release\\ignite.odbc.dll"




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Problem-with-installing-ODBC-Driver-tp5914p5969.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: ignite group indexing not work problem

2016-06-28 Thread Alexei Scherbakov
Hi,

I've tried the provided sample and found that, instead of using the
oId_fNum_num index, the H2 engine prefers oId_fNum_date, thus preventing the
condition on the num field from using an index.

I think this is incorrect behavior.

Could you disable oId_fNum_date, execute the query again, and provide me with
the query plan and execution time?

You can get the query plan from the built-in H2 console. Read more about how
to set up the console here [1]

[1] https://apacheignite.readme.io/docs/sql-queries#using-h2-debug-console
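
For reference, a sketch of how that console is typically opened (assuming the
standard ignite.sh launcher; see the linked page for the authoritative steps):

```shell
# Pass the H2 debug console flag through ignite.sh to the JVM (-J prefix),
# then inspect the plan of the slow query in the console that opens.
ignite.sh -J-DIGNITE_H2_DEBUG_CONSOLE=true config/default-config.xml
```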




2016-06-28 6:08 GMT+03:00 Zhengqingzheng :

> Hi there,
>
> My Ignite in-memory SQL query is very slow. Can anyone help me figure out
> what is wrong?
>
>
>
> I am using group indexing to speed up in-memory SQL queries. I notice that
> my SQL query took 2274 ms (data set size: 10 million, result size: 1).
>
>
>
> My query is executed as:
>
> String qryStr = "select * from UniqueField where oid= ? and fnum= ? and
> num= ?";
>
>
>
> String oId="a343";
>
> int fNum = 3;
>
> BigDecimal num = new BigDecimal("51002982136");
>
>
>
> IgniteCache cache =
> igniteMetaUtils.getIgniteCache(IgniteMetaCacheType.UNIQUE_INDEX);  // to
> get selected cache ,which has been created in some other place
>
>
>
> SqlQuery qry = new SqlQuery(UniqueField.class, qryStr);
>
> qry.setArgs(objId,fieldNum, numVal);
>
> long start = System.currentTimeMillis();
>
> List result= cache.query(qry).getAll();
>
> long end = System.currentTimeMillis();
>
> System.out.println("Time used in query :"+ (end-start)+"ms");
>
>
>
> And the result shows: Time used in query :2274ms
>
>
>
> I have set group indexes, and the model is defined as:
>
> import java.io.Serializable;
>
> import java.math.BigDecimal;
>
> import java.util.Date;
>
>
>
> import org.apache.ignite.cache.query.annotations.QuerySqlField;
>
>
>
> public class UniqueField implements Serializable
>
> {
>
>
>
> @QuerySqlField
>
> private String orgId;
>
>
>
> @QuerySqlField(
>
> orderedGroups={
>
> @QuerySqlField.Group(
>
> name="oId_fNum_ msg ", order=1, descending = true),
>
> @QuerySqlField.Group(
>
> name="oId_fNum_ num ", order=1, descending =
> true),
>
> @QuerySqlField.Group(
>
> name="oId_fNum_ date ", order=1, descending = true)
>
>
>
> })
>
> private String oId;
>
>
>
> @QuerySqlField(index=true)
>
> private String gId;
>
>
>
>  @QuerySqlField(
>
> orderedGroups={
>
> @QuerySqlField.Group(
>
> name="oId_fNum_ msg ", order=2, descending = true),
>
> @QuerySqlField.Group(
>
> name="oId_fNum_ num ", order=2, descending =
> true),
>
> @QuerySqlField.Group(
>
> name="oId_fNum_ date ", order=2, descending = true)
>
>
>
> })
>
> private int fNum;
>
>
>
> @QuerySqlField(index=true, orderedGroups={@QuerySqlField.Group(
>
> name="oId_fNum_ msg ", order=3, descending = true)})
>
> private String msg;
>
>
>
> @QuerySqlField(index=true, orderedGroups={@QuerySqlField.Group(
>
> name="oId_fNum_ num ", order=3, descending = true)})
>
> private BigDecimal num;
>
>
>
> @QuerySqlField(index=true, orderedGroups={@QuerySqlField.Group(
>
> name="oId_fNum_ date ", order=3, descending = true)})
>
> private Date date;
>
>
>
> public UniqueField(){};
>
>
>
> public UniqueField(
>
> String orgId,
>
> String oId,
>
> String gId,
>
> int fNum,
>
> String msg,
>
> BigDecimal num,
>
> Date date
>
> ){
>
> this.orgId=orgId;
>
> this.oId=oId;
>
> this.gId = gId;
>
> this.fNum = fNum;
>
> this.msg = msg;
>
> this.num = num;
>
> this.date = date;
>
> }
>
>
>
> public String getOrgId()
>
> {
>
> return orgId;
>
> }
>
>
>
> public void setOrgId(String orgId)
>
> {
>
> this.orgId = orgId;
>
> }
>
>
>
> public String getOId()
>
> {
>
> return oId;
>
> }
>
>
>
> public void setOId(String oId)
>
> {
>
> this.oId = oId;
>
> }
>
>
>
> public String getGid()
>
> {
>
> return gId;
>
> }
>
>
>
> public void setGuid(String gId)
>
> {
>
> this.gId = gId;
>
> }
>
>
>
> public int getFNum()
>
> {
>
> return fNum;
>
> }
>
>
>
> public void setFNum(int fNum)
>
> {
>
> this.fNum = fNum;
>
> }
>
>
>
> public String getMsg()
>
> {
>
> return msg;
>
> }
>
>
>
> public void setMsg(String msg)
>
> {
>
> this.msg = msg;
>
> }
>
>
>
> public BigDecimal getNum()
>
> {
>
> return num;
>
> }
>
>
>
> public void setNum(BigDecimal num)
>
> {
>
> this.num = num;
>
> }
>
>
>
> public Date 

Re: System.exit() not exiting cleanly (locked on IgnitionEx$IgniteNamedInstance)

2016-06-28 Thread bintisepaha
Denis, is it ok to delete this file while the cluster is up? Or this folder?
Servers and clients might be connected to it.

I posted another question for the community trying to understand the use of
the work directory. Could you please respond to that?

This lock.file has not been updated since Apr 13, the first time we used
Ignite in production. The size of the file is 0 KB.

Also, it is hard for us to understand why this started happening after 3
months; we were running smoothly until now. Now System.exit() and
Ignite.stop() both fail randomly. For 2 days we were good by passing
-DIGNITE_NO_SHUTDOWN_HOOK=true, but now even that is not a reliable solution:
it does not work every time.

Is System.exit() or Ignite.stop(grid, false) not a good solution? Should we be
using Ignite.close()? Or Ignite.stop(grid, true) to kill all the currently
running jobs?

Thanks,
Binti







--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/System-exit-not-exiting-cleanly-locked-on-IgnitionEx-IgniteNamedInstance-tp5814p5967.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Problem with installing ODBC Driver

2016-06-28 Thread Igor Sapego
vinisman wrote
> Igor, now I successfully built the driver and ran the ODBC examples. But as
> the next step I need to connect Tableau Desktop to Apache Ignite. Tableau can
> do it only with the help of a DSN. Is it possible to configure a Windows DSN
> in the proper way?

I believe you can connect Tableau to Ignite without a DSN, using the "Driver"
option instead. Have you tried that?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Problem-with-installing-ODBC-Driver-tp5914p5966.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: SQL Query on REST APIs

2016-06-28 Thread Alexei Scherbakov
Hi, Francesco.

Please properly subscribe to the user mailing list so community members can
see your questions as soon as possible and provide answers more quickly. All
you need to do is send an email to user-subscr...@ignite.apache.org and follow
the simple instructions in the reply.

How many entries do you have in the cache?
Have you tried to run the query in client mode or in the H2 console? [1]
Do you have any errors in the server log?

[1] https://apacheignite.readme.io/docs/sql-queries#using-h2-debug-console




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/SQL-Query-on-REST-APIs-tp4815p5964.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Strange collocated distributed set behavior

2016-06-28 Thread zshamrock
Thank you, Andrey. I will be monitoring the progress of this issue.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Strange-collocated-distributed-set-behavior-tp5643p5963.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Problem with curator-framework package when running spark job on Ignite

2016-06-28 Thread Denis Magda
Hi,

Please double-check that you moved all ZooKeeper-related libs from
“{apache_build}/libs/optional/ignite-zookeeper” into “{apache_build}/libs/“ and
start the nodes via ignite.sh with your configuration.

If you start the nodes from IDEA, please make sure that there are no curator
libs imported by your IDE. It’s better to import the ignite-zookeeper module
using Maven to avoid exceptions like the one below.

Please let me know if my suggestions don’t work for you and you need more
assistance.

—
Denis

> On Jun 19, 2016, at 1:48 PM, huynq88  wrote:
> 
> configuration.xml
> Hi all,
> I got a problem when running a Spark job with Ignite. I just can't figure out
> why the driver program keeps reporting that it can't find the method
> creatingParentContainersIfNeeded. I'm using Ignite 1.6 and the built-in
> curator (2.9.1). I tried to replace the curator libraries with curator 2.10.0,
> but the same thing still happens. (NOTE 1)
> I also tried to specify the curator CuratorFrameworkImpl class in the
> configuration, but that leads to another error (NOTE 2).
> Please help me out with this situation. Thanks a million 
> (I also included my configuration file - NOTE 3) 
> NOTE 1 
> 16/06/18 15:36:53 WARN scheduler.TaskSetManager: Lost task 1.0 in stage 1.0 
> (TID 3, host05): java.lang.NoSuchMethodError: 
> org.apache.curator.framework.api.CreateBuilder.creatingParentContainersIfNeeded()Lorg/apache/curator/framework/api/ProtectACLCreateModePathAndBytesable;
>  at 
> org.apache.curator.x.discovery.details.ServiceDiscoveryImpl.internalRegisterService(ServiceDiscoveryImpl.java:224)
>  at 
> org.apache.curator.x.discovery.details.ServiceDiscoveryImpl.registerService(ServiceDiscoveryImpl.java:190)
>  at 
> org.apache.ignite.spi.discovery.tcp.ipfinder.zk.TcpDiscoveryZookeeperIpFinder.registerAddresses(TcpDiscoveryZookeeperIpFinder.java:225)
>  
> NOTE 2
> Cannot create inner bean 
> 'org.apache.curator.framework.imps.CuratorFrameworkImpl#51e2adc7' of type 
> [org.apache.curator.framework.imps.CuratorFrameworkImpl] while setting bean 
> property 'curator'; nested exception is 
> org.springframework.beans.factory.BeanCreationException: Error creating bean 
> with name 'org.apache.curator.framework.imps.CuratorFrameworkImpl#51e2adc7' 
> defined in URL 
> [file:/u01/dwh_app/apache-ignite-fabric-1.6.0-bin/config/default-config.xml]: 
> Instantiation of bean failed; nested exception is 
> org.springframework.beans.BeanInstantiationException: Could not instantiate 
> bean class [org.apache.curator.framework.imps.CuratorFrameworkImpl]: No 
> default constructor found; nested exception is 
> java.lang.NoSuchMethodException: 
> org.apache.curator.framework.imps.CuratorFrameworkImpl.()] at 
> org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:906)
>  at org.apache.ignite.Ignition.start(Ignition.java:350) at 
> org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:302)
>  Caused by: class org.apache.ignite.IgniteCheckedException: Failed to 
> instantiate Spring XML application context 
> [springUrl=file:/u01/dwh_app/apache-ignite-fabric-1.6.0-bin/config/default-config.xml,
>  err=Error creating bean with name 
> 'org.apache.ignite.configuration.IgniteConfiguration#0' defined in URL 
> [file:/u01/dwh_app/apache-ignite-fabric-1.6.0-bin/config/default-config.xml]: 
> Cannot create inner bean 
> 'org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi#fad74ee' of type 
> [org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi] while setting bean 
> property 'discoverySpi'; nested exception is 
> org.springframework.beans.factory.BeanCreationException: Error creating bean 
> with name 'org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi#fad74ee' 
> defined in URL 
> [file:/u01/dwh_app/apache-ignite-fabric-1.6.0-bin/config/default-config.xml]: 
> Cannot create inner bean 
> NOTE 3 
> In file attached 
> View this message in context: Problem with curator-framework package when 
> running spark job on Ignite 
> 
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Re: System.exit() not exiting cleanly (locked on IgnitionEx$IgniteNamedInstance)

2016-06-28 Thread Denis Magda
Barrett, 

The data streamer is designed for cache preloading. There are no serious
issues related to it that would prevent using it in production. If you have
one, please share more details.

—
Denis

> On Jun 27, 2016, at 1:46 AM, Barrett Strausser  wrote:
> 
> Off-topic to this thread but I'm curious about the issues the OP (Binti) 
> mentioned with DataStreamer introducint instability. I'm planning a mid-size 
> deployment that will rely heavily on custom data streamer and JMS streamer.
> 
> Binti, would you mind emailing me directly or posting to the group?
> 
> -b
> 
> On Sun, Jun 26, 2016 at 4:52 PM, bintisepaha  > wrote:
> Attached is the zipped RGP.zip log
> file (RGP.log) from the same client node that hangs. I see the shmem thread
> gets into a GC iteration at the same time. Do you think that might be
> causing it not to shut down?
> 
> Thanks,
> Binti
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/System-exit-not-exiting-cleanly-locked-on-IgnitionEx-IgniteNamedInstance-tp5814p5905.html
>  
> 
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
> 



Re: System.exit() not exiting cleanly (locked on IgnitionEx$IgniteNamedInstance)

2016-06-28 Thread Denis Magda
Binti,

As I see from the thread dumps, the node is not stopped because the following
shmem-related thread is trying to acquire a lock, which prevents the node from
finalizing:

"ipc-shmem-gc-#24%DataGridServer-Production%" prio=10 tid=0x7f2124cc4000 
nid=0x12cc05 sleeping[0x7f20fd435000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.FileDispatcherImpl.lock0(Native Method)
at sun.nio.ch.FileDispatcherImpl.lock(FileDispatcherImpl.java:89)
at sun.nio.ch.FileChannelImpl.lock(FileChannelImpl.java:982)
at java.nio.channels.FileChannel.lock(FileChannel.java:1052)
at 
org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemoryServerEndpoint$GcWorker.cleanupResources(IpcSharedMemoryServerEndpoint.java:608)
at 
org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemoryServerEndpoint$GcWorker.body(IpcSharedMemoryServerEndpoint.java:563)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:722)

The file that is being accessed is located in 
{ignite_work_dir}/ipc/shmem/lock.file

There is a chance that something happened and the lock wasn’t released by the
OS on your side. Please try to clean this directory and let me know the result.

—
Denis

> On Jun 27, 2016, at 5:10 PM, bintisepaha  wrote:
> 
> bearrito, in 1.6 we saw the streamer hang on flush() or close() and never
> return. We do not see that issue in 1.5.0.final.
> 
> Igniters, could you please look at the earlier attached log files and thread
> dumps to help with the original issue.
> 
> Thanks,
> Binti
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/System-exit-not-exiting-cleanly-locked-on-IgnitionEx-IgniteNamedInstance-tp5814p5924.html



Non-cluster mode

2016-06-28 Thread Peter Schmitt
Hello Ignite-Community!

As part of an evaluation of Ignite 1.6, I'm trying to use Ignite in a
cluster *without* cluster mode.
It should be used as a local cache (independent of the other nodes).
Therefore, the (Ignite) nodes shouldn't try to discover other
(Ignite) nodes.

I tried
TcpDiscoverySpi localNodeDiscovery = new TcpDiscoverySpi();
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Collections.singletonList("127.0.0.1"));
localNodeDiscovery.setIpFinder(ipFinder);

IgniteConfiguration configuration = new IgniteConfiguration()
.setDiscoverySpi(localNodeDiscovery)
.setDaemon(false);

Ignite ignite = Ignition.getOrStart(configuration);


to limit cluster-discovery to the local node.
In that case Ignite does nothing at all (the startup hangs).

I've read the docs about the different cluster modes, but I couldn't find
the information I'm looking for (a setup that works as expected).

Any hint is appreciated,
Peter
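No reply to this thread is archived here, so the following is only a hedged sketch of one commonly suggested way to keep a node to itself: bind discovery to the loopback interface with an explicitly port-qualified finder address, so the node can only ever "discover" itself. The setters used are standard `TcpDiscoverySpi` API, but this is an untested assumption, not a confirmed fix for the hang described above:

```java
import java.util.Collections;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class LocalOnlyNode {
    public static void main(String[] args) {
        TcpDiscoverySpi discovery = new TcpDiscoverySpi();

        // Bind discovery to loopback only, so other hosts cannot reach it.
        discovery.setLocalAddress("127.0.0.1");

        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        // Include the port explicitly; a bare "127.0.0.1" makes Ignite probe
        // the whole default port range 47500..47509.
        ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500"));
        discovery.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDiscoverySpi(discovery);

        Ignite ignite = Ignition.getOrStart(cfg);
        System.out.println("Nodes in topology: " + ignite.cluster().nodes().size());
    }
}
```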


Re: How to use rest api to put an object into cache?

2016-06-28 Thread Vladimir Ozerov
Hi Kevin,

Yes, currently the REST protocol interprets everything as a String. At the moment
you can use the *ConnectorMessageInterceptor* interface. Once you implement and
configure it, you will start receiving callbacks for all keys and values
passed back and forth. So you can encode your object as a String somehow and
then convert it to the real object inside the interceptor. And the opposite: before
returning an object from the cache, you can convert it to some String form.

This is not very convenient, though it will allow you to go further without
waiting for any tickets to be implemented.

Vladimir.
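A minimal sketch of the approach Vladimir describes, assuming the `ConnectorMessageInterceptor` callbacks `onReceive`/`onSend` and registration via `ConnectorConfiguration.setMessageInterceptor`; the `decode`/`encode` helpers are hypothetical placeholders for whatever String codec (JSON, etc.) you pick:

```java
import org.apache.ignite.configuration.ConnectorConfiguration;
import org.apache.ignite.configuration.ConnectorMessageInterceptor;
import org.apache.ignite.configuration.IgniteConfiguration;

public class RestStringInterceptor implements ConnectorMessageInterceptor {
    /** Called for keys/values arriving via REST: decode the String form. */
    @Override public Object onReceive(Object obj) {
        return (obj instanceof String) ? decode((String) obj) : obj;
    }

    /** Called before an object is sent back to a REST client: encode it. */
    @Override public Object onSend(Object obj) {
        return (obj != null) ? encode(obj) : null;
    }

    // Hypothetical (de)serialization helpers; plug in your own codec here.
    private static Object decode(String s) { return s; }
    private static String encode(Object o) { return String.valueOf(o); }

    /** Wiring the interceptor into the node configuration. */
    public static IgniteConfiguration configure() {
        ConnectorConfiguration connCfg = new ConnectorConfiguration();
        connCfg.setMessageInterceptor(new RestStringInterceptor());
        return new IgniteConfiguration().setConnectorConfiguration(connCfg);
    }
}
```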

On Mon, Jun 27, 2016 at 6:42 PM, Alexey Kuznetsov 
wrote:

> Hi Kevin.
>
> >> 1. Does the REST API only support String as key and value? When I
> try to use an Integer as the key, it gives a null result.
> See: https://issues.apache.org/jira/browse/IGNITE-3345
>
> >> 2. Assume I have a key object and a value object. If I want to
> store these objects in the server-side cache, do I have to store them as
> JSON-format strings and parse them on the client side?
> >> In this case, how can I set read/write-through to enable database
> interaction?
>
> See: https://issues.apache.org/jira/browse/IGNITE-962
>
> It seems the features behind both of your questions are not implemented yet.
>
> IGNITE-3345 could be easily implemented, IMHO. Would you be interested in
> contributing?
>
>
> --
> Alexey Kuznetsov
> GridGain Systems
> www.gridgain.com
>


Web Console Beta 2 release

2016-06-28 Thread Alexey Kuznetsov
Igniters!

I'd like to announce that we have just pushed Ignite Web Console Beta 2 to the
master branch and deployed the new version at https://console.gridgain.com

NOTE: You may need to refresh the page (F5 or Ctrl+R) in order to reload the Web
Console.

What's new:

   - Implemented monitoring of the grid and caches (please note, you will need
   a grid started from the latest nightly build of the master branch).
   - Improved Demo mode (you may test SQL and Monitoring in Demo mode).
   - Added a lot of properties to the grid configuration.
   - Improved validation of configuration properties.
   - Improved XML and Java code generation.
   - Fixed a lot of bugs and usability issues.

Feedback and suggestions are welcome!

What's next:

   - Migrate build to Webpack from jspm.
   - Frontend and backend tests.
   - .NET configuration and code generation.
   - Logs view and logs search.
   - Many new features are coming...


Stay tuned!
-- 
Alexey Kuznetsov


Re: DataStreamer hanging

2016-06-28 Thread Denis Magda
Hi,

Please properly subscribe to the user list so that we can see your questions
as soon as possible and provide answers more quickly. All you need to do
is send an email to "user-subscr...@ignite.apache.org" and follow the simple
instructions in the reply.


Please provide the logs from all the nodes. There should be an exception
that moved the streamer into the hanging state.

Answering your question:

1) Is the DataStreamer a cache-level object? Should I not flush or close the
streamer until the entire load is complete from all databases?

There is no need to call flush() for every operation, because performance
won't be good then. The streamer flushes data automatically based on such
parameters as perNodeBufferSize, autoFlushFrequency, and
perNodeParallelOperations.

--
Denis
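The parameters Denis lists map onto `IgniteDataStreamer` setters. The following is a hedged sketch (the cache name, key/value types, and `rows` source are hypothetical) showing a per-database load that relies on automatic flushing rather than calling `flush()` after every batch:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public class StreamerLoad {
    /** Hypothetical loader: streams key/value pairs for one database into the cache. */
    static void load(Ignite ignite, String cacheName, Iterable<long[]> rows) {
        // try-with-resources: close() performs the final flush automatically.
        try (IgniteDataStreamer<Long, Long> streamer = ignite.dataStreamer(cacheName)) {
            streamer.perNodeBufferSize(1024);       // entries buffered per node
            streamer.perNodeParallelOperations(8);  // in-flight batches per node
            streamer.autoFlushFrequency(5_000);     // flush at least every 5 s

            for (long[] row : rows)
                streamer.addData(row[0], row[1]);
            // No manual flush() per load: the streamer flushes on its own,
            // and close() flushes whatever remains.
        }
    }
}
```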



Hi, 

I am using a DataStreamer to load data from multiple databases into a cache.
Each node is responsible for loading data from one DB, and they run
in parallel.
I call dataStreamer.flush() after each load is done, and it looks like this
hangs in Ignite 1.6.
It works fine when we rolled back to 1.5; I am not sure why.

1) Is the DataStreamer a cache-level object? Should I not flush or close the
streamer until the entire load is complete from all databases?

Any help is greatly appreciated.


Here is the stack trace from where it hangs:

Name: pub-#6%DataGridServer-Staging% 
State: WAITING on
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$ConnectFuture@25d531b4
 
Total blocked: 0  Total waited: 1,808 

Stack trace: 
sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(Unknown Source) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(Unknown
Source) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(Unknown
Source) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(Unknown
Source) 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:159)
 
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:117)
 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:2070)
 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:1967)
 
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage(TcpCommunicationSpi.java:1933)
 
org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1285)
 
org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1354)
 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(GridCacheIoManager.java:694)
 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(GridCacheIoManager.java:843)
 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.doUpdate(GridNearAtomicUpdateFuture.java:601)
 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.map(GridNearAtomicUpdateFuture.java:756)
 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.mapOnTopology(GridNearAtomicUpdateFuture.java:544)
 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:202)
 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$22.apply(GridDhtAtomicCache.java:1007)
 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$22.apply(GridDhtAtomicCache.java:1005)
 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.asyncOp(GridDhtAtomicCache.java:703)
 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/DataStreamer-hanging-tp5808p5953.html


Re: Ignite with Cassandra and SSL

2016-06-28 Thread Denis Magda
Hi,

This is a duplicate discussion of the following
http://apache-ignite-users.70518.x6.nabble.com/Ignite-with-Cassandra-and-SSL-td5610.html#a5700

You will find a solution in the discussion above.

--
Denis



Good morning

Could you please help me understand how to establish persistence to
Cassandra via SSL?

What else do I need to ensure, apart from setting the flag below to true?
useSSL (default: false): enables the use of SSL




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-with-Cassandra-and-SSL-tp5611p5952.html


Re: Creating cache with CacheLoaderFactory on client node brings exception org.apache.ignite.IgniteCheckedException: Failed to find class with given class loader for unmarshalling (make sure same ver

2016-06-28 Thread daniel07
Many thanks, dear Magda



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Creating-cache-with-CacheLoaderFactory-on-client-node-brings-exception-org-apache-ignite-IgniteCheck-tp5915p5951.html


Re: Creating cache with CacheLoaderFactory on client node brings exception org.apache.ignite.IgniteCheckedException: Failed to find class with given class loader for unmarshalling (make sure same ver

2016-06-28 Thread Denis Magda
Yes, you have to redeploy this class every time it’s changed.

The peer-class-loading feature is supported by the Ignite Compute engine, allowing
compute tasks to be deployed/redeployed automatically:
https://apacheignite.readme.io/docs/zero-deployment 
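For completeness, peer class loading is switched on with a single configuration flag. This is only a sketch; as noted above, it covers compute task classes, not the cache loader factory discussed in this thread, which still has to be deployed manually:

```java
import org.apache.ignite.configuration.IgniteConfiguration;

public class PeerClassLoadingConfig {
    public static IgniteConfiguration configure() {
        return new IgniteConfiguration()
            // Enables automatic deployment of compute task classes only;
            // cache store / loader factory classes still have to be placed
            // on every node's classpath by hand.
            .setPeerClassLoadingEnabled(true);
    }
}
```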


—
Denis

> On Jun 28, 2016, at 9:49 AM, daniel07  wrote:
> 
> Hi dear Magda,
> 
> As far as I understood, my loader factory (and, generally, whatever classes
> are used on the client node) must also be deployed manually on the rest of the nodes?
> 
> I copied my classes to the remote node, and it started working.
> Isn't there another solution? At least one that happens automatically.
> Do I have to redeploy to the server nodes every time I change my classes?
> 
> 
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Creating-cache-with-CacheLoaderFactory-on-client-node-brings-exception-org-apache-ignite-IgniteCheck-tp5915p5949.html



Re: Creating cache with CacheLoaderFactory on client node brings exception org.apache.ignite.IgniteCheckedException: Failed to find class with given class loader for unmarshalling (make sure same ver

2016-06-28 Thread daniel07
Hi dear Magda,

As far as I understood, my loader factory (and, generally, whatever classes
are used on the client node) must also be deployed manually on the rest of the nodes?

I copied my classes to the remote node, and it started working.
Isn't there another solution? At least one that happens automatically.
Do I have to redeploy to the server nodes every time I change my classes?





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Creating-cache-with-CacheLoaderFactory-on-client-node-brings-exception-org-apache-ignite-IgniteCheck-tp5915p5949.html


Re: transaction not timing out

2016-06-28 Thread Alexey Goncharuk
Hi,

As Dmitriy pointed out, there is no reliable way to time out a transaction
once the commit phase has begun.

If there is a chance that your cache store may stall for an unpredictable
amount of time, this should be handled within the store, possibly by throwing
an exception; however, that will result in a heuristic exception and possibly
inconsistent data.


Re: Creating cache with CacheLoaderFactory on client node brings exception org.apache.ignite.IgniteCheckedException: Failed to find class with given class loader for unmarshalling (make sure same ver

2016-06-28 Thread Denis Magda
Hi,

You have to place the class of your loader factory in the client nodes’ classpath 
as well. The main reason for this is that when a transaction is started from 
a client node (which is quite a usual case), the client first commits data 
to the storage and after that to in-memory.

The case with the storage is discussed in this blog post: 
http://gridgain.blogspot.ru/2014/09/two-phase-commit-for-in-memory-caches.html 


—
Denis

> On Jun 27, 2016, at 2:22 PM, daniel07  wrote:
> 
> Hi,
> I saw other questions related to the org.apache.ignite.IgniteCheckedException:
> Failed to find class with given class loader for unmarshalling (make sure
> same versions of all classes are available on all nodes or enable
> peer-class-loading) exception, but did not find my answer.
> 
> I have one remote server node, and from a local client node I discover that
> server node.
> Now from the client node I want to create a cache.
> p.s.  added for 2
> configurations
> My code is the following:
> 
> private final CacheConfiguration ListInteger>> cacheConfiguration =
> SpringContextHolder.applicationContext
>   .getBean("cacheConfigurationTemplate", 
> CacheConfiguration.class);
> 
> ignite.createCache((CacheConfiguration ListInteger>>)new
> CacheConfiguration<>(cacheConfiguration).setName(CACHE_NAME)
>   .setReadThrough(true)
>   .setCacheLoaderFactory(new 
> EntityIdLoaderFactory())
>   
> .setExpiryPolicyFactory(EternalExpiryPolicy.factoryOf(
> 
> 
> 
> public  class EntityIdLoaderFactory
>   implements Factory {
> 
>   private static final long serialVersionUID = 7512841233166239706L;
> 
>   @Override
>   public EntityIdLoader create() {
>   return new EntityIdLoader(
>   () ->
> SpringContextHolder.applicationContext.getBean("persistenceService",
> PersistenceService.class),
>   () -> 
> SpringContextHolder.applicationContext.getBean("kbEngine",
> KbEngine.class));
>   }
> 
> }
> 
> 
> 
> 
> public class EntityIdLoader implements
> CacheLoader> {
> 
>@Nonnull
>private final Supplier persistenceService;
>@Nonnull
>private final Supplier kbEngine;
> 
>public EntityIdLoader(@Nonnull Supplier
> persistenceService, @Nonnull Supplier kbEngine) {
>this.kbEngine = Preconditions.checkNotNull(kbEngine);
>this.persistenceService =
> Preconditions.checkNotNull(persistenceService);
>}
> 
> 
> }
> 
> 
> during  creating cache ,on remote node brings exception
> 
> class org.apache.ignite.IgniteCheckedException: Failed to find class with
> given class loader for unmarshalling (make sure same versions of all classes
> are available on all nodes or enable peer-class-loading):
> java.net.URLClassLoader@738defde   at
> org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal(JdkMarshaller.java:108)
>at
> org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryCustomEventMessage.message(TcpDiscoveryCustomEventMessage.java:80)
>at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.notifyDiscoveryListener(ServerImpl.java:4894)
>   
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processCustomMessage(ServerImpl.java:4750)
>at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2121)
>at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2208)
>at
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
> Caused by: java.lang.ClassNotFoundException:
> com.synisys.idm.apollo.internal.service.caching.loaders.EntityIdLoaderFactory 
> 
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>at java.security.AccessController.doPrivileged(Native Method)
>at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>at java.lang.Class.forName0(Native Method)
>at
> org.apache.ignite.internal.util.IgniteUtils.forName(IgniteUtils.java:8250)
>at
> org.apache.ignite.marshaller.jdk.JdkMarshallerObjectInputStream.resolveClass(JdkMarshallerObjectInputStream.java:54)
>at
> java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
>at
> java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
>at
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
>at
> java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
>at
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
>

Re: Creating cache with CacheLoaderFactory on client node brings exception org.apache.ignite.IgniteCheckedException: Failed to find class with given class loader for unmarshalling (make sure same ver

2016-06-28 Thread daniel07
Hi @bintisepaha,
Yes, the JDK versions are the same.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Creating-cache-with-CacheLoaderFactory-on-client-node-brings-exception-org-apache-ignite-IgniteCheck-tp5915p5946.html


Re: transaction not timing out

2016-06-28 Thread Denis Magda
Hi Binti,

In the case of writeAll, the locks for the keys are acquired sequentially. To avoid 
deadlocks you need to use an ordered data structure, like a TreeSet, that will hold 
the keys.
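The ordering advice can be illustrated with plain JDK collections: for a putAll/writeAll batch, the map analogue of a TreeSet is a TreeMap, which always iterates keys in sorted order regardless of insertion order. The `cache.putAll` call in the comment is hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

public class OrderedWrite {
    /** Returns the batch in natural key order, so every transaction
     *  acquires per-key locks in the same sequence. */
    public static TreeMap<Integer, String> ordered(Map<Integer, String> batch) {
        return new TreeMap<>(batch);
    }

    public static void main(String[] args) {
        // Two transactions built with opposite insertion orders...
        Map<Integer, String> tx1 = new LinkedHashMap<>();
        tx1.put(2, "b"); tx1.put(1, "a");

        Map<Integer, String> tx2 = new LinkedHashMap<>();
        tx2.put(1, "a"); tx2.put(2, "b");

        // ...lock keys in the same sorted order once wrapped in a TreeMap,
        // which removes the classic lock-order deadlock:
        System.out.println(ordered(tx1).keySet()); // [1, 2]
        System.out.println(ordered(tx2).keySet()); // [1, 2]
        // cache.putAll(ordered(tx1)); // hypothetical Ignite call
    }
}
```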

Alex G., can you join the thread and share your thoughts on why the timeout is 
not honored for the discussed transactional mode?

—
Denis

> On Jun 28, 2016, at 1:14 AM, bintisepaha  wrote:
> 
> Denis, we will reproduce with pessimistic/repeatable-read next week and
> update you.
> How is writeAll handled by the framework - in a txn?
> 
> Was someone able to look at the timeout not being honored while committing?
> What would cause this issue?
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/transaction-not-timing-out-tp5540p5934.html