New Blogs: What's new in Ignite.NET 1.9 and Ignite Kubernetes deployment on Azure

2017-03-15 Thread Denis Magda
Igniters,

Let me share the latest blog posts about Apache Ignite that you might be 
interested in.

* What’s new in Apache Ignite.NET 1.9 by Pavel: 
https://ptupitsyn.github.io/Whats-New-In-Ignite-Net-1-9/

* Deploying Ignite in Kubernetes on Microsoft Azure by me: 
https://www.gridgain.com/resources/blog/deploying-apache-ignite-kubernetes-microsoft-azure

Soon the posts will be added to Ignite’s blog feed:
https://ignite.apache.org/blogs.html 

If you blog about Ignite, please let me know and we will promote your work over 
the feed, Twitter and other channels.

—
Denis

ignite messaging disconnection behaviour

2017-03-15 Thread kfeerick
Hello friends,
Another query about some characteristics of Ignite behaviour; I'm not sure
whether it's down to my usage or the intention of the product. We are using the
messaging sub-system to distribute streams of data between interested processes. 

The server is sending data as follows:
ignite.message().sendOrdered("my-topic", objectToSend, 1);

The client subscribing:
ignite.message().localListen("my-topic", (UUID uuid, Object o) -> {
log.info(o);
return true;
});

This works well for our use case apart from one behaviour. When a topic
subscriber process exits, the sender detects the TCP failure and blocks the
sending thread for a prolonged period of time, reporting the exception
below. 

We really don't care that a subscriber has gone away. Ours is more of a
multicast use case, but we don't want to introduce the overhead of a specialist
messaging product for what is quite a narrow use case. Ignite messaging
seems to do the trick apart from this one disconnection behaviour. 
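One knob governing how long the sender blocks on a dead peer is the failure detection timeout the warnings below mention. A minimal sketch of lowering it (assumes the standard Ignite Java API; the 3-second value is illustrative, and the trade-off is more false positives on slow networks):

```java
// Sketch: reduce how long TcpCommunicationSpi retries a vanished subscriber.
// The value below is illustrative; Ignite's default is 10_000 ms.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setFailureDetectionTimeout(3_000);
Ignite ignite = Ignition.start(cfg);
```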

Thanks for the great product and keep up the good work!
Kevin


[WARN ] 2017-03-16 11:06:32.951 [grid-nio-worker-0-#9%null%]
TcpCommunicationSpi - Failed to process selector key (will close):
GridSelectorNioSessionImpl [selectorIdx=0, queueSize=0,
writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
recovery=GridNioRecoveryDescriptor [acked=4624, resendCnt=0, rcvCnt=478,
sentCnt=4632, reserved=true, lastAck=478, nodeLeft=false,
node=TcpDiscoveryNode [id=613517f0-9534-46e3-bbe7-eb5ed1515b27,
addrs=[0:0:0:0:0:0:0:1, 10.1.1.15, 127.0.0.1, 172.27.236.2,
2001:0:9d38:6abd:18fe:1bb0:f5fe:fef0], sockAddrs=[/172.27.236.2:47501,
/2001:0:9d38:6abd:18fe:1bb0:f5fe:fef0:47501, A0004.lan/10.1.1.15:47501,
/0:0:0:0:0:0:0:1:47501, /127.0.0.1:47501], discPort=47501, order=2,
intOrder=2, lastExchangeTime=1489622741577, loc=false,
ver=1.8.0#20161205-sha1:9ca40dbe, isClient=false], connected=true,
connectCnt=1, queueLimit=5120, reserveCnt=1], super=GridNioSessionImpl
[locAddr=/0:0:0:0:0:0:0:1:59801, rmtAddr=/0:0:0:0:0:0:0:1:47101,
createTime=1489622741736, closeTime=0, bytesSent=13680979, bytesRcvd=158379,
sndSchedTime=1489622792923, lastSndTime=1489622792923,
lastRcvTime=1489622792813, readsPaused=false,
filterChain=FilterChain[filters=[GridNioCodecFilter
[parser=o.a.i.i.util.nio.GridDirectParser@69ad4e89, directMode=true],
GridConnectionBytesVerifyFilter], accepted=false]]
[WARN ] 2017-03-16 11:06:32.952 [grid-nio-worker-0-#9%null%]
TcpCommunicationSpi - Closing NIO session because of unhandled exception
[cls=class o.a.i.i.util.nio.GridNioException, msg=An existing connection was
forcibly closed by the remote host]
[INFO ] 2017-03-16 11:06:32.957 [market-data-worker] MarketDataPublisher -
ABC{bestBid=5.73, bestAsk=5.39, bidVol=575, askVol=2261, IAP=5.53}
[WARN ] 2017-03-16 11:06:33.980 [market-data-worker] TcpCommunicationSpi -
Connect timed out (consider increasing 'failureDetectionTimeout'
configuration property) [addr=/0:0:0:0:0:0:0:1:47101,
failureDetectionTimeout=1]
[WARN ] 2017-03-16 11:06:34.977 [market-data-worker] TcpCommunicationSpi -
Connect timed out (consider increasing 'failureDetectionTimeout'
configuration property) [addr=/127.0.0.1:47101,
failureDetectionTimeout=1]
[WARN ] 2017-03-16 11:06:35.982 [market-data-worker] TcpCommunicationSpi -
Connect timed out (consider increasing 'failureDetectionTimeout'
configuration property) [addr=/172.27.236.2:47101,
failureDetectionTimeout=1]
[WARN ] 2017-03-16 11:06:36.983 [market-data-worker] TcpCommunicationSpi -
Connect timed out (consider increasing 'failureDetectionTimeout'
configuration property) [addr=/2001:0:9d38:6abd:18fe:1bb0:f5fe:fef0:47101,
failureDetectionTimeout=1]
[WARN ] 2017-03-16 11:06:37.981 [market-data-worker] TcpCommunicationSpi -
Connect timed out (consider increasing 'failureDetectionTimeout'
configuration property) [addr=A0004.lan/10.1.1.15:47101,
failureDetectionTimeout=1]
[WARN ] 2017-03-16 11:06:37.981 [market-data-worker] TcpCommunicationSpi -
Failed to connect to a remote node (make sure that destination node is alive
and operating system firewall is disabled on local and remote hosts)
[addrs=[/0:0:0:0:0:0:0:1:47101, /127.0.0.1:47101, /172.27.236.2:47101,
/2001:0:9d38:6abd:18fe:1bb0:f5fe:fef0:47101, A0004.lan/10.1.1.15:47101]]
[ERROR] 2017-03-16 11:06:37.987 [market-data-worker] MarketDataPublisher -
Failed to send message to remote node: TcpDiscoveryNode
[id=613517f0-9534-46e3-bbe7-eb5ed1515b27, addrs=[0:0:0:0:0:0:0:1, 10.1.1.15,
127.0.0.1, 172.27.236.2, 2001:0:9d38:6abd:18fe:1bb0:f5fe:fef0],
sockAddrs=[/172.27.236.2:47501, /2001:0:9d38:6abd:18fe:1bb0:f5fe:fef0:47501,
A0004.lan/10.1.1.15:47501, /0:0:0:0:0:0:0:1:47501, /127.0.0.1:47501],
discPort=47501, order=2, intOrder=2, lastExchangeTime=1489622741577,
loc=false, ver=1.8.0#20161205-sha1:9ca40dbe, isClient=false]
org.apache.ignite.spi.IgniteSpiException: Failed to send message to remote
node: TcpDiscoveryNode 

cache update from .NET client to Java cluster

2017-03-15 Thread kfeerick
Hello there,
We're using the Ignite data grid to distribute data from a shared distributed
Java key-value store to a .NET client for display and update. The Java nodes
run in server mode and the client connects in client mode. I see some
interesting behaviour when I update data in the k/v store from the .NET
client. Querying for a single data item and then immediately updating it
appears to create a *new* entry in the cache rather than updating the existing
value (as I would expect). I suspect this is to do with the binary notion of
equivalence; I've tried supplying a custom BinaryIdentityResolver but with no
change in behaviour. Below is a code snippet from the .NET side to illustrate.
I can upload a complete reproduction of the issue if that's required. 

Can someone let me know if my expectations or understanding of the features
of the data grid are incorrect? 

var cache = Ignite.GetOrCreateCache("my-cache");
var query = new SqlQuery(typeof(Value), "Id=?") {Arguments =
new[] {id}};
var cursor = cache.Query(query);
var entry = cursor.First();
var row = entry.Value;
row.Text = "ignite me";
row.LastUpdatedStamp = DateTime.Now;
cache.Put(entry.Key, row);
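The behaviour described is what field-wise-equal but identity-distinct keys produce in any hash-keyed store. A self-contained Java analogy (plain HashMap, no Ignite; the `RowKey` class is hypothetical) of why a put after a query can create a second entry when key equality is not field-based:

```java
import java.util.HashMap;
import java.util.Map;

public class KeyIdentityDemo {
    // Hypothetical key class: no equals()/hashCode(), so two instances with
    // identical fields are distinct map keys. This mirrors how a cache keyed
    // on binary identity can see a "new" entry after a query/put round-trip
    // if key equality is not based on the key's fields.
    static class RowKey {
        final int id;
        RowKey(int id) { this.id = id; }
    }

    public static void main(String[] args) {
        Map<RowKey, String> cache = new HashMap<>();
        cache.put(new RowKey(1), "original");
        cache.put(new RowKey(1), "updated"); // same fields, different identity
        // Two entries, not one: identity-based equality duplicated the key.
        System.out.println(cache.size()); // prints 2
    }
}
```

The fix in either world is the same design choice: make key equality a function of the key's fields rather than object identity.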

Cheers
Kevin



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/cache-update-from-NET-client-to-Java-cluster-tp11217.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Fwd: Spring application context resource is not injected exception while starting ignite in jdbc driver mode

2017-03-15 Thread Denis Magda
Cross-posting to the dev list.

I’ve faced the same issue, filed a ticket with a reproducer and suggested a 
workaround:
https://issues.apache.org/jira/browse/IGNITE-4829 


*Alex K.*, could you please take care of the issue? If this is something we 
can’t fix quickly, then the Apache Ignite Web Console configuration wizard has 
to set the POJO store configuration differently. 

—
Denis

> Begin forwarded message:
> 
> From: san 
> Subject: Spring application context resource is not injected exception while 
> starting ignite in jdbc driver mode
> Date: August 24, 2016 at 6:27:47 AM PDT
> To: user@ignite.apache.org
> Reply-To: user@ignite.apache.org
> 
> I am facing an issue while starting Ignite in JDBC driver mode. I have my
> cache on a remote node and need to access it using plain SQL queries, hence
> I am using the Ignite JDBC driver. Please find the code snippet below.
> 
> Class.forName("org.apache.ignite.IgniteJdbcDriver");
> Connection conn =
> DriverManager.getConnection("jdbc:ignite:cfg://ignite-jdbc.xml");
> ResultSet rs = conn.createStatement().executeQuery("select * from users");
> 
> My XML is also simple, since the cache is already loaded on the remote node.
> 
> <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>     <property name="discoverySpi">
>         <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>             <property name="ipFinder">
>                 <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>                     <property name="addresses">
>                         <list>
>                             <value>127.0.0.1:47500..47549</value>
>                         </list>
>                     </property>
>                 </bean>
>             </property>
>         </bean>
>     </property>
> </bean>
> 
> Please help. 
> 
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Spring-application-context-resource-is-not-injected-exception-while-starting-ignite-in-jdbc-driver-me-tp7272.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Re: Understanding faster batch inserts/updates with apache ignite 1.9

2017-03-15 Thread Denis Magda
The performance should be comparable. However, as Val pointed out, you pay 
some fee for DML query parsing, though it might be negligible.
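For context, the streamer path that the DML INSERT is being compared against looks roughly like this (a sketch assuming an existing `ignite` instance and a cache named `myCache`, both illustrative):

```java
// Sketch: bulk loading through IgniteDataStreamer, which batches entries
// per node and skips the SQL parse/plan step that each DML statement pays.
try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("myCache")) {
    for (int i = 0; i < 1_000_000; i++)
        streamer.addData(i, "value-" + i);
} // close() flushes any remaining buffered entries
```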

—
Denis

> On Mar 15, 2017, at 7:21 AM, pragmaticbigdata  wrote:
> 
> Thanks for the reply.
> 
> DML also implies query parsing, mapping to Java objects and other preparation 
> steps before IgniteDataStreamer API is called. Thus the performance 
> difference. 
> 
> Does this mean the data streamer API would be faster for pre-loading or 
> bulk-loading data as compared to the streaming-mode JDBC driver?
> 
> Thanks. 
> 
> On Wed, Mar 15, 2017 at 12:24 AM, vkulichenko [via Apache Ignite Users] 
> <[hidden email] > 
> wrote:
> 1. Yes, it's using IgniteDataStreamer under the hood and even has the same 
> parameters. 
> 2. DML also implies query parsing, mapping to Java objects and other 
> preparation steps before IgniteDataStreamer API is called. Thus the 
> performance difference. 
> 
> -Val 
> 



Re: How to form cluster

2017-03-15 Thread Denis Magda
Hi,

Refer to Java or .NET getting started guides that explain how to start up a 
cluster:
https://apacheignite.readme.io/docs/getting-started 

https://apacheignite-net.readme.io/docs/getting-started-2 


While this documentation describes how to work with Ignite’s ODBC driver:
https://apacheignite.readme.io/docs/odbc-driver 


—
Denis

> On Mar 14, 2017, at 10:39 PM, kavitha  wrote:
> 
> Hi All,
> 
> I need to form a cluster for apache ignite database to connect it through
> ODBC connection. 
> 
> How can I proceed it?
> 
> Note: I am using C# language
> 
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/How-to-form-cluster-tp11177.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Re: Node setup

2017-03-15 Thread Denis Magda
It’s strange. Do you run the script from a terminal after changing into the 
ignite-web-console folder?

Alex K., please join this discussion.

—
Denis


> On Mar 14, 2017, at 10:05 PM, kavitha  wrote:
> 
> Hi All,
> 
> I tried running ignite-web-agent.bat for making connection with web server.
> But I am getting following error.
> 
> Error: Could not find or load main class demo
> Press any key to continue . . .
> 
> Anyone please give me a solution for this?
> 
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Node-setup-tp11090p11175.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Understanding Cache Key, Indexes, Partition and Affinity

2017-03-15 Thread anoorag.saxena
Hello, 

I am new to Apache Ignite and come from a data warehousing background, so
pardon me if I relate to Ignite through DBMS jargon. I have already gone
through some of the posts on the forum, but I am still unclear about some of
the basics. 

1.) CacheMode=PARTITIONED 
 - When I just declare a cache as partitioned, I understand that data is
equally distributed across all nodes. Is there an option to provide a
"partition key" based on which the data would be distributed across the
nodes (in which case, the skewness of distribution would depend on the
choice of partition key)? 
 
 - Without an Affinity Key, when I load data (using loadCache()) into a
partitioned cache, will all source rows be sent to all nodes in my cluster? 

2.) Affinity 
 I understand that the concept of affinity is to use a key that
distinctly identifies the node on which the data may reside. 

 - When I partition the cache and define an affinity key, is the data
partitioned based on the Affinity Key itself? If not, how does affinity
differ from partitioning? 
 
 - With an Affinity Key defined, when I load data (using loadCache())
into a partitioned cache, will the source rows be sent only to the nodes
they belong to, or to all the nodes in my cluster? 

3.) When I create an index for a cache, does it distribute the data
automatically (without defining any Affinity Key or Partition Key)? 

SCENARIO DESCRIPTION 
- 
I want to load data from a persistent layer into a Staging Cache (assume
~2B records) using loadCache(). The cache resides on a 4-node cluster. 
a.) Can I load data in such a way that each node has to process only 0.5B
records? Is that achieved by using partitioned cache mode and defining an
Affinity Key? 

Then I want to read transactions from the Staging Cache in TRANSACTIONAL
atomicity mode, look up a Target Cache and do some operations. 
b.) When I do the lookup on the Target Cache, how can I ensure that the
lookup happens only on the node where the data resides, rather than on all
the nodes hosting the Target Cache? Would that be done using the Affinity
Key? If yes, how? 

c.) Let's say I wanted to do a lookup on a column other than the Affinity
Key; can creating an index on the lookup column help? Would I end up
scanning all nodes in that case? 

Staging Cache 
CustomerID 
CustomerEmail 
CustomerPhone   

Target Cache 
Seq_Num 
CustomerID 
CustomerEmail 
CustomerPhone 
StartDate 
EndDate 
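Questions (a) through (c) all reduce to one invariant: entries with equal affinity keys map to the same partition, and each partition lives on one primary node. A toy sketch of that mapping (the modulo function here is an illustrative stand-in, not Ignite's actual affinity function):

```java
public class AffinityDemo {
    // Simplified model of affinity-based placement. This is an assumption
    // for illustration only: Ignite's real RendezvousAffinityFunction is
    // more sophisticated, but the invariant it guarantees is the same.
    static int partition(Object affinityKey, int partitions) {
        return Math.abs(affinityKey.hashCode() % partitions);
    }

    public static void main(String[] args) {
        int parts = 1024;
        // Staging and Target entries that share the same CustomerID land in
        // the same partition, so a collocated lookup touches a single node.
        System.out.println(partition(42, parts) == partition(42, parts)); // prints true
    }
}
```

An index on a non-affinity column does not move data; it only speeds up the per-node scan, so a lookup by that column still fans out to every node.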




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Understanding-Cache-Key-Indexes-Partition-and-Affinity-tp11212.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How we can get metrics through JMX?

2017-03-15 Thread hitendrapratap
I can see the metrics.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-we-can-get-metrics-through-JMX-tp10394p11211.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How we can get metrics through JMX?

2017-03-15 Thread hitendrapratap
We are not able to see cluster metrics. 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-we-can-get-metrics-through-JMX-tp10394p11210.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: IGNITE-4106

2017-03-15 Thread Anil
Hi Andrey,

Thank you.

I see its status is Patch Available. You guys are quick. I will test the fix
tomorrow.

Thanks.

On 15 March 2017 at 20:58, Andrey Mashenkov 
wrote:

> Hi Anil,
>
> It is a bug. The error occurs when an entry has been evicted from the cache.
> I've created a ticket, IGNITE-4826 [1].
>
> [1] https://issues.apache.org/jira/browse/IGNITE-4826
>
>
> On Wed, Mar 15, 2017 at 10:22 AM, Anil  wrote:
>
>> Hi Val and Andrey,
>>
>> I am seeing exception with following code as well. Not sure why is not
>> reproduced at your end.,
>>
>> Ignite ignite = Ignition.start(new File("/workspace/cache-manager/test-parallelism/src/main/resources/ignite.xml").toURI().toURL());
>> IgniteCache cache = ignite.cache("TEST_CACHE");
>> IgniteDataStreamer streamer = ignite.dataStreamer("TEST_CACHE");
>> for (int i =1; i< 10; i++){
>> streamer.addData(String.valueOf(i), new Test("1", "1"));
>> }
>>
>> Exception :
>>
>> 2017-03-15 12:46:43 ERROR DataStreamerImpl:495 - DataStreamer operation
>> failed.
>> class org.apache.ignite.IgniteCheckedException: Failed to finish
>> operation (too many remaps): 32
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl$5.apply(DataStreamerImpl.java:863)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl$5.apply(DataStreamerImpl.java:828)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter$Arr
>> ayListener.apply(GridFutureAdapter.java:456)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter$Arr
>> ayListener.apply(GridFutureAdapter.java:439)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter.not
>> ifyListener(GridFutureAdapter.java:271)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter.not
>> ifyListeners(GridFutureAdapter.java:259)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter.onD
>> one(GridFutureAdapter.java:389)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter.onD
>> one(GridFutureAdapter.java:355)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter.onD
>> one(GridFutureAdapter.java:343)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl$Buffer$2.apply(DataStreamerImpl.java:1564)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl$Buffer$2.apply(DataStreamerImpl.java:1554)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter.not
>> ifyListener(GridFutureAdapter.java:271)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter.lis
>> ten(GridFutureAdapter.java:228)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl$Buffer.localUpdate(DataStreamerImpl.java:1554)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl$Buffer.submit(DataStreamerImpl.java:1626)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl$Buffer.update(DataStreamerImpl.java:1416)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl.load0(DataStreamerImpl.java:932)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl.access$1100(DataStreamerImpl.java:121)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl$5$1.run(DataStreamerImpl.java:876)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl$5$2.call(DataStreamerImpl.java:903)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl$5$2.call(DataStreamerImpl.java:891)
>> at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader
>> (IgniteUtils.java:6618)
>> at org.apache.ignite.internal.processors.closure.GridClosurePro
>> cessor$2.body(GridClosureProcessor.java:925)
>> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWo
>> rker.java:110)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool
>> Executor.java:1142)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoo
>> lExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>> Caused by: class org.apache.ignite.IgniteCheckedException:
>> GridH2QueryContext is not initialized.
>> at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils
>> .java:7239)
>> at org.apache.ignite.internal.processors.closure.GridClosurePro
>> cessor$2.body(GridClosureProcessor.java:933)
>> ... 4 more
>> Caused by: java.lang.IllegalStateException: GridH2QueryContext is not
>> initialized.
>> at org.apache.ignite.internal.processors.query.h2.opt.GridH2Ind
>> exBase.threadLocalSegment(GridH2IndexBase.java:197)
>> at org.apache.ignite.internal.processors.query.h2.opt.GridH2Tre
>> eIndex.findOne(GridH2TreeIndex.java:290)
>> at org.apache.ignite.internal.processors.query.h2.opt.GridH2Tab
>> le.onSwapUnswap(GridH2Table.java:232)
>> at org.apache.ignite.internal.processors.query.h2.opt.GridH2Tab
>> le.onSwap(GridH2Table.java:189)
>> at org.apache.ignite.internal.processors.query.h2.IgniteH2Index
>> 

Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-03-15 Thread Evans Ye
I'm not quite sure whether this is relevant to your issue.
I've experienced a hanging issue on Ignite 1.8.0 when the topology changed.
https://issues.apache.org/jira/browse/IGNITE-4450 might fix the
problem.
FYR.


2017-03-16 1:03 GMT+08:00 bintisepaha :

> Andrey, do you think topology changes with client nodes (not server nodes)
> leaving and joining the cluster may have any impact on transaction?
>
> twice out of the 4 times we saw topology changes while the last successful
> update was done to the key that was eventually locked.
>
> Yesterday we saw this same issue on another cache. We are due for 1.8
> upgrade soon, but given that no one else has seen this issue before, we are
> not sure if 1.8 would have fixed it either.
>
> Let us know what you think.
>
> Thanks,
> Binti
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-
> subsequent-txns-failed-tp10536p11203.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Problems storing RoaringBitmaps

2017-03-15 Thread Luke Burton

I did just manage to use BinarySerializer registered as a custom serializer, 
and it seems to work pending some more tests. Again, I'm curious to hear whether 
this approach is appropriate for the use case. Here's the Clojure version:

(defn roaring-serializer []
  (reify BinarySerializer
    (writeBinary [this o binaryWriter]
      ;; o contains our object.
      (let [byte-stream (ByteArrayOutputStream.)]
        (with-open [out-stream (DataOutputStream. byte-stream)]
          (.serialize ^RoaringBitmap o out-stream))
        (.writeByteArray binaryWriter "val" (.toByteArray byte-stream))))
    (readBinary [this o binaryReader]
      ;; o contains an empty object into which we deserialize.
      (let [raw-bytes (.readByteArray binaryReader "val")
            byte-stream (ByteArrayInputStream. raw-bytes)]
        (with-open [in-stream (DataInputStream. byte-stream)]
          (.deserialize ^RoaringBitmap o in-stream))))))
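For Java users, the same registration can be sketched roughly as follows (a sketch assuming ignite-core and RoaringBitmap on the classpath; the field name "val" mirrors the Clojure version):

```java
// Sketch: register a custom BinarySerializer for RoaringBitmap so it
// round-trips via its own serialize()/deserialize() methods instead of
// falling back to OptimizedMarshaller.
BinaryTypeConfiguration typeCfg =
    new BinaryTypeConfiguration("org.roaringbitmap.RoaringBitmap");
typeCfg.setSerializer(new BinarySerializer() {
    @Override public void writeBinary(Object obj, BinaryWriter writer) throws BinaryObjectException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bos)) {
            ((RoaringBitmap) obj).serialize(out);
        } catch (IOException e) {
            throw new BinaryObjectException(e);
        }
        writer.writeByteArray("val", bos.toByteArray());
    }
    @Override public void readBinary(Object obj, BinaryReader reader) throws BinaryObjectException {
        byte[] bytes = reader.readByteArray("val");
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes))) {
            ((RoaringBitmap) obj).deserialize(in);
        } catch (IOException e) {
            throw new BinaryObjectException(e);
        }
    }
});
BinaryConfiguration binCfg = new BinaryConfiguration();
binCfg.setTypeConfigurations(Collections.singletonList(typeCfg));
IgniteConfiguration cfg = new IgniteConfiguration().setBinaryConfiguration(binCfg);
```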




> On Mar 15, 2017, at 10:34 AM, Luke Burton  wrote:
> 
> 
> Hi there,
> 
> I'm storing RoaringBitmaps in Ignite and have encountered an odd 
> serialization issue. Please forgive the samples below being in Clojure, I've 
> written a small wrapper around most Ignite APIs that I can use. I think you 
> can catch the gist of what I'm doing, I hope :)
> 
> (let [inst (i/instance "attribute_bitmaps")
>bmp (doto (RoaringBitmap.)
>  (.add 999))
>srz (fn [it]
>  (let [b (ByteArrayOutputStream.)]
>(with-open [o (DataOutputStream. b)]
>  (.serialize it o))
>(.toByteArray b)))]
> 
>(doto inst
>  (i/clear)
>  (i/put "test" bmp))
> 
>(byte-streams/print-bytes (srz bmp))
>(byte-streams/print-bytes (srz (i/get inst "test"))))
> 
> Here I'm just creating a bitmap and storing it in a cache. I'm then just 
> printing the bytes of the thing I stored, as well as what it looks like 
> coming back out of Ignite.
> 
> I get the following warning in the logs:
> 
> 10:16:45.730 [djura.local ~ nREPL-worker-3 ~ 
> o.a.i.internal.binary.BinaryContext] Class "org.roaringbitmap.RoaringBitmap" 
> cannot be serialized using BinaryMarshaller because it either implements 
> Externalizable interface or have writeObject/readObject methods. 
> OptimizedMarshaller will be used instead and class instances will be 
> deserialized on the server. Please ensure that all nodes have this class in 
> classpath. To enable binary serialization either implement Binarylizable 
> interface or set explicit serializer using 
> BinaryTypeConfiguration.setSerializer() method. 
> 
> And really strangely, I get the same number of bytes back but some of them at 
> the end have been zero'd out (first one is correct, second one went through 
> Ignite):
> 
> 3A 30 00 00 01 00 00 00  98 00 00 00 10 00 00 00  .0..
> 7F 96  ..
> 3A 30 00 00 01 00 00 00  98 00 00 00 10 00 00 00  .0..
> 00 00  ..
> 
> The resulting object is still a valid RoaringBitmap, except all the values in 
> the bitmap are wrong! 
> 
> I'm guessing from this and the logs, that OptimizedMarshaller is being used 
> instead of BinaryMarshaller, and OptimizedMarshaller is not copying the 
> internal fields of the class correctly.
> 
> Would the recommended approach here be to create a custom class that extends 
> RoaringBitmap and implements Binarylizable? I'm not sure if Binarylizable is 
> a suitable approach for this situation where I don't control the source for 
> the class in question. I have no knowledge of the internal fields of this 
> class and really just want to ensure it survives the roundtrip through Ignite 
> by using its own internal serialization mechanism.
> 
> Luke.
> 



Problems storing RoaringBitmaps

2017-03-15 Thread Luke Burton

Hi there,

I'm storing RoaringBitmaps in Ignite and have encountered an odd serialization 
issue. Please forgive the samples below being in Clojure; I've written a small 
wrapper around most of the Ignite APIs that I use. I think you can catch the gist 
of what I'm doing, I hope :)

 (let [inst (i/instance "attribute_bitmaps")
bmp (doto (RoaringBitmap.)
  (.add 999))
srz (fn [it]
  (let [b (ByteArrayOutputStream.)]
(with-open [o (DataOutputStream. b)]
  (.serialize it o))
(.toByteArray b)))]

(doto inst
  (i/clear)
  (i/put "test" bmp))

(byte-streams/print-bytes (srz bmp))
 (byte-streams/print-bytes (srz (i/get inst "test"))))

Here I'm just creating a bitmap and storing it in a cache. I'm then just 
printing the bytes of the thing I stored, as well as what it looks like coming 
back out of Ignite.

I get the following warning in the logs:

10:16:45.730 [djura.local ~ nREPL-worker-3 ~ 
o.a.i.internal.binary.BinaryContext] Class "org.roaringbitmap.RoaringBitmap" 
cannot be serialized using BinaryMarshaller because it either implements 
Externalizable interface or have writeObject/readObject methods. 
OptimizedMarshaller will be used instead and class instances will be 
deserialized on the server. Please ensure that all nodes have this class in 
classpath. To enable binary serialization either implement Binarylizable 
interface or set explicit serializer using 
BinaryTypeConfiguration.setSerializer() method. 

And really strangely, I get the same number of bytes back, but some of them at 
the end have been zeroed out (the first dump is correct; the second went through 
Ignite):

3A 30 00 00 01 00 00 00  98 00 00 00 10 00 00 00  .0..
7F 96  ..
3A 30 00 00 01 00 00 00  98 00 00 00 10 00 00 00  .0..
00 00  ..

The resulting object is still a valid RoaringBitmap, except all the values in 
the bitmap are wrong! 

I'm guessing from this and the logs, that OptimizedMarshaller is being used 
instead of BinaryMarshaller, and OptimizedMarshaller is not copying the 
internal fields of the class correctly.

Would the recommended approach here be to create a custom class that extends 
RoaringBitmap and implements Binarylizable? I'm not sure if Binarylizable is a 
suitable approach for this situation where I don't control the source for the 
class in question. I have no knowledge of the internal fields of this class and 
really just want to ensure it survives the roundtrip through Ignite by using 
its own internal serialization mechanism.

Luke.



Re: Ignite.NET very slow Cache Loading

2017-03-15 Thread Saifullah Zahid
I tried that, but it throws an exception:

class org.apache.ignite.IgniteCheckedException: Cannot enable write-behind
(writer or store is not provided) for cache

If I comment out the following settings:

ReadThrough = true,
WriteThrough = true,
WriteBehindEnabled = true,

the cache gets created, but without loading data from the database.
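That exception is expected: write-behind is a buffer in front of a cache store, so write-behind (and read/write-through) only make sense together with a store factory. A Java sketch of the same constraint (names are illustrative; `MyJdbcStore` is hypothetical):

```java
// Sketch: write-behind needs a store to write behind to. Enabling it
// without a CacheStoreFactory raises "Cannot enable write-behind".
CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("transactionsdetail");
ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyJdbcStore.class)); // hypothetical store class
ccfg.setReadThrough(true);
ccfg.setWriteThrough(true);
ccfg.setWriteBehindEnabled(true); // valid only with the store factory above
```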

thanks,



On Wed, Mar 15, 2017 at 10:00 PM, Pavel Tupitsyn 
wrote:

> Why use Spring?
> I propose to do just one thing - comment out the CacheStoreFactory line.
>
> On Wed, Mar 15, 2017 at 7:30 PM, Saifullah Zahid 
> wrote:
>
>> Hi,
>>
>> If I disable CacheStoreFactor then I have to provide the
>> CacheConfiguration in spring xml
>> But I don't know how to provide Database and table mapping in spring xml,
>> I tried following configuration but it is failing.
>> I am not sure if it is right way to do?
>>
>> 
>> http://www.springframework.org/schema/beans;
>>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
>>xmlns:util="http://www.springframework.org/schema/util;
>>xsi:schemaLocation="http://www.springframework.org/schema/beans
>>http://www.springframework.or
>> g/schema/beans/spring-beans.xsd
>>http://www.springframework.org/schema/util
>>http://www.springframework.or
>> g/schema/util/spring-util.xsd">
>>
>> 
>> 
>> > value="com.microsoft.sqlserver.jdbc.SQLServerDriver"
>> />
>> 
>> 
>> 
>> 
>>  
>>  
>> 
>>  
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> > value="transactionsdetail" />
>> 
>> 
>> 
>> 
>>
>> Getting following error
>>
>>  Failed to instantiate configuration from Spring XML
>>
>> Thanks,
>> Saif
>>
>> On Wed, Mar 15, 2017 at 8:29 PM, Pavel Tupitsyn 
>> wrote:
>>
>>> Have you tried commenting out the cache store as I described above?
>>>
>>> On Wed, Mar 15, 2017 at 6:10 PM, Saifullah Zahid 
>>> wrote:
>>>
 Hi,

 Following is my load cache method.
 If I comment out the following line of code, it does not slow down:
 act(GetKeyValue(item), item);

 public void LoadCache(Action act, params object[] args)
 {
 using (DbConnection _connection = Create())
 {
 using (DbCommand command = _connection.CreateCommand())
 {
 MakeLoadCacheCommand(command, args);
 using (var reader = command.ExecuteReader())
 {
 while (reader.Read())
 {
 var item = new T();
 Map(reader, item); //populate object
 * act(GetKeyValue(item), item); //insert into cache*
 }
 }
 }
 }
 }

 Looks like the issue is when inserting data into the cache.

 Thanks,
 Saif


 On Wed, Mar 15, 2017 at 7:02 PM, Pavel Tupitsyn 
 wrote:

> Hi,
>
> You have cache store configured, which probably causes the slowdown.
> Please try to disable cache store (remove CacheStoreFactory from
> config) and see if it makes any difference.
>
> On Wed, Mar 15, 2017 at 4:02 PM, Saifullah Zahid 
> wrote:
>
>> Hi,
>>
>> I am facing an issue with cache loading: at the start the cache loads
>> quickly, but after some time it becomes very slow, almost 1 row per second.
>> There are about 4 million rows in the table.
>> OS = Windows Server 2012
>> RAM = 64 GB
>> Node Heap config is 16 GB.
>> Following is Cache configuration
>>
>> TransactionsDetailStore = m_cache.GetOrCreateCache> TransactionsDetail>(new CacheConfiguration("transactionssdetail",
>> typeof(TransactionsDetail))
>> {
>> CacheStoreFactory = new TransactionsDetailStoreFactory("ApplicationDB",
>> true),
>> ReadThrough = true,
>> WriteThrough = true,
>> WriteBehindEnabled = true,
>> KeepBinaryInStore = false,
>> WriteBehindFlushThreadCount = 4,
>> WriteBehindFlushFrequency = new TimeSpan(0, 0, 2),
>> MemoryMode = CacheMemoryMode.OffheapTiered,
>> OffHeapMaxMemory = 0,
>> EvictionPolicy = new LruEvictionPolicy { MaxSize = 100 },
>> WriteSynchronizationMode = CacheWriteSynchronizationMode.FullSync
>> });
>>
>> Am I missing some configuration?
>> Does anyone have any ideas?
>>
>> Thanks,
>> Saif
>>
>
>

>>>
>>
>


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-03-15 Thread bintisepaha
Andrey, do you think topology changes with client nodes (not server nodes)
leaving and joining the cluster may have any impact on transactions?

Twice out of the 4 times, we saw topology changes while the last successful
update was made to the key that was eventually locked.

Yesterday we saw this same issue on another cache. We are due for 1.8
upgrade soon, but given that no one else has seen this issue before, we are
not sure if 1.8 would have fixed it either.

Let us know what you think.

Thanks,
Binti



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p11203.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite.NET very slow Cache Loading

2017-03-15 Thread Pavel Tupitsyn
Why use Spring?
I propose to do just one thing - comment out the CacheStoreFactory line.

On Wed, Mar 15, 2017 at 7:30 PM, Saifullah Zahid 
wrote:

> Hi,
>
> If I disable CacheStoreFactory then I have to provide the
> CacheConfiguration in spring xml
> But I don't know how to provide database and table mapping in Spring XML.
> I tried the following configuration but it is failing.
> I am not sure if this is the right way to do it.
>
> 
> <beans xmlns="http://www.springframework.org/schema/beans"
>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>xmlns:util="http://www.springframework.org/schema/util"
>xsi:schemaLocation="http://www.springframework.org/schema/beans
>http://www.springframework.org/schema/beans/spring-beans.xsd
>http://www.springframework.org/schema/util
>http://www.springframework.org/schema/util/spring-util.xsd">
>
> 
> 
>  value="com.microsoft.sqlserver.jdbc.SQLServerDriver"
> />
>  value="jdbc:sqlserver://localhost:1433;databaseName=TransactionsDB"
> />
> 
> 
> 
>  
>  
> 
>  
> 
> 
> 
> 
> 
> 
> 
>  value="transactionsdetail" />
> 
> 
> 
> 
>
> Getting following error
>
>  Failed to instantiate configuration from Spring XML
>
> Thanks,
> Saif
>
> On Wed, Mar 15, 2017 at 8:29 PM, Pavel Tupitsyn 
> wrote:
>
>> Have you tried commenting out the cache store as I described above?
>>
>> On Wed, Mar 15, 2017 at 6:10 PM, Saifullah Zahid 
>> wrote:
>>
>>> Hi,
>>>
>>> The following is my load cache method.
>>> If I comment out the following line of code, it does not slow down:
>>> act(GetKeyValue(item), item);
>>>
>>> public void LoadCache(Action act, params object[] args)
>>> {
>>> using (DbConnection _connection = Create())
>>> {
>>> using (DbCommand command = _connection.CreateCommand())
>>> {
>>> MakeLoadCacheCommand(command, args);
>>> using (var reader = command.ExecuteReader())
>>> {
>>> while (reader.Read())
>>> {
>>> var item = new T();
>>> Map(reader, item); //populate object
>>> * act(GetKeyValue(item), item); //insert into cache*
>>> }
>>> }
>>> }
>>> }
>>> }
>>>
>>> Looks like the issue is when inserting data into the cache.
>>>
>>> Thanks,
>>> Saif
>>>
>>>
>>> On Wed, Mar 15, 2017 at 7:02 PM, Pavel Tupitsyn 
>>> wrote:
>>>
 Hi,

 You have cache store configured, which probably causes the slowdown.
 Please try to disable cache store (remove CacheStoreFactory from
 config) and see if it makes any difference.

 On Wed, Mar 15, 2017 at 4:02 PM, Saifullah Zahid 
 wrote:

> Hi,
>
> I am facing an issue with cache loading. At the start the cache loads
> quickly, but after some time it becomes very slow, almost 1 row per second.
> There are about 4 million rows in a table.
> OS = Windows Server 2012
> RAM = 64 GB
> Node Heap config is 16 GB.
> Following is Cache configuration
>
> TransactionsDetailStore = m_cache.GetOrCreateCache TransactionsDetail>(new CacheConfiguration("transactionssdetail",
> typeof(TransactionsDetail))
> {
> CacheStoreFactory = new TransactionsDetailStoreFactory("ApplicationDB",
> true),
> ReadThrough = true,
> WriteThrough = true,
> WriteBehindEnabled = true,
> KeepBinaryInStore = false,
> WriteBehindFlushThreadCount = 4,
> WriteBehindFlushFrequency = new TimeSpan(0, 0, 2),
> MemoryMode = CacheMemoryMode.OffheapTiered,
> OffHeapMaxMemory = 0,
> EvictionPolicy = new LruEvictionPolicy { MaxSize = 100 },
> WriteSynchronizationMode = CacheWriteSynchronizationMode.FullSync
> });
>
> Am I missing some configuration?
> Kindly see if anyone has an idea?
>
> Thanks,
> Saif
>


>>>
>>
>


Re: Ignite.NET very slow Cache Loading

2017-03-15 Thread Saifullah Zahid
Hi,

If I disable CacheStoreFactory then I have to provide the CacheConfiguration
in Spring XML.
But I don't know how to provide database and table mapping in Spring XML. I
tried the following configuration but it is failing.
I am not sure if this is the right way to do it.


<beans xmlns="http://www.springframework.org/schema/beans"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns:util="http://www.springframework.org/schema/util"
   xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/util
       http://www.springframework.org/schema/util/spring-util.xsd">








 
 

 













Getting following error

 Failed to instantiate configuration from Spring XML

Thanks,
Saif

On Wed, Mar 15, 2017 at 8:29 PM, Pavel Tupitsyn 
wrote:

> Have you tried commenting out the cache store as I described above?
>
> On Wed, Mar 15, 2017 at 6:10 PM, Saifullah Zahid 
> wrote:
>
>> Hi,
>>
>> The following is my load cache method.
>> If I comment out the following line of code, it does not slow down:
>> act(GetKeyValue(item), item);
>>
>> public void LoadCache(Action act, params object[] args)
>> {
>> using (DbConnection _connection = Create())
>> {
>> using (DbCommand command = _connection.CreateCommand())
>> {
>> MakeLoadCacheCommand(command, args);
>> using (var reader = command.ExecuteReader())
>> {
>> while (reader.Read())
>> {
>> var item = new T();
>> Map(reader, item); //populate object
>> * act(GetKeyValue(item), item); //insert into cache*
>> }
>> }
>> }
>> }
>> }
>>
>> Looks like the issue is when inserting data into the cache.
>>
>> Thanks,
>> Saif
>>
>>
>> On Wed, Mar 15, 2017 at 7:02 PM, Pavel Tupitsyn 
>> wrote:
>>
>>> Hi,
>>>
>>> You have cache store configured, which probably causes the slowdown.
>>> Please try to disable cache store (remove CacheStoreFactory from
>>> config) and see if it makes any difference.
>>>
>>> On Wed, Mar 15, 2017 at 4:02 PM, Saifullah Zahid 
>>> wrote:
>>>
 Hi,

 I am facing an issue with cache loading. At the start the cache loads quickly,
 but after some time it becomes very slow, almost 1 row per second.
 There are about 4 million rows in a table.
 OS = Windows Server 2012
 RAM = 64 GB
 Node Heap config is 16 GB.
 Following is Cache configuration

 TransactionsDetailStore = m_cache.GetOrCreateCache(new CacheConfiguration("transactionssdetail",
 typeof(TransactionsDetail))
 {
 CacheStoreFactory = new TransactionsDetailStoreFactory("ApplicationDB",
 true),
 ReadThrough = true,
 WriteThrough = true,
 WriteBehindEnabled = true,
 KeepBinaryInStore = false,
 WriteBehindFlushThreadCount = 4,
 WriteBehindFlushFrequency = new TimeSpan(0, 0, 2),
 MemoryMode = CacheMemoryMode.OffheapTiered,
 OffHeapMaxMemory = 0,
 EvictionPolicy = new LruEvictionPolicy { MaxSize = 100 },
 WriteSynchronizationMode = CacheWriteSynchronizationMode.FullSync
 });

 Am I missing some configuration?
 Kindly see if anyone has an idea?

 Thanks,
 Saif

>>>
>>>
>>
>


Re: Ignite.NET very slow Cache Loading

2017-03-15 Thread Pavel Tupitsyn
Have you tried commenting out the cache store as I described above?

On Wed, Mar 15, 2017 at 6:10 PM, Saifullah Zahid 
wrote:

> Hi,
>
> The following is my load cache method.
> If I comment out the following line of code, it does not slow down:
> act(GetKeyValue(item), item);
>
> public void LoadCache(Action act, params object[] args)
> {
> using (DbConnection _connection = Create())
> {
> using (DbCommand command = _connection.CreateCommand())
> {
> MakeLoadCacheCommand(command, args);
> using (var reader = command.ExecuteReader())
> {
> while (reader.Read())
> {
> var item = new T();
> Map(reader, item); //populate object
> * act(GetKeyValue(item), item); //insert into cache*
> }
> }
> }
> }
> }
>
> Looks like the issue is when inserting data into the cache.
>
> Thanks,
> Saif
>
>
> On Wed, Mar 15, 2017 at 7:02 PM, Pavel Tupitsyn 
> wrote:
>
>> Hi,
>>
>> You have cache store configured, which probably causes the slowdown.
>> Please try to disable cache store (remove CacheStoreFactory from config)
>> and see if it makes any difference.
>>
>> On Wed, Mar 15, 2017 at 4:02 PM, Saifullah Zahid 
>> wrote:
>>
>>> Hi,
>>>
>>> I am facing an issue with cache loading. At the start the cache loads
>>> quickly, but after some time it becomes very slow, almost 1 row per second.
>>> There are about 4 million rows in a table.
>>> OS = Windows Server 2012
>>> RAM = 64 GB
>>> Node Heap config is 16 GB.
>>> Following is Cache configuration
>>>
>>> TransactionsDetailStore = m_cache.GetOrCreateCache>> TransactionsDetail>(new CacheConfiguration("transactionssdetail",
>>> typeof(TransactionsDetail))
>>> {
>>> CacheStoreFactory = new TransactionsDetailStoreFactory("ApplicationDB",
>>> true),
>>> ReadThrough = true,
>>> WriteThrough = true,
>>> WriteBehindEnabled = true,
>>> KeepBinaryInStore = false,
>>> WriteBehindFlushThreadCount = 4,
>>> WriteBehindFlushFrequency = new TimeSpan(0, 0, 2),
>>> MemoryMode = CacheMemoryMode.OffheapTiered,
>>> OffHeapMaxMemory = 0,
>>> EvictionPolicy = new LruEvictionPolicy { MaxSize = 100 },
>>> WriteSynchronizationMode = CacheWriteSynchronizationMode.FullSync
>>> });
>>>
>>> Am I missing some configuration?
>>> Kindly see if anyone has an idea?
>>>
>>> Thanks,
>>> Saif
>>>
>>
>>
>


Re: IGNITE-4106

2017-03-15 Thread Andrey Mashenkov
Hi Anil,

It is a bug. The error occurs when an entry has been evicted from the cache.
I've created a ticket, IGNITE-4826 [1].

[1] https://issues.apache.org/jira/browse/IGNITE-4826


On Wed, Mar 15, 2017 at 10:22 AM, Anil  wrote:

> Hi Val and Andrey,
>
> I am seeing the exception with the following code as well. Not sure why it
> is not reproduced at your end.
>
> Ignite ignite = Ignition.start(new File("/workspace/cache-
> manager/test-parallelism/src/main/resources/ignite.xml").toURI().toURL());
> IgniteCache cache = ignite.cache("TEST_CACHE");
> IgniteDataStreamer streamer = ignite.dataStreamer("TEST_
> CACHE");
> for (int i =1; i< 10; i++){
> streamer.addData(String.valueOf(i), new Test("1", "1"));
> }
>
> Exception :
>
> 2017-03-15 12:46:43 ERROR DataStreamerImpl:495 - DataStreamer operation
> failed.
> class org.apache.ignite.IgniteCheckedException: Failed to finish
> operation (too many remaps): 32
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamerImpl$5.apply(DataStreamerImpl.java:863)
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamerImpl$5.apply(DataStreamerImpl.java:828)
> at org.apache.ignite.internal.util.future.GridFutureAdapter$
> ArrayListener.apply(GridFutureAdapter.java:456)
> at org.apache.ignite.internal.util.future.GridFutureAdapter$
> ArrayListener.apply(GridFutureAdapter.java:439)
> at org.apache.ignite.internal.util.future.GridFutureAdapter.
> notifyListener(GridFutureAdapter.java:271)
> at org.apache.ignite.internal.util.future.GridFutureAdapter.
> notifyListeners(GridFutureAdapter.java:259)
> at org.apache.ignite.internal.util.future.GridFutureAdapter.
> onDone(GridFutureAdapter.java:389)
> at org.apache.ignite.internal.util.future.GridFutureAdapter.
> onDone(GridFutureAdapter.java:355)
> at org.apache.ignite.internal.util.future.GridFutureAdapter.
> onDone(GridFutureAdapter.java:343)
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamerImpl$Buffer$2.apply(DataStreamerImpl.java:1564)
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamerImpl$Buffer$2.apply(DataStreamerImpl.java:1554)
> at org.apache.ignite.internal.util.future.GridFutureAdapter.
> notifyListener(GridFutureAdapter.java:271)
> at org.apache.ignite.internal.util.future.GridFutureAdapter.
> listen(GridFutureAdapter.java:228)
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamerImpl$Buffer.localUpdate(DataStreamerImpl.java:1554)
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamerImpl$Buffer.submit(DataStreamerImpl.java:1626)
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamerImpl$Buffer.update(DataStreamerImpl.java:1416)
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamerImpl.load0(DataStreamerImpl.java:932)
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamerImpl.access$1100(DataStreamerImpl.java:121)
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamerImpl$5$1.run(DataStreamerImpl.java:876)
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamerImpl$5$2.call(DataStreamerImpl.java:903)
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamerImpl$5$2.call(DataStreamerImpl.java:891)
> at org.apache.ignite.internal.util.IgniteUtils.
> wrapThreadLoader(IgniteUtils.java:6618)
> at org.apache.ignite.internal.processors.closure.
> GridClosureProcessor$2.body(GridClosureProcessor.java:925)
> at org.apache.ignite.internal.util.worker.GridWorker.run(
> GridWorker.java:110)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: class org.apache.ignite.IgniteCheckedException:
> GridH2QueryContext is not initialized.
> at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7239)
> at org.apache.ignite.internal.processors.closure.
> GridClosureProcessor$2.body(GridClosureProcessor.java:933)
> ... 4 more
> Caused by: java.lang.IllegalStateException: GridH2QueryContext is not
> initialized.
> at org.apache.ignite.internal.processors.query.h2.opt.GridH2IndexBase.
> threadLocalSegment(GridH2IndexBase.java:197)
> at org.apache.ignite.internal.processors.query.h2.opt.
> GridH2TreeIndex.findOne(GridH2TreeIndex.java:290)
> at org.apache.ignite.internal.processors.query.h2.opt.
> GridH2Table.onSwapUnswap(GridH2Table.java:232)
> at org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.onSwap(
> GridH2Table.java:189)
> at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.onSwap(
> IgniteH2Indexing.java:737)
> at org.apache.ignite.internal.processors.query.GridQueryProcessor.onSwap(
> GridQueryProcessor.java:1183)
> at org.apache.ignite.internal.processors.cache.query.
> GridCacheQueryManager.onSwap(GridCacheQueryManager.java:398)
> at 

Re: Ignite.NET very slow Cache Loading

2017-03-15 Thread Saifullah Zahid
Hi,

The following is my load cache method.
If I comment out the following line of code, it does not slow down:
act(GetKeyValue(item), item);

public void LoadCache(Action act, params object[] args)
{
using (DbConnection _connection = Create())
{
using (DbCommand command = _connection.CreateCommand())
{
MakeLoadCacheCommand(command, args);
using (var reader = command.ExecuteReader())
{
while (reader.Read())
{
var item = new T();
Map(reader, item); //populate object
* act(GetKeyValue(item), item); //insert into cache*
}
}
}
}
}

Looks like the issue is when inserting data into the cache.

Thanks,
Saif


On Wed, Mar 15, 2017 at 7:02 PM, Pavel Tupitsyn 
wrote:

> Hi,
>
> You have cache store configured, which probably causes the slowdown.
> Please try to disable cache store (remove CacheStoreFactory from config)
> and see if it makes any difference.
>
> On Wed, Mar 15, 2017 at 4:02 PM, Saifullah Zahid 
> wrote:
>
>> Hi,
>>
>> I am facing an issue with cache loading. At the start the cache loads
>> quickly, but after some time it becomes very slow, almost 1 row per second.
>> There are about 4 million rows in a table.
>> OS = Windows Server 2012
>> RAM = 64 GB
>> Node Heap config is 16 GB.
>> Following is Cache configuration
>>
>> TransactionsDetailStore = m_cache.GetOrCreateCache> TransactionsDetail>(new CacheConfiguration("transactionssdetail",
>> typeof(TransactionsDetail))
>> {
>> CacheStoreFactory = new TransactionsDetailStoreFactory("ApplicationDB",
>> true),
>> ReadThrough = true,
>> WriteThrough = true,
>> WriteBehindEnabled = true,
>> KeepBinaryInStore = false,
>> WriteBehindFlushThreadCount = 4,
>> WriteBehindFlushFrequency = new TimeSpan(0, 0, 2),
>> MemoryMode = CacheMemoryMode.OffheapTiered,
>> OffHeapMaxMemory = 0,
>> EvictionPolicy = new LruEvictionPolicy { MaxSize = 100 },
>> WriteSynchronizationMode = CacheWriteSynchronizationMode.FullSync
>> });
>>
>> Am I missing some configuration?
>> Kindly see if anyone has an idea?
>>
>> Thanks,
>> Saif
>>
>
>


Re: Configuring Ignite Services to deploy on Server nodes only

2017-03-15 Thread afedotov
If you want to get IgniteServices for only server nodes by using XML config,
then, to my knowledge, it isn't feasible.
Why do you want to configure it via XML?

On Wed, Mar 15, 2017 at 12:48 PM, GB [via Apache Ignite Users] <
ml-node+s70518n11189...@n6.nabble.com> wrote:

> Hi,
>
> The link below details how to programmatically deploy Ignite Services only
> on server nodes in the cluster. Our Ignite cluster is in standalone mode.
>
> 
>
> Can someone point me to documentation or a way of achieving a similar thing
> via XML configuration?
> - Specifying a node filter via XML configuration:
>
> public class ServiceFilter implements IgnitePredicate<ClusterNode> {
> @Override public boolean apply(ClusterNode node) {
>   // The service will be deployed on non client nodes
> // that have the attribute 'west.coast.node'.
> return !node.isClient() &&
> node.attributes().containsKey("west.coast.node");
>   }
> }
>
> - Specifying Server Node only deployment via xml configuration :
>
> // A service will be deployed on the server nodes only.
> IgniteServices services = ignite.services(ignite.cluster().forServers());
>
> // Deploying the service.
> services.deploy(serviceCfg);
>
>
> --
> If you reply to this email, your message will be added to the discussion
> below:
> http://apache-ignite-users.70518.x6.nabble.com/
> Configuring-Ignite-Services-to-deploy-on-Server-nodes-only-tp11189.html
> To start a new topic under Apache Ignite Users, email
> ml-node+s70518n1...@n6.nabble.com
> To unsubscribe from Apache Ignite Users, click here
> 
> .
> NAML
> 
>



-- 
Kind regards,
Alex.




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Configuring-Ignite-Services-to-deploy-on-Server-nodes-only-tp11189p11197.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: Same Affinity For Same Key On All Caches

2017-03-15 Thread Andrey Mashenkov
Good catch, Taras!

+1 for balanced Rendezvous AF instead of Fair AF.


On Wed, Mar 15, 2017 at 1:29 PM, Taras Ledkov  wrote:

> Folks,
>
> I worked on issue https://issues.apache.org/jira/browse/IGNITE-3018 that
> is related to performance of Rendezvous AF.
>
> But the Wang/Jenkins integer hash distribution is worse than MD5's. So, I
> tried to use a simple partition balancer, close to the Fair AF, for the
> Rendezvous AF.
> Take a look at the heatmaps of distributions in the issue, e.g.:
> - Comparison of the current Rendezvous AF and the new Rendezvous AF based on
> the Wang/Jenkins hash: https://issues.apache.org/jira/secure/attachment/
> 12858701/004.png
> - Comparison of the current Rendezvous AF and the new Rendezvous AF based on
> the Wang/Jenkins hash with the partition balancer: https://issues.apache.org/
> jira/secure/attachment/12858690/balanced.004.png
>
> When the balancer is enabled, the distribution of partitions across nodes
> looks close to even, but in this case there is no guarantee that a partition
> doesn't move from one node to another when a node leaves the topology.
> It is not guaranteed, but we try to minimize it because a sorted array of
> nodes is used (as for the pure Rendezvous AF).
>
> I think we can use the new fast Rendezvous AF with a 'useBalancer' flag
> instead of the Fair AF.
>
>
> On 03.03.2017 1:56, Denis Magda wrote:
>
> What??? Unbelievable. It sounds like a design flaw to me. Any ideas how to
> fix?
>
> —
> Denis
>
> On Mar 2, 2017, at 2:43 PM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
> Adding back the dev list.
>
> Folks,
>
> Are there any opinions on the problem discussed here? Do we really need
> FairAffinityFunction if it can't guarantee cross-cache collocation?
>
> -Val
>
> On Thu, Mar 2, 2017 at 2:41 PM, vkulichenko  > wrote:
>
>> Hi Alex,
>>
>> I see your point. Can you please outline its advantages vs rendezvous
>> function?
>>
>> In my view issue discussed here makes it pretty much useless in vast
>> majority of use cases, and very error-prone in all others.
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Same-Affinity-For-Same-Key-On-All-Caches-
>> tp10829p11006.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
>
> --
> Taras Ledkov
> Mail-To: tled...@gridgain.com
>
>


-- 
Best regards,
Andrey V. Mashenkov
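
To make the rendezvous scheme discussed above concrete: each partition is
assigned to the node with the highest mixed-hash score, so when a node leaves,
only the partitions that node owned are reassigned. A minimal pure-Java sketch
under stated assumptions — the mix variant, the node names, and the scoring
formula are illustrative, not Ignite's actual RendezvousAffinityFunction:

```java
import java.util.Arrays;
import java.util.List;

public class RendezvousSketch {
    // A Wang/Jenkins-style 32-bit integer mix, similar in spirit to the hash
    // debated above (this exact variant is illustrative, not Ignite's code).
    static int mix(int h) {
        h += (h << 15) ^ 0xffffcd7d;
        h ^= (h >>> 10);
        h += (h << 3);
        h ^= (h >>> 6);
        h += (h << 2) + (h << 14);
        return h ^ (h >>> 16);
    }

    // Rendezvous (highest-random-weight) assignment: every node gets a score
    // for the partition, and the highest score wins.
    static String assign(int partition, List<String> nodes) {
        String best = null;
        int bestScore = Integer.MIN_VALUE;
        for (String node : nodes) {
            int score = mix(node.hashCode() ^ mix(partition));
            if (best == null || score > bestScore) {
                best = node;
                bestScore = score;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<String> all = Arrays.asList("nodeA", "nodeB", "nodeC");
        List<String> fewer = Arrays.asList("nodeA", "nodeB");
        // Key property: when nodeC leaves, only partitions it owned move.
        for (int p = 0; p < 1024; p++) {
            String before = assign(p, all);
            if (!before.equals(assign(p, fewer)) && !before.equals("nodeC"))
                throw new AssertionError("a partition not owned by nodeC moved");
        }
        System.out.println("only nodeC's partitions moved");
    }
}
```

The main method checks the stability property over 1024 partitions; a balancer
layered on top, as discussed in the thread, trades some of this stability for a
more even spread.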


Re: Understanding faster batch inserts/updates with apache ignite 1.9

2017-03-15 Thread pragmaticbigdata
Thanks for the reply.

DML also implies query parsing, mapping to Java objects and other
> preparation steps before IgniteDataStreamer API is called. Thus the
> performance difference.


Does this mean the data streamer API would be faster for pre-loading or
bulk data as compared to the streaming-mode JDBC driver?

Thanks.

On Wed, Mar 15, 2017 at 12:24 AM, vkulichenko [via Apache Ignite Users] <
ml-node+s70518n11170...@n6.nabble.com> wrote:

> 1. Yes, it's using IgniteDataStreamer under the hood and even has the same
> parameters.
> 2. DML also implies query parsing, mapping to Java objects and other
> preparation steps before IgniteDataStreamer API is called. Thus the
> performance difference.
>
> -Val
>
> --
> If you reply to this email, your message will be added to the discussion
> below:
> http://apache-ignite-users.70518.x6.nabble.com/Understanding-faster-batch-
> inserts-updates-with-apache-ignite-1-9-tp7p11170.html
> To unsubscribe from Understanding faster batch inserts/updates with apache
> ignite 1.9, click here
> 
> .
> NAML
> 
>




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Understanding-faster-batch-inserts-updates-with-apache-ignite-1-9-tp7p11195.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: Ignite.NET very slow Cache Loading

2017-03-15 Thread Pavel Tupitsyn
Hi,

You have cache store configured, which probably causes the slowdown.
Please try to disable cache store (remove CacheStoreFactory from config)
and see if it makes any difference.

On Wed, Mar 15, 2017 at 4:02 PM, Saifullah Zahid 
wrote:

> Hi,
>
> I am facing an issue with cache loading. At the start the cache loads
> quickly, but after some time it becomes very slow, almost 1 row per second.
> There are about 4 million rows in a table.
> OS = Windows Server 2012
> RAM = 64 GB
> Node Heap config is 16 GB.
> Following is Cache configuration
>
> TransactionsDetailStore = m_cache.GetOrCreateCache TransactionsDetail>(new CacheConfiguration("transactionssdetail",
> typeof(TransactionsDetail))
> {
> CacheStoreFactory = new TransactionsDetailStoreFactory("ApplicationDB",
> true),
> ReadThrough = true,
> WriteThrough = true,
> WriteBehindEnabled = true,
> KeepBinaryInStore = false,
> WriteBehindFlushThreadCount = 4,
> WriteBehindFlushFrequency = new TimeSpan(0, 0, 2),
> MemoryMode = CacheMemoryMode.OffheapTiered,
> OffHeapMaxMemory = 0,
> EvictionPolicy = new LruEvictionPolicy { MaxSize = 100 },
> WriteSynchronizationMode = CacheWriteSynchronizationMode.FullSync
> });
>
> Am I missing some configuration?
> Kindly see if anyone has an idea?
>
> Thanks,
> Saif
>


Ignite.NET very slow Cache Loading

2017-03-15 Thread Saifullah Zahid
Hi,

I am facing an issue with cache loading. At the start the cache loads quickly,
but after some time it becomes very slow, almost 1 row per second.
There are about 4 million rows in a table.
OS = Windows Server 2012
RAM = 64 GB
Node Heap config is 16 GB.
Following is Cache configuration

TransactionsDetailStore = m_cache.GetOrCreateCache(new CacheConfiguration("transactionssdetail",
typeof(TransactionsDetail))
{
CacheStoreFactory = new TransactionsDetailStoreFactory("ApplicationDB",
true),
ReadThrough = true,
WriteThrough = true,
WriteBehindEnabled = true,
KeepBinaryInStore = false,
WriteBehindFlushThreadCount = 4,
WriteBehindFlushFrequency = new TimeSpan(0, 0, 2),
MemoryMode = CacheMemoryMode.OffheapTiered,
OffHeapMaxMemory = 0,
EvictionPolicy = new LruEvictionPolicy { MaxSize = 100 },
WriteSynchronizationMode = CacheWriteSynchronizationMode.FullSync
});

Am I missing some configuration?
Kindly see if anyone has an idea?

Thanks,
Saif


Distributed Closures VS Executor Service

2017-03-15 Thread kmandalas
Hello Ignite team,

As we are evaluating potential usage of Ignite for our Analytics projects, I
would like to ask the following:

- *Compute Grid*: what is the practical difference between Distributed
Closures and Executor Service? For example if I have computations that I
want to distribute (multiple callables) and I want to take advantage of all
cores of the cluster (based on the existing load of course) so I get *fast
results*, is there a difference between using exec.submit() VS
ignite.compute().call()? Moreover, if some distributed calculation is in
progress and occupies all the cores of the cluster and in the meantime a new
distributed calculation is requested, then what will happen? Is there some
queue mechanism and how is it configured? Which is the best way to implement
this? Is there a need for a messaging queue, or can we rely on thread pool
size configurations, etc.?

- If I want to pass data to callables, for example lists of objects of small
to medium size (collections of 2000 to 6000 objects maximum, with an average
object size of ~200 KB), what is the best way to do this: pass them as
arguments to the callables, or put them in the distributed cache for the
callables to pick up from there?

Thank you.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Distributed-Closures-VS-Executor-Service-tp11192.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
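
On the queuing question above, a plain JDK ExecutorService illustrates the
baseline behavior: a fixed pool keeps an internal queue, so callables submitted
while all workers are busy simply wait for a free worker rather than failing.
This is only a single-JVM sketch, not Ignite's compute grid; the class and
method names are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ClosureQueueSketch {
    // Submits n callables to a fixed pool of the given size and sums their
    // results. With more tasks than threads, the extras wait in the pool's
    // internal queue -- they are not rejected.
    static long sumOfSquares(int n, int threads) throws Exception {
        ExecutorService exec = Executors.newFixedThreadPool(threads);
        try {
            List<Callable<Long>> tasks = new ArrayList<>();
            for (int i = 1; i <= n; i++) {
                final long v = i;
                tasks.add(() -> v * v); // stand-in for a real computation
            }
            long sum = 0;
            for (Future<Long> f : exec.invokeAll(tasks)) // blocks until all finish
                sum += f.get();
            return sum;
        } finally {
            exec.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumOfSquares(8, 4)); // prints 204
    }
}
```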

Re: ScanQuery and Lock

2017-03-15 Thread afedotov
Hi,

A scan query will return the current actual value; it won't wait for the lock
to be released.

On Tue, Mar 14, 2017 at 9:56 AM, Anil [via Apache Ignite Users] <
ml-node+s70518n11159...@n6.nabble.com> wrote:

> HI,
>
> How do a scan query and a lock on the same entry work?
>
> thread 1 -> lock on a entry
> thread 2 -> scan query trying to get the record
>
> I believe the scan query returns the old entry, or waits till the lock is
> released on the entry?
>
> Thanks
>
>
>
>
> --
> If you reply to this email, your message will be added to the discussion
> below:
> http://apache-ignite-users.70518.x6.nabble.com/ScanQuery-
> and-Lock-tp11159.html
> To start a new topic under Apache Ignite Users, email
> ml-node+s70518n1...@n6.nabble.com
> To unsubscribe from Apache Ignite Users, click here
> 
> .
> NAML
> 
>



-- 
Kind regards,
Alex.




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ScanQuery-and-Lock-tp11159p11191.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: Same Affinity For Same Key On All Caches

2017-03-15 Thread Taras Ledkov

Folks,

I worked on issue https://issues.apache.org/jira/browse/IGNITE-3018 that 
is related to performance of Rendezvous AF.


But the Wang/Jenkins integer hash distribution is worse than MD5's. So, I
tried to use a simple partition balancer close to the Fair AF for the
Rendezvous AF.

Take a look at the heatmaps of distributions in the issue, e.g.:
- Comparison of the current Rendezvous AF and the new Rendezvous AF based on
the Wang/Jenkins hash:
https://issues.apache.org/jira/secure/attachment/12858701/004.png
- Comparison of the current Rendezvous AF and the new Rendezvous AF based on
the Wang/Jenkins hash with the partition balancer:
https://issues.apache.org/jira/secure/attachment/12858690/balanced.004.png


When the balancer is enabled, the distribution of partitions across nodes
looks close to even, but in this case there is no guarantee that a partition
doesn't move from one node to another when a node leaves the topology.
It is not guaranteed, but we try to minimize it because a sorted array of
nodes is used (as for the pure Rendezvous AF).


I think we can use the new fast Rendezvous AF with a 'useBalancer' flag
instead of the Fair AF.


On 03.03.2017 1:56, Denis Magda wrote:
What??? Unbelievable. It sounds like a design flaw to me. Any ideas 
how to fix?


—
Denis

On Mar 2, 2017, at 2:43 PM, Valentin Kulichenko 
> wrote:


Adding back the dev list.

Folks,

Are there any opinions on the problem discussed here? Do we really 
need FairAffinityFunction if it can't guarantee cross-cache collocation?


-Val

On Thu, Mar 2, 2017 at 2:41 PM, vkulichenko 
> wrote:


Hi Alex,

I see your point. Can you please outline its advantages vs rendezvous
function?

In my view issue discussed here makes it pretty much useless in vast
majority of use cases, and very error-prone in all others.

-Val



--
View this message in context:

http://apache-ignite-users.70518.x6.nabble.com/Same-Affinity-For-Same-Key-On-All-Caches-tp10829p11006.html


Sent from the Apache Ignite Users mailing list archive at
Nabble.com .






--
Taras Ledkov
Mail-To: tled...@gridgain.com



Configuring Ignite Services to deploy on Server nodes only

2017-03-15 Thread GB
Hi,

The link below details how to programmatically deploy Ignite Services only
on server nodes in the cluster. Our Ignite cluster is in standalone mode.

  

Can someone point me to documentation or a way of achieving a similar thing
via XML configuration?
- Specifying a node filter via XML configuration:

public class ServiceFilter implements IgnitePredicate<ClusterNode> {
@Override public boolean apply(ClusterNode node) {
// The service will be deployed on non client nodes
// that have the attribute 'west.coast.node'.
return !node.isClient() &&
node.attributes().containsKey("west.coast.node");
  }
}

- Specifying Server Node only deployment via xml configuration :

// A service will be deployed on the server nodes only.
IgniteServices services = ignite.services(ignite.cluster().forServers());

// Deploying the service.
services.deploy(serviceCfg);




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Configuring-Ignite-Services-to-deploy-on-Server-nodes-only-tp11189.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Connect to SQL DB

2017-03-15 Thread Pavel Tupitsyn
You can create caches with the ignite.CreateCache or GetOrCreateCache methods.

On Wed, Mar 15, 2017 at 12:20 PM, kavitha  wrote:

> Thanks for your reply.
> I tried the below code, but I got an error.
>
>  IIgnite ignite = Ignition.Start();
> using (var ldr = ignite.GetDataStreamer<int, Account>("myStreamCache"))
> {
> for (int i = 0; i < EntryCount; i++)
> ldr.AddData(i, new Account(i, i));
> }
>
> Error Message:
> An unhandled exception of type 'System.InvalidOperationException' occurred
> in Apache.Ignite.Core.dll
>
> Additional information: Cache doesn't exist: myStreamCache
>
> Can you please explain how to correct it?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Connect-to-SQL-DB-tp11180p11187.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Connect to SQL DB

2017-03-15 Thread kavitha
Thanks for your reply.
I tried the below code, but I got an error.

 IIgnite ignite = Ignition.Start();
 using (var ldr = ignite.GetDataStreamer<int, Account>("myStreamCache"))
{
for (int i = 0; i < EntryCount; i++)
ldr.AddData(i, new Account(i, i));
}

Error Message:
An unhandled exception of type 'System.InvalidOperationException' occurred
in Apache.Ignite.Core.dll

Additional information: Cache doesn't exist: myStreamCache

Can you please explain how to correct it?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Connect-to-SQL-DB-tp11180p11187.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to form cluster

2017-03-15 Thread Alper Tekinalp
Hi.

You can checkout:
https://apacheignite.readme.io/docs/cluster-config#jdbc-based-discovery

On Wed, Mar 15, 2017 at 8:39 AM, kavitha  wrote:

> Hi All,
>
> I need to form a cluster for apache ignite database to connect it through
> ODBC connection.
>
> How can I proceed it?
>
> Note: I am using C# language
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/How-to-form-cluster-tp11177.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Alper Tekinalp

Software Developer
Evam Streaming Analytics

Atatürk Mah. Turgut Özal Bulv.
Gardenya 5 Plaza K:6 Ataşehir
34758 İSTANBUL

Tel:  +90 216 455 01 53 Fax: +90 216 455 01 54
www.evam.com.tr



Re: IGNITE-4106

2017-03-15 Thread Anil
Hi Val and Andrey,

I am seeing the exception with the following code as well. Not sure why it is
not reproduced at your end.

Ignite ignite = Ignition.start(new
File("/workspace/cache-manager/test-parallelism/src/main/resources/ignite.xml").toURI().toURL());
IgniteCache<String, Test> cache = ignite.cache("TEST_CACHE");
IgniteDataStreamer<String, Test> streamer = ignite.dataStreamer("TEST_CACHE");
for (int i = 1; i < 10; i++) {
    streamer.addData(String.valueOf(i), new Test("1", "1"));
}

Exception :

2017-03-15 12:46:43 ERROR DataStreamerImpl:495 - DataStreamer operation
failed.
class org.apache.ignite.IgniteCheckedException: Failed to finish operation
(too many remaps): 32
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$5.apply(DataStreamerImpl.java:863)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$5.apply(DataStreamerImpl.java:828)
at
org.apache.ignite.internal.util.future.GridFutureAdapter$ArrayListener.apply(GridFutureAdapter.java:456)
at
org.apache.ignite.internal.util.future.GridFutureAdapter$ArrayListener.apply(GridFutureAdapter.java:439)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:271)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListeners(GridFutureAdapter.java:259)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:389)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:355)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:343)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer$2.apply(DataStreamerImpl.java:1564)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer$2.apply(DataStreamerImpl.java:1554)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:271)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:228)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer.localUpdate(DataStreamerImpl.java:1554)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer.submit(DataStreamerImpl.java:1626)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer.update(DataStreamerImpl.java:1416)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.load0(DataStreamerImpl.java:932)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.access$1100(DataStreamerImpl.java:121)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$5$1.run(DataStreamerImpl.java:876)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$5$2.call(DataStreamerImpl.java:903)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$5$2.call(DataStreamerImpl.java:891)
at
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6618)
at
org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:925)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteCheckedException:
GridH2QueryContext is not initialized.
at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7239)
at
org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:933)
... 4 more
Caused by: java.lang.IllegalStateException: GridH2QueryContext is not
initialized.
at
org.apache.ignite.internal.processors.query.h2.opt.GridH2IndexBase.threadLocalSegment(GridH2IndexBase.java:197)
at
org.apache.ignite.internal.processors.query.h2.opt.GridH2TreeIndex.findOne(GridH2TreeIndex.java:290)
at
org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.onSwapUnswap(GridH2Table.java:232)
at
org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.onSwap(GridH2Table.java:189)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.onSwap(IgniteH2Indexing.java:737)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.onSwap(GridQueryProcessor.java:1183)
at
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.onSwap(GridCacheQueryManager.java:398)
at
org.apache.ignite.internal.processors.cache.GridCacheSwapManager.write(GridCacheSwapManager.java:1345)
at
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.swap(GridCacheMapEntry.java:681)
at
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.evictInternal(GridCacheMapEntry.java:4310)
at
org.apache.ignite.internal.processors.cache.GridCacheEvictionManager.evict0(GridCacheEvictionManager.java:709)

Re: IgniteThread ThreadLocal Memory Leak

2017-03-15 Thread Sergi Vladykin
If you know exactly what ThreadLocal causes a leak, you can try to just
clean it with reflection at the end of your Callable.
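A self-contained sketch of that reflection approach; the ThirdParty class and its BUF field are hypothetical stand-ins for the real library's class and field names:

```java
import java.lang.reflect.Field;

public class ThreadLocalCleanup {

    // Hypothetical stand-in for the 3rd-party class that privately caches
    // per-thread state; substitute the real library's class and field names.
    static class ThirdParty {
        private static final ThreadLocal<byte[]> BUF =
                ThreadLocal.withInitial(() -> new byte[1024]);

        static int useBuffer() {
            return BUF.get().length; // populates the thread-local
        }
    }

    /**
     * Reflectively fetches the library's private static ThreadLocal and
     * removes the current thread's value, so long-lived pooled threads
     * (such as Ignite's) do not retain it after the task finishes.
     */
    static void clearThreadLocal(Class<?> owner, String fieldName) {
        try {
            Field f = owner.getDeclaredField(fieldName);
            f.setAccessible(true);
            ThreadLocal<?> tl = (ThreadLocal<?>) f.get(null); // static field
            tl.remove(); // affects only the calling thread
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        ThirdParty.useBuffer();                    // thread-local now holds a buffer
        clearThreadLocal(ThirdParty.class, "BUF"); // call at the end of your Callable
        System.out.println("thread-local cleared");
    }
}
```

Note this only clears the value for the thread that calls it, so it has to run on the same pooled thread that executed the library code, i.e. at the end of the Callable as suggested.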

Sergi

2017-03-15 10:10 GMT+03:00 Isaeed Mohanna :

> Hi
> As I have mentioned before, it's not me that is using the ThreadLocal, but a
> 3rd-party library that I am using. I am trying to get them to
> fix it, but until they do so I wanted to verify whether I can remedy the
> situation without having to wait for them.
> I'll push them to change their implementation.
> Thanks a lot.
>
> On Tue, Mar 14, 2017 at 8:59 PM, vkulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
>> Isaeed,
>>
>> There is no such way.
>>
>> If you're using thread local, you should properly clean it when the value
>> is
>> not needed anymore. Another way is to use something else instead.
>>
>> Why are you using thread locals in compute in the first place? What is the
>> use case for this?
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/IgniteThread-ThreadLocal-Memory-Leak-tp11168p11171.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: IgniteThread ThreadLocal Memory Leak

2017-03-15 Thread Isaeed Mohanna
Hi
As I have mentioned before, it's not me that is using the ThreadLocal, but a
3rd-party library that I am using. I am trying to get them to
fix it, but until they do so I wanted to verify whether I can remedy the
situation without having to wait for them.
I'll push them to change their implementation.
Thanks a lot.

On Tue, Mar 14, 2017 at 8:59 PM, vkulichenko 
wrote:

> Isaeed,
>
> There is no such way.
>
> If you're using thread local, you should properly clean it when the value
> is
> not needed anymore. Another way is to use something else instead.
>
> Why are you using thread locals in compute in the first place? What is the
> use case for this?
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/IgniteThread-ThreadLocal-Memory-Leak-tp11168p11171.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Connect to SQL DB

2017-03-15 Thread Pavel Tupitsyn
Hi,

If you need to pull data just once, Data Streamer is the best way to do
this:
https://apacheignite-net.readme.io/docs/data-streamers

I've replied to you in a different thread regarding other approaches:
http://apache-ignite-users.70518.x6.nabble.com/Read-Data-using-apache-ignite-td11150.html

On Wed, Mar 15, 2017 at 9:50 AM, kavitha  wrote:

> Hi,
>
> I need to connect to SQL DB and pull data to cache. (I am using .Net
> platform.) Can anyone share an idea about this?
>
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Connect-to-SQL-DB-tp11180.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Read Data using apache ignite

2017-03-15 Thread Pavel Tupitsyn
You have two options:

1) Implement cache store that works with your SQL database
Docs: https://apacheignite-net.readme.io/docs/persistent-store
Example: https://ptupitsyn.github.io/Entity-Framework-Cache-Store/

2) Use Entity Framework to work with your DB and configure Ignite 2nd level
cache:
https://apacheignite-net.readme.io/docs/entity-framework-second-level-cache

On Wed, Mar 15, 2017 at 7:59 AM, kavitha  wrote:

> Yes, I am using C#.
> How can I use Ignite as a caching layer?
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Read-Data-using-apache-ignite-tp11150p11174.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Connect to SQL DB

2017-03-15 Thread kavitha
Hi,

I need to connect to SQL DB and pull data to cache. (I am using the .NET
platform.) Can anyone share an idea about this?






--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Connect-to-SQL-DB-tp11180.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Issue in starting IgniteSinkConnector with Distributed Worker mode

2017-03-15 Thread Roman Shtykh
In distributed mode, the connector is not created and no connection to your
Ignite cluster is made if you pass the connector configuration on the command
line. In distributed mode you have to submit the configuration through the
REST API, like below:

$ curl -X POST -H "Content-Type: application/json" --data '{"name": "string-ignite-connector", "config": {"connector.class": "org.apache.ignite.stream.kafka.connect.IgniteSinkConnector", "tasks.max": "2", "topics": "test", "cacheName": "cache1", "igniteCfg": "path_to_/ignite.xml"}}' http://localhost:8083/connectors

Hope this helps.
Roman

On Tuesday, March 14, 2017 4:45 PM, dkarachentsev wrote:

Hi,

Could you please attach logs and thread dumps (in case of hang) if problem
is still relevant?

-Dmitry.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Issue-in-starting-IgniteSinkConnector-with-Distributed-Worker-mode-tp10647p11161.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: IGNITE-4106

2017-03-15 Thread Anil
Sure, Val. Let me try again.

Thanks.

On 14 March 2017 at 20:28, vkulichenko 
wrote:

> Hi Anil,
>
> I tried to run your project and also didn't get the exception. Please
> provide exact steps how to run it in order to reproduce the behavior.
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/IGNITE-4106-tp11073p11169.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>