Re: Ignite startup is very slow

2018-12-03 Thread Evgenii Zhuravlev
Hi,

You have a pretty small amount of heap and 30 caches. Each cache adds some
overhead to node startup, because the node must read the state of every
partition of every cache (and each partition is a separate file). You can
reduce this overhead by placing the caches in the same cache group:
https://apacheignite.readme.io/docs/cache-groups. Also, since you have only 3
nodes, it makes sense to reduce the number of partitions for each cache by
configuring the affinity function.
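For reference, both suggestions can be combined in one cache configuration. A minimal Spring XML sketch — the cache name, group name, and partition count below are placeholders to tune, not values from this thread:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <!-- Placeholder cache name -->
    <property name="name" value="cache1"/>
    <!-- Caches sharing a group also share partition files, reducing startup I/O -->
    <property name="groupName" value="myGroup"/>
    <!-- Fewer partitions than the default 1024 may suit a small 3-node cluster -->
    <property name="affinity">
        <bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
            <property name="partitions" value="128"/>
        </bean>
    </property>
</bean>
```

The same `groupName` would be set on each of the 30 caches that should share the group.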

Evgenii

Mon, Dec 3, 2018 at 20:39, kvenkatramtreddy :

> hi Team,
>
> any update or clue on above issue.
>
> Thanks & Regards,
> Venkat
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite benchmarking with YCSB

2018-12-03 Thread summasumma
Hi Ilya,

In the earlier runs writeSynchronizationMode=PRIMARY_SYNC was not set.
After making this correction I was able to get this performance.

Even now CPU does not go above 50% on the 3-node Ignite cluster.
Network bandwidth: 700 Mbps RX and 350 Mbps TX.
Not sure if there is any way now to improve beyond 55k inserts/sec with YCSB.
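For reference, the write synchronization mode referred to above is set per cache. A minimal Spring XML sketch (the cache name is a placeholder):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <!-- Placeholder cache name -->
    <property name="name" value="ycsbCache"/>
    <!-- PRIMARY_SYNC: the client waits only for the primary node, not the backups -->
    <property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
</bean>
```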

Will soon share results for other operations like update/read.

Thanks,
...summa



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite startup is very slow

2018-12-03 Thread kvenkatramtreddy
hi Team,

Any update or clue on the above issue?

Thanks & Regards,
Venkat




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite REST API using HTTPS

2018-12-03 Thread Shesha Nanda
Hi,

I am trying to add SSL security to Ignite so that the Ignite REST API can be
accessed over HTTPS. I have followed the steps below:

I enabled SSL by adding the configuration below.











  





  



I am able to see [authentication=off, tls/ssl=on]  in the logs.


I get the error below when I try to access the REST API using an HTTPS request.

curl https://localhost:8443/ignite?cmd=version
curl: (35) SSL received a record that exceeded the maximum permissible
length.

If I try with http it works:
curl http://localhost:8080/ignite?cmd=version


Please let me know the configuration needed to enable SSL and access the REST
API over HTTPS.
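The curl error "SSL received a record that exceeded the maximum permissible length" usually means the port answered in plain HTTP rather than TLS — i.e. the custom Jetty configuration was not actually picked up. One thing worth checking is that the node's connector configuration points at the Jetty XML explicitly; a sketch (the file path is a placeholder):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="connectorConfiguration">
        <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
            <!-- Placeholder path to the Jetty XML that defines the SSL connector on 8443 -->
            <property name="jettyPath" value="config/rest-jetty-ssl.xml"/>
        </bean>
    </property>
</bean>
```

Without `jettyPath`, the REST HTTP server starts with its default (plain HTTP) Jetty setup.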

-- 
*Regards*
*Sheshananda Naidu,*
*+91-9035063060*

[Attached Jetty XML configuration; the element tags were stripped by the mail
archive. It references http://www.eclipse.org/jetty/configure.dtd and defines
a thread pool (min 20, max 200 threads) and an HTTPS connector on port 8443.]


Re: GridProcessorAdapter fails to start due to failure to initialise WAL segment on Ignite startup

2018-12-03 Thread Raymond Wilson
Hi Ilya,

I checked the state of the WAL file in question (0008.wal) - it
is a zero-byte WAL file. The only other WAL file present in the same
location is .wal (65536 KB in size), which seems odd, as WAL
files 0001.wal through 0007.wal are not present.

Thanks,
Raymond.

On Tue, Dec 4, 2018 at 6:03 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> It seems that WAL file got truncated or something like that.
>
> Can you post this file to some file storage so that we could look?
>
> You can also try to change this node's WAL mode to LOG_ONLY and try to
> start it again (after backing up data, of course). Checks are less strict
> in this case.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Fri, Nov 30, 2018 at 22:33, Raymond Wilson :
>
>> Hi Ilya,
>>
>> We don’t change the WAL segment size from the default values.
>>
>> The only activity that occurred was stopping a node, making a minor
>> change (not related to persistence) and rerunning the node.
>>
>> Raymond.
>>
>> Sent from my iPhone
>>
>> On 1/12/2018, at 2:52 AM, Ilya Kasnacheev 
>> wrote:
>>
>> Hello!
>>
>> "WAL segment size change is not supported"
>>
>> Is there a chance that you have changed WAL segment size setting between
>> launches?
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Thu, Nov 29, 2018 at 02:39, Raymond Wilson :
>>
>>> I'm using Ignite 2.6 with the C# client.
>>>
>>> I have a running cluster that I was debugging. All requests were read
>>> only (there were no state mutating operations running in the cluster.
>>>
>>> I terminated the one server node in the grid (running in the debugger)
>>> to make a small code change and re-run it (I do this frequently). The node
>>> may have been stopped for longer than the partitioning timeout.
>>>
>>> On re-running the server node it failed to start. On re-running the
>>> complete cluster it still failed to start, and all other nodes report
>>> failure to connect to an inactive grid.
>>>
>>> Looking at the log for the server node that is failing I get the
>>> following log showing an exception while initializing a WAL segment. This
>>> failure seems permanent and is unexpected, as we are using the strict WAL
>>> atomicity mode (WalMode.Fsync) for all persisted regions. Is this a
>>> recoverable error, or does this imply data loss? [NB: This is a dev system
>>> so no prod data is affected]]
>>>
>>>
>>> 2018-11-29 12:26:09,933 [1] INFO  ImmutableCacheComputeServer >>>
>>> __  >>>   /  _/ ___/ |/ /  _/_  __/ __/>>>
>>> _/ // (7 7// /  / / / _/  >>> /___/\___/_/|_/___/ /_/ /___/
>>>  >>>   >>> ver. 2.6.0#20180710-sha1:669feacc  >>> 2018 Copyright(C) Apache
>>> Software Foundation  >>>   >>> Ignite documentation:
>>> http://ignite.apache.org
>>> 2018-11-29 12:26:09,933 [1] INFO  ImmutableCacheComputeServer Config
>>> URL: n/a
>>> 2018-11-29 12:26:09,948 [1] INFO  ImmutableCacheComputeServer
>>> IgniteConfiguration [igniteInstanceName=TRex-Immutable, pubPoolSize=50,
>>> svcPoolSize=12, callbackPoolSize=12, stripedPoolSize=12, sysPoolSize=12,
>>> mgmtPoolSize=4, igfsPoolSize=12, dataStreamerPoolSize=12,
>>> utilityCachePoolSize=12, utilityCacheKeepAliveTime=6, p2pPoolSize=2,
>>> qryPoolSize=12, igniteHome=null,
>>> igniteWorkDir=C:\Users\rwilson\AppData\Local\Temp\TRexIgniteData\Immutable,
>>> mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@6e4784bc,
>>> nodeId=8f32d0a6-539c-40dd-bc42-d044f28bac73,
>>> marsh=org.apache.ignite.internal.binary.BinaryMarshaller@e4487af,
>>> marshLocJobs=false, daemon=false, p2pEnabled=false, netTimeout=5000,
>>> sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=1,
>>> metricsUpdateFreq=2000, metricsExpTime=9223372036854775807,
>>> discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000,
>>> ackTimeout=5000, marsh=null, reconCnt=10, reconDelay=2000,
>>> maxAckTimeout=60, forceSrvMode=false, clientReconnectDisabled=false,
>>> internalLsnr=null], segPlc=STOP, segResolveAttempts=2,
>>> waitForSegOnStart=true, allResolversPassReq=true, segChkFreq=1,
>>> commSpi=TcpCommunicationSpi [connectGate=null, connPlc=null,
>>> enableForcibleNodeKill=false, enableTroubleshootingLog=false,
>>> srvLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2@10d68fcd,
>>> locAddr=127.0.0.1, locHost=null, locPort=47100, locPortRange=100,
>>> shmemPort=-1, directBuf=true, directSndBuf=false, idleConnTimeout=3,
>>> connTimeout=5000, maxConnTimeout=60, reconCnt=10, sockSndBuf=32768,
>>> sockRcvBuf=32768, msgQueueLimit=1024, slowClientQueueLimit=0, nioSrvr=null,
>>> shmemSrv=null, usePairedConnections=false, connectionsPerNode=1,
>>> tcpNoDelay=true, filterReachableAddresses=false, ackSndThreshold=16,
>>> unackedMsgsBufSize=0, sockWriteTimeout=2000, lsnr=null, boundTcpPort=-1,
>>> boundTcpShmemPort=-1, selectorsCnt=4, selectorSpins=0, addrRslvr=null,
>>> ctxInitLatch=java.util.concurrent.CountDownLatch@117e949d[Count = 1],
>>> stopping=false,
>>> 

RE: When will Apache Ignite support Java 11?

2018-12-03 Thread Stanislav Lukyanov
The support for Java 9 *should* mean support for Java 11; the compatibility gap
between the two is not big.

Moreover, I would (and am going to) push for almost completely skipping the
testing on Java 9 – it is
at end-of-life already, so providing support for it is kind of pointless. Java
11 is what should be supported by Ignite 2.8.

That said, I honestly don’t see everyone jumping off the Java 8 train any time
soon.
The gap between 8 and 9+ (although not too big in reality) still makes people
stay on 8,
and Oracle’s competitors are ready to offer alternative support.
So I’d say that Java 8 is still going to be the main target for at least
Ignite 2.8.

But this is all just speculation for now, as no plans have been set in stone yet.
Stay tuned at d...@ignite.apache.org.

Stan

From: Loredana Radulescu Ivanoff
Sent: November 26, 2018 21:23
To: user@ignite.apache.org
Subject: Re: When will Apache Ignite support Java 11?

Hello,

The current plan is that Oracle will stop updates for Java 8 commercial users 
after January 2019, and Java 11 is the next LTS release, so is there a plan to 
have Ignite working with Java 11 by then?

Thank you.

On Thu, Nov 22, 2018 at 10:49 PM Petr Ivanov  wrote:
Hi!


Full Java 9+ support is planned for 2.8 at least.
Currently it works more or less on Java 9; Java 10/11 operation is not guaranteed.

> On 22 Nov 2018, at 21:22, monstereo  wrote:
> 
> Is there any plan to support Java 11 for Apache Ignite?
> 
> If the next version of the Apache Ignite (2.7) will support Java 11, when it
> will be released?
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: How to filter ip interfaces in TcpDiscoveryJdbcIpFinder

2018-12-03 Thread Stanislav Lukyanov
Currently you can only use IGNITE_LOCAL_HOST or TcpDiscoverySpi.localAddress 
for this.
You can automate setting these addresses via an external script, like
MY_IP=`ifconfig | grep `
java  -DIGNITE_LOCAL_HOST=$MY_IP
or put it into the Ignite config like
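(The config fragment in the original message was stripped by the archive; the sketch below assumes the `localHost` property, which is the config-file equivalent of IGNITE_LOCAL_HOST — the address is a placeholder:)

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Placeholder address: bind Ignite's network components to this interface -->
    <property name="localHost" value="10.0.0.5"/>
</bean>
```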


Stan

From: Luckyman
Sent: November 3, 2018 23:36
To: user@ignite.apache.org
Subject: Re: How to filter ip interfaces in TcpDiscoveryJdbcIpFinder

Thanks for the reply, but it's not usable in our situation.

We have an Exalogic cluster with a lot of different networks.
Each virtual node has its own address. As I understand it, I would have to set
the IP address on each node.
But I want to set the target network address and network mask once, and have
each Ignite node use the IP address from this network, ignoring the other
networks on the host.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Create index got stuck and freeze whole cluster.

2018-12-03 Thread Stanislav Lukyanov
Hi,

The only thing I can say is that your troubles seem to have started way before
that.

I see a bunch of “Found long running cache future” messages repeating, and then
an exchange for
stopping the SQL_PUBLIC_USERLEVEL cache that never completes.
We would need logs going further (at least minutes) into the past to see what
went wrong.

Stan

From: Ray
Sent: October 30, 2018 9:21
To: user@ignite.apache.org
Subject: Create index got stuck and freeze whole cluster.

I'm using a five-node Ignite 2.6 cluster.
When I try to create an index on a table with 10 million records using the SQL
"create index on table(a,b,c,d)", the whole cluster freezes and prints the
following log for 40 minutes.

[2018-10-30T02:48:44,086][WARN
][exchange-worker-#162][GridDhtPartitionsExchangeFuture] Unable to await
partitions release latch within timeout: ServerLatch [permits=4,
pendingAcks=[20aa5929-3f26-4923-87a3-27b4f6d4f744,
ec5be25e-6601-468c-9f0e-7ab7c8caa9e9, 45819b05-a338-4bc4-b104-f0c7567fd49d,
cbb80db7-b342-4b97-ba61-97d57c194a1a], super=CompletableLatch [id=exchange,
topVer=AffinityTopologyVersion [topVer=202, minorTopVer=1]]]

I noticed that one of the servers (log in server3.zip) is stuck in the
checkpoint process, and this server acts as the coordinator in PME.
In the log I see that only 856610 pages need to be flushed to disk, but the
checkpoint takes 32 minutes to finish,
while another node takes 7 minutes to finish writing 919060 pages to disk.
Also, disk usage on the slow-checkpoint server is not at 100%.

Here are the whole log files for the 5 servers:
server1.zip, server2.zip, server3.zip, server4.zip, server5.zip




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Avoiding Docker Bridge network when using S3 discovery

2018-12-03 Thread Stanislav Lukyanov
Hi,

Have you been able to solve this?
I think specifying TcpDiscoverySpi.localAddress should work.
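A sketch of that setting, using the host address from the Visor output below as a placeholder — with `localAddress` set, discovery registers only this address in the S3 IP finder instead of every interface (including the Docker bridge):

```xml
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <!-- Placeholder: the host-network address that should be published to S3 -->
    <property name="localAddress" value="10.32.97.32"/>
</bean>
```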

Stan

From: Dave Harvey
Sent: October 17, 2018 20:10
To: user@ignite.apache.org
Subject: Avoiding Docker Bridge network when using S3 discovery

When we use S3 discovery with Ignite containers running under ECS using host
networking, the S3 bucket ends up with 172.17.0.1#47500 along with the other
server addresses. Then on cluster startup we must wait for the network
timeout. Is there a way to avoid having this address pushed to the S3
bucket?
Visor shows:
| Address (0)                 | 10.32.97.32                              |
| Address (1)                 | 172.17.0.1                               |
| Address (2)                 | 127.0.0.1        









RE: Ignite can't activate

2018-12-03 Thread Stanislav Lukyanov
Hi,

Reproduced that and filed https://issues.apache.org/jira/browse/IGNITE-10516.
Thanks for reporting.

Stan

From: yangjiajun
Sent: November 29, 2018 10:52
To: user@ignite.apache.org
Subject: Re: Ignite can't activate

Hello.
Here is a reproducer for my case:
1. Start a node with persistence enabled.
2. Create a table without a cache group and create an index on it.
3. Create another table and assign a cache group to it. Use the same name to
create an index on this table.
4. Stop the node.
5. Restart the node and activate. Then you will see the exception.


yangjiajun wrote
> Hello.
> My Ignite can't activate after a restart. I only have one node, which is ver
> 2.6. The exception that prevents activation is:
> 
> [14:18:05,802][SEVERE][exchange-worker-#110][GridCachePartitionExchangeManager]
> Failed to wait for completion of partition map exchange (preloading will
> not
> start): GridDhtPartitionsExchangeFuture
> [firstDiscoEvt=DiscoveryCustomEvent
> [customMsg=null, affTopVer=AffinityTopologyVersion [topVer=1,
> minorTopVer=1], super=DiscoveryEvent [evtNode=TcpDiscoveryNode
> [id=5cb09b16-6b97-4e01-a0c5-0d035293ea2e, addrs=[10.0.209.119],
> sockAddrs=[dggphicprb01094/10.0.209.119:9001], discPort=9001, order=1,
> intOrder=1, lastExchangeTime=1543471362559, loc=true,
> ver=2.6.0#20180710-sha1:669feacc, isClient=false], topVer=1,
> nodeId8=5cb09b16, msg=null, type=DISCOVERY_CUSTOM_EVT,
> tstamp=1543471364603]], crd=TcpDiscoveryNode
> [id=5cb09b16-6b97-4e01-a0c5-0d035293ea2e, addrs=[10.0.209.119],
> sockAddrs=[dggphicprb01094/10.0.209.119:9001], discPort=9001, order=1,
> intOrder=1, lastExchangeTime=1543471362559, loc=true,
> ver=2.6.0#20180710-sha1:669feacc, isClient=false],
> exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
> [topVer=1,
> minorTopVer=1], discoEvt=DiscoveryCustomEvent [customMsg=null,
> affTopVer=AffinityTopologyVersion [topVer=1, minorTopVer=1],
> super=DiscoveryEvent [evtNode=TcpDiscoveryNode
> [id=5cb09b16-6b97-4e01-a0c5-0d035293ea2e, addrs=[10.0.209.119],
> sockAddrs=[dggphicprb01094/10.0.209.119:9001], discPort=9001, order=1,
> intOrder=1, lastExchangeTime=1543471362559, loc=true,
> ver=2.6.0#20180710-sha1:669feacc, isClient=false], topVer=1,
> nodeId8=5cb09b16, msg=null, type=DISCOVERY_CUSTOM_EVT,
> tstamp=1543471364603]], nodeId=5cb09b16, evt=DISCOVERY_CUSTOM_EVT],
> added=true, initFut=GridFutureAdapter [ignoreInterrupts=false, state=DONE,
> res=false, hash=1392437407], init=false, lastVer=GridCacheVersion
> [topVer=0,
> order=1543471361380, nodeOrder=0], partReleaseFut=PartitionReleaseFuture
> [topVer=AffinityTopologyVersion [topVer=1, minorTopVer=1],
> futures=[ExplicitLockReleaseFuture [topVer=AffinityTopologyVersion
> [topVer=1, minorTopVer=1], futures=[]], AtomicUpdateReleaseFuture
> [topVer=AffinityTopologyVersion [topVer=1, minorTopVer=1], futures=[]],
> DataStreamerReleaseFuture [topVer=AffinityTopologyVersion [topVer=1,
> minorTopVer=1], futures=[]], LocalTxReleaseFuture
> [topVer=AffinityTopologyVersion [topVer=1, minorTopVer=1], futures=[]],
> AllTxReleaseFuture [topVer=AffinityTopologyVersion [topVer=1,
> minorTopVer=1], futures=[RemoteTxReleaseFuture
> [topVer=AffinityTopologyVersion [topVer=1, minorTopVer=1],
> futures=[]],
> exchActions=null, affChangeMsg=null, initTs=1543471364624,
> centralizedAff=false, forceAffReassignment=true, changeGlobalStateE=class
> o.a.i.IgniteException: Duplicate index name
> [cache=SQL_PUBLIC_TABLE_TEMP_TEST1_R_1_X, schemaName=PUBLIC,
> idxName=TABLE_TEMP_99_R_1_X_ROMA3C_BSP_BATCH_ID,
> existingTable=TABLE_TEMP_99_R_1_X, table=TABLE_TEMP_TEST1_R_1_X],
> done=true, state=DONE, evtLatch=0, remaining=[], super=GridFutureAdapter
> [ignoreInterrupts=false, state=DONE, res=class
> o.a.i.IgniteCheckedException:
> Cluster state change failed., hash=721409668]]
> class org.apache.ignite.IgniteCheckedException: Cluster state change
> failed.
>   at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.finishExchangeOnCoordinator(GridDhtPartitionsExchangeFuture.java:2697)
>   at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:2467)
>   at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1149)
>   at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:712)
>   at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2419)
>   at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2299)
>   at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>

Re: Ignite Performance Issues when seeding data from Spark

2018-12-03 Thread kellan
I'm using 2.6 on AWS. Like I mentioned, my Ignite cluster is running on i3
instances which have local storage, so burst shouldn't be a problem.

The trend I've noticed is that my writes-per-second increases, while the
size of each write decreases, and the number of PUT operations per second
and CPU usage also decreases.

I don't know if this is applicable here, but while running Kafka I ran into
problems like this when I didn't have enough memory dedicated to page cache,
but I don't know if this should be a consideration with Ignite. I'm
following Ignite's performance guidelines and dedicating less than 70% of my
available memory to Ignite Durable Memory and the Java Heap.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: [RESOLVED] JDBC Streaming

2018-12-03 Thread Ilya Kasnacheev
Hello!

Yes, thin client has different approach to failover. However, it should not
freeze. Please collect thread dumps so that we can see why it froze. The
expectation here is that connection will be eventually dropped.

I have found the reason for your thick-driver streaming troubles. It turns out
you have to specify cache=<the cache you are streaming to>.
You were supplying DATASTORE but were actually trying to stream to the
TRANSACTIONS table, which isn't in it.
The correct line will be:

if (stream)
    url = "jdbc:ignite:cfg://cache=SQL_PUBLIC_TRANSACTIONS:streaming=true:streamingFlushFrequency=1000@file://"
        + config;

I have created a ticket https://issues.apache.org/jira/browse/IGNITE-10515

Can't promise it will get traction, since everyone seems to be switching to
the thin driver.

Regards,

-- 

Ilya Kasnacheev


Mon, Dec 3, 2018 at 20:10, joseheitor :

> Hooray!!! - It works.
>
> Thanks, Ilya.
>
> Please continue your investigation of the JDBC Client Driver
> (thick-client),
> and let me know what you find...?
>
> What follows should perhaps be posted separately...but here's something I
> noticed, which I don't fully understand or know how to deal with:
>
> While bulk-loading data previously without streaming, via the THICK Client,
> I had a node go down. Data continued to load though, as my other node
> remained active. When I noticed this, I simply started up the failed node
> again. Data-loading paused for a while (presumably while the nodes
> synced???), and then continued until completed.
>
> Now, while bulk-loading data with streaming, on the THIN Client, I also had
> a node go down. Data did not however continue to load. The application did
> not see a dropped connection - it simply froze. When I noticed this, I
> started up the failed node again, as before. Data-loading did not however
> resume loading as before. (I had to kill the application and restart
> it...).
>
> Should Ignite not throw an exception, to alert the application that the
> database connection dropped? Or is there support for a connection timeout on
> an SQL execution statement? Or (even better) should the driver not have
> automatically detected the connection loss, re-established a new connection
> to the other node (as the THICK client presumably does...), and continued?
>
> Thanks,
> Jose
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


RE: Fair queue polling policy?

2018-12-03 Thread Stanislav Lukyanov
I think what you’re talking about isn’t fairness, it’s round-robin-ness.
You can’t distribute a single piece of work among multiple nodes fairly – one
gets it and the others don’t.
Yes, it could use a different node each time, but I don’t really see a use
case for that.

The queue itself isn’t a load balancer implementation; it doesn’t even need to
care about fairness or anything.
All it needs is to implement the queue interface efficiently.

I think I can explain the fact that one node gets the data most of the time.
It’s probably because the first value (when the queue is empty) always has
the same key – and always ends up on the same node.
So the behavior is not that the same client gets the value – it’s that
the same server always stores the first (second, third) value.
When all the servers try to get and remove the same value, the one closest to
it (i.e. the one storing it) wins.
We probably could randomize the distribution – but it’s going to cost us in
terms of code complexity and, maybe, performance.

Overall, I don’t think it’s a bug in Ignite, and we would need a solid 
justification to change the behavior.

Do you have a use case when a random distribution is important?

Stan

From: Peter
Sent: November 30, 2018 17:30
To: user@ignite.apache.org
Subject: Re: Fair queue polling policy?

Hello,

I have found this discussion about the same topic, and indeed the example there
works and the queues poll fairly.

And when I tweak the sleep after put and take, so that the queue stays mostly
empty all the time, I can reproduce the unfair behaviour!
https://github.com/karussell/igniteexample/blob/master/src/main/java/test/IgniteTest.java

I'm not sure if this is a bug, as it should be the responsibility of the client
to avoid overloading itself. E.g. in my case this happened because I allowed
too many threads for the tasks on the polling side, leading to too-frequent
polling, which leads to this mostly empty queue.

But IMO it should be clarified in the documentation, as one expects round-robin
behaviour even for empty queues. And e.g. in low-latency environments
and/or environments with many clients this could cause problems. I have created
an issue about it here: https://issues.apache.org/jira/browse/IGNITE-10496

Kind Regards
Peter

On 30 Nov 2018 at 01:44, Peter wrote:
Hello,
My aim is a queue for load balancing that is described in the documentation: 
create an "ideally balanced system where every node only takes the number of 
jobs it can process, and not more."
I'm using JDK 8 and Ignite 2.6.0. I have successfully set up a two-node Ignite
cluster where node1 has the same CPU count (8) and the same RAM as node2 but a
slightly slower CPU (virtual vs. dedicated). I created one unbounded queue in
this system (no collection configuration, and no config for the cluster except
TcpDiscoveryVmIpFinder).
I call queue.put on both nodes at an equal rate and have one non-Ignite thread
per node that does "queue.take()", and what I expect is that both machines go
equally fast towards 100% CPU usage as both machines poll at their best
frequency. But what I observe is that the slower node (node1) gets approx. 5
times more items via queue.take than node2. This leads to 10% CPU usage on
node2 and 100% CPU usage on node1, and I never had a case where it was equal.
What could be the reason? Is there a fair polling configuration or some
anti-affinity? Or is it required to do queue.take() inside a Runnable submitted
via ignite.compute().something?
I also played with CollectionConfiguration.setCacheMode, but the problem
persists. Any pointers are appreciated.
Kind Regards
Peter




RE: Ignite not scaling as expected !!! (Thread dump provided)

2018-12-03 Thread Stanislav Lukyanov
How many cores does each node have?
Which thread counts do you increase? The ones doing the get() calls?

A thread dump from the client isn’t that interesting. Better to look at what’s
going on on the servers.
You need to monitor your resources – CPU, Network IO, Disk IO. You may hit the 
limit on all of them.
You need to monitor your GC – perhaps that’s what’s taking all of the resources.
You need to look into your cache store – read-through is enabled, so you might 
be hitting performance issues 
with your cache store implementation and/or backing database.
You may need to have a performance profile – take JFR from all nodes.

Performance is a very complex topic and Ignite is a very complex system.
One can’t say for sure what’s going on just by looking at the config.
You have to look into everything at once to be able to make sense of it,
and it’s hardly something that can be done on a user mailing list.

Stan

From: the_palakkaran
Sent: 2 декабря 2018 г. 17:01
To: user@ignite.apache.org
Subject: Re: Ignite not scaling as expected !!! (Thread dump provided)

Hi Amir,

I have two server nodes and 1 client node. I have two caches, one that holds
entire accounts from DB and another counter cache that is used for counter
operations. The server nodes are deployed on two different nodes and
clustered together. A client that is also on one of the machines of the two
server nodes deployed tries to access data from the caches.


Ignite Configuration is provided below:
[Ignite XML configuration attached; the element tags were stripped by the mail
archive. The only surviving values are the discovery addresses
ip1:47500..47509 and ip2:47500..47509.]



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



[RESOLVED] JDBC Streaming

2018-12-03 Thread joseheitor
Hooray!!! - It works.

Thanks, Ilya.

Please continue your investigation of the JDBC Client Driver (thick-client),
and let me know what you find...?

What follows should perhaps be posted separately...but here's something I
noticed, which I don't fully understand or know how to deal with:

While bulk-loading data previously without streaming, via the THICK Client,
I had a node go down. Data continued to load though, as my other node
remained active. When I noticed this, I simply started up the failed node
again. Data-loading paused for a while (presumably while the nodes
synced???), and then continued until completed.

Now, while bulk-loading data with streaming, on the THIN Client, I also had
a node go down. Data did not however continue to load. The application did
not see a dropped connection - it simply froze. When I noticed this, I
started up the failed node again, as before. Data-loading did not however
resume loading as before. (I had to kill the application and restart it...).

Should Ignite not throw an exception, to alert the application that the
database connection dropped? Or is there support for a connection timeout on
an SQL execution statement? Or (even better) should the driver not have
automatically detected the connection loss, re-established a new connection
to the other node (as the THICK client presumably does...), and continued?

Thanks,
Jose



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: GridProcessorAdapter fails to start due to failure to initialise WAL segment on Ignite startup

2018-12-03 Thread Ilya Kasnacheev
Hello!

It seems that WAL file got truncated or something like that.

Can you post this file to some file storage so that we could look?

You can also try to change this node's WAL mode to LOG_ONLY and try to
start it again (after backing up data, of course). Checks are less strict
in this case.
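For reference, the WAL mode change suggested here goes into the data storage configuration. A reduced Spring XML sketch, not the poster's actual config:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <!-- LOG_ONLY: WAL records go to the OS page cache and are fsync'd
                 on checkpoint, relaxing FSYNC's per-write durability guarantee -->
            <property name="walMode" value="LOG_ONLY"/>
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="persistenceEnabled" value="true"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```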

Regards,
-- 
Ilya Kasnacheev


Fri, Nov 30, 2018 at 22:33, Raymond Wilson :

> Hi Ilya,
>
> We don’t change the WAL segment size from the default values.
>
> The only activity that occurred was stopping a node, making a minor change
> (not related to persistence) and rerunning the node.
>
> Raymond.
>
> Sent from my iPhone
>
> On 1/12/2018, at 2:52 AM, Ilya Kasnacheev 
> wrote:
>
> Hello!
>
> "WAL segment size change is not supported"
>
> Is there a chance that you have changed WAL segment size setting between
> launches?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, Nov 29, 2018 at 02:39, Raymond Wilson :
>
>> I'm using Ignite 2.6 with the C# client.
>>
>> I have a running cluster that I was debugging. All requests were read
>> only (there were no state mutating operations running in the cluster).
>>
>> I terminated the one server node in the grid (running in the debugger) to
>> make a small code change and re-run it (I do this frequently). The node may
>> have been stopped for longer than the partitioning timeout.
>>
>> On re-running the server node it failed to start. On re-running the
>> complete cluster it still failed to start, and all other nodes report
>> failure to connect to an inactive grid.
>>
>> Looking at the log for the server node that is failing I get the
>> following log showing an exception while initializing a WAL segment. This
>> failure seems permanent and is unexpected, as we are using the strict WAL
>> atomicity mode (WalMode.Fsync) for all persisted regions. Is this a
>> recoverable error, or does this imply data loss? [NB: This is a dev system
>> so no prod data is affected]]
>>
>>
>> 2018-11-29 12:26:09,933 [1] INFO  ImmutableCacheComputeServer >>>
>> __  >>>   /  _/ ___/ |/ /  _/_  __/ __/>>>
>> _/ // (7 7// /  / / / _/  >>> /___/\___/_/|_/___/ /_/ /___/
>>  >>>   >>> ver. 2.6.0#20180710-sha1:669feacc  >>> 2018 Copyright(C) Apache
>> Software Foundation  >>>   >>> Ignite documentation:
>> http://ignite.apache.org
>> 2018-11-29 12:26:09,933 [1] INFO  ImmutableCacheComputeServer Config URL:
>> n/a
>> 2018-11-29 12:26:09,948 [1] INFO  ImmutableCacheComputeServer
>> IgniteConfiguration [igniteInstanceName=TRex-Immutable, pubPoolSize=50,
>> svcPoolSize=12, callbackPoolSize=12, stripedPoolSize=12, sysPoolSize=12,
>> mgmtPoolSize=4, igfsPoolSize=12, dataStreamerPoolSize=12,
>> utilityCachePoolSize=12, utilityCacheKeepAliveTime=6, p2pPoolSize=2,
>> qryPoolSize=12, igniteHome=null,
>> igniteWorkDir=C:\Users\rwilson\AppData\Local\Temp\TRexIgniteData\Immutable,
>> mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@6e4784bc,
>> nodeId=8f32d0a6-539c-40dd-bc42-d044f28bac73,
>> marsh=org.apache.ignite.internal.binary.BinaryMarshaller@e4487af,
>> marshLocJobs=false, daemon=false, p2pEnabled=false, netTimeout=5000,
>> sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=1,
>> metricsUpdateFreq=2000, metricsExpTime=9223372036854775807,
>> discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000,
>> ackTimeout=5000, marsh=null, reconCnt=10, reconDelay=2000,
>> maxAckTimeout=60, forceSrvMode=false, clientReconnectDisabled=false,
>> internalLsnr=null], segPlc=STOP, segResolveAttempts=2,
>> waitForSegOnStart=true, allResolversPassReq=true, segChkFreq=1,
>> commSpi=TcpCommunicationSpi [connectGate=null, connPlc=null,
>> enableForcibleNodeKill=false, enableTroubleshootingLog=false,
>> srvLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2@10d68fcd,
>> locAddr=127.0.0.1, locHost=null, locPort=47100, locPortRange=100,
>> shmemPort=-1, directBuf=true, directSndBuf=false, idleConnTimeout=3,
>> connTimeout=5000, maxConnTimeout=60, reconCnt=10, sockSndBuf=32768,
>> sockRcvBuf=32768, msgQueueLimit=1024, slowClientQueueLimit=0, nioSrvr=null,
>> shmemSrv=null, usePairedConnections=false, connectionsPerNode=1,
>> tcpNoDelay=true, filterReachableAddresses=false, ackSndThreshold=16,
>> unackedMsgsBufSize=0, sockWriteTimeout=2000, lsnr=null, boundTcpPort=-1,
>> boundTcpShmemPort=-1, selectorsCnt=4, selectorSpins=0, addrRslvr=null,
>> ctxInitLatch=java.util.concurrent.CountDownLatch@117e949d[Count = 1],
>> stopping=false,
>> metricsLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationMetricsListener@6db9f5a4],
>> evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@5f8edcc5,
>> colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [lsnr=null],
>> indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@7a675056,
>> addrRslvr=null, clientMode=false, rebalanceThreadPoolSize=1,
>> txCfg=org.apache.ignite.configuration.TransactionConfiguration@d21a74c,
>> cacheSanityCheckEnabled=true, discoStartupDelay=6, 

Re: Failed to read data from remote connection

2018-12-03 Thread wangsan
Do you shut down the C++ node properly prior to killing the process?
Yes, the C++ node was killed with kill -9, not SIGHUP. That was the wrong way
to stop it, and I will use a plain kill instead.

Do these exceptions impact the cluster's functionality anyhow?
I am not sure about the exceptions. My cluster crashes with an OOM error
("could not create native thread"), yet ulimit and the config show that the
max user processes limit is very large (64k). There are about 20 nodes in the
Ignite cluster, and I don't know why the cluster uses so many threads. Could
these exceptions be triggering too many sockets (and threads)?

Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
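On the shutdown side: the reason kill -9 leaves sockets and threads dangling on peer nodes is that SIGKILL bypasses the JVM's shutdown hooks, while a plain kill (SIGTERM) lets them run. A pure-Java sketch of the mechanism (no Ignite APIs; the cleanup body is only illustrative):

```java
public class ShutdownDemo {
    // Registers a cleanup hook and returns it; the hook runs on normal
    // exit and on SIGTERM (plain kill), but never on SIGKILL (kill -9).
    static Thread registerCleanupHook() {
        Thread hook = new Thread(() ->
            System.out.println("cleanup: closing node resources"));
        Runtime.getRuntime().addShutdownHook(hook);
        return hook;
    }

    public static void main(String[] args) {
        registerCleanupHook();
        System.out.println("node running; stop me with a plain kill, not kill -9");
    }
}
```

A node stopped with kill -9 never gets a chance to run this kind of cleanup, which is why its peers only notice the failure through broken connections.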


Re: Ignite distribution configuration.

2018-12-03 Thread Ilya Kasnacheev
Hello!

It is a separate deliverable and not a part of Apache Ignite distribution.

You can find it in Nightly Builds, for example:
https://ci.ignite.apache.org/viewLog.html?buildId=lastSuccessful=Releases_NightlyRelease_RunApacheIgniteNightlyRelease=artifacts=1

I'm not sure if you can get it as a part of an AI release. You can certainly
build it from source.

Regards,
-- 
Ilya Kasnacheev


пн, 3 дек. 2018 г. в 16:52, Viraj Rathod :

> I’m unable to find ignite-web-agent.
> I have installed Apache Ignite in the /usr/share/apache-ignite directory.
> Using RHEL 7 with CentOS as the OS.
>
> On Mon, 3 Dec 2018 at 1:14 PM, Mikael  wrote:
>
>> Hi!
>>
>> I don't think there is an easy answer, the configuration depends on so
>> many things, try the default configuration and see how it goes and work
>> your way from there, the documentation is great and explains well what all
>> the options do, so it's easy to play around with.
>>
>> Just configure a cache with 1 backup and see how it goes.
>>
>> There are a lot of things to consider, can you use affinity keys, will
>> all data fit in ram, do you need transactions, are you going to query data
>> with SQL, how are you going to access the data, and so on. I think it is
>> more important you get a data model you can work with, you can always play
>> around with the cache configuration later.
>>
>> Mikael
>>
>>
>> Den 2018-12-03 kl. 07:59, skrev Viraj Rathod:
>>
>> I’m a new user of apache ignite.
>>
>> I want to know if my data is supposed to be partitioned amongst 3 nodes
>> and data of each node is supposed to have a backup on the other two nodes.
>> The data is JSON key value pairs in 150 columns and 1 million rows.
>> How will the configuration file look like?
>> Can anyone explain so that I can configure it for my project.
>>
>> Thanks.
>>
>> --
>> Regards,
>> Viraj Rathod
>>
>> --
> Regards,
> Viraj Rathod
>
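For what it's worth, the partitioned-cache-with-backups setup discussed above comes down to a single cache bean in the Spring XML configuration. A minimal sketch (the cache name is hypothetical; backups=2 gives a copy on each of the other two nodes, while Mikael's suggestion of a single backup would be backups=1):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <!-- hypothetical cache name for the JSON key-value data -->
    <property name="name" value="jsonCache"/>
    <property name="cacheMode" value="PARTITIONED"/>
    <!-- 2 backups = every partition also lives on the other two nodes -->
    <property name="backups" value="2"/>
</bean>
```

This bean would go into the cacheConfiguration list of the node's IgniteConfiguration.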


Re: Invalid property 'statisticsEnabled' is not writable

2018-12-03 Thread Ilya Kasnacheev
Hello!

I'm pretty sure that I was able to run SQL queries on Apache Ignite.

If there's some specific configuration (such as Spark), please prepare
detailed steps to reproduce, reproducer project or film a video :)

Regards,
-- 
Ilya Kasnacheev


пн, 3 дек. 2018 г. в 18:05, ApacheUser :

> Hi Ilya,
>
> I am able to start the cluster and run SQL queries, but not able to write;
> this error is thrown while loading data. Please try to write some data into
> any dummy table with a couple of fields. I am using an affinity key and
> backups=1.
>
> Thanks
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Query regarding Ignite unit tests

2018-12-03 Thread Ilya Kasnacheev
Hello!

There is no scenario where you would run all tests during mvn clean install.

Normally, tests are run on a per-test-suite basis with specific suite
settings. It would take more than a day to run all the tests, and they might
interfere with each other.
The proper way of running tests is ci.ignite.apache.org, which should be
available to all contributors (I think). You should ask to become a contributor
on the developers list.

So the realistic way to build Ignite is -DskipTests.

OR, -DfailIfNoTests=false -Dtest=SpecificClassTest.

Regards,
-- 
Ilya Kasnacheev


пн, 3 дек. 2018 г. в 14:54, Namrata Bhave :

> Hi,
>
>
>
> I have recently started working with Apache Ignite. Build on x86 Ubuntu
> 16.04 is complete. However, while running tests using `mvn test` command,
> the execution gets stuck while running `ignite-core` module.
>
> Hence started running tests on individual modules, where similar behavior
> was seen in ignite-indexing, ignite-clients and ignite-ml modules as well.
>
> I have tried setting JAVA heap settings, running on a system with 32GB
> RAM.
>
> Is there a way to avoid this and get complete test results? Also, is there
> any CI or such environment where I can get results of unit tests?
>
>
>
> Would appreciate any help provided.
>
>
>
> Thanks and Regards,
>
> Namrata
>


Re: Invalid property 'statisticsEnabled' is not writable

2018-12-03 Thread ApacheUser
Hi Ilya,

I am able to start the cluster and run SQL queries, but not able to write;
this error is thrown while loading data. Please try to write some data into
any dummy table with a couple of fields. I am using an affinity key and
backups=1.

Thanks




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Client stucks and doesn't connect

2018-12-03 Thread Dmitry Lazurkin
OK. I have found steps to reproduce.

To reproduce it we need Ignite with a slow disk. Here is how to emulate a
slow hard disk:

Add to the partitions loop in
GridCacheDatabaseSharedManager#restorePartitionStates:
//...
for (int i = 0; i < grp.affinity().partitions(); i++) {
    try {
        log.error("Wait");
        Thread.sleep(1);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
//...

- Now we can start the server
- Then start the client
- Wait for the message "Join cluster while cluster state transition is in
progress, waiting when transition finish."
- Kill the server
- Wait for the repeated exception java.net.ConnectException: Connection
refused (Connection refused)
- Start the server again; I can reproduce this with a 100% chance on my computer

Thank you.

On 11/29/18 14:03, Stanislav Lukyanov wrote:
>
> Hi,
>
>  
>
> The interesting (and disappointing) part is the NPE:
>
> java.lang.NullPointerException: null
>
>     at
>
> org.apache.ignite.spi.discovery.tcp.ClientImpl.sendJoinRequest(ClientImpl.java:666)
>
>     at
>
> org.apache.ignite.spi.discovery.tcp.ClientImpl.joinTopology(ClientImpl.java:546)
>
>     at
>
> org.apache.ignite.spi.discovery.tcp.ClientImpl.access$900(ClientImpl.java:128)
>
>     at
>
> org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.tryJoin(ClientImpl.java:1846)
>
>     at
>
> org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.body(ClientImpl.java:1757)
>
>  
>
> Which version do you use?
>
> Is this reproducible? Every time?
>
>  
>
> Thanks,
>
> Stan
>
>  
>
>  
>
> *From: *Dmitry Lazurkin 
> *Sent: *20 ноября 2018 г. 15:44
> *To: *user@ignite.apache.org 
> *Subject: *Client stucks and doesn't connect
>
>  
>
> Hello.
>
>  
>
> Ignite client stops connecting to server after exception:
>
>  
>
> 2018-11-19 16:00:49,257 [tcp-client-disco-reconnector-#5] DEBUG
>
> o.a.i.s.d.tcp.TcpDiscoverySpi - Resolved addresses from IP finder:
>
> [/10.48.14.1:47500]
>
> 2018-11-19 16:00:49,257 [tcp-client-disco-reconnector-#5] DEBUG
>
> o.a.i.s.d.tcp.TcpDiscoverySpi - Send join request
>
> [addr=/10.48.14.1:47500, reconnect=true,
>
> locNodeId=cd323c53-d1de-4608-8eec-e373b1f68b71]
>
> 2018-11-19 16:00:49,258 [tcp-client-disco-reconnector-#5] ERROR
>
> o.a.i.s.d.tcp.TcpDiscoverySpi - Exception on joining: Connection refused
>
> (Connection refused)
>
> java.net.ConnectException: Connection refused (Connection refused)
>
>     at java.net.PlainSocketImpl.socketConnect(Native Method)
>
>     at
>
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
>
>     at
>
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
>
>     at
>
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
>
>     at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>
>     at java.net.Socket.connect(Socket.java:589)
>
>     at
>
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.openSocket(TcpDiscoverySpi.java:1450)
>
>     at
>
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.openSocket(TcpDiscoverySpi.java:1413)
>
>     at
>
> org.apache.ignite.spi.discovery.tcp.ClientImpl.sendJoinRequest(ClientImpl.java:637)
>
>     at
>
> org.apache.ignite.spi.discovery.tcp.ClientImpl.joinTopology(ClientImpl.java:546)
>
>     at
>
> org.apache.ignite.spi.discovery.tcp.ClientImpl.access$900(ClientImpl.java:128)
>
>     at
>
> org.apache.ignite.spi.discovery.tcp.ClientImpl$Reconnector.body(ClientImpl.java:1408)
>
>     at
>
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
>
> 2018-11-19 16:00:49,258 [tcp-client-disco-reconnector-#5] DEBUG
>
> o.a.i.s.d.tcp.TcpDiscoverySpi - Failed to join to address
>
> [addr=/10.48.14.1:47500, recon=true, errs=[java.net.ConnectException:
>
> Connection refused (Connection refused)]]
>
> 2018-11-19 16:00:51,344 [tcp-client-disco-reconnector-#5] DEBUG
>
> o.a.i.s.d.tcp.TcpDiscoverySpi - Resolved addresses from IP finder:
>
> [/10.48.14.1:47500]
>
> 2018-11-19 16:00:51,344 [tcp-client-disco-reconnector-#5] DEBUG
>
> o.a.i.s.d.tcp.TcpDiscoverySpi - Send join request
>
> [addr=/10.48.14.1:47500, reconnect=true,
>
> locNodeId=cd323c53-d1de-4608-8eec-e373b1f68b71]
>
> 2018-11-19 16:00:51,364 [tcp-client-disco-reconnector-#5] DEBUG
>
> o.a.i.s.d.tcp.TcpDiscoverySpi - Message has been sent to address
>
> [msg=TcpDiscoveryClientReconnectMessage
>
> [routerNodeId=9c68d70f-883e-4a21-938b-05f3f6f98d20,
>
> lastMsgId=c42b1bc2761-bbc61e78-97e8-49cd-844e-2dc3e8aacc68,
>
> super=TcpDiscoveryAbstractMessage [sndNodeId=null,
>
> id=48722bc2761-cd323c53-d1de-4608-8eec-e373b1f68b71,
>
> verifierNodeId=null, topVer=0, pendingIdx=0, failedNodes=null,
>
> isClient=true]], addr=/10.48.14.1:47500,
>
> rmtNodeId=9c68d70f-883e-4a21-938b-05f3f6f98d20]
>
> 2018-11-19 16:00:51,365 

Re: JDBC Streaming

2018-12-03 Thread Ilya Kasnacheev
Hello!

Indeed, you will need to reconnect if the node that you're connected to fails.

It has always supported cluster-wide data retrieval.

Regards,
-- 
Ilya Kasnacheev


пн, 3 дек. 2018 г. в 15:25, joseheitor :

> Hi Ilya,
>
> Thanks for the response.
>
> My understanding was that Thin JDBC driver was only able to connect to a
> single node (not a cluster), so that if that node failed - it was not able
> to continue operating on the cluster... It would also only return data
> residing on that node (not records residing on other cluster nodes)???
>
> But I see that the docs now mention failover support with a list of hosts
> and aggregation of data from multiple nodes on the 'connected' node (was
> this added recently?).
>
> I will apply the necessary changes and test your suggestion.
>
> Thanks,
> Jose
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
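The failover support mentioned above is expressed entirely in the connection URL: the thin driver walks a comma-separated endpoint list and reconnects to the next address when the current node fails. A small helper sketching the URL shape (hostnames are illustrative; 10800 is the driver's default client port):

```java
import java.util.List;

public class ThinUrl {
    // Builds a failover URL like jdbc:ignite:thin://host1:10800,host2:10800.
    public static String build(List<String> hosts, int port) {
        StringBuilder sb = new StringBuilder("jdbc:ignite:thin://");
        for (int i = 0; i < hosts.size(); i++) {
            if (i > 0)
                sb.append(',');
            sb.append(hosts.get(i)).append(':').append(port);
        }
        return sb.toString();
    }
}
```

The resulting URL is passed to DriverManager.getConnection just like a single-host URL; only the availability behavior changes.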


Re: Ignite distribution configuration.

2018-12-03 Thread Viraj Rathod
I’m unable to find ignite-web-agent.
I have installed Apache Ignite in the /usr/share/apache-ignite directory.
Using RHEL 7 with CentOS as the OS.

On Mon, 3 Dec 2018 at 1:14 PM, Mikael  wrote:

> Hi!
>
> I don't think there is an easy answer, the configuration depends on so
> many things, try the default configuration and see how it goes and work
> your way from there, the documentation is great and explains well what all
> the options do, so it's easy to play around with.
>
> Just configure a cache with 1 backup and see how it goes.
>
> There are a lot of things to consider, can you use affinity keys, will all
> data fit in ram, do you need transactions, are you going to query data with
> SQL, how are you going to access the data, and so on. I think it is more
> important you get a data model you can work with, you can always play
> around with the cache configuration later.
>
> Mikael
>
>
> Den 2018-12-03 kl. 07:59, skrev Viraj Rathod:
>
> I’m a new user of apache ignite.
>
> I want to know if my data is supposed to be partitioned amongst 3 nodes
> and data of each node is supposed to have a backup on the other two nodes.
> The data is JSON key value pairs in 150 columns and 1 million rows.
> How will the configuration file look like?
> Can anyone explain so that I can configure it for my project.
>
> Thanks.
>
> --
> Regards,
> Viraj Rathod
>
> --
Regards,
Viraj Rathod


RE: Ignite cache.getAll takes a long time

2018-12-03 Thread Stanislav Lukyanov
I guess it could be caused by https://issues.apache.org/jira/browse/IGNITE-5003 
which is mentioned in that thread.
Also, make sure that your cache store code doesn’t cause you troubles – that 
you don’t open a new connection every time, 
don’t have unnecessary blocking, etc.

Stan

From: Justin Ji
Sent: 3 декабря 2018 г. 13:10
To: user@ignite.apache.org
Subject: RE: Ignite cache.getAll takes a long time

Stan - 

Thanks for your reply!

Yes, the getAll and putAll(async) calls are executed in parallel (a number of
operations run at the same time).

But I think it may be caused by the write-behind: when I disabled
write-behind the timeout disappeared, and when I enabled write-behind the
timeout appeared.

It is a little similar to
http://apache-ignite-users.70518.x6.nabble.com/write-behind-performance-impacting-main-thread-Write-behind-buffer-is-never-full-td17940.html



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
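If write-behind is indeed the culprit, the relevant knobs live on CacheConfiguration; raising the flush thread count and batch size can keep the buffer from back-pressuring getAll/putAll callers. A sketch with illustrative values only, not recommendations (the storeFactory bean is assumed to exist elsewhere in the config):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/> <!-- hypothetical cache name -->
    <property name="writeThrough" value="true"/>
    <property name="writeBehindEnabled" value="true"/>
    <!-- flush when 10240 entries are buffered, or every 5 seconds -->
    <property name="writeBehindFlushSize" value="10240"/>
    <property name="writeBehindFlushFrequency" value="5000"/>
    <!-- more flusher threads / larger batches drain the buffer faster -->
    <property name="writeBehindFlushThreadCount" value="4"/>
    <property name="writeBehindBatchSize" value="512"/>
    <property name="cacheStoreFactory" ref="storeFactory"/> <!-- assumed bean -->
</bean>
```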



RE: Query regarding Ignite unit tests

2018-12-03 Thread Stanislav Lukyanov
Hi,

This is better to be asked on the dev-list – added that to the To, and Bcc’ed 
user-list.

I actually don’t think you can run tests for a specific module – you can run 
either a single test, a single test suite, or all of them.
I would usually either run a single test from IDEA or run all tests via 
TeamCity https://ci.ignite.apache.org.

Igniters, please help Namrata here with the best practices of working with 
tests.

Stan

From: Namrata Bhave
Sent: 3 декабря 2018 г. 14:54
To: user@ignite.apache.org
Subject: Query regarding Ignite unit tests

Hi,

I have recently started working with Apache Ignite. Build on x86 Ubuntu 16.04 
is complete. However, while running tests using `mvn test` command, the 
execution gets stuck while running `ignite-core` module.
Hence started running tests on individual modules, where similar behavior was 
seen in ignite-indexing, ignite-clients and ignite-ml modules as well.
I have tried setting JAVA heap settings, running on a system with 32GB RAM. 
Is there a way to avoid this and get complete test results? Also, is there any 
CI or such environment where I can get results of unit tests?

Would appreciate any help provided.

Thanks and Regards,
Namrata



Re: JDBC Streaming

2018-12-03 Thread joseheitor
Hi Ilya,

Thanks for the response.

My understanding was that Thin JDBC driver was only able to connect to a
single node (not a cluster), so that if that node failed - it was not able
to continue operating on the cluster... It would also only return data
residing on that node (not records residing on other cluster nodes)???

But I see that the docs now mention failover support with a list of hosts
and aggregation of data from multiple nodes on the 'connected' node (was
this added recently?).

I will apply the necessary changes and test your suggestion.

Thanks,
Jose



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: JDBC Streaming

2018-12-03 Thread Ilya Kasnacheev
Hello!

Apache Ignite SQL should be accessed by Ignite JDBC Thin driver. This is
the preferred way.

JDBC Thin driver also has streaming mode in the form of SET STREAMING
ON/OFF.

Please see attached file where I have introduced "thin" mode.

As for client mode streaming not working, I will look into it further.

Regards,
-- 
Ilya Kasnacheev
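For reference, the SET STREAMING toggle mentioned above is issued as ordinary SQL over the thin driver; a sketch against the transactions table from the attached example:

```sql
SET STREAMING ON;
-- INSERTs are now buffered client-side and flushed to the cluster in batches
INSERT INTO transactions (pk, id, k, v) VALUES (1, 1, 'k1', 'v1');
INSERT INTO transactions (pk, id, k, v) VALUES (2, 1, 'k2', 'v2');
SET STREAMING OFF; -- turning streaming off flushes any rows still buffered
```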


пн, 3 дек. 2018 г. в 01:13, joseheitor :

> Hi Ilya,
>
> Any update on your investigation of this issue...?
>
> Your comments that 'streaming mode' in Client driver and Client driver
> itself are near-deprecated - are very surprising and concerning!
>
> 1. Are you saying that Apache Ignite SQL will cease to be accessible via
> standard JDBC?
>
> 2. If 'streaming mode' is to be deprecated - will there be an alternative
> method of inserting high-throughput data via SQL?
>
> Thanks,
> Jose
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
package com.example.ignite;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Date;
import java.util.Random;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class IgniteApplication
{
public static String config;
public static boolean stream = false;
public static boolean thin = false;

public static void main(String[] args)
{
SpringApplication.run(IgniteApplication.class, args);
try {
if (args.length < 1) {
System.out.println("Usage: java -jar ignite-1.jar <config> [stream|thin]");
System.exit(1);
} else {
config = args[0];
if (args.length > 1)
if (args[1].equalsIgnoreCase("stream"))
stream = true;
else if (args[1].equalsIgnoreCase("thin"))
thin = true;
}
initSQLDatabase();
importSQLData();
querySQLDatabase();
}
catch (Exception e) {

e.printStackTrace(System.err);
}
}

private static void initSQLDatabase() throws SQLException {

Connection dbConnection = null;
Statement statement = null;
String url = "jdbc:ignite:cfg://cache=DATASTORE:distributedJoins=true:transactionsAllowed=true:multipleStatementsAllowed=true@file://" + config;
String cmd = "DROP TABLE IF EXISTS public.transactions;" +
 "DROP INDEX IF EXISTS transactions_id_k_v;" +
 "DROP INDEX IF EXISTS transactions_k_id_v;" +
 "DROP INDEX IF EXISTS transactions_k_v_id;" +
 "DROP INDEX IF EXISTS transactions_pk;" +
 "CREATE TABLE public.transactions (pk INT, id INT, k VARCHAR, v VARCHAR, PRIMARY KEY (pk, id))" +
 " WITH \"TEMPLATE=PARTITIONED, BACKUPS=1, ATOMICITY=TRANSACTIONAL, WRITE_SYNCHRONIZATION_MODE=FULL_SYNC, AFFINITY_KEY=id\";" +
 "CREATE INDEX transactions_id_k_v ON public.transactions (id, k, v);" +
 "CREATE INDEX transactions_k_id_v ON public.transactions (k, id, v);" +
 "CREATE INDEX transactions_k_v_id ON public.transactions (k, v, id);" +
 "CREATE INDEX transactions_pk ON public.transactions (pk);";
try {
Class.forName("org.apache.ignite.IgniteJdbcDriver");
dbConnection = DriverManager.getConnection(url, "", "");
dbConnection.setSchema("PUBLIC");

statement = dbConnection.createStatement();
System.out.print("Initializing database...");
statement.execute(cmd);
System.out.print("Done\n");
}
catch (Exception e) {

e.printStackTrace(System.err);

} finally {

if (statement != null) {
statement.close();
}

if (dbConnection != null) {
dbConnection.close();
}
}
}

private static void importSQLData() throws SQLException {

Connection dbConnection = null;
PreparedStatement statement = null;
Random randomGenerator = new Random();

String url;
if (stream)
url = "jdbc:ignite:cfg://cache=DATASTORE:streaming=true:streamingFlushFrequency=1000@file://" + config;
else if (thin)
url = "jdbc:ignite:thin://192.168.1.230,192.168.1.220";
else
url = "jdbc:ignite:cfg://cache=DATASTORE@file://" + config;

String insertTableSQL = "INSERT INTO transactions (pk, id, k, v) VALUES (?, ?, ?, ?)";

String initials[] = {"A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P", "Q", "R", 
   

Query regarding Ignite unit tests

2018-12-03 Thread Namrata Bhave
Hi,

I have recently started working with Apache Ignite. Build on x86 Ubuntu 16.04 
is complete. However, while running tests using `mvn test` command, the 
execution gets stuck while running `ignite-core` module.
Hence started running tests on individual modules, where similar behavior was 
seen in ignite-indexing, ignite-clients and ignite-ml modules as well.
I have tried setting JAVA heap settings, running on a system with 32GB RAM.
Is there a way to avoid this and get complete test results? Also, is there any 
CI or such environment where I can get results of unit tests?

Would appreciate any help provided.

Thanks and Regards,
Namrata


Re: Ignite benchmarking with YCSB

2018-12-03 Thread Ilya Kasnacheev
Hello!

Looks like your numbers have improved. What change would lead to such
improvement? What's the current CPU utilization?

Regards,
-- 
Ilya Kasnacheev


пн, 3 дек. 2018 г. в 12:48, summasumma :

> Hi Ilya,
>
> Thanks for all the inputs.
>
> Latest update is:
>
> Attempted Insert: 120k
> 2 YCSB each with 80 threads giving: 54.5k / 54.5k = total 109k
>
> - Failed to reach the 120k target
> - This is with PRIMARY_SYNC enabled in Ignite, but without a thread pool of
> 64 or paired connections.
>
> Increasing the system/public pool configuration or enabling the
> paired-connections option didn't really help improve performance from here.
>
> Thanks,
> ...summa
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
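For reference, the PRIMARY_SYNC mode mentioned in this thread is a per-cache setting: writes complete as soon as the primary node acknowledges them, without waiting for backups. A sketch (the cache name follows YCSB's default usertable, which is an assumption here):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="usertable"/> <!-- assumed YCSB table name -->
    <property name="cacheMode" value="PARTITIONED"/>
    <property name="backups" value="1"/>
    <!-- don't wait for backup acknowledgements on each write -->
    <property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
</bean>
```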


RE: Ignite cache.getAll takes a long time

2018-12-03 Thread Justin Ji
Stan - 

Thanks for your reply!

Yes, the getAll and putAll(async) calls are executed in parallel (a number of
operations run at the same time).

But I think it may be caused by the write-behind: when I disabled
write-behind the timeout disappeared, and when I enabled write-behind the
timeout appeared.

It is a little similar to
http://apache-ignite-users.70518.x6.nabble.com/write-behind-performance-impacting-main-thread-Write-behind-buffer-is-never-full-td17940.html



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite benchmarking with YCSB

2018-12-03 Thread summasumma
Hi Ilya,

Thanks for all the inputs.

Latest update is:

Attempted Insert: 120k  
2 YCSB each with 80 threads giving: 54.5k / 54.5k = total 109k

- Failed to reach the 120k target
- This is with PRIMARY_SYNC enabled in Ignite, but without a thread pool of 64
or paired connections.

Increasing the system/public pool configuration or enabling the
paired-connections option didn't really help improve performance from here.

Thanks,
...summa




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/