Re: SqlFieldsQuery Cannot create inner bean 'org.apache.ignite.cache.query.SqlFieldsQuery

2018-07-02 Thread ApacheUser
Thanks Ilya,
Could you share any guidelines to control GROUP BY? For example, dedicated
client nodes for connectivity from Tableau and SQL?
Thanks
Bhaskar



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Toad Connection to Ignite

2018-08-02 Thread ApacheUser
Thanks Alex,

We have a large pool of developers who use TOAD; I just thought of making TOAD
connect to Ignite to give them a similar experience. We are using DBeaver right
now.

Thanks





Re: SqlFieldsQuery Cannot create inner bean 'org.apache.ignite.cache.query.SqlFieldsQuery

2018-08-01 Thread ApacheUser
Hello Ilya Kasnacheev,

I set IGNITE_SQL_FORCE_LAZY_RESULT_SET=true just below
ENABLE_ASSERTIONS="0" in ignite.sh, but I am still getting an out-of-memory
error when I do select * from table. Is this the right place to set this
parameter? Please confirm.
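A minimal sketch of the placement in question, assuming the stock ignite.sh layout: the property must be exported as an environment variable before the Java process is launched, not merely assigned, or the forked JVM will never see it.

```shell
# Sketch (surrounding ignite.sh lines assumed): place the export near the
# other defaults, before the JVM is started, so the forked Java process
# inherits it. A plain assignment without 'export' stays local to the shell.
ENABLE_ASSERTIONS="0"
export IGNITE_SQL_FORCE_LAZY_RESULT_SET=true
```

If the setting appears to have no effect, a missing `export` (or editing a copy of the script that the service does not actually run) is a common cause.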


Thanks





Re: SqlFieldsQuery Cannot create inner bean 'org.apache.ignite.cache.query.SqlFieldsQuery

2018-08-01 Thread ApacheUser
Great,
it works perfectly, thank you.
Bhaskar





Toad Connection to Ignite

2018-08-02 Thread ApacheUser
Hello Ignite Team,

Is it possible to connect to Ignite from the TOAD tool for SQL querying?


Thanks





Spark to Ignite Data load, Ignite node crashing

2018-08-08 Thread ApacheUser
Hello Ignite team,

I am writing data from a Spark DataFrame to Ignite, and frequently one node goes
down. I don't see any error in the log file; the trace is below. If I restart
the node, it doesn't join the cluster unless I stop the Spark job that is
writing data to the Ignite cluster.

I have 4 nodes with 4 CPU/16 GB RAM/200 GB disk space, and persistence is
enabled. What could be the reason?

[00:44:33]__  
[00:44:33]   /  _/ ___/ |/ /  _/_  __/ __/
[00:44:33]  _/ // (7 7// /  / / / _/
[00:44:33] /___/\___/_/|_/___/ /_/ /___/
[00:44:33]
[00:44:33] ver. 2.6.0#20180710-sha1:669feacc
[00:44:33] 2018 Copyright(C) Apache Software Foundation
[00:44:33]
[00:44:33] Ignite documentation: http://ignite.apache.org
[00:44:33]
[00:44:33] Quiet mode.
[00:44:33]   ^-- Logging to file
'/data/ignitedata/apache-ignite-fabric-2.6.0-bin/work/log/ignite-d90d68c6.0.log'
[00:44:33]   ^-- Logging by 'JavaLogger [quiet=true, config=null]'
[00:44:33]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[00:44:33]
[00:44:33] OS: Linux 3.10.0-862.3.2.el7.x86_64 amd64
[00:44:33] VM information: Java(TM) SE Runtime Environment 1.8.0_171-b11
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.171-b11
[00:44:33] Configured plugins:
[00:44:33]   ^-- None
[00:44:33]
[00:44:33] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
[tryStop=false, timeout=0]]
[00:44:33] Message queue limit is set to 0 which may lead to potential OOMEs
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
message queues growth on sender and receiver sides.
[00:44:33] Security status [authentication=off, tls/ssl=off]
[00:44:35] Nodes started on local machine require more than 20% of physical
RAM what can lead to significant slowdown due to swapping (please decrease
JVM heap size, data region size or checkpoint buffer size)
[required=13412MB, available=15885MB]
[00:44:35] Performance suggestions for grid  (fix if possible)
[00:44:35] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[00:44:35]   ^-- Set max direct memory size if getting 'OOME: Direct buffer
memory' (add '-XX:MaxDirectMemorySize=[g|G|m|M|k|K]' to JVM options)
[00:44:35]   ^-- Disable processing of calls to System.gc() (add
'-XX:+DisableExplicitGC' to JVM options)
[00:44:35]   ^-- Speed up flushing of dirty pages by OS (alter
vm.dirty_expire_centisecs parameter by setting to 500)
[00:44:35]   ^-- Reduce pages swapping ratio (set vm.swappiness=10)
[00:44:35] Refer to this page for more performance suggestions:
https://apacheignite.readme.io/docs/jvm-and-system-tuning
[00:44:35]
[00:44:35] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[00:44:35]
[00:44:35] Ignite node started OK (id=d90d68c6)
[00:44:35] >>> Ignite cluster is not active (limited functionality
available). Use control.(sh|bat) script or IgniteCluster interface to
activate.
[00:44:35] Topology snapshot [ver=4, servers=4, clients=0, CPUs=16,
offheap=40.0GB, heap=4.0GB]
[00:44:35]   ^-- Node [id=D90D68C6-C725-43F8-BC32-71363FE3E86F,
clusterState=INACTIVE]
[00:44:35]   ^-- Baseline [id=0, size=4, online=3, offline=1]
[00:44:35]   ^-- 1 nodes left for auto-activation
[a99529d8-e483-44b3-96eb-a5a773e380e3]
[00:44:35] Data Regions Configured:
[00:44:35]   ^-- default [initSize=256.0 MiB, maxSize=10.0 GiB,
persistenceEnabled=true]
[00:48:20] Topology snapshot [ver=5, servers=4, clients=1, CPUs=16,
offheap=50.0GB, heap=8.4GB]
[00:48:20]   ^-- Node [id=D90D68C6-C725-43F8-BC32-71363FE3E86F,
clusterState=ACTIVE]
[00:48:20]   ^-- Baseline [id=0, size=4, online=3, offline=1]
[00:48:20] Data Regions Configured:
[00:48:20]   ^-- default [initSize=256.0 MiB, maxSize=10.0 GiB,
persistenceEnabled=true]
[00:48:37] Topology snapshot [ver=6, servers=4, clients=2, CPUs=16,
offheap=60.0GB, heap=12.0GB]
[00:48:37]   ^-- Node [id=D90D68C6-C725-43F8-BC32-71363FE3E86F,
clusterState=ACTIVE]
[00:48:37]   ^-- Baseline [id=0, size=4, online=3, offline=1]
[00:48:37] Data Regions Configured:
[00:48:37]   ^-- default [initSize=256.0 MiB, maxSize=10.0 GiB,
persistenceEnabled=true]
[00:48:37] Topology snapshot [ver=7, servers=4, clients=3, CPUs=16,
offheap=70.0GB, heap=16.0GB]
[00:48:37]   ^-- Node [id=D90D68C6-C725-43F8-BC32-71363FE3E86F,
clusterState=ACTIVE]
[00:48:37]   ^-- Baseline [id=0, size=4, online=3, offline=1]
[00:48:37] Data Regions Configured:
[00:48:37]   ^-- default [initSize=256.0 MiB, maxSize=10.0 GiB,
persistenceEnabled=true]
[00:48:38] Topology snapshot [ver=8, servers=4, clients=4, CPUs=16,
offheap=80.0GB, heap=19.0GB]
[00:48:38]   ^-- Node [id=D90D68C6-C725-43F8-BC32-71363FE3E86F,
clusterState=ACTIVE]
[00:48:38]   ^-- Baseline [id=0, size=4, online=3, offline=1]
[00:48:38] Data Regions Configured:
[00:48:38]   ^-- default [initSize=256.0 MiB, maxSize=10.0 GiB,
persistenceEnabled=true]
[00:48:40] Topology snapshot [ver=9, servers=4, clients=5, CPUs=16,
offheap=90.0GB, heap=23.0GB]
[00:48:40]   ^-- Node [id=D90D68C6-C725-43F8-BC32-71363FE3E86F,

Apache Ignite SQL- read Only User Creation

2018-08-16 Thread ApacheUser
Hello Ignite team,

We are using Apache Ignite as a SQL reporting cluster, with Ignite persistence
and authenticationEnabled. We need a read-only user role apart from the ignite
user. Is there any such role, or a way to create a user with read-only privileges?

Thanks






Re: SqlFieldsQuery Cannot create inner bean 'org.apache.ignite.cache.query.SqlFieldsQuery

2018-08-06 Thread ApacheUser
Hello Ilya Kasnacheev,

I am using Ignite 2.6.

SQL through Tableau over an ODBC connection is getting an OOME when I select *
from a table without a limit.
I have set export IGNITE_SQL_FORCE_LAZY_RESULT_SET=true in ignite.sh.

What else should I configure to avoid the OOME when using ODBC?

thanks





Re: Running Spark Job in Background

2018-08-13 Thread ApacheUser
Thanks Denis,

When I submit a Spark job that connects to the Ignite cluster, it creates an
Ignite client. The Ignite client gets disconnected when I close the window
(Linux shell).
Regular Spark jobs run fine with & or nohup, but in the Spark/Ignite case the
clients are getting killed and the Spark job no longer runs.

Is there any way I can run the Spark/Ignite job continuously even after closing
the Linux shell?

thanks





Re: Spark to Ignite Data load, Ignite node crashing

2018-08-09 Thread ApacheUser
Attaching the logs of the two nodes that crash every time. I have 4 nodes, but
the other two nodes have very rarely crashed. All nodes (VMs) are 4 CPU/16 GB
RAM/200 GB HDD (shared storage).

node 3:
[16:35:21,938][INFO][main][IgniteKernal] 

>>>__    
>>>   /  _/ ___/ |/ /  _/_  __/ __/  
>>>  _/ // (7 7// /  / / / _/
>>> /___/\___/_/|_/___/ /_/ /___/   
>>> 
>>> ver. 2.6.0#20180710-sha1:669feacc
>>> 2018 Copyright(C) Apache Software Foundation
>>> 
>>> Ignite documentation: http://ignite.apache.org

[16:35:21,946][INFO][main][IgniteKernal] Config URL:
file:/data/ignitedata/apache-ignite-fabric-2.6.0-bin/config/default-config.xml
[16:35:21,954][INFO][main][IgniteKernal] IgniteConfiguration
[igniteInstanceName=null, pubPoolSize=8, svcPoolSize=8, callbackPoolSize=8,
stripedPoolSize=8, sysPoolSize=8, mgmtPoolSize=4, igfsPoolSize=4,
dataStreamerPoolSize=8, utilityCachePoolSize=8,
utilityCacheKeepAliveTime=6, p2pPoolSize=2, qryPoolSize=8,
igniteHome=/data/ignitedata/apache-ignite-fabric-2.6.0-bin,
igniteWorkDir=/data/ignitedata/apache-ignite-fabric-2.6.0-bin/work,
mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@6f94fa3e,
nodeId=df202ccb-356f-426a-8131-e2cc0b9bf98f,
marsh=org.apache.ignite.internal.binary.BinaryMarshaller@3023df74,
marshLocJobs=false, daemon=false, p2pEnabled=false, netTimeout=5000,
sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=1,
metricsUpdateFreq=2000, metricsExpTime=9223372036854775807,
discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=0, ackTimeout=0,
marsh=null, reconCnt=10, reconDelay=2000, maxAckTimeout=60,
forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null],
segPlc=STOP, segResolveAttempts=2, waitForSegOnStart=true,
allResolversPassReq=true, segChkFreq=1, commSpi=TcpCommunicationSpi
[connectGate=null, connPlc=null, enableForcibleNodeKill=false,
enableTroubleshootingLog=false,
srvLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2@6302bbb1,
locAddr=null, locHost=null, locPort=47100, locPortRange=100, shmemPort=-1,
directBuf=true, directSndBuf=false, idleConnTimeout=60,
connTimeout=5000, maxConnTimeout=60, reconCnt=10, sockSndBuf=32768,
sockRcvBuf=32768, msgQueueLimit=0, slowClientQueueLimit=1000, nioSrvr=null,
shmemSrv=null, usePairedConnections=false, connectionsPerNode=1,
tcpNoDelay=true, filterReachableAddresses=false, ackSndThreshold=32,
unackedMsgsBufSize=0, sockWriteTimeout=2000, lsnr=null, boundTcpPort=-1,
boundTcpShmemPort=-1, selectorsCnt=4, selectorSpins=0, addrRslvr=null,
ctxInitLatch=java.util.concurrent.CountDownLatch@31304f14[Count = 1],
stopping=false,
metricsLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationMetricsListener@34a3d150],
evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@2a4fb17b,
colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [lsnr=null],
indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@7cc0cdad,
addrRslvr=null, clientMode=false, rebalanceThreadPoolSize=1,
txCfg=org.apache.ignite.configuration.TransactionConfiguration@7c7b252e,
cacheSanityCheckEnabled=true, discoStartupDelay=6, deployMode=SHARED,
p2pMissedCacheSize=100, locHost=null, timeSrvPortBase=31100,
timeSrvPortRange=100, failureDetectionTimeout=1,
clientFailureDetectionTimeout=3, metricsLogFreq=6, hadoopCfg=null,
connectorCfg=org.apache.ignite.configuration.ConnectorConfiguration@4d5d943d,
odbcCfg=null, warmupClos=null, atomicCfg=AtomicConfiguration
[seqReserveSize=1000, cacheMode=PARTITIONED, backups=1, aff=null,
grpName=null], classLdr=null, sslCtxFactory=null, platformCfg=null,
binaryCfg=null, memCfg=null, pstCfg=null, dsCfg=DataStorageConfiguration
[sysRegionInitSize=41943040, sysCacheMaxSize=104857600, pageSize=0,
concLvl=0, dfltDataRegConf=DataRegionConfiguration [name=default,
maxSize=10737418240, initSize=268435456, swapPath=null,
pageEvictionMode=DISABLED, evictionThreshold=0.9, emptyPagesPoolSize=100,
metricsEnabled=true, metricsSubIntervalCount=5,
metricsRateTimeInterval=6, persistenceEnabled=true,
checkpointPageBufSize=0], storagePath=/data/ignitedata/data,
checkpointFreq=18, lockWaitTime=1, checkpointThreads=4,
checkpointWriteOrder=SEQUENTIAL, walHistSize=20, walSegments=10,
walSegmentSize=67108864, walPath=/root/ignite/wal,
walArchivePath=db/wal/archive, metricsEnabled=true, walMode=LOG_ONLY,
walTlbSize=131072, walBuffSize=0, walFlushFreq=2000, walFsyncDelay=1000,
walRecordIterBuffSize=67108864, alwaysWriteFullPages=false,
fileIOFactory=org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIOFactory@4c583ecf,
metricsSubIntervalCnt=5, metricsRateTimeInterval=6,
walAutoArchiveAfterInactivity=-1, writeThrottlingEnabled=false,
walCompactionEnabled=false], activeOnStart=true, autoActivation=true,
longQryWarnTimeout=500, sqlConnCfg=null,
cliConnCfg=ClientConnectorConfiguration [host=null, port=10800,
portRange=100, sockSndBufSize=0, sockRcvBufSize=0, tcpNoDelay=true,
maxOpenCursorsPerConn=128, threadPoolSize=8, 

Running Spark Job in Background

2018-08-11 Thread ApacheUser
Hello Ignite Team,

I have a Spark job that streams live data into an Ignite cache. The job gets
closed as soon as I close the window (Linux shell). The other Spark streaming
jobs I run with "&" at the end of the spark-submit command, and they run for a
very long time until I stop them or they crash due to other factors.

Is there any way I can run the Spark-Ignite job continuously?

This is my spark submit:

spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.0
--master spark://:7077  --executor-cores x --total-executor-cores x
--executor-memory Xg --conf spark.driver.maxResultSize=Xg --driver-memory Xg
--conf spark.default.parallelism=XX --conf
spark.serializer=org.apache.spark.serializer.KryoSerializer   --class
com...dataload .jar  &
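One standard way to keep such a job alive after the shell exits is to detach it from the terminal with nohup, so the SIGHUP sent when the shell closes never reaches the driver. A sketch, using a placeholder command and a hypothetical log file name in place of the full spark-submit line above:

```shell
# Sketch: 'sleep 5' stands in for the spark-submit command shown above.
# nohup makes the process ignore SIGHUP, and redirecting all three standard
# streams away from the terminal fully detaches it.
nohup sleep 5 > dataload.out 2>&1 < /dev/null &
JOB_PID=$!   # keep the PID so the job can be checked or stopped later
```

Running the submit inside screen/tmux, or as a systemd service like the server nodes in the RPM-installation threads, are common alternatives.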


Thanks






Ignite RPM Installation As Service ignitevisorcmd usage

2018-08-20 Thread ApacheUser
Hi Ignite Team,

I have installed Ignite as a service using RPM and it is running fine, but how
can I use ignitevisorcmd.sh to check the topology etc.?


Thanks






Data Compresion in Ignite

2018-08-28 Thread ApacheUser
Hi Team,

We are using persistent storage. Could you please answer the following:

1. What is the data format (binary?)
2. Is the data compressed on disk and in memory?
3. Are the data formats in memory and on disk the same?

Thanks





Re: Apache Ignite SQL- read Only User Creation

2018-08-21 Thread ApacheUser
Thanks Andrei,

I created the user, but I can't alter the user except for changing the password.
The user is able to delete rows or truncate tables, which I don't want any user
except ignite to be able to do.

Thanks






Re: Tracing all SQL Queries

2018-07-16 Thread ApacheUser
Hi Slava,
Sorry to jump into this thread; I have a similar problem controlling
long-running SQL. I want to time out SQL queries running more than 500 ms.
Is there any way to set setLongQueryWarningTimeout() in the config file?

Appreciate your response.
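For what it's worth, `longQueryWarningTimeout` is a plain property of `IgniteConfiguration`, so it can be set in the Spring XML config. A sketch (bean id assumed to match the usual `ignite.cfg` naming); note that, as the name suggests, this only logs a warning about slow queries and does not cancel them:

```xml
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Warn in the log about queries running longer than 500 ms.
         This does not time out or cancel the query itself. -->
    <property name="longQueryWarningTimeout" value="500"/>
</bean>
```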

Thanks





Re: Apache Ignite Install As Service error

2018-07-11 Thread ApacheUser
Hi ilya,

I am using Ignite 2.5. The message was pasted from the "systemctl status
apache-ign...@default-config.xml" command. I did not run any other command.

full message:

]# systemctl status apache-ign...@default-config.xml
● apache-ign...@default-config.xml.service - Apache Ignite In-Memory
Computing Platform Service
   Loaded: loaded (/etc/systemd/system/apache-ignite@.service; enabled;
vendor preset: disabled)
   Active: active (running) since Mon 2018-07-09 15:36:40 GMT; 2s ago
  Process: 24867 ExecStart=/usr/share/apache-ignite/bin/service.sh start %i
(code=exited, status=0/SUCCESS)
  Process: 24862 ExecStartPre=/usr/bin/env bash
/usr/share/apache-ignite/bin/service.sh set-firewall (code=exited,
status=0/SUCCESS)
  Process: 24858 ExecStartPre=/usr/bin/chown ignite:ignite
/var/run/apache-ignite (code=exited, status=0/SUCCESS)
  Process: 24856 ExecStartPre=/usr/bin/mkdir /var/run/apache-ignite
(code=exited, status=1/FAILURE)
 Main PID: 24869 (ignite.sh)
   CGroup:
/system.slice/system-apache\x2dignite.slice/apache-ign...@default-config.xml.service
   ├─24869 /bin/bash /usr/share/apache-ignite/bin/ignite.sh
/etc/apache-ignite/default-config.xml
   └─24958 /usr/bin/java -Xms1g -Xmx1g -server -XX:+AggressiveOpts
-XX:MaxMetaspaceSize=256m -DIGNITE_QUIET=true
-DIGNITE_SUCCESS_FILE=/usr/share/apache-ignite/work/ignite...




Thanks





Re: Apache Ignite Install As Service error

2018-07-11 Thread ApacheUser
Hi,

/etc/systemd/system/apache-ignite@.service:

[Unit]
Description=Apache Ignite In-Memory Computing Platform Service
After=syslog.target network.target

[Service]
Type=forking
User=ignite
WorkingDirectory=/usr/share/apache-ignite/work
PermissionsStartOnly=true
ExecStartPre=-/usr/bin/mkdir /var/run/apache-ignite
ExecStartPre=-/usr/bin/chown ignite:ignite /var/run/apache-ignite
ExecStartPre=-/usr/bin/env bash /usr/share/apache-ignite/bin/service.sh
set-firewall
ExecStart=/usr/share/apache-ignite/bin/service.sh start %i
PIDFile=/var/run/apache-ignite/%i.pid

[Install]
WantedBy=multi-user.target

I am using Redhat EnterPrise Linux 7.5 

thanks







Re: Apache Ignite Install As Service error

2018-07-09 Thread ApacheUser
The service is running but I can't access it; the full message is below:
[]# systemctl status apache-ign...@default-config.xml
● apache-ign...@default-config.xml.service - Apache Ignite In-Memory
Computing Platform Service
   Loaded: loaded (/etc/systemd/system/apache-ignite@.service; enabled;
vendor preset: disabled)
   Active: active (running) since Mon 2018-07-09 15:40:49 GMT; 2s ago
  Process: 16838 ExecStart=/usr/share/apache-ignite/bin/service.sh start %i
(code=exited, status=0/SUCCESS)
  Process: 16833 ExecStartPre=/usr/bin/env bash
/usr/share/apache-ignite/bin/service.sh set-firewall (code=exited,
status=0/SUCCESS)
  Process: 16830 ExecStartPre=/usr/bin/chown ignite:ignite
/var/run/apache-ignite (code=exited, status=0/SUCCESS)
  Process: 16828 ExecStartPre=/usr/bin/mkdir /var/run/apache-ignite
(code=exited, status=1/FAILURE)
 Main PID: 16840 (ignite.sh)
   CGroup:
/system.slice/system-apache\x2dignite.slice/apache-ign...@default-config.xml.service
   ├─16840 /bin/bash /usr/share/apache-ignite/bin/ignite.sh
/etc/apache-ignite/default-config.xml
   └─16929 /usr/bin/java -Xms1g -Xmx1g -server -XX:+AggressiveOpts
-XX:MaxMetaspaceSize=256m -DIGNITE_QUIET=true
-DIGNITE_SUCCESS_FILE=/usr/share/apache-ignite/work/ignite...

Jul 09 15:40:49 ccrc_spark_analytic_4 systemd[1]: Starting Apache Ignite
In-Memory Computing Platform Service...
Jul 09 15:40:49 ccrc_spark_analytic_4 systemd[1]: Started Apache Ignite
In-Memory Computing Platform Service.

Thanks







Re: SqlFieldsQuery Cannot create inner bean 'org.apache.ignite.cache.query.SqlFieldsQuery

2018-07-03 Thread ApacheUser
Thanks Ilya,
Appreciate your help, 

Is there any parameter in the config file to control the number of rows or the
amount of resources a client connection can use, and to disconnect the client
if it exceeds them?

thanks
Bhaskar





Re: SqlFieldsQuery Cannot create inner bean 'org.apache.ignite.cache.query.SqlFieldsQuery

2018-06-28 Thread ApacheUser
Evgenii,

We use Ignite as an in-memory database for Tableau and SQL; we don't use Java.
We use Spark to load data into Ignite by streaming realtime data.
So if any user runs select * from table, the server nodes go OOME. We need to
control that behaviour; is there any way?

Thanks





SqlFieldsQuery Cannot create inner bean 'org.apache.ignite.cache.query.SqlFieldsQuery

2018-06-28 Thread ApacheUser
Hi Ignite Team,

I am trying to set setLazy on SqlFieldsQuery to avoid OOME on server nodes. My
config file has the setting below
 








but getting below error:

[]# bin/ignite.sh
class org.apache.ignite.IgniteException: Failed to instantiate Spring XML
application context
[springUrl=file:/data/ignitedata/apache-ignite-fabric-2.5.0-bin/config/default-config.xml,
err=Error creating bean with name 'ignite.cfg' defined in URL
[file:/data/ignitedata/apache-ignite-fabric-2.5.0-bin/config/default-config.xml]:
Cannot create inner bean
'org.apache.ignite.cache.query.SqlFieldsQuery#5f9d02cb' of type
[org.apache.ignite.cache.query.SqlFieldsQuery] while setting bean property
'SqlFieldsQuery'; nested exception is
org.springframework.beans.factory.BeanCreationException: Error creating bean
with name 'org.apache.ignite.cache.query.SqlFieldsQuery#5f9d02cb' defined in
URL
[file:/data/ignitedata/apache-ignite-fabric-2.5.0-bin/config/default-config.xml]:
Instantiation of bean failed; nested exception is
org.springframework.beans.BeanInstantiationException: Failed to instantiate
[org.apache.ignite.cache.query.SqlFieldsQuery]: No default constructor
found; nested exception is java.lang.NoSuchMethodException:
org.apache.ignite.cache.query.SqlFieldsQuery.()]
at
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:990)
at org.apache.ignite.Ignition.start(Ignition.java:355)
at
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:301)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to
instantiate Spring XML application context
[springUrl=file:/data/ignitedata/apache-ignite-fabric-2.5.0-bin/config/default-config.xml,
err=Error creating bean with name 'ignite.cfg' defined in URL
[file:/data/ignitedata/apache-ignite-fabric-2.5.0-bin/config/default-config.xml]:
Cannot create inner bean
'org.apache.ignite.cache.query.SqlFieldsQuery#5f9d02cb' of type
[org.apache.ignite.cache.query.SqlFieldsQuery] while setting bean property
'SqlFieldsQuery'; nested exception is
org.springframework.beans.factory.BeanCreationException: Error creating bean
with name 'org.apache.ignite.cache.query.SqlFieldsQuery#5f9d02cb' defined in
URL
[file:/data/ignitedata/apache-ignite-fabric-2.5.0-bin/config/default-config.xml]:
Instantiation of bean failed; nested exception is
org.springframework.beans.BeanInstantiationException: Failed to instantiate
[org.apache.ignite.cache.query.SqlFieldsQuery]: No default constructor
found; nested exception is java.lang.NoSuchMethodException:
org.apache.ignite.cache.query.SqlFieldsQuery.()]
at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.applicationContext(IgniteSpringHelperImpl.java:392)
at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:104)
at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:98)
at
org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:744)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:945)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:854)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:724)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:693)
at org.apache.ignite.Ignition.start(Ignition.java:352)
... 1 more
Caused by: org.springframework.beans.factory.BeanCreationException: Error
creating bean with name 'ignite.cfg' defined in URL
[file:/data/ignitedata/apache-ignite-fabric-2.5.0-bin/config/default-config.xml]:
Cannot create inner bean
'org.apache.ignite.cache.query.SqlFieldsQuery#5f9d02cb' of type
[org.apache.ignite.cache.query.SqlFieldsQuery] while setting bean property
'SqlFieldsQuery'; nested exception is
org.springframework.beans.factory.BeanCreationException: Error creating bean
with name 'org.apache.ignite.cache.query.SqlFieldsQuery#5f9d02cb' defined in
URL
[file:/data/ignitedata/apache-ignite-fabric-2.5.0-bin/config/default-config.xml]:
Instantiation of bean failed; nested exception is
org.springframework.beans.BeanInstantiationException: Failed to instantiate
[org.apache.ignite.cache.query.SqlFieldsQuery]: No default constructor
found; nested exception is java.lang.NoSuchMethodException:
org.apache.ignite.cache.query.SqlFieldsQuery.()
at
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:313)
at
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:122)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1531)
at
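The root cause is visible in the last lines of the trace: `SqlFieldsQuery` has no zero-argument constructor ("No default constructor found"), so Spring cannot instantiate it from a `<bean>` that uses only property setters. If the bean really must live in the XML, the SQL text has to go through a constructor argument. A sketch, with a placeholder query string:

```xml
<bean class="org.apache.ignite.cache.query.SqlFieldsQuery">
    <!-- The SQL string must be supplied via the constructor;
         this class has no no-arg constructor. -->
    <constructor-arg value="SELECT * FROM mytable"/>
    <!-- Fetch the result set lazily to avoid OOME on large results. -->
    <property name="lazy" value="true"/>
</bean>
```

That said, `SqlFieldsQuery` is normally created per query in application code (`new SqlFieldsQuery(sql).setLazy(true)`) rather than defined as a Spring bean.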

Re: SqlFieldsQuery Cannot create inner bean 'org.apache.ignite.cache.query.SqlFieldsQuery

2018-06-28 Thread ApacheUser
Evgenii,
What happens if the user doesn't set that limit, or forgets to set it on the
client tool?

We set it, but someone is testing without lazy=true to prove that Apache Ignite
is not stable.

Thanks





Spark 2.3 Structured Streaming With Ignite

2018-10-05 Thread ApacheUser
Hi,

Is it possible to use Spark Structured Streaming with Ignite? I am getting a
"Data source ignite does not support streamed writing" error.

Log trace:


Exception in thread "main" java.lang.UnsupportedOperationException: Data
source ignite does not support streamed writing
at
org.apache.spark.sql.execution.datasources.DataSource.createSink(DataSource.scala:319)
at
org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:293)
at
com.cisco.ccrc.spark.sparkIngite.ssincrmntl$.main(ssincrmntl.scala:90)
at
com.cisco.ccrc.spark.sparkIngite.ssincrmntl.main(ssincrmntl.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
at
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
at
org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)


Thanks





Spark Ignite Data Load failing On Large Cache

2018-10-08 Thread ApacheUser
Hi,
I am testing a large Ignite cache of 900 GB on a 4-node VM (96 GB RAM, 8 CPU
and 500 GB SAN storage) Spark-Ignite cluster. It has happened two times: after
reaching 350 GB plus, one or two nodes stop processing the data load and the
load stops. Please advise; the cluster, server and client logs are below.


 


Server Logs:

[11:59:34] Topology snapshot [ver=121, servers=4, clients=9, CPUs=32,
offheap=1000.0GB, heap=78.0GB]
[11:59:34]   ^-- Node [id=F6605E96-47C9-479B-A840-03316500C9A3,
clusterState=ACTIVE]
[11:59:34]   ^-- Baseline [id=0, size=4, online=4, offline=0]
[11:59:34] Data Regions Configured:
[11:59:34]   ^-- default_mem_region [initSize=256.0 MiB, maxSize=20.0 GiB,
persistenceEnabled=true]
[11:59:34]   ^-- q_major [initSize=10.0 GiB, maxSize=30.0 GiB,
persistenceEnabled=true]
[11:59:34]   ^-- q_minor [initSize=10.0 GiB, maxSize=30.0 GiB,
persistenceEnabled=true]
[14:33:15,872][SEVERE][grid-nio-worker-client-listener-3-#33][ClientListenerProcessor]
Failed to process selector key [ses=GridSelectorNioSessionImpl
[worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=0
lim=8192 cap=8192], super=AbstractNioClientWorker [idx=3, bytesRcvd=0,
bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker
[name=grid-nio-worker-client-listener-3, igniteInstanceName=null,
finished=false, hashCode=254322881, interrupted=false,
runner=grid-nio-worker-client-listener-3-#33]]], writeBuf=null,
readBuf=null, inRecovery=null, outRecovery=null, super=GridNioSessionImpl
[locAddr=/64.102.213.190:10800, rmtAddr=/10.82.249.225:51449,
createTime=1538740798912, closeTime=0, bytesSent=397, bytesRcvd=302,
bytesSent0=0, bytesRcvd0=0, sndSchedTime=1538742789216,
lastSndTime=1538742789216, lastRcvTime=1538742789216, readsPaused=false,
filterChain=FilterChain[filters=[GridNioAsyncNotifyFilter,
GridNioCodecFilter [parser=ClientListenerBufferedParser, directMode=false]],
accepted=true]]]
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at
org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1085)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2339)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:748)
[21:43:26,312][SEVERE][grid-nio-worker-client-listener-0-#30][ClientListenerProcessor]
Failed to process selector key [ses=GridSelectorNioSessionImpl
[worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=0
lim=8192 cap=8192], super=AbstractNioClientWorker [idx=0, bytesRcvd=0,
bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker
[name=grid-nio-worker-client-listener-0, igniteInstanceName=null,
finished=false, hashCode=2211598, interrupted=false,
runner=grid-nio-worker-client-listener-0-#30]]], writeBuf=null,
readBuf=null, inRecovery=null, outRecovery=null, super=GridNioSessionImpl
[locAddr=/64.102.213.190:10800, rmtAddr=/10.82.32.114:59525,
createTime=1538746249024, closeTime=0, bytesSent=2035, bytesRcvd=1532,
bytesSent0=0, bytesRcvd0=0, sndSchedTime=1538767916701,
lastSndTime=1538767916701, lastRcvTime=1538767916701, readsPaused=false,
filterChain=FilterChain[filters=[GridNioAsyncNotifyFilter,
GridNioCodecFilter [parser=ClientListenerBufferedParser, directMode=false]],
accepted=true]]]
java.io.IOException: Connection timed out
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at
org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1085)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2339)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
   

Slow Data Insertion On Large Cache : Spark Streaming

2018-11-05 Thread ApacheUser
Hi Team,

We have 6 node Ignite cluster with 72CPU, 256GB  RAM and 5TB Storage . Data
ingested using Spark Streaming  into Ignite Cluster for SQL and Tableau
Usage.

I have couple of Large tables with 200ml rows with (200GB) and 800ml rows
with (500GB)  .
The insertion is taking more than 40secs if there is already existing
Composite key, if new row its around 10ms.

We have Entry, Main and Details tables. The "Entry" cache has a single-field
primary key "id"; the second cache, "Main", has a composite primary key ("id",
"mainid"); the third cache, "Details", has a composite primary key ("id",
"mainrid", "detailid"). "id" is the affinity key for all of these and some other small tables.

1. Is there any insert/update performance difference between a single-field
primary key and a multi-field primary key?
 Would it make any difference if I converted the composite primary key into a
single-field primary key,
 e.g. by concatenating all the composite fields into one field?
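For clarity, by "concatenate" I mean something like the following sketch. The field names ("id", "mainid", "detailid") are the ones from the tables above; the '|' separator is just an example and must not occur in the data:

```java
public class SingleKeyDemo {
    // Sketch: build one string key from the composite
    // ("id", "mainid", "detailid") fields. The '|' separator is an
    // assumption; pick one that cannot appear in the key values.
    static String singleKey(String id, String mainId, String detailId) {
        return id + "|" + mainId + "|" + detailId;
    }

    public static void main(String[] args) {
        // One concatenated key per row instead of a three-field key object.
        System.out.println(singleKey("42", "7", "3"));
    }
}
```

Note that if "id" stays the affinity key, it would still have to be declared separately after such a change, otherwise collocation by "id" is lost.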

2. Which ignite.sh and config parameters need tuning?

My Spark DataFrame save options (save to Ignite):

.option(OPTION_STREAMER_ALLOW_OVERWRITE, true)
.mode(SaveMode.Append)
.save()

My Ignite.sh

JVM_OPTS="-server -Xms10g -Xmx10g -XX:+AggressiveOpts
-XX:MaxMetaspaceSize=512m"
JVM_OPTS="${JVM_OPTS} -XX:+AlwaysPreTouch"
JVM_OPTS="${JVM_OPTS} -XX:+UseG1GC"
JVM_OPTS="${JVM_OPTS} -XX:+ScavengeBeforeFullGC"
JVM_OPTS="${JVM_OPTS} -XX:+DisableExplicitGC"
JVM_OPTS="${JVM_OPTS} -XX:+HeapDumpOnOutOfMemoryError "
JVM_OPTS="${JVM_OPTS} -XX:HeapDumpPath=${IGNITE_HOME}/work"
JVM_OPTS="${JVM_OPTS} -XX:+PrintGCDetails"
JVM_OPTS="${JVM_OPTS} -XX:+PrintGCTimeStamps"
JVM_OPTS="${JVM_OPTS} -XX:+PrintGCDateStamps"
JVM_OPTS="${JVM_OPTS} -XX:+UseGCLogFileRotation"
JVM_OPTS="${JVM_OPTS} -XX:NumberOfGCLogFiles=10"
JVM_OPTS="${JVM_OPTS} -XX:GCLogFileSize=100M"
JVM_OPTS="${JVM_OPTS} -Xloggc:${IGNITE_HOME}/work/gc.log"
JVM_OPTS="${JVM_OPTS} -XX:+PrintAdaptiveSizePolicy"
JVM_OPTS="${JVM_OPTS} -XX:MaxGCPauseMillis=100"

export IGNITE_SQL_FORCE_LAZY_RESULT_SET=true

default-Config.xml (most of the XML markup was stripped by the mail archive;
the surviving skeleton and discovery addresses are shown):

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <!-- cache and data storage properties were stripped by the archive -->
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>64.x.x.x:47500..47509</value>
                                <value>64.x.x.x:47500..47509</value>
                                <value>64.x.x.x:47500..47509</value>
                                <value>64.x.x.x:47500..47509</value>
                                <value>64.x.x.x:47500..47509</value>
                                <value>64.x.x.x:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>

Thanks





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: what is Ignite Default atomicityMode

2018-11-08 Thread ApacheUser
Thanks Andrei,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Status of Spark Structured Streaming Support IGNITE-9357

2018-11-08 Thread ApacheUser
Hello Team,

Any update on the Spark Structured Streaming support with Ignite?
https://issues.apache.org/jira/browse/IGNITE-9357




Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


what is Ignite Default atomicityMode

2018-11-08 Thread ApacheUser
Hi Team

Is Apache Ignite's atomicityMode "ATOMIC" by default, or do we need to include
it explicitly in default-config.xml? Please advise.
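For reference, this is roughly what an explicit setting would look like in default-config.xml (a sketch; the cache name "myCache" is a placeholder):

```xml
<!-- Sketch: explicitly setting atomicityMode on a cache configuration. -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <property name="atomicityMode" value="ATOMIC"/>
</bean>
```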




Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Slow Data Insertion On Large Cache : Spark Streaming

2018-11-12 Thread ApacheUser
Thanks Stan,
planning to move on to 2.7.

Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Spark Ignite : Unsupported data type ArrayType(StringType,true)

2018-10-05 Thread ApacheUser
Hi,

I am trying to save a Spark DataFrame to Ignite and am getting an "Unsupported
data type ArrayType(StringType,true)" error. The same code was working fine before.

This is the code:

val qErrJson = spark.read.json(
  qErrErr.select("err")
    .filter(_.getStringOption("err").isDefined)
    .map(row => row.getString(0)))
qErrJson.createOrReplaceTempView("q_Err_all")


Exception in thread "main" class org.apache.ignite.IgniteException: Unsupported data type ArrayType(StringType,true)
	at org.apache.ignite.spark.impl.QueryUtils$.dataType(QueryUtils.scala:151)
	at org.apache.ignite.spark.impl.QueryUtils$.org$apache$ignite$spark$impl$QueryUtils$$compileColumn(QueryUtils.scala:96)
	at org.apache.ignite.spark.impl.QueryUtils$$anonfun$5.apply(QueryUtils.scala:84)
	at org.apache.ignite.spark.impl.QueryUtils$$anonfun$5.apply(QueryUtils.scala:84)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.Iterator$class.foreach(Iterator.scala:893)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at org.apache.spark.sql.types.StructType.foreach(StructType.scala:99)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at org.apache.spark.sql.types.StructType.map(StructType.scala:99)
	at org.apache.ignite.spark.impl.QueryUtils$.compileCreateTable(QueryUtils.scala:84)
	at org.apache.ignite.spark.impl.QueryHelper$.createTable(QueryHelper.scala:60)
	at org.apache.ignite.spark.impl.IgniteRelationProvider.createRelation(IgniteRelationProvider.scala:154)
	at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
	at com.cisco.ccrc.spark.sparkIngite.dataload$.saveDF(dataload.scala:210)
	at com.cisco.ccrc.spark.sparkIngite.dataload$.main(dataload.scala:78)
	at com.cisco.ccrc.spark.sparkIngite.dataload.main(dataload.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)



Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite Persistence Storage : Full data is stored in all activated nodes just local Data?

2018-10-09 Thread ApacheUser
Hi Team,
How is data stored when persistence is enabled?

Does Ignite store all data on every persistence-enabled node, or only the data
that belongs to that node?

Ex: I have a 4-node Ignite cluster with persistence enabled on all 4 nodes
(activated after all 4 nodes come up). When data is loaded, is the full data
set stored/persisted on all 4 nodes, or does each node persist only the data
that resides on it?
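To make the question concrete: my understanding is that keys map to partitions and each node owns a subset of partitions. A toy sketch of that idea (the modulo mapping and round-robin assignment below are illustrations only, not Ignite's actual rendezvous affinity function; node and partition counts are made up):

```java
import java.util.*;

public class PartitionSketch {
    // Toy stand-in for an affinity function: key -> partition.
    static int partition(Object key, int partitions) {
        return Math.floorMod(key.hashCode(), partitions);
    }

    public static void main(String[] args) {
        int partitions = 8;
        List<String> nodes = Arrays.asList("node1", "node2", "node3", "node4");
        Map<String, List<Integer>> primaries = new TreeMap<>();
        for (String n : nodes) primaries.put(n, new ArrayList<>());
        // Round-robin partition -> primary node assignment (illustrative only).
        for (int p = 0; p < partitions; p++)
            primaries.get(nodes.get(p % nodes.size())).add(p);
        // If this model holds, each node persists only its own partitions
        // (plus backups, if configured), not the full data set.
        System.out.println(primaries);
    }
}
```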


Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Spark Ignite Data Load failing On Large Cache

2018-10-08 Thread ApacheUser
Hi,
I am testing a large Ignite cache of 900GB on a 4-node VM (96GB RAM, 8 CPU and
500GB SAN storage) Spark/Ignite cluster. It has happened two times now: after
reaching 350GB+, one or two nodes stop processing the data load and the load
halts. Please advise; the cluster details and logs are below.

Details:

visor> top
Hosts: 4

Host 64.102.213.190 (Linux amd64 3.10.0-862.11.6.el7.x86_64, 8 CPUs, MAC FA:16:3E:52:96:C4, CPU load 0.14 %):
  server F6605E96(@n1); clients 2760B50C(@n11), 81855FF0(@n12)
Host 64.102.212.151 (Linux amd64 3.10.0-862.11.6.el7.x86_64, 8 CPUs, MAC FA:16:3E:E5:27:36, CPU load 2.13 %):
  server 512609AB(@n0); clients 72AA1490(@n5), E218A964(@n6)
Host 64.102.213.13 (Linux amd64 3.10.0-862.11.6.el7.x86_64, 8 CPUs, MAC FA:16:3E:C4:F4:98, CPU load 0.10 %):
  server 4470553B(@n2); clients F0D1625A(@n7), EF0C5A13(@n8)
Host 64.102.213.220 (Linux amd64 3.10.0-862.11.6.el7.x86_64, 8 CPUs, MAC FA:16:3E:26:72:FD, CPU load 0.21 %):
  server F44497FE(@n3); clients DBA60939(@n4), 65FA421F(@n9), 8CBFE426(@n10)

Summary:
  Active: true | Total hosts: 4 | Total nodes: 13 | Total CPUs: 32
  Avg. CPU load: 0.61 % | Avg. free heap: 71.00 % | Avg. up time: 30:22:52
  Snapshot time: 2018-10-08 14:19:47

visor> node
Select node from:
 #  | Node ID8(@), IP                | Node Type | Up Time  | CPUs | CPU Load | Free Heap
 0  | 512609AB(@n0), 64.102.212.151  | Server    | 30:23:14 | 8    | 4.33 %   | 36.00 %
 1  | F6605E96(@n1), 64.102.213.190  | Server    | 30:23:10 | 8    | 0.90 %   | 56.00 %
 2  | 4470553B(@n2), 64.102.213.13   | Server    | 30:23:07 | 8    | 0.20 %   | 78.00 %
 3  | F44497FE(@n3), 64.102.213.220  | Server    | 30:23:03 | 8    | 0.17 %   | 44.00 %
 4  | DBA60939(@n4), 64.102.213.220  | Client    | 14:21:12 | 8    | 0.17 %   | 66.00 %
 5  | 72AA1490(@n5), 64.102.212.151  | Client    | 14:21:06 | 8    | 0.17 %   | 78.00 %
 6  | E218A964(@n6), 64.102.212.151  | Client    | 14:21:07 | 8    | 0.17 %   | 71.00 %
 7  | F0D1625A(@n7), 64.102.213.13   | Client    | 14:21:06 | 8    | 0.07 %   | 84.00 %
 8  | EF0C5A13(@n8), 64.102.213.13   | Client    | 14:21:06 | 8    | 0.07 %   | 83.00 %
 9  | 65FA421F(@n9), 64.102.213.220  | Client    | 14:21:07 | 8    | 0.10 %   | 64.00 %
 10 | 8CBFE426(@n10), 64.102.213.220 | Client    | 14:21:06 | 8    | 0.13 %   | 76.00 %
 11 | 2760B50C(@n11), 64.102.213.190 | Client    | 14:21:07 | 8    | 0.13 %   | 78.00 %
 12 | 81855FF0(@n12), 64.102.213.190 | Client    | 14:21:06 | 8    | 0.10 %   | 81.00 %

Re: Invalid property 'statisticsEnabled' is not writable

2018-12-03 Thread ApacheUser
Hi Ilya,

I am able to start the cluster and run SQL queries, but not to write; the
error is thrown while loading data. Please try writing some data into any
dummy table with a couple of fields. I am using an affinity key and backups=1.

Thanks




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Invalid property 'statisticsEnabled' is not writable

2018-11-27 Thread ApacheUser
Hi Team,

We have a 6-node Ignite cluster and load data with Spark. We recently added a
"cacheConfiguration" section and now get the error below when we try to
recreate the cache via the Spark data load.

Any hint would help, please.

The error:


Caused by: org.springframework.beans.factory.BeanCreationException: Error
creating bean with name
'org.apache.ignite.configuration.CacheConfiguration#18d63996' defined in URL
[file:/apps/ignitedata/apache-ignite-fabric-2.6.0-bin/config/default-config.xml]:
Error setting property values; nested exception is
org.springframework.beans.NotWritablePropertyException: Invalid property
'statisticsEnabled' of bean class
[org.apache.ignite.configuration.CacheConfiguration]: Bean property
'statisticsEnabled' is not writable or has an invalid setter method. Does
the parameter type of the setter match the return type of the getter?
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1570)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1280)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:553)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
	at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:299)


my config (the XML markup was largely stripped by the mail archive; only the
skeleton survives):

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <!-- properties stripped by the archive, including the cacheConfiguration
             entries whose 'statisticsEnabled' property is named in the error -->
    </bean>
</beans>

Thanks




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/