Re: No response from ignite job tracker

2018-08-14 Thread engrdean
Thanks Ilya.  The reason I used that address is because the value in the
example config provided with the ignite-hadoop package is:


<property>
  <name>mapreduce.jobtracker.address</name>
  <value>localhost:11211</value>
</property>


I tried changing it to just "local" which I know is the default, but I get
the following error:

2018-08-14 14:47:13,714 INFO  [main] mapreduce.Cluster
(Cluster.java:initialize(113)) - Failed to use
org.apache.ignite.hadoop.mapreduce.IgniteHadoopClientProtocolProvider due to
error:
java.io.IOException: Local execution mode is not supported, please point
mapreduce.jobtracker.address to real Ignite nodes.

I also tried changing it to just my server name with no port reference like:


<property>
  <name>mapreduce.jobtracker.address</name>
  <value>myserver.com</value>
</property>


and leaving it as the value in the example ignite-hadoop configs:


<property>
  <name>mapreduce.jobtracker.address</name>
  <value>localhost:11211</value>
</property>


In both of those cases, it shows the url as N/A:

2018-08-14 15:09:24,285 INFO  [main] mapreduce.Job (Job.java:submit(1294)) -
The url to track the job: N/A

The description for mapreduce.jobtracker.address in the hadoop documentation
is:

"The host and port that the MapReduce job tracker runs at. If "local", then
jobs are run in-process as a single map and reduce task."

What port does ignite use for this if it isn't 11211?
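For anyone landing on this later: 11211 is the default port of Ignite's ConnectorConfiguration, so a mapred-site.xml along these lines should point the client protocol at a real running node rather than at local mode (the host name here is illustrative, and the port assumes the node has not overridden the connector default):

```xml
<!-- mapred-site.xml: point the Ignite Hadoop client protocol at a running node.
     11211 is ConnectorConfiguration's default port; adjust if the node overrides it. -->
<property>
  <name>mapreduce.jobtracker.address</name>
  <value>myserver.com:11211</value>
</property>
```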



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: No response from ignite job tracker

2018-08-09 Thread engrdean
Still looking for any suggestions on this.  Anyone have any ideas for next
steps to troubleshoot?





Proper config for IGFS eviction

2018-08-09 Thread engrdean
I've been struggling to find a configuration that works successfully for IGFS
with hadoop filesystem caching.  Anytime I attempt to load more data than
what will fit into memory on my Ignite node, the ignite process crashes.  

The behavior I am looking for is that old cache entries will be evicted when
I try to write new data to IGFS that exceeds the available memory on the
server.  I can see that my data is being persisted into HDFS, but I seem to
be limited to the amount of physical memory on my Ignite server at the
moment.  I am using the teragen example to generate the files on hadoop for
the purposes of this test like so:

time hadoop-ig jar
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar
teragen 1 igfs://i...@myserver.com/tmp/output1

If I have systemRegionMaxSize set to a value less than the physical memory
on my ignite server, then the message is something like this:

class org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Out of
memory in data region [name=sysMemPlc, initSize=1.0 GiB, maxSize=14.0 GiB,
persistenceEnabled=false] Try the following:
  ^-- Increase maximum off-heap memory size
(DataRegionConfiguration.maxSize)
  ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
  ^-- Enable eviction or expiration policies
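A minimal sketch of the kind of data-region eviction the error message suggests, assuming the IGFS caches are assigned to a custom region (the region name and sizes below are illustrative, not taken from this thread):

```xml
<!-- Ignite node config: a bounded data region with page eviction enabled.
     RANDOM_2_LRU evicts cold pages once maxSize is reached, instead of
     failing with IgniteOutOfMemoryException. -->
<property name="dataStorageConfiguration">
  <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
    <property name="dataRegionConfigurations">
      <list>
        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
          <property name="name" value="igfsDataRegion"/>
          <property name="maxSize" value="#{10L * 1024 * 1024 * 1024}"/>
          <property name="pageEvictionMode" value="RANDOM_2_LRU"/>
        </bean>
      </list>
    </property>
  </bean>
</property>
```

The caches backing IGFS would then need to reference this region by name; IGFS block eviction may additionally need its own per-block eviction policy, which this sketch does not cover.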
If I increase the systemRegionMaxSize to a value greater than the physical
memory on my ignite server, the message is something like this:

[2018-08-09 12:16:08,174][ERROR][igfs-#171][GridNearTxLocal] Heuristic
transaction failure.
class
org.apache.ignite.internal.transactions.IgniteTxHeuristicCheckedException:
Failed to locally write to cache (all transaction entries will be
invalidated, however there was a window when entries for this transaction
were visible to others): GridNearTxLocal [mappings=IgniteTxMappingsImpl [],
nearLocallyMapped=false, colocatedLocallyMapped=true, needCheckBackup=null,
hasRemoteLocks=false, trackTimeout=false, lb=null, thread=igfs-#171,
mappings=IgniteTxMappingsImpl [], super=GridDhtTxLocalAdapter
[nearOnOriginatingNode=false, nearNodes=[], dhtNodes=[], explicitLock=false,
super=IgniteTxLocalAdapter [completedBase=null, sndTransformedVals=false,
depEnabled=false, txState=IgniteTxStateImpl [activeCacheIds=[-313790114],
recovery=false, txMap=[IgniteTxEntry [key=KeyCacheObjectImpl [part=504,
val=IgfsBlockKey [fileId=c976b6f1561-689b0ba5-6920-4b52-a614-c2360d0acff4,
blockId=52879, affKey=null, evictExclude=true], hasValBytes=true],
cacheId=-313790114, txKey=IgniteTxKey [key=KeyCacheObjectImpl [part=504,
val=IgfsBlockKey [fileId=c976b6f1561-689b0ba5-6920-4b52-a614-c2360d0acff4,
blockId=52879, affKey=null, evictExclude=true], hasValBytes=true],
cacheId=-313790114], val=[op=CREATE, val=CacheObjectByteArrayImpl
[arrLen=65536]], prevVal=[op=NOOP, val=null], oldVal=[op=NOOP, val=null],
entryProcessorsCol=null, ttl=-1, conflictExpireTime=-1, conflictVer=null,
explicitVer=null, dhtVer=null, filters=[], filtersPassed=false,
filtersSet=true, entry=GridDhtCacheEntry [rdrs=[], part=504,
super=GridDistributedCacheEntry [super=GridCacheMapEntry
[key=KeyCacheObjectImpl [part=504, val=IgfsBlockKey
[fileId=c976b6f1561-689b0ba5-6920-4b52-a614-c2360d0acff4, blockId=52879,
affKey=null, evictExclude=true], hasValBytes=true], val=null,
startVer=1533830728270, ver=GridCacheVersion [topVer=145310277,
order=1533830728270, nodeOrder=1], hash=-915370253,
extras=GridCacheMvccEntryExtras [mvcc=GridCacheMvcc
[locs=[GridCacheMvccCandidate [nodeId=6ed33eb9-2103-402c-afab-a415c8f08f2f,
ver=GridCacheVersion [topVer=145310277, order=1533830728268, nodeOrder=1],
threadId=224, id=258264, topVer=AffinityTopologyVersion [topVer=1,
minorTopVer=0], reentry=null,
otherNodeId=6ed33eb9-2103-402c-afab-a415c8f08f2f, otherVer=GridCacheVersion
[topVer=145310277, order=1533830728268, nodeOrder=1], mappedDhtNodes=null,
mappedNearNodes=null, ownerVer=null, serOrder=null, key=KeyCacheObjectImpl
[part=504, val=IgfsBlockKey
[fileId=c976b6f1561-689b0ba5-6920-4b52-a614-c2360d0acff4, blockId=52879,
affKey=null, evictExclude=true], hasValBytes=true],
masks=local=1|owner=1|ready=1|reentry=0|used=0|tx=1|single_implicit=0|dht_local=1|near_local=0|removed=0|read=0,
prevVer=GridCacheVersion [topVer=145310277, order=1533830728268,
nodeOrder=1], nextVer=GridCacheVersion [topVer=145310277,
order=1533830728268, nodeOrder=1]]], rmts=null]], flags=2]]], prepared=1,
locked=false, nodeId=6ed33eb9-2103-402c-afab-a415c8f08f2f, locMapped=false,
expiryPlc=null, transferExpiryPlc=false, flags=0, partUpdateCntr=0,
serReadVer=null, xidVer=GridCacheVersion [topVer=145310277,
order=1533830728268, nodeOrder=1]], IgniteTxEntry [key=KeyCacheObjectImpl
[part=504, val=IgfsBlockKey
[fileId=c976b6f1561-689b0ba5-6920-4b52-a614-c2360d0acff4, blockId=52880,
affKey=null, evictExclude=true], hasValBytes=true], cacheId=-313790114,
txKey=IgniteTxKey [key=KeyCacheObjectImpl [part=504, val=IgfsBlockKey
[fileId=c976b6f1561-689b0ba5-6920-4b52-a614-c2360d0acff4, blockId=52880,

Re: No response from ignite job tracker

2018-08-06 Thread engrdean
I can see messages in the ignite logs when I run my mapreduce job:

[2018-08-06 10:15:56,914][WARN ][pub-#23817][HadoopJobTracker]
ignite.job.shared.classloader job property is set to true; please disable it
if job tasks rely on mutable static state.
[2018-08-06 10:15:58,303][INFO
][Hadoop-task-e93a350c-a588-42ee-883a-f0efec20aaed_1-SETUP-0-0-#23820][FileOutputCommitter]
File Output Committer Algorithm version is 1
[2018-08-06 10:15:58,303][INFO
][Hadoop-task-e93a350c-a588-42ee-883a-f0efec20aaed_1-SETUP-0-0-#23820][FileOutputCommitter]
FileOutputCommitter skip cleanup _temporary folders under output
directory:false, ignore cleanup failures: false
[2018-08-06 10:15:58,474][INFO
][Hadoop-task-e93a350c-a588-42ee-883a-f0efec20aaed_1-MAP-0-0-#23829][LineRecordReader]
Found UTF-8 BOM and skipped it
[2018-08-06 10:15:58,755][INFO
][Hadoop-task-e93a350c-a588-42ee-883a-f0efec20aaed_1-REDUCE-0-0-#23831][FileOutputCommitter]
File Output Committer Algorithm version is 1
[2018-08-06 10:15:58,756][INFO
][Hadoop-task-e93a350c-a588-42ee-883a-f0efec20aaed_1-REDUCE-0-0-#23831][FileOutputCommitter]
FileOutputCommitter skip cleanup _temporary folders under output
directory:false, ignore cleanup failures: false
[2018-08-06 10:15:58,884][INFO
][Hadoop-task-e93a350c-a588-42ee-883a-f0efec20aaed_1-REDUCE-0-0-#23831][FileOutputCommitter]
Saved output of task
'attempt_e93a350c-a588-42ee-883a-f0efec20aaed_0001_r_00_0' to
hdfs://ignitehdp/tmp/output/_temporary/0/task_e93a350c-a588-42ee-883a-f0efec20aaed_0001_r_00
[2018-08-06 10:15:58,910][INFO
][Hadoop-task-e93a350c-a588-42ee-883a-f0efec20aaed_1-COMMIT-0-0-#23832][FileOutputCommitter]
File Output Committer Algorithm version is 1
[2018-08-06 10:15:58,910][INFO
][Hadoop-task-e93a350c-a588-42ee-883a-f0efec20aaed_1-COMMIT-0-0-#23832][FileOutputCommitter]
FileOutputCommitter skip cleanup _temporary folders under output
directory:false, ignore cleanup failures: false

I can also telnet, from the laptop where I open a browser, to the server that
is running Ignite on port 11211:

$ telnet myserver.com 11211
Trying xxx.xxx.xxx.xxx...
Connected to myserver.com.
Escape character is '^]'.





No response from ignite job tracker

2018-08-03 Thread engrdean
I am successfully running mapreduce jobs using Ignite but I haven't been able
to use the Ignite job tracker.  I'm running my job like this:

$ /bin/hadoop --config ~/ignite/ignite_conf jar wc.jar WordCount
/user/ignite/ /tmp/output
2018-08-03 15:09:32,751 WARN  [main] mapreduce.JobResourceUploader
(JobResourceUploader.java:uploadFiles(64)) - Hadoop command-line option
parsing not performed. Implement the Tool interface and execute your
application with ToolRunner to remedy this.
2018-08-03 15:09:32,960 INFO  [main] input.FileInputFormat
(FileInputFormat.java:listStatus(283)) - Total input paths to process : 1
2018-08-03 15:09:33,016 INFO  [main] mapreduce.JobSubmitter
(JobSubmitter.java:submitJobInternal(198)) - number of splits:1
2018-08-03 15:09:33,099 INFO  [main] mapreduce.JobSubmitter
(JobSubmitter.java:printTokens(287)) - Submitting tokens for job:
job_a217f633-f2d7-4255-adad-be19b9cda204_0003
2018-08-03 15:09:35,906 INFO  [main] mapreduce.Job (Job.java:submit(1294)) -
The url to track the job: N/A
2018-08-03 15:09:35,906 INFO  [main] mapreduce.Job
(Job.java:monitorAndPrintJob(1339)) - Running job:
job_a217f633-f2d7-4255-adad-be19b9cda204_0003
2018-08-03 15:09:37,230 INFO  [main] mapreduce.Job
(Job.java:monitorAndPrintJob(1360)) - Job
job_a217f633-f2d7-4255-adad-be19b9cda204_0003 running in uber mode : false
2018-08-03 15:09:37,232 INFO  [main] mapreduce.Job
(Job.java:monitorAndPrintJob(1367)) -  map 0% reduce 0%
2018-08-03 15:09:38,531 INFO  [main] mapreduce.Job
(Job.java:monitorAndPrintJob(1367)) -  map 100% reduce 100%
2018-08-03 15:09:38,548 INFO  [main] mapreduce.Job
(Job.java:monitorAndPrintJob(1378)) - Job
job_a217f633-f2d7-4255-adad-be19b9cda204_0003 completed successfully
2018-08-03 15:09:38,564 INFO  [main] mapreduce.Job
(Job.java:monitorAndPrintJob(1385)) - Counters: 0

As you can see, it lists the job tracker url as "N/A".  I believe I have
correctly set the url in the mapred-site.xml:

$ grep -C 2 mapreduce.jobtracker.address ignite_conf/mapred-site.xml


<property>
  <name>mapreduce.jobtracker.address</name>
  <value>myserver.com:11211</value>
</property>


However, I'm getting an empty reply from the server:

$ curl http://myserver.com:11211
curl: (52) Empty reply from server

I can see the port is up:

$ netstat -luptn | grep 11211
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp0  0 0.0.0.0:11211   0.0.0.0:*   LISTEN 
11709/java


What am I missing here?





Re: 3rd party persistence with Hive

2018-07-13 Thread engrdean
Thanks Ivan for confirming my suspicions around the query support for Hive. 
I did attempt the SQLServerDialect as well but had the same results.

I did get a success message for the load but no data was actually loaded
into the cache so it appears that was a false positive.





3rd party persistence with Hive

2018-07-12 Thread engrdean
I've been attempting to set up 3rd party persistence with a Hive database but
I'm running into some issues.  I used the Ignite web console to generate
code but when I try to initiate a cache load I receive the error below.  Has
anyone else been successful in using a Hive database with 3rd party
persistence?

Just to be clear, I am not trying to implement the IgniteHadoopFileSystem
implementation of IGFS.  The intent of using 3rd party persistence with Hive
instead of IGFS is to avoid tightly coupling Ignite with our Hadoop
environment.

It appears to me that the SQL being generated by Ignite to communicate with
Hive via the driver is probably not something that Hive can parse, but I
would welcome any feedback and I'm happy to provide config files if that
would be helpful.
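For reference, the failure looks like Ignite's parallel cache load splitting key ranges with a `mod()` call that Hive cannot compile; older Hive versions expose the same arithmetic as `pmod()` or the `%` operator instead. A sketch of the equivalent predicate (table and column names here are illustrative):

```sql
-- What the generated load query appears to use, which Hive rejects:
--   SELECT ... FROM employee WHERE mod(emp_id, 4) = 0;
-- Hive-compatible equivalents of the same range split:
SELECT * FROM employee WHERE pmod(emp_id, 4) = 0;
SELECT * FROM employee WHERE emp_id % 4 = 0;
```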

[15:02:46,049][INFO][mgmt-#568%test%][CacheJdbcPojoStore] Started load cache
[cache=EmployeeCache, keyType=com.model.EmployeeKey]
[15:02:46,088][WARNING][mgmt-#568%test%][CacheJdbcPojoStore] Failed to load
entries from db in multithreaded mode, will try in single thread
[cache=EmployeeCache, keyType=com.model.EmployeeKey]
org.apache.hive.service.cli.HiveSQLException: Error while compiling
statement: FAILED: SemanticException [Error 10011]: Line 1:129 Invalid
function 'mod'
at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:262)
at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:248)
at
org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:300)
at
org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:241)
at
org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:437)
at
org.apache.hive.jdbc.HivePreparedStatement.executeQuery(HivePreparedStatement.java:109)
at
com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery(NewProxyPreparedStatement.java:353)
at
org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.loadCache(CacheAbstractJdbcStore.java:763)
at
org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadCache(GridCacheStoreManagerAdapter.java:520)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.localLoadCache(GridDhtCacheAdapter.java:608)
at
org.apache.ignite.internal.processors.cache.GridCacheProxyImpl.localLoadCache(GridCacheProxyImpl.java:217)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheJob.localExecute(GridCacheAdapter.java:5520)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheJobV2.localExecute(GridCacheAdapter.java:5569)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter$TopologyVersionAwareJob.execute(GridCacheAdapter.java:6184)
at
org.apache.ignite.compute.ComputeJobAdapter.call(ComputeJobAdapter.java:132)
at
org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1855)
at
org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:566)
at
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6623)
at
org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:560)
at
org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:489)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at
org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1123)
at
org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1921)
at
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
at
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
at
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
at
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hive.service.cli.HiveSQLException: Error while
compiling statement: FAILED: SemanticException [Error 10011]: Line 1:129
Invalid function 'mod'
at
org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:323)
at
org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:148)
at
org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:226)
at
org.apache.hive.service.cli.operation.Operation.run(Operation.java:264)
at

Re: Ignite user create/modify trouble

2018-06-07 Thread engrdean
Thanks Andrei, adding quotes and realizing that case matters made all of the
difference.

I did read the documentation around this but my impression was that a
connection via JDBC should be case insensitive and that does not appear to
be the case.  From the documentation:

"For instance, if test was set as a username then:

You can use Test, TEst, TEST and other combinations from JDBC and ODBC.
You have to use TEST as the username from Ignite's native SQL APIs designed
for Java, .NET and other programming languages."

I took that to mean that since I am connecting via JDBC that the username is
case insensitive.  Am I just misunderstanding?
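If it helps the next reader, the behavior appears to be that unquoted identifiers are folded to upper case before the lookup, so the default lower-case user only matches when quoted. A sketch (the password values are placeholders):

```sql
-- Unquoted names are upper-cased before the lookup:
ALTER USER ignite WITH PASSWORD 'test';    -- looks up IGNITE -> "User doesn't exist"
-- Quoting preserves the stored lower-case name of the default user:
ALTER USER "ignite" WITH PASSWORD 'test';  -- matches the default ignite user
```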





Ignite user create/modify trouble

2018-06-06 Thread engrdean
I was very happy to see that the new 2.5 version of Ignite supports
authentication and I have tested this successfully with the default ignite
user.  However, I haven't had much success beyond that.  If I try to change
the password on the default ignite user (ALTER USER ignite WITH PASSWORD
'test';) I get the following error:

[19:09:24,539][SEVERE][client-connector-#95%grid%][JdbcRequestHandler]
Failed to execute SQL query [reqId=0, req=JdbcQueryExecuteRequest
[schemaName=PUBLIC, pageSize=1024, maxRows=0, sqlQry=ALTER USER ignite WITH
PASSWORD 'test', args=[], stmtType=ANY_STATEMENT_TYPE]]
class org.apache.ignite.internal.processors.query.IgniteSQLException:
Operation failed [nodeId=99cb1492-e7b1-4d1e-9cf5-111f21cd86a1,
opId=4901a57d361-acc1d895-072c-4e15-b593-434c52951f7e, err=class
org.apache.ignite.internal.processors.authentication.UserManagementException:
User doesn't exist [userName=IGNITE]]
at
org.apache.ignite.internal.processors.query.h2.ddl.DdlStatementsProcessor.runDdlStatement(DdlStatementsProcessor.java:247)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.tryQueryDistributedSqlFieldsNative(IgniteH2Indexing.java:1572)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1602)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2035)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2030)
at
org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2578)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2044)
at
org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.executeQuery(JdbcRequestHandler.java:456)
at
org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.handle(JdbcRequestHandler.java:203)
at
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:160)
at
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:44)
at
org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
at
org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at
org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at
org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: class
org.apache.ignite.internal.processors.authentication.UserManagementException:
Operation failed [nodeId=99cb1492-e7b1-4d1e-9cf5-111f21cd86a1,
opId=4901a57d361-acc1d895-072c-4e15-b593-434c52951f7e, err=class
org.apache.ignite.internal.processors.authentication.UserManagementException:
User doesn't exist [userName=IGNITE]]
at
org.apache.ignite.internal.processors.authentication.IgniteAuthenticationProcessor$UserOperationFinishFuture.onOperationFailOnNode(IgniteAuthenticationProcessor.java:1182)
at
org.apache.ignite.internal.processors.authentication.IgniteAuthenticationProcessor.onFinishMessage(IgniteAuthenticationProcessor.java:802)
at
org.apache.ignite.internal.processors.authentication.IgniteAuthenticationProcessor.access$700(IgniteAuthenticationProcessor.java:84)
at
org.apache.ignite.internal.processors.authentication.IgniteAuthenticationProcessor$2.onMessage(IgniteAuthenticationProcessor.java:207)
at
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
at
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
at
org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1632)
at
org.apache.ignite.internal.managers.communication.GridIoManager.sendToGridTopic(GridIoManager.java:1715)
at
org.apache.ignite.internal.processors.authentication.IgniteAuthenticationProcessor.sendFinish(IgniteAuthenticationProcessor.java:959)
at
org.apache.ignite.internal.processors.authentication.IgniteAuthenticationProcessor.access$3900(IgniteAuthenticationProcessor.java:84)
at
org.apache.ignite.internal.processors.authentication.IgniteAuthenticationProcessor$UserOperationWorker.body(IgniteAuthenticationProcessor.java:1301)

Re: Heap exhaustion when querying

2018-05-18 Thread engrdean
Thank you again Ilya.  I do have the lazy load function working now and I am
no longer crashing ignite with large result sets.  I do still get OOM
errors, but now it seems more like a tuning issue.

It would be really helpful if anyone can provide more details on how the
lazy load function is working behind the scenes.  The documentation just
says that it is used as a "hint for Ignite to fetch the result set lazily". 
It makes it difficult to plan around what resources are needed without more
details (e.g. does it offload when a particular percentage of heap is
used?).
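For what it's worth, with the thin JDBC driver the lazy hint can also be passed directly in the connection URL rather than per-statement (the host name below is illustrative):

```
jdbc:ignite:thin://myserver.com/?lazy=true
```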

The durable memory tuning documentation is very helpful when it comes to
planning around cache requirements but I'm struggling to optimize the memory
settings to take into account query usage as well.  Any help would be very
much appreciated.





Re: Heap exhaustion when querying

2018-05-17 Thread engrdean
Thanks Ilya, but I don't have any code running on the Ignite servers other
than the Ignite process.  I am connecting with the SQL client DBeaver to run
my query and running DBeaver on a separate machine.

Are there any adjustments I can make on the Ignite server to ensure that an
arbitrary user query can't crash the server?

Alternatively, is there anything I can do to get Ignite to store the result
set off heap?  I am nowhere near exhausting the overall memory on the
servers, just the java heap memory.

I have already implemented all of the memory tuning recommendations in the
latest Ignite docs but that did not seem to help.  Right now I have 10GB of
memory allocated for heap space out of the 64GB available on the server.


ilya.kasnacheev wrote
> Hello!
> 
> Yes, SELECT * FROM cache; will cause problems for you, unless you are
> using
> lazy=true, which isn't default.
> 
> However, if you're using lazy=true, you should probably search for memory
> leaks in your own code (such as trying to keep all results in memory at
> the
> same time instead of iterating).
> 
> Regards,
> 
> -- 
> Ilya Kasnacheev







Re: Heap exhaustion when querying

2018-05-17 Thread engrdean
The query is just:

select * from table1;

My understanding was that 'lazy=false' is the default.  Regardless of
whether I set 'lazy=true' or explicitly set 'lazy=false'  in my jdbc driver
settings then I see the same behavior:

Terminating due to java.lang.OutOfMemoryError: Java heap space

It wasn't clear to me in the documentation, but I thought that off heap
memory is used unless you explicitly tell ignite to use on heap caching.  Am
I confusing two different memory usages?

I understand that a 'select *...' is not going to be a practical query in
most cases and won't take advantage of any indexing, but the purpose of my
asking is to understand how Ignite utilizes its memory when responding to
queries.  Even if I'm not trying to pull back a full result set, I can still
think of lots of situations where the query results (especially with
concurrent queries) will exceed the available heap space.

I'm happy to share additional configuration information that will help and I
really appreciate any feedback.  I have to believe that there is a way to
deal with this problem (probably something simple) that I'm just missing.


