Re: unsubscribe

2020-02-13 Thread Devin Bost
I'm not sure if you sent this to the right place.
Did you get a confirmation email about being unsubscribed?

Devin G. Bost


On Thu, Feb 13, 2020 at 10:23 PM Shahid Muhamed <
shahid.muha...@expeedsoftware.com> wrote:

> unsubscribe
>
> Thanks,
>
> Muhamed Shahid
>


unsubscribe

2020-02-13 Thread Shahid Muhamed
unsubscribe


Thanks,

Muhamed Shahid


Re: JDBC Thin Client does not return

2020-02-13 Thread pg31
Hello

I know everyone is busy with the 2.8 release.
It would be great if someone could spare a little time to look at the above
issue.

Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to failover/scale cluster in Apache Ignite

2020-02-13 Thread wentat
Ok, I'll try to get a reproducer. However, I think it's pretty hard because
the errors seem to be transient, related to failover with a huge dataset
(1 TB plus). My follow-up questions would be:

If kill -9 is not appropriate, what is the graceful way to fail over a node?

For a 1TB dataset, is 30 nodes a good setup? One node takes about 35GB of
RAM, but I have given it 46GB.





Issue with Ignite Logging and Log4j2

2020-02-13 Thread Mitchell Rathbun (BLOOMBERG/ 731 LEX)
I am hoping to use a separate appender for Ignite logs in my application. In my 
configuration file, I have:

[The Log4j2 XML configuration was stripped by the mailing-list archive; it
defined IGNITE and MAIN appenders and an "org.apache.ignite" logger.]
IGNITE and MAIN are both RollingRandomAccessFile appenders pointing to 
different files. In my Java code I have:

File logConfigFile = new File(config.getIgniteGridLoggerXmlPath());
try {
    IgniteLogger logger = new Log4J2Logger(logConfigFile);
    ignCfg.setGridLogger(logger);
} catch (IgniteCheckedException e) {
    LOG.error("Unable to set up IgniteLogger with path {}", logConfigFile, e);
}

When I run my application, Ignite is still sending all of its logs to the same 
file that MAIN points to. Any idea why this doesn't work? Should the name for 
the logger be something different than "org.apache.ignite"?
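For reference, a minimal Log4j2 configuration of this shape (the file names
and patterns here are hypothetical, since the original XML did not survive the
archive) routes Ignite events to a dedicated appender; additivity="false" on
the "org.apache.ignite" logger is what keeps those events out of MAIN:

```xml
<Configuration>
  <Appenders>
    <RollingRandomAccessFile name="MAIN" fileName="logs/app.log"
        filePattern="logs/app-%i.log">
      <PatternLayout pattern="%d %p %c - %m%n"/>
      <Policies><SizeBasedTriggeringPolicy size="10 MB"/></Policies>
    </RollingRandomAccessFile>
    <RollingRandomAccessFile name="IGNITE" fileName="logs/ignite.log"
        filePattern="logs/ignite-%i.log">
      <PatternLayout pattern="%d %p %c - %m%n"/>
      <Policies><SizeBasedTriggeringPolicy size="10 MB"/></Policies>
    </RollingRandomAccessFile>
  </Appenders>
  <Loggers>
    <!-- Without additivity="false" Ignite events also bubble up to Root
         and land in MAIN, which matches the symptom described above. -->
    <Logger name="org.apache.ignite" level="info" additivity="false">
      <AppenderRef ref="IGNITE"/>
    </Logger>
    <Root level="info">
      <AppenderRef ref="MAIN"/>
    </Root>
  </Loggers>
</Configuration>
```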

Re: Load cache data into another POJO with SQL

2020-02-13 Thread Evgenii Zhuravlev
Hi,

You can implement your own CacheStore and transform data in it:
https://apacheignite.readme.io/docs/3rd-party-store#section-custom-cachestore

Best Regards,
Evgenii

On Thu, Feb 13, 2020 at 1:29 PM Denis Magda wrote:

> I doubt that this is supported.
>
> -
> Denis
>
>
> On Thu, Feb 13, 2020 at 11:32 AM Edward Chen  wrote:
>
>> Hello,
>>
>> I am using Ignite SQL and wondering whether it is possible to load cache
>> data into another POJO, just like an ORM would. SQL like this:
>>
>> select new MyPojo(p.name, p.age) from myCacheTable as p where p.age > 30
>>
>> Thanks. Ed
>>
>>
>>
>>


Re: Load cache data into another POJO with SQL

2020-02-13 Thread Denis Magda
I doubt that this is supported.

-
Denis


On Thu, Feb 13, 2020 at 11:32 AM Edward Chen  wrote:

> Hello,
>
> I am using Ignite SQL and wondering whether it is possible to load cache
> data into another POJO, just like an ORM would. SQL like this:
>
> select new MyPojo(p.name, p.age) from myCacheTable as p where p.age > 30
>
> Thanks. Ed
>
>
>
>


Re: REST API on top of ignite using node express

2020-02-13 Thread Denis Magda
Hi Nithin,

1. You can use the Query.setPageSize method to instruct the Cursor to read the
result set in chunks bigger than 1024. However, regardless of the pageSize,
the Cursor returns the whole result:
https://github.com/apache/ignite/blob/master/modules/platforms/nodejs/lib/Query.js#L56

2. That's how cursor._fieldnames is implemented:
https://github.com/apache/ignite/blob/master/modules/platforms/nodejs/lib/Cursor.js#L264

-
Denis


On Mon, Feb 10, 2020 at 11:59 AM nithin91 <
nithinbharadwaj.govindar...@franklintempleton.com> wrote:

> Hi,
>
> We are trying to build a REST API on top of an Ignite cache using node
> express.
>
> Following is the way we are fetching data from Ignite.
>
> await igniteClient.connect(new IgniteClientConfiguration(ENDPOINT));
> const cache = igniteClient.getCache(CacheNAME);
>
> const querysql = new SqlFieldsQuery("SqL");
> const cursor = await cache.query(querysql);
> const row = await cursorProductDetails.getValue();
>
> We are facing the following issues while fetching the data in cursor.
>
> 1. The cursor._values property always has only 1024 rows even though the
> table has 100k rows.
> 2. The cursor._fieldnames property is not displaying the field names, as a
> result of which we have created an array with the list of fields and build a
> list of JSON objects by traversing each row of cursor._values with a map
> function.
>
> Please check below for sample code
>
> var dataProductDetails=cursor._values ;
>
> var res_data_prddetails=[];
>
>  var fields=[field1,field2]
>
> await dataProductDetails.map(function(arr) {
>     var prdobj = {};
>     fields.forEach((k, v) => prdobj[k] = arr[v]);
>     res_data_prddetails.push(prdobj);
> });
>
> Also, can you please let me know whether there is a way to directly convert
> the SQL fields query output to JSON using node express.
>
>


Re: Loading and Fetching the Data using Node js.

2020-02-13 Thread Denis Magda
According to the exception, the field OrderId is of "long" type in
Java and of "double" type in Node.JS (or vice versa). The types of the
fields have to be identical. It seems like OrderId is a primitive field, and
you should probably enforce its type to "long" on the Node.JS end.

Also, check this example that shows how to enforce specific field types
from Node.JS to ensure they match the types of Java counterparts:
https://github.com/apache/ignite/blob/master/modules/platforms/nodejs/examples/CachePutGetExample.js#L73

If nothing works, please share a GitHub project with Java POJOs and
anything else that will help to get to the bottom of this.


-
Denis


On Thu, Feb 13, 2020 at 1:49 AM nithin91 <
nithinbharadwaj.govindar...@franklintempleton.com> wrote:

> Hi,
>
> Pasted below are the code and the error I got. Actually, I am trying to
> query an existing cache using Node.js; the cache is loaded using the Cache
> JDBC POJO Store in Java. It would be really helpful if you could share any
> sample code you have.
>
> PS C:\Users\ngovind\NodeApp> node NodeIgnite.js
> ERROR: Binary type has different field types [typeName=OrderId,
> fieldName=OrderID, fieldTypeName1=long, fieldTypeName2=double]
> (node:13596) UnhandledPromiseRejectionWarning: ReferenceError: igniteClient
> is not defined
> at start (C:\Users\ngovind\NodeApp\NodeIgnite.js:44:5)
> at processTicksAndRejections (internal/process/task_queues.js:93:5)
> (node:13596) UnhandledPromiseRejectionWarning: Unhandled promise rejection.
> This error originated either by throwing inside of an async function
> without
> a catch block, or by rejecting a promise which was not handled with
> .catch(). (rejection id: 1)
> (node:13596) [DEP0018] DeprecationWarning: Unhandled promise rejections are
> deprecated. In the future, promise rejections
> that are not handled will terminate the Node.js process with a non-zero
> exit
> code.
>
>
>
>
> const IgniteClient = require('apache-ignite-client');
> const IgniteClientConfiguration = IgniteClient.IgniteClientConfiguration;
> const ObjectType = IgniteClient.ObjectType;
> const CacheEntry = IgniteClient.CacheEntry;
> const ComplexObjectType = IgniteClient.ComplexObjectType;
>
> class OrderKey {
>     constructor(OrderID = null, CityID = null) {
>         this.OrderID = OrderID;
>         this.CityID = CityID;
>     }
> }
>
> class OrderDetails {
>     constructor(Productname = null, CustomerName = null, StoreName = null) {
>         this.Productname = Productname;
>         this.CustomerName = CustomerName;
>         this.StoreName = StoreName;
>     }
> }
>
> async function start() {
>     try {
>         const igniteClient = new IgniteClient();
>         await igniteClient.connect(
>             new IgniteClientConfiguration('127.0.0.1:10800'));
>         const cache = await igniteClient.getCache('OrdersCache');
>         const OrderKeyComplexObjectType =
>             new ComplexObjectType(new OrderKey(0, 0), 'OrderId');
>         const OrderComplexObjectType =
>             new ComplexObjectType(new OrderDetails('', '', ''), 'Orders');
>
>         cache.setKeyType(OrderKeyComplexObjectType)
>             .setValueType(OrderComplexObjectType);
>
>         const data = await cache.get(new OrderKey(1, 1));
>         console.log(data.Productname);
>     }
>     catch (err) {
>         console.log('ERROR: ' + err.message);
>     }
>     finally {
>         // NOTE: igniteClient is declared inside the try block, so it is out
>         // of scope here -- this is what causes the ReferenceError above.
>         igniteClient.disconnect();
>         console.log(" Data Fetch Completed");
>     }
> }
>
> start();
>
>
>
>


Load cache data into another POJO with SQL

2020-02-13 Thread Edward Chen

Hello,

I am using Ignite SQL and wondering whether it is possible to load cache data
into another POJO, just like an ORM would. SQL like this:


select new MyPojo(p.name, p.age) from myCacheTable as p where p.age > 30

Thanks. Ed
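One workable alternative is to do the projection client-side over
SqlFieldsQuery results, which arrive as lists of column values. A minimal
sketch, assuming a hypothetical MyPojo and hand-written sample rows rather
than real query output:

```java
import java.util.List;
import java.util.stream.Collectors;

public class PojoMapping {
    // Hypothetical target POJO from the question above.
    static class MyPojo {
        final String name;
        final int age;
        MyPojo(String name, int age) { this.name = name; this.age = age; }
        @Override public String toString() { return name + ":" + age; }
    }

    // Each SqlFieldsQuery row is a list of column values in SELECT order
    // (here: name, age), so "select new MyPojo(...)" becomes a map() step.
    static List<MyPojo> toPojos(List<List<?>> rows) {
        return rows.stream()
            .map(r -> new MyPojo((String) r.get(0), ((Number) r.get(1)).intValue()))
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<List<?>> rows = List.of(List.of("Ann", 34), List.of("Bob", 41));
        System.out.println(toPojos(rows)); // prints [Ann:34, Bob:41]
    }
}
```

The same mapping works unchanged on the row lists returned by
query(new SqlFieldsQuery("select name, age from ...")).getAll().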





Re: where to download odbc driver ?

2020-02-13 Thread Mikhail
Hi Ed,

Please read the doc:
https://www.gridgain.com/docs/latest/developers-guide/SQL/ODBC/odbc-driver#installing-odbc-driver

you can find binaries here: %IGNITE_HOME%\platforms\cpp\bin\odbc\

Thanks,
Mike.





Re: Dynamic Cache Change not allowed

2020-02-13 Thread Evgenii Zhuravlev
Hi,

The message says: "Failed to execute dynamic cache change request, client
node disconnected". So, it means that your client node is not connected to
the cluster at this moment.

It looks like you have connectivity issues between your local machine and
the remote server. I would recommend checking that all ports are open and
that you can connect without any issues.

Evgenii

On Thu, Feb 13, 2020 at 4:53 AM nithin91 <
nithinbharadwaj.govindar...@franklintempleton.com>:

> Also, one important observation: we are not getting the
> "dynamic cache change is not allowed" error when the Ignite server node and
> client node are running on the local machine. We get this error only when
> the server node is running on Unix and we try to connect to it from the
> local system. Does the entire project have to be deployed on Unix?
>
>
>
>


Re: Nodes started on local machine require more than 80% of physical RAM

2020-02-13 Thread Mikhail
> but 4GB for container OS seems a bit much. Thanks for letting me know in
any case! 

Absolutely agree with you that 4GB is too much, especially for the container
environment.
I think the person who wrote the check had big bare-metal installations with
128GB+ RAM in mind, but you can just ignore this warning; as I said, just
make sure that required mem < total mem.

Thanks,
Mike.





where to download odbc driver ?

2020-02-13 Thread Edward Chen

Hello,

As per the Ignite docs, Ignite is shipped with pre-built ODBC installers for
Windows. I cannot find any ODBC MSI file in apache-ignite-2.7.6-bin.zip.
Do you know how to get the ODBC driver?


https://apacheignite-sql.readme.io/docs/odbc-driver#building-odbc-driver

Thanks. Ed




Re: Ignite Cluster and Kubernetes Cluster

2020-02-13 Thread narges saleh
Thanks for the reply.
So, what I need is to set TcpDiscoveryKubernetesIpFinder.namespaceName to a
different namespace for each cluster and declare the namespace in the
related ignite connector service yaml file?

On Wed, Feb 12, 2020 at 10:19 PM pg31  wrote:

> Yes. You should deploy them in a different namespace.
>
>
>
>


Re: Slow cache updates with indexing module enabled

2020-02-13 Thread xero
Hi Andrei, thanks for taking the time to answer my question. I will consider
your suggestion if we decide to switch to a multiple-tables approach that
would require those JOIN considerations. But in this case we have only one
cache, and the operation we are executing is an update. We tried using
SQL UPDATE, but we also tried using a CacheEntryProcessor directly. My
question is: what happens to all those indexes when an entry is updated but
none of the indexed fields (except one) change? In our case, we are only
flipping the boolean value of a single field. Does this change trigger
updates in ALL the indexes associated with the cache?

Cache is like this (with indexes on all fields):
id|(other fields)|segment_1|segment_2|segment_3|...|segment_99|segment_100

Then we try updating a batch of entries with an invokeAll using a
CacheEntryProcessor:

public Void process(MutableEntry entry, Object... arguments) {
    final BinaryObjectBuilder builder =
        entry.getValue().toBuilder().setField("SEGMENT_1", true);
    entry.setValue(builder.build());
    return null;
}
When we update the SEGMENT_1 field with true, are the other 99 indexes also
updated? The tickets I mentioned seem to be related, but I would like to
have your confirmation.





Re: Ignite yarn resources keep on increasing

2020-02-13 Thread ChandanS
Hi Andrei,

I am using below configurations:

IGNITE_NODE_COUNT=40
IGNITE_RUN_CPU_PER_NODE=5
IGNITE_MEMORY_PER_NODE=15000
IGNITE_PATH=/project/ecpdevbermuda/ignite/apache-ignite-2.7.0-bin.zip
IGNITE_VERSION=2.7.0
IGNITE_WORKING_DIR=/project/ecpdevbermuda/ignite/
IGNITE_XML_CONFIG=/project/ecpdevbermuda/ignite/ignite-config-unifiedignite-la1.xml
IGNITE_USERS_LIBS=/project/ecpdevbermuda/ignite/libs/
IGNITE_QUEUE=root.ecpdevbermuda
IGNITE_CLUSTER_NAME=titan-resourceUP

Part of the logs:

20/02/11 06:43:43 INFO spark.ContextCleaner: Cleaned shuffle 0
20/02/11 06:43:43 INFO spark.ContextCleaner: Cleaned accumulator 10
20/02/11 06:43:43 INFO spark.ContextCleaner: Cleaned accumulator 9
20/02/11 06:43:43 INFO spark.ContextCleaner: Cleaned accumulator 8
20/02/11 06:43:43 INFO spark.ContextCleaner: Cleaned accumulator 7
20/02/11 06:43:43 INFO spark.ContextCleaner: Cleaned accumulator 6
20/02/11 06:43:43 INFO spark.ContextCleaner: Cleaned accumulator 5
20/02/11 06:43:43 INFO spark.ContextCleaner: Cleaned accumulator 4
20/02/11 06:43:43 INFO spark.ContextCleaner: Cleaned accumulator 3
20/02/11 06:43:43 INFO ignite.IgniteDataLoader: >>> Starting Ignite cluster
class org.apache.ignite.IgniteIllegalStateException: Ignite instance with
provided name doesn't exist. Did you call Ignition.start(..) to start an
Ignite instance? [name=titan-resourceUP]
at org.apache.ignite.internal.IgnitionEx.grid(IgnitionEx.java:1390)
at org.apache.ignite.Ignition.ignite(Ignition.java:531)
at
un.api.dataloader.ignite.IgniteDataLoader.liftedTree1$1(IgniteDataLoader.scala:707)
at
un.api.dataloader.ignite.IgniteDataLoader.getIgnite(IgniteDataLoader.scala:706)
at
un.api.dataloader.ignite.IgniteDataLoader.createOfficialNameOACache(IgniteDataLoader.scala:737)
at
un.api.dataloader.ignite.IgniteDataLoader.loadOfficialNameCache(IgniteDataLoader.scala:308)
at
un.api.dataloader.ignite.IgniteServerDataLoader$.loadOAData(IgniteServerDataLoader.scala:35)
at
un.api.StartStandalone$.startIgniteAndDataloading(StartStandalone.scala:98)
at
un.api.StartStandalone$.triggerIgniteStandAlone(StartStandalone.scala:43)
at 
un.api.StartStandalone$delayedInit$body.apply(StartStandalone.scala:18)
at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
at 
scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.collection.immutable.List.foreach(List.scala:318)
at
scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
at scala.App$class.main(App.scala:71)
at un.api.StartStandalone$.main(StartStandalone.scala:17)
at un.api.StartStandalone.main(StartStandalone.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:567)
20/02/11 06:43:43 INFO ignite.IgniteDataLoader: >>> Reading Ignite Client
config from: /project/ecpdevbermuda/ignite/ignite-config-unifiedignite.xml
20/02/11 06:43:43 INFO ignite.IgniteDataLoader: >>> Starting Ignite Client
with cluster name: titan-resourceUP
20/02/11 06:43:43 INFO xml.XmlBeanDefinitionReader: Loading XML bean
definitions from InputStream resource [resource loaded through InputStream]
20/02/11 06:43:44 INFO support.GenericApplicationContext: Refreshing
org.springframework.context.support.GenericApplicationContext@7b544245:
startup date [Tue Feb 11 06:43:44 UTC 2020]; root of context hierarchy
20/02/11 06:43:44 WARN : Failed to resolve default logging config file:
config/java.util.logging.properties
Console logging handler is not configured.
20/02/11 06:43:44 INFO internal.IgniteKernal%titan-resourceUP: 

>>>__    
>>>   /  _/ ___/ |/ /  _/_  __/ __/  
>>>  _/ // (7 7// /  / / / _/
>>> /___/\___/_/|_/___/ /_/ /___/   
>>> 
>>> ver. 2.7.0#20181130-sha1:256ae401
>>> 2018 Copyright(C) Apache Software Foundation
>>> 
>>> Ignite documentation: http://ignite.apache.org

20/02/11 06:43:44 INFO internal.IgniteKernal%titan-resourceUP: Config URL:
n/a
20/02/11 06:43:44 INFO internal.IgniteKernal%titan-resourceUP:
IgniteConfiguration [igniteInstanceName=titan-resourceUP, pubPoolSize=48,
svcPoolSize=48, callbackPoolSize=48, stripedPoolSize=48, sysPoolSize=48,
mgmtPoolSize=4, igfsPoolSize=48, dataStreamerPoolSize=48,
utilityCachePoolSize=48, utilityCacheKeepAliveTime=6, p2pPoolSize=2,
qryPoolSize=48, igniteHome=null,

Re: How to failover/scale cluster in Apache Ignite

2020-02-13 Thread Vladimir Pligin
Hi, I'll try to do my best to help you.

>> Is kill -9  the right way to kill a node?

No, I don't think this is the right way. 

>> How about re-adding new nodes that were previously killed?

You should clean a node's work directory before re-adding.

>> How long does it take for the nodes to synchronise?

It depends on your network, data volume, disk(s) speed, data storage
configuration etc.

>> How do we know when a rebalance is completed?

You'll see a message in a log. Or you can use WebConsole.


By the way, it would be great if you could provide some sort of reproducer
to help us review your scenario.







Re: Ignite yarn resources keep on increasing

2020-02-13 Thread Andrei Aleksandrov

Hi,

Could you please provide more details:

1) Your configuration and environment variables (IGNITE_PATH?)
2) The logs of your Ignite nodes where you see the mentioned exception.

IGNITE_PATH should be a path to an unzipped Ignite distribution, not a URL.
Is it possible that you didn't unzip the binaries or forgot to copy the
binaries to some node?


BR,
Andrei

On 2/13/2020 1:12 PM, ChandanS wrote:

I am using Ignite version 2.7 for the Ignite YARN deployment. I have my own
Spark application that starts an Ignite YARN cluster and loads data into
Ignite. It works fine in positive scenarios, but whenever there is an
exception on the ignite-yarn.jar side, such as a wrong path in some property
(IGNITE_PATH), the resource usage keeps increasing at some time interval.
I started my application with --num-executors 40 --executor-cores 2;
currently, after keeping the application up for the last 10 hrs, the number
of executors is 461 and cores 921, with memory increasing as well. I am
getting the below exception from the ignite-yarn application:

class org.apache.ignite.IgniteException: Failed to start manager:
GridManagerAdapter [enabled=true,
name=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager]





Re: Dynamic Cache Change not allowed

2020-02-13 Thread nithin91
Also, one important observation: we are not getting the
"dynamic cache change is not allowed" error when the Ignite server node and
client node are running on the local machine. We get this error only when
the server node is running on Unix and we try to connect to it from the
local system. Does the entire project have to be deployed on Unix?





Re: Scheduling Cache Refresh using Ignite

2020-02-13 Thread Andrei Aleksandrov

Hi,

Can you please attach the full logs with the mentioned exception? BTW, I
don't see any attachments in the previous message (probably the user list
can't deliver them).


BR,
Andrei

On 2/13/2020 3:44 PM, nithin91 wrote:

Attached the bean file used





Re: Scheduling Cache Refresh using Ignite

2020-02-13 Thread nithin91
Following is the java code that loads the cache.

package Load;

import java.sql.Types;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStore;
import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
import org.apache.ignite.cache.store.jdbc.JdbcType;
import org.apache.ignite.cache.store.jdbc.JdbcTypeField;
import org.apache.ignite.cache.store.jdbc.dialect.OracleDialect;
import org.apache.ignite.configuration.CacheConfiguration;
import ignite.example.IgniteUnixImplementation.OrderDetails;
import ignite.example.IgniteUnixImplementation.OrderKey;

public class OrdersLoad {

private static final class CacheJdbcPojoStoreExampleFactory extends
CacheJdbcPojoStoreFactory {
/** Serial version UID. */
private static final long serialVersionUID = 1L;

/** {@inheritDoc} */
@Override public CacheJdbcPojoStore create()
{

setDataSourceBean("dataSource");
return super.create();
}
}


private static CacheConfiguration
cacheConfiguration() {
CacheConfiguration cfg = new
CacheConfiguration<>("OrdersCache");

CacheJdbcPojoStoreExampleFactory storefactory =new
CacheJdbcPojoStoreExampleFactory();

storefactory.setDialect(new OracleDialect());

storefactory.setDataSourceBean("dataSource");

JdbcType jdbcType = new JdbcType();

jdbcType.setCacheName("OrdersCache");
jdbcType.setDatabaseSchema("PDS_CACHE");
jdbcType.setDatabaseTable("ORDERS2");

jdbcType.setKeyType("ignite.example.IgniteUnixImplementation.OrderKey");
jdbcType.setKeyFields(new JdbcTypeField(Types.INTEGER, "ORDERID",
Long.class, "OrderID"),
new JdbcTypeField(Types.INTEGER, "CITYID", Long.class, "CityID")


);

   
jdbcType.setValueType("ignite.example.IgniteUnixImplementation.OrderDetails");
jdbcType.setValueFields(
new JdbcTypeField(Types.VARCHAR, "PRODUCTNAME", String.class,
"Productname"),
new JdbcTypeField(Types.VARCHAR, "CUSTOMERNAME", String.class,
"CustomerName"),
new JdbcTypeField(Types.VARCHAR, "STORENAME", String.class,
"StoreName")
);

storefactory.setTypes(jdbcType);

cfg.setCacheStoreFactory(storefactory);

cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);

cfg.setReadThrough(true);
cfg.setWriteThrough(true);
cfg.setSqlSchema("PIE");

return cfg;
}

public static void main(String[] args) throws Exception {
try (Ignite ignite = Ignition.start("Ignite-Client.xml")) {

System.out.println(">>> Loading cache OrderDetails");

IgniteCache cache =
ignite.getOrCreateCache(cacheConfiguration());

cache.clear();

ignite.cache("OrdersCache").loadCache(null);

System.out.println(">>> Loaded cache: OrdersCache
Size="+cache.size());

}
}
}







Re: Scheduling Cache Refresh using Ignite

2020-02-13 Thread nithin91
Attached the bean file used





Re: Scheduling Cache Refresh using Ignite

2020-02-13 Thread nithin91
Thanks aealexsandrov. This information is very useful.

I have one more query.

Currently, as part of a POC, we installed Ignite on Unix and are trying to
load data from an Oracle DB into an Ignite cache using the Cache JDBC POJO
Store.

As part of this process, a bean file is custom-configured to start the
Ignite node on Unix (the bean file is attached). This bean file contains
both the cache configuration details and the Ignite configuration details.

Once the node is running, we are trying to do the following:

1. Connect to the Ignite node running on Unix from Eclipse by creating a
replica of the attached bean file on the local system, adding the property
clientMode=true, and then loading the caches defined in the bean file
deployed on Unix with the following Java call:

ignite.cache("CacheName").loadCache(null);

*We are able to do this successfully.*

2. Connect to the Ignite node running on Unix by creating a replica of the
attached bean file on the local system, adding clientMode=true, and then
trying to create and configure a cache and finally load it using the
attached Java code.

*When we try this approach, we get an error like "dynamic cache change is
not allowed". We do not get this error when the Ignite server node and
client node run on the local machine; we get it only when the server node is
running on Unix and we connect to it from the local system.*

It would be really helpful if you can help me in resolving this issue.

If this is not the right approach, is configuring all the caches in the bean
file the only available option? If so, what should be the approach for
building additional caches in Ignite and loading them using the Cache JDBC
POJO Store while the node is running?



Re: Slow cache updates with indexing module enabled

2020-02-13 Thread Andrei Aleksandrov

Hi,

SQL query performance can suffer for several reasons:

1) Incorrect indexes. Please check that your EXPLAIN output uses indexes and
doesn't contain scans for joins:

INNER JOIN PUBLIC.PERSON P__Z1
    /* PUBLIC.PERSON.__SCAN_ */

Probably the inline size for the index used is incorrect, or the wrong index
is used.

To solve this problem you should calculate the inline size for every index
and check that the correct index is used in the EXPLAIN of your query. Here
is an example of how the inline size for each field type can be calculated:


long:

    0     1       9
    | tag | value |

    Total: 9 bytes

int:

    0     1       5
    | tag | value |

    Total: 5 bytes

String:

    0     1      3             N
    | tag | size | UTF-8 value |

    Total: 3 + string length

POJO (BinaryObject):

    0     1      3     4      8          12        16     20       24            32        N
    | tag | size | tag | size | BO flags | type ID | hash | length | schema info | BO body |
                 |                 binary object header                          |

    Total: 32 + N
2) GC pauses because of query execution without the LAZY flag.

3) In the case of multiple joins, the order of the joins can be suboptimal
because of specifics of the H2 optimizer used in Ignite.

To fix this problem you should prepare the correct join order and set the
"enforce join order" flag. When the small table is listed first and the big
table is joined to it, the query is faster than the other way around:

select * from SMALLTABLE, BIGTABLE where SMALLTABLE.id = BIGTABLE.id  -- correct
select * from BIGTABLE, SMALLTABLE where SMALLTABLE.id = BIGTABLE.id  -- incorrect

Check the join order using the EXPLAIN command.
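The per-type sizes above can be turned into a quick estimate for an index
inline size. A rough sketch under those sizes (the helper below is
illustrative only, not an Ignite API):

```java
// Rough inline-size estimator based on the per-type layouts above:
// long = 9 bytes, int = 5 bytes, String = 3 bytes + inlined length.
// Illustrative only -- not an Ignite API.
public class InlineSizeEstimator {
    static final int LONG_BYTES = 9;  // 1-byte tag + 8-byte value
    static final int INT_BYTES = 5;   // 1-byte tag + 4-byte value

    // 1-byte tag + 2-byte size + the number of string bytes to inline.
    static int stringBytes(int inlinedChars) {
        return 3 + inlinedChars;
    }

    // Inline size of an index is the sum of its fields' inline sizes.
    static int estimate(int... fieldSizes) {
        int total = 0;
        for (int size : fieldSizes)
            total += size;
        return total;
    }

    public static void main(String[] args) {
        // Index on (long id, String name), inlining 16 chars of the string.
        System.out.println(estimate(LONG_BYTES, stringBytes(16))); // prints 28
    }
}
```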

BR,
Andrei

On 2/12/2020 11:24 PM, xero wrote:

Hi,
We are experiencing slow updates to a cache with multiple indexed fields
(around 25 indexes during testing, but we expect to have many more) for
updates that change only one field. Basically, we have a
customer-belongs-to-segment relationship and one column per segment. Only
one column is updated, with a 1 or 0, depending on whether the customer
belongs to the segment.

During testing, we tried dropping half of the unrelated indexes (indexes
over fields that are not being updated) and doubled the performance: we
went from roughly 1k ops to 2k ops.

We found these cases may be related:
https://cwiki.apache.org/confluence/display/IGNITE/IEP-19%3A+SQL+index+update+optimizations
https://issues.apache.org/jira/browse/IGNITE-7015?src=confmacro

Could you please confirm us if IGNITE-7015 could be related to this
scenario? If yes, do you have any plans to continue the development of the
fix?


We are using Ignite 2.7.6 with 10 nodes, 2 backups, indexing module enabled
and persistence.

Cache Configuration: [name=xdp-contactcomcast-1, grpName=null,
memPlcName=xdp, storeConcurrentLoadAllThreshold=5, rebalancePoolSize=2,
rebalanceTimeout=1, evictPlc=null, evictPlcFactory=null,
onheapCache=false, sqlOnheapCache=false, sqlOnheapCacheMaxSize=0,
evictFilter=null, eagerTtl=true, dfltLockTimeout=0, nearCfg=null,
writeSync=PRIMARY_SYNC, storeFactory=null, storeKeepBinary=false,
loadPrevVal=false, aff=RendezvousAffinityFunction [parts=1024, mask=1023,
exclNeighbors=false, exclNeighborsWarn=false, backupFilter=null,
affinityBackupFilter=null], cacheMode=PARTITIONED, atomicityMode=ATOMIC,
backups=2, invalidate=false, tmLookupClsName=null, rebalanceMode=ASYNC,
rebalanceOrder=0, rebalanceBatchSize=524288, rebalanceBatchesPrefetchCnt=2,
maxConcurrentAsyncOps=500, sqlIdxMaxInlineSize=-1, writeBehindEnabled=false,
writeBehindFlushSize=10240, writeBehindFlushFreq=5000,
writeBehindFlushThreadCnt=1, writeBehindBatchSize=512,
writeBehindCoalescing=true, maxQryIterCnt=1024,
affMapper=org.apache.ignite.internal.processors.cache.CacheDefaultBinaryAffinityKeyMapper@db5e319,
rebalanceDelay=0, rebalanceThrottle=0, interceptor=null,
longQryWarnTimeout=3000, qryDetailMetricsSz=0, readFromBackup=true,
nodeFilter=IgniteAllNodesPredicate [], sqlSchema=XDP_CONTACTCOMCAST_1,
sqlEscapeAll=false, cpOnRead=true, topValidator=null, partLossPlc=IGNORE,
qryParallelism=1, evtsDisabled=false, encryptionEnabled=false]


Thanks,










Re: Scheduling Cache Refresh using Ignite

2020-02-13 Thread Andrei Aleksandrov

Hi,

Please read my comments:

1) Ignite generally doesn't support changing the cache configuration
without re-creating the cache. But for SQL caches that were
created via QueryEntity or CREATE TABLE you can add and remove
columns using ALTER TABLE commands:


https://apacheignite-sql.readme.io/docs/alter-table
https://apacheignite.readme.io/docs/cache-queries#query-configuration-using-queryentity
https://apacheignite-sql.readme.io/docs/create-table
2) First of all, you can use the following options:

https://apacheignite.readme.io/docs/3rd-party-store#section-read-through-and-write-through

Read-through loads the requested keys from the DB.
Write-through propagates all updates to the DB.

If you require some cache invalidation or refresh, you can create a cron job
for it.

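A sketch of such a scheduled refresh using only the JDK scheduler;
refreshCache() below is a hypothetical stand-in for a real
cache.loadCache(null) call, not an Ignite API:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CacheRefreshScheduler {
    // Counts refreshes so the demo can show the task really ran.
    static final AtomicInteger refreshes = new AtomicInteger();

    // Stand-in for something like ignite.cache("OrdersCache").loadCache(null).
    static void refreshCache() {
        refreshes.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
        // In production the period would be minutes or hours, not millis.
        scheduler.scheduleAtFixedRate(CacheRefreshScheduler::refreshCache,
            0, 50, TimeUnit.MILLISECONDS);
        Thread.sleep(200);
        scheduler.shutdown();
        scheduler.awaitTermination(1, TimeUnit.SECONDS);
        System.out.println(refreshes.get() >= 2); // prints true
    }
}
```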

3) I guess that loadCache is the only way to do it. It will filter the
values that already exist in the cache:


https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#loadCache-org.apache.ignite.lang.IgniteBiPredicate-java.lang.Object...-

4) You can use various integrations that can do distributed streaming to
Ignite, such as Spark or Kafka:


https://apacheignite-mix.readme.io/docs/getting-started

BR,
Andrei
On 2/12/2020 9:11 PM, nithin91 wrote:

Hi,

We are doing a POC exploring Ignite's in-memory capabilities and building a
REST API on top of it using node express.

Currently, as part of the POC, we installed Ignite on Unix and are trying to
load data from an Oracle DB into the Ignite cache using the Cache JDBC POJO
Store.

Can someone tell me whether the following scenarios can be handled with
Ignite? I couldn't find this in the official documentation.

1. If we want to add/drop/modify a column of the cache, can we update the
bean file directly while the node is running, or do we need to stop the node
and then restart it? It would be really helpful if you can share sample code
or a documentation link.

2. How can we refresh the Ignite cache automatically or schedule a cache
refresh? It would be really helpful if you can share sample code or a
documentation link.

3. Is incremental refresh allowed? It would be really helpful if you can
share sample code or a documentation link.

4. Is there any other way to load the caches quickly other than the Cache
JDBC POJO Store? It would be really helpful if you can share sample code or
a documentation link.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite on YARN doesn't start

2020-02-13 Thread Andrei Aleksandrov

Hi,

I asked you to check it because I see the following option:

IGNITE_PATH = /tmp/ignite/apache-ignite-2.7.6-bin.zip

This option should be a path to an *unpacked* Ignite distribution, not to 
a zip archive.

I also see that the IGNITE_URL option is commented out:

#IGNITE_URL =
http://ambari1.dmz.loc:/filebrowser/view=/tmp/ignite/apache-ignite-2.7.6-bin.zip

So it looks like you don't provide the Ignite binaries to your YARN 
deployment.
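Based on the above, the relevant part of the deployment properties file would look something like this (paths and URL are illustrative; use whichever of the two options fits your setup):

```
# Path to an already-unpacked Ignite distribution (not a zip):
IGNITE_PATH = /tmp/ignite/apache-ignite-2.7.6-bin
# ...or let the deployer fetch the binaries itself from a URL:
# IGNITE_URL = https://archive.apache.org/dist/ignite/2.7.6/apache-ignite-2.7.6-bin.zip
```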


BR,
Andrei

2/11/2020 9:19 PM, v.shinkevich wrote:

aealexsandrov wrote

1) check that Ignite libs (from ignite_binaries/libs) are available for
your YARN deployment.
2) check that path to the configuration file is reachable from every node

1) I don't understand what I need to check. Where should these libs be? Do
I need to unpack the distribution? To a local folder or to HDFS?

My /tmp/ignite folder (on HDFS; locally the same content plus an unpacked
distro for a local-run check):

On HDFS I don't have any logs. Only one jar in workdir.


Log of local run:
[root@dn07 /tmp/ignite/apache-ignite-2.7.6-bin/bin]# ./ignite.sh

[20:54:37]__  
[20:54:37]   /  _/ ___/ |/ /  _/_  __/ __/
[20:54:37]  _/ // (7 7// /  / / / _/
[20:54:37] /___/\___/_/|_/___/ /_/ /___/
[20:54:37]
[20:54:37] ver. 2.7.6#20190911-sha1:21f7ca41
[20:54:37] 2019 Copyright(C) Apache Software Foundation
[20:54:37]
[20:54:37] Ignite documentation: http://ignite.apache.org
[20:54:37]
[20:54:37] Quiet mode.
[20:54:37]   ^-- Logging to file
'/tmp/ignite/apache-ignite-2.7.6-bin/work/log/ignite-e2eeb3da.0.log'
[20:54:37]   ^-- Logging by 'JavaLogger [quiet=true, config=null]'
[20:54:37]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[20:54:37]
[20:54:37] OS: Linux 3.10.0-693.el7.x86_64 amd64
[20:54:37] VM information: Java(TM) SE Runtime Environment 1.8.0_141-b15
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.141-b15
[20:54:38] Please set system property '-Djava.net.preferIPv4Stack=true' to
avoid possible problems in mixed environments.
[20:54:38] Configured plugins:
[20:54:38]   ^-- None
[20:54:38]
[20:54:38] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
[tryStop=false, timeout=0, super=AbstractFailureHandler
[ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED,
SYSTEM_CRITICAL_OPERATION_TIMEOUT
Java HotSpot(TM) 64-Bit Server VM warning: sched_getaffinity failed (Invalid
argument)- using online processor count (192) which may exceed available
processors
[20:54:38] Message queue limit is set to 0 which may lead to potential OOMEs
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
message queues growth on sender and receiver sides.
[20:54:39] Security status [authentication=off, tls/ssl=off]
[20:54:44] Performance suggestions for grid  (fix if possible)
[20:54:44] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[20:54:44]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM
options)
[20:54:44]   ^-- Specify JVM heap max size (add '-Xmx[g|G|m|M|k|K]' to
JVM options)
[20:54:44]   ^-- Set max direct memory size if getting 'OOME: Direct buffer
memory' (add '-XX:MaxDirectMemorySize=[g|G|m|M|k|K]' to JVM options)
[20:54:44]   ^-- Disable processing of calls to System.gc() (add
'-XX:+DisableExplicitGC' to JVM options)
[20:54:44]   ^-- Speed up flushing of dirty pages by OS (alter
vm.dirty_expire_centisecs parameter by setting to 500)
[20:54:44] Refer to this page for more performance suggestions:
https://apacheignite.readme.io/docs/jvm-and-system-tuning
[20:54:44]
[20:54:44] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[20:54:44] Data Regions Configured:
[20:54:44]   ^-- default [initSize=256.0 MiB, maxSize=403.0 GiB,
persistence=false]
[20:54:44]
[20:54:44] Ignite node started OK (id=e2eeb3da)
^C
[20:55:19] Ignite node stopped OK [uptime=00:00:35.686]






Re: Using EntryProcessor arguments recommendations

2020-02-13 Thread Andrei Aleksandrov

Hi,

I suggest reading the documentation:

EntryProcessor:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/CacheEntry.html
Invoke java doc:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#invoke-K-org.apache.ignite.cache.CacheEntryProcessor-java.lang.Object...-

CacheAtomicityMode specific:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/CacheAtomicityMode.html

Note that invoke and invokeAll lock the keys they operate on. This means 
a deadlock is possible in the following cases:


1) You access other keys from inside the EntryProcessor.
2) You pass an unordered map to invokeAll; a TreeMap is suggested.
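The TreeMap point can be demonstrated with plain JDK collections: however the keys were collected, a TreeMap hands them out in ascending order, so every caller of invokeAll ends up locking them in the same sequence. A stdlib-only sketch (orderedArgs is a made-up helper, not an Ignite API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

public class OrderedInvokeKeys {
    // Builds the argument map for invokeAll(): a TreeMap guarantees that
    // every caller iterates (and therefore locks) keys in ascending order.
    static SortedMap<Integer, String> orderedArgs(List<Integer> keys, String arg) {
        SortedMap<Integer, String> map = new TreeMap<>();
        for (Integer k : keys)
            map.put(k, arg);
        return map;
    }

    public static void main(String[] args) {
        // Two callers collect the same keys in different orders...
        SortedMap<Integer, String> a = orderedArgs(Arrays.asList(42, 7, 19), "argA");
        SortedMap<Integer, String> b = orderedArgs(Arrays.asList(19, 42, 7), "argB");

        // ...but both lock 7, then 19, then 42: no lock-order inversion.
        System.out.println(a.keySet()); // [7, 19, 42]
        System.out.println(b.keySet()); // [7, 19, 42]
    }
}
```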

BR,
Andrei

2/12/2020 12:18 AM, Григорий Доможиров wrote:

I see two options for using an EntryProcessor:
1. Pass arguments like this:
cache.invoke(key, new CustomProcessor(), someValue)
2. Pass a stateful EntryProcessor like this:
  cache.invoke(key, new CustomProcessor(someValue))

Are there any recommendations?
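The two call shapes differ only in where the value travels: as an invoke argument, or as processor state captured at construction. A stdlib-only mock of the two shapes (Processor, APPEND_ARG, and appendValue are made-up stand-ins, not Ignite API):

```java
import java.util.HashMap;
import java.util.Map;

public class ProcessorStyles {
    // Minimal stand-in for javax.cache.processor.EntryProcessor.
    interface Processor {
        String process(Map<String, String> cache, String key, Object... args);
    }

    // Style 1: stateless processor; the value arrives as an invoke argument.
    static final Processor APPEND_ARG =
        (cache, key, args) -> cache.merge(key, (String) args[0], String::concat);

    // Style 2: stateful processor; the value is captured at construction time.
    static Processor appendValue(String value) {
        return (cache, key, args) -> cache.merge(key, value, String::concat);
    }

    public static void main(String[] args) {
        Map<String, String> cache = new HashMap<>();
        APPEND_ARG.process(cache, "k", "a");  // like invoke(key, proc, arg)
        appendValue("b").process(cache, "k"); // like invoke(key, new Proc(arg))
        System.out.println(cache.get("k"));   // prints "ab"
    }
}
```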


Re: JDBC thin client incorrect security context

2020-02-13 Thread Andrei Aleksandrov

Hi,

I see that you found the ticket related to the current issue:

https://issues.apache.org/jira/browse/IGNITE-12589

Looks like it could be the reason for your problem.

Generally, I don't know how you implemented your security plugin, but if 
you take a look at a similar plugin from a third-party vendor, 
you can see that the subjectID should be tied to the user 
connection/session, not to the node where the task is executed (yes, 
every node has its own subjectID and user, but a JDBC connection from a 
different user should have its own subjectID).


How it is implemented there, in general terms:

1) JDBC supports username and password fields:

https://apacheignite-sql.readme.io/docs/jdbc-driver#section-parameters
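For reference, the credentials go directly into the thin-driver connection string (host, user, and password below are placeholders):

```
jdbc:ignite:thin://127.0.0.1:10800;user=jdbcUser;password=secret
```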

2) Every user session/connection is mapped to a SecuritySubject (which 
contains the subjectID).


3) Every event that contains a subjectID can be linked to a user 
connection (SecuritySubject.login()) using the following code:


public class EventStorageSpi extends IgniteSpiAdapter implements EventStorageSpi {
    @LoggerResource
    private IgniteLogger log;

    @Override
    public <T extends Event> Collection<T> localEvents(IgnitePredicate<T> p) {
        return null;
    }

    @Override
    public void record(Event evt) throws IgniteSpiException {
        if (evt.type() == EVT_MANAGEMENT_TASK_STARTED) {
            TaskEvent taskEvent = (TaskEvent) evt;

            SecuritySubject subj = taskEvent.subjectId() != null
                ? getSpiContext().authenticatedSubject(taskEvent.subjectId())
                : null;

            log.info("Management task started: [" +
                "name=" + taskEvent.taskName() + ", " +
                "eventNode=" + taskEvent.node() + ", " +
                "timestamp=" + taskEvent.timestamp() + ", " +
                "info=" + taskEvent.message() + ", " +
                "subjectId=" + taskEvent.subjectId() + ", " +
                "secureSubject=" + subj + "]");
        }
    }

    @Override
    public void spiStart(@Nullable String igniteInstanceName) throws IgniteSpiException {
        /* No-op. */
    }

    @Override
    public void spiStop() throws IgniteSpiException {
        /* No-op. */
    }
}


If this approach doesn't work for your implementation for some reason, 
you can start a thread on the Ignite developer mailing list.


BR,
Andrei

2/12/2020 6:54 PM, VeenaMithare пишет:

Hi ,

We have built a security and audit plugin for security of our ignite
cluster. We are unable to get the right audit information i.e. we are unable
to get the right subject for users logged in through dbeaver ( jdbc thin
client. ). This is because the subjectid associated with the "CACHE_PUT"
event when an update is triggered by the jdbc thin client, contains the uuid
of the node that executed the update rather than the logged in jdbc thin
client user.

If this is a limitation with the current version of ignite, is there any
workaround to get this information ?

regards,
Veena.





Ignite yarn resources keep on increasing

2020-02-13 Thread ChandanS
I am using Ignite 2.7 for an Ignite YARN deployment. I have my own Spark
application that starts the Ignite YARN cluster and loads data into Ignite.
It works fine in positive scenarios, but whenever there is an exception on
the ignite-yarn.jar side, such as a wrong path in one of the properties
(IGNITE_PATH), resource usage keeps increasing at some time interval.
I started my application with --num-executors 40 --executor-cores 2;
after keeping the application up for the last 10 hours, the number of
executors is 461 and cores 921, with memory increasing as well. I am
getting the below exception from the ignite-yarn application:

class org.apache.ignite.IgniteException: Failed to start manager:
GridManagerAdapter [enabled=true,
name=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager]





Re: Loading and Fetching the Data using Node js.

2020-02-13 Thread nithin91
Hi,

Pasted below are the code and the error I got. I am trying to query, from
Node.js, an existing cache that was loaded using CacheJdbcPojoStore in Java.
It would be really helpful if you could share any sample code you have.

PS C:\Users\ngovind\NodeApp> node NodeIgnite.js
ERROR: Binary type has different field types [typeName=OrderId,
fieldName=OrderID, fieldTypeName1=long, fieldTypeName2=double]
(node:13596) UnhandledPromiseRejectionWarning: ReferenceError: igniteClient
is not defined
at start (C:\Users\ngovind\NodeApp\NodeIgnite.js:44:5)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
(node:13596) UnhandledPromiseRejectionWarning: Unhandled promise rejection.
This error originated either by throwing inside of an async function without
a catch block, or by rejecting a promise which was not handled with
.catch(). (rejection id: 1)
(node:13596) [DEP0018] DeprecationWarning: Unhandled promise rejections are
deprecated. In the future, promise rejections
that are not handled will terminate the Node.js process with a non-zero exit
code.




const IgniteClient = require('apache-ignite-client');
const IgniteClientConfiguration = IgniteClient.IgniteClientConfiguration;
const ObjectType = IgniteClient.ObjectType;
const CacheEntry = IgniteClient.CacheEntry;
const ComplexObjectType = IgniteClient.ComplexObjectType;

class OrderKey {
constructor(OrderID = null, CityID= null) {
this.OrderID = OrderID;
this.CityID = CityID;
}  
}

class OrderDetails {
constructor(Productname = null, CustomerName= null,StoreName=null) {
this.Productname = Productname;
this.CustomerName = CustomerName;
this.StoreName = StoreName;
}  
}

async function start() {
    // Declared outside the try block so that the finally block can
    // reference it even when construction or connection fails.
    let igniteClient = null;
    try {
        igniteClient = new IgniteClient();
        await igniteClient.connect(
            new IgniteClientConfiguration('127.0.0.1:10800'));
        const cache = await igniteClient.getCache('OrdersCache');
        const OrderKeyComplexObjectType =
            new ComplexObjectType(new OrderKey(0, 0), 'OrderId');
        const OrderComplexObjectType =
            new ComplexObjectType(new OrderDetails('', '', ''), 'Orders');

        cache.setKeyType(OrderKeyComplexObjectType)
            .setValueType(OrderComplexObjectType);

        const data = await cache.get(new OrderKey(1, 1));
        console.log(data.Productname);
    }
    catch (err) {
        console.log('ERROR: ' + err.message);
    }
    finally {
        if (igniteClient) {
            igniteClient.disconnect();
        }
        console.log(" Data Fetch Completed");
    }
}

start();


