Re: Data (that is inserted via JDBC) is not persisted after cluster restart

2017-10-10 Thread Denis Magda
Everything gets stored under the work dir by default unless you override some of
the paths using the configuration APIs.

Share the SQL for the table you're having trouble with if you want to get
assistance from the community.
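
For reference, a minimal sketch of how those paths can be overridden on the node configuration (this assumes the Ignite 2.2 PersistentStoreConfiguration API; the directories are illustrative):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.PersistentStoreConfiguration;

public class WorkDirExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Binary metadata, marshaller files and persistence all live under
        // this directory unless overridden below.
        cfg.setWorkDirectory("/opt/ignite/work");

        PersistentStoreConfiguration psCfg = new PersistentStoreConfiguration();

        // Optional overrides: move the data files and WAL out of the work dir.
        psCfg.setPersistentStorePath("/data/ignite/store");
        psCfg.setWalStorePath("/data/ignite/wal");
        psCfg.setWalArchivePath("/data/ignite/wal/archive");

        cfg.setPersistentStoreConfiguration(psCfg);

        Ignition.start(cfg);
    }
}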

—
Denis
 
> On Oct 10, 2017, at 11:56 AM, blackfield  wrote:
> 
> I deleted everything under the work dir on all nodes; now I am not able to
> reproduce it for the new tables that I created.
> 
> I am still seeing the issue on one of the tables that I created in the past.
> 
> We experimented with so many different things; perhaps something was
> cached/stored somewhere other than the default work dir?
> 
> Does Ignite store any metadata anywhere other than the default work dir?
> 
> 
> 
> 
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Data (that is inserted via JDBC) is not persisted after cluster restart

2017-10-10 Thread blackfield
I deleted everything under the work dir on all nodes; now I am not able to
reproduce it for the new tables that I created.

I am still seeing the issue on one of the tables that I created in the past.

We experimented with so many different things; perhaps something was
cached/stored somewhere other than the default work dir?

Does Ignite store any metadata anywhere other than the default work dir?







--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Persistence store(MSSQL) using cross-platform c#(Ignite Client windows) and java(Ignite Server linux version)

2017-10-10 Thread JP
Thanks Alexey...
I will try this out.

But we still have to figure out whether this approach would suit our
large-scale application.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Persistence store(MSSQL) using cross-platform c#(Ignite Client windows) and java(Ignite Server linux version)

2017-10-10 Thread Alexey Kukushkin
Siva, JP,

This approach to add cache configuration at runtime actually works!!! Just
tested it:

Assuming that initially you have no caches configured in your App.config:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="igniteConfiguration"
             type="Apache.Ignite.Core.IgniteConfigurationSection, Apache.Ignite.Core" />
  </configSections>

  <igniteConfiguration
      xmlns="http://ignite.apache.org/schema/dotnet/IgniteConfigurationSection"
      gridName="myGrid1" clientMode="true">
    <!-- No caches configured up front; they are added at runtime below. -->
    <cacheConfiguration />
  </igniteConfiguration>
</configuration>
This application adds a dynamic cache at runtime using the XML configuration
approach instead of the code-based approach!

using System;
using System.Configuration;
using System.Xml.Linq;
using Apache.Ignite.Core;

class Program
{
    // Left over from the System.Configuration-based variant; not used below.
    private static Configuration appConfig =
        ConfigurationManager.OpenExeConfiguration(Environment.GetCommandLineArgs()[0]);

    static void Main(string[] args)
    {
        AddCacheConfiguration("dynamic-cache-1");

        using (IIgnite ignite = Ignition.StartFromApplicationConfiguration())
        {
            // Key/value types are illustrative here.
            ignite.GetCache<int, string>("dynamic-cache-1");
        }
    }

    public static void AddCacheConfiguration(string name)
    {
        const string IgniteNS =
            "http://ignite.apache.org/schema/dotnet/IgniteConfigurationSection";

        var appConfigPath = Environment.GetCommandLineArgs()[0] + ".config";
        var appConfig = XDocument.Load(appConfigPath);

        var cachesConfig = appConfig
            .Element("configuration")
            .Element(XName.Get("igniteConfiguration", IgniteNS))
            .Element(XName.Get("cacheConfiguration", IgniteNS));

        var cacheConfig = new XElement(XName.Get("cacheConfiguration", IgniteNS));
        cacheConfig.Add(new XAttribute("name", name));

        cachesConfig.Add(cacheConfig);
        appConfig.Save(appConfigPath);
    }
}


Jepsen test: Hazelcast vs Ignite

2017-10-10 Thread andrew_k
Hi,

I just read the Jepsen analysis of Hazelcast:
https://jepsen.io/analyses/hazelcast-3-8-3

Given that Ignite has quite similar functionality to Hazelcast,
are there any plans to run the same tests against Apache Ignite (or GridGain)?

It seems the GridGain team has at least one person able to run Jepsen :-)
http://sqadays.com/ru/talk/47543

That would be a good check of Ignite's distributed functionality.

Andrew





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Persistence store(MSSQL) using cross-platform c#(Ignite Client windows) and java(Ignite Server linux version)

2017-10-10 Thread Alexey Kukushkin
Siva, JP,

Please ignore my suggestion to add cache configuration at runtime: it is
not supported and it will not work. Thus, I see only two options left:
either call Java from C#, or change your model to have the Tenant ID as a
field of the cache value type.
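
To illustrate the second option, here is a minimal server-side Java sketch of a single shared cache where the tenant is just an indexed field of the value type. The class, cache and field names are hypothetical, not taken from your model:

import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class TenantFieldExample {
    /** Hypothetical value type shared by all tenants. */
    public static class Account {
        @QuerySqlField(index = true)
        private String tenantId;

        @QuerySqlField
        private String name;

        public Account(String tenantId, String name) {
            this.tenantId = tenantId;
            this.name = name;
        }
    }

    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        CacheConfiguration<Long, Account> cfg =
            new CacheConfiguration<Long, Account>("accounts")
                .setIndexedTypes(Long.class, Account.class);

        IgniteCache<Long, Account> cache = ignite.getOrCreateCache(cfg);

        cache.put(1L, new Account("tenant-42", "Alice"));

        // Every tenant-specific read filters on the indexed tenantId field.
        List<List<?>> rows = cache.query(
            new SqlFieldsQuery("select name from Account where tenantId = ?")
                .setArgs("tenant-42")).getAll();

        System.out.println(rows);
    }
}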


Re: Persistence store(MSSQL) using cross-platform c#(Ignite Client windows) and java(Ignite Server linux version)

2017-10-10 Thread JP
As per our needs, tenants are created at runtime.

That's why we prefer not to pre-configure tenant names and Tenant IDs in the
Ignite config file.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Node crash recovery

2017-10-10 Thread Alexey Goncharuk
Hi,

I assume you have backups=0 for your cache (otherwise you should not see
data loss). There are two ways to achieve what you need:
1) Set PartitionLossPolicy different from IGNORE in your cache
configuration. This way your clients will get an exception when trying to
read a lost key. After a stopped node is restarted, you need to call
resetLostPartitions() to restore the cluster state.
2) Set backups=1. This way you should have full availability even when a
node leaves the cluster.
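
A minimal sketch of both options on the configuration side (the cache name and the chosen policy value are illustrative):

import java.util.Collections;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.PartitionLossPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class LossPolicyExample {
    public static void main(String[] args) {
        CacheConfiguration<Long, String> cacheCfg =
            new CacheConfiguration<Long, String>("myCache")
                // Option 1: fail reads/writes of lost partitions instead of
                // silently returning nulls (the default policy is IGNORE).
                .setPartitionLossPolicy(PartitionLossPolicy.READ_WRITE_SAFE)
                // Option 2: keep one backup copy so a single node failure
                // does not lose data.
                .setBackups(1);

        Ignite ignite = Ignition.start();
        ignite.getOrCreateCache(cacheCfg);

        // After the failed node is back and rebalanced, reset the
        // lost-partitions state so the cache is fully usable again.
        ignite.resetLostPartitions(Collections.singleton("myCache"));
    }
}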

Hope this helps,
AG

2017-10-10 13:52 GMT+03:00 mhetea :

> Hello,
> We have a cluster with 3 servers and about 9 clients.
> The cache mode is partitioned.
> At the moment randomly one server stops (we use ignite with spring boot),
> with no error and we lose the data as the cache mode is Partitioned.
> We enabled the Persistent Store
> (https://apacheignite.readme.io/docs/distributed-persistent-store#section-
> ignite-persistence-internals),
> but from what I read it only loads the values back in the memory when a get
> is made and the key doesn't exist in RAM.
> Is there a way (a command or something) to tell an ignite server to reload
> the data from the persistent store? (or some setting so that it does it
> automatically on crash)? The issue is that we have lists (which we get by
> SQL queries on the cache), and then from these lists we navigate to the
> actual item, so we cannot make a get on a certain key.
>
> Thank you!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: checkpoint marker is present on disk, but checkpoint record is missed in WAL

2017-10-10 Thread Alexey Goncharuk
Hi,

This should never happen in BACKGROUND mode unless there is a hard power
kill of your Ignite node (which is not your case). I've reviewed the
related parts of the code and found a few bugs fixed in 2.3 that may have
caused this issue (e.g. IGNITE-5772). Can you try building a custom Ignite
build from the ignite-2.3 branch and check whether the issue is still
present?

Thanks,
AG

2017-10-09 14:18 GMT+03:00 KR Kumar :

> Hi guys - I am using Ignite persistence with an 8-node cluster, currently in
> dev/POC stages. I get the following exception when I try to restart the node
> after I killed the process with "kill . I have a shutdown hook in the
> code in which I shut down Ignite with G.stop(false). I read in a blog
> that when you stop Ignite with cancel=false, it checkpoints the data and
> then stops the node, so there should not be any issues with restart. Any help
> is greatly appreciated.
>
> Invocation of init method failed; nested exception is class
> org.apache.ignite.IgniteCheckedException: Failed to restore memory state
> (checkpoint marker is present on disk, but checkpoint record is missed in WAL)
> [cpStatus=CheckpointStatus [cpStartTs=1507546382988,
> cpStartId=abeb760a-0388-4ad5-8473-62ed9c7bc0f3,
> startPtr=FileWALPointer [idx=6, fileOffset=33982453, len=2380345, forceFlush=false],
> cpEndId=c257dd1f-c350-4b0d-aefc-cad6d2c2082b,
> endPtr=FileWALPointer [idx=4, fileOffset=38761373, len=1586221, forceFlush=false]], lastRead=null]
> 06:55:09.341 [main] WARN org.springframework.context.support.ClassPathXmlApplicationContext -
> Exception encountered during context initialization - cancelling refresh attempt:
> org.springframework.beans.factory.BeanCreationException: Error creating bean with name
> 'igniteContainer' defined in class path resource [mihi-gridworker-s.xml]:
> Invocation of init method failed; nested exception is class
> org.apache.ignite.IgniteCheckedException: Failed to restore memory state
> (checkpoint marker is present on disk, but checkpoint record is missed in WAL)
> [cpStatus=CheckpointStatus [cpStartTs=1507546382988,
> cpStartId=abeb760a-0388-4ad5-8473-62ed9c7bc0f3,
> startPtr=FileWALPointer [idx=6, fileOffset=33982453, len=2380345, forceFlush=false],
> cpEndId=c257dd1f-c350-4b0d-aefc-cad6d2c2082b,
> endPtr=FileWALPointer [idx=4, fileOffset=38761373, len=1586221, forceFlush=false]], lastRead=null]
> at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1628)
> at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555)
> at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
> at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
> at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
> at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
> at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
> at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
> at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:866)
> at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:542)
> at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:139)
> at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:83)
> at com.pointillist.gridworker.agent.MihiGridWorker.start(MihiGridWorker.java:32)
> at com.pointillist.gridworker.MihiWorker.main(MihiWorker.java:20)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to restore memory state
> (checkpoint marker is present on disk, but checkpoint record is missed in WAL)
> [cpStatus=CheckpointStatus [cpStartTs=1507546382988,
> cpStartId=abeb760a-0388-4ad5-8473-62ed9c7bc0f3,
> startPtr=FileWALPointer [idx=6, fileOffset=33982453, len=2380345, forceFlush=false],
> cpEndId=c257dd1f-c350-4b0d-aefc-cad6d2c2082b,
> endPtr=FileWALPointer [idx=4, fileOffset=38761373, len=1586221, forceFlush=false]], lastRead=null]
> at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.restoreMemory(GridCacheDatabaseSharedManager.java:1433)
> at org.apache.ignite.internal.processors.cache.persistence.
> 

Re: Persistence store(MSSQL) using cross-platform c#(Ignite Client windows) and java(Ignite Server linux version)

2017-10-10 Thread Alexey Kukushkin
Also, I am still not sure why you decided to create a dynamic cache for each
tenant instead of having a single pre-configured cache with the Tenant ID as a
field of the cache value type.

How are you going to manage all those caches? The more caches you have, the
more effort it takes to manage them and eventually delete them when a tenant is
removed.


Re: Re: Fetched result use too much time

2017-10-10 Thread Andrey Mashenkov
Hi Lucky,

It looks like your query selectivity is poor, so even with GROUP BY a large
amount of data has to be fetched to the reduce node.

1. Is it possible to collocate the data on the field used in the OrderBy clause?
2. It looks weird that query parallelism causes wrong results. It looks like you
have a single-node grid and there is a bug in the query parallelism feature.
Also, I can't find what Ignite version you use. Would you try to switch to
the latest one?
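
For reference, a minimal sketch pulling together the knobs discussed in this thread; the cache name and values are illustrative, and actually executing the query assumes the tables from your schema exist:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class SqlTuningSketch {
    public static void main(String[] args) {
        CacheConfiguration<Object, Object> cacheCfg =
            new CacheConfiguration<>("mycache")
                // Split a single SQL query across several threads per node;
                // should not exceed the number of available CPUs.
                .setQueryParallelism(4)
                // Inline up to 32 bytes of the indexed value into the index
                // page to avoid data-page lookups for short strings.
                .setSqlIndexMaxInlineSize(32);

        Ignite ignite = Ignition.start();
        ignite.getOrCreateCache(cacheCfg);

        SqlFieldsQuery qry = new SqlFieldsQuery(
                "select a.id from a inner join b on a.id = b.tid")
            // Tell the SQL engine the data is collocated on the GROUP BY key,
            // so grouping can be finished on the map phase.
            .setCollocated(true)
            // Keep the table order from the query text instead of letting the
            // optimizer reorder the join.
            .setEnforceJoinOrder(true);

        // Execution requires tables 'a' and 'b' from this thread to exist:
        // ignite.cache("mycache").query(qry).getAll();
    }
}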

On Tue, Oct 10, 2017 at 2:48 PM, Lucky  wrote:

> Andrey Mashenkov 
> Thank you very much!
> 1.query parallelism:this will cause a problem: fetch wrong reslut.
>I set it to 10,and have table a with 150,000 records, table b with
> 12,000,000 records.
>when I query single table,the result is correct.
>but when the sql is like this:
>select a.id from a inner join b on a.id = b.tid
>   it got the wrong result. The result should be 11,000,000;but it just
> return 380,000 records.
>   when I remove query parallelism setting,it return correctly.
>
> 2. I have modified ths property,and restart the server.for the record
> is too large, it need 4 hours to load data to ignite.So I have to wait.
> 3.Actually, if I remove the group by clause and having condition, it
> took more time!
> 4  and 5: I have try them before ,but it did not work.
> Thanks again.
> Lucky
>
>
>
>
> At 2017-09-21 21:28:40, "Andrey Mashenkov" 
> wrote:
>
> Lucky,
>
>
> 1. Looks like it make no sense to set query parallelism level higher
> number of available CPU on node.
>
> 2. Map query use index for field FASSIGCUID type of String and seems
> values are 16 chars length strings (32 bytes)
> By default, values with size < 10 bytes can be inlined in index, so Ignite
> doesn't need to lookup a data page for value data.
> You can try to increase it up to 32 via*
> cacheConfiguration.setSqlIndexMaxInlineSize(32) *or JVM property
> *-DIGNITE_MAX_INDEX_PAYLOAD_SIZE=32*.
>
> 3. Ignite doesn't know whether your data is collocated by FDATABASEDID
> (group by clause) or not collocated.
> So, Ignite can't apply HAVING condition instantly on map phase and have to
> load and merge all groups from all nodes before check for HAVING.
> If it possible to collocate data on GROUP BY condition, you can hint
> Ignite with setting query flag:   *sqlFieldsQuery.setCollocated(true).*
> However, I'm not sure it will help much and H2 will be able to make any
> optimization here.
>
> 4. Also, you can force Ignite to use different index. E.g. group index on
> FDATABASEDID and FASSIGCUID and same fields in different order.
>
> 5. Sometimes, Ignite change join order and it can cause unexcpected
> slowdown. You can try to change join order by changing tables positions in
> query string.
> To preserve Ignite join order optimization you may use a flag:
> *sqlFieldsQuery.setEnforceJoinOrder(true).*
>
>
> Hope, this will help you.
>
>
>
>
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Persistence store(MSSQL) using cross-platform c#(Ignite Client windows) and java(Ignite Server linux version)

2017-10-10 Thread Alexey Kukushkin
Siva, JP,

Ignite .NET has no special support for calling Java from .NET. If you
google for something like "call Java from C#" you will find lots of
examples telling you to 1) expose a Java method through JNI, 2) build a DLL
with your Java native methods and all other required Java dependencies
included, and 3) consume the DLL from C# using "unsafe extern" methods. Although
I have not tried it myself, I would personally use C++/CLI for .NET/Java
interoperability: it is easier to consume JNI from C++ and you can reference
C++/CLI types from your C# code.

But instead of attacking the problem with all those complexities, I suggest this
trick. The idea is to update the configuration at runtime before calling
IIgnite.GetOrCreateCache():

   1. Start with an empty caches configuration in your App.config.
   2. When you receive a tenant ID, check whether a cache for that tenant already
   exists in the configuration. If not, update the
   "System.Configuration.Configuration" instance in memory and save it.
   3. Call GetOrCreateCache().


Re: Job Listeners

2017-10-10 Thread chandrika
Hello Alexey,


The sample code is given below:

@ComputeTaskSessionFullSupport
public class SplitExampleJgraphWithComplexDAGIgniteCachesample extends
        ComputeTaskSplitAdapter<CustomDirectedAcyclicGraph, Integer> {

    // Auto-injected task session.
    @TaskSessionResource
    private ComputeTaskSession ses;

    private static final Random random = new Random();
    static int noOftasksExecutedSuccess = 0;

    SimpleDateFormat dateFormat = new SimpleDateFormat("HH:mm:ss:SSS");

    @Override protected Collection<? extends ComputeJob> split(int clusterSize,
            CustomDirectedAcyclicGraph graph) {
        Collection<ComputeJob> jobs = new LinkedList<>();

        IgniteCache<String, Object> cacheUp =
                Ignition.ignite().getOrCreateCache("cacheNameNew");

        ses.addAttributeListener((key, val) -> {
            if ("COMPLETE".compareTo(key.toString()) == 0) {
                nextTaskToExecute(graph, cacheUp);
            }
        }, false);

        String task = null;
        if (cacheUp.get("CurrentVertex") != null)
            task = (String) cacheUp.get("CurrentVertex");

        for (DefaultEdge outgoingEdge : graph.outgoingEdgesOf(task)) {
            String sourceVertex = graph.getEdgeSource(outgoingEdge);
            String targetVertex = graph.getEdgeTarget(outgoingEdge);
            graph.setTargetVertex(targetVertex);
            executingJobsBuilt(graph, jobs);
        }

        if (task != null && graph.outgoingEdgesOf(task).size() == 0) {
            if (cacheUp.get(task) != null && (Boolean) cacheUp.get(task)) {
                String targetVertex = setNextVertexInCache(graph, cacheUp);
                graph.setTargetVertex(targetVertex);
                nextTaskToExecute(graph, cacheUp);
            } else {
                System.out.println("else part");
            }
        }

        return jobs;
    }

    private void nextTaskToExecute(CustomDirectedAcyclicGraph graph,
            IgniteCache<String, Object> cacheUp) {
        Ignite ignite = Ignition.ignite();
        if (cacheUp.get("NextVertex") != null) {
            String processingVertex = (String) cacheUp.get("NextVertex");
            if (processingVertex != null &&
                    areParentVerticesProcessed(graph, processingVertex, cacheUp)) {
                cacheUp.put("CurrentVertex", processingVertex);
                // Execute task on the cluster and wait for its completion.
                ignite.compute().execute(
                        SplitExampleJgraphWithComplexDAGIgniteCachesample.class, graph);
            }
        }
    }

    private void executingJobsBuilt(CustomDirectedAcyclicGraph graph,
            Collection<ComputeJob> jobs) {
        String targetVertex = graph.getTargetVertex();
        IgniteCache<String, Object> cacheNew =
                Ignition.ignite().getOrCreateCache("cacheNameNew");

        if (targetVertex != null && !cacheNew.containsKey(targetVertex)) {
            jobs.add(new ComputeJobAdapter() {
                // Auto-injected job context.
                @JobContextResource
                private ComputeJobContext jobCtx;

                @Nullable @Override public Object execute() {
                    int duration1 = 8000 + random.nextInt(100);
                    SimpleDateFormat dateFormatNew = new SimpleDateFormat("HH:mm:ss:SSS");
                    String task = (String) targetVertex;
                    try {
                        Thread.sleep(duration1);
                        System.out.println("executed the job **  " + task + "**"
                                + dateFormatNew.format(new Date()));
                        cacheNew.put(task, true);
                    } catch (Exception e1) {
                        e1.printStackTrace();
                    }
                    ses.setAttribute("NEXTVERTEX", setNextVertexInCache(graph, cacheNew));
                    ses.setAttribute("COMPLETE", duration1);
                    return duration1;
                }
            });
        }
    }

    private String setNextVertexInCache(CustomDirectedAcyclicGraph graph,
            IgniteCache<String, Object> cache) {
        String task = null;
        Set<String> dagSourceVertex = graph.vertexSet();
        Iterator<String> itr = dagSourceVertex.iterator();
        while (itr.hasNext()) {
            task = (String) itr.next();
            if (cache.get("CurrentVertex") != null &&
                    !task.equalsIgnoreCase((String) cache.get("CurrentVertex")))
                continue;
            else {

Re: Data (that is inserted via JDBC) is not persisted after cluster restart

2017-10-10 Thread ilya.kasnacheev
Hello blackfield!

I have tried to reproduce your scenario, but to no avail.

So I started AI with your config:
~/Downloads/apache-ignite-fabric-2.2.0-bin% bin/ignite.sh -v
config/IgniteConfig.xml 
[15:13:29,179][INFO][main][GridDiscoveryManager] Topology snapshot [ver=1,
servers=1, clients=0, CPUs=8, heap=1.0GB]

Activated it:
~/Downloads/apache-ignite-fabric-2.2.0-bin% bin/control.sh --host 127.0.0.1
--port 11211 --activate

Created a table with DBeaver:
CREATE TABLE IF NOT EXISTS Person (
  age int, id int, city_id int, name varchar, company varchar,
  PRIMARY KEY (name, id))
  WITH "template=partitioned,backups=1"

Added a few records:
INSERT INTO Person(age, id, city_id, name, company) VALUES (29, 2, 253,
'You', 'XX')

Stopped the server node, stopped DBeaver, started the server node, activated it,
ran DBeaver and queried the Person table: all entries are there (see the attached
screenshot, Screenshot_DBeaver.png).

Am I missing anything? You can see the connection string in the screenshot.

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


REST API should be exposed on https

2017-10-10 Thread Ankit Singhai
Hi All,
We would like to make use of the REST API feature in our production environment.
How can we enable it over HTTPS? Please help me out with any document or
sample.

Thanks,
Ankit Singhai



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re:Re: Fetched result use too much time

2017-10-10 Thread Lucky
Andrey Mashenkov,
Thank you very much!
1. Query parallelism: this causes a problem: it fetches wrong results.
   I set it to 10, and have table a with 150,000 records and table b with
12,000,000 records.
   When I query a single table, the result is correct.
   But when the SQL is like this:
   select a.id from a inner join b on a.id = b.tid
   it gets the wrong result. The result should be 11,000,000 records, but it
returns just 380,000.
   When I remove the query parallelism setting, it returns correctly.


2. I have modified this property and restarted the server. Because the data set
is too large, it needs 4 hours to load into Ignite, so I have to wait.
3. Actually, if I remove the GROUP BY clause and HAVING condition, it takes
more time!
4 and 5: I have tried them before, but it did not work.
Thanks again.
Lucky





At 2017-09-21 21:28:40, "Andrey Mashenkov"  wrote:

Lucky,

1. It looks like it makes no sense to set the query parallelism level higher than
the number of available CPUs on a node.

2. The map query uses an index on the field FASSIGCUID of type String, and it
seems the values are 16-character strings (32 bytes).
By default, values with size < 10 bytes can be inlined in the index, so Ignite
doesn't need to look up a data page for the value data.
You can try to increase it up to 32 via
cacheConfiguration.setSqlIndexMaxInlineSize(32) or the JVM property
-DIGNITE_MAX_INDEX_PAYLOAD_SIZE=32.

3. Ignite doesn't know whether your data is collocated by FDATABASEDID (the
GROUP BY clause) or not.
So Ignite can't apply the HAVING condition on the map phase and has to load
and merge all groups from all nodes before checking HAVING.
If it is possible to collocate the data on the GROUP BY condition, you can hint
Ignite by setting the query flag sqlFieldsQuery.setCollocated(true).
However, I'm not sure it will help much or that H2 will be able to make any
optimization here.

4. Also, you can force Ignite to use a different index, e.g. a group index on
FDATABASEDID and FASSIGCUID, or the same fields in a different order.

5. Sometimes Ignite changes the join order, which can cause unexpected slowdown.
You can try to change the join order by changing the table positions in the
query string.
To make Ignite preserve the join order from the query text, you may use the flag
sqlFieldsQuery.setEnforceJoinOrder(true).

Hope this will help you.



insert data took too much time

2017-10-10 Thread Lucky
Hi,
   I execute SQL like this:
   insert into "mycache".cacheInfo (_key, fid)
values ('111','111'), ('222','222') ...
   This inserts 9 records at a time, and it took 100 seconds.
   I created an index on column fid.
   The Ignite version is 2.2 and the table has no records.
   In the configuration, writeBehind is set to true and writeSynchronizationMode
is set to PRIMARY_SYNC.

   What did I miss?
   Thank you.
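
For context, a minimal sketch of that kind of multi-row insert through JDBC; the thin-driver URL and values are assumptions, adjust them to the driver and schema you actually use:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class InsertExample {
    public static void main(String[] args) throws Exception {
        // Assumes the Ignite JDBC thin driver and a node listening on localhost.
        Class.forName("org.apache.ignite.IgniteJdbcThinDriver");

        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             Statement stmt = conn.createStatement()) {

            long start = System.currentTimeMillis();

            // Multi-row VALUES insert, as described above.
            stmt.executeUpdate(
                "insert into \"mycache\".cacheInfo (_key, fid) " +
                "values ('111', '111'), ('222', '222')");

            System.out.println("Insert took " +
                (System.currentTimeMillis() - start) + " ms");
        }
    }
}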

Node crash recovery

2017-10-10 Thread mhetea
Hello,
We have a cluster with 3 servers and about 9 clients.
The cache mode is PARTITIONED.
At the moment, one of the servers randomly stops (we use Ignite with Spring Boot)
with no error, and we lose data because the cache mode is partitioned.
We enabled the Persistent Store
(https://apacheignite.readme.io/docs/distributed-persistent-store#section-ignite-persistence-internals),
but from what I read it only loads the values back into memory when a get
is made and the key doesn't exist in RAM.
Is there a way (a command or something) to tell an Ignite server to reload
the data from the persistent store (or some setting so that it does it
automatically after a crash)? The issue is that we have lists (which we get by
SQL queries on the cache), and then from these lists we navigate to the
actual item, so we cannot do a get on a certain key.

Thank you! 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Local node was dropped from cluster due to network problems cascading failures

2017-10-10 Thread zshamrock
Could it be that the network bandwidth limit is reached on the EC2 instance?
r4.large has up to 10 Gigabit network performance, and we are running another
network-intensive service on those machines (although split into a
separate instance today), and according to CloudWatch we are at about the
limit of the network capacity, which could affect the Ignite clients talking
to the server.

Could it be?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


map types in Cassandra

2017-10-10 Thread elopez779
Hi.

I'm a beginner with Ignite and I have a question regarding map types in
Cassandra. I have been given a table and I have to create a cache for it. One of
the fields of the table is of type map.

I tried to create a POJO class to read the row. The POJO is as follows:

import java.io.Serializable;
import java.util.Map;

public class PojoAccount implements Serializable {

    private Map locationinfo;
    private String status;

    public PojoAccount() {
        super();
    }

    public PojoAccount(Map loc_info, String st) {
        this.locationinfo = loc_info;
        this.status = st;
    }

    public Map getLocationInfo() { return locationinfo; }
    public String getStatus() { return status; }

    public void setLocationInfo(Map loc_info) { locationinfo = loc_info; }
    public void setStatus(String st) { status = st; }

    public String toString() {
        return status + " ; " + locationinfo.toString();
    }
}

and I get the following error when running the app:

Value locationinfo is of type map, not blob

I can use any type of variable in my POJO class. I used Map
because I thought that it would work, so my question is: How can I read a
map type or a collection in general?

Thanks in advance

Enrique



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cache pre-loading question

2017-10-10 Thread afedotov
Hi,

By "custom" CacheStore I meant that you could implenent a CacheStore with
your custom read/write logic, but since you are using CacheJdbcPojoStore it
should be OK.

IgniteCache does have a loadCache method.
Please take a look at
https://apacheignite.readme.io/docs/data-loading#section-ignitecacheloadcache
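
A minimal sketch of warming up a cache through its configured store; the config path and cache name are assumptions, and the optional arguments are forwarded to CacheStore.loadCache():

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class PreloadExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start("config/ignite-config.xml");

        // The cache must be configured with a CacheStore (e.g. CacheJdbcPojoStore).
        IgniteCache<Long, Object> cache = ignite.cache("personCache");

        // Delegates to the underlying store's loadCache() on every node that
        // holds partitions of this cache. A null predicate loads everything.
        cache.loadCache(null);
    }
}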

Kind regards,
Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cache pre-loading question

2017-10-10 Thread franck102
Hi Alex, what do you mean by custom?

I am indeed using a CacheJdbcPojoStore; it has the method I need
(loadCache(final IgniteBiInClosure clo, @Nullable Object... args)),
however I can't obtain a reference to the store from my Ignite instance.

I can try to build an instance of the store outside Ignite, however that
means (at best) maintaining its configuration separately from the rest of
the Ignite config...

What I'd like to do instead is:


... however that loadCache method exists on the cache store, but not on the
cache :(

Franck




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/