Re: The custom object support in REST-API

2016-07-21 Thread vkulichenko
Hi Marco,

Are you sure you're using 1.6? I can't find the line in the 1.6 codebase,
it's empty there. Can you please attach the whole log?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/The-custom-object-support-in-REST-API-tp6393p6463.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How about adding kryo or protostuff as an optional marshaller?

2016-07-21 Thread vkulichenko
I actually think that even comparing with the raw mode is not completely fair
(though it is definitely much closer to being fair). Any serialization protocol
based on a precompiled schema will be very compact, because it carries almost
zero overhead (protostuff doesn't require .proto files, but it still requires
generating serialization code for POJOs). Such protocols are extremely
compact, but functionally limited, and they are mainly used in messaging systems. For
example, we use something very similar internally in Ignite for
communication between nodes (see the TcpCommunicationSpi code and the Message
interface if you are interested in implementation details).

The binary format provides many more features. It is designed to avoid
deserialization on server nodes while still allowing you to look up field
values and even run SQL queries. With the binary format you can also put any
object into the cache (even without changing class definitions at all) and
dynamically change the schema. Obviously, all this adds meta information
to the protocol, but Ignite's binary format is still very compact when you
compare it with others that provide similar functionality.
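As an illustration of field lookup without deserialization (a sketch only — the cache name "persons", the key, and the "name" field are made up, and this assumes a running Ignite instance in scope as `ignite`):

```java
// Hedged sketch: read one field from the binary form of a cached object
// without deserializing the whole POJO on the server or client.
IgniteCache<Integer, BinaryObject> binCache = ignite.cache("persons").withKeepBinary();
BinaryObject bo = binCache.get(1);
String name = bo.field("name"); // field lookup on the binary representation
```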

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-about-adding-kryo-or-protostuff-as-an-optional-marshaller-tp6309p6462.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to use the logger in the BackupFilter?

2016-07-21 Thread Jason
Thanks Alexey.

1. I tried the Ignite instance with the code below, but it seems it also
cannot be injected automatically. Is there any extra work, e.g. some
configuration or an interface to implement, needed to make the
@IgniteInstanceResource annotation work?

/** Ignite instance. */
@IgniteInstanceResource
private Ignite ignite;



2. When is the AffinityFunction synced between all the nodes? Only on
node join?
If the BackupFilter's result depends on something node-specific, e.g. a
local file which may change (albeit very rarely), it will cause Ignite not to
work correctly, right?

3. Actually, our usage scenario is as follows:
i)   group all the machines in the cluster into different groups
ii)  ensure that not all replicas of one partition are assigned to the same
group
iii) during deployment, the groups are restarted one by one, which makes
sure the in-memory data is not lost.

This should be a common scenario for production use, right? Does the
Ignite team have a recommended way to do this?


Thanks,
-Jason 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-use-the-logger-in-the-BackupFilter-tp6442p6461.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Connecting to apache ignite node started externally

2016-07-21 Thread vkulichenko
Hi,

First of all, please properly subscribe to the mailing list, because otherwise
the community will not receive notifications for your messages. Here are the
instructions:
http://apache-ignite-users.70518.x6.nabble.com/mailing_list/MailingListOptions.jtp?forum=1


kvipin wrote
> Hi All,
> 
> I'm unable to get apache ignite node in my c++ program which was started
> externally.
> 
> Caused by: class org.apache.ignite.spi.IgniteSpiException: Local node's
> binary configuration is not equal to remote node's binary configuration
> [locNodeId=6a623d75-8d85-4a05-a38b-4a38ec128b4e,
> rmtNodeId=b56b2b0a-952e-4ffd-9f27-c3c4a58469e9,
> locBinaryCfg={globIdMapper=org.apache.ignite.binary.BinaryBasicIdMapper,
> compactFooter=false, globSerializer=null}, rmtBinaryCfg=null]
>   at
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.checkFailedError(TcpDiscoverySpi.java:1644)
>   at
> org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:885)
>   at
> org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:334)
>   at
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:1832)
>   at
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:255)
>   ... 9 more
> [14:21:32] Ignite node stopped OK [uptime=00:00:17:593]

To start Java and C++ nodes in the same topology you need to provide a
consistent binary configuration. Please refer to this post for the
explanation:
http://apache-ignite-users.70518.x6.nabble.com/Error-starting-c-client-node-using-1-6-tp4697p4719.html
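For illustration, the Java side would typically mirror the C++ node's binary settings. A hedged sketch (the property values below simply mirror the locBinaryCfg printed in the error message above; verify against the linked post):

```xml
<!-- Hedged sketch: align the Java node's binary configuration with the
     C++ node's (compactFooter=false, BinaryBasicIdMapper). -->
<property name="binaryConfiguration">
    <bean class="org.apache.ignite.configuration.BinaryConfiguration">
        <property name="compactFooter" value="false"/>
        <property name="idMapper">
            <bean class="org.apache.ignite.binary.BinaryBasicIdMapper"/>
        </property>
    </bean>
</property>
```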

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Connecting-to-apache-ignite-node-started-externally-tp6423p6459.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Is there a way to configure the backup nodes for a cache?

2016-07-21 Thread vkulichenko
Hi,

Are G1 and G2 two different physical machines, with two nodes started on
each of them? If so, you can achieve what you want by adding
this to the cache configuration:







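(The XML snippet was stripped by the archive. Based on the explanation that follows, it was most likely the affinity function's excludeNeighbors flag; a hedged reconstruction:)

```xml
<!-- Hedged reconstruction of the stripped XML: excludeNeighbors=true
     tells the affinity function never to place a backup on the same
     physical machine as the primary. -->
<property name="affinity">
    <bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
        <property name="excludeNeighbors" value="true"/>
    </bean>
</property>
```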
This will force the affinity function to assign primary and backup nodes for
the same partition to different physical machines.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Is-there-a-way-to-configure-the-backup-nodes-for-a-cache-tp6448p6458.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Problem about Ignite Hadoop Accelerator MRv2 with CDH5.5.2 and kerberos

2016-07-21 Thread mao guo
Hi all,

In my environment (CDH 5.5.2 with Kerberos), I configured Ignite to accelerate
MapReduce v2 as described in the official docs, but I get the following problem
when I submit an MR job:
16/07/22 11:16:42 WARN security.UserGroupInformation:
PriviledgedActionException as:@XX.COM (auth:KERBEROS)
cause:java.io.IOException: Failed to submit job (null status obtained):
job_666baa91-cf92-49d9-a2ab-4cdc702019aa_0003
java.io.IOException: Failed to submit job (null status obtained):
job_666baa91-cf92-49d9-a2ab-4cdc702019aa_0003
at
org.apache.ignite.internal.processors.hadoop.proto.HadoopClientProtocol.submitJob(HadoopClientProtocol.java:123)
at
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:243)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1325)
at
org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:306)
at
org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at
org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at
org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at
org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Did I configure something wrong, or does Ignite not support MRv2 with Kerberos?


Re: Question about Tier Propagation of Memory

2016-07-21 Thread kwon
Hi Val,

Our cache mode is PARTITIONED with multi-node clustering,
so each node plays both the server and the client role.
That is why we configured it like this (the near cache is for the client role).

And, as I wrote in the previous thread:
Case 1) Near cache & OFFHEAP_TIERED << uses high on-heap memory
Case 2) Near cache & OFFHEAP_VALUES << it's OK
The memory mode also affects on-heap memory usage, not just the near cache
(both cases use the same GC options).

My main question is about that point:
Why is that?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Question-about-Tier-Propagation-of-Memory-tp6435p6456.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Question about Tier Propagation of Memory

2016-07-21 Thread vkulichenko
A near cache is always on-heap and does not have a tiered structure. Actually,
your configuration creates a near cache on the server. What is the reason for
that? If you intend to use the near cache only on the client (which is
usually the case), please refer to this page:
https://apacheignite.readme.io/docs/near-caches
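For reference, a client-side-only near cache can be created roughly like this (a sketch; the cache name and key/value types are made up):

```java
// Hedged sketch: start a near cache on a client node only. The cache
// "myCache" itself lives on the server nodes; the near cache is a local
// on-heap front for it.
NearCacheConfiguration<Integer, String> nearCfg = new NearCacheConfiguration<>();
IgniteCache<Integer, String> cache =
    ignite.getOrCreateNearCache("myCache", nearCfg);
```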

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Question-about-Tier-Propagation-of-Memory-tp6435p6455.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Re: SQL join query return different result under the same data when having different ignite instance.

2016-07-21 Thread 胡永亮/Bob
Hi, all

In Ignite1.6 java doc, I found some description about @AffinityKeyMapped:

"Optional annotation to specify custom key-to-node affinity. Affinity key 
is a key which will be used to determine a node on which given cache key will 
be stored. This annotation allows to mark a field or a method in the cache key 
object that will be used as an affinity key (instead of the entire cache key 
object that is used for affinity by default). Note that a class can have only 
one field or method annotated with @AffinityKeyMapped annotation.  "

Note the bold text. So when one table joins another table on two
columns, how should affinity be configured?

The actual scenario like this:
 "select * FROm Kc21, \"Kc24Cache\".Kc24"
+ ", \"Ka06Cache\".Ka06, \"Kc60Cache\".Kc60 "
+ " WHERE Kc21.akb020 = Kc24.akb020"
+ " AND Kc21.akc190 = Kc24.akc190"
+ " AND Kc24.akb020 = Kc60.akb020"
+ " AND Kc24.akc190 = Kc60.akc190"
+ " AND Kc24.aae072 = Kc60.aae072"
+ " AND Kc24.bka135 = Kc60.bka135"
+ " AND COALESCE (Kc24.bkc380, '0') = '0'"
+ " AND Kc24.ake010 BETWEEN '2014-1-1' AND '2015-1-1'"
+ " AND Kc21.akc193 = Ka06.akc193";

Thanks everyone.



Bob
 
From: 胡永亮/Bob
Date: 2016-07-21 17:07
To: user@ignite.apache.org
Subject: Re: Re: SQL join query return different result under the same data 
when having different ignite instance.
Hi, Alexey:

First, thank you.

But I don't know how to configure the affinity for my SQL, because it is too
complex.

My sql is:
"select * FROm Kc21, \"Kc24Cache\".Kc24"
+ ", \"Ka06Cache\".Ka06, \"Kc60Cache\".Kc60 "
+ " WHERE Kc21.akb020 = Kc24.akb020"
+ " AND Kc21.akc190 = Kc24.akc190"
+ " AND Kc24.akb020 = Kc60.akb020"
+ " AND Kc24.akc190 = Kc60.akc190"
+ " AND Kc24.aae072 = Kc60.aae072"
+ " AND Kc24.bka135 = Kc60.bka135"
+ " AND COALESCE (Kc24.bkc380, '0') = '0'"
+ " AND Kc24.ake010 BETWEEN '2014-1-1' AND '2015-1-1'"
+ " AND Kc21.akc193 = Ka06.akc193";

I have one question:
For object Kc21, its columns akb020 and akc190 join with columns
akb020 and akc190 of object Kc24.
Besides that, the column akc193 of object Kc21 also joins with the
column akc193 of object Ka06.
According to the doc, I don't know how to configure two affinity keys.

Thanks.



Bob
 
From: Alexey Goncharuk
Date: 2016-07-21 16:00
To: user@ignite.apache.org
Subject: Re: SQL join query return different result under the same data when 
having different ignite instance.
Hi,

Ignite 1.6 requires data to be properly collocated in order for joins to work 
correctly. Namely, data being joined from tables Kc21 and Kc24 must be 
collocated. See [1] for more details on affinity collocation and [2] for more 
details on how SQL queries work. Also, take a look at 
org.apache.ignite.examples.datagrid.CacheQueryExample for correct collocation 
example.

There is a ticket [3] which will remove this restriction; hopefully it
will get into Ignite 1.7. You can watch this ticket for progress.

Hope this helps,
AG

---
[1] https://apacheignite.readme.io/docs/affinity-collocation
[2] https://apacheignite.readme.io/docs/sql-queries
[3] https://issues.apache.org/jira/browse/IGNITE-1232​
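To see why collocation matters for joins, here is a toy, self-contained model of affinity mapping (an illustration only — this is NOT Ignite's actual rendezvous hashing): if two cache keys share the same affinity field value, a deterministic hash sends them to the same partition, so a join on that field can be executed locally on one node.

```java
// Toy model of affinity-based partitioning (not Ignite's real algorithm).
// Keys that share the affinity field value always land in the same
// partition, which is what makes a collocated join possible.
public class AffinityToy {
    static int partition(Object affinityKey, int parts) {
        int h = affinityKey.hashCode();
        // Mask the sign bit instead of Math.abs: abs(Integer.MIN_VALUE) is negative.
        return (h & 0x7fffffff) % parts;
    }

    public static void main(String[] args) {
        int parts = 1024;
        // A Kc21 row and a Kc24 row joined on akb020: same affinity value...
        int p1 = partition("AKB020-0001", parts);
        int p2 = partition("AKB020-0001", parts);
        // ...means the same partition, hence the same primary node.
        if (p1 != p2) throw new AssertionError("collocated keys must share a partition");
        System.out.println("both rows in partition " + p1);
    }
}
```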

---
Confidentiality Notice: The information contained in this e-mail and any 
accompanying attachment(s) 
is intended only for the use of the intended recipient and may be confidential 
and/or privileged of 
Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of 
this communication is 
not the intended recipient, unauthorized use, forwarding, printing,  storing, 
disclosure or copying 
is strictly prohibited, and may be unlawful.If you have received this 
communication in error,please 
immediately notify the sender by return e-mail, and delete the original message 
and all copies from 
your system. Thank you. 
---




Re: Question about Tier Propagation of Memory

2016-07-21 Thread kwon
Thank you for the answer, Val.

Yeah, I expected OFFHEAP_TIERED mode to work exactly the way you said,
but our test results over the last couple of days don't seem to show that.

However, maybe that is not caused by OFFHEAP_TIERED mode itself.

First, check our test configuration.
We used [[ PARTITIONED & Near Cache & copyOnRead = false & OFFHEAP_TIERED ]].



















...


Under this configuration, Ignite's JVM heap keeps increasing,
and eventually most of the heap is filled with cache entries (referenced by
org.apache.ignite.internal.processors.cache.distributed.near.GridNearAtomicCache).
But when we changed the memory mode to OFFHEAP_VALUES,
that memory usage pattern disappeared.

I attached a heap dump capture. (For your information, our JVM max heap is set
to 1024M.)

 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Question-about-Tier-Propagation-of-Memory-tp6435p6453.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Re: ignite Connection refused exception

2016-07-21 Thread Zhengqingzheng
Hi Val,
Your suggestion works like a charm.
Thank you very much.

Best regards,
Kevin

From: Vladimir Ozerov [mailto:voze...@gridgain.com]
Sent: July 21, 2016 19:10
To: user@ignite.apache.org
Subject: Re: Re: ignite Connection refused exception

Hi Kevin,

The last error message suggests that it was a problem when trying to establish 
communication between two nodes on the same machine using shared memory 
mechanics. It is not clear now what is the reason for this, but I would try to 
disable shared memory and see if it helps.

In XML file please add the following bean:



...








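(The XML block above was stripped by the archive; judging from the programmatic snippet below it, it most likely looked like this — a hedged reconstruction:)

```xml
<!-- Hedged reconstruction of the stripped XML: disable the shared-memory
     communication endpoint by setting its port to -1. -->
<property name="communicationSpi">
    <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
        <property name="sharedMemoryPort" value="-1"/>
    </bean>
</property>
```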
And add the following line to your programmatic configuration:

TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
commSpi.setSharedMemoryPort(-1);

igniteCCF.setCommunicationSpi(commSpi);

Please let me know if it resolves the problem. Otherwise please attach the 
whole logs from all nodes.

Vladimir.

On Thu, Jul 21, 2016 at 12:03 PM, Zhengqingzheng 
> wrote:
Hi Denis,
I have configured TcpDiscoverySpi both on the server side and client side.
On the server side, my configuration is as follows:





 


10.120.70.122:47500..47509

10.120.89.196:47500..47509









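(The server-side XML was largely stripped by the archive; only the two address lines survived. A typical static-IP discovery configuration matching those fragments would be — a hedged reconstruction:)

```xml
<!-- Hedged reconstruction around the surviving address lines. -->
<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="ipFinder">
            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                <property name="addresses">
                    <list>
                        <value>10.120.70.122:47500..47509</value>
                        <value>10.120.89.196:47500..47509</value>
                    </list>
                </property>
            </bean>
        </property>
    </bean>
</property>
```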
On the client side, I am using java to configure the cache, here is the code:

   TcpDiscoveryVmIpFinder  ipFinder = new TcpDiscoveryVmIpFinder(false);

List<String> addrs = new ArrayList<String>();
addrs.add(IGNITE_NEW_ADDRESS);
addrs.add(IGNITE_ADDRESS);
ipFinder.setAddresses(addrs);


TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
discoverySpi.setIpFinder(ipFinder);

   // discoverySpi.setLocalAddress(instanceName);
   // discoverySpi.setLocalPort(47505);
igniteCCF.setDiscoverySpi(discoverySpi);



TcpCommunicationSpi commSpi = new TcpCommunicationSpi();

igniteCCF.setCommunicationSpi(commSpi);



//create ignite instance
ignite = Ignition.start(igniteCCF);


But I still cannot connect to the server. Or rather, it did connect to
the server, but then closed without communicating.
When I check the server console, I can see the client node join, but it returns
the following error:
 at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
 at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
 at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start SPI: 
TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackTimeout=5000, 
reconCnt=10, maxAckTimeout=60, forceSrvMode=false, 
clientReconnectDisabled=false]
 at 
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:258)
 at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:660)
 at 
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1505)
 ... 36 more
Caused by: class org.apache.ignite.spi.IgniteSpiException: Failed to connect to 
cluster, connection failed and failed to reconnect.
 at 
org.apache.ignite.spi.discovery.tcp.ClientImpl$Reconnector.body(ClientImpl.java:1287)
 at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)


I also checked the log; there are two types of error messages:

1.   
[21:15:30,801][ERROR][grid-time-coordinator-#50%null%][GridClockSyncProcessor] 
Failed to send time sync snapshot to remote node (did not leave grid?) 
[nodeId=de5a1f4c-d051-4dec-97d0-37da943ebd88, msg=GridClockDeltaSnapshotMessage 
[snapVer=GridClockDeltaVersion [ver=55, topVer=30], 
deltas={66198003-510d-4170-8b8f-5316c01f3d58=8740, 
8c38e19b-3aeb-4865-834b-ee6327913980=96665, 
8fb43d92-da97-4be9-8ecd-50c6456d0362=72055, 
2623a147-609e-4e64-97e3-e8a7fc9ccc42=96665, 
109e1f29-6c93-4e77-a6e0-ba09adbc79eb=-3227, 
6cbddaee-3fc9-4132-88ed-19ab1e5195a1=-5236, 
de5a1f4c-d051-4dec-97d0-37da943ebd88=-10946}], err=Failed to send message (node 
may have left the grid or TCP connection cannot be established due to firewall 
issues) [node=TcpDiscoveryNode [id=de5a1f4c-d051-4dec-97d0-37da943ebd88, 
addrs=[0:0:0:0:0:0:0:1, 10.135.66.169, 127.0.0.1], 
sockAddrs=[/0:0:0:0:0:0:0:1:0, /0:0:0:0:0:0:0:1:0, 
/10.135.66.169:0, /127.0.0.1:0], 
discPort=0, order=30, intOrder=20, 

Ignite Transactions and non-committed entries

2016-07-21 Thread juanavelez
We have a 4-node(server) setup. The configuration for each node includes a
cache configuration that it is both partitioned and transactional as well as
2 backups:

<beans xmlns="http://www.springframework.org/schema/beans"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://www.springframework.org/schema/beans
   http://www.springframework.org/schema/beans/spring-beans.xsd">



















hc1.dev.xxx.com:48500..48600
hc2.dev.xxx.com:48500..48600
hc3.dev.xxx.com:48500..48600
hc4.dev.xxx.com:48500..48600








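(The cache configuration XML was stripped by the archive; from the description above — partitioned, transactional, 2 backups — it presumably contained something like the following hedged reconstruction. The cache name "customers" is taken from the client code further down.)

```xml
<!-- Hedged reconstruction: partitioned, transactional cache with 2 backups. -->
<property name="cacheConfiguration">
    <bean class="org.apache.ignite.configuration.CacheConfiguration">
        <property name="name" value="customers"/>
        <property name="cacheMode" value="PARTITIONED"/>
        <property name="atomicityMode" value="TRANSACTIONAL"/>
        <property name="backups" value="2"/>
    </bean>
</property>
```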

We also have a test client that uses the same configuration and executes 10
transactions with each transaction inserting 1K keys. Before committing the
transaction, there is a pause(sleep) of 60 seconds.

In another client, using the same configuration, every 2 seconds we read the
size of the cache.

What we are seeing is that after the 1K entries are put, but before they are
committed, the other client already sees them (at least in size()). If we force
a rollback of that transaction (by killing the client), the
size-reading client stops seeing them and reverts back to the actual
committed size. What are we missing / not understanding?

Client that does the inserts

int maxTransactions = 10;
int maxEntriesPerTx = 1000;
int startKey = 0;
Logger logger = Logger.getLogger(TestIgnite.class.getName());
logger.info("Starting " + maxTransactions + " transactions");
for (int i = 0; i < maxTransactions; i++) {
logger.info("Working on transaction " + (i+1) + ". Inserting
keys=" +
  (startKey + maxEntriesPerTx * i) + ".." + (startKey - 1 +
maxEntriesPerTx * (i+1)));
Transaction tx =
ignite.transactions().txStart(TransactionConcurrency.PESSIMISTIC,
  
TransactionIsolation.READ_COMMITTED);
try {
for (int j = 0; j < maxEntriesPerTx; j++) {
int entryKey = startKey + maxEntriesPerTx * i + j;
cache.put(new Integer(entryKey), new
MyClass(entryKey+""));
}
logger.info("Finished putting. Sleeping for 60s");
try {
Thread.sleep(60000); // sleep for 60 seconds
} catch (InterruptedException e) {
e.printStackTrace();
}
logger.info("Finished sleeping");
tx.commit();
logger.info("Commit complete. Put " + (maxEntriesPerTx *
(i+1)) + " entries so far");
logger.info("Retrieving size for customers map. Size=" +
cache.size());
} catch (RuntimeException e) {
if ( e.getCause() != null && e.getCause() instanceof
ClusterTopologyException) {
logger.log(Level.WARNING, " occurred. Rolling back
and retrying transaction "
+ i--, e);
tx.rollback();
}
else
 throw e;
} finally {
tx.close();
}
}

Client that does the reading of the size

Logger logger = Logger.getLogger(TestIgnite.class.getName());
try (Ignite ignite = Ignition.start(cfg)) {
IgniteCache cache =
ignite.getOrCreateCache("customers");
while (true) {
logger.info("Size=" + cache.size());
try {
Thread.sleep(2000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}

Jul 21, 2016 3:35:02 PM org.apache.ignite.logger.java.JavaLogger info
INFO: Topology snapshot [ver=9, servers=4, clients=1, CPUs=24, heap=7.6GB]
Jul 21, 2016 3:35:02 PM org.apache.ignite.logger.java.JavaLogger info
INFO: Started cache [name=customers, mode=PARTITIONED]
Jul 21, 2016 3:35:02 PM com.juan.ignite.ClientIgnite main
INFO: Size=0
Jul 21, 2016 3:35:04 PM org.apache.ignite.logger.java.JavaLogger info
INFO: Added new node to topology: TcpDiscoveryNode
[id=69547ec4-f2b9-4365-85cf-bc7f64b35d34, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1,
172.19.131.36], sockAddrs=[/172.19.131.36:0, /0:0:0:0:0:0:0:1:0,
/127.0.0.1:0, /172.19.131.36:0], discPort=0, order=10, intOrder=8,

Re: REST API : Failed to find mandatory parameter in request: key

2016-07-21 Thread vkulichenko
Hi Abhishek,

I already responded, see my previous post:

vkulichenko wrote
> You should put URL in quotes, because it contains special symbols parsed
> by bash. Works for me this way. 
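For example (hedged: the host, port, cache name, and parameters here are made up), single quotes keep bash from interpreting the metacharacters:

```bash
# Without quotes, bash treats '&' as a background operator and cuts the URL
# at the first '&'. Single quotes pass the whole URL to curl intact.
curl 'http://localhost:8080/ignite?cmd=put&cacheName=myCache&key=k1&val=v1'
```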

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/REST-API-Failed-to-find-mandatory-parameter-in-request-key-tp6430p6449.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Question about Tier Propagation of Memory

2016-07-21 Thread vkulichenko
Hi,

OFFHEAP_TIERED mode by definition means that nothing is stored on-heap, so
the heap tier is removed from the picture. When you issue a get() to an
OFFHEAP_TIERED cache, the server will fetch the binary data directly from
off-heap memory and return it to the client without caching it in heap
memory.
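The mode being discussed is set on the cache configuration; a minimal sketch (property names as in Ignite 1.x CacheConfiguration — verify against your version):

```xml
<!-- Sketch: store entries off-heap only (OFFHEAP_TIERED), no on-heap tier.
     offHeapMaxMemory=0 means unlimited off-heap memory. -->
<property name="memoryMode" value="OFFHEAP_TIERED"/>
<property name="offHeapMaxMemory" value="0"/>
```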

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Question-about-Tier-Propagation-of-Memory-tp6435p6450.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Is there a way to configure the backup nodes for a cache?

2016-07-21 Thread juanavelez
We have a scenario with two physical locations, each with 2 nodes
(therefore a 4-node cluster): G1=N1,N2 and G2=N3,N4. We would like the
backups of cache entries whose primaries are located on nodes in G1 to be
located in G2, and vice versa. Unfortunately I don't see anything in the
documentation that supports this. Am I correct? If not, could someone
provide some docs/examples? If I am, would it be possible to submit this as
an enhancement?

Thanks - J



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Is-there-a-way-to-configure-the-backup-nodes-for-a-cache-tp6448.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: REST API : Failed to find mandatory parameter in request: key

2016-07-21 Thread abhishek jain
Hi Guys,

Please let me know if anybody knows about this error.

Abhishek



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/REST-API-Failed-to-find-mandatory-parameter-in-request-key-tp6430p6445.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Re: ignite Connection refused exception

2016-07-21 Thread Vladimir Ozerov
Hi Kevin,

The last error message suggests that it was a problem when trying to
establish communication between two nodes on the same machine using shared
memory mechanics. It is not clear now what is the reason for this, but I
would try to disable shared memory and see if it helps.

In XML file please add the following bean:


...









And add the following line to your programmatic configuration:

TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
*commSpi.setSharedMemoryPort(-1); *

igniteCCF.setCommunicationSpi(commSpi);

Please let me know if it resolves the problem. Otherwise please attach the
whole logs from all nodes.

Vladimir.

On Thu, Jul 21, 2016 at 12:03 PM, Zhengqingzheng 
wrote:

> Hi Denis,
>
> I have configured TcpDiscoverySpi both on the server side and client side.
>
> On the server side, my configuration is as follows:
>
>
>
> 
>
>  class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>
> 
>
>  class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>
>  
>
> 
>
>
> 10.120.70.122:47500..47509
>
>
> 10.120.89.196:47500..47509
>
>
>
> 
>
> 
>
> 
>
> 
>
> 
>
> 
>
>
>
>
>
> On the client side, I am using java to configure the cache, here is the
> code:
>
>
>
>TcpDiscoveryVmIpFinder  ipFinder = new
> TcpDiscoveryVmIpFinder(false);
>
>
>
> List addrs = new ArrayList();
>
> addrs.add(IGNITE_NEW_ADDRESS);
>
> addrs.add(IGNITE_ADDRESS);
>
> ipFinder.setAddresses(addrs);
>
>
>
>
>
> TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
>
> discoverySpi.setIpFinder(ipFinder);
>
>
>
>// discoverySpi.setLocalAddress(instanceName);
>
>// discoverySpi.setLocalPort(47505);
>
> igniteCCF.setDiscoverySpi(discoverySpi);
>
>
>
>
>
>
>
> TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
>
>
>
> igniteCCF.setCommunicationSpi(commSpi);
>
>
>
>
>
>
>
> //create ignite instance
>
> ignite = Ignition.start(igniteCCF);
>
>
>
>
>
> But  I still cannot connect to the server. Or I should see, it did
> connected to the server, however, closed without communication.
>
> When I check the server console, I can see the client node joined, but
> return the following error:
>
>  at
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>
>  at
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>
>  at
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
>
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start
> SPI: TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackTimeout=5000,
> reconCnt=10, maxAckTimeout=60, forceSrvMode=false,
> clientReconnectDisabled=false]
>
>  at
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:258)
>
>  at
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:660)
>
>  at
> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1505)
>
>  ... 36 more
>
> Caused by: class org.apache.ignite.spi.IgniteSpiException: Failed to
> connect to cluster, connection failed and failed to reconnect.
>
>  at
> org.apache.ignite.spi.discovery.tcp.ClientImpl$Reconnector.body(ClientImpl.java:1287)
>
>  at
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
>
>
>
>
>
> also I checked the log, the there are two types of error messages:
>
> 1.   
> [21:15:30,801][ERROR][grid-time-coordinator-#50%null%][GridClockSyncProcessor]
> Failed to send time sync snapshot to remote node (did not leave grid?)
> [nodeId=de5a1f4c-d051-4dec-97d0-37da943ebd88,
> msg=GridClockDeltaSnapshotMessage [snapVer=GridClockDeltaVersion [ver=55,
> topVer=30], deltas={66198003-510d-4170-8b8f-5316c01f3d58=8740,
> 8c38e19b-3aeb-4865-834b-ee6327913980=96665,
> 8fb43d92-da97-4be9-8ecd-50c6456d0362=72055,
> 2623a147-609e-4e64-97e3-e8a7fc9ccc42=96665,
> 109e1f29-6c93-4e77-a6e0-ba09adbc79eb=-3227,
> 6cbddaee-3fc9-4132-88ed-19ab1e5195a1=-5236,
> de5a1f4c-d051-4dec-97d0-37da943ebd88=-10946}], err=Failed to send message
> (node may have left the grid or TCP connection cannot be established due to
> firewall issues) [node=TcpDiscoveryNode
> [id=de5a1f4c-d051-4dec-97d0-37da943ebd88, addrs=[0:0:0:0:0:0:0:1,
> 10.135.66.169, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1:0,
> /0:0:0:0:0:0:0:1:0, /10.135.66.169:0, /127.0.0.1:0], discPort=0,
> order=30, intOrder=20, 

How to use the logger in the BackupFilter?

2016-07-21 Thread Jason
Hi Ignite team,

I want to implement a customized BackupFilter, called from the
RendezvousAffinityFunction, for my own cluster's needs, and I ran into a
problem: how do I get the logger in Ignite?

I've tried the ways below, but none of them works (NullPointerException).

1. Used the approach below, just like in RendezvousAffinityFunction, but it seems
the log cannot be injected automatically into my code, even though it works in
RendezvousAffinityFunction. Is any extra work needed for this?
/** Logger instance. */
@LoggerResource
private transient IgniteLogger log;

2. JavaLogger log = new JavaLogger();

3. U.warn(null, msg)

4. Tried passing the GridKernalContext to my class, then using IgniteLogger log
= ctx.log(myClass). It works when my class is created, but after
marshal/unmarshal it becomes null again.

BTW, I hard-coded my BackupFilter into the RendezvousAffinityFunction as the
default, not via config, because I use the .NET version and it's a little
complicated to use config for this right now.

Any suggestion on this? 

My detailed class is as below:
public class ScaleUnitBackFilter implements IgniteBiPredicate<ClusterNode, ClusterNode> {
/**
 * It's used by the JdkMarshaller
 */
private static final long serialVersionUID = -5036727407264096908L;

private static final long ReloadCheckIntervalInMilliSecond = 30;
/**
 * delay the loading to the first read
 */
private long lastLoadTime = 0;

private static final String ScaleUnitFilePath = 
"d:/data/machineinfo.csv";

private HashMap scaleUnitMap;

/** Logger instance. */
@LoggerResource
private transient IgniteLogger log;

public ScaleUnitBackFilter() {
scaleUnitMap = new HashMap();
}

@Override
public boolean apply(ClusterNode primaryNode, ClusterNode
backupNodeCandidate) {
long curTime = U.currentTimeMillis();
if (curTime - lastLoadTime >= ReloadCheckIntervalInMilliSecond) 
{
loadScaleUnitMap();
}

A.ensure(primaryNode.hostNames().size() >= 1, "Primary Node 
must have
hostname.");
A.ensure(backupNodeCandidate.hostNames().size() >= 1, "Backup 
Node must
have hostname.");

// Remove the domain in the full hostname
String pn = primaryNode.hostNames().toArray(new
String[0])[0].split("\\.")[0];
String bnc = backupNodeCandidate.hostNames().toArray(new
String[0])[0].split("\\.")[0];
LT.info(log, "PN: " + pn + ", BNC: " + bnc, false);

if (scaleUnitMap == null || scaleUnitMap.isEmpty()) {
LT.warn(log, null, "The machineinfo.csv file may be 
empty. !!!PAY MORE
ATTENTION!!!", false);
return true;
}

if (!scaleUnitMap.containsKey(primaryNode) ||
!scaleUnitMap.containsKey(backupNodeCandidate)) {
LT.warn(log, null, "One machine isn't in the 
machineinfo.csv. !!!PAY MORE
ATTENTION!!!", false);
return true;
}

MachineInfo pnInfo = scaleUnitMap.get(primaryNode);
LT.info(log, printMachineInfo(pn, pnInfo), false);
MachineInfo bncInfo = scaleUnitMap.get(backupNodeCandidate);
LT.info(log, printMachineInfo(bnc, bncInfo), false);

// If in the same scale unit or backup node isn't in 'H' 
status, don't
select it as the backup node
if (pnInfo.scaleUnit.equals(bncInfo.scaleUnit) ||
!"H".equals(bncInfo.status)) {
LT.info(log, "Backup Node Candidate is filtered!", 
false);
return false;
}

LT.info(log, "PN: " + pn + ", BN: " + bnc + " is selected!", 
false);

return true;
}

private String printMachineInfo(String machine, MachineInfo 
machineInfo) {
return machine + "[" + machineInfo.scaleUnit + ", " + 
machineInfo.status +
"]";
}

private synchronized void loadScaleUnitMap() {
// double check 
long curTime = U.currentTimeMillis();
if (curTime - lastLoadTime >= ReloadCheckIntervalInMilliSecond) 
{
return;
}

String line = null;
String csvSplitBy = ",";
BufferedReader br = null;

try {
br = new BufferedReader(new 
FileReader(ScaleUnitFilePath));
while ((line = br.readLine()) != null) {
String[] fields = line.split(csvSplitBy);
   

Re: Re: SQL join query return different result under the same data when having different ignite instance.

2016-07-21 Thread 胡永亮/Bob
Hi, Alexey:

First, thank you.

But I don't know how to configure the affinity for my SQL, because it is too
complex.

My sql is:
"SELECT * FROM Kc21, \"Kc24Cache\".Kc24"
+ ", \"Ka06Cache\".Ka06, \"Kc60Cache\".Kc60 "
+ " WHERE Kc21.akb020 = Kc24.akb020"
+ " AND Kc21.akc190 = Kc24.akc190"
+ " AND Kc24.akb020 = Kc60.akb020"
+ " AND Kc24.akc190 = Kc60.akc190"
+ " AND Kc24.aae072 = Kc60.aae072"
+ " AND Kc24.bka135 = Kc60.bka135"
+ " AND COALESCE (Kc24.bkc380, '0') = '0'"
+ " AND Kc24.ake010 BETWEEN '2014-1-1' AND '2015-1-1'"
+ " AND Kc21.akc193 = Ka06.akc193";

I have one question:
For object Kc21, its columns  akb020 and akc190 will join with columns 
akb020 and akc190 of object Kc24. 
And besides, the column akc193 of object Kc21 also will join with the 
column akc193 of object Ka06. 
According to doc, I don't know how to config two affinitys. 

Thanks.



Bob
 
From: Alexey Goncharuk
Date: 2016-07-21 16:00
To: user@ignite.apache.org
Subject: Re: SQL join query return different result under the same data when 
having different ignite instance.
Hi,

Ignite 1.6 requires data to be properly collocated in order for joins to work 
correctly. Namely, data being joined from tables Kc21 and Kc24 must be 
collocated. See [1] for more details on affinity collocation and [2] for more 
details on how SQL queries work. Also, take a look at 
org.apache.ignite.examples.datagrid.CacheQueryExample for correct collocation 
example.

There is a ticket [3] which will remove this restriction, and hopefully it 
will get into Ignite 1.7. You can watch this ticket for progress.

Hope this helps,
AG

---
[1] https://apacheignite.readme.io/docs/affinity-collocation
[2] https://apacheignite.readme.io/docs/sql-queries
[3] https://issues.apache.org/jira/browse/IGNITE-1232




Re: SQL join query return different result under the same data when having different ignite instance.

2016-07-21 Thread Alexey Goncharuk
Hi,

Ignite 1.6 requires data to be properly collocated in order for joins to
work correctly. Namely, data being joined from tables Kc21 and Kc24 must be
collocated. See [1] for more details on affinity collocation and [2] for
more details on how SQL queries work. Also, take a look
at org.apache.ignite.examples.datagrid.CacheQueryExample for correct
collocation example.

There is a ticket [3] which will remove this restriction, and hopefully it
will get into Ignite 1.7. You can watch this ticket for progress.

Hope this helps,
AG

---
[1] https://apacheignite.readme.io/docs/affinity-collocation
[2] https://apacheignite.readme.io/docs/sql-queries
[3] https://issues.apache.org/jira/browse/IGNITE-1232
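To make the collocation requirement concrete: a cache key is routed to a partition by hashing only its affinity key, so rows from two caches can be joined locally only if their affinity values hash to the same partition. The snippet below is a simplified, Ignite-free illustration of that routing, not Ignite's actual RendezvousAffinityFunction; the "H001" affinity value standing in for a shared akb020 column is hypothetical.

```java
import java.util.Objects;

// Simplified illustration of affinity-based routing: a key lands on the
// partition derived from its affinity field only. Two keys from different
// caches that share the same affinity value always map to the same
// partition, so a join between them can be executed locally.
class SimpleAffinity {
    static final int PARTS = 1024;

    // Partition = non-negative hash of the affinity key, modulo partition count.
    static int partitionFor(Object affinityKey) {
        return (Objects.hashCode(affinityKey) & Integer.MAX_VALUE) % PARTS;
    }

    public static void main(String[] args) {
        // Hypothetical keys for rows of Kc24 and Kc60 sharing akb020 = "H001".
        int p1 = partitionFor("H001");
        int p2 = partitionFor("H001");

        // Same affinity value -> same partition -> collocated join is possible.
        System.out.println(p1 == p2);
    }
}
```

This is also why a single table cannot be collocated two different ways at once: the routing function takes exactly one affinity key per key object.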


Re: igfs.withAsync is still synchronous in most file operations?

2016-07-21 Thread Vladimir Ozerov
Hi Nicolae,

This is not a very easy question.

First, "withAsync()" was introduced to IGFS mainly to support task
execution (methods "execute(...)"). For now it is pretty clear that these
methods are of very little use because there are much more convenient
frameworks to achieve the same goals - Hadoop and Spark. And IGFS can be
plugged into them easily. So I think there are rather high chances that
almost all async methods will be removed in Apache Ignite 2.0.

Second, we already have a kind of asynchrony for file writes: the special
DUAL_ASYNC mode flushes data to the secondary file system asynchronously.
Having two "flavors" of asynchrony makes the API complex and dirty.

That said, I still think that asynchronous execution of standard file system
operations like "mkdirs", "remove", etc. could be useful. E.g. removal of a
directory with a million files may take substantial time, and the user may
want this process to happen in the background.
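Until async counterparts exist, the application can simply submit the blocking removal to a background executor itself. The sketch below is a hypothetical, Ignite-free illustration using a local java.nio.file directory in place of IGFS; with IGFS, the runnable body would invoke the blocking igfs.delete(path, true) instead of the local recursive delete.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Stream;

// Submit a blocking recursive removal to a background executor so the
// caller can keep working while it completes.
class BackgroundDelete {
    // Blocking recursive delete (children before parents).
    static void deleteRecursively(Path root) throws IOException {
        try (Stream<Path> paths = Files.walk(root)) {
            paths.sorted(Comparator.reverseOrder()).forEach(p -> {
                try {
                    Files.delete(p);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("bg-delete-demo");
        for (int i = 0; i < 100; i++)
            Files.write(dir.resolve("file-" + i + ".txt"), "data".getBytes());

        // With IGFS this body would call the blocking igfs.delete(path, true)
        // instead of the local delete below.
        CompletableFuture<Void> removal = CompletableFuture.runAsync(() -> {
            try {
                deleteRecursively(dir);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });

        removal.join(); // Block only when the result is actually needed.
        System.out.println(Files.exists(dir)); // false: directory is gone.
    }
}
```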

I hope we will come to some clear solution in Apache Ignite 2.0 and most
methods will have async counterparts.

Vladimir.


On Thu, Jul 21, 2016 at 12:48 AM, vkulichenko  wrote:

> Hi,
>
> Async execution is supported only for methods that are marked with
> @IgniteAsyncSupported annotation. read/write/create/delete operations are
> not among them.
>
> I'm not aware of any plans to provide this support. Can someone else from
> the community chime in?
>
> -Val
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/igfs-withAsync-is-still-synchronous-in-most-file-operations-tp6420p6425.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>