Re: checkpoint marker is present on disk, but checkpoint record is missed in WAL

2017-10-12 Thread Dmitriy Setrakyan
KR, any chance you can provide a reproducer? It would really help us
properly debug your issue. If not, can we get a copy of your configuration?

On Thu, Oct 12, 2017 at 10:31 AM, KR Kumar  wrote:

> Hi AG,
>
> Thanks for responding to the thread. I have tried with 2.3 and I still face
> the same problem.
>
> Just to explore further, I killed the Ignite instance with kill -9 and also
> tried a reboot; in both situations, Ignite just hangs during restart.
>
> Thanx and Regards
> KR Kumar
>


Re: Indexing fields of non-POJO cache values

2017-10-12 Thread Alexey Kuznetsov
Just an idea.

What if we could declare a kind of "reference" or "alias" for fields in
such cases? That would help us avoid duplicating the data.

For example, in JavaScript I could (almost on the fly) declare getters and
setters that act as aliases for my data.


On Fri, Oct 13, 2017 at 12:39 AM, Andrey Kornev 
wrote:

> Hey Andrey,
>
> Thanks for your reply!
>
> We've been using a slightly different approach, where we extract the
> values of the indexable leaf nodes and store them as individual fields of
> the binary object along with the serialized tree itself. Then we configure
> the cache to use those fields as QueryEntities. It works fine and this way
> we avoid using joins in our queries.
>
> However an obvious drawback of such approach is data duplication. We end
> up with three copies of a field value:
>
> 1) the leaf node of the tree,
> 2) the field of the binary object, and
> 3) Ignite index
>
> I was hoping that there may be a better way to achieve this. In particular
> I'd like to avoid storing the value as a field of a binary object (copy #2).
>
> One possible (and elegant) approach to solving this problem would be to
> introduce a way to specify a method (or a closure) for a QueryEntity in
> addition to currently supported BinaryObject field/POJO attribute.
>
> Regards
> Andrey
>
> --
> *From:* Andrey Mashenkov 
> *Sent:* Thursday, October 12, 2017 6:25 AM
> *To:* user@ignite.apache.org
> *Subject:* Re: Indexing fields of non-POJO cache values
>
> Hi,
>
> Another way here is to implement your own query engine by extending
> IndexingSPI interface, which looks much more complicated.
>
> On Thu, Oct 12, 2017 at 4:23 PM, Andrey Mashenkov <
> andrey.mashen...@gmail.com> wrote:
>
>> Hi,
>>
>> There is no way to index such data as is. To index data you need to have
>> entry_field<->column mapping configured.
>> As a workaround here, leaves can be stored in cache as values.
>>
>> E.g. you can have a separate cache to index leaf nodes, where entries
>> will have 2 fields: "original tree key" field and "leaf node value" indexed
>> field.
>> So, you will be able to query serialized tree-like structures via SQL
>> query with JOIN condition on  "original tree key" and WHERE condition on
>> "leaf node value" field.
>> Obviously, you will need to implement intermediate logic to keep data of
>> both caches consistent.
>>
>>
>> On Wed, Oct 11, 2017 at 9:40 PM, Andrey Kornev 
>> wrote:
>>
>>> Hello,
>>>
>>> Consider the following use case: my cache values are a
>>> serialized tree-like structure (as opposed to a POJO). The leaf nodes of
>>> the tree are Java primitives. Some of the leaf nodes are used by the
>>> queries and should be indexed.
>>>
>>> What are my options for indexing such data?
>>>
>>> Thanks
>>> Andrey
>>>
>>
>>
>>
>> --
>> Best regards,
>> Andrey V. Mashenkov
>>
>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>



-- 
Alexey Kuznetsov


Re: Ignite Web Console login?

2017-10-12 Thread Gaurav Bajaj
Hi,

I was able to successfully create an account using the Docker image itself. I
haven't faced any issues.
Yes, you are right, there is no documentation regarding account creation.

Thanks,
Gaurav
On Oct 7, 2017 6:12 PM, "Pim D"  wrote:

> Hi,
>
> Unfortunately, creating a new account does not seem to work with the Web
> Console provided in the Docker image by Apache.
> I get a connection refused on http://localhost/api/v1/user/ when submitting
> the signup form (on http://localhost/).
> I could not find any information on accounts in the Apache documentation.
> Also, oddly, localhost is reachable, while 127.0.0.1 or my actual IP
> address is not...
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Indexing fields of non-POJO cache values

2017-10-12 Thread Andrey Kornev
Hey Andrey,

Thanks for your reply!

We've been using a slightly different approach, where we extract the values of 
the indexable leaf nodes and store them as individual fields of the binary 
object along with the serialized tree itself. Then we configure the cache to 
use those fields as QueryEntities. It works fine and this way we avoid using 
joins in our queries.

However, an obvious drawback of this approach is data duplication. We end up 
with three copies of a field value:

1) the leaf node of the tree,
2) the field of the binary object, and
3) Ignite index

I was hoping that there may be a better way to achieve this. In particular I'd 
like to avoid storing the value as a field of a binary object (copy #2).

One possible (and elegant) approach to solving this problem would be to 
introduce a way to specify a method (or a closure) for a QueryEntity in 
addition to currently supported BinaryObject field/POJO attribute.

Regards
Andrey


From: Andrey Mashenkov 
Sent: Thursday, October 12, 2017 6:25 AM
To: user@ignite.apache.org
Subject: Re: Indexing fields of non-POJO cache values

Hi,

Another way here is to implement your own query engine by extending IndexingSPI 
interface, which looks much more complicated.

On Thu, Oct 12, 2017 at 4:23 PM, Andrey Mashenkov 
> wrote:
Hi,

There is no way to index such data as is. To index data you need to have 
entry_field<->column mapping configured.
As a workaround here, leaves can be stored in cache as values.

E.g. you can have a separate cache to index leaf nodes, where entries will have 
2 fields: "original tree key" field and "leaf node value" indexed field.
So, you will be able to query serialized tree-like structures via SQL query 
with JOIN condition on  "original tree key" and WHERE condition on "leaf node 
value" field.
Obviously, you will need to implement intermediate logic to keep data of both 
caches consistent.


On Wed, Oct 11, 2017 at 9:40 PM, Andrey Kornev 
> wrote:
Hello,

Consider the following use case: my cache values are a serialized tree-like 
structure (as opposed to a POJO). The leaf nodes of the tree are Java 
primitives. Some of the leaf nodes are used by the queries and should be 
indexed.

What are my options for indexing such data?

Thanks
Andrey



--
Best regards,
Andrey V. Mashenkov



--
Best regards,
Andrey V. Mashenkov


Re: checkpoint marker is present on disk, but checkpoint record is missed in WAL

2017-10-12 Thread KR Kumar
Hi AG,

Thanks for responding to the thread. I have tried with 2.3 and I still face
the same problem.

Just to explore further, I killed the Ignite instance with kill -9 and also
tried a reboot; in both situations, Ignite just hangs during restart.

Thanx and Regards
KR Kumar



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Local node was dropped from cluster due to network problems cascading failures

2017-10-12 Thread zshamrock
Ok, we identified the root cause. It was not specifically related to
Ignite, but rather to the security settings (EC2 security group): we only
had inbound port 47100 open on the EC2 instance. But as you can see from the
original message, the error is about the nodes running on ports 47103 and
47104, in fact all ports except 47100.

There is `TcpCommunicationSpi`
https://apacheignite.readme.io/v1.9/docs/network-config#section-configuration,
which defines `setLocalPort` (default 47100) and `setLocalPortRange`, which
defaults to 100. My assumption is that because we are running multiple
services on the same machine, every Ignite client gets its own port,
starting from 47100 and going up to 47199 (see `setLocalPortRange` above).
So, as we are running multiple of them, only one gets port 47100, the others
get 47101 and 47102 (as we currently have a maximum of 3 running on the same
machine), and so on.
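Both settings are plain TcpCommunicationSpi properties, so the range the clients may bind to can be made explicit in the node configuration. A minimal Spring XML sketch (values mirror the defaults described above):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="communicationSpi">
        <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
            <!-- First port tried; defaults to 47100. -->
            <property name="localPort" value="47100"/>
            <!-- Up to 100 ports may be probed, i.e. 47100..47199. -->
            <property name="localPortRange" value="100"/>
        </bean>
    </property>
</bean>
```

With this configuration the security group needs the whole 47100-47199 range (plus the discovery port) open between all nodes, not just 47100.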

And they connect to the server node, which is listening on port 47500 (which
is open in the security group).

So during cluster startup, everything works fine.

But then, because ports 47101-... were not open on our app side, the server
could not reach back to any client apart from the one running on port 47100.

This is my theory (and at least opening those ports fixed the problem).

Of course, there is still the open question of why the client node starts to
fail only under load. I would expect there to be a periodic heartbeat, so the
server should have failed to reach the client nodes (the ones listening on
ports 47101-...) almost immediately after the cluster started.

But we only start seeing the error after a couple of hours, when the system
is in use.

Could you, please, comment on this?

Thank you.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to cancel IgniteRunnale on remote node?

2017-10-12 Thread Andrey Mashenkov
Hi,

1. The Ignite.executorService() method returns a JDK ExecutorService
interface implementation.
2. The ExecutorService.shutdown() method javadoc says:
* Initiates an orderly shutdown in which previously submitted
* tasks are executed, but no new tasks will be accepted.
* Invocation has no additional effect if already shut down.

So, nothing there says that running tasks will be terminated.

3. It looks like you have to use ExecutorService.shutdownNow(), which will
reject all queued tasks and interrupt the tasks that are running.
Javadoc:
* Attempts to stop all actively executing tasks, halts the
* processing of waiting tasks, and returns a list of the tasks
* that were awaiting execution.

Also, your tasks should support cancellation via the thread's interrupted
flag. Again from the javadoc:
* There are no guarantees beyond best-effort attempts to stop
* processing actively executing tasks. For example, typical
* implementations will cancel via {@link Thread#interrupt}, so any
* task that fails to respond to interrupts may never terminate.
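A minimal pure-JDK sketch of point 3 (the class and method names here are made up for illustration): a task that polls the interrupted flag on every loop iteration exits cleanly when shutdownNow() interrupts it, whereas a task that never checks the flag would keep running.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class ShutdownNowDemo {

    /**
     * Submits an interrupt-aware "consumer" task, then cancels it with
     * shutdownNow(). Returns true if the task observed the interrupt
     * and exited cleanly.
     */
    public static boolean runAndCancel() throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        CountDownLatch started = new CountDownLatch(1);
        AtomicBoolean sawInterrupt = new AtomicBoolean(false);

        pool.submit(() -> {
            started.countDown();
            // Cooperative cancellation: check the interrupted flag on
            // every iteration of the "message processing" loop.
            while (!Thread.currentThread().isInterrupted()) {
                // ... process the next message here ...
            }
            sawInterrupt.set(true);
        });

        started.await();                          // ensure the task is running
        pool.shutdownNow();                       // interrupts the running task
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return sawInterrupt.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("task cancelled cleanly: " + runAndCancel());
    }
}
```

The same pattern applies to an IgniteRunnable: its run() loop must check Thread.currentThread().isInterrupted() (or catch InterruptedException from blocking calls such as a Kafka poll) for shutdownNow() to have any effect on it.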



On Thu, Oct 12, 2017 at 6:02 AM, james wu  wrote:

> Hi:
>
>   I designed the Ignite cluster as follows:
> 1. Within the cluster, one role is the job submitter (client mode); the
> other role is the worker (server mode)
> 2. The job submitter submits an IgniteRunnable via an ExecutorService to a
> worker node
> 3. The IgniteRunnable is responsible for receiving and processing Kafka
> messages
> 4. The client-mode submit code adds a shutdown hook to close the Kafka
> consumer in the IgniteRunnable and to call ExecutorService.shutdown() for a
> graceful cancel of the remote IgniteRunnable when the client node exits
> 5. All server nodes start from the Ignite release binary package via
> ignite.sh; all custom code is packaged as a jar added to the Ignite libs
> dir. The client node starts from a Java main
> 6. When I terminate the client process, the log shows the shutdown hook was
> called, the Kafka consumer close was called, and executorService.shutdown()
> was called, but the remote IgniteRunnable is still running and processing
> Kafka messages
>
> Code like this:
>
> 1. Client submit code:
> public class IgniteKafkaOrderPaymentCompleteStreamingJob extends
> IgniteBaseJob {
>
> private static final int INITIAL_COUNT = 5;
>
> private static final String PAYMENT_COMPLETED_DATA_PROCESSOR =
> "paymentCompletedDataProcesser";
>
> public static void main(String[] args) {
>     String springConfigProperty = getSpringPropertiesSuffix();
>     ExecutorService executionService = createExecutionService(true,
>             IgniteJobConstants.IGNITE_CLUSTER_COMPUTE_ROLE);
>     List<IgniteKafkaPaymentCompleteConsumer> consumers = new ArrayList<>();
>     for (int i = 0; i < INITIAL_COUNT; i++) {
>         IgniteKafkaPaymentCompleteConsumer paymentCompleteStreamingConsumer =
>                 new IgniteKafkaPaymentCompleteConsumer();
>         paymentCompleteStreamingConsumer.setProcesserBeanName(
>                 PAYMENT_COMPLETED_DATA_PROCESSOR);
>         paymentCompleteStreamingConsumer.setDataProcesserClass(
>                 IOrderDataProcesser.class);
>         paymentCompleteStreamingConsumer.setConfigProperties(
>                 springConfigProperty);
>         consumers.add(paymentCompleteStreamingConsumer);
>         executionService.submit(paymentCompleteStreamingConsumer);
>     }
>     AddShutDownHock(executionService, consumers);
> }
>
> }
>
> public abstract class IgniteBaseJob {
>
> public static IgniteLogger log;
> private static String SPRING_PROFILE_KEY = "spring.profile.active";
>
> /**
>  * Creates the execution service
>  * @param clientMode
>  * @param roleInstance
>  * @return
>  */
> protected static ExecutorService createExecutionService(Boolean clientMode,
>         String roleInstance) {
>     Ignition.setClientMode(clientMode);
>     Ignite ignite = initializeIgniteContext("ignite-default.xml");
>     IgniteCluster cluster = ignite.cluster();
>     ClusterGroup worker = cluster.forAttribute(
>             IgniteJobConstants.IGNITE_CLUSTER_GROUP_KEY, roleInstance);
>     return ignite.executorService(worker);
> }
>
> /**
>  * Gets the JVM parameter that indicates which Spring profile is active.
>  *
>  * @return
>  */
> protected static String getSpringPropertiesSuffix() {
>     String springActiveProfile = System.getProperty(SPRING_PROFILE_KEY);
>     String springPropertiesSuffix = StringUtils.isBlank(springActiveProfile)
>             ? SpringPropertiesType.production.name() : springActiveProfile;
>     return springPropertiesSuffix;
> }
>
> protected static Ignite initializeIgniteContext() {
>     try {
>         Ignite ignite = Ignition.start(
>                 IgniteBaseJob.class.getClassLoader()
>                         .getResourceAsStream("fds-ignite-develop.xml"));
>         log = Ignition.ignite().log();

Re: Indexing fields of non-POJO cache values

2017-10-12 Thread Andrey Mashenkov
Hi,

There is no way to index such data as is. To index data you need to have
entry_field<->column mapping configured.
As a workaround here, leaves can be stored in cache as values.

E.g., you can have a separate cache to index the leaf nodes, where entries
have 2 fields: an "original tree key" field and an indexed "leaf node value"
field.
Then you will be able to query the serialized tree-like structures via an
SQL query with a JOIN condition on "original tree key" and a WHERE condition
on the "leaf node value" field.
Obviously, you will need to implement intermediate logic to keep the data of
both caches consistent.
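As a sketch of this two-cache workaround (the cache, type, and field names below are made up for illustration, not taken from the thread), the leaf-index cache could be declared with a QueryEntity like this:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="leafIndexCache"/>
    <property name="queryEntities">
        <list>
            <bean class="org.apache.ignite.cache.QueryEntity">
                <property name="keyType" value="java.lang.String"/>
                <property name="valueType" value="LeafEntry"/>
                <property name="fields">
                    <map>
                        <!-- Key of the original serialized tree. -->
                        <entry key="treeKey" value="java.lang.String"/>
                        <!-- Extracted leaf value to be indexed. -->
                        <entry key="leafValue" value="java.lang.String"/>
                    </map>
                </property>
                <property name="indexes">
                    <list>
                        <bean class="org.apache.ignite.cache.QueryIndex">
                            <constructor-arg value="leafValue"/>
                        </bean>
                    </list>
                </property>
            </bean>
        </list>
    </property>
</bean>
```

A query would then join back to the tree cache on the key, along the lines of `SELECT t._val FROM "trees".TreeHolder t JOIN "leafIndexCache".LeafEntry l ON t._key = l.treeKey WHERE l.leafValue = ?`.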


On Wed, Oct 11, 2017 at 9:40 PM, Andrey Kornev 
wrote:

> Hello,
>
> Consider the following use case: my cache values are a
> serialized tree-like structure (as opposed to a POJO). The leaf nodes of
> the tree are Java primitives. Some of the leaf nodes are used by the
> queries and should be indexed.
>
> What are my options for indexing such data?
>
> Thanks
> Andrey
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite cache transaction timeout

2017-10-12 Thread Andrey Mashenkov
Hi,

The transaction timed out because the transaction initiator node didn't
receive a response from another node in time for some reason.
It may be that an entry is locked by somebody else, or there are network
issues, GC pauses on the other node, or some bug.

There is actually no starvation in the striped pool, as the thread queue is
empty, but the long-running operation can be an issue,
as it can hold locks and delay other messages.

Also, a transaction can be blocked by a running partition map exchange on an
unstable topology.
Is it possible there are too many operations or large entries involved in
the transaction?
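If long but legitimate transactions are expected, the default transaction timeout can be raised cluster-wide via TransactionConfiguration; the value below is purely illustrative:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="transactionConfiguration">
        <bean class="org.apache.ignite.configuration.TransactionConfiguration">
            <!-- Default transaction timeout in milliseconds (example value). -->
            <property name="defaultTxTimeout" value="60000"/>
        </bean>
    </property>
</bean>
```

A timeout can also be passed per transaction to Ignite.transactions().txStart(), which is usually preferable to a blanket increase.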


On Thu, Oct 12, 2017 at 8:36 AM, iostream  wrote:

> Hi,
>
> We are observing cache transaction timeout in our Ignite v2.1 cluster. The
> setup comprises 20 ignite servers and 20 clients. Error log -
>
> spring_264202322.log:Caused by:
> org.apache.ignite.internal.transactions.IgniteTxTimeoutCheckedException:
> Cache transaction timed out: GridNearTxLocal [mappings=IgniteTxMappingsImpl
> [], nearLocallyMapped=false, colocatedLocallyMapped=false,
> needCheckBackup=null, hasRemoteLocks=true,
> thread=DefaultMessageListenerContainer-10, mappings=IgniteTxMappingsImpl
> [],
> super=GridDhtTxLocalAdapter [nearOnOriginatingNode=false, nearNodes=[],
> dhtNodes=[], explicitLock=false, super=IgniteTxLocalAdapter
> [completedBase=null, sndTransformedVals=false, depEnabled=false,
> txState=IgniteTxStateImpl [activeCacheIds=GridIntList [idx=2,
> arr=[-1836347052,159420608]], recovery=false, txMap=[IgniteTxEntry
> [key=com.walmart.node.commons.manager.ignite.setup.model.
> key.fulfill_order_key
> [idHash=1384394993, hash=789085753, fulfill_order_id=79218428],
> cacheId=-1836347052, txKey=IgniteTxKey
> [key=com.walmart.node.commons.manager.ignite.setup.model.
> key.fulfill_order_key
> [idHash=1384394993, hash=789085753, fulfill_order_id=79218428],
> cacheId=-1836347052], val=[op=TRANSFORM, val=null], prevVal=[op=TRANSFORM,
> val=null], oldVal=[op=NOOP, val=null], entryProcessorsCol=[IgniteBiTuple
> [val1=org.apache.ignite.internal.processors.query.h2.
> DmlStatementsProcessor$ModifyingEntryProcessor@75e749b8,
> val2=[Ljava.lang.Object;@4ee393d4]], ttl=-1, conflictExpireTime=-1,
> conflictVer=null, explicitVer=null, dhtVer=null, filters=[],
> filtersPassed=false, filtersSet=true, entry=GridDhtDetachedCacheEntry
> [super=GridDistributedCacheEntry [super=GridCacheMapEntry
> [key=com.walmart.node.commons.manager.ignite.setup.model.
> key.fulfill_order_key
> [idHash=1384394993, hash=789085753, fulfill_order_id=79218428],
> val=com.walmart.node.commons.manager.ignite.setup.model.fulfill_order
> [idHash=1411260389, hash=1683738400, create_userid=HOORDPRO,
> dspns_type_cd=1], startVer=1507208251042, ver=GridCacheVersion
> [topVer=117883976, order=1507204044952, nodeOrder=112], hash=789085753,
> extras=null, flags=0]]], prepared=0, locked=true,
> nodeId=d02e4a23-abe5-470d-a414-bfb9816ff494, locMapped=false,
> expiryPlc=null, transferExpiryPlc=false, flags=2, partUpdateCntr=0,
> serReadVer=null, xidVer=GridCacheVersion [topVer=117883976,
> order=1507208250156, nodeOrder=401]], IgniteTxEntry
> [key=com.walmart.node.commons.manager.ignite.setup.model.
> key.fulfill_order_line_key
> [idHash=669957679, hash=1510291529, fulfill_order_line_nbr=1,
> fulfill_order_id=79218428], cacheId=159420608, txKey=IgniteTxKey
> [key=com.walmart.node.commons.manager.ignite.setup.model.
> key.fulfill_order_line_key
> [idHash=669957679, hash=1510291529, fulfill_order_line_nbr=1,
> fulfill_order_id=79218428], cacheId=159420608], val=[op=TRANSFORM,
> val=null], prevVal=[op=TRANSFORM, val=null], oldVal=[op=NOOP, val=null],
> entryProcessorsCol=[IgniteBiTuple
> [val1=org.apache.ignite.internal.processors.query.h2.
> DmlStatementsProcessor$ModifyingEntryProcessor@688b99e3,
> val2=[Ljava.lang.Object;@3a24ca15]], ttl=-1, conflictExpireTime=-1,
> conflictVer=null, explicitVer=null, dhtVer=null, filters=[],
> filtersPassed=false, filtersSet=true, entry=GridDhtDetachedCacheEntry
> [super=GridDistributedCacheEntry [super=GridCacheMapEntry
> [key=com.walmart.node.commons.manager.ignite.setup.model.
> key.fulfill_order_line_key
> [idHash=669957679, hash=1510291529, fulfill_order_line_nbr=1,
> fulfill_order_id=79218428],
> val=com.walmart.node.commons.manager.ignite.setup.model.fulfill_order_line
> [idHash=518069123, hash=-1823543951, create_userid=HOORDPRO,
> src_item_secondary_desc=null, s], startVer=1507208251774,
> ver=GridCacheVersion [topVer=117883976, order=1507204044404,
> nodeOrder=112],
> hash=1510291529, extras=null, flags=0]]], prepared=0, locked=true,
> nodeId=d02e4a23-abe5-470d-a414-bfb9816ff494, locMapped=false,
> expiryPlc=null, transferExpiryPlc=false, flags=2, partUpdateCntr=0,
> serReadVer=null, xidVer=GridCacheVersion [topVer=117883976,
> order=1507208250156, nodeOrder=401, super=IgniteTxAdapter
> [xidVer=GridCacheVersion [topVer=117883976, order=1507208250156,
> nodeOrder=401], writeVer=null, implicit=false, loc=true, threadId=167,
> 

Re: Job Listeners

2017-10-12 Thread Alexey Kukushkin
Hi Chandrika,

I can run your task on 1 node OK (see the output below) and I really do not
see what might cause a deadlock in your code. You said "*with one node it
was always failing causing a deadlock*" - what do you mean by "failing"? Do
you see an exception? Can you reproduce the problem with verbose logging on
(start server node with -DIGNITE_QUIET=false) and share the output with us?

[15:47:07] Ignite node started OK (id=87591b77)
[15:47:07] Topology snapshot [ver=1, servers=1, clients=0, CPUs=8,
heap=3.6GB]
[15:47:20] Topology snapshot [ver=2, servers=1, clients=1, CPUs=8,
heap=7.1GB]
 executed the job 12   15:47:29:685
 executed the job 11   15:47:29:691
 executed the job 1    15:47:29:764
[15:47:29] Topology snapshot [ver=3, servers=1, clients=0, CPUs=8,
heap=3.6GB]


Re: Job Listeners

2017-10-12 Thread chandrika
Hello Alexey,

Even earlier I could make my code work on three nodes, but with one node it
was always failing with a deadlock. Please let me know how to go about this,
since the issue is with one node.

thanks 
chandrika



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Inserting data into Ignite got stuck when memory is full with persistent store enabled.

2017-10-12 Thread ilya.kasnacheev
Hello Ray!

Can you please share the cache configuration as well? There's nothing in your
configuration that stands out, so maybe I'll try to reproduce it on hardware.

Did the checkpointing tuning produce any measurable difference? Do you spot
anything in the Ignite logs when nodes get stuck that you can share?

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Java 9

2017-10-12 Thread Paolo Di Tommaso
Hi Denis,

Not even in compatibility mode? I mean, adding some --add-opens options
to access deprecated/internal APIs?

Almost any existing Java app works this way. I've tried that, but it
seems Ignite throws an exception because the Java version number does
not match the expected pattern.

Any workaround?

Cheers,
Paolo


On Thu, Oct 12, 2017 at 1:25 AM, Denis Magda  wrote:

> Hi Paolo,
>
> There is some work to do to make Ignite running on Java 9:
> https://issues.apache.org/jira/browse/IGNITE-4615
>
> Guess the version will be supported by the end of the year.
>
> —
> Denis
>
> On Oct 11, 2017, at 2:08 PM, Paolo Di Tommaso 
> wrote:
>
> Hi,
>
> Which is the minimal Ignite version that can run on Java 9?
>
> I'm trying Ignite 1.9 and I'm getting
>
>
> Caused by: java.lang.IllegalStateException: Ignite requires Java 7 or
> above. Current Java version is not supported: 9
> at org.apache.ignite.internal.IgnitionEx.(IgnitionEx.java:185)
>
>
>
>
> Thanks,
> Paolo
>
>
>


Re: Inserting data into Ignite got stuck when memory is full with persistent store enabled.

2017-10-12 Thread Ray
My Ignite config is as follows:

[The XML configuration was stripped by the mailing-list archive.]
It's stuck forever: I waited 10 more hours and the ingestion still hadn't
finished.
On the other memory-only Ignite cluster, without the persistent store
enabled, the job took 30 minutes to ingest 550 million entries.

Thanks for the suggestion, I'll try to add the checkpoint config.
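For reference, checkpointing in the 2.1/2.2 persistent store is tuned through PersistentStoreConfiguration; the values below are illustrative starting points under that assumption, not tuned recommendations:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="persistentStoreConfiguration">
        <bean class="org.apache.ignite.configuration.PersistentStoreConfiguration">
            <!-- How often a checkpoint is started, in ms (example value). -->
            <property name="checkpointingFrequency" value="180000"/>
            <!-- Checkpointing buffer size, in bytes (example: 1 GB). -->
            <property name="checkpointingPageBufferSize" value="#{1024L * 1024 * 1024}"/>
            <!-- Number of threads writing dirty pages to disk. -->
            <property name="checkpointingThreads" value="4"/>
        </bean>
    </property>
</bean>
```

If ingestion stalls once memory fills up, an undersized checkpoint page buffer is a common suspect, since writes block while the buffer is full during a checkpoint.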




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/