Re: FW: class loading, peer class loading, jars, fun times in ignite

2019-05-29 Thread Dave Harvey
With these symptoms, even when the programmer understands the rules, there is
still a somewhat frequent bug where "cache.withKeepBinary()" is what is
required, rather than simply "cache".

On Wed, May 29, 2019 at 12:29 PM Dmitriy Pavlov  wrote:

> Hi Scott,
>
> actually, users are encouraged to suggest edits to the documentation.
>
> Let me share the
> https://cwiki.apache.org/confluence/display/IGNITE/How+to+Document#HowtoDocument-Basics
> page.
> You may sign up on readme.io and suggest edits.
>
> Sincerely,
> Dmitriy Pavlov
>
> ср, 29 мая 2019 г. в 15:54, Scott Cote :
>
>> Ivan,
>>
>> I think that you gave me the right answer and confirmed my suspicion -
>> that peer class loading is only for just an executable and not data.
>>
>> Can I assist by helping edit the documentation on the Apache Ignite site
>> to add clarity on when a jar is needed in the lib folder?
>>
>> Also - I'll have to come up with a plan to flush out-of-date Java data
>> classes, or maybe you guys have some techniques that allow for online
>> migration from v1 of a Java data class to v2 without having to shut down
>> the whole set of VMs.
>>
>> Our system is pretty simple.
>>
>>
>> We use the caches.
>>
>> Not the streamers.
>>
>> Not the callables ...
>>
>> It’s a big fantastic cache 
>>
>> (With some stuff that I built on top of it - like a priority queue
>> framework)
>>
>> SCott
>>
>> -Original Message-
>> From: Павлухин Иван 
>> Sent: Wednesday, May 29, 2019 1:16 AM
>> To: user@ignite.apache.org
>> Subject: Re: FW: class loading, peer class loading, jars, fun times in
>> ignite
>>
>> Hi Scott,
>>
>> As far as I know, peer class loading does not work for data classes
>> (which are stored in a cache). It works for tasks sent for execution
>> using IgniteCompute.
>>
>> It is only a partial answer. Could you describe your use case in more
>> detail?
>>
>> вт, 28 мая 2019 г. в 23:35, Scott Cote :
>> >
>> > Whoops – sent to the wrong list …
>> >
>> >
>> >
>> > From: Scott Cote
>> > Sent: Tuesday, May 28, 2019 1:04 PM
>> > To: d...@ignite.apache.org
>> > Subject: class loading, peer class loading, jars, fun times in ignite
>> >
>> >
>> >
>> > I am fairly certain that I don’t know how to use peer class loading
>> properly.
>> >
>> >
>> >
>> > Am using Apache Ignite 2.7.  If I have a node running on 192.168.1.2
>> with a peer class loading enabled, and I start up a second node –
>> 192.168.1.3, client mode enabled and peer class loading enabled, then I
>> expected the following:
>> >
>> >
>> >
>> > Running the snippet (based on
>> https://apacheignite.readme.io/docs/getting-started#section-first-ignite-data-grid-application
>> ) on the client (192.168.1.3):
>> >
>> >
>> >
>> > try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
>> >
>> >     IgniteCache<Integer, MyWrapperOfString> cache =
>> >         ignite.getOrCreateCache("myCacheName");
>> >
>> >     // Store keys in cache (values will end up on different cache nodes).
>> >     for (int i = 0; i < 10; i++)
>> >         cache.put(i, new MyWrapperOfString(Integer.toString(i)));
>> >
>> >     for (int i = 0; i < 10; i++)
>> >         System.out.println("Got [key=" + i + ", val=" + cache.get(i) + ']');
>> > }
>> >
>> >
>> >
>> >
>> >
>> > Would cause the cache of “MyWrapperOfString” instances to be available
>> on 192.168.1.2 and on 192.168.1.3 .   Also be able to observe the cache
>> using visor, etc ….
>> >
>> >
>> >
>> > However – I instead get an error that the class “MyWrapperOfString” is
>> not available on 192.168.1.2.   Now if I take the jar that the class is
>> packed in, and place it in the lib folder, all is happy.
>> >
>> >
>> >
>> > Should I have to do this?
>> >
>> > If yes – how do I update the jar if I have a cluster of nodes doing
>> this?   Do I have to shutdown the entire cluster in order to not have class
>> loader problems?
>> >
>> > I thought the peer class loading is supposed to solve this problem.
>> >
>> >
>> >
>> > I think it would be VERY INSTRUCTIVE for the snippet that I anchored to
>> not use a standard java library cache object, but to demonstrate the need
>> to package the value object into a jar and put it into the lib folder (if
>> this is what is expected).   Running lambdas that use basic java
>> primitives is cool, but is this the norm?
>> >
>> >
>> >
>> > Switching up …. Is there interest in me creating a class loader that
>> would load java classes into the vm, that could be incorporated into
>> ignite?   So instead of reading a jar, you load the class bytes into a
>> cache.   You want to hot load a new class?  Fine! Pump it into the
>> DISTRIBUTED_CLASS_PATH_CACHE.
>> >
>> >
>> >
>> > Cheers.
>> >
>> >
>> >
>> > SCott
>> >
>> >
>>
>>
>>
>> --
>> Best regards,
>> Ivan Pavlukhin
>>
>

Disclaimer

The information contained in this communication from the sender is 
confidential. It is intended solely for use by the recipient and others 
authorized to receive it. If you are not the recipient, you are hereby notified 
that any disclosure, copying, distribution or taking action in relation of the 
contents of this information is strictly prohibited and may be unlawful.

Re: JMX port for Ignite in docker

2019-03-18 Thread Dave Harvey
We had found we needed to change this in ignite.sh along these lines, so we
only had to expose one port out of the container.   Otherwise you need to
expose the RMI port also.

# Newer Java versions (1.8.0_121+) allow the RMI port to be the same port.

if [ -n "$JMX_PORT" ]; then
    JMX_MON="-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=${JMX_PORT} \
-Dcom.sun.management.jmxremote.rmi.port=${JMX_PORT} \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false"
fi
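
The fragment above can be run standalone as a sanity check (the port value
here is hypothetical):

```shell
# Build the JMX options the way the modified ignite.sh does, sharing one
# port between JMX and RMI (supported on Java 1.8.0_121 and newer).
JMX_PORT=9000

if [ -n "$JMX_PORT" ]; then
    JMX_MON="-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=${JMX_PORT} \
-Dcom.sun.management.jmxremote.rmi.port=${JMX_PORT} \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false"
fi

# Print the resulting options string.
echo "$JMX_MON"
```

With this, only ${JMX_PORT} needs to be exposed from the container.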

On Mon, Mar 18, 2019 at 9:36 AM newigniter 
wrote:

> I also have a problem with this so if someone can help..!
> Tnx
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>



Re: Different paths for storagePath and WAL from docker

2019-02-15 Thread Dave Harvey
https://apacheignite.readme.io/docs/docker-deployment
shows

sudo docker run -it --net=host -e "CONFIG_URI=$CONFIG_URI"
[-e "OPTION_LIBS=$OPTION_LIBS"]
[-e "JVM_OPTS=$JVM_OPTS"]
...

$CONFIG_URI can be an https:// URL to the XML configuration file.
A configuration file can say to use ENVIRONMENT variables, which is what I
did.

The path that you put in must be a path that is visible inside the docker
container, e.g., some shared storage that is mounted in the container.
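
A minimal sketch of such a configuration file, using Spring's SpEL to read
environment variables (the IGNITE_PERSISTENT_STORE variable name is taken
from the question quoted below and is only an example):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <!-- Paths resolved from the container's environment at startup;
                 they must point at storage visible inside the container -->
            <property name="storagePath"
                      value="#{systemEnvironment['IGNITE_PERSISTENT_STORE']}/persistence"/>
            <property name="walPath"
                      value="#{systemEnvironment['IGNITE_PERSISTENT_STORE']}/wal"/>
            <property name="walArchivePath"
                      value="#{systemEnvironment['IGNITE_PERSISTENT_STORE']}/wal/archive"/>
        </bean>
    </property>
</bean>
```

The same file can then be served over HTTP and passed as $CONFIG_URI, with
only the environment varying per deployment.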

On Fri, Feb 15, 2019 at 7:56 AM newigniter 
wrote:

> Thanks for your reply but I don't quite understand.
> How do you load this common spring file and how do you pass it to ignite on
> docker run?
> What is the example value for this:
> #{systemEnvironment['IGNITE_PERSISTENT_STORE']}
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>



Re: Different paths for storagePath and WAL from docker

2019-02-15 Thread Dave Harvey
We have a common spring file accessible via HTTP.  Inside it we reference
environment variables (the XML fragment was stripped by the archive), and we
vary the environment variables per deployment.







NOTE: the work directory has state that also needs to be persistent.
Persistent copies of the above are worthless without a persistent copy of
the binary object schemas, etc.
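
The work directory can be pinned the same way; a sketch (workDirectory is
the standard IgniteConfiguration property, the environment variable name is
hypothetical):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Keep the work directory on persistent storage too: without the
         binary object schemas stored here, the persisted data is unreadable -->
    <property name="workDirectory"
              value="#{systemEnvironment['IGNITE_WORK_DIR']}"/>
</bean>
```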



On Fri, Feb 15, 2019 at 6:12 AM newigniter 
wrote:

> I have ignite deployed in docker.
> I was studying Separate Disk Device for WAL part of docks:
>
> https://apacheignite.readme.io/docs/durable-memory-tuning#native-persistence-related-tuning
> .
>
> How can I configure this if I use docker? I have this deployed on ec2
> machine where I have two separate drives.
> In the docks, I see that storagePath, walPath & walArchivePath properties
> can be set in xml configuration which I can pass to the docker but I don't
> think I can use this here? If yes, how exactly?
>
> Any help is appreciated.
> Regards,
> Tomislav
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>



Re: Avoiding Docker Bridge network when using S3 discovery

2018-12-21 Thread Dave Harvey
Created https://jira.apache.org/jira/browse/IGNITE-10791 for this




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite in docker (Native Persistence)

2018-12-18 Thread Dave Harvey
"value" : "${IGNITE_STRIPED_POOL_SIZE}"

},

{

"name" : "IGNITE_ASYNC_CALLBACK_POOL_SIZE",

"value" : "${IGNITE_ASYNC_CALLBACK_POOL_SIZE}"

},

{

"name" : "JVM_ADDITIONAL_OPTS",

"value" : "${JVM_ADDITIONAL_OPTS}"

}



],

"name": "IgniteImage",

"mountPoints": [

{

"sourceVolume": "ContainerLogs",

"containerPath": "/ContainerLogs"

},



{

"sourceVolume": "IgnitePersistenceStorage",

"containerPath": "/IgnitePersistenceStorage"

},

{

"sourceVolume": "SnapshotSocket",

"containerPath": "/var/run/jobcase-snapshot.sock"

}

],

"image": "
jobcase-platform-docker.jfrog.io/apacheignite-jobcase:${IGNITE_IMAGE_TAG}",

"portMappings": [

{

"protocol": "tcp",

"containerPort": 11211,

"hostPort": 11211

},

{

"protocol": "tcp",

"containerPort": 47100,

"hostPort": 47100

},

{

"protocol": "tcp",

"containerPort": 47500,

"hostPort": 47500

},

{

"protocol": "tcp",

"containerPort": 9000,

"hostPort": 9000

},

{

"protocol": "tcp",

"containerPort": 10800,

"hostPort": 10800

}

],

"logConfiguration": {

"logDriver": "json-file"

},

"healthCheck": {



"command": ["CMD-SHELL", "\${IGNITE_HOME}/bin/control.sh
--baseline | grep 'Cluster state: active'" ],

"interval": 30,

"timeout": 30,

"retries": 3,

"startPeriod": 300

 },

"ulimits": [

{

"softLimit": 100,

"name": "nofile",

"hardLimit": 100

}

],

"memoryReservation" : $MEMORY_RESERVATION,


"essential": true,

"volumesFrom": [],

"dockerLabels": {

"com.datadoghq.ad.check_names": "[\"jmx\"]",

"com.datadoghq.ad.instances": "[ {\"host\": \"localhost
\", \"port\":\"9000\"}]",



"com.datadoghq.ad.init_configs": "[{}]"

}

}

]

}

EOF

On Tue, Dec 18, 2018 at 9:08 AM Dave Harvey  wrote:

> See attached, which we use in our AWS ECS containers.
>
> Note that besides WAL and data, the work directory needs persistence,
> because it has all the typeID mappings.
>
> On Tue, Dec 18, 2018 at 7:32 AM Павлухин Иван  wrote:
>
>> Hi Rahul,
>>
>> Could you please share an ignite configuration and how do you launch a
>> docker container with Ignite?
>> Do you see something in your ignitedata/persistence ignitedata/wal
>> ignitedata/wal/archive after container stop?
>> I guess you can configure a consistentId by configuring
>> IgniteConfiguration bean property:
>> <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>>   <property name="consistentId" value="..."/>
>>   ...
>> </bean>
>> вт, 18 дек. 2018 г. в 12:57, RahulMetangale > >:
>> >
>> > Hi All,
>> >
>> > I followed following steps for persistence in docker but i am observing
>> that
>> > cache is not retained after restart. From documentation i see that
>> > consistentId needs to be set to retain cache after restart but I am not
>> sure
>> > how it can set in configuration xml file. Any help is greatly
>> appreciated.
>> >
>> > Here are the steps i followed:
>> > 1. Created following folder on docker host in var directory
>> > mkdir -p ignitedata/persistence ignitedata/wal ignitedata/wal/archive
>> > 2. Updated default-config.xml
>> >
>> >
>> > 3. Ran following command to deploy ignite docker container. I updated
>> > default-config.xml inside container hence i did not pass the CONFIG_URI.
>> >
>> >
>> >
>> >
>> >
>> > --
>> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>>
>>
>> --
>> Best regards,
>> Ivan Pavlukhin
>>
>>



Re: Ignite in docker (Native Persistence)

2018-12-18 Thread Dave Harvey
See attached, which we use in our AWS ECS containers.

Note that besides WAL and data, the work directory needs persistence,
because it has all the typeID mappings.

On Tue, Dec 18, 2018 at 7:32 AM Павлухин Иван  wrote:

> Hi Rahul,
>
> Could you please share an ignite configuration and how do you launch a
> docker container with Ignite?
> Do you see something in your ignitedata/persistence ignitedata/wal
> ignitedata/wal/archive after container stop?
> I guess you can configure a consistentId by configuring
> IgniteConfiguration bean property:
> <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>   <property name="consistentId" value="..."/>
>   ...
> </bean>
> вт, 18 дек. 2018 г. в 12:57, RahulMetangale :
> >
> > Hi All,
> >
> > I followed following steps for persistence in docker but i am observing
> that
> > cache is not retained after restart. From documentation i see that
> > consistentId needs to be set to retain cache after restart but I am not
> sure
> > how it can set in configuration xml file. Any help is greatly
> appreciated.
> >
> > Here are the steps i followed:
> > 1. Created following folder on docker host in var directory
> > mkdir -p ignitedata/persistence ignitedata/wal ignitedata/wal/archive
> > 2. Updated default-config.xml
> >
> >
> > 3. Ran following command to deploy ignite docker container. I updated
> > default-config.xml inside container hence i did not pass the CONFIG_URI.
> >
> >
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>
> --
> Best regards,
> Ivan Pavlukhin
>
>





[Attachment: Spring configuration — most of its content was stripped by the
mailing-list archive; only the header and the discovery address 172.17.0.1
are recoverable.]

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/util
       http://www.springframework.org/schema/util/spring-util.xsd">
    <!-- ... configuration stripped; it listed the address 172.17.0.1 ... -->
</beans>




Re: Avoiding Docker Bridge network when using S3 discovery

2018-12-04 Thread Dave Harvey
We didn't see a way to use setLocalAddress.  Each container has a different
IP address, but the bridge network presents the same IP address to each
container.  Therefore, we would only know what to use for the local address
by enumerating all addresses, and eliminating the bridge network.
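
For reference, the setting Stan mentions below is a plain TcpDiscoverySpi
property; a sketch, assuming the host's real address is known up front
(10.32.97.32 is taken from the visor output quoted below):

```xml
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <!-- Only this address is registered with the IP finder, keeping the
         docker bridge address (172.17.0.1) out of the S3 bucket -->
    <property name="localAddress" value="10.32.97.32"/>
</bean>
```

As noted above, this only helps if each container can determine its own
non-bridge address before startup.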

On Mon, Dec 3, 2018 at 3:31 PM Stanislav Lukyanov 
wrote:

> Hi,
>
>
>
> Have you been able to solve this?
>
> I think specifying TcpDiscoverySpi.localAddress should work.
>
>
>
> Stan
>
>
>
> *From: *Dave Harvey 
> *Sent: *17 октября 2018 г. 20:10
> *To: *user@ignite.apache.org
> *Subject: *Avoiding Docker Bridge network when using S3 discovery
>
>
>
> When we use S3 discovery and Ignite containers running under ECS using
> host networking, the S3 bucket ends up with 172.17.0.1#47500 along with the
> other server addresses.   Then on cluster startup we must wait for the
> network timeout.   Is there a way to avoid having this address pushed to
> the S3 bucket?
>
> Visor shows:
>
> | Address (0) | 10.32.97.32  |
>
> | Address (1) | 172.17.0.1   |
>
> | Address (2) | 127.0.0.1
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>



Re: Snapshotting and Restore in Ignite

2018-11-14 Thread Dave Harvey
GridGain has some kind of snapshotting add-in.

You can save and restore the workDirectory from each node when the cluster
is in a stable state, provided that you use the same CONSISTENT_ID when
restoring.  We were able to convert the directory name back into a
consistent ID on restore, but we had to hack around "-".   If you are careful
about your original consistent_id, then the directory name will be the same
as the consistent id.

We have a prototype running that creates cross-node coherent LVM snapshots
on a running cluster, by hooking into the topology discovery process and
taking the snapshot on each node while transactions are frozen, but it is
not quite ready to share.
(Our earlier prototype failed because if we used a Linux fsfreeze and
then tried to do an LVM snapshot in that state, the snapshot deadlocked.
We were never able to understand how an FS-level operation holds locks
that are needed for a volume-level operation.)

Note: You can control where the persistent data is stored (storagePath,
walPath, walArchivePath).  [The configuration example attached here was
stripped by the archive.]

On Wed, Nov 14, 2018 at 9:32 AM ilya.kasnacheev 
wrote:

> Hello!
>
> Currently there is no snapshotting in Apache Ignite. You could restore a
> different cluster from shutdown nodes' persistence directories but not from
> live nodes'.
>
> There are third party commercial implementations of snapshotting for
> Ignite.
>
> Regards,
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>



Re: CPU Count Drop On ECS

2018-10-26 Thread Dave Harvey
Reverting the AWS AMI did not revert the ECS agent version.

On Fri, Oct 26, 2018 at 6:11 PM Dave Harvey  wrote:

> I was running a 8x i3.8xlarge cluster on AWS ECS, and it would normally 
> display
>
> ^-- H/N/C [hosts=8, nodes=8, CPUs=256]
>
>
> Then I recreated it and got this, and problems like I can no longer start 
> visor.
>
> ^-- H/N/C [hosts=8, nodes=8, CPUs=8]
>
>
> Our ECS task specifies CPU=0 -> 1 share.   I'm reverting the AMI to an older 
> version, but expect that the following was the trigger:
>
>
> https://github.com/aws/amazon-ecs-agent/pull/1480
>
>
>



CPU Count Drop On ECS

2018-10-26 Thread Dave Harvey
I was running a 8x i3.8xlarge cluster on AWS ECS, and it would normally display

^-- H/N/C [hosts=8, nodes=8, CPUs=256]


Then I recreated it and got this, and problems like I can no longer start visor.

^-- H/N/C [hosts=8, nodes=8, CPUs=8]


Our ECS task specifies CPU=0 -> 1 share.   I'm reverting the AMI to an
older version, but expect that the following was the trigger:


https://github.com/aws/amazon-ecs-agent/pull/1480



Avoiding Docker Bridge network when using S3 discovery

2018-10-17 Thread Dave Harvey
When we use S3 discovery and Ignite containers running under ECS using host
networking, the S3 bucket ends up with 172.17.0.1#47500 along with the other
server addresses.   Then on cluster startup we must wait for the network
timeout.   Is there a way to avoid having this address pushed to the S3
bucket?
Visor shows:

| Address (0) | 10.32.97.32  |

| Address (1) | 172.17.0.1   |

| Address (2) | 127.0.0.1



Re: Query 3x slower with index

2018-10-11 Thread Dave Harvey
"Ignite will only use one index per table"

I assume you mean "Ignite will only use one index per table per query"?

On Thu, Oct 11, 2018 at 1:55 PM Stanislav Lukyanov 
wrote:

> Hi,
>
>
>
> It is a rather lengthy thread and I can’t dive into details right now,
>
> but AFAICS the issue now is making affinity key index to work with a
> secondary index.
>
> The important things to understand is
>
>1. Ignite will only use one index per table
>2. In case of a composite index, it will apply the columns one by one
>3. The affinity key index should always go first as the first step is
>splitting the query by affinity key values
>
>
>
> So, to use index over the affinity key (customer_id) and a secondary index
> (category_id) one needs to create an index
>
> like (customer_id, category_id), in that order, with no columns in between.
>
> Note that index (customer_id, dt, category_id) can’t be used instead of it.
>
> On the other hand, (customer_id, category_id, dt) can - the last part of
> the index will be left unused.
>
>
>
> Thanks,
>
> Stan
>
>
>
> *From: *eugene miretsky 
> *Sent: *9 октября 2018 г. 19:40
> *To: *user@ignite.apache.org
> *Subject: *Re: Query 3x slower with index
>
>
>
> Hi Ilya,
>
>
>
> I have tried it, and got the same performance as forcing using category
> index in my initial benchmark - query is 3x slowers and uses only one
> thread.
>
>
>
> From my experiments so far it seems like Ignite can either (a) use
> affinity key and run queries in parallel, (b) use index but run the query
> on only one thread.
>
>
>
> Has anybody been able to run OLAP like queries in while using an index?
>
>
>
> Cheers,
>
> Eugene
>
>
>
> On Mon, Sep 24, 2018 at 10:55 AM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
> Hello!
>
>
>
> I guess that using AFFINITY_KEY as index have something to do with the
> fact that GROUP BY really wants to work per-partition.
>
>
>
> I have the following query for you:
>
>
>
> 1: jdbc:ignite:thin://localhost> explain Select count(*) FROM( Select
> customer_id from (Select customer_id, product_views_app, product_clict_app
> from GA_DATA ga join table(category_id int = ( 117930, 175930,
> 175940,175945,101450)) cats on cats.category_id = ga.category_id) data
> group by customer_id having SUM(product_views_app) > 2 OR
> SUM(product_clict_app) > 1);
> PLAN  SELECT
> DATA__Z2.CUSTOMER_ID AS __C0_0,
> SUM(DATA__Z2.PRODUCT_VIEWS_APP) AS __C0_1,
> SUM(DATA__Z2.PRODUCT_CLICT_APP) AS __C0_2
> FROM (
> SELECT
> GA__Z0.CUSTOMER_ID,
> GA__Z0.PRODUCT_VIEWS_APP,
> GA__Z0.PRODUCT_CLICT_APP
> FROM TABLE(CATEGORY_ID INTEGER=(117930, 175930, 175940, 175945,
> 101450)) CATS__Z1
> INNER JOIN PUBLIC.GA_DATA GA__Z0
> ON 1=1
> WHERE CATS__Z1.CATEGORY_ID = GA__Z0.CATEGORY_ID
> ) DATA__Z2
> /* SELECT
> GA__Z0.CUSTOMER_ID,
> GA__Z0.PRODUCT_VIEWS_APP,
> GA__Z0.PRODUCT_CLICT_APP
> FROM TABLE(CATEGORY_ID INTEGER=(117930, 175930, 175940, 175945,
> 101450)) CATS__Z1
> /++ function ++/
> INNER JOIN PUBLIC.GA_DATA GA__Z0
> /++ PUBLIC.GA_CATEGORY_ID: CATEGORY_ID = CATS__Z1.CATEGORY_ID ++/
> ON 1=1
> WHERE CATS__Z1.CATEGORY_ID = GA__Z0.CATEGORY_ID
>  */
> GROUP BY DATA__Z2.CUSTOMER_ID
>
> PLAN  SELECT
> COUNT(*)
> FROM (
> SELECT
> __C0_0 AS CUSTOMER_ID
> FROM PUBLIC.__T0
> GROUP BY __C0_0
> HAVING (SUM(__C0_1) > 2)
> OR (SUM(__C0_2) > 1)
> ) _18__Z3
> /* SELECT
> __C0_0 AS CUSTOMER_ID
> FROM PUBLIC.__T0
> /++ PUBLIC."merge_scan" ++/
> GROUP BY __C0_0
> HAVING (SUM(__C0_1) > 2)
> OR (SUM(__C0_2) > 1)
>  */
>
>
>
> However, I'm not sure it is "optimal" or not since I have no idea if it
> will perform better or worse on real data. That's why I need a subset of
> data which will make query execution speed readily visible. Unfortunately,
> I can't deduce that from query plan alone.
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> пн, 24 сент. 2018 г. в 16:14, eugene miretsky :
>
> An easy way to reproduce would be to
>
>
>
> 1. Create table
>
> CREATE TABLE GA_DATA (
>
> customer_id bigint,
>
> dt timestamp,
>
> category_id int,
>
> product_views_app int,
>
> product_clict_app int,
>
> product_clict_web int,
>
> product_views_web int,
>
> PRIMARY KEY (customer_id, dt, category_id)
>
> ) WITH "template=ga_template, backups=0, affinityKey=customer_id";
>
>
>
> 2. Create indexes
>
> · CREATE INDEX ga_customer_id ON GA_Data (customer_id)
>
> · CREATE INDEX ga_pKey ON GA_Data (customer_id, dt, category_id)
>
> · CREATE INDEX ga_category_and_customer_id ON GA_Data
> (category_id, customer_id)
>
> · CREATE INDEX ga_category_id ON GA_Data (category_id)
>
> 3. Run Explain on the following queries while trying forcing using
> different indexes
>
> · Select count(*) FROM(
>
> Select customer_id from GA_DATA  use index 

Re: Message grid failure due to userVersion setting

2018-09-18 Thread Dave Harvey
Thanks Ilya,

As I understand this a bit more, it seems like IGNITE-7905
<https://issues.apache.org/jira/browse/IGNITE-7905> is really the same
basic flaw in userVersion not working as documented.  The IGNITE-7905
reproduction is simply to set a non-zero userVersion in an ignite.xml (
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/DeploymentMode.html),
and then:
// Connect to the cluster.
Ignite ignite = Ignition.start();

// Activate the cluster. Automatic topology initialization occurs
// only if you manually activate the cluster for the very first time.
ignite.cluster().active(true);


The activation then throws an exception on the server, because the server
already has the same built-in Ignite class.

As I understand the documentation, since the built-in Ignite class is not
excluded, it should not even consider peer class loading, because the class
exists locally.   It should just use the local class.

-DH
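
For context, a userVersion is declared as a Spring bean in a
META-INF/ignite.xml file on the node's classpath; a minimal sketch ("0" is
the default version):

```xml
<!-- META-INF/ignite.xml: sets the deployment "user version" for this node -->
<bean id="userVersion" class="java.lang.String">
    <constructor-arg value="0"/>
</bean>
```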

On Tue, Sep 18, 2018 at 9:00 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> I'm not familiar with these areas very much, but if you had a reproducer
> project I could take a look.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, Sep 17, 2018 at 19:32, Dave Harvey :
>
>> I probably did not explain this clearly.  When sending a message from
>> server to client using the message grid, from a context unrelated to any
>> client call, the server, as you would expect, uses its installed libraries
>> and userVersion 0. For some reason, when the client receives this
>> message, it requires that the user version match its current user version.
>>
>> The use case is we have a stable set of libraries on the server, and the
>> server wants to send a topic-based message to the client, using only the
>> type "String".   Unrelated to this, the client is using the compute grid,
>> where P2P is used, but that is interfering with basic functionality.
>> This, IGNITE-7905 <https://issues.apache.org/jira/browse/IGNITE-7905>,
>> and the paucity of results when I google for "ignite userVersion" make
>> it clear that shooting down classes in CONTINUOUS mode with userVersion is
>> not completely thought through.  We certainly never want to set a
>> userVersion on the servers.
>>
>> The documentation for P2P says:
>> "
>>
>>1. Ignite will check if class is available on local classpath (i.e.
>>if it was loaded at system startup), and if it was, it will be returned. 
>> No
>>class loading from a peer node will take place in this case."
>>
>> Clearly, java.lang.String is on the local classpath. So it seems like
>> a user version mismatch should not be a reason to reject a class that is on
>> the local classpath.
>>
>> On Mon, Sep 17, 2018 at 11:01 AM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> I think that Ignite cannot unload old version of code, unless it is
>>> loaded with something like URI deployment module.
>>> Version checking is there but server can't get rid of old code if it's
>>> on classpath.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Mon, Sep 17, 2018 at 16:47, Dave Harvey :
>>>
>>>> We have a client that uses the compute grid and message grid, as well
>>>> as the discovery API.  It communicates with a server plugin.   The cluster
>>>> is configured for CONTINUOUS peer class loading.  In order to force the
>>>> proper code to be loaded for the compute tasks, we change the user version,
>>>> e.g., to 2.
>>>>
>>>> If the server sends the client a message on the message grid, using
>>>> java.lang.String, the client fails because the user version sent for
>>>> java.lang.String is 0, but the client insists on 2.
>>>>
>>>> How is this supposed to work?   Our expectation was that the message
>>>> grid should not be affected by peer class loading settings.
>>>>
>>>>
>>>>
>>>>
>>>> *Disclaimer*
>>>>
>>>> The information contained in this communication from the sender is
>>>> confidential. It is intended solely for use by the recipient and others
>>>> authorized to receive it. If you are not the recipient, you are hereby
>>>> notified that any disclosure, copying, distribution or taking action in
>>>> relation of the contents of this information is strictly prohibited and may
>>>> be unlawful.
>>>>
>>>> This email has been scanned for viruses and malware, and may have been
>>>> automatically archived by *Mimecast Ltd*.

Re: Message grid failure due to userVersion setting

2018-09-17 Thread Dave Harvey
I probably did not explain this clearly.  When sending a message from
server to client using the message grid, from a context unrelated to any
client call, the server, as you would expect, uses its installed libraries
and userVersion 0. For some reason, when the client receives this
message, it requires that the user version match its current user version.

The use case is we have a stable set of libraries on the server, and the
server wants to send a topic-based message to the client, using only the
type "String".   Unrelated to this, the client is using the compute grid,
where P2P is used, but that is interfering with basic functionality.
This, IGNITE-7905 <https://issues.apache.org/jira/browse/IGNITE-7905>,
and the paucity of results when I google for "ignite userVersion" make
it clear that shooting down classes in CONTINUOUS mode with userVersion is
not completely thought through.  We certainly never want to set a
userVersion on the servers.

The documentation for P2P says:
"

   1. Ignite will check if class is available on local classpath (i.e. if
   it was loaded at system startup), and if it was, it will be returned. No
   class loading from a peer node will take place in this case."

Clearly, java.lang.String is on the local classpath. So it seems like a
user version mismatch should not be a reason to reject a class that is on
the local classpath.

On Mon, Sep 17, 2018 at 11:01 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> I think that Ignite cannot unload old version of code, unless it is loaded
> with something like URI deployment module.
> Version checking is there but server can't get rid of old code if it's on
> classpath.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, Sep 17, 2018 at 16:47, Dave Harvey :
>
>> We have a client that uses the compute grid and message grid, as well as
>> the discovery API.  It communicates with a server plugin.   The cluster is
>> configured for CONTINUOUS peer class loading.  In order to force the proper
>> code to be loaded for the compute tasks, we change the user version, e.g.,
>> to 2.
>>
>> If the server sends the client a message on the message grid, using
>> java.lang.String, the client fails because the user version sent for
>> java.lang.String is 0, but the client insists on 2.
>>
>> How is this supposed to work?   Our expectation was that the message grid
>> should not be affected by peer class loading settings.
>>
>>
>>
>>
>



Message grid failure due to userVersion setting

2018-09-17 Thread Dave Harvey
We have a client that uses the compute grid and message grid, as well as
the discovery API.  It communicates with a server plugin.   The cluster is
configured for CONTINUOUS peer class loading.  In order to force the proper
code to be loaded for the compute tasks, we change the user version, e.g.,
to 2.

If the server sends the client a message on the message grid, using
java.lang.String, the client fails because the user version sent for
java.lang.String is 0, but the client insists on 2.

How is this supposed to work?   Our expectation was that the message grid
should not be affected by peer class loading settings.



Transition from FULL_ASYNC/PRIMARY_SYNC to FULL_SYNC

2018-09-06 Thread Dave Harvey
It is my understanding that for Ignite transactions to be ACID, we need to
have the caches configured as FULL_SYNC.  [ Some of the code seems to
imply that at least one of the caches in the transaction needs to be
FULL_SYNC, but that is outside the scope of my question. ]

The initial load of our caches takes a long time, because our
StreamReceiver needs to use transactions, as it is transforming the
data.   This phase is idempotent, and could easily be run as FULL_ASYNC.
 However, once the data is loaded, we need the guarantees associated with
FULL_SYNC.  Is there any way to accomplish that, short of adding the
ability to change this cache setting dynamically?   Is there any way to
force transactions on caches not configured as FULL_SYNC to be FULL_SYNC?


Thanks,

-DH
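For reference, the synchronization mode under discussion is a per-cache setting fixed when the cache is created; a minimal Spring fragment (the cache name is a placeholder) would be:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <!-- Placeholder cache name. -->
    <property name="name" value="myCache"/>
    <!-- FULL_SYNC: the caller waits for writes to complete on all
         participating remote nodes (primaries and backups). -->
    <property name="writeSynchronizationMode" value="FULL_SYNC"/>
</bean>
```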



ignite.compute(grp).affinityRun()...

2018-08-30 Thread Dave Harvey
It is unclear what the intended semantics of
IgniteCompute.affinityRun() are when a subset of the grid is selected.   From
reading the code, my current guess is that IgniteCompute.affinityRun() will run
on the primary, regardless of whether only a subset of the grid was
specified.

In my case I have two ClusterGroups, each of which is a set of nodes where
each partition of a cache with 1 backup is represented exactly once (used
a customAffinityBackup function to accomplish this).  The primary
partitions are spread over all nodes, but I can lose all nodes in 1 of the
cluster groups without losing data (in theory).

If I use ignite.compute(grp).broadcast(), the closure would run on 1/2
the nodes but still be able to reach every partition.   If the closure had
bugs that triggered a node failure, the cluster should stay up (assuming
there is a witness).

Similarly, I would like ignite.compute(grp).affinityRunAsync() on a cache
created with readFromBackup to pay attention to the constraint that I
placed.

GridClosureProcessor.affinityRun() calls
GridAffinityProcessor.mapPartitionToNode()
which returns only the primary node for the partition. The node subset is
stored in the thread context under TC_SUBGRID

Eventually it seems we get to

// Nodes are ignored by affinity tasks.



Re: Cache Configuration Templates

2018-08-29 Thread Dave Harvey
The SQL interface does not allow you to configure details of the cache
except via templates.   The use case is that I want to start with the
cluster-specific defaults for a cache, and add some others.

I can create an empty real cache that has the defaults, and use its
configuration in the CacheConfiguration constructor, but we already have
templates.

On Wed, Aug 29, 2018 at 4:55 AM, Ilya Kasnacheev 
wrote:

> Hello!
>
> Why don't you just use CREATE TABLE for that? I doubt there will be any
> significant overhead even if you never use any SQL and only use Cache API
> after that.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Tue, Aug 28, 2018 at 17:37, Dave Harvey :
>
>> I did a suggested edit adding the Spring configuration of templates.
>> The rest of the current semantics seem a bit odd, so I was somewhat at a
>> loss as to what to write.
>>
>> The wildcard approach means that I have to know the structure of the
>> cache name a priori.   Seems like there should be a Java API that is
>> equivalent to CREATE TABLE with allows a cache to be created from an
>> arbitrary template name, as well as a way to retrieve a copy of the
>> CacheConfiguration from a template name, so that it can subsequently be
>> enhanced.  I would assume that Ignite.cache(templateName) would fail if
>> the template name has a "*" in it, and instantiate a cache otherwise.
>>
>> On Mon, Aug 27, 2018 at 7:31 AM, Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> Unfortunately, cache templates are not documented that well. AFAIK they
>>> were mostly implemented to be able to reference to complex cache
>>> configurations with CREATE TABLE.
>>>
>>> As far as my understanding goes, caches from cacheConfigurations are
>>> actually started when grid starts.
>>>
>>> 1) I think that only the first one will be used to create a cache. Even
>>> if you join a node with distinct cacheConfigurations to the cluster, and it
>>> already has some caches started, those will just be re-used (by name).
>>> 2) Yes, you can have a cacheConfiguration with name "prefix*", which
>>> will lead to a) not starting this cache on grid start, and b) when you
>>> start a cache "prefix1" it will use configuration from template. There's a
>>> test for it named IgniteCacheConfigurationTemplateTest in code base.
>>> 3) Nothing will happen, it will return early if such cache already
>>> exists.
>>> 4) Yes.
>>> 5) Good question, I will have to check that. Still I won't rely on that
>>> and just always have this configuration around.
>>> 6) See above about the '*'.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Sat, Aug 25, 2018 at 0:55, Dave Harvey :
>>>
>>>> I found what I've read in this area confusing, and here is my current
>>>> understanding.
>>>>
>>>> When creating an IgniteConfiguration in Java or XML, I can specify the
>>>> property cacheConfiguration, which is an array of
>>>> CacheConfigurations.  This causes Ignite to preserve these configurations,
>>>> but this will not cause Ignite to create a cache. If I call
>>>> Ignite.getOrCreateCache(string), if there is an existing cache, I will
>>>> get that, otherwise a new cache will be created using that configuration.
>>>>
>>>> It seems like creating a cache with a configuration will add to this
>>>> list, because Ignite.configuration.getCacheConfiguration() returns all
>>>> caches.
>>>>
>>>> I can later call Ignite.addCacheConfiguration(). This will add a
>>>> template to that list.
>>>>
>>>> Questions:
>>>> 1)  what happens if there are entries with duplicate names on
>>>> IgniteConfiguration.setCacheConfiguration()   when this is used to
>>>> create a grid?
>>>> 2) There was mention in one e-mail talking about a convention where
>>>> templates have "*" in their name?
>>>> 3) What happens if addCacheConfiguration() tries to add a duplicate
>>>> name?
>>>> 4) Is a template simply a cache that is not fully instantiated?
>>>> 5) What about template persistence?   Are they persisted if they
>>>> specify a region that is persistent?
>>>> 6) My use case is that I want to create caches based on some default for
>>>> the cluster, so in Java I would like to construct the new configuration
>>>>

Re: How to check if key exists in DataStreamer buffer so that it can be flushed?

2018-08-29 Thread Dave Harvey
The DataStreamer is unordered.   If you have duplicate keys with different
values, and you don't flush or take other action, then you will get an
arbitrary result.   AllowOverwrite is not a solution.

Adding to the streamer returns a Future, and all of those futures are
notified when the buffer is committed.

You can keep a map by key of those futures if the source is a single
client, and delay subsequent updates until the first completes.   You could
discard more than one duplicate.

What we do is have a version # that we store in the value, and the
StreamReceiver ignores earlier versions.
-DH
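The version-number scheme described above can be sketched outside of Ignite; the following stand-in (a ConcurrentHashMap in place of the cache, and hypothetical class/method names) shows the stale-update check a StreamReceiver could apply:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Stand-in sketch: the value carries a monotonically increasing version,
// and an update whose version is not newer than the stored one is ignored.
class VersionedReceiver {
    // Payload plus version number (hypothetical value type).
    record Versioned(String payload, long version) {}

    private final ConcurrentMap<String, Versioned> cache = new ConcurrentHashMap<>();

    // Mimics what a StreamReceiver's receive() could do: merge atomically,
    // keeping whichever value has the higher version.
    void receive(String key, Versioned incoming) {
        cache.merge(key, incoming,
            (current, in) -> in.version() > current.version() ? in : current);
    }

    Versioned get(String key) {
        return cache.get(key);
    }
}
```

Because the merge is atomic per key, out-of-order duplicates from an unordered streamer converge on the highest-version value regardless of arrival order.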

On Wed, Aug 29, 2018 at 8:08 AM, Вячеслав Коптилин  wrote:

> Hello,
>
> I don't think there is a way to do that check. Moreover, it seems to me
> that it is useless in any case.
> The thing that allows you to achieve the desired behavior is the
> `allowOverwrite` flag [1].
> By default, the data streamer will not overwrite existing data, which
> means that if it encounters an entry that is already in cache, it will
> skip it.
> So, you can just set `allowOverwrite` to `false` (which is the default
> value) and put updated values into a cache.
>
> [1] https://apacheignite.readme.io/docs/data-streamers#
> section-allow-overwrite
>
> Thanks,
> S.
>
> Wed, Aug 29, 2018 at 8:32, the_palakkaran :
>
>> Hi,
>>
>> I have a data streamer to load data into a cache. While loading I might
>> need
>> to update value of a particular key in cache, so I need to check if it is
>> already there in the streamer buffer. If so, either I need to update value
>> against that key in the buffer or I need to flush the data in the streamer
>> and then update. Is there a way to do this?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>



Re: Cache Configuration Templates

2018-08-28 Thread Dave Harvey
I did a suggested edit adding the Spring configuration of templates. The
rest of the current semantics seem a bit odd, so I was somewhat at a loss
as to what to write.

The wildcard approach means that I have to know the structure of the cache
name a priori.   Seems like there should be a Java API that is equivalent
to CREATE TABLE which allows a cache to be created from an arbitrary
template name, as well as a way to retrieve a copy of the
CacheConfiguration from a template name, so that it can subsequently be
enhanced.  I would assume that Ignite.cache(templateName) would fail if
the template name has a "*" in it, and instantiate a cache otherwise.

On Mon, Aug 27, 2018 at 7:31 AM, Ilya Kasnacheev 
wrote:

> Hello!
>
> Unfortunately, cache templates are not documented that well. AFAIK they
> were mostly implemented to be able to reference to complex cache
> configurations with CREATE TABLE.
>
> As far as my understanding goes, caches from cacheConfigurations are
> actually started when grid starts.
>
> 1) I think that only the first one will be used to create a cache. Even if
> you join a node with distinct cacheConfigurations to the cluster, and it
> already has some caches started, those will just be re-used (by name).
> 2) Yes, you can have a cacheConfiguration with name "prefix*", which will
> lead to a) not starting this cache on grid start, and b) when you start a
> cache "prefix1" it will use configuration from template. There's a test for
> it named IgniteCacheConfigurationTemplateTest in code base.
> 3) Nothing will happen, it will return early if such cache already exists.
> 4) Yes.
> 5) Good question, I will have to check that. Still I won't rely on that
> and just always have this configuration around.
> 6) See above about the '*'.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Sat, Aug 25, 2018 at 0:55, Dave Harvey :
>
>> I found what I've read in this area confusing, and here is my current
>> understanding.
>>
>> When creating an IgniteConfiguration in Java or XML, I can specify the
>> property cacheConfiguration, which is an array of CacheConfigurations.
>> This causes Ignite to preserve these configurations, but this will not
>> cause Ignite to create a cache. If I call Ignite.getOrCreateCache(string),
>> if there is an existing cache, I will get that, otherwise a new cache will
>> be created using that configuration.
>>
>> It seems like creating a cache with a configuration will add to this
>> list, because Ignite.configuration.getCacheConfiguration() returns all
>> caches.
>>
>> I can later call Ignite.addCacheConfiguration(). This will add a
>> template to that list.
>>
>> Questions:
>> 1)  what happens if there are entries with duplicate names on
>> IgniteConfiguration.setCacheConfiguration()   when this is used to
>> create a grid?
>> 2) There was mention in one e-mail talking about a convention where
>> templates have "*" in their name?
>> 3) What happens if addCacheConfiguration() tries to add a duplicate name?
>> 4) Is a template simply a cache that is not fully instantiated?
>> 5) What about template persistence?   Are they persisted if they specify
>> a region that is persistent?
>> 6) My use case is that I want to create caches based on some default for
>> the cluster, so in Java I would like to construct the new configuration
>> from a template of a known name.   So far, I can only see that I can
>> call Ignite.configuration.getCacheConfiguration() and then search the
>> array for a matching name.   Is there a better way?
>>
>>
>


Cache Configuration Templates

2018-08-24 Thread Dave Harvey
I found what I've read in this area confusing, and here is my current
understanding.

When creating an IgniteConfiguration in Java or XML, I can specify the
property cacheConfiguration, which is an array of CacheConfigurations.
This causes Ignite to preserve these configurations, but this will not
cause Ignite to create a cache. If I call Ignite.getOrCreateCache(string),
if there is an existing cache, I will get that, otherwise a new cache will
be created using that configuration.

It seems like creating a cache with a configuration will add to this list,
because Ignite.configuration.getCacheConfiguration() returns all caches.

I can later call Ignite.addCacheConfiguration(). This will add a template
to that list.

Questions:
1)  what happens if there are entries with duplicate names on
IgniteConfiguration.setCacheConfiguration()   when this is used to create a
grid?
2) There was mention in one e-mail talking about a convention where
templates have "*" in their name?
3) What happens if addCacheConfiguration() tries to add a duplicate name?
4) Is a template simply a cache that is not fully instantiated?
5) What about template persistence?   Are they persisted if they specify a
region that is persistent?
6) My use case is that I want to create caches based on some default for the
cluster, so in Java I would like to construct the new configuration from
a template of a known name.   So far, I can only see that I can call
Ignite.configuration.getCacheConfiguration() and then search the array for
a matching name.   Is there a better way?
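A stand-in sketch of the lookup described in question 6, combined with the "prefix*" wildcard convention from question 2 (plain strings in place of CacheConfiguration names; helper names are hypothetical):

```java
import java.util.Arrays;
import java.util.Optional;

// Hypothetical helper: scan configured template names for one matching a
// requested cache name, honoring the "prefix*" wildcard convention.
class TemplateMatcher {
    // A template ending in '*' matches any cache name with that prefix;
    // otherwise the names must match exactly.
    static boolean matches(String template, String cacheName) {
        if (template.endsWith("*"))
            return cacheName.startsWith(template.substring(0, template.length() - 1));
        return template.equals(cacheName);
    }

    // Returns the first matching template, mirroring a linear scan over
    // the array returned by IgniteConfiguration.getCacheConfiguration().
    static Optional<String> find(String[] templates, String cacheName) {
        return Arrays.stream(templates)
            .filter(t -> matches(t, cacheName))
            .findFirst();
    }
}
```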



Transaction Throughput in Data Streamer

2018-08-09 Thread Dave Harvey
We are trying to load and transform a large amount of data using the
IgniteDataStreamer using a custom StreamReceiver. We'd like this to run
a lot faster, and we cannot find anything that is close to saturated,
except the data-streamer threads' queues.   This is 2.5, with Ignite
persistence, and enough memory to fit all the data.

I was looking to turn some knob, like the size of a thread pool, to
increase the throughput, but I can't find any bottleneck.   If I turn up
the demand, the throughput does not increase, and the per transaction
latency increases. This would indicate a bottleneck somewhere.

The application has loaded about 900 million records of type A at this
point, and now we would like to load 2.5B records of type B. Records of
type A have a key and a unique ID. Records of type B have a different
key type, plus a foreign field that is A's unique ID.   The key we use in
ignite for record B is (B's key, A's key as affinity key). We also
maintain caches to map A's ID back to its key, and something similar for B.

For each record the stream receiver starts a pessimistic transaction; we
will end up with 1 local get and 2-3 gets with no affinity (i.e. 50% local
on two nodes), and 2-4 puts, before we commit the transaction (FULL_SYNC
caches). There are several fields with indices.

I've simplified this down to two nodes, with 4 caches each with one
backup, all with WAL LOGGING disabled.  The two nodes have 256GB of memory
and 32 CPUs and local SSDs that are unmirrored (i3.8xlarge on AWS). The
network is supposed to be 10 Gb.   The dataset is basically in memory, and
with the WAL disabled there is very little I/O.

Disabling WAL logging only pushed the transaction rate from about 1750
to about 2000 TPS.

The CPU doesn't get above 20%, the network bandwidth is only about 6MB/s
from each node and only about 1500 packets per second per node.   The read
wait time on the SSDs is only enough to lock up a single thread, and there
are no writes except during checkpoints.

When I look at thread dumps, there is no obvious bottleneck except for the
Datastreamer threads.  Doubling the number of DataStreamer threads from
the current 64 to 128 has no effect on throughput.

Looking via MXbeans, where I have a fix for IGNITE-7616, the DataStreamer
pool is saturated.   The "Striped Executor" is not.  With the WAL enabled,
the "StripedExecutor" shows some bursty load; when disabled, the active
thread count stays low.  The work is distributed across the StripedExecutor
threads.   The non-DataStreamer thread pools all frequently go to 0 active
threads, while the DataStreamer pool stays backed up.

With the WAL on with 64 DataStreamer threads, there tended to be about 53
"Owner transactions" on the node.

A snapshot of transactions outstanding follows.

Is there another place to look?   The DS threads tend to be waiting on
futures, and the other threads are consistent with the relatively

Thanks
-DH

f0a49c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [dae1a619-4886-4001-8ac5-6651339c67b7
[ip-172-17-0-1.ec2.internal, ip-10-32-98-209.ec2.internal]], DURATION:
104

33549c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [6d3f06d6-3346-4ca7-8d5d-b5d8af2ad12e
[ip-172-17-0-1.ec2.internal, ip-10-32-97-243.ec2.internal],
dae1a619-4886-4001-8ac5-6651339c67b7 [ip-172-17-0-1.ec2.internal,
ip-10-32-98-209.ec2.internal]], DURATION: 134

b0949c53561--08a9-7ea9--0002=ACTIVE, NEAR, DURATION: 114

2ca49c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [6d3f06d6-3346-4ca7-8d5d-b5d8af2ad12e
[ip-172-17-0-1.ec2.internal, ip-10-32-97-243.ec2.internal]], DURATION:
104

96349c53561--08a9-7ea9--0002=PREPARED, NEAR, DURATION: 134

9ca49c53561--08a9-7ea9--0002=ACTIVE, NEAR, DURATION: 104

28f39c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [dae1a619-4886-4001-8ac5-6651339c67b7
[ip-172-17-0-1.ec2.internal, ip-10-32-98-209.ec2.internal]], DURATION:
215

a2649c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [dae1a619-4886-4001-8ac5-6651339c67b7
[ip-172-17-0-1.ec2.internal, ip-10-32-98-209.ec2.internal]], DURATION:
124

e7849c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [6d3f06d6-3346-4ca7-8d5d-b5d8af2ad12e
[ip-172-17-0-1.ec2.internal, ip-10-32-97-243.ec2.internal]], DURATION:
114

06849c53561--08a9-7ea9--0002=ACTIVE, NEAR, DURATION: 114

89849c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [6d3f06d6-3346-4ca7-8d5d-b5d8af2ad12e
[ip-172-17-0-1.ec2.internal, ip-10-32-97-243.ec2.internal]], DURATION:
114

35549c53561--08a9-7ea9--0002=ACTIVE, NEAR, DURATION: 134

f0449c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [dae1a619-4886-4001-8ac5-6651339c67b7
[ip-172-17-0-1.ec2.internal, ip-10-32-98-209.ec2.internal]], DURATION:
134


Statistics Monitoring Integrations

2018-08-09 Thread Dave Harvey
I've been able to look at cache and thread pool statistics using JVisualVM
with MBeans support.   Has anyone found a way to get these statistics out
to a tool like NewRelic or DataDog?

Thanks,

Dave Harvey



Re: S3 discovery and bridge networks

2018-08-07 Thread Dave Harvey
My understanding:  S3 discovery works because the container publishes its
IP/port in an S3 bucket, and other nodes can read this to determine which
nodes might be in the cluster.   When running in a container using a bridge
network, the container does not know the external IP address that can be
used to reach it, so it doesn't have enough information to publish its IP
address in S3.
I'm running in ECS, which starts containers automatically, so I have no
means to pass in environment variables with additional information.

For this to work, the container would need to be able to determine the
external identity of its discovery port.   I can pass in the external port
# that we map to, but not the IP address.
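For reference, Ignite does provide an `AddressResolver` hook for exactly this internal-to-external mapping. The sketch below assumes the external IP has been discovered out-of-band (e.g. from EC2 instance metadata), which is the missing piece in ECS; the addresses shown are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.ignite.configuration.BasicAddressResolver;
import org.apache.ignite.configuration.IgniteConfiguration;

// Hypothetical mapping: internal container address -> externally mapped one.
Map<String, String> extAddrs = new HashMap<>();
extAddrs.put("172.17.0.2:47500", "10.32.98.209:24750");

IgniteConfiguration cfg = new IgniteConfiguration();
// With this resolver set, the node publishes the mapped address instead of
// its unmapped container address.
cfg.setAddressResolver(new BasicAddressResolver(extAddrs));
```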



On Mon, Aug 6, 2018 at 10:29 AM, Ilya Kasnacheev 
wrote:

> Hello!
>
> Have you tried to specify localAddress for communication and discovery
> SPIs? If not, can you please elaborate with ifconfig information and stuff?
>
> Regards,
>
> --
> Ilya Kasnacheev
>
> 2018-08-03 16:53 GMT+03:00 Dave Harvey :
>
>> I've been successfully running 2.5 on AWS ECS with host or AWSVPC
>> networking for the Ignite containers.   Is there any way around the fact
>> that with bridge networking, the Ignite node registers its unmapped
>> address on S3?
>>
>>
>
>



S3 discovery and bridge networks

2018-08-03 Thread Dave Harvey
I've been successfully running 2.5 on AWS ECS with host or AWSVPC
networking for the Ignite containers.   Is there any way around the fact
that with bridge networking, the Ignite node registers its unmapped
address on S3?



ALTER TABLE ... NOLOGGING

2018-08-02 Thread Dave Harvey
We did the following while loading a lot of data into 2.5

1) Started data loading on 8 node cluster
2) ALTER TABLE name NOLOGGING  on tables A,B,C,D but not X
3) continued loading
4) deactivated cluster
5) changed the config xml, to increase maxSize of the data region (from 2G
to 160G) and increase checkpointPageBufferSize
6) restarted IGNITE processes
7) continued loading for 2 days
8) deactivated cluster
9) changed the config xml, but /only/ thread pool sizes
10) restarted IGNITE processes

And we found after step 10 that caches A, B, C, D were *empty*, and only X
had data. We did not look at  the cache sizes after step 6.

Is this expected behavior?   It is not what we would have expected from
reading the documents.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Changing existing persistent caches.

2018-07-30 Thread Dave Harvey
I know Ignite only allows very limited changes to caches at runtime, e.g.,
turn on statistics or add/remove index or field.
I'm wondering if there is a way to change any of the cache configuration for
persistent caches at cluster startup.I have the impression that at some
point I saw some code that merged a cache configuration with the persistent
configuration.

The two things of interest to me are the # of backups and  the various
performance parameters, e.g., QueryParallelism.
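For what it's worth, both parameters in question live on `CacheConfiguration` and are fixed when the cache is created; a sketch of the initial setup (the cache name and values are hypothetical):

```java
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("myCache");
ccfg.setBackups(2);           // number of backup copies, chosen up front
ccfg.setQueryParallelism(4);  // SQL query parallelism, also chosen up front
```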





Re: Help needed with BinaryObjectException

2018-07-26 Thread Dave Harvey
The cluster needs to agree on how to decode various versions of the
BinaryObjectSchema.  Changing the type of a named field, or an enum's ordinal
value, is a backward-incompatible change which Ignite cannot handle.

There is the question of the lifetime of the version of a type, and while
you may know that there are no instances of the old type anywhere, Ignite
currently has no way to determine this.

So we end up with this pretty tedious restriction that no-one has proposed
a great way out of.  If you do not have persistent data, then stopping the
cluster and purging the metadata is a way out.   With persistence, it is
difficult.

On Thu, Jul 26, 2018 at 3:49 AM, Roger Janssen 
wrote:

> Hi,
>
> Just some context first: We have a java application, and use spring
> function
> caching. In acceptance and prod, we have multiple instances and for that we
> use Ignite as a distributed in-memory cache. On test we run single
> instances, and we use Ignite just as a non-distributed in memory cache. We
> start Ignite embedded from out application. On acc/prod in server mode, on
> test in client mode. Like I said, we do not want any persistence!
>
> Now on test we run into the problem that we get a BinaryObjectException
> like
> : 'Conflicting enum values. Name 'OPEX_LOAN_LIMIT_WEIGHT' uses ordinal
> value
> (11) that is also used for name 'OPEX_RCK_MAX''
>
> I traced the code and the mergeEnumValues of BinaryUtils throws this
> exception. It seems to have a list of enum values stored in a map, with the
> ordinal as key. But... that list of values is incorrect, values are missing!
> It then receives a value not in that list, but with an ordinal already in
> that list, and then throws the exception.
>
> My questions:
> - What is happening here?
> - How is it possible for Ignite to have an incorrect breakdown of our enum?
> - Why is ignite serialising our objects if it is not persisting them? There
> is no need for this, we just want an in-memory cache.
> - Why is the Ignite marshaller persisting class data in the
> tomcat/temp/ignite/... folder? Especially since it should be running in non
> persistence mode.
> - How can we fix this problem because right now, this prevents us from
> going
> to prod?
>
> If Ignite somehow persists information about your classes, how do you then
> deploy new versions of your application with model changes and prevent
> these
> kind of problems from happening?
>
> Kind regards,
>
> Roger Janssen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>



Re: Understanding the mechanics of peer class loading

2018-07-18 Thread Dave Harvey
I added this ticket, because we hit a similar problem, as was able to find
some quite suspect code: https://issues.apache.org/jira/browse/IGNITE-9026







Re: Affinity calls in stream receiver

2018-07-17 Thread Dave Harvey
We switched to CONTINUOUS mode based on the assumption that SHARED mode had
regressed in a way that allowed it to create many class loaders, and
eventually run out of Metaspace.  

CONTINUOUS mode failed much sooner, and we were able to reproduce that
failure and identify bugs in the code.   The code that tries to handle
cycles in a graph search fails the search on a cycle rather than just
breaking the recursion.
Added https://issues.apache.org/jira/browse/IGNITE-9026 

Note: we did conclude that this is unrelated to nested or anonymous classes,
as we originally assumed.





Re: Affinity calls in stream receiver

2018-07-15 Thread Dave Harvey
We are running in SHARED_MODE on 2.5, and are currently quite suspicious of
this change in 2.4, the essence of this change is, in SHARED_MODE , to just
skip  the code that will "Find existing deployments that need to be checked
whether they should be reused for this request"
  
https://github.com/apache/ignite/commit/d2050237ee2b760d1c9cbc906b281790fd0976b4#diff-3fae20691c16a617d0c6158b0f61df3c





Tracing all SQL Queries

2018-07-12 Thread Dave Harvey
Is there a simple way inside Ignite to get a log of all SQL Queries against
the cluster, either in the debug logs or elsewhere?   This is not an easy
question to phrase in a way that Google will find a useful answer.





Re: Affinity calls in stream receiver

2018-07-11 Thread Dave Harvey
The nested class hypothesis seems unlikely.   We have 6000+
GridDeploymentClassLoaders on a node, because there are many instances of
"GridDeploymentPerVersionStore.SharedDeployment".

The userVersion is not changing, nor is the cluster topology.

I have enough data to debug this, just need some time.





RE: Deadlock during cache loading

2018-06-28 Thread Dave Harvey
Your original stack trace shows a call to your custom stream receiver, which
appears to itself call invoke().   I can only guess at what your code does, but
it appears to be making a call off-node to something that is not returning.

org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.invoke(GatewayProtectedCacheProxy.java:1362)

at

*com.mycompany.myapp.myPackage.dao.ignite.cache.streamer.VersionCheckingStreamReceiver.receive(VersionCheckingStreamReceiver.java:33)*

at

org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:137)

at





RE: Deadlock during cache loading

2018-06-28 Thread Dave Harvey
2.4 should be OK.
What you showed is that the stream receiver called invoke() and did not get an
answer, not a deadlock.  Nothing looks particularly wrong there.  When we
created this bug, it was a stream receiver that called invoke(), which in turn
did another invoke(); that was the actual bug.

It was helpful to do the invoke using a custom thread pool: because the
logging reports threads in the custom pool, we could easily see which node had
active custom threads, and then look at what that thread was waiting for.







RE: Deadlock during cache loading

2018-06-25 Thread Dave Harvey
"When receiver is invoked for key K, it’s holding the lock for K."  is not
correct, at least in the 2.4 code.

When a custom stream receiver is called, the data streamer thread has a
read-lock preventing termination, and there is a read-lock on the topology,
but DataStreamerUpdateJob.call() does not get any per-entry locks.

Since the DataStreamer threads are in a separate pool, a custom stream
receiver should be able to make any calls that a client can w/o fear of
deadlock.
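A minimal illustration of the kind of receiver under discussion (the key/value types and cache operation are placeholders):

```java
import java.util.Collection;
import java.util.Map;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.stream.StreamReceiver;

public class InvokingReceiver implements StreamReceiver<String, Long> {
    @Override public void receive(IgniteCache<String, Long> cache,
            Collection<Map.Entry<String, Long>> entries) {
        for (Map.Entry<String, Long> e : entries)
            // Runs in the data-streamer pool with no per-entry locks held,
            // so ordinary cache calls are safe. A *nested* invoke() issued
            // from inside another invoke() is what deadlocked earlier in
            // this thread.
            cache.put(e.getKey(), e.getValue());
    }
}
```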





Running Node removal from baseline

2018-06-22 Thread Dave Harvey
The documentation describes the use case where a node is stopped and removed
from the baseline, which reduces the number of backups/replicas when the
node is stopped.

I assume that there is no current code to support removing the node from the
baseline first, so that at least the desired number of backups is maintained
at all times?   Any plans for this?







Re: Data Region Concurrency

2018-05-28 Thread Dave Harvey
It does appear that if the global concurrencyLevel is not set, it defaults
to the CPU count, not 4 * the CPU count as documented here:
https://apacheignite.readme.io/docs/memory-configuration#section-global-configuration-parameters


private long[] calculateFragmentSizes(int concLvl, long cacheSize, long chpBufSize) {
    if (concLvl < 2)
        concLvl = Runtime.getRuntime().availableProcessors();





Re: Large durable caches

2018-05-18 Thread Dave Harvey
Early on running on 2.3 we had hit a clear deadlock that I never root-caused,
where the cluster just stopped working.  At the time I was using the same
DataStreamer from multiple threads, and we tuned up the buffer size because
of that, and we were running against EBS, and perhaps with too short
timeouts. We have not seen this on 2.4 with a DataStreamer per producer
thread with default parameters against SSDs.   This problem seemed worse
when I paid attention to the Ignite startup message about needing to set a
message buffer/size limit, and specified one.

One thing still on my list, however, is to understand more about paired TCP
connections and why (whether) they are the default.  Fundamentally, if
you are sending bi-directional request/response pairs over a single TCP
virtual circuit, there is an inherent deadlock where responses may get
behind requests that are flow controlled.  With a single VC, the only
general solution to this is to assume unlimited memory, reading requests
from the VC and queuing them in memory, in order to be able to remove the
responses.  You can limit the memory usage on the receiver by limiting the
total requests that can be sent at a higher level, but as node count scales,
the receiver would need more memory.I've been assuming that paired
connections is trying to address this fundamental issue to prevent requests
from blocking responses, but I haven't gotten there yet.  My impression was
that paired connections are not the default.
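Paired connections are in fact off by default and are switched on explicitly via `TcpCommunicationSpi`, e.g.:

```java
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
// Use separate sockets for inbound and outbound traffic between two nodes,
// so flow-controlled requests cannot block responses on the same socket.
commSpi.setUsePairedConnections(true);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setCommunicationSpi(commSpi);
```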





Re: What is the most efficient way to scan all data partitions?

2018-05-09 Thread Dave Harvey
When running on AWS, I found that the "disk" you are writing to is the most
critical issue for Ignite.   EC2 instances with local SSDs have about 20x the
write rate of multiple 3 TB GP2 volumes, and using actual disks (e.g., EBS)
for Ignite Persistence storage is a non-starter.   Once you have the right
storage, the other parameter settings can tweak your performance higher.





Re: Effective Data through DataStream

2018-04-26 Thread Dave Harvey
When you set the stream receiver,  an instance of its class is created and
serialized, which will also include any class it is nested in.   On each
Data Streamer buffer, the serialized form of that class is sent.   If the
class containing the stream receiver has a pointer that is not
Externalizable, then you will serialize that object graph also.

Ensure that your stream receiver is in its own class, and that you are very
careful about the members of that class.
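To make the capture problem concrete, a hedged sketch (the class and cache types are made up): a receiver declared as its own top-level class carries no hidden reference to an enclosing object, so only the receiver itself is serialized with each buffer.

```java
import java.util.Collection;
import java.util.Map;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.stream.StreamReceiver;

// Top-level receiver: no implicit reference to an enclosing object, so the
// serialized form shipped with each streamer buffer stays small.
public class UpperCaseReceiver implements StreamReceiver<Long, String> {
    @Override public void receive(IgniteCache<Long, String> cache,
            Collection<Map.Entry<Long, String>> entries) {
        for (Map.Entry<Long, String> e : entries)
            cache.put(e.getKey(), e.getValue().toUpperCase());
    }
}
// By contrast, "streamer.receiver(new StreamReceiver<...>() { ... })" written
// inside another class drags that class's object graph into every buffer.
```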







Re: One time replicating of the cluster data for setting up a new cluster

2018-04-03 Thread Dave Harvey
We had done this to group all of the data that needs to be backed up onto the
SSD.  Work also contains the log directory, and I haven't seen how to put
that elsewhere. 



Baseline Topology and Node Failure

2018-03-28 Thread Dave Harvey
The introduction in 2.4 of Baselines seems quite helpful.   If a node
restarts, it will avoid excessive rebalancing.
What is unclear from the documentation is what happens in the case  where a
node fails and doesn't come back.   I'm assuming that in fact nothing
happens, except that the backups on that node are now offline,
some backups may have been promoted to primaries, and the cluster continues
to function but does not rebalance (but that does not appear to be stated).

My question is:  After this event is detected, and something decides to
replace the node, what  process should be used to ensure that the new node
replaces the old one.  Is it sufficient to simply set a new baseline
("--baseline set"), and the minimum amount of data movement will occur?  Or
is there something that needs to be done to get the  right node IDs, or
replace the old node with the new one?

It is unclear what triggers rebalancing, e.g., --baseline remove or just
--baseline set
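For the record, the baseline can also be reset programmatically, which corresponds to `--baseline set` with the current server set (a sketch; whether this yields the minimum amount of data movement is exactly the open question above):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

Ignite ignite = Ignition.ignite();
// Re-baseline to the currently alive server nodes, e.g. after the
// replacement node has joined; rebalancing follows.
ignite.cluster().setBaselineTopology(ignite.cluster().forServers().nodes());
```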





Determining BinaryObject field type

2018-03-23 Thread Dave Harvey
Once a BinaryObjectSchema is created, it is not possible to change the type
of a field for a known field name.

My question is whether there is any way to determine the type of that field
in the Schema.  

We are hitting a case where the way we get the data out of a different
database returns a TIMESTAMP, but our binary object wants a DATE. In
this test case I could figure that out, but in the general case I have a
BinaryObject type name, and a field name, and an exception if I try to put
the wrong type in that field.

The hokey general solutions I have come up with are:
1) Parse the exception message to see what type it wants
2) Have a list of conversions to try for the source type, and step through
them on each exception.
3) Get the field from an existing binary object of that type, and use the
class of the result.   But there is the chicken/egg problem.
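Worth noting: `BinaryType` does expose the recorded field type via `fieldTypeName`, which may sidestep options 1-3, assuming the type is already known to the cluster (the type and field names below are hypothetical):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.binary.BinaryObjectBuilder;
import org.apache.ignite.binary.BinaryType;

// 'ignite' is an already-started node; "MyType"/"createdAt" are made up.
BinaryType type = ignite.binary().type("MyType");
String fieldType = type.fieldTypeName("createdAt"); // e.g. "Date" vs "Timestamp"

long millis = System.currentTimeMillis();
BinaryObjectBuilder bldr = ignite.binary().builder("MyType");
bldr.setField("createdAt", "Timestamp".equals(fieldType)
    ? new java.sql.Timestamp(millis)
    : new java.util.Date(millis));
```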

I have found that I can create a cache on a cluster with persistence, with
some type definition, then delete that cache, the cluster will remember the
BinaryObjectSchema for that type, and refuse to allow me to change the
field's type.  If I  don't remember the field's type, how can I
build the binary object?

Is there any way to delete the schema without nuking some of the
binary-meta/marshaller files when the cluster is down?





Re: SELECT Statement cancellation & memory sizing

2018-03-08 Thread Dave Harvey
Just saw 2.4 release notes: Improved COUNT(*) performance 





Re: Setting userVersion on client node causes ignite.active(true) to fail

2018-02-27 Thread Dave Harvey
The server node was already active, and when I commented out
ignite.active(true) the client came up.





Setting userVersion on client node causes ignite.active(true) to fail

2018-02-27 Thread Dave Harvey
If, in order to ensure that our peer-class-loaded classes are reloaded, I
change userVersion in ignite.xml on the client to 5 while the docker image is
in SHARED mode, I cannot start the client.
   final Ignite ignite = Ignition.start(igniteConfig);
ignite.active(true); <<< Throws "Task was not deployed or was
redeployed since task execution"

Log from server in docker image.

[23:31:34,827][WARNING]pub-#8859[GridDeploymentManager] Failed to deploy
class in SHARED or CONTINUOUS mode for given user version (class is locally
deployed for a different user version)
[cls=o.a.i.i.processors.cluster.GridClusterStateProcessor$ClientChangeGlobalStateComputeRequest,
localVer=0, otherVer=5]
[23:31:34,828][SEVERE]pub-#8859[GridJobProcessor] Task was not deployed or
was redeployed since task execution
[taskName=o.a.i.i.processors.cluster.GridClusterStateProcessor$ClientChangeGlobalStateComputeRequest,
taskClsName=o.a.i.i.processors.cluster.GridClusterStateProcessor$ClientChangeGlobalStateComputeRequest,
codeVer=5, clsLdrId=778c374d161-85764126-dc96-4073-97ef-02be90c50723,
seqNum=1519687813239, depMode=SHARED, dep=null]
class org.apache.ignite.IgniteDeploymentException: Task was not deployed or
was redeployed since task execution
[taskName=org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$ClientChangeGlobalStateComputeRequest,
taskClsName=org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$ClientChangeGlobalStateComputeRequest,
codeVer=5, clsLdrId=778c374d161-85764126-dc96-4073-97ef-02be90c50723,
seqNum=1519687813239, depMode=SHARED, dep=null]
at
org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1160)
at
org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1913)
at
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
at
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
at
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
at
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)





Re: Large durable caches

2018-02-21 Thread Dave Harvey
I fought with trying to get Ignite Persistence to work well on AWS GP2
volumes, and finally gave up, and moved to i3 instances, where the $ per
write IOP are much lower, and a i3.8xlarge gets 720,000 4K write IOPS vs on
the order of 10,000 for about the same cost.





Re: 20 minute 12x throughput drop using data streamer and Ignite persistence

2018-02-20 Thread Dave Harvey
I've started reproducing this issue with more  statistics, but have not
reached the worst performance point yet, but somethings are starting to
become clearer:

The DataStreamer hashes the affinity key to partition, and then maps the
partition to a node, and fills a single buffer at a time for the node.  A
DataStreamer thread on the node therefore gets a buffer's worth of requests
grouped by the time of the addData() call, with no per thread grouping by
affinity key (as I had originally assumed).

The test I was running was using a large amount of data where the average
number of keys for each unique affinity key is 3, with some outliers up to
50K.   One of the caches being updated in the optimistic transaction in the
StreamReceiver contains an object whose key is the affinity key, and whose
contents are the set of keys that have that affinity key. We expect some
temporal locality for objects with the same affinity key.

We had a number of worker threads on a client node, but only one data
streamer, where we increased the buffer count.   Once we understood how the
data streamer actually worked, we made each worker have its own
DataStreamer.   This way, each worker could issue a flush, without affecting
the other workers.   That, in turn, allowed us to use smaller batches per
worker, decreasing the odds of temporal locality.
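The per-worker arrangement described above looks roughly like this (the cache name, buffer size, and batch shape are placeholders):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

// Each worker thread owns its own streamer, so a flush() only stalls that
// worker rather than every worker sharing one streamer.
void runWorker(Ignite ignite, Iterable<long[]> batch) {
    try (IgniteDataStreamer<Long, Long> stmr = ignite.dataStreamer("myCache")) {
        stmr.perNodeBufferSize(512); // tune per workload
        for (long[] kv : batch)
            stmr.addData(kv[0], kv[1]);
        stmr.flush();                // per-worker flush
    }
}
```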

So it seems like we would get updates for the same affinity key on different
data streamer threads, and they could conflict updating the common record.  
The more keys per affinity key the more likely a conflict, and the more data
would need to be saved.   A flush operation could stall multiple workers,
and the flush operation might be dependent on requests that are conflicting.

We chose to use OPTIMISTIC transactions because of their lack-of-deadlock
characteristics, rather than because we thought there would be high
contention.  I do think this behavior suggests something sub-optimal
about the OPTIMISTIC lock implementation, because I see a dramatic decrease
in throughput, but not a dramatic increase in transaction restarts. 
"In OPTIMISTIC transactions, entry locks are acquired on primary nodes
during the prepare step,"  does not say anything about  the order that locks
are acquired.  Sorting the locks so there is a consistent order would avoid
deadlocks.   
If there are no deadlocks, then there could be n-1 restarts of the
transaction for each commit, where n is the number of data streamer threads.
This is the old "thundering herd" problem, which can easily be made order n
by only allowing one of the waiting threads to proceed at a time.
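The lock-sorting idea can be sketched independently of Ignite: acquire per-key locks in one canonical order and circular waits become impossible. This illustrates the suggestion above, not Ignite's actual implementation.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

public class LockOrder {
    /** Returns the keys in a canonical (sorted) order; if every transaction
     *  locks its keys in this order, no deadlock cycle can form. */
    public static <K extends Comparable<K>> List<K> canonical(Collection<K> keys) {
        List<K> sorted = new ArrayList<>(keys);
        Collections.sort(sorted);
        return sorted;
    }
}
```

Two transactions touching keys {a, b} and {b, a} would both lock a before b, so neither can hold b while waiting for a.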
 





Re: Issues trying to force redeployment in shared mode

2018-02-20 Thread Dave Harvey
I've done some additional testing.  By shutting down another (the last)
client node that was running independent code, I was able to purge the bad
version of my code from the servers, while leaving the userVersion at "0".  
Apparently in this case, the client nodes are "master" nodes.  (The
deployment modes documentation uses the terms "master" and "workers" without
defining them, so you are left to guess which nodes are master and how they
become one.  Because they sent a closure?   Because some class was actually
loaded from them?).

Running from Eclipse with a local vanilla 2.3 docker image as a single
server,  changing the userVersion ignite.xml from "0" to "1" on the client
causes the error:  "Caused by: class
org.apache.ignite.IgniteDeploymentException: Task was not deployed or was
redeployed since task execution" , even if the server was just restarted. 
It starts working again if the user version changes back to "0".

That is, changing the userVersion on the client causes the client to be
unable to talk to the server.  Since the server caches the userVersion on
first access, there doesn't seem to be a path to get the client's code to
redeploy except by shutting down all clients, or by shutting down all
servers.








Issues trying to force redeployment in shared mode

2018-02-19 Thread Dave Harvey
I was trying to demonstrate changing classes on a client node so that classes
on servers get replaced with new code, but the symptoms make me believe that
I don't understand the rules at all.  

The deployment mode is SHARED.   I read the instructions, created an
ignite.xml with a different userVersion, and installed it on the client node
in order to force previously loaded code versions to be reloaded.   But
instead, I get this failure even if I restart both server and client.


Caused by: class org.apache.ignite.IgniteDeploymentException: Task was not
deployed or was redeployed since task execution
[taskName=org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$ClientChangeGlobalStateComputeRequest,
taskClsName=org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$ClientChangeGlobalStateComputeRequest,
codeVer=3, clsLdrId=ff87e49a161-695076b6-c96f-40ec-beb6-9897f3210dee,
seqNum=1518963947775, depMode=SHARED, dep=null]

at
org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1160)

at
org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1913)

at
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)

at
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)

at
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)

at
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090)

at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

at java.lang.Thread.run(Thread.java:748)

which seems to be due to "codeVer=3".   I can change the 3 to 0, and it goes
back to working.


The original problem I had was that I changed a field in a static statistics
class, deployed as part of a [Data]StreamReceiver, from Long to AtomicLong,
and the BinaryObject schema merging complained about this. So I renamed the
field, but kept getting the same error with the old field name. I assumed the
code was not getting replaced on the server because I needed to communicate
that the version had changed. The error was thrown while responding to a call
for statistics. Because we are in SHARED mode, as I read the documentation now,
restarting the client should have been sufficient to replace the code. So
perhaps something has instead remembered the intermediate schema that could not
be merged? I would have assumed that the intermediate unmerged schema would be
discarded.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 20 minute 12x throughput drop using data streamer and Ignite persistence

2018-02-13 Thread Dave Harvey
I made improvements to the statistics collection in the stream receiver, and
I'm finding an excessive number of retries of the optimistic transactions we
are using. I will try to understand that and then retry.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: The client does not receive response after closure completion...

2017-12-28 Thread Dave Harvey
I got similar symptoms, but for a different root cause.

I was getting the original stack trace when using the "cache" command in Visor,
but only when a client was connected, and using the command caused the clients
to disconnect. It turned out that I had opened the inbound TCP ports to the
servers, but this command requires an inbound connection into the clients, and
those ports were blocked on the client side.
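As an illustration only (this XML is my sketch, not configuration from the thread): client nodes must accept inbound connections on the communication SPI ports, which default to 47100 with a port range of 100 in Ignite, e.g.:

```xml
<!-- Sketch using Ignite's default values: these ports must be reachable
     inbound on client nodes too, not just on servers, for operations
     (such as Visor's "cache" command) that connect back to clients. -->
<property name="communicationSpi">
  <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
    <property name="localPort" value="47100"/>
    <property name="localPortRange" value="100"/>
  </bean>
</property>
```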



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: TcpDiscoveryS3IpFinder AmazonS3Exception: Slow Down

2017-10-24 Thread Dave Harvey
Opened a support tickets with GridGain and AWS.  The former suggested this,
which helped:

[The XML configuration snippet suggested by GridGain was stripped by the
mailing-list archive; only its line numbers (20-25) survive.]

The throttling is per S3 bucket, and S3 discovery requires a private bucket (or
you get lots of errors). AWS confirmed that the load was light and that the
errors always occurred on the first bucket enumeration. They did suggest moving
from version 1.11.75 of the AWS libraries to 1.11.219, but fundamentally they
delayed responding long enough for the symptoms to go away on their own,
without explanation. I can no longer reproduce the problem, without having
changed anything.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: TcpDiscoveryS3IpFinder AmazonS3Exception: Slow Down

2017-09-21 Thread Dave Harvey
The only possibly different thing we are doing is using a VPC endpoint to
allow the nodes to access S3 directly, without having to supply credentials.

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


TcpDiscoveryS3IpFinder AmazonS3Exception: Slow Down

2017-09-21 Thread Dave Harvey
Is TcpDiscoveryS3IpFinder expected to work? I randomly get exceptions which
seem to be considered part of normal S3 operation, but are not
handled/retried.

com.amazonaws.services.s3.model.AmazonS3Exception: Slow Down (Service:
Amazon S3; Status Code: 503; Error Code: 503 Slow Down; Request ID:
823A32B2F20B2E3B), S3 Extended Request ID:
/qAAKF/LgRP8Y+7sGXSslaxRy6nBYYsJD17aSrFmCTsTXv+vEaeYxBFYT63km+T7PkRkHw1UdAQ=
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1586)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1254)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1035)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:747)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:721)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:704)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:672)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:654)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:518)
at
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4137)
at
com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1275)
at
com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1232)
at
org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder.initClient(TcpDiscoveryS3IpFinder.java:256)
at
org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder.registerAddresses(TcpDiscoveryS3IpFinder.java:184)




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite/yardstick benchmarking from within Docker Image

2017-09-20 Thread Dave Harvey
Yes. When I figured that out, my immediate reaction was to kill the running
Ignite instance. But since that process was what was keeping the Docker
container alive, the container exits.

So instructions on how to run the benchmarks from inside the Docker container
in which they are delivered would be helpful.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite/yardstick benchmarking from within Docker Image

2017-09-20 Thread Dave Harvey
As I understand more, I changed the subject.

The Docker image for Ignite 2.1.0 contains the ignite-yardstick benchmarks,
which include a ReadMe file whose instructions *do not work* if you simply try
to run them from inside the Docker container where Ignite is already running.

I had been trying to work from config/benchmark-sample.properties, and
thrashed on this for quite a while.

I finally got the benchmark to execute inside the container by setting
SERVER_HOSTS="" and running:
bin/benchmark-run-all.sh config/benchmark-remote.properties



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: AWS Apache Ignite AMI startup.sh reports spurrious errors if options have blanks

2017-09-20 Thread Dave Harvey
Sorry for the delay, spam filter.

I haven't been able to find an explanation for startup.sh, but when I log in
to an EC2 instance created from the AWS community AMI for Ignite (which only
has Docker installed and pulls the Ignite version you specify), I find
startup.sh in the current directory, and it is clearly the script run at least
the first time the instance boots. When I search for pieces of its contents,
nothing shows up, and searching for startup.sh with various AWS/Ignite
qualifiers either leads back to this post or returns a large list of
irrelevant results. When run, the script loads user-data (which is what is
specified under Advanced Options in the AWS launch GUI: a set of environment
variables) and prints errors if any of those lines contain spaces.

This is the entire script:
#!/bin/bash

if [ ! -f ./user-data ]; then
  wget http://169.254.169.254/latest/user-data

  sed -i -e '$a\ ' ./user-data
fi

ENV_OPTIONS=""

while read p; do
  if [ ! -z "$p" ]; then
ENV_OPTIONS="$ENV_OPTIONS -e \"$p\""

export $p
  fi
done < ./user-data

eval "docker pull apacheignite/ignite:$IGNITE_VERSION"

eval "docker run -d --net=host $ENV_OPTIONS apacheignite/ignite:$IGNITE_VERSION"






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


AWS Apache Ignite AMI startup.sh does not allow documented JVM options

2017-09-07 Thread Dave Harvey


If I use this in Advanced Details
JVM_OPTS=-Xms1g -Xmx1g -server -XX:+AggressiveOpts -XX:MaxPermSize=256m

I get
startup.sh: line 15: export: `-Xmx1g': not a valid identifier
startup.sh: line 15: export: `-server': not a valid identifier
startup.sh: line 15: export: `-XX:+AggressiveOpts': not a valid identifier
startup.sh: line 15: export: `-XX:MaxPermSize=256m': not a valid identifier

The startup.sh script is coded as 

export $p

when it seems like it should be:
export "$p"

I have not found any escape sequences for the original string that improve
this.
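A minimal reproduction of the quoting problem (my sketch, not the AMI's actual script; the sample value is from the post, truncated to two options):

```shell
#!/bin/bash
# A user-data line with spaces in its value, as in the post.
p='JVM_OPTS=-Xms1g -Xmx1g'

# Unquoted: $p word-splits, so bash runs `export JVM_OPTS=-Xms1g -Xmx1g`
# and rejects "-Xmx1g" as "not a valid identifier" (the observed error).
export $p 2>/dev/null || echo "unquoted export failed"

# Quoted: the whole string is a single assignment, so the value survives.
export "$p"
echo "JVM_OPTS=$JVM_OPTS"
```

Running this prints the failure message for the unquoted form, then the full value for the quoted one.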



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: CONFIG_INI not copied to ./ignite-config.xml

2017-09-07 Thread Dave Harvey
Now I see the "Type 'help "command name"' to see how to use this command."
below the list.  



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


CONFIG_INI not copied to ./ignite-config.xml

2017-09-07 Thread Dave Harvey
The documentation around CONFIG_INI says "The downloaded config file will be
saved to ./ignite-config.xml", except that did not happen. Therefore, when I
started Visor, I had no way to discover the cluster, so Visor created a new
one. After much newbie confusion, I copied the config file from the URL into
my Docker container; Visor found it, I could open it, and then did meaningful
things.

I'm using only S3 discovery for the cluster, and Visor needed a copy of this
config to discover the cluster.





The Visor instructions simply say to read the help, and if you type help, it
provides a command list. Notably, the description of the help command doesn't
indicate that it takes arguments; an indication that "help open" was something
I could/should type would have been useful. This was like discovering that I
can kill the dragon with my bare hands.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Issue with starting Ignite node on AWS

2017-09-05 Thread Dave Harvey
I'm also a newbie, but I'm running 2.1.0 and I seem to be hitting the same
problem, which sounds like it was fixed a long time ago. Is there something
else going on?

I've uploaded the two lines I pass when creating the EC2 instance from the
AMI, the config file I'm using, and the full output from docker logs
(errs.log, xxx.txt, config.xml).

Caused by: org.springframework.beans.factory.CannotLoadBeanClassException:
Cannot find class
[org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder] for
bean with name
'org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder#71623278'
defined in URL [https://s3.amazonaws.com/jc-ignite-trial/example-cache.xml];
nested exception is java.lang.ClassNotFoundException:
org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder
at
org.springframework.beans.factory.support.AbstractBeanFactory.resolveBeanClass(AbstractBeanFactory.java:1385)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
at
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:299)
... 28 more
Caused by: java.lang.ClassNotFoundException:
org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/