Re: Reclaim memory from off-heap

2018-04-17 Thread Deepesh Malviya
Hi Stan,

Thanks for the response. In our flow, we create, fill, destroy and recreate
the cache in a for loop. I assume it would still be able to reuse what's
already been allocated.

Regards,
Deepesh

On Wed, Apr 18, 2018 at 12:07 AM, Stanislav Lukyanov  wrote:

> Hi,
>
> On Tue, Apr 17, 2018 at 8:33 PM, Deepesh Malviya 
> wrote:
>
>> Hi,
>>
>> I have read this post - http://apache-ignite-users.7
>> 0518.x6.nabble.com/Cache-Destroy-Space-reclaim-td17208.html
>>
>> I have a few questions:
>> 1. Is this post still true?
>>
> Yes.
>
>
>> 2. If a cache is destroyed and recreated, will it use the same space, or
>> will it be a new allocation on the off-heap?
>>
> Not necessarily the same space, but it will try to reuse what's already
> available.
> If Ignite has an already allocated chunk of memory that is not used by
> other caches, it will use it.
> Otherwise, a new chunk will be allocated.
>
> In other words, if you create, fill and destroy a cache 100 times, Ignite
> will not allocate memory 100 times; it will reuse what's already allocated.
>
> 3. Is there any way to reclaim memory from off-heap?
>>
> No. Instead, you can set the maximum size of a data region via
> DataRegionConfiguration.setMaxSize().
> Once a data region hits maxSize, Ignite will not allocate more memory for
> the caches in that region.
>
> Stan
>
>


-- 
_Deepesh


Re: Docker deployment with EXTERNAL_LIBS environment variable

2018-04-17 Thread Roman Shtykh
I had the same problem (it's pretty common for BusyBox wget not to have the -i
option) and fixed it. The patch needs a review: https://issues.apache.org/jira/browse/IGNITE-8143
-- Roman
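Until the fix is merged, one possible workaround is to fetch the jar outside the container (where a full-featured wget or curl is available) and mount it into the image's library directory instead of using EXTERNAL_LIBS. This is only a sketch: the libs path inside the apacheignite/ignite image is an assumption, so verify it against your image first.

```shell
# Workaround sketch (the in-container libs path is an assumption, check your image).
# 1. Download the external lib on the host, where curl supports all needed options.
curl -O http://central.maven.org/maven2/org/apache/ignite/ignite-schedule/1.0.0/ignite-schedule-1.0.0.jar

# 2. Mount the jar into the container's libs directory instead of passing EXTERNAL_LIBS,
#    so the BusyBox wget in the image is never invoked.
docker run -d --name ignite \
    -v /storage/ignite/ignite-server-config.xml:/etc/ignite/ignite-server-config.xml \
    -v "$(pwd)/ignite-schedule-1.0.0.jar:/opt/ignite/apache-ignite-fabric/libs/ignite-schedule-1.0.0.jar" \
    -e "CONFIG_URI=file:///etc/ignite/ignite-server-config.xml" \
    -p 47500:47500 \
    apacheignite/ignite:2.4.0
```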
 

On Tuesday, April 17, 2018, 7:15:25 p.m. GMT+9, Petr Ivanov 
 wrote:  
 
 Hi, Kseniya.


I guess something is wrong with the wget in the distribution (Alpine Linux). I
will need some testing to investigate further.



> On 17 Apr 2018, at 13:02, Ksenia Vazhdaeva  wrote:
> 
> Hello,
> 
> I am trying to deploy Apache Ignite 2.4.0 in docker using external libs as
> described at https://apacheignite.readme.io/docs/docker-deployment
> 
> /docker run -d --name ignite -v
> /storage/ignite/ignite-server-config.xml:/etc/ignite/ignite-server-config.xml
> \
>    -e "CONFIG_URI=file:///etc/ignite/ignite-server-config.xml" -p
> 47500:47500 \
>    -e
> "EXTERNAL_LIBS=http://central.maven.org/maven2/org/apache/ignite/ignite-schedule/1.0.0/ignite-schedule-1.0.0.jar"
> \
>    apacheignite/ignite:2.4.0/
> 
> Docker container is started but in docker logs there is an error
> 
> /wget: unrecognized option: i
> BusyBox v1.27.2 (2017-12-12 10:41:50 GMT) multi-call binary.
> 
> Usage: wget [-c|--continue] [--spider] [-q|--quiet] [-O|--output-document
> FILE]
>     [--header 'header: value'] [-Y|--proxy on/off] [-P DIR]
>     [-S|--server-response] [-U|--user-agent AGENT] [-T SEC] URL.../
> 
> Thus the external lib is not loaded.
> Could you, please, help me to resolve the problem or provide me with another
> way to add external libraries to Ignite classpath?
> 
> Thanks in advance,
> Ksenia
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
  

Re: Slow invoke call

2018-04-17 Thread javastuff....@gmail.com
Thanks Val.

I understand that array copy is a heavy operation, and probably causes lots of
memory allocations too. However, my profiler shows the complete copy-and-append
logic taking 50% of the total time of the invoke call. Hence the question:
should invoke take this much time, or is it the concurrency needed for the
atomic operation that is killing it?

I have already tried putting separate entries instead of appending to a single
byte array. However, this approach needs more logic to keep the sequence, and
locking or synchronizing during fetch or remove.
During a quick implementation of this new approach, I used a scan query with a
filter on the key for the fetch and remove calls. As expected, put was faster
(no entry processor, no array copy); however, I faced an issue with the scan
query. Probably one thread was iterating on the scan query while another tried
to put, and that's where the scan query bails out with an exception.
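The cost the profiler is pointing at can be sketched in plain Java (illustrative only, not Ignite code or the original entry processor): each append allocates a new array and copies the whole existing value, so N appends copy O(N²) bytes in total, which is consistent with copy-and-append dominating the invoke call.

```java
import java.util.Arrays;

public class AppendCost {
    // Appends 'extra' to 'base' by allocating a new array and copying both;
    // this is the pattern used when growing a cached byte[] value in place.
    static byte[] append(byte[] base, byte[] extra) {
        byte[] result = Arrays.copyOf(base, base.length + extra.length);
        System.arraycopy(extra, 0, result, base.length, extra.length);
        return result;
    }

    public static void main(String[] args) {
        byte[] value = new byte[0];
        long bytesCopied = 0;
        for (int i = 0; i < 1000; i++) {
            bytesCopied += value.length + 16; // each append copies the entire existing value
            value = append(value, new byte[16]);
        }
        // Storing 16 KB via 1000 appends ends up copying roughly 8 MB.
        System.out.println(value.length + " bytes stored, " + bytesCopied + " bytes copied");
    }
}
```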



I am going to tweak this use case further to get better results; any
ideas/input will be appreciated.

Thanks,
-Sambhav



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Efficiently determining if cache keys belong to the localservernode

2018-04-17 Thread Raymond Wilson
Agreed on the idempotency comments. Many of the requests are aggregative
summarisations, so there will need to be some additional tracking to detect
double computation and missed computation in these cases.



I understand that Ignite grids respond to requests during rebalancing
operations, where partitions may move between nodes over significant time
periods. How does Ignite ensure request consistency during rebalancing?



*From:* Stanislav Lukyanov [mailto:stanlukya...@gmail.com]
*Sent:* Wednesday, April 18, 2018 12:26 AM
*To:* user@ignite.apache.org
*Subject:* RE: Efficiently determining if cache keys belong to the
localservernode



> Is the failure mode of a node changing primality for a key during an
affinity co-located compute function handled by Ignite automatically for
other contexts?

Are you asking whether or not affinityCall() would handle that? If so, then
no, not really – once the job is sent to a node, it is out. To handle that
Ignite would need to be able to stop the job, revert its changes and
restart it on another node – which is not possible in general, of course.



> Is there an event or similar facility to hook into to gain a notification
that this has occurred (and so re-run the computation to ensure the correct
result)?

You could listen to EVT_NODE_LEFT, EVT_NODE_FAILED and EVT_NODE_JOINED to
track topology changes, but it seems rather complex and fragile to me.

Instead I would try to make the computations idempotent (i.e. to make sure
that processing the same key on two nodes doesn’t lead to inconsistency),
and track which keys were processed to be able to restart the computation
on the unprocessed ones (if any).



Stan



*From: *Raymond Wilson 
*Sent: *17 апреля 2018 г. 14:01
*To: *user@ignite.apache.org
*Subject: *RE: Efficiently determining if cache keys belong to the
localservernode



Hi Stan



Thanks for the additional pointers.



Is the failure mode of a node changing primality for a key during an
affinity co-located compute function handled by Ignite automatically for
other contexts? Is there an event or similar facility to hook into to gain
a notification that this has occurred (and so re-run the computation to
ensure the correct result)?



Thanks,

Raymond.





*From:* Stanislav Lukyanov [mailto:stanlukya...@gmail.com]
*Sent:* Tuesday, April 17, 2018 10:42 PM
*To:* user@ignite.apache.org
*Subject:* RE: Efficiently determining if cache keys belong to the local
servernode



Hi Raymond,



OK, I see, batching the requests makes sense.

Have you looked at the ICacheAffinity interface? It provides a way to query
Ignite about the key-to-node mappings,

without dealing with partitions yourself.

The call

ignite.GetAffinity(“cache”).MapKeysToNodes(keys)

is suitable to split the request into batches on the client side.

The call

ignite.GetAffinity(“cache”).IsPrimary(key,
ignite.GetCluster().GetLocalNode())

is suitable to determine if the current node is primary for the key.



This way you don’t need to cache affinity mappings – you just always use
the current mappings of the node.

However, you still need to make sure you can handle affinity mappings
changing while your jobs are running.

One can imagine situations when two nodes process the same key (because
both were primary at different times),

or no nodes processed a key (e.g. because a new node has joined, became
primary for the key but didn’t receive the broadcast).



Thanks,

Stan



*From: *Raymond Wilson 
*Sent: *16 апреля 2018 г. 23:36
*To: *user@ignite.apache.org
*Subject: *RE: Efficiently determining if cache keys belong to the local
servernode



Hi Stan,



Your understanding is correct.



I'm aware of the AffinityRun and AffinityCall methods, and their simple key

limitation.



My use case may require 100,000 or more elements of information to be

processed, so I don't want to call AffinityRun/Call that often. Each of

these elements is identified by a key that is very efficiently encoded into

the request (at roughly 1 bit per key)



Further, each of those elements identifies work units that in themselves

could have 100,000 or more different elements to be processed.



One approach would be to explicitly break up the request into smaller ones,

each targeted at a server node. But that requires the requestor to have

intimate knowledge of the composition of the grid resources deployed, which

is not desirable.



The approach I'm looking into here is to have each server node receive the

same request via Cluster.Broadcast(), and for those nodes to determine which

elements in the overall request belong to them via the Key -> Partition affinity mapping.

The mapping itself is very efficient, and as I noted in my original post

determining the partition -> node map seems simple enough to do.



I'm unsure of the performance of requesting that mapping for every request,

versus caching it and adding watchers for rebalancing and topology change


Re: Reclaim memory from off-heap

2018-04-17 Thread Stanislav Lukyanov
Hi,

On Tue, Apr 17, 2018 at 8:33 PM, Deepesh Malviya  wrote:

> Hi,
>
> I have read this post - http://apache-ignite-users.
> 70518.x6.nabble.com/Cache-Destroy-Space-reclaim-td17208.html
>
> I have a few questions:
> 1. Is this post still true?
>
Yes.


> 2. If a cache is destroyed and recreated, will it use the same space, or
> will it be a new allocation on the off-heap?
>
Not necessarily the same space, but it will try to reuse what's already
available.
If Ignite has an already allocated chunk of memory that is not used by
other caches, it will use it.
Otherwise, a new chunk will be allocated.

In other words, if you create, fill and destroy a cache 100 times, Ignite will
not allocate memory 100 times; it will reuse what's already allocated.

3. Is there any way to reclaim memory from off-heap?
>
No. Instead, you can set the maximum size of a data region via
DataRegionConfiguration.setMaxSize().
Once a data region hits maxSize, Ignite will not allocate more memory for
the caches in that region.
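In configuration terms, the maxSize cap described above looks roughly like the sketch below. This is illustrative only: the region name and the 4 GB figure are assumptions, not values from this thread.

```java
// Illustrative sketch: cap the default data region so Ignite stops
// allocating off-heap memory for caches in that region beyond the limit.
DataRegionConfiguration regionCfg = new DataRegionConfiguration();
regionCfg.setName("Default_Region");                 // name is an assumption
regionCfg.setMaxSize(4L * 1024 * 1024 * 1024);       // 4 GB cap (example value)

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.setDefaultDataRegionConfiguration(regionCfg);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDataStorageConfiguration(storageCfg);
```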

Stan


Reclaim memory from off-heap

2018-04-17 Thread Deepesh Malviya
Hi,

I have read this post -
http://apache-ignite-users.70518.x6.nabble.com/Cache-Destroy-Space-reclaim-td17208.html

I have a few questions:
1. Is this post still true?
2. If a cache is destroyed and recreated, will it use the same space, or
will it be a new allocation on the off-heap?
3. Is there any way to reclaim memory from off-heap?

Regards,
_DM


Re: How to upgrade Ignite 2.3 version to the latest version i.e. 2.4 version

2018-04-17 Thread Pavel Vinokurov
Hi,

Ignite does not support rolling upgrades.
To upgrade from 2.3 to 2.4 you could stop the whole cluster, update to 2.4,
and restart the cluster.
Please look at the Cluster Activation and Baseline Topology page. Baseline
Topology is the major feature introduced in the 2.4 version.

2018-04-17 19:39 GMT+03:00 siva :

> Hi,
>
> We have a 2.3 server and a client node. We want to upgrade to the new
> version. Does Ignite support rolling upgrades? Or is there another way to
> upgrade?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


How to upgrade Ignite 2.3 version to the latest version i.e. 2.4 version

2018-04-17 Thread siva
Hi,

We have a 2.3 server and a client node. We want to upgrade to the new version.
Does Ignite support rolling upgrades? Or is there another way to upgrade?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Off heap and JVM heap memory allocation

2018-04-17 Thread akurbanov
But please consider that you must have enough memory at least for running the
OS and its services. Also, Ignite will warn you if you try to use more than
~80% of RAM (70% if native persistence is enabled, and 90% if Ignite is used
in pure in-memory mode).

Best regards,
Anton



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite spatial index behavior

2018-04-17 Thread Pavel Vinokurov
Hi,

Ignite provides envelope intersection, so it searches for an intersection of
the bounding box of a polygon and a certain point.
A low 'hit ratio' should not significantly affect query performance.
Could you please share a small piece of code or a project?
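The envelope check described above can be illustrated in plain Java (a simplified sketch, not Ignite's actual H2 spatial index code): the index-level test only answers whether a point falls inside a polygon's bounding box, and a cheap rejection at this stage is consistent with non-intersecting queries returning faster.

```java
public class Envelope {
    final double minX, minY, maxX, maxY;

    Envelope(double minX, double minY, double maxX, double maxY) {
        this.minX = minX; this.minY = minY; this.maxX = maxX; this.maxY = maxY;
    }

    // The cheap index-level test: is the point inside the bounding box?
    // A 'true' here only makes the polygon a candidate; an exact
    // point-in-polygon test must still follow.
    boolean contains(double x, double y) {
        return x >= minX && x <= maxX && y >= minY && y <= maxY;
    }

    public static void main(String[] args) {
        Envelope env = new Envelope(0, 0, 10, 5); // bounding box of some polygon
        System.out.println(env.contains(3, 4));   // inside the box: exact test needed
        System.out.println(env.contains(3, 6));   // outside: rejected by the index alone
    }
}
```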

2018-04-15 10:42 GMT+03:00 olg.k...@gmail.com :

> Is there an optimization for spatial index behavior?
> I'm running a POC, trying to show Ignite performance when searching for a
> polygon (from a polygon table) which intersects with a certain point
> (randomly generated).
> I've noticed that when the 'hit ratio' is low (no intersecting polygon), the
> query returns faster.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


RE: Efficiently determining if cache keys belong to the localservernode

2018-04-17 Thread Stanislav Lukyanov
> Is the failure mode of a node changing primality for a key during an affinity 
> co-located compute function handled by Ignite automatically for other 
> contexts?
Are you asking whether or not affinityCall() would handle that? If so, then no, 
not really – once the job is sent to a node, it is out. To handle that Ignite 
would need to be able to stop the job, revert its changes and restart it on 
another node – which is not possible in general, of course.

> Is there an event or similar facility to hook into to gain a notification 
> that this has occurred (and so re-run the computation to ensure the correct 
> result)?
You could listen to EVT_NODE_LEFT, EVT_NODE_FAILED and EVT_NODE_JOINED to track 
topology changes, but it seems rather complex and fragile to me.
Instead I would try to make the computations idempotent (i.e. to make sure that 
processing the same key on two nodes doesn’t lead to inconsistency), and track 
which keys were processed to be able to restart the computation on the 
unprocessed ones (if any).
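A minimal sketch of such tracking in plain Java (all names are hypothetical, not Ignite API): mark keys as processed so a duplicate run after a primary change is detected rather than applied twice, and collect the unprocessed keys for resubmission.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical tracker for the idempotency scheme described above.
public class ProcessedTracker {
    private final Set<Object> done = ConcurrentHashMap.newKeySet();

    // Returns true only the first time a key is marked, so a second node
    // processing the same key after a topology change can skip its work.
    public boolean markDone(Object key) {
        return done.add(key);
    }

    // Keys from the original request that still need to be restarted.
    public List<Object> unprocessed(Collection<?> requested) {
        List<Object> missing = new ArrayList<>();
        for (Object k : requested)
            if (!done.contains(k))
                missing.add(k);
        return missing;
    }
}
```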

Stan 

From: Raymond Wilson
Sent: 17 апреля 2018 г. 14:01
To: user@ignite.apache.org
Subject: RE: Efficiently determining if cache keys belong to the localservernode

Hi Stan
 
Thanks for the additional pointers. 
 
Is the failure mode of a node changing primality for a key during an affinity 
co-located compute function handled by Ignite automatically for other contexts? 
Is there an event or similar facility to hook into to gain a notification that 
this has occurred (and so re-run the computation to ensure the correct result)?
 
Thanks,
Raymond.
 
 
From: Stanislav Lukyanov [mailto:stanlukya...@gmail.com] 
Sent: Tuesday, April 17, 2018 10:42 PM
To: user@ignite.apache.org
Subject: RE: Efficiently determining if cache keys belong to the local 
servernode
 
Hi Raymond,
 
OK, I see, batching the requests makes sense.
Have you looked at the ICacheAffinity interface? It provides a way to query 
Ignite about the key-to-node mappings,
without dealing with partitions yourself.
The call
    ignite.GetAffinity(“cache”).MapKeysToNodes(keys)
is suitable to split the request into batches on the client side.
The call
    ignite.GetAffinity(“cache”).IsPrimary(key, 
ignite.GetCluster().GetLocalNode())
is suitable to determine if the current node is primary for the key.
 
This way you don’t need to cache affinity mappings – you just always use the 
current mappings of the node.
However, you still need to make sure you can handle affinity mappings changing 
while your jobs are running.
One can imagine situations when two nodes process the same key (because both 
were primary at different times),
or no nodes processed a key (e.g. because a new node has joined, became primary 
for the key but didn’t receive the broadcast).
 
Thanks,
Stan
 
From: Raymond Wilson
Sent: 16 апреля 2018 г. 23:36
To: user@ignite.apache.org
Subject: RE: Efficiently determining if cache keys belong to the local 
servernode
 
Hi Stan,
 
Your understanding is correct.
 
I'm aware of the AffinityRun and AffinityCall methods, and their simple key
limitation.
 
My use case may require 100,000 or more elements of information to be
processed, so I don't want to call AffinityRun/Call that often. Each of
these elements is identified by a key that is very efficiently encoded into
the request (at roughly 1 bit per key)
 
Further, each of those elements identifies work units that in themselves
could have 100,000 or more different elements to be processed.
 
One approach would be to explicitly break up the request into smaller ones,
each targeted at a server node. But that requires the requestor to have
intimate knowledge of the composition of the grid resources deployed, which
is not desirable.
 
The approach I'm looking into here is to have each server node receive the
same request via Cluster.Broadcast(), and for those nodes to determine which
elements in the overall request belong to them via the Key -> Partition affinity mapping.
The mapping itself is very efficient, and as I noted in my original post
determining the partition -> node map seems simple enough to do.
 
I'm unsure of the performance of requesting that mapping for every request,
versus caching it and adding watchers for rebalancing and topology change
events to invalidate that cache mapping as needed (and how to wire those
up).
 
Thanks,
Raymond.
 
-Original Message-
From: Stanislav Lukyanov [mailto:stanlukya...@gmail.com]
Sent: Tuesday, April 17, 2018 12:02 AM
To: user@ignite.apache.org
Subject: RE: Efficiently determining if cache keys belong to the local
server node
 
// Bcc’ing off dev@ignite list for now as it seems to be rather a user-space
discussion.
 
Hi,
 
Let me take a step back first. It seems a bit like an XY problem
(https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem),
so I’d like to clarify the goals before diving into your current solution.
 
AFAIU you want to process certain entries in your cache locally on the
server that caches these 

Re: Do we require to set MaxDirectMemorySize JVM parameter?

2018-04-17 Thread dkarachentsev
Hi Ankit,

No, Ignite uses sun.misc.Unsafe for off-heap memory. Direct memory may be
used in DirectBuffers for intercommunication. Usually the defaults are quite
enough.

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Do we require to set MaxDirectMemorySize JVM parameter?

2018-04-17 Thread Ankit Singhai
Hi All,
Do we need to set the MaxDirectMemorySize JVM parameter on an Ignite server to
make use of off-heap memory?

For example, to make use of 8 GB of off-heap memory, should I add
MaxDirectMemorySize? If yes, how much?

Regards,
Ankit




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Efficiently determining if cache keys belong to the local servernode

2018-04-17 Thread Stanislav Lukyanov
Hi Raymond,

OK, I see, batching the requests makes sense.
Have you looked at the ICacheAffinity interface? It provides a way to query 
Ignite about the key-to-node mappings,
without dealing with partitions yourself.
The call
ignite.GetAffinity(“cache”).MapKeysToNodes(keys)
is suitable to split the request into batches on the client side.
The call
ignite.GetAffinity(“cache”).IsPrimary(key, 
ignite.GetCluster().GetLocalNode())
is suitable to determine if the current node is primary for the key.

This way you don’t need to cache affinity mappings – you just always use the 
current mappings of the node.
However, you still need to make sure you can handle affinity mappings changing 
while your jobs are running.
One can imagine situations when two nodes process the same key (because both 
were primary at different times),
or no nodes processed a key (e.g. because a new node has joined, became primary 
for the key but didn’t receive the broadcast).

Thanks,
Stan
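On the server side, the broadcast-and-filter step Raymond describes can be sketched in plain Java with a hypothetical partition map. This is only the shape of the idea: Ignite's real RendezvousAffinityFunction is far more involved than the modulo mapping used here, and the set of primary partitions would come from the Affinity API rather than being passed in.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Set;

public class LocalKeyFilter {
    static final int PARTITIONS = 1024; // Ignite's default partition count

    // Simplified key -> partition mapping; Ignite's affinity function is
    // more complex, this only illustrates the deterministic mapping idea.
    static int partition(Object key) {
        return (key.hashCode() & Integer.MAX_VALUE) % PARTITIONS;
    }

    // Keep only the keys whose partition this node owns as primary,
    // so each node processes its own slice of the broadcast request.
    static List<Object> localKeys(Collection<?> keys, Set<Integer> myPrimaryPartitions) {
        List<Object> mine = new ArrayList<>();
        for (Object key : keys)
            if (myPrimaryPartitions.contains(partition(key)))
                mine.add(key);
        return mine;
    }
}
```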

From: Raymond Wilson
Sent: 16 апреля 2018 г. 23:36
To: user@ignite.apache.org
Subject: RE: Efficiently determining if cache keys belong to the local 
servernode

Hi Stan,

Your understanding is correct.

I'm aware of the AffinityRun and AffinityCall methods, and their simple key
limitation.

My use case may require 100,000 or more elements of information to be
processed, so I don't want to call AffinityRun/Call that often. Each of
these elements is identified by a key that is very efficiently encoded into
the request (at roughly 1 bit per key)

Further, each of those elements identifies work units that in themselves
could have 100,000 or more different elements to be processed.

One approach would be to explicitly break up the request into smaller ones,
each targeted at a server node. But that requires the requestor to have
intimate knowledge of the composition of the grid resources deployed, which
is not desirable.

The approach I'm looking into here is to have each server node receive the
same request via Cluster.Broadcast(), and for those nodes to determine which
elements in the overall request belong to them via the Key -> Partition affinity mapping.
The mapping itself is very efficient, and as I noted in my original post
determining the partition -> node map seems simple enough to do.

I'm unsure of the performance of requesting that mapping for every request,
versus caching it and adding watchers for rebalancing and topology change
events to invalidate that cache mapping as needed (and how to wire those
up).

Thanks,
Raymond.

-Original Message-
From: Stanislav Lukyanov [mailto:stanlukya...@gmail.com]
Sent: Tuesday, April 17, 2018 12:02 AM
To: user@ignite.apache.org
Subject: RE: Efficiently determining if cache keys belong to the local
server node

// Bcc’ing off dev@ignite list for now as it seems to be rather a user-space
discussion.

Hi,

Let me take a step back first. It seems a bit like an XY problem
(https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem),
so I’d like to clarify the goals before diving into your current solution.

AFAIU you want to process certain entries in your cache locally on the
server that caches these entries. Is that correct?
Have you looked at affinityRun and affinityCall
(https://apacheignite.readme.io/docs/collocate-compute-and-data)? If yes,
why they don’t work for you?
One limitation with these methods is that they accept a single key to
process. Can you process your keys one by one, or do you need to access
multiple keys at once?

Thanks,
Stan

From: Raymond Wilson
Sent: 15 апреля 2018 г. 10:55
To: user@ignite.apache.org
Cc: d...@ignite.apache.org
Subject: Efficiently determining if cache keys belong to the local server
node

I have a type of query that asks for potentially large numbers of
information elements to be computed. Each element has an affinity key that
maps it to a server node through an IAffinityFunction.



The way the question is asked means that a single query broadcast to the
compute projection (owning the cache containing the source data for the
request) contains the identities of all the pieces of information needed to
be processed.



Each server node then scans the elements requested and identifies which ones
are its responsibility according to the affinity key.



Calculating the partition ID from the affinity key is simple (I have an
affinity function set up and supplied to the cache configuration, or I could
use IAffinity.GetPartition()), so the question became: How do I know the
server node executing the query is responsible for that partition, and so
should process this element? I.e., I need to derive the vector of primary or
backup partitions that this node is responsible for.



I can query the partition map and return it, like this:



ICacheAffinity affinity = Cache.Ignite.GetAffinity(Cache.Name);

public Dictionary primaryPartitions =
affinity.GetPrimaryPartitions(Cache.Ignite.GetCluster().GetLocalNode()).ToDictionary(k
=> k, v => 

Re: Docker deployment with EXTERNAL_LIBS environment variable

2018-04-17 Thread Petr Ivanov
Hi, Kseniya.


I guess something is wrong with the wget in the distribution (Alpine Linux). I
will need some testing to investigate further.



> On 17 Apr 2018, at 13:02, Ksenia Vazhdaeva  wrote:
> 
> Hello,
> 
> I am trying to deploy Apache Ignite 2.4.0 in docker using external libs as
> described at https://apacheignite.readme.io/docs/docker-deployment
> 
> /docker run -d --name ignite -v
> /storage/ignite/ignite-server-config.xml:/etc/ignite/ignite-server-config.xml
> \
>-e "CONFIG_URI=file:///etc/ignite/ignite-server-config.xml" -p
> 47500:47500 \
>-e
> "EXTERNAL_LIBS=http://central.maven.org/maven2/org/apache/ignite/ignite-schedule/1.0.0/ignite-schedule-1.0.0.jar"
> \
> apacheignite/ignite:2.4.0/
> 
> Docker container is started but in docker logs there is an error
> 
> /wget: unrecognized option: i
> BusyBox v1.27.2 (2017-12-12 10:41:50 GMT) multi-call binary.
> 
> Usage: wget [-c|--continue] [--spider] [-q|--quiet] [-O|--output-document
> FILE]
>   [--header 'header: value'] [-Y|--proxy on/off] [-P DIR]
>   [-S|--server-response] [-U|--user-agent AGENT] [-T SEC] URL.../
> 
> Thus the external lib is not loaded.
> Could you, please, help me to resolve the problem or provide me with another
> way to add external libraries to Ignite classpath?
> 
> Thanks in advance,
> Ksenia
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Docker deployment with EXTERNAL_LIBS environment variable

2018-04-17 Thread Ksenia Vazhdaeva
Hello,

I am trying to deploy Apache Ignite 2.4.0 in docker using external libs as
described at https://apacheignite.readme.io/docs/docker-deployment

/docker run -d --name ignite -v
/storage/ignite/ignite-server-config.xml:/etc/ignite/ignite-server-config.xml
\
-e "CONFIG_URI=file:///etc/ignite/ignite-server-config.xml" -p
47500:47500 \
-e
"EXTERNAL_LIBS=http://central.maven.org/maven2/org/apache/ignite/ignite-schedule/1.0.0/ignite-schedule-1.0.0.jar"
\
 apacheignite/ignite:2.4.0/

Docker container is started but in docker logs there is an error

/wget: unrecognized option: i
BusyBox v1.27.2 (2017-12-12 10:41:50 GMT) multi-call binary.

Usage: wget [-c|--continue] [--spider] [-q|--quiet] [-O|--output-document
FILE]
[--header 'header: value'] [-Y|--proxy on/off] [-P DIR]
[-S|--server-response] [-U|--user-agent AGENT] [-T SEC] URL.../

Thus the external lib is not loaded.
Could you, please, help me to resolve the problem or provide me with another
way to add external libraries to Ignite classpath?

Thanks in advance,
Ksenia



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/