Re: WITH AS syntax does not work in Ignite 2.9.0?

2020-12-07 Thread Pavel Vinokurov
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>
>   at
>
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>
>   at java.lang.Thread.run(Thread.java:745)
>
> Caused by: org.h2.jdbc.JdbcSQLException: General error:
>
> "java.lang.NullPointerException" [5-197]
>
>   at
> org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
>
>   at org.h2.message.DbException.get(DbException.java:168)
>
>   at org.h2.message.DbException.convert(DbException.java:307)
>
>   at
> org.h2.message.DbException.toSQLException(DbException.java:280)
>
>   at
> org.h2.message.TraceObject.logAndConvert(TraceObject.java:357)
>
>   at
> org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:697)
>
>   at
>
>
> org.apache.ignite.internal.processors.query.h2.H2Connection.prepareStatementNoCache(H2Connection.java:191)
>
>   ... 26 more
>
> Caused by: java.lang.NullPointerException
>
>   at
> org.h2.table.Table.removeChildrenAndResources(Table.java:545)
>
>   at
> org.h2.table.TableView.removeChildrenAndResources(TableView.java:439)
>
>   at
> org.h2.engine.Session.removeLocalTempTable(Session.java:409)
>
>   at
> org.h2.command.Parser.parseSingleCommonTableExpression(Parser.java:5233)
>
>   at org.h2.command.Parser.parseWith(Parser.java:5145)
>
>   at
> org.h2.command.Parser.parseWithStatementOrQuery(Parser.java:1931)
>
>   at org.h2.command.Parser.parsePrepared(Parser.java:499)
>
>   at org.h2.command.Parser.parse(Parser.java:335)
>
>   at org.h2.command.Parser.parse(Parser.java:307)
>
>   at org.h2.command.Parser.prepareCommand(Parser.java:278)
>
>   at org.h2.engine.Session.prepareLocal(Session.java:611)
>
>   at org.h2.engine.Session.prepareCommand(Session.java:549)
>
>   at
> org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1247)
>
>   at
> org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:76)
>
>   at
> org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:694)
>
>   ... 27 more
>
>
>
> On the client side, the error looks like:
>
>
>
> java.sql.SQLException: Failed to parse query. General error:
>
> "java.lang.NullPointerException" [5-197] at
>
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:1004)
>
> at
>
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:231)
>
> at
>
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.executeQuery(JdbcThinStatement.java:139)
>
> at
>
> com.zaxxer.hikari.pool.ProxyStatement.executeQuery(ProxyStatement.java:111)
>
> at
>
>
> com.zaxxer.hikari.pool.HikariProxyStatement.executeQuery(HikariProxyStatement.java...
>
>
>
> The SQL queries work fine right after Ignite starts. The error happens after
> Ignite runs for a while, and after that it seems to happen every time.
>
>
>
> Do you have any ideas about it?
>
>
>
>
>
>
>
>
>
> --
>
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>


-- 

Regards

Pavel Vinokurov


Re: maxSize - per node or per cluster? - only 2k/sec in 11 node dedicated cluster

2020-11-15 Thread Pavel Vinokurov
Devin,

Could you share the code block that makes the puts?
Do you use thick or thin clients?

Thanks,
Pavel

Sat, 14 Nov 2020 at 06:29, Devin Bost:

> Hi Pavel,
>
> Thanks for your reply.
>
> Here's one of our configs:
>
>
> <beans xmlns="http://www.springframework.org/schema/beans"
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> xmlns:util="http://www.springframework.org/schema/util"
> xsi:schemaLocation="http://www.springframework.org/schema/beans
> http://www.springframework.org/schema/beans/spring-beans.xsd
> http://www.springframework.org/schema/util
> http://www.springframework.org/schema/util/spring-util.xsd">
>  "org.apache.ignite.configuration.IgniteConfiguration">
>
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
>  "factoryOf">
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
>
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
>
> 
> 
> 
>  "org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"
> >
> 
> 
> 
> 
> 
>
> 
> 
> 
> 
> 
> 
> 
>
> 
>  >
> 
> 
> 
>
> 
>  "org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_EXPIRED"/>
> 
>
> 
> 
>
> There are about 20 clients, and I'm not seeing any threading in the client
> code; however, what's strange is that we're seeing a very large number of
> connections (thousands), and we can't figure out what's causing them.
> Performance does seem to drop even further when connections rise above
> 2,000, but even when connections are in the hundreds, throughput is still
> below 3,000/sec at best.
> One of our devs said that IgniteDataStreamer didn't support our use case,
> I think because we're doing a lot of updates.
>
> Devin G. Bost
>
>
> On Thu, Nov 12, 2020 at 10:04 PM Pavel Vinokurov <
> vinokurov.pa...@gmail.com> wrote:
>
>> Hi Devin,
>>
>> MaxSize is set per node for the specified data region.
>> A few clarifying questions:
>> What cache configuration are you using? Performance can depend on
>> the type of cache and the number of backups.
>> How many clients and threads are writing to the cluster? There may be a
>> bottleneck on the client side.
>> Have you tried IgniteDataStreamer [2]? It's the recommended approach for
>> loading massive amounts of data.
>>
>> You could refer to persistence tuning documentation[1].
>>
>> [1] https://ignite.apache.org/docs/latest/persistence/persistence-tuning
>> [2] https://ignite.apache.org/docs/latest/data-streaming#data-streaming
>> Thanks,
>> Pavel
>>
Fri, 13 Nov 2020 at 07:26, Devin Bost:
>>
>>> We're trying to figure out how to get more throughput from our Ignite
>>> cluster.
>>> We have 11 dedicated Ignite VM nodes, each with 32 GB of RAM. Yet, we're
>>> only writing at 2k/sec max, even when we parallelize the writes to Ignite.
>>> We're using native persistence, but it just seems way slower than
>>> expected.
>>>
>>> We have our cache maxSize set to 16 GB. Since the cluster has a total of
>>> 352 GB of RAM, should our maxSize be set per node, or per cluster? If it's
>>> per cluster, then I'm thinking we'd want to increase maxSize to at least
>>> 300 GB. Looking for feedback on this.
>>>
>>> We've already turned off swapping on the boxes.
>>>
>>> What else can we tune to increase throughput? It seems like we should be
>>> getting 100x the throughput at least, so I'm hoping something is just
>>> misconfigured.
>>>
>>> Devin G. Bost
>>>
>>
>>
>> --
>>
>> Regards
>>
>> Pavel Vinokurov
>>
>

-- 

Regards

Pavel Vinokurov


Re: Unixodbc and Apache Ignite on OSX compilation

2020-11-13 Thread Pavel Vinokurov
Hi Wolfgang,

Make sure that LD_LIBRARY_PATH contains the unixODBC libs.
You may set the following:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/unixODBC/lib

Thanks,
Pavel

Thu, 12 Nov 2020 at 19:44, Wolfgang Meyerle <wolfgang.meye...@googlemail.com>:

> Hi,
>
> I'm currently struggling a little bit getting Ignite up and running with
> UnixOdbc on my OSX machine.
>
> I'm compiling the platform cpp files atm with
>
> cmake -DWITH_ODBC=ON -DWITH_THIN_CLIENT=ON -DWITH_TESTS=ON
> -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=/Users/user/ApacheIgnite ..
>
>
> and getting:
>
> Undefined symbols for architecture x86_64:
>"_SQLGetPrivateProfileString", referenced from:
>ignite::odbc::ReadDsnString(char const*,
> std::__1::basic_string,
> std::__1::allocator > const&, std::__1::basic_string std::__1::char_traits, std::__1::allocator > const&) in
> dsn_config.cpp.o
>"_SQLInstallerError", referenced from:
>ignite::odbc::ThrowLastSetupError() in dsn_config.cpp.o
>"_SQLWritePrivateProfileString", referenced from:
>ignite::odbc::WriteDsnString(char const*, char const*, char
> const*) in dsn_config.cpp.o
> ld: symbol(s) not found for architecture x86_64
> clang: error: linker command failed with exit code 1 (use -v to see
> invocation)
> make[2]: *** [odbc/libignite-odbc.2.9.0.50002.dylib] Error 1
> make[1]: *** [odbc/CMakeFiles/ignite-odbc.dir/all] Error 2
> make: *** [all] Error 2
>
> as an output.
>
>
> Any ideas?
>
> My rough guess is that the UnixOdbc interface must have changed.
> Currently I'm using version unixodbc 2.3.9 according to brew.
>
> Apache Ignite is in version apache-ignite-2.9.0
>
>
> Regards,
>
>
> Wolfgang
>
>

-- 

Regards

Pavel Vinokurov


Re: CPU soft lockup

2020-11-12 Thread Pavel Vinokurov
Hi Devakumar,

Could you show the logs from the server nodes?

Thanks,
Pavel

Fri, 13 Nov 2020 at 04:22, Devakumar J:

> Hi,
> We are running on 2.8.0 version with 3 server nodes and 2 client nodes. We
> notice CPU soft lockup issues and server node goes down with critical error
> detected.
>
> Do we have any document reference or checklist to investigate this issue?
>
> Thanks,
> Devakumar
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 

Regards

Pavel Vinokurov


Re: maxSize - per node or per cluster? - only 2k/sec in 11 node dedicated cluster

2020-11-12 Thread Pavel Vinokurov
Hi Devin,

MaxSize is set per node for the specified data region.
A few clarifying questions:
What cache configuration are you using? Performance can depend on
the type of cache and the number of backups.
How many clients and threads are writing to the cluster? There may be a
bottleneck on the client side.
Have you tried IgniteDataStreamer [2]? It's the recommended approach for
loading massive amounts of data; see the sketch below the links.

You could refer to persistence tuning documentation[1].

[1] https://ignite.apache.org/docs/latest/persistence/persistence-tuning
[2] https://ignite.apache.org/docs/latest/data-streaming#data-streaming
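For reference, a minimal sketch of both points, assuming a 16 GB default
region and an illustrative cache name "myCache" (the names and sizes are
placeholders, not taken from your config):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterState;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StreamerSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // maxSize is per node: 16 GB on each of 11 server nodes gives the
        // cluster up to 11 * 16 GB of in-memory cache space in total.
        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.getDefaultDataRegionConfiguration()
            .setMaxSize(16L * 1024 * 1024 * 1024)
            .setPersistenceEnabled(true);
        cfg.setDataStorageConfiguration(storage);

        try (Ignite ignite = Ignition.start(cfg)) {
            // With native persistence the cluster must be activated first.
            ignite.cluster().state(ClusterState.ACTIVE);
            ignite.getOrCreateCache("myCache");

            try (IgniteDataStreamer<Long, String> streamer =
                     ignite.dataStreamer("myCache")) {
                // allowOverwrite(true) lets the streamer apply updates to
                // existing keys, at some cost in throughput.
                streamer.allowOverwrite(true);

                for (long i = 0; i < 1_000_000; i++)
                    streamer.addData(i, "value-" + i);
            }
        }
    }
}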
Thanks,
Pavel

Fri, 13 Nov 2020 at 07:26, Devin Bost:

> We're trying to figure out how to get more throughput from our Ignite
> cluster.
> We have 11 dedicated Ignite VM nodes, each with 32 GB of RAM. Yet, we're
> only writing at 2k/sec max, even when we parallelize the writes to Ignite.
> We're using native persistence, but it just seems way slower than expected.
>
> We have our cache maxSize set to 16 GB. Since the cluster has a total of
> 352 GB of RAM, should our maxSize be set per node, or per cluster? If it's
> per cluster, then I'm thinking we'd want to increase maxSize to at least
> 300 GB. Looking for feedback on this.
>
> We've already turned off swapping on the boxes.
>
> What else can we tune to increase throughput? It seems like we should be
> getting 100x the throughput at least, so I'm hoping something is just
> misconfigured.
>
> Devin G. Bost
>


-- 

Regards

Pavel Vinokurov


Re: IgniteAtomicSequence

2020-11-02 Thread Pavel Vinokurov
IgniteAtomicLong shows poor results in the case of non-zero backups.
But IgniteAtomicSequence did 1M increments in ~0.4 seconds in the attached
example.

Wed, 28 Oct 2020 at 13:45, narges saleh:

> Thanks Pavel for the replies.
>
> I did a quick/basic benchmark of IgniteAtomicSequence and IgniteAtomicLong,
> with only two Ignite nodes, with 1,000,000 iterations. Unless I am doing
> something wrong, the result was very disappointing.
> With IgniteAtomicSequence, the iterations took about 30 seconds.
> With IgniteAtomicLong, it took over 3 minutes. I repeated the test 5 times.
>
> On Wed, Oct 28, 2020 at 5:33 AM Pavel Vinokurov 
> wrote:
>
>> The atomic sequence/long could be persisted if the default data region is
>> persistent[1].
>> One of the use cases for IgniteAtomicSequence is to generate unique IDs.
>> You could estimate the volume implicitly by getting metrics from the
>> default data region[2].
>>
>> [1]
>> https://ignite.apache.org/docs/latest/persistence/native-persistence#enabling-persistent-storage
>> [2]
>> https://apacheignite.readme.io/v2.8.0/docs/memory-metrics#getting-metrics
>>
>> Tue, 27 Oct 2020 at 21:50, narges saleh:
>>
>>> Questions:
>>> I know I should take into account the cluster size and the data volume,
>>> but how bad is the overhead of IgniteAtomicLong? Any benchmarking?
>>> What are the use cases for IgniteAtomicSequence if it does not
>>> guarantee ordering?
>>>
>>> thanks.
>>>
>>> On Tue, Oct 27, 2020 at 10:27 AM narges saleh 
>>> wrote:
>>>
>>>> Thank you Pavel, for the information.
>>>>
>>>> What happens to the atomic sequence/long value in case of a system
>>>> restart? Do I need to persist the value somewhere and pass it as the
>>>> initial value, across cluster restarts?
>>>>
>>>> On Tue, Oct 27, 2020 at 1:13 AM Pavel Vinokurov <
>>>> vinokurov.pa...@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> In case of rebalance the reserves aren't recalculated, so the next
>>>>> values will be in the range [1001, 1000+reserveSize].
>>>>> After the local reserve is exhausted, the node reserves a new range
>>>>> starting from the global last sequence number.
>>>>> The main idea of IgniteAtomicSequence is to generate unique numbers,
>>>>> but it doesn't guarantee ordering.
>>>>> If you need ordering you could set the reserve size to 1 or use
>>>>> IgniteAtomicLong.
>>>>>
>>>>> Thanks,
>>>>> Pavel
>>>>>
>>>>>> Sun, 25 Oct 2020 at 05:00, narges saleh:
>>>>>
>>>>>> Hi All,
>>>>>> What is the consequence of data rebalancing across nodes as far as
>>>>>> IgniteAtomicSequence and the reserve on each node is concerned? For
>>>>>> example, if the last sequence number is 6000 in one node and the record
>>>>>> moves to a node whose last sequence number is 1000? Do the reserves on 
>>>>>> both
>>>>>> nodes get recalculated?
>>>>>>
>>>>>> Are there best practices for generating and using these sequences?
>>>>>>
>>>>>> Is IgniteAtomicSequence the right approach if I want to keep track of
>>>>>> the records on each node for a partitioned cache?
>>>>>>
>>>>>> thanks.
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> Regards
>>>>>
>>>>> Pavel Vinokurov
>>>>>
>>>>
>>
>> --
>>
>> Regards
>>
>> Pavel Vinokurov
>>
>

-- 

Regards

Pavel Vinokurov
import java.util.Arrays;
import java.util.UUID;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteAtomicLong;
import org.apache.ignite.IgniteAtomicSequence;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.AtomicConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class AtomicStructuresExample {

    public static void main(String[] args) {
        Ignite ignite1 = Ignition.start(getCfg());
        // getCfg() presumably assigns a distinct instance name per call;
        // the remainder of the attached example was truncated by the archive.
        Ignite ignite2 = Ignition.start(getCfg());
    }
}

Re: IgniteAtomicSequence

2020-10-28 Thread Pavel Vinokurov
The atomic sequence/long could be persisted if the default data region is
persistent[1].
One of the use cases for IgniteAtomicSequence is to generate unique IDs.
You could estimate the volume implicitly by getting metrics from the
default data region[2].

[1]
https://ignite.apache.org/docs/latest/persistence/native-persistence#enabling-persistent-storage
[2]
https://apacheignite.readme.io/v2.8.0/docs/memory-metrics#getting-metrics

Tue, 27 Oct 2020 at 21:50, narges saleh:

> Questions:
> I know I should take into account the cluster size and the data volume,
> but how bad is the overhead of IgniteAtomicLong? Any benchmarking?
> What are the use cases for IgniteAtomicSequence if it does not
> guarantee ordering?
>
> thanks.
>
> On Tue, Oct 27, 2020 at 10:27 AM narges saleh 
> wrote:
>
>> Thank you Pavel, for the information.
>>
>> What happens to the atomic sequence/long value in case of a system
>> restart? Do I need to persist the value somewhere and pass it as the
>> initial value, across cluster restarts?
>>
>> On Tue, Oct 27, 2020 at 1:13 AM Pavel Vinokurov <
>> vinokurov.pa...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> In case of rebalance the reserves aren't recalculated, so the next
>>> values will be in the range [1001, 1000+reserveSize].
>>> After the local reserve is exhausted, the node reserves a new range
>>> starting from the global last sequence number.
>>> The main idea of IgniteAtomicSequence is to generate unique numbers,
>>> but it doesn't guarantee ordering.
>>> If you need ordering you could set the reserve size to 1 or use
>>> IgniteAtomicLong.
>>>
>>> Thanks,
>>> Pavel
>>>
>>> Sun, 25 Oct 2020 at 05:00, narges saleh:
>>>
>>>> Hi All,
>>>> What is the consequence of data rebalancing across nodes as far as
>>>> IgniteAtomicSequence and the reserve on each node is concerned? For
>>>> example, if the last sequence number is 6000 in one node and the record
>>>> moves to a node whose last sequence number is 1000? Do the reserves on both
>>>> nodes get recalculated?
>>>>
>>>> Are there best practices for generating and using these sequences?
>>>>
>>>> Is IgniteAtomicSequence the right approach if I want to keep track of
>>>> the records on each node for a partitioned cache?
>>>>
>>>> thanks.
>>>>
>>>
>>>
>>> --
>>>
>>> Regards
>>>
>>> Pavel Vinokurov
>>>
>>

-- 

Regards

Pavel Vinokurov


Re: IgniteAtomicSequence

2020-10-27 Thread Pavel Vinokurov
Hi,

In case of rebalance the reserves aren't recalculated, so the next values
will be in the range [1001, 1000+reserveSize].
After the local reserve is exhausted, the node reserves a new range
starting from the global last sequence number.
The main idea of IgniteAtomicSequence is to generate unique numbers, but it
doesn't guarantee ordering.
If you need ordering you could set the reserve size to 1, as in the sketch
below, or use IgniteAtomicLong.
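For reference, a minimal sketch of setting the reserve size (the sequence
name is illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteAtomicSequence;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.AtomicConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class SequenceReserveSketch {
    public static void main(String[] args) {
        AtomicConfiguration atomicCfg = new AtomicConfiguration();
        // 1 disables per-node range reservation, trading throughput for
        // globally ordered values.
        atomicCfg.setAtomicSequenceReserveSize(1);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setAtomicConfiguration(atomicCfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            // Created on first use with initial value 0.
            IgniteAtomicSequence seq = ignite.atomicSequence("mySeq", 0, true);

            System.out.println(seq.incrementAndGet());
        }
    }
}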

Thanks,
Pavel

Sun, 25 Oct 2020 at 05:00, narges saleh:

> Hi All,
> What is the consequence of data rebalancing across nodes as far as
> IgniteAtomicSequence and the reserve on each node is concerned? For
> example, if the last sequence number is 6000 in one node and the record
> moves to a node whose last sequence number is 1000? Do the reserves on both
> nodes get recalculated?
>
> Are there best practices for generating and using these sequences?
>
> Is IgniteAtomicSequence the right approach if I want to keep track of the
> records on each node for a partitioned cache?
>
> thanks.
>


-- 

Regards

Pavel Vinokurov


Re: Deploying Ignite in Docker

2020-06-29 Thread Pavel Vinokurov
Hi Courtney,

Probably you need to specify a local IP address to bind to by setting
IgniteConfiguration#setLocalHost(), as in the sketch below.
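A minimal sketch (the address is illustrative, taken from the ip finder
list in your config; use the container's private IP that the other nodes
dial):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class LocalHostSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Bind discovery/communication to the private address instead of
        // letting Ignite pick the Docker bridge interface.
        cfg.setLocalHost("10.131.53.147");

        Ignition.start(cfg);
    }
}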

Thanks,
Pavel

Mon, 29 Jun 2020 at 21:18, Courtney Robinson:

> I've deployed Ignite bare-metal and in Kubernetes in test and production
> but I'm now trying to deploy for a new project in a new cluster with Docker
> and it is proving difficult. I can't figure out what port it is trying to
> use or how to force it to use a specific one.
>
> Note that I've removed the "--net=host" option which the docs
> <https://apacheignite.readme.io/docs/docker-deployment> mention since
> that would expose the cluster on the node's public IPs.
>
> Note that if I use --net=host it works as expected.
>
> sudo docker run -d -it --name=ignite --restart=unless-stopped \
> -e CONFIG_URI="/ignite/config.xml" \
> -e OPTION_LIBS="ignite-rest-http,ignite-visor-console,ignite-web" \
> -e JVM_OPTS="-Xms1g -Xmx10g -server -XX:+AggressiveOpts -XX:MaxPermSize=256m 
> -Djava.net.preferIPv4Stack=true -DIGNITE_QUIET=false 
> -Dcom.sun.management.jmxremote.port=49112" \
> -e IGNITE_WORK_DIR=/persistence \
> -v /mnt/vol0/ignite:/persistence \
> -v /root/ignite/config.xml:/ignite/config.xml \
> -p ${arr[$host]}:11211:11211 \
> -p ${arr[$host]}:47100:47100 \
> -p ${arr[$host]}:47500:47500 \
> -p ${arr[$host]}:49112:49112 \
> -p ${arr[$host]}:49100:49100 \
> -p ${arr[$host]}:10800:10800 \
> -p ${arr[$host]}:8080:8080 \
> apacheignite/ignite:2.8.1
>
> This is in a loop so ${arr[$host]} will be replaced with one of the
> private network's IPs.
> You can see from the logs that the TCPDiscovery gets connections but
> whatever happens next isn't successful so the nodes keep retrying and the
> logs below just repeat forever:
>
> [12:46:42,934][INFO][tcp-disco-sock-reader-[20ec0423 
> 172.17.0.1:35998]-#10][TcpDiscoverySpi]
>> Finished serving remote node connection [rmtAddr=/172.17.0.1:35998,
>> rmtPort=35998
>> [12:46:45,637][INFO][tcp-disco-srvr-[:47500]-#3][TcpDiscoverySpi] TCP
>> discovery accepted incoming connection [rmtAddr=/172.17.0.1,
>> rmtPort=36016]
>> [12:46:45,637][INFO][tcp-disco-srvr-[:47500]-#3][TcpDiscoverySpi] TCP
>> discovery spawning a new thread for connection [rmtAddr=/172.17.0.1,
>> rmtPort=36016]
>> [12:46:45,637][INFO][tcp-disco-sock-reader-[]-#11][TcpDiscoverySpi]
>> Started serving remote node connection [rmtAddr=/172.17.0.1:36016,
>> rmtPort=36016]
>> [12:46:45,639][INFO][tcp-disco-sock-reader-[20ec0423 
>> 172.17.0.1:36016]-#11][TcpDiscoverySpi]
>> Finished serving remote node connection [rmtAddr=/172.17.0.1:36016,
>> rmtPort=36016
>> [12:46:45,661][INFO][tcp-disco-srvr-[:47500]-#3][TcpDiscoverySpi] TCP
>> discovery accepted incoming connection [rmtAddr=/10.131.60.224,
>> rmtPort=40325]
>> [12:46:45,661][INFO][tcp-disco-srvr-[:47500]-#3][TcpDiscoverySpi] TCP
>> discovery spawning a new thread for connection [rmtAddr=/10.131.60.224,
>> rmtPort=40325]
>> [12:46:45,661][INFO][tcp-disco-sock-reader-[]-#12][TcpDiscoverySpi]
>> Started serving remote node connection [rmtAddr=/10.131.60.224:40325,
>> rmtPort=40325]
>> [12:46:45,662][INFO][tcp-disco-sock-reader-[8433be36 
>> 10.131.60.224:40325]-#12][TcpDiscoverySpi]
>> Initialized connection with remote server node
>> [nodeId=8433be36-2855-4ff3-a849-35c3ebb25545, rmtAddr=/
>> 10.131.60.224:40325]
>> [12:46:45,676][INFO][tcp-disco-sock-reader-[8433be36 
>> 10.131.60.224:40325]-#12][TcpDiscoverySpi]
>> Finished serving remote node connection [rmtAddr=/10.131.60.224:40325,
>> rmtPort=40325
>>
>
> The config.xml being used is below and the docker command is:
>
> What have I missed in this config? What port does it need that isn't being
> set/opened?
>
> 
> <beans xmlns="http://www.springframework.org/schema/beans"
>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>xsi:schemaLocation="
> http://www.springframework.org/schema/beans
> http://www.springframework.org/schema/beans/spring-beans.xsd">
>   <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>     [property names and values stripped by the archive]
>     <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>       [...]
> 10.131.53.147:47500
> 10.131.77.79:47500
> 10.131.60.224:47500
> 10.131.77.111:47500
> 10.131.77.93:47500
> 10.131.77.84:47500
>       [...]
>     </bean>
>     <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
>       [...]
>     </bean>
>   </bean>
> </beans>
>
> Regards,
> Courtney Robinson
> Founder and CEO, Hypi
> Tel: +44 208 123 2413 (GMT+0)
> https://hypi.io
>


-- 

Regards

Pavel Vinokurov


Re: SQLFeatureNotSupportedException in JdbcThinConnection

2020-05-12 Thread Pavel Vinokurov
Hello,

I've created the ticket for this issue:
https://issues.apache.org/jira/browse/IGNITE-13000

Thanks,
Pavel

Sun, 10 May 2020 at 04:54, Emmanuel Neri:

> Hello, I'm an Ignite user and I have a problem using Ignite with SQL in
> version 2.7.5.
>
> I use the vertx-jdbc-client dependency in my Vert.x based application, and
> this API calls the prepareStatement method of JdbcThinConnection.
> The prepareStatement method with sql and
> autoGeneratedKeys arguments throws SQLFeatureNotSupportedException("Auto
> generated keys are not supported."), like the stack below:
>
> java.sql.SQLFeatureNotSupportedException: Auto generated keys are not
> supported. at
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.prepareStatement(JdbcThinConnection.java:557)
>
> I think Ignite does not support auto generated keys, and I called this method
> with autoGeneratedKeys = 2 (NO_GENERATED_KEYS), but I am still having
> problems. Is there a reason this method has no implementation, or is it
> possible to implement something like that in JdbcThinConnection?
>
> @Override
> public PreparedStatement prepareStatement(String sql, int autoGeneratedKeys)
>     throws SQLException {
>     ensureNotClosed();
>
>     if (autoGeneratedKeys == Statement.RETURN_GENERATED_KEYS)
>         throw new SQLFeatureNotSupportedException("Auto generated keys are not supported.");
>
>     return prepareStatement(sql);
> }
>
>
> Thank you
>
> --
> Att.:
>
>   *  Emmanuel Neri *
>
> www.emmanuelneri.com.br/sobre
>


-- 

Regards

Pavel Vinokurov


Re: Sudden node failure on Ignite v2.7.5

2019-07-15 Thread Pavel Vinokurov
Hi,

It looks like the issue described in
https://issues.apache.org/jira/browse/IGNITE-11953
There are two options:
1. I believe it will be fixed very soon, so you could create a build
based on 2.7.5 with the cherry-picked fix.
2. You could remove the cacheGroup property from the cache configuration.
In this case you have to start a new cluster, and it requires much more heap
memory on startup since there are about 650 caches.

Thanks,
Pavel

Sat, 13 Jul 2019 at 18:18, ihalilaltun:

> Hi Igniters,
>
> Recently (11.07.2019), we upgraded our Ignite version from 2.7.0 to
> 2.7.5. Just about 11 hours later, one of our nodes killed itself without any
> notification. I am adding the details that I could get from the server and
> the topology we use:
>
> *Ignite version*: 2.7.5
> *Cluster size*: 16
> *Client size*: 22
> *Cluster OS version*: Centos 7
> *Cluster Kernel version*: 4.4.185-1.el7.elrepo.x86_64
> *Java version* :
> openjdk version "1.8.0_212"
> OpenJDK Runtime Environment (build 1.8.0_212-b04)
> OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
>
> By the way, this is a production environment and we have been using this
> topology for almost 5 months. Our average TPS is ~5000 for the
> cluster.
> We have 8 to 10 different objects that we persist in Ignite, some of them
> relatively big and some are just strings.
>
> ignite.zip
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2515/ignite.zip>
> gc.current
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2515/gc.current>
> hs_err_pid18537.log
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2515/hs_err_pid18537.log>
>
>
>
>
>
> -
> İbrahim Halil Altun
> Senior Software Engineer @ Segmentify
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 

Regards

Pavel Vinokurov


Re: Task flow implementation question

2019-05-30 Thread Pavel Vinokurov
Hi Pascoe,

Please pay attention to the following example:
https://apacheignite.readme.io/docs/continuous-mapping#section-example

It demonstrates continuous mapping that could work in your case.


Thanks,
Pavel

Thu, 30 May 2019 at 08:34, Pascoe Scholle:

> Hello everyone,
>
> So I am trying to put together a task flow chain. A task can have any
> number of inputs and outputs, what I would call ports and each port has a
> value stored in the cache.
>
> A task can have numerous predecessors.
>
> For example say I have two nodes which can be executed in parallel: one
> generates a large 2d sparse array and saves this to the cache and the
> second generates a vector which is also saved to cache. A successor is
> linked to these two tasks, and has an input of type array and a second
> input of type vector, looking like follows:
>
> GEN_MAT (e.g. 15 seconds)  - >
>   MAT_VEC_MUL - >
> GEN_VEC(e.g. 1 second)   - >
>
> As I have tried to show, GEN_MAT takes a lot longer; MAT_VEC_MUL can only
> execute once all of its input ports are set.
>
> My question is how to implement this functionality effectively.
> The input and output ports use a KEY:VALUE scheme, so with cache events,
> all successor nodes can have their input ports listen for their KEY values
> to be set, and it does work. But it feels very clunky. I am playing around
> with setting attributes in ComputeTaskSession, but have not managed to get
> it working. In a way cache events seem like the best option.
>
> Any recommendations or ideas would be really helpful, I am very new to
> apache ignite and just programming in general.
>
> Thanks and kind regards,
> Pascoe
>


-- 

Regards

Pavel Vinokurov


Re: xa transaction manager and ignite

2019-03-31 Thread Pavel Vinokurov
>> return transactionManager;
>> }
>>
>> public IgniteConfiguration doit() {
>>     IgniteConfiguration cfg = new IgniteConfiguration();
>>     cfg.setTransactionConfiguration(doTransactionConfiguration());
>>     return cfg;
>> }
>>
>>
>>
>> Here is a snippet of my code (with pieces omitted for brevity):
>>
>> When I execute the public method “doit”, the autowire fails to load (I
>> do not require it) and none of the jndi locations that I listed seem to
>> have the “java TransactionManager” (again – not to be confused with the
>> bean named “TransactionManager” that does not implement TransactionManager,
>> but instead implements TransactionManagerPlatform).
>>
>>
>>
>> What is the right way to associate the xa manager that is setup with
>> spring – which I don’t know in advance, with ignite?
>>
>>
>>
>> Thanks,
>>
>>
>>
>> SCott
>>
>>

-- 

Regards

Pavel Vinokurov


Re: Cassandra writeBehind configuration

2019-02-12 Thread Pavel Vinokurov
>    at org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore$Flusher.<init>(GridCacheWriteBehindStore.java:914)
>    at org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore.start(GridCacheWriteBehindStore.java:342)
>    at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.start0(GridCacheStoreManagerAdapter.java:205)
>    at org.apache.ignite.internal.processors.cache.store.CacheOsStoreManager.start0(CacheOsStoreManager.java:64)
>    at org.apache.ignite.internal.processors.cache.GridCacheManagerAdapter.start(GridCacheManagerAdapter.java:50)
>    at org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCache(GridCacheProcessor.java:1188)
>    at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1964)
>    at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processClientCacheStartRequests(CacheAffinitySharedManager.java:438)
>    at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processClientCachesChanges(CacheAffinitySharedManager.java:622)
>    at org.apache.ignite.internal.processors.cache.GridCacheProcessor.processCustomExchangeTask(GridCacheProcessor.java:376)
>    at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.processCustomTask(GridCachePartitionExchangeManager.java:2235)
>    at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2371)
>    at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2299)
>    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>    at java.lang.Thread.run(Thread.java:748)
>
>
>
>
>


-- 

Regards

Pavel Vinokurov


Re: Affinity questions

2018-10-08 Thread Pavel Vinokurov
James,

Please pay attention to the following query configuration method,
Query#setLocal(true). It makes the query return only the records located on
the local node.
The following code block iterates through the cache entries located on each
server node:

ignite.compute(ignite.cluster().forServers()).broadcast(new IgniteRunnable() {
    @Override public void run() {
        Ignite local = Ignition.localIgnite();

        System.out.println("Ignite node: " + local.cluster().localNode().id());

        IgniteCache<Object, Object> cache = local.cache("cache");

        // setLocal(true) restricts the scan to this node's own partitions.
        QueryCursor<Cache.Entry<Object, Object>> query =
            cache.query(new ScanQuery<>().setLocal(true));

        Iterator<Cache.Entry<Object, Object>> it = query.iterator();
        while (it.hasNext()) {
            it.next(); // iterate through local entries
        }
    }
});


Fri, 5 Oct 2018 at 22:45, James Dodson:

> Thanks Val.
> Is there a way I can verify this behavior - a way to query one node in
> particular to see what data is on each node?
>
> On Fri, Oct 5, 2018 at 2:04 PM vkulichenko 
> wrote:
>
>> James,
>>
>> That would be enough - everything with the same affinity key will be
>> stored
>> on the same node. That's assuming, of course, that both caches have equal
>> affinity configuration (same affinity function, same number of partitions,
>> etc.).
>>
>> -Val
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>

-- 

Regards

Pavel Vinokurov


Re: Can't write to Ignite cluster

2018-07-16 Thread Pavel Vinokurov
Ray,

As I can see from the following logs:
18/07/13 01:33:18 WARN cache.GridCachePartitionExchangeManager: >>>
GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion
[topVer=25309, minorTopVer=0], evt=NODE_JOINED, evtNode=TcpDiscoveryNode
[id=0caad476-3652-453f-8fc8-e8880c12eea9, addrs=[10.252.218.225,
127.0.0.1], sockAddrs=[/127.0.0.1:49501,
rpbt1ign003.webex.com/10.252.218.225:49501], discPort=49501, order=25309,
intOrder=12657, lastExchangeTime=1531445583607, loc=false,
ver=2.4.0#20180305-sha1:aa342270, isClient=false], done=false]

A new node from 10.252.218.225 joined the cluster with discovery
port 49501.
It means that port 49500 was busy and 49501 was chosen according
to the TcpDiscoverySpi#locPortRange parameter. But this port isn't included
in TcpDiscoveryVmIpFinder#addresses.
Please check that you start only one node per host, and/or configure the
locPortRange and addresses parameters; see the sketch below.
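A minimal sketch of aligning the two settings (the host and base port are
taken from the logs above; the range width is illustrative):

import java.util.Arrays;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class DiscoverySketch {
    public static IgniteConfiguration cfg() {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        // The address list must cover every port a node may fall back to.
        ipFinder.setAddresses(Arrays.asList("10.252.218.225:49500..49509"));

        TcpDiscoverySpi spi = new TcpDiscoverySpi();
        spi.setLocalPort(49500);
        spi.setLocalPortRange(10);
        spi.setIpFinder(ipFinder);

        return new IgniteConfiguration().setDiscoverySpi(spi);
    }
}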

Thanks,
Pavel




2018-07-16 6:37 GMT+03:00 Ray :

> Hi Pavel,
>
> I started Ignite using this command.
> /usr/bin/java -Xmx128g -Xms128g -XX:+UseG1GC -XX:+PrintAdaptiveSizePolicy
> -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+AlwaysPreTouch
> -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> -Xloggc:/spare/ignite/log/ignitegc-2018_07_14-02_38.log
> -DIGNITE_QUIET=true
> -DIGNITE_SUCCESS_FILE=/spare/ignite/work/ignite_success_
> c6a52b2c-db0d-4eb7-a0e6-64df5ec615c4
> -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=18999
> -Dcom.sun.management.jmxremote.authenticate=false
> -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=node1
> -DIGNITE_HOME=/opt/apache-ignite
> -DIGNITE_PROG_NAME=/opt/apache-ignite/bin/ignite.sh -cp
> /opt/apache-ignite/libs/*:/opt/apache-ignite/libs/ignite-
> indexing/*:/opt/apache-ignite/libs/ignite-log4j2/*:/opt/
> apache-ignite/libs/ignite-rest-http/*:/opt/apache-
> ignite/libs/ignite-spring/*:/opt/apache-ignite/libs/licenses/*
> org.apache.ignite.startup.cmdline.CommandLineStartup
> /opt/apache-ignite/config/persistent-config.xml
>
> And I use Spark dataframe API to ingest data into Ignite.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: Ignite AWS cluster setup

2018-07-16 Thread Pavel Vinokurov
Hi,

Could you please show the cluster configuration?

2018-07-13 17:56 GMT+03:00 Kia Rahmani :

> Hello all,
> I recently started using Ignite for benchmarking a research project on a
> local multi-node cluster. Now I need to deploy the cluster on
> geographically
> distributed nodes, but I am stuck at setting up the connection on EC2
> machines.
>
> I have closely followed the TCP-based discovery instructions with static
> IPs
> of EC2 nodes. The first node seems to be starting up perfectly fine, but
> when I launch the second node, even though it triggers a topology change in
> the first node, it is immediately bounced back to the single-node (initial)
> state (I'll attach a sample output of the node 1 at the end of the
> message).
> I have used S3 Based Discovery method as well, but the problem persists.
>
> I appreciate your time and sorry for asking a novice question
> Bests, Kia
>
> [14:28:07] Ignite node started OK (id=431973b5)
> [14:28:07] Topology snapshot [ver=1, servers=1, clients=0, CPUs=2,
> offheap=0.77GB, heap=1.0GB]
> [14:28:07]   ^-- Node [...]
> [14:28:07] Data Regions Configured:
> [14:28:07]   ...
> [14:29:40] Topology snapshot [ver=2, servers=2, clients=0, CPUs=4,
> offheap=1.5GB, heap=2.0GB]
> [14:29:40]   ^-- Node [...]
> [14:29:40] Data Regions Configured:
> [14:29:40]   ...
> [14:29:40] Topology snapshot [ver=3, servers=1, clients=0, CPUs=2,
> offheap=0.77GB, heap=1.0GB]
> [14:29:40]  ...
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: Can't write to Ignite cluster

2018-07-15 Thread Pavel Vinokurov
Hi,

Such topology version increasing looks pretty strange.
Would you be able to show how you launch the cluster and use the
data streamer?

Thanks,
Pavel

2018-07-13 9:50 GMT+03:00 Ray :

> Here's the full log and thread dump for three nodes and client to ingest
> data.
>
> node1.zip
> <http://apache-ignite-users.70518.x6.nabble.com/file/t1346/node1.zip>
> node2.zip
> <http://apache-ignite-users.70518.x6.nabble.com/file/t1346/node2.zip>
> node3.zip
> <http://apache-ignite-users.70518.x6.nabble.com/file/t1346/node3.zip>
> client_log.client_log
> <http://apache-ignite-users.70518.x6.nabble.com/file/
> t1346/client_log.client_log>
>
> And I can only query Ignite cluster using sqlline.
> When when I try to launch a Java client, it failed.
> The log is similar with client_log I attached.
> These two logs is printed again and again
> 18/07/13 01:33:18 WARN cache.GridCachePartitionExchangeManager: Failed to
> wait for initial partition map exchange. Possible reasons are:
>   ^-- Transactions in deadlock.
>   ^-- Long running transactions (ignore if this is the case).
>   ^-- Unreleased explicit locks.
> 18/07/13 01:33:18 WARN internal.diagnostic: Failed to wait for partition
> map
> exchange [topVer=AffinityTopologyVersion [topVer=25308, minorTopVer=0],
> node=3c164ab8-0cf1-4451-8bfe-0c415ac932cd]. Dumping pending objects that
> might be the cause:
>
> Can anybody advise me why the topology version keeps increasing so I can do
> a preliminary research?
> From my prior experience with Ignite, the topology version shouldn't be
> increasing when there's no data ingested into cluster.
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: odbc / cursor is in open state allready

2018-07-13 Thread Pavel Vinokurov
Hi,

Probably this issue relates to
https://issues.apache.org/jira/browse/IGNITE-8838

Thanks,
Pavel

2018-07-12 15:36 GMT+03:00 Som Som <2av10...@gmail.com>:

> 2.5.0
>
> Thu, 12 Jul 2018, 12:47 Igor Sapego:
>
>> Hello,
>>
>> What is the Ignite version you are using?
>>
>> Best Regards,
>> Igor
>>
>>
>> On Wed, Jul 11, 2018 at 7:47 PM Som Som <2av10...@gmail.com> wrote:
>>
>>> There is a system ("DS") which publishes data into an MS SQL database via
>>> ODBC, and it works without any problems.
>>>
>>> So I created a cache instead of the MS database table “T”, then as a test I
>>> connected via DBeaver and inserted a test row, and there was no problem. Next
>>> I installed the Ignite ODBC driver and created a DSN, but when I tried to
>>> publish data from "DS", only one row was inserted into the table and I saw
>>> errors in the "DS" log file: “Error: SQL Error DBTable ‘T’ DB ODBC error: Query
>>> cursor is in open state already.”
>>>
>>>
>>>
>>> Cache was created with the following code:
>>>
>>>
>>>
>>> var cache = ignite.GetOrCreateCache<TKey, T>(
>>>     new CacheConfiguration
>>>     {
>>>         SqlSchema = "PUBLIC",
>>>         Name = "T",
>>>         WriteSynchronizationMode = CacheWriteSynchronizationMode.FullAsync,
>>>         QueryEntities = new[] { new QueryEntity(typeof(TKey), typeof(T)) }
>>>     });
>>>
>>>
>>>
>>> What could be the problem?
>>>
>>


-- 

Regards

Pavel Vinokurov


Re: Ignite with Spring Boot Data JPA

2018-07-10 Thread Pavel Vinokurov
Hi,

Ignite uses H2 SQL syntax, so you could set the H2 dialect, e.g. in
application.properties:

spring.jpa.database-platform=org.hibernate.dialect.H2Dialect

Thanks,
Pavel

2018-07-11 0:46 GMT+03:00 Вячеслав Коптилин :

> Hello,
>
> I don't think I clearly understand the issue you faced.
> Please provide more details about the problem so that the community can
> help you.
> Could you create a small reproducer project and share it on GitHub?
> By the way, the log file from the server node will be helpful as well.
>
> Thanks,
> S.
>
> Tue, 10 Jul 2018 at 22:40, bitanxen:
>
>> Hi, I have a running Ignite cluster. Now I have a Spring Boot Maven
>> project
>> which depends on spring-data-jpa, and its JDBC datasource is
>> configured with Ignite. But it's giving me an error that the Hibernate
>> entity manager bean is not initialised because it didn't find
>> hibernate.dialect.
>>
>> Is there any working configuration for Ignite JDBC with Spring Data
>> JPA?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


-- 

Regards

Pavel Vinokurov


Re: Scaling with SQL query

2018-06-26 Thread Pavel Vinokurov
Hi Tom,

In the case of a replicated cache, Ignite plans the execution of the SQL
query across the whole cluster by splitting it into multiple map queries and
a single reduce query.
Thus communication overhead is possible, caused by the "reduce" node
collecting data from multiple nodes; the sketch below shows how to inspect
the split.
Please show the metrics for this query for your configuration.
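One way to inspect the split is EXPLAIN; a minimal sketch, assuming the
node joins a cluster that already holds the Logs table:

import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class ExplainSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // EXPLAIN returns the generated map queries plus the reduce query.
            List<List<?>> plan = ignite.cache("Logs").query(
                new SqlFieldsQuery(
                    "EXPLAIN SELECT * FROM Logs ORDER BY time DESC LIMIT 100"))
                .getAll();

            for (List<?> row : plan)
                System.out.println(row.get(0));
        }
    }
}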

Thanks,
Pavel


2018-06-26 6:24 GMT+03:00 Tom M :

> Hi,
>
> I have a cluster of 10 nodes, and a cache with replication factor 3 and no
> persistency enabled.
> The SQL query is pretty simple -- "SELECT * FROM Logs ORDER by time DESC
> LIMIT 100".
> I have checked the index for "time" attribute is applied.
>
> When I increase the number of nodes, throughput drops and latency
> increases.
> Can you please explain why and how Ignite processes this SQL request?
>



-- 

Regards

Pavel Vinokurov


Re: SEVERE error while starting ignite in embedded mode

2018-05-23 Thread Pavel Vinokurov
Hi,

The file META-INF/classnames.properties is located in ignite-core*.jar.
It seems the RuntimeClassLoader classloader is unable to find this resource.
Please check your classpath and classloader; a minimal check is sketched
below.
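A minimal way to check, assuming the same classloader that starts Ignite:

import java.net.URL;

public class ClasspathCheck {
    public static void main(String[] args) {
        // null here means the resource (and likely ignite-core*.jar itself)
        // is not visible to this classloader.
        URL res = Thread.currentThread().getContextClassLoader()
            .getResource("META-INF/classnames.properties");

        System.out.println("classnames.properties -> " + res);
    }
}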


2018-05-22 10:33 GMT+03:00 the_palakkaran <jik...@suntecsbs.com>:

> Hi,
>
> When I am trying to start ignite in embedded mode, the below error is
> thrown.
>
> Can anyone help me understand what is wrong here?
>
> May 22, 2018 12:57:08 PM org.apache.ignite.logger.java.JavaLogger error
> SEVERE: Got exception while starting (will rollback startup routine).
> class org.apache.ignite.IgniteException: Failed to load class names
> properties file packaged with ignite binaries
> [file=META-INF/classnames.properties,
> ldr=com.container.deploy.loader.RuntimeClassLoader@308144a9]
>
> at
> org.apache.ignite.internal.MarshallerContextImpl.<init>(MarshallerContextImpl.java:120)
> at
> org.apache.ignite.internal.GridKernalContextImpl.<init>(GridKernalContextImpl.java:463)
> at org.apache.ignite.internal.IgniteKernal.start(
> IgniteKernal.java:847)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(
> IgnitionEx.java:1909)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(
> IgnitionEx.java:1652)
> at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.
> java:1080)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:600)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:525)
> at org.apache.ignite.Ignition.start(Ignition.java:322)
> at
> com.tpe.caching.config.IgniteConfig.igniteInstance(IgniteConfig.java:2724)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at
> org.springframework.beans.factory.support.SimpleInstantiationStrategy.
> instantiate(SimpleInstantiationStrategy.java:166)
> at
> org.springframework.beans.factory.support.ConstructorResolver.
> instantiateUsingFactoryMethod(ConstructorResolver.java:586)
> at
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFac
> tory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFac
> tory.java:1094)
> at
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFac
> tory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:989)
> at
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFac
> tory.doCreateBean(AbstractAutowireCapableBeanFactory.java:504)
> at
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFac
> tory.createBean(AbstractAutowireCapableBeanFactory.java:475)
> at
> org.springframework.beans.factory.support.AbstractBeanFactory$1.
> getObject(AbstractBeanFactory.java:304)
> at
> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.
> getSingleton(DefaultSingletonBeanRegistry.java:228)
> at
> org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(
> AbstractBeanFactory.java:300)
> at
> org.springframework.beans.factory.support.AbstractBeanFactory.getBean(
> AbstractBeanFactory.java:195)
> at
> org.springframework.beans.factory.support.DefaultListableBeanFactory.
> preInstantiateSingletons(DefaultListableBeanFactory.java:703)
> at
> org.springframework.context.support.AbstractApplicationContext.
> finishBeanFactoryInitialization(AbstractApplicationContext.java:760)
> at
> org.springframework.context.support.AbstractApplicationContext.refresh(
> AbstractApplicationContext.java:482)
> at
> com.tpe.caching.controller.IgniteController.loadCache(
> IgniteController.java:427)
> at com.tpe.core.server.ServerManager.init(ServerManager.java:3663)
> at
> com.container.core.ServiceContainerCommandDespatcher.run(
> ServiceContainerCommandDespatcher.java:46)
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: Read request response time is unstable, often more than 500 milliseconds, but the cluster load is small

2018-05-10 Thread Pavel Vinokurov
An Ignite node should start with any WAL mode. I suppose the same error
would have occurred with FSYNC mode.
Would you be able to restart with LOG_ONLY mode and show the logs?

2018-05-10 12:39 GMT+03:00 NO <727418...@qq.com>:

> Using the LOG_ONLY mode, I remember having encountered this problem. After
> the node rebooted and printed an error message, the node could not be
> started. At that time, I did not keep the error message. I searched the
> source code; it may be one of these two:
> 1. 'Failed to find checkpoint record at the given WAL pointer'
> 2. 'on disk, but checkpoint record is missed in WAL '
>
> In LOG_ONLY mode, might it fail to start in case of a node crash?
>
>
> -- Original message --
> *From:* "Pavel Vinokurov"<vinokurov.pa...@gmail.com>;
> *Sent:* Thursday, May 10, 2018, 5:13 PM
> *To:* "user"<user@ignite.apache.org>;
> *Subject:* Re: Read request response time is unstable, often more
> than 500 milliseconds, but the cluster load is small
>
> Please, try to check performance with LOG_ONLY mode.
>
> 2018-05-10 12:03 GMT+03:00 NO <727418...@qq.com>:
>
>> Hi,
>>
>> I have tested setting -DIGNITE_WAL_FSYNC_WITH_DEDICATED_WORKER=true, but
>> it seriously affects the write speed. I do not know what the impact of
>> setting this parameter is; is it necessary to set other parameters to
>> increase the write speed?
>>
>>
>> -- Original message --
>> *From:* "Pavel Vinokurov"<vinokurov.pa...@gmail.com>;
>> *Sent:* Thursday, May 10, 2018, 4:59 PM
>> *To:* "user"<user@ignite.apache.org>;
>> *Subject:* Re: Read request response time is unstable, often more than
>> 500 milliseconds, but the cluster load is small
>>
>> Hi,
>>
>> I see several exceptions in your logs. They probably cause the slowdown:
>>
>> java.lang.ClassCastException:
>> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager
>> cannot be cast to
>> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager
>>
>> It seems you have the issue related to
>> https://issues.apache.org/jira/browse/IGNITE-7865, which is fixed in the
>> 2.5 version.
>> As a workaround you could change the WALMode to LOG_ONLY or start Ignite
>> with the JVM property -DIGNITE_WAL_FSYNC_WITH_DEDICATED_WORKER=true
>>
>> Thanks,
>> Pavel
>>
>>
>>
>>
>>
>> 2018-05-10 5:42 GMT+03:00 NO <727418...@qq.com>:
>>
>>> hi,
>>>
>>> Ignite version : 2.4.0
>>>
>>> Read operations often exceed 500 milliseconds, but the cluster traffic
>>> is very small. I don't know why. Please help me solve this problem. Thank
>>> you very much. Here is some configuration information.
>>>
>>> 8 node : (48 core ,192G RAM, 4TB SSD)
>>> Cluster records : 1.7 billion primary keys , 1.7 billion backup keys
>>> Get requests per second : 100+
>>> Put requests per second : 400+
>>> Each node occupies more than 500GB of disk space.
>>>
>>> 2 node :
>>> LSB Version::core-4.1-amd64:core-4.1-noarc
>>> h:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1
>>> -noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.
>>> 1-amd64:printing-4.1-noarch
>>> Distributor ID:CentOS
>>> Description:CentOS Linux release 7.2.1511 (Core)
>>> Release:7.2.1511
>>> Codename:Core
>>>
>>> 6 node:
>>> LSB Version::base-4.0-amd64:base-4.0-noarc
>>> h:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics
>>> -4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
>>> Distributor ID:CentOS
>>> Description:CentOS release 6.7 (Final)
>>> Release:6.7
>>> Codename:Final
>>> 
>>> =
>>> The node configuration is as follows
>>> 
>>> <beans xmlns="http://www.springframework.org/schema/beans"
>>>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>>xmlns:util="http://www.springframework.org/schema/util"
>>>xsi:schemaLocation="http://www.springframework.org/schema/beans
>>> http://www.springframework.org/schema/beans/spring-beans.xsd
>>> http://www.springframework.org/schema/util
>>> http://www.springframework.org/schema/util/spring-util.xsd">
>>> 
>>>

Re: Read request response time is unstable, often more than 500 milliseconds, but the cluster load is small

2018-05-10 Thread Pavel Vinokurov
Please, try to check performance with LOG_ONLY mode.

2018-05-10 12:03 GMT+03:00 NO <727418...@qq.com>:

> Hi,
>
> I have tested setting -DIGNITE_WAL_FSYNC_WITH_DEDICATED_WORKER=true, but
> it seriously affects the write speed. I do not know what the impact of
> setting this parameter is; is it necessary to set other parameters to
> increase the write speed?
>
> -- Original message --
> *From:* "Pavel Vinokurov"<vinokurov.pa...@gmail.com>;
> *Sent:* Thursday, May 10, 2018, 4:59 PM
> *To:* "user"<user@ignite.apache.org>;
> *Subject:* Re: Read request response time is unstable, often more than
> 500 milliseconds, but the cluster load is small
>
> Hi,
>
> I see several exceptions in your logs. They probably cause the slowdown:
>
> java.lang.ClassCastException:
> org.apache.ignite.internal.processors.cache.persistence.wal.FsyncModeFileWriteAheadLogManager
> cannot be cast to
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager
>
> It seems you have the issue related to
> https://issues.apache.org/jira/browse/IGNITE-7865, which is fixed in the
> 2.5 version.
> As a workaround you could change the WALMode to LOG_ONLY or start Ignite
> with the JVM property -DIGNITE_WAL_FSYNC_WITH_DEDICATED_WORKER=true
>
> Thanks,
> Pavel
>
>
>
>
>
> 2018-05-10 5:42 GMT+03:00 NO <727418...@qq.com>:
>
>> hi,
>>
>> Ignite version : 2.4.0
>>
>> Read operations often exceed 500 milliseconds, but the cluster traffic is
>> very small. I don't know why. Please help me solve this problem. Thank you
>> very much. Here is some configuration information.
>>
>> 8 node : (48 core ,192G RAM, 4TB SSD)
>> Cluster records : 1.7 billion primary keys , 1.7 billion backup keys
>> Get requests per second : 100+
>> Put requests per second : 400+
>> Each node occupies more than 500GB of disk space.
>>
>> 2 node :
>> LSB Version::core-4.1-amd64:core-4.1-noarc
>> h:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.
>> 1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-
>> 4.1-amd64:printing-4.1-noarch
>> Distributor ID:CentOS
>> Description:CentOS Linux release 7.2.1511 (Core)
>> Release:7.2.1511
>> Codename:Core
>>
>> 6 node:
>> LSB Version::base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-
>> noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-
>> amd64:printing-4.0-noarch
>> Distributor ID:CentOS
>> Description:CentOS release 6.7 (Final)
>> Release:6.7
>> Codename:Final
>> =
>> The node configuration is as follows
>> 
>> <beans xmlns="http://www.springframework.org/schema/beans"
>>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>xmlns:util="http://www.springframework.org/schema/util"
>>xsi:schemaLocation="http://www.springframework.org/schema/beans
>> http://www.springframework.org/schema/beans/spring-beans.xsd
>> http://www.springframework.org/schema/util
>> http://www.springframework.org/schema/util/spring-util.xsd">
>>
>> [most property names and values were stripped by the archive; the
>> surviving fragments show a data region named
>> "qipu_entity_cache_data_region" sized with #{1 * 1024 * 1024 * 1024}
>> and a cache bound to that region with FULL_SYNC write synchronization]

Re: Read request response time is unstable, often more than 500 milliseconds, but the cluster load is small

2018-05-10 Thread Pavel Vinokurov
Xmx24g -server -XX:+AggressiveOpts
> -XX:MaxMetaspaceSize=512m"
> JVM_OPTS="${JVM_OPTS} -XX:+AlwaysPreTouch"
> JVM_OPTS="${JVM_OPTS} -XX:+UseG1GC"
> JVM_OPTS="${JVM_OPTS} -XX:+ScavengeBeforeFullGC"
> JVM_OPTS="${JVM_OPTS} -XX:+DisableExplicitGC"
> JVM_OPTS="${JVM_OPTS} -XX:+HeapDumpOnOutOfMemoryError "
> JVM_OPTS="${JVM_OPTS} -XX:HeapDumpPath=${IGNITE_HOME}/work"
> JVM_OPTS="${JVM_OPTS} -XX:+PrintGCDetails"
> JVM_OPTS="${JVM_OPTS} -XX:+PrintGCTimeStamps"
> JVM_OPTS="${JVM_OPTS} -XX:+PrintGCDateStamps"
> JVM_OPTS="${JVM_OPTS} -XX:+UseGCLogFileRotation"
> JVM_OPTS="${JVM_OPTS} -XX:NumberOfGCLogFiles=10"
> JVM_OPTS="${JVM_OPTS} -XX:GCLogFileSize=100M"
> JVM_OPTS="${JVM_OPTS} -Xloggc:${IGNITE_HOME}/work/gc.log"
> JVM_OPTS="${JVM_OPTS} -XX:+PrintAdaptiveSizePolicy"
> JVM_OPTS="${JVM_OPTS} -XX:MaxGCPauseMillis=100"
> 
> =
> node config
> #/etc/sysctl.conf
> fs.file-max = 512000
> net.core.rmem_max = 67108864
> net.core.wmem_max = 67108864
> net.core.rmem_default = 65536
> net.core.wmem_default = 65536
> net.core.netdev_max_backlog = 4096
> net.core.somaxconn = 4096
> net.ipv4.tcp_syncookies = 1
> net.ipv4.tcp_tw_reuse = 1
> net.ipv4.tcp_tw_recycle = 0
> net.ipv4.tcp_fin_timeout = 30
> net.ipv4.tcp_keepalive_time = 1200
> net.ipv4.ip_local_port_range = 1 65000
> net.ipv4.tcp_max_syn_backlog = 4096
> net.ipv4.tcp_max_tw_buckets = 5000
> net.ipv4.tcp_rmem = 4096 87380 67108864
> net.ipv4.tcp_wmem = 4096 65536 67108864
> net.ipv4.tcp_mtu_probing = 1
> vm.swappiness=0
> vm.zone_reclaim_mode = 0
> vm.dirty_writeback_centisecs = 500
> vm.dirty_expire_centisecs = 500
> ===
> #/etc/security/limits.conf
> *   softnofile  65535
> *   hardnofile  65535
>
>
> # End of file
> *   softnofile 65535
> *   hardnofile 65535
> *   softnofile  81920
> *   hardnofile  81920
> *   softnproc   81920
> *   hardnproc   81920
> *   softcore10240
> *   hardcore10240
> *softdata   unlimited
> *harddata   unlimited
> *softstack  unlimited
> *hardstack  unlimited
> *softmemory unlimited
> *hardmemory unlimited
> *softcpuunlimited
> *hardcpuunlimited
> *softmemlockunlimited
> *hardmemlockunlimited
>
> * hard memlock  unlimited
> * soft memlock  unlimited
> ===
>
> client code
> ==
> Ignition.setClientMode(true);
>
> IgniteConfiguration cfg = new IgniteConfiguration();
>     TcpDiscoverySpi spi = new TcpDiscoverySpi();
>
> TcpDiscoveryVmIpFinder finder = new TcpDiscoveryVmIpFinder();
> finder.setAddresses(Arrays.asList(env.getProperty("
> ignite.server").split(",")));
> spi.setIpFinder(finder);
>
> cfg.setDiscoverySpi(spi);
> cfg.setGridLogger(new Slf4jLogger());
> Ignite ignite = Ignition.start(cfg);
> IgniteCache<String, byte[]> igniteCache = ignite
> .getOrCreateCache("qipu_entity_cache");
>
> // get code 【Read operation response time often exceeds 1s】
> igniteCache.getAllAsync(keySet).get(1000);
>
> // put code
> // cache.putAllAsync(map).get(3000);
> ==
>
>
> Attachment is a node's gc log and node log
>
> Please give some suggestions on how to reduce the read operation response
> time. Thank you.
>
>
>
>


-- 

Regards

Pavel Vinokurov


Re: NullPointer Exception in Continuous Query

2018-05-10 Thread Pavel Vinokurov
Hi

It seems you have hit the issue tracked in
https://issues.apache.org/jira/browse/IGNITE-7865, which is fixed in the 2.5
version.
As a workaround you could change the WAL mode to LOG_ONLY, or start Ignite
with the JVM property -DIGNITE_WAL_FSYNC_WITH_DEDICATED_WORKER=true.
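A minimal sketch of the first workaround, assuming the Ignite 2.x
persistence API (WALMode lives in org.apache.ignite.configuration):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.WALMode;

public class WalModeWorkaround {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        // Switch the WAL mode away from FSYNC until the upgrade to 2.5.
        storageCfg.setWalMode(WALMode.LOG_ONLY);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storageCfg);

        Ignition.start(cfg);
    }
}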

Thanks,
Pavel


2018-05-10 9:50 GMT+03:00 begineer <redni...@gmail.com>:

> Hi,
> I am getting this exception in logs but not sure on which cache it is
> happening. Logs does not help me identifying the cache either.
>
> Apologies to ask here without much details but just wondering if someone
> know what are the chances to get this exception.
> I have set of caches and  CacheMode is set to Local to all of them.
>
> [2018-05-10/02:02:29.585/BST][ERROR][grid-timeout-worker-#
> 43%null%][o.a.i.i.p.t.GridTimeoutProcessor]
> - Error when executing timeout callback: CancelableTask
> [id=a2ad, endTime=1525914154576, period=5000,
> cancel=false,
> task=o.a.i.i.processors.cache.query.continuous.
> CacheContinuousQueryManager$BackupCleaner@4b4455c9]
> java.lang.NullPointerException: null
> at
> org.apache.ignite.internal.processors.cache.query.continuous.
> CacheContinuousQueryHandler$1.acknowledgeBackupOnTimeout(
> CacheContinuousQueryHandler.java:513)
> ~[ignite-core-1.9.9.jar:1.9.9]
> at
> org.apache.ignite.internal.processors.cache.query.continuous.
> CacheContinuousQueryManager$BackupCleaner.run(CacheContinuousQueryManager.
> java:1195)
> ~[ignite-core-1.9.9.jar:1.9.9]
> at
> org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor$
> CancelableTask.onTimeout(GridTimeoutProcessor.java:256)
> ~[ignite-core-1.9.9.jar:1.9.9]
> at
> org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor$
> TimeoutWorker.body(GridTimeoutProcessor.java:158)
> ~[ignite-core-1.9.9.jar:1.9.9]
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> [ignite-core-1.9.9.jar:1.9.9]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_171]
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: Local Client not connecting to Server deployed in azure cloud.

2018-05-09 Thread Pavel Vinokurov
Hi,

Please check that port 47101 is accessible, and try to run with the JVM
argument -Djava.net.preferIPv4Stack=true.
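A minimal sketch of applying this on the client side programmatically. The
failureDetectionTimeout value below is illustrative, prompted by the
warnings in your log rather than part of the original advice:

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ClientBootstrap {
    public static void main(String[] args) {
        // Equivalent of -Djava.net.preferIPv4Stack=true, as long as it is
        // set before any networking classes are loaded.
        System.setProperty("java.net.preferIPv4Stack", "true");

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true);
        cfg.setFailureDetectionTimeout(30_000); // illustrative value

        Ignition.start(cfg);
    }
}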

Thanks,
Pavel

2018-05-09 5:55 GMT+03:00 JP <jopandy...@gmail.com>:

> Hi,
> I have Ignite servers and clusters running in Azure cloud West
> US
> region.
>
>   I am trying to connect my local ignite(South India) to server. Initially
> both connect, but after a few seconds the client automatically
> disconnects, showing these error messages:
>
> 2018-05-09 08:21:56 WARN  GridCachePartitionExchangeManager:480 - Local
> node
> failed to complete partition map exchange due to network issues, will try
> to
> reconnect to cluster
> 2018-05-09 08:22:06 WARN  TcpCommunicationSpi:480 - Connect timed out
> (consider increasing 'failureDetectionTimeout' configuration property)
> [addr=/10.0.0.9:47101, failureDetectionTimeout=1]
> 2018-05-09 08:22:07 WARN  TcpCommunicationSpi:480 - Connect timed out
> (consider increasing 'failureDetectionTimeout' configuration property)
> [addr=/0:0:0:0:0:0:0:1%lo:47101, failureDetectionTimeout=1]
> 2018-05-09 08:22:08 WARN  TcpCommunicationSpi:480 - Connect timed out
> (consider increasing 'failureDetectionTimeout' configuration property)
> [addr=/127.0.0.1:47101, failureDetectionTimeout=1]
> 2018-05-09 08:22:08 WARN  TcpCommunicationSpi:480 - Failed to connect to a
> remote node (make sure that destination node is alive and operating system
> firewall is disabled on local and remote hosts) [addrs=[/10.0.0.9:47101,
> /0:0:0:0:0:0:0:1%lo:47101, /127.0.0.1:47101]]
> 2018-05-09 08:22:08 WARN  TcpDiscoverySpi:480 - Local node will try to
> reconnect to cluster with new id due to network problems
> [newId=03f6723a-d7b2-4dff-b743-673b6760a52d,
> prevId=b1425d4f-b126-452c-a944-b14330bc714f, locNode=TcpDiscoveryNode
> [id=b1425d4f-b126-452c-a944-b14330bc714f, addrs=[0:0:0:0:0:0:0:1,
> 127.0.0.1,
> 192.168.1.31, 2001:0:9d38:6abd:20c3:3ba3:3f57:fee0],
> sockAddrs=[/0:0:0:0:0:0:0:1:0, /127.0.0.1:0,
> /2001:0:9d38:6abd:20c3:3ba3:3f57:fee0:0, DESKTOP-JP/192.168.1.31:0],
> discPort=0, order=56, intOrder=0, lastExchangeTime=1525834262800,
> loc=true,
> ver=2.3.0#20171028-sha1:8add7fd5, isClient=true]]
>
> Even I have added timeout
> too..https://apacheignite-net.readme.io/docs/clients-and-
> servers#section-managing-slow-clients.
>
> How to solve this issue?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: Locking mechanism difference between sequential get() and getAll()

2018-05-08 Thread Pavel Vinokurov
Hi,

Method getAll() obtains locks in the order provided by the keys collection.
There is no functional difference between the two methods, except that
internally getAll() obtains locks in batches as an optimization (see the
sketch below).
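A minimal sketch of the two variants inside a pessimistic repeatable-read
transaction (cache and key names are hypothetical; the cache must be
TRANSACTIONAL):

import java.util.Map;
import java.util.SortedSet;
import java.util.TreeSet;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class LockOrderExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        CacheConfiguration<String, String> ccfg = new CacheConfiguration<>("txCache");
        ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
        IgniteCache<String, String> cache = ignite.getOrCreateCache(ccfg);

        SortedSet<String> orderedKeys = new TreeSet<>();
        orderedKeys.add("k1");
        orderedKeys.add("k2");

        try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.PESSIMISTIC,
                TransactionIsolation.REPEATABLE_READ)) {
            // Variant 1: sequential gets, one lock acquisition per key.
            for (String key : orderedKeys)
                cache.get(key);

            // Variant 2: getAll(), same lock order, locks taken in batches.
            Map<String, String> values = cache.getAll(orderedKeys);

            tx.commit();
        }
    }
}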

Thanks,
Pavel




2018-05-08 8:54 GMT+03:00 kotamrajuyashasvi <kotamrajuyasha...@gmail.com>:

> Hi
>
> I'm using a Transactional cache in my project with pessimistic
> repeatable_read mode. In my logic I have an ordered set of keys which need
> to be locked. I can lock in two ways.
>
> 1. Iterate through the ordered set and call get() on each key.
> 2. use getAll() passing the ordered set of keys.
>
> Functionality wise will there be any difference between the above two
> methods ?
> Other than that what are all the differences between the two ?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: IS NULL on Numeric Query SQL Fields

2018-05-08 Thread Pavel Vinokurov
Hi,

As I understand it, you would like to have a null value in the database but
0 in the Java model.
I suppose the most proper way is to just keep the "Integer" field and modify
the getter to return 0 when the field is null (see the sketch below).
Ignite also provides the Binarylizable interface [1] for customization of
the binary serialization. With that you could customize the transformation
of the "int" field.

[1]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/binary/Binarylizable.html
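A minimal sketch of the getter-based approach, reusing the field names from
your example (the annotations assume the annotation-based SQL configuration):

import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class Account {
    @QuerySqlField
    private Integer actNo; // stays null in SQL, so "actNo IS NULL" matches

    @QuerySqlField
    private String actId;

    // The Java model sees 0 instead of null.
    public int getActNo() {
        return actNo == null ? 0 : actNo;
    }

    public void setActNo(int actNo) {
        this.actNo = actNo;
    }
}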

Thanks,
Pavel


2018-05-08 8:56 GMT+03:00 the_palakkaran <jik...@suntecsbs.com>:

> Hi,
>
> int uniqueId;
> int actNo;
> String actId;
> String somethingElse;
>
> Now when I load values to the cache, if actNo is null in DB, then it will
> be
> *loaded as 0* (zero being initial value)
>
> So if I execute the below query as SQLFieldsQuery on cache, no rows would
> be
> returned:
> select somethingElse from Account where actNo * is null * and actId = ?;
> (arguments will be passed as object array)
>
> In hibernate we had the provision to mention parameter type so that it
> would
> understand this scenario? Does ignite provide a way to resolve this ?
>
> Thanks in advance.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: Slow Ignite Cache with multi-nodes

2018-05-03 Thread Pavel Vinokurov
> From: Roman Guseinov [mailto:ro...@gromtech.ru <ro...@gromtech.ru>]
> Sent: Tuesday, April 10, 2018 1:56 PM
> To: user@ignite.apache.org
> Subject: Re: Slow Ignite Cache with multi-nodes
>
>
>
> Hi Rick,
>
>
>
> It seems that you need to increase clients number to achieve better
> performance when you increase server nodes number. You can try to put data
> from different threads as well.
>
>
>
> Also, you can set
>
> CacheConfiguration.setWriteSynchronizationMode(
> CacheWriteSynchronizationMode.PRIMARY_SYNC).
>
> The default value is FULL_SYNC which means that client node will wait for
> write or commit to complete on all participating remote nodes (primary and
>
> backup) [1].
>
>
>
> Performance Tips can be found here [2].
>
>
>
> Best Regards,
>
> Roman
>
>
>
> [1]
>
> https://apacheignite.readme.io/v1.1/docs/primary-and-
> backup-copies#section-synchronous-and-asynchronous-backups
>
> [2] https://apacheignite.readme.io/docs/performance-tips
>
>
>
>
>
>
>
> --
>
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>
> --
> 本信件可能包含工研院機密資訊,非指定之收件者,請勿使用或揭露本信件內容,並請銷毀此信件。 This email may contain
> confidential information. Please do not use or disclose it in any way and
> delete it if you are not the intended recipient.
>
>



-- 

Regards

Pavel Vinokurov


Re: ignite cluster cannot be activated - Failed to restore from a checkpoint

2018-04-30 Thread Pavel Vinokurov
SharedManager.java:863)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.
> GridDhtPartitionsExchangeFuture.distributedExchange(
> GridDhtPartitionsExchangeFuture.java:1019)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.
> GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFutur
> e.java:651)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeMana
> ger$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2279)
> ... 2 more
>
> And, I cannot do any operations.
>
> This symptom started to show when I cancelled (Ctrl+C) a service
> deployment.
> At that time, other job was writing to a cache. I just changed the sticky
> parameter of a service deployment (from false to true), and the deployment
> was too slow, so I cancelled it. And then I restarted the cluster, and the
> problem began.
>
> Is there any solution or workaround for this error like skipping the
> checkpoint restoring process, because it's ok for me to lose some recent
> cache updates.
>
> Ignite version is 2.3.0 and config is as follows.
>
> 
>
>
>
> http://www.springframework.org/schema/beans;
>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
>xsi:schemaLocation="
>http://www.springframework.org/schema/beans
>http://www.springframework.org/schema/beans/spring-beans.xsd;>
>  class="org.apache.ignite.configuration.IgniteConfiguration">
>
>
> 
>
> 
>
>
> 
> 
>
>  class="org.apache.ignite.configuration.CacheConfiguration">
>  value="valid_dup_ratio_cache_name"/>
> 
> 
> 
> 
> java.lang.String
> java.util.LinkedList
> 
> 
> 
>
>
>  class="org.apache.ignite.configuration.CacheConfiguration">
>  value="dup_ratio_hbase_read_through"/>
> 
> 
> 
> 
>  class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicy">
> 
>
> 
> 
>
> 
>  class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
> 
> 
> 
> 
> 
> 
> 
> 
>
> 
>  class="javax.cache.configuration.FactoryBuilder"
> factory-method="factoryOf">
>  value="com.naver.kweb.serp.title.ignite.read_through.
> HBaseDupRatioAdapter"/>
> 
> 
> 
> 
> 
> 
> 
>
>
> 
>  class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
> 
> 
> 
>
>  class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.
> TcpDiscoveryVmIpFinder">
> 
> 
>
>
> csb7x0876.nfra.io:47500..47509
>
> csb7x0877.nfra.io:47500..47509
>
> csb7x0878.nfra.io:47500..47509
>
> csb7x0879.nfra.io:47500..47509
>
> csb7x0880.nfra.io:47500..47509
>
> csb7x0881.nfra.io:47500..47509
>
> csb7x0882.nfra.io:47500..47509
>
> csb7x0883.nfra.io:47500..47509
>
> csb7x0884.nfra.io:47500..47509
>
> csb7x0885.nfra.io:47500..47509
> 
> 
> 
> 
> 
> 
>
>
> 
>  class="org.apache.ignite.configuration.DataStorageConfiguration">
>
>
> 
>
> 
>  class="org.apache.ignite.configuration.DataRegionConfiguration">
> 
> 
> 
>  value="#{1024L * 1024 * 1024}"/>
> 
> 
> 
>  value="/naver/ignite_storage/20180330/storage"/>
>  value="/naver/ignite_storage/20180330/wal"/>
>  value="/naver/ignite_storage/20180330/walArchive"/>
> 
> 
> 
>
>
> 
>  class="org.apache.ignite.configuration.BinaryConfiguration">
> 
> 
>  class="org.apache.ignite.binary.BinaryTypeConfiguration">
>  value="com.naver.kweb.serp.title.ignite.service.TitleMakerServiceImpl"/>
> 
> 
> 
> 
> 
> 
> 
>
> Thanks.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: How to upgrade Ignite 2.3 version to the latest version i.e. 2.4 version

2018-04-17 Thread Pavel Vinokurov
Hi,

Ignite does not support rolling upgrades.
To upgrade from 2.3 to 2.4 you have to stop the whole cluster, update the
binaries to 2.4 and restart the cluster.
Please look at the Cluster Activation and Baseline Topology page
<https://apacheignite.readme.io/docs/cluster-activation>. Baseline Topology
is the major feature introduced in the 2.4 version.

2018-04-17 19:39 GMT+03:00 siva <siva.ka...@bizruntime.com>:

> Hi,
>
> We have 2.3 version server and client node.We want to upgrade to new
> version
> ,Is  rolling upgrade support Ignite? or any other way is there to upgrade?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: Ignite spatial index behavior

2018-04-17 Thread Pavel Vinokurov
Hi,

Ignite provides envelope intersection, i.e. it searches for the intersection
of a polygon's bounding box with a certain point.
A low 'hit-ratio' should not significantly affect query performance.
Could you please share a small piece of code or a project?
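For reference, a minimal sketch of a spatial intersection query, assuming
the ignite-geospatial module and JTS are on the classpath (class and field
names are hypothetical):

import com.vividsolutions.jts.geom.Geometry;
import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class MapPoint {
    @QuerySqlField(index = true)
    private Geometry coords; // the spatial index is built over this field

    public MapPoint(Geometry coords) {
        this.coords = coords;
    }

    // Query side, with an IgniteCache<Long, MapPoint> named cache
    // (WKTReader is JTS, SqlQuery is org.apache.ignite.cache.query):
    //   Geometry point = new WKTReader().read("POINT(10 20)");
    //   SqlQuery<Long, MapPoint> qry = new SqlQuery<>(MapPoint.class, "coords && ?");
    //   cache.query(qry.setArgs(point)).getAll();
}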

2018-04-15 10:42 GMT+03:00 olg.k...@gmail.com <olg.k...@gmail.com>:

> Is there an optimization about spatial index behavior?
> I'm running a POC, trying to show Ignite performance when searching for a
> polygon (from a polygon table) which intersects with a certain point
> (randomly generated).
> I've noticed that when the 'hit-ratio' is low (no intersecting polygon) the
> query return faster.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: Slow invoke call

2018-04-10 Thread Pavel Vinokurov
Sam,

>>
Why does invoking the remote method take 50% of the time? Is it because of
concurrency, and will the entry processor execute under a lock internally?
>>
Could you please share a small reproducer project?

>> Is there any better way to deal with this usecase?
Have you tested with only on-heap memory?

Thanks,
Pavel




2018-04-06 2:39 GMT+07:00 javastuff@gmail.com <javastuff@gmail.com>:

> Thanks Pavel for reply.
>
> /"When you store entries into the off-heap, any update operation requires
> copying value from the off-heap to the heap." /
> I am updating using remote entry processor, does that also need to copy
> value to calling heap? If yes then no benefit using entry processor here, I
> can fetch and put, probably with a lock.
>
> Why does invoking the remote method take 50% of the time? Is it because of
> concurrency, and will the entry processor execute under a lock internally?
>
> Coping existing and incoming bytes, must be generating a lot for GC.
>
> Is there any better way to deal with this usecase? right now it seems
> slower
> than DB updates.
>
> Thanks,
> -Sam
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: Transaction progress, affinity, locks

2018-04-09 Thread Pavel Vinokurov
Hi Ariel,

>> But there are also non-transactional reads from one of the caches.
Any operation on a transactional cache executes within a transaction. If a
transaction isn't declared explicitly, it is created implicitly with the
default concurrency and isolation modes.


>> There will be only 2PC commit for transactions, no 1-phase commit, since
I
>> have 3 nodes, REPLICATED?
Yes 2PC commit will be used.

>>During the transaction, even for get, locks will be taken on backups
>>during the prepare phase, since it's PESSIMISTIC concurrency?
Locks will be taken during a transaction before read/write operations.


>> readFromBackup - looks like it's not going to improve transactions
>> performance since transaction coordinator still going to always send it
to
>> primary node and primary node will instruct backup to take locks?
>>* readFromBackup - still should help in non-transactional read, no? or it
>>always sent to primary node?
readFromBackup enhances performance for cache operations. Locking and
performance depend on the type of cache operation (read or write) and on
the concurrency and isolation levels.
Better performance could be achieved by using deadlock-free
optimistic/serializable transactions (see the sketch after these answers).

>>* affinity - will it help to configure it for REPLICATED caches, where
>>transaction touches all the node anyway?
I suppose that affinity doesn't affect transaction performance.

>>* If there is no timeout for transaction and one of the backup nodes is
>>segmented out, transaction seems to be blocked for FailureDetectionTimeout
>>time (worst case). But is it supposed to resume and succeed after topology
>>is recalculated to include 2 nodes only?
Yes, the transaction resumes after the topology is recalculated.
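A minimal sketch of the deadlock-free mode mentioned above, applied to the
two-cache transaction from your example (cache and key names follow the
question; the retry loop is an assumption about how conflicts would be
handled):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;
import org.apache.ignite.transactions.TransactionOptimisticException;

public class OptimisticTxExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<String, String> c1 = ignite.cache("C1");
        IgniteCache<String, String> c2 = ignite.cache("C2");

        while (true) {
            try (Transaction tx = ignite.transactions().txStart(
                    TransactionConcurrency.OPTIMISTIC,
                    TransactionIsolation.SERIALIZABLE)) {
                // v = get(C1, k1); put(C2, v, x); as in your pseudo-code.
                String v = c1.get("k1");
                c2.put(v, "x");
                tx.commit();
                break;
            }
            catch (TransactionOptimisticException e) {
                // A concurrent update invalidated the read set: retry.
            }
        }
    }
}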

2018-04-06 3:14 GMT+03:00 Ariel Tubaltsev <tubalt...@gmail.com>:

> I have 2 different caches C1, C2 on 3 nodes N1, N2, N3. I want to configure
> it in the most possibly reliable way to avoid any inconsistency and data
> loss, so both caches are REPLICATED, TRANSACTIONAL,  FULL_SYNC.
>
> Most of my operations are transactions: PESSIMISTIC, SERIALIZABLE that
> touch
> both caches, for example
> tx {
>   v= get(C1, k1);
>   put(C2, v, x);
> }
>
> But there are also non-transactional reads from one of the caches.
> All operations are performed from client (in client mode).
>
> Some questions to better map the features:
>
> * There will be only 2PC commit for transactions, no 1-phase commit, since
> I
> have 3 nodes, REPLICATED?
> * During the transaction, even for get, locks will be taken on backups
> during the prepare phase, since it's PESSIMISTIC concurrency?
> * readFromBackup - looks like it's not going to improve transactions
> performance since transaction coordinator still going to always send it to
> primary node and primary node will instruct backup to take locks?
> * readFromBackup - still should help in non-transactional read, no? or it
> always sent to primary node?
> * affinity - will it help to configure it for REPLICATED caches, where
> transaction touches all the node anyway?
> * If there is no timeout for transaction and one of the backup nodes is
> segmented out, transaction seems to be blocked for FailureDetectionTimeout
> time (worst case). But is it supposed to resume and succeed after topology
> is recalculated to include 2 nodes only?
>
> Thank you
> Ariel
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: ClusterTopologyServerNotFoundException

2018-04-05 Thread Pavel Vinokurov
Hi Venkat,

Could you set up a *consistentId* (IgniteConfiguration#setConsistentId) for
all 3 server nodes and add these ids to the baseline topology
using ignite.cluster().setBaselineTopology(nodes)?

Please setup the baseline and activate the cluster as described in
https://apacheignite.readme.io/docs/cluster-activation.
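A minimal sketch of that sequence (the consistent id value and the use of
all current server nodes for the baseline are illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class BaselineSetup {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setConsistentId("node-1"); // unique per server node

        Ignite ignite = Ignition.start(cfg);

        // Once all 3 server nodes have joined:
        ignite.cluster().active(true);
        ignite.cluster().setBaselineTopology(
            ignite.cluster().forServers().nodes());
    }
}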

Thanks,
Pavel

2018-04-01 15:58 GMT+03:00 kvenkatramtreddy <kvenkatramtre...@gmail.com>:

> Hi,
>
> Thank you very much.
>
> We are receiving this error only when a single node is running. The
> exception is gone as soon as the other nodes have started and joined the
> cluster.
>
> I have attached the log of the single node where we received the error.
>
> exception.log
> <http://apache-ignite-users.70518.x6.nabble.com/file/t1700/exception.log>
>
> Thanks & Regards,
> Venkat
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: Slow invoke call

2018-04-05 Thread Pavel Vinokurov
Hi Sam,

When you store entries off-heap, any update operation requires copying the
value from the off-heap space to the heap.
Thus frequent updates of a "heavy" entry lead to significant copying
overhead.
Have you tested with on-heap memory only?
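A minimal sketch of that on-heap test, using the Ignite 1.9-era API from
this thread (CacheMemoryMode was removed in Ignite 2.x, so this applies
only to your current version):

import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMemoryMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class OnHeapTestConfig {
    public static CacheConfiguration<String, byte[]> cacheConfig() {
        CacheConfiguration<String, byte[]> ccfg =
            new CacheConfiguration<>("bytesCache");
        ccfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
        // Keep entries on heap instead of OFFHEAP_TIERED for comparison.
        ccfg.setMemoryMode(CacheMemoryMode.ONHEAP_TIERED);
        return ccfg;
    }
}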

Thanks,
Pavel

2018-04-05 4:44 GMT+03:00 javastuff@gmail.com <javastuff@gmail.com>:

> Hi,
>
> I have a usecase where we are storing byte array into one of the OFFHEAP
> ATOMIC cache. Multiple threads keep on appending bytes to it using remote
> operation (Entry processor / invoke call).
>
> Below is the logic for remote operation.
>
> if (mutableEntry.exists()) {
> MyObject m = (MyObject)mutableEntry.getValue();
> byte[] bytes = new byte[m.getContent().length +
> newContent.getContent().length];
> System.arraycopy(m.getContent(), 0, bytes, 0,
> m.getContent().length);
> System.arraycopy(newContent.getContent(), 0, bytes,
> m.getContent().length,newContent.getContent().length);
> m.setContent(bytes);
> mutableEntry.setValue(m);
> } else {
> mutableEntry.setValue(newContent);
> }
>
> We tested by adding about 25MB of content.
>
> What we have seen is -
> 1. In 8 threads test we have seen average time for Invoke() is 36ms.
> 2. Out of total time taken to execute Invoke(), 50% time taken by above
> remote logic. Where does other 50% spend? Probably just to invoke the
> method.
>
> What can we tune here? We are considering InvokeAll(), any more ideas?
>
> If we increase this to 250MB we have seen a very slow performance and after
> hours it goes Out-of-memory, setOffHeapMaxMemory(0) does not help on 32GB
> box.
>
> We are using Ignite 1.9, Java 1.8, Parallel GC and app JVM is 4GB.
> Unfortunately, we can not upgrade Ignite version as of now.
>
> Thanks,
> -Sam
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: Determining BinaryObject field type

2018-03-28 Thread Pavel Vinokurov
Dave,

There is one way to delete the metadata.
You could find the typeId using ignite.binary().type("package.Employeee").typeId()
and remove the matching .bin files in all *binary_meta* subfolders.


Re: [2.4.0] Cluster unrecoverable after node failure

2018-03-28 Thread Pavel Vinokurov
Hi Jose,

>>
Caused by: class org.apache.ignite.spi.IgniteSpiException: Node with set up
BaselineTopology is not allowed to join cluster without one
>>

That is the expected behavior. The "crashed" (replacement) node has an empty
baseline, and nodes with an old baseline cannot connect to a cluster with an
empty baseline topology.
So the node with data should start first.

Baseline topology described in following pages:
https://apacheignite.readme.io/docs/cluster-activation#section-baseline-topology
https://cwiki.apache.org/confluence/display/IGNITE/IEP-4+Baseline+topology+for+caches
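A rough sketch of the recovery sequence this implies, run on the surviving
node (the configuration file name is hypothetical; setBaselineTopology(long)
resets the baseline to the given topology version):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class BaselineRecovery {
    public static void main(String[] args) {
        // 1. Start the surviving node that holds the data first.
        Ignite ignite = Ignition.start("node-a-config.xml");

        // 2. Activate on the old baseline, then let the fresh node join.
        ignite.cluster().active(true);

        // 3. Reset the baseline to the current topology so the replacement
        //    node becomes part of it and rebalancing starts.
        ignite.cluster().setBaselineTopology(
            ignite.cluster().topologyVersion());
    }
}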


Thanks,
Pavel

2018-03-24 17:59 GMT+03:00 joseheitor <j...@heitorprojects.com>:

> Thanks Arseny - I really appreciate your assistance!
>
> The config files that I use are included in the attached archive.
> ignite-replicated.zip
> <http://apache-ignite-users.70518.x6.nabble.com/file/
> t1652/ignite-replicated.zip>
>
> Let me know if you need anything else, or any clarification?
>
> Below (for your reference) the SQL commands that I use, to populate the
> database with test data:
>
> DROP TABLE PUBLIC.Person
> DROP TABLE PUBLIC.City
>
> CREATE TABLE PUBLIC.City (
>   id LONG PRIMARY KEY, name VARCHAR)
>   WITH "TEMPLATE=REPLICATED, BACKUPS=1, ATOMICITY=TRANSACTIONAL,
> WRITE_SYNCHRONIZATION_MODE=FULL_SYNC"
>
> CREATE TABLE PUBLIC.Person (
>   id LONG, name VARCHAR, city_id LONG, PRIMARY KEY (id, city_id))
>   WITH "TEMPLATE=REPLICATED, BACKUPS=1, ATOMICITY=TRANSACTIONAL,
> WRITE_SYNCHRONIZATION_MODE=FULL_SYNC"
>
> INSERT INTO PUBLIC.City (id, name) VALUES (1, 'Forest Hill')
> INSERT INTO PUBLIC.City (id, name) VALUES (2, 'Denver')
> INSERT INTO PUBLIC.City (id, name) VALUES (3, 'St. Petersburg')
> INSERT INTO PUBLIC.Person (id, name, city_id) VALUES (1, 'John Doe', 3)
> INSERT INTO PUBLIC.Person (id, name, city_id) VALUES (2, 'Jane Roe', 2)
> INSERT INTO PUBLIC.Person (id, name, city_id) VALUES (3, 'Mary Major', 1)
> INSERT INTO PUBLIC.Person (id, name, city_id) VALUES (4, 'Richard Miles',
> 2)
>
> SELECT p.name, c.name
> FROM PUBLIC.Person p, PUBLIC.City c
> WHERE p.city_id = c.id AND c.name = 'Denver'
>
> SELECT COUNT(*) FROM PUBLIC.Person
> SELECT COUNT(*) FROM PUBLIC.City
>
> DELETE FROM PUBLIC.Person WHERE name = 'Jane Roe'
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: Determining BinaryObject field type

2018-03-28 Thread Pavel Vinokurov
Dave,

I suppose there isn't a way to delete the schema.
You could get the meta information about binary objects using the
Ignite#binary() method.
For example: ignite.binary().type("package.Employeee").fieldTypeName("name").
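A minimal sketch of inspecting all field types of a binary type this way
(the type name is the illustrative one from this thread):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryType;

public class BinaryMetaInspector {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        BinaryType type = ignite.binary().type("package.Employeee");

        // Print the declared type of every known field.
        for (String field : type.fieldNames())
            System.out.println(field + " -> " + type.fieldTypeName(field));
    }
}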



Thanks,
Pavel

2018-03-24 1:10 GMT+03:00 Dave Harvey <dhar...@jobcase.com>:

> Once a BinaryObjectSchema is created, it is not possible to change the type
> of a field for a known field name.
>
> My question is whether there is any way to determine the type of that field
> in the Schema.
>
> We are hitting a case where the way we get the data out of a different
> database returns a TIMESTAMP, but our binary object wants a DATE. In
> this test case, I could figure out that, but in the general case,  I have a
> BinaryObject type name, and a field name, and an exception if I try to put
> the wrong type in that field.
>
> The hokey general solutions I have come up with are:
> 1) Parse the exception message to see what type it wants
> 2) Have a list of conversions to try for the source type, and step through
> them on each exception.
> 3) Get the field from an existing binary object of that type, and use the
> class of the result.   But there is the chicken/egg problem.
>
> I have found that I can create a cache on a cluster with persistence, with
> some type definition, then delete that cache, the cluster will remember the
> BinaryObjectSchema for that type, and refuse to allow me to change the
> field's type.  If I  don't remember the field's type, how can I
> build the binary object?
>
> Is there any way to delete the schema without nuking some of the
> binary-meta/marshaller files when the cluster is down?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: ContinuousQuery - SqlFieldsQuery as InitialQuery

2018-03-23 Thread Pavel Vinokurov
This approach makes sense. I don't see any downside except the case where
data is put into the cache after the SqlFieldsQuery has executed but before
the ContinuousQuery is registered; those updates would be missed.
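A minimal sketch of that two-step approach (value type and cache name are
hypothetical):

import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class SnapshotThenListen {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Long, String> cache = ignite.cache("someCache");

        // Step 1: snapshot via a regular SqlFieldsQuery.
        SqlFieldsQuery snapshot =
            new SqlFieldsQuery("select _key, _val from String");
        try (QueryCursor<List<?>> cur = cache.query(snapshot)) {
            for (List<?> row : cur)
                System.out.println(row.get(0) + " -> " + row.get(1));
        }

        // Step 2: continuous query (no initial query) for later updates.
        ContinuousQuery<Long, String> qry = new ContinuousQuery<>();
        qry.setLocalListener(events -> events.forEach(
            e -> System.out.println(e.getKey() + " -> " + e.getValue())));

        // Keep this cursor open (close it on shutdown) to keep receiving
        // updates.
        QueryCursor<?> updates = cache.query(qry);
    }
}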

2018-03-22 18:40 GMT+03:00 au.fp2018 <au.fp2...@gmail.com>:

> Thanks Pavel
>
> I was able to get a Java version of the query client implemented using your
> suggestions. But I am having a hard time convincing the scala compiler to
> type check. Even if I get it to work the amount of type coercing I would
> need to get it to work is extremely hairy.
>
> Before your reply I had given up on the possibility of using SqlFieldsQuery
> as the initialQuery and ended up making the snapshot query as a regular
> SqlFieldsQuery and then issue a ContinuousQuery (with no initial query) to
> receive updates.
>
> Is there any downside to this approach?
>
> Thanks,
> Andre
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: [2.4.0] Cluster unrecoverable after node failure

2018-03-22 Thread Pavel Vinokurov
Hi,

>>
  4 - simulate power-failure ... all components down; Node-B with
unrecoverable damage (hardware)
>>

How do you emulate a failure with unrecoverable damage?

2018-03-22 14:34 GMT+03:00 joseheitor <j...@heitorprojects.com>:

> I do apologise for the long-winded post earlier (with error stack-traces,
> etc.).
>
> And hope that someone can assist me with this issue - it is a basic,
> real-world scenario that tests the fundamental integrity of the clustering
> system!
>
> Am I perhaps missing something? Or mismanaging the cluster in such an
> occurrence? What is the 'best-practice' to recover from such a scenario?
>
> Here is the condensed version of the problem, which is hopefully easier to
> read (without the stack-traces):
>
> *Scenario: Secondary (Node-B) Failure*
>
> Environment:
>   - 2 nodes (Node-A, Node-B)
>   - Ignite native persistence enabled
>   - static IP discovery - both node IPs listed
>   - JDBC (Client) - DBeaver
>   - manual cluster activation
>
> Steps:
>   1 - start  both nodes with no data
>   2 - activate cluster on same machine as Node-A
>   3 - load data via SQL JDBC (...WITH template=replicated, backups=1)
>   4 - simulate power-failure ... all components down; Node-B with
> unrecoverable damage (hardware)
>   5 - start new Node-B instance (with no data)
>   6 - attempt to start Node-A (undamaged, with good data)...
>
> PROBLEM: Unable to start Node-A. (Error in previous post below...)
>
> In an attempt to recover the cluster and data:
>   7 - stop Node-B
>   8 - start Node-A - first
>   9 - start Node-B (starts)
>   10 - attempt to activate cluster
>
> PROBLEM: Cluster activation operation Freezes.
>
> Additional notes:
> - All data is lost and cannot be recovered.
> - This did not occur with Ignite 2.3.0, although data consistency was
> unpredictable but would sometimes align after some period of time.
> - If Node-A (with data) is started before Node-B (new instance, empty
> data),
> both nodes start, but cluster fails to activate. (See below post for
> details
> of the error observed on Node-A ouput)
>
> ...
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: Performance of Ignite integrating with PostgreSQL

2018-03-22 Thread Pavel Vinokurov
t;+ "(:val1),(:val2),(:val3),(:
> val4),(:val5),(:val6),(:val7),(:val8),(:val9),(:val10),"
>
>+ "(:val11),(:val12),(:val13),(:
> val14),(:val15),(:val16),(:val17),(:val18),(:val19),(:val20),"
>
>+ "(:val21),(:val22),(:val23),(:
> val24),(:val25),(:val26),(:val27),(:val28),(:val29),(:val30),"
>
>+ "(:val31),(:val32),(:val33),(:
> val34),(:val35),(:val36),(:val37),(:val38),(:val39),(:val40),"
>
>+ "(:val41),(:val42),(:val43),(:
> val44),(:val45),(:val46),(:val47),(:val48),(:val49),(:val50),"
>
>+ "(:val51),(:val52),(:val53),(:
> val54),(:val55),(:val56),(:val57),(:val58),(:val59),(:val60),"
>
>+ "(:val61),(:val62),(:val63),(:
> val64),(:val65),(:val66),(:val67),(:val68),(:val69),(:val70),"
>
>+ "(:val71),(:val72),(:val73),(:
> val74),(:val75),(:val76),(:val77),(:val78),(:val79),(:val80),"
>
>+ "(:val81),(:val82),(:val83),(:
> val84),(:val85),(:val86),(:val87),(:val88),(:val89),(:val90),"
>
>+ "(:val91),(:val92),(:val93),(:
> val94),(:val95),(:val96),(:val97),(:val98),(:val99),(:val100);";
>
>
>
> jdbcTemplate.update(sqlString, parameterMap);
>
> }
>
>
>
>
>
>
>
> *From:* Vinokurov Pavel [mailto:vinokurov.pa...@gmail.com]
> *Sent:* Wednesday, March 14, 2018 5:42 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Performance of Ignite integrating with PostgreSQL
>
>
>
> Hi,
>
>
>
> You could try to use igniteCache.putAll to write in batches of 1000
> entries.
>
> Use the following statement in the PostgresDBStore#writeAll method to put data into
> the database:
>
> String sqlString = "INSERT INTO test(val) VALUES (:val1),(:val2),(:val3);";
>
>
>
>
>
> 2018-03-14 11:58 GMT+03:00 <itria40...@itri.org.tw>:
>
> Hi,
>
> I try to use Ignite to integrate with PostgreSQL.
>
> And I use “atop” to monitor the data write to PostgreSQL.
>
> Then observed that the writing speed is 1 MB per second.
>
> This performance is not really good. Below is my configuration and code.
> Please help me to improve it.
>
> Thanks.
>
>
>
> There is my cache configuration:
>
>   
>
>  "testCache"/>
>
>  value="PARTITIONED"/>
>
>  value=" ATOMIC"/>
>
>  value="PRIMARY"/>
>
>  
>
>  value="true"/>
>
>   value="true"/>
>
>
>
>   value="64"/>
>
>   value="131072" />
>
>   value="131072" />
>
>
>
>  
>
>  
>
>     
>
>  
>
>  
>
>  
>
>  
>
> 
>
>     
>
>     
>
>     
>
>     
>
>
> java.lang.String
>
>
> java.lang.String
>
>
>
> 
>
> 
>
>
>
>
>
> Main function:
>
> Ignite ignite = Ignition.start("PATH/example-cache.xml");
>
> IgniteCache<String, String> igniteCache = 
> ignite.getOrCreateCache("testCache
> ");
>
> int seqint = 0;
>
> while(true)
>
> {
>
> igniteCache.put(Integer.toString(seqint),
> "valueString");
>
> seqint++;
>
> }
>
>
>
>
>
> Write behind to PostgreSQL through JDBC:
>
> @Override
>
> public void write(Cache.Entry entry)
> throws CacheWriterException {
>
> Map<String, Object> parameterMap = new HashMap<>();
>
> parameterMap.put(“val”, entry.getValue());
>
> String sqlString = "INSERT INTO test(val) VALUES (:val);";
>
> jdbcTemplate.update(sqlString, parameterMap);
>
> }
>
>
>
>
>
> --
> 本信件可能包含工研院機密資訊,非指定之收件者,請勿使用或揭露本信件內容,並請銷毀此信件。 This email may contain
> confidential information. Please do not use or disclose it in any way and
> delete it if you are not the intended recipient.
>
>
>
>
>
> --
>
> Regards
>
> Pavel Vinokurov
>
>
>
> --
> 本信件可能包含工研院機密資訊,非指定之收件者,請勿使用或揭露本信件內容,並請銷毀此信件。 This email may contain
> confidential information. Please do not use or disclose it in any way and
> delete it if you are not the intended recipient.
>
>
>
>
>
> --
>
> Regards
>
> Pavel Vinokurov
>
>
>
>
>
> --
>
> Regards
>
> Pavel Vinokurov
>
>
>
>
>
> --
>
> Regards
>
> Pavel Vinokurov
>
>
>
>
>
> --
>
> Regards
>
> Pavel Vinokurov
>
>
> --
> 本信件可能包含工研院機密資訊,非指定之收件者,請勿使用或揭露本信件內容,並請銷毀此信件。 This email may contain
> confidential information. Please do not use or disclose it in any way and
> delete it if you are not the intended recipient.
>



-- 

Regards

Pavel Vinokurov


Re: ContinuousQuery - SqlFieldsQuery as InitialQuery

2018-03-22 Thread Pavel Vinokurov
Hi,

Yes, continuous queries are only predicate based and could be described as a
"continuous listener", but for the initial query, which runs only once, we
could use SqlFieldsQuery.
Let's try the following code (the raw cast works around the generics
mismatch between ContinuousQuery and SqlFieldsQuery):
ContinuousQuery<Long, SomeValue> q = new ContinuousQuery<>();
((ContinuousQuery) q).setInitialQuery(new SqlFieldsQuery("select _val from table"));
q.setLocalListener(System.out::println);
QueryCursor cursor = binaryCache.query(q);
cursor.forEach(o -> {
    SomeValue v = (SomeValue) o;
});

Thanks,
Pavel

2018-03-21 17:44 GMT+03:00 au.fp2018 <au.fp2...@gmail.com>:

> Thanks for your answer Pavel.
>
> But that does not help. 'q' is already a ContinuousQuery am I not sure what
> good the additional cast does. Anyway I tried that and also explicitly
> casting the initialQuery, didn't work. As I was suggesting in my first post
> the types don't line-up.
>
> In my search I came across this on Stack Overflow:
> https://stackoverflow.com/a/43076601
>
> According to that reply "Continuous queries are only predicate based, SQL
> is
> not supported here.". But the documentation says SQL is supported. This is
> very confusing and misleading.
>
> If only predicate based queries are supported, then how can one take
> advantage of the indexes in Ignite.
> I would prefer SCAN queries, but without indexes my initial query is
> unacceptably slow.
>
> Thanks in Advance.
>
> Thanks,
> Andre
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Re: Get all data of one cache in Ignite without creating a new ignite node

2018-03-22 Thread Pavel Vinokurov
Hi Rick,

You could use the REST API documented at
https://apacheignite.readme.io/docs/rest-api (see the sketch below).
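A minimal sketch of fetching entries over HTTP without starting a node. It
assumes the ignite-rest-http module is enabled on the server, listening on
the default port 8080, and reuses the cache and key names from your example:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLEncoder;

public class RestGetAll {
    public static void main(String[] args) throws Exception {
        String url = "http://localhost:8080/ignite?cmd=getall"
            + "&k1=" + URLEncoder.encode("key 0", "UTF-8")
            + "&k2=" + URLEncoder.encode("key 1", "UTF-8")
            + "&cacheName=igniteCache";

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(url).openStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null)
                System.out.println(line); // JSON with the requested entries
        }
    }
}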

Thanks,
Pavel

2018-03-22 9:44 GMT+03:00 <linr...@itri.org.tw>:

> Dear all,
>
>
>
> My settings of running environment is as:
>
> OS: Ubuntn 14.04.3 LTS
>
> JAVA: JDK 1.7
>
> Ignite: 2.4.0
>
>
>
> I would like to get all data of one cache in Ignite
>
>
>
> Without creating a new ignite node, I do not know how to get data by
> another executive java program (without the Ignition.getOrStart(cfg)).
>
>
>
> The setting of one listening ignite node is:
>
> " IgniteConfiguration cfg  = new IgniteConfiguration();
>
>  cfg.setClientMode(*false*);  //server
>
>   *CacheConfiguration* cacheConf = *new* *CacheConfiguration*();
>
> cacheConf.setName("igniteCache");
>
> *cacheConf**.setIndexedTypes(String.**class**, String.**class**)*;
>
> cacheConf.setCacheMode(CacheMode.*REPLICATED*);
>
> cacheConf.setAtomicityMode(CacheAtomicityMode.*ATOMIC*);
>
> cfg.setCacheConfiguration(cacheConf);
>
> Ignite igniteNode =
>
> *IgniteCache* cache = *igniteNode**.getOrCreateCache(**cacheConf**)*;
>
>
> *int* datasize = 10;
>
> *for* (*int* i = 0; i < datasize; i++) {
>
> cache.put("key " + Integer.*toString*(i), Integer.*toString*(i));
>
> }"
>
>
>
> *In the Ignite document, there are many ways to get data. However, it
> seems to need to create a new ignite node (server/client) to get data.*
>
>
>
> *I know that Restful API is a way to get data. But, it is not flexible to
> use over the java program.*
>
>
>
> If you have any idea about the issue, I am looking forward to hearing from
> you.
>
>
>
> Thanks
>
>
>
> Rick
>
>
>
>
> --
> 本信件可能包含工研院機密資訊,非指定之收件者,請勿使用或揭露本信件內容,並請銷毀此信件。 This email may contain
> confidential information. Please do not use or disclose it in any way and
> delete it if you are not the intended recipient.
>



-- 

Regards

Pavel Vinokurov