Re: Inserting date into ignite with spark jdbc

2020-11-03 Thread Andrei Aleksandrov

Hi,

It would be great if you could share the reproducer.

BR,
Andrei

11/3/2020 10:17 AM, Humphrey wrote:

Let me summarize here. Working with
IgniteDataFrameSettings.OPTION_CONFIG_FILE() and a configPath
seems to work fine.

But when I'm using the JDBC thin client connection, (like connecting to a
database through JDBC Driver) it was giving me the error:
*java.sql.SQLException: No PRIMARY KEY defined for CREATE TABLE* even when
supplying the option *OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS*.

Do you want me to make a reproducible example? Or is there already a ticket?

Humphrey


aealexsandrov wrote

Denis,

I can check it out soon. The mentioned problem can probably only be
related to JDBC data frames. In this case, I will create a JIRA ticket.
But as far as I know, using OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS should
work the same way as I showed in my example.

BR,
Andrei

10/30/2020 6:01 PM, Denis Magda wrote:

Andrey,

Do we need to update our docs? It feels like the docs miss these
details or have an outdated example.

-
Denis


On Fri, Oct 30, 2020 at 7:03 AM Andrei Aleksandrov wrote:

 Hi,

 Here's an example with correct syntax that should work fine:

 DataFrameWriter<Row> df = resultDF.write()
     .format(IgniteDataFrameSettings.FORMAT_IGNITE())
     .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE(), configPath)
     .option(IgniteDataFrameSettings.OPTION_TABLE(), "Person")
     .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS(), "id, city_id")
     .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PARAMETERS(), "template=partitioned,backups=1")
     .mode(Append);

 Please let me know if something is wrong here.

 BR,
 Andrei

 10/30/2020 2:20 AM, Humphrey wrote:

 Hello guys, this question has been asked on Stack Overflow
 (https://stackoverflow.com/questions/64554684/how-to-create-a-table-with-primary-key-using-jdbc-spark-connector-to-ignite)
 but no answer has been provided yet.

 I'm facing the same issue (trying to insert data in ignite using
 spark.jdbc):
 Exception in thread "main" java.sql.SQLException: No PRIMARY KEY defined for CREATE TABLE
     at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:1004)


 Code:
  println("-- writing using jdbc --")
  val prop = Properties()
  prop["driver"] = "org.apache.ignite.IgniteJdbcThinDriver"

  df.write().apply {
      mode(SaveMode.Overwrite)
      format("jdbc")
      option("url", "jdbc:ignite:thin://127.0.0.1")
      option("dbtable", "comments")
      option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS(), "last_name")
  }.save()

 The last option doesn't seem to work/help.










Re: Too long JVM pause out of nowhere leading into shutdowns of ignite-servers

2020-11-02 Thread Andrei Aleksandrov

Hi,

Long JVM pauses can lead to different problems, but in your case, I see 
some network problems that lead to a segmentation of some nodes:


Connection to Zookeeper server is lost, local node SEGMENTED.
What you can do to avoid the current problem:

1) You should find out the reason for the pause. 22 seconds is a huge
pause that can cause some operations to fail. It might be GC issues, but
I am assuming you are using VMs and these pauses might just be VM pauses.

2) You can turn off your WAL MMAP:

IGNITE_WAL_MMAP=false

3) You can increase the client failure detection and failure detection timeouts (see the sketch below):

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/IgniteConfiguration.html#setFailureDetectionTimeout-long-
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/IgniteSpiAdapter.html#clientFailureDetectionTimeout--

4) You can reduce the communication timeouts (because I see that your
communication connections cannot be established):



<bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
    ...
</bean>


However, as I mentioned earlier, you must first figure out the reason 
for the pause of your virtual machine.
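
For illustration, here is a minimal programmatic sketch of points 3 and 4; the timeout values below are illustrative only and must be tuned for your environment:

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

IgniteConfiguration cfg = new IgniteConfiguration();

// 3) Raise failure detection timeouts so that long pauses do not segment nodes.
cfg.setFailureDetectionTimeout(30_000);
cfg.setClientFailureDetectionTimeout(60_000);

// 4) Lower communication connect timeouts so that dead connections fail fast.
TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
commSpi.setConnectTimeout(3_000);
commSpi.setMaxConnectTimeout(10_000);
commSpi.setReconnectCount(3);
cfg.setCommunicationSpi(commSpi);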


BR,
Andrei

10/30/2020 5:56 PM, VincentCE wrote:

Hello!

In our project we are currently using Ignite 2.8.1 with ZooKeeper discovery.
During the last couple of days we were facing shutdowns of some of our
ignite-server nodes.

Please find the logs below:

1) Why do such long JVM/GC pauses occur, although the preceding metrics in
the log do not indicate them, IMHO?

2) We have the following timeouts set for the server nodes. Which of them
would influence the handling after such long GC pauses, in order to avoid a
restart of the node?

Thanks in advance for your help!

Configs:

(the server timeout configuration XML was not preserved in the archive)

LOGs:

[12:46:21,142][INFO][grid-timeout-worker-#35][IgniteKernal] FreeList
[name=Default_Region##FreeList, buckets=256, dataPages=287347,
reusePages=3169711]
[12:47:21,146][INFO][grid-timeout-worker-#35][IgniteKernal]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
 ^-- Node [id=3f58f4f5, uptime=9 days, 20:56:18.016]
 ^-- H/N/C [hosts=96, nodes=96, CPUs=1082]
 ^-- CPU [cur=-100%, avg=-100%, GC=0%]
 ^-- PageMemory [pages=16626106]
 ^-- Heap [used=20318MB, free=44.88%, comm=36864MB]
 ^-- Off-heap [used=65326MB, free=9.12%, comm=71760MB]
 ^--   sysMemPlc region [used=0MB, free=99.21%, comm=40MB]
 ^--   TxLog region [used=0MB, free=100%, comm=40MB]
 ^--   Default_Region region [used=65325MB, free=8.87%, comm=71680MB]
 ^-- Outbound messages queue [size=0]
 ^-- Public thread pool [active=0, idle=0, qSize=0]
 ^-- System thread pool [active=0, idle=14, qSize=0]
[12:47:21,146][INFO][grid-timeout-worker-#35][IgniteKernal] FreeList
[name=Default_Region##FreeList, buckets=256, dataPages=287347,
reusePages=3169711]
[12:48:21,154][INFO][grid-timeout-worker-#35][IgniteKernal]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
 ^-- Node [id=3f58f4f5, uptime=9 days, 20:57:18.025]
 ^-- H/N/C [hosts=96, nodes=96, CPUs=1082]
 ^-- CPU [cur=-100%, avg=-100%, GC=0%]
 ^-- PageMemory [pages=16626106]
 ^-- Heap [used=13057MB, free=64.58%, comm=36864MB]
 ^-- Off-heap [used=65326MB, free=9.12%, comm=71760MB]
 ^--   sysMemPlc region [used=0MB, free=99.21%, comm=40MB]
 ^--   TxLog region [used=0MB, free=100%, comm=40MB]
 ^--   Default_Region region [used=65325MB, free=8.87%, comm=71680MB]
 ^-- Outbound messages queue [size=0]
 ^-- Public thread pool [active=0, idle=0, qSize=0]
 ^-- System thread pool [active=0, idle=14, qSize=0]
[12:48:21,154][INFO][grid-timeout-worker-#35][IgniteKernal] FreeList
[name=Default_Region##FreeList, buckets=256, dataPages=287347,
reusePages=3169711]
[12:49:21,162][INFO][grid-timeout-worker-#35][IgniteKernal]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
 ^-- Node [id=3f58f4f5, uptime=9 days, 20:58:18.029]
 ^-- H/N/C [hosts=96, nodes=96, CPUs=1082]
 ^-- CPU [cur=-100%, avg=-100%, GC=0%]
 ^-- PageMemory [pages=16626106]
 ^-- Heap [used=8768MB, free=76.21%, comm=36864MB]
 ^-- Off-heap [used=65326MB, free=9.12%, comm=71760MB]
 ^--   sysMemPlc region [used=0MB, free=99.21%, comm=40MB]
 ^--   TxLog region [used=0MB, free=100%, comm=40MB]
 ^--   Default_Region region [used=65325MB, free=8.87%, comm=71680MB]
 ^-- Outbound messages queue [size=0]
 ^-- Public thread pool [active=0, idle=14, qSize=0]
 ^-- System thread pool [active=0, idle=14, qSize=0]
[12:49:21,162][INFO][grid-timeout-worker-#35][IgniteKernal] FreeList
[name=Default_Region##FreeList, buckets=256, dataPages=287347,
reusePages=3169711]

Re: Execution of local SqlFieldsQuery on client node disallowed

2020-11-02 Thread Andrei Aleksandrov

Hello,

You can use events:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/events/CacheRebalancingEvent.html

But in this case, you should wait until the rebalancing of all caches is 
completed.
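
For example, a minimal sketch of a local listener for these events (this assumes the rebalance event types are enabled via IgniteConfiguration.setIncludeEventTypes() and that ignite is your node instance):

import org.apache.ignite.events.CacheRebalancingEvent;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

IgnitePredicate<Event> lsnr = evt -> {
    CacheRebalancingEvent rebEvt = (CacheRebalancingEvent) evt;
    System.out.println("Rebalance event [cache=" + rebEvt.cacheName() +
        ", type=" + rebEvt.name() + ']');
    return true; // return true to keep listening
};

ignite.events().localListen(lsnr,
    EventType.EVT_CACHE_REBALANCE_STARTED, EventType.EVT_CACHE_REBALANCE_STOPPED);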


Also you can use the web console tool:

https://apacheignite-tools.readme.io/v2.8.0/docs

And the last option is to use the following property:

https://www.gridgain.com/sdk/ee/latest/javadoc/org/apache/ignite/IgniteSystemProperties.html#IGNITE_WAIT_FOR_BACKUPS_ON_SHUTDOWN

With this setting, a node that is stopped gracefully will not shut down
until backups of its data are present on other nodes in the cluster.
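
(The property is typically passed as a JVM argument when starting the node, e.g. -DIGNITE_WAIT_FOR_BACKUPS_ON_SHUTDOWN=true.)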


BR,
Andrew

11/1/2020 5:19 PM, narges saleh wrote:
Hi Denis -- How would I detect that rebalancing started, or finished? 
Do I need to listen to the rebalancing events and abort the task in 
case a rebalancing has started? thanks.


On Fri, Oct 30, 2020 at 4:28 PM Denis Magda wrote:


Hi Narges,

Then just send a task to the required node. If the cluster topology
changes while the task is running, you can re-submit it to ensure
the result is accurate.

-
Denis


On Fri, Oct 30, 2020 at 2:16 PM narges saleh <snarges...@gmail.com> wrote:

Hi Denis,

My problem with using affinity call/run is that I have to have
the key in order to run it. I just want to run a function on
the data on the current node, without knowing the key. Is
there any way to do this and also guard against partition
rebalancing?

thanks

On Tue, Oct 27, 2020 at 10:31 AM narges saleh <snarges...@gmail.com> wrote:

Thanks Ilya, Denis for the feedback.

On Mon, Oct 26, 2020 at 1:44 PM Denis Magda <dma...@apache.org> wrote:

Narges,

Also, keep in mind that if a local query is executed
over a partitioned table and it happens that
partition rebalancing starts, the local query might
return a wrong result (if the partitions the query was
executed over were rebalanced to another node during
the query execution time). To address this:

 1. Execute the local query inside of an affinityCall/Run function
    (https://ignite.apache.org/docs/latest/distributed-computing/collocated-computations#colocating-by-key).
    Those functions don't let partitions be evicted until the function
    execution completes (see the sketch below).

 2. Don't use the local queries; let the Ignite SQL engine run standard
    queries and take care of possible optimizations.
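
For illustration, a minimal sketch of option 1 using the partition-based affinityRun overload, so no key is needed; "myCache" and the partition id are placeholders:

import java.util.Collections;
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.lang.IgniteRunnable;
import org.apache.ignite.resources.IgniteInstanceResource;

int partId = 0; // the partition to process

ignite.compute().affinityRun(Collections.singletonList("myCache"), partId, new IgniteRunnable() {
    @IgniteInstanceResource
    private Ignite localIgnite;

    @Override public void run() {
        // The partition is reserved while this closure runs, so rebalancing
        // cannot move it away mid-scan.
        localIgnite.cache("myCache")
            .query(new ScanQuery<>().setPartition(partId))
            .forEach(e -> System.out.println(e.getKey() + " -> " + e.getValue()));
    }
});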


-
Denis


On Mon, Oct 26, 2020 at 8:50 AM Ilya Kasnacheev <ilya.kasnach...@gmail.com> wrote:

Hello!

You are using an Ignite Thick Client driver. As
its name implies, it will start a local client
node and then connect to it, without the option of
doing local queries.

You need to use Ignite Thin JDBC driver:
jdbc:ignite:thin://
Then you can do local queries.

Regards,
-- 
Ilya Kasnacheev



Sat, Oct 24, 2020 at 16:04, narges saleh <snarges...@gmail.com> wrote:

Hello Ilya
Yes, it happens all the time. It seems Ignite forces the
"client" establishing the JDBC connection into client mode,
even if I set client=false. The sample code and config are
attached. The question is how do I force JDBC connections
from a server node.
thanks.

On Fri, Oct 23, 2020 at 10:31 AM Ilya Kasnacheev <ilya.kasnach...@gmail.com> wrote:

Hello!

Does this happen every time? If so, do you
have a reproducer for the issue?

Regards,
-- 
Ilya Kasnacheev



Fri, Oct 23, 2020 at 13:06, narges saleh <snarges...@gmail.com> wrote:

Denis -- Just checked. I do specify my
services to be deployed on server
nodes only. Why would ignite think
that I am running my code on a client

Re: Inserting date into ignite with spark jdbc

2020-10-30 Thread Andrei Aleksandrov

Denis,

I can check it out soon. The mentioned problem can probably only be
related to JDBC data frames. In this case, I will create a JIRA ticket.
But as far as I know, using OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS should
work the same way as I showed in my example.


BR,
Andrei

10/30/2020 6:01 PM, Denis Magda wrote:

Andrey,

Do we need to update our docs? It feels like the docs miss these 
details or have an outdated example.


-
Denis


On Fri, Oct 30, 2020 at 7:03 AM Andrei Aleksandrov <aealexsand...@gmail.com> wrote:


Hi,

Here's an example with correct syntax that should work fine:

DataFrameWriter<Row> df = resultDF.write()
    .format(IgniteDataFrameSettings.FORMAT_IGNITE())
    .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE(), configPath)
    .option(IgniteDataFrameSettings.OPTION_TABLE(), "Person")
    .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS(), "id, city_id")
    .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PARAMETERS(), "template=partitioned,backups=1")
    .mode(Append);

Please let me know if something is wrong here.

BR,
Andrei

10/30/2020 2:20 AM, Humphrey wrote:

Hello guys, this question has been asked on Stack Overflow
(https://stackoverflow.com/questions/64554684/how-to-create-a-table-with-primary-key-using-jdbc-spark-connector-to-ignite)
but no answer has been provided yet.


I'm facing the same issue (trying to insert data in ignite using
spark.jdbc):
Exception in thread "main" java.sql.SQLException: No PRIMARY KEY defined for CREATE TABLE
    at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:1004)

Code:
 println("-- writing using jdbc --")
 val prop = Properties()
 prop["driver"] = "org.apache.ignite.IgniteJdbcThinDriver"

 df.write().apply {
     mode(SaveMode.Overwrite)
     format("jdbc")
     option("url", "jdbc:ignite:thin://127.0.0.1")
     option("dbtable", "comments")
     option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS(), "last_name")
 }.save()

The last option doesn't seem to work/help.







Re: Ignite Cluster Issue on 2.7.6

2020-10-30 Thread Andrei Aleksandrov

Hi,

Did you remove the code with ignite.cluster().active(true)?

However, yes, all of your data nodes should be in the baseline topology.
Could you collect logs from your servers?


BR,
Andrei

10/30/2020 2:28 PM, Gurmehar Kalra wrote:


Hi,

I tried the changes suggested by you, waited for the nodes, and then tried
to start the cluster, but only one node joins the cluster; the other node
does not participate in the cluster.

Do I have to add all nodes into the BLT?

Regards,

Gurmehar Singh

From: Andrei Aleksandrov
Sent: 29 October 2020 20:11
To: user@ignite.apache.org
Subject: Re: Ignite Cluster Issue on 2.7.6



Hi,

Do you use a cluster with persistence? After the first activation, all
your data will be located on the first activated node.


In this case, you should also track your baseline.

https://www.gridgain.com/docs/latest/developers-guide/baseline-topology


Baseline topology is the subset of nodes where your cache data is located.

The recommendations are the following:

1) You should activate the cluster only when all server nodes have been started.
2) If the topology changes, you must either restore the failed nodes or
reset the baseline topology to trigger partition reassignment and
rebalancing.
3) If some new node should contain the cache data, then you should add
this node to the baseline topology:


using java code:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCluster.html#setBaselineTopology-java.util.Collection-

using utility tool:

https://www.gridgain.com/docs/latest/administrators-guide/control-script#adding-nodes-to-baseline-topology


4) In case some node from the baseline can't be started (e.g. because its
data on disk was destroyed), it should be removed from the baseline:


https://www.gridgain.com/docs/latest/administrators-guide/control-script#removing-nodes-from-baseline-topology


If you are not using persistence, please provide additional information
about what "data is being added to the cache but not available to any of
the modules" means:


1) How you access data
2) What do you see in the logs

BR,
Andrei

10/29/2020 4:19 PM, Gurmehar Kalra wrote:

Hi,

I have two modules (Web and Engine) and want to share data between the
modules, but when I run Web and Engine together, data is added
to the cache but is not available to either of the modules.
Below is my Ignite config, which is the same in both modules:

config.setActiveOnStart(true);

config.setAutoActivationEnabled(true);

config.setIgniteHome(propertyReader.getProperty("spring.ignite.storage.path"));

config.setFailureHandler(new StopNodeOrHaltFailureHandler());

config.setDataStorageConfiguration(getDataStorageConfiguration());

config.setGridLogger(new JavaLogger(java.util.logging.Logger.getLogger(LOG.getClass().getCanonicalName())));

Ignite ignite = Ignition.start(config);

ignite.cluster().active

Re: IgniteSpiOperationTimeoutException: Operation timed out [timeoutStrategy= ExponentialBackoffTimeoutStrategy

2020-10-30 Thread Andrei Aleksandrov

Hi,

Often, problems with establishing a communication connection can be 
solved with the following configuration:


1) You may have multiple network interfaces, and the wrong one could be
used. This can be solved by changing the communication SPI timeouts:



<bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
    ...
</bean>


Otherwise, you can wait more than 10 minutes when trying to create a
connection (due to the ExponentialBackoffTimeoutStrategy).


2) Some operations in the cluster require direct communication with clients
over the communication SPI. In case you have communication problems, but the
nodes are still reachable through the discovery SPI, such operations may hang.
To avoid this, please set the following property:


https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteSystemProperties.html#IGNITE_ENABLE_FORCIBLE_NODE_KILL
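
(The property is typically passed as a JVM argument on the server nodes, e.g. -DIGNITE_ENABLE_FORCIBLE_NODE_KILL=true.)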

If these recommendations do not help, then yes, as Ilya said, we require
a reproducer on your part.


BR,
Andrei

10/30/2020 2:20 PM, Ilya Kasnacheev wrote:

Hello!

Do you have a reproducer for this behaviour that I could run and see 
it failing?


Regards,
--
Ilya Kasnacheev


Tue, Oct 27, 2020 at 22:02, VeenaMithare wrote:


Hi Ilya,

The node communication issue is because one of the nodes is being
restarted, and not due to a network failure. The original issue is as below.

Our setup: Servers - a 3-node cluster. Reader clients: wait for an update
on an entry of a cache (around 20 of them). Writer client: 1.

If one of the reader clients restarts while the writer is writing into the
entry of the cache, the server attempts to send the update to the failed
client's local listener. It keeps attempting to communicate with the failed
client (the client's continuous query local listener?) until it times out
as per connTimeoutStrategy=ExponentialBackoffTimeoutStrategy. (Please find
the snippet of the exception below. The complete log is attached as an
attachment.)

This delays the completion of the transaction that was started by the
writer client. Is there any way the writer client could complete the
transaction without getting impacted by the reader client restarts?

regards, Veena.




Re: Ignite instances frequently failing - BUG: soft lockup - CPU#1 stuck

2020-10-30 Thread Andrei Aleksandrov

Hello,

Too little information has been provided on your part:

1) Could you provide the screenshot from the web console at this time?
2) Could you collect Ignite logs during this period?
3) What tool shows that the processors are frozen? Have you checked 
other tools?


BR,
Andrew

10/30/2020 3:07 PM, bbellrose wrote:

Ignite instances keep failing. The server indicates the CPU is stuck;
however, monitoring shows very little CPU usage. This happens almost every
day on different nodes of the cluster.







Re: Inserting date into ignite with spark jdbc

2020-10-30 Thread Andrei Aleksandrov

Hi,

Here's an example with correct syntax that should work fine:

DataFrameWriter<Row> df = resultDF.write()
    .format(IgniteDataFrameSettings.FORMAT_IGNITE())
    .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE(), configPath)
    .option(IgniteDataFrameSettings.OPTION_TABLE(), "Person")
    .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS(), "id, city_id")
    .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PARAMETERS(), "template=partitioned,backups=1")
    .mode(Append);


Please let me know if something is wrong here.

BR,
Andrei

10/30/2020 2:20 AM, Humphrey wrote:

Hello guys, this question has been asked on Stack Overflow,
but no answer has been provided yet.

I'm facing the same issue (trying to insert data in ignite using
spark.jdbc):
Exception in thread "main" java.sql.SQLException: No PRIMARY KEY defined for CREATE TABLE
    at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:1004)

Code:
 println("-- writing using jdbc --")
 val prop = Properties()
 prop["driver"] = "org.apache.ignite.IgniteJdbcThinDriver"

 df.write().apply {
     mode(SaveMode.Overwrite)
     format("jdbc")
     option("url", "jdbc:ignite:thin://127.0.0.1")
     option("dbtable", "comments")
     option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS(), "last_name")
 }.save()

The last option doesn't seem to work/help.





Re: Ignite Cluster Issue on 2.7.6

2020-10-29 Thread Andrei Aleksandrov

Hi,

Do you use a cluster with persistence? After the first activation, all
your data will be located on the first activated node.


In this case, you should also track your baseline.

https://www.gridgain.com/docs/latest/developers-guide/baseline-topology

Baseline topology is the subset of nodes where your cache data is located.

The recommendations are the following:

1) You should activate the cluster only when all server nodes have been started.
2) If the topology changes, you must either restore the failed nodes or
reset the baseline topology to trigger partition reassignment and
rebalancing.
3) If some new node should contain the cache data, then you should add
this node to the baseline topology:


using java code:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCluster.html#setBaselineTopology-java.util.Collection-

using utility tool:

https://www.gridgain.com/docs/latest/administrators-guide/control-script#adding-nodes-to-baseline-topology

4) In case some node from the baseline can't be started (e.g. because its
data on disk was destroyed), it should be removed from the baseline:


https://www.gridgain.com/docs/latest/administrators-guide/control-script#removing-nodes-from-baseline-topology
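
For illustration, a minimal Java sketch of adjusting the baseline (this assumes the cluster is active; the call triggers partition reassignment and rebalancing):

// Reset the baseline to the current set of server nodes.
ignite.cluster().setBaselineTopology(ignite.cluster().forServers().nodes());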

If you are not using persistence, please provide additional information
about what "data is being added to the cache but not available to any of
the modules" means:


1) How you access data
2) What do you see in the logs

BR,
Andrei

10/29/2020 4:19 PM, Gurmehar Kalra wrote:


Hi,

I have two modules (Web and Engine) and want to share data between the
modules, but when I run Web and Engine together, data is added to the
cache but is not available to either of the modules.

Below is my Ignite config, which is the same in both modules:

config.setActiveOnStart(true);

config.setAutoActivationEnabled(true);

config.setIgniteHome(propertyReader.getProperty("spring.ignite.storage.path"));

config.setFailureHandler(new StopNodeOrHaltFailureHandler());

config.setDataStorageConfiguration(getDataStorageConfiguration());

config.setGridLogger(new JavaLogger(java.util.logging.Logger.getLogger(LOG.getClass().getCanonicalName())));

Ignite ignite = Ignition.start(config);

ignite.cluster().active(true);

All caches created have the properties below:

cache.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_ASYNC);

cache.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);

cache.setCacheMode(CacheMode.REPLICATED);

cache.setGroupName("EngineGroup");

Both Modules  are running on  IP List : 
127.0.0.1:47501,127.0.0.1:47502,127.0.0.1:47503,127.0.0.1:47504


Please suggest..

Regards,

Gurmehar Singh





Re: Large Heap with lots of BinaryMetaDataHolders

2020-10-29 Thread Andrei Aleksandrov

Hello,

Let's start from the very beginning.

1) Could you please share the server and client config?
2) Java code of what you have in your client launcher application

I will try to investigate your case.

BR,
Andrew

10/28/2020 7:19 PM, ssansoy wrote:

Hi, could anyone please help me understand why the heap of a client app has
such large amounts of data pertaining to binary metadata?

Here it takes up 30 MB, but in our UAT environment we have approx 50 caches.
The binary metadata that gets added to the client's heap equates to around
220 MB (even for a very simple app that doesn't do any subscriptions - it
just calls Ignition.start() to connect to the cluster).

It seems metadata is kept on the client for every cache whether the client
app needs it or not. Is there any way to tune this at all - e.g. knowing
that a particular client is only interested in a particular cache?

Screenshot: (not preserved in the archive)

Thanks





Re: High number of Exception Unexpected response ID with setTimeout

2020-09-28 Thread Andrei Aleksandrov

Hi,

It would be great to take a look at the full logs that contain these errors.

BR,
Andrei

9/28/2020 12:35 PM, AravindJP wrote:

Thank you Stepan for your reply. The problem is that the parent thread
calling the Ignite cache get has a timeout of 120 ms, hence we allow a max
of 100 ms for every cache read. We are seeing that during non-peak time,
80% of cache reads are < 20 ms. But during peak load the application is
crashing, with some threads going beyond 400 to 500 ms. That's the reason
why we had to limit the timeout to 50. But why does such an exception occur
from the Ignite side?





Re: OutOfMemoryException with Persistence and Eviction Enabled

2020-09-25 Thread Andrei Aleksandrov

Hi,

It looks like I found the issue:

https://issues.apache.org/jira/browse/IGNITE-8917

When you use put or removeAll on a persistent cache with more data than
the data region size, Ignite throws an IgniteOutOfMemoryException. The data
streamer doesn't look to be affected by this ticket.

The workaround is pretty simple: use a bigger data region.
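
For illustration, a minimal sketch of configuring a bigger data region; the sizes are illustrative, and the region name is taken from the reported exception:

import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();

DataRegionConfiguration region = new DataRegionConfiguration();
region.setName("customformulacalcrts");
region.setMaxSize(1024L * 1024 * 1024); // 1 GB instead of 190.7 MiB
region.setPersistenceEnabled(true);

DataStorageConfiguration storage = new DataStorageConfiguration();
storage.setDataRegionConfigurations(region);
cfg.setDataStorageConfiguration(storage);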

BR,
Andrei

9/24/2020 11:44 PM, Mitchell Rathbun (BLOOMBERG/ 731 LEX) wrote:
I tried doubling it from 200 MB to 400 MB. My initial test worked, but 
as I increased the number of fields that I was writing per entry, the 
same issue occurred again. So it seems to just increase the capacity 
of what can be written, not actually prevent the exception from 
occurring. I guess my main question is why is it possible for Ignite 
to get an OOME when persistence and eviction are enabled? It seems 
like if there are a lot of writes, performance should degrade as the 
in memory cache evicts members of the cache, but no exceptions should 
occur. The error is always "Failed to find a page for eviction", which 
doesn't really make sense when eviction is enabled. What are the 
internal structures that Ignite holds in Off-heap memory?
Also, why isn't this an issue when using IgniteDataStreamer if the 
issue has to do with space for internal structures? Wouldn't Ignite 
need the same internal structures for either case?

From: user@ignite.apache.org At: 09/24/20 11:42:21
To: user@ignite.apache.org  Subject: 
Re: OutOfMemoryException with Persistence and Eviction Enabled


Hi, Did you try to increase the DataRegion size a little bit? It
looks like 190 MB isn't enough for some internal structures that
Ignite stores in OFF-HEAP except the data. I suggest you increase
the data region size to for example 512 MB - 1024 MB and take a
look at how it will work. If you still will see the issue then I
guess we should create the ticket: 1)Collect the logs 2)Provide
the java code example 3)Provide the configuration of the nodes
After that, we can take a look more deeply into it and if it's an
issue then file JIRA. BR, Andrei

9/23/2020 7:36 PM, Mitchell Rathbun (BLOOMBERG/ 731 LEX) wrote:

Here is the exception:
Sep 22, 2020 7:58:22 PM java.util.logging.LogManager$RootLogger log
SEVERE: Critical system error detected. Will be handled
accordingly to configured handler
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
super=AbstractFailureHandler
[ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED,
SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext
[type=CRITICAL_ERROR, err=class
o.a.i.i.mem.IgniteOutOfMemoryException: Out of memory in data
region [name=customformulacalcrts, initSize=190.7 MiB,
maxSize=190.7 MiB, persistenceEnabled=true] Try the following:
^-- Increase maximum off-heap memory size
(DataRegionConfiguration.maxSize)
^-- Enable Ignite persistence
(DataRegionConfiguration.persistenceEnabled)
^-- Enable eviction or expiration policies]]
class org.apache.ignite.internal.mem.IgniteOutOfMemoryException:
Out of memory in data region [name=customformulacalcrts,
initSize=190.7 MiB, maxSize=190.7 MiB, persistenceEnabled=true]
Try the following:
^-- Increase maximum off-heap memory size
(DataRegionConfiguration.maxSize)
^-- Enable Ignite persistence
(DataRegionConfiguration.persistenceEnabled)
^-- Enable eviction or expiration policies
at

org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.allocatePage(PageMemoryImpl.java:607)
at

org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList.allocateDataPage(AbstractFreeList.java:464)
at

org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList.insertDataRow(AbstractFreeList.java:491)
at

org.apache.ignite.internal.processors.cache.persistence.freelist.CacheFreeListImpl.insertDataRow(CacheFreeListImpl.java:59)
at

org.apache.ignite.internal.processors.cache.persistence.freelist.CacheFreeListImpl.insertDataRow(CacheFreeListImpl.java:35)
at

org.apache.ignite.internal.processors.cache.persistence.RowStore.addRow(RowStore.java:103)
at

org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.createRow(IgniteCacheOffheapManagerImpl.java:1691)
at

org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.createRow(GridCacheOffheapManager.java:1910)
at

org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5701)
at

org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5643)
at

org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:3719)
at


Re: IgniteCache.size() is hanging

2020-09-24 Thread Andrei Aleksandrov

Hi,

Most likely some of the nodes went offline and are trying to connect again.
Probably you had some network issues. I think I would see this and other
information in the logs. Can you provide them?


BR,
Andrei

9/24/2020 6:54 PM, Alan Ward wrote:
The only log I see is from one of the server nodes, which is spewing 
at a very high rate:


[grid-nio-worker-tcp-comm-...][TcpCommunicationSpi] Accepted incoming 
communication connection [locAddr=/:47100, rmtAddr=:


Note that each time the log is printed, I see a different value for rmtAddr.


Also note that I only see these logs when I try to run
ignitevisorcmd's "cache" command. When I run the Java application that
calls IgniteCache.size(), I don't see any such logs. But in both
cases, the result is that the operation just hangs.


The cluster is active and I am able to insert data (albeit at a pretty 
slow rate), so it's not like things are completely non-functional. 
It's really confusing :\


On Thu, Sep 24, 2020 at 11:04 AM aealexsandrov <aealexsand...@gmail.com> wrote:


Hi,

Can you please provide the full server logs?

BR,
Andrei






Re: OutOfMemoryException with Persistence and Eviction Enabled

2020-09-24 Thread Andrei Aleksandrov

Hi,

Did you try to increase the DataRegion size a little bit? It looks like
190 MB isn't enough for some internal structures that Ignite stores
off-heap in addition to the data.


I suggest you increase the data region size to, for example, 512 MB - 1024
MB and take a look at how it works.


If you still see the issue, then I guess we should create a ticket:

1) Collect the logs
2) Provide a Java code example
3) Provide the configuration of the nodes

After that, we can take a deeper look into it and, if it's an issue,
file a JIRA.


BR,
Andrei

9/23/2020 7:36 PM, Mitchell Rathbun (BLOOMBERG/ 731 LEX) wrote:

Here is the exception:
Sep 22, 2020 7:58:22 PM java.util.logging.LogManager$RootLogger log
SEVERE: Critical system error detected. Will be handled accordingly to 
configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, 
timeout=0, super=AbstractFailureHandler 
[ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED, 
SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext 
[type=CRITICAL_ERROR, err=class 
o.a.i.i.mem.IgniteOutOfMemoryException: Out of memory in data region 
[name=customformulacalcrts, initSize=190.7 MiB, maxSize=190.7 MiB, 
persistenceEnabled=true] Try the following:
^-- Increase maximum off-heap memory size 
(DataRegionConfiguration.maxSize)

^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
^-- Enable eviction or expiration policies]]
class org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Out 
of memory in data region [name=customformulacalcrts, initSize=190.7 
MiB, maxSize=190.7 MiB, persistenceEnabled=true] Try the following:
^-- Increase maximum off-heap memory size 
(DataRegionConfiguration.maxSize)

^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
^-- Enable eviction or expiration policies
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.allocatePage(PageMemoryImpl.java:607)
at 
org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList.allocateDataPage(AbstractFreeList.java:464)
at 
org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList.insertDataRow(AbstractFreeList.java:491)
at 
org.apache.ignite.internal.processors.cache.persistence.freelist.CacheFreeListImpl.insertDataRow(CacheFreeListImpl.java:59)
at 
org.apache.ignite.internal.processors.cache.persistence.freelist.CacheFreeListImpl.insertDataRow(CacheFreeListImpl.java:35)
at 
org.apache.ignite.internal.processors.cache.persistence.RowStore.addRow(RowStore.java:103)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.createRow(IgniteCacheOffheapManagerImpl.java:1691)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.createRow(GridCacheOffheapManager.java:1910)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5701)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5643)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:3719)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.access$5900(BPlusTree.java:3613)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:1895)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:1872)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1779)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke0(IgniteCacheOffheapManagerImpl.java:1638)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1621)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.invoke(GridCacheOffheapManager.java:1935)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:428)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:4248)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:4226)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdateLocal(GridCacheMapEntry.java:2106)
at 
org.apache.ignite.internal.processors.cache.local.atomic.GridLocalAtomicCache.updateAllInternal(GridLocalAtomicCache.java:929)
at 
org.apache.ignite.internal.processors.cache.local.atomic.GridLocalAtomicCache.access$100(GridLocalAtomicCache.java:86)
at 
org.apache.ignite.internal.processors.cache.local.atomic.GridLocalAtomicCache$6.call(GridLocalAtomicCache.java:776)
at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6817)
at 

Re: Performance on Windows vs Linux

2020-07-23 Thread Andrei Aleksandrov
Got it. If everything is the same, then it looks like I can't suggest
anything else to you here, but as far as I know most users run their
solutions on Linux. It may be related to some performance benefits of the
Linux OS, but I don't think this difference should be significant.


7/23/2020 7:34 PM, njcstreet wrote:

Thanks. Not so easy to describe the PoC due to confidentiality. But it
involves writing a lot of data as fast as possible at the start of the day,
with persistence enabled, then with incremental updates throughout the day,
and with many user queries on top through SQL (sorry I know that is probably
not that helpful).

I have 6 machines all with decent specification, SSD storage and 10gb
network connectivity. I was just wondering if there is a particular benefit
to deploying one OS over another. If there isn’t much in it, I will go with
the one I am familiar with.

Regards,

Nigel





Re: Performance on Windows vs Linux

2020-07-23 Thread Andrei Aleksandrov

Hi,

I guess that you should take a look from another point of view. You have
two environments that should be compared to each other. Take a look at the
following things:


1) Disk speed and size
2) Network latency
3) CPU and RAM capacities

I guess that these things will be more important than the operating
system used.


However, can you describe your PoC details? It would probably help us
advise you further.


BR,
Andrei

7/23/2020 12:45 PM, njcstreet wrote:

Hi,

I am about to start a proof of concept on Ignite and we have the option of
deploying either on Windows Server 2016 or Red Hat Linux 7. I know that
Ignite can be deployed on both, but is there reason to pick one over the
other?

Is performance better in a particular environment? We are using native
persistence - I think that Direct IO can be enabled, but only on Linux?

Thanks.





Re: 2.8.1 : Ignite Security : Cache_Put event generated from a remote_client user action has subject uuid of Node that executes the request

2020-07-23 Thread Andrei Aleksandrov

Hi Veena,

Indeed, it looks like the current problem wasn't solved. It seems there
are not enough people interested in this fix. However, Ignite is an
open-source community: you can make a patch for yourself or even
contribute it to the community.

Unfortunately, I don't think that somebody on the user mailing list can
help here. You can try to ask one more time on the developer mailing list.

Also, you can try to investigate some third-party security plugins if
this is important to you.


BR,
Andrei

7/22/2020 4:17 PM, VeenaMithare wrote:

Hi Team,

1. I noticed that this issue (
https://issues.apache.org/jira/browse/IGNITE-12781) is not resolved in
2.8.1.

Could you advise how we can get audit information if a cache record
modification is done in DBeaver and the cache_put event contains the node id
instead of the remote_client subject id?

Please note this is a blocker issue for us to use Apache Ignite , since we
use dbeaver to update records sometimes.
If this is not resolved, could we kindly ask this to be included in the next
release.

2. Even if the cache_put event did contain the remote_client user id , how
are we supposed to fetch it from the auditstoragespi ?

The below link mentions
http://apache-ignite-users.70518.x6.nabble.com/JDBC-thin-client-incorrect-security-context-td31354.html

public class EventStorageSpi extends IgniteSpiAdapter implements EventStorageSpi {
    @LoggerResource
    private IgniteLogger log;

    @Override
    public <T extends Event> Collection<T> localEvents(IgnitePredicate<T> p) {
        return null;
    }

    @Override
    public void record(Event evt) throws IgniteSpiException {
        if (evt.type() == EVT_MANAGEMENT_TASK_STARTED) {
            TaskEvent taskEvent = (TaskEvent) evt;

            SecuritySubject subj = taskEvent.subjectId() != null
                ? getSpiContext().authenticatedSubject(taskEvent.subjectId())
                : null;

            log.info("Management task started: [" +
                "name=" + taskEvent.taskName() + ", " +
                "eventNode=" + taskEvent.node() + ", " +
                "timestamp=" + taskEvent.timestamp() + ", " +
                "info=" + taskEvent.message() + ", " +
                "subjectId=" + taskEvent.subjectId() + ", " +
                "secureSubject=" + subj +
                "]");
        }
    }

    @Override
    public void spiStart(@Nullable String igniteInstanceName) throws IgniteSpiException {
        /* No-op. */
    }

    @Override
    public void spiStop() throws IgniteSpiException {
        /* No-op. */
    }
}

IgniteSpiContext exposes authenticatedSubject which according to some
discussions gets the subject *only for node* . (
http://apache-ignite-developers.2346864.n4.nabble.com/Security-Subject-of-thin-client-on-remote-nodes-td46029.html#a46412
)

securityContext(uuid) was added to the GridSecurityProcessor to get the
security context of the thin client. However, this is not exposed via the
IgniteSpiContext.


3. The workaround I did was as follows. Please let me know if you see any
concerns on this approach -
a. Add the remoteclientsubject into the authorizationcontext of the
authenticationcontext in the authenticate method of the securityprocessor.

b. This authorizationcontext is now put in a threadlocal variable ( Check
the class AuthorizationContext )
private static ThreadLocal<AuthorizationContext> actx = new ThreadLocal<>();

c. The following has been done in the storagespi when a change is made in
the dbeaver,
c1. capture the EVT_TX_STARTED in the storage spi. The thread that generates
this event contains the subject in its threadlocal authorizationcontext.
Store this in a cache that holds the mapping transaction id to security
subject.

c2. capture the cache_put event and link the transaction id in the cache_put
event to the transaction id in the EVT_TX_STARTED and get the subject by
this mapping.

c3. The transactionid in cache_put and the transactionid in EVT_TX_STARTED
could be same, in which case it is a direct mapping

c4. The transactionid in cache_put and the transactionid in EVT_TX_STARTED
could be different, in which case it is a case of finding the nearxid of the
transactionid in the cacheput event. And then find the security subject of
the nearxid


regards,
Veena.







Re: third-party persistence and junction table

2020-07-23 Thread Andrei Aleksandrov

Hi,

Unfortunately, Ignite doesn't support this kind of relation out of the
box. Ignite just translates the statement to the third-party data storage
that is used as the cache store.


It's expected that inserts and updates will be rejected if they break
some constraints.


BR,
Andrei
7/21/2020 11:16 AM, Bastien Durel wrote:

Hello,

I have a junction table in my model, and used the web console to
generate ignite config and classes from my SQL database

-> There is a table user with id (long) and some data
-> There is a table role with id (long) and some data
-> There is a table user_role with user_id (fk) and role_id (fk)

Reading cache from table works, I can query ignite with jdbc and I get
my relations as expected.

But if I want to add a new relation, the query :
insert into "UserRoleCache".user_role(USER_ID, ROLE_ID) values(6003, 2)
is translated into this one, sent to postgresql :
UPDATE public.user_role SET  WHERE (user_id=$1 AND role_id=$2)

Which obviously is rejected.

The web console generated a cache for this table, with UserRole
& UserRoleKey types, which each contains userId and roleId Long's.

Is there a better (correct) way to handle these many-to-many relations
in ignite (backed by RDBMS) ?

Regards,



Re: Cache query exception when using generic type: class java.util.ArrayList cannot be cast to

2020-07-23 Thread Andrei Aleksandrov

Hi,

You can put different types of objects in your cache because of the
specifics of its implementation. But this possibility can break your
ScanQuery, because you expect to see only StationDto objects in the cache.


I guess that previously you put an ArrayList inside the cache and then
you put a StationDto.


Please note that if you are going to change the value type of the cache,
then you must destroy it and re-create it with a new configuration (see
the sketch below).


I guess values of different types in the same cache are the reason for
your issue.
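
For illustration, a minimal sketch of re-creating the cache with explicit types (the cache name here is a placeholder; StationDto is the value class from the question):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.configuration.CacheConfiguration;

// Destroy the old cache together with its mixed-type contents...
ignite.destroyCache("projectCache");

// ...then create it again with the intended key/value types.
CacheConfiguration<String, StationDto> ccfg = new CacheConfiguration<>("projectCache");
ccfg.setIndexedTypes(String.class, StationDto.class);
IgniteCache<String, StationDto> cache = ignite.createCache(ccfg);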


BR,
Andrei

7/16/2020 4:41 PM, xingjl6280 wrote:

hi team,

Please kindly advise.
Below is my code and exception.
Btw, if I use ScanQuery and List, no error.

Something wrong with the classloader? The normal cache put and get work
well for my class; data can be deserialized to my class automatically.


thank you


My code:
***
cache.put(ProjectCacheConst.STATION_PREFIX+"1", new StationDto());
cache.put(ProjectCacheConst.STATION_PREFIX+"2", new StationDto());
cache.put(ProjectCacheConst.STATION_PREFIX+"3", new StationDto());

ScanQuery<String, StationDto> scanQuery = new ScanQuery<>(
    (k, v) -> k.startsWith(ProjectCacheConst.STATION_PREFIX) && nonNull(v));

List<StationDto> list = getCache(projectCode).query(scanQuery,
    Cache.Entry::getValue).getAll();
***

Exception:

org.apache.ignite.IgniteException: class java.util.ArrayList cannot be cast
to class com.hh.sd.rtms.h_dto.map.StationDto (java.util.ArrayList is in
module java.base of loader 'bootstrap'; com.hh.sd.rtms.h_dto.map.StationDto
is in unnamed module of loader
org.apache.catalina.loader.ParallelWebappClassLoader @597bc1c6)
at
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager$InternalScanFilter.apply(GridCacheQueryManager.java:3232)
~[ignite-core-2.8.1.jar:2.8.1]
at
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager$ScanQueryIterator.advance(GridCacheQueryManager.java:3108)
~[ignite-core-2.8.1.jar:2.8.1]
at
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager$ScanQueryIterator.onHasNext(GridCacheQueryManager.java:2997)
~[ignite-core-2.8.1.jar:2.8.1]
at
org.apache.ignite.internal.util.GridCloseableIteratorAdapter.hasNextX(GridCloseableIteratorAdapter.java:53)
~[ignite-core-2.8.1.jar:2.8.1]
at
org.apache.ignite.internal.util.lang.GridIteratorAdapter.hasNext(GridIteratorAdapter.java:45)
~[ignite-core-2.8.1.jar:2.8.1]
at
org.apache.ignite.internal.processors.cache.QueryCursorImpl.getAll(QueryCursorImpl.java:123)
~[ignite-core-2.8.1.jar:2.8.1]
at
com.hh.sd.rtms.f_data_service.ProjectCacheServiceBean.getAllStations(ProjectCacheServiceBean.java:166)
~[rtms-core-0.1-SNAPSHOT.jar:na]
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
Method) ~[na:na]
at
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
~[na:na]
at
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
at
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
~[spring-aop-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at
org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
~[spring-aop-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
~[spring-aop-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at
org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:88)
~[spring-aop-5.2.3.RELEASE.jar:5.2.3.RELEASE]







Re: cache.getAsync() blocks if cluster is not activated.

2020-07-23 Thread Andrei Aleksandrov

Hi,

I don't think that it should hang, because no cache operations are allowed
when the cluster isn't activated. It should throw some kind of
CacheException.

Is it possible to prepare a reproducer or unit test? Otherwise, please
provide some details:

1) What Ignite version was used?
2) Can you please share the server and cache configuration?

BR,
Andrei
7/15/2020 9:57 PM, John Smith wrote:

Hi, testing some failover scenarios etc...

When we call cache.getAsync() and the state of the cluster is not
active, it seems to block.

I implemented a cache repository as follows, using Vert.x. It
seems to block at cacheOperation.apply(cache).

So when I call myRepo.get(myKey), which underneath applies the
cache.getAsync() function, it blocks.


public class IgniteCacheRepository<K, V> implements CacheRepository<K, V> {
    public final long DEFAULT_OPERATION_TIMEOUT = 1000;
    private final TimeUnit DEFAULT_TIMEOUT_UNIT = TimeUnit.MILLISECONDS;

    private Vertx vertx;
    private IgniteCache<K, V> cache;

    public IgniteCacheRepository(Vertx vertx, IgniteCache<K, V> cache) {
        this.vertx = vertx;
        this.cache = cache;
    }

    @Override
    public Future<Void> put(K key, V value) {
        return executeAsync(cache -> cache.putAsync(key, value), DEFAULT_OPERATION_TIMEOUT, DEFAULT_TIMEOUT_UNIT);
    }

    @Override
    public Future<V> get(K key) {
        return executeAsync(cache -> cache.getAsync(key), DEFAULT_OPERATION_TIMEOUT, DEFAULT_TIMEOUT_UNIT);
    }

    @Override
    public <T> Future<T> invoke(K key, EntryProcessor<K, V, T> processor, Object... arguments) {
        return executeAsync(cache -> cache.invokeAsync(key, processor, arguments), DEFAULT_OPERATION_TIMEOUT, DEFAULT_TIMEOUT_UNIT);
    }

    @Override
    public <T> T cache() {
        return (T) cache;
    }

    /**
     * Adapt Ignite async operations to Vert.x futures.
     *
     * @param cacheOperation The Ignite operation to execute asynchronously.
     * @return The value from the cache operation.
     */
    private <T> Future<T> executeAsync(Function<IgniteCache<K, V>, IgniteFuture<T>> cacheOperation, long timeout, TimeUnit unit) {
        Future<T> future = Future.future();
        try {
            IgniteFuture<T> value = cacheOperation.apply(cache);
            value.listenAsync(result -> {
                try {
                    future.complete(result.get(timeout, unit));
                } catch (Exception ex) {
                    future.fail(ex);
                }
            }, VertxIgniteExecutorAdapter.getOrCreate(vertx.getOrCreateContext()));
        } catch (Exception ex) {
            // Catch RuntimeExceptions that can be thrown by the Ignite cache.
            future.fail(ex);
        }
        return future;
    }
}






Re: Ignite node log file setup

2020-06-10 Thread Andrei Aleksandrov

Hi,

Can you please attach the whole log file?

BR,
Andrei

6/9/2020 5:14 AM, kay wrote:

Hello!

I start up

sh ./ignite.sh -J-DgridName=testGridName -v ./config/config-cache.xml

and in config-cache.xml:

(the XML snippet setting igniteInstanceName was stripped in the archive)

but the server start failed.

Is it not proper to set igniteInstanceName?

The log is here:
class org.apache.ignite.IgniteException: Failed to start manager:
GridManagerAdapter [enabled=true,
name=org.apache.ignite.internal.managers.communication.GridIoManager]
  at
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:1067)
  at org.apache.ignite.Ignition.start(Ignition.java:349)
  at
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:300)
  Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start
manager: GridManagerAdapter [enabled=true,
name=org.apache.ignite.internal.managers.communication.GridIoManager]
  at
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1965)
  at
org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1173)
  at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
  at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1703)
  at
org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1117)
  at
org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1035)
  at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:921)
  at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:820)
  at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:690)
  at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:659)
  at org.apache.ignite.Ignition.start(Ignition.java:346)
  ... 1 more
  Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start
SPI: TcpCommunicationSpi [connectGate=null,
connPlc=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$FirstConnectionPolicy@6622fc65,
chConnPlc=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$4@299321e2,
enableForcibleNodeKill=false, enableTroubleshootingLog=false,
locAddr=42.1.188.128, locHost=intdev01/42.1.188.128, locPort=48722,
locPortRange=1, shmemPort=-1, directBuf=true, directSndBuf=false,
idleConnTimeout=60, connTimeout=5000, maxConnTimeout=60,
reconCnt=10, sockSndBuf=32768, sockRcvBuf=32768, msgQueueLimit=0,
slowClientQueueLimit=0, nioSrvr=GridNioServer [selectorSpins=0,
filterChain=FilterChain[filters=[GridNioCodecFilter
[parser=org.apache.ignite.internal.util.nio.GridDirectParser@2f17e30d,
directMode=true], GridConnectionBytesVerifyFilter], closed=false,
directBuf=true, tcpNoDelay=true, sockSndBuf=32768, sockRcvBuf=32768,
writeTimeout=2000, idleTimeout=60, skipWrite=false, skipRead=false,
locAddr=intdev01/42.1.188.128:48722, order=LITTLE_ENDIAN, sndQueueLimit=0,
directMode=true,
mreg=org.apache.ignite.internal.processors.metric.MetricRegistry@71cf1b07,
rcvdBytesCntMetric=org.apache.ignite.internal.processors.metric.impl.LongAdderMetric@a9be6fa5,
sentBytesCntMetric=org.apache.ignite.internal.processors.metric.impl.LongAdderMetric@489b09ce,
outboundMessagesQueueSizeMetric=org.apache.ignite.internal.processors.metric.impl.LongAdderMetric@69a257d1,
sslFilter=null, msgQueueLsnr=null, readerMoveCnt=0, writerMoveCnt=0,
readWriteSelectorsAssign=false], shmemSrv=null, usePairedConnections=false,
connectionsPerNode=1, tcpNoDelay=true, filterReachableAddresses=false,
ackSndThreshold=32, unackedMsgsBufSize=0, sockWriteTimeout=2000,
boundTcpPort=48722, boundTcpShmemPort=-1, selectorsCnt=8, selectorSpins=0,
addrRslvr=null,
ctxInitLatch=java.util.concurrent.CountDownLatch@181e731e[Count = 1],
stopping=false,
metricsLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationMetricsListener@19648c40]
  at
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:300)
  at
org.apache.ignite.internal.managers.communication.GridIoManager.start(GridIoManager.java:435)
  at
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1960)
  ... 11 more
  Caused by: class org.apache.ignite.spi.IgniteSpiException: Failed to
register SPI MBean: null
  at
org.apache.ignite.spi.IgniteSpiAdapter.registerMBean(IgniteSpiAdapter.java:421)
  at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.spiStart(TcpCommunicationSpi.java:2397)
  at
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
  ... 13 more
  Caused by: javax.management.MalformedObjectNameException: Invalid character
':' in value part of property
  at javax.management.ObjectName.construct(ObjectName.java:618)
  at javax.management.ObjectName.<init>(ObjectName.java:1382)
  at
org.apache.ignite.internal.util.IgniteUtils.makeMBeanName(IgniteUtils.java:4719)
  

Re: Cache_Put event generated from a remote_client user action has subject uuid of Node that executes the request sometimes

2020-06-10 Thread Andrei Aleksandrov

Hi,

Your approach looks correct to me.

An IgniteCache can hold all the
information about started transactions and users.


Using an EventStorageSpi is a good way to handle the events.

BR,
Andrei

6/9/2020 10:06 AM, VeenaMithare пишет:

Cache_Put event generated from a remote_client user action has subject uuid
of Node that executes the request sometimes.
The Jira IGNITE-12781 was created by me for this. Some related conversation
on this could be found at
  (
http://apache-ignite-developers.2346864.n4.nabble.com/Security-Subject-of-thin-client-on-remote-nodes-td46029.html#a46406
and the last few comments on this post :
http://apache-ignite-developers.2346864.n4.nabble.com/JDBC-thin-client-incorrect-security-context-td45929.html)

To tackle the issue till this jira is fixed I have used the approach as
below .
Kindly confirm if you see any concerns with this :

1.If the cache_put event holds the subject id of the remoteclient, then
fetch it using getSpiContext().authenticatedSubject(uuid ) method. ( This in
turn will check the AuthenticationContext.context() and match the subjectId
in of the event with the one in the AuthenticationContext.context() )
2.If it holds the subjectId of the node instead of the remoteclient( In this
case, the subject returned by point 1 will be null ) -

1.Create a cache( transactionIdToSubjectCache) that holds xid vs security
subject information where xid is the id of the transaction started event.
The subject Id on this event always holds the remote client id for cache put
events generated on dbeaver.
2.When a cacheput event is sent to the storage spi - match the xid as
follows
a.Get the subject from transactionIdToSubjectCache using the xid.
b.If the above is null, get the originating xid of the event xid and get the
subject using the originating xid.



  


I am able to get the subject using this approach- could you kindly verify if
I am missing anything.

Here is a pseudo code :

public class AuditSpi extends IgniteSpiAdapter implements EventStorageSpi {
    private IgniteCache<IgniteUuid, SecuritySubject> transactionIdSubjectMapCache;

    private Ignite ignite;

    @Override
    public void record(Event evt) throws IgniteSpiException {
        assert evt != null;

        ignite = Ignition.ignite(igniteInstanceName);
        transactionIdSubjectMapCache = ignite.cache("transactionIdSubjectMapCache");

        if (evt instanceof TransactionStateChangedEvent && evt.type() == EventType.EVT_TX_STARTED) {
            // Populate the transactionIdSubjectMapCache for events generated from
            // DBeaver. These always contain the remote_client subject id.
            if (AuthorizationContext.context() != null) {
                transactionIdSubjectMapCache.put(
                    ((TransactionStateChangedEvent) evt).tx().xid(),
                    ((ProjectAuthorizationContext) AuthorizationContext.context()).subject());
            }

            return;
        }

        if (evt instanceof CacheEvent) {
            SecuritySubject subj = getSpiContext().authenticatedSubject(((CacheEvent) evt).subjectId());

            if (subj == null) {
                subj = getSecuritySubjectFromTransactionMap((CacheEvent) evt);

                // More logic to store it in the audit cache here.
            }
        }
    }

    private SecuritySubject getSecuritySubjectFromTransactionMap(CacheEvent evt) {
        SecuritySubject subj = transactionIdSubjectMapCache.get(evt.xid());

        if (subj == null) {
            IgniteTxManager tm = ((IgniteEx) ignite).context().cache().context().tm();

            for (IgniteInternalTx transaction : tm.activeTransactions()) {
                if (transaction.xid().equals(evt.xid()) && transaction.nearXidVersion() != null) {
                    subj = transactionIdSubjectMapCache.get(transaction.nearXidVersion().asGridUuid());
                }
            }
        }

        return subj;
    }

    // Remaining EventStorageSpi methods (localEvents, spiStart, spiStop) omitted.
}

  


regards,

Veena.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite.cache.loadcache.Does this method do Increamental Load?

2020-03-26 Thread Andrei Aleksandrov

Hi,

loadCache will load all the keys and values from the third-party store
into Ignite. It can be used for initial loading and for restoring the
Ignite cache.


There are no incremental updates.

If you are going to do updates in the third-party store without using
the Ignite API, then you should also apply the same updates to Ignite.


However, you can use the read-through and write-through properties. They can
help you make your updates in the 3rd-party store through the Ignite API:


https://apacheignite.readme.io/docs/3rd-party-store#section-read-through-and-write-through
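
For reference, a minimal sketch of enabling both properties (the cache
name, the value type and the PersonCacheStore class are hypothetical -
the store must be your own CacheStore implementation):

import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("personCache");

// The store that talks to the third-party database.
cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PersonCacheStore.class));

cfg.setReadThrough(true);  // Load missing keys from the store on cache misses.
cfg.setWriteThrough(true); // Propagate every cache update back to the store.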

BR,
Andrei

3/23/2020 9:34 PM, nithin91 пишет:

Hi

I am trying to load the data into ignite cache using JDBC Pojo Store method
ignite.cache("cacheName").loadCache(null).I have used this method and got
the following results for the following scenarios.

Scenario 1: Trying to load the same key which is already available in the cache

In this case, the value part corresponding to the key is not updated in the
cache based on the latest available record.

Scenario 2: Loading a key which is not present in the cache

In this case, it appends the new key-value pair to the cache and
preserves the old data.


But my doubt is why, in scenario 1, it is not updating the value
corresponding to the key when I am trying to load the same key.

Does this method do an incremental load?
Is this the expected behavior, or do I need to set any additional property in
the bean file? Attaching the bean configuration of the cache.

cache.xml










--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to access IGFS file written one node from other node in cluster ??

2020-02-24 Thread Andrei Aleksandrov

Hi,

I can suggest using a cache store implementation. For example, the current
guide
shows how a Hive schema can be imported using the web console.


If you need to work with files (not tables), then please use the
HDFS or Spark API directly. Ignite provides good Spark integration:


https://apacheignite-fs.readme.io/docs/ignite-data-frame

BR,
Andrei

2/21/2020 6:52 PM, Preet пишет:

Then how to create shared file system or is there any way to access/modify
file written by one node in cluster by other node ??



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Slow cache updates with indexing module enabled

2020-02-14 Thread Andrei Aleksandrov

Hi,

Some recommendations should be applied to every SQL query:

1) You should avoid __SCAN_ (full table scans) in query plans.
2) You should use the LAZY flag if the result set is big.
3) You should set proper inline sizes for your indexes.

Please read my comments:

1)My question is what is happening with all those indexes when an entry is
updated but, none of the indexed fields (except one) are being changed? In
our case, we are only flipping a boolean value of only 1 field. Is this
change triggering updates in ALL the indexes associated with the cache?

Yes, all your indexes will be rebuilt (new value will be inserted in the 
index tree).


2)When we update entry field SEGMENT_1 field with a True, are the other 99
indexes updated?

It looks like yes.

3)Those tickets I mentioned seem to be related but I would like to have your
confirmation.

Yes, https://issues.apache.org/jira/browse/IGNITE-7015 is related to
this behavior. You can try to highlight it on the development mailing list.


BR,
Andrei
2/13/2020 5:50 PM, xero пишет:

Hi Andrei, thanks for taking the time to answer my question. I will consider
your suggestion if we decide to switch to a multiple tables approach that
will require those JOIN considerations. But, in this case we have only 1
cache and the operation that we are executing is an update. We tried using
SQL-Update but we also tried using a CacheEntryProcessor directly. My
question is what is happening with all those indexes when an entry is
updated but, none of the indexed fields (except one) are being changed? In
our case, we are only flipping a boolean value of only 1 field. Is this
change triggering updates in ALL the indexes associated with the cache?

Cache is like this (with indexes on all fields):
id|(other fields)|segment_1|segment_2|segment_2|...|segment_99|segment_100

Then we try updating a batch of entries with an invokeAll using a
CacheEntryProcessor:
public Void process(MutableEntry entry, Object...
arguments) {
final BinaryObjectBuilder builder =
entry.getValue().toBuilder().setField("SEGMENT_1", true);
entry.setValue(builder.build());

return null;
}
When we update entry field SEGMENT_1 field with a True, are the other 99
indexes updated?
Those tickets I mentioned seem to be related but I would like to have your
confirmation.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Scheduling Cache Refresh using Ignite

2020-02-14 Thread Andrei Aleksandrov

Hi Nithin,

You see the current message because your client lost the connection to the
server. It tried to get the acknowledgement message for some operation (I
guess it should be some cache operation).


You can see that IgniteClientDisconnectedException was thrown. In this
case, you can get the reconnect future and wait for the client reconnection:


https://apacheignite.readme.io/docs/clients-vs-servers#reconnecting-a-client

Please add try/catch blocks around your cache operations and add the following logic:

catch (IgniteClientDisconnectedException e) {
    e.reconnectFuture().get(); // Wait for reconnect.

    // Can proceed and use the same IgniteCompute instance.
}



I can't say why your client was disconnected. Most likely it's because
of some network issues. You can try to take a look at the server logs and
find NODE_LEFT or NODE_FAILED messages there.


BR,
Andrei

2/14/2020 8:08 AM, nithin91 пишет:

Hi

I am unable to attach any file, as a result of which I pasted the code and
bean file in my previous messages.

Following is error i get.

Feb 13, 2020 11:34:40 AM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to send message: null
java.io.IOException: Failed to get acknowledge for message:
TcpDiscoveryClientMetricsUpdateMessage [super=TcpDiscoveryAbstractMessage

  


[sndNodeId=null, id=b9bb52d3071-613fd9b8-0c00-4dde-ba8f-8f5341734a3c,
verifierNodeId=null, topVer=0, pendingIdx=0, failedNodes=null,
isClient=true]]
 at
org.apache.ignite.spi.discovery.tcp.ClientImpl$SocketWriter.body(ClientImpl.java:1398)
 at
org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)

Feb 13, 2020 11:34:47 AM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to reconnect to cluster (consider increasing 'networkTimeout'
configuration property) [networkTimeout=5000]
[11:34:52] Ignite node stopped OK [uptime=00:00:24.772]
Exception in thread "main" javax.cache.CacheException: class
org.apache.ignite.IgniteClientDisconnectedException: Failed to execute
dynamic cache change request, client node disconnected.
 at
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1337)
 at
org.apache.ignite.internal.IgniteKernal.getOrCreateCache0(IgniteKernal.java:3023)
 at
org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2992)
 at Load.OrdersLoad.main(OrdersLoad.java:82)
Caused by: class org.apache.ignite.IgniteClientDisconnectedException: Failed
to execute dynamic cache change request, client node disconnected.
 at
org.apache.ignite.internal.util.IgniteUtils$15.apply(IgniteUtils.java:952)
 at
org.apache.ignite.internal.util.IgniteUtils$15.apply(IgniteUtils.java:948)
 ... 4 more
Caused by: class
org.apache.ignite.internal.IgniteClientDisconnectedCheckedException: Failed
to execute dynamic cache change request, client node disconnected.
 at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.onDisconnected(GridCacheProcessor.java:1180)
 at
org.apache.ignite.internal.IgniteKernal.onDisconnected(IgniteKernal.java:3949)
 at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.onDiscovery0(GridDiscoveryManager.java:821)
 at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.lambda$onDiscovery$0(GridDiscoveryManager.java:604)
 at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryMessageNotifierWorker.body0(GridDiscoveryManager.java:2667)
 at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryMessageNotifierWorker.body(GridDiscoveryManager.java:2705)
 at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
 at java.lang.Thread.run(Thread.java:748)




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite yarn resources keep on increasing

2020-02-13 Thread Andrei Aleksandrov

Hi,

Could you please provide more details:

1)Your configurations and environment variables (IGNITE_PATH?)
2)The logs of your Ignite nodes where you see the mentioned exception.

IGNITE_PATH should be a path to the unzipped Ignite distribution, not a URL.
Is it possible that you didn't unzip the binaries or forgot to copy
them to some node?


BR,
Andrei

2/13/2020 1:12 PM, ChandanS пишет:

I am using ignite 2.7 version for ignite yarn deployment. I have my own spark
application that start ignite yarn cluster and load data to ignite. It works
fine in positive scenarios, but whenever there is an exception from the
ignite-yarn.jar side like giving wrong path for some properties
(IGNITE_PATH), the resource uses keep on increasing with some time interval.
I have started my application with --num-executors 40 --executor-cores 2,
currently after keeping the application up for last 10 hrs number of
executors is 461 and cores 921 with increasing in memory as well. I am
getting the below exception from ignite-yarn application:

class org.apache.ignite.IgniteException: Failed to start manager:
GridManagerAdapter [enabled=true,
name=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager]



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Scheduling Cache Refresh using Ignite

2020-02-13 Thread Andrei Aleksandrov

Hi,

Can you please attach the full logs with the mentioned exception? BTW, I
don't see any attachments in the previous message (probably the user list
strips them).


BR,
Andrei

2/13/2020 3:44 PM, nithin91 пишет:

Attached the bean file used



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Slow cache updates with indexing module enabled

2020-02-13 Thread Andrei Aleksandrov

Hi,

SQL query performance can be poor for several reasons:

1)Incorrect indexes. Please check that your EXPLAIN contains indexes and 
doesn't have scans for joins:


INNER JOIN PUBLIC.PERSON P__Z1
    /* PUBLIC.PERSON.__SCAN_ */

Probably the inline size for the used index is incorrect, or the wrong index is used.

To solve this problem you should calculate the inline size for every index
and check in the EXPLAIN output that the correct index is used for your
query. Here is how the inline size is calculated for each field type:

long

  0     1       9
  | tag | value |

  Total: 9 bytes

int

  0     1       5
  | tag | value |

  Total: 5 bytes

String

  0     1      3             N
  | tag | size | UTF-8 value |

  Total: 3 + string length

POJO (BinaryObject)

  0     1      3     4      8          12        16     20       24            32        N
  | tag | size | tag | size | BO flags | type ID | hash | length | schema info | BO body |
        |                        Binary object header                          |

  Total: 32 + N
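
Once calculated, the inline size can be applied; a short hedged sketch
(table, index and size values here are hypothetical):

// Per index, via DDL - Ignite SQL supports an INLINE_SIZE clause:
//   CREATE INDEX idx_person_city ON Person (city_id) INLINE_SIZE 5;

// Or as a default for all indexes of a cache:
CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("personCache");
ccfg.setSqlIndexMaxInlineSize(64);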
2) GC pauses because of query execution without the LAZY flag.

3) In the case of multiple joins, the join order can be incorrect
because of specifics of the H2 optimizer used in Ignite.


To fix this problem you should prepare the correct join order and set
the "enforce join order" flag. When the BIG table is joined to the
SMALL one, it will be faster than the other way around:


select * from SMALLTABLE, BIGTABLE where SMALLTABLE.id = BIGTABLE.id -- correct
select * from BIGTABLE, SMALLTABLE where SMALLTABLE.id = BIGTABLE.id -- incorrect


Check the join order using the EXPLAIN command.

BR,
Andrei

2/12/2020 11:24 PM, xero пишет:

Hi,
We are experiencing slow updates to a cache with multiple indexed fields
(around 25 indexes during testing but we expect to have many more) for
updates that are only changing one field. Basically, we have a
customer*->belongsto->*segment relationship and we have one column per
segment. Only one column is updated with a 1 or 0 if the customer belongs to
the segment.

During testing, we tried dropping half of the unrelated indexes (indexes
over fields that are not being updated) and we duplicate the performance. We
went from 1k ops to 2k ops approximately.

We found these cases may be related:
https://cwiki.apache.org/confluence/display/IGNITE/IEP-19%3A+SQL+index+update+optimizations
https://issues.apache.org/jira/browse/IGNITE-7015?src=confmacro

Could you please confirm us if IGNITE-7015 could be related to this
scenario? If yes, do you have any plans to continue the development of the
fix?


We are using Ignite 2.7.6 with 10 nodes, 2 backups, indexing module enabled
and persistence.

Cache Configuration: [name=xdp-contactcomcast-1, grpName=null,
memPlcName=xdp, storeConcurrentLoadAllThreshold=5, rebalancePoolSize=2,
rebalanceTimeout=1, evictPlc=null, evictPlcFactory=null,
onheapCache=false, sqlOnheapCache=false, sqlOnheapCacheMaxSize=0,
evictFilter=null, eagerTtl=true, dfltLockTimeout=0, nearCfg=null,
writeSync=PRIMARY_SYNC, storeFactory=null, storeKeepBinary=false,
loadPrevVal=false, aff=RendezvousAffinityFunction [parts=1024, mask=1023,
exclNeighbors=false, exclNeighborsWarn=false, backupFilter=null,
affinityBackupFilter=null], cacheMode=PARTITIONED, atomicityMode=ATOMIC,
backups=2, invalidate=false, tmLookupClsName=null, rebalanceMode=ASYNC,
rebalanceOrder=0, rebalanceBatchSize=524288, rebalanceBatchesPrefetchCnt=2,
maxConcurrentAsyncOps=500, sqlIdxMaxInlineSize=-1, writeBehindEnabled=false,
writeBehindFlushSize=10240, writeBehindFlushFreq=5000,
writeBehindFlushThreadCnt=1, writeBehindBatchSize=512,
writeBehindCoalescing=true, maxQryIterCnt=1024,
affMapper=org.apache.ignite.internal.processors.cache.CacheDefaultBinaryAffinityKeyMapper@db5e319,
rebalanceDelay=0, rebalanceThrottle=0, interceptor=null,
longQryWarnTimeout=3000, qryDetailMetricsSz=0, readFromBackup=true,
nodeFilter=IgniteAllNodesPredicate [], sqlSchema=XDP_CONTACTCOMCAST_1,
sqlEscapeAll=false, cpOnRead=true, topValidator=null, partLossPlc=IGNORE,
qryParallelism=1, evtsDisabled=false, encryptionEnabled=false]


Thanks,








--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Scheduling Cache Refresh using Ignite

2020-02-13 Thread Andrei Aleksandrov

Hi,

Please read my comments:

1) Ignite generally doesn't support changing the cache configuration
without re-creating the cache. But for SQL caches that were created via
QueryEntity or CREATE TABLE, you can add and remove columns using
ALTER TABLE commands (see the sketch after the links below):


https://apacheignite-sql.readme.io/docs/alter-table
https://apacheignite.readme.io/docs/cache-queries#query-configuration-using-queryentity
https://apacheignite-sql.readme.io/docs/create-table
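
A small sketch of such a change over the thin JDBC driver (the table and
column names are hypothetical):

try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
     Statement stmt = conn.createStatement()) {
    // Add and later remove a column without recreating the table:
    stmt.executeUpdate("ALTER TABLE Person ADD COLUMN age INT");
    stmt.executeUpdate("ALTER TABLE Person DROP COLUMN age");
}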
2)First of all, you can use the following options:

https://apacheignite.readme.io/docs/3rd-party-store#section-read-through-and-write-through

Read-through loads the requested keys from the DB on cache misses.
Write-through propagates all cache updates to the DB.

In case if you require some cache invalidation or refresh then you can 
create some cron job for it.


3) I guess that loadCache is the only way to do it. It accepts a predicate
that filters the values being loaded into the cache (see the sketch after
the link below).


https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#loadCache-org.apache.ignite.lang.IgniteBiPredicate-java.lang.Object...-
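
A hedged sketch of such a filtered reload (the cache name, value type and
timestamp field are hypothetical):

IgniteCache<Long, Person> cache = ignite.cache("personCache");

// The predicate decides which loaded entries actually get into the cache.
cache.loadCache((key, person) -> person.getUpdatedDate() > lastRefreshTime);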

4) You can use various integrations that do distributed streaming into
Ignite, such as Spark or Kafka:


https://apacheignite-mix.readme.io/docs/getting-started

BR,
Andrei
2/12/2020 9:11 PM, nithin91 пишет:

Hi

We are doing a POC exploring Ignite's in-memory capabilities and building
a REST API on top of it using Node Express.


Currently, as part of the POC, we installed Ignite on UNIX and are trying to
load data from an Oracle DB into the Ignite cache using the Cache JDBC POJO Store.

Can someone help me with whether the following scenarios can be handled using
Ignite, as I couldn't find this in the official documentation?

1. If we want to add/drop/modify a column in the cache, can we update the
bean file directly while the node is running, or do we need to stop the node
and then restart it? It would be really helpful if you can share sample code
or a documentation link.

2. How do we refresh the Ignite cache automatically or schedule the cache
refresh? It would be really helpful if you can share sample code or a
documentation link.

3. Is incremental refresh allowed? It would be really helpful if you can
share sample code or a documentation link.

4. Is there any other way to load the caches fast other than the Cache JDBC
POJO Store? It would be really helpful if you can share sample code or a
documentation link.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite on yarn doesn't started

2020-02-13 Thread Andrei Aleksandrov

Hi,

I asked you to check it because I see the next option:

IGNITE_PATH = /tmp/ignite/apache-ignite-2.7.6-bin.zip

This option should be a path to the unzipped Ignite distribution, not a URL
(you set a zip archive).


Also I see commented IGNITE_URL option:

#IGNITE_URL =
http://ambari1.dmz.loc:/filebrowser/view=/tmp/ignite/apache-ignite-2.7.6-bin.zip

So it looks like you don't provide the Ignite binaries to your YARN 
deployment.


BR,
Andrei

2/11/2020 9:19 PM, v.shinkevich пишет:

aealexsandrov wrote

1) check that Ignite libs (from ignite_binaries/libs) are available for
your YARN deployment.
2) check that path to the configuration file is reachable from every node

1) I don't understand what I need to check. Where should these libs be ?  Do
I need to unpack the distribution? To a local folder or to HDFS?

My /tmp/ignite folder (on HDFS, on local the same content + unpacked distro
for local run check)

On HDFS I don't have any logs. Only one jar in workdir.


Log of local run:
[root@dn07 /tmp/ignite/apache-ignite-2.7.6-bin/bin]# ./ignite.sh

[20:54:37]__  
[20:54:37]   /  _/ ___/ |/ /  _/_  __/ __/
[20:54:37]  _/ // (7 7// /  / / / _/
[20:54:37] /___/\___/_/|_/___/ /_/ /___/
[20:54:37]
[20:54:37] ver. 2.7.6#20190911-sha1:21f7ca41
[20:54:37] 2019 Copyright(C) Apache Software Foundation
[20:54:37]
[20:54:37] Ignite documentation: http://ignite.apache.org
[20:54:37]
[20:54:37] Quiet mode.
[20:54:37]   ^-- Logging to file
'/tmp/ignite/apache-ignite-2.7.6-bin/work/log/ignite-e2eeb3da.0.log'
[20:54:37]   ^-- Logging by 'JavaLogger [quiet=true, config=null]'
[20:54:37]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[20:54:37]
[20:54:37] OS: Linux 3.10.0-693.el7.x86_64 amd64
[20:54:37] VM information: Java(TM) SE Runtime Environment 1.8.0_141-b15
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.141-b15
[20:54:38] Please set system property '-Djava.net.preferIPv4Stack=true' to
avoid possible problems in mixed environments.
[20:54:38] Configured plugins:
[20:54:38]   ^-- None
[20:54:38]
[20:54:38] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
[tryStop=false, timeout=0, super=AbstractFailureHandler
[ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED,
SYSTEM_CRITICAL_OPERATION_TIMEOUT
Java HotSpot(TM) 64-Bit Server VM warning: sched_getaffinity failed (Invalid
argument)- using online processor count (192) which may exceed available
processors
[20:54:38] Message queue limit is set to 0 which may lead to potential OOMEs
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
message queues growth on sender and receiver sides.
[20:54:39] Security status [authentication=off, tls/ssl=off]
[20:54:44] Performance suggestions for grid  (fix if possible)
[20:54:44] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[20:54:44]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM
options)
[20:54:44]   ^-- Specify JVM heap max size (add '-Xmx[g|G|m|M|k|K]' to
JVM options)
[20:54:44]   ^-- Set max direct memory size if getting 'OOME: Direct buffer
memory' (add '-XX:MaxDirectMemorySize=[g|G|m|M|k|K]' to JVM options)
[20:54:44]   ^-- Disable processing of calls to System.gc() (add
'-XX:+DisableExplicitGC' to JVM options)
[20:54:44]   ^-- Speed up flushing of dirty pages by OS (alter
vm.dirty_expire_centisecs parameter by setting to 500)
[20:54:44] Refer to this page for more performance suggestions:
https://apacheignite.readme.io/docs/jvm-and-system-tuning
[20:54:44]
[20:54:44] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[20:54:44] Data Regions Configured:
[20:54:44]   ^-- default [initSize=256.0 MiB, maxSize=403.0 GiB,
persistence=false]
[20:54:44]
[20:54:44] Ignite node started OK (id=e2eeb3da)
^C
[20:55:19] Ignite node stopped OK [uptime=00:00:35.686]




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Using EntryProcessor arguments recommendations

2020-02-13 Thread Andrei Aleksandrov

Hi,

I suggest reading the documentation:

EntryProcessor:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/CacheEntry.html
Invoke java doc:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#invoke-K-org.apache.ignite.cache.CacheEntryProcessor-java.lang.Object...-

CacheAtomicityMode specific:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/CacheAtomicityMode.html

Note that invoke and invokeAll lock the keys they process. This means
a deadlock is possible in the following cases:


1) You access external keys inside the EntryProcessor.
2) You pass an unordered collection to invokeAll. An ordered one, such as a
TreeMap/TreeSet, is suggested (see the sketch below).
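
A minimal sketch of both points (the cache name, types and the argument
value are hypothetical) - arguments are passed via the invokeAll varargs,
and keys are supplied in a sorted collection so locks are always acquired
in the same order:

IgniteCache<Integer, Integer> cache = ignite.cache("myCache");

Map<Integer, EntryProcessorResult<Object>> res = cache.invokeAll(
    new TreeSet<>(Arrays.asList(1, 2, 3)), // sorted keys => consistent lock order
    (CacheEntryProcessor<Integer, Integer, Object>) (entry, args) -> {
        // Only touch the entry passed in; don't access other keys here.
        entry.setValue((entry.exists() ? entry.getValue() : 0) + (Integer) args[0]);
        return null;
    },
    10); // delivered to the processor as args[0]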

BR,
Andrei

2/12/2020 12:18 AM, Григорий Доможиров пишет:

I see two options of using EntryProcessor:
1. Pass arguments like this:
cache.invoke(key, new CustomProcessor(), someValue)
2. Pass stateful EntryProcessor like this:
  cache.invoke(key, new CustomProcessor(someValue))

Are there any recommendations?


Re: JDBC thin client incorrect security context

2020-02-13 Thread Andrei Aleksandrov

Hi,

I see that you found the ticket related to the current issue:

https://issues.apache.org/jira/browse/IGNITE-12589

Looks like it can be a reason of your problem.

Generally, I don't know how you implemented your security plugin, but if you
take a look at a similar plugin from a third-party vendor,
you can see that the subjectID should be related to the user
connection/session, not to the node where some task is executed (yes,
every node has its own subjectID and user, but a JDBC connection with another
user should have its own subjectID).


How it is implemented there, in general terms:

1)JDBC supports username and password fields:

https://apacheignite-sql.readme.io/docs/jdbc-driver#section-parameters

2) Every user session/connection is mapped to some SecuritySubject (that
contains the subjectID).


3)Every event that contains subjectID can be linked with some user 
connection (SecuritySubject.login()) using the following code:


public class EventStorageSpi extends IgniteSpiAdapter
    implements org.apache.ignite.spi.eventstorage.EventStorageSpi {
    @LoggerResource
    private IgniteLogger log;

    @Override
    public <T extends Event> Collection<T> localEvents(IgnitePredicate<T> p) {
        return null;
    }

    @Override
    public void record(Event evt) throws IgniteSpiException {
        if (evt.type() == EVT_MANAGEMENT_TASK_STARTED) {
            TaskEvent taskEvent = (TaskEvent) evt;

            SecuritySubject subj = taskEvent.subjectId() != null
                ? getSpiContext().authenticatedSubject(taskEvent.subjectId())
                : null;

            log.info("Management task started: [" +
                "name=" + taskEvent.taskName() + ", " +
                "eventNode=" + taskEvent.node() + ", " +
                "timestamp=" + taskEvent.timestamp() + ", " +
                "info=" + taskEvent.message() + ", " +
                "subjectId=" + taskEvent.subjectId() + ", " +
                "secureSubject=" + subj + "]");
        }
    }

    @Override
    public void spiStart(@Nullable String igniteInstanceName) throws IgniteSpiException {
        /* No-op. */
    }

    @Override
    public void spiStop() throws IgniteSpiException {
        /* No-op. */
    }
}


If this approach doesn't work for your implementation because of some
issue, then you can try to start a thread on the Ignite developer
mailing list.


BR,
Andrei

2/12/2020 6:54 PM, VeenaMithare пишет:

Hi ,

We have built a security and audit plugin for security of our ignite
cluster. We are unable to get the right audit information i.e. we are unable
to get the right subject for users logged in through dbeaver ( jdbc thin
client. ). This is because the subjectid associated with the "CACHE_PUT"
event when an update is triggered by the jdbc thin client, contains the uuid
of the node that executed the update rather than the logged in jdbc thin
client user.

If this is a limitation with the current version of ignite, is there any
workaround to get this information ?

regards,
Veena.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Issue with large number of Continuous Queries

2020-02-11 Thread Andrei Aleksandrov

Hi,

Please read some comments below:

1)You said:

*Scenario*: There are 1000 items in the myCache and client-1 is pushing 3
updates per second on every item. Lets say both client-2 and client-3 have
1000 different continuous queries open to listen to every update.

A continuous query can handle all updates for a cache. You shouldn't
start a separate CQ for every entry. Please read the following:


https://apacheignite.readme.io/docs/continuous-queries

2)You said:

With the above scenario, we are observing the server-1 alone taking 60-70%
CPU and about 4GB memory.
In this case when a high number of continuous queries machine reaches 
100% CPU

utilization.

You can try to monitor which Ignite parts consume the CPU and memory.
Taking point 1) into account, you shouldn't have a lot of CQs for
the same cache.


3)You said:

*Thinking to fix as*: Use single continuous query per client to listen to
all the updates. i.e. there would be just one continuous query and it would
listen to all the updates.

This is the correct way.

4)You said:

But now the problem is that both the clients do not need to listen to
updates in all the keys in the cache. So we are thinking of adding another
column to the ignite cache using which we can filter the updates by checking
if the client column of the updated row contains the client name for which
filter is being checked. e.g. the new table would look like-

Correct, you can filter the updates using the following:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/query/AbstractContinuousQuery.html#getRemoteFilterFactory--
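
A hedged sketch combining points 3) and 4) (the value type, its clients
field and the cache variable are hypothetical) - one continuous query per
client, with a remote filter so only relevant updates cross the wire:

ContinuousQuery<String, MarketData> qry = new ContinuousQuery<>();

// Evaluated on the server nodes: only rows tagged for this client are sent.
qry.setRemoteFilterFactory(FactoryBuilder.factoryOf(
    (CacheEntryEventSerializableFilter<String, MarketData>) evt ->
        evt.getValue().getClients().contains("client-2")));

// Invoked on the client side for every delivered update.
qry.setLocalListener(evts ->
    evts.forEach(e -> System.out.println("Updated: " + e.getKey())));

try (QueryCursor<Cache.Entry<String, MarketData>> cur = cache.query(qry)) {
    // Keep the cursor open for as long as updates should be received.
}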

5)You said:

How many continuous queries can ignite handle at once with the configuration
we mentioned or is there any such benchmark values available on any
particular configuration? Is it fine to have as many as 1000 (or even more)
continuous queries running at once? If yes, how can we make it less CPU
intensive and more performant?

I don't think such a number can be provided. It depends
on many things and should be tested in your specific environment.
However, I suggest not starting more CQs than you need.


BR,
Andrei

2/11/2020 9:42 AM, Dorah пишет:

*Topology*:

Server-1 --> Cache myCache, holds continuously updating data like market
data(prices, status, tradetime etc) for instruments. InstrumentId is the key
for this cache.
Server-1 running with following jvm params: -Xms1g,-Xmx6g

Client-1 --> Pushing continuous updates on the cache

Client - 2 & 3 --> Listening updates on myCache using separate Continuous
query on every key (i.e. one continuous query per instrumentId).

The Cache Configuration is as follows:
   cacheModePARTITIONED
   atomicityModeATOMIC
   backups2
   readFromBackuptrue
   copyOnReadtrue
   statisticsEnabledtrue
   managementEnabledtrue

System hardware: 8 core, 32gb RAM
For now all servers and client below run on same machine.

-

*Scenario*: There are 1000 items in the myCache and client-1 is pushing 3
updates per second on every item. Lets say both client-2 and client-3 have
1000 different continuous queries open to listen to every update.

With the above scenario, we are observing the server-1 alone taking 60-70%
CPU and about 4GB memory.
In this case when high number continuous queries machine reaches 100% CPU
utilization.

*Thinking to fix as*: Use single continuous query per client to listen to
all the updates. i.e. there would be just one continuous query and it would
listen to all the updates.



But now the problem is that both the clients do not need to listen to
updates in all the keys in cache. So we are thinking of adding another
column to the ignite cache using which we can filter the updates by checking
if the client column of the updated row contains the client name for which
filter is being checked. e.g. the new table would look like-



Would this be the correct way to achieve what we are trying to achieve? Or
could this be done some other better way in ignite?

Follow up Question:
How many continuous queries can ignite handle at once with the configuration
we mentioned or is there any such benchmark values available on any
particular configuration? Is it fine to have as many as 1000 (or even more)
continuous queries running at once? If yes, how can we make it less CPU
intensive and more performant?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite on yarn doesn't started

2020-02-11 Thread Andrei Aleksandrov
Duplicate of 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-on-yarn-doesn-t-started-td31325.html


2/10/2020 11:00 PM, v.shinkevich пишет:

Hi, All!
I tried to run Ignite on Yarn
My cluster properties:


I also tried to add hdfs:// before paths, but result is the same.
I created dir /tmp/ignite on local node fs and copied it to hdfs.


then run
yarn jar ignite-yarn-2.7.6.jar /tmp/ignite/ignite-yarn-2.7.6.jar
/tmp/ignite/cluster.properties
It prints in console: (I had to add some log.info to code)


(/ignite/workdir/ was created on hdfs with ignite-yarn.jar)

But it's maximum that I got (Yarn application log) :

I didn't find any errors in hadoop-yarn logs.
I tried to complie for hadoop 2.7.3, but result is the same.

P.S. Ignite started locally on any node without problems.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite on yarn doesn't started

2020-02-11 Thread Andrei Aleksandrov

Hi,

Can you attach the Ignite node logs?

Also, I suggest to check:

1) check that Ignite libs (from ignite_binaries/libs) are available for 
your YARN deployment.

2) check that path to the configuration file is reachable from every node

I see that you set absolute paths to the tmp folder. Probably you should
avoid using a temporary folder.


However, the real reason we can see only from Ignite logs.

BR,
Andrei

2/11/2020 8:49 AM, v.shinkevich пишет:

Hi, All!

I tried to run Ignite on Yarn

My cluster properties:

#IGNITE_HOME = /tmp/ignite

# The HDFS path to the Apache Ignite config file.
IGNITE_XML_CONFIG =  /tmp/ignite/config/default-config.xml

# The directory which will be used for saving Apache Ignite distribution.
#IGNITE_WORKING_DIR = ./work

# The HDFS directory which will be used for saving Apache Ignite
distribution.
#IGNITE_RELEASES_DIR = /tmp/ignite/releases/

# The HDFS path to libs which will be added to classpath.
# IGNITE_USERS_LIBS = N/A  #/opt/libs/

# The number of megabytes of RAM for each Apache Ignite node.
# This is the size of the Java heap.
# This includes on-heap caching if it is used.
IGNITE_MEMORY_PER_NODE = 2048

# The amount of memory necessary for all data regions, with padding for JVM
native overhead, interned Strings, etc.
# This setting should always be adjusted for nodes that are used to store
data, not just for pure computations.
# Memory requested to YARN for containers running an Ignite node is the sum
of IGNITE_MEMORY_PER_NODE and IGNITE_MEMORY_OVERHEAD_PER_NODE.
IGNITE_MEMORY_OVERHEAD_PER_NODE = 16384
#IGNITE_MEMORY_PER_NODE * 0.10, with a minimum of 384

# The constraint on slave hosts.
# IGNITE_HOSTNAME_CONSTRAINT = N/A  #192.168.0.[1-100]

# The number of nodes in the cluster.
IGNITE_NODE_COUNT = 16

# The number of CPU Cores for each Apache Ignite node.
IGNITE_RUN_CPU_PER_NODE = 2

# The version of Ignite which will be run on nodes.
IGNITE_VERSION = 2.7.6

# The HDFS path to the Apache Ignite build. This property can be useful when
the yarn
# is cluster running in net without internet access.
IGNITE_PATH = /tmp/ignite/apache-ignite-2.7.6-bin.zip

# Location where Ignite binary distribution is stored to be downloaded for
delivery. As per version 2.7, either IGNITE_PATH or IGNITE_URL is mandatory
in practice.
#IGNITE_URL =
http://ambari1.dmz.loc:/filebrowser/view=/tmp/ignite/apache-ignite-2.7.6-bin.zip

# Additional JVM options.
IGNITE_JVM_OPTS = -Djava.net.preferIPv4Stack=true

I also tried to add hdfs:// before paths, but result is the same.
I created dir /tmp/ignite on local node fs and copied it to hdfs.

apache-ignite-2.7.6-bin.zip
cluster.properties
commons-beanutils-1.9.3.jar
commons-codec-1.11.jar
commons-collections-3.2.2.jar
config
hadoop-common-2.7.3.jar
hadoop-yarn-client-2.7.3.jar
ignite-yarn-2.7.6.jar

Then run
yarn jar ignite-yarn-2.7.6.jar /tmp/ignite/ignite-yarn-2.7.6.jar
/tmp/ignite/cluster.properties

It prints in console: (I had to add some log.info to code)
   
Feb 10, 2020 10:33:35 PM org.apache.ignite.yarn.IgniteYarnClient main INFO:

ignite: hdfs:/tmp/ignite/apache-ignite-2.7.6-bin.zip
Feb 10, 2020 10:33:35 PM org.apache.ignite.yarn.IgniteYarnClient main INFO:
appJar: /ignite/workdir/ignite-yarn.jar
Feb 10, 2020 10:33:35 PM org.apache.ignite.yarn.IgniteYarnClient main INFO:
appMasterJar: scheme: "hdfs" host: "nmnode1.dmz.loc" port: 8020 file:
"/ignite/workdir/ignite-yarn.jar"
Feb 10, 2020 10:33:35 PM org.apache.ignite.yarn.IgniteYarnClient main INFO:
Submitted application. Application id: application_1581322307764_0057
Feb 10, 2020 10:33:39 PM org.apache.ignite.yarn.IgniteYarnClient main INFO:
Application application_1581322307764_0057 is RUNNING.

(/ignite/workdir/ was created on hdfs with ignite-yarn.jar)

But it's maximum that I got (Yarn application log) :

Log Type: stderr
Log Upload Time: Mon Feb 10 22:07:15 +0300 2020
Log Length: 7921
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/data/hdfs/v11/spill/usercache/root/appcache/application_1581322307764_0054/filecache/10/ignite-yarn.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
20/02/10 22:06:38 INFO impl.ContainerManagementProtocolProxy:
yarn.client.max-cached-nodemanagers-proxies : 0
20/02/10 22:06:38 INFO client.RMProxy: Connecting to ResourceManager at
ambari1.dmz.loc/10.254.62.127:8030
Feb 10, 2020 10:06:38 PM org.apache.ignite.yarn.ApplicationMaster run
INFO: Application master registered.
Feb 10, 2020 10:06:38 PM org.apache.ignite.yarn.ApplicationMaster run
INFO: Making request. Memory: 18,432, cpu 2.
Feb 10, 2020 10:06:38 PM org.apache.ignite.yarn.ApplicationMaster run
INFO: Making request. Memory: 18,432, cpu 2.
Feb 10, 2020 10:06:38 PM 

Re: Exception when joining a data node

2020-01-31 Thread Andrei Aleksandrov

Hi,

Are you sure that you saw this message on node stopping? It's strange 
because the following message should appear on node joining (and you can 
see it from the stack trace):


BaselineTopology of joining node (bf40986d-fba5-4985-b128-d30bb45228e7) 
is not compatible with BaselineTopology in the cluster. Branching 
history of cluster BlT ([1649854264, 548904244]) doesn't contain 
branching point hash of joining node BlT (0). Consider cleaning 
persistent storage of the node and adding it to the cluster again.


I guess that following situations can explain it:

1)You faced the "split-brain" situation when different parts of your 
cluster lose the connection to each other and continue working as 
different clusters. To avoid it you can try to use


https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/plugin/segmentation/SegmentationResolver.html

2)Some of your nodes went offline. During this period the baseline was 
changed and when this node was restarted it found that its baseline is 
different.


3) Probably you started several nodes and activated the cluster. Then you
stopped these nodes, started another node, and activated it too. After these
steps, they will not be able to join each other.


The solution to the current situation was also described in the message:
you should clean the persistent storage of the node that can't join
the cluster.


Also I suggest to read next article:

https://apacheignite.readme.io/docs/baseline-topology

BR,
Andrei

1/31/2020 5:14 AM, krkumar24061...@gmail.com пишет:

Hi Guys - I am getting this error these days, What does this mean and why am
I getting into this error when I am doing a Ignition.stop(false) when I
shutdown the server.

Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start
SPI: TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackTimeout=5000,
marsh=JdkMarshaller
[clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1@3e8afc2d],
reconCnt=10, reconDelay=2000, maxAckTimeout=60, forceSrvMode=false,
clientReconnectDisabled=false, internalLsnr=null]
at
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:300)
at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:939)
at
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1682)
... 14 more
Caused by: class org.apache.ignite.spi.IgniteSpiException: BaselineTopology
of joining node (bf40986d-fba5-4985-b128-d30bb45228e7) is not compatible
with BaselineTopology in the cluster. Branching history of cluster BlT
([1649854264, 548904244]) doesn't contain branching point hash of joining
node BlT (0). Consider cleaning persistent storage of the node and adding it
to the cluster again.
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.checkFailedError(TcpDiscoverySpi.java:1946)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:969)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:391)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2020)
at
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)



Thanx and Regards,
KR Kumar



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cache.replace-K-V-V- performing slow

2020-01-31 Thread Andrei Aleksandrov

Hi,

I have also described the possible problem here - 
https://stackoverflow.com/questions/59950157/why-apache-ignite-cache-replace-k-v-v-api-call-performing-slow


BR,
Andrei

1/31/2020 12:33 PM, Ilya Kasnacheev пишет:

Hello!

I'm not sure the benchmarking is relevant, since most of the work happens
on the server nodes and you are not benchmarking their threads.


Are you sure you're not doing more replace()s than necessary?

Regards,
--
Ilya Kasnacheev


Tue, 28 Jan 2020 at 15:27, tarunk:


Hi All,

We are running an Ignite cluster with 12 nodes on Ignite 2.7.0,
OpenJDK 1.8, RHEL platform.

We saw some slowness with one of our processes, and when we tried to
drill into it further by profiling the JVM, the main culprit (taking
~78% of total time) seems to be the cache.replace(K,V,V) Ignite API call.
Out of the 77.9% taken by replace, 39% is taken by GridCacheAdapter.equalVal
and 38.5% by GridCacheAdapter.put.

Attaching the profiling snapshot ,Can someone please check and
suggest what
could be the cause or some known issue with this version ?


Let me know for any further query need to answer this please.

Regards,
Tarun



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: "Adding entry to partition that is concurrently evicted" error

2020-01-31 Thread Andrei Aleksandrov

Hi,

The current problem should be solved in ignite-2.8. I am not sure why this
fix isn't a part of ignite-2.7.6.


https://issues.apache.org/jira/browse/IGNITE-11127

Your cluster was stopped because of failure handler work.

https://apacheignite.readme.io/docs/critical-failures-handling#section-failure-handling

I am not sure about possible workarounds here (probably you can set the
NoOpFailureHandler; see the sketch after the link below). You can also try
to create a thread on the developer list:


http://apache-ignite-developers.2346864.n4.nabble.com/Apache-Ignite-2-7-release-td34076i40.html
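
If you do experiment with that workaround, a minimal sketch (use with
care - it masks critical failures instead of fixing them):

IgniteConfiguration cfg = new IgniteConfiguration();

// Keep the node alive instead of halting the JVM on critical failures.
cfg.setFailureHandler(new NoOpFailureHandler());

Ignite ignite = Ignition.start(cfg);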

BR,
Andrei

1/29/2020 1:58 AM, Abhishek Gupta (BLOOMBERG/ 919 3RD A) пишет:
Hello! I've got a 6 node Ignite 2.7.5 grid. I had this strange issue
where multiple nodes hit the following exception -

[ERROR] [sys-stripe-53-#54] GridCacheIoManager - Failed to process message
[senderId=f4a736b6-cfff-4548-a8b4-358d54d19ac6, messageType=class
o.a.i.i.processors.cache.distributed.near.GridNearGetRequest]
org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException:
Adding entry to partition that is concurrently evicted [grp=mainCache,
part=733, shouldBeMoving=, belongs=false,
topVer=AffinityTopologyVersion [topVer=1978, minorTopVer=1],
curTopVer=AffinityTopologyVersion [topVer=1978, minorTopVer=1]]

and then died after

2020-01-27 13:30:19.849 [ERROR] [ttl-cleanup-worker-#159] - JVM will be
halted immediately due to the failure: [failureCtx=FailureContext
[type=SYSTEM_WORKER_TERMINATION, err=class
o.a.i.i.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException
[part=1013, msg=Adding entry to partition that is concurrently evicted
[grp=mainCache, part=1013, shouldBeMoving=, belongs=false,
topVer=AffinityTopologyVersion [topVer=1978, minorTopVer=1],
curTopVer=AffinityTopologyVersion [topVer=1978, minorTopVer=1]

The sequence of events was simply the following -

One of the nodes (let's call it node 1) was down for 2.5 hours and
restarted. After a configured delay of 20 mins, it started to rebalance
from the other 5 nodes. There were no other nodes that joined or left in
this period. 40 minutes into the rebalance the above errors started showing
in the other nodes and they just bounced, and therefore there was data loss.

I found a few links related to this but nothing that explained the root
cause or what my workaround could be -

* http://apache-ignite-users.70518.x6.nabble.com/Adding-entry-to-partition-that-is-concurrently-evicted-td24782.html#a24786
* https://issues.apache.org/jira/browse/IGNITE-9803
* https://issues.apache.org/jira/browse/IGNITE-11620

Thanks, Abhishek


Re: Distinct Query too slow

2020-01-24 Thread Andrei Aleksandrov

Hi,

Looks like it's a known issue:

https://issues.apache.org/jira/browse/IGNITE-10781

According to this issue, indexes can work ineffectively for the DISTINCT clause.

However, it looks like the EXPLAIN output from your log isn't complete. It should
contain two parts - the reducer and the mapper.


Can you please run next from any SQL tool:

explain select distinct ipStart,ipEnd from IpContainerIpV4Data where 
subscriptionId = some_value;


BR,
Andrei

1/24/2020 3:23 PM, Prasad Bhalerao пишет:

Hi,

I am using Ignite 2.6 version. I have around total 6 million entries 
in my cache.


Following sql is taking too much time to execute. Some times it takes 
more than 180 seconds.


This SQL returns 4.5 million entries for given subscriptionId. I tried 
to add query parallelism (4-16 threads) on cache configuration. But it 
did not help.


If I remove DISTINCT keyword from sql then it executes quickly. But I 
need distinct in this particular case.


Can some please advise?

SQL:

select distinct ipStart, ipEnd from IpContainerIpV4Data where subscriptionId = ?

2020-01-23 06:49:38,249 264738612 [query-#30600%springDataNode%] WARN
o.a.i.i.p.query.h2.IgniteH2Indexing - Query execution is too long
[time=83159 ms, sql='SELECT DISTINCT __Z0.IPSTART __C0_0, __Z0.IPEND
__C0_1 FROM IP_CONTAINER_IPV4_CACHE.IPCONTAINERIPV4DATA __Z0 WHERE
__Z0.SUBSCRIPTIONID = ?1', plan= SELECT DISTINCT __Z0.IPSTART AS
__C0_0, __Z0.IPEND AS __C0_1 FROM
IP_CONTAINER_IPV4_CACHE.IPCONTAINERIPV4DATA __Z0 /*
IP_CONTAINER_IPV4_CACHE.IP_CONTAINER_IPV4_IDX2: SUBSCRIPTIONID = ?1 */
WHERE __Z0.SUBSCRIPTIONID = ?1 , parameters=[1234]]



The index is as follows:

public class IpContainerIpV4Data implements Data, UpdatableData {

    @QuerySqlField
    private long id;

    @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = "ip_container_ipv4_idx1", order = 2)})
    private int moduleId;

    @QuerySqlField(orderedGroups = {
        @QuerySqlField.Group(name = "ip_container_ipv4_idx1", order = 1),
        @QuerySqlField.Group(name = "ip_container_ipv4_idx2", order = 0)})
    private long subscriptionId;

    @QuerySqlField(orderedGroups = {
        @QuerySqlField.Group(name = "ip_container_ipv4_idx1", order = 4, descending = true),
        @QuerySqlField.Group(name = "ip_container_ipv4_idx2", order = 2, descending = true)})
    private int ipEnd;

    @QuerySqlField(orderedGroups = {
        @QuerySqlField.Group(name = "ip_container_ipv4_idx1", order = 3),
        @QuerySqlField.Group(name = "ip_container_ipv4_idx2", order = 1)})
    private int ipStart;

    @QuerySqlField
    private int partitionId;

    @QuerySqlField
    private long updatedDate;
}


Cache configuration:

private CacheConfiguration ipContainerIPV4CacheCfg() {
    CacheConfiguration ipContainerIpV4CacheCfg =
        new CacheConfiguration<>(CacheName.IP_CONTAINER_IPV4_CACHE.name());

    ipContainerIpV4CacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
    ipContainerIpV4CacheCfg.setWriteThrough(ENABLE_WRITE_THROUGH);
    ipContainerIpV4CacheCfg.setReadThrough(false);
    ipContainerIpV4CacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
    ipContainerIpV4CacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
    ipContainerIpV4CacheCfg.setBackups(this.backupCount);

    Factory storeFactory = FactoryBuilder.factoryOf(IpContainerIpV4CacheStore.class);
    ipContainerIpV4CacheCfg.setCacheStoreFactory(storeFactory);
    ipContainerIpV4CacheCfg.setIndexedTypes(DefaultDataAffinityKey.class, IpContainerIpV4Data.class);
    ipContainerIpV4CacheCfg.setCacheStoreSessionListenerFactories(cacheStoreSessionListenerFactory());
    ipContainerIpV4CacheCfg.setSqlIndexMaxInlineSize(84);

    RendezvousAffinityFunction affinityFunction = new RendezvousAffinityFunction();
    affinityFunction.setExcludeNeighbors(true);
    ipContainerIpV4CacheCfg.setAffinity(affinityFunction);
    ipContainerIpV4CacheCfg.setStatisticsEnabled(true);
    ipContainerIpV4CacheCfg.setPartitionLossPolicy(PartitionLossPolicy.READ_WRITE_SAFE);
    ipContainerIpV4CacheCfg.setQueryParallelism(4);

    return ipContainerIpV4CacheCfg;
}

Thanks,
Prasad


Re: How to set timeout while use data streamer in sql mode.

2020-01-07 Thread Andrei Aleksandrov

Hi,

You can find all existed options here:

https://apacheignite-sql.readme.io/docs/jdbc-client-driver
https://apacheignite-sql.readme.io/docs/jdbc-driver
https://apacheignite-sql.readme.io/docs/odbc-driver

At the moment I see that only the JDBC client driver has the special
streamingFlushFrequency timeout:


Timeout, in milliseconds, that data streamer should use to flush data. 
By default, the data is flushed on connection close.
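
A hedged sketch of passing it through the client driver URL (the config
path is hypothetical):

// Flush streamed data every second instead of only on connection close.
Connection conn = DriverManager.getConnection(
    "jdbc:ignite:cfg://streaming=true:streamingFlushFrequency=1000@file:///path/to/ignite-config.xml");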


It looks like the thin clients don't have it yet. Probably you should ask
about it on the developers list.


BR,
Andrei

1/2/2020 9:25 AM, yangjiajun пишет:

Hello.

I use 'SET STREAMING ON ORDERED;' to use streaming mode.I know that we can
set timeout to a data streamer.How can I set this in sql mode?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: CEP and Sliding Window Documentation

2020-01-07 Thread Andrei Aleksandrov

Hi,

That information wasn't removed. It was moved to the correct place:

1)Next section https://apacheignite.readme.io/v1.0/docs/data-streamers 
was moved herehttps://apacheignite.readme.io/docs/data-streamers 



2) The https://apacheignite.readme.io/v1.0/docs/sliding-windows section
covered expiry policies and SQL operations.


Expiry policies section:

https://apacheignite.readme.io/docs/expiry-policies
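
For example, a sliding-window-like cache can be emulated with an expiry
policy; a minimal sketch (the cache name and window length are placeholders):

CacheConfiguration<Long, Double> winCfg = new CacheConfiguration<>("slidingWindow");

// Entries are evicted automatically one minute after creation.
winCfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 1)));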

SQL documentation:
https://apacheignite-sql.readme.io/docs

BR,
Andrei

1/3/2020 3:57 PM, narges saleh пишет:

Hi All,

Why the section on on CEP and sliding windows was removed from  the 
recent ignite versions documentation? Is the application not being a 
focus of the software the only reason for the removal?


thanks
Narges


Re: Streamer and data loss

2020-01-07 Thread Andrei Aleksandrov

Hi,

Data that has not been flushed from a data streamer will be lost. A data
streamer works through some Ignite node, and if that node fails it can't
automatically continue through another one. So your application should
track that all data was loaded (wait for completion of the loading, catch
exceptions, check the cache sizes, etc.) and use another client for data
loading if the previous one failed.
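
A small sketch of that kind of tracking on the application side (the cache
name and data are placeholders):

try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache")) {
    for (long i = 0; i < 1_000_000; i++)
        streamer.addData(i, "value-" + i);

    // Block until everything buffered so far is written to the cache.
    streamer.flush();
}
catch (IgniteException e) {
    // Data that was buffered but not flushed may be lost here;
    // reload it from the source, possibly through another client.
}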


BR,
Andrei

1/6/2020 2:37 AM, narges saleh пишет:

Hi All,

Another question regarding ignite's streamer.
What happens to the data if the streamer node crashes before the 
buffer's content is flushed to the cache? Is the client responsible 
for making sure the data is persisted or ignite redirects the data to 
another node's streamer?


thanks.


Re: Ignite Persistence: Baseline Topology

2020-01-06 Thread Andrei Aleksandrov

Hi,

I guess that every data node should have the same data regions configured.
I checked that if you have, for example, 2 nodes with a persistence
region in the BLT, and then start a new node (that isn't part of the BLT)
with a new region and some cache in this new region, it will produce
the following exception:


[17:53:30,446][SEVERE][exchange-worker-#48][GridDhtPartitionsExchangeFuture] 
Failed to reinitialize local partitions (rebalancing will be stopped): 
GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=3, 
minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode 
[id=44c8ba83-4a4d-4b0e-b4b6-530a23b25d24, addrs=[0:0:0:0:0:0:0:1, 
10.0.1.1, 10.0.75.1, 127.0.0.1, 172.25.4.231, 192.168.244.113, 
192.168.56.1], 
sockAddrs=[LAPTOP-I5CE4BEI.mshome.net/192.168.244.113:47502, 
/192.168.56.1:47502, host.docker.internal/172.25.4.231:47502, 
LAPTOP-I5CE4BEI/10.0.75.1:47502, /0:0:0:0:0:0:0:1:47502, 
/10.0.1.1:47502, /127.0.0.1:47502], discPort=47502, order=3, intOrder=3, 
lastExchangeTime=1578322410223, loc=false, 
ver=2.7.2#20191202-sha1:2e9d1c89, isClient=false], topVer=3, 
nodeId8=f581f039, msg=Node joined: TcpDiscoveryNode 
[id=44c8ba83-4a4d-4b0e-b4b6-530a23b25d24, addrs=[0:0:0:0:0:0:0:1, 
10.0.1.1, 10.0.75.1, 127.0.0.1, 172.25.4.231, 192.168.244.113, 
192.168.56.1], 
sockAddrs=[LAPTOP-I5CE4BEI.mshome.net/192.168.244.113:47502, 
/192.168.56.1:47502, host.docker.internal/172.25.4.231:47502, 
LAPTOP-I5CE4BEI/10.0.75.1:47502, /0:0:0:0:0:0:0:1:47502, 
/10.0.1.1:47502, /127.0.0.1:47502], discPort=47502, order=3, intOrder=3, 
lastExchangeTime=1578322410223, loc=false, 
ver=2.7.2#20191202-sha1:2e9d1c89, isClient=false], type=NODE_JOINED, 
tstamp=1578322410400], nodeId=44c8ba83, evt=NODE_JOINED]
class org.apache.ignite.IgniteCheckedException: Requested DataRegion is 
not configured: 1GB_Region_Eviction
    at 
org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager.dataRegion(IgniteCacheDatabaseSharedManager.java:729)


BR,
Andrei

1/6/2020 2:52 PM, djm132 пишет:

You can also look at this topic, probably related to yours, which has a code sample:
http://apache-ignite-users.70518.x6.nabble.com/Embedded-ignite-and-baseline-upgrade-questions-td30822.html



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite with Spark Integration

2019-12-12 Thread Andrei Aleksandrov

Hi,
In Spark you can use the following options:

 * spark.driver.extraJavaOptions
 * spark.executor.extraJavaOptions

You can pass your Ignite JVM options there, like -DIGNITE_QUIET=false.
Generally, clients will be started on the executors during data loading or
data reading, but you can also start one on the driver side using
Ignition.start().
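
One caveat: Spark rejects -Xmx inside extraJavaOptions, so the heap size
itself is usually raised via the memory properties. A minimal sketch (the
sizes and app name are examples):

import org.apache.spark.SparkConf;

public class IgniteSparkConfExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
            .setAppName("ignite-spark-app")
            .set("spark.executor.memory", "4g") // executor JVM heap (-Xmx)
            .set("spark.driver.memory", "2g")   // driver heap; in client mode pass it to spark-submit instead
            // Other Ignite JVM flags go into the extraJavaOptions:
            .set("spark.executor.extraJavaOptions", "-DIGNITE_QUIET=false");
    }
}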


BR,
Andrei

12/12/2019 12:17 PM, datta пишет:

I stopped the client node on the machine where the Spark worker node was
running and started the Spark shell. It started an Ignite client node from within.

I have only one problem: how do I specify Ignite JVM options from Spark? It
is taking the default -Xms and -Xmx arguments, which are very small.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Manage offset of KafkaStreamer

2019-12-10 Thread Andrei Aleksandrov

Hi,

Generally, you can't do it using the Ignite KafkaStreamer API because it
isn't flexible enough to configure the KafkaConsumer used inside.

The underlying consumer supports it via
https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#seek(org.apache.kafka.common.TopicPartition,%20long)
but this can't be configured through KafkaStreamer. You can create a JIRA ticket for it.


However, you can try to configure it using Kafka scripts as described here:


https://stackoverflow.com/questions/29791268/how-to-change-start-offset-for-topic
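
As a code-level workaround, you can also bypass KafkaStreamer: manage a plain
KafkaConsumer yourself, seek to the desired offset and feed an
IgniteDataStreamer. A minimal sketch (topic, partition, offset, cache name
and deserializers are assumptions; the Duration-based poll assumes a recent
Kafka client):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SeekAndStream {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "ignite-loader");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (Ignite ignite = Ignition.start();
             KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             IgniteDataStreamer<String, String> streamer = ignite.dataStreamer("myCache")) {
            TopicPartition tp = new TopicPartition("myTopic", 0);
            consumer.assign(Collections.singletonList(tp));
            consumer.seek(tp, 12345L); // start from a specific offset

            while (true) { // the stop condition is up to the application
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1)))
                    streamer.addData(rec.key(), rec.value());
            }
        }
    }
}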

BR,
Andrei

12/3/2019 1:07 PM, ashishb888 пишет:

I want to start from a specific offset of a Kafka partition; is this possible
with KafkaStreamer?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite on-heap & off-heap caches

2019-12-02 Thread Andrei Aleksandrov

Hi,

No, heap and off-heap memory are different features.

*Heap* memory uses the -Xms and -Xmx options to allocate the memory used for
different operations and generally isn't used for data storage (unless you
enable on-heap caching). Java GC works with this memory.

*Off-heap* memory uses the *initial* and *max* size properties that should be
set in the data region configuration. This memory is used for data storage.
Java GC does not work with this memory.


You can read more here:

https://apacheignite.readme.io/docs/memory-architecture
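
A minimal sketch of both sides (the sizes are arbitrary examples):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class MemoryConfigExample {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // Off-heap data region: this is where cache entries live.
        storageCfg.getDefaultDataRegionConfiguration()
            .setInitialSize(256L * 1024 * 1024)   // 256 MB
            .setMaxSize(2L * 1024 * 1024 * 1024); // 2 GB

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg);

        // The heap is sized separately through the JVM, e.g. java -Xms1g -Xmx1g ...
        Ignition.start(cfg);
    }
}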

BR,
Andrei

11/28/2019 3:36 PM, ashishb888 пишет:

I have the following questions:

Do both on-heap and off-heap caches use memory from data regions (set via the
initial and max sizes of DataRegionConfiguration)?

Does Ignite use the heap provided to the application (-Xms and -Xmx) for cache
storage?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite AI learning resources

2019-12-02 Thread Andrei Aleksandrov

Hi,

I don't think any special resources exist. You can try to ask
your questions on the mailing lists and read the existing documentation:

1) Ignite user list - http://apache-ignite-users.70518.x6.nabble.com/
2) Ignite developers list -
http://apache-ignite-developers.2346864.n4.nabble.com/
3) Ignite documentation portal -
https://apacheignite.readme.io/docs/getting-started

There are also a lot of different articles on the internet.

BR,
Andrei

12/2/2019 11:47 AM, joseheitor пишет:

Hi,

Can anyone recommend some resources to learn the fundamentals of ML and DL,
and how to use these techniques in practical ways with the Apache Ignite AI
platform?

Thanks,
Jose



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cache data not being stored on server nodes

2019-12-02 Thread Andrei Aleksandrov

Hi,

Please read about expiration here:

https://apacheignite.readme.io/docs/expiry-policies

An expiry policy specifies the amount of time that must pass before an
entry is considered expired:


 * In-Memory Mode (data is stored solely in RAM): expired entries are
   *purged *from RAM completely.

 * Memory + Ignite persistence: expired entries are *removed *from both
   memory and disk tiers. Note that expiry policies will remove entries
   from the partition files on disk without freeing up space. The space
   will be reused to write subsequent entries.

 * Memory + 3rd party persistence: expired entries are *removed* from
   the memory tier only (Ignite) and left untouched in the 3rd party
   persistence (RDBMS, NoSQL, and other databases).

 * Memory + Swap: expired entries are *removed *from both RAM and swap
   files.

So it's expected that the entry will be removed after expiration.
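
For example, a minimal sketch of attaching an expiry policy to a cache in
Java (the cache name and the 60-second duration are assumptions):

import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");

// Entries are considered expired (and purged per the rules above)
// 60 seconds after they are created.
ccfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 60)));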

BR,
Andrei
12/2/2019 7:39 AM, swattal пишет:

I have recently started using Apache Ignite for my application and have
questions about where the data gets stored in the cache. I have two nodes
which act as cache servers and another node which acts as a client. I am
using the Zookeeper discovery SPI for discovery. While putting data in the
cache, my server nodes get the entry-creation events, but on expiration I
would assume that both key and value are present in my server-side nodes'
CacheExpiredListener. The expiration listener is invoked with the right key
but with the value being null. This makes me believe that the put call on
the client just gets limited to caching on the client side and doesn't send
entries to the server cache nodes. Is there a config setting that I am missing?

Thanks,
Sumit



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Blocked Worker Thread

2019-11-08 Thread Andrei Aleksandrov

Hi Conrad,

The reasons can be different. Could you please share the logs?

BR,
Andrei

11/7/2019 10:35 PM, Conrad Mukai (cmukai) пишет:


We are running a cache in a 4 node cluster with atomicityMode set to 
ATOMIC and have persistence enabled. We repeatedly get a 
SYSTEM_WORKER_BLOCKED error on one node which is disabling the entire 
cluster. We were seeing a lot of sockets in TIME_WAIT state which was 
blocking clients from connecting so we did the following on all the nodes:


# ignore TIME_WAIT state on sockets
echo "1" > /proc/sys/net/ipv4/tcp_tw_reuse
echo "1" > /proc/sys/net/ipv4/tcp_tw_recycle

This made that issue go away, but may play a part in this new issue. 
First question is what is the root cause of the error? The second 
question is why does this bring down the entire cluster?


Here is the error message:

[2019-11-07 16:36:22,037][ERROR][tcp-disco-msg-worker-#2][root] 
Critical system error detected. Will be handled accordingly to 
configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, 
timeout=0, super=AbstractFailureHandler 
[ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED, 
SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext 
[type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: 
GridWorker [name=partition-exchanger, igniteInstanceName=null, 
finished=false, heartbeatTs=1573090509428]]]


class org.apache.ignite.IgniteException: GridWorker 
[name=partition-exchanger, igniteInstanceName=null, finished=false, 
heartbeatTs=1573090509428]


    at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1831)


    at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1826)


    at 
org.apache.ignite.internal.worker.WorkersRegistry.onIdle(WorkersRegistry.java:233)


    at 
org.apache.ignite.internal.util.worker.GridWorker.onIdle(GridWorker.java:297)


    at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.lambda$new$0(ServerImpl.java:2663)


    at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorker.body(ServerImpl.java:7181)


    at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2700)


    at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)


    at 
org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:7119)


    at 
org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)


This is followed by a warning and a thread dump:

[2019-11-07 16:36:22,038][WARN 
][tcp-disco-msg-worker-#2][FailureProcessor] No deadlocked threads 
detected.


[2019-11-07 16:36:22,328][WARN 
][tcp-disco-msg-worker-#2][FailureProcessor] Thread dump at 2019/11/07 
16:36:22 GMT


For the particular thread in the error and warning messages here is 
the thread dump:


Thread [name="tcp-disco-msg-worker-#2", id=113, state=RUNNABLE, 
blockCnt=211, waitCnt=4745368]


    at sun.management.ThreadImpl.dumpThreads0(Native Method)

    at sun.management.ThreadImpl.dumpAllThreads(ThreadImpl.java:454)

    at o.a.i.i.util.IgniteUtils.dumpThreads(IgniteUtils.java:1368)

    at 
o.a.i.i.processors.failure.FailureProcessor.process(FailureProcessor.java:128)


    - locked o.a.i.i.processors.failure.FailureProcessor@7e65ceba

    at 
o.a.i.i.processors.failure.FailureProcessor.process(FailureProcessor.java:104)


    at 
o.a.i.i.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1829)


    at 
o.a.i.i.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1826)


    at o.a.i.i.worker.WorkersRegistry.onIdle(WorkersRegistry.java:233)

    at o.a.i.i.util.worker.GridWorker.onIdle(GridWorker.java:297)

    at 
o.a.i.spi.discovery.tcp.ServerImpl$RingMessageWorker.lambda$new$0(ServerImpl.java:2663)


    at 
o.a.i.spi.discovery.tcp.ServerImpl$RingMessageWorker$$Lambda$47/1047515321.run(Unknown 
Source)


    at 
o.a.i.spi.discovery.tcp.ServerImpl$MessageWorker.body(ServerImpl.java:7181)


    at 
o.a.i.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2700)


    at o.a.i.i.util.worker.GridWorker.run(GridWorker.java:120)

    at 
o.a.i.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:7119)


    at o.a.i.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)

In addition all the system threads are in TIMED_WAITING state:

Thread [name="sys-#7099", id=9252, state=TIMED_WAITING, blockCnt=0, 
waitCnt=1]


    Lock 
[object=java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@677ec573, 
ownerName=null, ownerId=-1]


    at sun.misc.Unsafe.park(Native Method)

    at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)


    at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)


    at 

Re: Kubernetes- Failed to retrieve Ignite pods IP addresses.

2019-11-07 Thread Andrei Aleksandrov

Hi,

Using the search, I found that the current issue was resolved in the following thread:

http://apache-ignite-users.70518.x6.nabble.com/Ignite-on-RBAC-enabled-K8s-cluster-td22165.html

I guess that you should also read the following article:

https://apacheignite.readme.io/docs/rbac-authorization

BR,
Andrei

11/7/2019 2:27 PM, Gokulnath Chidambaram пишет:

Hello,

I am trying to install apache ignite:2.7.6 in a kubernetes cluster
(deployed in AWS).

I created:

1. service account
2. role access
3. rolebinding
4. deployment
5. added namespace (bean property) in the xml configuration file.

I am getting the following error message.

11:18:39,724][INFO][main][PartitionsEvictManager] Evict partition 
permits=2
[11:18:44,719][INFO][main][ClientListenerProcessor] Client connector 
processor has started on TCP port 10800
[11:18:46,418][INFO][main][GridTcpRestProtocol] Command protocol 
successfully started [name=TCP binary, host=0.0.0.0/0.0.0.0 
, port=11211]
[11:18:48,617][WARNING][jvm-pause-detector-worker][IgniteKernal] 
Possible too long JVM pause: 850 milliseconds.
[11:18:52,420][INFO][main][GridJettyRestProtocol] Command protocol 
successfully started [name=Jetty REST, host=/0.0.0.0 , 
port=8080]
[11:18:53,721][INFO][main][IgniteKernal] Non-loopback local IPs: 
10.42.5.132

[11:18:53,721][INFO][main][IgniteKernal] Enabled local MACs: 363E412C208F
[11:18:54,625][INFO][main][TcpDiscoverySpi] Connection check threshold 
is calculated: 1
[11:18:55,024][INFO][main][TcpDiscoverySpi] Successfully bound to TCP 
port [port=47500, localHost=0.0.0.0/0.0.0.0 , 
locNodeId=6bc33d41-6dba-422b-832d-ed8cae326a00]
[11:19:03,423][SEVERE][main][TcpDiscoverySpi] Failed to get registered 
addresses from IP finder on start (retrying every 2000ms; change 
'reconnectDelay' to configure the frequency of retries).
class org.apache.ignite.spi.IgniteSpiException: Failed to retrieve 
Ignite pods IP addresses.
at 
org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:172) 

at 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1900) 

at 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.resolvedAddresses(TcpDiscoverySpi.java:1848) 

at 
org.apache.ignite.spi.discovery.tcp.ServerImpl.sendJoinRequestMessage(ServerImpl.java:1049) 

at 
org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:910) 

at 
org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:391) 

at 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2020) 

at 
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297) 

at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:939) 

at 
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1682) 


at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1066)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038) 

at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1730) 


at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1158)
at 
org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1076) 


at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:962)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:861)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:731)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:700)
at org.apache.ignite.Ignition.start(Ignition.java:348)
at 
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:301) 

Caused by: java.io.IOException: Server returned HTTP response code: 
403 for URL: 
https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/dataobjns/endpoints/ignite 

at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1894) 

at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492) 

at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:263) 

at 
org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:153) 


... 20 more
[11:19:15,821][WARNING][jvm-pause-detector-worker][IgniteKernal] 
Possible too long JVM pause: 500 milliseconds.


any help is appreciated.


Re: apache ignite installation fails in kubernetes

2019-11-06 Thread Andrei Aleksandrov

Hi,

As far as I know, this was fixed in Apache Ignite 2.7.6. As an option, you
can also try to modify your current Dockerfile to match the master version:


https://github.com/apache/ignite/blob/master/docker/apache-ignite/Dockerfile 



BR,
Andrei

11/6/2019 10:33 AM, Gokulnath Chidambaram пишет:

Hello,

I am trying to run Apache Ignite 2.5.0 inside a kubernetes
cluster. My organization's security policy doesn't allow running as
'root' inside any container. I tried to add a security context
(runAsNonRoot) in the kubernetes yaml file. I always get the
following error.


cp: can't create '/opt/ignite/apache-ignite-fabric/libs/README.txt': 
File exists
cp: can't create 
'/opt/ignite/apache-ignite-fabric/libs/ignite-kubernetes-2.5.0.jar': 
Permission denied
cp: can't create 
'/opt/ignite/apache-ignite-fabric/libs/jackson-core-asl-1.9.13.jar': 
Permission denied
cp: can't create 
'/opt/ignite/apache-ignite-fabric/libs/jackson-mapper-asl-1.9.13.jar': 
Permission denied
cp: can't create 
'/opt/ignite/apache-ignite-fabric/libs/licenses/apache-2.0.txt': File 
exists
cp: can't create 
'/opt/ignite/apache-ignite-fabric/libs/licenses/ignite-kubernetes-licenses.txt': 
Permission denied


Any insights is appreciated.

Thanks


Re: Ignite YARN deployment - how to use TCP IP Discovery?

2019-11-06 Thread Andrei Aleksandrov

Hi,

I guess you can collect the IP addresses of all nodes managed by YARN
(where you are going to start Ignite) and add them all to the addresses
list of the TcpDiscoveryVmIpFinder in the Ignite configuration.

You should also guarantee that all such hosts can connect to each other.
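
A minimal sketch (the addresses are placeholders for the hosts you collect):

import java.util.Arrays;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();

// List every host (with the discovery port range) that YARN may allocate Ignite to.
ipFinder.setAddresses(Arrays.asList("10.0.0.1:47500..47509", "10.0.0.2:47500..47509"));

IgniteConfiguration cfg = new IgniteConfiguration()
    .setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));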


BR,
Andrei

11/5/2019 9:15 PM, Seshan, Manoj N. (TR Tech, Content & Ops) пишет:


We are using Ignite as a Distributed In-Memory cache, deployed using 
YARN on a Hadoop Cluster.  We have configured Zookeeper Discovery, and 
this is working fine.


Given this is a small 20 node Ignite cluster, Zookeeper Discovery 
seems overkill. Would it be possible to switch to TCP Discovery? 
Multicast Finding is not an option, as that is disabled. Static IP 
Finding would also not work, as the Ignite Containers are dynamically 
allocated by YARN to arbitrary nodes of the Hadoop Cluster.


Rgds

*Manoj Seshan - Senior Architect*

Platform Content Technology, Bangalore


*Voice:*+91-98806 72987  +91-80-67492572



Re: Not able to start second server node due to authentication failure

2019-11-06 Thread Andrei Aleksandrov

Hi,

It's correct that SecurityContext is null in your case:

    SecurityContext subj = spi.nodeAuth.authenticateNode(node, cred);

    if (subj == null) {
        // Node has not pass authentication.
        LT.warn(log, "Authentication failed [nodeId=" + node.id() +
            ", addrs=" + U.addressesAsString(node) + ']');

This subject should be returned from the security processor (here spi is
the DiscoverySpi):


    spi.setAuthenticator(new DiscoverySpiNodeAuthenticator() {
    @Override public SecurityContext 
authenticateNode(ClusterNode node, SecurityCredentials cred) {

    try {
    return ctx.security().authenticateNode(node, cred);
    }
    catch (IgniteCheckedException e) {
    throw U.convertException(e);
    }
    }

    @Override public boolean isGlobalNodeAuthentication() {
    return ctx.security().isGlobalNodeAuthentication();
    }
    });

The subject comes from the ctx.security().authenticateNode(node, cred) method.

But there is no security processor by default in Ignite. It looks like
you should implement your own authenticator and set it via the
DiscoverySpi.setAuthenticator method:


https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/discovery/DiscoverySpi.html#setAuthenticator-org.apache.ignite.spi.discovery.DiscoverySpiNodeAuthenticator-

BR,
Andei

11/6/2019 1:26 PM, Sankar Ramiah пишет:
I have implemented custom authentication and authorization through a plugin:

public class MyPlugin implements GridSecurityProcessor, IgnitePlugin {

I implemented the authenticateNode method, which bypasses authentication
for server nodes and returns a security context instance. validateNode
always returns null. When I start the second server node,
authenticateNode is invoked and it goes through the code which bypasses
authentication, but the startup fails after that with an Authentication
Failed error. validateNode doesn't seem to be invoked.
ERROR: org.apache.ignite.internal.IgniteKernal - Got exception while 
starting (will rollback startup routine). 
org.apache.ignite.IgniteCheckedException: Failed to start manager: 
GridManagerAdapter [enabled=true, 
name=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager] 
at 
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1687) 
~[ignite-core-2.7.0.jar!/:2.7.0] at 
org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1066) 
[ignite-core-2.7.0.jar!/:2.7.0] at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038) 
[ignite-core-2.7.0.jar!/:2.7.0] Caused by: 
org.apache.ignite.IgniteCheckedException: Failed to start SPI: 
TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackTimeout=5000, 
marsh=JdkMarshaller 
[clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1@5b51df3f], 
reconCnt=10, reconDelay=2000, maxAckTimeout=60, 
forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null] 
at 
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:300) 
~[ignite-core-2.7.0.jar!/:2.7.0] at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:939) 
~[ignite-core-2.7.0.jar!/:2.7.0] at 
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1682) 
~[ignite-core-2.7.0.jar!/:2.7.0] ... 66 more Caused by: 
org.apache.ignite.spi.IgniteSpiException: Authentication failed 
[nodeId=e3ab993e-0acf-4e55-86a7-473989e0fdca, addr=0.0.0.0] at 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.authenticationFailedError(TcpDiscoverySpi.java:1935) 
~[ignite-core-2.7.0.jar!/:2.7.0] at 
org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:967) 
~[ignite-core-2.7.0.jar!/:2.7.0] at 
org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:391) 
~[ignite-core-2.7.0.jar!/:2.7.0] at 
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2020) 
~[ignite-core-2.7.0.jar!/:2.7.0] at 
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297) 
~[ignite-core-2.7.0.jar!/:2.7.0]



I have spent quite some time with this error. The first node starts
without any issues. Multiple servers start fine without the security
plugin in place. Any help in this regard would be highly appreciated.
Thanks.


Sent from the Apache Ignite Users mailing list archive 
 at Nabble.com.


Re: Get key or cache's updation time?

2019-10-11 Thread Andrei Aleksandrov

Hi,

I guess that you can use CacheEntry to check that the new version of an entry
is different from the previous one. You can see an example here:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/CacheEntry.html

If you want to monitor cache updates, try to use Events (but they can cause
a performance drop):


https://apacheignite.readme.io/docs/events
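
A minimal sketch of the events approach (assumes an Ignite instance named
"ignite"; the event type must first be enabled via
IgniteConfiguration.setIncludeEventTypes(...), and the listener below is
local to one node):

import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.EventType;

ignite.events().localListen(evt -> {
    CacheEvent cacheEvt = (CacheEvent) evt;
    // Record your own "last updated" timestamp per key here.
    System.out.println("Key " + cacheEvt.key() + " updated at " + System.currentTimeMillis());
    return true; // keep listening
}, EventType.EVT_CACHE_OBJECT_PUT);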

BR,
Andrei

10/11/2019 4:08 PM, SidP пишет:

Is there a way to know a key's and/or a cache's update time?

I want to check whether a key or cache was updated in the last 10 seconds.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Authorizing thin clients

2019-10-02 Thread Andrei Aleksandrov

Hi Kurt,

Unfortunately, out of the box Ignite provides only simple username/password
authentication for thin clients.


You can read more about it here:

https://apacheignite.readme.io/docs/advanced-security

BR,
Andrei

10/1/2019 3:08 PM, Kurt Semba пишет:


Hi all,

is there a way to define which SQL statements a thin client is allowed
to execute (some users don’t need the rights to drop / create / alter
tables, etc.)? Or which SQL tables that client is allowed to query?


Thanks

Kurt



Re: Spark setup

2019-09-23 Thread Andrei Aleksandrov

Hi,

Ignite 2.7.5 requires Spark version 2.3.x.

You should start separate Spark and Ignite clusters:

https://apacheignite.readme.io/docs/getting-started
https://spark.apache.org/docs/2.3.0/spark-standalone.html

After that, you should add all required Ignite libraries to your
driver and executor classpaths. You can copy all Ignite jars to
every Spark node and add them to the Spark classpath, or use the following
script to submit your Spark job:


LIBS_DIR=$1
EXAMPLE_CLASS=$2
PATH_TO_JAR=$3
JARS=$(find $LIBS_DIR -name '*.jar')
EXECUTOR_PATH=""
for eachjarinlib in $JARS ; do
    if [ "$eachjarinlib" != "ABCDEFGHIJKLMNOPQRSTUVWXYZ.JAR" ]; then
        EXECUTOR_PATH=file:$eachjarinlib:$EXECUTOR_PATH
    fi
done
spark-submit --deploy-mode client --master \
spark://andrei-ThinkPad-P51s:7077 --conf \
"spark.driver.extraClassPath=$EXECUTOR_PATH" --conf \
"spark.executor.extraClassPath=$EXECUTOR_PATH" --class $EXAMPLE_CLASS \
$PATH_TO_JAR $4 $5 $6 $7


The libs can be collected in one place, if you use a Maven project, with the
maven-dependency-plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <version>3.1.1</version>
    <executions>
        <execution>
            <id>copy-sources</id>
            <phase>package</phase>
            <goals>
                <goal>copy-dependencies</goal>
            </goals>
            <configuration>
                <outputDirectory>target/libs</outputDirectory>
                <overWriteReleases>false</overWriteReleases>
                <overWriteSnapshots>false</overWriteSnapshots>
                <overWriteIfNewer>true</overWriteIfNewer>
            </configuration>
        </execution>
    </executions>
</plugin>

After that, your start command should look like this:

bash run_example.sh ./target/libs/* com.some.your.ClassName
./target/your.jar client.xml


You can see a Spark job example here (the code from this link can be
used with Ignite as well):


https://docs.gridgain.com/docs/cross-database-queries

BR,
Andrei

9/22/2019 8:36 PM, George Davies пишет:
I already have a standalone Ignite cluster running on k8s and can
run SQL statements against it fine.

Part of the requirements of the system I am building is to perform
v-pivots on the query result set.

I've seen Spark come up as a good solution for v-pivots, so I'm
trying to set up a simple master + executor cluster.

I have added all the Ignite libs to the classpath per the docs, but
when I attempt to launch the master I get the error:

Error SparkUncaughtExceptionHandler:91 - Uncaught Exception in thread
Thread[main,5,main]

java.io.IOException: failure to login
Caused by: javax.security.auth.login.LoginException:
java.lang.NullPointerException: invalid null input: name

Any pointers on what I am doing incorrectly? I don't have a separate
HDFS cluster to log in to; I just want to use Spark over the Ignite
caches.






Re: Authentication

2019-09-17 Thread Andrei Aleksandrov

Hi Kurt,

Yes, you can create new users via SQL as mentioned here:

https://apacheignite-sql.readme.io/docs/create-user
https://apacheignite-sql.readme.io/docs/alter-user
https://apacheignite-sql.readme.io/docs/drop-user

By default, the user "ignite" with the password "ignite" will be created.
This SQL can also be executed in Java via
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/query/SqlFieldsQuery.html
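
A minimal sketch of creating a user over JDBC (the credentials are examples;
authentication and persistence must already be enabled on the cluster):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateUserExample {
    public static void main(String[] args) throws Exception {
        // Connect as the default user and create a new one.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:ignite:thin://127.0.0.1?user=ignite&password=ignite");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("CREATE USER test WITH PASSWORD 'test'");
        }
    }
}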


To provide auth for:

1) JDBC: just add "user" and "password" parameters to jdbc connection 
string:


https://apacheignite-sql.readme.io/docs/jdbc-driver#section-parameters

2)Thick java nodes: Implement your own plugin for security

I see the answer from Evgenii here about it:

https://stackoverflow.com/questions/46150920/custom-security-plugin-for-apache-ignite

BR,
Andrei

9/16/2019 1:26 PM, Kurt Semba пишет:


Hi Andrei,

good to know – thank you.

So we need to distinguish between auth for

 1. thin clients like JDBC clients and
 2. thick clients (Java client that wants to join the cluster (as
server or client))

I will look at GridSecurityProcessor for item 2, but in the meantime: I
saw the CREATE command to create new SQL users on a freshly started
cluster. How would you execute that using Java code? Would the app
need to start the cluster, then use the Ignite JDBC driver to connect
to the (PUBLIC) schema of that cluster, then run the CREATE SQL
command, and then exit?


Kurt

*From:*Andrei Aleksandrov 
*Sent:* Monday, September 16, 2019 12:13 PM
*To:* user@ignite.apache.org
*Subject:* Re: Authentication

*External Email:*Use caution in opening links or attachments.

Hi,

I guess that Ignite has a documentation gap here. Advanced security
out of the box will work only with thin connections like Web Console,
ODBC/JDBC, etc.

To get cluster node authentication, you should add a GridSecurityProcessor
implementation:


https://apacheignite.readme.io/docs/advanced-security#section-enable-authentication


I created a ticket on the documentation:

https://issues.apache.org/jira/browse/IGNITE-12170


BR,
Andrei

9/16/2019 10:43 AM, Kurt Semba пишет:

Hi all,

I used the web-console to auto-generate some code and then
extended the ServerNodeCodeStartup.java class according to the
documentation to enable authentication (which requires to enable
persistence) like this:

public static void main(String[] args) throws Exception {

    IgniteConfiguration cfg = ServerConfigurationFactory.createConfiguration();

    // Ignite persistence configuration.
    DataStorageConfiguration storageCfg = new DataStorageConfiguration();

    // Enabling the persistence.
    storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

    // Applying settings.
    cfg.setDataStorageConfiguration(storageCfg);

    // Enable authentication
    cfg.setAuthenticationEnabled(true);

    Ignite ignite = Ignition.start(cfg);

    // Activate the cluster.
    // This is required only if the cluster is still inactive.
    ignite.cluster().active(true);

    // Get all server nodes that are already up and running.
    Collection<ClusterNode> nodes = ignite.cluster().forServers().nodes();

    // Set the baseline topology that is represented by these nodes.
    ignite.cluster().setBaselineTopology(nodes);
}

But when I run this, the output shows “authentication=off” and I
can also connect a client without providing any user+pass…

[…]
[08:57:13] Security status [authentication=off, tls/ssl=off]
[…]
[08:57:16] Ignite node started OK (id=1f668071, instance name=ImportedCluster6)
[08:57:16] Topology snapshot [ver=1, locNode=1f668071, servers=1,
clients=0, state=INACTIVE, CPUs=4, offheap=2.3GB, heap=2.6GB]
[08:57:16]   ^-- Baseline [id=0, size=1, online=1, offline=0]
[08:57:16]   ^-- All baseline nodes are online, will start auto-activation
[08:57:16] Ignite node stopped in the middle of checkpoint. Will restore
memory state and finish checkpoint on node start.
[08:57:16] Both Ignite native persistence and CacheStore are configured for
cache 'NsdevicesCache'. This configuration does not guarantee strict
consistency between CacheStore and Ignite data storage upon restarts.
Consult documentation for more details.

Re: Authentication

2019-09-16 Thread Andrei Aleksandrov

Hi,

I guess that Ignite has a documentation gap here. Advanced security
out of the box will work only with thin connections like Web Console,
ODBC/JDBC, etc.

To get cluster node authentication, you should add a GridSecurityProcessor
implementation:

https://apacheignite.readme.io/docs/advanced-security#section-enable-authentication

I created a ticket on the documentation:

https://issues.apache.org/jira/browse/IGNITE-12170

BR,
Andrei

9/16/2019 10:43 AM, Kurt Semba пишет:


Hi all,

I used the web-console to auto-generate some code and then extended
the ServerNodeCodeStartup.java class according to the documentation to
enable authentication (which requires enabling persistence) like this:


public static void main(String[] args) throws Exception {

    IgniteConfiguration cfg = ServerConfigurationFactory.createConfiguration();

    // Ignite persistence configuration.
    DataStorageConfiguration storageCfg = new DataStorageConfiguration();

    // Enabling the persistence.
    storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

    // Applying settings.
    cfg.setDataStorageConfiguration(storageCfg);

    // Enable authentication
    cfg.setAuthenticationEnabled(true);

    Ignite ignite = Ignition.start(cfg);

    // Activate the cluster.
    // This is required only if the cluster is still inactive.
    ignite.cluster().active(true);

    // Get all server nodes that are already up and running.
    Collection<ClusterNode> nodes = ignite.cluster().forServers().nodes();

    // Set the baseline topology that is represented by these nodes.
    ignite.cluster().setBaselineTopology(nodes);
}

But when I run this, the output shows “authentication=off” and I can 
also connect a client without providing any user+pass…


[…]
[08:57:13] Security status [authentication=off, tls/ssl=off]
[…]
[08:57:16] Ignite node started OK (id=1f668071, instance name=ImportedCluster6)
[08:57:16] Topology snapshot [ver=1, locNode=1f668071, servers=1,
clients=0, state=INACTIVE, CPUs=4, offheap=2.3GB, heap=2.6GB]
[08:57:16]   ^-- Baseline [id=0, size=1, online=1, offline=0]
[08:57:16]   ^-- All baseline nodes are online, will start auto-activation
[08:57:16] Ignite node stopped in the middle of checkpoint. Will restore
memory state and finish checkpoint on node start.
[08:57:16] Both Ignite native persistence and CacheStore are configured for
cache 'NsdevicesCache'. This configuration does not guarantee strict
consistency between CacheStore and Ignite data storage upon restarts.
Consult documentation for more details.


Any idea what I’m doing wrong?

I will also look into enabling TLS but wanted to start with user+pass 
auth.


Thanks

Kurt



Re: Slowness in initial data load using COPY command

2019-09-12 Thread Andrei Aleksandrov

Hi,

You can try to investigate your cluster state first:

1) First of all, take a look at your logs and look for long JVM pauses
there. Also, you can collect the GC log and check the memory usage. It is
possible that the memory provided for your node isn't enough because it
was used for something else.

In case of high memory usage, you can take a look at a heap dump. It will
show what takes your memory.

Also, in case of long pauses, you can review the JVM GC configuration.
Check that you use at least G1 and set the required GC pause target.

2) If you store your WAL on some network storage, then please check that
there were no connectivity issues that could have blocked the WAL copying.

3) Check that there are no connectivity issues between different nodes.
You may see messages about NODE_FAILED on your server node. If you see
them, that should be investigated.

4) Finally, think about disabling the WAL during the initial data loading
and using IgniteDataStreamer via a Java thick client.

If nothing helps, then try to provide the logs.

Regarding how CSV data can be loaded:

1) Using Spark integration:

https://www.gridgain.com/resources/blog/apacher-ignitetm-and-apacher-sparktm-integration-using-ignite-rdds

2) Using Java code (some logic to read the CSV) + IgniteDataStreamer
(see the sketch after this list):

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteDataStreamer.html

3) Using Kafka integration:

https://apacheignite-mix.readme.io/docs/kafka-streamer
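
For option 2, a minimal sketch (the config file, cache name and two-column
CSV layout are assumptions):

import java.io.BufferedReader;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class CsvLoad {
    public static void main(String[] args) throws Exception {
        try (Ignite ignite = Ignition.start("client.xml");
             IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache");
             BufferedReader reader = Files.newBufferedReader(Paths.get("data.csv"))) {
            // Optionally pause the WAL for the bulk load and re-enable it afterwards:
            // ignite.cluster().disableWal("myCache");

            String line;
            while ((line = reader.readLine()) != null) {
                String[] cols = line.split(",");
                streamer.addData(Long.parseLong(cols[0]), cols[1]);
            }
        }
    }
}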

BR,
Andrei

9/12/2019 3:20 PM, Muhammed Favas пишет:


Thanks Oleg,

I need to have persistence enabled for my system.

I believe due to the locking/wait issue, the data load is slowing down.

Can anyone suggest a solution on how to avoid this situation and make 
data load faster?


Note: I am using the COPY command to load data into the table from CSV.

*Regards,*

*Favas ***

*From:* Oleg Popov 
*Sent:* Thursday, September 12, 2019 4:09 PM
*To:* user 
*Subject:* Re: Slowness in initial data load using COPY command

Hello. I see the same thing when I move data from MariaDB to Apache
Ignite (putAll).

I have tens of databases, but such things happen for a 100k-row
database and for 400k+ row databases as well.

I tried to:

1. Increase threads.

2. Disable WAL/persistence.

3. Put WAL/data on an SSD.

Nothing helped. And now I have to completely disable the failure handler:

<property name="failureHandler">
    <bean class="org.apache.ignite.failure.NoOpFailureHandler"/>
</property>



*From: *"Muhammed Favas" >

*To: *"user" mailto:user@ignite.apache.org>>
*Sent: *Thursday, September 12, 2019 1:23:43 PM
*Subject: *Slowness in initial data load using COPY command

Hi,

I was trying to load data from CSV files (each file contains 5 million
rows, which is approx. 4 GB of data) using the COPY command.

In the initial stage the loading was quite fast, but later the load
process started slowing down, showing hardly 1% of CPU usage. My
cluster has 5 nodes, each with an 8-core CPU and 32 GB RAM.

When I checked one node's log, I saw some severe messages like the ones
given below.

Can someone help me understand the error details given below and
how I can improve my data load speed?


[08:51:00,910][INFO][wal-file-archiver%null-#54][FileWriteAheadLogManager] 
Copied file 
[src=/data/apache-ignite-2.7.5-bin/work/db/wal/node00-18be5852-ed47-40a7-a256-ebbaa3376d39/0006.wal, 
dst=/data/apache-ignite-2.7.5-bin/work/db/wal/archive/node00-18be5852-ed47-40a7-a256-ebbaa3376d39/3066.wal]


[08:51:00,910][INFO][wal-file-archiver%null-#54][FileWriteAheadLogManager] 
Starting to copy WAL segment [absIdx=3067, segIdx=7, 
origFile=/data/apache-ignite-2.7.5-bin/work/db/wal/node00-18be5852-ed47-40a7-a256-ebbaa3376d39/0007.wal, 
dstFile=/data/apache-ignite-2.7.5-bin/work/db/wal/archive/node00-18be5852-ed47-40a7-a256-ebbaa3376d39/3067.wal]


[08:51:08,512][SEVERE][tcp-disco-msg-worker-#2][G] Blocked 
system-critical thread has been detected. This can lead to 
cluster-wide undefined behaviour [threadName=data-streamer-stripe-2, 
blockedFor=31s]


[08:51:08,512][WARNING][tcp-disco-msg-worker-#2][G] Thread 
[name="data-streamer-stripe-2-#11", id=24, state=WAITING, blockCnt=0, 
waitCnt=81018]


Lock 
[object=java.util.concurrent.locks.ReentrantLock$NonfairSync@23aede76, 
ownerName=data-streamer-stripe-4-#13, ownerId=26]


[08:51:08,512][SEVERE][tcp-disco-msg-worker-#2][] Critical system 
error detected. Will be handled accordingly to configured handler 
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, 
super=AbstractFailureHandler 
[ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED, 
SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext 
[type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: 
GridWorker [name=data-streamer-stripe-2, igniteInstanceName=null, 
finished=false, 

Re: Users named "ignite" cannot change their password?

2019-09-11 Thread Andrei Aleksandrov

Hi,

In addition to Ilya's answer, please read the following page:

https://apacheignite-sql.readme.io/docs/create-user#section-description

BR,
Andrei

9/4/2019 11:47 AM, Ilya Kasnacheev пишет:

Hello!

User names are case sensitive and bound by SQL rules, i.e.:

ALTER USER "ignite" WITH PASSWORD 'test123';

Regards,
--
Ilya Kasnacheev


Wed, Sep 4, 2019 at 05:14, liyuj <18624049...@163.com>:


Hi,

Execute the following statement:

ALTER USER ignite WITH PASSWORD 'test123';

The error message is as follows:

SQL error [1] [5]: Operation failed
[nodeId=88b03674-04a4-44cb-bd42-8f2ed1e980ff,
opId=5b656f9fc61-7cd6fa68-ee67-49d4-aee8-60958f5584af, err=class

org.apache.ignite.internal.processors.authentication.UserManagementException:

User doesn't exist [userName=IGNITE]]

The password of other users can be changed.

jdk1.8.0,gridgain-community-8.7.6



Re: Altered sql table (adding new columns) does not reflect in Spark shell

2019-09-10 Thread Andrei Aleksandrov

Hi,

Yes, I can confirm that this is the issue. I filed the following ticket for it:

https://issues.apache.org/jira/browse/IGNITE-12159

BR,
Andrei

9/7/2019 10:00 PM, Shravya Nethula пишет:

Hi,

I created and altered the table using the following queries:

a. CREATE TABLE person (id LONG, name VARCHAR(64), age LONG, city_id
DOUBLE, zip_code LONG, PRIMARY KEY (name)) WITH "backups=1"
b. ALTER TABLE person ADD COLUMN (first_name VARCHAR(64), last_name
VARCHAR(64))

The changes (columns added by the above ALTER TABLE SQL) are correct
when verified from GridGain.

However, when I use the Spark shell, I can't find the columns added
through the ALTER TABLE SQL (query (b) above).
Is there any configuration that I am missing? (Attached the ignite-config
file for reference.)


Executed the following commands in the Spark shell:

Step 1: Connected to Spark shell:
/usr/hdp/2.6.5.1100-53/spark2/bin/spark-shell --jars 
/opt/jar/ignite-core-2.7.0.jar,/opt/jar/ignite-spark-2.7.0.jar,/opt/jar/ignite-spring-2.7.0.jar,"/opt/jar/commons-logging-1.1.3.jar","/opt/jar/spark-core_2.11-2.3.0.jar","/opt/jar/spring-core-4.3.18.RELEASE.jar","/opt/jar/spring-beans-4.3.18.RELEASE.jar","/opt/jar/spring-aop-4.3.18.RELEASE.jar","/opt/jar/spring-context-4.3.18.RELEASE.jar","/opt/jar/spring-tx-4.3.18.RELEASE.jar","/opt/jar/spring-jdbc-4.3.18.RELEASE.jar","/opt/jar/spring-expression-4.3.18.RELEASE.jar","/opt/jar/cache-api-1.0.0.jar","/opt/jar/annotations-13.0.jar","/opt/jar/ignite-shmem-1.0.0.jar","/opt/jar/ignite-indexing-2.7.0.jar","/opt/jar/lucene-analyzers-common-7.4.0.jar","/opt/jar/lucene-core-7.4.0.jar","/opt/jar/h2-1.4.197.jar","/opt/jar/commons-codec-1.11.jar","/opt/jar/lucene-queryparser-7.4.0.jar","/opt/jar/spark-sql_2.11-2.3.0.jar" 
--driver-memory 4g


Step 2: Ran the import statements:

import org.apache.ignite.{ Ignite, Ignition }

import org.apache.ignite.spark.IgniteDataFrameSettings._

import org.apache.spark.sql.{DataFrame, Row, SQLContext}

val CONFIG = "file:///opt/ignite-config.xml"

Step 3: Read a table

var df = spark.read.format(FORMAT_IGNITE).option(OPTION_CONFIG_FILE, 
CONFIG).option(OPTION_TABLE, "person").load()


df.show();





Regards,

Shravya Nethula,

BigData Developer,


Hyderabad.



Re: IgniteCache.invoke deadlock example

2019-09-09 Thread Andrei Aleksandrov

Hello,

When you use an entry processor, you lock only the provided key. So
when you try to work with *other keys* (different from the provided one)
that are being processed in other threads, a deadlock is possible: the
other thread can take the lock on these *other keys* and wait for the
provided one, while the entry processor waits for these *other keys*.
It's a typical deadlock.

I will not provide a full reproducer, but a minimal sketch of the scenario
follows below; I hope the explanation is clear.
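
To illustrate, a sketch of the lock-order inversion (the keys keyA/keyB and
the cache are hypothetical; the two invocations run in different threads):

// Assuming an IgniteCache<String, String> named cache.

// Thread 1: the entry processor for keyA also touches keyB.
cache.invoke("keyA", (entry, args) -> {
    cache.get("keyB"); // blocks if keyB's lock is held by thread 2
    return null;
});

// Thread 2, concurrently: the entry processor for keyB touches keyA.
cache.invoke("keyB", (entry, args) -> {
    cache.get("keyA"); // waits for keyA's lock held by thread 1 -> deadlock
    return null;
});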


BR,
Andrei

9/7/2019 6:31 PM, Evangelos Morakis пишет:


Dear igniters,

I would like to elicit your expert advice on how Ignite differentiates
between a call to:

1) IgniteCompute.affinityRun(...)
2) IgniteCache.invoke(...)

as far as deadlocks are concerned. According to the documentation,
the main difference is that method 2 above operates within a lock.
Specifically, the doc quotes:

"EntryProcessors are executed atomically within a lock on the given
cache key."

Now it even comes with a warning that is meant to show how it is
supposed to be used (or conversely NOT to be used):

"You should not access *other keys* from within the
EntryProcessor logic as it may cause a deadlock."

But to what kind of keys does this phrase "*other keys*" refer?
The remaining keys of the passed-in cache? For example:

Assume a persons cache:

Cache<String, Person> personsCache = ...

personsCache.invoke("personKey", new EntryProcessor<String, Person, Object>() {
    @Override public Object process(MutableEntry<String, Person> entry, Object... args) {
        Person person = entry.getValue();
        entry.setValue(person.setOccupation("foo"));
        return null;
    }
});
In other words, can someone provide an example, based on the above dummy
code, that would make invoke deadlock, so that I can understand what the
documentation refers to?


Thanks

Evangelos Morakis



Re: Cache expiry policy not deleting records from disk(native persistence)

2019-09-09 Thread Andrei Aleksandrov

Hello,

I guess that the generated WAL takes this disk space. Please read about
the WAL here:


https://apacheignite.readme.io/docs/write-ahead-log

Please provide the size of every folder under /opt/ignite/persistence.

BR,
Andrei

9/6/2019 9:45 PM, Shiva Kumar пишет:

Hi all,
I have set a cache expiry policy like this:

<property name="cacheConfiguration">
    <bean class="org.apache.ignite.configuration.CacheConfiguration">
        ...
        <property name="expiryPolicyFactory">
            <bean class="..." factory-method="factoryOf">
                <constructor-arg>
                    <bean class="...">
                        <constructor-arg value="..."/>
                        <constructor-arg value="..."/>
                    </bean>
                </constructor-arg>
            </bean>
        </property>
    </bean>
</property>


And I am batch-inserting records into one of the tables created with the
above cache template.
In around 10 minutes I ingested ~1.5 GB of data, and after 10 minutes
records started reducing (expiring) when I monitored from sqlline.


0: jdbc:ignite:thin://192.168.*.*:10800> select count(ID) from DIMENSIONS;


COUNT(ID)


248896

1 row selected (0.86 seconds)
0: jdbc:ignite:thin://192.168.*.*:10800> select count(ID) from DIMENSIONS;


COUNT(ID)


222174

1 row selected (0.313 seconds)
0: jdbc:ignite:thin://192.168.*.*:10800> select count(ID) from DIMENSIONS;


COUNT(ID)


118154

1 row selected (0.15 seconds)
0: jdbc:ignite:thin://192.168.*.*:10800>
0: jdbc:ignite:thin://192.168.*.*:10800> select count(ID) from DIMENSIONS;


COUNT(ID)


76061

1 row selected (0.106 seconds)
0: jdbc:ignite:thin://192.168.*.*:10800>
0: jdbc:ignite:thin://192.168.*.*:10800> select count(ID) from DIMENSIONS;


COUNT(ID)


41671

1 row selected (0.063 seconds)
0: jdbc:ignite:thin://192.168.*.*:10800> select count(ID) from DIMENSIONS;


COUNT(ID)


18455

1 row selected (0.037 seconds)
0: jdbc:ignite:thin://192.168.*.*:10800> select count(ID) from DIMENSIONS;


COUNT(ID)


0

1 row selected (0.014 seconds)


But in the meantime, the disk space used by the persistence store was 
in the same usage level instead of decreasing.



[ignite@ignite-cluster-ign-shiv-0 ignite]$ while true ; do df -h 
/opt/ignite/persistence/; sleep 1s; done

Filesystem Size Used Avail Use% Mounted on
/dev/vdj 15G 1.6G 14G 11% /opt/ignite/persistence
Filesystem Size Used Avail Use% Mounted on
/dev/vdj 15G 1.6G 14G 11% /opt/ignite/persistence
Filesystem Size Used Avail Use% Mounted on
/dev/vdj 15G 1.6G 14G 11% /opt/ignite/persistence
Filesystem Size Used Avail Use% Mounted on
/dev/vdj 15G 1.6G 14G 11% /opt/ignite/persistence
Filesystem Size Used Avail Use% Mounted on
/dev/vdj 15G 1.6G 14G 11% /opt/ignite/persistence
Filesystem Size Used Avail Use% Mounted on
/dev/vdj 15G 1.6G 14G 11% /opt/ignite/persistence
Filesystem Size Used Avail Use% Mounted on
/dev/vdj 15G 1.6G 14G 11% /opt/ignite/persistence



This means that the expiry policy is not deleting records from the disk,
but the Ignite documentation says that when an expiry policy is set and
native persistence is enabled, records are deleted from disk as well.

Am I missing some configuration?
Any help is appreciated.

Shiva


Re: Ignite ignores cache config when putting entries through near cache

2019-09-09 Thread Andrei Aleksandrov

Hi Bartłomiej,

Yes, it looks like a bug. Thank you for filing the JIRA ticket.

Possibly http://apache-ignite-developers.2346864.n4.nabble.com is a better
place to discuss product issues. You can start a thread there.


BR,
Andrei

9/6/2019 12:36 PM, Bartłomiej Stefański пишет:

Hi,
I have a problem with putting entries into a partitioned or replicated
cache through a near cache on a client node. Even when I configured the
cache on the server to put values into off-heap space, they are stored on heap.

I have already described it in JIRA:
https://issues.apache.org/jira/projects/IGNITE/issues/IGNITE-12142.
I'm also writing here - the mailing list seems to be more active.


Is it a bug in Ignite or a problem with configuration?

--
Bartłomiej Stefański


Re: Concurrent threads updating the same cache item

2019-09-05 Thread Andrei Aleksandrov

Hi,

When you start a compute task, it (and the code from it) will be
executed on every chosen server consistently in a single thread.

But if you broadcast the same task to several servers, then a race
between different tasks on different servers is possible. To avoid it,
you can use the transactions API (to provide atomicity of the
get-update-put operation) or distributed locks for the updated keys:


https://apacheignite.readme.io/docs/transactions
https://apacheignite.readme.io/docs/distributed-locks 
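
A minimal sketch of the transactional variant (assumes the cache was created
with CacheAtomicityMode.TRANSACTIONAL; the cache name, key and value types
are illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

IgniteCache<String, Integer> cache = ignite.cache("counters");

try (Transaction tx = ignite.transactions().txStart(
        TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
    // PESSIMISTIC + REPEATABLE_READ locks the key on read, so the
    // get-update-put sequence cannot interleave with another thread.
    Integer val = cache.get("key");
    cache.put("key", val == null ? 1 : val + 1);
    tx.commit();
}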



BR,
Andrei

9/5/2019 5:41 PM, Ari Erev пишет:


Hello,

This question is related to the question by “humenius”,  with subject: 
“Race condition and conflicts during cache modifications?” – but I 
believe it is a simpler case…


When code is run on an Ignite server node (such as from a distributed 
compute, or service)  – is all access to a specific object (object 
with a specific key) – done from one (the same) specific thread?


The reason I am asking is this:

Some examples of Ignite code on GitHub and the ones that are embedded 
in White Paper articles from GridGain contain the following conceptual 
code (incrementing a value in the cache).


My_Object  obj = cache.get(key);

  obj.increment_value();

 cache.put(key, obj);

If this code is executed concurrently from more than one thread, there 
is a risk for inconsistency, as the new/incremented value may 
overwrite a cached value which is already different than it was at the 
time of the cache.get().


If so, should such code be synchronized (use some sort of lock)?

Thanks,

Ari






Re: Common table expressions preserved as view.

2019-09-05 Thread Andrei Aleksandrov

Hi,

At the moment, SQL views and temporary tables (CTEs) aren't fully supported
in Ignite. The CTE syntax will just inline the sub-query into your SQL
request.


BR,
Andrei

9/5/2019 4:36 PM, kresimir.horvat пишет:

Hi, I noticed that some common table expressions defined in queries are left
preserved as views. The first time I noticed this was when I managed to run a
query after the CTE name was changed. It seems that in some cases they are kept
and can be referenced from different sessions (I can see them when I run a
select from INFORMATION_SCHEMA.VIEWS).
This raises a bit of concern whether other users can get data from another user
if queries are run in parallel, i.e. they will both read from the view.

In my few latest tests it seems that CTEs preserved as views are all created
when the query is run over REST.
Can someone, please, give me some explanation of how this is handled in Ignite?

Thanks in advance!
Kresimir



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Can Ignite transaction manage cached and non-cached data writes?

2019-09-03 Thread Andrei Aleksandrov

Hi,

By default, Ignite will handle only updates that were done using the Ignite
cache API.

I guess that you have already read the following article:

https://www.gridgain.com/resources/blog/apache-ignite-transactions-architecture-transaction-handling-level-3rd-party

BR,
Andrei

9/3/2019 2:03 PM, bijunathg пишет:

Hi,
Our application wants to do SQL queries and writes on some cached data
(partitioned) and at the same time update some other non-cached data in the
same transactional context. We do not want to cache everything, in order to
optimize the cache memory footprint.

The data store could be any RDBMS store. Both the cached data and non-cached
data are stored in the same DB schema (instance).

We can enable the write-through mode for the cached data so that Ignite will
directly write to the store.
Could any of you please advise on the best practices to manage such a
transaction? How will we write the non-cached data in the same transactional
context? Does Ignite provide any provision to write the non-cached data
to the underlying store within the same Ignite transaction?

Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Job Stealing node not stealing jobs

2019-09-03 Thread Andrei Aleksandrov

Hi,

Some remarks about the job stealing SPI:

1) You have some nodes that can process the tasks of some compute job.
2) Tasks will be executed in the public thread pool by default:
https://apacheignite.readme.io/docs/thread-pools#section-public-pool
3) If some node's thread pool is busy, then some tasks of the compute job
can be executed on another node.

In the following cases it will not work:

1) If you choose a specific node for your compute task.
2) If you do an affinity call (the same as above, but the node will be
chosen by affinity mapping).

Regarding your case:

It's not clear to me what exactly you are trying to do. Possibly job
stealing didn't work because your weak node did begin executing some tasks
in the public pool but simply took longer than the faster node.
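
For reference, a minimal sketch of what job stealing needs in the node
configuration (the thresholds are examples); both the collision SPI and the
failover SPI must be set on every node:

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.collision.jobstealing.JobStealingCollisionSpi;
import org.apache.ignite.spi.failover.jobstealing.JobStealingFailoverSpi;

JobStealingCollisionSpi collisionSpi = new JobStealingCollisionSpi();
collisionSpi.setActiveJobsThreshold(50); // jobs executed concurrently before queuing
collisionSpi.setWaitJobsThreshold(0);    // steal as soon as the wait queue is non-empty

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setCollisionSpi(collisionSpi);
cfg.setFailoverSpi(new JobStealingFailoverSpi()); // required for stealing to work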


Could you please share your full reproducer for investigation?

BR,
Andrei

9/3/2019 1:43 PM, Pascoe Scholle пишет:

HI there,

I have asked this question before, but under a different, already resolved
topic, so I am posting it under a more suitable title. I hope that's OK.

We have tried to configure two compute server nodes, one of which is
running on a weaker machine. The node running on the more powerful
machine always finishes its tasks far before

the weaker node and then sits idle.

The node is not even sending a steal request, so I must have
configured something wrong.

I have attached the code for both nodes; if you could kindly point out
what I am missing, I would really appreciate it!





Re: Cache spreading to new nodes

2019-08-12 Thread Andrei Aleksandrov

Hi,

Could you share the whole reproducer with all configurations and 
required methods?


BR,
Andrei

8/12/2019 4:48 PM, Marco Bernagozzi пишет:
I have a set of nodes, and I want to be able to set a cache on
specific nodes. It works, but whenever I turn on a new node the cache
is automatically spread to that node, which then causes errors like:
"Failed over job to a new node" (I guess that there was a computation
going on in a node that shouldn't have computed it, and was shut
down in the meantime).

I don't know if I'm doing something wrong here or I'm missing something.
As I understand it, NodeFilter and Affinity are equivalent in my case
(Affinity is a node filter which also creates rules on where the cache can
spread from a given node?). With the rebalance mode set to NONE,
shouldn't the cache be spread only on the "nodesForOptimization" nodes,
according to either the node filter or the affinityFunction?


Here's my code:

List<UUID> nodesForOptimization = fetchNodes();

CacheConfiguration graphCfg = new CacheConfiguration<>(graphCacheName);

graphCfg = graphCfg.setCacheMode(CacheMode.REPLICATED)
    .setBackups(nodesForOptimization.size() - 1)
    .setAtomicityMode(CacheAtomicityMode.ATOMIC)
    .setRebalanceMode(CacheRebalanceMode.NONE)
    .setStoreKeepBinary(true)
    .setCopyOnRead(false)
    .setOnheapCacheEnabled(false)
    .setNodeFilter(u -> nodesForOptimization.contains(u.id()))
    .setAffinity(
        new RendezvousAffinityFunction(
            1024,
            (c1, c2) -> nodesForOptimization.contains(c1.id()) && nodesForOptimization.contains(c2.id())
        )
    )
    .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);


Re: TimeoutException not wrapped in CacheException

2019-08-09 Thread Andrei Aleksandrov

Hi,

Sorry, it's my fault.

I thought that you got the TransactionTimeoutException from some
IgniteCache method, so it's not expected there.

From the commit method you can't get the CacheException, of course.

So I guess that you should handle only the exceptions mentioned in the
javadoc.


BR,
Andrei

8/9/2019 6:21 PM, Andrey Davydov пишет:

As I see in the javadocs for org.apache.ignite.transactions.Transaction:

    /**
     * Commits this transaction by initiating {@code two-phase-commit} 
process.

     *
     * @throws IgniteException If commit failed.
     * @throws TransactionTimeoutException If transaction is timed out.
     * @throws TransactionRollbackException If transaction is 
automatically rolled back.
     * @throws TransactionOptimisticException If transaction 
concurrency is {@link TransactionConcurrency#OPTIMISTIC}

     * and commit is optimistically failed.
     * @throws TransactionHeuristicException If transaction has 
entered an unknown state.

     */
    @IgniteAsyncSupported
    public void commit() throws IgniteException;

And as we can see in the trace, the exception comes 
from org.apache.ignite.internal.processors.cache.transactions.TransactionProxyImpl.commit.
So the system behaviour matches the javadoc, but it doesn't match the 
docs and the example at https://apacheignite.readme.io/docs/transactions



On Fri, Aug 9, 2019 at 6:12 PM Andrey Davydov 
mailto:andrey.davy...@gmail.com>> wrote:


Sorry for the misprint: the test does not check that there is no other
way to get TimeoutException

On Fri, Aug 9, 2019 at 5:55 PM Andrey Davydov
mailto:andrey.davy...@gmail.com>> wrote:

It is a little bit difficult to reproduce. We got an unhandled
exception during a pre-prod performance test of our system. I will
try to reproduce it on the weekend.

Your test just checks that if you get CacheException then
TimeoutException is inside it, but it doesn't check that there is
no other way to get CacheException. Checking the lines listed in the
stack trace against the 2.7.5 sources (loaded from Maven), I don't
see where TOE should be wrapped into CE.

Is there a full list of exceptions for which it is valid to retry a
transaction (optimistic or pessimistic)? As I found on different
pages of the docs, I currently catch (optimistic tx):
TransactionOptimisticException - try to rerun the transaction
ClusterTopologyException - retryReadyFuture().get() and try to
rerun the transaction
CacheException - check if getCause() is the timeout, then try to
rerun, or rethrow in other cases

Thanks.
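
For reference, the retry pattern those docs describe looks roughly like 
this (a minimal sketch, assuming an optimistic/serializable transaction; 
it is not taken from the thread):

    import javax.cache.CacheException;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.transactions.Transaction;
    import org.apache.ignite.transactions.TransactionConcurrency;
    import org.apache.ignite.transactions.TransactionIsolation;
    import org.apache.ignite.transactions.TransactionOptimisticException;
    import org.apache.ignite.transactions.TransactionTimeoutException;

    static void putWithRetry(Ignite ignite, IgniteCache<String, String> cache) {
        while (true) {
            try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE)) {
                cache.put("key", "val");
                tx.commit();
                return;
            }
            catch (TransactionOptimisticException | TransactionTimeoutException e) {
                // Conflict or timeout: safe to retry the whole transaction.
            }
            catch (CacheException e) {
                if (!(e.getCause() instanceof TransactionTimeoutException))
                    throw e; // Not a timeout: rethrow.

                // Otherwise fall through and retry.
            }
        }
    }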

On Fri, Aug 9, 2019 at 5:31 PM Andrei Aleksandrov
mailto:aealexsand...@gmail.com>> wrote:

Hi,

It looks strange because even Ignite tests expect that
TransactionTimeoutException will be wrapped
in CacheException. For
example IgniteTxConfigCacheSelfTest:

 try (final Transaction tx =
ignite.transactions().txStart()) {
 assert tx != null;

 cache.put("key0", "val0");

 sleepForTxFailure();

 cache.put("key", "val");

 fail("Timeout exception must be thrown");
 }
 catch (CacheException e) {
 assert e.getCause() instanceof
TransactionTimeoutException;
 }

So could you please provide the reproducer for your issue?
We will check
it and create the JIRA for it.

BR,
Andrei

8/9/2019 5:16 PM, Andrey Davydov wrote:
> On Ignite 2.7.5 I got TransactionTimeoutException not wrapped
> in CacheException. Is it normal behaviour, and should I catch
> TransactionTimeoutException too? My current logic is to
> catch CacheException and check CacheException.getCause() to see
> if it was TransactionTimeoutException.
>
> Thanks.
>
> Full stack trace:
>
> Caused by:
org.apache.ignite.transactions.TransactionTimeoutException:
> Failed to acquire lock within provided timeout for
transaction
> [timeout=150, tx=GridDhtTxLocal
> [nearNodeId=18e6b4a9-c39d-463a-9260-b5ed5057a491,
> nearFutId=74752d17c61-0341ea15-fcbd-48ef-b655-299a6d885196,
> nearMiniId=1, nearFinFutId=null, nearFinMiniId=0,
> nearXidVer=GridCacheVersion [topVer=176751246,
order=1565274668396,
> nodeOrder=1], super=GridDhtTxLocalAdapter
> [nearOnOriginatingNode=false, nearNodes=[],
> dhtNodes=[18e6b4a9-c39d-463a-9260-b5ed5057a491,

Re: TimeoutException not wrapped in CacheException

2019-08-09 Thread Andrei Aleksandrov

Hi,

It looks strange because even Ignite tests expect that 
TransactionTimeoutException will be wrapped in CacheException. For 
example IgniteTxConfigCacheSelfTest:


    try (final Transaction tx = ignite.transactions().txStart()) {
    assert tx != null;

    cache.put("key0", "val0");

    sleepForTxFailure();

    cache.put("key", "val");

    fail("Timeout exception must be thrown");
    }
    catch (CacheException e) {
    assert e.getCause() instanceof TransactionTimeoutException;
    }

So could you please provide the reproducer for your issue? We will check 
it and create the JIRA for it.


BR,
Andrei

8/9/2019 5:16 PM, Andrey Davydov wrote:
On Ignite 2.7.5 I got TransactionTimeoutException not wrapped 
in CacheException. Is it normal behaviour, and should I catch 
TransactionTimeoutException too? My current logic is to 
catch CacheException and check CacheException.getCause() to see if it 
was TransactionTimeoutException.


Thanks.

Full stack trace:

Caused by: org.apache.ignite.transactions.TransactionTimeoutException: 
Failed to acquire lock within provided timeout for transaction 
[timeout=150, tx=GridDhtTxLocal 
[nearNodeId=18e6b4a9-c39d-463a-9260-b5ed5057a491, 
nearFutId=74752d17c61-0341ea15-fcbd-48ef-b655-299a6d885196, 
nearMiniId=1, nearFinFutId=null, nearFinMiniId=0, 
nearXidVer=GridCacheVersion [topVer=176751246, order=1565274668396, 
nodeOrder=1], super=GridDhtTxLocalAdapter 
[nearOnOriginatingNode=false, nearNodes=[], 
dhtNodes=[18e6b4a9-c39d-463a-9260-b5ed5057a491, 
1144f759-6d1f-4aa3-9592-cc0b3481eb15], explicitLock=false, 
super=IgniteTxLocalAdapter [completedBase=null, 
sndTransformedVals=false, depEnabled=false, txState=IgniteTxStateImpl 
[activeCacheIds=[1895344369], recovery=false, mvccEnabled=false, 
txMap=[IgniteTxEntry [key=KeyCacheObjectImpl [part=127, 
val=cancel_queue#FIRSTG, hasValBytes=true], cacheId=1895344369, 
txKey=IgniteTxKey [key=KeyCacheObjectImpl [part=127, 
val=cancel_queue#FIRSTG, hasValBytes=true], cacheId=1895344369], 
val=[op=READ, val=null], prevVal=[op=NOOP, val=null], oldVal=[op=NOOP, 
val=null], entryProcessorsCol=null, ttl=-1, conflictExpireTime=-1, 
conflictVer=null, explicitVer=null, dhtVer=null, filters=[], 
filtersPassed=false, filtersSet=false, entry=GridDhtCacheEntry 
[rdrs=[], part=127, super=GridDistributedCacheEntry 
[super=GridCacheMapEntry [key=KeyCacheObjectImpl [part=127, 
val=cancel_queue#FIRSTG, hasValBytes=true], val=null, 
ver=GridCacheVersion [topVer=0, order=0, nodeOrder=0], 
hash=1903256846, extras=GridCacheMvccEntryExtras [mvcc=GridCacheMvcc 
[locs=[GridCacheMvccCandidate 
[nodeId=73a0c88f-3628-4c12-bd75-d273b77a6752, ver=GridCacheVersion 
[topVer=176751246, order=1565274668397, nodeOrder=2], threadId=883, 
id=412844, topVer=AffinityTopologyVersion [topVer=3, minorTopVer=1], 
reentry=null, otherNodeId=18e6b4a9-c39d-463a-9260-b5ed5057a491, 
otherVer=GridCacheVersion [topVer=176751246, order=1565274668396, 
nodeOrder=1], mappedDhtNodes=null, mappedNearNodes=null, 
ownerVer=null, serOrder=GridCacheVersion [topVer=176751246, 
order=1565274668396, nodeOrder=1], key=KeyCacheObjectImpl [part=127, 
val=cancel_queue#FIRSTG, hasValBytes=true], 
masks=local=1|owner=1|ready=1|reentry=0|used=0|tx=1|single_implicit=0|dht_local=1|near_local=0|removed=0|read=1, 
prevVer=GridCacheVersion [topVer=176751246, order=1565274668397, 
nodeOrder=2], nextVer=null]], rmts=null]], flags=2]]], prepared=1, 
locked=false, nodeId=null, locMapped=false, expiryPlc=null, 
transferExpiryPlc=false, flags=0, partUpdateCntr=0, 
serReadVer=GridCacheVersion [topVer=0, order=0, nodeOrder=0], 
xidVer=null], IgniteTxEntry [key=KeyCacheObjectImpl [part=121, 
val=cancel_queue#FIRSTA, hasValBytes=true], cacheId=1895344369, 
txKey=IgniteTxKey [key=KeyCacheObjectImpl [part=121, 
val=cancel_queue#FIRSTA, hasValBytes=true], cacheId=1895344369], 
val=[op=DELETE, val=null], prevVal=[op=NOOP, val=null], 
oldVal=[op=NOOP, val=null], entryProcessorsCol=null, ttl=-1, 
conflictExpireTime=-1, conflictVer=null, explicitVer=null, 
dhtVer=null, filters=[], filtersPassed=false, filtersSet=false, 
entry=GridDhtCacheEntry [rdrs=[], part=121, 
super=GridDistributedCacheEntry [super=GridCacheMapEntry 
[key=KeyCacheObjectImpl [part=121, val=cancel_queue#FIRSTA, 
hasValBytes=true], val=null, ver=GridCacheVersion [topVer=176751246, 
order=1565274668382, nodeOrder=2], hash=1903256840, 
extras=GridCacheMvccEntryExtras [mvcc=GridCacheMvcc 
[locs=[GridCacheMvccCandidate 
[nodeId=73a0c88f-3628-4c12-bd75-d273b77a6752, ver=GridCacheVersion 
[topVer=176751246, order=1565274668397, nodeOrder=2], threadId=883, 
id=412843, topVer=AffinityTopologyVersion [topVer=3, minorTopVer=1], 
reentry=null, otherNodeId=18e6b4a9-c39d-463a-9260-b5ed5057a491, 
otherVer=GridCacheVersion [topVer=176751246, order=1565274668396, 
nodeOrder=1], mappedDhtNodes=null, mappedNearNodes=null, 
ownerVer=null, serOrder=GridCacheVersion [topVer=176751246, 
order=1565274668396, 

Re: Ignite Spark Example Question

2019-08-09 Thread Andrei Aleksandrov

Hi,

Spark provides several *SaveModes* that determine what happens when the 
table you are going to use already exists:


* *Overwrite* - with this option the existing table *will be re-created* 
(or a new one created) and the data loaded there using the 
IgniteDataStreamer implementation

* *Append* - with this option the existing table *will not be re-created* 
and no new table will be created; the data is simply loaded into the 
existing table


* *ErrorIfExists* - with this option you will get an exception if the 
table that you are going to use already exists


* *Ignore* - with this option nothing will be done if the table that you 
are going to use already exists: the save operation will not save the 
contents of the DataFrame and will not change the existing data.


According to your question:

You should use the *Append* SaveMode for your Spark integration if you 
want to store new data in the cache while keeping the previously stored 
data.


Note that if you store data under the same primary keys, that data will 
be overwritten in the Ignite table. For example:


1) Add person {id=1, name=Vlad, age=19} where id is the primary key
2) Add person {id=1, name=Nikita, age=26} where id is the primary key

In Ignite you will see only {id=1, name=Nikita, age=26}.
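
To make that concrete, an Append write could look roughly like this in 
Java (a sketch only; the DataFrame, config path and table name are 
illustrative and not taken from the thread):

    import org.apache.ignite.spark.IgniteDataFrameSettings;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SaveMode;

    static void appendToIgnite(Dataset<Row> personsDf, String configPath) {
        personsDf.write()
            .format(IgniteDataFrameSettings.FORMAT_IGNITE())
            .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE(), configPath)
            .option(IgniteDataFrameSettings.OPTION_TABLE(), "person")
            // The same primary key means the row is overwritten, not duplicated.
            .mode(SaveMode.Append)
            .save();
    }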

Here you can also find a code sample and other information about the 
SaveModes:


https://apacheignite-fs.readme.io/docs/ignite-data-frame#section-saving-dataframes

BR,
Andrei

On 2019/08/08 17:33:39, sri hari kali charan Tummala  
wrote:

> Hi All,>
>
> I am new to the Apache Ignite community and I am testing out Ignite>
> for knowledge's sake. In the example below the code reads a JSON file>
> and writes to an Ignite in-memory table. Is it overwriting? Can I do>
> append mode? I did try Spark append mode>
> .mode(org.apache.spark.sql.SaveMode.Append) without stopping the>
> Ignite application (no ignite.stop), which keeps the cache alive, and>
> tried to insert data into the cache twice, but I am still getting 4>
> records where I was expecting 8 records. What would be the reason?>
>
> 
https://github.com/apache/ignite/blob/1f8cf042f67f523e23f795571f609a9c81726258/examples/src/main/spark/org/apache/ignite/examples/spark/IgniteDataFrameWriteExample.scala#L89> 


>
> -- >
> Thanks & Regards>
> Sri Tummala>
>


Re: IgniteCache.lock behaviour on a key that doesn't exist in the cache (yet)

2019-08-07 Thread Andrei Aleksandrov

Hi,

I tested your code, and it looks like it works fine.

However, you can also try the following method:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#getAndPutIfAbsent-K-V-

In this case you can:

1) Check cache.containsKey(key)
2) If it returns false, prepare the default value
3) Run getAndPutIfAbsent(key, defaultValue)

If another client managed to put a value while you were preparing the 
default, getAndPutIfAbsent will return that value; otherwise it returns 
null and your default value has been stored.
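
Put together, a lock-free variant of the getOrder method from the 
question could look like this (a sketch; GridOrder, OrderKey and 
loadOrderFromDb are the poster's own types and methods):

    public GridOrder getOrder(OrderKey key) {
        GridOrder order = cache.get(key);

        if (order == null) {
            GridOrder fromDb = loadOrderFromDb(key);

            if (fromDb == null)
                throw new IllegalStateException("Key " + key + " not found");

            // Atomic: stores fromDb only if no value appeared meanwhile,
            // and returns the previously stored value otherwise.
            GridOrder prev = cache.getAndPutIfAbsent(key, fromDb);

            order = prev != null ? prev : fromDb;
        }

        return order;
    }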


BR,
Andrei

8/7/2019 12:08 PM, Yohan Fernando wrote:


I'm trying to write a transaction-safe way of performing a lazy load 
of an object from the database if it doesn't exist. However, as 
IgniteCache doesn't have the equivalent of HashMap.computeIfAbsent, 
I'm trying to use the IgniteCache.lock method to achieve this.


The question is whether IgniteCache.lock will lock the cache for that 
key even if the key does not yet exist in the cache.


Following is the example code,

public GridOrder getOrder(OrderKey key) {
    Lock orderKeyLock = cache.lock(key);

    try {
        orderKeyLock.lock();

        if (!cache.containsKey(key)) {
            GridOrder order = loadOrderFromDb(key);

            if (order == null) {
                throw new IllegalStateException("Key " + key +
                    " not in Order Cache or in Database DB Name ");
            }

            cache.put(key, order);
        }

        return cache.get(key);
    } finally {
        orderKeyLock.unlock();
    }
}

Alternatively, is there a better way to achieve this?





Re: What happens when a client gets disconnected

2019-08-06 Thread Andrei Aleksandrov

Hi,

I guess that you should provide the full client and server logs, the 
configuration files and, if possible, a reproducer for the case when a 
client node with a near cache was able to crash the whole cluster.


It looks like there may be an issue here, and the best way forward would 
be to raise a JIRA ticket for it after analyzing the provided data.


BR,
Andrei

On 2019/07/31 14:54:42, Matt Nohelty  wrote:
> Sorry for the long delay in responding to this issue. I will work on>
> replicating this issue in a more controlled test environment and try to>
> grab thread dumps from there.>
>
> In a previous post you mentioned that the blocking in this thread dump>
> should only happen when a data node is affected which is usually a 
server>

> node and you also said that near cache consistency is observed>
> continuously. If we have near caching enabled, does that mean clients>
> become data nodes? If that's the case, does that explain why we are 
seeing>

> blocking when a client crashes or hangs?>
>
> Assuming this is related to near caching, is there any configuration to>
> adjust this behavior to give us availability over perfect consistency?>
> Having a failure on one client ripple across the entire system and>
> effectively take down all other clients of that cluster is a major 
problem.>
> We obviously want to avoid problems like an OOM error or a big GC 
pause in>

> the client application but if these things happen we need to be able to>
> absorb these gracefully and limit the blast radius to just that client>
> node.>
>


Re: Ignite 2.7.0: Ignite client:: memory leak

2019-08-06 Thread Andrei Aleksandrov

Hi Mahesh,

Yes, it's a problem related to IGNITE_EXCHANGE_HISTORY_SIZE. Ignite 
stores data for the last 1000 exchanges.


This history is generally required for the case when the coordinator 
changes and the new coordinator needs to load the recent exchange history.


There are two problems here:

1) Client nodes can't become the coordinator, so there is no reason to 
store 1000 entries on them. It would be better to set this option to 
some small value, or to zero, for client nodes.
2) Server nodes also don't require 1000 entries. The required exchange 
history size may depend on the number of server nodes. I suggest 
changing the default to a smaller value.


Here is the ticket related to this problem:

https://issues.apache.org/jira/browse/IGNITE-11767

It is fixed and should be available in Ignite 2.8, where these exchanges 
will take less memory.
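
Until you can upgrade, the history size can also be overridden per JVM 
via a system property (the value below is illustrative, my assumption 
rather than a recommendation from the ticket):

    import org.apache.ignite.IgniteSystemProperties;

    // Either pass -DIGNITE_EXCHANGE_HISTORY_SIZE=8 on the JVM command line,
    // or set the property programmatically before Ignition.start():
    System.setProperty(IgniteSystemProperties.IGNITE_EXCHANGE_HISTORY_SIZE, "8");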


BR,
Andrei

On 2019/08/03 01:09:19, Mahesh Renduchintala  
wrote:
> The clients we use have memory ranging from 4GB to 8GB. OOM was 
produced on all these clients - some sooner, some a little later, but it 
was always seen.>

>
> The workaround is still stable for more than 48 hours now.>
>
>


Re: Can Ignite Kafka connector be able to perform partial update ?

2019-08-06 Thread Andrei Aleksandrov

Hi,

Unfortunately, the Ignite Kafka connector is a simple implementation of 
a Kafka connector that uses source and sink functions.


All data transformation and filtering should be done using the Kafka 
API. I guess that you can try the following transforms for your purposes:


https://docs.confluent.io/current/connect/transforms/index.html

BR,
Andrei

On 2019/08/01 19:15:52, Yao Weng  wrote:
> Hi, I have subscribed to user-subscr...@ignite.apache.org,>
> but still cannot post my question. So I send it directly to this email>
> address.>
>
> Our application receives Kafka message, and then calls invoke to do 
partial>
> update. Does ignite kafka connector support invoke ? If not, is 
Ignite team>

> going to support it ?>
>
> Thank you very much>
>
> Yao Weng>


Re: Declaring server side CacheEntryListener in Ignite config

2019-07-17 Thread Andrei Aleksandrov

Hi,

Could you please provide more details about your case?

Generally, for tracking cache updates in Ignite you can use events:

https://apacheignite.readme.io/docs/events

Continuous Queries:

https://apacheignite.readme.io/docs/continuous-queries

CacheInterceptor:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/CacheInterceptor.html
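
Of these, a continuous query is the usual way to react to updates from 
code running in the cluster; registering one at startup could look like 
this (a minimal sketch; the cache name and key/value types are 
illustrative):

    import javax.cache.event.CacheEntryEvent;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.query.ContinuousQuery;

    static void listenForUpdates(Ignite ignite) {
        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

        // The local listener is invoked with batches of update events.
        qry.setLocalListener(evts -> {
            for (CacheEntryEvent<? extends Integer, ? extends String> e : evts)
                System.out.println("Updated: " + e.getKey() + " -> " + e.getValue());
        });

        // Keep the returned cursor open for as long as events are needed.
        cache.query(qry);
    }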

BR,
Andrei

http://apache-ignite-users.70518.x6.nabble.com/Declaring-server-side-CacheEntryListener-in-Ignite-config-td28849.html

On 2019/07/16 13:33:54, Jean-Philippe Laroche  wrote:
> I saw many examples on how to programmatically declare a>
> CacheEntryListener from a client application, but is there a way to register>
> by configuration a CacheEntryListener so it is active on node/cluster>
> startup?>
>


Re: Ignite DB Issues

2019-07-16 Thread Andrei Aleksandrov

Hi,

There are not enough details in your message.

1. I have 10 records of CSV and stored in Ignite DB then ten records 
will be stored along with new table creation. Now I have removed drop 
table code from my java code and removed table creation code and running 
the java code. It is not updating in Ignite DB table records.


Can you share your Java code and cluster configuration? How do you try 
to update the tables in Ignite?


2. Why does Ignite DB always show four columns of a table?

I guess you are referring to an SQL SELECT on the table. It will show 
only the fields that you set in the CREATE TABLE command or in the 
QueryEntity of your cache configuration:


https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/QueryEntity.html
https://apacheignite-sql.readme.io/docs/create-table
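
For example, a cache configured with an explicit QueryEntity exposes 
exactly the listed fields to SQL (a sketch; the type and field names 
are illustrative):

    import java.util.Collections;
    import java.util.LinkedHashMap;
    import org.apache.ignite.cache.QueryEntity;
    import org.apache.ignite.configuration.CacheConfiguration;

    CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("PersonCache");

    LinkedHashMap<String, String> fields = new LinkedHashMap<>();
    fields.put("id", Long.class.getName());
    fields.put("name", String.class.getName());

    // Only "id" and "name" will be visible to SQL SELECT statements.
    QueryEntity entity = new QueryEntity(Long.class.getName(), "Person")
        .setFields(fields);

    ccfg.setQueryEntities(Collections.singletonList(entity));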

BR,
Andrei

http://apache-ignite-users.70518.x6.nabble.com/Ignite-DB-Issues-td28836.html

On 2019/07/15 01:44:45, anji m  wrote:
> Hi Team,>
>
>
> 1. I have 10 records of CSV and stored in Ignite DB then ten records 
will>

> be stored along with new table creation. Now I>
> have removed drop table code from my java code and removed table 
creation>

> code and running the java code. It is not updating>
> in Ignite DB table records.>
>
> 2. Why Ignite DB always showing four columns of table?>
> -- >
> *Thanks*>
> *Anji M*>
> *M:+1 (267) 916 2969*>
>


Re: about start -f command in ignitevisorcmd

2019-07-15 Thread Andrei Aleksandrov

Hi,

Looks like this file was moved to 
https://github.com/gridgain/apache-ignite/blob/ignite-2.7.5/config/visor-cmd/node_startup_by_ssh.sample.ini


Here is the ticket:

https://issues.apache.org/jira/browse/IGNITE-10036

Please double-check your config folder, or download the file from the 
link above.


BR,
Andrei

On 2019/07/15 06:30:55, liyuj <1...@163.com> wrote:
> Hi,>
>
> In the ignitevisorcmd environment, enter the help start command to see >
> the following:>
> - f=>
> Path to INI file that contains topology specification.>
> For a sample INI file refer to>
> 'bin/include/visorcmd/node_startup_by_ssh.sample.ini'.>
>
> But the node_startup_by_ssh.sample.ini file does not exist. Can 
somebody >

> provide an example of this file?>
>
>


Re: Metrics for Ignite

2019-07-15 Thread Andrei Aleksandrov

Hi,

You can try the web console for Ignite. It contains a Monitoring 
Dashboard with various cache metrics calculated over a period of time. 
It also contains graphs of cache operation throughput and latency:


https://apacheignite-tools.readme.io/docs/ignite-web-console

You can try the following ready-to-go installation for testing:

https://console.gridgain.com/monitoring/dashboard
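
The same numbers are also available programmatically via CacheMetrics 
(a sketch; it assumes cache statistics were enabled with 
CacheConfiguration.setStatisticsEnabled(true), and the cache name is 
illustrative):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.cache.CacheMetrics;

    static void printCacheStats(Ignite ignite) {
        CacheMetrics m = ignite.cache("myCache").metrics();

        // Operation counts and mean times as reported by the metrics API.
        System.out.printf("gets=%d (avg %f), puts=%d (avg %f)%n",
            m.getCacheGets(), m.getAverageGetTime(),
            m.getCachePuts(), m.getAveragePutTime());
    }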

BR,
Andrei

On 2019/07/15 13:20:09, nikhil dhiman  wrote:
> Hi, I am almost production ready, but I want to plot graphs for>
> Throughput, latency operation wise[get, put, delete, eviction]. Is>
> there a way I can produce metrics for time taken by ignite node for>
> get/put/delete. I can see many metrics via rest module. But i am>
> unable to find the above metrics.>
>
> I am on Ignite 2.7.5 version.>
>
> Thanks & Regards,>
> Nikhil Dhiman>
>


Re: AdaptiveLoadBalancingSpi not removing finished tasks from taskTops map

2019-07-05 Thread Andrei Aleksandrov

Hi,

I tested your case, and it looks like Ignite has a memory issue here.

I filed a ticket for it:

https://issues.apache.org/jira/browse/IGNITE-11966

BR,
Andrei

On 2019/06/03 13:04:49, chris_d  wrote:
> Hi Andrei,>
>
> I've attached a zip of the top consumers and the class we're using to>
> configure the ignite clients within our app server.>
>
> I can't really provide all the code involved as the load test was 
testing>

> quite a large chunk of our system.>
>
> If necessary I can try and create a cut-down version of the test. >
>
> 855_Top_Consumers.zip>
> 
 
>

>
> AbstractGridConfigurationBuilder.java>
> 
 
>

>
> Thanks>
> Chris.>
>
>
>
> -->
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/>
>


Re: Python client: create cache with expiration policy

2019-07-05 Thread Andrei Aleksandrov

Hi,

It looks like the Python API should be improved.

I created a ticket for this:

https://issues.apache.org/jira/browse/IGNITE-11965
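
For comparison, the Java API expresses this with a JCache expiry policy 
(a sketch; the cache name and the 30-second TTL are illustrative):

    import java.util.concurrent.TimeUnit;
    import javax.cache.expiry.CreatedExpiryPolicy;
    import javax.cache.expiry.Duration;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;

    static void putWithTtl(Ignite ignite) {
        IgniteCache<String, String> cache = ignite.getOrCreateCache("myCache");

        // Entries created through this decorated view expire 30 s after creation.
        IgniteCache<String, String> ttlCache = cache.withExpiryPolicy(
            new CreatedExpiryPolicy(new Duration(TimeUnit.SECONDS, 30)));

        ttlCache.put("key", "value");
    }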

BR,
Andrei

On 2019/06/03 11:09:29, kulinskyvs  wrote:
> Hi,>
>
> Are we talking about the same? I mean, my goal is to create a cache 
(if not>
> yet created) with some predefined expiration policy in order to be 
able to>

> set the expiration timeout for key/value pair put into Ignite.>
>
> Looks like you are referring to>
> https://apacheignite.readme.io/docs/partition-loss-policies,>
> while I'm interested in 
https://apacheignite.readme.io/docs/expiry-policies.>

>
> Thanks.>
>
> Best regards,>
> Vadzim Kulinski>
>
>
>
> -->
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/>
>