failed to get the security context object

2019-05-20 Thread radha jai
Hi,
   Could someone please reply to this?
   I have implemented a grid security processor, and I am setting the
security context holder in the authenticate function as below:

public class MySecurityProcessor extends GridProcessorAdapter implements
    DiscoverySpiNodeAuthenticator, GridSecurityProcessor, IgnitePlugin {

    public SecurityContext authenticate(AuthenticationContext authenticationContext)
        throws IgniteCheckedException {
        SecuritySubject secureSecuritySubject = new SecuritySubject(
            authenticationContext.subjectId(),
            authenticationContext.subjectType(),
            authenticationContext.credentials().getLogin(),
            authenticationContext.address()
        );
        SecurityContext securityContext =
            new MySecurityContext(secureSecuritySubject, accessToken);
        SecurityContextHolder.set(securityContext);
        return securityContext;
    }

    public void authorize(String name, SecurityPermission perm, SecurityContext securityCtx)
        throws SecurityException {
        System.out.println(SecurityContextHolder.get());
        System.out.println(securityCtx);
        // do some authorization
        // ...
    }

    // ...
}

In the plugin provider I am creating the GridSecurityProcessor component.

The server starts without throwing any errors, and the plugin provider
starts as well.
Questions:
1. When I start Visor, it is able to connect to the Ignite server. If I
execute commands in Visor such as top, cache, etc., the authorize function
is called, but the security context it sees is always NULL. How do I get
the security context? Also, when Visor connects, the authenticate function
is not called.
2. When a REST API call is made to create a cache, why is the authorize
function called twice, once by GridRestProcessor and once by
GridCacheProcessor? In this scenario I do get the security context from
SecurityContextHolder.get(), so there is no issue there.

regards
Radha
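The NULL context in question 1 is what one would expect if the context holder is backed by thread-local storage: a value set during authentication on one thread is invisible to an authorize call running on a different thread. A minimal, generic Python illustration of that semantics (the SecurityContextHolder class below is a hypothetical stand-in, not Ignite's actual class):

```python
import threading

class SecurityContextHolder:
    # Hypothetical stand-in for a thread-local context holder.
    _local = threading.local()

    @classmethod
    def set(cls, ctx):
        cls._local.ctx = ctx

    @classmethod
    def get(cls):
        return getattr(cls._local, "ctx", None)

# Set the context on the "authentication" thread (here, the main thread).
SecurityContextHolder.set("radha's context")

result = {}

def authorize():
    # Runs on a different thread: the thread-local slot is empty here.
    result["ctx"] = SecurityContextHolder.get()

t = threading.Thread(target=authorize)
t.start()
t.join()

print(SecurityContextHolder.get())  # visible on the thread that set it
print(result["ctx"])                # None on the worker thread
```

This is why relying on the securityCtx parameter passed into authorize, rather than on a holder populated on a different thread, may be the more robust pattern; whether Ignite's holder is thread-local in exactly this way is an assumption here.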


Re: Ignite WebSessionFilter client threads eating up CPU usage

2019-05-20 Thread wbyeh
Ilya, after we switched from the bundled OpenJDK (jdk-8u181-ojdkbuild-linux-x64)
to Zulu JDK (zulu8.38.0.13-ca-jdk8.0.212-linux_x64), CPU usage was still
high. We also found that the Linux node had a lower CPU frequency than the
other nodes with the same configuration on CentOS 7.5.

 

Finally, we corrected the CPU frequency with:

  cpupower idle-set -d 0   # likewise for idle states 1 2 3 4 5 6
  cpupower frequency-set -g performance

After that, the high CPU load went away. So checking the CPU frequency
(cpupower frequency-info) and using a supported JDK could be a first step
in system tuning. Thank you all!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: Ignite 'complex' type

2019-05-20 Thread Stéphane Thibaud
Thank you Dmitry. I will use serialization in this case.


Kind regards,

Stéphane Thibaud

2019年5月21日(火) 9:01 Dmitry Melnichuk :

> Stéphane,
>
> I'm afraid this won't work. Ignite storage is typed, so you cannot use
> values of arbitrary type, only values of the Pythonic types defined
> here [1].
>
> If you really want to store such a complex object as a whole, you can
> either
>
> 1) serialize it with pickle/dill/protobuf/json/etc. and store it as a
> StringObject or a ByteArrayObject, or
>
> 2) define a class with a GenericObjectMeta [2] metaclass, with a unique
> class name and a schema. It will be an object of a special class, not
> an arbitrarily typed value, but it may do the job for you.
>
> [1]
>
> https://apacheignite.readme.io/docs/python-thin-client-initialization-and-configuration#section-data-types
>
> [2] https://apacheignite.readme.io/docs/python-thin-client-binary-types
>
> On Tue, 2019-05-21 at 07:15 +0900, Stéphane Thibaud wrote:
> > Any thoughts about this one?
> >
> >
> > Kind regards,
> >
> > Stéphane Thibaud
> >
> > 2019年5月16日(木) 20:55 Stéphane Thibaud :
> > > Hello Ignite experts,
> > >
> > > I am trying to store a python value of type
> > >
> > > Tuple[Dict[str, float], bytes]
> > >
> > > in an Ignite cache. However, doing that without type hints I get:
> > >
> > > TypeError: Type `array of None` is invalid
> > >
> > > Do you know if this is possible?
> > >
> > >
> > > Kind regards,
> > >
> > > Stéphane Thibaud
>
>


Re: The CPU of Ignite NODEs in the same cluster has 20% gap

2019-05-20 Thread Ropugg
Update the jvisualvm screenshot.

 

 

 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


The CPU of Ignite NODEs in the same cluster has 20% gap

2019-05-20 Thread Ropugg
We have a cluster with 20 nodes.
18 nodes work as HTTP servers, 2 nodes work as cache servers.
The 18 nodes query data from the 2 cache servers via IgniteService and
IgniteCache.
But under load testing, the CPU usage of the 2 cache servers differs by
about 20%.

I added a counter in my code; it shows that service access was balanced,
but the CPU was not.

 

 

 

Here is the CPU snapshot; you can find more details via ./jvisualvm.
ignite01w-snap.nps
ignite02w-snap.nps

The HTTP servers were not balanced either.
Do we know why they are not balanced?
Is there any way to resolve the imbalance?
All Ignite caches are replicated on the two cache servers.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite 'complex' type

2019-05-20 Thread Dmitry Melnichuk
Stéphane,

I'm afraid this won't work. Ignite storage is typed, so you cannot use
values of arbitrary type, only values of the Pythonic types defined
here [1].

If you really want to store such a complex object as a whole, you can
either

1) serialize it with pickle/dill/protobuf/json/etc. and store it as a
StringObject or a ByteArrayObject, or

2) define a class with a GenericObjectMeta [2] metaclass, with a unique
class name and a schema. It will be an object of a special class, not
an arbitrarily typed value, but it may do the job for you.

[1] 
https://apacheignite.readme.io/docs/python-thin-client-initialization-and-configuration#section-data-types

[2] https://apacheignite.readme.io/docs/python-thin-client-binary-types

On Tue, 2019-05-21 at 07:15 +0900, Stéphane Thibaud wrote:
> Any thoughts about this one?
> 
> 
> Kind regards,
> 
> Stéphane Thibaud
> 
> 2019年5月16日(木) 20:55 Stéphane Thibaud :
> > Hello Ignite experts,
> > 
> > I am trying to store a python value of type 
> > 
> > Tuple[Dict[str, float], bytes]
> > 
> > in an Ignite cache. However, doing that without type hints I get:
> > 
> > TypeError: Type `array of None` is invalid
> > 
> > Do you know if this is possible?
> > 
> > 
> > Kind regards,
> > 
> > Stéphane Thibaud



Re: Ignite 'complex' type

2019-05-20 Thread Stéphane Thibaud
Any thoughts about this one?


Kind regards,

Stéphane Thibaud

2019年5月16日(木) 20:55 Stéphane Thibaud :

> Hello Ignite experts,
>
> I am trying to store a python value of type
>
> Tuple[Dict[str, float], bytes]
>
> in an Ignite cache. However, doing that without type hints I get:
>
> TypeError: Type `array of None` is invalid
>
> Do you know if this is possible?
>
>
> Kind regards,
>
> Stéphane Thibaud
>


Native Client/Full Python API

2019-05-20 Thread mfrey
Hello,

Is there a plan to release a native client API for Python (like the
Java, .NET and C++ ones)? I am not talking about the thin client here.

I am evaluating Apache Ignite for a solution with many decentralized
write-backs/data imports and quite complex business calculations. I would
like to push as many of the calculations as possible to the database layer,
to avoid querying data and doing the processing in Python.

My understanding is that I cannot create/execute calculations with the thin
client and that this is only possible with the native client; is that correct?

Using Java/.NET is not an alternative in my case, as the solution will need
to run fully on Python to integrate nicely with other solutions.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: C++ Thin Client Lacks PC File

2019-05-20 Thread Igor Sapego
Maybe, you could start with these threads: [1], [2]

[1] -
https://lists.freedesktop.org/archives/pkg-config/2007-August/000218.html
[2] - https://autotools.io/pkgconfig/file-format.html

Best Regards,
Igor


On Mon, May 20, 2019 at 4:57 PM mwilliamso58 
wrote:

> bump
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Trouble with continuous queries

2019-05-20 Thread Mike Needham
Hi All,

I have a cache that is running and is defined as

IgniteCache<Long, Exchange> exchCache =
    ignite.getOrCreateCache(new CacheConfiguration<>("EXCHANGE")
        .setIndexedTypes(Long.class, Exchange.class)
        .setAtomicityMode(CacheAtomicityMode.ATOMIC)
        .setBackups(0)
    );

From a .NET client, how can I set up a continuous query so that I am
notified of changes to the cache? I can access the cache via DBeaver and
other SQL tools.

The documentation does not make it clear how to set this up. I want ALL
changes to the cache to be sent to the client. The .NET example does not
appear to work for this scenario: it uses a simple <int, string> cache. I
have tried <long, Exchange> but it does not appear to ever be notified of
events.

On Tue, May 14, 2019 at 10:55 AM Denis Mekhanikov 
wrote:

> Mike,
>
> First of all, it's recommended to have a separate cache per table, to avoid
> storing objects of different types in the same cache.
>
> A continuous query receives all updates on the cache regardless of their
> type. The local listener is invoked when new events happen. Existing
> records can be processed using an initial query.
>
> Refer to the following documentation page for more information:
> https://apacheignite.readme.io/docs/continuous-queries
>
> Denis
>
> чт, 2 мая 2019 г. в 14:14, Mike Needham :
>
>> I have seen that example. What I do not understand is: I have two SQL
>> tables in a cache that has n number of nodes. It is loaded ahead of time,
>> and a client wants to be notified when the contents of the cache are
>> changed. Do you have to run the continuous query in a never-ending loop
>> so that it does not end? All the examples simply use ContinuousQuery<
>> Integer, String>; my example uses <Long, Exchange>, where Exchange is a
>> Java class defining the structure. Do I just set up a
>> ContinuousQuery<Long, Exchange>?
>>
>> On Thu, May 2, 2019 at 3:59 AM aealexsandrov 
>> wrote:
>>
>>> Hi,
>>>
>>> The good example of how it can be done you can see here:
>>>
>>>
>>> https://github.com/gridgain/gridgain-advanced-examples/blob/master/src/main/java/org/gridgain/examples/datagrid/query/ContinuousQueryExample.java
>>>
>>> You can set a remote filter to handle changes on remote nodes and a
>>> local listener for the current node.
>>>
>>> Note that you will receive updates only until the ContinuousQuery is
>>> closed or until the node that started it leaves the cluster.
>>>
>>> Also, you can try to use CacheEvents like in example here:
>>>
>>> https://apacheignite.readme.io/docs/events#section-remote-events
>>>
>>> Note that events can affect your performance.
>>>
>>> BR,
>>> Andrei
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>
>>
>> --
>> *Don't be afraid to be wrong. Don't be afraid to admit you don't have all
>> the answers. Don't be afraid to say "I think" instead of "I know."*
>>
>

-- 
*Don't be afraid to be wrong. Don't be afraid to admit you don't have all
the answers. Don't be afraid to say "I think" instead of "I know."*


Re: C++ Thin Client Lacks PC File

2019-05-20 Thread mwilliamso58
bump



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Multithreading using pyignite

2019-05-20 Thread Stéphane Thibaud
Great. Thank you. I went with an awesome pip package called 'resource-pool'.

Kind regards,

Stéphane Thibaud

2019年5月20日(月) 15:44 Ilya Kasnacheev :

> Hello!
>
> I guess the recommendation here is to create a new client in every thread
> where you need them.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> вс, 19 мая 2019 г. в 08:39, Stéphane Thibaud :
>
>> Hello Ignite users,
>>
>> I am wondering whether the pyignite library is thread-safe. I get strange
>> errors when I try to run the below script. I cannot find anything about
>> threads in the docs at
>> https://apache-ignite-binary-protocol-client.readthedocs.io/en/latest/ .
>> Should I use the library in a different way or is multithreading simply
>> not supported?
>>
>> from threading import Thread
>>
>> import pyignite
>>
>> IGNITE_PORT = 10800
>> WEB_HOST = '127.0.0.1'
>> DB_CLIENT = pyignite.Client()
>> DB_CLIENT.connect(WEB_HOST, IGNITE_PORT)
>>
>>
>> test_cache = DB_CLIENT.get_or_create_cache("test_cache")
>> test_cache.put("a", "b")
>> for _ in range(1000):
>>     Thread(target=lambda: print(test_cache.get("a"))).start()
>>
>>
>> Kind regards,
>>
>> Stéphane Thibaud
>>
>
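Ilya's one-client-per-thread recommendation can be sketched generically. Since a live Ignite server is needed for pyignite itself, sqlite3 is used below as a stand-in for pyignite.Client; it has the same constraint that a connection created in one thread should not be shared with another, so each worker opens and closes its own:

```python
import sqlite3
import threading

DB_PATH = ":memory:"  # stand-in for the Ignite host/port pair

results = []
lock = threading.Lock()

def worker(n):
    # Each thread creates its own client/connection instead of
    # sharing one global client across threads.
    conn = sqlite3.connect(DB_PATH)
    try:
        (value,) = conn.execute("SELECT ?", (n,)).fetchone()
        with lock:
            results.append(value)
    finally:
        conn.close()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

With pyignite the analogous move would be constructing and connecting a fresh Client inside each worker (or pooling them, as the 'resource-pool' package mentioned above does), rather than sharing one connected client.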


Re: Recommended way of random sampling

2019-05-20 Thread Stéphane Thibaud
Hello Ilya,

Yes, you are right. I sent my first response too quickly. With an extra
column it will work.
I will go with the approach you suggest. :-)


Kind regards,

Stéphane Thibaud

2019年5月20日(月) 20:26 Ilya Kasnacheev :

> Hello!
>
> I'm not sure why you think it will not scale. If this field is indexed
> then taking a random sample is basically one b-tree walk away.
>
> I guess you will have to store random numbers, if you rely on non-random
> field it might introduce bias.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> пн, 20 мая 2019 г. в 13:28, Stéphane Thibaud :
>
>> Hello Ilya,
>>
>> Thank you for that suggestion. On a traditional database I know that
>> approach does not scale well, since a random number is first assigned to
>> all rows (it scales linearly with the number of rows if I am not mistaken).
>> Do you think this would be different for Ignite?
>>
>>
>> Kind regards,
>>
>> Stéphane Thibaud
>>
>> 2019年5月20日(月) 15:53 Ilya Kasnacheev :
>>
>>> Hello!
>>>
>>> You can have a random indexed field in your table and do queries like
>>> SELECT * FROM table WHERE rand_field < RAND() LIMIT 1; to sample random
>>> item.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> пн, 20 мая 2019 г. в 04:50, Stéphane Thibaud :
>>>
 As a small addition: it would really help if Ignite had a hashing
 function for this, but I only see AES encryption.


 Kind regards,

 Stéphane Thibaud

 2019年5月19日(日) 20:59 Stéphane Thibaud :

> Hello Ignite users,
>
> I am considering to sample randomly on large amounts of data, but I
> was wondering what would be the most efficient way for this. Right now, I
> think I might need cluster-based randomness using a MOD function as
> described here:
> https://www.alandix.com/academic/topics/random/sampling-SQL.html
>
> I currently have a UUID column (uuid4), which I think can be used for
> it, but I might need some bit manipulation to get the non-random parts out
> of the UUID.
> Do you think this is indeed the most straightforward way to do it?
>
>
> Kind regards,
>
> Stéphane Thibaud
>
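The indexed-random-column approach can be sketched outside Ignite. sqlite3 stands in for the SQL engine here, and the table and column names are made up; a slight variant of the suggested query is used (explicit threshold plus ORDER BY, with a wrap-around fallback) so the sample stays uniform and the index is walked once:

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, rand_field REAL)")
conn.execute("CREATE INDEX idx_rand ON items (rand_field)")

# Assign each row a random number once, at insert time.
rng = random.Random(42)
conn.executemany(
    "INSERT INTO items (id, rand_field) VALUES (?, ?)",
    [(i, rng.random()) for i in range(1000)],
)

# Sample one row: pick a random threshold and take the first row at or
# above it; the index on rand_field makes this a single b-tree walk.
threshold = rng.random()
row = conn.execute(
    "SELECT id FROM items WHERE rand_field >= ? "
    "ORDER BY rand_field LIMIT 1",
    (threshold,),
).fetchone()
if row is None:  # threshold landed above the maximum; wrap around
    row = conn.execute(
        "SELECT id FROM items ORDER BY rand_field DESC LIMIT 1"
    ).fetchone()

print(0 <= row[0] < 1000)  # True
conn.close()
```

The cost per sample is one index lookup, independent of the row count; only the one-time assignment of rand_field scales linearly.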



Re: Issue with peerclassloading when using dataStreamers

2019-05-20 Thread Ilya Kasnacheev
Hello!

Can you please share complete logs from all nodes in your cluster?

Regards,
-- 
Ilya Kasnacheev


чт, 9 мая 2019 г. в 18:00, Abhishek Gupta (BLOOMBERG/ 731 LEX) <
agupta...@bloomberg.net>:

> I'm using data streamers to ingest data from files into the cache. I
> need to do an 'upsert' on the data being ingested, therefore I'm using a
> streamReceiver too.
>
> See the attached Java class and log snippet. When we run the code calling
> addData on the data streamer, after a while we start seeing exceptions
> being thrown. But this doesn't always happen; i.e., peer class loading
> seems to work at times and not at others, with no code change. It seems
> like there is a race.
>
> Any suggestions on what might be happening or is there a better, more
> reliable way to do this?
>
>
> Thanks,
> Abhishek
>
>


Re: Recommended way of random sampling

2019-05-20 Thread Ilya Kasnacheev
Hello!

I'm not sure why you think it will not scale. If this field is indexed then
taking a random sample is basically one b-tree walk away.

I guess you will have to store random numbers, if you rely on non-random
field it might introduce bias.

Regards,
-- 
Ilya Kasnacheev


пн, 20 мая 2019 г. в 13:28, Stéphane Thibaud :

> Hello Ilya,
>
> Thank you for that suggestion. On a traditional database I know that
> approach does not scale well, since a random number is first assigned to
> all rows (it scales linearly with the number of rows if I am not mistaken).
> Do you think this would be different for Ignite?
>
>
> Kind regards,
>
> Stéphane Thibaud
>
> 2019年5月20日(月) 15:53 Ilya Kasnacheev :
>
>> Hello!
>>
>> You can have a random indexed field in your table and do queries like
>> SELECT * FROM table WHERE rand_field < RAND() LIMIT 1; to sample random
>> item.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> пн, 20 мая 2019 г. в 04:50, Stéphane Thibaud :
>>
>>> As a small addition: it would really help if Ignite had a hashing
>>> function for this, but I only see AES encryption.
>>>
>>>
>>> Kind regards,
>>>
>>> Stéphane Thibaud
>>>
>>> 2019年5月19日(日) 20:59 Stéphane Thibaud :
>>>
 Hello Ignite users,

 I am considering to sample randomly on large amounts of data, but I was
 wondering what would be the most efficient way for this. Right now, I think
 I might need cluster-based randomness using a MOD function as described
 here: https://www.alandix.com/academic/topics/random/sampling-SQL.html

 I currently have a UUID column (uuid4), which I think can be used for
 it, but I might need some bit manipulation to get the non-random parts out
 of the UUID.
 Do you think this is indeed the most straightforward way to do it?


 Kind regards,

 Stéphane Thibaud

>>>


Re: Recommended way of random sampling

2019-05-20 Thread Stéphane Thibaud
Hello Ilya,

Thank you for that suggestion. On a traditional database I know that
approach does not scale well, since a random number is first assigned to
all rows (it scales linearly with the number of rows if I am not mistaken).
Do you think this would be different for Ignite?


Kind regards,

Stéphane Thibaud

2019年5月20日(月) 15:53 Ilya Kasnacheev :

> Hello!
>
> You can have a random indexed field in your table and do queries like
> SELECT * FROM table WHERE rand_field < RAND() LIMIT 1; to sample random
> item.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> пн, 20 мая 2019 г. в 04:50, Stéphane Thibaud :
>
>> As a small addition: it would really help if Ignite had a hashing
>> function for this, but I only see AES encryption.
>>
>>
>> Kind regards,
>>
>> Stéphane Thibaud
>>
>> 2019年5月19日(日) 20:59 Stéphane Thibaud :
>>
>>> Hello Ignite users,
>>>
>>> I am considering to sample randomly on large amounts of data, but I was
>>> wondering what would be the most efficient way for this. Right now, I think
>>> I might need cluster-based randomness using a MOD function as described
>>> here: https://www.alandix.com/academic/topics/random/sampling-SQL.html
>>>
>>> I currently have a UUID column (uuid4), which I think can be used for
>>> it, but I might need some bit manipulation to get the non-random parts out
>>> of the UUID.
>>> Do you think this is indeed the most straightforward way to do it?
>>>
>>>
>>> Kind regards,
>>>
>>> Stéphane Thibaud
>>>
>>


Re: Recommended way of random sampling

2019-05-20 Thread Stéphane Thibaud
Excuse me, I just sent my response, but I see that you actually suggested a
new column... in that case this would work, but I think it's a bit
unfortunate to have to store random numbers.


Kind regards,

Stéphane Thibaud

2019年5月20日(月) 19:22 Stéphane Thibaud :

> Hello Ilya,
>
> Thank you for that suggestion. On a traditional database I know that
> approach does not scale well, since a random number is first assigned to
> all rows (it scales linearly with the number of rows if I am not mistaken).
> Do you think this would be different for Ignite?
>
>
> Kind regards,
>
> Stéphane Thibaud
>
> 2019年5月20日(月) 15:53 Ilya Kasnacheev :
>
>> Hello!
>>
>> You can have a random indexed field in your table and do queries like
>> SELECT * FROM table WHERE rand_field < RAND() LIMIT 1; to sample random
>> item.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> пн, 20 мая 2019 г. в 04:50, Stéphane Thibaud :
>>
>>> As a small addition: it would really help if Ignite had a hashing
>>> function for this, but I only see AES encryption.
>>>
>>>
>>> Kind regards,
>>>
>>> Stéphane Thibaud
>>>
>>> 2019年5月19日(日) 20:59 Stéphane Thibaud :
>>>
 Hello Ignite users,

 I am considering to sample randomly on large amounts of data, but I was
 wondering what would be the most efficient way for this. Right now, I think
 I might need cluster-based randomness using a MOD function as described
 here: https://www.alandix.com/academic/topics/random/sampling-SQL.html

 I currently have a UUID column (uuid4), which I think can be used for
 it, but I might need some bit manipulation to get the non-random parts out
 of the UUID.
 Do you think this is indeed the most straightforward way to do it?


 Kind regards,

 Stéphane Thibaud

>>>


Re: Re:Re:RE: Re: Node can not join cluster

2019-05-20 Thread Vishalan
What was the solution to the above problem? I am facing the same issue.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Certificate upgrade in Ignite Cluster

2019-05-20 Thread Ilya Kasnacheev
Hello!

Can your new certificate be validated with your existing trust store? If it
can, you can just stop the nodes one by one and bring them back with the
revised certificate.

If it can't, you first have to push a new trust store to all nodes in the
same fashion, one which contains trusts for both the new and old
certificates while you transition.

Regards,
-- 
Ilya Kasnacheev


пт, 10 мая 2019 г. в 10:37, ANKIT SINGHAI :

> Hi,
> I meant tls/ssl=on.
>
> On Fri, May 10, 2019, 12:23 Nikolay Izhikov  wrote:
>
>> Hello, Ankit.
>>
>> Please, clarify, what do you mean by "secure mode"?
>>
>> В Чт, 09/05/2019 в 05:33 -0700, Ankit Singhai пишет:
>> > Hello,
>> > We are running an Ignite cluster with 3 servers and 10 client nodes in
>> > secure mode. Now that the certificate is going to expire, how can we
>> > configure the new certificate without taking any downtime?
>> >
>> > Thanks,
>> > Ankit Singhai
>> >
>> >
>> >
>> > --
>> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Ignite WebSessionFilter client threads eating up CPU usage

2019-05-20 Thread Ilya Kasnacheev
Hello!

I can only guess that maybe your Web Sessions are very large so there are
performance implications. Can you make sure you don't hold bulky objects in
Sessions?

Regards,
-- 
Ilya Kasnacheev


вт, 14 мая 2019 г. в 10:43, wbyeh :

> We are using Ignite 2.7 as a 4-server-node session cluster, but in recent
> weeks we have found some Ignite WebSessionFilter client threads eating up
> CPU usage after days of running.
> The GC is running smoothly.
>
> How do we address the root cause?
> Is it a known Ignite 2.x issue with StripedExecutor?
>
> Can experts here point out any solutions?
>
> Attached file contained thread dump & 'top' usage.
>
> 811261.jstack
> 
>
>
> Thank you!
>
> -Patrick
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: count not updated through SQL using DataStreamer

2019-05-20 Thread Ilya Kasnacheev
Hello!

Can you show how you create the FOO table? I guess the key/value types the
cache expects are not the ones being streamed in, and that's why no data is
seen.

Regards,
-- 
Ilya Kasnacheev


чт, 16 мая 2019 г. в 23:49, Banias H :

> Hello Ignite experts,
>
> I am very new to Ignite. I am trying to ingest 15M rows of data using a
> DataStreamer into a table in a two-node Ignite cluster (v2.7), but I run
> into a problem where the data does not show up when running SQL in DBeaver.
>
> Here is the list of steps I took:
>
> 1. Start up two nodes using the following xml.
>
> <?xml version="1.0" encoding="UTF-8"?>
> <beans xmlns="http://www.springframework.org/schema/beans"
>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>        xsi:schemaLocation="
>        http://www.springframework.org/schema/beans
>        http://www.springframework.org/schema/beans/spring-beans.xsd">
>   <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>     <property name="dataStorageConfiguration">
>       <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
>         <property name="defaultDataRegionConfiguration">
>           <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>             <!-- ... -->
>           </bean>
>         </property>
>       </bean>
>     </property>
>     <property name="discoverySpi">
>       <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>         <property name="ipFinder">
>           <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>             <property name="addresses">
>               <list>
>                 <value>IP1</value>
>                 <value>IP2</value>
>               </list>
>             </property>
>           </bean>
>         </property>
>       </bean>
>     </property>
>   </bean>
> </beans>
>
> 2. Use the Python thin client to get the cache SQL_PUBLIC_FOO and insert
> ten rows of data. After this step, both the thin client and the DBeaver
> SQL client report the same count:
>
> - thin client:
>
> nodes = [
> (IP1, 10800),
> (IP2, 10800),
> ]
> client = Client()
> client.connect(nodes)
> cache = client.get_cache("SQL_PUBLIC_FOO")
> print(cache.get_size())
>
> returns 10
>
> - SQL through DBeaver
>
> SELECT COUNT(*) FROM FOO
>
> returns 10
>
> 3. However, when I tried using a DataStreamer to ingest 100 rows into the
> cache SQL_PUBLIC_FOO, only the thin client showed the new count; SQL
> returned the old count:
>
> - ingesting through DataStreamer
> //I ran the jar on one of the Ignite nodes
> String CONFIG_FILE = ;
> Ignition.setClientMode(true);
> Ignite ignite = Ignition.start(CONFIG_FILE);
> IgniteDataStreamer stmr =
> ignite.dataStreamer("SQL_PUBLIC_FOO");
> stmr.addData(rowCount, value);
>
> - thin client:
>
> nodes = [
> (IP1, 10800),
> (IP2, 10800),
> ]
> client = Client()
> client.connect(nodes)
> cache = client.get_cache("SQL_PUBLIC_FOO")
> cache.get_size()
>
> returns 110
>
> - SQL through DBeaver
>
> SELECT COUNT(*) FROM FOO
>
> returns 10
>
> Would anyone shed some light on what I did wrong? I would love to use a
> DataStreamer to put much more data into the cache, so I want to be able
> to query it through SQL.
>
> Thanks for the help. I appreciate it.
>
> Regards,
> Calvin
>
>


Re: Recommended way of random sampling

2019-05-20 Thread Ilya Kasnacheev
Hello!

You can have a random indexed field in your table and do queries like
SELECT * FROM table WHERE rand_field < RAND() LIMIT 1; to sample random
item.

Regards,
-- 
Ilya Kasnacheev


пн, 20 мая 2019 г. в 04:50, Stéphane Thibaud :

> As a small addition: it would really help if Ignite had a hashing function
> for this, but I only see AES encryption.
>
>
> Kind regards,
>
> Stéphane Thibaud
>
> 2019年5月19日(日) 20:59 Stéphane Thibaud :
>
>> Hello Ignite users,
>>
>> I am considering to sample randomly on large amounts of data, but I was
>> wondering what would be the most efficient way for this. Right now, I think
>> I might need cluster-based randomness using a MOD function as described
>> here: https://www.alandix.com/academic/topics/random/sampling-SQL.html
>>
>> I currently have a UUID column (uuid4), which I think can be used for it,
>> but I might need some bit manipulation to get the non-random parts out of
>> the UUID.
>> Do you think this is indeed the most straightforward way to do it?
>>
>>
>> Kind regards,
>>
>> Stéphane Thibaud
>>
>


Re: Ignite with Hibernate 5.3

2019-05-20 Thread Ilya Kasnacheev
Hello!

There are no solid plans to release 2.8.0.

Your best bet is probably trying to backport and build this module from
master.

Regards,
-- 
Ilya Kasnacheev


ср, 8 мая 2019 г. в 18:59, Tomasz Prus :

> When will it be released?
>
> > On 8 May 2019, at 15:29, Vladimir Pligin  wrote:
> >
> > Hi Tomasz,
> >
> > It seems like Hibernate 5.3 support has already been added to Ignite
> > in the master branch.
> > I suppose this feature will be delivered as part of an upcoming Ignite
> > release. Here is the corresponding ticket:
> > https://issues.apache.org/jira/browse/IGNITE-9893; the fix version is 2.8.
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: SQL API in C++ thin client

2019-05-20 Thread Ilya Kasnacheev
Hello!

Why don't you use ODBC? ODBC is a native C++ SQL API and Ignite supports it.

Regards,
-- 
Ilya Kasnacheev


вс, 19 мая 2019 г. в 16:38, matanlevy :

> Thanks.
> I saw this JIRA item just now:
> https://issues.apache.org/jira/browse/IGNITE-9109
>
> Is it supposed to be implemented in Ignite 2.8? The fix version is not
> mentioned there.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Multithreading using pyignite

2019-05-20 Thread Ilya Kasnacheev
Hello!

I guess the recommendation here is to create a new client in every thread
where you need them.

Regards,
-- 
Ilya Kasnacheev


вс, 19 мая 2019 г. в 08:39, Stéphane Thibaud :

> Hello Ignite users,
>
> I am wondering whether the pyignite library is thread-safe. I get strange
> errors when I try to run the below script. I cannot find anything about
> threads in the docs at
> https://apache-ignite-binary-protocol-client.readthedocs.io/en/latest/ .
> Should I use the library in a different way or is multithreading simply
> not supported?
>
> from threading import Thread
>
> import pyignite
>
> IGNITE_PORT = 10800
> WEB_HOST = '127.0.0.1'
> DB_CLIENT = pyignite.Client()
> DB_CLIENT.connect(WEB_HOST, IGNITE_PORT)
>
>
> test_cache = DB_CLIENT.get_or_create_cache("test_cache")
> test_cache.put("a", "b")
> for _ in range(1000):
>     Thread(target=lambda: print(test_cache.get("a"))).start()
>
>
> Kind regards,
>
> Stéphane Thibaud
>