Unsubscribe me from mailing list

2018-06-18 Thread Harsh Mishra
-- 
Regards,
*Harsh Mishra* | Solution Architect
Quovantis Technologies
Mobile: +91-9958311308
Skype ID: harshmishra1984
www.quovantis.com


Apache Ignite with PostgreSQL

2018-05-30 Thread Harsh Mishra
*Objective:* To scale an existing application where PostgreSQL is used as
the data store.

*How can Apache Ignite help:* We have an application with many modules, and
all the modules use some shared tables, so we have a single PostgreSQL
master database, already running on large AWS SSD machines. We already use
Redis for caching, but its limitation for us is that partial updates and
querying on secondary indexes are not easy.

*Our use case:* We have two big tables: member and subscription. It is a
many-to-many relation where one member is subscribed to multiple groups,
and we maintain the subscriptions in the subscription table. The member
table holds around 40 million rows, so its size is roughly 40M x 1.5KB ~= 60GB.
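The sizing estimate above can be checked with quick back-of-the-envelope
arithmetic (the 40 million rows and ~1.5KB average row size are the figures
from the paragraph above):

```python
# Back-of-the-envelope check of the memory estimate quoted above:
# ~40 million member rows at ~1.5 KB each.
rows = 40_000_000
avg_row_bytes = 1.5 * 1024          # 1.5 KB per row

total_gb = rows * avg_row_bytes / 1024**3
print(round(total_gb, 1))  # -> 57.2, i.e. roughly the ~60GB cited above
```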

*Challenge*

The challenge is that we can't archive this data, since every member is
active and there are frequent reads and updates on this table.

*My thought:*

From what I read in the documentation, Apache Ignite can provide a caching
layer on top of the PostgreSQL tables.

Now, I have a couple of questions from an implementation point of view:

   1. Does Apache Ignite fit our use case? If yes, then:
   2. Will Apache Ignite keep all 60GB of data in RAM, or can we
   distribute the RAM load across multiple machines?
   3. For updating the PostgreSQL tables we use Python with SQLAlchemy
   (ORM). Will a separate call to Apache Ignite be needed to update the
   same record in memory, or is there a way for Apache Ignite to sync it
   immediately from the database?
   4. Is there enough support for Python?
   5. Is there REST API support to interact with Apache Ignite? I would
   like to avoid an ODBC connection.
   6. What if this load doubles in the next year?
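On questions 4 and 5: Ignite ships a Python thin client (pyignite) and an
HTTP REST API (the ignite-rest-http module). As a hedged sketch of the REST
route, assuming a node at localhost:8080 with REST enabled and a
hypothetical cache named memberCache, cache operations are plain GET
requests with `cmd`, `key`, `val` and `cacheName` parameters:

```python
# Sketch (assumptions): an Ignite node at localhost:8080 with the
# ignite-rest-http module enabled; cache name "memberCache" is illustrative.
from urllib.parse import urlencode

IGNITE_REST = "http://localhost:8080/ignite"

def rest_url(cmd, **params):
    """Build an Ignite REST API URL, e.g. cmd=put, cmd=get, cmd=rmv."""
    query = urlencode({"cmd": cmd, **params})
    return f"{IGNITE_REST}?{query}"

# Example: cache a member record and read it back.
put_url = rest_url("put", cacheName="memberCache", key="member:42", val="active")
get_url = rest_url("get", cacheName="memberCache", key="member:42")
# import requests; requests.get(put_url)  # would issue the call against a live node
```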

A quick answer is much appreciated; thanks in advance.



Re: Blocked: Migrate PostgreSQL JSONB data in Apache Ignite

2018-05-30 Thread Harsh Mishra
Hi Iliya,

Thanks for the reply. I will follow your suggestions.

Does the cache API support full document puts and partial updates?

The API docs seem confusing.

In my case, a table has all the normal columns (int, varchar, char, boolean,
date, timestamp) plus a custom column that holds JSON data.

My requirements are:
1. Add a new record.
2. Update, full or partial; I also have JSON and array fields.
3. Delete.
4. Query it with many WHERE clauses, which can include string operations,
logical operators and date operators.
5. JOIN (optional for now).

I could not find something that fulfills all the mentioned cases.
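Since Ignite's key-value API (per the reply below) has no native
partial-JSON update, one common pattern is client-side read-modify-write:
fetch the serialized document, patch it, and put it back. A minimal sketch,
with a plain dict standing in for the Ignite cache (with pyignite you would
call cache.get / cache.put instead):

```python
# Hedged sketch of a read-modify-write partial update; the dict "cache"
# is a stand-in for an Ignite cache, and the key names are illustrative.
import json

cache = {}  # stand-in for an Ignite cache

def put_doc(key, doc):
    """Full document write: serialize the dict and store it."""
    cache[key] = json.dumps(doc)

def patch_doc(key, partial):
    """Partial update: read the document, merge fields, write it back."""
    doc = json.loads(cache[key])   # read
    doc.update(partial)            # modify (shallow merge)
    cache[key] = json.dumps(doc)   # write back

put_doc("member:1", {"date1": "2018-05-16", "picklist1": 5})
patch_doc("member:1", {"picklist1": 7, "multipicklist1": [7, 8]})
```

Note this is two round-trips per update; the reply's suggestion of affinity
calls or a StreamReceiver would move the merge to the server side.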


- Harsh



On Wed, May 30, 2018 at 9:05 PM, ilya.kasnacheev wrote:

> Hello!
>
> Apache Ignite in SQL mode doesn't have special support for
> "destructure-able" datatypes. This means you can put your JSON into VARCHAR
> or BLOB and hope that performance will be sufficient.
>
> I would recommend using Cache API instead of SQL for such use cases, where
> you can collocate processing with data and handle complex data structures.
> Do affinity calls or use StreamReceiver, save on round-trips.
>
> Regards,
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>





Blocked: Migrate PostgreSQL JSONB data in Apache Ignite

2018-05-30 Thread Harsh Mishra
Hi,

Objective: JSON handling in Apache Ignite

I am using a PostgreSQL database, and one table has a JSONB column, so I
can add any kind of key in the DB.
How do I handle this case in Apache Ignite?

TABLE STRUCTURE (partial)

app_member_id character varying(64) DEFAULT NULL::character varying,
mobile_os character varying(64) DEFAULT NULL::character varying,
app_name character varying(64) DEFAULT NULL::character varying,
deleted boolean DEFAULT false,
preferred_language bigint,
client_member_id character varying(256) DEFAULT NULL::character varying,
*custom_data jsonb DEFAULT '{}'::jsonb,*
encrypted_data bytea,
useragent text,
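Since Ignite SQL has no JSONB type, one way to map this table, sketched
here under the assumption that the JSON is stored as text and (de)serialized
client-side (table name, key choice and template are illustrative):

```sql
-- Hedged sketch: custom_data becomes VARCHAR because Ignite SQL has no
-- JSONB; the client serializes/parses the JSON itself.
CREATE TABLE member (
  app_member_id      VARCHAR(64),
  mobile_os          VARCHAR(64),
  app_name           VARCHAR(64),
  deleted            BOOLEAN,
  preferred_language BIGINT,
  client_member_id   VARCHAR(256),
  custom_data        VARCHAR,      -- JSON document stored as text
  useragent          VARCHAR,
  PRIMARY KEY (app_member_id)
) WITH "template=partitioned";
```

The trade-off is that the dynamic keys inside custom_data are not
individually indexable or queryable in SQL, which is why the reply above
suggests the cache API for this case.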


The data in the custom_data column looks like this, but the key names are
dynamic. I read and update these custom fields very frequently.

{"date1": "2018-05-16", "time1": "00:30:00", "number2": 123, "picklist1":
5, "paragraph1": "I am loving elasticsearch.", "singleline1": "Singleline1
data", "multipicklist1": [7, 8]}

Please let me know; I am blocked on this right now.
