Hello,

Facing the same situation, I'm considering 3 solutions:
- Sharding with postgres_xl (waiting for a Pg10 release)
- Sharding with citusdata (release 7.2, compatible with Pg10 and
  pg_partman, seems interesting)
- Partitioning with PG 10 native partitioning or pg_partman (see the
  sketch below)
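
As a minimal sketch of the native-partitioning option (the table and
column names are invented for illustration; pg_partman can automate
the creation and maintenance of such partitions):

CREATE TABLE transactions (
    id         bigint      NOT NULL,
    created_at timestamptz NOT NULL,
    amount     numeric     NOT NULL
) PARTITION BY RANGE (created_at);

CREATE TABLE transactions_2018_01 PARTITION OF transactions
    FOR VALUES FROM ('2018-01-01') TO ('2018-02-01');

CREATE TABLE transactions_2018_02 PARTITION OF transactions
    FOR VALUES FROM ('2018-02-01') TO ('2018-03-01');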

With colleagues, we have tested the 3 scenarios.
Sharding looks interesting, but you have to understand how it behaves
in case of node loss, or with cross-node queries.
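
For the Citus scenario, a minimal sketch (the table and column names
are invented; create_distributed_table is the actual Citus function).
Queries that filter on the distribution column are routed to a single
worker; anything else becomes a cross-node query:

-- on the coordinator, with the citus extension installed
CREATE EXTENSION IF NOT EXISTS citus;

CREATE TABLE events (
    user_id bigint NOT NULL,
    payload jsonb
);

-- shard the table across the worker nodes by user_id
SELECT create_distributed_table('events', 'user_id');

-- routed to a single worker (filter on the distribution column)
SELECT count(*) FROM events WHERE user_id = 42;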

Thomas

2018-01-29 15:44 GMT+01:00 Melvin Davidson <melvin6...@gmail.com>:

>
>
> On Mon, Jan 29, 2018 at 9:34 AM, Matej <gma...@gmail.com> wrote:
>
>> Hi Everyone.
>>
>> We are looking at a rather large fin-tech installation. As the
>> scalability requirements are high, we are of course looking at sharding.
>>
>> I have looked at many sources on PostgreSQL sharding, but we are a
>> little confused as to whether to shard by schema, by database, or both.
>>
>>
>> So far our understanding:
>>
>> SCHEMA:
>>
>> PROS:
>> - seems native to PG
>> - backup seems easier
>> - connection pooling seems easier, as you can reuse the same connection
>> across shards (see the sketch below)
>>
>> CONS:
>> - schema changes seem a little more complicated
>> - heard of backup and maintenance problems
>> - also some caching problems
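>>
>> A minimal sketch of the pooling point above (schema and table names
>> are invented): one pooled connection can serve any shard by switching
>> the search_path:
>>
>> CREATE SCHEMA shard_01;
>> CREATE SCHEMA shard_02;
>> CREATE TABLE shard_01.accounts (id bigint, balance numeric);
>> CREATE TABLE shard_02.accounts (id bigint, balance numeric);
>>
>> -- the same pooled connection can serve either shard
>> SET search_path TO shard_01;
>> SELECT count(*) FROM accounts;  -- resolves to shard_01.accounts
>> SET search_path TO shard_02;
>> SELECT count(*) FROM accounts;  -- resolves to shard_02.accounts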
>>
>> DATABASE:
>>
>> PROS:
>> - schema changes seem a little easier
>> - backup and administration seems more robust
>>
>> CONS:
>> - heard of vacuum problems
>> - connection pooling is hard, as 100 shards would mean 100 pools
>>
>>
>> So what is actually the right approach? It would be great if someone
>> could shed some light on this.
>>
>> Thanks
>>
>>
>>
>
> You might also want to consider GridSQL. IIRC it was originally developed
> by EnterpriseDB. I saw a demo of it a few years ago and it was quite
> impressive, but I've had no interaction with it since, so you will have
> to judge for yourself.
>
>
> https://sourceforge.net/projects/gridsql/?source=navbar
>
> --
> Melvin Davidson
> I reserve the right to fantasize. Whether or not you
> wish to share my fantasy is entirely up to you.
>
