I strongly suspect many would like further details on your implementation. A well-recorded user conference session, perhaps?
> On Mar 8, 2015, at 4:57 PM, John R Pierce <pie...@hogranch.com> wrote:
>
>> On 3/8/2015 7:40 AM, Nigel Gardiner wrote:
>> I'm looking at making a data warehouse to address our rapidly spiralling
>> report query times against the OLTP. I'm looking first at what it would take
>> to make this a real-time data warehouse, as opposed to batch-driven.
>
> We use a hybrid architecture. We have a 'republisher' process that
> repeatedly slurps new data from the OLTP database and sends it to the
> back-end databases, using a 'publish/subscribe' messaging bus. Several back-end
> databases subscribe to this data, and their subscriber processes insert the
> incoming data into those OLAP and various other reporting databases. This
> way the reporting databases can have completely different schemas optimized
> for their needs, and different retention requirements than the OLTP
> database.
>
> This republisher is usually within a few seconds of live new data. In our
> case it's made fairly easy to track 'new' because all our OLTP transactions
> are event-oriented.
>
>
> --
> john r pierce                                      37N 122W
> somewhere on the middle of the left coast
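
For what it's worth, below is my rough guess at the shape of such a polling
republisher loop, in Python with psycopg2. The table and column names
(oltp_events, event_id, payload) and the publish() stub are pure invention on
my part, not a description of your actual implementation, so I'd still be very
interested in the real details (how you track the high-water mark, what bus
you publish to, how subscribers apply the data).

    # Speculative sketch of a polling republisher, assuming an
    # event-oriented OLTP schema with a monotonically increasing
    # event_id and a payload column that is JSON-serializable.
    # Names and the publish() stub are hypothetical.
    import json
    import time

    import psycopg2

    POLL_INTERVAL = 2  # seconds; "within a few seconds of live"

    def publish(channel, message):
        # Stand-in for the real publish/subscribe bus client.
        print("publish to %s: %s" % (channel, message))

    def run_republisher(dsn):
        conn = psycopg2.connect(dsn)
        conn.autocommit = True
        last_seen = 0
        while True:
            with conn.cursor() as cur:
                # 'New' is easy to spot when every OLTP transaction
                # is recorded as an event with an increasing id.
                cur.execute(
                    "SELECT event_id, payload FROM oltp_events"
                    " WHERE event_id > %s ORDER BY event_id",
                    (last_seen,))
                for event_id, payload in cur:
                    publish("oltp.events",
                            json.dumps({"id": event_id, "data": payload}))
                    last_seen = event_id
            time.sleep(POLL_INTERVAL)

    if __name__ == "__main__":
        run_republisher("dbname=oltp")

The nice property of this shape, as you say, is that each subscriber is free to
transform the events into whatever schema and retention policy its reporting
database needs, independent of the OLTP schema.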