Re: [GENERAL] Logical decoding CPU-bound w/ large number of tables

2017-05-05 Thread Andres Freund
On 2017-05-05 21:32:27 -0400, Tom Lane wrote:
> Andres Freund  writes:
> > On 2017-05-05 14:24:07 -0600, Mathieu Fenniak wrote:
> >> It appears that most of the time is spent in the
> >> RelfilenodeMapInvalidateCallback and CatalogCacheIdInvalidate cache
> >> invalidation callbacks, both of which appear to be invalidating caches
> >> based upon the cache value.
> 
> > I think optimizing those has some value (and I see Tom is looking at
> > that aspect), but the bigger thing would probably be to do fewer lookups.
> 
> I'm confused --- the lookup side of things is down in the noise in
> Mathieu's trace.

Err, sorry. Completely mangled that sentence.  Executing fewer
invalidations.  We currently are likely re-executing the same set of
invalidations constantly in Mathieu's case.

Background: When decoding a transaction during logical decoding we're
currently re-executing *all* of a transaction's own cache invalidations,
if it has any, at every new command id observed in the WAL stream.  That's
because invalidations are currently only sent at commit, so we don't know
"when" they actually happened.  But I think there's some very low-hanging
fruit in reducing the frequency at which those are executed.

In many cases where there are just a few schema changes in a transaction,
this doesn't hurt badly.  But if you have a transaction that does a
boatload of schema changes *and* also has a lot of other changes, it gets
expensive.

Mathieu: The above also suggests a possible workaround: you can try
separating larger amounts of data manipulation from schema changes by
putting them into separate transactions.
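
A rough sketch of what that separation looks like (the table name and row
counts here are invented, purely for illustration).  Instead of mixing DDL
and bulk DML in a single transaction, run them as two:

    -- schema change on its own: its invalidations stay in a small transaction
    BEGIN;
    ALTER TABLE tenant1.orders ADD COLUMN note text;
    COMMIT;

    -- bulk data change separately: no invalidations to replay while decoding it
    BEGIN;
    INSERT INTO tenant1.orders (id, note)
    SELECT g, 'backfill' FROM generate_series(1, 100000) AS g;
    COMMIT;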

Greetings,

Andres Freund




Re: [GENERAL] Logical decoding CPU-bound w/ large number of tables

2017-05-05 Thread Tom Lane
Andres Freund  writes:
> On 2017-05-05 14:24:07 -0600, Mathieu Fenniak wrote:
>> It appears that most of the time is spent in the
>> RelfilenodeMapInvalidateCallback and CatalogCacheIdInvalidate cache
>> invalidation callbacks, both of which appear to be invalidating caches
>> based upon the cache value.

> I think optimizing those has some value (and I see Tom is looking at
> that aspect), but the bigger thing would probably be to do fewer lookups.

I'm confused --- the lookup side of things is down in the noise in
Mathieu's trace.  Further reducing the number of lookups doesn't seem
like it helps this scenario at all.  It might matter once we've whacked
down the cost of invalidations ...

regards, tom lane




Re: [GENERAL] Logical decoding CPU-bound w/ large number of tables

2017-05-05 Thread Andres Freund
Hi,

On 2017-05-05 20:59:09 -0400, Tom Lane wrote:
> Hmm ... as for RelfilenodeMapInvalidateCallback, the lack of calls to
> hash_search() from it in your trace says that it usually isn't doing
> anything useful.  All the time is being spent in hash_seq_search,
> uselessly iterating over the hashtable.  I'm inclined to think that
> we need a smarter data structure there, maybe an independent hashtable
> tracking the reverse map from relation OID to filenode map entry.

Yeah, that might be worthwhile.  Let me try to address the issue that we
do way too much invalidation first; then we can check whether this path is
still hot.  On the other hand, it's still a dumb invalidation approach, so
if somebody feels like working on this...


> As for CatalogCacheIdInvalidate, I wonder how many of those cycles
> are doing something useful, and how many are being wasted in the outer
> loop that just iterates over the cache list.  We could trivially get
> rid of that outer search by using syscache.c's array, as in the
> attached patch.  It'd be interesting to see if this patch helps your
> scenario #1.  (Patch is against HEAD but seems to apply cleanly to 9.5)

I've seen this be a significant fraction of CPU time completely
independent of logical decoding, so I'd guess this is worthwhile
independently.  Not sure what a good benchmark for this would be.

Greetings,

Andres Freund




Re: [GENERAL] Logical decoding CPU-bound w/ large number of tables

2017-05-05 Thread Andres Freund
Hi,

On 2017-05-05 14:24:07 -0600, Mathieu Fenniak wrote:
> The stalls occur unpredictably on my production system, but generally seem
> to be correlated with schema operations.  My source database has about
> 100,000 tables; it's a one-schema-per-tenant multi-tenant SaaS system.

I'm unfortunately not entirely surprised that you're seeing issues in
that case.  We're invalidating internal caches a bit too aggressively,
and that invalidation is triggered by schema changes.


> I've performed a CPU sampling with the OSX `sample` tool based upon
> reproduction approach #1:
> https://gist.github.com/mfenniak/366d7ed19b2d804f41180572dc1600d8  It
> appears that most of the time is spent in the
> RelfilenodeMapInvalidateCallback and CatalogCacheIdInvalidate cache
> invalidation callbacks, both of which appear to be invalidating caches
> based upon the cache value.

I think optimizing those has some value (and I see Tom is looking at
that aspect), but the bigger thing would probably be to do fewer lookups.


> Has anyone else run into this kind of performance problem?  Any thoughts on
> how it might be resolved?  I don't mind putting in the work if someone
> could describe what is happening here, and have a discussion with me about
> what kind of changes might be necessary to improve the performance.

If you could provide an easily runnable sql script that reproduces the
issue, I'll have a look.  I think I have a rough idea what to do.


Greetings,

Andres Freund




Re: [GENERAL] PG96 pg_restore connecting to PG95 causes ERROR: unrecognized configuration parameter "idle_in_transaction_session_timeout"

2017-05-05 Thread Tom Lane
Justin Pryzby  writes:
> When doing a dump+restore upgrade, it's commonly recommended to use the later
> version of pg_restore:

> https://www.postgresql.org/docs/current/static/upgrading.html
> "It is recommended that you use the pg_dump and pg_dumpall programs from the
> newer version of PostgreSQL, to take advantage of enhancements that might have
> been made in these programs. Current releases of the dump programs can read
> data from any server version back to 7.0."

That says to use the pg_dump version *corresponding to the destination
server version*, not the latest version you can find anywhere.

No version of pg_dump has ever promised that its output would load
perfectly cleanly into lower-version destination servers, and that's
not a set of new compatibility constraints that I'd want to take on.

regards, tom lane




Re: [GENERAL] Logical decoding CPU-bound w/ large number of tables

2017-05-05 Thread Tom Lane
Mathieu Fenniak  writes:
> I'm attempting to use logical decoding with the streaming replication
> protocol to perform change-data-capture on PostgreSQL 9.5.4.  I'm seeing
> the replication stream "stall" for long periods of time where the walsender
> process will be pinned at 100% CPU utilization, but no data is being sent
> to my client.

> I've performed a CPU sampling with the OSX `sample` tool based upon
> reproduction approach #1:
> https://gist.github.com/mfenniak/366d7ed19b2d804f41180572dc1600d8
> It appears that most of the time is spent in the
> RelfilenodeMapInvalidateCallback and CatalogCacheIdInvalidate cache
> invalidation callbacks, both of which appear to be invalidating caches
> based upon the cache value.

Hmm ... as for RelfilenodeMapInvalidateCallback, the lack of calls to
hash_search() from it in your trace says that it usually isn't doing
anything useful.  All the time is being spent in hash_seq_search,
uselessly iterating over the hashtable.  I'm inclined to think that
we need a smarter data structure there, maybe an independent hashtable
tracking the reverse map from relation OID to filenode map entry.

As for CatalogCacheIdInvalidate, I wonder how many of those cycles
are doing something useful, and how many are being wasted in the outer
loop that just iterates over the cache list.  We could trivially get
rid of that outer search by using syscache.c's array, as in the
attached patch.  It'd be interesting to see if this patch helps your
scenario #1.  (Patch is against HEAD but seems to apply cleanly to 9.5)

Most likely, your scenario #2 is completely stuck on the
RelfilenodeMapInvalidateCallback issue, though it would be good
to get a trace to confirm that.

regards, tom lane

diff --git a/src/backend/utils/cache/catcache.c b/src/backend/utils/cache/catcache.c
index c27186f..b19044c 100644
*** a/src/backend/utils/cache/catcache.c
--- b/src/backend/utils/cache/catcache.c
*** CatCacheRemoveCList(CatCache *cache, Cat
*** 422,428 
  
  
  /*
!  *	CatalogCacheIdInvalidate
   *
   *	Invalidate entries in the specified cache, given a hash value.
   *
--- 422,428 
  
  
  /*
!  *	CatCacheInvalidate
   *
   *	Invalidate entries in the specified cache, given a hash value.
   *
*** CatCacheRemoveCList(CatCache *cache, Cat
*** 440,510 
   *	This routine is only quasi-public: it should only be used by inval.c.
   */
  void
! CatalogCacheIdInvalidate(int cacheId, uint32 hashValue)
  {
! 	slist_iter	cache_iter;
  
! 	CACHE1_elog(DEBUG2, "CatalogCacheIdInvalidate: called");
  
  	/*
! 	 * inspect caches to find the proper cache
  	 */
- 	slist_foreach(cache_iter, &CacheHdr->ch_caches)
- 	{
- 		CatCache   *ccp = slist_container(CatCache, cc_next, cache_iter.cur);
- 		Index		hashIndex;
- 		dlist_mutable_iter iter;
  
! 		if (cacheId != ccp->id)
! 			continue;
! 
! 		/*
! 		 * We don't bother to check whether the cache has finished
! 		 * initialization yet; if not, there will be no entries in it so no
! 		 * problem.
! 		 */
  
! 		/*
! 		 * Invalidate *all* CatCLists in this cache; it's too hard to tell
! 		 * which searches might still be correct, so just zap 'em all.
! 		 */
! 		dlist_foreach_modify(iter, &ccp->cc_lists)
! 		{
! 			CatCList   *cl = dlist_container(CatCList, cache_elem, iter.cur);
  
! 			if (cl->refcount > 0)
! 				cl->dead = true;
! 			else
! 				CatCacheRemoveCList(ccp, cl);
! 		}
  
! 		/*
! 		 * inspect the proper hash bucket for tuple matches
! 		 */
! 		hashIndex = HASH_INDEX(hashValue, ccp->cc_nbuckets);
! 		dlist_foreach_modify(iter, &ccp->cc_bucket[hashIndex])
  		{
! 			CatCTup    *ct = dlist_container(CatCTup, cache_elem, iter.cur);
! 
! 			if (hashValue == ct->hash_value)
  			{
! 				if (ct->refcount > 0 ||
! 					(ct->c_list && ct->c_list->refcount > 0))
! 				{
! 					ct->dead = true;
! 					/* list, if any, was marked dead above */
! 					Assert(ct->c_list == NULL || ct->c_list->dead);
! 				}
! 				else
! 					CatCacheRemoveCTup(ccp, ct);
! 				CACHE1_elog(DEBUG2, "CatalogCacheIdInvalidate: invalidated");
  #ifdef CATCACHE_STATS
! 				ccp->cc_invals++;
  #endif
! 				/* could be multiple matches, so keep looking! */
! 			}
  		}
- 		break;	/* need only search this one cache */
  	}
  }
  
--- 440,496 
   *	This routine is only quasi-public: it should only be used by inval.c.
   */
  void
! CatCacheInvalidate(CatCache *cache, uint32 hashValue)
  {
! 	Index		hashIndex;
! 	dlist_mutable_iter iter;
  
! 	CACHE1_elog(DEBUG2, "CatCacheInvalidate: called");
  
  	/*
! 	 * We don't bother to check whether the cache has finished initialization
! 	 * yet; if not, there will be no entries in it so no problem.
  	 */
  
! 	/*
! 	 * Invalidate *all* CatCLists in this cache; it's too hard to tell which
! 	 * searches might still be correct, so just zap 'em all.
! 	 */
! 	dlist_foreach_modify(iter, &cache->cc_lists)
! 	{
! 		CatCList   *cl = dlist_container(CatCList, cache_elem, iter.cur);
  
! 		if 

[GENERAL] PG96 pg_restore connecting to PG95 causes ERROR: unrecognized configuration parameter "idle_in_transaction_session_timeout"

2017-05-05 Thread Justin Pryzby
When doing a dump+restore upgrade, it's commonly recommended to use the later
version of pg_restore:

https://www.postgresql.org/docs/current/static/upgrading.html
"It is recommended that you use the pg_dump and pg_dumpall programs from the
newer version of PostgreSQL, to take advantage of enhancements that might have
been made in these programs. Current releases of the dump programs can read
data from any server version back to 7.0."

In the immediate case, I was loading data from PG95 dumps into a PG95 server
(not an upgrade), using PG96 pg_restore, and getting:

ERROR:  unrecognized configuration parameter "idle_in_transaction_session_timeout"

I can't see anybody has raised that issue before.  Should pg_restore check the
remote server version and avoid sending commands not expected to be understood?
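
For context, that parameter comes from the connection-setup preamble that the
9.6 client tools emit before restoring anything.  Roughly (a sketch from
memory, not copied from an actual dump):

    SET statement_timeout = 0;
    SET lock_timeout = 0;
    SET idle_in_transaction_session_timeout = 0;  -- new in 9.6; a 9.5 server rejects it
    SET client_encoding = 'UTF8';

With a plain-format dump the offending line can simply be filtered out before
feeding the file to psql; with a custom-format archive you can first convert it
to a plain script with pg_restore -f and filter that, or just use the 9.5
pg_restore binary.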

(Yes, I know about and use pg_upgrade; however, I'm currently migrating a DB
between servers and this procedure will allow doing so with ~30min
downtime...pg_upgrade to 9.6 will be done afterwards, which is why PG96
pg_upgrade is installed).

Thanks,
Justin




[GENERAL] Logical decoding CPU-bound w/ large number of tables

2017-05-05 Thread Mathieu Fenniak
Heyo,

I'm attempting to use logical decoding with the streaming replication
protocol to perform change-data-capture on PostgreSQL 9.5.4.  I'm seeing
the replication stream "stall" for long periods of time where the walsender
process will be pinned at 100% CPU utilization, but no data is being sent
to my client.

The stalls occur unpredictably on my production system, but generally seem
to be correlated with schema operations.  My source database has about
100,000 tables; it's a one-schema-per-tenant multi-tenant SaaS system.

I've reproduced the same symptoms with two different approaches on my local
machine.  With both, I have a replication client connected via the
streaming protocol.

In reproduction approach 1, I've created a thread that inserts small sets
of data, and a thread that creates a schema w/ 500 tables and then drops
it.  With this approach, CPU usage is pinned, but data does come out of
it -- just excruciatingly slowly when compared to the same test without
the schema create & drop.

In reproduction approach 2, I've created a database w/ 100,000 tables in it
and performed a "vacuum full".  The walsender goes to 100% CPU and no data
comes out of the replication stream for hours.
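
For reference, a condensed single-session sketch in the spirit of approach 1
(schema name, table names, and counts are illustrative, not the exact test
harness; it assumes wal_level = logical and a test_decoding slot):

    -- SELECT pg_create_logical_replication_slot('test_slot', 'test_decoding');

    CREATE TABLE data_tbl (id serial PRIMARY KEY, payload text);

    BEGIN;
    CREATE SCHEMA tenant_tmp;
    DO $$
    BEGIN
      FOR i IN 1..500 LOOP
        EXECUTE format('CREATE TABLE tenant_tmp.t%s (id int)', i);
      END LOOP;
    END $$;
    INSERT INTO data_tbl (payload)
    SELECT repeat('x', 100) FROM generate_series(1, 100000);
    DROP SCHEMA tenant_tmp CASCADE;
    COMMIT;

    -- then time how long decoding the above takes:
    -- SELECT count(*) FROM pg_logical_slot_get_changes('test_slot', NULL, NULL);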

I've performed a CPU sampling with the OSX `sample` tool based upon
reproduction approach #1:
https://gist.github.com/mfenniak/366d7ed19b2d804f41180572dc1600d8  It
appears that most of the time is spent in the
RelfilenodeMapInvalidateCallback and CatalogCacheIdInvalidate cache
invalidation callbacks, both of which appear to be invalidating caches
based upon the cache value.

Has anyone else run into this kind of performance problem?  Any thoughts on
how it might be resolved?  I don't mind putting in the work if someone
could describe what is happening here, and have a discussion with me about
what kind of changes might be necessary to improve the performance.

Thanks all,


*Mathieu Fenniak* | Senior Software Architect | Phone 1-587-315-1185

*Replicon* | The leader in cloud time tracking applications - 7,800+
Customers - 70+ Countries - 1.5 Million Users


Re: [GENERAL] Can PG replace redis, amqp, s3 in the future?

2017-05-05 Thread Bill Moran
On Fri, 5 May 2017 19:52:42 +0100
Tony Finch  wrote:

> Bill Moran  wrote:
> >
> > There's a well-written article I saw recently that directly addresses
> > your question ... I'm too lazy to find it, but google will probably
> > turn it up for you.
> 
> This? http://renesd.blogspot.co.uk/2017/02/is-postgresql-good-enough.html

Oh, hey! You found it, thanks!

-- 
Bill Moran 




Re: [GENERAL] Caching and Blobs in PG? Was: Can PG replace redis, amqp, s3 in the future?

2017-05-05 Thread Peter J. Holzer
On 2017-05-05 11:46:55 -0700, John R Pierce wrote:
> On 5/5/2017 11:28 AM, Peter J. Holzer wrote:
> > On 2017-05-04 23:08:25 +0200, Sven R. Kunze wrote:
> > > On 03.05.2017 12:57, Thomas Güttler wrote:
> > > > Am 02.05.2017 um 05:43 schrieb Jeff Janes:
> > > > > No.  You can certainly use PostgreSQL to store blobs.  But then, you
> > > > > need to store the PostgreSQL data **someplace**.
> > > > > If you don't store it in S3, you have to store it somewhere else.
> > > >
> > > > I don't understand what you mean here. AFAIK storing blobs in PG is not
> > > > recommended since it is not very efficient.
> > >
> > > Seems like several people here disagree with this conventional wisdom.
> >
> > I think it depends very much on what level of "efficiency" you need. On
> > my home server (i5 processor, 32GB RAM, Samsung 850 SSD - not a piece of
> > junk, but not super powerful either) I can retrieve a small blob from a
> > 100GB table in about 0.1 ms, and for large blobs the speed approaches
> > 200MB/s. For just about everything I'd do on that server (or even at
> > work) this is easily fast enough.
>
> S3 is often used for terabyte to petabyte file collections.  I would not want
> to burden my relational database with this.

I repeat the first sentence I wrote: "I think it depends very much
on what level of 'efficiency' you need." Just because some people need
to store petabytes of blob data doesn't mean everybody does. If you need
to store petabytes of blobs, PostgreSQL may not be the right tool. But
it may be the right tool if you just need to store a few thousand PDFs.
To tell people to never store blobs in PostgreSQL because PostgreSQL is
"not efficient" is just bullshit. There are many factors which determine
how you should store your data, and "efficiency" (however that is
defined, if it's defined at all and not just used as a buzzword) is only
one of them - and rarely, in my experience, the most important one.

hp

-- 
   _  | Peter J. Holzer| A coding theorist is someone who doesn't
|_|_) || think Alice is crazy.
| |   | h...@hjp.at | -- John Gordon
__/   | http://www.hjp.at/ |http://downlode.org/Etext/alicebob.html




Re: [GENERAL] Vaccum Query

2017-05-05 Thread Vik Fearing
On 05/04/2017 02:04 PM, PAWAN SHARMA wrote:
> Hi All,
>
> Any idea what happens when we run a standard vacuum (without FULL) on a
> table which is under an exclusive lock (e.g. an ALTER statement is being
> processed on that table)?
>
> What happens in this case? Will vacuum wait for the ALTER to commit?
>

Yes.

> If so, how long ?
>

Either forever, or for however long lock_timeout and/or statement_timeout
are set to (or until the user cancels it, e.g. with Ctrl+C).
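
A concrete sketch of bounding that wait (the table name is a placeholder):

    SET lock_timeout = '30s';
    VACUUM my_table;     -- cancelled with an error if the conflicting lock
                         -- isn't released within 30 seconds
    RESET lock_timeout;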

> If not, then what ?
>

If any of those limits are reached, the vacuum is cancelled.

> And what actions can be taken to prevent this?
>

Prevent what?  Even if the vacuum could run while the other transaction
had the exclusive lock, it wouldn't be able to do any work.

-- 
Vik Fearing  +33 6 46 75 15 36
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [GENERAL] Can PG replace redis, amqp, s3 in the future?

2017-05-05 Thread Tony Finch
Bill Moran  wrote:
>
> There's a well-written article I saw recently that directly addresses
> your question ... I'm too lazy to find it, but google will probably
> turn it up for you.

This? http://renesd.blogspot.co.uk/2017/02/is-postgresql-good-enough.html

Tony.
-- 
f.anthony.n.finch    http://dotat.at/  -  I xn--zr8h punycode
Irish Sea: East or northeast 5 or 6. Slight or moderate. Fair. Good.




Re: [GENERAL] Caching and Blobs in PG? Was: Can PG replace redis, amqp, s3 in the future?

2017-05-05 Thread John R Pierce

On 5/5/2017 11:28 AM, Peter J. Holzer wrote:
> On 2017-05-04 23:08:25 +0200, Sven R. Kunze wrote:
> > On 03.05.2017 12:57, Thomas Güttler wrote:
> > > Am 02.05.2017 um 05:43 schrieb Jeff Janes:
> > > > No.  You can certainly use PostgreSQL to store blobs.  But then, you
> > > > need to store the PostgreSQL data **someplace**.
> > > > If you don't store it in S3, you have to store it somewhere else.
> > >
> > > I don't understand what you mean here. AFAIK storing blobs in PG is not
> > > recommended since it is not very efficient.
> >
> > Seems like several people here disagree with this conventional wisdom.
>
> I think it depends very much on what level of "efficiency" you need. On
> my home server (i5 processor, 32GB RAM, Samsung 850 SSD - not a piece of
> junk, but not super powerful either) I can retrieve a small blob from a
> 100GB table in about 0.1 ms, and for large blobs the speed approaches
> 200MB/s. For just about everything I'd do on that server (or even at
> work) this is easily fast enough.



S3 is often used for terabyte to petabyte file collections.   I would 
not want to burden my relational database with this.



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Caching and Blobs in PG? Was: Can PG replace redis, amqp, s3 in the future?

2017-05-05 Thread Peter J. Holzer
On 2017-05-04 23:08:25 +0200, Sven R. Kunze wrote:
> On 03.05.2017 12:57, Thomas Güttler wrote:
> >Am 02.05.2017 um 05:43 schrieb Jeff Janes:
> >>No.  You can certainly use PostgreSQL to store blobs.  But then, you
> >>need to store the PostgreSQL data **someplace**.
> >>If you don't store it in S3, you have to store it somewhere else.
> >
> >I don't understand what you mean here. AFAIK storing blobs in PG is not
> >recommended since it is not very efficient.
> 
> Seems like several people here disagree with this conventional wisdom.

I think it depends very much on what level of "efficiency" you need. On
my home server (i5 processor, 32GB RAM, Samsung 850 SSD - not a piece of
junk, but not super powerful either) I can retrieve a small blob from a
100GB table in about 0.1 ms, and for large blobs the speed approaches
200MB/s. For just about everything I'd do on that server (or even at
work) this is easily fast enough.
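
For concreteness, the kind of storage being discussed here is nothing more
exotic than a bytea column (table and column names invented for the example):

    CREATE TABLE documents (
        id      bigserial PRIMARY KEY,
        name    text NOT NULL,
        content bytea NOT NULL    -- the blob itself; large values are TOASTed automatically
    );

    INSERT INTO documents (name, content)
    VALUES ('example.pdf', '\x255044462d312e35'::bytea);

    SELECT content FROM documents WHERE name = 'example.pdf';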

Sure, just telling the kernel "send data from file descriptor A (which
happens to be a file) to file descriptor B (a socket)" is a lot more
efficient than copying data from disk into a postgresql process, then
from that process to an application server, from that to the webserver,
which finally sends it to the socket. But if that just lets my server
be 99.9% idle instead of 99.0% idle, I haven't gained much. Similarly,
if my server spends 90% of its resources doing other stuff, I won't
gain much by optimizing this (I should rather optimize that other stuff
it's spending so much time on).

I am in this regard a firm believer in not optimizing prematurely. Do
whatever makes sense from an application point of view. If the blobs are
logically part of some other data (e.g. PDFs in a literature database),
store them together (either all of them in PostgreSQL, or all in some
NoSQL database, or maybe on stone tablets, if that makes sense for some
reason). Only if you have good reason[1] to believe that physically
separating data which logically belongs together will resolve a
bottleneck, then by all means separate them.

hp

[1] "I read somewhere on the internet" is usually not a good reason.

-- 
   _  | Peter J. Holzer| A coding theorist is someone who doesn't
|_|_) || think Alice is crazy.
| |   | h...@hjp.at | -- John Gordon
__/   | http://www.hjp.at/ |http://downlode.org/Etext/alicebob.html




Re: [GENERAL] Caching and Blobs in PG? Was: Can PG replace redis, amqp, s3 in the future?

2017-05-05 Thread Jeff Janes
On Wed, May 3, 2017 at 3:57 AM, Thomas Güttler  wrote:

> Am 02.05.2017 um 05:43 schrieb Jeff Janes:
>
>> On Sun, Apr 30, 2017 at 4:37 AM, Thomas Güttler
>> <guettl...@thomas-guettler.de> wrote:
>>
>>> Is it possible that PostgreSQL will replace these building blocks in
>>> the future?
>>>
>>>  - redis (Caching)
>>
>> PostgreSQL has its own caching.  It might not be quite as effective as
>> redis', but you can use it if you are willing to take those trade offs.
>
> What kind of caching does PG offer?
>

It has shared_buffers to cache the data it needs frequently (not query
results, but the data needed to produce the results), and also uses the
filesystem cache.  This is what I am referring to.  I wouldn't recommend
using PostgreSQL simply as a cache for something else, if you don't want
any other features of the database.  But if you want to throw Redis up as a
layer of cache in front of PostgreSQL, maybe you should first see if that
RAM, and a bit of tuning, can be used to make PostgreSQL fast enough to not
need the Redis cache.
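
A rough way to gauge that before reaching for an external cache (a sketch;
the acceptable hit percentage is a judgment call, and the counters are
cumulative since the last stats reset):

    SELECT datname,
           blks_hit,
           blks_read,
           round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
    FROM pg_stat_database
    WHERE datname = current_database();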


>>>  - s3 (Blob storage)
>>
>> No.  You can certainly use PostgreSQL to store blobs.  But then, you need
>> to store the PostgreSQL data **someplace**.
>> If you don't store it in S3, you have to store it somewhere else.
>
> I don't understand what you mean here. AFAIK storing blobs in PG is not
> recommended since it is not very efficient.
>

If the metadata is stored in PG and the blobs themselves are stored
individually in S3, you have a transaction atomicity problem. Solving that
is not likely to be very efficient, either.  You have to pick your poison.
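
One common way to live with that poison (a sketch; the table and key names
are invented): commit the metadata first with a "not yet uploaded" marker,
upload the object outside the transaction, then flip the marker, and
periodically sweep rows that never got flipped.

    CREATE TABLE blob_meta (
        id         bigserial PRIMARY KEY,
        s3_key     text NOT NULL,
        uploaded   boolean NOT NULL DEFAULT false,
        created_at timestamptz NOT NULL DEFAULT now()
    );

    -- step 1: reserve the metadata row
    INSERT INTO blob_meta (s3_key) VALUES ('tenant1/doc123.pdf') RETURNING id;
    -- step 2: upload the object to S3 under that key (outside the database)
    -- step 3: make it visible; rows still false after a grace period are orphans
    UPDATE blob_meta SET uploaded = true WHERE id = 1;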

Cheers,

Jeff


Re: [GENERAL] Link errors

2017-05-05 Thread Magnus Hagander
On Fri, May 5, 2017 at 1:30 AM, Igor Korot  wrote:

> Hi,
> [code]
> 1> Creating library vc_mswuddll\postgres_dll.lib and object
> vc_mswuddll\postgres_dll.exp
> 1>libpqd.lib(fe-connect.obj) : error LNK2019: unresolved external
> symbol __imp__WSAIoctl@36 referenced in function _setKeepalivesWin32
> 1>libpqd.lib(ip.obj) : error LNK2001: unresolved external symbol
> __imp__WSAIoctl@36
> 1>libpqd.lib(fe-connect.obj) : error LNK2019: unresolved external
> symbol __imp__DeleteSecurityContext@4 referenced in function
> _closePGconn
> 1>libpqd.lib(fe-connect.obj) : error LNK2019: unresolved external
> symbol __imp__FreeCredentialsHandle@4 referenced in function
> _closePGconn
> 1>libpqd.lib(fe-auth.obj) : error LNK2019: unresolved external symbol
> _FreeContextBuffer@4 referenced in function _pg_SSPI_continue
> 1>libpqd.lib(fe-auth.obj) : error LNK2019: unresolved external symbol
> _InitializeSecurityContextA@48 referenced in function
> _pg_SSPI_continue
> 1>libpqd.lib(fe-auth.obj) : error LNK2019: unresolved external symbol
> _AcquireCredentialsHandleA@36 referenced in function _pg_SSPI_startup
> 1>libpqd.lib(ip.obj) : error LNK2019: unresolved external symbol
> __imp__WSASocketA@24 referenced in function _pg_foreach_ifaddr
> 1>..\dbhandler\vc_mswuddll\postgres_dll.dll : fatal error LNK1120: 7
> unresolved externals
>
> [/code]
>
> And I'm linking with the following libraries:
>
> kernel32.lib;user32.lib;gdi32.lib;comdlg32.lib;winspool.lib;
> winmm.lib;shell32.lib;shlwapi.lib;comctl32.lib;ole32.lib;
> oleaut32.lib;uuid.lib;rpcrt4.lib;advapi32.lib;version.lib;
> wsock32.lib;wininet.lib;%(
>
> What am I missing? I am linking with wsock32.lib...
>


I suggest you try building it with the supported method (the msvc build
system in src/tools/msvc), and then inspect the difference.

You can also look up those references in the Microsoft docs (for each API
function there is a listing at the bottom of the page telling you which
header and library is needed). From doing that, AFAICT, you are missing
ws2_32.lib and secur32.lib.

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/