On 05.09.2013 17:22, Tom Lane wrote:
> Heikki Linnakangas <hlinnakan...@vmware.com> writes:
>> I ran pgbench for ten seconds, and printed the number of tuples in each
>> catcache after that:
>> [ very tiny numbers ]
> I find these numbers a bit suspicious. For example, we must have hit at
> least 13 different system catalogs, and more than that many indexes, in
> the course of populating the syscaches you show as initialized. How is
> it there are only 4 entries in the RELOID cache? I wonder if there were
> cache resets going on.
Relcache is loaded from the init file. The lookups of those system
catalogs and indexes never hit the syscache, because the entries are
found in relcache. When I delete the init file and launch psql, without
running any queries, I get this (caches with 0 tups left out):
LOG: cache id 45 on pg_class: 7 tups
LOG: cache id 32 on pg_index: 63 tups
LOG: cache id 21 on pg_database: 1 tups
LOG: cache id 11 on pg_authid: 1 tups
LOG: cache id 10 on pg_authid: 1 tups
LOG: cache id 2 on pg_am: 1 tups
> A larger issue is that pgbench might not be too representative. In
> a quick check, I find that cache 37 (OPERNAMENSP) starts out empty,
> and contains 1 entry after "select 2=2", which is expected since
> the operator-lookup code will start by looking for int4 = int4 and
> will get an exact match. But after "select 2=2::numeric" there are
> 61 entries, as a byproduct of having thumbed through every binary
> operator named "=" to resolve the ambiguous match. We went so far
> as to install another level of caching in front of OPERNAMENSP because
> it was getting too expensive to deal with heavily-overloaded operators
> like that one. In general, we've had to spend enough sweat on optimizing
> catcache searches to make me highly dubious of any claim that the caches
> are usually almost empty.
>
> I understand your argument that resizing is so cheap that it might not
> matter, but nonetheless reducing these caches as far as you're suggesting
> sounds to me to be penny-wise and pound-foolish. I'm okay with setting
> them on the small side rather than on the large side as they are now, but
> not with choosing sizes that are guaranteed to result in resizing cycles
> during startup of any real app.
Ok, committed the attached.
To choose the initial sizes, I put a WARNING into the rehash function,
ran the regression suite, and adjusted the sizes so that most regression
tests run without rehashing. With the attached patch, 18 regression
tests cause rehashing (see regression.diffs). The tests that do are
exercising some parts of the system harder than a typical application
would: the enum regression test, for example, causes rehashing of the
pg_enum catalog cache, the aggregate test causes rehashing of
pg_aggregate, and so on. A few regression tests do a
database-wide VACUUM or ANALYZE; those touch all relations, and cause a
rehash of pg_class and pg_index.
- Heikki
diff --git a/src/backend/utils/cache/catcache.c b/src/backend/utils/cache/catcache.c
index cca0572..c467f11 100644
--- a/src/backend/utils/cache/catcache.c
+++ b/src/backend/utils/cache/catcache.c
@@ -728,21 +728,20 @@ InitCatCache(int id,
int nkeys,
const int *key,
int nbuckets)
{
CatCache *cp;
MemoryContext oldcxt;
int i;
/*
- * nbuckets is the number of hash buckets to use in this catcache.
- * Currently we just use a hard-wired estimate of an appropriate size for
- * each cache; maybe later make them dynamically resizable?
+ * nbuckets is the initial number of hash buckets to use in this catcache.
+ * It will be enlarged later if it becomes too full.
*
* nbuckets must be a power of two. We check this via Assert rather than
* a full runtime check because the values will be coming from constant
* tables.
*
* If you're confused by the power-of-two check, see comments in
* bitmapset.c for an explanation.
*/
Assert(nbuckets > 0 && (nbuckets & -nbuckets) == nbuckets);
@@ -769,19 +768,20 @@ InitCatCache(int id,
on_proc_exit(CatCachePrintStats, 0);
#endif
}
/*
* allocate a new cache structure
*
* Note: we rely on zeroing to initialize all the dlist headers correctly
*/
- cp = (CatCache *) palloc0(sizeof(CatCache) + nbuckets * sizeof(dlist_head));
+ cp = (CatCache *) palloc0(sizeof(CatCache));
+ cp->cc_bucket = palloc0(nbuckets * sizeof(dlist_head));
/*
* initialize the cache's relation information for the relation
* corresponding to this cache, and initialize some of the new cache's
* other internal fields. But don't open the relation yet.
*/
cp->id = id;
cp->cc_relname = "(not known yet)";
cp->cc_reloid = reloid;
@@ -808,18 +808,55 @@ InitCatCache(int id,
/*
* back to the old context before we return...
*/
MemoryContextSwitchTo(oldcxt);
return cp;
}
/*
+ * Enlarge a catcache, doubling the number of buckets.
+ */
+static void
+RehashCatCache(CatCache *cp)
+{
+ dlist_head *newbucket;
+ int newnbuckets;
+ int i;
+
+ elog(DEBUG1, "rehashing catalog cache id %d for %s; %d tups, %d buckets",
+ cp->id, cp->cc_relname, cp->cc_ntup, cp->cc_nbuckets);
+
+ /* Allocate a new, larger, hash table. */
+ newnbuckets = cp->cc_nbuckets * 2;
+ newbucket = (dlist_head *) MemoryContextAllocZero(CacheMemoryContext, newnbuckets * sizeof(dlist_head));
+
+ /* Move all entries from old hash table to new. */
+ for (i = 0; i < cp->cc_nbuckets; i++)
+ {
+ dlist_mutable_iter iter;
+ dlist_foreach_modify(iter, &cp->cc_bucket[i])
+ {
+ CatCTup *ct = dlist_container(CatCTup, cache_elem, iter.cur);
+ int hashIndex = HASH_INDEX(ct->hash_value, newnbuckets);
+
+ dlist_delete(iter.cur);
+ dlist_push_head(&newbucket[hashIndex], &ct->cache_elem);
+ }
+ }
+
+ /* Switch to the new array. */
+ pfree(cp->cc_bucket);
+ cp->cc_nbuckets = newnbuckets;
+ cp->cc_bucket = newbucket;
+}
+
+/*
* CatalogCacheInitializeCache
*
* This function does final initialization of a catcache: obtain the tuple
* descriptor and set up the hash and equality function links. We assume
* that the relcache entry can be opened at this point!
*/
#ifdef CACHEDEBUG
#define CatalogCacheInitializeCache_DEBUG1 \
elog(DEBUG2, "CatalogCacheInitializeCache: cache @%p rel=%u", cache, \
@@ -1678,18 +1715,25 @@ CatalogCacheCreateEntry(CatCache *cache, HeapTuple ntp,
ct->dead = false;
ct->negative = negative;
ct->hash_value = hashValue;
dlist_push_head(&cache->cc_bucket[hashIndex], &ct->cache_elem);
cache->cc_ntup++;
CacheHdr->ch_ntup++;
+ /*
+ * If the hash table has become too full, enlarge the buckets array.
+ * Quite arbitrarily, we enlarge when fill factor > 2.
+ */
+ if (cache->cc_ntup > cache->cc_nbuckets * 2)
+ RehashCatCache(cache);
+
return ct;
}
/*
* build_dummy_tuple
* Generate a palloc'd HeapTuple that contains the specified key
* columns, and NULLs for other columns.
*
* This is used to store the keys for negative cache entries and CatCList
diff --git a/src/backend/utils/cache/syscache.c b/src/backend/utils/cache/syscache.c
index 1ff2f2b..e9bdfea 100644
--- a/src/backend/utils/cache/syscache.c
+++ b/src/backend/utils/cache/syscache.c
@@ -116,19 +116,19 @@ static const struct cachedesc cacheinfo[] = {
{AggregateRelationId, /* AGGFNOID */
AggregateFnoidIndexId,
1,
{
Anum_pg_aggregate_aggfnoid,
0,
0,
0
},
- 32
+ 16
},
{AccessMethodRelationId, /* AMNAME */
AmNameIndexId,
1,
{
Anum_pg_am_amname,
0,
0,
0
@@ -171,85 +171,85 @@ static const struct cachedesc cacheinfo[] = {
{AccessMethodProcedureRelationId, /* AMPROCNUM */
AccessMethodProcedureIndexId,
4,
{
Anum_pg_amproc_amprocfamily,
Anum_pg_amproc_amproclefttype,
Anum_pg_amproc_amprocrighttype,
Anum_pg_amproc_amprocnum
},
- 64
+ 16
},
{AttributeRelationId, /* ATTNAME */
AttributeRelidNameIndexId,
2,
{
Anum_pg_attribute_attrelid,
Anum_pg_attribute_attname,
0,
0
},
- 2048
+ 32
},
{AttributeRelationId, /* ATTNUM */
AttributeRelidNumIndexId,
2,
{
Anum_pg_attribute_attrelid,
Anum_pg_attribute_attnum,
0,
0
},
- 2048
+ 128
},
{AuthMemRelationId, /* AUTHMEMMEMROLE */
AuthMemMemRoleIndexId,
2,
{
Anum_pg_auth_members_member,
Anum_pg_auth_members_roleid,
0,
0
},
- 128
+ 8
},
{AuthMemRelationId, /* AUTHMEMROLEMEM */
AuthMemRoleMemIndexId,
2,
{
Anum_pg_auth_members_roleid,
Anum_pg_auth_members_member,
0,
0
},
- 128
+ 8
},
{AuthIdRelationId, /* AUTHNAME */
AuthIdRolnameIndexId,
1,
{
Anum_pg_authid_rolname,
0,
0,
0
},
- 128
+ 8
},
{AuthIdRelationId, /* AUTHOID */
AuthIdOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 128
+ 8
},
{
CastRelationId, /* CASTSOURCETARGET */
CastSourceTargetIndexId,
2,
{
Anum_pg_cast_castsource,
Anum_pg_cast_casttarget,
0,
@@ -260,96 +260,96 @@ static const struct cachedesc cacheinfo[] = {
{OperatorClassRelationId, /* CLAAMNAMENSP */
OpclassAmNameNspIndexId,
3,
{
Anum_pg_opclass_opcmethod,
Anum_pg_opclass_opcname,
Anum_pg_opclass_opcnamespace,
0
},
- 64
+ 8
},
{OperatorClassRelationId, /* CLAOID */
OpclassOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 64
+ 8
},
{CollationRelationId, /* COLLNAMEENCNSP */
CollationNameEncNspIndexId,
3,
{
Anum_pg_collation_collname,
Anum_pg_collation_collencoding,
Anum_pg_collation_collnamespace,
0
},
- 64
+ 8
},
{CollationRelationId, /* COLLOID */
CollationOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 64
+ 8
},
{ConversionRelationId, /* CONDEFAULT */
ConversionDefaultIndexId,
4,
{
Anum_pg_conversion_connamespace,
Anum_pg_conversion_conforencoding,
Anum_pg_conversion_contoencoding,
ObjectIdAttributeNumber,
},
- 128
+ 8
},
{ConversionRelationId, /* CONNAMENSP */
ConversionNameNspIndexId,
2,
{
Anum_pg_conversion_conname,
Anum_pg_conversion_connamespace,
0,
0
},
- 128
+ 8
},
{ConstraintRelationId, /* CONSTROID */
ConstraintOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 1024
+ 16
},
{ConversionRelationId, /* CONVOID */
ConversionOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 128
+ 8
},
{DatabaseRelationId, /* DATABASEOID */
DatabaseOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
@@ -359,41 +359,41 @@ static const struct cachedesc cacheinfo[] = {
{DefaultAclRelationId, /* DEFACLROLENSPOBJ */
DefaultAclRoleNspObjIndexId,
3,
{
Anum_pg_default_acl_defaclrole,
Anum_pg_default_acl_defaclnamespace,
Anum_pg_default_acl_defaclobjtype,
0
},
- 256
+ 8
},
{EnumRelationId, /* ENUMOID */
EnumOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 256
+ 8
},
{EnumRelationId, /* ENUMTYPOIDNAME */
EnumTypIdLabelIndexId,
2,
{
Anum_pg_enum_enumtypid,
Anum_pg_enum_enumlabel,
0,
0
},
- 256
+ 8
},
{EventTriggerRelationId, /* EVENTTRIGGERNAME */
EventTriggerNameIndexId,
1,
{
Anum_pg_event_trigger_evtname,
0,
0,
0
@@ -414,74 +414,74 @@ static const struct cachedesc cacheinfo[] = {
{ForeignDataWrapperRelationId, /* FOREIGNDATAWRAPPERNAME */
ForeignDataWrapperNameIndexId,
1,
{
Anum_pg_foreign_data_wrapper_fdwname,
0,
0,
0
},
- 8
+ 2
},
{ForeignDataWrapperRelationId, /* FOREIGNDATAWRAPPEROID */
ForeignDataWrapperOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 8
+ 2
},
{ForeignServerRelationId, /* FOREIGNSERVERNAME */
ForeignServerNameIndexId,
1,
{
Anum_pg_foreign_server_srvname,
0,
0,
0
},
- 32
+ 2
},
{ForeignServerRelationId, /* FOREIGNSERVEROID */
ForeignServerOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 32
+ 2
},
{ForeignTableRelationId, /* FOREIGNTABLEREL */
ForeignTableRelidIndexId,
1,
{
Anum_pg_foreign_table_ftrelid,
0,
0,
0
},
- 128
+ 4
},
{IndexRelationId, /* INDEXRELID */
IndexRelidIndexId,
1,
{
Anum_pg_index_indexrelid,
0,
0,
0
},
- 1024
+ 64
},
{LanguageRelationId, /* LANGNAME */
LanguageNameIndexId,
1,
{
Anum_pg_language_lanname,
0,
0,
0
@@ -502,305 +502,305 @@ static const struct cachedesc cacheinfo[] = {
{NamespaceRelationId, /* NAMESPACENAME */
NamespaceNameIndexId,
1,
{
Anum_pg_namespace_nspname,
0,
0,
0
},
- 256
+ 4
},
{NamespaceRelationId, /* NAMESPACEOID */
NamespaceOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 256
+ 16
},
{OperatorRelationId, /* OPERNAMENSP */
OperatorNameNspIndexId,
4,
{
Anum_pg_operator_oprname,
Anum_pg_operator_oprleft,
Anum_pg_operator_oprright,
Anum_pg_operator_oprnamespace
},
- 1024
+ 256
},
{OperatorRelationId, /* OPEROID */
OperatorOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 1024
+ 32
},
{OperatorFamilyRelationId, /* OPFAMILYAMNAMENSP */
OpfamilyAmNameNspIndexId,
3,
{
Anum_pg_opfamily_opfmethod,
Anum_pg_opfamily_opfname,
Anum_pg_opfamily_opfnamespace,
0
},
- 64
+ 8
},
{OperatorFamilyRelationId, /* OPFAMILYOID */
OpfamilyOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 64
+ 8
},
{ProcedureRelationId, /* PROCNAMEARGSNSP */
ProcedureNameArgsNspIndexId,
3,
{
Anum_pg_proc_proname,
Anum_pg_proc_proargtypes,
Anum_pg_proc_pronamespace,
0
},
- 2048
+ 128
},
{ProcedureRelationId, /* PROCOID */
ProcedureOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 2048
+ 128
},
{RangeRelationId, /* RANGETYPE */
RangeTypidIndexId,
1,
{
Anum_pg_range_rngtypid,
0,
0,
0
},
- 64
+ 4
},
{RelationRelationId, /* RELNAMENSP */
ClassNameNspIndexId,
2,
{
Anum_pg_class_relname,
Anum_pg_class_relnamespace,
0,
0
},
- 1024
+ 128
},
{RelationRelationId, /* RELOID */
ClassOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 1024
+ 128
},
{RewriteRelationId, /* RULERELNAME */
RewriteRelRulenameIndexId,
2,
{
Anum_pg_rewrite_ev_class,
Anum_pg_rewrite_rulename,
0,
0
},
- 1024
+ 8
},
{StatisticRelationId, /* STATRELATTINH */
StatisticRelidAttnumInhIndexId,
3,
{
Anum_pg_statistic_starelid,
Anum_pg_statistic_staattnum,
Anum_pg_statistic_stainherit,
0
},
- 1024
+ 128
},
{TableSpaceRelationId, /* TABLESPACEOID */
TablespaceOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0,
},
- 16
+ 4
},
{TSConfigMapRelationId, /* TSCONFIGMAP */
TSConfigMapIndexId,
3,
{
Anum_pg_ts_config_map_mapcfg,
Anum_pg_ts_config_map_maptokentype,
Anum_pg_ts_config_map_mapseqno,
0
},
- 4
+ 2
},
{TSConfigRelationId, /* TSCONFIGNAMENSP */
TSConfigNameNspIndexId,
2,
{
Anum_pg_ts_config_cfgname,
Anum_pg_ts_config_cfgnamespace,
0,
0
},
- 16
+ 2
},
{TSConfigRelationId, /* TSCONFIGOID */
TSConfigOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 16
+ 2
},
{TSDictionaryRelationId, /* TSDICTNAMENSP */
TSDictionaryNameNspIndexId,
2,
{
Anum_pg_ts_dict_dictname,
Anum_pg_ts_dict_dictnamespace,
0,
0
},
- 16
+ 2
},
{TSDictionaryRelationId, /* TSDICTOID */
TSDictionaryOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 16
+ 2
},
{TSParserRelationId, /* TSPARSERNAMENSP */
TSParserNameNspIndexId,
2,
{
Anum_pg_ts_parser_prsname,
Anum_pg_ts_parser_prsnamespace,
0,
0
},
- 4
+ 2
},
{TSParserRelationId, /* TSPARSEROID */
TSParserOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 4
+ 2
},
{TSTemplateRelationId, /* TSTEMPLATENAMENSP */
TSTemplateNameNspIndexId,
2,
{
Anum_pg_ts_template_tmplname,
Anum_pg_ts_template_tmplnamespace,
0,
0
},
- 16
+ 2
},
{TSTemplateRelationId, /* TSTEMPLATEOID */
TSTemplateOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 16
+ 2
},
{TypeRelationId, /* TYPENAMENSP */
TypeNameNspIndexId,
2,
{
Anum_pg_type_typname,
Anum_pg_type_typnamespace,
0,
0
},
- 1024
+ 64
},
{TypeRelationId, /* TYPEOID */
TypeOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 1024
+ 64
},
{UserMappingRelationId, /* USERMAPPINGOID */
UserMappingOidIndexId,
1,
{
ObjectIdAttributeNumber,
0,
0,
0
},
- 128
+ 2
},
{UserMappingRelationId, /* USERMAPPINGUSERSERVER */
UserMappingUserServerIndexId,
2,
{
Anum_pg_user_mapping_umuser,
Anum_pg_user_mapping_umserver,
0,
0
},
- 128
+ 2
}
};
static CatCache *SysCache[
lengthof(cacheinfo)];
static int SysCacheSize = lengthof(cacheinfo);
static bool CacheInitialized = false;
static Oid SysCacheRelationOid[lengthof(cacheinfo)];
diff --git a/src/include/utils/catcache.h b/src/include/utils/catcache.h
index b6e1c97..524319a 100644
--- a/src/include/utils/catcache.h
+++ b/src/include/utils/catcache.h
@@ -60,20 +60,20 @@ typedef struct catcache
/*
* cc_searches - (cc_hits + cc_neg_hits + cc_newloads) is number of failed
* searches, each of which will result in loading a negative entry
*/
long cc_invals; /* # of entries invalidated from cache */
long cc_lsearches; /* total # list-searches */
long cc_lhits; /* # of matches against existing lists */
#endif
- dlist_head cc_bucket[1]; /* hash buckets --- VARIABLE LENGTH ARRAY */
-} CatCache; /* VARIABLE LENGTH STRUCT */
+ dlist_head *cc_bucket; /* hash buckets */
+} CatCache;
typedef struct catctup
{
int ct_magic; /* for identifying CatCTup entries */
#define CT_MAGIC 0x57261502
CatCache *my_cache; /* link to owning catcache */
/*
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/enum.out
2013-08-22 17:45:02.577496726 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/enum.out
2013-09-05 19:07:25.264383746 +0300
***************
*** 126,131 ****
--- 126,132 ----
alter type insenum add value 'i3' before 'L2';
alter type insenum add value 'i4' before 'L2';
alter type insenum add value 'i5' before 'L2';
+ WARNING: rehashing catalog cache id 24 for pg_enum; 17 tups, 8 buckets
alter type insenum add value 'i6' before 'L2';
alter type insenum add value 'i7' before 'L2';
alter type insenum add value 'i8' before 'L2';
***************
*** 142,147 ****
--- 143,149 ----
alter type insenum add value 'i19' before 'L2';
alter type insenum add value 'i20' before 'L2';
alter type insenum add value 'i21' before 'L2';
+ WARNING: rehashing catalog cache id 24 for pg_enum; 33 tups, 16 buckets
alter type insenum add value 'i22' before 'L2';
alter type insenum add value 'i23' before 'L2';
alter type insenum add value 'i24' before 'L2';
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/rangetypes.out
2013-08-22 17:45:02.661496722 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/rangetypes.out
2013-09-05 19:07:26.112383795 +0300
***************
*** 1130,1135 ****
--- 1130,1138 ----
create domain mydomain as int4;
create type mydomainrange as range(subtype=mydomain);
select '[4,50)'::mydomainrange @> 7::mydomain;
+ WARNING: rehashing catalog cache id 43 for pg_range; 9 tups, 4 buckets
+ LINE 1: select '[4,50)'::mydomainrange @> 7::mydomain;
+ ^
?column?
----------
t
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/create_index.out
2013-08-27 18:17:41.238830573 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/create_index.out
2013-09-05 19:07:35.372384332 +0300
***************
*** 1937,1942 ****
--- 1937,1943 ----
(1 row)
CREATE INDEX textarrayidx ON array_index_op_test USING gin (t);
+ WARNING: rehashing catalog cache id 14 for pg_opclass; 17 tups, 8 buckets
explain (costs off)
SELECT * FROM array_index_op_test WHERE t @> '{AAAAAAAA72908}' ORDER BY seqno;
QUERY PLAN
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/updatable_views.out
2013-08-27 18:17:41.242830573 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/updatable_views.out
2013-09-05 19:07:41.776384704 +0300
***************
*** 853,858 ****
--- 853,859 ----
RESET SESSION AUTHORIZATION;
SET SESSION AUTHORIZATION view_user2;
CREATE VIEW rw_view2 AS SELECT b AS bb, c AS cc, a AS aa FROM base_tbl;
+ WARNING: rehashing catalog cache id 22 for pg_default_acl; 17 tups, 8 buckets
SELECT * FROM base_tbl; -- ok
a | b | c
---+-------+---
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/sanity_check.out
2013-08-22 17:45:02.681496721 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/sanity_check.out
2013-09-05 19:07:42.868384767 +0300
***************
*** 1,4 ****
--- 1,6 ----
VACUUM;
+ WARNING: rehashing catalog cache id 32 for pg_index; 129 tups, 64 buckets
+ WARNING: rehashing catalog cache id 45 for pg_class; 257 tups, 128 buckets
--
-- sanity check, if we don't have indices the test will take years to
-- complete. But skip TOAST relations (since they will have varying
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/aggregates.out
2013-09-05 10:24:41.048849206 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/aggregates.out
2013-09-05 19:07:44.492384862 +0300
***************
*** 193,198 ****
--- 193,199 ----
(1 row)
SELECT count(four) AS cnt_1000 FROM onek;
+ WARNING: rehashing catalog cache id 0 for pg_aggregate; 33 tups, 16 buckets
cnt_1000
----------
1000
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/matview.out
2013-08-27 18:17:41.238830573 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/matview.out
2013-09-05 19:07:51.220385252 +0300
***************
*** 392,397 ****
--- 392,406 ----
(0 rows)
VACUUM ANALYZE;
+ WARNING: rehashing catalog cache id 14 for pg_opclass; 17 tups, 8 buckets
+ WARNING: rehashing catalog cache id 12 for pg_cast; 513 tups, 256 buckets
+ WARNING: rehashing catalog cache id 5 for pg_amproc; 33 tups, 16 buckets
+ WARNING: rehashing catalog cache id 7 for pg_attribute; 257 tups, 128 buckets
+ WARNING: rehashing catalog cache id 32 for pg_index; 129 tups, 64 buckets
+ WARNING: rehashing catalog cache id 12 for pg_cast; 1025 tups, 512 buckets
+ WARNING: rehashing catalog cache id 14 for pg_opclass; 33 tups, 16 buckets
+ WARNING: rehashing catalog cache id 45 for pg_class; 257 tups, 128 buckets
+ WARNING: rehashing catalog cache id 7 for pg_attribute; 513 tups, 256 buckets
SELECT * FROM hogeview WHERE i < 10;
i
---
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/alter_generic.out
2013-08-27 18:17:41.238830573 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/alter_generic.out
2013-09-05 19:07:51.296385257 +0300
***************
*** 404,409 ****
--- 404,410 ----
-- Should work. Textbook case of ALTER OPERATOR FAMILY ... ADD OPERATOR with FOR ORDER BY
CREATE OPERATOR FAMILY alt_opf11 USING gist;
ALTER OPERATOR FAMILY alt_opf11 USING gist ADD OPERATOR 1 < (int4, int4) FOR ORDER BY float_ops;
+ WARNING: rehashing catalog cache id 39 for pg_opfamily; 17 tups, 8 buckets
ALTER OPERATOR FAMILY alt_opf11 USING gist DROP OPERATOR 1 (int4, int4);
DROP OPERATOR FAMILY alt_opf11 USING gist;
-- Should fail. btree comparison functions should return INTEGER in ALTER OPERATOR FAMILY ... ADD FUNCTION
***************
*** 514,519 ****
--- 515,521 ----
ALTER TEXT SEARCH DICTIONARY alt_ts_dict3 RENAME TO alt_ts_dict4; -- failed (not owner)
ERROR: must be owner of text search dictionary alt_ts_dict3
ALTER TEXT SEARCH DICTIONARY alt_ts_dict1 RENAME TO alt_ts_dict4; -- OK
+ WARNING: rehashing catalog cache id 52 for pg_ts_dict; 5 tups, 2 buckets
ALTER TEXT SEARCH DICTIONARY alt_ts_dict3 OWNER TO regtest_alter_user2; -- failed (not owner)
ERROR: must be owner of text search dictionary alt_ts_dict3
ALTER TEXT SEARCH DICTIONARY alt_ts_dict2 OWNER TO regtest_alter_user3; -- failed (no role membership)
***************
*** 545,550 ****
--- 547,553 ----
ALTER TEXT SEARCH CONFIGURATION alt_ts_conf1 RENAME TO alt_ts_conf2; -- failed (name conflict)
ERROR: text search configuration "alt_ts_conf2" already exists in schema "alt_nsp1"
ALTER TEXT SEARCH CONFIGURATION alt_ts_conf1 RENAME TO alt_ts_conf3; -- OK
+ WARNING: rehashing catalog cache id 50 for pg_ts_config; 5 tups, 2 buckets
ALTER TEXT SEARCH CONFIGURATION alt_ts_conf2 OWNER TO regtest_alter_user2; -- failed (no role membership)
ERROR: must be member of role "regtest_alter_user2"
ALTER TEXT SEARCH CONFIGURATION alt_ts_conf2 OWNER TO regtest_alter_user3; -- OK
***************
*** 585,590 ****
--- 588,594 ----
ALTER TEXT SEARCH TEMPLATE alt_ts_temp1 RENAME TO alt_ts_temp2; -- failed (name conflict)
ERROR: text search template "alt_ts_temp2" already exists in schema "alt_nsp1"
ALTER TEXT SEARCH TEMPLATE alt_ts_temp1 RENAME TO alt_ts_temp3; -- OK
+ WARNING: rehashing catalog cache id 56 for pg_ts_template; 5 tups, 2 buckets
ALTER TEXT SEARCH TEMPLATE alt_ts_temp2 SET SCHEMA alt_nsp2; -- OK
CREATE TEXT SEARCH TEMPLATE alt_ts_temp2 (lexize=dsimple_lexize);
ALTER TEXT SEARCH TEMPLATE alt_ts_temp2 SET SCHEMA alt_nsp2; -- failed (name conflict)
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/rules.out
2013-08-27 18:17:41.242830573 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/rules.out
2013-09-05 19:07:53.956385411 +0300
***************
*** 1277,1282 ****
--- 1277,1283 ----
-- Check that ruleutils are working
--
SELECT viewname, definition FROM pg_views WHERE schemaname <> 'information_schema' ORDER BY viewname;
+ WARNING: rehashing catalog cache id 7 for pg_attribute; 257 tups, 128 buckets
viewname | definition
---------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
iexit | SELECT ih.name,
+
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/event_trigger.out
2013-08-22 17:45:02.577496726 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/event_trigger.out
2013-09-05 19:07:54.316385432 +0300
***************
*** 115,120 ****
--- 115,121 ----
CREATE OR REPLACE FUNCTION schema_two.add(int, int) RETURNS int LANGUAGE plpgsql
CALLED ON NULL INPUT
AS $$ BEGIN RETURN coalesce($1,0) + coalesce($2,0); END; $$;
+ WARNING: rehashing catalog cache id 22 for pg_default_acl; 17 tups, 8 buckets
CREATE AGGREGATE schema_two.newton
(BASETYPE = int, SFUNC = schema_two.add, STYPE = int);
RESET SESSION AUTHORIZATION;
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/dependency.out
2013-08-22 17:45:02.577496726 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/dependency.out
2013-09-05 19:07:55.908385524 +0300
***************
*** 60,65 ****
--- 60,66 ----
GRANT ALL ON deptest1 TO regression_user1 WITH GRANT OPTION;
SET SESSION AUTHORIZATION regression_user1;
CREATE TABLE deptest (a serial primary key, b text);
+ WARNING: rehashing catalog cache id 22 for pg_default_acl; 17 tups, 8 buckets
GRANT ALL ON deptest1 TO regression_user2;
RESET SESSION AUTHORIZATION;
\z deptest1
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/tsdicts.out
2013-08-22 17:45:02.717496719 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/tsdicts.out
2013-09-05 19:07:54.360385434 +0300
***************
*** 197,202 ****
--- 197,205 ----
Synonyms=synonym_sample
);
SELECT ts_lexize('synonym', 'PoStGrEs');
+ WARNING: rehashing catalog cache id 52 for pg_ts_dict; 5 tups, 2 buckets
+ LINE 1: SELECT ts_lexize('synonym', 'PoStGrEs');
+ ^
ts_lexize
-----------
{pgsql}
***************
*** 223,228 ****
--- 226,235 ----
Dictionary=english_stem
);
SELECT ts_lexize('thesaurus', 'one');
+ WARNING: rehashing catalog cache id 52 for pg_ts_dict; 9 tups, 4 buckets
+ LINE 1: SELECT ts_lexize('thesaurus', 'one');
+ ^
+ WARNING: rehashing catalog cache id 53 for pg_ts_dict; 5 tups, 2 buckets
ts_lexize
-----------
{1}
***************
*** 259,264 ****
--- 266,272 ----
);
ALTER TEXT SEARCH CONFIGURATION hunspell_tst ALTER MAPPING
REPLACE ispell WITH hunspell;
+ WARNING: rehashing catalog cache id 50 for pg_ts_config; 5 tups, 2 buckets
SELECT to_tsvector('hunspell_tst', 'Booking the skies after rebookings for footballklubber from a foot');
to_tsvector
----------------------------------------------------------------------------------------------------
***************
*** 316,321 ****
--- 324,331 ----
ALTER TEXT SEARCH CONFIGURATION thesaurus_tst ALTER MAPPING FOR
asciiword, hword_asciipart, asciihword
WITH synonym, thesaurus, english_stem;
+ WARNING: rehashing catalog cache id 50 for pg_ts_config; 9 tups, 4 buckets
+ WARNING: rehashing catalog cache id 51 for pg_ts_config; 5 tups, 2 buckets
SELECT to_tsvector('thesaurus_tst', 'one postgres one two one two three one');
to_tsvector
----------------------------------
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/foreign_data.out
2013-08-22 17:45:02.601496724 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/foreign_data.out
2013-09-05 19:07:54.764385458 +0300
***************
*** 561,566 ****
--- 561,567 ----
ERROR: user mapping "foreign_data_user" already exists for server s4
CREATE USER MAPPING FOR public SERVER s4 OPTIONS ("this mapping" 'is public');
CREATE USER MAPPING FOR user SERVER s8 OPTIONS (username 'test', password 'secret'); -- ERROR
+ WARNING: rehashing catalog cache id 29 for pg_foreign_server; 5 tups, 2 buckets
ERROR: invalid option "username"
HINT: Valid options in this context are: user, password
CREATE USER MAPPING FOR user SERVER s8 OPTIONS (user 'test', password 'secret');
***************
*** 570,580 ****
--- 571,583 ----
CREATE USER MAPPING FOR current_user SERVER s5;
CREATE USER MAPPING FOR current_user SERVER s6 OPTIONS (username 'test');
CREATE USER MAPPING FOR current_user SERVER s7; -- ERROR
+ WARNING: rehashing catalog cache id 30 for pg_foreign_server; 5 tups, 2 buckets
ERROR: permission denied for foreign server s7
CREATE USER MAPPING FOR public SERVER s8; -- ERROR
ERROR: must be owner of foreign server s8
RESET ROLE;
ALTER SERVER t1 OWNER TO regress_test_indirect;
+ WARNING: rehashing catalog cache id 29 for pg_foreign_server; 9 tups, 4 buckets
SET ROLE regress_test_role;
CREATE USER MAPPING FOR current_user SERVER t1 OPTIONS (username 'bob', password 'boo');
CREATE USER MAPPING FOR public SERVER t1;
***************
*** 636,641 ****
--- 639,645 ----
DROP USER MAPPING IF EXISTS FOR public SERVER s7;
NOTICE: user mapping "public" does not exist for the server, skipping
CREATE USER MAPPING FOR public SERVER s8;
+ WARNING: rehashing catalog cache id 61 for pg_user_mapping; 5 tups, 2 buckets
SET ROLE regress_test_role;
DROP USER MAPPING FOR public SERVER s8; -- ERROR
ERROR: must be owner of foreign server s8
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/xmlmap_1.out
2013-08-22 17:45:02.721496719 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/xmlmap.out
2013-09-05 19:07:54.836385462 +0300
***************
*** 96,101 ****
--- 96,105 ----
DETAIL: This functionality requires the server to be built with libxml support.
HINT: You need to rebuild PostgreSQL using --with-libxml.
SELECT schema_to_xmlschema('testxmlschema', false, true, '');
+ WARNING: rehashing catalog cache id 45 for pg_class; 257 tups, 128 buckets
+ CONTEXT: SQL statement "SELECT oid FROM pg_catalog.pg_class WHERE relnamespace = 856104 AND relkind IN ('r', 'm', 'v') AND pg_catalog.has_table_privilege (oid, 'SELECT') ORDER BY relname;"
+ WARNING: rehashing catalog cache id 45 for pg_class; 513 tups, 256 buckets
+ CONTEXT: SQL statement "SELECT oid FROM pg_catalog.pg_class WHERE relnamespace = 856104 AND relkind IN ('r', 'm', 'v') AND pg_catalog.has_table_privilege (oid, 'SELECT') ORDER BY relname;"
ERROR: unsupported XML feature
DETAIL: This functionality requires the server to be built with libxml support.
HINT: You need to rebuild PostgreSQL using --with-libxml.
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/conversion.out
2013-08-22 17:45:02.573496726 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/conversion.out
2013-09-05 19:08:07.908386221 +0300
***************
*** 160,165 ****
--- 160,166 ----
-- ISO-8859-5 --> WIN1251
SELECT CONVERT('foo', 'ISO-8859-5', 'WIN1251');
+ WARNING: rehashing catalog cache id 17 for pg_conversion; 17 tups, 8 buckets
convert
---------
foo
***************
*** 272,277 ****
--- 273,279 ----
-- EUC_TW --> MULE_INTERNAL
SELECT CONVERT('foo', 'EUC_TW', 'MULE_INTERNAL');
+ WARNING: rehashing catalog cache id 17 for pg_conversion; 33 tups, 16 buckets
convert
---------
foo
***************
*** 510,515 ****
--- 512,518 ----
-- EUC_TW --> UTF8
SELECT CONVERT('foo', 'EUC_TW', 'UTF8');
+ WARNING: rehashing catalog cache id 17 for pg_conversion; 65 tups, 32 buckets
convert
---------
foo
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/alter_table.out
2013-08-27 18:17:41.238830573 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/alter_table.out
2013-09-05 19:08:20.604386957 +0300
***************
*** 1746,1751 ****
--- 1746,1752 ----
-- table's row type
create table tab1 (a int, b text);
create table tab2 (x int, y tab1);
+ WARNING: rehashing catalog cache id 58 for pg_type; 129 tups, 64 buckets
alter table tab1 alter column b type varchar; -- fails
ERROR: cannot alter table "tab1" because column "tab2.y" uses its row type
-- disallow recursive containment of row types
***************
*** 2318,2323 ****
--- 2319,2326 ----
FROM pg_class
WHERE relkind IN ('r', 'i', 'S', 't', 'm')
) mapped;
+ WARNING: rehashing catalog cache id 45 for pg_class; 257 tups, 128 buckets
+ WARNING: rehashing catalog cache id 45 for pg_class; 513 tups, 256 buckets
incorrectly_mapped | have_mappings
--------------------+---------------
0 | t
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/sequence.out
2013-08-22 17:45:02.713496719 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/sequence.out
2013-09-05 19:08:08.660386264 +0300
***************
*** 300,305 ****
--- 300,306 ----
('sequence_test2', 'serialtest2_f2_seq', 'serialtest2_f3_seq',
'serialtest2_f4_seq', 'serialtest2_f5_seq', 'serialtest2_f6_seq')
ORDER BY sequence_name ASC;
+ WARNING: rehashing catalog cache id 36 for pg_namespace; 33 tups, 16 buckets
sequence_catalog | sequence_schema | sequence_name | data_type | numeric_precision | numeric_precision_radix | numeric_scale | start_value | minimum_value | maximum_value | increment | cycle_option
------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+---------------+---------------------+-----------+--------------
regression | public | sequence_test2 | bigint | 64 | 2 | 0 | 32 | 5 | 36 | 4 | YES
======================================================================
*** /home/heikki/git-sandbox/postgresql/src/test/regress/expected/xml_1.out
2013-08-22 17:45:02.721496719 +0300
--- /home/heikki/git-sandbox/postgresql/src/test/regress/results/xml.out
2013-09-05 19:08:08.268386241 +0300
***************
*** 463,468 ****
--- 463,469 ----
HINT: You need to rebuild PostgreSQL using --with-libxml.
SELECT table_name, view_definition FROM information_schema.views
WHERE table_name LIKE 'xmlview%' ORDER BY 1;
+ WARNING: rehashing catalog cache id 36 for pg_namespace; 33 tups, 16 buckets
table_name | view_definition
------------+--------------------------------------------------------------------------------
xmlview1 | SELECT xmlcomment('test'::text) AS xmlcomment;
======================================================================
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers