Re: [Proposal] Global temporary tables

2022-06-30 Thread Jacob Champion
On 3/3/22 13:20, Andres Freund wrote:
> On 2022-03-03 16:07:37 -0500, Robert Haas wrote:
>> I agree that the feature is desirable, but I think getting there is
>> going to require a huge amount of effort that may amount to a total
>> rewrite of the patch.
> 
> Agreed. I think this needs very fundamental design work, and the patch itself
> isn't worth reviewing until that's tackled.

Given two opinions that the patch can't be effectively reviewed as-is, I
will mark this RwF for this commitfest. Anyone up for shepherding the
design conversations, going forward?

--Jacob




Re: [Proposal] Global temporary tables

2022-03-03 Thread Andres Freund
Hi,

On 2022-03-03 16:07:37 -0500, Robert Haas wrote:
> On Thu, Mar 3, 2022 at 3:29 PM Greg Stark  wrote:
> > I'm still hopeful we get to advance this early in 16 because I think
> > everyone agrees the feature would be great.
> 
> I'm not saying this patch can't make progress, but I think the chances
> of this being ready to commit any time in the v16 release cycle, let
> alone at the beginning, are low. This patch set has been around since
> 2019, and here Andres and I are saying it's not even really reviewable
> in the shape that it's in. I have done some review of it previously,
> BTW, but eventually I gave up because it just didn't seem like we were
> making any progress. And then a long time after that people were still
> finding many server crashes with relatively simple test cases.
> 
> I agree that the feature is desirable, but I think getting there is
> going to require a huge amount of effort that may amount to a total
> rewrite of the patch.

Agreed. I think this needs very fundamental design work, and the patch itself
isn't worth reviewing until that's tackled.

Greetings,

Andres Freund




Re: [Proposal] Global temporary tables

2022-03-03 Thread Robert Haas
On Thu, Mar 3, 2022 at 3:29 PM Greg Stark  wrote:
> I'm still hopeful we get to advance this early in 16 because I think
> everyone agrees the feature would be great.

I'm not saying this patch can't make progress, but I think the chances
of this being ready to commit any time in the v16 release cycle, let
alone at the beginning, are low. This patch set has been around since
2019, and here Andres and I are saying it's not even really reviewable
in the shape that it's in. I have done some review of it previously,
BTW, but eventually I gave up because it just didn't seem like we were
making any progress. And then a long time after that people were still
finding many server crashes with relatively simple test cases.

I agree that the feature is desirable, but I think getting there is
going to require a huge amount of effort that may amount to a total
rewrite of the patch.

--
Robert Haas
EDB: http://www.enterprisedb.com




Re: [Proposal] Global temporary tables

2022-03-03 Thread Greg Stark
It doesn't look like this is going to get committed this release
cycle. I understand more feedback could be valuable, especially on the
overall design, but as this is the last commitfest of the release we
should focus on other patches for now and spend that time in the next
release cycle.

I'm going to bump this one now as Waiting on Author, for the design
documentation Robert asked for and probably a plan for how to separate
that design into multiple separable features as Andres suggested.

I'm still hopeful we get to advance this early in 16 because I think
everyone agrees the feature would be great.




Re: [Proposal] Global temporary tables

2022-03-03 Thread Robert Haas
On Wed, Mar 2, 2022 at 4:18 PM Andres Freund  wrote:
> I think there's just no way that it can be merged with anything close to the
> current design - it's unmaintainable. The need for the feature doesn't change
> that.

I don't know whether the design is right or wrong, but I agree that a
bad design isn't OK just because we need the feature. I'm not entirely
convinced that the change to _bt_getrootheight() is a red flag,
although I agree that there is a need to explain and justify why
similar changes aren't needed in other places. But I think overall
this patch is just too big and too unpolished to be seriously
considered. It clearly needs to be broken down into incremental
patches that are not just separated by topic but potentially
independently committable, with proposed commit messages for each.

And, like, there's a long history on this thread of people pointing
out particular crash bugs and particular problems with code comments
or whatever and I guess those are getting fixed as they are reported,
but I do not have the feeling that the overall code quality is
terribly high, because people just keep finding more stuff. Like look
at this:

+	uint8		flags = 0;
+
+	/* return 0 if feature is disabled */
+	if (max_active_gtt <= 0)
+		return InvalidTransactionId;
+
+	/* Disable in standby node */
+	if (RecoveryInProgress())
+		return InvalidTransactionId;
+
+	flags |= PROC_IS_AUTOVACUUM;
+	flags |= PROC_IN_LOGICAL_DECODING;
+
+	LWLockAcquire(ProcArrayLock, LW_SHARED);
+	arrayP = procArray;
+	for (index = 0; index < arrayP->numProcs; index++)
+	{
+		int			pgprocno = arrayP->pgprocnos[index];
+		PGPROC	   *proc = &allProcs[pgprocno];
+		uint8		statusFlags = ProcGlobal->statusFlags[index];
+		TransactionId gtt_frozenxid = InvalidTransactionId;
+
+		if (statusFlags & flags)
+			continue;

This looks like code someone wrote, modified multiple times as they
found problems, and never cleaned up. 'flags' gets set to 0, and then
unconditionally gets two bits or'd in, and then we test it against
statusFlags. Probably there shouldn't be a local variable at all, and
if there is, the value should be set properly from the start instead
of constructed incrementally as we go along. And there should be
comments. Why is it OK to return InvalidTransactionId in standby mode?
Why is it OK to pass that flags value? And, if we look at this
function a little further down, is it really OK to hold ProcArrayLock
across an operation that could perform multiple memory allocation
operations? I bet it's not, unless calls are very infrequent in
practice.

I'm not asking for this particular part of the code to be cleaned up.
I'm asking for the whole patch to be cleaned up. Like, nobody who is a
committer is going to have enough time to go through the patch
function by function and point out issues on this level of detail in
every place where they occur. Worse, discussing all of those issues is
just a distraction from the real task of figuring out whether the
design needs adjustment. Because the patch is one massive code drop,
and with not-really-that-clean code and not-that-great comments, it's
almost impossible to review. I don't plan to try unless the quality
improves a lot. I'm not saying it's the worst code ever written, but I
think it's kind of at a level of "well, it seems to work for me," and
the standard around here is higher than that. It's not the job of the
community or of individual committers to prove that problems exist in
this patch and therefore it shouldn't be committed. It's the job of
the author to prove that there aren't and it should be. And I don't
think we're close to that at all.

-- 
Robert Haas
EDB: http://www.enterprisedb.com




Re: [Proposal] Global temporary tables

2022-03-02 Thread Pavel Stehule
Hi


> I think you'd basically have to come up with a generic design for
> partitioning catalog tables into local / non-local storage, without
> needing explicit code for each catalog. That could also be used to store
> the default catalog contents separately from user defined ones (e.g.
> pg_proc is pretty large).
>

There is still a risk of bloating in local storage, but, more importantly,
you would probably have to modify a lot of code because the system cache
doesn't support partitioning.

Regards

Pavel


>
> Greetings,
>
> Andres Freund
>


Re: [Proposal] Global temporary tables

2022-03-02 Thread Andres Freund
Hi,

On 2022-02-27 06:09:54 +0100, Pavel Stehule wrote:
> On Sun, Feb 27, 2022 at 5:13 AM Andres Freund wrote:
> > On 2022-02-27 04:17:52 +0100, Pavel Stehule wrote:
> > > Without this, the GTT will be terribly slow like current temporary tables
> > > with a lot of problems with bloating of pg_class, pg_attribute and
> > > pg_depend tables.
> >
> > I think it's not a great idea to solve multiple complicated problems at
> > once...

> I thought about this issue for a very long time, and I didn't find anything
> better (without more significant rewriting of pg storage). In a lot of
> projects that I know of, temporary tables are strictly prohibited due to
> their possible devastating impact on system catalog bloat. It is a serious
> problem. So any implementation of GTT should answer the questions: a) how
> to reduce catalog bloat, b) how to allow session-related statistics for
> GTT. I agree that an implementation of GTT as a template-based LTT (local
> temporary table) can be very simple (it is possible as an extension), but
> with the same unhappy performance impacts.

> I'm not saying the current design should be accepted without any discussion
> and without changes. Maybe a GTT based on LTT can be better than nothing
> (what we have now), and can be good enough for a lot of projects where the
> load is not too high (and almost all projects have low load).

I think there's just no way that it can be merged with anything close to the
current design - it's unmaintainable. The need for the feature doesn't change
that.

That's not to say it's impossible to come up with a workable design. But it's
definitely not easy. If I were to work on this - which I am not planning to -
I'd try to solve the problems of "LTT" first, with an eye towards using the
infrastructure for GTT.

I think you'd basically have to come up with a generic design for partitioning
catalog tables into local / non-local storage, without needing explicit code
for each catalog. That could also be used to store the default catalog
contents separately from user defined ones (e.g. pg_proc is pretty large).

Greetings,

Andres Freund




Re: [Proposal] Global temporary tables

2022-03-02 Thread Pavel Stehule
On Wed, Mar 2, 2022 at 7:02 PM Adam Brusselback wrote:

> > In my observation, very few users require an accurate query plan for
> > temporary tables to perform manual analyze.
>
> Absolutely not true in my observations or personal experience. It's one of
> the main reasons I have needed to use (local) temporary tables rather than
> just materializing a CTE when decomposing queries that are too complex for
> Postgres to handle.
>
> I wish I could use GTT to avoid the catalog bloat in those instances, but
> that will only be possible if the query plans are accurate.
>

This strongly depends on usage. Very common patterns from MSSQL don't need
statistics. But on second thought, sometimes the query has to be divided
and temp tables are used for storing intermediate results. In that case,
you cannot do without statistics. In the first case, the temp tables can be
replaced by arrays; in the second case, the temp tables are not replaceable.

Regards

Pavel


Re: [Proposal] Global temporary tables

2022-03-02 Thread Adam Brusselback
> In my observation, very few users require an accurate query plan for
> temporary tables to perform manual analyze.

Absolutely not true in my observations or personal experience. It's one of
the main reasons I have needed to use (local) temporary tables rather than
just materializing a CTE when decomposing queries that are too complex for
Postgres to handle.

I wish I could use GTT to avoid the catalog bloat in those instances, but
that will only be possible if the query plans are accurate.


Re: [Proposal] Global temporary tables

2022-03-01 Thread Wenjing Zeng



> On 2022-02-27 12:13, Andres Freund wrote:
> 
> Hi,
> 
> On 2022-02-27 04:17:52 +0100, Pavel Stehule wrote:
>>> You redirect stats from pg_class and pg_statistics to a local hash table.
>>> This is pretty hairy :(
> 
> As is I think the patch is architecturally completely unacceptable. Having
> code everywhere to redirect to manually written in-memory catalog table code
> isn't maintainable.
> 
> 
>>> I guess you'd also need to handle pg_statistic_ext and ext_data.
>>> pg_stats doesn't work, since the data isn't in pg_statistic - it'd need
>>> to look at pg_get_gtt_statistics.
>> 
>> Without this, the GTT will be terribly slow like current temporary tables
>> with a lot of problems with bloating of pg_class, pg_attribute and
>> pg_depend tables.
> 
> I think it's not a great idea to solve multiple complicated problems at
> once...

I'm trying to break down the entire implementation into multiple sub-patches.


Regards, Wenjing.


> 
> Greetings,
> 
> Andres Freund
> 
> 





Re: [Proposal] Global temporary tables

2022-02-28 Thread Wenjing Zeng


> On 2022-02-27 08:21, Justin Pryzby wrote:
> 
> I read through this.
> Find attached some language fixes.  You should be able to apply each "fix"
> patch on top of your own local branch with git am, and then squish them
> together.  Let me know if you have trouble with that.
> 
> I think get_seqence_start_value() should be static.  (Or otherwise, it should
> be in lsyscache.c).
> 
> The include added to execPartition.c seems to be unused.
> 
> +#define RELATION_IS_TEMP_ON_CURRENT_SESSION(relation) \
> +#define RELATION_IS_TEMP(relation) \
> +#define RelpersistenceTsTemp(relpersistence) \
> +#define RELATION_GTT_ON_COMMIT_DELETE(relation)\
> 
> => These macros can evaluate their arguments multiple times.
> You should add a comment to warn about that.  And maybe avoid passing them a
> function argument, like: RelpersistenceTsTemp(get_rel_persistence(rte->relid))
> 
> +list_all_backend_gtt_frozenxids should return TransactionId not int.
> The function name should say "oldest" and not "all" ?
> 
> I think the GUC should have a longer name.  max_active_gtt is too short for a
> global var.
> 
> +#define MIN_NUM_ACTIVE_GTT  0
> +#define DEFAULT_NUM_ACTIVE_GTT  1000
> +#define MAX_NUM_ACTIVE_GTT  100
> 
> +int max_active_gtt = MIN_NUM_ACTIVE_GTT
> 
> It's being initialized to MIN, but then the GUC machinery sets it to DEFAULT.
> By convention, it should be initialized to default.
> 
> fout->remoteVersion >= 14
> 
> => should say 15
> 
> describe.c has gettext_noop("session"), which is a half-truth.  The data is
> per-session but the table definition is persistent..
Thanks for your advice, I will try to merge this part of the code.

> 
> You redirect stats from pg_class and pg_statistics to a local hash table.
> This is pretty hairy :(
> I guess you'd also need to handle pg_statistic_ext and ext_data.
> pg_stats doesn't work, since the data isn't in pg_statistic - it'd need
> to look at pg_get_gtt_statistics.
> 
> I wonder if there's a better way to do it, like updating pg_statistic but
> forcing the changes to be rolled back when the session ends...  But I think
> that would make longrunning sessions behave badly, the same as "longrunning
> transactions".

There are three pieces of session-level GTT data that need to be managed:
1 session-level storage info like relfilenode
2 session-level transaction info like relfrozenxid
3 session-level stats like relpages or column stats

I think 1 and 2 are necessary, but not the stats.
In a previous email, it was suggested that GTT statistics not be processed.
This means that GTT statistics are not recorded in the local hash or the
catalog. In my observation, very few users require an accurate query plan
for temporary tables to perform manual analyze.
Of course, doing this will also avoid catalog bloat and performance problems.


> 
> Have you looked at Gilles Darold's GTT extension ?
If you are referring to https://github.com/darold/pgtt , yes.
It is smart to use an unlogged table as a template and then use an LTT to
read and write data.
For this implementation, I want to point out two things:
1 For the first insert into a GTT in each session, a create table or create
index is implicitly executed.
2 The catalog bloat caused by LTT still exists.


Regards, Wenjing.


> <0002-f-0002-gtt-v64-doc.txt><0004-f-0003-gtt-v64-implementation.txt><0006-f-0004-gtt-v64-regress.txt>



Re: [Proposal] Global temporary tables

2022-02-28 Thread Wenjing Zeng


> On 2022-02-25 15:45, Andres Freund wrote:
> 
> Hi,
> 
> 
> This is a huge thread. Realistically reviewers and committers can't reread
> it. I think there needs to be more of a description of how this works included
> in the patchset and *why* it works that way. The readme does a bit of that,
> but not particularly well.
Thank you for your review of the design and code.
I'm always trying to improve it. If you are confused or need clarification
on something, please point it out.


> 
> 
> On 2022-02-25 14:26:47 +0800, Wenjing Zeng wrote:
>> +++ b/README.gtt.txt
>> @@ -0,0 +1,172 @@
>> +Global Temporary Table(GTT)
>> +===========================
>> +
>> +Feature description
>> +-------------------
>> +
>> +Previously, temporary tables are defined once and automatically
>> +exist (starting with empty contents) in every session before using them.
> 
> I think for a README "previously" etc isn't good language - if it were
> committed, it'd not be understandable anymore. It makes more sense for commit
> messages etc.
Thanks for pointing it out. I will adjust the description.

> 
> 
>> +Main design ideas
>> +-----------------
>> +In general, GTT and LTT use the same storage and buffer design and
>> +implementation. The storage files for both types of temporary tables are 
>> named
>> +as t_backendid_relfilenode, and the local buffer is used to cache the data.
> 
> What does "named ast_backendid_relfilenode" mean?
This describes the storage file naming format for temporary tables.
It starts with 't', followed by the backendid and the relfilenode,
connected by underscores.
The file naming rules are the same as for LTT.
The data in the file is no different from regular tables and LTT.

> 
> 
>> +The schema of GTTs is shared among sessions while their data are not. We 
>> build
>> +a new mechanisms to manage those non-shared data and their statistics.
>> +Here is the summary of changes:
>> +
>> +1) CATALOG
>> +GTTs store session-specific data. The storage information of GTTs'data, 
>> their
>> +transaction information, and their statistics are not stored in the catalog.
>> +
>> +2) STORAGE INFO & STATISTICS INFO & TRANSACTION INFO
>> +In order to maintain durability and availability of GTTs'session-specific 
>> data,
>> +their storage information, statistics, and transaction information is 
>> managed
>> +in a local hash table tt_storage_local_hash.
> 
> "maintain durability"? Durable across what? In the context of databases it's
> typically about crash safety, but that can't be the case here.
It means that the transaction information (relfrozenxid/relminmxid),
storage information (relfilenode), and statistics (relpages) of a GTT are
maintained in a hash table, not in pg_class.
This allows a GTT to store its own local data in different sessions and
avoids frequent catalog changes.

> 
> 
>> +3) DDL
>> +Currently, GTT supports almost all table'DDL except CLUSTER/VACUUM FULL.
>> +Part of the DDL behavior is limited by shared definitions and multiple 
>> copies of
>> +local data, and we added some structures to handle this.
> 
>> +A shared hash table active_gtt_shared_hash is added to track the state of 
>> the
>> +GTT in a different session. This information is recorded in the hash table
>> +during the DDL execution of the GTT.
> 
>> +The data stored in a GTT can only be modified or accessed by owning session.
>> +The statements that only modify data in a GTT do not need a high level of
>> +table locking. The operations making those changes include truncate GTT,
>> +reindex GTT, and lock GTT.
> 
> I think you need to introduce a bit more terminology for any of this to make
> sense. Sometimes GTT means the global catalog entity, sometimes, like here, it
> appears to mean the session specific contents of a GTT.
> 
> What is the state of a GTT in another session?
> 
> 
> How do GTTs handle something like BEGIN; TRUNCATE some_gtt_table; ROLLBACK;?

A GTT behaves exactly like a regular table here.
Specifically, the latest relfilenode for the current session is stored in
the hashtable, and TRUNCATE may change it.
If the transaction rolls back, the old relfilenode becomes current again,
just as it would with pg_class.

> 
> 
>> +1.2 on commit clause
>> +LTT's status associated with on commit DELETE ROWS and on commit PRESERVE 
>> ROWS
>> +is not stored in catalog. Instead, GTTs need a bool value 
>> on_commit_delete_rows
>> +in reloptions which is shared among sessions.
> 
> Why?
An LTT is always created and used in the current session. Its on commit
clause property does not need to be shared with other sessions. This is why
an LTT does not record the on commit clause in the catalog.
However, a GTT's table definition is shared between sessions, including the
on commit clause, so it needs to be saved in the catalog.


> 
> 
> 
>> +2.3 statistics info
>> +1) relpages reltuples relallvisible relfilenode
> 
> ?
It was mentioned above.

> 
>> +3 DDL
>> +3.1. active_gtt_shared_hash
>> +This is the hash table 

Re: [Proposal] Global temporary tables

2022-02-26 Thread Pavel Stehule
On Sun, Feb 27, 2022 at 5:13 AM Andres Freund wrote:

> Hi,
>
> On 2022-02-27 04:17:52 +0100, Pavel Stehule wrote:
> > > You redirect stats from pg_class and pg_statistics to a local hash
> > > table.  This is pretty hairy :(
>
> As is I think the patch is architecturally completely unacceptable. Having
> code everywhere to redirect to manually written in-memory catalog table
> code isn't maintainable.
>
>
> > > I guess you'd also need to handle pg_statistic_ext and ext_data.
> > > pg_stats doesn't work, since the data isn't in pg_statistic - it'd
> > > need to look at pg_get_gtt_statistics.
> >
> > Without this, the GTT will be terribly slow like current temporary tables
> > with a lot of problems with bloating of pg_class, pg_attribute and
> > pg_depend tables.
>
> I think it's not a great idea to solve multiple complicated problems at
> once...
>

I thought about this issue for a very long time, and I didn't find anything
better (without more significant rewriting of pg storage). In a lot of
projects that I know of, temporary tables are strictly prohibited due to
their possible devastating impact on system catalog bloat. It is a serious
problem. So any implementation of GTT should answer the questions: a) how
to reduce catalog bloat, b) how to allow session-related statistics for
GTT. I agree that an implementation of GTT as a template-based LTT (local
temporary table) can be very simple (it is possible as an extension), but
with the same unhappy performance impacts.

I'm not saying the current design should be accepted without any discussion
and without changes. Maybe a GTT based on LTT can be better than nothing
(what we have now), and can be good enough for a lot of projects where the
load is not too high (and almost all projects have low load).
Unfortunately, it can be a trap for a lot of projects in the future, so
there should be discussion and proposed solutions to fix the related
issues. The performance of GTT should be fixable, so any discussion about
this topic should include protections against catalog bloat and the cost
of frequent catalog updates.

But anyway, I (and probably not just me) invite any discussion on how to
implement this feature, how to solve the performance issues, and how to
divide the implementation into smaller steps. I am sure a fast GTT
implementation can be used for a fast implementation of LTT too, and maybe
for all other temporary objects.

Regards

Pavel


> Greetings,
>
> Andres Freund
>


Re: [Proposal] Global temporary tables

2022-02-26 Thread Andres Freund
Hi,

On 2022-02-27 04:17:52 +0100, Pavel Stehule wrote:
> > You redirect stats from pg_class and pg_statistics to a local hash table.
> > This is pretty hairy :(

As is I think the patch is architecturally completely unacceptable. Having
code everywhere to redirect to manually written in-memory catalog table code
isn't maintainable.


> > I guess you'd also need to handle pg_statistic_ext and ext_data.
> > pg_stats doesn't work, since the data isn't in pg_statistic - it'd need
> > to look at pg_get_gtt_statistics.
>
> Without this, the GTT will be terribly slow like current temporary tables
> with a lot of problems with bloating of pg_class, pg_attribute and
> pg_depend tables.

I think it's not a great idea to solve multiple complicated problems at
once...

Greetings,

Andres Freund




Re: [Proposal] Global temporary tables

2022-02-26 Thread Pavel Stehule
Hi

> You redirect stats from pg_class and pg_statistics to a local hash table.
> This is pretty hairy :(
> I guess you'd also need to handle pg_statistic_ext and ext_data.
> pg_stats doesn't work, since the data isn't in pg_statistic - it'd need
> to look at pg_get_gtt_statistics.
>

Without this, the GTT will be terribly slow like current temporary tables
with a lot of problems with bloating of pg_class, pg_attribute and
pg_depend tables.

Regards

Pavel


Re: [Proposal] Global temporary tables

2022-02-26 Thread Justin Pryzby
global_temporary_table to enable this feature.")));
 
 	if (RecoveryInProgress())
-		elog(ERROR, "readonly mode not support access global temporary table");
+		elog(ERROR, "recovery mode does not support accessing global temporary table");
 
 	if (rel->rd_rel->relkind == RELKIND_INDEX &&
 		rel->rd_index &&
 		(!rel->rd_index->indisvalid ||
 		 !rel->rd_index->indisready ||
 		 !rel->rd_index->indislive))
-		 elog(ERROR, "invalid gtt index %s not allow to create storage", RelationGetRelationName(rel));
+		 elog(ERROR, "invalid gtt index %s not allowed to create storage", RelationGetRelationName(rel));
 
 	/* First time through: initialize the hash table */
 	if (gtt_storage_local_hash == NULL)
@@ -757,7 +757,7 @@ up_gtt_relstats(Oid relid,
 
 /*
  * Search GTT relstats(relpage/reltuple/relallvisible)
- * from local has.
+ * from local hash.
  */
 bool
 get_gtt_relstats(Oid relid, BlockNumber *relpages, double *reltuples,
@@ -852,7 +852,7 @@ up_gtt_att_statistic(Oid reloid, int attnum, bool inh, int natts,
 	}
 	MemoryContextSwitchTo(oldcontext);
 
-	if (!found)
+	if (!found) // XXX: should be an ereport with a hint ?
 		elog(WARNING, "analyze can not update relid %u column %d statistics after add or drop column, try truncate table first", reloid, attnum);
 
 	return;
@@ -960,7 +960,7 @@ remove_gtt_relfrozenxid_from_ordered_list(Oid relfrozenxid)
 
 /*
  * Update of backend Level oldest relfrozenxid to MyProc.
- * This makes each backend's oldest RelFrozenxID globally visible.
+ * This makes each backend's oldest RelFrozenXID globally visible.
  */
 static void
 set_gtt_session_relfrozenxid(void)
@@ -1282,7 +1282,7 @@ pg_list_gtt_relfrozenxids(PG_FUNCTION_ARGS)
 }
 
 /*
- * In order to build the GTT index, force enable GTT'index.
+ * In order to build the GTT index, force enable GTT index.
  */
 void
 force_enable_gtt_index(Relation index)
@@ -1331,7 +1331,7 @@ gtt_fix_index_backend_state(Relation index)
 
 /*
  * During the SQL initialization of the executor (InitPlan)
- * Initialize storage of GTT GTT'indexes and build empty index.
+ * Initialize storage of GTT's indexes and build empty index.
  */
 void
 init_gtt_storage(CmdType operation, Relation relation)
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 72218b2c66..6dee51784f 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -188,8 +188,8 @@ analyze_rel(Oid relid, RangeVar *relation,
 	}
 
 	/*
-	 * Skip the global temporary table that did not initialize the storage
-	 * in this backend.
+	 * Skip global temporary tables for which storage was not initialized by
+	 * this backend.
 	 */
 	if (RELATION_IS_GLOBAL_TEMP(onerel) &&
 		!gtt_storage_attached(RelationGetRelid(onerel)))
diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c
index 535f537aae..ff6ae583f5 100644
--- a/src/backend/commands/cluster.c
+++ b/src/backend/commands/cluster.c
@@ -392,15 +392,15 @@ cluster_rel(Oid tableOid, Oid indexOid, ClusterParams *params)
 	}
 
 	/*
-	 * Skip the global temporary table that did not initialize the storage
-	 * in this backend.
+	 * Skip global temporary tables for which storage was not initialized by
+	 * this backend.
 	 */
 	if (RELATION_IS_GLOBAL_TEMP(OldHeap))
 	{
 		if (gtt_storage_attached(RelationGetRelid(OldHeap)))
 			ereport(ERROR,
 (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("not support cluster global temporary table")));
+ errmsg("clustering is not supported on a global temporary table")));
 
 		relation_close(OldHeap, AccessExclusiveLock);
 		pgstat_progress_end_command();
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index af12dd2cdc..4d5aedf2c3 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -461,7 +461,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 		if (is_other_backend_use_gtt(RelationGetRelid(seqrel)))
 			ereport(ERROR,
 (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- errmsg("cannot alter global temporary sequence %s when other backend attached it.",
+ errmsg("cannot alter global temporary sequence %s being accessed by another backend",
 		RelationGetRelationName(seqrel;
 	}
 
@@ -1194,7 +1194,7 @@ init_sequence(Oid relid, SeqTable *p_elm, Relation *p_rel)
 	*p_elm = elm;
 	*p_rel = seqrel;
 
-	/* Initializes the storage for sequence which the global temporary table belongs. */
+	/* Initializes the storage for sequence which belongs to a global temporary table. */
 	if (RELATION_IS_GLOBAL_TEMP(seqrel) &&
 		!gtt_storage_attached(RelationGetRelid(seqrel)))
 	{
@@ -1999,7 +1999,7 @@ get_seqence_start_value(Oid seqid)
 }
 
 /*
- * Initialize sequence which global temporary table belongs.
+ * Initialize sequence which belongs to a global temporary table.
  */
 void
 gtt_init_s

Re: [Proposal] Global temporary tables

2022-02-24 Thread Andres Freund
Hi,


This is a huge thread. Realistically reviewers and committers can't reread
it. I think there needs to be more of a description of how this works included
in the patchset and *why* it works that way. The readme does a bit of that,
but not particularly well.


On 2022-02-25 14:26:47 +0800, Wenjing Zeng wrote:
> +++ b/README.gtt.txt
> @@ -0,0 +1,172 @@
> +Global Temporary Table(GTT)
> +===========================
> +
> +Feature description
> +-------------------
> +
> +Previously, temporary tables are defined once and automatically
> +exist (starting with empty contents) in every session before using them.

I think for a README "previously" etc isn't good language - if it were
committed, it'd not be understandable anymore. It makes more sense for commit
messages etc.


> +Main design ideas
> +-----------------
> +In general, GTT and LTT use the same storage and buffer design and
> +implementation. The storage files for both types of temporary tables are 
> named
> +as t_backendid_relfilenode, and the local buffer is used to cache the data.

What does "named ast_backendid_relfilenode" mean?


> +The schema of GTTs is shared among sessions while their data are not. We 
> build
> +a new mechanisms to manage those non-shared data and their statistics.
> +Here is the summary of changes:
> +
> +1) CATALOG
> +GTTs store session-specific data. The storage information of GTTs'data, their
> +transaction information, and their statistics are not stored in the catalog.
> +
> +2) STORAGE INFO & STATISTICS INFO & TRANSACTION INFO
> +In order to maintain durability and availability of GTTs'session-specific 
> data,
> +their storage information, statistics, and transaction information is managed
> +in a local hash table tt_storage_local_hash.

"maintain durability"? Durable across what? In the context of databases it's
typically about crash safety, but that can't be the case here.


> +3) DDL
> +Currently, GTT supports almost all table'DDL except CLUSTER/VACUUM FULL.
> +Part of the DDL behavior is limited by shared definitions and multiple copies of
> +local data, and we added some structures to handle this.

> +A shared hash table active_gtt_shared_hash is added to track the state of the
> +GTT in a different session. This information is recorded in the hash table
> +during the DDL execution of the GTT.

> +The data stored in a GTT can only be modified or accessed by owning session.
> +The statements that only modify data in a GTT do not need a high level of
> +table locking. The operations making those changes include truncate GTT,
> +reindex GTT, and lock GTT.

I think you need to introduce a bit more terminology for any of this to make
sense. Sometimes GTT means the global catalog entity; sometimes, like here, it
appears to mean the session-specific contents of a GTT.

What is the state of a GTT in another session?


How do GTTs handle something like BEGIN; TRUNCATE some_gtt_table; ROLLBACK;?
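Spelled out, the scenario in question (hypothetical table name; the intended behavior is precisely what's unresolved):

```sql
-- Assume a GTT that already holds rows in this session.
BEGIN;
TRUNCATE some_gtt_table;
ROLLBACK;
-- Open question: are the pre-TRUNCATE rows visible again here, as they
-- would be for a regular table, or is the truncation permanent?
SELECT count(*) FROM some_gtt_table;
```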


> +1.2 on commit clause
> +LTT's status associated with on commit DELETE ROWS and on commit PRESERVE ROWS
> +is not stored in catalog. Instead, GTTs need a bool value on_commit_delete_rows
> +in reloptions which is shared among sessions.

Why?



> +2.3 statistics info
> +1) relpages reltuples relallvisible relfilenode

?


> +3 DDL
> +3.1. active_gtt_shared_hash
> +This is the hash table created in shared memory to trace the GTT files initialized
> +in each session. Each hash entry contains a bitmap that records the backendid of
> +the initialized GTT file. With this hash table, we know which backend/session
> +is using this GTT. Such information is used during GTT's DDL operations.

So there's a separate locking protocol for GTTs that doesn't use the normal
locking infrastructure? Why?


> +3.7 CLUSTER GTT/VACUUM FULL GTT
> +The current version does not support.

Why?


> +4 MVCC commit log(clog) cleanup
> +
> +The GTT storage file contains transaction information. Queries for GTT data rely
> +on transaction information such as clog. The transaction information required by
> +each session may be completely different.

Why is transaction information different between sessions? Or does this just
mean that different transaction ids will be accessed?



0003-gtt-v67-implementation.patch
 71 files changed, 3167 insertions(+), 195 deletions(-)

This needs to be broken into smaller chunks to be reviewable.


> @@ -677,6 +678,14 @@ _bt_getrootheight(Relation rel)
>   {
>   Buffer  metabuf;
>  
> + /*
> +  * If a global temporary table storage file is not initialized in the
> +  * this session, its index does not have a root page, just returns 0.
> +  */
> + if (RELATION_IS_GLOBAL_TEMP(rel) &&
> + !gtt_storage_attached(RelationGetRelid(rel)))
> + return 0;
> +
>   metabuf = _bt_getbuf(rel, BTREE_METAPAGE, BT_READ);
>   

Re: [Proposal] Global temporary tables

2022-01-10 Thread Andrew Bille
Hi!

I could not detect crashes with your last patch, so I think the patch is
ready for a review.
Please, also consider fixing error messages, as existing ones don't follow
message writing guidelines.
https://www.postgresql.org/docs/14/error-style-guide.html

Regards, Andrew

On Thu, Dec 23, 2021 at 7:36 PM wenjing  wrote:

>
>
> On Tue, Dec 21, 2021 at 14:00, Andrew Bille wrote:
>
>> Hi!
>> Thanks for new patches.
>> Yet another crash reproduced on master with v63 patches:
>>
>> CREATE TABLESPACE ts LOCATION '/tmp/ts';
>> CREATE GLOBAL TEMP TABLE tbl (num1 bigint);
>> INSERT INTO tbl (num1) values (1);
>> CREATE INDEX tbl_idx ON tbl (num1);
>> REINDEX (TABLESPACE ts) TABLE tbl;
>>
> This is a feature added in PG14 that allows REINDEX to change tablespaces.
> Thank you for pointing that out and I fixed it in v64.
> Waiting for your feedback.
>


Re: [Proposal] Global temporary tables

2021-12-20 Thread Andrew Bille
Hi!
Thanks for new patches.
Yet another crash reproduced on master with v63 patches:

CREATE TABLESPACE ts LOCATION '/tmp/ts';
CREATE GLOBAL TEMP TABLE tbl (num1 bigint);
INSERT INTO tbl (num1) values (1);
CREATE INDEX tbl_idx ON tbl (num1);
REINDEX (TABLESPACE ts) TABLE tbl;

Got error:
CREATE TABLESPACE
CREATE TABLE
INSERT 0 1
CREATE INDEX
WARNING:  AbortTransaction while in COMMIT state
ERROR:  gtt relfilenode 16388 not found in rel 16388
PANIC:  cannot abort transaction 726, it was already committed
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost

in log:
2021-12-21 12:54:08.273 +07 [208725] ERROR:  gtt relfilenode 16388 not
found in rel 16388
2021-12-21 12:54:08.273 +07 [208725] STATEMENT:  REINDEX (TABLESPACE ts)
TABLE tbl;
2021-12-21 12:54:08.273 +07 [208725] WARNING:  AbortTransaction while in
COMMIT state
2021-12-21 12:54:08.273 +07 [208725] PANIC:  cannot abort transaction 726,
it was already committed
2021-12-21 12:54:08.775 +07 [208716] LOG:  server process (PID 208725) was
terminated by signal 6: Аварийный останов (Aborted)
2021-12-21 12:54:08.775 +07 [208716] DETAIL:  Failed process was running:
REINDEX (TABLESPACE ts) TABLE tbl;
2021-12-21 12:54:08.775 +07 [208716] LOG:  terminating any other active
server processes
2021-12-21 12:54:08.775 +07 [208716] LOG:  all server processes terminated;
reinitializing

with dump:
[New LWP 208725]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `postgres: andrew postgres [local] REINDEX
   '.
Program terminated with signal SIGABRT, Aborted.
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50  ../sysdeps/unix/sysv/linux/raise.c: Нет такого файла или каталога. (No such file or directory.)
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x7feadfac7859 in __GI_abort () at abort.c:79
#2  0x55e36b6d9ec7 in errfinish (filename=0x55e36b786e20 "xact.c",
lineno=1729, funcname=0x55e36b788660 <__func__.29619>
"RecordTransactionAbort") at elog.c:680
#3  0x55e36b0d6e37 in RecordTransactionAbort (isSubXact=false) at
xact.c:1729
#4  0x55e36b0d7f64 in AbortTransaction () at xact.c:2787
#5  0x55e36b0d88fa in AbortCurrentTransaction () at xact.c:3315
#6  0x55e36b524f33 in PostgresMain (dbname=0x55e36d4d97b8 "postgres",
username=0x55e36d4d9798 "andrew") at postgres.c:4252
#7  0x55e36b44d1e0 in BackendRun (port=0x55e36d4d1020) at
postmaster.c:4594
#8  0x55e36b44cac5 in BackendStartup (port=0x55e36d4d1020) at
postmaster.c:4322
#9  0x55e36b448bad in ServerLoop () at postmaster.c:1802
#10 0x55e36b448346 in PostmasterMain (argc=3, argv=0x55e36d4a84d0) at
postmaster.c:1474
#11 0x55e36b33b5ca in main (argc=3, argv=0x55e36d4a84d0) at main.c:198

Regards!

On Mon, Dec 20, 2021 at 7:42 PM wenjing zeng  wrote:

> Post GTT v63 to fixed conflicts with the latest code.
>
>
>
> Hi Andrew
>
> Have you found any new bugs recently?
>
>
>
> Wenjing
>
>
>
>
> On Nov 20, 2021 at 01:31, wenjing wrote:
>
>
>
> On Mon, Nov 15, 2021 at 6:34 PM, Andrew Bille wrote:
>
>> Thanks for the patches. The feature has become much more stable.
>> However, there is another simple case that generates an error:
>> Master with v61 patches
>>
>> CREATE GLOBAL TEMPORARY TABLE t AS SELECT 1 AS a;
>> ERROR:  could not open file "base/13560/t3_16384": No such file or
>> directory
>>
> Thank you for pointing out that this part is not reasonable enough.
> This issue has been fixed in v62.
> Looking forward to your reply.
>
>
> Wenjing
>
>
>
>> Andrew
>>
>> On Thu, Nov 11, 2021 at 3:15 PM wenjing  wrote:
>>
>>> Fixed a bug in function pg_gtt_attached_pid.
>>> Looking forward to your reply.
>>>
>>>
>>> Wenjing
>>>
>>>
>
>
> <0001-gtt-v62-reademe.patch><0004-gtt-v62-regress.patch>
> <0002-gtt-v62-doc.patch><0003-gtt-v62-implementation.patch>
>
>
>
>
>


Re: [Proposal] Global temporary tables

2021-11-15 Thread Andrew Bille
Thanks for the patches. The feature has become much more stable.
However, there is another simple case that generates an error:
Master with v61 patches

CREATE GLOBAL TEMPORARY TABLE t AS SELECT 1 AS a;
ERROR:  could not open file "base/13560/t3_16384": No such file or directory
Andrew

On Thu, Nov 11, 2021 at 3:15 PM wenjing  wrote:

> Fixed a bug in function pg_gtt_attached_pid.
> Looking forward to your reply.
>
>
> Wenjing
>
>


Re: [Proposal] Global temporary tables

2021-10-28 Thread Andrew Bille
Thanks, the "group by" is fixed

Yet another crash (on v58 patches), reproduced with:

psql -t -c "create global temp table t(b text)
with(on_commit_delete_rows=true); create index idx_b on t (b); insert into
t values('test'); alter table t alter b type varchar;"
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost

with trace:

[New LWP 569199]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `postgres: andrew postgres [local] ALTER TABLE
  '.
Program terminated with signal SIGABRT, Aborted.
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x7f197493f859 in __GI_abort () at abort.c:79
#2  0x5562b3306fb9 in ExceptionalCondition
(conditionName=0x5562b34dd740 "reln->md_num_open_segs[forkNum] == 0",
errorType=0x5562b34dd72c "FailedAssertion", fileName=0x5562b34dd727 "md.c",
lineNumber=187) at assert.c:69
#3  0x5562b3148f15 in mdcreate (reln=0x5562b41abdc0,
forkNum=MAIN_FORKNUM, isRedo=false) at md.c:187
#4  0x5562b314b73f in smgrcreate (reln=0x5562b41abdc0,
forknum=MAIN_FORKNUM, isRedo=false) at smgr.c:335
#5  0x5562b2d88b23 in RelationCreateStorage (rnode=...,
relpersistence=103 'g', rel=0x7f196b597270) at storage.c:154
#6  0x5562b2d5a408 in index_build (heapRelation=0x7f196b58dc40,
indexRelation=0x7f196b597270, indexInfo=0x5562b4167d60, isreindex=true,
parallel=false) at index.c:3038
#7  0x5562b2d533c1 in RelationTruncateIndexes
(heapRelation=0x7f196b58dc40, lockmode=1) at heap.c:3354
#8  0x5562b2d5360b in heap_truncate_one_rel (rel=0x7f196b58dc40) at
heap.c:3452
#9  0x5562b2d53544 in heap_truncate (relids=0x5562b4167c58,
is_global_temp=true) at heap.c:3410
#10 0x5562b2ea09fc in PreCommit_on_commit_actions () at
tablecmds.c:16495
#11 0x5562b2d0d4ee in CommitTransaction () at xact.c:2140
#12 0x5562b2d0e320 in CommitTransactionCommand () at xact.c:2979
#13 0x5562b3151b7e in finish_xact_command () at postgres.c:2721
#14 0x5562b314f340 in exec_simple_query (query_string=0x5562b40c2170
"create global temp table t(b text) with(on_commit_delete_rows=true);
create index idx_b on t (b); insert into t values('test'); alter table t
alter b type varchar;") at postgres.c:1239
#15 0x5562b3153f0a in PostgresMain (dbname=0x5562b40ed6e8 "postgres",
username=0x5562b40ed6c8 "andrew") at postgres.c:4497
#16 0x5562b307df6e in BackendRun (port=0x5562b40e4500) at
postmaster.c:4560
#17 0x5562b307d853 in BackendStartup (port=0x5562b40e4500) at
postmaster.c:4288
#18 0x5562b3079a1d in ServerLoop () at postmaster.c:1801
#19 0x5562b30791b6 in PostmasterMain (argc=3, argv=0x5562b40bc5b0) at
postmaster.c:1473
#20 0x5562b2f6d98e in main (argc=3, argv=0x5562b40bc5b0) at main.c:198

On Mon, Oct 25, 2021 at 7:13 PM wenjing  wrote:

>
> I missed whole row and system column. It has been fixed in v58.
> Please review the new code(v58) again
>
>


Re: [Proposal] Global temporary tables

2021-10-23 Thread Andrew Bille
Thanks, the vacuum is fixed

But I found another crash (on v57 patches), reproduced with:

psql -t -c "create global temp table t (a integer); insert into t values
(1); select count(*) from t group by t;"
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost

with trace:

[New LWP 2580215]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `postgres: andrew postgres [local] SELECT
 '.
Program terminated with signal SIGABRT, Aborted.
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x7f258d482859 in __GI_abort () at abort.c:79
#2  0x55ad0be8878f in ExceptionalCondition
(conditionName=conditionName@entry=0x55ad0bf19743
"gtt_rnode->att_stat_tups[i]", errorType=errorType@entry=0x55ad0bee500b
"FailedAssertion", fileName=fileName@entry=0x55ad0bf1966b "storage_gtt.c",
lineNumber=lineNumber@entry=902) at assert.c:69
#3  0x55ad0ba9379f in get_gtt_att_statistic (reloid=,
attnum=0, inh=) at storage_gtt.c:902
#4  0x55ad0be35625 in examine_simple_variable
(root=root@entry=0x55ad0c498748,
var=var@entry=0x55ad0c498c68, vardata=vardata@entry=0x7fff06c9ebf0) at
selfuncs.c:5391
#5  0x55ad0be36a89 in examine_variable (root=root@entry=0x55ad0c498748,
node=node@entry=0x55ad0c498c68, varRelid=varRelid@entry=0,
vardata=vardata@entry=0x7fff06c9ebf0) at selfuncs.c:4990
#6  0x55ad0be3ad64 in estimate_num_groups (root=root@entry=0x55ad0c498748,
groupExprs=, input_rows=input_rows@entry=255,
pgset=pgset@entry=0x0, estinfo=estinfo@entry=0x0) at selfuncs.c:3455
#7  0x55ad0bc50835 in get_number_of_groups (root=root@entry=0x55ad0c498748,
path_rows=255, gd=gd@entry=0x0, target_list=0x55ad0c498bb8) at
planner.c:3241
#8  0x55ad0bc5576f in create_ordinary_grouping_paths
(root=root@entry=0x55ad0c498748,
input_rel=input_rel@entry=0x55ad0c3ce148,
grouped_rel=grouped_rel@entry=0x55ad0c4983f0,
agg_costs=agg_costs@entry=0x7fff06c9edb0, gd=gd@entry=0x0,
extra=extra@entry=0x7fff06c9ede0,
partially_grouped_rel_p=0x7fff06c9eda8)
at planner.c:3628
#9  0x55ad0bc55a72 in create_grouping_paths
(root=root@entry=0x55ad0c498748,
input_rel=input_rel@entry=0x55ad0c3ce148, target=target@entry=0x55ad0c4c95d8,
target_parallel_safe=target_parallel_safe@entry=true, gd=gd@entry=0x0) at
planner.c:3377
#10 0x55ad0bc5686d in grouping_planner (root=root@entry=0x55ad0c498748,
tuple_fraction=, tuple_fraction@entry=0) at planner.c:1592
#11 0x55ad0bc57910 in subquery_planner (glob=glob@entry=0x55ad0c497880,
parse=parse@entry=0x55ad0c3cdbb8, parent_root=parent_root@entry=0x0,
hasRecursion=hasRecursion@entry=false, tuple_fraction=tuple_fraction@entry=0)
at planner.c:1025
#12 0x55ad0bc57f36 in standard_planner (parse=0x55ad0c3cdbb8,
query_string=, cursorOptions=2048, boundParams=0x0) at
planner.c:406
#13 0x55ad0bc584d4 in planner (parse=parse@entry=0x55ad0c3cdbb8,
query_string=query_string@entry=0x55ad0c3cc470 "create global temp table t
(a integer); insert into t values (1); select count(*) from t group by t;",
cursorOptions=cursorOptions@entry=2048, boundParams=boundParams@entry=0x0)
at planner.c:277
#14 0x55ad0bd4855f in pg_plan_query
(querytree=querytree@entry=0x55ad0c3cdbb8,
query_string=query_string@entry=0x55ad0c3cc470 "create global temp table t
(a integer); insert into t values (1); select count(*) from t group by t;",
cursorOptions=cursorOptions@entry=2048, boundParams=boundParams@entry=0x0)
at postgres.c:847
#15 0x55ad0bd4863b in pg_plan_queries (querytrees=0x55ad0c4986f0,
query_string=query_string@entry=0x55ad0c3cc470 "create global temp table t
(a integer); insert into t values (1); select count(*) from t group by t;",
cursorOptions=cursorOptions@entry=2048, boundParams=boundParams@entry=0x0)
at postgres.c:939
#16 0x55ad0bd48b20 in exec_simple_query
(query_string=query_string@entry=0x55ad0c3cc470
"create global temp table t (a integer); insert into t values (1); select
count(*) from t group by t;") at postgres.c:1133
#17 0x55ad0bd4aaf3 in PostgresMain (dbname=,
username=) at postgres.c:4497
#18 0x55ad0bca3f91 in BackendRun (port=port@entry=0x55ad0c3f1020) at
postmaster.c:4560
#19 0x55ad0bca7115 in BackendStartup (port=port@entry=0x55ad0c3f1020)
at postmaster.c:4288
#20 0x55ad0bca735c in ServerLoop () at postmaster.c:1801
#21 0x55ad0bca893e in PostmasterMain (argc=3, argv=) at
postmaster.c:1473
#22 0x55ad0bbe8e31 in main (argc=3, argv=0x55ad0c3c6660) at main.c:198

On Thu, Oct 21, 2021 at 4:25 PM wenjing  wrote:

>
>
> On Wed, Oct 20, 2021 at 2:59 AM, Andrew Bille wrote:
>
>> Another thanks for the fix. It works for me.
>>
>> But I found another crash!
>>
> This is a 

Re: [Proposal] Global temporary tables

2021-10-19 Thread Andrew Bille
Another thanks for the fix. It works for me.

But I found another crash!

On master with the v56 patches applied:

initdb -D data
pg_ctl -w -t 5 -D data -l server.log start
echo "create global temp table t(i int4); insert into t values (1); vacuum
t;" > tmp.sql
psql < tmp.sql

CREATE TABLE
INSERT 0 1
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost

with following stack:
[New LWP 2192409]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `postgres: andrew regression [local] VACUUM
 '.
Program terminated with signal SIGABRT, Aborted.
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x7fb26b558859 in __GI_abort () at abort.c:79
#2  0x5627ddd8466c in ExceptionalCondition
(conditionName=conditionName@entry=0x5627dde153d0
"TransactionIdIsNormal(relfrozenxid)", errorType=errorType@entry=0x5627ddde100b
"FailedAssertion", fileName=fileName@entry=0x5627dddfa697 "vacuum.c",
lineNumber=lineNumber@entry=1170) at assert.c:69
#3  0x5627dda70808 in vacuum_xid_failsafe_check
(relfrozenxid=, relminmxid=) at vacuum.c:1170
#4  0x5627dd8db7ee in lazy_check_wraparound_failsafe
(vacrel=vacrel@entry=0x5627df5c9680) at vacuumlazy.c:2607
#5  0x5627dd8ded18 in lazy_scan_heap (vacrel=vacrel@entry=0x5627df5c9680,
params=params@entry=0x7fffb3d36100, aggressive=aggressive@entry=true) at
vacuumlazy.c:978
#6  0x5627dd8e019a in heap_vacuum_rel (rel=0x7fb26218af70,
params=0x7fffb3d36100, bstrategy=) at vacuumlazy.c:644
#7  0x5627dda70033 in table_relation_vacuum (bstrategy=,
params=0x7fffb3d36100, rel=0x7fb26218af70) at
../../../src/include/access/tableam.h:1678
#8  vacuum_rel (relid=16385, relation=,
params=params@entry=0x7fffb3d36100)
at vacuum.c:2124
#9  0x5627dda71624 in vacuum (relations=0x5627df610598,
params=params@entry=0x7fffb3d36100, bstrategy=,
bstrategy@entry=0x0, isTopLevel=isTopLevel@entry=true) at vacuum.c:476
#10 0x5627dda71eb1 in ExecVacuum (pstate=pstate@entry=0x5627df567440,
vacstmt=vacstmt@entry=0x5627df545e70, isTopLevel=isTopLevel@entry=true) at
vacuum.c:269
#11 0x5627ddc4a8cc in standard_ProcessUtility (pstmt=0x5627df5461c0,
queryString=0x5627df545380 "vacuum t;", readOnlyTree=,
context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,
dest=0x5627df5462b0, qc=0x7fffb3d36470) at utility.c:858
#12 0x5627ddc4ada1 in ProcessUtility (pstmt=pstmt@entry=0x5627df5461c0,
queryString=, readOnlyTree=,
context=context@entry=PROCESS_UTILITY_TOPLEVEL, params=,
queryEnv=, dest=0x5627df5462b0, qc=0x7fffb3d36470) at
utility.c:527
#13 0x5627ddc4822d in PortalRunUtility (portal=portal@entry=0x5627df5a67e0,
pstmt=pstmt@entry=0x5627df5461c0, isTopLevel=isTopLevel@entry=true,
setHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x5627df5462b0,
qc=qc@entry=0x7fffb3d36470) at pquery.c:1155
#14 0x5627ddc48551 in PortalRunMulti (portal=portal@entry=0x5627df5a67e0,
isTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false,
dest=dest@entry=0x5627df5462b0, altdest=altdest@entry=0x5627df5462b0,
qc=qc@entry=0x7fffb3d36470) at pquery.c:1312
#15 0x5627ddc4896c in PortalRun (portal=portal@entry=0x5627df5a67e0,
count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,
run_once=run_once@entry=true, dest=dest@entry=0x5627df5462b0,
altdest=altdest@entry=0x5627df5462b0, qc=0x7fffb3d36470) at pquery.c:788
#16 0x5627ddc44afb in exec_simple_query
(query_string=query_string@entry=0x5627df545380
"vacuum t;") at postgres.c:1214
#17 0x5627ddc469df in PostgresMain (dbname=,
username=) at postgres.c:4497
#18 0x5627ddb9fe7d in BackendRun (port=port@entry=0x5627df566580) at
postmaster.c:4560
#19 0x5627ddba3001 in BackendStartup (port=port@entry=0x5627df566580)
at postmaster.c:4288
#20 0x5627ddba3248 in ServerLoop () at postmaster.c:1801
#21 0x5627ddba482a in PostmasterMain (argc=3, argv=) at
postmaster.c:1473
#22 0x5627ddae4d1d in main (argc=3, argv=0x5627df53f750) at main.c:198

On Mon, Oct 18, 2021 at 7:00 PM wenjing  wrote:

> Hi Andrew
> I fixed the problem, please confirm again.
> Thanks
>
> Wenjing
>


Re: [Proposal] Global temporary tables

2021-10-15 Thread Andrew Bille
On master with the v55 patches applied the following script leads to crash:
initdb -D data
pg_ctl -w -t 5 -D data -l server.log start

psql -t -c "begin; create global temp table gtt_with_index(a int primary
key, b text); commit; select pg_sleep(5);" >psql1.log &
psql -t -c "select pg_sleep(1); create index idx_b on gtt_with_index(b);"
>psql2.log &
for i in `seq 40`; do (psql -t -c "select pg_sleep(1); insert into
gtt_with_index values(1,'test');" &); done

sleep 10


and I got crash
INSERT 0 1
...
INSERT 0 1
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the
current transaction and exit, because another server process exited
abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and
repeat your command.
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost

and some coredumps with the following stack:

[New LWP 1821493]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `postgres: andrew regression [local] INSERT
 '.
Program terminated with signal SIGABRT, Aborted.
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x7f021d809859 in __GI_abort () at abort.c:79
#2  0x564dc1bd22e8 in ExceptionalCondition
(conditionName=conditionName@entry=0x564dc1c5c957
"index->rd_index->indisvalid", errorType=errorType@entry=0x564dc1c2a00b
"FailedAssertion", fileName=fileName@entry=0x564dc1c5c854 "storage_gtt.c",
lineNumber=lineNumber@entry=1381) at assert.c:69
#3  0x564dc185778b in init_gtt_storage
(operation=operation@entry=CMD_INSERT,
resultRelInfo=resultRelInfo@entry=0x564dc306f6c0) at storage_gtt.c:1381
#4  0x564dc194c888 in ExecInsert (mtstate=0x564dc306f4a8,
resultRelInfo=0x564dc306f6c0, slot=0x564dc30706d0, planSlot=0x564dc306fca0,
estate=0x564dc306f230, canSetTag=) at nodeModifyTable.c:638
#5  0x564dc194d945 in ExecModifyTable (pstate=) at
nodeModifyTable.c:2565
#6  0x564dc191ca83 in ExecProcNode (node=0x564dc306f4a8) at
../../../src/include/executor/executor.h:257
#7  ExecutePlan (execute_once=, dest=0x564dc310ed80,
direction=, numberTuples=0, sendTuples=,
operation=CMD_INSERT, use_parallel_mode=,
planstate=0x564dc306f4a8, estate=0x564dc306f230) at execMain.c:1555
#8  standard_ExecutorRun (queryDesc=0x564dc306bce0, direction=, count=0, execute_once=) at execMain.c:361
#9  0x564dc1ab47a0 in ProcessQuery (plan=,
sourceText=0x564dc3049a30 "select pg_sleep(1); insert into gtt_with_index
values(1,'test');", params=0x0, queryEnv=0x0, dest=0x564dc310ed80,
qc=0x7ffd3a6cf2e0) at pquery.c:160
#10 0x564dc1ab52e2 in PortalRunMulti (portal=portal@entry=0x564dc30acd80,
isTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false,
dest=dest@entry=0x564dc310ed80, altdest=altdest@entry=0x564dc310ed80,
qc=qc@entry=0x7ffd3a6cf2e0)
at pquery.c:1274
#11 0x564dc1ab5861 in PortalRun (portal=portal@entry=0x564dc30acd80,
count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,
run_once=run_once@entry=true, dest=dest@entry=0x564dc310ed80,
altdest=altdest@entry=0x564dc310ed80, qc=0x7ffd3a6cf2e0)
at pquery.c:788
#12 0x564dc1ab1522 in exec_simple_query (query_string=0x564dc3049a30
"select pg_sleep(1); insert into gtt_with_index values(1,'test');") at
postgres.c:1214
#13 0x564dc1ab327a in PostgresMain (dbname=,
username=) at postgres.c:4497
#14 0x564dc1a1f539 in BackendRun (port=, port=) at postmaster.c:4560
#15 BackendStartup (port=) at postmaster.c:4288
#16 ServerLoop () at postmaster.c:1801
#17 0x564dc1a2053c in PostmasterMain (argc=,
argv=0x564dc3043fc0) at postmaster.c:1473
#18 0x564dc1750180 in main (argc=3, argv=0x564dc3043fc0) at main.c:198
(gdb) q


I've built the server using gcc 9 as follows:
./configure --enable-debug --enable-cassert

Thanks to Alexander Lakhin for simplifying the repro.

On Thu, Oct 14, 2021 at 3:29 PM wenjing zeng  wrote:

>
> I'd be deeply grateful.
> Perhaps you could share the stack trace of the problem so that analysis can
> start as soon as possible.
>
> Wenjing
>
>


Re: [Proposal] Global temporary tables

2021-10-14 Thread wenjing zeng


> On Oct 13, 2021 at 13:08, Andrew Bille wrote:
> 
> Thanks for the fix. It works for me.
> 
> Now I'm exploring another crash related to GTT, but I need a few days to 
> present a simple repro.

I'd be deeply grateful.
Perhaps you could share the stack trace of the problem so that analysis can
start as soon as possible.


Wenjing

> 
> On Sat, Oct 9, 2021 at 2:41 PM wenjing wrote:
> 
> Thank you for pointing it out. 
> This is a bug that occurs during transaction rollback and process exit, I 
> fixed it, please confirm it.
> 
> Wenjing 



Re: [Proposal] Global temporary tables

2021-10-13 Thread Andrew Bille
Thanks for the fix. It works for me.

Now I'm exploring another crash related to GTT, but I need a few days to
present a simple repro.

On Sat, Oct 9, 2021 at 2:41 PM wenjing  wrote:

>
> Thank you for pointing it out.
> This is a bug that occurs during transaction rollback and process exit, I
> fixed it, please confirm it.
>
> Wenjing
>


Re: [Proposal] Global temporary tables

2021-10-06 Thread Andrew Bille
On master with the v54 patches applied the following script leads to crash:
export
ASAN_OPTIONS=detect_leaks=0:abort_on_error=1:disable_coredump=0:strict_string_checks=1:check_initialization_order=1:strict_init_order=1
initdb -D data
pg_ctl -w -t 5 -D data -l server.log start
psql -c "create global temp table tmp_table_test_statistics(a int); insert
into temp_table_test_statistics values(generate_series(1,10));" &
sleep 1
pg_ctl -w -t 5 -D data -l server.log stop

and i got error
=
==1022892==ERROR: AddressSanitizer: heap-use-after-free on address
0x6254c640 at pc 0x562435348750 bp 0x7ffee8487e60 sp 0x7ffee8487e50
READ of size 8 at 0x6254c640 thread T0
---

with backtrace:

Core was generated by `postgres: andrew regression [local] INSERT
 '.
Program terminated with signal SIGABRT, Aborted.
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x7fa8fd008859 in __GI_abort () at abort.c:79
#2  0x56243471eae2 in __sanitizer::Abort() ()
#3  0x56243472968c in __sanitizer::Die() ()
#4  0x56243470ad1c in
__asan::ScopedInErrorReport::~ScopedInErrorReport() ()
#5  0x56243470a793 in __asan::ReportGenericError(unsigned long,
unsigned long, unsigned long, unsigned long, bool, unsigned long, unsigned
int, bool) ()
#6  0x56243470b5db in __asan_report_load8 ()
#7  0x562435348750 in DropRelFileNodesAllBuffers
(smgr_reln=smgr_reln@entry=0x6254c640, nnodes=nnodes@entry=1) at
bufmgr.c:3211
#8  0x5624353ec8a8 in smgrdounlinkall (rels=rels@entry=0x6254c640,
nrels=nrels@entry=1, isRedo=isRedo@entry=false) at smgr.c:397
#9  0x562434aa76e1 in gtt_storage_removeall (code=,
arg=) at storage_gtt.c:726
#10 0x562435371962 in shmem_exit (code=code@entry=1) at ipc.c:236
#11 0x562435371d4f in proc_exit_prepare (code=code@entry=1) at ipc.c:194
#12 0x562435371f74 in proc_exit (code=code@entry=1) at ipc.c:107
#13 0x56243581e35c in errfinish (filename=,
filename@entry=0x562435b800e0 "postgres.c", lineno=lineno@entry=3191,
funcname=funcname@entry=0x562435b836a0 <__func__.26025>
"ProcessInterrupts") at elog.c:666
#14 0x5624353f5f86 in ProcessInterrupts () at postgres.c:3191
#15 0x562434eb26d6 in ExecProjectSet (pstate=0x6253f150) at
nodeProjectSet.c:51
#16 0x562434eaae8e in ExecProcNode (node=0x6253f150) at
../../../src/include/executor/executor.h:257
#17 ExecModifyTable (pstate=0x6253ec98) at nodeModifyTable.c:2429
#18 0x562434df5755 in ExecProcNodeFirst (node=0x6253ec98) at
execProcnode.c:463
#19 0x562434dd678a in ExecProcNode (node=0x6253ec98) at
../../../src/include/executor/executor.h:257
#20 ExecutePlan (estate=estate@entry=0x6253ea20,
planstate=0x6253ec98, use_parallel_mode=,
use_parallel_mode@entry=false, operation=operation@entry=CMD_INSERT,
sendTuples=false, numberTuples=numberTuples@entry=0,
direction=ForwardScanDirection,
dest=0x62545550, execute_once=true) at execMain.c:1555
#21 0x562434dd9867 in standard_ExecutorRun (queryDesc=0x619015a0,
direction=ForwardScanDirection, count=0, execute_once=execute_once@entry=true)
at execMain.c:361
#22 0x562434dd9a83 in ExecutorRun
(queryDesc=queryDesc@entry=0x619015a0,
direction=direction@entry=ForwardScanDirection, count=count@entry=0,
execute_once=execute_once@entry=true) at execMain.c:305
#23 0x562435401be6 in ProcessQuery (plan=plan@entry=0x62545480,
sourceText=0x62505220 "insert into temp_table_test_statistics
values(generate_series(1,10));", params=0x0, queryEnv=0x0,
dest=dest@entry=0x62545550, qc=qc@entry=0x7ffee84886d0)
at pquery.c:160
#24 0x562435404a32 in PortalRunMulti (portal=portal@entry=0x62520a20,
isTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false,
dest=dest@entry=0x62545550, altdest=altdest@entry=0x62545550,
qc=qc@entry=0x7ffee84886d0)
at pquery.c:1274
#25 0x56243540598d in PortalRun (portal=portal@entry=0x62520a20,
count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,
run_once=run_once@entry=true, dest=dest@entry=0x62545550,
altdest=altdest@entry=0x62545550, qc=)
at pquery.c:788
#26 0x5624353fa917 in exec_simple_query
(query_string=query_string@entry=0x62505220
"insert into temp_table_test_statistics
values(generate_series(1,10));") at postgres.c:1214
#27 0x5624353ff61d in PostgresMain (dbname=dbname@entry=0x62911278
"regression", username=username@entry=0x62911258 "andrew") at
postgres.c:4497
#28 0x5624351f65c7 in BackendRun (port=port@entry=0x61502d80) at
postmaster.c:4560
#29 0x5624351ff1c5 in BackendStartup (port=port@entry=0x61502d80)
at postmaster.c:4288
#30 0x5624351ff970 in ServerLoop () at 

Re: [Proposal] Global temporary tables

2021-09-28 Thread Pavel Stehule
hi

ne 26. 9. 2021 v 6:05 odesílatel wenjing  napsal:

>
>
> On Sep 23, 2021 at 21:55, Tony Zhu wrote:
>
> Hi Wenjing
>
> we have reviewed the code and run the regression tests; all tests pass. We
> believe the feature's code quality is ready for production, and I will
> change the status to "Ready for commit".
>
> Thank you very much for your attention and testing.
> As we communicated, I fixed several issues and attached the latest patch.
>

Looks like the Windows build is broken:

https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.148542

Regards

Pavel

>
>
> Wenjing
>
>
>


Re: [Proposal] Global temporary tables

2021-09-23 Thread Tony Zhu
Hi Wenjing

we have reviewed the code and run the regression tests; all tests pass. We
believe the feature's code quality is ready for production, and I will change
the status to "Ready for commit".

Re: [Proposal] Global temporary tables

2021-09-23 Thread wenjing
Andrew Dunstan wrote on Sun, 2021-03-28, 21:07:

>
> On 3/17/21 7:59 AM, wenjing wrote:
> > ok
> >
> > The cause of the problem is that the name of the dependent function
> > (readNextTransactionID) has changed. I fixed it.
> >
> > This patch (V43) is based on 9fd2952cf4920d563e9cea51634c5b364d57f71a
> >
> > Wenjing
> >
> >
>
> I have fixed this patch so that
>
> a) it applies cleanly
>
> b) it uses project best practice for catalog Oid assignment.
>
> However, as noted elsewhere it fails the recovery TAP test.
>
> I also note this:
>
>
> diff --git a/src/test/regress/parallel_schedule
> b/src/test/regress/parallel_schedule
> index 312c11a4bd..d44fa62f4e 100644
> --- a/src/test/regress/parallel_schedule
> +++ b/src/test/regress/parallel_schedule
> @@ -129,3 +129,10 @@ test: fast_default
>
>  # run stats by itself because its delay may be insufficient under heavy
> load
>  test: stats
> +
> +# global temp table test
> +test: gtt_stats
> +test: gtt_function
> +test: gtt_prepare
> +test: gtt_parallel_1 gtt_parallel_2
> +test: gtt_clean
>
>
> Tests that need to run in parallel should use either the isolation
> tester framework (which is explicitly for testing things concurrently)
> or the TAP test framework.
>
> Adding six test files to the regression test suite for this one feature
> is not a good idea. You should have one regression test script ideally,
> and it should be added as appropriate to both the parallel and serial
> schedules (and not at the end). Any further tests should be added using
> the other frameworks mentioned.
>
Thank you for your advice.
I have simplified the test cases in regress and put further tests into the
isolation tester framework, based on your suggestion.
In the process I found a few bugs and fixed them.

Please review GTT v52 and give me feedback.
https://commitfest.postgresql.org/31/2349/


Wenjing



>
>
> cheers
>
>
> andrew
>
>
> --
>
> Andrew Dunstan
> EDB: https://www.enterprisedb.com
>
>


Re: [Proposal] Global temporary tables

2021-09-22 Thread wenjing
On 2021-07-14 10:56, Ming Li wrote:

Hi Wenjing,

Some suggestions may help:

1) It seems that no test case covers the following scenario: two sessions
attach the same GTT and insert/update/select concurrently. It is better to
use the test framework in src/test/isolation, like the code changes in
https://commitfest.postgresql.org/24/2233/.


I rewrote the test cases under regress to make them easier to read,
and I used the isolation module to add some concurrent cases and fix
some bugs.

Please check the code (v52) and give me feedback.


Wenjing


2) CREATE GLOBAL TEMP SEQUENCE also needs to be supported
in src/bin/psql/tab-complete.c.


On Wed, Jul 14, 2021 at 10:36 AM wenjing  wrote:

> Rebase code based on the latest version.
>
> Regards,
> wenjing
>
>


Re: [Proposal] Global temporary tables

2021-09-16 Thread Pavel Stehule
Hi

It looks like this patch is broken again. Please, can you rebase it?

Regards

Pavel

On Thu, 2021-09-16 at 08:28, wenjing wrote:



Re: [Proposal] Global temporary tables

2021-08-05 Thread ZHU XIAN WEN
Hi WenJing


Thanks for the feedback,

I have tested the code; it seems okay, and the regression tests pass.

I have also reviewed the code and did not find any further issues.


Hello all


Review and comments on patch V51 are welcome.


If there is no feedback, I'm going to change the status to 'Ready for
Committer' on Aug 25.


big thanks

Tony



On 2021/7/29 23:19, wenjing zeng wrote:
>
>> On 2021-07-28 23:09, Tony Zhu wrote:
>>
>> Hi Wenjing
>>
>> would you please rebase the code?
> Thank you for your attention.
> In my testing, the latest patch applies cleanly to the latest master code 
> and passes the tests.
> https://www.travis-ci.com/github/wjzeng/postgres/builds 
> 
> If you have any questions, please give me feedback.
>
>
> Wenjing
>
>
>> Thank you very much
>> Tony
>>
>> The new status of this patch is: Waiting on Author
>





Re: [Proposal] Global temporary tables

2021-07-29 Thread wenjing zeng


> On 2021-07-28 23:09, Tony Zhu wrote:
> 
> Hi Wenjing
> 
> would you please rebase the code?
Thank you for your attention.
In my testing, the latest patch applies cleanly to the latest master code and 
passes the tests.
https://www.travis-ci.com/github/wjzeng/postgres/builds 

If you have any questions, please give me feedback.


Wenjing


> 
> Thank you very much
> Tony
> 
> The new status of this patch is: Waiting on Author



Re: [Proposal] Global temporary tables

2021-07-28 Thread Tony Zhu
Hi Wenjing

would you please rebase the code?

Thank you very much
Tony

The new status of this patch is: Waiting on Author


Re: [Proposal] Global temporary tables

2021-07-13 Thread Ming Li
Hi Wenjing,

Some suggestions may help:

1) It seems that no test case covers the following scenario: two sessions
attach the same GTT and insert/update/select concurrently. It is better to
use the test framework in src/test/isolation, like the code changes in
https://commitfest.postgresql.org/24/2233/.

2) CREATE GLOBAL TEMP SEQUENCE also needs to be supported
in src/bin/psql/tab-complete.c.


On Wed, Jul 14, 2021 at 10:36 AM wenjing  wrote:

> Rebase code based on the latest version.
>
> Regards,
> wenjing
>
>


Re: [Proposal] Global temporary tables

2021-05-10 Thread Dilip Kumar
On Thu, Apr 22, 2021 at 1:11 PM wenjing  wrote:
>

I have briefly looked into the design comments added by the patch.  I
have a few questions.

+Feature description
+
+
+Previously, temporary tables are defined once and automatically
+created (starting with empty contents) in every session before using them.


I don't think this statement is correct; if we define a temp table in
one session, it isn't automatically created in all the sessions.
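
To illustrate the distinction (the GLOBAL TEMP syntax below is the one
proposed by this patch; table names are made up for the example):

```sql
-- Session 1: a local temp table is private to this session;
-- neither its definition nor its data exists anywhere else.
CREATE TEMP TABLE lt (a int);

-- Session 2: ERROR - relation "lt" does not exist

-- Session 1: with a GTT, only the *definition* is shared.
CREATE GLOBAL TEMP TABLE gt (a int);

-- Session 2: sees the definition of "gt", but each session
-- still has its own private contents.
INSERT INTO gt VALUES (1);
```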


+
+Like local temporary table, Global Temporary Table supports ON COMMIT
PRESERVE ROWS
+or ON COMMIT DELETE ROWS clause, so that data in the temporary table can be
+cleaned up or reserved automatically when a session exits or a
transaction COMMITs.

/reserved/preserved


I was trying to look into the “Main design idea” section.

+1) CATALOG
+GTTs store session-specific data. The storage information of GTTs'data, their
+transaction information, and their statistics are not stored in the catalog.

I did not understand what you mean by "transaction information" not
being stored in the catalog. What transaction information is stored in
the catalog for a normal table that is not stored for a GTT?

+Changes to the GTT's metadata affect all sessions.
+The operations making those changes include truncate GTT, Vacuum/Cluster GTT,
+and Lock GTT.

How do TRUNCATE and VACUUM affect all sessions? I mean, TRUNCATE should
only truncate the data of the current session, and the same is true for
VACUUM, no?
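
To make the question concrete, my expectation (assuming TRUNCATE acts
per-session, which the README does not say) would be something like:

```sql
-- Session 1
CREATE GLOBAL TEMP TABLE gtt (a int) ON COMMIT PRESERVE ROWS;
INSERT INTO gtt VALUES (1);

-- Session 2
INSERT INTO gtt VALUES (2);
TRUNCATE gtt;              -- removes only session 2's rows?
SELECT count(*) FROM gtt;  -- 0 in session 2 ...

-- Session 1
SELECT count(*) FROM gtt;  -- ... but still 1 in session 1?
```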

I will try to do a more detailed review.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: [Proposal] Global temporary tables

2021-04-15 Thread shawn wang
wenjing wrote on Thu, 2021-04-15, 15:26:

> Hi Pavel
>
> I added user documentation.
> Please give me feedback.
>
>
> Wenjing
>
>
Hi, Wenjing,

I have checked your documentation section, fixed a spelling mistake,
and adjusted some sentences for you.
All the modified content is in the new patch, and please check it.

Regards

Shawn


0002-gtt-v47-doc.patch
Description: Binary data


Re: [Proposal] Global temporary tables

2021-03-30 Thread Pavel Stehule
On Mon, 2021-03-29 at 13:45, wenjing wrote:

> Hi all
>
> I fixed the document description error and the regression test bug
> mentioned by Pavel.
> This patch (V45) is based on 30aaab26e52144097a1a5bbb0bb66ea1ebc0cb81
> Please give me feedback.
>

Yes, it is working.

So please, can you write some user documentation?



>
> Wenjing
>
>
>


Re: [Proposal] Global temporary tables

2021-03-29 Thread 曾文旌


> On 2021-03-29 16:37, Pavel Stehule wrote:
> 
> 
> 
> On Sun, 2021-03-28 at 15:07, Andrew Dunstan wrote:
> 
> On 3/17/21 7:59 AM, wenjing wrote:
> > ok
> >
> > The cause of the problem is that the name of the dependent function
> > (readNextTransactionID) has changed. I fixed it.
> >
> > This patch (V43) is based on 9fd2952cf4920d563e9cea51634c5b364d57f71a
> >
> > Wenjing
> >
> >
> 
> I have fixed this patch so that
> 
> a) it applies cleanly
> 
> b) it uses project best practice for catalog Oid assignment.
> 
> However, as noted elsewhere it fails the recovery TAP test.
> 
> I also note this:
> 
> 
> diff --git a/src/test/regress/parallel_schedule
> b/src/test/regress/parallel_schedule
> index 312c11a4bd..d44fa62f4e 100644
> --- a/src/test/regress/parallel_schedule
> +++ b/src/test/regress/parallel_schedule
> @@ -129,3 +129,10 @@ test: fast_default
>  
>  # run stats by itself because its delay may be insufficient under heavy
> load
>  test: stats
> +
> +# global temp table test
> +test: gtt_stats
> +test: gtt_function
> +test: gtt_prepare
> +test: gtt_parallel_1 gtt_parallel_2
> +test: gtt_clean
> 
> 
> Tests that need to run in parallel should use either the isolation
> tester framework (which is explicitly for testing things concurrently)
> or the TAP test framework.
> 
> Adding six test files to the regression test suite for this one feature
> is not a good idea. You should have one regression test script ideally,
> and it should be added as appropriate to both the parallel and serial
> schedules (and not at the end). Any further tests should be added using
> the other frameworks mentioned.
> 
> 
> * bad name of GTT-README - the convention is README.gtt
> 
> * Typo - "ofa" 
> 
> 2) Use beforeshmemexit to ensure that all files ofa session GTT are deleted 
> when
> the session exits. 
> 
> * Typo "nd" 
> 
> 3) GTT storage file cleanup during abnormal situations
> When a backend exits abnormally (such as oom kill), the startup process starts
> recovery before accepting client connection. The same startup process checks
> nd removes all GTT files before redo WAL.
> 
> * This comment is wrong
> 
>   /*
> + * Global temporary table is allowed to be dropped only when the
> + * current session is using it.
> + */
> + if (RELATION_IS_GLOBAL_TEMP(rel))
> + {
> + if (is_other_backend_use_gtt(RelationGetRelid(rel)))
> + ereport(ERROR,
> + (errcode(ERRCODE_DEPENDENT_OBJECTS_STILL_EXIST),
> + errmsg("cannot drop global temporary table %s when other backend attached 
> it.",
> + RelationGetRelationName(rel;
> + }
> 
> * same wrong comment
> 
>   /*
> + * Global temporary table is allowed to be dropped only when the
> + * current session is using it.
> + */
> + if (RELATION_IS_GLOBAL_TEMP(rel))
> + {
> + if (is_other_backend_use_gtt(RelationGetRelid(rel)))
> + ereport(ERROR,
> + (errcode(ERRCODE_DEPENDENT_OBJECTS_STILL_EXIST),
> + errmsg("cannot drop global temporary table %s when other backend attached 
> it.",
> + RelationGetRelationName(rel;
> + }
> 
> * typo "backand"
> 
> +/*
> + * Check if there are other backends using this GTT besides the current 
> backand.
> + */
> 
> There is no user documentation.
This is necessary; I will make a separate documentation patch.


Wenjing.


> 
> Regards
> 
> Pavel
> 
>  
> 
> cheers
> 
> 
> andrew
> 
> 
> -- 
> 
> Andrew Dunstan
> EDB: https://www.enterprisedb.com 
> 





Re: [Proposal] Global temporary tables

2021-03-29 Thread 曾文旌


> On 2021-03-28 21:07, Andrew Dunstan wrote:
> 
> 
> On 3/17/21 7:59 AM, wenjing wrote:
>> ok
>> 
>> The cause of the problem is that the name of the dependent function
>> (readNextTransactionID) has changed. I fixed it.
>> 
>> This patch (V43) is based on 9fd2952cf4920d563e9cea51634c5b364d57f71a
>> 
>> Wenjing
>> 
>> 
> 
> I have fixed this patch so that
> 
> a) it applies cleanly
> 
> b) it uses project best practice for catalog Oid assignment.
> 
> However, as noted elsewhere it fails the recovery TAP test.
> 
> I also note this:
> 
> 
> diff --git a/src/test/regress/parallel_schedule
> b/src/test/regress/parallel_schedule
> index 312c11a4bd..d44fa62f4e 100644
> --- a/src/test/regress/parallel_schedule
> +++ b/src/test/regress/parallel_schedule
> @@ -129,3 +129,10 @@ test: fast_default
>  
>  # run stats by itself because its delay may be insufficient under heavy
> load
>  test: stats
> +
> +# global temp table test
> +test: gtt_stats
> +test: gtt_function
> +test: gtt_prepare
> +test: gtt_parallel_1 gtt_parallel_2
> +test: gtt_clean
> 
> 
> Tests that need to run in parallel should use either the isolation
> tester framework (which is explicitly for testing things concurrently)
> or the TAP test framework.
> 
> Adding six test files to the regression test suite for this one feature
> is not a good idea. You should have one regression test script ideally,
> and it should be added as appropriate to both the parallel and serial
> schedules (and not at the end). Any further tests should be added using
> the other frameworks mentioned.
You're right, it doesn't look good.
I'll reorganize the tests and put them in the proper place.


Wenjing.

> 
> 
> cheers
> 
> 
> andrew
> 
> 
> -- 
> 
> Andrew Dunstan
> EDB: https://www.enterprisedb.com
> 
> 





Re: [Proposal] Global temporary tables

2021-03-29 Thread 曾文旌


> On 2021-03-28 15:27, Pavel Stehule wrote:
> 
> Hi
> 
> On Wed, 2021-03-17 at 12:59, wenjing wrote:
> ok
> 
> The cause of the problem is that the name of the dependent function 
> (readNextTransactionID) has changed. I fixed it.
> 
> This patch (V43) is based on 9fd2952cf4920d563e9cea51634c5b364d57f71a
> 
> Wenjing
> 
> I tested this patch and make check-world fails
> 
> make[2]: Vstupuje se do adresáře 
> „/home/pavel/src/postgresql.master/src/test/recovery“
> rm -rf '/home/pavel/src/postgresql.master/src/test/recovery'/tmp_check
> /usr/bin/mkdir -p 
> '/home/pavel/src/postgresql.master/src/test/recovery'/tmp_check
> cd . && TESTDIR='/home/pavel/src/postgresql.master/src/test/recovery' 
> PATH="/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/master/bin:$PATH"
>  
> LD_LIBRARY_PATH="/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/master/lib"
>   PGPORT='65432' 
> PG_REGRESS='/home/pavel/src/postgresql.master/src/test/recovery/../../../src/test/regress/pg_regress'
>  
> REGRESS_SHLIB='/home/pavel/src/postgresql.master/src/test/regress/regress.so' 
> /usr/bin/prove -I ../../../src/test/perl/ -I .  t/*.pl
> t/001_stream_rep.pl  .. ok 
> t/002_archiving.pl  ... ok   
> t/003_recovery_targets.pl   ok   
> t/004_timeline_switch.pl  . ok   
> t/005_replay_delay.pl   ok   
> t/006_logical_decoding.pl   ok   
>   
> t/007_sync_rep.pl   ok 
> t/008_fsm_truncation.pl  .. ok   
> t/009_twophase.pl   ok 
> t/010_logical_decoding_timelines.pl 
>  .. ok 
> t/011_crash_recovery.pl  .. ok   
> t/012_subtransactions.pl  . ok
>  
> t/013_crash_restart.pl  ... ok 
> t/014_unlogged_reinit.pl  . ok
>  
> t/015_promotion_pages.pl  . ok   
> t/016_min_consistency.pl  . ok   
> t/017_shm.pl  . skipped: SysV 
> shared memory not supported by this platform
> t/018_wal_optimize.pl   ok 
> t/019_replslot_limit.pl  .. ok 
> t/020_archive_status.pl  .. ok 
> t/021_row_visibility.pl  .. ok 
> t/022_crash_temp_files.pl   1/9 
> #   Failed test 'one temporary file'
> #   at t/022_crash_temp_files.pl  line 231.
> #  got: '0'
> # expected: '1'
> t/022_crash_temp_files.pl   9/9 
> # Looks like you failed 1 test of 9.
> t/022_crash_temp_files.pl   
> Dubious, test returned 1 (wstat 256, 0x100)
> Failed 1/9 subtests 
> t/023_pitr_prepared_xact.pl  .. ok 
>   
> 
> Test Summary Report
> ---
> t/022_crash_temp_files.pl   (Wstat: 
> 256 Tests: 9 Failed: 1)
>   Failed test:  8
>   Non-zero exit status: 1
> Files=23, Tests=259, 115 wallclock secs ( 0.21 usr  0.06 sys + 28.57 cusr 
> 18.01 csys = 46.85 CPU)
> Result: FAIL
> make[2]: *** [Makefile:19: check] Chyba 1
> make[2]: Opouští se adresář 
> „/home/pavel/src/postgresql.master/src/test/recovery“
> make[1]: *** [Makefile:49: check-recovery-recurse] Chyba 2
> make[1]: Opouští se adresář „/home/pavel/src/postgresql.master/src/test“
> make: *** [GNUmakefile:71: check-world-src/test-recurse] Chyba 2

This is because part of the GTT logic duplicates the new commit
cd91de0d17952b5763466cfa663e98318f26d357,
committed by Tomas Vondra 11 days ago: "Remove Temporary Files after Backend
Crash".
That commit does exactly what GTT needs, or even better.
Therefore, I chose to delete the temporary-file cleanup logic from the GTT path.

Let me update a new version.


Wenjing

> 
> Regards
> 
> Pavel





Re: [Proposal] Global temporary tables

2021-03-29 Thread Pavel Stehule
On Sun, 2021-03-28 at 15:07, Andrew Dunstan wrote:

>
> On 3/17/21 7:59 AM, wenjing wrote:
> > ok
> >
> > The cause of the problem is that the name of the dependent function
> > (readNextTransactionID) has changed. I fixed it.
> >
> > This patch (V43) is based on 9fd2952cf4920d563e9cea51634c5b364d57f71a
> >
> > Wenjing
> >
> >
>
> I have fixed this patch so that
>
> a) it applies cleanly
>
> b) it uses project best practice for catalog Oid assignment.
>
> However, as noted elsewhere it fails the recovery TAP test.
>
> I also note this:
>
>
> diff --git a/src/test/regress/parallel_schedule
> b/src/test/regress/parallel_schedule
> index 312c11a4bd..d44fa62f4e 100644
> --- a/src/test/regress/parallel_schedule
> +++ b/src/test/regress/parallel_schedule
> @@ -129,3 +129,10 @@ test: fast_default
>
>  # run stats by itself because its delay may be insufficient under heavy
> load
>  test: stats
> +
> +# global temp table test
> +test: gtt_stats
> +test: gtt_function
> +test: gtt_prepare
> +test: gtt_parallel_1 gtt_parallel_2
> +test: gtt_clean
>
>
> Tests that need to run in parallel should use either the isolation
> tester framework (which is explicitly for testing things concurrently)
> or the TAP test framework.
>
> Adding six test files to the regression test suite for this one feature
> is not a good idea. You should have one regression test script ideally,
> and it should be added as appropriate to both the parallel and serial
> schedules (and not at the end). Any further tests should be added using
> the other frameworks mentioned.
>
>
* bad name of GTT-README - the convention is README.gtt

* Typo - "ofa"

2) Use beforeshmemexit to ensure that all files ofa session GTT are deleted
when
the session exits.

* Typo "nd"

3) GTT storage file cleanup during abnormal situations
When a backend exits abnormally (such as oom kill), the startup process
starts
recovery before accepting client connection. The same startup process checks
nd removes all GTT files before redo WAL.

* This comment is wrong

  /*
+ * Global temporary table is allowed to be dropped only when the
+ * current session is using it.
+ */
+ if (RELATION_IS_GLOBAL_TEMP(rel))
+ {
+ if (is_other_backend_use_gtt(RelationGetRelid(rel)))
+ ereport(ERROR,
+ (errcode(ERRCODE_DEPENDENT_OBJECTS_STILL_EXIST),
+ errmsg("cannot drop global temporary table %s when other backend attached
it.",
+ RelationGetRelationName(rel;
+ }

* same wrong comment

  /*
+ * Global temporary table is allowed to be dropped only when the
+ * current session is using it.
+ */
+ if (RELATION_IS_GLOBAL_TEMP(rel))
+ {
+ if (is_other_backend_use_gtt(RelationGetRelid(rel)))
+ ereport(ERROR,
+ (errcode(ERRCODE_DEPENDENT_OBJECTS_STILL_EXIST),
+ errmsg("cannot drop global temporary table %s when other backend attached
it.",
+ RelationGetRelationName(rel;
+ }

* typo "backand"

+/*
+ * Check if there are other backends using this GTT besides the current
backand.
+ */

There is no user documentation.

Regards

Pavel



>
> cheers
>
>
> andrew
>
>
> --
>
> Andrew Dunstan
> EDB: https://www.enterprisedb.com
>
>


Re: [Proposal] Global temporary tables

2021-03-28 Thread Andrew Dunstan

On 3/17/21 7:59 AM, wenjing wrote:
> ok
>
> The cause of the problem is that the name of the dependent function
> (readNextTransactionID) has changed. I fixed it.
>
> This patch (V43) is based on 9fd2952cf4920d563e9cea51634c5b364d57f71a
>
> Wenjing
>
>

I have fixed this patch so that

a) it applies cleanly

b) it uses project best practice for catalog Oid assignment.

However, as noted elsewhere it fails the recovery TAP test.

I also note this:


diff --git a/src/test/regress/parallel_schedule
b/src/test/regress/parallel_schedule
index 312c11a4bd..d44fa62f4e 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -129,3 +129,10 @@ test: fast_default
 
 # run stats by itself because its delay may be insufficient under heavy
load
 test: stats
+
+# global temp table test
+test: gtt_stats
+test: gtt_function
+test: gtt_prepare
+test: gtt_parallel_1 gtt_parallel_2
+test: gtt_clean


Tests that need to run in parallel should use either the isolation
tester framework (which is explicitly for testing things concurrently)
or the TAP test framework.

Adding six test files to the regression test suite for this one feature
is not a good idea. You should have one regression test script ideally,
and it should be added as appropriate to both the parallel and serial
schedules (and not at the end). Any further tests should be added using
the other frameworks mentioned.


cheers


andrew


-- 

Andrew Dunstan
EDB: https://www.enterprisedb.com



global_temporary_table_v44-pg14.patch.gz
Description: application/gzip


Re: [Proposal] Global temporary tables

2021-03-28 Thread Pavel Stehule
Hi

I wrote simple benchmarks. I checked the possible slowdown of connections
to postgres when GTT is used.

/usr/local/pgsql/master/bin/pgbench -c 10 -C -f script4.sql -t 1000

script has one line just with INSERT or SELECT LIMIT 1;
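
For reference, the exact script was not posted, so the table and row below are
my guesses; a one-line script4.sql for the insert variant would look roughly
like this (with the table created once beforehand):

```sql
-- created once before the run, e.g.:
--   CREATE GLOBAL TEMP TABLE gtt_bench (a int);

-- script4.sql, insert variant: one row per connect-and-run transaction
INSERT INTO gtt_bench VALUES (1);
```

The select variant is then a single `SELECT * FROM gtt_bench LIMIT 1;`, driven
by the `pgbench -c 10 -C -f script4.sql -t 1000` command above.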

PATCH
insert to global temp table (with connect) -- 349 tps (10 clients 443tps)
select from gtt (with connects) -- 370 tps (10 clients 446tps)
insert to normal table (with connect) - 115 tps (10 clients 417 tps)
select from normal table (with connect) -- 358 (10 clients 445 tps)

MASTER
insert to temp table (with connect) -- 58 tps (10 clients 352 tps) -- after
test pg_attribute bloated to 11MB
insert into normal table (with connect) -- 118 tps (10 clients 385)
select from normal table (with connect) -- 346 tps (10 clients 449)

The measurement doesn't show anything interesting - it is not possible to
see any impact of GTT usage on connect time.

It is interesting to see the overhead of local temp tables against global
temp tables - the performance is about 6x worse, and there is significant
bloat of the pg_attribute table. And the tested table had only one column.
So the idea and concept of global temp tables is very good, and the
implementation looks good (from a performance perspective).
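
The pg_attribute bloat with plain temp tables is easy to observe directly,
since every CREATE TEMP TABLE inserts fresh catalog rows that become dead once
the table is dropped. A rough check, using only standard catalog functions
(nothing patch-specific):

```sql
-- before the benchmark
SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_attribute'));

-- ... run the connect-per-transaction local-temp-table workload ...

-- after: with local temp tables the size grows (11MB in the run above);
-- with GTTs it should stay flat, since no catalog rows are created per session
SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_attribute'));
```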

I didn't check the code yet; I just tested the behaviour, and I think it is
very satisfactory for a first stage and first release. The patch is long now,
and for the first step it is good to stop at the implemented features.

Next steps should be supporting DDL on actively used GTTs. This topic is
pretty complex, and there are more possible scenarios. I think GTT behaviour
should be the same as the behaviour of normal tables (by default), but I see
advantages in other possibilities, so I don't want to open the discussion
about this topic now. The current implementation should not block any
possible implementations in the future.

Regards

Pavel


Re: [Proposal] Global temporary tables

2021-03-28 Thread Pavel Stehule
Hi

On Wed, 2021-03-17 at 12:59, wenjing wrote:

> ok
>
> The cause of the problem is that the name of the dependent function
> (readNextTransactionID) has changed. I fixed it.
>
> This patch (V43) is based on 9fd2952cf4920d563e9cea51634c5b364d57f71a
>
> Wenjing
>

I tested this patch and make check-world fails

make[2]: Vstupuje se do adresáře
„/home/pavel/src/postgresql.master/src/test/recovery“
rm -rf '/home/pavel/src/postgresql.master/src/test/recovery'/tmp_check
/usr/bin/mkdir -p
'/home/pavel/src/postgresql.master/src/test/recovery'/tmp_check
cd . && TESTDIR='/home/pavel/src/postgresql.master/src/test/recovery'
PATH="/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/master/bin:$PATH"
LD_LIBRARY_PATH="/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/master/lib"
 PGPORT='65432'
PG_REGRESS='/home/pavel/src/postgresql.master/src/test/recovery/../../../src/test/regress/pg_regress'
REGRESS_SHLIB='/home/pavel/src/postgresql.master/src/test/regress/regress.so'
/usr/bin/prove -I ../../../src/test/perl/ -I .  t/*.pl
t/001_stream_rep.pl .. ok
t/002_archiving.pl ... ok
t/003_recovery_targets.pl  ok
t/004_timeline_switch.pl . ok
t/005_replay_delay.pl  ok
t/006_logical_decoding.pl  ok
t/007_sync_rep.pl  ok
t/008_fsm_truncation.pl .. ok
t/009_twophase.pl  ok
t/010_logical_decoding_timelines.pl .. ok
t/011_crash_recovery.pl .. ok
t/012_subtransactions.pl . ok
t/013_crash_restart.pl ... ok
t/014_unlogged_reinit.pl . ok
t/015_promotion_pages.pl . ok
t/016_min_consistency.pl . ok
t/017_shm.pl . skipped: SysV shared memory not
supported by this platform
t/018_wal_optimize.pl  ok
t/019_replslot_limit.pl .. ok
t/020_archive_status.pl .. ok
t/021_row_visibility.pl .. ok
t/022_crash_temp_files.pl  1/9
#   Failed test 'one temporary file'
#   at t/022_crash_temp_files.pl line 231.
#  got: '0'
# expected: '1'
t/022_crash_temp_files.pl  9/9 # Looks like you failed 1 test
of 9.
t/022_crash_temp_files.pl  Dubious, test returned 1 (wstat 256,
0x100)
Failed 1/9 subtests
t/023_pitr_prepared_xact.pl .. ok

Test Summary Report
---
t/022_crash_temp_files.pl  (Wstat: 256 Tests: 9 Failed: 1)
  Failed test:  8
  Non-zero exit status: 1
Files=23, Tests=259, 115 wallclock secs ( 0.21 usr  0.06 sys + 28.57 cusr
18.01 csys = 46.85 CPU)
Result: FAIL
make[2]: *** [Makefile:19: check] Chyba 1
make[2]: Opouští se adresář
„/home/pavel/src/postgresql.master/src/test/recovery“
make[1]: *** [Makefile:49: check-recovery-recurse] Chyba 2
make[1]: Opouští se adresář „/home/pavel/src/postgresql.master/src/test“
make: *** [GNUmakefile:71: check-world-src/test-recurse] Chyba 2

Regards

Pavel


Re: [Proposal] Global temporary tables

2021-03-16 Thread Pavel Stehule
Hi

this patch is broken now. Please, can you check it?

Regards

Pavel


On Wed, 2020-11-25 at 14:08, 曾文旌 (wenjing zeng) wrote:

>
>
> On 2020-11-25 14:19, Pavel Stehule wrote:
>
>
>
> On Mon, 2020-11-23 at 10:27, 曾文旌 (wenjing zeng) wrote:
>
>>
>>
>> On 2020-11-21 02:28, Pavel Stehule wrote:
>>
>> Hi
>>
>> On Fri, 2020-09-11 at 17:00, 曾文旌 (wenjing zeng) wrote:
>>
>>> I have written the README for the GTT, which contains the GTT
>>> requirements and design.
>>> I found that compared to my first email a year ago, many GTT Limitations
>>> are now gone.
>>> Now, I'm adding comments to some of the necessary functions.
>>>
>>
>> There are problems with patching. Please, can you rebase your patch?
>>
>> Sure.
>> I'm still working on sort code and comments.
>> If you have any suggestions, please let me know.
>>
>
> It is broken again
>
> There is bad white space
>
> +   /*
> +* For global temp table only
> +* use ShareUpdateExclusiveLock for ensure safety
> +*/
> +   {
> +   {
> +   "on_commit_delete_rows",
> +   "global temp table on commit options",
> +   RELOPT_KIND_HEAP | RELOPT_KIND_PARTITIONED,
> +   ShareUpdateExclusiveLock
> +   },
> +   true
> +   },  <=
> /* list terminator */
> {{NULL}}
>
> +7 OTHERS
> +Parallel query
> +Planner does not produce parallel query plans for SQL related to GTT.
> Because <=
> +GTT private data cannot be accessed across processes.
> diff --git a/src/backend/catalog/Makefile b/src/backend/catalog/Makefile
>
>
> +/*
> + * Update global temp table relstats(relpage/reltuple/relallvisible)
> <
> + * to local hashtable
> + */
> +void
>
> +/*
> + * Search global temp table relstats(relpage/reltuple/relallvisible)
> <==
> + * from lo
>
> and there are lot of more places ...
>
> I found other issue
>
> postgres=# create global temp table foo(a int);
> CREATE TABLE
> postgres=# create index on foo(a);
> CREATE INDEX
>
> close session and in new session
>
> postgres=# reindex index foo_a_idx ;
> WARNING:  relcache reference leak: relation "foo" not closed
> REINDEX
>
>
> I fixed all the above issues and rebased the code.
> Please review the new version again.
>
>
> Wenjing
>
>
>
>
> Regards
>
> Pavel
>
>
>
>>
>> Wenjing
>>
>>
>>
>> Regards
>>
>> Pavel
>>
>>
>>>
>>> Wenjing
>>>
>>>
>>>
>>>
>>>
>>> > On 2020-07-31 04:57, Robert Haas wrote:
>>> >
>>> > On Thu, Jul 30, 2020 at 8:09 AM wenjing zeng 
>>> wrote:
>>> >> Please continue to review the code.
>>> >
>>> > This patch is pretty light on comments. Many of the new functions have
>>> > no header comments, for example. There are comments here and there in
>>> > the body of the new functions that are added, and in places where
>>> > existing code is changed there are comments here and there, but
>>> > overall it's not a whole lot. There's no documentation and no README,
>>> > either. Since this adds a new feature and a bunch of new SQL-callable
>>> > functions that interact with that feature, the feature itself should
>>> > be documented, along with its limitations and the new SQL-callable
>>> > functions that interact with it. I think there should be either a
>>> > lengthy comment in some suitable file, or maybe various comments in
>>> > various files, or else a README file, that clearly sets out the major
>>> > design principles behind the patch, and explaining also what that
>>> > means in terms of features and limitations. Without that, it's really
>>> > hard for anyone to jump into reviewing this code, and it will be hard
>>> > for people who have to maintain it in the future to understand it,
>>> > either. Or for users, for that matter.
>>> >
>>> > --
>>> > Robert Haas
>>> > EnterpriseDB: http://www.enterprisedb.com
>>> > The Enterprise PostgreSQL Company
>>>
>>>
>>
>


Re: [Proposal] Global temporary tables

2021-02-24 Thread Alexandre Arruda
Sorry, I hadn't checked out the base commit for the patch.
It compiles OK now.

Best regards

Re: [Proposal] Global temporary tables

2021-02-24 Thread Alexandre Arruda
Hi,

I tried to apply the last patch (global_temporary_table_v42-pg14.patch) to the 
code cloned from GitHub, and it ends with this error:

/usr/bin/ld: catalog/storage_gtt.o: in function `up_gtt_relstats':
storage_gtt.c:(.text+0x1276): undefined reference to `ReadNewTransactionId'
collect2: error: ld returned 1 exit status
make[2]: *** [Makefile:66: postgres] Error 1
make[2]: Leaving directory '/root/downloads/postgres/src/backend'
make[1]: *** [Makefile:42: all-backend-recurse] Error 2
make[1]: Leaving directory '/root/downloads/postgres/src'
make: *** [GNUmakefile:11: all-src-recurse] Error 2

Anyway, what's the next step to see this very important feature included in
v14?

Best regards,
Alexandre

Re: [Proposal] Global temporary tables

2021-01-25 Thread Pavel Stehule
Hi

On Tue, 2020-12-22 at 04:20, wenjing wrote:

> Hi all
>
> I rebased the patch, fixed OID conflicts, and fixed some comments.
> The code is updated to v41.
>

Please, can you rebase it?

Regards

Pavel


>
> Wenjing
>
>


Re: [Proposal] Global temporary tables

2020-12-09 Thread Pavel Stehule
On Wed, 2020-12-09 at 07:34, 曾文旌 (wenjing zeng) wrote:

>
>
> On 2020-12-08 13:28, Pavel Stehule wrote:
>
> Hi
>
> On Thu, 2020-11-26 at 12:46, 曾文旌 (wenjing zeng) wrote:
>
>> This is the latest patch for the GTT feature (v38).
>> Everybody, if you have any questions, please let me know.
>>
>
> Please, could you send a rebased patch? It is broken again.
>
>
> Sure
>
> This patch is based on 16c302f51235eaec05a1f85a11c1df04ef3a6785
> Simplify code for getting a unicode codepoint's canonical class.
>
>
Thank you

Pavel

>
> Wenjing
>
>
> Regards
>
> Pavel
>
>
>>
>> Wenjing
>>
>>
>>
>


Re: [Proposal] Global temporary tables

2020-12-07 Thread Pavel Stehule
Hi

On Thu, 2020-11-26 at 12:46, 曾文旌 (wenjing zeng) wrote:

> This is the latest patch for the GTT feature (v38).
> Everybody, if you have any questions, please let me know.
>

Please, could you send a rebased patch? It is broken again.

Regards

Pavel


>
> Wenjing
>
>
>


Re: [Proposal] Global temporary tables

2020-11-26 Thread Magnus Hagander
On Thu, Nov 26, 2020 at 11:16 AM Julien Rouhaud  wrote:
>
> On Thu, Nov 26, 2020 at 4:05 PM 曾文旌  wrote:
> >
> > I found that the new Patch mail failed to register to Commitfest
> > https://commitfest.postgresql.org/28/2349/#
> > I don't know what's wrong or how to check it.
> > Could you help me figure it out?
>
> Apparently the attachment in
> https://www.postgresql.org/message-id/a3f1ebd9-e694-4384-8049-37b093084...@alibaba-inc.com
> wasn't detected.  I have no idea why, maybe Magnus will know.
> Otherwise you could try to ask on -www.

Not offhand. The email appears to have a fairly complex nested MIME
structure, so something in the Python library that parses the MIME
decides that the attachment isn't there. For some reason the email is 7
parts: 1 is the signature, and the rest are complexly nested, with the
attachment squeezed in between two different HTML parts.

Basically, at the top it's multipart/alternative, which says there are
two choices. One is text/plain, which is what the archives use. The
other is a combination of text/html followed by
application/octet-stream (the patch) followed by another text/html.

The archives pick the first alternative, which is text/plain and
does not contain the attachment. The attachment only exists in the
HTML view.

I think the easiest solution is to re-send as plain text email with
the attachment, which would then put the attachment on the email
itself instead of embedded in the HTML, I would guess.
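[Editor's note] The nesting described above can be reproduced with Python's stdlib email package. The sketch below (a hypothetical reconstruction, not the archives' actual parser; the filename gtt_v42.patch is made up) shows why a consumer that takes only the first (text/plain) alternative never sees the attachment:

```python
# Build the MIME layout described above: multipart/alternative whose first
# choice is plain text and whose second choice hides the patch between two
# HTML parts.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

outer = MIMEMultipart("alternative")
outer["Subject"] = "Re: [Proposal] Global temporary tables"

# Choice 1: plain text, no attachment -- what the archives pick.
outer.attach(MIMEText("patch attached (not in this part)", "plain"))

# Choice 2: text/html, then the patch, then text/html again.
rich = MIMEMultipart("mixed")
rich.attach(MIMEText("<p>see attached</p>", "html"))
patch = MIMEApplication(b"diff --git a/x b/x\n", "octet-stream")
patch.add_header("Content-Disposition", "attachment", filename="gtt_v42.patch")
rich.attach(patch)
rich.attach(MIMEText("<p>regards</p>", "html"))
outer.attach(rich)

# A consumer that uses only the first alternative sees no attachment:
first_alt = outer.get_payload(0)
print(first_alt.get_content_type())  # text/plain

# The attachment exists only inside the second (HTML) alternative:
attachments = [p.get_filename() for p in outer.walk() if p.get_filename()]
print(attachments)  # ['gtt_v42.patch']
```

Re-sending as plain text with the file attached at the top level, as suggested above, makes the attachment visible in every alternative a parser might pick.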

-- 
 Magnus Hagander
 Me: https://www.hagander.net/
 Work: https://www.redpill-linpro.com/




Re: [Proposal] Global temporary tables

2020-11-26 Thread Julien Rouhaud
On Thu, Nov 26, 2020 at 4:05 PM 曾文旌  wrote:
>
> I found that the new Patch mail failed to register to Commitfest
> https://commitfest.postgresql.org/28/2349/#
> I don't know what's wrong or how to check it.
> Could you help me figure it out?

Apparently the attachment in
https://www.postgresql.org/message-id/a3f1ebd9-e694-4384-8049-37b093084...@alibaba-inc.com
wasn't detected.  I have no idea why, maybe Magnus will know.
Otherwise you could try to ask on -www.




Re: [Proposal] Global temporary tables

2020-11-26 Thread 曾文旌

I found that the new patch mail failed to register with the Commitfest
https://commitfest.postgresql.org/28/2349/# 

I don't know what's wrong or how to check it.
Could you help me figure it out?



> On Nov 25, 2020 at 14:19, Pavel Stehule wrote:
> 
> 
> 
> On Mon, Nov 23, 2020 at 10:27, 曾文旌 wrote:
> 
> 
>> On Nov 21, 2020 at 02:28, Pavel Stehule wrote:
>> 
>> Hi
>> 
>> On Fri, Sep 11, 2020 at 17:00, 曾文旌 wrote:
>> I have written the README for the GTT, which contains the GTT requirements 
>> and design.
>> I found that compared to my first email a year ago, many GTT Limitations are 
>> now gone.
>> Now, I'm adding comments to some of the necessary functions.
>> 
>> There are problems with patching. Please, can you rebase your patch?
> Sure.
> I'm still working on sort code and comments.
> If you have any suggestions, please let me know.
> 
> It is broken again 
> 
> There is bad white space
> 
> +   /*
> +* For global temp table only
> +* use ShareUpdateExclusiveLock for ensure safety
> +*/
> +   {
> +   {
> +   "on_commit_delete_rows",
> +   "global temp table on commit options",
> +   RELOPT_KIND_HEAP | RELOPT_KIND_PARTITIONED,
> +   ShareUpdateExclusiveLock
> +   },
> +   true
> +   },  <=
> /* list terminator */
> {{NULL}}
> 
> +7 OTHERS
> +Parallel query
> +Planner does not produce parallel query plans for SQL related to GTT. 
> Because <=
> +GTT private data cannot be accessed across processes.
> diff --git a/src/backend/catalog/Makefile b/src/backend/catalog/Makefile
> 
> 
> +/*
> + * Update global temp table relstats(relpage/reltuple/relallvisible) 
> <
> + * to local hashtable
> + */
> +void
> 
> +/*
> + * Search global temp table relstats(relpage/reltuple/relallvisible) 
> <==
> + * from lo
> 
> and there are lot of more places ...
> 
> I found other issue
> 
> postgres=# create global temp table foo(a int);
> CREATE TABLE
> postgres=# create index on foo(a);
> CREATE INDEX
> 
> close session and in new session
> 
> postgres=# reindex index foo_a_idx ;
> WARNING:  relcache reference leak: relation "foo" not closed
> REINDEX
> 
> Regards
> 
> Pavel
> 
> 
> 
> 
> Wenjing
> 
> 
>> 
>> Regards
>> 
>> Pavel
>> 
>> 
>> 
>> Wenjing
>> 
>> 
>> 
>> 
>> 
>> > On Jul 31, 2020 at 4:57 AM, Robert Haas wrote:
>> > 
>> > On Thu, Jul 30, 2020 at 8:09 AM wenjing zeng > > > wrote:
>> >> Please continue to review the code.
>> > 
>> > This patch is pretty light on comments. Many of the new functions have
>> > no header comments, for example. There are comments here and there in
>> > the body of the new functions that are added, and in places where
>> > existing code is changed there are comments here and there, but
>> > overall it's not a whole lot. There's no documentation and no README,
>> > either. Since this adds a new feature and a bunch of new SQL-callable
>> > functions that interact with that feature, the feature itself should
>> > be documented, along with its limitations and the new SQL-callable
>> > functions that interact with it. I think there should be either a
>> > lengthy comment in some suitable file, or maybe various comments in
>> > various files, or else a README file, that clearly sets out the major
>> > design principles behind the patch, and explaining also what that
>> > means in terms of features and limitations. Without that, it's really
>> > hard for anyone to jump into reviewing this code, and it will be hard
>> > for people who have to maintain it in the future to understand it,
>> > either. Or for users, for that matter.
>> > 
>> > -- 
>> > Robert Haas
>> > EnterpriseDB: http://www.enterprisedb.com 
>> > The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-11-24 Thread Pavel Stehule
On Mon, Nov 23, 2020 at 10:27, 曾文旌 wrote:

>
>
> On Nov 21, 2020 at 02:28, Pavel Stehule wrote:
>
> Hi
>
> On Fri, Sep 11, 2020 at 17:00, 曾文旌 wrote:
>
>> I have written the README for the GTT, which contains the GTT
>> requirements and design.
>> I found that compared to my first email a year ago, many GTT Limitations
>> are now gone.
>> Now, I'm adding comments to some of the necessary functions.
>>
>
> There are problems with patching. Please, can you rebase your patch?
>
> Sure.
> I'm still working on sort code and comments.
> If you have any suggestions, please let me know.
>

It is broken again

There is bad white space

+   /*
+* For global temp table only
+* use ShareUpdateExclusiveLock for ensure safety
+*/
+   {
+   {
+   "on_commit_delete_rows",
+   "global temp table on commit options",
+   RELOPT_KIND_HEAP | RELOPT_KIND_PARTITIONED,
+   ShareUpdateExclusiveLock
+   },
+   true
+   },  <=
/* list terminator */
{{NULL}}

+7 OTHERS
+Parallel query
+Planner does not produce parallel query plans for SQL related to GTT.
Because <=
+GTT private data cannot be accessed across processes.
diff --git a/src/backend/catalog/Makefile b/src/backend/catalog/Makefile


+/*
+ * Update global temp table relstats(relpage/reltuple/relallvisible)
<
+ * to local hashtable
+ */
+void

+/*
+ * Search global temp table relstats(relpage/reltuple/relallvisible)
<==
+ * from lo

and there are lot of more places ...

I found other issue

postgres=# create global temp table foo(a int);
CREATE TABLE
postgres=# create index on foo(a);
CREATE INDEX

close session and in new session

postgres=# reindex index foo_a_idx ;
WARNING:  relcache reference leak: relation "foo" not closed
REINDEX

Regards

Pavel



>
> Wenjing
>
>
>
> Regards
>
> Pavel
>
>
>>
>> Wenjing
>>
>>
>>
>>
>>
>> > On Jul 31, 2020 at 4:57 AM, Robert Haas wrote:
>> >
>> > On Thu, Jul 30, 2020 at 8:09 AM wenjing zeng 
>> wrote:
>> >> Please continue to review the code.
>> >
>> > This patch is pretty light on comments. Many of the new functions have
>> > no header comments, for example. There are comments here and there in
>> > the body of the new functions that are added, and in places where
>> > existing code is changed there are comments here and there, but
>> > overall it's not a whole lot. There's no documentation and no README,
>> > either. Since this adds a new feature and a bunch of new SQL-callable
>> > functions that interact with that feature, the feature itself should
>> > be documented, along with its limitations and the new SQL-callable
>> > functions that interact with it. I think there should be either a
>> > lengthy comment in some suitable file, or maybe various comments in
>> > various files, or else a README file, that clearly sets out the major
>> > design principles behind the patch, and explaining also what that
>> > means in terms of features and limitations. Without that, it's really
>> > hard for anyone to jump into reviewing this code, and it will be hard
>> > for people who have to maintain it in the future to understand it,
>> > either. Or for users, for that matter.
>> >
>> > --
>> > Robert Haas
>> > EnterpriseDB: http://www.enterprisedb.com
>> > The Enterprise PostgreSQL Company
>>
>>
>


Re: [Proposal] Global temporary tables

2020-11-20 Thread Pavel Stehule
Hi

On Fri, Sep 11, 2020 at 17:00, 曾文旌 wrote:

> I have written the README for the GTT, which contains the GTT requirements
> and design.
> I found that compared to my first email a year ago, many GTT Limitations
> are now gone.
> Now, I'm adding comments to some of the necessary functions.
>

There are problems with patching. Please, can you rebase your patch?

Regards

Pavel


>
> Wenjing
>
>
>
>
>
> > On Jul 31, 2020 at 4:57 AM, Robert Haas wrote:
> >
> > On Thu, Jul 30, 2020 at 8:09 AM wenjing zeng 
> wrote:
> >> Please continue to review the code.
> >
> > This patch is pretty light on comments. Many of the new functions have
> > no header comments, for example. There are comments here and there in
> > the body of the new functions that are added, and in places where
> > existing code is changed there are comments here and there, but
> > overall it's not a whole lot. There's no documentation and no README,
> > either. Since this adds a new feature and a bunch of new SQL-callable
> > functions that interact with that feature, the feature itself should
> > be documented, along with its limitations and the new SQL-callable
> > functions that interact with it. I think there should be either a
> > lengthy comment in some suitable file, or maybe various comments in
> > various files, or else a README file, that clearly sets out the major
> > design principles behind the patch, and explaining also what that
> > means in terms of features and limitations. Without that, it's really
> > hard for anyone to jump into reviewing this code, and it will be hard
> > for people who have to maintain it in the future to understand it,
> > either. Or for users, for that matter.
> >
> > --
> > Robert Haas
> > EnterpriseDB: http://www.enterprisedb.com
> > The Enterprise PostgreSQL Company
>
>


Re: [Proposal] Global temporary tables

2020-08-07 Thread movead...@highgo.ca

>I find this is the latest mail with an attachment, so I test and reply on
>this thread; several points below:

>1. I notice it produces a new relfilenode when a new session logs in and
>inserts some data, but the relfilenode column in pg_class still shows the one
>from when the global temp table was created. I think you could show 0 in this
>column, as the nailed relations do.
>I think getting the GTT to have a default relfilenode looks closer to the 
>existing implementation, and setting it to 0 requires extra work and has no 
>clear benefit.
>What do you think?
>I'd like to know the reasons for your suggestion.
The 'relfilenode' means the file number on disk, which differs from the OID of a 
relation; the default one is wrong for a GTT, so I think it's not good to show it in 
pg_class.

>2. The nailed relations handle their relfilenodes via the RelMapFile struct, and this
>patch uses a hash entry and relfilenode_list; maybe the RelMapFile approach is more
>understandable, in my opinion. Sorry if I missed the real design there.
>We can see the STORAGE and statistics info for the GTT, including relfilenode, 
>through view pg_gtt_relstats

postgres=# \d gtt
              Table "public.gtt"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 a      | integer |           |          |
 b      | integer |           |          |

postgres=# insert into gtt values(1,1);
INSERT 0 1
postgres=# select * from pg_gtt_relstats ;
 schemaname | tablename | relfilenode | relpages | reltuples | relallvisible | relfrozenxid | relminmxid
------------+-----------+-------------+----------+-----------+---------------+--------------+------------
 public     | gtt       |       16384 |        0 |         0 |             0 |          532 |          1
(1 row)

postgres=# truncate gtt;
TRUNCATE TABLE
postgres=# select * from pg_gtt_relstats ;
 schemaname | tablename | relfilenode | relpages | reltuples | relallvisible | relfrozenxid | relminmxid
------------+-----------+-------------+----------+-----------+---------------+--------------+------------
 public     | gtt       |       16387 |        0 |         0 |             0 |          533 |          1
(1 row)


I just suggested an approach that might fit the existing code structure most 
naturally; it's up to you.


>3. I get a wrong result from the pg_relation_filepath() function for a global temp 
>table;
>I think it's necessary to keep this output correct.

postgres=# select pg_relation_filepath(oid) from pg_class where relname = 'gtt';
 pg_relation_filepath 
--
 base/13835/t3_16384
(1 row)

I didn't find anything wrong. Could you please give me a demo.

In my opinion it should show 'base/13835/t3_16387' rather than 
'base/13835/t3_16384',
because the relfilenode changed to 16387 when you truncated it in step 2.

>4. In the gtt_search_by_relid() function, the missing_ok argument is not handled
>when gtt_storage_local_hash is null. There should be a comment if this is the
>intended behavior.
>This is a problem that has been fixed in global_temporary_table_v34-pg13.patch.
Sorry about that, I cannot find it in the mail thread; maybe I missed something. 
The mail thread is so long that I think it would be better to start a new one.



Regards,
Highgo Software (Canada/China/Pakistan) 
URL : www.highgo.ca 
EMAIL: mailto:movead(dot)li(at)highgo(dot)ca


Re: [Proposal] Global temporary tables

2020-08-07 Thread 曾文旌


> On Jul 31, 2020 at 4:57 AM, Robert Haas wrote:
> 
> On Thu, Jul 30, 2020 at 8:09 AM wenjing zeng  wrote:
>> Please continue to review the code.
> 
> This patch is pretty light on comments. Many of the new functions have
> no header comments, for example. There are comments here and there in
> the body of the new functions that are added, and in places where
> existing code is changed there are comments here and there, but
> overall it's not a whole lot. There's no documentation and no README,
> either. Since this adds a new feature and a bunch of new SQL-callable
> functions that interact with that feature, the feature itself should
> be documented, along with its limitations and the new SQL-callable
> functions that interact with it. I think there should be either a
> lengthy comment in some suitable file, or maybe various comments in
> various files, or else a README file, that clearly sets out the major
> design principles behind the patch, and explaining also what that
> means in terms of features and limitations. Without that, it's really
> hard for anyone to jump into reviewing this code, and it will be hard
> for people who have to maintain it in the future to understand it,
> either. Or for users, for that matter.
Your suggestion is to the point. The patch does lack a lot of comments, which are 
necessary. I'll add them.


Wenjing



> 
> -- 
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-08-07 Thread 曾文旌
Thank you very much for reviewing this patch.
This is very important for improving the GTT.

> On Aug 3, 2020 at 3:09 PM, movead...@highgo.ca wrote:
> 
> 
> >Fixed in global_temporary_table_v29-pg13.patch
> >Please check.
> 
> I find this is the latest mail with an attachment, so I test and reply on
> this thread; several points below:
> 
> 1. I notice it produces a new relfilenode when a new session logs in and
> inserts some data, but the relfilenode column in pg_class still shows the one
> from when the global temp table was created. I think you could show 0 in this
> column, as the nailed relations do.
I think getting the GTT to have a default relfilenode looks closer to the 
existing implementation, and setting it to 0 requires extra work and has no 
clear benefit.
What do you think?
I'd like to know the reasons for your suggestion.

> 
> 2. The nailed relations handle their relfilenodes via the RelMapFile struct, and this
> patch uses a hash entry and relfilenode_list; maybe the RelMapFile approach is more
> understandable, in my opinion. Sorry if I missed the real design there.
We can see the STORAGE and statistics info for the GTT, including relfilenode, 
through view pg_gtt_relstats

postgres=# \d gtt
              Table "public.gtt"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 a      | integer |           |          |
 b      | integer |           |          |

postgres=# insert into gtt values(1,1);
INSERT 0 1
postgres=# select * from pg_gtt_relstats ;
 schemaname | tablename | relfilenode | relpages | reltuples | relallvisible | relfrozenxid | relminmxid
------------+-----------+-------------+----------+-----------+---------------+--------------+------------
 public     | gtt       |       16384 |        0 |         0 |             0 |          532 |          1
(1 row)

postgres=# truncate gtt;
TRUNCATE TABLE
postgres=# select * from pg_gtt_relstats ;
 schemaname | tablename | relfilenode | relpages | reltuples | relallvisible | relfrozenxid | relminmxid
------------+-----------+-------------+----------+-----------+---------------+--------------+------------
 public     | gtt       |       16387 |        0 |         0 |             0 |          533 |          1
(1 row)

> 
> 3. I get a wrong result from the pg_relation_filepath() function for a global
> temp table;
> I think it's necessary to keep this output correct.

postgres=# select pg_relation_filepath(oid) from pg_class where relname = 'gtt';
 pg_relation_filepath 
--
 base/13835/t3_16384
(1 row)

I didn't find anything wrong. Could you please give me a demo.

> 
> 4. In the gtt_search_by_relid() function, the missing_ok argument is not
> handled when gtt_storage_local_hash is null. There should be a comment if
> this is the intended behavior.
This is a problem that has been fixed in global_temporary_table_v34-pg13.patch.

> 
> 5. It's a long patch and hard to review. I think it would be pretty good if it
> could be divided into several subpatches with relatively independent
> subfunctions.
Thank you for your suggestion, and I am considering doing so, including adding 
comments.


Wenjing

> 
> Regards,
> Highgo Software (Canada/China/Pakistan) 
> URL : www.highgo.ca  
> EMAIL: mailto:movead(dot)li(at)highgo(dot)ca





Re: [Proposal] Global temporary tables

2020-08-03 Thread movead...@highgo.ca

>Fixed in global_temporary_table_v29-pg13.patch
>Please check.

I find this is the latest mail with an attachment, so I test and reply on
this thread; several points below:

1. I notice it produces a new relfilenode when a new session logs in and
inserts some data, but the relfilenode column in pg_class still shows the one
from when the global temp table was created. I think you could show 0 in this
column, as the nailed relations do.

2. The nailed relations handle their relfilenodes via the RelMapFile struct, and this
patch uses a hash entry and relfilenode_list; maybe the RelMapFile approach is more
understandable, in my opinion. Sorry if I missed the real design there.

3. I get a wrong result from the pg_relation_filepath() function for a global temp
table;
I think it's necessary to keep this output correct.

4. In the gtt_search_by_relid() function, the missing_ok argument is not handled
when gtt_storage_local_hash is null. There should be a comment if this is the
intended behavior.

5. It's a long patch and hard to review. I think it would be pretty good if it
could be divided into several subpatches with relatively independent
subfunctions.
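[Editor's note] The bookkeeping question in point 2 can be illustrated with a toy sketch (Python here for brevity; the patch itself is C inside PostgreSQL, and the class and OID values below are made up): each backend maps a GTT's pg_class OID to the backend-private file number actually holding its data, which is why pg_class's creation-time relfilenode stops matching per-session storage.

```python
# Toy model of per-session relfilenode bookkeeping for a GTT (hypothetical
# names; not the patch's actual data structure).
class SessionRelfilenodeMap:
    def __init__(self):
        self._map = {}  # reloid -> relfilenode, private to this "session"

    def assign(self, reloid, relfilenode):
        # Called on first write / TRUNCATE: record the session-local file.
        self._map[reloid] = relfilenode

    def lookup(self, reloid, default):
        # pg_class keeps the creation-time relfilenode (the 'default' here);
        # the session map overrides it once this session has its own storage.
        return self._map.get(reloid, default)

# Two sessions using the same GTT (pg_class relfilenode = 16384):
s1, s2 = SessionRelfilenodeMap(), SessionRelfilenodeMap()
s1.assign(16400, 16385)   # session 1 writes, gets its own file
s2.assign(16400, 16386)   # session 2 writes, gets a different file
print(s1.lookup(16400, 16384), s2.lookup(16400, 16384))  # 16385 16386
```

The RelMapFile approach would keep one fixed-size map per backend instead of a hash table; either way, the per-session number diverges from the catalog column, which is the discrepancy points 1 and 3 are about.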



Regards,
Highgo Software (Canada/China/Pakistan) 
URL : www.highgo.ca 
EMAIL: mailto:movead(dot)li(at)highgo(dot)ca


Re: [Proposal] Global temporary tables

2020-07-30 Thread Robert Haas
On Thu, Jul 30, 2020 at 8:09 AM wenjing zeng  wrote:
> Please continue to review the code.

This patch is pretty light on comments. Many of the new functions have
no header comments, for example. There are comments here and there in
the body of the new functions that are added, and in places where
existing code is changed there are comments here and there, but
overall it's not a whole lot. There's no documentation and no README,
either. Since this adds a new feature and a bunch of new SQL-callable
functions that interact with that feature, the feature itself should
be documented, along with its limitations and the new SQL-callable
functions that interact with it. I think there should be either a
lengthy comment in some suitable file, or maybe various comments in
various files, or else a README file, that clearly sets out the major
design principles behind the patch, and explaining also what that
means in terms of features and limitations. Without that, it's really
hard for anyone to jump into reviewing this code, and it will be hard
for people who have to maintain it in the future to understand it,
either. Or for users, for that matter.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [Proposal] Global temporary tables

2020-07-23 Thread Pavel Stehule
> I am thinking about explicit LOCK statements. Some applications use
> explicit locking for various reasons - typically as protection against race
> conditions.
>
> But on a GTT, race conditions are not possible. So my question is - does the
> exclusive lock on a GTT prevent other sessions from inserting into their own
> instances of the same GTT?
>
> In my opinion, a GTT always works on the session's private data; there is
> no need to do anything while holding the lock, so the LOCK statement should
> do nothing (the same is true for Oracle's GTT).
>
> What do you think?
>
>
> At what level are table locks active - the shared part of the GTT, or the
> session-instance part of the GTT?
>
> I don't quite understand what you mean, could you explain it a little bit?
>

It is a matter of perspective - how should we see GTT tables? A GTT is a mix
of two concepts: session-private data and a session-shared catalog entry. And
hypothetically we can place locks on the private part (no effect) or on the
shared part (the usual effect as we know it). Both can make sense, and both
have advantages and disadvantages. I am a little afraid of the behaviour of
naive ORM systems - but the most important part of a table is its data - so I
prefer the empty lock implementation for GTT.

Regards

Pavel



>
>
> Wenjing
>
>
>
>
>
>
>
>>
>> Wenjing
>>
>>
>>
>> Now, table locks are implemented on a global level. So exclusive lock on
>> GTT in one session block insertion on the second session. Is it expected
>> behaviour? It is safe, but maybe it is too strict.
>>
>> We should define what table lock is meaning on GTT.
>>
>> Regards
>>
>> Pavel
>>
>>
>>> Pavel
>>>
>>>
 With Regards,
 Prabhat Kumar Sahu
 EnterpriseDB: http://www.enterprisedb.com



>>>
>>
>


Re: [Proposal] Global temporary tables

2020-07-22 Thread wenjing zeng


> On Jul 14, 2020 at 10:28 PM, Pavel Stehule wrote:
> 
> 
> 
> On Fri, Jul 10, 2020 at 11:04, wenjing zeng wrote:
> HI all
> 
> I started using my personal email to respond to community issue.
> 
> 
> 
>> On Jul 7, 2020 at 6:05 PM, Pavel Stehule wrote:
>> 
>> Hi
>>  
>> GTT Merge the latest PGMaster and resolves conflicts.
>> 
>> 
>> 
>> I tested it and it looks fine. I think it is very usable in current form, 
>> but still there are some issues:
>> 
>> postgres=# create global temp table foo(a int);
>> CREATE TABLE
>> postgres=# insert into foo values(10);
>> INSERT 0 1
>> postgres=# alter table foo add column x int;
>> ALTER TABLE
>> postgres=# analyze foo;
>> WARNING:  reloid 16400 not support update attstat after add colunm
>> WARNING:  reloid 16400 not support update attstat after add colunm
>> ANALYZE
> This is a limitation that we can completely eliminate.
> 
>> 
>> Please, can you summarize what is done, what limits are there, what can be 
>> implemented hard, what can be implemented easily?
> Sure.
> 
> The current version of the GTT implementation supports all regular table 
> operations.
> 1 what is done
> 1.1 insert/update/delete on GTT.
> 1.2 The GTT supports all types of indexes, and the query statement supports 
> the use of GTT indexes to speed up the reading of data in the GTT.
> 1.3 GTT statistics keep a copy of the GTT local statistics, which are 
> provided to the optimizer to choose the best query plan.
> 1.4 analyze vacuum GTT.
> 1.5 truncate cluster GTT.
> 1.6 all DDL on GTT.
> 1.7 GTT table can use  GTT sequence  or Regular sequence.
> 1.8 Support for creating views on GTT.
> 1.9 Support for creating views on foreign key.
> 1.10 support global temp partition.
> 
> I feel like I cover all the necessary GTT requirements.
> 
> For CLUSTER on GTT, I think it's complicated.
> I'm not sure the current implementation is quite reasonable. Maybe you can 
> help review it.
> 
> 
>> 
>> 
>> 
>> I found one open question - how can be implemented table locks - because 
>> data is physically separated, then we don't need table locks as protection 
>> against race conditions. 
> Yes, but GTT DML and DDL still require table locking.
> 1 The DML requires table locks (RowExclusiveLock) to ensure that 
> definitions do not change during run time (the DDL may modify or delete them).
> This part of the implementation does not actually change the code,
> because the DML on GTT does not block each other between sessions.
> 
> 2 truncate/analyze/vacuum/reindex/cluster on GTT are now like DML: 
> they only modify local data and do not modify the GTT definition.
> So I lowered the table lock level held by the GTT; only RowExclusiveLock is needed.
> 
> 3 For DDL that needs to modify the GTT table definition (DROP GTT, ALTER 
> GTT), 
> an exclusive table lock is required (AccessExclusiveLock), 
> as is the case for regular tables.
> This part of the implementation also does not actually change the code.
> 
> Summary: What I have done is to adjust the GTT lock levels in different types 
> of statements based on the above thinking.
> For example, for TRUNCATE on a GTT, I reduced the table lock level the GTT 
> holds to RowExclusiveLock,
> so we can truncate data in the same GTT in different sessions at the 
> same time.
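[Editor's note] The lock-level policy summarized above can be written down as a small table (a sketch of the summary in this mail, not the patch's actual C code; the dict name and keys are made up):

```python
# Lock level a session takes on a GTT per statement class, following the
# summary above: per-session statements need only RowExclusiveLock because
# they touch only session-local data; definition-changing DDL still needs
# AccessExclusiveLock, as for regular tables.
GTT_LOCK_LEVEL = {
    "insert/update/delete": "RowExclusiveLock",
    "truncate/analyze/vacuum/reindex/cluster": "RowExclusiveLock",
    "drop/alter": "AccessExclusiveLock",
}

print(GTT_LOCK_LEVEL["truncate/analyze/vacuum/reindex/cluster"])  # RowExclusiveLock
print(GTT_LOCK_LEVEL["drop/alter"])  # AccessExclusiveLock
```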
> 
> What do you think about table locks on GTT?
> 
> I am thinking about explicit LOCK statements. Some applications use explicit 
> locking for various reasons - typically as protection against race conditions. 
> 
> But on a GTT, race conditions are not possible. So my question is - does the 
> exclusive lock on a GTT prevent other sessions from inserting into their own 
> instances of the same GTT?
In my opinion, a GTT always works on the session's private data; there is no 
need to do anything while holding the lock, so the LOCK statement should do 
nothing (the same is true for Oracle's GTT).

What do you think?

> 
> At what level are table locks active - the shared part of the GTT, or the 
> session-instance part of the GTT?
I don't quite understand what you mean, could you explain it a little bit?



Wenjing



> 
> 
> 
> 
> 
> Wenjing
> 
> 
>> 
>> Now, table locks are implemented on a global level. So exclusive lock on GTT 
>> in one session block insertion on the second session. Is it expected 
>> behaviour? It is safe, but maybe it is too strict. 
>> 
>> We should define what table lock is meaning on GTT.
>> 
>> Regards
>> 
>> Pavel
>>  
>> Pavel
>> 
>> 
>>> With Regards,
>>> Prabhat Kumar Sahu
>>> EnterpriseDB: http://www.enterprisedb.com 
>> 
>> 
> 



Re: [Proposal] Global temporary tables

2020-07-14 Thread Pavel Stehule
On Fri, Jul 10, 2020 at 11:04, wenjing zeng wrote:

> HI all
>
> I started using my personal email to respond to community issue.
>
>
>
> On Jul 7, 2020 at 6:05 PM, Pavel Stehule wrote:
>
> Hi
>
>
>> GTT Merge the latest PGMaster and resolves conflicts.
>>
>>
>>
> I tested it and it looks fine. I think it is very usable in current form,
> but still there are some issues:
>
> postgres=# create global temp table foo(a int);
> CREATE TABLE
> postgres=# insert into foo values(10);
> INSERT 0 1
> postgres=# alter table foo add column x int;
> ALTER TABLE
> postgres=# analyze foo;
> WARNING:  reloid 16400 not support update attstat after add colunm
> WARNING:  reloid 16400 not support update attstat after add colunm
> ANALYZE
>
> This is a limitation that we can completely eliminate.
>
>
> Please, can you summarize what is done, what limits are there, what can be
> implemented hard, what can be implemented easily?
>
> Sure.
>
> The current version of the GTT implementation supports all regular table
> operations.
> 1 what is done
> 1.1 insert/update/delete on GTT.
> 1.2 The GTT supports all types of indexes, and the query statement
> supports the use of GTT indexes to speed up the reading of data in the GTT.
> 1.3 GTT statistics keep a copy of the GTT local statistics, which are
> provided to the optimizer to choose the best query plan.
> 1.4 analyze vacuum GTT.
> 1.5 truncate cluster GTT.
> 1.6 all DDL on GTT.
> 1.7 GTT table can use  GTT sequence  or Regular sequence.
> 1.8 Support for creating views on GTT.
> 1.9 Support for creating views on foreign key.
> 1.10 support global temp partition.
>
> I feel like I cover all the necessary GTT requirements.
>
> For CLUSTER on GTT, I think it's complicated.
> I'm not sure the current implementation is quite reasonable. Maybe you can
> help review it.
>
>
>
>
>
> I found one open question - how can be implemented table locks - because
> data is physically separated, then we don't need table locks as protection
> against race conditions.
>
> Yes, but GTT DML and DDL still require table locking.
> 1 The DML requires table locks (RowExclusiveLock) to ensure that
> definitions do not change during run time (the DDL may modify or delete
> them).
> This part of the implementation does not actually change the code,
> because the DML on GTT does not block each other between sessions.
>
> 2 truncate/analyze/vacuum/reindex/cluster on GTT are now like DML:
> they only modify local data and do not modify the GTT definition.
> So I lowered the table lock level held by the GTT; only
> RowExclusiveLock is needed.
>
> 3 For DDL that needs to modify the GTT table definition (DROP
> GTT, ALTER GTT),
> an exclusive table lock is required (AccessExclusiveLock),
> as is the case for regular tables.
> This part of the implementation also does not actually change the code.
>
> Summary: What I have done is to adjust the GTT lock levels in different
> types of statements based on the above thinking.
> For example, for TRUNCATE on a GTT, I reduced the table lock level the
> GTT holds to RowExclusiveLock,
> so we can truncate data in the same GTT in different sessions at the
> same time.
>
> What do you think about table locks on GTT?
>

I am thinking about explicit LOCK statements. Some applications use
explicit locking for various reasons - typically as protection against race
conditions.

But on a GTT, race conditions are not possible. So my question is - does the
exclusive lock on a GTT prevent other sessions from inserting into their own
instances of the same GTT?

At what level are table locks active - the shared part of the GTT, or the
session-instance part of the GTT?




>
> Wenjing
>
>
>
> Now, table locks are implemented on a global level. So exclusive lock on
> GTT in one session block insertion on the second session. Is it expected
> behaviour? It is safe, but maybe it is too strict.
>
> We should define what table lock is meaning on GTT.
>
> Regards
>
> Pavel
>
>
>> Pavel
>>
>>
>>> With Regards,
>>> Prabhat Kumar Sahu
>>> EnterpriseDB: http://www.enterprisedb.com
>>>
>>>
>>>
>>
>


Re: [Proposal] Global temporary tables

2020-07-14 Thread Pavel Stehule
On Mon, Jul 13, 2020 at 13:59, wenjing zeng wrote:

>
>
> On Jul 10, 2020 at 5:03 PM, wenjing zeng wrote:
>
> HI all
>
> I started using my personal email to respond to community issues.
>
>
>
> On Jul 7, 2020 at 6:05 PM, Pavel Stehule wrote:
>
> Hi
>
>
>> GTT: merged the latest PG master and resolved conflicts.
>>
>>
>>
> I tested it and it looks fine. I think it is very usable in its current form,
> but still there are some issues:
>
> postgres=# create global temp table foo(a int);
> CREATE TABLE
> postgres=# insert into foo values(10);
> INSERT 0 1
> postgres=# alter table foo add column x int;
> ALTER TABLE
> postgres=# analyze foo;
> WARNING:  reloid 16400 not support update attstat after add colunm
> WARNING:  reloid 16400 not support update attstat after add colunm
> ANALYZE
>
> This is a limitation that we can completely eliminate.
>
>
> Please, can you summarize what is done, what limits are there, what can be
> implemented hard, what can be implemented easily?
>
> Sure.
>
> The current version of the GTT implementation supports all regular table
> operations.
> 1 what is done
> 1.1 insert/update/delete on GTT.
> 1.2 The GTT supports all types of indexes, and queries can use GTT
> indexes to speed up reads of data in the GTT.
> 1.3 Each session keeps a local copy of the GTT statistics, which is
> provided to the optimizer to choose the best query plan.
> 1.4 analyze/vacuum on GTT.
> 1.5 truncate/cluster on GTT.
> 1.6 all DDL on GTT.
> 1.7 A GTT can use a GTT sequence or a regular sequence.
> 1.8 Support for creating views on GTT.
> 1.9 Support for creating views on foreign key.
> 1.10 support global temp partition.
>
> I feel I have covered all the necessary GTT requirements.
>
> For cluster GTT, I think it's complicated.
> I'm not sure the current implementation is quite reasonable. Maybe you can
> help review it.
>
>
>
>
>
> I found one open question - how table locks can be implemented - because
> data is physically separated, so we don't need table locks as protection
> against race conditions.
>
> Yes, but DML and DDL on GTT still require table locking.
> 1 The DML requires table locks (RowExclusiveLock) to ensure that
> definitions do not change at run time (the DDL may modify or delete
> them).
> This part of the implementation does not actually change the code,
> because DML on a GTT in different sessions does not block each other.
>
> As a side note, since the same row of GTT data cannot be modified by
> different sessions,
> I don't see the need to maintain the GTT's pg_class.relminmxid.
> What do you think?
>

yes, probably it is not necessary

Regards

Pavel

>
>
> Wenjing
>
>
>
> 2 For truncate/analyze/vacuum/reindex/cluster, a GTT now behaves like DML:
> they only modify local data and do not modify the GTT definition.
> So I lowered the table lock level held by the GTT; only
> RowExclusiveLock is needed.
>
> 3 For DDLs that need to modify the GTT table definition (DROP
> GTT, ALTER GTT),
> an exclusive table lock is required (AccessExclusiveLock),
> as is the case for a regular table.
> This part of the implementation also does not actually change the code.
>
> Summary: What I have done is to adjust the GTT lock levels for different
> types of statements based on the above thinking.
> For example, for TRUNCATE on a GTT I reduced the table lock level
> to RowExclusiveLock,
> so we can truncate data in the same GTT in different sessions at the
> same time.
>
> What do you think about table locks on GTT?
>
>
> Wenjing
>
>
>
> Now, table locks are implemented on a global level. So an exclusive lock on
> a GTT in one session blocks insertion in a second session. Is it expected
> behaviour? It is safe, but maybe it is too strict.
>
> We should define what table lock is meaning on GTT.
>
> Regards
>
> Pavel
>
>
>> Pavel
>>
>>
>>> With Regards,
>>> Prabhat Kumar Sahu
>>> EnterpriseDB: http://www.enterprisedb.com
>>>
>>>
>>>
>>
>
>


Re: [Proposal] Global temporary tables

2020-07-13 Thread wenjing zeng


> On Jul 10, 2020 at 5:03 PM, wenjing zeng wrote:
> 
> HI all
> 
> I started using my personal email to respond to community issues.
> 
> 
> 
>> On Jul 7, 2020 at 6:05 PM, Pavel Stehule wrote:
>> 
>> Hi
>>  
>> GTT: merged the latest PG master and resolved conflicts.
>> 
>> 
>> 
>> I tested it and it looks fine. I think it is very usable in its current form, 
>> but still there are some issues:
>> 
>> postgres=# create global temp table foo(a int);
>> CREATE TABLE
>> postgres=# insert into foo values(10);
>> INSERT 0 1
>> postgres=# alter table foo add column x int;
>> ALTER TABLE
>> postgres=# analyze foo;
>> WARNING:  reloid 16400 not support update attstat after add colunm
>> WARNING:  reloid 16400 not support update attstat after add colunm
>> ANALYZE
> This is a limitation that we can completely eliminate.
> 
>> 
>> Please, can you summarize what is done, what limits are there, what can be 
>> implemented hard, what can be implemented easily?
> Sure.
> 
> The current version of the GTT implementation supports all regular table 
> operations.
> 1 what is done
> 1.1 insert/update/delete on GTT.
> 1.2 The GTT supports all types of indexes, and queries can use GTT
> indexes to speed up reads of data in the GTT.
> 1.3 Each session keeps a local copy of the GTT statistics, which is
> provided to the optimizer to choose the best query plan.
> 1.4 analyze/vacuum on GTT.
> 1.5 truncate/cluster on GTT.
> 1.6 all DDL on GTT.
> 1.7 A GTT can use a GTT sequence or a regular sequence.
> 1.8 Support for creating views on GTT.
> 1.9 Support for creating views on foreign key.
> 1.10 support global temp partition.
> 
> I feel I have covered all the necessary GTT requirements.
> 
> For cluster GTT, I think it's complicated.
> I'm not sure the current implementation is quite reasonable. Maybe you can 
> help review it.
> 
> 
>> 
>> 
>> 
>> I found one open question - how table locks can be implemented - because
>> data is physically separated, so we don't need table locks as protection
>> against race conditions.
> Yes, but DML and DDL on GTT still require table locking.
> 1 The DML requires table locks (RowExclusiveLock) to ensure that
> definitions do not change at run time (the DDL may modify or delete them).
> This part of the implementation does not actually change the code,
> because DML on a GTT in different sessions does not block each other.
As a side note, since the same row of GTT data cannot be modified by
different sessions,
I don't see the need to maintain the GTT's pg_class.relminmxid.
What do you think?
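
To illustrate (a minimal sketch; it assumes the GTT patch from this thread is applied, and the table name is invented): each session's instance of a GTT holds only that session's rows, so no two sessions can ever lock the same physical row, which is why relminmxid would have nothing to track.

```sql
-- Session 1:
create global temp table gtt_demo (c1 int) on commit preserve rows;
insert into gtt_demo values (1);
select * from gtt_demo;   -- session 1 sees only its own row (1)

-- Session 2 (shares the definition, not the storage):
insert into gtt_demo values (2);
select * from gtt_demo;   -- session 2 sees only (2); row (1) is invisible
```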


Wenjing


> 
> 2 For truncate/analyze/vacuum/reindex/cluster, a GTT now behaves like DML:
> they only modify local data and do not modify the GTT definition.
> So I lowered the table lock level held by the GTT; only RowExclusiveLock is needed.
> 
> 3 For DDLs that need to modify the GTT table definition (DROP GTT, ALTER
> GTT),
> an exclusive table lock is required (AccessExclusiveLock),
> as is the case for a regular table.
> This part of the implementation also does not actually change the code.
> 
> Summary: What I have done is to adjust the GTT lock levels for different types
> of statements based on the above thinking.
> For example, for TRUNCATE on a GTT I reduced the table lock level to
> RowExclusiveLock,
> so we can truncate data in the same GTT in different sessions at the
> same time.
> 
> What do you think about table locks on GTT?
> 
> 
> Wenjing
> 
> 
>> 
>> Now, table locks are implemented on a global level. So an exclusive lock on a
>> GTT in one session blocks insertion in a second session. Is it expected 
>> behaviour? It is safe, but maybe it is too strict. 
>> 
>> We should define what table lock is meaning on GTT.
>> 
>> Regards
>> 
>> Pavel
>>  
>> Pavel
>> 
>> 
>>> With Regards,
>>> Prabhat Kumar Sahu
>>> EnterpriseDB: http://www.enterprisedb.com 
>> 
>> 
> 



Re: [Proposal] Global temporary tables

2020-07-10 Thread wenjing zeng
HI all

I started using my personal email to respond to community issues.



> On Jul 7, 2020 at 6:05 PM, Pavel Stehule wrote:
> 
> Hi
>  
> GTT: merged the latest PG master and resolved conflicts.
> 
> 
> 
> I tested it and it looks fine. I think it is very usable in its current form, but 
> still there are some issues:
> 
> postgres=# create global temp table foo(a int);
> CREATE TABLE
> postgres=# insert into foo values(10);
> INSERT 0 1
> postgres=# alter table foo add column x int;
> ALTER TABLE
> postgres=# analyze foo;
> WARNING:  reloid 16400 not support update attstat after add colunm
> WARNING:  reloid 16400 not support update attstat after add colunm
> ANALYZE
This is a limitation that we can completely eliminate.

> 
> Please, can you summarize what is done, what limits are there, what can be 
> implemented hard, what can be implemented easily?
Sure.

The current version of the GTT implementation supports all regular table 
operations.
1 what is done
1.1 insert/update/delete on GTT.
1.2 The GTT supports all types of indexes, and queries can use GTT indexes to
speed up reads of data in the GTT.
1.3 Each session keeps a local copy of the GTT statistics, which is provided
to the optimizer to choose the best query plan.
1.4 analyze/vacuum on GTT.
1.5 truncate/cluster on GTT.
1.6 all DDL on GTT.
1.7 A GTT can use a GTT sequence or a regular sequence.
1.8 Support for creating views on GTT.
1.9 Support for creating views on foreign key.
1.10 support global temp partition.

I feel I have covered all the necessary GTT requirements.

For cluster GTT, I think it's complicated.
I'm not sure the current implementation is quite reasonable. Maybe you can help 
review it.
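
As a quick illustration of the listed capabilities, here is a sketch of a session exercising them (it assumes the patched server; all object names are invented for the example):

```sql
create global temp table gtt_t (id int primary key, v text)
    on commit preserve rows;                  -- 1.6 DDL on GTT
create index gtt_t_v_idx on gtt_t (v);        -- 1.2 indexes on GTT
insert into gtt_t values (1, 'a');            -- 1.1 DML on GTT
update gtt_t set v = 'b' where id = 1;
analyze gtt_t;                                -- 1.4 per-session statistics
vacuum gtt_t;
create view gtt_view as select * from gtt_t;  -- 1.8 views on GTT
truncate gtt_t;                               -- 1.5 truncate GTT
```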


> 
> 
> 
> I found one open question - how table locks can be implemented - because data
> is physically separated, so we don't need table locks as protection against
> race conditions.
Yes, but DML and DDL on GTT still require table locking.
1 The DML requires table locks (RowExclusiveLock) to ensure that
definitions do not change at run time (the DDL may modify or delete them).
This part of the implementation does not actually change the code,
because DML on a GTT in different sessions does not block each other.

2 For truncate/analyze/vacuum/reindex/cluster, a GTT now behaves like DML:
they only modify local data and do not modify the GTT definition.
So I lowered the table lock level held by the GTT; only RowExclusiveLock is needed.

3 For DDLs that need to modify the GTT table definition (DROP GTT, ALTER
GTT),
an exclusive table lock is required (AccessExclusiveLock),
as is the case for a regular table.
This part of the implementation also does not actually change the code.

Summary: What I have done is to adjust the GTT lock levels for different types
of statements based on the above thinking.
For example, for TRUNCATE on a GTT I reduced the table lock level to
RowExclusiveLock,
so we can truncate data in the same GTT in different sessions at the same
time.
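
A sketch of what this lock-level choice allows (assuming the patch; the table name is invented). TRUNCATE on a GTT takes only RowExclusiveLock, which does not conflict with itself, while a definition-changing DDL still takes AccessExclusiveLock and therefore waits:

```sql
-- Session 1:
create global temp table gtt_demo (c1 int) on commit preserve rows;
begin;
truncate gtt_demo;   -- clears only session 1's data; holds RowExclusiveLock

-- Session 2, concurrently:
truncate gtt_demo;   -- not blocked; clears only session 2's data
alter table gtt_demo add column extra int;  -- needs AccessExclusiveLock,
                                            -- waits until session 1 commits
```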

What do you think about table locks on GTT?


Wenjing


> 
> Now, table locks are implemented on a global level. So an exclusive lock on a
> GTT in one session blocks insertion in a second session. Is it expected 
> behaviour? It is safe, but maybe it is too strict. 
> 
> We should define what table lock is meaning on GTT.
> 
> Regards
> 
> Pavel
>  
> Pavel
> 
> 
>> With Regards,
>> Prabhat Kumar Sahu
>> EnterpriseDB: http://www.enterprisedb.com 
> 
> 



Re: [Proposal] Global temporary tables

2020-07-07 Thread Pavel Stehule
Hi


> GTT: merged the latest PG master and resolved conflicts.
>
>
>
I tested it and it looks fine. I think it is very usable in its current form,
but still there are some issues:

postgres=# create global temp table foo(a int);
CREATE TABLE
postgres=# insert into foo values(10);
INSERT 0 1
postgres=# alter table foo add column x int;
ALTER TABLE
postgres=# analyze foo;
WARNING:  reloid 16400 not support update attstat after add colunm
WARNING:  reloid 16400 not support update attstat after add colunm
ANALYZE

Please, can you summarize what is done, what limits are there, what can be
implemented hard, what can be implemented easily?



I found one open question - how table locks can be implemented - because
data is physically separated, so we don't need table locks as protection
against race conditions.

Now, table locks are implemented on a global level. So an exclusive lock on
a GTT in one session blocks insertion in a second session. Is it expected
behaviour? It is safe, but maybe it is too strict.

We should define what table lock is meaning on GTT.

Regards

Pavel


> Pavel
>
>
>> With Regards,
>> Prabhat Kumar Sahu
>> EnterpriseDB: http://www.enterprisedb.com
>>
>>
>>
>


Re: [Proposal] Global temporary tables

2020-07-06 Thread Pavel Stehule
Hi

On Thu, Jun 11, 2020 at 4:13, 曾文旌 wrote:

>
> On Jun 9, 2020 at 8:15 PM, Prabhat Sahu wrote:
>
>
>
> On Wed, Apr 29, 2020 at 8:52 AM 曾文旌  wrote:
>
>> On Apr 27, 2020 at 9:48 PM, Prabhat Sahu wrote:
>>
> Thanks Wenjing for the fix patch for the previous issues.
> I have verified the issues; the fixes now look good to me.
> But the below error message is confusing (for gtt2).
>>
>> postgres=# drop table gtt1;
>> ERROR:  cannot drop global temp table gtt1 when other backend attached it.
>>
>> postgres=# drop table gtt2;
>> ERROR:  cannot drop index idx2 on global temp table gtt2 when other
>> backend attached it.
>>
>> I feel the above error message shown for "DROP TABLE gtt2;" is a bit
>> confusing(looks similar to DROP INDEX gtt2;).
>> If possible, can we keep the error message simple as "ERROR:  cannot
>> drop global temp table gtt2 when other backend attached it."?
>> I mean, without giving extra information for the index attached to that
>> GTT.
>>
>> Fixed the error message to make the expression more accurate. In v33.
>>
>
> Thanks Wenjing. We verified your latest patch(gtt_v33) focusing on all
> reported issues and they work fine.
> Thanks.
> --
>
>
> I'm very glad to hear such good news.
> I am especially grateful for your professional work on GTT.
> Please feel free to let me know if there is anything you think could be
> improved.
>
>
> Thanks.
>
>
> Wenjing
>

this patch needs rebase

Regards

Pavel


> With Regards,
> Prabhat Kumar Sahu
> EnterpriseDB: http://www.enterprisedb.com
>
>
>


Re: [Proposal] Global temporary tables

2020-06-10 Thread 曾文旌

> On Jun 9, 2020 at 8:15 PM, Prabhat Sahu wrote:
> 
> 
> 
> On Wed, Apr 29, 2020 at 8:52 AM 曾文旌  > wrote:
>> On Apr 27, 2020 at 9:48 PM, Prabhat Sahu wrote:
>> 
>> Thanks Wenjing for the fix patch for the previous issues.
>> I have verified the issues; the fixes now look good to me.
>> But the below error message is confusing (for gtt2).
>> 
>> postgres=# drop table gtt1;
>> ERROR:  cannot drop global temp table gtt1 when other backend attached it.
>> 
>> postgres=# drop table gtt2;
>> ERROR:  cannot drop index idx2 on global temp table gtt2 when other backend 
>> attached it.
>> 
>> I feel the above error message shown for "DROP TABLE gtt2;" is a bit 
>> confusing(looks similar to DROP INDEX gtt2;).
>> If possible, can we keep the error message simple as "ERROR:  cannot drop 
>> global temp table gtt2 when other backend attached it."?
>> I mean, without giving extra information for the index attached to that GTT.
> Fixed the error message to make the expression more accurate. In v33.
>  
> Thanks Wenjing. We verified your latest patch(gtt_v33) focusing on all 
> reported issues and they work fine. 
> Thanks.
> -- 

I'm very glad to hear such good news.
I am especially grateful for your professional work on GTT.
Please feel free to let me know if there is anything you think could be 
improved.


Thanks.


Wenjing

> With Regards,
> Prabhat Kumar Sahu
> EnterpriseDB: http://www.enterprisedb.com 





Re: [Proposal] Global temporary tables

2020-06-09 Thread Prabhat Sahu
On Wed, Apr 29, 2020 at 8:52 AM 曾文旌  wrote:

> On Apr 27, 2020 at 9:48 PM, Prabhat Sahu wrote:
>
> Thanks Wenjing for the fix patch for the previous issues.
> I have verified the issues; the fixes now look good to me.
> But the below error message is confusing (for gtt2).
>
> postgres=# drop table gtt1;
> ERROR:  cannot drop global temp table gtt1 when other backend attached it.
>
> postgres=# drop table gtt2;
> ERROR:  cannot drop index idx2 on global temp table gtt2 when other
> backend attached it.
>
> I feel the above error message shown for "DROP TABLE gtt2;" is a bit
> confusing(looks similar to DROP INDEX gtt2;).
> If possible, can we keep the error message simple as "ERROR:  cannot drop
> global temp table gtt2 when other backend attached it."?
> I mean, without giving extra information for the index attached to that
> GTT.
>
> Fixed the error message to make the expression more accurate. In v33.
>

Thanks Wenjing. We verified your latest patch(gtt_v33) focusing on all
reported issues and they work fine.
Thanks.
-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-05-07 Thread 曾文旌


> On Apr 29, 2020 at 7:46 PM, tushar wrote:
> 
> On 4/29/20 8:52 AM, 曾文旌 wrote:
>> Fixed the error message to make the expression more accurate. In v33.
> 
> Thanks wenjing
> 
> Please refer this scenario  , where getting an error while performing cluster 
> o/p
> 
> 1)
> 
> X terminal -
> 
> postgres=# create global temp table f(n int);
> CREATE TABLE
> 
> Y Terminal -
> 
> postgres=# create index index12 on f(n);
> CREATE INDEX
> postgres=# \q
> 
> X terminal -
> 
> postgres=# reindex index  index12;
> REINDEX
> postgres=#  cluster f using index12;
> ERROR:  cannot cluster on invalid index "index12"
> postgres=# drop index index12;
> DROP INDEX
> 
> If this is expected, could we try to make the error message simpler, if
> possible?
> 
> Another issue  -
> 
> X terminal -
> 
> postgres=# create global temp table f11(n int);
> CREATE TABLE
> postgres=# create index ind1 on f11(n);
> CREATE INDEX
> postgres=# create index ind2 on f11(n);
> CREATE INDEX
> postgres=#
> 
> Y terminal -
> 
> postgres=# drop table f11;
> ERROR:  cannot drop index ind2 or global temporary table f11
> HINT:  Because the index is created on the global temporary table and other 
> backend attached it.
> postgres=#
> 
> It only mentions index ind2, but what about ind1, and what if
> they have lots of indexes?
> I think we should not mix index information into the table-drop message, and
> vice versa.
postgres=# drop index index12;
ERROR:  cannot drop index index12 or global temporary table f
HINT:  Because the index is created on the global temporary table and other 
backend attached it.

postgres=# drop table f;
ERROR:  cannot drop index index12 or global temporary table f
HINT:  Because the index is created on the global temporary table and other 
backend attached it.
postgres=#

Dropping an index on a GTT and dropping a GTT that has an index can both
trigger this message, so the message is phrased this way; I don't see a
better way to word it.



Wenjing



> 
> -- 
> regards,tushar
> EnterpriseDB  https://www.enterprisedb.com/
> The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-04-29 Thread tushar

On 4/29/20 8:52 AM, 曾文旌 wrote:

Fixed the error message to make the expression more accurate. In v33.


Thanks wenjing

Please refer this scenario  , where getting an error while performing 
cluster o/p


1)

X terminal -

postgres=# create global temp table f(n int);
CREATE TABLE

Y Terminal -

postgres=# create index index12 on f(n);
CREATE INDEX
postgres=# \q

X terminal -

postgres=# reindex index  index12;
REINDEX
postgres=#  cluster f using index12;
ERROR:  cannot cluster on invalid index "index12"
postgres=# drop index index12;
DROP INDEX

If this is expected, could we try to make the error message simpler, if
possible?


Another issue  -

X terminal -

postgres=# create global temp table f11(n int);
CREATE TABLE
postgres=# create index ind1 on f11(n);
CREATE INDEX
postgres=# create index ind2 on f11(n);
CREATE INDEX
postgres=#

Y terminal -

postgres=# drop table f11;
ERROR:  cannot drop index ind2 or global temporary table f11
HINT:  Because the index is created on the global temporary table and 
other backend attached it.

postgres=#

It only mentions index ind2, but what about ind1, and what if
they have lots of indexes?
I think we should not mix index information into the table-drop message,
and vice versa.


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-04-27 Thread Prabhat Sahu
Thanks Wenjing for the fix patch for the previous issues.
I have verified the issues; the fixes now look good to me.
But the below error message is confusing (for gtt2).

postgres=# drop table gtt1;
ERROR:  cannot drop global temp table gtt1 when other backend attached it.

postgres=# drop table gtt2;
ERROR:  cannot drop index idx2 on global temp table gtt2 when other backend
attached it.

I feel the above error message shown for "DROP TABLE gtt2;" is a bit
confusing (it looks similar to DROP INDEX gtt2;).
If possible, can we keep the error message simple as "ERROR:  cannot drop
global temp table gtt2 when other backend attached it."?
I mean, without giving extra information for the index attached to that GTT.

On Mon, Apr 27, 2020 at 5:34 PM 曾文旌  wrote:

>
>
> On Apr 27, 2020 at 5:26 PM, Prabhat Sahu wrote:
>
> Hi Wenjing,
>
> Please check: the below scenario shows different error messages for "DROP
> TABLE gtt;" on a gtt with and without an index.
>
> *-- Session1:*postgres=# create global temporary table gtt1 (c1 int);
> CREATE TABLE
> postgres=# create global temporary table gtt2 (c1 int);
> CREATE TABLE
> postgres=# create index idx2 on gtt2(c1);
> CREATE INDEX
>
>
> *-- Session2:*postgres=# drop table gtt1;
> ERROR:  can not drop relation gtt1 when other backend attached this global
> temp table
> postgres=# drop table gtt2;
> ERROR:  can not drop index gtt2 when other backend attached this global
> temp table.
>
> For DROP GTT, we need to drop the index on the table first.
> So the indexes on the GTT are checked first.
> But the error message needs to be fixed.
> Fixed in v32
>
>
> wenjing
>
>
>
>
>
> --
>
> With Regards,
> Prabhat Kumar Sahu
> EnterpriseDB: http://www.enterprisedb.com
>
>
>

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-04-27 Thread 曾文旌


> On Apr 24, 2020 at 9:03 PM, tushar wrote:
> 
> On 4/22/20 2:49 PM, 曾文旌 wrote:
>> 
>> I provide the TRUNCATE tablename DROP to clear the data in the GTT and 
>> delete the storage files.
>> This feature requires the current transaction to commit immediately after it 
>> finishes truncate.
>> 
> Thanks Wenjing , Please refer this scenario
> 
> postgres=# create global temp table testing (a int);
> CREATE TABLE
> postgres=# begin;
> BEGIN
> postgres=*# truncate testing;  -- working   [1]
> TRUNCATE TABLE
> postgres=*# truncate testing drop;
> ERROR:  Truncate global temporary table cannot run inside a transaction block 
>    -- this throws an error denying something that I just did successfully in
> [1]
The truncate GTT drop was removed.
So the problem goes away.


Wenjing


> postgres=!#
> 
> -- 
> regards,tushar
> EnterpriseDB  https://www.enterprisedb.com/
> The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-04-27 Thread Prabhat Sahu
Hi Wenjing,

Please check: the below scenario shows different error messages for "DROP
TABLE gtt;" on a gtt with and without an index.

*-- Session1:*postgres=# create global temporary table gtt1 (c1 int);
CREATE TABLE
postgres=# create global temporary table gtt2 (c1 int);
CREATE TABLE
postgres=# create index idx2 on gtt2(c1);
CREATE INDEX


*-- Session2:*postgres=# drop table gtt1;
ERROR:  can not drop relation gtt1 when other backend attached this global
temp table
postgres=# drop table gtt2;
ERROR:  can not drop index gtt2 when other backend attached this global
temp table.

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-04-26 Thread 曾文旌
OP from any session(not even 
>>>> from the session in which GTT is created)
>>>> 
>>>> Case1:
>>>> create global temporary table gtt2 (c1 integer) on commit preserve rows;
>>>> insert into gtt2 values(100);
>>>> drop table gtt2;
>>>> 
>>>> SQL> drop table gtt2;
>>>> drop table gtt2
>>>>   *
>>>> ERROR at line 1:
>>>> ORA-14452: attempt to create, alter or drop an index on temporary table 
>>>> already in use
>>>> 
>>>> -- Issue 3: But, we are able to drop the GTT(having data) which we have 
>>>> created in the same session.
>>>> postgres=# drop table gtt2;
>>>> DROP TABLE
>>>> 
>>>> Case2: GTT with(on commit preserve rows) having data in both session1 and 
>>>> session2
>>>> Session1:
>>>> create global temporary table gtt2 (c1 integer) on commit preserve rows;
>>>> insert into gtt2 values(100);
>>>> 
>>>> Session2:
>>>> insert into gtt2 values(200);
>>>> 
>>>> -- If we try to drop the table from any session we should get an error, it 
>>>> is working fine.
>>>> drop table gtt2;
>>>> SQL> drop table gtt2;
>>>> drop table gtt2
>>>>   *
>>>> ERROR at line 1:
>>>> ORA-14452: attempt to create, alter or drop an index on temporary table 
>>>> already in use
>>>> 
>>>> postgres=# drop table gtt2 ;
>>>> ERROR:  can not drop relation gtt2 when other backend attached this global 
>>>> temp table
>>>> 
>>>> -- To drop the table gtt2 from any session1/session2, we need to truncate 
>>>> the table data first from all the session(session1, session2) which is 
>>>> having data.
>>>> Session1:
>>>> truncate table gtt2;
>>>> -- Session2:
>>>> truncate table gtt2;
>>>> 
>>>> Session 2:
>>>> SQL> drop table gtt2;
>>>> 
>>>> Table dropped.
>>>> 
>>>> -- Issue 4: But we are not able to drop the GTT, even after TRUNCATE the 
>>>> table in all the sessions.
>>>> -- truncate from all sessions where GTT have data.
>>>> postgres=# truncate gtt2 ;
>>>> TRUNCATE TABLE
>>>> 
>>>> -- try to DROP GTT still, we are getting error.
>>>> postgres=# drop table gtt2 ;
>>>> ERROR:  can not drop relation gtt2 when other backend attached this global 
>>>> temp table
>>>> 
>>>> To drop the GTT from any session, we need to exit from all other sessions.
>>>> postgres=# drop table gtt2 ;
>>>> DROP TABLE
>>>> 
>>>> Kindly let me know if I am missing something.
>>>> 
>>>> 
>>>> On Wed, Apr 1, 2020 at 6:26 PM Prabhat Sahu >>> <mailto:prabhat.s...@enterprisedb.com>> wrote:
>>>> Hi Wenjing,
>>>> I hope we need to change the below error message.
>>>> 
>>>> postgres=# create global temporary table gtt(c1 int) on commit preserve 
>>>> rows;
>>>> CREATE TABLE
>>>> 
>>>> postgres=# create materialized view mvw as select * from gtt;
>>>> ERROR: materialized views must not use global temporary tables or views
>>>> 
>>>> Anyways we are not allowed to create a "global temporary view", 
>>>> so the above ERROR message should change(i.e. " or view" need to be 
>>>> removed from the error message) something like:
>>>> "ERROR: materialized views must not use global temporary tables"
>>>> 
>>>> -- 
>>>> With Regards,
>>>> Prabhat Kumar Sahu
>>>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>
>>>> 
>>>> 
>>>> -- 
>>>> With Regards,
>>>> Prabhat Kumar Sahu
>>>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>
>>> 
>> 
>> 
>> 
>> -- 
>> With Regards,
>> Prabhat Kumar Sahu
>> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com/>
> 





Re: [Proposal] Global temporary tables

2020-04-24 Thread tushar

On 4/22/20 2:49 PM, 曾文旌 wrote:


I provide the TRUNCATE tablename DROP to clear the data in the GTT and 
delete the storage files.
This feature requires the current transaction to commit immediately 
after it finishes truncate.



Thanks Wenjing , Please refer this scenario

postgres=# create global temp table testing (a int);
CREATE TABLE
postgres=# begin;
BEGIN
postgres=*# truncate testing;  -- working   [1]
TRUNCATE TABLE
postgres=*# truncate testing drop;
ERROR:  Truncate global temporary table cannot run inside a transaction 
block    -- this throws an error denying something that I just did
successfully in [1]

postgres=!#

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-04-24 Thread Prabhat Sahu
Hi Wenjing,

With the new patch (v30), as you mentioned, there is new syntax support for
"TRUNCATE TABLE gtt DROP", but we also observe that the syntax "DROP TABLE gtt
DROP" works, as below:

postgres=# create global temporary table gtt(c1 int) on commit preserve
rows;
CREATE TABLE
postgres=# DROP TABLE gtt DROP;
DROP TABLE

Is this syntax intentional? If not, we should get a syntax error.

On Fri, Apr 24, 2020 at 10:25 AM Prabhat Sahu 
wrote:

> Hi Wenjing,
>
> Please check: the server crashes with the below scenario (CLUSTER gtt
> using INDEX).
>
> *-- Session1:*
> postgres=# create global temporary table gtt (c1 integer) on commit
> preserve rows;
> CREATE TABLE
> postgres=# create index idx1 on gtt (c1);
> CREATE INDEX
>
> *-- Session2:*
> postgres=# create index idx2 on gtt (c1);
> CREATE INDEX
>
> *-- Session1:*
> postgres=# cluster gtt using idx1;
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.
> !?>
>
> *-- Below is the stacktrace:*
> [edb@localhost bin]$ gdb -q -c data/core.95690 postgres
> Reading symbols from
> /home/edb/PG/PGsrcNew/postgresql/inst/bin/postgres...done.
> [New LWP 95690]
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> Core was generated by `postgres: edb postgres [local] CLUSTER
>'.
> Program terminated with signal 6, Aborted.
> #0  0x7f9c574ee337 in raise () from /lib64/libc.so.6
> Missing separate debuginfos, use: debuginfo-install
> glibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
> krb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64
> libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-14.1.el7.x86_64
> openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64
> zlib-1.2.7-18.el7.x86_64
> (gdb) bt
> #0  0x7f9c574ee337 in raise () from /lib64/libc.so.6
> #1  0x7f9c574efa28 in abort () from /lib64/libc.so.6
> #2  0x00ab3a3c in ExceptionalCondition (conditionName=0xb5e2e8
> "!ReindexIsProcessingIndex(indexOid)", errorType=0xb5d365
> "FailedAssertion",
> fileName=0xb5d4e9 "index.c", lineNumber=3825) at assert.c:67
> #3  0x005b0412 in reindex_relation (relid=16384, flags=2,
> options=0) at index.c:3825
> #4  0x0065e36d in finish_heap_swap (OIDOldHeap=16384,
> OIDNewHeap=16389, is_system_catalog=false, swap_toast_by_content=false,
> check_constraints=false, is_internal=true, frozenXid=491,
> cutoffMulti=1, newrelpersistence=103 'g') at cluster.c:1448
> #5  0x0065ccef in rebuild_relation (OldHeap=0x7f9c589adef0,
> indexOid=16387, verbose=false) at cluster.c:602
> #6  0x0065c757 in cluster_rel (tableOid=16384, indexOid=16387,
> options=0) at cluster.c:418
> #7  0x0065c2cf in cluster (stmt=0x2cd1600, isTopLevel=true) at
> cluster.c:180
> #8  0x0093b213 in standard_ProcessUtility (pstmt=0x2cd16c8,
> queryString=0x2cd0b30 "cluster gtt using idx1;",
> context=PROCESS_UTILITY_TOPLEVEL,
> params=0x0, queryEnv=0x0, dest=0x2cd19a8, qc=0x7ffcd32604b0) at
> utility.c:819
> #9  0x0093aa50 in ProcessUtility (pstmt=0x2cd16c8,
> queryString=0x2cd0b30 "cluster gtt using idx1;",
> context=PROCESS_UTILITY_TOPLEVEL, params=0x0,
> queryEnv=0x0, dest=0x2cd19a8, qc=0x7ffcd32604b0) at utility.c:522
> #10 0x009398c2 in PortalRunUtility (portal=0x2d36ba0,
> pstmt=0x2cd16c8, isTopLevel=true, setHoldSnapshot=false, dest=0x2cd19a8,
> qc=0x7ffcd32604b0)
> at pquery.c:1157
> #11 0x00939ad8 in PortalRunMulti (portal=0x2d36ba0,
> isTopLevel=true, setHoldSnapshot=false, dest=0x2cd19a8, altdest=0x2cd19a8,
> qc=0x7ffcd32604b0)
> at pquery.c:1303
> #12 0x00938ff6 in PortalRun (portal=0x2d36ba0,
> count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x2cd19a8,
> altdest=0x2cd19a8,
> qc=0x7ffcd32604b0) at pquery.c:779
> #13 0x009331b0 in exec_simple_query (query_string=0x2cd0b30
> "cluster gtt using idx1;") at postgres.c:1239
> #14 0x009371bc in PostgresMain (argc=1, argv=0x2cfab80,
> dbname=0x2cfaa78 "postgres", username=0x2cfaa58 "edb") at postgres.c:4315
> #15 0x008872a9 in BackendRun (port=0x2cf2b50) at postmaster.c:4510
> #16 0x00886a9e in BackendStartup (port=0x2cf2b50) at
> postmaster.c:4202
> #17 0x0088301c in ServerLoop () at postmaster.c:1727
> #18 0x008828f3 in PostmasterMain (argc=3, argv=0x2ccb460) at
> postmaster.c:1400
> #19 0x00789c54 in main (argc=3, argv=0x2ccb460) at main.c:210
> (gdb)
>
> --
>
> With Regards,
> Prabhat Kumar Sahu
> EnterpriseDB: http://www.enterprisedb.com
>


-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-04-23 Thread Prabhat Sahu
Hi Wenjing,

Please check: the server crashes in the below scenario (CLUSTER gtt
using INDEX).

*-- Session1:*
postgres=# create global temporary table gtt (c1 integer) on commit
preserve rows;
CREATE TABLE
postgres=# create index idx1 on gtt (c1);
CREATE INDEX

*-- Session2:*
postgres=# create index idx2 on gtt (c1);
CREATE INDEX

*-- Session1:*
postgres=# cluster gtt using idx1;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!?>

*-- Below is the stacktrace:*
[edb@localhost bin]$ gdb -q -c data/core.95690 postgres
Reading symbols from
/home/edb/PG/PGsrcNew/postgresql/inst/bin/postgres...done.
[New LWP 95690]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `postgres: edb postgres [local] CLUSTER '.
Program terminated with signal 6, Aborted.
#0  0x7f9c574ee337 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install
glibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
krb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64
libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-14.1.el7.x86_64
openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64
zlib-1.2.7-18.el7.x86_64
(gdb) bt
#0  0x7f9c574ee337 in raise () from /lib64/libc.so.6
#1  0x7f9c574efa28 in abort () from /lib64/libc.so.6
#2  0x00ab3a3c in ExceptionalCondition (conditionName=0xb5e2e8
"!ReindexIsProcessingIndex(indexOid)", errorType=0xb5d365
"FailedAssertion",
fileName=0xb5d4e9 "index.c", lineNumber=3825) at assert.c:67
#3  0x005b0412 in reindex_relation (relid=16384, flags=2,
options=0) at index.c:3825
#4  0x0065e36d in finish_heap_swap (OIDOldHeap=16384,
OIDNewHeap=16389, is_system_catalog=false, swap_toast_by_content=false,
check_constraints=false, is_internal=true, frozenXid=491,
cutoffMulti=1, newrelpersistence=103 'g') at cluster.c:1448
#5  0x0065ccef in rebuild_relation (OldHeap=0x7f9c589adef0,
indexOid=16387, verbose=false) at cluster.c:602
#6  0x0065c757 in cluster_rel (tableOid=16384, indexOid=16387,
options=0) at cluster.c:418
#7  0x0065c2cf in cluster (stmt=0x2cd1600, isTopLevel=true) at
cluster.c:180
#8  0x0093b213 in standard_ProcessUtility (pstmt=0x2cd16c8,
queryString=0x2cd0b30 "cluster gtt using idx1;",
context=PROCESS_UTILITY_TOPLEVEL,
params=0x0, queryEnv=0x0, dest=0x2cd19a8, qc=0x7ffcd32604b0) at
utility.c:819
#9  0x0093aa50 in ProcessUtility (pstmt=0x2cd16c8,
queryString=0x2cd0b30 "cluster gtt using idx1;",
context=PROCESS_UTILITY_TOPLEVEL, params=0x0,
queryEnv=0x0, dest=0x2cd19a8, qc=0x7ffcd32604b0) at utility.c:522
#10 0x009398c2 in PortalRunUtility (portal=0x2d36ba0,
pstmt=0x2cd16c8, isTopLevel=true, setHoldSnapshot=false, dest=0x2cd19a8,
qc=0x7ffcd32604b0)
at pquery.c:1157
#11 0x00939ad8 in PortalRunMulti (portal=0x2d36ba0,
isTopLevel=true, setHoldSnapshot=false, dest=0x2cd19a8, altdest=0x2cd19a8,
qc=0x7ffcd32604b0)
at pquery.c:1303
#12 0x00938ff6 in PortalRun (portal=0x2d36ba0,
count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x2cd19a8,
altdest=0x2cd19a8,
qc=0x7ffcd32604b0) at pquery.c:779
#13 0x009331b0 in exec_simple_query (query_string=0x2cd0b30
"cluster gtt using idx1;") at postgres.c:1239
#14 0x009371bc in PostgresMain (argc=1, argv=0x2cfab80,
dbname=0x2cfaa78 "postgres", username=0x2cfaa58 "edb") at postgres.c:4315
#15 0x008872a9 in BackendRun (port=0x2cf2b50) at postmaster.c:4510
#16 0x00886a9e in BackendStartup (port=0x2cf2b50) at
postmaster.c:4202
#17 0x0088301c in ServerLoop () at postmaster.c:1727
#18 0x008828f3 in PostmasterMain (argc=3, argv=0x2ccb460) at
postmaster.c:1400
#19 0x00789c54 in main (argc=3, argv=0x2ccb460) at main.c:210
(gdb)

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-04-23 Thread Pavel Stehule
>>>>
>>>>
>>>> *Case2: GTT with(on commit preserve rows) having data in both session1
>>>> and session2Session1:*create global temporary table gtt2 (c1 integer)
>>>> on commit preserve rows;
>>>> insert into gtt2 values(100);
>>>>
>>>>
>>>> *Session2:*insert into gtt2 values(200);
>>>>
>>>> -- If we try to drop the table from any session we should get an error,
>>>> it is working fine.
>>>> drop table gtt2;
>>>>
>>>> SQL> drop table gtt2;
>>>> drop table gtt2
>>>>   *
>>>> ERROR at line 1:
>>>> ORA-14452: attempt to create, alter or drop an index on temporary table
>>>> already in use
>>>>
>>>> postgres=# drop table gtt2 ;
>>>> ERROR:  can not drop relation gtt2 when other backend attached this
>>>> global temp table
>>>>
>>>>
>>>> -- To drop the table gtt2 from any session1/session2, we need to
>>>> truncate the table data first from all the session(session1, session2)
>>>> which is having data.
>>>> *Session1:*
>>>> truncate table gtt2;
>>>> -- Session2:
>>>> truncate table gtt2;
>>>>
>>>> *Session 2:*
>>>> SQL> drop table gtt2;
>>>>
>>>> Table dropped.
>>>>
>>>> -- *Issue 4:* But we are not able to drop the GTT, even after TRUNCATE
>>>> the table in all the sessions.
>>>> -- truncate from all sessions where GTT have data.
>>>> postgres=# truncate gtt2 ;
>>>> TRUNCATE TABLE
>>>>
>>>> -- *try to DROP GTT still, we are getting error.*
>>>>
>>>> postgres=# drop table gtt2 ;
>>>> ERROR:  can not drop relation gtt2 when other backend attached this
>>>> global temp table
>>>>
>>>>
>>>> To drop the GTT from any session, we need to exit from all other
>>>> sessions.
>>>> postgres=# drop table gtt2 ;
>>>> DROP TABLE
>>>>
>>>> Kindly let me know if I am missing something.
>>>>
>>>>
>>>> On Wed, Apr 1, 2020 at 6:26 PM Prabhat Sahu <
>>>> prabhat.s...@enterprisedb.com> wrote:
>>>>
>>>>> Hi Wenjing,
>>>>> I hope we need to change the below error message.
>>>>>
>>>>> postgres=# create global temporary table gtt(c1 int) on commit
>>>>> preserve rows;
>>>>> CREATE TABLE
>>>>>
>>>>> postgres=# create materialized view mvw as select * from gtt;
>>>>> ERROR: materialized views must not use global temporary tables* or
>>>>> views*
>>>>>
>>>>> Anyways we are not allowed to create a "global temporary view",
>>>>> so the above ERROR message should change(i.e. *" or view"* need to be
>>>>> removed from the error message) something like:
>>>>> *"ERROR: materialized views must not use global temporary tables"*
>>>>>
>>>>> --
>>>>>
>>>>> With Regards,
>>>>> Prabhat Kumar Sahu
>>>>> EnterpriseDB: http://www.enterprisedb.com
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> With Regards,
>>>> Prabhat Kumar Sahu
>>>> EnterpriseDB: http://www.enterprisedb.com
>>>>
>>>>
>>>>
>>>
>>
>> --
>>
>> With Regards,
>> Prabhat Kumar Sahu
>> EnterpriseDB: http://www.enterprisedb.com
>>
>
>


Re: [Proposal] Global temporary tables

2020-04-23 Thread 曾文旌
>>> in a session, we will not be able to DROP from any session(not even from 
>>> the session in which GTT is created), we need to truncate the table data 
>>> first from all the session(session1, session2) which is having data.
>>> 1. Any tables(Normal table / GTT) without having data in a session, we will 
>>> be able to DROP from another session.
>>> Session1:
>>> create table t1 (c1 integer);
>>> create global temporary table gtt1 (c1 integer) on commit delete rows;
>>> create global temporary table gtt2 (c1 integer) on commit preserve rows;
>>> 
>>> Session2:
>>> drop table t1;
>>> drop table gtt1;
>>> drop table gtt2;
>>> 
>>> -- Issue 1: But we are able to drop a simple table and failed to drop GTT 
>>> as below.
>>> postgres=# drop table t1;
>>> DROP TABLE
>>> postgres=# drop table gtt1;
>>> ERROR:  can not drop relation gtt1 when other backend attached this global 
>>> temp table
>>> postgres=# drop table gtt2;
>>> ERROR:  can not drop relation gtt2 when other backend attached this global 
>>> temp table
>>> 
>>> 3. For a completed transaction on GTT with(on commit delete rows) (i.e. no 
>>> data in GTT) in a session, we will be able to DROP from another session.
>>> Session1:
>>> create global temporary table gtt1 (c1 integer) on commit delete rows;
>>> 
>>> Session2:
>>> drop table gtt1;
>>> 
>>> -- Issue 2: But we are getting error for GTT with(on_commit_delete_rows) 
>>> without data.
>>> postgres=# drop table gtt1;
>>> ERROR:  can not drop relation gtt1 when other backend attached this global 
>>> temp table
>>> 
>>> 4. For a completed transaction on GTT with(on commit preserve rows) with 
>>> data in any session, we will not be able to DROP from any session(not even 
>>> from the session in which GTT is created)
>>> 
>>> Case1:
>>> create global temporary table gtt2 (c1 integer) on commit preserve rows;
>>> insert into gtt2 values(100);
>>> drop table gtt2;
>>> 
>>> SQL> drop table gtt2;
>>> drop table gtt2
>>>   *
>>> ERROR at line 1:
>>> ORA-14452: attempt to create, alter or drop an index on temporary table 
>>> already in use
>>> 
>>> -- Issue 3: But, we are able to drop the GTT(having data) which we have 
>>> created in the same session.
>>> postgres=# drop table gtt2;
>>> DROP TABLE
>>> 
>>> Case2: GTT with(on commit preserve rows) having data in both session1 and 
>>> session2
>>> Session1:
>>> create global temporary table gtt2 (c1 integer) on commit preserve rows;
>>> insert into gtt2 values(100);
>>> 
>>> Session2:
>>> insert into gtt2 values(200);
>>> 
>>> -- If we try to drop the table from any session we should get an error, it 
>>> is working fine.
>>> drop table gtt2;
>>> SQL> drop table gtt2;
>>> drop table gtt2
>>>   *
>>> ERROR at line 1:
>>> ORA-14452: attempt to create, alter or drop an index on temporary table 
>>> already in use
>>> 
>>> postgres=# drop table gtt2 ;
>>> ERROR:  can not drop relation gtt2 when other backend attached this global 
>>> temp table
>>> 
>>> -- To drop the table gtt2 from any session1/session2, we need to truncate 
>>> the table data first from all the session(session1, session2) which is 
>>> having data.
>>> Session1:
>>> truncate table gtt2;
>>> -- Session2:
>>> truncate table gtt2;
>>> 
>>> Session 2:
>>> SQL> drop table gtt2;
>>> 
>>> Table dropped.
>>> 
>>> -- Issue 4: But we are not able to drop the GTT, even after TRUNCATE the 
>>> table in all the sessions.
>>> -- truncate from all sessions where GTT have data.
>>> postgres=# truncate gtt2 ;
>>> TRUNCATE TABLE
>>> 
>>> -- try to DROP GTT still, we are getting error.
>>> postgres=# drop table gtt2 ;
>>> ERROR:  can not drop relation gtt2 when other backend attached this global 
>>> temp table
>>> 
>>> To drop the GTT from any session, we need to exit from all other sessions.
>>> postgres=# drop table gtt2 ;
>>> DROP TABLE
>>> 
>>> Kindly let me know if I am missing something.
>>> 
>>> 
>>> On Wed, Apr 1, 2020 at 6:26 PM Prabhat Sahu <prabhat.s...@enterprisedb.com> wrote:
>>> Hi Wenjing,
>>> I hope we need to change the below error message.
>>> 
>>> postgres=# create global temporary table gtt(c1 int) on commit preserve 
>>> rows;
>>> CREATE TABLE
>>> 
>>> postgres=# create materialized view mvw as select * from gtt;
>>> ERROR: materialized views must not use global temporary tables or views
>>> 
>>> Anyways we are not allowed to create a "global temporary view", 
>>> so the above ERROR message should change(i.e. " or view" need to be removed 
>>> from the error message) something like:
>>> "ERROR: materialized views must not use global temporary tables"
>>> 
>>> -- 
>>> With Regards,
>>> Prabhat Kumar Sahu
>>> EnterpriseDB: http://www.enterprisedb.com
>>> 
>>> 
>>> -- 
>>> With Regards,
>>> Prabhat Kumar Sahu
>>> EnterpriseDB: http://www.enterprisedb.com
>> 
> 
> 
> 
> -- 
> With Regards,
> Prabhat Kumar Sahu
> EnterpriseDB: http://www.enterprisedb.com





Re: [Proposal] Global temporary tables

2020-04-23 Thread 曾文旌


Re: [Proposal] Global temporary tables

2020-04-22 Thread Pavel Stehule
>>> *Session2:*
>>> drop table t1;
>>> drop table gtt1;
>>> drop table gtt2;
>>>
>>> -- *Issue 1:* But we are able to drop a simple table and failed to drop
>>> GTT as below.
>>>
>>> postgres=# drop table t1;
>>> DROP TABLE
>>> postgres=# drop table gtt1;
>>> ERROR:  can not drop relation gtt1 when other backend attached this
>>> global temp table
>>> postgres=# drop table gtt2;
>>> ERROR:  can not drop relation gtt2 when other backend attached this
>>> global temp table
>>>
>>>
>>> *3. For a completed transaction on GTT with(on commit delete rows) (i.e.
>>> no data in GTT) in a session, we will be able to DROP from another session.*
>>>
>>> *Session1:*create global temporary table gtt1 (c1 integer) on commit
>>> delete rows;
>>>
>>> *Session2:*
>>> drop table gtt1;
>>>
>>> -- *Issue 2:* But we are getting error for GTT
>>> with(on_commit_delete_rows) without data.
>>>
>>> postgres=# drop table gtt1;
>>> ERROR:  can not drop relation gtt1 when other backend attached this
>>> global temp table
>>>
>>>
>>> *4. For a completed transaction on GTT with(on commit preserve
>>> rows) with data in any session, we will not be able to DROP from any
>>> session(not even from the session in which GTT is created)*
>>>
>>> *Case1:*
>>> create global temporary table gtt2 (c1 integer) on commit preserve rows;
>>> insert into gtt2 values(100);
>>> drop table gtt2;
>>>
>>> SQL> drop table gtt2;
>>> drop table gtt2
>>>   *
>>> ERROR at line 1:
>>> ORA-14452: attempt to create, alter or drop an index on temporary table
>>> already in use
>>>
>>> -- *Issue 3:* But, we are able to drop the GTT(having data) which we
>>> have created in the same session.
>>>
>>> postgres=# drop table gtt2;
>>> DROP TABLE
>>>
>>>
>>>
>>>
>>> *Case2: GTT with(on commit preserve rows) having data in both session1
>>> and session2Session1:*create global temporary table gtt2 (c1 integer)
>>> on commit preserve rows;
>>> insert into gtt2 values(100);
>>>
>>>
>>> *Session2:*insert into gtt2 values(200);
>>>
>>> -- If we try to drop the table from any session we should get an error,
>>> it is working fine.
>>> drop table gtt2;
>>>
>>> SQL> drop table gtt2;
>>> drop table gtt2
>>>   *
>>> ERROR at line 1:
>>> ORA-14452: attempt to create, alter or drop an index on temporary table
>>> already in use
>>>
>>> postgres=# drop table gtt2 ;
>>> ERROR:  can not drop relation gtt2 when other backend attached this
>>> global temp table
>>>
>>>
>>> -- To drop the table gtt2 from any session1/session2, we need to
>>> truncate the table data first from all the session(session1, session2)
>>> which is having data.
>>> *Session1:*
>>> truncate table gtt2;
>>> -- Session2:
>>> truncate table gtt2;
>>>
>>> *Session 2:*
>>> SQL> drop table gtt2;
>>>
>>> Table dropped.
>>>
>>> -- *Issue 4:* But we are not able to drop the GTT, even after TRUNCATE
>>> the table in all the sessions.
>>> -- truncate from all sessions where GTT have data.
>>> postgres=# truncate gtt2 ;
>>> TRUNCATE TABLE
>>>
>>> -- *try to DROP GTT still, we are getting error.*
>>>
>>> postgres=# drop table gtt2 ;
>>> ERROR:  can not drop relation gtt2 when other backend attached this
>>> global temp table
>>>
>>>
>>> To drop the GTT from any session, we need to exit from all other
>>> sessions.
>>> postgres=# drop table gtt2 ;
>>> DROP TABLE
>>>
>>> Kindly let me know if I am missing something.
>>>
>>>
>>> On Wed, Apr 1, 2020 at 6:26 PM Prabhat Sahu <
>>> prabhat.s...@enterprisedb.com> wrote:
>>>
>>>> Hi Wenjing,
>>>> I hope we need to change the below error message.
>>>>
>>>> postgres=# create global temporary table gtt(c1 int) on commit preserve
>>>> rows;
>>>> CREATE TABLE
>>>>
>>>> postgres=# create materialized view mvw as select * from gtt;
>>>> ERROR: materialized views must not use global temporary tables* or
>>>> views*
>>>>
>>>> Anyways we are not allowed to create a "global temporary view",
>>>> so the above ERROR message should change(i.e. *" or view"* need to be
>>>> removed from the error message) something like:
>>>> *"ERROR: materialized views must not use global temporary tables"*
>>>>
>>>> --
>>>>
>>>> With Regards,
>>>> Prabhat Kumar Sahu
>>>> EnterpriseDB: http://www.enterprisedb.com
>>>>
>>>
>>>
>>> --
>>>
>>> With Regards,
>>> Prabhat Kumar Sahu
>>> EnterpriseDB: http://www.enterprisedb.com
>>>
>>>
>>>
>>
>
> --
>
> With Regards,
> Prabhat Kumar Sahu
> EnterpriseDB: http://www.enterprisedb.com
>


Re: [Proposal] Global temporary tables

2020-04-22 Thread Prabhat Sahu
>> postgres=# drop table gtt1;
>> ERROR:  can not drop relation gtt1 when other backend attached this
>> global temp table
>>
>>
>> *4. For a completed transaction on GTT with(on commit preserve rows) with
>> data in any session, we will not be able to DROP from any session(not even
>> from the session in which GTT is created)*
>>
>> *Case1:*
>> create global temporary table gtt2 (c1 integer) on commit preserve rows;
>> insert into gtt2 values(100);
>> drop table gtt2;
>>
>> SQL> drop table gtt2;
>> drop table gtt2
>>   *
>> ERROR at line 1:
>> ORA-14452: attempt to create, alter or drop an index on temporary table
>> already in use
>>
>> -- *Issue 3:* But, we are able to drop the GTT(having data) which we
>> have created in the same session.
>>
>> postgres=# drop table gtt2;
>> DROP TABLE
>>
>>
>>
>>
>> *Case2: GTT with(on commit preserve rows) having data in both session1
>> and session2Session1:*create global temporary table gtt2 (c1 integer) on
>> commit preserve rows;
>> insert into gtt2 values(100);
>>
>>
>> *Session2:*insert into gtt2 values(200);
>>
>> -- If we try to drop the table from any session we should get an error,
>> it is working fine.
>> drop table gtt2;
>>
>> SQL> drop table gtt2;
>> drop table gtt2
>>   *
>> ERROR at line 1:
>> ORA-14452: attempt to create, alter or drop an index on temporary table
>> already in use
>>
>> postgres=# drop table gtt2 ;
>> ERROR:  can not drop relation gtt2 when other backend attached this
>> global temp table
>>
>>
>> -- To drop the table gtt2 from any session1/session2, we need to truncate
>> the table data first from all the session(session1, session2) which is
>> having data.
>> *Session1:*
>> truncate table gtt2;
>> -- Session2:
>> truncate table gtt2;
>>
>> *Session 2:*
>> SQL> drop table gtt2;
>>
>> Table dropped.
>>
>> -- *Issue 4:* But we are not able to drop the GTT, even after TRUNCATE
>> the table in all the sessions.
>> -- truncate from all sessions where GTT have data.
>> postgres=# truncate gtt2 ;
>> TRUNCATE TABLE
>>
>> -- *try to DROP GTT still, we are getting error.*
>>
>> postgres=# drop table gtt2 ;
>> ERROR:  can not drop relation gtt2 when other backend attached this
>> global temp table
>>
>>
>> To drop the GTT from any session, we need to exit from all other sessions.
>> postgres=# drop table gtt2 ;
>> DROP TABLE
>>
>> Kindly let me know if I am missing something.
>>
>>
>> On Wed, Apr 1, 2020 at 6:26 PM Prabhat Sahu <
>> prabhat.s...@enterprisedb.com> wrote:
>>
>>> Hi Wenjing,
>>> I hope we need to change the below error message.
>>>
>>> postgres=# create global temporary table gtt(c1 int) on commit preserve
>>> rows;
>>> CREATE TABLE
>>>
>>> postgres=# create materialized view mvw as select * from gtt;
>>> ERROR: materialized views must not use global temporary tables* or
>>> views*
>>>
>>> Anyways we are not allowed to create a "global temporary view",
>>> so the above ERROR message should change(i.e. *" or view"* need to be
>>> removed from the error message) something like:
>>> *"ERROR: materialized views must not use global temporary tables"*
>>>
>>> --
>>>
>>> With Regards,
>>> Prabhat Kumar Sahu
>>> EnterpriseDB: http://www.enterprisedb.com
>>>
>>
>>
>> --
>>
>> With Regards,
>> Prabhat Kumar Sahu
>> EnterpriseDB: http://www.enterprisedb.com
>>
>>
>>
>

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-04-20 Thread tushar

On 4/20/20 2:59 PM, 曾文旌 wrote:

Please check my new patch.


Thanks Wenjing. Please refer to the below scenario, which produces this error:
ERROR:  could not read block 0 in file "base/16466/t4_16472": read only
0 of 8192 bytes


Steps to reproduce:

1. Connect to a psql terminal and create a table:
   create global temp table t2 (n int primary key) on commit delete rows;
2. Exit the psql terminal and execute: ./clusterdb -t t2 -d postgres -v
3. Reconnect with psql and execute the SQL statements below one by one:
   cluster verbose t2 using t2_pkey;
   cluster verbose t2;
   alter table t2 add column i int;
   cluster verbose t2;
   cluster verbose t2 using t2_pkey;
   create unique index ind on t2(n);
   create unique index concurrently ind1 on t2(n);
   select * from t2;

The last SQL statement throws this error:
ERROR:  could not read block 0 in file "base/16466/t4_16472": read only 0 of 8192 bytes
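For convenience, the sequence above can be collected into a replayable file. This is only a preparation sketch (actually executing it requires a server built with the GTT patch, and the connection options are assumptions):

```shell
# Write the reproduction SQL to a file; executing it needs a server built
# with the GTT patch, so this script only prepares the commands.
cat > repro.sql <<'SQL'
cluster verbose t2 using t2_pkey;
cluster verbose t2;
alter table t2 add column i int;
cluster verbose t2;
cluster verbose t2 using t2_pkey;
create unique index ind on t2(n);
create unique index concurrently ind1 on t2(n);
select * from t2;
SQL
echo "replay: psql -d postgres -c 'create global temp table t2 (n int primary key) on commit delete rows;'"
echo "        clusterdb -t t2 -d postgres -v && psql -d postgres -f repro.sql"
```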


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-04-20 Thread Prabhat Sahu
> I think this is expected, and user test_gtt does not have permission to
> vacuum the system table.
> This has nothing to do with GTT.
>
Hi Wenjing, thanks for the explanation.
Thanks for the new patch. I have verified the crash; it is now resolved.

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-04-20 Thread 曾文旌


> On April 17, 2020 at 7:26 PM, Prabhat Sahu wrote:
> 
> On Fri, Apr 17, 2020 at 2:44 PM 曾文旌 wrote:
> 
> I improved the logic of the warning message so that when the gap between
> the relfrozenxid of the GTT is small,
> it no longer raises the warning.
> 
> Hi Wenjing,
> Thanks for the patch (v26). I have verified the previously reported issues, and
> they are working fine now.
> Please check the below scenario: VACUUM from a non-superuser.
> 
> -- Create user "test_gtt", connect it , create gtt, VACUUM gtt and VACUUM / 
> VACUUM FULL
> postgres=# CREATE USER test_gtt;
> CREATE ROLE
> postgres=# \c postgres test_gtt
> You are now connected to database "postgres" as user "test_gtt".
> postgres=> CREATE GLOBAL TEMPORARY TABLE gtt1(c1 int);
> CREATE TABLE
> 
> -- VACUUM gtt is working fine, whereas we are getting huge WARNING for VACUUM 
> / VACUUM FULL as below:
> postgres=> VACUUM gtt1 ;
> VACUUM
> postgres=> VACUUM;
> WARNING:  skipping "pg_statistic" --- only superuser or database owner can 
> vacuum it
> WARNING:  skipping "pg_type" --- only superuser or database owner can vacuum 
> it
> WARNING:  skipping "pg_toast_2600" --- only table or database owner can 
> vacuum it
> WARNING:  skipping "pg_toast_2600_index" --- only table or database owner can 
> vacuum it
> 
> ... ... 
> ... ... 
> 
> WARNING:  skipping "_pg_foreign_tables" --- only table or database owner can 
> vacuum it
> WARNING:  skipping "foreign_table_options" --- only table or database owner 
> can vacuum it
> WARNING:  skipping "user_mapping_options" --- only table or database owner 
> can vacuum it
> WARNING:  skipping "user_mappings" --- only table or database owner can 
> vacuum it
> VACUUM 
I think this is expected: user test_gtt does not have permission to vacuum
the system tables.
This has nothing to do with GTT.
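To illustrate the point that the warnings come from ordinary vacuum permission checks rather than anything GTT-specific, here is a toy Python model of that rule. It is an illustrative sketch only (names and the single warning text are simplifications, not PostgreSQL source):

```python
# Toy model of the permission rule behind the "skipping ..." warnings:
# a database-wide VACUUM visits every relation and skips those the current
# role may not vacuum (roughly: superusers, the table owner, and the
# database owner may vacuum a table).

def vacuum_all(relations, current_user, db_owner, superusers):
    """Mimic VACUUM with no target; return (vacuumed, warnings)."""
    vacuumed, warnings = [], []
    for rel, owner in relations:
        if current_user in superusers or current_user in (owner, db_owner):
            vacuumed.append(rel)
        else:
            warnings.append(
                f'skipping "{rel}" --- only table or database owner can vacuum it'
            )
    return vacuumed, warnings

# test_gtt owns only its GTT, so every catalog table is skipped with a warning.
rels = [("gtt1", "test_gtt"), ("pg_statistic", "postgres"), ("pg_type", "postgres")]
done, warns = vacuum_all(rels, current_user="test_gtt",
                         db_owner="postgres", superusers={"postgres"})
print(done)        # ['gtt1']
print(len(warns))  # 2
```

The same skip-and-warn behaviour would appear for any non-superuser running a plain VACUUM, with or without global temporary tables in the database.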


Wenjing

> 
> -- 
> With Regards,
> Prabhat Kumar Sahu
> EnterpriseDB: http://www.enterprisedb.com 





Re: [Proposal] Global temporary tables

2020-04-17 Thread Prabhat Sahu
Hi Wenjing,

Please check the below scenario: we are getting a server crash when "ALTER
TABLE" adds a column whose default value comes from a sequence:

-- Create a GTT, exit and re-connect to the psql prompt, create a sequence, then
alter the table to add a column that uses the sequence.
postgres=# create global temporary table gtt1 (c1 int);
CREATE TABLE
postgres=# \q
[edb@localhost bin]$ ./psql postgres
psql (13devel)
Type "help" for help.

postgres=# create sequence seq;
CREATE SEQUENCE
postgres=# alter table gtt1 add c2 int default nextval('seq');
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!?> \q


-- Stack trace:
[edb@localhost bin]$ gdb -q -c data/core.70358 postgres
Reading symbols from
/home/edb/PG/PGsrcNew/postgresql/inst/bin/postgres...done.
[New LWP 70358]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `postgres: edb postgres [local] ALTER TABLE '.
Program terminated with signal 6, Aborted.
#0  0x7f150223b337 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install
glibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
krb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64
libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-14.1.el7.x86_64
openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64
zlib-1.2.7-18.el7.x86_64
(gdb) bt
#0  0x7f150223b337 in raise () from /lib64/libc.so.6
#1  0x7f150223ca28 in abort () from /lib64/libc.so.6
#2  0x00ab2cdd in ExceptionalCondition (conditionName=0xc03ab8
"OidIsValid(relfilenode1) && OidIsValid(relfilenode2)",
errorType=0xc0371f "FailedAssertion", fileName=0xc03492 "cluster.c",
lineNumber=1637) at assert.c:67
#3  0x0065e200 in gtt_swap_relation_files (r1=16384, r2=16390,
target_is_pg_class=false, swap_toast_by_content=false, is_internal=true,
frozenXid=490, cutoffMulti=1, mapped_tables=0x7ffd841f7ee0) at
cluster.c:1637
#4  0x0065dcd9 in finish_heap_swap (OIDOldHeap=16384,
OIDNewHeap=16390, is_system_catalog=false, swap_toast_by_content=false,
check_constraints=true, is_internal=true, frozenXid=490, cutoffMulti=1,
newrelpersistence=103 'g') at cluster.c:1395
#5  0x006bca18 in ATRewriteTables (parsetree=0x1deab80,
wqueue=0x7ffd841f80c8, lockmode=8, context=0x7ffd841f8260) at
tablecmds.c:4991
#6  0x006ba890 in ATController (parsetree=0x1deab80,
rel=0x7f150378f330, cmds=0x1deab28, recurse=true, lockmode=8,
context=0x7ffd841f8260)
at tablecmds.c:3991
#7  0x006ba4f8 in AlterTable (stmt=0x1deab80, lockmode=8,
context=0x7ffd841f8260) at tablecmds.c:3644
#8  0x0093b62a in ProcessUtilitySlow (pstate=0x1e0d6d0,
pstmt=0x1deac48,
queryString=0x1de9b30 "alter table gtt1 add c2 int default
nextval('seq');", context=PROCESS_UTILITY_TOPLEVEL, params=0x0,
queryEnv=0x0, dest=0x1deaf28,
qc=0x7ffd841f8830) at utility.c:1267
#9  0x0093b141 in standard_ProcessUtility (pstmt=0x1deac48,
queryString=0x1de9b30 "alter table gtt1 add c2 int default
nextval('seq');",
context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,
dest=0x1deaf28, qc=0x7ffd841f8830) at utility.c:1067
#10 0x0093a22b in ProcessUtility (pstmt=0x1deac48,
queryString=0x1de9b30 "alter table gtt1 add c2 int default
nextval('seq');",
context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,
dest=0x1deaf28, qc=0x7ffd841f8830) at utility.c:522
#11 0x0093909d in PortalRunUtility (portal=0x1e4fba0,
pstmt=0x1deac48, isTopLevel=true, setHoldSnapshot=false, dest=0x1deaf28,
qc=0x7ffd841f8830)
at pquery.c:1157
#12 0x009392b3 in PortalRunMulti (portal=0x1e4fba0,
isTopLevel=true, setHoldSnapshot=false, dest=0x1deaf28, altdest=0x1deaf28,
qc=0x7ffd841f8830)
at pquery.c:1303
#13 0x009387d1 in PortalRun (portal=0x1e4fba0,
count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x1deaf28,
altdest=0x1deaf28,
qc=0x7ffd841f8830) at pquery.c:779
#14 0x0093298b in exec_simple_query (query_string=0x1de9b30 "alter
table gtt1 add c2 int default nextval('seq');") at postgres.c:1239
#15 0x00936997 in PostgresMain (argc=1, argv=0x1e13b80,
dbname=0x1e13a78 "postgres", username=0x1e13a58 "edb") at postgres.c:4315
#16 0x008868b3 in BackendRun (port=0x1e0bb50) at postmaster.c:4510
#17 0x008860a8 in BackendStartup (port=0x1e0bb50) at
postmaster.c:4202
#18 0x00882626 in ServerLoop () at postmaster.c:1727
#19 0x00881efd in PostmasterMain (argc=3, argv=0x1de4460) at
postmaster.c:1400
#20 0x00789288 in main (argc=3, argv=0x1de4460) at main.c:210
(gdb)

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-04-17 Thread Prabhat Sahu
On Fri, Apr 17, 2020 at 2:44 PM 曾文旌  wrote:

>
> I improved the logic of the warning message so that when the gap between
> the relfrozenxid of the GTT and the rest of the database is small,
> the warning is no longer raised.
>

Hi Wenjing,
Thanks for the patch (v26). I have verified the previously reported issues,
and they are working fine now.
Please check the below scenario: VACUUM from a non-superuser.

-- Create user "test_gtt", connect as it, create a GTT, then run VACUUM gtt
-- and plain VACUUM / VACUUM FULL
postgres=# CREATE USER test_gtt;
CREATE ROLE
postgres=# \c postgres test_gtt
You are now connected to database "postgres" as user "test_gtt".
postgres=> CREATE GLOBAL TEMPORARY TABLE gtt1(c1 int);
CREATE TABLE

-- VACUUM gtt works fine, whereas we get a flood of WARNINGs for
-- plain VACUUM / VACUUM FULL, as below:
postgres=> VACUUM gtt1 ;
VACUUM
postgres=> VACUUM;
WARNING:  skipping "pg_statistic" --- only superuser or database owner can
vacuum it
WARNING:  skipping "pg_type" --- only superuser or database owner can
vacuum it
WARNING:  skipping "pg_toast_2600" --- only table or database owner can
vacuum it
WARNING:  skipping "pg_toast_2600_index" --- only table or database owner
can vacuum it

... ...
... ...

WARNING:  skipping "_pg_foreign_tables" --- only table or database owner
can vacuum it
WARNING:  skipping "foreign_table_options" --- only table or database owner
can vacuum it
WARNING:  skipping "user_mapping_options" --- only table or database owner
can vacuum it
WARNING:  skipping "user_mappings" --- only table or database owner can
vacuum it
VACUUM

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-04-13 Thread tushar

On 4/9/20 6:26 PM, 曾文旌 wrote:

On 4/7/20 2:27 PM, 曾文旌 wrote:

Vacuum full GTT, cluster GTT is already supported in 
global_temporary_table_v24-pg13.patch.

Here, it is skipping the GTT:

postgres=# \c foo
You are now connected to database "foo" as user "tushar".
foo=# create global temporary table  g123( c1 int) ;
CREATE TABLE
foo=# \q
[tushar@localhost bin]$ ./vacuumdb --full  foo
vacuumdb: vacuuming database "foo"
WARNING:  skipping vacuum global temp table "g123" because storage is not 
initialized for current session

The message was inappropriate at some point, so I removed it.


Thanks Wenjing. Please check whether the below behavior is correct.

X terminal -

postgres=# create global temp table foo1(n int);
CREATE TABLE
postgres=# insert into foo1 values (generate_series(1,10));
INSERT 0 10
postgres=# vacuum full ;
VACUUM

Y Terminal -

[tushar@localhost bin]$ ./vacuumdb -f  postgres
vacuumdb: vacuuming database "postgres"
WARNING:  global temp table oldest relfrozenxid 3276 is the oldest in 
the entire db

DETAIL:  The oldest relfrozenxid in pg_class is 3277
HINT:  If they differ greatly, please consider cleaning up the data in 
global temp table.
WARNING:  global temp table oldest relfrozenxid 3276 is the oldest in 
the entire db

DETAIL:  The oldest relfrozenxid in pg_class is 3277
HINT:  If they differ greatly, please consider cleaning up the data in 
global temp table.



--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



Re: [Proposal] Global temporary tables

2020-04-13 Thread tushar

On 4/13/20 1:57 PM, 曾文旌 wrote:

[tushar@localhost bin]$ tail -20   pg_upgrade_dump_13592.log
pg_restore: error: could not execute query: ERROR:  pg_type array OID value not 
set when in binary upgrade mode

I found that regular tables also have this problem. I am not very familiar with
this part, so I started another thread to ask about it.


ohh. Thanks.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



Re: [Proposal] Global temporary tables

2020-04-13 Thread 曾文旌


> On Apr 8, 2020, at 6:34 PM, tushar wrote:
> 
> On 4/7/20 2:27 PM, 曾文旌 wrote:
>> Vacuum full GTT, cluster GTT is already supported in 
>> global_temporary_table_v24-pg13.patch.
> Please refer this below scenario , where pg_upgrade is failing
> 1)Server is up and running (./pg_ctl -D data status)
> 2)Stop the server ( ./pg_ctl -D data stop)
> 3)Connect to server using single user mode ( ./postgres --single -D data 
> postgres) and create a global temp table
> [tushar@localhost bin]$ ./postgres --single -D data1233 postgres
> 
> PostgreSQL stand-alone backend 13devel
> backend> create global temp table t(n int);
> 
> --Press Ctl+D to exit
> 
> 4)Perform initdb ( ./initdb -D data123)
> 5.Run pg_upgrade
> [tushar@localhost bin]$ ./pg_upgrade -d data -D data123 -b . -B .
> --
> --
> --
> Restoring database schemas in the new cluster
>   postgres
> *failure*
> Consult the last few lines of "pg_upgrade_dump_13592.log" for
> the probable cause of the failure.
> Failure, exiting
> 
> log file content  -
> 
> [tushar@localhost bin]$ tail -20   pg_upgrade_dump_13592.log
> pg_restore: error: could not execute query: ERROR:  pg_type array OID value 
> not set when in binary upgrade mode
I found that regular tables also have this problem. I am not very familiar with
this part, so I started another thread to ask about it.

> Command was:
> -- For binary upgrade, must preserve pg_type oid
> SELECT 
> pg_catalog.binary_upgrade_set_next_pg_type_oid('13594'::pg_catalog.oid);
> 
> 
> -- For binary upgrade, must preserve pg_class oids
> SELECT 
> pg_catalog.binary_upgrade_set_next_heap_pg_class_oid('13593'::pg_catalog.oid);
> 
> CREATE GLOBAL TEMPORARY TABLE "public"."t" (
> "n" integer
> )
> WITH ("on_commit_delete_rows"='false');
> 
> -- For binary upgrade, set heap's relfrozenxid and relminmxid
> UPDATE pg_catalog.pg_class
> SET relfrozenxid = '0', relminmxid = '0'
> WHERE oid = '"public"."t"'::pg_catalog.regclass;
> 
> -- 
> regards,tushar
> EnterpriseDB  https://www.enterprisedb.com/
> The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-04-09 Thread Erik Rijkers

On 2020-04-09 15:28, 曾文旌 wrote:

[global_temporary_table_v25-pg13.patch]


Part of the problem is that some variables are only used by assert
statements; I fixed those warnings.

Please provide your configure parameters, and I will verify it again.



Hi,

Just now I compiled the newer version of your patch (v25), and the
warnings/notes that I saw earlier are now gone. Thank you.



In case you still want it here is the configure:

-- [2020.04.09 15:06:45 global_temp_tables/1] ./configure  
--prefix=/home/aardvark/pg_stuff/pg_installations/pgsql.global_temp_tables 
--bindir=/home/aardvark/pg_stuff/pg_installations/pgsql.global_temp_tables/bin.fast 
--libdir=/home/aardvark/pg_stuff/pg_installations/pgsql.global_temp_tables/lib.fast 
--with-pgport=6975 --quiet --enable-depend --with-openssl --with-perl 
--with-libxml --with-libxslt --with-zlib  --enable-tap-tests  
--with-extra-version=_0409


-- [2020.04.09 15:07:13 global_temp_tables/1] make core: make --quiet -j 
4

partbounds.c: In function ‘partition_bounds_merge’:
partbounds.c:1024:21: warning: unused variable ‘inner_binfo’ 
[-Wunused-variable]

 1024 |  PartitionBoundInfo inner_binfo = inner_rel->boundinfo;
  | ^~~
All of PostgreSQL successfully made. Ready to install.


Thanks,

Erik Rijkers









Re: [Proposal] Global temporary tables

2020-04-09 Thread 曾文旌


> On Apr 9, 2020, at 7:46 PM, tushar wrote:
> 
> On 4/7/20 2:27 PM, 曾文旌 wrote:
>> Vacuum full GTT, cluster GTT is already supported in 
>> global_temporary_table_v24-pg13.patch.
> 
> Hi Wenjing,
> 
> Please refer this scenario , where reindex   message is not coming next time 
> ( after reconnecting to database) for GTT
> 
> A)
> --normal table
> postgres=# create table nt(n int primary key);
> CREATE TABLE
> --GTT table
> postgres=# create global temp table gtt(n int primary key);
> CREATE TABLE
> B)
> --Reindex  , normal table
> postgres=# REINDEX (VERBOSE) TABLE  nt;
> INFO:  index "nt_pkey" was reindexed
> DETAIL:  CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
> REINDEX
> --reindex GTT table
> postgres=# REINDEX (VERBOSE) TABLE  gtt;
> INFO:  index "gtt_pkey" was reindexed
> DETAIL:  CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
> REINDEX
> C)
> --Reconnect  to database
> postgres=# \c
> You are now connected to database "postgres" as user "tushar".
> D) again perform step B)
> 
> postgres=# REINDEX (VERBOSE) TABLE  nt;
> INFO:  index "nt_pkey" was reindexed
> DETAIL:  CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
> REINDEX
> postgres=# REINDEX (VERBOSE) TABLE  gtt;   <-- message  not coming
> REINDEX
Yes. Since the newly established connection has not yet initialized the GTT's
storage file in its session, there is no INFO message.

> 
> -- 
> regards,tushar
> EnterpriseDB  https://www.enterprisedb.com/
> The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-04-09 Thread tushar

On 4/7/20 2:27 PM, 曾文旌 wrote:
Vacuum full GTT, cluster GTT is already 
supported in global_temporary_table_v24-pg13.patch.


Hi Wenjing,

Please refer this scenario , where reindex   message is not coming next 
time ( after reconnecting to database) for GTT


A)
--normal table
postgres=# create table nt(n int primary key);
CREATE TABLE
--GTT table
postgres=# create global temp table gtt(n int primary key);
CREATE TABLE
B)
--Reindex  , normal table
postgres=# REINDEX (VERBOSE) TABLE  nt;
INFO:  index "nt_pkey" was reindexed
DETAIL:  CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
REINDEX
--reindex GTT table
postgres=# REINDEX (VERBOSE) TABLE  gtt;
INFO:  index "gtt_pkey" was reindexed
DETAIL:  CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
REINDEX
C)
--Reconnect  to database
postgres=# \c
You are now connected to database "postgres" as user "tushar".
D) again perform step B)

postgres=# REINDEX (VERBOSE) TABLE  nt;
INFO:  index "nt_pkey" was reindexed
DETAIL:  CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
REINDEX
postgres=# REINDEX (VERBOSE) TABLE  gtt;   <-- message  not coming
REINDEX

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-04-08 Thread Prabhat Sahu
On Wed, Apr 8, 2020 at 1:48 PM 曾文旌  wrote:

>
>
> On Apr 7, 2020, at 6:22 PM, Prabhat Sahu wrote:
>
> Thanks for review.
>> This parameter should accept all the spellings of the bool type, like
>> the autovacuum_enabled parameter does.
>> So I fixed it in global_temporary_table_v24-pg13.patch.
>>
>
> Thank you Wenjing for the new patch with the fix and the "VACUUM FULL GTT"
> support.
> I have verified the above issue; it is now resolved.
>
> Please check the below findings on VACUUM FULL.
>
> postgres=# create global temporary table  gtt(c1 int) on commit preserve
> rows;
> CREATE TABLE
> postgres=# vacuum FULL ;
> WARNING:  global temp table oldest FrozenXid is far in the past
> HINT:  please truncate them or kill those sessions that use them.
> VACUUM
>
>
> This is expected.
> It means that the GTT's frozen XID is the oldest in the entire db, and the
> DBA should vacuum the GTT if they want to advance the database's datfrozenxid.
> They can also use the function pg_list_gtt_relfrozenxids() to check which
> sessions have "too old" data, and truncate the tables or kill the sessions.
>

Again, following the given HINT ("please truncate them or kill those
sessions that use them"):
there is only a single session, and even after TRUNCATE,
VACUUM FULL still behaves the same, as below.

postgres=# truncate gtt ;
TRUNCATE TABLE
postgres=# vacuum full;
WARNING: global temp table oldest FrozenXid is far in the past
HINT: please truncate them or kill those sessions that use them.
VACUUM

I have one more finding, related to "CLUSTER table USING index". Please
check the below issue.
postgres=# create global temporary table gtt(c1 int) on commit preserve
rows;
CREATE TABLE
postgres=# create index idx1 ON gtt (c1);
CREATE INDEX

-- exit and re-connect the psql prompt
postgres=# \q
[edb@localhost bin]$ ./psql postgres
psql (13devel)
Type "help" for help.

postgres=# cluster gtt using idx1;
WARNING:  relcache reference leak: relation "gtt" not closed
CLUSTER

-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [Proposal] Global temporary tables

2020-04-08 Thread tushar

On 4/7/20 2:27 PM, 曾文旌 wrote:
Vacuum full GTT, cluster GTT is already 
supported in global_temporary_table_v24-pg13.patch.

Please refer this below scenario , where pg_upgrade is failing
1)Server is up and running (./pg_ctl -D data status)
2)Stop the server ( ./pg_ctl -D data stop)
3)Connect to server using single user mode ( ./postgres --single -D data 
postgres) and create a global temp table

[tushar@localhost bin]$ ./postgres --single -D data1233 postgres

PostgreSQL stand-alone backend 13devel
backend> create global temp table t(n int);

--Press Ctl+D to exit

4)Perform initdb ( ./initdb -D data123)
5.Run pg_upgrade
[tushar@localhost bin]$ ./pg_upgrade -d data -D data123 -b . -B .
--
--
--
Restoring database schemas in the new cluster
  postgres
*failure*
Consult the last few lines of "pg_upgrade_dump_13592.log" for
the probable cause of the failure.
Failure, exiting

log file content  -

[tushar@localhost bin]$ tail -20   pg_upgrade_dump_13592.log
pg_restore: error: could not execute query: ERROR:  pg_type array OID 
value not set when in binary upgrade mode

Command was:
-- For binary upgrade, must preserve pg_type oid
SELECT 
pg_catalog.binary_upgrade_set_next_pg_type_oid('13594'::pg_catalog.oid);



-- For binary upgrade, must preserve pg_class oids
SELECT 
pg_catalog.binary_upgrade_set_next_heap_pg_class_oid('13593'::pg_catalog.oid);


CREATE GLOBAL TEMPORARY TABLE "public"."t" (
    "n" integer
)
WITH ("on_commit_delete_rows"='false');

-- For binary upgrade, set heap's relfrozenxid and relminmxid
UPDATE pg_catalog.pg_class
SET relfrozenxid = '0', relminmxid = '0'
WHERE oid = '"public"."t"'::pg_catalog.regclass;

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-04-08 Thread 曾文旌


> On Apr 7, 2020, at 6:22 PM, Prabhat Sahu wrote:
> 
> Thanks for review.
> This parameter should accept all the spellings of the bool type, like
> the autovacuum_enabled parameter does.
> So I fixed it in global_temporary_table_v24-pg13.patch.
> 
> Thank you Wenjing for the new patch with the fix and the "VACUUM FULL GTT" 
> support.
> I have verified the above issue; it is now resolved.
> 
> Please check the below findings on VACUUM FULL.
> 
> postgres=# create global temporary table  gtt(c1 int) on commit preserve rows;
> CREATE TABLE
> postgres=# vacuum FULL ;
> WARNING:  global temp table oldest FrozenXid is far in the past
> HINT:  please truncate them or kill those sessions that use them.
> VACUUM

This is expected.
It means that the GTT's frozen XID is the oldest in the entire db, and the DBA
should vacuum the GTT if they want to advance the database's datfrozenxid.
They can also use the function pg_list_gtt_relfrozenxids() to check which
sessions have "too old" data, and truncate the tables or kill the sessions.



> 
> -- 
> With Regards,
> Prabhat Kumar Sahu
> EnterpriseDB: http://www.enterprisedb.com 




