Re: [GENERAL] Question -- Session Operations - Feasibility Of Proposed Synchronization Method?

2016-01-08 Thread Steve Petrie, P.Eng.

Andy,

Thanks very much for your response.

No worries about raining on my parade. Your feedback is exactly what I'm 
looking for -- praise is nice, but I really do prefer to have the 
experts throwing rocks at my naive ideas :)


Please see my comments embedded below.

Steve

- Original Message - 
From: "Andy Colson" <a...@squeakycode.net>
To: "Steve Petrie, P.Eng." <apet...@aspetrie.net>; 
<pgsql-general@postgresql.org>

Sent: Thursday, January 07, 2016 10:17 PM
Subject: Re: [GENERAL] Question -- Session Operations - Feasibility Of 
Proposed Synchronization Method?




On 01/07/2016 06:30 PM, Steve Petrie, P.Eng. wrote:
Thanks to forum members for the four helpful replies, to my earlier 
message that initiated this thread.


The replies expressed concerns about the feasibility of my proposal 
to use postgres tables to store short-lived context data, for dialog 
continuity during website app transient sessions with visitor 
browsers over modeless HTTP connections.


Hope the four emails I sent in response (5 January 2016), went some 
way to satisfying the concerns expressed.


Here is a list of the issues discussed, in the dialog mentioned 
above:


1. "Session" defined;
2. Avoid row DELETEs;
3. Periodically TRUNCATE each table in a pool of session context 
tables;
4. Embed a session ID key parameter in an HTML "hidden" field 
(optional);
5. Use sequence generators as rapid global iterators controlling 
access to session context tables;






Thanks to forum members for taking the time to read my email.



This feels hugely overcomplicated.


I agree. It is complicated.

But I believe it's the minimum functionality required to both: 1. avoid 
using the DELETE / AUTOVACUUM / VACUUM approach to recycling "dead" 
session context row image storage space back to the filesystem, and 
2. enable use of the much faster TRUNCATE command on an entire "dead" 
session context table.


I also didn't read most of the last thread, so forgive me if you've 
answered this already:  How many website requests a second (that 
actually need to touch session data) are you expecting?  How much 
space is the session data going to take?  (like, 5 Gig a day?)




Every incoming request to the website for non-static content needs to 
touch (INSERT or SELECT + UPDATE) the corresponding session context row. 
That row is where the transient continuity context for the app session 
dialog gets stored, between request <i> and request <i+1> coming 
in from the browser driving that app session.


So session data will be touched by every request that launches an app 
php function, to service the next step in the session dialog with that 
visitor.


But an individual session isn't going to live all that long, from the 
time that its context row gets INSERTed until the time that the session 
"dies" and its context row gets UPDATEd as "dead" in its "status" 
column (the row is never explicitly DELETEd; the entire table in which 
it resides gets TRUNCATEd).
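
For illustration, a minimal SQL sketch of that row lifecycle (the table 
name, columns and example values below are illustrative, not taken from 
the actual app):

  -- Sketch of one session context table in the pool; names are illustrative.
  CREATE TABLE session_ctx_01 (
      session_row_id   integer      PRIMARY KEY,           -- serial number within this table
      session_code     char(10)     NOT NULL,              -- random hex code carried in the session ID key
      started_at       timestamptz  NOT NULL DEFAULT now(),
      last_request_at  timestamptz  NOT NULL DEFAULT now(),
      requests_used    integer      NOT NULL DEFAULT 0,
      status           text         NOT NULL DEFAULT 'live',  -- 'live' or 'dead'
      context          text                                -- serialized dialog state
  );

  -- Session "birth": one INSERT.
  INSERT INTO session_ctx_01 (session_row_id, session_code, context)
  VALUES (2769, 'B37000AC2C', '...');

  -- Each request <i>: SELECT the row, then UPDATE it for request <i+1>.
  SELECT context, status FROM session_ctx_01 WHERE session_row_id = 2769;
  UPDATE session_ctx_01
     SET context = '...', requests_used = requests_used + 1, last_request_at = now()
   WHERE session_row_id = 2769;

  -- Session "death": mark the row dead; it is never DELETEd, the whole table is later TRUNCATEd.
  UPDATE session_ctx_01 SET status = 'dead' WHERE session_row_id = 2769;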


If the website manages to register e.g. 100,000 subscribers in its first 
year, it will be a runaway success. I'm not expecting more than a few 
percent of subscribers to visit on any given day. So if the website 
proves to be a massive winner, there will be maybe 1000 to 5000 
subscriber sessions / day, each session being initiated, conducted and 
then terminated over the time span of a few minutes (rarely more than 30 
minutes).


But I do fear "success disaster" if suddenly, the website (which will 
promote a politically controversial technology concept for preventing 
freeway traffic congestion) gets media coverage in its initial market 
area (the greater Toronto area in the province of Ontario, Canada), and 
among a million+ people watching the 6-o'clock Toronto news, a few 
thousand jump on their smart-phone browsers to hit the website, looking 
to subscribe or send a contact message via web page form.


So I'm looking to build in capacity to handle brief intense bursts of 
session traffic workload. Not anywhere near Google-scale, obviously. But 
maybe to handle a 10-minute burst driving a maximum rate of e.g. 1000 
requests / second to the database server (being mostly a combination of 
an INSERT for each new session row, followed by a few SELECT + UPDATEs 
to that row, as the session proceeds through its brief "life" towards 
its inevitable "death").


Actual access activity to the longer-lived data tables (1. subscriber 
membership table, 2. contact message table) will be orders of magnitude 
lower than activity in the session context tables.


Each visitor session is allowed a "quota" of requests (e.g. 25) so the 
visitor gets 25 chances to e.g. click a "submit" button. There is also a 
session timeout "quota" (e.g. 10 minutes) that will kill the session if 
the visitor waits too long between requests.
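
For illustration, a sketch of how both quotas might be enforced in the 
per-request UPDATE, assuming the illustrative columns from the sketch 
above:

  -- Sketch only: count the request and enforce both quotas in one statement.
  UPDATE session_ctx_01
     SET requests_used = requests_used + 1,
         last_request_at = now()
   WHERE session_row_id = 2769
     AND status = 'live'
     AND requests_used < 25                               -- request quota
     AND last_request_at > now() - interval '10 minutes'  -- inter-request timeout
  RETURNING context;
  -- Zero rows back means the session is over quota, timed out, or already dead,
  -- and the app then marks it 'dead'.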


So the session context tables 

Re: [GENERAL] Question -- Session Operations - Feasibility Of Proposed Synchronization Method?

2016-01-07 Thread Andy Colson

On 01/07/2016 06:30 PM, Steve Petrie, P.Eng. wrote:

Thanks to forum members for the four helpful replies, to my earlier message 
that initiated this thread.

The replies expressed concerns about the feasibility of my proposal to use 
postgres tables to store short-lived context data, for dialog continuity during 
website app transient sessions with visitor browsers over modeless HTTP 
connections.

Hope the four emails I sent in response (5 January 2016), went some way to 
satisfying the concerns expressed.

Here is a list of the issues discussed, in the dialog mentioned above:

1. "Session" defined;
2. Avoid row DELETEs;
3. Periodically TRUNCATE each table in a pool of session context tables;
4. Embed a session ID key parameter in an HTML "hidden" field (optional);
5. Use sequence generators as rapid global iterators controlling access to 
session context tables;





Thanks to forum members for taking the time to read my email.



This feels hugely overcomplicated.  I also didn't read most of the last thread, 
so forgive me if you've answered this already:  How many website requests a 
second (that actually need to touch session data) are you expecting?  How much 
space is the session data going to take?  (like, 5 Gig a day?)

If it's a huge number, you should put effort into growing horizontally, not all 
of this stuff.
If it's a small number, you'll spend more time fixing all the broken things than 
it's worth.
Have you benchmarked this?  In my mind, complicated == slow.

Sorry if I'm raining on your parade, it looks like you have really put a lot of 
work into this.

Have you considered that saving session data to disk is faster than saving to 
the db?  A good reverse web proxy can stick a session to the same backend.  1 
web proxy up front, 5 web servers behind it.  I'd bet it's way faster.

-Andy





Re: [GENERAL] Question -- Session Operations - Feasibility Of Proposed Synchronization Method?

2016-01-05 Thread Steve Petrie, P.Eng.

Thanks to Melvin for your remarks. My reply is posted below.

Steve

From: "Melvin Davidson"
Sent: Sunday, January 03, 2016 4:32 PM

More to the point, if you absolutely must
store session data, then why not just do it in a TEMPORARY table, which
will reside in memory and clean itself up when the session ends?

http://www.postgresql.org/docs/9.4/static/sql-createtable.html



Sorry for not having defined the term "session" in my original posting. Here
are quotes from the draft design document, for the proposed session
operations system:

"In this document, the term "session" does not refer to connections with a
postgres server through a postgres client, which connections are referred to
as "sessions" in postgres docs. In this document, the term "session" means a
dialog between an app and the consumer of that app, where the app may or may
not use postgres, for other than its session context data.

"The app that motivates this present proposed design for a postgres-based
session operations facility, is an Internet website application (HTML / HTTP
/ PHP / PostgreSQL) that provides services to a visitor using a web browser.

The "session" under discussion therefore, is used to maintain the
contextual data of the (visitor <=> app) dialog, during which dialog the app
will open and close numerous brief connections with the postgres client,
such connections also being referred to as "sessions" within postgres
docs.

So a TEMPORARY table will not suffice, but using a tablespace on SSHD
storage for the session operations tables might yield TEMPORARY-like
performance gains.
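
For illustration, a minimal sketch of that tablespace idea (the tablespace
name and directory are hypothetical, and the table is the illustrative
session_ctx_01 sketched earlier):

  -- Sketch only: put the session context tables on fast (e.g. SSHD) storage.
  CREATE TABLESPACE session_fast LOCATION '/mnt/fast_disk/pg_session';
  ALTER TABLE session_ctx_01 SET TABLESPACE session_fast;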



On Sun, Jan 3, 2016 at 3:43 PM, Pavel Stehule 
wrote:


Hi

2016-01-03 20:46 GMT+01:00 Steve Petrie, P.Eng. :


*Greetings To Postgres Forum,*

This posting is further to a prior forum thread -- subject "[*GENERAL]
using a postgres table as a multi-writer multi-updater queue*", that was
started on 23 November 2015 by Chris Withers ch...@simplistix.co.uk. I
believe the last posting to that thread was on 1 December 2015 by George
Neuner .

A related thread of interest, was started earlier -- subject *"[GENERAL]
using postgresql for session*",  on 7 October 2015 by John Tiger <
john.tigernas...@gmail.com>.



I am sorry for being off topic. But are you sure that using Postgres for session
data is a good idea? Using Postgres for short-lived data can cause
performance problems when the load gets higher.

Regards

Pavel




* * *
* * *

I made some postings to the first above-mentioned thread, as I am
working
to migrate a php website application from mysql to postgres. An
important
objective of this migration is to find a good way to use a postgres
table
to store session context data rows, one row for each active website
visitor.

One item of advice (among much other helpful advice) I took away from
the
first thread mentioned above, was to avoid use of DELETE commands as a
means to recycle session context table row image storage, when a session
is terminated.

To use instead, a TRUNCATE command on an entire session context table,
to
quickly and efficiently recycle session context row image storage space,
back to the filesystem, so the space is immediately available for reuse.

* * *
* * *

Since then, I have been working to design a way to use postgres table(s)
as a session context store, for a simple, reliable and high-performance
"session operations system" (SOS).

A design for a postgres-based SOS, that follows two key principles to
ensure maximum session workload throughput capacity:

*PRINCIPLE #1*: *1.1* Use only the TRUNCATE TABLE command, to recycle
frequently, rapidly and efficiently back to the filesystem, session
context
table storage space occupied by obsolete images of session context rows;
and *1.2* do not use DELETE / AUTOVACUUM / VACUUM commands at all, for
this recycling.

*PRINCIPLE #2*: *2.1* Use sequence generators for various
globally-addressable fast-access "iterators"**, that provide the php
website app (and its PL/pgSQL functions), with e.g. access to an
appropriate individual session context table; *2.2* Access granted to a
table from a pool of session context tables, each pool having its tables
all in the same operational state.

The downside of Principle #1 is the considerable added complexity of
having to manage multiple tables, to store session context data rows.

The downside of Principle #2 is that the sequence generator has no role
in sql transaction / savepoint semantics. So explicit provision for
synchronization is required, adding further complexity.

 (** An "iterator" is derived from a sequence generator, by using excess
unneeded precision in high-order bits of the sequence integer value, to
encode "iterator" metadata -- as an efficient way to make this metadata
available to multiple concurrently executing app execution control flow
paths.)

* * *
* * *

*The purpose of this present email, is to present (in pseudocode) for
critique by forum members, 

Re: [GENERAL] Question -- Session Operations - Feasibility Of Proposed Synchronization Method?

2016-01-05 Thread Steve Petrie, P.Eng.

Thanks to Pavel for your remarks. My reply is posted below.

Steve

- Original Message - 
From: "Pavel Stehule" <pavel.steh...@gmail.com>

To: "Steve Petrie, P.Eng." <apet...@aspetrie.net>
Cc: <pgsql-general@postgresql.org>
Sent: Sunday, January 03, 2016 3:43 PM
Subject: Re: [GENERAL] Question -- Session Operations - Feasibility Of
Proposed Synchronization Method?



Hi

2016-01-03 20:46 GMT+01:00 Steve Petrie, P.Eng. <apet...@aspetrie.net>:


*Greetings To Postgres Forum,*

This posting is further to a prior forum thread -- subject "[*GENERAL]
using a postgres table as a multi-writer multi-updater queue*", that was
started on 23 November 2015 by Chris Withers ch...@simplistix.co.uk. I
believe the last posting to that thread was on 1 December 2015 by George
Neuner <gneun...@comcast.net>.

A related thread of interest, was started earlier -- subject *"[GENERAL]
using postgresql for session*",  on 7 October 2015 by John Tiger <
john.tigernas...@gmail.com>.



I am sorry for being off topic. But are you sure that using Postgres for session
data is a good idea? Using Postgres for short-lived data can cause
performance problems when the load gets higher.



You are not off topic. Not at all. The objective is to find a design for a
session operations system based on postgres that eliminates the performance
problems under high load that you mention.

Get the benefits of using postgres as the session context store facility,
but use postgres in a way that is optimized for smoothly and efficiently
handling large workload volumes of INSERT / SELECT / UPDATE commands on
short-lived transient session data rows, under long intense bursts of app
session activity.


Regards

Pavel




* * *
* * *

I made some postings to the first above-mentioned thread, as I am working
to migrate a php website application from mysql to postgres. An important
objective of this migration is to find a good way to use a postgres table
to store session context data rows, one row for each active website
visitor.

One item of advice (among much other helpful advice) I took away from the
first thread mentioned above, was to avoid use of DELETE commands as a
means to recycle session context table row image storage, when a session
is terminated.

To use instead, a TRUNCATE command on an entire session context table, to
quickly and efficiently recycle session context row image storage space,
back to the filesystem, so the space is immediately available for reuse.

* * *
* * *

Since then, I have been working to design a way to use postgres table(s)
as a session context store, for a simple, reliable and high-performance
"session operations system" (SOS).

A design for a postgres-based SOS, that follows two key principles to
ensure maximum session workload throughput capacity:

*PRINCIPLE #1*: *1.1* Use only the TRUNCATE TABLE command, to recycle
frequently, rapidly and efficiently back to the filesystem, session
context
table storage space occupied by obsolete images of session context rows;
and *1.2* do not use DELETE / AUTOVACUUM / VACUUM commands at all, for
this recycling.

*PRINCIPLE #2*: *2.1* Use sequence generators for various
globally-addressable fast-access "iterators"**, that provide the php
website app (and its PL/pgSQL functions), with e.g. access to an
appropriate individual session context table; *2.2* Access granted to a
table from a pool of session context tables, each pool having its tables
all in the same operational state.

The downside of Principle #1 is the considerable added complexity of
having to manage multiple tables, to store session context data rows.

The downside of Principle #2 is that the sequence generator has no role
in
sql transaction / savepoint semantics. So explicit provision for
synchronization is required, adding further complexity.

 (** An "iterator" is derived from a sequence generator, by using excess
unneeded precision in high-order bits of the sequence integer value, to
encode "iterator" metadata -- as an efficient way to make this metadata
available to multiple concurrently executing app execution control flow
paths.)

* * *
* * *

*The purpose of this present email, is to present (in pseudocode) for
critique by forum members, a proposed approach to synchronizing use of the
"iterators" (sequence generators) described above, among multiple
concurrent actors, in the website php app session operations scenario.*

Since I am a postgres novice, I am hoping that members of this postgres
forum, will be kind enough to examine and critique the (boiled-down,
simplified) pseudocode for the proposed approach to synchronization.

(In this discussion, the term "process" does not refer specifically to a
"process" as implemented in operating systems, as one form of program
execution control, that is contrasted with "thread" as another form of
pro

Re: [GENERAL] Question -- Session Operations - Feasibility Of Proposed Synchronization Method?

2016-01-05 Thread Steve Petrie, P.Eng.

Thanks to Adrian for your remarks. My replies are posted below.

Steve

- Original Message - 
From: "Adrian Klaver" <adrian.kla...@aklaver.com>

To: "Melvin Davidson" <melvin6...@gmail.com>; "Pavel Stehule"
<pavel.steh...@gmail.com>
Cc: "Steve Petrie, P.Eng." <apet...@aspetrie.net>;
<pgsql-general@postgresql.org>
Sent: Sunday, January 03, 2016 4:38 PM
Subject: Re: [GENERAL] Question -- Session Operations - Feasibility Of
Proposed Synchronization Method?



On 01/03/2016 01:32 PM, Melvin Davidson wrote:

As others have pointed out, storing session data in a table is not a
good idea. Even if you use TRUNCATE, you will still not reclaim all the
space used unless you use vacuum full. More to the point, if you


Actually:

http://www.postgresql.org/docs/9.4/interactive/sql-truncate.html

"TRUNCATE quickly removes all rows from a set of tables. It has the same
effect as an unqualified DELETE on each table, but since it does not
actually scan the tables it is faster. Furthermore, it reclaims disk space
immediately, rather than requiring a subsequent VACUUM operation. This is
most useful on large tables."



Good to have this confirmed. TRUNCATE is way faster than DELETE /
AUTOVACUUM / VACUUM.

So if the session operations system uses a pool (of e.g. 31) session context
tables, it will be feasible periodically to recycle session context row
image storage space back to the filesystem, by means of the hyper-fast table
TRUNCATE command, without ever disrupting the app's access to the session
context data store. Because many other session context tables in the pool of
(e.g. 31) tables, remain online and available to the app, during the
TRUNCATE operation.

For example, session operations could be configured to impose a limit of e.g.
30,000 on the count of row images stored in a session context table.

If each session context row image occupies e.g. 1000 bytes, then the session
operations system will be recycling 30 MB of storage back to the filesystem,
in one quick table TRUNCATE command. Instead of going through a painstaking
per-row AUTOVACUUM / VACUUM process, that is more suitable for long-lived,
high-value data assets.
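
For illustration, a sketch of recycling one "dead" pool member (the table
name is illustrative; the guarantee that only "dead" row images remain is
provided by the session operations bookkeeping described here):

  -- Supervisory process, sketch only.
  BEGIN;
  LOCK TABLE session_ctx_07 IN ACCESS EXCLUSIVE MODE;  -- take this one pool member offline
  TRUNCATE TABLE session_ctx_07;                       -- hand its storage straight back to the filesystem
  COMMIT;
  -- The other 30 tables in the pool stay online to the app for the duration.

Note that TRUNCATE acquires the ACCESS EXCLUSIVE lock itself in any case;
the explicit LOCK TABLE just makes the offline step explicit.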

The app only ever uses INSERT / SELECT / UPDATE commands on session context
data rows. The app never uses DELETE commands on these rows. Session
operations has other means to ensure, before doing the table TRUNCATE, that
the session context table contains only "dead" session context row
images.


the rub is:

"TRUNCATE acquires an ACCESS EXCLUSIVE lock on each table it operates on,
which blocks all other concurrent operations on the table. When RESTART
IDENTITY is specified, any sequences that are to be restarted are likewise
locked exclusively. If concurrent access to a table is required, then the
DELETE command should be used instead."




My proposal is to turn this disadvantage of the TRUNCATE command (that it
requires an ACCESS EXCLUSIVE lock on the table) into an advantage, by means
of two design ideas:

1. Using a pool (of e.g. 31) session context tables, so that it will be
feasible periodically to recycle large amounts of session context row image
storage space back to the filesystem, by means of the hyper-fast table
TRUNCATE command, without ever disrupting the app's access to the session
context data store. Because many other session context tables in the pool
(of e.g. 31) tables, will always remain online and available to the app,
during the TRUNCATE operation on the one "dead" table, that is briefly
taken offline by session operations to be TRUNCATEd.

2. Using an explicit LOCK TABLE ... IN ACCESS EXCLUSIVE MODE command,
as a means of synchronizing usage of sequence generators with
TRANSACTION / SAVEPOINT semantics. The idea is to use a number of sequence generators as
quick-access "iterators"** that are globally-addressable by the app, to
assist efficient orderly usage of individual session context tables by the
app.

(**An "iterator" in this design, piggy-backs iterator fixed-value meta-data,
in the high-order bits of the underlying 64-bit integer sequence generator.
High-order bits that are surplus to the amount of integer precision
required, for the incrementing numeric range needed by the iterator.)
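
For illustration, a sketch of the bit-packing idea in plain SQL, assuming
(purely as an example) that the low 48 bits of the 64-bit sequence value
carry the incrementing range and the high-order bits carry a small piece of
metadata such as a table number; the sequence name is hypothetical:

  -- Sketch only: pack table number 26 into the high-order bits when the iterator is (re)initialized.
  CREATE SEQUENCE session_iter_01;
  SELECT setval('session_iter_01', (26::bigint << 48) | 1);

  -- Decode the metadata and the counter on each read.
  SELECT v >> 48                      AS table_no,  -- metadata from the high-order bits (26)
         v & ((1::bigint << 48) - 1)  AS counter    -- incrementing value from the low-order bits
  FROM (SELECT nextval('session_iter_01') AS v) AS s;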

And the objective of my original posting, was to ask this forum to critique
pseudocode that I propose, for achieving the synchronization envisioned in
point 2, above, to coordinate use of sequence generators with SQL
transaction / savepoint semantics. The author being a postgres novice, and a
barely-scratching-the-surface SQL app developer (but an experienced software
engineer).
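
For concreteness, a sketch of the synchronization pattern being asked about,
using the sql_table_01 / sql_sequence_01 names from the pseudocode (the
column in the INSERT is illustrative). One point worth noting: SELECT
currval() by itself takes no lock on sql_table_01, so it is the session
transaction's own access to the table that actually waits behind the
supervisory ACCESS EXCLUSIVE lock.

  -- Supervisory process (sketch only):
  BEGIN;
  LOCK TABLE sql_table_01 IN ACCESS EXCLUSIVE MODE;  -- blocks any other transaction touching the table
  SELECT nextval('sql_sequence_01');                 -- bump the table's "state version"
  TRUNCATE TABLE sql_table_01;
  COMMIT;                                            -- lock released here

  -- Session process (sketch only):
  BEGIN;
  SELECT nextval('sql_sequence_01');            -- ses.1: note the current version
  INSERT INTO sql_table_01 (session_row_id)     -- waits here while the supervisor holds the lock
  VALUES (1);
  SELECT currval('sql_sequence_01');            -- ses.6: re-check the version afterwards
  COMMIT;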


absolutely must store session data, then why not just do it in a
TEMPORARY table, which will reside in memory and clean itself up when
the session ends?

http://www.postgresql.org/docs/9.4/static/sql-createtable.html


On Sun, Jan 3, 2016 at 3:43 PM, Pavel Ste

Re: [GENERAL] Question -- Session Operations - Feasibility Of Proposed Synchronization Method?

2016-01-05 Thread Steve Petrie, P.Eng.

Thanks to John for your remarks. My reply is posted below.

Steve

- Original Message - 
From: "John R Pierce" <pie...@hogranch.com>

To: <pgsql-general@postgresql.org>
Sent: Sunday, January 03, 2016 4:40 PM
Subject: Re: [GENERAL] Question -- Session Operations - Feasibility Of
Proposed Synchronization Method?



On 1/3/2016 1:32 PM, Melvin Davidson wrote:

As others have pointed out, storing session data in a table is not a good
idea. Even if you use TRUNCATE, you will still not reclaim all the space
used unless you use vacuum full. More to the point, if you absolutely
must store session data, then why not just do it in a TEMPORARY table,
which will reside in memory and clean itself up when the session ends?


The fundamental problem is that HTTP itself is stateless, and user
sessions have to be enforced by the web app, with the aid of browser
cookies. You can't simply directly link an HTTP connection to a SQL 
connection and assume that's a session.



Please see my reply to Melvin Davidson.

The "session" I mean is the one that you point out, that has to be enforced
by the web app, to provide continuity missing from the stateless HTTP
protocol.

Except that I do not use browser cookies.

Instead of browser cookies, I use a parameter (session ID key) embedded in
the HTML page served by the app to the consumer's web browser.

And this parameter goes on a round-trip journey back to the app program
that is launched by the HTTP server, when the visitor clicks the web page
submit button and their browser sends an HTTP request message.

The format of the (32-character string) session ID key parameter is:

  CC:YYYYMMDDHHMMSS:NNNNTT

where:

1. CC -- "quasi-unique" random session identification code
(10-char, hex-encoded), a hash value (e.g. substring of MD5) but NOT a hash
of a password to be remembered and reused, just a randomized value from
x0000000000 to xFFFFFFFFFF (decimal: 0 ... 1099511627775);

2. YYYYMMDDHHMMSS -- session initiation time (14-char, decimal-encoded) e.g.
20160105040427;

3. NNNN -- serial number of the corresponding session context row when first
INSERTed (4-char, hex-encoded), value from x0001 to xFFFF (decimal: 1 ...
65535);

4. TT -- serial number of the current session context table where the session
row is stored (2-char, hex-encoded), value from x01 to x1F (decimal: 1 ... 31);

Here is the hidden field from the example HTML:

  <input type="hidden" name="a" value="B37000AC2C:20160105040427:0AD11A">

In the above HTML, hidden field name="a"
value="B37000AC2C:20160105040427:0AD11A" is a session ID key input parameter
to PHP program eto_session_act_2a.php, which interprets parameter "a" as:

1. CC == "B37000AC2C";
2. YYYYMMDDHHMMSS == "20160105040427";
3. NNNN == "0AD1" (decimal 2769);
4. TT == "1A" (decimal 26);


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Question -- Session Operations - Feasibility Of Proposed Synchronization Method?

2016-01-03 Thread Adrian Klaver

On 01/03/2016 01:32 PM, Melvin Davidson wrote:

As others have pointed out, storing session data in a table is not a
good idea. Even if you use TRUNCATE, you will still not reclaim all the
space used unless you use vacuum full. More to the point, if you


Actually:

http://www.postgresql.org/docs/9.4/interactive/sql-truncate.html

"TRUNCATE quickly removes all rows from a set of tables. It has the same 
effect as an unqualified DELETE on each table, but since it does not 
actually scan the tables it is faster. Furthermore, it reclaims disk 
space immediately, rather than requiring a subsequent VACUUM operation. 
This is most useful on large tables."


the rub is:

"TRUNCATE acquires an ACCESS EXCLUSIVE lock on each table it operates 
on, which blocks all other concurrent operations on the table. When 
RESTART IDENTITY is specified, any sequences that are to be restarted 
are likewise locked exclusively. If concurrent access to a table is 
required, then the DELETE command should be used instead."




absolutely must store session data, then why not just do it in a
TEMPORARY table, which will reside in memory and clean itself up when
the session ends?

http://www.postgresql.org/docs/9.4/static/sql-createtable.html


On Sun, Jan 3, 2016 at 3:43 PM, Pavel Stehule wrote:

Hi

2016-01-03 20:46 GMT+01:00 Steve Petrie, P.Eng.:

*Greetings To Postgres Forum,*
This posting is further to a prior forum thread -- subject
"[GENERAL] using a postgres table as a multi-writer
multi-updater queue", that was started on 23 November 2015 by
Chris Withers ch...@simplistix.co.uk. I believe the last posting to
that thread was on 1 December 2015 by George Neuner.
A related thread of interest, was started earlier -- subject
"[GENERAL] using postgresql for session",  on 7 October 2015
by John Tiger.


I am sorry for being off topic. But are you sure that using Postgres for
session data is a good idea? Using Postgres for short-lived data
can cause performance problems when the load gets higher.

Regards

Pavel

* * *
* * *
I made some postings to the first above-mentioned thread, as I
am working to migrate a php website application from mysql to
postgres. An important objective of this migration is to find a
good way to use a postgres table to store session context data
rows, one row for each active website visitor.
One item of advice (among much other helpful advice) I took away
from the first thread mentioned above, was to avoid use of
DELETE commands as a means to recycle session context table row
image storage, when a session is terminated.
To use instead, a TRUNCATE command on an entire session context
table, to quickly and efficiently recycle session context row
image storage space, back to the filesystem, so the space is
immediately available for reuse.
* * *
* * *
Since then, I have been working to design a way to use postgres
table(s) as a session context store, for a simple, reliable and
high-performance "session operations system" (SOS).
A design for a postgres-based SOS, that follows two key
principles to ensure maximum session workload throughput capacity:
*PRINCIPLE #1*: *1.1* Use only the TRUNCATE TABLE command, to
recycle frequently, rapidly and efficiently back to the
filesystem, session context table storage space occupied by
obsolete images of session context rows; and *1.2* do not use
DELETE / AUTOVACUUM / VACUUM commands at all, for this recycling.
*PRINCIPLE #2*: *2.1* Use sequence generators for various
globally-addressable fast-access "iterators"**, that provide the
php website app (and its PL/pgSQL functions), with e.g. access
to an appropriate individual session context table; *2.2* Access
granted to a table from a pool of session context tables, each
pool having its tables all in the same operational state.
The downside of Principle #1 is the considerable added
complexity of having to manage multiple tables, to store session
context data rows.
The downside of Principle #2 is that the sequence generator has
no role in sql transaction / savepoint semantics. So explicit
provision for synchronization is required, adding further
complexity.
  (** An "iterator" is derived from a sequence generator,
by using excess unneeded precision in high-order bits of the
sequence 

Re: [GENERAL] Question -- Session Operations - Feasibility Of Proposed Synchronization Method?

2016-01-03 Thread Pavel Stehule
Hi

2016-01-03 20:46 GMT+01:00 Steve Petrie, P.Eng. :

> *Greetings To Postgres Forum,*
>
> This posting is further to a prior forum thread -- subject "[*GENERAL]
> using a postgres table as a multi-writer multi-updater queue*", that was
> started on 23 November 2015 by Chris Withers ch...@simplistix.co.uk. I
> believe the last posting to that thread was on 1 December 2015 by George
> Neuner .
>
> A related thread of interest, was started earlier -- subject *"[GENERAL]
> using postgresql for session*",  on 7 October 2015 by John Tiger <
> john.tigernas...@gmail.com>.
>

I am sorry for being off topic. But are you sure that using Postgres for session
data is a good idea? Using Postgres for short-lived data can cause
performance problems when the load gets higher.

Regards

Pavel


>
> * * *
> * * *
>
> I made some postings to the first above-mentioned thread, as I am working
> to migrate a php website application from mysql to postgres. An important
> objective of this migration is to find a good way to use a postgres table
> to store session context data rows, one row for each active website visitor.
>
> One item of advice (among much other helpful advice) I took away from the
> first thread mentioned above, was to avoid use of DELETE commands as a
> means to recycle session context table row image storage, when a session
> is terminated.
>
> To use instead, a TRUNCATE command on an entire session context table, to
> quickly and efficiently recycle session context row image storage space,
> back to the filesystem, so the space is immediately available for reuse.
>
> * * *
> * * *
>
> Since then, I have been working to design a way to use postgres table(s)
> as a session context store, for a simple, reliable and high-performance
> "session operations system" (SOS).
>
> A design for a postgres-based SOS, that follows two key principles to
> ensure maximum session workload throughput capacity:
>
> *PRINCIPLE #1*: *1.1* Use only the TRUNCATE TABLE command, to recycle
> frequently, rapidly and efficiently back to the filesystem, session context
> table storage space occupied by obsolete images of session context rows;
> and *1.2* do not use DELETE / AUTOVACUUM / VACUUM commands at all, for
> this recycling.
>
> *PRINCIPLE #2*: *2.1* Use sequence generators for various
> globally-addressable fast-access "iterators"**, that provide the php
> website app (and its PL/pgSQL functions), with e.g. access to an
> appropriate individual session context table; *2.2* Access granted to a
> table from a pool of session context tables, each pool having its tables
> all in the same operational state.
>
> The downside of Principle #1 is the considerable added complexity of
> having to manage multiple tables, to store session context data rows.
>
> The downside of Principle #2 is that the sequence generator has no role in
> sql transaction / savepoint semantics. So explicit provision for
> synchronization is required, adding further complexity.
>
>  (** An "iterator" is derived from a sequence generator, by using excess
> unneeded precision in high-order bits of the sequence integer value, to
> encode "iterator" metadata -- as an efficient way to make this metadata
> available to multiple concurrently executing app execution control flow
> paths.)
>
> * * *
> * * *
>
> *The purpose of this present email, is to present (in pseudocode) for
> critique by forum members, a proposed approach to synchronizing use of the
> "iterators" (sequence generators) described above, among multiple
> concurrent actors, in the website php app session operations scenario.*
>
> Since I am a postgres novice, I am hoping that members of this postgres
> forum, will be kind enough to examine and critique the (boiled-down,
> simplified) pseudocode for the proposed approach to synchronization.
>
> (In this discussion, the term "process" does not refer specifically to a
> "process" as implemented in operating systems, as one form of program
> execution control, that is contrasted with "thread" as another form of
> program execution control. In this discussion, the term "process" means the
> general sense of any program execution path that can occur in parallel
> concurrently with other program execution paths.)
>
> In the pseudocode example provided below, two concurrent processes
> (session process, supervisory process) operate on the same same table
> *sql_table_01*, and they use sequence generator *sql_sequence_01* as a
> "version" number for the operational state of table *sql_table_01*.
>
> *QUESTION: In supervisory process step sup.2 (below), will the command:*
>
> *   LOCK TABLE sql_table_01 IN ACCESS EXCLUSIVE MODE;*
>
> *ensure that the session process, having read a value from sequence
> generator sql_sequence_01 in step ses.1, will never ever begin to execute
> step ses.6:*
>
> *   SELECT currval('sql_sequence_01');*
>
> *so long as the supervisory process, has completed step sup.2:*
>
> *   LOCK 

Re: [GENERAL] Question -- Session Operations - Feasibility Of Proposed Synchronization Method?

2016-01-03 Thread John R Pierce

On 1/3/2016 1:32 PM, Melvin Davidson wrote:
As others have pointed out, storing session data in a table is not a 
good idea. Even if you use TRUNCATE, you will still not reclaim all 
the space used unless you use vacuum full. More to the point, if you 
absolutely must store session data, then why not just do it in a 
TEMPORARY table, which will reside in memory and clean itself up when 
the session ends?


The fundamental problem is that HTTP itself is stateless, and user 
sessions have to be enforced by the web app, with the aid of browser 
cookies. You can't simply directly link an HTTP connection to a SQL 
connection and assume that's a session.


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Question -- Session Operations - Feasibility Of Proposed Synchronization Method?

2016-01-03 Thread Melvin Davidson
As others have pointed out, storing session data in a table is not a good
idea. Even if you use TRUNCATE, you will still not reclaim all the space
used unless you use vacuum full. More to the point, if you absolutely must
store session data, then why not just do it in a TEMPORARY table, which
will reside in memory and clean itself up when the session ends?

http://www.postgresql.org/docs/9.4/static/sql-createtable.html
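
For illustration, a minimal sketch of that TEMPORARY table idea (names are
hypothetical); such a table is private to the database connection and is
dropped automatically when the connection ends:

  -- Sketch only: a TEMPORARY table lives for the duration of the database session.
  CREATE TEMPORARY TABLE session_ctx_tmp (
      session_row_id integer PRIMARY KEY,
      context        text
  ) ON COMMIT PRESERVE ROWS;  -- the default: rows survive across transactions in this connection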


On Sun, Jan 3, 2016 at 3:43 PM, Pavel Stehule 
wrote:

> Hi
>
> 2016-01-03 20:46 GMT+01:00 Steve Petrie, P.Eng. :
>
>> *Greetings To Postgres Forum,*
>>
>> This posting is further to a prior forum thread -- subject "[*GENERAL]
>> using a postgres table as a multi-writer multi-updater queue*", that was
>> started on 23 November 2015 by Chris Withers ch...@simplistix.co.uk. I
>> believe the last posting to that thread was on 1 December 2015 by George
>> Neuner .
>>
>> A related thread of interest, was started earlier -- subject *"[GENERAL]
>> using postgresql for session*",  on 7 October 2015 by John Tiger <
>> john.tigernas...@gmail.com>.
>>
>
> I am sorry for being off topic. But are you sure that using Postgres for session
> data is a good idea? Using Postgres for short-lived data can cause
> performance problems when the load gets higher.
>
> Regards
>
> Pavel
>
>
>>
>> * * *
>> * * *
>>
>> I made some postings to the first above-mentioned thread, as I am working
>> to migrate a php website application from mysql to postgres. An important
>> objective of this migration is to find a good way to use a postgres table
>> to store session context data rows, one row for each active website visitor.
>>
>> One item of advice (among much other helpful advice) I took away from the
>> first thread mentioned above, was to avoid use of DELETE commands as a
>> means to recycle session context table row image storage, when a session
>> is terminated.
>>
>> To use instead, a TRUNCATE command on an entire session context table, to
>> quickly and efficiently recycle session context row image storage space,
>> back to the filesystem, so the space is immediately available for reuse.
>>
>> * * *
>> * * *
>>
>> Since then, I have been working to design a way to use postgres table(s)
>> as a session context store, for a simple, reliable and high-performance
>> "session operations system" (SOS).
>>
>> A design for a postgres-based SOS, that follows two key principles to
>> ensure maximum session workload throughput capacity:
>>
>> *PRINCIPLE #1*: *1.1* Use only the TRUNCATE TABLE command, to recycle
>> frequently, rapidly and efficiently back to the filesystem, session context
>> table storage space occupied by obsolete images of session context rows;
>> and *1.2* do not use DELETE / AUTOVACUUM / VACUUM commands at all, for
>> this recycling.
>>
>> *PRINCIPLE #2*: *2.1* Use sequence generators for various
>> globally-addressable fast-access "iterators"**, that provide the php
>> website app (and its PL/pgSQL functions), with e.g. access to an
>> appropriate individual session context table; *2.2* Access granted to a
>> table from a pool of session context tables, each pool having its tables
>> all in the same operational state.
>>
>> The downside of Principle #1 is the considerable added complexity of
>> having to manage multiple tables, to store session context data rows.
>>
>> The downside of Principle #2 is that the sequence generator has no role
>> in sql transaction / savepoint semantics. So explicit provision for
>> synchronization is required, adding further complexity.
>>
>>  (** An "iterator" is derived from a sequence generator, by using excess
>> unneeded precision in high-order bits of the sequence integer value, to
>> encode "iterator" metadata -- as an efficient way to make this metadata
>> available to multiple concurrently executing app execution control flow
>> paths.)
>>
>> * * *
>> * * *
>>
>> *The purpose of this present email, is to present (in pseudocode) for
>> critique by forum members, a proposed approach to synchronizing use of the
>> "iterators" (sequence generators) described above, among multiple
>> concurrent actors, in the website php app session operations scenario.*
>>
>> Since I am a postgres novice, I am hoping that members of this postgres
>> forum, will be kind enough to examine and critique the (boiled-down,
>> simplified) pseudocode for the proposed approach to synchronization.
>>
>> (In this discussion, the term "process" does not refer specifically to a
>> "process" as implemented in operating systems, as one form of program
>> execution control, that is contrasted with "thread" as another form of
>> program execution control. In this discussion, the term "process" means the
>> general sense of any program execution path that can occur in parallel
>> concurrently with other program execution paths.)
>>
>> In the pseudocode example provided below, two concurrent processes
>> (session process,