[GENERAL] Connection loss at some places - caused by firewall

2017-11-14 Thread Durumdara
Dear Members!


Windows 10 clients; PGSQL 9.4 and 9.6 servers (Debian with SSL, and Windows
without it - it doesn't matter).

When I (or my boss) work at home, I get connection-lost errors from PgAdmin
(3/4) and from other applications too.


server closed the connection unexpectedly
 This probably means the server terminated abnormally
 before or while processing the request.


PgAdmin, EMS PostgreSQL Manager, Navicat for PGSQL, and psql are all
affected.

When at least 5 minutes pass after the last request (select, insert, or any
other communication I initiated), the connection is lost.

The keepalive timer on the server is 10 minutes, so it is not the source
of the problem.

I tried setting a lower keepalive in the clients (EMS; in Navicat it is
called "ping") - no effect.

When I issue any request from my side before the 5-minute "limit" passes, I
get a fresh 5 minutes.

It seems that libpq.dll applications need client-initiated traffic every
5 minutes to keep the connection alive.
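
A client-side workaround I could try, sketched here with the standard libpq
connection parameters (host and database names are placeholders), is to ask
for aggressive TCP keepalives so the connection never sits idle for the
firewall's ~5 minutes:

```
host=db.example.com dbname=mydb user=me
keepalives=1 keepalives_idle=120 keepalives_interval=10 keepalives_count=3
```

keepalives_idle=120 makes the client kernel send a probe after 2 idle
minutes, well under the observed 5-minute cutoff.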

In the office I don't have this limitation.

My colleague can work from home without the time limit, and he has the same
Internet provider as I do.

It is not specific to the PGSQL server:
when I use a VPN connection from home to reach the office PG server, I also
get this problem.
From the office I can use the same IP and the same server without problems.

I checked the router and disabled everything there - nothing changed.

*I disabled my firewall at home - the problem vanished!!!*

And: if I use a secondary test computer (also Windows 10, PgAdmin III), it
works at home!!!

I was very surprised, because it seems to be a problem with the computer's
firewall.
I tried both WiFi and a direct cable - no difference, so the network card
can be removed from the list of possible causes.

It's very interesting!
Can the firewall distinguish between networks and limit these packets'
idle timeout to 5 minutes? Hmmm...

I added an exception to the firewall - nothing changed.

I know this is not a Windows admin list, but you may have experienced the
same problem, or have an idea what to do.

This has become important for us because one of our customers has the same
problem, and we don't know the solution...

What is your opinion about it?

Thanks
  dd


Re: [GENERAL] Restore LargeObjects on different server

2017-10-12 Thread Durumdara
Hi!

Somebody wrote to me:

The pg_catalog schema is a system schema, but it is IN the DB.

Is this true? So an OID is not global (not shared across databases)?

So we can dump and restore the DB with OIDs without collision on the new
server?

Thank you!

dd

2017-10-12 11:35 GMT+02:00 Durumdara <durumd...@gmail.com>:

> Dear Members!
>
> Because of upload/download progress we used LargeObjects to store some
> files in one of our database (and not bytea).
>
> Only this database uses the OID-s of these files.
>
> In the near future we must move to another server.
> This new server is also working now, the moving of databases is continous,
> one by one.
>
> The main problem that LargeObjects are stored in the system table(s). Same
> OIDs could links to different LargeObjects.
>
> The old and new PGSQL server may have same OID values (160606 f. e.) with
> different content (LargeObject).
> In old this is one of our file data, in the new this is a table's system
> definition.
>
> Can we backup this database WITH OIDs, and restore it in new server
> without worrying of LargeObject overwriting?
>
> Or how to migrate (move) this database with largeobjects in new to
> preserve the consistency of copied database and lobs, but preserve the
> existing OID/largeobject pairs in new server?
>
> Thank you for the info/help!
>
> Best regards
>dd
>
>
>
>


[GENERAL] Restore LargeObjects on different server

2017-10-12 Thread Durumdara
Dear Members!

Because of upload/download progress reporting we used large objects (and
not bytea) to store some files in one of our databases.

Only this database uses the OIDs of these files.

In the near future we must move to another server.
The new server is already in operation; we are moving the databases
continuously, one by one.

The main problem is that large objects are stored in system table(s). The
same OID could link to different large objects.

The old and the new PGSQL server may hold the same OID value (e.g. 160606)
with different content (a large object).
On the old server it is one of our files' data; on the new server it is a
table's system definition.

Can we back up this database WITH OIDs and restore it on the new server
without worrying about large objects being overwritten?

Or how do we migrate (move) this database with its large objects so that
the copied database and its LOBs stay consistent, while preserving the
existing OID/large-object pairs on the new server?

Thank you for the info/help!

Best regards
   dd
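
A migration sketch, under the assumption that pg_dump preserves
large-object OIDs within the dumped database (host and database names are
placeholders): since pg_largeobject is per-database, a restored OID only
has to be unique inside the target database, not across the whole cluster.

```
pg_dump -Fc -h oldhost mydb -f mydb.dump   # custom format; includes large objects
createdb -h newhost mydb
pg_restore -h newhost -d mydb mydb.dump    # large objects keep their OIDs
```

Restoring into a freshly created database avoids any collision with
existing large objects there.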


[GENERAL] Error: "cached plan must not change result type"

2017-10-09 Thread Durumdara
Dear Members!


On Friday one of our clients got this error:

"cached plan must not change result type"

He restarted the application and the problem vanished.


We use PGDAC to access the database.


At first we didn't know anything about this kind of error.


But later we realized that somebody in our company had altered a very
frequently used table.

He added a single field with a default value while the clients were using
the programme (they didn't exit).


Just so we can be sure: could these modifications generate this kind of
error?

What is the complete description of this error? What must we avoid doing
to prevent it?
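
For reference, the error can be reproduced with a prepared statement whose
result row type changes underneath it (a sketch; the table and statement
names are made up):

```sql
CREATE TABLE t (id int);
PREPARE q AS SELECT * FROM t;
EXECUTE q;                                 -- works
ALTER TABLE t ADD COLUMN c int DEFAULT 0;  -- the concurrent schema change
EXECUTE q;   -- ERROR: cached plan must not change result type
DEALLOCATE q;  -- re-preparing (or reconnecting, as the client did) recovers
```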


Thank you!


Best Regards

   dd


[GENERAL] pg_stat_activity extra Fields are NULL

2017-05-04 Thread Durumdara
Dear Members!

Windows clients, PG 9.4 on Linux.
I used [PID + client IP + port + backend start] as the unique
identification of a connection.
Every connection stores its unique connection info in a table (per
database).
E.g.:
"8574|195.12.4.3|50120|2017-01-01 08:12:15"

Yesterday I realized that backend start + client IP + port can be NULL in
a query.
I checked it, and it seems to be an access limitation.
The extra fields are shown or hidden based on the account and its rights.

As I see, the logic is:
- postgres (admin) can see everything (db1, db2, db3)
- a db owner can see the connections to his db (user1 can see db1, but not
db2)
- other users can see only their own connections (user_x sees only his row)

The rows are not limited; only the fields are set to NULL for
"non-accessible" connections.

If I used only the db owners for the connections, the technique would work.

But if it is a "member of" kind of user, my uniqueness logic breaks.
The current connection can build its own unique string, because everybody
can see himself:
"8574|195.12.4.3|50120|2017-01-01 08:12:15"
But in the problematic case another connection sees only:
"8574|||"

So we know only that the PID is alive, but not whether it is the same PID,
or whether it was closed and the PID reused by a new connection.

Is there a way to read this extra information with an extra call?

Or is there any way to determine whether it is the same PID or a newer one
(from a different connection)?

The always-visible fields are "datname, usename, pid, application_name";
the extras (creation time) are NULL for non-accessible entries.

application_name seems to be length-limited (I see it is text, but when I
tried to make it longer with an extra GUID, it was truncated).
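
For reference, the query pattern in question, as a sketch (the PID is the
example value from above); in standard builds application_name is
truncated to NAMEDATALEN - 1 = 63 bytes, which would explain the GUID
suffix being cut off:

```sql
SELECT pid, datname, usename, application_name,
       client_addr, client_port, backend_start
FROM pg_stat_activity
WHERE pid = 8574;  -- client_addr/client_port/backend_start come back NULL
                   -- when the caller lacks rights on that backend
```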

Thank you for all advice!

Best wishes
  dd


Re: [GENERAL] Maximum of connections in PG

2017-03-14 Thread Durumdara
Dear Melvin!

What is the point of PgBouncer with persistent, non-interruptible
connections? (Asking so that I can learn.)

These are not web connections (request, get connection, result, drop
connection); they are pure native applications which keep their connection
from start to termination.
Thank you!

dd
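
For reference, with clients like these only PgBouncer's session pooling
mode applies: each client holds one server connection for its whole
session, so the pooler mainly absorbs connect/disconnect churn rather than
reducing the steady connection count. A minimal pgbouncer.ini sketch
(addresses, file paths and pool sizes are made-up examples):

```
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; session pooling: one server connection per client session
pool_mode = session
default_pool_size = 100
```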




2017-03-14 15:29 GMT+01:00 Melvin Davidson <melvin6...@gmail.com>:

>
>
> On Tue, Mar 14, 2017 at 10:15 AM, Durumdara <durumd...@gmail.com> wrote:
>
>> Dear Members!
>>
>> In a very strong Linux machine (with many 16-30 GB RAM) what is the limit
>> of the PGSQL server (9.4-9.5) "maximum connections"?
>>
>> 1000?
>> 2000?
>>
>> The clients are native applications (Windows executables) with persistent
>> connections, with more than 100 databases (every client have only one
>> database connection).
>>
>> Now we must determine where is the upper limit to know when we must buy a
>> new machine to customers clients (which have to migrate in future).
>>
>> I know my question is too common without precise numbers, but what is I
>> need is your experiences in this theme?
>> What areas are problematic when we increase the "max_connection" number?
>>
>> Thanks for any info!
>>
>> Best wishes
>>dd
>>
>
>
> *>In a very strong Linux machine (with many 16-30 GB RAM) what is the
> limit of the PGSQL server (9.4-9.5) "maximum connections"?*
>
>
> *It all depends on the amount of shared memory, which in turn is dependent
> on the O/S memory.*
>
>
> *But if you are going to have thousands of users, you are better off using
> a connection pooler. My preference is for
> PgBouncer.https://pgbouncer.github.io/ <https://pgbouncer.github.io/>*
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>


[GENERAL] Maximum of connections in PG

2017-03-14 Thread Durumdara
Dear Members!

On a very strong Linux machine (with, say, 16-30 GB of RAM), what is the
practical limit of the PGSQL server's (9.4-9.5) "maximum connections"?

1000?
2000?

The clients are native applications (Windows executables) with persistent
connections, spread over more than 100 databases (every client has only
one database connection).

We must now determine where the upper limit is, so we know when our
customers must buy a new machine (and migrate in the future).

I know my question is too general without precise numbers, but what I need
is your experience in this area.
Which areas become problematic when we increase the "max_connections"
number?

Thanks for any info!

Best wishes
   dd


Re: [GENERAL] Who dropped a role?

2016-12-08 Thread Durumdara
Hello!

Sorry - meanwhile I found it as a group role.
I have never used this kind of role. How do I restore it to a normal login
role?

Thanks for it!

dd
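
For the record, a "group role" is simply a role without the LOGIN
attribute, so restoring it is a one-line change (the role name is a
placeholder):

```sql
ALTER ROLE important_role LOGIN;              -- make it a login role again
ALTER ROLE important_role PASSWORD 'secret';  -- it may also need a password
```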

2016-12-08 9:53 GMT+01:00 Durumdara <durumd...@gmail.com>:

> Dear PG Masters!
>
> In a server today I didn't find an important role.
> I don't know what my colleagues did with it, but it seems to be lost.
> Do you know a trick, how get info about it?
>
> 1.) Who dropped it?
> 2.) If not dropped what happened with it?
>
> Does role dropping logged somewhere?
>
> Thanks for your help!
>
> dd
>


[GENERAL] Who dropped a role?

2016-12-08 Thread Durumdara
Dear PG Masters!

On one of our servers today I could not find an important role.
I don't know what my colleagues did with it, but it seems to be lost.
Do you know a trick to get information about it?

1.) Who dropped it?
2.) If it was not dropped, what happened to it?

Is role dropping logged anywhere?

Thanks for your help!

dd
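
For reference: a DROP ROLE only appears in the server log if statement
logging was already enabled when it happened. A postgresql.conf sketch
that would capture such DDL in the future:

```
log_statement = 'ddl'            # log all CREATE/ALTER/DROP statements
log_line_prefix = '%m %u %d '    # timestamp, user and database on each line
```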


Re: [GENERAL] Surviving connections after internet problem

2016-11-08 Thread Durumdara
Dear Laurenz!


2016-11-07 16:06 GMT+01:00 Albe Laurenz <laurenz.a...@wien.gv.at>:

> Durumdara wrote:
> > Linux server, 9.4 PG, Windows clients far-far away.
> > May we must limit these parameters in clients after the starting of the
> connection?
>
> Don't bother about the clients, just see that the backends go away on the
> server.
>
> You can use pg_terminate_backend to kill a database session.
>
> Setting the keepalive options in postgresql.conf can make PostgreSQL
> discover dead connections more quickly.
>

The server is hosted under licence, so we can't access the conf file right
now.
We will report this to the provider.

For the moment, could we set these parameters from the clients after the
connection is established?

For example:

SET some_parameter = nnn;

Thank you!

Regards
   dd
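
For reference, the TCP keepalive parameters can indeed be set per session
after connecting, with no conf-file access needed (they act on the server
side of the connection, where the OS supports them); a sketch:

```sql
SET tcp_keepalives_idle = 60;      -- idle seconds before the first probe
SET tcp_keepalives_interval = 10;  -- seconds between probes
SET tcp_keepalives_count = 3;      -- unanswered probes before the kernel drops the link
```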


[GENERAL] Surviving connections after internet problem

2016-11-07 Thread Durumdara
Hello!


Linux server, PG 9.4, Windows clients far, far away.

They called us to say they had had an "internet reset" at 13:00, but many
client locks were still alive an hour later (14:00).
I checked the server status and saw 16 connections.

For the Windows PG server I read about the keepalive parameters, which
control and override the default TCP keepalive values.

As I read, the default can be two hours on Windows, and it is a system
default, so changing it affects all applications.

I don't know what happens between a Linux server and Windows clients.

Does the Linux build of PGSQL also use the 2-hour keepalive default, or is
this caused by the Windows clients, or by an active network device?

Or how else could the connections survive this internet reset? :-o

Must we limit these parameters on the clients after the connection is
started?

Thank you for your help!

dd
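
For reference, the surviving backends can also be cleaned up manually; a
sketch that terminates sessions idle for more than an hour (the cutoff is
an arbitrary example):

```sql
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
  AND state_change < now() - interval '1 hour'
  AND pid <> pg_backend_pid();   -- don't kill our own session
```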


Re: [GENERAL] Cannot delete role because it depends on "default privileges"

2016-10-21 Thread Durumdara
Wow... I found it. The postgres database contained more default
privileges. But PgAdmin III says nothing about dependents in its reports.

Thanks!

2016-10-21 16:19 GMT+02:00 Durumdara <durumd...@gmail.com>:

> Dear Tom!
>
> Is there any tool what can show me the dependents or dependencies?
>
> In PGAdmin I don't see any dependencies or dependents... :-(
>
> Thanks
>dd
>
> 2016-10-21 16:08 GMT+02:00 Tom Lane <t...@sss.pgh.pa.us>:
>
>> Durumdara <durumd...@gmail.com> writes:
>> > The DB_X dropped, so I can't choose it as "actual database".
>> > I tried this in a neutral database:
>> > drop owned by role_x;
>> > But nothing happened, the error is same.
>>
>> The error you are reporting is describing default privileges that
>> exist in the *current* database.  You need to do DROP OWNED BY
>> in that database (and maybe other ones, but start there).
>>
>> regards, tom lane
>>
>
>


Re: [GENERAL] Cannot delete role because it depends on "default privileges"

2016-10-21 Thread Durumdara
Dear Tom!

Is there any tool that can show me the dependents or dependencies?

In PGAdmin I don't see any dependencies or dependents... :-(

Thanks
   dd

2016-10-21 16:08 GMT+02:00 Tom Lane <t...@sss.pgh.pa.us>:

> Durumdara <durumd...@gmail.com> writes:
> > The DB_X dropped, so I can't choose it as "actual database".
> > I tried this in a neutral database:
> > drop owned by role_x;
> > But nothing happened, the error is same.
>
> The error you are reporting is describing default privileges that
> exist in the *current* database.  You need to do DROP OWNED BY
> in that database (and maybe other ones, but start there).
>
> regards, tom lane
>


Re: [GENERAL] Cannot delete role because it depends on "default privileges"

2016-10-21 Thread Durumdara
Dear Tom!

DB_X was dropped, so I can't choose it as the "actual database".

I tried this in a neutral database:

drop owned by role_x;

But nothing happened; the error is the same.

As I read, it has a "CASCADE" mode, but I'm afraid to run it, because I
don't know what will happen.

It is a production server, so if I do something wrong, I will be killed by
my boss... :-(

To complicate things, my colleague renamed this role to
"role_x_to_delete", and created a new role_x and a new db_x database.

So if PG uses "names" rather than identifiers in the background, I will
be in trouble...

Thanks


2016-10-21 15:27 GMT+02:00 Tom Lane <t...@sss.pgh.pa.us>:

> Durumdara <durumd...@gmail.com> writes:
> > We have a ROLE_MAIN.
> > This gave default privileges to all next objects in DB_X to ROLE_X.
> > Somebody dropped DB_X, and later he tried to drop ROLE_X.
> > But he got errors in PGAdmin.
> > ERROR:  role "role_x" cannot be dropped because some objects depend on it
> > DETAIL:  privileges for default privileges on new types belonging to role
> > role_main
>
> See DROP OWNED BY.
>
> https://www.postgresql.org/docs/9.6/static/role-removal.html
>
> regards, tom lane
>


[GENERAL] Cannot delete role because it depends on "default privileges"

2016-10-21 Thread Durumdara
Hello!

We created a DB named DB_X, and a role ROLE_X.

We have a ROLE_MAIN.
This gave default privileges to all next objects in DB_X to ROLE_X.

Somebody dropped DB_X, and later he tried to drop ROLE_X.
But he got errors in PGAdmin.

---
pgAdmin III
---
An error has occurred:
ERROR:  role "role_x" cannot be dropped because some objects depend on it
DETAIL:  privileges for default privileges on new types belonging to role
role_main
privileges for default privileges on new functions belonging to role
role_main
privileges for default privileges on new sequences belonging to role
role_main
privileges for default privileges on new relations belonging to role
role_main
privileges for default privileges on new types belonging to role postgres
privileges for default privileges on new functions belonging to role
postgres
privileges for default privileges on new sequences belonging to role
postgres
privileges for default privileges on new relations belonging to role
postgres
---
OK
---

How do we remove the default privileges from role_x so that we can drop it?

Thank you!

dd
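
For reference, each line of the DETAIL output can be reversed with a
matching REVOKE, run while connected to the database that reports the
dependency; a sketch using the role names from the error:

```sql
-- connected to each affected database, once per granting role:
ALTER DEFAULT PRIVILEGES FOR ROLE role_main REVOKE ALL ON TABLES    FROM role_x;
ALTER DEFAULT PRIVILEGES FOR ROLE role_main REVOKE ALL ON SEQUENCES FROM role_x;
ALTER DEFAULT PRIVILEGES FOR ROLE role_main REVOKE ALL ON FUNCTIONS FROM role_x;
ALTER DEFAULT PRIVILEGES FOR ROLE role_main REVOKE ALL ON TYPES     FROM role_x;
-- repeat with FOR ROLE postgres, then:
DROP OWNED BY role_x;   -- catches anything remaining in this database
DROP ROLE role_x;
```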


Re: [GENERAL] Restricted access on DataBases

2016-10-04 Thread Durumdara
Oh, WTF (Word Trade Fenster)! :-o

PgAdmin did that!

There is a subdialog for Default Privileges, with Tables, and with ONLY ONE
ROLE. That role is used after "TO", but no role is ever used after "FOR"...

Hm

Thank you!


2016-10-04 12:57 GMT+02:00 Albe Laurenz <laurenz.a...@wien.gv.at>:

> Durumdara wrote:
> [...]
> > --- login with postgres:
> [...]
> >   ALTER DEFAULT PRIVILEGES
> >   GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES,
> TRIGGER ON TABLES
> >   TO u_tr_db;
> >
> >  login with u_tr_main:
> >
> >   create table t_canyouseeme_1 (k int);
> >
> >  login with u_tr_db:
> >
> >   select * from t_canyouseeme_1;
> >
> >   ERROR: permission denied for relation t_canyouseeme_1
> >   SQL state: 42501
> >
> >  As you see before, u_tr_db got all default privileges on future tables,
> so I don't understand why he
> > don't get to "t_canyouseeme_1".
>
> You should have written
>
>ALTER DEFAULT PRIVILEGES FOR ROLE u_tr_main ...
>
> The way you did it, you effectively wrote "FOR ROLE postgres" because
> you were connected as that user.
>
> Than means that all future tables created *by postgres* will have
> privileges for user "u_tr_db" added.  But you want tables created
> *by u_tr_main* to get the privileges.
>
> Yours,
> Laurenz Albe
>


Re: [GENERAL] Restricted access on DataBases

2016-10-04 Thread Durumdara
Dear Charles!


Sorry for the late answer. Now I have a little time to check this again...

2016-09-14 18:43 GMT+02:00 Charles Clavadetscher :

> Hello
>
>
>
>
>
> Also try this:
>
>
>
> ALTER DEFAULT PRIVILEGES FOR ex_mainuser GRANT INSERT, SELECT, UPDATE,
> DELETE, TRUNCATE, REFERENCES, TRIGGER ON TABLES TO ex_dbuser;
>
>
>
> You execute the ALTER DEFAULT PRIVILEGES as su, so the grant applies to
> objects created by su and not ex_mainuser, unless you specify it with FOR
> ex_mainuser.
>
>
>
> Besides, if the objects in the table will not be created by the owner, but
> by your admin, then I don’t very much see the point in giving ownership.
> That could be done anyway in the public schema, unless you changed that.
>


So... I repeated the test.

--- login with postgres:

CREATE DATABASE db_testrole
  WITH ENCODING='UTF8' TEMPLATE=template0 CONNECTION LIMIT=-1;

CREATE ROLE u_tr_db LOGIN
  NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;

CREATE ROLE u_tr_main LOGIN
  NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
GRANT u_tr_db TO u_tr_main;

ALTER DATABASE db_testrole OWNER TO u_tr_db;

REVOKE ALL ON DATABASE db_testrole FROM public;
GRANT CREATE, TEMPORARY ON DATABASE db_testrole TO public;
GRANT ALL ON DATABASE db_testrole TO u_tr_db;
ALTER DEFAULT PRIVILEGES
  GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER
  ON TABLES TO u_tr_db;

--- login with u_tr_main:

create table t_canyouseeme_1 (k int);

--- login with u_tr_db:

select * from t_canyouseeme_1;
ERROR: permission denied for relation t_canyouseeme_1
SQL state: 42501

As you can see above, u_tr_db got all default privileges on future tables,
so I don't understand why it doesn't get them on "t_canyouseeme_1".

These variants do work:

A.)

--- login with u_tr_main:

set role u_tr_db;
create table t_canyouseeme_2 (k int);

--- login with u_tr_db:

select * from t_canyouseeme_2; -- OK!

B.)

--- login with su:

ALTER DEFAULT PRIVILEGES FOR ROLE u_tr_main
  GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER
  ON TABLES TO u_tr_db;

--- login with u_tr_main:

create table t_canyouseeme_3 (k int);

--- login with u_tr_db:

select * from t_canyouseeme_3; -- OK!


A.) works because I can SET ROLE to u_tr_db; then it is the creator and
gets all rights.
B.) I don't understand this statement... :-( :-( :-(

So, the main questions.
Why don't the default privilege settings affect the newly created table?
See:

ALTER DEFAULT PRIVILEGES
  GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER
  ON TABLES TO u_tr_db;

What is the meaning of this statement if it does not apply to objects
created by other users?
u_tr_db is the owner, so it already has all privileges on the tables it
will create itself.
So I supposed that "default privileges" are for future objects created by
different users.
But that does not work here.

I don't understand case B.
u_tr_main grants all privileges to u_tr_db on all newly created tables?

What is the difference between these?

1. ALTER DEFAULT PRIVILEGES
     GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER
     ON TABLES TO u_tr_db;
2. ALTER DEFAULT PRIVILEGES FOR ROLE u_tr_main
     GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER
     ON TABLES TO u_tr_db;

Why does the second work and the first not?


---

db_testrole-# \ddp
Default access privileges
   Owner   | Schema | Type  |  Access privileges
---++---+-
 postgres  || table | postgres=arwdDxt/postgres  +
   ||   | u_tr_db=arwdDxt/postgres
 u_tr_main || table | u_tr_db=arwdDxt/u_tr_main  +
   ||   | u_tr_main=arwdDxt/u_tr_main
(2 rows)

db_testrole-# \d
  List of relations
 Schema |  Name   | Type  |   Owner
+-+---+---
 public | t_canyouseeme_1 | table | u_tr_main
 public | t_canyouseeme_2 | table | u_tr_db
 public | t_canyouseeme_3 | table | u_tr_main
(3 rows)


---
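
For reference, the \ddp output above shows the rule: without a FOR ROLE
clause, ALTER DEFAULT PRIVILEGES applies to objects later created by the
role executing it. As a sketch:

```sql
-- Run as postgres, these two are equivalent:
ALTER DEFAULT PRIVILEGES GRANT SELECT ON TABLES TO u_tr_db;
ALTER DEFAULT PRIVILEGES FOR ROLE postgres GRANT SELECT ON TABLES TO u_tr_db;
-- Both affect only tables that postgres itself creates later,
-- which is why tables created by u_tr_main were not covered.
```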


Thank you for your help!

Best wishes
   dd


Re: [GENERAL] Restricted access on DataBases

2016-09-14 Thread Durumdara
Dear Adrian and Charles!

I tried to put together step-by-step instructions.

The real commands are separated by ";".

The other commands are: "login as username", "use db".
I ran them in PgAdmin, changing the connection to simulate what I perceive
as the problem.
I supposed that ex_dbuser has owner rights to the DB, and that with
"default privileges" it must see the new tables created by ex_mainuser.
Without those I would understand why (an inheriting role would have
different rights at creation).

To put it another way:
- ex_mainuser has all the rights that ex_dbuser has, but may have more
- but when ex_dbuser is granted all rights on future objects, it must see
what ex_mainuser created in its database

If that does not happen, my idea collapses, because then we must log in as
ex_dbuser to create objects, or we must create all objects as ex_mainuser
WITH ONLY THE OWNER SET (to ex_dbuser).

The example:


-- login as su
-- CREATE DATABASE ct_db WITH OWNER = ex_dbuser ENCODING = 'UTF8'
TABLESPACE = pg_default template = template0;
-- use ct_db

-- login as ex_dbuser
-- begin; create table t_dbuser (id integer);commit;

-- login as ex_mainuser
-- begin; create table t_mainuser (id integer); commit;

-- login as ex_dbuser
-- select * from t_mainuser; -- ERROR!

-- login as su
-- ALTER DEFAULT PRIVILEGES  GRANT INSERT, SELECT, UPDATE, DELETE,
TRUNCATE, REFERENCES, TRIGGER ON TABLES TO ex_dbuser;

-- login as ex_mainuser
-- begin; create table t_mainuser2 (id integer); commit;

-- login as ex_dbuser
-- select * from t_mainuser2;  -- ERROR!

-- login as su
-- ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT INSERT, SELECT, UPDATE,
DELETE, TRUNCATE, REFERENCES, TRIGGER ON TABLES TO ex_dbuser;

-- login as ex_mainuser
-- begin; create table t_mainuser3 (id integer); commit;

-- login as ex_dbuser
-- select * from t_mainuser3;  -- ERROR!


Thanks: dd


2016-09-14 16:52 GMT+02:00 Adrian Klaver <adrian.kla...@aklaver.com>:

> On 09/14/2016 06:52 AM, Durumdara wrote:
>
>> Dear Charles!
>>
>>
>
>> I thought before this test that mainuser get all rights as dbuser, so it
>> have rights to the next (future) objects too.
>> So mainuser and dbuser have equivalent rights in db database.
>>
>> Thanks for your every info!
>>
>
> In my previous post I mentioned using \dp or \z. The output from those
> commands can be hard to understand without a key, which I forgot to
> mention. The key can be found here:
>
> https://www.postgresql.org/docs/9.5/static/sql-grant.html
>
> in the Notes section.
>
>
>
>> Regards
>> dd
>>
>>
>>
>>
>>
>>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] Restricted access on DataBases

2016-09-14 Thread Durumdara
Dear Charles!

I checked your solution. For example:
db - database
dbuser, mainuser

1. dbuser own the database, and the objects in it.
2. mainuser member of dbuser.
3. public connection revoked.

Ok.

Then dbuser can see all tables, and mainuser too.

Ok.

The operation:
1. set role to mainuser (or login).
2. create table test_mainuser(id integer);
3. set role to dbuser (or login).
4. select * from test_mainuser;

Result: Permission denied.

Hmmm... the owner of test_mainuser is mainuser...

Then I dropped the test_mainuser table.

I tried to use default privileges. They apply to the future, so they must
affect a newly created table.
I set them all:

ALTER DEFAULT PRIVILEGES GRANT INSERT, SELECT, UPDATE, DELETE, TRUNCATE,
REFERENCES, TRIGGER ON TABLES TO dbuser;

I thought this would grant all rights on the newly generated table.

I ran the test again, but I got the same result.

Why? What am I doing wrong? (Maybe it is only my mistake.)

I thought before this test that mainuser gets all the rights of dbuser, so
it also has rights on the next (future) objects.
So mainuser and dbuser would have equivalent rights in the db database.

Thanks for your every info!

Regards
dd


Re: [GENERAL] Restricted access on DataBases

2016-09-07 Thread Durumdara
Dear Everybody!

I read the documentation based on your example. First reactions.
2016-09-05 18:25 GMT+02:00 Charles Clavadetscher :

>
>> GRANT us_a, us_b, us_c TO main_admin;
>>
>

Ah, that's good. I can merge the "owner" rights into one. :-)
It's like "doubling"! :-)



>
>>
>
> Here an example (obviously you will choose secure passwords and initialize
> them using \password . This is just a very simple example). I
> used 9.5 but it would work with earlier versions as well.
>
> -- Create roles and databases
>
> CREATE ROLE main_admin LOGIN PASSWORD 'xxx';
>
> CREATE ROLE us_a LOGIN PASSWORD 'xxx';
> CREATE DATABASE db_a;
> ALTER DATABASE db_a OWNER TO us_a;
>
> CREATE ROLE us_b LOGIN PASSWORD 'xxx';
> CREATE DATABASE db_b;
> ALTER DATABASE db_b OWNER TO us_b;
>
> -- Restrict access
>
> REVOKE CONNECT ON DATABASE db_a FROM public;
> GRANT CONNECT ON DATABASE db_a TO us_a;
>
> REVOKE CONNECT ON DATABASE db_b FROM public;
> GRANT CONNECT ON DATABASE db_b TO us_b;
>
> -- Grant all user rights to main_admin:
>
> GRANT us_a, us_b TO main_admin;
>

What could be the problem with revoking only the "connect" privilege? What
would the silent side effects be?
For example:
OK, us_b can't connect to db_a, but...
He can connect to db_b, and maybe he can start(?) a cross-database query...
He can set his role to something else and insert into table db_a.X...
He can read the temp tables of db_a...
He can read the structure of db_a...
He can break out of his sandbox by...???

---

Another question:
Can I think of GRANT as the present, and of DEFAULT PRIVILEGES as the
future?

Your two solutions seem better, like "revoke public from everything, and
grant all rights on every object in the present (GRANT) and in the future
(DEFAULT PRIVILEGES)".

Very-very thank you!

dd


Re: [GENERAL] Restricted access on DataBases

2016-09-06 Thread Durumdara
Dear Everybody!

I'm sorry for the lack of an answer - I will try to make up for it now.

2016-09-05 16:19 GMT+02:00 Adrian Klaver <adrian.kla...@aklaver.com>:

> On 09/05/2016 05:45 AM, Durumdara wrote:
>
>> Dear PG-masters!
>>
>> We want to put more databases to one server, to "public" schema:
>> DB_A, DB_B, DB_C.
>>
>
> The PUBLIC schema is contained within a database not the other way around,
> so further explanation is necessary.


Thank you, this is important information. I will read about it.


>
>
> And users:
>> US_A, US_B, US_C, and Main_Admin.
>> We want to setup the environment.
>> Every simple user can access his database:
>> DB_A - US_A
>> DB_B - US_B
>> DB_C - US_C
>>
>> They can't access other databases only theirs.
>>
>> Main_Admin can access all databases.
>>
>
> Is Main_Admin created as a superuser?
>

It is not really a superuser, but we want to use it as a "superuser of
these databases".

For example: there are customers a, b, c, d. Each of them has one
database. All of them are on one server.
We create different users for them so that nobody sees any data except
their own.

Main_Admin is a login role for us. We know each role's password, but we
want to use our own role to manage everything. It is not a superuser like
postgres, but it can do everything that A + B + C + D can.

If I were to write this in pseudo code, I would do:

1.) Double ownership:
set owner on DB_A to US_A, Main_Admin
set owner on DB_B to US_B, Main_Admin
...

2.) Grants:
grant all rights to US_A on all objects of DB_A;
grant all rights to Main_Admin on all objects of DB_A;
grant all rights to US_B on all objects of DB_B;
grant all rights to Main_Admin on all objects of DB_B;

The owner gets rights to every object in the database, including future
objects.
"Double ownership", or "all rights on the database and the contained
objects", would be good, so that Main_Admin can access everything in every
database.

Like a superuser, but without the right to get out of these databases.
Just like sandboxes: US_A has sandbox DB_A, US_B has sandbox DB_B.
Main_Admin has all the sandboxes (A + B + C + D), but cannot cause any
problem in other databases.


I will read your answers.

Very-very thank you.

dd


[GENERAL] Restricted access on DataBases

2016-09-05 Thread Durumdara
Dear PG-masters!

We want to put more databases to one server, to "public" schema:
DB_A, DB_B, DB_C.
And users:
US_A, US_B, US_C, and Main_Admin.
We want to setup the environment.
Every simple user can access his database:
DB_A - US_A
DB_B - US_B
DB_C - US_C

They can't access other databases only theirs.

Main_Admin can access all databases.

I'm not sure how to do this properly.
We tried to remove the "public" role and add US_A to DB_A.
But the sub-objects (a table named "teszt") are not accessible.

I can re-own DB_A to US_A, but this revokes all rights from Main_Admin.

What is the simple way to prevent access by other users, but give the
needed rights on DB_[n] to Main_Admin? (Tables, sequences, etc.)

And how do we keep this state later? For example: a new table is created
in DB_A; Main_Admin must be able to access it automatically...

I don't understand this area properly. To me, "public" means "access for
all users", which isn't good (DB_A vs. US_C).

As I understand it, we can't mix the rights (Main_Admin = US_A + US_B +
US_C...).

Thank you for the help. information, or an example!

DD


[GENERAL] Field size become unlimited in union...

2016-05-03 Thread Durumdara
Hello!

As I experienced, PGSQL changes the result field sizes to unlimited
varchar, when I passed different field size in same column of the union.

SP.Key - varchar(100)
SP.Value - varchar(200)

Example:

  select 'a', value from sp
  union all
  select key, value from sp


The first field is not limited to 100, it's unlimited varchar() (= Text,
Memo, CLOB).

So PG don't use the maximal field size (100).


If I did cast on the field to resize to 100, the result is limited
correctly.


  select cast('a' as varchar(100)), value from sp
  union all
  select key, value from sp


Can I somehow force PG to use the maximal size?

Or must I know and equalize all field sizes in the union?

Or must I use temporary tables and inserts to avoid this problem?
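
A quick way to see what type the union actually produces (a sketch; the `sp` definition is assumed from the description above, and the `u_plain`/`u_cast` names are made up):

```sql
CREATE TEMPORARY TABLE sp (key varchar(100), value varchar(200));

-- Materialize both variants and compare the resulting column types:
CREATE TEMPORARY TABLE u_plain AS
  SELECT 'a' AS key, value FROM sp
  UNION ALL
  SELECT key, value FROM sp;

CREATE TEMPORARY TABLE u_cast AS
  SELECT CAST('a' AS varchar(100)) AS key, value FROM sp
  UNION ALL
  SELECT key, value FROM sp;

SELECT table_name, data_type, character_maximum_length
FROM information_schema.columns
WHERE table_name IN ('u_plain', 'u_cast') AND column_name = 'key';
-- Expect: u_plain.key reports no length limit, u_cast.key reports 100,
-- matching the behaviour described in the post.
```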

Thanks

Regards
dd


Re: [GENERAL] 9.5 - Is there any way to disable automatic rollback?

2016-04-11 Thread Durumdara
Dear All!

>In PG it's seems to be different. PG silently rollback the actual

> transaction.
>>
>
> I am not seeing silent:
> ERROR:  current transaction is aborted, commands ignored until end of
> transaction block
>

I say "silently" because at first I didn't realize that everything was lost,
not only the last statement.
And I also say it because the client library shows this error, but the
datasets remain in an edited, modified state (not just the last record, but all
previously edited tables).



> My client controls, my client libraries, my client users believe that
>> changes were sent.
>>
>
> What are your client and client libraries?
>

PGDAC.


>
>
>> My client library lies that I'm "InTransaction", and in same transaction
>> I started(?). Every statement creates error message.
>>
>
> You are in the same transaction block until you issue the ROLLBACK or
> COMMIT.
>

Yes, I see. Until the Commit or Rollback I can't execute any selects, for
example "select txid_current()", because I get that error...
Thanks.


>
> I think it's a little bit problematic.  This is not under my control.
>> In AutoCommit mode ok, because it must drop the last modification, but
>> here no, I think.
>>
In MS or Firebird the statements' modifications were stored in the DB in a limbo
state, so I could commit the prior statements. In PG, I think, only
savepoints provide this.
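
A savepoint sketch of what that pattern looks like in PG (the table names here are made up for illustration):

```sql
BEGIN;
INSERT INTO detail1 VALUES (1);        -- this work should survive
SAVEPOINT before_risky;
INSERT INTO detail2 VALUES ('oops');   -- suppose this statement fails
ROLLBACK TO SAVEPOINT before_risky;    -- undoes only the failed statement,
                                       -- not the whole transaction
COMMIT;                                -- the first INSERT is committed
```

Without the ROLLBACK TO SAVEPOINT, any error puts the whole transaction into the aborted state described earlier in the thread.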


Thanks for your answers!

dd


Re: [GENERAL] Really unique session ID - PID + connection timestamp?

2016-04-11 Thread Durumdara
Dear All!

2016-04-10 18:03 GMT+02:00 Adrian Klaver :

> On 04/10/2016 06:29 AM, durumd...@gmail.com wrote:
>
>>
>> Products, offers are edited for long time.
>>
>
> Define long time, a session, a day, days, etc?
>

From 1 minute to 1-2 hours.


>
> But we must save subdata. This is not a "word like document" which can
>> saved at once, in a transaction.
>> When a product edited, we must protect it from other user's edit.
>> But it's subdata must be posted/commited to the DB, for example
>> shipping, article quantity changes, vouchers, etc.
>>
>
> So folks can make changes to the attributes of a Product, Offer, etc while
> it is being changed in ways they can not see?
>

They are not linked closely. Other people can also add new transport
areas, and they must be able to read them.
And no, Product and Offer elements are not editable by more than one person at once.


>
> Or do they get a read only view that changes as the 'locking' user makes
> edits?
>

Yes, and they can add new lines.


>
>
>>
> To be clear you are trying to come up with a solution that allows your
> application to run against different databases(Firebird, SQL Server,
> Postgres, etc?), using a single code base, correct?


Yes, absolutely right.

We have 3 choices:
1.) A second connection with a locking transaction (we lock other resources).
2.) Advisory locks.
3.) A multi-DB solution - if we can identify the current sessions with 99.99%
certainty (except for summer/winter time changes or other clock problems, and
perfectly identical timestamps).
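
For option 2, a minimal advisory-lock sketch (mapping (table name, record id) to the two-int key form via hashtext is just one possible choice, not something from the thread):

```sql
-- Try to take a session-level lock for (Offer, 117); returns
-- true if acquired, false if another session already holds it:
SELECT pg_try_advisory_lock(hashtext('Offer'), 117);

-- ... edit the entity, across any number of transactions ...

-- Release it when the editor closes (or let it vanish with the session):
SELECT pg_advisory_unlock(hashtext('Offer'), 117);
```

Session-level advisory locks survive commits and rollbacks but disappear automatically when the connection dies, which removes the need for manual dead-session cleanup.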

Thanks, we can close this thread... :-)

Regards


Re: [GENERAL] Really unique session ID - PID + connection timestamp?

2016-04-10 Thread Durumdara
Dear Adrian!

Again: as I see it, the code blocks at the beginning were removed by the mailing
system.

We have an "ourlocks" table which holds lock records (TableName, RecordID,
SessionInnerID, TimeStamp, etc., with TableName/RecordID as the primary key).

If anybody wants to lock a record "for a long time", "across transactions", they
try to insert a new record here.
If another process wants to lock the same record, it sees this record (or gets a
violation error), so it stops.

This is not to protect all tables, only the main entities that have
many subtables, like "Products", "Offers", etc.
We can't use transactions, because in the editor the users must post/commit
subdata.
And because PG is different from other DBs: if ANY statement fails,
it rolls back the whole transaction automatically (see my prior mail).
In Firebird we can do simple record-lock protection with the main connection's
second transaction, but in MS and PG we can't.

So we use real records in a real table. But how do we clean up if a client
disconnects without deleting its own records?
For this we created our own sessioninfo table with inner id, user id,
timestamp, and [connectionid, connectiontime].
The locking mechanism checks for the same lock (Offer, 117); if anybody holds
the record, it checks whether that session is still alive.
If the connection (ID + TS) is active, the lock is valid, and we can show
information about "who is editing; please ask him to release it, etc.".
If not, we can remove that record and insert our own.

The main goal is to protect the main entities. This is working in MS.
My question was how to get my client's connection timestamp, the way
pg_backend_pid gives the PID.
But as you wrote, I can get it from the activity view. Because no other backend's
PID can be the same as mine, I can select my own row from the view.

You said it has dangers (like GUID repetition). Yes, it has. And it's not the
BAD SYSADMIN, but the summer/winter time change that is dangerous (the
backward one). In PG we can also extend our "guid" with IP and port, and
that could be safe enough for us.
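
The query I have in mind is something like this (a sketch; column names are as of 9.2+, and before that the pid column was called procpid):

```sql
-- pid + backend_start together identify this session uniquely;
-- client_addr/client_port extend the "guid" as described above:
SELECT pid, backend_start, client_addr, client_port
FROM pg_stat_activity
WHERE pid = pg_backend_pid();
```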

Thanks




2016-04-09 16:05 GMT+02:00 Adrian Klaver <adrian.kla...@aklaver.com>:

> On 04/09/2016 01:30 AM, Durumdara wrote:
>
>> Dear Everybody!
>>
>>
>> In MS we had a "persistent lock" structure and method.
>> This over  transactions because based on real records in a real table
>> with pri key (tablename + id).
>>
>> For garbaging we had a special session info.
>> In MS the session id is smallint, so it can repeats after server
>> restarts, but my coll. found a "session creation timestamp".
>> This is a key which unique.
>> With this we can check for died sessions and we can clean their records.
>>
>
> It might help to explain more what it is you are trying to achieve.
>
> First I am not sure what you mean by 'persistent lock', especially as it
> applies to Postgres?
>
> Second, I assume by garbaging you mean garbage collection of something?
> If that is the case what exactly are you garbage collecting?
> I see 'clean records', what records would these be?
> In particular, on Postgres, where are you going to do this?
>
>
>> We want create same mechanism.
>>
>
> If the above questions did not already cover this, what mechanism?
>
>
> I know there are adv. locks in PG, but I want to use session id.
>>
>> This could be:
>> |pg_backend_pid|()
>>
>> May pid repeats.
>> Where I can get timestamp or some other unique data with I can create a
>> combined primary key?
>>
>> Thanks for your help!
>>
>> dd
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] Really unique session ID - PID + connection timestamp?

2016-04-09 Thread Durumdara
Dear John!

This is not about MS.

This is about how to get a unique session_id. This could be "pid" + "client
start time".
The PID isn't unique; where can I get the client's start time?

But I think the solution is the pg_stat_activity view.

No other session has the same PID as mine, because no one shares my backend process.

So I can filter to my own row, which contains my client's start time.
Anybody's lock record that had the same PID in the past can be eliminated.
Other PIDs are invalid if they are not in the view.

If Lock_Is_On(aTable, aIDValue, HisPID, HisClientStartTS):

    p = GetMyPID()
    bCanUnlock = False

    if p == HisPID:
        # This is my PID - but is it really mine?
        ts = SelectClientStartTSFromActivityLog(p)
        if ts == HisClientStartTS:
            # It's mine
            pass
        else:
            # The PID is the same, but the client's start time is
            # different - we can unlock it, this is not mine
            bCanUnlock = True
    else:
        # This is not my PID; we can check for its existence here
        if not HaveClientPIDAndStartTSinActivityLog(HisPID, HisClientStartTS):
            bCanUnlock = True  # No one has it, he was off

    if bCanUnlock:
        RemoveTableLock(aTable, aIDValue)
    else:
        raise Exception('Cannot lock the object because it is locked by another user!')



Thanks for your inspiration!


2016-04-09 12:30 GMT+02:00 John R Pierce <pie...@hogranch.com>:

> On 4/9/2016 1:30 AM, Durumdara wrote:
>
>> In MS we had...
>>
>
> If you want Microsoft's unique version of SQL, run Microsoft SQL. That
> stuff you describe is a whole bunch of implementation specific wierdness
> from the standpoint of someone outside, looking in..
>
>
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


[GENERAL] Really unique session ID - PID + connection timestamp?

2016-04-09 Thread Durumdara
Dear Everybody!


In MS we had a "persistent lock" structure and method.
It works across transactions, because it is based on real records in a real table
with a primary key (tablename + id).

For garbage collection we had special session info.
In MS the session id is a smallint, so it can repeat after server restarts,
but my colleague found a "session creation timestamp".
Together these form a unique key.
With this we can check for dead sessions and clean up their records.

We want to create the same mechanism.
I know there are advisory locks in PG, but I want to use a session id.

This could be:
pg_backend_pid()

But PIDs may repeat.
Where can I get a timestamp or some other unique datum with which I can create a
combined primary key?

Thanks for your help!

dd


[GENERAL] Drop only temporary table

2016-03-18 Thread Durumdara
Dear PG Masters!

As I have experienced, I can create a normal and a temporary table with the same name.

create table x (kod integer);

CREATE TEMPORARY TABLE x (kod integer);

select tablename from pg_tables where schemaname='public'
 union all
select c.relname from pg_class c
join pg_namespace n on n.oid=c.relnamespace
where
n.nspname like 'pg_temp%'
and c.relkind = 'r'
and pg_table_is_visible(c.oid);

---

I can see two x tables.

As I see, the DROP TABLE statement doesn't have a "temporary" option to determine
which one to eliminate - the real or the temporary table.

The same question arises with other DDL/DML statements - which is the destination
table, the real or the temporary one?

"insert into x(kod) values(1)"

So what do you think about this problem?

I want to avoid removing any real table on resource closing (= when dropping the
temporary table).
How do I force "drop only the temporary one"? A prefix, an option, etc.?
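
One way that works: the temporary table lives in the session's pg_temp schema, so schema-qualifying the name disambiguates the two (a sketch, using the x/kod example above):

```sql
CREATE TABLE x (kod integer);
CREATE TEMPORARY TABLE x (kod integer);

DROP TABLE pg_temp.x;   -- drops only the temporary x
DROP TABLE public.x;    -- drops only the permanent x

-- The same qualification works for DML, e.g.:
-- INSERT INTO pg_temp.x (kod) VALUES (1);
```

Unqualified names resolve via the search path, where the temporary schema normally comes first - which is exactly the ambiguity the question describes.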

Thanks for your help!

dd


[GENERAL] Phantom read example for new Repeatable Read level

2012-11-09 Thread Durumdara
Dear Everybody!

Can somebody show me an example of a phantom read in Repeatable Read mode
(in 9.1, with its new, serializable kind of isolation levels)?

The documentation says that it is possible:

Repeatable read: dirty read - not possible; non-repeatable read - not possible;
phantom read - possible.

But I don't understand why, or how.

C1 and C2 are clients.
Each of them is in an RR transaction.

What needs to happen in C1 to read C2's phantom records?
Can you show me the detailed steps?

Thanks for every info, help!

Best wishes: dd


[GENERAL] Re: One transaction by connection - commit subdetails without release master transaction?

2012-02-29 Thread Durumdara
Dear Anybody!

I'll replace the long question with a shorter one:

As I see, PGSQL supports one transaction per connection. Is this
information correct?

Formerly I believed that only Zeos/PGDAC supports one transaction per connection,
but now it seems to be a property of the server, not the components...
(pgAdmin also shows one transaction state/statement per connection.)

So if I want to run two transactions from the client, must I duplicate the
connections for this period?

Thanks for your help:
   dd

2012/2/17 Durumdara durumd...@gmail.com:
 Dear PGSQL Masters!


 What do you think about this? Can I do this without doubling the
 connections per user (if I duplicate the connections, the subforms can use
 the second connection = another transaction)?


 Thanks for your help: dd


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Correct way for locking a row for long time without blocking another transactions (=nowait)?

2012-02-28 Thread Durumdara
Hi!

In Firebird I can set the transaction to nowait.
When I want to protect a row for a long time (showing a dialog, and on
closing I release the row), I do this:

trans.StartTransaction();
sql.execute('update thetable set afield = afield where idfield = anyvalue');

This locks the row with id anyvalue.

If anyone tries to do something with this row (update, delete) from
another transaction, Firebird raises an error to show that the row
is locked.

On dialog close I simply commit or rollback, which
eliminates the lock on the row.

I am searching for the same mechanism in PG...

But as I see, LOCK TABLE (where I can set NOWAIT) is only for
short transactions, because it does not target a single row; it holds
the whole table.

Another way is to start a transaction, update a row, and wait - but then
all other transactions trying to update this row wait for me... (they are
blocked!).

I want to protect the row, but other transactions mustn't block on
it; they should instead raise an error after 200 msec (for example) when
they see the row is locked.

Maybe the solution is advisory locks, but advisory locks
don't prevent changes to the real record: if a procedure or
SQL statement doesn't check for the advisory lock's existence, it simply
overwrites my data...
Or must we use before-update and before-delete triggers that always check
first for the existence of the advisory lock by ID?

Hmmm...

Thanks for your every idea:
dd



Re: [GENERAL] Correct way for locking a row for long time without blocking another transactions (=nowait)?

2012-02-28 Thread Durumdara
Dear Filip!

2012/2/28 Filip Rembiałkowski plk.zu...@gmail.com:
 On Tue, Feb 28, 2012 at 10:26 AM, Durumdara durumd...@gmail.com wrote:
 Just some loose comments.

 http://www.postgresql.org/docs/current/static/explicit-locking.html#LOCKING-ROWS

 A way to explicitly lock given row without updating it:
 SELECT whatever FROM thetable WHERE id=123 FOR UPDATE NOWAIT;

Thanks, that is what I have been searching for all this time.


 A way to force error when any statement takes more than 200 msec:
 SET statement_timeout TO '200ms';

As I see, that is not needed here.
Only for normal updates.

And how can I reset statement_timeout to its default value after this command?
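
Resetting it afterwards might look like this (a sketch):

```sql
SET statement_timeout TO '200ms';
-- ... statements that should fail fast instead of waiting ...
RESET statement_timeout;    -- back to the session/server default

-- Or scope it to one transaction, so no reset is needed:
BEGIN;
SET LOCAL statement_timeout TO '200ms';
-- ... the setting ends with the transaction ...
COMMIT;
```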


 The waiting that you observed is normal - there is no way in
 PostgreSQL to force _other_ transactions into NOWAIT mode. All
 transactions that do not want to wait, should use explicit locking
 with NOWAIT option.

If I understand it correctly, I must follow the NOWAIT scheme for updates to
avoid long waits (getting an error instead).

1.) I start everywhere with SELECT ... FOR UPDATE NOWAIT
2.) Next I execute the UPDATE command
3.) COMMIT

So if I start with step 2, I get a long block because I'm waiting
for the row lock to be released?

But as I remember, in PGSQL Read Committed is the default isolation level.
This means the rows are the same within one SELECT FOR; afterwards they can change.

Maybe the solution, if PGSQL supports this, is:

create temporary table tmp_update as
select id from atable
where ...

select * from atable for update nowait
where id in (select id from tmp_update)

update atable set value = 1
where id in (select id from tmp_update)

Is this correct?


Thanks for it:
   dd



[GENERAL] Stability in Windows?

2012-02-24 Thread Durumdara
Hi!

We planned to port a very old DBASE db into PGSQL.

But somebody on a developer list said that he had tried PGSQL (8.x) and it
was very unstable on Windows (and that it had problems when many users used it).

Other people said that they used PGSQL only on Linux - and that there was
no problem with it, only some patches needed to speed up writes...

What is your experience in this area?

Do you have the same experience on Windows?

The user count is from 20 up to 100 (concurrent).

Thanks for your every idea, help, link, information about this.

Regards:
   dd


Re: [GENERAL] Stability in Windows?

2012-02-24 Thread Durumdara
Hi!

2012/2/24 mgo...@isstrucksoftware.net

 We are using it on a rather beefy server with no problems with a Win32
 client/server app.


Which programming language do you use in the clients? Which adapter?

If Delphi, then which components?


  There are additonal things you can do to tune the
 database.


Please say a few words about it!


  I've not seen any stability problems.  Remember it's been
 several years since version 8 came out and the current version is 9.1.2.


How many users use it concurrently?

Thanks for your info:
   dd


[GENERAL] One transaction by connection - commit subdetails without release master transaction?

2012-02-17 Thread Durumdara
Dear PGSQL Masters!

I got a new project: porting a DBASE-based application to PGSQL. I have some
theoretical questions before I start coding/typing.

OS: Win, IDE: Delphi.

Preface:
Delphi uses TDataSets (each of them has records/rows).
You use the Post method to save the actual record. In DBASE or flat-table
systems, Post instantly writes the record into the file (isolation
level = commit on post). The Edit and Append methods are used to modify
the record.
These methods put a lock on the actual record to protect it from other
modifications. After Post or Cancel these locks are released.

The construction of the old application is this (and it repeats
many times, in many forms = dialogs):

1.) The edit form opens a master item (table row) - with master
dataset, dbedits, grids, etc. For example: an order.

2.) The edit form opens the details (shows them). For example: dispo
addresses, bill items, suborders, etc.

3.) The master dataset is locked with Table.Edit.

4.) You can edit the master...

5.) ... or you can edit any of the subdetails, or basic data (for example
types, kinds, workers, etc.) in other forms, and these are instantly posted (post
= commit). So the master is kept in Edit state after I add a new address (but
the new address record is saved in the DB).

6.) After you finish, the form closes and the Master Data is posted. Then the
lock is released.

Because this lock stays alive, no one can ever change the master or details
concurrently (conflicts are resolved by this), only basic data (colors, types,
kinds, workers, etc.).

If I try to map this onto PGSQL, or another RDBMS (except Firebird
with IBX/IBO), I always get blocked on these things:


a.) Lock/protection on the master record
b.) Only one live transaction per connection

Firebird with IBX/IBO allows you to have several live transactions.

In that case I can start a new transaction in every new subdetail form;
this does not affect the main transaction (master).

begin; update master set id = id where id = :id

begin; insert into detail1 () values(); commit;
begin; update detail2 ...; commit;

master; commit


This UPDATE statement locks the master, and we can post/commit any of the
subdetails; the subdata are flushed into the DB, no matter whether the
master is later rolled back or not.

But ZEOS and PGDAC support only one transaction per connection (and maybe
other DB layers (ADO) do too).

So if I want to use SELECT FOR UPDATE, or lock a record on the master, the
lock will vanish when any of the subdetails is committed or rolled back (on failure).

How can I protect the master? Can I create subtransactions that can be
committed without committing the master?

Another problem is that I cannot change the mode of the transaction, because it is
held on the connection (AutoCommit = ???).
The bills, or heavily linked data, must be saved in No-AutoCommit mode
(protected by a transaction, all or none = subitems with the master at once).
But these subdetails are committable row by row; that doesn't matter.

I hope I have described the situation well.

Forms can open other forms, and subforms can save subdata without releasing
the master lock or dropping the master's transaction (OK, in DBASE clones there
are no transactions, but you may understand the idea).

Because the users have worked this way until now, I must provide the same under
PGSQL.

What do you think about this? Can I do this without doubling the
connections per user (if I duplicate the connections, the subforms can use
the second connection = another transaction)?


Thanks for your help: dd


[GENERAL] Extending Session / Logged User info

2012-01-31 Thread durumdara
Dear PGSQL Masters!

I have a complicated question.

Is there any way to extend the session information (what is visible in
pgAdmin's Server Status)?

I explain what I want to do:

We used an alternative DB at one site and I need to port it.
In this DB we use a different db user for every real user.
When the program logs in to the server, it asks for a username/pwd and
logs in with it.
So if John XY logs in, the session is shown as John XY, who is a
real user, and a db user too.

This means that if we have 80 users, we need to create 80 db users.

It is very useful when you want to see who uses the db, but very tiring
when you want to create or manage the users (plus roles, etc.).

I thought that (if possible) we could avoid this mechanism if the session
could store additional information.

Then I would need to create only ONE db user; I have an aux table with
usernames/pwds, and if the login is successful, I only need to put
the real username into the session's additional field.

In the view I would see all users as one, but an extra column would show
the real username.

Does PGSQL support this kind of session-information extension, or any
way to use one db user for several real users - and to get this info
(the real username)?
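
One built-in hook that may cover this, available since 9.0 (a sketch; whether it is enough depends on the requirements): each client can tag its own session, and the tag shows up next to the shared db user in the activity view.

```sql
-- After logging in with the single shared db user:
SET application_name = 'John XY';

-- The admin then sees the real user alongside the db user
-- ('procpid' here is the 9.0/9.1 column name; it is 'pid' from 9.2 on):
SELECT procpid, usename, application_name FROM pg_stat_activity;
```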

Maybe the solution is a table with:
Session ID (PID???), UserName, Logged in

If the PID were unique over the whole lifetime of the server, I could take
only the last login of each PID.


Thanks for your help:
dd



Re: [GENERAL] Extending Session / Logged User info

2012-01-31 Thread Durumdara
Hi!

2012/1/31 Guillaume Lelarge guilla...@lelarge.info

 On Tue, 2012-01-31 at 04:16 -0800, durumdara wrote:
  Dear PGSQL Masters!
 
  I have a complicated question.
 
  Is there any way to extend the session information (what visible in
  pgadmin's Server Status)?
 
  I explain what I want to do:
 
  We used alternative DB in a place and I need to port this.
  In this DB we using different db users for every real users.
  When the program logging to the server it input for username/pwd, and
  log with it.
  So if John XY logged in, the session is shown as John XY, which is a
  real user, and db user too.
 
  This meaning that if we have 80 users, we need to create 80 db users.
 
  It is very useful when you want to see who use the db, but very tiring
  when you want to create or manage them (plus role, etc).
 

 Creating 80 users is not a really big burden. Managing privileges for 80
 users will be. So you need to create some roles, on which you'll
 grant/revoke privileges, and you'll make each user member of one or more
 of the roles.


This also causes problems when I don't want to install a new PGSQL server,
but they want to see the old, closed archives.
Now one solution is to put prefixes on the usernames:
main_[name]
archive1_[name]
and the program sets these prefixes automatically.

But it would be easier if one db user could represent all users,
and I could differentiate between them by extending the session info (additional
fields).

main_user
archive1_user
archive2_user

Thanks:
dd


Re: [GENERAL] When the Session ends in PGSQL?

2011-07-06 Thread Durumdara
Hi!

2011/7/4 Craig Ringer cr...@postnewspapers.com.au:
 On 4/07/2011 7:50 PM, Durumdara wrote:

 As I understand you then running Queries forced to abort on the end of
 Session and no more Query running over the session's ending point (or
 after TCP connection broken).

 Correct. The server might not notice that the client broke it's connection
 for a while, though, especially if there's along tcp timeout, no keepalives
 are enabled, and the server isn't actively sending data to the client.

 This makes me wonder, though: If a client sends a COMMIT message to the
 server, and the server replies to the client to confirm the commit but the
 client has vanished, is the data committed? How does the client find out?

Good question.

 I'd assume it'd still be committed, because if the server had to wait for
 the client to acknowledge there would be issues with delaying other commits.
 The trouble is, though, that if a client sends a COMMIT then loses contact
 with the server it doesn't know for sure if the commit really happened. It
 can't reconnect to its old session as it's been destroyed. Is there any way
 it can ask the server did my old xid commit successfully' if it recorded
 the xid of the transaction it lost contact with during COMMIT?

 Is there any way to have both server and client always know, for certain,
 whether a commit has occurred without using 2PC?

Interesting question. In other RDBMSs the commit completes before the session
timeout finishes, but if you have a network problem, you never know what
finished in the background.
Stored procedures continue working after the TCP connection is lost, and if they
support COMMIT, they can make persistent modifications.

The question is much the same as with TWO-PHASE COMMIT: when do we consider an
action closed (acknowledged)?


 Stored procedures will remain. Note that stored procedures in postgres
 are a
 bit different from what you may be used to in other dbs; while I assure
 you
 it's for the better, you might want to RTFM to avoid surprises.

 Please explain a little this (Just 1-2 sentence please).

 PostgreSQL doesn't have true stored procedures at all. It only has
 user-defined functions that can be called from a SELECT/INSERT/UPDATE/DELETE
 statement.

 Most importantly, PostgreSQL's stored procedures cannot control
 transactions. They cannot commit, rollback, or begin a new transaction. They
 have some control over subtransactions using PL/PgSQL exceptions, but that's
 about it.

So: I tried it. I created an infinite LOOP/END LOOP procedure, started it
from pgAdmin, and then killed pgAdmin.

8 minutes have passed, but the server process still hasn't stopped this procedure.
Does the process have some runtime limit? When does the server kill this process?
Never, because it is in a working state?

How can I abort it without aborting other sessions or killing the server?
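
For the record, there are server functions for exactly this (run from another session as superuser; the pid 12345 below is hypothetical):

```sql
-- Find the runaway backend (9.1 column names; 'pid'/'query' from 9.2 on):
SELECT procpid, current_query FROM pg_stat_activity;

-- Stop only that backend, leaving all other sessions alone:
SELECT pg_cancel_backend(12345);      -- cancel its current query
SELECT pg_terminate_backend(12345);   -- or end the whole session
```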

Thanks:
dd



Re: [GENERAL] When the Session ends in PGSQL?

2011-07-06 Thread Durumdara
Hi!

2011/7/6 Durumdara durumd...@gmail.com:
 Most importantly, PostgreSQL's stored procedures cannot control
 transactions. They cannot commit, rollback, or begin a new transaction. They
 have some control over subtransactions using PL/PgSQL exceptions, but that's
 about it.

 So: I tried it, I created a LOOP/END LOOP infinite procedure, and
 after started with pgAdmin, I killed the pgAdmin.

 8 minutes passed, but server process don't stop this procedure yet.
 Have the process some limit on running? When the server kill this process?
 Never because of working state?

 How to abort it without abort another sessions, or kill the server?

Interesting:

CREATE OR REPLACE FUNCTION a()
  RETURNS integer AS
$BODY$BEGIN
   LOOP
-- x
   END LOOP;
   return 1;
END;$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;

select a();

Then the server cannot abort my query.
Only a statement limit (timeout) helps in this situation.

But I'm not sure what this timeout means.
Which statement is measured?

The main statement (which is visible as the last query in the monitor)?
Or are substatements also measured one by one, regardless of the main
statement's length?

For example, I have a complex stored procedure that calls subqueries to
provide records to the main query.
Does the timeout apply to the total length of the main query, or is it reset
on each subselect (statement) I execute in my STP?

Thanks:
dd



Re: [GENERAL] When the Session ends in PGSQL?

2011-07-04 Thread Durumdara
Dear Bill!

I don't want to start a flame war - I'm only reflecting.

As I see it, we misunderstood each other, and I want to clarify this
question and show my viewpoint.

Preface:

1.) Databases need to close the resources kept by sessions.
2.) There needs to be some boundary, or a rule, to decide which session is
considered finished.

Firebird uses DummyPacketInterval to recognize dead sessions;
EDB uses a Session Timeout value for it.
An EDB session can stay alive even if the network connection is lost, as long as
the client sends a sign to the server before the timeout.

To be able to use PGSQL as well, I need to know the limitations, and how to
handle and manage the connections.

I am surprised that PGSQL ties this to living TCP connections, because
this may be out of our control.
If we have a timeout value, we can control how much time we have and
what time is acceptable for a session.
For example: at some sites we have wifi connections that are
sometimes broken for just a short period. This is enough to
disconnect, but because of a higher Session Timeout value our
database connections stay alive without the applications closing.

Another thing is the sign (packet). We must do something periodically to
keep the connection alive. For example: every minute we do some
dummy thing on the server, like selecting the date, etc.

The reflections:

2011/6/30 Bill Moran wmo...@potentialtech.com:
 In response to Durumdara durumd...@gmail.com:

 Hi!

 2011/6/30 Bill Moran wmo...@potentialtech.com:
  In response to Durumdara durumd...@gmail.com:
 
  Session ends when the connection is terminated.  I.e., a PostgreSQL
  session is synonymous with a TCP connection (or domain socket in some
  cases).

 In Windows OS what's this meaning?
 I know there is a difference between Win's and Lin's TCP handling.

 There's differences between TCP handling in different distros of Linux,
 but it doesn't change the simple fact that all OSes will kill dead
 sockets eventually, and when the socket is closed, the PG session ends.

OK, I understand it. Thanks.
But 2 questions remain:
1.) How can I control the OS's TCP/IP stack to allow a longer timeout?
2.) How can I force PG to change some parameter to use longer keepalive
timeouts? (tcp_keepalives_*?)
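
For question 2, the tcp_keepalives_* settings can go in postgresql.conf or be set per session (a sketch; 0 means "use the operating system default", and on Windows some of these are not adjustable):

```sql
SET tcp_keepalives_idle = 60;       -- seconds of idleness before the first probe
SET tcp_keepalives_interval = 10;   -- seconds between probes
SET tcp_keepalives_count = 5;       -- lost probes before the socket is declared dead
```

These only tune the server side of the connection; the client side is governed by the OS (or libpq keepalive options in later versions).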


  Such an option wouldn't make any sense to include, if you ask me.

 In EDB and FireBird we experienced that Timeout is good thing for
 avoid the short net problems.
 For example: many place they are using notebooks, and wifis.
 If somebody get out the wifi area, lost the connection - but the wifi
 CAN reactivate it when he/she is go again the needed place. And here
 the DB service also can reactivate the Session in the background -
 except the period exhaustion.

 All of that can be done with the correct settings at the TCP level as
 well.

Can you show me an example, or some descriptions/links/documents about
it, for an RTFM operation? :-)


  I
  mean, if I open a psql and start a transaction, then get interrupted or
  have to spend some time researching something else before I can finish,
  the last thing I want is to come back to psql and find that my session
  was terminated and my in-progress data changes lost.

 Maybe in psql, but not in client applications where users are working;
 sometimes they hit failures like power loss, blue screens, or the
 application dying.
 They want to continue their work, but while the locks/transactions
 remain they cannot.

 Incorrect.  If a Windows client bluescreens, the server will eventually
 notice that the socket is dead and clean it up.  If that's taking too
 long, then you need to review the TCP settings on your server.

If the pgsql connection is socket based, then this is true.


 However, it sounds to me like your application is poorly designed.

No. It only uses the connection model supported by that database, which
is not the same as in PGSQL.

If
 it's being used via unreliable connections, but requires that database
 locks be held for any length of time, you've got two warring design
 requirements, and one or the other is always going to suffer.

We don't have long-held locks or transactions, but we have had
situations where locks remained unreleased because of client freezes.
Then we had to wait out the timeout period.
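When that happens, the stuck session can be found and killed from an admin connection instead of waiting out the TCP timeout. A sketch (pg_terminate_backend() exists from 8.4 on; the pid value is hypothetical, and the procpid/current_query columns are renamed pid/query in 9.2):

```sql
-- Who is connected, what are they running, and since when?
SELECT procpid, usename, current_query, backend_start
FROM pg_stat_activity;

-- Kill the dead client's backend; its open transaction is rolled
-- back and its locks are released immediately.
SELECT pg_terminate_backend(12345);  -- 12345 = the stuck backend's pid
```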


  For your concern about dying applications, the OS will tear down the
  TCP/IP connection eventually, which will result in PG ending the
  session (rolling back and uncommitted transaction), so that approach
  doesn't really cause you problems there.

 Uhh... This sounds awful.

 Does this mean that I periodically lose my temp tables, locks and
 transactions because of the OS's network handling, which is out of my
 control?

 It sounds horrible to me. How often does this happen?

 You are the first person I've had complain that this is a common
 scenario with database applications.

There always has to be a FIRST TIME, and a FIRST PERSON... :-)

But losing the session is not a common scenario elsewhere, because many
DBs support reconnecting after a lost TCP connection.


  It sounds like

Re: [GENERAL] When the Session ends in PGSQL?

2011-07-04 Thread Durumdara
Hi!

2011/7/4 Vincent de Phily vincent.deph...@mobile-devices.fr:
 On Monday 04 July 2011 10:48:48 Durumdara wrote:
 1.) DataBases need to close the resources kept by Sessions.
 2.) There needs to be some boundary, or a rule, to decide which session is
 considered finished.

 So far so good.

...so what! (Megadeth).

:-)

 Another thing is the sign (packet). We must do something periodically to
 keep the connection alive. For example: every minute we do some
 dummy thing on the server, like selecting the date, etc.

 AFAIK postgres doesn't distinguish between a TCP session and a database
 session like (if I understand you correctly) FireBird/EDB does. You cannot
 reconnect and say "hello, it's me again from session FOOBAR, can I resume that
 session?". I believe you'll have to solve this at the application layer:

 * Make transactions, locks, temp tables, etc as short-lived as possible (this
   is always a good thing to do anyway).
 * If that's still not enough, store your current working state in a purpose-
   built table and add logic in your client to reinitialize session state
   using that data, and to purge the data after it has been used / timed out.

 Another thing you could do (but I'm not sure it is a good idea) is to write a
 proxy application that runs on the server machine and understands your session
 requirements. Then connect your application to this proxy instead of the
 database.
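The purpose-built state table Vincent suggests could look roughly like this (a sketch; every name here is made up):

```sql
CREATE TABLE app_session_state (
    app_session_id text PRIMARY KEY,   -- your own session token, not the PG backend
    state          bytea,              -- serialized working state of the client
    updated_at     timestamp NOT NULL DEFAULT now()
);

-- On reconnect: try to resume where the client left off.
SELECT state FROM app_session_state WHERE app_session_id = 'FOOBAR';

-- Periodic purge of abandoned sessions (the interval is your policy):
DELETE FROM app_session_state
WHERE updated_at < now() - interval '30 minutes';
```

The key design point is that the "session" survives any number of TCP connections, because its identity lives in your data rather than in the socket.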

Thanks for your information.

And now I'll ask the question that remained hidden in my previous mail:

What happens to running statements and stored procs at the session's end?

Are they aborted and destroyed, or can they continue running?

For example:
User X starts a long Query/STP.
But he is lost in deep space (like in Space Oddity:
http://www.youtube.com/watch?v=rKE3FSPJu-4&feature=related).
The TCP connection is aborted, then the session is marked dead.
But I don't know what happens to this query: is it aborted by the server,
or may it run indefinitely?

Thanks:
   dd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] When the Session ends in PGSQL?

2011-07-04 Thread Durumdara
Ahoj!

2011/7/4 Vincent de Phily vincent.deph...@mobile-devices.fr:
 On Monday 04 July 2011 12:49:24 Durumdara wrote:
 What happens with running statements and stored procs at Session's end?

 They will be aborted and destroyed, or they can continue the running?

 For example:
 User X starts a long Query/STP.
 But he is lost in deep space (like in Space Oddity:
 http://www.youtube.com/watch?v=rKE3FSPJu-4feature=related).
 The TCP connection aborted, then the Session is set to dead.
 But I don't know what happens with this Query? Is it aborted by Server
 or Running may infinitively?

 The running query will be rolled back when the session ends. Note that "when
 the session ends" must be understood from the server's point of view, not the
 client's.

As I understand you, running queries are forced to abort at the end of
the session, and no query keeps running past the session's ending point
(or after the TCP connection breaks).


 Temporary tables, prepared statements, and cursors (not an exhaustive list)
 will be destroyed too.

 Stored procedures will remain. Note that stored procedures in postgres are a
 bit different from what you may be used to in other dbs; while I assure you
 it's for the better, you might want to RTFM to avoid surprises.

Please explain this a little (just 1-2 sentences, please).
Or point me at some info, if possible... :-) (Prayer) :-)
I want to know about the differences.

Thanks for it!
dd



[GENERAL] When the Session ends in PGSQL?

2011-06-30 Thread Durumdara
Hi!

I asked the same thing on the list formerly,
http://archives.postgresql.org/pgsql-general/2011-06/msg00888.php

And because of the lack of answers I now ask here also:
http://stackoverflow.com/questions/6534654/postgresql-session-timeout

I'm very surprised that I don't (and can't) find a system variable
that defines the timeout of the client session. Maybe I'm a real lama,
or I'm searching with the wrong terms... :-)

The manual often refers to "session end" - when the resources are
released - but I never saw a section describing when it happens.

An RDBMS needs some rule to mark a session inactive and/or dead.

Commonly they use a Timeout parameter that controls how many
minutes must elapse before the session is marked as dead.

Firebird and ElevateDB do this too. If the client does anything, this
time period is extended again.

The cleanup is needed to remove the locks and unfinished (limbo or
started) transactions, to reduce deadlock or lock situations,
and to never reach the maximum number of connections.

So: is there anybody here who can tell me how this works in PGSQL?

And if there is no control, when does the session end?

Practical example (real):
A client program died in a transaction, and it left many locks in
the database.

If the session timeout is 5 minutes, then we can tell the other users who
got row-lock or transaction-timeout errors to wait 6-7 minutes
and then try again, because after 5 minutes the problematic session
is cleaned up, and its locks/transactions are removed from the objects.

Thanks for your help:
   dd



Re: [GENERAL] When the Session ends in PGSQL?

2011-06-30 Thread Durumdara
Hi!

2011/6/30 Bill Moran wmo...@potentialtech.com:
 In response to Durumdara durumd...@gmail.com:

 Session ends when the connection is terminated.  I.e., a PostgreSQL
 session is synonymous with a TCP connection (or domain socket in some
 cases).

What does this mean on Windows?
I know there are differences between Windows' and Linux's TCP handling.


 There is no setting in PostgreSQL to pro-actively terminate connections.

Ok.

 Such an option wouldn't make any sense to include, if you ask me.

In EDB and Firebird we found that a timeout is a good way to ride out
short network problems.
For example: in many places people use notebooks and WiFi.
If somebody walks out of the WiFi area, the connection is lost - but the WiFi
CAN re-establish it when he/she returns to the covered area. And the
DB service can then reactivate the session in the background - unless
the timeout period has been exhausted.

 I
 mean, if I open a psql and start a transaction, then get interrupted or
 have to spend some time researching something else before I can finish,
 the last thing I want is to come back to psql and find that my session
 was terminated and my in-progress data changes lost.

Maybe in psql, but not in client applications where users are working;
sometimes they hit failures like power loss, blue screens, or the
application dying.
They want to continue their work, but while the locks/transactions
remain they cannot.


 For your concern about dying applications, the OS will tear down the
 TCP/IP connection eventually, which will result in PG ending the
 session (rolling back and uncommitted transaction), so that approach
 doesn't really cause you problems there.

Uhh... This sounds awful.

Does this mean that I periodically lose my temp tables, locks and
transactions because of the OS's network handling, which is out of my
control?

It sounds horrible to me. How often does this happen?

I must be sure my session stays alive and keeps my important temp
tables, locks, etc. in a safe place...

I am waiting for your answer.

Thanks:
dd



[GENERAL] Long Query and User Session

2011-06-29 Thread durumdara

Hi!

I want to ask what happens if a long query is running and the user session
timeout is reached?


1.)

For example: somebody starts a very long query or a statistical stored
procedure.


The session timeout is 5 minutes, and the session exhausts this time.

What happens?

a.)
The Query/STP is automatically aborted by PGSQL, and the resources released.

b.)
The Query/STP continues working
b1)
for unlimited time (until it reaches the end of the Qry/STP)
b2)
for a time limited by a system parameter (max_query_running_time),
but the user is locked out of the session.

c.)
The Query/STP's running time keeps the session alive - it is counted as
user interaction too - so the session ends at Qry/STP end + 5 minutes.



2.)
Another question based on this: can I (as admin) force the PGSQL server
to abort and release a session that got into an infinite loop (i.e. to
close any session)?


3.)
What happens if the client disconnects (the program died, or the
network connection died) while a Query/STP is running?


4.)
And: does PGSQL support aborting a long query at all?
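On points 2) and 4): yes, a running query can be aborted from another (admin) session, and a statement can be bounded in time per session. A sketch (the pid 12345 is hypothetical; pg_terminate_backend() requires 8.4+):

```sql
-- Cancel just the currently running statement; the session stays connected:
SELECT pg_cancel_backend(12345);

-- Kill the whole session; its open transaction is rolled back:
SELECT pg_terminate_backend(12345);

-- Or make the server abort any statement of the current session
-- that runs longer than 5 minutes:
SET statement_timeout = '5min';
```

statement_timeout answers question 4 from the client's side; the two backend functions answer question 2 from the admin's side.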

Thanks for your help:
   dd




[GENERAL] An example for WITH QUERY

2011-06-22 Thread Durumdara
Hi!

I have 3 tables. I want to run a query that collects some data from
them and joins it into one result table.

Here is a little example of how to do this in another DB with a script:

create temp table tmp_a as select id, name, sum(cost) cost from items
where... group by id, name with data;

create temp table tmp_b as select item_id, sum(price) price from bills
where... group by item_id with data;

create temp table tmp_c as select item_id, sum(price) price from
incoming_bills where... group by item_id with data;

select
  tmp_a.id, tmp_a.name, tmp_a.cost,
  tmp_b.price outgoing_price,
  tmp_c.price incoming_price
from tmp_a
left join tmp_b on (tmp_a.id = tmp_b.item_id)
left join tmp_c on (tmp_a.id = tmp_c.item_id)
order by name

This is a very simple example. How can I write this in WITH query form?
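The three temp tables map directly onto WITH clauses. A sketch (the "..." placeholders from the example above are kept as-is):

```sql
WITH tmp_a AS (
    SELECT id, name, sum(cost) AS cost
    FROM items WHERE ... GROUP BY id, name
), tmp_b AS (
    SELECT item_id, sum(price) AS price
    FROM bills WHERE ... GROUP BY item_id
), tmp_c AS (
    SELECT item_id, sum(price) AS price
    FROM incoming_bills WHERE ... GROUP BY item_id
)
SELECT tmp_a.id, tmp_a.name, tmp_a.cost,
       tmp_b.price AS outgoing_price,
       tmp_c.price AS incoming_price
FROM tmp_a
LEFT JOIN tmp_b ON tmp_a.id = tmp_b.item_id
LEFT JOIN tmp_c ON tmp_a.id = tmp_c.item_id
ORDER BY name;
```

Each CTE plays the role of one temp table, but lives only for the duration of the statement, so there is nothing to create or drop.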

Thanks for every help!

dd



[GENERAL] Set transaction waiting mode and timeout

2011-05-18 Thread Durumdara
Hi!

In Firebird the transaction can be parameterizable as:

SET TRANSACTION
   ...
   [WAIT | NO WAIT]
   [LOCK TIMEOUT seconds]

NO WAIT forces the transaction to return instantly with an error message
when a lock conflict happens; LOCK TIMEOUT defines the seconds to
wait.
This can be set for each transaction I use.

I'm searching for the same option in pg, if it exists.
Because when I do this (the theid field is the primary key):

Con1.Begin
Con2.Begin
Con1.Exec(insert into x (theid, value) values (1, 'a'));
Con2.Exec(insert into x (theid, value) values (1, 'a'));
...
then Con2 remains blocked waiting on the lock (nothing happens, the code
is waiting for the other transaction's end).

When Con1's transaction ends, Con2 continues...

But users prefer an error message to an incomprehensibly frozen screen.
And I also prefer the controllable situation (I can retry 3 times, etc.).
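The closest PostgreSQL equivalents I know of, as a sketch (note that lock_timeout only exists in newer releases, so check your version):

```sql
-- Fail instantly instead of waiting, where an explicit row lock is taken:
SELECT * FROM x WHERE theid = 1 FOR UPDATE NOWAIT;
-- raises an error immediately if the row is already locked

-- Or bound the wait for the whole statement:
SET statement_timeout = '5s';   -- aborts any statement after 5 seconds
-- SET lock_timeout = '5s';     -- 9.3+: aborts only when waiting on a lock
```

With statement_timeout the client gets a catchable error instead of a frozen screen, which allows the retry-3-times pattern described above.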

Thanks for your help:
 dd



[GENERAL] Unique Session ID in PGSQL?

2011-05-18 Thread Durumdara
Hi!

Is there any function in PGSQL that returns a unique session
identifier for the current session?
For example a bigint, or a GUID, etc.?
Can I get the living session identifiers from PG?

We have a little application that uses this info; it originally
came from another database system, and I want to port it, but I don't
know how to do this in PGSQL...
In this app we associated (stored) the session id with a special field
in some tables.
We can retrieve the living session ids with a system function.

If a session died, these records become irrelevant - we can
delete them. If the session is still alive, we don't touch them, we only
read the content...

I see advisory locks in pg, but because the quantity of locks is
pre-determined by the server, I didn't choose them.
I prefer a technique that cannot exhaust a fixed number of persistent elements.
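A sketch of what PostgreSQL offers here: the backend process id identifies the current session, and the list of living sessions is in pg_stat_activity (procpid is renamed pid in 9.2):

```sql
-- Identifier of the current session:
SELECT pg_backend_pid();

-- Living sessions. A pid alone can eventually be recycled by the OS,
-- so storing pid together with backend_start makes the pair unique:
SELECT procpid, backend_start FROM pg_stat_activity;
```

Cleanup then becomes "delete the rows whose (pid, backend_start) pair no longer appears in pg_stat_activity".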

Thanks:
   dd



Re: [GENERAL] Read Committed transaction with long query

2011-05-12 Thread Durumdara
Hi!

2011/5/12 Albe Laurenz laurenz.a...@wien.gv.at:
 Durumdara wrote:
Two table:
Main
Lookup

The query is:
select Main.*, Lookup.Name
left join Lookup on (Main.Type_ID = Lookup.ID)

 That's not correct SQL, but I think I understand what you mean.

Sorry, the FROM is missing here... :-(



Lookup:
ID Name
1 Value1
2 Value 2
3 Value 3

Many records is in Main table (for example 1 million).

What happens in this case (C = connection):

C1.) begin read committed
C1.) starting this query
C1.) query running
C2.) begin read committed
C2.) update Lookup set Name = New2 where ID = 2
C2.) commit
C1.) query running
C1.) query finished

Is it possible that the first joins (done before C2's modification)
contain Value2 at the beginning of the query and New2 at the
end of the query?
So is it possible that the long query returns an inconsistent state
because of C2's change? For example, mixing Value2 and New2?

 No, this is not possible.

Thanks! Great!


 See
 http://www.postgresql.org/docs/current/static/transaction-iso.html#XACT-
 READ-COMMITTED :

  "When a transaction uses this [read committed] isolation level, a SELECT query
  (without a FOR UPDATE/SHARE clause) sees only data committed before the query began;
  it never sees either uncommitted data or changes committed during query execution
  by concurrent transactions."

Does "query" mean "statement" here?
For example, if I have several statements in one query, do they run
separately?
Can they see each other's modifications?

Query text (or stored procedure body):
insert into ... ; +
update ...; +
select ...

Are they handled as one unit, or one by one?
AutoCommit = False!

Thanks:
dd



[GENERAL] Read Committed transaction with long query

2011-05-11 Thread Durumdara
Hi!

Two table:
Main
Lookup

The query is:
select Main.*, Lookup.Name
left join Lookup on (Main.Type_ID = Lookup.ID)

Lookup:
ID Name
1 Value1
2 Value 2
3 Value 3

Many records is in Main table (for example 1 million).

What happens in this case (C = connection):

C1.) begin read committed
C1.) starting this query
C1.) query running
C2.) begin read committed
C2.) update Lookup set Name = New2 where ID = 2
C2.) commit
C1.) query running
C1.) query finished

Is it possible that the first joins (done before C2's modification)
contain Value2 at the beginning of the query and New2 at the
end of the query?
So is it possible that the long query returns an inconsistent state
because of C2's change? For example, mixing Value2 and New2?

Thanks for your help!

Regards:
dd



[GENERAL] ZEOS or PGDAC - How to lock a resource?

2011-05-04 Thread durumdara

Hi!

We will be porting an application to PGSQL from a table-based app (BDE
like).


The older application used a special technique of the driver: if a record
was edited, an exclusive, forever-living lock (spanning transactions) was put on it.

On exit, cancel, or post, this lock was removed.

We used this to lock the main resource against concurrent edits.

For example:
A product (a car) has many properties and sublists (details), like
color, transport date, elements (what we need to build into the car: wheels,
etc.), bill information, subtransport times, etc.
Because ALL of these belong to the product, we must protect it with an edit lock
on open.

The subinformation is easily editable and postable (there is autocommit).

Now I'm searching for a similar technique in PGSQL.
As I read, the locks are transaction-scoped, because they vanish
on rollback/commit.


But we want to save the subelements while editing (one by one), not when
saving the main record.
This means that we break the transaction with a commit - ergo the lock
vanishes.


For example:
Car Edit:
- Lock This car
- Edit color
- Open product elements tab
- Add two new elements
- Save them (ApplyUpdates, Commit)
- Add a bill date
- Save it (Apply, Commit)
- Post car record (Apply, Commit)
- Release resource
- Close Form

I read the help, but I saw only transaction-scoped locks.

Zeos and PGDAC are not like IBX/IBO (Firebird), so they don't have a
Transaction component; I can use only one transaction per connection.


How can I get a lock mechanism that is:
- session based
- unlimited in how many I can use
- linked to a row, or a resource name
- testable (I can check whether it exists or not)

Like Mutex in Windows, but in PGSQL...
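For what it's worth, advisory locks behave very much like that mutex: they are session based (they survive commits and are released on unlock or disconnect), keyed by an integer you choose (e.g. a row id), and testable. The lock-table size is bounded by max_locks_per_transaction x max_connections, which is configurable rather than a tiny fixed pool. A sketch (the key 42 is hypothetical, e.g. the car's id):

```sql
-- Take a session-level "edit lock" on car 42. Held until unlock or
-- disconnect; commits in between do NOT release it:
SELECT pg_try_advisory_lock(42);
-- returns true if acquired, false if another session holds it

-- ... edit, commit sub-records as often as you like ...

SELECT pg_advisory_unlock(42);

-- Test: see who currently holds advisory locks:
SELECT * FROM pg_locks WHERE locktype = 'advisory';
```

pg_advisory_lock(42) is the blocking variant; pg_try_advisory_lock fits the "test whether it exists" requirement.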

Thanks for your help:
   dd



[GENERAL] Multiple instances with same version?

2011-04-20 Thread durumdara

Dear Everybody!

I want to ask whether there is some way to install PGSQL 9.0 as two instances
on one machine?


This is the most important question. The OS can be Windows or Linux.

I ask this because formerly we heard a story. I cite it as I
remember:


   We have some product, and the customer's only server
   (Win) has a PG 9.0 version installed.
   But the developer company is lost in space; only the software
   (must) keep working...

   We don't know the root password, and we don't want to hack it (the
   system must keep working).
   But we needed to install our version of PG, which is also 9.0
   (because of the new functions)...
   ...

We want to prepare for such situations by learning about PG.

With Firebird and MS-SQL this case is no problem, because on the same
machine we can install other instances of the same version.


But I don't know whether PG supports multiple instances of the same
version or not?


Another interesting question is the users.

In our systems we create a db user for every real user. If there are 100, we
have the same number of db users.


But if we want to serve more databases in one place, we may get conflicts
between users. For example: all of the databases have a user JohnM.


If we can run multiple instances, the problem vanishes, because each
has its own user list.


If we cannot, then the only idea I have is to prefix the usernames
with a short db name.

For example:
offer_db users: off_JohnM, off_MaryK
press_db users: prs_JohnM, prs_TomR

Please help a little so I can learn the limitations of PG.

Thanks for it!

Regards:
dd


[GENERAL] Bug in PGSQL 9.0 with handling chr(1..255) in Win1250 context?

2011-04-14 Thread Durumdara
Hi!

Windows 7x64, PG9.0, PGAdmin 1.12.1.

First I sent this problem to PGADMIN support list, you can see it, but
I repeat the description.

http://archives.postgresql.org/pgadmin-support/2011-04/msg00012.php

I tried to fill up a text field with all ANSI characters
chr(1)..chr(255) - except 0, because it is the string terminator in C.

This experiment is based on wanting to check how the row is encoded
in a backup (how COPY eats this).

But when I tried to view the row in PGAdmin, I got this error:

ERROR:  character 0x81 of encoding WIN1250 has no equivalent in UTF8

** Error **

ERROR: character 0x81 of encoding WIN1250 has no equivalent in UTF8
SQL state: 22P05


I reported this to the PGADMIN list, but Dave said:

"That error comes from PostgreSQL, and is entirely expected if you try
to store non-printable characters in a text column, and then convert
it to UTF-8 (which is what happens when pgAdmin requests data).
PostgreSQL is extremely strict about enforcing the correctness of
unicode data."

The interesting thing is that I have a Win1250 context with a Win1250 db.

I think it is unacceptable that I cannot see the row.

EMS PG Manager can show the row, but maybe it does it some other way
- I don't know.
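One workaround to at least inspect such a row, e.g. from psql (a sketch; the column/table names are hypothetical, and this only sidesteps the UTF-8 conversion - it does not make 0x81 a defined Win1250 character):

```sql
-- Ask the server to send bytes in the database encoding, unconverted:
SET client_encoding = 'WIN1250';

SELECT the_text_column FROM the_table WHERE id = 1;
```

This may be what EMS does internally: a client whose client_encoding matches the database encoding never triggers the WIN1250-to-UTF8 conversion that pgAdmin forces.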

Thanks:
dd



[GENERAL] Blob handling with Delphi...

2011-04-13 Thread Durumdara
Hi!

PG9.0, Delphi 6, Zeos.

I want to use a PGSQL bytea field as a normal BLOB field in Delphi.

But when I insert a value into this field - for example all characters (chr
0..255) - and then fetch it and save it as a blob stream into a file, I get an
interesting result, not what I stored previously.

It got a \x prefix, and the values are stored as hexadecimal.

Is this normal? Do I need to convert this format to something readable before I
use it, or can I get the same result as in other databases/adapters (the stream
saved with BlobField.SaveToFile has content equal to what BlobField.LoadFromFile
loaded)?

Many DBAware components can show the blob directly as an Image. With PG's \x
prefix this won't work well... :-(
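Yes, this is expected in 9.0: the default text output format for bytea changed from the old escape format to hex (the \x prefix). If the Delphi components expect the old behavior, the server can be told to keep producing it. A sketch (the database name is hypothetical):

```sql
-- Per session:
SET bytea_output = 'escape';

-- Or persistently for one database:
ALTER DATABASE mydb SET bytea_output = 'escape';
```

Drivers that use the binary protocol for bytea are unaffected either way; only text-mode fetches see the \x form.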


Thanks for your help:
dd


Re: [GENERAL] How to get index columns/dir/ord informations?

2011-04-02 Thread durumdara

Hi!

The pg_index and pg_indexes views are good for getting the index names and types.

I have two indexes on test table a:

CREATE INDEX ix1
  ON a
  USING btree
  (a);


CREATE UNIQUE INDEX x2
  ON a
  USING btree
  (a DESC, b);

From this I can recognize the type (unique or normal) of the index, but
none of the columns.


I have not found any catalog that can tell me which columns, with which
direction, are used in an index.


Pseudo code to demonstrate it:

select * from pg_index_columns where index_name = 'x2'

Ordinal   ColName   IsAsc
1         a         False
2         b         True

Does PGSQL have the same information?
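The information is there, split between pg_index.indkey (the ordered list of column numbers) and pg_index.indoption (per-column DESC/NULLS flags); often the quickest check is the reconstructed DDL. A sketch:

```sql
-- Full definition, including DESC, in one call:
SELECT pg_get_indexdef('x2'::regclass);
-- e.g. CREATE UNIQUE INDEX x2 ON a USING btree (a DESC, b)

-- Per-column detail: indkey gives the column ordinals in index order,
-- and bit 0 of the matching indoption entry is set for DESC:
SELECT i.indisunique, i.indkey, i.indoption
FROM pg_index i
WHERE i.indexrelid = 'x2'::regclass;
```

Comparing pg_get_indexdef() output against the DDL the migrator wants to create is usually enough to decide whether the needed index already exists.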

Thanks:
dd




On 2011-04-01 at 18:01, Raghavendra wrote:

Hi,

Query to list the tables and their indexes.

SELECT indexrelid::regclass as index , relid::regclass as
table FROM pg_stat_user_indexes JOIN pg_index USING
(indexrelid) WHERE idx_scan  100 AND indisunique IS FALSE;

This query will list the constraints.

SELECT relname FROM pg_class WHERE oid IN ( SELECT indexrelid FROM 
pg_index, pg_class WHERE pg_class.oid=pg_index.indrelid AND ( 
indisunique = 't' OR indisprimary = 't' ) );


To get the column order number, use this query.

SELECT a.attname, a.attnum FROM pg_class c, pg_attribute a, pg_type t
WHERE c.relname = 'vacc' AND a.attnum > 0 AND a.attrelid = c.oid AND
a.atttypid = t.oid;


Note: this query is for a particular table, 'vacc'.


Best Regards,
Raghavendra
EnterpriseDB Corporation

On Fri, Apr 1, 2011 at 8:54 PM, Durumdara durumd...@gmail.com wrote:


Hi!

I want to migrate some database to PG.
I want to make intelligens migrator, that makes the list of the
SQL-s what need to do to get same table structure in PG as in the
Source DB.

All things I can get from the views about tables, except the indices.

These indices are not containing the constraints - these elements
I can analyze.

I found and SQL that get the index columns:


select
t.relname as table_name,
i.relname as index_name,
a.attname as column_name
from
pg_class t,
pg_class i,
pg_index ix,
pg_attribute a
where
t.oid = ix.indrelid
and i.oid = ix.indexrelid
and a.attrelid = t.oid
and a.attnum = ANY(ix.indkey)
and t.relkind = 'r'
and t.relname = 'a'
and ix.indisunique = 'f'
and ix.indisprimary = 'f'
order by
t.relname,
i.relname;

This can list the columns. But - what a pity - this don't
containing that:
- Is this index unique?
- What the direction of the sort by columns
- What is the ordinal number of the column

So everything what I need to analyze that the needed index is
exists or not.


Please help me: how can I get these informations?
I don't want to drop the tables everytime if possible.

Thanks:
dd






[GENERAL] How to get index columns/dir/ord informations?

2011-04-01 Thread Durumdara
Hi!

I want to migrate a database to PG.
I want to make an intelligent migrator that produces the list of the SQL
statements needed to get the same table structure in PG as in the source DB.

Everything I need I can get from the views about tables, except the indices.

These indices do not include the constraints - those elements I can
analyze.

I found and SQL that get the index columns:


select
t.relname as table_name,
i.relname as index_name,
a.attname as column_name
from
pg_class t,
pg_class i,
pg_index ix,
pg_attribute a
where
t.oid = ix.indrelid
and i.oid = ix.indexrelid
and a.attrelid = t.oid
and a.attnum = ANY(ix.indkey)
and t.relkind = 'r'
and t.relname = 'a'
and ix.indisunique = 'f'
and ix.indisprimary = 'f'
order by
t.relname,
i.relname;

This can list the columns. But - what a pity - it doesn't tell me:
- Is this index unique?
- What is the sort direction of each column?
- What is the ordinal number of the column?

In short: everything I need to analyze whether the needed index exists or not.


Please help me: how can I get this information?
I don't want to drop the tables every time, if possible.

Thanks:
dd


[GENERAL] Linux, Hungarian charset (Win1250) is supports the hungarian collation?

2011-03-21 Thread Durumdara
Dear Everybody!

We need to choose a DB for our new project.
Two databases are possible choices:

1.) PGSQL 9.x

2.) FireBird 2.x

We need to serve 75/80 users at a time.
The client platform is Windows, Delphi based applications with [Zeos/PGDAC]
or [IBX/ZEOS].
The server may be Windows, but in other places may be Linux!

I want to ask about PG, because formerly I experienced a strange thing with
it, and I need to check whether it is possible to use it instead of FB in the
project.

The language is Windows 1250 (ISO-8859-2).

I remember that when I tried in 8.1 to create a database the same as on Windows:

CharSet: Win1250
Collation: - (disabled, and handled as HUN - iso-8859-2)

I failed.

Because on Linux (Ubuntu, as I remember) collation with Win1250 is not
supported, only C ordering.
The only possible way was to change the charset to UTF; then the collation
could be Windows1250...

But I want to avoid the UTF hell if possible.

Because I don't have Linux here now, I cannot test PG 9.0...

Latin2 may be the solution, but Latin2 may also support only the C
collation.

The Hungarian language has special accents. The good order is:

AÁEÉIÍOÓÖŐUÜŰ

Can anybody help me to check this on Linux and PG 9.x?


Thanks for your help:
dd


[GENERAL] Disconnect/Resource releasing/PING

2011-03-21 Thread Durumdara
Hi!

Many databases (like Firebird) have some time limitation for clients.
These cases are possible with a client:

a.) It is working, but the network connection is temporarily down.
b.) The client crashes.
c.) The client is on, but has not been working for N minutes.

When the server notices that the client has not spoken for N minutes, the
handle and all resources are freed.

I want to know about this case in PG.

1.) Do I need to PING the server every N minutes with a select time()
query to keep the handle alive?
2.) What happens if the client crashed or died? When are the resources (temp
tables, cursors, etc.) freed?
3.) Can I configure this per database, or only per server?
4.) If I use PGDAC or ZEOS to access PG, do they do this PING
automatically?
5.) What happens if a client died while its (long) query is running? Is the
query cancelled by the server, or does it run to completion?
6.) If a long query is running, can the client get a timeout because it
cannot communicate?


Thanks for your help:
   dd


[GENERAL] PG and dynamic statements in stored procedures/triggers?

2011-03-07 Thread Durumdara
Hi!

In other RDBMSes I found a way to make dynamic statements.
I can use variables, or concatenate the SQL segments and execute the result.

:tablename = call CreateTempTable;
insert into :tablename 
drop table :tablename

or (FireBird-like cursor handling):

sql = 'select * from ' || :tablename || ' where...'
for select :sql 
...

Can I do the same thing in PGSQL too?

Thanks:
   dd


Re: [GENERAL] PG and dynamic statements in stored procedures/triggers?

2011-03-07 Thread Durumdara
Hi!

Thanks!

How do I create a cursor or a FOR SELECT loop in PGSQL in a dynamic way?

For example:

:tbl = GenTempTableName()
insert into :tbl...
insert into :tbl...
insert into :tbl...

for select :part_id from :tbl begin
exec 'select count(*) from subitems where id = ?' using :part_id into
:sumof
update :tbl set sumof = :sumof where part_id=:part_id
end;

Can you show me such an example?
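A sketch of the PL/pgSQL shape for this pseudocode (the function, table, and column names are made up to mirror the example; quote_ident() guards the dynamic identifier, and EXECUTE ... USING passes the values):

```sql
CREATE OR REPLACE FUNCTION fill_sums(tbl text) RETURNS void AS $$
DECLARE
    rec   record;
    sumof bigint;
BEGIN
    -- dynamic FOR SELECT over a table whose name arrives at runtime
    FOR rec IN EXECUTE 'SELECT part_id FROM ' || quote_ident(tbl) LOOP
        -- dynamic statement with a bound parameter ($1) and INTO
        EXECUTE 'SELECT count(*) FROM subitems WHERE id = $1'
            INTO sumof USING rec.part_id;
        EXECUTE 'UPDATE ' || quote_ident(tbl)
             || ' SET sumof = $1 WHERE part_id = $2'
            USING sumof, rec.part_id;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

Identifiers must be concatenated (via quote_ident); values should go through USING placeholders rather than string concatenation.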

Thanks:
dd

2011/3/7 Adrian Klaver adrian.kla...@gmail.com

 On Monday, March 07, 2011 6:32:44 am Durumdara wrote:
  Hi!
 
  In other RDBMS I found a way to make dynamic statements.
  I can use variables, or concat the SQL segments, and execute it all.
 
  :tablename = call CreateTempTable;
 
  insert into :tablename 
  drop table :tablename
 
  or (FireBird like cursor handling):
 
  sql = select * from  || :tablename ||  where...
  for select :sql 
  ...
 
  Can I do same thing in PGSQL too?
 
  Thanks:
 dd


 http://www.postgresql.org/docs/9.0/interactive/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN
 --
 Adrian Klaver
 adrian.kla...@gmail.com



[GENERAL] Bytea/Text blob deletion is very slow...

2010-05-26 Thread Durumdara
Hi!

PG 8.4, Windows XP, Python.

I have a program that makes many picture versions from an original with
XnView effects.
Because I want to preserve the integrity and previous data, I used transactions
and PGSQL.

The main problem with blobs is that insertion is good and select is good, but
deletion is very slow.

I had a 300 MB source, and a 4 GB database with effect-converted images.
The table was:
blobs(id integer, file_id integer, ext char(3), size integer, blob bytea)
with a primary key, and some indexes on file_id, ext, size.

But these indexes don't matter; when I stored the blobs in fields of the
picture table I also had this problem.

Extremely slow - meaning that I started a query that does:

delete from blobs where file_id in ()

(file_id has an index, and select is very fast on it).

And I needed to wait 2 hours for the deletion, and 1 hour for vacuum... :-(

I tried with PySQLite also. The deletion was also slow, but only 30 minutes,
though the compact was a 1-hour process...

Ok, I know it is better to store blobs in other ways, but I want to preserve
the integrity, and anydbm (for example) is not browsable, etc...

Previously I tried Text fields (with Hexlify), but that was also slow. So
something is basically wrong with blobs, or blob deletion...

Thanks for your help:
   dd


[GENERAL] Opened ports vs. Packages...

2010-05-03 Thread Durumdara
Hi!

I want to ask about PostgreSQL security (cracking/hacking) for a web database.

So:

We have a web site written in Python. The DB engine is PostgreSQL.

The first design was that PostgreSQL is used only locally, and all other
operations are realized with encoded XML packets.

These operations are: filling in partner data and filling in offer data.

The client application is Delphi-based. It was a little hard to convert
local data (DataSets, etc.) to XML and decode it on the server side (in
Python).

The new design needs more flexibility: we can fill in more data and read
some (long) query results.

The main problem is that I need to open PostgreSQL's port to the net, with
logins possible from outside.

1.) Authentication - password guessing.
The clients access PostgreSQL through Zeos or PgDAC. I don't know which auth
methods they support, but I think md5 and plain text are not enough
here...

2.) Open port - the possibility of cracking PostgreSQL itself.
I don't know about attacks on PostgreSQL's port, but everything is possible. If
someone hacks the database without knowing the password (with some special code
injection), we are in trouble...

3.) Server overload through DoS (for example: many, many login attempts).

4.) Lost connections - how do I handle a connection lost on a flaky network, or
a temporary outage?

If we don't want to open the port, we need to build a complete interface that
knows the same things as Zeos.

Zeos (as far as I know) supports data packets - the data is loaded in batches,
and when you scroll to the end of the table, you get the next batch.

We would need to develop the same thing to load all the info we want,
and an update mechanism to reflect changes, etc.

We would need to create a mechanism that reads queries, processes them, and
returns the result packets...

And this data changes - because if I open a query on the web server at time
X, and get a new request at time Y for the next batch, the query's data may
have changed in between...

For example:
select * from data
1.) A-D (select * from partner top 0 range 1000)
2.) C-F (select * from partner top 1001 range 2000)

because between these requests there was a big modification... (in the
web server, every request opens a new cursor).

This is hard work.

So... what magic can we use? And what have you heard about open
ports/attacks/connections?

Thanks:
   dd


Re: [GENERAL] Charset Win1250 on Windows and Ubuntu

2009-12-21 Thread Durumdara
Hi!

2009/12/19 Albe Laurenz laurenz.a...@wien.gv.at

 If you need the data in WIN1250 on the client side, change the client
 encoding to WIN1250.

 So:
 - Create the database with UTF8.
 - Change the client encoding to WIN1250 (e.g. by setting the environment
 variable PGCLIENTENCODING).
 - Import the dump of the Windows database. It will be converted to UTF-8.
 - Make sure that the client program has client encoding WIN1250.

 Yours,
 Laurenz Albe


So if I have Python and PyGreSQL, can I set this value from Python?
The main problem is that I don't want to set it globally - other
applications may want to use a different encoding...

Thanks for your help:
   dd


Re: [GENERAL] Charset Win1250 on Windows and Ubuntu

2009-12-21 Thread Durumdara
Hi!

2009/12/21 Albe Laurenz laurenz.a...@wien.gv.at

 Durumdara wrote:
  - Change the client encoding to WIN1250 (e.g. by
  setting the environment variable PGCLIENTENCODING).
 
  So if I have Python and pygresql, can I set this value in Python?
  The main problem that I don't want to set this value globally
  - possible another applications want to use another encoding...

 There may be special Python functions, but you can use the following
 SQL statement: SET client_encoding TO 'WIN1250'


And what happens when the DB encounters a non-WIN1250 character in the SQL?
Is it converted to '?', or is an exception raised?
And if the UTF-8 database contains a character outside WIN1250?
Is it replaced with '?' in the result, or is an exception raised?
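As far as I know, PostgreSQL raises an error in both directions rather than silently substituting '?'. Python's cp1250 codec makes the two behaviors easy to compare — strict failure versus explicit replacement:

```python
# Hungarian text round-trips through cp1250; Greek does not.
text = 'árvíztűrő'
greek = 'αβγ'

assert text.encode('cp1250').decode('cp1250') == text  # convertible: fine

try:
    greek.encode('cp1250')          # strict, like PostgreSQL: an error
except UnicodeEncodeError as exc:
    print('error:', exc.reason)

print(greek.encode('cp1250', errors='replace'))  # b'???' - the silent variant
```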

Thanks:
   dd


[GENERAL] Charset Win1250 on Windows and Ubuntu

2009-12-18 Thread Durumdara
Hi!

I have software that uses PostgreSQL. This program (and website) was developed
on and runs on Windows (XP/2003), with the native charset (WIN1250).

Last week we got a special request to install this software on a Linux
server.

Yesterday I installed Ubuntu 9.10 on VirtualBox and tried to move the
database to Linux.

The first big problem: when I tried to create a database with the same
parameters as on Windows, PgAdmin showed an error.
The error message is:
Error: new encoding (Win1250) is incompatible with the encoding of the
template database (UTF8).

Ok, I changed to template0.

Then I got an error that WIN1250 is not valid for the collation hu_HU.UTF8.

When I tried to insert Hungarian characters (to check the sort order), the C and
POSIX collations returned the wrong order - as I expected.

The Windows version of PostgreSQL and PgAdmin does not support collations, so
those two options (collation, character type) are disabled.

But on Linux, only the UTF-8 version can sort rows in the right order.

The problem is that the client program is WIN1250-based, and I would have to
rewrite everything to get the same results.

Does anybody have a way, some tricky solution, for this problem?

Thanks for your help:
dd


[GENERAL] Can I get Field informations from system tables?

2009-08-13 Thread Durumdara
Hi!

I need to get information about the tables.
For example:
Tnnn:
1. [Field Name, Type, Size, NotNull]
2. [Field Name, Type, Size, NotNull]
...

The test table is this:

CREATE TABLE testfields
(
  fbigint bigint NOT NULL,
  fbool boolean,
  fchar character(100),
  fcharv character varying(100),
  fdate date,
  fdouble double precision,
  fint integer,
  fnum numeric,
  fmemo text,
  ftimez time with time zone,
  ftime time without time zone,
  ftimestampz timestamp with time zone,
  ftimestamp timestamp without time zone,
  fserial serial NOT NULL,
  CONSTRAINT testfields_pkey PRIMARY KEY (fserial)
)

From PyGreSQL, I get only this information:

Command line: C:\Python25\python.exe c:\PGDB_T~1.PY
Working directory: c:\
Timeout: 0 ms

[{'FSERIAL': 1, 'FMEMO': 'fdsf sdf dsfds sdd sfsdfsdfsd dsfsd sdfsd ssdsd
sdsd', 'FCHAR':
'alma
', 'FBIGINT': 1L, 'FNUM': Decimal(454.3234), 'FTIMESTAMPZ': '1999-01-01
10:10:10+01', 'FINT': 43545, 'FTIMEZ': '10:10:10+02', 'FDOUBLE': 4.5656656,
'FTIME': '10:10:10', 'FCHARV': 'alma', 'FDATE': '1999-03-25', 'FTIMESTAMP':
'1999-01-01 10:10:10', 'FBOOL': True}]

('fbigint', 'int8', None, 8, None, None, None)
('fbool', 'bool', None, 1, None, None, None)
('fchar', 'bpchar', None, -1, None, None, None)
('fcharv', 'varchar', None, -1, None, None, None)
('fdate', 'date', None, 4, None, None, None)
('fdouble', 'float8', None, 8, None, None, None)
('fint', 'int4', None, 4, None, None, None)
('fnum', 'numeric', None, -1, None, None, None)
('fmemo', 'text', None, -1, None, None, None)
('ftimez', 'timetz', None, 12, None, None, None)
('ftime', 'time', None, 8, None, None, None)
('ftimestampz', 'timestamptz', None, 8, None, None, None)
('ftimestamp', 'timestamp', None, 8, None, None, None)
('fserial', 'int4', None, 4, None, None, None)

Process Python Interpreter terminated, ExitCode: 

The main problem is that I don't see the size of the char/varchar fields, and I
can't determine whether an int field is a serial or not.
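Both pieces of information are available from information_schema.columns: character_maximum_length gives the declared char/varchar size, and a serial column is recognizable by its nextval(...) default. A sketch — the query text only, and note the serial check is a heuristic, not an official flag:

```python
# Query to run per table; %s is the table-name parameter.
COLUMNS_SQL = """
SELECT column_name, data_type, character_maximum_length,
       is_nullable, column_default
FROM information_schema.columns
WHERE table_name = %s
ORDER BY ordinal_position
"""

def looks_like_serial(data_type, column_default):
    """Heuristic: a serial is an integer-ish column with a nextval(...) default."""
    return bool(data_type in ('integer', 'bigint')
                and column_default
                and column_default.startswith('nextval('))

print(looks_like_serial('integer', "nextval('testfields_fserial_seq'::regclass)"))
# True
```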

Thanks for the help:
   dd


[GENERAL] = Null is Null?

2009-07-08 Thread Durumdara
Hi!
select * from any where is_deleted = Null
select * from any where is_deleted in (Null)

These show 0 records.

select * from any where is_deleted is Null

This shows all records.

Some other DBs treat Null as equal to Null in = comparisons. Does PostgreSQL
not? What are the rules?
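The rule is SQL's three-valued logic: `x = NULL` evaluates to NULL (unknown) for every x, and a WHERE clause keeps only rows where the condition is true, so nothing matches; only the IS NULL predicate is a real true/false test. A tiny Python model of the rule, with None standing in for NULL:

```python
def sql_eq(a, b):
    """SQL '=': unknown (None) if either operand is NULL, else plain equality."""
    if a is None or b is None:
        return None      # unknown - WHERE treats it like false
    return a == b

def sql_is_null(a):
    """SQL 'IS NULL': a real predicate, always strictly true or false."""
    return a is None

print(sql_eq(None, None))    # None -> the row is filtered out
print(sql_is_null(None))     # True -> the row is kept
```

PostgreSQL also offers `IS NOT DISTINCT FROM`, the null-safe equality that some other databases emulate with `=`.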

Thanks: dd


Re: [GENERAL] An example needed for Serializable conflict...

2009-07-07 Thread durumdara

Hi!

Thanks for your help!

Another question: if I use only SELECTs, can I get a serialization error?

For example:
I need a report tool that shows the actual state of the business.
Because I use value-dependent logic, I MUST read a consistent state to 
preserve the equality of many values (sums, counts, etc.).

So some (Read Committed) threads update/delete/insert (modify the sums in) 
rows, but this report tool only READs the tables, and writes only to temp 
tables.

Can this transaction get a serialization error?
Or can the modifying threads get an error while this (serializable 
report) thread is reading the rows they want to modify?

This is the main question.

Thanks for reading/answering!

dd



2009.07.07. 11:36 keltezéssel, Albe Laurenz írta:

Durumdara wrote:
   

Please send me an example (pseudo-code) for Serializable conflict.
And I wanna know, if possible, that if more transactions only
read the tables in Serializable mode, and one or others write
to it, can I got some conflicts in read operation?
 


You get a serialization conflict if you try to modify a row
in a serializable transaction T1 that has been changed by a second
transaction T2 after T1 started.

Sample 1:

T1: START TRANSACTION ISOLATION LEVEL SERIALIZABLE;

T1: SELECT * FROM t;
  id | val
 ----+------
   1 | test
(1 row)

T2: DELETE FROM t WHERE id=1;

T1: UPDATE t SET val='new' WHERE id=1;
ERROR:  could not serialize access due to concurrent update

Sample 2:

T1: START TRANSACTION ISOLATION LEVEL SERIALIZABLE;

T1: SELECT * FROM t;
  id | val
 ----+------
   1 | test
(1 row)

T2: UPDATE t SET val=val WHERE id=1;

T1: DELETE FROM t;
ERROR:  could not serialize access due to concurrent update


Yours,
Laurenz Albe
   




[GENERAL] An example needed for Serializable conflict...

2009-07-02 Thread Durumdara
Hi!
Please send me an example (pseudo-code) of a Serializable conflict.
And I want to know, if possible: if several transactions only read the
tables in Serializable mode, and one or more others write to them, can I get
conflicts in the read operations?

c = client, t = transaction

c1t1 begin serial
c1t1 select sum(netto) from order_items where order_code = 1

c2t2 begin
c2t2 insert into order_items 

c3t3 begin serial
c3t3 select sum(netto) from order_items where order_code = 2

c2t2 commit

c4t4 begin serial
c4t4 select sum(netto) from order_items where order_code = 1

c1t1 select count(order_items)

c4t4 rollback

c1t1 rollback

c3t3 rollback

I want to understand which limitations I need to watch for in my code to avoid
conflicts on important queries where my calculated values must stay equal
(like count(*) = len(fetched(*))).

Sorry for the RTFM-like question, but this is rather a learn-by-doing matter,
so your experience is needed.
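Whatever the answer for pure readers, the defensive pattern around serializable transactions is always the same: catch the serialization failure, roll back, and rerun the whole transaction. A sketch of that retry loop — SerializationError stands in for the driver's "could not serialize access" error, and the transaction body is a placeholder:

```python
class SerializationError(Exception):
    """Stand-in for the driver's 'could not serialize access' error."""

def with_retry(txn, attempts=3):
    """Run txn(); on a serialization failure, retry the whole transaction."""
    for attempt in range(1, attempts + 1):
        try:
            return txn()        # commit happens inside txn on success
        except SerializationError:
            if attempt == attempts:
                raise           # give up after the last attempt
            # a real driver would ROLLBACK here before retrying

calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise SerializationError()
    return 'committed'

print(with_retry(flaky))  # committed (succeeds on the third attempt)
```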

Thanks for your help:
dd


Re: [Fwd: Re: [GENERAL] Python client + select = locked resources???]

2009-07-01 Thread durumdara

Hi!

2009.07.01. 9:43 keltezéssel, Craig Ringer írta:

On Wed, 2009-07-01 at 09:23 +0200, durumdara wrote:


   

Firebird has repeatable read, but PG does not have it. Hmmm... Then that means
that every statement is in a new transaction context, which can make the views
inconsistent...
For example (pseudo):
select item_head, count(items)
select items

Possible: count(items) <> len(fetchall(items)) if someone committed a
new record into the items table...

Am I thinking correctly?
 


You are. However, this is ALSO true in a transaction by default.
   

Sh*t... (* = [o | u ])... :-(

MySQL doc:

   REPEATABLE READ
   This is the default isolation level for InnoDB. For consistent
   reads, there is an important difference from the READ COMMITTED
   isolation level: All consistent reads within the same transaction
   read the snapshot established by the first read. This convention
   means that if you issue several plain (nonlocking) SELECT statements
   within the same transaction, these SELECT statements are consistent
   also with respect to each other. See Section 13.2.8.2,


PostgreSQL defaults to the READ COMMITTED isolation level, which means
that statements may see data that was committed by another transaction
after the start of the transaction in which the statement is run, but
before the start of the statement.
   

Ah... this is what I wanted to avoid.

In the Firebird world I simply used Repeatable Read without concurrency 
or other errors.

If some updates conflicted in no-wait mode, FB sent me an error 
message to show we had a problem.
But normally, if someone simply read the tables, or inserted new records 
into them, FB handled the case without errors.

I got an error message only if two transactions wanted to do something to 
the same record at the same time...


This is conflict:
tr1: update a set a.code = a.code
tr2: update a set a.name = 'al' where a.code = 1

This is not:
tr1: select count(*) a1 from a
tr2: update a set a.name = 'al' where a.code = 1
tr2: insert into a ...
tr1: select count(*) a2 from a (a1 = a2)
tr1: select * from a (a1 = a2 = fetched(records))


If you want to avoid that, you may use the SERIALIZABLE isolation level.
That has its own complications and costs, though, including the need to
be prepared to retry any transaction after a serialization failure.

(Of course, your app should be prepared to retry a transaction ANYWAY
unless you're incredibly sure your code is perfectly free from lock
conflicts etc).
   
Sometimes we need consistent data; this result [select count(*) from a <> 
fetchall(select * from a)] is not good.


But I'm not sure about SERIALIZABLE mode, because I don't know whether, if I 
change records or add new records to table a, some of the clients will get 
errors, or whether PG handles this without problems - as Firebird does - or 
whether I will get many errors in the clients.

Data consistency is sometimes very important - so that count(*) = 
len(fetched(*)), and so that queries, sums and subqueries stay equal.



See:

http://www.postgresql.org/docs/8.3/static/transaction-iso.html

Once again, I VERY strongly recommend reading the whole PostgreSQL
manual. It'll teach you a lot about SQL and relational databases in
general as well as PostgreSQL in particular, and is very well written.
   
OK, I understand, I read it, but experience teaches more than the reading 
alone.


For example:
I read the Apache/FastCGI documentation, but I never thought that it would 
not work on Windows the way it works on Linux... :-(

Sometimes people need to release their ideas from the idealist 
world, because they are hard to realize.

Possibly, if I change my default transactions to serializable, it will not 
work, and it will be better to work with read committed - and with some 
little inconsistencies that clients won't see...


Thanks:
dd



Re: [GENERAL] Python client + select = locked resources???

2009-06-30 Thread durumdara

Hi!

2009.06.29. 18:26 keltezéssel, Craig Ringer írta:

On Mon, 2009-06-29 at 13:36 +0200, durumdara wrote:

   

I wanna ask something. I came from IB/FB world.
 


InterBase / FireBird ?
   

Yes, sorry for short descriptions.

   

In this world I was everytime in transaction, because of reads are
also working under transactions.
 


Just like PostgreSQL. You can't run a query without a transaction in
PostgreSQL; if you don't issue an explicit BEGIN, it'll do an implicit
BEGIN/COMMIT around the statement.
   
Aha... So if I step outside normal transactions, I'm in implicit autocommit 
mode.
Hmmm... Then that means every statement is in a new transaction 
context, which can make the views inconsistent...

For example (pseudo):
select item_head, count(items)
select items

Possible: count(items) <> len(fetchall(items)) if someone committed a 
new record into the items table...


Am I thinking correctly?

So I need:

   begin;
   select item_head, count(items)
   select items
   rollback;


to get fully consistent data sets?
   

In the FB world the transactions without any writes/updates are not
locking the database, so another clients can makes a transactions on
any records.
 


PostgreSQL doesn't lock the database for reads or writes. Transactions
do take out various levels of lock on tables when you do things with
those tables. See the locking documentation:

http://www.postgresql.org/docs/8.3/static/explicit-locking.html
   
Locks, in my vocabulary, mean that the DB will block certain operations 
on the table to avoid consistency and other errors.

For example, DBISAM works that way. We can make record 
modifications, etc., but to alter a table I need to shut down all of 
the clients!

Because DBISAM puts a file lock on the table while it is being altered.

But Firebird allows adding a new field to a table while it is in use, 
because FB makes a new record version, and that version is used for 
subsequent queries.

And I can add a new field without shutting down all of the clients.


Additionally, PostgreSQL can take out share and update locks against
rows, as the documentation mentions.
   

Ok.
   

0.) I started Pylons web server, and in the browser I request for a
simple view (without modify anything).
1.) I opened PGAdmin.
2.) I move the focus to the table X.
3.) I opened an SQL editor and try to make two column adds:
alter table X add test_a date;
alter table X add test_b date;
 


ALTER TABLE does take out an exclusive lock on the table. See the
manual:

http://www.postgresql.org/docs/8.3/static/explicit-locking.html

If there's other work in progress, it can't get the exclusive lock until
that work completes.
   
Thanks. This is what I asked on this mailing list before I started to use 
PostgreSQL.

But I was told that PostgreSQL works in the same mode as FB.

And now I know from your mail that this isn't true - it will be easier if I 
shut down the web server, make the modifications in PostgreSQL, and after 
that restart them all.


   

And I wanna solve this problem, because if I must do some
modifications in the online database (for example: add a new field), I
don't want to shut down the webserver with all online clients...
 


You should not have to. If you can't get a lock on the table, then most
likely the web app is holding transactions open instead of opening them,
doing work, and promptly committing / rolling back.

Try connecting to the database with psql and running
   select * from pg_stat_activity
while the web app is running. You should see only IDLE or working
connections, never idle in transaction. If you have anything idle in a
transaction for more than a few moments you WILL have problems, because
if those transactions have SELECTed from the table you're trying to
alter they'll hold share locks that will prevent ALTER TABLE from
grabbing an exclusive lock on the table.
   

Hmmm... Thanks for your info!!!


 cur.close()
 while 1:
 pass
 


Here you're holding a transaction open and idle. Wrong move. Close the
transaction (dispose the cursor) and then open a new transaction to do
more work.
   

Yes. If I can issue a rollback on it, all resources are released.
Now I'm searching for a way to force DBUtils to issue a rollback 
before it returns the connection to its pool, or a way to do 
this easily from the web server...

Thanks for your help, and please suggest a transaction mode for 
consistent views (see above).


dd




Re: [GENERAL] Am I in intransaction or in autocommit mode?

2009-06-30 Thread durumdara

Hi!

2009.06.29. 15:34 keltezéssel, A. Kretschmer írta:

In response to durumdara :
   

Hi!

Can I check with something that I'm in in-transaction or in autocommit
mode?
I wanna avoid the notices I got when I'm also in mode I need...

For example:
begin
begin --- error notice...
 


Warning, not error.

In psql, you can set the PROMPT:

\set PROMPT1 '%/%R%x%# '

test=# begin;
BEGIN
test=*#
   
Yes, a warning... Hmmm... As I see it, there is no way to suppress this 
warning (except in psql)...

This disturbs me a little, because if I use a rollback before and after 
to be sure resources/transactions are released, my Pylons log can 
fill up with warnings caused by double begins (begin; rollback;), since I do 
not check which transaction mode I'm in.


Thanks for your help:
   dd


[GENERAL] Python client + select = locked resources???

2009-06-29 Thread durumdara

Hi!

I want to ask something. I come from the IB/FB world.
In that world I was always inside a transaction, because reads also 
run under transactions.
In the FB world, transactions without any writes/updates do not 
lock the database, so other clients can run transactions on any 
records.

And they can also add new fields to the tables.

Now I use the Pylons web server (Python) with PyGreSQL, and DBUtils for 
cached database connections/cursors.

Today I ran into a locking situation many times.

0.) I started the Pylons web server, and in the browser I requested a 
simple view (which does not modify anything).

1.) I opened PgAdmin.
2.) I moved the focus to table X.
3.) I opened an SQL editor and tried to add two columns:
alter table X add test_a date;
alter table X add test_b date;
4.) After PgAdmin's Query Execution (F5), nothing happened. I saw 
the menu was disabled, and PgAdmin was locked for new operations.
5.) When I simply closed the Pylons web server, PgAdmin quickly 
finished the table restructure without problems...


The problem is repeatable every time.

This is a very hateful thing, because in this view I don't modify 
anything; I use only selects, nothing else.

And I want to solve this problem, because if I must make some 
modifications in the live database (for example: add a new field), I 
don't want to shut down the web server with all its online clients...

I reduced this bug so it can be seen without the web server, DBUtils, and 
other layers.


I wrote this python code:

   import os, sys, pgdb

   fmtstring = '%s:%s:%s:%s'
   fmtstring = fmtstring % ('127.0.0.1',
                            'anydb',
                            'anyuser', 'what?')
   db = pgdb.connect(fmtstring)
   print "ok"
   cur = db.cursor()
   cur.execute('select * from testtable')
   rek = cur.fetchone()
   cur.close()
   while 1:
       pass
   db.close()


After starting this, I tried to add a new field to testtable from 
PgAdmin's query GUI:

alter table testtable add test_001 date;

With the cur.execute('select * from testtable') in place, I got the lock: the 
PgAdmin query just runs and runs and runs... :-(


Without this cur.execute, the ALTER TABLE finished without locking.

When I put a db.rollback() before the while loop, the lock vanished...

So PG holds locks on all the records I fetched? They are not freely usable 
even in a simple, read-only select?

Please help me, SOS, if possible - I must find a way to get out of these 
locks!

And please help me: how can I check whether I'm in a transaction or not?

Thanks for your help:
dd






[GENERAL] Am I in intransaction or in autocommit mode?

2009-06-29 Thread durumdara

Hi!

Can I somehow check whether I'm in a transaction or in autocommit 
mode?

I want to avoid the notices I get when I'm already in the mode I need...

For example:
begin
begin --- error notice...

Thanks for your help:
  dd

PS: I asked this in my previous mail too, but in another context... 
possibly this subject line fits the question better.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Transaction settings: nowait

2009-05-06 Thread durumdara

Hi!

In FireBird the transactions have these settings:

SET TRANSACTION
   [NAME hostvar]
   [READ WRITE | READ ONLY]
   [ [ISOLATION LEVEL] { SNAPSHOT [TABLE STABILITY]
 | READ COMMITTED [[NO] RECORD_VERSION] } ]
   [WAIT | NO WAIT]
   [LOCK TIMEOUT seconds]

And this is the important thing:

[WAIT | NO WAIT]
   [LOCK TIMEOUT seconds]

If wait and a timeout are set, Firebird waits for the locked resource 
(record) for X seconds before it shows a deadlock error.

But when you set no wait, the deadlock error is shown by the 
server immediately.

I'm asking because I want to avoid total deadlocks.

For example: if I forget to commit, or to roll back on an exception, then all 
the resources (records) I updated stay locked.

If I use nowait, the clients immediately get the error message, and they 
are not sitting deaf and blind in front of the screen, waiting for 
whatever will happen.

So: does PostgreSQL have a mechanism like nowait?
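For the record-locking part, PostgreSQL's equivalent is per statement rather than per transaction: SELECT ... FOR UPDATE NOWAIT (and LOCK TABLE ... NOWAIT) raise an error immediately instead of queueing for the lock. A sketch of building such a statement — the table and column names are hypothetical:

```python
def for_update(table, nowait=False):
    """Row-locking SELECT; with NOWAIT it errors out instead of waiting."""
    sql = 'SELECT * FROM {} WHERE id = %s FOR UPDATE'.format(table)
    return sql + ' NOWAIT' if nowait else sql

print(for_update('orders', nowait=True))
# SELECT * FROM orders WHERE id = %s FOR UPDATE NOWAIT
```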

Thanks for your help:
dd


Re: [GENERAL] Transaction settings: nowait

2009-05-06 Thread durumdara

Hi!

2009.05.06. 11:54 keltezéssel, Richard Huxton írta:

durumdara wrote:


So: have PGSQL same mechanism like nowait?


When you take a lock:
http://www.postgresql.org/docs/8.3/interactive/sql-lock.html
http://www.postgresql.org/docs/8.3/interactive/sql-select.html#SQL-FOR-UPDATE-SHARE 

As I see it, these things don't help me when two transactions try 
to use the same resource...

Yesterday I tried to test my PG website. I removed a rollback 
instruction from the code.
The transaction started, but the INSERT SQL was wrong (I missed a 
required field).
So this transaction hadn't changed anything in the table, but it 
remained in an active state because of my mistake.

Then PgAdmin hung on a field readjustment in this table.
This was a deadlock...

I don't want to see any deadlocks... I don't want to lock the tables.




There is also the statement_timeout config setting which will 
terminate any single statement that takes longer than a set time.
http://www.postgresql.org/docs/8.3/interactive/runtime-config-client.html#RUNTIME-CONFIG-CLIENT-STATEMENT 





As I read this config param, it terminates statements only.
I think this means that if I have a bad join, or run a very 
slow query, the server can cancel and terminate it.
But if I have a transaction that remains open, that could 
still be a source of errors (deadlocks).


Thanks for your help:
   dd




[GENERAL] Web + Slicing/Paging datas

2009-04-23 Thread durumdara

Hi!

In a mod_python application I want to write a wrapper that handles all 
PostgreSQL data views with paging/slicing.

For example:
I have 1,500 records. I want to show only N (e.g. 15) records in the 
view; the other records are accessible through a pager (links):

[First, P-2, P-1, P, P+1, P+2, Last]
E.g.: First, 5, 6, {7}, 8, 9, Last

OK, I can implement this with a count, and then define the select's 
start and limit parameters.

But I have heard that count(*) is slow in PG.

This paging is a typical problem: I have to pay twice for the data.
The first time I touch all the rows, but only count them. The second time I 
fetch only the slice of records I need.

As I have seen, some systems with less data do one of these:

1.)
Insert all records into a temp table.
Check the affected rows (as the count).
Slice the records.
Fetch the slice's records.
Destroy the temp table.

2.)
Select all records.
Fetch all records.
Drop all unneeded elements.
Return the needed records.
Close the cursor.

Both solutions are slow: 1.) because of storing the records (bulk 
insert), 2.) because of fetching unneeded records (network speed).

So I want to ask: what do you do when you want paging/slicing of 
records?

Is the first (count/slicing) solution fast enough for you?

Thanks for your help:
dd






Re: [GENERAL] Web + Slicing/Paging datas

2009-04-23 Thread durumdara

Hi!

2009.04.23. 9:23 keltezéssel, John R Pierce írta:

durumdara wrote:

Hi!

In a mod_py application I wanna write a wrapper that handle all PSQL 
data view with paging/slicing.


..
Inserts all records to a temp table.
Check the affected rows (as count).
Slicing the records.
Fetch the slice records.
Destroy temp table.



how about KEEPING the data in this slice format, and keeping the count 
somewhere ?  you'd need to update the count and slice info when new 
data gets added, but perhaps thats better than doing it every time you 
want to view a slice?




This is a complicated thing.
This data is produced by a query with some user conditions.
For example:
Text = Free String Search
Live = This data is living
Category = Some category
SubCategory = Some subcategory
...
etc.

So your approach could work with this pseudo-code:

def Slicing(PageIndex, Conditions):
    # Delete records older than 30 minutes
    delsql = "delete from pagingtable where inserted < %s" % \
        IncWithMinutes(Now, -30)
    ExecSQL(delsql)
    # The query
    datasql = BuildSQL(Conditions)
    # Do we have the same query in the pool?
    checksql = "select count from pagingtable where sql = '%s'" % \
        SafeEscape(datasql)
    records = ExecSQL(checksql)
    if records:
        # Yes
        count = records[0]['COUNT']
    else:
        # No, we must run a count SQL to check
        countsql = BuildSQL(Conditions, Count=1)
        datarecords = ExecSQL(countsql)
        datarecord = datarecords[0]
        count = datarecord['COUNT']
        # We must insert it into the paging table
        InsertPagingRecord(datasql, count)
    ...
    # Use the count
    ...
    finalsql = BuildSQL(Conditions, WithPaging=PageIndex)
    finalrecords = ExecSQL(finalsql)
    ...

But possibly it is too complex, and I fear it would cause more pain than it 
gains... (deadlocks?)...


Thanks for your help:
  dd



[GENERAL] Is there any special way to a trigger send a signal to outer application?

2009-04-02 Thread durumdara

Hi!

Windows Server, PGSQL.

When new records arrive in the database, I want to process them in the 
background.

I have a processor service.
It periodically (every 5-10 minutes) checks the message table, and if 
there are messages, it processes them.

This periodic processing works, but I want to decrease the processing 
delay when records arrive (to increase performance and decrease the 
users' waiting time).

So I'm searching for a way to do some IPC communication from PostgreSQL 
to the processor service.

Which of these does a PostgreSQL trigger support?

a.) Starting a new process with some params (this app could send a WM_* 
message to my service)?

b.) Sending a WM_* message (PostMessage) to my service?
c.) Sending a TCP message to my service?

Or other?
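One more built-in option: PostgreSQL's LISTEN/NOTIFY. An AFTER INSERT trigger calls NOTIFY on a channel, and the service keeps one connection open that has run LISTEN and polls it for notifications, so the 5-10 minute sweep becomes only a fallback. A sketch of the trigger DDL, held as a string (the channel, table, and function names are hypothetical):

```python
# The service side runs "LISTEN new_message;" once and then checks its
# connection for notifications instead of re-scanning the table.
TRIGGER_SQL = """
CREATE OR REPLACE FUNCTION notify_new_message() RETURNS trigger AS $$
BEGIN
    NOTIFY new_message;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER message_inserted
AFTER INSERT ON messages
FOR EACH ROW EXECUTE PROCEDURE notify_new_message();
"""

print('NOTIFY new_message' in TRIGGER_SQL)  # True
```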

If you have an example for this problem, please send it with your answer!

Thanks for your help:
dd



[GENERAL] PGAdmin and records as inserts (like in SQLyog)

2009-02-04 Thread durumdara

Hi!

I want to ask whether PgAdmin has the same feature as SQLyog: copying 
records (from the query or the table view) to the clipboard in 
INSERT SQL format.

This could speed up the work, and I could insert the records into another 
table that has a similar (but slightly different) structure without mixing 
up field contents.


Thanks for your help:
dd








[GENERAL] PGAdmin and user privileges - what I do wrong?

2009-02-04 Thread durumdara

Hi!

Please help me a little.

I use PgAdmin to administer my databases.
I created a new user named zx.

CREATE ROLE zx LOGIN
  ENCRYPTED PASSWORD '*'
  NOSUPERUSER NOINHERIT NOCREATEDB NOCREATEROLE;

When I want to manually grant this user rights on a table, I have a problem 
on PgAdmin's Privileges tab.
The privileges group box has a combo box named Role. This combo box does 
not contain the zx user in its list.

When I type the user (zx) into the combo box, the Add button stays 
disabled.

I can add this privilege only with an SQL command. When I do that, the 
privilege list is extended with this user, and I can select it from the 
combo box.

What am I doing wrong? Which property do I need to set so that I can add 
privileges on tables/objects for this user?


Thanks for your help:
   dd





Re: [GENERAL] PGSQL or other DB?

2009-02-02 Thread durumdara

Hi!

2009.01.31. 10:13 keltezéssel, Erik Jones írta:


On Jan 30, 2009, at 11:37 AM, durumdara wrote:


The main viewpoints:
 - quick (re)connect - because mod_python basically does not keep 
database connections persistent


mod_python is not a database adaptor.
Yes, I know. But because mod_python (Apache) is commonly configured with 
one interpreter per request, I need to reconnect with the adapter.
I tested it, though, and it seems to be fast enough. If not, I will have 
to use some proxy server to keep the connections alive.


rewrite the table data and previously existing indexes as well.   
What's more, Postgres allows you to create real tablespaces so that 
you can place individual persistent database objects (databases, 
tables, indexes, and some constraints) on separate storage.
I want to ask: if I create a database in another tablespace, is the 
database name still stored in the main data directory (as global metadata)?
And if I create a table or another object in this database, do I need to 
specify the tablespace name, or is it taken from the database's 
tablespace?

For example:
create database anything tablespace new2

Are these SQL statements different?
create table any_table (a int)
or
create table any_table (a int) tablespace new2

So: do I need a tablespace clause in every table/object creation, or not?
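To make the question concrete, here is the full sequence I have in mind (new2 and the location path are example names):

```sql
-- A tablespace on a separate drive, and a database placed in it.
CREATE TABLESPACE new2 LOCATION '/mnt/disk2/pg_new2';
CREATE DATABASE anything TABLESPACE new2;

-- Connected to "anything": does this table go into new2 by default...
CREATE TABLE any_table (a int);
-- ...or only when the tablespace is named explicitly?
CREATE TABLE any_table2 (a int) TABLESPACE new2;
```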


And: if I store this database on another drive and the drive is unmounted 
(for whatever reason - failure, etc.), does that cause any problems for 
the main postgresql databases (for example: does the service stop)? I am 
asking about restarting the pg service, not online usage.
So if I remove a tablespace path (directory, drive) at the filesystem 
level, can the pg service still start up and serve the main databases it 
can access?

Or does it die completely on this problem?

Thanks for your help:
   dd



[GENERAL] PGSQL or other DB?

2009-01-30 Thread durumdara

Dear PG Users!

I don't want to start a heated debate with this subject, but I want to 
ask about your experiences, because we need to build a new special site 
and we want to know which DB is best for it...


This website will support up to 200 corporations.
The functions are similar, but because these corporations differ in many 
ways (data to keep, ways of working with it), we want to create many 
databases (one DB per corporation).
The site uses one main database that handles all public and shared 
services and stores the links to the sub-databases.
The website will run on apache/mod_python, and each corporation's 
services are handled by a separate python module (with a separate 
database). The main structure of the source code is already written.


But: we still need to decide which database we will use for this project.


The main viewpoints:
 - quick (re)connect - because mod_python basically does not keep 
database connections persistent

 - fast queries
 - an easy IDE to use (like pgadmin)
 - the db server can handle many databases (webdb_nnn where nnn is an 
integer)
 - I can add a table, or add a field to a table, without a full lock 
on the table (unlike a DBISAM restructure). Like in FireBird, where 
adding a field changes only the table description. I don't know whether 
PG supports this way of modifying the DB.

 - a quick and easy backup/restore system
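To be concrete about the FireBird-style change I mean: whether a statement like the following rewrites the whole table, or only updates the catalog, is exactly what I would like to know (customers/fax_number are example names):

```sql
-- Add a nullable column with no default:
-- metadata-only change, or full table rewrite?
ALTER TABLE customers ADD COLUMN fax_number varchar(32);
```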

Another important thing that I don't understand (from what I have seen) 
is that PostgreSQL stores all databases in one inseparable file set, in 
a directory named data.
In other databases, like DBISAM, FireBird, and MySQL, the databases are 
separated into their own directories/files.


This single datadir does not seem good for us. We used DBISAM at our 
clients, and many times when we got a filesystem error, we could simply 
recover the tables - from the files.
When we want to back up or restore one database, we can do it in two 
ways: a.) archive all the files, b.) make an SQL dump of the database.
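Expressed as commands, the two ways look roughly like this (paths and database names are examples; the filesystem copy is only safe with the server stopped):

```shell
# a.) archive all files: filesystem copy of the whole data directory
tar czf cluster-backup.tar.gz /var/lib/postgresql/data

# b.) SQL dump of a single database, restorable on its own
pg_dump -U postgres webdb_001 > webdb_001.sql
psql -U postgres -d webdb_001_test -f webdb_001.sql
```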


If a file in a database gets corrupted, we can restore the data from 
the files, and the filesystem error causes problems only for this 
database, not for all of them.


I am very afraid of keeping all databases in one place, because if that 
place gets corrupted, more of them may be damaged (since they are not 
separated).
And I cannot make a filesystem-level (hard) copy of one db (only an SQL 
dump is possible).


OK, I saw that pgsql supports tablespaces, but as far as I can tell, 
this feature only moves table data into another directory; the full 
database cannot be separated with it.


Because I have not used one PGSQL instance with many databases (up to 
200), I don't know what happens and which cases are possible. But I 
think you have a lot of experience with this. Please share it with me!


Please help me, because we need to determine which DB to use.

I started using PG last month, and I have liked it, except for the way 
it stores data (one data dir).


I tried MySQL before I tried PG. InnoDB seems to force a transaction 
system onto MyISAM. And the boolean data type is missing (solved with 
enum?).
I don't like everything about it, but it seems to be fast with small 
tables, and it separates the database files into their own directories, 
which I like. It probably has many limitations that I didn't see at 
first.


Please help me decide which DB is good for us, and how to configure and 
use PGSQL with the set of databases we need.


Thanks for your help:
dd








[GENERAL] Can I separate my Databases as in FireBird? Working with DataBases...

2009-01-20 Thread Durumdara
Hi!

I come from the SQLite, FireBird, and DBISAM world.
These database systems have a speciality: they store one database in one
file, or in one directory.

Each database has its own file (or a dir with files), and when I want to
copy a database, I have SEVERAL ways to do it.
The first is a filesystem-based copy.
The next is the embedded dump (FireBird).
And the last is an SQL dump.

Sometimes customers have problems with the programs we wrote.
In these cases we sometimes ask them to send us the full DB.
They use a filesystem-based copy. Here we open the DB, or the tables,
simply, without doing anything special. We only register the DB under the
name test, and we can test the applications easily.

But PGSQL has only one directory, named data. All of the databases are
placed there.
I cannot separate them.
If a user sends me a DB, I cannot simply copy it somewhere and register
it, because all I have is an SQL dump.

OK, I saw that PGSQL has tablespaces.

But: database names must be unique. Previously I thought that a
tablespace was a higher-level object than a database, and that I could
create several databases with the same name, without collision, by using
a different tablespace for each.
But when I tried to do this, I got an error.

So: we are looking for a simple way to place and separate our 200
databases and 200*8 applications without problems (on our development
machine). And we want to use them more easily. If a customer wants to
resend a real db to us (for testing), we want to register this new db
easily, without rewriting our test context.

Does anybody have experience with PGSQL and many similar, but not
identical, DBs?
What is the easiest way to handle data transport in PGSQL?

Thanks for your help:
dd