Re: Getting wrong datetime in database using insert into table query.

2024-04-02 Thread Sándor Daku
On Tue, 2 Apr 2024 at 10:59, Saksham Joshi  wrote:

> OS: Windows 10
> Psycopg version: 2.9.9
> Python version: 3.11.6
> PostgreSQL version: 11
> pip version : 24.0
> 1: what you did
> We are using the 'psycopg2-binary' library to connect to my PostgreSQL
> hosted on Azure. We have created a table named 'apilog' to store our API
> logs using an 'INSERT INTO table' query. We have specifically added two
> columns named create_date and update_date of type 'timestamp with time
> zone'. I only set create_date for each log locally using Python, and I
> expected the update_date column to automatically record the datetime when
> the transaction is committed at the end in Python.
> 2: what you expected to happen
> I expected the update_date column to return datetime values close to the
> time the transaction is committed in Python; instead, the values seem to
> be datetimes closer to the time the DB connection is established.
> 3: what happened instead
> The datetime value in update_date comes out earlier than the create_date
> value of even the very first log, which creates a discrepancy and makes it
> difficult to track the exact time logs are committed to the database.
>
> For example:
> This query: INSERT INTO api_log(log_detail, create_date)
> VALUES ('example log 1', datetime.datetime.utcnow()), ('example log 2',
> datetime.datetime.utcnow())
> should ideally produce an update_date later than the create_date of
> 'example log 2', but it is returning a datetime that is even earlier than
> the create_date of 'example log 1'.
>


Without seeing the definition of your table it's only a guess, but if you
are using 'now' or now() as the default value for update_date, then this
might be the source of your problem:

https://www.postgresql.org/docs/11/functions-datetime.html

    now() -> timestamp with time zone -> current date and time (start of
    the current transaction)

I think clock_timestamp() is what you need.
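
A minimal sketch of the difference, runnable in psql (the DEFAULT clause
below is an assumption about how your update_date might be defined):

BEGIN;
SELECT now(), clock_timestamp();  -- now() is frozen at transaction start
SELECT pg_sleep(2);
SELECT now(), clock_timestamp();  -- now() unchanged, clock_timestamp() moved on
COMMIT;

-- So a column defined as
--     update_date timestamptz DEFAULT now()
-- records the transaction start time, while DEFAULT clock_timestamp()
-- records the wall-clock time at which the row is actually written.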

Regards,
Sándor


Re: replication primary writting infinite number of WAL files

2023-11-24 Thread Sándor Daku
On Fri, 24 Nov 2023, 17:12 Ron Johnson,  wrote:

> On Fri, Nov 24, 2023 at 11:00 AM Les  wrote:
> [snip]
>
>> Writing of WAL files continued after we shut down all clients, and
>> restarted the primary PostgreSQL server.
>>
>> The order was:
>>
>> 1. shut down all clients
>> 2. stop the primary
>> 3. start the primary
>> 4. primary started to write like mad again
>> 5. removed replication slot
>> 6. primary stopped madness and deleted all WAL files (except for a few)
>>
>> How can the primary server generate more and more WAL files (writes)
>> after all clients have been shut down and the server was restarted? My only
>> bet was the autovacuum. But I ruled that out, because removing a
>> replication slot has no effect on the autovacuum (am I wrong?). Now you are
>> saying that this looks like a huge rollback. Does rolling back changes
>> require even more data to be written to the WAL after server restart? As
>> far as I know, if something was not written to the WAL, then it is not
>> something that can be rolled back. Does removing a replication slot lessen
>> the amount of data needed to be written for a rollback (or for anything
>> else)? It is a fact that the primary stopped writing at 1.5GB/sec the
>> moment we removed the slot.
>>
>> I'm not saying that you are wrong. Maybe there was a
>> crazy application. I'm just saying that a crazy application cannot be the
>> whole picture. It cannot explain this behaviour as a whole. Or maybe I have
>> a deep misunderstanding about how WAL files work.  On the second occasion,
>> the primary was running for a few minutes when pg_wal started to increase.
>> We noticed that early, and shut down all clients, then restarted the
>> primary server. After the restart, the primary was writing out more WAL
>> files for many more minutes, until we dropped the slot again. I.e. it was
>> writing much more data after the restart than before the restart; and it
>> only stopped (exactly) when we removed the slot.
>>
>
> pg_stat_activity will tell you something about what's happening even after
> you think "all clients have been shut down".
>
> I'd crank up the logging to at least:
> log_error_verbosity = verbose
> log_statement = all
> track_activity_query_size = 10240
> client_min_messages = notice
> log_line_prefix = '%m\t%r\t%u\t%d\t%p\t%i\t%a\t%e\t'
>

I don't know if it makes any sense, but is there a relatively painless way
to look into the produced WAL files to see what they are filled with? It
might give some pointers to the source of the issue.
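
pg_waldump (shipped with PostgreSQL) can do that; a sketch, run from the
pg_wal directory with a hypothetical segment file name:

pg_waldump --stats 000000010000002A00000065        # totals per resource manager
pg_waldump 000000010000002A00000065 | head -50     # individual WAL records

A surge of records from a single resource manager (e.g. Heap or Btree)
would narrow down what is generating the volume.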

Regards,
Sándor



Re: Making Sure Primary and Secondary Keys Alligns

2023-09-13 Thread Sándor Daku
On Wed, 13 Sept 2023 at 17:30, Anthony Apollis 
wrote:

> Yes, indeed.
> I am trying to make sure that the keys are aligned, but it doesn't update,
> or it simply shows NULL in the fact table's secondary key columns.
>
> "-- Step 1: Drop existing foreign key constraint for Entity
> ALTER TABLE system."IMETA_ZTRB_MP$F_ZTBR_TA_BW" DROP CONSTRAINT IF EXISTS
> fk_entity;
>
> -- Step 2: Drop and recreate secondary key for Entity, setting it to null
> by default
> ALTER TABLE system."IMETA_ZTRB_MP$F_ZTBR_TA_BW"
> DROP COLUMN IF EXISTS "Entity_Secondary_Key",
> ADD COLUMN "Entity_Secondary_Key" INTEGER;
>
> -- Step 3: Update secondary key for Entity based on primary key from the
> dimension table
> UPDATE system."IMETA_ZTRB_MP$F_ZTBR_TA_BW" AS fact
> SET "Entity_Secondary_Key" = dim2."Entity_ID"
> FROM dim."IMETA_Entity_Mapping" AS dim2
> WHERE fact."Entity_Secondary_Key" = dim2."Entity_ID";
>
> -- Step 4: Re-add foreign key constraint for Entity
> ALTER TABLE system."IMETA_ZTRB_MP$F_ZTBR_TA_BW"
> ADD CONSTRAINT fk_entity FOREIGN KEY ("Entity_Secondary_Key") REFERENCES
> dim."IMETA_Entity_Mapping"("Entity_ID");
> "
> Thank you!
>
>>
>>
I think you are getting two things wrong in this case:

The foreign key ensures that you can't put any value into the
Entity_Secondary_Key field that doesn't exist in the IMETA_Entity_Mapping
table's Entity_ID column. (NULL is still acceptable.)
Removing the foreign key constraint and then adding it back later kind of
defeats that purpose.

Your step 3 doesn't make sense: your SET expression is the same as the
WHERE clause, so it would change the value of Entity_Secondary_Key to the
same value it already has. Except that you dropped that column and added it
again, and because the newly added Entity_Secondary_Key column contains
NULL in every record, the WHERE fact."Entity_Secondary_Key" =
dim2."Entity_ID" clause won't find any matching records in the
IMETA_Entity_Mapping table.
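
A hypothetical corrected step 3, assuming the two tables share some natural
key column (here called "Entity_Name"; substitute whatever business column
actually links a fact row to its dimension row):

UPDATE system."IMETA_ZTRB_MP$F_ZTBR_TA_BW" AS fact
SET "Entity_Secondary_Key" = dim2."Entity_ID"
FROM dim."IMETA_Entity_Mapping" AS dim2
WHERE fact."Entity_Name" = dim2."Entity_Name";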

Regards,
Sándor


Re: Determine if range list contains specified integer

2022-05-27 Thread Sándor Daku
On Fri, 27 May 2022 at 18:19, Andrus  wrote:

> Hi!
>
>
> The product type table contains product types. Some ids may be missing:
>
> create table artliik (liiginrlki char(3) primary key);
> insert into artliik values('1');
> insert into artliik values('3');
> insert into artliik values('4');
> ...
> insert into artliik values('999');
>
> The property table contains a comma-separated list of types.
>
> create table strings ( id char(100) primary key, kirjeldLku char(200) );
> insert into strings values ('item1', '1,4-5' );
> insert into strings values ('item2', '1,2,3,6-9,23-44,45' );
>
> A type can be specified as a single integer, e.g. 1, 2, 3, or as a range
> like 6-9 or 23-44.
> A list can contain both.
>
>
> How do I get all properties for a given type?
> Query
>
> select id
> from artliik
> join strings on ','||trim(strings.kirjeldLku)||',' like
> '%,'||trim(artliik.liiginrlki)||',%'
>
> returns data for single-integer lists only.
> How should the join be changed so that type ranges in the list, like 6-9,
> are also matched? E.g. if the list contains 6-9, types 6, 7, 8 and 9
> should be included in the report.
>
> Postgres 13 is used.
>
> Posted also in
>
>
> https://stackoverflow.com/questions/72404218/how-determine-if-range-list-contains-specified-integer
>
> Andrus.
>

Hello,

As Adrian said, the best idea would be to redesign your data model. For
example, make a third "contains" table where every product/type
relationship is one record. After that, your problem would be trivial and
your life much easier.
However, this is a full-service place. If the customer wants the pain, the
customer gets the pain:

Bad idea #1, which solves the immediate problem:
Write a function which unpacks your "1,2,3,6-9,4" string into an array
'1','2','3','6','7','8','9','4' and use the string = ANY(array_of_strings)
syntax, as in the sketch below.
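
A sketch of such a function, assuming the lists are well formed
(comma-separated non-negative integers and low-high ranges):

CREATE OR REPLACE FUNCTION unpack_ranges(list text) RETURNS text[] AS $$
    SELECT array_agg(n::text)
    FROM regexp_split_to_table(list, ',') AS part,
         LATERAL generate_series(
             split_part(trim(part), '-', 1)::int,
             coalesce(nullif(split_part(trim(part), '-', 2), ''),
                      split_part(trim(part), '-', 1))::int
         ) AS n;
$$ LANGUAGE sql IMMUTABLE;

-- Usage:
SELECT strings.id
FROM strings
JOIN artliik
  ON trim(artliik.liiginrlki) = ANY(unpack_ranges(strings.kirjeldlku));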

Bad idea #2, which also solves the immediate problem:
Write a trigger which unpacks the "1,2,3,6-9,4" form into "1,2,3,6,7,8,9,4"
at insert/update time and then use the string = ANY(array_of_strings)
syntax; see the sketch below.
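
A sketch of the trigger variant, reusing the hypothetical unpack_ranges()
function from above (note that char(200) caps the length of the expanded
list):

CREATE OR REPLACE FUNCTION strings_normalize() RETURNS trigger AS $$
BEGIN
    NEW.kirjeldlku := array_to_string(unpack_ranges(NEW.kirjeldlku), ',');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER strings_normalize_trg
    BEFORE INSERT OR UPDATE ON strings
    FOR EACH ROW EXECUTE FUNCTION strings_normalize();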

Regards,
Sándor


Re: Sort question - Fractions, Metric etc

2021-08-14 Thread Sándor Daku
On Sat, 14 Aug 2021 at 19:03, Tom Lane  wrote:

> Adrian Klaver  writes:
> > On 8/14/21 9:37 AM, Bret Stern wrote:
> >> I haven't explored doing this with numeric types, but some columns
> >> needed alpha chars, e.g. 13mm.
>
> > Two columns:
>
> > data_val (numeric) | data_unit (varchar)
> > 13                 | mm
>
> It sounds like your data is a completely disorganized mess :-(.
> You might be able to bring some semblance of coherence to it with
> an extension like postgresql-unit [1], but it will take a lot of
> effort to get the data into a representation that can sort reasonably.
> There's no "easy button" here.
>
> regards, tom lane
>
> [1] https://github.com/df7cb/postgresql-unit
>
>
I'm not sure, but maybe a kind of cast function that casts everything into
a reasonable common unit (khm... millimeters) and sort on that? It seems
relatively simple to pick out the value and unit from a string with a
regexp. Admittedly, the data would still be a mess.
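
A sketch of that idea; the table and column names ("parts", "size_txt") and
the unit list are assumptions, and fractional values such as 1/2" would
need extra parsing:

SELECT size_txt
FROM parts
ORDER BY (regexp_match(size_txt, '^([0-9.]+)'))[1]::numeric
         * CASE lower((regexp_match(size_txt, '([a-zA-Z"]+)\s*$'))[1])
               WHEN 'mm' THEN 1
               WHEN 'cm' THEN 10
               WHEN 'in' THEN 25.4
               WHEN '"'  THEN 25.4
               ELSE 1  -- unknown unit: treat the value as millimeters
           END;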

Regards,
Sándor


Re: insert ..... returning problem

2021-06-24 Thread Sándor Daku
On Thu, 24 Jun 2021 at 15:25, Michael Ivanov  wrote:

> Hello, no.
>
> I am using the Postgres C library; namely, I'm calling PQexec().
>
> Best regards,
>
> On 24.06.2021 16:14, Laurenz Albe wrote:
> > On Thu, 2021-06-24 at 12:31 +0300, Michael Ivanov wrote:
> >> I'm trying to insert a record returning column value using the
> following query:
> >>
> >>INSERT INTO t_load (load_id, load_type) VALUES (:id, :typ) returning
> load_idx INTO :idx
> >>
> >> But when I try to prepare the query I'm getting the following error:
> >>
> >>PREPARE p1515544c0001 AS insert into t_load (load_id, load_type)
> values ($1, $2) returning load_idx into $3]:
> >>ERROR:  syntax error at or near "into" LINE 1: ...ad_id,
> load_type) values ($1, $2) returning load_idx into $3
> >>
>^
> >> Why 'into var' is not accepted here?
> >
> > Are you talking about ECPG?
> >
> > Yours,
> > Laurenz Albe
> >
>
>
> --
>  \   / |   |
>  (OvO) |  Михаил Иванов|
>  (^^^) |   |
>   \^/  |  E-mail:  iv...@isle.spb.ru   |
>   ^ ^  |   |
>

Hi,

Client libraries substitute the values(!) of the variables into the SQL
query (or, in this case, do a PREPARE and supply the values to it), but the
final query must comply with Postgres syntax, and RETURNING ... INTO is
PL/pgSQL and ECPG syntax, not plain SQL. You can read the values listed in
the RETURNING clause from the result set, just as you would for a SELECT.
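
A sketch of the prepared statement without the INTO, with made-up parameter
values; the returned row is then read from the result (e.g. with
PQgetvalue()) like any query result:

PREPARE p1 AS
    INSERT INTO t_load (load_id, load_type)
    VALUES ($1, $2)
    RETURNING load_idx;

EXECUTE p1(42, 7);  -- yields one row, one column: the generated load_idx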

Regards,
Sándor


Re: PostgreSQL server does not increment a SERIAL internally

2020-07-06 Thread Sándor Daku
Hi Matthias,

On Mon, 6 Jul 2020 at 11:43, Matthias Apitz  wrote:

>
> Hello,
>
> My team and I spent a full weekend hunting a bug in our Perl-written
> software where rows had been inserted with the same id 'acq_haushalt.hnr',
> which should not have been the case because any budget year in that
> table has a single internal number 'hnr'.
>
> The table in the 11.4 server is created as:
>
> create table acq_haushalt (
>   hnr  serial not NULL ,   /*  internal budget year number, primary key  */
>   hjahr smallint   not NULL , /*  budget year  */
>   stufe smallint   not NULL , /*  level  0,1,2,3*/
>   kurzname char (16)  ,   /*  short name for ... */
>   ...
>   );
>
> We update the serial 'acq_haushalt_hnr_seq' with this statement after
> loading:
>

What does "loading" mean, and why do you reset the sequence after loading?
(And as I can see you setting it to the value it most likely already has.)
My guess is that your problem lurks somewhere here as in certain
circumstances you reset it to an incorrect(previous) value.
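
For reference, a setval()-based sketch that should be equivalent to the DO
block quoted below:

SELECT setval('acq_haushalt_hnr_seq',
              COALESCE(max(hnr), 0) + 1,
              false)  -- is_called = false: the next nextval() returns exactly this
FROM acq_haushalt;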


> /* table: acq_haushalt */
> DO $$
> DECLARE
>   max_id int;
> BEGIN
>  if to_regclass('acq_haushalt') is not null then
>   SELECT INTO max_id GREATEST(COALESCE(max(hnr), 0),0) + 1 FROM
> acq_haushalt;
>   RAISE NOTICE '% % %', 'acq_haushalt', 'hnr', max_id ;
>   EXECUTE 'ALTER SEQUENCE acq_haushalt_hnr_seq RESTART ' || max_id::text;
>  end if;
> END $$ LANGUAGE plpgsql;
>
>
> Usage in Perl DBI to get the next value for acq_haushalt.hnr:
>
>  if (  eq 'Pg') {
>  $erg = ("SELECT NEXTVAL('acq_haushalt_hnr_seq')",[]);
>  if ($erg->{'CountData'} == 0) {
>$newhnr=1;
>  }else{
>
>  $newhnr=$erg->{'Data'}->[0]->{$erg->{'Fields'}->[0]->{'Name'}};
>  }
>  } else {   code block for Sybase ...
>
>  }
>
> But the serial was not incremented internally, as we could see with
> 'psql', and so more than one row was built and inserted with the same
> number in $newhnr.
>
> What helped was using:
>
>  $erg = ("SELECT max(hnr) FROM acq_haushalt",[]);
>  if ($erg->{'CountData'} == 0) {
>   $newhnr=1;
>  }else{
>
> $newhnr=$erg->{'Data'}->[0]->{$erg->{'Fields'}->[0]->{'Name'}}+1;
>  }
>

Where do you want to insert this $newhnr?


> What are we doing wrong?
>
> Thanks
>
> matthias
> --
> Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/
> +49-176-38902045
> Public GnuPG key: http://www.unixarea.de/key.pub
> May, 9: Спаси́бо освободители! Thank you very much, Russian liberators!
>
>
>


Re: Table partitioning for cloud service?

2020-05-21 Thread Sándor Daku
Hi,

On Thu, 21 May 2020 at 18:14, Christopher Browne  wrote:

> On Thu, 21 May 2020 at 11:53, Israel Brewster 
> wrote:
>
>>
>> - Table-based tenancy (e.g. - each table has a "tenant_id" and queries
> need to specify the tenant)
>

The database/schema-per-tenant solution can be tedious when you want to
modify something in the structure and you have numerous tenants.
Therefore I used the "tables with tenant_id" version in a similar situation,
but with a slight twist. One of the biggest issues with this solution is
that if you forget to add the tenant_id to the WHERE clause, you reveal one
tenant's data to another.
I came up with the solution that the database user has no privileges on the
base tables. Instead, I generate views for each tenant, and they can access
their own data in the underlying tables through those views. Now if I
forget to address the right tenant in my client code (it still happens
sometimes) and try to access the base tables directly, I get a strongly
worded reminder from the server.
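
A sketch of the pattern; the table, view, and role names are assumptions:

REVOKE ALL ON orders FROM app_role;

CREATE VIEW orders_tenant_42 AS
    SELECT * FROM orders WHERE tenant_id = 42
    WITH CHECK OPTION;

GRANT SELECT, INSERT, UPDATE, DELETE ON orders_tenant_42 TO app_role;

Simple single-table views like this are automatically updatable, and the
CHECK OPTION makes any attempt to write another tenant's rows fail loudly.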


Regards,
Sándor


Re: Estimated resources for a 500 connections instance (VM)

2020-04-07 Thread Sándor Daku
On Tue, 7 Apr 2020 at 21:52, David Gauthier 
wrote:

> After looking at some of the factors that can affect this, I think it may
> be important to know that most of the connections will be almost idle (in
> terms of interacting with the DB).  The "users" are perl/dbi scripts which
> connect to the DB and spend the vast majority of the time doing things
> other than interacting with the DB.  So a connection is consumed, but it's
> not really working very hard with the DB per-se.  I am cleaning up some of
> that code by strategically connecting/disconnecting only when a DB
> interaction is required.  But for my edification, is it roughly true that 2
> connections working with the DB 100% of the time is equivalent to 20
> connections @ 10% = 200 connections @ 1 % (if you know what I mean) ?
>

Hi,

Every open connection consumes a bit of resources, which is not a big deal
if you keep open a few more connections than you strictly need. However,
when you keep a few hundred idle connections, those resources add up
quickly. So don't do that if you can avoid it.
Likewise, establishing a new connection is a resource-costly process. So
don't do that either if you can avoid it.
Long story short, if those connections don't use many different users,
then (as others have already suggested) connection pooling will be the
best solution.
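
For example, a minimal PgBouncer configuration in transaction pooling mode;
the database name, path, and pool sizes here are assumptions to adapt:

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction   ; server connections are shared between transactions
default_pool_size = 20    ; real backend connections per user/database pair
max_client_conn = 500     ; mostly-idle clients are parked here cheaply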

Regards,
Sándor


Re: Another INSTEAD OF TRIGGER question

2020-03-08 Thread Sándor Daku
On Sun, 8 Mar 2020 at 15:31, stan  wrote:

> On Sun, Mar 08, 2020 at 10:29:09AM -0400, stan wrote:
> > Still working on updateable views.
> >
> > Wish list item, a way to see the entire query that  caused the trigger to
> > fire.
> >
> > Now on to something I hope I can get. Can I see what the verb that caused
> > the trigger to fire is? I.e. UPDATE, INSERT, or DELETE?
> >
> > --
> > "They that would give up essential liberty for temporary safety deserve
> > neither liberty nor safety."
> >   -- Benjamin Franklin
> >
> >
>
> Oh, what was I thinking; that is controlled by the trigger.
>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>

Hi,

Still, you can access that information in the TG_OP variable, which is
available in trigger functions, because you can define triggers that fire
on multiple operation types.
For instance, this is an example from the Postgres documentation:

CREATE TRIGGER emp_audit
INSTEAD OF INSERT OR UPDATE OR DELETE ON emp_view
FOR EACH ROW EXECUTE FUNCTION update_emp_view();
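
And a minimal sketch of branching on TG_OP inside the trigger function,
assuming a hypothetical underlying table emp(empname, salary):

CREATE FUNCTION update_emp_view() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        DELETE FROM emp WHERE empname = OLD.empname;
        RETURN OLD;
    ELSIF TG_OP = 'UPDATE' THEN
        UPDATE emp SET salary = NEW.salary WHERE empname = OLD.empname;
        RETURN NEW;
    ELSE  -- TG_OP = 'INSERT'
        INSERT INTO emp VALUES (NEW.empname, NEW.salary);
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;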

Regards,
Sándor


Re: Invoice increment

2020-02-26 Thread Sándor Daku
On Wed, 26 Feb 2020 at 09:54, Søren Frisk  wrote:

> Hi all
>
> I'm trying to find a way to increment an invoice number, and I think it
> would be pretty straightforward just using a SERIAL int. But as we're
> selling across multiple countries, I need the invoice increment to be
> reset per country_id. Any good way to achieve this in a dynamic way?
> Hope this is the right place to ask.
>

Hi,

Instead of a SERIAL field, use INTEGER and a different sequence for each
country. You can write a trigger which picks the right sequence based on
the country_id, and you can even have the trigger create the sequence for
any new country_id; see the sketch below.
However, it might cause problems if you are not cautious enough. Sequences
and serials do not roll back their counters when a query fails, so you
could introduce holes into your invoice numbers, which can make supervising
authorities angry. :)
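
A sketch of the idea; all names are assumptions, country codes are assumed
to be simple ASCII, and two sessions inserting a brand-new country_id at
the same time could still race on the CREATE SEQUENCE:

CREATE TABLE invoice (
    country_id text    NOT NULL,
    invoice_no integer,
    PRIMARY KEY (country_id, invoice_no)
);

CREATE FUNCTION set_invoice_no() RETURNS trigger AS $$
DECLARE
    seq text := 'invoice_seq_' || lower(NEW.country_id);
BEGIN
    -- Create the country's sequence on first use, then pull the next number.
    EXECUTE format('CREATE SEQUENCE IF NOT EXISTS %I', seq);
    NEW.invoice_no := nextval(seq);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER invoice_no_trg
    BEFORE INSERT ON invoice
    FOR EACH ROW EXECUTE FUNCTION set_invoice_no();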

Regards,
Sándor