> Do you have any indexes?
> If not - you should; if yes - what are they?
Yes, we have a primary key on this table, which is on a UUID-type column,
and we also have indexes on other timestamp columns. But how is this going
to help, as we are going to update almost all the rows in the table?
Hi,
On Fri, Sep 13, 2024 at 10:22 PM yudhi s wrote:
>
> Hello,
> We have to update a column value (from numbers like '123' to codes like
> 'abc' by looking into reference table data) in a partitioned table with
> billions of rows in it, with each partition having hundreds of millions
> of rows. As we tested for ~30 million rows, it's taking ~20 minutes to
> update.
Hello,
We have to update a column value (from numbers like '123' to codes like
'abc' by looking into reference table data) in a partitioned table with
billions of rows in it, with each partition having hundreds of millions of
rows. As we tested for ~30 million rows, it's taking ~20 minutes to update.
So if we
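A minimal, hedged sketch of one common shape for this kind of change: an
UPDATE ... FROM join against the reference table, run per partition and in
key-range batches to keep transactions short. All names here (target_p1,
code_map, the columns) are hypothetical:

-- Update one partition at a time, joining to the reference data.
UPDATE target_p1 AS t
SET    col = m.new_code
FROM   code_map AS m
WHERE  t.col = m.old_num
AND    t.id BETWEEN 1 AND 1000000;  -- batch by key range to keep transactions short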
On 13.09.24 at 17:34, Wong, Kam Fook (TR Technology) wrote:
We have a flavor of this type of query with long in-list/bind
variables (see below). We notice that some of the bind variables
come in as 0, which causes the optimizer to choose to full-scan two of
the following 3 tables. One thought
On Fri, Sep 13, 2024 at 8:27 PM Adrian Klaver
wrote:
> On 9/13/24 07:50, Adrian Klaver wrote:
> > On 9/13/24 02:58, Juan Rodrigo Alejandro Burgos Mella wrote:
> >> Hello, I find it unlikely that the trigger will work properly, since
> >> the reserved fields of the OLD subset have no value in an INSERT
On Fri, Sep 13, 2024 at 11:35 AM Wong, Kam Fook (TR Technology) <
kamfook.w...@thomsonreuters.com> wrote:
> 1) Where does query parsing occur?
>
>
Always on the server side, although your driver may do something as well.
> 2) Will this cause extra parsing to the Postgres DB?

Yes.

> Any pg sys
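On the parsing point, a hedged aside: repeated parsing can be avoided with
an explicit prepared statement, which the server parses once and then
re-executes. A minimal illustration (the statement name and table are made
up):

PREPARE fetch_doc (bigint) AS
    SELECT * FROM docs WHERE doc_id = $1;  -- parsed once here
EXECUTE fetch_doc(101);  -- later executions skip the parse step
EXECUTE fetch_doc(102);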
> > or use a lateral subquery to surgically (fetch first 1) retrieve the
> first row when sorted by recency descending.
>
> I'm not sure that I see how to apply this when I need top-k, not top-1.
>
Fetch first k.
It's just a modern LIMIT clause.
David J.
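A hedged sketch of how that reads for top-k, reusing the items/parts names
from the DISTINCT ON example elsewhere in this thread:

-- The 5 most recent items that have at least one qualifying part.
SELECT i.id
FROM   items AS i
WHERE  EXISTS (SELECT 1 FROM parts AS p
               WHERE  p.item_id = i.id
               AND    p.part_id % 3 = 0)
ORDER  BY i.create_time DESC
FETCH  FIRST 5 ROWS ONLY;  -- the modern spelling of LIMIT 5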
> On Sep 13, 2024, at 10:57 AM, Adrian Klaver wrote:
>
> On 9/13/24 07:50, Adrian Klaver wrote:
>>> On 9/13/24 02:58, Juan Rodrigo Alejandro Burgos Mella wrote:
>>> Hello, I find it unlikely that the trigger will work properly, since the
>>> reserved fields of the OLD subset have no value in an INSERT
I would usually try to avoid such long IN clauses. Why not try this:
create a temp table with one column, bulk-insert all your IDs into that
table, and then change your query to join to the temp table (see the
sketch below)? This also has the advantage of working for thousands of
values.
On Fri, 13 Sept 2024, 16:35 Wong, Kam
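A minimal sketch of that temp-table pattern (big_table and the ID values
are hypothetical):

CREATE TEMP TABLE wanted_ids (id bigint PRIMARY KEY);
INSERT INTO wanted_ids VALUES (101), (102), (103);  -- bulk-load via COPY for large sets
ANALYZE wanted_ids;  -- give the planner accurate row estimates

SELECT t.*
FROM   big_table AS t
JOIN   wanted_ids AS w ON w.id = t.id;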
On Thu, 12 Sep 2024, Adrian Klaver wrote:
Quick and dirty:
people_table
person_id PK
name_last
name_first
email_address
ph_number
...
location_table
loc_id PK
person_id_fk FK <--> people_table(person_id)
loc_name
loc_st_addr
loc_st_city
loc_st_st_prov
...
contact_
Thanks both for your suggestions so far.
On Fri, Sep 13, 2024 at 8:43 AM David G. Johnston
wrote:
>
> On Friday, September 13, 2024, Willow Chargin wrote:
>>
>> In reality I really do want the ID columns of the
>> *most recent* items.
>
>
> Use a window function to rank them and pull out rank=1
On Friday, September 13, 2024, Willow Chargin
wrote:
> In reality I really do want the ID columns of the
> *most recent* items.
>
Use a window function to rank them and pull out rank=1, or use a lateral
subquery to surgically (fetch first 1) retrieve the first row when sorted
by recency descending.
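A hedged sketch of the window-function variant, again borrowing the
items/parts names from the thread: rank rows per item, keep rank 1 to
deduplicate, then take the k most recent:

SELECT id
FROM  (SELECT i.id, i.create_time,
              row_number() OVER (PARTITION BY i.id
                                 ORDER BY i.create_time DESC) AS rn
       FROM   items AS i
       JOIN   parts AS p ON p.item_id = i.id
       WHERE  p.part_id % 3 = 0) AS ranked
WHERE  rn = 1
ORDER  BY create_time DESC
LIMIT  5;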
Fellow Postgres experts,
We have a flavor of this type of query with long in-list/bind variables (see
below). We notice that some of the bind variables come in as 0, which causes
the optimizer to choose to full-scan two of the following 3 tables. One
thought to fix a full table scan is to chop
On Thu, Sep 12, 2024 at 11:13 PM wrote:
>
> What about using DISTINCT ON ()?
> SELECT DISTINCT ON (items.id) items.*
> FROM items
> JOIN parts ON items.id = parts.item_id
> WHERE part_id % 3 = 0
> ORDER BY items.id, items.create_time DESC
> LIMIT 5;
>
> This gives me this
On 9/13/24 07:50, Adrian Klaver wrote:
On 9/13/24 02:58, Juan Rodrigo Alejandro Burgos Mella wrote:
Hello, I find it unlikely that the trigger will work properly, since
the reserved fields of the OLD subset have no value in an INSERT
I'm not seeing that the OP is asking for OLD.* values, they are just
looking to include the result
On 9/13/24 02:58, Juan Rodrigo Alejandro Burgos Mella wrote:
Hello, I find it unlikely that the trigger will work properly, since the
reserved fields of the OLD subset have no value in an INSERT
I'm not seeing that the OP is asking for OLD.* values, they are just
looking to include the result
On Thu, Sep 12, 2024 at 01:38:11PM -0400, Tom Lane wrote:
> François SIMON writes:
> > So the problem seems to come from xlc, and only at the initdb step.
> > I can see that initdb runs a backend postgres in single user mode.
> > And this is this backend, when compiled with xlc,
On Fri, Sep 13, 2024 at 05:39:22PM +0530, Vinay Oli wrote:
> I'm currently facing a strange issue with PostgreSQL 15.0. I have a
> primary-standby setup that is in sync, with a replication slot in place.
> There are 18 databases, and one of the databases on the primary side is 104
> GB, while the
On Fri, 13 Sep 2024, Tony Shelver wrote:
Or if you want to get even more flexible, where a dairy could have more
than one owner as well as one owner having more than one dairy, you could
create an intersection / relationship table (sketched below).
Something like
-- Create people table (one entry per person)
C
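The quoted example is cut off above; a hedged guess at how it continues,
sketching the intersection table (all names assumed):

CREATE TABLE people (
    person_id  bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name_last  text,
    name_first text
);

CREATE TABLE dairies (
    dairy_id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    dairy_name text
);

-- Intersection table: one row per owner/dairy relationship,
-- so both sides of the relationship can be many.
CREATE TABLE dairy_owners (
    person_id bigint REFERENCES people (person_id),
    dairy_id  bigint REFERENCES dairies (dairy_id),
    PRIMARY KEY (person_id, dairy_id)
);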
On Fri, 13 Sep 2024, Muhammad Usman Khan wrote:
To handle this situation in PostgreSQL, you can model the data in a way
that maintains a single entry for each owner in the people table while
linking the owner to multiple dairies through a separate dairies table.
This is a typical one-to-many relationship
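A hedged sketch of that one-to-many shape, assuming the people table from
the sketch above: each dairy row carries a foreign key to its single owner.

CREATE TABLE dairies (
    dairy_id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    owner_id   bigint NOT NULL REFERENCES people (person_id),
    dairy_name text
);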
On Thu, 12 Sep 2024, David G. Johnston wrote:
Read up on “many-to-many” data models. In SQL they involve a linking
table, one row per bidirectional edge, in addition to the two node tables.
David,
Thanks very much. I knew about those a long time ago but haven't needed them
in a long time so I
Hi Team
I have been using PostgreSQL for the past 6 years. PostgreSQL has
significantly impacted my life, providing me with great opportunities for
knowledge and self-development.
I'm currently facing a strange issue with PostgreSQL 15.0. I have a
primary-standby setup that is in sync, with a replication slot in place.
Hello, I find it unlikely that the trigger will work properly, since the
reserved fields of the OLD subset have no value in an INSERT
Regards,
JRBN
On Fri, Sep 13, 2024, 04:32, yudhi s wrote:
> Hello All,
>
> We have a table into which hundreds of millions
> of rows are inserted each day.
Hello community,
As you know, pg_stat_statements is a very popular and useful extension used
by many Postgres users. What do you think of adding the ability to store or
capture query plans as a new feature in this extension? It would be optional,
disabled by default, and controlled by a parameter.
Hi
To me it seems possible to create an insert select, e.g.
Insert into tab1 (val1, val2)
Select valA, valB
From tab2
Where valC = :param1
Cheers
Thiemo
Hello All,
We have a table into which hundreds of millions of rows are inserted each
day. We now have a requirement to apply some transformation/lookup logic
to a few of the input bind values while inserting the data. So I wanted
to understand
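A hedged sketch of what that insert-time lookup could look like, combining
the INSERT ... SELECT idea from Thiemo's reply above with a join to a
lookup table (all names, and the :param1 bind marker, are illustrative):

INSERT INTO tab1 (val1, val2)
SELECT s.vala,
       l.code                      -- value transformed via the lookup table
FROM   staging AS s
JOIN   lookup  AS l ON l.num = s.valb
WHERE  s.valc = :param1;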