OK, will check. But from pgadmin it takes 1 min, while from psql it takes 20
mins for 100,000 rows with BEGIN; COMMIT;.
Thanks,
Aditya.
On Tue, Mar 8, 2022 at 8:23 PM Bruce Momjian wrote:
> On Tue, Mar 8, 2022 at 06:36:17PM +0530, aditya desai wrote:
Hi Tom,
I added BEGIN and COMMIT around the inserts as shown below and executed it
from pgadmin for 100,000 rows. It ran in just 1 min.

BEGIN;
INSERT INTO TABLE VALUES();
INSERT INTO TABLE VALUES();
...
COMMIT;
However, when I run the above by passing it to psql (as shown below) as a
Thanks all for your inputs. We will try to implement the inserts in a single
transaction. I feel that is the best approach.
Thanks,
AD.
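As a sketch of that single-transaction approach (the table and column names below are hypothetical, not from the thread), batching many rows into one multi-row VALUES statement inside the transaction cuts both commit overhead and per-statement round trips:

```sql
BEGIN;
-- one multi-row INSERT replaces many single-row statements;
-- a few hundred to a few thousand rows per statement is typical
INSERT INTO t (id, payload) VALUES
    (1, 'row 1'),
    (2, 'row 2'),
    (3, 'row 3');
COMMIT;
```

When the statements already live in a file, `psql --single-transaction -f inserts.sql` wraps the whole file in one transaction without editing it.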
> Correct rows are wider. One of the columns is text and one is bytea.
With PG14, the new LZ4 TOAST compression is worth checking:
https://www.postgresql.fastware.com/blog/what-is-the-new-lz4-toast-compression-in-postgresql-14
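Per the linked post, PG14 lets you switch TOAST compression to LZ4; a minimal sketch (table and column names hypothetical):

```sql
-- PG14+: choose LZ4 for a specific wide column (text/bytea)
ALTER TABLE t ALTER COLUMN payload SET COMPRESSION lz4;

-- or make LZ4 the default for newly created columns
SET default_toast_compression = 'lz4';
```

LZ4 trades a slightly larger on-disk size for much cheaper compression, which mostly helps INSERT-heavy workloads with wide columns like the ones described here.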
Subject: Re: Any way to speed up INSERT INTO
On Fri, Mar 4, 2022 at 01:42:39PM -0500, Tom Lane wrote:
> aditya desai writes:
> > One of the service layer app is inserting Millions of records in a table
> > but one row at a time. Although COPY is the fastest way to import a file in
> > a table. Application has a requirement of processing a
Hi Bruce,
Correct rows are wider. One of the columns is text and one is bytea.
Regards,
Aditya.
On Sat, Mar 5, 2022 at 12:08 AM Bruce Momjian wrote:
> On Sat, Mar 5, 2022 at 12:01:52AM +0530, aditya desai wrote:
Hi,
One of the service layer apps is inserting millions of records into a table,
but one row at a time. Although COPY is the fastest way to import a file into
a table, the application has a requirement of processing each row and
inserting it into a table. Is there any way this INSERT can be tuned by increasing
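For completeness, if the per-row processing can happen before the load, the COPY path the thread mentions looks like this from psql (file and table names hypothetical):

```sql
-- client-side bulk load; \copy streams the local file through the psql client,
-- so it needs no server filesystem access
\copy t (id, payload) FROM 'rows.csv' WITH (FORMAT csv)
```

Processing rows into an intermediate CSV and then bulk-loading it is usually far faster than 100,000 individual INSERT statements, even batched ones.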