Ronny Abraham wrote on 15.05.2017 at 19:25:
4. Insert 10,000 rows to JSON, execution time (sec):
122.855001211
5. Insert 10,000 rows to JSONB, execution time (sec):
122.128999233
What’s interesting is that inserting to JSONB is slightly faster than inserting
to JSON, a difference of about 0.7 seconds.
On Mon, May 15, 2017 at 12:02 PM, Ronny Abraham wrote:
> 4. Insert 10,000 rows to JSON, execution time (sec):
> 5. Insert 10,000 rows to JSONB, execution time (sec):
>
> What’s interesting is that inserting to JSONB is slightly faster than
> inserting to JSON.
With those times, only explanation i
each row), execution time (sec):
118.248999119
Thanks
Ronny
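A minimal psql sketch of this kind of comparison (table and column names here
are invented, and a single INSERT ... SELECT is a different workload from
10,000 separate statements, so absolute numbers will differ):

    CREATE TABLE t_json  (doc json);
    CREATE TABLE t_jsonb (doc jsonb);

    \timing on
    -- concatenating the counter forces a real parse on every row
    INSERT INTO t_json
    SELECT ('{"a": ' || i || ', "b": 2}')::json
    FROM generate_series(1, 10000) AS s(i);

    INSERT INTO t_jsonb
    SELECT ('{"a": ' || i || ', "b": 2}')::jsonb
    FROM generate_series(1, 10000) AS s(i);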
From: Dmitry Dolgov [mailto:9erthali...@gmail.com]
Sent: Monday, May 15, 2017 3:35 PM
To: Ronny Abraham
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Insert performance and disk usage in JSON vs JSONB
On 15 May 2017 at 19:25, Ronny Abraham wrote:
>
> What’s interesting is that inserting to JSONB is slightly faster than
> inserting to JSON.
>
> Maybe that’s because my JSON has a flat structure (no nesting), or maybe
> I am doing something else wrong?
I assume it's because your json documents (10 fi
Here are the attachments.
From: Ronny Abraham
Sent: Monday, May 15, 2017 1:03 PM
To: 'pgsql-general@postgresql.org'
Subject: Insert performance and disk usage in JSON vs JSONB
Hello all,
I am trying to decide whether to use JSON or JSONB to store my application data.
From what I read so far ab
Hello all,
I am trying to decide whether to use JSON or JSONB to store my application data.
From what I read so far about JSON vs JSONB:
Performance - JSON is faster for inserts since it only does JSON format
verification, vs JSONB which
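The difference is visible directly in psql: json keeps the input text
verbatim, while jsonb parses it into a binary form, sorting keys and keeping
only the last duplicate:

    SELECT '{"b": 2, "a": 1, "a": 3}'::json;
    -- {"b": 2, "a": 1, "a": 3}   (stored as-is)

    SELECT '{"b": 2, "a": 1, "a": 3}'::jsonb;
    -- {"a": 3, "b": 2}           (parsed and normalized)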
On Thu, 2007-11-08 at 13:01 -0500, Tom Lane wrote:
> "Krasimir Hristozov \(InterMedia Ltd\)" <[EMAIL PROTECTED]> writes:
> > We need to import data from a relatively large MySQL database into an
> > existing PostgreSQL database, using a PHP5 script that SELECTs from MySQL
> > and INSERTs in
Try one of these:
a) don't use INSERT statements, use a COPY instead
b) from time to time run ANALYZE on the "public" table (say 1000
inserts, then one analyze)
c) create the table without constraints (primary / foreign keys in this
case), import all the data, and then create the const
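Roughly, with hypothetical table, column, and file names:

    -- a) bulk load with COPY instead of per-row INSERTs
    COPY public.mytable (id, name) FROM '/path/to/data.csv' WITH CSV;

    -- b) refresh planner statistics every few thousand rows
    ANALYZE public.mytable;

    -- c) add constraints only after all the data is in
    ALTER TABLE public.mytable ADD PRIMARY KEY (id);
    ALTER TABLE public.mytable
      ADD FOREIGN KEY (parent_id) REFERENCES public.parent (id);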
Thanks to all who responded. Using COPY instead of INSERT really solved the
problem - the whole process took about 1h 20min on an indexed table, with
constraints (which is close to our initial expectations). We're performing some
additional tests now. I'll post some more observations when we finish.
Hello Krasimir,
You got a lot of good advice above, and I would like to add another item:
d) Make sure your PHP code is not recursive. As you said, the memory is
stable, so I think your method is iterative.
A recursive method would certainly add a little time to each insert
and use more memory
[EMAIL PROTECTED] ("=?UTF-8?B?VG9tw6HFoSBWb25kcmE=?=") writes:
> Try one of these:
>
> a) don't use INSERT statements, use a COPY instead
>
> b) from time to time run ANALYZE on the "public" table (say 1000
>inserts, then one analyze)
>
> c) create the table without constraints (primary / fo
In response to "Krasimir Hristozov \(InterMedia Ltd\)" <[EMAIL PROTECTED]>:
> We need to import data from a relatively large MySQL database into an
> existing PostgreSQL database, using a PHP5 script that SELECTs from MySQL
> and INSERTs in PostgreSQL. A part of the import involves moving about
>
"Krasimir Hristozov \(InterMedia Ltd\)" <[EMAIL PROTECTED]> writes:
> We need to import data from a relatively large MySQL database into an
> existing PostgreSQL database, using a PHP5 script that SELECTs from MySQL
> and INSERTs in PostgreSQL. A part of the import involves moving about
> 1,300,000
We need to import data from a relatively large MySQL database into an
existing PostgreSQL database, using a PHP5 script that SELECTs from MySQL
and INSERTs in PostgreSQL. A part of the import involves moving about
1,300,000 records from one MySQL table to one of our PostgreSQL tables. The
problem
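One way to avoid 1,300,000 round-trip INSERTs is to stream the exported rows
straight into COPY; schematically (all names invented):

    COPY public.dest_table (id, title, created_at) FROM STDIN WITH CSV;
    -- feed it the rows exported from MySQL (SELECT ... INTO OUTFILE,
    -- or mysql --batch piped into psql) and end the stream with \.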
On Friday August 12 2005 6:29 pm, Michael Fuhr wrote:
> I meant profiling of DBD::Pg, as Greg Sabino Mullane
> suggested. Here's his message in case you missed it:
No, I didn't miss that, and will do that as a next step. Thanks,
Michael.
Ed
On Fri, Aug 12, 2005 at 06:20:27PM -0600, Ed L. wrote:
> On Friday August 12 2005 6:11 pm, Michael Fuhr wrote:
> > Has anything changed on the system since the results were
> > consistently slow? New hardware, new versions of anything,
> > reboot, etc.? Did you do any profiling when DBD::Pg was
> >
On Friday August 12 2005 6:11 pm, Michael Fuhr wrote:
> Has anything changed on the system since the results were
> consistently slow? New hardware, new versions of anything,
> reboot, etc.? Did you do any profiling when DBD::Pg was
> consistently slow to see where the bottleneck was?
All good q
On Fri, Aug 12, 2005 at 05:37:22PM -0600, Ed L. wrote:
> On Friday August 12 2005 5:27 pm, Michael Fuhr wrote:
> > How consistent were the results before? I got the impression
> > that you saw consistently bad performance with DBD::Pg when
> > other methods performed well.
>
> Results were very c
On Friday August 12 2005 5:27 pm, Michael Fuhr wrote:
> On Fri, Aug 12, 2005 at 05:20:49PM -0600, Ed L. wrote:
> > Well, just as I thought I had this one pinned, my test
> > results have become wildly inconsistent, eroding all
> > confidence in my prior conclusions about DBI slowness, etc.
> > I'v
On Fri, Aug 12, 2005 at 05:20:49PM -0600, Ed L. wrote:
> Well, just as I thought I had this one pinned, my test results
> have become wildly inconsistent, eroding all confidence in my
> prior conclusions about DBI slowness, etc. I've seen at least
> 1000+ QPS performance via DBI/DBD::Pg, Pg, an
On Thursday August 11 2005 6:20 pm, Michael Fuhr wrote:
> On Thu, Aug 11, 2005 at 03:29:29PM -0600, Ed L. wrote:
> > Michael, you nailed it again. My libpq test C program
> > delivered between 2400 QPS and 5000 QPS vs ~10 QPS for
> > DBI/DBD::Pg on this box.
> >
> > It remains unclear to me why th
> It remains unclear to me why the same DBI/DBD::Pg client code
> would deliver performance 2-3 orders of magnitude better on
> other roughly comparable or inferior boxes.
You need to see exactly what is going on to make things so slow.
You can try
On Thu, Aug 11, 2005 at 03:29:29PM -0600, Ed L. wrote:
> Michael, you nailed it again. My libpq test C program delivered
> between 2400 QPS and 5000 QPS vs ~10 QPS for DBI/DBD::Pg on this
> box.
>
> It remains unclear to me why the same DBI/DBD::Pg client code
> would deliver performance 2-3 o
>
> Michael, you nailed it again. My libpq test C program delivered
> between 2400 QPS and 5000 QPS vs ~10 QPS for DBI/DBD::Pg on this
> box.
>
> It remains unclear to me why the same DBI/DBD::Pg client code
> would deliver performance 2-3 orders of magnitude better on
> other roughly comparabl
On Thursday August 11 2005 1:37 pm, Michael Fuhr wrote:
> Have you done any client-side tests that eliminate Perl? I'd
> suggest writing a little C program so you can measure libpq's
> performance without the extra layers of Perl and DBI/DBD::Pg.
> Test both local (Unix socket) and network (IPv4
On Thu, Aug 11, 2005 at 12:59:32PM -0600, Ed L. wrote:
> > Michael, you seem to have nailed it. The local inserts (via
> > Unix domain sockets?) that were running at 6 QPS ran at 6800
> > to 41000 QPS in a PL/pgSQL function.
>
> Here's another part of the riddle. The query durations for the
> i
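A server-side loop along these lines gives a driver-free number (function and
table names invented; the body is quoted with single quotes since
dollar-quoting only arrived in 8.0):

    CREATE FUNCTION insert_test(n integer) RETURNS void AS '
    BEGIN
        FOR i IN 1..n LOOP
            INSERT INTO test_table (val) VALUES (i);
        END LOOP;
        RETURN;
    END;
    ' LANGUAGE plpgsql;

    SELECT insert_test(10000);   -- time it from psql with \timing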
On Thursday August 11 2005 12:36 pm, Ed L. wrote:
> On Wednesday August 10 2005 6:03 pm, Michael Fuhr wrote:
> > On Wed, Aug 10, 2005 at 05:02:46PM -0600, Ed L. wrote:
> > > I have two identical servers giving abysmal INSERT
> > > performance in pgsql 7.3.4, 7.4.8, and 8.1devel under no
> > > load
On Wednesday August 10 2005 6:03 pm, Michael Fuhr wrote:
> On Wed, Aug 10, 2005 at 05:02:46PM -0600, Ed L. wrote:
> > I have two identical servers giving abysmal INSERT
> > performance in pgsql 7.3.4, 7.4.8, and 8.1devel under no
> > load or I/O contention at all (no dumps, no vacuums, no
> > apps,
On Wed, Aug 10, 2005 at 05:02:46PM -0600, Ed L. wrote:
> I have two identical servers giving abysmal INSERT performance in
> pgsql 7.3.4, 7.4.8, and 8.1devel under no load or I/O contention
> at all (no dumps, no vacuums, no apps, etc). Any suggested
> investigations appreciated...
>
> Metric:
I have two identical servers giving abysmal INSERT performance in
pgsql 7.3.4, 7.4.8, and 8.1devel under no load or I/O contention
at all (no dumps, no vacuums, no apps, etc). Any suggested
investigations appreciated...
Metric: I'm measuring average insert speed on the following
table with
I'm running PostgreSQL 7.3.4 on Windows 2000/Cygwin (AMD Athlon XP+
1800, 512MB RAM).
I want to insert about 500 records into a table. The table is heavily
indexed (has about 10-12 indices created on it). The insert is performed
in a transaction block.
If I keep the indices on the table, the i
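With 10-12 indexes, every inserted row must update each of them. The standard
workaround is to drop the indexes, load, and rebuild afterwards (index and
column names here are hypothetical, and for only ~500 rows this may or may not
pay off; measure both ways):

    DROP INDEX idx_orders_customer;   -- repeat for the other indexes
    BEGIN;
    -- ... the ~500 INSERTs ...
    COMMIT;
    CREATE INDEX idx_orders_customer ON orders (customer_id);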