dual PIII 1.4GHz
2GB of memory
512MB ramdisk (mounted noatime)
mirrored internal SCSI160 10k rpm drives for OS and swap
1 PCI 66MHz 64bit QLA2300
1 Gbit SAN with several RAID5 LUNs on a Hitachi 9910
All filesystems are ext3.
Any thoughts?
Greg
2x 1.4GHz PIII, 2GB memory, and 1Gb/s SAN w/ Hitachi 9910 LUNs.
Greg
--
Greg Spiegelberg
Sr. Product Development Engineer
Cranel, Incorporated.
Phone: 614.318.4314
Fax: 614.431.8388
Email: [EMAIL PROTECTED]
Cranel. Technology. Integrity. Focus.
See below.
Shridhar Daithankar wrote:
Greg Spiegelberg wrote:
The data represents metrics at a point in time on a system for
network, disk, memory, bus, controller, and so-on. Rx, Tx, errors,
speed, and whatever else can be gathered.
We arrived at this one 642-column table after testing
Joe Conway wrote:
Greg Spiegelberg wrote:
The reason for my initial question was this. We save changes only.
In other words, if system S has row T1 for day D1 and if on day D2
we have another row T1 (excluding our time column) we don't want
to save it.
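That save-changes-only rule can be sketched in SQL. Everything below (table metrics, columns system_id, captured_at, vals) is a made-up stand-in for the real 642-column layout:

```sql
-- Save the new reading for system 42 only if it differs from the
-- most recently saved row for that system (time column excluded
-- from the comparison).
INSERT INTO metrics (system_id, captured_at, vals)
SELECT 42, now(), 'new-reading'
WHERE 'new-reading' IS DISTINCT FROM (
    SELECT vals
    FROM metrics
    WHERE system_id = 42
    ORDER BY captured_at DESC
    LIMIT 1
);
```

With 642 columns the comparison would have to cover every column except the time column, which is what makes a single comparable digest column (the "one more column" below) attractive.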
It still isn't entirely clear to me what
for a simple WHERE...
Okay. I'll give it a whirl. What's one more column, right?
Greg
kinda sick. I like reading about most computer theory, designs,
algorithms, database implementations, etc. That's usually how I get
into trouble too, with 642-column tables though. :)
specialized tables.
Greg
Only the files being loaded into the database via COPY or lo_import
are kept in scratch.
My WAL logs are kept on a separate ext3 file system.
Greg
Hannu Krosing wrote:
Greg Spiegelberg wrote on Mon, 12.01.2004 at 19:03:
Hannu Krosing wrote:
Spiegelberg, Greg wrote on Sun, 11.01.2004 at 18:21:
It would seem we're experiencing something similar with our scratch
volume (JFS mounted with noatime).
Which files/directories do you keep
Could you also try doing the full bulk insert
test with the checkpoint log files on another physical disk? See if
that's any faster.
The system is completely idle except for this restore process. Could
syslog be the culprit?
I turned syslog back on and the restore slowed down again. Turned
it off and it sped right back up.
Can anyone confirm this for me?
Greg
Tom Lane wrote:
Greg Spiegelberg [EMAIL PROTECTED] writes:
I turned syslog back on and the restore slowed down again. Turned
it off and it sped right back up.
We have heard reports before of syslog being quite slow. What platform
are you on exactly? Does Richard's suggestion of turning off
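For reference, the classic fix on sysklogd platforms is to disable the per-message sync by prefixing the log file path with "-" in /etc/syslog.conf. The facility and path below are assumptions:

```
# /etc/syslog.conf -- the leading "-" tells syslogd not to fsync
# the file after every message, which removes the per-line stall.
local0.*    -/var/log/postgresql.log
```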
I've been following this thread closely as I have the same problem
with an UPDATE. Everything is identical here right down to the
strace output.
Has anyone found a workaround or resolved the problem? If not,
I have test systems here which I can use to help test and explore.
Greg
--
Greg Spiegelberg
Product Development Manager
Cranel, Incorporated.
Phone: 614.318.4314
Fax: 614.431.8388
Email: [EMAIL PROTECTED]
Technology. Integrity. Focus. V-Solve
swap
back on for those heavy load times and move on.
Greg
On Mon, Sep 21, 2009 at 5:39 PM, Scott Marlowe scott.marl...@gmail.comwrote:
I'm looking at running session servers in ram. All the data is
throw-away data, so my plan is to have a copy of the empty db on the
hard drive ready to go, and have a script that just copies it into ram
and starts
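A minimal sketch of that script, assuming a pristine template cluster kept on disk and a tmpfs mount; the paths and the commented pg_ctl calls are placeholders, not a tested deployment:

```shell
#!/bin/sh
# Refresh a throw-away session database living on a ramdisk by
# copying a pristine on-disk template over it.

refresh_session_db() {
    template=$1   # empty data directory kept on disk
    ramdir=$2     # target directory on tmpfs (e.g. under /dev/shm)
    rm -rf "$ramdir"
    cp -a "$template" "$ramdir"
}

# Typical use (commented out -- requires a real cluster):
# pg_ctl -D /dev/shm/pgsession stop -m fast
# refresh_session_db /var/lib/pgsql/session-template /dev/shm/pgsession
# pg_ctl -D /dev/shm/pgsession start
```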
On Mon, Apr 26, 2010 at 5:24 PM, Anj Adu fotogra...@gmail.com wrote:
I have a 16G box and tmpfs is configured to use 8G for tmpfs .
Is a lot of memory being wasted that can be used for Postgres ? (I am
not seeing any performance issues, but I am not clear how Linux uses
the tmpfs and how
On Mon, Jul 26, 2010 at 10:26 AM, Yeb Havinga yebhavi...@gmail.com wrote:
Matthew Wakeling wrote:
Apologies, I was interpreting the graph as the latency of the device, not
all the layers in-between as well. There isn't any indication in the email
with the graph as to what the test conditions
On Mon, Jul 26, 2010 at 1:45 PM, Greg Smith g...@2ndquadrant.com wrote:
Yeb Havinga wrote:
I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory
read/write test. (scale 300) No real winners or losers, though ext2 isn't
really faster and the manual need for fix (y) during
On Tue, Jul 20, 2010 at 9:51 PM, kuopo sp...@cs.nctu.edu.tw wrote:
Let me make my problem clearer. Here is a requirement to log data from a
set of objects consistently. For example, the object maybe a mobile phone
and it will report its location every 30s. To record its historical trace, I
On Wed, Jul 28, 2010 at 9:18 AM, Yeb Havinga yebhavi...@gmail.com wrote:
Yeb Havinga wrote:
Due to the LBA remapping of the SSD, I'm not sure of putting files that
are sequentially written in a different partition (together with e.g.
tables) would make a difference: in the end the SSD will
List,
I see benefits to using the 8.4 WINDOW clause in some cases but I'm having
trouble seeing if I could morph the following query using it.
wxd0812=# EXPLAIN ANALYZE
wxd0812-# SELECT * FROM
wxd0812-# (SELECT DISTINCT ON (key1_id,key2_id) * FROM sid120.data ORDER BY
key1_id,key2_id,time_id
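For what it's worth, a DISTINCT ON query like the one above usually maps directly onto row_number(). A hedged sketch, assuming the ORDER BY is meant to keep the earliest time_id per (key1_id, key2_id) pair:

```sql
-- Equivalent of DISTINCT ON (key1_id, key2_id) ... ORDER BY
-- key1_id, key2_id, time_id: keep the first row per pair.
SELECT *
FROM (SELECT d.*,
             row_number() OVER (PARTITION BY key1_id, key2_id
                                ORDER BY time_id) AS rn
      FROM sid120.data d) s
WHERE rn = 1;
```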
On Tue, Oct 19, 2010 at 1:18 AM, AI Rumman rumman...@gmail.com wrote:
Not actualy. I used pagination with limit clause in details query and I
need the total number of records in the detail query.
Can you use a cursor? Roughly...
BEGIN;
DECLARE x CURSOR FOR SELECT * FROM crm;
MOVE FORWARD
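Fleshed out, the cursor approach might look like the following; the page size of 100 is an assumption, and crm comes from the message above:

```sql
BEGIN;
DECLARE x CURSOR FOR SELECT * FROM crm;
MOVE FORWARD 100 IN x;  -- skip the rows before the page you want
FETCH 100 FROM x;       -- fetch one page worth of rows
CLOSE x;
COMMIT;
```

The cursor has to live inside a transaction (or be declared WITH HOLD), which is why this fits a stateful client better than a stateless web request.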
On Sat, Jan 22, 2011 at 8:41 PM, Robert Haas robertmh...@gmail.com wrote:
On Fri, Jan 21, 2011 at 5:10 PM, Madhu Ramachandran iamma...@gmail.com
wrote:
i was looking at
http://www.postgresql.org/files/documentation/books/aw_pgsql/node96.html
when they talk about using OID type to store
On Mon, Mar 14, 2011 at 4:17 AM, Marti Raudsepp ma...@juffo.org wrote:
On Sun, Mar 13, 2011 at 18:36, runner run...@winning.com wrote:
Other than being very inefficient, and consuming
more time than necessary, is there any other down side to importing
into an indexed table?
Doing so will
On Tue, May 3, 2011 at 2:09 PM, Alan Hodgson ahodg...@simkin.ca wrote:
On May 3, 2011 12:43:13 pm you wrote:
On May 3, 2011, at 8:41 PM, Alan Hodgson wrote:
I am also interested in tips for this. EBS seems to suck pretty bad.
Alan, can you elaborate? Are you using PG on top of EBS?
On Wed, Jun 29, 2011 at 12:37 PM, Marinos Yannikos m...@geizhals.at wrote:
On Wed, 29 Jun 2011 13:55:58 +0200, Svetlin Manavski
svetlin.manav...@gmail.com wrote:
Question: Is there a way to get the same result from within a PL/pgSQL
function but running all the sub-queries in parallel? In
On Thu, Jun 30, 2011 at 3:02 AM, Svetlin Manavski
svetlin.manav...@gmail.com wrote:
I am now a bit puzzled after the initial satisfaction by Marinos' reply.
1. what do you mean exactly by to ensure your UNION succeeds. The dblink
docs do not mention anything about issues using directly the
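For context, the dblink route to running sub-queries in parallel relies on its asynchronous calls; the connection strings, table names, and result shapes below are placeholders:

```sql
-- Fire two sub-queries concurrently over loopback dblink
-- connections, then collect both results.
SELECT dblink_connect('c1', 'dbname=mydb');
SELECT dblink_connect('c2', 'dbname=mydb');
SELECT dblink_send_query('c1', 'SELECT count(*) FROM t1');  -- returns immediately
SELECT dblink_send_query('c2', 'SELECT count(*) FROM t2');  -- runs in parallel
SELECT * FROM dblink_get_result('c1') AS r1(n bigint);      -- blocks until done
SELECT * FROM dblink_get_result('c2') AS r2(n bigint);
SELECT dblink_disconnect('c1'), dblink_disconnect('c2');
```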
On Wed, Aug 24, 2011 at 9:33 AM, Greg Smith g...@2ndquadrant.com wrote:
On 08/24/2011 07:07 AM, Venkat Balaji wrote:
But, if put log_connections to on and log_disconnections to on wouldn't
the Postgres be logging in lot of data ?
Will this not be IO intensive ? I understand that this is the
On Thu, Feb 23, 2012 at 8:25 AM, Reuven M. Lerner reu...@lerner.co.ilwrote:
I've suggested something similar, but was told that we have limited time
to execute the DELETE, and that doing it in stages might not be possible.
Just so happens I had this exact problem last week on a rather large
On Thu, Feb 23, 2012 at 11:11 AM, Andy Colson a...@squeakycode.net wrote:
On 2/23/2012 12:05 PM, Shaun Thomas wrote:
On 02/23/2012 11:56 AM, Greg Spiegelberg wrote:
I know there are perils in using ctid but with the LOCK it should be
safe. This transaction took perhaps 30 minutes
On Fri, Mar 30, 2012 at 8:45 AM, Campbell, Lance la...@illinois.edu wrote:
PostgreSQL 9.0.x
When PostgreSQL storage is using a relatively large raid 5 or 6 array is
there any value in having your tables distributed across multiple
tablespaces if those tablespaces will exists on the
On Wed, Apr 25, 2012 at 12:52 PM, Venki Ramachandran
venki_ramachand...@yahoo.com wrote:
Now I have to run the same pgplsql on all possible combinations of
employees and with 542 employees that is about say 300,000 unique pairs.
So (300,000 * 40)/(1000 * 60 * 60) = 3.33 hours and I have to
On Fri, May 25, 2012 at 9:04 AM, Craig James cja...@emolecules.com wrote:
On Fri, May 25, 2012 at 4:58 AM, Greg Spiegelberg gspiegelb...@gmail.com
wrote:
On Sun, May 13, 2012 at 10:01 AM, Craig James cja...@emolecules.com
wrote:
On Sun, May 13, 2012 at 1:12 AM, Віталій Тимчишин tiv
On Wed, Jul 4, 2012 at 6:25 AM, Hermann Matthes hermann.matt...@web.dewrote:
I want to implement a paged query feature, where the user can enter in a
dialog how many rows he wants to see. After displaying the first page of
rows, he can push a button to display the next/previous page.
On
On Mon, Jul 9, 2012 at 8:16 AM, Craig James cja...@emolecules.com wrote:
A good solution to this general problem is hitlists. I wrote about this
concept before:
http://archives.postgresql.org/pgsql-performance/2010-05/msg00058.php
I implemented this exact strategy in our product years
Rob,
I'm going to make half of the list cringe at this suggestion though I have
used it successfully.
If you can guarantee the table will not be vacuumed during this cleanup,
and that the rows you want deleted will not be updated, I would suggest
using the ctid column to facilitate the delete. Using the simple
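A sketch of that approach follows. The lock mode keeps VACUUM and concurrent writers out for the duration (which is what makes the collected ctids safe to use), and the table name and predicate are invented:

```sql
BEGIN;
-- SHARE ROW EXCLUSIVE conflicts with VACUUM and with concurrent
-- UPDATE/DELETE, so the ctids collected below cannot move under us.
LOCK TABLE big_table IN SHARE ROW EXCLUSIVE MODE;
DELETE FROM big_table
 WHERE ctid = ANY (ARRAY(SELECT ctid
                         FROM big_table
                         WHERE expired_at < now()   -- invented predicate
                         LIMIT 10000));
COMMIT;
```

Committing between batches keeps each transaction, and the lock it holds, short.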
Two solutions come to mind. First possibility is table partitioning on the
column you're sorting. Second, depending on your application, is to use a
cursor. A cursor won't help with web applications; however, a stateful
application could benefit.
HTH
-Greg
On Wed, Aug 28, 2013 at 2:39 PM,
Hey all,
Obviously everyone who's been in PostgreSQL or almost any RDBMS for a time
has said not to have millions of tables. I too have long believed it until
recently.
AWS d2.8xlarge instance with 9.5 is my test rig using XFS on EBS (io1) for
PGDATA. Over the weekend, I created 8M tables with
chema and its intended use is complete. You'll have to trust me on
that one.
-Greg
On Sun, Sep 25, 2016 at 9:23 PM, Mike Sofen <mso...@runbox.com> wrote:
> *From:* Greg Spiegelberg *Sent:* Sunday, September 25, 2016 7:50 PM
> … Over the weekend, I created 8M tables with 1
On Wed, Sep 28, 2016 at 9:39 AM, Vitalii Tymchyshyn wrote:
> Have you considered having many databases (e.g. 100) and possibly many
> postgresql servers (e.g. 10) started on different ports?
> This would give you 1000x fewer tables per db.
>
The system design already allows for many
On Wed, Sep 28, 2016 at 11:27 AM, Stephen Frost <sfr...@snowman.net> wrote:
> Greg,
>
> * Greg Spiegelberg (gspiegelb...@gmail.com) wrote:
> > Bigger buckets mean a wider possibility of response times. Some buckets
> > may contain 140k records and some 100X more.
>
On Fri, Sep 30, 2016 at 4:49 PM, Jim Nasby wrote:
> On 9/29/16 6:11 AM, Alex Ignatov (postgrespro) wrote:
>
>> With millions of tables you have to set autovacuum_max_workers
>> sky-high =). We have some situation when at thousands of tables
>> autovacuum can’t
On Tue, Sep 27, 2016 at 10:15 AM, Terry Schmitt <tschm...@schmittworks.com>
wrote:
>
>
> On Sun, Sep 25, 2016 at 7:50 PM, Greg Spiegelberg <gspiegelb...@gmail.com>
> wrote:
>
>> Hey all,
>>
>> Obviously everyone who's been in PostgreSQL or almost
On Tue, Sep 27, 2016 at 8:30 AM, Craig James <cja...@emolecules.com> wrote:
> On Sun, Sep 25, 2016 at 7:50 PM, Greg Spiegelberg <gspiegelb...@gmail.com>
> wrote:
>
>> Hey all,
>>
>> Obviously everyone who's been in PostgreSQL or almost any RDBMS for
On Tue, Sep 27, 2016 at 9:42 AM, Mike Sofen <mso...@runbox.com> wrote:
> *From:* Mike Sofen *Sent:* Tuesday, September 27, 2016 8:10 AM
>
> *From:* Greg Spiegelberg *Sent:* Monday, September 26, 2016 7:25 AM
> I've gotten more responses than anticipated and have answere
Following list etiquette response inline ;)
On Mon, Sep 26, 2016 at 2:28 AM, Álvaro Hernández Tortosa <a...@8kdata.com>
wrote:
>
>
> On 26/09/16 05:50, Greg Spiegelberg wrote:
>
>> Hey all,
>>
>> Obviously everyone who's been in PostgreSQL or almost any RDBM
On Mon, Sep 26, 2016 at 3:43 AM, Stuart Bishop <stu...@stuartbishop.net>
wrote:
> On 26 September 2016 at 11:19, Greg Spiegelberg <gspiegelb...@gmail.com>
> wrote:
>
>> I did look at PostgresXL and CitusDB. Both are admirable however neither
>> could support
10M rows in a table is not a problem for the query
> times you are referring to. So instead of millions of tables, unless I'm
> doing my math wrong, you probably only need thousands of tables.
>
>
>
> On Mon, Sep 26, 2016 at 5:43 AM, Stuart Bishop <stu...@stuartbishop.net>
Consider the problem though. Random access to trillions of records with no
guarantee any one will be fetched twice in a short time frame nullifies the
effectiveness of a cache unless the cache is enormous. If such a cache
were that big, 100s of TBs, I wouldn't be looking at on-disk storage
On Mon, Sep 26, 2016 at 7:05 AM, Mike Sofen wrote:
> *From:* Rick Otten *Sent:* Monday, September 26, 2016 3:24 AM
> Are the tables constantly being written to, or is this a mostly read
> scenario?
>
>
>
> With regards to consistent query performance, I think you need to get
On Sun, Sep 25, 2016 at 8:50 PM, Greg Spiegelberg <gspiegelb...@gmail.com>
wrote:
> Hey all,
>
> Obviously everyone who's been in PostgreSQL or almost any RDBMS for a time
> has said not to have millions of tables. I too have long believed it until
> recently.
>