I am trying to create a static, universal (x86 and ppc at least; 64-bit architectures are optional) build of the PostgreSQL libraries on a Mac OS X 10.6 machine. The command line I am using is as follows:
CFLAGS="-arch i386 -arch ppc -arch x86_64" ./configure --disable-shared
The configure runs
On Dec 2, 2009, at 1:03 AM, Dave Page wrote:
2009/12/2 Grzegorz Jaśkiewicz gryz...@gmail.com:
On Tue, Dec 1, 2009 at 11:43 PM, Israel Brewster isr...@frontierflying.com
wrote:
I am trying to create a static, universal (x86 and ppc at least -
64 bit
architectures are optional) build
On Dec 2, 2009, at 8:26 AM, Tom Lane wrote:
Israel Brewster isr...@frontierflying.com writes:
That said, I did sort of get this to work. What I ended up doing was
building for each architecture separately (but on the same machine),
then using lipo to combine the resulting libraries. When I
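The build-per-architecture-then-merge workaround described above can be sketched roughly as follows. This is an illustrative outline only: the prefixes, the architecture list, and the exact configure flags are assumptions, and on 10.6 the ppc slice additionally requires the optional Rosetta/PPC toolchain.

```shell
# Build the static libraries once per architecture into separate prefixes.
for ARCH in i386 ppc x86_64; do
  make distclean || true
  CFLAGS="-arch $ARCH" ./configure --disable-shared --prefix=/tmp/pg-$ARCH
  make && make install
done

# Merge the thin static libraries into one universal (fat) library.
mkdir -p /tmp/pg-universal/lib
lipo -create /tmp/pg-i386/lib/libpq.a \
             /tmp/pg-ppc/lib/libpq.a \
             /tmp/pg-x86_64/lib/libpq.a \
     -output /tmp/pg-universal/lib/libpq.a
```

`lipo -info` on the result should then list all three architectures.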
On Dec 2, 2009, at 5:16 PM, Tom Lane wrote:
Israel Brewster isr...@frontierflying.com writes:
Well, I'm not trying to use the server or client programs from this
build - I just want the universal libraries for my programs. My point
in this last section, however, doesn't necessarily extend
On Dec 3, 2009, at 10:54 AM, Tom Lane wrote:
Israel Brewster isr...@frontierflying.com writes:
1) From the second paragraph of that post:
If you add something like -arch i386 -arch ppc to CFLAGS and build
normally, you get real working multiarch binaries and libraries.
Which is exactly
On Dec 19, 2009, at 2:59 AM, Filip Rembiałkowski wrote:
2009/12/18 Gauthier, Dave dave.gauth...@intel.com
Can you define your own commands in PG?
In psql, yes:
\set sel 'SELECT * FROM'
:sel clients;
\set desc '\\d'
:desc table
E.g., if users from other DBs use “describe foo” to get the metadata for
On Dec 21, 2009, at 9:34 AM, Pavel Stehule wrote:
2009/12/21 Israel Brewster isr...@frontierflying.com:
On Dec 19, 2009, at 2:59 AM, Filip Rembiałkowski wrote:
2009/12/18 Gauthier, Dave dave.gauth...@intel.com
Can you define your own commands in PG.
In psql, yes:
\set sel 'SELECT
This is sort of a PostgreSQL question/sort of a general SQL question, so I apologize if this isn't the best place to ask. At any rate, I know in PostgreSQL you can issue a command like 'SELECT "time"(timestamp_column) from table_name' to get the time part of a timestamp. The problem is that this
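For reference, a few equivalent ways of extracting the time-of-day portion of a timestamp in PostgreSQL; the table and column names here are made up for illustration:

```sql
-- Hypothetical table/column names; all three return the time portion.
SELECT "time"(ts_col) FROM some_table;        -- function call; the quotes are
                                              -- needed because time is a type name
SELECT ts_col::time FROM some_table;          -- PostgreSQL cast syntax
SELECT CAST(ts_col AS time) FROM some_table;  -- standard SQL spelling, most portable
```

The cast forms are generally the safer choice when the query may ever need to run on another database.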
On Dec 24, 2009, at 12:53 PM, Adrian Klaver wrote:
On Thursday 24 December 2009 1:44:58 pm Israel Brewster wrote:
This is sort of a PostgreSQL question/sort of a general SQL question,
so I apologize if this isn't the best place to ask. At any rate, I
know in PostgreSQL you can issue a command
On Dec 29, 2009, at 5:41 AM, Adrian Klaver wrote:
On Monday 28 December 2009 8:58:38 am Israel Brewster wrote:
On Dec 24, 2009, at 12:53 PM, Adrian Klaver wrote:
On Thursday 24 December 2009 1:44:58 pm Israel Brewster wrote:
This is sort of a PostgreSQL question/sort of a general SQL
I have a Postgresql 9.2.4 database containing real-time tracking data for our aircraft for the past week (theoretically). It is populated by two different processes: one that runs every few minutes, retrieving data from a number of sources and storing it in the DB, and one that has an "always on"
...@aklaver.com wrote:
On 03/05/2014 10:22 AM, Israel Brewster wrote:
My first thought was Oh, I must have a typo in my
cleanup routine, such that it is deleting all records rather than only
those a week old, and it's just that no one has noticed until now. So I
looked at that, but changing the delete
On Mar 5, 2014, at 10:01 AM, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
Israel Brewster wrote:
So my question is, aside from someone going in and mucking about in the wee
hours of the morning, what could possibly cause this behavior? What sort of
event could cause all data
On Mar 5, 2014, at 10:01 AM, Thom Brown t...@linux.com wrote:
On 5 March 2014 18:22, Israel Brewster isr...@eraalaska.net wrote:
I have a Postgresql 9.2.4 database containing real-time tracking data for our aircraft for the past week (theoretically). It is populated by two different processes: one
On Mar 6, 2014, at 9:03 AM, Adrian Klaver adrian.kla...@aklaver.com wrote:
On 03/06/2014 09:33 AM, Israel Brewster wrote:
For starters, this happened again this morning (no data prior to 4:45 am and
sequence reset), so whatever is going on appears to be reoccurring. Also, I
forgot
On Mar 6, 2014, at 10:03 AM, Adrian Klaver adrian.kla...@aklaver.com wrote:
On 03/06/2014 10:43 AM, Israel Brewster wrote:
On Mar 6, 2014, at 9:03 AM, Adrian Klaver adrian.kla...@aklaver.com wrote:
Well something is happening. See my notes on logging below to help track
down the cause
in, but I still don't know how or why. Closer
though. Thanks for the suggestions!
---
Israel Brewster
Computer Support Technician II
Era Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7250 x7293
On Mar 6, 2014, at 1:25 PM, Steve Crawford scrawf...@pinpointresearch.com
wrote:
On 03/06/2014 09:33 AM, Israel Brewster wrote:
For starters, this happened again this morning (no data prior to 4:45 am and
sequence reset), so whatever is going on appears to be reoccurring. Also, I
forgot
I have a PostgreSQL 9.3.5 server running on CentOS 6.5. In looking at some stats today, I saw that it was handling about 4-5 transactions/second (according to the SELECT sum(xact_commit+xact_rollback) FROM pg_stat_database; query), and an instance of the postmaster process was consistently showing
I am currently doing periodic syncs of one of my production databases to my development database using the command pg_dump -c -h <production host> <database name> | psql <database name>, run on my development server. This works well enough, but as the size of the production database grows, this command
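One common way to speed this pattern up is to dump in pg_dump's custom format (which is compressed) and restore with parallel jobs, rather than piping plain SQL through psql. A minimal sketch, in which the host and database names are placeholders:

```shell
# prod.example.com and mydb are illustrative names only.
pg_dump -Fc -h prod.example.com mydb -f /tmp/mydb.dump
pg_restore --clean --jobs=4 -d mydb /tmp/mydb.dump
```

The custom-format dump also lets you restore selected tables later with pg_restore's -t option, which a plain SQL pipe does not.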
On Oct 8, 2014, at 9:30 AM, Emanuel Calvo emanuel.ca...@2ndquadrant.com wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512
On 08/10/14 at 14:01, Israel Brewster wrote:
I am currently doing periodic syncs of one of my production
databases to my development database using
*</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>UserName</key>
<string>_postgres</string>
</dict>
</plist>
Currently, when I need to create/edit a stored procedure in PostgreSQL, my workflow goes like the following:
- Create/edit the desired function in my "DB Commands" text file
- Copy and paste the function into my development database
- Test
- Repeat the above until it works as desired
- Copy and paste the function
On Jan 29, 2015, at 1:20 PM, Michael Heaney mhea...@jcvi.org wrote:
On 1/29/2015 5:12 PM, Israel Brewster wrote:
I'm working on setting up a new PostgreSQL database server, and would like
to be able to monitor a number of statistics on it, such as:
number of connections
number of queries
On Mar 16, 2015, at 2:22 PM, David G. Johnston david.g.johns...@gmail.com
wrote:
On Mon, Mar 16, 2015 at 3:13 PM, Adrian Klaver adrian.kla...@aklaver.com
mailto:adrian.kla...@aklaver.com wrote:
On 03/16/2015 02:57 PM, Israel Brewster wrote:
I have a table with two timestamp columns
I have a table with two timestamp columns for the start time and end time of each record (call them start and end). I'm trying to figure out if there is a way to group these records by "hour of day", that is, the record should be included in the group if the hour of the day for the group falls
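One way to attack this kind of "record spans multiple hours" grouping, sketched here with invented table and column names, is to expand each row into one row per hour it covers using generate_series, and then group on the hour:

```sql
-- Hypothetical schema: events(start_ts timestamp, end_ts timestamp).
-- Expand each record into the hours it touches, then group by hour of day.
SELECT extract(hour FROM h) AS hour_of_day,
       count(*)              AS records_active
FROM events,
     generate_series(date_trunc('hour', start_ts),
                     date_trunc('hour', end_ts),
                     interval '1 hour') AS h
GROUP BY 1
ORDER BY 1;
```

A record running from 08:30 to 10:15 would then be counted in the 8, 9, and 10 o'clock groups.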
On Mar 16, 2015, at 2:16 PM, Paul Jungwirth p...@illuminatedcomputing.com
wrote:
I have a table with two timestamp columns for the start time and end
time of each record (call them start and end). I'm trying to figure out
if there is a way to group these records by hour of day,
I think you
On Mar 16, 2015, at 2:13 PM, Adrian Klaver adrian.kla...@aklaver.com wrote:
On 03/16/2015 02:57 PM, Israel Brewster wrote:
I have a table with two timestamp columns for the start time and end
time of each record (call them start and end). I'm trying to figure out
if there is a way to group
On Mar 17, 2015, at 9:05 AM, David G. Johnston david.g.johns...@gmail.com
wrote:
On Tuesday, March 17, 2015, Israel Brewster isr...@ravnalaska.net
mailto:isr...@ravnalaska.net wrote:
On Mar 17, 2015, at 8:09 AM, Paul Jungwirth p...@illuminatedcomputing.com
wrote:
Paul
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes
On Mar 17, 2015, at 1:41 PM, Marc Mamin m.ma...@intershop.de wrote:
On Tue, Mar 17, 2015, at 11:30 AM, Adrian Klaver wrote:
On 03/17/2015 10:57 AM, Israel Brewster wrote:
On Mar 17, 2015, at 9:30 AM, Paul Jungwirth
p...@illuminatedcomputing.com wrote:
So next question: how do I
On Mar 16, 2015, at 3:46 PM, Adrian Klaver adrian.kla...@aklaver.com wrote:
On 03/16/2015 04:16 PM, Israel Brewster wrote:
On Mar 16, 2015, at 2:22 PM, David G. Johnston
david.g.johns
the 9 hour, etc.
I'm working on setting up a new PostgreSQL database server, and would like to be able to monitor a number of statistics on it, such as:
number of connections
number of queries
query times
etc.
All these stats are easily available, either from the pg_stat_statements view (which I have enabled) and the
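Several of the numbers listed above can be pulled straight from the statistics views; for example (the database name is a placeholder, and the total_time column shown is the 9.x spelling, which later releases split up):

```sql
-- Current connection count and cumulative transaction count.
SELECT count(*) FROM pg_stat_activity;
SELECT xact_commit + xact_rollback
FROM pg_stat_database
WHERE datname = 'mydb';

-- Slowest statements, if pg_stat_statements is enabled.
SELECT query, calls, total_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
```

Sampling these on a schedule (cron plus psql, or a monitoring agent) turns the cumulative counters into rates.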
it around to be:
WHERE ANY(flightnum) like '8%'
gives me a syntax error. So is there any way I can run this query such that I
get any rows containing a flight number that starts with an 8 (or whatever)?
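Since ANY() can only appear on the right-hand side of the operator, one workaround is to unnest the array and match the elements directly. A sketch, assuming the logs(lognum, flightnum text[]) layout suggested by the surrounding thread:

```sql
-- Option 1: unnest the array and pattern-match the elements.
SELECT DISTINCT lognum
FROM logs, unnest(flightnum) AS f
WHERE f LIKE '8%';

-- Option 2: keep one row per log via EXISTS, using a regex anchor.
SELECT lognum
FROM logs l
WHERE EXISTS (SELECT 1 FROM unnest(l.flightnum) AS f WHERE f ~ '^8');
```

Both forms avoid the syntax error, at the cost of not being able to use a plain index on the array column.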
gg(flightnum) FROM logs WHERE flightnum ~ '^8' GROUP BY lognum;
On Oct 12, 2015, at 10:39 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>
> Jeff Janes <jeff.ja...@gmail.com> writes:
>> On Mon, Oct 12, 2015 at 10:58 AM, Israel Brewster <isr...@ravnalaska.net>
>> wrote:
>>> My first thought was to do something lik
On Oct 12, 2015, at 11:50 AM, Jeff Janes <jeff.ja...@gmail.com> wrote:
On Mon, Oct 12, 2015 at 11:39 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
Jeff Janes <jeff.ja...@gmail.com> writes:
> On Mon, Oct 12, 2015 at 10:58 AM, Israel Brewster <isr...@ravnalaska.net>
&
How sensitive is PostgreSQL to a failure that causes it to lose a single tablespace, while the rest of the database cluster is still accessible? Will it continue running, allowing access to databases that aren't in the missing tablespace, or will it crash (or similar)? I have one rather large
I have a couple of questions regarding setting up and administering a high availability Postgresql "cluster". At the moment the setup I have consists of two servers set up with streaming replication using the named replication slots feature of Postgresql 9.4 (currently running 9.4.1). Let me know
greSQL Conference PostgreSQL High Availability with Corosync/Pacemaker, and then bought the book PostgreSQL 9.0 High Performance (a sample chapter is available as a PDF). All working out well in testing at the moment.
Steve Pritchard
British Trust for Ornithology
On 23 September 2015 at 17:36, Israel Brewste
This is probably crazy talk, but in Postgresql is there any way to have a "keyed" sequence? That is, I have a table with a list of departments. While *relatively* static, technically speaking this is a dynamic list - someone certainly could decide to add a department, or combine two departments
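PostgreSQL has no built-in per-key sequence, but the usual sketch of one is a counter table with one row per key, incremented atomically via UPDATE ... RETURNING. The table and department names here are invented for illustration:

```sql
-- Hypothetical counter table, one row per department.
CREATE TABLE dept_counters (
    dept     text PRIMARY KEY,
    last_val bigint NOT NULL DEFAULT 0
);

-- Atomically take the next number for a given department.
UPDATE dept_counters
SET last_val = last_val + 1
WHERE dept = 'OPS'
RETURNING last_val;
```

Unlike a real sequence, this serializes concurrent transactions touching the same department row and rolls back with the transaction, which may or may not be what you want.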
>
> On Apr 28, 2016, at 10:39 AM, Vik Fearing <v...@2ndquadrant.fr> wrote:
>
> On 04/28/2016 08:30 PM, Israel Brewster wrote:
>> This is probably crazy talk, but in Postgresql is there any way to have
>> a "keyed" sequence? That is, I have a table with a l
On Apr 28, 2016, at 11:21 AM, David G. Johnston <david.g.johns...@gmail.com>
wrote:
>
> On Thu, Apr 28, 2016 at 12:09 PM, Israel Brewster <isr...@ravnalaska.net
> <mailto:isr...@ravnalaska.net>> wrote:
>
> >
> > It'll kill your performance, b
On Feb 1, 2017, at 1:45 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>
> Israel Brewster <isr...@ravnalaska.net> writes:
>> So just a bit ago I ran into a bit of excitement when the kernel decided
>> to kill one of my postmaster processes due to an out-of-memory issue,
So just a bit ago I ran into a bit of excitement when the kernel decided to kill one of my postmaster processes due to an out-of-memory issue, which would have been fine, except that the problem was then compounded by Pacemaker attempting to restart postgresql, but only managing to get as far as
On Jan 23, 2017, at 10:06 AM, Jerry Sievers <gsiever...@comcast.net> wrote:
>
> Israel Brewster <isr...@ravnalaska.net> writes:
>
>> I have a backup strategy that in part consists of doing pg_dump s on my
>> various databases. In order to hopefully reduce/
I have a backup strategy that in part consists of doing pg_dump s on my various databases. In order to hopefully reduce/prevent operational slow-down as a result of the backup, I do the dumps from my secondary server, configured as a hot standby with streaming replication. In general this works
On Sep 27, 2016, at 10:55 AM, Melvin Davidson <melvin6...@gmail.com> wrote:
>
>
>
> On Tue, Sep 27, 2016 at 2:46 PM, Israel Brewster <isr...@ravnalaska.net
> <mailto:isr...@ravnalaska.net>> wrote:
>> >I'm still curious as to how I can track c
On Sep 27, 2016, at 11:16 AM, John R Pierce <pie...@hogranch.com> wrote:
>
> On 9/27/2016 12:06 PM, Israel Brewster wrote:
>> That helps for one-time stat collection, but as I mentioned in my original
>> message, since connections may not last long, I could be getti
I have a Postgresql (9.4.6) cluster that hosts several databases, used by about half-a-dozen different in-house apps. I have two servers set up as master-slave with streaming replication. Lately I've been running into an issue where one of the apps periodically can't connect to the db. The problem
On Sep 27, 2016, at 10:07 AM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
>
> On 09/27/2016 09:54 AM, Israel Brewster wrote:
>> I have a Postgresql (9.4.6) cluster that hosts several databases, used
>> by about half-a-dozen different in-house apps. I have two ser
On Sep 27, 2016, at 10:55 AM, Jonathan Vanasco <postg...@2xlp.com> wrote:
>
>
> On Sep 27, 2016, at 2:46 PM, Israel Brewster wrote:
>
>> I do have those on, and I could write a parser that scans through the logs
>> counting connections and disconnectio
On Sep 27, 2016, at 9:55 AM, John R Pierce <pie...@hogranch.com> wrote:
>
> On 9/27/2016 9:54 AM, Israel Brewster wrote:
>>
>> I did look at pgbadger, which tells me I have gotten as high as 62
>> connections/second, but given that most of those connections are
unately, only seems to track connections per second and not consecutive
connections), already existed, or that there was some way to have the database
itself track this metric. If not, well, I guess that's another project :)
I was wondering if someone could inform me about, or point me to an online article about best practices for backing up a postgresql database cluster? At the moment, I have two servers running with streaming replication for failover purposes, and then I do nightly database dumps for recovery/backup
On Nov 18, 2016, at 5:48 AM, Jehan-Guillaume de Rorthais <iog...@free.fr> wrote:
>
> On Thu, 17 Nov 2016 08:26:59 -0900
> Israel Brewster <isr...@ravnalaska.net <mailto:isr...@ravnalaska.net>> wrote:
>
>>> On Nov 16, 2016, at 4:24 PM, Adrian Klave
On Nov 25, 2016, at 1:00 PM, Jeff Janes <jeff.ja...@gmail.com> wrote:
>
> On Wed, Nov 23, 2016 at 10:16 AM, Israel Brewster <isr...@ravnalaska.net
> <mailto:isr...@ravnalaska.net>> wrote:
> I was wondering if someone could inform me about, or point me to an online
On Nov 28, 2016, at 2:13 PM, John R Pierce <pie...@hogranch.com> wrote:
>
> On 11/28/2016 2:50 PM, Israel Brewster wrote:
>>>
>>> Why wouldn't the streaming replica salvage that? Are they expected to fail
>>> together? Is the NFS share onto which you wan
On Nov 23, 2016, at 9:42 AM, John R Pierce <pie...@hogranch.com> wrote:
On 11/23/2016 10:16 AM, Israel Brewster wrote:
To mitigate these issues, I am thinking of supplementing the individual dumps with a full base backup and WAL archiving to a NFS share. This should enable (relatively)
On Nov 23, 2016, at 10:31 AM, John R Pierce <pie...@hogranch.com> wrote:
>
> On 11/23/2016 11:20 AM, Israel Brewster wrote:
>> How do I determine which those are? Just based on the timestamp if the WAL
>> file, such that I could do something like take the timestamp of
I've been playing around with streaming replication, and discovered that the following series of steps *appears* to work without complaint:
- Start with master on server A, slave on server B, replicating via streaming replication with replication slots.
- Shut down master on A
- Promote slave on B to
On Nov 16, 2016, at 11:39 PM, Jehan-Guillaume de Rorthais <iog...@free.fr>
wrote:
>
> On Wed, 16 Nov 2016 15:51:26 -0900
> Israel Brewster <isr...@ravnalaska.net <mailto:isr...@ravnalaska.net>> wrote:
>
>> I've been playing around with streaming replication,
> On Nov 16, 2016, at 4:24 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
>
> On 11/16/2016 04:51 PM, Israel Brewster wrote:
>> I've been playing around with streaming replication, and discovered that
>> the following series of steps *appears* to work withou
> On Jan 9, 2017, at 1:54 PM, Kevin Grittner <kgri...@gmail.com> wrote:
>
> On Mon, Jan 9, 2017 at 11:49 AM, Israel Brewster <isr...@ravnalaska.net>
> wrote:
>
>> [load of new data]
>
>> Limit (cost=354643835.82..354643835.83 rows=1 width=9) (ac
On Nov 28, 2016, at 10:04 PM, Jeff Janes <jeff.ja...@gmail.com> wrote:
>
> On Mon, Nov 28, 2016 at 2:50 PM, Israel Brewster <isr...@ravnalaska.net
> <mailto:isr...@ravnalaska.net>> wrote:
>
>> - What is the "best" (or just a good) method of keeping
On Nov 28, 2016, at 10:20 PM, Thomas Kellerer <spam_ea...@gmx.net> wrote:
>
> Israel Brewster schrieb am 28.11.2016 um 23:50:
>>>
>>> pg_archivecleanup -n /mnt/server/archiverdir
>>> 00010010.0020.backup
>>
>> Ok, but
Simple question: are WAL files archived when full, or when recycled? That is, are the WAL archive files "up-to-date" other than the current WAL file, or will the archives always be wal_keep_segments behind?
On Nov 29, 2016, at 8:12 AM, Israel Brewster <isr...@ravnalaska.net> wrote:
>
> On Nov 28, 2016, at 10:04 PM, Jeff Janes <jeff.ja...@gmail.com
> <mailto:jeff.ja...@gmail.com>> wrote:
>>
>> On Mon, Nov 28, 2016 at 2:50 PM, Israel Brewster <isr...@ravnal
I have a database (PostgreSQL 9.6.1) containing 62,702,675 rows of latitude (numeric), longitude(numeric), elevation(integer) data, along with a PostGIS (2.3.0) geometry column (location), running on a CentOS 6.8 box with 64GB RAM and a RAID10 SSD data drive. I'm trying to get the maximum
sion)
(8 rows)
So apparently it is now doing a sequential scan on data rather than using the
index. And, of course, sorting 20 million rows is not trivial either. Did I do
something wrong with forming the query?
like mentioned, that's a worst-case
scenario query. Thanks!
Of course, if you have any suggestions for further improvement, I'm all ears :-)
Worth an experiment.
P
On Thu, Jan 5, 2017 at 1:00 PM, Israel Brewster <isr...@ravnalaska.net> wrote:
On Jan 5, 2017, at 10:38 AM, Paul Ramsey <pram...@cleverelephant.ca> wrote:
Yes, you did. You want a query that spits out a tupleset of geometries (one each for each wee segment), and then you
1:00 Paul Ramsey <pram...@cleverelephant.ca
> <mailto:pram...@cleverelephant.ca>>:
> Varying the segment length upwards might have a salutary effect for a while,
> as the efficiency improvement of fewer inner loops battles with the
> inefficiency of having more points selected by
On Jan 5, 2017, at 1:55 PM, Israel Brewster <isr...@ravnal
ay faster. Created the table with the additional
column in about 10 minutes rather than 19+ hours. Now to see how long it takes
to generate the indexes :-)
Thanks again!
nd MUCH nicer looking syntax (perhaps more
efficient as well?) for populating the column. I'll give it a shot, and see how
it goes!
Short version: Do indexes impact the speed of an UPDATE, even when the indexed columns aren't changing?
Details: I have a table containing geographical data (latitude, longitude, and elevation) with 406,833,705 records. The Latitude and Longitude columns are indexed. In order to better utilize the
I have two tables, a ticket table and a notes table, set up where each ticket can have multiple notes. I'm trying to come up with a query that returns the ticket fields as well as a field that is an array type field with the values being json-encoded note records. I've come up with the following
On Jun 19, 2017, at 12:29 PM, Thomas Kellerer <spam_ea...@gmx.net> wrote:
>
> Israel Brewster schrieb am 19.06.2017 um 22:17:
>> SELECT
>> ...
>> (SELECT
>> array_agg(to_json(row(notedate,username,note)))
>> FROM sabrenotes
>> INNER JOIN users ON
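An alternative shape for the aggregate in the snippet above, producing labeled JSON objects rather than anonymous row types, is json_agg over json_build_object (available from 9.4). The column names are taken from the snippet; the join condition is an assumption:

```sql
-- Sketch based on the column names visible in the snippet;
-- the users join condition is invented for illustration.
SELECT json_agg(json_build_object(
           'notedate', notedate,
           'username', username,
           'note',     note))
FROM sabrenotes
INNER JOIN users ON users.id = sabrenotes.user_id
```

The resulting JSON carries field names, so consuming code doesn't have to rely on column position the way it would with to_json(row(...)).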
On Oct 20, 2017, at 3:31 PM, Justin Pryzby <pry...@telsasoft.com> wrote:
>
> On Fri, Oct 20, 2017 at 03:08:26PM -0800, Israel Brewster wrote:
>> Summary: the following query takes around 12 seconds on my test machine. On
>> my production machine, it's at half an hour an
Summary: the following query takes around 12 seconds on my test machine. On my production machine, it's at half an hour and counting. What's going on?
Details: As a first stab at getting some data I need, I've developed the following SQL query:
SELECT legfrom, count(*) as totaldeps, count(*)
87 matches