I'm wondering if there is any way to enforce that ranges in a table be
adjacent, in perhaps a similar way to enforcing that they are
non-overlapping.
Consider a (very simplified) table:
CREATE TABLE (
thing_id int,
period tsrange,
EXCLUDE USING gist (thing_id WITH =, period WITH &&),
CHECK (
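The truncated sketch above can be completed along the following lines. This is a hedged sketch with a hypothetical table name: non-overlap is expressible as an exclusion constraint (btree_gist supplies the = operator class for GiST on an int column), but adjacency cannot be, because an exclusion constraint can only forbid pairs of rows, never require that a matching row exist. A deferred constraint trigger is the usual workaround.

```sql
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE thing_period (
    thing_id int,
    period   tsrange,
    EXCLUDE USING gist (thing_id WITH =, period WITH &&)
);

-- Adjacency: every period for a thing_id (except the first) must touch
-- an existing period. -|- is the range adjacency operator.
CREATE OR REPLACE FUNCTION check_adjacent() RETURNS trigger AS $$
BEGIN
    -- Because equal periods would overlap, t.period <> NEW.period
    -- safely excludes only the row being checked.
    IF EXISTS (
        SELECT 1 FROM thing_period t
        WHERE t.thing_id = NEW.thing_id
          AND t.period <> NEW.period
    ) AND NOT EXISTS (
        SELECT 1 FROM thing_period t
        WHERE t.thing_id = NEW.thing_id
          AND t.period -|- NEW.period
    ) THEN
        RAISE EXCEPTION 'period % is not adjacent to an existing period',
            NEW.period;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER thing_period_adjacent
    AFTER INSERT OR UPDATE ON thing_period
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW EXECUTE PROCEDURE check_adjacent();
```

Deferring the check to commit time lets multi-row inserts settle before validation; note this sketch does not guard against deletes creating gaps.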
On Wed, Dec 6, 2017 at 9:29 PM, Dylan Luong
wrote:
> Since the temp files are easily identifiable, as they have the PID in the
> filename, is it OK to just manually delete these files, as the process has
> already been killed?
>
I've done that before without issue.
Cheers,
Jeff
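A sketch of the manual cleanup discussed above. The filename pattern pgsql_tmp<PID>.<fileno> is taken from the thread; the example filename is hypothetical.

```shell
f="pgsql_tmp12345.0"    # hypothetical temp file name
pid="${f#pgsql_tmp}"    # strip the prefix      -> "12345.0"
pid="${pid%%.*}"        # strip the file number -> "12345"
echo "$pid"             # -> 12345
```

Before removing a file, confirm with `kill -0 "$pid"` that no backend with that PID is still alive; as noted elsewhere in the thread, a postmaster restart also cleans these files up automatically.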
Eugene Poole wrote:
> What happens if I use the Oracle DDL on PostgreSQL? There's not much
> PL/SQL, but it has a lot of foreign keys and sequence fields.
These are the easy part, and ora2pg can definitely help there.
You'll have to modify the DDL statements, but the functionality
exists in Po
Thanks. That's why I'm starting now, and I have until 2020.
What happens if I use the Oracle DDL on PostgreSQL? There's not much
PL/SQL, but it has a lot of foreign keys and sequence fields.
TIA
Gene
On 12/7/2017 12:02 AM, John R Pierce wrote:
On 12/5/2017 10:59 AM, Eugene Poole wrote:
Since the temp files are easily identifiable, as they have the PID in the
filename, is it OK to just manually delete these files, as the process has
already been killed?
-Original Message-
From: Thomas Munro [mailto:thomas.mu...@enterprisedb.com]
Sent: Thursday, 7 December 2017 2:56 PM
To:
On 12/5/2017 10:59 AM, Eugene Poole wrote:
> 3. Would I use ora2pg to do the move? Is ora2pg still maintained?
> 4. Is there a better conversion package?
There are a variety of "ETL" tools out there that can extract data from
one database and load it into another. I was going to suggest
pglo
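For the ora2pg route mentioned above, a minimal invocation sketch; the config file path and export type names follow ora2pg's conventions, but treat the details as assumptions to check against its documentation:

```shell
# Export table DDL (including foreign keys), sequences, then data.
ora2pg -c ora2pg.conf -t TABLE    -o tables.sql
ora2pg -c ora2pg.conf -t SEQUENCE -o sequences.sql
ora2pg -c ora2pg.conf -t COPY     -o data.sql
```

The connection settings for the source Oracle database live in ora2pg.conf.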
On Wed, Dec 6, 2017 at 8:18 PM, Dylan Luong
wrote:
> Hi
>
>
>
> We have an issue where one of the developers ran a large query that hung
> and was filling up the DATA drive very rapidly. From 50% usage to 95% in
> less than 2hrs.
>
> It created a very large pgsql_tmp size (300GB). To stop the drive f
On Thu, Dec 7, 2017 at 1:18 PM, Dylan Luong wrote:
> How do we clean up the pgsql_tmp folder? Will Postgres periodically clean
> it, e.g. at CHECKPOINT?
A postmaster restart cleans up those files automatically.
--
Michael
On Thu, Dec 7, 2017 at 5:18 PM, Dylan Luong wrote:
> We have an issue where one of the developers ran a large query that hung and
> was filling up the DATA drive very rapidly. From 50% usage to 95% in less
> than 2hrs.
>
> It created a very large pgsql_tmp size (300GB). To stop the drive filling up
>
Hi
We have an issue where one of the developers ran a large query that hung and
was filling up the DATA drive very rapidly. From 50% usage to 95% in less
than 2hrs.
It created a very large pgsql_tmp size (300GB). To stop the drive filling up
we had to kill the process manually using kill -9.
How do
Please unsubscribe me.
thanks
-Original Message-
From: John R Pierce [mailto:pie...@hogranch.com]
Sent: 06 December 2017 06:11 PM
To: pgsql-general@lists.postgresql.org
Subject: Re: Migrating From CentOS-6 Oracle 11g R2 To CentOS-7 PostgreSQL ??
On 12/5/2017 10:59 AM, Eugene Poole wrote:
> 2. I don't want to use a RPM because I like controlling where software
> is installed
Then why are you using CentOS/RHEL? Those are by design RPM-managed
distributions, and RPMs with their service scripts etc. tend to be pretty
fixed about where
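If controlling the install location is the goal, a source build is the usual route. A minimal sketch; the prefix is an arbitrary choice, not a recommendation:

```shell
./configure --prefix=/opt/pgsql/10
make
make install
```

You then manage the service scripts and PATH yourself, which is exactly what the packaged RPMs would otherwise do for you.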
On 12/5/17 12:07, Kevin Brannen wrote:
> 1. What version of PostgreSQL should I use?
>
> The latest that works for you. Some people don’t like .0 releases
> for various reasons and if that applies to you, then use the latest
> of 9.6.x (9.6.5 I think); else use 10.0.
In addition,
Maltsev Eduard wrote on 06.12.2017 at 14:11:
> I'm curious if the new feature of PostgreSQL allows taking advantage
> of multiple CPUs on a server, and multiple servers (FDW), for larger
> read-only queries (data mining). In general there should be some
> worker that queries partitions and merges the re
John R Pierce wrote on 06.12.2017 at 20:34:
> > > PostgreSQL 10 does indeed have a parallel query feature that will use
> > > multiple cores. You have to explicitly invoke it.
> > "you have to explicitly invoke it" - huh?
> oops, I meant, enable.
You are correct for 9.6, where the default was "disabled", bu
On 12/6/2017 11:33 AM, Andres Freund wrote:
> > PostgreSQL 10 does indeed have a parallel query feature that will use
> > multiple cores. You have to explicitly invoke it.
> "you have to explicitly invoke it" - huh?
oops, I meant, enable.
--
john r pierce, recycling bits in santa cruz
On 2017-12-06 10:53:22 -0800, John R Pierce wrote:
> On 12/6/2017 5:11 AM, Maltsev Eduard wrote:
> > I'm curious if the new feature of PostgreSQL allows taking advantage of
> > multiple CPUs on a server, and multiple servers (FDW), for larger read
> > only queries (data mining). In general there sho
On 12/6/2017 5:11 AM, Maltsev Eduard wrote:
> I'm curious if the new feature of PostgreSQL allows taking advantage
> of multiple CPUs on a server, and multiple servers (FDW), for larger
> read-only queries (data mining). In general there should be some
> worker that queries partitions and merges the res
Hi,
I'm using PostgreSQL 9.6.4 on Linux and I'm trying to execute a query on a
table containing 51000 rows.
[The query, its plan, and the index definition were included here but are
not preserved in the archive.]
Why does the execution time increase when creating a spatial index on a
table containing a large amount of data?
I'm curious if the new feature of PostgreSQL allows taking advantage of
multiple CPUs on a server, and multiple servers (FDW), for larger read-only
queries (data mining). In general there should be some worker that queries
partitions and merges the results, and I expect it to be done in parallel.
Th
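For reference on the point debated above: 9.6 shipped with parallel query effectively disabled (max_parallel_workers_per_gather = 0), while 10 enables it by default (2). A short sketch of how it surfaces; the table name is hypothetical:

```sql
SET max_parallel_workers_per_gather = 4;
EXPLAIN SELECT count(*) FROM big_table;
-- A parallel plan shows a Gather node collecting from parallel workers,
-- e.g. Gather over Partial Aggregate over Parallel Seq Scan.
```

Parallelism across partitions and foreign servers (FDW) is much more limited than plain multi-core scans within one table.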
On Wed, Dec 6, 2017 at 9:52 PM, Martin Mueller <
martinmuel...@northwestern.edu> wrote:
>
>
> On 12/6/17, 4:39 AM, "karsten.hilb...@gmx.net"
> wrote:
>
> On Tue, Dec 05, 2017 at 09:52:28PM +, Martin Mueller wrote:
>
> > Are there rules of thumb for deciding when you can dump a
>
On Wed, Dec 6, 2017 at 7:52 AM, Martin Mueller <
martinmuel...@northwestern.edu> wrote:
>
> The objective is to create a backup from which I can restore any or all
> tables in the event of a crash. In my case, I use Postgres for my own
> scholarly purposes. Publications of whatever kind are not
John, all,
* John R Pierce (pie...@hogranch.com) wrote:
> On 12/5/2017 2:09 PM, Martin Mueller wrote:
> >Time is not really a problem for me, if we talk about hours rather
> >than days. On a roughly comparable machine I’ve made backups of
> >databases less than 10 GB, and it was a matter of minut
On Wed, Dec 6, 2017 at 7:37 AM, hmidi slim wrote:
> Hi,
> When I used EXPLAIN I found that the query planner uses a seq scan to
> execute a query on a table containing about 2 million rows. However, I'm
> creating an index. Why does the planner use a seq scan in place of an
> index scan? Is the executi
On Wed, Dec 6, 2017 at 9:37 AM, hmidi slim wrote:
> Hi,
> When I used EXPLAIN I found that the query planner uses a seq scan to
> execute a query on a table containing about 2 million rows. However, I'm
> creating an index. Why does the planner use a seq scan in place of an
> index scan? Is the executi
Hi,
When I used EXPLAIN I found that the query planner uses a seq scan to
execute a query on a table containing about 2 million rows. However, I'm
creating an index. Why does the planner use a seq scan in place of an index
scan? Is the execution of an index scan slower with a table containing a
huge amount
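A way to investigate questions like the one above; table and column names are hypothetical. The planner prefers a seq scan when it estimates the predicate matches a large fraction of the rows, since one sequential pass is cheaper than millions of random index probes:

```sql
EXPLAIN ANALYZE SELECT * FROM big_table WHERE status = 'active';

SET enable_seqscan = off;   -- diagnostic only, never in production
EXPLAIN ANALYZE SELECT * FROM big_table WHERE status = 'active';
RESET enable_seqscan;

-- Also make sure the planner's statistics are current:
ANALYZE big_table;
```

Comparing the two actual run times shows whether the planner's choice was right; if the forced index plan is genuinely faster, stale statistics or cost settings are the usual suspects.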
On Wed, Dec 06, 2017 at 12:52:53PM +, Martin Mueller wrote:
>> Are there rules of thumb for deciding when you can dump a
>> whole database and when you’d be better off dumping groups of
>> tables?
>> It seems to me we'd have to define the objective of "dumping" first ?
> The objective is to
On 12/6/17, 4:39 AM, "karsten.hilb...@gmx.net" wrote:
> On Tue, Dec 05, 2017 at 09:52:28PM +, Martin Mueller wrote:
> > Are there rules of thumb for deciding when you can dump a
> > whole database and when you’d be better off dumping groups of
> > tables?
> It seems to
On Tue, Dec 05, 2017 at 09:52:28PM +, Martin Mueller wrote:
> Are there rules of thumb for deciding when you can dump a
> whole database and when you’d be better off dumping groups of
> tables?
It seems to me we'd have to define the objective of "dumping" first ?
Regards,
Karsten
--
GPG ke
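One practical note on the whole-database-vs-groups-of-tables question in this thread: with the custom format, a single full dump already supports selective restore, which often makes separate per-table dumps unnecessary. A sketch; database and table names are hypothetical:

```shell
pg_dump -Fc mydb -f mydb.dump            # whole database, custom format
pg_restore -d mydb -t mytable mydb.dump  # restore a single table from it
pg_dump -t mytable mydb -f mytable.sql   # or dump just one table as SQL
```

The custom format is also compressed and lets pg_restore list the archive's contents (-l) before restoring anything.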