On Mon, Sep 12, 2016 at 3:46 PM, Lee Hachadoorian wrote:
> * Because database is updated infrequently, workforce can come
> together for LAN-based replication as needed
> * Entire database is on the order of a few GB
Just update one copy, then send pg_dump's to the
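A minimal sketch of that pg_dump approach (the database name, dump file, and paths below are placeholders, not from the thread; options should be checked against the installed version):

  # On the machine holding the authoritative copy
  pg_dump -Fc -f officedb.dump officedb

  # Ship officedb.dump through the file sharing service, then on each workstation:
  dropdb --if-exists officedb
  createdb officedb
  pg_restore -d officedb officedb.dump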
On Mon, 12 Sep 2016, Tom Lane wrote:
Hmm, AFAIK that's what it should be out of the box. Certainly on my RHEL6
machine I see
Until I make the time to upgrade this host from Slackware-14.1 to -14.2
the kernel is 3.10.17-smp.
Rich
Rich Shepard writes:
> Yes, it's a linux box. And /dev/shm/ does have incorrect permissions
> (755). Thanks to your response I remembered that chromium does not run until
> I follow its advice to chmod 1777 /dev/shm. Sure enough, chromium would not
> load.
> So, I
On Mon, 12 Sep 2016, Tom Lane wrote:
A look at the code suggests this is shm_open() returning EACCES. Not sure
why that's happening. If this is a Linux box, maybe the permissions on
/dev/shm are bollixed?
Tom,
Yes, it's a linux box. And /dev/shm/ does have incorrect permissions
(755).
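For reference, a quick check and fix along the lines already mentioned above (assuming /dev/shm is the usual tmpfs mount):

  ls -ld /dev/shm       # drwxr-xr-x (755) here, which makes shm_open() fail with EACCES
  chmod 1777 /dev/shm   # world-writable with the sticky bit, the normal default
  ls -ld /dev/shm       # should now show drwxrwxrwt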
2016-09-12 18:22 GMT-03:00 Istvan Soos:
> Hi Vinicius,
>
> At Heap we have non-trivial complexity in our analytical queries, and
> some of them can take a long time to complete. We did analyze features
> like the query planner's output, our query properties (type,
>
Rich Shepard writes:
>Tried to compile 3 large programs at one time and the CPU overheated,
> shutting down the server. Now when I try to start postgres-9.5.4 (as the
> superuser, postgres) I get this result:
> postgres@salmo:~$ postgres -D /var/lib/pgsql/9.5/data/
2016-09-12 17:01 GMT-03:00 Jeff Janes:
> On Mon, Sep 12, 2016 at 7:03 AM, Vinicius Segalin wrote:
>
>> Hi everyone,
>>
>> I'm trying to find a way to predict query runtime (I don't need to be
>> extremely precise). I've been reading some papers
2016-09-12 15:16 GMT-03:00 Merlin Moncure:
> On Mon, Sep 12, 2016 at 9:03 AM, Vinicius Segalin wrote:
> > Hi everyone,
> >
> > I'm trying to find a way to predict query runtime (I don't need to be
> > extremely precise). I've been reading some papers
Tried to compile 3 large programs at one time and the CPU overheated,
shutting down the server. Now when I try to start postgres-9.5.4 (as the
superuser, postgres) I get this result:
postgres@salmo:~$ postgres -D /var/lib/pgsql/9.5/data/ &
[1] 14544
postgres@salmo:~$ FATAL: could not open
On 09/12/2016 02:35 PM, Lee Hachadoorian wrote:
On Mon, Sep 12, 2016 at 5:12 PM, Adrian Klaver wrote:
On 09/12/2016 12:46 PM, Lee Hachadoorian wrote:
There are a wide variety of Postgres replication solutions, and I
would like advice on which one would be
On Mon, Sep 12, 2016 at 5:12 PM, Adrian Klaver wrote:
> On 09/12/2016 12:46 PM, Lee Hachadoorian wrote:
>>
>> There are a wide variety of Postgres replication solutions, and I
>> would like advice on which one would be appropriate to my use case.
>>
>> * Small (~half
Hi Vinicius,
At Heap we have non-trivial complexity in our analytical queries, and
some of them can take a long time to complete. We did analyze features
like the query planner's output, our query properties (type,
parameters, complexity) and tried to automatically identify factors
that
On 09/12/2016 12:46 PM, Lee Hachadoorian wrote:
There are a wide variety of Postgres replication solutions, and I
would like advice on which one would be appropriate to my use case.
* Small (~half dozen) distributed workforce using a file sharing
service, but without access to direct network
On Sat, Sep 10, 2016 at 7:09 PM, Jim Nasby wrote:
> On 9/8/16 3:29 PM, David Gibbons wrote:
>
>>
>> Isn't this heading in the wrong direction? We need to be more
>> precise than 0 (since 0 is computed off of rounded/truncated time
>> stamps), not less
On Mon, Sep 12, 2016 at 7:03 AM, Vinicius Segalin wrote:
> Hi everyone,
>
> I'm trying to find a way to predict query runtime (I don't need to be
> extremely precise). I've been reading some papers about it, and people are
> using machine learning to do so. For the feature
There are a wide variety of Postgres replication solutions, and I
would like advice on which one would be appropriate to my use case.
* Small (~half dozen) distributed workforce using a file sharing
service, but without access to direct network connection over the
internet
* Database is updated
On Mon, Sep 12, 2016 at 9:03 AM, Vinicius Segalin wrote:
> Hi everyone,
>
> I'm trying to find a way to predict query runtime (I don't need to be
> extremely precise). I've been reading some papers about it, and people are
> using machine learning to do so. For the feature
2016-09-13 0:06 GMT+12:00 Jeff Janes:
> On Sep 12, 2016 1:12 AM, "Scott Marlowe" wrote:
> >
> >
> >
> > Why not subscribe a new cluster on the same box with pg_basebackup?
>
> +1.
>
> Maybe he is afraid of (or doesn't know how to) configuring
2016-09-12 12:08 GMT-03:00 Oleksandr Shulgin:
> On Mon, Sep 12, 2016 at 4:03 PM, Vinicius Segalin wrote:
>
>> Hi everyone,
>>
>> I'm trying to find a way to predict query runtime (I don't need to be
>> extremely precise). I've been reading
On Mon, Sep 12, 2016 at 4:03 PM, Vinicius Segalin wrote:
> Hi everyone,
>
> I'm trying to find a way to predict query runtime (I don't need to be
> extremely precise). I've been reading some papers about it, and people are
> using machine learning to do so. For the feature
Hi everyone,
I'm trying to find a way to predict query runtime (I don't need to be
extremely precise). I've been reading some papers about it, and people are
using machine learning to do so. For the feature vector, they use what the
DBMS's query planner provides, such as operators and their cost.
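One rough way to collect those features is EXPLAIN in JSON form; the catalog query and the database name 'mydb' below are only stand-ins for a real workload:

  # Planner estimates only (does not execute the query)
  psql -X -d mydb -c "EXPLAIN (FORMAT JSON) SELECT count(*) FROM pg_class;"

  # Same plan tree plus the measured runtime, usable as the training label
  psql -X -d mydb -c "EXPLAIN (ANALYZE, FORMAT JSON) SELECT count(*) FROM pg_class;"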
On 09/12/2016 12:47 AM, Moreno Andreo wrote:
Ccing list.
On 08/09/2016 15:26, Adrian Klaver wrote:
so, I should be able to manage 800*64 = 5120 locks, right?
OMG, time to go back to school... 800*64 = 51200 ! ! !
Now my pg_locks table has more than 6200 rows, but if I reorder them by
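One guess at what such a reordering might look like, grouping pg_locks by relation and lock mode ('mydb' is a placeholder database name):

  psql -X -d mydb -c "
    SELECT relation::regclass AS rel, mode, count(*)
    FROM pg_locks
    WHERE relation IS NOT NULL
    GROUP BY 1, 2
    ORDER BY 3 DESC
    LIMIT 20;"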
On Mon, Sep 12, 2016 at 7:30 AM, Akash Bedi wrote:
> Note that a VACUUM wouldn't be able to remove the dead rows if there's a
> long-running active query OR any idle transaction at an isolation level >=
> Repeatable Read; tracking transactions in "pg_stat_activity" should help
>
On Sep 12, 2016 1:12 AM, "Scott Marlowe" wrote:
>
>
>
> Why not subscribe a new cluster on the same box with pg_basebackup?
+1.
Maybe he is afraid of configuring (or doesn't know how to configure) things to run on
a non-standard port, for testing?
Cheers,
Jeff
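For anyone wanting to try that, a rough sketch with made-up paths, user, and port (check the pg_basebackup options against your version):

  # Clone the running cluster into a new data directory
  pg_basebackup -h localhost -U replication_user -D /var/lib/pgsql/9.5/replica -X stream -R -P

  # Run the copy on a non-standard port next to the original
  echo "port = 5433" >> /var/lib/pgsql/9.5/replica/postgresql.conf
  pg_ctl -D /var/lib/pgsql/9.5/replica start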
Note that a VACUUM wouldn't be able to remove the dead rows if there's a
long-running active query OR any idle transaction at an isolation level >=
Repeatable Read; tracking transactions in "pg_stat_activity" should help
you eliminate/track this activity. Also, the row estimates consider the
size of
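A hedged example of that pg_stat_activity check, looking for old open transactions (the 5-minute threshold and the database name 'mydb' are arbitrary):

  psql -X -d mydb -c "
    SELECT pid, state, xact_start, now() - xact_start AS xact_age, left(query, 60) AS query
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
      AND now() - xact_start > interval '5 minutes'
    ORDER BY xact_start;"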
Hi:
On Mon, Sep 12, 2016 at 1:17 AM, Patrick B wrote:
>> schemaname | relname | n_live_tup | n_dead_tup
>> -----------+---------+------------+------------
>> public     | parts   |  191623953 |  182477402
...
> Because of that the table is very slow...
> When I do a select on
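Those figures come from the statistics views; one way to reproduce them together with a dead-row percentage ('mydb' is a placeholder, 'parts' is the table quoted above):

  psql -X -d mydb -c "
    SELECT schemaname, relname, n_live_tup, n_dead_tup,
           round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 1) AS dead_pct
    FROM pg_stat_user_tables
    WHERE relname = 'parts';"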
On 10/09/2016 23:07, Jeff Janes wrote:
On Thu, Sep 8, 2016 at 4:30 AM, Moreno Andreo wrote:
Hi folks! :-)
This morning I was woken up by a call of a
On Sun, Sep 11, 2016 at 3:26 AM, Patrick B wrote:
>
>
> 2016-09-11 14:09 GMT+12:00 Jim Nasby:
>>
>> On 9/8/16 3:29 PM, David Gibbons wrote:
>>>
>>>
>>> Isn't this heading in the wrong direction? We need to be more
>>> precise than 0