there, or will PG freak out seeing a foreign object.
>
PostgreSQL certainly does not check if there are unknown directories in
the data directory, and it will not crash and burn. But it causes all
sorts of problems, and it increases the probability of human error.
regards
--
Tomas Vondra
On 10/11/2017 02:26 AM, pinker wrote:
> Tomas Vondra wrote
>> I'm probably a bit dumb (after all, it's 1AM over here), but can you
>> explain the CPU chart? I'd understand percentages (say, 75% CPU used)
>> but what do the seconds / fractions mean? E.g. when the sys
On 10/11/2017 12:28 AM, pinker wrote:
> Tomas Vondra wrote
>> What is "CPU load"? Perhaps you mean "load average"?
>
> Yes, I wasn't exact: I mean system cpu usage, it can be seen here - it's the
> graph from yesterday's failure (after 6p.m.):
> <ht
prove the hit ratio, it will only make enough room for the backends.
But I don't quite see how this could cause the severe problems you have,
as I assume this is kinda regular behavior on that system. Hard to say
without more data.
regards
--
Tomas Vondra                  http://www.2ndQuadrant.com
On 09/28/2017 04:34 PM, Seamus Abshere wrote:
> hey,
>
> Does anybody have a function lying around (preferably pl/pgsql) that
> takes a table name and returns coverage counts?
>
What is "coverage count"?
cheers
--
Tomas Vondra                  http://www.2ndQuadrant.com
You may need to tweak this to handle the inheritance tree.
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
riod, so a single page may end up
being written to disk repeatedly.
With sufficiently large shared buffers the page would be repeatedly
modified in shared buffers, and written out just once by the checkpoint
process.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
disk so WAL can be deleted?
>
The last step in a checkpoint is fsync() on the files. Without that, the
checkpoint is considered incomplete and the database won't rely on it.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
etc.
So there's nothing inherently wrong with (shared_buffers > max_wal_size)
or (shared_buffers < max_wal_size), it depends on your workload.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 09/20/2017 01:28 PM, bluefrog wrote:
>
> thanks, interestingly your method works in both Oracle and PostgreSQL,
> albeit with a different random function call.
> It does not work in SQL Anywhere though.
>
You will have to ask SQL Anywhere people, I guess.
cheers
-
, count(*) over() max_rn
from customer where c_d_id=5
) t
where rn = (select floor(r*(max_rn))+1 from rand);
or define an immutable wrapper for random():
CREATE FUNCTION random_stable() RETURNS DOUBLE PRECISION
AS 'SELECT random()'
LANGUAGE SQL
IMMUTABLE;
and use t
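For instance, the wrapper might be used like this (a sketch, not tested; customer and c_d_id are the names from the query above, and random_stable() is the wrapper just defined):

```sql
-- random_stable() is IMMUTABLE, so the planner folds it to a single
-- constant for the whole query instead of re-evaluating it per row
SELECT *
FROM (
  SELECT c.*,
         row_number() OVER () AS rn,     -- position of each row
         count(*)     OVER () AS max_rn  -- total row count
  FROM customer c
  WHERE c_d_id = 5
) t
WHERE rn = floor(random_stable() * max_rn) + 1;
```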
tion is mostly "I'll never use that, because it pretty much
eliminates all benefits of partitioning." So we need to tread pretty
carefully here, not to run into the same limitations.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 04/28/2017 01:34 AM, Andres Freund wrote:
On 2017-04-28 01:29:14 +0200, Tomas Vondra wrote:
I can confirm this observation. I bought the Intel 750 NVMe SSD last year,
the device has 1GB DDR3 cache on it (power-loss protected), can do ~1GB/s of
sustained O_DIRECT sequential writes. But when
~1GB/s of sustained O_DIRECT sequential writes. But when running
pgbench, I can't push more than ~300MB/s of WAL to it, no matter what I
do because of WALWriteLock.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
probably do some research and
testing first, before choosing the database. If we don't perform well
enough, you pick something else. Which removes the data point.
Obviously, there are systems that start small and get busier and busier
over time. And those are the ones we see.
cheers
--
Tomas
uous-archiving.html
[2] https://wiki.postgresql.org/wiki/Warm_Standby
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
u're looking for though, you'll probably need
to explain what you're trying to achieve to get better answers.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
oup aggregate, because that can work with very little memory.
At execution time we'll only use as much memory as actually needed. The
trouble is that if we under-estimated the amount of memory, there's no
way back.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
rse, there are other things we might need to know. For example if
there are foreign keys between the tables, 9.6 is using that for the
estimates (unlike 9.5). And so on.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ster_user_id)
while the 9.6 plan does not include anything like that, i.e. there's
only a single aggregate at the top level, without any group keys. Also,
the SQL query you've provided does not include any GROUP BY clause, so I
claim that those plans are from two different queries.
regards
Tomas
he results to the mailing list:
select name, setting from pg_settings where source != 'default';
That'll tell us how was the configuration modified.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
based on estimates) and runtime (abort query if limit is reached).
That might be an interesting feature, but it's also significantly more
complex than the topic of implementing a safe context for secrets,
making RLS less problematic with connection pools.
regards
--
Tomas Vondra                  http://www.2ndQuadrant.com
en, there are features needed here that
we don't have and it would be a great deal better if we did.
Yeah.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
nclude all
columns into the index, even those in the index predicate:
CREATE INDEX ON (clientid, is_demo, deleted, id, job, job_share_mode)
WHERE NOT is_demo AND NOT deleted;
I'd bet all of those will outperform the current plan.
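For illustration, the covering variant written out in full might look like this (the table name jobs is hypothetical; the columns are the ones listed above):

```sql
-- partial covering index: all referenced columns are in the key,
-- including those already fixed by the index predicate
CREATE INDEX jobs_active_covering_idx
    ON jobs (clientid, is_demo, deleted, id, job, job_share_mode)
    WHERE NOT is_demo AND NOT deleted;
```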
regards
--
Tomas Vondra                  http://www.2ndQuadrant.com
and so the clock source overhead is pretty much
irrelevant. Moreover it's marked as 'stable' which makes repeated calls
unnecessary.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 01/07/2017 04:43 AM, Venkata B Nagothi wrote:
On Sat, Jan 7, 2017 at 2:56 AM, Job wrote:
Hi guys,
really much appreciated your replies.
>> You might want to include the query plans for each server
W
etaining the most valuable information.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ecks, triggers.
CREATE TABLE AS SELECT would not pay any of those costs, of course.
Also, if you're running with wal_level=minimal, it would not have to
write the changes into WAL, while the regular UPDATE has to do that.
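A sketch of the rewrite being compared here (the table name and the transformation are hypothetical):

```sql
-- with wal_level = minimal, the rows written by CREATE TABLE AS are
-- not WAL-logged; the new table is fsynced at commit instead
BEGIN;
CREATE TABLE customer_new AS
    SELECT *            -- transform columns here instead of UPDATEing in place
    FROM customer;
DROP TABLE customer;
ALTER TABLE customer_new RENAME TO customer;
COMMIT;
```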
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 12/30/2016 12:46 AM, David G. Johnston wrote:
On Thu, Dec 29, 2016 at 4:38 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
On 12/30/2016 12:33 AM, David G. Johnston wrote:
On Thu, Dec 29, 2016 at 4:21 PM, Job <j...@colli
fix queries, it's also possible to use simple btree index with
varchar_pattern_ops.
https://www.postgresql.org/docs/9.6/static/indexes-opclass.html
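A sketch of that approach (table and column names hypothetical):

```sql
-- a plain btree with varchar_pattern_ops supports LIKE 'prefix%'
-- even when the database collation is not C
CREATE INDEX parts_number_pattern_idx
    ON parts (part_number varchar_pattern_ops);

SELECT * FROM parts
WHERE part_number LIKE '104/%';   -- can use the index above
```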
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ed at the Postgres log entries that cover these
> episodes?
>
> Is there anything of interest there?
>
In particular look at checkpoints. In the config file you've changed
checkpoint_timeout, but you haven't changed max_wal_size, so my guess
is the checkpoints happen every few minutes, a
the time the
planning is fast enough not to be logged, so it's invisible for
pgbadger.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
especially with arrays of ids.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
write workload
into one containing a lot of synchronous, single threaded, reads.
Even with SSDs that can very quickly lead to not being able to keep
up with replay anymore.
I don't immediately see why that would happen? Can you elaborate?
regards
--
Tomas Vondra                  http://www.2ndQuadrant.com
Of course, there might be other benefits of further lowering page sizes
at the OS/database level (and AFAIK there are SSD drives that use pages
smaller than 4kB).
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
write hole" problem.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ite Back' allows the controller to buffer
writes beyond what the devices do.
AFAIK there's no difference. It's an option that disables write cache in
case the battery on BBU dies for some reason (so the write cache would
become volatile). With capacitors this is not really applicable.
regards
On 09/30/2015 07:33 PM, Benjamin Smith wrote:
On Wednesday, September 30, 2015 02:22:31 PM Tomas Vondra wrote:
I think this really depends on the workload - if you have a lot of
random writes, CoW filesystems will perform significantly worse than
e.g. EXT4 or XFS, even on SSD.
I'd
On 09/30/2015 03:45 PM, Patric Bechtel wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hi Tomas,
Tomas Vondra schrieb am 30.09.2015 um 14:01:
Hi,
...
I've also done a few runs with compression, but that reduces the performance a
bit
(understandably).
I'm somewhat surprised
ated with the kernel,
but it should release the memory when needed and such. Perhaps not
letting it use all the RAM is a good idea, but 1/3 seems a bit too
aggressive?
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
are ~2x faster
than ZFS. But of course, if the ZFS features are interesting for you,
maybe it's a reasonable price.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
at in memory.
regards
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
, remove the recovery conf and start
it again as a standalone database?
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
of that.
If you don't like that, you have multiple options - stop the database
while performing the backup, perform file system level backup (e.g. lvm
snapshot) or use tools like pg_basebackup.
regards
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
.
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
archive_status folder.
--
Best regards, Генералов Юрий
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
random_page_cost = 2 (instead of default 4).
Sadly, the original post provides very little information about the
context, e.g. database size, storage system, explain plans etc.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
effective_cache_size = 10GB
logging_collector = on
track_counts = on
autovacuum = on
Otherwise, I don't see anything terribly misconfigured.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
the sender
at Cyient and delete the original message.
Sure ...
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
a crash or a reboot. You might keep a snapshot of the database
(e.g. using pg_basebackup), and use it to 'seed' the database after a
server restart.
But don't expect the database to start without a tablespace that got
lost because of being placed in a tmpfs or something.
--
Tomas Vondra
If that does not work, you probably need to investigate - firewall
settings, pg_hba.conf and postgresql.conf (maybe it's not listening on
this address)?
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
idea. In case
of crash you won't be able to even start the database, because it will
try to recover the tablespace.
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
and tuning the kernel/mount options may
help a lot.
Putting the DB into RAM and backing up periodically to disk is a
valid solution?
Well, that depends on your requirements. You may lose the changes since
the last backup.
Or, is storing indexes on a ram drive possible?
No, not really.
--
Tomas
a UPS may help a bit here). But
maybe that changed recently.
If you're on Linux use smartctl.
Also, as others have pointed out 8.4 is out of support, so consider
upgrading.
+1 to this
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ON (actor_id = ir_actor_id)
WHERE ir_dos_id = '5226');
That might behave a bit differently if there are multiple
f_intervenant_ref rows matching the actor. If that's the case, a simple
DISTINCT should fix that.
--
Tomas Vondra                  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
for that, but you may think of it as regular queries vs.
prepared statements.
Prepared statements are parsed and planned once, regular query needs to
be parsed and planned over and over again.
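At the SQL level the difference looks like this (users is a hypothetical table):

```sql
-- regular query: parsed and planned on every execution
SELECT * FROM users WHERE id = 42;

-- prepared statement: parsed once, then executed repeatedly
PREPARE get_user(int) AS
    SELECT * FROM users WHERE id = $1;
EXECUTE get_user(42);
EXECUTE get_user(43);
DEALLOCATE get_user;
```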
--
Tomas Vondra                  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
partition (without any triggers).
--
Tomas Vondra                  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
part_number LIKE '104/%'
[1] http://www.postgresql.org/docs/9.2/static/indexes-opclass.html
regards
--
Tomas Vondra                  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
= 'r';
but that may require a bit more work, if you want schema names too for
example (as the query executed by psql illustrates).
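A sketch of such a catalog query with schema names included (a simplified version of what psql itself runs):

```sql
-- list ordinary tables together with their schemas
SELECT n.nspname AS schema_name, c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```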
regards
--
Tomas Vondra                  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
,
or if it was choosing different execution plans?
--
Tomas Vondra                  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
be really helpful, so
that we can identify the bottleneck (is that a CPU, I/O, locking, ...).
A few lines from these commands should give us some basic idea:
iostat -x -k 5
vmstat -w 5
regards
--
Tomas Vondra                  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 10.12.2014 17:07, Gabriel Sánchez Martínez wrote:
Hi all,
I am running PostgreSQL 9.3.5 on Ubuntu Server 14.04 64 bit with 64 GB
of RAM. When running pg_dump on a specific table, I get the following
error:
pg_dump: Dumping the contents of table x_2013 failed:
PQgetResult()
Hi,
On 5 November 2014, 17:31, Rémy-Christophe Schermesser wrote:
Hi,
We have 2 instances of PG, one in 9.1.1 and 9.1.14. They have the same
data, schema, PG configuration, and are almost identical machines, same
number of cores and memory, but different cloud provider. The data was
On 5 November 2014, 18:10, Tom Lane wrote:
Tomas Vondra t...@fuzzy.cz writes:
On 5 November 2014, 17:31, Rémy-Christophe Schermesser wrote:
We have 2 instances of PG, one in 9.1.1 and 9.1.14. They have the same
data, schema, PG configuration, and are almost identical machines, same
On 23 July 2014, 15:56, klo uo wrote:
Bill, thanks for your reply.
shared_buffers is set to 128MB.
Now that you mention config file, the only thing I did change there, and
was suggested to me while I made some on my databases was
max_locks_per_transaction = 5 (which has default
On 30 April 2014, 10:46, David Noel wrote:
Very strange. I ran the query and it seemed slow so I rewrote it with
a join instead. Using join it finished in 800ms. The query using the
lateral finished in more than a minute. I guess I need to do some
analysis on those queries to figure out why
On 29 April 2014, 8:59, David Noel wrote:
The query I'm running is:
select page.*, coalesce((select COUNT(*) from sentence where
sentence.PageURL = page.URL group by page.URL), 0) as
NoOfSentences from page WHERE Classification LIKE CASE WHEN
THEN ELSE '%' END ORDER BY PublishDate DESC
Hi all,
I needed to implement an aggregate producing a random sample, with an
upper bound on the number of items. I.e. not the usual 5% of values
but up to 1000 values.
So my plan was to do something like this:
sample_append(internal, anyelement, int) -> internal
sample_final(internal) -
Steve, please stop top-posting, especially if others bottom-post. It turns
the messages into a hard-to-follow mess. I took the liberty to reshuffle the
parts a bit.
On 18 April 2014, 6:09, Steve Spence wrote:
On Fri, Apr 18, 2014 at 12:02 AM, Jan Wieck j...@wi3ck.info wrote:
On 04/17/14 20:01,
On 18 April 2014, 18:11, Edson Richter wrote:
Is Arduino capable of running Java apps?
If so, you can use the excellent high quality PgSQL JDBC driver.
Java interfacing with native libraries is not difficult; JNI is pretty
well known and documented (and long-lived already).
Just my 2c (if
On 18 April 2014, 17:01, Adrian Klaver wrote:
On 04/18/2014 07:53 AM, Jan Wieck wrote:
On 04/18/14 10:31, Steve Spence wrote:
Not a thing in that document about the Arduino. Just how to install
Postgres on a Raspberry Pi. My Postgres is on a hosted server at a ISP.
You intend to have
On 17.4.2014 16:51, Adrian Klaver wrote:
On 04/17/2014 07:44 AM, David Rysdam wrote:
Bruce Momjian br...@momjian.us writes:
On Thu, Apr 17, 2014 at 09:39:55AM -0400, Steve Spence wrote:
So, who wants to work on this with me? I'm a fair arduino programmer,
but know nothing about postgres.
I
On 17.4.2014 19:43, Steve Spence wrote:
Oracle thought it was a good idea to put out a MySQL version, I
figure there should be some effort to counter that here .
Really? I found no information about this on oracle.com or mysql.com,
except for a section in the discussion forum with ~20
On 9.4.2014 23:28, Martijn van Oosterhout wrote:
On Wed, Apr 09, 2014 at 02:16:34PM -0700, Ken Tanzer wrote:
Any thoughts on how to pull this off for PostgreSQL stored data?
I looked at this a while ago because I have clients who might
require this in the future. ISTM you should be able to
Hi,
On 17 March 2014, 11:45, basti wrote:
Hello,
we have a database master Version:
PostgreSQL 9.1.6 on x86_64-unknown-linux-gnu, compiled by gcc (Debian
4.7.2-2) 4.7.2, 64-bit
and a WAL-Replication Slave with hot-standby version:
PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled by
On 26 February 2014, 8:45, john gale wrote:
Does anybody have any ideas about this.
We restarted the postmaster and the issue persists. So previously in
9.0.4 where we could clean corruption, it seems in 9.3.2 we can no longer
clean corruption. I'm assuming this because our data insert
(i.e. shared memory). But if the
database starts OK, the cause is somewhere else.
kind regards
Tomas Vondra
On 27 November 2013, 22:39, Brian Wong wrote:
Date: Fri, 22 Nov 2013 20:11:47 +0100
Subject: Re: [GENERAL] ERROR: out of memory DETAIL: Failed on request of
size ???
From: t...@fuzzy.cz
To: bwon...@hotmail.com
CC: brick...@gmail.com; pgsql-general@postgresql.org
On 19 November 2013, 5:30,
Hi,
On 22 November 2013, 20:09, Edson Richter wrote:
Excuse me (or just ignore me) if it is a stupid question, but have you
configured sysctl.conf accordingly?
For instance, to use larget memory settings, I had to configure my EL as
follows:
# Controls the maximum shared segment size, in
On 19 November 2013, 5:30, Brian Wong wrote:
I've tried any work_mem value from 1gb all the way up to 40gb, with no
effect on the error. I'd like to think of this problem as a server
process memory (not the server's buffers) or client process memory issue,
primarily because when we tested the
On 14 November 2013, 16:09, Alexander Farber wrote:
pgtune has produced the following for my server (the specs:
http://www.hetzner.de/en/hosting/produkte_rootserver/px60ssd ):
default_statistics_target = 50
maintenance_work_mem = 1GB
constraint_exclusion = on
checkpoint_completion_target =
Hi,
On 11 November 2013, 7:33, Jeffrey Walton wrote:
I've been tasked with a quick acceptance check of Postgres for an
upcoming project. It's a quick check, so its limited to Clang's
analyzer and sanitizers.
The analyzer is reporting some findings, and some of the findings look
legitimate.
On 25.10.2013 19:04, Scott Marlowe wrote:
On Fri, Oct 25, 2013 at 8:29 AM, Rahila Syed rahilasye...@gmail.com wrote:
Configurations of my machine is:
Processors: Xeon E5-2650 Processor Kit
Intel® Xeon® Processor E5-2650 (2 GHz, 8C/16T,
20 MB) * 2 nos
RAM : 32GB
Hi,
On 24.10.2013 23:18, Alban Hertroys wrote:
On Oct 24, 2013, at 18:10, maillis...@gmail.com wrote:
Thank you for the answers. I'm still confused. If fsync is not
replicated to the slave, then how is replication affected by a
corrupt master? If the master dies and there's a commit
On 25 October 2013, 3:53, Andreas wrote:
Hi,
how can I give adb-assistant the rights to create and drop schemas,
tables, views ... BUT keep him out of certain existing schemas and tables?
This position of an restricted db assistant is new for us and it wasn't
considered in our access rights
On 25 October 2013, 3:53, Andreas wrote:
Hi,
I'd like to set up a DB-Server that keeps copies of our productive db
for an external db-assistant.
He should prepare chores on the test-server and mail the sql scripts to
me.
I'll look over those scripts and run them against the productive db
Hi,
On 18 October 2013, 17:06, akp geek wrote:
when I issue the top command on solaris, what ever I have posted is
exactly
getting.
top -n
load averages: 11.4, 10.8, 10.2; up 680+21:31:46
15:05:21
137 processes: 123 sleeping, 14 on cpu
CPU states: 82.0% idle, 17.5%
On 17.10.2013 20:56, akp geek wrote:
got the output from depesz and this the top on the system. thanks for
the help ..
From depesz? That site works differently - you enter the explain plan
into the form, it does some formatting/highlighting and you're supposed
to submit the link to that page.
On 15.10.2013 03:44, Huang, Suya wrote:
Thanks Tomas!
However, in the example I sent, I already did a vacuum full right
after deleted the rows causing problem, before created the index and
got an error even after the table was vacuumed. Note, the table is one I
temporarily created using create table
Hi,
On 14.10.2013 05:47, Huang, Suya wrote:
Hi,
OK, first, I know the reason of this error “index row size 3040
exceeds btree maximum, 2712” and know that we cannot create index on
certain columns with size larger than 1/3 buffer page size.
The question is, no matter if I deleted records
On 14.10.2013 22:18, Laurentius Purba wrote:
Hello all,
I am having core dump on Postgres 9.0.13 with the message ...was
terminated by signal 10: Bus error
So, I set a PID on the log file to capture specific PID that causing
this crash. After, several crashes, I finally got the PID
Hi,
On 13.10.2013 17:43, akp geek wrote:
Sorry all not posting clear. I posted our postresql.conf file. What
we are having is very slow response from the database.
would like to get some feedback about postgresql.conf file
parameters that I posted are causing the issue for performance of
On 28 September 2013, 23:25, Edson Richter wrote:
On 28/09/2013 18:12, Tomas Vondra wrote:
Do you realize the counters in pg_stat_database (temp_files, temp_bytes)
are counters tracking all the temp files created since the last reset of
the stats (possible since the cluster was created)?
So
On 28 September 2013, 20:12, Eugene Ostrovsky wrote:
Hello!
I would like to find out what is the difference in hardware resources
consuming between two solutions:
1. Several databases in the same postgresql cluster
2. Several clusters (one per each database) on the same host
Currently I have
On 28 September 2013, 21:30, Eugene Ostrovsky wrote:
Thanks for the answer!
About you questions:
1. Postgres 9.3
2. There are about 30-50 user connections. Actually Only 2 of databases
are used intensively, others only in rare cases.
3. Hardware is AMD Phenom II X4 965, 8 Gb RAM, 2 SATA2 HDD in
On 28 September 2013, 22:54, Edson Richter wrote:
On 28/09/2013 15:54, Adrian Klaver wrote:
On 09/28/2013 11:30 AM, Edson Richter wrote:
On 28/09/2013 15:22, Adrian Klaver wrote:
On 09/28/2013 11:16 AM, Edson Richter wrote:
I've a 12Gb database running without problems in Linux Centos 64bit
On 9 September 2013, 3:12, jane...@web.de wrote:
Hi, this is my query: SELECT user, sum(CASE WHEN lev >= 50
AND lev < 70 THEN 1 ELSE 0 END) as a,
sum(CASE WHEN lev >= 70 AND lev < 80 THEN 1 ELSE 0 END) as b,
sum(CASE WHEN lev >= 80 AND lev < 90 THEN 1 ELSE 0 END) as d,
On 5 September 2013, 8:52, Arun P.L wrote:
Hi all,
I have moved a lot of old records from one table say original_data_table
to another table say old_data_table which is newly created for the old
records. There were around 15 million old records to move. After this
migration I have done standard
Hi,
On 6 August 2013, 9:12, liuyuanyuan wrote:
Error detail:
org.postgresql.util.PSQLException: Error: out of memory
Details: Failed on request of size 268443660.
Seems like an issue with the OS, not with PostgreSQL, to me.
What OS and HW are you using? How much memory you have and do you