On 4/23/15 8:36 AM, Job wrote:
Hello, thank you first of all for your wonderful help!
Tomas, regarding:
There are ways to make the writes less frequent, both at the database
and OS level. We don't know what your PostgreSQL config is, but making
the checkpoints less frequent and tuning the
On Thu, Apr 23, 2015 at 9:36 AM, Job j...@colliniconsulting.it wrote:
We have a table, about 500 MB, that is updated and rewritten every day.
When the machines update, the table is truncated and then re-populated with
pg_bulk.
But I think we generate heavy writes when importing new data tables..
so this is
On 04/23/15 15:36, Job wrote:
Hello, thank you first of all for your wonderful help!
Tomas, regarding:
There are ways to make the writes less frequent, both at the database
and OS level. We don't know what your PostgreSQL config is, but making
the checkpoints less frequent and tuning the
Thanks Geoff for your idea, but my query returns something different each time I
call it. This way,
select row_number()over() as id,q.*
from (select sum(cost) as total_cost,sum(length) as
total_length,json_agg(row_to_json(r)) as data
from (select * from
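If the part that varies between runs is the order of elements produced by json_agg, one fix is an ORDER BY inside the aggregate, which pins the JSON array to a stable key. A minimal sketch (the table name and the r.id key are placeholders for whatever the original subquery selects):

```sql
SELECT row_number() OVER () AS id, q.*
FROM (SELECT sum(cost)   AS total_cost,
             sum(length) AS total_length,
             -- ORDER BY inside the aggregate makes the array order stable;
             -- r.id is a hypothetical stable key in the underlying rows
             json_agg(row_to_json(r) ORDER BY r.id) AS data
      FROM some_table r  -- placeholder for the original subquery
     ) q;
```

Without an ORDER BY, PostgreSQL is free to feed rows to the aggregate in whatever order the plan happens to produce, so the output can legitimately differ between executions.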
I have the need to move a specific set of data from one schema to another.
These schemas are on the same database instance and have all of the same
relations defined. The SQL to copy data from one table is relatively
straightforward:
INSERT INTO schema_b.my_table
SELECT * FROM schema_a.my_table
On Apr 23, 2015, at 10:09 AM, Cory Tucker cory.tuc...@gmail.com wrote:
I have the need to move a specific set of data from one schema to another.
These schemas are on the same database instance and have all of the same
relations defined. The SQL to copy data from one table is relatively
On 23/04/2015 18:09, Cory Tucker wrote:
I have the need to move a specific set of data from one schema to
another. These schemas are on the same database instance and have all
of the same relations defined. The SQL to copy data from one table is
relatively straightforward:
INSERT INTO
On 23/04/2015 19:08, Raymond O'Donnell wrote:
On 23/04/2015 18:09, Cory Tucker wrote:
I have the need to move a specific set of data from one schema to
another. These schemas are on the same database instance and have all
of the same relations defined. The SQL to copy data from one table is
I'm starting to test BDR, and I've followed the quickstart included in the
documentation successfully.
The problem I'm encountering is when two servers are on different hosts,
which is not covered in the documentation. Node1 is 10.0.0.1, node2 is
10.0.0.2, but when I try to connect from node2:
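A sketch of the usual first checks when nodes sit on separate hosts (the settings below are illustrative, not taken from the BDR docs): each node's PostgreSQL must listen on a reachable address and allow the peer through pg_hba.conf, including replication connections, which BDR uses.

```
# postgresql.conf on each node
listen_addresses = '*'              # or the node's own IP

# pg_hba.conf on node1 (10.0.0.1) — database/user names are placeholders;
# BDR also opens replication connections, hence the second line
host    all            all    10.0.0.2/32    md5
host    replication    all    10.0.0.2/32    md5
```

Reload the config (pg_ctl reload) after changing pg_hba.conf; listen_addresses needs a full restart.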
On Thu, Apr 23, 2015 at 10:27 AM Steve Atkins st...@blighty.com wrote:
On Apr 23, 2015, at 10:09 AM, Cory Tucker cory.tuc...@gmail.com wrote:
I have the need to move a specific set of data from one schema to
another. These schemas are on the same database instance and have all of
the same
Additional things to consider for decreasing pressure on the cheap drives:
- Another configuration parameter to look into
is effective_io_concurrency. For SSDs we typically set it to 1 I/O per
channel of the controller card, not including the RAID parity drives. If you
decrease this value
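As a concrete illustration of that rule of thumb (the numbers are hypothetical, not a recommendation):

```
# postgresql.conf — an 8-channel controller with 2 RAID parity drives
# would get 6 under the "1 I/O per data channel" rule quoted above
effective_io_concurrency = 6
```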
Hello, I'm processing a 100-million-row table.
I get an error message about memory and I'd like to know what can cause this
issue.
...
psql:/home/ubuntu/create_topo.sql:12: NOTICE: 104855000 edges processed
psql:/home/ubuntu/create_topo.sql:12: NOTICE: 104856000 edges processed
On 23-04-2015 16:55, Marc-André Goderre wrote:
Hello, I'm processing a 100-million-row table.
I get an error message about memory and I'd like to know what can cause this
issue.
...
psql:/home/ubuntu/create_topo.sql:12: NOTICE: 104855000 edges processed
psql:/home/ubuntu/create_topo.sql:12:
On 4/23/15 1:08 PM, Raymond O'Donnell wrote:
What I am trying to figure out is that if I also have other relations
that have foreign keys into the data I am moving, how would I also move
the data from those relations and maintain the FK integrity?
I'd create the tables in the new schema without
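One way to sketch the load order (table names here are placeholders): copy the referenced rows before the referencing ones, or lean on DEFERRABLE constraints so the checks wait until COMMIT.

```sql
BEGIN;
-- 1. Copy the referenced (parent) rows first...
INSERT INTO schema_b.customer SELECT * FROM schema_a.customer;
-- 2. ...then the referencing (child) rows; their FK targets now exist
INSERT INTO schema_b.orders   SELECT * FROM schema_a.orders;
COMMIT;

-- If a clean copy order is hard to arrange, deferred checking helps —
-- but only for constraints that were declared DEFERRABLE:
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
INSERT INTO schema_b.orders   SELECT * FROM schema_a.orders;
INSERT INTO schema_b.customer SELECT * FROM schema_a.customer;
COMMIT;
```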
Jim Nasby jim.na...@bluetreble.com writes:
We need more information from the OP about what they're doing.
Yeah. Those NOTICEs about nnn edges processed are not coming out of
anything in core Postgres; I'll bet whatever is producing those is at
fault (by trying to palloc indefinitely-large
Hello, I'm processing a 100-million-row table.
I get an error message about memory and I'd like to know what can cause this
issue.
...
psql:/home/ubuntu/create_topo.sql:12: NOTICE: 104855000 edges processed
psql:/home/ubuntu/create_topo.sql:12: NOTICE: 104856000 edges processed
On 4/23/15 3:15 PM, Edson Richter wrote:
My question might sound stupid... you have 10 GB of shared buffers, but how
much physical memory is on this server?
How have you configured the kernel swappiness, overcommit_memory, and
overcommit_ratio?
Have you set anything different in shmmax or shmall?
I don't
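For reference, the kernel knobs mentioned above live in sysctl; a sketch of where they would be set (values are illustrative, tune them for your RAM and workload):

```
# /etc/sysctl.conf — apply with `sysctl -p`
vm.swappiness = 1            # discourage swapping out database memory
vm.overcommit_memory = 2     # no overcommit; avoids the OOM killer
vm.overcommit_ratio = 80     # % of RAM grantable when overcommit_memory = 2
kernel.shmmax = 17179869184  # max shared memory segment in bytes; must exceed
                             # shared_buffers on pre-9.3 PostgreSQL
kernel.shmall = 4194304      # total shared memory, in pages
```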
Hi!
On 04/23/15 20:42, billythebomber wrote:
I'm starting to test BDR, and I've followed the quickstart included in
the documentation successfully.
The problem I'm encountering is when two servers are on different hosts,
which is not covered in the documentation. Node1 is 10.0.0.1, node2 is
Hi all,
I can see that the value pg_stat_database.xact_rollback for my db is steadily
growing, but I cannot find a way to log these rolled-back transactions
(or, maybe, the last statement within them).
Even with
log_min_duration_statement = 0
log_statement = 'all'
there are no error messages in the log.
I
On 4/23/2015 4:07 AM, holger.friedrich-fa-triva...@it.nrw.de wrote:
On Tuesday, April 21, 2015 7:43 PM, Andy Colson wrote:
On 4/21/2015 9:21 AM, holger.friedrich-fa-triva...@it.nrw.de wrote:
Exactly what constitutes reproducible values from pgbench? I keep
getting a range between 340 tps and
On 23.04.2015 at 00:06, John R Pierce wrote:
On 4/22/2015 2:57 PM, Joseph Kregloh wrote:
I see. That would still require a manual process to create the user
on each server. I was planning on using some already existing scripts
to create the user automatically on all servers and then LDAP
Hi,
On 04/23/15 14:33, John McKown wrote:
That's a really old release. But I finally found some doc on it. And
8.4 does appear to have TABLESPACEs in it.
http://www.postgresql.org/docs/8.4/static/manage-ag-tablespaces.html
quote
To define a tablespace, use the CREATE TABLESPACE
On Tuesday, April 21, 2015 7:43 PM, Andy Colson wrote:
On 4/21/2015 9:21 AM, holger.friedrich-fa-triva...@it.nrw.de wrote:
Exactly what constitutes reproducible values from pgbench? I keep
getting a range between 340 tps and 440 tps or something like that
I think it's common to get different
Dear Postgresql mailing list,
we use PostgreSQL 8.4.x on our Linux firewall distribution.
Currently, we are moving from standard SATA disks to mSATA SSDs, and
we noticed that the DB, which uses lots of indexes, is writing a lot.
Within a few months, two test machines got broken SSDs, and we are
On Thu, 23 Apr 2015 11:07:05 +0200
holger.friedrich-fa-triva...@it.nrw.de wrote:
On Tuesday, April 21, 2015 7:43 PM, Andy Colson wrote:
On 4/21/2015 9:21 AM, holger.friedrich-fa-triva...@it.nrw.de wrote:
Exactly what constitutes reproducible values from pgbench? I keep
getting a range
On Thu, Apr 23, 2015 at 7:07 AM, Job j...@colliniconsulting.it wrote:
Are there any suggestions for SSD drives?
Is putting the DB into RAM and backing it up periodically to disk a valid
solution?
I have some very busy databases on SSD-only systems. I think you're using
SSDs that are not
On Thu, Apr 23, 2015 at 6:07 AM, Job j...@colliniconsulting.it wrote:
Dear Postgresql mailing list,
we use PostgreSQL 8.4.x on our Linux firewall distribution.
Currently, we are moving from standard SATA disks to mSATA SSDs,
and we noticed that the DB, which uses lots of indexes, is
Dear Postgresql mailing list,
we use PostgreSQL 8.4.x on our Linux firewall distribution.
Currently, we are moving from standard SATA disks to mSATA SSDs, and
we noticed that the DB, which uses lots of indexes, is writing a lot.
Within a few months, two test machines got broken SSDs, and
On 04/23/15 13:07, Job wrote:
Dear Postgresql mailing list,
we use PostgreSQL 8.4.x on our Linux firewall distribution.
Currently, we are moving from standard SATA disks to mSATA SSDs,
and we noticed that the DB, which uses lots of indexes, is writing a lot.
Within a few months, two test
Hello, thank you first of all for your wonderful help!
Tomas, regarding:
There are ways to make the writes less frequent, both at the database
and OS level. We don't know what your PostgreSQL config is, but making
the checkpoints less frequent and tuning the kernel/mount options may
help a lot.
We
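For an 8.4-era config, the checkpoint knobs being referred to would look something like this (values are illustrative, not a recommendation; the right numbers depend on write volume and recovery-time tolerance):

```
# postgresql.conf (8.4-era parameters; illustrative values)
checkpoint_segments = 32            # larger, less frequent checkpoints
checkpoint_timeout = 15min
checkpoint_completion_target = 0.9  # spread each checkpoint's writes out
```

Fewer, more spread-out checkpoints mean each dirty page is rewritten less often, which directly reduces wear on SSDs.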
On 23/04/2015 15:28, Vick Khera wrote:
On Thu, Apr 23, 2015 at 7:07 AM, Job j...@colliniconsulting.it
mailto:j...@colliniconsulting.it wrote:
Are there any suggestions for SSD drives?
Is putting the DB into RAM and backing it up periodically to disk a valid
solution?
I have some
Hi
On 04/23/15 14:50, Chris Mair wrote:
Dear Postgresql mailing list,
we use PostgreSQL 8.4.x on our Linux firewall distribution.
Currently, we are moving from standard SATA disks to mSATA SSDs,
and we noticed that the DB, which uses lots of indexes, is writing a lot.
Within a few months,