Hello:
Sorry to disturb you.
I am now encountering a serious problem: there is not enough memory.
My customer reported that when they run a program, the total
memory and disk I/O usage both reached the threshold value (80%).
The program is written in Java.
It uses JDBC to pull out data
On 8/25/2013 11:08 PM, 高健 wrote:
The program is written in Java.
It uses JDBC to pull out data from the DB; the query joins
some tables together and will return about 3,000,000 records.
Then the program uses JDBC again to write the records row by row,
inserting them into another
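Row-by-row JDBC inserts over millions of rows are a common source of exactly this kind of memory and I/O pressure. A minimal sketch of the batched alternative, using Python's stdlib sqlite3 as a stand-in for the Java/JDBC program described above (the table names and the 10,000-row source are invented for illustration):

```python
import sqlite3

# Source database with some rows to copy (stand-in for the real DB).
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE source (id INTEGER, payload TEXT)")
src.executemany("INSERT INTO source VALUES (?, ?)",
                [(i, f"row-{i}") for i in range(10_000)])

dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE target (id INTEGER, payload TEXT)")

# Stream the result set in fixed-size batches instead of loading
# everything, and insert each batch in one round trip.
BATCH = 1_000
cur = src.execute("SELECT id, payload FROM source")
while True:
    rows = cur.fetchmany(BATCH)   # bounded memory: one batch at a time
    if not rows:
        break
    dst.executemany("INSERT INTO target VALUES (?, ?)", rows)
    dst.commit()                  # one commit per batch, not per row

count = dst.execute("SELECT count(*) FROM target").fetchone()[0]
print(count)
```

In JDBC the same shape would use `setFetchSize()` on the statement plus `addBatch()`/`executeBatch()`; the point is the batching pattern, not the library.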
Hello All,
I am migrating Oracle queries to Postgres queries.
*Oracle query is below*
select * from (select * from KM_COURSE_MAST where ID in (select OBJ_ID from
(select OBJ_ID,PERFORMER_TYPE,PERFORMER_ID from KM_REL_OBJ_PER_ACTION
where OBJ_TYPE='COURSETYPE') where PERFORMER_TYPE='GROUP'
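The nested derived tables above can be flattened into a single IN subquery, which runs unchanged on Postgres. A small sketch, using Python's stdlib sqlite3 in place of Oracle/Postgres, with invented sample rows:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE KM_COURSE_MAST (ID INTEGER, NAME TEXT)")
db.execute("""CREATE TABLE KM_REL_OBJ_PER_ACTION
              (OBJ_ID INTEGER, OBJ_TYPE TEXT,
               PERFORMER_TYPE TEXT, PERFORMER_ID INTEGER)""")
db.executemany("INSERT INTO KM_COURSE_MAST VALUES (?, ?)",
               [(1, "intro"), (2, "advanced"), (3, "other")])
db.executemany("INSERT INTO KM_REL_OBJ_PER_ACTION VALUES (?, ?, ?, ?)",
               [(1, "COURSETYPE", "GROUP", 10),
                (2, "COURSETYPE", "USER", 11),
                (3, "DOCTYPE",    "GROUP", 12)])

# Both filters moved into one subquery: same result as the nested form,
# because the inner derived table only passed columns through.
rows = db.execute("""
    SELECT * FROM KM_COURSE_MAST
    WHERE ID IN (SELECT OBJ_ID FROM KM_REL_OBJ_PER_ACTION
                 WHERE OBJ_TYPE = 'COURSETYPE'
                   AND PERFORMER_TYPE = 'GROUP')
""").fetchall()
print(rows)   # only course 1 matches both filters
```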
On Mon, Aug 26, 2013 at 4:57 AM, 高健 luckyjack...@gmail.com wrote:
But why write the entire content of each disk page to WAL?
The documentation states: "The row-level change data normally
stored in WAL will not be enough to completely restore such a page
during post-crash recovery." I
On 08/26/2013 11:37 AM, Luca Ferrari wrote:
On Mon, Aug 26, 2013 at 4:57 AM, 高健 luckyjack...@gmail.com wrote:
But why write the entire content of each disk page to WAL?
The documentation states: "The row-level change data normally
stored in WAL will not be enough to completely restore
Hi there.
I didn't find out why this issue happened. Only backing up and reformatting the
filesystem where the corrupted postmaster.pid file existed helped to get rid of
it. Hopefully the file won't appear in the future.
Best regards,
Michal Mistina
From: pgsql-general-ow...@postgresql.org
On Mon, Aug 26, 2013 at 9:53 PM, Mistina Michal michal.mist...@virte.sk wrote:
Hi there.
I didn't find out why this issue happened. Only backing up and reformatting the
filesystem where the corrupted postmaster.pid file existed helped to get rid of
it. Hopefully the file won't appear in the future.
I
Given a system with 32 cores, an SSD SAN with 48x drives, and 2x 8Gbps
paths from the server to the SAN, what would be a good starting point
to set effective_io_concurrency? I currently have it set to 32, but I
kind of feel like the right setting would be 2 since we have two
paths. We don't
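For what it's worth, the documented guidance ties effective_io_concurrency to the number of drives that concurrent reads can keep busy, not the number of HBA paths, so 2 is likely too low for a 48-drive SSD array. A postgresql.conf sketch; the value is only a starting point to benchmark on this hardware, not a recommendation:

```ini
# postgresql.conf — benchmarking starting point, not a recommendation
effective_io_concurrency = 32
```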
BladeOfLight16 wrote
Then again, I guess you don't need a nested query.
SELECT v_rec1.user,
       CASE WIDTH_BUCKET(v_rec_fts.lev, 0, 100, 4)
         WHEN 1 THEN '0 to 25'
         WHEN 2 THEN '25 to 50'
         WHEN 3 THEN '50 to 75'
         WHEN 4 THEN '75 to 100'
       END
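The CASE above relies on WIDTH_BUCKET's numbering. A small Python model of the semantics as documented for Postgres (equal-width buckets 1..count, bucket 0 below the range, count + 1 at or above the upper bound; the label strings mirror the query above):

```python
def width_bucket(operand, low, high, count):
    """Mimics SQL width_bucket() for a numeric range."""
    if operand < low:
        return 0
    if operand >= high:
        return count + 1
    return int((operand - low) * count / (high - low)) + 1

labels = {1: "0 to 25", 2: "25 to 50", 3: "50 to 75", 4: "75 to 100"}
for lev in (0, 24.9, 25, 70, 99.9):
    print(lev, labels[width_bucket(lev, 0, 100, 4)])
```

Note that each endpoint (25, 50, 75) lands in exactly one bucket, which is what makes WIDTH_BUCKET attractive compared with hand-written overlapping CASE arms.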
Hi Masao.
Thank you for the suggestion. Indeed, that could occur, most probably while I
was testing a split-brain situation. In that case I turned off the network card on
one node and on both nodes DRBD was in the primary role. But after the
split-brain occurred I resynced DRBD, so from the two primaries I promoted
Hi to all
On my PostgreSQL 9.1 instance I had a problem with an index.
Using the index I got fewer tuples than expected.
I tried removing the index and the query worked fine, but obviously the query was
slow, so I tried to recreate the index.
I ran the CREATE INDEX statement but after a long time I got
On Mon, Aug 26, 2013 at 11:02 PM, Mistina Michal
michal.mist...@virte.sk wrote:
Hi Masao.
Thank you for the suggestion. Indeed, that could occur, most probably while I
was testing a split-brain situation. In that case I turned off the network card on
one node and on both nodes DRBD was in the primary role.
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of 高健
Sent: Monday, August 26, 2013 2:08 AM
To: pgsql-general
Subject: [GENERAL] Is there any method to limit resource usage in PG?
Hello:
Sorry to disturb you.
I am now encountering a serious
On 08/26/2013 04:27 PM, Torello Querci wrote:
Create index statement that I use is:
CREATE INDEX dati_impianto_id_tipo_dato_id_data_misurazione_idx
ON dati
USING btree
(impianto_id , tipo_dato_id , data_misurazione DESC);
What are the data types of these columns?
--
Florian Weimer
On Mon, Aug 26, 2013 at 4:27 PM, Torello Querci tque...@gmail.com wrote:
ERROR: unexpected end of tape
Really strange, if I get it right something went wrong while sorting tuples.
Is it possible to test with an incremented work_mem value?
Luca
--
Sent via pgsql-general mailing list
2013/8/26 Florian Weimer fwei...@redhat.com
On 08/26/2013 04:27 PM, Torello Querci wrote:
Create index statement that I use is:
CREATE INDEX dati_impianto_id_tipo_dato_id_data_misurazione_idx
ON dati
USING btree
(impianto_id , tipo_dato_id , data_misurazione DESC);
What are
2013/8/26 Luca Ferrari fluca1...@infinito.it
On Mon, Aug 26, 2013 at 4:27 PM, Torello Querci tque...@gmail.com wrote:
ERROR: unexpected end of tape
Really strange, if I get it right something went wrong while sorting
tuples.
Is it possible to test with an incremented work_mem value?
On Sun, Aug 25, 2013 at 7:57 PM, 高健 luckyjack...@gmail.com wrote:
Hi :
Thanks to Alvaro! Sorry for the late reply.
I have understood a little about it.
But the description of full_page_writes made me even more confused. Sorry,
maybe I am straying into another problem:
It is said:
Torello Querci tque...@gmail.com writes:
2013/8/26 Luca Ferrari fluca1...@infinito.it
Is it possible to test with an incremented work_mem value?
Actually I use the default work_mem value (1MB).
maintenance_work_mem is what would be used for CREATE INDEX.
FWIW, though, the combination of this
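Tom Lane's point is that CREATE INDEX is governed by maintenance_work_mem, not work_mem. A sketch of a session-local bump before rebuilding the index quoted earlier in the thread (100MB is the value the poster later reports succeeding with):

```sql
-- Session-local: does not touch postgresql.conf
SET maintenance_work_mem = '100MB';
CREATE INDEX dati_impianto_id_tipo_dato_id_data_misurazione_idx
    ON dati USING btree (impianto_id, tipo_dato_id, data_misurazione DESC);
```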
Torello Querci escribió:
2013/8/26 Luca Ferrari fluca1...@infinito.it
On Mon, Aug 26, 2013 at 4:27 PM, Torello Querci tque...@gmail.com wrote:
ERROR: unexpected end of tape
Really strange, if I get it right something went wrong while sorting
tuples.
Is it possible to test with an
2013/8/26 Tom Lane t...@sss.pgh.pa.us
Torello Querci tque...@gmail.com writes:
2013/8/26 Luca Ferrari fluca1...@infinito.it
Is it possible to test with an incremented work_mem value?
Actually I use the default work_mem value (1MB).
maintenance_work_mem is what would be used for CREATE
On Sun, Aug 25, 2013 at 11:08 PM, 高健 luckyjack...@gmail.com wrote:
Hello:
Sorry for disturbing.
I am now encountering a serious problem: there is not enough memory.
My customer reported that when they run a program, the total
memory and disk I/O usage both reached the threshold
Ok,
now the CREATE INDEX has finished using maintenance_work_mem=100MB.
Thanks to all.
I suppose that a clearer error message would have helped.
Best Regards, Torello
2013/8/26 Torello Querci tque...@gmail.com
2013/8/26 Tom Lane t...@sss.pgh.pa.us
Torello Querci tque...@gmail.com writes:
On 08/26/2013 06:37 PM, Torello Querci wrote:
At the moment I get this error while executing the restore of the big
table into a different database on the same machine:
psql:dump_ess_2013_08_26.sql:271177424: SSL error: sslv3 alert
unexpected message
On 08/23/2013 03:29 AM, sachin kotwal wrote:
create a sample table with one or two rows, then use the following command to
populate data.
INSERT INTO TABLE_NAME VALUES(generate_series(1,10));
Cartesian joins are also useful - especially when you want
semi-realistic data. A quick Google will
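The Cartesian-join trick mentioned above can be sketched with Python's stdlib sqlite3: cross-joining two small seed tables multiplies their rows into semi-realistic combinations (the table names and rows are invented for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE firsts (name TEXT)")
db.execute("CREATE TABLE cities (city TEXT)")
db.executemany("INSERT INTO firsts VALUES (?)",
               [("ann",), ("bob",), ("cyd",)])
db.executemany("INSERT INTO cities VALUES (?)",
               [("oslo",), ("lima",)])

# CROSS JOIN produces every (name, city) pair: 3 x 2 = 6 rows.
db.execute("""CREATE TABLE people AS
              SELECT name, city FROM firsts CROSS JOIN cities""")
n = db.execute("SELECT count(*) FROM people").fetchone()[0]
print(n)
```

With a few seed tables of 100 rows each, the same pattern yields millions of plausible-looking rows, which is often more useful than a bare generate_series of integers.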
On Mon, Aug 26, 2013 at 10:01 AM, Torello Querci tque...@gmail.com wrote:
Ok,
now create index is finished using maintenance_work_mem=100MB.
Thanks to all.
I suppose that an error message more clear can help.
Unfortunately, since no one knows what the real problem is, we can't
make the
Hi,
thanks for all your answers.
I'll have a try with the contains operator and the intrange, but before that I'd like to know if this would work:
CASE WHEN a >= 0 AND a < 25
CASE WHEN a >= 25 AND a < 50
There wouldn't be a double endpoint. I just have to decide which range the endpoint includes.
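The half-open-interval idea above is easy to check. A minimal Python sketch (the bucket labels are taken from the quoted CASE arms; including 100 in the last bucket is my assumption about the intended endpoint choice):

```python
def bucket(a):
    """Half-open intervals [low, high): every value, including the
    former 'double endpoints' 25, 50, 75, falls into exactly one bucket."""
    if 0 <= a < 25:
        return "0 to 25"
    if 25 <= a < 50:
        return "25 to 50"
    if 50 <= a < 75:
        return "50 to 75"
    if 75 <= a <= 100:   # assumption: 100 belongs to the last bucket
        return "75 to 100"
    return None

print(bucket(25), bucket(75))
```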
Janek Sendrowski wrote
Hi,

thanks for all your answers.
I'll have a try with the contains operator and the intrange, but
before I'd like to know if this would work:

CASE WHEN a >= 0 AND a < 25
CASE WHEN a >= 25 AND a < 50
On Mon, Aug 26, 2013 at 3:17 AM, gajendra s v svgajen...@gmail.com wrote:
Please explain to me why this is?
A good place to start would be removing all the parts here that don't seem
to matter. Your problem seems to be with the recursive query (since that is
the part you're changing). Cut off
-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160
...replication between PostgreSQL 9.2.4 and Oracle Database
...
We are also thinking of developing a solution based on triggers and/or WAL
Before you reinvent that first wheel, check out Bucardo, which is a
trigger-based solution that can go from
Peter Eisentraut,
My name is Minmin, and I come from China. At present, I am responsible for
translation work. On the site I saw some information about the call for
translations. I have great interest in this job
and have time to do it. I hope to have the opportunity to do this work.
Yes,
the table is bigger than 512MB.
Thanks for your tips.
Best Regards, Torello
2013/8/26 Rafael Martinez Guerrero r.m.guerr...@usit.uio.no
On 08/26/2013 06:37 PM, Torello Querci wrote:
At the moment I get this error while executing the restore of the big
table into a different
2013/8/26 Jeff Janes jeff.ja...@gmail.com
On Mon, Aug 26, 2013 at 10:01 AM, Torello Querci tque...@gmail.com
wrote:
Ok,
now the CREATE INDEX has finished using maintenance_work_mem=100MB.
Thanks to all.
I suppose that a clearer error message would have helped.
Unfortunately, since no one