[GENERAL] Adding Cyrillic support

2017-09-23 Thread Job
Dear guys, how can I add more character support to PostgreSQL 9.6? I need to also store some information in Cyrillic, for Russian users. Can more character sets coexist in the same database? Thank you! F -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to
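Cyrillic and Latin text can coexist in one database when the database encoding is UTF8. A minimal sketch (database name and locale are illustrative):

```sql
-- Create a database whose encoding covers Cyrillic and Latin alike
-- (template0 is required when the locale differs from template1's):
CREATE DATABASE appdb
    ENCODING 'UTF8'
    LC_COLLATE 'en_US.UTF-8'
    LC_CTYPE 'en_US.UTF-8'
    TEMPLATE template0;

-- Check the encoding of existing databases:
SELECT datname, pg_encoding_to_char(encoding) FROM pg_database;
```

An existing database's encoding cannot be changed in place; the usual route is a new UTF8 database plus dump and restore.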

R: [GENERAL] Insert large number of records

2017-09-20 Thread Job
scan in the master table is always sequential, descending into the partitions. Thank you again, F From: Alban Hertroys [haram...@gmail.com] Sent: Wednesday, 20 September 2017 17:50 To: Job Cc: pgsql-general@postgresql.org Subject: Re: [GENERAL] In

[GENERAL] Inserting millions of record in a partitioned Table

2017-09-20 Thread Job
Hi guys, with Postgresql 9.6.1 we need to insert, from a certain select query, some millions of records into a partitioned table. The table is partitioned by day. The data we import can often be mixed between two different days. We noticed that if we import directly into the global table
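With inheritance-style partitioning (the only kind in 9.6), inserts through the parent are usually routed row by row by a trigger; when the target day is known, inserting straight into the child partition skips that overhead. A sketch with illustrative table names:

```sql
-- Rows for each day go straight to that day's partition,
-- bypassing the routing trigger on the parent:
INSERT INTO logs_2017_09_20
SELECT * FROM staging
WHERE logdate >= DATE '2017-09-20' AND logdate < DATE '2017-09-21';

INSERT INTO logs_2017_09_21
SELECT * FROM staging
WHERE logdate >= DATE '2017-09-21' AND logdate < DATE '2017-09-22';
```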

R: [GENERAL] Insert large number of records

2017-09-19 Thread Job
Dear Alban, thank you for your helpful reply, first of all. >> On 19 Sep 2017, at 15:47, Job <j...@colliniconsulting.it> wrote: >> >> Hi guys, >> >> we need to insert from a table to another (Postgresql 9.6.1) a large amount >> of data (about 10/20 mil

[GENERAL] Insert large number of records

2017-09-19 Thread Job
Hi guys, we need to insert from one table into another (Postgresql 9.6.1) a large amount of data (about 10/20 million rows) without locking the destination table. Pg_bulkload is the fastest way but it locks the table. Are there other ways? The classic COPY FROM? Thank you! F
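Neither COPY nor a plain INSERT ... SELECT takes an exclusive lock on the destination: both acquire an ordinary row-level write lock, so concurrent reads and writes continue. A sketch (path and table names illustrative):

```sql
-- Table-to-table, entirely inside the server:
INSERT INTO destination_table SELECT * FROM source_table;

-- Or via COPY from a server-side file (needs superuser) or STDIN:
COPY destination_table FROM '/tmp/data.csv' WITH (FORMAT csv);
```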

[GENERAL] Multicolumn Index on OR conditions

2017-09-19 Thread Job
Hi, within a query with two or more conditions joined by "OR", for example "where a = 2 or b < 3": could a multi-column index (a,b) be useful to speed up the query, even though the two conditions are in "OR" and not in "AND"? Thank you! F
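A single multicolumn btree on (a,b) generally cannot serve an OR of conditions on different columns; two single-column indexes can, because the planner combines them with a BitmapOr. A sketch:

```sql
CREATE INDEX t_a_idx ON t (a);
CREATE INDEX t_b_idx ON t (b);

-- The planner can then produce a plan of the shape:
--   BitmapOr
--     -> Bitmap Index Scan on t_a_idx  (a = 2)
--     -> Bitmap Index Scan on t_b_idx  (b < 3)
EXPLAIN SELECT * FROM t WHERE a = 2 OR b < 3;
```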

[GENERAL] Aggregate query on large tables

2017-04-09 Thread Job
Hi, I have a table with about 400 million rows and I need to build some aggregate functions for reporting. I noticed that query performance is slowing down, even though indexes are present. The query is simple (I give an example; my table is in Italian): select a,sum(b) from table
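When the raw table has hundreds of millions of rows, indexes alone cannot make a full-table GROUP BY fast, because every row still has to be read. A common remedy, sketched with illustrative names, is a pre-aggregated summary table refreshed periodically:

```sql
-- One-time build of the summary:
CREATE TABLE summary AS
    SELECT a, sum(b) AS total_b
    FROM big_table
    GROUP BY a;
CREATE INDEX ON summary (a);

-- Periodic refresh:
TRUNCATE summary;
INSERT INTO summary
    SELECT a, sum(b) FROM big_table GROUP BY a;
```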

R: R: [GENERAL] Slow queries on very big (and partitioned) table

2017-02-21 Thread Job
Hi Stephen, thank you for your excellent advice! >If this is really what you're mostly doing, having constraint exclusion and an >index on 'profile' would probably be enough, if you insist on continuing to >have the table partitioned by day (which I continue to argue is a bad idea- >based on

R: [GENERAL] Search on very big (partitioned) table

2017-02-21 Thread Job
Hi Sushant, I think the problem is that we are missing check constraints. We will implement them ASAP. For the moment, thank you to everybody for the excellent support! /F

R: [GENERAL] Slow queries on very big (and partitioned) table

2017-02-20 Thread Job
Index Cond: (("timestamp")::date = '2017-02-20'::date) -> Bitmap Heap Scan on webtraffic_archive_day_2017_02_19 (cost=120.00..160.02 rows=1 width=70) (actual time=4696.922..4696.922 rows=0 loops=1) Recheck Cond: (("timestamp")::date = '

[GENERAL] Slow queries on very big (and partitioned) table

2017-02-20 Thread Job
Hi guys, we have a test machine with Postgresql 9.6.1 and about 800,000,000 records in a table. The table is partitioned by day, with indexes on the partitioned tables. Even a simple query (for example, searching the log entries of a specific day) is immediate in the partitioned table
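For partition pruning to work on such a table, each day partition needs a CHECK constraint the planner can prove against the WHERE clause, and the query must compare the raw column (a cast like ("timestamp")::date, as in the plans above, defeats the proof). A sketch using the partition name from this thread; the parent name is illustrative:

```sql
-- Each day partition carries a range CHECK constraint:
ALTER TABLE webtraffic_archive_day_2017_02_19
    ADD CONSTRAINT day_2017_02_19_check
    CHECK ("timestamp" >= '2017-02-19' AND "timestamp" < '2017-02-20');

-- With constraint exclusion on, a half-open range query
-- touches only the matching partition:
SET constraint_exclusion = partition;   -- the default
EXPLAIN SELECT * FROM webtraffic_archive
WHERE "timestamp" >= '2017-02-19' AND "timestamp" < '2017-02-20';
```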

R: [GENERAL] (solved) R: Very strange problem on index

2017-02-02 Thread Job
Hi Raymond, thank you for your appreciated feedback. Here is the original message: a really strange problem, quite near to paranormal, is occurring during a server migration. We have a table with some millions of records, perfectly working on other Postgresql 9.6.1 machines:

[GENERAL] (solved) R: Very strange problem on index

2017-02-02 Thread Job
I created a table copy, with indexes, and it worked perfectly. Then I dropped the original table and recreated it from a copy of the newest. Same problems: slowness during searches and indexes not used. I restarted the Postgresql 9.6.1 engine and now the index is working perfectly! Maybe a cache or

[GENERAL] Very strange problem on index

2017-02-02 Thread Job
Hi guys, a really strange problem, quite near to paranormal, is occurring during a server migration. We have a table with some millions of records, perfectly working on other Postgresql 9.6.1 machines: Table "public.webrecord" Column | Type

R: [GENERAL] Partitioned "views"

2017-01-22 Thread Job
Hi Stephen. >>The mat view takes longer and longer to update because it runs the full >>query. What you really want to do is have a side-table that you update >>regularly with appropriate SQL to issue UPDATE statements for just the >>current day (or whatever). If correct, I leave only the last

[GENERAL] Partitioned "views"

2017-01-22 Thread Job
Hello, we use a materialized view to aggregate data from a very big table containing logs. The source table is partitioned, one table per day. Since the refresh time of the materialized view seems to keep growing, we would like to know if it is possible to make a "partitioned

[GENERAL] Materialized view vs. view

2017-01-10 Thread Job
Hi guys, I am making some tests with a massive number of "select" queries (only for reading data) on a view and a materialized view. We use Postgresql 9.6.1 on a 64bit server. For "select" queries only, which one is faster and less expensive in resource cost? The view or the materialized
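For read-only SELECT traffic, the materialized view is the cheaper of the two: a plain view re-runs its defining query on every access, while a materialized view serves stored rows and pays only at refresh time. A sketch with illustrative names:

```sql
-- A plain view: each SELECT re-executes the aggregate below.
CREATE VIEW report_v AS
    SELECT day, count(*) AS hits FROM logs GROUP BY day;

-- A materialized view: SELECTs read precomputed rows.
CREATE MATERIALIZED VIEW report_mv AS
    SELECT day, count(*) AS hits FROM logs GROUP BY day;
CREATE UNIQUE INDEX ON report_mv (day);

-- Refresh without blocking readers (needs the unique index above):
REFRESH MATERIALIZED VIEW CONCURRENTLY report_mv;
```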

[GENERAL] Matching index for timestamp

2017-01-09 Thread Job
Hello, on this table: Table "public.gruorari_tmp" Column | Type |Modifiers ---++-- id|

R: R: [GENERAL] Postgres 9.6.1 big slowdown by upgrading 8.4.22

2017-01-09 Thread Job
Hi guys, thank you very much. Thanks to your help I solved the problem. As said by Adrian: >>Caveats, it is morning here and coffee is still brewing, but I am not >>following. The left join limits grulist.stato to NULL, 1, 2. Your first >>condition catches the 1 value. Should not the second

R: [GENERAL] Postgres 9.6.1 big slowdown by upgrading 8.4.22

2017-01-08 Thread Job
Hi, here it is, excuse me for the delay: select * from webrecord left join grucategorie on grucategorie.codcategoria=webrecord.categoria and grucategorie.codgruppo='f50132' left join grulist on grulist.nome=webrecord.dominio and grulist.codgruppo='f50132' and grulist.stato in (1,2) left join

R: [GENERAL] Postgres 9.6.1 big slowdown by upgrading 8.4.22

2017-01-08 Thread Job
I would also like to add this: the explain analyze shows that the index on that numeric field *is not* being used. I also tried setting seqscan off, but that index is still not used. Maybe this is the problem? Thank you again! /F

R: [GENERAL] Postgres 9.6.1 big slowdown by upgrading 8.4.22

2017-01-08 Thread Job
Hi guys, here is an update. I think I have found the statement that brings Postgresql 9.6.1 under heavy use of resources (CPU) in a massive benchmark. I simply tried removing one very simple condition: and (grulist.stato is null or grulist.stato!=2) The grulist.stato field is
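One way to index a condition like "stato is null or stato != 2" is a partial index covering exactly the rows the condition accepts; a plain btree on stato helps little, since the predicate is mostly a negation. A sketch (the join column nome is taken from the query quoted earlier in the thread; the index name is illustrative):

```sql
-- Index only the rows that pass the filter, keyed by the join column:
CREATE INDEX grulist_join_idx ON grulist (nome)
    WHERE stato IS NULL OR stato <> 2;
```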

R: [GENERAL] Postgres 9.6.1 big slowdown by upgrading 8.4.22

2017-01-08 Thread Job
Hi guys, >>FWIW you still haven't explained how the upgrade was performed. That might be >>a very important piece of information, because the 9.4 cluster might have >>hint bits set and/or the data may be mostly frozen, but the >>9.6 cluster may not have that yet, resulting in higher CPU usage.

R: [GENERAL] Postgres 9.6.1 big slowdown by upgrading 8.4.22

2017-01-07 Thread Job
Hi guys, first of all excuse me, I really did not explain the problem well, sorry... >>Are you being serious? You're complaining about a "big slowdown" for a query >>that goes from 1.5ms to 4ms? >>What is the actual problem you're trying to solve? Because I don't see one in >>the above. Single

R: [GENERAL] Postgres 9.6.1 big slowdown by upgrading 8.4.22

2017-01-06 Thread Job
Hi guys, your replies are really much appreciated. >> You might want to include the query plans for each server We use a function; the explain analyze is quite similar: POSTGRESQL 8.4.22: explain analyze select 'record.com' where 'record.com' like '%.%' and function_cloud_view_orari('53',

[GENERAL] Postgres 9.6.1 big slowdown by upgrading 8.4.22

2017-01-05 Thread Job
Hello guys, a very strange thing: after upgrading from 8.4.22 to 9.6.1 I noticed, under heavy benchmarks, a real slowdown of Postgresql 9.6.1, with the machine really "out of breath". Putting Postgresql 8.4.22 back, everything returns to working fine. After three days of investigation, I came

R: [GENERAL] Queries on very big table

2017-01-02 Thread Job
Hi, I am trying "materialized views" in Postgresql 9.6.1. Regarding Andy Calson's suggestion: >>I do very similar thing, log all my webstats to PG, but querying millions of >>rows is always going to be slow. I use a summary table. They seem to work fine. One question: the materialized view is

[GENERAL] Queries on very big table

2017-01-02 Thread Job
Hello guys, and a very good new year to everybody! We are now approaching some queries and statistics on a very big table (about 180 million records). The table is partitioned by day (about ~3 GB of data for every partition/day). We use Postgresql 9.6.1. I am experiencing quite important

R: [GENERAL] Special index for "like"-based query

2016-12-30 Thread Job
Hi Alban, I was wrong: I have only one column: tsrarnge. Which index can I create to make this statement fast: ... AND timestamp <@ '[2016-12-27 00:00:00, 2016-12-31 00:00:00)'::tsrange Thank you again! /F
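Assuming tsrarnge is a tsrange column, the range operators (<@, &&) are served by a GiST index, not a btree. A sketch (the table name is taken from a related thread and is illustrative):

```sql
CREATE INDEX gruorari_tsrange_idx ON gruorari_tmp USING gist (tsrarnge);

-- Rows whose range overlaps the four-day window:
SELECT * FROM gruorari_tmp
WHERE tsrarnge && '[2016-12-27 00:00:00, 2016-12-31 00:00:00)'::tsrange;
```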

R: [GENERAL] Special index for "like"-based query

2016-12-30 Thread Job
Dear Alban, regarding: >>... AND timestamp BETWEEN '2016-12-27 00:00:00'::timestamp with time zone AND >>'2016-12-30 23:59:59'::timestamp with time zone ... I think it is a very good approach, and I would like to try it. My table has two different fields for the "starting" and "ending" timestamp

R: [GENERAL] Special index for "like"-based query

2016-12-30 Thread Job
>>And, basically, if you need help with some queries you could try >>posting them whole, even redacted, along with the table defs; this way >>people can see the problem and not invent one based on a partial >>description Thank you very much, very kind of you. The index applied on the timestamp

R: [GENERAL] Special index for "like"-based query

2016-12-30 Thread Job
behalf of Job [j...@colliniconsulting.it] Sent: Friday, 30 December 2016 10:55 To: David G. Johnston Cc: pgsql-general@postgresql.org Subject: R: [GENERAL] Special index for "like"-based query >>GIST >>https://www.postgresql.org/docs/9.6/static/pgtrgm.html >>https:/

R: [GENERAL] Special index for "like"-based query

2016-12-30 Thread Job
e we can mix date and time parameters? Thank you again! /F From: David G. Johnston [david.g.johns...@gmail.com] Sent: Friday, 30 December 2016 0:33 To: Job Cc: pgsql-general@postgresql.org Subject: Re: [GENERAL] Special index for "like"-based query On Thu,

[GENERAL] Special index for "like"-based query

2016-12-29 Thread Job
Hello, in Postgresql 9.6 we have a query running on a very large table based, in some cases, on a like statement: ... WHERE FIELD='CODE' OR FIELD LIKE 'CODE_%' Which type of index can I create to speed up the search when the "like" case happens? Thank you! /F
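Two index options for the LIKE branch, sketched with illustrative table and index names: a trigram GIN index (pg_trgm) serves arbitrary patterns, while a btree with text_pattern_ops is a lighter choice for left-anchored ones like 'CODE_%':

```sql
-- Option 1: trigram GIN index, handles 'CODE_%' and '%CODE%' alike:
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX t_field_trgm_idx ON t USING gin (field gin_trgm_ops);

-- Option 2: btree with the pattern operator class,
-- for left-anchored patterns only:
CREATE INDEX t_field_pattern_idx ON t (field text_pattern_ops);
```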

[GENERAL] Postgresql 9.6 and Big Data

2016-12-02 Thread Job
Hello, we are planning to store historical data in a Postgresql 9.6 table. We see from the Postgresql limits that it can handle "big data": the limit for a single table is 32 TB. We need to archive this data to generate reports and analysis views. Has anyone experience of Postgresql 9.6 with

R: [GENERAL] CachedPlan logs until full disk

2016-12-01 Thread Job
ncesco From: Tom Lane [t...@sss.pgh.pa.us] Sent: Friday, 4 November 2016 21:24 To: Job Cc: pgsql-general@postgresql.org Subject: Re: [GENERAL] CachedPlan logs until full disk Job <j...@colliniconsulting.it> writes: > it is the second time (in two weeks),

[GENERAL] Replicating hundreds of thousands of rows

2016-11-25 Thread Job
Hello, we need to replicate hundreds of thousands of rows (for reporting) between Postgresql database nodes in different locations. Currently, we use Rubyrep with Postgresql 8.4.22. It works fine but it is very slow with a massive number of rows. With Postgresql 9.x, are there some

R: [GENERAL] Autovacuum and frequent pg_bulkload

2016-11-20 Thread Job
Hi Adrian, thank you for your fast reply! >FYI, Postgres 8.4 is over two years past EOL. Yes, I am aware; we will migrate next year, but for application compatibility reasons we have to remain here for the moment.. >What are the steps in the load/delete cycle? We need to load, with

[GENERAL] Autovacuum and frequent pg_bulkload

2016-11-20 Thread Job
Hello, I use PostgreSql 8.4.22 and, twice a day, I import logs with pg_bulkload and delete old data. I move some millions of records in a day. I noticed that autovacuum alone seems unable to free the unused space. I need a periodical vacuum full, but this operation takes many hours. Do
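DELETE only marks rows dead for vacuum to reclaim later; retiring data a whole child table at a time returns the space to the OS immediately and needs no vacuum full. A sketch, assuming one inheritance child per day (names illustrative):

```sql
-- Instead of: DELETE FROM logs WHERE logdate < now() - interval '30 days';
-- retire whole days:
TRUNCATE logs_2016_10_20;   -- space returned immediately
-- or
DROP TABLE logs_2016_10_20;
```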

[GENERAL] CachedPlan logs until full disk

2016-11-04 Thread Job
Hello guys, it is the second time (in two weeks) that I have a very strange Postgresql problem in an 8.4.22 installation (still 32 bit). The logfile grows (in a few hours) until it fills the whole disk space. I can read an infinite series of these messages: CachedPlan: 1024 total in 1 blocks; 640 free (0

[GENERAL] Query killed with Out of memory

2016-10-04 Thread Job
Hello, with a heavy query, when the number of result rows rises over 600k, the query hangs with out of memory. Here is the explain analyze: CTE Scan on lista (cost=25066.66..47721.23 rows=3678 width=260) CTE lista -> Unique (cost=24956.32..25066.66 rows=3678 width=512) -> Sort

[GENERAL] Out of memory in pg_bulkload

2016-09-29 Thread Job
Hello, sometimes, in the pg_bulkload log, I see an "out of memory" error. We use Postgresql 8.4.8 with pg_bulkload 3.1, but we have at least 2 GB of free memory and the machine does not use swap. We notice that it happens only sometimes; on other runs it works fine, with a higher number

R: [GENERAL] Vacuum full: alternatives?

2016-06-22 Thread Job
Excellent, Scott! Thank you! Francesco From: Scott Marlowe [scott.marl...@gmail.com] Sent: Tuesday, 21 June 2016 2:06 To: Job Cc: pgsql-general@postgresql.org Subject: Re: [GENERAL] Vacuum full: alternatives? On Mon, Jun 20, 2016 at 3:18 AM, Job &l

R: R: [GENERAL] Vacuum full: alternatives?

2016-06-21 Thread Job
be slower, but free-marked space should be reused again? Thank you! Francesco From: Jeff Janes [jeff.ja...@gmail.com] Sent: Monday, 20 June 2016 17:51 To: Martín Marqués Cc: Melvin Davidson; Rakesh Kumar; Job; pgsql-general@postgresql.org Subject: Re: R:

R: [GENERAL] Vacuum full: alternatives?

2016-06-20 Thread Job
__ From: pgsql-general-ow...@postgresql.org [pgsql-general-ow...@postgresql.org] on behalf of Andreas Kretschmer [andr...@a-kretschmer.de] Sent: Monday, 20 June 2016 11:37 To: pgsql-general@postgresql.org Subject: Re: [GENERAL] Vacuum full: alternatives? On 20.06.2016 at 11:18, Job wrote: >

R: [GENERAL] Vacuum full: alternatives?

2016-06-20 Thread Job
[rakeshkumar46...@gmail.com] Sent: Monday, 20 June 2016 11:34 To: pgsql-general@postgresql.org Subject: Re: [GENERAL] Vacuum full: alternatives? Any reason why you need the space back? What is wrong with the space remaining constant at 4GB. From: Job &l

[GENERAL] Vacuum full: alternatives?

2016-06-20 Thread Job
Hello, we have a table with heavy traffic of pg_bulkload loads and deletes of records. The size goes, in only one day, from for example 1 GB to 4 GB and then back to 1 GB. We have important problems with size, and the only way to regain free space is issuing a vacuum full. But the operation is very slow,
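One alternative to VACUUM FULL is rebuilding the table compactly and swapping it in; the pg_repack extension automates an online variant of the same idea. A sketch with illustrative names:

```sql
BEGIN;
CREATE TABLE t_new (LIKE t INCLUDING ALL);   -- same columns, indexes, defaults
INSERT INTO t_new SELECT * FROM t;
ALTER TABLE t     RENAME TO t_old;
ALTER TABLE t_new RENAME TO t;
COMMIT;
DROP TABLE t_old;   -- space returned to the OS
-- Caveat: writes arriving between the copy and the swap are lost,
-- so this needs a window without concurrent writes.
```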

[GENERAL] R: Hot disable WAL archiving

2016-06-17 Thread Job
Hi Albe, and thank you, first of all. Is there a way to disable WAL logging only for a specific table, permanently? Thank you again! Francesco From: Albe Laurenz [laurenz.a...@wien.gv.at] Sent: Friday, 17 June 2016 13:48 To: Job; pgsql-general

[GENERAL] Hot disable WAL archiving

2016-06-17 Thread Job
Hello, is there a way in Postgresql-9.5 to temporarily disable WAL archiving, to speed up pg_bulkload, without restarting the database engine? Thank you, Francesco
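In 9.5, archive_mode itself can only be changed with a restart, but a load can sidestep WAL altogether by going through an UNLOGGED staging table. A sketch under that assumption (names illustrative):

```sql
-- No WAL is written for rows in an unlogged table:
CREATE UNLOGGED TABLE staging (LIKE target INCLUDING DEFAULTS);
-- ... run pg_bulkload (or COPY) into staging ...
INSERT INTO target SELECT * FROM staging;
DROP TABLE staging;
-- Caveat: unlogged tables are emptied after a crash
-- and are not replicated to standbys.
```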

[GENERAL] Pg_bulkload for PostgreSql 9.5

2016-06-17 Thread Job
Hello, I have some problems compiling pg_bulkload-3.1.8 on CentOS 5 with Postgresql 9.5. If I use a previous version of Postgresql, it compiles and works. Thank you if you can help me. Francesco

R: [GENERAL] Index on integer or on string field

2015-05-15 Thread Job
: Arthur Silva [arthur...@gmail.com] Sent: Friday, 15 May 2015 17:26 To: Job Cc: pgsql-general@postgresql.org Subject: Re: [GENERAL] Index on integer or on string field You should probably experiment with a btree-gin index on those. On 15/05/2015 12:22, Job j...@colliniconsulting.itmailto:j

[GENERAL] Index on integer or on string field

2015-05-15 Thread Job
Hello, I have a table of about 10 million records, with an index on a string field. Currently it is alphabetical; since queries run at about 100/200 per second, I was looking for a better way to improve performance and reduce workload. The unique values of that field are about the 50

[GENERAL] Pg_bulkload and speed

2015-04-29 Thread Job
Hello! To speed up the loading of a table of about 4 fields with 10 million records, are there some parameters to optimize? On one machine it takes about 15 minutes, on other machines about one hour. Is parallel mode better than direct mode? Other interesting things? Thank you in advance

[GENERAL] DB on mSATA SSD

2015-04-23 Thread Job
Dear Postgresql mailing list, we use Postgresql 8.4.x on our Linux firewall distribution. We are currently moving from standard SATA disks to mSATA SSD solid-state drives, and we noticed that the DB, using lots of indexes, writes a lot. In some months, two test machines had their SSD break, and we are

R: [GENERAL] DB on mSATA SSD

2015-04-23 Thread Job
Hello, thank you first of all for your wonderful help! Tomas, regarding: "There are ways to make the writes less frequent, both at the database and OS level. We don't know what your PostgreSQL config is, but making the checkpoints less frequent and tuning the kernel/mount options may help a lot." We
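On the database side, the knobs Tomas alludes to are mainly the checkpoint settings; a sketch of 8.4-era postgresql.conf values (illustrative, to be tuned per workload):

```
# postgresql.conf -- fewer, more spread-out checkpoints mean fewer writes
checkpoint_segments = 32            # default 3
checkpoint_timeout = 30min          # default 5min
checkpoint_completion_target = 0.9  # spread checkpoint I/O out
synchronous_commit = off            # risks losing the last commits on
                                    # crash, but never corrupts data
```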

[GENERAL] DB Connections

2015-03-13 Thread Job
Hello, I have just installed (on a CentOS 6.5 x64 machine) the latest stable version of Postgresql 9. The application I use needs lots of static DB connections: in version 9, how many connections can be declared in postgresql.conf? I searched for a maximum number but noticed it is not specified.
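max_connections has no hard-coded ceiling; it is simply a setting whose practical limit is the memory each backend consumes. A sketch (the value 500 is illustrative):

```sql
SHOW max_connections;                    -- default is 100

-- In postgresql.conf:  max_connections = 500
-- or, on 9.4 and later:
ALTER SYSTEM SET max_connections = 500;  -- takes effect after a restart
```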

[GENERAL] Best replication options

2004-02-15 Thread Steven job
What is the best replication option out there today for PostgreSQL? I have been trying to get rserv working and that hasn't been the easiest. I would like to run 7.4.1. Any suggestions? -Steve ---(end of broadcast)--- TIP 3: if posting/reading