iously
executed as part of a given transaction?
If I could get that, it would be a hell of a lot easier to figure out the
misbehaving client.
Thanks for any pointers,
Nicolas
Every day I discover PostgreSQL's new features. Today: make women happy :)
2015-03-25 4:27 GMT+01:00 :
> Successfully loaded two files into two different tables. Happy. :-)
>
> Diana
>
> > Yes, it is a header in the .csv file. I did not know that there is such
> an
> > option as specifying WITH
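For reference, a minimal COPY sketch with the HEADER option that skips such a header line (table, columns and path below are made-up names):

  COPY staff (userid, username, staffid)
  FROM '/tmp/staff.csv'
  WITH (FORMAT csv, HEADER true);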
Change OS, or change GUI tool
https://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools
2015-03-29 9:59 GMT+02:00 John R Pierce :
> On 3/29/2015 12:48 AM, Yuri Budilov wrote:
>
>> Red Hat/Oracle Linux 6.x
>>
>
> is that anything like Ford/Chevy ?
>
> Oracle Linux, while originally
2015-05-20 22:16 GMT+02:00 Stefan Stefanov :
> Hi,
>
> I have been using COPY .. FROM a lot these days for reading in tabular
> data and it does a very good job. Still there is an inconvenience when a
> (large) text file contains more columns than the target table or the
> columns' order differs
the first, second and seventh columns from myfile.txt into
> table "stafflist". myfile.txt has many columns.
> COPY stafflist (userid, username, staffid)
> FROM 'myfile.txt'
> WITH (FORMAT text, DELIMITER E'\t', COLUMNS (1, 2, 7), ENCODING
> 'win
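The COLUMNS option sketched above is a wished-for feature rather than an existing COPY option; a workaround today is to load the whole file into a staging table and pick the wanted columns from there (a sketch, assuming a 7-column file):

  CREATE TEMP TABLE staging (c1 text, c2 text, c3 text, c4 text, c5 text, c6 text, c7 text);
  COPY staging FROM 'myfile.txt' WITH (FORMAT text, DELIMITER E'\t');
  -- keep only the first, second and seventh columns
  INSERT INTO stafflist (userid, username, staffid)
  SELECT c1, c2, c7 FROM staging;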
2015-06-10 17:43 GMT+02:00 Kevin Grittner :
> inspector morse wrote:
>
> > After doing that, if you add or delete a topic from the Topics
> > Table, SQL Server automatically keeps the count updated, and
> > it's fast because of the unique index.
> >
> > Doing the same thing in Postgresql using
Are you using docker on centos? I had a problem with
centos/docker/postgresql because the container size was (maybe still is)
limited to 20GB on that specific OS. Maybe not related, but good to know.
2015-10-03 0:03 GMT+02:00 John R Pierce :
> On 10/2/2015 2:02 PM, Paolo De Michele wrote:
>
>> exec s
2015-10-19 0:08 GMT+02:00 dinesh kumar :
> On Sun, Oct 18, 2015 at 7:04 AM, wrote:
>
>> Hello
>>
>> Is anyone aware of any tools like TOAD that are available for Postgresql?
>>
>>
> PgAdmin fits here.
>
>
>> Regards
>>
>> John Wiencek
>>
>
>
>
> --
>
> Regards,
> Dinesh
> manojadinesh.blogspot.co
Yes, moreover it provides SQL auto-completion (Ctrl+Space), query history,
The Eclipse workbench is easy, fast and responsive.
I wish I had known about DBeaver before.
2015-10-19 14:49 GMT+02:00 Yves Dorfsman :
> On 2015-10-18 16:37, Nicolas Paris wrote:
> >
> > I didn't know DBea
2015-11-11 10:44 GMT+01:00 Dusan :
> Hi,
> I'm using a table with a parent_id referencing itself and WITH RECURSIVE in a SELECT on
> about 3 thousand records.
> The "tree" of data is wide (each node has many children) but not deep
> (maximum depth of a branch is 10 nodes).
>
> I'm planning to use same schema on
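For context, a recursive query over such a self-referencing table typically looks like this (table and column names are assumptions):

  WITH RECURSIVE tree AS (
      SELECT id, parent_id, 1 AS depth
      FROM nodes
      WHERE parent_id IS NULL            -- start from the roots
      UNION ALL
      SELECT n.id, n.parent_id, t.depth + 1
      FROM nodes n
      JOIN tree t ON n.parent_id = t.id  -- walk down one level per iteration
  )
  SELECT * FROM tree;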
rows, but most of the time, only one row will match the
tenant ID, other rows belonging to other tenants).
A few questions:
- Am I missing something?
- Am I overestimating the benefit of a clustered index in our case, and the
cost of not having one in PostgreSQL?
- Is there another technical so
On Tue, Aug 30, 2016 at 7:26 PM, Vick Khera wrote:
> I'll assume you have an index on the tenant ID. In that case, your
> queries will be pretty fast.
>
> On some instances, we have multi-column indexes starting with the
> tenant ID, and those are used very effectively as well.
>
> I never worry
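A sketch of such a multi-column index, with the tenant ID leading so every per-tenant lookup can use it (table and column names are assumptions):

  CREATE INDEX items_tenant_created_idx ON items (tenant_id, created_at);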
On Tue, Aug 30, 2016 at 8:17 PM, Kenneth Marshall wrote:
> We have been using the extension pg_repack to keep a table groomed into
> cluster order. With an appropriate FILLFACTOR to keep updates on the same
> page, it works well. The issue is that it needs space to rebuild the new
> index/table.
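A possible sketch of that setup, assuming pg_repack is installed and invoked as its documentation describes (table name is an assumption):

  -- leave free space on each page so updates can stay on the same page
  ALTER TABLE items SET (fillfactor = 70);
  -- then, from the shell (assumed invocation):
  --   pg_repack --table=items --order-by=tenant_id mydb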
Eduardo Morras wrote:
> Check BRIN indexs, they are "designed for handling very large tables in
> which certain columns have some natural correlation with their physical
> location within the table", I think they fit your needs.
Yes, a BRIN index on the tenant ID would be very useful if the row
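A minimal BRIN sketch (names assumed); it only helps if the tenant ID correlates with the physical row order, which is exactly what the repacking discussed above tries to maintain:

  CREATE INDEX items_tenant_brin ON items USING brin (tenant_id);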
s. If you add rows only
> rarely but then do lots of updates, then the clustering would work great.
> If this is an active real time data table, then clustering would not be
> viable.
>
The application is very interactive and new rows are inserted all the time
in my use case.
Thanks for your time,
Nicolas
On Wed, Aug 31, 2016 at 6:05 PM, Kenneth Marshall wrote:
> We just run it via cron. In our case, we run it once a day, but depending
> on
> your churn, it could be run once a week or more.
>
Could you provide some numbers: what is the size of the table or tables
that are repacked? How long does
postgres-5-tips-from
This is very similar to what I'm trying to achieve.
The article is 3 years old. I'd be curious to know if they still do that.
Nicolas
On Thu, Sep 1, 2016 at 12:05 AM, Ben Chobot wrote:
> If what they did 3 years ago is similar to what you are trying to do
> today, who cares what they are doing today? (Besides using pg_repack
> instead of pg_reorg, of course.)
>
I'm curious because, in the meantime, Instagram could have stopped
On Thu, Sep 1, 2016 at 12:31 AM, Nicolas Grilly
wrote:
> In DB2, it seems possible to define a "clustering index" that determines
> how rows are physically ordered in the "table space" (the heap).
>
> The documentation says: "When a table has a clustering in
On Thu, Sep 1, 2016 at 3:08 PM, Igor Neyman wrote:
> Don’t know about plans to implement clustered indexes in PostgreSQL.
>
It was discussed on the mailing list in the past.
I found an interesting thread dated from 2012 about integrating pg_reorg
(the ancestor of pg_repack) in PostgreSQL core:
Hello,
Has anyone already tested integrating presto (https://prestodb.io/) within
postgresql through the postgres_fdw extension ?
Presto is a distributed SQL query engine able to scale horizontally on top
of hadoop, cassandra or mongodb.
Moreover, presto has a PostgreSQL protocol (
https://githu
On Tue, Aug 30, 2016 at 1:10 PM, Nicolas Grilly
wrote:
> We are developing a multitenant application which is currently based on
> MySQL, but we're thinking of migrating to PostgreSQL.
>
> We rely on clustered indexes to preserve data locality for each tenant.
> Primary
On Thu, Sep 8, 2016 at 2:35 AM, dandl wrote:
> I understand that. What I'm trying to get a handle on is the magnitude of
> that cost and how it influences other parts of the product, specifically
> for Postgres. If the overhead for perfect durability were (say) 10%, few
> people would care about
Hi,
You could run 2 queries separately and asynchronously:
1) the limit 10
2) the count
While the limit 10 query would be shown instantaneously, the web table
would wait for the count to build the pagination.
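A sketch of those two independent queries (table name assumed):

  -- fired first, renders the visible page immediately
  SELECT * FROM orders ORDER BY created_at DESC LIMIT 10;
  -- fired asynchronously, fills in the pagination once it returns
  SELECT count(*) FROM orders;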
On Mon, Sep 26, 2016 at 20:59, Leonardo M. Ramé
wrote:
> Hi, I'm using a query t
Hello,
I want to minimize the size of the JSON produced by PostgreSQL when I fetch it.
I translate columnar tables to JSON through json_build_object/array or even
row_to_json.
While row_to_json does have a "pretty_bool" option, json_build_object/array
do not. Each JSON object/array I build contains spaces.
Is there a workaround ?
I
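One partial workaround, assuming 9.4+ where jsonb is available: jsonb's text output drops the extra spaces around the colons that json_build_object emits, so building (or casting to) jsonb shrinks the payload a little:

  SELECT jsonb_build_object('id', id, 'name', name) AS doc
  FROM users;   -- users, id and name are illustrative names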
Hello,
I have a 9.6 pg instance, and I am trying to link a foreign postgresql
database that does not accept extended queries (only simple queries,
https://www.postgresql.org/docs/current/static/protocol.html ).
When I run a query against the foreign pg instance through postgres_fdw, it
looks like it
2016-10-24 10:36 GMT+02:00 Albe Laurenz :
> Nicolas Paris wrote:
> > I have a 9.6 pg instance, and I am trying to link a foreign postgresql
> database that do not accept
> > extended queries. (only simple queries https://www.postgresql.org/
> docs/current/static/protocol.html
2016-12-29 1:03 GMT+01:00 Rich Shepard :
> On Wed, 28 Dec 2016, Adrian Klaver wrote:
>
> An example from my machine that works:
>> aklaver@tito:~/bin> java -jar schemaSpy_5.0.0.jar -t pgsql -s public -u
>> postgres -db production -host localhost -dp
>> /home/aklaver/bin/postgresql-9.4.1212.jre6
On Jan 10, 2017 at 21:33, David G. Johnston wrote:
>On Tue, Jan 10, 2017 at 1:01 PM, Melvin Davidson
><[1]melvin6...@gmail.com> wrote:
>
>Can we all agree that the "Materialized View" should be faster
>
>
Yes.
The OP was talking about a 500K-row view. Every select query on that view
Hello,
In PostgreSQL, the order of columns has a non-negligible impact on table
size[1].
Tables are in many cases dynamic, and new fields can appear during the database's life.
I suspect re-ordering columns based on their types would be an automatable
task and would be feasible, such as:
```
reorderTableWithTe
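-- As a starting point for such a tool, the catalogs already expose enough to
-- order columns by alignment/size; a heuristic sketch ('mytable' is a placeholder):
SELECT a.attname, t.typname, t.typalign, t.typlen
FROM pg_attribute a
JOIN pg_type t ON t.oid = a.atttypid
WHERE a.attrelid = 'mytable'::regclass
  AND a.attnum > 0
  AND NOT a.attisdropped
ORDER BY t.typlen DESC;   -- wide fixed-width types first, variable-length (-1) last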
On Feb 2, 2017 at 20:00, Rob Nikander wrote:
> Hi,
>
> I'm working on a project with multiple different data storage backends. I'd
> like to consolidate and use Postgres for more things. In one new situation
> we're starting to use Redis, thinking it will perform better than Postgres for
> a
2015-12-03 7:12 GMT+01:00 Kaushal Shriyan :
> Hi,
>
> Are there any scripts which will diff two pg_dump files for t1 and t2 time
> period. For example pg_dump taken on t1 -> 01/11/2015 and then on t2 ->
> 30/11/2015.
>
> backup_01112015.dump (dump taken on 01/11/2015)
> backup_30112015.dump (dump
Hi,
I guess capture groups will help you. Look at
http://www.postgresql.org/docs/9.0/static/functions-matching.html
SELECT regexp_replace('http://test.com/test/testfile.php',
'^(.*)/(.*\.php)$', E'\\1&file=\\2', 'g')
2015-12-09 22:58 GMT+01:00 Christopher Molnar :
> Hello,
>
> I am running into a problem
I recently tried many tools, and "sql power architect" is the tool I
have selected.
It is compatible with liquibase, and works with postgres, mysql, oracle
and so on.
It allows comparing structure differences between databases. Moreover
it has a community free edition that covers my needs.
2016-01-
Hello,
Is there any way for a client to know if a conflict happened in an ON
CONFLICT DO UPDATE query ?
Thanks !
--
Nicolas "Pause" ALBEZA
Hi,
I wonder why the third query returns 0.
To me, it would return 0.1, because there is not baz in the text
Thanks !
(pg 9.4)
SELECT ts_rank_cd(apod.t, query,4) AS rank
FROM (SELECT to_tsvector('foo baz') as t) as apod, to_tsquery('foo & baz')
query
WHERE query @@ apod.t;
rank|
-
Hello,
Documentation says : (
http://www.postgresql.org/docs/9.5/static/textsearch-controls.html#TEXTSEARCH-RANKING
)
"The built-in ranking functions are only examples. You can write your own
ranking functions and/or combine their results with additional factors to
fit your specific needs."
The b
Thanks Oleg, this is a good start for me
2016-05-03 15:47 GMT+02:00 Oleg Bartunov :
>
>
> On Tue, May 3, 2016 at 3:21 PM, Nicolas Paris wrote:
>
>> Hello,
>>
>> Documentation says : (
>> http://www.postgresql.org/docs/9.5/static/textsearch-controls.html#TEXT
Hello,
What is the way to build a binary format (instead of a CSV)? Is there a
specification for this file?
http://www.postgresql.org/docs/9.5/static/sql-copy.html
Could I create such a format from Java?
I guess this would be far faster, and maybe safer, than CSVs.
Thanks in advance,
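On the SQL side the binary path is just a COPY option; the on-disk layout is described on the same documentation page (table and file names assumed):

  COPY mytable FROM '/tmp/mytable.bin' WITH (FORMAT binary);
  -- or, when streaming from a client:
  -- COPY mytable FROM STDIN WITH (FORMAT binary);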
2016-05-10 13:04 GMT+02:00 Moreno Andreo :
> Il 10/05/2016 12:56, Nicolas Paris ha scritto:
>
> Hello,
>
> What is the way to build a binary format (instead of a csv) ? Is there
> specification for this file ?
> http://www.postgresql.org/docs/9.5/static/sql-copy.html
>
2016-05-10 14:47 GMT+02:00 Moreno Andreo :
> Il 10/05/2016 13:38, Nicolas Paris ha scritto:
>
> 2016-05-10 13:04 GMT+02:00 Moreno Andreo :
>
>> Il 10/05/2016 12:56, Nicolas Paris ha scritto:
>>
>> Hello,
>>
>> What is the way to build a binary format (ins
washington/escience/myria/PostgresBinaryTupleWriter.java
-
https://github.com/bytefish/PgBulkInsert/tree/master/PgBulkInsert/src/main/de/bytefish/pgbulkinsert/pgsql/handlers
Thanks,
2016-05-10 15:08 GMT+02:00 Cat :
> On Tue, May 10, 2016 at 03:00:55PM +0200, Nicolas Paris wrote:
> >
8.1 function in 8.4?
Thanks in advance for any help you can give me!
Cheers,
Nicolas.
n any kind of refactoring,
as plans are underway to develop a new system. What we are looking for is
just a quick fix, if there's such a thing out there!
Any thoughts?
Original Message --
>Date: Tue, 11 Jan 2011 10:25:59 +1100
>From: Craig Ringer
>To: Nicolas Garfinkiel
>CC: pgsql-g
your help and advice.
Regards,
Nicolas Grilly
and is there a
way to be notified of a potential error before calling PQputCopyEnd? Or do I
have to send my data in small chunks (for example batch of 1
rows), issue a PQputCopyEnd, check for errors, and continue with the next
chunk?
Thanks for your help and advice.
Regards,
Nicolas Grilly
ate. Maybe we can make a special case for the COPY FROM
subprotocol and handle errors early, in order to make them available to
PQgetResult? Is it feasible in a simple way or is it a bad idea?
Regards,
Nicolas Grilly
On Wed, Feb 2, 2011 at 20:06, John R Pierce wrote:
> On 02/02/11 10:20 AM, N
Maybe you can prioritize your worker with ionice?
- Original Message -
From: "Mike Christensen"
To: "Prabhjot Sheena"
Cc: pgsql-ad...@postgresql.org, "Forums postgresql"
Sent: Monday, July 7, 2014 16:15:18
Subject: Re: [ADMIN] [GENERAL] WARNING: database must be vacuumed within 8439472
tran
PY 10" (when 10 rows
inserted), I get with psql way, or with pgadmin.
I have tried the int Statement.executeUpdate() method, but it returns 0
Thanks for any help !
Nicolas PARIS
BulkExec.git
I guess it is faster to read from STDIN than to export the file on the
remote server and then COPY FROM "file exported on the remote server",
isn't it?
Nicolas PARIS
2015-02-07 16:40 GMT+01:00 Thomas Kellerer :
> Nicolas Paris wrote on 07.02.2015 15:14:
>
> Hel
Hello,
Try this in psql:
update pg_database set encoding = pg_char_to_encoding('your_encoding')
where datname = 'your_data_base';
Works for postgres 9.3
Nicolas PARIS
2015-02-09 9:11 GMT+01:00 Oliver :
> 2015-02-09 7:54 GMT+00:00 Oliver :
>
>> 2015-02-0
Hello,
AFAIK there is no built-in way to combine full text search and fuzzy matching
(https://www.postgresql.org/docs/current/static/fuzzystrmatch.html).
For example, phrase searching with typos in it.
First, I don't know if PostgreSQL's competitors (Lucene-based...) are able
to do so.
Second, is su
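For the fuzzy half on its own, a pg_trgm sketch (contrib extension; table and column names are assumptions), typo-tolerant on short fields:

  CREATE EXTENSION IF NOT EXISTS pg_trgm;
  CREATE INDEX docs_title_trgm ON docs USING gin (title gin_trgm_ops);
  -- similarity search that tolerates a typo in the search term
  SELECT title, similarity(title, 'serach term') AS sim
  FROM docs
  WHERE title % 'serach term'
  ORDER BY sim DESC;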
On Feb 27, 2017 at 10:32, Oleg Bartunov wrote:
>
>
> On Sun, Feb 26, 2017 at 3:52 PM, Nicolas Paris wrote:
>
> Hello,
>
> AFAIK there is no built-in way to combine full text search and fuzzy
> matching
> (https://www.postgresql.org/docs/curren
On Mar 3, 2017 at 14:08, Artur Zakirov wrote:
> On 03.03.2017 15:49, Nicolas Paris wrote:
> >
> >Hi Oleg,
> >
> >Thanks. I thought pgtrgm was not able to index my long texts because of
> >limitation of 8191 bytes per index row for btree.
> >
> >
On Apr 9, 2017 at 05:31, Steve Petrie, P.Eng. wrote:
> Warm Greetings To pgsql-general@postgresql.org
>
> (I am a very newbie user of PG for a pretty trivial PHP / SQL web app. Been
> lurking with great admiration for a long time, on the
> pgsql-general@postgresql.org discussion list channel.
On Apr 23, 2017 at 12:48, Ertan Küçükoğlu wrote:
> Hello All,
>
> Using PostgreSQL 9.6.2 on a Windows 64bit platform.
>
> I am about to start a new software development dealing with warehouse
> operations. Software should handle multi-company structure. There will be
> single company startin
Hi,
I have dumps from oracle and microsoft sql server (no more details). Is it
possible to load them "directly" into postgres (without oracle/mssql
license)?
dump -> csv -> postgres
or something ?
Thanks a lot
:
> On 2017-05-31 16:43, Nicolas Paris wrote:
>
>> Hi,
>>
>> I have dumps from oracle and microsoft sql server (no more details).
>> Is it possible to load them "directly" into postgres (without
>> oracle/mssql license)?
>>
>> dump -
> If they aren't too big, you might get away by installing the express edition
> of the respective DBMS, then import them using the native tools, then export
> the data as CSV files.
Thanks Thomas. Both are binaries. The Oracle one is a 30 TB database...
> Or spin up an AWS SQL Server instance:
>
> https://aws.amazon.com/windows/resources/amis/
>
Thanks for the suggestion. The problem is that the data is highly sensitive
and cannot go to the cloud or any untrusted place.
com/
http://www.google.com/reader/
http://www.google.fr/
http://www.postgresql.org/
https://gmail.com/
https://mail.google.com/mail/
https://www.sixxs.net/
Thanks,
--
Nicolas
Martijn van Oosterhout <[EMAIL PROTECTED]> writes:
> On Fri, Feb 01, 2008 at 11:06:07AM +0100, Nicolas KOWALSKI wrote:
>>
>> I do not understand why the following ORDER BY statement does not work
>> as I would expect:
>>
>> 3) When I want to sort them, I g
b --locale=C
Thanks Tom, using the C locale as indicated gets this right in our
database.
Best regards,
--
Nicolas
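For reference, on 9.1 and later a per-expression collation can get the same ordering without re-running initdb (table and column names assumed):

  SELECT url FROM links ORDER BY url COLLATE "C";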
ql.org/
https://gmail.com/
https://mail.google.com/mail/
https://www.sixxs.net/
(8 rows)
Best regards,
--
Nicolas
chine!
I'm afraid this is because ts_rank needs to read document_vector, and
because that column is stored in a TOAST table, it triggers a random
access for each matching row. Am I correct? Is it the expected
behavior? Is there a way to reduce the execution time?
I use PostgreSQL 8.4
On Tue, Jul 12, 2011 at 22:25, Oleg Bartunov wrote:
> I don't see your query uses index :)
Yes, I know. :)
I ran VACUUM ANALYZE and re-ran the query but the output of EXPLAIN
ANALYZE stays exactly the same: no index used.
Any idea why?
By the way, is ts_rank supposed to use a GIN index wh
ment_vector)
Total runtime: 296220.493 ms
>> By the way, is ts_rank supposed to use a GIN index when it's
>> available?
>
> no, I see no benefit :)
Ok. But what is the solution to improve ts_rank execution time? Am I
doing something wrong?
Thanks for your help,
Nicolas
...
> :( The only solution I see is to store enough information for ranking in
> index.
Is it the expected behavior? How can I improve that?
Thanks,
Nicolas
ltext internals.
On Wed, Mar 7, 2012 at 8:05 PM, Nicolas Grilly wrote:
> In a previous discussion thread, Oleg suggested that ts_rank is unable to
> use GIN indices:
> http://archives.postgresql.org/pgsql-general/2011-07/msg00351.php
>
> This is the only information I have about this.
>
tion_statement to 2500;
SET
postgres=# SHOW log_min_duration_statement ;
 log_min_duration_statement
----------------------------
 2500ms
(1 row)
Am I missing something ?
Thanks for your help !
Nico
--
Nicolas PAYART
Administrateur de bases de données
Benchmark Group
Atalis 2 - Bât D
3, rue de Paris
35510 Cesson
t; so i need self-written postprocessing of query to replace OR with AND.
>
> --
> Regards,
> Andrey
>
is stored in TOAST table, which triggers a random read for
each ranked document.
Cheers,
Nicolas Grilly
On Wed, Jul 13, 2011 at 18:55, Nicolas Grilly wrote:
> The first query ran in 347 seconds; the second one in 374 seconds.
> Conclusion: There is no significant overhead in the ts_rank
In a previous discussion thread, Oleg suggested that ts_rank is unable to
use GIN indices:
http://archives.postgresql.org/pgsql-general/2011-07/msg00351.php
This is the only information I have about this.
On Wed, Mar 7, 2012 at 18:59, Andrey Chursin wrote:
> Is there any way to sort by ranking,
Hi,
I just wanted to know if there is a specific version of PostgreSQL for
64-bit CPUs (AMD/Intel) on a platform like Linux or Windows XP 64.
Thanks
; 684533
> >
> > and for 8.0.X I get:
> >
> > 648130
--
Nicolas Barbier
http://www.gnu.org/philosophy/no-word-attachments.html
thers will keep
ignoring its rows. There is nothing to rollback here, thanks to MVCC.
Of course, those rows will still be physically present until the next
VACUUM.
--
Nicolas Barbier
http://www.gnu.org/philosophy/no-word-attachments.html
he expressions are actually not using ::text
anymore? Normally they will keep the ::text, and only to change to the
new system should you use the script. Most people *do* want the new
behavior though, so run it :-).
greetings,
Nicolas
--
Nicolas Barbier
http://www.gnu.org/philosop
ulating all aggregations in one run for ROLLUP (instead
of doing multiple scans).
greetings,
Nicolas
--
Nicolas Barbier
http://www.gnu.org/philosophy/no-word-attachments.html
2007/1/21, Shashank Tripathi <[EMAIL PROTECTED]>:
For all its flaws, MySQL is catching on quick and has a very active
community of developments that several of us find rather handy -
http://forge.mysql.com/
Is there something similar for Pgsql?
http://pgfoundry.org/>
greetings
2007/1/19, Paul Lambert <[EMAIL PROTECTED]>:
A number of months ago I was pointed towards Postgre as a reliable database
server
Please don't use the word Postgre:
http://stoned.homeunix.org/~itsme/postgre/>.
greetings,
Nicolas
--
Nicolas Barbier
http://www.gnu.org/phil
d?
Thanks for your help.
Nicolas Gignac
Thanks. Finally, I discovered one line that had not been uncommented, a stupid typo.
Nicolas
2007/2/7, Nicolas Gignac <[EMAIL PROTECTED]>:
Hello,
I have installed Postgres 8.2 on an internal server with Windows Server
2003 (IIS 6) up and running.
- I have configured the hp_config file to: ho
u are supposed to see those indexes. Try "\d tablename" in psql. It
should give you a bunch of information, including something like:
Indexes:
"tablename_pkey" PRIMARY KEY, btree (keyfieldname)
Where tablename is your table's name, and keyfieldname the name of the
column th
I have the same problem!
When I set up Postgres 8.0 Beta 4 on a Windows XP or 2003 Server, it works
perfectly with the parameter listen_addresses set to '*' or localhost.
I have been testing Beta5, RC1 and RC2 on my XP workstation and there is no
problem, even if I accept external connections ( listen
I can't find any solution.
Is it a bug or a config problem ?
" but SELECT
> count("ID") from "XYZ" still takes 35 seconds*. (ID is the primary key
> basing on a sequence, select count(*) isn't faster.)
I would like to redirect you to the zillions of mailing list posts
about this subject :-).
> So - what kind of inde
icipating in Google Summer of Code 2006, perhaps
the GnuTLS support could be a student's project.
--
Nicolas Baradakis
Martijn van Oosterhout wrote:
> On Sat, Apr 22, 2006 at 01:10:34AM +0200, Nicolas Baradakis wrote:
> > As PostgreSQL is participating in Google Summer of Code 2006, perhaps
> > the GnuTLS support could be a student's project.
>
> Before someone runs off to consider this
.postgresql.org/docs/whatsnew>
> who is the leading person on postgre?
http://www.postgresql.org/developer/bios>
Greetings,
Nicolas
--
Nicolas Barbier
http://www.gnu.org/philosophy/no-word-attachments.html
2006/4/24, Dany De Bontridder <[EMAIL PROTECTED]>:
> and the "select count(*)" will be able to use index scan (faster) (in version
> 8.1 ?)
No, it won't.
--
Nicolas Barbier
http://www.gnu.org/philosophy/no-word-attachments.html
Hi,
I have to set up a replication database from a large production
database on a new server, using Slony.
As the tables I have to replicate have several million rows, I tried to
dump the entire database from the master and restore it as a slave
database before setting up Slony (in a developpemen
nd for "binary".
In the latter: It won't, because the splitting mechanism will never
result in an almost-empty leaf. That can only be caused by deletions.
greetings,
Nicolas
--
Nicolas Barbier
http://www.gnu.org/philosophy/no-word-attachments.html
> -Original Message-
> From: Lehmeier, Michael [SMTP:[EMAIL PROTECTED]]
> Date: Thursday, June 7, 2001 18:06
> Subject: [GENERAL] Format of BOOLEAN
>
> testdb=# SELECT * FROM testtable WHERE acolumn = t;
> ERROR: Attribute 't' not found
testdb=# SELECT * FROM testtable WHERE acolumn =
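The error comes from the unquoted t being parsed as a column name; the boolean literal needs quotes or the keyword form:

  SELECT * FROM testtable WHERE acolumn = 't';
  -- or
  SELECT * FROM testtable WHERE acolumn = true;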
With the patch from Alex Pilosov (7.1), I was able to create external functions (6.5)
to cast TEXT type to CIDR.
These functions allow things like:
SELECT text_inet(text_field);
SELECT ... FROM ... WHERE text_cidr(text_field) >> '192.168.200.1'::inet;
...which are impossible with
Hi everyone,
I have a big problem, I think it's a syntactical one, but I can't solve it,
please help!
What I am trying to do is a simple function that updates data in a table and,
if no row was updated, adds a new one with the specified parameters.
And I would like to do it using SQL and anyth
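A plain-SQL sketch of that update-then-insert pattern (no protection against concurrent inserts; literal values stand in for the function's parameters; modern versions would use INSERT ... ON CONFLICT instead):

  UPDATE mytable SET val = 'new value' WHERE id = 42;
  INSERT INTO mytable (id, val)
  SELECT 42, 'new value'
  WHERE NOT EXISTS (SELECT 1 FROM mytable WHERE id = 42);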
Dear PostgreSQL Expert,
I'm looking for help on a PostgreSQL installation matter.
Could you give me some advice or redirect me to some other experienced
people?
The problem is the following.
I got PostgreSQL ver. 7.0.2 from the Postgres site and compiled it on an HP-UX machine
under OS h
I can't find that anywhere on the net.
I only have a broken 1.07 binary.
The official site for it (cygutils.netpedia.org) is password protected!!!
Help.
Eric