2008/9/9 0123 zyxw <[EMAIL PROTECTED]>:
> Kevin Hunter wrote:
>>
>> 1. Oracle was "first", and has vendor lock-in momentum.
>> 2. Oracle ...speed/performance/concurrency...
>> 3. Oracle has application lock-in as well. ...
>> 4. Oracle is company-backed, so there is ostensibly "someone to blame"..
8><
> For some more info, I've given at least one presentation on the topic, which
> seems to be missing from the omniti site, but I've uploaded it to
> slideshare...
> http://www.slideshare.net/xzilla/postgresql-partitioning-pgcon-2007-presentation
>
> HTH.
>
8><
Very nice presentation. I have 2
On Thursday 11 September 2008 07:47:00 Joao Ferreira gmail wrote:
> Hello all,
>
> my application is coming to a point at which 'partitioning' seems to be
> the solution for many problems:
>
> - query speed up
> - data elimination speed up
>
> I'd like to get a feeling for it by talking to people
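For reference, the inheritance-based scheme those slides cover can be sketched roughly like this (table and column names are invented for illustration; this assumes the 8.x constraint-exclusion approach, not a feature the thread spells out):

```sql
-- Parent table plus one child per month; the CHECK constraint lets the
-- planner skip non-matching partitions when constraint_exclusion is on.
CREATE TABLE measurement (
    logdate date NOT NULL,
    value   int
);

CREATE TABLE measurement_2008_09 (
    CHECK (logdate >= DATE '2008-09-01' AND logdate < DATE '2008-10-01')
) INHERITS (measurement);

-- Query speed-up: a WHERE clause on logdate touches only the matching
-- partitions. Data-elimination speed-up: dropping a whole partition is
-- nearly instant, versus a DELETE plus VACUUM on one big table:
-- DROP TABLE measurement_2008_09;
```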
On Saturday 13 September 2008 09:07:23 Patrik Strömstedt wrote:
> Hi,
>
> I have a big problem.
>
> The backup (done nightly with pg_dump) at one of our customers sites is
> broken (well, it's overwritten and is of no use anymore). What is left is a
> filesystem backup that includes the postgresql d
On Friday 12 September 2008 14:23:52 Kevin Duffy wrote:
> Hello:
>
> I am moving to a new production server and am testing my backup and
> restore procedures.
>
> Given a backup created with the following command
>
> C:\>C:\progFiles\PostgreSQL\8.2\bin\pg_dump -Fc -b -C -o -f
> E:\backupPostgres\benchxx
On Sat, 13 Sep 2008, Dmitry Koterov wrote:
Hello.
We have a good intarray contrib module which contains a lot of features:
additional functions, operators with GIN support etc.
Are there plans for bigintarray?
contrib/intarray has a GiST index, not GIN; built-in GIN has basic
support for bigint[].
W
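If the reply above is right that built-in GIN (as of 8.3) covers bigint[] even though contrib/intarray is int4-only, a minimal sketch would be (table and index names are hypothetical):

```sql
CREATE TABLE events (participant_ids bigint[]);

-- Built-in GIN array operator class; no contrib module required.
CREATE INDEX events_pids_gin ON events USING gin (participant_ids);

-- Containment and overlap operators can then use the index:
SELECT * FROM events WHERE participant_ids @> ARRAY[42::bigint];
```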
On Friday 12 September 2008 15:55:46 Tom Lane wrote:
> Scott Ribe <[EMAIL PROTECTED]> writes:
> >> The worry expressed upthread about the transaction being "too large" is
> >> unfounded, btw. Unlike some other DBs, PG doesn't have a finite-size
> >> undo log.
> >
> > Sure, it won't fail. But would
>
> explain analyze
>> select * from test.test_tsq
>> where to_tsvector('40x40') @@ q
>>
>
> why do you need tsvector @@ q ? Much better to use tsquery = tsquery
>
> test=# explain analyze select * from test_tsq where q =
> '40x40'::tsque>
>
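In other words, when the column already holds a tsquery, compare queries directly instead of matching a synthesized tsvector against each row. A minimal sketch of the suggested form:

```sql
-- tsquery-to-tsquery equality avoids building a throwaway tsvector
-- from the literal on every row:
SELECT * FROM test_tsq WHERE q = '40x40'::tsquery;
```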
Tom Lane <[EMAIL PROTECTED]> writes:
> Gregory Stark <[EMAIL PROTECTED]> writes:
>> Incidentally the visibility bugs are indeed entirely fixed in 8.3. In 8.2 and
>> before cluster and alter table rewrites can both cause tuples to not appear
>> for transactions which were started before the cluster
Gregory Stark <[EMAIL PROTECTED]> writes:
> Incidentally the visibility bugs are indeed entirely fixed in 8.3. In 8.2 and
> before cluster and alter table rewrites can both cause tuples to not appear
> for transactions which were started before the cluster or alter table such as
> a long-running pg
[EMAIL PROTECTED] writes:
> About 150 million records into the import process, I get the following error:
> ERROR: lock AccessShareLock on object 51533/51769/0 is already held
What PG version? Can you put together a self-contained test case?
(It's probably independent of the data, so you could m
I'd like to store recurring appointments in my database, and be pretty
accepting in the scheduling of those appointments. For instance, I
want to accept both "every other Tuesday starting 2008-11-04" as well
as "every 3rd October 13th starting 2009." Storing those appointments
isn't that ha
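One way to keep such schedules compact is to store the recurrence rule itself rather than pre-expanding occurrences. A rough sketch, with invented table and column names (the thread does not specify a schema):

```sql
-- Each row is a rule, not an occurrence; occurrences are computed on
-- demand. 'weekly' with interval_n = 2 covers "every other Tuesday".
CREATE TABLE appointment (
    id         serial PRIMARY KEY,
    starts_on  date NOT NULL,           -- first occurrence
    freq       text NOT NULL,           -- 'weekly', 'monthly', 'yearly'
    interval_n int  NOT NULL DEFAULT 1  -- every N periods
);

-- "every other Tuesday starting 2008-11-04":
INSERT INTO appointment (starts_on, freq, interval_n)
VALUES (DATE '2008-11-04', 'weekly', 2);
```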
"Scott Marlowe" <[EMAIL PROTECTED]> writes:
> On Thu, Sep 11, 2008 at 8:56 AM, Bill Moran
> <[EMAIL PROTECTED]> wrote:
>> In response to Alvaro Herrera <[EMAIL PROTECTED]>:
>>
>>> Bill Moran wrote:
>>> > In response to "Gauthier, Dave" <[EMAIL PROTECTED]>:
>>> >
>>> > > I might be able to answer m
Gordon-
without disclosing company-proprietary specifics, can you provide a pared-down
schema of the affected tables and their column names?
e.g.

CREATE TABLE table1 (
    column1 int PRIMARY KEY
);

CREATE TABLE table2 (
    column2 int,
    column1 int REFERENCES table1(column1)
);
thanks
Martin
Hi,
I have a big problem.
The backup (done nightly with pg_dump) at one of our customers sites is broken
(well, it's overwritten and is of no use anymore). What is left is a filesystem
backup that includes the postgresql directories.
I'm trying to restore one of the tables from this "filesyste
Good Morning Brian-
sounds like a very nasty bug, first discovered in 2007:
http://archives.postgresql.org/pgsql-bugs/2007-04/msg00075.php
the bug was supposed to be eradicated post 8.3.3
which version are you running that exhibits this behaviour?
thanks
Martin
On Mon, Sep 8, 2008 at 12:29 PM, Gordon <[EMAIL PROTECTED]> wrote:
> I'm considering using an array-of-ints column in a table which lists a
> row's ancestry. For example, if item 97 is contained within item 68,
> and that item is contained within item 31, and that item is contained
> within item 1, then
Hello,
I was curious if there was a known size limit for Postgres transactions. In
order to import data into my database, my Java application begins a transaction,
imports the data (into several different tables), and then commits the
transaction on success. It works great on small data sets, but
Kevin Hunter wrote:
1. Oracle was "first", and has vendor lock-in momentum.
2. Oracle ...speed/performance/concurrency...
3. Oracle has application lock-in as well. ...
4. Oracle is company-backed, so there is ostensibly "someone to blame"..
5. ... individuals ... may prefer it *because* it's ex
..::rDk::.. wrote:
I'm struggling with my dataset.
I've got a small pgsql db with a timestamp column in format :MM:DD
HH:MM:SS for each record
Use to_char
to_char(tscol, 'dy') -> mon
to_char(tscol, 'Day') -> Monday
to_char(tscol, 'D') -> 2
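Putting those together in one query (the table name `records` is invented here; `tscol` is the poster's column):

```sql
SELECT to_char(tscol, 'dy')  AS day_abbrev,  -- e.g. 'mon'
       to_char(tscol, 'Day') AS day_name,    -- e.g. 'Monday   ' (blank-padded to 9)
       to_char(tscol, 'D')   AS day_number   -- e.g. '2' (Sunday = 1)
FROM records;
```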
--
Sent via pgsql-general mailing list (pgsql-gener
If I use the value of the hierarchy column in a query I can get all
the rows that a given row is a descendant of (for example, SELECT *
FROM items WHERE itm_id IN (1,31,68,97)). However, I need the rows
to be in the correct order, i.e. the root node first, child second,
grandchild third, etc. I
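Assuming the items table and itm_id column from the question, one way to get root-first ordering is to join against the positions of the ancestry array (the path is hard-coded here purely for illustration):

```sql
-- generate_series enumerates positions 1..4 of the ancestry array;
-- joining on the subscripted element and ordering by position returns
-- root first, then child, grandchild, and so on.
SELECT i.*
FROM items i
JOIN generate_series(1, 4) AS g(n)
  ON i.itm_id = (ARRAY[1,31,68,97])[g.n]
ORDER BY g.n;
```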
On 2008-09-12 15:52, Jack Orenstein wrote:
Sorry, I misspoke. I have an index, but preferred doing a scan without
the index in this case.
Why?
The only reason I can think of is that you'd like to avoid disk seeking.
But you get at most 1 row in 30 seconds, so disk latency (only several
mill