ash recovery, to archive (ship) it and for log based
replication slots ), but postgres recycles segments ( which can have
an impact on big memory machines ). I do not know to what extent a
modern OS can detect the access pattern and do things like evict the
log pages early after sync.
Francisco Olarte.
s incurred over PostgreSQL,
> which leads to higher throughput.
Have you accounted for disk caching? Your CDC may be getting the log
from the cache when running with little lag, but being forced to read
from disk ( making the server do it ) when it falls behind.
Francisco Olarte.
to test.
Also note it is documented as compressed BY DEFAULT, but you can use
options to avoid compression, and it is the only one which supports
parallel dumps.
Also, custom and tar can be made uncompressed, but I do not think
that's a great idea.
Francisco Olarte.
fically for these partitioned data. One of our current
> problems is exactly the time it takes for backup and restore operations. I
> did not mentioned it before because of the size of the original message.
We normally do the schema trick, and as 90% of data is in historic
schema, we skip most of
like that to begin with...
Seems fine to me. I never used that because I normally use special
insertion programs for my partitioned tables ( my usage allows that ),
so I always insert directly into the appropriate partition ( so I just
use inheritance, no triggers or rules ).
Francisco Olar
19:17 //tmp
655361 drwxrwxrwt 14 root root 45056 Sep 12 19:17 ///tmp
Francisco Olarte.
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
C,
windows shells are terrible at quoting arguments and redirecting I/O,
so it may be better to avoid them ).
Francisco Olarte.
Why headers are not allowed in other formats remains a
mystery to me ), and LIMIT 0 ( I would rather try adding AND FALSE to
the WHERE clause, it may lead to a faster response, although I doubt
it ).
Francisco Olarte.
g a query using libpq and can do it, we may provide some
useful info if you show yours first.
Francisco Olarte.
abit personally, feel free to use it as
much as you like, or to recommend it as good if you want. Normally I
wouldn't even mention it, as I did not in my first response; I just
did to explain why I ignored the tail.
Francisco Olarte.
<ol...@mipt.ru> wrote:
> On Sat, 2017-09-02 at 17:54 +0200, Francisco Olarte wrote:
>> It's probably doing 1 (integer) => double precision => numeric(20) or
>> something similar if you do not specify.
>>
>> Francisco Olarte.
>
> Well, the question was
lly never cast to unspecified numeric
precision ( except for toy one liners and the like ), and I was trying
to show the OP why he got 20.
Francisco Olarte.
0)/3;
 ?column?
----------
     0.33
(1 row)
It's probably doing 1 (integer) => double precision => numeric(20) or
something similar if you do not specify.
Francisco Olarte.
understood.
Valid arguments, but those extensions are NOT free to develop, test
and maintain. And every syntax extension, especially one like this,
introduces the possibility of collisions with future standards ( de
facto or de jure, although Pg already deviates from ansi on the temp
stuff ).
ht) increase on the bug-surface is enough
to avoid it, as the pg_temp. prefix makes it equally clear and
explicit that you are dropping a temporary table.
And if the programmer can forget the pg_temp., they can equally forget
the TEMP.
Francisco Olarte.
m to
recall custom format needs seekable files, but I am not too sure about
tar; it should not. In this case, as previously suggested, a simple
gunzip
-t is enough to verify backup file integrity, but checking internal
correctness is very difficult ( as it may even depend on server
configuration, i.e., needing s
On Wed, Aug 2, 2017 at 6:23 PM, Scott Marlowe <scott.marl...@gmail.com> wrote:
> Does insert's "on conflict" clause not work for this usage?
Did you even bother to read the queries? He is using two different tables.
Francisco Olarte.
otentially noticeable delay.
Francisco Olarte.
rectly to the
console api, even a simple 'while ((c = getchar()) != EOF) putchar(c)'
loop should be fast if the console/redirection is not playing tricks.
You can also test with a single table with a single text field: put a
several-pages-long text value there and select it. If it is slow, I
would bet for terminal emulator pr
ta take longer than I would like as a result. Is there
> any way to improve just the display/write performance in the console?
Are you sure the culprit is psql and not your terminal emulator ?
Francisco Olarte.
n my experience,
> it isn't always easy to do.
Excessive = too much, normally implies bad things.
Francisco Olarte.
hers signatures )
is especially ugly; combined with top posting I feel it as insulting (
to me it feels like 'you do not deserve me taking the time to edit a
bit and make things clear' ) ( but well, I started when the whole
university multiplexed over a 9600bps link, so I may be a bit extreme
on this )
Regards.
F
On Tue, May 9, 2017 at 1:44 PM, vinny <vi...@xs4all.nl> wrote:
> In fact, I don't think many companies/developers even choose a language
> or database, but rather just use whatever they have experience in.
That is choosing. You choose them because you know them.
Francisco Olarte.
> largely married to MongoDB?
I do not think either of these is true.
Francisco Olarte.
ete+reinsert and would like
to point out that in pg update ~= delete+insert. I use those because
many times they are more efficient ( simple conditions on delete,
insert is fast in postgres, and you can vacuum in the middle if a
large portion is going to get reinserted, to reuse the space ).
Francisco Olarte.
and reimport them. It also makes
debugging the code much easier ( as the temp table can be cloned to
test easily ). For encapsulation "with" helps a lot, or, in a
function, you can use a real temporary table.
Francisco Olarte
s will make postgres slower.
Anyway, with base backup + wal archive you always have the option of
making incremental. Just start a recovery on the backup each time you
receive a wal segment and you are done. In fact, you can treat a
replication slave as a very low lag backup.
Francisco Olar
nsformation is
contextless, much easier than csv:
Copy out: Replace NULL with '\N', newline with '\n', tab with '\t',
backslash with '\\', join fields with tab, print with newline at end.
Copy In: Read till newline, split on tabs, replace '\n' with newline,
'\t' with tab, '\\' with backslash.
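The transformation described above is simple enough to sketch in a few lines. Here is a hypothetical Python round trip of the COPY text-format escaping rules as stated (my own illustration of the rules, not psql's or the server's implementation; function names are invented):

```python
def copy_out_row(fields):
    """Serialize one row: None -> \\N; escape backslash, newline and tab;
    join fields with tabs; terminate with a newline."""
    def esc(v):
        if v is None:
            return '\\N'
        # Escape the backslash first so the other escapes stay unambiguous.
        return v.replace('\\', '\\\\').replace('\n', '\\n').replace('\t', '\\t')
    return '\t'.join(esc(f) for f in fields) + '\n'

def copy_in_row(line):
    """Parse one row: split on tabs, then undo the escapes."""
    def unesc(tok):
        if tok == '\\N':
            return None
        out, i = [], 0
        while i < len(tok):
            if tok[i] == '\\' and i + 1 < len(tok):
                # \n, \t and \\ map back; any other escaped char is literal.
                out.append({'n': '\n', 't': '\t', '\\': '\\'}.get(tok[i + 1], tok[i + 1]))
                i += 2
            else:
                out.append(tok[i])
                i += 1
        return ''.join(out)
    return [unesc(t) for t in line.rstrip('\n').split('\t')]
```

Because every escape is a fixed two-character sequence, each field can be decoded without any surrounding context, which is exactly the contrast with csv being drawn here.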
Much
pens in other OS too depending on how
you manage it ). IIRC windows had infrastructure to do that with
services, but I haven't used it since they launched XP, so I'm really
rusty and outdated.
Francisco Olarte
erything you can do with
tar x and a tar backup is possible with pg_restore, and then more ).
Francisco Olarte.
use it is
faster ( than using it: index fetches are slower than sequential
fetches ( for the same number of rows ), so for some queries it is
better to not use the index ) ( especially if you are using them for
small test tables; i.e., for a single-page table nothing beats a
sequential scan ).
a single big
constraint.
> What am I doing wrong please?
Not RTFM ? ( if I'm right, or not understanding it )
Francisco Olarte.
ed a main program to do all the retrying and
accounting. And you can optimize some things ( like copying from
several satellites and then inserting them at once ).
YMMV anyway, just use whichever is easier for you, but avoid false laziness ;-)
Francisco Olarte.
the half-inch tape
and punched cards times, and use it a lot for file processing ( as
renaming in Linux is atomic in many filesystems )
Francisco Olarte.
ommits.
If the local ( t1 ) is committed before the remote, you can lose rows.
If the remote ( t2 ) is committed before the local you'll have dupes,
so you need some way to purge them.
These things can be solved with the aid of a transaction manager and
prepared transactions, but I'm not sure of the s
t do nothing" ( I do not remember the
versions ) you can use an extra step. Instead of inserting into main
in 2, do 2.a - copy holding to main ( truncating beforehand if the
copy is present ) and 2.b - insert the new rows from the copy, either
by using an anti-join with main or by deleting ( in the same tr
ay
renaming tricks. It all depends on the concrete app, but you can try
to fit the pattern into it; I've done it several times and it's a
useful one.
Francisco Olarte.
ter
password, at least in unix a user account with a .pgpass file works
like one. You store every password in a file readable by a user,
.pgpass, and you use that user's login credentials to get access to
it.
Francisco Olarte.
_array next to it. And any
regexp related function clustered with its siblings, either in the
string page or ( in other manuals ) in its dedicated section.
Francisco Olarte.
you can get the idea.
Francisco Olarte.
P.S.:
-- shufflings of three elements, with any or from its pos to the end:
Swapping with any element in the array
0,0,0: abc =>swap(0,0)=> abc =>swap(1,0)=> bac =>swap(2,0)=> cab
0,0,1: abc =>swap(0,0)=> abc =>swap(1,0)=> bac =&
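The enumeration above can be finished mechanically. A small Python sketch (my own illustration, not from the thread) enumerates both strategies on 'abc': swapping each position with any index yields 3^3 = 27 equally likely paths over 6 permutations, so the counts cannot be equal; restricting position i to swap only with an index in [i, n) is Fisher-Yates, with 3*2*1 = 6 paths, one per permutation:

```python
from collections import Counter
from itertools import product

def shuffle_with(s, choices):
    """Apply swap(i, choices[i]) for each position i, left to right."""
    a = list(s)
    for i, j in enumerate(choices):
        a[i], a[j] = a[j], a[i]
    return ''.join(a)

# Naive shuffle: every position may swap with ANY index -> 27 paths,
# and since 27 % 6 != 0 the resulting distribution is biased.
naive = Counter(shuffle_with('abc', c) for c in product(range(3), repeat=3))

# Fisher-Yates: position i swaps only with an index in [i, n) -> 6 paths,
# exactly one per permutation, so the result is uniform.
fy = Counter(shuffle_with('abc', (i, j, 2)) for i in range(3) for j in (1, 2))
```

Printing the two counters shows the naive variant landing on some permutations 5 times out of 27 and on others 4, while the Fisher-Yates variant hits each permutation exactly once.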
ime >='08:00'
and ts::time<'13:00' should work quite well, the first two conditions
guide to an index scan and the rest is done with filtering ).
Francisco Olarte.
vent one based on a partial
description. I do not see anything in common between a 'like based
query' and timestamp columns.
Francisco Olarte.
d term and I'm not able to
study that tool ).
Francisco Olarte.
with "head the-datafile | perl the_script.pl |
my_favourite_pager" until correct; the beauty of this approach is that
you do not touch the db while debugging, and feed it to psql when
done. In my experience the perl script overhead is unnoticeable on any
2k+ machine ( and perl was specifically designed to be good
", spanish version
of April Fools I was tempted to write something about different
impedances in the copper tracks used for DB data traffic when entering
the CPU silicon interconnects via golden cables.
Francisco Olarte.
herits where
pg_class.oid=inhrelid and inhparent=20473)
SELECT 'ALTER TABLE ' || relname || ' rest of alter table command;'
from childs ;
And feed the result back to the server using your favorite tool (
quoting maybe needed, schema names may be needed, YMMV ).
Francisco Olarte.
te servers ).
You could check with type/file whether you have something similar.
Francisco Olarte.
"Statuses"
Also, when having strange issues, I've found the combo
echo | od -tx1 -tc
very useful; it helps rule out the potential fancy quotes pointed out
previously.
Francisco Olarte.
r
requirements are impossible to satisfy ( of course, you could run with
scissors, but that will lose data without hdd failure ).
Francisco Olarte.
although I fear this will lock the table too, but it
will be for a very short time; your readers may well tolerate it. )
Yours seems a special app with special needs; try a few, measure, it
is certainly possible.
Francisco Olarte.
where a
heavy update plus vacuum full was successfully used.
Francisco Olarte.
l;
 ?column? | ?column? | ?column?
----------+----------+----------
 t        | t        | t
(1 row)
I.e., the same happens with a nullable unique column, you can have one
of each not null values and as many nulls as you want.
SQL null is a strange beast.
Francisco Olarte.
aint smell of http://xyproblem.info/ ,
what are you trying to achieve by doing that?
Francisco Olarte.
ds and then filtering, just with the index if it includes word
position or by reading the docs. In general, in FTS, you need to use
selective terms for fast queries.
Francisco Olarte.
imes is FTS does not mix too well with relational queries at
the optimizer level ( as FTS terms can have very diverse degrees of
correlation, which is very difficult to store in the statistics a
relational optimizer normally uses ).
Francisco Olarte.
a base backup from a running system, some
recovery is needed to make it consistent. With the target time you can
limit how much is done. But there is a minimum. Think of it, if you
stated '1970-01-01' it would be clearly impossible; your date is
bigger, but still impossible, so try raising it a bit.
Merlin:
On Thu, Oct 27, 2016 at 7:29 PM, Merlin Moncure <mmonc...@gmail.com> wrote:
> On Thu, Oct 27, 2016 at 11:18 AM, Francisco Olarte
> <fola...@peoplecall.com> wrote:
>> It is, but handling them is not easy, and you have to deal with things
>> like DoS which
Tom:
On Thu, Oct 27, 2016 at 6:32 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Francisco Olarte <fola...@peoplecall.com> writes:
>> Isn't this a server setting, and so going to affect every connection,
> Yes,
Ok, just checking.
> but there are equivalent libpq para
Merlin:
On Thu, Oct 27, 2016 at 6:10 PM, Merlin Moncure <mmonc...@gmail.com> wrote:
> On Thu, Oct 27, 2016 at 10:01 AM, Francisco Olarte
> <fola...@peoplecall.com> wrote:
>> And I'd like to point libpq sessions does not sound to be the best
>> kind of traffic across
ves_* in postgresql.conf
Isn't this a server setting, and so going to affect every connection,
whether it comes from the (affected) libpq connections or from other
sources ( like jdbc, although he may want keepalives for those too )?
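For reference, a sketch of the server-side knobs being discussed (the parameter names are the tcp_keepalives_* settings in postgresql.conf; the values here are purely illustrative, and 0 means "use the OS default"):

```
# postgresql.conf -- server-wide TCP keepalive settings (illustrative values)
tcp_keepalives_idle = 60        # seconds of inactivity before the first probe
tcp_keepalives_interval = 10    # seconds between unanswered probes
tcp_keepalives_count = 5        # lost probes before the connection is dropped
```

libpq clients can request the same behaviour per connection with the keepalives_idle, keepalives_interval and keepalives_count connection-string options, which is what the "equivalent libpq parameters" mentioned elsewhere in this thread refers to.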
Francisco Olarte.
ecifically search for keepalive. Keepalives generate traffic which
normally keeps overzealous firewalls happy; I have used them before
successfully.
And I'd like to point out that libpq sessions do not sound like the
best kind of traffic to send across a firewall, not a good service /
protocol to expose.
Francisco Olart
standby down mean primary will stop
accepting work?
Francisco Olarte.
'. Unless you abuse things like ø or ö or things like
these people do not normally have problems running them ( in Spanish
we just have to avoid accented vowels and ñ and we are fine ).
Francisco Olarte.
e in your
queries, but better to convert explicitly ( as it can bite you on
some occasions ).
Francisco Olarte.
p.
> select * from table where date = '2016/10/20'::date
I think it is the other way round ( date::date = '2016/10/20' ).
To me it seems yours will do:
date = '2016/10/20'::date::timestamp ( = 2016/10/20 00:00:00 )
( a widening conversion )
Francisco Olarte.
u will select very few rows
and speed up the last phase. Anyway, I fear Bernoulli must read all
the table too, to be able to discard randomly, so you may not win
anything ( I would compare the query time against a simple 'count(one)
query', to have a benchmark of how much time the server expends
r
it's fast, I would try selecting several
thousands and eyeballing the result; if it does what I fear, the
grouping should be obvious ).
Maybe you do not mind it, in which case it's ok, but a one-minute run
should let you know what you are exactly doing.
Francisco Olarte.
the OP stated a busy
system, but anyway the lock is going to execute fast but with a
long delay, and counting the time from the issuing of the command to
the time of completion is a perfectly reasonable way to do it.
Anyway, ok, exclusive locks cause the slowness.
Francisco Olarte.
) than pure speed .
Francisco Olarte.
d sync a big uncached file between the
affected volumes. If the move goes, say, 1.5 times slower, I wouldn't
say it is that slow ( given copy is optimized for this kind of
transfer and a database not so much ).
Francisco Olarte.
s, one can not say too much.
Francisco Olarte.
ors have 'test and set' and 'compare-exchange' and similar.
This one is similar to a test and set, you set existence to false and
test whether it existed before. I can easily test and then set, but it is
not the same as TAS. And the notice is not the reason it is not done
at commit time, the reason is the o
op being a no-op if the table did not exist, as
commands are not postponed ( it must show you the notice or not before
completing ), so you are just issuing two create commands for the same
table.
Your serial postponed execution is a nice desire, but I doubt it is
necessary.
Francisco Olarte.
intermediate results, the server does not ).
Francisco Olarte.
serializable, I MAY expect
that, but wouldn't put much money on it.
Francisco Olarte.
24617892 +0200
Change: 2016-09-30 17:31:21.024617892 +0200
Birth: -
Further details left for the reader.
Francisco Olarte.
t, from a db client. You could try one of the
functions in
https://www.postgresql.org/docs/9.5/static/functions-admin.html#FUNCTIONS-ADMIN-GENFILE
and many of the untrusted programming languages for postgres functions
( plperl, plpython, etc ) have methods of calling stat in the server.
Francisco Ola
r that big fraction, so reading all of them sequentially
and doing a quick filter is easier )).
Francisco Olarte.
normally
would typically query it with a range condition on the timestamp, a
cluster will help.
Francisco Olarte.
d to me.
>> Also, did you signal the postmaster to reread after adding the line?
> What do you mean?
When you change the file you need to signal the postgres main process
( postmaster ) to reread it by sending it a HUP signal, or using
pg_ctl reload ( your OS/distro may have other methods ).
Fran
t is really used by postgres?
> I think so as another service (Postfix) is running and working.
It has nothing to do with it, except if postfix is using postgres.
> How can I verify ?
If you used hab, it is wrong; if you used hba, consult the docs for
your version & os and check.
Fra
27.0.0.1 md5
>
Are these all the contents of your pg_hba.conf? Note order matters:
all non-comment lines ( or at least the host ones ) need to be
checked.
Also, did you signal the postmaster to reread after adding the line?
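Since pg_hba.conf records are matched top-down and the first matching line wins, a sketch of a minimal file for the case discussed (the address is my guess at the intended 127.0.0.1; adjust to your actual setup):

```
# pg_hba.conf -- the FIRST line matching type/database/user/address wins
# TYPE  DATABASE  USER  ADDRESS        METHOD
host    all       all   127.0.0.1/32   md5
```

Any earlier line that also matches the connection ( e.g. a broader host or local rule ) will shadow this one, which is why the whole file has to be checked, not just the line you added.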
Francisco Olarte.
en a good idea. If it can be
conditionally enabled with a simple set and implemented in very few (
< 20 ) lines of code, ok for me , otherwise I would prefer the reduced
bug surface.
Francisco Olarte.
seem kind of opposed to each other.
He probably wants to treat it as non-deferrable just during the
upsert. I do not know if he has considered that this opens a can of
worms ( like, the constraint may be already broken due to previous
DML ).
Francisco Olarte.
edge ). I do not know if
pg inserts several items at a time in bulk loading, but I doubt it.
Normally every btree indexing library has some optimization for this
cases, as they are common, just like every real sort routine has some
optimization for presorted input.
Francisco Olarte.
On Tue, Aug 23, 2016 at 4:28 PM, Rob Sargent <robjsarg...@gmail.com> wrote:
> On 08/23/2016 07:44 AM, Francisco Olarte wrote:
>> On Tue, Aug 23, 2016 at 2:26 PM, pinker <pin...@onet.eu> wrote:
>>> I am just surprised by the order of magnitude in the difference thoug
nd full of interesting information; even if skipping large
chunks, knowing where the info is can save you a lot of work and
mails. AAMOF, it's one of the main reasons I've been using postgres
all these years.
Francisco Olarte.
able lock can be
easily obtained.". A big aborted bulk load may just fit the case, as
it may put a lot of tuples at new pages at the end and be executed in
a low-load period where the lock is easier to acquire.
Francisco Olarte.
m not current with the current postgres
details, but it does not surprise me they have big optimizations for
this, especially when index ordered insertion is quite common in
things like bulk loads or timestamped log lines.
Francisco Olarte.
hile the second one
> finish off after 13 rows fetched and returns the full 10 rows.
Good. The only problem is you are not guaranteed a result, like in the
contrived example I gave, but if it is what you want this is a way to
go.
Francisco Olarte.
ou can even try using *10, *100, *1k of the real limit until you have
enough results if you want to time-limit your queries.
Francisco Olarte.
ake a difference.
But anyway, to compare two things like that, as the original poster
was doing, I normally prefer to test just one thing at a time, that's
why I would normally try to do it by writing a sorted file, shuffling
it with sort -R, and copying it, server side if possible, to eliminate
so both
Francisco
um
delay is X you just move every row older than that from staging to the
partition with whatever period is best). Staging partition is normally
small and cached and can be processed quite fast ( with 200k/day an
hourly movement will leave staging with less than about 10k rows if
distribution is so
n general
select * from table where common_condition and filter_condition order
by xx limit N
becomes
with base as (select * from table where common_condition order by xx
limit base_fetches)
select * from base where filter_condition order by xx limit N;
In the example common_condition is non e
ent numbers in the range
2**32/64. I think there are some pseudo-random number generators which
can be made to work with any range, but do not recall which ones right
now.
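One standard trick for getting a uniform draw in an arbitrary range out of a power-of-two generator (my addition, not something named in the thread) is rejection sampling; a minimal Python sketch:

```python
import random

def uniform_below(n, rand32=lambda: random.getrandbits(32)):
    """Uniform integer in [0, n) built from a 32-bit generator.

    Rejection sampling: raw values from the 'tail' above the largest
    multiple of n are discarded, so the final modulo cannot favour
    small residues the way a plain `rand32() % n` would.
    """
    limit = (2 ** 32 // n) * n   # largest multiple of n <= 2**32
    while True:
        r = rand32()
        if r < limit:            # accept; r % n is now exactly uniform
            return r % n
```

The expected number of retries is below 2 for any n up to 2**32, since at least half of the 32-bit range is always accepted.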
Francisco Olarte.
ideally I would just write 10M integers to
a disk file, then shuffle it and compare COPY FROM times from both ) (
unless you know of an easy way to generate a random permutation on the
fly without using a lot of memory, I do not ).
Francisco Olarte.
and using count tends to
be the faster way to extract the relevant data ( the rest of my query,
after the first with, is just moving data around for pretty-printing
( or pretty-selecting ) ).
Francisco Olarte.
counts for the case where not all
vendids are present. If you prefer null you can use it; IIRC max
ignores them.
ignores them.
Francisco Olarte.
-PSQL-VARIABLES
and be sure to scroll down to "SQL Interpolation" after the built-in
variables list and read that. I've used it several times; just
remember it's a macro processor and it's done by psql, not by the
server.
Francisco Olarte.
) tracks
can give this kind of problem ( although the disk CRC should catch
any odd number of bit errors, but with VMs in the mix who knows where
the messages could end up ).
Francisco Olarte.