Thank you, I'm also curious as to whether the data folder is already in some
way encrypted and if so, what encryption/obfuscation is being used. There
doesn't seem to be anything about this on the web.
Stuart Gundry wrote:
Thank you, I'm also curious as to whether the data folder is already in some
way encrypted and if so, what encryption/obfuscation is being used. There
doesn't seem to be anything about this on the web.
No encryption, although large text fields may be compressed (read up on
On Wed, 2008-07-09 at 09:11 +, Dean Rasheed wrote:
Simon, I like your proposal, and I think I can see how to code it
fairly easily.
There is one thing that it doesn't allow, however, which the debug_xxx
parameters do, and that is for a non-superuser to trace SQL used in
functions, from
Simon Riggs [EMAIL PROTECTED] writes:
On Wed, 2008-07-09 at 09:11 +, Dean Rasheed wrote:
So I suggest grouping these parameters in their own category
(eg. sql_trace) and then having additional parameters to control
where the output would go. So the sql_trace parameters would be:
*
Stuart Gundry wrote:
Been looking into truecrypt but can't seem to get it to play nice with
postgres silent installer. When I try to set the BASEDIR=M:\, which is
where I mounted my encrypted volume it gives the following error in the log
The Cacls command can be run only on disk drives that
At 2008-07-09 17:06:19 -0700, [EMAIL PROTECTED] wrote:
I'm really new to this git thing, but I now have access to create
git-shell accounts, etc. on git.postgresql.org. Any ideas you can
offer on how better to handle this would really help me. :)
The question is: what is your objective in
Abhijit Menon-Sen [EMAIL PROTECTED] writes:
The only real benefit to review that I can imagine would be if full
change history were available, which it could do if a) changes were
committed separately with proper comments and b) if the branch were
*NOT* rebased, but instead periodically
On 7/10/08, Robert Hodges [EMAIL PROTECTED] wrote:
This is a quick update on a promise I made early in June to suggest
requirements as well as ways to add replication hooks that would support
logical replication, as opposed to the physical replication work currently
underway based on NTT's
I am trying to generate a patch with respect to the current CVS head. So
I rsynced the tree, then did cvs up and installed the db. However, when
I did initdb on a data directory it is stuck:
It is stuck after printing creating template1
creating template1 database in /home/postgres/data/base/1
After sleeping on this, I think we should follow your idea.
Hmm. I preferred your idea ;-) It reduces the number of new parameters
back down to 3, which makes it easier to use:
* trace_min_planner_duration - int, PGC_USERSET
* trace_min_executor_duration - int, PGC_USERSET
*
On Thu, Jul 10, 2008 at 5:36 PM, Sushant Sinha [EMAIL PROTECTED] wrote:
Seems like a bug to me. Is the tree stable only after commit fests and I
should not use the unstable tree for generating patches?
I quickly tried on my repo and it's working fine. (Well it could be a
bit out of sync with
Sushant Sinha wrote:
I am trying to generate a patch with respect to the current CVS head. So
I rsynced the tree, then did cvs up and installed the db. However, when
I did initdb on a data directory it is stuck:
[snip]
Seems like a bug to me. Is the tree stable only after commit fests
You are right. I did not do make clean last time. After make clean, make
all, and make install it works fine.
-Sushant.
On Thu, 2008-07-10 at 17:55 +0530, Pavan Deolasee wrote:
On Thu, Jul 10, 2008 at 5:36 PM, Sushant Sinha [EMAIL PROTECTED] wrote:
Seems like a bug to me. Is the tree
On Wednesday, July 9, 2008, Peter Eisentraut wrote:
I propose that we relax these two checks to test for binary-coercibility
instead, which is effectively what is expected and required here anyway.
Here is the corresponding patch.
diff -ur ../cvs-pgsql/doc/src/sgml/ref/create_cast.sgml
Robert Treat [EMAIL PROTECTED] writes:
On Monday 07 July 2008 21:56:34 Bruce Momjian wrote:
Right now you advance the static link to the next commit fest once the
current one starts --- I was hoping for a link that advances when the
commit fest is done so I could make it a permanent tab in
On Thu, Jul 10, 2008 at 04:12:34PM +0530, Abhijit Menon-Sen wrote:
At 2008-07-09 17:06:19 -0700, [EMAIL PROTECTED] wrote:
I'm really new to this git thing, but I now have access to create
git-shell accounts, etc. on git.postgresql.org. Any ideas you can
offer on how better to handle this
Stephen R. van den Berg [EMAIL PROTECTED] writes:
Then, from a client perspective, there is no use at all, because the
client can actually pause reading the results at any time it wants,
when it wants to avoid storing all of the result rows. The network
will perform the cursor/fetch facility
On Thu, Jul 10, 2008 at 3:16 PM, Tom Lane [EMAIL PROTECTED] wrote:
I surely do not have an objection to having a link defined as above ---
I just wanted to be clear on what we meant by current commitfest.
We probably need two separate terms for the place to submit new
patches and the place we
Tom Lane wrote:
Stephen R. van den Berg [EMAIL PROTECTED] writes:
Then, from a client perspective, there is no use at all, because the
client can actually pause reading the results at any time it wants,
when it wants to avoid storing all of the result rows. The network
will perform the
Hi Marko,
No fear, we definitely will discuss on pgsql-hackers. I just wanted to make
sure that people understood we are still committed to solving this problem and
will one way or another commit resources to help.
Just to be clear, by logical replication I mean replication based on sending
Dave Page [EMAIL PROTECTED] writes:
On Thu, Jul 10, 2008 at 3:16 PM, Tom Lane [EMAIL PROTECTED] wrote:
I surely do not have an objection to having a link defined as above ---
I just wanted to be clear on what we meant by current commitfest.
We probably need two separate terms for the place to
The new data type, UUID, is stored as a string -char(16)-:
struct pg_uuid_t
{
unsigned char data[UUID_LEN];
};
#define UUID_LEN 16
but this is very inefficient, as you can read here [1].
The ideal would be to use bit(128), but today that isn't possible. One
possible
* David Fetter [EMAIL PROTECTED] [080710 10:19]:
The question is: what is your objective in providing this repository?
Here are my objectives:
1. Make a repository that keeps up with CVS HEAD.
There are already at least 2 public ones that do:
git://repo.or.cz/PostgreSQL.git
Kless wrote:
The new data type, UUID, is stored as a string -char(16)-:
struct pg_uuid_t
{
unsigned char data[UUID_LEN];
};
#define UUID_LEN 16
No, it is not. It is stored as 16 binary bytes. As text it won't fit into
16 bytes.
but this is very
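Mark's correction can be illustrated with a short sketch (using Python's standard uuid module, not PostgreSQL's code): a UUID is 16 raw bytes, matching UUID_LEN, while its canonical text form is 36 characters (32 hex digits plus 4 hyphens), so binary storage is well under half the size of text.

```python
# Illustration of the point above: a UUID occupies 16 raw bytes,
# but 36 characters in its canonical text representation.
import uuid

u = uuid.UUID('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11')
print(len(u.bytes))  # 16 raw bytes, matching UUID_LEN
print(len(str(u)))   # 36 characters of text
```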
Sushant Sinha [EMAIL PROTECTED] writes:
You are right. I did not do make clean last time. After make clean, make
all, and make install it works fine.
My ironclad rule for syncing with CVS is
make distclean
cvs update
reconfigure, rebuild
The cycles you save by taking
Aidan Van Dyk wrote:
* David Fetter [EMAIL PROTECTED] [080710 10:19]:
2. Allow people who are not currently committers on CVS HEAD to make
needed changes.
Uh, the point of git is it's distributed, so you don't need to be
involved for them to do that
Yep. People can already clone
Peter Eisentraut [EMAIL PROTECTED] writes:
On Wednesday, July 9, 2008, Peter Eisentraut wrote:
I propose that we relax these two checks to test for binary-coercibility
instead, which is effectively what is expected and required here anyway.
Here is the corresponding patch.
Looks good, but
On Thu, Jul 10, 2008 at 11:31:00AM -0400, Alvaro Herrera wrote:
Aidan Van Dyk wrote:
* David Fetter [EMAIL PROTECTED] [080710 10:19]:
2. Allow people who are not currently committers on CVS HEAD to
make needed changes.
Uh, the point of git is it's distributed, so you don't need to
Kless wrote:
The new data type, UUID, is stored as a string -char(16)-:
struct pg_uuid_t
{
unsigned char data[UUID_LEN];
};
#define UUID_LEN 16
but this is very inefficient, as you can read here [1].
The ideal would be to use bit(128), but today that isn't possible.
Mark Mielke wrote:
Kless wrote:
The new data type, UUID, is stored as a string -char(16)-:
struct pg_uuid_t
{
unsigned char data[UUID_LEN];
};
#define UUID_LEN 16
What is the complaint? Do you have evidence that it would be
noticeably faster as two 64-bits? Note that a UUID is
Mark Mielke [EMAIL PROTECTED] writes:
Kless wrote:
[1] http://www.mysqlperformanceblog.com/2007/03/13/to-uuid-or-not-to-uuid/
That's a general page about UUID vs serial integers.
AFAICT the author of that page thinks that UUIDs are stored in ASCII
form (32 hex digits), which would indeed be
Mark Mielke wrote:
I didn't notice that he put 16. Now I'm looking at uuid.c in
PostgreSQL 8.3.3 and I see that it does use 16, and the struct
pg_uuid_t is length 16. I find myself confused now - why does
PostgreSQL define UUID_LEN as 16?
I will investigate if I have time tonight. There MUST
* David Fetter [EMAIL PROTECTED] [080710 11:34]:
Yep. People can already clone the master Pg trunk, and start from
there to build patches. If they use their *private* repos for this,
awesome -- they have complete history. If they want other
developers to chime in with further patches,
On Thu, 2008-07-10 at 12:05 -0400, Mark Mielke wrote:
Mark Mielke wrote:
I didn't notice that he put 16. Now I'm looking at uuid.c in
PostgreSQL 8.3.3 and I see that it does use 16, and the struct
pg_uuid_t is length 16. I find myself confused now - why does
PostgreSQL define
On Jul 10, 2008, at 09:13, Joshua D. Drake wrote:
You are out to lunch and you dragged me with you. Did we have beer at
least? :-)
Sounds like at least 4 and a couple of chasers.
Next time I'd like to be invited to the party, too! :-P
David
--
Sent via pgsql-hackers mailing list
Tom Lane wrote:
Stephen R. van den Berg [EMAIL PROTECTED] writes:
Then, from a client perspective, there is no use at all, because the
client can actually pause reading the results at any time it wants,
when it wants to avoid storing all of the result rows. The network
will perform the
You are out to lunch and you dragged me with you. Did we have beer at
least? :-)
A bit, and you had a byte of bread.
--
Best regards
Kaare Rasmussen, Jasonic
Jasonic Telefon: +45 3816 2582
Nordre Fasanvej 12
2000 Frederiksberg Email: [EMAIL PROTECTED]
Tom Lane [EMAIL PROTECTED] writes:
Mark Mielke [EMAIL PROTECTED] writes:
Kless wrote:
[1] http://www.mysqlperformanceblog.com/2007/03/13/to-uuid-or-not-to-uuid/
That's a general page about UUID vs serial integers.
AFAICT the author of that page thinks that UUIDs are stored in ASCII
form
(I don't really have much to add to the discussion here; I'm just
posting for the record on the question of client behaviour, since
I also wrote and maintain a client library in C++.)
At 2008-07-10 18:40:03 +0200, [EMAIL PROTECTED] wrote:
I start returning rows as they arrive, and pause reading
Stephen R. van den Berg [EMAIL PROTECTED] writes:
A possibly more convincing argument is that with that approach, the
connection is completely tied up --- you cannot issue additional
database commands based on what you just read, nor pull rows from
multiple portals in an interleaved fashion.
Gregory Stark [EMAIL PROTECTED] writes:
Stephen R. van den Berg [EMAIL PROTECTED] writes:
In practice, most applications that need that, open multiple
connections to the same database (I'd think).
Er? There's nothing particularly unusual about application logic like:
$sth-execute('huge
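The pattern Gregory Stark alludes to can be sketched abstractly (a toy model, not real libpq or DBI code): if the protocol ties a connection to one in-progress streaming result, issuing a per-row lookup requires either a second connection or a server-side portal/cursor, which is what the FETCH machinery provides.

```python
# Toy model (no real database): one streaming result ties up the
# connection, so per-row inner queries must go to a second connection.
class Connection:
    def __init__(self):
        self.busy = False

    def stream(self, rows):
        # Connection is tied up until the result is fully consumed.
        self.busy = True
        yield from rows
        self.busy = False

    def query(self, sql):
        if self.busy:
            raise RuntimeError("connection busy streaming a result")
        return f"result({sql})"

outer, inner = Connection(), Connection()
# A lookup per row of the outer result, using the second connection.
lookups = [inner.query(f"lookup id={row}")
           for row in outer.stream([1, 2, 3])]
```

Trying `outer.query(...)` while the stream is still open raises, which is the tied-up-connection behaviour the thread is debating.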
set pgsql-hackers digest
At 2008-07-10 07:18:28 -0700, [EMAIL PROTECTED] wrote:
Here are my objectives:
1. Make a repository that keeps up with CVS HEAD.
2. Allow people who are not currently committers on CVS HEAD to
make needed changes.
OK. Then, to begin with, I think it is very important to make the
Abhijit Menon-Sen [EMAIL PROTECTED] writes:
Interleaved retrieval using multiple portals is not what most
libraries support, I'd guess.
My code did support that mode of operation in theory, but in practice
in the few situations where I have needed to use something like it, I
found it more
Tom Lane wrote:
Josh Berkus [EMAIL PROTECTED] writes:
Well, one thing I think we want to do by having non-committer reviewers, is
to not involve a committer at all if the patch is going to be sent back.
So one thing I was thinking of is:
1) change status to ready for committer
2) post
On Jul 10, 2008, at 1:20 PM, Fabrízio de Royes Mello wrote:
set pgsql-hackers digest
Postgresql hackers have been successfully digested. *burp*
-M
On Jul 10, 5:05 pm, [EMAIL PROTECTED] (Mark Mielke) wrote:
Mark Mielke wrote:
I didn't notice that he put 16. Now I'm looking at uuid.c in
PostgreSQL 8.3.3 and I see that it does use 16, and the struct
pg_uuid_t is length 16. I find myself confused now - why does
PostgreSQL define
Josh Berkus [EMAIL PROTECTED] writes:
Tom Lane wrote:
Josh Berkus [EMAIL PROTECTED] writes:
1) change status to ready for committer
2) post message to -hackers detailing the review and calling for a
committer to check the patch
3) a committer picks it up
Well, the key point there is just
I have a patch that I will be submitting to add to the build system the
capability of reporting on test code coverage metrics for the test
suite. Actually it can show coverage for any application run against
PostgreSQL. Download Image:Coverage.tar.gz
ITAGAKI Takahiro [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] wrote:
I don't want the tag there at all, much less converted to a pointer.
What would the semantics be of copying the node, and why?
Please justify why you must have this and can't do what you want some
other way.
In
Tom Lane wrote:
Jan Urbański [EMAIL PROTECTED] writes:
Do you think it's worthwhile to implement the LC algorithm in C and send
it out, so others could try it out? Heck, maybe it's worthwhile to
replace the current compute_minimal_stats() algorithm with LC and see
how that
Michelle Caisse [EMAIL PROTECTED] writes:
I have a patch that I will be submitting to add to the build system the
capability of reporting on test code coverage metrics for the test
suite.
Cool.
To generate coverage statistics, you run configure with
--enable-coverage and after building
Jan Urbański wrote:
Oh, one important thing. You need to choose a bucket width for the LC
algorithm, that is, decide after how many elements you will prune your
data structure. I chose to prune after every twenty tsvectors.
Do you prune after X tsvectors regardless of the numbers of
Alvaro Herrera wrote:
Jan Urbański wrote:
Oh, one important thing. You need to choose a bucket width for the LC
algorithm, that is decide after how many elements will you prune your
data structure. I chose to prune after every twenty tsvectors.
Do you prune after X tsvectors regardless of
Alvaro Herrera [EMAIL PROTECTED] writes:
Jan Urbański wrote:
Oh, one important thing. You need to choose a bucket width for the LC
algorithm, that is decide after how many elements will you prune your
data structure. I chose to prune after every twenty tsvectors.
Do you prune after X
Jan Urbański [EMAIL PROTECTED] writes:
Still, there's a decision to be made: after how many lexemes should the
pruning occur?
The way I think it ought to work is that the number of lexemes stored in
the final pg_statistic entry is statistics_target times a constant
(perhaps
Hi,
after long discussion with Mr. Kotala, we've decided to redesign our
collation support proposal.
For those of you who aren't familiar with my WIP patch and comments from
other hackers here's the original mail:
http://archives.postgresql.org/pgsql-hackers/2008-07/msg00019.php
In a few
It should be possible to make it work for a VPATH build with
appropriate arguments to gcov and lcov, but currently it expects the
object files and generated data files to be in the build directory.
You need access to the build tree to generate coverage statistics and to
generate the report
Tom Lane wrote:
The way I think it ought to work is that the number of lexemes stored in
the final pg_statistic entry is statistics_target times a constant
(perhaps 100). I don't like having it vary depending on tsvector width
I think the existing code puts at most statistics_target elements
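The pruning cadence being debated is the bucket width of the Lossy Counting algorithm (Manku & Motwani). A rough generic sketch (not the actual patch under discussion) shows the role that width plays: every `bucket_width` items, entries whose count plus maximum-undercount bound cannot make them frequent are dropped.

```python
# Generic sketch of Lossy Counting. "bucket_width" is the pruning
# cadence discussed above; counts maps item -> [frequency, delta],
# where delta bounds how much the frequency may be undercounted.
def lossy_count(stream, bucket_width):
    counts = {}
    current_bucket = 1
    for n, item in enumerate(stream, start=1):
        if item in counts:
            counts[item][0] += 1
        else:
            counts[item] = [1, current_bucket - 1]
        if n % bucket_width == 0:
            # Prune entries that cannot be frequent even with the
            # maximum possible undercount added back.
            counts = {k: v for k, v in counts.items()
                      if v[0] + v[1] > current_bucket}
            current_bucket += 1
    return {k: v[0] for k, v in counts.items()}
```

A smaller bucket width prunes more aggressively (rare items vanish sooner, memory stays lower); a larger one keeps more candidates around, which is exactly the trade-off behind "after how many lexemes should the pruning occur".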
Abhijit Menon-Sen [EMAIL PROTECTED] writes:
At 2008-07-03 16:36:02 +0200, [EMAIL PROTECTED] wrote:
Here's a patch for this.
I reviewed the patch, it basically looks fine. A few quibbles with the
provided documentation:
Applied, with ams' doc changes and some further wordsmithing.
Jan Urbański [EMAIL PROTECTED] writes:
Tom Lane wrote:
The way I think it ought to work is that the number of lexemes stored in
the final pg_statistic entry is statistics_target times a constant
(perhaps 100). I don't like having it vary depending on tsvector width
I
On Wed, 9 Jul 2008, Jan Urbański wrote:
Jan Urbański wrote:
Do you think it's worthwhile to implement the LC algorithm in C and send it
out, so others could try it out? Heck, maybe it's worthwhile to replace the
current compute_minimal_stats() algorithm with LC and see how that compares?
I
I was trying to set up warm standby for an 8.1.11 instance, and was using
pg_standby's -l option so that it creates links and does not actually copy
files. After struggling for a few hours, I found two problems; one big, one
small.
The smaller issue is that even if we do not end the