On 2/4/2016 12:28, Tom Lane wrote:
> Karl Denninger writes:
>> $ initdb -D data-default
>> ...
>> creating template1 database in data-default/base/1 ... FATAL: could not
>> create semaphores: Invalid argument
>> DETAIL: Failed system call was semget(2, 17
removing contents of data directory "data-default"
$
$ sysctl -a|grep semm
kern.ipc.semmsl: 512
kern.ipc.semmnu: 256
kern.ipc.semmns: 512
kern.ipc.semmni: 256
The system is running 9.4 just fine and the kernel configuration
requirements shouldn't have changed for semaphores should
Either use md5 for the password or use a
certificate.
You can reload the file without restarting postgres with "pg_ctl -D
data-directory reload"
(where "data-directory" is wherever the data directory that has the
pg_hba.conf file -- and the rest of the base of the data store -- is)
--
Karl Denninger
k...@denninger.net
/Cuda Systems LLC/
considerably faster than a dump/restore and is "in-place."
I use it regularly.
--
Karl Denninger
k...@denninger.net
/Cuda Systems LLC/
They/we are not THAT hard to come by.
It's the common lament that customers have in a nice whorehouse. The
price is too high.
(You can easily pay me to quit doing what I'm doing now and do something
else; the problem only rests in one place when it comes to enticing me
to do so -- money. :-))
--
Karl Denninger
k...@denninger.net
/Cuda Systems LLC/
On 5/9/2013 12:08 PM, Nelson Green wrote:
> Thanks Karl, but I'm trying to do this from a psql shell. I can't use
> the C functions there, can I?
>
>
> On Thu, May 9, 2013 at 11:21 AM, Karl Denninger <k...@denninger.net> wrote:
>
> On 5/9/2013 1
On 5/9/2013 11:34 AM, Alvaro Herrera wrote:
> Karl Denninger wrote:
>
>>> To encode:
>>>
>>>
>>> write_conn = Postgresql communication channel in your software that is
>>> open to write to the table
>>>
>>> char*out;
On 5/9/2013 11:12 AM, Karl Denninger wrote:
> On 5/9/2013 10:51 AM, Achilleas Mantzios wrote:
>>
>> Take a look here first :
>>
>> http://www.postgresql.org/docs/9.2/interactive/datatype-binary.html
>>
>>
>>
>> then here :
>> http://w
"PQfreemem(out)" to
release the memory that was allocated.
To recover the data you do:
PQresult *result;
size_t out_length;
result = PQexec(write_conn, "select badge_photo blah-blah-blah");
out = PQunescapeBytea((const unsigned char *) PQgetvalue(result, 0, 0),
&out_length); /* Get the returned piece of the tuple and convert it */
"out" now contains the BINARY (decoded) photo data and "out_length" its
size in bytes. When done with it you call PQfreemem(out) to release the
memory that was allocated.
That's the rough outline -- see here:
http://www.postgresql.org/docs/current/static/libpq-exec.html
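As a rough illustration (not part of the original thread), the decoding that PQunescapeBytea performs on the classic backslash-escape bytea text format can be sketched in a few lines. The helper name is invented, and PostgreSQL 9.0+ defaults to the hex output format ("\x..."), which this sketch does not handle:

```python
# Illustrative sketch only: decode the classic "escape" bytea text format
# (backslash escapes), i.e. roughly what PQunescapeBytea does for that format.
# The hex format ("\x...", the default since 9.0) is NOT handled here.

def unescape_bytea(text: str) -> bytes:
    out = bytearray()
    i = 0
    while i < len(text):
        if text[i] == "\\":
            if text[i + 1] == "\\":              # "\\" -> one literal backslash
                out.append(0x5C)
                i += 2
            else:                                # "\nnn" -> octal byte value
                out.append(int(text[i + 1:i + 4], 8))
                i += 4
        else:                                    # printable bytes pass through
            out.append(ord(text[i]))
            i += 1
    return bytes(out)

print(unescape_bytea("abc\\000\\377"))  # b'abc\x00\xff'
```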
--
Karl Denninger
k...@denninger.net
/Cuda Systems LLC/
ill-advised. In the event of a disruption
between the two systems you're virtually guaranteed to suffer data
corruption which is (much worse) rather likely to go undetected.
--
-- Karl Denninger
/The Market Ticker ®/ <http://market-ticker.org>
Cuda Systems LLC
simply initdb-ing the second
instance with a different data directory structure, and when starting it
do so with a different data directory structure?
e.g. "initdb -D data"
and
"initdb -D data2"
And that as long as there are no collisions (E.g. port numbers) this
works fine?
--
et between the two is less than some value at which you alarm, is
sufficient; and if you also alarm when you can't reach the master and
slave hosts, you then know whether the machines are "up" from a
standpoint of reachability on the network as well.
--
-- Karl Denninger
/The Market Ticker ®/ <http://market-ticker.org>
Cuda Systems LLC
.
There's no status update on the pgfoundry page indicating activity or
testing with the current releases.
Thanks in advance.
--
-- Karl Denninger
/The Market Ticker ®/ <http://market-ticker.org>
Cuda Systems LLC
this is run from the cron it will remain silent unless the offset is
breached at which point it will emit an email to the submitting owner of
the job.
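That silent-unless-breached cron behavior can be sketched as follows (all names and the threshold value are invented for illustration, not taken from the post):

```python
# Hypothetical sketch of a cron job that stays silent unless the
# replication offset is breached. cron mails anything written to stdout
# to the job's owner, so printing only on a breach gives exactly the
# behavior described above.

THRESHOLD_BYTES = 16 * 1024 * 1024  # alarm past 16 MB of lag (assumed value)

def check_lag(master_pos, slave_pos, threshold=THRESHOLD_BYTES):
    """Return an alert string if the offset exceeds the threshold, else None."""
    lag = master_pos - slave_pos
    if lag > threshold:
        return "replication lag %d bytes exceeds threshold %d" % (lag, threshold)
    return None

if __name__ == "__main__":
    msg = check_lag(master_pos=200 * 2**20, slave_pos=100 * 2**20)
    if msg is not None:
        print(msg)  # emitted -> cron emails the job owner; otherwise silent
```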
--
-- Karl Denninger
/The Market Ticker ®/ <http://market-ticker.org>
Cuda Systems LLC
AND ONLY IF the log data and database table spaces are all on the same
snapshotted volume.
IF THEY ARE NOT then it will probably work 95% of the time, and the
other 5% it will be unrecoverable. Be very, very careful -- the
snapshot must in fact snapshot ALL of the involved database volumes (l
On 5/28/2012 11:44 AM, Tom Lane wrote:
> Karl Denninger writes:
>> I am attempting to validate the path forward to 9.2, and thus tried the
>> following:
>> 1. Build 9.2Beta1; all fine.
>> 2. Run a pg_basebackup from the current master machine (running 9.1) to
>
On 5/27/2012 11:08 PM, Jan Nielsen wrote:
> Hi Karl,
>
> On Sun, May 27, 2012 at 9:18 PM, Karl Denninger <k...@denninger.net> wrote:
>
> Here's what I'm trying to do in testing 9.2Beta1.
>
> The current configuration is a master and a hot
available and online during the testing.
Do I need to run a complete parallel environment instead of trying to
attach a 9.2Beta1 slave to an existing 9.1 master? (and if so, why
doesn't the code complain about the mismatch instead of the bogus WAL
message?)
--
-- Karl Denninger
/The Market Ticker ®/ <http://market-ticker.org>
Cuda Systems LLC
On 10/5/2010 2:12 PM, Chris Barnes wrote:
> I would like to know if there is a way to configure 9 to do this.
>
> I have 4 unique databases running on 4 servers.
> I would like to have them replicate to a remote site for disaster
> recovery.
>
> I would like to consolidate these 4 databases into one
On 10/3/2010 1:34 AM, Guillaume Lelarge wrote:
> On 03/10/2010 07:07, Karl Denninger wrote:
>> On 10/2/2010 11:40 PM, Rajesh Kumar Mallah wrote:
>>> I hope u checked point #11
>>> http://wiki.postgresql.org/wiki/Streaming_Replication#How_to_Use
>>>
On 10/2/2010 11:40 PM, Rajesh Kumar Mallah wrote:
>
> I hope u checked point #11
> http://wiki.postgresql.org/wiki/Streaming_Replication#How_to_Use
>
> * *11.* You can calculate the replication lag by comparing the
> current WAL write location on the primary with the last WAL
> lo
I'm trying to come up with an automated monitoring system to watch the
WAL log progress and sound appropriate alarms if it gets too far behind
for some reason (e.g. communications problems, etc.) - so far without
success.
What I need is some sort of way to compute a difference between the
master
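One way to compute such a difference (a sketch, not from the thread; helper names invented): parse the "hi/lo" location strings returned by pg_current_xlog_location() on the master and pg_last_xlog_receive_location() on the standby. Note that releases before 9.3 skip the last 16 MB segment of each xlogid, so the per-xlogid multiplier is 0xFF000000 there rather than 2**32:

```python
# Sketch: convert 'hi/lo' WAL location strings to byte positions and
# subtract. Pre-9.3 servers (such as the 9.0 discussed here) skip the
# final 16 MB segment per xlogid, hence the 0xFF000000 multiplier.

XLOGID_BYTES = 0xFF000000  # use 2**32 on PostgreSQL 9.3 and later

def xlog_to_bytes(loc: str) -> int:
    """Convert a WAL location like '2/A0000000' into an absolute byte position."""
    hi, lo = loc.split("/")
    return int(hi, 16) * XLOGID_BYTES + int(lo, 16)

def wal_lag_bytes(master_loc: str, standby_loc: str) -> int:
    """How many bytes of WAL the standby still has to receive."""
    return xlog_to_bytes(master_loc) - xlog_to_bytes(standby_loc)

print(wal_lag_bytes("2/A0000000", "2/9FFF0000"))  # 65536
```

Feeding the result into a threshold check is then straightforward; the alarm fires on the computed byte count rather than on the raw strings.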
On 9/29/2010 8:55 PM, Jeff Davis wrote:
> On Wed, 2010-09-29 at 20:04 -0500, Karl Denninger wrote:
>> Sep 29 19:58:54 dbms2 postgres[8564]: [2-2] STATEMENT: update post set
>> views = (select views from post where number='116763' and toppost='1') +
I am playing with the replication on 9.0 and running into the following.
I have a primary that is running at a colo, and is replicated down to a
secondary here using SLONY. This is working normally.
I decided to set up a replication of the SLONY secondary onto my
"sandbox" machine to see what I
If you use Slony, expect it to lose the replication status.
I attempted the following:
1. Master and slaves on 8.4.
2. Upgrade one slave to 9.0. Shut it down, used pg_upgrade to perform
the upgrade.
3. Restarted the slave.
Slony appeared to come up, but said it was syncing only TWO tables (
On 9/21/2010 10:16 PM, Bruce Momjian wrote:
> Karl Denninger wrote:
>> Uh, is there a way around this problem?
>>
>> $ bin/pg_upgrade -c -d /usr/local/pgsql-8.4/data -D data -b
>> /usr/local/pgsql-8.4/bin -B bin
>> Performing Consistency Checks
>> --
Uh, is there a way around this problem?
$ bin/pg_upgrade -c -d /usr/local/pgsql-8.4/data -D data -b
/usr/local/pgsql-8.4/bin -B bin
Performing Consistency Checks
-
Checking old data directory (/usr/local/pgsql-8.4/data) ok
Checking old bin directory (/usr/local/pgs
Peter C. Lai wrote:
> The double quotes aren't UTF-8; they come from people copying and pasting
> from Microsoft stuff, which is WIN-1252. So try that with iconv instead of utf8.
>
> On 2010-08-16 12:40:03PM -0500, Karl Denninger wrote:
>
>> So I have myself a nice pi
So I have myself a nice pickle here.
I've got a database which was originally created with SQL_ASCII for the
encoding (anything goes text fields)
Unfortunately, I have a bunch of data that was encoded in UTF-8 that's
in an RSS feed that I need to load into said database. iconv barfs all
over
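In line with the WIN-1252 suggestion above, one possible fallback (a sketch; the helper name is assumed, not from the thread) is to attempt a strict UTF-8 decode first and drop back to cp1252 only when that fails:

```python
# Sketch: handle text that is mostly UTF-8 but contains stray WIN-1252
# bytes such as Microsoft "smart quotes". Strict UTF-8 first; any byte
# sequence UTF-8 rejects is re-decoded as cp1252.

def to_utf8(raw: bytes) -> str:
    try:
        return raw.decode("utf-8")          # clean UTF-8 passes through
    except UnicodeDecodeError:
        return raw.decode("cp1252")         # e.g. 0x93/0x94 smart quotes

print(to_utf8(b"\xe2\x80\x9chi\xe2\x80\x9d"))  # valid UTF-8 input
print(to_utf8(b"\x93hi\x94"))                  # WIN-1252 smart quotes
```

A per-line or per-field fallback like this avoids iconv's all-or-nothing failure on a mixed-encoding dump, at the cost of guessing on ambiguous bytes.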
Bruce Momjian wrote:
> Craig Ringer wrote:
>
>> On 13/08/2010 9:31 PM, Bruce Momjian wrote:
>>
>>> Karl Denninger wrote:
>>>
>>>> I may be blind - I don't see a way to enable this. OpenSSL "kinda"
>>>> su
I may be blind - I don't see a way to enable this. OpenSSL "kinda"
supports this - does Postgres' SSL connectivity allow it to be
supported/enabled?
- Karl
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mai
Tom Duffey wrote:
> Hi Everyone,
>
> I have a table with several hundred million rows of timestamped
> values. Using pg_dump we are able to dump the entire table to disk no
> problem. However, I would like to retrieve a large subset of data
> from this table using something like:
>
> COPY (SELECT
This may better-belong in pgsql-sql but since it deals with a function
as opposed to raw SQL syntax I am sticking it here
Consider the following DBMS schema slice
Table "public.post"
Column | Type |
Modifi
> --
> Koichi Suzuki
>
>
>
> 2010/4/19 Karl Denninger :
>
>> Has there been an update on this situation?
>>
>> Koichi Suzuki wrote:
>>
>> I understand the situation. I'll upload the improved code ASAP.
>>
>> --
Has there been an update on this situation?
Koichi Suzuki wrote:
> I understand the situation. I'll upload the improved code ASAP.
>
> --
> Koichi Suzuki
>
>
>
> 2010/2/11 Karl Denninger :
>
>> Will this come through as a commit on the pgfoundry
Anyone who gets caught "by surprise" on this
could quite possibly lose all their data!
I (fortunately) caught it during TESTING of my archives - before I
needed them.
-- Karl Denninger
> I'm fixing the bug and will
> upload the fix shortly.
>
> Sorry for inconvenience.
>
> --
> Koichi Suzuki
>
> 2010/2/8 Karl Denninger :
>
>> This may belong in a bug report, but I'll post it here first...
>>
>> There appears to be
01017100A7.bz2
And off the BACKUP archive, which is what I'm trying to restore:
# cksum *171*A[67]*
172998591 830621 0001017100A6.bz2
1283345296 1541006 0001017100A7.bz2
Identical, says the checksums.
This is VERY BAD - if pg_compresslog is damaging the files in som
Is there a way through the libpq interface to access performance data on
a query?
I don't see an obvious way to do it - that is, retrieve the amount of
time (clock, cpu, etc) required to process a command or query, etc
Thanks in advance!
--
--
Karl Denninger
k...@denninge
Douglas McNaught wrote:
On Sat, Jul 19, 2008 at 9:02 PM, Karl Denninger <[EMAIL PROTECTED]> wrote:
childrensjustice=# create table petition_bail like petition_white;
ERROR: syntax error at or near "like"
LINE 1: create table petition_bail like petition_white;
It
childrensjustice=# create table petition_bail like petition_white;
ERROR: syntax error at or near "like"
LINE 1: create table petition_bail like petition_white;
Huh?
Yes, the source table exists and obviously as postgres superuser
("pgsql") I have select permission on t
rably simpler than a database that
sees very frequent inserts and/or updates.
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
l box with this setup and
it is very fast. Really quite amazing when you get right down to it.
The latest release of the PostgreSQL code markedly improved query
optimization, by the way. The performance improvement when I migrated
over was quite stunning.
Karl Denninger ([EMAIL PROTECTED])
I can reproduce this as I have the dump from "before conversion" and can
load it on a different box and "make it happen" a second time.
Would you like it on the list or privately?
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
Richard Huxton wrote:
Richard Huxton wrote:
Karl Denninger wrote:
The problem is that I was holding the ts_vector in a column in the
table with a GIST index on that column. This fails horribly under
8.3; it appears to be ok on the reload but as there is a trigger on
updates any update or insert fails
Tom Lane wrote:
Karl Denninger <[EMAIL PROTECTED]> writes:
It looks like the problem had to do with the tsearch2 module that I have
in use in a number of my databases, and which had propagated into
template1, which meant that new creates had it in there.
The old tsearch2 module
Joshua D. Drake wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On Sun, 02 Mar 2008 15:46:25 -0600
Karl Denninger <[EMAIL PROTECTED]> wrote:
I'm not quite clear what I have to do in terms of if/when I can drop
the old tsearch config stuff and for obvious reasons (like not
r
Scott Marlowe wrote:
On Sun, Mar 2, 2008 at 1:41 PM, Karl Denninger <[EMAIL PROTECTED]> wrote:
Ugh.
I am attempting to move from 8.2.6 to 8.3, and have run into a major
problem.
The build goes fine, the install goes fine, the pg_dumpall goes fine.
However, the reload does not.
(It may be
that the change in configure requires a gmake clean; not sure)
In any event I have another machine and will get something more detailed
ASAP - I will also try the "restore" program and see if that works.
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
Scott Marl
versions at once.
both ARE loaded on the system; is there a way to do that?
Thanks in advance....
--
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
---(end of broadcast)---
TIP 4: Have you searched our list archives?
Scott Marlowe wrote:
On 8/28/07, Karl Denninger <[EMAIL PROTECTED]> wrote:
Am I correct in that this number will GROW over time? Or is what I see
right now (with everything running ok) all that the system
will ever need?
They will grow at first to accommodate your typical l
Steve Crawford wrote:
Karl Denninger wrote:
Are your FSM settings enough to keep track of the dead space you have?
I don't know. How do I check?
vacuum verbose;
Toward the bottom you will see something like:
...
1200 page slots are required to track all free space.
Cu
Tom Lane wrote:
Karl Denninger <[EMAIL PROTECTED]> writes:
But... .shouldn't autovacuum prevent this? Is there some way to look in
a log somewhere and see if and when the autovacuum is being run - and on
what?
There's no log messages (at the default log verbosity a
I don't know. How do I check?
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
Alvaro Herrera wrote:
Karl Denninger wrote:
A manual "Vacuum full analyze" fixes it immediately.
But... .shouldn't autovacuum prevent this? Is there some way to look in a
l
somewhere and see if and when the autovacuum is being run - and on
what?
--
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
%SPAMBLOCK-SYS: Matched [EMAIL PROTECTED], message ok
use "AND NOT" as a conditional on the second clause to the
OR and that didn't work; it excluded all of the NULL records)
--
Karl Denninger ([EMAIL PROTECTED])
http://www.denninger.net
On Thu, Feb 03, 2005 at 10:20:47PM -0500, Tom Lane wrote:
> Karl Denninger <[EMAIL PROTECTED]> writes:
> > I agree with this - what would be even better would be a way to create
> > 'subclasses' for things like this, which could then be 'included' ea
On Thu, Feb 03, 2005 at 06:59:55PM -0700, Michael Fuhr wrote:
> On Thu, Feb 03, 2005 at 06:44:55PM -0600, Karl Denninger wrote:
> >
> > As it happens, there's an "untsearch2.sql" script in the contrib directory.
>
> That reminds me: it would be useful
rebuild them.
I had to reinsert the columns and indices, but that's not a big deal.
All fixed... thanks to the pointer to the OID issue, that got me on the
right track.
--
--
Karl Denninger ([EMAIL PROTECTED]) Internet Consultant & Kids Rights Activist
http://www.denninger.net  M
the data, I should then be able to do so
without the tsearch2.sql stuff. I can then reload the tsearch2.sql
functions and re-create the indices.
Sound plausible?
-
--
Karl Denninger ([EMAIL PROTECTED]) Internet Consultant & Kids Rights Activist
http://www.denninger.net  My home on the n
with the same
error; it looks like something is badly mangled internally in the tsearch2
module... even though it DOES appear that it loaded properly.
--
--
Karl Denninger ([EMAIL PROTECTED]) Internet Consultant & Kids Rights Activist
http://www.denninger.net  My home on the net - links to eve
ts_name = cfg.ts_name and cfg.oid= $2 order by
lt.tokid desc;"
Ai!
A reindex did nothing.
What did I miss? Looks like there's something missing, but what?!
--
--
Karl Denninger ([EMAIL PROTECTED]) Internet Consultant & Kids Rights Activist
http://www.denninger.net  My
On Thu, Feb 03, 2005 at 01:03:57PM -0600, Karl Denninger wrote:
> Hi folks;
>
> Trying to move from 7.4.1 to 8.0.1
>
> All goes well until I try to reload after installation.
>
> Dump was done with the 8.0.1 pg_dumpall program
>
> On restore, I get thousands of er
n-standard" things about my 7.4.1 DBMS is that I do have
significant amounts of binary data stored in the dbms itself, and in
addition I have "tsearch" loaded.
Any ideas?
--
--
Karl Denninger ([EMAIL PROTECTED]) Internet Consultant & Kids Rights Activist
http://www.denninger.