es
10/05/1582 - 10/14/1582 as missing dates. Dates entered in that
range via to_date will be stored as 10/15/1582.
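If you want to check this on your own installation, something like the
following shows how to_date handles a date inside that gap (note that
current PostgreSQL releases use the proleptic Gregorian calendar, so a
newer server may simply store 10/10/1582 as entered):

    select to_date('10/10/1582', 'MM/DD/YYYY');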
--
Mike Nolan
3-4 seconds are now
taking 10-12 seconds. (Have others noticed that 7 seconds seems to be
a threshold point for users reacting to query times?)
MySQL also does case-independent text comparisons, and apparently ONLY
case-insensitive comparisons.
--
Mike Nolan
Personally, if I want case insensitivity, I'll WRITE IT INTO THE CODE,
but I can see how some people might think that 'NOLAN', 'Nolan' and
'nolan' should be considered as the same data.
BTW, I just tested it and MySQL does case folding on values in unique
indexes, too. (Well
play with this before the
CIO figures out I'm done. :-)
I'm hoping this turns out to be a tuning issue, as I'm still very much
of a rookie at tuning PostgreSQL.
I'll see if I can work something up. Should this go to the general
list or somewhere else?
--
Mike Nolan
> > Personally, if I want case insensitivity, I'll WRITE IT INTO THE CODE,
> > but I can see how some people might think that 'NOLAN', 'Nolan' and
> > 'nolan' should be considered as the same data.
>
> Oh, you mean like "SELECT
> We require ~* syntax for that, or upper()/lower().
Slowly the light dawns!
If I anchor a ~ search on both ends, it is the same search as =.
Duh!
I converted the prototype over to use ~ and it is running much faster.
I'll try to do some detailed timings against MySQL tonight.
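For the record, these are equivalent ways to do a case-insensitive exact
match in PostgreSQL (table and column names here are made up):

    select * from players where upper(last_name) = 'NOLAN';
    select * from players where lower(last_name) = 'nolan';
    select * from players where last_name ~* '^nolan$';

The upper()/lower() forms can use an index on the corresponding expression;
the anchored ~* form generally cannot use a plain btree index.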
--
Mike Nolan
three book projects (two textbooks and
one rule book); in all three cases the authors did their own index. Maybe
I've just had a good run of luck on the O'Reilly books I've bought, or maybe
I haven't bought as many of them in the last three or four years as I used
to.
--
Mike Nolan
as well? Are there any conditions under which this would cause problems?
--
Mike Nolan
e
stretch by then. :-)
--
Mike Nolan
> In what ways is the application you are writing going to be different from
> phpPgAdmin?
I see it as more of a web-based application forms development engine
than as a DBA tool.
--
Mike Nolan
database,
which is on yet another system.)
--
Mike Nolan
Perhaps I'm missing something, but it appears that there is no way to
force the 'tuples-only' toggle in psql to either state. I think the same
may be true of other toggles.
This creates problems when running scripts via the \i command.
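Later psql releases let you set the toggle explicitly instead of flipping
it, which makes scripts predictable ('some_script.sql' is just a placeholder):

    \pset tuples_only on
    \i some_script.sql
    \pset tuples_only off

(\t on and \t off work as shorthand as well.)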
--
Mike Nolan
antage of the index over
time and a wide range of queries, so regularly scheduled 'vacuum analyze's
are desirable.
What I don't know is whether there is a way to pass along hints to the
optimizer or to write a query to FORCE it to use an index on some part
of a query despite what the planner would otherwise choose.
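There are no optimizer hints in core PostgreSQL, but you can discourage
particular plan types for a session; for example (the query and names are
only placeholders):

    set enable_seqscan = off;
    explain analyze select * from some_table where some_col = 42;
    reset enable_seqscan;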
ff, in which case I may propose several changes
in the definitions and options for both the echo and output options
in psql.
--
Mike Nolan
.
If the team would accept the rest as a bugfix, I can send it. Otherwise,
I'll wait for 7.5 and see if I can change the whole output section to
eliminate the problem completely.
--
Mike Nolan
x 10 databases is getting
up there for tape backup both in terms of capacity and in the time it
would take to do the backup.
--
Mike Nolan
g an option to output in XML? (This may raise a number of XML
design questions, though, and I'm just starting to read 'XML in a Nutshell'.)
--
Mike Nolan
e ...
I am more inclined to go with the rewrite of the output sections, which I
hope to jump into in mid-August.
--
Mike Nolan
st record inserted was.
--
Mike Nolan
be higher only if I also have my own cache of sequence
values, assuming that the sequence never wraps.)
--
Mike Nolan
> MySQL can tell you from its index because it doesn't care if it gives you the
> right number or not.
Under what circumstances would MySQL give the wrong number?
--
Mike Nolan
to_char(current_timestamp,'mm/dd/yy')
gives you control over the formatting.
--
Mike Nolan
One of my sons was hired by Google last year after spending the past
several years working on various open-source projects; it took 2 days of
back-and-forth with Google's legal department before he was satisfied with
the restrictions in their offer.
--
Mike Nolan
On Wed, Mar 11, 2015 at 4:
How consistent are the tables in pg_dumpall
files?
--
Mike Nolan
On Wed, May 20, 2015 at 12:40 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:
> Yes. The entire dump is performed within a single transaction.
>
> On Wed, May 20, 2015 at 9:24 AM, Michael Nolan wrote:
>
>> The documentation for pg_dump says that dum
y.
>
> Cheers,
> Casey
>
Probably too late for this time, but in the past when I've needed to
redefine the type for a column, I've made a dump, edited the dump file to
change the type and then renamed the table and reloaded it. That's usually
several orders of magnitude faster.
--
Mike Nolan
no...@tssi.com
that and your original query as a subquery
and do your averages, since nulls are not included in either count(column)
or avg() aggregates:
select dow, count(*), avg(some_column) from (
select extract ('dow' from some_date) as dow, some_number from some_table
union select generate_series(0,6) as
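A complete version of that pattern would look something like this (table
and column names are placeholders; note that count(some_number) skips the
padded null rows, while count(*) would include them):

    select dow, count(some_number), avg(some_number)
      from (select extract(dow from some_date) as dow, some_number
              from some_table
            union all
            select generate_series(0,6) as dow, null::numeric as some_number) as t
     group by dow
     order by dow;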
On Mon, Jul 6, 2015 at 5:50 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Mon, Jul 6, 2015 at 6:16 PM, Michael Nolan wrote:
>
>> > But you can see it won't give correct results since (for example)
>> > Mondays
>> > with no new user
Sat 3.0000
--
Mike Nolan
t is
first-rate. (I've been working on a project that requires MySQL; their
documentation is far inferior.)
--
Mike Nolan
no...@tssi.com
ary
Clinton's deleted email, recovering that data might be more valuable to
some people than the data that was not deleted.
--
Mike Nolan
far easier to write a short PHP or
Perl program to do tasks like this. Much easier to debug, and the speed
improvement from using SQL is not important for 200,000 records.
--
Mike Nolan
e non-null?
--
Mike Nolan
no...@tssi.com
setting them to null.
--
Mike Nolan
make available to app developers.
My next task is to find out if validating and importing a JSON file into a
table is as easy as exporting a table in JSON turned out to be. Thanks for
the help.
--
Mike Nolan
o the table into which the
inserts are made, an insert could fail or result in incorrect data.
--
Mike Nolan
On Sun, Apr 10, 2016 at 2:30 AM, David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Sat, Apr 9, 2016 at 9:48 PM, Michael Nolan wrote:
>
>>
>> 2nd Followup: It turns out that loading a table from a JSON string is
>> more complicated than going from a t
|
(1 row)
The fact that the null values were stripped out is not an issue here.
But,
uscf=> insert into goldmast_test select * from
json_populate_record(NULL::"goldmast_test", (select * from gold1604_test
limit 2) )
uscf-> \g
ERROR: more than one row returned by a subquery used as an expression
Is there a way to get around the one row per subquery issue?
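One way around it is to call json_populate_record() once per source row by
putting the source table in the FROM clause (the json column name 'doc'
below is a guess -- substitute whatever column pgfutter actually created):

    insert into goldmast_test
    select p.*
      from gold1604_test g,
           lateral json_populate_record(null::goldmast_test, g.doc) as p;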
--
Mike Nolan
In case it wasn't clear, the sample data was 3 rows of data. (There are
actually around 890K rows in the table pgfutter built from the JSON file.)
-
Mike Nolan
ddress
from mailing lists.
Procedures to verify that an email address works and to administer its use
under rules like CAN-SPAM cannot exist solely within the database itself.
And as others have noted, what makes for a 'well-formed' email address has
always been a bit complicated.
--
Mike Nolan
bytes long.
Upgrading to the current version of the GNU tar utility program fixed the
problem.
--
Mike Nolan
e the files in BASE64 encoding to keep them from getting
corrupted.
--
Mike Nolan
51:14 GMT+01:00'::timestamp with time zone
timestamptz
2007-07-11 06:51:14-05
--
Mike Nolan
On 6/12/07, Adam Witney <[EMAIL PROTECTED]> wrote:
Hi,
I am upgrading from 7.4.12 to 8.2.4 and I have run into a difference in
date
style handling...
rom string/hex to
integer?
(If this is referenced in the documentation, I'm not finding it.)
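One commonly suggested trick is to pad the hex string to 8 digits, cast it
to bit(32), and then to integer; values with the high bit set come out
negative, since int4 is signed (the literal here is just an example):

    select ('x' || lpad('0a000001', 8, '0'))::bit(32)::int;   -- 167772161, i.e. 10.0.0.1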
--
Mike Nolan
On 7/16/07, Steve Atkins <[EMAIL PROTECTED]> wrote:
On Jul 16, 2007, at 11:36 AM, Michael Nolan wrote:
> I have data (from an external source) which is in text format as a
> hex number (it's actually an IP address, but that's probably not
> relevant.)
It likely is r
Is there any way to enable detailed logging (i.e., at the SQL statement level)
but only for one user?
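One way to do this (as a superuser) is to attach the setting to the role
itself, so it only affects that user's sessions ('some_user' is a placeholder):

    alter role some_user set log_statement = 'all';
    -- and later, to turn it back off:
    alter role some_user reset log_statement;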
--
Mike Nolan
;t load into a current
> modern postgres install, but I think there were several major versions
> gap in those cases.
>
I recently tried loading a pg_dump file (around 20 GB) from a large 8.2
database into 9.0 and it failed.
However, restoring the 22 GB pg_dumpall file worked.
--
Mike Nolan
no...@tssi.com
e which is better?
--
Mike Nolan
time-of-day access limitation parameters in the pg_hba.conf
file, are there any simple ways to do this?
--
Mike Nolan
Yeah, a cron job to swap pg_hba.conf files is the best solution I've come
up with so far. It's not one web app; it's closer to two dozen of them, on
multiple sites.
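The swap itself has to happen outside the database, but once cron has put
the other pg_hba.conf in place the server can pick it up without a restart:

    select pg_reload_conf();

(pg_ctl reload, or sending SIGHUP to the postmaster, does the same thing.)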
--
Mike Nolan
On Sat, Dec 13, 2014 at 11:10 PM, Adrian Klaver
wrote:
>
> On 12/13/2014 08:13 PM, Michael Nol
As far as I can tell, the other virtual servers weren't being slowed
down, so I don't suspect problems with the virtual server or the SAN.
If this happens again, what sorts of settings in postgresql.conf or
other tools should I be using to try to track down what's causing
this?
--
Mike
data so it'll be an even longer run than
this week's was.)
--
Mike Nolan
On Sat, Jan 10, 2015 at 12:55 PM, Andy Colson wrote:
> On 01/09/2015 07:52 PM, Tomas Vondra wrote:
>
>> On 9.1.2015 23:14, Michael Nolan wrote:
>>
>>> I'm running 9.3.5 on a virtual m
On Fri, Jan 9, 2015 at 7:52 PM, Tomas Vondra
wrote:
> On 9.1.2015 23:14, Michael Nolan wrote:
> > I'm running 9.3.5 on a virtual machine with 5 cores and 24 GB of
> > memory. Disk is on a SAN.
> >
> > I have a task that runs weekly that processes possibly as many
would not have been many
inserts or updates to the tables used by the lookup function since the
latest vacuum analyze. I think I may have even done a vacuum analyze on
the two largest tables after the first DB shutdown.
--
Mike Nolan
On Mon, Jan 12, 2015 at 7:46 PM, Bob Futrelle
wrote:
> You should be able to find a cloud provider that could give you many TB.
> Or so they like to claim.
>
>
> Nope, but you probably find one willing to SELL you access to many TB.
--
Mike Nolan
For what it's worth, this week's run covered even more months than
last week's did, and ran in about 5 1/2 hours, with no slowdowns,
under a similar system load. So, it could have been a one-time thing
or some combination of factors that will be difficult to reproduce.
--
Mike Nolan
Might not do what you want, but I just change the password.
--
Mike Nolan
On Fri, Feb 6, 2015 at 4:11 PM, Melvin Davidson
wrote:
> Possibly,
>
> To disble:
> ALTER USER name RENAME TO xname;
>
> To enable
> ALTER USER xname RENAME TO name;
>
> ???
>
>
> On
On 2/6/15, David G Johnston wrote:
> On Fri, Feb 6, 2015 at 2:22 PM, Michael Nolan [via PostgreSQL] <
> ml-node+s1045698n5836989...@n5.nabble.com> wrote:
>
>> Might not do what you want, but I just change the password.
>>
>>
> How do you do that and r
'2011-12-31'::TIMESTAMP
is the same as 2011-12-31 00:00:00.0
so records timestamped later in the day on the 31st would not get selected
SELECT ... WHERE
'2011-01-01'::TIMESTAMP <= col_of_type_timestamp
AND col_of_type_timestamp < '2012-01-01'::TIMESTAMP
It is probably not the most efficient, but I often use this syntax,
which reads better:
select ... where col_type_timestamp::date between '2011-01-01' and
'2011-12-31'
This can still use an index, provided the index is built on the cast
expression rather than on the raw timestamp column.
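For example, something along these lines (index and table names are
placeholders):

    create index some_table_ts_date_idx on some_table ((col_type_timestamp::date));

(The timestamp-to-date cast is immutable, so it can be indexed; for a
timestamptz column you would need an AT TIME ZONE expression instead.)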
--
Mike Nolan
specifically enabled in pg_hba.conf.
--
Mike Nolan
1
And then there's this:
create table wkdata
(numval numeric(5,2))
CREATE TABLE
Time: 6.761 ms
nolan=> insert into wkdata
nolan-> values (123.456789);
INSERT 569625265 1
Time: 4.063 ms
nolan=> select * from wkdata;
select * from wkdata;
numval
--------
123.46
So rounding a mon
stgreSQL databases for the last 10 years or so. I'd take
PostgreSQL over the other two in a heartbeat!
Data integrity/data preservation issues (backup is just one aspect of
that) are going to be your biggest problems with VERY large databases,
no matter how much money you throw at it.
--
a
directory files, but you probably won't need to restart the master to
change the master configuration files since you've already got
replication working to one server and you're apparently not planning
to have the second slave server poll the master for updates.
--
Mike Nolan
--
On 8/1/13, haman...@t-online.de wrote:
> Hi,
> I want to store copies of our data on a remote machine as a security
> measure.
> Wolfgang
2 questions:
1. How secure is the remote site?
2. How much data are we talking about?
--
Mike Nolan
tes when the row was inserted or
last updated may help.
A true incremental backup would IMHO be a very useful tool for
database administrators, but there are a number of technical
challenges involved, especially dealing with deleted records.
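The insert/update side is easy enough to cover with a trigger-maintained
timestamp column; a minimal sketch, with made-up names:

    alter table some_table add column last_updated timestamptz not null default now();

    create or replace function touch_last_updated() returns trigger as $$
    begin
        new.last_updated := now();   -- stamp the row on every update
        return new;
    end;
    $$ language plpgsql;

    create trigger some_table_touch
        before update on some_table
        for each row execute procedure touch_last_updated();

Deletes are the hard part, since the deleted row leaves nothing behind to
timestamp.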
--
Mike Nolan
0 rows, why?
>
>
> I'm using 9.2.4.
What is the content of the field 'birthday'? My guess is there's a
null value for the field, in which case you are comparing two nulls.
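You can see the rule directly in psql:

    select null = null;                     -- returns null, not true
    select null is not distinct from null;  -- returns true (the null-safe comparison)

So a row with a null birthday will never satisfy an ordinary = test.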
--
Mike Nolan
On 8/29/13, Michael Nolan wrote:
> On 8/29/13, Andreas Kretschmer wrote:
>
>> I'm using 9.2.4.
>
>
> What is the content of the field 'birthday'? My guess is there's a
> null value for the field, in which case you are comparing two nulls.
Oo
I also have some pre-defined percentage functions; they check the
denominator and return null if it is zero, to avoid 'divide by zero'
errors.
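Something along these lines (the rounding is just a matter of taste):

    create or replace function pct(numerator numeric, denominator numeric)
    returns numeric as $$
        -- nullif() turns a zero denominator into null, so the division
        -- returns null instead of raising a divide-by-zero error
        select round(100.0 * numerator / nullif(denominator, 0), 2);
    $$ language sql immutable;

    select pct(3, 0);   -- null
    select pct(3, 4);   -- 75.00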
--
Mike Nolan
On Sun, Apr 16, 2017 at 11:37 AM, Melvin Davidson
wrote:
>
>
> On Sun, Apr 16, 2017 at 12:23 PM, Adrian Klaver >
structure the same,
> but if this has corrupted the files I can drop, dump and restore, in which
> case how do I ‘drop’ the DB without postgres running?
>
> Ta,
>
> Martin.
>
Was the server you were backing up shut down or in backup mode when you did
the 'dd' copy?
--
Mike Nolan
?
--
Mike Nolan
On Sat, Sep 14, 2013 at 8:32 AM, Moshe Jacobson wrote:
> How do I migrate my 9.1 directory to a new file system with the least
> downtime possible?
>
> I don't know if this makes any difference, but my pg_xlog directory is on
> its own volume as well, so I would ha
Assuming the database hasn't changed much since the failover, doing an
rsync from the new primary back to the old primary should be fairly
quick.
--
Mike Nolan
On 9/19/13, Vick Khera wrote:
> On Thu, Sep 19, 2013 at 11:31 AM, ascot.m...@gmail.com
> > wrote:
>
>> I use
ssues that need to be
resolved before putting it back online, and fixing them could affect
how much work you have to do to get the physical files back in sync.
--
Mike Nolan
You could write a plperlu function that runs a shell script to back up
your database; you can even pass it parameters and put a call to that in a
trigger.
BUT, this could result in multiple backups running at the same time and
become a performance drag.
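A minimal sketch of the idea (untrusted language, so superuser only; the
function name, path and pg_dump options are made up):

    create or replace function backup_db(dbname text) returns integer as $$
        my ($dbname) = @_;
        # system() returns the raw wait status; shift right to get the exit code
        my $status = system("pg_dump -Fc -f /var/backups/$dbname.dump $dbname");
        return $status >> 8;
    $$ language plperlu;

Taking a pg_advisory_lock() first (or just checking for a running pg_dump)
would avoid the pile-up problem mentioned above.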
--
Mike Nolan
On Tue, Oct 22, 2013 at 9
the data is not proper.
Try using a cast to date instead:
select '33-oct-2013'::date throws an error.
--
Mike Nolan
Thomas, try this:
'2013-02-31'::date
--
Mike Nolan
On Mon, Jan 20, 2014 at 7:44 AM, Thomas Kellerer wrote:
> Hi,
>
> I asked this a while back already:
>
>select to_date('2013-02-31', 'yyyy-mm-dd');
>
> will not generate an error (unlike
I think that PHP has modules (e.g., PEAR) that can read MS Access database
files, and once you have it in an array you can create INSERT statements
for PostgreSQL, including cleaning up any data format issues (e.g., dates of
00-00-0000).
--
Mike Nolan
On Fri, Feb 28, 2014 at 6:21 PM, Rich Shepard
Set log_line_prefix in the postgresql.conf file and reload it.
I use:
log_line_prefix = '%m %u '
You might also want to use this, at least temporarily:
log_statement = all
--
Mike Nolan
thing_id
etc.
I find when building complex queries (I've written some that ran over
100 lines and involved a dozen or more joined tables), I need to build
them up, testing them as I build.
--
Mike Nolan
L files, each 16MB. That's about 1.2
gigabytes for a database that takes up about 58 GB.
--
Mike Nolan
fighting SELinux every inch of the way. Setting up PostgreSQL on that box
is coming up on the schedule, maybe forewarned is forearmed. :-)
--
Mike Nolan
generate_series(0,1095)
as s(a)
where to_char('2011-01-01'::date+s.a,'dd') between '01' and '07'
and to_char('2011-01-01'::date+s.a,'dy') = 'sat'
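The complete query looked something like this (the select list here is a
guess; the where clause is as posted):

    select '2011-01-01'::date + s.a as first_saturday
      from generate_series(0,1095) as s(a)
     where to_char('2011-01-01'::date + s.a, 'dd') between '01' and '07'
       and to_char('2011-01-01'::date + s.a, 'dy') = 'sat';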
--
Mike Nolan
testing it to see if performance is going to be a major concern.
--
Mike Nolan
no...@tssi.com
On Wed, Jun 22, 2011 at 3:48 PM, Tom Lane wrote:
> Michael Nolan writes:
> > Has anyone successfully used encfs with postgresql recently?
>
> > PANIC: could not open file "pg_xlog/00010009000D" (log file
> 9,
> > segment 13): Invalid argume
amused.
PostgreSQL reports this as an error, of course.
--
Mike Nolan
no...@tssi.com
always just built everything from the
source code.
--
Mike Nolan
no...@tssi.com
n a linux kernel.
sar doesn't tell you a lot about what postgres is up to.
--
Mike Nolan
no...@tssi.com
e
database, even if that is what schemas are for.
The ability to do cross-database (most likely cross-server as well) queries
would address a lot of real-world problems.
-
Mike Nolan
no...@tssi.com
2011/7/16 - -
>
> The weird thing is that before I updated my server the query was about 5
> times faster.
>
Updated it from what to what, and how?
--
Mike Nolan
no...@tssi.com
!
>
> AFAIK there's no way to find out which compiler was used to build
> PostgreSQL binaries
You can run strings on a binary file (e.g., postmaster) and search for GCC in
the output.
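Easier still, the output of version() normally names the compiler used to
build the server:

    select version();
    -- e.g. "PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 ..., 64-bit"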
--
Mike Nolan
--rename-table=xyz_copy
would restore a copy of table xyz into the existing (and presumably empty)
table xyz_copy, leaving table xyz untouched.
--
Mike Nolan
no...@tssi.com
On Tue, Jul 26, 2011 at 6:10 PM, Chris Travers wrote:
> On Tue, Jul 26, 2011 at 3:48 PM, Michael Nolan wrote:
> > I suggest adding the following parameter to pg_restore:
> >
> > --rename-table=
> >
> > When used in conjunction with the --data-only, --schema a
nteger, nor can you cast an integer to an xid.
The only way I can get this to work is:
select count(*) from tablename where not xmin = 2
That seems pretty obscure.
--
Mike Nolan
no...@tssi.com
On Thu, Jul 28, 2011 at 12:23 PM, Andy Colson wrote:
> On 7/28/2011 11:40 AM, Michael Nolan wrote:
>
>> Why does this query succeed:
>>
>> select count(*) from tablename where xmin = 2
>>
>> while this query fails:
>>
>> select count(*) from tab
On Thu, Jul 28, 2011 at 2:27 PM, Tom Lane wrote:
> Michael Nolan writes:
> > Why does this query succeed:
> > select count(*) from tablename where xmin = 2
>
> > while this query fails:
>
> > select count(*) from tablename where xmin != 2
>
> It told you w
On Thu, Jul 28, 2011 at 5:09 PM, Tom Lane wrote:
> Michael Nolan writes:
> > It seems like we're being inconsistent here in allowing 'where xid =
> > integer' but not allowing 'where xid != integer'.
>
> Well, if you look into pg_operator you'
On Thu, Jul 28, 2011 at 5:36 PM, Tom Lane wrote:
> Michael Nolan writes:
> > It also appears you cannot group on a column of type xid.
>
> You can in 8.4 and up. Previous versions only know how to GROUP BY
> sortable columns, which requires a btree opclass, which xid doesn
s being stored on a 500 GB external hard drive
connected via USB2 to an HP laptop running Linux Fedora 15.
I found that the encrypted database ran 15-20% slower on PostgreSQL 9.0.4 on
most queries.
--
Mike Nolan
-- Forwarded message --
From: Michael Nolan
Date: Tue, May 29, 2012 at 1:37 PM
Subject: Re: [GENERAL] Disable Streaming Replication without restarting
either master or slave
To: Fujii Masao
On Tue, May 29, 2012 at 1:15 PM, Fujii Masao wrote:
> On Tue, May 29, 2012 at 10