Hello,
I am using Django 1.7 and PostgreSQL 9.3.5.
I am trying to store an email address in the username field of
django.contrib.auth.models.User,
but when I try to store more than 30 characters I get this error:
Ensure this value has at most 30 characters (it has 31).
I tried
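For reference, auth.User.username in Django 1.7 is a CharField(max_length=30), so any longer value fails form/model validation before it ever reaches PostgreSQL. A simplified illustration of that check (not Django's actual code):

```python
# Simplified sketch of Django's max_length validation (illustration
# only, not the real implementation): username is CharField(max_length=30),
# so a 31-character email is rejected regardless of the database column.
def validate_max_length(value, max_length=30):
    if len(value) > max_length:
        raise ValueError(
            "Ensure this value has at most %d characters (it has %d)."
            % (max_length, len(value)))
    return value

validate_max_length("short@example.com")  # fine, 17 characters
```

The usual fix is a custom user model (AbstractBaseUser with USERNAME_FIELD = 'email') rather than widening the column by hand.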
VENKTESH GUTTEDAR wrote
On 12/10/2014 04:38 AM, VENKTESH GUTTEDAR wrote:
On 12/10/2014 01:32 AM, Eric Svenson wrote:
Hmm, almost like the
On 12/08/2014 02:05 AM, chris.jur...@primesoft.ph wrote:
I am having a problem with having idle sessions in transactions. In
pgAdmin Server Status, it is showing RELEASE_EXEC_SVP_XX (XX
data are varied) as its query and it's locks also contain a lot of these
RELEASE_EXEC_SVP_XX
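RELEASE_EXEC_SVP_XX appears to be driver-generated savepoint traffic; either way, pg_stat_activity will show which client is holding a transaction open. A query along these lines (column names per 9.2+) usually helps:

```sql
-- Find sessions that are idle inside an open transaction,
-- and how long they have been in that state:
SELECT pid, usename, state, state_change, query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY state_change;
```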
Is the list of shorthand casts documented somewhere?
If so, can you please direct me to it; a working URL would be great.
Thank you.
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
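For what it's worth, the :: shorthand is described in the PostgreSQL manual's "Value Expressions" chapter (the Type Casts section); it is equivalent to the SQL-standard CAST syntax:

```sql
SELECT CAST('42' AS integer);  -- SQL-standard form
SELECT '42'::integer;          -- PostgreSQL shorthand, same result
```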
I have users, friends, and friend_requests. I need a query that essentially
returns a summary containing:
* user (name, imageURL, bio, ...)
* Friend status (relative to an active user)
* Is the user a friend of the active user?
* Has the user sent a friend request to the
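One common shape for this is a pair of LEFT JOINs against the active user. A self-contained sketch with a hypothetical minimal schema (SQLite here so it runs anywhere; the same join shape works in PostgreSQL):

```python
import sqlite3

# Hypothetical schema: users, friends, friend_requests. "Friend status"
# is computed per user relative to an active user (id 1) via LEFT JOINs.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE friends (user_id INTEGER, friend_id INTEGER);
    CREATE TABLE friend_requests (from_id INTEGER, to_id INTEGER);
    INSERT INTO users VALUES (1, 'me'), (2, 'alice'), (3, 'bob');
    INSERT INTO friends VALUES (1, 2);          -- alice is my friend
    INSERT INTO friend_requests VALUES (3, 1);  -- bob sent me a request
""")
rows = conn.execute("""
    SELECT u.name,
           f.friend_id IS NOT NULL AS is_friend,
           r.from_id   IS NOT NULL AS has_requested
    FROM users u
    LEFT JOIN friends f         ON f.user_id = 1 AND f.friend_id = u.id
    LEFT JOIN friend_requests r ON r.from_id = u.id AND r.to_id = 1
    WHERE u.id <> 1
    ORDER BY u.id
""").fetchall()
print(rows)  # [('alice', 1, 0), ('bob', 0, 1)]
```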
So, one more success...
I have taken a part of the backup SQL file which fills the table
COPY dev_my_settings (.) from stdin;
12345 text text 0 123.345345
This file ALONE works! (without changing ANYTHING!)
So if I run the first (huge) SQL file and then the second, which fills the
The restore left you with two empty tables. What happens if you log into
Postgres via psql and then INSERT one set of values containing floats
into say, dev_my_settings?
SUCCESS! This works OK!
INSERT INTO dev_my_settings VALUES (123, 'test', 'test', 'test', 123, 123.345);
Value 123.345 can be read
Hi all,
I am running PostgreSQL 9.3.5 on Ubuntu Server 14.04 64 bit with 64 GB
of RAM. When running pg_dump on a specific table, I get the following
error:
pg_dump: Dumping the contents of table x_2013 failed:
PQgetResult() failed.
pg_dump: Error message from server: ERROR: invalid
CCing list.
On 12/10/2014 08:06 AM, VENKTESH GUTTEDAR wrote:
Yes, I used python manage.py syncdb
Well, per David's suggestion, you might want to check that it is indeed
the email field that is triggering the validation error. Both the
first_name and last_name fields in auth_user have a field
On 12/10/2014 08:07 AM, Gabriel Sánchez Martínez wrote:
On 12/10/2014 11:16 AM, Adrian Klaver wrote:
On 12/10/2014 08:31 AM, Gabriel Sánchez Martínez wrote:
Sam Mason s...@samason.me.uk writes:
On Mon, Nov 10, 2008 at 02:30:41PM -0800, Steve Atkins wrote:
On Nov 10, 2008, at 1:35 PM, Tom Lane wrote:
Alvaro Herrera alvhe...@commandprompt.com writes:
It seems that there is enough need for this feature that it has been
implemented multiple times
On 12/10/2014 11:49 AM, Adrian Klaver wrote:
On 12/10/2014 09:25 AM, Gabriel Sánchez Martínez wrote:
On 12/10/2014 12:47 PM, Adrian Klaver wrote:
On 12/10/2014 09:54 AM, Gabriel Sánchez Martínez wrote:
On 12/10/2014 01:00 PM, Adrian Klaver wrote:
On 10.12.2014 17:07, Gabriel Sánchez Martínez wrote:
On 12/10/2014 01:48 PM, Tomas Vondra wrote:
in 9.4, GIN indexes are pretty close to this already
Do I understand correctly that BRIN indexes will be even closer to this?
Kindest regards
Jack
-----Original Message-----
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: 24 May 2014 22:46
To: Martijn van Oosterhout
Cc: Jack Douglas;
On Tue, Dec 9, 2014 at 4:24 AM, Albe Laurenz laurenz.a...@wien.gv.at wrote:
SELECT ...
FROM people p
LEFT JOIN LATERAL (SELECT * FROM names n
WHERE n.people_id = p.people_id
AND current_timestamp >= n.validfrom
ORDER
On 12/10/2014 10:08 AM, Gabriel Sánchez Martínez wrote:
Jack Douglas wrote:
in 9.4, GIN indexes are pretty close to this already
Do I understand correctly that BRIN indexes will be even closer to this?
Yeah, in a way. You could say they are closer from the opposite end.
There is one index tuple in a BRIN index for each page range (contiguous
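For context, a BRIN index in the upcoming release is declared like this; pages_per_range controls how many contiguous heap pages each index tuple summarizes (table and column names here are hypothetical):

```sql
CREATE INDEX ON measurements USING brin (logdate)
    WITH (pages_per_range = 128);

-- A range scan such as this only has to read page ranges whose
-- stored min/max summary overlaps the requested interval:
SELECT * FROM measurements
WHERE logdate BETWEEN '2014-01-01' AND '2014-02-01';
```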
On 12/10/2014 02:34 PM, Adrian Klaver wrote:
If the values are perfectly clustered, the index is optimal because you
scan the minimal set of pages.
That's the bit I'm particularly interested in, as my plan would be to keep
the pages well clustered: http://dba.stackexchange.com/a/66293/1396
Do you see any blocker preventing BRIN being used
On Mon, Dec 8, 2014 at 4:58 PM, Vincent de Phily
vincent.deph...@mobile-devices.fr wrote:
On Monday 08 December 2014 10:17:37 Jeff Janes wrote:
On Mon, Dec 8, 2014 at 4:54 AM, Vincent de Phily
I don't think that routine vacuums even attempt to update relfrozenxid,
or at least don't
I have a custom type and want to add the still-missing SEND and RECEIVE functions.
Is there any way to alter the type definition without dropping and recreating it?
Manuel
Jack Douglas wrote:
Manuel Kniep m.kn...@web.de writes:
There's no supported way to do that. As an unsupported way, you could
consider a manual
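The reply is truncated, but a "manual" route presumably means editing the pg_type row directly. Strictly unsupported; try it on a throwaway database first. Type and function names below are hypothetical:

```sql
-- After creating mytype_send / mytype_recv with CREATE FUNCTION:
UPDATE pg_type
SET typsend    = 'mytype_send'::regproc,
    typreceive = 'mytype_recv'::regproc
WHERE typname = 'mytype';
```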
Tom Lane-2 wrote
FWIW, if you are using the logging collector (highly recommended), output
to a backend process's stdout or stderr will be caught and included in the
log, though it won't have a log_line_prefix. This might be a usable
substitute for adapting your code.
Currently, when I need to create/edit a stored procedure in Postgresql, my workflow goes like the following:
- Create/edit the desired function in my "DB Commands" text file
- Copy and paste function into my development database
- Test
- Repeat above until it works as desired
- Copy and paste function
On 11/12/14 13:53, Israel Brewster wrote:
On 12/10/2014 04:53 PM, Israel Brewster wrote:
On 12/10/2014 05:53 PM, Israel Brewster wrote:
I want to do something that is perfectly satisfied by an hstore column,
*except* that I want to be able to do fast (i.e. indexed) <, >, etc.
comparisons, not just equality.
From what I can tell, there isn’t really any way to get hstore to do this,
so I’ll have to go to a key-value table. But
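One workaround short of a separate key-value table is a btree expression index on each hstore key you need range predicates for (table, column, and key names below are hypothetical; hstore's -> returns text, hence the cast):

```sql
CREATE INDEX items_price_idx ON items (((attrs -> 'price')::numeric));

SELECT * FROM items
WHERE (attrs -> 'price')::numeric > 10;  -- can use items_price_idx
```

The catch is that each key needs its own index, so this only scales to a handful of known keys.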
On Dec 6, 2014, at 12:38 , Bruce Momjian br...@momjian.us wrote:
On Wed, Dec 3, 2014 at 01:15:50AM -0800, Guyren Howe wrote:
GIN is certainly not the “three times” size suggested in the docs, but
perhaps
that just hasn’t been updated for the 9.4 improvements. Certainly, there
isn’t
On 12/10/2014 05:03 PM, Gavin Flower wrote:
How do you handle DDL changes in general? I would treat stored
procedures the same way. For instance, Ruby on Rails has database
migrations where you write one method to apply the DDL change and
another to revert it, like this:
def up
  add_column :employees, :manager_id, :integer
end
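The up/down idea is not Rails-specific. A minimal sketch of a reversible migration in Python (entirely hypothetical helper code; a dict stands in for real DDL against a database):

```python
# A migration is an (up, down) pair; applying then reverting it must
# leave the schema exactly as it was.
schema = {"employees": ["id", "name"]}

def up(s):
    s["employees"].append("manager_id")   # apply the change

def down(s):
    s["employees"].remove("manager_id")   # revert it

up(schema)
print(schema["employees"])   # ['id', 'name', 'manager_id']
down(schema)
print(schema["employees"])   # ['id', 'name']
```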
On Wed, Dec 10, 2014 at 05:27:16PM -0800, Guyren Howe wrote:
Given the futility of database benchmarking in general, I didn’t
want to go any further with this. What I was interested in was
whether it might be worth switching from BTree to GIST/GIN indexes
with regular sorts of data. It
Hello,
I'm working on a package of functions that compute statistics on
arrays of numbers. For example this one computes a histogram from a
bunch of values plus some bucket characteristics:
CREATE OR REPLACE FUNCTION
array_to_hist(double precision[], double precision, double precision, int)
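In Python terms, the computation such a function performs might look like the following; reading the quoted signature as (values, lower bound, bucket width, bucket count) is my assumption:

```python
# Sketch of a histogram over an array of values: count how many values
# fall in each of `nbuckets` buckets of width `width` starting at `lower`.
def array_to_hist(values, lower, width, nbuckets):
    counts = [0] * nbuckets
    for v in values:
        i = int((v - lower) // width)
        if 0 <= i < nbuckets:     # values outside the range are dropped
            counts[i] += 1
    return counts

array_to_hist([1.0, 2.5, 2.6, 9.9], 0.0, 2.0, 5)  # -> [1, 2, 0, 0, 1]
```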
I suggest you download and install PgAdmin.
http://www.pgadmin.org/index.php
It makes review of functions and other database objects, as well as
maintenance, a lot easier.
Otherwise, you can just use psql, e.g.:
psql your_database
\o /some_dir/your_proc_filename
\sf+ your_proc
\q
Your function
Paul Jungwirth p...@illuminatedcomputing.com writes:
Is it legal to define a bunch of functions all called `array_to_hist`
for the different numeric types, and have them all implemented by the
same C function?
Sure.
(There is a regression test that objects if we try to do that with
built-in
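The pattern being blessed here is several SQL-level signatures pointing at the same C symbol; a sketch (module path and names are placeholders), with the shared C function inspecting the actual array element type at run time:

```sql
CREATE FUNCTION array_to_hist(real[], real, real, int)
    RETURNS int[]
    AS 'MODULE_PATHNAME', 'array_to_hist' LANGUAGE C STRICT;

CREATE FUNCTION array_to_hist(double precision[], double precision,
                              double precision, int)
    RETURNS int[]
    AS 'MODULE_PATHNAME', 'array_to_hist' LANGUAGE C STRICT;
```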