FreeBSD is OK if you are experienced. As a system it requires much more
maturity from the admin than, let's say, Ubuntu, which is targeted at a
larger user base.
I'd say, explore your other Linux options first, since you already have
experience with Linux.
FreeBSD has a much steeper learning
Scot Kreienkamp wrote:
I have a table like so:
Receiptlimitid: BIGINT (Primary Key)
Profitcenterid: BIGINT
Receiptnumber: INTEGER
All are set to Not Null also.
My question is, if I have an insert that goes idle in transaction for a while
before it commits, will
it stop all other
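Sessions sitting idle in an open transaction, like the one described, can usually be spotted from another connection. A minimal sketch (the `state` column exists from 9.2 on; older releases show `<IDLE> in transaction` in `current_query` instead, and the pid below is hypothetical):

```sql
-- Find sessions holding an open transaction while idle (9.2+ columns):
SELECT pid, usename, state, xact_start, query
FROM pg_stat_activity
WHERE state = 'idle in transaction';

-- The locks such a backend still holds can be inspected in pg_locks:
SELECT locktype, relation::regclass, mode, granted
FROM pg_locks
WHERE pid = 12345;  -- hypothetical pid from the query above
```

Whether other inserts block depends on which of those locks conflict, not on the idleness itself.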
Hi,
A few questions about jsonb and hstore:
1. does jsonb replace hstore?
2. compatibility of jsonb and hstore?
3. will the development of hstore continue?
4. is it recommended to use jsonb when planning new projects?
Thanks
Armand
--
Sent via pgsql-general mailing list
Hi,
here is an example of a problem I encounter with json_populate_recordset
because it does not support nested objects. Actually, I would prefer that it
ignore nested objects rather than raise an error, since it fails on a key I
don't even want to use!
Here's the query:
select e.timestamp::date,
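A minimal reproduction of the complaint, under assumed names (the 9.3-era json_populate_recordset raises an error when a key in the JSON holds a nested object):

```sql
-- Hypothetical record type; "meta" stands in for the nested key
-- the poster doesn't even want to use.
CREATE TYPE evt AS (ts date, name text);

SELECT * FROM json_populate_recordset(null::evt,
  '[{"ts":"2014-04-04","name":"a","meta":{"x":1}}]');
-- 9.3 errors out on the nested "meta" value.

-- One workaround: pull fields out explicitly with ->>, which simply
-- ignores keys you never reference:
SELECT (e->>'ts')::date, e->>'name'
FROM json_array_elements(
  '[{"ts":"2014-04-04","name":"a","meta":{"x":1}}]'::json) AS e;
```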
I have a plpgsql function with:
PERFORM * FROM answers(_h);--works fine.
CREATE TEMP VIEW answers AS SELECT * FROM answers(_h); --gives error...
Why do I get this error:
ERROR: column "_h" does not exist
LINE 1: ...TEMP VIEW answers AS SELECT * FROM antwoorden_view(_h)
--
+31 6 29401726
Hello
2014-04-04 11:43 GMT+02:00 Tjibbe tji...@rijpma.org:
I have a plpgsql function with:
PERFORM * FROM answers(_h);--works fine.
CREATE TEMP VIEW answers AS SELECT * FROM answers(_h); --gives error...
A view definition cannot contain a plpgsql variable - this statement has
no
On Fri, Apr 4, 2014 at 12:20 PM, Armand Turpel
armand.turpel.m...@gmail.com wrote:
Hi,
A few questions about jsonb and hstore:
1. does jsonb replace hstore?
no, it's a different data type
2. compatibility of jsonb and hstore?
hstore is a simple key-value model, while jsonb is richer
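The difference can be sketched in a few lines (9.4 syntax; hstore requires the contrib extension): hstore is a flat text-to-text map, while jsonb nests and keeps value types.

```sql
-- Requires: CREATE EXTENSION hstore;
SELECT 'a=>1, b=>2'::hstore;                 -- flat; all values are text
SELECT '{"a": 1, "b": {"c": true}}'::jsonb;  -- nested, typed values
SELECT '{"a": 1, "b": {"c": true}}'::jsonb -> 'b' -> 'c';  -- jsonb true
```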
On Apr 4, 2014, at 12:03 AM, François Beausoleil franc...@teksol.info wrote:
I have some performance issues on an Ubuntu 12.04 server which I'd like to resolve.
iowait varies a lot, between 5 and 50%.
Is the SAN dedicated to this app? I wonder if the i/o, if not related to your
app, is being
Thanks that works!
The ERROR is a little bit confusing.
Regards
Tjibbe
--
+31 6 29401726
tji...@rijpma.org
Jacobusstraat 185
3012 JM Rotterdam
On 4 April 2014 11:43, Tjibbe tji...@rijpma.org wrote:
I have a plpgsql function with:
PERFORM * FROM answers(_h);--works fine.
CREATE TEMP VIEW
On 4 April 2014 13:04, Oleg Bartunov obartu...@gmail.com wrote:
On Fri, Apr 4, 2014 at 12:20 PM, Armand Turpel
armand.turpel.m...@gmail.com wrote:
Hi,
A few questions about jsonb and hstore:
1. does jsonb replace hstore?
no, it's a different data type
2. compatibility of jsonb and hstore?
2014-04-04 14:16 GMT+02:00 Tjibbe tji...@rijpma.org:
Thanks that works!
The ERROR is a little bit confusing.
yes, it can be - but it is hard to fix, because it rests on the cooperation
of two worlds - plpgsql and SQL - and that is sometimes not simple.
When you understand how plpgsql interprets the
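One way around it, sketched under the names from the question (the view name is hypothetical), is to build the view with dynamic SQL so the variable's value, rather than the variable itself, ends up in the view definition:

```sql
-- Inside the plpgsql function: format() with %L embeds the current
-- value of _h as a properly quoted literal.
EXECUTE format(
  'CREATE TEMP VIEW answers_view AS SELECT * FROM answers(%L)', _h);
```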
Le 2014-04-04 à 08:11, Ray Stell a écrit :
On Apr 4, 2014, at 12:03 AM, François Beausoleil franc...@teksol.info wrote:
I have some performance issues on an Ubuntu 12.04 server which I'd like to resolve.
iowait varies a lot, between 5 and 50%.
Is the SAN dedicated to this app? I wonder if
As a side note, when we migrated the exact same pgsql 8.3 system from Linux
kernel 2.6 to 3.6, we experienced a dramatic slowdown of about 6x.
Linux kernels were known to have issues around those versions, I recall.
We had to set synchronous_commit to off, which gave a huge boost,
but this
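For reference, synchronous_commit can be toggled without a restart, either globally in postgresql.conf or per session/transaction; a sketch:

```sql
SET synchronous_commit = off;        -- this session only
SET LOCAL synchronous_commit = off;  -- this transaction only
-- or in postgresql.conf: synchronous_commit = off   (then reload)
```

With it off, a crash can lose the last few commits but cannot corrupt the database.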
Hi,
I have a question regarding the use of indices when querying foreign data
tables (using postgres_fdw of 9.3 to another postgres DB).
Everything works fine besides the fact that the indexes which are defined in
the foreign DB are not used at all when I query it through the foreign data
Hi,
I’m developing an application using Postgres and when doing a transaction with
a bunch of INSERT statements, the connection to the server (on localhost) often
suddenly dies. Trying to diagnose the problem, I’ve increased log levels and
I’ve run a tcpdump session to find out what’s going
Mark van Cuijk postgresql.org-pgsql-general...@45cents.nl writes:
I'm developing an application using Postgres and when doing a transaction
with a bunch of INSERT statements, the connection to the server (on
localhost) often suddenly dies. Trying to diagnose the problem, I've
increased log
On 04 Apr 2014, at 16:09 , Tom Lane t...@sss.pgh.pa.us wrote:
Your log extract looks like the server side thought it got a connection
closure command. In particular, the server process seems to have exited
normally, and it did not write "LOG: unexpected EOF on client connection",
which it
Kohler Manuel wrote:
I have a question regarding the use of indices when querying foreign data
tables (using postgres_fdw
of 9.3 to another postgres DB).
Everything works fine besides the fact that the indexes which are defined in
the foreign DB are not
used at all when I query it
We'll work on contrib/jsonxtra with all operators ported from hstore
and release it after 9.4 as a separate extension.
On Fri, Apr 4, 2014 at 4:32 PM, Thom Brown t...@linux.com wrote:
On 4 April 2014 13:04, Oleg Bartunov obartu...@gmail.com wrote:
On Fri, Apr 4, 2014 at 12:20 PM, Armand Turpel
On 4 April 2014 16:15, Oleg Bartunov obartu...@gmail.com wrote:
We'll work on contrib/jsonxtra with all operators ported from hstore
and release it after 9.4 as separate extension.
That would be useful. :)
Would there be an aim of getting that in-core for 9.5?
--
Thom
On Fri, Apr 4, 2014 at 7:17 PM, Thom Brown t...@linux.com wrote:
On 4 April 2014 16:15, Oleg Bartunov obartu...@gmail.com wrote:
We'll work on contrib/jsonxtra with all operators ported from hstore
and release it after 9.4 as separate extension.
That would be useful. :)
Would there be an
hi,
Why is pg_restore trying to put stuff into the pg_catalog schema of all
places?
It's ignoring the schema specified in the pg_dump itself (`myschema`)
and even my search_path (`public`).
$ psql stuff_development --command show search_path
search_path
-------------
 public
(1 row)
$
Hello,
I am using FDW to transparently access a remote Postgres database
containing a table indexed with the Q3C library.
On the remote server :
postgres=# SELECT * FROM object_000 WHERE q3c_radial_query(ra_PS, decl_PS,
1.3, 3.4, .2) ;
...
(2416 rows)
Time: 130.300 ms
But on the
Seamus Abshere sea...@abshere.net writes:
Why is pg_restore trying to put stuff into the pg_catalog schema of all
places?
Hm ... does myschema actually exist in the target database? If it
doesn't, and the dump file doesn't include a command to create it,
this is what you'll get. That's
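If that is the case here, creating the schema in the target database before restoring sidesteps the fallback (database and schema names are the ones from the thread):

```sql
-- In stuff_development, before running pg_restore:
CREATE SCHEMA myschema;
```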
On 04/03/2014 12:44 PM, Brent Wood wrote:
Hi David,
Does the RAID 1 array give any performance benefits over a single
drive? I'd guess that writes may be slower, reads may be faster (if
balanced) but data security is improved.
I've been looking into upgrading to SSD and wondering about RAID
It would be useful to know more details -- how much storage space you
need for example.
fwiw I considered all of these issues when we first deployed SSDs and
decided to not use RAID controllers.
There have not been any reasons to re-think that decision since.
However, it depends on your
Emmanuel Medernach meder...@clermont.in2p3.fr writes:
So the problem is that it calls the UDF on the FDW node (Foreign
Scan on master_object_000 + Filter) and not on the remote server.
Right.
- How to call the UDF on the remote server ?
postgres_fdw intentionally refuses to do this,
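A workaround often suggested for this situation (a sketch; the view, server, and column set below are assumptions, with names borrowed from the thread): wrap the q3c call in a view on the remote server and point a foreign table at that view, so the function runs where the index lives.

```sql
-- On the REMOTE server:
CREATE VIEW object_000_cone AS
  SELECT * FROM object_000
  WHERE q3c_radial_query(ra_PS, decl_PS, 1.3, 3.4, .2);

-- On the LOCAL server: a foreign table over that view.
CREATE FOREIGN TABLE object_000_cone (
  objectid bigint  -- ...remaining columns match object_000 (hypothetical)
) SERVER remote_srv OPTIONS (table_name 'object_000_cone');
```

The obvious cost is that the cone parameters are frozen into the view definition.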
On Fri, Apr 4, 2014 at 11:04 AM, Steve Crawford
scrawf...@pinpointresearch.com wrote:
On 04/03/2014 12:44 PM, Brent Wood wrote:
Hi David,
My take:
Does the RAID 1 array give any performance benefits over a single drive? I'd
guess that writes may be slower, reads may be faster (if balanced)
On 4/4/2014 10:15 AM, Merlin Moncure wrote:
2. Do I need both BBU on the RAID *and* capacitor on the SSD or just on one?
Which one? I'm suspecting capacitor on the SSD and write-through on the
RAID.
You need both. The capacitor protects the drive, the BBU protects the
raid controller.
note
On 4/4/14, 12:58 PM, Tom Lane wrote:
Seamus Abshere sea...@abshere.net writes:
Why is pg_restore trying to put stuff into the pg_catalog schema of all places?
Hm ... does myschema actually exist in the target database? [...] if myschema
doesn't exist, the creation target devolves to
Seamus Abshere sea...@abshere.net writes:
On 4/4/14, 12:58 PM, Tom Lane wrote:
Hm ... does myschema actually exist in the target database? [...] if
myschema doesn't exist, the creation target devolves to pg_catalog. This is
not terribly elegant but I'm not sure if there's consensus to
Hi,
here are the Query plans. The first plan is on the source database directly. So
no fdw involved:
source_db=# EXPLAIN ANALYZE select ds.code, count(*), sum(dsf.size_in_bytes) as
raw_size,pg_size_pretty(sum(dsf.size_in_bytes)) as size from data_set_files
dsf, data_sets ds where dsf.parent_id
Kohler Manuel (ID SIS) manuel.koh...@id.ethz.ch writes:
here are the Query plans. The first plan is on the source database directly.
So no fdw involved:
Do you have use_remote_estimate enabled? The rowcount estimates for the
foreign tables seem pretty far from reality; use_remote_estimates
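If it is not enabled, use_remote_estimate can be switched on per server or per table (the server name below is a placeholder; the table name is from the thread):

```sql
ALTER SERVER remote_srv
  OPTIONS (ADD use_remote_estimate 'true');
-- or more narrowly:
ALTER FOREIGN TABLE data_set_files_fdw
  OPTIONS (ADD use_remote_estimate 'true');
-- then re-run EXPLAIN ANALYZE to compare the row estimates
```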
On Fri, Apr 4, 2014 at 11:15 AM, Merlin Moncure mmonc...@gmail.com wrote:
On Fri, Apr 4, 2014 at 11:04 AM, Steve Crawford
scrawf...@pinpointresearch.com wrote:
On 04/03/2014 12:44 PM, Brent Wood wrote:
2. Do I need both BBU on the RAID *and* capacitor on the SSD or just on one?
Which one?
On 4/4/2014 12:08 PM, Scott Marlowe wrote:
You don't technically need the BBU / flashback memory IF the
controller is in write through.
if you HAVE the BBU/flash why would you put the controller in write
through?? the whole POINT of bbu/flashback is that you can safely
enable writeback
On Fri, Apr 4, 2014 at 10:15 AM, Merlin Moncure mmonc...@gmail.com wrote:
For all-around performance, the
S3700 ($2.50/GB) IMO held the crown for most of 2013 and I think is
still the one to buy. The S3500 ($1.25/GB) came out and also looks
like a pretty good deal
The S3500 can be had for
I'm having trouble with set-returning functions.
Let's say I have a function x() returning setof integer. I want to do
SELECT x(), but only keep the values which meet a criterion. Something
like: SELECT x() AS xval WHERE xval = 10.
How can I do that?
In general, is there a way to unroll a
On Fri, Apr 4, 2014 at 1:18 PM, John R Pierce pie...@hogranch.com wrote:
On 4/4/2014 12:08 PM, Scott Marlowe wrote:
You don't technically need the BBU / flashback memory IF the
controller is in write through.
if you HAVE the BBU/flash why would you put the controller in write
through??
On 04/04/2014 20:42, Robert James wrote:
I'm having trouble with setof returning functions
Let's say I have function x() returning setof integers. I want to do
SELECT x(), but only keep the values which meet a criteria. Something
like: SELECT x() as xval WHERE xval = 10.
How can I do
On Fri, Apr 4, 2014 at 12:42 PM, Robert James srobertja...@gmail.com wrote:
I'm having trouble with setof returning functions
Let's say I have function x() returning setof integers. I want to do
SELECT x(), but only keep the values which meet a criteria. Something
like: SELECT x() as xval
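The truncated replies are presumably pointing at the same idiom: move the set-returning function into FROM so its output becomes a column you can filter (x() and the value 10 are from the question; some_table is hypothetical):

```sql
SELECT xval
FROM x() AS xval
WHERE xval = 10;

-- From 9.3 on, LATERAL covers the correlated case, where the
-- function's arguments come from another table:
SELECT t.id, s.xval
FROM some_table t, LATERAL x() AS s(xval)
WHERE s.xval = 10;
```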
On 04/04/14 17:14, Kohler Manuel (ID SIS) wrote:
Here is the same query with fdw:
db=# EXPLAIN VERBOSE select ds.code, count(*), sum(dsf.size_in_bytes) as
raw_size,pg_size_pretty(sum(dsf.size_in_bytes)) as size from
data_set_files_fdw dsf, data_sets_fdw ds where dsf.parent_id is null and
On Friday, April 4, 2014, Scott Marlowe scott.marl...@gmail.com wrote:
On Fri, Apr 4, 2014 at 1:18 PM, John R Pierce pie...@hogranch.com wrote:
On 4/4/2014 12:08 PM, Scott Marlowe wrote:
You don't technically need the BBU / flashback memory IF the
controller is in write
On 04/04/2014 10:15 AM, Merlin Moncure wrote:
2. Do I need both BBU on the RAID *and* capacitor on the SSD or just on one?
Which one? I'm suspecting capacitor on the SSD and write-through on the
RAID.
You need both. The capacitor protects the drive, the BBU protects the
raid controller.
?? In
On 4/4/2014 3:57 PM, Steve Crawford wrote:
Judicious archiving allows us to keep our total OS+data storage
requirements under 100GB. Usually. So we should be able to easily stay
in the $500/drive price range (200GB S3700) and still have plenty of
headroom for wear-leveling.
One option I'm
I'm working on an ETL system that is driven from Java/JDBC but is implemented
mostly as SQL queries against a PostgreSQL database.
An ETL job runs inside its own transaction and consists of a series of
queries that transform the data from staging tables to the destination tables.
If a
On 04/04/2014 03:41 PM, Jim Garrison wrote:
I'm working on an ETL system that is driven from Java/JDBC but is implemented
mostly as SQL queries against a PostgreSQL database.
An ETL job runs inside its own transaction and consists of a series of queries that
transform the data from staging
On 4/4/2014 3:41 PM, Jim Garrison wrote:
Assuming I can pause the driving Java code between queries in a job, is there
any way to connect from PGAdmin (or another tool) and view the state of tables
in the in-progress transaction? If this is not currently possible, how
difficult would it be
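Background to the question: PostgreSQL offers no READ UNCOMMITTED, so a second connection (PGAdmin included) cannot see rows an open transaction has not yet committed. A sketch of the behavior (table names are hypothetical):

```sql
-- Connection A (the ETL job):
BEGIN;
INSERT INTO destination_table SELECT * FROM staging_table;

-- Connection B, meanwhile:
SELECT count(*) FROM destination_table;
-- counts only committed rows; A's inserts stay invisible until COMMIT
```

So inspecting in-progress state from outside generally means having the job itself expose it, e.g. by committing into a side table.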
On 04/02/2014 02:55 PM, Bret Stern wrote:
Care to share the SSD hardware you're using?
I've used none to date, and have some critical data I would like
to put on a development server to test with.
Regards,
Bret Stern
SSDs are ridiculously cheap when you consider the performance
difference.
It might be tempting to use a consumer-grade SSD due to the significant
cost savings, but the money saved is vapor. They may be OK for a dev
environment, but you *will* pay in downtime in a production environment.
Unlike regular hard drives where the difference between consumer and
On Fri, Apr 4, 2014 at 5:29 PM, Lists li...@benjamindsmith.com wrote:
On 04/02/2014 02:55 PM, Bret Stern wrote:
Care to share the SSD hardware you're using?
I've used none to date, and have some critical data I would like
to put on a development server to test with.
Regards,
Bret Stern
Jim Garrison wrote:
Assuming I can pause the driving Java code between queries in a job, is
there any way to connect from PGAdmin (or another tool) and view the state
of tables in the in-progress transaction? If this is not currently
possible, how difficult would it be to do?
What you want is