Hi,
I use a Postgres 9.4 database. I am optimizing queries using the results
of "explain" and "explain analyze", and sometimes I create indexes to
speed them up. But I have not always been successful: even after I
create an index, the planner sometimes does not use it.
So my question
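A minimal sketch of the usual diagnostic loop (the table, index, and
column names here are illustrative, not from the original post):

    -- Create a candidate index and check whether the planner picks it up.
    CREATE INDEX idx_orders_customer ON orders (customer_id);
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
    -- If the plan still shows "Seq Scan" rather than "Index Scan", the
    -- planner estimated the sequential scan to be cheaper (small table,
    -- low selectivity, or stale statistics). Refreshing statistics can
    -- change that estimate:
    ANALYZE orders;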
Hi Radovan!
Thank you for the reply. The thing is that this table is not subject to
massive updates/deletes.
Are there any additional traces, apart from perf or pg_top, to show what
the replica is doing at the particular moment when we are lagging in replay?
To see locks or spins or slee
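For reference, the built-in views and functions already give a coarse
picture (these exist in 9.x; pre-10 releases use the "xlog" names):

    -- On the primary: how far each standby has replayed.
    SELECT client_addr, sent_location, replay_location
    FROM pg_stat_replication;

    -- On the standby: receive vs. replay position, and replay delay.
    SELECT pg_last_xlog_receive_location(),
           pg_last_xlog_replay_location(),
           now() - pg_last_xact_replay_timestamp() AS replay_delay;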
Mason Hale wrote:
> Hello, I've got a 9.3 database hosted at Heroku.
>
> I'm using full text search to search for "group names" in part of my application,
> and some of my group names are the names of youth sports age groups like
> "Boys 9-10" or "Girls 11-12".
>
> I would like for a search for the ter
Hello, I've got a 9.3 database hosted at Heroku.
I'm using full text search to search for "group names" in part of my application,
and some of my group names are the names of youth sports age groups like
"Boys 9-10" or "Girls 11-12".
I would like for a search for the terms "Boys", "Boys 9-10", "9", "1
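One way to see why hyphenated group names behave surprisingly is to
inspect the tokens the default parser produces (the sample string is
from the post; the queries themselves are only illustrative):

    -- How does the parser split "Boys 9-10"?
    SELECT alias, token FROM ts_debug('english', 'Boys 9-10');
    -- Does a bare number match the indexed name?
    SELECT to_tsvector('english', 'Boys 9-10') @@ to_tsquery('9');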
Hi,
I have an application that replicates data from an Oracle database to
PostgreSQL. The flow goes as follows:
oracle.table1 -> AppSincronizador -> postgresql.table1 -> Trigger (upd,
ins, del) -> postgresql.table2
I'm running into situations where the data volume is so large that the
changes that shoul
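A minimal sketch of the trigger stage in that flow (the table and column
names are assumptions, not the poster's actual schema):

    CREATE OR REPLACE FUNCTION sync_table2() RETURNS trigger AS $$
    BEGIN
      IF TG_OP = 'INSERT' THEN
        INSERT INTO table2 (id, val) VALUES (NEW.id, NEW.val);
      ELSIF TG_OP = 'UPDATE' THEN
        UPDATE table2 SET val = NEW.val WHERE id = NEW.id;
      ELSIF TG_OP = 'DELETE' THEN
        DELETE FROM table2 WHERE id = OLD.id;
      END IF;
      RETURN NULL;  -- AFTER trigger: the return value is ignored
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER table1_sync AFTER INSERT OR UPDATE OR DELETE
      ON table1 FOR EACH ROW EXECUTE PROCEDURE sync_table2();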
> On Feb 25, 2015, at 3:27 PM, Alvaro Herrera wrote:
>
> Tong Pham wrote:
>
>> We do have fsync turned on, and there was no disk failure. The database
>> had to be shut down forcefully because it was becoming nonresponsive
>> (probably due to inadequate earlier vacuuming) and we could not get t
Tong Pham wrote:
> We do have fsync turned on, and there was no disk failure. The database
> had to be shut down forcefully because it was becoming nonresponsive
> (probably due to inadequate earlier vacuuming) and we could not get the
> remaining queries to terminate with normal cancel/terminate
There is no problem dumping large tables using parallel dump. My script had
a limit on the file size, which was causing the parallel dump to abort on
large tables. Thanks, everyone, for the valuable suggestions.
Thanks,
Shanker
From: Shanker Singh
Sent: Monday, February 23, 2015 6:18 PM
To: Sterfield
Cc:
On 25/02/15 15:42, Merlin Moncure wrote:
> On Sat, Feb 21, 2015 at 5:33 AM, Arjen Nienhuis
> wrote:
>> Hi,
>>
>> On 19 Feb 2015 17:12, "brian" wrote:
>>>
>>>
>>> Hi folks,
>>>
>>> I have a single-user application which is growing beyond the
>>> fixed-format data files in which it currently holds
tpham wrote:
> Hi everyone,
>
> Two weeks ago, one of our Postgres databases crashed violently and had to be
> brought up again. This certainly resulted in some lost pg_clog files, and we
> had to zero-fill them in one by one to get autovacuum up and running again.
You should never lose pg_clog f
On 2/25/15 10:49 AM, Cenkar, Maciej wrote:
> Given PostgreSQL 9.3.5, what is the locking strategy when executing a
> query such as:
>
> UPDATE table SET some_col = some_val WHERE id IN (SELECT id FROM
> expensive_query_with_joins).
>
> Does it start to lock rows after it has executed the sub-select, or is it
Hi everyone,
Two weeks ago, one of our Postgres databases crashed violently and had to be
brought up again. This certainly resulted in some lost pg_clog files, and we
had to zero-fill them in one by one to get autovacuum up and running again.
Now, we have two autovacuum processes constantly st
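To watch what those workers are doing, the standard 9.3 catalog views
already help:

    -- Autovacuum workers report their current table in pg_stat_activity.
    SELECT pid, xact_start, query
    FROM pg_stat_activity
    WHERE query LIKE 'autovacuum:%';

    -- Per-table dead-tuple counts and last autovacuum times.
    SELECT relname, n_dead_tup, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC;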
Thanks
Hi,
On 25.2.2015 12:50, JD wrote:
> Hi All,
>
> please see the following queries
>
> 1. select * from partdetails where scode=118 and
> (to_tsvector('english'::regconfig, part_number::text)) @@ to_tsquery('104')
>
> it shows only 1 record as output; it is expected to give 17 reco
Thanks, Stephen and David for your responses. My setup is as you described
it. Thanks for clarifying.
Hi All,
I wonder if anyone can explain something I cannot easily find on Google.
Given PostgreSQL 9.3.5, what is the locking strategy when executing a query such as:
UPDATE table SET some_col = some_val WHERE id IN (SELECT id FROM
expensive_query_with_joins).
Does it start to lock rows after it ex
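As far as I know, the UPDATE takes each row lock as the executor reaches
that row, not in one batch after the sub-select finishes. A sketch of
taking the locks up front instead (assuming the updated table is called
"items"; all names are illustrative):

    BEGIN;
    -- Lock the target rows first, in a deterministic order:
    SELECT 1 FROM items
    WHERE id IN (SELECT id FROM expensive_query_with_joins)
    ORDER BY id
    FOR UPDATE;
    UPDATE items SET some_col = 'some_val'
    WHERE id IN (SELECT id FROM expensive_query_with_joins);
    COMMIT;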
Hi All,
please see the following queries
1. select * from partdetails where scode=118 and
(to_tsvector('english'::regconfig, part_number::text)) @@ to_tsquery('104')
it shows only 1 record as output; it is expected to give 17 records as
output.
2. select * from partdetails where s
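A common cause is tokenization: alphanumeric or hyphenated part numbers
become compound tokens that a bare number does not match. Illustrative
checks (the part number below is made up):

    -- See what tokens the parser produces for a part number.
    SELECT alias, token FROM ts_debug('english', '104-A7');
    -- An exact lexeme may not match...
    SELECT to_tsvector('english', '104-A7') @@ to_tsquery('104');
    -- ...while a prefix query often does.
    SELECT to_tsvector('english', '104-A7') @@ to_tsquery('104:*');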
On Sat, Feb 21, 2015 at 5:33 AM, Arjen Nienhuis wrote:
> Hi,
>
> On 19 Feb 2015 17:12, "brian" wrote:
>>
>>
>> Hi folks,
>>
>> I have a single-user application which is growing beyond the
>> fixed-format data files in which it currently holds its data; I need a
>> proper database as the backend.
On 02/25/2015 07:11 AM, Felix Ivan Romero Rodríguez wrote:
I've got the following issue with postgres 9.3, trying to restore a
backup from an sql script:
"invalid byte sequence for encoding "UTF8": 0xc2de"
How can I solve it?
Provide more information :)
Where is the backup coming from, another databa
Fix the character sequence to be valid UTF8 before you restore it. I'm
assuming you're restoring from a pretty old version of Postgres which did
not do strict UTF8 character validation. Are you sure the data is encoded
as UTF8 and not some other encoding?
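If the dump turns out to be in a single-byte encoding (an assumption;
LATIN1 below is only an example), declaring the client encoding lets the
server convert it during the restore:

    SET client_encoding = 'LATIN1';
    \i backup.sql   -- psql meta-command; the script name is illustrative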
I've got the following issue with postgres 9.3, trying to restore a backup from
an sql script:
"invalid byte sequence for encoding "UTF8": 0xc2de"
How can I solve it?
Torsten Förtsch writes:
> On 25/02/15 07:34, David G Johnston wrote:
>> Torsten Förtsch wrote
>>> Is there anything similar for JSON scalars?
>> IDK, but have you tried "::text"?
> Yes. Here is the difference:
> select * from (values (('{"a":"b"}'::json -> 'a')::text),
>
Guillaume Drolet wrote:
>>>> If you want to move a whole database to a different tablespace (the
>>>> only reason I can think of for doing what you are trying to do), use
>>>> the command
>>>> ALTER DATABASE ... SET TABLESPACE ...
>>> Thanks Laurenz. I tried your suggestion:
>>>
>>> psql -U postg
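The suggested command, spelled out (the database and tablespace names
are illustrative):

    ALTER DATABASE mydb SET TABLESPACE fast_disk;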
On 25/02/15 07:34, David G Johnston wrote:
> Torsten Förtsch wrote
>> Is there anything similar for JSON scalars?
> IDK, but have you tried "::text"?
Yes. Here is the difference:
select * from (values (('{"a":"b"}'::json -> 'a')::text),
('{"a":"b"}'::json ->> 'a')) t;
 column1
---------
 "b"
 b
(2 rows)
On 25/02/15 07:22, David G Johnston wrote:
> I'm doubting you intended to join a bunch of commas using the field value as
> the delimiter... methinks you got the argument order reversed for
> string_agg.
OMG, I am so stupid. Thanks.
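For reference, the mix-up in question (a sketch; the table and column
names are made up):

    SELECT string_agg(name, ', ') FROM people;  -- joins the values
    SELECT string_agg(', ', name) FROM people;  -- joins commas, using
                                                -- each value as delimiter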
> On 25 Feb 2015, at 24:50, Tim Smith wrote:
>
> I have an Oracle "connect by" SQL that looks something like:
>
> select phone, pfx, len, (select info from codes where
> pfx = x.pfx) infot
> from (
> select :x phone, to_number(substr( :x, 1, length(:x)-level+1 )) pfx,
> length(:x)-level+1 len
>
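A sketch of one PostgreSQL equivalent using a recursive CTE (the codes
table with its pfx/info columns comes from the quoted query; everything
else, including the sample number, is illustrative):

    WITH RECURSIVE pfx AS (
      SELECT '61234567890'::text AS phone, '61234567890'::text AS p
      UNION ALL
      SELECT phone, left(p, length(p) - 1) FROM pfx WHERE length(p) > 1
    )
    SELECT pfx.phone, pfx.p, codes.info
    FROM pfx
    JOIN codes ON codes.pfx::text = pfx.p
    ORDER BY length(pfx.p) DESC   -- keep only the longest matching prefix
    LIMIT 1;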
Some other solutions:
http://postgres.cz/wiki/PostgreSQL_SQL_Tricks_II#Fast_searching_of_longer_prefix
2015-02-25 9:04 GMT+01:00 Tim Smith :
> Will take a look. Thanks, Steve.
>
> On 24 February 2015 at 23:57, Steve Atkins wrote:
> >
> > On Feb 24, 2015, at 3:50 PM, Tim Smith
> wrote:
> >
> >>
Will take a look. Thanks, Steve.
On 24 February 2015 at 23:57, Steve Atkins wrote:
>
> On Feb 24, 2015, at 3:50 PM, Tim Smith wrote:
>
>>
>>
>> The goal being to match the longest prefix given a full phone number, e.g.
>>
>>
>> 61234567890 would match "australia proper 61"
>> whilst
>> 61134567