Re: [GENERAL] archive_command fails but works outside of Postgres

2017-08-19 Thread twoflower
Alvaro Herrera-9 wrote
> I saw one installation with "gsutil cp" in archive_command recently.
> It had the CLOUDSDK_PYTHON environment variable set in the
> archive_command itself.  Maybe that's a problem.

After all, this was the solution:

archive_command = 'CLOUDSDK_PYTHON=/usr/bin/python gsutil cp /storage/postgresql/9.6/main/pg_xlog/%p gs://my_bucket/'

as also hinted in https://github.com/GoogleCloudPlatform/gsutil/issues/402

I still don't understand why the environments differ (the context of
archive_command vs. "su postgres -" and executing it there) but I am happy
it's working now. Thank you!



--
View this message in context: 
http://www.postgresql-archive.org/archive-command-fails-but-works-outside-of-Postgres-tp5979040p5979093.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

Re: [GENERAL] archive_command fails but works outside of Postgres

2017-08-18 Thread twoflower
Mark Watson-12 wrote
> I think the parameter %p contains the complete path of the file

It does not, see the link to the official documentation above.



--
View this message in context: 
http://www.postgresql-archive.org/archive-command-fails-but-works-outside-of-Postgres-tp5979040p5979061.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

Re: [GENERAL] archive_command fails but works outside of Postgres

2017-08-18 Thread twoflower
Alvaro Herrera-9 wrote
> I saw one installation with "gsutil cp" in archive_command recently.
> It had the CLOUDSDK_PYTHON environment variable set in the
> archive_command itself.  Maybe that's a problem.

That's not the case here, I don't have this variable set anywhere where
gsutil works.
Alvaro Herrera-9 wrote
> Another possible problem might be the lack of %f (this command seems
> to rely on the file being the same name at the other end, which isn't
> necessarily so) and the fact that %p is supposed to be the path of the
> file, so you shouldn't qualify it with the full path.

%p is not the full path, it is relative to the cluster's data directory (as
described in the documentation). Not using %f is not a problem - gsutil is
invoked in the following way:

gsutil cp full_path storage_dir

Also, as I mentioned, the command works fine when copied verbatim from the
log error message and executed manually.
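For reference, the canonical example in the PostgreSQL documentation combines %p (the path relative to the data directory, used as the copy source) with %f (the bare file name, used as the destination):

```
archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
```

The gsutil invocation gets away without %f only because the object name in the bucket defaults to the source file's name.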



--
View this message in context: 
http://www.postgresql-archive.org/archive-command-fails-but-works-outside-of-Postgres-tp5979040p5979060.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

Re: [GENERAL] archive_command fails but works outside of Postgres

2017-08-18 Thread twoflower
Scott Marlowe-2 wrote
> Sounds like it depends on some envvar it doesn't see when run from the
> postmaster. If you sudo -u postgres and run it does it work?

Yes, I can do su postgres, execute the command and it works.




--
View this message in context: 
http://www.postgresql-archive.org/archive-command-fails-but-works-outside-of-Postgres-tp5979040p5979059.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] archive_command fails but works outside of Postgres

2017-08-18 Thread twoflower
I changed my archive_command to the following:
archive_command = 'gsutil cp /storage/postgresql/9.6/main/%p gs://my_bucket/pg_xlog/'
and it fails, leaving the following in the log:

2017-08-18 18:34:25.057 GMT [1436][0]: [104319] LOG:  archive command failed with exit code 1
2017-08-18 18:34:25.057 GMT [1436][0]: [104320] DETAIL:  The failed archive command was: gsutil cp /storage/postgresql/9.6/main/0001038B00D8 gs://my_bucket/pg_xlog/
2017-08-18 18:34:25.057 GMT [1436][0]: [104321] WARNING:  archiving transaction log file "0001038B00D8" failed too many times, will try again later

But the command works when executed manually:
root$ su postgres -c "gsutil cp /storage/postgresql/9.6/main/0001038B00D8 gs://my_bucket/pg_xlog/"
root$ echo $?
0

The last command verifies that gsutil indeed exited with 0.

How to best debug this issue?
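One way to see the difference (a general sketch, nothing PostgreSQL-specific): the archiver runs the command via /bin/sh with whatever environment the postmaster was started with, which for an init-script start is typically close to empty, so variables exported in a login shell (such as CLOUDSDK_PYTHON) are simply absent. `env -i` reproduces that effect:

```shell
# Variables exported in an interactive shell...
CLOUDSDK_PYTHON=/usr/bin/python
export CLOUDSDK_PYTHON
echo "login shell: CLOUDSDK_PYTHON=${CLOUDSDK_PYTHON:-<unset>}"

# ...are gone in a child started with a scrubbed environment, which is
# roughly what archive_command sees under an init-system-started postmaster:
env -i /bin/sh -c 'echo "archiver-like: CLOUDSDK_PYTHON=${CLOUDSDK_PYTHON:-<unset>}"'
```

Comparing the two printed lines shows why a command that works from "su postgres -" can still fail inside archive_command.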




--
View this message in context: 
http://www.postgresql-archive.org/archive-command-fails-but-works-outside-of-Postgres-tp5979040.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

Re: [GENERAL] Text search dictionary vs. the C locale

2017-07-02 Thread twoflower
Tom Lane-2 wrote
> Presumably the problem is that the dictionary file parsing functions
> reject anything that doesn't satisfy t_isalpha() (unless it matches
> t_isspace()) and in C locale that's not going to accept very much.

That's what I also guessed and the fact that setting lc-ctype=en_US.UTF-8
makes it work confirms it, I think.



--
View this message in context: 
http://www.postgresql-archive.org/Text-search-dictionary-vs-the-C-locale-tp5969677p5969703.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

Re: [GENERAL] Text search dictionary vs. the C locale

2017-07-02 Thread twoflower
Initializing the cluster with 

initdb
--locale=C
--lc-ctype=en_US.UTF-8
--lc-messages=en_US.UTF-8
--lc-monetary=en_US.UTF-8
--lc-numeric=en_US.UTF-8
--lc-time=en_US.UTF-8
--encoding=UTF8

allows me to use my text search dictionary. Now it only remains to see
whether index creation will be still fast (I suspect it should) and if it
doesn't have any other unintended consequences (e.g. in pattern matching
which we use a lot).
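On the pattern-matching concern, one thing worth knowing (a sketch; table and column names are hypothetical): with lc_collate=C, ordinary btree indexes already support left-anchored LIKE, so nothing changes there. It is only under a non-C collation that the index must be declared with the text_pattern_ops operator class:

```sql
-- Needed only when lc_collate is not C: a btree index declared with
-- text_pattern_ops can serve LIKE 'abc%' prefix searches
-- (hypothetical table/column):
CREATE INDEX document_body_like_idx ON document (body text_pattern_ops);

-- EXPLAIN SELECT * FROM document WHERE body LIKE 'abc%';
-- can then use the index instead of a sequential scan.
```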



--
View this message in context: 
http://www.postgresql-archive.org/Text-search-dictionary-vs-the-C-locale-tp5969677p5969678.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

[GENERAL] Text search dictionary vs. the C locale

2017-07-02 Thread twoflower
I am having problems creating an Ispell-based text search dictionary for
Czech language.

Issuing the following command:

create text search dictionary czech_ispell (
  template = ispell,
  dictfile = czech_ispell,
  affFile = czech_ispell
);

ends with

ERROR:  syntax error
CONTEXT:  line 252 of configuration file
"/usr/share/postgresql/9.6/tsearch_data/czech_ispell.affix": " . > TŘIA

The dictionary files are in UTF-8. The database cluster was initialized with

initdb --locale=C --encoding=UTF8

When, on the other hand, I initialize it with

initdb --locale=en_US.UTF8

it works.

I was hoping I could have the C locale with the UTF-8 encoding but it seems
non-ASCII text search dictionaries are not supported in that case. This is a
shame, as restoring the dumps goes from 1.5 hours (with the C locale) to 9.5
hours (with en_US.UTF8).



--
View this message in context: 
http://www.postgresql-archive.org/Text-search-dictionary-vs-the-C-locale-tp5969677.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

Re: [GENERAL] 'pg_ctl restart' does not terminate

2016-11-26 Thread twoflower
That makes perfect sense.

Thank you for a great help, Adrian!



--
View this message in context: 
http://postgresql.nabble.com/pg-ctl-restart-does-not-terminate-tp5932070p5932095.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] 'pg_ctl restart' does not terminate

2016-11-26 Thread twoflower
Ah, it didn't occur to me to try hitting ENTER. Still, this would be fine for
manually running the script, but as I am restarting the server as a part of
SaltStack config, I need pg_ctl to terminate without me intervening.

The solution with the -l argument is fine, I think. Even if I use it, the
server then logs its output into the file I specified in postgresql.conf
(which I would not expect, by the way).



--
View this message in context: 
http://postgresql.nabble.com/pg-ctl-restart-does-not-terminate-tp5932070p5932092.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

Re: [GENERAL] 'pg_ctl restart' does not terminate

2016-11-26 Thread twoflower
Yes, I am using that, thank you. But just by themselves these settings do not
make pg_ctl terminate.



--
View this message in context: 
http://postgresql.nabble.com/pg-ctl-restart-does-not-terminate-tp5932070p5932090.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

Re: [GENERAL] 'pg_ctl restart' does not terminate

2016-11-26 Thread twoflower
Adrian Klaver-4 wrote
> You also specify a log file to pg_ctl  by using -l:
> 
> https://www.postgresql.org/docs/9.5/static/app-pg-ctl.html

This did the trick, thank you!




--
View this message in context: 
http://postgresql.nabble.com/pg-ctl-restart-does-not-terminate-tp5932070p5932076.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




[GENERAL] 'pg_ctl restart' does not terminate

2016-11-26 Thread twoflower
I am restarting the server using the following:
su postgres -c "/usr/lib/postgresql/9.6/bin/pg_ctl -D
/var/lib/postgresql/9.6/main -o '-c
config_file=/etc/postgresql/9.6/main/postgresql.conf' restart"
The server is restarted properly, but the command never finishes. After the
restart, it displays the server's logfile. Is this intended?



--
View this message in context: 
http://postgresql.nabble.com/pg-ctl-restart-does-not-terminate-tp5932070.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

[GENERAL] SELECT blocks UPDATE

2015-08-13 Thread twoflower
Hello,

if I am reading the documentation on explicit locking
(http://www.postgresql.org/docs/current/interactive/explicit-locking.html#LOCKING-TABLES)
correctly, SELECT should never conflict with UPDATE. However, what I am
observing as a result of this monitoring query:

SELECT bl.pid AS blocked_pid,
       a.usename  AS blocked_user,
       ka.query   AS blocking_statement,
       now() - ka.query_start AS blocking_duration,
       kl.pid AS blocking_pid,
       ka.usename AS blocking_user,
       a.query    AS blocked_statement,
       now() - a.query_start  AS blocked_duration
FROM pg_catalog.pg_locks bl
JOIN pg_catalog.pg_stat_activity a ON a.pid = bl.pid
JOIN pg_catalog.pg_locks kl ON kl.transactionid = bl.transactionid AND
     kl.pid != bl.pid
JOIN pg_catalog.pg_stat_activity ka ON ka.pid = kl.pid
WHERE NOT bl.granted;


is this:

*Blocking statement*: SELECT tmtranslat0_.id as id164_0_, tmtranslat1_.id as id101_1_, tmlanguage2_.id as id73_2_, ... FROM TRANSLATION ...
*Blocked statement*: UPDATE TRANSLATION SET fk_assignment_queue_item = 1000211 WHERE id IN (47032216)


I don't remember ever having problems with things like this. I am not even
issuing SQL queries in parallel from my application (the execution is
single-threaded). Now my application is stuck on the UPDATE statement. 

1) How is it possible that these two statements block each other?
2) What can I do about it?

Thank you.



--
View this message in context: 
http://postgresql.nabble.com/SELECT-blocks-UPDATE-tp5862040.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

Re: [GENERAL] SELECT blocks UPDATE

2015-08-13 Thread twoflower
Further observation: I have now managed to get rid of the blocking. I am not
sure if you are familiar with the NHibernate ORM, but it has a concept of
stateful and stateless sessions. A session holds a connection to the database,
and a transaction is created on a particular session. In this case, 'begin
transaction' and the SELECT statement were executed in the context of the
stateful session, while the UPDATE was executed on a stateless session. I am
not sure how this situation manifests on Postgres, but since the 'blocked'
and 'blocking' locks apparently belong to the same transaction, it does not
look like it should matter - except it does.



--
View this message in context: 
http://postgresql.nabble.com/SELECT-blocks-UPDATE-tp5862040p5862097.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] SELECT blocks UPDATE

2015-08-13 Thread twoflower
The Postgres version is 9.3.9.

The actual output of the lock query is (I added *locktype* and *mode*
columns from the *pg_locks* table)


*blocked_pid*: 7574
*blocked_statement*: UPDATE TRANSLATION SET fk_assignment_queue_item =
1009184 WHERE id IN (47049861)
*blocked_locktype*: transactionid
*blocked_mode*: ShareLock
*blocked_duration*: 00:35:01.81106
*blocking_pid*: 7569
*blocking_statement*: select tmtranslat0_.id as id164_0_, tmtranslat1_.id as
id101_1_, tmlanguage2_.id as id73_2_, tmtranslat0_.status as status164_0_,
...
*blocking_locktype*: transactionid
*blocking_mode*: ExclusiveLock
*blocking_duration*: 00:35:03.017109


User names are irrelevant, so I omitted them. Also, the *blocking_statement*
is actually cut off even before the FROM clause, but there is only one
SELECT query issued at that moment which matches the start:

select ... from TRANSLATION tmtranslat0_
left outer join TRANSLATION_UNIT tmtranslat1_ on tmtranslat0_.fk_id_translation_unit = tmtranslat1_.id
left outer join LANGUAGE tmlanguage2_ on tmtranslat0_.fk_id_language = tmlanguage2_.id
where tmtranslat0_.id in (47049860, 47049861, 47049862)
order by tmtranslat0_.id asc


I also suspected a SELECT FOR UPDATE query, but it's not the case. Also, I
don't use these at all in the application.


Tom Lane-2 wrote
 So either the SELECT is a SELECT FOR UPDATE, or it's part of a transaction
 that's done data changes in the past.

If these are the only two explanations, it must be the latter then. What I
still don't understand - these two statements are part of the same
transaction (because the lock query joins on the lock's transaction id), so
it looks like a transaction blocking itself. As I think about it now, it
does not even make sense to me /why/ the lock query joins on
lock.transactionid - I would expect two locks to conflict with each other
mostly when they are taken within /different/ transactions.

As for other context, I fail to see how this situation is special or
different from any other... Is there any pattern I should be looking for?
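For what it's worth, the reason the monitoring query joins on transactionid (a general sketch, not specific to this case): every transaction holds an ExclusiveLock on its own transaction id, and any session that has to wait for a row that transaction has modified requests a ShareLock on that same id, which is only granted when the holder commits or aborts. So two pg_locks rows sharing one transactionid belong to two different backends:

```sql
-- While one session is blocked behind another, this shows a granted
-- ExclusiveLock (the holder) and an ungranted ShareLock (the waiter)
-- on the same transactionid, with different pids:
SELECT pid, locktype, transactionid, mode, granted
FROM pg_catalog.pg_locks
WHERE locktype = 'transactionid'
ORDER BY transactionid, granted DESC;
```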



--
View this message in context: 
http://postgresql.nabble.com/SELECT-blocks-UPDATE-tp5862040p5862091.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

[GENERAL] Re: The fastest way to update thousands of rows in moderately sized table

2015-07-24 Thread twoflower
林士博 wrote
 Can you post execution plan of the original update sql.
 EXPLAIN (ANALYZE ON, BUFFERS ON) update TRANSLATION set fk_assignment = null where fk_job = 1000;

Here it is:

Update on TRANSLATION  (cost=0.56..9645.13 rows=3113 width=391) (actual time=35091.036..35091.036 rows=0 loops=1)
  Buffers: shared hit=74842343 read=7242 dirtied=7513
  ->  Index Scan using TRANSLATION_idx_composite_job_last_revision on TRANSLATION  (cost=0.56..9645.13 rows=3113 width=391) (actual time=0.042..24.147 rows=8920 loops=1)
        Index Cond: (fk_job = 59004)
        Buffers: shared hit=626

Planning time: 0.362 ms
Execution time: 35091.192 ms



--
View this message in context: 
http://postgresql.nabble.com/The-fastest-way-to-update-thousands-of-rows-in-moderately-sized-table-tp5859144p5859197.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

[GENERAL] Re: The fastest way to update thousands of rows in moderately sized table

2015-07-24 Thread twoflower
Thank you, I will look into those suggestions. 

Meanwhile, I started experimenting with partitioning the table into smaller
tables, each holding rows with ID spanning 1 million values and using this
approach, the UPDATE takes 300ms. I have to check if all the SELECTs I am
issuing against the original table keep their performance, but so far it
seems they do, if the appropriate indexes are present on the child tables. I
was worried about the overhead of each query having to go through all
(currently) 58 partition tables, but it seems like it's not that big of a
deal.
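The inheritance-based scheme that 9.4 offers for this can be sketched as follows (declarative partitioning only arrived in PostgreSQL 10; names and the range are illustrative):

```sql
-- One child per 1-million-ID slice, attached via inheritance:
CREATE TABLE translation_p0 (
    CHECK (id >= 0 AND id < 1000000)
) INHERITS (translation);

-- Each child needs its own indexes; they are not inherited:
CREATE INDEX ON translation_p0 (fk_job);

-- With constraint_exclusion = partition (the default), the planner uses the
-- CHECK constraints to skip children whose range cannot match the query:
-- EXPLAIN SELECT * FROM translation WHERE id IN (47049860, 47049861);
```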



--
View this message in context: 
http://postgresql.nabble.com/The-fastest-way-to-update-thousands-of-rows-in-moderately-sized-table-tp5859144p5859203.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




[GENERAL] Re: The fastest way to update thousands of rows in moderately sized table

2015-07-23 Thread twoflower
Adrian Klaver-4 wrote
 Have you tried wrapping the above in a BEGIN/COMMIT block?

Yes, I am running the tests inside a BEGIN TRANSACTION / ROLLBACK block.



--
View this message in context: 
http://postgresql.nabble.com/The-fastest-way-to-update-thousands-of-rows-in-moderately-sized-table-tp5859144p5859148.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

[GENERAL] The fastest way to update thousands of rows in moderately sized table

2015-07-23 Thread twoflower
Hello,

I have a table with 30 million records in which I need to update a single
column for a couple of thousands of rows, let's say 10 000. The new column
value is identical for all matching rows. Doing

update TRANSLATION set fk_assignment = null where fk_job = 1000;

takes 45 seconds. I understand that an UPDATE is basically an INSERT of a new
row version followed by a DELETE of the old one, but I was hoping I could do
better than that. I found a suggestion to use a temporary table to speed
things up, so now I have this:

create unlogged table temp_table as
  select id, fk_assignment
  from TRANSLATION
  where fk_job = 1000;

update temp_table set fk_assignment = null;

update TRANSLATION _target
set fk_assignment = _source.fk_assignment
from temp_table _source
where _target.id = _source.id;

drop table temp_table;


This got me to about 37 seconds. Still pretty slow. The TRANSLATION table has
an index and a foreign key constraint on fk_assignment. Removing the
constraint brought very little benefit. Removing the index is probably out of
the question, as these kinds of operations are very frequent and the table
itself is used heavily, including the index. Execution plan:

Update on TRANSLATION _target  (cost=0.56..116987.76 rows=13983 width=405) (actual time=43262.266..43262.266 rows=0 loops=1)
  ->  Nested Loop  (cost=0.56..116987.76 rows=13983 width=405) (actual time=0.566..146.084 rows=8920 loops=1)
        ->  Seq Scan on temp_segs _source  (cost=0.00..218.83 rows=13983 width=22) (actual time=0.457..13.994 rows=8920 loops=1)
        ->  Index Scan using TRANSLATION_pkey on TRANSLATION _target  (cost=0.56..8.34 rows=1 width=391) (actual time=0.009..0.011 rows=1 loops=8920)
              Index Cond: (id = _source.id)

Planning time: 1.167 ms
Execution time: 43262.577 ms

Is there anything else worth trying? Are these numbers something to be
expected, from your experience?

I have Postgres 9.4, the database is on SSD.

Thank you very much for any suggestions.

Standa



--
View this message in context: 
http://postgresql.nabble.com/The-fastest-way-to-update-thousands-of-rows-in-moderately-sized-table-tp5859144.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

[GENERAL] Re: The fastest way to update thousands of rows in moderately sized table

2015-07-23 Thread twoflower
林士博 wrote
 Try creating an index on TRANSLATION fk_job.

The index is already there.



--
View this message in context: 
http://postgresql.nabble.com/The-fastest-way-to-update-thousands-of-rows-in-moderately-sized-table-tp5859144p5859191.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

[GENERAL] Re: Server tries to read a different config file than it is supposed to

2015-05-25 Thread twoflower

 I think that that unwritable postgresql.conf file had probably been
 hanging around in your data directory for some time.  It was not causing
 any particular problem until we decided we ought to fsync everything in
 the data directory after a crash.  So this is indeed the same case
 Christoph was complaining about.  But really you should remove that file
 not just change its permissions; as is it's just causing confusion.

Thank you very much Tom for explaining the cause, now I will make sure I am
not leaving any unused files in the data directory anymore.



--
View this message in context: 
http://postgresql.nabble.com/Server-tries-to-read-a-different-config-file-than-it-is-supposed-to-tp5850752p5850896.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

[GENERAL] Re: Server tries to read a different config file than it is supposed to

2015-05-24 Thread twoflower

 From root, presumably ... 

Yes 
 I thought of a different theory: maybe the server's complaint is not due
 to trying to read that file as a config file, but it's just because there
 is an unreadable/unwritable file in the data directory.  See Christoph
 Berg's complaint at
 http://www.postgresql.org/message-id/20150523172627.ga24...@...
 This would only apply if the OP was trying to use this week's releases,
 though.  Also, I thought the fsync-everything code would only run if the
 server had been shut down uncleanly.  Which maybe it was, but that bit of
 info wasn't provided either.

I was doing this after I upgraded to 9.4.2, yes. As for the shut down: I
suspect the server was rebooted without explicitly stopping Postgres. Not
sure how this plays out in terms of cleanliness. This is everything relevant
in the log file after I ran the start script:
2015-05-23 10:36:39.999 GMT [2102][0]: [1] LOG: database system was
interrupted; last known up at 2015-05-23 08:59:41 GMT
2015-05-23 10:36:40.053 GMT [2102][0]: [2] FATAL: could not open file
/storage/postgresql/9.4/data/postgresql.conf: Permission denied
2015-05-23 10:36:40.054 GMT [2100][0]: [3] LOG: startup process (PID 2102)
exited with exit code 1
2015-05-23 10:36:40.054 GMT [2100][0]: [4] LOG: aborting startup due to
startup process failure
I also tried the same situation on two other Ubuntu servers with the same
version of Postgres (also upgraded to 9.4.2) and the same directory layout -
made *postgresql.conf* in the data directory inaccessible, even renamed it,
and everything worked fine. The only difference is that these are
streaming-replicated standby servers. They also had been restarted without
explicitly terminating Postgres.




--
View this message in context: 
http://postgresql.nabble.com/Server-tries-to-read-a-different-config-file-than-it-is-supposed-to-tp5850752p5850829.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

[GENERAL] Server tries to read a different config file than it is supposed to

2015-05-23 Thread twoflower
I thought I understood how specifying a config file path for the server
works, but that's apparently not the case.
The cluster data is at */storage/postgresql/9.4/data*.
The config files are at */etc/postgresql/9.4/main* (this is the default
location on Ubuntu).
This is how the beginning of */etc/postgresql/9.4/main/postgresql.conf*
looks like:
data_directory = '/storage/postgresql/9.4/data'
hba_file = '/etc/postgresql/9.4/main/pg_hba.conf'
ident_file = '/etc/postgresql/9.4/main/pg_ident.conf'
So I wrote a few scripts to make my life easier, e.g. *pg94start.sh*:
su postgres -c /usr/lib/postgresql/9.4/bin/pg_ctl -D
/storage/postgresql/9.4/data -o '-c
config_file=/etc/postgresql/9.4/main/postgresql.conf'
But running this script did not work, the server would not start. So I
checked the log file and there was:
*FATAL: could not open file /storage/postgresql/9.4/data/postgresql.conf:
Permission denied*
After fixing the ownership of this file, it worked.
What's the reason the server was trying to access that file? Why does the
override given by the *config_file* parameter not work?
Thank you.



--
View this message in context: 
http://postgresql.nabble.com/Server-tries-to-read-a-different-config-file-than-it-is-supposed-to-tp5850752.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

[GENERAL] Re: Server tries to read a different config file than it is supposed to

2015-05-23 Thread twoflower

 Testing this, the problem appears to be that you forgot the keyword
 start, so pg_ctl didn't really do anything.

I am sorry, that was just a mistake on my part here, it is in the script.
 I suspect this was left over from some previous attempt.

It doesn't look like it. I tried several times, always looked in the log
file after that. The timestamp also suggests it was a result of my attempts.
 One possible theory is that you had an include directive in the config
 file in /etc, causing it to try to read the other one? 

I checked now, no include in there.



--
View this message in context: 
http://postgresql.nabble.com/Server-tries-to-read-a-different-config-file-than-it-is-supposed-to-tp5850752p5850763.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

Re: [GENERAL] Emulating flexible regex replace

2014-10-24 Thread twoflower
Thank you Francisco, that's a clever idea. However, I don't think this would
reduce the complexity, since the target pattern can contain

1) regular back-references (referring to its own matches)
2) the special source text references I mentioned

Obviously, these will have to be written in a different way and this I
believe brings me back to start (or in other words, it's not a silver bullet
obviating the need to rewrite the target pattern manually).

I will probably end up writing a function in PL/Perl, which Tom Lane
suggested, since I'm apparently not skilled enough in SQL to do it using a
single query without custom functions.
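A minimal sketch of that PL/Perl route (hypothetical function name; assumes only numeric $n references in the target pattern and trusted pattern input):

```sql
-- requires: CREATE EXTENSION plperl;
CREATE OR REPLACE FUNCTION target_matches(src text, tgt text,
                                          src_pat text, tgt_pat text)
RETURNS boolean AS $$
    my ($src, $tgt, $src_pat, $tgt_pat) = @_;
    my @caps = ($src =~ /$src_pat/) or return 'f';  # source must match first
    # replace each $n with the corresponding captured group, quoted literally
    $tgt_pat =~ s/\$(\d+)/\Q$caps[$1 - 1]\E/g;
    return ($tgt =~ /$tgt_pat/) ? 't' : 'f';
$$ LANGUAGE plperl;

-- SELECT * FROM segment
--  WHERE target_matches(source, target, '([0-9]+) source text', '$1 target text');
```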



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Emulating-flexible-regex-replace-tp5824058p5824109.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Emulating flexible regex replace

2014-10-24 Thread twoflower
Thank you Francisco.

In fact, I am already solving part of the problem in my application -
fetching from the DB the records matching the source pattern and then
filtering them in the application's memory by matching against the target
pattern, with the references replaced (it's a breeze in C#).

It works very well. However, I am not completely satisfied with it, as it's
unnecessarily loading a larger data set than it absolutely must. Besides, I'd
also like to get some experience in DB programming. That's why the PL/Perl
way seems pretty attractive to me.



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Emulating-flexible-regex-replace-tp5824058p5824174.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




[GENERAL] Emulating flexible regex replace

2014-10-23 Thread twoflower
Hello,

my scenario is this: I have a *SEGMENT* table with two text fields, *source*
and *target*. From the user, I get the following input:

/source pattern/
/target pattern/

Where both patterns are regexes and moreover the target pattern contains
references to the source in the following way: 

Supposing *source* matches the /source pattern/, the $/n/ expressions inside
the /target pattern/ correspond to the captured groups inside *source*.

Example:

Source: 123 source text
Target: 123 target text
Source pattern: ([0-9]+) source text
Target pattern: $1 target text

This yields a successful match since $1 in the /target pattern/ is replaced
by 123 from the first captured group in *source* and the resulting string,
123 target text, matches the /target pattern/.

I would like to execute a query which for a given /source pattern/ and
/target pattern/ returns all rows from the *SEGMENT* table where *source*
matches the /source pattern/ and *target* matches the /target pattern/ after
it has its references replaced with the actual captured groups.

I believe this is not possible since *regexp_replace* expects a string as
its /replacement/ argument which is not enough in this case. This kind of
stuff is easy in e.g. C# where for regex replace you can provide a function
which receives the (in this case) reference index as its argument and you
can build the replacement string using external knowledge.

However, as I am no pro in Postgres, I may be missing something and
therefore I ask: is it possible to somehow mimic the behavior of a
hypothetical *regexp_replace* which would accept a function of the
to-be-replaced value and return the replacement string based on it?

And as I am thinking about it, even that would not suffice since that
function would need to access not only the to-be-replaced value but also the
corresponding source pattern match.

Still, isn't there some super clever way to do that?








--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Emulating-flexible-regex-replace-tp5824034.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

