[GENERAL] ways of monitoring logical decoding performance

2017-11-12 Thread Weiping Qu

Dear all,

I'd like to monitor the resource utilization of logical decoding (e.g. 
in version 9.5).
For example, I'd like to see the WAL buffer hit ratio, i.e. how much of 
the reading done for logical decoding comes from in-memory pages.

This can be computed as blks_hit/(blks_read+blks_hit) from pg_stat_database.
But that value may also include numbers incurred by other concurrent sessions.
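For instance, a minimal sketch of that computation (note it is
database-wide, so it counts every backend, not just the one doing the
logical decoding):

SELECT datname,
       blks_hit::float / NULLIF(blks_hit + blks_read, 0) AS cache_hit_ratio
  FROM pg_stat_database
 WHERE datname = current_database();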

Is there a clean way to look at the performance of logical decoding in 
isolation?


Regards,
Weiping





[GENERAL] wal configuration setting for fast streaming replication with logical decoding

2017-11-08 Thread Weiping Qu

Hi,

I intend to increase the speed of streaming replication with logical 
decoding, using the following configuration:


wal_level = logical
fsync = on
synchronous_commit = off
wal_sync_method = fdatasync
wal_buffers = 256MB
wal_writer_delay = 2s

checkpoint_timeout = 15min
max_wal_size=10GB

The intention is to first let WAL records accumulate in the WAL buffers 
(hence increasing wal_buffers to 256MB) by turning off synchronous_commit 
and raising wal_writer_delay to 2 seconds.
The hope is that the target WAL records can then be fetched directly from 
RAM by streaming replication to external nodes, thus reducing I/O.
Besides, to avoid expensive checkpoints, checkpoint_timeout and 
max_wal_size are also increased.
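As a side check, how far each consumer lags behind the current WAL insert 
position can be watched with something like the following (a sketch, using 
the pre-10 function and column names):

SELECT application_name,
       pg_xlog_location_diff(pg_current_xlog_insert_location(),
                             sent_location) AS send_lag_bytes
  FROM pg_stat_replication;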


However, the advice commonly given online is that wal_buffers should be 
no larger than one WAL segment file, i.e. 16MB,
and that wal_writer_delay should stay at the millisecond level.
Therefore, I would like to hear your opinions.

Besides, I would also like to fetch WAL records periodically (say, every 
150 ms), which would cause WAL records to pile up in memory during each 
wal_writer_delay interval.
As also described online, when XLogInsertRecord is called, a new record 
is inserted into the WAL buffers; if there is no space, a few WAL records 
are moved out to the kernel cache (buffer cache). Should I also set 
vm.dirty_background_ratio = 5 and vm.dirty_ratio = 80 to avoid disk I/O?


Looking forward to your kind help.
Best,
Weiping





Re: [GENERAL] Question regarding logical replication

2017-10-27 Thread Weiping Qu
Thanks, Francisco. The plots gave us the same impression: cache reads at 
small lags, with high cache hit rates, really don't put an extra burden 
on the original write throughput of the OLTP transactions. And log-based 
change data capture is the most efficient and least harmful approach 
compared to trigger-based and timestamp-based CDC.


Weiping


On 27.10.2017 14:03, Francisco Olarte wrote:

On Fri, Oct 27, 2017 at 12:04 PM, Weiping Qu <q...@informatik.uni-kl.de> wrote:

That's a good point; we hadn't accounted for disk caching.
Is there any way to confirm this in PostgreSQL?

I doubt it; as its name indicates, the cache should be hidden from the db server.

You could monitor the machine with varying lags and watch the disk-cache
hit ratio, or monitor the throughput loss. A disk-cache effect should
exhibit a flat part for small lags, where you mostly do cache reads,
then a rising part as you begin reading from disk, stabilizing
asymptotically (as most of the fraction of reads comes from disk; though
it could also exhibit a jump if you are unlucky and evict pages you'll
need soon). But it is not a simple thing to measure, especially with a
mixed workload and long delays.

The xlog can do strange things. IIRC it is normally write-only (only
read for crash recovery, for archiving (shipping), and for log-based
replication slots), but postgres recycles segments (which can have an
impact on big-memory machines). I do not know to what extent a modern OS
can detect the access pattern and do things like evict the log pages
early after sync.

Francisco Olarte.






Re: [GENERAL] Question regarding logical replication

2017-10-27 Thread Weiping Qu

That's a good point; we hadn't accounted for disk caching.
Is there any way to confirm this in PostgreSQL?

Weiping


On 27.10.2017 11:53, Francisco Olarte wrote:

On Thu, Oct 26, 2017 at 10:20 PM, Weiping Qu <q...@informatik.uni-kl.de> wrote:


However, the plots showed a different trend (currently I don't have the
plots on my laptop): the more frequently the CDC processes read from the
logical slots, the less overhead is incurred on PostgreSQL, which leads
to higher throughput.

Have you accounted for disk caching? Your CDC may be getting the log from
the cache when running with little lag, but forcing the server to read
from disk when it falls behind.

Francisco Olarte.






Re: [GENERAL] Question regarding logical replication

2017-10-26 Thread Weiping Qu

Hi. Thank you very much for such a detailed explanation. :)
We are currently testing the overhead of the log-based Change Data 
Capture method (i.e. logical decoding) on PostgreSQL.
The test setup consists of one process running TPC-C on node1, which 
issues transactions against a database residing on node2, which in turn 
is accessed by three CDC processes running on three different nodes, say 
node3-5.
The only difference between the three CDC processes is the set of tables 
they are interested in; what they have in common is that each of them 
sleeps periodically while continuously capturing changes from node2.


We always measured the impact of CDC on the original TPC-C workload by 
looking at the transaction throughput on node1.
We selected 0ms, 200ms and 400ms as three different sleeping periods for 
the CDC processes, to control their impact on node2.
We expected that the longer the sleeping period set for CDC, the less 
impact on PostgreSQL, since fewer I/Os are triggered to fetch data from 
the xlog.
However, the plots showed a different trend (currently I don't have the 
plots on my laptop): the more frequently the CDC processes read from the 
logical slots, the less overhead is incurred on PostgreSQL, which leads 
to higher throughput.


That's the reason I asked the previous question about whether a logical 
slot is implemented as a queue:
without continuous dequeuing, the "queue" gets larger and larger, thus 
slowing down the OLTP workload.
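To make the "dequeuing" concrete, a sketch of the SQL-level interface 
(slot name borrowed from the documentation example; our CDC processes use 
the equivalent replication-protocol commands): get_changes consumes, i.e. 
advances the slot past the returned changes, while peek_changes does not, 
so only periodic consumption keeps retained WAL from piling up.

-- consumes: the slot position advances past the returned changes
SELECT * FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL);
-- peeks only: the same changes would be returned again next time
SELECT * FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL);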


Regards,

Weiping


On 26.10.2017 21:42, Alvaro Aguayo Garcia-Rada wrote:

Hi. I've had experience with both BDR & pglogical. For each replication 
slot, postgres saves an LSN which points to the last xlog entry read by 
the client. When a client does not read the xlog, for example because it 
cannot connect to the server, the distance between that LSN 
(pg_replication_slots.restart_lsn) and the current xlog location 
(pg_current_xlog_insert_location()) will grow over time. Not sure about 
the following, but postgres will not remove old xlog segments which are 
still pending to be read by any replication slot. The same situation can 
also arise, to a lesser degree, if the client cannot read WAL as fast as 
it's produced. Either way, what will happen is that the xlog will grow 
more and more. That will probably not impact performance, as the xlog is 
written anyway; but if you don't have enough free space, you could end up 
with your partition full of xlog.
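A sketch of that check, using the pre-10 names mentioned above (how much 
WAL each slot is currently holding back, in bytes):

SELECT slot_name,
       pg_xlog_location_diff(pg_current_xlog_insert_location(),
                             restart_lsn) AS retained_bytes
  FROM pg_replication_slots;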

Regards,

Alvaro Aguayo
Operations Manager
Open Comb Systems E.I.R.L.

Office: (+51-1) 3377813 | Mobile: (+51) 995540103 | (+51) 954183248
Web: www.ocs.pe

- Original Message -
From: "Weiping Qu" <q...@informatik.uni-kl.de>
To: "PostgreSql-general" <pgsql-general@postgresql.org>
Sent: Thursday, 26 October, 2017 14:07:54
Subject: [GENERAL] Question regarding logical replication

Dear postgresql community,

I have a question about the implementation logic behind logical
replication.

Assume a replication slot created on the master node: will more and more
data pile up in the slot, and the size of the replication slot
continuously increase, if there is no slave reading/dequeuing data out of
this slot (or only very slowly), thus incurring high I/O and slowing down
the transaction throughput?

Looking forward to your explanation.


Kindly review and please share your comments on this matter.










[GENERAL] Question regarding logical replication

2017-10-26 Thread Weiping Qu

Dear postgresql community,

I have a question about the implementation logic behind logical 
replication.


Assume a replication slot created on the master node: will more and more 
data pile up in the slot, and the size of the replication slot 
continuously increase, if there is no slave reading/dequeuing data out of 
this slot (or only very slowly), thus incurring high I/O and slowing down 
the transaction throughput?


Looking forward to your explanation.


Kindly review and please share your comments on this matter.






Re: [GENERAL] commit time in logical decoding

2016-03-01 Thread Weiping Qu

Hello Artur,

Thank you for your reply.
Should it work in a stable version like PostgreSQL 9.4? That would be 
enough for me; I don't care whether the fix is in 9.6 or 9.5.

Nevertheless I will try it using 9.4.

Regards,
Weiping

On 01.03.2016 22:04, Artur Zakirov wrote:

Hello, Weiping

It seems that it is a bug. Thank you for the report. I guess it will be 
fixed soon.


On 01.03.2016 17:36, Weiping Qu wrote:

Dear postgresql general mailing list,

I am currently using the logical decoding feature (version 9.6, I think,
as far as I can tell from the source; wal_level: logical,
max_replication_slots: > 1, track_commit_timestamp: on, though I am not
sure whether that last one matters here).
Following the online documentation, everything works fine until I input

SELECT * FROM pg_logical_slot_peek_changes('regression_slot', NULL,
NULL, 'include-timestamp', 'on');


I always get 1999-12-31 16:00 as the commit time for arbitrary
transactions with DML statements.
After several tries, I realized that the txn->commit_time returned was
always 0.
Could you help me by pointing out what could be wrong in my case? Any
missing parameter settings?

Thank you in advance,
Kind Regards,
Weiping











[GENERAL] commit time in logical decoding

2016-03-01 Thread Weiping Qu

Dear postgresql general mailing list,

I am currently using the logical decoding feature (version 9.6, I think, 
as far as I can tell from the source; wal_level: logical, 
max_replication_slots: > 1, track_commit_timestamp: on, though I am not 
sure whether that last one matters here).

Following the online documentation, everything works fine until I input

SELECT * FROM pg_logical_slot_peek_changes('regression_slot', NULL, 
NULL, 'include-timestamp', 'on');
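For reference, the slot was created as in the documentation's example (a 
sketch, assuming the test_decoding output plugin):

SELECT * FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');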



I always get 1999-12-31 16:00 as the commit time for arbitrary 
transactions with DML statements (which looks like the PostgreSQL epoch, 
2000-01-01 00:00 UTC, rendered in my local time zone).
After several tries, I realized that the txn->commit_time returned was 
always 0.
Could you help me by pointing out what could be wrong in my case? Any 
missing parameter settings?


Thank you in advance,
Kind Regards,
Weiping




[GENERAL] Confusing with commit time usage in logical decoding

2016-02-29 Thread Weiping Qu
If you received this message twice, sorry for the noise; I had not 
subscribed successfully before, due to a conflicting email domain.


Dear postgresql general mailing list,

I am currently using the logical decoding feature (version 9.6, I think, 
as far as I can tell from the source; wal_level: logical, 
max_replication_slots: > 1, track_commit_timestamp: on, though I am not 
sure whether that last one matters here).

Following the online documentation, everything works fine until I input

SELECT * FROM pg_logical_slot_peek_changes('regression_slot', NULL, 
NULL, 'include-timestamp', 'on');



I always get 1999-12-31 16:00 as the commit time for arbitrary 
transactions with DML statements.
After several tries, I realized that the txn->commit_time returned was 
always 0.
Could you help me by pointing out what could be wrong in my case? Any 
missing parameter settings?


Thank you in advance,
Kind Regards,
Weiping




Re: [GENERAL] bytea vs large object in version 8

2005-04-11 Thread Weiping
I think bytea is a little bit slower than large objects.
Regards
Laser
If speed (add/get) is the only concern, image files could be big (~10M),
and the database only serves as storage: in PostgreSQL 8, which type
(bytea vs large object) is the preferred one? Is it true, in general,
that bytea inserts are slower?




Re: [GENERAL] support

2004-11-29 Thread Weiping
 wrote:

hi, pgsql-general

I am a Chinese user. I have installed PostgreSQL 8.0 for Windows on my 
computer; it's very good.
But I found a problem: when I use select * from dcvalue where 
text_value='' to search for records, 
the system returns no results.

Seems like your locale setting doesn't match your database encoding.
You should use EUC_CN or Unicode as your database encoding, and zh_CN.utf8
as your locale setting.
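A quick way to check the relevant settings from psql (a sketch; the SET 
line applies only if the client really sends GBK-encoded text):

SHOW server_encoding;          -- should be EUC_CN or UNICODE
SHOW lc_ctype;                 -- should match, e.g. zh_CN.utf8
SET client_encoding = 'GBK';   -- declare what the client actually sends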

regards

laser



[GENERAL] Could we hide the table name listing from unprivileged user?

2004-11-27 Thread Weiping
Hi,
A problem we are facing: although we revoke all privileges from
one database user, he can still see that the table exists
in psql via the \dt command (though he can't select * from it,
of course). How can we hide the table name listing from
him?

We are using 7.4.x and 8.0 beta, with ODBC, JDBC and libpq.
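For reference, a sketch of the kind of revocation we mean (table and user 
names are hypothetical):

REVOKE ALL ON some_table FROM some_user;
-- some_user can no longer read some_table, but \dt still lists it,
-- because psql reads the table names from the pg_catalog tables.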

regards

Laser



[GENERAL] PGCLIENTENCODING behavior of current CVS source

2004-11-16 Thread Weiping
hi,
I'm running a postgres built from CVS source, maybe one day behind
the main site, and found one problem:

I've set the PGCLIENTENCODING environment variable before, for ease of
typing, like export PGCLIENTENCODING=GBK in my .profile,
but after I upgraded my postgresql to the current CVS I found a
problem. The database was initialized using:

initdb --locale=zh_CN.utf8 ...

so the database connected to is UNICODE encoded. But when I
use psql to log in to one of my databases, it responds:

psql: FATAL: invalid value for parameter client_encoding: GBK

but when I remove the PGCLIENTENCODING setting:

unset PGCLIENTENCODING

I can log in, but when I do a:

DHY_JJG=# \dt
ERROR: invalid byte sequence for encoding UNICODE: 0xed

However, after:

DHY_JJG=# \encoding gbk
DHY_JJG=# \dt

everything is OK. The LANG setting is zh_CN.gbk, so I guess it's
a localization problem, maybe the encoding of those .po files:
with psql -E we can see that the query contains the localized
string in an AS clause. I don't know the best way to fix that;
maybe encode those .po files in UNICODE?

regards

Laser




Re: [GENERAL] how much ram do i give postgres?

2004-10-20 Thread Weiping

It's slow due to several things happening all at once. There are a lot
of inserts and updates happening. There is periodically a bulk insert
of 500k - 1 million rows. I'm doing a vacuum analyze every
hour due to the amount of transactions happening, and a vacuum full
every night. All this has caused selects to be very slow. At times, a
select count(1) from a table will take several minutes. I don't think
selects would have to wait on locks held by inserts/updates, would they?
I would just like to do anything possible to help speed this up.

If there are really many rows in the table, select count(1) will be a 
little slow, because postgresql uses a sequential scan to count the rows. 
For other kinds of queries, checking whether there is an index on the 
search condition, or using the EXPLAIN command to see the query plan, 
would help greatly.
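For instance (table name hypothetical):

EXPLAIN SELECT count(1) FROM some_big_table;
-- a "Seq Scan on some_big_table" node in the plan confirms the full walk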

By the way, which version of postgresql are you on? Older versions (7.4?) 
still suffer from index space bloat.

regards
Laser


[GENERAL] About upper() and lower to handle multibyte char

2004-10-19 Thread Weiping
Hi,

while upgrading to 8.0 (beta3) we ran into a problem:

we have a database whose encoding is UNICODE;
when we do queries like:
select upper(''); --select some multibyte characters,
postgresql responds:

ERROR: invalid multibyte character for locale

but when we do it in a SQL_ASCII-encoded database,
it works and returns the string unchanged, which is what we consider the
correct result.

I've searched the archives and found that in 8.0 the upper()/lower()
functions were changed to handle multibyte characters.
But what is the expected behavior of these two functions when dealing
with multibyte characters?

Another question: from the archives, I know that on systems whose
wctype.h provides toupper/tolower functions, postgresql supports
multibyte upper/lower; my system (Slackware 10) has wctype.h,
so why do I still get the ERROR? How can I check whether my postgresql
installation comes with multibyte upper/lower support?

This problem makes it very difficult for us to use upper/lower on
columns holding characters of more than one language, like Chinese and
English characters in a Unicode database, because the transaction aborts
with the error above, which breaks our application badly.

Thanks and any help would be appreciated

Laser




Re: [GENERAL] About upper() and lower to handle multibyte char

2004-10-19 Thread Weiping
Tom Lane wrote:
What locale did you initdb in?  The most likely explanation for this
is that the LC_CTYPE setting is not unicode-compatible.

Emm, I ran initdb --no-locale, which means LC_CTYPE=C; but if I don't use
that, there are some other issues with multibyte comparison (the =
operator). Will try again.

Thanks!
Laser


Re: [GENERAL] About upper() and lower to handle multibyte char

2004-10-19 Thread Weiping
Weiping wrote:

 Tom Lane wrote:

 What locale did you initdb in? The most likely explanation for this
 is that the LC_CTYPE setting is not unicode-compatible.



finally I got it working: at initdb time, we should use a matched locale
setting and database encoding, like:

initdb --locale=zh_CN.utf8 -E UNICODE ...

then everything is OK (on my platforms: Slackware 10 and RH9).

Emm, I think it would be better to add some words to our docs telling the
user to do so, because we always used --no-locale at initdb time: the
default locale setting of many Linux distros (normally en_US) causes
multibyte character comparison to fail (like select '' = '', that's
select 'one'='two' in Chinese, but it returns true), and we use UNICODE
as the database encoding to store multi-language characters (like
Japanese and Korean). I don't know whether the locale setting
(zh_CN.utf8) would conflict with those settings.

Any better suggestions?

Thanks

Laser






Re: [GENERAL] Management system for PostgreSQL?

2004-10-09 Thread Weiping
Could phpPgAdmin serve your purpose?
http://phppgadmin.sourceforge.net/



Re: [GENERAL] A simple question about Read committed isolation level

2004-04-16 Thread weiping he
Tom Lane wrote:

weiping he [EMAIL PROTECTED] writes:
 

txn1:                                txn2:
begin;                               begin;
update table_a set col = col + 1;    update table_a set col = col + 1;
end;                                 end;

If two transactions begin at exactly the same time,
what's the value of 'col' after both transactions have committed
at the Read Committed level? Is it 3 or 2?
My understanding is that the result is 3.

If the second xact to lock the row is READ COMMITTED, you get 3.
If it's SERIALIZABLE you get an error.  In no case will you silently
lose an update.
 

Does this mean that I must use some kind of lock (... FOR UPDATE, for
example) to lock that row in order to get the result 3 at the READ
COMMITTED level? My understanding is that even in an MVCC environment,
the updates would still be executed sequentially (by means of some kind
of lock).

What confuses me is: in an MVCC environment, what is postgresql's
procedure for locking when two transactions update the same row at the
same time?
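For the record, a sketch of the explicit locking I mean (not actually
required for the result 3, per Tom's answer: in READ COMMITTED the second
plain UPDATE already blocks on the row lock and then re-reads the latest
committed version before applying col + 1):

BEGIN;
SELECT col FROM table_a FOR UPDATE;  -- take the row locks explicitly
UPDATE table_a SET col = col + 1;
COMMIT;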

Thank you.

Laser



[GENERAL] how to group by a joined query?

2003-09-01 Thread Weiping He
Suppose I've got two tables:

laser_uni=# \d t1
        Table "public.t1"
 Column | Type | Modifiers
--------+------+-----------
 name   | text |
 addr   | text |

laser_uni=# \d t2
        Table "public.t2"
 Column |  Type   | Modifiers
--------+---------+-----------
 name   | text    |
 len    | integer |
 of     | integer |
and I want to use a join to select data and then group by one column, 
like this:

laser_uni=# select t1.name, t1.addr, t2.name, t2.len, t2.of from t1 
right join t2 on t1.name=t2.name group by t2.name;
ERROR:  Attribute t1.name must be GROUPed or used in an aggregate function

It seems I must group by all of those fields:

laser_uni=# select t1.name as t1name, t1.addr as t1addr, t2.name as 
t2name, t2.len, t2.of from t1 right join t2 on t1.name=t2.name group by 
t1.name, t1.addr, t2.name, t2.len, t2.of;
 t1name | t1addr | t2name | len | of
--------+--------+--------+-----+----
        |        | henry  |   2 |  4
        |        | laser  |   4 |  4
(2 rows)

Is this required by the SQL specification, or is it postgresql-specific?

Thanks

Laser



Re: [GENERAL] how to group by a joined query?

2003-09-01 Thread Weiping He
Weiping He wrote:

Suppose I've got two tables:

laser_uni=# \d t1
        Table "public.t1"
 Column | Type | Modifiers
--------+------+-----------
 name   | text |
 addr   | text |

laser_uni=# \d t2
        Table "public.t2"
 Column |  Type   | Modifiers
--------+---------+-----------
 name   | text    |
 len    | integer |
 of     | integer |
and I want to use a join to select data and then group by one 
column, like this:

laser_uni=# select t1.name, t1.addr, t2.name, t2.len, t2.of from t1 
right join t2 on t1.name=t2.name group by t2.name;
ERROR:  Attribute t1.name must be GROUPed or used in an aggregate 
function

It seems I must group by all of those fields:

laser_uni=# select t1.name as t1name, t1.addr as t1addr, t2.name as 
t2name, t2.len, t2.of from t1 right join t2 on t1.name=t2.name group 
by t1.name, t1.addr, t2.name, t2.len, t2.of;
 t1name | t1addr | t2name | len | of
--------+--------+--------+-----+----
        |        | henry  |   2 |  4
        |        | laser  |   4 |  4
(2 rows)

Is this required by the SQL specification, or is it postgresql-specific?

Thanks

Rereading the docs, it seems a DISTINCT ON clause solves my problem:

select distinct on (t2.len) t1.name as t1name, t1.addr as t1addr, 
t2.name as t2name, t2.len, t2.of from t1 right join t2 on t1.name=t2.name;

Thanks

Laser



[GENERAL] compile error on slackware 9.0 while --enable-thread-safety

2003-08-09 Thread Weiping He
When I remove --enable-thread-safety, everything builds fine.
What's the matter?
The error output:
---8<---
make[2]: Entering directory `/usr/laser/postgresql-7.4beta1/src/port'
gcc -O2 -g -Wall -Wmissing-prototypes -Wmissing-declarations 
-I../../src/include   -c -o path.o path.c
gcc -O2 -g -Wall -Wmissing-prototypes -Wmissing-declarations 
-I../../src/include   -c -o threads.o threads.c
threads.c: In function `pqGetpwuid':
threads.c:49: too few arguments to function `getpwuid_r'
threads.c:49: warning: assignment makes pointer from integer without a cast
threads.c: In function `pqGethostbyname':
threads.c:74: warning: passing arg 5 of `gethostbyname_r' from 
incompatible pointer type
threads.c:74: too few arguments to function `gethostbyname_r'
threads.c:74: warning: assignment makes pointer from integer without a cast
---8<---

Thanks and regards

Laser



Re: [GENERAL] Can postgres supports Chinese GB18030?

2003-07-28 Thread Weiping He


LitelWang wrote:

It is useful for me to have the Chinese tone sort order.
Is there any version for Cygwin?
Thanks for any advice.

I never tried GB18030 on Cygwin, but on Linux and other Unix systems
you may use gb18030 as the client-side encoding and UNICODE as the
backend encoding, and it works pretty well.
regards Laser



Re: [GENERAL] any body using Solaris8 with postgresql 7.3.3

2003-07-04 Thread Weiping He
Tom Lane wrote:

Weiping He [EMAIL PROTECTED] writes:
 

   I've met a wierd problem on a Solaris 8/sparc box with postgresql 7.3.3:
   the server would automatically shutdown after a period of time of not
   operating. The log show something like this:
   pmdie 2
   

Assuming signal 2 is SIGINT on Solaris (look in /usr/include/signal.h
to make sure, but that's pretty standard), this would indicate that
something is sending SIGINT to the postmaster.  The postmaster will
interpret that as a fast shutdown request.  So the problem is not with
the postmaster, but with whatever is sending the signal.
I suspect this isn't a platform problem so much as a setup mistake.
How are you launching the postmaster?  Is it possible it's still
connected to a controlling terminal?  (If so, the shell would probably
SIGINT the postmaster anytime you typed control-C.)
			regards, tom lane


 

emm, I'll check that. My first start method was:
pg_ctl start -l ~/pgrun.log
(I've exported PGDATA=/pgsqldata)

but it reports:
  pg_ctl: test: argument expected
I'm using ksh, I think. I've also tested this with the 7.4-devel version,
with the same result; ISTM there's a little bug in the pg_ctl script, but
I haven't dug into it yet to see if I can find the problem.

Later I used:
pg_ctl start > pgrun.log 2>&1
to start the program, and it runs OK. But then, the pmdie 2...

Thanks

Laser







Re: [GENERAL] any body using Solaris8 with postgresql 7.3.3

2003-07-04 Thread Weiping He
Tom Lane wrote:

Weiping He [EMAIL PROTECTED] writes:
 

Later I used:
pg_ctl start > pgrun.log 2>&1
to start the program, and it runs OK. But then, the pmdie 2...

Hm.  My first thought was that you needed a /dev/null in there too,
but it looks like pg_ctl does that for you.  The other likely
possibility is that you need a nohup in front of all of this.
We should check the theory though.  After you start the postmaster
using the above command, if you type control-C (or whatever your
favorite interrupt character is) on the same terminal, does the
postmaster shut down?
			regards, tom lane

 

OK, I'll test it. But it needs more time, because I can't access that box 
now; it's in the office. I'll report back later.

Thanks

Laser



Re: [GENERAL] Inserting Unicode into Postgre

2001-04-17 Thread He Weiping

Firestar wrote:

 Hi,

 I'm currently using PostgreSQL 7.0 on Solaris. My Java program receives
 strings in Big5 encoding and stores them in PostgreSQL (via JDBC).
 However, the inserted strings become multiple '?' (question marks)
 every time I do an insert command. And when I retrieve them via JDBC,
 the strings are still those question marks.

 Is the problem due to the Unicode encoding that Java String uses, or
 must I enable multibyte support in my postgres installation? If I
 enable multibyte support, should I create my table with Unicode
 support, or Big5?


Upgrade to the just-released 7.1;
postgres can now do the Unicode conversion for you
(thanks to Mr. Tatsuo Ishii).
I think you should enable both the --enable-multibyte and
--enable-unicode-conversion switches
when building postgresql.

regards

Laser





Re: [GENERAL] Psql Question

2000-09-18 Thread He Weiping (Laser Henry)

Danny wrote:

 - Hello
 - I had previous experience with Access and MySQL.

 -Situation

 - I am trying to create the equivalent of the following, which is a mysql
 command.

 - Question
 - But I cannot figure out how to do this in postgresql

 "mysql -u root -p mydb  mydb.dump"


I think:
psql -u somebody -d template1 < yourdb.dump
would work.


 - I was trying to create a test database using the following commands, 
 using a very cliche example. This command works on MySQL and should be 
 part of the ANSI SQL standard

 mydb=# INSERT INTO Customer 
(Customer_ID,Customer_Name,Customer_Address,Customer_Email)
 mydb-# VALUES ('1','Danny Ho','99 Second Ave, Kingswood','[EMAIL PROTECTED]'),
 mydb-# ('2','Randal Handel','54 Oxford Road, Cambridge','[EMAIL PROTECTED]')
 mydb-# ;


You can't insert two value lists in a single INSERT here; you would have 
to use two INSERTs.
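For example, a sketch of the two-statement workaround with the same data:

INSERT INTO Customer (Customer_ID, Customer_Name, Customer_Address, Customer_Email)
VALUES ('1', 'Danny Ho', '99 Second Ave, Kingswood', '[EMAIL PROTECTED]');
INSERT INTO Customer (Customer_ID, Customer_Name, Customer_Address, Customer_Email)
VALUES ('2', 'Randal Handel', '54 Oxford Road, Cambridge', '[EMAIL PROTECTED]');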


 -and I get the following errors :

 ERROR:  parser: parse error at or near ","

 Looking forward to your feedback,

 dannyh

 [EMAIL PROTECTED]