Hello,

I will have a web application with PostgreSQL 8.4+ as the backend. At any
given time there will be at most 1000 parallel web users interacting with the
database (read/write). I wish to performance-test 1000 simultaneous
reads/writes to the database.
I can run a simple Unix script on the Postgres server and fire the updates in
parallel, for example with an ampersand at the end of each command:

echo '\timing \\update "DAPP".emp_data set f1 = 123 where emp_id = 0;' | psql test1 postgres | grep "Time:" | cut -d' ' -f2- >> "/home/user/Documents/temp/logs/$NUM.txt" &
pid1=$!
echo '\timing \\update "DAPP".emp_data set f1 = 123 where emp_id = 2;' | psql test1 postgres | grep "Time:" | cut -d' ' -f2- >> "/home/user/Documents/temp/logs/$NUM.txt" &
pid2=$!
echo '\timing \\update "DAPP".emp_data set f1 = 123 where emp_id = 4;' | psql test1 postgres | grep "Time:" | cut -d' ' -f2- >> "/home/user/Documents/temp/logs/$NUM.txt" &
pid3=$!
.............
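For reference, the hand-written commands above could be generalized into a
loop -- a minimal sketch, assuming POSIX sh; the client count N is an
illustrative value, and the database, table, and log path are taken from the
examples above:

```shell
#!/bin/sh
# Sketch: fire N parallel UPDATEs via psql and log each "Time:" line.
# N is an assumption; test1, "DAPP".emp_data, and LOGDIR follow the examples.
N=1000
LOGDIR="/home/user/Documents/temp/logs"
mkdir -p "$LOGDIR"

i=0
while [ "$i" -lt "$N" ]; do
  # printf is used instead of echo so the backslashes reach psql unmangled
  printf '%s\n' "\\timing \\\\update \"DAPP\".emp_data set f1 = 123 where emp_id = $i;" \
    | psql test1 postgres | grep "Time:" | cut -d' ' -f2- \
    >> "$LOGDIR/$i.txt" &
  i=$((i + 1))
done
wait  # block until every background psql has finished
```

Note that 1000 concurrent psql processes also means 1000 server connections,
so max_connections (and the shell's process limits) would have to be raised
accordingly.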

My question is: am I losing anything by firing these queries directly on the
server itself, or should I look at firing them from different IP addresses
(as would happen with a real web application)? Would the way Postgres opens
sockets, allocates buffers, etc. differ between the two approaches, so that
a Unix script run on the server gives me unrealistic results?

It would be a very tedious exercise to set up 1000 different machines (IP
addresses), each firing a query at the same time. But I want to be absolutely
sure my test would give the same results as production (the latency
requirements for reads/writes are very strict). I am not interested in the
network time, just the database read/write time.
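One alternative I am also looking at is pgbench, which ships with PostgreSQL
and drives many concurrent clients from a custom script file. A rough sketch
(the file name update.sql and the -c/-T numbers are illustrative; \setrandom
is the pre-9.6 syntax, newer versions use \set id random(0, 999)):

```shell
# Sketch: drive concurrent UPDATE clients against test1 with pgbench.
# update.sql, client count (-c) and duration (-T) are illustrative values.
cat > update.sql <<'EOF'
\setrandom id 0 999
update "DAPP".emp_data set f1 = 123 where emp_id = :id;
EOF

# -n: skip vacuuming pgbench's own tables; -c: clients; -T: seconds to run
command -v pgbench >/dev/null 2>&1 && pgbench -n -f update.sql -c 100 -T 60 test1 || true
```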

Thanks for any tips!

-Bala
                                          