Hi,

The page
http://www.postgresql.org/docs/7.2/static/datatype-datetime.html mentions
that the resolution of all time and timestamp data types is 1
microsecond.  I have an application that runs on both a Windows (XP with
SP2) machine and a Linux (SUSE 10.2) machine.  With Postgres
EnterpriseDB 8.3 installed on both these machines, I saw that the default
timestamp precision on the former is up to 1 millisecond, while on the
latter it is 1 microsecond.
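
To illustrate what I mean, a minimal query along these lines (just a
sketch; clock_timestamp() samples the server clock within a statement)
shows the difference on the two machines:

    -- Sample the server clock five times in a single statement.
    -- On the Windows machine the fractional seconds appear to stop
    -- at milliseconds; on the Linux machine they go to microseconds.
    SELECT clock_timestamp() FROM generate_series(1, 5);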

My curiosity is: is this a universal phenomenon, i.e. a basic issue with
Windows?  Or could there be some hardware or architectural difference, or
something else?
And my problem is: is there any way to enforce a higher precision on
Windows?  My application badly needs it.

Please help / guide.

Thanks a million,
Shruthi
