This is probably a really naive and stupid question:

What's the cost of IT for, say, a six-room GP practice, and would this kind of server be a reasonable cost-benefit:

http://www.scorptec.com.au/index.php?prdid=17283

What other additional costs would need to be considered to set up the
hardware to take advantage of this, if it were useful? Would md2 run any
better on it?
I also saw an 8-port home-network gigabit switch from Netgear in a quick
Google search. What kind of gigabit switch is needed in an average
multi-doctor practice, and how much would it cost? Is it possible to
upgrade the network to gigabit using the existing cabling, and how much
would it cost to upgrade to gigabit nowadays? And what's the chance of
finding things don't work any faster after such a network upgrade, e.g.
because the real bottleneck is the server, the hard drive or the memory?
I am quite surprised that md2 works as well as it does, being an xbase-type
database, which is basically one file per table and one file per table
index. The main things that keep it fast, I'm guessing, are the fairly
small number of tables, the use of up-to-date indexes to look up tables,
and scalable index algorithms such as external hash tables and external
B-trees.
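As a rough illustration of why an up-to-date index keeps a one-file-per-table design fast: a sorted index turns a linear scan of the table into a binary search. In the sketch below, Python's `bisect` stands in for the on-disk B-tree an xbase engine would maintain, and the patient IDs are made-up data, not anything from md2 itself.

```python
import bisect

# Toy table: one dict per row, keyed by a made-up patient id.
table = [{"id": i, "name": f"patient-{i}"} for i in range(0, 100_000, 3)]

# The "index": sorted keys, where position in the index maps to
# position in the table (as a table-index file would on disk).
index = [row["id"] for row in table]

def lookup(patient_id):
    """O(log n) probe via the index, instead of an O(n) table scan."""
    pos = bisect.bisect_left(index, patient_id)
    if pos < len(index) and index[pos] == patient_id:
        return table[pos]
    return None

print(lookup(99))   # found in ~15 comparisons rather than tens of thousands
```

With roughly 33,000 rows, the indexed lookup costs about log2(33,000) ≈ 15 comparisons; a stale or missing index forces a scan of every row, which is the usual reason an xbase database suddenly feels slow.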

Suppose there were an EMR that stored each record as one big block of
data. Would that reduce the number of disk accesses when retrieving an EMR
record, compared to a relational / xbase row-per-item system (given n items
of info in an EMR record, it would have to look up an index n times, and do
n separate disk accesses to retrieve the n items that compose the record)?
Is that how multi-dimensional databases work, or something like Caché /
MUMPS / VistA? Would such a monolithic, blob-based EMR system be any
faster? And what if the EMR system had a read cache for EMR records and
write-through caching, and one had 6 GB of volatile memory (perhaps enough
to hold the entire database in memory, write-through persisting changes to
disk, but otherwise serving reads with no disk accesses at all)? Then if
the system were still slow, you couldn't blame the hard drive or the drive
interface, but could still blame CPU throughput or the network.
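The n-lookups-versus-one-blob question above can be put into a back-of-envelope model. The seek and read costs below are assumed round numbers for a commodity 7200 rpm drive, not measurements, and the item counts are made up:

```python
# Toy cost model: composing an EMR record from n scattered row reads
# (each an index probe plus a data seek) versus one contiguous blob read.

SEEK_MS = 8.0           # assumed average seek + rotational latency
READ_MS_PER_KB = 0.02   # assumed sequential read cost per KB

def row_per_item_ms(n_items, item_kb, index_probes_per_item=1):
    """n index seeks plus n data seeks: cost is dominated by seeking."""
    seeks = n_items * (1 + index_probes_per_item)
    return seeks * SEEK_MS + n_items * item_kb * READ_MS_PER_KB

def blob_ms(n_items, item_kb):
    """One seek, then a sequential read of the whole record."""
    return SEEK_MS + n_items * item_kb * READ_MS_PER_KB

# e.g. an EMR record of 40 items of ~0.5 KB each
print(row_per_item_ms(40, 0.5))   # ~640 ms, almost all of it seeking
print(blob_ms(40, 0.5))           # ~8 ms
```

Under these assumptions the blob wins by nearly two orders of magnitude on a cold read. But it also shows the other half of the question: once the working set fits in a read cache, both designs serve reads from memory and the difference largely disappears, which is exactly why a slow cached system points at the CPU or the network instead.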

The original question is: can you be guaranteed a performance improvement
by upgrading a network from 100 megabit to gigabit, how much would it cost,
can you use the existing cabling, and can you do it just by replacing the
switch and the network card in each computer?
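One way to bound the best-case gain is to work out the wire time alone. The sketch below uses nominal link speeds and an assumed 70% usable-bandwidth figure (TCP and Ethernet overhead eat the rest); the 50 KB record size is hypothetical:

```python
# Best-case wire-time comparison: 100 Mbit/s vs 1 Gbit/s Ethernet.
# Link speeds are nominal; 'efficiency' is an assumed fraction of the
# nominal bandwidth actually usable. Server and disk latency are ignored.

def transfer_seconds(payload_bytes, link_bits_per_sec, efficiency=0.7):
    """Seconds on the wire for a payload at a given nominal link speed."""
    return payload_bytes * 8 / (link_bits_per_sec * efficiency)

record = 50 * 1024   # hypothetical 50 KB record payload

fast_e = transfer_seconds(record, 100_000_000)     # Fast Ethernet
gig_e  = transfer_seconds(record, 1_000_000_000)   # Gigabit Ethernet

print(f"100 Mbit: {fast_e * 1000:.2f} ms, 1 Gbit: {gig_e * 1000:.2f} ms")
```

The wire time scales down by exactly 10x, but for a 50 KB payload it is only a few milliseconds to begin with, so if the application spends hundreds of milliseconds per operation, the upgrade cannot be guaranteed to make it feel faster. That is the "chance the bottleneck is elsewhere" case asked about above.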
 


On Thursday 30 March 2006 11:18, Greg Twyford wrote:
> Ian Cheong wrote:
> > Please explain in what circumstances one would need to rebuild a server
> > same day if running mirrored drives.
> >
> > I presume any failed part (power supply, motherboard, etc)  would be
> > replaceable. Only if both drives crashed simultaneously would one need
> > to rebuild. (??)
>
> There are good arguments for having a spare HDD and a spare mainboard,
> identical to your server's on hand. With RAID1 you should match your
> drives, and getting an identical server mainboard in a hurry may be
> difficult or impossible after a year or so, due to model obsolescence and
> brand name support infrastructure. The rest of the bits shouldn't be too
> hard.
>
> Another approach is an emergency server, a good workstation that gets a
> copy of the data after the tape or DVD backup each night. Handy too when
> re-building a RAID array in business hours.
>
> The joy of RAID 1 is that a HDD failure that would otherwise stop a
> medical centre functioning, becomes a warning message on the server's
> screen.
>
> One of my inhibitions about terminal services is that you become
> absolutely dependent on the terminal server. The major choices become a
> high-end 'real' server with redundant everything, like a fully kitted HP
> ML350-G4, or a second terminal server online with double the licensing,
> hardware and other costs, or relying on a system that will go down
> big-time if the server fails with no prospect of an early resumption of
> business, as all the clients will be terminals, won't be up to it, and/or will
> need major reconfiguration of the whole system. The 'spare bits'
> strategy may save the day in the last scenario.
>
> We need to remember that TS was thought up for big corporate
> environments with redundant domain controllers and all that other stuff.
>
> Greg
_______________________________________________
Gpcg_talk mailing list
[email protected]
http://ozdocit.org/cgi-bin/mailman/listinfo/gpcg_talk
