Tracy,
Our three SANs are all HP and, to be honest, we haven't needed to mess around
with the SAN configuration at all. Having said that, the HP "SAN device
software" is all Linux-based, running below Windows 2012, and comes from a
company called LeftHand, which HP bought out to acquire the technology;
support is all direct from HP. There are literally hundreds of configuration
objects you can change but, as I say, nothing has been changed so far apart
from the size of the cache: our SQL Server apps, especially backups, started
to take 4-5 times longer than on the old "standard" servers, but HP quickly
identified the problem and the fix was to increase the cache size.

If you think about it, we have three clusters running, and the data is
obviously replicated across all three of them, with the SQL 2012 databases
also being mirrored on one cluster. So we have a situation where all data
exists in quadruplicate, which makes the data volumes rather large!

In addition to SQL Server 2012, to which I am slowly migrating all the VFP
apps, we have a 1.5 GB native VFP database which is gradually shrinking as I
move the tables over to SQL, so we are running a bit of a hybrid VFP/SQL
solution, but it works well and is mega fast.
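
In case it's useful, here's roughly what the hybrid access boils down to in
VFP - just a sketch, and the driver, server, database, table and path names
below are made-up examples rather than our actual setup:

* Rough hybrid VFP/SQL sketch - all names here are illustrative only.
lnHandle = SQLSTRINGCONNECT("Driver={SQL Server Native Client 11.0};" + ;
    "Server=MYSQLBOX;Database=AppData;Trusted_Connection=Yes;")
IF lnHandle > 0
    * Tables already migrated live in SQL Server and come back as cursors...
    SQLEXEC(lnHandle, "SELECT * FROM Customers", "csrCustomers")

    * ...while the not-yet-migrated tables are still native VFP on the share.
    USE ("\\server\appdata\orders.dbf") ALIAS Orders SHARED

    * App code then works against csrCustomers and Orders as normal.
    SQLDISCONNECT(lnHandle)
ENDIF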

So, to answer your question, the cache in our case has actually been increased
... but it is the cache on the native hardware, not the O/S, where once again
I have not touched the Microsoft default of Write Cache enabled.

So far, in 4 months of live running, we have had no corruption of the VFP or
SQL data, even when we had a total power crash 2 weeks ago: the clusters
stayed up on UPS power for 40 minutes whilst the power went on and off,
swapping the prime node from one cluster to another. So I can confirm that
clustering DOES work, despite my initial scepticism. We have decided NOT to
cluster the SQL box and have left it mirroring with automatic failover, which
is stable, reliable and does work. Clustering SQL on a dedicated SQL cluster
really doesn't give us any advantages over mirroring; it is really for mega
organisations where data integrity between data segments on the shared nodes
matters more than uptime or high availability of the whole database, which is
our main priority.
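
Incidentally, if you ever want to confirm the mirror is healthy, you can query
the mirroring catalog view straight from VFP. A rough sketch, assuming
lnHandle is already an open connection to the principal server (the handle and
cursor names are just examples):

* Check mirroring state from VFP (assumes lnHandle is an open connection).
SQLEXEC(lnHandle, ;
    "SELECT DB_NAME(database_id) AS dbname, mirroring_state_desc, " + ;
    "       mirroring_role_desc, mirroring_safety_level_desc " + ;
    "FROM sys.database_mirroring WHERE mirroring_guid IS NOT NULL", ;
    "csrMirror")
SELECT csrMirror
BROWSE  && a healthy mirror shows mirroring_state_desc = 'SYNCHRONIZED'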

Dave

-----Original Message-----
From: ProFox [mailto:[email protected]] On Behalf Of Tracy Pearson
Sent: 15 August 2013 15:40
To: [email protected]
Subject: RE: VFP9 & Windows Server 2012

Dave Crozier wrote on 2013-08-15: 
>  We have VFP apps and services running on 2012 on a quad-clustered
> SAN and have done for about 7 months now with no problems. Make sure
> that they are up to date patch-wise.
>  
>  As per Windows 7, the O/S is picky about where you run your apps from
> permission-wise. Best to put both data and VFP into a separate folder
> off the root and then share it out accordingly.
>  
>  Dave
> 

Dave,

We also have VFP running on 2012 servers. Our clients connect using a client
that gives them a T/S-type connection, IDS-ACS, which is a Go-Global partner.
Clients on iOS and Linux can use the Go-Global app, found in the respective
stores, to connect to our system.

The data is hosted on an EqualLogic SAN. We occasionally have clients with
corrupt indexes. When we've attempted to turn off the write cache, the OS
tells us it cannot do this.

Do you have any suggestions to ensure the write-cache is actually off on the 
SAN?

Tracy Pearson
PowerChurch Software


[excessive quoting removed by server]
