Thank you. I was trying to understand how the resource locking works. So far 
this is what I've got: 
- With one web server, PHP maintains resource locks in memory (semaphores), 
which essentially prevents multiple users from being assigned the same computer. 
- With more than one web server, each one has its own non-shared locks, which 
is what causes the problem. 

Could memcached help share the semaphore locks between the web servers?  From 
Wikipedia: "...Memcached's APIs provide a giant hash table distributed across 
multiple machines...".
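
If it could, I imagine the lock would look roughly like this (just a sketch, 
not tested; the server address, key name and timeout are made up):

    $mc = new Memcached();
    $mc->addServer('memcached.example.edu', 11211);

    $lockkey = 'vcl_scheduling_lock';  // hypothetical key name
    // add() is atomic and only succeeds if the key does not exist yet, so it
    // can act as a shared lock; the 30s TTL frees it if a frontend dies
    if ($mc->add($lockkey, gethostname(), 30)) {
        // ... run the scheduling code ...
        $mc->delete($lockkey);         // release the lock when done
    } else {
        // another web server holds the lock; wait and retry
    }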

I'm not sure how much work it would be to implement either solution - the 
fewer changes for the same result, the better...

Thanks.

On Jul 23, 2013, at 16:14, Aaron Coburn <[email protected]> wrote:

> I agree with Mark.
> 
> Stored procedures seem nice, but they are incredibly difficult to document, 
> and if you have to make changes later, they can be a nightmare to debug.
> 
> Aaron
> 
> On Jul 23, 2013, at 3:58 PM, Mark Gardner <[email protected]> wrote:
> 
>> I would vote against stored procedures. Keep the DB simple.
>> 
>> Mark
>> 
>> On Tue, Jul 23, 2013 at 9:01 AM, Josh Thompson <[email protected]> 
>> wrote:
>>> 
>>> Hi Jim,
>>> 
>>> What are you saying would not be a problem?  I know having 2 systems trying 
>>> to
>>> write at the same time would be okay, but the scheduling code has to read a
>>> bunch of data from the database, analyze it to select a computer, and then
>>> write information to the database.  The semaphore is needed because the 
>>> state
>>> of the system needs to remain the same between the reading of the data and 
>>> the
>>> writing of it.  Otherwise, computers that were not assigned during the read
>>> may get assigned before the write happens, which can result in multiple 
>>> people
>>> getting the same computer.  We actually had this happen back in 2004 when we
>>> were having our initial pilot of VCL, which is what prompted adding the
>>> semaphore.
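>>>
>>> For reference, the lock is roughly PHP's built-in SysV semaphore functions
>>> wrapped around that read/select/write sequence, something like this
>>> (simplified sketch; the actual key and code differ):
>>>
>>>     $sem = sem_get(ftok(__FILE__, 'v'), 1);  // one holder at a time
>>>     sem_acquire($sem);                       // block until we own the lock
>>>     // 1. read reservation/computer data from the database
>>>     // 2. analyze it and select an available computer
>>>     // 3. write the new assignment back to the database
>>>     sem_release($sem);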
>>> 
>>> Another option is to actually move the scheduling code to a stored procedure
>>> in the database.  However, that would require someone with much better 
>>> stored
>>> procedure skills than mine.
>>> 
>>> Josh
>>> 
>>> On Monday, July 22, 2013 2:36:28 PM James O'Dell wrote:
>>>> Hey Josh,
>>>> 
>>>> I've been F5 load balancing and SSL offloading Moodle for the last
>>>> couple of years without a problem.
>>>> Moodle just insists that the database engine be ACID compliant.
>>>> 
>>>> If the database engine is transaction safe (e.g. InnoDB), you shouldn't
>>>> have a problem. It's really just a matter of handling the results
>>>> (i.e. what do you do if someone else grabs the reservation first).
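>>>>
>>>> Something along these lines, for example (just a sketch; the table and
>>>> column names are made up):
>>>>
>>>>     $pdo = new PDO('mysql:host=localhost;dbname=vcl', 'vcluser', 'secret');
>>>>     $sth = $pdo->prepare('UPDATE computer SET assigned_userid = :uid
>>>>                           WHERE id = :cid AND assigned_userid IS NULL');
>>>>     $sth->execute(array(':uid' => $userid, ':cid' => $computerid));
>>>>     if ($sth->rowCount() == 0) {
>>>>         // someone else grabbed this computer first; pick another and retry
>>>>     }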
>>>> 
>>>> Imho relying on the database seems like the simplest/cleanest approach
>>>> 
>>>> __Jim
>>>> 
>>>> On 7/22/2013 2:20 PM, Josh Thompson wrote:
>>>>> 
>>>>> The problem with load balancing the front end is that there is a semaphore
>>>>> lock (PHP's built-in implementation) around the scheduling code.  Without
>>>>> it, 2 users coming in at the same time can get assigned the same
>>>>> computer.  So, having multiple frontends would allow the same thing to
>>>>> happen.
>>>>> 
>>>>> I've considered just locking a table in the database as a semaphore.
>>>>> However, in my testing of this, if the web server locks a table and then
>>>>> goes offline before unlocking it, the table stays locked.  So, done this
>>>>> way, if there were multiple frontends and one went offline while in the
>>>>> middle of the lock, none of them would be able to create reservations.  I
>>>>> don't remember what had to be done to release the lock.
>>>>> 
>>>>> The option I'd like to use that I've not gotten around to implementing (I
>>>>> believe there is a really old JIRA issue for it) is to add an XMLRPC API
>>>>> function for calling the scheduling code.  Then, have one frontend
>>>>> assigned as the one that would handle all of the scheduling.  The others
>>>>> would make calls to that one for just the scheduling part, but would do
>>>>> everything else normally.  Optionally, there could be an election process
>>>>> so that a new frontend could be selected to do the scheduling if the one
>>>>> that had been doing it went down.
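>>>>>
>>>>> The call from the other frontends might look something like this (very
>>>>> rough sketch; the method name and URL are hypothetical, since the API
>>>>> function doesn't exist yet):
>>>>>
>>>>>     $req = xmlrpc_encode_request('XMLRPCrunScheduler', array($requestid));
>>>>>     $url = 'https://sched-frontend.example.edu/index.php?mode=xmlrpccall';
>>>>>     $ch = curl_init($url);
>>>>>     curl_setopt($ch, CURLOPT_POST, 1);
>>>>>     curl_setopt($ch, CURLOPT_POSTFIELDS, $req);
>>>>>     curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
>>>>>     curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: text/xml'));
>>>>>     $result = xmlrpc_decode(curl_exec($ch));
>>>>>     curl_close($ch);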
>>>>> 
>>>>> Aaron C. listed some good ideas, but I think the above would be more
>>>>> straightforward since it would not involve bringing in other technologies.
>>>>> 
>>>>> Josh
>>>>> 
>>>>> On Thursday, July 18, 2013 9:49:39 AM Aaron Peeler wrote:
>>>>>> I like both of these approaches, especially 1, but I'm not sure of the
>>>>>> effort it would take to convert.
>>>>>> -AP
>>>>>> 
>>>>>> On Wed, Jul 17, 2013 at 2:59 PM, Aaron Coburn <[email protected]>
>>> wrote:
>>>>>>> In my opinion, this issue has largely been solved by the NoSQL and
>>>>>>> application messaging communities. But as long as the VCL web server(s)
>>>>>>> use local, file-based semaphores to control resource allocation, this
>>>>>>> will be a hard nut to crack.
>>>>>>> 
>>>>>>> If I were going to tackle this problem, I would take one of two general
>>>>>>> approaches:
>>>>>>> 
>>>>>>> 1) Use an asynchronous messaging queue (e.g. Apache ActiveMQ or
>>>>>>> something like Redis)
>>>>>>> 
>>>>>>> In this way, when a reservation is made, the request is pushed onto a
>>>>>>> centralized queue, and some intermediate (e.g. vcld) process will shift
>>>>>>> messages off the queue and assign the reservations. If we used ActiveMQ,
>>>>>>> the PHP and Perl code would communicate with the message broker over the
>>>>>>> STOMP protocol [1]. Redis would also work because it runs in a single
>>>>>>> thread and all operations are atomic -- requests can simply be pushed
>>>>>>> onto a FIFO-type list structure [2].
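>>>>>>>
>>>>>>> A rough sketch of the Redis variant (assuming the phpredis extension;
>>>>>>> the queue name is made up, and the consumer side is shown in PHP purely
>>>>>>> for illustration):
>>>>>>>
>>>>>>>     $redis = new Redis();
>>>>>>>     $redis->connect('redis.example.edu', 6379);
>>>>>>>
>>>>>>>     // web frontend: push the reservation request onto the shared queue
>>>>>>>     $redis->lPush('vcl:requests', json_encode($requestData));
>>>>>>>
>>>>>>>     // scheduler process: block until a request arrives, then handle it
>>>>>>>     list($queue, $payload) = $redis->brPop(array('vcl:requests'), 0);
>>>>>>>     $request = json_decode($payload, true);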
>>>>>>> 
>>>>>>> 2) Use a NoSQL database such as CouchDB or Riak that uses a type of
>>>>>>> optimistic locking model for all writes.
>>>>>>> 
>>>>>>> That is, if reservations are stored in such a way that the resource id
>>>>>>> (i.e. computer id) forms the database key, the assignment of a user to a
>>>>>>> particular compute resource requires sending the correct revision_id of
>>>>>>> that resource in order for the write to be successful. If successful, it
>>>>>>> returns an HTTP 200 status code and the client proceeds normally;
>>>>>>> otherwise, it sends a 409 header and it is up to the client to try again
>>>>>>> [3]. It would also be possible to use the existing MySQL database to
>>>>>>> emulate something like #2, but under a heavy load, I imagine that the
>>>>>>> row
>>>>>>> locking could really slow things down. Using Couch would really be much
>>>>>>> more scalable than trying to do everything in MySQL.
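>>>>>>>
>>>>>>> As a rough sketch of what #2 could look like from PHP (the database and
>>>>>>> document names are made up; the fetched document carries the current
>>>>>>> _rev, which is what makes the PUT conditional):
>>>>>>>
>>>>>>>     $url = 'http://couch.example.edu:5984/vcl/computer-42';
>>>>>>>     $doc = json_decode(file_get_contents($url), true); // includes _rev
>>>>>>>     $doc['assigned_userid'] = $userid;
>>>>>>>
>>>>>>>     $ch = curl_init($url);
>>>>>>>     curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PUT');
>>>>>>>     curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($doc));
>>>>>>>     curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
>>>>>>>     curl_exec($ch);
>>>>>>>     if (curl_getinfo($ch, CURLINFO_HTTP_CODE) == 409) {
>>>>>>>         // conflict: someone else updated the doc first; re-read & retry
>>>>>>>     }
>>>>>>>     curl_close($ch);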
>>>>>>> 
>>>>>>> Either of these approaches would then allow you to distribute the web
>>>>>>> front end across multiple machines.
>>>>>>> 
>>>>>>> Cheers,
>>>>>>> Aaron Coburn
>>>>>>> 
>>>>>>> [1] http://activemq.apache.org/stomp.html
>>>>>>> [2] http://redis.io/commands/lpush
>>>>>>> [3] http://wiki.apache.org/couchdb/HTTP_Document_API#PUT
>>>>>>> 
>>>>>>> On Jul 17, 2013, at 1:06 PM, Aaron Peeler <[email protected]> wrote:
>>>>>>>> This is definitely desired.
>>>>>>>> 
>>>>>>>> An additional challenge with multiple web servers is figuring out how to
>>>>>>>> sanely lock/unlock a resource to prevent it from getting assigned to
>>>>>>>> two separate users who are making requests at the same instant.
>>>>>>>> 
>>>>>>>> AP
>>>>>>>> 
>>>>>>>> On Wed, Jul 17, 2013 at 12:57 PM, James O'Dell <[email protected]>
>>>>> 
>>>>> wrote:
>>>>>>>>> I also tried load balancing the web front end; I didn't have any success.
>>>>>>>>> 
>>>>>>>>> I also tried an SSL offload using the F5.
>>>>>>>>> 
>>>>>>>>> The SSL offload didn't work because an internal VCL check noticed the
>>>>>>>>> header wasn't https and redirected to https. Basically, it kept looping.
>>>>>>>>> 
>>>>>>>>> On 7/17/2013 9:36 AM, Dmitri Chebotarov wrote:
>>>>>>>>> 
>>>>>>>>> Hi
>>>>>>>>> 
>>>>>>>>> I would like to load balance multiple front-end VCL servers.
>>>>>>>>> It is an F5 load balancer. The LB configuration allows session
>>>>>>>>> persistence to be enabled.
>>>>>>>>> Will this be OK with VCL's front end? I remember someone mentioned
>>>>>>>>> some issues with having multiple front-end servers, but I don't
>>>>>>>>> remember the details.
>>>>>>>>> 
>>>>>>>>> Thanks.
>>>>>>>> 
>>>>>>>> --
>>>>>>>> Aaron Peeler
>>>>>>>> Program Manager
>>>>>>>> Virtual Computing Lab
>>>>>>>> NC State University
>>>>>>>> 
>>>>>>>> All electronic mail messages in connection with State business which
>>>>>>>> are sent to or received by this account are subject to the NC Public
>>>>>>>> Records Law and may be disclosed to third parties.
>>>>> 
>>>>> --
>>>>> -------------------------------
>>>>> Josh Thompson
>>>>> VCL Developer
>>>>> North Carolina State University
>>>>> 
>>>>> my GPG/PGP key can be found at pgp.mit.edu
>>>>> 
>>>>> All electronic mail messages in connection with State business which
>>>>> are sent to or received by this account are subject to the NC Public
>>>>> Records Law and may be disclosed to third parties.
>>> --
>>> -------------------------------
>>> Josh Thompson
>>> VCL Developer
>>> North Carolina State University
>>> 
>>> my GPG/PGP key can be found at pgp.mit.edu
>>> 
>>> All electronic mail messages in connection with State business which
>>> are sent to or received by this account are subject to the NC Public
>>> Records Law and may be disclosed to third parties.
>>> 
>> 
>> 
>> 
>> -- 
>> Mark Gardner
>> --
> 
> 



--
Thank you,

Dmitri Chebotarov
VCL Sys Eng, Engineering & Architectural Support, TSD - Ent Servers & Messaging
223 Aquia Building, Ffx, MSN: 1B5
Phone: (703) 993-6175 | Fax: (703) 993-3404


