The problem with shared locks on a distributed filesystem is that, as you add 
servers contending for the same lock, performance can actually get worse: you 
end up with even higher contention over a single resource while also dealing 
with potentially inconsistent latency across the network.

If the desire to support multiple web servers is motivated by a need to scale 
out the infrastructure, the best way to do that would be to avoid locks 
altogether.

Also, my concern with the JIRA issue about using the XML-RPC interface is that 
it introduces a single point of failure for the web servers. That is, if you 
have ten web servers running and all of them direct their XML-RPC calls to 
node 1, what happens when node 1 goes down?

In terms of what I had suggested re: NoSQL, this can be emulated in MySQL in a 
fairly straightforward manner. I still really like the async messaging 
approach because it would reduce the need to continuously poll the MySQL db, 
and it could be used in a lot of other ways too, but here is one way to do 
this without adding new dependencies to the current infrastructure:

First, you would need something like a 'pending_actions' table, structured like 
this:

CREATE TABLE `pending_actions` (
  `computer_id` INT NOT NULL,
  `timestamp` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
      ON UPDATE CURRENT_TIMESTAMP,
  `revision` VARCHAR(255) NOT NULL,
  `data` TEXT,
  PRIMARY KEY (`computer_id`)
);

Then the action of "acquiring a semaphore" would happen for each computer 
(assume a lock is automatically released after 100 seconds -- that value is 
arbitrary):

SELECT computer_id, revision
FROM pending_actions
WHERE computer_id IN (X, Y, Z)
  AND timestamp < NOW() - INTERVAL {100} SECOND;

In the PHP code, I would store a mapping from computer_id to 'revision' value, 
while also generating a new, random value for each revision -- writing a new 
revision value will cause any other concurrent attempt to grab the same 
resource to fail.

... Loop over results ...

* generate new random revision value

UPDATE pending_actions
SET revision = {random, unique value}
WHERE computer_id = {X} AND revision = {Y};

* check if the update succeeded via `mysql_affected_rows()`

* if so: put the new revision value in the computer_id -> revision mapping
* otherwise: exclude the computer from the set of "locked" resources

... End of loop ...

In the PHP code, I would know that the update (i.e. "locking") operation 
succeeded on each resource if the `mysql_affected_rows()` function returns a 
positive integer. If it returns 0, then some other process had successfully 
"locked" a particular computer before I did. In that case, the code can either 
try to use another computer or tell the user to try again.
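
To make the loop above concrete, here is a rough PHP sketch of the whole 
acquire step. None of this is existing VCL code: the function and variable 
names are invented, error handling is omitted, and the old mysql_* API is used 
only because that is where mysql_affected_rows() lives.

<?php
// Sketch: try to "lock" each candidate computer via optimistic locking.
// Returns an array mapping computer_id => new revision for each computer
// that was successfully locked.
function acquireComputers(array $candidateIds, $timeoutSecs = 100) {
    $locked = array();
    if (empty($candidateIds)) {
        return $locked;
    }

    // Find candidates whose lock has expired (or was never taken).
    $ids = implode(',', array_map('intval', $candidateIds));
    $timeout = (int) $timeoutSecs;
    $result = mysql_query(
        "SELECT computer_id, revision FROM pending_actions " .
        "WHERE computer_id IN ($ids) " .
        "AND timestamp < NOW() - INTERVAL $timeout SECOND");

    while ($row = mysql_fetch_assoc($result)) {
        // Generate a new, random revision value.
        $newRev = uniqid('', true);

        // Atomic compare-and-swap: this only matches if no other process
        // changed the revision after our SELECT. The ON UPDATE
        // CURRENT_TIMESTAMP column restarts the expiry window for us.
        mysql_query(sprintf(
            "UPDATE pending_actions SET revision = '%s' " .
            "WHERE computer_id = %d AND revision = '%s'",
            mysql_real_escape_string($newRev),
            (int) $row['computer_id'],
            mysql_real_escape_string($row['revision'])));

        if (mysql_affected_rows() === 1) {
            // We won the race; remember the revision for the unlock step.
            $locked[$row['computer_id']] = $newRev;
        }
        // Otherwise another process locked this computer first; skip it.
    }
    return $locked;
}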

Now, for all the computers that were successfully "locked", the code has a 
window of time (i.e. the 100 seconds) to do what it typically does between 
'semLock()' and 'semUnlock()'.

Then, in place of semUnlock(), the code could simply set the timestamp to 0 or 
to some time in the distant past:

UPDATE pending_actions
SET timestamp = 0
WHERE computer_id = {X} AND revision = {Y};

The key point is just that any UPDATE statements must use both the computer_id 
AND revision values.
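
For completeness, the unlock side might look something like this -- again just 
a sketch with an invented helper name, not existing VCL code:

<?php
// Sketch: release a lock early. Uses both computer_id AND revision, per the
// point above. Returns false if the lock had already expired and been
// re-acquired by another process; in that case there is nothing to release.
function releaseComputer($computerId, $revision) {
    // Assigning the timestamp explicitly overrides the table's ON UPDATE
    // CURRENT_TIMESTAMP behavior, so the row lands in the distant past and
    // becomes immediately lockable again.
    mysql_query(sprintf(
        "UPDATE pending_actions SET timestamp = 0 " .
        "WHERE computer_id = %d AND revision = '%s'",
        (int) $computerId,
        mysql_real_escape_string($revision)));
    return mysql_affected_rows() === 1;
}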

If I am not mistaken, this type of code could be a simple drop-in replacement 
for semLock() / semUnlock(). There would clearly need to be a way to populate 
the pending_actions table in the first place, though INSERT IGNORE could be 
used for that.
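
For instance, assuming the machine ids live in a `computer` table with an `id` 
column (placeholder names -- substitute whatever the real schema uses), 
something like this would seed the table:

-- One row per machine; duplicate computer_id values hit the primary key
-- and are skipped by INSERT IGNORE, so re-running this is harmless.
-- timestamp = 0 marks every row as long expired, i.e. immediately lockable.
INSERT IGNORE INTO pending_actions (computer_id, timestamp, revision)
SELECT id, 0, 'initial'
FROM computer;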

-Aaron C


On Jul 23, 2013, at 9:26 AM, Dmitri Chebotarov <[email protected]> wrote:

> 
> Is it possible to use shared filesystem to store/manage semaphore locks?
> 
> On Jul 22, 2013, at 17:20, Josh Thompson <[email protected]> wrote:
> 
>> The problem with load balancing the front end is that there is a semaphore
>> lock (php's built in implementation) around the scheduling code.  Without
>> it, 2 users coming in at the same time can get assigned the same computer.
>> So, having multiple frontends would allow the same thing to happen.
>> 
>> I've considered just locking a table in the database as a semaphore.
>> However, in my testing of this, if the web server locks a table and then
>> goes offline before unlocking it, the table stays locked.  So, done this
>> way, if there were multiple frontends and one went offline while in the
>> middle of the lock, none of them would be able to create reservations.  I
>> don't remember what had to be done to release the lock.
>> 
>> The option I'd like to use that I've not gotten around to implementing (I
>> believe there is a really old JIRA issue for it) is to add an XMLRPC API
>> function for calling the scheduling code.  Then, have one frontend assigned
>> as the one that would handle all of the scheduling.  The others would make
>> calls to that one for just the scheduling part, but would do everything
>> else normally.  Optionally, there could be an election process so that a
>> new frontend could be selected to do the scheduling if the one that had
>> been doing it went down.
>> 
>> Aaron C. listed some good ideas, but I think the above would be more 
>> straightforward since it would not involve bringing in other technologies.
>> 
>> Josh
>> 
>> On Thursday, July 18, 2013 9:49:39 AM Aaron Peeler wrote:
>>> I like both of these approaches, especially 1, but I'm not sure of the
>>> effort it would take to convert.
>>> -AP
>>> 
>>> On Wed, Jul 17, 2013 at 2:59 PM, Aaron Coburn <[email protected]> wrote:
>>>> In my opinion, this issue has largely been solved by the NoSQL and
>>>> application messaging communities. But as long as the VCL web server(s)
>>>> use local, file-based semaphores to control resource allocation, this
>>>> will be a hard nut to crack.
>>>> 
>>>> If I were going to tackle this problem, I would take one of two general
>>>> approaches:
>>>> 
>>>> 1) Use an asynchronous messaging queue (e.g. Apache ActiveMQ or something
>>>> like Redis)
>>>> 
>>>> In this way, when a reservation is made, the request is pushed onto a
>>>> centralized queue, and some intermediate (i.e. vcld) process will shift
>>>> messages off the queue and assign the reservations. If we used ActiveMQ,
>>>> the php and perl code would communicate with the message broker over the
>>>> STOMP protocol [1]. Redis would also work because it runs in a single
>>>> thread and all operations are atomic -- requests can simply be pushed
>>>> onto a FIFO-type list structure [2].
>>>> 
>>>> 2) Use a NoSQL database such as CouchDB or Riak that uses a type of
>>>> optimistic locking model for all writes.
>>>> 
>>>> That is, if reservations are stored in such a way that the resource id
>>>> (i.e. computer id) forms the database key, the assignment of a user to a
>>>> particular compute resource requires sending the correct revision_id of
>>>> that resource in order for the write to be successful. If successful, it
>>>> returns an HTTP 200 status code and the client proceeds normally;
>>>> otherwise, it sends a 409 header and it is up to the client to try again
>>>> [3]. It would also be possible to use the existing MySQL database to
>>>> emulate something like #2, but under a heavy load, I imagine that the row
>>>> locking could really slow things down. Using Couch would really be much
>>>> more scalable than trying to do everything in MySQL.
>>>> 
>>>> Either of these approaches would then allow you to distribute the web
>>>> front end across multiple machines.
>>>> 
>>>> Cheers,
>>>> Aaron Coburn
>>>> 
>>>> [1] http://activemq.apache.org/stomp.html
>>>> [2] http://redis.io/commands/lpush
>>>> [3] http://wiki.apache.org/couchdb/HTTP_Document_API#PUT
>>>> 
>>>> On Jul 17, 2013, at 1:06 PM, Aaron Peeler <[email protected]> wrote:
>>>>> This is definitely desired.
>>>>> 
>>>>> An additional challenge with multiple web servers is to figure how to
>>>>> sanely lock/unlock the resource to prevent it from getting assigned
>>>>> two separate users that are making requests at the same instant.
>>>>> 
>>>>> AP
>>>>> 
>>>>> On Wed, Jul 17, 2013 at 12:57 PM, James O'Dell <[email protected]> wrote:
>>>>>> I also tried load balancing the web. I didn't have any success.
>>>>>> 
>>>>>> I also tried an SSL offload using the F5.
>>>>>> 
>>>>>> The SSL offload didn't work because an internal VCL check noticed the
>>>>>> header wasn't https and redirected to https. Basically, it kept looping.
>>>>>> 
>>>>>> On 7/17/2013 9:36 AM, Dmitri Chebotarov wrote:
>>>>>> 
>>>>>> Hi
>>>>>> 
>>>>>> I would like to load balance multiple front-end VCL servers.
>>>>>> It is an F5 load balancer. The LB configuration allows enabling session
>>>>>> persistence.
>>>>>> Will this be OK with VCL's front end? I remember someone mentioned some
>>>>>> issues with having multiple front-end servers, but I don't remember the
>>>>>> details.
>>>>>> 
>>>>>> Thanks.
>>>>> 
>>>>> --
>>>>> Aaron Peeler
>>>>> Program Manager
>>>>> Virtual Computing Lab
>>>>> NC State University
>>>>> 
>>>>> All electronic mail messages in connection with State business which
>>>>> are sent to or received by this account are subject to the NC Public
>>>>> Records Law and may be disclosed to third parties.
>> -- 
>> -------------------------------
>> Josh Thompson
>> VCL Developer
>> North Carolina State University
>> 
>> my GPG/PGP key can be found at pgp.mit.edu
>> 
>> All electronic mail messages in connection with State business which
>> are sent to or received by this account are subject to the NC Public
>> Records Law and may be disclosed to third parties.
>> 
>> 
> 
> 
> 
> --
> Thank you,
> 
> Dmitri Chebotarov
> VCL Sys Eng, Engineering & Architectural Support, TSD - Ent Servers & 
> Messaging
> 223 Aquia Building, Ffx, MSN: 1B5
> Phone: (703) 993-6175 | Fax: (703) 993-3404
> 
> 
> 
