[email protected] (Paul Gilmartin) writes:
> Is there a shell interface to flock()?
>
> Is the lock automatically freed when the requesting process terminates,
> for whatever cause?

from flock man page:

NAME
       flock - Manage locks from shell scripts

SYNOPSIS
       flock [-sxon] [-w timeout] lockfile [-c] command...

       flock [-sxon] [-w timeout] lockdir [-c] command...

       flock [-sxun] [-w timeout] fd

DESCRIPTION
       This utility manages flock(2) locks from within shell scripts or the
       command line.

       The first and second forms wrap the lock around the execution of a
       command, in a manner similar to su(1) or newgrp(1).  They lock a
       specified file or directory, which is created (assuming appropriate
       permissions) if it does not already exist.

       The third form is convenient inside shell scripts, and is usually
       used in the following manner:

       (
         flock -s 200
         # ... commands executed under lock ...
       ) 200>/var/lock/mylockfile

       The mode used to open the file doesn't matter to flock; using > or
       >> allows the lockfile to be created if it does not already exist,
       however, write permission is required; using < requires that the
       file already exists but only read permission is required.

       By default, if the lock cannot be immediately acquired, flock waits
       until the lock is available.

... snip ...

lock is freed when the command (or subshell) completes ... the kernel
releases a flock(2) lock when the last file descriptor referencing it is
closed, so it is also released if the process terminates for any cause.
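a small sketch of that release behavior (the name "demo.lock" is just
for illustration): a second flock with -n fails while the lock is held,
and succeeds once the holder exits:

```shell
#!/bin/sh
# sketch: the kernel drops the flock(2) lock when the holding
# process exits ("demo.lock" is just an illustrative name)
lock=./demo.lock

flock -x "$lock" sleep 2 &    # hold an exclusive lock for ~2 seconds
sleep 1                       # give the holder time to acquire it

# -n: fail immediately instead of waiting -- the lock is still held
if flock -n "$lock" true; then
    echo "unexpected: lock was free"
else
    echo "lock busy, as expected"
fi

wait                          # holder exits; kernel releases the lock
flock -n "$lock" echo "lock acquired after holder exited"
```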

possibly more than you ever wanted to know ... I have a shell script
for morning news reading:

function sqlhq {

# block everybody else until file of previously seen URLs is created
(
flock -x 200
.... sqlite extraction from browser history file and url reformatting
     uses exclusive lock
) 200>sqlhq.lock

}
# end extracting previous URLs from browser sqlite history file
#-------------------------------------------------------------

# wget retrieve and convert news page URL
function wgethq {

... calls wget to fetch news url web page, then extracts & reformats
    news item urls in the web page ... news web page processing
    done asynchronously

(
flock -s 200
.... waits until sqlite browser history extraction has finished before
     eliminating news item URLs already seen, uses "shared" lock
     once exclusive lock has been released
) 200>sqlhq.lock


(
flock -x 300
... does some serialized global processing, one news URL page at a
    time; sends the active browser each unseen news item URL to be
    opened in separate background tabs (in aggregate can be several
    hundred tabs/urls). uses exclusive lock.
) 300>x1.lock

}
# end processing of each wget file
#---------------------------------------------------------
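the exclusive-then-shared handoff between sqlhq and wgethq can be
sketched with stand-in bodies (seen.txt, the sleep, and the grep are
placeholders for the real sqlite/wget logic, and bash is assumed since
file descriptor numbers above 9 need it):

```shell
#!/bin/bash
# hedged sketch of the writer/reader lock handoff; bodies are
# placeholders, not the original sqlite/wget code
lockfile=./sqlhq.lock

writer() {           # stand-in for sqlhq
    (
        flock -x 200                 # exclusive: block all readers
        sleep 2                      # stand-in for the sqlite extraction
        echo "http://example.com/old-item" > seen.txt
    ) 200>"$lockfile"
}

reader() {           # stand-in for the first part of wgethq
    (
        flock -s 200                 # shared: waits for the writer to finish
        grep -c . seen.txt           # stand-in for filtering seen URLs
    ) 200>"$lockfile"
}

writer &
sleep 1     # sidestep the startup race, just for this demo
reader      # blocks until the writer releases the lock
wait
```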

... start main shell ... asynchronously start browser history URL extraction
sqlhq &

# asynchronously fetch/process each news URL page
fn=0
while read url ; do
    wgethq f$fn $url  &
    let fn=$fn+1
done<hq.list

# wait until everything has completed
wait

... snip ...

"hq.list" contains a list of 70 or so news webpage URLs from around the
internet. 
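the fan-out/wait pattern in the main shell looks roughly like this
(urls.list and fetch are placeholders standing in for hq.list and
wgethq):

```shell
#!/bin/sh
# sketch of the asynchronous fan-out plus wait; fetch is a
# placeholder for wgethq, urls.list for hq.list
cat > urls.list <<'EOF'
http://example.com/a
http://example.com/b
http://example.com/c
EOF

fetch() {                     # $1 = per-job file prefix, $2 = url
    echo "$2" > "$1.out"      # stand-in for the wget work
}

fn=0
while read -r url ; do
    fetch "f$fn" "$url" &     # one background job per URL
    fn=$((fn + 1))
done < urls.list

wait                          # block until every background job finishes
```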

there is a very tiny possibility of a race condition ... that the
"sqlhq" asynchronous function doesn't set the exclusive lock until
after the first wgethq asynchronous function obtains the shared lock
(however, realistically, even if the asynchronous sqlhq function were
stalled for unknown reasons, the asynchronous wgethq functions do quite
a bit of web operation before attempting to obtain the shared lock).

The wgethq function eliminates previously seen news item URLs (already
in the browser history file) from being sent to the browser for
loading.  The global serialization (in wgethq) is for the case where
multiple different news sites might refer to the same news item URL
... so that any given URL is only sent to the browser for loading once.
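that serialized only-send-once step can be sketched like this
(seen.list and the "open" echo are illustrative placeholders for the
author's actual state file and browser-send logic; bash is assumed for
the three-digit fd):

```shell
#!/bin/bash
# hedged sketch of the serialized de-duplication under the x1.lock
# exclusive lock; seen.list and the echo are placeholders
send_once() {
    (
        flock -x 300                       # one job in this section at a time
        if ! grep -qxF "$1" seen.list 2>/dev/null; then
            echo "$1" >> seen.list         # remember the URL ...
            echo "open $1"                 # ... stand-in for sending it to the browser
        fi
    ) 300>x1.lock
}

send_once http://example.com/story1
send_once http://example.com/story1    # duplicate: silently skipped
send_once http://example.com/story2
```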

-- 
42yrs virtualization experience (since Jan68), online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
