Hi Corinna,

On Mon, 24 Nov 2025 22:35:46 +0900
Takashi Yano wrote:
> Hi Corinna,
> 
> On Mon, 24 Nov 2025 14:09:03 +0100
> Corinna Vinschen wrote:
> > On Nov 24 13:05, Corinna Vinschen wrote:
> > > So I wonder...
> > > 
> > > Wouldn't this simple patch just moving the tmp_pathbuf up into
> > > fhandler_base::lock() fix the problem just as well, plus avoiding
> > > multiple w_get() calls?
> > 
> > Version 2 of the patch.  Rather than calling get_all_locks_list()
> > from lf_setlock() *and* lf_clearlock(), call it right from
> > fhandler_base::lock() to avoid calling the function twice.
> > Also, move comment.
> > 
> > diff --git a/winsup/cygwin/flock.cc b/winsup/cygwin/flock.cc
> > index e03caba27a8e..e486ad7f5ece 100644
> > --- a/winsup/cygwin/flock.cc
> > +++ b/winsup/cygwin/flock.cc
> > @@ -945,6 +945,7 @@ fhandler_base::lock (int a_op, struct flock *fl)
> >  {
> >    off_t start, end, oadd;
> >    int error = 0;
> > +  tmp_pathbuf tp;
> >  
> >    short a_flags = fl->l_type & (F_OFD | F_POSIX | F_FLOCK);
> >    short type = fl->l_type & (F_RDLCK | F_WRLCK | F_UNLCK);
> > @@ -1149,6 +1150,9 @@ restart:      /* Entry point after a restartable signal came in. */
> >        return -1;
> >      }
> >  
> > +  /* Create temporary space for the all locks list. */
> > +  node->i_all_lf = (lockf_t *) tp.w_get ();
> > +
> >    switch (a_op)
> >      {
> >      case F_SETLK:
> > @@ -1157,6 +1161,11 @@ restart:     /* Entry point after a restartable signal came in. */
> >        break;
> >  
> >      case F_UNLCK:
> > +      /* lf_clearlock() is called from here as well as from lf_setlock().
> > +    lf_setlock() already calls get_all_locks_list(), so we don't call it
> > +    from lf_clearlock() but rather here to avoid calling the (potentially
> > +    timeconsuming) function twice if called through lf_setlock(). */
> > +      node->get_all_locks_list ();
> >        error = lf_clearlock (lock, &clean, get_handle ());
> >        lock->lf_next = clean;
> >        clean = lock;
> > @@ -1218,7 +1227,6 @@ lf_setlock (lockf_t *lock, inode_t *node, lockf_t **clean, HANDLE fhdl)
> >    lockf_t **head = lock->lf_head;
> >    lockf_t **prev, *overlap;
> >    int ovcase, priority, old_prio, needtolink;
> > -  tmp_pathbuf tp;
> >  
> >    /*
> >     * Set the priority
> > @@ -1229,8 +1237,6 @@ lf_setlock (lockf_t *lock, inode_t *node, lockf_t **clean, HANDLE fhdl)
> >    /*
> >     * Scan lock list for this file looking for locks that would block us.
> >     */
> > -  /* Create temporary space for the all locks list. */
> > -  node->i_all_lf = (lockf_t *) (void *) tp.w_get ();
> >    while ((block = lf_getblock(lock, node)))
> >      {
> >        HANDLE obj = block->lf_obj;
> > @@ -1543,9 +1549,6 @@ lf_clearlock (lockf_t *unlock, lockf_t **clean, HANDLE fhdl)
> >      return 0;
> >  
> >    inode_t *node = lf->lf_inode;
> > -  tmp_pathbuf tp;
> > -  node->i_all_lf = (lockf_t *) tp.w_get ();
> > -  node->get_all_locks_list (); /* Update lock count */
> >    uint32_t lock_cnt = node->get_lock_count ();
> >    bool first_loop = true;
> >  
> > @@ -1631,10 +1634,7 @@ static int
> >  lf_getlock (lockf_t *lock, inode_t *node, struct flock *fl)
> >  {
> >    lockf_t *block;
> > -  tmp_pathbuf tp;
> >  
> > -  /* Create temporary space for the all locks list. */
> > -  node->i_all_lf = (lockf_t *) (void * ) tp.w_get ();
> >    if ((block = lf_getblock (lock, node)))
> >      {
> >        if (block->lf_obj)
> 
> Thanks. Your patch looks better than mine; however, it does not
> fix the second error in the report, i.e.,
> 
> tmp_dir: /tmp/flockRsYGNU
> done[7]
> done[3]
> lock error: 14 - Bad address: 3
> assertion "lock_res == 0" failed: file "main.c", line 40, function: thread_func
> Aborted
> 
> while mine does. I'm not sure why...
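
For reference, a test in the same spirit as the reported one (only my
sketch, not the reporter's actual main.c; the lock flavour, file name,
thread count and iteration count are guesses) is a handful of threads
contending for an exclusive flock() lock on the same file:

#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/file.h>
#include <unistd.h>

#define NTHREADS 8
#define ITERS 1000

static const char *lockfile = "/tmp/flock-test-file";

static void *
thread_func (void *arg)
{
  int id = (int) (long) arg;
  /* Each thread opens the file itself; separate open file descriptions
     make the flock() calls really contend with each other.  */
  int fd = open (lockfile, O_RDWR | O_CREAT, 0600);
  assert (fd >= 0);
  for (int i = 0; i < ITERS; i++)
    {
      int lock_res = flock (fd, LOCK_EX);
      if (lock_res != 0)
        printf ("lock error: %d - %s: %d\n", errno, strerror (errno), id);
      assert (lock_res == 0);
      lock_res = flock (fd, LOCK_UN);
      assert (lock_res == 0);
    }
  close (fd);
  printf ("done[%d]\n", id);
  return NULL;
}

int
main (void)
{
  pthread_t t[NTHREADS];
  for (long i = 0; i < NTHREADS; i++)
    pthread_create (&t[i], NULL, thread_func, (void *) i);
  for (int i = 0; i < NTHREADS; i++)
    pthread_join (t[i], NULL);
  unlink (lockfile);
  return 0;
}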

It seems fhandler_base::lock() needs to be reentrant for the original
test case: your v2 patch plus making i_all_lf a local variable solves
the issue. Please see my v2 patch.
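
To make the hazard concrete, here is a toy model entirely outside of
flock.cc (every name below is made up; this is not my v2 patch): as long
as the scratch space for the all-locks list is reachable through the
shared per-file node, a reentrant or concurrent call on the same node
can clobber it, whereas a per-call local buffer cannot be clobbered.

#include <stdlib.h>

struct toy_node                    /* shared, one per file/inode */
{
  void *all_lf;                    /* scratch pointer shared by all callers */
};

static void walk_locks (void *all_lf) { (void) all_lf; }

/* Hazardous: every caller stores its scratch buffer in the shared node.
   A second call on the same node overwrites all_lf while the first call
   is still using it, and the first call then frees the wrong buffer.  */
static void
lock_shared_scratch (struct toy_node *node)
{
  node->all_lf = malloc (65536);
  walk_locks (node->all_lf);
  free (node->all_lf);
}

/* Reentrancy-safe: the scratch buffer is a local variable handed to the
   helper explicitly, so concurrent or reentrant calls cannot interfere.  */
static void
lock_local_scratch (struct toy_node *node)
{
  void *all_lf = malloc (65536);
  (void) node;
  walk_locks (all_lf);
  free (all_lf);
}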

What do you think?

-- 
Takashi Yano <[email protected]>
