Re: CVS commit: src

2011-01-03 Thread rud...@eq.cz

Adam Hamsik wrote:

Module Name:src
Committed By:   haad
Date:   Sat Jan  1 12:49:49 UTC 2011

Modified Files:
src/distrib/sets/lists/base: mi
src/etc/mtree: NetBSD.dist.base

Log Message:
Add /var/lock directory to base set it's used by LVM and other tools.
Change group owner to operator to enable LVM locking for him.


Hi,

Why is /var/run not the right place for your needs?

r.



Re: CVS commit: src/tests/fs/common

2011-01-03 Thread Manuel Bouyer
On Fri, Dec 31, 2010 at 07:45:26PM +, David Laight wrote:
 [...]
 From what I remember of the NFS protocol, the following 'rules' applied:
 1) If you export part of a filesystem, you export all of the filesystem.

that's probably true

 2) If you give anyone access, you give everyone access.
 3) If you give anyone write access, you give everyone write access.

these two are not true for NetBSD, I think

 This is all because it is the 'mount' protocol that verifies whether
 a client has access - so a client that disobeys the mount protocol, or
 fakes up valid nfs file handles can avoid the access checks.

This was true for the SunOS 4 nfs implementation (and maybe other
implementations derived from the same base), but for NetBSD, some checks are
done at the nfsd level: the source IP address from the NFS request is
checked against the export list, as well as the R/O status for a write
request (and other things such as the uid root is mapped to).
So if you consider IP addresses not spoofable in your environment,
IP-based access and write permissions are fine.
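As a concrete illustration of the export-list checks described above, a NetBSD exports(5) entry can restrict a filesystem to one network, read-only. The paths, networks, and host names below are made-up examples, not taken from this thread:

```
# /etc/exports -- hypothetical example entries
# Read-only export of /usr/src to a single network:
/usr/src -ro -network 192.168.1.0 -mask 255.255.255.0
# Read-write export to two named hosts, with root mapped to nobody:
/home -maproot=nobody client1.example.org client2.example.org
```

nfsd rejects requests whose source address matches no entry and refuses writes on -ro exports; as Manuel notes, this is only as strong as the assumption that source addresses are not spoofed.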

-- 
Manuel Bouyer bou...@antioche.eu.org
 NetBSD: 26 years of experience will always make the difference
--


/var/lock

2011-01-03 Thread Alan Barrett
On Mon, 03 Jan 2011, rud...@eq.cz wrote:
 Adam Hamsik wrote:
 Modified Files:
  src/distrib/sets/lists/base: mi
  src/etc/mtree: NetBSD.dist.base
 
 Log Message:
 Add /var/lock directory to base set it's used by LVM and other tools.
 Change group owner to operator to enable LVM locking for him.
 
 Why is /var/run not the right place for your needs?

Also, where was this discussed?  If it was discussed, please
update hier(7) according to the outcome of the discussion.

--apb (Alan Barrett)


Re: /var/lock

2011-01-03 Thread Adam Hamsik

On Jan,Monday 3 2011, at 1:21 PM, Alan Barrett wrote:

 On Mon, 03 Jan 2011, rud...@eq.cz wrote:
 Adam Hamsik wrote:
 Modified Files:
 src/distrib/sets/lists/base: mi
 src/etc/mtree: NetBSD.dist.base
 
 Log Message:
 Add /var/lock directory to base set it's used by LVM and other tools.
 Change group owner to operator to enable LVM locking for him.
 
 Why is /var/run not the right place for your needs?
 
 Also, where was this discussed?  If it was discussed, please
 update hier(7) according to the outcome of the discussion.

It wasn't discussed, but there were a couple of changes to the dm driver,
not done by me, which weren't discussed either. The main reason for them
was to allow the operator to see LVM device status; this was the last
change needed. I can change it to /var/run. Would /var/run/locks/ be a
better place for it?

Regards

Adam.



Re: /var/lock

2011-01-03 Thread Bernd Ernesti
On Mon, Jan 03, 2011 at 02:46:31PM +0100, Adam Hamsik wrote:
 
 On Jan,Monday 3 2011, at 1:21 PM, Alan Barrett wrote:
 
  On Mon, 03 Jan 2011, rud...@eq.cz wrote:
  Adam Hamsik wrote:
  Modified Files:
src/distrib/sets/lists/base: mi
src/etc/mtree: NetBSD.dist.base
  
  Log Message:
  Add /var/lock directory to base set it's used by LVM and other tools.
  Change group owner to operator to enable LVM locking for him.
  
  Why is /var/run not the right place for your needs?
  
  Also, where was this discussed?  If it was discussed, please
  update hier(7) according to the outcome of the discussion.
 
 It wasn't discussed, but there were a couple of changes to the dm driver,
 not done by me, which weren't discussed either. The main reason for them
 was to allow the operator to see LVM device status; this was the last
 change needed. I can change it to /var/run. Would /var/run/locks/ be a
 better place for it?

IMHO /var/run/lvm would be better since it is used by lvm.

Bernd



Re: /var/lock

2011-01-03 Thread Adam Hamsik

On Jan,Monday 3 2011, at 2:54 PM, Bernd Ernesti wrote:

 On Mon, Jan 03, 2011 at 02:46:31PM +0100, Adam Hamsik wrote:
 
 On Jan,Monday 3 2011, at 1:21 PM, Alan Barrett wrote:
 
 On Mon, 03 Jan 2011, rud...@eq.cz wrote:
 Adam Hamsik wrote:
 Modified Files:
   src/distrib/sets/lists/base: mi
   src/etc/mtree: NetBSD.dist.base
 
 Log Message:
 Add /var/lock directory to base set it's used by LVM and other tools.
 Change group owner to operator to enable LVM locking for him.
 
 Why is /var/run not the right place for your needs?
 
 Also, where was this discussed?  If it was discussed, please
 update hier(7) according to the outcome of the discussion.
 
 It wasn't discussed, but there were a couple of changes to the dm driver,
 not done by me, which weren't discussed either. The main reason for them
 was to allow the operator to see LVM device status; this was the last
 change needed. I can change it to /var/run. Would /var/run/locks/ be a
 better place for it?
 
 IMHO /var/run/lvm would be better since it is used by lvm.

It would end up as /var/run/locks/lvm, but I can remove the locks part
if needed.

Regards

Adam.



Re: /var/lock

2011-01-03 Thread Alan Barrett
On Mon, 03 Jan 2011, Adam Hamsik wrote:
  Log Message:
  Add /var/lock directory to base set it's used by LVM and other tools.
  Change group owner to operator to enable LVM locking for him.
  
  Why is /var/run not the right place for your needs?
  
  Also, where was this discussed?  If it was discussed, please
  update hier(7) according to the outcome of the discussion.
 
 It wasn't discussed, but there were a couple of changes to the dm driver,
 not done by me, which weren't discussed either. The main reason for them
 was to allow the operator to see LVM device status; this was the last
 change needed. I can change it to /var/run. Would /var/run/locks/ be a
 better place for it?

If the locks do not need to persist across reboot, then somewhere under
/var/run would probably be appropriate.  If they are specific to lvm,
then /var/run/lvm would probably be appropriate.  Anything else (such as
/var/lock or /var/run/lock) probably needs more discussion.
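For the non-persistent case Alan describes, here is a minimal userland sketch of a lock file that lives happily under /var/run. The paths and function names are hypothetical illustrations, not LVM's actual locking code:

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*
 * O_EXCL creation either succeeds (we now own the lock) or fails with
 * EEXIST (someone else holds it).  A lock like this does not survive a
 * reboot if its directory is cleared at boot, which is the point of
 * putting it under /var/run.
 */
static int
take_lock(const char *path)
{
	int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0644);
	if (fd == -1)
		return -1;		/* typically EEXIST: lock already held */

	/* Record our pid, pidfile-style, so admins can see the holder. */
	char buf[32];
	snprintf(buf, sizeof(buf), "%ld\n", (long)getpid());
	(void)write(fd, buf, strlen(buf));
	close(fd);
	return 0;
}

static void
drop_lock(const char *path)
{
	(void)unlink(path);
}
```

A daemon would call something like take_lock("/var/run/lvm/lvm.lock") at startup and drop_lock() on exit; since /var/run is recreated at boot, stale locks vanish automatically, which is exactly the non-persistent behaviour discussed above.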

--apb (Alan Barrett)


Re: /var/lock

2011-01-03 Thread Adam Hamsik

On Jan,Monday 3 2011, at 4:08 PM, Alan Barrett wrote:

 On Mon, 03 Jan 2011, Adam Hamsik wrote:
 Log Message:
 Add /var/lock directory to base set it's used by LVM and other tools.
 Change group owner to operator to enable LVM locking for him.
 
 Why is /var/run not the right place for your needs?
 
 Also, where was this discussed?  If it was discussed, please
 update hier(7) according to the outcome of the discussion.
 
 It wasn't discussed, but there were a couple of changes to the dm driver,
 not done by me, which weren't discussed either. The main reason for them
 was to allow the operator to see LVM device status; this was the last
 change needed. I can change it to /var/run. Would /var/run/locks/ be a
 better place for it?
 
 If the locks do not need to persist across reboot, then somewhere under
 /var/run would probably be appropriate.  If they are specific to lvm,
 then /var/run/lvm would probably be appropriate.  Anything else (such as
 /var/lock or /var/run/lock) probably needs more discussion.

I would like to have something persistent between reboots. I have found
that we already have /var/spool/lock, so /var/spool/lock/lvm/ seems to be
my preferred place. Do you agree? Also, /var/spool/lock is not mentioned
in hier(7).

Regards

Adam.



Re: /var/lock

2011-01-03 Thread Takahiro Kambe
Hi,

In message c5724e4b-3267-40d5-a93a-425072bb2...@gmail.com
on Tue, 4 Jan 2011 02:46:17 +0100,
Adam Hamsik haa...@gmail.com wrote:
 I would like to have something persistent between reboots. I have found
 that we already have /var/spool/lock, so /var/spool/lock/lvm/ seems to be
 my preferred place. Do you agree? Also, /var/spool/lock is not mentioned
 in hier(7).
Are they really *lock* files?

Anyway, I think it should be /var/db/lvm for them, unless those files
are temporary files like printer output.

Best regards.

-- 
Takahiro Kambe t...@back-street.net


Re: /var/lock

2011-01-03 Thread Adam Hamsik

On Jan,Tuesday 4 2011, at 2:51 AM, Takahiro Kambe wrote:

 Hi,
 
 In message c5724e4b-3267-40d5-a93a-425072bb2...@gmail.com
   on Tue, 4 Jan 2011 02:46:17 +0100,
   Adam Hamsik haa...@gmail.com wrote:
 I would like to have something persistent between reboots. I have found
 that we already have /var/spool/lock, so /var/spool/lock/lvm/ seems to be
 my preferred place. Do you agree? Also, /var/spool/lock is not mentioned
 in hier(7).
 Are they really *lock* files?

It's the LVM subsystem lock file. Does it need to be specific in any way?

 
 Anyway, I think it should be /var/db/lvm for them, unless those files
 are temporary files like printer output.

So we have these options:

1) /var/lock/lvm - needs much more discussion
2) /var/spool/lock/lvm
3) /var/run/lvm - not persistent, it needs to be recreated every time
4) /var/db/lvm

What would you prefer?

Regards

Adam.



Re: /var/lock

2011-01-03 Thread Matt Thomas

On Jan 3, 2011, at 5:58 PM, Adam Hamsik wrote:
 On Jan,Tuesday 4 2011, at 2:51 AM, Takahiro Kambe wrote:
 So we have these options:
 
 1) /var/lock/lvm - needs much more discussion 
 2) /var/spool/lock/lvm 
 3) /var/run/lvm - not persistent, it needs to be recreated every time
 4) /var/db/lvm
 
 What would you prefer ?

/var/run/lvm.lock if you just need a single lock file.



Re: CVS commit: src/sys/uvm

2011-01-03 Thread Masao Uebayashi
I take silence as no objection.

On Thu, Dec 23, 2010 at 12:48:04PM +0900, Masao Uebayashi wrote:
 On Wed, Dec 22, 2010 at 05:37:57AM +, YAMAMOTO Takashi wrote:
  hi,
  
   Could you ack this discussion?
  
  sorry for dropping a ball.
  
   
   On Tue, Dec 07, 2010 at 01:19:46AM +0900, Masao Uebayashi wrote:
   On Thu, Nov 25, 2010 at 11:32:39PM +, YAMAMOTO Takashi wrote:
[ adding cc: tech-kern@ ]

hi,

 On Wed, Nov 24, 2010 at 11:26:39PM -0800, Matt Thomas wrote:
 
 On Nov 24, 2010, at 10:47 PM, Masao Uebayashi wrote:
 
  On Thu, Nov 25, 2010 at 05:44:21AM +, YAMAMOTO Takashi wrote:
  hi,
  
  On Thu, Nov 25, 2010 at 04:18:25AM +, YAMAMOTO Takashi 
  wrote:
  hi,
  
  Hi, thanks for review.
  
  On Thu, Nov 25, 2010 at 01:58:04AM +, YAMAMOTO Takashi 
  wrote:
  hi,
  
  - what's VM_PHYSSEG_OP_PG?
  
  It's to lookup vm_physseg by struct vm_page *, relying on 
  that
  struct vm_page *[] is allocated linearly.  It'll be used to 
  remove
  vm_page::phys_addr as we talked some time ago.
  
   i'm not sure if committing this unused, uncommented code now
   helps it.
   some try-and-benchmark cycles might be necessary given that
   vm_page -> paddr conversion could be performance critical.
  
  If you really care performance, we can directly pass struct 
  vm_page
  * to pmap_enter().
  
   We're doing struct vm_page * -> paddr_t just before pmap_enter(),
   then doing paddr_t -> vm_physseg reverse lookup again in
   pmap_enter() to check if a given PA is managed.  What is really
   needed here is to look up struct vm_page * -> vm_physseg once,
   and you'll know both paddr_t and managed or not.
  
  i agree that the current code is not ideal in that respect.
  otoh, i'm not sure if passing vm_physseg around is a good idea.
  
  It's great you share the interest.
  
  I chose vm_physseg, because it was there.  I'm open to 
  alternatives,
  but I don't think you have many options...
 
 Passing vm_page * doesn't work if the page isn't managed since there
 won't be a vm_page for the paddr_t.
 
 Now passing both paddr_t and vm_page * would work and if the pointer
 to the vm_page, it would be an unmanaged mapping.  This also gives 
 the
 access to mdpg without another lookup.
 
 What if XIP'ed md(4), where physical pages are in .data (or .rodata)?
 
 And don't forget that you're the one who first pointed out that
 allocating vm_pages for XIP is a pure waste of memory. ;)

i guess matt meant if the pointer to the vm_page is NULL.

 
 I'm allocating vm_pages, only because of phys_addr and loan_count.
 I believe vm_pages is unnecessary for read-only XIP segments.
 Because they're read-only, and stateless.
 
 I've already concluded that the current managed or not model
 doesn't work for XIP.  I'm pretty sure that my vm_physseg + off_t
 model can explain everything.  I'm rather waiting for a counter
 example how vm_physseg doesn't work...

i guess your suggestion is too vague.
where do you want to use vm_physseg * + off_t instead of vm_page *?
getpages, pmap_enter, and?  what would their function prototypes be?
   
   The basic idea is straightforward; always allocate vm_physseg for
   memories/devices.  If a vm_physseg is used as general purpose
   memory, you allocate vm_page[] (as vm_physseg::pgs).  If it's
    potentially mapped as cached, you allocate pvh (as vm_physseg::pvh).
  
  can you provide function prototypes?
 
 I have no real code for this big picture at this moment.  Making
 vm_physseg available as reference is the first step.  This only
 changes uvm_page_physload() to return a pointer:
 
   -void uvm_page_physload();
   +void *uvm_page_physload();
 
 But this makes XIP pager MUCH cleaner.  The reason has been explained
 many times.
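The prototype change above (uvm_page_physload() returning a handle instead of void) can be modelled in userland: a registry that hands back a segment pointer, plus a reverse lookup that maps a page frame to (segment, offset) in one pass, so "which segment?" and "is it managed?" are answered together. All names here are hypothetical; this is a sketch of the pattern, not the UVM code:

```c
#include <assert.h>
#include <stddef.h>

/* A physical-memory segment: a half-open range of page frame numbers. */
struct physseg {
	unsigned long start;	/* first page frame number */
	unsigned long end;	/* last pfn + 1 */
};

#define MAXSEG 8
static struct physseg segs[MAXSEG];
static int nsegs;

/* Old style returned void; the proposed style returns a handle the
 * caller can keep and pass around. */
static struct physseg *
physload(unsigned long start, unsigned long end)
{
	if (nsegs == MAXSEG)
		return NULL;
	segs[nsegs].start = start;
	segs[nsegs].end = end;
	return &segs[nsegs++];
}

/* Reverse lookup: pfn -> (segment, offset).  One search answers both
 * "which segment?" and "managed or not?" (NULL means unmanaged). */
static struct physseg *
physseg_find(unsigned long pfn, unsigned long *offp)
{
	for (int i = 0; i < nsegs; i++) {
		if (pfn >= segs[i].start && pfn < segs[i].end) {
			*offp = pfn - segs[i].start;
			return &segs[i];
		}
	}
	return NULL;
}
```

A fault handler in this model would carry (struct physseg *, offset) pairs on its stack instead of struct vm_page pointers, performing the lookup once per page rather than once per question asked about the page.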
 
 Making fault handlers and pagers to use vm_physseg * + off_t is
 the next step, and I don't intend to work on it now.  I just want
 to explain the big picture.
 
  
   
    Keep vm_physseg * + off_t array on stack.  If a UVM object uses
    vm_page (e.g. vnode), its pager looks up vm_page -> vm_physseg *
    + off_t *once* and caches it on stack.
  
  do you mean something like this?
  struct {
  vm_physseg *hoge;
  off_t fuga;
  } foo [16];
 
 Yes.
 
 Or cache vm_page * with it, like:
 
   struct vm_id {
   vm_physseg *seg;
   off_t off;
   vm_page *pg;
   };
 
   uvm_fault()
   {
   vm_id pgs[];
   :
   }
 
 Vnode pager (genfs_getpages) takes vm_page's by looking up
 vnode::v_uobj's list, or uvn_findpages().
 
 When it returns back to fault handler, we have to lookup vm_physseg
 for each page.  Then fill the seg slot 

Re: CVS commit: src

2011-01-03 Thread David Holland
On Sun, Jan 02, 2011 at 12:58:13PM +0200, Antti Kantee wrote:
   Add an INRELOOKUP namei flag. Sigh. (We don't need more namei flags.)
   
   However, because of a protocol deficiency puffs relies on being able
   to keep track of VOP_LOOKUP calls by inspecting their contents, and
   this at least allows it to use something vaguely principled instead of
   making wild guesses based on whether SAVESTART is set.
  
  Well, not really.  That code in libp2k was compensating for file systems
  wanting to modify some componentname flags in lookup and then expecting
  them to be there (or not be there) come the time of the op itself.

Yes, and you yourself called it a protocol deficiency a while back.
Given the way componentname works it should really be treated as
INOUT, that is, picked up and shipped back after lookup runs.

all of that crap is going to go away... la la la

  If componentname is guaranteed to be opaque (i.e. no
  HASBUF/SAVENAME/START etc. mucking), you should be able to delete that
  ugly code and flag and simply construct the componentnames passed to
  VOP_RENAME from what puffs_rename gives you (but it's been a while,
  so please test it if you implement it).

By the time that happens there won't be any componentnames any more. :-)

While HASBUF, SAVENAME, and SAVESTART are all dead now there's still a
bunch of flags that get flipped around, particularly
ISWHITEOUT/DOWHITEOUT, plus etfs still depends on mucking with
cn_consume, etc. etc.

But after this past weekend I think the path forward is now looking
pretty clear.

-- 
David A. Holland
dholl...@netbsd.org


Re: /var/lock

2011-01-03 Thread David Holland
On Tue, Jan 04, 2011 at 02:58:18AM +0100, Adam Hamsik wrote:
   Are they really *lock* files?
  
  It's lvm subsystem lock file. Does it need to be specific in any way ?

If it's really a lock file that may need to persist across reboots, then

  2) /var/spool/lock/lvm 

is the right place.

-- 
David A. Holland
dholl...@netbsd.org


Re: CVS commit: src/sys/dev/acpi

2011-01-03 Thread Jukka Ruohonen
On Tue, Jan 04, 2011 at 05:48:49AM +, Jukka Ruohonen wrote:
   Do not queue functions via sysmon_taskq(9) in the pmf(9) resume hooks.
There is a small and unlikely race when the drivers are loaded as modules;
suspend, resume, queue a function, and immediately unload the module.

Anything against adding, for instance, the following to sysmon_taskq(9)? Or
better ideas on how this should be handled?

- Jukka.

Index: sysmon_taskq.c
===
RCS file: /cvsroot/src/sys/dev/sysmon/sysmon_taskq.c,v
retrieving revision 1.14
diff -u -p -r1.14 sysmon_taskq.c
--- sysmon_taskq.c  5 Sep 2008 22:06:52 -   1.14
+++ sysmon_taskq.c  4 Jan 2011 06:17:45 -
@@ -209,3 +209,30 @@ sysmon_task_queue_sched(u_int pri, void 
 
return 0;
 }
+
+/*
+ * sysmon_task_queue_cancel:
+ *
+ * Cancel a scheduled task.
+ */
+int
+sysmon_task_queue_cancel(void (*func)(void *))
+{
+	struct sysmon_task *st;
+
+	if (func == NULL)
+		return EINVAL;
+
+	mutex_enter(&sysmon_task_queue_mtx);
+	TAILQ_FOREACH(st, &sysmon_task_queue, st_list) {
+		if (st->st_func == func) {
+			TAILQ_REMOVE(&sysmon_task_queue, st, st_list);
+			mutex_exit(&sysmon_task_queue_mtx);
+			free(st, M_TEMP);
+			mutex_enter(&sysmon_task_queue_mtx);
+		}
+	}
+	mutex_exit(&sysmon_task_queue_mtx);
+
+	return 0;
+}
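The proposed sysmon_task_queue_cancel() removes matching entries while walking the queue. As a userland sketch of the same operation (hypothetical names, no kernel mutexes, and a hand-rolled list instead of TAILQ), one way to make remove-during-walk safe is to advance via a pointer-to-pointer, so the walk never dereferences a node after freeing it:

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal userland model of a task queue; each task carries a function
 * pointer, and cancel unlinks every entry whose function matches. */
struct task {
	void (*func)(void *);
	void *arg;
	struct task *next;
};

static struct task *queue_head;

static void
task_sched(void (*func)(void *), void *arg)
{
	struct task *t = malloc(sizeof(*t));
	t->func = func;
	t->arg = arg;
	t->next = queue_head;	/* LIFO order is fine for the sketch */
	queue_head = t;
}

/* Returns the number of entries removed. */
static int
task_cancel(void (*func)(void *))
{
	struct task **pp = &queue_head;
	int removed = 0;

	while (*pp != NULL) {
		struct task *t = *pp;
		if (t->func == func) {
			*pp = t->next;	/* unlink first... */
			free(t);	/* ...then free; *pp stays valid */
			removed++;
		} else {
			pp = &t->next;
		}
	}
	return removed;
}

/* Two no-op tasks for demonstration. */
static void task_a(void *arg) { (void)arg; }
static void task_b(void *arg) { (void)arg; }
```

Scheduling task_a twice and task_b once, then calling task_cancel(task_a), removes both task_a entries and leaves task_b queued.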
Index: sysmon_taskq.h
===
RCS file: /cvsroot/src/sys/dev/sysmon/sysmon_taskq.h,v
retrieving revision 1.2
diff -u -p -r1.2 sysmon_taskq.h
--- sysmon_taskq.h  21 Jul 2007 23:15:17 -  1.2
+++ sysmon_taskq.h  4 Jan 2011 06:17:45 -
@@ -42,5 +42,6 @@ void  sysmon_task_queue_preinit(void);
 void   sysmon_task_queue_init(void);
 void   sysmon_task_queue_fini(void);
 int	sysmon_task_queue_sched(u_int, void (*)(void *), void *);
+int	sysmon_task_queue_cancel(void (*func)(void *));
 
 #endif /* _DEV_SYSMON_SYSMON_TASKQ_H_ */
Index: sysmon_taskq.9
===
RCS file: /cvsroot/src/share/man/man9/sysmon_taskq.9,v
retrieving revision 1.6
diff -u -p -r1.6 sysmon_taskq.9
--- sysmon_taskq.9  26 Jan 2010 08:48:39 -  1.6
+++ sysmon_taskq.9  4 Jan 2011 06:18:23 -
@@ -43,6 +43,8 @@
 .Fn sysmon_task_queue_fini void
 .Ft int
.Fn sysmon_task_queue_sched "u_int pri" "void (*func)(void *)" "void *arg"
+.Ft int
+.Fn sysmon_task_queue_cancel "void (*func)(void *)"
 .Sh DESCRIPTION
 The machine-independent
 .Nm
@@ -78,10 +80,15 @@ The single argument passed to
 .Fa func
 is specified by
 .Fa arg .
+The
+.Fn sysmon_task_queue_cancel
+function can be used to cancel the execution of an already scheduled function.
 .Sh RETURN VALUES
-Upon successful completion,
+Both
 .Fn sysmon_task_queue_sched
-returns 0.
+and
+.Fn sysmon_task_queue_cancel
+return 0 upon successful completion.
 Otherwise, the following error values are returned:
 .Bl -tag -width [EINVAL]
 .It Bq Er EINVAL