Re: [lxc-devel] cgroup management daemon

2013-11-26 Thread Victor Marmol
On Tue, Nov 26, 2013 at 8:12 AM, Serge E. Hallyn se...@hallyn.com wrote:

 Quoting Tim Hockin (thoc...@google.com):
  What are the requirements/goals around performance and concurrency?
  Do you expect this to be a single-threaded thing, or can we handle
 some number of concurrent operations?  Do you expect to use threads or
  processes?

 The cgmanager should be pretty dumb, so I would expect it to be
 quite fast.  I don't have any specific perf goals though.  If you
 have requirements I'm very interested to hear them.  I should be
 able to tell pretty soon how far short I fall.

 By default I'd expect to run with a single thread, but I don't
 imagine one thread can serve a busy 1024-cpu system very well.
 Unless you have guidance right now, I think I'd like to get
 started with the basic functionality and see how it measures
 up to your requirements.  I should add perf counters from the
 start so we can figure out where bottlenecks (if any) are and
 how to handle them.

 Otherwise I could start out with a basic numcpus/10 threadpool
 and have the main thread do socket i/o and parcel access
 verification and vfs work out to the threadpool, but I'd rather
 first know where the problems lie.
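The split described above (one main thread doing socket i/o, a small pool doing the verification and vfs work) can be sketched roughly as below; the numcpus/10 sizing comes from the mail, while the handler body and names are illustrative only:

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Heuristic from the discussion: roughly one worker per ten CPUs,
# with a floor of one so small machines still get a pool.
pool_size = max(1, (os.cpu_count() or 1) // 10)
pool = ThreadPoolExecutor(max_workers=pool_size)

def handle_request(cgroup, key):
    # Placeholder for the access verification and vfs work that
    # would be parceled out to the pool.
    return f"{cgroup}/{key}"

# The main thread would do the socket i/o, hand each parsed request
# to the pool, and collect futures for the replies.
fut = pool.submit(handle_request, "c1", "memory.limit_in_bytes")
print(fut.result())
```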


From Rohit's talk at Linux plumbers:

http://www.linuxplumbersconf.net/2013/ocw//system/presentations/1239/original/lmctfy%20(1).pdf

The goal is O(1000) reads and O(100) writes per second.



  Can you talk about logging - what and where?

 When started under upstart, anything we print out goes to
 /var/log/upstart/cgmanager.log.  Would be nice to keep it
 that simple.  We could log requests by a requester to do something
 it is not allowed to do, but it seems to me the failed
 attempts cause no harm, while the potential for overflowing
 logs can.

 Did you have anything in mind?  Did you want logging to help
 detect certain conditions for system optimization, or just
 for failure notices and security violations?

  How will we handle event_fd?  Pass a file-descriptor back to the caller?

 The only thing currently supporting eventfd is memory threshold,
 right?  I haven't tested whether this will work or not, but
 ideally the caller would open the eventfd fd, pass it, the
 cgroup name, controller file to be watched, and the args to
 cgmanager;  cgmanager confirms read access, opens the
 controller fd, makes the request over cgroup.event_control,
 then passes the controller fd back to the caller and closes
 its own copy.
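The registration step in that flow amounts to writing "<event_fd> <control_fd> <args>" into cgroup.event_control (cgroup v1 memory thresholds). A small sketch; the paths in the comments are illustrative, and only the line-formatting helper is exercised here:

```python
def event_control_line(event_fd, control_fd, threshold_bytes):
    # cgroup v1 arms a memory threshold by writing
    # "<eventfd fd> <controller fd> <threshold>" to cgroup.event_control.
    return f"{event_fd} {control_fd} {threshold_bytes}"

# The flow from the mail, under assumed paths (not run here):
#   efd = os.eventfd(0)                       # caller's eventfd
#   cfd = os.open(".../memory.usage_in_bytes", os.O_RDONLY)
#   with open(".../cgroup.event_control", "w") as f:
#       f.write(event_control_line(efd, cfd, 512 * 1024 * 1024))
# cgmanager would then pass cfd back to the caller and close its copy.
print(event_control_line(3, 4, 536870912))
```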

 I'm also not sure whether the cgroup interface is going to be
 offering a new feature to replace eventfd, since it wants
 people to stop using cgroupfs...  Tejun?


From my discussions with Tejun, he wanted to move to using inotify so it
may still be an fd we pass around.
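Either way, handing an fd back to the caller over the manager's unix socket would use SCM_RIGHTS ancillary data, which Python's socket module exposes directly. A self-contained sketch (a pipe stands in for the controller fd):

```python
import array
import os
import socket

def send_fd(sock, fd):
    # SCM_RIGHTS carries the fd to the peer, which receives a
    # duplicate referring to the same open file description.
    sock.sendmsg([b"F"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
                           array.array("i", [fd]).tobytes())])

def recv_fd(sock):
    msg, ancdata, flags, addr = sock.recvmsg(1, socket.CMSG_LEN(4))
    level, ctype, data = ancdata[0]
    return array.array("i", data)[0]

a, b = socket.socketpair(socket.AF_UNIX)
r, w = os.pipe()
send_fd(a, r)          # manager side hands the fd over
fd = recv_fd(b)        # caller side receives a working duplicate
os.write(w, b"hi")
print(os.read(fd, 2).decode())
```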


  That's all I can come up with for now.

--
Lxc-devel mailing list
Lxc-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-devel


Re: [lxc-devel] cgroup management daemon

2013-11-26 Thread Victor Marmol
On Tue, Nov 26, 2013 at 8:41 AM, Serge E. Hallyn se...@hallyn.com wrote:

 Quoting Victor Marmol (vmar...@google.com):
  On Tue, Nov 26, 2013 at 8:12 AM, Serge E. Hallyn se...@hallyn.com
 wrote:
 
   Quoting Tim Hockin (thoc...@google.com):
What are the requirements/goals around performance and concurrency?
Do you expect this to be a single-threaded thing, or can we handle
some number of concurrent operations?  Do you expect to use threads
 of
processes?
  
   The cgmanager should be pretty dumb, so I would expect it to be
   quite fast.  I don't have any specific perf goals though.  If you
   have requirements I'm very interested to hear them.  I should be
   able to tell pretty soon how far short I fall.
  
   By default I'd expect to run with a single thread, but I don't
   imagine one thread can serve a busy 1024-cpu system very well.
   Unless you have guidance right now, I think I'd like to get
   started with the basic functionality and see how it measures
   up to your requirements.  I should add perf counters from the
   start so we can figure out where bottlenecks (if any) are and
   how to handle them.
  
   Otherwise I could start out with a basic numcpus/10 threadpool
   and have the main thread do socket i/o and parcel access
   verification and vfs work out to the threadpool, but I'd rather
   first know where the problems lie.
  
 
  From Rohit's talk at Linux plumbers:
 
 
 http://www.linuxplumbersconf.net/2013/ocw//system/presentations/1239/original/lmctfy%20(1).pdf
 
  The goal is O(1000) reads and O(100) writes per second.

 Cool, thanks.  I can try and get a sense next week of how far off the
 mark I am for reads.

Can you talk about logging - what and where?
  
   When started under upstart, anything we print out goes to
   /var/log/upstart/cgmanager.log.  Would be nice to keep it
   that simple.  We could log requests by a requester to do something
   it is not allowed to do, but it seems to me the failed
   attempts cause no harm, while the potential for overflowing
   logs can.
  
   Did you have anything in mind?  Did you want logging to help
   detect certain conditions for system optimization, or just
   for failure notices and security violations?
  
How will we handle event_fd?  Pass a file-descriptor back to the
 caller?
  
   The only thing currently supporting eventfd is memory threshold,
   right?  I haven't tested whether this will work or not, but
   ideally the caller would open the eventfd fd, pass it, the
   cgroup name, controller file to be watched, and the args to
   cgmanager;  cgmanager confirms read access, opens the
   controller fd, makes the request over cgroup.event_control,
   then passes the controller fd back to the caller and closes
   its own copy.
  
   I'm also not sure whether the cgroup interface is going to be
   offering a new feature to replace eventfd, since it wants
   people to stop using cgroupfs...  Tejun?
  
 
  From my discussions with Tejun, he wanted to move to using inotify so it
  may still be an fd we pass around.

 Hm, would that just be inotify on the memory.max_usage_in_bytes
 file, or inotify on a specific fd you've created which is
 associated with any threshold you specify?  The former seems
 less ideal.


Tejun can comment more, but I think it is still TBD.
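For reference, the former scheme (a plain inotify watch on the existing file, e.g. memory.max_usage_in_bytes) would look roughly like this; a temp file stands in for the cgroup file, and the ctypes binding assumes a glibc Linux system:

```python
import ctypes
import os
import struct
import tempfile

libc = ctypes.CDLL("libc.so.6", use_errno=True)
IN_MODIFY = 0x00000002

# Watch a file for modification, the way a manager might watch
# memory.max_usage_in_bytes (a temp file stands in here).
tmp_fd, path = tempfile.mkstemp()
os.close(tmp_fd)
ifd = libc.inotify_init()
wd = libc.inotify_add_watch(ifd, path.encode(), IN_MODIFY)

with open(path, "w") as f:
    f.write("x")  # simulate the kernel updating the watched file

# Each event begins: int wd, uint32 mask, uint32 cookie, uint32 len.
buf = os.read(ifd, 4096)
wd_read, mask, cookie, length = struct.unpack_from("iIII", buf)
print(bool(mask & IN_MODIFY))
os.close(ifd)
os.remove(path)
```

Note there is no per-threshold state here, which is why a watch on the raw file seems less ideal than an fd tied to a specific threshold.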


 -serge



Re: [lxc-devel] cgroup management daemon

2013-11-26 Thread Victor Marmol
I think most of our usecases have only wanted to know about the parent, but
I can see people wanting to go further. Would it be much different to
support both? I feel like it'll be simpler to support all if we go that
route.


On Tue, Nov 26, 2013 at 1:28 PM, Serge E. Hallyn se...@hallyn.com wrote:

 Quoting Tim Hockin (thoc...@google.com):
  lmctfy literally supports .. as a container name :)

 So is ../.. ever used, or does no one ever do anything beyond ..?
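Supporting arbitrary levels of .. amounts to normalising the requested name against the requester's own cgroup and refusing to escape the root; a rough sketch (names are illustrative, and real code would also guard against cgroups literally named with leading dots):

```python
import posixpath

def resolve(own_cgroup, requested):
    # Interpret the requested name relative to the caller's cgroup,
    # so ".." is the parent, "../.." the grandparent, and so on.
    joined = posixpath.normpath(posixpath.join(own_cgroup, requested))
    # Refuse requests that climb above the hierarchy root.
    if joined == ".." or joined.startswith("../"):
        raise ValueError("escapes the cgroup root")
    return joined

print(resolve("a/b/c", ".."))      # parent
print(resolve("a/b/c", "../.."))   # grandparent
```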



Re: [lxc-devel] cgroup management daemon

2013-12-03 Thread Victor Marmol
I thought we were going to use chown in the initial version to enforce the
ownership/permissions on the hierarchy. Only the cgroup manager has access
to the hierarchy, but it tries to access the hierarchy as the user that
sent the request. It was only meant as a stopgap while the real
solution rolls out. It may also have been dropped since I last heard :)


On Tue, Dec 3, 2013 at 8:53 PM, Tim Hockin thoc...@google.com wrote:

 If this daemon works as advertised, we will explore moving all write
 traffic to use it.  I still have concerns that this can't handle read
 traffic at the scale we need.

 Tejun,  I am not sure why chown came back into the conversation.  This
 is a replacement for that.

 On Tue, Dec 3, 2013 at 6:31 PM, Serge Hallyn serge.hal...@ubuntu.com
 wrote:
  Quoting Tejun Heo (t...@kernel.org):
  Hello, Serge.
 
  On Tue, Dec 03, 2013 at 06:03:44PM -0600, Serge Hallyn wrote:
As I communicated multiple times before, delegating write access to
control knobs to untrusted domain has always been a security risk
 and
is likely to continue to remain so.  Also, organizationally, a
  
   Then that will need to be addressed with per-key blacklisting and/or
   per-value filtering in the manager.
  
   Which is my way of saying:  can we please have a list of the security
   issues so we can handle them?  :)  (I've asked several times before
   but haven't seen a list or anyone offering to make one)
 
  Unfortunately, for now, please consider everything blacklisted.  Yes,
  it is true that some knobs should be mostly safe but given the level
  of changes we're going through and the difficulty of properly auditing
  anything for delegation to untrusted environment, I don't feel
  comfortable at all about delegating through chown.  It is an
  accidental feature which happened just because it uses filesystem as
  its interface and it is nowhere near the top of the todo list.  It
  has never worked properly and won't in any foreseeable future.
 
cgroup's control knobs belong to the parent not the cgroup itself.
  
   After thinking awhile I think this makes perfect sense.  I haven't
   implemented set_value yet, and when I do I think I'll implement this
   guideline.
 
  I'm kinda confused here.  You say *everything* is gonna go through the
  manager and then talks about chowning directories.  Don't the two
  conflict?
 
  No.  I expect the user - except in the google case - to have either
  no cgroupfs mounts, or read-only mounts.
 
  -serge
