Re: xrdp broken, possible directions

2017-01-23 Thread John Moser
Nothing yet.

I did manage to file a Launchpad bug with a patch to get weston-rdp to
build--as a separate package (weston-rdp-compositor) due to the large
number of additional dependencies the RDP compositor draws in.  As noted, I
haven't managed to actually test it--I don't have a real Zesty
environment--and mainly intended to make the component installable as a
first step.

https://bugs.launchpad.net/ubuntu/+source/weston/+bug/1654864

I'm not a programmer, in any case; I can try to work out a bash script
that wraps weston-rdp behind an X11rdp-style front end (sketched below),
but that's about it for me.  So
far, Zesty is at the point where other interested parties can readily start
trying to do more-advanced things like integrate an RDP session back-end
with a greeter (e.g. get LightDM to spawn under weston-rdp/xwayland or
mir-rdp/xmir, if someone implements that) or use the more-advanced features
of Wayland/Mir to move sessions between console and RDP (supposedly,
Wayland can do things like move a session from the console to VNC or such
without disrupting applications).
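
Roughly what I have in mind for that wrapper--an untested sketch, so
option names may need adjusting per weston release, and the RDP back-end
wants a TLS key/cert pair (the paths here are placeholders):

    #!/bin/sh
    # Hypothetical X11rdp stand-in: let weston's RDP compositor host X
    # clients through XWayland.  xrdp-sesman would invoke this with its
    # [X11rdp] params; we ignore them and just bring up the compositor.
    exec weston \
        --backend=rdp-backend.so \
        --rdp-tls-cert=/etc/xrdp/tls.crt \
        --rdp-tls-key=/etc/xrdp/tls.key \
        --modules=xwayland.so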

I'm hoping to get somewhere farther with this by 18.04 LTS.  That would
land this on servers and Raspberry Pi distributions.  As to how far...
maybe I can make an X11rdp shell script work; if there's no other interest,
that's as far as it's going to go.


On Mon, Jan 23, 2017 at 5:47 PM, Martinx - ジェームズ <thiagocmarti...@gmail.com>
wrote:

>
> On 7 January 2017 at 22:35, John Moser <john.r.mo...@gmail.com> wrote:
>
>> As per bug 220005
>>
>> https://bugs.launchpad.net/ubuntu/+source/xrdp/+bug/220005
>>
>> xrdp doesn't work.  It used to build X11rdp by patching Xorg to write to
>> an RDP session, but this no longer builds and doesn't get installed.
>>
>> Fundamentally, xrdp-sesman runs a command with some arguments.  It has a
>> configuration section in /etc/xrdp/sesman.ini like so:
>>
>> [X11rdp]
>> param1=-bs
>> param2=-ac
>> param3=-nolisten
>> param4=tcp
>>
>> Thus any X11 service can stand in.
>>
>> Weston typically comes with weston-rdp, although this isn't built in
>> Ubuntu.  This is an RDP compositor, and listens on a port for an RDP
>> connection.  Stacking Wayland-X on top of this would immediately give an
>> xrdp replacement.
>>
>> As a possible forward direction, Ubuntu could:
>>
>>  - Provide weston-rdp;
>>  - Provide an X11rdp which runs weston-rdp to host Wayland-X;
>>  - Promote the necessary pieces to main;
>>  - Provide a default configuration which listens on 3389 (RDP) and
>> automatically starts an X11rdp session with a Display Manager (lightdm,
>> gdm, etc.).
>>
>> This would allow a user to install xrdp and immediately have a system
>> which gives a login screen on the RDP port.  No VNC, no logging in with
>> xrdp's ugly session manager; instead we would get the same functionality as
>> Windows servers, using the same protocol.
>>
>> Even servers without a console display manager could allow login through
>> RDP in this way.
>>
>> Obviously, this doesn't immediately provide advanced options like
>> disconnecting from the RDP session and leaving it running, reconnecting to
>> the same session (by logging in as the same user), sharing the session, or
>> accessing the existing console session.  I believe all of the pieces to
>> provide basic remote RDP access are there, however; advanced functionality
>> will require more code.
>>
>> Thoughts?
>>
>>
> +1000 for that! I really want to see this smooth RDP integration on
> Ubuntu! Sounds awesome!
>
> Do you have some workaround for this problem? Maybe manually doing some
> changes?
>
> Cheers!
> Thiago
>


Re: My opinion on Ubuntu cancelling Intel 80386/80386-clone processor support

2016-09-11 Thread John Moser
On Sun, 2016-09-11 at 12:52 -0400, Tom H wrote:
> On Sun, Sep 11, 2016, Ralf Mardorf 
> wrote:
> > 
> > 
> > You are quoting me out of context. The context is that the poor
> > can't
> > donate new computers and they can't pay for infrastructure, such as
> > internet access for everyone. _BUT_ rich people could, they are
> > just
> > not interested in doing it, they are greedy.
> You mean selfish. So what? We all are!
> 
> I've only read a quarter (or less) of the posts in this thread so I
> don't know how it went from "32-bit ISOs are being deprecated" to
> social and economic pseudo-commentary (I can make an educated guess!)
> but do you really think that this is the best use of
> ubuntu-devel-discuss@?
> 

As much as I enjoy discussing economics, it's really hard to separate
this from political contexts.

The major point of contention is whether Ubuntu targets old systems and
third-world e-reuse programs.  I have said an organization specifically
targeting those uses and managing collection, software provision, and
distribution would be more-efficient (cheaper) and more-effective
(better results) than tacking on so-called "support" for an imaginary,
ill-defined, and minority use case to a general-purpose effort.

There is no use dragging around a boat anchor just in case you meet
someone who has a boat.  Someone else is already in the business of
making and selling boat anchors.



[OT] Re: My opinion on Ubuntu cancelling Intel 80386/80386-clone processor support

2016-09-11 Thread John Moser
On Sun, 2016-09-11 at 17:24 +0200, Ralf Mardorf wrote:
> On Sun, 11 Sep 2016 15:58:44 +0300, Thierry Andriamirado wrote:
> > 
> > On 10 September 2016 at 20:13:47 UTC+03:00, Ralf Mardorf
> >  wrote:
> >  
> > > 
> > > It's not the task of the poor to help the poor.
> > Of course IT IS! ;)
> > I'm not so poor compared to many malagasy people, but being in a
> > poor
> > country, I should be one of the first to raise their hands to speak
> > for these "poors" who still use old hardwares. With the help of
> > those
> > in rich countries. ;)  
> You are quoting me out of context. The context is that the poor can't
> donate new computers and they can't pay for infrastructure, such as
> internet access for everyone. _BUT_ rich people could, they are just
> not interested in doing it, they are greedy.

Stop that.

Everybody loves to say "X has less and Y has more, Y is greedy!"  This
has led to enormous political problems preventing any effective aid to
the economically disadvantaged.

To illustrate in a somewhat off-topic direction:  the United States can
implement the modern concept of a Universal Basic Income (UBI) as a
Universal Social Security (USS) at a $1 trillion lower burden on the
taxpayer, without raising taxes on the rich.  This can easily end
homelessness and hunger across the nation; create an enormous demand
for jobs (which eventually requires shorter working hours to
counterbalance); and remediate an incredibly faulty welfare system that
would take too long to discuss here.

And yes, it really is a trillion dollars:
https://bluecollarlunch.wordpress.com/2016/07/22/a-basic-income-is-a-trillion-dollars-cheaper/

Most UBI proponents oppose this because IT DOESN'T TAX THE RICH MORE.
Many people also get angry because the income bump "doesn't make
businesses pay"--your effective "minimum wage" goes up, but the evil,
greedy business doesn't have to pay it, so this is wrong.

In other words:  people are less-interested in helping the poor and
more interested in attacking people or classes of people whom they
dislike.

People attacked the American Red Cross WHILE THEY WERE STOPPING A
CHOLERA EPIDEMIC IN HAITI, going so far as to complain that ARC hired
contractors who then made a profit--never mind that they actually got
food, water, sanitization, vaccination, government disaster response
programs for future crises, and temporary shelter distributed to people
who would be dead by now; there's evil rich people to burn, and nobody
really cares about dirty poor people on some barely-developed island
somewhere.

Do you have any concept of how many people suffer and die every year
because everyone is focused on how to pry money away from businesses
and high-income individuals instead of how to effectively address
societal problems by organized effort or public policy?



That doesn't even go into the economic considerations.  People still
think money is wealth; but money is backed by the productive output of
a population.  Want to see how it really works?

In America, we outsource a lot.  We import labor and goods (ultimately
labor).  Even when we bring things from China, someone has to ship them,
someone has to stock them, someone has to retail them.  There's a huge
amount of labor just in moving and selling goods.

Americans have income, from jobs.  When goods and services are
purchased, that money is business revenue.  Revenue goes to individuals
(wages), other businesses (overhead), and profits.  Put the wages and
profits together, and you have income.  All of the income in America is
equivalent to all of the business productive output; import goods are
bought and thus the money goes out of the country, while those goods
are sold locally and the price then divided between wages and business
profits, thus reflecting the production of retail and shipping
services.  IT services and made-in-America things (e.g. food?) are of
course tangibly made here.

The ability to keep importing goods, of course, is predicated on our
ability to produce more goods.  America does trade away a lot of grain,
IT services, and electronics goods (iPhones made in China by the
specifications of Apple; China gets its cut, but so does the US).

Take a step back and look at all the money moving around.

Every dollar spent represents something that was made, shipped, and
sold.  If we produced half as much but still employed as many people for
the same yearly wage, everything would cost twice as much--suddenly
we're making 5,000 units instead of 10,000, but we're still paying Charlie
$40,000/year, and have to divvy his salary up into the price of each of
these.  That means Charlie ultimately can only buy half as much.

So you look at these poor countries and tell me what they're missing.

The answer isn't money, computers, or a modern welfare system to make
their rich people pay for their poor people.

The answer is technology.

Man learned to sharpen a pointy stick and spend less time hunting.  He
learned to plant 

Re: My opinion on Ubuntu cancelling Intel 80386/80386-clone processor support

2016-09-08 Thread John Moser
On Thu, 2016-09-08 at 18:10 +0300, Thierry Andriamirado wrote:
> 
> On 8 September 2016 at 01:35:05 UTC+03:00, John Moser
> <john.r.moser@gmail.com> wrote:
> > 
> > 
> > > 
> > > There are countless very old computers running Ubuntu, in
> > > Developing
> > > Countries.
> > > 
> > It's not my fault nobody counted.
> It's nobody's fault: many of those Ubuntu boxes are used in villages
> in the bush, and are not even connected to the Internet. Updated from
> time to time via CD-Rom..
> 

Sorry, my socialization is kind of poor, and I never know what's going
to come across.

I was implying that the idea that there are some unknown "Many" and
that such things serve some dire purpose is imaginative, because there
aren't even estimates on program size, much less impact.  Essentially,
you were begging the question, and I called you on it.

So what we have here is an imaginary program with an imaginary impact
that is more-likely to exist the closer we get to 0 (that is:  it's
almost-certain someone in some bush country has 1 or more such
machines; it becomes less-certain the larger we scale).  Directly, the
probability of size can't be known (i.e. estimates are impossible);
indirectly, because the probability of size can't be known, there is a
larger probability of any such thing being smaller and less-effective
--two independent traits stemming from the common cause of not having a
well-defined and well-operating program.

It would be safe to assume OLPC is much larger in scope of impact
because OLPC has a structured program working to maximize effectiveness
and distribution, whereas the supposed Ubuntu systems are theoretically
out there somewhere for some reason as consequence of some effort by
someone.

I remain unconvinced of imaginary things--including the imaginary
mountain where one might expect to find at least a molehill.  When you
get a picture of Bigfoot, let me know.

That explanation should be crass enough to sink in.

> -- 
> Sent from my Android phone with K-9 Mail.  Please excuse my brevity.
> 



Re: My opinion on Ubuntu cancelling Intel 80386/80386-clone processor support

2016-09-07 Thread John Moser
On Wed, 2016-09-07 at 15:54 +0300, Thierry Andriamirado wrote:
> 
> On 7 September 2016 at 04:58:44 UTC+03:00, John Moser
> <john.r.moser@gmail.com> wrote:
> > 
> > 
> > that context are uncommon by nature.  That in and of itself seems to
> > warrant a project specially dedicated to e-waste reuse programs,
> > rather
> > than a best-effort and costly nod to the concept of older systems.
> There are countless very old computers running Ubuntu, in Developing
> Countries.
> 


It's not my fault nobody counted.



Re: My opinion on Ubuntu cancelling Intel 80386/80386-clone processor support

2016-09-06 Thread John Moser
On Tue, 2016-09-06 at 21:33 -0400, JMZ wrote:
> Hi Ryan,
> 
> When you say "Ubuntu 16.10" I wonder if you mean that you are running
> gnome with the unity shell or just the command line only.  Running any
> of the graphical environments (save maybe lxde) on an 80586 would be
> pretty exceptional.
> 
> 

> Pentium 4/4 HT systems (which are still 80586 chipset basically) can
> be got even at community trash dumps.  If you're starting a school,
> setting up donated Pentium 4's and old dual-cores with Lubuntu or
> another lxde distro might be your best bet.  This is especially true
> if all you're running is Firefox and LibreOffice, or similar.
> 

Is this even worth the resources?  There are multiple issues here, most
obvious being the distinction between a current-generation operating
system (Ubuntu) and a special-purpose software project (to target
legacy hardware).  Is legacy 32-bit support part of Ubuntu's mission,
or are resources best diverted to improving the system for the other
99.99% of use cases?  Like it or not, i586 is probably less than one in
ten thousand installations.


E-waste reuse is itself an economics issue.  We like to think we can
donate those systems to some poor people somewhere; but that has a huge
array of complexities:

* Humans have to eat, among other things, and so their labor time is at
  a premium:  you trade the labor time to produce one good for the
  labor time to produce another, e.g. food, and thus any volunteered
  time is a real cost paid by the volunteer;

* Collecting, sorting, and shipping those things takes human labor; 

* The logistics takes an immense amount of labor:  who gets these
  computers, what are their requirements, how do we optimize the
  benefit for their particular poverty case, and so forth;

* The targets of e-waste reuse are frequently poor nations with
  unreliable or expensive access to electricity and even waste
  disposal.

E-waste reuse can actually cost as much or more than new production,
and has runtime costs because it's less-efficient to use, maintain, and
even power.  Collecting, inventorying, and preparing e-waste as a
refurbished good incurs more labor per unit than rolling new units off
an assembly line; the cost advantage depends on whether the components
cost more than the additional labor.  Even then, there's a lot of cost in
developing the logistics of using something out-of-date in a modern
environment.

Even if you can co-opt slave labor into the deal, is supporting this
kind of specialized use costly for the Ubuntu maintainers?  Just
running a build of the OS as 32-bit is inadequate; with the broad range
of out-of-date hardware left behind by modern system software, you'd
need to respond to ad-hoc support issues with varied hardware
configurations breaking because all hardware configurations used in
that context are uncommon by nature.  That in and of itself seems to
warrant a project specially dedicated to e-waste reuse programs, rather
than a best-effort and costly nod to the concept of older systems.





Re: Protobuf 3.0.0 instead of 2.6.1?

2016-07-27 Thread John Moser
Use a Docker container for now.  You may need to map /tmp as a volume
(to get the X11 socket) if it's an X application.
It might be easier to start from a Debian or Alpine container when
building the Dockerfile.
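
Something like this is the rough shape of it ("tf-protobuf3" is a
hypothetical image you would build yourself from a Debian or Alpine base
with protobuf 3 installed):

    # Share the host's X11 socket with the container; you may also need
    # to allow local connections first with: xhost +local:
    docker run --rm \
        -e DISPLAY="$DISPLAY" \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        tf-protobuf3 python3 my_app.py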
On Tue, 2016-07-26 at 14:23 -0700, JIA Pei wrote:
> 
> Hi, Canonical developers?
> 
> Recently, I started to play with tensorflow. I prefer to run code on
> my native computer instead of building a virtual environment.
> Therefore, I installed tensorflow from source, and happened to notice
> it requires protobuf 3.0.0+.
> 
> However, Ubuntu comes with protobuf 2.6.1 from the repository by
> default, and thoroughly removing protobuf is a headache.
> 
> Actually, I put two questions on the Ubuntu Forums; please refer to:
> https://ubuntuforums.org/showthread.php?t=2331702
> https://ubuntuforums.org/showthread.php?t=2331071
> 
> 
> Although now I can build my code based on tensorflow within a virtual
> environment, I still prefer building everything WITHOUT a virtualenv
> anyway. Can anybody give me a hand please?
> 
> 
> Thank you very much...
> 
> 
> Cheers
> 
> -- 
> 
> Pei JIA, Ph.D.
> 
> Email: jp4w...@gmail.com
> cell in Canada:    +1 778-863-5816
> cell in China: +86 186-8244-3503
> 
> Welcome to Vision Open
> http://www.visionopen.com


Re: The Simple Things in Life

2016-07-19 Thread John Moser
On Tue, 2016-07-19 at 15:01 -0700, Markus Lankeit wrote:
> Adding my $0.02...
> 
> If you pick "samba file server" during install, libnss-winbind and
> libpam-winbind are not installed by default.  It took me a long time
> to track down why in 16.04 I can "join" an AD domain just fine,
> but domain users get "access denied" to samba file shares.  Not sure
> of the logic behind not installing relevant packages...
> 
To be fair, configuring Samba is non-trivial, and I often think joining
a domain as a member rather than a domain controller is some incidental
feature that's a prerequisite for being a domain controller.  Samba
doesn't seem to support being a domain member very well at all, to the
point that searching on errors and asking Google how to get a Samba
domain member to authenticate to a different domain controller (because
you joined on a RWDC network and now need to authenticate against a
RODC) brings up documentation on configuring Samba as a domain
controller.

I configure Samba all the time; I have no idea how it works, and when
it breaks I'm lost.  To put this into perspective, I know how
*everything* works, and when it breaks I can project the entire
configuration and behavior and identify something I probably should
have seen before--something unfamiliar, which I haven't inspected, but
which I was able to assemble by simply throwing the state together in
my head and making myself aware that some problem exists somewhere.  I
have NO IDEA why my Linux servers can authenticate to Active Directory;
I just know I did things to PAM and nsswitch.conf and repeatedly ran a
dozen forms of net join until, despite consistently throwing errors and
failing, the server magically started authenticating.
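
For the record, the usual member-server incantation looks roughly like
this--a sketch only, with EXAMPLE.COM/EXAMPLE as placeholder realm and
workgroup:

    apt-get install samba winbind libnss-winbind libpam-winbind

    # Minimal member-server fragment for /etc/samba/smb.conf:
    #   [global]
    #   security = ads
    #   realm = EXAMPLE.COM
    #   workgroup = EXAMPLE
    #   idmap config * : backend = tdb
    #   idmap config * : range = 10000-999999

    net ads join -U Administrator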

More basically, Samba can be a plain file server without joining an AD
domain.
> Also, the whole network device naming scheme is just a fiasco...
> Before, I could have a simple template for all my systems... now
> every system requires a unique template that takes me to the HW
> level to figure out what it might be.  And this is supposed to be
> more intuitive and/or predictable than "eth0"?
>
> Thx.
>
> -ml
> 
> 
> 
> On 7/19/2016 2:48 PM, John Moser wrote:
>
>> On Tue, 2016-07-19 at 14:29 -0700, Jason Benjamin wrote:
>>
>>> I've been irritated by so many obvious shortcomings of Ubuntu this
>>> version (16.04).  So many of the most obvious fixes are easily
>>> attributed to configuration files.  I don't know if those who
>>> purchase the operating system directly from Canonical versus a
>>> download are having to deal with the same problems or are getting a
>>> superior/better operating system.  Some of my main qualms that I am
>>> unable to deal with are the theming.  Even using alternative themes
>>> most of them won't even look right as supposed.
>>>
>>> The HIBERNATION itself seems to work fine on other closely related
>>> distros (Elementary OS I tested), but Ubuntu has problems with it.
>>> AFAIK the GRUB_CMDLINE breaks this if anything, and alternatives
>>> such as TuxOnIce don't work either.  My guess is that it's Plymouth
>>> and there doesn't seem to be any clear pointer to a solution.  After
>>> desktop session saving was deprecated (or removed because of the
>>> transition from Gnome?), this seems like a serious and necessary
>>> *implementation* of desktop application saving.
>>>
>>> I've seen a lot of these blogs that suggest installing extra
>>> programs and such after the installation.  Here's mine:
>>
>> You just listed a bunch of odd things about hiding the boot process.
>>
>> I've been repeatedly distressed and confused by this hidden boot
>> process.  I've sat and waited at blank screens and splashes that
>> give no feedback, wondering if the kernel is hanging at initializing
>> a driver, trying to find network, or makin

Re: The Simple Things in Life

2016-07-19 Thread John Moser
On Tue, 2016-07-19 at 14:29 -0700, Jason Benjamin wrote:
> I've been irritated by so many obvious shortcomings of Ubuntu this
> version (16.04).  So many of the most obvious fixes are easily
> attributed to configuration files.  I don't know if those who
> purchase the operating system directly from Canonical versus a
> download are having to deal with the same problems or are getting a
> superior/better operating system.  Some of  my main qualms that I am
> unable to deal with are the theming.  Even using alternative themes
> most of them won't even look right as supposed.  
> The HIBERNATION itself seems to work fine on other closely related
> distros (Elementary OS I tested), but Ubuntu has problems with it.
>  AFAIK the GRUB_CMDLINE breaks this if anything, and alternatives
> such as TuxOnIce don't work either.  My guess is that it's Plymouth
> and there doesn't seem to be any clear pointer to a solution.  After
> desktop session saving was deprecated (or removed because of
> transition from Gnome?), this seems like a serious and necessary
> *implementation* of desktop application saving.  
> I've seen a lot of these blogs that suggest installing extra programs
> and such after the installation.  Here's mine:
You just listed a bunch of odd things about hiding the boot process.

I've been repeatedly distressed and confused by this hidden boot
process.  I've sat and waited at blank screens and splashes that give
no feedback, wondering if the kernel is hanging at initializing a
driver, trying to find network, or making decisions about a disk.

There is no standard flow which can be disrupted with a new, non-error
status message curtly explaining that something is happening and all is
well; there is a standard flow in which the machine displays a blank,
meaningless state for a fixed amount of time, and deviation in that
time by any more than a few tenths of a second gives the immediate,
gut-wrenching feeling that the system has hung during boot and is
terminally broken in some mysterious and completely-unknown manner.

What Ubuntu needs most is a simple, non-buried toggle option to show
the boot process--including displaying the bootloader, displaying the
kernel load messages, and listing which services are loading and
already-loaded during the graphical boot.  Ubuntu's best current
feature is the Recovery boot mode, aside from not having a setting to
make this the standard boot mode sans the recovery prompt.  "Blindside
the user with a confusing and meaningless boot process and terror at a
slight lag in boot time because the system may be broken" is not a good
policy for boot times longer than 1 second.

Even Android displays a count of system assemblies AOT cached during
boots after update so as to convey to the user that something is indeed
happening.
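
For reference, un-hiding most of the boot today means hand-editing
/etc/default/grub--a sketch, assuming a stock 16.04-era install
(variable names differ slightly between releases):

    cp /etc/default/grub /etc/default/grub.bak
    # Stop hiding the bootloader menu
    sed -i 's/^GRUB_HIDDEN_TIMEOUT=/#&/' /etc/default/grub
    # Drop "quiet splash" so kernel and service messages stay visible
    sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT=""/' \
        /etc/default/grub
    update-grub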


Re: Snapcraft, Snappy

2016-07-10 Thread John Moser
On Sun, 2016-07-10 at 17:11 +0200, Ralf Mardorf wrote:
> Hi,
> 
> there's an interesting counter-argument against something similar to
> snapcraft/snappy.
> 
> https://lists.archlinux.org/pipermail/arch-general/2016-July/041579.html
> 

That's the security team going off into lala land with a bunch of
overblown wargarble.

Basically, containers completely, 100% perfectly isolate software on
the system from other software execution environments.  That means the
file system, devices, network stacks (tcpdump!), and so forth are only
as reachable as they would be from another machine.

The Security team points out that a kernel-level exploit will allow you
to route around this.

They take that observation to mean that containers supply zero
security, and that a compromise in a container is a system level
compromise.

To follow that logic completely:  there's no such thing as security
anyway, because Linux has to accept a TCP packet into its network stack
to even look at it in iptables, thus any network-reachable machine is
already compromised.

The argument from the security team essentially fails to create risk
models and assess probability and severity of the compromises they
describe.  Instead of recognizing, categorizing, and accounting for
those risks, they just run around flailing their arms and scream that
the sky is falling into the face of every passer-by to whom they can
get close enough.

Whoever wrote that message isn't qualified to handle computer security
concerns.

> I guess snapcraft/snappy and anything similar could be useful, but
> indeed, IMO those are good reasons not to become too accustomed to
> this approach.
> 
> Regards,
> Ralf
> 



Re: Feature request: module [pam_limits]

2016-02-27 Thread John Moser


On 02/27/2016 04:06 PM, Ralf Mardorf wrote:
> #
> @foo   soft   nproc   20
> @foo   hard   nproc   50
> 
> Every user who is _not_ in the group "foo" simply is _not_ in
> this group; it makes no sense at all to introduce a negation of
> being in a group, since the negation is already not being a member of
> this group.


The short explanation here is "complexity and design decisions".  The
naive approach is probably to check for negations first, and skip the
line if a user is in a negated group.  The application of a rule to all
except those in a group is a form of that:  negation, all.

The bigger question is what purpose does this serve?  Limits are not so
much a security feature as an administrative resource feature, and
they're flaky as hell.  They're set by a privileged process (such as
login) and inherited by children.  That's why MySQL or Apache can start
up, set their own ulimit (as root), and then drop privileges and switch
to the mysql or www-data user and keep their limits:  no interposing
process makes any further decisions.

The general limit of "don't create 80,000 processes" stops fork bombs;
everything else is academic.  Even then, it just stops :(){ :|:& };: and
not a thread-creating perl script.
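
You can watch the inheritance behavior from any shell--nothing here is
specific to PAM:

    ulimit -u          # current soft limit on user processes
    ulimit -H -u       # the hard ceiling
    ulimit -S -u 100   # lower the soft limit; children inherit it and
                       # can raise it again, but never past the hard
                       # limit without privilege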

I use limits to keep those little boxes around and make sure my system
behaves in cases where it's being erratic.  When it's under attack,
ulimits don't really offer any considerable protection.  They're part of
a list of pointless security dogma, like running chmod go= on mount,
ping, and other setuid binaries.  Everyone wants a checklist so they can
claim they've "hardened" their system without actually bothering to
identify threats and set up things like network-level firewalling,
privilege separation, and the like.

It's the same reason people encrypt data in databases with the key
stored in the application config file or, as with Miva Merchant, in the
"AESKey" column right next to the encrypted credit card numbers:  "It's
encrypted and encryption is security!"



Re: Wishes for +Xenial

2016-01-25 Thread John Moser
I"d just like to see Jabref work (for LyX) and MonoDevelop actually able
to make an ASP.NET MVC project (install xsp-4, start MonoDevelop, create
a C# ASP.NET MVC project, and it fails; any amount of finegeling only
brings further failure.  Works on Windows; and all the missing
components are Apache licensed!).

Of course those things are Universe.

I'm happy to see the latest Docker coming.

Bitcoin is an exercise in bad economics, media fascination, and pyramid
schemes, but I get that the base users want it.

On 01/25/2016 12:52 PM, Ryein Goddard wrote:
> yeah that's true.  I'm on 14.04 using the wily kernel right now :D
> 
> On 01/25/2016 09:20 AM, Martinx - ジェームズ wrote:
>> On 23 January 2016 at 22:56, Martinx - ジェームズ
>>  wrote:
>>> Hey guys,
>>>
>>>   I'm wondering here about what package versions we'll have on Xenial
>>> 16.04...
>>>
>>>   So, I'm creating a wish list!:-P
>>>
>>>   Upgrades:
>>>
>>>   - Apt 1.2
>>>   - Linux 4.4 (long term) - confirmed
>>>   - OpenShot 2.0
>>>   - Ansible 2.0
>>>   - Docker 1.10.X - better integrated with Ubuntu and with rolling
>>> upgrades
>>> during 16.04 life cycle (like Ubuntu Cloud archive)
>>>   - QEmu 2.5 - Done - With 3D support (specially for Ubuntu Desktop
>>> on KVM)!
>>>   - Mesa 11.1 - Done
>>>   - Xen 4.6 - Done!
>>>   - DPDK 2.2 or 2.3 (new 16.04) with Xen support (for ParaVirt DomUs) -
>>> confirmed!
>>>   - Virt-Manager 1.3 and SPICE support (Python 3 deps already in)
>>>   - Libvirt 1.2.21 - Done!
>>>   - Samba 4.3 - Done!
>>>   - SSSD 1.13 - Done!
>>>   - Wireshark 2 (QT based, no GTK)
>>>   - nmap 7
>>>   - InfluxDB 0.9.6 (or newer) - Done!
>>>   - Prometheus Server 0.16 & Node Exporter (0.12)
>>>   - MariaDB 10.1.10 plus better support for it (move it for Main Repo)
>>>   - Vagrant 1.8 - Done - But fully integrated with both KVM/Libvirt and
>>> VirtualBox
>>>   - Enlightenment 0.20 & LibEFL 1.16 -
>>> https://www.enlightenment.org/download
>>> with Wayland enabled   ;-)
>>>   - Terminology 0.9.1
>>>
>>>   - blivet-gui: http://blog.vojtechtrefny.cz/blivet-gui instead of
>>> GParted ?
>>>
>>>
>>>   And why not, new packages (specially more Go projects)? Like:
>>>
>>>   - Consul
>>>   - Alertmanager (Prometheus)
>>>   - PromDash (Prometheus)
>>>   - Packer
>>>
>>> More:
>>>
>>>   - KVM VirtIO Windows Drivers as ISO but, packaged as .deb
>>>   - Xen GPLPV Windows Drivers as ISO but, packaged as .deb
>>>   - Better NodeJS Integration and NPM management
>>>   - More Ruby Gem as Deb packages
>>>
>>>   - Bitcoin/Litecoin Electrum Wallet
>>>   - Ethereum and IoT Devices?
>>>   - Bitcoin/Litecoin/Etc integrated with Ubuntu App Store!?
>>>
>>> * systemd networkd - drop ifupdown, please.
>>>
>>> LXD fully supported on OpenStack.
>>>
>>> ** OpenStack hybrid Compute Node that supports KVM and LXD side-by-side.
>>>
>>> Perfect IPv6 support! Especially for OpenStack...
>>>
>>>   Dreaming:
>>>
>>>   - All Microsoft Open Source projects, .NET (corefx, corectr, etc),
>>> Visual
>>> Studio, everything... Kidding...
>>>   - Apple Swift available
>>>
>>>
>>>   Bug fixes for:
>>>
>>>   * Make Linux Kernel more modular (very important and an easy one):
>>>   https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1475078
>>>
>>>
>>>   * ecryptfs-utils does not work with Ubuntu 14.04.3:
>>>   https://bugs.launchpad.net/ecryptfs/+bug/1328689
>>>
>>>   * ubuntu-desktop depends on iBus, which is totally broken:
>>>  
>>> https://bugs.launchpad.net/baltix/+source/unity-control-center/+bug/1365752
>>>
>>>
>>>   * NM disables accept_ra for an IPv6 connection, where it should
>>> enable it:
>>>   https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/1455967
>>>
>>> * Impossible to disable IPv6 auto, params "accept_ra & autoconf = 0"
>>> have no
>>> effect on VLAN interfaces:
>>>   https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1345847
>>>
>>> * memcached unable to bind to an ipv6 address:
>>>   https://code.google.com/p/memcached/issues/detail?id=310
>>>
>>>
>>>   - Samba related bugs:
>>>
>>> * Samba4 AD DC randomly dies (4.3 might be better, need to put it in
>>> prod
>>> and see):
>>> https://bugs.launchpad.net/ubuntu/+source/samba/+bug/1357471
>>>
>>> * Samba4, when with 2003 default level, dies when IPv6 is enabled:
>>>   https://bugzilla.samba.org/show_bug.cgi?id=10730
>>>
>>>   * Samba has a wrong "Dual-Stack" implementation (I think) - (I'll
>>> help to
>>> prepare a reproducible procedure):
>>>   https://bugzilla.samba.org/show_bug.cgi?id=10729
>>>
>>>   * Samba4 get stuck after a server reboot, if IPv6 is enabled:
>>>   https://bugs.launchpad.net/samba/+bug/1339434/
>>>
>>>
>>>   - An elegant solution for this:
>>>
>>>   * Ubuntu does not honor “ignore-hosts” proxy settings for IPv6:
>>>   https://bugs.launchpad.net/ubuntu/+source/d-conf/+bug/1295003
>>>
>>>   This problem is interesting, I think that Ubuntu should provide a
>>> way to,
>>> if the proxy is enabled, do a DNS Look up locally, before querying local
>>> 

Can we have a sunset filter effect in Unity, KDE, and Compiz

2015-11-13 Thread John Moser
I'd like to see a configuration option to uniformly reduce the blue channel
in the desktop during certain hours.  It's ugly, but people have had good
results on other OSes with fading the screen to red-orange as it nears
their bedtime, using applications such as F.lux.  This prevents the brain
from staying awake and excited, or so goes the hypothesis.
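
For what it's worth, redshift (already in the archive) does roughly this
from the command line, so the pieces exist; the latitude/longitude below
are placeholder values:

    redshift -O 3500               # one-shot: warm the screen to 3500K
    redshift -x                    # reset to normal
    # Run continuously, fading by sun position at a given location:
    redshift -l 40.7:-74.0 -t 6500:3500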

Thoughts?


Re: Window Controls on the Right Side

2015-05-18 Thread John Moser


On 05/18/2015 08:24 PM, Raphael Calvo wrote:
> As an engineer I can definitely say that it saddens me to see such a
> level of argumentation towards a client/user.

As an engineer, I can say you're vulnerable to the engineer's problem of
coming up with some complicated way of doing things.

> Thinking as an engineer focused on maximizing the happiness of all
> possible clients, I would invest time to bring customization options
> that are safe to be tweaked and very accessible instead of deeply
> hidden in the system.

...and this is the other problem:  instead of coming up with the worst
answer by overcomplication ("How do I handle a file being picked up
early during upload by a daemon watching a directory?"  "Well, we could
write a kernel module to hide the file from that daemon"), you come
up with the answer catering to the engineer audience ("How do we design
the best plastic wrench?"  "Well, you could sell a plastic epoxy allowing
the customer to design their own wrench...").

Congratulations, you managed to beat the primary engineer's problem and
instead fall to a more common-mode thinking problem.

Customization is its own engineering decision; it comes with presets,
defaults, and a configuration setting to make new outcomes if the
presets are not satisfactory.  Before you can provide customization, you
must provide *the* definitive default.

 
> For this particular subject, IMHO there is no correct answer.

Then you haven't actually asked a question.

> The correct place for the close, minimize, and maximize controls
> is where the user wants them to be, not where someone thinks is best.

That's a project management approach:  If the customer has asked for the
controls to be on the right, then they belong on the right.

We're arguing over the engineering approach:  where is the optimal place
for this crap?

> IDEA: Could we have a drop-down menu accessible by right-clicking on the
> Close-Minimize-Maximize buttons where we could choose where in the title
> bar we want them to be placed? And once this setting is applied it
> becomes valid for every window already opened and for future windows?

This opens a brand new battle about how much is too much.  Context-based
interfaces are great; but context-based interfaces with hundreds of
context options are overwhelming.  Now we must ask if this is the
highest-priority action for the context, or if we should prune it in
favor of other things to put on that context interface.

> Best Regards
>
> Raphael das Neves Calvo



Re: Window Controls on the Right Side

2015-04-29 Thread John Moser
On 04/29/2015 08:54 PM, John Moser wrote:
 
 
> On 04/29/2015 08:32 PM, Clint Byrum wrote:
>
>> The entire reason for them being on the left is to make the top-right

Actually, no, you know what?  I'm going to set decorum aside and pull a
Linus here, on everyone involved.  Not you, Clint; *everyone*.

First and foremost, the biggest red flag you'll ever find in the UI
design sphere is "Apple blahblahblah".  This statement comes out of
people who have no clue what they're talking about, so they make an
appeal to authority--typically the authority of the least-successful
product produced by the least-successful desktop computer OS manufacturer.

Folks seem to forget that Apple's OSX is the only broadly-marketed,
consumer-targeted alternative to Microsoft Windows, and is completely
trounced by them; while also conveniently forgetting that Android
devices control *four* *times* the handheld device market of iOS
(caveat:  that's by browser detection; by sales, people have purchased 7
times as many Android devices as iOS devices, and manufacturers have
shipped 8 times as many Android devices as iOS devices).  Claiming that
Apple does something a certain way should be an argument made *against*
doing something--it would still be a bad argument, but it would at least
make sense.

Apple makes a shitload of money being iTunes, Inc.

So let's set aside the pointless Apple fanboy arguments and do some history.


Back in 10.04, Ubuntu tried moving the controls to the left.  This met
with huge resistance, largely in the form of complaining, whining, and
people putting the controls back where they belong.  Now, I can't recall
who said what, but I can at least recall what I said, so we'll go with that.

What did I say?

Oh yeah.

I said most people are right-handed, and that the easiest way to tilt
your wrist or move your arm was out and away.  The top-right of your
screen is the easiest area of the screen to access--go ahead, try it.
Those of us with civil rights in Elbonia will find I'm completely
correct; lefties will find confusion, followed by the realization that
they're using the wrong hand.



A year later, in 11.04, Ubuntu released the Global Menu.  Three days
before 15.04, Ubuntu reversed a decision to disable the Global Menu by
default, after preening themselves with talk about the new Locally
Integrated Menus--i.e. pre-11.04, non-Apple menus.

Again, more bitching.  People hated on the Global Menu.  A lot.  It's
sort of a big deal:  loads of contention among users, news articles
asking if Shuttleworth is insane or just stupid, everything from
strategic trepidation to outright hostility.

The Ubuntu developers actually had an explanation for this one.  They
said it puts the menus in a consistent location, so the user won't get
lost trying to find File Edit blah blah blah Tools.

Translation:  Users are retards who have been beaten with Cricket bats
until they've sustained sufficient brain damage to soil themselves
uncontrollably, so we've put the menus somewhere we can train an Amoeba
to find consistently.

My take on the situation?  Two simple things:

First, if the window is maximized, the menu is obviously in the same
place on the screen.  If not, you have multiple windows, and it takes
*two* *mouse* *clicks* to click a menu.  With LIMs (you know, *normal*
menus), you just click File on the window; with Global Menus, you have
to click the window, then go back and click File at the top.

These days, even standard Windows 7 is so screwed up that I'm not sure
what window I've got selected; right now, on Ubuntu, the only difference
between this window and the Thunderbird main window is this window has
black title bar text and controls, while every other window on the
screen has medium-dark gray text and controls.  Back in the day, the
title bar would be an entirely different color.  You can be pretty sure
the user will have to stop and verify he's looking at the right menus
before he can click with confidence.

Second, people don't work the way Canonical has suggested.

A screen is meaningless.  Say it with me:  The screen is meaningless.
People don't know where they are on the screen.  They know they're
working on a specific window; LIMs are part of that window, and share a
consistent spatial relationship with that window.  Everything in the
window shares a specific spatial relationship with that window--mostly
with the top and left of that window.  The window may resize or move
around, but most things--including the menus and controls--share a
specific spatial relationship with the top and left of that window.

Putting the menu in the same fixed position in the workspace--the
screen--means you're moving it around.  You have a component of the
window which no longer has a fixed spatial relationship with the window,
and must be located when used.  The controls in the top-right have the
slightest disadvantage of being affected by the width of the window;
this is made up for by the fact that the user is typically well

Re: Window Controls on the Right Side

2015-04-29 Thread John Moser


On 04/29/2015 10:38 PM, Marc Deslauriers wrote:
> On 2015-04-29 10:23 PM, John Moser wrote:
>
>> I said most people are right-handed, and that the easiest way to tilt
>> your wrist or move your arm was out and away.  The top-right of your
>> screen is the easiest area of the screen to access--go ahead, try it.
>
> Right, so by that logic the close button should be as far away from that
> as possible, right? I mean, you definitely wouldn't want to hit it by
> mistake. :)

This comment is competing with my Elbonia comment for most hilarious in
this thread, and I think I may be losing.

You see my point, of course:  reaching up-left is slow and awkward;
bicycling 5 miles is faster and less exhausting than walking 2 (it takes
the same amount of energy to walk 1 mile as it does to bicycle 7 in the
same amount of time).

The menus would belong in the top-right if they weren't variable-width
and, generally, wide and complex.
> Marc.



Re: Window Controls on the Right Side

2015-04-29 Thread John Moser


On 04/29/2015 08:32 PM, Clint Byrum wrote:

 
> The entire reason for them being on the left is to make the top-right
> of the screen consequence-free for a single click.  This is to encourage
> the user to dig into the indicators and to help developers inform users
> easily in a uniform way.

You encourage people toward the indicators by highlighting them in some
way, not by moving something they're actually looking for away from
them.  You're not going to draw attention to something by moving things
actually relevant to the user away from it.

 
> Hate on it all you want, this is safer for new users, and it's almost
> a perfect copy of one of the things that is actually good about OS X.

Doesn't seem safer.  If you're talking about being able to click up at
the indicators without accidentally hitting the close button... the top
row of pixels is active for the indicators; you're just bluntly slamming
the mouse into the top of the screen to use indicators, not
precision-noodling around a bunch of small controls trying to poke things.


You know.
 
|Activities
||[X][_][0]
||File  Edit  Blah  Blah  Blah  Tools
||

The treacherous Close/File area.

 
> I've grown accustomed now, and I prefer it this way. :)
 

Well yeah, you've been trained now, and have those reflexes, and would
have to untrain them now.



Re: Window Controls on the Right Side

2015-04-29 Thread John Moser


On 04/29/2015 10:36 PM, Marc Deslauriers wrote:
> On 2015-04-29 12:42 AM, John Moser wrote:
>
>> On 04/29/2015 12:40 AM, Martinx - ジェームズ wrote:
>>
>>> I am very happy with window controls on the correct (left) side.   :-P
>>>
>>> It is close to the App's Menus
>>
>> which is why the window closes 40% of the time I try to hit File
>
> Seriously? On my 13" screen, File is about an inch away from the close
> button.

The close button was directly at the top-left here when I upgraded to 15.04.

On my 39 inch monitor, I don't believe the entire title bar is an inch
high.  :p


> I think you need a better mouse.
>
> Marc.



Re: Window Controls on the Right Side

2015-04-28 Thread John Moser


On 04/29/2015 12:40 AM, Martinx - ジェームズ wrote:
 
> I am very happy with window controls on the correct (left) side.   :-P
>
> It is close to the App's Menus

which is why the window closes 40% of the time I try to hit File

> (I don't need to travel the entire screen
> to hit those buttons) and Unity left panel.
>
> If I'm not wrong, with Ubuntu Tweak Tool you can do that; with Ubuntu
> Gnome, you can do that, for sure.

requires dconf-editor now
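
For reference, the underlying key can also be set without dconf-editor;
a sketch, assuming the GNOME/Unity schema of that era:

    # Controls on the right
    gsettings set org.gnome.desktop.wm.preferences button-layout \
        ':minimize,maximize,close'
    # Back to the left-side default
    gsettings set org.gnome.desktop.wm.preferences button-layout \
        'close,minimize,maximize:'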


> Best!



Re: irqbalance superfluous

2015-04-09 Thread John Moser
IRQbalance keeps all of the IRQ requests from backing up on a single
CPU.  It tries to balance this out in an intelligent way across all the
CPUs and, when possible, puts the IRQ processing as close to the process
as possible.

On NUMA systems, you may want numad.  I believe irqbalance exits if it
detects NUMA.
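
What it manages is each IRQ's CPU affinity mask, which you can inspect
and set by hand (IRQ 44 below is just an example number):

    cat /proc/interrupts                 # per-CPU interrupt counts
    cat /proc/irq/44/smp_affinity        # current affinity bitmask
    echo 3 > /proc/irq/44/smp_affinity   # pin IRQ 44 to CPU0+CPU1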

On 04/09/2015 08:14 AM, Istimsak Abdulbasir wrote:
> What is irqbalance and what was the reason for using it?
>
> On Apr 9, 2015 8:09 AM, Daniel J Blueman <dan...@quora.org> wrote:
 
 Checked with Vivid beta on Intel i5 hardware, and it seems interrupt
 distribution doesn't change when I boot with irqbalance running [1],
 or after purging it and rebooting [2].
 
 Finally, it can't second-guess MSI interrupt setup better than the
 APIC driver and adds an unnecessary layer of 'intelligence'. I don't
 see any case common enough to warrant deploying it by default.
 
 Anyone against removing it?
 
 Dan
 
 -- [1]
 
 root@nuc:~# cat /proc/interrupts
CPU0   CPU1   CPU2   CPU3
   0: 18  0  0  0  IR-IO-APIC-edge   
   timer
   1:  1  1  0  1  IR-IO-APIC-edge   
   i8042
   7:  9  0  0  0  IR-IO-APIC-edge
   8:  0  0  0  1  IR-IO-APIC-edge   
   rtc0
   9:  3  0  0  0 
 IR-IO-APIC-fasteoi   acpi
  12:  1  2  1  0  IR-IO-APIC-edge   
   i8042
  23: 11  1 15  6  IR-IO-APIC
 23-fasteoi   ehci_hcd:usb3
  40:  0  0  0  0  DMAR_MSI-edge 
 dmar0
  41:  0  0  0  0  DMAR_MSI-edge 
 dmar1
  42:  0  0  0  0  IR-PCI-MSI-edge   
   PCIe PME
  43:  0  0  0  0  IR-PCI-MSI-edge   
   PCIe PME
  44: 87 25 95 23  IR-PCI-MSI-edge   
   xhci_hcd
  45:   2740   4911   4405  14870  IR-PCI-MSI-edge
 :00:1f.2
  46: 10 10   2504  3  IR-PCI-MSI-edge   
   eth0
  47: 12  1  1  0  IR-PCI-MSI-edge   
   mei_me
  48:  8 14  3  1  IR-PCI-MSI-edge   
   iwlwifi
  49:496594290331  IR-PCI-MSI-edge   
   i915
  50:197 80  4 39  IR-PCI-MSI-edge
 snd_hda_intel
  51:721  0  0 28  IR-PCI-MSI-edge
 snd_hda_intel
 NMI:  1  1  0  1   Non-maskable
 interrupts
 LOC:  30046  19983  13391  18318   Local timer
 interrupts
 SPU:  0  0  0  0   Spurious interrupts
 PMI:  1  1  0  1   Performance
 monitoring interrupts
 IWI:  0  0  0  0   IRQ work interrupts
 RTR:  2  0  0  0   APIC ICR read retries
 RES:943   1130618693   Rescheduling
 interrupts
 CAL:840699674747   Function call
 interrupts
 TLB:198205286   1241   TLB shootdowns
 TRM:  0  0  0  0   Thermal event
 interrupts
 THR:  0  0  0  0   Threshold APIC
 interrupts
 MCE:  0  0  0  0   Machine check
 exceptions
 MCP:  4  4  4  4   Machine check polls
 HYP:  0  0  0  0   Hypervisor callback
 interrupts
 ERR:  9
 MIS:  0
 
 -- [2]
 
 root@nuc:~# cat /proc/interrupts
CPU0   CPU1   CPU2   CPU3
   0: 20  0  0  0  IR-IO-APIC-edge   
   timer
   1:  1  0  1  1  IR-IO-APIC-edge   
   i8042
   7: 11  0  0  0  IR-IO-APIC-edge
   8:  0  0  0  1  IR-IO-APIC-edge   
   rtc0
   9:  3  0  0  0 
 IR-IO-APIC-fasteoi   acpi
  12:  3  0  1  0  IR-IO-APIC-edge   
   i8042
  23: 15 10 10  0  IR-IO-APIC
 23-fasteoi   ehci_hcd:usb3
  40:  0  0  0  0  DMAR_MSI-edge 
 dmar0
  41:  0  0  0  0  DMAR_MSI-edge 
 dmar1
  42:  0  0  0  0  IR-PCI-MSI-edge   
   PCIe PME
  43:  0  0  0  0  IR-PCI-MSI-edge   
  

Re: Change what is considered by apt-get as major amount of disk space

2015-03-25 Thread John Moser
Why?

On multiple CentOS systems installed from the same CD using the same
parameters, yum will either list updates and ask Y/n or just
update/install stuff without confirmation; this irritates me, because
sometimes I see updates I want to run in a separate batch for risk
management, or I see that a kernel update is going to add 15MB to /boot
which has 9MB free and I need to uninstall old kernels.

apt is going to ask me whether to continue or not anyway; if I don't
want to be asked, I'll use apt-get -y.  It may as well be informative
about it, listing all packages to be installed, removed, and updated, as
well as the disk space impact.
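
For completeness, the difference between skipping the prompt for one run
and skipping it permanently ("foo" is a placeholder package):

    apt-get -y install foo
    # Permanent, via apt.conf:
    echo 'APT::Get::Assume-Yes "true";' > /etc/apt/apt.conf.d/90assume-yes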

What do you want apt to say instead?  "After this operation, some voodoo
will happen that you don't need to worry about!  Continue?  [Y/n]"

On 03/25/2015 08:42 AM, Mateusz Konieczny wrote:
> apt-get will ask the user about using significant amounts of disk
> space, but it seems that what is considered significant needs
> adjustment; for me, a major amount of disk space is about 200MB, but
> apt-get will ask questions like "After this operation, 9805 kB of
> additional disk space will be used. Do you want to continue? [Y/n]".
>
> I propose increasing this threshold to 50MB.
>
> I know that I can use parameters (with aliases or apt.conf I can even
> make it permanent) to completely skip this check.
>
> I am not aware of any way that allows the user to configure this
> threshold (see
> http://askubuntu.com/questions/596691/how-can-i-stop-apt-get-from-asking-about-using-minor-amounts-of-additional-disk
> ).
>
> Note: I was directed to this mailing list by
> https://help.ubuntu.com/community/ReportingBugs
> ("Discussing features and existing policy") linked from
> https://bugs.launchpad.net/ubuntu/



Re: Change what is considered by apt-get as major amount of disk space

2015-03-25 Thread John Moser
You want apt-get to go from

    This will take 9,205kB disk space.
    Continue? [Y/n]

to

    Continue? [Y/n]

??

That's pointless.  That would create multiple layouts, requiring me to
assess what the output format is before reading the information from it.
It would slow down my assessment of what Apt is telling me, because the
information would be in a different format.

You may as well randomly scramble the menu order on Firefox while you're
at it.

On 03/25/2015 09:06 AM, Mateusz Konieczny wrote:
> No, I want to stop apt-get from asking dumb questions.  Asking
> whether it is OK to use 1/25000 of the disk is a dumb question that
> should not be asked.
>
> At the same time it makes sense to ask for confirmation about
> installing 2GB of new programs, so -y is not a proper solution in
> that case (also, I mentioned "I know that I can use parameters
> (with aliases or apt.conf I can even make it permanent) to
> completely skip this check.").
>
> 2015-03-25 13:47 GMT+01:00 John Moser <john.r.mo...@gmail.com>:
>
>> Why?
>>
>> On multiple CentOS systems installed from the same CD using the same
>> parameters, yum will either list updates and ask Y/n or just
>> update/install stuff without confirmation; this irritates me, because
>> sometimes I see updates I want to run in a separate batch for risk
>> management, or I see that a kernel update is going to add 15MB to /boot
>> which has 9MB free and I need to uninstall old kernels.
>>
>> apt is going to ask me whether to continue or not anyway; if I don't
>> want to be asked, I'll use apt-get -y.  It may as well be informative
>> about it, listing all packages to be installed, removed, and updated, as
>> well as the disk space impact.
>>
>> What do you want apt to say instead?  "After this operation, some voodoo
>> will happen that you don't need to worry about!  Continue?  [Y/n]"
>>
>> On 03/25/2015 08:42 AM, Mateusz Konieczny wrote:
>>> apt-get will ask the user about using significant amounts of disk
>>> space, but it seems that what is considered significant needs
>>> adjustment; for me, a major amount of disk space is about 200MB, but
>>> apt-get will ask questions like "After this operation, 9805 kB of
>>> additional disk space will be used. Do you want to continue? [Y/n]".
>>>
>>> I propose increasing this threshold to 50MB.
>>>
>>> I know that I can use parameters (with aliases or apt.conf I can even
>>> make it permanent) to completely skip this check.
>>>
>>> I am not aware of any way that allows the user to configure this
>>> threshold (see
>>> http://askubuntu.com/questions/596691/how-can-i-stop-apt-get-from-asking-about-using-minor-amounts-of-additional-disk
>>> ).
>>>
>>> Note: I was directed to this mailing list by
>>> https://help.ubuntu.com/community/ReportingBugs
>>> ("Discussing features and existing policy") linked from
>>> https://bugs.launchpad.net/ubuntu/



Re: A Jumble of Lines - Because that's what systemD is.

2014-11-05 Thread John Moser


On 11/05/2014 12:27 PM, Chateau DuBlanc wrote:
> A Jumble of Lines - Because that's what systemD is.
> More code .: better
>
> youtu.be/ACDi1YOcupk
>
> SystemD introduces (further) systemic vulnerabilities into Gnu/Linux.
> Here is a song feeling that situation out.

Take this immaturity elsewhere.  I thought I was being spammed by
YouTube again.

 (C) Gnu GPL v2
 

GPL is not an appropriate media license; it is a license for source
code.  This usage shows you like to make statements about things you
don't understand; thus your assessments of systemd are immediately
suspect, and readily assumed misguided.

In other words:  you obviously don't know what you're talking about, and
your input on any matter is valueless.

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: A Jumble of Lines - Because that's what systemD is.

2014-11-05 Thread John Moser

On 11/05/2014 01:01 PM, Castle OfWhite wrote:
 It's licensed elsewhere under CC-BY-SA as well, just for such an occasion.
 The GPL is a fine media license when said media is a part of a program
 (game etc). In such cases Judges usually see the entire thing as a whole
 rather than a collection of parts.
 

That's a weak argument.  It's an appeal to authority and to tradition.

The GPL directly describes considerations for linking and source code,
as well as program executable modules.  While it makes some sense to
include resources in the definition of the program in general,
extracting the media from the program immediately creates confusion
about what exactly was violated; the only sound interpretation is that
distributing the media with the program is allowed, while the media on
its own is not under any permissive license and thus distributing it
separately is a copyright violation.

 I do know what I'm doing, and you show your ignorance :).
 I've been writing programs and making free/opensource media
 from before the CC licenses were created.
 
 Back then we just licensed our stuff under the GPL, the BSD license, or
 the gnu FDL. (or all of them) Worked fine. People understood.
 
 SO FUCK YOU!
 
 Go fuck yourself you pro-feminist pro-systemd piece of shit.
 (I remember when there weren't feminists in free/opensource too)
 (nor better-than-thou systemd loving assholes)
 
 I hope Mr Cook is edged out because he supports your beliefs
 (and that is leading to a ban in russia of apple products)
 You people need to be fought back against. You've ruined the
 world for regular men.
 

Okay, civilized conversation has broken down.


-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


NFS and fscache always crashes

2014-07-10 Thread John Moser
Has anyone ever gotten cachefilesd and fsc to actually work?

On any combination of servers (glusterfs, CentOS 6, Ubuntu 14.04) and
clients (CentOS 6, Ubuntu 14.04), an NFS mount as simple as nfs4 with
option 'fsc' always locks up the system.  You can do a few things; but as
soon as you get to any level of activity (try running an rsync against it,
or using it for /home and logging in/running Chromium/etc.), the client
locks up hard without even throwing a panic.
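
For anyone who wants to reproduce it, this is roughly the setup (server
name is a stand-in; on Ubuntu the cache daemon also wants RUN=yes in
/etc/default/cachefilesd, if memory serves):

  $ sudo apt-get install nfs-common cachefilesd
  $ sudo sed -i 's/^#RUN=yes/RUN=yes/' /etc/default/cachefilesd
  $ sudo service cachefilesd start
  $ sudo mount -t nfs4 -o fsc nfs-01:/export/home /mnt/home
  $ rsync -a /mnt/home/ /tmp/scratch/   # enough load to hang the client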

I'm using 64-bit Ubuntu 14.04 client, running inside VMware ESXi 5.5 here.
-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Increase default nofile ulimit

2014-06-09 Thread John Moser
On 06/09/2014 07:10 AM, Robie Basak wrote:
 AIUI, there are security implications for raising this limit system-wide
 by default, since applications that use select() are often broken and
 will become vulnerable with a higher limit.

 See
 https://lists.ubuntu.com/archives/ubuntu-devel/2010-September/031446.html

 for the previous discussion.
That looks like a glibc bug from 2010.  Is that still relevant?  If so,
why has this not been fixed?

The simple fix is to replace the 1024 spec with the result of
getrlimit() for the hard limit; however, Linux supplies a non-POSIX
function, prlimit(), to raise the hard limit of an arbitrary process, so
even the hard limit is not a ceiling.  Likewise, the hard limit may be
excessively large, and sizing a buffer to it would waste memory.

I am certain the glibc developers are competent to dynamically grow the
buffer when full, and could write such code within a four year time
span.  Whether they have or not is a different matter, but ... that's
the question.  Have they?
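
For completeness, util-linux ships a prlimit(1) wrapper around that
call, so raising the limits of a running process is a one-liner (the PID
and values here are only examples):

  $ prlimit --pid 1234 --nofile=32767:65535   # set soft:hard
  $ prlimit --pid 1234 --nofile               # verify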


-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Increase default nofile ulimit

2014-06-08 Thread John Moser
The default nofile ulimit is 1024 soft, 4096 hard.

As you know, the soft limit is *the* limit.  A hard limit specifies how
high a process may increase the ulimit; many processes don't attempt
this.  Apache on CentOS 6, for example, will easily run over the 1024
soft limit after just 55 server side includes (not attempted on Ubuntu),
and does not attempt to raise the ulimit to the 4096 maximum.  I have
had some issues with Web browsers running out of open files because of
sites with keepalive sockets (websockets), cache, plug-ins using local
storage in crazy ways, and so on.

Perhaps it is prudent to set limits in /etc/security/limits.conf by
default to a higher value, such as 32767 soft and 65535 hard for open
files.  The original limits came about on 32-bit systems with less than
16GB of RAM and 4TB of hard drive space.
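
Concretely, the proposal is two lines in /etc/security/limits.conf
(values as above, and open to tuning):

  # /etc/security/limits.conf
  *    soft    nofile    32767
  *    hard    nofile    65535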

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: core unix command broken in latest updates

2014-05-13 Thread John Moser


On 05/13/2014 01:04 PM, Dale Amon wrote:
 It just seems strange that something like this could slip 
 past in a set of updates to a package... but the dh
 command is not working after the last security update I
 did via dselect.
 



dh is not a core Unix command; it is part of debhelper, a Debian
packaging tool.

Strongly suggested reading:

http://www.catb.org/esr/faqs/smart-questions.html

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Those extra packages on the live CD

2014-02-17 Thread John Moser


On 02/17/2014 04:26 PM, Tong Sun wrote:
 Hi, 
 
 I have never taken a closer look at what's inside the Lubuntu CD, until
 now when I discovered that there is bunch of packages on the CD that is
 not packed in the filesystem.squashfs file. There are *quite* a few of
 them (ref 1).
 
 What's the purpose of having those packages loose on the disk instead of
 installing them and have them in compressed squashfs file? 
 

deb files are compressed, now with LZMA2 (xz) instead of the old gzip.
There is no imperative to pre-install packages into a live system;
leaving them loose just saves time, since the base image can be copied
directly to disk.  Naturally, packages which aren't needed in every
install but are needed in common configurations are included in the pool.

What bugs me more is the uninstallation of live-system packages after
install.  The squashfs is a liveCD installation: it's unpacked to disk
and then fixed up into a fixed installation.  I've never understood why
it's not a base squashfs, with a squashfs of the liveCD modifications
union-mounted on top, and a tmpfs union-mounted on top of that.  That
would shave 5 minutes off an installation which takes 15 minutes anyway.

 Thanks
 
 Tong
 
 ref 1:
 
 package list:
 
 ./pool
 ./pool/main
 ./pool/main/b
 ./pool/main/b/build-essential
 ./pool/main/b/build-essential/build-essential_11.6ubuntu5_amd64.deb
 ./pool/main/d
 ./pool/main/d/dpkg
 ./pool/main/d/dpkg/dpkg-dev_1.16.12ubuntu1_all.deb
 ./pool/main/e
 ./pool/main/e/eglibc
 ./pool/main/e/eglibc/libc-dev-bin_2.17-93ubuntu4_amd64.deb
 ./pool/main/e/eglibc/libc6-dev_2.17-93ubuntu4_amd64.deb
 ./pool/main/f
 ./pool/main/f/fakeroot
 ./pool/main/f/fakeroot/fakeroot_1.20-1_amd64.deb
 ./pool/main/g
 ./pool/main/g/gcc-4.8
 ./pool/main/g/gcc-4.8/g++-4.8_4.8.1-10ubuntu8_amd64.deb
 ./pool/main/g/gcc-4.8/gcc-4.8_4.8.1-10ubuntu8_amd64.deb
 ./pool/main/g/gcc-4.8/libasan0_4.8.1-10ubuntu8_amd64.deb
 ./pool/main/g/gcc-4.8/libatomic1_4.8.1-10ubuntu8_amd64.deb
 ./pool/main/g/gcc-4.8/libgcc-4.8-dev_4.8.1-10ubuntu8_amd64.deb
 ./pool/main/g/gcc-4.8/libitm1_4.8.1-10ubuntu8_amd64.deb
 ./pool/main/g/gcc-4.8/libstdc++-4.8-dev_4.8.1-10ubuntu8_amd64.deb
 ./pool/main/g/gcc-4.8/libtsan0_4.8.1-10ubuntu8_amd64.deb
 ./pool/main/g/gcc-defaults
 ./pool/main/g/gcc-defaults/g++_4.8.1-2ubuntu3_amd64.deb
 ./pool/main/g/gcc-defaults/gcc_4.8.1-2ubuntu3_amd64.deb
 ./pool/main/l
 ./pool/main/l/linux
 ./pool/main/l/linux/linux-libc-dev_3.11.0-12.19_amd64.deb
 ./pool/main/l/lupin
 ./pool/main/l/lupin/lupin-support_0.54_amd64.deb
 ./pool/main/liba
 ./pool/main/liba/libalgorithm-diff-perl
 ./pool/main/liba/libalgorithm-diff-perl/libalgorithm-diff-perl_1.19.02-3_all.deb
 ./pool/main/liba/libalgorithm-diff-xs-perl
 ./pool/main/liba/libalgorithm-diff-xs-perl/libalgorithm-diff-xs-perl_0.04-2build3_amd64.deb
 ./pool/main/liba/libalgorithm-merge-perl
 ./pool/main/liba/libalgorithm-merge-perl/libalgorithm-merge-perl_0.08-2_all.deb
 ./pool/main/m
 ./pool/main/m/manpages
 ./pool/main/m/manpages/manpages-dev_3.54-1ubuntu1_all.deb
 ./pool/main/m/mouseemu
 ./pool/main/m/mouseemu/mouseemu_0.16-0ubuntu9_amd64.deb
 ./pool/main/u
 ./pool/main/u/ubiquity
 ./pool/main/u/ubiquity/oem-config-gtk_2.15.26_all.deb
 ./pool/main/u/ubiquity/oem-config_2.15.26_all.deb
 ./pool/main/u/user-setup
 ./pool/main/u/user-setup/user-setup_1.48ubuntu1_all.deb
 ./pool/multiverse
 ./pool/multiverse/d
 ./pool/multiverse/d/drdsl
 ./pool/multiverse/d/drdsl/drdsl_1.2.0-1build1_amd64.deb
 ./pool/universe
 ./pool/universe/c
 ./pool/universe/c/caspar
 ./pool/universe/c/caspar/caspar_20120530-1_all.deb
 ./pool/universe/i
 ./pool/universe/i/isdnutils
 ./pool/universe/i/isdnutils/capiutils_3.12.20071127-0ubuntu11_amd64.deb
 ./pool/universe/i/isdnutils/isdnutils-base_3.12.20071127-0ubuntu11_amd64.deb
 ./pool/universe/i/isdnutils/isdnutils-xtools_3.12.20071127-0ubuntu11_amd64.deb
 ./pool/universe/i/isdnutils/libcapi20-3_3.12.20071127-0ubuntu11_amd64.deb
 ./pool/universe/i/isdnutils/libcapi20-dev_3.12.20071127-0ubuntu11_amd64.deb
 ./pool/universe/i/isdnutils/pppdcapiplugin_3.12.20071127-0ubuntu11_amd64.deb
 
 
 
 
 

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


DT_RUNPATH settings for all libraries

2014-01-16 Thread John Moser
Can we move to a DT_RUNPATH scheme for all libraries as below?

COMPATIBILITY

This change alone does almost nothing:  with the system files arranged
as they are now, there is no change in how linking works.

ADVANTAGE

This change allows us to switch around libraries and to use different
versions of the same library.  This allows for increased compatibility.

For example:  every version of libfoo.so.2 should be
compatible--libfoo.so.2.15 should link properly with a program compiled
for libfoo.so.2.0.  On occasion and for various reasons, this
doesn't happen; it becomes useful to have different versions of the same
library installed, yet they conflict.  We have conflicting lib* packages.



DESCRIPTION

On load, ld.so links dynamic libraries in the following order:

1.  DT_RPATH if DT_RUNPATH does not exist
2.  LD_LIBRARY_PATH environment if not SUID/SGID
3.  DT_RUNPATH
4.  ld.so.cache file from ldconfig
5.  /lib and then /usr/lib

This means that a binary (a main executable file, including a shared
object that may execute as a main executable file) specifying dynamic
linking to libfoo.so.2 will currently link against the first one of the
below:

1. $LD_LIBRARY_PATH/libfoo.so.2
2. The library specified in ld.so.cache
3. /lib/libfoo.so.2
4. /usr/lib/libfoo.so.2

A binary with DT_RPATH set will pre-empt $LD_LIBRARY_PATH, which breaks
that entire functionality.  We don't want that.

A binary with DT_RUNPATH set will insert a new step.  If DT_RUNPATH
contains /var/lib/cat1:/var/lib/cat2 then the above checks the
following instead:

1. $LD_LIBRARY_PATH/libfoo.so.2
2. /var/lib/cat1/libfoo.so.2
3. /var/lib/cat2/libfoo.so.2
4. The library specified in ld.so.cache
5. /lib/libfoo.so.2
6. /usr/lib/libfoo.so.2

In this way, placing a different library at (2) or (3), or a symlink to
such, will cause that library to load instead.
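
A sketch with made-up names: given a binary ./prog carrying that
DT_RUNPATH, the symlink wins without touching ld.so.cache or /usr/lib:

  $ mkdir -p /var/lib/cat1
  $ ln -s /opt/libfoo-compat/libfoo.so.2 /var/lib/cat1/libfoo.so.2
  $ ldd ./prog | grep libfoo
        libfoo.so.2 => /var/lib/cat1/libfoo.so.2 (0x00007f...)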



PROPOSAL

Section 5.1 of the Filesystem Hierarchy Standard v2.2 ends in the
following stipulation:

Applications must generally not add directories to the top level
of /var. Such directories should only be added if they have some
system-wide implication, and in consultation with the FHS mailing
list.

I have investigated /var/lib, /var/cache, and /var/run and determined
that none of these is particularly suitable for this proposal.  /var/lib
looks close; however, the system is managing these contents without
concern for what the application expects to see there.  /var/run is for
the system state since boot, not persistent.  /var/cache is just the
wrong tool for this application.

Leveraging the above, I propose that the DT_RUNPATH of each application
be set as follows:

/var/sys/$PACKAGE/lib:/var/sys/current-system/lib

For x86_64:

/var/sys/$PACKAGE/lib64:/var/sys/current-system/lib64
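
At build time this is just linker flags; --enable-new-dtags is what
makes ld emit DT_RUNPATH rather than the legacy DT_RPATH (the package
name 'hello' is hypothetical):

  $ gcc -o hello hello.c -Wl,--enable-new-dtags \
        -Wl,-rpath,'/var/sys/hello/lib:/var/sys/current-system/lib'
  $ readelf -d hello | grep RUNPATH
   (RUNPATH)  Library runpath: [/var/sys/hello/lib:/var/sys/current-system/lib]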


In the future, it may become possible to install conflicting libraries
to support binaries mutually non-conflicting save for dependencies on
conflicting libraries.  To do so, a symlink to the conflicting library
would be installed in the proper /var/sys/$PACKAGE/lib[64] directory.

The ability to do this depends on this proposal.  It also depends on
rewiring dpkg and apt to recognize and manage conflicts as alternatives,
or to package conflicting libraries as alternatives and improve the
dpkg-alternatives system to recognize when the majority of installed
packages in a system rely on package X and a minority rely on package Y.
 After all, somehow you need to decide which is the main package and
which is installed for compatibility, and for what, and how to handle
that.

The change proposed is harmless, non-standards-breaking, and compatible
with Debian, Ubuntu, Mint, and ELF-based distributions in general.  It
is heavily influenced by NixOS, although the particular mechanism comes
from a side project I never got very far with (I don't like how NixOS
does it).  It is expandable to achieve similar goals to NixOS by setting
DT_RUNPATH to:

/var/sys/$PACKAGE/$VERSION/lib:/var/sys/$PACKAGE/default/lib:/var/sys/current-system/lib

('default' is of course a misnomer; it's not straight 'lib' because you
may also mess with $PATH and put a bin and sbin in there, but that's a
big stretch)

Other ways to achieve the main goals of NixOS--running, e.g., postgresql
7, 8, 9, and 10 at the same time--would be to use lxc, which is probably
the right way to do that.  Between that and other factors, I haven't
proposed that.


Thoughts?

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Qubes-OS

2014-01-08 Thread John Moser
http://qubes-os.org/trac/wiki/QubesArchitecture

This looks interesting.  Can we learn anything from this?
-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Qubes-OS

2014-01-08 Thread John Moser
On 01/08/2014 11:08 AM, Anca Emanuel wrote:
 No, until you demo something useful.
Well, they have a large amount of stuff showing how they've demonstrated
VM isolation under a paravirtualizing hypervisor to separate out
security zones on a single system.  X11 is in one VM, some user
applications are in another VM, other user applications have their own VM...

That's not entirely useful unless:

1) the VMs are substantially isolated--having (write) access to network,
disk, and files is basically un-isolated; and
2) You can't accomplish the same at the kernel level.

If it were, say, Minix, I'd say that a microkernel can easily apply its
own security policies--essentially that's what Xen is, with full
operating systems running as OS services; a uK would accomplish the same
by having separate disk/FS/network services for different security
domains.  But we're not moving to Minix and we're not rewriting Linux as
a microkernel; thus the concern of "what if you hacked the kernel?" is
real.

The other concern is just how isolated are multi-VMs?  The only real
advantage is a kernel exploit doesn't show you the full memory space of
the operating system.  I guess that means your browser that's doing
banking has memory mapped to domBankStuff that's not accessible by the
kernel in domUntrustedBrowsing at all.  But as far as system compromise
goes, what kind of write access do you have?  Can you write files to
/home, shared across domains?  Or what?  Do you have different /home
directories?

It's an interesting concept.  The question, "Can we learn anything from
this?", addresses the many questions starting with "Does this do anything
useful?" and "How useful is that?" and then moving on to "Can we
incorporate this and leverage it to any benefit?"

There's a reason why I don't just show up drooling over the isolation
and security that virtualization provides:  it sure does provide that
when running 4 different server OSes on one host, but you start reducing
the benefits when you break down the isolation.  People's holy-grail
ideal of "we'll run a browser in this VM, and it can save files to your
main home directory" is really pointless:  if it can do that, why not
just run it with all the other stuff?  It obviously has compromising
access to your stuff.

So the question comes up:  can we learn anything from this?  Even if we
inspect it and come out with just the realization that all the
"security" provided in this model is illusory, that's something.  I
mean hell, even Microsoft created a bunch of chatter talking about how
Vista was going to let you use Hyper-V to run some software in a secure
VM instead of directly, on the same model.  Sobering up to the
realization that it's either A) a good idea or B) completely stupid and
pointless would be enlightening.



 On Wed, Jan 8, 2014 at 6:02 PM, John Moser john.r.mo...@gmail.com wrote:
 http://qubes-os.org/trac/wiki/QubesArchitecture

 This looks interesting.  Can we learn anything from this?




-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


ffmpeg vs libav: Please clarify the situation

2013-05-22 Thread John Moser
I've stumbled across this:

http://blog.pkh.me/p/13-the-ffmpeg-libav-situation.html

I did some more digging and got the same from the other side.  The short
version is they all hate each other and they each claim that they're
merging perfect, well-reviewed, properly-designed code and the other side
is merging piles of crap and refusing to play nice.

Okay so:

 - libav is garbage, full of hacks, loaded with massive security concerns,
breaks things for existing users, and is only used by Ubuntu because it's
in Debian and/or because some key Ubuntu/Debian developers are also libav
developers.

 - ffmpeg is old garbage, run by idiots, filled with useless crap to make
it look good, and loaded with garbage code that undergoes no review.  Plus
they steal from libav without attribution.

 - libav gets a lot more developer attention, reviews everything, and
merges everything that goes into ffmpeg.  They're careful to fix anything
broken and ugly.

 - ffmpeg has grown much faster than libav, has better code review, and
carries a lot more features without breaking things.

These claims can't all be true at once.  Something is wrong here.

Will somebody please explain what's going on?
-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: PIE on 64bit

2013-04-19 Thread John Moser

On 04/19/2013 08:25 AM, Matthias Klose wrote:

Am 18.04.2013 20:25, schrieb John Moser:

Meant to go to list
On Apr 18, 2013 2:15 PM, John Moser john.r.mo...@gmail.com wrote:


On Apr 18, 2013 2:07 PM, Insanity Bit colintre...@gmail.com wrote:

On 64bit multiple services (pulseaudio, rsyslogd, many others) are

shipping without Position Independent Code. On 32bit there is a potential
performance hit for startup time... but there shouldn't be any performance
hit (or negligible) on 64bit.
There is a continuous performance hit of under 1% without
-fomit-frame-pointer and under 6% with -fomit-frame-pointer on IA-32.  The
impact is statistically insignificant (I got 0.002% +/- 0.5%) on x86-64.

The performance hit on IA-32 only applies to main executable code because
library code is PIC already.  This accounts for under 2% runtime, except in
X where it used to be 5%.  That makes the overall impact 2% of 6% or
0.12%--which is non-existent if your CPU is ever at less than 99.88% load
because you would swiftly catch up.

In other words:  there is NO PERFORMANCE HIT for PIE in any
non-laboratory, non-theoretical situation.  (Theo de Raadt argued this with
me once, using the term very expensive a lot.  I built two identical
Gentoo boxes and profiled them both extensively with oprofile.  It is
exactly a theoretical cost, and the performance concerns come from people
who have no clue what the execution flow of modern software looks like)

I'm tired of repeating that there *is* a performance penalty.  Building the python
interpreters with -fPIE results in about 15% slower benchmarks.  Building GCC
with -fPIE slows down the build times by 10-20%.

So maybe you want to have a python interpreter with -fPIE, accepting this
performance penalty, and gaining some security?  But what else do you gain by
building GCC with -fPIE besides forcing longer build times on developers?
On x86-64, PIC is handled natively and fast via additional registers,
while non-PIC actually carries a higher penalty to load and execute
(more RAM usage, hence occasional page faults to read from swap, since
the main executable's .text is not shared).


What are your Python benchmarks?  Loading/unloading a program?  Most of 
Python's modules are *in* Python.


Do you mean to indicate that building gcc with a gcc built with -fPIE is 
slower, or that building gcc with -fPIE is slower?  The first is an 
actual legitimate test; the second is making gcc itself do more work 
during build.


Who ran these benchmarks?  What do they actually measure?
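
For anyone who wants to check a specific binary rather than argue: PIE
executables are ET_DYN objects, so readelf settles it.  A sketch, with
main.c standing in for any trivial program:

  $ gcc -fPIE -pie -o main main.c
  $ readelf -h main | grep Type
    Type:  DYN (Shared object file)      # PIE
  $ readelf -h /usr/sbin/rsyslogd | grep Type
    Type:  EXEC (Executable file)        # not PIE, per the report above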


I don't think that -fPIE is ready to be enabled by default, but maybe we need to
think about a better or easier way to enable it. However the current method
using the hardening-wrapper seems to work fine.

   Matthias





--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: PIE on 64bit

2013-04-18 Thread John Moser
Meant to go to list
On Apr 18, 2013 2:15 PM, John Moser john.r.mo...@gmail.com wrote:


 On Apr 18, 2013 2:07 PM, Insanity Bit colintre...@gmail.com wrote:
 
  On 64bit multiple services (pulseaudio, rsyslogd, many others) are
 shipping without Position Independent Code. On 32bit there is a potential
 performance hit for startup time... but there shouldn't be any performance
 hit (or negligible) on 64bit.
 

 There is a continuous performance hit of under 1% without
 -fomit-frame-pointer and under 6% with -fomit-frame-pointer on IA-32.  The
 impact is statistically insignificant (I got 0.002% +/- 0.5%) on x86-64.

 The performance hit on IA-32 only applies to main executable code because
 library code is PIC already.  This accounts for under 2% runtime, except in
 X where it used to be 5%.  That makes the overall impact 2% of 6% or
 0.12%--which is non-existent if your CPU is ever at less than 99.88% load
 because you would swiftly catch up.

 In other words:  there is NO PERFORMANCE HIT for PIE in any
 non-laboratory, non-theoretical situation.  (Theo de Raadt argued this with
 me once, using the term very expensive a lot.  I built two identical
 Gentoo boxes and profiled them both extensively with oprofile.  It is
 exactly a theoretical cost, and the performance concerns come from people
 who have no clue what the execution flow of modern software looks like)

  PIE on x86_64 does not have the same penalties, and will eventually be
 made the default, but more testing is required.
  - https://wiki.ubuntu.com/Security/Features#pie
 
  Is there an open bug for testing? A way for users to test and give
 feedback without recompiling massive projects like pulseaudio?
 
  13.04 still doesn't ship PIE. I would expect this would be something
 you'd want to put in *before* an LTS release?
 
 
 

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Extra Pane in Nautilus

2013-04-07 Thread John Moser
Will all of you stop top-posting?  This isn't a business Exchange
server; we're not stuck in the stone age of suits who haven't learned to
communicate effectively and who think doing things right isn't
professional.


/slightly frustrated
//back to Fark with me

On 07/04/13 17:08, Thomas Novin wrote:

Thanks for the tip. Updated to Raring today and wow, how is it
possible to make something suck that bad (nautilus 3.6.3). nemo on the
other hand was great.

Delete to go one step up in the filesystem has been removed. No option
to enable it, not even via gsettings (AFAIK).. I mean.. why?

Rgds//Thomas

On Fri, Mar 29, 2013 at 11:46 PM, Frank Cheung fcuk...@gmail.com wrote:

Nemo (Nautilus fork by Linux Mint team) looks like a potential (interim)
solution...  More info here:
http://www.webupd8.org/2012/12/how-to-install-nemo-file-manager-in.html

Cheers, Frank.

On 29/03/13 22:35, Greg Williams wrote:

man that's disappointing. We so need a new File Manager. Nautilus is rapidly
going down hill. I wish it could get forked to a version that keeps all the
good stuff like Extra Pane


From: clan...@googlemail.com
Date: Fri, 29 Mar 2013 22:22:06 +
Subject: Re: Extra Pane in Nautilus
To: mttbrns...@outlook.com
CC: ubuntu-devel-discuss@lists.ubuntu.com

On 29 March 2013 22:12, Greg Williams mttbrns...@outlook.com wrote:

Why won't the Ubuntu-Developers switch to the Marlin File-Browser? Why
are
they continuing to use Nautilus with such a weakening feature base?


I understand that the problem is that Nautilus also handles the
desktop and other stuff. It is not just the file manager. There is
nothing to stop you installing Marlin of course. When I tried it,
however, it seemed rather deficient in some areas. I submitted some
bugs but there has been little response. There does not seem to be a
lot of activity on the source.

Colin




--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Puppet version bump

2013-02-11 Thread John Moser
On Mon, Feb 11, 2013 at 1:11 PM, Stig Sandbeck Mathisen s...@debian.org wrote:
 John Moser john.r.mo...@gmail.com writes:

 While we're at it, why is etckeeper stuff in the package? The
 Puppetlabs guys said because it's in Debian's package and Debian
 packagers are fruitbats, so they're imitating for compatibility.

 The etckeeper integration is a contribution from the Ubuntu packaging.
 See http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=571127

 If etckeeper is not installed, the hooks in the default configuration do
 nothing.


Interesting.  That's a more in-depth (but less amusing) explanation
than the one I received previously.

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: kexec and Grub

2013-02-10 Thread John Moser

On 02/10/2013 06:22 PM, Dmitrijs Ledkovs wrote:

Yes, it might be mine or someone else's.  There's an ANCIENT spec for
RapidReboot

Nah, I was thinking about more recent stuff which is exactly what's
proposed in the quickly reboot.

https://wiki.ubuntu.com/SessionHandling#Restart

https://wiki.ubuntu.com/StartupSettings


I wish I could un-see that second one.  I'm completely baffled as to 
what "startup software" is (I'd imagine that would be the bootloader, 
kernel, systemd, all systemd/init scripts, and in Ubuntu's case X11 as 
well, along with everything in /bin and /sbin and /lib according to how 
Unix systems are laid out--everything required for system start-up 
belongs there, while everything not required for system start-up goes 
into /usr/)


tbh repair systems are a good idea, but they belong in their own place.  
I don't care for "Here is a dialog to configure X system aspect; also, if 
your system is broken, you can fix this part from here."  Where is the 
"Troubleshoot my system" dialog?


Anyway, off-topic.


Although later deals with grub customisation bugs as well.

Regards,

Dmitrijs.




--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: kexec and Grub

2013-02-08 Thread John Moser
Ah, this is finally becoming a thing then :)  I am glad to see progress 
after a decade of having a tool written just for this.


On 02/08/2013 05:04 AM, Sergey Shnatsel Davidoff wrote:

Nice... Now can you tell grub to boot entry 1, 3, 7, etc?  Read an alternate 
grub.conf?

Rapid reboot :)

Yes, you can, with grub-reboot command. You should probably team up
with the script kitties behind Unity Reboot
(http://www.webupd8.org/2013/01/unity-reboot-launcher-to-quickly-reboot.html)
and implement a faster reboot sequence. I believe it would be
beneficial for both parties.

--
Sergey Shnatsel Davidoff




--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: kexec and Grub

2013-02-08 Thread John Moser

On 02/08/2013 02:27 PM, Dmitrijs Ledkovs wrote:

On 8 February 2013 10:04, Sergey Shnatsel Davidoff
ser...@elementaryos.org wrote:

Nice... Now can you tell grub to boot entry 1, 3, 7, etc?  Read an alternate 
grub.conf?

Rapid reboot :)

Yes, you can, with grub-reboot command. You should probably team up
with the script kitties behind Unity Reboot
(http://www.webupd8.org/2013/01/unity-reboot-launcher-to-quickly-reboot.html)
and implement a faster reboot sequence. I believe it would be
beneficial for both parties.


Wait a second. There is a whole plan and design to integrated Reboot
into in the g-s-d panel and the reboot popup.


Yes, it might be mine or someone else's.  There's an ANCIENT spec for 
RapidReboot


https://wiki.ubuntu.com/RapidReboot

It looks like revision 1 was indeed me

... in 2006.

And I was late to the party.  Again, kexec was made for "rebooting is 
slow; there are ten million server BIOSes."




We want this in ubuntu as well =)

Mpt, can you point where the designs are? Since technically the
utility exists and prof of concept it should be quick to integrate.

Regards,

Dmitrijs.




--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: kexec and Grub

2013-02-07 Thread John Moser
On Feb 6, 2013 7:12 AM, Colin Watson cjwat...@ubuntu.com wrote:

 On Tue, Feb 05, 2013 at 04:07:58PM -0500, John Moser wrote:
  Has anyone gotten Grub2 to load via Linux Kexec?  It used to be
  possible to kexec grub.exe for some reason.

 kexec can boot multiboot images, according to kexec(8); so you should be
 able to kexec core.img, which has a multiboot header.  I suggest trying
 it first on a non-production system. :-)


Nice... Now can you tell grub to boot entry 1, 3, 7, etc?  Read an
alternate grub.conf?

Rapid reboot :)
 --
 Colin Watson   [cjwat...@ubuntu.com]

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Puppet version bump

2013-02-05 Thread John Moser
On Tue, Feb 5, 2013 at 12:55 PM, Alec Warner anta...@google.com wrote:

 On Tue, Feb 5, 2013 at 4:52 AM, Robie Basak robie.ba...@canonical.com wrote:
  On Sun, Jan 27, 2013 at 11:23:37AM -0500, John Moser wrote:
  OK further research yields that Debian is not updating Sid due to
  Can we see this imported to 13.04?
 
  What would the implications be of an update to puppet 3 in the archive
  for installations using older LTS releases running older versions of
  puppet? Can an agent continue to run 2.7 and be served by a 3
  puppetmaster?
 

 As long as the server version is >= the client version, things are OK.
 If the client version is > the server version, things can go wrong
 very quickly. 'Wrong' tends to mean 'clients will fail to get
 updates'.


This is correct and important.


 
  I'm just trying to identify if there are any cases where it could be
  painful for users to find that puppet has been updated, for any
  reasonable upgrade path. Are there any complications that I haven't
  thought of, or would everything be fine?

 I run puppet on thousands of nodes. If you updated puppet in the
 middle of an LTS; I would be *pissed* as all hell.


Yes and I am absolutely *not* recommending they update the LTS.

I'm recommending the latest up-and-coming release of Ubuntu get the
new Puppet.  If you want to continue using an LTS with Puppet, you
have three choices:

1.  Keep your shop LTS.  When a new LTS comes out, upgrade your
Puppetmaster FIRST (after all staging of course), then roll out the
LTS updates to the clients.

2.  Convince Ubuntu to put the newest Puppetmaster in Backports.  I am
not advocating this either.

3.  Use the Puppetlabs repos on your LTS Puppetmaster

I have no sympathy for the use case of running your Puppetmaster as
LTS and expecting the next five years of Ubuntu releases to hold back
updating Puppet just so you can mix and match LTS server with
latest-release clients.  Among other things, this would cause an issue
where overlapping LTS (i.e. 3 years between) would require the new LTS
stay on the old Puppet, which means that Puppet never gets upgraded
since there is always an in-life LTS holding back Puppet for all
further releases when a new LTS comes out.

I do have sympathy for the use case of staying on an LTS for the whole
network.  If you use LTS Puppetmaster to administrate LTS servers,
this should not break.  Mind you if an update to Puppet comes down to
the LTS, the Puppetmaster will update; but maybe you don't WANT to
move forward--that's why you're on LTS.  Yes I understand this use
case and yes it is problematic in many ways, but it can be handled
stably at the discretion of the administrator--it's up to you to
decide to update your server to a newer version, move to a newer
Puppetmaster, update your modules and other Puppet code to work with
the newest Puppetmaster, and then perform your roll-outs.  We don't
need to throw down HERE IS PUPPET 3.1 FOR LTS ENJOY YOUR BREAKAGE at
people.

My advice to you:  If your LTS Puppetmaster isn't going to handle
Puppet 3.0 or 3.1 clients, don't upgrade your administrated servers to
Raring.  Wait for the next LTS; go Puppetlabs repos; or upgrade your
Puppetmaster to Raring first.



 
  Robie
 

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Puppet version bump

2013-02-05 Thread John Moser
On Tue, Feb 5, 2013 at 2:09 PM, Alec Warner anta...@google.com wrote:
 On Tue, Feb 5, 2013 at 10:07 AM, John Moser john.r.mo...@gmail.com wrote:
 I have no sympathy for the use case of running your Puppetmaster as
 LTS and expecting the next five years of Ubuntu releases to hold back
 updating Puppet just so you can mix and match LTS server with
 latest-release clients.  Among other things, this would cause an issue
 where overlapping LTS (i.e. 3 years between) would require the new LTS
 stay on the old Puppet, which means that Puppet never gets upgraded
 since there is always an in-life LTS holding back Puppet for all
 further releases when a new LTS comes out.

 I don't think any sane customers expect this (and I do not.) Letter
 updates (P - Q, Q - R) are when I expect changes (and pain!) But
 that is why we are on a release based OS and not a rolling release
 like Arch ;)


Oh good, then we're on the same page.

On a related note, Puppet 3.1 came out ... yesterday.  So next debate:
 3.0.2 or 3.1 into Debian experimental?  (I've been trying to get it
brought in)

3.1 did not include https://projects.puppetlabs.com/issues/16856 or I
would be lobbying heavily for 3.1 into Experimental and then directly
into Ubuntu.  As is, there are good arguments for sticking to 3.0.2 in
this scenario (notably:  stuff was deprecated in 3.0; it is GONE in
3.1, and now Ubuntu/Debian have to make a jump since next Stable will
be 2.7 for Debian and the last was 2.7 for Ubuntu.  The 2.7 -> 3.1
jump is nasty).

Alec, I'm sure you can appreciate the implications, as well as the
challenging difficulties now faced due to failure to keep up with a
fast moving target.  If it's that important, we may need to just throw
down 3.0 and 3.1 and meta-packages for a while here. This is all kinds
of badness, too:  3.0 is dead; 3.0.2 is the last and there will be no
more updates to the branch (think Apple Quicktime, you know the
drill), so we definitely don't want to support Puppet 3 for an
extended time because no bugfixes.

TBH, for Puppet in general, your task is to keep up; like Pacemaker,
Corosync, and Heartbeat, Puppet is racing towards advancement and
things are rapidly changing.  You should see the mess that is
Pacemaker/Corosync, trying with RHEL5 and RHEL6 and SuSE and Debian
gets you many different procedures and different configurations
required.  It's now somewhat stabilized, and has grown into something
awesome.  Puppet is doing that right now and pain will come.

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


kexec and Grub

2013-02-05 Thread John Moser
Has anyone gotten Grub2 to load via Linux Kexec?  It used to be
possible to kexec grub.exe for some reason.

I have been tasked tonight to reboot a very critical production server
during a short window.  It's long enough, but at the moment our big
issue is that the reboot will be somewhere between 3-5 minutes up to
18 minutes (we don't know) because server hardware does a ton of
self-check and has RAID, Video, bootrom, etc BIOS crap to go through.

Years ago, this came up and Kexec was written.

Nobody uses it.  We use it for fancy debugging but that's it.

So I propose:  We must find a method of rebooting into A) a bootloader
entry; or B) directly into Grub2 and let it boot the system.  (B)
would be less fragile, as any incorrectness in (A) will at best make
kexec fail during late-stage shutdown and at worst load the kernel
with invalid parameters and cause a panic before mounting rootfs (a
nightmare without remote console).  Loading the bootloader can only
fail outright, in which case we can go as far as re-initializing init
into an operating runlevel and come back up without a reboot (a white
hot reboot).

So:

 - Cold boot (physical power cycle)
 - Warm boot (ACPI reboot)
 - Red hot boot (drop back and reload the kernel/bootloader)
 - White hot boot (shutdown completely, then go back into a live
runlevel rather than halt or reboot)

We're attempting a red hot boot, and on soft failure coming out via a
white hot boot.
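
For the record, the red hot path with plain kexec-tools looks something
like this (in reality you'd quiesce services first):

  $ kexec -l /boot/vmlinuz-$(uname -r) \
          --initrd=/boot/initrd.img-$(uname -r) --reuse-cmdline
  $ kexec -e    # jump into the new kernel, skipping firmware entirely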

If Linux can respond to panic() by warm boot, that would be very optimal.



(Spoiler:  the next logical step would be porting freeze/thaw from
Dragonfly so you can reboot into a new kernel without closing your
desktop session.  Yes, this is totally doable.  You would not believe
the insanity that is physically possible.)

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Puppet version bump

2013-02-05 Thread John Moser
On Tue, Feb 5, 2013 at 4:07 PM, Jordon Bedwell jor...@envygeeks.com wrote:
 On Tue, Feb 5, 2013 at 3:00 PM, John Moser john.r.mo...@gmail.com wrote:
 On a related note, Puppet 3.1 came out ... yesterday.  So next debate:
  3.0.2 or 3.1 into Debian experimental?  (I've been trying to get it
 brought in)

 If it were me, I would rather fight to upgrade once, not twice.

 3.1 did not include https://projects.puppetlabs.com/issues/16856 or I
 would be lobbying heavily for 3.1 into Experimental and then directly
 into Ubuntu.  As is, there are good arguments for sticking to 3.0.2 in
 this scenario (notably:  stuff was deprecated in 3.0; it is GONE in
 3.1, and now Ubuntu/Debian have to make a jump since next Stable will
  be 2.7 for Debian and the last was 2.7 for Ubuntu.  The 2.7 -> 3.1
  jump is nasty).

 Exactly my point.  I would rather fight once to upgrade than fight once
 to upgrade and then have to fight again to figure out what the hell
 broke in the next upgrade, though most of the time it can be somewhat
 straightforward if treading carefully.  I'd rather it all fall down at
 once during a test run and be fixed than have to do those runs twice
 in the same year.

I work in a place without staging, and we desperately need it, and I
am becoming slowly more aggressive and will be making arguments after
I torch my burn down charts.

Think about that though.  No testing environment.  So much pain.

With a testing environment, massive breakage to me is just a
playground and casual Friday.

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: kexec and Grub

2013-02-05 Thread John Moser
On Tue, Feb 5, 2013 at 4:44 PM, Felix Miata mrma...@earthlink.net wrote:
 On 2013-02-05 16:07 (GMT-0500) John Moser composed:


 Has anyone gotten Grub2 to load via Linux Kexec?  It used to be
 possible to kexec grub.exe for some reason.


 This question makes me think either you haven't read the kexec man page, or
 one of misunderstands it. Why need any bootloader be involved with kexec
 usage?

Oh, I understand it.  Jumping straight to the bootloader is simply a
desirable use case.

As covered in my message, you are less error-prone going through the
bootloader.  Consider these two possible paths to solve the above
problem:

SOLUTION A:

 - Write yourself a Grub parser (maybe even something that runs grub
itself and gets it to spit out a list of boot options, a select boot
option, or the default boot option)

- Parse the data you get into viable arguments to pass to kexec for
loading the kernel, initrd, setting parameters, etc.

- Shut down the system to a halt state and call kexec as your
termination (halt, reboot, etc) command.


SOLUTION B:

 - Load Grub into kexec

 - Shut down the system into a halt state and call kexec as your
termination (halt, reboot, etc) command


Which of these looks easier?
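
Solution B would bottom out in something like the following--untested;
grub2's BIOS core.img carries a multiboot header, and the --type string
is taken from kexec's x86 loader list.  grub-reboot (with
GRUB_DEFAULT=saved) covers the "boot entry 3" case:

  $ grub-reboot 3    # pre-select menu entry 3 in grubenv
  $ kexec -l /boot/grub/i386-pc/core.img --type=multiboot-x86
  $ kexec -e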

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Puppet version bump

2013-02-05 Thread John Moser



On 02/05/2013 07:45 PM, Ryan Tandy wrote:

John Moser john.r.moser at gmail.com writes:

2.  Convince Ubuntu to put the newest Puppetmaster in Backports.  I am
not advocating this either.


Slightly off-topic, but FWIW I would be happy to see raring's puppet
(whatever version that ends up being) in precise-backports.
lucid-backports has puppet 2.7 and that made my life a LOT easier since
my puppetmaster runs precise and I am using some recent modules. Having
backports available but not installed by default is really quite nice.
Furthermore it's quite likely that at some point I'll have some clients
running a newer Ubuntu than the puppetmaster, and it would be great to
be able to support it just by upgrading puppetmaster to a backports
version.




http://apt.puppetlabs.com/

While we're at it, why is etckeeper stuff in the package?  The 
Puppetlabs guys said because it's in Debian's package and Debian 
packagers are fruitbats, so they're imitating for compatibility.


--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Puppet version bump

2013-02-05 Thread John Moser



On 02/05/2013 07:58 PM, Alec Warner wrote:

On Tue, Feb 5, 2013 at 4:50 PM, John Moser john.r.mo...@gmail.com wrote:



On 02/05/2013 07:45 PM, Ryan Tandy wrote:


John Moser john.r.moser at gmail.com writes:


2.  Convince Ubuntu to put the newest Puppetmaster in Backports.  I am
not advocating this either.



Slightly off-topic, but FWIW I would be happy to see raring's puppet
(whatever version that ends up being) in precise-backports.
lucid-backports has puppet 2.7 and that made my life a LOT easier since
my puppetmaster runs precise and I am using some recent modules. Having
backports available but not installed by default is really quite nice.
Furthermore it's quite likely that at some point I'll have some clients
running a newer Ubuntu than the puppetmaster, and it would be great to
be able to support it just by upgrading puppetmaster to a backports
version.




http://apt.puppetlabs.com/

While we're at it, why is etckeeper stuff in the package?  The Puppetlabs
guys said because it's in Debian's package and Debian packagers are
fruitbats, so they're imitating for compatibility.


I know Nigel Kirsten and Andrew Pollock, so if there is stuff wrong in
the debian packaging I am happy to chat with them.



I'm just relaying what I got from #puppet on freenode.  There seems to 
be no purpose to hooking etckeeper--at least, none that warrants hooking 
it into puppet by default.



-A







--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Puppet version bump

2013-01-27 Thread John Moser

On 01/26/2013 02:19 PM, John Moser wrote:
I'm noticing that 2.7 is still the version of Puppet in Raring; 
however, version 3.0 was released October 1, 2012, before release of 
12.04:


https://groups.google.com/forum/#!topic/puppet-users/lqmTBX9XDtw/discussion 



Does this package currently not have a maintainer, or is it just slow 
in Debian as well?


3.0 is an important release.  (Every Puppet release is an important 
release; Puppet is rather volatile.)  New features include integration 
of Hiera and deprecation of stored configurations in favor of PuppetDB 
for the same task.  Also the kick feature is deprecated in favor of 
mcollective.  Deprecated features still work, but the transition must 
be made before they become non-existent. You cannot jump from 2.7 to 
future 3.5 without pain; 3.0 still allows some things that are going 
away, but provides their replacements, so you can move smoothly.





OK, further research yields that Debian is not updating Sid due to the 
feature freeze for Testing.  However, Mathisen notes this:


On the other hand, the master packaging branch at
http://anonscm.debian.org/gitweb/?p=pkg-puppet/puppet.git;a=summary
yields a working set of puppet 3.0.2 debs, they're just not tagged with
a debian release. Feel free to use that until we can upload to sid.

Can we see this imported to 13.04?

--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Puppet version bump

2013-01-26 Thread John Moser
I'm noticing that 2.7 is still the version of Puppet in Raring; however, 
version 3.0 was released October 1, 2012, before release of 12.04:


https://groups.google.com/forum/#!topic/puppet-users/lqmTBX9XDtw/discussion

Does this package currently not have a maintainer, or is it just slow in 
Debian as well?


3.0 is an important release.  (Every Puppet release is an important 
release; Puppet is rather volatile.)  New features include integration 
of Hiera and deprecation of stored configurations in favor of PuppetDB 
for the same task.  Also the kick feature is deprecated in favor of 
mcollective.  Deprecated features still work, but the transition must be 
made before they become non-existent. You cannot jump from 2.7 to future 
3.5 without pain; 3.0 still allows some things that are going away, but 
provides their replacements, so you can move smoothly.




--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Puppet version bump

2013-01-26 Thread John Moser

On 01/26/2013 02:29 PM, Jordon Bedwell wrote:

On Sat, Jan 26, 2013 at 1:19 PM, John Moser john.r.mo...@gmail.com wrote:

I'm noticing that 2.7 is still the version of Puppet in Raring; however,
version 3.0 was released October 1, 2012, before release of 12.04:
https://groups.google.com/forum/#!topic/puppet-users/lqmTBX9XDtw/discussion

Does this package currently not have a maintainer, or is it just slow

12.04 was released on 26/4/12 not in October.  12.10 was released in

Yes, typo; I can't keep the numbers straight in my head like this.

October and Puppets release was after the feature freeze.  You will
need to wait until 13.04.  It has nothing to do with being slow, it
has to do with them either releasing before the feature freeze or
having to wait until the next release cycle.  Typically a feature


It is still 2.7 in 13.04

http://packages.ubuntu.com/raring/admin/puppet


freeze happens 1-2 months before release... so if puppet releases 3.0
in October there is no reason for it to make it into 12.10 (in that
case) because there were probably no super important security updates
that mandated an extreme exception.

No, but there is reason to immediately update it in 13.04 after dropping 
the 12.10 release.  Packages are available in their own repositories, as 
well as source debs and SRPMs.


It makes little sense for a package that was outdated before the last 
release of Ubuntu to remain in the current release, in Main, when there 
is a compelling reason to update the package.  The only interesting 
assumptions are that either the Ubuntu or the Debian maintainers (or 
both) aren't paying attention; it's not like puppet gets as much focus 
as openssh or firefox.


--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Puppet version bump

2013-01-26 Thread John Moser



On 01/26/2013 04:01 PM, Tom H wrote:

On Sat, Jan 26, 2013 at 2:19 PM, John Moser john.r.mo...@gmail.com wrote:



I'm noticing that 2.7 is still the version of Puppet in Raring; however,
version 3.0 was released October 1, 2012, before release of 12.04:


I assume that you mean 13.04.


I meant 12.10.  The point being: this stuff was out before the last 
version was released, and there should have been a bump right after 
release, since all the feature freezes come off for the new dev cycle.


(I can't keep the numbers straight; they flip-flop a lot.  These are not 
the binary digits you are looking for.)






https://groups.google.com/forum/#!topic/puppet-users/lqmTBX9XDtw/discussion

Does this package currently not have a maintainer, or is it just slow in
Debian as well?


There were two puppet 3.0.0-rcX uploads to Debian experimental in May
and they'll move to unstable, and therefore be inheritable by Ubuntu,
once Debian 7's released.



They're past -rc, it's at 3.0.2 now.  I would avoid an -rc update; at 
least 3.0.0 stable.


--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Ubuntu and derivatives (Re: Ubuntu.com Download Page)

2013-01-25 Thread John Moser


On 01/25/2013 06:17 PM, Jordon Bedwell wrote:

On Fri, Jan 25, 2013 at 5:13 PM, Allison Randal alli...@ubuntu.com wrote:

On 01/25/2013 02:38 PM, Scott Kitterman wrote:
Each flavor has a dedicated landing page: kubuntu.org, edubuntu.org,
xubuntu.org, ubuntustudio.org, mythbuntu.org, lubuntu.net. The one for
*U*buntu (with *U* for Unity) is ubuntu.com.


By that flawed and short-sighted logic how do you explain Ubuntu with
a G for GNOME up until a couple of years ago?



Mark Shuttleworth hadn't figured out what direction he wanted to take 
his megalomania in yet and was biding his time for something disruptive 
to happen so he could try to usurp GNOME's position.


(Honestly, Unity gets criticized, laughed at, and used as a primary 
example of how Ubuntu is nothing more than a new love child of Microsoft 
Embrace-Extend-Extinguish and RedHat Not-Invented-Here mentalities.  Do 
you think it would have lasted more than a week if they had rolled that 
out without major disruptive changes in GNOME a la Gnome-Shell?  People 
jumped ship as-is.)


--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Promote puppet to main?

2012-12-24 Thread John Moser
Are there any specific reasons for not promoting Puppet and/or Chef to 
main in the next release or two?  I'm leaning on Puppet because that's 
what I'm using, because it's easier to learn somehow (more books about 
it, better documentation, and just falls together); Chef, Ansible, and 
cfengine are fairly popular as well.


These tools centralize configuration.  Puppet is declarative (you tell 
it what you want the system to look like) and idempotent (you WILL get 
the same results no matter what order things are executed in, no matter 
how many times you execute them); Chef is imperative (you tell it how to 
do it); the others are unknown to me.


In Puppet specifically, a pile of stuff goes into the modules describing 
what the system should look like and which bits depend on which other 
bits.  The modules specify packages to install, configuration files 
(from files or from templates modified by parameters), services that 
must be running, and relationships (dependencies, alerts i.e. for the 
purpose of restarting services when a configuration file is changed, 
etc).  Classes in the modules may take parameters, allowing for 
interesting configuration--a Web server node could include 
nodes/web-01.example.com/vhosts/*, which are a bunch of .pp files 
importing the apache::config::vhost class with different parameters, 
thus generating a bunch of files for the vhosts configured on the server.
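
To make that concrete, here's a minimal sketch you can feed straight to
puppet apply; demo_vhost and its parameters are invented stand-ins for
the apache::config::vhost class mentioned above:

cat > /tmp/vhost-demo.pp <<'EOF'
# Hypothetical stand-in for a parameterized vhost class.
define demo_vhost($port, $docroot) {
  file { "/tmp/vhosts/${name}.conf":
    content => "Listen ${port}\nDocumentRoot ${docroot}\n",
  }
}
file { '/tmp/vhosts': ensure => directory }   # parent dir, autorequired
demo_vhost { 'web-01.example.com': port => '80', docroot => '/var/www/web-01' }
EOF
puppet apply /tmp/vhost-demo.pp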


This allows you to stand up a system instantly by pointing the Puppet 
agent at Puppetmaster.  All required packages are installed, 
configuration files are brought in, and services are started. Modifying 
any configuration files on the Puppetmaster server will produce updates 
on the given node; these updates may cause services to restart, if 
necessary.
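
In practice, bootstrapping a node amounts to roughly this (server name
invented; assumes the puppet package is installed):

puppet agent --test --server puppet.example.com
# The first run submits a certificate request; once the master signs
# it, the same command pulls and applies the node's full catalog.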


Puppet examines the remote system and makes decisions about what kind of 
configuration it needs.  It can decide if apt-get, emerge, yum, pkginst, 
or the like is needed to install a package.  It will inform the modules 
of many OS parameters including type, version, processor type, and other 
things, so that decisions can be made--for example, to use a RedHat 
specific config file, or to use a different package name on Solaris than 
on Ubuntu, or to generate the /etc/apt/sources.list.d/ files with 
'debian' or 'ubuntu' and with 'squeeze' or 'edgy' or 'precise'.
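
Those parameters come from Facter, which you can poke at directly; the
values shown are illustrative:

facter operatingsystem   # e.g. Ubuntu
facter lsbdistcodename   # e.g. precise
facter architecture      # e.g. amd64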



A TLS certificate is generated, signed by the puppet server, and placed 
on the remote server; the common name in the certificate is used as the 
node name.  Puppet can automatically generate a signed certificate for a 
new node, using its hostname as a common name, and push it to the 
agent.  It is preferable to manually generate the certificate in 
high-security settings.  Puppet has an access control list based on 
whether an agent is authenticated (i.e. presents a signed, known 
certificate) or not, which provides four of the primary security 
guarantees (Authentication, Authorization, Confidentiality, Integrity).
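
Master-side, that certificate workflow is roughly (node name invented):

puppet cert list                      # pending requests from new agents
puppet cert sign web-01.example.com   # approve a node by hand
# /etc/puppet/autosign.conf can whitelist patterns like *.example.com
# for automatic signing--convenient, but weaker, hence the manual
# preference in high-security settings.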



You can also create a subversion repository for /etc/puppet/modules and 
/etc/puppet/manifests.  Check out your modules or your manifests, make 
changes, then check them in and have subversion respond to a check-in by 
pushing an update to the Puppet server. Chef allows you to store 
Cookbooks in a git repository, so you can always roll back to a previous 
Cookbook version and apply different versions of Cookbooks to different 
servers.
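
The subversion half can be a plain post-commit hook, assuming the
repository lives on the Puppetmaster box itself (paths illustrative):

#!/bin/sh
# hooks/post-commit: refresh the Puppetmaster's working copies on commit
svn update -q /etc/puppet/modules
svn update -q /etc/puppet/manifests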


Bringing Puppet into main and giving puppet and puppetmaster their own 
Tasksel entries in Ubuntu Server would represent a significant step 
forward.  Chef is more of an unknown to me, as I had difficulty getting 
started with it; I know of a few places that do use Chef and, as well, 
Ansible and cfengine, so there is a market for all such tools.  Puppet 
seems to have the greatest mind share at the moment; and of course, as 
stated, I have my own preference for it.


There are also excellent books on Puppet, such as Pro Puppet by James 
Turnbull.


And whatnot.

Did anybody actually not know what these things are?

--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Promote puppet to main?

2012-12-24 Thread John Moser



On 12/24/2012 10:23 PM, John Moser wrote:

[Inane rambling]

Somehow I missed that puppet is ALREADY IN MAIN.

Okay, can we just get a tasksel for puppet and puppetmaster on Ubuntu 
Server then?


*hides face* *wanders off*

--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Fwd: Chef-server

2012-12-21 Thread John Moser
Gentlemen,

It appears that Chef packages are hit-and-miss in Universe in Ubuntu.  They
are also not entirely present in Debian, although present in earlier Ubuntu
installations.  See Debian:

http://packages.debian.org/search?suite=default&section=all&arch=any&searchon=names&keywords=chef-server

And Ubuntu:

http://packages.ubuntu.com/search?keywords=chef-server

Packages and source tarballs appear available from this location:

http://apt.opscode.com/pool/main/

Potentially these are appropriate for multiverse, if the Chef developers
are willing to submit them.  Some packages in Debian have DFSG designation,
so I assume some modifications are necessary for inclusion of Chef in
Universe or Main.

I am simply at a loss as to why some packages have been brought over, yet
others have not.  Perhaps the client is simply more open and thus easier
to import by policy, and so the bits needed to interact with a server are
brought in functionally but the bits needed to run a server are left out.
That would at least be a valuable effort and explain the current state of
things.  But that is simply speculation on my part.
-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Possible inclusion of zram-config on default install

2012-12-07 Thread John Moser
It's a definite win for large configurations (infrequent swapping, much 
more additional RAM), as well as for small configurations where there's 
a lot of swapping (faster than swapping to disk).


I wrote and use an init script that breaks up the zram swap into n 
devices, 1 per CPU execution thread, to fully parallelize the operation. 
If you swap 64KiB at once with 8 cores plus hyperthreading, the kernel 
will do 16 4k pages in parallel under that configuration--big, 
multi-CPU, hyperthreaded servers benefit a great deal from this (it's a 
big gain, around 30%, for parallel-friendly work like compression).
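
The gist of that script, sketched (size split and swap priority are
illustrative; requires the zram module):

#!/bin/sh
ncpus=$(getconf _NPROCESSORS_ONLN)
modprobe zram num_devices="$ncpus"
mem_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
per_dev_bytes=$(( mem_kb / 2 / ncpus * 1024 ))   # half of RAM, split per CPU
i=0
while [ "$i" -lt "$ncpus" ]; do
    echo "$per_dev_bytes" > "/sys/block/zram$i/disksize"
    mkswap "/dev/zram$i"
    swapon -p 5 "/dev/zram$i"   # higher priority than any disk swap
    i=$(( i + 1 ))
done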


Less useful in VMs, where such things should really happen at the 
hypervisor.


On 12/07/2012 05:32 PM, Fabio Pedretti wrote:

It would be nice if Ubuntu could include zram-config by default. This package
set up compressed RAM swap space and can lower RAM requirements for running and
installing Ubuntu. It should be a win for every configuration. Since kernel 3.8
the zram module is out of staging, I am using it since precise with no problem.

The bug request is here:
https://bugs.launchpad.net/ubuntu/+source/zram-config/+bug/381059



--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Possible inclusion of zram-config on default install

2012-12-07 Thread John Moser



On 12/07/2012 05:44 PM, Jordon Bedwell wrote:

-1. I am not on a netbook and even my laptop have 12gb of Ram.  It
would be nice if Ubuntu did detect your ram and decide but not force
it on people like me who aren't memory constrained.



In an abundant memory situation, the device is set up and sits idle 
consuming zero CPU resources and a few pages of RAM.  As memory is not 
constrained, the few pages of RAM are negligible.  Extremely.



On Fri, Dec 7, 2012 at 4:32 PM, Fabio Pedretti fabio@libero.it wrote:

It would be nice if Ubuntu could include zram-config by default. This package
set up compressed RAM swap space and can lower RAM requirements for running and
installing Ubuntu. It should be a win for every configuration. Since kernel 3.8
the zram module is out of staging, I am using it since precise with no problem.

The bug request is here:
https://bugs.launchpad.net/ubuntu/+source/zram-config/+bug/381059

--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss




--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


RTMP/HLS in Nginx for 13.04

2012-11-08 Thread John Moser
Can we look at adding this to nginx-extras?

http://rarut.wordpress.com/2012/06/25/hls-support-in-nginx-rtmp-16-2/

It's possibly not ready yet, but there are 5 months to go.  It's really 
simple to get into the package.



Basically I added dependencies to debian/control:

Build-Depends:  add the below
   libavcodec-dev,
   libavformat-dev

You should also add libav to the dependencies.



To get the actual module source, I snagged:

https://github.com/arut/nginx-rtmp-module/archive/master.zip

and unzipped it to debian/modules/, changing the folder to
nginx-rtmp-module (instead of nginx-rtmp-module-master)
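
As commands, that step was roughly:

wget https://github.com/arut/nginx-rtmp-module/archive/master.zip
unzip master.zip -d debian/modules/
mv debian/modules/nginx-rtmp-module-master debian/modules/nginx-rtmp-module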



Then, debian/rules, under config.status.extras, I added this to ./configure:

--add-module=$(MODULESDIR)/nginx-rtmp-module \
--add-module=$(MODULESDIR)/nginx-rtmp-module/hls \
   $(CONFIGURE_OPTS) $@

(obviously, the last line was already there).



These have to go into debian/source/include-binaries:

debian/modules/nginx-rtmp-module/doc/video_file_format_spec_v10.pdf
debian/modules/nginx-rtmp-module/doc/rtmp-decoded.pdf
debian/modules/nginx-rtmp-module/test/rtmp-publisher/RtmpPlayer.swf
debian/modules/nginx-rtmp-module/test/rtmp-publisher/RtmpPublisher.swf
debian/modules/nginx-rtmp-module/test/www/jwplayer_old/player.swf
debian/modules/nginx-rtmp-module/test/www/jwplayer/player.swf




After that it's just dpkg-buildpackage -rfakeroot and you're good.  The 
resulting nginx-extras package includes rtmp and hls.

Viability?
-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Update manager mandating rebooting

2012-10-31 Thread John Moser



On 10/31/2012 06:45 PM, Dale Amon wrote:

On Wed, Oct 31, 2012 at 12:09:09PM -0500, Jordon Bedwell wrote:

That's a subjective point of view, if libssl is vulnerable or the
kernel is vulnerable you need to restart too, not because you can't
restart services or use a rolling Kernel (read KSplice) but because
there are multiple ways to look at it, from my perspective a login and
logout is just as fast as a reboot (because reboot requires less steps
for me since again I'm already in my terminal and my laptop boots at
blazing speeds.)  I would much rather reboot than trust a system that
assumes it knows every possible service that could be using a
vulnerable lib reliably and reboot them.  It's easier that way. Easy


grep -iHnr libssl /proc/[0-9]*


is good but easy shouldn't be annoying like what you describe happens
with update manager when you update .


It has long been the way of professional unix servers that
they almost never need to be rebooted except for a kernel
update, and on 'real' servers you only do that during scheduled
maintenance windows.



I'm using Gnome-Shell, so I just tap the top left corner and hit the X 
on the REBOOT NOW PLZ window and it goes away.  ;)



I look forward to the day when someone finds a way to reliably
switch into a new kernel so that I never need to reboot a
system ever again... except to take it out of service.




--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Default group

2012-10-17 Thread John Moser
Currently each Ubuntu user gets his own group, so:

jsmith:jsmith
lmanning:lmanning
rpaul:rpaul

and so on.  I feel this is a lot of clutter for no benefit.

First let's discuss the benefit.

Since each user has his own group, the administrator can grant other
users access to each others' files in a fine-grained manner by adding
them to other users' groups.  This seems useful, but consider:

 - To modify the groups a user is in, you must have administrative access
 - As long as you're modifying users anyway, you're in a position to
create a group and add both users to it
 - This is better accomplished with POSIX ACLs, which users can
control on files they own

That third one, by the way, suggests that we should have a Windows NT
style permissions tab in Nautilus' file properties such that you can
add a user and alter their permissions.  UNIX permissions allow you to
set Owner, Group, Owner access, Group access, Other access; POSIX ACLs
allow additional Users and Groups to be added with their own
permissions as well.  Thus:

Creator/Owner:  [User]
Group:  [Group]
Permissions:
::Creator/Owner:  rwx
::Group:  ---
::Everyone:  ---
::group=developers:  rwx
::group=managers:  r-x

etc



The above suggests to me that any such benefit from giving users
individual groups is quickly mitigated because either A) the users are
all administrators, so sharing versus isolating files is wholly
imaginary; or B) giving fine-grained access via group membership
requires administrator mediation.

I suggest all users should go into group 'users' as the default group,
with $HOME default to 700 and in the group 'users'.  A umask of 027 or
the traditional 022 is still viable:  the files in $HOME are not
visible because you cannot list the contents of $HOME (not readable)
or change into it to access the files within (not executable).  A user
can grant permissions to other users to access his files simply by
making the directory readable by them--by 'users' or others (thus
everyone) or by fine-grained POSIX ACLs selecting for individual users
and groups.
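
Concretely, the sharing step would look something like this (usernames
from the example above; the group name is invented):

chmod 700 /home/jsmith                   # home stays closed by default
setfacl -m u:lmanning:rx /home/jsmith    # let one user traverse and list
setfacl -m g:developers:rx /home/jsmith  # or a whole group
getfacl /home/jsmith                     # inspect the result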

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Default group

2012-10-17 Thread John Moser
On Wed, Oct 17, 2012 at 10:05 AM, Jordon Bedwell jor...@envygeeks.com wrote:

 The problem with this is how are you going to fix permissions on bad
 software like Ruby Gems who do not reset permissions when packaging
 and uploading to the public repository (because they claim this would
 violate security even though it comes from a public repo like the
 Debian repo and having public read and execute on a public gem from a
 public place is bad.) This has a huge impact as a default permission
 for not just examples like Ruby gems but other software do not reset
 when packaging, making it more cumbersome to package software and
 making it so now work around's are the rule and not the exception.

Explain the problem more.  The directory the user is in would be owned
by $USER:users instead of $USER:$USER.  The only difference, then, is
instead of your stuff being owned by jordon:jordon it's owned by
jordon:users.

What you're saying here is... I don't know what you're saying.
Permissions are currently $USER:$USER by default with umask=022 and
$HOME permissions of 755, which means every file is created as:

drwxr-xr-x jordan:jordan /home/jordan
drwxr-xr-x jordan:jordan /home/jordan/somedir
-rw-r--r-- jordan:jordan /home/jordan/somefile

What I'm suggesting is either umask=022 with a shared 'users' group
and a default $HOME permission of 700, so

drwx------ jordan:users /home/jordan
drwxr-xr-x jordan:users /home/jordan/somedir
-rw-r--r-- jordan:users /home/jordan/somefile

In which case if you give the 'users' group or (via extended ACL) any
other group or person read/execute on /home/jordan they can read
everything:  they're in 'users' and thus have access to your files,
just before they couldn't actually reach the inode.  If you give
'others' read/execute on /home/jordan then everyone on the system can
see inside your $HOME, as is the case now.


...OR--more risky--a default umask=027 with a shared 'users' group and
a default $HOME permission of 700, so

drwx------ jordan:users /home/jordan
drwxr-x--- jordan:users /home/jordan/somedir
-rw-r----- jordan:users /home/jordan/somefile

and security is increased, nominally, but honestly not much.  The
security boost here is that files created in shared directories, or
hardlinks, won't let anyone and everyone read them; the truth of the
matter is that shouldn't happen anyway, and stuff done in /tmp is
usually ... temporary, and aware of the security implications.
You could be more restrictive with umask=077, but it's the same deal,
and then if you want to give anyone access to your files you have to
change permissions the whole way down (which opens the user up to
mistakes like chmod -R on $HOME and exposing their SSH keys).


How does putting everyone in the same group and changing $HOME to 0700
do what you said?

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


pam-tmpdir promote to main?

2012-10-17 Thread John Moser
Can we promote pam-tmpdir to main instead of universe for 13.04?  It
seems to work pretty well now, and so I recommend activating it by
default early in the development cycle.  Very early.  Like first
change early:  pam-tmpdir is part of the base system default install.

The rationale for this is pam-tmpdir makes changes to $TMP and $TMPDIR
which affect application behavior.  Non-conforming applications will
dump their temp files into /tmp anyway; conforming applications using
$TMP or $TMPDIR will put them in a user-specific directory.  SOME
applications may break--they shouldn't, but GDM broke in 2004 so I
could see things breaking.

Applications ceasing to function is what I'm interested in.  Anything
that's built and tested that fails to run properly under pam-tmpdir.

pam-tmpdir creates a root-owned directory /tmp/users with permissions
o=--x.  Upon log-on, pam creates a directory /tmp/users/$UID/ owned by
the user and with permissions 700. That becomes $TMP and $TMPDIR, and
so most applications put their temporary files there.
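
A quick way to see it working (the binary package is libpam-tmpdir;
pam-auth-update should wire in the session module on install):

sudo apt-get install libpam-tmpdir
# then, in a fresh login session:
echo "$TMPDIR"     # points at the per-user directory
ls -ld "$TMPDIR"   # drwx------, owned by the logging-in user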

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Default group

2012-10-17 Thread John Moser
On Wed, Oct 17, 2012 at 10:44 AM, Marc Deslauriers
marc.deslauri...@canonical.com wrote:
 On 12-10-17 09:59 AM, John Moser wrote:
 I suggest all users should go into group 'users' as the default group,
 with $HOME default to 700 and in the group 'users'.  A umask of 027 or
 the traditional 022 is still viable:  the files in $HOME are not
 visible because you cannot list the contents of $HOME (not readable)
 or change into it to access the files within (not executable).  A user
 can grant permissions to other users to access his files simply by
 making the directory readable by them--by 'users' or others (thus
 everyone) or by fine-grained POSIX ACLs selecting for individual users
 and groups.


 We want users to be able to share files with other users. Having $HOME
 be 700 defeats that purpose. See:

 https://wiki.ubuntu.com/SecurityTeam/Policies#Permissive_Home_Directory_Access


Which, as I said, is accomplished by adding the user or an appropriate
group to the Extended ACL of $HOME, as the umask is still permissive
and the files are all owned by a common user group.  It can also be
blanket accomplished by adding read access to group or others on
$HOME, which would return the system to effectively as it is now.

 Also, one of the reasons for using User Private Groups, is to be able to
 create directories that are used by multiple users, by setting the
 setgid on the directory. With a default umask of 022, users need to
 manually set group permissions each time they create a file.


Setting setgid on the directory to allow multiple users to add files
to it still requires that the users be in the group or that the
directory be world-writable. The proper way to accomplish this is,
again, to place the directory into the shared 'users' group and grant
individual user or group access via ACLs, rather than a shotgun
approach by which either the directory is world-writable or the
users have to be put into some other user's group and then suddenly
have blanket access to that user's files unless he tightens down
permissions on his $HOME.

setgid would also do ... just about nothing, since without setUID on
the directory the file's permissions are still g-w.  Although some
Googling is telling me that Ubuntu changed the default umask to 002
back in Oneiric, so apparently yeah this works, caveat above paragraph.

In short, the current method is a lot of this works... with a lot of
unintended consequences.


 Marc.


 --
 Marc Deslauriers
 Ubuntu Security Engineer | http://www.ubuntu.com/
 Canonical Ltd.   | http://www.canonical.com/

 --
 Ubuntu-devel-discuss mailing list
 Ubuntu-devel-discuss@lists.ubuntu.com
 Modify settings or unsubscribe at: 
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: pam-tmpdir promote to main?

2012-10-17 Thread John Moser
On Wed, Oct 17, 2012 at 10:52 AM, Marc Deslauriers
marc.deslauri...@canonical.com wrote:

 Now that we have symlink restrictions in Ubuntu, security issues with
 using the /tmp directory are greatly reduced.

 Since Quantal now sets $XDG_RUNTIME_DIR, apps should use it or one of
 the other $XDG_* locations to store temporary user data. If use of /tmp
 is still necessary, apps should simply assign appropriate permissions to
 the files they create in /tmp.

I'm more concerned with keeping the contents of /tmp private.  When I
filed bugs for Thunderbird and Firefox years ago (which never got
fixed) I pointed out things like site designations, client names, and
(amusingly) pornography being leaked through /tmp.  Which has got to
be great when you're 15 and peeking at /tmp to see what kinds of
flicks your dad's been downloading, though now everything streams in
browser.

Well, except torrent names, which are spewed all over the place, and
stay there until reboot.


 Please file bugs on any app that doesn't currently do this properly.

 Marc.


 --
 Marc Deslauriers
 Ubuntu Security Engineer | http://www.ubuntu.com/
 Canonical Ltd.   | http://www.canonical.com/

 --
 Ubuntu-devel-discuss mailing list
 Ubuntu-devel-discuss@lists.ubuntu.com
 Modify settings or unsubscribe at: 
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Default group

2012-10-17 Thread John Moser
 do it.  You'd
share both directories with both users.  The second user would have to
share a directory to you--direction is suddenly critically important
to semantics of overall access.  And of course when you have three
users doing this to each other, suddenly there's no arrangement that
works.


In truth it's the same problem as just ... basic Unix groups.  You
want Developers and Managers to all be able to access a certain
folder, but you don't want to make all Developers also Managers
because they'd be able to do things like monkey with their own salary.
 Then you decide you don't want to make Managers Developers because
they'll monkey with the code even though they're not supposed to and
most of them were good programmers 10 years ago but have gotten rusty
and refuse to admit it.  What now?  Only difference is our use case
has more unstructured sharing, which means the problems that creep up
creep up REALLY FAST and much more often and usually go unnoticed.

On Wed, Oct 17, 2012 at 3:14 PM, Nicolas Michel
be.nicolas.mic...@gmail.com wrote:
 John,

 Do you know KISS?
 So ACL works well. But it's really more complicated to use than UGO and
 surely to understand who has which access to what. Trust me it can be really
 hard to get it with complex configurations.

 So I would say : why use a complex solution for a simple need?

 Regards,
 Nicolas


 2012/10/17 John Moser john.r.mo...@gmail.com

 On Wed, Oct 17, 2012 at 10:44 AM, Marc Deslauriers
 marc.deslauri...@canonical.com wrote:
  On 12-10-17 09:59 AM, John Moser wrote:
  I suggest all users should go into group 'users' as the default group,
  with $HOME default to 700 and in the group 'users'.  A umask of 027 or
  the traditional 022 is still viable:  the files in $HOME are not
  visible because you cannot list the contents of $HOME (not readable)
  or change into it to access the files within (not executable).  A user
  can grant permissions to other users to access his files simply by
  making the directory readable by them--by 'users' or others (thus
  everyone) or by fine-grained POSIX ACLs selecting for individual users
  and groups.
 
 
  We want users to be able to share files with other users. Having $HOME
  be 700 defeats that purpose. See:
 
 
  https://wiki.ubuntu.com/SecurityTeam/Policies#Permissive_Home_Directory_Access
 

 Which, as I said, is accomplished by adding the user or an appropriate
 group to the Extended ACL of $HOME, as the umask is still permissive
 and the files are all owned by a common user group.  It can also be
 blanket accomplished by adding read access to group or others on
 $HOME, which would return the system to effectively as it is now.

  Also, one of the reasons for using User Private Groups, is to be able to
  create directories that are used by multiple users, by setting the
  setgid on the directory. With a default umask of 022, users need to
  manually set group permissions each time they create a file.
 

 Setting setgid on the directory to allow multiple users to add files
 to it still requires that the users be in the group or that the
 directory be world-writable. The proper way to accomplish this is,
 again, to place the directory into the shared 'users' group and grant
 individual user or group access via ACLs, rather than a shotgun
 approach by which either the directory is world-writable or the
 users have to be put into some other user's group and then suddenly
 have blanket access to that user's files unless he tightens down
 permissions on his $HOME.

 setgid would also do ... just about nothing, since without setUID on
 the directory the file's permissions are still g-w.  Although some
 Googling is telling me that Ubuntu changed the default umask to 002
 back in Oneiric, so apparently yeah this works, caveat above paragraph.

 In short, the current method is a lot of this works... with a lot of
 unintended consequences.


  Marc.
 
 
  --
  Marc Deslauriers
  Ubuntu Security Engineer | http://www.ubuntu.com/
  Canonical Ltd.   | http://www.canonical.com/
 
  --
  Ubuntu-devel-discuss mailing list
  Ubuntu-devel-discuss@lists.ubuntu.com
  Modify settings or unsubscribe at:
  https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss

 --
 Ubuntu-devel-discuss mailing list
 Ubuntu-devel-discuss@lists.ubuntu.com
 Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss




 --
 Nicolas MICHEL

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Default group

2012-10-17 Thread John Moser
On Wed, Oct 17, 2012 at 3:52 PM, John Moser john.r.mo...@gmail.com wrote:
 First:  that's why we need an interface that handles POSIX ACLs
 properly, long-overdue.


It actually occurs to me that this is probably not just technically
important, but important for planning purposes.  That is, we can sit
here arguing all day about theoretical use cases, but everybody is
going to have differing opinions that lean in odd directions until
certain things are fixed.  Or in short, as long as POSIX ACLs require
a scout badge in command line comfort, discussing how to leverage
POSIX ACLs is pointless because people don't want to think two steps
out.

Let's see if we can get anything agreeable on that.  It'll probably
look strikingly like Windows' Security tab, except without Allow/Deny,
fine-grained special permissions (which don't exist here), or the full
set of Windows main permissions.  So, it'll look like the only thing
that makes sense in general, and specifically for Unix.  That is, a few
rows of controls for Owner and Group, and then a list box of
permissions with one row per principal (Owner, Group, Everyone,
individual users/groups) and three checkboxes per row.

Can we get that?

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Default group

2012-10-17 Thread John Moser
Doesn't look integrated into the default UI.  Workable, but not quite 
intuitive.  Things I'd prefer:


 - Shows the user and group ownership, instead of piling them in as 
just part of the ACL.  Remember these have special meanings for SUID/SGID.


 - First three ACL entries are always Owner, Group, and Other.

 - Integrate into the UI (Konqueror, Gnome)

 - Apply ACL to Enclosed Files button to apply the ACL all the way down.

 - Possibly Apply Permissions to Enclosed Files button for only UNIX 
permissions


i.e. something more obvious and intuitive.

On 10/17/2012 05:12 PM, Matt Wheeler wrote:


It's called eiciel

--
Matt Wheeler
m...@funkyhat.org mailto:m...@funkyhat.org

On 17 Oct 2012 21:15, John Moser john.r.mo...@gmail.com 
mailto:john.r.mo...@gmail.com wrote:


On Wed, Oct 17, 2012 at 3:52 PM, John Moser
john.r.mo...@gmail.com mailto:john.r.mo...@gmail.com wrote:
 First:  that's why we need an interface that handles POSIX ACLs
 properly, long-overdue.


It actually occurs to me that this is probably not just technically
important, but important for planning purposes.  That is, we can sit
here arguing all day about theoretical use cases, but everybody is
going to have differing opinions that lean in odd directions until
certain things are fixed.  Or in short, as long as POSIX ACLs require
a scout badge in command line comfort, discussing how to leverage
POSIX ACLs is pointless because people don't want to think two steps
out.

Let's see if we can get anything agreeable on that.  It'll probably
look strikingly like Windows' Security tab, except without Allow/Deny,
fine-grained special permissions (which don't exist here), or the full
set of Windows main permissions.  So, it'll look like the only thing
that makes sense in general, and specifically for Unix.  That is, a few
rows of controls for Owner and Group, and then a list box of
permissions with one row per principal (Owner, Group, Everyone,
individual users/groups) and three checkboxes per row.

Can we get that?

--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
mailto:Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at:
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss



-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Default group

2012-10-17 Thread John Moser



On 10/17/2012 05:34 PM, Marc Deslauriers wrote:

On 12-10-17 03:52 PM, John Moser wrote:


First, he must find the sysadmin.  The sysadmin must then put wriker
in group jkirk.  Also, ~jkirk must be group-readable, as must any
files.


In a default Ubuntu installation, jkirk's files are already accessible
to other users.


Yeah I just looked and saw that, my whole $HOME is world-readable.

This displeases me.  I'd prefer default $HOME chmod 700.




A user can't change permissions on his $HOME by himself. Only a sysadmin
can.


$ ls -ld ~
drwxr-xr-x 100 bluefox bluefox 4096 Oct 14 11:47 /home/bluefox
$ chmod go-rx ~
$ ls -ld ~
drwx------ 100 bluefox bluefox 4096 Oct 14 11:47 /home/bluefox
$ setfacl -m u:root:r ~
$ getfacl ~
# file: home/bluefox
# owner: bluefox
# group: bluefox
user::rwx
user:root:r--
group::---
mask::r--
other::---

Try again.



This only works if the user default umask is 002, which wouldn't be the
case if you're not using User Private Groups.


Well, it's the case now; and if we leave it the case and make ACL 
handling more intuitive, then it'll all work.  Changing $HOME to 700 
instead of 755 would adequately protect the user's private files in 
$HOME even with a umask of 002, since you simply can't look into $HOME 
to read/modify those files anyway.


The only other thing needed would then be a Shared Documents alike 
(borrowing from Windows again--it's a pile of crap, but that doesn't 
mean everything associated with it is terrible by default) supplying a 
place for folks to put shared files, or similar secured shared folders, 
made sticky of course.
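
Setting such a directory up is only a few commands (path invented):

mkdir -p /home/shared
chgrp users /home/shared
chmod 1775 /home/shared   # group-writable plus sticky bit, so users can
                          # add files but only delete their own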





Marc.




--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Default group

2012-10-17 Thread John Moser



On 10/17/2012 06:43 PM, Marc Deslauriers wrote:

On 12-10-17 05:45 PM, John Moser wrote:



On 10/17/2012 05:34 PM, Marc Deslauriers wrote:

On 12-10-17 03:52 PM, John Moser wrote:


First, he must find the sysadmin.  The sysadmin must then put wriker
in group jkirk.  Also, ~jkirk must be group-readable, as must any
files.


In a default Ubuntu installation, jkirk's files are already accessible
to other users.


Yeah I just looked and saw that, my whole $HOME is world-readable.

This displeases me.  I'd prefer default $HOME chmod 700.


As I said, we wanted people to be able to share files by default without
having to understand granting permissions. This has already been
discussed to death, although it's been a while.



It's good to revisit things.  I'm bringing up what's turning into an 
evolving alternate proposal, at this point I've gotten as far out as 
suggesting UI changes and a sticky bit (= /tmp) directory acting as a 
Shared Documents between users, which I'm guessing didn't come up last 
round.  Sometimes the rules change.




Of course, you're absolutely right. I'm not sure what I was thinking
there for a sec. :P


Wrong facility probably.  Changing permission != changing group, which 
is what this discussion started on.





I'm not sure this proposal would be simple enough to be understood by
most non-technical users. Also, last time we looked at using extended
attributes, there were issues with proper support in common tools,
backup software, certain filesystems, etc. This would need to be looked
at again to see if extended attributes can be used now. It's certainly
worth investigating it again.


I may as well continue beating the Look at Windows horse; it's done 
this right for ages anyway.  You should consider your user base, though: 
they've already installed Ubuntu.  Most of them.  Some probably got a 
tablet or such that came with it, but single-user systems likely do not 
care.  We're talking about shared multi-user systems where everyone 
isn't an admin, which is kind of a narrow case.


That's less most users don't need to know how and more most users 
won't be faced with a sudden, surprising, confusing change (like moving 
from Gnome2 to Unity or Gnome3 by default).  In other words, I 
suspect most users aren't particularly attached to the sharing my files 
with everyone else case.


Finally on that discussion, the current case is--as discussed--a default 
Share with everyone, and the user has to take an unknown technical step 
to stop that.  That means it's hard (for some value of hard) for some 
14 year old girl to stop her little brother from reading her diary. 
Fortunately actual e-mails are stored in a directory Thunderbird by 
default forces to 700, although any attachments you save you'll need to 
purposely revoke permission on.  Mostly to the point, documents, saved 
attachments, saved files off Web sites, PDF collections of Victorian era 
pornography downloaded from torrents, all by default available without 
jumping through technical hoops.




Don't know about common tools, backup software.  I'm well aware it's not 
straight out-of-box supported by Thunar, Nautilus, and Konqueror--it 
works, but their file properties dialogs don't let you manage those 
permissions.  Again, Windows does this right.




As for filesystems, POSIX ACLs tend to cause read-latency issues for 
inodes not in disk cache.  XFS with 256-byte inodes takes something 
like 9µs to read a non-ACL inode, 7000µs for an ACL inode; XFS with 
512-byte inodes takes 9µs either way; and other file systems tend to 
behave like XFS with 256-byte inodes (9-15µs without a POSIX ACL, 
1000-1500µs with one).  This is because another seek-and-read must be 
done.  Not a problem on SSD, not a problem on a warm system; a minor 
performance issue at first boot.  This isn't something that's going to 
slow the system to a crawl:  these ACLs are only going to be set on 
user-owned files, and even then the proposed semantics favor setting 
them on a directory here and there, so you lose a millisecond or three 
once in a great while.


Do note that those performance numbers are circa 2003: 
http://users.suse.com/~agruen/acl/linux-acls/online/








The only other thing needed would then be a Shared Documents alike
(borrowing from Windows again--it's a pile of crap but that doesn't mean
everything associated is terrible by default) supplying a place for
folks to put shared files or such secured shared folders, made sticky of
course.


Well, right now we're defaulting to sharing everything except private
information in private directories. Your proposal is basically to share
nothing, and create exceptions. If this is to be discussed again, we
probably need to figure out if our users are able to understand file
permissions well enough to be able to share documents.


I recall Windows XP notifying users that it was in fact going to 
privatize directories after upgrade, or some such.  Such flip-flops 
have been done.


Users

Re: Prevent deletion of file when it is being copied

2012-09-27 Thread John Moser
On Thu, Sep 27, 2012 at 8:34 AM, Nimit Shah nimit.sv...@gmail.com wrote:
 Haha :D
 I was removing the useless files and by mistake selected that file as well
 along with other files. The copy was going on in the background so had
 forgotten about it.

Unix has a proud tradition of assuming you're not a moron.  That's why
Unix tools don't complain when they do things right.  It's like

~$ rm -rf all_my_stuff/
~$

Because, hey, you asked to remove all your stuff.  What do you think
happened?  It's not like we need to ask for confirmation; you pressed
enter, didn't you?  Unless something goes distinctly wrong
(files/directories are immutable, no permission, etc)

I like that you can cut/copy/paste files around in Nautilus without it
going, Do you want to place a copy of XXX here? or Are you sure you
want to move these files here?  When you shift+delete doesn't it ask,
though?  (Which is silly, yeah; you hit delete, didn't you?  Of course
you want to delete these files!)

 Nimit Shah,
 B Tech 4th year,
 Computer Engineering Department,
 SVNIT Surat
 www.dude-says.blogspot.com



 On Thu, Sep 27, 2012 at 5:42 PM, Luis Mondesi lem...@gmail.com wrote:

 El Sep 27, 2012, a las 1:28, Emmet Hikory per...@ubuntu.com escribió:

  Nimit Shah wrote:
  While copying a file from my computer to external disk, I by mistake
  shift+deleted the file. But still the file transfer dialog showed that
  it
  was continuing. At the end of the transfer it failed.
 
  Hence i request you to add a check for file transfer before deleting
  the
  file.
 
 As much as this would be a lovely feature, I don't believe that it is
  something that we could implement in Ubuntu.
 
 When copying a file, there are usually two ways to go about it:
  either
  open the entire file, and write it to a new location, or open a series
  of
  sections of the file, and write them each to a new location.  There are
  a
  very large number of programs that provide both of these mechanisms in
  the
  archive.  In the majority of cases, the first potential solution is not
  used, because it limits file copies to files that fit entirely in memory
  (with everything else), and requires a longer-running algorithm, but
  when the second method is used, the file cannot be allowed to be deleted
  before the file transfer is confirmed as complete.
 
 When deleting a file, the usual practice is to remove the reference
  from the directory definitions (unlinking), leaving the underlying
  filesystem
  to manage recovery of the newly available space.  Again, there are a
  vast
  number of packages in the archive providing programs that do this.
 
 In order to implement the feature you describe, we would have to
  either
  provide some systems interface that traps all calls to unlink() and
  checks
  some external reference to determine if it is being copied or patch all
  software that could potentially delete files to check the reference,
  whilst
  simultaneously patching every package that provides a means to copy a
  file
  to populate this reference during the file copy, which would make all
  such
  operations considerably slower, with potentially massive impact on
  server
  capacities, interactive response times, and battery life.
 
 Further, it is unlikely that the developers and maintainers of most
  of
  the software in our archive would be willing to accept such patches,
  given
  the potential complications and incompatibilities with other systems,
  such
  that the result of this vast undertaking would require considerable
  ongoing
  development effort to port these patches for each new upstream release.
 
 Lastly, in the event that any of the programs providing file copy
  functionality were to crash, they may not properly clear the reference
  indicating files whose deletion need block on the transfer completion.
  As a result of such a crash (or any other bug when updating references),
  a user's system may end up having any number of files that cannot be
  deleted without manual intervention into the file transfer reference.
 
  --
  Emmet HIKORY

 Usually there are 2 solutions to these types of problems:

 1. develop some complex code to deal with it
 2. don't do the action to begin with

 Typically # 2 is the best answer. Writing complex code to avoid something
 that's obviously the wrong thing to do by the end user seems silly. The
 users simply have to wait until their files are copied. Why would you delete
 the file by shift-delete while is being copied!?
 --
 Ubuntu-devel-discuss mailing list
 Ubuntu-devel-discuss@lists.ubuntu.com
 Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss



 --
 Ubuntu-devel-discuss mailing list
 Ubuntu-devel-discuss@lists.ubuntu.com
 Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss

Re: chromium no longer maintained

2012-09-04 Thread John Moser



On 09/04/2012 07:13 AM, Daniel Hollocher wrote:

On Tue, Sep 4, 2012 at 6:38 AM, Damian Ivanov damianator...@gmail.com wrote:

Hi,


It would be nice if someone could step up and maintain the
chromium-browser version of chromium, but for whatever reason, that
isn't happening.  Shouldn't the ppas at least be updated to state that
the version held is out of date?  Shouldn't version 18 be removed from
the archive?



There has long been talk of switching to Chromium by default in Ubuntu 
new installs.  For a while wasn't it the default browser in Xubuntu?


Look, if it gets Google to give Canonical more money (i.e. from product 
placement, like the Firefox package's search bar embedding Canonical's 
AdSense ID into Google searches), I say Chromium should be maintained so 
the portion of the userbase who uses it doesn't switch to a 
downloaded/hand-compiled build and stop generating any revenue.




Dan



--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: chromium no longer maintained

2012-09-04 Thread John Moser
On Tue, Sep 4, 2012 at 8:43 AM, David Klasinc bigwh...@lubica.net wrote:
 who's forking and
 who's not. :)


Cheeky.

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: ecryptfs default config

2012-09-02 Thread John Moser

did you change your password from your account or using the root account?

It looks like pam actually stores wrapped encryption keys on disk and 
can rewrap them when the password changes.  That only works if you enter 
the previous password when changing it, though (which I hadn't 
considered, since normally when you init=/bin/bash you drop straight to 
root...)
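
Assuming the stock ecryptfs-utils tooling, the wrapped passphrase
(normally under ~/.ecryptfs/) can be inspected and rewrapped by hand:

ecryptfs-unwrap-passphrase ~/.ecryptfs/wrapped-passphrase
# prints the mount passphrase after prompting for the login passphrase
ecryptfs-rewrap-passphrase ~/.ecryptfs/wrapped-passphrase
# prompts for the old and then the new wrapping (login) passphrase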


On 09/02/2012 09:37 AM, Damian Ivanov wrote:

Hi John,

I appreciate your fast answer!
So what can I do to prevent this default behaviour? e.g if password
gets changed data is unreadable unless to have the secret key?
Wouldn't this be a more reasonable default?

Best regards,
Damian

2012/9/2 John Moser john.r.mo...@gmail.com:

Yes that would indicate that there's a key stored somewhere that doesn't
need a known secret, unless pam is storing a key and re-crypting it when you
change passwords (unlikely).


On 09/02/2012 09:16 AM, Damian Ivanov wrote:

Hi folks,

I just did an ubuntu 12.04 fresh install and I wanted to test
something in ecryptfs. So basically I selected during install to
require password to login and to encrypt home folder. I logged in and
created secret.txt on my desktop and shut down. I booted up again but
in bootloader I appended init=/bin/bash booted into the root shell,
did a
mount -o remount,rw / and passwd $my_user set a new password and
rebooted.  After reboot I logged into $my_user account with the new
password. secret.txt is readable and all other files too. Is this the
expected behaviour?! If yes isn't it better to change the behaviour to
something more secure...

Regards,
Damian




--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


LVM and Thin Provisioning

2012-08-23 Thread John Moser
Gents,

Do you think Ubuntu would benefit, in the future, from a 
thin-provisioned LVM layout as the default for whole-disk installs?  At 
the moment thin provisioning is not considered stable, and so it would 
be inappropriate.

I believe that once LVM thin provisioning is stable, it would be
worthwhile for Ubuntu to use it by default when installing across a
whole disk.  Essentially with a single large disk, Ubuntu could create
one big thin pool in which / is virtually 100% of the disk, /home is
likewise 100%, and some small amount is swap.  This would allow for snapshot backups,
encryption, and such through the supported LVM interfaces.  More
importantly, it would allow for the isolation of file systems
(particularly / and /home) without complex considerations like how
big do we make them?
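
As a sketch of what an installer might do (device, VG, and LV names
invented; needs lvm2 with thin support):

pvcreate /dev/sda2
vgcreate vg0 /dev/sda2
lvcreate -l 90%FREE --thinpool tp0 vg0    # one big pool over the disk
lvcreate -V 100G --thin vg0/tp0 -n root   # virtual sizes; each can
lvcreate -V 100G --thin vg0/tp0 -n home   # approach the whole disk
lvcreate -L 2G -n swap vg0
mkfs.ext4 /dev/vg0/root
mkfs.ext4 /dev/vg0/home
mkswap /dev/vg0/swap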

The down side to this is LVM complexity--power users can't simply pull
up gparted and manipulate LVM partitions, slide things around to
install an alternate OS, etc, without learning some new tools.  I
think power users would plan ahead for that, and other users who do a
full disk install won't particularly have such needs because they'll
be of the Install one Linux because I want my computer to work
variety.

Users who are resizing an existing OS and using part of the disk may
legitimately have a middle ground where they eventually move to resize
partitions (remove the old OS or Ubuntu) and find that their basic
knowledge is suddenly useless and they don't know where to go from
here or really want to put in that kind of effort.  From that
perspective, shrinking a Windows partition and putting an LVM Physical
Volume next to it with a complex Logical Volume layout may not be a
great idea; the distinction between power user and regular user
does have a gray-zone border, and these sorts of installs fall within
it much more often than straight-up whole disk installs.  But then,
maybe it'd be perfectly fine anyway.

LVM thin provisioning does legitimize automatic file system migration.
Passing TRIM through a thin provisioned LVM volume doesn't just knock
a block off an SSD; it tells the thin provisioning layer that that
block is free.  When an entire extent is TRIMed off, it becomes
available again (as is my understanding, anyway).  So a user on Ubuntu
with ext3 migrating to ext4 loses out on a lot of features that ext4
simply has to be created from scratch for.  Well, now you can create a
new thin root, move the data across (TRIMing as you go), and then remove
the old LV.  Even if the disk is 90% full.
has experimented with btrfs and realizes there's no fsck tool (fsck
doesn't FIX btrfs, it just tells you if it's broken) and he wants to
go back to ext4 or XFS.
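
Sketched with the names from above, run from a live environment:

lvcreate -V 100G --thin vg0/tp0 -n root2
mkfs.ext4 /dev/vg0/root2
mkdir -p /mnt/old /mnt/new
mount /dev/vg0/root /mnt/old
mount /dev/vg0/root2 /mnt/new
rsync -aHAX /mnt/old/ /mnt/new/
fstrim -v /mnt/new    # hand unused extents of the new LV back to the pool
umount /mnt/old /mnt/new
lvremove -f vg0/root  # dropping the old LV frees its extents too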

Thoughts?

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Are UI developers all left handed?

2012-08-09 Thread John Moser
On Thu, Aug 9, 2012 at 8:25 AM, Tom H tomh0...@gmail.com wrote:
 On Wed, Aug 8, 2012 at 1:52 PM, John Moser john.r.mo...@gmail.com wrote:
 And Apple with MacOSX, which Unity mimics.

 The default OS X Dock position is at the bottom of the screen and the
 Dock can be moved to the left or to the right of the screen. So
 Unity's Launcher doesn't quite mimic it. If it did, I'd move the
 Launcher to the bottom with auto-hide. As it is, I just look at
 switching back and forth between OS X and Unity as a test/game; on
 OS X go down for the Dock and on Unity go left for the Launcher.


You're right, of course.  I actually have no idea what MacOSX looks
like; the last MacOS I used was system 7.

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Are UI developers all left handed?

2012-08-09 Thread John Moser
On Thu, Aug 9, 2012 at 10:30 AM, Felix Miata mrma...@earthlink.net wrote:
 On 2012/08/09 10:37 (GMT-0300) Conscious User composed:


 So the point only seems mostly relevant in two situations: when the
 person has just arrived on the computer and when the person was
 typing. The first case does not seem to be statistically significant.
 The second is valid, but prioritizing it seems strange since a very
 common argument against Unity and Shell is ZOMG YOU ARE
 FORCING ME TO TYPE AND TYPING IS FOR NERDY GEEKY DORKS
 AND NORMAL PEOPLE NEVER TYPE ANYTHING, EVER EVER EVER.


 Dolts make that argument. People shop and bank online, and fill out other
 web forms as well. No small number create email rather than just reading it
 or re-forwarding jokes and pr0n forwarded to themselves. Some even use them
 for business and run LibreOffice to create snail mail, manuscripts and other
 things a mouse cannot create, and various other apps to create such mundane
 things as web content.


This is true, most people type, and most people in front of a computer
are dolts.  Honestly when was the last time you met an intelligent
person on the Internet?  Answer me that question.  Uh huh.  You ain't
never seen it, 'cause everybody on the Internet is dumb[1].

Honestly I just find the outward motions easier than the inward
motions.  Inward motions seem to put a lot more physical stress on
joints and tendons.  Then again, if I rest my arm straight out to the
side and bend my elbow at 90 degrees for a starting position, many
mouse movements are much closer to baseline; any other position
(including the positions used at work[2] and at home--where my
computer is on the floor) seems to create difficulties.  So, keyboard
slide-out tray with mouse on the right marginalizes these complaints.

Also for the touch pad guy, those things are FAST going up-right and
down-left!  Thumb or index finger on the pad.  Middle finger on the
pad.  Raise thumb/index finger.  Mouse jumps up-right (if you're right
handed).  They're not multi-touch and so they register this as a fast,
long movement.

[1]http://xkcd.com/386/
[2]http://img826.imageshack.us/img826/6742/img20120809105855.jpg

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Are UI developers all left handed?

2012-08-08 Thread John Moser
Put your mouse pointer in the middle of the screen.

Put your mouse somewhere you can grab it.

Now reach out and grab the mouse.

Where does the pointer end up?

If it winds up in the top right of your screen, it seems you're right
handed.  Your arm just goes that way, and your wrist straightens to
support the movement.  It flexes outward more easily than inward, too.

UNITY.  Puts the control box (close, minimize, etc) in the top left.
Of the entire screen.

GNOME SHELL.  The thing you have to hit to do anything is in the top
left corner.  Want to log out?  That's in the top right, fastest thing
you'll be able to hit ever.

EVERYTHING puts menus left to right (the Help menu used to be on the
far right, separate from all other menus, in Windows 3.1), but that's
probably more for left-to-right text flow than anything.  Also it
keeps File from being too damn close to the big evil [X] button, but
it's still slow and inconvenient.

Why do UI designers insist on designing interfaces for left handed people?

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Are UI developers all left handed?

2012-08-08 Thread John Moser
On Wed, Aug 8, 2012 at 1:15 PM, Jordon Bedwell jor...@envygeeks.com wrote:

 It has a lot of bearing for people.  Proper usability testing would have
 pointed that out, and Canonicals decision not to allow the toolbar to be
 on the right if users wanted is completely ignorant, more ignorant then
 the joke of a Usability test Canonical did...

And Gnome with the Activities button

And Apple with MacOSX, which Unity mimics; though if I wanted to go on
a tirade about Unity specifically, I'd say something about menus at
the top of the screen (which has become relevant with 26 inch wide
screen displays at 1920x1080, where maximizing things is ridiculous
and so windows float around on the screen... 2 clicks to open File
on that window over there instead of 1).

UI design is something everyone's an expert in and nobody gets right.
Focus groups and thick tomes on User Interface Design Principles and
they still bring out ridiculousness.

GNOME2 for example is so great precisely because it's familiar and
sensible--it looks kind of like everything else, though with the panel
at the top that's new territory for a Windows guy... but at least the
menus are organized in a sensible way.

Gnome Shell is closer.  Tap Activities, everything is there.  Start
typing, it searches through programs.  Mouse on the right side, play
with virtual desktops.  Drag and drop to move windows around,
seriously point and grab.  Seems like everything is in perfect context
and works so obviously well... ... But then when you start trying to
muck about with the stuff at the bottom right (notification icons),
they don't always work as expected.  Sometimes you get kicked back out
to the desktop for unknown reasons trying to get information out of
'em.  The notification at the bottom of the screen covers a third of
it, in the center, but prevents mouse clicks from going through on
that entire horizontal area (plug in a USB drive?  The bottom 3 inches
of your screen are unusable until you dismiss the pop-up!).

Windows is a mess.  Windows 7 is an even bigger mess, to the point
that I can't figure out where stuff is.  Now apparently I have
Documents and Downloads and Pictures, I'm not sure where it all goes,
some of this is new, some of it moved.  I appear to have a Home folder
now that CONTAINS Documents, and some stuff randomly saves there
instead of My Documents ... oh, and inside there I have two folders
named Desktop, two folders named My Documents, My Music, My
Pictures, etc.. but only one Contacts or Downloads or Favorites
folder.  And they split these things that are Mine up between
Favorites (Desktop, Downloads) and Libraries (Documents, Music,
Pictures) on the navigation pane in Explorer, instead of just calling
it all MY FREAKING STUFF.

I hate Unity but I think I'd have trouble making a decent argument,
given the above.  Really I just want to know why EVERYTHING except
Windows (which doesn't do anything useful in the first place) puts the
useful stuff in the top left when it's ergonomically and
biomechanically [B-B-B-BUZZWORD C-C-C-COMBO!] easier to move your hand
away and outward from your body.  I don't think we can really blame
Canonical for that.

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Are UI developers all left handed?

2012-08-08 Thread John Moser
On Wed, Aug 8, 2012 at 2:15 PM, Dale Amon a...@vnl.com wrote:
 On Wed, Aug 08, 2012 at 01:52:45PM -0400, John Moser wrote:
 I hate Unity but I think I'd have trouble making a decent argument,
 given the above.  Really I just want to know why EVERYTHING except
 Windows (which doesn't do anything useful in the first place) puts the
 useful stuff in the top left when it's ergonomically and
 biomechanically [B-B-B-BUZZWORD C-C-C-COMBO!] easier to move your hand
 away and outward from your body.  I don't think we can really blame
 Canonical for that.

 I can tell you the historical reasons. All windowing systems
 began with their coordinate systems with 0,0 in the upper left
 because that is where the scan lines begin. Lines are written
 from left to right, top to bottom.

 It was more difficult to correctly set the position of the upper
 right corner because there was not always a good way to get that
 info. And if your key controls were at (XMAX,0) and you got the
 screen size wrong, you were stuffed.


Hush kids, this is actually interesting.  You all can continue your
little spat in a minute, the grown-ups are talking about old computers
they programmed when Linus was still in diapers.

Anyway that's all very fascinating, but how does that translate to the
Activities menu in Gnome Shell getting up there?  Or Canonical
deliberately moving the control box to the top-left, Apple style?
Your explanation seems satisfactory for Apple, since it's been running
MacOS on a top-left control box scheme since inception.

While we're reminiscing about the past, you know what's funny?  I used
Windows 3.0 and DOS and all, and I didn't even know the "X" worked until
Windows 95.  Didn't get Windows 95 until 1996, either.  I was taught
that you double-clicked the "-" button (top left corner in Windows
3.1; drops a menu of sorts when clicked) to close a window and never
figured out what the control box did.  It was quickly forgotten after
I learned to maximize and minimize (and close!) windows years later.

 Dale Amon
 Who once in a time long ago and far away
 worked on a windowing system for a display
 graphic terminal output controlled by PDP-11
 assembly code.



-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Are UI developers all left handed?

2012-08-08 Thread John Moser
On Wed, Aug 8, 2012 at 3:06 PM, Tom H tomh0...@gmail.com wrote:
 On Wed, Aug 8, 2012 at 11:25 AM, Phillip Susi ps...@ubuntu.com wrote:
 On 8/8/2012 11:01 AM, John Moser wrote:

 Put your mouse pointer in the middle of the screen.

 Put your mouse somewhere you can grab it.

 Now reach out and grab the mouse.

 Where does the pointer end up?

 It ends up in the middle of the screen; if you pick up the mouse off
 of the pad, it isn't going to move.

 AFAIU he means that the momentum of the right hand reaching for the
 mouse moves the pointer to the top right of the screen - or at least
 moves the pointer in that direction.

Indeed, and the implication that "away and out" is the natural
direction.  Swinging my arm inward and pulling it toward me seems to
put stress on tendons in the shoulder; when the arm is closer, it
pushes against the torso; an inward wrist movement seems more
stressful than an outward one; extending the fingers pushes the mouse
away (and lowers the hand, straightening the wrist), curling them to
pull is more awkward but also common (and tilts the hand upward,
creating a sharp angle at the wrist and increasing stress throughout
the motion).

Though the fact that the fastest and most natural movement when
initially grabbing for the mouse seems to be "out and away" does seem
significant.

 --
 Ubuntu-devel-discuss mailing list
 Ubuntu-devel-discuss@lists.ubuntu.com
 Modify settings or unsubscribe at: 
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


File systems

2012-06-21 Thread John Moser
I've grown disdainful of Linux's pet filesystems as of late.  This
is for a few very simple reasons:

 - ext4 is attempting to be XFS
 - btrfs is trying to be ZFS

Let's shoot down btrfs first, because it'll be easier.

btrfs is an enterprise-scale management system similar to LVM +
$FILESYSTEM, but more tightly integrated.  It does a lot of ridiculous
stuff trying to be a VFS layer as well as a filesystem, but that
integration can be useful in specialized situations.

First argument:  Home desktop users don't need all that fancy stuff.
This also argues that we don't need LVM or ZFS on a home end-user
desktop.  Servers, sure.

Second argument:  btrfs is immature.  We know this already, of course.
It has a fsck that can't fix filesystem errors; it has a version that
can, marked "DO NOT USE, WILL DESTROY YOUR DATA!!!"  Obviously, not
production ready.

To its merit, btrfs is also trying to be a decent stand-alone
filesystem, and offers features like built-in compression.
When it's mature (i.e. has non-broken fsck), it'll probably hold
together better on technical merit than old-generation file systems.


ext4 is a little harder to shoot down because the technical arguments
mainly don't apply.  I refer you to 2009:

http://lwn.net/Articles/322973/

ext4 is the new XFS

ext4 has always been about catching up to XFS.  XFS is older, has had
data loss problems stemming from exactly how it commits data to
disk, and has had time to fix those bugs.  Back in 2009, ext4 hadn't
had time to fix some of those same data loss bugs that it inherited
when it started being ext3-pretending-to-be-XFS; today, of course,
much of that is gone.

Argument 1:  Why use pretend XFS?  Really, ext4 was designed for two
things:  semi-compatibility with ext3 (an easy upgrade path, re-use of
file system tools like tune2fs and debugfs and fsck) and pretending to
be XFS.  ext4's killer features are extents--which break the ability
to mount ext4 as ext3 anymore--and the ability to convert to a lesser
ext4 (without extents) by mounting ext3 as ext4.  ext4 also has better
performance because it added "delayed allocation":  it allocates
space instead of specific blocks until it's ready to flush, and so
it knows that 500 blocks will be in use but hasn't committed to WHICH
500 until it flushes--a feature XFS has had forever.

New installations given ext4 will be incompatible with ext3 and
ext2.  They won't need an upgrade path.  Thus there is no strong
argument for ext4.


There's also not much of an argument AGAINST ext4.  Yes, XFS is more
mature; but ext4 is plenty mature by now, or the kernel developers are
woefully and terminally too stupid to be allowed to write code--face
it, ext4 has had a LOT of attention and should be at least as mature
as any other file system implementation in Linux, if not more so.  That
doesn't mean it doesn't have bugs; just that we can't ascribe any such
bugs to it being too young and/or too much of a fringe FS (e.g.
ReiserFS, JFS) to have had the attention needed to shape it into
production quality.


XFS and JFS both have one thing over ext4:  dynamic inode allocation.
This is of course a corner-case feature, because who runs out of
inodes?  (I have, once or twice.)

ext4 has something big over XFS though:  you can shrink an ext4
partition, and nobody has done the legwork to make XFS shrinkable.
Shrinking is a common case for people who want to get away from a
filesystem.  Say you have 750GB used on 1TB:  shrink the file system,
create a 250GB file system in the freed space, copy a third of the data
over, shrink the original again, move the new partition down and grow
it, copy more data in, and so on--shrink, create, copy, remove the
original file system, reformat as your target FS, and keep shuffling
data and resizing until the new FS fills the disk.
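
For the ext4 side, the shrink itself is the easy part.  A rough sketch,
assuming an unmounted filesystem on a hypothetical /dev/sda1, run from a
live environment:

e2fsck -f /dev/sda1                  # resize2fs refuses to shrink an unchecked FS
resize2fs /dev/sda1 700G             # shrink the filesystem first...
parted /dev/sda resizepart 1 703GiB  # ...then the partition; the argument is the
                                     # new END position, so leave slack above the FS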

You can't do that with XFS, because you can't shrink XFS; you would
have to use thin provisioning (dm-thinp) from the start.  Then you
could create a new file system in the same space as the old one, move
some data into it, fstrim the old XFS, then move more data, and repeat
until all the data is moved, then drop the XFS provision.  Nobody
knows how to use thin provisioning, and it's ridiculous to manage; if
a desktop user wants to switch from one file system to another, they're
more likely to resize and copy partitions repeatedly for 3 or 4 days,
or use an external drive.  Thus XFS not having shrink is big.
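
For the record, the dm-thinp route looks roughly like this with the LVM
tooling (volume group and pool names hypothetical; a sketch, not a recipe):

lvcreate -L 900G -T vg0/pool           # create a thin pool
lvcreate -V 1T -T vg0/pool -n oldfs    # thin volume that held the XFS from the start
lvcreate -V 1T -T vg0/pool -n newfs    # thin volume for the new filesystem
mkfs.ext4 /dev/vg0/newfs
# ...move data over in batches, and after each batch hand the freed
# blocks back to the pool so the next batch has room:
fstrim /mnt/oldfs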


Now if XFS had shrink, I would argue that XFS is probably better than
chasing ext4 and zfs... er, btrfs... for the desktop; while btrfs is
better for the server, or will be when it's production ready.  I won't
argue for JFS because it's very little used, although IBM wrote it and we
all know IBM is universally horrible at software, so we could assume
JFS doesn't work at all outside somebody's fantasy.

As it stands, 

[RFC] zswap for Precise, with script

2012-05-17 Thread John Moser
Any thoughts on this?  I wrote it on a whim after installing an SSD and 
completely disabling all swap.  Haven't checked to see if Ubuntu 
supports hibernate to file yet (creating a hibernation file on demand 
would be optimal for me...)


This works with kernel 3.2.0 ... 3.0 used "num_devices" as the parameter
for zram, while 2.6.32 used "num" (I think).  They keep changing the
parameter name!


This init script (sorry, I have no clue how to write an upstart job) 
will load zram, set one of its devices to a given size, create swap on 
it, and turn that swap on.  It'll also deactivate the swap and free the 
associated RAM.


Accepted sizes are "half" and "quarter" of installed RAM, as reported by
MemTotal in /proc/meminfo; any size in bytes; or suffixed K, M, G sizes.



/etc/default/zswap can contain the following variables:

# Set to 1 to disable
ZSWAP_DISABLED=0

# Number of /dev/zramX devices
ZRAM_NUM_DEVICES=4

# Swap device is /dev/$ZSWAP_DEVICE
ZSWAP_DEVICE=zram0

# Size
ZSWAP_SIZE=quarter
#! /bin/sh
### BEGIN INIT INFO
# Provides:  zswap
# Required-Start:$syslog
# Required-Stop: $syslog
# Default-Start: 2 3 4 5
# Default-Stop:  0 1 6
# Short-Description:   Activate compressed swap
### END INIT INFO
#
#
# Version:  1.0 john.r.mo...@gmail.com
#

#
# It's also possible to resize the zswap by device hopping, i.e.
# making a new one on /dev/zram1, swapon /dev/zram1, and then
# swapoff /dev/zram0.  This would be CPU intensive...
#

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DESC=Sets up compressed swap
NAME=zswap
SCRIPTNAME=/etc/init.d/$NAME

# Default value
ZRAM_NUM_DEVICES=4
ZSWAP_DEVICE=zram0
ZSWAP_SIZE=quarter

# Read config file if it is present.
if [ -r /etc/default/$NAME ]; then
. /etc/default/$NAME
fi

# Gracefully exit if disabled
[ "$ZSWAP_DISABLED" = "1" ] && exit 0

is_numeric() {
    echo "$@" | grep -q -v '[^0-9]'
}

#Takes:
# zswap_to_bytes 524288
# zswap_to_bytes 512K
# zswap_to_bytes 128M
# zswap_to_bytes 2G
# otherwise formed parameters are errors.
zswap_to_bytes() {
    MODIFIER=${1#${1%?}}   # last character (the suffix, if any)
    ZR_SIZE=${1%?}         # everything but the last character

    # Numeric:  just pass as-is
    if is_numeric "${1}"; then
        echo "${1}"
        return 0
    fi

    # If the part before the suffix isn't a number, it's an error
    if ! is_numeric "${ZR_SIZE}"; then
        echo 0
        return 1
    fi

    if [ "${MODIFIER}" = "K" ]; then
        ZR_SIZE=$(( ZR_SIZE * 1024 ))
    elif [ "${MODIFIER}" = "M" ]; then
        ZR_SIZE=$(( ZR_SIZE * 1024 * 1024 ))
    elif [ "${MODIFIER}" = "G" ]; then
        ZR_SIZE=$(( ZR_SIZE * 1024 * 1024 * 1024 ))
    else
        # unrecognized suffix
        echo 0
        return 1
    fi
    echo $ZR_SIZE
}

#
#   Function that starts the daemon/service.
#
d_start() {
    ZSWAP_LOADED=0
    swapon -s | cut -f 1 | grep -q "/dev/${ZSWAP_DEVICE}" && ZSWAP_LOADED=1
    if [ ${ZSWAP_LOADED} -eq 1 ]; then
        echo "zswap already in use"
        return 1
    fi
    # this parameter name keeps changing with new kernel versions
    modprobe zram zram_num_devices=${ZRAM_NUM_DEVICES}

    # Does it now exist?
    if [ ! -b /dev/${ZSWAP_DEVICE} ]; then
        echo "/dev/${ZSWAP_DEVICE} does not exist!"
        return 1
    fi

    # "half" or "quarter" size
    if [ "${ZSWAP_SIZE}" = "half" -o "${ZSWAP_SIZE}" = "quarter" ]; then
        # MemTotal is in kB, so half is *512 bytes and quarter is *256
        MEM_SZ=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
        if [ "${ZSWAP_SIZE}" = "half" ]; then
            ZSWAP_SIZE=$(( MEM_SZ * 512 ))
        else
            ZSWAP_SIZE=$(( MEM_SZ * 256 ))
        fi
    else
        ZSWAP_SIZE=$( zswap_to_bytes "$ZSWAP_SIZE" )
        if [ "${ZSWAP_SIZE}" = "0" ]; then
            echo "Invalid ZSWAP_SIZE"
            return 1
        fi
    fi
    echo $ZSWAP_SIZE > /sys/block/${ZSWAP_DEVICE}/disksize
    mkswap /dev/${ZSWAP_DEVICE}
    swapon /dev/${ZSWAP_DEVICE}
}

#
#   Function that stops the daemon/service.
#
d_stop() {
    ZSWAP_LOADED=0
    swapon -s | cut -f 1 | grep -q "/dev/${ZSWAP_DEVICE}" && ZSWAP_LOADED=1
    if [ ${ZSWAP_LOADED} != 1 ]; then
        echo "zswap not in use"
        return 1
    fi
    if ! swapoff /dev/${ZSWAP_DEVICE}; then
        echo "Cannot de-activate compressed swap /dev/${ZSWAP_DEVICE}!"
        return 1
    fi

    # Double check this
    ZSWAP_LOADED=0
    swapon -s | cut -f 1 | grep -q "/dev/${ZSWAP_DEVICE}" && ZSWAP_LOADED=1
    if [ ${ZSWAP_LOADED} = 1 ]; then
        echo "zswap /dev/${ZSWAP_DEVICE} did not de-activate!"
        return 1
    fi

    # free the block device's memory
    echo 1 > /sys/block/${ZSWAP_DEVICE}/reset
    modprobe -r zram
}


case "$1" in
  start)


Re: Linux (or Ubuntu specific) tools to measure number of page faults

2012-05-02 Thread John Moser

TIME="%Uuser %Ssystem %Eelapsed %PCPU (%Xtext+%Ddata %Mmax)k %Iinputs+%Ooutputs (%Fmajor+%Rminor)pagefaults %Wswaps" time ls
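
(GNU time's -f option takes the same format string directly; note this is
/usr/bin/time, not the shell builtin.  %F is major faults, %R minor:

/usr/bin/time -f '%F major + %R minor pagefaults, %e s elapsed' ./a.out

which is about as close to the pfstat one-liner as you get out of the box.)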


On 05/02/2012 10:08 PM, Alfred Zhong wrote:

Thank you all so much!

On Wed, May 2, 2012 at 5:47 AM, Colin Ian King colin.k...@canonical.com wrote:

On 01/05/12 02:53, Alfred Zhong wrote:

Dear Ubuntu Developers, especially Kernel Hackers,


This may be a stupid question, please excuse my ignorance.

I am doing a project on Linux scheduler that trying to minimize
number
of page faults.

I finished the algorithm implementation and I need to measure the
effect. I am wondering if Linux provides tools to record number
of page
fault happened during the whole execution process?

Basically, I want something like $ pfstat ./a.out page faults: 3
Execution Time: 1003 ms

Is there such a tool? I want to make sure before deciding to
write one
by myself, which will be a lot of work...

Thanks a lot!

Alfred

There are well defined APIs for collecting this kind of data, for
example you can collect the rusage info for an exiting child process
using wait3() or wait4().

References:
   man 2 wait3
   man 2 rusage







--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Tor application-firewall support

2012-04-24 Thread John Moser


On 04/24/2012 08:49 AM, Paul Campbell wrote:

There's been some discussion on this mailing list about
application-firewalls, and I wanted to say a word about Ubuntu's
inability to filter internet connections at the application-level.


It's doable, just not pretty.


I work as a freelance journalist. On many occasions I recommend the use
of Tor to sources in middle eastern and southeast Asian countries. For
their own safety, they need an anonymous way to upload things to the
internet and in general to communicate online.


Immediately assuming you've got the technical profile of a ZDNet columnist.



When needing to use Tor, the source will activate the firewall
software's user-created Tor Profile and then start a Tor browsing
session. When finished browsing, the source will close Tor and change
the firewall settings from the Tor Profile back to the default profile
which in general allows all applications to connect to the internet.
This setup ensures that no other applications accidentally connect to
the internet during an active Tor session and reveal the source's true
IP address.



Vacuous.

A connection from your IP address doesn't reveal your source address.
The source address of your computer is stamped on every TOR packet:
it's possible to determine that you're using TOR regardless.
Blocking other connections unrelated to TOR won't hide what you're doing
under TOR; and having other connections (say, to your e-mail, IRC, P2P,
non-sensitive Web sites, etc.) doesn't jeopardize the secrecy of your
TOR connection.


Aside, has anyone considered that actively aiding a sovereign nation's 
population in accessing materials restricted from the general 
population's view is an active attack on that nation's procedurally 
declared national security, and a direct act of war?  Not defending 
tyranny, just saying:  you are committing an act of war.  If we have 
extradition treaties with these people, it's perfectly reasonable for 
you to be arrested and shipped over there; and if our government refuses 
to do so, then the logical response in kind is for them to start bombing 
our soil.


Some things are worth getting bloody for, and some things carry the
implications but in practice those implications never pan out.  You
probably won't get extradited, and nobody is going to start lobbing nukes
just because of people helping crack the Great Arab Firewall.  They
could, though; it's actually a reasonable response.





Sincerely,

Paul Campbell






--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: How to install Precise without getting screwed?

2012-04-09 Thread John Moser

On 04/08/2012 11:14 PM, Dane Mutters wrote:


John,


So, while I'm, in fact, all /for /speaking bluntly, I also see the 
quandary that speaking too bluntly produces when being wrong (for 
the owners of a work) would mean that the months they spent on a 
particular project would all be for nothing, should they admit that 
they were actually wrong.


All things have a balance.  Direct personal attacks are less useful than
attacks on a particular feature you don't like; attacks by proxy are
also more useful than direct personal attacks ("I don't know what idiot
came up with this"... that idiot is somewhere, but he's at least able to
shuffle back into the crowd and hide...).  Directly grabbing the
developer in question and giving them a severe public dressing down is
just not constructive--let's ignore the issue and lob personal attacks
instead now, eh?  (Thorough dressings down are for the rare situation
where the person in question is a severely destructive idiot--this
doesn't happen much, aside from that one coworker we've all had that
gets paid to create problems for everyone else.)


Either way, getting *too* uncivil is a bad thing.  Strong language can
be very useful in some forums; but in forums where it's strongly
inappropriate, you should pick your tone well enough to have the same
effect.  Railing on something by proxy with a glancing blow may be
overstepping the bounds of civility, or it might be a needed slap
alongside the head for someone; continuing to ignore the feature itself
and continuously using it purely as a proxy to insult someone is just
malicious and useless.


We don't want to degenerate into a forum of continuous flame wars in any 
case; but the truth is the occasional burn serves to remind us that fire 
is hot and we should really pay attention to what we're doing.  While 
you don't want to burn your house down, you also really don't want to 
freeze to death.


That all said, let's keep it civil.  Or at least let's go for a farce.


Plus it's fun to read people speaking frankly, though if you spoke
like a Franc I guess you'd have to use a lot more accents and
apostrophes.


Well said.  ;-)


If only brevity was my strong point.

--Dane


-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: cpufreqd as standard install?

2012-03-03 Thread John Moser

On 03/03/2012 12:13 AM, Phillip Susi wrote:

On 02/29/2012 04:40 PM, John Moser wrote:

At full load (encoding a video), it eventually reaches 80C and the
system shuts down.


It sounds like you have some broken hardware.  The stock heatsink and 
fan are designed to keep the cpu from overheating under full load at 
the design frequency and voltage.  You might want to verify that your 
motherboard is driving the cpu at the correct frequency and voltage.




Possibly.

The only other use case I can think of is when ambient temperature is 
hot.  Remember server rooms use air conditioning; I did find that for a 
while my machine would quickly overheat if the room temperature was 
above 79F, and so kept the room at 75F.  The heat sink was completely 
clogged with dust at the time, though, which is why I recently cleaned 
and inspected it and checked all the fan speed monitors and motherboard 
settings to make sure everything was running as appropriate.


In any case, if the A/C goes down in a server room, it would be nice to
have CPU frequency scaling kick in and take the clock speed
down before the chip overheats.  Modern servers--for example, the new
revisions of the Dell PowerEdge II and III from 4 or 5 years ago--lean
on their low-power capabilities, and modern data centers use a
centralized DC converter and high-voltage (220V) DC mains in the data
center to reduce power waste, because of the high cost of electricity.
It's extremely likely that said servers can clock low enough not to
overheat without air conditioning, which is an emergency situation.


Of course, the side benefit of not overheating desktops with inadequate 
cooling or faulty motherboard behavior is simply a bonus.  Still, I 
believe in fault tolerance.



I currently have cpufreqd configured to clock to 1.8GHz at 73C, and move
to the ondemand governor at 70C.


This need for manual configuring is a good reason why it is not a 
candidate for standard install.




I've attached a configuration that generically uses sensors (i.e. if the 
program 'sensors' gives useful output, this works).  It's just one core 
though (a multi-core system reads the same temperature for them all, as 
it's per-CPU); you can easily automatically generate this.
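
In rough shape, it looks like this (cpufreqd.conf from memory; the
sensors-plugin key name and syntax may differ by version, so treat this
as a sketch rather than the attached file):

[Profile]
name=full
minfreq=0%
maxfreq=100%
policy=ondemand
[/Profile]

[Profile]
name=cool
minfreq=0%
maxfreq=1800000
policy=ondemand
[/Profile]

[Rule]
name=overheating
sensor=temp1:73-100   # sensors-plugin match; key name is from memory
profile=cool
[/Rule]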


Mind you on the topic of automatic generation, 80C is a hard limit.  It 
just is.  My machine reports (through sensors) +95.0C as Critical, but 
my BIOS shuts down the system at +80.0C immediately.  Silicon physically 
does not tolerate temperatures above 80.0C well at all; if a chip claims 
it can run at 95.0C it's lying.  Even SOD-CMOS doesn't tolerate those 
temperatures.


As well, again, you could write some generic profiles that detect when
the system is running on battery (UPS, laptop) and make appropriate
adjustments based on how much battery life is left.



At 73C, the system switches from 1.9GHz to 1.8GHz. Ten seconds later,
it's at 70C and switches back to 1.9GHz. 41 seconds after that, it
reaches 73C again and switches to 1.8GHz.

That means at stock frequency (1.9GHz) with stock cooling equipment, the
CPU overheats under full load. Clocked 0.1GHz slower than its rated
speed, it rapidly cools. Which is ridiculous; who designed this thing?


This sounds like your motherboard is overvolting the cpu in that 1.9 
GHz stepping.




Possibly, but the settings are all default, nothing set to overclock (it 
has jumper free overclocking configuration, but the option Standard is 
default for clock rate and voltage settings, which I assume the CPU 
supplies).


Basically the argument here is between "supply fault tolerance" and
"well, your motherboard is [old|poorly designed], so buy a new one".
That's an excellent argument for hard drives (I have, in fact, suggested
in the past that Ubuntu monitor hard disks for behavior indicative of
dying drives--SMART errors, IDE RESET commands because the drive hangs,
etc.--and begin annoying the user with messages about the SEVERE risk of
extreme data loss if he doesn't back up his data), but really, if my
mobo/CPU is aging and the CPU runs a little hot, I'm not going to cry
when the CPU suddenly burns out and my machine shuts down.  I'll be
confused, annoyed, but I'll buy a new one--I might buy an entire new
computer, unaware that just my CPU is broken, and shove the hard drive
in there.  So there's no harm in allowing the user's hardware to go
ahead and burn itself out, if you think that's what's going on here.


By all means, that doesn't mean you can't have a diagnostic center
somewhere that the user can review to see the whole collection.
"Ethernet:  Lots of garbage [Possibly:  faulty switch, faulty NIC,
another computer with a chattering NIC spewing packets]."  "CPU:
Overheats under high CPU load [Possibly:  dust-clogged CPU heat sink,
failing CPU fan, overclocking, failing CPU, failing motherboard voltage
regulators, buggy motherboard BIOS]."  "/!\ Hard drive:  Freezes and
needs IDE Resets [Possibly:  Dying

Re: zram swap on Desktop

2012-03-03 Thread John Moser

On 03/03/2012 12:05 AM, Phillip Susi wrote:

On 02/27/2012 08:58 PM, John Moser wrote:

I believe that swap space is only actually freed when the memory it is 
backing is freed.  In other words, if the process frees the memory, 
the swap is freed, but when the page is read back in from swap, it is 
left in swap so that the page can be discarded again in the future 
without having to write it back out again.  This can lead to some 
wasted memory by having pages still in zswap that have also been moved 
back into regular ram.




This may be true.  Also, zswap seems not to bother compacting until it's
being added to, so it bloats and then doesn't shrink much.  For example,
you can put 500MB into zswap at a 29% compression ratio and 30% total
memory usage, free 200MB, and have it stay at a 29% compression ratio with
the actual data 70MB smaller ... but at around 40% total memory usage,
because you only saved 20MB of RAM, as fragmentation left a lot of empty
space in the zswap in pages that also contain in-use compressed data.
It doesn't compact in the background.



- Desktops may benefit by eschewing physical swap for RAM
* But this breaks suspend-to-disk; then again, so does everything:
+ Who has a 16GB swap partition? Many people have 4GB, 8GB, 16GB RAM
+ The moment RAM + SWAP - CACHE > TOTAL_SWAP, suspend to disk breaks


Cache can be discarded at hibernate time, so you only need RAM + SWAP. 
Also people generally don't go to hibernate while that much ram is in 
use, and almost never have much swap used.  Also, I *think* I saw a 
patch somewhere recently to address this by avoiding the zswap device 
for hibernation and falling back to other swaps instead.




Well, I mean, I shut off my VMs and all.  A quick glance and some math at
top tells me right now I'm using 2.3GB ... and there's nothing I'd want
to close if I decided to hibernate my computer for the night.  Closing
down programs sort of defeats the purpose.  Maybe LibreOffice.
It looks like CleanCache/CompCache is a better solution since it 
avoids the step of emulating a block device.





zcache on cleancache is just for compressing page cache (file-backed),
not swap (anonymous).  zcache on frontswap is the solution for
compressing swap without a block device, also written by the zram guy;
not sure how to configure it, though.  CleanCache and zcache are in 3.2
staging; frontswap is not.  For what it's worth, I'm running both zram
swap and CleanCache zcache in tandem; one does not affect the other.
I've tested this running Ubuntu 11.10 at 288MB of RAM, which is painful;
it's crippling MUCH faster without zcache enabled.


http://i.imgur.com/aAeSE.png

All of the above is swap on zram, no disk backed device.  With zcache 
enabled and two CPUs (kswapd uses 60%-70% CPU like this!) I can get this 
far with about 20-25MB of page cache, and then I can still raise and 
lower Firefox and open a new tab in gnome-terminal to run killall on 
Firefox (attempting to close Firefox was taking too long for dialog 
boxes to load and draw--it annoyed me).


I'm pretty sure zram will be superseded by zcache on frontswap.  zcache
is a tmem backend; frontswap and CleanCache are tmem frontends.  Any
backend can be used on any frontend, so when (if) the frontswap frontend
goes into mainline, zcache will load onto that.  zcache is zram: it uses
zram's xvmalloc when running on frontswap (it uses a different allocator
for page cache) and everything; it's even written by the same guy.  zcache
is just zram ported to tmem, which makes it both the same and
separate--it is zram, but it's not zram.  As tmem looks like the way the
kernel is moving in the future, zram will probably go away--the
appropriate compressed in-memory file system is tmpfs with zswap, as RAM
used to back tmpfs can be swapped, and thus zcache and zram will both act
to compress tmpfs, so zram's usefulness as a block device in RAM that
can house a compressed file system is limited.


Of course, that theory then raises the question:  what about when you
don't have swap?  Does the kernel make its swapping decisions and then
ask frontswap if it's got something to do with this memory when you don't
actually have any swap space?  (i.e. attempting to swap without swap)


Anyway, in the end I feel the situation boils down to this:  you get an
Intel CPU these days and it comes with 6 cores.  What do you do with 6
cores?  You run some pretty extreme applications.  What does your
average desktop do with 6 cores with hyperthreading enabled, running
12-way SMP?  It uses 1-2 cores ... what do you do with the other 10?  Run
compression/decompression and compact fragmented compressed swap, what
else?  On something like the XO laptop, where RAM is limited, it's
simply a necessity*.  On a desktop with a smaller RAM space and a slower
CPU, it's a livable trade-off that does enhance performance some (it's a
godsend on a dual core with just 512MB RAM trying to run Unity).




*I believe the use is different

cpufreqd as standard install?

2012-02-29 Thread John Moser

Has anyone considered cpufreqd in standard install?

I have a 1.9GHz Athlon 64 X-2 with stock heat sink (recently cleaned and 
inspected) and fan (operating at 3200RPM).  Its clock rates are 1.9GHz, 
1.8GHz, and 1.0GHz.


At full load (encoding a video), it eventually reaches 80C and the 
system shuts down.


I currently have cpufreqd configured to clock to 1.8GHz at 73C, and move 
to the ondemand governor at 70C.


At 73C, the system switches from 1.9GHz to 1.8GHz.  Ten seconds later, 
it's at 70C and switches back to 1.9GHz.  41 seconds after that, it 
reaches 73C again and switches to 1.8GHz.


That means at stock frequency (1.9GHz) with stock cooling equipment, the 
CPU overheats under full load.  Clocked 0.1GHz slower than its rated 
speed, it rapidly cools.  Which is ridiculous; who designed this thing?



Besides this, it would be possible to enter power-save mode on laptops
and UPS-backed systems based on battery life remaining.


--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


zram swap on Desktop

2012-02-27 Thread John Moser


I've been toying with zram swap on the desktop, under Ubuntu Precise.  
It looks like a good candidate for a major feature in the next version; 
Precise is currently in feature freeze.  Yes, implementing this would 
involve just a single Upstart script; but it's a major change to the 
memory system in practice, thus a major feature.  I am not pushing this 
for implementation in Precise.
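
For the curious, the job itself could be as small as this (a sketch,
untested; the zram parameter name varies by kernel, as noted elsewhere):

# /etc/init/zram-swap.conf  (hypothetical name)
description "compressed swap on zram"
start on startup
task
script
    modprobe zram zram_num_devices=1
    # size the device to all of RAM; MemTotal is in kB
    echo $(awk '/MemTotal/ {print $2 * 1024}' /proc/meminfo) > /sys/block/zram0/disksize
    mkswap /dev/zram0
    swapon /dev/zram0
end script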


Looking for comments on whether I should spec this out and put it up on
Launchpad as a goal for Precise+1, and maybe additional testing if
anyone cares to.


The justification for this is as follows:

The zram device dynamically allocates RAM to store a block device in 
compressed blocks.  The device has a fixed size, but zram only allocates 
RAM when it needs it.  When the kernel frees swap, it now informs the 
underlying device; thus zram releases that RAM back to the system.  
Similarly, when a file system frees a block, it now also informs the 
underlying device; zram supports this.


Because of all this, you can feasibly have a swap device in RAM
the size of your entire RAM and nothing will happen until you start
swapping--you can have 2GB of RAM, a 2GB zram block device, mkswap on
it, and activate it.  Nothing will happen until it was time to start
swapping anyway, in which case it'll work.  That means zram swap devices
have no negative impact by design.  They could only have a negative
impact if they were slower than regular RAM (purpose-defeating) or if
the driver had bugs (obviously wrong and should be fixed).


All this basically means a number of things, in theory:

 - The LiveCD should activate a zram0 swap device immediately at boot, 
equivalent to the size of physical RAM.
   * Immediately at boot is feasible as soon as /sys and /dev are 
accessible.

   * tmpfs swaps, so tmpfs inherits compression!

 - Desktops may benefit by eschewing physical swap for RAM
   * But this breaks suspend-to-disk; then again, so does everything:
  + Who has a 16GB swap partition?  Many people have 4GB, 8GB, 16GB RAM
  + The moment RAM + SWAP - CACHE > TOTAL_SWAP, suspend to disk breaks

 - Servers may benefit greatly by eschewing physical swap for RAM
   * More RAM means an even bigger impact
   * Suspend to disk isn't important

Also for consideration is that Linux doesn't (to my knowledge)
differentiate between "fast" and "slow" swap and won't start moving
really old stuff out of zram0 and onto regular disk swap if you have
both.  This naturally means that the oldest, least-used stuff would tend
to float to zram0 if you had both.
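
One partial remedy:  swapon does take a priority, and the kernel fills
higher-priority swap first, so you can at least make zram the preferred
device even if old pages never migrate between the two:

swapon -p 100 /dev/zram0   # compressed RAM swap, filled first
swapon -p 10 /dev/sda5     # disk swap, used once zram0 is full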



ON TO THE TESTS!


The test conditions are as such:

 - Limited RAM to 2GB with mem=2G
 - Running a normal desktop:  Xchat, bitlbee, Thunderbird, Rhythmbox, 
Chromium

 - Memory pressure occasionally supplied by VirtualBox

In this case, I ran under 2GB with a 1.5GB swap as such:

$ sudo -i
# echo 1610612736 > /sys/block/zram0/disksize
# mkswap /dev/zram0
# swapon /dev/zram0
# swapoff /dev/sda5

My swap device is now solely zram0.  I have attached an analysis script
that analyzes /sys/block/zram0/.
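
The script itself is small; a minimal reconstruction that reads the
staging driver's sysfs counters (attribute names as of the 3.2-era
driver; it assumes some data is already swapped out):

#!/bin/sh
Z=/sys/block/zram0
ORIG=$(cat $Z/orig_data_size)     # uncompressed bytes stored
COMPR=$(cat $Z/compr_data_size)   # compressed bytes
USED=$(cat $Z/mem_used_total)     # RAM actually consumed, padding included
echo "Original:      $(( ORIG / 1048576 ))M"
echo "Compressed:    $(( COMPR / 1048576 ))M"
echo "Total mem use: $(( USED / 1048576 ))M"
echo "Saved:         $(( (ORIG - USED) / 1048576 ))M"
echo "Ratio:         $(( COMPR * 100 / ORIG ))%"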




CURSORY RESULTS:


===STAGE 1===
Condition:  VirtualBox running Windows XP and Ubuntu 11.04 from LiveCD
(two VMs).



Memory looks like:

Mem:   2051396k total,  1997028k used,54368k free,   92k buffers
Swap:  1572860k total,   945784k used,   627076k free,51576k cached

Every 2.0s: ./zram_stat.sh  Mon Feb 27 
17:09:40 2012


                Current   Predicted
Original:       922M      1536M
Compressed:     262M
Total mem use:  269M      448M
Saved:          652M      1087M
Ratio:          29%       (28%)

Physical RAM:   2003M
Effective RAM:  2656M     (3090M)

[Explanation of zram_stat.sh:  Original is the current data in zram0;
Compressed is the compressed data size; Total mem use is the total size
of zram0 in memory including fragmentation, padding, etc.; Saved is the
RAM saved by compression; Ratio is compressed over original size, i.e.
4M becoming 1M is 25%; Effective RAM = physical RAM + saved RAM.  The
Predicted column assumes the same Ratio and a full swap device on zram0.]



In this case, the machine was extremely slow due to constant hammering
of the disk.  I was quite aware of the reason:  not enough disk cache,
thus lots of re-reading libraries off disk.  I tried setting
/proc/sys/vm/swappiness to 100, but no luck.




===STAGE 2===

I closed the VirtualBox machine running the Ubuntu installer.

Mem:   2051396k total,  1614096k used,   437300k free,  192k buffers
Swap:  1572860k total,   860040k used,   712820k free,   138940k cached

                Current   Predicted
Original:       821M      1536M
Compressed:     240M
Total mem use:  267M      499M
Saved:          554M      1036M
Ratio:          32%       (29%)

Physical RAM:   2003M
Effective RAM:  2558M     (3040M)

Notice that about 100M left swap and about 475M got freed in actual RAM,
almost 90MB of which went directly to cache.  That's how much cache
pressure there was.


The machine is now quite 

Re: Ubuntu should move all binaries to /usr/bin/

2011-11-02 Thread John Moser
On Tue, Nov 1, 2011 at 3:01 PM, nick rundy nru...@hotmail.com wrote:
 I came to ubuntu from Windows. And one thing Windows does well is make it
 easy to find an executable file (i.e., it's in C:\Program Files\). Finding
 an executable file in Ubuntu is frustrating and lacks organization that makes
 sense to users. Fedora is considering a fix for this issue. I think Ubuntu
 should do the same.

ABSOLUTELY NOT.

System binaries for administrative and basic shell tools go into /bin
and /sbin.  Libraries go into /lib.  This is well-known, it is
documented, it is standard.  A system that cannot boot to the point of
mounting or giving a basic recovery console without /usr is not
functional.

We are not Windows.  We do not go breaking standards that are
well-defined and reasoned for a variety of flexible use cases because
we think it would be "better" in some invisible way.  You want to
find a binary?  Here, I've solved this problem for you, completely.
It's easy.  Do this:

luser$ which ls

luser$ which gnome-session

luser$ which synaptic

If it isn't in your path, then it's broken.  Something strange has
happened.  Yes, some applications (Mozilla...) do use a small shell
script that loads a binary from /usr/libexec/mozilla/bin/ and I find
this annoying and ill-designed.  That's part of what XULrunner was
supposed to fix:  a single Mozilla libs install to run all XUL apps.
I loudly proclaim that reusable code should not have one copy
installed as a separate library for each time it is reused, and that
this behavior is inconsistent and broken--that changing it will cause
the system to become nonfunctional means the brokenness isn't in where
we've installed it, but how the program was designed.

I wonder how many shell scripts will break when /bin/sh and /bin/bash
aren't there anymore.



 Here's a link to an article that talks about Fedora's idea:
 http://www.h-online.com/open/news/item/Fedora-considers-moving-all-binaries-to-usr-bin-1369642.html?view=print

 --
 Ubuntu-devel-discuss mailing list
 Ubuntu-devel-discuss@lists.ubuntu.com
 Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss



-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Secure attention Key: Login and GkSudo

2011-10-30 Thread John Moser
On Sun, Oct 30, 2011 at 9:21 AM, staticd
staticd.growthecomm...@gmail.com wrote:
 The Secure Attention Key (SAK) is a key combination captured/capturable only by
 the OS.
 It can be used to initiate authentication interfaces where the user is sure
 that the keys are being captured only by the OS.
 This feature is present on windows(Ctrl+Alt+Del) to initiate logon.

Enjoy the Kool-Aid.


 I think these two steps will help make ubuntu even more secure, and help
 prepare it for a large non technical userbase.

 What do you think?

"Windows NT is designed so that, unless system security is already
compromised in some other way, only the Winlogon process, a trusted
system process, can receive notification of this keystroke
combination.  This is because the kernel remembers the process ID of
the Winlogon process, and allows only that process to receive the
notification."

So says Wikipedia.

Interestingly, VMWare catches the sequence as well.

While it is true that the SAK will trigger a kernel event, it is also
true that the major method of bypass isn't going to be anything so
simple as hacking the log-in dialog or gksudo prompt.  No, that won't
work.

What you want to do is spoof the user with gksudo itself.  Try this:

 - Open a terminal
 - gksudo /usr/bin/ls
 - Examine the dialog box
 - Cancel without inputting password.
 - gksudo ls
 - Examine the dialog box
 - Cancel out

See a difference?

Now try adding $HOME/.system/ to your $PATH as the first member.  Put
a shell script called 'synaptic' into it:

#!/bin/sh
synaptic &
cp ~/.system/cfg `which gksudo`
chmod u=srwx,go=rx `which gksudo`


Now create a launcher that says "Synaptic" in the menu, to replace the
current Synaptic launcher.  Voilà!

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Secure attention Key: Login and GkSudo

2011-10-30 Thread John Moser
On Sun, Oct 30, 2011 at 9:37 AM, John Moser john.r.mo...@gmail.com wrote:

 #!/bin/sh
 synaptic &
 cp ~/.system/cfg `which gksudo`
 chmod u=srwx,go=rx `which gksudo`

Sorry, that would be '/usr/bin/synaptic &'

Of course.

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Ubuntu System Restore

2011-10-30 Thread John Moser
The simple way to create a restore point system ...

... is to mount / as an overlay FS, which you periodically merge (to
remove prior restore points), condense into a squashfs (to take a
point backup), or wipe (to restore to backup).  This of course means
/home should be its own partition.
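
A sketch of the mount, using the modern mainline overlayfs syntax (the
out-of-tree module of the day took slightly different options):

# read-only base plus a writable scratch layer; wiping the upper layer
# restores the snapshot
mount -t overlay overlay \
    -o lowerdir=/base,upperdir=/scratch/upper,workdir=/scratch/work /merged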

On Sun, Oct 30, 2011 at 4:40 PM, Bear Giles bgi...@coyotesong.com wrote:
 You need to either have a local repository or download from the internet
 again. I've used 'apt-mirror' in the past to maintain a local cache but that
 was when I was building local systems with a minimal Debian installer. I
 don't even know if the standard Ubuntu installer can load off a local cache.
 (I guess the process is to do the install without updates, change your
 sources.list files, then upgrade from the cache.)

 It's also worth remembering that the specific versions of your packages may
 not be available when you need to restore your system. This is usually a
 good thing since more recent versions have a shot at preventing whatever
 caused you to lose your system in the first place (e.g., closing
 vulnerabilities) but some people may need to restore the system exactly.
 On checksums - I checked my system and almost none of the conffiles have
 checksums. (In fact that may be against packaging standards - I would have
 to check.) That's a bummer since it means that there's no easy way of seeing
 what's changed unless you peek into the .deb file. There are some deb tools
 that can do this but since I can do it programmatically I usually just did
 that.
 The 'monster diff' is just a comment on the number of files involved. What I
 actually did was create two lists, one generated by walking the filesystem and
 the other generated by concatenating all of the *.list and *.md5sums
 metadata, and then comparing them. I did this programmatically but you could
 also create actual files, sort them, and then run a diff on them. IIRC I
 typically had over 70k files.
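
A sketch of that comparison using only the dpkg metadata already on disk
(standard dpkg paths; no package downloads needed):

# everything the package database thinks is installed
cat /var/lib/dpkg/info/*.list | sort -u > packaged.list
# everything actually present in the managed directories
find /bin /sbin /usr -type f | sort > ondisk.list
# files on disk that belong to no package
comm -13 packaged.list ondisk.list
# conffiles whose md5 no longer matches what the package shipped
dpkg-query -W -f='${Conffiles}\n' | awk 'NF==2 {print $2 "  " $1}' \
    | md5sum -c - 2>/dev/null | grep -v 'OK$'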
 On Fri, Oct 28, 2011 at 3:33 AM, Gaurav Saxena grvsaxena...@gmail.com
 wrote:

 Hello all

 On Fri, Oct 7, 2011 at 4:45 AM, Bear Giles bgi...@coyotesong.com wrote:

 I've written a few prototypes and this comes down to four issues. Some of
 the details below are debian/ubuntu-specific but the same concepts will
 apply to redhat.

 1. User data (/home) must be backed up explicitly. (ditto server data on
 servers).
 2. Packages should NOT be backed up. All you need is the package name and
 version. Reinstall from .deb and .rpm if necessary since this way you're
 sure that you never restore compromised files.

 But it might be possible that the package files are not available on the
 system. That means for all the packages installed the .deb files need to be
 downloaded from the internet for restore purpose ?

 3. Configuration data (/etc) must be backed up explicitly. This is tricky
 because backing up the entire directory will cause its own problems. Worse,
 some applications keep their configuration files elsewhere. The best
 solution I've found is to scan the package metadata to identify
 configuration files and to only save those with a different checksum than
 the standard file.

 Ok. Nice idea indeed, but is there a checksum associated with the files in
 the package? Or can that be calculated at the time of restore? What do you
 say?


 4. Local files. Ideally everyone would keep these files under /usr/local
 and /opt but that's rarely the case. The best solution I've found is to scan
 the debian package metadata and do a monster diff between what's on the
 filesystem under /bin, /sbin, /usr and (chunks of) /var with what's in the
 metadata.

 Could you suggest me some way of scanning the debian package metadata
 without actually downloading the packages? and how to this monster diff ?

 It's worth noting that the last item isn't that hard if you have a strict
 security policy that everything under those directories MUST be in a
 package. It's deleted without a second thought if it's not. You can still do
 everything you could before, you just need to create a local package for it.
 So what do you do with this? The best solution, which I haven't
 implemented yet, is to handle #2 and #3 with autogenerated packages. You set
 up one or more local packages that will install the right software and then
 overwrite the configuration files. You can fit everything, including
 original package archive, on a single DVD.

 Could you please tell some detail about autogenerated packages? Like, if
 we have a list of packages installed on the system, do we need to reinstall
 all those packages and remove the ones which were installed after the restore
 point?

 BTW Debian has a C++ interface to the package metadata. I've never used
 it - I find it easier to just scan the metadata directory myself. There's
 also hooks that will allow your application to be called at each step during
 a package installation or removal. You could, in theory, keep your snapshots
 current to the last minute that way.

 So 

QTstalker is 0.32, but 3 years ago 0.36 was released?

2011-10-08 Thread John Moser
qtstalker in the repos is way behind.  The new version tracks 
candlestick indicators.  :(


--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Very odd issue, dead spots on screen

2011-07-10 Thread John Moser
I can't click links or pictures or highlight text...

so I unmaximize Chrome, move it, and then click it.

Can't highlight in any other app either ... it's that rectangle of the
screen; it's like there's an invisible window overlaid there and I can't
click or right-click through it.

I don't know how to diagnose this.

-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Chromium vs Firefox?

2011-05-01 Thread John Moser
Has anyone yet brought up the potential to ship Chromium default rather 
than Firefox?  At this point it's more advanced methinks, with the only 
likely complaint being that you can't add NoScript or AdBlock+.  Ubuntu 
doesn't ship these default anyway; if you want those things, you can get 
Firefox yourself, as you likely already know what you're doing.


For the privacy discussion, see SRWare Iron as a potential source of 
ideas for changes to back-merge (or options to add).


--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Chromium vs Firefox?

2011-05-01 Thread John Moser
This has not been my experience.  Flash seems to crash a lot in 
chromium, but it doesn't take it down.  I've had Chromium blow out 
completely once, and once I've had every single page in it turn to Sad 
Browser.  But that was around Chromium 5.


Firefox has been doing better, but it only seems to handle a Flash crash 
once or twice:  after the first crash, if you reload a page with Flash, 
Flash will likely crash AGAIN very quickly and tear down the whole 
browser.  Firefox also tends to go down if you have too much crap going 
on, i.e. if you load a page that runs excessive scripts or brings in too 
many GIF images.  The whole UI will lag (Firefox 4 too), and often crash 
if it comes under too much load doing too many things (race 
conditions?).  I've never seen Firefox actually free memory by closing a 
tab.


Chromium has been a lot faster and a lot more stable for me.  I use 
Firefox 4 at work and Chromium at home, and I'm constantly restarting 
Firefox after it crashes.  I switched to Firefox 3 for a time, it's more 
stable but still crashes--a lot less than 4, but 2-3 times a week.  
Also, when one tab in Chromium is lagged down to the point of complete 
and total browser crawl, you can still switch to other tabs and use them 
like nothing is happening.


So eh.  What's unstable?


On 05/01/2011 10:54 AM, Alexandre Strube wrote:

Define more advanced.

It is also less stable.

On Sun, May 1, 2011 at 4:36 PM, John Moser john.r.mo...@gmail.com wrote:


Has anyone yet brought up the potential to ship Chromium default
rather than Firefox?  At this point it's more advanced methinks,
with the only likely complaint being that you can't add NoScript
or AdBlock+.  Ubuntu doesn't ship these default anyway; if you
want those things, you can get Firefox yourself, as you likely
already know what you're doing.

For the privacy discussion, see SRWare Iron as a potential source
of ideas for changes to back-merge (or options to add).

-- 
Ubuntu-devel-discuss mailing list

Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at:
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss




--
[]
Alexandre Strube
su...@ubuntu.com


-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Chromium vs Firefox?

2011-05-01 Thread John Moser

On 05/01/2011 01:28 PM, Jason Todd wrote:

Chromium/Chrome has a lot of problems that Firefox doesn't have.

The only substantial advantages that Chromium/Chrome has is its 
multi-process design (stability), it starts faster, and its nifty 
method of showing downloads at the bottom of the browser window. And 
when Firefox gets Electrolysis implemented the stability advantage of 
Chrome will be eliminated.




I like its smart form filling stuff too, but that's me.  (If I start 
typing my name, it'll give me options to pick sets of data I filled 
before, and try to fill in the whole form for me based on what I entered 
in other, completely different forms)



Here are some of the problems I have with Chrome/Chromium:
-lacks NoScript functionality 
http://code.google.com/p/chromium/issues/detail?id=54257


Not standard in Ubuntu; if you want that, you can install Firefox.
Already stated.


Also:  "Chromium does not have $FAVORITE_FIREFOX_EXTENSION!  WHY IS THERE
NO BRIEF FOR CHROMIUM!?!?"
-cannot handle MMS URL streams 
http://code.google.com/p/chromium/issues/detail?id=47154

No comment (don't care about this)
-can't close single window via keyboard 
http://code.google.com/p/chromium/issues/detail?id=80424#c0


I just tried CTRL+N, then CTRL+T to create multiple tabs in a new window.
CTRL+W closes a single tab, until you run out of tabs and close the
window; ALT+F4 just drops the whole window.  Your argument is invalid.


-Private/Incognito does not apply to all windows 
http://code.google.com/p/chromium/issues/detail?id=79689#c0


I thought these were supposed to apply to individual tabs?  I don't know 
about Incognito mode so I won't comment.


-cursor functions fail when gtkrc 2.0 tooltips are turned off 
http://code.google.com/p/chromium/issues/detail?id=77821#c0
-can't cycle thru dropdown list with TAB key 
http://code.google.com/p/chromium/issues/detail?id=77607#c0
-the dropdown list does not list bookmark occurrences as thoroughly as 
Firefox does (I can explain more thoroughly if needed)
-it is not possible to place a dropdown Bookmarks Button on the URL 
bar (users are forced to use crappy 3rd party options)

I have no clue about these things

-inability to customize/move Button placements on the URL bar
Wanting to customize the UI indicates to me that you are not used to the
UI being given to you, and thus it is not your favorite program; you will
probably want to install Opera instead of trying to make Firefox/Chrome
look like Opera.



-can't reopen closed tabs if in Incognito mode
-Firefox is faster on many benchmarks 
http://download.cnet.com/8301-2007_4-20047314-12.html


IE is faster on many benchmarks.  There is a whole class of arguments 
about benchmarks and why they don't work.  This is why graphics card 
reviewers pull out games and look at the FPS:  actual performance often 
doesn't correlate to the benchmarks at all.



-there is no print preview

Hadn't noticed, as I don't own a printer.

Print preview in Firefox ... I haven't used in ages, because it was 
always buggy somehow.  Like it would show me 30 pages and print half of 
one, and I couldn't understand why (this was on a certain web site that 
generated a transcript for a college).  But that's neither here nor there.


-there is no quick method of reviewing Recent Bookmarks without 
drilling down into windows/menus




I've forgotten what a bookmark is...

I like Chrome a lot. But it can't compete with Firefox as a fully 
capable and mature browser. It's better as a minimalist occasional use 
browser.




Dunno, I switched away from Firefox because Chromium was far more 
stable, faster (UI wise), and faster (rendering-wise).  It handles 
JavaScript well (unlike Dillo), it renders properly, and I like its new 
tab page over a blank tab.


Firefox 4 has that tab sorting thing, though, which I really wish was in 
Chromium.  Well, not really; but I use it when it's there.




 Date: Sun, 1 May 2011 10:36:45 -0400
 From: john.r.mo...@gmail.com
 To: ubuntu-devel-discuss@lists.ubuntu.com
 Subject: Chromium vs Firefox?

 Has anyone yet brought up the potential to ship Chromium default rather
 than Firefox? At this point it's more advanced methinks, with the only
 likely complaint being that you can't add NoScript or AdBlock+. Ubuntu
 doesn't ship these default anyway; if you want those things, you can 
get

 Firefox yourself, as you likely already know what you're doing.

 For the privacy discussion, see SRWare Iron as a potential source of
 ideas for changes to back-merge (or options to add).

 --
 Ubuntu-devel-discuss mailing list
 Ubuntu-devel-discuss@lists.ubuntu.com
 Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss


Re: Congrats on 11.04

2011-04-29 Thread John Moser
By contrast, I am struggling to get rid of this horrible Unity thing and get
Gnome Shell.

Ubuntu is pulling a Microsoft by using its clout to push one product and kill
another; they can make Unity default, but they've actively removed Gnome
Shell, to the tune of... well, the PPAs all say "THIS CAN BREAK YOUR
SYSTEM!!!"  Scariness.

So the lesson here is that rather than see Ubuntu users go to Gnome Shell,
they're making them leave; but that path is higher impedance, so the number of
users who will migrate off Ubuntu is a subset of those who would install
Gnome Shell.  Another subset is people like me, who now have 80% of the
desktop environment coming out of a PPA.

The Gnome developers are also upset at Canonical.  No idea why.

On Apr 29, 2011 9:01 PM, Chris Jones chrisjo...@comcen.com.au wrote:

Just a quick congrats on the 11.04 release. I was previously running Fedora
and openSUSE because I was angry with the previous state of Ubuntu. Yet the
11.04 release has made me return. Well done to all the developers and all
others involved to make such an awesome release possible.

Cheers and regards.


--
PHOTO RESOLUTIONS - Photo - Graphic - Web

C and L Jones - Proprietors

ABN: 98 317 740 240
WWW: http://photoresolutions.freehostia.com
@: chrisjo...@comcen.com.au or photoresoluti...@comcen.com.au

cjlinux...@gmail.com

Command lines and Linux terminals are my comfort zone!

OS: Ubuntu 11.04
System: Linux 2.6.38 x86_64
Desktop: Unity

OS: Windows XP
System:  x86
Desktop: Professional SP3

OS: FreeBSD 7.3
System: 7.3-RELEASE-p3 i386
Server: WebUI+putty



--
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at:
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss
-- 
Ubuntu-devel-discuss mailing list
Ubuntu-devel-discuss@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-discuss

