Re: Hoping to donate/sell a Talos II motherboard

2023-03-01 Thread Luke Kenneth Casson Leighton
On Wednesday, March 1, 2023,  wrote:
> March 1, 2023 5:11 AM, "Toshaan Bharvani | VanTosh" wrote:

>> Yes, please, I am interested.
>> I would use it for PowerEL, LibreBMC and LibreSOC.
>> All open source projects.
>> Is this just a board or also a CPU?
>
> It is just the motherboard.  :)

so someone would have to spend maybe an additional... USD... 1,000?
2,000? to get it into a useful state: a minimum of 128 GB of RAM,
preferably a lot more than that (in order to host multiple VMs),
plus SSDs / HDDs, plus at least a 1,000 watt power supply.


> donate to the university of Oregon, whose contact is Toshaan Bharvani.

ah no.

Two SEPARATE options:

* donate to University of Oregon, whose contact is Sameer Shende,
 and if they agree they can add it to the multiple POWER9
 systems which have already been available to FOSS Groups through
 the OpenPOWER Hub for a few years now.

* donate to Vantosh Ltd, whose contact is Toshaan Bharvani,
 who already also hosts POWER9 systems for FOSS Projects
 (Libre-SOC in particular), and who is the maintainer of
 the PowerEL distribution.

l.


-- 
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


Re: Hoping to donate/sell a Talos II motherboard

2023-02-28 Thread Luke Kenneth Casson Leighton
On Tuesday, February 28, 2023,  wrote:
> Hello you fabulous developers!
>
> My friend has a spare Talos II motherboard that is currently sitting in
> his house in Indiana USA collecting dust.
>
> https://www.raptorcs.com/TALOSII/
>
> I have convinced him to donate/sell it to an open source project or
> developer.
>
> I reached out to Richard Stallman, and he agreed to take the board.  I am
> certain that the FSF would put it to good use.  My friend and I have not
> yet decided, to whom we will give the motherboard.  Is it possible that I
> could give it to someone or project, such that all parties here would
> benefit?

i am reasonably certain that Toshaan Bharvani would be
prepared to do that although he would need to speak for
himself.

the other option would be to donate it to the University of
Oregon who already have POWER9 systems that are accessible
to FOSS projects via the "OpenPOWER Hub".  cc'ing Sameer
as well.

(in case that wasn't clear: FOSS projects can *already*, right
now, apply for access to POWER9 systems, do i have that right,
Sameer?)

> Is there any project or developer here that would be willing to take this
> motherboard and create virtual machines that other projects could have
> access to?
>
> Thoughts?
>
> Thanks,
>
> Joshua Branson
> FOSS enthusiast
> https://gnucode.me
>
>

-- 
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


Re: Re: Concerns about Security of packages in Debain OS and the Operating system itself.

2022-04-19 Thread Luke Kenneth Casson Leighton
> Do you have a publication of that analysis? I was thinking the same
> about the organization of Debian for some time but never did analysis
> or compared it to other distros.

i found it here: http://lkcl.net/reports/wot/ - it's dated 2017 (not a bad
guess, 4 years). please bear in mind that the primary reason for writing it
was to help a group that was (and still is) severely lacking in both technical
security understanding and security infrastructure within their distro.

as a group they genuinely believed that SSL would be beneficial in some
way. a leading gnunet developer on the list made one single comment and
then - knowing that the group was large and comprised largely of
non-security-conscious individuals, and that any further discussion
would be... unwise - declined to take part further.

naively, i tried my best to explain it (hence this document, which contains a
detailed appendix outlining why SSL is dangerous, as SSL was the primary
focus of the bikeshedding: "but it'll add an extra layer of security").

i was intending to document the examples of other Distros, but the
bikeshedding degenerated into verbally-abusive behaviour and i was
so shocked that i terminated further planned development of the document
(and left the group).

this has left some of the thoughts which i outlined in my post unpublished.
the general idea - and i would welcome contributions here
(http://lkcl.net/reports/wot/wot.tex - also see the Makefile in the same dir) -
was to add example Distros, explaining where they break down
because they break one (or more) links of the chain of integrity,
referring clearly to the relevant "Requirement" as a way to do so
(and then clarifying the requirements further, in an iterative process).

for example Ubuntu violates at least Requirement 11, because the
size of the group comprising the ring-of-trust is too small, and the
integrity of the group is compromised because they may be threatened
with salary reductions or loss of employment if they don't do what
the Corporation demands.  it sounds obvious once expressed, but
i can guarantee that it's not even remotely on the radar of the average
ubuntu user.

i do have to say that having a public document like this would go a
long way towards preventing some of the criticism that Debian receives
for "being slow to react", "being too complex" or "not secure enough".

i've had discussions with NixOS developers recently, who genuinely
believe that Debian is vulnerable and NixOS is better because, in their
words, "debian doesn't have reproducible builds."

rather embarrassingly i had to explain to them that the reason
why they're having an easy time of adding reproducible builds to
NixOS is that both debian and fedora originally did all the heavy
lifting, and have had reproducible builds for what... 8 years now?
those distros *paved the way*... oh, and then didn't really talk about it
or promote it.  hence NixOS developers genuinely believe that they
are "the world's first secure reproducible build distro".

explaining to them that relying on github and unverified unsigned
git checkins is a bad idea (no commits and no packages are GPG-signed
in NixOS) took multiple round-trips, spanning over a week.

> Also I like to add that reproducible builds are an excellent addition
> to the mechanisms you are describing.

very true: they'd be part of the integrity-checking, down to the binary
level.  interestingly (this from my Software Engineering training)
it'd be added to the section on Functional Specification, not
necessarily Requirements.  if added to Requirements it would
be worded something like:

"Other Maintainers should be able to verify the full integrity
 of a package by reproducing its contents from the original source"

the *implementation* of that - part of the Functional Specification -
would mention "reproducible builds" because that is *how* you
fulfil the Requirement.

i'd be delighted to receive a patch to the .tex file to add that:
please do also remember to add an appropriate Copyright notice
at the same time, should you choose to contribute.
http://lkcl.net/reports/wot/wot.tex

best,

l.


---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68



audacity has become spyware

2021-07-05 Thread Luke Kenneth Casson Leighton
https://yro.slashdot.org/story/21/07/05/2155212/open-source-audio-editor-audacity-has-become-spyware

---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68



Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-22 Thread Luke Kenneth Casson Leighton
On Friday, August 23, 2019, Karsten Merker  wrote:

>
> and decide for themselves who is displaying "violent hatred" on
> mailing lists and come to their own judgement about your
> allegations:


You've now violated the Debian Code of Conduct twice in under an hour.

https://www.debian.org/code_of_conduct

Karsten: I very deliberately made a conscious choice to respect the debian
devel list members by not poisoning the list with an extremely toxic
discussion.

I note that you chose to do so *instead* of saying "ah yes I see your
perspective, I see how the rewritten version was much less confrontational,
I will try to improve my communication in the future".

In other words, your intention, just like Ted's (he chose to ignore
information - deliberately or unintentionally - that I had provided,
and used confrontational language, *and you supported him in doing that*,
Karsten), is not to work together to resolve matters: it is to inflame them
and to poison this conversation.

Do you understand that that is what you have done?

Please can someone urgently step in and have a private word with Karsten
and Ted; because of their unreasonable approach they are making me feel
extremely unwelcome to contribute further to this discussion.

Debian's Code and the matching Diversity Statement are extremely simple:
the best that I have ever seen.  The combined documents request that people
assume good faith, that they work together to further the goals of the
Debian Project, and that people go out of their way to be inclusive of all
who wish to see Debian progress.

I trust that these violations are clear and will be taken seriously.

With many apologies to everyone else on debian-devel that the conversation
has been poisoned by hostile intentions.

L.



-- 
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-22 Thread Luke Kenneth Casson Leighton
On Thu, Aug 22, 2019 at 7:58 PM Karsten Merker  wrote:

> On Fri, Aug 23, 2019 at 01:49:57AM +0800, Luke Kenneth Casson Leighton wrote:
>
> > The last time that we spoke, Theo, some time around 2003 you informed me
> > that you were doing so very deliberately "to show everyone how stupid I
> > was".  It was on the linux kernel lists.  It was also very shockingly
> > unkind. I can see some signs that you are tryint to be deliberately
> > confrontational and I do not like it.
> >
> > As the Debian lists are covered by the Debian Conduct document, please read
> > it and review the talk that was given in Taipei (it had a panel of 5 people
> > including Steve McIntyre, if someone remembers it please do kindly post a
> > link).

[i found it: https://wiki.debian.org/AntiHarassment ]

> Luke, please reconsider what you wrote above.

ted has a history of being abusive, and of hiding it extremely well.  the
term for it is "intellectual bullying".  it's where someone treats
another person in a way that, to the majority of people, looks really,
*really* reasonable, but on close inspection you find signs that
they're not actually trying to work *with* people in the group
towards a solution: they're using the conversation as a way to make
themselves superior to another human being.

this is a form of "harassment" - an intellectual kind.


> The only person in
> this thread whom I perceive as being confrontational is you,
> while Ted has in my view been perfectly civil and completely
> non-confrontational in what he wrote here.

ah, karsten: yes, i recall your violent hatred from very recent
conversations on other lists.  i did make an effort to present you
with an opportunity to use the resources from the conflict resolution
network, www.crnhq.org, but i did not receive a response.

your response may seem reasonable to you, however you just
demonstrated - as you did on the other list - that you are willing to
"blame" another person and are willing to *support* others who have
engaged in intellectual bullying in the past (Ted Tso), and are
actively supportive of his efforts to try that here.

just as you were actively supportive of the ongoing recent and
persistent intellectual bullying on the other list.

i'm therefore contacting the anti-harassment team, above, to report
both you (Karsten), and also Ted Tso.  i appreciate that it's normally
just for events, however they are the people most likely to be in a
position to speak with you, privately, and also to advise on the best
course of action.

if you had responded on the other list, in a way that demonstrated a
willingness to resolve matters and work together, for the benefit of
everyone on that list, i would not be reporting you here.

if you had responded on *this* list with words to the effect, "did you
know, luke, that your words could be viewed as being confrontational?
to me, it genuinely looks like Ted is being perfectly civil.  could
you clarify, as it looks like i have made a mistake in interpreting
what you have written?  what did i miss?" i would not be reporting you
here.

can you see the difference between that paragraph, above, and what you
wrote?  do you notice that the rewritten words do not assume "bad
faith"?

l.



Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-22 Thread Luke Kenneth Casson Leighton
On Thursday, August 22, 2019, Theodore Y. Ts'o  wrote:

> On Wed, Aug 21, 2019 at 10:03:01AM +0100, Luke Kenneth Casson Leighton
> wrote:
> >
> > so i hope that list gives a bit more context as to how serious the
> > consequences of dropping 32 bit support really is.
>
> I very much doubt we are any where near "dropping 32-bit support".


Theo, you have not read the context. Sam specifically asked a question and I
answered it.


> There's a lot of "all or nothing thinking" in your argumentation
> style.


That would be incorrect, Theo. I have read the Debian Code of Conduct
document; please also read it.

Also please review the context properly.


> As Sam has said multiple times, what's much more likely is that the
> set of packages that can't be built on native packages on 32-bit
> platforms will grow over time.


Yes. Everyone is aware of that. It is why the conversation exists.

Why did you assume that I was not aware of this?

> The question is when will that
> actually *matter*?  There are many of these packages which no one
> would want to run on a 32-bit platform, especially if we're focusing
> on the embedded use case.


That would also be specifically missing the point, which I have mentioned
at least twice, namely that 32 bit Dual and Quad Core 1ghz and above
systems are perfectly capable of running a full desktop environment.

Obviously you don't run video editing or other computationally heavy tasks
on them, however many such so-called "embedded"-class processors were
specifically designed for Android tablets or IPTV, and consequently are not
only 3D-capable, they also have 1080p video playback.

So again: these are NOT pathetic little SINGLE CORE 8 to 10 year old
processors with 256 to 512MB of RAM (OMAP3 series, Allwinner A10), these
are 2 to 5 year old DUAL to QUAD Core systems, 2 to 4 GB RAM, 1 to 1.5ghz
and perfectly capable of booting to a full Desktop OS in 25 seconds or less.


The last time that we spoke, Theo, some time around 2003 you informed me
that you were doing so very deliberately "to show everyone how stupid I
was".  It was on the linux kernel lists.  It was also very shockingly
unkind. I can see some signs that you are trying to be deliberately
confrontational and I do not like it.

As the Debian lists are covered by the Debian Conduct document, please read
it and review the talk that was given in Taipei (it had a panel of 5 people
including Steve McIntyre, if someone remembers it please do kindly post a
link).

Thank you.

L.




-- 
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-21 Thread Luke Kenneth Casson Leighton
i remembered a couple more:

* the freescale iMX6 has a 19-year supply / long-term support (with
about another 10 years to go).  it's used in the bunnie huang "Novena
Laptop" and can take up to 4GB of RAM.  processor core: *32-bit* ARM
Cortex A9, in 1, 2 and 4-core SMP arrangements.

* the Zynq 7000 series of FPGAs.  they're typically supplied with 1GB
RAM on developer kits (and sometimes with an SODIMM socket), and are
extremely useful and powerful.  processor core: *32-bit* Dual-Core ARM
Cortex A9 @ 800mhz.

these (as well as the other 32-bit SBCs i mentioned, particularly the
ones that work with 2GB of RAM) are not "shrinking violets".  they're
perfectly capable of running a full desktop GUI, Libre Office, web
browsers, Gimp - everything that the "average user" needs.

the Allwinner A20, Allwinner R40, Freescale iMX6 - these even have
SATA on-board!  they're more than capable of having a HDD or SSD
connected to them, to create a *really* decent low-power desktop
system.

now, i don't _want_ to say that it's insulting to these systems to
relegate them to "embedded" distro status (buildroot and other
severely-limited distributions), but i don't know what other phrase to
use.  "lost opportunity", perhaps?  something along those lines.

so i hope that list gives a bit more context as to how serious the
consequences of dropping 32 bit support really are.

l.



Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-20 Thread Luke Kenneth Casson Leighton
On Tue, Aug 20, 2019 at 3:31 PM Sam Hartman  wrote:
>
> >>>>> "\Luke" == Luke Kenneth Casson Leighton  writes:
> Hi.
> First, thanks for working with you.
> I'm seeing a lot more depth into where you're coming from, and it is
> greatly appreciated.

likewise.

> I'd like to better understand the train wreck you see.

that 32-bit hardware is consigned to landfill.  debian has a far
higher impact (as a leader, due to the number of ports) than i think
anyone realises.  if debian says "we're dropping 32 bit hardware",
that's it, it's done.

[btw, i'm still running my web site off of a 2006 dual-core XEON,
because it was one of the extremely hard-to-get ultra-low-power ones
that, at idle, the entire system only used 4 watts at the plug].

> Eventually, Debian itself will drop 32-bit arches.

that's the nightmare trainwreck that i foresee.

that means that every person who has an original raspberry pi who
wants to run debian... can't.

every person who has a $30 to $90 32-bit SBC with a 32-bit ARM core
from AMLogic, Rockchip, Allwinner - landfill.

marvell openrd ultimate: landfill.

the highly efficient dual-core XEON that runs the email and web
service that i'm using to communicate: landfill.

ingenic's *entire product range* - based as it is on MIPS32 - landfill.

that's an entire company's product range that's going to be wiped out
because of an *assumption* that all hardware is going "64 bit"!

to give you some idea of how influential debian really is: one of
*the* most iconic processors that AMD, bless 'em, tried desperately
for about a decade to End-of-Life, was the AMD Geode LX800.   the
reason why it wouldn't die is because up until around 2013, *debian
still supported it* out-of-the-box.

and the reason why it was so well supported in the first place was:
the OLPC project [they still get over 10,000 software downloads a week
on the OLPC website, btw - 12 years beyond the expected lifetime of
the OLPC XO-1]

i installed debian back in 2007 on a First International Computers
(FIC) box with an AMD Geode LX800, for Earth University in Costa Rica.
over 10 years later they keep phoning up my friend, saying "what the
hell kind of voodoo magic did you put in this box??  we've had 10
years worth of failed computers in the library next to this thing, and
this damn tiny machine that only uses 5 watts *just won't die*"

:)

there's absolutely no chance of upgrading it, now.

the embedded world is something that people running x86_64 hardware
just are... completely unaware of.  additionally, the sheer
overwhelming package support and convenience of debian makes it the
"leader", no matter the "statistics" of other distros.  other distros
cover one, *maybe* two hardware ports: x86_64, and *maybe* arm64 if
we're lucky.

if debian gives up, that leaves people who are depending on them
_really_ in an extremely bad position.

and yes, i'm keenly aware that that's people who *aren't* paying
debian developers, nor are they paying the upstream developers.

maybe it will be a good thing, if 32-bit hardware support in debian is
dropped.  it would certainly get peoples' attention that they actually
f*g well should start paying software libre developers properly,
instead of constantly spongeing off of them, in a way that shellshock
and heartbleed really didn't grab people.

at least with shellshock, heartbleed etc. there was a software "fix".
dropping 32 bit hardware support, there *is* no software fix.

l.



Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-20 Thread Luke Kenneth Casson Leighton
On Tue, Aug 20, 2019 at 2:52 PM Sam Hartman  wrote:

> I think my concern about your approach is that you're trying to change
> how the entire world thinks.

that would be... how can i put it... an "incorrect" interpretation.  i
think globally - i always have.  i didn't start the NT Domains
Reverse-Engineering "because it would be fun", i did it because,
world-wide, i could see the harm that was being caused by the
polarisation between the Windows and UNIX worlds.

>  You're trying to convince everyone to be
> conservative in how much (virtual) memory they use.

not quite: i'm inviting people to become *aware of the consequences*
of *not* being conservative in how much (virtual) memory they use...
when the consequences of their focus on the task that is "today" and
is "right now", with "my resources and my development machine" are
extended to a global scale.

whether people listen or not is up to them.

> Except I think that a lot of people actually only do need to care about
> 64-bit environments with reasonable memory.  I think that will increase
> over time.
>
> I think that approaches that focus the cost of constrained environments
> onto places where we need constrained environments are actually better.
>
> There are cases where it's actually easier to write code assuming you
> have lots of virtual memory.

yes.  a *lot* easier.  LMDB for example simply will not work on files
that are larger than 4GB on a 32-bit system, because it memory-maps the
entire file, using shared-memory copy-on-write B+-Trees (just like
BTRFS).
...oops :)
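
a minimal sketch of the underlying arithmetic (not LMDB code - just an
illustration that mmap's length parameter is a size_t, with a made-up
"data.mdb" filename and a made-up 5 GiB file size):

/* sketch only: a single flat mapping cannot cover a file larger than
 * 4GB on a 32-bit (ILP32) system, because mmap's length is a size_t.
 * the filename and the 5 GiB size are made up for illustration. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <sys/mman.h>

int main(void)
{
    uint64_t file_size = 5ULL << 30;     /* pretend the DB file is 5 GiB */
    size_t map_len = (size_t)file_size;  /* truncates when size_t is 32-bit */

    if ((uint64_t)map_len != file_size) {
        fprintf(stderr, "can't map %llu bytes: size_t is only %zu bits\n",
                (unsigned long long)file_size, sizeof(size_t) * 8);
        return 1;
    }
    int fd = open("data.mdb", O_RDONLY);
    void *p = mmap(NULL, map_len, PROT_READ, MAP_SHARED, fd, 0);
    return p == MAP_FAILED ? 1 : 0;
}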

> Human time is one of our most precious
> resources.  It's reasonable for people to value their own time.  Even
> when people are aware of the tradeoffs, they may genuinely decide that
> being able to write code faster and that is conceptually simpler is the
> right choice for them.

indeed.  i do recognise this.  one of the first tasks that i was given
at university was to write a Matrix Multiply function that could
(hypothetically) extend well beyond the size of virtual memory (let
alone physical memory).

"vast matrix multiply" is known to be such a hard problem that you
just... do... not... try it.  you use a math library, and that's
really the end of the discussion!

there are several other computer science problems that fall into this
category.  one of them is, ironically (given how the discussion
started) linking.

i really wish Dr Stallman's algorithms had not been ripped out of ld.


>  And having a flat address space is often
> conceptually simpler than having what amounts to multiple types/levels
> of addressing.  In this sense, having an on-disk record store/database
> and indexing that and having a library to access it is just a complex
> addressing mechanism.
>
> We see this trade off all over the place as memory mapped databases
> compete

... such as LMDB...

> with more complex relational databases which compete with nosql
> databases which compete with sharded cloud databases that are spread
> across thousands of nodes.  There are trade offs involving complexity of
> code, time to write code, latency, overall throughput, consistency, etc.
>
> How much effort we go to support 32-bit architectures as our datasets
> (and building is just another dataset) grow is just the same trade offs
> in miniture.  And choosing to write code quickly is often the best
> answer.  It gets us code after all.

indeed.

i do get it - i did say.  i'm aware that software libre developers
aren't paid, so it's extremely challenging to expect any change - at
all.  they're certainly not paid by the manufacturers of the hardware
that their software actually *runs* on.

i just... it's frustrating for me to think ahead, projecting where
things are going (which i do all the time), and see the train wreck
that has a high probability of occurring.

l.



Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-20 Thread Luke Kenneth Casson Leighton
On Tue, Aug 20, 2019 at 1:17 PM Sam Hartman  wrote:

> I'd ask you to reconsider your argument style.

that's very reasonable, and appreciated the way that you put it.

> I'm particularly frustrated that you spent your entire reply moralizing
> and ignored the technical points I made.

ah: i really didn't (apologies for giving that impression).  i
mentioned that, earlier in the thread, cross-building had been
raised, and (if memory serves correctly) the build team had
already said it wasn't something that should be done lightly.

> As you point out there are challenges with cross building.

yes.  openembedded, as one of the longest-standing
cross-compiling-capable distros that has been able to target sub-16MB
systems as well as modern desktops for two decades, deals with it in a
technically amazing way, including:

* the option to over-ride autoconf with specially-prepared config.sub
/ config.guess files
* the ability to compile through a command-line-driven hosted native
compiler *inside qemu*
* many more "tricks" which i barely remember.

so i know it can be done... it's just that, historically, the efforts
completely overwhelmed the (small) team, as the number of systems,
options and flexibility that they had to keep track of far exceeded
their resources.

> I even agree with you that we cannot address these challenges and get to
> a point where we have confidence a large fraction of our software will
> cross-build successfully.

sigh.

> But we don't need to address a large fraction of the source packages.
> There are a relatively small fraction of the source packages that
> require more than 2G of RAM to build.

... at the moment.  given the lack of awareness of the
consequences of the general thinking - "i have a 64 bit system,
everyone else must have a 64 bit system, 32-bit must be on its last
legs, therefore i don't need to pay attention to it at all" - unless
there is a wider (world-wide) general awareness campaign, that number
is only going to go up, isn't it?


> Especially given that in the cases we care about we can (at least today)
> arrange to natively run both host and target binaries, I think we can
> approach limited cross-building in ways that  meet our needs.
> Examples include installing cross-compilers for arm64 targeting arm32
> into the arm32 build chroots when building arm32 on native arm64
> hardware.
> There are limitations to that we've discussed in the thread.

indeed.  and my (limited) torture-testing of ld showed that it really
doesn't work reliably (i.e. there are bugs in binutils that are
triggered by large binaries greater than 4GB being linked *on 64-bit
systems*).

it's a mess.

> Yes, there's work to be done with all the above.
> My personal belief is that the work I'm talking about is more tractable
> than your proposal to significantly change how we think about cross
> library linkage.

i forgot to say: i'm thinking ahead over the next 3-10 years,
projecting the current trends.


> And ultimately, if no one does the work, then we will lose the 32-bit
> architectures.

... and i have a thousand 32-bit systems that i am delivering on a
crowdfunding campaign, the majority of which would go directly into
landfill.

l.



Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-19 Thread Luke Kenneth Casson Leighton
On Mon, Aug 19, 2019 at 7:29 PM Sam Hartman  wrote:

> Your entire argument is built on the premise that it is actually
> desirable for these applications (compilers, linkers, etc) to work in
> 32-bit address spaces.

that's right [and in another message in the thread it was mentioned
that builds have to be done natively.  the reasons are to do with
mistakes that cross-compiling, particularly during autoconf
hardware/feature-detection, can introduce *into the binary*.  with
40,000 packages to build, it is just far too much extra work to
analyse even a fraction of them]

at the beginning of the thread, the very first thing that was
mentioned was: is it acceptable for all of us to abdicate
responsibility and, "by default" - by failing to take that
responsibility - end up indirectly responsible for the destruction and
consignment to landfill of otherwise perfectly good [32-bit] hardware?

now, if that is something that you - all of you - find to be perfectly
acceptable, then please continue to not make the decision to take any
action, and come up with whatever justifications you see fit which
will help you to ignore the consequences.

that's the "tough, reality-as-it-is, in-your-face" way to look at it.

the _other_ way to look at is: "nobody's paying any of us to do this,
we're perfectly fine doing what we're doing, we're perfectly okay with
western resources, we can get nice high-end hardware, i'm doing fine,
why should i care??".

this perspective was one that i first encountered during a ukuug
conference on samba as far back as... 1998.  i was too shocked to even
answer the question, not least because everybody in the room clapped
at this utterly selfish, self-centered "i'm fine, i'm doing my own
thing, why should i care, nobody's paying us, so screw microsoft and
screw those stupid users for using proprietary software, they get
everything they deserve" perspective.

this very similar situation - 32-bit hardware being consigned to
landfill - is slowly and inexorably coming at us, being squeezed from
all sides: not just by 32-bit hardware itself being completely useless
for actual *development* purposes (who actually still has a 32-bit
system as a main development machine?), but also by advances in
standards, processor speed, user expectations and much more.

i *know* that we don't have - and can't use - 32-bit hardware for
primary development purposes.  i'm writing this on a 2.5 year old
gaming laptop that was the fastest, most highly-resourced machine i could
buy at the time (16GB RAM, 512GB NVMe, 3.6ghz quad-core
hyperthreaded).

and y'know what? given that we're *not* being paid by these users of
32-bit hardware - in fact most of us are not being paid *at all* -
it's not as unreasonable as it first sounds.

i am *keenly aware* that we volunteer our time, and are not paid
*anything remotely* close to what we should be paid, given the
responsibility and the service that we provide to others.

it is a huge "pain in the ass" conundrum, that leaves each of us with
a moral and ethical dilemma that we each *individually* have to face.

l.



Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-14 Thread Luke Kenneth Casson Leighton
On Wed, Aug 14, 2019 at 5:13 PM Aurelien Jarno  wrote:

> > a proper fix would also have the advantage of keeping linkers for
> > *other* platforms (even 64 bit ones) out of swap-thrashing, saving
> > power consumption for build hardware and costing a lot less on SSD and
> > HDD regular replacements.
>
> That would only fix ld, which is only a small part of the issue. Do you
> also have ideas about how to fix llvm, gcc or rustc which are also
> affected by virtual memory exhaustion on 32-bit architectures?

*deep breath* - no.  or rather - you're not going to like it - it's not a
technical solution: it's going to need a massive, world-wide, sustained
and systematic education campaign, written in reasonable and logical
language, explaining to and advising GNU/Linux application writers to
take more care and to be much more responsible about how they put
programs together.

a first cut at such a campaign would be:

* designing core critical libraries to be used exclusively through
dlopen / dlsym.  this is just good library design practice in the
first place: one function and one function ONLY is publicly exposed,
returning a pointer to a table-of-functions (samba's VFS layer for
example [1]; a minimal sketch follows after this list).
* compile-time options that use alternative memory-efficient
algorithms instead of performance-efficient ones
* compile-time options to remove non-essential resource-hungry features
* compile options in Makefiles that do not assume that vast
amounts of memory are available (KDE "developer" mode for example would
compile c++ objects individually, whereas "maintainer" mode would
auto-generate a file that #included absolutely every single .cpp file
into one, because it's "quicker").
* potential complete redesigns using IPC/RPC modular architectures:
applying the "UNIX Philosophy", but doing so through a runtime,
binary, self-describing "system" specifically designed for that
purpose.  *this is what made Microsoft [and Apple] successful*.  that
means a strategic focus on getting DCOM for UNIX up and running [2].
god no, please, not d-bus [3] [4].
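
to illustrate the first bullet, a minimal sketch - hypothetical names,
*not* samba's actual VFS code - of a plugin that exports one single
entry point returning a table-of-functions, loaded through
dlopen / dlsym:

/* plugin.h - shared between the plugin and the application */
#ifndef PLUGIN_H
#define PLUGIN_H
struct plugin_ops {
    int (*open_fn)(const char *path);
    int (*close_fn)(int handle);
};
#endif

/* plugin.c - gcc -shared -fPIC plugin.c -o plugin.so */
#include "plugin.h"

static int my_open(const char *path)  { (void)path;   return 42; }
static int my_close(int handle)       { (void)handle; return 0;  }

static const struct plugin_ops ops = { my_open, my_close };

/* the one and only publicly-exposed symbol */
const struct plugin_ops *plugin_init(void) { return &ops; }

/* loader.c - gcc loader.c -o loader -ldl */
#include <dlfcn.h>
#include <stdio.h>
#include "plugin.h"

int main(void)
{
    void *h = dlopen("./plugin.so", RTLD_NOW | RTLD_LOCAL);
    if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* only one symbol is ever resolved: the entry point */
    const struct plugin_ops *(*init)(void);
    *(void **)(&init) = dlsym(h, "plugin_init");
    if (!init) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    const struct plugin_ops *p = init();
    printf("open_fn() returned %d\n", p->open_fn("/tmp/example"));
    dlclose(h);
    return 0;
}

the point being that the application never links against the plugin's
internals at build time: only the single entry point is resolved, at
runtime, and everything else goes through the function-pointer table.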

also, it's going to need to be made clear to people - diplomatically
but clearly - that whilst they're developing on modern hardware
(because it's what *they* can afford, and what *they* can get - in the
West), the rest of the world (particularly "Embedded" processors)
simply does not have the money or the resources that they do.

unfortunately, here, the perspective "i'm ok, i'm doing my own thing,
in my own free time, i'm not being paid to support *your* hardware" is
a legitimate one.

now, i'm not really the right person to head such an effort.  i can
*identify* the problem, and get the ball rolling on a discussion:
however, with many people within debian alone having the perspective
that everything i do, think or say is specifically designed to "order
people about" and "tell them what to do", i'm disincentivised right
from the start.

also, i've got a thousand systems to deliver as part of a
crowd-funding campaign [and i'm currently also dealing with designing
the Libre RISC-V CPU/GPU/VPU].

as of right now those thousand systems - 450 of them are going to have
to go out with Debian/Testing 8.  there's no way they can go out with
Debian 10.  why? because this first revision hardware - designed to be
eco-conscious - uses an Allwinner A20 and only has 2GB of RAM
[upgrades are planned - i *need* to get this first version out, first]

with Debian 10 requiring 4GB of RAM, primarily because of firefox,
they're effectively useless if they're ever "upgraded".

that's a thousand systems that effectively go straight into landfill.

l.

[1] 
https://www.samba.org/samba/docs/old/Samba3-Developers-Guide/vfs.html#id2559133

[2] incredibly, Wine has had DCOM and OLE available and good enough to
use for about ten years now.  it just needs "extracting" from the
Wine codebase. DCOM stops all of the arguments over APIs (think
"libboost".  if puzzled, add debian/testing and debian/old-stable to
/etc/apt/sources.list, then do "apt-cache search boost | wc")

due to DCOM providing "a means to publish a runtime self-describing
language-independent interface", 30-year-old WIN32 OLE binaries for
which the source code has been irretrievably lost will *still work*
and may still be used, in modern Windows desktops today.  Mozilla
ripped out XPCOM because although it was "inspired" by COM, they
failed, during its initial design, to understand why Co-Classes exist.

as a result it caused massive ongoing problems for 3rd party java and
c++ users, due to binary incompatibility caused by changes to APIs on
major releases.  Co-Classes were SPECIFICALLY designed to stop EXACTLY
that problem... and Mozilla failed to add it to XPCOM.

bottom line: the free software community has, through "hating" on
microsoft, rejected the very technology that made microsoft so
successful in the first place.

Microsoft used DCOM (and OLE), Apple (thanks to Steve's playtime /
break doing NeXT) developed Objective-C / Objective-J / 

Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-09 Thread Luke Kenneth Casson Leighton
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68

On Thu, Aug 8, 2019 at 9:39 PM Aurelien Jarno  wrote:

> We are at a point were we should probably look for a real solution
> instead of relying on tricks.

 *sigh* i _have_ been pointing out for several years now that this is
a situation that is going to get steadily worse and worse, leaving
perfectly good hardware only fit for landfill.

 i spoke to Dr Stallman about the lack of progress here:
 https://sourceware.org/bugzilla/show_bug.cgi?id=22831

he expressed some puzzlement, as the original binutils algorithm was
perfectly well capable of handling linking with far less resident
memory than was available at the time - and, just like gcc, did *NOT*
assume that virtual memory was "the way to go".  this is because the
algorithm used in ld was written at a time when virtual memory was far
from adequate.

 then somewhere in the mid-90s, someone went "4GB is enough for
anybody" and ripped the design to shreds, making the deeply flawed and
short-sighted assumption that application linking would remain -
forever - below 640k^H^H^H^H4GB.

 now we're paying the price.

 the binutils-gold algorithm (with the options listed in the bugreport)
is *supposed* to fix this, however the ld-torture test that i created
shows that the binutils-gold algorithm is *also* flawed: it probably
uses mmap when it is in *no way* supposed to.

 binutils with the --no-keep-memory option actually does far better
than binutils-gold... in most circumstances.  however it also
spuriously fails with inexplicable errors.

 basically, somebody needs to actually properly take responsibility
for this and get it fixed.  the pressure will then be off: linking
will take longer *but at least it will complete*.

 i've written the ld-torture program - a random function generator -
so that it can be used to easily generate large numbers of massive c
source files that will hit well over the 4GB limit at link time.  so
it's easily reproducible.

 l.

p.s. no, going into virtual memory is not acceptable.  the
cross-referencing instantly creates a swap-thrash scenario that will
push any and all builds to 10 to 100x the completion time.  any link
that goes into "thrash" will take 2-3 days to complete instead of an
hour.  "--no-keep-memory" is supposed to fix that, but it is *NOT* an
option on binutils-gold, it is *ONLY* available on the *original*
binutils-ld.



Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-09 Thread Luke Kenneth Casson Leighton
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68

On Fri, Aug 9, 2019 at 1:49 PM Ivo De Decker  wrote:
>
> Hi Aurelien,
>
> On 8/8/19 10:38 PM, Aurelien Jarno wrote:
>
> > 32-bit processes are able to address at maximum 4GB of memory (2^32),
> > and often less (2 or 3GB) due to architectural or kernel limitations.
>
> [...]
>
> Thanks for bringing this up.
>
> > 1) Build a 64-bit compiler targeting the 32-bit corresponding
> > architecture and install it in the 32-bit chroot with the other
> > 64-bit dependencies. This is still a kind of cross-compiler, but the
> > rest of the build is unchanged and the testsuite can be run. I guess
> > it *might* be something acceptable. release-team, could you please
> > confirm?
>
> As you noted, our current policy doesn't allow that. However, we could
> certainly consider reevaluating this part of the policy if there is a
> workable solution.

it was a long time ago: people who've explained it to me sounded like
they knew what they were talking about when it comes to insisting that
builds be native.

fixing binutils to bring back the linker algorithms that were
short-sightedly destroyed "because they're so historic and laughably
archaic why would we ever need them" should be the first and only
absolute top priority.

only if that catastrophically fails should other options be considered.

with the repro ld-torture code-generator that i wrote, and the amount
of expertise there is within the debian community, it would not
surprise me at all if binutils-ld could be properly fixed extremely
rapidly.

a proper fix would also have the advantage of keeping linkers for
*other* platforms (even 64 bit ones) out of swap-thrashing, saving
power consumption for build hardware and costing a lot less on SSD and
HDD regular replacements.

l.



Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-26 Thread Luke Kenneth Casson Leighton
On Mon, Jan 7, 2019 at 11:30 PM Mike Hommey  wrote:

> > it would be extremely useful to confirm that 32-bit builds can in fact
> > be completed, simply by adding "-Wl no-keep-memory" to any 32-bit
> > builds that are failing at the linker phase due to lack of memory.
>
> Note that Firefox is built with --no-keep-memory
> --reduce-memory-overheads, and that was still not enough for 32-bts
> builds. GNU gold instead of BFD ld was also given a shot. That didn't
> work either. Presently, to make things link at all on 32-bits platforms,
> debug info is entirely disabled. I still need to figure out what minimal
> debug info can be enabled without incurring too much memory usage
> during linking.

 hi mike, hi steve, i did not receive a response on the queries about
the additional recommended options [1], so rather than lose track i
raised a bugreport and cross-referenced this discussion:

 https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=919882

 personally, after using the ld-evil-linker.py tool i do not expect
the recommended options to work on 32-bit: i suspect that, despite the
options claiming not to use mmap, ld-gold still does - the
investigation that i did provides some empirical evidence to that
effect - whereas ld-bfd does *not*.

 so, ironically, on ld-bfd you run into one bug, and on ld-gold you
run into another :)

l.

[1] https://sourceware.org/bugzilla/show_bug.cgi?id=22831#c25



Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-09 Thread Luke Kenneth Casson Leighton
https://sourceware.org/bugzilla/show_bug.cgi?id=22831 - sorry, using phone to
type, mike: comment 25 shows some important options to ld-gold - would it be
possible to retry with those, on 32-bit?  Disabling mmap looks really
important, as a 4GB+ binary is clearly guaranteed to fail to fit into a
32-bit mmap.

On Tuesday, January 8, 2019, Mike Hommey  wrote:

>
> Note that Firefox is built with --no-keep-memory
> --reduce-memory-overheads, and that was still not enough for 32-bits
> builds. GNU gold instead of BFD ld was also given a shot. That didn't
> work either. Presently, to make things link at all on 32-bits platforms,
> debug info is entirely disabled. I still need to figure out what minimal
> debug info can be enabled without incurring too much memory usage
> during linking.
>
> Mike
>


-- 
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-08 Thread Luke Kenneth Casson Leighton
On Tue, Jan 8, 2019 at 7:26 AM Luke Kenneth Casson Leighton
 wrote:
>
> On Tue, Jan 8, 2019 at 7:01 AM Luke Kenneth Casson Leighton
>  wrote:

> trying this:
>
> $ python evil_linker_torture.py 3000 400 200 50
>
> running with "make -j4" is going to take a few hours.

 ok so that did the trick: got to 4.3gb total resident memory even
with --no-keep-memory tacked on to the link.  fortunately it bombed
out (below) before it could get to the (assumed) point where it would
double the amount of resident RAM (8.6GB) and cause my laptop to go
into complete thrashing meltdown.

hypothetically it should have created an 18 GB executable.  3000 times
500,000 static chars isn't the only reason this is failing, because
when restricted to only 100 functions and 100 random calls per
function, it worked.

ok so i'm retrying without --no-keep-memory... and it's now gone
beyond the 5GB mark.  backgrounding it and letting it progress a few
seconds at a time... that's interesting up to 8GB...  9.5GB ok
that's enough: any more than that and i really will trash the laptop.

ok so the above settings will definitely do the job (and seem to have
thrown up a repro candidate for the issue you were experiencing with
firefox builds, mike).

i apologise that it takes about 3 hours to build all 3,000 6mb object
files, even with a quad-core 3.6ghz i7.  they're a bit monstrous.

will find this post somewhere on debian-devel archives and
cross-reference it here
https://sourceware.org/bugzilla/show_bug.cgi?id=22831


ld: warning: cannot find entry symbol _start; defaulting to 00401000
ld: src9.o: in function `fn_9_0':
/home/lkcl/src/ld_torture/src9.c:3006:(.text+0x27): relocation
truncated to fit: R_X86_64_PLT32 against symbol `fn_1149_322' defined
in .text section in src1149.o
ld: /home/lkcl/src/ld_torture/src9.c:3008:(.text+0x41): relocation
truncated to fit: R_X86_64_PLT32 against symbol `fn_1387_379' defined
in .text section in src1387.o
ld: /home/lkcl/src/ld_torture/src9.c:3014:(.text+0x8f): relocation
truncated to fit: R_X86_64_PLT32 against symbol `fn_1821_295' defined
in .text section in src1821.o
ld: /home/lkcl/src/ld_torture/src9.c:3015:(.text+0x9c): relocation
truncated to fit: R_X86_64_PLT32 against symbol `fn_1082_189' defined
in .text section in src1082.o
ld: /home/lkcl/src/ld_torture/src9.c:3016:(.text+0xa9): relocation
truncated to fit: R_X86_64_PLT32 against symbol `fn_183_330' defined
in .text section in src183.o
ld: /home/lkcl/src/ld_torture/src9.c:3024:(.text+0x111): relocation
truncated to fit: R_X86_64_PLT32 against symbol `fn_162_394' defined
in .text section in src162.o
ld: /home/lkcl/src/ld_torture/src9.c:3026:(.text+0x12b): relocation
truncated to fit: R_X86_64_PLT32 against symbol `fn_132_235' defined
in .text section in src132.o
ld: /home/lkcl/src/ld_torture/src9.c:3028:(.text+0x145): relocation
truncated to fit: R_X86_64_PLT32 against symbol `fn_1528_316' defined
in .text section in src1528.o
ld: /home/lkcl/src/ld_torture/src9.c:3029:(.text+0x152): relocation
truncated to fit: R_X86_64_PLT32 against symbol `fn_1178_357' defined
in .text section in src1178.o
ld: /home/lkcl/src/ld_torture/src9.c:3031:(.text+0x16c): relocation
truncated to fit: R_X86_64_PLT32 against symbol `fn_1180_278' defined
in .text section in src1180.o
ld: /home/lkcl/src/ld_torture/src9.c:3035:(.text+0x1a0): additional
relocation overflows omitted from the output
^Cmake: *** Deleting file `main'
make: *** [main] Interrupt



Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-07 Thread Luke Kenneth Casson Leighton
On Tue, Jan 8, 2019 at 7:01 AM Luke Kenneth Casson Leighton
 wrote:

> i'm going to see if i can get above the 4GB mark by modifying the
> Makefile to do 3,000 shared libraries instead of 3,000 static object
> files.

 fail.  shared libraries link extremely quickly.  reverted to static,
trying this:

$ python evil_linker_torture.py 3000 400 200 50

so that's 4x the number of functions per file, and 2x the number of
calls *in* each function.

just the compile phase requires 1GB per object file (gcc 7.3.0-29),
which, on "make -j8", ratcheted up the loadavg to the point where...
well... *when* it recovered it reported a loadavg of over 35, with 95%
usage of the 16GB swap space...

running with "make -j4" is going to take a few hours.

l.



Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-07 Thread Luke Kenneth Casson Leighton
$ python evil_linker_torture.py 3000 100 100 50

ok so that managed to get up to 1.8GB resident memory, paused for a
bit, then doubled it to 3.6GB, and a few seconds later successfully
outputted a binary.

i'm going to see if i can get above the 4GB mark by modifying the
Makefile to do 3,000 shared libraries instead of 3,000 static object
files.

l.



Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-07 Thread Luke Kenneth Casson Leighton
On Tue, Jan 8, 2019 at 6:27 AM Luke Kenneth Casson Leighton
 wrote:

> i'm just running the above, will hit "send" now in case i can't hit
> ctrl-c in time on the linker phase... goodbye world... :)

$ python evil_linker_torture.py 2000 50 100 200
$ make -j8

oh, err... whoopsie... is this normal? :)  it was only showing around
600mb during the linker phase anyway. will keep hunting. where is this
best discussed (i.e. not such a massive cc list)?

/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/7/crtbeginS.o: in function
`deregister_tm_clones':
crtstuff.c:(.text+0x3): relocation truncated to fit: R_X86_64_PC32
against `.tm_clone_table'
/usr/bin/ld: crtstuff.c:(.text+0xb): relocation truncated to fit:
R_X86_64_PC32 against symbol `__TMC_END__' defined in .data section in
main
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/7/crtbeginS.o: in function
`register_tm_clones':
crtstuff.c:(.text+0x43): relocation truncated to fit: R_X86_64_PC32
against `.tm_clone_table'
/usr/bin/ld: crtstuff.c:(.text+0x4a): relocation truncated to fit:
R_X86_64_PC32 against symbol `__TMC_END__' defined in .data section in
main
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/7/crtbeginS.o: in function
`__do_global_dtors_aux':
crtstuff.c:(.text+0x92): relocation truncated to fit: R_X86_64_PC32
against `.bss'
/usr/bin/ld: crtstuff.c:(.text+0xba): relocation truncated to fit:
R_X86_64_PC32 against `.bss'
collect2: error: ld returned 1 exit status
make: *** [main] Error 1



Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-07 Thread Luke Kenneth Casson Leighton
$ python evil_linker_torture.py 2000 50 100 200

ok so it's pretty basic, and arguments of "2000 50 10 100"
resulted in around a 10-15 second linker phase, which top showed to be
getting up to around the 2-3GB resident memory range.  "2000 50 100
200" should make even a system with 64GB of RAM start to
feel the pain.

evil_linker_torture.py N M O P generates N files, each with M functions
calling O randomly-selected functions, where each file also contains a
static char array of size P that is *deliberately* forced into the data
section (rather than .bss) by being initialised with a non-zero value,
exactly and precisely as you should never do because... surpriiise! it
adversely impacts the binary size.

i'm just running the above, will hit "send" now in case i can't hit
ctrl-c in time on the linker phase... goodbye world... :)

l.
#!/usr/bin/env python
# evil_linker_torture.py N M O P
# generates N c files, each containing M functions, where each function
# makes O calls to randomly-selected functions across all files, and each
# file carries a non-zero-initialised static char array of size P.

import sys
import random

maketemplate = """\
CC := gcc
CFILES:=$(shell ls | grep "\.c")
OBJS:=$(CFILES:%.c=%.o)
DEPS := $(CFILES:%.c=%.d)
CFLAGS := -g -g -g
LDFLAGS := -g -g -g

%.d: %.c
	$(CC) $(CFLAGS) -MM -o $@ $<

%.o: %.c
	$(CC) $(CFLAGS) -o $@ -c $<

#	$(CC) $(CFLAGS) -include $(DEPS) -o $@ $<

main: $(OBJS)
	$(CC) $(OBJS) $(LDFLAGS) -o main
"""

def gen_makefile():
    # write out a Makefile that compiles every generated .c and links "main"
    with open("Makefile", "w") as f:
        f.write(maketemplate)

def gen_headers(num_files, num_fns):
    # one header per source file, declaring all of that file's functions
    for fnum in range(num_files):
        with open("hdr{}.h".format(fnum), "w") as f:
            for fn_num in range(num_fns):
                f.write("extern int fn_{}_{}(int arg1);\n".format(fnum, fn_num))

def gen_c_code(num_files, num_fns, num_calls, static_sz):
    for fnum in range(num_files):
        with open("src{}.c".format(fnum), "w") as f:
            # include every header so any function can call any other
            for hfnum in range(num_files):
                f.write('#include "hdr{}.h"\n'.format(hfnum))
            # non-zero initialiser deliberately forces this into .data
            f.write('static char data[%d] = {1};\n' % static_sz)
            for fn_num in range(num_fns):
                f.write("int fn_%d_%d(int arg1)\n{\n" % (fnum, fn_num))
                f.write("\tint arg = arg1 + 1;\n")
                for nc in range(num_calls):
                    cnum = random.randint(0, num_fns-1)
                    cfile = random.randint(0, num_files-1)
                    f.write("\targ += fn_{}_{}(arg);\n".format(cfile, cnum))
                f.write("\treturn arg;\n")
                f.write("}\n")
            if fnum != 0:
                continue
            # main() goes into the first file only
            f.write("int main(int argc, char *argv[])\n{\n")
            f.write("\tint arg = 0;\n")
            for nc in range(num_calls):
                cnum = random.randint(0, num_fns-1)
                cfile = random.randint(0, num_files-1)
                f.write("\targ += fn_{}_{}(arg);\n".format(cfile, cnum))
            f.write("\treturn 0;\n")
            f.write("}\n")

if __name__ == '__main__':
    num_files = int(sys.argv[1])
    num_fns = int(sys.argv[2])
    num_calls = int(sys.argv[3])
    static_sz = int(sys.argv[4])
    gen_makefile()
    gen_headers(num_files, num_fns)
    gen_c_code(num_files, num_fns, num_calls, static_sz)


Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-07 Thread Luke Kenneth Casson Leighton
On Tuesday, January 8, 2019, Mike Hommey  wrote:

> On Mon, Jan 07, 2019 at 11:46:41PM +0000, Luke Kenneth Casson Leighton
> wrote:
>
> > At some point apps are going to become so insanely large that not even
> > disabling debug info will help.
>
> That's less likely, I'd say. Debug info *is* getting incredibly more and
> more complex for the same amount of executable weight, and linking that
> is making things worse and worse. But having enough code to actually be
> a problem without debug info is probably not so close.
>
>
It's a slow-boil problem: 10 years to get bad, another 10 years to
get really bad. It needs strategic planning. Right now things are not exactly
being tackled except in a reactive way, which unfortunately takes time as
everyone is a volunteer. That exacerbates the problem and leaves drastic
"solutions" such as "drop all 32 bit support".


> There are solutions to still keep full debug info, but the Debian
> packaging side doesn't support that presently: using split-dwarf. It
> would probably be worth investing in supporting that.
>
>
Sounds very reasonable - I've always wondered why debug syms are not separated
at build/link time. Would that buy maybe another decade?



-- 
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-07 Thread Luke Kenneth Casson Leighton
On Tuesday, January 8, 2019, Mike Hommey  wrote:

> .
>
> Note that Firefox is built with --no-keep-memory
> --reduce-memory-overheads, and that was still not enough for 32-bts
> builds. GNU gold instead of BFD ld was also given a shot. That didn't
> work either. Presently, to make things link at all on 32-bits platforms,
> debug info is entirely disabled. I still need to figure out what minimal
> debug info can be enabled without incurring too much memory usage
> during linking.


Dang. Yes, removing debug symbols was the only way I could get webkit to
link without thrashing; it's a temporary fix though.

So the removal of the algorithm in ld that Dr Stallman wrote, dating back to
the 1990s, has already resulted in a situation that's worse than I feared.

At some point apps are going to become so insanely large that not even
disabling debug info will help.

At which point perhaps it is worth questioning the approach of having an
app be a single executable in the first place.  Even on a 64-bit system, if
an app doesn't fit into 4GB of RAM there's something drastically going awry.



-- 
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-07 Thread Luke Kenneth Casson Leighton
(hi edmund, i'm reinstating debian-devel on the cc list as this is not
a debian-arm problem, it's *everyone's* problem)

On Mon, Jan 7, 2019 at 12:40 PM Edmund Grimley Evans
 wrote:

> >  i spoke with dr stallman a couple of weeks ago and confirmed that in
> > the original version of ld that he wrote, he very very specifically
> > made sure that it ONLY allocated memory up to the maximum *physical*
> > resident available amount (i.e. only went into swap as an absolute
> > last resort), and secondly that the number of object files loaded into
> > memory was kept, again, to the minimum that the amount of spare
> > resident RAM could handle.
>
> How did ld back then determine how much physical memory was available,
> and how might a modern reimplemention do it?

 i don't know: i haven't investigated the code.  one clue: gcc does
exactly the same thing (or, used to: i believe that someone *may* have
tried removing the feature from recent versions of gcc).

 ... you know how gcc stays below the radar of available memory, never
going into swap-space except as a last resort?

> Perhaps you use sysconf(_SC_PHYS_PAGES) or sysconf(_SC_AVPHYS_PAGES).
> But which? I have often been annoyed by how "make -j" may attempt
> several huge linking phases in parallel.

 on my current laptop, which was one of the very early quad core i7
skylakes with 2400mhz DDR4 RAM, the PCIe bus actually shuts down if
too much data goes over it (too high a power draw occurs).

 consequently, if swap-thrashing occurs, it's extremely risky, as it
causes the NVMe SSD to go *offline*, re-initialise, and come back on
again after some delay.

 that means that i absolutely CANNOT allow the linker phase to go into
swap-thrashing, as it will result in the loadavg shooting up to over
120 within just a few seconds.
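
(for reference, a minimal sketch of the two sysconf calls edmund mentions
above - purely illustrative, this is *not* what the original ld actually
did:)

/* sketch: querying total vs. currently-free physical RAM via sysconf.
 * illustrative only - not the original ld algorithm. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long page  = sysconf(_SC_PAGESIZE);
    long phys  = sysconf(_SC_PHYS_PAGES);    /* total physical pages */
    long avail = sysconf(_SC_AVPHYS_PAGES);  /* pages currently free */

    printf("total RAM:     %lld MiB\n", ((long long)phys  * page) >> 20);
    printf("available RAM: %lld MiB\n", ((long long)avail * page) >> 20);
    return 0;
}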


> Would it be possible to put together a small script that demonstrates
> ld's inefficient use of memory? It is easy enough to generate a big
> object file from a tiny source file, and there are no doubt easy ways
> of measuring how much memory a process used, so it may be possible to
> provide a more convenient test case than "please try building Firefox
> and watch/listen as your SSD/HDD gets t(h)rashed".
>
> extern void *a[], *b[];
> void *c[1000] = {  };
> void *d[1000] = {  };
>
> If we had an easy test case we could compare GNU ld, GNU gold, and LLD.

 a simple script that auto-generated tens of thousands of functions in
a couple of hundred c files, with each function making tens to
hundreds of random cross-references (calls) to other functions across
the entire range of auto-generated c files should be more than
adequate to make the linker phase go into near-total meltdown.

 the evil kid in me really *really* wants to give that a shot...
except it would be extremely risky to run on my laptop.

 i'll write something up. mwahahah :)
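
 (something along these lines is what i have in mind - purely illustrative,
with the numbers kept deliberately small here; scale NFILES/NFUNCS up at
your own risk, and run the resulting link on a machine you do not care
about:)

    #!/bin/sh
    # generate NFILES C files, each with NFUNCS functions that call
    # pseudo-randomly chosen functions in other generated files
    NFILES=200; NFUNCS=100
    for f in $(seq 1 $NFILES); do
      for g in $(seq 1 $NFUNCS); do
        tf=$(( ($f * 7 + $g * 13) % $NFILES + 1 ))
        tg=$(( ($f * 3 + $g * 5)  % $NFUNCS + 1 ))
        echo "extern int fn_${tf}_${tg}(int);"
        echo "int fn_${f}_${g}(int x) { return fn_${tf}_${tg}(x + ${g}); }"
      done > gen_${f}.c
    done
    echo 'int fn_1_1(int); int main(void) { return fn_1_1(0); }' > gen_main.c
    # compile with maximum debug info, then watch the link stage carefully:
    # gcc -g3 -c gen_*.c && gcc -o stress gen_*.o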

l.



Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-07 Thread Luke Kenneth Casson Leighton
On Sun, Jan 6, 2019 at 11:46 PM Steve McIntyre  wrote:
>
> [ Please note the cross-post and respect the Reply-To... ]
>
> Hi folks,
>
> This has taken a while in coming, for which I apologise. There's a lot
> of work involved in rebuilding the whole Debian archive, and many many
> hours spent analysing the results. You learn quite a lot, too! :-)
>
> I promised way back before DC18 that I'd publish the results of the
> rebuilds that I'd just started. Here they are, after a few false
> starts. I've been rebuilding the archive *specifically* to check if we
> would have any problems building our 32-bit Arm ports (armel and
> armhf) using 64-bit arm64 hardware. I might have found other issues
> too, but that was my goal.

 very cool.

 steve, this is probably as good a time as any to mention a very
specific issue with binutils (ld) that has been slowly and inexorably
creeping up on *all* distros - both 64 and 32 bit - where the 32-bit
arches are beginning to hit the issue first.

 it's a 4GB variant of the "640k should be enough for anyone" problem,
as applied to linking.

 i spoke with dr stallman a couple of weeks ago and confirmed that in
the original version of ld that he wrote, he very very specifically
made sure that it ONLY allocated memory up to the maximum *physical*
resident available amount (i.e. only went into swap as an absolute
last resort), and secondly that the number of object files loaded into
memory was kept, again, to the minimum that the amount of spare
resident RAM could handle.

 some... less-experienced people, somewhere in the late 1990s, ripped
all of that code out ["what's all this crap? why are we not just
relying on swap? 4GB of swap will surely be enough for anybody"]

 by 2008 i experienced a complete melt-down on a 2GB system when
compiling webkit.  i tracked it down to having accidentally enabled
"-g -g -g" in the Makefile, which i had done specifically for one
file, forgot about it, and accidentally recompiled everything.

 that resulted in an absolute thrashing meltdown that nearly took out
the entire laptop.

 the problem is that the linker phase in any application is so heavy
on cross-references that the moment the memory allocated by the linker
goes outside of the boundary of the available resident RAM it is
ABSOLUTELY GUARANTEED to go into permanent sustained thrashing.

 i cannot emphasise enough how absolutely critical that this is to
EVERY distribution to get this fixed.

resources world-wide are being completely wasted (power, time, and the
destruction of HDDs and SSDs) because systems which should only really
take an hour to do a link are instead often taking FIFTY times longer
due to swap thrashing.

not only that, but the poor design of ld is beginning to stop certain
packages from even *linking* on 32-bit systems!  firefox i heard now
requires SEVEN GIGABYTES during the linker phase!

and it's down to this very short-sighted decision to remove code
written by dr stallman, back in the late 1990s.

it would be extremely useful to confirm that 32-bit builds can in fact
be completed, simply by adding "-Wl,--no-keep-memory" to any 32-bit
builds that are failing at the linker phase due to lack of memory.
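
(spelled out, since the exact option syntax matters - these are documented
GNU BFD ld options; how best to inject them varies from package to package,
LDFLAGS being the usual route:)

    # ask BFD ld not to cache whole input files in memory, plus its other
    # documented memory-saving shortcuts, for one rebuild:
    export LDFLAGS="-Wl,--no-keep-memory -Wl,--reduce-memory-overheads"
    dpkg-buildpackage -us -uc -b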

however *please do not make the mistake of thinking that this is
specifically a 32-bit problem*.  resources are being wasted on 64-bit
systems by them going into massive thrashing, just as much as they are
on 32-bit ones: it's just that if it happens on a 32-bit system a hard
error occurs.

somebody needs to take responsibility for fixing binutils: the
maintainer of binutils needs help as he does not understand the
problem.  https://sourceware.org/bugzilla/show_bug.cgi?id=22831

l.



Re: Upcoming Qt switch to OpenGL ES on arm64

2018-11-26 Thread Luke Kenneth Casson Leighton
On Fri, Nov 23, 2018 at 12:18 AM Lisandro Damián Nicanor Pérez Meyer
 wrote:

> So: what's the best outcome for our *current* users? Again, pick only one.

here's a perspective that may not have been considered: how much
influence and effect on purchasing decisions would the choice made
have?

we know that proprietary embedded GPUs and their associated proprietary
software are not just unethical and a cause of huge problems: they also
hurt company profits and drive up support costs.

by complete contrast, when all the source code is libre-licensed, this
is what happens:

 
http://www.h-online.com/open/news/item/Intel-and-Valve-collaborate-to-develop-open-source-graphics-drivers-1649632.html

basically what i am inviting you to consider is that in making this
decision, the one that supports, encourages and indirectly endorses
the continued propagation of proprietary 3D libraries is one that is
going to have a massive world-wide adverse financial impact over time.

i would therefore strongly recommend prioritising decisions that
support libre-licensed GPU firmware and PCIe GPU cards that have
libre-licensed source code.

if systems with etnaviv are "punished" for example by this decision,
that would not go down too well.  if people running older Radeon GPU
cards (on the RockPro64, which has a 4x PCIe slot that easily runs at
2500 MBytes/sec) find that their cards perform badly, that is also not
going to go down well.

bottom line: your decisions here have far more impact than you may realise.

l.



EOMA68-A20 Crowd-funded Laptop and Micro-Desktop

2016-07-16 Thread Luke Kenneth Casson Leighton
https://www.crowdsupply.com/eoma68/micro-desktop

i've been working on a strategy to make it possible for people to have
more control over the hardware that they own, and for it to cost less
money for them to do so, long-term.  i've had to become an open
hardware developer in order to do that.

i believed for a long long time that leaving hardware design to the
mass-volume manufacturers would result in us having affordable
hardware that we could own.  they would make stuff; we could port
OSes to it, everybody wins.  starting in 2003 and working for almost
2 years continuously on reverse-engineering i got a bit of a rude but
early wake-up call where i learned just how naive that expectation
really is [1].  example: it took over THREE YEARS for cr2 to
reverse-engineer the drivers for the HTC Universal clamshell 3G
micro-laptop.

for everyone else, that message came through loud and clear with
mjg59's android tablet GPL violations list - which he stopped maintaining
because it was pointless to continue [2][3].  it was a bit of a slap in
the face - a wake-up call which not only debian but every other ARM
free software distribution is painfully reminded of on a regular basis
when someone new contacts them and asks:

  "I have hardware {X} bought off of Amazon / Aliexpress, can i run
   Linux on it"

and pretty much every time someone has to spend their time patiently
explaining that no, it's not possible, due to the extraordinary
amount of reverse-engineering that's required due to rampant and
endemic GPL violations, and even if they could, it's *already too late*
due to "Single-Board Computer Supernova Lifecycle" syndrome.

shockingly even intel do not really "Get It".  not only do they have
the arbitrary remote code execution backdoor co-processor [4]
in every x86_64 processor since 2009, but in speaking to a member
of the intel open source education team at fosdem2016 i learned that
intel considers something as trivial as DDR3 RAM initialisation
sequences to be "commercial advantage" and this they use as
justification for not releasing the 200 lines of code needed... not
that it would help very much because of the RSA secret key needed
to sign early boot code.

we also have the issue of proliferation of linux kernel device drivers:
put simply if there are M processors and N "types of products",
we can reasonably and rationally expect the number of submissions
of device drivers and device tree files for upstream inclusion to be of the
order of "M *TIMES* N".  with "M" just in the ARM world alone being
enormous (over 650 licensees as of 10 years ago) and "N" likewise
being huge, this places a huge burden on the linux kernel developers,
and an additional burden downstream on you, the OS maintainers, as
well.

... would it not be better to have hardware that was designed around
"M plus N"?  this would stop the endemic proliferation of device drivers,
would it not?

so this is the primary driving factor behind EOMA68 - to reduce the
burden of work required to be carried out by software libre developers,
as well as reduce the long-term cost of ownership of hardware for
everyone.

so after five years i can finally say that the EOMA68 standard is
ready, and with the last (and final) revision adding in USB 3.1 it
can be declared to have at least a decade of useful life ahead of
it.  there are NO "options".  there will be NO further changes made
(which would result in chaos). a modern Computer Card bought
10 years from now will still work with a Housing that's bought today,
and vice-versa.

if this approach is something that you feel is worthwhile supporting,
the crowd funding campaign runs for another 40 days.  crowd funding
campaigns are about supporting "ideas" and being rewarded with
a gift for doing so.  they're not about "buying a boxed product under
contract of sale".

with your support it will be possible to bring other designs and other
processors to you later.  picking a processor has its own interesting
challenges [5] if you have ethical business considerations to take
into account, such as "don't use anything that's GPL violating or
otherwise illegal". [why *is* it that people think it's okay to sell GPL
violating products, even amongst the open hardware community?
ethical ends can never be justified by unethical means].

lastly, i'm... reluctant to bring this up, but i have to.  *deep breath*.
i'm aware that a lot of people in the debian world don't like me. many
of you *genuinely* believe that i am out to control you, to tell you
what to do, to "order you about".  which is nonsense, but, more
importantly, rationally-speaking, completely impossible given the
nature of free software.  we can therefore conclude, rationally, that
the conclusion reached by many of you [that i am "ordering you
about"] simply cannot be true.

after thinking about this for a long, long time, my feeling is that this
startlingly and completely overwhelmingly WRONG impression
stems from my reverse-engineering background, which, 

libre version of iceweasel

2015-06-09 Thread Luke Kenneth Casson Leighton
http://news.slashdot.org/story/15/06/09/1722236/mozilla-responds-to-firefox-user-backlash-over-pocket-integration

after seeing this, i'm becoming increasingly alarmed at where firefox
is going [the first signs were the way in which the announcement was
made to focus on speed improvements when chrome and webkit first
became popular]

open question: is there a perceived need to dig into the source code
of firefox and, just as was done with google-chrome to create
chromium-browser, permanently patch out certain features?

l.





Re: how to remove libsystemd0 from a live-running debian desktop system

2015-02-22 Thread Luke Kenneth Casson Leighton
On Wed, Feb 18, 2015 at 12:16 AM, Axel Wagner m...@merovius.de wrote:
 Luke Kenneth Casson Leighton l...@lkcl.net writes:
  what *does* concern me is that it takes such incredible (and amazing)
 efforts by people like adam for the average end-user or sysadmin to
 contemplate replacing {insert nameless package}.

 insert libc6.

 libc6 has alternatives and is itself maintained by a diverse group
(the FSF) with a reputation for respecting software freedom and
sufficient experience to know the ropes surrounding UNIX/POSIX.  even
google, when developing android, decided that the GPL was a horrible
virus; they didn't like libc6 and very kindly funded the creation of
an alternative.  also, as libc6 implements a well-defined standard, it
would be unbelievably stupid to attempt to deviate from it (get creative).

 conclusion: we can trust the libc6 maintainers.

 Or insert perl.

 perl is maintained by an extremely experienced and diverse group of
developers.  they understand the responsibilities behind maintaining
and developing such a critical programming language.

 conclusion: we can trust the perl developers.

 Or insert linux-image.

 linux is developed by a hodge-podge bag of cat-like developers who
all, amazingly, pull together and get the job done.  they're also
headed by a team of incredibly responsible people who have had decades
of experience.

 conclusion: we can trust the linux kernel developers.

 a previous example given was SE/Linux.  i outlined a case where,
paradoxically, it can be demonstrated that we can trust the developers
behind SE/Linux.

 another example i was given was grub.  grub has alternatives (lilo
and others): by inference this keeps them honest by way of
competition.  conclusion: we can trust the grub developers.

 And suddenly no one cares (not even you).

 the cases you give are ones where a rational analysis shows that the
people behind them can be trusted.  we can go at this for as long as
you feel it useful for you to do so, but i think you will find that,
in every case, the team behind the package engender our trust (trust
is *not* earned, btw: respect is earned.  trust is given.  always.
past performance != guarantee of future behaviour)

 but systemd is very, very different.  far from being able to find
reasons why the systemd team may be trusted, analysis by several
people shows that, sadly, the complete opposite is the case.

 unfortunately, the team behind the systemd project have demonstrated
time and time again that their focus is extremely narrow: Redhat
Desktop.

 one example (and there are many, many more):
 http://www.freedesktop.org/wiki/Software/systemd/separate-usr-is-broken/

 just one good analysis of which is here:
 http://lists.busybox.net/pipermail/busybox/2011-September/076713.html


 You have to do better than this, sorry. Just that
 a package has reverse dependencies and that you have to recompile a big
 part of the debian archive to install a debian without them does *not*
 mean, that this package is in any way problematic
 [continued below]

 you are correct about that [i.e. you are correct in the assertion
that the recompilation for removal has nothing to do with removal].

  i am still trying to track down concrete reasons why i feel so
alarmed.  i believe it's because everything i've seen that team do -
all their blogs and reports - has been... it feels so rational, and
so logical, yet nowhere do i see any kind of debate, or inclusion of
other people and other teams.  do these people join mailing lists other
than those directly related to fedora desktop?

 the whole situation feels desperately, desperately wrong, and i
cannot unfortunately give you a single concrete specific example or
reason why, and that is part of the problem: nobody else really can,
either.

 and i think that's really why everyone has been getting so fed up,
getting into such severe arguments that they end up leaving projects
that they've worked well for decades with everyone for such a long
time.

that *in and of itself* should tell you that there's something
seriously wrong, here.  how many prominent, committed, dedicated and
experienced people have resigned from roles in debian so far - people
without whom debian is clearly worse off.

 [continued here]
 or takes away choice.

 on this you are wrong: by definition and by the immediate evidence
shown, it does exactly and precisely that.  [more specifically, the
choices that people are forced into making are so extreme that many
cannot even make them, they are so disruptive, or require such extreme
knowledge, or require extreme risks, or require violations of company
policy - use of unofficial archives which would violate support
contracts - and so on].

l.





Re: how to remove libsystemd0 from a live-running debian desktop system

2015-02-18 Thread Luke Kenneth Casson Leighton
On Wed, Feb 18, 2015 at 12:27 AM, Steve Langasek vor...@debian.org wrote:
 On Tue, Feb 17, 2015 at 11:52:21PM +, Luke Kenneth Casson Leighton wrote:
 On Tue, Feb 17, 2015 at 10:52 PM, Josh Triplett j...@joshtriplett.org 
 wrote:

  So, please go educate yourself on what libsystemd0 actually does,

  i know what it does, and what it does - technically - is *not* the
 issue that i am concerned about.

 And that is why you'll find little interest here in entertaining your
 argument.  You have *not* presented any evidence that Debian is technically
 worse off as a result of packages depending on libsystemd0.

 that's right - i haven't.  because (a) i have complete confidence in
your technical abilities, as a group.  i wouldn't use debian
otherwise! :)  and (b) this isn't a technical issue, it's a strategic
one.

 so, the gist is: debian developers make decisions primarily based on
technical merit (almost exclusively), disregarding strategic issues
(almost exclusively).  would that be a fairly broad but accurate
assessment? (thank you to everyone else who has chipped in, i read a
couple of other messages from people which point in a similar
direction)

 a couple of things occur to me.

 firstly, when i was last in holland working for NC3A, some kind
person referred me to an obscure book called "The Strategy-focussed
Organisation".  very intelligent guy, who had actually read it... i
don't recommend reading all of it cover-to-cover, and neither did he
:)

 he pointed out to me that when it comes to the strategy (direction,
focus) of any organisation, the question "why should we care what
anyone else is doing?" is *the* most important one you can possibly
ask.  why - when you, the debian developers, are doing such a
fantastic job (really and sincerely) - should you care when someone
from *outside* of your group jumps up and down and says "uhh...
guuuys?"

 i invite you to think seriously about that, ok?  (because i don't
have an answer!!)

 the second thing - and i'm taking a huge risk here by using the
example that i'm about to share with you; please DO NOT think for ONE
SECOND that you are being ACCUSED of anything, ok?  i'm using this
example because i believe it will get through to you with enough
clarity.  i DO NOT want to hear ANYONE say "god almighty, did he
_really_ just accuse us of being horrible people by association", ok?

 do you know what the world's most authoritative medical texts are on
the subject of pain?  pain thresholds, tolerance, stress levels and so
on?  it's the documentation that the nazis made during their reign.
horrifyingly, they were *genuinely curious*, but, unlike other groups
who have tortured other humans, they meticulously documented all of
their work.

 why am i mentioning this example?  because, *technically*, the nazis'
documentation of their work is sufficiently flawless as to be of
extremely high *technical* value in the medical world, even today.

 ... but does that mean that *strategically* they should even have
been doing that research in the first place?  does the *technical*
quality of their work justify their torturing and murdering of other
human beings, just to see what happened??  of *course* it f*g well
doesn't!

 so this extreme example should, i believe, serve as an extremely
graphic illustration that, in any group, technical decisions need to
be guided by some sort of moral and/or strategic compass.  not
that i claim to be an authority on either [1].

would you agree with that?  i mean the moral and strategic compass
bit, not my claim to not be an authority on moral compasses :)

l.

[1] please don't say i am claiming to tell you what to do, therefore
you have the right to ignore it. as a group you keep doing that, and i
keep having to tell you i'm not, and it's getting really, really old.





Re: how to remove libsystemd0 from a live-running debian desktop system

2015-02-17 Thread Luke Kenneth Casson Leighton
On Tue, Feb 17, 2015 at 10:52 PM, Josh Triplett j...@joshtriplett.org wrote:

 So, please go educate yourself on what libsystemd0 actually does,

 i know what it does, and what it does - technically - is *not* the
issue that i am concerned about.

 and if
 for some reason you still consider it a problem after doing so, you'll
 need to explain why,

 i have done so, a number of times.  take away the name of the
library.  take away what it does.  take away how it does it, because
none of those things are relevant.

 what *does* concern me is that it takes such incredible (and amazing)
efforts by people like adam for the average end-user or sysadmin to
contemplate replacing {insert nameless package}.

 that *is* the problem.  i'm aware that there are many people in key
positions in debian who do not see this lack of choice as being the
problem, but i can assure you that it is.

 because as demonstrated in this thread, even those
 developers in Debian who still do care about non-systemd systems do not
 agree with you that it's a problem.  See, for instance, Russ's response,
 which you lauded while failing to actually comprehend, since you seem to
 believe that his response described something that needed changing
 rather than describing the current state.

 i believe tiredness may be affecting my ability to understand the
point you're trying to make, here.  i'm genuinely pleased that russ
(and adam) came up with the same possible solution (dynamic library
loading) that, if deployed, would end this entire issue because it
would allow people to make a choice.

 ah.  i got it.  i worked it out.  the sentence that bothered me was
the one which implied that no change is possible.  or desirable.  i
leave it to you over the next few weeks and months to assess whether
that assertion is true or not: when people continue, over the next few
weeks and months to *not* stop talking about systemd, remember this
moment, yeah?

 We used to build a half-dozen versions of libsdl, with support for
 various libraries, just so that people could avoid installing unused
 libraries on their systems.  We don't do that anymore; if you install a
 program based on libsdl, you'll get libsdl1.2debian, which depends on
 libasound2 and libpulse0 and libdirectfb-1.2-9 and libx11-6 and other
 libraries.  If you always run against X with ALSA, and never run with
 DirectFB or PulseAudio, then you get a couple of extra libraries on your
 system.  Worth it so that libsdl doesn't have to build a half-dozen
 conflicting binary packages.

 great!  sounds like a sensible decision to me.

 question.  is libsdl on a par with sysvinit, openrc, systemd and
depinit?  no it isn't, is it.  if you run a server, do you *really*
need libsdl?  no you don't, do you.

 and, y'know what: another thing - the very fact that there *is*
choice within libsdl - a lot of it - different backends, different
graphics, different sound libraries, that's... that's fantastic!

 ... because it's everything that systemd is not.

 right now, my deepest concern is that there isn't any other choice.
do you not also perceive that as being a problem?


 You should also learn what the word unilateral means; for someone
 willing to pedantically post a link to a dictionary, you seem to have
 failed to read it.  Distributions and projects have independently (or,
 if you like, *multilaterally*) started using systemd because it works
 well for them.

 yyyeah... i know - because they all took what the upstream developers
provided and they all ripped out everything *but* systemd.

 and that means we're into a monoculture.

 do you see that that is a problem?  to make it clear: under what
circumstances has a monoculture traditionally and historically been a
problem, under software-related (and non-software-related)
circumstances?


  And yes, that means they use libsystemd0, whether or not
 they depend on PID 1 at runtime.  Your incredulity at how that managed
 to happen does not actually refute that it did.

 i never said that it did, nor was i incredulous at how it happened.
i believe i've posted a number of times - twice on this list -
indicating that i have been keeping an eye on this for some time, and
also analysed retrospectively what happened.  *at no time* did i post
any kind of unrealistic statement like "how did that happen??" - i can see
very clearly how it happened, and, importantly - twice at least - i've
gone to some lengths to say that i don't consider it to be anyone's
fault.

 ... where on earth are you getting this stuff about how incredulous
i am from, josh? :)  *puzzled and tired*...

l.





Re: how to remove libsystemd0 from a live-running debian desktop system

2015-02-17 Thread Luke Kenneth Casson Leighton
ok, so there's been quite a discussion, both on slashdot, where
amazingly the comments that filtered to the top were insightful and
respectful, and also here on debian-devel and debian-users.  as i
normally use gmane to reply (and maintain and respect threads), and
this discussion is not *on* gmane, i apologise for having to write a
summary-style follow-up: if people would like me to reply (thank you
christian) please cc me in future, but (see last paragraph) i think
the software libre community's interests are best served if i wait for
replies to accumulate for a few days.

after thinking about this yesterday, a random sentence popped into my
head, which i believe is very appropriate:

  "i disagree with what you are saying, but i will defend your right
to say it."

i believe it was someone famous who wrote that, and it applies to this
situation because this really isn't about the technical merits of the
available software: solutions will come in time (and already are:
eudev, mdev, uselessd and many more).  the reason why i've joined this
debate is because i feel that closing doors on choice in ways that
force people to have to make extremely disruptive and risky decisions
that could adversely affect their livelihoods - i have a *really* bad
feeling about that, and i cannot sit by and let it happen without
speaking up.

in the past two days i've seen a lot of people on this list make it
clear (by saying, for example, "you have the source, go modify it") that
they do not truly appreciate the responsibility and duty of care that
they have.  in saying that i can say that *i know* how you feel: i've
been the leader of many software libre projects where people would
expect me to feed them answers for no financial reward - and all those
other nuances that we frequently encounter.  but i learned in the past
few years that even if you are not being paid, you *still* have a duty
to those people less intelligent or with less time or less money than
you.  we're *serving others* with our skill, time and intelligence.
it's a really awkward and delicate situation, i know, but answering
"go away and modify the source yourself" is to do both yourself and
the recipient of that answer a very strong disservice.

anyway - down to it.

so, marco, you wrote:

 Again, you clearly do not understand well how systemd works.

marco: understanding or otherwise how systemd works is not the point:
the point is that there has been a unilateral decision across
virtually every single GNU/Linux distro to abandon and remove *any*
alternative to having libsystemd0 installed.  historical precedent in
the software industry and beyond tells us that placing so much power
and trust in a single system and a single group should be ringing
alarm bells so loudly in your head that you should wake up deaf after
having first passed out with dizziness! :)

so could i ask you, as i really genuinely don't understand, why is it
that the lack of choice here *doesn't* bother you?  i'm not asking for
a technical review or a technically-based argument as to why
libsystemd0 is better - that has been debated many many times and is
entirely moot.  i'm asking why does *only* having libsystemd0 as the
sole exclusive startup method, removal of which prevents and prohibits
the use of a whopping FIFTEEN PERCENT of the available debian software
base, and where that exclusive exclusionary process is being rapidly
duplicated across virtually every single GNU/Linux distribution that
we know; why does that *not* make you pause for thought that there
might be something desperately and very badly wrong?

ric writes, amongst other things:

 You are completely free to fork or go your own direction,

indeed we are, and in fact one person mentions further in the thread
so far that they did exactly that.  they also outline quite how much
work it is.  on the slashdot discussion, someone pointed out that it
was really unconscionable that people have to go to such extreme
lengths.  GNU/Linux distros should be a place where people can make
happy and convenient choices, not extreme decisions!  the extreme
absurd version of what you suggest is to do what very very few people
in the world have ever done (one of them being richard lightman, an
amazingly intelligent and reclusive individual), namely to create an
*entire* linux distribution - on their own - from source.  i take it
you can see, from that example, quite how much of a disservice it is
to say what you said, ric?

no, the very fact that this *doesn't go away* - that discussions about
libsystemd0 are *continuous and ongoing*, should tell you that there
is something very, very badly wrong with what's going on.  and that's
what i want to get to the bottom of.  like... *properly* understand.

the second thing, ric, is that i have to point out, respectfully, that
there are signs that you didn't read the slashdot article summary, nor
my report, as shown here:

 But, to raise comparisons to MicroSoft is very much out of line.

that is a 

Re: how to remove libsystemd0 from a live-running debian desktop system

2015-02-17 Thread Luke Kenneth Casson Leighton
On Tue, Feb 17, 2015 at 7:03 PM, Andrew Shadura and...@shadura.me wrote:
 Hello,

 I'd like to apologise for my mail I sent about two hours ago. I have
 overreacted mainly because of the length of the email, CAPS INSIDE and
 also because it's a topic which is being discussed for more than a year
 and which many of people here are already tired of.

 i know, andrew.  i've been following it from a distance, staying away
until i had a better handle on what's going on, and a clue about
possible solutions.  i'm writing to the systemd developers now.


 I however still think that such lengthy writeups do really belong
 somewhere else, maybe to a blog, with a short post with a link being
 posted here.

  yehh, i wasn't expecting it to be that long - i lost track of
time, but also i wanted to make sure i addressed and included everyone
who responded over the past couple of days.

 Luke, Claude and everyone else, I am really sorry.

 not a problem andrew.





Re: how to remove libsystemd0 from a live-running debian desktop system

2015-02-17 Thread Luke Kenneth Casson Leighton
On Tue, Feb 17, 2015 at 6:25 PM, Luke Kenneth Casson Leighton
l...@lkcl.net wrote:

 which should help answer the question you asked: your work - fantastic
 as it is - was *impossible to find*.  it doesn't even remotely come up
 on the radar of queries.  *nobody knows what you've achieved* and
 that's something i would like to help correct.

 ok done:  http://neofutur.net/systemd-vault
 also i've edited http://without-systemd.org/wiki/index.php/Main_Page
adding a sentence that, i hope, allows what you did, adam, to be
easily distinguished from all the forks and rather challenging
alternatives to consider (including the inconvenience of moving away
from debian entirely).

l.





Re: how to remove libsystemd0 from a live-running debian desktop system

2015-02-17 Thread Luke Kenneth Casson Leighton
On Tue, Feb 17, 2015 at 5:20 PM, claude juif claude.j...@gmail.com wrote:


 2015-02-17 17:55 GMT+01:00 Andrew Shadura and...@shadura.me:

 Hi Luke,

 On 17 February 2015 at 17:28, Luke Kenneth Casson Leighton
 l...@lkcl.net wrote:
  [265 lines of text and counting snipped]

 In short, this is TL;DR. We've all got better things to waste our time
 on. Please go away. Nobody's interested in this any longer regardless
 of their position on systemd.

 Thanks.


 Hi,

 Really rude answer. Really bad.

 thanks for pointing that out, claude - it helps that it was someone
else who pointed out that being uncivil by asking a *person* to go
away doesn't make the *problem* go away.

 andrew: i will go away only when i am satisfied that the problem
which i believe it is my duty and responsibility to help highlight and
fix has, in fact gone away.

 if you feel that this is sufficiently beyond your psyche's limits,
there are a number of ways in which you may deal with that, but
*demanding* of people that they violate their principles, as well as
inconveniencing many other people and increasing _their_ stress levels
by voicing such demands... can you see how that really will not
work out very well, for everyone involved, including yourself?

 short answer: no, i will not accede to your unreasonable demand.  i
have the right to speak up, and, just to make it clear: like that
famous person said, whom i find myself quoting within a couple of
hours for completely different reasons, "i do not agree with you, but
i will defend your right to say so".

 so thank you for making it clear that you find this difficult to cope
with, but please do take a relaxing holiday or something, ok? :)

 anyway, in other news, i'm delighted to have been made aware (very
recently) of the work by adam borowski, which i have to say is
completely unknown and underappreciated at this point.  links are
here:

 http://forums.debian.net/viewtopic.php?f=20t=119836

 it would appear that one person has managed to achieve what the
devuan team are endeavouring to duplicate, and what my report has only
begun to scratch the surface on.  i find this to be incredibly funny.

 *but*... we are *not done yet*.  the work by adam is amazing and
everything that i was hoping would be done as an interim measure, so
adam THANK YOU, you have made it possible for the average end-user and
sysadmin to continue to manage their machines in a convenient way
*and* still make the choice to not have libsystemd0 present, and
that's just... words fail me to express my gratitude.

 *but*... the next phase is to tackle upstream and to pursue the
design concept advocated by russ: dynamic loading.  there really
should be no need to use what adam's done (or what devuan want).  it
*really should* be possible to install (or remove) a few packages that
are *part of debian*, and have libsystemd0 enabled or disabled *at
will*.  even with editing an /etc/ config file.

 that this is not even possible *is* why i will not stop - andrew -
until it is.  have i made myself clear?

l.





Re: how to remove libsystemd0 from a live-running debian desktop system

2015-02-17 Thread Luke Kenneth Casson Leighton
On Tue, Feb 17, 2015 at 5:58 PM, Andrew Shadura and...@shadura.me wrote:
 Hi,

 On 17 February 2015 at 18:20, claude juif claude.j...@gmail.com wrote:
 Really rude answer. Really bad.

 I find it really rude to send emails of about 300 lines of text in
 total. Extremely rude.

 i did apologise in advance, and explained why i took the steps that i
did.  if you are unable to accept that apology, i cannot help you
with that, andrew (as in: i recognise that i have no right to
interfere with your choice of mindset): it is your decision to choose
what to think and what to react to (positively or otherwise), and i
have to respect that.

 however as this is a public forum for discussing debian, and there
are thousands of people reading this and many more in the future,
apart from apologising for taking up so much time in distractions of
this kind i am not going to get involved further into discussions of
etiquette, if that's ok.

l.





Re: how to remove libsystemd0 from a live-running debian desktop system

2015-02-17 Thread Luke Kenneth Casson Leighton
adam, i apologise for not being in a position to reply in-thread: as
mentioned previously i tried (via gmane) but the entire discussion is
completely missing, and i forgot to ask people in the original post to
cc me if they would like an ongoing threaded reply.

i also notice that you removed debian-user, so for those people on
that list who (like me) were completely unaware of the fantastic work
that you've done, here is a link to the archives containing what you
wrote:

 https://lists.debian.org/debian-devel/2015/02/msg00189.html

all i can say is, HOORAY!  and thank you for doing properly what i
only hinted was possible.  i wish i had known of what you've done,
even a few days ago.  i would have:

(a) not have had to mess up my system
(b) would not have written the slashdot report
(c) would not have heard from so many people who have put links to my
report onto their site
(d) not been in a position to further advocate your fantastic work (to them)

so... actually.. if you think about it, it's a good thing.

if you don't mind i'm going to contact several people who maintain web
sites and lists in order to have them add your work to them.

which should help answer the question you asked: your work - fantastic
as it is - was *impossible to find*.  it doesn't even remotely come up
on the radar of queries.  *nobody knows what you've achieved* and
that's something i would like to help correct.

now, exactly as you, i and russ point out, the next phase is to do
dynamic library loading.  i'm absolutely delighted to note that you
have a handle on this, already, and i see you make it clear that
you've thought it through already.

i plan to write directly to the systemd developers, taking at face
value the recent announcement that they listen to users.  is there
anything that you would recommend in particular that i include?

well done, and thank you for making my hacks completely irrelevant in
under 24 hours.

l.





Re: how to remove libsystemd0 from a live-running debian desktop system

2015-02-16 Thread Luke Kenneth Casson Leighton
On Mon, Feb 16, 2015 at 11:42 AM, Christian Seiler christ...@iwakd.de wrote:
 On 16.02.2015 at 02:54, Luke Kenneth Casson Leighton wrote:

 http://lkcl.net/reports/removing_systemd_from_debian/


 It's funny that when Wheezy (not Jessie!) came out, nobody complained
 that libsystemd-login0 (which is now part of libsystemd0) was a
 dependency of dbus, so it is probably already installed on most desktop
 systems running current Debian stable.

 i'll hazard a guess that it's because they had no idea that, in the
very near future, all the major desktop developers and all the major
distros would make the unilateral decision to hard-code the
*exclusive* use of systemd (or parts of it).

 my assessment is that it's that total lack of choice that is causing
people to get so upset.  but there's no need to get upset about it:
*we didn't know*. nobody could have predicted how far this would go,
so quickly.

 so the question then becomes: at a fundamental level (in a
distro-agnostic way) how to go about giving people a proper choice (to
run systemd and associated components, or not)?

 l.





how to remove libsystemd0 from a live-running debian desktop system

2015-02-15 Thread Luke Kenneth Casson Leighton
http://lkcl.net/reports/removing_systemd_from_debian/

i've documented the process by which it is possible to run some of the
debian desktop window managers (TDE, fvwm, twm etc.) without the need
for systemd or libsystemd0 or any components related to systemd
whatsoever.

the process is not without difficulties; however, out of (at the time
of writing) only two people who have followed this procedure, i was
the one that ended up having to disable udev: the other individual had
a working system (devoid of libsystemd0) purely by following only the
instructions to alter and replace bsdutils from the util-linux
package.

the reasons for demonstrating that this is possible have absolutely
nothing to do with my *personal* (technically-based) dislike of
systemd, although my reasons for actually removing libsystemd0 from
personal systems *are* based on a technical assessment (mostly with a
sysadmin eye).  but, i repeat: my *personal* choice has *nothing to
do* with the reason for posting this documentation.

the reason for demonstrating that this is possible is because nobody
has yet made it clear to either the upstream developers - or to the
distro maintainers who unfortunately are caught in the crossfire -
that systemd's unilateral adoption  is fast becoming an
all-or-nothing polarised choice that reminds me keenly of the
polarised Microsoft Monopoly power and dominance of the late 1990s.

and that *really is it*.  the technical issues are completely
irrelevant: those can and will be solved.  already we have eudev,
mdev, devuan, uselessd and many more, but those technical options are
*COMPLETELY SHUT OUT* by the exclusive - monopolistic - position that
systemd now has.

to illustrate the dominance of libsystemd0, if you carry out an
apt-get --purge remove libsystemd0, *all* of the packages and many
more on the following PNG will be removed:

http://anfo.slavino.sk/libsystemd-journal0.png

that list is woefully incomplete, so i have generated a current list
using "apt-rdepends -r libsystemd0 | some manual magic | sort | uniq":
http://lkcl.net/reports/removing_systemd_from_debian/list_of_libsystemd0_dependent_packages.txt
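
(the "manual magic" was essentially just filtering the apt-rdepends output
down to bare package names; a rough reconstruction - not the exact command
used - looks something like this:)

    # one package name per line, everything that transitively depends on libsystemd0:
    apt-rdepends -r libsystemd0 2>/dev/null | grep -v '^ ' | sort -u \
        > libsystemd0_rdeps.txt
    wc -l libsystemd0_rdeps.txt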

the list is a whopping 4,583 packages (from the current
debian/testing).  apache2-dev, androidsdk, apt-cacher-ng,
avahi-daemon, blender, bluetooth, bochs, cairo-dock, calligra,
consolekit, cups-daemon, cups-core-drivers, cups-driver-gutenprint,
dbus - these are just a few of the major software libre packages i can
see in the first 9% of the list that are affected (cannot be installed)
should anyone exercise their right to choose *not* to have libsystemd0
on their machines.

even dh is on the list.  erlang is on the list!  kde, gimp, xfce,
lxde, gnome, libreoffice, xine, mediawiki, mplayer, network-manager,
openjdk-7, phonon, php (??? why is php dependent on libsystemd0??),
pidgin, policykit-1, postfixadmin (??), pulseaudio, qemu, syslog-ng,
vlc, wicd (client and server), xbase-clients (??), x11-apps (??),
xbmc, xchat... those are just ones that i recognise out of the 4,500+
packages that are not permitted to be installed.

so the short and long of it is: i do not like it when people are not
given the freedom to choose...  and that includes when, just like when
microsoft was so dominant in the 1990s, the choices they are presented with
are not really a choice at all.  what i have done therefore is to show
how to modify the debian packages for policykit-1, dbus, pulseaudio
and util-linux, such that libsystemd0 may be entirely removed.
removal of libsystemd0 from those packages trims that list of several
thousand unilaterally-excluded packages *significantly*.

this process comes with a price: i had to disable udev, and i had to
re-enable the keyboard and mouse sections in xorg.conf that i had
added years ago.  however, already within hours of the report's
publication i have received word from one other person who did *not*
have the same extensive difficulties that i encountered: udev
(unmodified) worked perfectly for them.  in a follow-up message they
did however explain that they have successfully installed and then
removed (at an earlier point) a source-compiled version of mdev, which
illustrates that they have some quite significant experience in
maintaining a hybrid of standard debian packages and system-critical
packages compiled directly from source.

so, in short, i have two key things to say.

to debian-users: you don't have complete choice (yet), but i have
demonstrated with a few hours work that there is a way to run
(certain) desktop environments without requiring libsystemd0 or any of
its dependencies, and after a little investigation there do appear to
be people working hard to give you your right to choose what software
to run *without* having to abandon debian.

to debian-developers: the technical issues are irrelevant (and can
always be solved over time) - it's that you are complicit in removing
people's software freedom right to choose what to run on their system:
that is why so many users 

Re: arm64 update - help wanted

2014-05-17 Thread Luke Kenneth Casson Leighton
  suggestion, wookey: i'd love to help... but obviously with no
 hardware that's kinda hard: is there a clear set of instructions
 somewhere - a wiki page for example - on how to debootstrap an arm64
 qemu so that even if it's dead slow it's still possible to help out?

https://wiki.debian.org/Arm64Qemu

wookey, i found this - it's not entirely clear (lots of options, not
step-by-step): what is relevant to getting up and running? is the page
still relevant? has aarch64 made it into the latest debian qemu package
yet?
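
(for anyone else wanting to try in the meantime, the general shape of it is
below - the suite, mirror and keyring details are assumptions on my part,
and the wiki page above is the authoritative reference:)

    sudo apt-get install qemu-user-static binfmt-support debootstrap
    # qemu-debootstrap runs the second stage under qemu-aarch64-static;
    # arm64 lived in debian-ports at the time, so the debian-ports
    # archive keyring (or disabling the gpg check) may be needed
    sudo qemu-debootstrap --arch=arm64 unstable ./arm64-root \
        http://ftp.ports.debian.org/debian-ports
    sudo chroot ./arm64-root /bin/bash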

tia,

l.





Re: arm64 update - help wanted

2014-05-17 Thread Luke Kenneth Casson Leighton
On Thu, May 15, 2014 at 2:10 AM, Wookey woo...@wookware.org wrote:
 The debian-port arm64 rebootstrap is progressing nicely, and we just
 passed 4200 source packages built, with another few hundred
 pending. There are now 2 buildds running.

 awesome

 Thus I'd love it if anyone else could help go through the failures
 pile and file bugs, or upload old existing ones, or classify them on
 the wiki. Or if they happen to be your packages then just fix them :-)

 I've put some links on the wiki page
 https://wiki.debian.org/Arm64Port#Bug_tracking

 suggestion, wookey: i'd love to help... but obviously with no
hardware that's kinda hard: is there a clear set of instructions
somewhere - a wiki page for example - on how to debootstrap an arm64
qemu so that even if it's dead slow it's still possible to help out?

 l.





Re: ARM port(s) BoF at DebConf

2012-07-20 Thread Luke Kenneth Casson Leighton
On Fri, Jul 20, 2012 at 12:54 AM, Hideki Yamane henr...@debian.or.jp wrote:
 Hi,

 On Thu, 19 Jul 2012 18:35:44 +0100
 Steve McIntyre st...@einval.com wrote:
 buildds
 ===

 Both armel and armhf are doing well, covering ~96% of the archive. We
 don't have any ARM server hardware yet, so we're stuck using
 development boards as build machines. They work, but they're a PITA
 for hosting and they're not designed for 24x7 usage like we're doing
 so they're not that reliable.

  As I've posted during DebConf(*), Maybe OpenBlocks can solve this problem.
  It has 2GB RAM, reliable production use and we can buy it NOW.

  *) http://lists.debian.org/debian-arm/2012/07/msg7.html

 hideki, those look superb.  summarising (in case anyone's missed it):
they're armv7 compatible because they're using a marvell xp processor;
they're up to dual-core 1.4ghz and the company openblocks can do them
with up to 3gb of RAM, and i gather the openblocks boxes have a mini
pci-e port as well as gigabit ethernet.

 i'm including arm-netbooks because there are almost certainly
people on that list who would be interested in a group buy.  there has
been quite a bit of interest in getting hold of modular computing
devices for rack-mounted server usage.

 l.





Re: ARM port(s) BoF at DebConf

2012-07-19 Thread Luke Kenneth Casson Leighton
On Thu, Jul 19, 2012 at 6:35 PM, Steve McIntyre st...@einval.com wrote:

 Both armel and armhf are doing well, covering ~96% of the archive. We
 don't have any ARM server hardware yet, so we're stuck using
 development boards as build machines. They work, but they're a PITA
 for hosting and they're not designed for 24x7 usage like we're doing
 so they're not that reliable.

 there was a post on the arm-netbook mailing list about a 7W quad-core
tegra3-based mini ITX motherboard which could take up to 2gb of RAM.
whether it's the usual
let's-put-something-out-there-see-if-anyone-is-actually-interested
style of vapourware or actual reality, i'd strongly suggest someone
gets onto them and considers putting together a group buy / bulk order
just to make it worth their time to make a batch, because it's
literally the first ARM-based machine i've ever heard about that can
actually take 2gb of RAM.

 oh: also the motherboards have eSATA and uPCI-e, hm let me find the
post here you go:

 http://lists.phcomp.co.uk/pipermail/arm-netbook/2012-July/005127.html

 btw, steve: it's not the c++ doing linking in swap that's the
problem, it's trying to do *debug* builds of c++ applications that's
the problem.  webkit for example requires a minimum of 1.4gb of
resident RAM for the linker phase if you enable debug builds.  i have
mentioned a number of times that it's debug builds that are the
problem, and that all you have to do is disable debugging (*1) and the
build will complete within 15 minutes instead of 15 hours, but as
usual, because it's "that fucking moron lkcl telling us what the fuck
to do", nobody bothers to listen.  well, keep on not listening for as
long as you (plural) find it useful to do so: i'm not the one stuck
with the PITA of waiting 18 hours for a debug build of a c++ app to
complete, am i, eh? *eyebrows-arched*

l.

(*1) and if someone _really_ wants a debug build of that particular
problematic package, on a build and distro port that's still
experimental, well, surely they can compile it themselves, using their
own resources, yes?
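
(to make "just disable debugging" concrete for a one-off local build -
assuming the package honours dpkg-buildflags, which not all did at the
time; a sketch, not a policy recommendation:)

    # build with no DWARF at all so the linker never has to hold it:
    DEB_CFLAGS_SET="-O2 -g0" DEB_CXXFLAGS_SET="-O2 -g0" \
        dpkg-buildpackage -us -uc -b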





Re: ARM port(s) BoF at DebConf

2012-07-19 Thread Luke Kenneth Casson Leighton
On Thu, Jul 19, 2012 at 9:15 PM, Adam D. Barratt
a...@adam-barratt.org.uk wrote:
 On Thu, 2012-07-19 at 20:09 +0100, Luke Kenneth Casson Leighton wrote:
 On Thu, Jul 19, 2012 at 6:35 PM, Steve McIntyre st...@einval.com wrote:
  Both armel and armhf are doing well, covering ~96% of the archive. We
 [...]
 (*1) and if someone _really_ wants a debug build of that particular
 problematic package, on a build and distro port that's still
 experimental, well, surely they can compile it themselves, using their
 own resources, yes?

 Neither wheezy nor the armhf port contained in it are experimental.  If
 that's not what you meant, please be clearer.

 yes i used the wrong word: apologies.  i was trying to convey the
following in a concise way, and chose the word experimental, which i
realise in hindsight doesn't cover half of it: doesn't yet have as
many users as e.g. i386/amd64, hasn't been around as long as
i386/amd64, hasn't got hardware that the average user can buy at a
spec approaching that of i386/amd64 yet, and doesn't have as many
packages successfully and reliably building as i386/amd64.

 btw continuing on the thread on debian-arm (only) i put forward a
[temporary!] procedure for review which is an interactive balancing
act to relieve the burden of having excessive linker-related loads,
moving it down instead to later inconvenience for users.  of course,
if the package is perfect and there *aren't* any bugreports then the
interim proposed procedure has done its job.
http://lists.debian.org/debian-arm/2012/07/msg00073.html

 l.





B2G security model (debian package management recommended) - help and advice needed

2012-03-21 Thread Luke Kenneth Casson Leighton
folks, hi,

please take a deep breath before reading.

i'm keenly aware of the view that many people hold of me in debian.
that i'm even bringing something to your attention and asking for your
help (not for me, personally) should therefore tell you a lot more
than needs to actually be said.

i'm presently coming up to nine hours of continuous non-stop typing
(just today) to deal with something on the B2G developer mailing
lists: the security model, and it's getting to the point where i'm in
serious need of some highly-technical (as well as social) assistance.

your patience greatly appreciated whilst i explain.

B2G is a new operating system where the Gecko web front-end doubles up
(triples up?) as the window manager as well as the applications
runner.  they started from the android codebase, ripped out java,
ripped out webkit, dropped gecko in place using OpenGL _directly_
writing to the framebuffer, and then went "ok, now we got the basics,
let's make this work".

as a concept, i find this both fascinating and exciting.  the
resources of the mozilla foundation with the non-profit-orientated
motivation to create an open alternative to google's android?
fantastic!

by the time they're done, there will be *another* FOSS operating
system out there - one with the potential to reach a hundred million
units or more, through mass-volume sales world-wide, via numerous
telephone companies.  applications could be developed by people that
could take off just as fast as any android or iphone application:
potentially millions of downloads per day if an app becomes highly
popular.

and the whole thing *won't* be tied to a particular vendor, or to a
profit-maximising company.  the mozilla foundation is a non-for-profit
organisation, so has the potential to take into consideration aspects
that are *not* over-ridden by profit maximisation (such as increasing
revenue stream).

i have every confidence in the mozilla foundation to deliver the goods
on the U.I, on the apps example development, and much more besides.

what's of fundamental and deep concern to me is their ability to
comprehend the security implications of what they're doing.

and, you *know* me - it should come as absolutely no surprise to any
of you to learn that even the mozilla foundation is deploying
increasingly-hostile censorship measures to prevent me from
communicating with them.

ok, you can laugh now.  go on.  yeah, i know.  i did it again.

happy now? :)

but seriously: it should be reasonably clear as to why i've persisted
with this *despite* increasing resistance, and it should speak volumes
that i'm asking (requesting NOT demanding) - publicly - for people to
wander over and take (CONSIDER taking) a look at the security
discussion taking place.

why am i asking [requesting-not-demanding] that _you_ - debian
developers - get [CONSIDER getting] involved?  it's because the
application distribution model that i know best (that will also fulfil
the requirements) is... yep, debian.

why i am so deeply concerned about the discussion on the b2g-dev
mailing list has nothing to do with me, but has everything to do with
what the people in the mozilla foundation - those actively involved in
the b2g decision-making process - are recommending be used for
application distribution. it's SSL!

now, i know you know how debian package management works.  imagine if
someone on a debian list somewhere went "hey guys!  you know that
infrastructure you use to distribute debian packages?  i've got a
great idea for a replacement for the system you've been establishing
and refining for almost 20 years, it's called SSL!"

now that you've picked yourself up off the floor, either from shock or
from laughing so hard you couldn't stand upright, you may be wondering
if i've made some sort of hilarious techie joke.  i assure you i
haven't.

what _would_ be funny though would be if, after reading this, someone
who *really* knows how SSL well and truly works actually said "yes,
here's a workable technical solution which uses SSL and delivers the
goods, i did a complete paper on it, it easily scales to the numbers
you describe, but didn't mention it here on the debian lists ever
because they already have a fully-working solution, and i didn't wish
to make an ass of myself like you do on a regular basis, mr lkcl."

then i would be very very happy that the joke was entirely on me.

... but my instincts and experience with SSL (so far) and with the
debian package management system (so far), tell me otherwise (to
date).
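
(for anyone who hasn't dug into how apt actually does it: the security
lives in the gpg signature over the archive metadata, not in the
transport.  a rough sketch of the idea - the filenames are made up, and
a real implementation would obviously use a properly-managed keyring:)

    # sketch only: verify a detached gpg signature over a package manifest,
    # then check a downloaded file against the hash listed in that manifest.
    # the filenames ("Packages", "Packages.gpg", the .deb) are illustrative.
    import hashlib
    import subprocess
    import sys

    def verify_manifest(manifest="Packages", signature="Packages.gpg",
                        keyring="trusted-keys.gpg"):
        # gpg exits non-zero (raising CalledProcessError) if the signature
        # is bad or the signing key is not in the keyring
        subprocess.check_call(["gpg", "--no-default-keyring",
                               "--keyring", keyring,
                               "--verify", signature, manifest])

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def check_package(deb, expected_sha256):
        # expected_sha256 comes out of the *verified* manifest, never from
        # whatever server the file happened to be downloaded from
        if sha256(deb) != expected_sha256:
            sys.exit("hash mismatch: refusing to install %s" % deb)

no ssl anywhere in that, notice: the transport is completely untrusted.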

the problem is: i've rather comprehensively fucked up relations
with... i think it's now about... ooo, getting on for 30+ people in
mozilla, who are getting so incensed that i'm quotes telling them what
to do quotes that in true terry pratchett style it's going beyond
annoying, gone out the other side and is bordering on becoming funny.

i think saying i know there are many of you who wish i was dead so
that i'd stop typing probably did it, this time.

anyway, enough of 

Re: linux-image-2.6.39 not booting due to older package (not in list of dependencies!)

2011-08-06 Thread Luke Kenneth Casson Leighton
On Tue, Aug 2, 2011 at 2:05 PM, Luke Kenneth Casson Leighton
luke.leigh...@gmail.com wrote:

 now, i've discussed this on the bugtracker and there clearly isn't -
 and really shouldn't be - a listed debian dependency between
 linux-image-2.6.39 kernel and a userspace library.  however, there
 clearly *is* a dependency because It Don't Wurk (tm).

 so the issue is: how the bloody hell should this clear dependency be
 expressed in Debian Dependency terms, such that nobody else runs
 smack into this same issue?

 ok i spoke to phil hands, and asked his advice: apparently there's
something called Breaks: which would do the job.
  http://www.debian.org/doc/debian-policy/ch-relationships.html#s-breaks

 this fulfils the requirements, namely that if you haven't got the
package installed, it's irrelevant, but if you have, then the version
must, clearly, be greater than N in order to work.

 thus, it would appear to be the case that the *older* libdevmapper
library must have Breaks: ( linux-image-2.6.39) and this will force
the installation of the newer libdevmapper *before* going ahead with
the installation of linux-image-2.6.39.

 why must that be the case?  very simple: if libdevmapper happens to
be upgraded at the same time, and happens to get unpacked *after*
initramfs-tools gets triggered [in the postinst?], then you have the
nasty situation where the new (correct) library is correctly
installed... but it was the *older* libdevmapper that was dropped into
the initrd at the time of the 2.6.39 kernel upgrade.  and that's known
to be Bad (tm).
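
 (to illustrate the ordering requirement rather than the fix itself:
here's a quick sketch - the threshold version is purely illustrative -
of the check that a versioned Breaks: effectively gets dpkg to enforce
for you, declaratively, before anything rebuilds the initrd:)

    # sketch: check that the installed libdevmapper is new enough *before*
    # rebuilding the initrd.  "1.02.48" is an illustrative threshold, not
    # the real cut-off version.
    import subprocess
    import sys

    PKG = "libdevmapper1.02.1"   # binary package name, from memory
    MINIMUM = "1.02.48"          # illustrative minimum version

    def installed_version(pkg):
        out = subprocess.check_output(
            ["dpkg-query", "-W", "--showformat=${Version}", pkg])
        return out.decode("utf-8").strip()

    def new_enough(version, minimum):
        # dpkg --compare-versions exits 0 when the relation holds
        return subprocess.call(
            ["dpkg", "--compare-versions", version, "ge", minimum]) == 0

    if not new_enough(installed_version(PKG), MINIMUM):
        sys.exit("old %s still installed: refusing to rebuild the initrd"
                 % PKG)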

 the other nice thing about Breaks: is that it's the opposite of
Conflicts: i.e. if you were to use Conflicts: it would have to go
into the linux-image-2.6.39 package, and that would be just a bit...
weird.

@begin ot
[plus, ben has completely ignored that he's been terribly insulting
and believes that my responses pointing this out are themselves in
fact insulting, and that all this ego insultingness including saying
you are a complete liar, your bugreport *must* be worthless is
something that justifies completely ignoring any further input, which
is, itself insulting to say the least.  thus, any further input from
ben cannot be expected, and thus the way to fix the issue is to go
from the other end i.e. fix the issue using Breaks:.  *sigh*.  i
really must actually try acting like the egofuckingmaniac that people
believe i am, one of these days.  perhaps if i pointed out more often
that peoples' behaviour is very insulting rather than assuming that
they know that, things would go a bit smoother.  trouble is that i
just don't notice the things that other people would, ordinarily, be
completely outraged by, consequently get blamed rather a lot for being
pathologically honest and blunt.  oh well let's stop here, eh?]
@end ot

 also it's not entirely clear (whereas Breaks: definitely is) that
the use of Conflicts: would trigger a complete upgrade of
libdevmapper before proceeding with the installation of the 2.6.39
kernel (or more to the point, proceeding with the initrd recreation).

 perhaps somebody with a bit more experience of how Breaks: and
Conflicts: work would like to comment, thus ensuring that this issue
is resolved in the best possible way for the benefit of the debian
free software community? (*)

l.

(* i.e. ignoring that the report is coming from someone whom many
debian developers feel is an aggravating little shit who needs taking
down a peg or two by deliberately seeing the absolute worst in
whatever they write in order to *deliberately* create situations where
afore-mentioned little shit can be proven wrong har har, because this
particular aggravating little shit can in fact take care of his own
system, whereas there are many debian users who, when presented with
this same issue would find themselves completely lost)





Re: Does anyone care about LSB on arm?

2011-06-01 Thread Luke Kenneth Casson Leighton
On Tue, May 31, 2011 at 5:22 PM, Wookey woo...@wookware.org wrote:

 In my experience anyone distributing binaries actually picks a small
 set of distros and builds for those explicitly, rather than relying
 on the LSB. Does that mean that it's not actually useful in the real
 world? I guess in a sense this posting is to the wrong lists; we're
 all free software people here who have little use for the LSB. Where
 do the proprietary software distributors hang out :-)

 the proprietary software distributors hang out around USA lawyer
offices, where they get advice on how to perform tivoisation without
anybody noticing.  they then ship TVs and even 3G modems with embedded
linux kernels and custom OSes... and nobody notices.

 my take on this is that ARM is still just emerging from the
uselessness of sub-600mhz ARM9s and ARM11s as far as general-purpose
computing is concerned [laptop / desktop etc. *not* true embedded
purposes obviously: don't get upset, ARM employees, because mr LKCL
said your processors were quotes useless quotes - read it again: it's
a *conditional* description].  also, the sheer diversity of SoCs plays
directly, psychologically, against anyone joining forces on things
like LSB.  thus the majority of proprietary software distributors up
until recently have been doing custom-built from scratch software
stacks [using e.g. buildroot, openembedded] and thus LSB was and still
is completely useless to them.

 even android is custom-built, and everything (except the
highly-optimised apps - for ARM - which are becoming more common) is a
java app.

 that having been said, 500mhz+ Dual-Core Cortex A9s already out which
knock the stuffing out of 1.6ghz Intel Atoms (yes, saw the youtube
video) mean that could just be about to change, completely.

 sooo... although the situation *right now* is that nobody in the
commercial world is the slightest bit interested in LSB because they
all do custom builds of complete software stacks, it could be said
that *if* the free software community just dropped ready-to-go LSB
standards in front of their noses, they'd quite likely use it.

 you have to remember that the majority of these companies could not
put two lines of code together to save their lives.  they literally
have to be spoon-fed (in some cases even to the point of being told
where to put the screws, let alone the software).  they are usually
spoon-fed by the CPU manufacturer [and in the case of MStar Semi, they
won't even let *you* violate the GPL, they do it entirely for you].

 so in that regard, i think it's more a case of if the free software
community provides LSB across ARM, it'll get used.

 so in _that_ regard, the question becomes: are the efforts of the
free software community better off being spent elsewhere?  and what
benefit is there *TO THE FREE SOFTWARE COMMUNITY* of doing LSB for
ARM?  forget the proprietary junkies, they'll suck anything from us
that moves and not give a dime in return.

l.





Re: Distributed Debian Distribution Development

2010-09-02 Thread Luke Kenneth Casson Leighton
On Wed, Sep 1, 2010 at 6:47 PM, Yaroslav Halchenko
deb...@onerussian.com wrote:
 Just - Wow... thanks!

 Hopefully digesting of this tasty post would not cause too much of farting ;-)

 :)

 seems might be worth adding (if I am not missing the point), then the
 concept of derivatives would then converge finally to a more
 digestible, more manageable, and thus more robust mechanism of
 branches... ?

 ahh, you're still using git: i'll be doing nothing more fancy than
creating a git-remote-helper which will add a protocol gitp2p:// or
gpp:// or something like that.

 so you'll still need to understand the concept of branches and
branching: it's just that you'd be able to grab them from peers rather
than being reliant on a central server.
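
 (for anyone who hasn't written one: a remote helper is just a program
named git-remote-SCHEME that git finds on the PATH and talks to over
stdin/stdout.  a bare-bones sketch of the shape of it - the actual
peer-to-peer transfer is deliberately a stub, and the protocol handling
is pared right down:)

    #!/usr/bin/env python
    # skeleton of a git remote helper: installed as "git-remote-gitp2p"
    # somewhere on the PATH, git runs it for gitp2p:// urls and speaks a
    # line-based protocol over stdin/stdout.
    import sys

    def fetch_from_peers(sha1, refname):
        # placeholder: this is where objects would be pulled from peers
        # (bittorrent-style) into the local object store
        raise NotImplementedError("p2p transport not written yet")

    def main():
        pending = []
        for line in sys.stdin:
            cmd = line.strip()
            if cmd == "capabilities":
                # advertise what the helper can do; blank line terminates
                sys.stdout.write("fetch\n\n")
            elif cmd == "list":
                # refs learned from peers would go here, one per line;
                # an empty list is just the terminating blank line
                sys.stdout.write("\n")
            elif cmd.startswith("fetch "):
                pending.append(cmd.split(" ", 2))
            elif cmd == "" and pending:
                # a blank line ends a batch of fetch commands
                for _, sha1, refname in pending:
                    fetch_from_peers(sha1, refname)
                pending = []
                sys.stdout.write("\n")
            sys.stdout.flush()

    if __name__ == "__main__":
        main()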

 imagine a situation where you meet a fellow debian developer(s) and
you both(all) happen to be on holiday or just... somewhere where
connectivity is patchy.  i envisage a situation where both (or all)
publish their own trackers, and they can simply share whatever bits of
git repositories they (individually) happened to be most interested
in, personally, with everyone else.

for example, one person happens to have an OCD-fueled interest in
keeping up-to-date with every single mailing list, whilst another
happens to be interested in some obscure piece of software.  everybody
collaborates, does loads of packages, GPG signs them, commits them to
each others' git repositories, they all say bye bye nice meeting you,
get on planes or goats and the VERY first person who happens to have
internet access does a git push, wham, everyones' packages are
uploaded/available to the rest of the world.  including those people
who shared them originally, but they have the git commit refs already
_in_ their database, so when _they_ get online as well, they
automatically become seeds as part of the wider git-bittorrent
network for those packages.

so... yeah.  it's a little... radical, but actually nothing more than
a mind-shift rather than any actual significant coding.

l.





Re: Distributed Debian Distribution Development

2010-09-02 Thread Luke Kenneth Casson Leighton
On Thu, Sep 2, 2010 at 1:44 PM, Oscar Morante spacep...@gmail.com wrote:
 Have you seen this project [1]? It looks like they have been already
 thinking about the git+bittorrent idea.

 [1] http://code.google.com/p/gittorrent/

 yes.  it's effectively shelved.  the name gittorrent was abandoned
and the name mirrorsync selected, because the people working on it
decided that bittorrent was an inappropriate protocol to use.  i got
them some slashdot coverage.  but, ultimately, i disagreed with them
that bittorrent isn't an appropriate protocol, so that's why i did the
thingy and proved that it's an effective transfer / distribution
mechanism.  thingy.  haven't even picked a name for it! :)
suggestions welcome.

 i want to see how far i can get away with leaving the bittorrent
protocol _entirely_ unchanged, by virtue of doing everything through
this vfs layer jobbie.  only if it becomes absolutely absolutely
necessary, _then_ start making changes.

good reasons to make changes:

a) the 256k chunk size becomes blindingly obviously completely
unacceptable.  for example: cameron dale did a study of the .deb
archives and found that a very large percentage are under 32k in size.
 this was the primary reason why he abandoned the bittorrent protocol
(baby? bathwater?? *sigh*...) even after modifying it to be able to
negotiate chunk sizes and he designed apt-p2p instead.

b) ISPs start doing packet-filtering (fuckers) so it becomes necessary
to change headers, port numbers, permanently enable encryption at a
fundamental level such that deep packet inspection becomes impossible,
e.g. move to SSL and so on.

c) digital signing of individual commits becomes necessary, and...
somehow (i don't know how yet) it *hand-waving* becomes necessary to
integrate the GPG signature verification on git packs at the
bittorrent-protocol-level.  haven't got to that bit, yet - the
project's only been going for 4 days!  sam vilain solved this by
simply creating refs with the signers' 32-bit GPG fingerprint, but he
didn't get as far as actually _checking_ it :)
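
(on (c): the checking side at least is easy if you are prepared to use
plain gpg-signed tags - which git can already verify natively - instead
of sam's per-commit fingerprint refs.  a tiny sketch, function names
invented:)

    # sketch: refuse to advance a local ref to something received from a
    # peer unless it is covered by a gpg-signed tag that verifies against
    # the local keyring.
    import subprocess

    def tag_verifies(tag):
        # "git verify-tag" exits non-zero if the signature is missing,
        # bad, or made by a key we do not have
        return subprocess.call(["git", "verify-tag", tag]) == 0

    def accept_from_peer(tag):
        if not tag_verifies(tag):
            raise RuntimeError("peer offered %s without a valid signature"
                               % tag)
        # safe to merge / fast-forward from the tagged commit here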

l.





Re: upcoming issues with python-hulahop, python-xpcom, xulrunner-1.9.2

2010-07-14 Thread Luke Kenneth Casson Leighton
On Wed, Jul 14, 2010 at 8:52 AM, Mike Hommey m...@glandium.org wrote:
 On Tue, Jul 13, 2010 at 06:07:27PM +, Luke Kenneth Casson Leighton wrote:
 hi folks,

 i don't know if you're aware of the ... issues shall we say ...
 surrounding xulrunner 1.9.2 but there's a few changes going on.
 python-xpcom is being *dropped* from xulrunner as a first class
 citizen and is being turned into a third-rate one.  this isn't a
 problem right now because debian releases versions of firefox that use
 xulrunner-1.9.1.

 the rdepends for python-xpcom include python-hulahop and
 pyjamas-desktop, epiphany-gecko, sugar-web-activity and so on.
 removal of python-xpcom basically screws these projects.

 epiphany-gecko is already gone.

 to make matters slightly worse, the mozilla team have dicked with the
 xpcom interface c-code as they focus all-out on speed-speed-speed to
 the absolute pathological exclusion of all else, in an attempt to
 catch up with webkit's increasing mindshare.  this decision is
 affecting all the language bindings (such as java-xpcom, python-xpcom
 and so on).

 so, right now, the situation is as follows:

 * if you upgrade firefox to a version which uses xulrunner-1.9.2,
 python-xpcom and its rdepends go out the window.

 * even if you happen to include the third party module
 http://hg.mozilla.org/pyxpcom as it is now known, xulrunner's XPCOM
 code has been brain-damaged to the extent that several key
 strategic things such as python bindings to XMLHttpRequest will no
 longer work.  todd whiteman has very kindly agreed to look at this,
 and to keep up with the brain-damage.

 For people interested in more intelligible information than the above
 rant,

 ( it's a good one, innit? :)

 some xpcom exposed interfaces in xulrunner have been tweaked
 such that they will only work when called from javascript. Such
 interfaces thus can't work from other xpcom bindings.

 yup.  appreciate the clarification, mike.

 basically, an interpretation of the decision from the mozilla
foundation is that all languages but javascript can get lost.  i do
not understand why, after years of support thanks to xpcom, _just_
when there's a project which actually _uses_ alternative language
bindings 100% and i meaaan 100%, the mozilla foundation slams the door
in its face and in the face of every other project using xpcom.

 it's not like there's a chance of any non-mozilla-foundation-funded
project having the money to maintain a parallel version of xulrunner
with a non-broken version of xpcom or anything.

 basically i wanted to apprise people of the situation, because, with
 xulrunner-1.9 being in debian/testing since pyjamas-desktop was added,
 any attempt to follow the mozilla foundation's headless-chicken
 meltdown moments means goodbye epiphany-gecko, sugar-web-activity and
 pyjamas-desktop.  and... that would be bad :)

 I guess you mean xulrunner-1.9.1.

 yes.  again, thank you for clarifying.

 xulrunner-1.9.2 is still in
 experimental and will stay there until squeeze is released.

 ok.  thank god.

 so unless the mozilla foundation see the light, basically all
projects that use python-xpcom must stick with xulrunner-1.9.1.

 that means that all linux distributions must maintain two parallel
versions of xulrunner, or that they must bundle a version of
xulrunner specifically dedicated to firefox _in_ firefox (just like
the stand-alone releases made by the mozilla foundation itself).

 or, all linux distributions must tell python-xpcom-dependent projects
to go to hell in a handbasket.

 those are some the options, and i'm interested to know a) if anyone
has any alternative ideas b) which way the debian project is going to
jump.

 i _could_ create and maintain and even submit a series of packages
xulrunner-1.9.1-to-deal-with-shortsighted-mozilla-foundation-decision,
python-xpcom-1.9.1-to-deal-with-shortsighted-mozilla-foundation-decision
and would be happy to submit patches to python-hulahop,
sugar-web-activity and pyjamas-desktop packages, if that would help,
but i am not entiiirely sure that would go down well :)

l.





upcoming issues with python-hulahop, python-xpcom, xulrunner-1.9.2

2010-07-13 Thread Luke Kenneth Casson Leighton
hi folks,

i don't know if you're aware of the ... issues shall we say ...
surrounding xulrunner 1.9.2 but there's a few changes going on.
python-xpcom is being *dropped* from xulrunner as a first class
citizen and is being turned into a third-rate one.  this isn't a
problem right now because debian releases versions of firefox that use
xulrunner-1.9.1.

the rdepends for python-xpcom include python-hulahop and
pyjamas-desktop, epiphany-gecko, sugar-web-activity and so on.
removal of python-xpcom basically screws these projects.

to make matters slightly worse, the mozilla team have dicked with the
xpcom interface c-code as they focus all-out on speed-speed-speed to
the absolute pathological exclusion of all else, in an attempt to
catch up with webkit's increasing mindshare.  this decision is
affecting all the language bindings (such as java-xpcom, python-xpcom
and so on).

so, right now, the situation is as follows:

* if you upgrade firefox to a version which uses xulrunner-1.9.2,
python-xpcom and its rdepends go out the window.

* even if you happen to include the third party module
http://hg.mozilla.org/pyxpcom as it is now known, xulrunner's XPCOM
code has been brain-damaged to the extent that several key
strategic things such as python bindings to XMLHttpRequest will no
longer work.  todd whiteman has very kindly agreed to look at this,
and to keep up with the brain-damage.

basically i wanted to apprise people of the situation, because, with
xulrunner-1.9 being in debian/testing since pyjamas-desktop was added,
any attempt to follow the mozilla foundation's headless-chicken
meltdown moments means goodbye epiphany-gecko, sugar-web-activity and
pyjamas-desktop.  and... that would be bad :)

yes, you guessed it: ubuntu have indeed been led by the mozilla
foundation on this merry chase, and have indeed just happily ripped
out all the python-xpcom rdepends in the hope that noone would notice.

so... just a heads-up.

l.





Re: pid file security

2010-06-05 Thread Luke Kenneth Casson Leighton
On Sat, Jun 5, 2010 at 2:26 AM, Russell Coker russ...@coker.com.au wrote:
 On Sat, 5 Jun 2010, Luke Kenneth Casson Leighton luke.leigh...@gmail.com
 wrote:
 apologies for butting-in without being able to continue the thread,
 but i've just seen this:
 http://advogato.org/person/etbe/diary/779.html
 which links to this:
 http://lists.debian.org/debian-devel/2010/05/msg00067.html

 http://etbe.coker.com.au/2010/06/04/securely-killing-processes/

 You're quick.

 :)  it was pure chance: i saw it in advogato recentlog, which flashes
by in a blur.

 http://sourceforge.net/projects/depinit/

 The above URL is one place to download depinit.  It's an init replacement that
 uses configuration files to give the details of services to start.

 yes.  it's worth explicitly mentioning that it's a parallel-capable
replacement for sysv init, and a bit more besides.  it doesn't use
inittab, for example.  in another project that he did, richard i think
even wrote his own /bin/login replacement because he didn't like the
ones that were on offer _either_ :) which he then fired off from
depinit, via a signal that is i believe generated for extraneous
key-presses such as ctrl-alt-delete or alt-left or alt-right; in this
way, pressing alt-right fired up another login console.

 depinit solved the fork-bombing issue because richard lightman was
 concerned about attacks on his internet-facing system.  richard added
 code which actively tracks child signals (depinit is highly unusual
 and innovative in that it catches ALL signals, and can therefore react
 _to_ any signal) and analyses the timing etc. and provides a means to
 trigger arbitrary scripts based on the signal type.

 How does it do that?  Does it ptrace them?

 i don't honestly know. richard was the complete frigging introverted
genius, here, not me :)

 http://etbe.coker.com.au/2010/05/16/systemd-init/

 How does [depinit] prevent processes escaping?

 reaally don't know.  apologies.

 richard also solved the security PID problem ... by doing away with
 the need for the PID file.

 That doesn't do away with the need for arbitrary programs to kill other
 arbitrary programs and not make a mistake about which program they are
 killing.

 yes.  correct.  i believe that depinit can manage / knows about only
the services which it initiates, and are under its immediate control,
by capturing all (16?) signals of its immediate child processes.

 FOREGROUND=True or whatever it is) and so on.  in this way, there
 simply _is_ no need for a PID file, period.  the relevant state
 information is contained within depinit itself, and you can guarantee
 that depinit will catch the signal.

 systemd does all that.

 excellent.

 and looked for unauthorised login attempts.  more than three of
 those occurring within a specified time, and iptables would be called
 to block that user's IP address.  voila: no delays due to syslog
 polling: instant and real-time attacker blocking, all using simple

 Does a program that uses inotify to wait for log file changes on disk
 experience any delay of note?

 ... no - you're right: it wouldn't.  so that would be a solution
but again, it would require an application that had that capability
[to use notify] - times however many services you wanted to react to,
in real-time.  so, an sshd-monitoring application would need to be
written (in c?) to wait for inotify; an apache2-monitoring application
would... etc. etc.

 however if that functionality was built-in to systemd, just as it is
already built-in to depinit, i.e. if the services which were fired off
foreground-style by systemd could have their stdin, stdout and stderr
redirected to applications/scripts, as specified by command-line
options to systemd...

 The systemd option of creating sockets before executing services that listen
 to them seems to offer the potential of more significant boot performance
 benefits than just starting things in parallel.

 that's got my eyebrows raised - how the heck does _that_ work?  i'm
both surprised and intrigued.

 ok, darn it - systemd seems to be a really bad name choice: google
search comes up with Systeme D, and also systemd on windows??

 ok, let's read this:
 http://etbe.coker.com.au/2010/05/16/systemd-init/

 okaaay, riight.  so.  ah ha.  it makes things quicker... by avoiding
starting the services _entirely_ :)  ok, so systemd is a merging of
the functionality of sysv init and also inetd.

 right.  let me think.  insights.  ok.  inetd.  usually inetd (and
presumably systemd) services have their stdin and stdout redirected to
the TCP/UDP ports, and you pass a specific option to the service to
tell it that you want that to happen.  that leaves stderr available
for error/info logging... so yes, systemd could in fact be enhanced to
do the same job as depinit [wrt the real-time reacting _without_
having to use polling or inotify].
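
 (to make that concrete, the contract for an inetd-style service really
is this small - a throwaway sketch of an echo service that would be
perfectly happy being started that way: requests on stdin, replies on
stdout, stderr left free for logging:)

    # sketch: minimal inetd-style service.  whatever starts it (inetd,
    # systemd, depinit) hands it the accepted connection as stdin/stdout;
    # the service never opens a port and never writes a pid file, and
    # stderr stays free for logging / real-time analysis downstream.
    import sys

    def serve():
        for line in sys.stdin:
            request = line.rstrip("\n")
            sys.stderr.write("request: %r\n" % request)
            sys.stdout.write("echo: %s\n" % request)
            sys.stdout.flush()

    if __name__ == "__main__":
        serve()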

 second: assuming that systemd is _only_ capable of starting up
services [as an inetd replacement] via redirecting stdin

Re: pid file security

2010-06-04 Thread Luke Kenneth Casson Leighton
apologies for butting-in without being able to continue the thread,
but i've just seen this:
http://advogato.org/person/etbe/diary/779.html
which links to this:
http://lists.debian.org/debian-devel/2010/05/msg00067.html

can i please gently remind people that depinit solved the security and
fork-bombing issues years ago.  i do keep mentioning depinit, on
debian lists, but there is typically absolutely zero response, which i
do not understand.  nevertheless, as a debian and free software
advocate i feel compelled to keep pointing people at solutions: it's
up to you to investigate them.

depinit solved the fork-bombing issue because richard lightman was
concerned about attacks on his internet-facing system.  richard added
code which actively tracks child signals (depinit is highly unusual
and innovative in that it catches ALL signals, and can therefore react
_to_ any signal) and analyses the timing etc. and provides a means to
trigger arbitrary scripts based on the signal type.

i recall a discussion with richard back in 2004/5 where he said that
when depinit is asked to stop a dependency/service, it does so by
first sending graceful signals, then goes on to take increasingly
aggressive action, including deciding, based on child-fork-bombing,
that a service has been corrupted and thus needs to be terminated with
extreme prejudice.

richard also solved the security PID problem ... by doing away with
the need for the PID file.  in other words, a service is _always_ run
in foreground mode.  if it dies (i.e. a segfault signal is caught),
the service is restarted automatically - by depinit (based on the
signal alone).  thus, the need for safe_mysql goes away entirely; the
need for apache2ctl start goes away (i.e. you use apache2 -c
FOREGROUND=True or whatever it is) and so on.  in this way, there
simply _is_ no need for a PID file, period.  the relevant state
information is contained within depinit itself, and you can guarantee
that depinit will catch the signal.

one additional incredibly useful action of this foregrounding
approach to services was that he added the means to connect dependent
services via pipes, between their stdin and stdout.

the advantage of the entire services approach that richard took in
depinit is phenomenal: richard created dependent services where in
real-time you could script sshd's stdout (logging output) into
_another_ service, which was a shell-script that analysed the contents
and looked for unauthorised login attempts.  more than three of
those occurring within a specified time, and iptables would be called
to block that user's IP address.  voila: no delays due to syslog
polling: instant and real-time attacker blocking, all using simple
shell scripts.  [the alternative - continuous polling and reading of
syslog entries - is just utterly messy, results in potential delays,
and requires that each and every polling program written for a
particular service understand the concept of syslog, how to read it,
how to read the last entries etc. etc.  just... messy.]
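
(to give a flavour of what those shell scripts were doing, here is a
sketch - in python rather than shell - of the analysing service on the
receiving end of the pipe.  the sshd log pattern and the three-strikes
threshold are from memory and purely illustrative, and the real thing
also applied a time window, which is left out here:)

    # sketch: reads sshd's log output line by line on stdin, exactly as
    # depinit would pipe it, and blocks an address after three failed
    # logins.  must run as root for iptables to work.
    import re
    import subprocess
    import sys

    FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
    failures = {}
    blocked = set()

    for line in sys.stdin:
        m = FAILED.search(line)
        if not m:
            continue
        ip = m.group(1)
        failures[ip] = failures.get(ip, 0) + 1
        if failures[ip] >= 3 and ip not in blocked:
            subprocess.call(["iptables", "-I", "INPUT",
                             "-s", ip, "-j", "DROP"])
            blocked.add(ip)
            sys.stderr.write("blocked %s\n" % ip)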

so i feel compelled to point these things out, along with the other
incredible benefits that depinit brings including _massive_ reductions
in startup time (25 seconds on a 1.5ghz Pentium 4 when debian was
doing about 90 at the time), and phenomenal near-unbelievable
improvements in shutdown time (2 seconds on a 1.5ghz Pentium 4 when
debian was doing about 60 at the time), as it pains me to see depinit
being totally ignored and these security and painful issues being
discussed _years_ after a solution has already been done, and proven
to be effective.

you are welcome to contact me and discuss this further, if i can
remember any of the details i will be glad to describe them, and if
necessary go dig out the depinit scripts that i created for a KDE
debian desktop system, 4 years ago.  which included solving the
udevsettle massive delay problems, by parallelising them and working
out the dependencies for critical startup services.

l.






Re: Very newbe help/pointers required about building a distribution from scratch

2010-03-11 Thread Luke Kenneth Casson Leighton
On Wed, Mar 10, 2010 at 9:52 PM, Neil Williams codeh...@debian.org wrote:
 *Precisely* what changes do you need for that architecture - is it
 really a different architecture from armel? (Answers to debian-embedded
 please.)

 hi neil,

 firstly thank you for the informative post, esp. the history about
actually _doing_ an architecture port and it taking over a year to
complete - even with your expertise and knowledge - that puts things
into perspective (that this _isn't_ like e.g. gentoo, openembedded or
linuxfromscratch)

 the OP, jonathon wilson (cc'd) wanted to recompile to take advantage
of ARM926EJ-S features and also OMAP3530 features (cortex A8); i want
to recompile to take advantage of S5PC100 (cortex A8) features.
[apparently there's quite a significant speed improvement to be had
because the cortex A8 architecture was actually done by a different
company as a from-ground-up reimplementation of ARM instruction set,
that ARM then bought up when the investor pulled out]

 based on what you've said, would you advise jonathon that it be
better to target the worst offender packages and where the most
bang-per-buck can be had - for example, recompiling the linux kernel to
take advantage of processor-specific optimisations, which is the most
obvious one i can think of.
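
 (for the userspace side of the same idea, something along these lines -
a sketch only: the -m options are illustrative, and it only helps for
packages that actually take their flags from dpkg-buildflags:)

    # sketch: rebuild a single package with cortex-a8 tuning appended to
    # the default build flags.
    import os
    import subprocess

    TUNING = "-mcpu=cortex-a8 -mfpu=neon -mfloat-abi=softfp"

    def rebuild(source_dir):
        env = dict(os.environ)
        env["DEB_CFLAGS_APPEND"] = TUNING
        env["DEB_CXXFLAGS_APPEND"] = TUNING
        subprocess.check_call(["dpkg-buildpackage", "-b", "-us", "-uc"],
                              cwd=source_dir, env=env)

    # usage: apt-get source somepackage, then rebuild("somepackage-1.0")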

 l.





Re: Very newbe help/pointers required about building a distribution from scratch

2010-03-10 Thread Luke Kenneth Casson Leighton
On Wed, Mar 10, 2010 at 8:18 PM, Lennart Sorensen
lsore...@csclub.uwaterloo.ca wrote:
 On Wed, Mar 10, 2010 at 07:20:04PM +, Luke Kenneth Casson Leighton wrote:
  yeah - i'd like to know how to do this, too.  i installed buildd (and
 wannabuild) but there appears to be some manual steps involved, and
 i was kind-of expecting it to be automatic and recursive.

  what i was expecting was that there was a simple way - e.g. grab all
 the packages of a task - and just shove them at buildd, and i was
 expecting it to just... go ahead and recursively grab all build
 dependencies and all source dependencies, right down to coreutils and
 build them all from the top down.

  a bit like openembedded.

  ... but there's absolutely nothing that can be found, like that: it
 seems more that buildd is designed to be a half-way house, which is
 kinda useless for this sort of task, creating entire specialised
 rebuilds (a la gentoo) for specific architectures.

  yes, basically, i want to rebuild an entire suite of debian packages
 for the arm cortex A8 processor (the S5PC100).

 A number of packages have circular dependencies.  These have to be
 resolved manually by either temporarily using packages built elsewhere
 or by manually building parts of a package to solve the dependencies.

 or by using e.g. debian armel packages a la cross-debootstrap
(rootstock under the dreaded ubuntu), that gets you into a position
where each of those dependencies can be replaced one at a time.

 You better have a good understanding of the debian packaging system and
 how dpkg-buildpackage works.

 It only really becomes automatic with wannabuild once you have a working
 base system.

 excellent.

 ... where is all this documented?

 has anyone actually done this - documented and automated e.g. how the
debian-armel port was created, when previously there was only the
debian-arm one?

 because it really does make sense to have a way to do automated total
recompiles for e.g. the cortex a8, and if debian won't officially
add that as an architecture, at least having a well-documented and
automated process by which a random person can just... set some
machines compiling for a month, would be good.





Accepted pyjamas 0.7~svn2052-1 (source all)

2009-10-17 Thread Luke Kenneth Casson Leighton
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Format: 1.8
Date: Sat, 17 Oct 2009 16:37:22 +0100
Source: pyjamas
Binary: pyjamas pyjamas-pyjs pyjamas-desktop pyjamas-ui pyjamas-doc 
pyjamas-canvas pyjamas-gchart
Architecture: source all
Version: 0.7~svn2052-1
Distribution: unstable
Urgency: low
Maintainer: Luke Kenneth Casson Leighton l...@lkcl.net
Changed-By: Luke Kenneth Casson Leighton l...@lkcl.net
Description: 
 pyjamas- Python web widget toolkit and Python-to-Javascript compiler
 pyjamas-canvas - Pyjamas Python port of GWTCanvas SVG Library
 pyjamas-desktop - Python web widget toolkit (Desktop version)
 pyjamas-doc - Python web widget set
 pyjamas-gchart - Pyjamas Python port of GWT GChart Charting and Graph Widget 
Libra
 pyjamas-pyjs - Pyjamas Python-to-Javascript compiler
 pyjamas-ui - Python Pyjamas Web Widget Set library
Closes: 501744
Changes: 
 pyjamas (0.7~svn2052-1) unstable; urgency=low
 .
   * Initial release (Closes: #501744)
Checksums-Sha1: 
 54b5e4389470af52d9bb8c0d32fdf6ce930d0631 1175 pyjamas_0.7~svn2052-1.dsc
 3366b8e03efa7cd3159855762d879f2cab264643 2348108 
pyjamas_0.7~svn2052.orig.tar.gz
 5b297c4c11edc757c6581efd146bd2ff429a8f85 17763 pyjamas_0.7~svn2052-1.diff.gz
 612e76e1001264230245ccc899fd649887034936 26004 pyjamas_0.7~svn2052-1_all.deb
 f47df6ba32dd34c1b1e5e607d71926fd6b70b2b8 220440 
pyjamas-pyjs_0.7~svn2052-1_all.deb
 5630c0dc3fd398ecc2df5f0b4fbce1e15288daa1 61688 
pyjamas-desktop_0.7~svn2052-1_all.deb
 0cd0bfcd2dd1781cc295e64aca574906237dad13 89968 pyjamas-ui_0.7~svn2052-1_all.deb
 5975146ab3f6cd994f4e804988bfd2cbaac8c052 387422 
pyjamas-doc_0.7~svn2052-1_all.deb
 de6798fba11ed979e7b6520713cb3615a96f0233 38502 
pyjamas-canvas_0.7~svn2052-1_all.deb
 eb299b258f85b3803ba2e5536006f2efbb6263e1 168422 
pyjamas-gchart_0.7~svn2052-1_all.deb
Checksums-Sha256: 
 be00d4737a8aa56bf0adf07856951d28bdfd57ca48c9ee472549e4aa816c9132 1175 
pyjamas_0.7~svn2052-1.dsc
 f297b0b800cfaa4aada8d2d99cfa9da58d8a83e67de58c683ca515091bb8507e 2348108 
pyjamas_0.7~svn2052.orig.tar.gz
 972e53dab05ad4cd431872b5d8e9ae92babbc8dc6906b7b91f4094fad47a 17763 
pyjamas_0.7~svn2052-1.diff.gz
 105260cd503346cb7ed9dfbe9e5ef8a088af3f633bcefffbb48dbd8deb85f892 26004 
pyjamas_0.7~svn2052-1_all.deb
 6507a40d4f45a40f43d14c102194b2099685d0e09cc3caf6f6ac46f2d8edb94b 220440 
pyjamas-pyjs_0.7~svn2052-1_all.deb
 cf86e83f52e55a53c4465d76365341b70c892fe38a19b9aba5cfea3145d62a77 61688 
pyjamas-desktop_0.7~svn2052-1_all.deb
 c10763c4892abe5d2017420b8a4e0ea7e1f7177ec29c410e88390e331995f936 89968 
pyjamas-ui_0.7~svn2052-1_all.deb
 88a4df710818f9868c48750847f5acadde57c03b442ce5faf1ebcb77fe2a298f 387422 
pyjamas-doc_0.7~svn2052-1_all.deb
 aeb05d970e04a7d5a6f981b8173ef09395d905a2486fb73ed460831071dbf3c2 38502 
pyjamas-canvas_0.7~svn2052-1_all.deb
 398659e18d8ab86110ae90f5a2ff9d342640711515970efca224be445c62198c 168422 
pyjamas-gchart_0.7~svn2052-1_all.deb
Files: 
 7d52ce006fc98d748678d705e1bf0334 1175 python extra pyjamas_0.7~svn2052-1.dsc
 6d9d3e799bf2d5091a582af0bdf690a2 2348108 python extra 
pyjamas_0.7~svn2052.orig.tar.gz
 dec6637a411306443a451c7a3707cf7b 17763 python extra 
pyjamas_0.7~svn2052-1.diff.gz
 c2e47bffe31131136e5960529c80a29d 26004 python extra 
pyjamas_0.7~svn2052-1_all.deb
 0341edd13a4f6b2b4dae025c6aa1fe1e 220440 python extra 
pyjamas-pyjs_0.7~svn2052-1_all.deb
 d653f306bbfb71b795bd8cb3f44d4871 61688 python extra 
pyjamas-desktop_0.7~svn2052-1_all.deb
 2baa8bf39cd52442313f10154150c4d5 89968 python extra 
pyjamas-ui_0.7~svn2052-1_all.deb
 42a1450f22256b7f23904c94b97385d7 387422 doc extra 
pyjamas-doc_0.7~svn2052-1_all.deb
 c72c83f5966eec16a7944457b97700c3 38502 python extra 
pyjamas-canvas_0.7~svn2052-1_all.deb
 98aa4c414dae3bb833237deff7ac1e34 168422 python extra 
pyjamas-gchart_0.7~svn2052-1_all.deb

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (GNU/Linux)

iEYEARECAAYFAkrZ9EQACgkQYgOKS92bmRBrRwCgjJpBiNskcYIxJnwAgXDYe2fI
eFcAoIqODM0JdpGpyjUIUxmRULnZ1mEj
=zFwd
-----END PGP SIGNATURE-----


Accepted:
pyjamas-canvas_0.7~svn2052-1_all.deb
  to pool/main/p/pyjamas/pyjamas-canvas_0.7~svn2052-1_all.deb
pyjamas-desktop_0.7~svn2052-1_all.deb
  to pool/main/p/pyjamas/pyjamas-desktop_0.7~svn2052-1_all.deb
pyjamas-doc_0.7~svn2052-1_all.deb
  to pool/main/p/pyjamas/pyjamas-doc_0.7~svn2052-1_all.deb
pyjamas-gchart_0.7~svn2052-1_all.deb
  to pool/main/p/pyjamas/pyjamas-gchart_0.7~svn2052-1_all.deb
pyjamas-pyjs_0.7~svn2052-1_all.deb
  to pool/main/p/pyjamas/pyjamas-pyjs_0.7~svn2052-1_all.deb
pyjamas-ui_0.7~svn2052-1_all.deb
  to pool/main/p/pyjamas/pyjamas-ui_0.7~svn2052-1_all.deb
pyjamas_0.7~svn2052-1.diff.gz
  to pool/main/p/pyjamas/pyjamas_0.7~svn2052-1.diff.gz
pyjamas_0.7~svn2052-1.dsc
  to pool/main/p/pyjamas/pyjamas_0.7~svn2052-1.dsc
pyjamas_0.7~svn2052-1_all.deb
  to pool/main/p/pyjamas/pyjamas_0.7~svn2052-1_all.deb
pyjamas_0.7~svn2052.orig.tar.gz
  to pool/main/p/pyjamas/pyjamas_0.7~svn2052.orig.tar.gz



#501774 - where should the library source go?

2008-11-23 Thread Luke Kenneth Casson Leighton
folks, hi,
with respect to RFP #501744 pyjamas package, i thought it best to
explain the code's layout and also ask some advice on where files
should be installed.

pyjamas is general-purpose compiler technology, not just a random
dumb tool with a single fixed - and unexpandable - purpose. the
libraries that come with the current versions of pyjamas are the
_first_ of their kind, but they certainly won't be the last.

the code is broken down as follows:

* pyjs/pyjs.py - the python-to-javascript compiler.  takes input file
name or stdin, outputs to stdout or output file.  module pathnames
_have_ to be given to it, to make sure that it DOES NOT read in any
python modules from STANDARD python2.4/2.5/2.6 locations. suitable for
general-purpose use.

* builder/build.py - a specialised builder which is targeted at
building web applications: it adds the default pyjamas-web library
path automatically; it adds a few pngs, gifs and outputs an html
template into a subdirectory.  NOT suitable for general-purpose use.

* library/pyjslib.py - a dedicated library that provides
javascript-equivalents of python builtins List, Dict, Tuple, str,
len etc. etc. suitable for general-purpose use.

* library/ui.py, DOM.py, Window.py etc - the web-specific libraries
that provide the basis for pyjamas widgets running _specifically_ in
web browsers.  like build.py, these library files are not appropriate
for general-purpose use.

so that's the background.  the equivalence, if this was gcc, would be
as follows:

* pyjs.py would be /usr/bin/gcc

* build.py would be... ohh... perhaps something like... autoconf.

* pyjslib.py would be /usr/lib/gcc/x86_64-linux-gnu/4.3.2/libgcc.a

* DOM.py would be ... ooo... /usr/lib/libglib.so and /usr/include/glib/glib.h

* ui.py would be... /usr/lib/libgtk.so

my question, therefore, is:  where in hell's name should these files
be installed

* pyjs.py is obvious: it goes into /usr/bin/pyjs.py

* build.py is slightly less obvious: it should be called /usr/bin/pyjswebbuild.py

* pyjslib.py i have absolutely NO clue about.
/usr/share/pyjamas/library/pyjslib.py ?
/usr/lib/pyjs/library/pyjslib.py ?

* DOM.py and ui.py etc. i have NO clue.
/usr/share/pyjamas/library/pyjamas/DOM.py ui.py Window.py etc. ?
/usr/lib/pyjs/pyjamas/DOM.py ?

these files MUST be kept the hell away from standard python: pyjamas
is a COMPLETELY different toolchain, it's a completely separate
compiler, that _happens_ to take its input as python and _happens_ to
output javascript.  pyjamas must neither be allowed to get
non-explicitly-given access to standard python (/usr/lib/python*,
/usr/share/python-support/*) nor must standard python be given access
to pyjamas libraries.
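
(to show what i mean by explicitly-given, here is a rough sketch -
emphatically NOT the real pyjs.py code - of module resolution done
purely from a handed-in list of library directories, with
/usr/share/pyjamas/library standing in for wherever the answer to this
question ends up being:)

    # sketch: resolve a module strictly from an explicit list of library
    # directories.  sys.path is never consulted, so the standard python
    # modules can never leak into the compiled output.
    import os

    def resolve_module(name, library_paths):
        filename = name.replace(".", os.sep) + ".py"
        for root in library_paths:
            candidate = os.path.join(root, filename)
            if os.path.exists(candidate):
                return candidate
        raise ImportError("%s not found in %r" % (name, library_paths))

    # e.g. resolve_module("pyjslib", ["/usr/share/pyjamas/library"])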

that being the case, someone needs to invent a standard location
where all libraries contained as part of the pyjamas package, and
all FUTURE libraries which depend, in future, on the pyjamas compiler,
are to be installed.

can anyone come up with any good ideas?

ta,

l.





Re: #501774 - where should the library source go?

2008-11-23 Thread Luke Kenneth Casson Leighton
On Sun, Nov 23, 2008 at 10:03 PM, Thomas Viehmann [EMAIL PROTECTED] wrote:
 Hi Luke,

 hiya thomas.

 Luke Kenneth Casson Leighton wrote:

 * build.py would be... ohh... perhaps something like... autoconf.
 Not more like make?

  *hand-waving* :)

I never called pyjs directly.

 me only for _really_ obscure stuff / demo purposes.

 my question, therefore, is:  where in hell's name should these files
 be installed

 * pyjs.py is obvious: it goes into /usr/bin/pyjs.py
 I wouldn't unless one needs to call it directly. Also, lose the .py for
 stuff in /usr/bin

 ack.

 * pyjslib.py i have absolutely NO clue about.
 /usr/share/pyjamas/library/pyjslib.py ?
 /usr/lib/pyjs/library/pyjslib.py ?
 * DOM.py and ui.py etc. i have NO clue.
 /usr/share/pyjamas/library/pyjamas/DOM.py ui.py Window.py etc. ?
 /usr/lib/pyjs/pyjamas/DOM.py ?
 These (both *s) should all go somewhere under /usr/share/pyjamas.
 share vs. lib is whether the files are *arch*-dependent, which they are not.

 oh, is that what the difference is? :)

 that being the case, someone needs to invent a standard location
 where all libraries contained as part of the pyjamas package, and
 all FUTURE libraries which depend, in future, on the pyjamas compiler,
 are to be installed.

 Well, the question I'd have is whether pyjamas really is stable enough
 to go into unstable.

 suuure :)  as stable as can be, with MS changing the bloody
user-agent string in IE7.

 well if FreeBSD can turn 0.3 into a release, then what the heck.  but
yes - there has to be a 0.3.1 release before pyjamas can be packaged:
the recent change in the IE user-agent string from MSIE 7 to MSIE7
causes pyjamas 0.3 to fallback to old mozilla which is a bit of a
screw-up.

 can anyone come up with any good ideas?

 For the most part, you seem to have figured it all out already.

 random guesses ha ha

 thanks for the kicks in the right direction.

 l.





Re: Bug#408467: exim4-config: exim4 'virtual domains' config fits neatly in without interfering with config: please add!

2007-01-26 Thread Luke Kenneth Casson Leighton
@begin note to debian developers

dear debian developers: would someone _please_ explain to marc that,
as exim4 is the default mailer for debian, that he is in quite a
serious position of responsibility, and that exim4 needs to cater for
_everybody_'s needs, with the minimum amount of disruption to existing
users, and, i believe, that the addition of 'virtual domains' as
described in the article below falls into that category of
'maximum-default-config-beneficial-feature-improvements,
zero-impact-on-existing-users'.

in this instance, the addition of the extremely effective and very
useful 'virtual domains' concept brings the default debian-provided
exim4 feature set more into line with what postfix has provided 
for _years_.

the instructions posted on http://www.debian-administration.org/articles/140
are extremely good: steve kemp's well-written article took me 5 mins to
read, 5 to include in my current config, and 5 to test and be happy that
it worked.

i am therefore appealing to someone else's better nature out there to
get this guy marc either replaced or to see sense, or to be guided by
somebody _other_ than me.

complaints about the word 'virtual' as an excuse are not really
acceptable if the default debian mailer's configuration is to move
forward.

i fully realise when i am disobeying my own guidelines (or, more
importantly, i fully recognise a situation where i'm not going to get
anywhere with a particular individual's intransigence) about endeavouring
to help the most people with the most effective amount of effort (some
of you on this list will recall my talk on the sunday morning at the
last UKUUG conference)

so i am appealing to you, the core debian developers, to step in, and
deal with this guy, instead of me.

thanks.

@end note to debian developers

On Fri, Jan 26, 2007 at 10:52:57AM +0100, Marc Haber wrote:

 tags #408467 wontfix
 thanks
 
 On Fri, Jan 26, 2007 at 12:50:26AM +, Luke Kenneth Casson Leighton wrote:
  i've been looking for this for _four years_ for exim, and you _have_
  to add it in - /etc/aliases is pathetic and annoying, and this
  virtual domains thing is a _significant_ step forward in the usefulness
  of exim4.
 
 It does, however, move away a lot from the classical paradigm that we
 have always been using, and the word virtual in a mail context is
 heavily overused.

 with the greatest of respect, i really do have to be quite harsh with
 you on this one, and say this: tough - get over it.

 the word 'virtual' is an important concept in computer science, dating
 back 30 years.  of _course_ it's going to get used a lot, in a _lot_
 more contexts and many more times in those same contexts.

 i can't imagine the linux kernel developers _ever_ asking people to
 stop using the word 'virtual' when someone goes and refers to 'virtual
 memory' as well as 'virtual address'.

 imagine the chaos that would ensue, and the ridicule that would be
 heaped onto the person that even _suggested_ such a thing.


 not least: if you don't _like_ the word 'virtual', then take it up with
 steve kemp, who wrote the article.

 
  i seriously considered ripping out a very sophisticated and
  simple-to-set-up arrangement, and replacing it with postfix, _just_
  because this virtual domains thing took me so long to find.
 
 If you are that emotional, please go with postfix.
 
 whenever i think of postfix, my heart sinks.

 it takes _days_ to work out what to do: you need to be a serious
 rocket-scientist to get _anything_ done.

 by contrast, with exim4, it is absolutely dead-easy to set up, even
 the most complex arrangements.

 the longest amount of time is spent tracking things down.

 a good example is when i needed to do authenticated LMTP TCP redirection:
 it turned out that you can combine the smtp auth mechanism with the
 LMTP transport, as, intuitively, might be expected, and dang me if
 it actually worked!  i think i added about 6 lines to some transport
 config file!  took me 30 mins to find the two articles (one about lmtp,
 the other about smtp auth) to read up about, 5 to combine-edit the
 configs, and 5 to sit there stunned that it actually worked.



 I do not see enough arguments to have this added to the default
 configuration.

 i am very sorry.  i assumed, now incorrectly i see, that it would be
 obvious that the advantages that this addition brings to exim4 as a
 useful mail system would be significant.

 this is my mistake, and i genuinely apologise for not thinking in
 advance to include any extra details for you to evaluate.


 I would be willing to add the configuration snippet as
 an example in /usr/share/doc, as long as the word virtual is not
 used and it is exactly explained what the configuration snippet does.
 
 I am not going to write these documentation myself though.

 ahh - are you the guy whom i spoke to some time ago (a couple of years
 back), about doing some improvements to exim4 config, and you said
 'that's too complex for me to cope

Re: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=265920

2006-01-11 Thread Luke Kenneth Casson Leighton
On Tue, Jan 10, 2006 at 02:04:45PM +0100, Adeodato Simó wrote:
 * Matthew Garrett [Tue, 10 Jan 2006 02:50:56 +]:
 
  Luke Kenneth Casson Leighton [EMAIL PROTECTED] wrote:
   i've thought for a long time about how to reply to your message.
 
  Let's quickly outline what's happened here:
 
  1) Luke files a bug agains Debian. So far, so good.

 yep.

  2) Some time later, 

 a year to 18 months later.

  Luke contacts a KDE developer and asks if the bug
  has been fixed.

 yep.

  3) The response is, approximately, This is the first I've heard of it.
 
 nooo, the response was, approximately, bugger off and report
 it via upstream.
 
 no indication of intent to take care of it was given.


 i _was_ prepared to be all nice and tactful, and i spent quite a lot of
 time (on and off, but mostly off) over a period of several weeks as to
 how i was going to respond.

 then i re-read the message and went nuts, then tried to temper and
 channel some of my anger by going overboard and into the ridiculous.

 as i calmed down i began to think of _sensible_ ways forward.


  We (Debian) have a bug tracking system in order to keep track of bugs in
  our distribution. 

 i didn't know that.  and neither will a hell of a lot of other people.
 i just found reportbug about 2 years ago and thought cool, i can
 run a program on debian and i can report bugs.  via the commandline.
 great!
 
 it never occurred to me that i shouldn't be reporting bugs,
 and nothing i encountered in reportbug told me that i was
 doing anything i shouldn't be.


  It's the job of either the bug submitter or (more
  usually) the Debian maintainer to contact upstream to make sure that
  they're aware of the bug. It is *not* the upstream maintainer's job to
  examine Debian's bug database.

 that distinction isn't made clear: it's only if people think about it
 that they will realise that they are supposed to report debian-specific
 packaging bugs to the debian bugs database and package-specific bugs
 to whatever upstream thingy they can find.  _if_ they can find it.

 and even if some people do think, there's lots that won't.

 for the _really_ popular packages, this becomes a serious problem:
 the percentage of people reporting bugs into what effectively becomes
 a black hole starts to get quite serious.


  Which is, uh, pretty much what Dirk said. Luke, what the christ are you
  upset about? 

 
 
 Nobody's said Don't report this bug to us, they've said
  If you report a bug to Debian and nobody forwards it, we know nothing
  about it.
 
   All correct. Thanks, Matthew. I'll just note that the Debian KDE
   packages receive an incredible amount of bug reports, and that we're
   understaffed to forward all of them to KDE upstream. 

 that's why one of my recommendations was to consider putting, into
 certain key very popular packages, a means to either transfer the bug
 to upstream (via some mad notional XMLeey are pee cee-ey common API) or
 to simply put into reportbug a list of packages for which reporting
 should be given special messages:

 if reporting on package kde, libkonq, <long list> then
 report "if this is a bug in KDE itself, please DO NOT report the
 bug here, go to http://bugs.kde.org whatever.  if you have a
 debian-specific packaging issue (installation problem, missing
 files, conflict etc.), please continue."

 and likewise for mozilla.

 and openoffice.

 and possibly even the linux kernel, although that's probably the
 exception.
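
 (to make the reportbug side of that concrete, a small sketch - the
 table contents and the exact wording are illustrative only:)

    # sketch of the reportbug side of it: a table of packages whose bugs
    # mostly belong upstream, and the advisory printed before a report
    # is taken.
    UPSTREAM = {
        "kdelibs": "http://bugs.kde.org",
        "mozilla-firefox": "http://bugzilla.mozilla.org",
    }

    def upstream_warning(package):
        url = UPSTREAM.get(package)
        if url is None:
            return None
        return ("if this is a bug in %s itself rather than in the debian\n"
                "packaging, please DO NOT report it here - report it at\n"
                "%s instead, otherwise nobody will see it." % (package, url))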

 other possibilities:

 1) add into the dpkg thingy an upstream URL where bugs can be reported:

UpstreamBugs: http://bugs.kde.org/enter_bug.cgi (whatever)
 if you encounter a bug in kde.
 please report it here because otherwise nobody.
 will fix it, thank you.
.

   this would be _great_ because it could be automatically looked at by
   reportbug and filed.

   it would also be great because you could have, in dpkg-buildpackage,
   the UpstreamBugs thingy of one control file added to dozens or
   hundreds of individual packages.

   this would save maintainers a boat-load of time.

 2) against the list of UpstreamBugs, on bugs.debian.org, email
received automatically notifies the sender of the above info.

just to make absolutely damn sure they know about it, plus
not _everybody_ uses reportbug - they sometimes (?) send in
messages direct.



 In particular, we
   directly almost never do it for wishlist bugs. My response in [1]
   explicitly included the sentence:
 
 you or some other SE/Linux user may consider reporting the problem
 to upstream KDE, with a good reasoning too.
 
   I know this is suboptimal, but it's how things are now.
 
 [1] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=265920;msg=25
 
   Cheers,
 
 -- 
 Adeodato Simó dato at net.com.org.es
 Debian Developer  adeodato at debian.org
  
 You cannot achieve the impossible without attempting the absurd

Re: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=265920

2006-01-11 Thread Luke Kenneth Casson Leighton
On Tue, Jan 10, 2006 at 09:29:12AM +0100, Dirk Mueller wrote:

 Relax, nobody is being pissed. You just have to realize that if you tell 
 person A about a problem, person B doesn't magically get notified about it. 
 This is not different than in other situations in real life. 
 
 heya dirk,

 thank goodness you noticed that i'd gone overboard on the raving
 front.

 h... where's the bug tracker for bugzilla itself? hmmm...





Re: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=265920

2006-01-09 Thread Luke Kenneth Casson Leighton
i've thought for a long time about how to reply to your message.

which, now that i re-read it, i notice that it is extremely patronising,
and all possible thought of being nice and non-confrontational goes out
the ing window.

given that you are happy to write patronising messages, i am not
therefore too surprised at your statement.

i therefore invite you to accept reality.

the reality is: there are too many people using debian who have found
reportbug and use it for you to whine about how the world does not
revolve around debian.

the mozilla team accept the reality that bugs are going to
come in from several sources.

why the  can't you?

get with it, get off your damn high horse, and accept that intelligent
and stupid people alike are going to report bugs - not to suit _your_
whims but because the reporting method is _there_ and they haven't been
told any different.

if you _want_ people to stop using the debian system, then here are your
options, in no particular order:

1) write a program to sabotage bugs.debian.org or a subsection of it.

2) write a program that slurps bugs of certain debian package names and
duplicates the contents in the kde bugs.

3) write a program that monitors the bugs of certain debian package
names and sends a message to each notifying them of your fucking dipshit
disposition that this bug will be totally ignored because i am so up my
own arse i cannot be bothered to read it unless you post it on _my_
system.

4) put in a bugreport against the debian reportbug package about this
entire issue you find so objectionable

5) write a patch to reportbug to have an exclusionary list or an
advisory / warning saying that the debian bug reporting for any kde
package is _specifically_ for reporting debian packaging problems _not_
for reporting bugs on kde, and please pretty please could you go
to _our_ nice bug-reporting system

6) stick your head in a bucket of cold water and CHILL OUT (i'll be
doing likewise in a couple of minutes, just as i get to about no 8 or so
on this list of suggestions)

7) develop an RSS/XML-lovely-intercommuney-system of
bug-communicationey-stuff protocol thing that allows free software
bugs to be pushed across to different interoperable systems.  i
strongly advise you to consider looking up AS/2 which is an RFC on how
to communicate XML documents and also to have a digitally signed
receipt indicating acceptance of the transfer.  perhaps that's a
bit overkill, but worth considering.

 the basic principle: allow bugs to be searched across
 multiple systems (not just your own system); allow a bug to
 be transferred by the thingies (bug maintainer people), for them,
 with one easy push-of-a-browser-button, to say "here: _you_ deal with it."
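
 a very rough python sketch of what that push could look like - the
 /api/import endpoint, the payload shape and the receipt handling are all
 hypothetical, no real bug tracker speaks exactly this protocol:

# sketch of option 7: push a bug record to another tracker and get an
# acknowledgement back as the "receipt".  endpoint and payload shape are
# hypothetical examples only.
import json
import urllib.request

def push_bug(receiver_url, bug):
    """POST a bug record to a (hypothetical) receiving tracker's import
    endpoint and return its acknowledgement."""
    data = json.dumps(bug).encode("utf-8")
    req = urllib.request.Request(
        receiver_url + "/api/import",            # hypothetical endpoint
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

example_bug = {
    "origin": "bugs.debian.org",
    "origin_id": 265920,              # the bug this thread is about
    "package": "kdebase",
    "summary": "example summary only",
    "body": "full report text would go here",
}
# receipt = push_bug("https://bugs.example.org", example_bug)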


ahh, why didn't _you_ think of some of these ideas, instead
of just bitching about how debian and its users are so 
XXX XX we interrupt this email to bring you some light
refrigerator i mean elevator music.

ahh, i feel better now.  calm, calm.  i am at one with the universe.
i am bleeeded in.


On Wed, Dec 07, 2005 at 04:06:54AM +0100, Dirk Mueller wrote:

 On Tuesday 06 December 2005 02:52, Luke Kenneth Casson Leighton wrote:
 
  was the issue mentioned in this report ever resolved?
 
 I'm not sure why I have to state the obvious, but the world does not rotate 
 around Debian, and unless you report the bug at an upstream place where the 
 actual maintainer can read about it, its unlikely that bugs get fixed in a 
 magically automated way. 
 
 
 -- 
 Dirk//\

-- 
--
<a href="http://lkcl.net">http://lkcl.net/</a>
--





Re: libselinux1 - required

2005-06-09 Thread Luke Kenneth Casson Leighton
On Wed, Jun 08, 2005 at 10:44:30PM +1200, Nigel Jones wrote:
 It's been implied that people will be basicly *forced* to use selinux,

 wrong.  completely wrong.

 in the debian kernel builds (as arranged i believe by
 manoj), the default option for the selinux kernel module is
 selinux=0.

 that means it's switched off.

 libselinux1 has a function which will detect this.
 
 all patches in all programs and all utilities and all services
 utilise this function, and disable all and any functionality
 related to and pertaining to selinux.

 at no time and on any account and under no circumstances will selinux
 cause any loss of functionality, loss of speed or any other perceived
 or actual detrimental effects or side-effects when the kernel option
 selinux=0 is set.
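
 (for the curious: the real check is is_selinux_enabled() in libselinux
 itself; the following is only a rough stdlib-python approximation of that
 kind of runtime test - kernel command line plus a mounted selinuxfs - not
 the actual library code:)

# rough approximation of the kind of runtime check libselinux provides;
# a sketch only, not the real is_selinux_enabled() implementation.
import os

def selinux_enabled():
    """Best-effort: False if booted with selinux=0 or if no selinuxfs
    mount point is present, True otherwise."""
    try:
        with open("/proc/cmdline") as f:
            if "selinux=0" in f.read().split():
                return False
    except OSError:
        pass
    # /selinux on older systems, /sys/fs/selinux on newer ones
    for mountpoint in ("/sys/fs/selinux", "/selinux"):
        if os.path.exists(os.path.join(mountpoint, "enforce")):
            return True
    return False

print("selinux enabled:", selinux_enabled())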

 if this isn't clear and explicit enough, do let me know and
 i'll be happy to come round with a big stick with nails in
 it to help make the point more clearly.

 cheers,

 l.





heeeeeaaaave! debian releases: the solution to windows viruses!

2005-06-09 Thread Luke Kenneth Casson Leighton
uhhn... is it just me, or has the world's internet traffic just taken
a major performance degradation over the past few days?

roll up roll up, get yorr anti-virus sofwarz here - right from a debian
mirror.  all you have to do is get the debian developers to do _another_
major release.  noo more problems with in'ur'ne' traffic, cos there
won't be any _left_ for viruses.

... but seriously: if anyone knows mr bram cohen personally,
or if there's anyone willing to be a victim, could someone
_please_ write a bittorrent server (and associated client
including a filesystem driver) and install it on the debian
mirrors so that the entire debian file/directory structure
can be shared/downloaded just like they can with rsync?

please?

l.

-- 
--
<a href="http://lkcl.net">http://lkcl.net/</a>
--





http://www.golden-gryphon.com/software/security/selinux.xhtml

2005-06-09 Thread Luke Kenneth Casson Leighton
manoj, hi,

i am delighted to see the above web page re: selinux.

i notice you mention that there is an effort underway to make
a uml-selinux.

perhaps i should mention that it is utterly trivial to set up
a xen system with a guest domain running pretty much any kind
of kernel - including selinux enabled ones.

people who are not happy about using or waiting for uml-selinux
might want to consider either temporarily or permanently
utilising xen instead.

l.

p.s. xen's a lot damn quicker, too.  quick enough so that you can
seriously consider just doing apt-get update, blah blah.

-- 
--
<a href="http://lkcl.net">http://lkcl.net/</a>
--





Re: http://www.golden-gryphon.com/software/security/selinux.xhtml

2005-06-09 Thread Luke Kenneth Casson Leighton
On Thu, Jun 09, 2005 at 11:42:00PM +0100, antoine wrote:
 On Thu, 2005-06-09 at 20:20 +0100, Luke Kenneth Casson Leighton wrote:
  manoj, hi,
  
  i am delighted to see the above web page re: selinux.
 Err?

 never seen it before :)

  
  i notice you mention that there is an effort underway to make
  a uml-selinux.
  
  perhaps i should mention that it is utterly trivial to set up
  a xen system with a guest domain running pretty much any kind
  of kernel - including selinux enabled ones.

 We have been running selinux guest kernels in uml for years, that was

 _great_.
 
 hm - the above page gives the impression that it hasn't been:

  There also has been an interest in creating an
  
  SELinux UML, since it allows for rapid testing of
  policies, and packages, and to observe the reaction of
  the machine to threats and other stimuli. However,
  it has been tedious, traditionally, to create a
  UML that can be run in enforcing mode. A recipe for
  doing so has been created...

 not the issue here, 

 or are you just doing xen advocacy?

 i was under the impression, from the above, that somehow
 debian cannot run selinux/uml.

 i was therefore recommending an alternative that is, by
 comparison, just... okay: xen takes a source code download,
 two kernel compiles, create a guest-machine-config, and
 a guest-machine-install (unless like me you're prepared to
 copy the drive images of an existing machine and hack it into
 submission from there :) and you're done, up, running.
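
 (for reference, the "guest-machine-config" step is just a small python
 fragment that xm reads; every name, path and size in this sketch is a
 made-up example, not a recipe for any particular box:)

# sketch of a xen 2.x-era guest domain config - these files are plain
# python fragments read by "xm create".  all values are illustrative.
kernel = "/boot/vmlinuz-2.6-xenU"                    # unprivileged guest kernel
memory = 128                                         # MB for the guest
name   = "selinux-test"                              # shown by "xm list"
disk   = ["file:/srv/xen/selinux-test.img,sda1,w"]   # loopback-backed root
root   = "/dev/sda1 ro"
vif    = [""]                                        # one default network interface
extra  = "selinux=1"                                 # boot the guest with selinux on

 you would then boot it with something along the lines of
 "xm create -c /etc/xen/selinux-test".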

 by contrast: i once installed uml...

 The question was about ensuring proper containment of the UML kernel
 process *from outside*, with regards to the way uml handles tmpfs (which
 it uses as a ram backing store with execute attributes).
 
  people who are not happy about using or waiting for uml-selinux
  might want to consider either temporarily or permanently
  utilising xen instead.
 Running uml-selinux guests is not a problem, and xen is not necessarily
 the right approach for everything: the system virtualisation does not
 happen at the same os level. Can you control your xen instance from
 within a selinux controlled system? 

 you're talking about running xen in the domain master, yes?

 known as domain 0.

 in theory, it can be done (and i haven't been mad enough to switch on
 selinux in the xen master domain yet...)

 management of xen (communication between domains) is done
 via a python-based HTTP web server (twisted python) running on a high
 port number.

 want fine-grained control?  ... erk.




 (note: I am not talking about
 running selinux from within a xen instance)
 
 known as a guest domain (i.e not numbered domain 0)

  l.
  
  p.s. xen's a lot damn quicker, too.  quick enough so that you can
  seriously consider just doing apt-get update, blah blah.
 uml on x86 with the skas3 patch is very fast.
 We've been running debian guests (inc apt-get) just fine for years.

 hm.  sorry about that - the above URL gives an impression other than
 that.

 l.

--
<a href="http://lkcl.net">http://lkcl.net/</a>
--





Re: libselinux1 - required

2005-06-08 Thread Luke Kenneth Casson Leighton
On Tue, Jun 07, 2005 at 09:56:17PM -0400, Stephen Frost wrote:
 * Luke Kenneth Casson Leighton ([EMAIL PROTECTED]) wrote:
   last time i spoke to him [name forgotten] the maintainer
   of coreutils would not accept the coreutils patches -
   already completed and demonstrated as working and sitting on
   http://selinux.lemuria.org/newselinux - because libselinux1
   is not a Required package, and could not be made a Required
   package because of the sarge freeze.
 
 If it was related to sarge being frozen then that issue is out of the
 way and it should be brought up again if you're interested in it, with
 the coreutils maintainer, not on d-d.
 
 okay!





Re: libselinux1 - required

2005-06-07 Thread Luke Kenneth Casson Leighton
On Mon, Jun 06, 2005 at 08:24:31PM -0400, Stephen Frost wrote:
 * Luke Kenneth Casson Leighton ([EMAIL PROTECTED]) wrote:
  any progress on making libselinux1 a Required package?
  
  the possibility of having debian/selinux is totally dependent
  on this one thing happening.
  
  no libselinux1=Required, no debian/selinux [all dependent packages
  e.g. coreutils will be policy violations].
 
 Uhhh, it's the other way around.  Get coreutils to Depend on libselinux1
 and it'll be brought up to Required.  You don't get the library
 Required first, having a library be Required but nothing Required
 actually depending on it is senseless.
 
 last time i spoke to him [name forgotten] the maintainer
 of coreutils would not accept the coreutils patches -
 already completed and demonstrated as working and sitting on
 http://selinux.lemuria.org/newselinux - because libselinux1
 is not a Required package, and could not be made a Required
 package because of the sarge freeze.

 either way i don't particularly care.  well - the bit i care about is
 that libselinux1 should have been made Required some time well over a
 year ago.

 l.

-- 
--
<a href="http://lkcl.net">http://lkcl.net/</a>
--





libselinux1 - required

2005-06-06 Thread Luke Kenneth Casson Leighton
hi,

any progress on making libselinux1 a Required package?

the possibility of having debian/selinux is totally dependent
on this one thing happening.

no libselinux1=Required, no debian/selinux [all dependent packages
e.g. coreutils will be policy violations].

l.

-- 
--
<a href="http://lkcl.net">http://lkcl.net/</a>
--





debian kernel 2.6.9 with selinux enabled!

2004-12-01 Thread Luke Kenneth Casson Leighton
manoj, thank you.  thank you thank you *smooch*.

l.




Re: Updated SELinux Release

2004-11-05 Thread Luke Kenneth Casson Leighton
On Thu, Nov 04, 2004 at 11:06:06PM -0500, Colin Walters wrote:
 On Thu, 2004-11-04 at 13:15 +, Luke Kenneth Casson Leighton wrote:
 
   default: no.
 
 Why not on by default, 

 i would agree with stephen that it should be compiled in,
 default options selinux=no.

 that gives people the choice, without affecting performance.

 with a targeted policy, for everyone?  

 debianites have yet to be convinced of the benefits of
 _anything_ to do with selinux [irrespective of whether they
 are actually _aware_ of its benefits]

 i specifically recall seeing a message from 2002: "the more i learn
 about selinux, i like it less and less."

 that having been said, i believe, like i think you do, that a
 targetted policy for debian _would_ make selinux much easier
 to accept.

 l.




Re: Updated SELinux Release

2004-11-05 Thread Luke Kenneth Casson Leighton
On Fri, Nov 05, 2004 at 10:11:01AM -0500, Colin Walters wrote:
 On Fri, 2004-11-05 at 10:28 +, Luke Kenneth Casson Leighton wrote:
  On Thu, Nov 04, 2004 at 11:06:06PM -0500, Colin Walters wrote:
   On Thu, 2004-11-04 at 13:15 +, Luke Kenneth Casson Leighton wrote:
   
 default: no.
   
   Why not on by default, 
  
   i would agree with stephen that it should be compiled in,
   default options selinux=no.
 
 I don't believe Stephen said that.  He said that the performance hit in
 that case is just the LSM hooks.
 
 oh. yes.

   that gives people the choice, 
 
 It doesn't make sense to make security a choice.  The current Linux
 security model is simply inadequate.

 response 1: *shrug*.  that's their choice - and their problem.

 response 2: you don't have to tell _me_ that - i'm the mad one who is
 actively working on a debian/selinux distro!!! :)

 response 3: _is_ it the job of debian developers to dictate the minimum
 acceptable security level?

 basically what i mean is, in gentoo, it's a no-brainer: you set options
 at the beginning of your build, come back [2 weeks? :) ] later and you
 have a system with PAX stack smashing, lovely kernel, everything
 hunky-dory.

 debian doesn't GIVE users that choice [remember the adamantix
 bun-fight, anyone?] and instead settles for about the lowest possible
 common denominator - no consideration to modern security AT ALL!

  without affecting performance.
 
 That's just a bug, and it's being worked on.  

 cool.

 Personally I don't notice any performance problems.
 
 maybe it's just me with my weird setup [very likely], but
 running mozilla under KDE 3.3.0 with selinux 2.6.8.1-selinux1
 on a 256mb system (P4 2.4Ghz) is a 10-11 second startup,
 whereas if i set selinux=0 i've seen as fast as a THREE second
 startup time.

 i've put KDE_IS_PRELINKED=1, KDE_FORK_SLAVES=1 into the
 /usr/bin/startkde and i've run prelink, but i have the nvidia drivers
 so the x-windows glx drivers are symlinks, which stops prelink from
 being able to do its job on them.

 also i recompiled kde 3.3.0 .debs with the latest gcc 3.3.

 so i'm not _entirely_ confident that my setup is a good example to
 follow (!)

-- 
--
you don't have to BE MAD   | this space| my brother wanted to join mensa,
  to work, but   IT HELPS  |   for rent| for an ego trip - and get kicked 
 you feel better!  I AM| can pay cash  | out for a even bigger one.
--




Re: Updated SELinux Release

2004-11-04 Thread Luke Kenneth Casson Leighton
On Thu, Nov 04, 2004 at 01:02:35AM -0600, Manoj Srivastava wrote:
 On Wed, 03 Nov 2004 21:15:38 -0500, Colin Walters [EMAIL PROTECTED] said: 
 
  On Wed, 2004-11-03 at 19:21 +, Dhruv Gami wrote:
  Personally, i would prefer to have those two tarballs available. I
  know most people using SELinux are familiar with patching the
  kernel, and are generally familiar with how Linux works and know
  their way around on a Linux system.
 
  But moving forward, we don't want people to have to patch their
  kernel or utilities.
 
   Moving waaay forward. I asked the Debian kernel team to
  consider  compiling in SELinux (perhaps disabled by default, for
  starters), and was told that that is not going to fly because of
  significant performance hit one takes by compiling SELinux in.  I
  did not have any data to refute the claim, so  that is where we sit.
 
  i had a bun-fight with the people who have taken over from herbert:
  at the point where i told them that recompiling applications to be
  optimised like yoper and gentoo distributions gives back performance
  far in excess of that lost by selinux, i stopped hearing back from
  them.

   While a laudable long term goal, the reality is that most
  distributions do not ship these utilities today, and in the case of
  Debian, progress, while it is happening, is slow enough that
  pragmatism requires we consider the reality that SELinux shall _not_
  be the default in the near term.
 
 default: no.

 available as an additional package: why not?

 heck, personally i wouldn't even care if it was i386 or 686 only.

 l.

-- 
--
you don't have to BE MAD   | this space| my brother wanted to join mensa,
  to work, but   IT HELPS  |   for rent| for an ego trip - and get kicked 
 you feel better!  I AM| can pay cash  | out for a even bigger one.
--




[sds@epoch.ncsc.mil: Re: Updated SELinux Release]

2004-11-04 Thread Luke Kenneth Casson Leighton
- Forwarded message from Stephen Smalley [EMAIL PROTECTED] -

Envelope-to: [EMAIL PROTECTED]
Delivery-date: Thu, 04 Nov 2004 16:37:30 +
X-Sieve: CMU Sieve 2.2
Subject: Re: Updated SELinux Release
From: Stephen Smalley [EMAIL PROTECTED]
To: Manoj Srivastava [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Organization: National Security Agency
X-Mailing-List: selinux-tycho.nsa.gov
X-hands-com-MailScanner: Found to be clean
X-MailScanner-From: [EMAIL PROTECTED]

On Thu, 2004-11-04 at 02:02, Manoj Srivastava wrote:
   Moving waaay forward. I asked the Debian kernel team to
  consider  compiling in SELinux (perhaps disabled by default, for
  starters), and was told that that is not going to fly because of
  significant performance hit one takes by compiling SELinux in.  I
  did not have any data to refute the claim, so  that is where we sit.

Given that SELinux supports disabling both at boot time (via selinux=0)
and at runtime (via /selinux/disable, only useable prior to the initial
policy load, used by the patched /sbin/init when /etc/selinux/config
specifies disabled), the only performance impact they can truly claim is
fundamental to enabling SELinux at compile-time is the overhead of LSM
itself.  So ask for measurements showing that LSM in 2.6 imposes a
significant overhead by itself, and don't accept measurements based on
old versions of LSM prior to 2.6.

   While a laudable long term goal, the reality is that most
  distributions do not ship these utilities today, and in the case of
  Debian, progress, while it is happening, is slow enough that
  pragmatism requires we consider the reality that SELinux shall _not_
  be the default in the near term.

Fedora (and RHEL4) and Hardened Gentoo have extensive SELinux
integration, and SuSE 9.x had the SELinux code included in the kernel
and a subset of the userland, just disabled by default.

-- 
Stephen Smalley [EMAIL PROTECTED]
National Security Agency


--
This message was distributed to subscribers of the selinux mailing list.
If you no longer wish to subscribe, send mail to [EMAIL PROTECTED] with
the words unsubscribe selinux without quotes as the message.

- End forwarded message -
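
(the runtime disable path stephen mentions - writing to /selinux/disable
before the first policy load - boils down to roughly this; a sketch only,
since in practice it is the patched /sbin/init that does it, not a script
run later:)

# sketch of the runtime-disable path: before the initial policy load,
# writing "1" to the selinuxfs "disable" node switches SELinux off for
# the rest of that boot.  illustrative only.
def disable_selinux(selinuxfs="/selinux"):
    try:
        with open(selinuxfs + "/disable", "w") as f:
            f.write("1")
        return True
    except OSError:
        return False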

-- 
--
you don't have to BE MAD   | this space| my brother wanted to join mensa,
  to work, but   IT HELPS  |   for rent| for an ego trip - and get kicked 
 you feel better!  I AM| can pay cash  | out for a even bigger one.
--




SourceForge.net PR-Web Upgrade Notice.

2004-10-26 Thread Luke Kenneth Casson Leighton
i'm forwarding this to debian devel for people's attention because
it would appear that debian has lost a quite large opportunity -
by not having selinux available.

l.

- Forwarded message from SourceForge.net Team [EMAIL PROTECTED] -

Hello,  
You are receiving this email because you are an Admin of a project
on SourceForge.net.   (Note: These update emails are very low 
volume, approximately twice a year).

The SourceForge.net team is pleased to announce the long-awaited
upgrade to our project web service.  SourceForge.net staff are

[...]

This upgrade consists of a significant hardware upgrade and
Operating System upgrade.  Due to the large upgrades involved here,
it may be necessary to upgrade your scripts.


Old configuration:

  Debian Potato
  ^^
  Linux kernel 2.4.x
  GNU libc 2.2.1
  Apache 1.3.26
  Perl 5.005_03
  PHP 4.1.2
  Python 1.5.2
  Tcl 8.0

New configuration:

  Fedora Linux: Fedora Core 2
  ^^
  Linux kernel 2.6.x
  GNU libc 2.3.3
  Apache 2.0.51
  Perl 5.8.3
  PHP 4.3.8
  Python 2.3.3
  Tcl 8.4.5





Re: Bug#193838: libgcc1: installation of libgcc1:3.3-2 causes failure of massive number of programs

2003-05-19 Thread Luke Kenneth Casson Leighton
p.s. the version of python 2.2 is back at 2.2.1 compiled with gcc 2.95.4
 (stable version)

the version that got into trouble was the python 2.2 that was compiled
with gcc 3.2 (unstable latest version?)

On Mon, May 19, 2003 at 07:40:06PM +0200, Matthias Klose wrote:
 [CC to debian-devel, did anybody see this behaviour on an update?]
 
 Luke Kenneth Casson Leighton writes:
  On Mon, May 19, 2003 at 04:52:51PM +0200, Matthias Klose wrote:
   Never seen this upgrade behaviour. Was libgcc1 installed before
   libstdc++5? If not, please could you explictely install libgcc1 and
   then libstdc++5?
   
   i have tried that.
  
   it says already at latest version.
  
   then i tried installing gcc 3.3.
  
   that failed to fix the problem.
  
   when i manually installed the OLD version of libstdc++:
  
514  dpkg -i /var/cache/apt/archives/libgcc1_1%3a3.2.3-0pre6_i386.deb 
515  dpkg -i /var/cache/apt/archives/libstdc++5_1%3a3.2.3-0pre6_i386.deb 
  
   then it fixed the problem
  

-- 
-- 
expecting email to be received and understood is a bit like
picking up the telephone and immediately dialing without
checking for a dial-tone; speaking immediately without listening
for either an answer or ring-tone; hanging up immediately and
then expecting someone to call you (and to be able to call you).
--
every day, people send out email expecting it to be received
without being tampered with, read by other people, delayed or
simply - without prejudice but lots of incompetence - destroyed.
--
please therefore treat email more like you would a CB radio
to communicate across the world (via relaying stations):
ask and expect people to confirm receipt; send nothing that
you don't mind everyone in the world knowing about...