Re: How to push back against repeated login attempts?

2021-03-03 Thread Luke Kenneth Casson Leighton
On Wednesday, March 3, 2021,  wrote:

> So honeypot or tarpit seems like something to try. Endlessh sounds good,
> but labrea and iisemulator have debian packages. Any suggestions or
> warnings to consider?

if you run exim4 and call spamassassin live at mta (smtp) time, the
teergrube config is a lot of fun (teergrube: german for tarpit).

unfortunately, running mta-time spamassassin takes a MENTAL amount of
server-side resources esp. if you enable clamav, pyzor and razor like i
used to.

after a couple years i went "this is nuts" and left greylistd running but
did forwarding-only.

btw the other one to watch out for is the Iranian attack against OpenVPN.
 i had repeated attempts to break in on OpenVPN come up and had to add that
to recidive as well, with some custom pattern matching.

a week later the slashdot announcement came up, "Iran sponsored hackers
break in to somethingorother by turning OpenVPN servers into botnets".

keeping an eye on your fail2ban logs, you get a fairly good advance
indication of massive govt-sponsored hacking attempts.
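
something as dumb as this is enough to get a quick summary (a rough
sketch only: the "Ban " log line format varies a little between
fail2ban versions, so treat the regex and the log path as things to
adjust, not gospel):

import re
from collections import Counter

bans = Counter()
with open("/var/log/fail2ban.log") as log:
    for line in log:
        # count "Ban <address>" actions; "Unban" has a lowercase 'b',
        # so the case-sensitive match skips it
        m = re.search(r"\bBan (\S+)", line)
        if m:
            bans[m.group(1)] += 1

# the top repeat offenders are usually the first sign of a campaign
for ip, count in bans.most_common(20):
    print(count, ip)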

l.



-- 
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


Re: How to push back against repeated login attempts?

2021-03-02 Thread Luke Kenneth Casson Leighton
On Tue, Mar 2, 2021 at 9:51 AM  wrote:

> Considering running a freedom box or similar, I have a RPi running Buster 
> outside my home router's DMZ. It was discovered within a short time (minutes 
> or hours) of first being setup.

ahh yes.  welcome to the discovery that there are people running
extremely sophisticated long-running break-in attempts, world-wide.

> It now has fail2ban running with defaults. Over about the last month, 
> fail2ban logs show about 35,000 "unbans" from about 3700 unique IPs.

if you want to do something "gradual", use fail2ban recidive.

i decided 3 years ago that enough was enough, and simply set all and
any failed password attempts at an instant 2 week ban.  by running
OpenVPN i can at least get in if i happen to make a mistake.

l.



Re: Debian Bullseye on Raspberry Pi 4 4GB?

2021-02-19 Thread Luke Kenneth Casson Leighton
On Friday, February 19, 2021, Pete Batard  wrote:

> Why, when it is absolutely possible to achieve it, as was demonstrated on
> a cheap platform like the Pi (that actually comes with horrible quirks to
> be able to accomplish so, especially in terms of xHCI support), should end
> users have to juggle with heteroclitic means of configuring their system
> for OS installation?

because the product, designed by Broadcom, is not in the slightest bit
targeted at end-users, and Broadcom do not give a flying f*** about such a
tiny market of only 1 million a year in sales.  their profit margins are
too small to bother with us.

Broadcom's Minimum Order Quantity for these processors, which are designed
for mass-volume IPTV, Set Top Box and other multimedia computing
appliances, is 5 million units and above.

Normally Broadcom would provide the full software stack for the customer
placing 5, 10, 50, 100 million orders for a complete solution.  That
customer *does not care* about the software boot process or some xHCI
incompatibility.

Have people forgotten already that the only reason the original Pi exists
is because Eben Upton was an employee who, having access to normally NDA'd
documentation, worked on the PCB in his spare time?  This was the only
exception to Broadcom's normal MOQ rules: they could not exactly tell him
to stop when he told them it was for educational purposes, could they?

please understand: the manufacturers of these SoCs consider you, all Free
Software idiots (including me), to be an absolute nuisance.

i showed the manufacturers of my laptop the linux kernel boot process at
Computex, they told me it looked like i was spying on their product!

we are "lapping at the heels" of these massive Corporations.  we are
nothing to them.

when we can place orders for a million processors, *then* they will listen
to what you and I want, Pete.

sorry if any of the reality check above shocks you.  personally i got
absolutely sick of the ongoing callous pathological exploitation of our
collective expertise, many years ago, and started a new SoC initiative.
 it's entirely Libre.  [and offtopic for the debian-arm list, so please if
you would like to discuss that, contact me direct or on freenode
#libre-soc].

l.



-- 
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


Re: Reducing apt's memory footprint (on small boxes)

2021-02-17 Thread Luke Kenneth Casson Leighton
i remembered something: the apt packages are read into memory in order to
sort them alphabetically, aren't they? (i.e. there's no database per se)

that being the case, then, well, doing a hierarchical office paperwork sort
would do the trick.

first, not-sorting: append all packages beginning with a to one file,
then b, etc., and split out liba, libb as well (this is not new, it's
what the debian archive pool does).  then re-open and re-read each of
those files and sort them individually.  finally concatenate them
together.

this would reduce peak memory consumption by roughly the number of
buckets (a factor of around 50), removing all need for compromises.

it also provides opportunities for speeding up sort time by allocating
multiple processes to different starting letters.
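
to be concrete, something along these lines is all i mean (a toy
sketch: one package name per line stands in for the real stanzas, the
bucket filenames are throwaway, and it is definitely not a patch
against apt):

import os

# pass 1: append-only split by first letter - nothing is sorted, and
# nothing is held in memory beyond one line and a handful of handles
buckets = {}
with open("names.txt") as f:
    for name in f:
        letter = name[0].lower() if name[:1].isalpha() else "_"
        if letter not in buckets:
            buckets[letter] = open("bucket.%s" % letter, "w")
        buckets[letter].write(name)
for b in buckets.values():
    b.close()

# pass 2: re-open and sort one bucket at a time, concatenate the results
with open("sorted.txt", "w") as out:
    for letter in sorted(buckets):
        with open("bucket.%s" % letter) as f:
            out.writelines(sorted(f))
        os.remove("bucket.%s" % letter)

pass 2 only ever has one letter's worth of entries in RAM, and it is
also exactly the bit that could be handed to one process per starting
letter.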

l.




-- 
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


Re: Reducing apt's memory footprint (on small boxes)

2021-02-14 Thread Luke Kenneth Casson Leighton
On Monday, February 15, 2021, Paul Wise  wrote:
> On Sun, Feb 14, 2021 at 2:53 PM Paul Wise wrote:
>
>> I think that this could be useful to a subset of Debian users,
>> possibly including embedded hardware and low-RAM cloud/VPS users.
>
> This could also be useful to bandwidth-constrained environments,

indeed! doesn't backports split into different archives anyway? and fedora
has splits by general category.

still, the moment all archives are added the problem returns.

> the
> apt package indices are really quite large these days.

not being funny or anything: i appreciate the dependencies have to be kept
exceptionally low, but why is no one thinking in terms of modifications to
apt that do not require the package indices to be in-memory?

surely the long-term solution is to use a minimalist database or suitable
key-value store, even if that involves running a conversion routine so that
the current index file(s) can be distributed as-is?

i remember having live-running x86 systems 15 years ago that i could not
upgrade because this was a problem even back then.  surely it has occurred
to someone that whatever reductions are done now by splitting archives will
only stave off inevitable increases that will hit once again in a few years?

it may even turn out to be the case that using a minimalist database or
key-value store actually *speeds up* package lookups and saves time even on
systems with larger amounts of memory.

options i would be investigating would be sqlite, datadraw and lmdb.
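
to illustrate what i mean by a conversion routine (a rough sketch only:
the schema is deliberately minimal, the Packages parsing is simplistic,
and this is not a proposal for apt's actual on-disk format):

import sqlite3

db = sqlite3.connect("packages.db")
db.execute("CREATE TABLE IF NOT EXISTS pkg (name TEXT PRIMARY KEY, stanza TEXT)")

# one-off conversion: walk the Packages file stanza by stanza
with open("Packages") as f:
    stanza, name = [], None
    for line in f:
        if line.strip():
            if line.startswith("Package:"):
                name = line.split(":", 1)[1].strip()
            stanza.append(line)
        elif name:
            db.execute("INSERT OR REPLACE INTO pkg VALUES (?, ?)",
                       (name, "".join(stanza)))
            stanza, name = [], None
    if name:
        db.execute("INSERT OR REPLACE INTO pkg VALUES (?, ?)",
                   (name, "".join(stanza)))
db.commit()

# a lookup afterwards pulls in one stanza, not the whole index
row = db.execute("SELECT stanza FROM pkg WHERE name = ?",
                 ("binutils",)).fetchone()
if row:
    print(row[0])

lmdb or datadraw would be the same shape, just with a different store
underneath.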

l.






-- 
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


Re: Debian on reMarkable paper tablet?

2021-01-06 Thread Luke Kenneth Casson Leighton
On Wed, Jan 6, 2021 at 11:59 AM Martin Lucina  wrote:
> It probably wouldn't be too hard to get Debian to boot on it,

i saw in the notes of the developer that the PCB is pretty much a
standard vanilla i.MX6 Reference Design.  consequently it should be
much easier to reverse-engineer than it first seems.  with the Kconfig
options already given on his website it's even more straightforward.

however, yes, applications is where it gets... "interesting".

l.



Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-19 Thread Luke Kenneth Casson Leighton
On Mon, Aug 19, 2019 at 7:29 PM Sam Hartman  wrote:

> Your entire argument is built on the premise that it is actually
> desirable for these applications (compilers, linkers, etc) to work in
> 32-bit address spaces.

that's right [and in another message in the thread it was mentioned
that builds have to be done natively.  the reasons are to do with
mistakes that cross-compiling, particularly during autoconf
hardware/feature-detection, can introduce *into the binary*.  with
40,000 packages to build, it is just far too much extra work to
analyse even a fraction of them]

at the beginning of the thread, the very first thing that was
mentioned was: is it acceptable for all of us to abdicate
responsibility and, "by default" - by failing to take that
responsibility - end up indirectly responsible for the destruction and
consignment to landfill of otherwise perfectly good [32-bit] hardware?

now, if that is something that you - all of you - find to be perfectly
acceptable, then please continue to not make the decision to take any
action, and come up with whatever justifications you see fit which
will help you to ignore the consequences.

that's the "tough, reality-as-it-is, in-your-face" way to look at it.

the _other_ way to look at is: "nobody's paying any of us to do this,
we're perfectly fine doing what we're doing, we're perfectly okay with
western resources, we can get nice high-end hardware, i'm doing fine,
why should i care??".

this perspective was one that i first encountered during a ukuug
conference on samba as far back as... 1998.  i was too shocked to even
answer the question, not least because everybody in the room clapped
at this utterly selfish, self-centered "i'm fine, i'm doing my own
thing, why should i care, nobody's paying us, so screw microsoft and
screw those stupid users for using proprietary software, they get
everything they deserve" perspective.

this very similar situation - 32-bit hardware being consigned to
landfill - is slowly and inexorably coming at us, being squeezed from
all sides: not just by 32-bit hardware itself being completely useless
for actual *development* purposes (who actually still has a 32-bit
system as a main development machine?), but also by advances in
standards, processor speed, user expectations and much more.

i *know* that we don't have - and can't use - 32-bit hardware for
primary development purposes.  i'm writing this on a 2.5 year old
gaming laptop that was the fastest high-end resourced machine i could
buy at the time (16GB RAM, 512GB NVMe, 3.6ghz quad-core
hyperthreaded).

and y'know what? given that we're *not* being paid by these users of
32-bit hardware - in fact most of us are not being paid *at all* -
it's not as unreasonable as it first sounds.

i am *keenly aware* that we volunteer our time, and are not paid
*anything remotely* close to what we should be paid, given the
responsibility and the service that we provide to others.

it is a huge "pain in the ass" conundrum, that leaves each of us with
a moral and ethical dilemma that we each *individually* have to face.

l.



Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-14 Thread Luke Kenneth Casson Leighton
On Wed, Aug 14, 2019 at 5:13 PM Aurelien Jarno  wrote:

> > a proper fix would also have the advantage of keeping linkers for
> > *other* platforms (even 64 bit ones) out of swap-thrashing, saving
> > power consumption for build hardware and costing a lot less on SSD and
> > HDD regular replacements.
>
> That would only fix ld, which is only a small part of the issue. Do you
> also have ideas about how to fix llvm, gcc or rustc which are also
> affected by virtual memory exhaustion on 32-bit architectures?

*deep breath* - no.  or, you're not going to like it: it's not a
technical solution, it's going to need a massive world-wide sustained
and systematic education campaign, written in reasonable and logical
language, explaining and advising GNU/Linux applications writers to
take more care and to be much more responsible about how they put
programs together.

a first cut at such a campaign would be:

* designing of core critical libraries to be used exclusively through
dlopen / dlsym.  this is just good library design practice in the
first place: one function and one function ONLY is publicly exposed,
returning a pointer to a table-of-functions (samba's VFS layer for
example [1] - a rough sketch of the idea follows this list).
* compile-time options that use alternative memory-efficient
algorithms instead of performance-efficient ones
* compile-time options to remove non-essential resource-hungry features
* compiling options in Makefiles that do not assume that there are vast
amounts of memory available (KDE "developer" mode for example would
compile c++ objects individually whereas "maintainer" mode would
auto-generate a file that #included absolutely every single .cpp file
into one, because it's "quicker").
* potential complete redesigns using IPC/RPC modular architectures:
applying the "UNIX Philosophy" however doing so through a runtime
binary self-describing "system" specifically designed for that
purpose.  *this is what made Microsoft [and Apple] successful*.  that
means a strategic focus on getting DCOM for UNIX up and running [2].
god no please not d-bus [3] [4].
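
as a rough consumer-side illustration of the first point (python/ctypes
just to keep it short; libplugin.so, plugin_ops() and the two
operations in the table are entirely hypothetical - this shows the
shape of the pattern, not samba's actual VFS API):

import ctypes

# the table of function pointers that the one public entry point hands back
class PluginOps(ctypes.Structure):
    _fields_ = [
        ("open",  ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_char_p)),
        ("close", ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)),
    ]

lib = ctypes.CDLL("./libplugin.so")                  # dlopen()
lib.plugin_ops.restype = ctypes.POINTER(PluginOps)   # the one exported symbol
ops = lib.plugin_ops().contents                      # dlsym() it, call it once
handle = ops.open(b"/etc/hostname")
ops.close(handle)

everything behind that table stays private to the library, and nothing
is loaded until the moment it is actually asked for.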

also, it's going to need to be made clear to people - diplomatically
but clearly - that whilst they're developing on modern hardware
(because it's what *they* can afford, and what *they* can get - in the
West), the rest of the world (particularly "Embedded" processors)
simply does not have the money or the resources that they do.

unfortunately, here, the perspective "i'm ok, i'm doing my own thing,
in my own free time, i'm not being paid to support *your* hardware" is
a legitimate one.

now, i'm not really the right person to head such an effort.  i can
*identify* the problem, and get the ball rolling on a discussion:
however with many people within debian alone having the perspective
that everything i do, think or say is specifically designed to "order
people about" and "tell them what to do", i'm disincentivised right
from the start.

also, i've got a thousand systems to deliver as part of a
crowd-funding campaign [and i'm currently also dealing with designing
the Libre RISC-V CPU/GPU/VPU]

as of right now those thousand systems - 450 of them are going to have
to go out with Debian/Testing 8.  there's no way they can go out with
Debian 10.  why? because this first revision hardware - designed to be
eco-conscious - uses an Allwinner A20 and only has 2GB of RAM
[upgrades are planned - i *need* to get this first version out, first]

with Debian 10 requiring 4GB of RAM primarily because of firefox,
they're effectively useless if they're ever "upgraded"

that's a thousand systems that effectively go straight into landfill.

l.

[1] 
https://www.samba.org/samba/docs/old/Samba3-Developers-Guide/vfs.html#id2559133

[2] incredibly, Wine has had DCOM and OLE available and good enough to
use, for about ten years now.  it just needs "extracting" from the
Wine codebase. DCOM stops all of the arguments over APIs (think
"libboost".  if puzzled, add debian/testing and debian/old-stable to
/etc/apt/sources.list, then do "apt-cache search boost | wc")

due to DCOM providing "a means to publish a runtime self-describing
language-independent interface", 30-year-old WIN32 OLE binaries for
which the source code has been irretrievably lost will *still work*
and may still be used, in modern Windows desktops today.  Mozilla
ripped out XPCOM because although it was "inspired" by COM, they
failed, during its initial design, to understand why Co-Classes exist.

as a result it caused massive ongoing problems for 3rd party java and
c++ users, due to binary incompatibility caused by changes to APIs on
major releases.  Co-Classes were SPECIFICALLY designed to stop EXACTLY
that problem... and Mozilla failed to add it to XPCOM.

bottom line: the free software community has, through "hating" on
microsoft, rejected the very technology that made microsoft so
successful in the first place.

Microsoft used DCOM (and OLE), Apple (thanks to Steve's playtime /
break doing NeXT) developed Objective-C / Objective-J / 

Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-09 Thread Luke Kenneth Casson Leighton
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68

On Fri, Aug 9, 2019 at 1:49 PM Ivo De Decker  wrote:
>
> Hi Aurelien,
>
> On 8/8/19 10:38 PM, Aurelien Jarno wrote:
>
> > 32-bit processes are able to address at maximum 4GB of memory (2^32),
> > and often less (2 or 3GB) due to architectural or kernel limitations.
>
> [...]
>
> Thanks for bringing this up.
>
> > 1) Build a 64-bit compiler targeting the 32-bit corresponding
> > architecture and install it in the 32-bit chroot with the other
> > 64-bit dependencies. This is still a kind of cross-compiler, but the
> > rest of the build is unchanged and the testsuite can be run. I guess
> > it *might* be something acceptable. release-team, could you please
> > confirm?
>
> As you noted, our current policy doesn't allow that. However, we could
> certainly consider reevaluating this part of the policy if there is a
> workable solution.

it was a long time ago: people who've explained it to me sounded like
they knew what they were talking about when it comes to insisting that
builds be native.

fixing binutils to bring back the linker algorithms that were
short-sightedly destroyed "because they're so historic and laughably
archaic why would we ever need them" should be the first and only
absolute top priority.

only if that catastrophically fails should other options be considered.

with the repro ld-torture code-generator that i wrote, and the amount
of expertise there is within the debian community, it would not
surprise me at all if binutils-ld could be properly fixed extremely
rapidly.

a proper fix would also have the advantage of keeping linkers for
*other* platforms (even 64 bit ones) out of swap-thrashing, saving
power consumption for build hardware and costing a lot less on SSD and
HDD regular replacements.

l.



Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-09 Thread Luke Kenneth Casson Leighton
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68

On Thu, Aug 8, 2019 at 9:39 PM Aurelien Jarno  wrote:

> We are at a point were we should probably look for a real solution
> instead of relying on tricks.

 *sigh* i _have_ been pointing out for several years now that this is
a situation that is going to get increasingly worse and worse, leaving
perfectly good hardware only fit for landfill.

 i spoke to Dr Stallman about the lack of progress here:
 https://sourceware.org/bugzilla/show_bug.cgi?id=22831

he expressed some puzzlement as the original binutils algorithm was
perfectly well capable of handling linking with far less resident
memory than was available at the time - and did *NOT* - just like gcc -
assume that virtual memory was "the way to go".  this is because the
algorithm used in ld was written at a time when virtual memory was far
from adequate.

 then somewhere in the mid-90s, someone went "4GB is enough for
anybody" and ripped the design to shreds, making the deeply flawed and
short-sighted assumption that application linking would remain -
forever - below 640k^H^H^H^H4GB.

 now we're paying the price.

 the binutils-gold algorithm (with the options listed in the bugreport)
is *supposed* to fix this, however the ld-torture test that i created
shows that the binutils-gold algorithm is *also* flawed: it probably
uses mmap when it is in *no way* supposed to.

 binutils with the --no-keep-memory option actually does far better
than binutils-gold... in most circumstances.  however it also
spuriously fails with inexplicable errors.

 basically, somebody needs to actually properly take responsibility
for this and get it fixed.  the pressure will then be off: linking
will take longer *but at least it will complete*.

 i've written the ld-torture program - a random function generator -
so that it can be used to easily generate large numbers of massive c
source files that will hit well over the 4GB limit at link time.  so
it's easily reproducible.

 l.

p.s. no, going into virtual memory is not acceptable.  the
cross-referencing instantly creates a swap-thrash scenario, that will
put all and any builds into 10 to 100x the completion time.  any link
that goes into "thrash" will take 2-3 days to complete instead of an
hour.  "--no-keep-memory" is supposed to fix that, but it is *NOT* an
option on binutils-gold, it is *ONLY* available on the *original*
binutils-ld.



Re: loss of synaptic due to wayland

2019-07-10 Thread Luke Kenneth Casson Leighton
On Wed, Jul 10, 2019 at 2:34 PM Gene Heskett  wrote:
>
> On Wednesday 10 July 2019 03:19:04 Luke Kenneth Casson Leighton wrote:
>
> > On Wed, Jul 10, 2019 at 2:11 AM Gene Heskett 
> wrote:
> > > On Tuesday 09 July 2019 10:01:44 Luke Kenneth Casson Leighton wrote:
> > > > for source stuff:
> > > >  * apt-get source {package} - gets the *source code* of a package
> > >
> > > doesn't exist, this stuff is tar.gz's straight from kernel.org
> >
> >  doesn't matter: pick the nearest package, that's why i said
> > "similar".  if that's 4.1something.something it will [should] not
> > matter.  if the build dependencies are that drastically different from
> > kernel to kernel it's an alarm bell.
> >
> > l.
> you are missing the point Luke.

no, i'm not: you're misunderstanding.  allow me to emphasise it by
spelling it out, explicitly.

step 1: use apt-get build-dep
linux-image-4.whatever-is-the-largest-highest-available-debian-image
step 2: COMPILE THE 5.0 UPSTREAM SOURCE CODE.

step 1 installs the prerequisite build dependencies
step 2 gets you what you want.

note that i did NOT say:

step 1: use apt-get build-dep linux-image-4.whatever
step 2: use apt-get source {the-exact-same-package-name}

now. if you're missing some of the drivers (as you have in the past),
it means that you've not correctly selected the exact same Kconfig
options that would get you the loadable dynamic modules that support
that hardware.

therefore, the easiest way to correct that: you need to track down
what the debian config is which _does_ enable the required hardware.
i haven't done that in a while, so can't recall off the top of my head
the exact process: i'd suggest starting with "apt-get source
linux-image-4.whatever-is-greatest-that-you-can-find".

again, to reiterate, that is NOT for the purposes or to imply IN ANY
WAY that you should stop trying to do what you are doing and go with
the 4.whatever kernel.  you should use the debian 4.whatever Kconfig
AS THE BASIS for what you are trying to do [with the 5.0-or-whatever
kernel].

you may find that some Kconfig options are missing, or their names
changed: this is just how it goes with every revision of the linux
kernel.  you'll have to apply some experimentation and thought.

l.



Re: loss of synaptic due to wayland

2019-07-10 Thread Luke Kenneth Casson Leighton
On Wed, Jul 10, 2019 at 2:11 AM Gene Heskett  wrote:
>
> On Tuesday 09 July 2019 10:01:44 Luke Kenneth Casson Leighton wrote:

> > for source stuff:
> >  * apt-get source {package} - gets the *source code* of a package
>
> doesn't exist, this stuff is tar.gz's straight from kernel.org

 doesn't matter: pick the nearest package, that's why i said
"similar".  if that's 4.1something.something it will [should] not
matter.  if the build dependencies are that drastically different from
kernel to kernel it's an alarm bell.

l.



Re: loss of synaptic due to wayland

2019-07-09 Thread Luke Kenneth Casson Leighton
(hi gene, hope you don't mind, i'm cc'ing the list back again, i
assume you accidentally didn't hit "reply-to-all?"  or that i did, if
so, whoops...)

On Mon, Jul 8, 2019 at 7:20 PM Gene Heskett  wrote:
>
> On Monday 08 July 2019 08:37:14 Luke Kenneth Casson Leighton wrote:
>
> > On Mon, Jul 8, 2019 at 12:55 PM Gene Heskett 
> wrote:
> > > yes it was, and no solution was offered that I read about. And no,
> > > aptitude is not a replacement.
> >
> >  used it once or twice, wasn't impressed, returned to apt-get and
> > apt-cache search, which work extremely well, and have done since
> > debian began.
> >
> What I am trying to do is build a much newer, rt-preempt kernel for
> buster on an armhf, aka a pi3b.  After having configured it, I try
> a "make" and in about a minute, am getting a missing openssl/bio.h exit:
>
> pi@picnc:/media/pi/workpi120/buildbot/linux-5.1.14 $ make
>   HOSTCC  scripts/extract-cert
> scripts/extract-cert.c:21:10: fatal error: openssl/bio.h: No such file or
> directory
>  #include <openssl/bio.h>
>   ^~~
> compilation terminated.
> make[1]: *** [scripts/Makefile.host:92: scripts/extract-cert] Error 1
> make: *** [Makefile:1065: scripts] Error 2
>
>
> not at all fam with apt-cache search, I have not found a bio.h except in
> some obvious biology related programs. unrelated to openssl IOW.
>
> The man page is so long I quickly lose track of all the options.
>
> So how would I state the search that will find it if it exists in the
> repo's?

 there's a file search "thing" somewhere, for apt... apt-file, i think
(it's a separate package rather than a plugin)... although i suspect
you simply have the wrong version of openssl installed.

 ok so i do have /usr/include/openssl/bio.h (makes it easier if
someone else has it) and so i can find it with:

$ grep bio.h /var/lib/dpkg/info/*.list | grep openssl

and that gives:

/var/lib/dpkg/info/libssl-dev:amd64.list:/usr/include/openssl/bio.h
/var/lib/dpkg/info/nodejs.list:/usr/include/node/openssl/bio.h

shriik wtf am i doiiing with nodejs installed, dieee nodejs,
die sorry about that, adverse reaction to node.js

ok so you'll need to do "apt-get install libssl-dev" and that *should*
get you the missing openssl/bio.h file.

if you run into any other difficulties with missing packages, try this:

"apt-get build-dep linux-image-4.something.something"

that will install *all* build dependencies for a *debian* kernel build
process... which (warning) may be a little bit more than you bargained
for, you'll have to review what it recommends to install before
proceeding, ok?

basically when doing a build of a package that's similar (or
identical) to an existing debian one, the trick of installing
*debian's* build dependencies for the same name uuusuuually does the
trick of getting you everything you'll need to build that "vanilla"
upstream {whatever}.

problems come when debian sets different options from the default, and
you can always inspect the debian/rules file for what they are.


> My /e/a/sources.list:
>
> deb http://raspbian.raspberrypi.org/raspbian/ buster main contrib
> non-free rpi
> # Uncomment line below then 'apt-get update' to enable 'apt-get source'
> deb-src http://raspbian.raspberrypi.org/raspbian/ buster main contrib
> non-free rpi
>
> >  never had *any* problems - at all -  that weren't caused by doing
> > something incredibly stupid such as "ctrl-c" in the middle of an
> > installation (at the point where dpkg is being called), and even then,
> > apt-get -f install in almost 100% of cases fixed the "problem that i
> > had myself caused".
> >
> >  really: if you ask me, relying on GUIs for something as
> > mission-critical as installation of packages is asking for trouble.
>
> What the gui is good for is showing you the exact package name to install
> or purge. Nothing else, however capable it might be, can really replace
> the look and feel of a good gui. But I've been corrected before.  Teach
> me!

 :)

 on-list is better (other people benefit too).  these are what i use:

for source stuff:
 * apt-get source {package} - gets the *source code* of a package
 * apt-get build-dep {package} - gets you the (full) build
dependencies required to *make* a source package (with
"dpkg-buildpackage)

those are typically best done in a chroot, for safety.


to find out which package has a file installed:
* grep filename /var/lib/dpkg/info/*.list

general package installing process:
 * apt-cache search "keyword(s)"
 * apt-cache show {package} - usually pipe this into more (or less)
 * apt-get install {package} - just one.
 * apt-get --purge remove {package} - just one.

 these are [almost certainly] the commands that synaptic runs,
behind-the-scenes.  for me, GUIs just irritate me beyond belief,
because they typically require moving hands off the keyboard and onto
the mouse.  i even use fvwm2 with "mouse-over equals window-focus"
very deliberately to minimise clicks. this all because i have
recurring bouts of RSI...

hth.

l.



Re: loss of synaptic due to wayland

2019-07-08 Thread Luke Kenneth Casson Leighton
On Mon, Jul 8, 2019 at 2:01 PM Andrei POPESCU  wrote:
>
> On Lu, 08 iul 19, 07:42:46, Gene Heskett wrote:
> >
> > yes it was, and no solution was offered that I read about. And no,
> > aptitude is not a replacement. I've hit q for quit and had it tear a
> > working system down to doing a reinstall to recover, 3 times now.
>
> I used to be a heavy aptitude user in the past, on unstable (i.e. almost
> daily package upgrades). It does have it quirks. It also shows very
> clearly what it is about to do before you press the final 'g'.

 indeed, as does apt-get (which i much prefer).  ultimately, if you
are... how can i put this diplomatically... a GUI gives you the "nice
warm feeling" on presenting you with a nice warm cozy dialog box, "Are
You Sure You Wanna Do This".

 with apt and aptitude it is assumed that you are... well... hum no
other way to say it really. it is assumed that you are... um...
capable of reading?

 sorry if that sounds like it's terribly insulting, there's really not
a way to say it without implying so, because if you don't actually
read the warning - no matter that it's in words that are not
bold-faced and surrounded by a big dialog box - you basically get to
learn *why* the warning is there.

> > It may be capable, but imnsho its also dangerous. Having it do
> > anything but quit instantly when you hit the quit key q, hit because
> > you're lost is unforgivable.
>
> Thanks, but no thanks. Having it exit immediately in the middle of a
> complex upgrade just because I hit 'q' by mistake is not nice and might
> leave your system in a very bad state.
>
> Once an action has been started it might be possible to interrupt it
> with Ctrl+C. Please do so at your own risk.

 synaptic, i presume, actively prevents and prohibits such termination.

 a system that's been terminated in the middle of an install can
actually end up with a damaged dpkg database.  apt and aptitude exec
dpkg to install individual packages, and, as anyone knows who has
tried to manually install a .deb, you interrupt that process, as
andrei says, at your own risk.

 of course, it is perfectly possible to f*** up with synaptic as
well: "killall -9 synaptic" whilst it's in the middle of an install
will achieve the exact same level of system-f***ed-up-ness.

 if you really _really_ get into such a mess, the first action to take
is "apt-get -f install".  this sually recovers things back to a
known stable state, and you can re-run apt-get {whatever}

 sometimes i've had to do a dpkg -i --force-all {insert package that
failed.deb}, particularly on systems where there's been file conflicts
(very old packages still installed, where new ones have the same
file).

 ultimately, though, there is absolutely *NO* excuse for quotes
reinstalling quotes.  any debian system is ENTIRELY RECOVERABLE
without resorting to the stupidity of the windows mindset "if it's
broke duhhh reinstall".  in really *really* broken conditions (a
kernel upgrade interrupted, a grub replacement gone wrong), you can
run recovery live USB boot media, and repair the damage by chrooting
in to the root filesystem.

in one hilarious incident involving "cpio" i managed to write ARM
files onto an x86 filesystem (in /lib, /usr/lib, /bin and /sbin) *and
still recovered the system* by live-booting a recovery USB stick,
manually downloading the relevant dpkgs, unpacking them and
hand-copying the accidentally-replaced files.

bottom line: if you're relying heavily on synaptic, be worried and
concerned that you're turning into a windows user :)

l.



Re: loss of synaptic due to wayland

2019-07-08 Thread Luke Kenneth Casson Leighton
On Mon, Jul 8, 2019 at 12:55 PM Gene Heskett  wrote:

> yes it was, and no solution was offered that I read about. And no,
> aptitude is not a replacement.

 used it once or twice, wasn't impressed, returned to apt-get and
apt-cache search, which work extremely well, and have done since
debian began.

 never had *any* problems - at all -  that weren't caused by doing
something incredibly stupid such as "ctrl-c" in the middle of an
installation (at the point where dpkg is being called), and even then,
apt-get -f install in almost 100% of cases fixed the "problem that i
had myself caused".

 really: if you ask me, relying on GUIs for something as
mission-critical as installation of packages is asking for trouble.

 l.



Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-26 Thread Luke Kenneth Casson Leighton
On Mon, Jan 7, 2019 at 11:30 PM Mike Hommey  wrote:

> > it would be extremely useful to confirm that 32-bit builds can in fact
> > be completed, simply by adding "-Wl no-keep-memory" to any 32-bit
> > builds that are failing at the linker phase due to lack of memory.
>
> Note that Firefox is built with --no-keep-memory
> --reduce-memory-overheads, and that was still not enough for 32-bts
> builds. GNU gold instead of BFD ld was also given a shot. That didn't
> work either. Presently, to make things link at all on 32-bits platforms,
> debug info is entirely disabled. I still need to figure out what minimal
> debug info can be enabled without incurring too much memory usage
> during linking.

 hi mike, hi steve, i did not receive a response on the queries about
the additional recommended options [1], so rather than lose track i
raised a bugreport and cross-referenced this discussion:

 https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=919882

 personally, after using the ld-evil-linker.py tool i do not expect
the recommended options to work on 32-bit: the investigation that i did
provides some empirical evidence that, despite the options saying that
they do not use mmap, ld-gold still uses it, whereas ld-bfd does *not*.

 so, ironically, on ld-bfd you run into one bug, and on ld-gold you
run into another :)

l.

[1] https://sourceware.org/bugzilla/show_bug.cgi?id=22831#c25



Re: armel *and* armhf qualification for Wheezy

2019-01-13 Thread Luke Kenneth Casson Leighton
https://sourceware.org/bugzilla/show_bug.cgi?id=22831#c25

steve, mike, any news on the use of the options recommended in comment
#25 ?  this for native 32-bit builds.

Ian Lance Taylor 2019-01-09 23:48:45 UTC

When using gold the key options are --no-mmap-output-file
--no-map-whole-files --no-keep-files-mapped.  Can you confirm that
those options--all of them together--were tried with gold?


On Wed, May 16, 2012 at 8:18 PM Mike Hommey  wrote:
>
> On Wed, May 16, 2012 at 04:56:39PM +0100, Steve McIntyre wrote:
> > On Wed, May 16, 2012 at 05:44:10PM +0200, Mike Hommey wrote:
> > >Hi Steve,
> >
> > Hey Mike,
> >
> > >On Wed, May 16, 2012 at 04:26:10PM +0100, Steve McIntyre wrote:
> > >> In terms of raw buildd CPU right now, I think we're doing OK, but
> > >> memory is more of a limiting factor with bigger C++ builds.
> > >
> > >As maintainer of such a package that pushes buildds limits, I have a
> > >question.
> > >Isn't memory really only a problem when linking C++ with big DWARF info?
> >
> > Honestly, I'm not 100% sure where all the memory is going. I do know
> > that at current rates of usage increase we'll struggle to link some
> > large programs (like browsers) on any 32-bit platform soon.
> >
> > >Would it be worth trying to link with gold for these?
> >
> > It might be, yes. I can try that with iceweasel on an imx53 or Panda
> > with 1GB if you like. Are there any non-obvious patches needed to the
> > packaging?
>
> Apart from whatever is needed for gcc to use gold, there shouldn't be.
>
> Mike



Re: armel *and* armhf qualification for Wheezy

2019-01-09 Thread Luke Kenneth Casson Leighton
On Thursday, May 17, 2012, Steve McIntyre  wrote:

> On Wed, May 16, 2012 at 09:18:38PM +0200, Mike Hommey wrote:
>
> >> >Would it be worth trying to link with gold for these?
> >>
> >> It might be, yes. I can try that with iceweasel on an imx53 or Panda
> >> with 1GB if you like. Are there any non-obvious patches needed to the
> >> packaging?
> >
> >Apart from whatever is needed for gcc to use gold, there shouldn't be.
>
> OK, cool. Building with ld and gold on a panda right now, to see how
> they compare.
>
>
Any chance of trying the options listed in comment 25
https://sourceware.org/bugzilla/show_bug.cgi?id=22831


-- 
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-09 Thread Luke Kenneth Casson Leighton
https://sourceware.org/bugzilla/show_bug.cgi?id=22831 sorry, using phone to
type. mike, comment 25 shows some important options to ld gold: would it be
possible to retry with those? 32-bit. Disabling mmap looks really important,
as clearly a 4gb+ binary is guaranteed to fail to fit into a 32-bit mmap.

On Tuesday, January 8, 2019, Mike Hommey  wrote:

>
> Note that Firefox is built with --no-keep-memory
> --reduce-memory-overheads, and that was still not enough for 32-bits
> builds. GNU gold instead of BFD ld was also given a shot. That didn't
> work either. Presently, to make things link at all on 32-bits platforms,
> debug info is entirely disabled. I still need to figure out what minimal
> debug info can be enabled without incurring too much memory usage
> during linking.
>
> Mike
>


-- 
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-08 Thread Luke Kenneth Casson Leighton
On Tue, Jan 8, 2019 at 7:26 AM Luke Kenneth Casson Leighton
 wrote:
>
> On Tue, Jan 8, 2019 at 7:01 AM Luke Kenneth Casson Leighton
>  wrote:

> trying this:
>
> $ python evil_linker_torture.py 3000 400 200 50
>
> running with "make -j4" is going to take a few hours.

 ok so that did the trick: got to 4.3gb total resident memory even
with --no-keep-memory tacked on to the link.  fortunately it bombed
out (below) before it could get to the (assumed) point where it would
double the amount of resident RAM (8.6GB) and cause my laptop to go
into complete thrashing meltdown.

hypothetically it should have created an 18 GB executable.  3000 times
500,000 static chars isn't the only reason this is failing, because
when restricted to only 100 functions and 100 random calls per
function, it worked.

ok so i'm retrying without --no-keep-memory... and it's now gone
beyond the 5GB mark.  backgrounding it and letting it progress a few
seconds at a time... that's interesting up to 8GB...  9.5GB ok
that's enough: any more than that and i really will trash the laptop.

ok so the above settings will definitely do the job (and seem to have
thrown up a repro candidate for the issue you were experiencing with
firefox builds, mike).

i apologise that it takes about 3 hours to build all 3,000 6mb object
files, even with a quad-core 3.6ghz i7.  they're a bit monstrous.

will find this post somewhere on debian-devel archives and
cross-reference it here
https://sourceware.org/bugzilla/show_bug.cgi?id=22831


ld: warning: cannot find entry symbol _start; defaulting to 00401000
ld: src9.o: in function `fn_9_0':
/home/lkcl/src/ld_torture/src9.c:3006:(.text+0x27): relocation
truncated to fit: R_X86_64_PLT32 against symbol `fn_1149_322' defined
in .text section in src1149.o
ld: /home/lkcl/src/ld_torture/src9.c:3008:(.text+0x41): relocation
truncated to fit: R_X86_64_PLT32 against symbol `fn_1387_379' defined
in .text section in src1387.o
ld: /home/lkcl/src/ld_torture/src9.c:3014:(.text+0x8f): relocation
truncated to fit: R_X86_64_PLT32 against symbol `fn_1821_295' defined
in .text section in src1821.o
ld: /home/lkcl/src/ld_torture/src9.c:3015:(.text+0x9c): relocation
truncated to fit: R_X86_64_PLT32 against symbol `fn_1082_189' defined
in .text section in src1082.o
ld: /home/lkcl/src/ld_torture/src9.c:3016:(.text+0xa9): relocation
truncated to fit: R_X86_64_PLT32 against symbol `fn_183_330' defined
in .text section in src183.o
ld: /home/lkcl/src/ld_torture/src9.c:3024:(.text+0x111): relocation
truncated to fit: R_X86_64_PLT32 against symbol `fn_162_394' defined
in .text section in src162.o
ld: /home/lkcl/src/ld_torture/src9.c:3026:(.text+0x12b): relocation
truncated to fit: R_X86_64_PLT32 against symbol `fn_132_235' defined
in .text section in src132.o
ld: /home/lkcl/src/ld_torture/src9.c:3028:(.text+0x145): relocation
truncated to fit: R_X86_64_PLT32 against symbol `fn_1528_316' defined
in .text section in src1528.o
ld: /home/lkcl/src/ld_torture/src9.c:3029:(.text+0x152): relocation
truncated to fit: R_X86_64_PLT32 against symbol `fn_1178_357' defined
in .text section in src1178.o
ld: /home/lkcl/src/ld_torture/src9.c:3031:(.text+0x16c): relocation
truncated to fit: R_X86_64_PLT32 against symbol `fn_1180_278' defined
in .text section in src1180.o
ld: /home/lkcl/src/ld_torture/src9.c:3035:(.text+0x1a0): additional
relocation overflows omitted from the output
^Cmake: *** Deleting file `main'
make: *** [main] Interrupt



Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-07 Thread Luke Kenneth Casson Leighton
On Tue, Jan 8, 2019 at 7:01 AM Luke Kenneth Casson Leighton
 wrote:

> i'm going to see if i can get above the 4GB mark by modifying the
> Makefile to do 3,000 shared libraries instead of 3,000 static object
> files.

 fail.  shared libraries link extremely quickly.  reverted to static,
trying this:

$ python evil_linker_torture.py 3000 400 200 50

so that's 4x the number of functions per file, and 2x the number of
calls *in* each function.

just the compile phase requires 1GB per object file (gcc 7.3.0-29),
which, on "make -j8" ratched up the loadavg to the point where...
well.. *when* it recovered it reported a loadavg of over 35, with 95%
usage of the 16GB swap space...

running with "make -j4" is going to take a few hours.

l.



Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-07 Thread Luke Kenneth Casson Leighton
$ python evil_linker_torture.py 3000 100 100 50

ok so that managed to get up to 1.8GB resident memory, paused for a
bit, then doubled it to 3.6GB, and a few seconds later successfully
outputted a binary.

i'm going to see if i can get above the 4GB mark by modifying the
Makefile to do 3,000 shared libraries instead of 3,000 static object
files.

l.



Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-07 Thread Luke Kenneth Casson Leighton
On Tue, Jan 8, 2019 at 6:27 AM Luke Kenneth Casson Leighton
 wrote:

> i'm just running the above, will hit "send" now in case i can't hit
> ctrl-c in time on the linker phase... goodbye world... :)

$ python evil_linker_torture.py 2000 50 100 200
$ make -j8

oh, err... whoopsie... is this normal? :)  it was only showing around
600mb during the linker phase anyway. will keep hunting. where is this
best discussed (i.e. not such a massive cc list)?

/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/7/crtbeginS.o: in function
`deregister_tm_clones':
crtstuff.c:(.text+0x3): relocation truncated to fit: R_X86_64_PC32
against `.tm_clone_table'
/usr/bin/ld: crtstuff.c:(.text+0xb): relocation truncated to fit:
R_X86_64_PC32 against symbol `__TMC_END__' defined in .data section in
main
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/7/crtbeginS.o: in function
`register_tm_clones':
crtstuff.c:(.text+0x43): relocation truncated to fit: R_X86_64_PC32
against `.tm_clone_table'
/usr/bin/ld: crtstuff.c:(.text+0x4a): relocation truncated to fit:
R_X86_64_PC32 against symbol `__TMC_END__' defined in .data section in
main
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/7/crtbeginS.o: in function
`__do_global_dtors_aux':
crtstuff.c:(.text+0x92): relocation truncated to fit: R_X86_64_PC32
against `.bss'
/usr/bin/ld: crtstuff.c:(.text+0xba): relocation truncated to fit:
R_X86_64_PC32 against `.bss'
collect2: error: ld returned 1 exit status
make: *** [main] Error 1



Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-07 Thread Luke Kenneth Casson Leighton
$ python evil_linker_torture.py 2000 50 100 200

ok so it's pretty basic, and arguments of "2000 50 10 100"
resulted in around a 10-15 second linker phase, which top showed to be
getting up to around the 2-3GB resident memory range.  "2000 50 100
200" should start to make even a system with 64GB RAM start to
feel the pain.

evil_linker_torture.py N M O P generates N files with M functions
calling O randomly-selected functions, where each file contains a
static char array of size P that is *deliberately* put into the data
segment (rather than the bss) by being initialised with a non-zero
value, exactly and precisely as
you should never do because... surpriiise! it adversely impacts the
binary size.

i'm just running the above, will hit "send" now in case i can't hit
ctrl-c in time on the linker phase... goodbye world... :)

l.
#!/usr/bin/env python

import sys
import random

maketemplate = """\
CC := gcc
CFILES:=$(shell ls | grep "\.c")
OBJS:=$(CFILES:%.c=%.o)
DEPS := $(CFILES:%.c=%.d)
CFLAGS := -g -g -g
LDFLAGS := -g -g -g

%.d: %.c
	$(CC) $(CFLAGS) -MM -o $@ $<

%.o: %.c
	$(CC) $(CFLAGS) -o $@ -c $<

#	$(CC) $(CFLAGS) -include $(DEPS) -o $@ $<

main: $(OBJS)
	$(CC) $(OBJS) $(LDFLAGS) -o main
"""

def gen_makefile():
    with open("Makefile", "w") as f:
        f.write(maketemplate)

def gen_headers(num_files, num_fns):
    for fnum in range(num_files):
        with open("hdr{}.h".format(fnum), "w") as f:
            for fn_num in range(num_fns):
                f.write("extern int fn_{}_{}(int arg1);\n".format(fnum, fn_num))

def gen_c_code(num_files, num_fns, num_calls, static_sz):
    for fnum in range(num_files):
        with open("src{}.c".format(fnum), "w") as f:
            for hfnum in range(num_files):
                f.write('#include "hdr{}.h"\n'.format(hfnum))
            f.write('static char data[%d] = {1};\n' % static_sz)
            for fn_num in range(num_fns):
                f.write("int fn_%d_%d(int arg1)\n{\n" % (fnum, fn_num))
                f.write("\tint arg = arg1 + 1;\n")
                for nc in range(num_calls):
                    cnum = random.randint(0, num_fns-1)
                    cfile = random.randint(0, num_files-1)
                    f.write("\targ += fn_{}_{}(arg);\n".format(cfile, cnum))
                f.write("\treturn arg;\n")
                f.write("}\n")
            if fnum != 0:
                continue
            f.write("int main(int argc, char *argv[])\n{\n")
            f.write("\tint arg = 0;\n")
            for nc in range(num_calls):
                cnum = random.randint(0, num_fns-1)
                cfile = random.randint(0, num_files-1)
                f.write("\targ += fn_{}_{}(arg);\n".format(cfile, cnum))
            f.write("\treturn 0;\n")
            f.write("}\n")

if __name__ == '__main__':
    num_files = int(sys.argv[1])
    num_fns = int(sys.argv[2])
    num_calls = int(sys.argv[3])
    static_sz = int(sys.argv[4])
    gen_makefile()
    gen_headers(num_files, num_fns)
    gen_c_code(num_files, num_fns, num_calls, static_sz)


Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-07 Thread Luke Kenneth Casson Leighton
On Tuesday, January 8, 2019, Mike Hommey  wrote:

> On Mon, Jan 07, 2019 at 11:46:41PM +0000, Luke Kenneth Casson Leighton
> wrote:
>
> > At some point apps are going to become so insanely large that not even
> > disabling debug info will help.
>
> That's less likely, I'd say. Debug info *is* getting incredibly more and
> more complex for the same amount of executable weight, and linking that
> is making things worse and worse. But having enough code to actually be
> a problem without debug info is probably not so close.
>
>
It's a slow-boil problem: taken 10 years to get bad, another 10 years to
get really bad. It needs strategic planning. Right now things are not
exactly being tackled except in a reactive way, which unfortunately takes
time as everyone is a volunteer. That exacerbates the problem and leaves
drastic "solutions" such as "drop all 32 bit support".


> There are solutions to still keep full debug info, but the Debian
> packaging side doesn't support that presently: using split-dwarf. It
> would probably be worth investing in supporting that.
>
>
Sounds very reasonable, always wondered why debug syms are not separated at
build/link, would buy maybe another decade?



-- 
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-07 Thread Luke Kenneth Casson Leighton
On Tuesday, January 8, 2019, Mike Hommey  wrote:

> .
>
> Note that Firefox is built with --no-keep-memory
> --reduce-memory-overheads, and that was still not enough for 32-bts
> builds. GNU gold instead of BFD ld was also given a shot. That didn't
> work either. Presently, to make things link at all on 32-bits platforms,
> debug info is entirely disabled. I still need to figure out what minimal
> debug info can be enabled without incurring too much memory usage
> during linking.


Dang. Yes, removing debug symbols was the only way I could get webkit to
link without thrashing, it's a temporary fix though.

So the removal of the algorithm in ld that Dr Stallman wrote, dating back to the
1990s, has already resulted in a situation that's worse than I feared.

At some point apps are going to become so insanely large that not even
disabling debug info will help.

At which point perhaps it is worth questioning the approach of having an
app be a single executable in the first place.  Even on a 64-bit
system, if an app doesn't fit into 4gb of RAM, something is going
drastically awry.



-- 
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-07 Thread Luke Kenneth Casson Leighton
(hi edmund, i'm reinstating debian-devel on the cc list as this is not
a debian-arm problem, it's *everyone's* problem)

On Mon, Jan 7, 2019 at 12:40 PM Edmund Grimley Evans
 wrote:

> >  i spoke with dr stallman a couple of weeks ago and confirmed that in
> > the original version of ld that he wrote, he very very specifically
> > made sure that it ONLY allocated memory up to the maximum *physical*
> > resident available amount (i.e. only went into swap as an absolute
> > last resort), and secondly that the number of object files loaded into
> > memory was kept, again, to the minimum that the amount of spare
> > resident RAM could handle.
>
> How did ld back then determine how much physical memory was available,
> and how might a modern reimplemention do it?

 i don't know: i haven't investigated the code.  one clue: gcc does
exactly the same thing (or, used to: i believe that someone *may* have
tried removing the feature from recent versions of gcc).

 ... you know how gcc stays below the radar of available memory, never
going into swap-space except as a last resort?

> Perhaps you use sysconf(_SC_PHYS_PAGES) or sysconf(_SC_AVPHYS_PAGES).
> But which? I have often been annoyed by how "make -j" may attempt
> several huge linking phases in parallel.
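
(for what it's worth, both of those numbers are a single call away
from python on linux; whether AVPHYS - which does not count
reclaimable page cache - is the right one for a linker to trust is
exactly the open question.)

import os

page = os.sysconf("SC_PAGE_SIZE")
print("physical RAM:", os.sysconf("SC_PHYS_PAGES") * page, "bytes")
print("free physical RAM:", os.sysconf("SC_AVPHYS_PAGES") * page, "bytes")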

 on my current laptop, which was one of the very early quad core i7
skylakes with 2400mhz DDR4 RAM, the PCIe bus actually shuts down if
too much data goes over it (too high a power draw occurs).

 consequently, if swap-thrashing occurs, it's extremely risky, as it
causes the NVMe SSD to go *offline*, re-initialise, and come back on
again after some delay.

 that means that i absolutely CANNOT allow the linker phase to go into
swap-thrashing, as it will result in the loadavg shooting up to over
120 within just a few seconds.


> Would it be possible to put together a small script that demonstrates
> ld's inefficient use of memory? It is easy enough to generate a big
> object file from a tiny source file, and there are no doubt easy ways
> of measuring how much memory a process used, so it may be possible to
> provide a more convenient test case than "please try building Firefox
> and watch/listen as your SSD/HDD gets t(h)rashed".
>
> extern void *a[], *b[];
> void *c[1000] = {  };
> void *d[1000] = {  };
>
> If we had an easy test case we could compare GNU ld, GNU gold, and LLD.

 a simple script that auto-generated tens of thousands of functions in
a couple of hundred c files, with each function making tens to
hundreds of random cross-references (calls) to other functions across
the entire range of auto-generated c files should be more than
adequate to make the linker phase go into near-total meltdown.

 the evil kid in me really *really* wants to give that a shot...
except it would be extremely risky to run on my laptop.

 i'll write something up. mwahahah :)

l.



Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and proft

2019-01-07 Thread Luke Kenneth Casson Leighton
On Sun, Jan 6, 2019 at 11:46 PM Steve McIntyre  wrote:
>
> [ Please note the cross-post and respect the Reply-To... ]
>
> Hi folks,
>
> This has taken a while in coming, for which I apologise. There's a lot
> of work involved in rebuilding the whole Debian archive, and many many
> hours spent analysing the results. You learn quite a lot, too! :-)
>
> I promised way back before DC18 that I'd publish the results of the
> rebuilds that I'd just started. Here they are, after a few false
> starts. I've been rebuilding the archive *specifically* to check if we
> would have any problems building our 32-bit Arm ports (armel and
> armhf) using 64-bit arm64 hardware. I might have found other issues
> too, but that was my goal.

 very cool.

 steve, this is probably as good a time as any to mention a very
specific issue with binutils (ld) that has been slowly and inexorably
creeping up on *all* distros - both 64 and 32 bit - where the 32-bit
arches are beginning to hit the issue first.

 it's a 4GB variant of the "640k should be enough for anyone" problem,
as applied to linking.

 i spoke with dr stallman a couple of weeks ago and confirmed that in
the original version of ld that he wrote, he very very specifically
made sure that it ONLY allocated memory up to the maximum *physical*
resident available amount (i.e. only went into swap as an absolute
last resort), and secondly that the number of object files loaded into
memory was kept, again, to the minimum that the amount of spare
resident RAM could handle.

 some... less-experienced people, somewhere in the late 1990s, ripped
all of that code out ["what's all this crap, why are we not just
relying on swap, 4GB swap will surely be enough for anybody"]

 by 2008 i experienced a complete melt-down on a 2GB system when
compiling webkit.  i tracked it down to having accidentally enabled
"-g -g -g" in the Makefile, which i had done specifically for one
file, forgot about it, and accidentally recompiled everything.

 that resulted in an absolute thrashing meltdown that nearly took out
the entire laptop.

 the problem is that the linker phase in any application is so heavy
on cross-references that the moment the memory allocated by the linker
goes outside of the boundary of the available resident RAM it is
ABSOLUTELY GUARANTEED to go into permanent sustained thrashing.

 i cannot emphasise enough how absolutely critical that this is to
EVERY distribution to get this fixed.

resources world-wide are being completely wasted (power, time, and the
destruction of HDDs and SSDs) because systems which should only really
take an hour to do a link are instead often taking FIFTY times longer
due to swap thrashing.

not only that, but the poor design of ld is beginning to stop certain
packages from even *linking* on 32-bit systems!  firefox i heard now
requires SEVEN GIGABYTES during the linker phase!

and it's down to this very short-sighted decision to remove code
written by dr stallman, back in the late 1990s.

it would be extremely useful to confirm that 32-bit builds can in fact
be completed, simply by adding "-Wl,--no-keep-memory" to any 32-bit
builds that are failing at the linker phase due to lack of memory.

however *please do not make the mistake of thinking that this is
specifically a 32-bit problem*.  resources are being wasted on 64-bit
systems by them going into massive thrashing, just as much as they are
on 32-bit ones: it's just that if it happens on a 32-bit system a hard
error occurs.

somebody needs to take responsibility for fixing binutils: the
maintainer of binutils needs help as he does not understand the
problem.  https://sourceware.org/bugzilla/show_bug.cgi?id=22831

l.



Re: Upcoming Qt switch to OpenGL ES on arm64

2018-11-27 Thread Luke Kenneth Casson Leighton
On Tue, Nov 27, 2018 at 7:54 AM Gene Heskett  wrote:

> Not in that light, but would sinking the pi foundation make us look any
> better? I think not.

 the rule i go by is that of the "bill of ethics" [1] - which is both
simple and puzzling at the same time.  if you're familiar with the
"New Law" Robots from Roger Allen MacBride's books, it's a bit like
that: whilst it is unethical to do "harm", it is also unethical to
*support* someone to do "harm"... therefore it is "permitted" to
*withdraw* resources (time, energy, money and intelligence enhancers)
as a means to reduce the influence and effectiveness of any person or
group doing "harm".


> But I'm not trying to white wash them either. IP
> theft is the mode of the times it seems. But,if you want a high
> performance gpio that can run a short spi bus at 50 megabaud, where else
> are you going to get it besides broadcom?

 i've done quite a lot of embedded work: i'm a big fan of the STM32F
series, and many of them are well under $2.  STM NUCLEO boards retail
for under $10.  i way, waaay prefer to have a separate embedded
processor with its own greatly-simplified firmware, then communicate
with that over a serial bus to send it commands (or a fast bus if
needed).

 with simpler firmware (embedded into onboard FLASH) the risk of a
crash is greatly reduced, and recovery/restart time even if it does is
well under a second.

> >  ... you see how that works?  the wrong decision here, debian gets to
> > completely destroy what is already a fragile ecosystem, just for
> > "convenience".
>
> There is also TANSTAAFL. But you will do as pure as you can and according
> to the debian bylaws, and I admire that greatly, its a very large part
> of why I'm here. But you did ask the question, and I gave you 2 cents
> worth of the real world where we need to just get the job done to the
> best of the hardware's ability.

 appreciated.  it's a tough call, i know.

> And "we" are not doing a very good job
> of exploiting the hardware we can buy today.

 the best assembly-level programmers used to be from russia... why?
because they couldn't get the hardware (Cold War), so had to squeeze
as much out of what they had as they could.

> But I've said my piece, so won't hassle you about it anymore in this
> thread. I might still complain even, but thats how it is. And its not
> your fault so please don't take it personal, its certainly not intended
> to be.
>
> Take care and stay well, Luke Kenneth Casson Leighton.

you too, gene.

l.

[1] https://www.titanians.org/the-bill-of-ethics/



Re: Upcoming Qt switch to OpenGL ES on arm64

2018-11-26 Thread Luke Kenneth Casson Leighton
On Tue, Nov 27, 2018 at 6:47 AM Gene Heskett  wrote:
>
> On Monday 26 November 2018 22:04:04 Luke Kenneth Casson Leighton wrote:
>
> > On Fri, Nov 23, 2018 at 12:18 AM Lisandro Damián Nicanor Pérez Meyer
> >
> >  wrote:
> > > So: what's the best outcome for our *current* users? Again, pick
> > > only one.
> >
> > here's a perspective that may not have been considered: how much
> > influence and effect on purchasing decisions would the choice made
> > have?
> >
> > we know that proprietary embedded GPUs and associated proprietary
> > software are not just unethical, and cause huge problems, they also
> > hurt company profits and cause increases in support costs.
>
> If they choose to support it, I've found most are in it for the initial
> sale, and its up to you to make it do useful work. This seems to be the
> case with the rock64's, Odroids and such, very close to zero support,

 the odroids, i know they're a very small team in korea: you cannot
expect "support".   the contract: you bought the hardware, therefore
you know what you're doing.

rock64, that's TL Lim's baby: he's doing a hell of a lot of work
behind the scenes to get GPL compliance from Allwinner and much more.
 TL's team do the best they can by providing forums for people to use
and to support each other.

 ultimately, though, if you've the money (if you place an order for
10k units for example), that will get their immediate attention.  if
not... then, well, the reality is: every "support" call or email or
forum message answered is profit lost.

 that's just how it is.  you want better, be prepared to pay the money.


> Pi's are buckets better,

 better because the GPU is proprietary and Broadcom should be
financially supported and we should all buy their products, thus
keeping them in business and sustaining and endorsing Broadcom's
unethical practices?

 and, because they are "better", debian should also prioritise
supporting Broadcom's unethical practices by making a
mutually-exclusive decision that chops off alternatives that *are*
libre?

 ... you see how that works?  the wrong decision here, debian gets to
completely destroy what is already a fragile ecosystem, just for
"convenience".

l.



Re: Upcoming Qt switch to OpenGL ES on arm64

2018-11-26 Thread Luke Kenneth Casson Leighton
On Fri, Nov 23, 2018 at 12:18 AM Lisandro Damián Nicanor Pérez Meyer
 wrote:

> So: what's the best outcome for our *current* users? Again, pick only one.

here's a perspective that may not have been considered: how much
influence and effect on purchasing decisions would the choice made
have?

we know that proprietary embedded GPUs and associated proprietary
software are not just unethical, and cause huge problems, they also
hurt company profits and cause increases in support costs.

by complete contrast, when all the source code is libre-licensed, this
is what happens:

 
http://www.h-online.com/open/news/item/Intel-and-Valve-collaborate-to-develop-open-source-graphics-drivers-1649632.html

basically what i am inviting you to consider is that in making this
decision, the one that supports, encourages and indirectly endorses
the continued propagation of proprietary 3D libraries is one that is
going to have a massive world-wide adverse financial impact over time.

i would therefore strongly recommend prioritising decisions that
support libre-licensed GPU firmware and PCIe GPU cards that have
libre-licensed source code.

if systems with etnaviv are "punished" for example by this decision,
that would not go down too well.  if people running older Radeon GPU
Cards (on the RockPro64, which has a 4x PCIe slot that easily runs at
2500 MBytes/sec) find that their cards perform badly, that is also not
going to go down well.

bottom line: your decisions here have far more impact than you may realise.

l.



Re: Arm ports build machines (was Re: Arch qualification for buster: call for DSA, Security, toolchain concerns)

2018-06-29 Thread Luke Kenneth Casson Leighton
spoke again to TL and asked if pine64 would be willing to look at
sponsorship with rockpro64 boards (the ones that take 4x PCIe): if
someone from debian were to contact him direct he would happily
consider it.

i then asked him if i could cc him into this discussion and he said he
was way *way* too busy to enter into another mailing list discussion,
so if someone from debian emails me privately, off-list, i will then
cc him and/or put them in touch with him on irc.  i can also be
reached on freenode and oftc as "lkcl", if that is easier.

l.



Re: Arm ports build machines (was Re: Arch qualification for buster: call for DSA, Security, toolchain concerns)

2018-06-29 Thread Luke Kenneth Casson Leighton
On Fri, Jun 29, 2018 at 8:13 PM, Luke Kenneth Casson Leighton
 wrote:
> On Fri, Jun 29, 2018 at 6:59 PM, Jonathan Wiltshire  wrote:
>
>>>  also worth noting, they're working on a 2U rackmount server which
>>> will have i think something insane like 48 Rock64Pro boards in one
>>> full-length case.
>
>> None of this addresses the basic DSA requirement of remote management.
>> Troubling local hands to change a disk once in a while is reasonable; being
>> blocked waiting for a power cycle on a regular basis is not (and I can't
>> imagine hosting sponsors are wild about their employees' time being used
>> for that either).
>
> i know exactly what you mean, i've had to deal with data centres.
> i'll make sure that TL Lim is aware of this, and will ask him if
> there's a way to include remote power-management / power-cycling of
> boards in the planned product or if it's already been thought of.

 TL informs me that all the power and reset signals for all 48 of the
RockPro64s tucked into the full-length 2U case are brought out to the
back panel.  an MCU (or MCUs) or SBC (or SBCs) may therefore be
connected directly to those in order to provide *individual* remote
power / reset management of all 48 RockPro64s.  DIY remote power
management, but it is actual remote power management.
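
to give an idea of what the DIY part could look like: a sketch only,
assuming one board's reset line is wired (active-low) to a spare GPIO on
the controlling SBC - the gpio number here is invented - using the plain
sysfs gpio interface:

# pulse the (hypothetical) reset line on gpio 17 low for one second (as root)
echo 17 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio17/direction
echo 0 > /sys/class/gpio/gpio17/value
sleep 1
echo 1 > /sys/class/gpio/gpio17/value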

 l.



Re: Arm ports build machines (was Re: Arch qualification for buster: call for DSA, Security, toolchain concerns)

2018-06-29 Thread Luke Kenneth Casson Leighton
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


On Fri, Jun 29, 2018 at 8:31 PM, Florian Weimer  wrote:
> * Luke Kenneth Casson Leighton:
>
>>  that is not a surprise to hear: the massive thrashing caused by the
>> linker phase not being possible to be RAM-resident will be absolutely
>> hammering the drives beyond reasonable wear-and-tear limits.  which is
>> why i'm recommending people try "-Wl,--no-keep-memory".
>
> Note that ld will sometimes stuff everything into a single RWX segment
> as a result, which is not desirable.

 florian, thank you for responding: i've put a copy of the insights
that you give into the bugreport at
https://sourceware.org/bugzilla/show_bug.cgi?id=22831#c16

> Unfortunately, without significant investment into historic linker
> technologies (with external sorting and that kind of stuff),

 yes, ah ha!  funnily enough the algorithm that i was asked to create
back in 1988 was an external matrix-multiply, i take it you are
talking about the same thing, where linking is done using explicit
load-process-save cycles rather than relying on swap.

> I don't
> think it is viable to build 32-bit software natively in the near
> future.

  i noted an alternative strategy in the bugreport: if binutils *at
the very least* were to look at the available resident RAM and only
malloc'd and used up to that amount, and kept only a few (or even just
one) object file in memory at a time and did all the linking for that,
it would be infinitely better than the current situation which is
*only going to get worse*.

>   Maybe next year only a few packages will need exceptions, but
> the number will grow with each month.

 well... that ignores the fact that at some point in the next few
years there will be a package that needs 16 GB of resident RAM to
link.   and a few years after that it will be 32 GB.  and that's on
64-bit systems.  the package's name will probably be "firefox", given
the current growth rate.

 does debian *really* want to have to upgrade all 64-bit systems in
the build farm first to 16 GB RAM and then to 32 GB and then to 64
GB??  can the powerpc64 systems and all other 64-bit architectures
even *be* upgraded to 16 GB then 32 GB then 64 GB of RAM??

 basically the problems faced by 32-bit systems are a warning shot
across the bows about ld not really being kept up-to-date with the
increases in software complexity that's being thrown at it.  it's
*NOT* just about 32-bit.

 this problem can basically be faced REactively... or it can be faced
PROactively: the historic linker strategies that you mention are i
feel going to be needed in some years' time *even for 64-bit*.

l.



Re: Arm ports build machines (was Re: Arch qualification for buster: call for DSA, Security, toolchain concerns)

2018-06-29 Thread Luke Kenneth Casson Leighton
On Fri, Jun 29, 2018 at 6:59 PM, Jonathan Wiltshire  wrote:

>>  also worth noting, they're working on a 2U rackmount server which
>> will have i think something insane like 48 Rock64Pro boards in one
>> full-length case.

> None of this addresses the basic DSA requirement of remote management.
> Troubling local hands to change a disk once in a while is reasonable; being
> blocked waiting for a power cycle on a regular basis is not (and I can't
> imagine hosting sponsors are wild about their employees' time being used
> for that either).

i know exactly what you mean, i've had to deal with data centres.
i'll make sure that TL Lim is aware of this, and will ask him if
there's a way to include remote power-management / power-cycling of
boards in the planned product or if it's already been thought of.

l.



Re: Arm ports build machines (was Re: Arch qualification for buster: call for DSA, Security, toolchain concerns)

2018-06-29 Thread Luke Kenneth Casson Leighton
On Fri, Jun 29, 2018 at 5:21 PM, Steve McIntyre  wrote:

>>2G is also way too little memory these days for a new buildd.
>
> Nod - lots of packages are just too big for that now.

 apologies for repeating it again: this is why i'm recommending people
try "-Wl,--no-keep-memory" on the linker phase as if it works as
intended it will almost certainly drastically reduce memory usage to
the point where it will stay, for the majority of packages, well
within the 2GB limit i.e. within resident RAM.

 i'm really not sure why the discussions continue not to take this
into account, repeating the status-quo and accepting "packages are too
big" as if there is absolutely no possible way that this problem may
be solved or even attempted to be solved... ever.  i am very confused
by this.  perhaps it is down to latency in discussions as new people
contribute (but have significant delay on reading), i don't know.


> Future options
> ==
>
> I understand DSA's reluctance to continue supporting dev boards as
> build platforms - I've been the one working on some of these machines
> in the machine room at Arm, and it's painful when you can't reliably
> reboot or get onto the console of crashed machines. We've also had a
> spate of disk failures recently which has caused extended downtime.

 that is not a surprise to hear: the massive thrashing caused by the
linker phase not being possible to be RAM-resident will be absolutely
hammering the drives beyond reasonable wear-and-tear limits.  which is
why i'm recommending people try "-Wl,--no-keep-memory".

 ... oh, i have an idea which people might like to consider trying.
it's to use "-Wl,--no-keep-memory" on the linker phase of 32-bit
builds.  did i mention that already? :)  it might save some build
hardware from being destroyed if people try using
"-Wl,--no-keep-memory"!


> I'm just in the middle of switching the arm64 machines here to using
> SW RAID to mitigate that in future, and that's just not an option on
> the dev boards. We want to move away from dev boards for these
> reasons, at the very least.

 most of them won't have native SATA: very few 32-bit ARM systems do.
GbE is not that common either (so decent-speed network drives are
challenging, as well).  so they'll almost certainly be USB-based
(USB-to-SATA, which is known-unreliable), and putting such vast
amounts of drive-hammering through USB-to-SATA due to thrashing isn't
going to help :)

 the allwinner A20 and R40 are the two low-cost ARM systems that i'm
aware of that have native SATA.


 there is however a new devboard that is reasonably cheap and should
be available really soon: the Rock64Pro (not to be confused with the
Rock64, which does NOT have PCie), from pine64:
https://www.pine64.org/?page_id=61454

 it's one of the first *low-cost* ARM dev-boards that i've seen which
has 4GB of RAM and has a 4x PCIe slot.  the team have tested it out
with an NVMe SSD and also 4x SATA PCIe cards: they easily managed to
hit 20 Gigabits per second on the NVMe drive (2500 mbytes/sec).

 also worth noting, they're working on a 2U rackmount server which
will have i think something insane like 48 Rock64Pro boards in one
full-length case.

 the Rock64Pro uses the RK3399 which is a 4-core CortexA53 plus 2-core
CortexA72 for a total 6-core SMP system, all 64-bit.

 if anyone would like me to have a word with TL Lim (the CEO of
pine64) i can see if he is willing and able to donate some Rock64Pro
boards to the debian farm, let me know.

l.



Re: Arch qualification for buster: call for DSA, Security, toolchain concernsj

2018-06-29 Thread Luke Kenneth Casson Leighton
On Fri, Jun 29, 2018 at 3:28 PM, Samuel Thibault  wrote:

> Roger Shimizu, le ven. 29 juin 2018 23:04:26 +0900, a ecrit:
>> On Fri, Jun 29, 2018 at 10:04 PM, Uwe Kleine-König
>>  wrote:
>> > On 06/29/2018 11:23 AM, Julien Cristau wrote:
>> >> 2G is also way too little memory these days for a new buildd.
>> >
>> > Then the machine is out, the amount of RAM isn't upgradable.
>>
>> I don't think 2GB is not enough for 32-bit machine.
>
> I can say that 2GB is really not enough for a quite-non-small list
> of packages.

 and that list is only going to get inexorably and slowly bigger, to
the point where even on 64-bit systems at some point someone is going
to notice.  7GB (and climbing) of resident RAM for the linker phase on
firefox to keep it out of swap space *should* be ringing alarm bells
even for amd64 build maintainers.

> Sure you can add swap, but then e.g. link phases are
> agonizingly long.

 sorry if it was not clear, and (to people who have read the analysis
already) apologies for repeating it for the third time, but this is
precisely and exactly why i said that it would be a good idea to
investigate adding "-Wl,--no-keep-memory" to the linker phase of
32-bit builds.

 https://sourceware.org/bugzilla/show_bug.cgi?id=22831

that linker phases are 100% guaranteed to go into thrashing, due to
the massive amount of cross-referencing needed in object-file linking,
has been a known problem for over nine years, and nobody really
seems to understand or acknowledge or tackle it.

l.



Re: Arch qualification for buster: call for DSA, Security, toolchain concernsj

2018-06-29 Thread Luke Kenneth Casson Leighton
On Fri, Jun 29, 2018 at 1:12 PM, John Paul Adrian Glaubitz
 wrote:
> On 06/29/2018 01:42 PM, Luke Kenneth Casson Leighton wrote:
>>> I think that building on arm64 after fixing the bug in question is the
>>> way to move forward. I'm surprised the bug itself hasn't been fixed yet,
>>> doesn't speak for ARM.
>>
>>  if you mean ARM hardware (OoO), it's too late.  systems are out there
>> with OoO speculative execution bugs in the hardware (and certainly
>> more to be found), and they're here to stay unfortunately.
>
> How are the speculative execution bugs related in any way to the software
> bug which prevents proper KVM emulation of ARM32 binaries on ARM64? Those
> are two completely different topics.

 apologies, i just didn't quite understand your answer, so i was
looking for clarification.

>>  if you mean that buildd on 32-bit systems could be modified to pass
>> "-Wl,--no-keep-memory" to all linker phases to see if that results in
>> the anticipated dramatic reduction in memory usage, that's
>> straightforward to try, nothing to do with ARM themselves.
>
> Again: I was talking about building 32-bit packages on 64-bit systems,
> i.e. 32-bit userland on a 64-bit kernel. We do that for a lot of 
> architectures,
> including mips, powerpc, x86 and in the past also for sparc.

 would it be possible to try out that linker flag and let the binutils
team know the results?  i'd also really like to know.

l.



Re: Arch qualification for buster: call for DSA, Security, toolchain concerns

2018-06-29 Thread Luke Kenneth Casson Leighton
On Fri, Jun 29, 2018 at 12:50 PM, Julien Cristau  wrote:

> Everyone, please avoid followups to debian-po...@lists.debian.org.
> Unless something is relevant to *all* architectures (hint: discussion of
> riscv or arm issues don't qualify), keep replies to the appropriate
> port-specific mailing list.

 apologies, julien: as an outsider i'm not completely familiar with
the guidelines.  the reduction in memory-usage at the linker phase
"-Wl,--no-keep-memory" however - and the associated inherent
slowly-inexorably-increasing size is i feel definitely something that
affects all ports.

 it is really *really* tricky to get any kind of traction *at all*
with people on this.  it's not gcc's problem to solve, it's not one
package's problem to solve, it's not any one distros problem to solve,
it's not any one port's problem to solve and so on, *and* it's a
slow-burn problem that's taking *literally* a decade to become more
and more of a problem.  consequently getting reports and feedback to
the binutils team is... damn hard.

l.



Re: Arch qualification for buster: call for DSA, Security, toolchain concernsj

2018-06-29 Thread Luke Kenneth Casson Leighton
On Fri, Jun 29, 2018 at 12:06 PM, John Paul Adrian Glaubitz
 wrote:
> On 06/29/2018 10:41 AM, Luke Kenneth Casson Leighton wrote:
>> On Fri, Jun 29, 2018 at 8:16 AM, Uwe Kleine-König  
>> wrote:
>>
>>>> In short, the hardware (development boards) we're currently using to
>>>> build armel and armhf packages aren't up to our standards, and we
>>>> really, really want them to go away when stretch goes EOL (expected in
>>>> 2020).  We urge arm porters to find a way to build armhf packages in
>>>> VMs or chroots on server-class arm64 hardware.
>>
>>  from what i gather the rule is that the packages have to be built
>> native.  is that a correct understanding or has the policy changed?
>
> Native in the sense that the CPU itself is not emulated which is the case
> when building arm32 packages on arm64.

 ok.  that's clear.  thanks john.

> I think that building on arm64 after fixing the bug in question is the
> way to move forward. I'm surprised the bug itself hasn't been fixed yet,
> doesn't speak for ARM.

 if you mean ARM hardware (OoO), it's too late.  systems are out there
with OoO speculative execution bugs in the hardware (and certainly
more to be found), and they're here to stay unfortunately.

 if you mean that buildd on 32-bit systems could be modified to pass
"-Wl,--no-keep-memory" to all linker phases to see if that results in
the anticipated dramatic reduction in memory usage, that's
straightforward to try, nothing to do with ARM themselves.

l.



Re: Arch qualification for buster: call for DSA, Security, toolchain concerns

2018-06-29 Thread Luke Kenneth Casson Leighton
On Fri, Jun 29, 2018 at 12:23 PM, Adam D. Barratt
 wrote:

>>  i don't know: i'm an outsider who doesn't have the information in
>> short-term memory, which is why i cc'd the debian-riscv team as they
>> have current facts and knowledge foremost in their minds.  which is
>> why i included them.
>
> It would have been wiser to do so *before* stating that nothing was
> happening as if it were a fact.

 true... apologies.

>>  ah.  so what you're saying is, you could really do with some extra
>> help?
>
> I don't think that's ever been in dispute for basically any core team
> in Debian.

 :)

> That doesn't change the fact that the atmosphere around the change in
> question has made me feel very uncomfortable and unenthused about SRM
> work. (I realise that this is somewhat of a self-feeding energy
> monster.)

 i hear ya.

l.



Re: Arch qualification for buster: call for DSA, Security, toolchain concerns

2018-06-29 Thread Luke Kenneth Casson Leighton
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


On Fri, Jun 29, 2018 at 10:35 AM, Adam D. Barratt
 wrote:

>>  what is the reason why that package is not moving forward?
>
> I assume you're referring to the dpkg upload that's in proposed-updates
> waiting for the point release in two weeks time?

 i don't know: i'm an outsider who doesn't have the information in
short-term memory, which is why i cc'd the debian-riscv team as they
have current facts and knowledge foremost in their minds.  which is
why i included them.

> I'm also getting very tired of the repeated vilification of SRM over
> this, and if there were any doubt can assure you that it is not
> increasing at least my inclination to spend my already limited free
> time on Debian activity.

 ah.  so what you're saying is, you could really do with some extra help?

l.



Re: Arch qualification for buster: call for DSA, Security, toolchain concerns

2018-06-29 Thread Luke Kenneth Casson Leighton
On Wed, Jun 27, 2018 at 9:03 PM, Niels Thykier  wrote:

> armel/armhf:
> 
>
>  * Undesirable to keep the hardware running beyond 2020.  armhf VM
>support uncertain. (DSA)
>- Source: [DSA Sprint report]

 [other affected 32-bit architectures removed but still relevant]

 ... i'm not sure how to put this other than to just ask the question.
has it occurred to anyone to think through the consequences of not
maintaining 32-bit versions of debian for the various different
architectures?  there are literally millions of ARM-based tablets and
embedded systems out there which will basically end up in landfill if
a major distro such as debian does not take a stand and push back
against the "well everything's going 64-bit so why should *we*
bother?" meme.

 arm64 is particularly inefficient and problematic compared to
aarch32: the change in the instruction set dropped some of the more
efficiently-encoded instructions, which increases 64-bit program size,
requires a whopping FIFTY PERCENT instruction-cache size increase to
compensate, and pushes power consumption up by over 15%.

 in addition, arm64 is usually speculative OoO (Cavium ThunderX V1
being a notable exception) which means it's vulnerable to spectre and
meltdown attacks, whereas 32-bit ARM is exclusively in-order.  if you
want to GUARANTEE that you've got spectre-immune hardware you need
either any 32-bit system (where even Cortex A7 has virtualisation) or
if 64-bit is absolutely required use Cortex A53.

 basically, abandoning or planning to abandon 32-bit ARM *right now*
leaves security-conscious end-users in a really *really* dicey
position.


> We are currently unaware of any new architectures likely to be ready in
> time for inclusion in buster.

 for at least six months now, debian-riscv has been repeatedly asking
for a single zero-impact line to be included in *one* file in *one*
dpkg-related package, which would allow riscv to stop being an NMU
architecture and become part of debian/unstable (and quickly beyond).
cc'ing the debian-riscv list because they will know the details about
this.  it's really quite ridiculous that a single one-line change
having absolutely no effect on any other architecture whatsover is not
being actioned and is holding debian-riscv back because of that.

 what is the reason why that package is not moving forward?

 l.



Re: Arch qualification for buster: call for DSA, Security, toolchain concernsj

2018-06-29 Thread Luke Kenneth Casson Leighton
On Fri, Jun 29, 2018 at 8:16 AM, Uwe Kleine-König  
wrote:

> Hello,
>
> On Wed, Jun 27, 2018 at 08:03:00PM +, Niels Thykier wrote:
>> armel/armhf:
>> 
>>
>>  * Undesirable to keep the hardware running beyond 2020.  armhf VM
>>support uncertain. (DSA)
>>- Source: [DSA Sprint report]
>>
>> [DSA Sprint report]:
>> https://lists.debian.org/debian-project/2018/02/msg4.html
>
> In this report Julien Cristau wrote:
>
>> In short, the hardware (development boards) we're currently using to
>> build armel and armhf packages aren't up to our standards, and we
>> really, really want them to go away when stretch goes EOL (expected in
>> 2020).  We urge arm porters to find a way to build armhf packages in
>> VMs or chroots on server-class arm64 hardware.

 from what i gather the rule is that the packages have to be built
native.  is that a correct understanding or has the policy changed?

>
> If the concerns are mostly about the hardware not being rackable, there
> is a rackable NAS by Netgear:
>
> 
> https://www.netgear.com/business/products/storage/readynas/RN2120.aspx#tab-techspecs
>
> with an armhf cpu. Not sure if cpu speed (1.2 GHz) and available RAM (2
> GiB) are good enough.

 no matter how much RAM there is it's never going to be "enough", and
letting systems go into swap is also not a viable option [2]

 i've been endeavouring for a long, *long* time to communicate the
issue wrt building (linking) of very large packages.  as it's a
strategic cross-distro problem that's been very very slowly creeping
up on *all* distros as packages inexorably creep up in size, getting
through to people about the problem and possible solutions is
extremely difficult.  eventually i raised a bug on binutils and it
took several months to communicate the extent and scope of the problem
even to the developer of binutils:

 https://sourceware.org/bugzilla/show_bug.cgi?id=22831

the problem is that ld from binutils by default, unlike gcc which
looks dynamically at how much RAM is available, loads absolutely all
object files into memory and ASSUMES that swap space is going to take
care of any RAM deficiencies.

 unfortunately due to the amount of cross-referencing that takes place
in the linker phase this "strategy" causes MASSIVE thrashing, even if
one single object file is sufficient to cause swapping.

 this is particularly pertinent for systems which compile with debug
info switched on as it is far more likely that a debug compile will go
into swap, due to the amount of RAM being consumed.

 firefox now requires 7GB of resident RAM, making it impossible to
compile on 32-bit systems.  webkit-based packages require well over 2GB
RAM (and have done for many years).  i saw one scientific package a
couple years back that could not be compiled for 32-bit systems
either.

 all of this is NOT the fault of the PACKAGES [1], it's down to the
fact that *binutils* - ld's default memory-allocation strategy - is
far too aggressive.

 the main developer of ld has this to say:

Please try if "-Wl,--no-keep-memory" works.

 now, that's *not* a good long-term "solution" - it's a drastic,
drastic hack that cuts the optimisation of keeping object files in
memory stone dead.  it'll work... it will almost certainly result in
32-bit systems being able to successfully link applications that
previously failed... but it *is* a hack.  someone really *really*
needs to work with the binutils developer to *properly* solve this.

 if any package maintainer manages to use the above hack to
successfully compile 32-bit packages that previously completely ran
out of RAM or otherwise took days to complete, please do put a comment
to that effect in the binutils bugreport, it will help everyone in
the entire GNU/Linux community to do so.
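
a simple way to get before-and-after numbers for such a comment - a
sketch only, assuming GNU time is installed (the "time" package on
debian) - is to wrap the whole build and pick out the peak resident set
size, which will belong to the linker if the link really is the
heaviest step:

/usr/bin/time -v dpkg-buildpackage -us -uc -b 2>&1 | grep "Maximum resident"

run it once with and once without the flag, and put both numbers in the
bugreport.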

l.

[1] really, it is... developers could easily split packages into
dynamic-loadable modules, where each module easily compiles well below
even 2GB or 1GB of RAM.  they choose not to, choosing instead to link
hundreds of object files into a single executable (or library).
asking so many developers to change their strategy however... yyeah :)
 big task, i ain't taking responsibility for that one.

[2] the amount of memory being required for the linker phase of large
packages over time goes up, and up, and up, and up... when is it going
to stop?  never.  so just adding more RAM is never going to "solve"
the problem, is it?  it just *avoids* the problem.  letting even
64-bit systems go into swap is a huge waste of resources as builds
that go into swap will consume far more resources and time.  so *even
on 64-bit systems* this needs solving.



Re: causes for this?

2018-06-25 Thread Luke Kenneth Casson Leighton
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


On Sun, Jun 24, 2018 at 3:57 PM, Gene Heskett  wrote:

> I will try to remember that s35xx intel.
>
> Unforch, that search at newegg comes back empty today.

 currently up to version "S3520"
 https://www.amazon.co.uk/Intel-S3520-240GB-Solid-State-Drive/dp/B01K4I77JE/

> The 3rd one I just put on the pi, an SP 60GB thats now 24 bucks, with a
> $10 usb-3<->sata adapter plugged into a usb-2 port on the pi, seems to
> be ok so far.  Yeah, that faint knocking sound is real but my knuckles
> are getting tender. :)

 :)

 if you're setting up a cluster of ultra-low-cost machines with
distributed redundancy then pffh who cares if even 10% of the machines
die.

l.



Re: causes for this?

2018-06-24 Thread Luke Kenneth Casson Leighton
On Sat, Jun 23, 2018 at 4:15 PM, Gene Heskett  wrote:

>> So when you first plug in a flash device, only a few megabytes are
>> actually available for writing, and the controller is busy running
>> self test routines on the rest. Any writes to the untested parts of
>> the flash get queued behind the testing so will be quite slow. Most
>> users would not notice an effect, especially with SD cards in digital
>> cameras because they are powered all the time and only filled
>> gradually.
>
> Sounds plausible, but you'd think they'd want to test it just to stop the
> shipment of bad product.

 pffh, naah.  you can't do tests on flash without actually risking
damaging it.  damage means reduced life.  reduced life means less
confidence from the customer as its capacity is less than what it's
supposed to be.  much better to ship out untested product and let
amazon and other sales front(s) deal with complaints and returns.

 firmware on low-cost (and newly-designed unusual) SSDs is extremely
dodgy.  one of the drives that i tested literally crawled to an
absolute stand-still after a certain sustained amount of parallel
writing (from different processes).  the article went out on slashdot
and i was given some advice about it: stop the parallel write
queueing.  there's a linux kernel parameter somewhere for it...  i
didn't get to try it out unfortunately.

 this was after OCZ had been caught switching on a firmware #define
which they had been TOLD under no circumstances to enable as it causes
data corruption (they wanted to be "faster" than the competition).
the data corruption was so bad it actually in some cases overwrote the
actual firmware *on the drive*, meaning that the SSD was no longer...
an SSD.

 the only reasonably-priced SSDs i trust now are the intel s35xx
 series.  other drives such as the toshibas are also supposed to
have supercapacitors for "enhanced power loss protection", but the
supercapacitors simply aren't large enough: sustain a series of
writes above a certain threshold speed, pull the power, and there's not
enough in the supercapacitors to cover the time it takes to save the
cached data.

 only the intel s35xx series has had the work put into it,
technically, to do the job *at a reasonable price*.  i ran a 4-day
test writing several terabytes of data, the power was randomly pulled
at between 7 and 25 second intervals, for a total of six and a half
THOUSAND times, and *not a single byte* was lost.  which is deeply
impressive.

 the s37xx series is by a different team and they use the fuckwit
marvel "consumer" chipset that's so troublesome in kingston, crucial
and other SSDs.

 really not being funny or anything: if you care about your data
(*and* your wallet) just don't buy anything other than intel s35xx
series SSDs.  of course if you have over $10k to spend there are
plenty of data-centre quality SSDs.

l.



Re: Mirabox kernel help needed

2018-02-09 Thread Luke Kenneth Casson Leighton
On Friday, February 2, 2018, amon  wrote:

> Thanks for getting back to me. I hope you are right. What little
> I have found via searches with Google were not comforting, to
> say the least.
>
>
most people are not sufficiently competent on the general internet to know
what they are doing. the mass of unanswered or incompetently answered
questions can be a pain to filter out.

that is your responsibility... and something you just have to accept and be
patient with rather than attach emotional responses to such as "not
comforting" and other such phrases.

find the RIGHT people, ignore the rest, and please DOCUMENT your successes
in STATIC web pages such as a wiki NOT a mailing list, forum or blog post.

why? because YOU will need the documentation in six months time, forget
everyone else that will indirectly benefit!!

that includes where and how you obtained the toolchain, everything.


> I can't do a boot from the sdcard for this application as I
> will probably be using it for something else. I think that is
> why I was thinking I need to do something with uboot.
>
> I don't have a cross compiler, although I have heard there
> might be one at Globalscale. I have built cross-compilers
> but not since I did one to build m68000 code for a NeXT from
> an i486... needless to say, it took several days to do the
> 3 compiles to generate it. I hope the world has gotten easier
> in the ensuing two decades.
>
> I'm working on some stuff where I have to be quite conservative
> as I hope to put it into a cubesat someday if we can raise the
> cash for it.
>
> --
> +---+
> |   Dale Amon  Immortal Data|
> |   CEO Midland International Air and Space Port|
> | a...@vnl.com   "Data Systems for Deep Space and Time" |
> +---+
>
>

-- 
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


Re: Rock64

2017-08-11 Thread Luke Kenneth Casson Leighton
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


On Fri, Aug 11, 2017 at 3:56 PM, Luna Jernberg  wrote:
> Hello!
>
> Just saw the BoF on the Debconf stream and has a small question:
>
> Anyone know hows the support for the Rock64 is?
> https://www.pine64.org/?page_id=7147 Pines new Board
> thinking
> of maybe buying one?

 the main source of libre info about these will be #linux-rockchip on
freenode.  that's your "beeline" direct to people who *will* know what
they're talking about.

 the RK3328 is pretty awesome, it's basically an upgrade to the
RK3288, so a max of 4GB RAM.  the RK3288 was used in a chromebook by
Acer and google ABSOLUTELY INSISTED that rockchip make FULL SOURCE
CODE AVAILABLE.  that trend now seems to be continuing, as they move
forward with near-identical SoCs that the engineering teams go, "huh.
uhm our source code seems to be mainlined.. um well... we'd
better base our new SoC off of that, then".

[unlike allwinner's engineers who just fuck everybody off because
they just don't care, their shares are about to vest in 6 months and
they'll be millionaires...]

anyway.

 TL's team managed to get the rock64's LPDDR3 RAM up to some
ridiculous speed (1600mhz) - _way_ above what rockchip's own EVB team
managed.  they also managed to get the HDMI 2 interface correctly laid
out so it'll do 4K HD.  and it has one *full* USB3 Host interface.  so
no bullshit about "w, wa i want my SATA drives connected to a
low-cost ultra-low-power ARM board but i don't want them limited to
480mbit/sec, waa"

  :)

 basically it's a deeply impressive bit of kit.

l.



Re: missed keystrokes problem, back with a vengeance.

2017-02-01 Thread Luke Kenneth Casson Leighton
On Sun, Jan 29, 2017 at 6:48 PM, Gene Heskett  wrote:

> On Sunday 29 January 2017 12:40:21 Alan Corey wrote:

> Alan, ALL of the motor drivers are switchmode, with the current
> regulation running at or above 20 KHz. And these noise spikes are
> ringing at nominally 100 MHz. I have managed to get the xy motor noise
> under control by takeing out the switchmode psu's, and putting in
> tordoid transformers, bridge rectifiers, and the biggest electrolytic I
> had in the drawers that had sufficient withstand voltage.  Its a star
> ground system, and the output cables to the motors are shielded, and the
> shield grounded as it goes by this single bolt ground on its way out of
> the box. The shielding extends both ways from that point but is not
> connected either at the motor, or at the driver, just at the bolt.  This
> is std in such noisy machinery.

 i'm going to be looking at setting up an open hardware lathe and cnc
machine at some point, so i'm really delighted to see what you're
doing... i'm just really very confused as to why you're using such a
low-cost board from a *known* unethical fabless semiconductor company
to control such really very expensive equipment.  just sayin.

 anyway: when working for Path Intelligence we learned the hard way
about R.F. and EMI interference, so we had to nickel-copper paint the
entire interior of the IP66 box and replace its rubber gasket in the
lid with a conducting one.

 we also got a 2nd-hand spectrum analyser, our resident expert made a
probe out of a single 20mm loop of copper wire exactly like you see in
the fairground attractions in days of yore, and we could confirm
immediately the effectiveness of anything that we tried to do.  which
was really *really* effective in the case of the nickel-copper spray.

 i can therefore strongly recommend trying the same thing... but
honestly, the cost of all this equipment should be far in excess of
just replacing the entire board, including the processor that's made
by the unethical company known as broadcom, so i have to ask,
particularly given that it's so problematic, if you'd given that some
thought as a first priority rather than a last?

l.



Re: d-i on Firefly-rk3288

2016-12-12 Thread Luke Kenneth Casson Leighton
On 12/12/16, Diego Roversi <die...@tiscali.it> wrote:
> On Mon, 12 Dec 2016 05:35:01 +0000
> Luke Kenneth Casson Leighton <l...@lkcl.net> wrote:
>
>> add console=ttyS2 to the kernel parameters, also earlyprintk is really
>> helpful (but you have to have the right options compiled in the kernel
>> to use it).
>>
>
> Ok, I retried with this, and now the serial console works (thanks):

 great!  ok so now you have a feedback loop to monitor issues until success.

> Except the ethernet doesn't works. There are errors in dmesg:

 ok.  right.  so the next questions are: how flexible are you prepared
to be to get this working, and do you *absolutely* need to use
debian-installer to get this up-and-running?  i.e. do you have some
hard requirement that *forces* you to use debian-installer or did you
choose it because you'd heard it was the "normal" way to install
debian?

 the reason i ask is because the last time i actually used
debian-installer on arm hardware was way back in 2010, when frans pop
very kindly built a custom (armel) d-i for the gpl-violating CT-PC89e
which had an S3C6410.  i loved the fact that it could be loaded into
memory such that you could install whatever you wanted on whatever
hardware was available, and loved the minimalism... *but*... it's so
complex to set up that i've never been able to successfully build one
for any of the hardware i've been working with.

 instead, i've resorted - reluctantly - to using either debootstrap or
qemu-arm to carry out the root filesystem preparation... then copied
that over.
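
for reference, the debootstrap route is the standard two-stage foreign
bootstrap - a sketch only, the suite, target directory and mirror are
just examples, and it needs qemu-user-static (with its binfmt hooks
registered) on the build host:

# first stage, run on the build host as root:
debootstrap --arch=armhf --foreign stretch /srv/armhf-rootfs http://deb.debian.org/debian

# copy in the qemu user-mode binary so the chroot can execute arm
# binaries, then finish the bootstrap from "inside" the new rootfs:
cp /usr/bin/qemu-arm-static /srv/armhf-rootfs/usr/bin/
chroot /srv/armhf-rootfs /debootstrap/debootstrap --second-stage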

 in doing so, i've *always* dealt exclusively with initrd-less
*custom* kernels, dedicated specifically for the target hardware
(including modules which again are copied over manually).

 what i'm hoping to do in the future now that the rk3288 is actually a
decent system is try native compiles, so there stands a chance of
actually compiling up debian-installer native... but that's a looong
way off for me, yet.

 anyway, so we have two possible hints above of paths that you could
choose, here (a third being to download some random rootfs off the
internet that someone else has arbitrarily made decisions on, during
the install.. which is why i really really prefer
debian-installer)

 ... OR 

 you could look for a debian-testing "weekly build" version of
debian-installer (which should have a more recent kernel)

 ... OR

 you could try unpacking the debian-installer initrd, compiling your
own kernel, putting in the replacement modules by hand and repacking
it, but FOR GOD's SAKE watch out for the fact that when using cpio you
ABSOLUTELY MUST specify the target directory properly.  cpio by
default will unpack with ABSOLUTE paths and that means that you
will end up fucking your x86 root filesystem by overwriting critical
system files with the contents of the initrd.  done it once... won't
do it again, ever... managed to recover it but it was a bit
hair-raising... (a minimal sketch of a safer way to do the unpack and
repack is below, after this list of options.)

 ... OR

 you could ask around to see if someone else has a working (older or
newer) debian-installer from debian/testing or sid that is known to
work and they can provide a copy online for you.

 lots of options, if you're prepared to be flexible.
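
for the initrd-repacking option above, a minimal sketch of doing the
cpio steps inside a scratch directory, so that nothing can land on the
host filesystem (the paths are examples, and d-i initrds are typically
gzip-compressed cpio archives):

mkdir /tmp/initrd-work
cd /tmp/initrd-work
zcat /path/to/initrd.gz | cpio -idmv --no-absolute-filenames

# ...swap in the custom kernel's modules under lib/modules/, then
# repack from *inside* the scratch directory:
find . | cpio -o -H newc | gzip -9 > /tmp/new-initrd.gz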

l.



Re: d-i on Firefly-rk3288

2016-12-11 Thread Luke Kenneth Casson Leighton
... you're doing ok, btw - progressing really quickly.  took me
several days to get this right and i had to bother a lot of people,
including valgrind, stdint, mmind0 and nvm on #linux-rockchip :)

l.



Re: d-i on Firefly-rk3288

2016-12-11 Thread Luke Kenneth Casson Leighton
add console=ttyS2 to the kernel parameters, also earlyprintk is really
helpful (but you have to have the right options compiled in the kernel
to use it).

you could really do with adding the options to boot.cmd then running
the (well-known?) mkimage command that turns them into boot.scr... ok
adapt these:

boot.cmd

setenv console 'ttyS2,115200n8'
setenv root '/dev/mmcblk2p1'
setenv panicarg 'panic=10'
setenv extra 'rootfstype=ext4 rootwait rw earlyprintk=serial,ttyS0'
setenv loglevel '8'
setenv setargs 'setenv bootargs console=${console} root=${root}
loglevel=${loglevel} ${panicarg} ${extra}'
setenv fdtfile 'rk3288-firefly.dtb'
setenv kernel 'zImage'
setenv mmc_boot 'ext2load mmc 0:1 ${kernel_addr_r} ${kernel}; ext2load
mmc 0:1 ${fdt_addr_r} ${fdtfile}; bootz ${kernel_addr_r} -
${fdt_addr_r}'
run setargs mmc_boot

mkbootimg.sh (so you don't have to keep typing it... or forget it... i
put this actually on the sdcard)

mkimage -A arm -O u-boot -T script -C none -n "boot" -d boot.cmd boot.scr

l.



Re: d-i on Firefly-rk3288

2016-12-11 Thread Luke Kenneth Casson Leighton
On 12/10/16, Diego Roversi  wrote:
> On Fri, 09 Dec 2016 23:08:13 +0200
> Vagrant Cascadian  wrote:
>
>> > U-Boot 2014.10-RK3288-02 (Nov 26 2014 - 09:28:44)
>>
>> This u-boot version is not coming from the SD card; probably from the
>> on-board eMMC. You may need to zero out of first few MB on the
>> eMMC. This will obviously render the OS on the eMMC non-functional.
>
> I should write on /dev/mmclblk0 ? Or should I write on a specific partition
> of the eMMC (/dev/mmcblk0p?) ?

 a boot rom is too expensive to develop to have it be capable of
understanding partitions for these kinds of low-cost processors.  so
it's been designed to load from SECTORS *not* from partions.
therefore, the SPL loader is in the first few sectors (sector 8 or
something), *NOT* on a partition.

 also when you investigate this further you'll find that the kernel
you're running has been set up to partition the drive using devicetree
specifications: certain areas of the drive are RESERVED... and you
have absolutely no idea what those are unless you happen to have read
the devicetree spec (and know where to get it *sigh*...).  you
need to BYPASS all of that by referencing /dev/mmcblk0.

 fairly straightfoward and logical when you think it through.


 make sure you wipe out at least 150mb (so as to trash *all* the
partitions) - i had some segfaults occur with the default 3.10 kernels
by not erasing enough data.  i only erased 10mb and it left
half-set-up partitions on the drive.  wark-wark...
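
a sketch of the wipe itself, run from a system booted some other way -
the device name is an example, triple-check it with lsblk first because
this is destructive:

dd if=/dev/zero of=/dev/mmcblk0 bs=1M count=150
sync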

 l.



Re: d-i on Firefly-rk3288

2016-12-10 Thread Luke Kenneth Casson Leighton
On 12/10/16, Diego Roversi <die...@tiscali.it> wrote:
> On Sat, 10 Dec 2016 04:42:19 +0000
> Luke Kenneth Casson Leighton <l...@lkcl.net> wrote:
>
>
>>  yep - this is the recommended method [for now] because the default
>> hardware boot order is something like eMMC microsd USB.  the technical
>> reference manual is available, some references here
>> http://rhombus-tech.net/rock_chips/rk3288/
>
> Thanks for the links.

 no problem

>> recovery OS, yes?  you'd expect the u-boot SPL loader to help you out,
>> there, yes?  by always looking on the external microsd first, and THEN
>> looking on the eMMC, yes?
>
> Uhm, loading spl and u-boot from the same device it may be a sensible
> advice, because not always spl and uboot code from different version of
> u-boot are compatible.

 wtf?  a sub-16k loader basically a few hundred lines of code in total
and they couldn't be bothered to make sure it has interoperable
chainloading??  there'd better be a *really* good reason for that.


> And with rkflashkit you could always reflash spl and
> u-boot on emmc, at least I hope so, because I've not really tried yet.

 took a look at it, went "GUI? f*** that".  saw that various people
have been using it as a command-line tool, investigated further and
still didn't like it.  i'm used to the USB-FEL of the A20, where i can
upload the SPL, execute it, upload u-boot, kernel, initramfs and dtb
direct into memory, then execute u-boot with some default
parameters... takes a while but it works, and *all automated* entirely
over the USB interface... with *no* proprietary software or libraries.
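
roughly what that looks like with the sunxi-fel tool - a sketch only,
the filenames and load addresses are examples and depend on the board
and the u-boot build:

# push SPL + u-boot into RAM over USB and run it:
sunxi-fel -v uboot u-boot-sunxi-with-spl.bin

# extra "write <addr> <file>" pairs on the same command pre-load a
# kernel, dtb etc. into RAM before u-boot starts, e.g.:
sunxi-fel -v uboot u-boot-sunxi-with-spl.bin \
  write 0x42000000 zImage \
  write 0x43000000 board.dtb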

 i can't exactly recall if this is correct but i got the impression
that rkflashkit required proprietary drivers, or required a
proprietary program to be on the eMMC in order for it to "talk" to
rkflashkit... there was something weird and i just couldn't be
bothered to investigate.

 in the u-boot docs on the rk3288 which were written by a chromium
developer from google he *nearly* got USB-MASKROM bootloading up and
running.  i suspect he ran out of time, but actually managed to do the
job: i suspect that it would be possible to compile up what he did,
enable the "load from external sd" as a compile-time option and i
would expect it to actually work.  has to be a very *very* basic mmc
loader though.

> And
> after loading spl+u-boot, from u-boot you can choose to boot from sd or
> emmc.

 yeah... as long as u-boot on the eMMC is never corrupted / default
parameters partition never corrupted / etc. / etc. / etc.

> Documentation on rk3288 devices are quite sparse, at least compared to
> allwinner devices. So I think, I'll write a bit on debian wiki after some
> experiment.

 what i've found is that the info *is* actually out there, but the
signal-to-noise ratio where a ton of people have written "how i are
installing mi favurit OS on the Acur C201" isn't really helping.

 also to watch out for is the fact that because google said that UEFI
support is important, a lot of the documentation is "HOWTO boot from
UEFI".  you do NOT need to format the eMMC or sd card as UEFI.  that
is a SOFTWARE compile option that google put BY DEFAULT into their
u-boot and linux kernel releases on the chromium web site.

 two people (including myself) have managed to use mmc boot directly
from legacy-formatted partitions: the other guy (nvm i think on
#linux-rockchip) used fat32, i used ext2 so our arrangements are
slightly different, but are basically adaptations of the exact same
mmc_boot commands commonly used for the A20 and placed into uEnv.txt.

 i do have to document what i did to get up-and-running, too

>> in this way you will be able to do test out future upgrades to u-boot
>> by putting them onto the external microsd card, without having to do a
>> one-off potentially-destructive "i hope like hell this is going to
>> work first time" overwrite of u-boot, because the eMMC SPL-u-boot
>> loader will be configured to help you.  you'll also be able to recover
>> the system should the eMMC u-boot ever become corrupted.
>
> That's quite a problem, but a this point you should reflash the uboot with a
> usb cable. Because even spl can became corrupted, and you need to cover also
> this case (imho).

 the SPL is far, far smaller (and is on a separate - single - NAND
block) so is far less likely to get corrupted.

 reflashing with a USB cable at this point requires that you short out
*two* GPIO pins to GND (one is EMMC_D1 and the other EMMC_CLK i
think).  that's the only way to recover the system if the eMMC becomes
corrupted and u-boot is *NOT* configured to look on external sd card.

 it's not enough to *hope* that the system will drop into MASKROM (USB
loader) mode.

>>
>> the firefly's a really nice board, btw.  did you get one with 4GB RAM?

Re: d-i on Firefly-rk3288

2016-12-09 Thread Luke Kenneth Casson Leighton
On 12/9/16, Vagrant Cascadian  wrote:
> On 2016-12-09, Diego Roversi wrote:
>>   I'm trying to install debian stretch on a firefly-rk3288, using debian
>> installer. I downloaded the firmware from:
>>
>>
>> http://ftp.nl.debian.org/debian/dists/testing/main/installer-armhf/current/images/netboot/SD-card-images/
>>
>>   Then I wrote the installer on a sd:
>>
>>   zcat firmware.Firefly-RK3288.img.gz partition.img.gz | pv > /dev/sdc
>>
>>   and tried to boot the firefly from sd. But I'm stuck with this errors:
>>
>> U-Boot 2014.10-RK3288-02 (Nov 26 2014 - 09:28:44)
>
> This u-boot version is not coming from the SD card; probably from the
> on-board eMMC. You may need to zero out of first few MB on the
> eMMC. This will obviously render the OS on the eMMC non-functional.

 yep - this is the recommended method [for now] because the default
hardware boot order is something like eMMC microsd USB.  the technical
reference manual is available, some references here
http://rhombus-tech.net/rock_chips/rk3288/

> Not sure if there's a better way to force it to boot from eMMC. I don't
> recall if mainline u-boot yet supports eMMC on the firefly boards;

 not yet, but there is a patch.  *but*, both the u-boot maintainer for the
rk3288 and the rockchip developer for the rk3288 are being a bit...
silly.  you can see the [straightforward] proposed patch, which i do NOT
recommend you apply as-is, here:
https://patchwork.ozlabs.org/patch/657573/

 what this patch does is basically turn your firefly into a
deliberately-intentionally-brickable device.  let's say that you
successfully installed that patched u-boot and OS.  now let's say that
somewhere down the line u-boot becomes corrupted on the eMMC.  what
you would *want* to do is to put in an external microsd with a
recovery OS, yes?  you'd expect the u-boot SPL loader to help you out,
there, yes?  by always looking on the external microsd first, and THEN
looking on the eMMC, yes?

 both sjg's suggestion "always boot from the device on which the SPL
loader is present" *and* jacob's patch result in the device basically
NEVER looking on the external microsd unless u-boot *is not actually
present*.  both suggestions will even try to load a corrupted u-boot
from the eMMC.

 so.

 to correct that, diego, grab that patch then modify it to *reverse*
BOOT_DEVICE_MMC2 and BOOT_DEVICE_MMC1:

+   spl_boot_list[0] = BOOT_DEVICE_MMC1;
+   spl_boot_list[1] = BOOT_DEVICE_MMC2;

in this way you will be able to do test out future upgrades to u-boot
by putting them onto the external microsd card, without having to do a
one-off potentially-destructive "i hope like hell this is going to
work first time" overwrite of u-boot, because the eMMC SPL-u-boot
loader will be configured to help you.  you'll also be able to recover
the system should the eMMC u-boot ever become corrupted.

the firefly's a really nice board, btw.  did you get one with 4GB RAM?

l.



Re: Broadcom BCM2709, ARMv8, and missing CPU features

2016-08-06 Thread Luke Kenneth Casson Leighton
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


On Sat, Aug 6, 2016 at 8:15 PM, Stefan Monnier  wrote:
>>  the only big advantage of dtb files (binary compiled) is *IF* the
>>  decision is made to respect dtb files and treat them as inviolate
>>  and supported forever without needing recompiles, you stand a
>>  chance of being able to upgrade linux kernels *without* replacing
>>  the dtb file.
>
> That might be true when compared to some potential replacement of DTBs,
> but when compared to what we had before DTBs, then the benefit is much
> more clear: a single linux-image-armhf package which works for "all"
> machines.  Personally I don't mind changing the DTB every time I change
> the kernel.  Hell, that could/should be integrated with the process
> which refreshes the initrd file anyway.

 ... are you _sure_ it's clear? :)

 the reason i ask that is, i'm not seeing any real difference: you
still have to download the linux kernel source (to submit dtsi
patches), the linux git repo is still the central location for dtsi
management... unless you're happy to set up an alternative parallel
repository (and compile infrastructure) for dtsi management...  thus
you still have to download the full git repo, you still have to
compile stuff *from* that same git repo where's the actual benefit
to having moved to dtsi, in terms of "work needed to maintain it"?

i appreciate you don't *mind* changing the DTB file each time you
change the kernel, but that defeats one of the very purposes *of* the
DTB file.

 also, i don't know if you've looked in arch/arm/boot/dts but it's
already alarmingly full.   i appreciate that there's some includes
(dtsi) but realistically over time the sharing process is going to
begin to look like the selinux m4 macro includes or the openembedded
infrastructure: an unintelligeable and unmaintainable dog's dinner
that only a handful of people in the world can understand.

 anyway to get back to the original topic, there's very little that
can actually be shared - even with devicetree - between different
devices.  it's the "N product design types" times "M processors"
thing.  which is why i'm designing a hardware standard that's similar
to how things are in the x86 world, so that we can get back to "N PLUS
M" at the linux kernel level.

l.



Re: Broadcom BCM2709, ARMv8, and missing CPU features

2016-08-06 Thread Luke Kenneth Casson Leighton
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


On Sat, Aug 6, 2016 at 2:57 PM, Stefan Monnier  wrote:


> Note also that you will sometimes *lose* performance by going to 64bit
> because the pointers use up twice as much space, so if your program
> needs to store many pointers, it will use up more cache space
> and memory bandwidth, which will tend to slow it down.

 did i hear right that there's also a core design difference between
the A7 and the A53 which results in a performance/watt loss of around
15%?  so you're actually *worse off* going to 64-bit at the moment, if
power (battery life) really matters.  i think it was on anandtech or
something.
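
to make stefan's quoted point about pointer size concrete, here is a rough,
illustrative example in C: a pointer-heavy structure roughly doubles in size
when rebuilt for 64-bit, and that difference comes straight out of your cache
and memory bandwidth.  the figures are illustrative - exact sizes depend on
padding and the ABI:

    #include <stdio.h>

    /* a typical linked-list node: two links plus a small payload */
    struct node {
            struct node *next;
            struct node *prev;
            int key;
    };

    int main(void)
    {
            /* prints 12 on a 32-bit arm build, 24 on arm64 (8+8+4, padded) */
            printf("sizeof(struct node) = %zu\n", sizeof(struct node));
            return 0;
    }

so if a workload is mostly chasing pointers, aarch64 does not automatically
buy back what the fatter pointers cost.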

l.



Re: Broadcom BCM2709, ARMv8, and missing CPU features

2016-08-06 Thread Luke Kenneth Casson Leighton
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


On Thu, Jul 28, 2016 at 5:35 PM, Gunnar Wolf  wrote:

> Keep in mind it's not different Debian images we are talking about —
> "real" Debian cannot be booted on Raspberry hardware. I run a Debian
> userland on top of their provided kernel (with the mystery blobs to
> control its hardware), started by their mystery bootloader. And yes,
> for us people coming from the x86 world, we expect similar devices to
> "just work", but in ARM it *is* really a different way of doing things
> per each kind of board.

 i did find it very funny to learn that Linus did not understand why there
 were so many ARM developers at the Cambridge Linux Conference
 back in... when was it... 2007?  it coincided with UKUUG at the time.
 he's famously on record as saying "why are there so many of you?
 go away, choose one representative and come back with just one
 person!"

 likewise, i _am_ on record as pointing out a long long time ago that
 device-tree will not stop the proliferation or complexity of developing
 device drivers for ARM: it merely *moves* the proliferation and
 complexity... into dtsi files

 the only big advantage of dtb files (binary compiled) is *IF* the
 decision is made to respect dtb files and treat them as inviolate
 and supported forever without needing recompiles, you stand a
 chance of being able to upgrade linux kernels *without* replacing
 the dtb file.  however i seriously doubt that the stringent testing
 needed to make that work will ever be put in place.  oh well.

l.



EOMA68-A20 Crowd-funded Laptop and Micro-Desktop

2016-07-16 Thread Luke Kenneth Casson Leighton
https://www.crowdsupply.com/eoma68/micro-desktop

i've been working on a strategy to make it possible for people to have
more control over the hardware that they own, and for it to cost less
money for them to do so, long-term.  i've had to become an open
hardware developer in order to do that.

i believed for a long long time that leaving hardware design to the
mass-volume manufacturers would result in us having affordable
hardware that we could own.  they would make stuff; we could port
OSes to it, everybody wins.  starting in 2003 and working for almost
2 years continuously on reverse-engineering i got a bit of a rude but
early wake-up call where i learned just how naive that expectation
really is [1].  example: it took over THREE YEARS for cr2 to
reverse-engineer the drivers for the HTC Universal clamshell 3G
micro-laptop.

for everyone else, that message came through loud and clear with
mjg59's android tablet GPL violations list - which he stopped maintaining
because it was pointless to continue [2][3].  it was a bit of a slap in
the face - a wake-up call which not only debian but every other ARM
free software distribution is painfully reminded of on a regular basis
when someone new contacts them and asks:

  "I have hardware {X} bought off of Amazon / Aliexpress, can i run
   Linux on it"

and pretty much every time someone has to spend their time patiently
explaining that no, it's not possible, due to the extraordinary
amount of reverse-engineering that's required due to rampant and
endemic GPL violations, and even if they could, it's *already too late*
due to "Single-Board Computer Supernova Lifecycle" syndrome.

shockingly even intel do not really "Get It".  not only do they have
the arbitrary remote code execution backdoor co-processor [4]
in every x86_64 processor since 2009, but in speaking to a member
of the intel open source education team at fosdem2016 i learned that
intel considers something as trivial as DDR3 RAM initialisation
sequences to be "commercial advantage" and this they use as
justification for not releasing the 200 lines of code needed... not
that it would help very much because of the RSA secret key needed
to sign early boot code.

we also have the issue of proliferation of linux kernel device drivers:
put simply if there are M processors and N "types of products",
we can reasonably and rationally expect the number of submissions
of device drivers and device tree files for upstream inclusion to be of the
order of "M *TIMES* N".  with "M" just in the ARM world alone being
enormous (over 650 licensees as of 10 years ago) and "N" likewise
being huge, this places a huge burden on the linux kernel developers,
and an additional burden downstream on you, the OS maintainers, as
well.

... would it not be better to have hardware that was designed around
"M plus N"?  this would stop the endemic proliferation of device drivers,
would it not?

so this is the primary driving factor behind EOMA68 - to reduce the
burden of work required to be carried out by software libre developers,
as well as reduce the long-term cost of ownership of hardware for
everyone.

so after five years i can finally say that the EOMA68 standard is
ready, and with the last (and final) revision adding in USB 3.1 it
can be declared to have at least a decade of useful life ahead of
it.  there are NO "options".  there will be NO further changes made
(which would result in chaos). a modern Computer Card bought
10 years from now will still work with a Housing that's bought today,
and vice-versa.

if this approach is something that you feel is worthwhile supporting,
the crowd funding campaign runs for another 40 days.  crowd funding
campaigns are about supporting "ideas" and being rewarded with
a gift for doing so.  they're not about "buying a boxed product under
contract of sale".

with your support it will be possible to bring other designs and other
processors to you later.  picking a processor has its own interesting
challenges [5] if you have ethical business considerations to take
into account, such as "don't use anything that's GPL violating or
otherwise illegal". [why *is* it that people think it's okay to sell GPL
violating products, even amongst the open hardware community?
ethical ends can never be justified by unethical means].

lastly, i'm... reluctant to bring this up, but i have to.  *deep breath*.
i'm aware that a lot of people in the debian world don't like me. many
of you *genuinely* believe that i am out to control you, to tell you
what to do, to "order you about".  which is nonsense, but, more
importantly, rationally-speaking, completely impossible given the
nature of free software.  we can therefore conclude, rationally, that
the conclusion reached by many of you [that i am "ordering you
about"] simply cannot be true.

after thinking about this for a long, long time, my feeling is that this
startlingly and completely overwhelmingly WRONG impression
stems from my reverse-engineering background, which, 

Re: Bug#399608: fixed in sysvinit 2.88dsf-59.1

2015-05-18 Thread Luke Kenneth Casson Leighton
On Sun, May 17, 2015 at 3:48 PM, Andreas Henriksson andr...@fatal.se wrote:
 Hello Adrian!

 Thanks for raising awareness about this issue. If there's anything
 I can do to help please tell me. That the new util-linux version hasn't
 been built yet sounds like it can't be avoided as it was just uploaded
 and unfortunately the sysvinit and util-linux update is a lockstep
 upgrade where both change at the same time as things are moved between
 the packages. There's no intermediate step possible, because the
 moved binaries always needs to be available at all times and thus
 have tight dependencies in both directions. Not sure how dependencies
 affect the build of these packages though. They should both be
 able to build on systems with older versions of the packages installed
 and build independently.

 that sounds like the kind of thing that would cause nightmare
circular build dependencies for anyone porting to a new architecture
[which i'm considering doing: mvp from icubecorp].

 would that be correct - that if there *is* no older version it
would now be impossible to build both [or either] of the packages - or
am i mistaken?

 if it is correct, do you happen to know if they would cross-build, at all?

 l.





Re: DebCamp15

2015-04-28 Thread Luke Kenneth Casson Leighton
On Tue, Apr 28, 2015 at 12:39 PM, Neil Williams codeh...@debian.org wrote:
 On Tue, 28 Apr 2015 11:30:33 +0100
 Luke Kenneth Casson Leighton l...@lkcl.net wrote:

 On Tue, Apr 28, 2015 at 9:07 AM, Neil Williams codeh...@debian.org
 wrote:

  Chroot tests suffer from limitations with daemons (changed port
  numbers if the same daemon is running outside etc.) and are subject
  to whatever the running kernel can offer.

  has anyone considered modifying any automated chroot-based systems to
 use lxc?

 There hasn't been any work on lxc in LAVA at least. It's been mentioned
 and some work has been planned but it's not scoped yet and there is
 insufficient reason / insufficient resources to push for it.

 ack.

  using lxc would solve the limitation that you describe, neil, about
 daemons having to change port numbers.  the only thing you can't do is
 test kernel-related stuff (udev-related) with lxc.

 Kernel testing is the main objective of LAVA testing currently, so when
 chroot isn't enough, a complete test as a virtual machine or a new
 deployment is the main go-to.

 makes sense.  esp. as a full VM - if you have to have it for one
reason or another - makes the use of lxc unnecessary / redundant, so i
see the sense in not prioritising it.

 ok i go back to sleep now :)

l.





Re: DebCamp15

2015-04-28 Thread Luke Kenneth Casson Leighton
On Tue, Apr 28, 2015 at 9:07 AM, Neil Williams codeh...@debian.org wrote:

 Chroot tests suffer from limitations with daemons (changed port
 numbers if the same daemon is running outside etc.) and are subject to
 whatever the running kernel can offer.

 has anyone considered modifying any automated chroot-based systems to
use lxc?  three years ago at phil hands' recommendation i converted a
XEN server with five guests over to lxc and it has worked out
extremely well.

 with a bit of arseing about i was even able to move the LVM partitions
formerly used by the XEN guests over to lxc, meaning that i didn't
have to mess with the filesystems (copy them out or anything).

 using lxc would solve the limitation that you describe, neil, about
daemons having to change port numbers.  the only thing you can't do is
test kernel-related stuff (udev-related) with lxc.  don't know the
specifics but when i installed lxc (3 years ago) udev said "not gonna
start" in all the clients.  that worked for me - might not be ok for
certain scenarios that you envisage testing.
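
for anyone wanting to replicate the LVM trick: the container config can
point its rootfs straight at the old logical volume instead of at a
directory.  a minimal sketch, using the pre-2.0 lxc config syntax that was
current at the time - the volume group, guest name and bridge here are
made-up examples:

    # /var/lib/lxc/guest1/config - reuse the old xen guest's disk as-is
    lxc.utsname = guest1
    lxc.rootfs = /dev/vg0/xen_guest1_disk
    lxc.network.type = veth
    lxc.network.link = br0
    lxc.network.flags = up

lxc mounts the volume itself when the container starts, so the filesystem
never has to be copied out of LVM.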

l.





Re: Cubietech Cubieboard3 / Cubietruck support under Debian Jessie?

2014-10-28 Thread Luke Kenneth Casson Leighton
On Tue, Oct 28, 2014 at 5:51 PM, Andrew M.A. Cater
amaca...@galactic.demon.co.uk wrote:
 Before I go crazy:

 There are at least three issues with full support:

 1. Mainline kernel versus Allwinner/Sun-xi kernel.

 Debian support on mainline kernel does not support
 all Cubietruck features.

 2. Mainline U-boot vs uboot-sunxi.
 This is now pretty much sorted.

 3. Some Sunxi hardware not yet supported as not yet ported into the 
 mainstream kernel.
 Audio / some graphics features?

 Debian doesn't yet support install to NAND - the only way to do this is via 
 Sunxi Phoenix tools or thereabouts.

 not quite the only method.  there are the USB FEL tools (sunxi-tools),
you can compile a version of u-boot which creates a FEL/SPL loader,
then you can use that to boot a kernel which has NAND support.  the
last one i found which did this was cb2-3.4.

 lots of other people know a bit more about this and what the current status is.

l.





Allwinner A33 Quad Core 2GB board in development

2014-09-05 Thread Luke Kenneth Casson Leighton
i'm doing an EOMA68-A33 CPU Card which will cost $5600 to complete the
layout and 5 samples.  i can cover that cost over a couple of months,
so it is going ahead, and first (working) 5 samples should be done and
received by Dec 2014.

if there is anyone who would like to buy either one of the 5 samples
(at a premium) or would like to be part of a group-buy please do
contact me directly.

specifications are:

* Allwinner A33 Quad Core ARM Cortex A7
* 2GByte RAM on a 32-bit interface
* 18-pin RGB/TTL up to 1280x800 resolution
* on-board SMC9514 USB-Ethernet IC (10/100)
* 2x USB2 (480mb/s) on EOMA68 interface
* 1x USB-OTG (can be used to power CPU Card)
* 1x Micro-SD

note that unlike the A20 CPU Card this one does *not* have HDMI out
(it's a lower-cost SoC) and the max resolution really is limited to
1280x800 because the A33 is a tablet SoC... albeit a quad-core one.

news as it happens will be at http://rhombus-tech.net/allwinner/a33/news/

l.





Re: A10-OLinuXino-LIME board

2014-08-25 Thread Luke Kenneth Casson Leighton
On Mon, Aug 25, 2014 at 1:22 PM, Karsten Merker mer...@debian.org wrote:
 On Mon, Aug 25, 2014 at 01:46:50AM -0700, Joey Hess wrote:
 Ian Campbell wrote:
  Why hd-media? The standard netboot images work fine on sunxi AFAIK
  (testing on cubie{truck,board}).

 Board doesn't netboot by default AFAIK, so this would need a serial
 console, which needs a nonstandard cable.

 Hello,

 I have just looked at the board manual at
 https://www.olimex.com/Products/OLinuXino/A10/A10-OLinuXino-LIME/resources/A10-OLinuXino-LIME_manual.pdf
 and found the description of the serial port rather
 confusing.

 it's fairly standard nowadays to put a UART over pins that are
shared/multiplexed with other functions such as GPIO.  in order to
avoid having to do special I/O drivers on the SoC they simply put
the 0s and 1s effectively straight out of the same pins at the same
voltage and current levels as the GPIO that they multiplex/share.

 otherwise it would be necessary to have special RS232 driver logic
on-board the SoC and if you know how big I/O driver pads are compared
to actual transistors for logic or memory you start to appreciate why
this is almost never done.

 so this is why you need a converter IC.  it's actually very very
common practice, and things like the FTDI chipsets or the MAX232 will
do the job perfectly... for a given level of perfect.  the only thing
you have to watch out for is not to spike the SoC or the USB
converter, especially when powering up from a separate power supply
from the USB converter you're connecting to.

 typically FT5306s and MAX232s will easily support the full range 3.3
up to 5.0v input signal levels of SoCs so you need not be concerned
about having voltage converters: just wire them directly up...
preferably just *after* powering up.

 there are people with a little more experience at this sort of thing,
henrik (u-boot maintainer) is one such person i know who can give you
a much better idea of the full technical details and the gotchas.

 but, basically, grab yourself something like an FTDI USB dongle off
of farnell and you should be set.  something like this:
http://uk.farnell.com/ftdi/um232r/dev-module-usb-to-serial-uart/dp/1146036?Ntt=FT232+USB+UART
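
once it's wired up the dongle just shows up as /dev/ttyUSB0 and any terminal
program (minicom, picocom, screen) will do.  if you'd rather drive it
programmatically, a rough sketch of opening the port raw at 115200 8N1 with
plain POSIX termios - 115200 being the usual u-boot console speed on these
boards, but check yours - looks something like this:

    #include <fcntl.h>
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
            /* the usb-uart converter normally appears as /dev/ttyUSB0 */
            int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
            if (fd < 0) { perror("open"); return 1; }

            struct termios tio;
            if (tcgetattr(fd, &tio) < 0) { perror("tcgetattr"); return 1; }

            cfmakeraw(&tio);            /* raw: no echo, no line editing */
            cfsetispeed(&tio, B115200); /* 115200 baud in both directions */
            cfsetospeed(&tio, B115200);
            tio.c_cflag |= CLOCAL | CREAD;
            tio.c_cflag &= ~CSTOPB;     /* 8N1 (cfmakeraw already set 8 data bits) */
            if (tcsetattr(fd, TCSANOW, &tio) < 0) { perror("tcsetattr"); return 1; }

            /* dump whatever the board prints on its serial console */
            char buf[256];
            ssize_t n;
            while ((n = read(fd, buf, sizeof(buf))) > 0)
                    write(STDOUT_FILENO, buf, n);
            return 0;
    }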

l.





Re: Support for sunxi-based ARM systems in d-i

2014-06-11 Thread Luke Kenneth Casson Leighton
On Tue, Jun 10, 2014 at 12:20 PM, Jerry Stuckle jstuc...@attglobal.net wrote:
 On 6/10/2014 3:42 AM, Ian Campbell wrote:
 On Tue, 2014-06-10 at 08:27 +0100, Luke Kenneth Casson Leighton wrote:
 On Mon, May 19, 2014 at 7:53 AM, Ian Campbell i...@hellion.org.uk wrote:
 On Sun, 2014-05-18 at 14:54 -0700, Vagrant Cascadian wrote:
 On Sun, May 18, 2014 at 07:41:58PM +0200, Karsten Merker wrote:
 attached is a small patch against flash-kernel to add machine db
 entries for the Cubieboard 1/2 and the Mele A1000.
 ...
 I'll probably get access to a Cubieboard2 sometime next week and
 will test an installation on it, but for the Cubieboard 1 and the
 Mele somebody else would have to give them a try.

 Thanks for your work. I could test on a Cubieboard 1...

 I've not written my notes into prose for the wiki yet but, perhaps they
 are sufficient for you and others to get going with though.

  great!  this is *exactly* what i suggested be written up (note-form
 is perfect), and exactly why i suggested it be written up as well [to
 help others].

  now it will serve as a reminder for you as well, ian, in a few months
 when this has dropped out of electrical memory in your brain.

 Please can you stop being such a patronising/arrogant know it all. I am
 well aware of the benefits of documentation. I also resent your previous
 implication that I am (and others are) not capable of doing what is best
 for Debian without being told what that means by you.

  the only other thing that's missing is a [brief] list of repos to
 download and compile up.  yes the patches have been submitted to the
 list but it's unclear as to where and what they should be applied to.
 a small group of people with massive amounts of experience will be
 able to work that out but everyone else will not.

 This stuff is all now available in the standard Debian testing release,
 including the daily installer builds, there is nothing secretive of
 expert going on here. This is the right way to go about making things
 available in Debian.

 Ian.



 Ian,

 I don't know what the history between you and Luke is.

 ian and ben have for some reason decided to take an adversarial
approach that (presumably deliberately) assumes absolute worst case
scenarios in any communication that i send.  this is taking up a
considerable amount of everybody's time to get to the bottom of why it
is that they wish to do this.

 But I have to agree with him.  The documentation in this case really stinks.

 even people who have significant experience with debian (phil hands
for example) have had difficulty building debian-installer.  i know
because after 4 days of not being able to work it out, i asked him for help.

l.





Re: Support for sunxi-based ARM systems in d-i

2014-06-10 Thread Luke Kenneth Casson Leighton
On Mon, May 19, 2014 at 7:53 AM, Ian Campbell i...@hellion.org.uk wrote:
 On Sun, 2014-05-18 at 14:54 -0700, Vagrant Cascadian wrote:
 On Sun, May 18, 2014 at 07:41:58PM +0200, Karsten Merker wrote:
  attached is a small patch against flash-kernel to add machine db
  entries for the Cubieboard 1/2 and the Mele A1000.
 ...
  I'll probably get access to a Cubieboard2 sometime next week and
  will test an installation on it, but for the Cubieboard 1 and the
  Mele somebody else would have to give them a try.

 Thanks for your work. I could test on a Cubieboard 1...

 I've not written my notes into prose for the wiki yet but, perhaps they
 are sufficient for you and others to get going with though.

 great!  this is *exactly* what i suggested be written up (note-form
is perfect), and exactly why i suggested it be written up as well [to
help others].

 now it will serve as a reminder for you as well, ian, in a few months
when this has dropped out of electrical memory in your brain.

 the only other thing that's missing is a [brief] list of repos to
download and compile up.  yes the patches have been submitted to the
list but it's unclear as to where and what they should be applied to.
a small group of people with massive amounts of experience will be
able to work that out but everyone else will not.

so, ben (re-cc'd) - is it clear now that this is reasonable to ask
about and relevant to ask for on the debian-arm mailing list?  apart
from having to make it clear by way of comparison was there _any_
mention of quotes my quotes project in the above?  if you do not
respond i will assume that you are happy and that you will stop making
adversarial assumptions.

l.





Re: Support for sunxi-based ARM systems in d-i

2014-06-10 Thread Luke Kenneth Casson Leighton
On Thu, May 8, 2014 at 10:09 PM, Ian Campbell i...@hellion.org.uk wrote:
 On Thu, 2014-05-08 at 22:05 +0100, Luke Kenneth Casson Leighton wrote:
 but for the BENEFIT OF THE DEBIAN PROJECT.  debian-installer is
 incredibly hard to get properly compiled up: instructions and context
 entirely missing.

 There are daily builds of the installer, which will pick this up once it
 is committed and uploaded, which will be sooner rather than later. I'm
 not interested in documenting how to create some custom version of the
 installer for this platform, there is simply no need for it.

 ian: you misunderstand (again).  by comparison of what you *believe*
i have said and what i have actually said, we may eventually make this
clear.

 let me be clear: i am not interested in "how to create some custom
version of the installer for this platform".

 i am interested in how to create debian installer *AT ALL*.  several
people have tried - very experienced people - and have completely
failed after following available documentation.

 there are absolutely no available step-by-step instructions.

 there is a basic and fundamental assumption that you already know
what you are doing.

 is that clear enough now?

l.





Re: Support for sunxi-based ARM systems in d-i

2014-06-10 Thread Luke Kenneth Casson Leighton
On Fri, May 9, 2014 at 1:16 AM, Ben Hutchings b...@decadent.org.uk wrote:
 On Thu, 2014-05-08 at 22:05 +0100, Luke Kenneth Casson Leighton wrote:
 once again, respectfully may i ask, without prejudice and without
 obligation, that someone please consider documenting the steps taken
 so that others may easily replicate them, and i am going to continue
 this paragraph without a break so that i may point out that i am in no
 way requesting this FOR MYSELF, NOR FOR ANY PROJECT, but for the
 BENEFIT OF THE DEBIAN PROJECT.  debian-installer is incredibly hard to
 get properly compiled up: instructions and context entirely missing.

 Luke, with all due respect, please leave Debian alone and stick to your
 own project.

 ben: if the standard debian installer documentation on how to build
debian installer was clear enough i would happily do so.

 however until there exists clear and explicit documentation on how to
build debian installer it would be irresponsible of me as a person who
is interested in seeing debian move forward to do as you suggest.

 i trust that you can understand and respect that and also that you
will stop making assumptions.

l.





Re: arm64 update - help wanted

2014-05-17 Thread Luke Kenneth Casson Leighton
On Thu, May 15, 2014 at 2:10 AM, Wookey woo...@wookware.org wrote:
 The debian-port arm64 rebootstrap is progressing nicely, and we just
 passed 4200 source packages built, with another few hundred
 pending. There are now 2 buildds running.

 awesome

 Thus I'd love it if anyone else could help go through the failures
 pile and file bugs, or upload old existing ones, or classify them on
 the wiki. Or if they happen to be your packages then just fix them :-)

 I've put some links on the wiki page
 https://wiki.debian.org/Arm64Port#Bug_tracking

 suggestion, wookey: i'd love to help... but obviously with no
hardware that's kinda hard: is there a clear set of instructions
somewhere - a wiki page for example - on how to debootstrap an arm64
qemu so that even if it's dead slow it's still possible to help out?

 l.





Re: arm64 update - help wanted

2014-05-17 Thread Luke Kenneth Casson Leighton
  suggestion, wookey: i'd love to help... but obviously with no
 hardware that's kinda hard: is there a clear set of instructions
 somewhere - a wiki page for example - on how to debootstrap an arm64
 qemu so that even if it's dead slow it's still possible to help out?

https://wiki.debian.org/Arm64Qemu

wookey, i found this - it's not entirely clear (lots of options, not
step-by-step): what is relevant to getting up and running?  is the page
still relevant?  has aarch64 made it into the latest debian qemu package
yet?

tia,

l.





Re: Support for sunxi-based ARM systems in d-i

2014-05-08 Thread Luke Kenneth Casson Leighton
once again, respectfully may i ask, without prejudice and without
obligation, that someone please consider documenting the steps taken
so that others may easily replicate them, and i am going to continue
this paragraph without a break so that i may point out that i am in no
way requesting this FOR MYSELF, NOR FOR ANY PROJECT, but for the
BENEFIT OF THE DEBIAN PROJECT.  debian-installer is incredibly hard to
get properly compiled up: instructions and context entirely missing.

l.





Re: Support for sunxi-based ARM systems in d-i

2014-04-27 Thread Luke Kenneth Casson Leighton
On Sat, Apr 26, 2014 at 11:25 PM, Karsten Merker mer...@debian.org wrote:
 Hello,

 I am currently working on better support for sunxi-based ARM systems
 in d-i and flash-kernel.  Thanks to Ian's backport of the sunxi AHCI
 support from kernel 3.15rc1 into the Debian 3.14 kernel package (as
 of linux-image-3.14-trunk-armmp_3.14.1-1~exp2_armhf.deb, currently
 only available as source in git) it is now possible to run d-i on
 Allwinner A10/A20-based systems like the Cubie{board,board2,truck}.

 hoooray!  congratulations.  this is an awesome strategic
achievement.  it will not stop people doing stupid things like forcing
their personal preferred settings onto others when distributing
ridiculously large (4gbyte) fixed size root images but it at least
provides an alternative... one that is a sub-5mbyte alternative at
that.

 do you have a pre-built option or some instructions for people to follow?

l.





Re: Support for sunxi-based ARM systems in d-i

2014-04-27 Thread Luke Kenneth Casson Leighton
On Sun, Apr 27, 2014 at 8:14 PM, Karsten Merker mer...@debian.org wrote:
 On Sun, Apr 27, 2014 at 07:35:35AM +0100, Ian Campbell wrote:
 On Sat, 2014-04-26 at 23:25 +0200, Karsten Merker wrote:

 No uImage is required for sunxi,

 no uImage is required for sunxi but a uImage is required for *debian
installer*.  that is after all the whole point of the exercise.  ok,
sure you can put debian-installer onto an SD card directly but then
you will not be able to install the OS onto the SD card because debian
installer will be using it.

 karsten it would seem that you are being mis-advised by people not
familiar with debian installer, who think that booting off of
pre-prepared images is "normal".

 although it would in some limited cases be useful to boot debian
installer without a uImage, it would not be the most helpful thing to
the most people.

 may i respectfully suggest that you speak to henrik nordstrom (hno) -
you can find him on #linux-sunxi - and get a uImage up and running
that may be loaded over the USB-FEL.  henrik and others such as myself
have got USB-FEL up and running for a number of A10 and A20 devices:
the advantage of boot-installing a device from USB-FEL is that it is
simple to do: just plug in to a Micro-USB, load the 1st stage
bootloader directly into RAM, load the kernel directly into RAM, load
the debian installer uImage directly into RAM, then hit the execute
command.  instantly you have all the hardware available on which to
install a *clean* OS.

l.





Re: Support for sunxi-based ARM systems in d-i

2014-04-27 Thread Luke Kenneth Casson Leighton
On Sun, Apr 27, 2014 at 8:41 PM, Karsten Merker mer...@debian.org wrote:
 On Sun, Apr 27, 2014 at 03:52:36PM +0200, Luke Kenneth Casson Leighton wrote:
 On Sat, Apr 26, 2014 at 11:25 PM, Karsten Merker mer...@debian.org wrote:
  Hello,
 
  I am currently working on better support for sunxi-based ARM systems
  in d-i and flash-kernel.  Thanks to Ian's backport of the sunxi AHCI
  support from kernel 3.15rc1 into the Debian 3.14 kernel package (as
  of linux-image-3.14-trunk-armmp_3.14.1-1~exp2_armhf.deb, currently
  only available as source in git) it is now possible to run d-i on
  Allwinner A10/A20-based systems like the Cubie{board,board2,truck}.

 [snip]
  do you have a pre-built option or some instructions for people to follow?

 Sorry, no. This was an experimental build with all components
 locally built from development versions in various git/svn
 repositories.

 ok - then can you please document that somewhere, so that other
people can replicate it and help you out?

 As mentioned in my original mail there is also still the issue of
 the MMC driver not yet being available in mainline,

 that's ok.  forget mainline.  if you document what you've done then
others may replicate it on the more stable kernels.

l.





Re: Support for sunxi-based ARM systems in d-i

2014-04-27 Thread Luke Kenneth Casson Leighton
On Sun, Apr 27, 2014 at 9:10 PM, Ben Hutchings b...@decadent.org.uk wrote:
 On Sun, 2014-04-27 at 21:04 +0200, Luke Kenneth Casson Leighton wrote:
 On Sun, Apr 27, 2014 at 8:41 PM, Karsten Merker mer...@debian.org wrote:
  On Sun, Apr 27, 2014 at 03:52:36PM +0200, Luke Kenneth Casson Leighton 
  wrote:
  On Sat, Apr 26, 2014 at 11:25 PM, Karsten Merker mer...@debian.org 
  wrote:
   Hello,
  
   I am currently working on better support for sunxi-based ARM systems
   in d-i and flash-kernel.  Thanks to Ian's backport of the sunxi AHCI
   support from kernel 3.15rc1 into the Debian 3.14 kernel package (as
   of linux-image-3.14-trunk-armmp_3.14.1-1~exp2_armhf.deb, currently
   only available as source in git) it is now possible to run d-i on
   Allwinner A10/A20-based systems like the Cubie{board,board2,truck}.
 
  [snip]
   do you have a pre-built option or some instructions for people to follow?
 
  Sorry, no. This was an experimental build with all components
  locally built from development versions in various git/svn
  repositories.

  ok - then can you please document that somewhere, so that other
 people can replicate it and help you out?

  As mentioned in my original mail there is also still the issue of
  the MMC driver not yet being available in mainline,

  that's ok.  forget mainline.  if you document what you've done then
 others may replicate it on the more stable kernels.

 Luke, Debian uses upstream kernels with minimal backporting.

 ... which is not yet complete, meaning that that will reach only a
small handful of people.  if you want to reach more people, thus
increasing the probability of more people being in a position to help
use upstream kernels with minimal backporting, then bridging the gap
between the two would seem like a good idea, would you agree?

 anyway - i'm done here. you guys are doing ok.  i have had to take
other work - for the second time now - which places the project that i
started on a back-burner until we have sales or funding.

 so please, feel free to do whatever you choose.

l.





Re: Boot time speed up

2014-04-21 Thread Luke Kenneth Casson Leighton
On Mon, Apr 21, 2014 at 1:02 PM, Divya Subramanian
divyaenginee...@gmail.com wrote:
 I am unable to find a site through which I can download depinit

https://web.archive.org/web/20050206182736/http://www.nezumi.plus.com/depinit/index.html
https://web.archive.org/web/20050221090150/http://www.nezumi.plus.com/depinit/depinit-0.1.4.tar.bz2

if you're serious about considering using it (rather than the
better-maintained systemd) i will create a mirror.  depinit is
absolutely awesome though: i wish richard wasn't a recluse.  ah well.

l.





Re: Boot time speed up

2014-04-18 Thread Luke Kenneth Casson Leighton
... if you're feeling really adventurous look at depinit rather than systemd :)

i recall a few years back there was some company claiming they'd
managed a 1 second boot time (was it redhat or was it IBM?), and there
were also some embedded companies that managed under 350ms including
starting up a single-screen dedicated QT app.  this was on 720mhz TI
OMAPs so it's definitely doable.

one of the things i remember them doing was removing damn udev!  i
recall (back in only 2005) having a 90mhz Pentium-I system
which i used as a firewall.  the depth of the bash shell scripts fired
up by udev was flat-out *insane*.  the fork/process tree was in some
cases well over 30 deep.  it was only because i had such a slow system
that i was able to catch udev in the act so to speak.

i think i ended up reporting a debian bug for the pty / tty creation
at the time, because there were 256 ptys, 256 ttys, and another mad
bunch of 256 ttys somewhere else.  this resulted in 768 *separate*
instances of udev insanity at shell script depth 30 each.  it was
therefore no wonder that that poor pentium I system, with little in
the way of process context switching support that modern CPUs now
have, was flipping its nuts off and took over *twenty seconds* to
complete the udev setup phase.

now, the relevance here to ARM is that context-switching on ARM CPUs
is not as heavily hardware-optimised as it is in the high-end x86
world with hyperthreading and 4+ mbytes of 2nd level cache pushing
the number of transistors close to and in some cases above a billion.

the recommendation was therefore, if you want to keep udev, to
recompile the kernel reducing the number of MAX_TTYs.
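
for what it's worth, one concrete knob that corresponds to "reduce the
number of ttys" on later kernels is the legacy pty count in the kernel
config (the unix98 /dev/pts ptys are allocated on demand and don't need
trimming).  a rough sketch of the .config fragment:

    # legacy BSD-style ptys are created statically; the default is 256
    CONFIG_LEGACY_PTYS=y
    CONFIG_LEGACY_PTY_COUNT=16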

now, the reason i mentioned depinit was because when i explored this i
took a different approach.  basically what i did was create two
*separate* udev initialisation trigger scripts, and created separate
parallel dependencies on each.

the first udev trigger script fired off the absolute minimum necessary
stuff: only 10 ptys, /dev/sd*, /dev/hd*, that sort of thing.
following on from that it was possible to make networking, disks and
so on depend on that.

the *second* udev trigger script was the normal one that you get
every day on the majority of linux distros.  it fired eeeverything.
dependent on the completion of this script i therefore had everything
else.  cups printer service.  ssh server.  etc. etc.

it worked like a charm and i had a boot time on a 1ghz pentium-III
laptop *including* X-Server startup at something like 15 seconds.
shutdown time (thanks to depinit) was something like 3 seconds, and
much of that was the actual hardware shutting down. depinit didn't
mess about there :)

 you _should_ be able to replicate this with other parallel startup
systems if it really bothers you that udev's too slow, but the advice
to find out *where* the main time is being spent first is very very
good!

also wasn't there something recently about the 3.15 kernel having a
more parallel approach to hardware startup?  although... you're a bit
buggered there because you'd need to patch together your own kernel...

l.





Re: Official support Odroid hardware and other ARM development boards.

2014-03-01 Thread Luke Kenneth Casson Leighton
On Sat, Mar 1, 2014 at 7:01 AM, Paul Wise p...@debian.org wrote:
 On Sat, Mar 1, 2014 at 2:29 PM, Martin Guy wrote:

 earlier the same day, Broadcom announced[1] full documentation for
 the VideoCore IV graphics core, and a complete source release of the
 graphics stack under a 3-clause BSD license

 Unfortunately that graphics stack seems to include some proprietary
 code that Broadcom might not have had permission to distribute.

 https://lwn.net/Articles/588950/

 ARC.  oo that's interesting.  if that's the same ARC as now owned
by... achhh who is it... not mentor graphics, not cadence... synopsys!
 that's them.  it's a design that's a lot better than ARM, and has
specialist Video Instruction extensions amongst other things.  several
thousand instructions.  synopsys' extensions to ARC are *really* good
at this sort of thing, so it would make a lot of sense that it's been
used by broadcom for, duh, video processing.

 the significant thing is: ARC has a full gcc toolchain available.

 http://sourceforge.net/projects/gcc-arc/

 l.





Re: Official support Odroid hardware and other ARM development boards.

2014-02-28 Thread Luke Kenneth Casson Leighton
On Fri, Feb 28, 2014 at 12:57 AM, Eric Nelson
eric.nel...@boundarydevices.com wrote:
 Hi Luke,


 On 02/27/2014 03:02 PM, Luke Kenneth Casson Leighton wrote:

 On Thu, Feb 27, 2014 at 6:18 PM, Eric Nelson
 eric.nel...@boundarydevices.com wrote:

 Thanks for the feedback Luke.


   no problem sah.  btw, the sabre lite pcb cad/cam files i received
 from one of your associates were really useful.  i was able to create
 - with a lot of effort(!) an EOMA68-iMX6 design.  sadly nobody's come
 forward with the funds to see it through to production, but hey...


 :)

 At least you're not hitting us up left and right for support,
 like some of the other folks who've done this...

 *lol*  neeeh.  see, i get the whole software libre concept too.
don't burden other people!  it was fun cramming things down into 6
layers (1.2mm PCB, only 43x75mm)  *shudder*

 [this is why i designed the EOMA68 platform, btw, so that the majority
 of the cost of product development could be focussed in the base -
 the chassis, then later we could look at doing an FSF-Endorseable CPU
 Card with less features]


 Interesting. We'll have to take a look at that.

 http://rhombus-tech.net/community_ideas/ - those are base-board
concepts at varying degrees of completion.  the one that we need to
get word out about is improv:

   https://makeplaylive.com/#/open-hardware/improv

 if people buy that, it gets the ball rolling.  for that CPU Card
(A20) there are some reverse-engineering efforts going on with the VPU
there: H264 and MPEG decode are now successfully done.  and the GPU -
MALI - is also being done (limadriver.org, headed by luc).

 on the FSF-side: over the past 2+ years i've done a detailed
evaluation of dozens of SoCs: the only ones that come remotely close to
being FSF-Endorseable as well as available in the kinds of small
volumes _and_ have PCB schematics that someone with my skill-set can
start from are things like the old OMAP3525, and some of the AM
series.  the one that's used in the beaglebone for example.  except...
the damn things are missing SATA! argh...

   ubuntu also uses debian installer.  i understand that they still cut
 much of this stuff over from debian, where they haven't continued the
 silliness of an entire fork of the debian distro *sigh* :)


 Ever try telling programmers what to do?

 i regularly get accused of doing exactly that, when in full
recognition of that, the reality is that i'm doing nothing of the
sort.  i should perhaps put it in a top-posted .sig: "read my .sig! i
am not ordering you what to do!"

 (of course you have, so
 you know that from many vantage points, freedom looks like chaos).

 yeahhh.  brownian programming.  *rueful* there's something to be said
for a corporate-driven single-minded strategy...

l.





Re: Official support Odroid hardware and other ARM development boards.

2014-02-27 Thread Luke Kenneth Casson Leighton
On Thu, Feb 27, 2014 at 1:26 AM, Eric Nelson
eric.nel...@boundarydevices.com wrote:
 Hi Luke,


 On 02/26/2014 05:44 PM, Luke Kenneth Casson Leighton wrote:

 On Wed, Feb 26, 2014 at 11:46 PM, Reg Lnx regier.kun...@gmail.com wrote:

 Thank you Karsten.
 It answers a lot of questions and it makes sense. I think we can say the
 very same about the odroid, it has some non free things too.


   indeed it does.  i've been working at every opportunity possible to
 get a software-libre compliant *desirable* processor available for
 general use.  four years and counting.  it's getting boring.


 I jumped in late and haven't read the entire thread, but what
 do you consider desirable?

 desirable in the context of both end-user pricing and modern end-user
features.  let's take the announcement of an FSF-Endorseable laptop
very recently as a good example.  if you look closely at the specs,
you find it was something like a 5-year-old laptop.  how can a 5 year
old laptop be desirable to the average person on the street?


 So it looks like we still don't have a 100% open source computer.



 Ahem You can run our boards with 100% open source, and I think
 our quad-core GHz i.MX6

 are they available for $USD 50 because you're selling them in volumes
of 100k+ units?  rockchip's 28nm GPL-violating quad-core RK3188 is $USD
12.  compared to $36 for a *more power-hungry* 40nm quad-core iMX6.

 (btw, if you're interested, i can put you in touch with a company
that can get you China-based pricing for the iMX6.  for various very
good reasons Freescale operate *different* pricing for S.E. Asia and
the EU/USA).

 which would you think would be more desirable?  $99 products with
worse battery life, or $50 products with better battery life?

 unfortunately, the iMX6 is considered old already.  the pricing
doesn't help, either.

 There's one key piece that's normally closed-source (the GPU), but
 there's an open-source alternative here:
 https://github.com/laanwj/etna_viv

 yes.  i've been advocating the use of Vivante to the Fabless
Semiconductor companies i'm in contact with, exactly for the reason
that etnaviv exists.

 unfortunately the iMX6 still has proprietary VPU firmware.  if that
were to be reverse-engineered then it would mean that the iMX6 would
be the *very* first FSF-Endorseable ARM SoC... that was still just
about classifiable as desirable.  but only just.


 There are also some open-source bits with licenses other than
 GPL/LGPL provided by Freescale (notably, some of the VPU code),
 but the restrictions are pretty reasonable: Don't use on non-Freescale
 processors...

 unfortunately the fact that it is *possible* to install proprietary
firmware means that it's non-FSF-Endorseable.  to be
FSF-Endorseable... it's complicated, but in this case it would be
necessary to power down the VPU entirely at a system-board-level and
for it to *never* be possible to power the VPU back up.  ever.

 now, if it was a separate chip (or peripheral), where the firmware
was uploaded as a one-off to get the hardware up-and-running: that's
ok.  if the firmware was hard-coded into a library (rather than being
a separate blob uploaded into the SoC's memory) then it would qualify
under the GPL's System Library exemptions.  but the iMX6 VPU
firmware arrangement is neither of these things, so unfortunately it
doesn't qualify.

 Again, ahem... We try very hard to give back when we can.

 ... because as a mid-level-volume company that's also a software
service and solutions provider you fully recognise the value that's
being provided by the software libre community.

 which is great!

 i was NOT referring to boundarydevices - or companies like boundarydevices.

i was referring to large mass-volume companies such as AMLogic,
Mediatek, and LG, and the likes, all of whom *knowingly* commit
endemic GPL violations on a massive scale.  LG actually consider it a
FAILURE ON THEIR PART IF YOU EVEN NOTICE THAT THERE IS GPL SOURCE CODE
USED IN THEIR PRODUCTS.  rather than work with the software libre
community, they paid a team of lawyers a considerable amount of money
to devise a sophisticated tivoisation and DRM scam.

does boundarydevices knowingly and deliberately commit GPL
violations??  of course they don't.  because you are in a different
market where you provide solutions, not appliances.

 Essentially everything we provide is open-source,

 awesome!  that's because you Get It.

   but, working from the ground up is the only way that this situation
 is going to change.  The Plan:

 1) make some successful desirable mass-volume hardware that respects
 software freedom
 2) sell lots of it
 3) put the money made back into funding software libre
 4) put the rest back into solving a non-free issue whilst not
 compromising the profitability needed for the next iteration round the
 loop
 5) repeat from 1.


 I don't know what you consider lots, and we don't put **all** of
 our money back into free software, but we do spend time

Re: Official support Odroid hardware and other ARM development boards.

2014-02-27 Thread Luke Kenneth Casson Leighton
On Thu, Feb 27, 2014 at 3:32 AM, Paul Wise p...@debian.org wrote:

 Interestingly there was an ARM based tablet that planned to use an
 OpenRISC chip for the embedded controller.

 http://www.indiegogo.com/projects/pengpod-1040-quad-core-linux-android-dual-booting-tablets

 ah that's a misunderstanding, paul.  what neal (bless 'im) is
referring to there is the fact that the A31 has *inside the Silicon
design* an early version of the OR1000 core, which is run at a very
low clock speed and is used to manage the chip when everything else is
powered down in sleep state.

 let's just hope the Allwinner had the good sense to realise that if
they made any modifications to that OR1000 core they have to respect
the LGPL license that OR1000 is released under, eh?

 ... oh dearie me :)

l.





Re: Official support Odroid hardware and other ARM development boards.

2014-02-27 Thread Luke Kenneth Casson Leighton
On Thu, Feb 27, 2014 at 7:27 AM, Tobias Frost t...@coldtobi.de wrote:
 Am Mittwoch, den 26.02.2014, 18:26 -0700 schrieb Eric Nelson:

 There are also some open-source bits with licenses other than
 GPL/LGPL provided by Freescale (notably, some of the VPU code),
 but the restrictions are pretty reasonable: Don't use on non-Freescale
 processors...

 If you need that code to run the board, well, than also your boards are
 non-free, if you refer to the DFSG.

ah that's a good point.  tobias: the VPU code is basically for
accelerated video and audio encode and decode.  unlike the AM338x
DaVinci HD Media focussed SoCs from TI that firmware is *not* required
for general day-to-day usage of the chip.  you can simply ignore that
firmware, and the only Bad Thing (tm) that will happen is that
you'll need to use NEON instructions for watching videos and you'd
only get what... 720p20 or something like that.  big deal.

 so, the firmware could go into the non-free debian repositories, just
like Flash plugin does at the moment.

 would something like that qualify as acceptable under the DFSG?

l.





Re: Official support Odroid hardware and other ARM development boards.

2014-02-27 Thread Luke Kenneth Casson Leighton
On Thu, Feb 27, 2014 at 12:52 PM, Paul Wise p...@debian.org wrote:
 On Thu, Feb 27, 2014 at 8:00 PM, Luke Kenneth Casson Leighton wrote:

  ah that's a misunderstanding, paul.  what neal (bless 'im) is
 referring to there is the fact that the A31 has *inside the Silicon
 design* an early version of the OR1000 core, which is run at a very
 low clock speed and is used to manage the chip when everything else is
 powered down in sleep state.

 Sounds similar to what I mean by embedded controller,

 ... yeah.  that's a good description.

 which is a term I'm only familiar with from the x86 hardware space.

 anything these days that's an 8086 (or more usually an 8051) is...
yeah.  you'd be surprised - no, more like stunned - how many
appliances are based around the 8051 instruction set.  CMOS camera
chips, touchscreen controllers.  ARM's got into this area with the
Cortex M0, M3 and M4 licensable designs.

 Only
 difference about the A31 is that it is part of the SoC rather than
 external. Thanks for the clarification and extra detail.

 no problem

 http://www.coreboot.org/Embedded_controller

 let's just hope the Allwinner had the good sense to realise that if
 they made any modifications to that OR1000 core they have to respect
 the LGPL license that OR1000 is released under, eh?

 The LGPL specifically allows private modifications

 ... you sure about that? :)

 and physical hardware isn't copyrightable

 augh.  ahh that's what Design Rights are for... hmmm... i wonder if
that's the solution to being able to apply the GPL and LGPL to
hardware...

 sorry this is off-topic.





Re: Official support Odroid hardware and other ARM development boards.

2014-02-27 Thread Luke Kenneth Casson Leighton
On Thu, Feb 27, 2014 at 6:18 PM, Eric Nelson
eric.nel...@boundarydevices.com wrote:
 Thanks for the feedback Luke.

 no problem sah.  btw, the sabre lite pcb cad/cam files i received
from one of your associates were really useful.  i was able to create
- with a lot of effort(!) - an EOMA68-iMX6 design.  sadly nobody's come
forward with the funds to see it through to production, but hey...

 I'm not sure I understand the distinction. The VPU is really a
 separate processor, though shipped as a part of the same package.

 right.  if it *was* a separate processor - a separate chip or a
separate peripheral - then that wouldn't be a problem.  apart from
anything, it would be possible to simply... not put that separate
processor onto the PCB, and you'd have a fully-endorseable product.
but, if you did that, especially with video processing, the power
needed to sustain the bandwidth required to communicate inter-chip
would be so high that it would no longer be possible to call it a low
power SoC.  or even an SoC at all!

 SoCs - systems-on-a-chip - are a unique and serious problem for FSF
Endorseability because the endorsement rules are *very* specific.
when faced with the total integration of SoCs, even a single piece
of proprietary firmware can jeopardise an entire SoC's chances of being
chosen.

 so, sadly, the market is divided into two types of SoCs:

 1) those that are desirable (as in, mass-market, mass-volume, low
price, good features).
 2) those that aren't (because typically they don't have a VPU, or a
GPU, or they're old and so there's been time to do the required
reverse-engineering).

those that are in category 1 in the ARM (and MIPS) SoC world... *all*
of them require some form of proprietary firmware blob.  i really mean
*all* of them.  and i've been looking now for over six years.  not
one!

those that are in category 2 are such a small and specialist market
that it is cost-prohibitive to even consider supplying the (small)
FSF-Endorseable market with product based around them.

[this is why i designed the EOMA68 platform, btw, so that the majority
of the cost of product development could be focussed in the base -
the chassis, then later we could look at doing an FSF-Endorseable CPU
Card with less features]

   especially if you're planning to sell the sabre-lite for a few more
 years yet.  btw, it's got 2gb RAM, hasn't it?


 Nitrogen6X has 2GiB (as an option at the moment). Upcoming product
 will have 4GiB.

 oo that's very unusual.  you'd have a niche there within the debian
(and other distro) worlds especially for the compile farms, where the
more RAM the better, because of the issues associated with the linker
phase in debug compiles taking up vast amounts of RAM.  the effect
is just... staggering.  a link phase that would normally be an hour
can often be *TWO DAYS* because the cross-referencing is so vast that
if the entire binary isn't fully RAM-resident, ld basically
permanently thrashes the OS.


 SABRE Lite lives in this weird place because we did it jointly
 with Freescale so we've needed approval to change.

 oh dear :)

 Right. Laci (on CC) is working right now to get Debian packaging of
 our 3.10.17 kernel, though he's targeting Ubuntu first because Unity
 plays nicely with a touch screen.

 ubuntu also uses debian installer.  i understand that they still cut
much of this stuff over from debian, where they haven't continued the
silliness of an entire fork of the debian distro *sigh* :)





Re: Official support Odroid hardware and other ARM development boards.

2014-02-26 Thread Luke Kenneth Casson Leighton
On Wed, Feb 26, 2014 at 8:54 PM, Reg Lnx regier.kun...@gmail.com wrote:

 I'd like to know if Debian community have plans to officially support
 any of those development boards, providing ready to boot images,
 containing the Debian Installer for example.

 hi reg,

 this question comes up on a regular basis: "i would like to see
{insert custom-designed completely unique} hardware model A using
{custom-designed completely unique} processor B from {competitive and
specialist} manufacturer C working with debian".

 it comes up so regularly that it should really have an FAQ answer
(can that be done at all, added somewhere to a debian-arm wiki?)

 the clue as to why this is challenging is in the format i've laid out
- the more general form of the question.  the first issue is that each
SoC is unique, custom designed *even* from the same manufacturer with
unique ways to access GPIO, interfaces, everything.  unlike the
monolithic x86 architecture, there's absolutely *no* common ground.

 the second is that each piece of hardware (that's designed around
each of these SoCs with absolutely no common functionality) is again
utterly unique.

 now, if the buses (interfaces) on these SoCs were limited to
self-describing ones - SATA, USB, PCIe, Ethernet - this wouldn't be
a problem.  heyyy yeahhh, let's just plug in some peripherals, they're
detected at run-time, nooo problem, right?  ... sounds just like an
x86 system, right?  but we're not *in* x86-land.

 so instead, we have to customise *the entire* software stack - from
the ground up, for the most ultra-basic and mind-numbing tasks such as
"if you want that USB hub up-and-running, ya gotta pull GPIO pin
A591231569 to high for 20ms".  i give you that as just one boring
and *simple* example.  in many cases it can be far more complex than
that.
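
[purely as an illustration of what that kind of one-off, board-specific
bring-up looks like from userspace, here is a rough python sketch using the
kernel's sysfs GPIO interface.  the pin number, the hub-power detail and the
20ms figure are made up for the example - the real values only ever come
from the board schematic or the vendor's kernel sources.]

import time

GPIO = 42  # hypothetical pin that happens to gate power to an on-board USB hub

def sysfs_write(path, value):
    with open(path, "w") as f:
        f.write(str(value))

sysfs_write("/sys/class/gpio/export", GPIO)                    # expose the pin
sysfs_write("/sys/class/gpio/gpio%d/direction" % GPIO, "out")  # make it an output
sysfs_write("/sys/class/gpio/gpio%d/value" % GPIO, 1)          # drive it high
time.sleep(0.020)                                              # hold it for 20ms
# ...and only now does the hub power up and enumerate.  none of this is
# discoverable at runtime: it is hard-coded, per board, forever.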

 now.

 against that background, can you appreciate that what you've asked is
as follows:

 "i'd like to know if anyone in the Debian community has the desire -
without any form of payment of any kind - to spend several weeks
full-time working through all the issues required, from the ground up
including possible reverse-engineering, possible JTAG debugging,
porting u-boot (possibly up to 3 weeks full time per board), then
porting the kernel (possibly up to 3 weeks per board again of
full-time effort), then finally getting to the O.S., then adding
support to Debian Installer (possibly an additional 3 weeks because
Debian Installer is quite hard to understand and work with)."

  and against this time required, exactly how long does one
particular SoC last?   (answer: it's about 6-9 months before a better
successor comes along).

 does that now make sense, reg, why there are so few ARM systems that
have official debian support?  or any official OS support at all?

 it's a real serious problem - we know.  and i told people over 3
years ago that device-tree isn't the answer, because the differences
between SoCs and the hardware systems that use ARM SoCs are simply too
great for device-tree to make any impact.  device-tree works
fantastically well in the x86 world (monolithic architecture), really
great where it was designed - for Sun Microsystems (a company with
control over its architecture and its processor lines), really great
for the PowerPC community (small, mostly monolithic architecture).

 and no, you can't have a BIOS, because that would require all ARM SoC
licensees - over 650 of them - to communicate and agree.  that's not
going to happen.  plus it would be runtime overhead that none of them
would accept.

so, bottom line: if you want any particular OS - doesn't matter which
one it is - ported to a specific piece of hardware, you need to
contact someone, pay them some money, and give them a contract to get
the work done.  if it's not important, however, you could just wait to
see if someone else does the work.  it might happen.  but it almost
certainly won't, because the bang-per-buck ratio - due to the
nova-like lifetime of ARM SoCs - is very very low.

 apologies if that's not what you wanted to hear!

l.





Re: Official support Odroid hardware and other ARM development boards.

2014-02-26 Thread Luke Kenneth Casson Leighton
On Wed, Feb 26, 2014 at 11:46 PM, Reg Lnx regier.kun...@gmail.com wrote:
 Thank you Karsten.
 It answers a lot of questions and it makes sense. I think we can say the
 very same about the odroid, it has some non free things too.

 indeed it does.  i've been working at every opportunity possible to
get a software-libre compliant *desirable* processor available for
general use.  four years and counting.  it's getting boring.

 So it looks like we still don't have a 100% open source computer.

 oh you do... they're just shit compared to the latest and greatest,
because you have to go back about 2 to 5 years in technology terms
to get them.  which means they're either useless, or expensive, or
both.  take the latest software for example: you simply can't run
libreoffice or firefox in under 512mb of RAM nowadays.   or at less
than a 1ghz processor speed.

 the manufacturers of successful products just *do not* wish to work
with software libre individuals.  sure they're prepared to take
whatever they've created for free and say "thank you very much, fuck
off now, BYE sucker, we'll fuck you over for the next revision you
release as well har har that's what you get when you release code with
such a lame license that doesn't need us to pay you any money har
har"... you get the drift.

 but, working from the ground up is the only way that this situation
is going to change.  The Plan:

1) make some successful desirable mass-volume hardware that respects
software freedom
2) sell lots of it
3) put the money made back into funding software libre
4) put the rest back into solving a non-free issue whilst not
compromising the profitability needed for the next iteration round the
loop
5) repeat from 1.

if !do above, expect current situation (support for ARM hardware in
GNU/Linux distros) to remain very very low.  other methods aren't
working out.

l.





Re: On a Samsung ARM Chromebook, could nv-uboot easily boot to stock linux kernels, by way of ARM-GRUB?

2014-01-02 Thread Luke Kenneth Casson Leighton
On Thu, Jan 2, 2014 at 2:57 PM, Subharo Bhikkhu
subh...@forestsangha.net wrote:

 Indeed.  It seems that the Utopian technological future that I was hoping 
 for, where solid state hardware would last *even longer* than non-solid state 
 hardware, has been replaced with a dystopian present, where the solid state 
 hardware lasts *even less long* than the non-solid state hardware that came 
 before it,

 ah if you are referring to NAND flash, that's nothing to do with ARM
processors and more to do with cost (no moving parts, smaller
devices).  the issue with NAND is that the smaller the geometries
become (25nm, 22nm etc.) the less reliable the storage and the more we
end up relying on software and ECC.  so it's not *planned*
obsolescence!  it's down to the physics :)

 but, that _really_ has nothing to do with what type of processor is
in the device.

l.





Re: On a Samsung ARM Chromebook, could nv-uboot easily boot to stock linux kernels, by way of ARM-GRUB?

2014-01-01 Thread Luke Kenneth Casson Leighton
subharo, hi,

basically you've been caught out by the use of treacherous computing,
and have purchased a product that you cannot and will not ever own.
the samsung processors have bootloader-signing actually built-in to
the ROM: once the e-fuses are fired and the private key installed in
EEPROM there is no way to gain control of the machine short of paying
someone tens of thousands of dollars to have the top taken off the
processor in a class 1 cleanroom and to use lasers to dig around, hunt
for and re-build the e-fuse.  and then put the plastic back.

actually, there *might* be a cheaper way: obtain a replacement
processor, pay for the treacherous one to be removed and have the
stock one soldered in its place (and then blow the e-fuse which
permanently disables treacherous computing).  as this would involve
heating up the board to around 200C and these SoCs have a hell of a
lot of pins it is not without risk.

but, without going down that insane route, you are along the right
kind of lines with loading a 2nd bootloader - one that can then load
an unsigned kernel.

there is potentially a simpler option: you might wish to look at the
kexec option.  this would allow you to continue to use the *existing*
kernel - unmodified - purely as a bootloader.  there is a userspace
program kicking around which allows selection of kernels (heck, you
could even try using grub in userspace).  modify /sbin/init (or other
method) to run that userspace kernel-selector, then that userspace
kernel-selector-program will kexec the *actual* kernel that you
require, which can, of course, be anything you want.
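
[a rough sketch only - the package name (kexec-tools), kernel/initrd paths
and command line below are placeholders, but this is the general shape of
the trick: a userspace "selector" (here just a python script, run from
/sbin/init or an initscript) stages the kernel you actually want and then
jumps straight into it.]

import subprocess

KERNEL  = "/boot/vmlinuz-custom"          # hypothetical: the kernel you really want
INITRD  = "/boot/initrd.img-custom"
CMDLINE = "root=/dev/mmcblk0p2 rootwait console=tty1"

# stage the new kernel in memory, using the *running* vendor kernel...
subprocess.check_call(["kexec", "--load", KERNEL,
                       "--initrd", INITRD,
                       "--command-line", CMDLINE])

# ...then jump into it.  the vendor kernel has, in effect, been demoted to
# being nothing more than a bootloader.
subprocess.check_call(["kexec", "--exec"])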

regarding the custom-compiled packages: yep... tough.  that's how
things are in the ARM world.  i won't say "get used to it"... instead
i'll say "please *consider* getting used to it" :)  there is no BIOS:
*every* kernel is custom-compiled and hard-coded to match the
hardware... which, because there is no BIOS, and all the CPUs are
different *and* all the hardware is different, you see how that
quickly becomes a complete nightmare?

so yeah, if someone's already done custom-compiled packages then
you are very, very lucky.

l.





Fwd: [Arm-netbook] Another new ARM plateform

2013-12-25 Thread Luke Kenneth Casson Leighton
just something for people's attention, this is a pretty damn good find
by erix: quad-core 1ghz iMX6, it has 2gb of RAM, SATA, GbE and a full
MiniPCIe slot (not just USB-only but *full* PCIe because the iMX6 has
1x PCIe).  and plenty more.  on the face of it it looks like the $150
price tag is high when compared to alternatives but the alternatives
don't come with SATA, PCIe or 2gb of RAM.

l.


-- Forwarded message --
From: Erix erix...@gmail.com
Date: Wed, Dec 25, 2013 at 6:42 AM
Subject: [Arm-netbook] Another new ARM plateform
To: arm-netb...@lists.phcomp.co.uk
Cc: erix molinie erix...@gmail.com


Hi everybody,

first of all, Merry Christmas to all of you.

what do you think about this new one?

http://www.tbsdtv.com/launch/tbs-2910-matrix-arm-mini-pc.html

driven by a Freescale MCIMX6Q5EYM10AC / Quad ARM Cortex-A9 at 1.0GHz
and 10/100/1000 wired Ethernet, WIFI IEEE 802.11n/b/g + SATA

the GPIO port looks a bit...  little.

Regards
Erix


___
arm-netbook mailing list arm-netb...@lists.phcomp.co.uk
http://lists.phcomp.co.uk/mailman/listinfo/arm-netbook
Send large attachments to arm-netb...@files.phcomp.co.uk





Re: Good ARM board for Debian?

2013-12-24 Thread Luke Kenneth Casson Leighton
jerry: i apologise - there are too many judgements and assumptions for
me to be able to continue this conversation, especially without
consultancy fees being paid.  i've given you a lot of advice: you're
not listening to it.  you may also wish to bear in mind that the
client is going to be able to do their own google searches and find
this conversation.  you may also wish to consider that you've misled
people on this list, and they have provided you with answers according
to those misleading questions.  you might like to consider
compensating them - or the debian project - for their time in some
appropriate way.

l.





Re: Good ARM board for Debian?

2013-12-24 Thread Luke Kenneth Casson Leighton
On Tue, Dec 24, 2013 at 1:07 PM, Dale Amon a...@vnl.com wrote:
 On Tue, Dec 24, 2013 at 11:37:07AM +, Luke Kenneth Casson Leighton wrote:
 jerry: i apologise - there are too many judgements and assumptions for
 me to be able to continue this conversation, especially without
 consultancy fees being paid.  i've given you a lot of advice: you're
 not listening to it.  you may also wish to bear in mind that the
 client is going to be able to do their own google searches and find
 this conversation.  you may also wish to consider that you've misled
 people on this list, and they have provided you with answers according
 to those misleading questions.  you might like to consider
 compensating them - or the debian project - for their time in some
 appropriate way.

 However many of us who are silent have found this to be
 one of the more interesting discussions on the business
 and industry of SoC that have come along in a very long
 time. Definitely a high S/N.

 appreciated, dale.  something that, in my... eenteresting history of
interactions in the software libre world of the past 18 years i'm a
leetle paranoid about.

 I say this as someone in early stages of an aerospace
 product development.

 intriguing.  as there's some interest i'm happier to be able to carry
on a bit more: investigations of SD/MMC which is coincidentally
multiplexed onto the EOMA68-A20 CPU Card [not as a bit-banged
interface, jerry - one of the many assumptions i left it with you to
ask as questions not as judgements] show that SD/MMC has an SPI mode
[not as a bit-banged interface, jerry - another of the assumptions
that you made as judgements].

 and, although it's sketchy, there appears to be evidence in the form
of a broadcom wifi driver of active support for SPI in the allwinner
SD/MMC/SDIO hardware:

 
https://github.com/allwinner-ics/lichee_linux-3.0/blob/master/drivers/staging/brcm80211/include/sdio.h

 so it looks at first glance like it's possible.  really i should get
some sort of random bit of SPI-based hardware and try it out.
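
[for anyone wanting to do the same experiment: a hedged sketch of "trying it
out" from userspace, assuming the controller ends up exposed through the
kernel's spidev driver.  the bus/device numbers, clock speed and the bytes
clocked out are all placeholders - pick whatever matches the random SPI
peripheral you actually wire up.]

import spidev            # py-spidev (packaged as python-spidev / python3-spidev)

spi = spidev.SpiDev()
spi.open(0, 0)           # /dev/spidev0.0
spi.max_speed_hz = 500000
spi.mode = 0

# full-duplex transfer: clock a few bytes out, read the reply back in.
reply = spi.xfer2([0x9F, 0x00, 0x00, 0x00])   # e.g. a JEDEC read-ID probe
print("received:", [hex(b) for b in reply])

spi.close()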

l.





Re: Good ARM board for Debian?

2013-12-24 Thread Luke Kenneth Casson Leighton
On Tue, Dec 24, 2013 at 2:27 PM, Jerry Stuckle jstuc...@attglobal.net wrote:
 On 12/24/2013 6:37 AM, Luke Kenneth Casson Leighton wrote:

 jerry: i apologise - there are too many judgements and assumptions for
 me to be able to continue this conversation, especially without
 consultancy fees being paid.  i've given you a lot of advice: you're
 not listening to it.  you may also wish to bear in mind that the
 client is going to be able to do their own google searches and find
 this conversation.  you may also wish to consider that you've misled
 people on this list, and they have provided you with answers according
 to those misleading questions.  you might like to consider
 compensating them - or the debian project - for their time in some
 appropriate way.

 l.



 Luke,

 Sorry, I have been upfront since the start about the requirements.  I have
 not misled anyone.

 you've misled people by releasing information in reverse-order.  we
had no idea that you have a client who is looking for 1000s of units.
that came *later* in a 2nd message.  the 1st message - weeks ago -
gave everyone the impression that you were looking for a *personal*
system.  everyone responded on that basis.

  But you have continued to ignore those requirements,
 instead pushing your own design,

 wrong on several counts.  i _have_ warned you about making
judgements, jerry.  now you're beginning to annoy me by continuing to
make judgements, when i'm trying to help you.  and not being paid to
do so.

1) based on the facts (which have come through very slowly), the
EOMA68 design and the design strategy is the only one that fits the
requirements.  despite your judgements to the contrary.

2) i have given you some leads to explore.  the freescale and atmel
parts.  these are nothing to do with the EOMA68 design.

3) when we did not have all the facts, i and several others
recommended that you explore the cubieboard and other hardware.  when
you *eventually* presented all the facts [drip-feed], those boards
were eliminated.  also, the EOMA68-A20 and Improv were not ready or
available at the time, so i could not recommend them then.


 which does not work on several fronts.

 that is a judgement on your part which is in direct conflict with the
facts.  i'm still waiting for you to ask the questions rather than
continuing to make judgements.

 are you interested in asking questions or is it more useful to you to
pass judgement?


 And I hope you don't continue to demand consulting fees for Debian email
 list support.  There are thousands of people on this and other Debian lists
 who provide answers to questions without being mercenary about it.  And they
 do it without asking if this is a personal or work-related question.

 you may continue to view this in a way that is useful to you.

 Yes, my client can read this thread; in fact they know this is one of the
 places I'm looking.  I was very up front with them about the difficulty in
 meeting their requirements,

 jerry, i have to be honest, here: the difficulty is in your ability
to understand the field, not in the requirements themselves.

l.





Re: Good ARM board for Debian?

2013-12-24 Thread Luke Kenneth Casson Leighton
On Tue, Dec 24, 2013 at 1:07 PM, Dale Amon a...@vnl.com wrote:


 However many of us who are silent have found this to be
 one of the more interesting discussions on the business
 and industry of SoC that have come along in a very long
 time. Definitely a high S/N.

 I say this as someone in early stages of an aerospace
 product development.

 ok, so after a little bit of thought, it occurred to me that there
may be others who would benefit from continuing the discussion rather
than attempting to get jerry to correct some of his assumptions and
stop making judgements.

 one of the mistakes that jerry's made was to assume that a board with
a higher component count would automatically be more expensive than a
lower component count board.  the pricing is critically
volume-dependent.  in cases where the boards are ordered in 100k+
volumes (which would be reasonable to expect in a product that's
designed around a mass-volume strategy), there are several advantages:

 1) the factories can typically subcontract and ask ODMs to absorb the
NREs of development.  especially if the factory is PRC
State-Sponsored.  a written letter placing an order for 200k units -
even ones that haven't even been designed yet - is as good as cash.

 2) tape and reel typically comes in quantities of 2500 (or so).  it's
only in europe and the west that you can buy part-reels online.
typically from companies such as mouser, digikey, arrow etc.  the
markups for doing so are usually between 300 and 1000%.

 3) ordering 250 or even 1000 units therefore ends up with *massive*
component wastage.  or huge storage costs.

 4) below 1000 units it's just not worth any factory's time to set up
automated assembly: it takes too long to set up and tear down.
therefore when you get quotes for 100 units you find that the labour
costs bring the costs to pretty much the same as 1000 units.

 the EOMA68 strategy is designed to take these factors into
consideration.  by taking advantage of huge amortised bulk-buying
during the high point of any one SoC's lifetime, not only are the NREs
drastically reduced but the production runs are longer and,
potentially, if the volume is large enough, actually end up
*extending* a SoC's life beyond its anticipated EOL.  overall, the cost ends
up being lower - MUCH lower - despite the possibility of there being
several components (such as HDMI, Micro-SD or USB-OTG) which may not
have been part of the initial requirements when compared to a
single-board design.

 so, the scenario that jerry's looking at is one where the NREs to
create a custom 6-layer PCB would be somewhere around... $USD 20k to
$30k, PCB printing and population of 2 samples would be around $1500
per revision of the PCB, and the cost of the system would (depending
on complexity and peripherals) typically be somewhere around let's say
$25 in 10k volumes, what with all the production NREs being amortised
over that 10k volumes *BUT*, if only 1,000 units were required, those
NREs would probably lift those prices to $40 or even $50.

by contrast, let's say that a mass-volume module is used instead.  a
custom 2 or 4-layer carrier-board PCB would be somewhere around... $5k
to develop, PCB printing and population of 2 samples would be
around... $1000 per revision of the PCB, and the cost of the system
would (assuming the same functionality) probably be about the same in
10k volumes.  if only 1,000 units were required, however, then because
the module was mass-volume and effectively just a component, the only
NREs affecting the pricing would be those of the carrier board which,
as it was a simpler design, would potentially be lower.  pricing could
well be around $5 less than a custom board, at the lower volumes.
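
[to make the arithmetic explicit: the whole argument is just one-off NRE
divided across the production run.  the NRE figures below are the same
illustrative ones used above; the $22.50 BOM+assembly number is a made-up
midpoint purely to have something to plug in, not a quote from anyone.]

def unit_cost(nre, bom_and_assembly, volume):
    """amortise the one-off engineering cost (NRE) across the run."""
    return bom_and_assembly + float(nre) / volume

# custom 6-layer single-board design: ~$25k NRE
# simple carrier board for a mass-volume module: ~$5k NRE
for volume in (1000, 10000, 100000):
    print("%6d units: custom $%.2f/unit, carrier $%.2f/unit"
          % (volume,
             unit_cost(25000, 22.50, volume),
             unit_cost(5000, 22.50, volume)))

# at 10k and above the two converge; at 1k the custom board carries a heavy
# NRE penalty per unit - which is the effect described in the text above.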

the other mistake that jerry's made is in not asking about the plans
to put SPI on the EOMA68 interface list.  he's dismissed the board
based on a judgement without checking the facts.

i trust that others may find this useful and insightful information,
as it's challenging enough as it is to put debian onto ARM-based
systems, without having to custom-port the OS to every single piece of
hardware (not just a SoC).  how many times have people asked - just on
this mailing list alone in the past year - "please can you advise me
how to get debian onto my {insert piece of low-cost hardware i bought
off the internet}"?  i've lost count!  but every time we've had to
patiently advise them, "i'm sorry but you're looking at a
reverse-engineering effort" or "i'm sorry but you'll need to run a
chroot under android unless you're happy to do reverse-engineering".

i leave it at that.

l.





Re: Good ARM board for Debian?

2013-12-23 Thread Luke Kenneth Casson Leighton
On Mon, Dec 23, 2013 at 4:43 PM, Jerry Stuckle jstuc...@attglobal.net wrote:
 On 12/23/2013 2:24 AM, Luc Verhaegen wrote:

 On Sun, Dec 22, 2013 at 11:09:39PM -0500, Jerry Stuckle wrote:


 Let's try this again.  I'm still looking for a good ARM board for
 Debian.  I thought the Olinuxino A10s board would work until I found out
 recently that Allwinner has stopped making the SDK as of last February.
 No more updates for Linux, and it looks like this chip is going by the
 wayside.  We need one which will be around for a while.


 Wow, what world do you live in?

 A world where cheap chinese manufacturers actually support their
 hardware? Where they make full software available for many years? What
 world is that?


 Luc,

 I live in a world with commercial products where costs must be controlled to
 remain competitive.  This includes not only the cost of parts now, but the
 cost of having to redesign when something you are using becomes unavailable.

 unfortunately, what luc is pointing out is that the requirements that
you've set, whilst being extremely common, are in fact mutually
exclusively incompatible.

 the people who make low-cost SoCs that you're expecting to buy don't
make them for you - or in fact any of us here on debian-arm - they
make those $5 to $7 SoCs to sell *immense* numbers of tablets,
tablets, tablets and yet more tablets... in china.  the rest of the
world - which is 1/10th the size as a market - is almost an
afterthought.

 the hilarious thing is that not even the chinese fabless SoC
companies themselves realise this.  allwinner made the A31 a year ago,
it was quad-core, it had MIPI, it had DisplayPort, it had all the
fantastic bells and whistles, had faster graphics (PowerVR 545MP),
ticked all the boxes... but because it was $19 and targetted at
SuperTablets (with 2560x1800whatever displays), it was *automatically*
outside of mainstream chinese markets.  Rockchip did, pretty much at
the same time, a 28nm quad-core $12 lower-cost SoC with slower graphics
(still MALI 400), and wiped the floor with them.

 but even rockchip are not immune to the supernova SoC effect.  that
amazing 28nm $12 quad-core SoC - which is only sold either to clients
with good engineering resources (of whom there are extremely few - tom
cubie's team is one of them) or, with full support services,
to a handful of chinese tablet-tablet-tablet-tablet makers who have
proven that they can shift 100k units - will be viewed with increasing
unease by potential ODMs because of its age [under 9 months!].

there *will* be something better coming out...  always... and the
moment it does, you're screwed.  but there *is no other way* to get
access to these low-cost SoCs!

... except with EOMA68.  the exact scenario that you face, jerry, is
why i designed EOMA68.  it's there to provide people like you [1] with
access to the latest low-cost SoCs, yet using only a subset of
functionality of each SoC, such that the base-board *has* to be
designed to accommodate a long-term strategy where the SoC *does not
matter*.

does that make any sense?

l.

[1] the situation is compounded by software license violations.  i
have a friend who has an engineering firm in australia.  he wants to
do a low-cost WIFI product.  you'd think that it would be possible to
buy a tablet, strip it down and put the PCB into a great wall-mounted
product, make a lot of money, right?  wrong.  he has a stack a METRE
HIGH of rejected tablets - all of them low-cost - which tells him
before he's even started that he's out of business.  why? because
*every single one* of them is GPL-violating and the factories don't
even have the source code.  i'll say that again: NOT EVEN THE
FACTORIES have the source code: they were supplied with GPL-violating
binary-only images by a 3rd party intermediary design house.  by
contrast, because i am a software libre advocate, no EOMA68 CPU Card
will receive Certification, no SoC will even be *considered* unless
the GPL and all other software licenses are properly respected.





Re: Good ARM board for Debian?

2013-12-23 Thread Luke Kenneth Casson Leighton
jerry this is an example of a freescale SoC that could potentially fit
the requirements you've set.

http://www.freescale.com/webapp/sps/site/prod_summary.jsp?code=VF5xx#

like many of these cortex A5 SoCs it's relatively new, and quite... specialist.

the other one is that SAMA5:
http://www.atmel.com/microsite/sama5d3/highlights.aspx

the advantage of atmel SoCs is that atmel have considerable experience
in analogue: you *do not* need an external PMIC, just a few capacitors
and you're done, because they build the PMIC directly into the SoC
itself.

so do not be tempted to judge atmel SoCs on the price of the SoC
alone: bear in mind that the overall design will be simpler and the
total cost lower.

but, do not also be tempted to just run at atmel or freescale going
hooray! - consider the expected lifetime of the product for the
client, as well, and check the end-of-life dates on the SoC [and all
other components].

l.





Re: Good ARM board for Debian?

2013-12-23 Thread Luke Kenneth Casson Leighton
[jerry please read very last paragraph first, thanks].

On Mon, Dec 23, 2013 at 8:11 PM, Jerry Stuckle jstuc...@attglobal.net wrote:
 On 12/23/2013 2:27 PM, Luke Kenneth Casson Leighton wrote:

 On Mon, Dec 23, 2013 at 4:09 AM, Jerry Stuckle jstuc...@attglobal.net
 wrote:

 On 9/26/2013 5:13 PM, Jerry Stuckle wrote:


 Hi again, all,

 Well, it looks like for several reasons the RaspberryPi won't work for
 this project.  Can anyone recommend other ARM-based boards which run
 Wheezy well?

 This is going to be a used as a monitor/controller, so major speed isn't
 a factor.  It will mainly be using SPI and GPIO ports, plus ethernet for
 communications.  Other things like graphics, USB ports, etc. are not
 important for this project (but their presence doesn't rule the board
 out).

 Also the ability to run their ARM version of Wheezy under QEMU is
 important for development.

 I appreciate any recommendations.

 TIA
 Jerry



 Let's try this again.  I'm still looking for a good ARM board for Debian.
 I
 thought the Olinuxino A10s board would work until I found out recently
 that
 Allwinner has stopped making the SDK as of last February.


   ignore that.  it's irrelevant.  unless you're ordering 100k+ units
 you'll never get direct support from allwinner, they're overloaded as
 it is.  you're using completely the wrong criteria.


 It is completely relevant when you are talking a commercial product.

 yes.  the facts here have come out rather slowly.  when you initiated
this thread i believe pretty much everyone responded based on an
assumption that you were looking to purchase a single unit for
personal use.  only later on (even in this round of questions) and
even later on in the _message_ does it become clear that the
requirements are actually those of a client.

 so we are a bit arse-about-face here :)

 we also have yet to establish the expected product lifetime.  is it 6
months, 12 months, 6 years or 12 years?  also are product revisions
anticipated?  would it be okay to redesign the entire board later on,
using a completely different SoC?

 When
 the manufacturer stops creating SDKs for a chip, chip EOL is not far behind.

 then the fact that the majority of chinese ODMs only ever produce one
SDK - usually an incomplete one and almost invariably GPL-violating -
should tell you everything you need to know.

   you _should_ be asking the question how long will the sunxi
 community support the A10s and the answer to _that_ will be as long
 as there are people using them.  not as long as allwinner is doing
 an SDK - fuck the SDK: it won't help you anyway.


 Commercial products cannot depend on community support.  It is too
 unreliable.

 i wouldn't say unreliable - i'd say that they're just as dependent
on financial support as anyone is.  a community is the first place i
would go to look for people to pay to provide *commercial* support.

 and the nice thing is that by taking advantage of those people to
provide commercial support, it strengthens their resolve to continue
to be a community.

 if on the other hand you were expecting something for nothing,
without putting them under contract, and without paying them a single
cent, then yes, you could say 100% that you can have absolutely no
expectation that a community would be around or be reliable for the
duration that you would like them to be.


 And such language is completely uncalled for.

 in an informal setting, for emphasis i tend to make sparse use of
swearwords, just like any liverpudlian or mancunian would, in any
conversation.  they're intended both for emphasis as well as comical
and theatrical effect.  this, after all, is a public forum: a little
theatre and entertainment is... expected :)  in a business setting
however i would never use them unless something reaally bad happened.


   what you *should* be asking is, what's the lifetime of the A10s
 processor and can i buy as many as is needed, for as long as is
 needed.


 The lifetime is obviously limited.

 yes.


   also you should be asking can i get a replacement within the
 expected lifetime of the product i'm putting out the door?


 Replacing the processor in existing boards will require redesign

 in a single-board monolithic design: yes, you would be absolutely
right.  if that's something that the client is prepared to take into
consideration as part of the product lifetime and evolution, that's
great.

 in the special case of EOMA68, the answer is NO, the base board will
NOT require a redesign.  at all.  ever.  [caveat: as long as the
EOMA68 specification is adhered to, without deviation].  and the CPU
Cards will be available, and hardware-compatible,

 and possibly different software.

 in a single-board monolithic design: that would, if ARM SoCs were
exclusively picked, be pretty much limited to the kernel and the boot
system.  however, interfacing to the hardware would be where the
niggles start: GPIO will be radically different on a per-SoC basis (we
know this) and so

Re: Good ARM board for Debian?

2013-12-23 Thread Luke Kenneth Casson Leighton
On Mon, Dec 23, 2013 at 8:20 PM, Jerry Stuckle jstuc...@attglobal.net wrote:

   but even rockchip are not immune to the supernova SoC effect.  that
 amazing 28nm $12 quad-core SoC - which is only sold to clients with
 good engineering resources (of whom there are extremely few - tom
 cubie's team is one of them) or it's sold with full support services
 to a handful of chinese tablet-tablet-tablet-tablet makers who have
 proven that they can shift 100k units - will be viewed with increasing
 unease by potential ODMs because of its age [under 9 months!].


 Understand.  But my client isn't in the tablet market.  High speed, all
 kinds of extra features, etc. is not necessary.  All they need to do is
 collect data and make it available via the ethernet port.

 well, there are plenty of SoCs out there that do that: even the
STM32F207 can do ethernet... but it's a 120mhz ARM Cortex-M, has a
limited amount of on-board RAM and so can't run debian (or any
mainstream GNU/Linux OS for that matter).

 what i'm pointing out is that your client's requirements are
completely and utterly 100% mutually exclusively incompatible, and
that's really the end of the discussion.  also, i have to say that on
learning that you're a consultant, i'm reluctant to assist further
unless there is a chance of some direct or indirect financial
remuneration.  either in the form of product orders or in the form of
consultancy fees.

 apologies to those people on debian-arm, i believe this discussion
may have strayed beyond the boundaries of debian-arm: i've tried to
keep the discussion general however it is beginning to get into
specifics, so i won't take up people's time further on-list.  jerry, if
you'd like to discuss this further off-list i'm happy to do so as long
as it proves financially worthwhile, directly or indirectly, for me to
do so.

l.





Re: qemu status on ARM

2013-12-14 Thread Luke Kenneth Casson Leighton
tim i'm sure i saw some patches which add one of the other CPU types,
not just A15, a couple months back.  the work was, if i vaguely
remember, done by someone with an @redhat.com email address.

the clue that you're operating in emulated (non-VM) mode is this:

 kvm [1]: HYP mode not available

so it'll be slow until you find out what's going on here.
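
[a small hedged sketch, nothing qemu-specific: one quick way to check on the
host whether KVM acceleration is actually available before blaming qemu for
being slow.  the dmesg scan simply looks for the "HYP mode" line quoted
above.]

import os
import subprocess

if os.path.exists("/dev/kvm"):
    print("/dev/kvm present: kvm should be usable (check group permissions)")
else:
    print("/dev/kvm missing: qemu will fall back to pure emulation (TCG)")

log = subprocess.check_output(["dmesg"], universal_newlines=True)
for line in log.splitlines():
    if "HYP mode" in line:
        print("kernel says:", line.strip())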

l.





Re: qemu status on ARM

2013-12-14 Thread Luke Kenneth Casson Leighton
On Sat, Dec 14, 2013 at 4:51 PM, Tim Fletcher t...@night-shade.org.uk wrote:

 The boot log was from the VM booting, so that would be expected?

 http://www.spinics.net/lists/kvm-arm/msg02582.html

 right.  ok.  thought it was the host.  don't know why!

 --
 Tim Fletcher t...@night-shade.org.uk




