Re: [PATCH v2] char drivers: Ram oops/panic logger

2010-03-14 Thread Jamie Lokier
Marco Stornelli wrote:
> Il 13/03/2010 00:31, Jamie Lokier ha scritto:
> > That'd be fine if the kernel link scripts choose the address, as long
> > as it's consistent between different compiles and similar
> > configurations.  That'd be a bit simpler than the admin having to know
> > the memory map well enough to choose an address.
> > 
> > -- Jamie
> > 
> 
> I agree, but the bootloader should be aware of it. I mean, usually
> bootloaders reset the RAM at boot, so you have to tell the
> bootloader that you are using a piece of RAM as persistent RAM; for
> example, U-Boot has a specific option, CONFIG_PRAM. I don't know if
> the whole process can be completely transparent to the admin in all
> situations.

Sometimes you can't change the bootloader (they don't always come with
source code).  Or you could, but you don't want to risk it (there
isn't always a way to recover if you break it).

Obviously then the feature is only useful when the bootloader doesn't
clear all the RAM :-)

On slow boards in consumer devices, they sometimes avoid clearing the
RAM because that adds measurable boot time.

-- Jamie


Re: [PATCH v2] char drivers: Ram oops/panic logger

2010-03-12 Thread Jamie Lokier
Andrew Morton wrote:
> > I meant with the "classic" use of mtdoops, therefore with a flash
> > partition, without using MTD_RAM. Using MTD_RAM, it's more or less the
> > same thing, except for "where" you want to deploy the log. For
> > example: if your system has an NVRAM you can use it without
> > problem; you just need to specify the address of the NVRAM to the module.
> > Very simple. I think it's a small driver but very useful; feedback
> > from other embedded guys is welcome.
> 
> Seems sensible to me.  If you have a machine whose memory is persistent
> across reboots then you reserve an arbitrary 4k hunk of memory for
> collecting oops traces, yes?

Me too, I think it's a great idea which sounds simpler to use than MTD-RAM.

> What tools are used for displaying that memory on the next boot?  How
> do those tools distinguish between "valid oops trace" and "garbage
> because it was just powered on"?  A magic signature?
> 
> Should the kernel provide the 4k of memory rather than (or in addition
> to) requiring that the system administrator reserve it and tell the
> kernel about it?  That'd be a matter of creating a linker section which
> isn't cleared out by the startup code.

It's good if there's an option to make the location not vary between
kernels, and be known to the bootloader.

Then you can debug kernels which always crash during boot, by either
booting into another kernel which works and looking at the oops, or by
a bootloader command to dump it.

That'd be fine if the kernel link scripts choose the address, as long
as it's consistent between different compiles and similar
configurations.  That'd be a bit simpler than the admin having to know
the memory map well enough to choose an address.
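
On Andrew's magic-signature question: the boot-time check can be tiny.
A minimal sketch, assuming a ramoops-style record header (the layout,
names and CRC choice here are illustrative, not what any driver
actually uses):

#include <linux/crc32.h>
#include <linux/types.h>

#define OOPS_MAGIC 0x4f4f5053   /* "OOPS" */

struct oops_record {
    u32 magic;  /* distinguishes a record from power-on garbage */
    u32 len;    /* bytes of oops text following the header */
    u32 crc;    /* crc32 of the text, catches partial writes */
    char text[];
};

/* True if the reserved RAM holds a valid record from the last boot. */
static bool oops_record_valid(const struct oops_record *rec,
                              size_t area_size)
{
    if (rec->magic != OOPS_MAGIC)
        return false;
    if (rec->len > area_size - sizeof(*rec))
        return false;
    return crc32(0, rec->text, rec->len) == rec->crc;
}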

-- Jamie


Re: [PATCH 03/11] readahead: bump up the default readahead size

2010-02-11 Thread Jamie Lokier
Matt Mackall wrote:
> On Mon, 2010-02-08 at 21:46 +0800, Wu Fengguang wrote:
> > Chris,
> > 
> > Firstly inform the linux-embedded maintainers :)
> > 
> > I think it's a good suggestion to add a config option
> > (CONFIG_READAHEAD_SIZE). Will update the patch..
> 
> I don't have a strong opinion here beyond the nagging feeling that we
> should be using a per-bdev scaling window scheme rather than something
> static.

I agree with both.  100MB/s isn't typical on little devices, even if a
fast ATA disk is attached.  I've got something here where the ATA
interface itself (on a SoC) gets about 10MB/s max when doing nothing
else, or 4MB/s when talking to the network at the same time.
It's not a modern design, but you know, it's junk we try to use :-)

It sounds like a calculation based on throughput and seek time or IOP
rate, and maybe clamped if memory is small, would be good.

Is the window size something that could be meaningfully adjusted
according to live measurements?
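
To make the idea concrete, a back-of-envelope sketch (every name here
is invented): size the window so one extra seek costs roughly as much
as the readahead transfer, then clamp on small-memory machines.

static unsigned long ra_bytes(unsigned long long bytes_per_sec,
                              unsigned long seek_usec,
                              unsigned long mem_bytes)
{
    /* bytes the device can transfer in one seek time */
    unsigned long ra = bytes_per_sec * seek_usec / 1000000;

    /* clamp for small-memory systems, e.g. to 1/256 of RAM */
    if (ra > mem_bytes / 256)
        ra = mem_bytes / 256;
    return ra;
}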

-- Jamie





Re: [Celinux-dev] CELF Project Proposal- Refactoring Qi, lightweight bootloader

2009-12-29 Thread Jamie Lokier
Andy Green wrote:
> >No TCP/IP, no TFTP, not even BOOTP (but it's a nice bonus), no command
> >line interpreter (just a GPIO on board to boot into "unbrick me" mode
> >:-), and most strikingly _no_ flash driver for flash chip du jour.
> >
> >To flash it you send a kernel to boot from RAM which is capable of
> >flashing it.
> 
> Sorry I missed where this kernel appears from and the bootloader that 
> spawned it, since both could get trashed.

The kernel appeared from the network/serial/USB, as described.  It's
not on the device.  The point is you don't need a "spare" kernel for
unbrickability (and kernels are quite large); it's enough to be able
to send one in unbricking mode.

> That is actually a conundrum 
> on a lot of systems and some of the solutions (write-once backup 
> bootloader) in the long run lead to other issues.

I agree especially about the write-once backup bootloader.

> True SD Boot does truly deliver unbrickability if you are willing to 
> swap out or reformat the SD card.

With the disadvantage that you need the board space, connector cost
etc. for an SD card.  It's fine if you already have that.

I'd put it into a similar category to network/serial/USB: using
whatever your board provides for bare-bones communication, bootstrap a
kernel for more advanced stuff involving fancier drivers.

> >>>http://wiki.openmoko.org/wiki/Qi
> >>
> >>Looking at the screen shot there, you've got code to parse ext2 
> >>filesystems.
> >>What is your definition of "minimal"?
> >
> >Ew, ext2 doesn't even satisfy powerfail-during-kernel-upgrade safety.
> 
> It's just misleading (but accurate).  ext2 is the "lowest common 
> denominator" read-only parsing that actually supports ext3 and ext4 if 
> you are careful about the formatting options.  So the actual filesystem 
> is ext3 or ext4 typically (ext3 in GTA02 case), it's not that the 
> bootloader is mandating specifically ext2.

If it reads an ext3/4 filesystem using an ext2 read-only parser, then
it will not be powerfail-safe.  Partially written files and directory
updates will look like corruption on boot.

It's essential that it parses the journal as well.  It does not have
to commit the journal (which would need device writing ability), but
in that case it must allow journal blocks to supersede blocks from
elsewhere in the filesystem.
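
In pseudo-C the discipline is just a lookup in front of every block
read, assuming the journal has already been scanned into some map at
mount time (all names here are hypothetical):

/* Returns the journal's newer copy of fs_block, or 0 if the journal
 * holds no committed copy of that block. */
extern unsigned long jmap_lookup(unsigned long fs_block);
extern int dev_read_block(unsigned long block, void *buf);

static int fs_read_block(unsigned long fs_block, void *buf)
{
    unsigned long jblock = jmap_lookup(fs_block);

    /* A committed journal copy supersedes the in-place block. */
    return dev_read_block(jblock ? jblock : fs_block, buf);
}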

> >It's possible to boot without parsing filesystems, but that is one
> >rather nice feature, and with the right filesystems it can make system
> >updates powerfail-safe.
> 
> Bootloader is tricky, but actually on this iMX31 device Fedora is used, 
> yum update keeps the last 3 kernels around and our kernel package 
> follows that.  So it's possible to have backup kernels automatically 
> integrated into the bootloader and packaging system.

It's useless to have 3 kernels around if the directory containing them
looks corrupted due to not parsing the journal. :-)  Then you won't see
any of the kernels at boot time.

-- Jamie


Re: [Celinux-dev] CELF Project Proposal- Refactoring Qi, lightweight bootloader

2009-12-22 Thread Jamie Lokier
Rob Landley wrote:
> However, if that's your minimum then you can't use the bootloader to
> re-flash the device, which is kind of handy.  (It gives you an
> un-bricking fallback short of pulling out a jtag.)  But doing that
> requires things like a network driver, TCP/IP stack, tftp
> implementation, serial driver, command line interpreter, and so on.
> And of course code to erase and write flash blocks for your flash
> chip du jour, plus knowledge of the flash layout.  (In theory, said
> knowledge comes from parsing a device tree.)

What a lot of bloat you added for such a basic requirement. :-)
You don't need all that to unbrick.

It's enough to have a serial/USB/network driver (choose one),
obediently listen for a kernel to be sent for booting from RAM,
and boot it.  In the network case it can be a simple UDP protocol,
or even raw ethernet.

No TCP/IP, no TFTP, not even BOOTP (but it's a nice bonus), no command
line interpreter (just a GPIO on board to boot into "unbrick me" mode
:-), and most strikingly _no_ flash driver for flash chip du jour.

To flash it you send a kernel to boot from RAM which is capable of
flashing it.

> > http://wiki.openmoko.org/wiki/Qi
> 
> Looking at the screen shot there, you've got code to parse ext2 filesystems.  
> What is your definition of "minimal"?

Ew, ext2 doesn't even satisfy powerfail-during-kernel-upgrade safety.

I agree it does beg the question of what is "minimal".

The proposal did explain quite well what Qi aims for: not duplicating
lots of kernel drivers badly.  If it succeeds in the area of flash
writing, network drivers, network protocols and so on it would be no
bad thing.

One area for potential common ground among bootloaders could be to
share the code for parsing filesystems.  It'd be great to see that in
a library shared by GRUB, Qi, U-Boot and so on, as it's not device
specific at all and not particularly large, but not so trivial that
it's good to have lots of clones.

It's possible to boot without parsing filesystems, but that is one
rather nice feature, and with the right filesystems it can make system
updates powerfail-safe.

> Rationale for not providing a boot menu is you don't want to mess with video 
> init.  I don't think I've actually seen an embedded bootloader that messes 
> with video, they do serial console instead, and you have a screen shot of 
> serial console messages so apparently the serial driver part is there...

To put it in perspective, serial is usually quite simple.  Output-only
serial is even simpler, though :-)

-- Jamie


Re: [PATCH 7/7] printk: provide a filtering macro for printk

2009-09-02 Thread Jamie Lokier
Mike Frysinger wrote:
> it depends completely on how the macro is intended to be used.  if you
> want to maintain the "this macro has a return value", then you have to
> use ({...}).  if you want the macro to return a void, then you have to
> use do{...}while(0).

Actually no.  The difference is do {...} while(0) is a _statement_ and
cannot be used in an expression.  Whereas a void value can be used in
expressions like A?B:C, (A,B) and returned from a function, it just
has type void.
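
A self-contained illustration (GCC extensions; the macro names are
mine):

#include <stdio.h>

/* A statement expression ({ ... }) yields the value of its last
 * expression, so it can be used wherever an expression can. */
#define TWICE(x) ({ int _v = (x); _v * 2; })

/* do { ... } while (0) swallows a trailing semicolon nicely, but it
 * is a statement: it has no value at all, not even void. */
#define PRINT_TWICE(x) do { printf("%d\n", 2 * (x)); } while (0)

int main(void)
{
    int a = TWICE(3) + 1;   /* fine: used inside an expression */
    PRINT_TWICE(a);         /* fine as a standalone statement */
    /* int b = PRINT_TWICE(a); would not compile */
    return 0;
}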

-- Jamie


Re: [PATCH 7/7] printk: provide a filtering macro for printk

2009-09-02 Thread Jamie Lokier
Marc Andre Tanner wrote:
> Thanks, so if I understood it correctly this should be used like this:
> 
> #define PRINTK_FILTER(fmt) (                                             \
>     (((const char *)(fmt))[0] != '<' && CONFIG_PRINTK_VERBOSITY >= 4) || \
>     (((const char *)(fmt))[0] == '<' &&                                  \
>      ((const char *)(fmt))[1] <= *__stringify(CONFIG_PRINTK_VERBOSITY))  \
> )
> 
> #define printk(fmt, ...) ({                                              \
>     if (__builtin_constant_p(PRINTK_FILTER(fmt)) && PRINTK_FILTER(fmt))  \
>         printk((fmt), ##__VA_ARGS__);                                    \
> })
> 
> The sizeof check wouldn't be necessary. Is this correct?

Looks good, except that I think kernel style is to use "do {...} while
(0)" rather than "({ ... })".
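
For reference, a do-while version of the same wrapper might look like
this (just a sketch of the style, untested):

#define printk(fmt, ...)                                                 \
do {                                                                     \
    if (__builtin_constant_p(PRINTK_FILTER(fmt)) && PRINTK_FILTER(fmt))  \
        printk((fmt), ##__VA_ARGS__);                                    \
} while (0)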

-- Jamie


Re: [PATCH 7/7] printk: provide a filtering macro for printk

2009-09-01 Thread Jamie Lokier
Marc Andre Tanner wrote:
> + * The check with sizeof(void*) should make sure that we don't operate on
> + * pointers, which the compiler wouldn't be able to optimize out, but only
> + * on string constants.

Take a look at __builtin_constant_p in the GCC manual.

You'll probably find that wrapping the whole of the rest of the
expression (without the sizeof) in __builtin_constant_p is a good
way to know when to depend on the result of the expression.

-- Jamie


Re: 100Mbit ethernet performance on embedded devices

2009-08-19 Thread Jamie Lokier
Johannes Stezenbach wrote:
> a while ago I was working on a SoC with 200MHz ARM926EJ-S CPU
> and integrated 100Mbit ethernet core, connected on internal
> (fast) memory bus, with DMA.  With iperf I measured:
> 
>   TCP RX ~70Mbit/sec  (iperf -s on SoC, iperf -c on desktop PC)
>   TCP TX ~56Mbit/sec  (iperf -s on desktop PC, iperf -c on SoC)
> 
> The CPU load during the iperf test is around
> 1% user, 44% system, 4% irq, 48% softirq, with 7500 irqs/sec.
> 
> The kernel used in these measurements does not have iptables
> support; I think packet filtering would slow it down noticeably,
> but I didn't actually try.  The ethernet driver uses NAPI,
> but it doesn't seem to be a win judging from the irqs/sec number.

You should see far fewer interrupts if NAPI was working properly.
Rather than NAPI not being a win, it looks like it's not active at
all.

7500/sec is close to the packet rate for sending TCP with
full-size ethernet packets over a 100Mbit ethernet link.

> What I'm interested in are some numbers for similar hardware,
> to find out if my hardware and/or ethernet driver can be improved,
> or if the CPU will always be the limiting factor.

I have a SoC with a 166MHz ARMv4 (ARM7TDMI I think, but I'm not sure),
and an external RTL8139 100Mbit ethernet chip over the SoC's PCI bus.

It gets a little over 80Mbit/s actual data throughput in both
directions, running a simple FTP client.

> I'd also be interested to know if hardware checksumming
> support would improve throughput noticeably in such a system,
> or if it is only useful for 1Gbit and above.
> 
> Did anyone actually manage to get close to 100Mbit/sec
> with similar CPU resources?

Remember, the TCP throughput cannot reach 100Mbit/sec due to the
overhead of packet framing.  But it should be much closer to 100 than 70.

-- Jamie


Re: new ipdelay= option for faster netboot

2009-08-19 Thread Jamie Lokier
Tim Bird wrote:
> David Miller wrote:
> > From: Tim Bird 
> > Date: Mon, 17 Aug 2009 18:24:26 -0700
> >
> >> David Miller wrote:
> >>> I have card/switch combinations that take up to 10 seconds to
> >>> negotiate a proper link.
> >> What types of delays are these timeouts supposed to
> >> cover?
> >
> > The problem is that if you don't first give at least some time for the
> > link to come up, the remaining time it takes the link to come up will
> > end up chewing into the actual bootp/dhcp protocol timeouts.  And
> > that's what we're trying to avoid.
> 
> What link?  I'm not that familiar with networking.
> 
> Assuming I'm using ethernet, what link needs to come up?

When you plug an ethernet cable in, you may have noticed it takes a
short time before the signal light comes on.  That's negotiation time.
Some are slower than others, but none of them do it instantly.

> Is this something to do with power propagation to the
> physical wire?

Not really.

> Is there some MAC layer negotiation between the card and the switch?
> Is it the time for the switch to do speed detection?

Yes and yes.

> And, can any of this be more accurately determined
> or guessed-at with knowledge of the onboard hardware?
> Or is it dependent on external conditions?

It can be accurately determined with most cards (all modern ones)
because you get a notification when it's done, or you can poll the card.

That's why on the desktop it's able to detect when you plug in an
ethernet cable and start DHCP as soon as link negotiation is complete.

So the right thing to do, as David Miller suggested too, isn't a fixed
timeout.  It should wait for link state UP and then start DHCP
immediately.
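
For illustration, from userspace the link state can be polled through
the ethtool interface, something like this sketch (the in-kernel
ipconfig code would check the carrier state directly instead):

#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

/* Returns 0 once the PHY reports link up on ifname. */
static int wait_for_link(const char *ifname)
{
    struct ethtool_value ev = { .cmd = ETHTOOL_GLINK };
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0)
        return -1;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (void *)&ev;

    while (ioctl(fd, SIOCETHTOOL, &ifr) == 0 && !ev.data)
        usleep(100 * 1000);     /* poll every 100ms */

    close(fd);
    return ev.data ? 0 : -1;
}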

-- Jamie


Re: New fast(?)-boot results on ARM

2009-08-14 Thread Jamie Lokier
Zan Lynx wrote:
> Or maybe its cheap and slow flash. In that case I think your only hope 
> is to make all the code as small as possible and/or find a different 
> flash filesystem that does not have to read so much of the device to 
> mount. Perhaps use a read-only compressed filesystem for the system 
> binaries and reflash it for software upgrades. Only init and mount the 
> writable flash for user-storable data well after system boot has finished.

Fwiw, logfs claims to mount quickly, but I haven't heard much about it
in recent months and http://logfs.org/logfs/ implies it's not really
stable yet.  But maybe if you're working on a prototype that doesn't
matter so much.

-- Jamie


Re: flicker free booting

2009-08-01 Thread Jamie Lokier
Bill Gatliff wrote:
> Actually, I'd rather that drivers look at the hardware itself, and 
> verify that the configuration matches what the parameter specifies 
> before making changes.  That way you could use framebuffer= to 
> communicate the desired setup to the driver in cases where the hardware 
> wasn't already initialized.

It's a good idea when possible, but some hardware cannot be queried in
enough detail to confirm the framebuffer configuration - unless the
bootloader passes an extra parameter saying what it has done.

-- Jamie


Re: 2.6.27- Sending uevent from a driver

2009-07-03 Thread Jamie Lokier
Alan Cox wrote:
> > Or, maybe the userspace program can receive some sort of interrupt
> > from the TTY device when it is ready for I/O.
> > 
> > Perhaps there is another way to avoid continued polling?
> 
> TIOCMIWAIT ioctl for modem signals, and just using poll/select() on the
> tty for I/O.

Ah yes, the undocumented ioctl...  I knew there was one, just couldn't
find it in the man page just now.
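
For the archives, the basic usage pattern looks like this sketch (the
device path is just an example):

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>

int main(void)
{
    struct pollfd pfd;
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);

    if (fd < 0)
        return 1;

    /* Sleep until DCD or CTS changes state - no busy polling. */
    if (ioctl(fd, TIOCMIWAIT, TIOCM_CD | TIOCM_CTS) < 0)
        perror("TIOCMIWAIT");

    /* Then use poll() for ordinary I/O readiness. */
    pfd.fd = fd;
    pfd.events = POLLIN;
    if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN))
        printf("tty ready for reading\n");

    close(fd);
    return 0;
}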

-- Jamie


Re: [PATCH 00/14] Pramfs: Persistent and protected ram filesystem

2009-06-26 Thread Jamie Lokier
Marco Stornelli wrote:
> 2009/6/24 Jamie Lokier :
> > Marco wrote:
> >> > Second question: what happens if the system crashes _during_ a write
> >> > to a file.  Does it mean that file will fail its checksum when it's
> >> > read at the next boot?
> >> >
> >> > Maybe files aren't so important.  What about when you write a file,
> >> > and then rename it over an existing file to replace it (e.g. a
> >> > config file), and the system crashes _during_ the rename?  At the next
> >> > boot, is it guaranteed to see either the old or the new file, or can
> >> > the directory be corrupt / fail its checksum?
> >>
> >> First of all I have to explain better the current policy: the checksum
> >> works at inode and superblock level and currently there isn't a recovery
> >> function such as journaling. About the superblock it's easy to use a
> >> redundant policy to be more robust.
> >
> > To be honest, superblock robustness is less of a concern.  The real
> > concern is losing file or directory contents, so it can't be used to
> > store persistent configuration data, only debugging logs.
> >
> >> About the inode, at the moment when the checksum doesn't match, the
> >> inode is marked as bad by calling the function make_bad_inode().
> >
> > Let's see if I understand right.
> >
> > If it lose power when writing to a file, after boot the file is likely
> > to be marked bad and so return -EIO instead of any file contents?
> 
> Depends on the checksum. If you lose power before the checksum update
> of the inode, you'll have a bad inode and then an -EIO at the next access.
> 
> >
> > If it loses power when doing atomic rename (to replace config files,
> > for example), it's likely that the whole /pramfs/configs/ directory
> > will be corrupt, because the rename is writing to the directory inode,
> > so you lose access to all names in that directory?
> >
> > That sounds like it can't be used for persistent configuration data.
> 
> It's true, from this point of view there is currently a gap and it
> needs a bit of effort to resolve this problem.  From this point of
> view I'd like to point out that I know there were some aspects to
> study more deeply, so I'll need more than one review :) but since
> this fs had been abandoned since 2004 and had never been reviewed, it
> was important to do a serious review with the kernel community to
> understand all the problems.

That's reasonable.

What do you think of my suggestion to double-buffer writes using a
single fixed position block, as explained elsewhere in this thread?

It should give power-fail safety with very little code.  I don't
know how much it would slow down writing.  That probably depends on
whether it's the checksum which is slow (which only needs to be done
once when double-buffering), or the writing.

-- Jamie


Re: [PATCH 00/14] Pramfs: Persistent and protected ram filesystem

2009-06-24 Thread Jamie Lokier
Pavel Machek wrote:
> On Tue 2009-06-23 20:07:23, Marco wrote:
> > You talked about journaling. This scheme works well for a disk, but
> > what about a piece of RAM? What about a crazy kernel that writes into
> > that area due to a bug? Do you remember for example the e1000e bug? It's not
> 
> I believe you need both journaling *and* write protection. How do you
> handle power fault while writing data?

I think this is basically right.

Write protection for the crazy kernels, and journalling for
powerfail/crash during updates.

Journalling can be extremely simple.  It can be just one memory block
at a fixed location, double-buffering all writes.

Pramfs already has checksums, which makes that easier.  You just write
to the buffer area first, with checksum, then write to the final area.
Mount looks at the buffer area, and if the checksum is fine, copies
the contents to the destination block.

That's all it takes to be resistant against power failures and crashes
during writes.  Probably <100 lines of code.
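
A minimal sketch of that journal, to make it concrete (the layout,
names and helpers like block_addr()/checksum() are all illustrative):

/* Assumed helpers: address of a filesystem block, and a checksum. */
extern void *block_addr(u32 blocknr);
extern u32 checksum(const void *p, size_t len);

struct pram_journal {
    u32 dest;            /* destination block number */
    u32 csum;            /* checksum of data[] below */
    u8  data[BLOCK_SIZE];
};

static void pram_write_block(struct pram_journal *j, u32 blocknr,
                             const void *buf)
{
    /* 1. Commit the new contents to the buffer area first. */
    memcpy(j->data, buf, BLOCK_SIZE);
    j->dest = blocknr;
    j->csum = checksum(j->data, BLOCK_SIZE);

    /* 2. Only now touch the real block.  A crash between 1 and 2 is
     * repaired at mount; replay after 2 is a harmless copy of
     * identical data. */
    memcpy(block_addr(blocknr), buf, BLOCK_SIZE);
}

static void pram_mount_recover(struct pram_journal *j)
{
    if (checksum(j->data, BLOCK_SIZE) == j->csum)
        memcpy(block_addr(j->dest), j->data, BLOCK_SIZE);
}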

-- Jamie


Re: [PATCH 00/14] Pramfs: Persistent and protected ram filesystem

2009-06-24 Thread Jamie Lokier
Marco wrote:
> > Second question: what happens if the system crashes _during_ a write
> > to a file.  Does it mean that file will fail its checksum when it's
> > read at the next boot?
> > 
> > Maybe files aren't so important.  What about when you write a file,
> > and then rename it over an existing file to replace it (e.g. a
> > config file), and the system crashes _during_ the rename?  At the next
> > boot, is it guaranteed to see either the old or the new file, or can
> > the directory be corrupt / fail its checksum?
> 
> First of all I have to explain better the current policy: the checksum
> works at inode and superblock level and currently there isn't a recovery
> function such as journaling. About the superblock it's easy to use a
> redundant policy to be more robust.

To be honest, superblock robustness is less of a concern.  The real
concern is losing file or directory contents, so it can't be used to
store persistent configuration data, only debugging logs.

> About the inode, at the moment when the checksum doesn't match, the
> inode is marked as bad by calling the function make_bad_inode().

Let's see if I understand right.

If it loses power when writing to a file, after boot the file is likely
to be marked bad and so return -EIO instead of any file contents?

If it loses power when doing atomic rename (to replace config files,
for example), it's likely that the whole /pramfs/configs/ directory
will be corrupt, because the rename is writing to the directory inode,
so you lose access to all names in that directory?

That sounds like it can't be used for persistent configuration data.

If a directory is marked as bad, or a file-inode in it is marked bad,
can you even rmdir it to clean up and start again?

Thanks,
-- Jamie


Re: Representing Embedded Architectures at the Kernel Summit

2009-06-16 Thread Jamie Lokier
Grant Likely wrote:
> On Tue, Jun 16, 2009 at 12:18 PM, Jamie Lokier wrote:
> > Something which lets you specify a dependency in a one-line
> > MODULE_INIT_PREREQS() macro would be much nicer.
> 
> That would work for some cases, but a lot of cases the problem is not
> module init order, but rather driver the probe order.  ie. In the
> Ethernet case I was working on the mac cannot be opened before the
> phy_device is registered.  Or another example is GPIOs.  An GPIO
> driven SPI or MDIO bus cannot be probed before all the devices
> providing the GPIOs are probed (right now GPIOs are 'special' and
> probed early, but this is ugly and there is no reason it couldn't be
> handled within the Linux driver model.

Both of those cases look like a simple module init order problem.

I'm not seeing how it's a probe order problem.  Even if you can probe
PHYs independently first, how are they going to be bound to the
ethernet MACs, other than by dodgy defaults?

In any of your examples, is the "modprobe" symbol dependency order
insufficient, when they are external modules?

If the problem is simply that "modprobe" can calculate dependencies
but linked-in modules can't, maybe the solution is to use the symbolic
dependencies to calculate a linked-in driver initialisation order.

If it's a probe order problem, where there aren't symbolic
dependencies, then MODULE_PROVIDE("gpio") and MODULE_REQUIRE("gpio")
or something like that might handle those cases, except for tricky
ones like a GPIO-driven I2C bus which controls a GPIO chip.
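
To be clear, those macros don't exist anywhere; a rough sketch of how
the tags could be recorded (the section-based plumbing is entirely
hypothetical):

#define MODULE_PROVIDE(feature)                                 \
    static const char __provide_##feature[]                     \
    __attribute__((section(".modprovides"), used)) = #feature

#define MODULE_REQUIRE(feature)                                 \
    static const char __require_##feature[]                     \
    __attribute__((section(".modrequires"), used)) = #feature

/* A GPIO controller driver would say: */
MODULE_PROVIDE(gpio);

/* ...and a GPIO-bitbanged SPI or MDIO bus driver: */
MODULE_REQUIRE(gpio);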

-- Jamie


Re: Representing Embedded Architectures at the Kernel Summit

2009-06-16 Thread Jamie Lokier
Grant Likely wrote:
> http://patchwork.ozlabs.org/patch/24152/
> 
> I never actually pushed through and finished it because it turned out
> to be a non-issue for Ethernet devices in the end.  However, I can see
> the value.  With this approach, a driver can use a
> bus_register_notifier() variant without caring about the device
> registration order, and the drivers notifier callback will get called
> at the appropriate time.
> 
> In your example case I could see the framebuffer driver deferring the
> final part of its initialization until the needed i2c device shows up.

It makes drivers more complicated and larger than necessary, though,
having to add more functions and register more callbacks.

Something which lets you specify a dependency in a one-line
MODULE_INIT_PREREQS() macro would be much nicer.

I can see the callbacks being useful, and necessary, for drivers which
need to interleave their initialisation.  E.g. need to initialise an
external framebuffer chip before you can use it's I2C GPIOs, but then
need to initialise an I2C driver before you can read the framebuffer
parameters to finish setting up the framebuffer.

-- Jamie



Re: [PATCH 00/14] Pramfs: Persistent and protected ram filesystem

2009-06-16 Thread Jamie Lokier
Marco wrote:
> There's the checksum, but the most important feature of this fs is the
> write protection. The page table entries that map the
> backing-store RAM are normally marked read-only. Write operations into
> the filesystem temporarily mark the affected pages as writeable, the
> write operation is carried out with locks held, and then the pte is
> marked read-only again. This feature provides protection against
> filesystem corruption caused by errant writes into the RAM due to
> kernel bugs, for instance. I provided a test module for this. When the
> module is loaded it tries to do a dirty write into the superblock; at
> that point you should see an error on the write.

Ok.  Random question: does it work with NOMMU? :-)  (I'm biased, my
devices are NOMMU).

Second question: what happens if the system crashes _during_ a write
to a file.  Does it mean that file will fail its checksum when it's
read at the next boot?

Maybe files aren't so important.  What about when you write a file,
and then rename it over an existing file to replace it (e.g. a
config file), and the system crashes _during_ the rename?  At the next
boot, is it guaranteed to see either the old or the new file, or can
the directory be corrupt / fail its checksum?

> > As you say PRAMFS can work with special SRAMs needing memory
> > protection (and maybe cache coherence?).  If you mmap() a file, does
> > it need to use the page cache then?  If so, do you have issues with
> > coherency between mmap() and direct read/write?
> 
> See my response above about my concept of protection. However, mmap is
> a similar approach: I can "mmap" the SRAM and write my data into it,
> but I think the possibility of having a fs is great. We can use
> the device as a normal disk, i.e. we can use cp, mv and so on.

I meant when you mmap() a file on the filesystem, like you do when
running an executable, for example.  Does mmap() on a file work or is
it forbidden?  Just curious, I'd guess it's forbidden, and you
wouldn't want _direct_ mappings to the backing SRAM anyway so you can
keep those checksums up to date.

> >> On this point I'd like to hear other embedded guys.
> > 
> > As one, I'd like to say if it can checksum the RAM at boot as well,
> > then I might like to use a small one in ordinary SRAM (at a fixed
> > reserved address) for those occasions when a reboot happens
> > (intentional or not) and I'd like to pass a little data to the next
> > running kernel about why the reboot happened, without touching flash
> > every time.
> > 
> > -- Jamie
> 
> Yeah Jamie, the goal of this fs is exactly that!

Great :-)

-- Jamie


Re: [PATCH 00/14] Pramfs: Persistent and protected ram filesystem

2009-06-14 Thread Jamie Lokier
Marco wrote:
> Simply because the ramdisk was not designed to work in a persistent
> environment.

One thing with persistent RAM disks is you _really_ want it to be
robust if the system crashes for any reason while it is being
modified.  The last thing you want is to reboot, and find various
directories containing configuration files or application files have
been corrupted or disappeared as a side effect of writing something else.

That's one of the advantages of using a log-structured filesystem such
as Nilfs, JFFS2, Logfs, UBIFS, Btrfs, ext3, reiserfs, XFS or JFS on a
ramdisk :-)

Does PRAMFS have this kind of robustness?

> In addition this kind of filesystem has been designed to work not
> only with classic RAM. You can think of the situation where you have
> got an external SRAM with a battery, for example. With it you can
> "remap" in an easy way the SRAM. Moreover there's the issue of
> memory protection that this filesystem takes care of.
>
> > Why is an entire filesystem needed, instead of simply a block driver
> > if the ramdisk driver cannot be used?
>
> From the documentation: "A relatively
> straight-forward solution is to write a simple block driver for the
> non-volatile RAM, and mount over it any disk-based filesystem such
> as ext2/ext3, reiserfs, etc.  But the disk-based fs over
> non-volatile RAM block driver approach has some drawbacks:
>
> 1. Disk-based filesystems such as ext2/ext3 were designed for
> optimum performance on spinning disk media, so they implement
> features such as block groups, which attempts to group inode data
> into a contiguous set of data blocks to minimize disk seeking when
> accessing files. For RAM there is no such concern; a file's data
> blocks can be scattered throughout the media with no access speed
> penalty at all. So block groups in a filesystem mounted over RAM
> just adds unnecessary complexity. A better approach is to use a
> filesystem specifically tailored to RAM media which does away with
> these disk-based features.  This increases the efficient use of
> space on the media, i.e. more space is dedicated to actual file data
> storage and less to meta-data needed to maintain that file data.

All true, I agree.  RAM-based databases use different structures to
disk-based databases for the same reasons.

Isn't there any good RAM-based filesystem already?  Some of the flash
filesystems and Nilfs seem promising, using fake MTD with a small
erase size.  All are robust on crashes.

> 2. If the backing-store RAM is comparable in access speed to system
> memory, there's really no point in caching the file I/O data in the
> page cache.
>
>Better to move file data directly between the user buffers
>and the backing store RAM, i.e. use direct I/O. This prevents the
>unnecessary populating of the page cache with dirty pages.

Good idea.

>However direct I/O has to be enabled at every file open. To
>enable direct I/O at all times for all regular files requires
>either that applications be modified to include the O_DIRECT flag
>on all file opens, or that a new filesystem be used that always
>performs direct I/O by default."

There are other ways to include the O_DIRECT flag automatically.  A
generic mount option would be enough.  I've seen other OSes with such
an option.  The code for that would be tiny.

But standard O_DIRECT direct I/O doesn't work for all applications: it
has to be aligned: device offset, application memory address and size
all have to be aligned.

(It would be a nice touch to produce a generic mount option
o_direct_when_possible, which turns on direct I/O but permits
unaligned I/O.  That could be used with all applications.)
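
To show what "aligned" means in practice, a sketch (512 is an
assumption here; real code should query the device's block size):

#define _GNU_SOURCE        /* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

static int read_direct(const char *path, off_t offset, size_t len)
{
    void *buf;
    ssize_t n;
    int fd = open(path, O_RDONLY | O_DIRECT);

    if (fd < 0)
        return -1;
    /* The buffer address must be aligned... */
    if (posix_memalign(&buf, 512, len)) {
        close(fd);
        return -1;
    }
    /* ...and offset and len must be multiples of 512 too, otherwise
     * pread() fails with EINVAL.  This is what breaks unmodified
     * applications. */
    n = pread(fd, buf, len, offset);

    free(buf);
    close(fd);
    return n < 0 ? -1 : 0;
}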

As you say PRAMFS can work with special SRAMs needing memory
protection (and maybe cache coherence?).  If you mmap() a file, does it
need to use the page cache then?  If so, do you have issues with
coherency between mmap() and direct read/write?

> On this point I'd like to hear other embedded guys.

As one, I'd like to say if it can checksum the RAM at boot as well,
then I might like to use a small one in ordinary SRAM (at a fixed
reserved address) for those occasions when a reboot happens
(intentional or not) and I'd like to pass a little data to the next
running kernel about why the reboot happened, without touching flash
every time.

-- Jamie


Re: Kernel crashing and log buffers...

2009-06-13 Thread Jamie Lokier
Robin Getz wrote:
>  - late crash analysis - Sometimes - even after the kernel is up
>and running properly - a device driver does a bad thing. 
>Normally - complete kernel dumps can come out a serial
>console, and you can use your favourite serial application
>to scroll back and determine what is going on.
> 
>In many embedded devices - this is not a possibility, since
>all the serial ports are in use (Irda, Bluetooth, GPS, etc).

There's another thing with serial ports:

In embedded devices, I've seen serial drivers hooked up to the console
output either synchronously or asynchronously.

Synchronously, every printk() waits to be sent to the serial port, and
this slows the kernel down in normal operation, and especially boot
time.

Asynchronously, every printk() goes into the log buffer without delay,
and is sent over the serial port as fast as the port can take it.  It
doesn't slow the kernel down much.

When it's done asynchronously, if the kernel crashes you don't always
get the last output prior to the crash, even with a serial terminal
listening in your lab.  But if you do it synchronously, it slows
things down.

-- Jamie


Re: [PATCH 00/14] Pramfs: Persistent and protected ram filesystem

2009-06-13 Thread Jamie Lokier
Marco wrote:
> Linux traditionally had no support for a persistent, non-volatile
> RAM-based filesystem, persistent meaning the filesystem survives a
> system reboot or power cycle intact. The RAM-based filesystems such as
> tmpfs and ramfs have no actual backing store but exist entirely in the
> page and buffer caches, hence the filesystem disappears after a system
> reboot or power cycle.

Why is a ramdisk not sufficient for this?

Why is an entire filesystem needed, instead of simply a block driver
if the ramdisk driver cannot be used?

It just struck me as a lot of code which might be completely
unnecessary for the desired functionality.

-- Jamie


Re: Can I run an application compiled with gcc ABI 2.95 on a kernel compiled with gcc ABI 3.4?

2009-05-15 Thread Jamie Lokier
Jamie Lokier wrote:
> Structure packing: Isn't that basically the same set of fixups that
> need to be done for 32-bit compatibility on 64-bit kernels?  Could it
> even use the same code - sneakily replacing "32" with OABI and "64"
> with EABI?

On second thoughts, I guess there may be a few fixups in common but
not much.

-- Jamie


Re: Can I run an application compiled with gcc ABI 2.95 on a kernel compiled with gcc ABI 3.4?

2009-05-15 Thread Jamie Lokier
George G. Davis wrote:
> On Fri, May 15, 2009 at 02:55:57PM +0100, Ben Dooks wrote:
> > On Fri, May 15, 2009 at 02:51:05PM +0100, Jamie Lokier wrote:
> > > Eek, can you say a bit more about the ARM EABI mismatch?
> > > 
> > > I would like to run a shiny modern ARM EABI kernel and userspace, but
> > > also need to run one or two OABI binaries (from the gcc 2.95 era) on
> > > the same kernel which I cannot recompile because they're built with
> > > closed source libraries only supplied as OABI.
> > > 
> > > Does that not work at all?
> > 
> > There are a few ioctl() incompatibilities between the two ABIs, the
> > main problems are within the ALSA API. Mostly it will work, but there
> > are a couple of caveats.
> 
> Right, you can run ARM OABI binaries on an ARM eABI kernel by enabling
> OABI_COMPAT.  However, as Ben notes, there are (more than, IMNSHO ; )
> "a couple of caveats".  Most of the "easy" ABI compatibility fixups
> should be handled already via OABI_COMPAT.  However, it's practically
> impossible to fixup all OABI/eABI compatibility issues due to register
> assignment, parameter alignment and/or packing differences between
> the two ABIs.  You would have to analyze all kernel and driver
> user interfaces to reassign parameters to registers, align and/or
> repack data structures, etc.  In fact, some of the existing fixups
> include side effects that in some cases can cause userspace code to
> fail, depending on how it is using I/O parameters, e.g. in some cases,
> library code may try to validate parameters which are relocated and
> those tests fail due to reshuffling of parameters.  It's a nasty
> path to go down, quite frankly. I would not recommend trying to
> support OABI binaries on an eABI kernel using OABI_COMPAT.

Structure packing: Isn't that basically the same set of fixups that
need to be done for 32-bit compatibility on 64-bit kernels?  Could it
even use the same code - sneakily replacing "32" with OABI and "64"
with EABI?

Register/parameter assignment: How is that relevant to the kernel
interface, if the kernel itself and modules are all EABI?  The system
call interface is a fixed set of registers.

It sounds like you're saying I should use OABI kernels and userspace
even with latest kernels, if I have a single OABI binary that might
use anything interesting from the kernel, like readdir, poll, signal
context, ioctl, device read/write, or any other system calls which
take a struct that isn't all 32-bit words.

-- Jamie


Re: Can I run an application compiled with gcc ABI 2.95 on a kernel compiled with gcc ABI 3.4?

2009-05-15 Thread Jamie Lokier
David Woodhouse wrote:
> On Fri, 2009-05-15 at 13:50 +0200, muzu...@gmx.net wrote:
> > Questions:
> > - Can I run an application compiled with gcc ABI 2.95 on a kernel
> > compiled with gcc ABI 3.4?
> 
> Yes. The kernel ABI never changes in a backward-incompatible fashion.
> Unless you try switching your kernel to ARM EABI and don't update
> userspace to match -- don't do that :)

Eek, can you say a bit more about the ARM EABI mismatch?

I would like to run a shiny modern ARM EABI kernel and userspace, but
also need to run one or two OABI binaries (from the gcc 2.95 era) on
the same kernel which I cannot recompile because they're built with
closed source libraries only supplied as OABI.

Does that not work at all?

If it does work, which part of userspace must be EABI to work?

Thanks,
-- Jamie


Re: Wait for console to become available, v3.2

2009-04-26 Thread Jamie Lokier
Kay Sievers wrote:
> On Mon, Apr 27, 2009 at 01:12, Jamie Lokier  wrote:
> > Kay Sievers wrote:
> >> > _If_ the system doesn't wait for all block devices present at boot to
> >> > be enumerated before the boot script, then when the script looks in
> >> > that directory for a specific UUID, it would be good to wait until
> >> > "has everything present at boot been enumerated?" says yes.
> >>
> >> That's what distros do with initramfs today.
> >
> > I don't see how that's possible.  Haven't we been discussing how the
> > USB driver does not have any support (yet) for saying when it's found
> > every device present at boot time, and that it can probably be added?
> 
> It's easy for the rootfs, or any other mandatory filesystem, unlike it
> is for the console stuff.
> 
> There is no timeout, distro's initramfs waits for the specified root
> device until it appears. It's simple, there is nothing else to do for
> it. It does not care what bus it is, or who is probing what for how
> long. It loads all drivers it finds hardware for, and then spins until
> the device shows up and continues.

Ok.  I'm specifically _not_ talking about mandatory filesystems.
Thanks :-)

-- Jamie


Re: Wait for console to become available, v3.2

2009-04-26 Thread Jamie Lokier
Kay Sievers wrote:
> > _If_ the system doesn't wait for all block devices present at boot to
> > be enumerated before the boot script, then when the script looks in
> > that directory for a specific UUID, it would be good to wait until
> > "has everything present at boot been enumerated?" says yes.
> 
> That's what distros do with initramfs today.

I don't see how that's possible.  Haven't we been discussing how the
USB driver does not have any support (yet) for saying when it's found
every device present at boot time, and that it can probably be added?

I have a boot script which does this:

1. *If* USB disk plugged in, mount it on /mnt/usb, do some stuff.
2. If not, do something else.

If I'm not mistaken, the only way to do that right now is pick an
arbitrary time, wait, and proceed with 2 if you didn't see the disk.

The desired USB console behaviour is very similar.  Use it if it's
plugged in at boot, use something else otherwise, and no arbitrary
delays.

Did I miss something?

> > Since this is all about making boot faster, it would be quite nice not
> > to wait for all block devices before starting the boot script, or at
> > least the initramfs module-loading script :-)
> 
> You wait in initramfs until the device show up. Usually dynamic udev
> rules created from the root= string of the kernel commandline create a
> rule which matches on the UUID, and creates a /dev/root symlink when
> the device shows up. As soon as this link shows up,  init mounts it
> and goes ahead.

That's fine when you're waiting for a device to appear that you know
will appear.

The USB console excitement, and my mention of USB disks and such, is
how to decide when the device _isn't_ going to appear, without an
arbitrary timeout.

-- Jamie


Re: Wait for console to become available, v3.2

2009-04-26 Thread Jamie Lokier
Alan Stern wrote:
> On Sun, 26 Apr 2009, Jamie Lokier wrote:
> 
> > > Are you suggesting this new interface be exported to userspace somehow?
> > 
> > Not directly.  Only in the same way that open("/dev/console") delays
> > until there's a console, so reading the keyboard can delay until we
> > know if we had a keyboard plugged in at boot, and looking for a disk
> > called UUID=392852908752345749857 can wait until we know if there was
> > one plugged in at boot time.
> > 
> > The latter issue with UUID is done in userspace now by reading all
> > disks, but I'm under the impression changes are planned in that
> > department because reading every disk from userspace to locate
> > specific ones is too slow on big systems.
> 
> IIUC, David is proposing that no userspace process should get started
> until some (all?) of the console devices and the block device
> containing the root filesystem (all block devices present at boot?)
> have been registered.  That would remove the need for your delays, at 
> least in part.
> 
> As for searching for a particular UUID, I believe recent changes to 
> sysfs/udev should improve the situation.  There will be a "by_UUID" 
> directory somewhere, containing a bunch of symbolic links whose names 
> are the UUID values of all the registered drives.  Programs won't have 
> to read every disk; they'll only have to search through this directory.

That will be great.

_If_ the system doesn't wait for all block devices present at boot to
be enumerated before the boot script, then when the script looks in
that directory for a specific UUID, it would be good to wait until 
"has everything present at boot been enumerated?" says yes.

Otherwise you have hacks like my boot script which waits 5 seconds for
a disk to show up on USB, and then continues if not.  It sounds
awfully like waiting X seconds for a USB console to show up :-)

Since this is all about making boot faster, it would be quite nice not
to wait for all block devices before starting the boot script, or at
least the initramfs module-loading script :-)

That's outside the scope of bootconsole and root device.  I certainly
won't demand it's done, but I'm sure you can see a strong similarity:
the ability to wait in the kernel until a class of devices that were
present at boot have finished enumerating.

-- Jamie


Re: Wait for console to become available, v3.2

2009-04-26 Thread Jamie Lokier
Alan Stern wrote:
> On Sat, 25 Apr 2009, Jamie Lokier wrote:
> 
> > I'm thinking this broader use:
> > 
> > - My boot _script_ is waiting for a disk which identifies as
> >   UUID=392852908752345749857 to appear before it can mount it on
> >   /data.  If there's no such disk, it proceeds without it.  It's a
> >   USB disk, behind a USB hub.
> > 
> > - My boot script is looking to see if I'm holding down 'z' on the
> >   keyboard, to do something different.  But how does it know if
> >   there's a USB keyboard plugged in (behind two USB hubs) that
> >   hasn't finished being detected?
> > 
> > It just seemed to fit comfortably into what's being discussed.
> 
> Are you suggesting this new interface be exported to userspace somehow?

Not directly.  Only in the same way that open("/dev/console") delays
until there's a console, so reading the keyboard can delay until we
know if we had a keyboard plugged in at boot, and looking for a disk
called UUID=392852908752345749857 can wait until we know if there was
one plugged in at boot time.

The latter issue with UUID is done in userspace now by reading all
disks, but I'm under the impression changes are planned in that
department because reading every disk from userspace to locate
specific ones is too slow on big systems.

-- Jamie


Re: Wait for console to become available, v3.2

2009-04-24 Thread Jamie Lokier
David VomLehn wrote:
> I think this is over-engineered. This focused on boot devices, so you really
> don't care about things like buses, and I don't perceive a broader use. What
> really matters is particular boot device types, wherever they came from.

I'm thinking this broader use:

- My boot _script_ is waiting for a disk which identifies as
  UUID=392852908752345749857 to appear before it can mount it on
  /data.  If there's no such disk, it proceeds without it.  It's a
  USB disk, behind a USB hub.

- My boot script is looking to see if I'm holding down 'z' on the
  keyboard, to do something different.  But how does it know if
  there's a USB keyboard plugged in (behind two USB hubs) that
  hasn't finished being detected?

It just seemed to fit comfortably into what's being discussed.

(I do have a system with these requirements, by the way.  It's
solved at the moment by waiting 5 seconds after booting, and by using
an older kernel which doesn't have boot parallelisation yet...)

There was a thread about BTRFS wanting to match up multiple disks
being scanned with volume ids some months ago, which might have
similar requirements, I'm not sure.

> I've been thinking about the issue of handling device classes because, as you
> clearly understand, distinguishing between them can give you finer granularity
> during boot initialization. There are really three possible steps:
> 1. Discover a device exists.
> 2. Discover the device type
> 3. Completion of the probe function for the device.

Yes.

> The existing code is great if the interval between 1 and 2, or 2 and 3, is
> nearly zero. In the first case, you do nothing at step 1 and at step 2 you
> indicate that a boot device of the given type is found. In the second case,
> you indicate that a device of unknown type was found (passing
> BOOTDEV_ANY_MASK) at step 1, ignore the information at step 2, and report
> completion of the probe for a generic device type at step 3 (again passing
> BOOTDEV_ANY_MASK).

Yes.

> There is one additional possibility, that there is a significant
> amount of time that passes between steps 1, 2, and 3. The existing
> interfaces already handle that, but I'm thinking a clearer interface
> is in order. The key is that, when you indicate a possible boot
> device was found, and when you indicate the completion of probing,
> you are actually passing a mask of boot device types.

This too, yes.

> Say that the device is actually a console, my favorite example. In
> this case, you'd pass BOOTDEV_ANY_MASK to bootdev_found at step 1,
> indicating that you don't really know the device type. This
> increments the pending count for all boot device types. At step 2,
> you find out you have a console, so you pass BOOTDEV_ANY_MASK &
> ~BOOTDEV_CONSOLE_MASK to bootdev_probe_done. This decrements the
> pending count for all device types except consoles. Then, at step 3,
> you call bootdev_probe_done with BOOTDEV_CONSOLE_MASK. Which
> decrements the pending count for console devices and wakes up any
> waiters.

Only one problem I see: what happens when there's an attempt to open
/dev/console before you increment the pending count?  It seems to me
you have to wait for all buses to have been detected, which is why I
mentioned buses, as some buses are _themselves_ slow devices to detect.

> The key question is, are there cases where there is enough time between steps
> 1 and 2, and steps 2 and 3, to add this complexity? If not, let's skip it.

The time between enumerating that a USB device exists and what its
class is (could it be a console?), and actually initialising the device
to find out if it's then usable, including loading firmware, can be a
little while.

I don't know if the times are long enough to matter.

Possibly related to all this: it would be really nice if the rather
slow ATA probe didn't have to delay boot scripts unless they depend on
the not-yet-probed disks, as sometimes they might not.

-- Jamie
 


Re: Wait for console to become available, v3.2

2009-04-24 Thread Jamie Lokier
David VomLehn wrote:
> > This looks like a good plan and not hard to implement.  It even should 
> > be possible to fit USB disk drives into the scheme.
> 
> That would definitely rock. 

How about this, perhaps in the generic device model:

  1. Whenever a device's existence is detected by its parent bus,
 add the device id to a pool of "potential devices worth waiting for".

  2. Whenever a device is registered, remove that device id from the pool.

  3. Whenever a device is itself a bus, or has subdevices or
 attributes to be discovered, it triggers step 1 for all devices
 found by enumeration (or in the case of USB, whatever you have to
 wait for).  Then the bus can declare itself initialised.
 
  4. The top-level enumeration behaves as though there was a root bus,
 onto which the real buses like PCI etc. are attached as in step 3.

  5. Waiting for console / boot device / userspace waiting for other
 mount points all check this pool for device ids of matching type.

In this, the pool serves the same role as Alan Stern's global counter,
the difference being you can wait for particular types of device when
you need to, and this is more explicit about how a hierarchy is handled.

Device ids in this pool are simply "category" values for what the
device is relevant to - and a waitqueue.  If a PCI device is a serial
port, it goes into category "serial port", because it's relevant if
serial console is requested on the boot command line.

When waiting for a newly powered USB bus to settle, you may get
notification of all devices on it, but you might not know enough about
each device until the individual drivers start fetching descriptors.
Then you can either make every device go temporarily into the pool,
much as if it were a little bus itself, until it has detailed
information about what type of device it is.  Or you can wait until
all those devices have fetched descriptors before the USB bus declares
that its enumeration is complete and removes its own id.
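
In code, the pool might be no more than a per-category count and a
waitqueue - a sketch only, every name below is made up:

#include <linux/spinlock.h>
#include <linux/wait.h>

enum pool_category { POOL_CONSOLE, POOL_DISK, POOL_NCATEGORIES };

static DEFINE_SPINLOCK(pool_lock);
static unsigned int pool_pending[POOL_NCATEGORIES];
static DECLARE_WAIT_QUEUE_HEAD(pool_waitq);

/* Steps 1, 3, 4: a bus (or sub-bus) detects something worth waiting for. */
void pool_add(enum pool_category cat)
{
    spin_lock(&pool_lock);
    pool_pending[cat]++;
    spin_unlock(&pool_lock);
}

/* Step 2: the device registered, or turned out to be irrelevant. */
void pool_remove(enum pool_category cat)
{
    spin_lock(&pool_lock);
    if (--pool_pending[cat] == 0)
        wake_up_all(&pool_waitq);
    spin_unlock(&pool_lock);
}

/* Step 5: block until no potential device of this category remains. */
void pool_wait_for(enum pool_category cat)
{
    wait_event(pool_waitq, pool_pending[cat] == 0);
}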

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Wait for console to become available, v3.2

2009-04-22 Thread Jamie Lokier
Alan Cox wrote:
> > It would be a sensible though boring cleanup to change all the "VC"
> > (virtual console) stuff to use the name "VT" (virtual terminal)
> > consistently.
> 
> Then we would have two completely different meanings for virtual terminal
> in the kernel.

As opposed to three meanings for console - causing a bit of confusion
you had to point out to Linus earlier :-)

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Wait for console to become available, v3.2

2009-04-22 Thread Jamie Lokier
Linus Torvalds wrote:
> [ Actually, looking closer we should not use that particular name: we 
>   already have something called a "console_driver" which is really the 
>   "current VT" driver.

Speaking of names... The word "console" is used to mean three
different things in the kernel.  Last time I had to understand the
console code (because it wasn't showing on some device) that didn't
help.  But it was a 2.4 kernel 2 years ago so I didn't submit a patch :-)

It would be a sensible though boring cleanup to change all the "VC"
(virtual console) stuff to use the name "VT" (virtual terminal)
consistently.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Wait for console to become available, v3.2

2009-04-22 Thread Jamie Lokier
Alan Cox wrote:
> To start with there is no reason that the
> USB console can't implement a "maybe we have hardware, maybe I buffer 64K
> until it shows up" behaviour and bind to hardware as and when USB serial
> devices get inserted.

Ah, that doesn't work, when you want to use the USB serial console
_if_ there's a USB serial adapter plugged in, and a different (lower
priority) console if it's not.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Wait for console to become available, v3.2

2009-04-21 Thread Jamie Lokier
Ingo Molnar wrote:
> * Arjan van de Ven  wrote:
> > But more importantly... USB *CANNOT* do this fundamental 
> > operation. USB does not have the capability to know when you have 
> > seen all devices that are connected. Devices just show up a random 
> > amount of time after you turn the power on for the bus; there's 
> > no "and now we've seen all" operation.
> 
> Yes - and this is fundamentally true of any hotplug bus design.

It's not fundamental, for devices you know are plugged in at boot.
All it takes is for the bus to support a synchronous "enumerate all"
procedure.  That _could_ involve a timeout, but there are better ways.
But not for USB.

> What i'm saying is: instead of "wait 2000 msecs, maybe it works out" 
> hack, there should be a proper sleep+event based approach to the 
> same. With perhaps a _timeout_ for the "no console arrived" negative 
> case as well. (which timeout can be set to zero in the "I _know_ 
> there's no console around".)

Isn't the proposed code doing exactly that?  It sleeps waiting for a
console, with a timeout of X msecs.  If a console arrives before that,
doesn't it wake immediately?

> Do you see the fundamental design difference between the two 
> solutions?

The only difference I see is: If you asked for a USB console, the
timeout should begin relative to when the USB host driver kicks the
bus, not the first /dev/console open.

Conceptually the timeout is a USB characteristic, not a generic
console one.

For other (non-USB) devices which might take a while to initialise, it
should work as you describe.  E.g. an ethernet TCP console should
register ("i'm trying to give you a console") followed by ("got it")
or ("failed, carry on without me").

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Sources of entropy?

2009-03-26 Thread Jamie Lokier
Robin Getz wrote:
> I'm just wondering what people are using on standard embedded/headless/diskless 
> targets (which do not have hw random number generators) as a source of 
> entropy - since networking was removed as an entropy source circa 2.6.26

You might not have much real entropy to use.  I guess networking was
removed because it's an obvious attack vector.

On my devices, I save the entropy pool to flash on shutdown and merge
it back on reboot.  This lets cumulative history build.

[shutdown]
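# Save one 512-byte chunk of pool output via a temp file, so a badly
# timed power cut can't leave a truncated entropy file behind.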
dd if=/dev/urandom of=$ENTROPY_STORE.new bs=512 count=1 2>/dev/null \
   && mv $ENTROPY_STORE.new $ENTROPY_STORE \
   || rm -f $ENTROPY_STORE.new

[boot]
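# Mix the saved bytes back into the pool (a plain write to /dev/random
# never credits the entropy estimate - see below).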
dd if=$ENTROPY_STORE of=/dev/random bs=512 2>/dev/null

You'll still drain the pool quickly so may need to use /dev/urandom
for everything (e.g. by linking /dev/random -> /dev/urandom), but
keeping history does mean you get more real entropy from /dev/urandom,
even though entropy_avail cannot estimate it (and the lower bound is
still zero, if what you did before has always been predictable).

> I have seen rngd, clrngd, audio_entropyd, & video_entropyd - but I was just 
> wondering what others were actually using. (I was cautioned that everything 
> was pretty CPU intensive, since they all do FIPS testing to ensure 
> randomness)...

You can write anything you think is an entropy source to /dev/random,
and it won't increase the entropy estimate but it will increase real
entropy if your source has any.  So you could add low-order bits from
high-resolution timing data from your network application from time to
time, for example, if you think it's worth it.
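
For instance, something like this - untested, and note that a plain
write() deliberately never credits the estimate, it only mixes:

#include <fcntl.h>
#include <time.h>
#include <unistd.h>

static void stir_pool(void)
{
    struct timespec ts;
    int fd = open("/dev/random", O_WRONLY);

    if (fd < 0)
        return;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    /* Only the jittery low-order nanosecond bits are unpredictable. */
    write(fd, &ts.tv_nsec, sizeof ts.tv_nsec);
    close(fd);
}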

That won't make /dev/random show confirmed non-zero entropy, but that
might not be feasible on your device anyway.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: PATCH [0/3]: Simplify the kernel build by removing perl.

2009-01-17 Thread Jamie Lokier
Rob Landley wrote:
> On Friday 16 January 2009 08:54:42 valdis.kletni...@vt.edu wrote:
> > On Fri, 16 Jan 2009 00:11:09 CST, Rob Landley said:
> > > P.S.  I still hope autoconf dies off and the world wakes up and moves
> > > away from that.  And from makefiles for that matter.  But in the
> > > meantime, I can work around it with enough effort.
> >
> > What do you propose autoconf and makefiles get replaced by?
> 

> I've never built pidgin from source, but I've got the output of the binutils 
> build in a log file. 
> How many of these tests are actually necessary on a Linux system:

None, but then it's not a Linux-only program that you're compiling.
(Nor is it Linux-in-2009-only).

If you _know_ you're running on Linux from a particular era, you can
provide a config.cache file with the correct answers already filled in.

I agree that Autoconf sucks (I've written enough sucking Autoconf
macros myself, I hate it), but the tough part is providing a suitable
replacement when you still want portable source code.

> It just goes on and on and on like this.  Tests like "checking
> whether byte ordering is bigendian... no" mean "Either I didn't
> know endian.h existed, or I don't trust it to be there".  How about
> the long stretches checking for the existence of header files
> specified by posix?

You seem to be arguing for "let's make all our programs Linux-specific
(and Glibc-specific in many cases)".  Given all the problems you've
seen with cross-compiling, let alone compiling for different OS
platforms, that seems a little odd.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: PATCH [0/3]: Simplify the kernel build by removing perl.

2009-01-15 Thread Jamie Lokier
Pádraig Brady wrote:
> > The $(( ... )) construct is standard POSIX shell syntax, see
> > http://www.opengroup.org/onlinepubs/95399/utilities/xcu_chap02.html#tag_02_06_04
> > 
> > Bash supports $[ ... ] as an alternate syntax for the same thing.
> > Perhaps you were thinking of that.
> 
> I think the misconception that $(( ... )) is a bashism is caused by
> the wrong highlighting defaults chosen by vim.

I think the misconception is because traditional unix bourne shells
don't implement that construct.  I just tried it on a few machines,
and it failed on 4 of them.  Admittedly, the only up to date one is
running Solaris 10; the others are older unixes that you're unlikely
to build Linux on.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: PATCH [0/3]: Simplify the kernel build by removing perl.

2009-01-13 Thread Jamie Lokier
Paul Mundt wrote:
> This happens in a lot of places, like embedded gentoo ports, where almost
> all of the work is sent across distcc to a cross-compilation machine. In
> systems that use package management, it is done on the host through
> emulation, or painfully cross-compiled.

Ah yes, I remember using embedded Gentoo.

95% of the time in ./configure scripts, 5% in compilations.

And this is on x86!  I dread to think how slow it gets on something
slow.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 1/3]: Replace kernel/timeconst.pl with kernel/timeconst.sh

2009-01-05 Thread Jamie Lokier
Bernd Petrovitsch wrote:
> I assume that the NFS-mounted root filesystem is a real distribution.

Not unless you call uClinux (MMU-less) a real distribution, no.

> > (* - No MMU on some ARMs, but I'm working on ARM FDPIC-ELF to add
> >  proper shared libs.  Feel free to fund this :-)
> 
> The above mentioned ARMs have a MMU. Without MMU, it would be truly
> insane IMHO.

We have similar cross-build issues without MMUs... I.e. that a lot of
useful packages don't cross-build properly (including many which use
Autoconf), and it might be easier to make a native build environment
than to debug and patch all the broken-for-cross-build packages.
Especially as sometimes they build, but fail at run-time in some
conditions.

But you're right it's probably insane to try.  I haven't dared as I
suspect GCC and/or Binutils would break too :-)

I'm sticking instead with "oh well cross-build a few packages by hand
and just don't even _try_ to use most of the handy software out there".

You mentioned ARM Debian.  According to
http://wiki.debian.org/ArmEabiPort one recommended method of
bootstrapping it is building natively on an emulated ARM, because
cross-building is fragile.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 1/3]: Replace kernel/timeconst.pl with kernel/timeconst.sh

2009-01-04 Thread Jamie Lokier
Bernd Petrovitsch wrote:
> > (I have 850 Linux boxes on my network with a bourne shell which
> > doesn't do $((...)).  I won't be building kernels on them though :-)
> 
> Believe it or not, but there are folks out there who build the firmware
> on ARM 200 MHz NFS-mounted systems natively  (and not simply
> cross-compile it on a 2GHz PC).

Really?

My 850 Linux boxes are 166MHz ARMs and occasionally NFS-mounted.
Their /bin/sh does not do $((...)), and Bash is not there at all.

If I were installing GCC natively on them, I'd install GNU Make and a
proper shell while I were at it.  But I don't know if Bash works
properly without fork()* - or even if GCC does :-)

Perl might be hard, as shared libraries aren't supported by the
toolchain which targets my ARMs* and Perl likes its loadable modules.

I'm not sure why I would want to build a kernel on these devices.

But I see why people with mobile ARM devices like gphones might
want to, when they're out travelling.

-- Jamie

(* - No MMU on some ARMs, but I'm working on ARM FDPIC-ELF to add
 proper shared libs.  Feel free to fund this :-)
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 1/3]: Replace kernel/timeconst.pl with kernel/timeconst.sh

2009-01-04 Thread Jamie Lokier
Rob Landley wrote:
> In a private email, Bernd Petrovitsch suggested "set -- $i" and then
> using NAME=$1; PERIOD=$2.  (I keep getting private email responses
> to these sort of threads, and then getting dismissed as the only one
> who cares about the issue.  Less so this time around, but still...)
> This apparently works all the way back to the bourne shell.

If you're going "all the way back to the bourne shell", don't use "set
-- $i"; use "set x $i" instead, and don't expect to do any arithmetic
in the shell; use "expr" or "awk" for arithmetic.

(Not relevant to kernel scripts, imho, since you can always assume
something a bit more modern and not too stripped down).

(I have 850 Linux boxes on my network with a bourne shell which
doesn't do $((...)).  I won't be building kernels on them though :-)

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: PATCH [0/3]: Simplify the kernel build by removing perl.

2009-01-03 Thread Jamie Lokier
Rob Landley wrote:
> This doesn't _need_ bignum support.  It maxes out around 72 bits and
> the _result_ can't use more than about $SHIFT bits because you're
> dividing by the amount you shifted, so just chop off the bottom 32
> bits, do a normal 64 bit division on the top (it has to fit), and
> then do the same division on the appropriate shifted remainder, and
> combine the results.  This is easy because when the shift _is_ 32
> bits or more, the bottom 32 bits all have to be zeroes so you don't
> even have to mask and add, just shift the remainder left 32 bits so
> you can continue the divide.
> 
> Pulling out perl isn't always a good alternative to thinking about
> the problem.
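
For reference, Rob's chunked division in C looks roughly like this (my
sketch, assuming the divisor fits in 32 bits and the final quotient in
64 bits, as in the timeconst case):

#include <stdint.h>

/* Divide the 96-bit value hi64:lo32 by a 32-bit divisor using only
 * 64-bit arithmetic. */
static uint64_t div96_32(uint64_t hi64, uint32_t lo32, uint32_t div)
{
    uint64_t q_hi = hi64 / div;
    uint64_t rem  = hi64 % div;          /* rem < div < 2^32 */
    uint64_t next = (rem << 32) | lo32;  /* so this fits in 64 bits */

    return (q_hi << 32) + next / div;
}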

Related query:

Does the Perl script being replaced use 64-bit arithmetic?  Because
many Perl installations only do 32-bit arithmetic.

If the Perl version works in 32-bit arithmetic, why does the shell
version not do the same thing?

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: PATCH [0/3]: Simplify the kernel build by removing perl.

2009-01-02 Thread Jamie Lokier
Theodore Tso wrote:
> perl is actually quite portable.

Portability aside, Perl has another fun issue.  The number of times
I've had a Perl script break when copied to a newer system which had a
newer version of Perl is... noticeable.

> I'd also suggest explicitly add a reminder to the shell scripts'
> comments to avoid bashisms for maximum portability, to remind
> developers in the future who might try to change the shell scripts
> to watch out for portability issues.

You can force Bash into POSIX mode if that's helpful.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC 2.6.27 1/1] gpiolib: add support for batch set of pins

2008-12-29 Thread Jamie Lokier
David Brownell wrote:
> The reason single-bit operations don't provide error paths is twofold.
> First, they started as wrappers for can't-fail register accessors.
> Second, it's extremely unrealistic to expect much code to handle any
> kind of faults in the middle of bitbanging loops ... or even just in
> classic "set this bit and continue" configuration code.

That's interesting.  I'm not sure it's a good idea not to return an
error code.  The caller can just ignore it if they don't care, and
it's extremely cheap to "return 0" in GPIO drivers which can't error.

If I were bit-banging on GPIOs reached via some peripheral chip (such as
a GPIO-fanout chip over I2C/SPI, where that chip is itself feeding a
secondary I2C or similar bit-banging bus), I probably would like to
check for errors and take emergency action if the peripheral chip
isn't responding, or just report to userspace.
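
Roughly like this - expander_set() being a hypothetical error-returning
setter for the fan-out chip, and the pin numbers made up:

#define PIN_DATA 0  /* made-up pin assignments */
#define PIN_CLK  1

extern int expander_set(int pin, int value);  /* hypothetical */

static int bitbang_send_byte(unsigned char byte)
{
    int bit, err;

    for (bit = 7; bit >= 0; bit--) {
        err = expander_set(PIN_DATA, (byte >> bit) & 1);
        if (!err)
            err = expander_set(PIN_CLK, 1);
        if (!err)
            err = expander_set(PIN_CLK, 0);
        if (err)
            return err;  /* chip unreachable: bail out and report */
    }
    return 0;
}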

This has actually happened on a board I worked with, where the primary
I2C failed due to a plugged in peripheral loading it too much, and a
secondary bit-banging bus was not then reachable.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: LZMA inclusion

2008-12-08 Thread Jamie Lokier
Phillip Lougher wrote:
> One-shot LZMA decoding therefore isn't going to work very well with 
> future versions of Squashfs, obviously a solution (as is currently done 
> with the Squashfs-LZMA patches) is to use separately allocated 
> contiguous input/output buffers, and memcpy into and out of them, but 
> this isn't particularly ideal.
> 
> The discussion about using the output buffer as the temporary workspace 
> (as it isn't touched until after decompression is completely finished) 
> will work with the current version of Squashfs, but it isn't going to 
> work with later versions unless the LZMA code can be changed to work 
> with a list of discontiguous output buffers (i.e. a scatter-gather type 
> list).
> 
> So it looks inevitable that a separately vmalloced workspace buffer will 
> be required.

If the kernel has trouble even vmallocing 1MiB, the LZMA algorithm
will need reworking to explicitly use discontiguous workspace buffers
anyway, regardless of whether the output buffer is used as workspace.

If the kernel can vmalloc 1MiB easily, then in principle it could map
the discontiguous output buffer temporarily into a contiguous region
of vmalloc address space, avoiding the allocation.

Instead of memcpy, you'd have some cache coherency fun on some
architectures.
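
The mapping itself could be as simple as this sketch - vmap()/vunmap()
are the real kernel APIs, everything else is illustrative and error
handling is omitted:

#include <linux/vmalloc.h>

/* Present the decompressor's discontiguous output pages as one
 * contiguous virtual buffer for the duration of the decode. */
static void *map_output(struct page **pages, unsigned int npages)
{
    return vmap(pages, npages, VM_MAP, PAGE_KERNEL);
}

static void unmap_output(void *addr)
{
    vunmap(addr);
}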

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Power cut in management

2008-10-18 Thread Jamie Lokier
David Woodhouse wrote:
> On Sat, 2008-10-18 at 13:56 +0100, Jamie Lokier wrote:
> > Trouble is, that's not suitable for a dashboard unit where users plug
> > in their own media card.
> > 
> > Marco didn't say if the SD card is for users to plug in their own
> > media, or if it's internal storage for the device.
> 
> True, but the situation is different for a removable card. Firstly, it's
> unlikely to be mission-critical; the device will still operate without
> it. Secondly, even the most naïve of users knows that these things are
> disposable.
> 
> It's different if you're building a black box around them, with one of
> these things inside.

I agree, this is very good advice.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Power cut in management

2008-10-18 Thread Jamie Lokier
David Woodhouse wrote:
> On Sat, 2008-10-18 at 12:49 +0100, Jamie Lokier wrote:
> > Can you use a journalling filesystem like ext3, reiserfs, xfs, or even
> > UBIFS on the card, or does it have to be FAT?  With a journalling
> > filesystem, they vary on the details but basically if you can finish
> > the current write request in progress, there's no fs-level corruption.
> 
> Careful. When you use a solid state "disk", you're actually using your
> file system on top of an underlying "pseudo-filesystem" which makes the
> flash pretend to be a disk. Your data are at the mercy of that
> underlying code, which in our past experience always seem to have been
> implemented by the same crack-smoking hobos they drag in off the street
> to write PC BIOSes. However good your own journalling file system on
> _top_ of that 'disk' is, you can still lose everything with a badly
> timed power-off.

I agree, hence the idea to complete the current write request _and
also_ gracefully shut down the SD card to an inactive state, before
power is dropped.

To continue the filesystem on another pseudo-filesystem analogy,
that's like calling fsync on the pseudo-filesystem.

No guarantees - even Linux has fsync worthy of crack-smoking hobos at
times ;-) - but better than not calling fsync at all!

But is it even possible to shut down an SD card gracefully, e.g. with a
power-down command?

> If you want to store data on solid state media, my current advice is
> either to do it on _real_ flash directly with a flash-based file system
> like JFFS2 or UBIFS, or to pray regularly to the deity of your choice.

Trouble is, that's not suitable for a dashboard unit where users plug
in their own media card.

Marco didn't say if the SD card is for users to plug in their own
media, or if it's internal storage for the device.

Marco, if it's just for the device itself to use, follow David's
suggestion if you can: use real flash with UBIFS or something like it,
not an SD card.  That will be much easier.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Power cut in management

2008-10-18 Thread Jamie Lokier
[EMAIL PROTECTED] wrote:
> I'm working with a board where the power is turned on/off through a key as
> in a car. Is there any design pattern to afford that? It's the first time
> I have to manage a situation where the power can suddenly cut in anytime.
> Hardware guys are working to get "time" to do a clean shutdown (I turn the
> key but I still have the power) but any suggestions are welcome. However,
> I can't use a simply read-only filesystem (at least for root fs) because
> the board has got an sdcard that can be inserted (and then mounted in rw
> mode) anytime. Thanks.

This is off the top of my head, and may be rubbish.

From http://www.linux-mtd.infradead.org/doc/ubifs.html:

We have heard reports that MMC and SD cards corrupt and lose data
if power is cut during writing. Even the data which was there a long
time before may corrupt or disappear. This means that they have
bad FTL which does not do things properly. But again, this does
not have to be true for all MMCs and SDs - there are many
different vendors. But again, you should be careful.

As long as you have sufficient emergency power to finish writing a
consistent filesystem state (with no incomplete block) to the SD card,
and to gracefully shut down the SD card so that it stops reorganising
data on the flash internally, it should be ok.  That means routing
emergency power to the CPU and SD card, so that you don't get
corruption in the middle of writing a block.

Can you use a journalling filesystem like ext3, reiserfs, xfs, or even
UBIFS on the card, or does it have to be FAT?  With a journalling
filesystem, they vary on the details but basically if you can finish
the current write request in progress, there's no fs-level corruption.

With FAT, completing just the write in progress is not enough.  In
principle a small number of writes could be used to get the fs into a
consistent state in an emergency, and the required number could be
kept bounded by changes to the FAT fs code.

Emergency power isn't necessarily required for the peripherals such as
panel LEDs, displays, CD drives, radios, serial ports, speakers
etc. that you might have, so by routing it only to the few essential
components, and shutting down the rest, you'll have stable power for
longer off a smaller capacitor.

Make sure the hardware is able to notify the CPU by interrupt
immediately when the main power is cut, so the CPU can switch into the
emergency mode as quickly as possible, cut power to unnecessary
peripherals, only finish the I/O request already in progress, or the
minimum required for a consistent state, and gracefully shut down the
SD card.  This means the board's PSU should not simply keep going for
a couple of seconds after input power is lost: it must provide you
with a logic signal too.  Also, make sure during this time that the
board's PSU voltage output remains stable and doesn't drop drastically
during the emergency shutdown, as that could lead to SD card
corruption too.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 00/10] AXFS: Advanced XIP filesystem

2008-09-15 Thread Jamie Lokier
Jared Hulbert wrote:
> > I think the "fast" in "fast synchronous" gives it away :-)
> 
> Yes, I suppose it does.
> 
> > I'm using Spansion MirrorBit S29GL128N, which reads at about 0.6 MByte/s.
> 
> I think you should get more like an order of magnitude higher.  Get
> an expert to look at your timings in the bootloader.  Make sure things
> are cached too.  ioremap_cached()...

Yes, looking at the Spansion datasheet, if it were interfaced properly
it should be quite fast.  (25ns access time for in-page 16-bit reads,
100ns for random reads).

I'll see if ioremap_cached() makes a difference to streaming read
performance.
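
Something like this is what I have in mind - untested, assuming this
kernel's ARM code provides ioremap_cached(), and the address and size
below are made up, not the real board's memory map:

#include <linux/errno.h>
#include <linux/io.h>

#define NOR_PHYS 0x10000000UL  /* hypothetical flash window */
#define NOR_SIZE (16 << 20)

static void __iomem *nor_base;

static int nor_map_cached(void)
{
    /* versus a plain (uncached) ioremap() */
    nor_base = ioremap_cached(NOR_PHYS, NOR_SIZE);
    return nor_base ? 0 : -ENOMEM;
}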

The BSP suppliers have been quite cautious in places, flushing cache a
bit too often.  (I'm not surprised - we had disk ext3 filesystem
corruption due to insufficient cache flushing in places too.)

> > Oh, and it's a 166MHz ARM, so it's quite capable of decompressing
> > faster than the NOR can deliver.
> 
> Depends on how you are measuring it.   You ought to be able to get at
> least 2 orders of magnitude higher read speeds with a good sync Flash.
>  Some of the newer stuff is even faster.

Thanks.

Oh, how I look forward to the day of working with current kernels and
current hardware.

-- Jamie

--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 00/10] AXFS: Advanced XIP filesystem

2008-09-15 Thread Jamie Lokier
Jared Hulbert wrote:
> >> > What kind of NOR you using?  That is not what I measure with fast
> >> > synchronous burst NOR's.
> >>
> >> I think the "fast" in "fast synchronous" gives it away :-)
> >>
> >> I'm using Spansion MirrorBit S29GL128N, which reads at about 0.6 MByte/s.
> >
> > By the way, what speeds do you get on fast synchronous burst NORs - and
> > which chips are those?
> 
> I am only familiar with the Numonyx product line up.
> 
> If you are using a GL, you'll probably find our P33 a good fit and at
> competitive prices to GL as I understand it.  That's I think 50Mhz.

What's a GL?  Never heard of it - all I can think of is OpenGL :-)

I'm using a Sigma Designs 862x media processor.  It clocks at 166MHz
to main RAM, has an ARM internally to run Linux, and the intensive
work happens in coprocessors.  The NOR is not on the RAM bus, it's on
a "peripheral bus".  About the only thing I know about the bus is it's
16 bits wide - I have the schematic, but only the board supplier has
access to Sigma chip documentation.

> A rule of thumb is to say that copying to RAM will reduce this value
> less than 50% because RAM should be at least a little faster than NOR.

In our case, RAM is at least 100x faster :-)

> Nevertheless, if you can't use a simple calculation like this to
> explain the numbers then you have a poorly configured bus timings or
> have your cache off.

I'm not sure if cache is an option with this device - but would it
make a difference anyway?  Launching executables like Busybox - those
are much larger than the cache anyway, so launch time is dominated by
bulk streaming copy speed.  Thanks for the idea, I'll look into
whether it's possible to access this 'peripheral bus' through the
ARM's cache and see if that speeds up streaming read time.

> So, if you are only getting 0.6MB/s out of your NOR, you're
> using it wrong.

Interesting, thanks.  I'm not sure it's possible to change the way NOR
is being used with this chip, and it'll be a while before it's
economical to replace the board with a new design.

This is all very interesting - I had no prior experience with NOR, so
didn't know that 0.6MB/s was slow.  It's fast compared with older
EEPROMs after all, and I had imagined that people wanting fast flash
would use NAND.

On looking at the datasheet, I see it's quite a lot faster.  I'm
suspecting the Sigma Designs peripheral bus and the way it's wired up
aren't doing it any favours.  We already have the weirdness that we have
to patch the Linux CFI-0002 MTD code: the CPU locks up when polling
the erase status byte, until erase is finished.  Unfortunately this is
difficult to change now - I'm programming hardware which is already
out in the field and cannot be redesigned.

Thanks for your thoughts.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Bug #11342] Linux 2.6.27-rc3: kernel BUG at mm/vmalloc.c - bisected

2008-08-27 Thread Jamie Lokier
Bernd Petrovitsch wrote:
> > 32MB no-MMU ARM boards which people run new things and attach new
> > devices to rather often - without making new hardware.  Volume's too
> > low per individual application to get new hardware designed and made.
> 
> Yes, you may have several products on the same hardware with somewhat
> differing requirements (or not). But that is much less than a general
> purpose system IMHO.

It is, but the idea that small embedded systems go through an 'all
components are known, drivers are known, test and if it passes it's
shippable' does not always apply.

> > I'm seriously thinking of forward porting the 4 year old firmware
> > from 2.4.26 to 2.6.current, just to get new drivers and capabilities.
> 
> That sounds reasonable (and I never meant maintaining the old system
> infinitely.

Sounds reasonable, but it's vetoed for anticipated time and cost,
compared with backporting on demand.  Fair enough, since 2.6.current
doesn't support ARM no-MMU last I heard ('soon'?).

On the other hand, the 2.6 anti-fragmentation patches, including
latest SLUB stuff, ironically meant to help big machines, sound really
appealing for my current problem and totally unrealistic to
backport...

> ACK. We avoid MMU-less hardware too - especially since there is enough
> hardware with a MMU around.

I can't emphasise enough how much difference MMU makes to Linux userspace.

It's practically: MMU = standard Linux (with less RAM), have everything.
No-MMU = lots of familiar 'Linux' things are unavailable or break.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Bug #11342] Linux 2.6.27-rc3: kernel BUG at mm/vmalloc.c - bisected

2008-08-27 Thread Jamie Lokier
Linus Torvalds wrote:
> > Most LOCs of the kernel are not written by people like you or Al Viro or 
> > David Miller, and the average kernel developer is unlikely to do it as 
> > good as gcc.
> 
> Sure. But we do have tools. We do have checkstack.pl, it's just that it 
> hasn't been an issue in a long time, so I suspect many people didn't even 
> _realize_ we have it, and I certainly can attest to the fact that even 
> people who remember it - like me - don't actually tend to run it all that 
> often.

Sounds like what's really desired here isn't more worry and
unpredictability, but for GCC+Binutils to gain the ability to
calculate the stack depth over all callchains (doesn't have to be
exact, just an upper bound; annotate recursions) in a way that's good
enough to do on every compile, complain if a depth is exceeded
statically (or it can't be proven), and to gain the
architecture-independent option "optimise to reduce stack usage".

> > BTW:
> > I just ran checkstack on a (roughly) allyesconfig kernel, and we have a 
> > new driver that allocates "unsigned char recvbuf[1500];" on the stack...
> 
> Yeah, it's _way_ too easy to do bad things.

In my userspace code, I have macros tmp_alloc and tmp_free.  They must
be matched in the same function:

 unsigned char * recvbuf = tmp_alloc(1500);
 
 tmp_free(recvbuf);

When stack is plentiful, it maps to alloca() which is roughly
equivalent to using a stack variable.

When stack is constrained (as it is on my little devices), that maps
to xmalloc/free.  The kernel equivalent would be kmalloc GFP_ATOMIC
(perhaps).

With different macros to mine, it may be possible to map small
fixed-size requests exactly onto local variables, and large ones to
kmalloc().  A stab at it (not tested):

#define LOCAL_ALLOC_THRESHOLD 128

#define LOCAL_ALLOC(type, ptr)                                        \
    __typeof__(type) __attribute__((__unused__)) ptr##_local_struct;  \
    __typeof__(type) *ptr =                                           \
        ((__builtin_constant_p(sizeof(type))                          \
          && sizeof(type) <= LOCAL_ALLOC_THRESHOLD)                   \
         ? &ptr##_local_struct                                        \
         : kmalloc(sizeof(type), GFP_ATOMIC))

#define LOCAL_FREE(ptr)                                               \
    ((__builtin_constant_p(sizeof(*(ptr)))                            \
      && sizeof(*(ptr)) <= LOCAL_ALLOC_THRESHOLD)                     \
     ? (void) 0                                                       \
     : kfree(ptr))
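
A hypothetical use - it relies on the compiler folding the
__builtin_constant_p() test and eliding the never-addressed local in
the kmalloc branch:

struct rxbuf { unsigned char data[1500]; };

static void demo(void)
{
    /* sizeof(struct rxbuf) > LOCAL_ALLOC_THRESHOLD, so this maps
     * to kmalloc(..., GFP_ATOMIC) and may return NULL. */
    LOCAL_ALLOC(struct rxbuf, rx);

    if (!rx)
        return;
    /* ... fill rx->data ... */
    LOCAL_FREE(rx);
}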

Would that be useful in the kernel?

I'm thinking if it were a commonly used pattern for temporary buffers,
unknown structures and arrays of macro-determined size, the "new
driver" author would be less likely to accidentally drop a big object
on the stack.

Obviously it would be nicer for GCC to code such a thing
automatically, but that really is wishful thinking.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Bug #11342] Linux 2.6.27-rc3: kernel BUG at mm/vmalloc.c - bisected

2008-08-27 Thread Jamie Lokier
Bernd Petrovitsch wrote:
> If you "develop" an embedded system (which is partly system integration
> of existing apps) to be installed in the field, you don't have that many
> conceivable work loads compared to a desktop/server system. And you have
> a fixed list of drivers and applications.

Hah!  Not in my line of embedded device.

32MB no-MMU ARM boards which people run new things and attach new
devices to rather often - without making new hardware.  Volume's too
low per individual application to get new hardware designed and made.

I'm seriously thinking of forward porting the 4 year old firmware
from 2.4.26 to 2.6.current, just to get new drivers and capabilities.
Backporting is tedious, so's feeling wretchedly far from the mainline
world.

> A usual approach is to run stress tests on several (or all)
> subsystems/services/... in parallel and if the device survives it
> functioning correctly, it is at least good enough.

Per application.

Some little devices run hundreds of different applications and
customers expect to customise, script themselves, and attach different
devices (over USB).  The next customer in the chain expects the bits
you supplied to work in a variety of unexpected situations, even when
you advise that it probably won't do that.

Much like desktop/server Linux, but on a small device where silly
little things like 'create a process' are a stress for the dear little
thing.

(My biggest lesson: insist on an MMU next time!)

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Bug #11342] Linux 2.6.27-rc3: kernel BUG at mm/vmalloc.c - bisected

2008-08-26 Thread Jamie Lokier
Linus Torvalds wrote:
> The inline-functions-called-once thing is what causes even big functions 
> to be inlined, and that's where you find the big downsides too (eg the 
> stack usage).

That's a bit bizarre, though, isn't it?

A function which is only called from one place should, if everything
made sense, _never_ use more stack through being inlined.  Inlining
should just increase the opportunities that the called function's
local variables can share the same stack slots are the caller's dead
locals.

Whereas not inlining guarantees they occupy separate, immediately
adjacent regions of the stack, and shouldn't be increasing the total
number of local variables.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 00/10] AXFS: Advanced XIP filesystem

2008-08-25 Thread Jamie Lokier
Greg Ungerer wrote:
> Sort of. It actually just uses a single ->read to bring in
> the entire file contents. There are a few limitations on the use
> of mmap() for non-mmu. Documentation/nommu-mmap.txt gives
> more details. With no MMU it does rely on being able to kmalloc()
> a single RAM region big enough to hold the entire file.

That's unfortunate: if you're using FDPIC-ELF or BFLT-XIP, you really want
to kmalloc() one region for code (i.e. mmap not the whole file), and
a separate one for data.  Asking for a single larger region sometimes
creates much higher memory pressure while kmalloc() attempts to
defragment by evicting everything.

But that's fiddly to do right in general.

The natural thing for AXFS to do to support no-MMU FDPIC-ELF or
BFLT-XIP is store the code segment uncompressed and contiguous, and
the data segment however the filesystem prefers, and the profiling
information to work out where these are is readily available from the
mmap() calls, which are always the same when an executable is run.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 00/10] AXFS: Advanced XIP filesystem

2008-08-22 Thread Jamie Lokier
Jamie Lokier wrote:
> Jared Hulbert wrote:
> > What kind of NOR you using?  That is not what I measure with fast
> > synchronous burst NOR's.
> 
> I think the "fast" in "fast synchronous" gives it away :-)
> 
> I'm using Spansion MirrorBit S29GL128N, which reads at about 0.6 MByte/s.

By the way, what speeds do you get on fast synchronous burst NORs - and
which chips are those?

Thanks,
-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 00/10] AXFS: Advanced XIP filesystem

2008-08-22 Thread Jamie Lokier
Jared Hulbert wrote:
> On Fri, Aug 22, 2008 at 11:13 AM, Jamie Lokier <[EMAIL PROTECTED]> wrote:
> > Greg Ungerer wrote:
> >> One thing for sure is that many people who do non-MMU setups
> >> are interested in XIP to get the space savings. These are very
> >> often small devices with very constrained RAM and flash. (For
> >> whatever it is worth single NOR flash only boards are common in
> >> these smaller form factors :-)
> >
> > I'm using XIP on a device with 32MB RAM.  The reason I use it is
> > _partly_ to save RAM, partly because programs start about 10 times
> > faster (reading NOR flash is slow and I keep the XIP region in RAM)
> 
> What kind of NOR you using?  That is not what I measure with fast
> synchronous burst NOR's.

I think the "fast" in "fast synchronous" gives it away :-)

I'm using Spansion MirrorBit S29GL128N, which reads at about 0.6 MByte/s.

Not because they're good, but because that's what the board I'm coding
for has on it.  I presume they were cheap and familiar to the board
designers.  (There is 32MB of RAM to play with after all.)

So starting a sequence of Busybox processes from a shell script is
noticeable, if it reads from NOR each time.

Oh, and it's a 166MHz ARM, so it's quite capable of decompressing
faster than the NOR can deliver.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 00/10] AXFS: Advanced XIP filesystem

2008-08-22 Thread Jamie Lokier
Greg Ungerer wrote:
> One thing for sure is that many people who do non-MMU setups
> are interested in XIP to get the space savings. These are very
> often small devices with very constrained RAM and flash. (For
> whatever it is worth single NOR flash only boards are common in
> these smaller form factors :-)

I'm using XIP on a device with 32MB RAM.  The reason I use it is
_partly_ to save RAM, partly because programs start about 10 times
faster (reading NOR flash is slow and I keep the XIP region in RAM)
and partly because it reduces memory fragmentation.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 00/10] AXFS: Advanced XIP filesystem

2008-08-22 Thread Jamie Lokier
Greg Ungerer wrote:
> 
> Jamie Lokier wrote:
> >Jared Hulbert wrote:
> >>The biggest improvement is in the way AXFS allows for each page to be
> >>XIP or not.  First, a user collects information about which pages are
> >>accessed on a compressed image for each mmap()ed region from
> >>/proc/axfs/volume0.  That 'profile' is used as an input to the image
> >>builder.  The resulting image has only the relevant pages uncompressed
> >>and XIP.  The result is smaller memory sizes and faster launches.
> >
> >Sounds great, really nice idea.
> >
> >How does it fare with no MMU?  Can the profiler and image builder lay
> >out the XIP pages in such a way that no-MMU mmaps can map those regions?
> 
> The key for XIP on noMMU would be the ability to store a
> file as one complete contiguous chunk. Can AXFS do this?

Or more generally, the mmap'd parts of a file.

XIP doesn't mmap the whole file, it just maps the code and rodata.
The data segment is copied.

AXFS's magic for keeping parts of the file uncompressed, but parts
compressed, would be good for this - both for space saving, and also
because decompressing compressed data from NOR is faster than reading
uncompressed data.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 00/10] AXFS: Advanced XIP filesystem

2008-08-21 Thread Jamie Lokier
Jared Hulbert wrote:
> The biggest improvement is in the way AXFS allows for each page to be XIP or
> not.  First, a user collects information about which pages are accessed on a
> compressed image for each mmap()ed region from /proc/axfs/volume0.  That
> 'profile' is used as an input to the image builder.  The resulting image has
> only the relevant pages uncompressed and XIP.  The result is smaller memory
> sizes and faster launches.

Sounds great, really nice idea.

How does it fare with no MMU?  Can the profiler and image builder lay
out the XIP pages in such a way that no-MMU mmaps can map those regions?

No complaint if not, it would be a nice bonus though.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [patch 1/4] Configure out AIO support

2008-08-05 Thread Jamie Lokier
Adrian Bunk wrote:
> On Fri, Aug 01, 2008 at 12:42:22AM +0200, Bernhard Fischer wrote:
> > On Thu, Jul 31, 2008 at 01:12:19PM +0300, Adrian Bunk wrote:
> > >On Thu, Jul 31, 2008 at 12:09:29PM +0200, Bernhard Fischer wrote:
> > >> On Thu, Jul 31, 2008 at 11:27:04AM +0200, Thomas Petazzoni wrote:
> > >> >This patch adds the CONFIG_AIO option which allows removing support
> > >> >for asynchronous I/O operations, which are not necessarily used by
> > >> >applications, particularly on embedded devices. As this is a
> > >> >size-reduction option, it depends on CONFIG_EMBEDDED. It allows to
> > >> >save ~7 kilobytes of kernel code/data:
> > >> 
> > >> Shouldn't this also make sure not to install aio_abi.h or at least an
> > >> empty aio_abi.h?
> > >
> > >The userspace headers are independent of any kernel configuration
> > >(except for the architecture).
> > 
> > I beg to disagree:
> > internals as exposed by e.g. aio_abi.h are impl dependent. No one except
> > the impl and its users are interested in it.
> >...
> 
> That's utter bullshit.
> 
> The contents of aio_abi.h is a kernel<->userspace ABI that mustn't ever 
> change. [1]

Case in point:

I want to be able to compile an application for embedded Linux which
*can use* Linux-AIO, but can also run on a kernel which has Linux-AIO
removed by this patch.

I still want to compile the application with that capability, in case
it's run on another kernel with it enabled.

I shouldn't have to have a separate, special kernel with all options
enabled, just to compile applications that run on multiple kernels and
use run-time features when available.

Just like all the other kernel<->userspace interfaces, the header
files (including their presence) shouldn't depend on kernel
configuration at all.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: prevalence of C++ in embedded linux?

2008-07-30 Thread Jamie Lokier
Bernd Petrovitsch wrote:
> If "GOLD" is as old and flexible (and portable?) as binutils,

The author says it will only work with ELF, and he does not
intend to add support for all the other things binutils does.

> gcc and/or other huge software maintained to death, it is probably
> similarly complex and odd.  If people take a > 10 year old tool and
> rewrite it from scratch, I would assume that design is better.

Only true if the cruft is no longer relevant.  If the cruft is
intrinsic to the problem, e.g. supporting umpteen quirky architectures
implies umpteen quirks of cruft, then it'll be in the new design.

Btw, gcc and binutils are more like 30 years old :-)

> And I can't see any direct dependence on the used programming
> language(s) if one compares running code and what is left of "design"
> after years of design extensions, changes, enhancements, etc., with a new
> design from scratch built on the lessons learned (hopefully) from the former
> one.

Some programming languages allow you to express a problem concisely
and clearly, and some don't.  That clarity then affects whether an
evolving design becomes loaded with implementation cruft or not - and
you can't always tell the difference.

Most languages are well-matched to different problem domains.

Binutils and bfd look very crufty, but I think it's hard to tell how
much of that is due to the implementation language and implementation,
or the design and requirements, and how much would exist in any
implementation on any language.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: prevalence of C++ in embedded linux?

2008-07-30 Thread Jamie Lokier
Haavard Skinnemoen wrote:
> "Bart Van Assche" <[EMAIL PROTECTED]> wrote:
> > I looked through the gold sources a bit. I wish everything in the GNU
> > toolchain were written this way. It is very clean code, nicely
> > commented, and easy to follow. It shows pretty clearly, I think, the
> > ways in which C++ can be better than C when it is used well.
> 
> I guess he never looked at the target interface...
>
> [snip virtual method with loads of arguments which looks like binutils]
>
> I can't wait to implement avr32 support for that monster...I thoroughly
> hate working on libbfd, and it looks like gold has made many of the
> same stupid decisions on the interface level.

> Just shows that using C++ doesn't fix a design that is broken to begin
> with.

The GNU Binutils requirement was to target lots of different object
formats and architectures, to allow different ones to be interconverted
and linked together, and to run on lots of platforms.

Given those constraints, probably C was the only option at the time,
and BFD's interface, although ugly and difficult to work with, does
reflect the abstractions of different object formats and architectures
moderately well IMHO.

It's tough to make a nice design that meets those requirements.

It's unfortunate that BFD is so hard to work with that people resort
to post-processing tools and other hacks, instead of enjoying adding
new format support to it.

For all the faults of working with it, the tools themselves are very
versatile and useful compared with most equivalents.

If you have clear improvements that would simplify GOLD (without
breaking it or requirements you might not be aware of), the author may
be quite receptive to them.  He seems keen on the code being of high
quality, and he's quite experienced at working on "open" projects with
many contributors.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: prevalence of C++ in embedded linux?

2008-07-30 Thread Jamie Lokier
Bart Van Assche wrote:
> On Tue, Jul 29, 2008 at 10:08 PM, Leisner, Martin
> <[EMAIL PROTECTED]> wrote:
> > If you're embedded device has a window system, than a language like C++
> > is fine...But...
> 
> C++ is suited for much more than just windowing systems. A good
> example is the GOLD project, a linker for ELF files. GOLD is a rewrite
> of the GNU linker (ld). See also
> http://google-opensource.blogspot.com/2008/04/gold-google-releases-new-and-improved.html.

Is C++ intrinsic to GOLD's linking superiority over ld?  Or was it
chosen because the author fancied using it?  (I don't know).

There's been a resistance to using C++ in GNU programming tools
generally for a long time - see GCC which only recently switched to
ANSI C.  That's because they want the tools to run on lots of
platforms, and C++ templates in particular weren't implemented to the
standard until the last few years, and probably still aren't on
some platforms that they'd like to run GNU tools on.

So using C++ in GOLD was a bit of a bold decision :-)

What I can't help noticing is that GOLD, while superior for linking
straight GNU/Linux applications due to better algorithms, an
extremely knowledgeable author, etc. - explicitly does not support
anything but ELF.  It doesn't support the zillions of linker
capabilities of GNU binutils ld, and the author says he doesn't intend
it to.

So it won't ever be suitable for linking some embedded targets -
you'll still need to use Binutils ld/objdump or another tool, at least
for the last step :-)

Binutils' undoing is probably the complexity in its approach to
generically supporting every kind of linkable object anywhere.  That
complexity is the reason we have the ugly 'elf2flt' instead of simply
a backend which emits uClinux executable formats.  The authors of
uClinux tools found it easier to postprocess the output than to write
another format backend.

I don't think C++ would help a lot with that complexity if you wanted
to still support lots of different formats - although another language
with versatile metaprogramming might.  (There's a lot to choose from).
I could be wrong of course.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: prevalence of C++ in embedded linux?

2008-07-30 Thread Jamie Lokier
Leisner, Martin wrote:
> I've found you can understand spaghetti C code with some effort -- it's
> nearly impossible to understand spaghetti C++ code.  Much professional
> programming is "kitchen sink mentality" -- if there's a feature, use it.
> 
> I find it interesting K&R is about 200 pages, Stroustrup is 1000+ pages.
> What percentage of the 200 pages is understood by the average C
> programmer versus the 1000+ pages by the average C++ programmers?
> 
> I program by the quote by Einstein "Things should be as simple as
> possible, no simpler".
> 
> Much of the C++ code I've seen has more complicated implementation
> details than the problem being solved (I'm a believer in Halstead
> metrics, a lot of solutions I've seen in C++ would be much smaller in
> C).  Of course, that's the solutions in C++ I've seen...not all of
> them

Ok, but most of what you say applies the same to "generic" programming
and not particularly to embedded.  I.e. if you agree with those points,
you won't use C++ much in general, and if you disagree and like C++ in
general, then why not use it for embedded as well.

> I think C++ lends itself to coming up with complicated solutions to
> simple problems...(of course really good C++ is simple and
> clever...but much C++ I see is poorly designed raw overcooked
> spaghetti).

If you think C lends itself to simple solutions, go read a Linux
kernel sometime :-)

> Also it's very useful to have an understanding of how the hardware works in
> systems where memory/time is an issue (and it almost always should be an
> issue).  I have a good understanding of what will happen in my C
> compiler 
> (a good algorithm in C runs rings around bad algorithms in assembler).
> [nowadays, instead of processor performance, you think about cache
> performance].  I doubt there's generally a good understand of time/space
> of C++ features in the compiler and standard library...]

Actually, the C++ standard library specification _defines_ time
requirements for many of its algorithms.  That's better than C - in
theory.  (Whether implementations follow the spec that far in practice
is a different question.)

I can honestly say I've both read and written simple-to-understand C
code, and lousy, complex C code.  And the same with C++.

For some problems, C++ has expressed the solution far more clearly
than the equivalent C.  Most notably in a video game with lots of
characters and representations of physical objects, and in a GUI -
very object-oriented systems by nature, which fit a C++ expression
very well.

You can imagine in a video game, time/space performance is critical.
Some understanding of what goes on behind the scenes in C++ is very
helpful to manage performance.  I guess knowing C and machine code
helps one's understanding of what a C++ compiler produces :-)

Aside from time/space performance, another factor in many types of
embedded programming is time to deliver the product - or how good you
can make it in the fixed time available.  If C helps, go for it; if
C++ is familiar to you and gets you a better-looking product in the
same time, though, it might be preferable for some parts.  (Same for
choice of libraries, tools, etc.)  That really depends on what kind of
device you're making.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [patch 2/4] Configure out file locking features

2008-07-29 Thread Jamie Lokier
Matt Mackall wrote:
> The typical embedded NFS-based devices are NAS servers and media players
> and are going to be more concerned about things like page cache
> balancing.

Oh, those.

It would be really annoying to buy a home NAS and find it doesn't
support NFS locks or SMB oplocks.  NASes are vaguely useful for more
than one computer in the house at the same time.

That said, I bought a big, expensive one, found it far too slow for my
needs, send it back for a refund and bought a portable cheap USB disk
which had *so* much higher performance.  The convenience of serving
multiple machines just wasn't worth the lousy performance.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: prevalence of C++ in embedded linux?

2008-07-28 Thread Jamie Lokier
Robert P. J. Day wrote:
>   just curious -- how many folks are working in C++ in their embedded
> linux work?

I'm avoiding it, because of reports of occasional elf2flt relocation
errors when using C++ a few months ago, on this list.

However, some of the libraries I'm using have some C++ in them, and a
C API wrapped around the C++ core!  I'm glad they use a C wrapper, as
they only supply binaries built with GCC 2.95.3, and I have the impression
the C++ ABI has changed between that and GCC 4.x.  But I haven't checked.

All systems using Qt (such as Qtopia) will use C++ a lot, so it is
well supported.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-16 Thread Jamie Lokier
Bernhard Fischer wrote:
> On Mon, Jun 16, 2008 at 02:32:01PM +0100, Jamie Lokier wrote:
> 
> >No, I'm talking about improving Autotools to handle some things better
> >than they do now.  Passing the high hurdles required to become part of
> >Autotools - especially compatibility - is part of the goal.
> 
> If you look at the sh scripts generated by autotools one is tempted to
> just ship a small, clean sh, written in C89 with autotools, use that
> and drop the workarounds.

Exactly!

Use an early, small Autoconf script written in very portable shell (as
now) to detect the small set of features needed for this "clean sh",
and a host compiler.  Then build it, then continue in clean sh.

But if you're doing that, why use sh?  It could be a subset of sh with
extensions especially for this job - or something else entirely - to
make Autoconf scripts cleaner, smaller, faster than they are now.  Or
even a tokenised sh, so Autoconf tests run directly on it but are
compact.  Possibly even _more_ portable (no need to depend on quirks
of some utilities; configure scripts more checkable for common
errors).  And easier to configure things on environments with no shell
(e.g. Windows).

It adds an extra requirement for a host C compiler, but that's
probably reasonable these days.

This conversation ought to be heading over to the autotools
lists... if anyone wants to take it up there.

-- Jamie

--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-16 Thread Jamie Lokier
Alexander Neundorf wrote:
> On Monday 16 June 2008 13:39:46 you wrote:
> ...
> > > > If you're going to rewrite Autotools using GNU Make, why not ask if
> > > > another tool would be better, perhaps a tool specially designed for
> > > > the job?
> > >
> > > Go ahead.
> >
> > The first part of going ahead is to brainstorm and find ideas and
> > likely interest...  It's too big (as you rightly note) to simply sit
> > down and do it by oneself in isolation.
> 
> You are not seriously talking about writing yet another buildsystem?
> With autotools, cmake, scons, ant, (b)jam, bitbake, makeng,
> etc. there should be enough available.

No, I'm talking about improving Autotools to handle some things better
than they do now.  Passing the high hurdles required to become part of
Autotools - especially compatibility - is part of the goal.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-16 Thread Jamie Lokier
David Woodhouse wrote:
> On Mon, 2008-06-16 at 11:49 +0100, Jamie Lokier wrote:
> > But here's the thing: do you really want every package have code
> > calling every different variation on a system call, at run time, until
> > it finds one that works?
> 
> No. That functionality lives in libc, if you want it at all.

Sometimes that's ok.

But sometimes libc can't do that, because the point of a newer
function is it addresses some inadequacy of an old function.  If libc
does a "compatibility" thing and uses old syscalls on old kernels,
sometimes that introduces bugs.

E.g. calls to pread() should _not_ be implemented as lseek+read+lseek on
old kernels which don't have pread().  That leads to race conditions and
corruption in some applications.  (I think this has really occurred,
but I'm unable to find a reference now.)

The same applications are fine if pread returns ENOSYS and they know
what they need to do with lseek and read.

The same is true of the rsync example, with utimes and utime.  It's
wrong for libc to "emulate" utimes using utime on old kernels: they
aren't the same.  So the application does it instead: the application
has a policy that it's fine with the different functionality when
utimes returns ENOSYS.
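
As a sketch of that pattern (hypothetical application code, not
rsync's and not libc's - my_pread is a made-up name):

    #include <errno.h>
    #include <unistd.h>

    /* Sketch: the application, not libc, decides what a missing
     * pread() means.  The fallback is only safe when the caller
     * knows nothing else shares this file descriptor's offset. */
    static ssize_t my_pread(int fd, void *buf, size_t count, off_t offset)
    {
        ssize_t n = pread(fd, buf, count, offset);

        if (n >= 0 || errno != ENOSYS)
            return n;
        /* Old kernel: emulate, accepting that the file offset moves. */
        if (lseek(fd, offset, SEEK_SET) == (off_t)-1)
            return -1;
        return read(fd, buf, count);
    }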

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-16 Thread Jamie Lokier
Enrico Weigelt wrote:
> > Imho, Kconfig would be good for more programs than it's currently used for,
> > and could be made to work with those --enable/--with options: you'd be
> > able to configure them entirely on the command line, or interactively
> > with "./configure --menu" (runs menuconfig), or with a config file.
> 
> Yes, that would be fine. But for me the primary constraint is that
> all switches/options can be specified by command line - otherwise
> I'd need extra complexity for each package in my distbuilder tool.

Yes, I agree with that.  It's one of the nice things about Kconfig
that you can use it with pre-existing config files, even when the
config is from a different version and doesn't specify everything.
Doing so as ./configure --enable/--disable arguments seems like a
comfortable fit.

> 
> > Perhaps it might even be possible to write a very small, portable,
> > specialised alternative to Make which is small enough to ship with
> > packages that use it?
> 
> No, I really wouldn't advise this. Make tools are, IMHO, part of 
> the toolchain (in a wider sense). Once point is avoiding code 
> duplication, but the really important one is: a central point of
> adaption/configuration. That's eg. why I like pkg-config so much:
> if I need some tweaking, I just pass my own command (or a wrapper).
> If each package does it's library lookup completely by itself, I
> also need to touch each single package in case I need some tweaks.
> I had exactly that trouble w/ lots of packages, before I ported
> them to pkg-config.

That's interesting, thanks.  I have seen problems with pkg-config, but
adding your own wrapper is a nice way to fix any of them :-)

That said, if you can specify library lookup on the ./configure
command line, that fixes a lot of problems of repeatability too.

Only problem then is you don't always know _which_ options to pass on
the ./configure command line, if you have a package build script and
a different version of the package.  There's a tendency to drift apart,
which manifests as silent mistakes until some user reports a bug.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-16 Thread Jamie Lokier
Enrico Weigelt wrote:
> > Reality is that Kconfig front end to autotools does work - as you've
> > proved.  It's a good idea. :-)
> 
> Now, we just need an autoconf-alike frontend for Kconfig ;-)

I agree and would support:

./configure --menu
 Invokes a configuration menu / wizard for user-selectable options.
 Only works if you have the "autoconf-menu" tools already installed.

./configure --ask
 Invokes command line configuration questions / wizard for
 user-selectable options.

./configure --ask --from-config=OLDCONFIG
 Uses existing settings from a previous configuration.

./configure
 Without --menu or --ask, invokes traditional Autoconf
 automatic detection of capabilities.

> > You said about too many user-selectable options.  Many large packages
> > _check_ for many installed libraries.  Get them wrong, and you have
> > the same problems of untested combinations.
> 
> It even gets worse when they silently enable certain features on
> presence/absence of some lib. That's btw one of the reasons why
> sysroot is an primary constraint for me, even when building for the
> platform+arch.

That's a basic design feature of Autoconf.  And it really is very good
most of the time - much better than config scripts which ask a lot of
questions (Perl).

> > Have you felt uncomfortable shipping a package that does use Autoconf,
> > Automake and Libtool, knowing that the scripts generated by those
> > tools are huge compared with the entire source of your package?
> 
> Yeah, that's one of those things in autotools I never understood:
> why isn't there just one function for each type of check/action, 
> which is just called with the right params ?

It's because it's written in very portable Bourne shell, which does
not support shell functions.

(This is why I toy with the idea of writing a very portable C program,
similar to a simplified shell, specially designed for Autoconf, with
functions...)

It could simulate functions portably by creating lots of little
scripts at the start, and then calling them later on.  That would
shrink ./configure a lot.  But it's a big change; somebody's got to do
it, test it, look after it, and so on.

Libtool similarly does not have to be huge, but it would be a lot of
work to shrink it without breaking anything.  It's hard enough to
change it at all without breaking anything on some system, somewhere.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-16 Thread Jamie Lokier
Robert Schwebel wrote:
> We have once tried to write our automated test system for embedded
> boards with python and xml; the idea was to have something fancy, new
> and with good code-reuse. In the end it failed because the pexpect
> package we used to do the pattern matching bitrotted so quickly that
> four months later all the fancy tests didn't work any more, because it
> is not part of the python core.
> 
> In the meantime we have migrated our automatic testing stuff to use
> shell, ssh/rsh and kermit. It is rock solid, and the code reuse factor
> is at least as good as with anything else.

It sounds like the basic problem there was using non-core Python.  You
can count on shell, ssh/rsh, kermit - they're stable.  You can count
on Python's core language.  But not on pexpect: if you depended on
that, you should probably have copied the code into your own test
system, or written an equivalent.  Easier said than done, I know.

> > Have you felt uncomfortable shipping a package that does use Autoconf,
> > Automake and Libtool, knowing that the scripts generated by those
> > tools are huge compared with the entire source of your package?
> 
> No :-)

It's really disturbed me, sometimes, shipping a package of 50kB source
and 2MB configure+libtool.

> > Have you _written_ Autoconf tests recently?
> 
> Yea, all our packages are autotoolized.
> 
> > Made any shell / shellutils non-portability mistakes in the tests?
> 
> Yea, it happens in ptxdist all the time. People report about problems,
> we add new tests and the next revision works even on Ubuntu :)

Right, so you have no idea about portability of your tests on, say, SCO
Unixserver or IRIX? :-)

> I have no problems with people writing fancy new things. It's just that
> most people who try to do something better than autotools have only a
> fraction of the features.
> 
> Open Source is darwinism: if there is something better, let's use it.
> But compare apples with apples.
> 
> > If you're going to rewrite Autotools using GNU Make, why not ask if
> > another tool would be better, perhaps a tool specially designed for
> > the job?
> 
> Go ahead.

The first part of going ahead is to brainstorm and find ideas and
likely interest...  It's too big (as you rightly note) to simply sit
down and do it by oneself in isolation.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: cross-compiling alternatives

2008-06-16 Thread Jamie Lokier
Bernd Petrovitsch wrote:
> > _check_ for many installed libraries.  Get them wrong, and you have
> > the same problems of untested combinations.
> 
> As long as I can specify that libfoo support must be compiled in (and
> thus libfoo must be present) and the tools throw an error if it doesn't
> find it, I have no problem.
> Otherwise all package builders have a serious problem.

They do have problems when you want to build and deploy repeatably,
if the build environment isn't very similar each time.

Typically the way you specify that libfoo support must be compiled in
is --with-libfoo=/path/to/libfoo.

That way lies bitrot between your build script which calls ./configure
(since you won't be typing it manually with 20+ options like that each
time you rebuild) and the changing version of the upstream package you
configure.

To prevent it trying to compile in libs you don't want, you also need
--without-libfoo.  Using Kerberos as an example, which I remember from
building CVS ages ago: if you don't _prevent_ it using libraries you
don't want, you get different binaries depending on whether a
Kerberos library was installed on the build system at build time.  You
might then send a built program to another system, and find it won't
run at all, or has unwanted behaviour.

Do you really see package-building scripts with 20 --with-libfoo= and
--without-libfoo= options in them for every library?  Sometimes.  But
more often not: instead, they rely on prerequisites installed at
build time.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-16 Thread Jamie Lokier
Enrico Weigelt wrote:
> Most times I've seen those checks, they silently enable some 
> features, eg. if it looks for certain kernel devices. Definitively
> the wrong way!

Agreed.  Though, you do often have to check for headers etc.,
otherwise you won't have the definitions needed to work with those
devices.  (Think: inotify).

> It really should be in users/packagers control to
> explicitly enable features.

But then there's a problem with 100 or more --enable-foo command line
options.  The user's script that _calls_ ./configure will
bitrot... there's no way you'll be typing it on the command line.

A big package can easily depend on that many local features.

> Nevertheless, the existence of some file or device says nothing
> about whether it will be usable (or *should* be used) at
> runtime. I've seen packages silently enabling some feature and then
> failing at runtime since the previously detected device is missing
> later. What a nightamare for packagers.

I agree; assuming they're all there is not sane.

> Another point are broken syscalls. Well, you *have* check at runtime 
> to be sure, or perhaps choose to ignore it and expect a sane system.

That said, take rsync as an example: it uses the utimes() system
call when available, and utime() otherwise, because the former is
better.

On current GNU/Linux it will use utimes().  Recently someone tried to
run it on an old kernel, and it failed.  Had it tried utimes() first
and fallen back to utime() it would have been fine, and rsync has a
patch to do exactly that now.
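
Roughly like this, as a sketch of that kind of fallback (not rsync's
actual code - set_times is a made-up helper):

    #include <errno.h>
    #include <sys/time.h>
    #include <utime.h>

    /* Sketch: prefer utimes(); degrade to utime() only when the
     * running kernel lacks it.  The fallback loses sub-second
     * precision - a policy the application opts into knowingly. */
    static int set_times(const char *path, time_t atime, time_t mtime)
    {
        struct timeval tv[2] = {
            { .tv_sec = atime, .tv_usec = 0 },
            { .tv_sec = mtime, .tv_usec = 0 },
        };
        int r = utimes(path, tv);

        if (r == 0 || errno != ENOSYS)
            return r;
        {
            struct utimbuf ut = { .actime = atime, .modtime = mtime };
            return utime(path, &ut);
        }
    }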

But here's the thing: do you really want every package to have code
calling every different variation on a system call, at run time, until
it finds one that works?

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: about size optimizations (Re: Not as much ccache win as I expected)

2008-06-15 Thread Jamie Lokier
David Woodhouse wrote:
> On Sat, 2008-06-14 at 10:56 +0100, Oleg Verych wrote:
> > I saw that. My point is pure text processing. But as it seems doing
> > `make` is a lot more fun than to do `sh` && `sed`.
> 
> The problem is that it _isn't_ pure text processing. There's more to
> building with --combine than that, and we really do want the compiler to
> do it.
> 
> _Sometimes_ you can just append C files together and they happen to
> work. But not always. A simple case where it fails would be when you
> have a static variable with the same name in two different files.

I suspect the simplest way to adapt an existing makefile is:

1. Replace each compile command "gcc args... file.c -o file.o"
   with "gcc -E args... file.c -o file.o.i".

2. Replace each incremental link "ld -r -o foo.o files..." with
   "cat `echo files... | sed 's/$/.i/'` > foo.o.i".

3. Similar replacement for each "ar" command making .a files.

4. Replace the main link "ld -o vmlinux files..." with
   "gcc -o vmlinux --combine -fwhole-program `echo files... | sed 's/$/.i/'`".

You can do this without changing the Makefile, if you provide suitable
scripts on $PATH for the make.
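
(The static-variable clash you mention is easy to picture; a sketch
with made-up file names:

    /* a.c */
    static int counter;    /* private to a.c */
    int a_next(void) { return ++counter; }

    /* b.c */
    static int counter;    /* a different variable, private to b.c */
    int b_next(void) { return ++counter; }

Naively concatenate the two files and the tentative definitions merge
into one silently shared variable - or you get a redefinition error if
both are initialised.  Compiled as separate translation units, which
is what the compiler's --combine mode preserves, each file keeps its
own 'counter'.)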

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Cross Compiler and loads of issues

2008-06-13 Thread Jamie Lokier
Bill Traynor wrote:
> > For some reason, that stopped a while ago, and you had to go to
> > different places to get working basic tools.  And often, the place to
> > go wasn't clear.  Different people advertised their "ARM toolchain",
> > "m68k toolchain" etc.  and they were slightly different sets of
> > patches on top of mainline tools.  Central authorities you might
> > believe in existed, but they were always a few patches behind what you
> > actually needed.
> 
> I disagree with respect to ARM.  GNU toolchains for ARM have for the last
> four or five years been relatively up to date for ARM hardware.  This is a
> direct result of ARM paying for this support to be added.

I believe you, but they were difficult to _find_.  A couple of years
ago I looked for up-to-date ARM GNU tools.  The first few pages of
Google results led to a small number of places where you could download
toolchains at least a year out of date, possibly more, and nowhere to
get anything current in a straightforward format (i.e. not an entire
dev kit with IDEs etc.).

That situation has improved greatly since then.

But, to be honest, ARM-nommu toolchain support is still second class.

There's no shared library / dynamic loader support for ARM-nommu, and
no apparent plans to implement it.  (I'd help but I don't have time to
do it all).  NPTL threads are coming along, but not yet ready; it's
been quite slow.  C++ sometimes crashes at the linking stage.

> > When I last needed a toolchain, Google led to confusing results, and I
> > had to try more than one.  I still use mutiple GCC versions (from
> > different people) to compile different programs for the same
> > architecture: each one fails some things in a different way, including
> > run-time failures in the precompiled toolchains.
> 
> You didn't have to try more than one, you chose to try more than one.

I found significant problems with the first.  Fixing obscure-looking
register allocation crash bugs in the compiler was more than I thought
I had time for - looking for another was less work than that,
therefore I rationally chose it.

I figured that bug was fixed in a newer version, and it was.  Then I
found run-time problems with threads (crashing and impossible to trap
in GDB - probably the distributed libc mismatched my kernel - wtf?), and
incompatibilities with binary libraries I had to link with.  (That's
why I use two different toolchain versions in my project now).

Eventually, for some applications, what worked was a combination:
tools from one place but headers and libraries from the other.

To solve this properly, requires that I delve into the toolchain code,
or find _another_ newer one which might fix these problems.

It's a substantial time sink to do any of these things.

> You could have just as easily chosen to use only the current
> mainline GNU toolchain and worked through your issues with the
> community, thereby allowing mainline to benefit from your fixes, and
> you to benefit by getting to use the most current toolchain.

I agree that is good.  However, my issues were rather specific to my
application setup and third party commercial dependencies (typical of
some embedded projects), and depended on older tools and kernels, so I
wouldn't expect the community to have much interest.  The lack of
interest in 2.4 kernels is pretty explicit (understandably - I don't
like using it either), and for GCC 2.old/3.old I'm assuming it.

The lack of backed-by-action interest in making ARM-nommu tools
properly support current features - C++, shared libraries, and modern
threads - adds to the sense that, while it's good to be in touch
with the community, there isn't a lot of readiness to work on obscure
things which don't affect a lot of people, and on hardware which isn't
the future.

Eventually I might forward-port enough to follow mainline kernels and
tools.  But that's a substantial time sink too - don't underestimate
it.  I'm told that current mainline 2.6 doesn't build, let alone work,
on ARM-nommu (yet); that does not make forward-porting - including
local drivers and architecture quirks - sound quick.  If slow,
it's not worth the effort to me alone.

> > Just Google for "uclinux toolchain" and the top hits lead to very old
> > releases, with bugs that have long been fixed elsewhere.  "uclinux arm
> > toolchain" is no better.
> 
> The "fixed elsewhere" is the problem.  If everyone used the most current
> release and worked through issues with the community, this problem would
> go away.

I agree.  I would like to do that, and with a thriving, interested
community I will.  I'm hoping linux-embedded@vger.kernel.org might
help catalyse it - perhaps by creating the perception of it for
people like me.

It only really works if lots of people do that together, and it seems
to be particularly _not_ the culture in some segments of the embedded
development community.

I base this on Google leading to lots of pages with old announcements
of compilers, cross-compilation scripts and the like.

Re: Cross Compiler and loads of issues

2008-06-13 Thread Jamie Lokier
Enrico Weigelt wrote:
> > Contrast with kernel.org: everyone knows where to get a good working
> > Linux kernel for the mainstream architectures, and the quality work
> > tends to be quite good at reaching mainline there nowadays.
> 
> ACK. But you perhaps remember the discussions on LKML where some
> folks wanted to stop this and leave all the QM works to individual
> distros. I'm glad this plan was dropped.

I'm glad too.  I've had to do the reverse: cherry-pick through 2000
patches from distro kernel source packages to find the good ones for
my kernel - bug fixes, driver fixes.  It took a long time, and I had
to give up before finishing; it was simply too much work.

That was 2.4, back when distros did a lot of their own patches, kept
outside the mainline kernel.  Thankfully, the 2.6 process is much better.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-13 Thread Jamie Lokier
Enrico Weigelt wrote:
> * Alexander Neundorf <[EMAIL PROTECTED]> schrieb:
> 
> > E.g. in python there are tests which call functions and check 
> > their result to see if we are currently on a platform where 
> > that function is broken (I think there was such a test for 
> > poll() and some other functions).
> 
> IMHO, that's broken sw engineering from ground up.

Oh?  The alternative I see is to do the test at run-time.  But that
adds executable size and run-time overhead on most platforms.

Doing it at build time is an improvement, for those people who don't
care about cross-compilation.  (Not me, you understand.)

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-13 Thread Jamie Lokier
Robert Schwebel wrote:
> On Fri, Jun 13, 2008 at 11:25:23PM +0100, Jamie Lokier wrote:
> > A trouble with that is some packages have hundreds of user-selectable
> > options - or even thousands.
> 
> I've not seen a package with thousands of user selectable options. It's
> not even desirable, because the more options you have, the more
> difficult it becomes to test all the combinations.

Kernels, uclibc, busybox.  Not all combinations can be tested.  But it's
still _very_ useful to compile in only the parts you want.

Most packages have far fewer.  But some have enough that the command
line is unwieldy and Kconfig would fit.  I'm thinking libcurl - one
configure option for each protocol supported.  iptables (userspace)
would be a candidate, when you have to link statically and save space.

Media players with lots of optional formats and drivers are another.
(They also have considerable problems with their Autoconf in my
experience).

Generally, anything with lots of parts that different applications
might not use, where space or library dependencies are an issue.

> > Some other packages _should_ have more options, but don't because it's
> > too unwieldy to make them highly configurable with Autoconf.  Imho,
> > Kconfig would be good for more programs than it's currently used for,
> > and could be made to work with those --enable/--with options: you'd be
> > able to configure them entirely on the command line, or interactively
> > with "./configure --menu" (runs menuconfig), or with a config file.
> 
> That's exactly what ptxdist does: add a Kconfig frontend to the
> configurable switches. It does it for the user's convenience, although
> the currently implemented method is really developer-unfriendly (but we
> care about our users first).

I agree.  (And it proves the point about not being able to test all
the combinations: the last time I tried to build ptxdist - an
up-to-date version at the time - it failed in several places.)

> But it's of absolutely no use to whine about the fact that the world is
> such a curel place. *If* Kconfig had been there 20 years ago ... *if*
> 90% of the packages out there would have been Kconfig instead of
> autotools... We have to live with *reality*, and reality is that
> autotools solve real world problems, and they offer *one* standard user
> interface to package building. I can cross build almost all autotoolized
> packages in exactly the same way and people are used to it. All other
> build systems I've seen invented their very special way of doing things,
> leading to wheel-reinvention all over the place.

I agree with you.  There's no need to disagree, and who's whining?
We're trying to think of realistic improvements.

Reality is that a Kconfig front end to autotools does work - as you've
proved.  It's a good idea. :-)

> > The "make" / "make install" part is easy to retain even with other
> > build systems, using a trivial Makefile which calls the other tools.
> 
> I still don't understand why all the scons, cmakes and jams out there
> don't even try to provide the *standard* user interface everyone is used
> to on a unix system.

Me neither.

> > Lots of packages need special tools or other software installed to
> > build them; this is no different.
> 
> autotools need only a shell and make

No, that's true only for very simple packages.

Most packages need lots of additional libraries installed - and the
development versions of those libraries, for that matter.  Too often
they need exactly the right development version - not too recent, not
too old.  With the wrong versions, there are surprises.

You said about too many user-selectable options.  Many large packages
_check_ for many installed libraries.  Get them wrong, and you have
the same problems of untested combinations.

Quite a lot of packages require extra tools to build, beyond shell,
make, GCC and Binutils.  Perl and Python are common.

Sure, autotools by themselves don't need much.  But that's not
interesting: Autotools are not used only by themselves.

> > Perhaps it might even be possible to write a very small, portable,
> > specialised alternative to Make which is small enough to ship with
> > packages that use it?
> 
> Why on earth would one want to reinvent make?

Have you felt uncomfortable shipping a package that does use Autoconf,
Automake and Libtool, knowing that the scripts generated by those
tools are huge compared with the entire source of your package?

Have you _written_ Autoconf tests recently?  Made any shell /
shellutils non-portability mistakes in the tests?

Have you _read_ a portable Makefile lately?  Have you tried writing
one for a complex package, confident that it's portable to all the
different quirky makes out there?

Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-13 Thread Jamie Lokier
Robert Schwebel wrote:
> On Fri, Jun 13, 2008 at 08:30:52AM +0200, Alexander Neundorf wrote:
> > Battle of Wesnoth is currently converted to both Scons and CMake, and
> > in the end they will decide about the winner. (since Eric is good at
> > arguing I guess it will be scons).
> 
> The thing is that 'configure && make && make install' plus the usuall
> --enable-foo / --disable-foo / --with-bla=blub semantics is simply *the*
> standard way of configuring stuff for unix systems. You don't need fancy
> tools, you get cross compiling almost for free and unix people simply
> know how to use it.

A trouble with that is some packages have hundreds of user-selectable
options - or even thousands.  It is infeasible to use --enable-foo
options for all of those when configuring them.

Some other packages _should_ have more options, but don't because it's
too unwieldy to make them highly configurable with Autoconf.  Imho,
Kconfig would be good for more programs than it's currently used for,
and could be made to work with those --enable/--with options: you'd be
able to configure them entirely on the command line, or interactively
with "./configure --menu" (runs menuconfig), or with a config file.

The "make" / "make install" part is easy to retain even with other
build systems, using a trivial Makefile which calls the other tools.
Lots of packages need special tools or other software installed to
build them; this is no different.

Perhaps it might even be possible to write a very small, portable,
specialised alternative to Make which is small enough to ship with
packages that use it?

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: cross-compiling alternatives

2008-06-13 Thread Jamie Lokier
Enrico Weigelt wrote:
> But: the question is whether you'll need such a test at all
> or if just using sizeof() at the right place won't do the trick ;-P

It's best to do that if you can, and it's nearly always possible.
There are a few coding techniques - especially performance-sensitive
ones - where that's not feasible.
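
For example (a trivial sketch; SIZEOF_LONG is the name Autoconf's
AC_CHECK_SIZEOF convention would define): instead of a configure-time
test, branch on sizeof directly and let the compiler discard the
dead arm:

    #include <stdio.h>

    int main(void)
    {
        /* No configure test needed: sizeof is a compile-time
         * constant, so the compiler folds this branch away. */
        if (sizeof(long) == 8)
            printf("64-bit longs\n");
        else
            printf("32-bit longs\n");
        return 0;
    }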

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Cross Compiler and loads of issues

2008-06-13 Thread Jamie Lokier
Bill Traynor wrote:
> Maybe I'm being dense, but what's specifically wrong with the current
> toolchain universe?

Back in ye olde days, you could download GCC and Binutils from
gnu.org, configure for whatever your architecture was, and most times
it just worked.

For some reason, that stopped a while ago, and you had to go to
different places to get working basic tools.  And often, the place to
go wasn't clear.  Different people advertised their "ARM toolchain",
"m68k toolchain" etc.  and they were slightly different sets of
patches on top of mainline tools.  Central authorities you might
believe in existed, but they were always a few patches behind what you
actually needed.

When I last needed a toolchain, Google led to confusing results, and I
had to try more than one.  I still use multiple GCC versions (from
different people) to compile different programs for the same
architecture: each one fails some things in a different way, including
run-time failures in the precompiled toolchains.

Just Google for "uclinux toolchain" and the top hits lead to very old
releases, with bugs that have long been fixed elsewhere.  "uclinux arm
toolchain" is no better.

Perhaps current versions (e.g. from Codesourcery?) are more dependable
for embedded architectures, but I don't have the time to thoroughly
test them, and my last experience warns me to be careful.

It seems people release tools, release patches, publish on an obscure
web page, then forget about the page.  More authoritative-sounding
uclinux web pages tend to be out of date.  Google isn't finding good
current authorities in this area, which suggests the space is rather
fragmented with people pulling in different directions and not working
together enough to create stable, common places for these things.

Contrast with kernel.org: everyone knows where to get a good working
Linux kernel for the mainstream architectures, and the quality work
tends to be quite good at reaching mainline there nowadays.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: cross-compiling alternatives

2008-06-13 Thread Jamie Lokier
Bernd Petrovitsch wrote:
> Actually the size of ints (or any other type) can be easily deduced
> without running a (for the target) compiled binary:
> - compile the binary (for the target) with an initialized variable with
>   that value.
> - use cross nm (or a similar tool) to read it from there.

Or the method autoconf uses - binary search, using a compile-time
numeric comparison which resolves to a successful or failed compile.

That seems more portable to me.
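
The test program behind it looks something like this (a simplified
sketch of the idea, not Autoconf's literal generated code).  It
compiles if and only if the guessed bound holds, so the
cross-compiler's exit status steers a binary search without running
anything on the target:

    /* A negative array size is a compile error, so this compiles
     * only if sizeof(int) <= 4; substitute each guess in turn. */
    static int size_check[(long)sizeof(int) <= 4 ? 1 : -1];

    int main(void)
    {
        return 0;
    }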

Relying on 'nm' finding the variable, and not accidentally matching
another variable with the wrong value, does not work for all C
environments.  E.g. some compile to compressed executables; some
produce intermediate objects with incomplete or lazy compilation, or
symbolic objects to be finished at the link stage; and some are even
more abstract.

And it requires the 'nm' tool, which you might not have for
cross-compilation, or might not find the right one.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 0/1] Embedded Maintainer(s), [EMAIL PROTECTED] list

2008-06-12 Thread Jamie Lokier
Rob Landley wrote:
> Most packages don't cross compile at all.  Debian has somewhere north of 
> 30,000 packages.  Every project that does large scale cross compiling 
> (buildroot, gentoo embedded, timesys making fedora cross compile, etc) tends 
> to have about 200 packages that cross compile more or less easily, another 
> 400 or so that can be made to cross compile with _lot_ of effort and a large 
> enough rock, and then the project stalls at about that size.

+1.

I spent several months fixing up cross-compile issues on Gentoo
Embedded a few years ago, for a specific application - only a small
subset of packages needed.  The majority of packages I needed failed
to compile out of the box, one way or another - Glibc dependencies,
arch dependencies, scripts which depend on the host environment, or
invoke the host compiler, or the host Perl (and then depend on its
byte order), etc.

More recently, I've been compiling a build which is _intended_ for
cross-compilation - it's an old uClinux kit, patched by a third party.
Even that fails to build on newer GNU/Linuxes, as the syntax of GNU
Make has changed, and Bash has changed.  Also GCC 2.old doesn't
compile on current GNU/Linux with GCC 4.new.

Fortunately the latter were a few small, easy-to-fix issues.  But I
understand now why some find it important to have a reproducible build
environment when you need to get an old distribution out of the closet
to update firmware for some 5-year-old device.

Virtual machines ought to be great for that.  And they are.  But even
those are surprisingly changeable - images that worked on a VM a few
years ago no longer work on the current version of the VM host.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 0/1] Embedded Maintainer(s), [EMAIL PROTECTED] list

2008-06-10 Thread Jamie Lokier
Wolfgang Denk wrote:
> In message <[EMAIL PROTECTED]> you wrote:
> >
> > I would be surprised if it was possible to compile Linux with gcc 4.2
> > with 32MiB of total system memory.
> 
> Hint: if memory really gets tight, you can use swap space. Either  to
> a  local  drive  (either through PCI or PCMCIA/PCCard or USB or FW or
> ...), or over the network. This just adds  another  level  of  stress
> testing  to  areas in the kernel that are not so well covered by some
> other tests.

Great, I'm looking forward to your implementation of swap on no-MMU!

Thanks,
-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 0/1] Embedded Maintainer(s), [EMAIL PROTECTED] list

2008-06-10 Thread Jamie Lokier
Wolfgang Denk wrote:
> Being unable to do this just because we now also would need a  native
> Perl is indeed a PITA...

You can run the Perl bit with "ssh remote perl", and still do the rest
of the compile natively.  It's not pretty, but workable.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 0/1] Embedded Maintainer(s), [EMAIL PROTECTED] list

2008-06-10 Thread Jamie Lokier
My only comment is I remember when Eric Raymond submitted a smart
config thing (before Kconfig, which copied Eric's best ideas).

The main objection to Eric's patch was that it was written in Python,
causing kernel builds to depend on Python.

When did this policy change, so that it's now acceptable to depend on
Perl, which is roughly equivalent as a tool dependency?

-- Jamie

--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] console - Add configurable support for console charset translation

2008-06-09 Thread Jamie Lokier
H. Peter Anvin wrote:
> >When you imply it's stupid to think anyone will ever have a console on an 
> >embedded system (because we all know the embedded world is far more 
> >uniform than crazy diverse things like the desktop space), which of these 
> >are _you_ referring to?
> 
> I'm talking about the virtual console stuff, which is *not* inherently 
> tied to VGA.
> 
> The issue is whether it makes sense to have virtual console capability 
> on a device which is so constrained that you want a *subset* of the 
> functionality in place.

The uClinux devices I ship do have that.  System console is the serial
port on a board header.  But field engineers are instructed to plug in
a USB keyboard and Alt-F2 to get a diagnostic screen (we call it a
"shell" ;-).  Occasionally even customers with specific requirements
get to use that.

We had to patch the kernel to get virtual terminal capability on a
framebuffer while keeping system console messages off the framebuffer.
The uClinux kernel (2.4.x) disables the system console on virtual
terminals, but due to some confusion(*), that has the side effect that
you can't attach a VT to a framebuffer at all.  Several howtos provide
a "con2fb" program which didn't work with uClinux.  We had to fix that.

(*) - the word "console" is used for four different concepts in the
kernel, sometimes in the same code, and it seems to have confused
someone.(**)  In the following, CONFIG_VT_CONSOLE should not control
take_over_console(); the two "console" are not the same concept:

#ifdef CONFIG_VT_CONSOLE
take_over_console(&fb_con, first_fb_vc, last_fb_vc, fbcon_is_default);
#endif
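
A sketch of the suggested fix (illustrative only - the right guard,
if any, depends on the kernel version):

    /* Sketch: tie the takeover to the framebuffer console feature
     * itself (or drop the guard entirely, if this file is only built
     * when that feature is configured) - not to CONFIG_VT_CONSOLE,
     * which is about whether printk output may go to a VT. */
    #ifdef CONFIG_FRAMEBUFFER_CONSOLE
    take_over_console(&fb_con, first_fb_vc, last_fb_vc, fbcon_is_default);
    #endif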

(**) So I suggest that using "virtual terminal" consistently in the
code, and when talking about them, would be an improvement.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] console - Add configurable support for console charset translation

2008-06-03 Thread Jamie Lokier
H. Peter Anvin wrote:
> Tim Bird wrote:
> >With CONSOLE_TRANSLATIONS turned off, this saves about 6K
> >on my kernel configured for an ARM development board (OMAP
> >5912 OSK).  In embedded products I'm familiar with,
> >console translations are not needed.
> 
> On most embedded products I'm familiar with, you wouldn't have virtual 
> consoles at all...?

On anything with a framebuffer, VCs are quite handy for debugging.
Saving 6k won't save the world on those devices, but each little bit
helps - especially due to no-MMU fragmentation.

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-embedded" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html