On 6/22/2011 5:08 PM, Steve Wart wrote:
Still, databases and file systems are both based on concepts that
predate electronic computers.
When Windows and Macs came along the document metaphor became
prevalent, but in practice this was always just a "user friendly" name
for a file. The layers and layers of slightly broken metaphors never
get simpler. They interact in unpredictable and inconvenient ways
that people adapt to, because people are far more adaptable than the
machines we build.
I personally consider documents+folders and files+directories to be
equivalent.
ideally, one could discard the metaphors, but if this involves the names
as well, then what does one call them?... lumps and nodes?...
it comes up as a thought that an HDB and a filesystem could,
potentially, be unified; however, doing so could create a complicated
interface, as the two are traditionally accessed differently (unless one
effectively eliminates the concept of opening/closing files as well).
an example of the latter would be replacing, say, file
open/close/read/write with a handle to the file, and treating the
file like a large byte array.
hmm, say:
    var f = File.getFileAsArray("~/foo.bin");
    var c = f;
    while(!c.eof)
    {
        printf("%d ", *c);            //print byte
        printf("%d ", *(c.asWord));   //print word (16-bit LE unsigned short)
        printf("%d\n", *(c.asDWord)); //print dword (32-bit LE unsigned int)
        c++;                          //step to next byte
    }
where "asWord"/"asDWord" would coerce the array to a virtual word or
dword array.
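as an aside, this sort of coercion can be illustrated in ordinary Python (just an analogy, not the VM interface sketched above; the buffer contents are made up): struct.unpack_from reads the same bytes back as a byte, a 16-bit LE word, or a 32-bit LE dword.

```python
import struct

# made-up stand-in for a few bytes of some file
buf = bytes([0x78, 0x56, 0x34, 0x12])

off = 0
byte_val  = buf[off]                               # view as byte
word_val  = struct.unpack_from('<H', buf, off)[0]  # 16-bit LE unsigned short
dword_val = struct.unpack_from('<I', buf, off)[0]  # 32-bit LE unsigned int

print(byte_val, word_val, dword_val)  # 120 22136 305419896
```

the '<' prefix pins little-endian byte order regardless of the host machine, which matches the LE convention in the sketch above.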
maybe there are ways the interface can be made nicer, ideally without
being horridly inefficient (such as the "hair" in the above, if the
"FileAsArray" is to behave like a proper array type). for example, if
every FileAsArray+Integer operation yields a new FileAsArray: although
this could work, it would spew garbage and be slow in the above (whereas
in-place mutation has its own costs). a partial trick (used in
implementing array iteration) is to use a pointer into the array body as
the index, but this would require memory-mapping the file (and thus not
work well with large files on 32-bit systems).
VM-level value-types are another option, but these are a relatively
costly feature in general (as they are allocated/copied/freed
more-or-less continuously in the present VM).
all this would likely mean having to use file-arithmetic sparingly, and
instead mostly access array-like files by index.
    for(i=0; i<c.length; i++)
        printf("%d ", c[i]);
but, in any case, this could offer an alternative to the traditional
read/write/seek interface.
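for what it's worth, Python's mmap module already gives something close to this indexed view of a file: the mapped object can be indexed like a byte array, and struct can reinterpret words at an offset. a small sketch (the scratch file and its contents are invented for the illustration):

```python
import mmap
import os
import struct
import tempfile

# create a small scratch file to stand in for a real one
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(bytes(range(8)))   # bytes 00 01 02 03 04 05 06 07
    path = f.name

with open(path, 'rb') as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        # indexed access, much like c[i] in the loop above
        total = sum(m[i] for i in range(len(m)))
        # reinterpret the two bytes at offset 2 as a 16-bit LE word
        word = struct.unpack_from('<H', m, 2)[0]

os.remove(path)
print(total, word)  # 28 770
```

of course this inherits exactly the limitation noted above: the whole file has to fit in the address space, which is a real constraint for large files on 32-bit systems.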
hmm...
The rather intense focus on usability in recent years is probably the
most powerful tool we have to sweep away all these partially
implemented and poorly thought out concepts.
User experience is even now the main driving force behind the
evolution of programming environments. Even without auto-completion
and fancy debuggers, there is a profound growth in the use of dynamic
languages. The millions of hours of development that have gone into
the tools the major computer vendors have been using to lock people
into their platforms are no match. What matters for developers is
quick turnaround for debugging problems and code that expresses its
intention in a concise and elegant manner. This is a profound change
in the past 5 years.
potentially, but there are limits, and the target is still relatively
stable/slow-moving, at least going by TIOBE and similar.
meanwhile, trying to balance being conservative and practical with
trying out possibilities is difficult.
So how can you make simple languages simple to use? Developers have
been rejecting complex GUIs in favour of plain text. If Google and
Apple are right, every program component isn't a file on a disk, but
rather some network accessible resource. On the other hand, a cynical
person would say that the "cloud" is just another broken metaphor to
pile onto the heap, because all this stuff is going to be built on top
of the concepts we are already stuck with.
"cloud" ==> HTTP, CIFS, WebDav, ...
have some servers, and pile together a bunch of assorted stuff (network
file-sharing, running virtualized Linux instances in VMware or QEMU,
...) and suddenly from this a "cloud" emerges...
although, maybe it is a good thing; or maybe it is a "cloud" emerging
from a big burning pile of crap, with service-providers blowing the
smoke up... well...
I would much rather keep most of the disk/memory/CPU/... on the
client-side, or use a hybrid model: say, one where much of the "raw
power" for one's low-power mobile devices is provided by a PC running
at their house, as then one has no obligation to pay a service provider
for storage and CPU usage.
about the only place it really makes sense IMO is for things like
high-volume web-servers / ..., where a service provider can probably
provide the resources (processing power and bandwidth) more
cost-effectively than, say, getting a fiber-optic line into one's
house and putting a bunch of rack-mounted servers in one's garage or
something...
but, at the moment, I seem to be doing ok for a low-volume server with
an old laptop and an ADSL connection.
however, I don't personally find the idea of "thin clients", much less
thin clients with subscription fees, anywhere near a desirable option.
and, if people buy into it, ISPs and people providing cloud-services are
likely to try to find some way to charge higher-than-optimal fees and
effectively screw over the people who really buy into it (sadly, sort of
like what MS does to most people who buy MS products...).
granted, I do use some MS products (Windows and Visual Studio), but more
because at present I have them for free from the college (which is at
the moment mostly paid by student-aid, which means it is mostly paid
from taxpayer money...), and because effectively making SW other people
can use requires targeting the OS most people use.
otherwise though, in a more ideal world, I would probably just be using
Linux.
my codebase is generally written to work on either OS...
but, just ideally, people should refrain from doing things which require
them to pay more money (for no real gain) than they already have to.
much like, in an ideal world, I would probably be aiming to be a plain
FOSS developer, but since I may need to make an income somehow, and no
real jobs are lined up, well this means needing to keep being a
proprietary developer also as an option (and trying for at least
something which could be sold).
sadly, in life there are no real ideal solutions, so one tries to make
do with what one has.
granted, dual-licensed proprietary/GPL may still be an option (people
can pay me if they want to use some parts of my code not under terms of
the GPL...). I have yet to fully decide how I am going to do all this...
a concern though is what effect getting a job might have on my FOSS
efforts...
in an ideal world, having a job will still allow me to have personal
freedom for personal projects and to contribute to other projects
if/when I so desire, but I guess many companies don't allow this
(effectively regarding all activities by the employee as part of
their intellectual property).
...
Virtual environments might also be a good way to keep the detritus of
the past from cluttering the future, but a cynic might have some
opinions about that too :-)
dunno, it is not obvious what exactly is meant here...
if you mean the "virtual system within a system" approach, personally I
think that this is prone to make a bigger mess than it helps.
one saves a little complexity up-front, but then the need to battle with
the "river of separation" often creates a much bigger and nastier
problem (take the JVM as an example).
"biting the bullet" and creating a system much closer to the underlying
architecture, though initially more complex, seems to lead to a cleaner
and more friendly architecture overall...
this is not to say that one can't use abstractions, only that ideally
they shouldn't be the "virtualized world" / "walled city" approach...
or such...
_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc