I don't know why everyone who replies to any issue jumps, at the first
opportunity, to a command line to accomplish anything. This does not
help new users, and it works in reverse by implying that every
alteration or fix MUST require a command-line fix.

We all need to be very wary of how we are viewed, by looking at the
following article.

It is not appropriate for me to discuss the rights or wrongs of this
article - however, over 600 other people have expressed their comments.
Like it or lump it, if we do not address the bad things we will never
progress from Linux being a home PC toy to a useful and manageable part
of everyday corporate life, and if we cannot make that transition we
will never be a credible force to be taken seriously.

Many of the fixes I see offered to users can be accomplished through
the GUI. If the interface cannot accomplish the task we need, then we
very seriously need to evaluate where the interface is lacking, and we
therefore need to log an application enhancement bug. For the future,
we cannot remain with both feet in the ground. If we are totally
dependent on a command-line entry to correct or set a variable, we have
lost already.

Remember, MS-Windows succeeded by providing a graphical user interface,
in total, for every beginner-to-advanced user (I will exclude admin and
config additions and deletions made by admins - NOT users), to become
the world's largest AND MOST accepted interface. It killed DOS - and
what finally killed DOS was Windows 95, where at last the user no
longer had to contend with editing just 2 files (autoexec.bat,
config.sys). Windows 95 was the first true 32-bit O/S.

We cannot deny MS's success and the world's willingness to adopt it; to
do so would be to argue that Bill Gates is a poor man and that MS
Windows is not the most prolific interface available in the world. Thus
we cannot deny the need for a GUI to perform just about everything for
us.

Deriving pleasure from writing scripts, or even creating an .RPM file
for yourself, IS very satisfying for such a user - however, it comes at
the expense of corporate acceptance, and without that Linux will NOT
survive or ever threaten Microsoft.

It is a horrific crime for such an advanced O/S as Linux - where we
have real memory management, where we have a true multitasking O/S,
innate security against so many things, and where our current GUI is
faster and more efficient than anything MS could dream of given the
same resources - to be held back by command-line dependence.

Article - Five crucial things the Linux community
doesn't understand about the average computer user
http://blogs.zdnet.com/hardware/?p=420&tag=nl.e590

and over 1100 comments can be viewed at
http://talkback.zdnet.com/5208-12554-0.html?forumID=1&threadID=34034&messageID=636658



Prophetically speaking, in my lifetime I hope to see the end of x86
processor chips. With Intel's multi-core (Origami-based) processors and
AMD's 64-bit processors, I can see the logical medium-term winner being
the 64-bit AMD-type chip. I say this because MS O/S file and memory
management is so appalling that major further advancement will require
dumping the architecture.
http://www.extremetech.com/article2/0,1697,2000441,00.asp

Just a small bit to try and explain my reasons. Please disassociate
your concept of the word Domain from the DNS system. Please also
acknowledge that we still base our ability to address RAM on the
concepts initially created with the first 80386 Intel processors. This
ring principle is inherent to the design of the Microsoft O/S, whether
it be DOS or Windows Vista - the latter O/S is still basically running
a mutilated form of DOS, still only able to address very limited
amounts of RAM directly, and it requires an XMS memory-management
specification to do the juggling up to 4GB (32-bit version). Windows 95
was the first true 32-bit O/S; however, no big change was required in
processor development, as this was accomplished in the inherent design
of the first 80386, where the then 16-bit O/S just used 2 clock cycles
to address the processor. It was very easy to produce the Windows 95
O/S.
http://www.extremetech.com/article2/0,1697,2000446,00.asp
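To put numbers on the addressing point above, here is a small Python sketch - illustrative arithmetic only, not from the article - of what a flat 32-bit versus 64-bit address can reach directly:

```python
GIB = 2 ** 30  # one gibibyte

def addressable_bytes(bits):
    """Bytes directly addressable with a flat `bits`-wide address."""
    return 2 ** bits

# A 32-bit O/S tops out at 4 GiB without tricks like XMS or PAE.
print(addressable_bytes(32) // GIB, "GiB")   # 4 GiB

# A 64-bit (Long Mode) address space is astronomically larger.
print(addressable_bytes(64) // GIB, "GiB")
```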

Furthermore, it is not the architecture that will need changing: the
64-bit chips and a 64-bit O/S will no longer function in protected mode
but rather in Long Mode.
http://en.wikibooks.org/wiki/X86_Assembly/Protected_Mode

This should be a relatively easy task for Linux, but an
absolute nightmare for MS.

The long-term solution lies in the re-release of the SPARC and Alpha
chips, and when this happens Linux will just lap it up at the desktop,
not only the server.

http://www.siliconchip.com.au/cms/A_108523/article.html
http://news.zdnet.com/2100-9595_22-826572.html

Fundamental security and stability issues of Protected
Mode Architecture and O/S utilisation.

It is often said that computer systems are insecure because the market
demands feature-rich, user-friendly software. According to the commonly
accepted wisdom, features and convenience are the enemy of security. If
it is unrealistic to expect that software complexity will be reduced
just to improve security, how can we get out of the break-and-patch
cycle? A good place to start would be a redesign of Mainstream
Operating Systems (MOSs), both open and closed source, to address some
deeply rooted flaws in their protection architecture. Unfortunately,
this is unlikely to happen any time soon, but that does not change the
fact that it is necessary.

The problem with MOSs is their lack of reliable domain separation. You
cannot build a secure computing system without reliable domain
separation. Domain separation isolates the components of the operating
system that must work correctly to enforce security from everything
else. The computing pioneers who designed the influential Multics
operating system over 30 years ago understood this very well. Multics
security was based on a hierarchy of hardware-enforced execution
domains known as rings. Only a stripped-down operating system kernel
operated in the most privileged ring (ring 0), while device drivers and
other less trusted and potentially buggy operating system components
were walled off in rings of lesser privilege. Unlike our current MOSs,
in Multics a buggy or malicious driver couldn't crash the kernel or
bypass access controls to steal private data, because the processor
hardware would stop it from making unauthorised memory accesses.

Unfortunately (at least from the perspective of security), in a MOS
there are only two rings. The kernel runs in ring 0, but it shares this
domain with device drivers and a huge array of operating system and
application components. If you required Administrator rights to install
a new application, chances are you added more code that can do whatever
it likes because it runs with ring 0 privilege. There is no reliable
'wall' between the security-critical parts and the less trusted parts.
This is why an attacker can write code that takes complete control of
your computer by exploiting an obscure bug in a sound card driver.

Drawing heavily on the Multics approach, Intel x86 processors have
supported a four-ring protection architecture since the 286 chip. The
problem is that MOSs only use the highest and lowest privilege levels,
ignoring the ones in between. This design approach delivers improved
performance and makes system and application development easier, but
these advantages have been bought at a high price in terms of reduced
security.

How can the domain separation problem be addressed? The inertia of a
huge installed base of insecure systems and the pragmatic need for
backward compatibility preclude any solution that involves moving less
trusted components out of ring 0. This explains the new processor
instructions that Intel announced it would implement to support
Microsoft's NGSCB trusted computing initiative. These new instructions
effectively provide a higher privileged 'ring -1' to house a
hardware-protected domain separation mechanism. Intel's announcement of
new processors that support hardware-based 'virtualisation technology'
also shows some promise. Virtualisation technology will allow a MOS to
concurrently and transparently share a single processor with another
operating system - hopefully, one that implements sound principles of
operating system security, principles that have been well understood
for over 30 years.
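The ring idea above can be sketched as a toy access check in Python. This is purely illustrative of the policy (a lower ring number means more privilege), not of real CPU hardware, and every name in it is hypothetical:

```python
# Toy model of ring-based domain separation -- illustrative only,
# not how a real processor implements it. Lower ring = more privilege.
class Component:
    def __init__(self, name, ring):
        self.name = name
        self.ring = ring

def can_access(caller, required_ring):
    """Allow an operation only if the caller's ring is at least as
    privileged (numerically <=) as the ring the operation requires."""
    return caller.ring <= required_ring

kernel = Component("kernel", 0)
driver = Component("sound driver", 1)   # walled off, Multics-style
app    = Component("word processor", 3)

print(can_access(kernel, 0))  # True  -- kernel may touch ring-0 state
print(can_access(driver, 0))  # False -- a buggy driver cannot scribble on it
print(can_access(app, 3))     # True  -- apps may do ring-3 things
```

In a two-ring MOS, the driver would sit at ring 0 alongside the kernel, and the check above would let it through - which is exactly the complaint.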

*Fundamental to the success of 64-bit Vista will be its ability not to
operate in Protected Mode; however, I am inclined to think - though
unable to quantify - that it still runs in Protected Mode in its 64-bit
form, and I would think this highly likely.

Longhorn's Server, however, I DO believe will run 64-bit, and both the
O/S and applications will be written to address Long Mode programming
in their message addressing. It will be the most secure, and the
biggest hint at the end of the 80x86 processor architecture and the end
of the bastardised DOS from Vista's 32- and 64*-bit O/S, much to the
delight of Linux and Unix - and we can just pray all this happens
before 2038 ...LOL
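For the curious, the 2038 quip refers to a signed 32-bit time_t (seconds since the Unix epoch) running out. A quick standard-library Python check of when that happens:

```python
from datetime import datetime, timedelta

# A signed 32-bit time_t counts seconds since 1970-01-01 UTC
# and tops out at 2**31 - 1 seconds.
T_MAX_32BIT = 2 ** 31 - 1   # 2147483647

rollover = datetime(1970, 1, 1) + timedelta(seconds=T_MAX_32BIT)
print(rollover)  # 2038-01-19 03:14:07 (UTC)
```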

Good Evening to ALL
Scott ;-) 21:41 - GMT +10
